Article

Fusion of External and Internal Prior Information for the Removal of Gaussian Noise in Images

Faculty of Engineering and Information Technology, Al-Azhar University, Gaza 79715, Palestine
J. Imaging 2020, 6(10), 103; https://doi.org/10.3390/jimaging6100103
Submission received: 3 August 2020 / Revised: 17 September 2020 / Accepted: 30 September 2020 / Published: 4 October 2020
(This article belongs to the Special Issue Robust Image Processing)

Abstract
In this paper, a new method for the removal of Gaussian noise based on two types of prior information is described. The first type of prior information is internal, based on the similarities between the pixels in the noisy image, and the other is external, based on the index, or pixel location, in the image. The proposed method focuses on leveraging these two types of prior information to obtain tangible results. To this end, very similar patches are collected from the noisy image by sorting the image pixels in ascending order and then placing them in consecutive rows of a new two-dimensional image. Principal component analysis is then applied to the patch matrix to help remove the small noisy components. Since the restored pixels are similar, or close in value, to those in the clean image, it is preferable to arrange them using indices similar to those of the clean pixels. Simulation experiments show that outstanding results are achieved compared to other known methods, both in terms of visual image quality and in terms of peak signal-to-noise ratio (PSNR). Specifically, once the proper indices are used, the proposed method achieves PSNR values more than 1.5 dB higher than those of the other well-known methods in all the simulation experiments.

1. Introduction

Images can acquire noise during acquisition, transmission, or recording. Gaussian noise is one of the most prevalent types of noise that degrade an image, and it exists in both wired and wireless channels [1,2]. Therefore, image denoising is a fundamental process that should be implemented before any advanced image processing task, and it remains challenging. A wide range of algorithms has been proposed in the literature that use prior information to estimate the clean image from its noisy observation. Some approaches use the input noisy image itself as prior information, such as NLM [3], BM3D [4], PGPCA [5], LPG-PCA [6], and WNNM [7]. Other approaches use external images as prior information, such as EPLL [8], dictionary-based denoising methods [9], and others [10,11]. To better estimate the noisy patches, further approaches combine internal and external prior information [12,13]. The authors in [14] propose a method called expectation-maximization adaptation, which adapts an external database using the internal one and decreases the amount of training data. In [15], the authors propose k-nearest-neighbor-based collaborative filtering, in which a query patch is estimated with the help of similar patches from an internal or external database. Other methods are based on wavelet shrinkage [16,17,18,19], and many more build complex mapping functions between corrupted and clean versions of an image, as described in [20,21]. Low-rank models, described in [22,23,24], are used for image restoration and deliver favorable results. Moreover, different methods based on deep learning have been explored for image denoising, as explained in [25,26,27,28].
In this paper, two types of prior information are used: one is external, based on the indices of a training image, and the other is internal, based on the similarity between the pixels in overlapped patches. Extensive simulation experiments on different images illustrate that the proposed method delivers outstanding results, both in terms of visual image quality based on human perception and in terms of peak signal-to-noise ratio (PSNR). Note that PSNR sometimes provides an insufficient description of the restored image; therefore, human visual perception is also essential. This paper is organized as follows: Section 2 describes the algorithm, Section 3 presents the simulation results, and Section 4 concludes the paper.

2. Algorithm Description

In this paper, a patch-based approach is proposed in which two types of prior information are used to help estimate the noisy patches. The first type is similarity-based and the second is index-based. The first type rests on constructing very similar patches. They are constructed by ordering all the pixels of the noisy image in ascending fashion in a one-dimensional (1-D) vector. Then, the pixels of the 1-D version are separated into the rows of a two-dimensional (2-D) image, where each row includes a specific number of pixels; the number of columns in this 2-D image is usually smaller than the number of rows. A sliding window then moves over the entire new image to produce a patch matrix. After principal component analysis (PCA) is applied to the patch matrix, the second type of prior information is used. In other words, the question is how to rearrange the pixels of the estimated patch matrix, based on their new values, so that the output image achieves the optimum result. Since the newly estimated pixels have values similar to those of the original ones, the optimum solution is to relocate each estimated pixel at the index, or location, of its equivalent in the original image. If the original indices are not saved in a library, the next best solution is to use the indices of an image estimated by another efficient method, particularly one estimated from a low-noise-corrupted image. In any case, it is recommended that a library including the indices of a large number of images be established, or that research into estimating image indices be conducted. Another important point is that the proposed algorithm is self-terminating: it stops once the root-mean-square error (RMSE) between the restored and corrupted versions reaches a minimum.
Figure 1 shows the denoising steps of the proposed algorithm, starting from converting the noisy image to a 1-D vector, then applying PCA to the patch matrix, and ending with the relocation process, in which the estimated pixels are placed in a 2-D image based on the indices of the corresponding pixels in the training image. The algorithm steps are described in detail below:

2.1. Input Image for Using Internal Information

Consider a 2-D image X of size M × N corrupted by independent and identically distributed Gaussian noise ε of zero mean and variance σ², i.e., ε ~ N(0, σ²). Mathematically, x = y + ε, where y is a clean pixel, x is the corresponding corrupted pixel, and X, Y ∈ ℝ^{M×N}. The corrupted image X is defined as:
X = \begin{bmatrix} x_{11} & x_{12} & \cdots & x_{1N} \\ x_{21} & x_{22} & \cdots & x_{2N} \\ \vdots & \vdots & \ddots & \vdots \\ x_{M1} & x_{M2} & \cdots & x_{MN} \end{bmatrix}
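As a minimal sketch of this noise model (the function name and the toy image are illustrative, not taken from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

def add_gaussian_noise(y, sigma):
    # x = y + eps, with eps ~ N(0, sigma^2) drawn i.i.d. per pixel
    eps = rng.normal(0.0, sigma, size=y.shape)
    return y + eps

y = np.full((4, 4), 128.0)       # toy "clean" image, values in [0, 255]
x = add_gaussian_noise(y, 20.0)  # corrupted observation at sigma = 20
```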

2.2. Finding the 1-D Sorted Image

The first problem is how to achieve maximum similarity, i.e., minimum intensity distance, between each pair of consecutive pixels. To this end, consider a pixel x_{i'j'} at location (i', j') in the corrupted image. We then search all other locations (i, j), i = 1, 2, …, M, j = 1, 2, …, N, (i, j) ≠ (i', j'), to find the pixel that provides the maximum similarity S, i.e., minimum intensity distance, to x_{i'j'}:
S = \arg\min_{(i,j)} \left| x_{ij} - x_{i'j'} \right|
Pixels x_{ij} and x_{i'j'} are then expected to be consecutive pixels in a new image. The suggested solution to this problem is to order the pixels of the image X in ascending fashion in a one-dimensional (1-D) vector X̄, which maintains maximum similarity, i.e., the lowest intensity difference, between each pair of consecutive pixels (pixels that follow each other in the spatial domain):
\bar{X} = \{ \bar{x}_j \mid \bar{x}_j \le \bar{x}_{j+1},\; j = 1, 2, \ldots, MN - 1 \}
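The ascending ordering can be sketched as follows; keeping the sorting permutation (an implementation detail the text does not spell out) is what later enables the relocation step:

```python
import numpy as np

def sort_to_1d(x):
    # Flatten the noisy image and order its pixels ascending; keep the
    # permutation so each sorted pixel can be traced back to its source index.
    flat = x.ravel()
    order = np.argsort(flat, kind="stable")
    return flat[order], order

x = np.array([[5.0, 1.0],
              [3.0, 2.0]])
x_bar, order = sort_to_1d(x)
# x_bar is non-decreasing, so consecutive pixels are maximally similar.
```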

2.3. Finding the 2-D Image of L × ℓ Size and the Patch Matrix

Matrix X̄ is reshaped into a matrix of similar consecutive rows r, where the number of rows is L = M × a, with a an integer, and each row r has a length of ℓ = N/a pixels. The result is a matrix R of size L × ℓ, which helps in finding patches of similar elements:
r_1 = \{\bar{x}_1, \bar{x}_2, \ldots, \bar{x}_{\ell}\},\quad r_2 = \{\bar{x}_{\ell+1}, \bar{x}_{\ell+2}, \ldots, \bar{x}_{2\ell}\},\ \ldots,\ r_L = \{\bar{x}_{(L-1)\ell+1}, \bar{x}_{(L-1)\ell+2}, \ldots, \bar{x}_{L\ell}\}
r_k = \{\bar{x}_{j+(k-1)\ell} \mid j = 1, 2, \ldots, \ell\},\quad k = 1, 2, \ldots, L
R = [r_1, r_2, r_3, \ldots, r_L]^T
The aim of the patch matrix P is to increase the redundancy of each pixel in the local region. To this end, a sliding window of w × w size is moved over the image R and then inserted as a column vector in P. Thus, the size of P is equal to w2 × MN. Note that matrix R is padded in all directions by (w − 1)/2 rows and (w − 1)/2 columns which are mirror reflections of the rows and columns along the border.
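A sketch of the reshape-and-windowing step, under the stated choices L = M × a and ℓ = N/a; the helper name and the loop-based window extraction are illustrative:

```python
import numpy as np

def build_patch_matrix(x_bar, M, N, a, w):
    # Reshape the sorted 1-D vector into an L x l image (L = M*a, l = N // a),
    # mirror-pad by (w-1)//2 on all sides, then vectorize every w x w
    # window into a column of the patch matrix P (shape w*w x M*N).
    L, l = M * a, N // a
    R = x_bar.reshape(L, l)
    pad = (w - 1) // 2
    Rp = np.pad(R, pad, mode="reflect")
    P = np.empty((w * w, L * l))
    k = 0
    for i in range(L):
        for j in range(l):
            P[:, k] = Rp[i:i + w, j:j + w].ravel()
            k += 1
    return R, P

x_bar = np.arange(16.0)  # already-sorted toy vector for a 4 x 4 image
R, P = build_patch_matrix(x_bar, M=4, N=4, a=2, w=3)
```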

2.4. PCA and Noise Removal

To remove the noise from the patch matrix P, the covariance matrix Σ of P is calculated as
\Sigma = E\left[ (P - \mu I)(P - \mu I)^{T} \right] = E\left[ C C^{T} \right]
where C = P − μI and μ is a column vector containing the mean of the elements in each row of the patch matrix P, i.e.,
\mu_i = \frac{1}{MN} \sum_{j=1}^{MN} p_{ij},\quad i = 1, 2, \ldots, w^{2}
and I is a unity row vector of size 1 × MN. More specifically, the centralized matrix C is obtained by subtracting from each element of P the mean μ of that element's row. Since Σ is symmetric, it is valid to use eigenvalue decomposition, by which Σ = UΛUᵀ, where U and Λ are the eigenvector and eigenvalue matrices, respectively.
Then, the estimated centralized matrix Ĉ is found so as to achieve the minimum error E with respect to C:
E = \arg\min_{Th} \left| \hat{C} - C \right|
The projection of C onto U is defined as matrix B, and the projection of Ĉ onto U as matrix B̂ (written with Uᵀ on the left so that the w² × MN dimensions are consistent):
B = U^{T} C \quad \text{and} \quad \hat{B} = U^{T} \hat{C}
To estimate Ĉ, any small component p ∈ B is neglected based on a predefined threshold Th, i.e., p is set to 0 if |p|/|p_max| < Th, where p_max is the component of largest magnitude in B. The result is a new matrix B̂ that includes the remaining informative components; therefore, Ĉ = UB̂. Note that Uᵀ = U⁻¹ because U is an orthogonal matrix. Finally, the estimated patch matrix P̂ is obtained as:
\hat{P} = U \hat{B} + \mu I
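The PCA shrinkage above can be sketched as follows. Two assumptions in this sketch: the projections are written as B = UᵀC so that the w² × MN shapes work out, and the threshold is applied to coefficient magnitudes:

```python
import numpy as np

def pca_denoise(P, th):
    # Centre each row of the patch matrix, diagonalize the covariance,
    # zero the coefficients that are small relative to the largest one,
    # and reconstruct the estimated patch matrix P_hat.
    mu = P.mean(axis=1, keepdims=True)        # per-row means (mu * I in the text)
    C = P - mu                                # centralized matrix
    Sigma = (C @ C.T) / C.shape[1]            # sample covariance of the patches
    _, U = np.linalg.eigh(Sigma)              # Sigma = U Lambda U^T, U orthogonal
    B = U.T @ C                               # coefficients of C in the basis U
    B_hat = np.where(np.abs(B) / np.abs(B).max() < th, 0.0, B)
    return U @ B_hat + mu                     # P_hat = U B_hat + mu I
```

With th = 0 no coefficient is zeroed, so the reconstruction returns P unchanged, which is a quick sanity check on the orthogonality of U.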

2.5. Finding the Estimated L × ℓ Size Image and Its 1-D Sorted Version

Matrix P̂ is aggregated in a way opposite to that described in Section 2.3. The result is an estimated version R̂ of size L × ℓ. Then, after the padded rows and columns are removed, R̂ is ordered in ascending fashion to create a 1-D estimated version of matrix X̄, defined as \hat{\bar{X}} = [\hat{\bar{x}}_1, \hat{\bar{x}}_2, \ldots, \hat{\bar{x}}_{MN}].
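The aggregation step, the inverse of the windowing in Section 2.3, can be sketched as averaging the overlapping estimates; uniform averaging is an assumption here, since the text does not specify the weighting:

```python
import numpy as np

def aggregate_patches(P_hat, L, l, w):
    # Scatter each column of P_hat back to the w x w window it came from,
    # average the overlapping estimates, then drop the mirror padding.
    pad = (w - 1) // 2
    acc = np.zeros((L + 2 * pad, l + 2 * pad))
    cnt = np.zeros_like(acc)
    k = 0
    for i in range(L):
        for j in range(l):
            acc[i:i + w, j:j + w] += P_hat[:, k].reshape(w, w)
            cnt[i:i + w, j:j + w] += 1
            k += 1
    return (acc / cnt)[pad:pad + L, pad:pad + l]  # estimated R_hat
```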

2.6. Finding the Indices of the 2-D Training Image as External Information

A 2-D training image I_2^t and its 1-D ascending-ordered version I_1^t are used in this step. Each pixel in the I_1^t vector has a corresponding index (a_i, a_j) in the I_2^t version. These indices are saved in a new index image I_index, which is paired with matrix \hat{\bar{X}} as:
I_{index} = [(a_1, a_2), (a_3, a_4), \ldots, (a_M, a_N)]
\hat{\bar{X}} = \{ \hat{\bar{x}}_j \mid \hat{\bar{x}}_j \le \hat{\bar{x}}_{j+1},\; j = 1, 2, \ldots, MN - 1 \}

2.7. Mapping Process

If matrices \hat{\bar{X}} and I_1^t have similar pixels, then they should have the same indices, as recorded in matrix I_index. Thus, a mapping, or relocation, process is performed to move each pixel of \hat{\bar{X}} to a new location in X̂. The new location of each pixel in \hat{\bar{X}} is specified by its corresponding index in I_index, as follows:
\hat{X}(I_{index}) = \hat{\bar{X}}: \quad \hat{\bar{x}}_1 \to (a_1, a_2),\ \hat{\bar{x}}_2 \to (a_3, a_4),\ \ldots,\ \hat{\bar{x}}_{MN} \to (a_M, a_N)
Note that indices identical to those of the original pixels always deliver the best results. However, the indices of a version restored from a low-noise-corrupted image also provide very good results. If such indices are not available, one may create a library that includes the indices of the best-known images, or one may conduct research into building a model that predicts indices using deep learning techniques.
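The relocation step reduces to a scatter by the saved indices. In this sketch the training image is taken to be the original itself, in which case an exact sorted estimate is relocated back to the original image exactly:

```python
import numpy as np

def relocate(x_bar_hat, index_map, shape):
    # Place the pixel of rank k in the sorted estimate at the flat index
    # that the rank-k pixel occupies in the training image.
    out = np.empty(index_map.size)
    out[index_map] = x_bar_hat
    return out.reshape(shape)

orig = np.array([[9.0, 2.0],
                 [7.0, 4.0]])
index_map = np.argsort(orig.ravel(), kind="stable")  # saved index image
x_bar_hat = np.sort(orig.ravel())                    # "perfect" sorted estimate
restored = relocate(x_bar_hat, index_map, orig.shape)
```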

2.8. Algorithm Termination

To terminate the algorithm, the RMSE between the restored image and the corrupted version is monitored; the minimum RMSE, and equivalently the maximum PSNR, is achieved at a certain threshold value, as follows:
\arg\max_{Th} \mathrm{PSNR} = \arg\min_{Th} \left[ \frac{1}{MN} \sum_{i=1}^{M} \sum_{j=1}^{N} (\hat{x}_{ij} - x_{ij})^2 \right]^{1/2}
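The self-termination rule amounts to a one-dimensional search over the candidate thresholds. The denoiser below is a hypothetical stand-in, constructed so that its RMSE against x is minimized at Th = 0.4:

```python
import numpy as np

def rmse(a, b):
    return np.sqrt(np.mean((a - b) ** 2))

def select_threshold(denoise, x, thresholds=(0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7)):
    # Run the denoiser at each candidate Th and keep the threshold whose
    # output has the smallest RMSE against the corrupted image x.
    return min(thresholds, key=lambda th: rmse(denoise(x, th), x))

x = np.zeros(8)
toy = lambda x, th: x + abs(th - 0.4)  # hypothetical denoiser, best at 0.4
```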

3. Simulation Results

The purpose of this section is to illustrate the performance of the proposed method in comparison with state-of-the-art methods such as EPLL [8], BM3D [4], and PGPCA [5]. Each method is evaluated objectively, based on PSNR, and subjectively, based on the visual quality of the restored images. The proposed method utilizes two types of prior information to restore the corrupted image. The first is taken from similar pixels of similar objects in the image. The second is taken from the indices of the original image, or the indices of other images having pixel values similar to the original ones. Therefore, the implementation of the proposed method becomes very easy once a database including the indices of many images, particularly the most frequently used images, is collected. One may also conduct research to estimate the proper indices for the estimated versions. In the current study, 8-bit gray-level images with an intensity range from 0 to 255 and of 512 × 512 size are used in all the simulation experiments. Each image is first converted to a 1-D version ordered in ascending fashion. Then, to construct a 2-D image of similar rows, the 1-D version is converted to a new matrix of L = 512 × 16 = 8192 rows and ℓ = 512/16 = 32 columns. The window used in the simulation experiments is of size w × w = 11 × 11, and each window is inserted into the patch matrix as a column of size 121 × 1. To select the optimum threshold value Th for each experiment, the algorithm is executed seven times, from Th = 0.1 to Th = 0.7, and the threshold that provides the optimum result is selected. Zero-mean additive white Gaussian noise (AWGN) is generated using the MATLAB (version R2014a) randn function and added to the image in different amounts according to the standard deviation σ. An advantage of the proposed method is that two of its parameters remain fixed, while the third is tuned for optimum results.
Note that the number of columns ℓ of the new converted image is chosen to be smaller than the number of columns of the main image, i.e., ℓ < N, in order to divide the image into similar objects with similar rows. Another main advantage of the proposed method is that the algorithm self-terminates at the threshold value that achieves minimum root-mean-square error between the restored and corrupted versions. To study the effect of each parameter on the restoration performance, several tables and figures are presented.
Table 1 shows the restoration performance in terms of PSNR when the corrupted image is reshaped to different sizes, each with fewer columns ℓ and more rows L. It is notable from the table that as the number of columns decreases gradually, but only to a specific limit, the restoration performance increases gradually. In other words, at ℓ = 128, 64, and 32 the proposed method provides satisfactory results, but for very small image widths, i.e., ℓ = 16, poor results are obtained. Note that the values of the other parameters, w × w and Th, remain constant. Thus, it is better to decrease the number of columns of the corrupted image in order to divide the image into objects of similar rows.
Table 2 shows the restoration performance in terms of PSNR at different window sizes. One can observe that as the window size w × w increases, but only to a specific size, the restoration performance increases. In other words, sizes of w × w = 9 × 9 and 11 × 11 provide satisfactory results; one can conclude that within these sizes we may find objects of similar rows. The threshold value Th and the new size of the corrupted image L × ℓ remain unchanged. The abbreviation Lena(original) denotes that the indices of the restored image are obtained from the original image, while the notation Lena(BM3D,5) denotes that the indices are obtained from the version restored by the BM3D method from a Lena image corrupted at σ = 5.
Figure 2 illustrates the effect of the threshold in restoring the Lena and Pepper images corrupted at σ = 20. It is obvious that for each image there is a certain threshold that provides maximum PSNR and minimum root-mean-square error (RMSE) between the restored and the corrupted versions. Note that once RMSE reaches its minimum value, the algorithm is terminated automatically.
Table 3 illustrates the restoration performance of the proposed method at different threshold values and at σ = 30; the other parameters remain unchanged. It is clear from the table that at a certain threshold value, minimum RMSE, or maximum PSNR, is obtained, and at this threshold value the algorithm self-terminates. Note that in all the simulation experiments a single iteration is used, as the optimum parameters are applied in that iteration.
Table 4 demonstrates the time consumed, in seconds, by different methods in restoring two versions corrupted at σ = 20, one of the Lena image and the other of the Pepper image. The proposed method is fast and has lower computational complexity than EPLL and PGPCA; BM3D is the fastest method, but the proposed method delivers the best restoration performance. These results were obtained in MATLAB version R2014a on an HP ENVY TS 14 Sleekbook with an Intel Core CPU.
Figure 3 shows four versions restored by the proposed method at Th = 0.4 from a Pepper image corrupted at σ = 20. Each version uses different indices: three sets of indices are obtained from three versions restored by the EPLL, PGPCA, and BM3D methods. Two of these versions are obtained by PGPCA and BM3D from images corrupted at σ = 5; the other is restored by EPLL from the version corrupted at σ = 20.
It is clear that the versions using the indices of the original image in (b) and the indices of restored versions obtained from a low-noise-corrupted image at σ = 5 in (c) and (d) deliver results superior to BM3D and PGPCA, both in terms of PSNR and in terms of visual image quality. Note that the indices of PGPCA are used in (d), delivering a better PSNR value than that obtained by the BM3D method in (f), although BM3D is one of the best methods in this field. Figure 4 includes enlarged parts taken from the restored images shown in Figure 3a,c,f. Black circles in Figure 3 and black arrows in Figure 4 clearly indicate that the images restored by BM3D and PGPCA have less detail than the others.
Figure 5 shows an enlarged part of the Bridge image from a version corrupted at σ = 30 and restored by different methods; Th = 0.4 is used in the proposed method. It is clear that the new method with the indices of the version restored by BM3D from a low-noise-corrupted image at σ = 5, in (b), provides an image with more detail and a more pleasing appearance than the others, as indicated in the surrounded area in each restored version.
Figure 6 shows the performance of different methods in restoring the Baboon image corrupted at σ = 50. In this figure, the notation (new, b, c, d) is used, where “new” denotes the new method; b denotes an image or the name of a method, meaning that the new method uses the indices of the image in b or the indices of a restored image attained by the method mentioned in b; c denotes that the restored image in b was obtained from a version corrupted at σ = c; and d denotes the performance of the proposed method in terms of d = PSNR. It is clear that the proposed algorithm, using threshold Th = 0.5, delivers superior results with the indices attained from the original image and from the restored version produced from a low-noise-corrupted image at σ = 5. It is evident that the results of the new method are better than those of any of the other methods.
Table 5 and Table 6 illustrate the restoration results in terms of PSNR for different methods in restoring different images at σ = 20 and σ = 30, respectively. It is clear that the new method with the indices of the original image, or of a low-noise version restored by PGPCA or BM3D at σ = 5, delivers the best results.

4. Conclusions

A new method for the removal of Gaussian noise is explored in this paper. The proposed method is based on internal and external prior information used in estimating the corrupted pixels. The first type of information is obtained by gathering the most similar patches from the noisy image. The second is utilized when the restored pixels are relocated to new positions in the image. Since the restored pixels have new values similar to those of the original version, it is preferable to relocate them using the indices of the original image or the indices of a restored version that is similar to the original image. Therefore, establishing a library that includes the indices of as many different images as possible will be helpful and should be considered in future work. Finally, the algorithm self-terminates once the root-mean-square error between the estimated and corrupted images reaches a minimum. Simulation experiments showed that the new method outperforms other well-known methods and yields excellent results.

Funding

This research received no external funding.

Conflicts of Interest

The author declares no conflict of interest.

References

  1. Xie, J.; Xu, L.; Chen, E. Image denoising and inpainting with deep neural networks. In Advances in Neural Information Processing Systems 25; Curran Associates Inc.: New York, NY, USA, 2012; pp. 341–349.
  2. Jain, A.; Bhateja, V. A versatile denoising method for images contaminated with Gaussian noise. In Proceedings of the CUBE International Information Technology Conference, Pune, India, 3–5 September 2012; Association for Computing Machinery (ACM): New York, NY, USA, 2012; pp. 65–68.
  3. Buades, A.; Coll, B.; Morel, J.M. A Review of Image Denoising Algorithms, with a New One. Multiscale Model. Simul. 2005, 4, 490–530.
  4. Dabov, K.; Foi, A.; Katkovnik, V.; Egiazarian, K. Image denoising by sparse 3-D transform-domain collaborative filtering. IEEE Trans. Image Process. 2007, 16, 2080–2095.
  5. Deledalle, C.-A.; Salmon, J.; Dalalyan, A. Image Denoising with Patch Based PCA: Local Versus Global. In Proceedings of the British Machine Vision Conference, Dundee, Scotland, UK, 29 August–2 September 2011; pp. 1–25.
  6. Zhang, L.; Dong, W.; Zhang, D.; Shi, G. Two-stage image denoising by principal component analysis with local pixel grouping. Pattern Recognit. 2010, 43, 1531–1549.
  7. Gu, S.; Xie, Q.; Meng, D.; Zuo, W.; Feng, X.; Zhang, L. Weighted Nuclear Norm Minimization and Its Applications to Low Level Vision. Int. J. Comput. Vis. 2016, 121, 183–208.
  8. Zoran, D.; Weiss, Y. From learning models of natural image patches to whole image restoration. In Proceedings of the International Conference on Computer Vision, Barcelona, Spain, 6–13 November 2011; pp. 479–486.
  9. Elad, M.; Aharon, M. Image denoising via sparse and redundant representations over learned dictionaries. IEEE Trans. Image Process. 2006, 15, 3736–3745.
  10. Mairal, J.; Bach, F.; Ponce, J.; Sapiro, G.; Zisserman, A. Non-local sparse models for image restoration. In Proceedings of the IEEE 12th International Conference on Computer Vision, Kyoto, Japan, 29 September–2 October 2009; pp. 2272–2279.
  11. Zhang, K.; Zuo, W.; Chen, Y.; Meng, D.; Zhang, L. Beyond a Gaussian Denoiser: Residual Learning of Deep CNN for Image Denoising. IEEE Trans. Image Process. 2017, 26, 3142–3155.
  12. Mosseri, I.; Zontak, M.; Irani, M. Combining the power of Internal and External denoising. In Proceedings of the IEEE International Conference on Computational Photography (ICCP), Cambridge, MA, USA, 19–21 April 2013; pp. 1–9.
  13. Burger, H.C.; Schüler, C.; Harmeling, S. Learning How to Combine Internal and External Denoising Methods. Comput. Vis. 2013, 8142, 121–130.
  14. Chan, S.H.; Luo, E.; Nguyen, T.Q. Adaptive patch-based image denoising by EM-adaptation. In Proceedings of the 2015 IEEE Global Conference on Signal and Information Processing (GlobalSIP), Orlando, FL, USA, 14–16 December 2015; pp. 810–814.
  15. Parameswaran, S.; Luo, E.; Nguyen, T.Q. Patch Matching for Image Denoising Using Neighborhood-Based Collaborative Filtering. IEEE Trans. Circuits Syst. Video Technol. 2016, 28, 392–401.
  16. Donoho, D. De-noising by soft-thresholding. IEEE Trans. Inf. Theory 1995, 41, 613–627.
  17. Donoho, D.L.; Johnstone, I.M. Adapting to unknown smoothness via wavelet shrinkage. J. Am. Stat. Assoc. 1995, 90, 1200–1224.
  18. Portilla, J.; Strela, V.; Wainwright, M.J.; Simoncelli, E.P. Image denoising using scale mixtures of Gaussians in the wavelet domain. IEEE Trans. Image Process. 2003, 12, 1338–1351.
  19. Rajaei, B. An Analysis and Improvement of the BLS-GSM Denoising Method. Image Process. On Line 2014, 4, 44–70.
  20. Schmidt, U.; Roth, S. Shrinkage fields for effective image restoration. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Columbus, OH, USA, 23–28 June 2014.
  21. Chen, Y.; Yu, W.; Pock, T. On learning optimized reaction diffusion processes for effective image restoration. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Boston, MA, USA, 7–12 June 2015; pp. 5261–5269.
  22. Wang, S.; Zhang, K.; Liang, Y. Nonlocal Spectral Prior Model for Low-Level Vision. In Proceedings of the Computer Vision, Daejeon, Korea, 5–9 November 2012; Springer: Berlin/Heidelberg, Germany, 2013; pp. 231–244.
  23. Ji, H.; Liu, C.; Shen, Z.; Xu, Y. Robust video denoising using low rank matrix completion. In Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, San Francisco, CA, USA, 13–18 June 2010; pp. 1791–1798.
  24. Oh, T.-H.; Kim, H.; Tai, Y.-W.; Bazin, J.-C.; Kweon, I.S. Partial Sum Minimization of Singular Values in RPCA for Low-Level Vision. In Proceedings of the IEEE International Conference on Computer Vision, Sydney, NSW, Australia, 1–8 December 2013; pp. 145–152.
  25. Isogawa, K.; Ida, T.; Shiodera, T.; Takeguchi, T. Deep Shrinkage Convolutional Neural Network for Adaptive Noise Reduction. IEEE Signal Process. Lett. 2017, 25, 224–228.
  26. Wang, X.; Tao, Q.; Wang, L.; Li, D.; Zhang, M. Deep convolutional architecture for natural image denoising. In Proceedings of the International Conference on Wireless Communications & Signal Processing (WCSP), Nanjing, China, 15–17 October 2015; pp. 1–4.
  27. Min, C.; Wen, G.; Li, B.; Fan, F. Blind Deblurring via a Novel Recursive Deep CNN Improved by Wavelet Transform. IEEE Access 2018, 6, 69242–69252.
  28. Xiu, C.; Su, X. Composite Convolutional Neural Network for Noise Deduction. IEEE Access 2019, 7, 117814–117828.
Figure 1. Block diagram describing the steps of the proposed algorithm in denoising corrupted images.
Figure 2. Comparison between RMSE and PSNR at different threshold values for different images. Once the threshold value achieves minimum RMSE between the restored and the corrupted versions, the algorithm is terminated.
Figure 3. Outputs of the proposed and other methods in restoring the Pepper image corrupted at σ = 20. In the proposed method, each output uses different indices: (a) corrupted image; (b) new with indices of the original image, PSNR = 35.62; (c) new with indices of BM3D, PSNR = 34.06; (d) new with indices of PGPCA, PSNR = 34; (e) new with indices of EPLL, PSNR = 31.27; (f) output from BM3D, PSNR = 33.64; (g) output from PGPCA, PSNR = 32.59.
Figure 4. Enlarged parts from the outputs b, c, and f mentioned in Figure 3 to show the restoration performance of the proposed and BM3D methods in restoring Pepper image corrupted at σ = 20: (a) Part from the corrupted Pepper image; (b) New with indices of original image; (c) New with indices of BM3D(σ = 5); (d) Output from BM3D.
Figure 5. Enlarged parts from the restored Bridge images attained by the proposed and BM3D methods to show the restoration performance of each in restoring the Bridge image corrupted at σ = 30: (a) Part from the corrupted Bridge image; (b) New with indices of the version restored by BM3D from a low-noise-corrupted image at σ = 5; (c) New with indices of the version restored by BM3D from a corrupted image at σ = 30; (d) Output from BM3D.
Figure 6. Outputs of the proposed algorithm compared with different methods in restoring the Baboon image corrupted at σ = 50: (a) Corrupted image; (b) (New, indices of original, σ = 0, PSNR = 24.27); (c) (New, indices of BM3D, σ = 5, PSNR = 23.91); (d) BM3D, PSNR = 22.28; (e) PGPCA, PSNR = 21.97; (f) EPLL, PSNR = 22.39.
Figure 6. Outputs of the proposed algorithms compared with different methods in restoring Baboon image corrupted at σ = 50: (a) Corrupted image; (b) (New, indices of original, σ = 0, PSNR = 24.27); (c) (New, indices of BM3D, σ = 5, PSNR = 23.91); (d) BM3D, PSNR = 22.28; (e) PGPCA, PSNR = 21.97, (f) EPLL, PSNR = 22.39.
Table 1. The effect of resizing the corrupted image in terms of PSNR. Th = 0.5 (Lena), Th = 0.4 (Pepper), Th = 0.4 (Bridge); indices from the original image; w × w = 11 × 11.
σ = 20  | 512 × 512 | 1024 × 256 | 2048 × 128 | 4096 × 64 | 8192 × 32 | 16,384 × 16
Lena    | 33.36     | 33.92      | 35.12      | 36.86     | 37.2      | 28.26
Pepper  | 33.7      | 34.07      | 34.35      | 36.75     | 35.62     | 30.88
Bridge  | 36.05     | 36.38      | 36.80      | 37.20     | 34.44     | 28.32
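Every shape in Table 1 holds the same 512 × 512 = 262,144 pixels; only the layout of the ascending-sorted pixel sequence changes. A minimal sketch of this resizing step, assuming (as the abstract describes) that the pixels are sorted and laid out row by row, with the sort indices kept so the original arrangement can be restored (function name hypothetical):

```python
import numpy as np

def resize_sorted(image, rows, cols):
    """Sort pixels in ascending order and lay them out row by row in a new
    rows-by-cols image. rows * cols must equal the input pixel count."""
    flat = image.ravel()
    order = np.argsort(flat)       # indices needed later to undo the sort
    pixels = flat[order]           # ascending pixel values: similar values cluster
    assert rows * cols == pixels.size
    return pixels.reshape(rows, cols), order

# All shapes tested in Table 1 repartition the same 262,144 pixels:
img = np.random.randint(0, 256, (512, 512))
for shape in [(512, 512), (1024, 256), (2048, 128),
              (4096, 64), (8192, 32), (16384, 16)]:
    sorted_img, order = resize_sorted(img, *shape)
    assert sorted_img.shape == shape
```

Placing the processed values back at positions `order` undoes the sort, which is where the index (external) prior enters: the better the indices, the closer the final arrangement is to the clean image.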
Table 2. The effect of changing the window size w × w in terms of PSNR. Th = 0.5 (Lena), Th = 0.4 (Pepper), Th = 0.4 (Bridge), Th = 0.6 (Lake).
σ = 20            | 7 × 7 | 9 × 9 | 11 × 11 | 13 × 13 | 15 × 15 | 17 × 17
Lena (original)   | 35.44 | 35.4  | 37.2    | 36.54   | 36.12   | 34.00
Lena (BM3D, 5)    | 34.07 | 34.05 | 35.31   | 34.92   | 34.62   | 32.98
Pepper (original) | 36.21 | 36.63 | 35.62   | 33.42   | 33.46   | 33.72
Bridge (original) | 35.02 | 34.51 | 34.44   | 33.34   | 33.25   | 32.54
Lake (original)   | 33.87 | 37.19 | 35.41   | 31.64   | 31.34   | 32.02
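The window size swept in Table 2 controls the w × w patches on which, per the abstract, principal component analysis is applied to suppress the small, noise-dominated components. The paper's exact PCA step is not reproduced here; the following is a generic sketch of PCA shrinkage on a stack of similar patches (the `keep_energy` criterion is an illustrative assumption, not the paper's threshold):

```python
import numpy as np

def pca_denoise_patches(patches, keep_energy=0.95):
    """Project a stack of similar patches onto their leading principal
    components, discarding the low-variance (mostly noise) directions.

    patches: (n, w*w) matrix, one flattened w-by-w patch per row."""
    mean = patches.mean(axis=0)
    centered = patches - mean
    # Eigen-decomposition of the patch covariance matrix
    cov = centered.T @ centered / max(len(patches) - 1, 1)
    vals, vecs = np.linalg.eigh(cov)           # eigenvalues in ascending order
    vals, vecs = vals[::-1], vecs[:, ::-1]     # reorder to descending
    cum = np.cumsum(vals) / vals.sum()
    k = int(np.searchsorted(cum, keep_energy)) + 1
    basis = vecs[:, :k]                        # keep k strongest components
    return centered @ basis @ basis.T + mean
```

Larger windows give longer patch vectors but fewer truly similar patches, which is consistent with the peak in Table 2 around moderate window sizes.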
Table 3. The effect of changing the threshold value in terms of RMSE and PSNR (RMSE/PSNR). The algorithm terminates at the minimum RMSE (equivalently, the maximum PSNR).
σ = 30 | Th = 0.1    | Th = 0.2    | Th = 0.3    | Th = 0.4     | Th = 0.5     | Th = 0.6
Lena   | 31.52/28.3  | 31.24/29.1  | 31.17/29.4  | 30.63/32     | 30.46/33.3   | 30.79/31
Pepper | 31.43/28.6  | 31.25/29.2  | 31.15/29.6  | 30.75/31.4   | 30.41/34     | 30.82/31
Bridge | 31.06/29.94 | 31.02/30.14 | 30.79/31.18 | 30.5/33.17   | 30.7/31.76   | 31.35/28.88
Baboon | 31.57/28.3  | 31.4/28.8   | 31.17/29.5  | 30.550/32.9  | 30.554/32.76 | 30.80/31.12
Lake   | 32.55/26.1  | 32.27/26.6  | 32.15/26.9  | 32/27.2      | 30.53/33     | 30.50/33.2
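The termination rule behind Table 3 can be read as a sweep over candidate thresholds that stops as soon as RMSE stops decreasing (PSNR stops increasing). A hedged sketch of that stopping rule, where `denoise(noisy, th)` is a hypothetical stand-in for one pass of the full method at threshold `th`:

```python
import numpy as np

def rmse(a, b):
    """Root-mean-square error between two images."""
    return np.sqrt(np.mean((np.asarray(a, float) - np.asarray(b, float)) ** 2))

def sweep_threshold(noisy, reference, denoise,
                    thresholds=(0.1, 0.2, 0.3, 0.4, 0.5, 0.6)):
    """Try each threshold in order and stop at the first rise in RMSE,
    mirroring the termination rule described for Table 3."""
    best_th, best_err, best_out = None, np.inf, None
    for th in thresholds:
        out = denoise(noisy, th)
        err = rmse(out, reference)
        if err >= best_err:          # RMSE started rising: terminate
            break
        best_th, best_err, best_out = th, err, out
    return best_th, best_err, best_out
```

This matches the pattern in Table 3, where RMSE falls to a minimum (e.g., Th = 0.5 for Lena and Pepper, Th = 0.4 for Bridge) and then rises again at the next threshold.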
Table 4. Comparison between different methods in terms of time consumption in seconds.
σ = 20 | New (original) | New (BM3D, 5) | BM3D | PGPCA | EPLL
Lena   | 20.20          | 20.49         | 8.43 | 16.76 | 821.04
Pepper | 20.96          | 20.42         | 8.96 | 17.51 | 870.67
Table 5. Comparison between different methods in restoring different images at σ = 20.
σ = 20       | Original, New | PGPCA(5), New | BM3D  | PGPCA | EPLL
Lena (0.5)   | 37.2          | 35.24         | 33.29 | 32.45 | 32.9
Pepper (0.4) | 35.62         | 34.06         | 33.64 | 32.59 | 33.29
Lake (0.6)   | 35.41         | 32.92         | 30.33 | 30    | 30.39
Boat (0.3)   | 33.08         | 31.79         | 31.12 | 30.39 | 30.96
Baboon (0.4) | 37.68         | 33.57         | 26.57 | 26.23 | 26.73
Fruits (0.3) | 36.80         | 34.64         | 32.76 | 31.70 | 32.67
Cat (0.3)    | 36.93         | 34.22         | 29.85 | 29.55 | 29.65
Table 6. Comparison between different methods in restoring different images at σ = 30.
σ = 30       | Original, New | BM3D(5), New | BM3D(10), New | BM3D  | PGPCA
Lena (0.5)   | 33.3          | 32.36        | 31.44         | 31.5  | 31.29
Pepper (0.5) | 34            | 32.91        | 31.99         | 31.94 | 31.46
Bridge (0.4) | 33.17         | 31.34        | 29.01         | 25.43 | 25.92
Baboon (0.4) | 32.9          | 30.96        | 28.48         | 24.52 | 25
Lake (0.6)   | 33.2          | 31.36        | 29.71         | 28.53 | 28.87

Awad, A.S. Fusion of External and Internal Prior Information for the Removal of Gaussian Noise in Images. J. Imaging 2020, 6, 103. https://doi.org/10.3390/jimaging6100103
