Article

Blind Additive Gaussian White Noise Level Estimation from a Single Image by Employing Chi-Square Distribution †

1 School of Electrical and Information Engineering, Wuhan Institute of Technology, Wuhan 430205, China
2 Artificial Intelligence School, Wuchang University of Technology, Wuhan 430223, China
3 School of Electronic Information Engineering, Wuhan Donghu University, Wuhan 430212, China
* Author to whom correspondence should be addressed.
This paper is an extended version of our paper published in 2021 IEEE 21st International Conference on Communication Technology (ICCT), Tianjin, China, 13–16 October 2021.
Entropy 2022, 24(11), 1518; https://doi.org/10.3390/e24111518
Submission received: 19 May 2022 / Revised: 9 July 2022 / Accepted: 19 October 2022 / Published: 24 October 2022

Abstract

The additive Gaussian white noise (AGWN) level in real-life images is usually unknown, so empirically set noise levels may cause denoising methods to over-smooth fine structures or remove noise incompletely. Previous noise level estimation methods often fail to estimate the noise level accurately from images with complicated structures. To cope with this issue, we propose a novel noise level estimation scheme based on the Chi-square distribution, including the following key points: First, a degraded image is divided into many image patches through a sliding window. Then, flat patches are selected by applying a patch selection strategy to the gradient maps of those image patches. Next, the initial noise level is calculated by employing the Chi-square distribution on the selected flat patches. Finally, a stable noise level is obtained by an iterative strategy. Quantitative and qualitative results of experiments on synthetic and real-life images validate that the proposed noise level estimation method is effective and even superior to state-of-the-art methods. Extensive experiments on noise removal using BM3D further illustrate that the proposed noise level estimation method is more beneficial for achieving favorable denoising performance with detail preservation.

1. Introduction

In image denoising, the additive Gaussian white noise level is an important parameter, but it is usually unknown in real-life images [1,2]. A denoising method supplied with an accurate noise level can produce pleasing results with rich detail [3]. Previous approaches usually set it empirically, which may yield undesirable results: a method with an overly high noise level may over-smooth rich structures, while AGWN may remain in the denoising result when the level is set too low [4,5].
To provide accurate noise levels, researchers have developed a number of noise level estimation methods [6,7]. Among them, patch-based methods are widely used due to their easy implementation and low computational burden. However, noise levels may be overestimated or underestimated when inaccurate homogeneous patches are used, and patch selection is sensitive to the complexity of images and noise levels.
To cope with these problems, this paper presents a novel noise level estimation method, which combines a simple and effective patch-based scheme with the Chi-square distribution. Figure 1 shows the flowchart of the proposed method. The input image is first divided into image patches through a sliding window. Then, we select the flat patches using a flat patch selection strategy based on extracted gradient texture maps. Next, the initial noise level is calculated by employing the Chi-square distribution on the selected flat patches. Finally, the estimation performance is boosted via an iterative strategy. The main contributions are detailed as follows:
1. A novel patch-based noise level estimation method based on the Chi-square distribution is proposed.
2. An optimization iteration scheme is proposed to improve the accuracy and stability of the noise level estimation strategy.
3. Quantitative and qualitative experimental results are used to verify the effectiveness of the proposed noise level estimation strategy.
Figure 1. Flowchart of the proposed noise level estimation method.
The remainder of this paper is organized as follows. Section 2 elaborates the literature of existing noise level estimation methods. The proposed noise level estimation method is described in Section 3. Section 4 shows our experimental results, and we conclude this paper in Section 5.

2. Literature Review

Currently, the noise level estimation methods are roughly divided into the following three categories:
  • Filter-based methods: These methods extract a differential image by convolving the noisy image with a specially designed filter, and then use the filtered differential image as the noise map to estimate the noise level [8,9]. For example, Immerkaer [10] designed a structure-insensitive method that filters noisy images and estimates the noise level by averaging the convolved images. It performed well in estimating noise levels, but failed on highly structured images [11]. To address this issue, Rank et al. [12] combined histogram statistics with a filter-based approach to generate a stable noise level. However, it overestimated noise levels for textured images [13]. To reduce the adverse effects caused by image structures, Tai et al. [14,15] applied a Laplacian operator to remove strong edge pixels before filtering so as to improve the accuracy at low noise levels.
  • Transform-based methods: Instead of using spatial information, these methods estimate the noise level by transforming an image into another space [16,17]. For example, Donoho [18] proposed a mean absolute deviation (MAD) method to estimate the noise level in the wavelet domain. They treated all the coefficients of the highest-frequency subband as noise, and estimated their standard deviation. This method performed well at high noise levels, but the error increases when the noise level is low [19,20]. Recently, models based on the singular value decomposition (SVD) have been widely used in noise level estimation [21,22]. For example, Liu et al. [23] used the tail of the singular values in the SVD to estimate the noise level, which minimizes the interference of image structures. However, image details and noise cannot be completely separated at the tail of the singular values for images with rich structures, so the noise level is invariably overestimated [24].
  • Patch-based methods: In these methods, a noisy image is first decomposed into a group of patches. Then, homogeneous patches are selected via various statistical techniques for noise level estimation [25,26,27]. For example, Pyatykh et al. [28] proposed a method based on principal component analysis (PCA), which takes the smallest eigenvalue of the image patch covariance matrix as the noise level. Since the minimum eigenvalue of the PCA of noisy image patches does not always satisfy the null hypothesis, the noise level estimate can easily become unstable or overestimated [29,30,31]. To improve on the above methods, Liu et al. [32] proposed an automatic noise level estimation method that adaptively selects effective image patches for covariance calculation. It effectively reduces the obvious overestimation of the PCA-based method at low noise levels, but underestimation still occurs at high noise levels [33].
The above-mentioned methods have achieved considerable progress in improving the accuracy of noise level estimation, but they also have their respective drawbacks [34,35]. For example, filter-based methods produce large errors when the structures in an image are dense, and they have high computational complexity [36,37]. Transform-based methods tend to produce overestimated results when noise levels are low [38,39]. Patch-based methods may produce underestimated results at high noise levels [17,40,41].

3. Image Noise Level Estimation Based on Chi-Square Distribution

3.1. Image Decomposition into Patches

Assuming that an image is degraded by additive white Gaussian noise (AWGN) with unknown standard deviation σ , the model can be defined as
Y = X + N ,
where X is the latent clean image, N is AWGN with N ∼ N(0, σ²), and Y is the noisy image. For a purely flat image, the contaminated image Y_F can be expressed as
Y F = X F + N ,
where X F represents the ground truth image. As X F is purely flat, its gradient maps are zero. Then the noise level of the noisy image Y F can be easily obtained by the following formula:
\sigma_f = \sqrt{ \dfrac{ \sum_{i=1}^{P} \sum_{j=1}^{Q} \left( y(i,j) - \bar{Y}_F \right)^2 }{ Num } },
where P and Q are the height and width of the noisy image Y_F, y(i,j) is the gray value of Y_F at point (i,j), \bar{Y}_F is the average of Y_F, Num is the number of pixels, and σ_f is the estimated noise level. Unfortunately, natural images usually have rich structures, so it is inaccurate to estimate the noise level directly using Equation (3). To accurately estimate the noise level of a degraded image, we first decompose the image into many image patches through a sliding window (6 × 6 in this paper). The patch y_k can be defined as
y k = x k + n k , k = 1 , 2 , 3 , , S ,
where S is the number of image patches, x_k is the k-th clean patch from X, and each patch is identified by its center pixel. y_k denotes the noisy patch corrupted by Gaussian noise n_k with zero mean and noise level σ_n. In the following, we use the set {Y_k}_{k=1}^{S} to represent the noisy patches, which can be subdivided into contaminated detail patches and latent flat patches.
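The stride-1 sliding-window decomposition of Equation (4) can be sketched as follows. This is an illustrative Python/NumPy version (the paper's experiments use MATLAB); it also includes a sanity check of the flat-image estimate of Equation (3).

```python
import numpy as np

def image_to_patches(Y, d=6):
    """Decompose a noisy image Y into overlapping d-by-d patches
    via a stride-1 sliding window, as in Eq. (4)."""
    P, Q = Y.shape
    patches = [Y[i:i + d, j:j + d]
               for i in range(P - d + 1)
               for j in range(Q - d + 1)]
    return np.stack(patches)  # shape (S, d, d)

# Sanity check of Eq. (3): for a purely flat image, the sample
# standard deviation directly recovers the noise level sigma.
rng = np.random.default_rng(0)
Y_F = 128.0 + rng.normal(0.0, 10.0, size=(256, 256))  # sigma = 10
sigma_f = float(np.sqrt(np.mean((Y_F - Y_F.mean()) ** 2)))
```

On a structured image the same computation would mix gradients into σ_f, which is exactly why the method restricts estimation to flat patches.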

3.2. Flat Patches Selection

In this work, we select latent flat patches from the set {Y_k}_{k=1}^{S} and define them as the set {Z_l}_{l=1}^{C}, where C is the number of flat patches. Comparing detail patches with flat patches, we find that the flatness of a patch can be well characterized by its gradient feature maps.
The gradient maps of y k can be calculated with the following function
G_h = H_h \times y_k, \quad G_v = H_v \times y_k,
where H_h and H_v are the horizontal and vertical gradient matrices, respectively, and G_h and G_v represent the horizontal and vertical gradient strength maps. Substituting Equation (4) into Equation (5), it can be rewritten as
G_h = H_h \times (x_k + n_k), \quad G_v = H_v \times (x_k + n_k).
The gradient texture intensity ε k of the image patch y k is formulated as
\varepsilon_k = \sum_{i=1}^{K} \sum_{j=1}^{J} \left[ G_h(i,j) + G_v(i,j) \right] = \sum_{i=1}^{K} \sum_{j=1}^{J} \left[ H_h \times (x_k(i,j) + n_k(i,j)) + H_v \times (x_k(i,j) + n_k(i,j)) \right] = \sum_{i=1}^{K} \sum_{j=1}^{J} \left[ (H_h + H_v) \times x_k(i,j) + (H_h + H_v) \times n_k(i,j) \right],
where K and J respectively represent the height and width of the image patch y k . In order to set a reasonable threshold to select flat patches, we conduct a more detailed study on the gradient texture intensity ε k . To simplify the process, Equation (7) can be rewritten as
ε k = A ( x k ) + B ( n k ) ,
where x_k is the latent clean patch, A(x_k) = \sum_{i=1}^{K} \sum_{j=1}^{J} [(H_h + H_v) \times x_k(i,j)], n_k represents the Gaussian noise of the patch, and B(n_k) = \sum_{i=1}^{K} \sum_{j=1}^{J} [(H_h + H_v) \times n_k(i,j)]. The gradient texture intensity component A(x_k) is determined by the patch x_k, and B(n_k) is generated by AGWN. For flat patches from {Z_l}_{l=1}^{C}, x_l is absolutely flat so that A(x_l) = 0. Therefore, the gradient texture intensity ε_flat of such a patch can be expressed as
\varepsilon_{flat} = A(x_{flat}) + B(n_{flat}) = B(n_{flat}) = \sum_{i=1}^{K} \sum_{j=1}^{J} \left[ (H_h + H_v) \times n_{flat}(i,j) \right],
where x_flat is the latent flat patch and n_flat is AGWN. Since B(n_k) is only affected by the AGWN and the patch size, we can fit it on an extremely flat image under known conditions to calculate the gradient texture intensity threshold. First, the image is divided into patches, and then the gradient texture intensity of each patch is computed by Equation (9). The gradient texture intensity threshold ε_δ can be calculated from
ε δ = M a x ( B ( n k ( i , j ) ) ) ,
where B(n_k(i,j)) represents the noise gradient texture intensity component of the noise patch n_k(i,j). Based on statistics over a large number of noise patches, the maximum value of B(n_k(i,j)) is selected as ε_δ. So we have
ε f l a t < ε δ < ε .
Equation (11) shows that the left-hand side is a good approximation of the gradient texture intensity of the flat patches. When selecting flat patches in a natural image, we use the parameter ε_δ as the threshold: if the gradient texture intensity ε_k is less than the threshold ε_δ, the patch is regarded as a flat patch and used for noise level estimation. For a patch z_l in the selected flat patch set {Z_l}_{l=1}^{C}, we have
z l = x l + n l ,
where z l is the latent flat patch of the noisy image Y, x l is the flat patch of X, and n l is AGWN.
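The flat patch selection above can be sketched as follows. This is an illustrative Python/NumPy version in which the gradient matrices H_h and H_v are assumed to be simple forward-difference operators (the text does not fix a specific kernel pair), and ε_δ is obtained, per Equation (10), as the maximum texture intensity over a large set of pure-noise patches.

```python
import numpy as np

def texture_intensity(patch):
    """Gradient texture intensity of one patch (Eq. (7)); the assumed
    H_h / H_v are forward-difference kernels."""
    gh = np.abs(np.diff(patch, axis=1))  # horizontal gradient map G_h
    gv = np.abs(np.diff(patch, axis=0))  # vertical gradient map G_v
    return float(gh.sum() + gv.sum())

def flat_threshold(sigma, d=6, trials=2000, seed=0):
    """eps_delta (Eq. (10)): maximum texture intensity observed over
    pure-noise patches at noise level sigma."""
    rng = np.random.default_rng(seed)
    return max(texture_intensity(rng.normal(0.0, sigma, (d, d)))
               for _ in range(trials))

def select_flat(patches, eps_delta):
    """Keep patches whose texture intensity falls below the
    threshold, i.e. the flat-patch rule of Eq. (11)."""
    return [p for p in patches if texture_intensity(p) < eps_delta]
```

A flat noisy patch stays below ε_δ with high probability, while any patch containing an edge exceeds it by a wide margin, which is what makes the threshold usable as a flat/detail separator.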

3.3. Image Noise Level Estimation

Ideally, according to Equation (12), since z_l is a flat image patch, its average equals the mean of the clean patch x_l, and its variance is contributed by the noise n_l. We can then write
Z \sim N(\mu, \sigma^2) = \begin{cases} z_1 \sim N(\mu_1, \sigma_1^2) \\ z_2 \sim N(\mu_2, \sigma_2^2) \\ \vdots \\ z_{C-1} \sim N(\mu_{C-1}, \sigma_{C-1}^2) \\ z_C \sim N(\mu_C, \sigma_C^2) \end{cases},
where μ is the average of all patches Z l | l = 1 C , μ l ( l ( 1 , C ) ) is the mean of the l-th flat patch z l , and σ l 2 is the variance of the l-th flat patch z l . Equation (13) can be deduced as
(Z - \mu) \sim N(0, \sigma^2) = \begin{cases} (z_1 - \mu_1) \sim N(0, \sigma_1^2) \\ (z_2 - \mu_2) \sim N(0, \sigma_2^2) \\ \vdots \\ (z_{C-1} - \mu_{C-1}) \sim N(0, \sigma_{C-1}^2) \\ (z_C - \mu_C) \sim N(0, \sigma_C^2) \end{cases}.
In the ideal case, σ 2 = σ 1 2 = σ 2 2 = = σ C 2 , so Equation (14) is simplified as
(Z_l - \mu_l)\big|_{l=1}^{C} \sim N(0, \sigma^2).
Assuming M l = Z l μ l , M 1 , M 2 , ⋯, M C are mutually independent and identically distributed random variables. Then we can obtain
G_l = \frac{M_l}{\sigma} \sim N(0, 1).
For all flat patches, the distribution U = l = 1 C G l 2 follows the Chi-square distribution with the degree of freedom C, and can be denoted as χ 2 ( C ) . Then, the calculation formula of the noise level is defined as follows:
\sigma = \sqrt{ \dfrac{ \sum_{l=1}^{C} \sigma_l^2 }{ \chi_T^2 } },
with the confidence level T.
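The estimator of Equation (17) can be sketched as follows: the pooled squared deviation of the flat patches, divided by a Chi-square quantile, yields σ². Since the exact quantile convention at confidence level T is not fully specified here, this illustrative Python version takes the median quantile of χ² with C(n − 1) degrees of freedom, approximated by the Wilson-Hilferty formula to stay NumPy-only; these are assumptions, not the paper's exact choice.

```python
import numpy as np

def chi2_median(df):
    """Wilson-Hilferty approximation to the median of chi-square(df)."""
    return df * (1.0 - 2.0 / (9.0 * df)) ** 3

def estimate_sigma(flat_patches):
    """Chi-square noise level estimate (Eqs. (13)-(17)).
    The pooled squared deviation S of the flat patches satisfies
    S / sigma^2 ~ chi-square with C*(n-1) degrees of freedom, so
    dividing S by a chi-square quantile gives an estimate of sigma^2."""
    S, df = 0.0, 0
    for z in flat_patches:
        S += float(np.sum((z - z.mean()) ** 2))
        df += z.size - 1
    return float(np.sqrt(S / chi2_median(df)))
```

With many flat patches the degrees of freedom are large, the χ² distribution concentrates around its median, and the estimate becomes very tight, which is consistent with the feasibility results reported in Section 4.1.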

3.4. Noise Level Estimation Optimization

In order to improve the accuracy of noise level estimation, we use the iterative noise level σ_t as a parameter, in combination with the gradient texture intensity threshold ε_δ, to further select the flat patches {Z_l}_{l=1}^{C}. As shown in Algorithm 1, the initial noise level σ_0 is first estimated by the method in Section 3.3. Next, the threshold ε_δ is obtained from the gradient feature and the iterative noise level σ_t. After that, the flat patches {Z_l}_{l=1}^{C} are selected according to ε_δ. Then, the noise level σ_{t+1} is estimated by the Chi-square distribution on the flat patches {Z_l}_{l=1}^{C}. This process is iterated until the estimated noise level σ_{t+1} converges to a fixed point. The iterative convergence criterion is
\Phi = \dfrac{ \left| \sigma_{t+1}^2 - \sigma_t^2 \right| }{ \sigma_t^2 } \leq \gamma,
where γ = 10^{-3}, σ_t is the noise level estimated in the t-th iteration, and σ_{t+1} is the estimate in the (t+1)-th iteration. If the termination condition is satisfied, the final estimated noise level σ_end is output.
Algorithm 1 Noise level estimation optimization.
  • Initial input: Noised Image Y R P × Q , Patch Size d, Iteration Number I t e r , Convergence Threshold γ .
  • Step 1: Generate the image patch dataset {Y_k}_{k=1}^{S}, which contains S = (P − (d + 2)) × (Q − (d + 2)) patches with patch size w = d² from the noisy image Y;
  • Step 2: Generate the gradient texture intensity dataset {E_k}_{k=1}^{S}, where ε_k is calculated using Equation (7) from the patch dataset {Y_k}_{k=1}^{S};
  • Step 3: Estimate the initial noise level σ_0 as in Section 3.3;
  • Step 4: Calculate the threshold ε_δ combined with σ_t using Equation (10);
  •  // The selection of flat patches //
  • for  k = 1   t o   S do
  •   if ε_k ≤ ε_δ then
  •    The patch y k is regarded as the flat image patch Z l ;
  •   else
  •    The patch y k is considered a detail patch and is removed;
  •   end if
  • end for
  •  // Image noise level calculation //
  • for   t = 0 to I t e r do
  •   Calculating σ ( t + 1 ) with Z l | l = 1 C using Equation (17);
  •   Generating Φ using Equation (18);
  •   if Φ > γ then
  •     σ_t = σ_{t+1} and go back to Step 4;
  •   else
  •     σ e n d = σ ( t + 1 ) and break;
  •   end if
  • end for
  • Output: Estimated noise level σ e n d .
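Putting the pieces together, Algorithm 1 can be sketched end-to-end as below. This illustrative Python/NumPy version makes several assumptions not fixed by the pseudocode: forward-difference gradient kernels, the global image standard deviation as the initial guess σ_0 (an over-estimate on structured images that shrinks over the iterations), a Monte-Carlo threshold ε_δ drawn from 500 pure-noise patches, and the median Chi-square quantile (Wilson-Hilferty approximation) in place of the confidence-level-T quantile of Equation (17).

```python
import numpy as np

def noise_level(Y, d=6, iters=10, gamma=1e-3, seed=0):
    """End-to-end sketch of Algorithm 1: iterate flat-patch selection
    and Chi-square estimation until the noise level stabilizes."""
    rng = np.random.default_rng(seed)
    P, Q = Y.shape
    patches = [Y[i:i + d, j:j + d]
               for i in range(P - d + 1) for j in range(Q - d + 1)]
    # Per-patch gradient texture intensity, Eq. (7).
    eps = np.array([np.abs(np.diff(p, axis=1)).sum() +
                    np.abs(np.diff(p, axis=0)).sum() for p in patches])
    sigma = float(Y.std())  # crude initial estimate sigma_0 (assumption)

    for _ in range(iters):
        # Threshold eps_delta from pure-noise patches at the current
        # sigma estimate, Eq. (10).
        noise = rng.normal(0.0, sigma, (500, d, d))
        eps_d = max(np.abs(np.diff(n, axis=1)).sum() +
                    np.abs(np.diff(n, axis=0)).sum() for n in noise)
        flat = [p for p, e in zip(patches, eps) if e < eps_d]
        if not flat:
            break
        # Pooled Chi-square estimate (median-quantile stand-in for
        # Eq. (17), via the Wilson-Hilferty approximation).
        S = sum(float(np.sum((z - z.mean()) ** 2)) for z in flat)
        df = len(flat) * (d * d - 1)
        new = float(np.sqrt(S / (df * (1.0 - 2.0 / (9.0 * df)) ** 3)))
        # Convergence test, Eq. (18).
        if abs(new ** 2 - sigma ** 2) / sigma ** 2 <= gamma:
            return new
        sigma = new
    return sigma
```

On a structured image the first threshold (computed from the over-estimated σ_0) admits too many patches; each iteration tightens ε_δ and the flat set together, which is the stabilizing mechanism the convergence criterion of Equation (18) detects.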

4. Experimental Results and Discussions

In this section, extensive experiments are conducted to evaluate the performance of the proposed method. In Section 4.1, we first perform simulation experiments by superimposing different noise levels on three synthetic flat images to verify the correctness of our theory. The selection of flat patches and the stability of the iterative strategy are then examined in Section 4.2 and Section 4.3. Next, we compare our model with state-of-the-art methods to demonstrate its strong performance. Finally, in Section 4.5 and Section 4.6, we test the practical applicability of our method by combining it with the BM3D denoising method.
All experiments on the image noise level estimation algorithms were performed in MATLAB 2019b on a Windows 10 machine with an eight-core Intel i7 CPU at 3.0 GHz and 8 GB of RAM. In addition, the comparison models used in our experiments, such as Olsen [8], Tai [14], and Donoho [18], are generic versions without tuned parameters.

4.1. Feasibility Study on Test Flat Images

To verify the feasibility of the proposed noise level estimation theory, the three flat images shown in Figure 2 are selected for the experiments. These images are 1024 × 1024 pixels, with gray values of 64, 128, and 192, respectively. The sliding window size for the image patches is 6 × 6 and T = 10^{-6}.
The noise level estimation curves are shown in Figure 3, from which we can see that the noise levels estimated by the proposed scheme are very close to the ground-truth noise levels, indicating that the variance distribution of flat image patches obeys the Chi-square distribution, which validates the feasibility of the proposed method.

4.2. Analysis of the Flat Patch Selection

The accuracy of the flat patch selection is key to our method. Several patches with their gradient texture intensity maps and the threshold for the noisy image Stable (σ = 5) are shown in Figure 4. According to Equation (11), patches are regarded as flat patches when ε_k < ε_δ = 36,944, where ε_δ is calculated from the extremely flat image with the iterative noise level σ_{t+1}.
As shown in Figure 5, the visual results of four representative images are displayed. We experimented on these images at four different standard noise levels ( σ = 5 , 10 , 15 , and 20). One can observe that the image is divided into numerous patches, and the flat patches are shown in the green area of the labeled images. At different noise levels, our method can adaptively select the flat patches, which can be helpful to estimate the noise level more accurately. The method of flat patch selection will be used in Section 4.4, Section 4.5 and Section 4.6.

4.3. Discussion of the Iterative Model

In this section, we test the impact of the iterative model on our method. Firstly, the gradient texture intensity threshold ε δ is calculated through the iterative noise level σ t . Secondly, the iterative noise level σ t + 1 is obtained according to the Chi-square distribution. Finally, the final estimated noise level σ e n d is output according to the termination condition Φ and I t e r , where Φ is calculated by Equation (18) and I t e r = 10 .
Figure 6 illustrates the experimental results of our optimization method. Figure 6a,c show the estimated noise level over the iterations on the Stable image with σ = 5 and σ = 15, respectively; each experiment was repeated 100 times. Accordingly, Figure 6b,d show the corresponding average estimated noise level over the iterations. These results show that the noise level obtained with the iterative strategy in our method converges to a fixed point, which verifies its effectiveness.

4.4. Comparisons of Experiments on Synthetic Images

We tested the proposed noise level estimation method on the six well-known images (from the BSD image database [42]) shown in Figure 7. The proposed method is compared with state-of-the-art methods, namely Donoho [18], Immerkaer [10], Olsen [8], Pyatykh [28], and Tai [14].
The comparisons of the noise level estimation results are shown in Figure 8, where the noise level ranges from 1 to 50. We can observe that the proposed method outperforms the state of the art in noise level estimation for each image under the same noise level. In particular, the proposed method also yields pleasing results on images with rich textures (such as the Koala image shown in Figure 8f). In addition, Table 1 shows the average error of the estimated noise levels over the six images, which is calculated as E_err = |E_n − Ē_n|. At each noise level, the two best results are shown in bold. The results show that our method produces smaller average errors in most cases than the other advanced noise level estimation methods, demonstrating that our method has a powerful capacity for accurately estimating noise levels. Compared with the Tai algorithm, our method is slightly inferior when the noise level is 15; the reason may be that some noisy structural patches are mistaken for flat patches during noise level estimation. However, our method is superior at the other noise levels.
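For reference, the average-error metric reported in Table 1 is straightforward to compute; a minimal sketch (the helper name is ours, not the paper's):

```python
import numpy as np

def average_error(true_sigma, estimates):
    """E_err = |E_n - mean of estimates|: absolute gap between the
    true noise level and the mean of the estimated noise levels."""
    return abs(float(true_sigma) - float(np.mean(estimates)))
```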

4.5. Combined with BM3D on Synthetic Images

We also test the noise level estimation methods in combination with the classic block-matching and 3D collaborative filtering (BM3D) [43] method for blind AWGN removal. Two synthetic images with different noise levels are tested, and the visual results are compared in Figure 9 and Figure 10. However, the differences at each level on each image are small, resulting in subtle differences in the visual results, especially in the enlarged areas. The quantitative denoising results are presented in Table 2 and Table 3, where the peak signal-to-noise ratio (PSNR) [44] is adopted to assess noise reduction and the structural similarity index (SSIM) [45] is used to assess structure preservation. We combine BM3D with the true noise level to obtain a reference reconstructed image and use its PSNR and SSIM values as benchmarks to evaluate the noise level estimation methods. It can be seen that our method tends to produce competitive results that are closer to the benchmarks than those of the state-of-the-art methods. In general, combining BM3D with our method produces the best results compared with the images produced by other methods, although the performance on some images (such as Church and Koala) is slightly worse than that of other methods. The reason is that the BM3D method over-smooths irregular structures because these two noisy images exhibit weak self-similarity.
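Of the two quality metrics above, PSNR can be computed directly; a NumPy-only sketch for 8-bit images (SSIM is omitted here, since it involves a more elaborate windowed computation):

```python
import numpy as np

def psnr(ref, test, peak=255.0):
    """Peak signal-to-noise ratio in dB between a reference image
    and a reconstructed image."""
    diff = ref.astype(np.float64) - test.astype(np.float64)
    mse = float(np.mean(diff ** 2))
    return 10.0 * np.log10(peak ** 2 / mse)
```

Higher PSNR means the reconstruction is closer to the reference; a noise level estimate that over- or under-shoots drives BM3D away from the benchmark PSNR, which is exactly how Tables 2 and 3 grade the estimators.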

4.6. Combined with BM3D on Real-World Images

To further verify the effectiveness of our method, we conduct experiments on real-life X-ray angiograms, as shown in Figure 11 and Figure 12. Several observations can be drawn from the figures. First, when the noise level is empirically set too small, the blood vessels are preserved well, but the noise is removed incompletely, as shown in Figure 11c and Figure 12c. Second, when the noise level is empirically set too large, the noise is thoroughly removed but the fine vessels are over-smoothed, as shown in Figure 11d and Figure 12d. In contrast, the proposed method reduces noise effectively while preserving rich vessel detail, as shown in Figure 11b and Figure 12b. These extensive experimental results illustrate that our method yields satisfactory results, which is beneficial for practical applications.

5. Conclusions

Image noise levels are unknown in the real world, so robustly and accurately estimating the blind noise level from real-world images is important for image denoising methods. This paper proposed a novel noise level estimation method that applies the Chi-square distribution to image patches. The key procedures of the proposed method, including image decomposition into patches, flat patch selection, image noise level estimation based on the Chi-square distribution, and robust noise level estimation via an iterative strategy, are discussed in detail. The results of the experiments on flat images, synthetic images, and real-world images validate the feasibility, the effectiveness, and the practical applicability of the proposed method, respectively. Although the proposed method obtains feasible and competitive results, its performance may decrease when images have rich structures. In future work, we will further optimize the selection of flat patches to estimate noise levels more accurately.

Author Contributions

Z.W.: Investigation, Writing—original draft. Q.A.: Software. Z.Z.: Visualization, Investigation. H.F.: Writing—review and editing. Z.H.: Conceptualization, Methodology. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the National Natural Science Foundation of China under Grant 61901309, and by the Graduate Innovative Fund of Wuhan Institute of Technology, No. CX2021084.

Data Availability Statement

The data presented in this study are available on request from the corresponding author. Data are not publicly available due to privacy considerations.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Das, P.; Pal, C.; Chakrabarti, A.; Acharyya, A.; Basu, S. Adaptive denoising of 3d volumetric mr images using local variance based estimator. Biomed. Signal Process. Control. 2020, 59, 101901. [Google Scholar] [CrossRef]
  2. Yang, X.; Tan, L.; He, L. A robust least squares support vector machine for regression and classification with noise. Neurocomputing 2014, 140, 41–52. [Google Scholar] [CrossRef]
  3. Huang, Z.; Li, Q.; Zhang, T.; Sang, N.; Hong, H. Iterative weighted sparse representation for x-ray cardiovascular angiogram image denoising over learned dictionary. IET Image Process. 2018, 12, 254–261. [Google Scholar] [CrossRef]
  4. Kokil, P.; Pratap, T. Additive white gaussian noise level estimation for natural images using linear scale-space features. Circuits Syst. Signal Process. 2021, 40, 353–374. [Google Scholar] [CrossRef]
  5. Huang, Z.; Zhang, Y.; Li, Q.; Li, Z.; Zhang, T.; Sang, N.; Xiong, S. Unidirectional variation and deep cnn denoiser priors for simultaneously destriping and denoising optical remote sensing image. Int. J. Remote Sens. 2019, 40, 5737–5748. [Google Scholar] [CrossRef]
  6. Lin, C.; Ye, Y.; Feng, S.; Huang, M. A noise level estimation method of impulse noise image based on local similarity. Multimed. Tools Appl. 2022, 81, 15947–15960. [Google Scholar] [CrossRef]
  7. Rakhshanfar, M.; Amer, M. Estimation of Gaussian, Poissonian–Gaussian, and processed visual noise and its level function. IEEE Trans. Image Process. 2016, 25, 4172–4185. [Google Scholar] [CrossRef]
  8. Olsen, S.I. Estimation of noise in images: An evaluation. CVGIP Graph. Model. Image Process. 1993, 55, 319–323. [Google Scholar] [CrossRef]
  9. Khmag, A.; Al, H.S.A.R.; Ramlee, R.A.; Kamarudin, N.; Malallah, F.L. Natural image noise removal using nonlocal means and hidden markov models in transform domain. Vis. Comput. 2018, 34, 1661–1675. [Google Scholar] [CrossRef]
  10. Immerkaer, J. Fast noise variance estimation. Comput. Vis. Image Underst. 1996, 64, 300–302. [Google Scholar] [CrossRef]
  11. Zhang, X.; Xiong, Y. Impulse noise removal using directional difference based noise detector and adaptive weighted mean filter. IEEE Signal Process. Lett. 2009, 16, 295–298. [Google Scholar] [CrossRef]
  12. Rank, K.; Lendl, M.; Unbehauen, R. Estimation of image noise variance. IEE Proc.-Vision Image Signal Process. 1999, 146, 80–84. [Google Scholar] [CrossRef]
  13. Dong, L.; Zhou, J.; Tang, Y.Y. Noise level estimation for natural images based on scale-invariant kurtosis and piecewise stationarity. IEEE Trans. Image Process. 2016, 26, 1017–1030. [Google Scholar] [CrossRef]
  14. Tai, S.C.; Yang, S.M. A fast method for image noise estimation using laplacian operator and adaptive edge detection. In Proceedings of the 2008 3rd International Symposium on Communications, Control and Signal Processing, St Julians, Malta, 12–14 March 2008; pp. 1077–1081. [Google Scholar]
  15. Yang, S.M.; Tai, S.C. Fast and reliable image-noise estimation using a hybrid approach. J. Electron. Imaging 2010, 19, 033007. [Google Scholar] [CrossRef]
  16. Tang, C.; Yang, X.; Zhai, G. Dual-transform based noise estimation. In Proceedings of the 2012 IEEE International Conference on Multimedia and Expo, Melbourne, VIC, Australia, 9–13 July 2012; pp. 991–996. [Google Scholar]
  17. Jiang, P.; Wang, Q.; Wu, J. Efficient noise-level estimation based on principal image texture. IEEE Trans. Circuits Syst. Video Technol. 2019, 30, 1987–1999. [Google Scholar] [CrossRef]
  18. Donoho, D.L. De-noising by soft-thresholding. IEEE Trans. Inf. Theory 1995, 41, 613–627. [Google Scholar] [CrossRef] [Green Version]
  19. Stefano, A.D.; White, P.R.; Collis, W.B. Training methods for image noise level estimation on wavelet components. EURASIP J. Adv. Signal Process. 2004, 2004, 1–8. [Google Scholar] [CrossRef] [Green Version]
  20. Ugweje, O.C. Selective noise filtration of image signals using wavelet transform. Measurement 2004, 36, 279–287. [Google Scholar] [CrossRef]
  21. Liu, W. Additive white gaussian noise level estimation based on block svd. In Proceedings of the 2014 IEEE Workshop on Electronics, Computer and Applications, Ottawa, ON, Canada, 8–9 May 2014; pp. 960–963. [Google Scholar]
  22. Khmag, A.; Malallah, F.L.; Sharef, B.T. Additive noise level estimation based on singular value decomposition (svd) in natural digital images. In Proceedings of the 2019 IEEE International Conference on Signal and Image Processing Applications (ICSIPA), Kuala Lumpur, Malaysia, 17–19 September 2019; pp. 225–230. [Google Scholar]
  23. Liu, W.; Lin, W. Additive white gaussian noise level estimation in svd domain for images. IEEE Trans. Image Process. 2012, 22, 872–883. [Google Scholar] [CrossRef]
  24. Yang, J.; Gan, Z.; Wu, Z.; Hou, C. Estimation of signal-dependent noise level function in transform domain via a sparse recovery model. IEEE Trans. Image Process. Iss 2015, 24, 1561–1572. [Google Scholar] [CrossRef]
  25. Huang, Z.; Wang, Z.; Zhu, Z.; Zhang, Y.; Fang, H.; Shi, Y.; Zhang, T. DLRP: Learning deep low-rank prior for remotely sensed image denoising. IEEE Geosci. Remote Sens. Lett. 2022, 19, 1–5. [Google Scholar] [CrossRef]
  26. Fu, P.; Li, C.; Cai, W.; Sun, Q. A spatially cohesive superpixel model for image noise level estimation. Neurocomputing 2017, 266, 420–432. [Google Scholar] [CrossRef]
  27. Shin, D.H.; Park, R.H.; Yang, S.; Jung, J.H. Block-based noise estimation using adaptive gaussian filtering. IEEE Trans. Consum. Electron. 2005, 51, 218–226. [Google Scholar] [CrossRef]
  28. Pyatykh, S.; Hesser, J.; Zheng, L. Image noise level estimation by principal component analysis. IEEE Trans. Image Process. 2012, 22, 687–699. [Google Scholar] [CrossRef] [PubMed]
  29. Manjón, J.V.; Coupé, P.; Buades, A. MRI noise estimation and denoising using non-local pca. Med. Image Anal. 2015, 22, 35–47. [Google Scholar] [CrossRef]
  30. Varon, C.; Alzate, C.; Suykens, J.A. Noise level estimation for model selection in kernel PCA denoising. IEEE Trans. Neural Netw. Learn. Syst. 2015, 26, 2650–2663. [Google Scholar] [CrossRef]
  31. Zeng, H.; Zhan, Y.; Kang, X.; Lin, X. Image splicing localization using PCA-based noise level estimation. Multimed. Tools Appl. 2017, 76, 4783–4799. [Google Scholar] [CrossRef]
  32. Liu, X.; Tanaka, M.; Okutomi, M. Single-image noise level estimation for blind denoising. IEEE Trans. Image Process. 2013, 22, 5226–5237. [Google Scholar] [CrossRef]
  33. Wang, Z.; Huang, Z.; Xu, Y.; Zhang, Y.; Li, X.; Cai, W. Image noise level estimation by employing chi-square distribution. In Proceedings of the 2021 IEEE 21st International Conference on Communication Technology (ICCT), Tianjin, China, 13–16 October 2021; pp. 1158–1161. [Google Scholar]
  34. Huang, Z.; Zhang, Y.; Li, Q.; Li, X.; Zhang, T.; Sang, N.; Hong, H. Joint analysis and weighted synthesis sparsity priors for simultaneous denoising and destriping optical remote sensing images. IEEE Trans. Geosci. Remote Sens. 2020, 58, 6958–6982. [Google Scholar] [CrossRef]
  35. Yesilyurt, A.B.; Erol, A.; Kamisli, F.; Alatan, A.A. Single image noise level estimation using dark channel prior. In Proceedings of the 2019 IEEE International Conference on Image Processing (ICIP), Taipei, Taiwan, 22–25 September 2019; pp. 2065–2069. [Google Scholar]
  36. Huang, Z.; Zhang, Y.; Li, Q.; Zhang, T.; Sang, N. Spatially adaptive denoising for X-ray cardiovascular angiogram images. Biomed. Signal Process. Control. 2018, 40, 131–139. [Google Scholar] [CrossRef]
  37. Ghazi, M.M.; Erdogan, H. Image noise level estimation based on higher-order statistics. Multimed. Tools Appl. 2017, 76, 2379–2397. [Google Scholar] [CrossRef]
  38. Turajlic, E. Adaptive SVD domain-based white Gaussian noise level estimation in images. IEEE Access 2018, 6, 72735–72747. [Google Scholar] [CrossRef]
  39. Khmag, A.; Ramli, A.R.; Al-haddad, S.A.R.; Kamarudin, N. Natural image noise level estimation based on local statistics for blind noise reduction. Vis. Comput. 2018, 34, 575–587. [Google Scholar] [CrossRef]
  40. Chen, L.; Huang, X.; Tian, J.; Fu, X. Blind noisy image quality evaluation using a deformable ant colony algorithm. Opt. Laser Technol. 2015, 57, 265–270. [Google Scholar] [CrossRef]
  41. Huang, Z.; Zhu, Z.; An, Q.; Wang, Z.; Zhou, Q.; Zhang, T.; Alshomrani, A.S. Luminance learning for remotely sensed image enhancement guided by weighted least squares. IEEE Geosci. Remote Sens. Lett. 2022, 19, 1–5. [Google Scholar] [CrossRef]
  42. Martin, D.; Fowlkes, C.; Tal, D.; Malik, J. A database of human segmented natural images and its application to evaluating segmentation algorithms and measuring ecological statistics. In Proceedings of the 8th International Conference on Computer Vision, Vancouver, BC, Canada, 9–12 July 2001; Volume 2, pp. 416–423. [Google Scholar]
  43. Dabov, K.; Foi, A.; Katkovnik, V.; Egiazarian, K. Image denoising by sparse 3-D transform-domain collaborative filtering. IEEE Trans. Image Process. 2007, 16, 2080–2095. [Google Scholar] [CrossRef]
  44. Huang, Z.; Wang, L.; An, Q.; Zhou, Q.; Hong, H. Learning a contrast enhancer for intensity correction of remotely sensed images. IEEE Signal Process. Lett. 2022, 29, 394–398. [Google Scholar] [CrossRef]
  45. Huang, Z.; Wang, Z.; Zhu, Z.; Zhang, Y.; Fang, H.; Shi, Y.; Zhang, T. Spatially adaptive multi-scale image enhancement based on nonsubsampled contourlet transform. Infrared Phys. Technol. 2022, 121, 104014. [Google Scholar] [CrossRef]
Figure 2. Test flat images with the gray values of (a) 64, (b) 128, and (c) 192, respectively.
Figure 3. Noise level estimation experiments on test flat images (a) 64, (b) 128, and (c) 192 with different noise levels.
Figure 4. Example patches and the threshold on the Stable image with σ = 5.
Figure 5. Flat patch selection results for four images (Church, Stable, Moorish Idol, and Desert). From left to right: clean images, then partition results for AGWN with σ = 5, σ = 10, σ = 15, and σ = 20, where the green regions mark the selected flat patches.
Figure 6. Estimated noise level at each iteration, over 100 experiments. (a) Stable image with added noise level σ = 5. (b) Average estimated noise level for Stable, σ = 5. (c) Stable image with added noise level σ = 15. (d) Average estimated noise level for Stable, σ = 15.
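The convergence behavior illustrated in Figure 6 rests on a standard sampling result: for n i.i.d. Gaussian samples with variance σ², the scaled sample variance (n − 1)s²/σ² follows a Chi-square distribution with n − 1 degrees of freedom, so the sample variances of flat patches cluster around σ². A minimal sketch of that idea is shown below; the patch size, patch count, and simple averaging rule are illustrative assumptions, not the paper's exact settings.

```python
import numpy as np

# Sketch: for i.i.d. Gaussian noise, (n-1)*s^2/sigma^2 ~ chi-square(n-1),
# so the unbiased sample variance of a flat patch estimates sigma^2.
rng = np.random.default_rng(0)

def estimate_sigma(flat_patches):
    """Estimate the noise std from a list of flat (texture-free) patches."""
    variances = [p.var(ddof=1) for p in flat_patches]  # unbiased sample variances
    return float(np.sqrt(np.mean(variances)))

# Simulate 200 flat 8x8 patches: constant gray 128 plus AGWN with sigma = 5.
patches = [128.0 + rng.normal(0.0, 5.0, size=(8, 8)) for _ in range(200)]
sigma_hat = estimate_sigma(patches)
print(round(sigma_hat, 1))  # close to the true sigma of 5
```

Averaging variances over many flat patches is what drives the estimate toward the true σ; the paper's iterative strategy additionally re-selects flat patches using the current estimate.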
Figure 7. Six well-known testing images in the BSDS database.
Figure 8. Noise level estimation experiments on synthetic images (a) Church, (b) Moorish Idol, (c) Stable, (d) Cactus, (e) Desert, and (f) Koala with different noise levels.
Figure 9. Visual results. (a) Clean image Cactus; (b) Noisy image with σ = 10; (c–h) Visual results of BM3D with noise level estimation in (c) [8], (d) [10], (e) [14], (f) [18], (g) [28], and (h) ours.
Figure 10. Visual results. (a) Clean image Moorish Idol; (b) Noisy image with σ = 15; (c–h) Visual results of BM3D with noise level estimation in (c) [8], (d) [10], (e) [14], (f) [18], (g) [28], and (h) ours.
Figure 11. Visual comparison results. (a) Real-life X-ray angiogram at the 57th frame. (b) Visual result of BM3D with our noise level estimation method (σ = 13.0365). (c,d) Visual results of BM3D with empirical noise levels 8 and 18, respectively.
Figure 12. Visual comparison results. (a) Real-life X-ray angiogram at the 85th frame. (b) Visual result of BM3D with our noise level estimation method (σ = 18.6854). (c,d) Visual results of BM3D with empirical noise levels 14 and 24, respectively.
Table 1. The average error values of the images in Figure 7. The bold font indicates the two best results at each noise level.
| Noise Level | 5 | 10 | 15 | 20 | 25 | 30 | 35 | 40 | 45 | 50 |
|---|---|---|---|---|---|---|---|---|---|---|
| Donoho [18] | 0.7058 | 0.6157 | 0.5241 | 0.5013 | 0.3696 | 0.4358 | 0.3562 | 0.3248 | 0.3010 | 0.3885 |
| J. Immerkar [10] | 0.8334 | 0.6019 | 0.4510 | 0.3513 | 0.2887 | 0.2427 | 0.2479 | 0.2362 | 0.2554 | 0.2411 |
| S. I. Olsen [8] | 0.6182 | 0.5561 | 0.4901 | 0.5190 | 0.4459 | 0.4680 | 0.4862 | 0.4522 | 0.3838 | 0.4088 |
| S. Pyatykh [28] | 0.5190 | 0.3329 | 0.2391 | 0.1873 | 0.2160 | 0.2960 | 0.3165 | 0.1717 | 0.2614 | 0.2485 |
| Tai Yang [14] | 0.1361 | 0.1999 | 0.1732 | 0.2028 | 0.2247 | 0.1696 | 0.2226 | 0.2053 | 0.3556 | 0.4126 |
| Our proposed | 0.1057 | 0.1802 | 0.1994 | 0.1406 | 0.1312 | 0.1272 | 0.1163 | 0.1326 | 0.0845 | 0.1624 |
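Reading Table 1 requires a concrete error metric. Below is a minimal sketch assuming the reported value is the absolute deviation |σ_est − σ_true| averaged over the test images; the paper's exact definition may differ, and the sample estimates used here are illustrative, not values from the experiments.

```python
import numpy as np

# Hypothetical error metric: mean absolute deviation between the estimated
# noise levels (one per test image) and the true noise level.
def average_error(estimates, sigma_true):
    estimates = np.asarray(estimates, dtype=float)
    return float(np.mean(np.abs(estimates - sigma_true)))

# Illustrative estimates for six images under true noise level sigma = 10.
err = average_error([10.2, 9.8, 10.1, 9.7, 10.3, 9.9], 10.0)
print(round(err, 3))  # 0.2
```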
Table 2. Average PSNR scores and their standard deviations for the images in Figure 7; each setting was run 100 times. The bold font denotes the best results.
| BM3D [43] + Predictive Model | Church (σ = 10) | Church (σ = 20) | Church (σ = 40) | Moorish Idol (σ = 10) | Moorish Idol (σ = 20) | Moorish Idol (σ = 40) | Stable (σ = 10) | Stable (σ = 20) | Stable (σ = 40) |
|---|---|---|---|---|---|---|---|---|---|
| True noise level (benchmark) | 34.5386 ± 0.0215 | 31.0408 ± 0.0364 | 27.8572 ± 0.0451 | 32.9813 ± 0.0207 | 29.4181 ± 0.0325 | 26.4669 ± 0.0342 | 32.0914 ± 0.0179 | 28.2578 ± 0.0262 | 25.2087 ± 0.0264 |
| Tai Yang [14] | 34.5259 ± 0.0236 | 31.0271 ± 0.0367 | 27.8536 ± 0.0456 | 32.9708 ± 0.0192 | 29.4011 ± 0.0338 | 26.4630 ± 0.0350 | 32.0358 ± 0.0231 | 28.1978 ± 0.0313 | 25.1916 ± 0.0292 |
| Donoho [18] | 34.4205 ± 0.0254 | 30.9860 ± 0.0360 | 27.8447 ± 0.0461 | 32.7986 ± 0.0220 | 29.3387 ± 0.0342 | 26.4538 ± 0.0330 | 31.6144 ± 0.0296 | 28.0578 ± 0.0303 | 25.1678 ± 0.0287 |
| J. Immerkar [10] | 34.4438 ± 0.0232 | 30.9898 ± 0.0369 | 27.8449 ± 0.0455 | 32.8660 ± 0.0177 | 29.3583 ± 0.0333 | 26.4505 ± 0.0337 | 31.6606 ± 0.0251 | 28.0919 ± 0.0273 | 25.1670 ± 0.0274 |
| S. I. Olsen [8] | 34.5307 ± 0.0216 | 31.0705 ± 0.0365 | 27.8540 ± 0.0437 | 32.9744 ± 0.0207 | 29.4382 ± 0.0330 | 26.4795 ± 0.0355 | 31.9148 ± 0.0320 | 28.2026 ± 0.0337 | 25.2394 ± 0.0262 |
| S. Pyatykh [28] | 34.5170 ± 0.0232 | 31.0326 ± 0.0390 | 27.8602 ± 0.0452 | 32.9632 ± 0.0212 | 29.4093 ± 0.0328 | 26.4700 ± 0.0337 | 31.9727 ± 0.0265 | 28.2078 ± 0.0288 | 25.2043 ± 0.0282 |
| Our proposed | 34.5359 ± 0.0237 | 31.0322 ± 0.0312 | 27.8526 ± 0.0459 | 32.9826 ± 0.0264 | 29.4206 ± 0.0264 | 26.4661 ± 0.0302 | 32.0962 ± 0.0214 | 28.2592 ± 0.0230 | 25.1962 ± 0.0259 |

| BM3D [43] + Predictive Model | Cactus (σ = 10) | Cactus (σ = 20) | Cactus (σ = 40) | Desert (σ = 10) | Desert (σ = 20) | Desert (σ = 40) | Koala (σ = 10) | Koala (σ = 20) | Koala (σ = 40) |
|---|---|---|---|---|---|---|---|---|---|
| True noise level (benchmark) | 32.1121 ± 0.0208 | 28.4538 ± 0.0263 | 25.6292 ± 0.0291 | 32.9813 ± 0.0207 | 29.4181 ± 0.0325 | 26.4669 ± 0.0342 | 33.9070 ± 0.0185 | 30.4351 ± 0.0290 | 27.5316 ± 0.0370 |
| Tai Yang [14] | 32.0414 ± 0.0241 | 28.4121 ± 0.0349 | 25.6221 ± 0.0289 | 32.9708 ± 0.0192 | 29.4011 ± 0.0338 | 26.4630 ± 0.0350 | 33.9049 ± 0.0180 | 30.4268 ± 0.0284 | 27.5322 ± 0.0366 |
| Donoho [18] | 31.6267 ± 0.0259 | 28.3106 ± 0.0276 | 25.6093 ± 0.0298 | 32.7986 ± 0.0220 | 29.3387 ± 0.0342 | 26.4538 ± 0.0330 | 33.8378 ± 0.0219 | 30.4063 ± 0.0284 | 27.5333 ± 0.0370 |
| J. Immerkar [10] | 31.7469 ± 0.0227 | 28.3446 ± 0.0278 | 25.6079 ± 0.0289 | 32.8660 ± 0.0177 | 29.3583 ± 0.0333 | 26.4505 ± 0.0337 | 33.8755 ± 0.0193 | 30.4110 ± 0.0286 | 27.5321 ± 0.0369 |
| S. I. Olsen [8] | 32.0035 ± 0.0237 | 28.4579 ± 0.0270 | 25.6467 ± 0.0296 | 32.9744 ± 0.0207 | 29.4382 ± 0.0330 | 26.4795 ± 0.0355 | 33.9043 ± 0.0197 | 30.4461 ± 0.0294 | 27.4901 ± 0.0372 |
| S. Pyatykh [28] | 32.0124 ± 0.0284 | 28.4252 ± 0.0314 | 25.6301 ± 0.0289 | 32.9632 ± 0.0212 | 29.4093 ± 0.0328 | 26.4700 ± 0.0337 | 33.9063 ± 0.0183 | 30.4358 ± 0.0288 | 27.5290 ± 0.0370 |
| Our proposed | 32.1210 ± 0.0190 | 28.4463 ± 0.0228 | 25.6193 ± 0.0342 | 32.9826 ± 0.0264 | 29.4206 ± 0.0264 | 26.4661 ± 0.0302 | 33.9071 ± 0.0228 | 30.4244 ± 0.0316 | 27.5426 ± 0.0341 |
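The PSNR scores in Table 2 follow the standard definition for 8-bit images, 10·log10(255²/MSE). For AGWN with σ = 10 the MSE before denoising is close to σ² = 100, giving roughly 28.1 dB; a self-contained check on synthetic data:

```python
import numpy as np

# PSNR for 8-bit images: 10 * log10(255^2 / MSE). A denoised image closer to
# the clean reference yields a higher score, as in Table 2.
def psnr(reference, test, peak=255.0):
    mse = np.mean((reference.astype(float) - test.astype(float)) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)

rng = np.random.default_rng(1)
clean = rng.integers(0, 256, size=(64, 64)).astype(float)
noisy = clean + rng.normal(0.0, 10.0, size=clean.shape)  # AGWN, sigma = 10
print(round(psnr(clean, noisy), 1))  # about 28.1 dB
```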
Table 3. Average SSIM scores and their standard deviations for the images in Figure 7; each setting was run 100 times. The bold font denotes the best results.
| BM3D [43] + Predictive Model | Church (σ = 10) | Church (σ = 20) | Church (σ = 40) | Moorish Idol (σ = 10) | Moorish Idol (σ = 20) | Moorish Idol (σ = 40) | Stable (σ = 10) | Stable (σ = 20) | Stable (σ = 40) |
|---|---|---|---|---|---|---|---|---|---|
| True noise level (benchmark) | 0.9162 ± 0.0004 | 0.8542 ± 0.0012 | 0.7581 ± 0.0019 | 0.9125 ± 0.0005 | 0.8251 ± 0.0015 | 0.6983 ± 0.0032 | 0.9188 ± 0.0005 | 0.8175 ± 0.0013 | 0.6862 ± 0.0020 |
| Tai Yang [14] | 0.9156 ± 0.0005 | 0.8539 ± 0.0012 | 0.7583 ± 0.0019 | 0.9112 ± 0.0006 | 0.8241 ± 0.0016 | 0.6981 ± 0.0031 | 0.9150 ± 0.0008 | 0.8132 ± 0.0016 | 0.6852 ± 0.0022 |
| Donoho [18] | 0.9123 ± 0.0005 | 0.8527 ± 0.0012 | 0.7586 ± 0.0019 | 0.9041 ± 0.0006 | 0.8210 ± 0.0016 | 0.6976 ± 0.0032 | 0.8998 ± 0.0009 | 0.8043 ± 0.0015 | 0.6837 ± 0.0021 |
| J. Immerkar [10] | 0.9129 ± 0.0004 | 0.8528 ± 0.0012 | 0.7589 ± 0.0019 | 0.9064 ± 0.0005 | 0.8219 ± 0.0015 | 0.6975 ± 0.0032 | 0.9012 ± 0.0008 | 0.8064 ± 0.0014 | 0.6837 ± 0.0021 |
| S. I. Olsen [8] | 0.9158 ± 0.0005 | 0.8548 ± 0.0012 | 0.7511 ± 0.0025 | 0.9115 ± 0.0006 | 0.8264 ± 0.0016 | 0.6984 ± 0.0031 | 0.9122 ± 0.0011 | 0.8135 ± 0.0018 | 0.6879 ± 0.0021 |
| S. Pyatykh [28] | 0.9152 ± 0.0005 | 0.8540 ± 0.0012 | 0.7517 ± 0.0020 | 0.9107 ± 0.0006 | 0.8246 ± 0.0016 | 0.6985 ± 0.0031 | 0.9122 ± 0.0009 | 0.8139 ± 0.0015 | 0.6859 ± 0.0021 |
| Our proposed | 0.9159 ± 0.0005 | 0.8540 ± 0.0009 | 0.7580 ± 0.0018 | 0.9128 ± 0.0006 | 0.8251 ± 0.0014 | 0.6983 ± 0.0032 | 0.9197 ± 0.0005 | 0.8172 ± 0.0013 | 0.6856 ± 0.0017 |

| BM3D [43] + Predictive Model | Cactus (σ = 10) | Cactus (σ = 20) | Cactus (σ = 40) | Desert (σ = 10) | Desert (σ = 20) | Desert (σ = 40) | Koala (σ = 10) | Koala (σ = 20) | Koala (σ = 40) |
|---|---|---|---|---|---|---|---|---|---|
| True noise level (benchmark) | 0.9049 ± 0.0005 | 0.8008 ± 0.0014 | 0.6805 ± 0.0019 | 0.8786 ± 0.0007 | 0.8029 ± 0.0010 | 0.7222 ± 0.0015 | 0.9117 ± 0.0005 | 0.8166 ± 0.0013 | 0.6957 ± 0.0027 |
| Tai Yang [14] | 0.9010 ± 0.0008 | 0.7975 ± 0.0019 | 0.6799 ± 0.0020 | 0.8750 ± 0.0012 | 0.8020 ± 0.0011 | 0.7226 ± 0.0016 | 0.9110 ± 0.0006 | 0.8158 ± 0.0013 | 0.6956 ± 0.0027 |
| Donoho [18] | 0.8853 ± 0.0008 | 0.7905 ± 0.0014 | 0.6789 ± 0.0020 | 0.8687 ± 0.0009 | 0.8006 ± 0.0011 | 0.7228 ± 0.0015 | 0.9074 ± 0.0007 | 0.8141 ± 0.0013 | 0.6955 ± 0.0027 |
| J. Immerkar [10] | 0.8895 ± 0.0007 | 0.7927 ± 0.0015 | 0.6788 ± 0.0020 | 0.8696 ± 0.0008 | 0.8005 ± 0.0010 | 0.7233 ± 0.0014 | 0.9091 ± 0.0006 | 0.8144 ± 0.0013 | 0.6951 ± 0.0027 |
| S. I. Olsen [8] | 0.8994 ± 0.0007 | 0.8011 ± 0.0015 | 0.6819 ± 0.0020 | 0.8766 ± 0.0010 | 0.8046 ± 0.0009 | 0.7141 ± 0.0019 | 0.9109 ± 0.0007 | 0.8184 ± 0.0013 | 0.6948 ± 0.0026 |
| S. Pyatykh [28] | 0.8997 ± 0.0009 | 0.7985 ± 0.0019 | 0.6805 ± 0.0020 | 0.8750 ± 0.0011 | 0.8025 ± 0.0011 | 0.7215 ± 0.0016 | 0.9111 ± 0.0006 | 0.8167 ± 0.0013 | 0.6958 ± 0.0027 |
| Our proposed | 0.9056 ± 0.0006 | 0.8000 ± 0.0014 | 0.6800 ± 0.0021 | 0.8776 ± 0.0009 | 0.8022 ± 0.0011 | 0.7219 ± 0.0017 | 0.9115 ± 0.0005 | 0.8150 ± 0.0017 | 0.6958 ± 0.0024 |
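SSIM compares luminance, contrast, and structure between two images, reaching 1 only for identical inputs. The sketch below uses a simplified single-window (global) variant of the index; the scores in Table 3 come from the usual locally windowed SSIM, so values from this sketch are only indicative.

```python
import numpy as np

# Simplified single-window (global) SSIM with the standard stabilizing
# constants C1 = (0.01*peak)^2 and C2 = (0.03*peak)^2.
def global_ssim(x, y, peak=255.0):
    c1, c2 = (0.01 * peak) ** 2, (0.03 * peak) ** 2
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2))

rng = np.random.default_rng(2)
clean = rng.integers(0, 256, size=(64, 64)).astype(float)
print(round(global_ssim(clean, clean), 4))  # identical images give 1.0
noisy = clean + rng.normal(0.0, 25.0, size=clean.shape)
print(global_ssim(clean, noisy) < 1.0)  # added noise lowers the score
```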
Wang, Z.; An, Q.; Zhu, Z.; Fang, H.; Huang, Z. Blind Additive Gaussian White Noise Level Estimation from a Single Image by Employing Chi-Square Distribution. Entropy 2022, 24, 1518. https://doi.org/10.3390/e24111518
