Article

Considering Image Information and Self-Similarity: A Compositional Denoising Network

Jiahong Zhang, Yonggui Zhu, Wenshu Yu and Jingning Ma
1 The State Key Laboratory of Media Convergence and Communication, Communication University of China, Beijing 100024, China
2 The School of Data Science and Media Intelligence, Communication University of China, Beijing 100024, China
3 School of Optoelectronic Science and Engineering, University of Electronic Science and Technology of China (UESTC), Chengdu 610054, China
* Author to whom correspondence should be addressed.
Sensors 2023, 23(13), 5915; https://doi.org/10.3390/s23135915
Submission received: 16 May 2023 / Revised: 15 June 2023 / Accepted: 21 June 2023 / Published: 26 June 2023
(This article belongs to the Section Sensing and Imaging)

Abstract

Recently, convolutional neural networks (CNNs) have been widely used in image denoising, and their performance has been enhanced through residual learning. However, previous research mostly focused on optimizing the network architecture of CNNs, ignoring the limitations of the commonly used residual learning. This paper identifies two of these limitations: the neglect of image information and the lack of effective consideration of image self-similarity. To address them, this paper proposes a compositional denoising network (CDN) containing two sub-paths: the image information path (IIP) and the noise estimation path (NEP). IIP is trained via an image-to-image method to extract image information, while NEP exploits image self-similarity from a training perspective. This similarity-based training method constrains NEP to output similar estimated noise distributions for different image patches corrupted by the same kind of noise. Finally, image information and noise distribution information are jointly considered for image denoising. Experimental results indicate that CDN outperforms other CNN-based methods in both synthetic and real-world image denoising, achieving state-of-the-art performance.

1. Introduction

Image denoising is a commonly studied problem in computer vision and has been shown to be important for medical images [1,2], remote sensing images [3], mobile phone images [4], etc. It aims to restore a corrupted image $x$ to the ground-truth clean image $y$, which can be modeled as $y = x - v$, where $v$ is the noise. Both synthetic noisy images and real-world noisy images are studied in this paper.
Recently, convolutional neural networks (CNNs) have been widely adopted for image denoising. Zhang et al. [5] proposed a feed-forward denoising convolutional neural network (DnCNN) with residual learning and batch normalization to remove additive white Gaussian noise (AWGN). Residual learning here means training the network to estimate the noise of the noisy image and then subtracting it to obtain the corresponding clean image. Based on residual learning, CNNs obtained remarkable denoising results, clearly surpassing traditional methods such as BM3D [6] and WNNM [7]. Currently, most work focuses on designing more effective network modules to enhance denoising performance. For instance, deeper networks [8,9,10], wider networks [4,11,12,13,14], and attention mechanisms [15,16,17,18,19] have been studied in depth. Additionally, some methods employed variations of convolution, such as deformed convolution [20], to improve image denoising.
Despite achieving high performance in image denoising, the aforementioned methods have not fully addressed the limitations of residual learning. Firstly, residual learning ignores the image information, because its optimization target can be expressed as the distance between the estimated noise and the ground-truth noise, formulated as:

$$ L(\theta) = L_f\big(x - f(x, \theta),\; y\big) $$

where $L_f$ is an arbitrary loss function, $\theta$ denotes the trainable parameters, and $f(\cdot)$ is a neural network. However, generating high-quality denoised images also depends on the image information. Secondly, residual learning does not fully exploit image self-similarity, which is crucial in image restoration. Existing methods addressed this issue by designing specific function modules, such as non-local mechanisms [17,21,22], which restore a pixel using its neighboring pixels. However, non-local mechanisms are computationally expensive, especially when many neighboring pixels are considered.
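To make this concrete, the following minimal PyTorch sketch implements the residual-learning objective above; the two-layer network and the MSE choice for $L_f$ are illustrative placeholders, not the architecture of any particular method.

```python
import torch
import torch.nn as nn

# Stand-in denoiser f(x, theta): any CNN that predicts a noise map
f = nn.Sequential(
    nn.Conv2d(1, 64, 3, padding=1), nn.ReLU(),
    nn.Conv2d(64, 1, 3, padding=1),
)
loss_fn = nn.MSELoss()  # L_f can be any loss function

x = torch.randn(8, 1, 64, 64)  # noisy batch (dummy data)
y = torch.randn(8, 1, 64, 64)  # corresponding clean batch (dummy data)

estimated_noise = f(x)
loss = loss_fn(x - estimated_noise, y)  # compare denoised image with clean image
loss.backward()
```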
This paper aims to address both problems. For the first, we propose an image-to-image training method that minimizes the Structural Similarity Index Measure (SSIM) loss between denoised images and ground-truth clean images, enabling a network to extract image information effectively. For the second, we exploit image self-similarity from a training perspective. Specifically, for an image corrupted by additive white Gaussian noise (AWGN), the noise distributions in different patches of the image are similar. Thus, we split the noisy image into patches and train the proposed noise estimation path (NEP) to output similar noise estimations across these patches.
Based on the two points, we propose a compositional denoising network (CDN) with an image information path (IIP), a noise estimation path (NEP), and an integration denoising module (IDM). The IIP is optimized using the image-to-image training method to extract image information, while the NEP is trained using image self-similarity to estimate the noise distributions. IDM receives the outputs of IIP and NEP, generates the final estimated noise, and outputs a denoised image via residual subtraction. The main contributions of this paper are as follows:
(1) We identify two limitations of the commonly used residual learning in image denoising, which have been largely ignored in previous research. Our work may inspire further exploration of training methods in image denoising.
(2) We propose to leverage image information and image self-similarity to address the limitations of residual learning. The proposed IIP is optimized with image-to-image training to extract image information, while NEP utilizes similarity-based training to estimate image noise. Our ablation experiments demonstrate the effectiveness of these methods.
(3) Our proposed CDN, built upon IIP and NEP, achieves superior results in image denoising on both synthetic and real-world datasets.

2. Related Work

2.1. Residual Learning for Image Denoising

Residual learning was proposed in ResNet [23] to solve the performance degradation problem that arises with increasing network depth. With this learning strategy, the residual network learns a residual mapping for a few stacked layers. Before ResNet, learning a residual mapping had already been adopted in some low-level vision tasks [24,25]. Zhang et al. [5] extended this concept to image denoising, using a single residual unit to predict the residual image instead of many stacked units. Nowadays, residual learning is widely used in most deep denoising networks. However, residual learning alone may not be sufficient to obtain satisfactory denoising results, since acquiring image information is also important. Furthermore, existing residual learning methods do not consider image self-similarity.

2.2. Deep Networks for Image Denoising

Over the years, many methods have been proposed for image denoising, including both traditional and deep-learning-based approaches. In this paper, we focus on the deep-learning-based methods. Zhang et al. [5] proposed a deep convolutional neural network (CNN) that goes beyond traditional Gaussian denoisers by utilizing residual learning to learn a residual mapping between noisy and clean images. Ren et al. [9] further extended the use of residual blocks in image denoising through their DN-ResNet, which incorporates dense connections and residual learning. Additionally, Zhang et al. [10] introduced an effective residual block that improves image denoising performance. Tian et al. [12], on the other hand, introduced batch renormalization to deep CNNs for image denoising, which effectively reduces the impact of different batch sizes on training. This study shows the effectiveness of renormalization compared to previous normalization techniques. These approaches demonstrate the efficacy of increasing network depth in addressing image denoising.
However, increased depth makes models prone to vanishing or exploding gradients. Hierarchical networks with wide structures were proposed to alleviate this problem. Tian et al. [12] utilized a two-path network to increase the width of the network and thus obtain more features. They also proposed a dual denoising network (DudeNet) with two paths and assigned them different functions [11]; specifically, the top sub-path of DudeNet uses a sparse mechanism to extract global and local features. A non-local hierarchical network (NHNet) [17] used two sub-paths to process different resolutions of the noisy image; for the high-resolution path, it employed a novel upsampling method with a non-local mechanism to obtain effective features. Some U-Net-based networks adopt a three-path structure to improve denoising performance. DHDN [4] replaced the convolution blocks in the original U-Net [26] with dense blocks and obtained better denoising results. MCU-Net [14] added an extra branch of atrous spatial pyramid pooling (ASPP) based on residual dense blocks. The sub-paths of these models extract image features at different resolutions, which are fused at the end of the network for denoising. From a frequency domain perspective, some methods [13,27] have employed the multi-level wavelet transform in image denoising and achieved high performance.
This paper utilizes a hierarchical structure to design the network, with clearly defined sub-path functions: IIP extracts the image information, and NEP estimates the noise distribution.

3. The Proposed Method

3.1. Network Architecture

The proposed CDN is shown in Figure 1; it consists of three main modules: IIP, NEP, and IDM. Here, C denotes a convolution layer, BN denotes batch normalization [28], PR denotes the parametric rectified linear unit [29], and R denotes the rectified linear unit [30]. All convolution layers in CDN use a $3 \times 3$ kernel with stride 1 and padding 1. During training, an input noisy image $x$ is divided into four equal patches, $x_1$, $x_2$, $x_3$, and $x_4$. This splitting operation exploits the similarity of patches to train NEP. We empirically divide the image into four patches for the following reasons: (1) fewer patches do not take advantage of the similarity; and (2) more patches leave fewer noise samples per patch and thus cannot sufficiently estimate the noise distribution. IIP is trained in an image-to-image manner. Without loss of generality, we choose the first patch $x_1$ and use $y_1$ to denote its ground-truth clean patch. Finally, the denoised $x_1$ output by CDN is denoted $\tilde{x}_1$.
When testing, the input of CDN is a complete noisy image, and the output is the denoised image.
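For illustration, the following is a minimal sketch of the four-patch split used during training, assuming NCHW tensors with even height and width (the function and variable names are ours):

```python
import torch

def split_into_quadrants(x: torch.Tensor):
    """Split a noisy image batch into four equal spatial patches."""
    h, w = x.shape[-2] // 2, x.shape[-1] // 2
    x1 = x[..., :h, :w]   # top-left
    x2 = x[..., :h, w:]   # top-right
    x3 = x[..., h:, :w]   # bottom-left
    x4 = x[..., h:, w:]   # bottom-right
    return x1, x2, x3, x4

patches = split_into_quadrants(torch.randn(8, 3, 128, 128))  # four 64x64 patches
```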

3.1.1. Image Information Path (IIP)

IIP consists of one convolution layer and seven DBlocks, as shown in Figure 1. It is designed to extract the image information. During training, the input of IIP is $x_1$. IIP extracts the image features of $x_1$, from which the denoised image $x_1^c$ and the noise estimation $x_1^n$ are obtained:
$$ x_1^c = \mathrm{Conv}(\mathrm{IIP}(x_1)), \qquad x_1^n = \mathrm{Conv}(x_1) - \mathrm{IIP}(x_1) $$
where Conv is a convolution layer that changes the number of feature channels. $x_1^c$ is used to constrain IIP to extract the image information via the image-to-image training method; therefore, $y_1$ is the optimization target of $x_1^c$, with SSIM chosen as the loss function. $x_1^n$ is further processed in IDM. The SSIM loss is as follows:
$$ L_{SSIM} = \frac{2\mu_{x_1^c}\mu_{y_1} + C_1}{\mu_{x_1^c}^2 + \mu_{y_1}^2 + C_1} \times \frac{2\sigma_{x_1^c y_1} + C_2}{\sigma_{x_1^c}^2 + \sigma_{y_1}^2 + C_2} $$
where $\mu$, $\sigma$, and $\sigma_{x_1^c y_1}$ denote the mean, standard deviation, and covariance, respectively. $C_1$ and $C_2$ are image-dependent constants that stabilize the division against small denominators.
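As a concrete illustration, the sketch below implements the SSIM loss above using global image statistics; we assume inputs scaled to [0, 1] and return 1 − SSIM so that minimizing the loss maximizes similarity. Practical SSIM implementations compute these statistics over local windows, which this sketch omits for brevity.

```python
import torch

def ssim_loss(x1c: torch.Tensor, y1: torch.Tensor,
              c1: float = 0.01 ** 2, c2: float = 0.03 ** 2) -> torch.Tensor:
    """1 - SSIM between denoised patch x1c and clean patch y1 (global statistics)."""
    mu_x, mu_y = x1c.mean(), y1.mean()
    var_x, var_y = x1c.var(), y1.var()
    cov_xy = ((x1c - mu_x) * (y1 - mu_y)).mean()
    ssim = ((2 * mu_x * mu_y + c1) * (2 * cov_xy + c2)) / \
           ((mu_x ** 2 + mu_y ** 2 + c1) * (var_x + var_y + c2))
    return 1.0 - ssim  # minimizing this maximizes structural similarity
```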
For testing, IIP receives a complete image and outputs its noise based on the image information.

3.1.2. Noise Estimation Path (NEP)

The architecture of NEP is similar to that of IIP. During training, $x_1$, $x_2$, $x_3$, and $x_4$ are fed into NEP in turn, and their estimated noise maps, $n_1$, $n_2$, $n_3$, and $n_4$, are output. We argue that $n_1$, $n_2$, $n_3$, and $n_4$ should have a similar distribution when the patches are corrupted by the same kind of noise. Therefore, the Kullback–Leibler divergence (KLD) is used to evaluate the distance between these noise distributions:
$$ D_{KL}(P \,\|\, Q) = \sum_{x} P(x) \log \frac{P(x)}{Q(x)} $$
where P and Q are probability distributions. The sum of these distances forms the loss function:
$$ L_{KLD} = \sum_{i=1}^{4} \sum_{j=1,\, j \neq i}^{4} D_{KL}\big(n_i \,\|\, n_j\big) $$
By minimizing $L_{KLD}$, this similarity-based training method addresses the self-similarity limitation of residual learning.
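A minimal sketch of the pairwise KLD loss above follows. The text does not specify how the noise maps are normalized into probability distributions, so the softmax over flattened maps used here is our assumption.

```python
import torch
import torch.nn.functional as F

def kld_similarity_loss(noise_estimates) -> torch.Tensor:
    """Sum of pairwise KL divergences among patch noise estimates n1..n4."""
    probs = [F.softmax(n.flatten(1), dim=1) for n in noise_estimates]
    loss = torch.zeros(())
    for i, p in enumerate(probs):
        for j, q in enumerate(probs):
            if i != j:
                # F.kl_div expects log-probabilities as input and probabilities
                # as target, and computes KL(target || input) = D_KL(p || q)
                loss = loss + F.kl_div(q.log(), p, reduction="batchmean")
    return loss
```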
For testing, NEP receives a complete noisy image and outputs its noise distribution.

3.1.3. Integration Denoising Module (IDM)

IDM integrates the outputs of IIP and NEP. It is a U-Net-based network, shown in Figure 2. DBlock is the basic feature extraction block of IDM, and PixelShuffle is the upsampling method based on efficient sub-pixel convolution [31]. IDM outputs the final estimated noise and then obtains the denoised image $\tilde{x}_1$ via residual subtraction. The $L_1$ loss between $\tilde{x}_1$ and the ground-truth clean patch $y_1$ is used:
$$ L_1 = \frac{1}{N} \sum \big| \tilde{x}_1 - y_1 \big| $$

where $N$ is the number of pixels in the patch.

3.2. Training Loss

The loss function used in this paper consists of three equally important components. Firstly, the image information loss L S S I M is utilized to train IIP to extract image information. Secondly, the noise estimation loss L K L D is employed to train NEP to estimate image noise accurately. Finally, the overall residual loss L 1 is used to ensure that the whole network outputs the corresponding clean image of an input noisy image. By combining these three components, CDN can be effectively trained to remove image noise. The training loss can be formulated as follows:
$$ L = L_{SSIM} + L_{KLD} + L_1 $$
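Putting the pieces together, here is a minimal sketch of one loss computation under the combined training loss above, with dummy tensors standing in for the outputs of IIP, NEP, and IDM and reusing the ssim_loss and kld_similarity_loss sketches from earlier:

```python
import torch

# Dummy stand-ins for the network outputs (illustrative shapes only)
x1c = torch.rand(8, 3, 64, 64, requires_grad=True)       # denoised patch from IIP
y1 = torch.rand(8, 3, 64, 64)                            # ground-truth clean patch
noise_estimates = [torch.randn(8, 3, 64, 64, requires_grad=True) for _ in range(4)]
x1_tilde = torch.rand(8, 3, 64, 64, requires_grad=True)  # final denoised patch from IDM

l_ssim = ssim_loss(x1c, y1)                   # image-information loss (IIP)
l_kld = kld_similarity_loss(noise_estimates)  # self-similarity loss (NEP)
l_1 = torch.mean(torch.abs(x1_tilde - y1))    # overall residual L1 loss
total = l_ssim + l_kld + l_1                  # equally weighted sum
total.backward()
```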

4. Experiments

4.1. Datasets

4.1.1. Synthetic Noise Datasets

DIV2K [32] is commonly used in image processing; it contains 800 training images, 100 validation images, and 100 test images. We used the DIV2K training set to train CDN. For testing, we used the gray-scale image datasets Set12 [5] and BSD68 [33] and the color image datasets Set5 [34] and Kodak24 [35]. Figure 3 shows the images of Set12, which include C.man, House, Peppers, etc. The images in these datasets are all clean; their synthetic noisy counterparts were generated by adding AWGN following the generation algorithm of [5], in which the noise level is determined by the standard deviation $\sigma$. Three noise levels, $\sigma = 15$, $\sigma = 25$, and $\sigma = 50$, were chosen to train and test CDN.
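For reference, AWGN synthesis at a given noise level $\sigma$ can be sketched as follows, assuming 8-bit images with pixel values in [0, 255]; the clipping step is a common convention rather than something specified in [5].

```python
import numpy as np

def add_awgn(clean: np.ndarray, sigma: float, seed: int = 0) -> np.ndarray:
    """Add additive white Gaussian noise with standard deviation sigma."""
    rng = np.random.default_rng(seed)
    noisy = clean.astype(np.float32) + rng.normal(0.0, sigma, size=clean.shape)
    return np.clip(noisy, 0, 255).astype(np.uint8)
```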

4.1.2. Real-World Noise Datasets

Real-world noisy images are captured directly in natural environments. Here, we used the training set of the Smartphone Image Denoising Dataset (SIDD) sRGB track [36] to train CDN. It contains 160 scene instances captured by five smartphone cameras under different lighting conditions and camera settings. Each scene instance provides two pairs of high-resolution images, each pair consisting of one noisy image and its corresponding clean image; in total, 320 image pairs were used for training. For testing, we used the SIDD validation set and the Darmstadt Noise Dataset (DND) [37]. DND does not provide any training data; it offers 50 pairs of images captured by four different consumer cameras for testing. We obtained the PSNR and SSIM results by submitting the denoised images to the official DND website.

4.2. Training Setting

CDN was implemented in PyTorch 1.5.1 with Python 3.5 and CUDA 9.2. Experiments were run on NVIDIA Tesla P100 GPUs. We used the Adam [38] optimizer with an initial learning rate of 0.0002 and a weight decay of 0.0001 to minimize the loss function; the learning rate decreases as training epochs increase. The mini-batch size was set to 64. Data augmentation was adopted during training: images were randomly cropped into 128 × 128 patches and flipped horizontally and vertically.
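This setup corresponds roughly to the following PyTorch sketch. The placeholder model and the StepLR schedule are our assumptions, since the paper only states that the learning rate decreases as training proceeds.

```python
import torch
import torchvision.transforms as T

model = torch.nn.Conv2d(3, 3, 3, padding=1)  # placeholder for CDN
optimizer = torch.optim.Adam(model.parameters(), lr=2e-4, weight_decay=1e-4)
# The paper only says the learning rate decreases over epochs; StepLR is one option
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=30, gamma=0.5)

augment = T.Compose([
    T.RandomCrop(128),          # random 128x128 patches
    T.RandomHorizontalFlip(),
    T.RandomVerticalFlip(),
])
```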

4.3. Experimental Results

We evaluated the denoising performance of CDN on synthetic and real-world datasets and compared it with some popular methods.

4.3.1. Evaluation Metrics

The peak signal-to-noise ratio (PSNR) was used as an evaluation metric; it is one of the most common indicators for image processing methods. It measures the level of distortion or error between the original image and the reconstructed image by comparing their pixel values. PSNR is calculated based on the mean squared error (MSE) between the two images:

$$ MSE = \frac{1}{N} \sum (I - R)^2 $$

where $N$ is the total number of pixels in the image, and $I$ and $R$ represent the pixel values of the original image and the reconstructed image, respectively. PSNR is then computed from the ratio of the maximum possible pixel value (usually 255 for 8-bit images) to the square root of the MSE:

$$ PSNR = 20 \cdot \log_{10}(MAX) - 10 \cdot \log_{10}(MSE) $$

where $MAX$ is the maximum possible pixel value of the image. A higher PSNR value indicates a lower level of distortion and better image quality.
SSIM is also a widely used metric for measuring the similarity between two images. It is designed to evaluate perceived image quality by taking structural information into account. SSIM compares local patterns of pixel intensities in the reference and distorted images and computes a similarity score ranging from 0 to 1; its formulation is described in Section 3.1.1. As with PSNR, a higher SSIM value indicates a lower level of distortion and higher image quality.
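Both metrics can be computed as in the sketch below, assuming 8-bit images. The psnr function follows the formulas above; the SSIM call uses scikit-image, where the channel_axis argument applies to recent versions of the library.

```python
import numpy as np
from skimage.metrics import structural_similarity

def psnr(original: np.ndarray, reconstructed: np.ndarray, max_val: float = 255.0) -> float:
    """PSNR = 20*log10(MAX) - 10*log10(MSE), per the formulas above."""
    mse = np.mean((original.astype(np.float64) - reconstructed.astype(np.float64)) ** 2)
    return 20 * np.log10(max_val) - 10 * np.log10(mse)

# SSIM via scikit-image; channel_axis=-1 for HWC color images (recent versions)
# score = structural_similarity(original, reconstructed, channel_axis=-1, data_range=255)
```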
Consistent with most previous studies, we present the PSNR results for synthetic image denoising and both PSNR and SSIM results for real-world image denoising.

4.3.2. AWGN Denoising

Gray-scale images: We first report the training PSNR curve in Figure 4, demonstrating that CDN was well trained and obtains good denoising results on new data from Set12. Table 1 and Table 2 show the denoising results of different methods on synthetic gray-scale noisy images from Set12 and BSD68, respectively. CDN outperforms the other methods at all noise levels on Set12 and at σ = 15 and σ = 25 on BSD68. Additionally, CDN is a hierarchical network, and compared with other hierarchical networks (U-Net [26], DIDN [39], BRDNet [12], and NHNet [17]), CDN shows superior performance.
CDN exhibits its most substantial improvement when denoising the Barbara image in Set12, with gains over the second-best method of 0.34 dB at σ = 15, 0.53 dB at σ = 25, and 0.90 dB at σ = 50. As illustrated in Figure 3, Barbara is a highly textured image, which indicates that CDN performs well in preserving image details. Moreover, the superior PSNR results of CDN on other texture-rich images, such as Monarch, Man, and Couple, further support this point. As shown in Figure 5, CDN achieves the best visual quality when denoising the image Monarch. Overall, these results suggest that CDN is a powerful and effective method for image denoising with the ability to preserve image details and texture.
Color-scale images: We also tested CDN on color-scale noisy images. As shown in Table 3, CDN achieves the highest PSNR results on the Kodak24 dataset. The visual comparisons of CDN with other methods on Set5 and Kodak24 are shown in Figure 6 and Figure 7, respectively. The results indicate that CDN outperforms the other methods and can recover cleaner images.

4.3.3. Real-World Image Denoising

While the AWGN task provides some insight into the effectiveness of a denoising method, its limitations are clear: real-world noise is more complicated and unpredictable, so evaluating denoising methods on real-world noisy images is more meaningful. To assess CDN's performance in real-world image denoising, we used the SIDD validation set and DND. Table 4 lists the denoising results of different methods; CDN achieves the best PSNR and SSIM results on the SIDD validation set and competitive performance on DND. Figure 8 shows some denoised images produced by CDN on the SIDD dataset, indicating that CDN successfully removes noise. These results demonstrate that CDN is also useful for denoising real-world images and has practical application value.

4.4. Ablation Experiments

The effectiveness of CDN in denoising relies on two key components: IIP, which extracts image information through image-to-image training, and NEP, which estimates noise through image self-similarity training. In this section, we study the contributions of these components in detail and demonstrate their effectiveness in improving denoising results.

4.4.1. Role of IIP

IIP in CDN is crucial for extracting image information, and it is necessary to determine whether this information improves the denoising performance. We first conducted an ablation experiment by removing IIP from a pretrained CDN model, a variant named CDN-IIP. This was achieved by replacing the output of IIP with a zero matrix of the same size. Figure 9 shows the denoising results of CDN-IIP. Compared to CDN, the denoised images produced by CDN-IIP are significantly blurred and lack details, demonstrating the importance of image information in restoring image details. We then further evaluated IIP by removing it and retraining CDN, denoted CDN-IIP(R) in Table 5. The results show that removing IIP decreases both PSNR and SSIM, further confirming the effectiveness of IIP.
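The paper does not describe how the zero replacement was implemented; one straightforward possibility is a PyTorch forward hook, sketched below with a hypothetical iip attribute on the pretrained model.

```python
import torch

def zero_output_hook(module, inputs, output):
    """Replace a sub-path's output with a zero matrix of the same size."""
    return torch.zeros_like(output)

# Hypothetical usage on a pretrained model whose IIP sub-path is `model.iip`:
# handle = model.iip.register_forward_hook(zero_output_hook)
# ...evaluate the model with IIP effectively removed (CDN-IIP)...
# handle.remove()
```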
IIP uses SSIM loss as the loss function because it provides a comprehensive measure of image similarity based on brightness, contrast, and structure. We compared the denoising results using different loss functions, including L1 loss and mean square error (MSE) loss, and found that the SSIM loss provides better denoising performance, as shown in Table 6.

4.4.2. Role of NEP

NEP provides the noise estimation of a noisy image. CDN-NEP in Figure 9 denotes CDN with NEP removed. Although the denoised image of CDN-NEP retains sufficient image information, the noise is clearly not removed well. Therefore, the noise distribution estimated by NEP is essential for removing noise. As in the study of IIP, we also report the PSNR and SSIM results of the retrained model, CDN-NEP(R), on Set12 in Table 5; they are significantly lower than those of CDN. This demonstrates that using the estimated noise distribution improves denoising performance.

4.4.3. Role of Training Methods

CDN addresses the limitations of traditional residual learning by using the image-to-image training method for IIP and the similarity-based training method for NEP. Here, we study the effect of these training methods on denoising performance. CDN-SSIM in Table 5 denotes CDN trained without the image-to-image training method, implemented by removing the SSIM loss during training. Similarly, CDN-KLD denotes CDN trained without the similarity-based training method. CDN-SSIM-KLD denotes CDN trained without either method, which can be considered ordinary residual learning. The results in Table 5 show that CDN performs comprehensively better than the other variants. In particular, CDN significantly outperforms CDN-SSIM-KLD, indicating that considering image information and self-similarity improves residual learning.
The input image is divided into patches during training, and the number of patches also affects denoising performance. Table 7 lists the PSNR results for different numbers of patches, showing that too many patches degrade denoising performance. The reason is that many patches mean small patch sizes, so an individual patch cannot contain enough image information. For a concise description of the proposed model, four patches were used in this paper.

5. Discussion

Deep-learning-based image denoising methods are increasingly popular among researchers due to their ease of implementation and fast processing speed. While most research focuses on improving network architecture, potential limitations of the commonly used residual learning method are often neglected. This paper points out two limitations of residual learning and proposes a novel denoising network, CDN, to solve them. We conducted comparison experiments between the proposed method (CDN) and ordinary residual learning (CDN-SSIM-KLD), which demonstrated that our solution significantly improves denoising performance.
IIP and NEP in CDN, together with their training methods, are designed to address the two limitations of residual learning: neglecting image information and neglecting image self-similarity, respectively. Specifically, IIP is trained to extract image information using an image-to-image approach, while NEP estimates image noise by leveraging image self-similarity. To examine their functions, we conducted corresponding experiments. Firstly, we observed that removing either IIP or NEP results in a decrease in PSNR and SSIM, which demonstrates their significance in image denoising. Secondly, we visualized the denoised images of CDN-IIP and CDN-NEP. The results revealed that CDN without IIP successfully removes noise but fails to preserve fine image details, while the opposite holds for CDN without NEP. These results illustrate that IIP and NEP achieve their expected functions and improve residual learning.
However, our work still has some limitations. The same architecture is used for both IIP and NEP, and exploring different architectures could potentially improve the performance of the network. For example, vision-transformer-based image denoising networks have achieved state-of-the-art performance [45,46]; incorporating a vision transformer as the backbone of IIP or NEP may enhance the denoising performance of CDN. In addition, although the SSIM loss shows better results than L1 and MSE as the image information loss, other loss functions might be more effective. These directions will be explored in future studies.

6. Conclusions

This paper introduces a novel denoising network, CDN, which aims to overcome two limitations of residual learning in image denoising. Firstly, residual learning fails to fully consider image information; this is tackled by training IIP in CDN to extract image information through the proposed image-to-image method. Secondly, residual learning does not take image self-similarity into account; to solve this, we propose a similarity-based training method to train NEP in CDN to estimate image noise. Consequently, CDN can successfully remove image noise using the image information extracted by IIP and the noise estimation from NEP. Experimental results show that CDN achieves superior performance in both synthetic and real-world image denoising. Beyond the high performance, we also discuss the potential limitations of our work. While previous studies primarily focused on improving network architecture, we present a novel perspective that improves the training method for enhanced denoising results, which may encourage further exploration of training methods in image denoising research.

Author Contributions

Conceptualization, Y.Z.; methodology, J.Z.; software, J.Z.; validation, J.M.; formal analysis, J.Z.; investigation, J.M.; resources, Y.Z.; data curation, W.Y.; writing—original draft preparation, J.Z.; writing—review and editing, J.Z.; visualization, J.M.; supervision, Y.Z.; project administration, J.Z.; funding acquisition, Y.Z. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the National Natural Science Foundation of China (No. 11571325) and the Fundamental Research Funds for the Central Universities (No. CUC2019 A002).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Datasets used in this paper are open access and are available from the following sources: DIV2K is openly available in "NTIRE 2017 challenge on single image super-resolution: Dataset and study", reference number [32]; Set12 is openly available in "Beyond a Gaussian Denoiser: Residual Learning of Deep CNN for Image Denoising", reference number [5]; BSD68 is openly available in "Fields of Experts: A framework for learning image priors", reference number [33]; Set5 is openly available in "Accurate Image Super-Resolution Using Very Deep Convolutional Networks", reference number [34]; Kodak24 is openly available in "Kodak lossless true color image suite: PhotoCD PCD0992" at http://r0k.us/graphics/kodak (accessed on 14 June 2023), reference number [35]; SIDD is openly available in "A High-Quality Denoising Dataset for Smartphone Cameras", reference number [36]; DND is openly available in "Benchmarking Denoising Algorithms with Real Photographs", reference number [37]. A preprint has previously been published on arXiv by Zhang et al. [47]. Our code will be released at https://github.com/JiaHongZ/CDN (accessed on 14 June 2023).

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Sagheer, S.V.M.; George, S.N. A review on medical image denoising algorithms. Biomed. Signal Process. Control 2020, 61, 102036. [Google Scholar] [CrossRef]
  2. Jifara, W.; Jiang, F.; Rho, S.; Cheng, M.; Liu, S. Medical image denoising using convolutional neural network: A residual learning approach. J. Supercomput. 2019, 75, 704–718. [Google Scholar] [CrossRef]
  3. Liu, P.; Wang, M.; Wang, L.; Han, W. Remote-sensing image denoising with multi-sourced information. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2019, 12, 660–674. [Google Scholar] [CrossRef]
  4. Park, B.; Yu, S.; Jeong, J. Densely Connected Hierarchical Network for Image Denoising. In Proceedings of the 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), Long Beach, CA, USA, 16–17 June 2019; pp. 2104–2113. [Google Scholar] [CrossRef]
  5. Zhang, K.; Zuo, W.; Chen, Y.; Meng, D.; Zhang, L. Beyond a Gaussian Denoiser: Residual Learning of Deep CNN for Image Denoising. IEEE Trans. Image Process. 2017, 26, 3142–3155. [Google Scholar] [CrossRef] [Green Version]
  6. Dabov, K.; Foi, A.; Katkovnik, V.; Egiazarian, K. Image Denoising by Sparse 3-D Transform-Domain Collaborative Filtering. IEEE Trans. Image Process. 2007, 16, 2080–2095. [Google Scholar] [CrossRef]
  7. Gu, S.; Zhang, L.; Zuo, W.; Feng, X. Weighted Nuclear Norm Minimization with Application to Image Denoising. In Proceedings of the 2014 IEEE Conference on Computer Vision and Pattern Recognition, Columbus, OH, USA, 23–28 June 2014; pp. 2862–2869. [Google Scholar] [CrossRef] [Green Version]
  8. Singh, G.; Mittal, A.; Aggarwal, N. ResDNN: Deep residual learning for natural image denoising. IET Image Process. 2020, 14, 2425–2434. [Google Scholar] [CrossRef]
  9. Ren, H.; El-Khamy, M.; Lee, J. DN-ResNet: Efficient Deep Residual Network for Image Denoising. In Computer Vision—ACCV 2018, Proceedings of the 14th Asian Conference on Computer Vision, Perth, Australia, 2–6 December 2018; Revised Selected Papers, Part V; Jawahar, C.V., Li, H., Mori, G., Schindler, K., Eds.; Springer: Berlin, Germany, 2018; Volume 11365, Lecture Notes in Computer Science; pp. 215–230. [Google Scholar] [CrossRef] [Green Version]
  10. Zhang, J.; Zhu, Y.; Li, W.; Fu, W.; Cao, L. DRNet: A deep neural network with multi-layer residual blocks improves image denoising. IEEE Access 2021, 9, 79936–79946. [Google Scholar] [CrossRef]
  11. Tian, C.; Xu, Y.; Zuo, W.; Du, B.; Lin, C.W.; Zhang, D. Designing and training of a dual CNN for image denoising. Knowl.-Based Syst. 2021, 226, 106949. [Google Scholar] [CrossRef]
  12. Tian, C.; Xu, Y.; Zuo, W. Image denoising using deep CNN with batch renormalization. Neural Netw. 2020, 121, 461–473. [Google Scholar] [CrossRef]
  13. Liu, P.; Zhang, H.; Zhang, K.; Lin, L.; Zuo, W. Multi-level Wavelet-CNN for Image Restoration. In Proceedings of the 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), Salt Lake City, UT, USA, 18–22 June 2018; pp. 886–895. [Google Scholar] [CrossRef] [Green Version]
  14. Bao, L.; Yang, Z.; Wang, S.; Bai, D.; Lee, J. Real image denoising based on multi-scale residual dense block and cascaded U-Net with block-connection. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, Seattle, WA, USA, 14–19 June 2020; pp. 448–449. [Google Scholar]
  15. Tian, C.; Xu, Y.; Li, Z.; Zuo, W.; Fei, L.; Liu, H. Attention-guided CNN for image denoising. Neural Netw. 2020, 124, 117–129. [Google Scholar] [CrossRef]
  16. Liu, D.; Wen, B.; Fan, Y.; Loy, C.C.; Huang, T.S. Non-Local Recurrent Network for Image Restoration. In Proceedings of the 32nd International Conference on Neural Information Processing Systems (NIPS’18), Montreal, QC, Canada, 3–8 December 2018; pp. 1680–1689. [Google Scholar]
  17. Zhang, J.; Cao, L.; Wang, T.; Fu, W.; Shen, W. NHNet: A non-local hierarchical network for image denoising. IET Image Process. 2022, 16, 2446–2456. [Google Scholar] [CrossRef]
  18. Ma, R.; Zhang, B.; Zhou, Y.; Li, Z.; Lei, F. PID Controller-Guided Attention Neural Network Learning for Fast and Effective Real Photographs Denoising. IEEE Trans. Neural Netw. Learn. Syst. 2021, 33, 3010–3023. [Google Scholar] [CrossRef]
  19. Anwar, S.; Barnes, N. Real Image Denoising With Feature Attention. In Proceedings of the 2019 IEEE/CVF International Conference on Computer Vision (ICCV), Seoul, Republic of Korea, 27 October–2 November 2019; pp. 3155–3164. [Google Scholar] [CrossRef] [Green Version]
  20. Zhang, Q.; Xiao, J.; Tian, C.; Lin, J.C.-W.; Zhang, S. A robust deformed convolutional neural network (CNN) for image denoising. CAAI Trans. Intell. Technol. 2022, 8, 1–12. [Google Scholar] [CrossRef]
  21. Zhang, C.; Hu, W.; Jin, T.; Mei, Z. Nonlocal image denoising via adaptive tensor nuclear norm minimization. Neural Comput. Appl. 2018, 29, 3–19. [Google Scholar] [CrossRef]
  22. Buades, A.; Coll, B.; Morel, J.M. A non-local algorithm for image denoising. In Proceedings of the 2005 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR’05), San Diego, CA, USA, 20–26 June 2005; Volume 2, pp. 60–65. [Google Scholar] [CrossRef]
  23. He, K.; Zhang, X.; Ren, S.; Sun, J. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016; pp. 770–778. [Google Scholar]
  24. Timofte, R.; De Smet, V.; Van Gool, L. A+: Adjusted anchored neighborhood regression for fast super-resolution. In Proceedings of the Asian Conference on Computer Vision, Singapore, 1–5 November 2014; Springer: Berlin, Germany, 2014; pp. 111–126. [Google Scholar]
  25. Kiku, D.; Monno, Y.; Tanaka, M.; Okutomi, M. Residual interpolation for color image demosaicking. In Proceedings of the 2013 IEEE International Conference on Image Processing, Melbourne, Australia, 15–18 September 2013; pp. 2304–2308. [Google Scholar]
  26. Ronneberger, O.; Fischer, P.; Brox, T. U-net: Convolutional networks for biomedical image segmentation. In Proceedings of the International Conference on Medical Image Computing and Computer-Assisted Intervention, Munich, Germany, 5–9 October 2015; Springer: Berlin, Germany, 2015; pp. 234–241. [Google Scholar]
  27. Tian, C.; Zheng, M.; Zuo, W.; Zhang, B.; Zhang, Y.; Zhang, D. Multi-stage image denoising with the wavelet transform. Pattern Recognit. 2023, 134, 109050. [Google Scholar] [CrossRef]
  28. Ioffe, S.; Szegedy, C. Batch normalization: Accelerating deep network training by reducing internal covariate shift. In Proceedings of the International Conference on Machine Learning, PMLR, Lille, France, 6–11 July 2015; pp. 448–456. [Google Scholar]
  29. He, K.; Zhang, X.; Ren, S.; Sun, J. Delving Deep into Rectifiers: Surpassing Human-Level Performance on ImageNet Classification. In Proceedings of the 2015 IEEE International Conference on Computer Vision (ICCV), Santiago, Chile, 7–13 December 2015; pp. 1026–1034. [Google Scholar] [CrossRef] [Green Version]
  30. Krizhevsky, A.; Sutskever, I.; Hinton, G.E. Imagenet classification with deep convolutional neural networks. Adv. Neural Inf. Process. Syst. 2012, 25, 1097–1105. [Google Scholar] [CrossRef] [Green Version]
  31. Shi, W.; Caballero, J.; Huszár, F.; Totz, J.; Wang, Z. Real-Time Single Image and Video Super-Resolution Using an Efficient Sub-Pixel Convolutional Neural Network. In Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA, 27–30 June 2016. [Google Scholar]
  32. Agustsson, E.; Timofte, R. Ntire 2017 challenge on single image super-resolution: Dataset and study. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, Honolulu, HI, USA, 21–26 July 2017. [Google Scholar]
  33. Roth, S.; Black, M. Fields of Experts: A framework for learning image priors. In Proceedings of the 2005 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR’05), San Diego, CA, USA, 20–25 June 2005; Volume 2, pp. 860–867. [Google Scholar] [CrossRef]
  34. Kim, J.; Lee, J.K.; Lee, K.M. Accurate Image Super-Resolution Using Very Deep Convolutional Networks. In Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA, 27–30 June 2016; pp. 1646–1654. [Google Scholar] [CrossRef] [Green Version]
  35. Franzen, R. Kodak Lossless True Color Image Suite: PhotoCD PCD0992. Available online: http://r0k.us/graphics/kodak (accessed on 14 June 2023).
  36. Abdelhamed, A.; Lin, S.; Brown, M.S. A High-Quality Denoising Dataset for Smartphone Cameras. In Proceedings of the 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–22 June 2018; pp. 1692–1700. [Google Scholar] [CrossRef]
  37. Plötz, T.; Roth, S. Benchmarking Denoising Algorithms with Real Photographs. In Proceedings of the 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Honolulu, HI, USA, 21–26 July 2017; pp. 2750–2759. [Google Scholar] [CrossRef] [Green Version]
  38. Kingma, D.P.; Ba, J. Adam: A method for stochastic optimization. arXiv 2014, arXiv:1412.6980. [Google Scholar]
  39. Yu, S.; Park, B.; Jeong, J. Deep iterative down-up cnn for image denoising. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, Long Beach, CA, USA, 16–17 June 2019. [Google Scholar]
  40. Zhang, K.; Zuo, W.; Zhang, L. FFDNet: Toward a Fast and Flexible Solution for CNN-Based Image Denoising. IEEE Trans. Image Process. 2018, 27, 4608–4622. [Google Scholar] [CrossRef] [Green Version]
  41. Lefkimmiatis, S. Universal Denoising Networks: A Novel CNN Architecture for Image Denoising. In Proceedings of the 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–22 June 2018; pp. 3204–3213. [Google Scholar] [CrossRef] [Green Version]
  42. Zhang, J.; Qu, M.; Wang, Y.; Cao, L. A Multi-Head Convolutional Neural Network With Multi-path Attention improves Image Denoising. arXiv 2022, arXiv:2204.12736. [Google Scholar]
  43. Yue, Z.; Yong, H.; Zhao, Q.; Zhang, L.; Meng, D. Variational denoising network: Toward blind noise modeling and removal. arXiv 2019, arXiv:1908.11314. [Google Scholar]
  44. Guo, S.; Yan, Z.; Zhang, K.; Zuo, W.; Zhang, L. Toward Convolutional Blind Denoising of Real Photographs. In Proceedings of the 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Long Beach, CA, USA, 15–20 June 2019; pp. 1712–1722. [Google Scholar] [CrossRef] [Green Version]
  45. Chen, H.; Wang, Y.; Guo, T.; Xu, C.; Deng, Y.; Liu, Z.; Ma, S.; Xu, C.; Xu, C.; Gao, W. Pre-trained image processing transformer. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Nashville, TN, USA, 20–25 June 2021; pp. 12299–12310. [Google Scholar]
  46. Fan, C.M.; Liu, T.J.; Liu, K.H. SUNet: Swin transformer UNet for image denoising. In Proceedings of the 2022 IEEE International Symposium on Circuits and Systems (ISCAS), Austin, TX, USA, 27 May–1 June 2022; pp. 2333–2337. [Google Scholar]
  47. Zhang, J.; Zhu, Y.; Yu, W.; Ma, J. Considering Image Information and Self-similarity: A Compositional Denoising Network. arXiv 2022, arXiv:2209.06417. [Google Scholar]
Figure 1. The architecture of CDN. It contains three main modules: IIP, NEP, and IDM. (a) The training stage. A noisy image is split into several patches; the training target is to remove the first patch's noise, while the other patches are used in the self-similarity training of NEP. IIP is optimized to extract image information via the image-to-image training method. NEP is trained by considering image self-similarity and estimates the noise distribution. IDM, shown in Figure 2, integrates the outputs of IIP and NEP to obtain the final estimated noise. (b) The testing stage, which removes the noise of a complete image. (c) DBlock, the basic CNN block of CDN.
Figure 2. The integration denoising module (IDM). It uses a U-Net-based architecture; the number of output channels of each DBlock is shown at the bottom. DBlock is shown in Figure 1c.
Figure 3. Images in Set12, which are C.man, House, Peppers, Starfish, Monarch, Airplane, Parrot, Lena, Barbara, Boat, Man, and Couple, in order.
Figure 4. Training PSNR curve for CDN in AWGN denoising. The training dataset is the DIV2K training set and PSNR results are computed on Set12 at noise level σ = 25 .
Figure 5. PSNR results of the image Monarch from Set12 with noise level σ = 50 . (a) Clean image, (b) noisy image/14.71 dB, (c) DnCNN [5]/26.78 dB, (d) BRDNet [12]/26.97 dB, (e) MHCNN [42]/27.12 dB, and (f) CDN/27.21 dB.
Figure 6. Denoising result of Butterfly from Set5 at noise level σ = 50 . (a) Clean image, (b) noisy image, (c) VDN, (d) NHNet, (e) CDN.
Figure 7. Denoising result on the department wall from Kodak24 at noise level σ = 50 . (a) Noisy image, (b) DnCNN/25.80 dB, (c) BRDNet/26.33 dB, (d) FFDNet/26.13 dB, (e) NHNet/26.49 dB, and (f) CDN/29.53 dB.
Figure 8. Denoising results of CDN on SIDD.
Figure 9. Visual denoising results comparison of the ablation models of CDN. Images are from Set12. The left and right columns show the images with noise level σ = 50 and the clean images, respectively. The middle three columns list the denoised images from CDN and its ablation variants.
Table 1. PSNR (dB) results of different networks on Set12 at noise levels of 15, 25, and 50.
Noise level σ = 15

| Image | C.man | House | Peppers | Starfish | Monarch | Airplane | Parrot | Lena | Barbara | Boat | Man | Couple | Average |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| BM3D [6] | 31.91 | 34.93 | 32.69 | 31.14 | 31.85 | 31.07 | 31.37 | 34.26 | 33.10 | 32.13 | 31.92 | 32.10 | 32.37 |
| DnCNN [5] | 32.61 | 34.97 | 33.30 | 32.20 | 33.09 | 31.70 | 31.83 | 34.62 | 32.64 | 32.42 | 32.46 | 32.47 | 32.86 |
| FFDNet [40] | 32.43 | 35.07 | 33.25 | 31.99 | 32.66 | 31.57 | 31.81 | 34.62 | 32.54 | 32.38 | 32.41 | 32.46 | 32.77 |
| ResDNN [8] | 32.73 | 34.99 | 33.23 | 32.11 | 33.20 | 31.65 | 31.87 | 34.57 | 32.56 | 32.39 | 32.42 | 32.43 | 32.85 |
| U-Net [41] | 32.33 | 34.79 | 33.16 | 32.00 | 32.94 | 31.64 | 31.84 | 34.46 | 32.43 | 32.30 | 32.34 | 32.31 | 32.71 |
| ADNet [15] | 32.81 | 35.22 | 33.49 | 32.17 | 33.17 | 31.86 | 31.96 | 34.71 | 32.80 | 32.57 | 32.47 | 32.58 | 32.98 |
| DudeNet [11] | 32.71 | 35.13 | 33.38 | 32.29 | 33.28 | 31.78 | 31.93 | 34.66 | 32.73 | 32.46 | 32.46 | 32.49 | 32.94 |
| BRDNet [12] | 32.80 | 35.27 | 33.47 | 32.24 | 33.35 | 31.82 | 32.00 | 34.75 | 32.93 | 32.55 | 32.50 | 32.62 | 33.03 |
| NHNet [17] | 32.95 | 35.40 | 33.60 | 32.36 | 33.55 | 31.98 | 32.10 | 34.80 | 33.14 | 32.65 | 32.57 | 32.69 | 33.15 |
| CDN | 32.73 | 35.70 | 33.42 | 32.40 | 33.57 | 31.87 | 32.02 | 34.91 | 33.48 | 32.73 | 32.58 | 32.73 | 33.18 |

Noise level σ = 25

| Image | C.man | House | Peppers | Starfish | Monarch | Airplane | Parrot | Lena | Barbara | Boat | Man | Couple | Average |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| BM3D [6] | 29.45 | 32.85 | 30.16 | 28.56 | 29.25 | 28.42 | 28.93 | 32.07 | 30.71 | 29.90 | 29.61 | 29.71 | 29.97 |
| DnCNN [5] | 30.18 | 33.06 | 30.87 | 29.41 | 30.28 | 29.13 | 29.43 | 32.44 | 30.00 | 30.21 | 30.10 | 30.12 | 30.43 |
| FFDNet [40] | 30.10 | 33.28 | 30.93 | 29.32 | 30.08 | 29.04 | 29.44 | 32.57 | 30.01 | 30.25 | 30.11 | 30.20 | 30.44 |
| ResDNN [8] | 30.17 | 32.99 | 30.73 | 29.24 | 30.30 | 29.00 | 29.38 | 32.31 | 29.70 | 30.11 | 30.04 | 29.96 | 30.33 |
| U-Net [41] | 30.18 | 33.18 | 30.91 | 29.38 | 30.41 | 29.18 | 29.57 | 32.59 | 30.19 | 30.25 | 30.10 | 30.14 | 30.51 |
| ADNet [15] | 30.34 | 33.41 | 31.14 | 29.41 | 30.39 | 29.17 | 29.49 | 32.61 | 30.25 | 30.37 | 30.08 | 30.24 | 30.58 |
| DudeNet [11] | 30.23 | 33.24 | 30.98 | 29.53 | 30.44 | 29.14 | 29.48 | 32.52 | 30.15 | 30.24 | 30.08 | 30.15 | 30.52 |
| BRDNet [12] | 31.39 | 33.41 | 31.04 | 29.46 | 30.50 | 29.20 | 29.55 | 32.65 | 30.34 | 30.33 | 30.14 | 30.28 | 30.61 |
| NHNet [17] | 30.49 | 33.65 | 31.20 | 29.72 | 30.68 | 29.34 | 29.65 | 32.76 | 30.70 | 30.44 | 30.20 | 30.40 | 30.77 |
| CDN | 30.45 | 33.86 | 31.12 | 29.78 | 30.91 | 29.29 | 29.67 | 32.95 | 31.24 | 30.60 | 30.27 | 30.52 | 30.89 |

Noise level σ = 50

| Image | C.man | House | Peppers | Starfish | Monarch | Airplane | Parrot | Lena | Barbara | Boat | Man | Couple | Average |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| BM3D [6] | 26.13 | 29.69 | 26.68 | 25.04 | 25.82 | 25.10 | 25.90 | 29.05 | 27.22 | 26.78 | 26.81 | 26.46 | 26.72 |
| DnCNN [5] | 27.03 | 30.00 | 27.32 | 25.70 | 26.78 | 25.87 | 26.48 | 29.39 | 26.22 | 27.20 | 27.24 | 26.90 | 27.18 |
| FFDNet [40] | 27.05 | 30.37 | 27.54 | 25.75 | 26.81 | 25.89 | 26.57 | 29.66 | 26.45 | 27.33 | 27.29 | 27.08 | 27.32 |
| ResDNN [8] | 26.63 | 29.27 | 26.68 | 25.31 | 26.27 | 25.35 | 26.01 | 28.80 | 24.48 | 26.72 | 26.90 | 26.25 | 26.56 |
| U-Net [41] | 27.42 | 30.48 | 27.67 | 25.92 | 26.94 | 25.89 | 26.66 | 29.84 | 27.02 | 27.42 | 27.30 | 27.17 | 27.48 |
| ADNet [15] | 27.31 | 30.59 | 27.69 | 25.70 | 26.90 | 25.88 | 26.56 | 29.59 | 26.64 | 27.35 | 27.17 | 27.07 | 27.37 |
| DudeNet [11] | 27.22 | 30.27 | 27.51 | 25.88 | 26.93 | 25.88 | 26.50 | 29.45 | 26.49 | 27.26 | 27.19 | 26.97 | 27.30 |
| BRDNet [12] | 27.44 | 30.53 | 27.67 | 25.77 | 26.97 | 25.93 | 26.66 | 29.73 | 26.85 | 27.38 | 27.27 | 27.17 | 27.45 |
| NHNet [17] | 27.54 | 30.85 | 27.84 | 26.24 | 27.10 | 26.00 | 26.76 | 29.83 | 27.19 | 27.46 | 27.32 | 27.28 | 27.62 |
| CDN | 27.70 | 31.26 | 27.82 | 26.29 | 27.23 | 26.06 | 26.88 | 30.07 | 28.12 | 27.65 | 27.42 | 27.52 | 27.83 |
Table 2. Results of different networks on BSD68.
| Network | BM3D [6] | DnCNN [5] | FFDNet [40] | ADNet [15] | DudeNet [11] | BRDNet [12] | U-Net [41] | RIDNet [19] | NHNet [17] | CDN |
|---|---|---|---|---|---|---|---|---|---|---|
| σ = 15 | 31.07 | 31.72 | 31.62 | 31.74 | 31.78 | 31.79 | 31.54 | 31.81 | 31.85 | 31.89 |
| σ = 25 | 28.57 | 29.23 | 29.19 | 29.25 | 29.29 | 29.29 | 29.13 | 29.34 | 29.37 | 29.44 |
| σ = 50 | 25.62 | 26.23 | 26.30 | 26.29 | 26.31 | 26.36 | 26.39 | 26.40 | 26.43 | 26.39 |
Table 3. Color image denoising results of different networks.
| Dataset | Method | σ = 15 | σ = 25 | σ = 50 |
|---|---|---|---|---|
| Set5 | CBM3D [6] | 33.42 | 30.92 | 28.16 |
| | FFDNet [40] | 34.30 | 32.10 | 29.25 |
| | VDN [43] | 34.34 | 32.24 | 29.47 |
| | NHNet [17] | 34.80 | 32.56 | 29.64 |
| | CDN | 34.70 | 32.58 | 29.66 |
| Kodak24 | CBM3D [6] | 34.28 | 31.68 | 28.46 |
| | FFDNet [40] | 34.55 | 32.11 | 28.99 |
| | DnCNN [5] | 34.73 | 32.23 | 29.02 |
| | ADNet [15] | 34.76 | 32.26 | 29.10 |
| | DudeNet [11] | 34.81 | 32.26 | 29.10 |
| | BRDNet [12] | 34.88 | 32.41 | 29.22 |
| | NHNet [17] | 35.02 | 32.54 | 29.41 |
| | CDN | 35.05 | 32.57 | 29.54 |
Table 4. Denoising results of different networks on real-world noise datasets.
Test data: SIDD validation set

| Method | BM3D [6] | WNNM [7] | CBDNet [44] | RIDNet [19] | VDN [43] | MHCNN [42] | CDN |
|---|---|---|---|---|---|---|---|
| PSNR | 25.65 | 25.78 | 38.68 | 38.71 | 39.28 | 39.06 | 39.36 |
| SSIM | 0.685 | 0.685 | 0.809 | 0.914 | 0.909 | 0.914 | 0.918 |

Test data: DND

| Method | BM3D [6] | WNNM [7] | CBDNet [44] | RIDNet [19] | VDN [43] | PAN-Net [18] | MHCNN [42] | CDN |
|---|---|---|---|---|---|---|---|---|
| PSNR | 34.51 | 34.67 | 38.06 | 39.26 | 39.38 | 39.44 | 39.52 | 39.44 |
| SSIM | 0.851 | 0.865 | 0.942 | 0.953 | 0.952 | 0.952 | 0.951 | 0.951 |
Table 5. Ablation experiment results on Set12.
| Method | σ = 15 PSNR | σ = 15 SSIM | σ = 25 PSNR | σ = 25 SSIM | σ = 50 PSNR | σ = 50 SSIM |
|---|---|---|---|---|---|---|
| CDN-IIP(R) | 33.14 | 0.9080 | 30.82 | 0.8712 | 27.77 | 0.8050 |
| CDN-NEP(R) | 33.02 | 0.9058 | 30.57 | 0.8659 | 27.54 | 0.7991 |
| CDN-SSIM | 33.10 | 0.9075 | 30.80 | 0.8709 | 27.76 | 0.8052 |
| CDN-KLD | 33.15 | 0.9083 | 30.83 | 0.8709 | 27.78 | 0.8051 |
| CDN-SSIM-KLD | 33.05 | 0.9069 | 30.78 | 0.8703 | 27.75 | 0.8049 |
| CDN | 33.18 | 0.9089 | 30.89 | 0.8724 | 27.83 | 0.8074 |
Table 6. Comparison results of SSIM and other loss functions in IIP on Set12 at noise level σ = 25 .
| Loss Function | L1 | MSE | SSIM |
|---|---|---|---|
| PSNR | 30.86 | 30.85 | 30.89 |
Table 7. PSNR results of CDN on Set12 with different training patches.
| Patches | σ = 15 | σ = 25 | σ = 50 |
|---|---|---|---|
| 4 | 33.18 | 30.89 | 27.83 |
| 9 | 33.18 | 30.90 | 27.84 |
| 16 | 33.14 | 30.82 | 27.69 |