Review

Overview of Research on Digital Image Denoising Methods

1 Graduate School of Environmental Engineering, The University of Kitakyushu, Kitakyushu 808-0135, Japan
2 Department of Information Systems Engineering, The University of Kitakyushu, Kitakyushu 808-0135, Japan
3 School of Electronic and Information Engineering, Ankang University, Ankang 725000, China
* Author to whom correspondence should be addressed.
Sensors 2025, 25(8), 2615; https://doi.org/10.3390/s25082615
Submission received: 18 March 2025 / Revised: 16 April 2025 / Accepted: 18 April 2025 / Published: 20 April 2025
(This article belongs to the Special Issue Digital Image Processing and Sensing Technologies—Second Edition)

Abstract

During image acquisition, images are often corrupted by noise because of imaging conditions and equipment limitations. Images are also disturbed by external noise during compression and transmission, which adversely affects subsequent processing tasks such as image segmentation, target recognition, and text detection. Two-dimensional amplitude images are one of the most common image categories and are widely used in people’s daily life and work. Research on denoising algorithms for this kind of image is a hotspot in the field of image denoising. Conventional denoising methods mainly exploit the nonlocal self-similarity of images and sparse representations in the transform domain. In particular, the block-matching and 3D filtering (BM3D) algorithm not only effectively removes image noise but also retains detailed information in the image well. As artificial intelligence develops, deep learning-based image denoising has become an important research direction. This review provides a general overview and comparison of traditional image-denoising methods and deep neural network-based image-denoising methods. First, the essential frameworks of classic traditional denoising and deep neural network denoising approaches are presented, and the denoising approaches are classified and summarized. Then, existing denoising methods are compared through quantitative and qualitative analyses on public denoising datasets. Finally, we point out some potential challenges and directions for future research in the field of image denoising. This review can help researchers clearly understand the differences between various image-denoising algorithms, which not only helps them choose suitable algorithms or improve and innovate on this basis but also provides research ideas and directions for subsequent work in this field.

1. Introduction

Images are among the main means by which humans acquire and exchange information. With the popularization of digital imaging devices, the range of image applications has expanded, and the types of images have become more abundant. Two-dimensional amplitude-type images are the most common image category. By recording light intensity, they show the form, texture, and brightness of objects in two dimensions; daily photographs, for example, are intuitive presentations of different natural scenes. Hyperspectral images add a spectral dimension to the two-dimensional spatial domain, providing continuous spectral band information, and have been widely used in remote sensing, medicine, agronomy, and other fields. Spatio-temporal images vary in both time and space, such as satellite remote sensing images and video streams. Layered images can clearly show the internal structure of objects and are commonly used in medical diagnosis, industrial non-destructive testing, and other fields. Terahertz images are formed by imaging the interaction of terahertz waves with matter and have great potential in security inspection and biomedical imaging. Digital holograms record the amplitude and phase information of an object’s light waves through the interference and diffraction of light and reproduce the object’s three-dimensional information through digital processing and reconstruction of the interference fringes. Holograms are commonly used in fields such as microscopic particle observation, imaging of biological samples, and measurement of three-dimensional objects. These different types of images play a key role in their respective fields and jointly promote the development and progress of science and technology. However, images are unavoidably polluted by noise during acquisition and transmission owing to external factors, acquisition devices, and storage media, which reduces image quality. Damaged images hinder the transmission of information and adversely affect subsequent image-processing tasks such as feature extraction, text detection, and image segmentation [1]. Accordingly, image-denoising techniques have become an important research topic in image processing, computer vision, and related applications.
Researchers have developed corresponding denoising techniques and methods for the unique characteristics of different types of images. Katkovnik et al. proposed a sparse phase imaging method based on complex-domain nonlocal BM3D techniques, which was also applied to the denoising of hyperspectral phase images [2,3]. Dutta et al. [4] applied deep learning to terahertz image denoising for the non-destructive analysis of historical documents, automatically learning complex features and patterns from the data to address the noise problem in terahertz images. Wu et al. [5] proposed an unsupervised deep residual sparse attention network for retinal optical coherence tomography (OCT) image denoising. Bianco et al. [6] proposed a framework combining multi-hologram coding, block grouping, and collaborative filtering to realize quasi-noise-free digital holographic reconstruction. Two-dimensional amplitude-type images, especially natural images, are closely connected with people’s daily life and have an extremely wide range of applications, so the denoising of such images attracts considerable attention from the academic community and is also the main research object of this review.
Two-dimensional amplitude-type image-denoising methods have a long history, and the earliest methods were mainly filter-based, operating in the spatial domain or transform domain. Typical spatial-domain methods include Gaussian filtering [7], Wiener filtering [8], bilateral filtering [9], and so on. Over recent years, as computer technology has developed, image-denoising methods have been continuously improved; the most typical denoising method is the well-known BM3D framework [10], which combines the nonlocal similarity of natural images with sparse representation in the transform domain. At an early stage, most work on image denoising concerned the filtering of single-channel grayscale images. In recent years, advances in imaging systems and techniques have greatly expanded the information saved and displayed in color images, which can reproduce real scenes more realistically. Over the last two decades, the representative BM3D method has been extended to multidimensional images in two distinct ways. The first approach is to apply a decorrelating transformation so that every channel in the transformed space can be filtered individually by an efficient single-channel denoiser. For example, the CBM3D method [11] applied a YCbCr color transform to natural RGB images, which provides a nearly optimal decorrelation of the color data. The other solution is to exploit channel or band correlation by jointly processing the entire multidimensional image dataset. Maggioni et al. [12] extended BM3D to sparse 4D transform-domain collaborative filtering (BM4D) by using 3D pixel cubes for volumetric image denoising. Although traditional methods have made significant achievements in image denoising, they also have many drawbacks. In the past few years, with the development and application of neural network technology, neural network-based methods have achieved remarkable results in the field of image denoising.
This review aims to make a comprehensive and systematic summary and comparative analysis between traditional image-denoising methods and deep neural network-based image-denoising methods. The main work is as follows:
(1) We analyze the basic frameworks of classical traditional denoising methods and deep neural network-based denoising methods and classify the various denoising methods according to their principles, characteristics, and other factors, laying the foundation for subsequent research.
(2) Relying on public denoising datasets, we carry out a rigorous and comprehensive comparative evaluation of existing denoising methods from both quantitative and qualitative perspectives.
(3) By reflecting on the current status of the image-denoising field, we point out the hidden challenges and look forward to future research directions, providing a guideline for subsequent scientific research.

2. Image-Denoising Framework

Image noise mainly arises during image acquisition and transmission. For example, during image acquisition, the light-sensitive device is affected by the brightness of the light and the ambient temperature, which degrades the imaging result and generates noise. During image transmission, noise is introduced by imperfect sending and receiving equipment and poor transmission channels. According to its distribution, noise can be divided into Gaussian noise, Poisson noise, and Skellam noise.
(1) Gaussian Noise
Gaussian noise is the most common type of noise in imaging and is represented by a Gaussian distribution function [13]. It is additive and independent. For example, when shooting with a camera, the amplifier inside the camera introduces noise when amplifying the signal. In conventional film photography, the uneven distribution of silver salt particles in the film causes this noise.
An image containing Gaussian noise can be represented as $g(x, y) = f(x, y) + n(x, y)$, where $g$ is the noise-containing image, $f$ is the original image, and $n$ is the additive noise at each pixel.
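As a minimal illustration of this additive model, the following sketch synthesizes a noisy observation from a clean image with NumPy; the noise standard deviation of 25 (on a 0-255 intensity scale) is an arbitrary example value, not one prescribed by the methods surveyed here.

import numpy as np

def add_gaussian_noise(f, sigma=25.0, seed=0):
    """Simulate g(x, y) = f(x, y) + n(x, y) with zero-mean additive Gaussian noise.

    f     : clean image as a float array scaled to [0, 255]
    sigma : noise standard deviation (illustrative value)
    """
    rng = np.random.default_rng(seed)
    n = rng.normal(loc=0.0, scale=sigma, size=f.shape)  # additive, signal-independent noise
    g = f + n
    return np.clip(g, 0.0, 255.0)                       # keep pixel values in the valid range

# Usage sketch on a random stand-in "image".
clean = np.random.default_rng(1).uniform(0, 255, size=(64, 64))
noisy = add_gaussian_noise(clean, sigma=25.0)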
(2) Poisson Noise
When an image is affected by photon noise, the noise follows a Poisson distribution [14]. Photon noise originates from the image acquisition process, arising from random fluctuations in the number of photons striking the sensor. In addition to photon noise, factors such as deviations in the calibration of the instrument, interference in the data transmission chain, and the characteristics of the storage medium also produce noise that obeys the Poisson distribution.
(3) Skellam Noise
Noise generated in terahertz pulsed time-domain holographic raster scanning based on a balanced detection system, and in low-photon imaging such as X-ray fluoroscopy, can be described by the Skellam distribution [15]. The variance of Skellam-distributed noise is independent of the true signal, and this noise arises in imaging scenarios involving the random behavior of photons or similar independent Poisson processes.
Whatever the type of noise is, it degrades the quality of an image and interferes with the subsequent image analysis and processing. Therefore, before further processing of the image, it has to be denoised in the first place. In this review, the focus is on the techniques and methods of Gaussian noise removal.

2.1. Traditional Denoising Methods

Traditional denoising methods fall into three main types.
(1) Noise removal using filter techniques
It is mainly performed in the spatial and transform domains. The basic principle is to process each pixel in an image and use the information of its neighboring pixels to correct the value at that point, thus smoothing the image. Take the classical bilateral filtering algorithm as an example; it is an adaptive weighted filtering method. The pixel values of the output image are weighted combinations of neighboring pixels, and the weights are determined by the geometric distance and the gray-value difference between the filtered point and its neighbors. In other words, the weights are dynamically adjusted according to the geometric distance of the pixels and the similarity of their gray values: a pixel that is closer to the filtered point and has a smaller gray-level difference is assigned a larger weight, and vice versa. Compared with Gaussian and median filters, bilateral filtering has significant advantages in edge preservation and image smoothing. Its main disadvantages are higher computational complexity and slower processing speed, because it requires complex weight calculations over the neighborhood of each pixel. In addition, parameter selection is more difficult and needs to be adjusted according to the characteristics of different images; otherwise, the filtering effect may suffer. Building on the theoretical foundation of the classical algorithms, scholars have further explored denoising in the spatial and transform domains. Tang et al. [16] proposed a modified curvature filtering algorithm. The algorithm combined the half-window triangular tangent plane with the minimum triangular tangent plane to replace the traditional minimum triangular tangent plane projection operator, and it amended the regularization energy function by adding a local-variance regularization term to account for the speckle present in strongly noisy images, which improved the denoising performance. Although the algorithm denoised strong noise well, it could not adaptively tune the tangent plane projection operator within the neighborhood, and it took a long time to run. Cheng et al. [17] designed a remote sensing image denoising method based on the curvelet transform and a goodness-of-fit test. The algorithm normalized the curvelet coefficients in the curvelet domain and performed a local goodness-of-fit (GOF) test on the normalized coefficients, which removed the noise coefficients and retained the signal coefficients. Finally, the denormalized signal coefficients were inverse curvelet transformed to obtain the denoised remote sensing image. Zhang et al. [18] accomplished image denoising by setting appropriate tuning parameters, selecting fixed thresholds statistically, and adding a tuning parameter to reduce the deviation from a constant between the original wavelet coefficients and the estimated wavelet coefficients.
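To make the bilateral filtering principle above concrete, the following sketch applies OpenCV's bilateral filter to a noisy grayscale image and contrasts it with plain Gaussian smoothing; the file path and the parameter values (neighborhood diameter and the two sigmas) are illustrative assumptions rather than settings taken from the cited works.

import cv2

# Assumed input: an 8-bit grayscale image on disk; the path is a placeholder.
noisy = cv2.imread("noisy.png", cv2.IMREAD_GRAYSCALE)

# d controls the neighborhood diameter; sigmaColor weights pixels by gray-value similarity,
# sigmaSpace by geometric distance - the two factors described in the text above.
denoised_bilateral = cv2.bilateralFilter(noisy, d=9, sigmaColor=50, sigmaSpace=50)

# Gaussian smoothing for comparison: it blurs edges together with the noise.
denoised_gaussian = cv2.GaussianBlur(noisy, ksize=(9, 9), sigmaX=2.0)

cv2.imwrite("denoised_bilateral.png", denoised_bilateral)
cv2.imwrite("denoised_gaussian.png", denoised_gaussian)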
(2) Denoising using sparse coding
The core idea of sparse coding denoising is to exploit the sparsity of the signal: the noisy signal is sparsely represented over an overcomplete dictionary, and denoising is achieved by discarding the non-sparse portion that corresponds to the noise. The K-SVD algorithm is a typical representative [19]. Its principle is divided into three main steps. The first step is dictionary learning. The algorithm first randomly initializes an overcomplete dictionary D. Then, for a given collection of noisy image blocks, an optimization problem is solved to find the sparse representation coefficients X of each image block over the dictionary, such that DX approximates the original image block as closely as possible while X is sparse, i.e., most of its elements are zero. The second step is the dictionary update. After obtaining the sparse representation coefficients, the algorithm updates the dictionary using singular value decomposition (SVD) to better fit the features of the image blocks. The third step is the denoising process. After several iterations of dictionary learning and updating, a dictionary that can represent the image blocks well is obtained. The new noisy image is divided into multiple image blocks, each block is sparsely represented using the learned dictionary, and, finally, the denoised image is reconstructed from these sparse representation coefficients and the dictionary. The K-SVD algorithm can effectively exploit the sparsity of the image for denoising and retains the details and texture information of the image well while removing the noise. However, it also has drawbacks, including high computational complexity and therefore slow operation. It is sensitive to the initialization of the dictionary, and different initializations may affect the final denoising result. Therefore, researchers have continuously improved this algorithm. Li et al. [20] proposed an image-denoising algorithm based on adaptive matching pursuit. The algorithm first solved the sparse coefficient problem through an adaptive matching pursuit mechanism. Then, the dictionary was trained into an adaptive dictionary that effectively reflects the structural features of the image using the K-singular value decomposition algorithm. Finally, the sparse coefficients were combined with the adaptive dictionary to reconstruct the image. Li et al. [21] proposed an image-denoising method based on a 2D multipath matching pursuit algorithm. The approach divided the noisy image into blocks and used a dictionary trained on the noisy image to sparsely represent each block with breadth-first and/or depth-first 2D multipath matching pursuit, which removed the noise from every block; the denoised blocks were then reconstructed into a complete denoised image. Yuan et al. [22] used global similar-block matching to obtain sparse coefficient estimates for the ideal image, and the class dictionary and estimated sparse coefficients were used to implement image denoising.
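The dictionary learning, sparse coding, and reconstruction steps described above can be sketched in Python as follows. K-SVD itself is not available in scikit-learn, so this sketch substitutes MiniBatchDictionaryLearning with orthogonal matching pursuit as a stand-in; the patch size, dictionary size, and sparsity level are illustrative assumptions.

import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning
from sklearn.feature_extraction.image import extract_patches_2d, reconstruct_from_patches_2d

def sparse_coding_denoise(noisy, patch_size=(8, 8), n_atoms=256, n_nonzero=5):
    """Patch-based sparse-coding denoising in the spirit of K-SVD (stand-in sketch)."""
    # Step 1: collect overlapping patches and remove their mean (DC) component.
    patches = extract_patches_2d(noisy, patch_size)
    data = patches.reshape(patches.shape[0], -1)
    mean = data.mean(axis=1, keepdims=True)
    data = data - mean

    # Step 2: learn an overcomplete dictionary D on a subset of the noisy patches.
    dico = MiniBatchDictionaryLearning(n_components=n_atoms, batch_size=256,
                                       transform_algorithm="omp",
                                       transform_n_nonzero_coefs=n_nonzero)
    D = dico.fit(data[::10]).components_

    # Step 3: sparse-code every patch (coefficients X), reconstruct D X + mean,
    # and average the overlapping patch estimates back into an image.
    X = dico.transform(data)
    denoised_patches = (X @ D + mean).reshape(patches.shape)
    return reconstruct_from_patches_2d(denoised_patches, noisy.shape)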
(3) Use of external prior for denoising, also known as model-based denoising
From a Bayesian perspective, this class of methods formulates the denoising task as an optimization problem based on maximum a posteriori probability, in which prior knowledge plays a crucial role. Classical models mainly include nonlocal self-similarity models [23,24,25,26], gradient models [27,28,29], and Markov random field models [30]. Representative approaches include NLM [23] and BM3D [10], which share the idea of combining filtering with a self-similarity model: similar regions are searched for across the whole image in small image blocks and averaged to remove the noise.
BM3D is a 3D transform-domain filtering algorithm and is among the best image-denoising techniques available; many efficient noise removal approaches have been built on it. The algorithm is organized into two main steps. The first step is the basic estimate. The image is divided into fixed-size blocks, and each block is processed in turn: blocks similar to the reference block are grouped and stacked into a three-dimensional array, a 3D transform is applied to this array, and the basic estimate of the image is finally obtained by aggregating the overlapping block estimates with weights. The second step refines each block using the basic estimate obtained in the first step. Block matching is performed again to find the locations of similar blocks in the basic estimate image; after matching, two 3D arrays are obtained, and joint Wiener filtering is applied to them. Finally, the weighted average of the overlapping block estimates gives the ultimate denoised image. A flowchart of the BM3D algorithm is given in Figure 1.
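For readers who want to try the two-stage pipeline above, a third-party reference implementation is distributed on PyPI as the bm3d package; the snippet below assumes that package and its bm3d.bm3d(noisy, sigma_psd=...) call, so treat the exact API as an assumption rather than part of the original BM3D publication.

import numpy as np
import bm3d                      # third-party BM3D package on PyPI (availability and API assumed)
from skimage import data, img_as_float

clean = img_as_float(data.camera())                     # example grayscale image in [0, 1]
sigma = 25 / 255.0                                      # noise level matched to the image scale
rng = np.random.default_rng(0)
noisy = np.clip(clean + rng.normal(0.0, sigma, clean.shape), 0.0, 1.0)

# One call runs both stages described above: the basic estimate,
# then collaborative Wiener filtering guided by that estimate.
denoised = bm3d.bm3d(noisy, sigma_psd=sigma)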
Other typical methods include weighted nuclear norm minimization (WNNM) [31]. Liu et al. [32] proposed a novel image-denoising approach that imposes a total variation norm constraint on the coefficient matrix of the low-rank representation model. Lv et al. [33] incorporated relative total variation (RTV) into weighted nuclear norm minimization, imposed RTV norm constraints on the WNNM low-rank model, and proposed a relative total variation and weighted nuclear norm minimization (RTV-WNNM) image-denoising approach. Model-based approaches can achieve noise removal, but most of them have two obvious drawbacks: the parameters must be selected manually, and the testing phase involves a complex optimization process that requires a long processing time. More traditional denoising algorithms are detailed in Table 1.
Through the study of the above literature, the advantages and disadvantages of various types of algorithms are summarized as follows:
The advantages of filter-based denoising algorithms mainly include the following: (1) Higher computational efficiency compared to the other two types. It is usually based on a simple convolutional operation, which can quickly process the image. (2) The principle of the algorithm is relatively simple, the implementation difficulty is low, the hardware requirements are not high, and it can be run on devices with limited resources. (3) For Gaussian noise and other noise with specific statistical characteristics, a good denoising effect can be achieved by designing a suitable filter. The disadvantages mainly include the following: (1) Image blurring. Although the Gaussian noise can be suppressed to a certain extent, when smoothing the noise, the image details and edge information are smoothed together, resulting in an overall blurring of the image. (2) Different filtering algorithms are designed for specific noises, and the effect of changing the type of noise may become worse. (3) The parameter settings are poorly self-adaptive. With different parameters, the filtering effect varies greatly, and there are no general optimal parameters. Given the advantages and disadvantages of such algorithms, it can be concluded that this class of algorithms is suitable for denoising simple textured images with low noise levels, relatively high real-time requirements, and simple textures.
The advantages of image-denoising algorithms based on sparse coding mainly include the following: (1) Good detail retention. It can remove noise while retaining the detailed information of the image better so that the denoised image is clearer. (2) By learning the sparse representation of the image, it can adaptively denoise according to the local features of the image and has good adaptability to different types and distributions of images. Disadvantages mainly include the following: (1) High computational complexity. The algorithm involves a large number of matrix operations and optimization solutions, high computational cost, and slow processing speed; when the image resolution is high or the amount of data is large, it is especially more obvious. (2) Dictionary learning problem. The dictionary learning process is time-consuming and depends on a large amount of training data. If the training data differ greatly from the actual application image, the denoising effect is affected. (3) Difficulty of parameter adjustment. Parameters need to be adjusted according to the image and noise characteristics. There is no universal optimal value, and repeated experiments and debugging are required, making it more difficult to use. This type of algorithm is suitable for processing images with rich texture and details, moderate noise levels, and the need for high-precision denoising.
The advantages of image-denoising algorithms based on external a priori assumptions mainly include the following: (1) Preservation of image features and strong detail recovery ability. When the external a priori knowledge accurately reflects the true characteristics of the image, the algorithm is able to recover the details in the image with high accuracy based on this knowledge. (2) High flexibility. The appropriate external prior knowledge can be selected according to different application scenarios and needs, with high flexibility and customizability. Disadvantages include the following: (1) Low computational efficiency. The testing phase often involves complex optimization problems, large computational volume, and time-consuming denoising, and it is difficult to take into account high performance and computational efficiency. (2) Hyper-parameterization. The model is mostly non-convex, so the parameters have to be selected manually. Different parameters have a big impact on the denoising performance, and the tuning of parameters depends on experience and is cumbersome. (3) Strong dependence on a priori assumptions. When the a priori assumption is inaccurate or inapplicable, it would misjudge the image region, destroy the details of the image structure, fail to effectively distinguish between the noise and the signal, and may also introduce artifacts. Synthesizing the advantages and disadvantages of such algorithms, it can be concluded that this class of algorithms is suitable for processing images with specific a priori knowledge, such as medical images. Using the existing prior knowledge of medical images can accurately remove noise and retain key medical features. This type of algorithm is also suitable for processing high-precision images in specialized fields, such as satellite remote sensing images, aerial photography images, and so on. These images usually require high-precision denoising processing, and prior knowledge in specialized fields could be utilized to improve the accuracy and reliability of denoising.

2.2. Deep Learning Denoising Approaches

According to the difference in the type of noise that the model can handle, deep learning denoising algorithms are categorized into four groups: additive Gaussian white noise image denoising, real noise image denoising, blind noise image denoising, and mixed noise image denoising.
(1) Denoising approaches for additive Gaussian white noise images
Additive Gaussian white noise is prevalent in various imaging systems and communication channels, and it has an explicit mathematical model and statistical properties that facilitate theoretical analysis and algorithm design; many classical denoising algorithms were developed on this basis. Mao et al. [53] proposed a deep fully convolutional encoding–decoding framework, which introduced symmetric skip connections between convolutional and deconvolutional layers and was successfully used for image restoration tasks such as denoising and super-resolution. Tai et al. [54] created a persistent memory network (MemNet) that was 80 layers deep. Memory modules were constructed from recursive units and gating units to determine the proportions of the current short-term memory and previously learned features in the subsequent information transfer. MemNet had strong learning ability and achieved good gains; however, it was computationally intensive and slow. Zhang et al. [55] proposed a denoising convolutional neural network (DnCNN) that combined batch normalization and residual learning. Although the approach achieved exceptionally good results, the system required many iterations to reach a good model, and the speed and convergence of the overall algorithm were not outstanding. Zhang et al. [56] presented a fast and flexible denoising network (FFDNet). FFDNet was based on the network structure of DnCNN and was able to process additive Gaussian white noise (AWGN) with noise intensities in the range [0, 75] using only one trained model. However, it must feed a noise level map into the network together with the noisy image, so the training complexity of the model is relatively high. Zhang et al. [57] designed a multiscale feature learning convolutional neural network (MSFLNet). It was composed of three feature learning (FL) modules, a reconstruction generation (RG) module, and a residual connection, and it can effectively learn the characteristic information of the image and increase denoising efficiency. Valsesia et al. [58] proposed a graph-convolution-based operation to create a nonlocal receptive field; a graph-convolutional image-denoising (GCDN) model for the efficient representation of self-similarity was obtained by dynamically computing graph similarities in the hidden features. However, the generalization ability of the model could be improved. Table 2 shows more information about deep learning-based denoising approaches for Gaussian white noise images.
(2) Denoising methods for real noisy images
Image acquisition is affected by a variety of factors, and the resulting noise is complex and varied. Research on real-image denoising algorithms helps images maintain good quality in different scenarios. Yan et al. [76] derived a noise map directly from the noisy image, thereby realizing unsupervised noise modeling and accomplishing the denoising of unpaired real noisy images; the network architecture was a self-consistent generative adversarial network (SCGAN). Zhao et al. [77] proposed end-to-end denoising of dark burst images using recurrent fully convolutional networks, directly mapping the raw burst images to the sRGB output to generate an optimal image, or generating a multi-frame denoised image sequence, with a recurrent fully convolutional network (RFCN). Although this denoising framework is highly flexible, it has not been extended to video denoising and is not yet portable. Abuya et al. [78] used an integrated approach to remove additive Gaussian noise from CT images, combining an anisotropic Gaussian filter and the wavelet transform with a deep learning denoising convolutional neural network, and demonstrated excellent performance in maintaining image quality and preserving fine details. Gou et al. [79] proposed a multiscale adaptive network (MSANet) that simultaneously considers scale characteristics and cross-scale complementarity and integrates them into a multiscale design, which effectively improved denoising performance. However, the algorithm still does not account for the loss of image details. Bao et al. [80] designed an MCU-Net denoising method, which added a branch based on residual dense blocks with atrous spatial pyramid pooling (ASPP). The model sub-paths extracted image features at different resolutions, which were fused at the end of the network for denoising. Table 3 shows more information about deep learning-based denoising approaches for real noisy images.
(3) Denoising methods for blind noise images
Blind-noise image denoising refers to restoring a noise-contaminated image when the type, intensity, and distribution of the noise are unknown. Yang et al. [91] designed a new approach to estimate the noise level of a single image with multi-column convolutional neural networks; however, this algorithm has not been applied to denoising natural images. Guo et al. [92] continued the idea of FFDNet and designed a two-stage network (CBDNet) built around a noise level map: the noise level map is first obtained through a noise estimation sub-network and then fed into the non-blind denoising sub-network together with the noisy image, achieving a certain degree of blind image denoising. Tao et al. [93] proposed a residual dense attentional similarity network (RDASNet) for image denoising. The network extracted local features of the image via a CNN and captured the global information of the image via an attentional similarity module (ASM); in addition, it used dilated convolution to expand the receptive field and pay greater attention to global features. Table 4 shows more information about deep learning-based blind-noise image-denoising approaches.
(4) Denoising methods for mixed noise images
This is a technology for denoising images that are disturbed by several different types of noise at the same time. Zhang et al. [100] proposed a three-layer super-resolution network for multiple degradations, a general framework with a dimensionality-expansion strategy that can handle multiple and even spatially varying degradations. Li et al. [101] designed a self-supervised two-stage denoising approach for mixed noise in EBAPS images; self-supervised learning was achieved by combining UNet [102] and BSN and drawing on the iterative training idea of IDR [103]. Table 5 shows more information about deep learning-based denoising approaches for mixed noise images.
As early as the 1980s, Sullivan et al. [108] and Zhou et al. [109] applied neural networks to image denoising. Compared with traditional image-denoising approaches, deep convolutional neural networks show powerful learning capabilities. Training on large numbers of noisy image samples can effectively improve a network's ability to fit noise of various intensities and gives it stronger generalization ability. CNN-based image-denoising approaches usually adopt the strategy of learning a mapping from noisy images to clean images. Among the many algorithms, DnCNN is undoubtedly one of the more effective approaches, combining residual learning and batch-normalized optimization for image denoising. The DnCNN model emphasizes the complementary roles of residual learning and batch normalization in image restoration, and it achieves fast convergence and good denoising despite a deep network. It remains one of the more advanced and prominent denoising algorithms in the field. The network is divided into a preprocessing layer, intermediate processing layers, and an image reconstruction layer. The preprocessing layer applies 64 filters of size 3 × 3 for the initial feature extraction of the noisy image, producing 64 feature maps, followed by the ReLU activation function for nonlinearity. The intermediate processing layers, corresponding to layers 2 to (D − 1) of the network, perform deep feature extraction; every such layer uses 64 filters of size 3 × 3 × 64 with batch normalization added between convolution and ReLU. The image reconstruction layer uses C filters of size 3 × 3 × 64 to produce the reconstructed output.
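The layer arrangement described above can be sketched in PyTorch as follows; the depth of 17 and the single-channel setting are common choices for grayscale DnCNN models and are assumptions here, not values fixed by this review.

import torch
import torch.nn as nn

class DnCNN(nn.Module):
    """Minimal DnCNN-style network following the layer description above."""
    def __init__(self, channels=1, depth=17, features=64):
        super().__init__()
        layers = [nn.Conv2d(channels, features, 3, padding=1), nn.ReLU(inplace=True)]
        for _ in range(depth - 2):                      # layers 2 .. D-1: Conv + BN + ReLU
            layers += [nn.Conv2d(features, features, 3, padding=1, bias=False),
                       nn.BatchNorm2d(features),
                       nn.ReLU(inplace=True)]
        layers.append(nn.Conv2d(features, channels, 3, padding=1))   # reconstruction layer
        self.body = nn.Sequential(*layers)

    def forward(self, noisy):
        residual = self.body(noisy)     # residual learning: the network estimates the noise
        return noisy - residual         # denoised estimate = noisy input minus estimated noise

# Usage sketch: one training step with an L2 loss on synthetic Gaussian noise.
model = DnCNN()
clean = torch.rand(4, 1, 40, 40)
noisy = clean + (25 / 255.0) * torch.randn_like(clean)
loss = nn.functional.mse_loss(model(noisy), clean)
loss.backward()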
Although the DnCNN model has greatly improved denoising performance, there is still much room for improvement as deep learning technology continues to develop, and many researchers have improved it with new techniques. Based on the deep learning denoising methods above, the improvement directions for the DnCNN model can be divided into three aspects:
First, optimization of the network structure. By deepening or broadening the network, the expressive power of the model can be significantly improved, enabling it to learn more complex image features [65,69,70,72,73,90]. In this process, it is necessary to pay attention to the phenomenon of over-fitting and the resulting high computational complexity. Therefore, the introduction of a residual connection is also one of the optimization ideas [61,65,71,81]. Residual connection can effectively solve the problem of gradient disappearance, which not only makes the network training process smoother but also retains the image details when denoising better. In addition, multiscale convolution and convolution operations with convolution kernels of different sizes can extract image features at different scales, thereby enhancing the model’s ability to capture image details and structure [61,63,65,73]. The convolutional neural network was optimized using the above method so that the noise reduction model based on the convolutional neural network (CNN) can accurately learn the difference between noise and image features. Removing white Gaussian noise can not only effectively reduce the noise level but also better retain the edge and texture details of the image.
Second, introducing an attention mechanism. The introduction of the attention mechanism opens a new path for model optimization [64,67,87,90,91,92,93]. Among them, the channel attention mechanism allows the model to automatically learn the importance of different channels, pay more attention to important channels, suppress those unimportant channels, and improve the feature extraction ability of the model. The spatial attention mechanism enables the model to accurately focus on important spatial regions in the image, enhance the ability to capture local features of the image, better retain the details and edges of the image in the process of denoising, and significantly improve the quality of the image after denoising. The deep learning model combined with the attention mechanism can adaptively learn and estimate the unknown noise characteristics to achieve more effective denoising of blind noise images.
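As one concrete example of the channel attention mechanism discussed above, the following squeeze-and-excitation style module is a minimal sketch; the channel count and reduction ratio are illustrative assumptions, and published attention-based denoisers differ in their exact designs.

import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    """Squeeze-and-excitation style channel attention (illustrative sketch).

    Global average pooling summarizes each feature channel, a small bottleneck MLP learns
    per-channel importance weights, and the feature maps are rescaled so that informative
    channels are emphasized and less useful ones are suppressed.
    """
    def __init__(self, channels=64, reduction=16):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x):
        b, c, _, _ = x.shape
        w = self.fc(x.mean(dim=(2, 3)))          # squeeze: per-channel statistics
        return x * w.view(b, c, 1, 1)            # excite: reweight the channels

# Usage sketch: attach after a convolutional block inside a denoising network.
features = torch.rand(2, 64, 32, 32)
attended = ChannelAttention(64)(features)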
Third, enhancing data processing. In the data processing phase, a variety of data enhancements can be performed. For example, rotating, flipping, scaling and other operations on training data can greatly enrich the diversity of data, effectively improve the generalization ability of the model, and enable it to perform well in the face of various complex noises [85,92]. At the same time, the adversarial training mechanism was introduced to make the generator and discriminator progress together in the adversarial game so that the generator can generate more realistic denoised images and further improve the overall performance of the model [82,83,96,104,107]. The application of these methods has greatly improved the denoising performance of deep convolutional neural networks in real noise images and mixed noise images.
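A minimal sketch of the geometric data augmentation mentioned above (rotations and flips of training patches) is given below; the eight-fold scheme is a common convention in denoising training pipelines and is used here purely as an illustration.

import numpy as np

def augment_patch(patch, mode):
    """Eight-fold geometric augmentation for a training patch.

    Modes 0-3 rotate by 0/90/180/270 degrees; modes 4-7 additionally flip vertically,
    enriching data diversity without changing the noise statistics.
    """
    rotated = np.rot90(patch, k=mode % 4)
    return np.flipud(rotated) if mode >= 4 else rotated

# Usage sketch: pick a random mode for each training patch.
rng = np.random.default_rng(0)
patch = rng.random((40, 40))
augmented = augment_patch(patch, mode=rng.integers(8))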

3. Datasets

Most effective deep learning image-denoising methods mainly use paired data training. The training data are required to be (noisy, clear) image pairs. There are currently three ways to construct (noisy, clear) image pairs, summarized as follows:
The first is to obtain clear and high-quality images from image databases, such as the Berkeley Segmentation Dataset [110], Waterloo Exploration Database [111], and DIV2K [112]. Then, add noise, such as Gaussian noise, Poisson noise, and so on, to the clear image to synthesize the image with noise, and build the paired training data. This way of synthesizing noise images is relatively simple, but there are some differences with real noise images.
In the second method, a low-ISO image is taken as the clean image, and a corresponding high-ISO image of the same scene is taken as the noisy image. This method uses only a single low-ISO image as the clean image, which may itself contain noise, and its brightness, contrast, and other conditions may not fully correspond to those of the noisy image.
The third is to shoot multiple images of the same scene and then perform image alignment, post-processing, and weighted averaging to generate an almost noise-free image. This method uses multiple images to obtain clean images of relatively high quality, but it requires a large amount of data, considerable shooting effort, and strict alignment between the multiple images.
Most researchers use the first method to create training image pairs. Because the sources of noise are complex, real noisy images may contain several types of noise. If the total noise is assumed to be the sum of independent random variables with different probability distributions, then, according to the central limit theorem, the more independent noise components of different properties are summed, the closer their normalized sum approaches a Gaussian distribution [113]. Therefore, Gaussian noise, as the closest approximation to real noise, is usually chosen for experiments. Detailed information on the synthetic noise image datasets is shown in Table 6.
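A minimal sketch of the first construction method, building synthetic (noisy, clean) training pairs from clean patches, is given below in PyTorch; the patch tensor, noise level, and shapes are illustrative assumptions.

import torch
from torch.utils.data import Dataset

class SyntheticNoisePairs(Dataset):
    """Builds (noisy, clean) training pairs by adding synthetic Gaussian noise.

    clean_patches is assumed to be a tensor of shape (N, C, H, W) with values in [0, 1],
    already cropped from a high-quality image database such as BSD or DIV2K.
    """
    def __init__(self, clean_patches, sigma=25 / 255.0):
        self.clean = clean_patches
        self.sigma = sigma

    def __len__(self):
        return len(self.clean)

    def __getitem__(self, idx):
        clean = self.clean[idx]
        noisy = clean + self.sigma * torch.randn_like(clean)   # additive Gaussian noise
        return noisy.clamp(0.0, 1.0), clean

# Usage sketch with random stand-in data.
pairs = SyntheticNoisePairs(torch.rand(100, 1, 40, 40))
noisy, clean = pairs[0]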
Considering that real noise is more complex, some researchers have applied the second and third methods to construct real (noisy, clear) image pairs as training or test data. The commonly used datasets are shown in Table 7. These images were collected with different cameras and different ISO settings.

4. Evaluation Standards

A noisy image is denoised to obtain an image that theoretically removes the noise efficiently and retains the important information. This can be illustrated from the following aspects [118]: (1) the removal of noise; (2) the protection of edge texture and other details; (3) the degree of regional smoothing. Therefore, to evaluate the image quality after denoising, the above three aspects should be considered. At present, image quality assessment metrics can be categorized into subjective and objective assessments [119].
(1) Subjective evaluation
Subjective evaluation refers to assessing image quality through visual observation and subjective impressions. The usual approach is comparative observation: the denoised image can be compared with the original image, or the results of different denoising algorithms can be compared with each other. Among the subjective evaluation criteria, the mean opinion score (MOS) is obtained with a double-stimulus continuous quality grading procedure in which the image to be evaluated is judged with reference to the original image: the image to be evaluated and the original image are alternately shown to the observer for a certain period according to fixed rules, a time interval is then set aside for the observer to score, and, finally, the average of all the given scores is taken as the score of the image to be evaluated.
(2) Objective evaluation
Objective evaluation assesses image quality via the discrepancy between the original image and the processed image. The most commonly used objective criteria are the mean square error (MSE), peak signal-to-noise ratio (PSNR), and structural similarity (SSIM), as shown in Equations (1)–(3).
\mathrm{MSE} = \frac{1}{H \times W} \sum_{i=1}^{H} \sum_{j=1}^{W} \left[ I(i,j) - \tilde{I}(i,j) \right]^{2}    (1)

\mathrm{PSNR} = 10 \times \log_{10} \frac{\left( 2^{8} - 1 \right)^{2}}{\frac{1}{H \times W} \sum_{i=1}^{H} \sum_{j=1}^{W} \left[ I(i,j) - \tilde{I}(i,j) \right]^{2}}    (2)

\mathrm{SSIM} = \frac{\left( 2 \mu_{1} \mu_{2} + C_{1} \right) \left( 2 \sigma_{12} + C_{2} \right)}{\left( \mu_{1}^{2} + \mu_{2}^{2} + C_{1} \right) \left( \sigma_{1}^{2} + \sigma_{2}^{2} + C_{2} \right)}    (3)
(3) Spearman rank order correlation coefficient
The evaluation index was used to measure the correlation between the rank of two variables. In image quality evaluation, it can be used to evaluate the consistency between objective evaluation indicators and subjective evaluation results.
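As a quick computational reference for the objective metrics in Equations (1)–(3), the following sketch computes MSE directly and obtains PSNR and SSIM from scikit-image; the 8-bit data range of 255 is an assumption matching Equation (2).

import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def evaluate_denoising(clean, denoised, data_range=255):
    """Objective evaluation of a denoised result against the clean reference image."""
    clean = clean.astype(np.float64)
    denoised = denoised.astype(np.float64)
    mse = np.mean((clean - denoised) ** 2)                                   # Equation (1)
    psnr = peak_signal_noise_ratio(clean, denoised, data_range=data_range)   # Equation (2)
    ssim = structural_similarity(clean, denoised, data_range=data_range)     # Equation (3)
    return mse, psnr, ssim

# Usage sketch with random stand-in 8-bit images.
rng = np.random.default_rng(0)
clean = rng.integers(0, 256, size=(64, 64)).astype(np.uint8)
noisy = np.clip(clean + rng.normal(0, 25, clean.shape), 0, 255).astype(np.uint8)
print(evaluate_denoising(clean, noisy))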

5. Experimental Result

To compare the denoising performance of traditional methods and deep neural networks, quantitative and qualitative evaluation experiments were conducted on the Set12, BSD68, CBSD68, Kodak24, McMaster, DND, SIDD, PolyU, and CC datasets. The experiments were run on Windows 11 with an 11th Gen Intel(R) Core(TM) i7-11700T CPU and 32 GB of RAM; the software environment was MATLAB 2023b, Python 3.9, and PyTorch 1.1.0.

5.1. Grayscale White Gaussian Noise Image Denoising

For grayscale images containing Gaussian white noise, traditional denoising methods were compared with deep learning denoising methods. Four traditional approaches (NLM, BM3D, WNNM, and EPLL) and ten deep learning-based denoising methods were included. The ten deep learning-based methods can be divided into three categories: (1) classical single models trained for each noise level (MLP, TNRD, DnCNN, DudeNet); (2) classical CNN-based blind denoising models that can handle various noise levels (IRCNN, ECNDNet, ADNet, FFDNet); (3) models that integrate the nonlocal self-similarity priors of traditional methods into deep learning (NLRN, RNAN). Set12 was used as the test dataset. Gaussian white noise with noise levels of 15, 25, and 50 was added to the images in Set12, and the noisy images were then denoised.
The average PSNR values of the various approaches on the Set12 dataset with noise levels of 15, 25, and 50 are shown in Table 8, with bold text marking the optimal results. It is evident that NLRN achieved the highest PSNR: at the three noise levels, its average PSNR exceeded that of BM3D by about 0.77 dB, 0.81 dB, and 0.88 dB, respectively. The other deep learning denoising methods also performed well compared with the traditional methods, as shown in Figure 2. The PSNR advantage of the deep learning methods grew as the noise intensity increased, indicating that deep denoising methods are better at removing strong noise and showing the advantages of deep learning technology.
Figure 3 and Figure 4 show the denoising results of different methods on grayscale white-noise images from the Set12 dataset at noise levels of 15 and 25, respectively. Among the traditional algorithms, BM3D showed excellent denoising performance: it effectively removed the noise while preserving image details and texture, and the denoised image had a clear visual effect and complete structure, as shown in Figure 3d and Figure 4d. WNNM also denoised well; while suppressing noise, it retained image edges and detail information, especially when processing images with complex textures, as shown in Figure 3e and Figure 4e. EPLL effectively removed noise and had certain advantages in recovering high-frequency details, but it may produce artifacts, as shown in Figure 3f and Figure 4f. The deep learning methods performed better than the traditional methods. Compared with FFDNet and IRCNN, DudeNet had a better denoising effect, especially for images with rich textures, as shown in Figure 3i–k. NLRN combined nonlocal operations with recurrent neural networks: the nonlocal operations captured global context by computing the similarity between each pixel and all of its neighborhood pixels and can handle small changes in complex scenes, while the recurrent structure allowed the model to refine the restoration over several iterations, approaching the optimal solution step by step. As a result, the restored image edges and textures were sharper.
To verify the denoising efficiency of the different algorithms, five grayscale images of size 512 × 512 were selected from the Set12 dataset and tested in a single-CPU environment. Table 9 shows the average time needed to process a single image with each algorithm. As can be seen from Table 9, among the traditional denoising algorithms, NLM had low efficiency due to excessive similarity calculations; BM3D involved more matrix operations and data processing, and its efficiency was moderate; WNNM required even more computation than NLM and was very inefficient; and EPLL was not efficient because it needed to solve complex optimization problems. Considering both denoising effect and efficiency, BM3D was slightly inferior to WNNM in denoising performance but far better in efficiency. Among the deep learning algorithms, DudeNet showed denoising performance similar to NLRN but relatively low efficiency, whereas NLRN had the best denoising performance together with good efficiency.

5.2. Color Image Denoising

For color image denoising, the classical traditional denoising approach CBM3D and six deep learning methods were selected for comparative analysis. The CBSD68, Kodak24, Set5, and McMaster datasets were used as test sets. Gaussian white noise with noise levels of 15, 25, and 50 was added to these datasets, and the denoising experiments were carried out on the resulting noisy images.
Table 10 shows the denoising results of the different approaches on the color image datasets, with bold text indicating the optimal results. Taking the CBSD68 dataset as an example, when the noise level was 25, the average PSNR of DnCNN, FFDNet, DSNet [120], BRDNet, RPCNN [121], and IRCNN exceeded that of the traditional algorithm CBM3D by 0.6 dB, 0.5 dB, 0.57 dB, 0.72 dB, 0.53 dB, and 0.45 dB, respectively. The deep learning-based denoising methods therefore show significant advantages. In particular, the BRDNet denoising approach achieved the best results on all four datasets. Figure 5 shows the visual results of several methods for an image from the Set5 dataset with a noise level of 25, Figure 6 for an image from the CBSD68 dataset with a noise level of 50, and Figure 7 for an image from the McMaster dataset with a noise level of 50. It is clear that BRDNet recovered more details and textures than the other approaches.
To verify the denoising efficiency of the different algorithms, five color images of size 280 × 280 from the Set5 dataset were tested. Table 11 shows the average time taken by the different algorithms to process a single image. Among these algorithms, FFDNet was efficient and can quickly denoise color images; DnCNN and the traditional algorithm CBM3D also had high processing efficiency; BRDNet's efficiency was in the medium range; and IRCNN had relatively low efficiency due to its iterative calculations.

5.3. Real Image Denoising

To test the denoising performance of deep learning technology in real noise images, public datasets, such as DND, SIDD, PolyU, and CC, were selected to carry out experiments. Three traditional denoising methods were compared with eight deep neural network denoising methods. The PSNR and SSIM values for various denoising approaches on the DND dataset and the SIDD dataset are listed in Table 12. Bold represents the optimal results. As can be seen from Table 12, the deep learning denoising algorithm had significant advantages over traditional denoising algorithms in DND and SIDD datasets. Taking SIDD as an example, compared with traditional denoising algorithms (CBM3D, EPLL, and KSVD), the GMSNet and MPRNet algorithms can increase the average PSNR by about 14 dB and SSIM by about 0.34. On the DND dataset, GMSNet performed equally well, with a PSNR of about 5 dB higher and SSIM of about 0.11 higher than traditional denoising methods. Through testing on DND and SIDD datasets, the performance of deep neural networks in the field of image denoising was better than that of traditional denoising methods.
Table 13 shows the PSNR and SSIM values for different denoising approaches on the CC and PolyU datasets, with bold indicating the optimal results. It can be seen that the traditional denoising approaches CBM3D and NLH displayed very competitive performance compared to the deep neural network algorithms DNCNN, FFDNet, and MIRNet in both datasets, whereas the deep neural network methods did not always demonstrate superiority over the traditional denoisers, which was largely due to insufficient training data.

6. Challenges, Opportunities, and Future Directions

In the area of digital image denoising, machine learning has shown significant advantages with its powerful feature extraction and pattern recognition capabilities. For example, it can effectively deal with complex noise patterns, denoising and retaining image details with excellent results. However, many problems have been exposed in practical applications: (1) Model training highly relied on the quality and quantity of training data. A slight deviation or insufficiency of data will seriously affect the denoising effect. (2) It consumed a great deal of computing resources. The deep neural network model had a complex structure, and the training and inference process required a large amount of GPU computing resources and a long time, which constrains its applications for devices with resource constraints, like mobile devices and embedded systems. (3) The trained model lacks generality. The characteristics of different types of noise were very different, and the existing machine learning denoising algorithms are often designed for specific types of noise and lack versatility. When faced with mixed noise or unknown kinds of noise, the denoising results of the model will be much less effective.
With the continuous development and maturity of new AI technologies, the research on digital image-denoising algorithms will show new research trends. (1) Explore new neural network architectures, such as developing more efficient convolutional neural network variants or Transformer-derived architectures, optimizing the network structure, reducing the number of parameters, and decreasing the computational complexity to improve the denoising efficiency and performance of the model. The denoising effect can also be improved by designing a more reasonable attention mechanism so that the model focuses more on the key regions of the image. (2) By fusing multimodal data, such as depth information and spectral information, to provide richer image features, it helps the model to more accurately distinguish between noise and real image information, to achieve a better denoising effect. For example, in medical imaging, information from MRI and CT images is combined for denoising and diagnosis. (3) Reinforcement learning is introduced into the field of image denoising, which allows the model to learn the optimal denoising strategy through interaction with the environment. Reinforcement learning can dynamically adjust the parameters and steps of the denoising algorithm according to different image noise situations and application requirements, and it can achieve an adaptive denoising process. (4) Domain adaptive and transfer learning techniques can be used. The models trained in one domain are quickly applied to other domains, reducing the dependence on data from the new domain and the cost of retraining. For example, a denoising model trained on natural images migrates to the domain of medical images by fine-tuning the model parameters to adapt to the characteristics of medical images.

7. Conclusions

With the development of digital technology, image processing has been widely used in various fields and has become an extremely important and indispensable technology. Before image processing, the effective removal of noise generated during image acquisition, transmission, and other links is a prerequisite for ensuring the accuracy and reliability of subsequent processing. This review carried out a comprehensive and in-depth exploration of the field of image denoising and made a detailed comparison, analysis, and systematic summary of traditional methods and deep learning methods. Firstly, the classical denoising algorithms were described and compared in detail. For simple noise, they can quickly suppress the noise and have the advantages of low computational cost and easy implementation; however, traditional algorithms handle complex or strong noise poorly. Then, this review focused on deep learning techniques for image noise reduction. The research showed that deep learning technology, with its powerful feature extraction and model fitting capabilities, has opened up new ideas and methods for image noise reduction. Deep convolutional neural networks can accurately learn the difference between noise and image features; when removing white Gaussian noise, they not only effectively reduce the noise level but also better preserve the edge and texture details of the image. Deep learning models combined with attention mechanisms can adaptively learn and estimate unknown noise characteristics to achieve more effective denoising of blind noise images. Generative adversarial networks (GANs) show unique advantages in denoising real noise images and mixed noise images. To show the performance of different methods more directly, the denoising performance of different networks was verified and compared on standard datasets. The evaluation indexes of deep learning networks, such as PSNR and SSIM, were better than those of traditional methods, which further advances deep learning technology in the field of image denoising. Finally, the challenges, opportunities, and future development directions of deep learning denoising algorithms were summarized. At present, deep learning denoising algorithms face challenges such as high model complexity, large computing resource requirements, and dependence on large-scale high-quality datasets. However, with the rapid development of hardware technology and the continuous emergence of new algorithms, future deep learning denoising algorithms will be more intelligent, adaptive, and deeply integrated with traditional methods. This review provides a valuable reference for those seeking to apply digital image-denoising technology in a specific field or to promote the development of this field through innovative algorithm design.

Author Contributions

Conceptualization, J.M. and L.S.; methodology, J.M. and J.C.; writing—original draft preparation, J.M.; writing—review and editing, J.M., J.C. and S.Y.; funding acquisition, S.Y. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the National Natural Science Foundation of China (grant #: 12174004) and the Science and Technology Plan Project of Ankang City (grant #: AK202-GY03-2).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Data are contained within the article.

Acknowledgments

The authors would like to thank the anonymous reviewers for their valuable comments.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Zhang, J.; Lu, M.X.; Li, J.K. Noisy image denoising method based on residual dense convolutional self-coding. Comput. Sci. 2024, 51, 555–561. [Google Scholar]
  2. Katkovnik, V.; Egiazarian, K. Sparse phase imaging based on complex domain nonlocal BM3D techniques. Digit. Signal Process. 2017, 63, 72–85. [Google Scholar] [CrossRef]
  3. Shevkunov, I.; Katkovnik, V.; Claus, D.; Pedrini, G.; Petrov, N.V.; Egiazarian, K. Hyperspectral phase imaging based on denoising in complex-valued eigensubspace. Opt. Lasers Eng. 2020, 127, 105973. [Google Scholar] [CrossRef]
  4. Dutta, B.; Root, K.; Ullmann, I.; Wagner, F.; Mayr, M.; Seuret, M.; Huang, Y. Deep learning for terahertz image denoising in nondestructive historical document analysis. Sci. Rep. 2022, 12, 22554. [Google Scholar] [CrossRef]
  5. Wu, G.Y.; Yuan, Z.Q.; Liang, Y.M. Unsupervised denoising method for retinal OCT images based on deep learning. Acta Opt. Sin. 2023, 43, 2010002. [Google Scholar]
  6. Bianco, V.; Memmolo, P.; Paturzo, M.; Finizio, A.; Javidi, B.; Ferraro, P. Quasi noise-free digital holography. Light Sci. Appl. 2016, 5, e16142. [Google Scholar] [CrossRef]
  7. Ogose, S. Optimum Gaussian filter for MSK with 2-bit differential detection. IEICE Trans. 1983, 66, 459–460. [Google Scholar]
  8. Martins, A.L.; Mascarenhas, N.D.; Suazo, C.A. Spatio-temporal resolution enhancement of vocal tract MRI sequences based on image registration. Integr. Comput. Aided Eng. 2011, 18, 143–155. [Google Scholar] [CrossRef]
  9. Raman, S.; Chaudhuri, S. Bilateral Filter Based Compositing for Variable Exposure Photography. Eurographics 2009, 1, 1–4. [Google Scholar]
  10. Dabov, K.; Foi, A.; Katkovnik, V.; Egiazarian, K. Image denoising by sparse 3-D transform-domain collaborative filtering. IEEE Trans. Image Process. 2007, 16, 2080–2095. [Google Scholar] [CrossRef]
  11. Cai, S.; Zheng, X.; Dong, X. CBM3d, a novel subfamily of family 3 carbohydrate-binding modules identified in Cel48A exoglucanase of Cellulosilyticum ruminicola. J. Bacteriol. 2011, 193, 5199–5206. [Google Scholar] [CrossRef] [PubMed]
  12. Maggioni, M.; Katkovnik, V.; Egiazarian, K.; Foi, A. Nonlocal transform-domain filter for volumetric data denoising and reconstruction. IEEE Trans. Image Process. 2012, 22, 119–133. [Google Scholar] [CrossRef] [PubMed]
  13. Mafi, M.; Martin, H.; Cabrerizo, M.; Andrian, J.; Barreto, A.; Adjouadi, M. A comprehensive survey on impulse and Gaussian denoising filters for digital images. Signal Process. 2019, 157, 236–260. [Google Scholar] [CrossRef]
  14. Yadava, P.C.; Srivastava, S. Denoising of poisson-corrupted microscopic biopsy images using fourth-order partial differential equation with ant colony optimization. Biomed. Signal Process. Control. 2024, 93, 106207. [Google Scholar] [CrossRef]
  15. Kulya, M.; Petrov, N.V.; Katkovnik, V.; Egiazarian, K. Terahertz pulse time-domain holography with balance detection: Complex-domain sparse imaging. Appl. Opt. 2019, 58, G61–G70. [Google Scholar] [CrossRef]
  16. Tang, C.; Xv, J.L.; Zhou, Z.G. Improved curvature filtering strong noise image denoising method. Chin. J. Image Graph. 2019, 24, 346–356. [Google Scholar]
  17. Cheng, L.B.; Li, X.Y.; Li, C. Remote sensing image denoising method based on curvilinear wave transform and goodness-of-fit test. J. Jilin Univ. 2023, 53, 3207–3213. [Google Scholar]
  18. Zhang, H.J.; Zhang, D.M.; Yan, W. Wavelet transform image denoising algorithm based on improved threshold function. Comput. Appl. Res. 2020, 37, 1545–1548. [Google Scholar]
  19. Elad, M.; Aharon, M. Image denoising via sparse and redundant representations over learned dictionaries. IEEE Trans. Image Process. 2006, 15, 3736–3745. [Google Scholar]
  20. Li, G.H.; Li, J.J.; Fan, H. Adaptive matching tracking image denoising algorithm. Comput. Sci. 2020, 47, 176–185. [Google Scholar]
  21. Li, K.; Wang, C.; Ming, X.F. A denoising method for chip ultrasound signals based on improved multipath matched tracking. J. Instrum. 2023, 44, 93–100. [Google Scholar]
  22. Yuan, X.J.; Zhou, T.; Li, C. Research on image denoising algorithm based on non-local clustering with sparse prior. Comput. Eng. Appl. 2020, 56, 177–185. [Google Scholar]
  23. Buades, A.; Coll, B.; Morel, J.M. A non-local algorithm for image denoising. In Proceedings of the 2005 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR’05), San Diego, CA, USA, 20–25 June 2005; pp. 60–65. [Google Scholar]
  24. Bai, T.L.; Zhang, C.F. Image Denoising based on External Non-local self-similarity priors. Telecommun. Eng. 2021, 61, 211–217. [Google Scholar]
  25. Ziad, L.; Oubbih, O.; Karami, F.; Sniba, F. A nonlocal model for image restoration corrupted by multiplicative noise. Signal Image Video Process. 2024, 18, 5701–5718. [Google Scholar] [CrossRef]
  26. Lecouat, B.; Ponce, J.; Mairal, J. Fully trainable and interpretable non-local sparse models for image restoration. In Proceedings of the Computer Vision–ECCV 2020: 16th European Conference, Glasgow, UK, 23–28 August 2020; pp. 238–254. [Google Scholar]
  27. Rudin, L.I.; Osher, S.; Fatemi, E. Nonlinear total variation based noise removal algorithms. Phys. D Nonlinear Phenom. 1992, 60, 259–268. [Google Scholar] [CrossRef]
  28. Osher, S.; Burger, M.; Goldfarb, D. An iterative regularization method for total variation-based image restoration. Multiscale Model. Simul. 2005, 4, 460–489. [Google Scholar] [CrossRef]
  29. Zuo, W.; Zhang, L.; Song, C. Gradient histogram estimation and preservation for texture enhanced image denoising. IEEE Trans. Image Process. 2014, 23, 2459–2472. [Google Scholar]
  30. Roth, S.; Black, M.J. Fields of experts: A framework for learning image priors. In Proceedings of the 2005 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR’05), San Diego, CA, USA, 20–25 June 2005; pp. 860–867. [Google Scholar]
  31. Gu, S.; Zhang, L.; Zuo, W.; Feng, X. Weighted nuclear norm minimization with application to image denoising. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Columbus, OH, USA, 24–27 June 2014; pp. 2862–2869. [Google Scholar]
  32. Liu, C.S.; Zhao, Z.G.; Li, Q. Enhanced denoising algorithm for low-rank representation images. Comput. Eng. Appl. 2020, 56, 216–225. [Google Scholar]
  33. Lv, J.R.; Luo, X.G.; Qi, S.F. Weighted kernel-paradigm minimization image denoising with preserving local structure. Adv. Lasers Optoelectron. 2019, 56, 57–64. [Google Scholar]
  34. Chen, J. Application of full-variance curvilinear waveform transform in medical image denoising. J. Yanbian Univ. 2021, 47, 361–364. [Google Scholar]
  35. Chen, Y.; Xu, H.L.; Xing, Q.; Zhuang, J. SICM image noise reduction algorithm combining wavelet transform and bilateral filtering. Electron. Meas. Technol. 2022, 45, 114–119. [Google Scholar]
  36. Wu, X.L.; Wang, Z.Z.; Xi, B.Q.; Zhen, R. Ground-penetrating radar denoising based on wavelet adaptive thresholding method. Sci. Technol. Eng. 2023, 23, 4686–4692. [Google Scholar]
  37. Romano, Y.; Elad, M. Boosting of Image Denoising Algorithms. Siam J. Imaging Sci. 2015, 8, 1187–1219. [Google Scholar] [CrossRef]
  38. Dong, W.; Zhang, L.; Shi, G. Nonlocally centralized sparse representation for image restoration. IEEE Trans. Image Process. 2012, 22, 1620–1630. [Google Scholar] [CrossRef]
  39. Xu, J.; Zhang, L.; Zhang, D. A trilateral weighted sparse coding scheme for real-world image denoising. In Proceedings of the European Conference on Computer Vision (ECCV), Munich, Germany, 8–14 September 2018; pp. 20–36. [Google Scholar]
  40. Mairal, J.; Bach, F.; Ponce, J.; Sapiro, G.; Zisserman, A. Non-local sparse models for image restoration. In Proceedings of the 2009 IEEE 12th International Conference on Computer Vision, Kyoto, Japan, 29 September–2 October 2009; pp. 2272–2279. [Google Scholar]
  41. Wei, Y.; Han, L.L.; Li, L. Joint group dictionary-based structural sparse representation for image restoration. Digit. Signal Process. 2023, 137, 104029. [Google Scholar]
  42. Qian, C.; Chang, D.X. Graph Laplace regularised sparse transform learning image denoising algorithm. Comput. Eng. Appl. 2022, 58, 232–239. [Google Scholar]
  43. Zoran, D.; Weiss, Y. From learning models of natural image patches to whole image restoration. In Proceedings of the 2011 International Conference on Computer Vision, Barcelona, Spain, 6–13 November 2011; pp. 479–486. [Google Scholar]
  44. Hou, Y.K.; Xu, J.; Liu, M.X. NLH: A blind pixel-level non-local method for real-world image denoising. IEEE Trans. Image Process. 2020, 29, 5121–5135. [Google Scholar] [CrossRef]
  45. Chen, Y.; Cao, X.; Zhao, Q.; Meng, D.; Xu, Z. Denoising hyperspectral image with non-iid noise structure. IEEE Trans. Syst. Man Cybern. 2018, 48, 1054–1066. [Google Scholar]
  46. Zuo, W.; Zhang, L.; Song, C.; Zhang, D. Texture enhanced image denoising via gradient histogram preservation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Portland, OR, USA, 25–27 June 2013; pp. 1203–1210. [Google Scholar]
  47. Wang, Y.; Peng, J.; Zhao, Q.; Leung, Y.; Zhao, X.L.; Meng, D. Hyperspectral image restoration via total variation regularized low-rank tensor decomposition. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2017, 11, 1227–1243. [Google Scholar] [CrossRef]
  48. Xu, J.; Zhang, L.; Zhang, D.; Feng, X. Multi-channel weighted nuclear norm minimization for real color image denoising. In Proceedings of the IEEE International Conference on Computer Vision, Venice, Italy, 22–29 October 2017; pp. 1096–1104. [Google Scholar]
  49. Hu, H.; Froment, J.; Liu, Q. A note on patch-based low-rank minimization for fast image denoising. J. Vis. Commun. Image Represent. 2018, 50, 100–110. [Google Scholar] [CrossRef]
  50. Xu, J.; Zhang, L.; Zuo, W.; Zhang, D.; Feng, X. Patch group-based nonlocal self-similarity prior learning for image denoising. In Proceedings of the IEEE International Conference on Computer Vision, Santiago, Chile, 13–16 December 2015; pp. 244–252. [Google Scholar]
  51. Zhuang, L.N.; Bioucas-dias, J.M. Fast hyperspectral image denoising and inpainting based on low-rank and sparse representations. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2018, 11, 730–742. [Google Scholar] [CrossRef]
  52. Xu, J.; Zhang, L.; Zhang, D. External prior guided internal prior learning for real-world noisy image denoising. IEEE Trans. Image Process. 2018, 27, 2996–3010. [Google Scholar] [CrossRef] [PubMed]
  53. Mao, X.; Shen, C.; Yang, Y.B. Image restoration using very deep convolutional encoder-decoder networks with symmetric skip connections. Adv. Neural Inf. Process. Syst. 2016, 29, 2802–2810. [Google Scholar]
  54. Tai, Y.; Yang, J.; Liu, X.; Xu, C. Memnet: A persistent memory network for image restoration. In Proceedings of the IEEE International Conference on Computer Vision, Venice, Italy, 22–29 October 2017; pp. 4539–4547. [Google Scholar]
  55. Zhang, K.; Zuo, W.; Chen, Y.; Meng, D.; Zhang, L. Beyond a Gaussian denoiser: Residual learning of deep CNN for image denoising. IEEE Trans. Image Process. 2016, 26, 3142–3155. [Google Scholar] [CrossRef]
  56. Zhang, K.; Zuo, W.; Zhang, L. Ffdnet: Toward a fast and flexible solution for CNN-based image denoising. IEEE Trans. Image Process. 2018, 27, 4608–4622. [Google Scholar] [CrossRef]
  57. Zhang, S.; Liu, C.; Zhang, Y.; Liu, S.; Wang, X. Multi-scale feature learning convolutional neural network for image denoising. Sensors 2023, 23, 7713. [Google Scholar] [CrossRef]
  58. Valsesia, D.; Fracastoro, G.; Magli, E. Deep graph-convolutional image denoising. IEEE Trans. Image Process. 2020, 29, 8226–8237. [Google Scholar] [CrossRef]
  59. Burger, H.C.; Schuler, C.J.; Harmeling, S. Image denoising: Can plain neural networks compete with BM3D? In Proceedings of the 2012 IEEE Conference on Computer Vision and Pattern Recognition, Providence, RI, USA, 16–21 June 2012; pp. 2392–2399. [Google Scholar]
  60. Chen, Y.; Pock, T. Trainable nonlinear reaction-diffusion: A flexible framework for fast and effective image restoration. IEEE Trans. Pattern Anal. Mach. Intell. 2016, 39, 1256–1272. [Google Scholar] [CrossRef]
  61. Tian, C.; Xu, Y.; Fei, L.; Wang, J.; Wen, J.; Luo, N. Enhanced CNN for image denoising. CAAI Trans. Intell. Technol. 2019, 4, 17–23. [Google Scholar] [CrossRef]
  62. Schmidt, U.; Roth, S. Shrinkage fields for effective image restoration. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Columbus, OH, USA, 24–27 June 2014; pp. 2774–2781. [Google Scholar]
  63. Chen, M.Y.; Li, R.X.; Liu, H. Pre-filtering-based group sparse residual constraint image denoising model. Transducer Microsyst. Technol. 2020, 39, 48–51. [Google Scholar]
  64. Tian, C.; Xu, Y.; Li, Z.; Zuo, W.; Fei, L.; Liu, H. Attention-guided CNN for image denoising. Neural Netw. 2020, 124, 117–129. [Google Scholar] [CrossRef] [PubMed]
  65. Tian, C.; Xu, Y.; Zuo, W. Image denoising using deep CNN with batch renormalization. Neural Netw. 2020, 121, 461–473. [Google Scholar] [CrossRef] [PubMed]
  66. Liu, D.; Wen, B.H.; Fan, Y.C. Non-local recurrent network for image restoration. In Proceedings of the 31st Annual Conference on Neural Information Processing Systems, Montreal, QC, Canada, 3–8 December 2018; pp. 1680–1689. [Google Scholar]
  67. Mou, C.; Zhang, J.; Wu, Z. Dynamic attentive graph learning for image restoration. In Proceedings of the IEEE/CVF International Conference on Computer Vision, Nashville, TN, USA, 11–17 October 2021; pp. 4328–4337. [Google Scholar]
  68. Miranda-González, A.A.; Rosales-Silva, A.J.; Mújica-Vargas, D.; Escamilla-Ambrosio, P.J.; Gallegos-Funes, F.J.; Vianney-Kinani, J.M.; Velázquez-Lozada, E.; Pérez-Hernández, L.M.; Lozano-Vázquez, L.V. Denoising Vanilla Autoencoder for RGB and GS Images with Gaussian Noise. Entropy 2023, 25, 1467. [Google Scholar] [CrossRef] [PubMed]
  69. Tang, Z.; Jian, X. Thermal fault diagnosis of complex electrical equipment based on infrared image recognition. Sci. Rep. 2024, 14, 5547. [Google Scholar] [CrossRef]
  70. Huang, J.J.; Dragotti, P.L. WINNet: Wavelet-Inspired Invertible Network for Image Denoising. IEEE Trans. Image Process. 2022, 31, 4377–4392. [Google Scholar] [CrossRef]
  71. Ren, C.; He, X.; Wang, C.; Zhao, Z. Adaptive consistency prior based deep network for image denoising. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Nashville, TN, USA, 19–25 June 2021; pp. 8596–8606. [Google Scholar]
  72. Hu, Y.; Xu, S.; Cheng, X.; Zhou, C.; Hu, Y. A Triple Deep Image Prior Model for Image Denoising Based on Mixed Priors and Noise Learning. Appl. Sci. 2023, 13, 5265. [Google Scholar] [CrossRef]
  73. Li, X.; Han, J.; Yuan, Q.; Zhang, Y.; Fu, Z.; Zou, M.; Huang, Z. FEUSNet: Fourier Embedded U-Shaped Network for Image Denoising. Entropy 2023, 25, 1418. [Google Scholar] [CrossRef]
  74. Lai, Z.; Wei, K.; Fu, Y. Deep plug-and-play prior for hyperspectral image restoration. Neurocomputing 2022, 481, 281–293. [Google Scholar] [CrossRef]
  75. Zhang, J.; Cao, L.; Wang, T.; Fu, W.; Shen, W. NHNet: A non-local hierarchical network for image denoising. IET Image Process. 2022, 16, 2446–2456. [Google Scholar] [CrossRef]
  76. Yan, H.; Chen, X.; Tan, V.Y.; Yang, W.; Wu, J.; Feng, J. Unsupervised image noise modeling with self-consistent GAN. arXiv 2019, arXiv:1906.05762. [Google Scholar]
  77. Zhao, D.; Ma, L.; Li, S.; Yu, D. End-to-end denoising of dark burst images using recurrent fully convolutional networks. arXiv 2019, arXiv:1904.07483. [Google Scholar]
  78. Abuya, T.K.; Rimiru, R.M.; Okeyo, G.O. An Image Denoising Technique Using Wavelet-Anisotropic Gaussian Filter-Based Denoising Convolutional Neural Network for CT Images. Appl. Sci. 2023, 13, 12069. [Google Scholar] [CrossRef]
  79. Gou, Y.; Hu, P.; Lv, J.; Zhou, J.T.; Peng, X. Multi-scale adaptive network for single image denoising. Adv. Neural Inf. Process. Syst. 2022, 35, 14099–14112. [Google Scholar]
  80. Bao, L.; Yang, Z.; Wang, S.; Bai, D.; Lee, J. Real image denoising based on multi-scale residual dense block and cascaded U-Net with block-connection. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, Seattle, WA, USA, 14–19 June 2020; pp. 448–449. [Google Scholar]
  81. Zhang, K.; Zuo, W.; Gu, S.; Zhang, L. Learning deep CNN denoiser prior for image restoration. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 22–25 July 2017; pp. 3929–3938. [Google Scholar]
  82. Yue, Z.; Zhao, Q.; Zhang, L.; Meng, D. Dual adversarial network: Toward real-world noise removal and noise generation. In Proceedings of the Computer Vision–ECCV 2020: 16th European Conference, Glasgow, UK, 23–28 August 2020; pp. 41–58. [Google Scholar]
  83. Zamir, S.W.; Arora, A.; Khan, S.; Hayat, M.; Khan, F.S.; Yang, M.H.; Shao, L. Learning enriched features for real image restoration and enhancement. In Proceedings of the Computer Vision–ECCV 2020: 16th European Conference, Glasgow, UK, 23–28 August 2020; pp. 492–511. [Google Scholar]
  84. Yuan, N.; Wang, L.; Ye, C.; Deng, Z.; Zhang, J.; Zhu, Y. Self-supervised structural similarity-based convolutional neural network for cardiac diffusion tensor image denoising. Med. Phys. 2023, 50, 6137–6150. [Google Scholar] [CrossRef]
  85. Du, H.; Yuan, N.; Wang, L. Node2Node: Self-Supervised Cardiac Diffusion Tensor Image Denoising Method. Appl. Sci. 2023, 13, 10829. [Google Scholar] [CrossRef]
  86. Huang, R.; Li, X.; Fang, Y.; Cao, Z.; Xia, C. Robust Hyperspectral Unmixing with Practical Learning-Based Hyperspectral Image Denoising. Remote Sens. 2023, 15, 1058. [Google Scholar] [CrossRef]
  87. Fan, C.M.; Liu, T.J.; Liu, K.H. SUNet: Swin Transformer UNet for Image Denoising. In Proceedings of the 2022 IEEE International Symposium on Circuits and Systems (ISCAS), Austin, TX, USA, 27 May–1 June 2022; pp. 2333–2337. [Google Scholar]
  88. Zhang, J.; Zhu, Y.; Yu, W.; Ma, J. Considering Image Information and Self-Similarity: A Compositional Denoising Network. Sensors 2023, 23, 5915. [Google Scholar] [CrossRef]
  89. Anwar, S.; Barnes, N. Real Image Denoising With Feature Attention. In Proceedings of the 2019 IEEE/CVF International Conference on Computer Vision (ICCV), Seoul, Republic of Korea, 27 October–2 November 2019; pp. 3155–3164. [Google Scholar]
  90. Zhang, J.; Qu, M.; Wang, Y.; Cao, L. A multi-head convolutional neural network with multi-path attention improves image denoising. In Proceedings of the Pacific Rim International Conference on Artificial Intelligence, Shanghai, China, 10–13 November 2022; pp. 338–351. [Google Scholar]
  91. Yang, J.; Liu, X.; Song, X.; Li, K. Estimation of signal-dependent noise level function using multi-column convolutional neural network. In Proceedings of the 2017 IEEE International Conference on Image Processing (ICIP), Beijing, China, 17–20 September 2017; pp. 2418–2422. [Google Scholar]
  92. Guo, S.; Yan, Z.; Zhang, K.; Zuo, W.; Zhang, L. Toward convolutional blind denoising of real photographs. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Long Beach, CA, USA, 16–20 June 2019; pp. 1712–1722. [Google Scholar]
  93. Tao, H.; Guo, W.; Han, R.; Yang, Q.; Zhao, J. Rdasnet: Image denoising via a residual dense attention similarity network. Sensors 2023, 23, 1486. [Google Scholar] [CrossRef]
  94. Wei, X.; Xiao, J.; Gong, Y. Blind Hyperspectral Image Denoising with Degradation Information Learning. Remote Sens. 2023, 15, 490. [Google Scholar] [CrossRef]
  95. Tian, C.; Xu, Y.; Zuo, W.; Du, B.; Lin, C.W.; Zhang, D. Designing and training of a dual CNN for image denoising. Knowl. Based Syst. 2021, 226, 106949. [Google Scholar] [CrossRef]
  96. Chen, J.; Chen, J.; Chao, H.; Yang, M. Image blind denoising with generative adversarial network-based noise modeling. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–22 June 2018; pp. 3155–3164. [Google Scholar]
  97. Isogawa, K.; Ida, T.; Shiodera, T.; Takeguchi, T. Deep shrinkage convolutional neural network for adaptive noise reduction. IEEE Signal Process. Lett. 2017, 25, 224–228. [Google Scholar] [CrossRef]
  98. Jaszewski, M.; Parameswaran, S. Exploring efficient and tunable convolutional blind image denoising networks. In Proceedings of the 2019 IEEE Applied Imagery Pattern Recognition Workshop (AIPR), Washington, DC, USA, 15–17 October 2019; pp. 1–9. [Google Scholar]
  99. Chen, K.; Pu, X.; Ren, Y.; Qiu, H.; Li, H.; Sun, J. Low-dose ct image blind denoising with graph convolutional networks. In Proceedings of the International Conference on Neural Information Processing, Bangkok, Thailand, 23–27 November 2020; pp. 423–435. [Google Scholar]
  100. Zhang, K.; Zuo, W.; Zhang, L. Learning a single convolutional super-resolution network for multiple degradations. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 19–21 June 2018; pp. 3262–3271. [Google Scholar]
  101. Li, B.Z.; Liu, X.; Zhao, Z.X.; Li, L.; Jin, W.Q. Self-supervised two-stage denoising algorithm for EBAPS images based on blind spot network. Acta Opt. Sin. 2024, 44, 2210001. [Google Scholar]
  102. Siddique, N.; Paheding, S.; Elkin, C.P.; Devabhaktuni, V. U-net and its variants for medical image segmentation: A review of theory and applications. IEEE Access 2021, 9, 82031–82057. [Google Scholar] [CrossRef]
  103. Zhang, Y.; Li, D.; Law, K.L.; Wang, X.; Qin, H.; Li, H. IDR: Self-supervised image denoising via iterative data refinement. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, New Orleans, LA, USA, 21–24 June 2022; pp. 2098–2107. [Google Scholar]
  104. Yeh, R.A.; Lim, T.Y.; Chen, C.; Schwing, A.G.; Hasegawa-Johnson, M.; Do, M.N. Image restoration with deep generative models. In Proceedings of the 2018 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Calgary, AB, Canada, 15–20 April 2018; pp. 6772–6776. [Google Scholar]
  105. Islam, M.T.; Rahman, S.M.; Ahmad, M.O.; Swamy, M.N.S. Mixed Gaussian-impulse noise reduction from images using convolutional neural network. Signal Process. Image Commun. 2018, 68, 26–41. [Google Scholar] [CrossRef]
  106. Kim, Y.; Soh, J.W.; Park, G.Y.; Cho, N.I. Transfer learning from synthetic to real-noise denoising with adaptive instance normalization. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA, 14–19 June 2020; pp. 3482–3492. [Google Scholar]
  107. Wang, H.; Yang, X.; Wang, Z.; Yang, H.; Wang, J.; Zhou, X. Improved CycleGAN for Mixed Noise Removal in Infrared Images. Appl. Sci. 2024, 14, 6122. [Google Scholar] [CrossRef]
  108. Chiang, Y.W.; Sullivan, B.J. Multi-frame image restoration using a neural network. In Proceedings of the 32nd Midwest Symposium on Circuits and Systems, Champaign, IL, USA, 14–16 August 1989; pp. 744–747. [Google Scholar]
  109. Zhou, Y.T.; Chellappa, R.; Vaid, A.; Jenkins, B.K. Image restoration using a neural network. IEEE Trans. Acoust. Speech Signal Process. 1988, 36, 1141–1151. [Google Scholar] [CrossRef]
  110. Martin, D.; Fowlkes, C.; Tal, D.; Malik, J. A database of human-segmented natural images and its application to evaluating segmentation algorithms and measuring ecological statistics. In Proceedings of the eighth IEEE International Conference on Computer Vision, Vancouver, BC, Canada, 7–14 July 2001; pp. 416–423. [Google Scholar]
  111. Ma, K.; Duanmu, Z.; Wu, Q.; Wang, Z.; Zhang, L. Waterloo exploration database: New challenges for image quality assessment models. IEEE Trans. Image Process. 2017, 26, 1004–1016. [Google Scholar] [CrossRef]
  112. Agustsson, E.; Timofte, R. Ntire 2017 challenge on single image super-resolution: Dataset and study. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, Honolulu, HI, USA, 22–25 July 2017; pp. 1122–1131. [Google Scholar]
  113. Reeves, G. Conditional central limit theorems for Gaussian projections. In Proceedings of the 2017 IEEE International Symposium on Information Theory (ISIT), Aachen, Germany, 25–30 June 2017; pp. 3045–3049. [Google Scholar]
  114. Nam, S.; Hwang, Y.; Matsushita, Y.; Kim, S.J. A holistic approach to cross-channel image noise modeling and its application to image denoising. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 26–30 June 2016; pp. 1683–1691. [Google Scholar]
  115. Plotz, T.; Roth, S. Benchmarking denoising algorithms with real photographs. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 22–25 July 2017; pp. 1586–1595. [Google Scholar]
  116. Abdelhamed, A.; Lin, S.; Brown, M.S. A High-Quality Denoising Dataset for Smartphone Cameras. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–22 June 2018; pp. 1692–1700. [Google Scholar]
  117. Xu, J.; Li, H.; Liang, Z.; Zhang, D.; Zhang, L. Real-world noisy image denoising: A new benchmark. arXiv 2018, arXiv:1804.02603. [Google Scholar]
  118. Chang, Y.; Yan, L.; Fang, H.; Zhong, S.; Liao, W. HSI-DeNet: Hyperspectral image restoration via convolutional neural network. IEEE Trans. Geosci. Remote Sens. 2018, 57, 667–682. [Google Scholar] [CrossRef]
  119. Zhao, T.; McNitt-Gray, M.; Ruan, D. A convolutional neural network for ultra-low-dose CT denoising and emphysema screening. Med. Phys. 2019, 46, 3941–3950. [Google Scholar] [CrossRef]
  120. Peng, Y.; Zhang, L.; Liu, S.; Wu, X.; Zhang, Y.; Wang, X. Dilated residual networks with symmetric skip connection for image denoising. Neurocomputing 2019, 345, 67–76. [Google Scholar] [CrossRef]
  121. Xi, Z.; Chakrabarti, A. Identifying recurring patterns with deep neural networks for natural image denoising. In Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, Long Beach, CA, USA, 14–19 June 2020; pp. 2426–2434. [Google Scholar]
  122. Yue, Z.; Yong, H.; Zhao, Q.; Meng, D.; Zhang, L. Variational denoising network: Toward blind noise modeling and removal. In Proceedings of the Advances in Neural Information Processing Systems, Vancouver, CA, USA, 8–14 December 2019; pp. 1688–1699. [Google Scholar]
  123. Song, Y.; Zhu, Y.; Du, X. Grouped multi-scale network for real-world image denoising. IEEE Signal Process. Lett. 2020, 27, 2124–2128. [Google Scholar] [CrossRef]
  124. Rajaei, B.; Rajaei, S.; Damavandi, H. An analysis of multi-stage progressive image restoration network (MPRNet). Image Process. Line 2023, 13, 140–152. [Google Scholar] [CrossRef]
Figure 1. BM3D algorithm flowchart.
Figure 2. PSNR growth results of different methods relative to BM3D.
Figure 3. Denoising effect of different methods on a Gaussian white noise image (δ = 15). (a) Original image; (b) noisy image; (c) NLM/30.32 dB; (d) BM3D/31.85 dB; (e) WNNM/32.71 dB; (f) EPLL/32.10 dB; (g) TNRD/32.27 dB; (h) DnCNN/33.09 dB; (i) DudeNet/32.93 dB; (j) FFDNet/32.66 dB; (k) IRCNN/32.82 dB; (l) NLRN/33.02 dB.
Figure 4. Denoising effect of different methods on a Gaussian white noise image (δ = 25). (a) Original image; (b) noisy image; (c) NLM/26.72 dB; (d) BM3D/32.96 dB; (e) WNNM/33.21 dB; (f) EPLL/32.08 dB; (g) TNRD/32.94 dB; (h) DnCNN/33.08 dB; (i) DudeNet/33.02 dB; (j) FFDNet/33.27 dB; (k) IRCNN/33.07 dB; (l) NLRN/33.57 dB.
Figure 5. Denoising effect of different methods on Set5 (δ = 25). (a) Original image; (b) noisy image/20.18 dB; (c) CBM3D/32.38 dB; (d) DnCNN/32.39 dB; (e) FFDNet/32.88 dB; (f) IRCNN/32.56 dB; (g) BRDNet/33.12 dB.
Figure 6. Denoising effect of different methods on CBSD68 (δ = 50). (a) Original image; (b) noisy image/14.15 dB; (c) CBM3D/28.32 dB; (d) DnCNN/28.93 dB; (e) FFDNet/28.93 dB; (f) IRCNN/29.02 dB; (g) BRDNet/29.39 dB.
Figure 7. Denoising effect of different methods on McMaster (δ = 50). (a) Original image; (b) noisy image/14.16 dB; (c) CBM3D/30.94 dB; (d) DnCNN/31.17 dB; (e) FFDNet/31.65 dB; (f) IRCNN/31.39 dB; (g) BRDNet/32.31 dB.
Table 1. Traditional denoising approaches.

| Type of Method | Method Name | Type of Noise | Description |
| --- | --- | --- | --- |
| Filter | TVCT [34] | AWGN | Total variation and curvelet transform algorithm |
| Filter | BFWIT [35] | Mixed noise | Bilateral filtering combined with hierarchical wavelet thresholding |
| Filter | AWT [36] | AWGN | Adaptive thresholding wavelet denoising |
| Filter | BIDA [37] | AWGN | Boosting of image-denoising algorithms |
| Filter | NLM [23] | AWGN | Nonlocal means filtering |
| Filter | BM3D [10] | AWGN | Sparse 3D transform-domain collaborative filtering |
| Filter | CBM3D [11] | AWGN | BM3D extended to color images via YUV conversion |
| Sparse coding | K-SVD [19] | AWGN | Sparse and redundant representation over learned dictionaries |
| Sparse coding | NCSR [38] | AWGN | Nonlocally centralized sparse representation |
| Sparse coding | TWSC [39] | Real noise | Trilateral weighted sparse coding |
| Sparse coding | LSSC [40] | Real, mixed noise | Nonlocal sparse model |
| Sparse coding | GDSR [41] | AWGN | Sparse representation based on a joint group dictionary |
| Sparse coding | GLRSTL [42] | AWGN | Graph Laplacian regularized sparse transform learning |
| Model-based | EPLL [43] | AWGN | Prior model learned from natural image patches |
| Model-based | NLH [44] | AWGN | Pixel-level nonlocal self-similarity |
| Model-based | NMOG [45] | AWGN | Low-rank matrix recovery method |
| Model-based | WNNM [31] | AWGN | Weighted nuclear norm minimization |
| Model-based | GHP [46] | AWGN | Gradient histogram preservation and texture enhancement |
| Model-based | LRTDTV [47] | AWGN | Total variation regularized low-rank tensor decomposition |
| Model-based | WCNNM [48] | AWGN | Multi-channel weighted nuclear norm minimization |
| Model-based | PLR [49] | AWGN | Fast patch-based low-rank minimization |
| Model-based | PGPD [50] | AWGN | Patch-group-based nonlocal self-similarity prior |
| Model-based | FastHyde [51] | AWGN | Low-rank and sparse representation denoising |
| Model-based | GID [52] | AWGN | External data guidance with internal prior learning |
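As a hedged illustration of how the filter-type methods in Table 1 are applied in practice, the snippet below runs nonlocal means (NLM [23]) and wavelet soft thresholding with scikit-image on a standard grayscale test image; the parameter values are illustrative defaults, not the settings used in the cited papers.

```python
# Illustrative use of two traditional denoisers from Table 1 with scikit-image.
import numpy as np
from skimage import data, img_as_float
from skimage.util import random_noise
from skimage.restoration import denoise_nl_means, denoise_wavelet, estimate_sigma

clean = img_as_float(data.camera())                      # grayscale test image in [0, 1]
sigma = 25.0 / 255.0                                     # AWGN level comparable to sigma = 25 on 8-bit images
noisy = random_noise(clean, mode="gaussian", var=sigma ** 2)

sigma_est = estimate_sigma(noisy)                        # noise level estimated from the noisy image
nlm = denoise_nl_means(noisy, h=1.15 * sigma_est,        # NLM: weighted average of similar patches
                       patch_size=7, patch_distance=11, fast_mode=True)
wav = denoise_wavelet(noisy, method="BayesShrink",       # wavelet soft thresholding
                      mode="soft", rescale_sigma=True)
```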
Table 2. Deep learning-based denoising approaches for additive white noise images.

| Method | Technical Characteristics |
| --- | --- |
| MLP [59] | Multilayer perceptron. |
| TNRD [60] | Unfolded network structure based on nonlinear reaction–diffusion models; combines traditional optimization methods with deep learning. |
| ECNDNet [61] | Residual learning and batch normalization to ease network training; dilated convolution enlarges the capture of contextual information. |
| IDDRL [62] | Cascaded Gaussian conditional random field whose parameters are optimized by iterative learning; deep residual learning network. |
| PSN-K [63] | Sparse representation with residual constraints for noise removal. |
| DnCNN [55] | Residual learning and batch normalization. |
| ADNet [64] | Attention mechanism that dynamically assigns feature weights. |
| BRDNet [65] | Combines two networks to increase network width; a two-path structure captures local and global dependencies in the image. |
| NLRN [66] | Fuses nonlocal similarity modeling with a recurrent network that refines the denoising result step by step. |
| DAGL [67] | Combines graph convolutional networks and dynamic attention to model nonlocal information in images. |
| DVA [68] | Unsupervised neural network with an autoencoder architecture. |
| WAGFNet [69] | Wavelet anisotropic convolutional neural network. |
| WINNet [70] | Two-branch architecture integrating the wavelet transform and deep learning; an invertible neural network (INN) architecture avoids information loss during sampling. |
| DeamNet [71] | Multiscale feature fusion and edge-aware design. |
| TripleDIP [72] | Deep image prior (DIP) framework; three constraints (noise distribution, structural similarity, and sparsity) guide the generation process. |
| FEUSNet [73] | Fourier-embedded U-Net. |
| DPHSIR [74] | Pre-trained deep neural networks used as prior knowledge within classical optimization algorithms. |
| NHNet [75] | Two sub-paths process noisy images at different resolutions; nonlocal operations extract effective features. |
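A minimal sketch of the residual-learning idea shared by DnCNN [55] and several networks in Table 2, written in PyTorch under the assumption of grayscale input: the network predicts the noise and subtracts it from the input. Depth and channel width are placeholders, not the configurations of the original papers.

```python
# DnCNN-style residual denoiser sketch: Conv + BN + ReLU blocks predict the noise.
import torch
import torch.nn as nn

class ResidualDenoiser(nn.Module):
    def __init__(self, channels: int = 1, features: int = 64, depth: int = 10):
        super().__init__()
        layers = [nn.Conv2d(channels, features, 3, padding=1), nn.ReLU(inplace=True)]
        for _ in range(depth - 2):
            layers += [nn.Conv2d(features, features, 3, padding=1, bias=False),
                       nn.BatchNorm2d(features),           # batch normalization, as in DnCNN
                       nn.ReLU(inplace=True)]
        layers += [nn.Conv2d(features, channels, 3, padding=1)]
        self.body = nn.Sequential(*layers)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x - self.body(x)                             # residual learning: subtract predicted noise

# Usage: denoised = ResidualDenoiser()(noisy_batch) with noisy_batch of shape (N, 1, H, W).
```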
Table 3. Deep learning-based denoising approaches for realistic noisy images.

| Method | Technical Characteristics |
| --- | --- |
| IRCNN [81] | Trains a set of fast and efficient CNN denoisers and integrates them into a model-based optimization approach; batch normalization and residual learning accelerate training. |
| DANet [82] | Removes real noise through adversarial training between a denoising network and a noise-generating network. |
| MIRNet [83] | Parallel convolutional streams with multiscale feature fusion. |
| DnCNN [55] | Residual learning and batch normalization. |
| SSECNN [84] | Self-supervised structural-similarity-based convolutional network. |
| Node2Node [85] | Self-supervised cardiac diffusion tensor image denoising. |
| RHUPL [86] | Robust hyperspectral unmixing with practical learning-based denoising. |
| SUNet [87] | Transformer combined with U-Net improves the capture of contextual information; a dual upsampling module prevents artifacts. |
| FEUSNet [73] | Fuses Fourier features based on their amplitude and phase spectra; the learned features are embedded as prior modules in a U-Net. |
| CDN [88] | Combines an image information path (IIP) and a noise estimation path (NEP); composites image information and self-similarity. |
| RIDNet [89] | Integrates multiscale feature extraction and an attention mechanism. |
| MHCNN [90] | Multi-head convolutional neural network with a new multi-path attention (MPA) mechanism. |
Table 4. Deep learning-based denoising approaches for blind noise images.

| Method | Technical Characteristics |
| --- | --- |
| CBDNet [92] | Composed of a noise estimation subnetwork and a non-blind denoising subnetwork. |
| DIBD [94] | Denoising network based on degradation information learning. |
| DudeNet [95] | Dual-path network increases the width of the network to obtain more features; sparse mechanisms extract global and local features. |
| GCBD [96] | GAN-based blind denoiser. |
| SCNN [97] | Uses the alternating direction method of multipliers (ADMM) and half-quadratic splitting; integrates the trained CNN denoiser into a model-based optimization method. |
| DNW [98] | Tunable convolutional neural network. |
| RDASNet [93] | Residual learning, dense connections, and an attention mechanism. |
| FFDNet [56] | Convolutional neural network with an adjustable input noise level. |
| GCDN [99] | Graph convolutional denoising network. |
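The tunable-noise-level idea behind FFDNet [56] in Table 4 can be sketched as follows: a constant noise-level map is concatenated with the input so a single network covers a range of noise levels. This simplified sketch omits FFDNet's sub-image downsampling step, and all layer sizes are assumptions for illustration only.

```python
# Sketch of noise-level conditioning: the sigma value is broadcast into an extra
# input channel so one network can be steered at test time.
import torch
import torch.nn as nn

class NoiseConditionedDenoiser(nn.Module):
    def __init__(self, channels: int = 1, features: int = 64, depth: int = 8):
        super().__init__()
        layers = [nn.Conv2d(channels + 1, features, 3, padding=1), nn.ReLU(inplace=True)]
        for _ in range(depth - 2):
            layers += [nn.Conv2d(features, features, 3, padding=1), nn.ReLU(inplace=True)]
        layers += [nn.Conv2d(features, channels, 3, padding=1)]
        self.body = nn.Sequential(*layers)

    def forward(self, x: torch.Tensor, sigma: float) -> torch.Tensor:
        n, _, h, w = x.shape
        noise_map = torch.full((n, 1, h, w), sigma, device=x.device, dtype=x.dtype)
        return self.body(torch.cat([x, noise_map], dim=1))   # condition on the noise level
```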
Table 5. Deep learning-based denoising approaches for mixed noise images.

| Method | Technical Characteristics |
| --- | --- |
| SRMDNF [100] | Single convolutional super-resolution network handling multiple degradations. |
| DnGAN [104] | Deep generative model with a learned prior. |
| TLCNN [105] | Four-stage CNN architecture learning a new mapping from noisy to noise-free images; adopts transfer learning. |
| AINDNet + TF [106] | Adaptive instance normalization used to build the denoiser. |
| ICycleGAN [107] | Adds the EMA attention mechanism to the traditional residual module; proposes a ResNet-E feature extraction module. |
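For reference, mixed Gaussian–impulse noise of the kind targeted by the methods in Table 5 (e.g., TLCNN [105]) can be synthesized as below; the noise levels chosen here are arbitrary examples, not values used in the cited works.

```python
# Synthesize mixed noise: additive white Gaussian noise plus salt-and-pepper (impulse) noise.
import numpy as np

def add_mixed_noise(img, sigma=25.0, p_impulse=0.02, rng=None):
    """img: grayscale array in [0, 255]. Returns a corrupted copy with AWGN plus impulse noise."""
    rng = rng or np.random.default_rng()
    noisy = img.astype(np.float64) + rng.normal(0.0, sigma, img.shape)  # Gaussian component
    mask = rng.random(img.shape)
    noisy[mask < p_impulse / 2] = 0.0             # "pepper" pixels
    noisy[mask > 1.0 - p_impulse / 2] = 255.0     # "salt" pixels
    return np.clip(noisy, 0.0, 255.0)
```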
Table 6. Synthetic noise image datasets.

| Category | Name | Color | Number | Size |
| --- | --- | --- | --- | --- |
| Training dataset | BSD432 | Gray | 432 | 481 × 321, 321 × 481 |
| Training dataset | CBSD432 | RGB | 432 | 481 × 321, 321 × 481 |
| Training dataset | DIV2K | Gray, RGB | 800 | Sizes vary (about 1 K–2 K) |
| Test dataset | Set5 | Gray | 5 | 280 × 280 |
| Test dataset | Set12 | Gray | 12 | 256 × 256 |
| Test dataset | BSD68 | Gray | 68 | 481 × 321, 321 × 481 |
| Test dataset | CBSD68 | RGB | 68 | 481 × 321, 321 × 481 |
| Test dataset | Kodak24 | RGB | 24 | 500 × 500 |
| Test dataset | McMaster | RGB | 18 | 500 × 500 |
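Synthetic training pairs are typically built from the datasets in Table 6 by cropping patches from the clean images and adding AWGN at a fixed noise level. The sketch below assumes 8-bit grayscale images; the patch size and stride are illustrative and not prescribed by this review.

```python
# Generate (noisy, clean) patch pairs from clean grayscale images for training.
import numpy as np

def make_training_pairs(images, patch=40, stride=40, sigma=25.0, rng=None):
    """images: iterable of 2-D grayscale arrays in [0, 255]. Yields (noisy, clean) patches."""
    rng = rng or np.random.default_rng()
    for img in images:
        h, w = img.shape
        for i in range(0, h - patch + 1, stride):
            for j in range(0, w - patch + 1, stride):
                clean = img[i:i + patch, j:j + patch].astype(np.float64)
                noisy = clean + rng.normal(0.0, sigma, clean.shape)   # AWGN at the chosen sigma
                yield np.clip(noisy, 0.0, 255.0), clean
```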
Table 7. Real noise image datasets.

| Category | Name | Acquisition Mode | Camera | ISO |
| --- | --- | --- | --- | --- |
| Test dataset | CC [114] | The third method | Canon 5D Mark III | 3.2 k |
| | | | Nikon D600 | 1.6 k |
| | | | Nikon D800 | 1.6 k, 3.2 k, 6.4 k |
| Test dataset | DND [115] | The second method | Sony A7R | 100~25.6 k |
| | | | Olympus E-M10 | 200~25.6 k |
| | | | Sony RX100 IV | 125~8 k |
| | | | Huawei Nexus 6P | 100~6.4 k |
| Test dataset | SIDD [116] | The third method | Google Pixel | 50~10 k |
| | | | iPhone 7 | 100~2 k |
| | | | Samsung Galaxy S6 Edge | 100~3.2 k |
| | | | Motorola Nexus 6 | 100~3.2 k |
| | | | LG G4 | 100~800 |
| Training dataset | Poly [117] | The third method | Canon 5D | 3.2 k, 6.4 k |
| | | | Canon 80D | 800~12.8 k |
| | | | Canon 600D | 1.6 k, 3.2 k |
| | | | Nikon D800 | 1.6 k~6.4 k |
| | | | Sony A7 | 1.6 k, 3.2 k, 6.4 k |
Table 8. Average PSNR (dB) of different approaches on Set12.

| Method | δ = 15 | δ = 25 | δ = 50 |
| --- | --- | --- | --- |
| NLM | 30.89 | 28.56 | 22.55 |
| BM3D | 32.37 | 29.97 | 26.72 |
| WNNM | 32.70 | 30.26 | 27.05 |
| EPLL | 32.14 | 29.69 | 26.47 |
| MLP | – | 30.03 | 26.78 |
| TNRD | 32.50 | 30.06 | 26.81 |
| DnCNN | 32.86 | 30.43 | 27.18 |
| FFDNet | 32.77 | 30.48 | 27.33 |
| IRCNN | 32.77 | 30.38 | 27.14 |
| ECNDNet | 32.80 | 30.39 | 27.15 |
| DudeNet | 32.94 | 30.52 | 27.30 |
| ADNet | 32.98 | 30.58 | 27.37 |
| NLRN | **33.16** | **30.80** | **27.64** |
| RNAN | – | – | 27.62 |

Bold indicates the best result for each noise level.
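For context, the PSNR values reported in Tables 8 and 10 follow the standard definition PSNR = 10·log10(255²/MSE) for 8-bit images; the small helper below is a hedged illustration, not code from the cited works.

```python
# Peak signal-to-noise ratio between a clean reference and a denoised 8-bit image.
import numpy as np

def psnr(clean: np.ndarray, denoised: np.ndarray, peak: float = 255.0) -> float:
    mse = np.mean((clean.astype(np.float64) - denoised.astype(np.float64)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)
```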
Table 9. Average time for processing a single image (512 × 512) using different algorithms.

| Method | NLM | BM3D | WNNM | EPLL | TNRD | DnCNN | DudeNet | FFDNet | IRCNN | NLRN |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Time (s) | 230 | 2.26 | 740 | 42 | 1.96 | 3.12 | 7.19 | 1.16 | 6.31 | 4.17 |
Table 10. Average PSNR (dB) of different approaches on the color datasets.

| Dataset | σ | CBM3D | DnCNN | FFDNet | DSNet | BRDNet | RPCNN | IRCNN |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| CBSD68 | 15 | 33.52 | 33.98 | 33.87 | 33.91 | **34.10** | – | 33.86 |
| | 25 | 30.71 | 31.31 | 31.21 | 31.28 | **31.43** | 31.24 | 31.16 |
| | 50 | 27.38 | 28.01 | 27.96 | 28.04 | **28.16** | 28.06 | 27.86 |
| Kodak24 | 15 | 34.28 | 34.73 | 34.55 | 34.63 | **34.88** | – | 34.56 |
| | 25 | 31.68 | 32.23 | 32.11 | 32.16 | **32.41** | 32.34 | 32.03 |
| | 50 | 28.46 | 29.02 | 28.99 | 29.05 | 29.22 | **29.25** | 28.81 |
| Set5 | 15 | 34.04 | 34.29 | 34.31 | 34.17 | **34.57** | – | 34.26 |
| | 25 | 31.65 | 31.91 | 32.11 | 32.29 | **32.51** | 32.14 | 31.98 |
| | 50 | 28.69 | 28.96 | 29.22 | 29.06 | **29.31** | 29.04 | 29.00 |
| McMaster | 15 | 34.06 | 33.45 | 34.66 | 33.92 | **35.08** | 34.96 | 34.58 |
| | 25 | 31.66 | 31.52 | 32.35 | 32.14 | **32.75** | 32.34 | 32.18 |
| | 50 | 28.51 | 28.62 | 29.18 | 28.93 | **29.52** | 29.31 | 28.91 |

Bold indicates the best result for each row.
Table 11. Average time for processing a single image (280 × 280) using different algorithms.

| Method | CBM3D | DnCNN | FFDNet | BRDNet | IRCNN |
| --- | --- | --- | --- | --- | --- |
| Time (s) | 1.46 | 1.37 | 0.63 | 3.23 | 4.82 |
Table 12. PSNR and SSIM of different approaches on the SIDD and DND datasets.

| Method | SIDD PSNR (dB) | SIDD SSIM | DND PSNR (dB) | DND SSIM |
| --- | --- | --- | --- | --- |
| CBM3D | 25.65 | 0.685 | 34.51 | 0.851 |
| EPLL | 27.11 | 0.870 | 33.51 | 0.824 |
| K-SVD | 26.88 | 0.842 | 36.49 | 0.899 |
| DnCNN | 32.59 | 0.861 | 37.90 | 0.943 |
| FFDNet | 38.27 | 0.948 | 37.61 | 0.942 |
| CBDNet | 33.28 | 0.868 | 38.06 | 0.942 |
| RIDNet | 38.71 | 0.951 | 39.26 | 0.953 |
| VDNet [122] | 39.26 | 0.955 | 39.38 | 0.952 |
| GMSNet-A [123] | 39.51 | **0.958** | 40.15 | 0.961 |
| GMSNet-B [123] | 39.69 | **0.958** | **40.24** | **0.962** |
| MPRNet [124] | **39.71** | **0.958** | 39.80 | 0.954 |

Bold indicates the best result in each column.
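PSNR and SSIM values of the kind reported in Table 12 can be computed with scikit-image as sketched below; the data_range argument must match the image scale (255 for 8-bit images), and the comparison assumes aligned clean/denoised image pairs.

```python
# Evaluate a denoised result against its clean reference with standard metrics.
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def evaluate(clean, denoised):
    """Return (PSNR in dB, SSIM) for 8-bit grayscale images."""
    return (peak_signal_noise_ratio(clean, denoised, data_range=255),
            structural_similarity(clean, denoised, data_range=255))
```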
Table 13. Average PSNR (dB) of different algorithms on the CC15 and PolyU datasets.

| Dataset | CBM3D | NLH | DnCNN | FFDNet | MIRNet |
| --- | --- | --- | --- | --- | --- |
| CC15 | 37.95 | **38.49** | 37.47 | 37.68 | 36.06 |
| PolyU | **38.81** | 38.36 | 38.51 | 38.56 | 37.49 |

Bold indicates the best result for each dataset.
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
