Article

Hyperspectral Image Denoising Based on Deep and Total Variation Priors

1 Collaborative Innovation Center of Geo-Information Technology for Smart Central Plains, Zhengzhou 450052, China
2 Key Laboratory of Spatiotemporal Perception and Intelligent Processing, Ministry of Natural Resources, Zhengzhou 450052, China
3 College of Electronic and Information Engineering, Nanjing University of Aeronautics and Astronautics, Nanjing 210016, China
4 National Dam Safety Research Center, Wuhan 430010, China
5 College of Information and Communication Engineering, Dalian Minzu University, Dalian 116600, China
* Author to whom correspondence should be addressed.
Remote Sens. 2024, 16(12), 2071; https://doi.org/10.3390/rs16122071
Submission received: 20 April 2024 / Revised: 5 June 2024 / Accepted: 5 June 2024 / Published: 7 June 2024

Abstract

To address the problems of noise interference and image blurring in hyperspectral imaging (HSI), this paper proposes a denoising method for HSI based on deep learning and a total variation (TV) prior. The method minimizes the first-order moment distance between the deep prior of a Fast and Flexible Denoising Convolutional Neural Network (FFDNet) and the Enhanced 3D TV (E3DTV) prior, obtaining dual priors that complement and reinforce each other’s advantages. Specifically, the original HSI is initially processed with a random binary sparse observation matrix to achieve a sparse representation. Subsequently, the plug-and-play (PnP) algorithm is employed within the framework of generalized alternating projection (GAP) to denoise the sparsely represented HSI. Experimental results demonstrate that, compared to existing methods, this method shows significant advantages in both quantitative and qualitative assessments, effectively enhancing the quality of HSIs.

1. Introduction

Hyperspectral imaging (HSI) has become significantly important in remote sensing because it provides rich spectral information across a wide range of wavelengths, enabling more precise analysis and interpretation of complex scenes and materials. The analysis of hyperspectral images (HSIs) is a rapidly advancing technique in remote sensing, extensively applied in sectors such as agriculture, mineral detection, the military, and biomedicine [1]. Since HSIs are typically generated by an array of sensors, they are often contaminated with noise [2,3,4]. The main sources of noise are signal-independent (SI) circuitry noise and signal-dependent (SD) photonic noise [5]. These sources degrade hyperspectral images during acquisition, often manifesting as a combination of noise types such as Gaussian noise [6], impulse noise, deadlines, stripes, and others [7]. Moreover, atmospheric turbulence and system movement cause blurring in HSIs, leading to a mixture of different degradation types [8,9,10,11,12,13]. Consequently, there is an urgent need to enhance the quality of HSIs and reduce their noise before application in various fields. To date, several denoising strategies for HSI have been proposed, including non-local similarity, principal component analysis (PCA), low-rank regularization, matrix decomposition/factorization, the adaptive-weighting rank-reduction method [14], random noise attenuation using antileakage least-squares spectral analysis [15], and so forth.
In the past two decades, numerous techniques have emerged for the denoising of hyperspectral images (HSIs). Initially, each band of an HSI was treated as a grayscale image and denoised with methods developed for conventional two-dimensional optical images. However, directly extending conventional image processing methods to band-by-band HSI denoising [16,17] fails to fully exploit the spectral correlation, resulting in unsatisfactory denoising results. To address this issue, improved methods based on model optimization have been proposed to leverage both the spatial and spectral prior information of HSIs. These methods model the statistical properties of HSIs and employ optimization algorithms for denoising; by combining prior information and observed data, HSIs can be better restored after noise removal. Common examples of prior knowledge utilized in model-based optimization methods include local smoothness priors, non-local similarity priors, sparsity priors, and low-rank (LR) priors. Among them, the local smoothness prior assumes that adjacent pixels in HSIs exhibit spatial similarity, while the non-local similarity prior improves denoising results by leveraging similarities across the image [18,19]. The sparsity prior hypothesizes that HSIs can be expressed with few sparse representation coefficients, and related HSI denoising methods are broadly classified into two types: methods based on fixed dictionaries [19] and methods based on learned dictionaries [20,21]. However, sparse representation-based methods still struggle to fully utilize the joint spectral–spatial redundancy of three-dimensional data. Therefore, the low-rank prior for HSI denoising has been increasingly developed to address these issues collectively.
The low-rank prior assumes that an HSI is low-rank in a specific representation space, and researchers have explored noise removal through methods such as low-rank matrix denoising frameworks and low-rank tensor denoising frameworks. Among them, the low-rank matrix recovery (LRMR) framework was the first low-rank method proposed, which utilizes the low-rank structure of HSIs to remove noise [7]. It leverages the low-rank property along the spectral dimension by rearranging the three-dimensional HSI into a two-dimensional matrix, serving as the foundation for a series of subsequent HSI recovery operations that harness the low-rank structure [14,22,23]. However, this model fails to effectively utilize the spectral information of HSIs, rendering it incapable of handling heavy noise. To address this issue, total variation (TV) regularization, which preserves spatial edge information, was introduced to enhance the performance of LRMR. He et al. proposed a Total Variation Regularized Low-Rank Matrix Factorization (LRTV) method [23] for HSI denoising. This method combines TV regularization with low-rank matrix factorization, effectively removing noise while preserving spatial edge information in HSIs; their findings demonstrate the necessity of jointly applying sparsity and low-rankness in HSI denoising. Considering the strong spectral smoothness prior of HSIs, He further proposed a Local Low-Rank Matrix Recovery and Global Spatial–Spectral Total Variation (LLRSSTV) method [24], which introduces anisotropic spatial–spectral TV regularization and non-local similarity regularization, further enhancing the performance of the denoising model. Additionally, approaches that employ nonconvex functions to approximate the rank can also improve the performance of LRMR [25,26]. Some methods aim to denoise HSIs by constraining differential images, effectively removing noise through constraints on the differential information [27]. It is worth noting that some low-rank matrix modeling methods still yield suboptimal recovery results under heavy noise, as they are unable to leverage finer spatial–spectral information. Consequently, HSI denoising techniques have evolved to incorporate methods that consider spectral information more fully.
In recent years, low-rank tensor denoising models have gained widespread applications, with numerous methods leveraging tensor decomposition and tensor rank minimization to handle HSI noise. Tensors are effective for representing high-dimensional data, and third-order tensors are regarded by researchers as appropriate for HSI [28]. Due to the redundancy and correlation in HSIs’ spatial and spectral dimensions, the low-rank feature of third-order tensors is prominent and intrinsic [29]. Therefore, strategies based on low-rank tensor decomposition are also suitable for HSI restoration. Among tensor decomposition methods, commonly used approaches include t-SVD decomposition, Tucker decomposition, CP decomposition, and tensor ring decomposition. These tensor decomposition methods and tensor rank minimization methods enable more accurate and efficient HSI denoising. To address the major issue of low-rank tensor-based methods—how to select the minimized rank, which is a computationally complex NP-hard problem—and the further increase in computational complexity caused by adding regularization terms based on tensor low-rank methods, researchers have begun to primarily explore the use of tractable approximation methods to replace rank minimization. For instance, the tensor nuclear norm method, a convex approximation approach, has been widely applied in HSI denoising. However, due to the lack of consideration for the differences in low-rank properties across HSI dimensions, this method often exhibits substandard denoising performance [27]. Consequently, researchers have proposed novel methods to improve the performance of the nuclear norm method, including Wang et al.’s method [30], Total Variation Regularized Low-Rank Tensor Decomposition (LRTDTV), combining low-rank tensor Tucker decomposition and TV regularization [28], as well as Weighted Group Sparsity-Regularized Low-Rank Tensor Decomposition (LRTDGS), integrating weighted group sparsity regularization. These methods have demonstrated superior denoising performance and results [31,32].
While rank approximation methods have proven effective in HSI denoising, their practical application requires computationally intensive singular value decomposition. Consequently, researchers have begun to explore low-rank factorization techniques that enhance computational efficiency while capturing the high correlations in HSIs. A common low-rank decomposition approach is factor group sparsity-regularized nonconvex low-rank approximation (FGSLR), which combines low-rank constraints with sparse constraints based on factor group sparsity regularization. This approach utilizes regularization terms to promote sparsity in the factor groups, thereby better capturing the low-rank structure and sparse nature of the data [33]. Besides low-rank decomposition methods, subspace representation-based denoising methods can also reduce computational costs. For instance, Zhuang et al. proposed Fast Hyperspectral Image Denoising (FastHyDe), a denoising method that represents the spectral vectors of clean images in pre-learned subspaces [34]. To further improve computational efficiency and denoising quality, He et al. introduced the NGmeet method [35], a unified HSI denoising paradigm based on subspace models. This approach better addresses issues such as high computational costs and difficulties in handling mixed noise, demonstrating superior HSI denoising results.
Recently, numerous researchers have proposed HSI noise removal models based on the Convolutional Neural Network (CNN) framework to further enhance denoising performance, achieving promising results in feature selection and nonlinear fitting [36,37,38,39,40,41,42,43,44]. Through the CNN, these models learn rich feature representations from data and achieve efficient noise removal for HSIs. The Denoising Convolutional Neural Network (DnCNN) and the Fast and Flexible Denoising Convolutional Neural Network (FFDNet) are two CNN frameworks for image denoising that have achieved remarkable results in HSI noise removal. DnCNN utilizes batch normalization and residual learning to effectively remove uniform Gaussian noise and suppress noise within a certain noise level range [36]. FFDNet combines a noise level map with the noisy image as the input of the denoising network, balancing noise suppression and detail preservation, resulting in denoising outcomes that eliminate noise without excessive smoothing [45].
Despite the significant progress made in denoising with deep learning-based methods, they still lack consideration for the characteristics of HSI, especially the spectral correlation, which is crucial in enhancing denoising effectiveness and achieving stable denoising results [46,47]. For example, FFDNet may potentially compromise certain image details and edge information. On the other hand, some traditional model optimization methods can leverage the spectral information and correlations in HSIs to enhance denoising performance. In summary, in the context of noise removal in HSIs, deep learning methods and traditional model optimization methods each have their strengths and weaknesses [48,49,50,51]. Further research and exploration are needed to effectively leverage the advantages of both approaches and compensate for their respective shortcomings.
The closeness between various priors grants them an opportunity to acquire each other's respective advantages [52]. While utilizing denoising networks such as FFDNet or TV regularization independently does not yield satisfactory results, this paper employs the PnP algorithm [53] within the GAP framework [54] to denoise HSIs processed by the sparse observation matrix. By decreasing the $L_2$ norm difference between the first-order statistical moments of the FFDNet deep prior and the TV prior, the problem is transformed into quadratic and semidefinite programming. Consequently, a dual prior approach that combines machine learning and deep learning is obtained, effectively compensating for their respective limitations and capitalizing on their strengths. As a result, a superior denoising effect for hyperspectral images is achieved. This paper makes the following three main contributions:
  • We first process the original HSI using a randomly binarized sparse observation matrix. This not only reduces the influence of noise on the HSI, but also mitigates the blurring effect experienced during HSI acquisition, providing a better initial input for subsequent denoising operations. Additionally, by introducing the randomness inherent in this process, the randomness in image processing is increased, enhancing the robustness and stability of the subsequent denoising algorithms.
  • We innovatively introduce two priors, FFDNet and TV, into the hyperspectral remote sensing image denoising algorithm simultaneously. FFDNet provides high-quality initial denoising results by learning the complex relationship between noise and signal. TV regularization further optimizes these results by reducing noise while preserving image details. Leveraging two complementary priors compensates for their individual limitations and combines their strengths, resulting in superior denoising performance for hyperspectral images.
  • Extensive experiments are conducted, and the results demonstrate significant improvements in quantitative evaluation and visual effects when juxtaposed with prevailing denoising techniques.
The layout of the present study is arranged as follows: Section 2 provides a comprehensive analysis of the proposed methodology. Section 3 presents the experimental outcomes. Section 4 discusses the experimental results. Conclusions are presented in Section 5.

2. Methodology

2.1. Related Work

2.1.1. Compressed Sensing Theory

The accelerated advancement of Internet technology has brought about explosive growth in data, and people face ever more data processing and information analysis tasks in daily life. As an efficient information carrier, images play an increasingly important role in today's technologically developed world. Hyperspectral imaging technology has garnered significant attention in the field of imaging due to its unique advantages. This technique captures detailed spectral information unavailable in traditional images, enabling images to transcend the mere expression of spatial information and delve deeper into the spectral characteristics of objects. Such imagery simultaneously captures spectral information across multiple contiguous wavelength bands, thereby providing a more comprehensive description of an object's characteristics and properties. Nevertheless, researchers encounter a challenge when utilizing HSI: the desire for imagery with higher spatial and spectral resolution. Traditional data sampling and compression techniques encounter limitations in addressing this issue, necessitating a more efficient data compression method. Compressed sensing (CS) theory provides a solution to this challenge. By leveraging the sparsity and redundancy of signals, it enables the acquisition of sufficient signal information with minimal sampling. Specifically, CS employs special sparse sampling matrices and reconstruction algorithms to sample images at a lower rate than traditional methods while effectively reconstructing the original image.
In contrast to traditional imaging methods, which adhere to the Nyquist–Shannon sampling theorem and require at least twice the sampling rate of the original signal for information collection, compressed sensing techniques capitalize on signal sparsity and redundancy to achieve the effective imaging of high-dimensional image signals at significantly lower sampling rates.
At present, Snapshot Compressive Imaging (SCI) stands out as a pivotal hyperspectral imaging system, deriving its principles from compressed sensing (CS) theory. SCI employs a two-dimensional sensor for high-dimensional image data acquisition and utilizes suitable algorithms for reconstructing the necessary information. In contrast to conventional systems, SCI introduces a fresh perspective on compressed imaging. It amalgamates multiple image frames into a singular snapshot measurement by leveraging a sparse observation matrix, which results in benefits such as reduced memory usage, power consumption, bandwidth needs, and cost. These characteristics equip SCI with the capability to capture HSI effectively.
Inspired by this, we follow the approach outlined in reference [45]: the original HSI is first processed through a random binary sparse observation matrix. This matrix, with its unique properties, spatially selects and weights certain pixel values in the image by setting them to 0 or 1. This process achieves a sparse representation of the HSI, effectively selecting and emphasizing certain informational components, and this sparse representation serves as a superior initial input for the subsequent denoising procedures.

2.1.2. DnCNN

The Denoising Convolutional Neural Network (DnCNN) represents a specialized deep convolutional neural network (CNN) model, thoughtfully engineered for the purpose of denoising images. Proposed by Kai Zhang and his colleagues in 2017 [36], it is tailored to address noise issues in images.
The overall structure of the DnCNN is remarkably straightforward, consisting primarily of multiple convolutional layers. The DnCNN learns a residual mapping function $R(y) = w$, where $w$ is the noise estimate, to obtain a denoised and clean HSI $x$ through $x = y - R(y)$.
The network architecture of the DnCNN can be summarized into the following three components:
  • The first layer performs a convolution operation using a 3 × 3 kernel with zero padding applied to the input. The output of this layer then undergoes a ReLU [37] non-linear transformation, yielding the first layer's feature maps.
  • For each subsequent layer, batch normalization (BN) [55] is utilized in conjunction with both the convolution operation and the ReLU nonlinear activation function. Batch normalization accelerates the convergence of training and enhances the model’s robustness. This combination ensures that the activations within the network remain stable, facilitating efficient learning.
  • The final layer: The last layer exclusively utilizes a convolution procedure to procure the ultimate residual outcome for the image. This residual outcome embodies the noise element that the DnCNN has been trained to deduct from the initial input, consequently unveiling the noise-reduced HSI.
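To make the description above concrete, the following is a minimal PyTorch sketch of a DnCNN-style network; the depth of 17 layers and width of 64 feature channels follow the commonly used configuration and are illustrative assumptions rather than details taken from this paper.

```python
import torch
import torch.nn as nn

class DnCNN(nn.Module):
    """Minimal DnCNN sketch: the network predicts the noise residual R(y),
    and the clean image is recovered as x = y - R(y)."""
    def __init__(self, channels=1, features=64, depth=17):
        super().__init__()
        layers = [nn.Conv2d(channels, features, 3, padding=1),  # first layer: Conv + ReLU
                  nn.ReLU(inplace=True)]
        for _ in range(depth - 2):  # middle layers: Conv + BN + ReLU
            layers += [nn.Conv2d(features, features, 3, padding=1, bias=False),
                       nn.BatchNorm2d(features),
                       nn.ReLU(inplace=True)]
        layers += [nn.Conv2d(features, channels, 3, padding=1)]  # last layer: Conv only
        self.residual = nn.Sequential(*layers)

    def forward(self, y):
        return y - self.residual(y)  # subtract the estimated noise

# Example: denoise a single noisy band treated as a grayscale image.
model = DnCNN()
noisy = torch.rand(1, 1, 64, 64)
clean_estimate = model(noisy)
```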
During the training process, the DnCNN model continuously adjusts its parameters using backpropagation and gradient descent algorithms to minimize the mean squared error. With numerous training samples and iterative optimization, the DnCNN learns the statistical characteristics of noise present in images, enabling it to effectively attenuate noise and enhance image quality.
Overall, the DnCNN has demonstrated excellent performance in image denoising tasks. It boasts several advantages, including a straightforward model structure, strong trainability, and good adaptability to various noise categories. Essentially, the DnCNN represents a specialized deep convolutional neural network designed for the purpose of removing noise from images. It incorporates the characteristics of image noise and the process of denoising, enabling it to accurately restore pristine image quality. It stands as one of the most efficient and broadly implemented models in the image processing domain.

2.1.3. FFDNet

FFDNet is a neural network model, building upon the advancements of the DnCNN. Its structure remains similar to the DnCNN, but incorporates a more flexible learning strategy, enabling accurate denoising across various noise levels. A crucial element of FFDNet involves the application of the batch normalization (BN) technique, which normalizes the output of each node within a layer before forwarding it. This normalization process accelerates convergence.
FFDNet's objective function is denoted as $x = \mathcal{F}(y, M; \Theta)$, in which $y$ symbolizes the noisy image, $x$ represents the desired output, and $\mathcal{F}(\cdot)$ represents the mapping between them. A crucial aspect of FFDNet is the utilization of a noise level map $M$, which is generated from the user-specified parameter $\sigma$. A distinctive feature of FFDNet compared to the DnCNN is that the noise level map $M$ is supplied as an input to the network, instead of letting the model parameters $\Theta_\sigma$ vary with different noise levels. Consequently, the model parameters $\Theta$ remain constant across different noise levels $\sigma$. This design confers significant flexibility on FFDNet, as it eliminates the need to retrain the model under varying noise intensities.
FFDNet employs a strategy of processing downsampled sub-images. This approach not only reduces the size of the input images, accelerating both training and inference speeds, but also preserves finer details, thereby enhancing the denoising effectiveness of the algorithm. The network structure of FFDNet remains largely consistent with the DnCNN, albeit with distinct inputs and outputs. Particularly, the system processes four sub-images that are derived from downscaling the initial noisy image and includes a noise level map, denoted as M . It generates four cleaned sub-images as its output, which are subsequently enlarged to reconstruct the final noise-free image. The performance metric employed here is the mean squared error (MSE).
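The sub-image strategy can be illustrated with a short PyTorch sketch. The five-channel stand-in CNN below is hypothetical; the point is merely how the four downsampled sub-images and the noise level map $M$ enter and leave the network.

```python
import torch
import torch.nn as nn

def ffdnet_style_forward(cnn, y, sigma):
    """FFDNet-style wrapper sketch: split the image into 4 sub-images,
    append a uniform noise level map M, denoise, and reassemble."""
    subs = nn.PixelUnshuffle(2)(y)             # (B,1,H,W) -> (B,4,H/2,W/2)
    m = torch.full_like(subs[:, :1], sigma)    # noise level map M from sigma
    out = cnn(torch.cat([subs, m], dim=1))     # four cleaned sub-images
    return nn.PixelShuffle(2)(out)             # reassemble to (B,1,H,W)

# The same weights serve every noise level: only the map M changes.
cnn = nn.Conv2d(5, 4, 3, padding=1)            # hypothetical stand-in for the denoising CNN
x_hat = ffdnet_style_forward(cnn, torch.rand(1, 1, 64, 64), sigma=0.1)
```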
In summary, the design of FFDNet enables it to handle images with varying noise intensities and types while maintaining excellent denoising performance even during accelerated training and inference processes.

2.1.4. E3DTV

In order to more effectively utilize the inherent information in the image space and enhance the effectiveness of image recovery, the total variation (TV) regularization technique was introduced by Rudin and colleagues [56] in 1992. This method exploits the sparse prior present in the difference image, skillfully reconstructing fine details while preserving overall image smoothness. Put simply, the TV regularization for a two-dimensional grayscale image $A \in \mathbb{R}^{m \times n}$ can be expressed as follows:

$\|A\|_{\mathrm{TV}} = \sum_{n=1}^{2} \|D_n A\|_1$ (1)

where $D_1$ and $D_2$ denote the first-order discrete differential operators along the horizontal and vertical axes, respectively.
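As a minimal illustration of Equation (1), the following numpy sketch (the function name and test data are ours) computes the anisotropic TV norm of a 2-D image.

```python
import numpy as np

def tv_norm(A):
    """Anisotropic TV of a 2-D image: the L1 norms of the vertical and
    horizontal first-order differences, as in Equation (1)."""
    d1 = np.abs(np.diff(A, axis=0)).sum()  # vertical differences D1 A
    d2 = np.abs(np.diff(A, axis=1)).sum()  # horizontal differences D2 A
    return d1 + d2

# A piecewise-constant image has zero TV; additive noise inflates it.
flat = np.ones((32, 32))
noisy = flat + 0.1 * np.random.randn(32, 32)
print(tv_norm(flat), tv_norm(noisy))  # 0.0 versus a strictly positive value
```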
The preliminary strategy for integrating TV regularization into HSI restoration involved treating each band separately, followed by a consolidation of the outcomes [23]. This approach, however, only delves into the sparse prior within the HSI space and overlooks the interrelation among distinct HSI bands, thus constraining the efficacy of the restoration. In response to this limitation, Chang and his team [57] suggested an enhanced variant of TV regularization, specifically designed for HSI, referred to as anisotropic spatial–spectral total variation (SSTV) regularization. SSTV incorporates the correlation between different frequency bands, leading to improved HSI restoration results. For a 3-D HSI, the SSTV regularization can be formulated as follows:
$\|\Gamma\|_{\mathrm{SSTV}} = \sum_{n=1}^{3} \|D_n \Gamma\|_1$ (2)

where $D_3$ denotes the first-order differential operator along the spectral axis.
Among various regularization techniques, the SSTV method exhibits strong adaptability and performs well in HSI denoising. However, upon delving deeper into actual HSI data, it was observed that the sparsity of the HSI difference images across different spectral bands is both independent and non-uniform. Under these conditions, Peng et al. [58] noted that SSTV regularization fails to precisely capture the true attributes of the data. They argued that the sparsity pattern of the difference images varies independently among the HSI bands, yet the images remain interconnected. As a result, they introduced the E3DTV method, which incorporates sparsity priors from the low-dimensional subspace of the difference images, thereby better mirroring the traits of actual HSI data. Specifically, for a two-dimensional HSI data matrix $X \in \mathbb{R}^{mn \times B}$, the E3DTV norm can be expressed as follows:
$\|X\|_{\mathrm{E3DTV}} = \sum_{n=1}^{3} \|B_n\|_1, \quad \mathrm{with}\ D_n X = B_n C_n^T$ (3)

where $D_n$ represents the first-order difference operator, and the factor matrices $B_n \in \mathbb{R}^{mn \times l}$ and $C_n \in \mathbb{R}^{B \times l}$ (with $l \ll \min(mn, B)$) form a low-rank decomposition of the difference image.
Variants of TV have a wide range of applications in HSI processing. In [59], Chan et al. use smoothed total variation (STV) to increase classification accuracy, and in [60] TV is used for the semi-supervised change detection of small water bodies.

2.1.5. PnP Framework

Plug-and-play (PnP) is an algorithmic framework that enables the interchangeable use of various algorithms and models, thereby achieving greater efficiency and flexibility. It leverages advanced denoisers within proximal algorithms such as GAP, employing variable splitting techniques to decompose originally intricate optimization problems into more tractable subproblems. A common approach for addressing these subproblems involves proximal algorithms with regularization. The regularized proximal operator $\mathrm{prox}_{\Phi}: \mathbb{R}^n \to \mathbb{R}^n$ is defined as follows:

$\mathrm{prox}_{\Phi}(y) = \arg\min_x \Big\{ \Phi(x) + \frac{\rho}{2} \|x - y\|^2 \Big\}$ (4)
Within the plug-and-play (PnP) framework, $\mathrm{prox}_{\Phi}$ can be seamlessly replaced by a denoiser, which transforms a noisy image into a clean one. Here, the denoiser functions as an implicit regularizer, encoding prior knowledge about the structure of the underlying image. Regularization by denoising (RED) is a related technique that uses a denoising engine to define the regularization of inverse problems, offering an alternative framework for image restoration tasks.
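A minimal sketch of this substitution is shown below; the interface is an assumption, and a trivial median filter stands in for an advanced denoiser.

```python
import numpy as np
from scipy.ndimage import median_filter

def prox_via_denoiser(denoiser, y, sigma):
    """PnP substitution sketch: rather than solving
    argmin_x { Phi(x) + (rho/2) * ||x - y||^2 } in closed form,
    call a denoiser, which acts as an implicit regularizer."""
    return denoiser(y, sigma)

# Any denoiser with signature f(noisy, sigma) -> clean can be plugged in;
# the median filter below ignores sigma and is only a stand-in.
x = prox_via_denoiser(lambda v, s: median_filter(v, size=3),
                      np.random.rand(64, 64), sigma=0.1)
```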
The fundamental concept of the PnP framework involves breaking down the initial problem into two separate subproblems: first, regularizing the image using a prior model, and then minimizing the discrepancy with observed data through an optimization process. Since these two subproblems are decoupled, the PnP framework leverages the prior knowledge encoded in the model and the specific capabilities of existing algorithms to tackle specific problems.
A key feature of the PnP framework is its flexibility and scalability. By incorporating different prior models, adapting to various data issues, and employing diverse optimization methods, the PnP framework can adapt to diverse application scenarios and deliver efficient solutions. Additionally, the iterative nature of the framework allows for a gradual improvement in results, introducing additional prior information and constraints at each iteration step.
In summary, the PnP framework provides a flexible and efficient approach to tackle image restoration and signal processing problems. It combines the regularization of prior models with data-adaptive optimization, enabling its widespread application across different problem domains and scenarios. By leveraging the strengths of both prior knowledge and optimization techniques, the PnP framework offers a powerful tool for enhancing image quality and extracting meaningful information from noisy data.

2.1.6. GAP Algorithm

Generalized alternating projection (GAP) is an iterative optimization technique employed to address constrained optimization problems. It is suitable for a variety of general variational problems in which the optimal solution is sought within a defined set of constraints. It presumes the optimization problem under examination is formulated as follows:

$\min_{x, C}\ C \quad \mathrm{s.t.} \quad \|x\|_{2,1}^{\varsigma,\beta} \le C \ \ \mathrm{and} \ \ Ax = y$ (5)
The core idea of the GAP algorithm is to iteratively apply projection operators to drive the objective function towards the constrained set. The essence of the above optimization problem is to find a minimum weighted $\ell_{2,1}$ norm ball that has a non-empty intersection with a given linear manifold. The problem can be solved through a series of alternating projections and can be transformed by introducing the auxiliary variable $w$:

$(x^{(l)}, w^{(l)}) = \arg\min_{x, w} \frac{1}{2}\|x - w\|_2^2 + \lambda^{(l)} \|w\|_{2,1}^{\varsigma,\beta} \quad \mathrm{s.t.} \quad Ax = y$ (6)
where the Lagrange multiplier $\lambda^{(l)} \ge 0$ is associated with a constraint related to $C^{(l)}$. By alternately updating the variables $x$ and $w$, the above optimization problem can be solved. For instance, when $w$ is given, the update of $x$ corresponds to the Euclidean projection of $w$ onto the linear manifold. Conversely, when $x$ is fixed, the update of $w$ is obtained through a soft-thresholding operation on $x$. Both updates possess closed-form solutions, facilitating the efficient solution of the optimization problem.
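A numpy sketch of the two closed-form updates is given below, assuming $A$ has full row rank so that the minimum-norm least-squares solution realizes the projection correction $A^T (A A^T)^{-1} (y - A w)$; the soft-thresholding form shown is the standard elementwise one.

```python
import numpy as np

def soft_threshold(x, tau):
    """Closed-form w-update: elementwise soft-thresholding of x."""
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

def project_onto_manifold(w, A, y):
    """Closed-form x-update: Euclidean projection of w onto {x : Ax = y}."""
    # For full-row-rank A, lstsq returns the minimum-norm solution,
    # i.e. the correction A^T (A A^T)^{-1} (y - A w).
    correction, *_ = np.linalg.lstsq(A, y - A @ w, rcond=None)
    return w + correction
```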
Specifically, during each iteration of the GAP (generalized alternating projection) algorithm, an update operation is first performed on the variables. Subsequently, a projection operator is employed to map the updated variable values back to the constraint set, ensuring the satisfaction of the preset constraints. This process can be repeated until convergence criteria are met or the maximum iteration count is reached. Detailed mathematical derivations and proofs can be found in [61].
The GAP algorithm finds extensive application, notably in areas such as signal processing, image rejuvenation, image compression, and machine learning. It is effective in handling optimization problems with multiple constraint conditions and, in some cases, can converge to the global optimal solution. Furthermore, the GAP algorithm exhibits excellent scalability, allowing for the integration of various prior information and constraint conditions to adapt to different problem scenarios. Although the GAP algorithm may exhibit slower convergence in certain situations, it has demonstrated its effectiveness and practicality in addressing real-world problems. Through continuous improvements and optimizations, the GAP algorithm remains an essential optimization tool in various domains, offering a viable approach to addressing optimization issues with constraints.

2.2. Proposed Method

The FFDNet-TV technique, which is the focus of this manuscript, is detailed in this section. Given the advantages of sparsity, and inspired by [52], the original HSI $X \in \mathbb{R}^{m \times n \times B}$ is first processed using a random binary sparse observation mask $M \in \{0, 1\}^{m \times n \times B}$. By exploiting the random binary values, certain pixel values of the image are set to 0 or 1 in the spatial domain, which achieves partial information selection and weighting and thereby realizes the sparse representation $Y \in \mathbb{R}^{m \times n}$ of the HSI:

$Y = \sum_{i=1}^{B} M_i \odot X_i + N$ (7)

in which $M_i = M(:,:,i)$ denotes the $i$-th band of the sparse observation mask, $X_i = X(:,:,i)$ corresponds to the $i$-th band of the HSI, and $\odot$ is the elementwise (Hadamard) product. In the real world, the imaging process frequently introduces blur into the HSI: the collected $X$ is actually the result of convolving the ideal $X_0$ with the blurring matrix form $b$ of the point spread function (PSF), i.e., $X = b \ast X_0$ [62]. $N \in \mathbb{R}^{m \times n}$ denotes zero-mean Gaussian noise of varying degree.
Expressing Equation (7) as a matrix–vector product gives

$y = Ax + n$ (8)

in which $y = \mathrm{Vec}(Y) \in \mathbb{R}^{mn}$ and $n = \mathrm{Vec}(N) \in \mathbb{R}^{mn}$. The collected HSI vector $x \in \mathbb{R}^{mnB}$ is described as

$x = \mathrm{Vec}(X) = [\mathrm{Vec}(X_1)^T, \ldots, \mathrm{Vec}(X_B)^T]^T$ (9)

The sparse observation matrix $A \in \mathbb{R}^{mn \times mnB}$ in Equation (8) is denoted as

$A = [C_1, \ldots, C_B]$ (10)

where $C_i = \mathrm{diag}(\mathrm{Vec}(M_i)) \in \mathbb{R}^{mn \times mn}$, for $i = 1, \ldots, B$.
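The following numpy sketch illustrates Equations (7) and (8) with illustrative sizes and random data; it also shows that the product $Ax$ can be applied band by band without ever forming the large matrix $A$.

```python
import numpy as np

m, n, B, sigma = 64, 64, 80, 0.1
X = np.random.rand(m, n, B)                        # stand-in HSI cube
M = (np.random.rand(m, n, B) > 0.5).astype(float)  # random binary sparse mask
Y = (M * X).sum(axis=2) + sigma * np.random.randn(m, n)  # Equation (7)

# Equation (8) without forming A = [C_1, ..., C_B] explicitly:
x = X.transpose(2, 0, 1).reshape(B, -1)       # rows are Vec(X_i)
masks = M.transpose(2, 0, 1).reshape(B, -1)   # rows are Vec(M_i), the diagonals of C_i
y_matvec = (masks * x).sum(axis=0)            # equals A @ x
assert np.allclose(y_matvec, (M * X).sum(axis=2).reshape(-1))
```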
The advantage of processing the original HSI in this way is that it not only retains pixels that may be uncontaminated by noise in the subsequent process, but also suppresses pixels that are prone to noise interference. In the original HSI, the sparse observation matrix may selectively retain image areas with higher intensity and suppress areas with lower intensity. It should be noted that setting certain pixel values of the image to 0 or 1 can also reduce, to some extent, the impact of blurring during the HSI acquisition process. Therefore, processing the original HSI through a random binary sparse observation matrix can reduce the range of noise interference in the subsequent HSI and the blurring effect introduced during acquisition. At the same time, since this process is random, it introduces randomness into the image processing, which helps improve the robustness and stability of the subsequent denoising algorithms.
The subsequent denoising problem is addressed using the model given by Equation (11).
$\hat{x} = \arg\min_x \frac{1}{2} \|y - Ax\|_2^2 + \lambda \Phi(x)$ (11)

where $\frac{1}{2}\|y - Ax\|_2^2$ denotes the data fidelity term, $\Phi(x)$ denotes the regularization term, and $\lambda$ manages the equilibrium between the two terms.
Within this part, we employ the generalized alternating projection (GAP) framework, utilizing the plug-and-play (PnP) algorithm, to solve the above problem. We briefly outline the specific steps of this algorithm; the in-depth derivation can be found in [54].
$x^{(l+1)} = w^{(l)} + A^T (A A^T)^{-1} (y - A w^{(l)})$ (12)

$w^{(l+1)} = \arg\min_w \lambda \gamma \Phi(w) + \frac{1}{2} \|x^{(l+1)} - w\|_2^2$ (13)

where $l$ denotes the iteration index. FFDNet and TV regularization are employed to model the proximal operator of $\Phi(w)$ to obtain a more stable solution. Equation (13) is solved using the maximum a posteriori (MAP) model, as shown in the following equation:

$w^{(l+1)} = \arg\max_w\ p(w \mid x^{(l+1)}, \sigma)$ (14)

where $\sigma$ is the denoiser hyperparameter, and $p(w \mid x^{(l+1)}, \sigma)$ is its distribution.
Integrating over $\sigma$ to eliminate it, we obtain the posterior of Equation (14):

$p(w \mid x^{(l+1)}) = \int p(w \mid x^{(l+1)}, \sigma)\, p(\sigma \mid x^{(l+1)})\, d\sigma$ (15)
Our approach combines two denoisers. Similarly, the other denoiser is represented as follows:

$q(w \mid x^{(l+1)}) = \int q(w \mid x^{(l+1)}, g)\, q(g \mid x^{(l+1)})\, dg$ (16)
Because Equations (15) and (16) are computationally challenging in their integral forms, a simple approach is to discretize $\sigma$ and $g$ by considering them as elements of finite sets $E$ and $F$, respectively. This allows us to express $p(\sigma \mid x^{(l+1)})$ and $q(g \mid x^{(l+1)})$ as discrete distributions. Consequently, Equations (15) and (16) can be reformulated as

$p(w \mid x^{(l+1)}) = \sum_{\sigma \in E} p(w \mid x^{(l+1)}, \sigma)\, p(\sigma \mid x^{(l+1)})$ (17)

$q(w \mid x^{(l+1)}) = \sum_{g \in F} q(w \mid x^{(l+1)}, g)\, q(g \mid x^{(l+1)})$ (18)
When different priors are closely related, they naturally inherit each other's strengths. To implement this mathematically, the distance between $p(w \mid x^{(l+1)})$ and $q(w \mid x^{(l+1)})$ must be minimized:

$\min_{p(\sigma \mid x^{(l+1)}),\, q(g \mid x^{(l+1)})} \mathrm{dist}\big(p(w \mid x^{(l+1)}),\ q(w \mid x^{(l+1)})\big)$ (19)

where $\mathrm{dist}(\cdot, \cdot)$ is a distance function. For computational efficiency, we use the $L_2$ norm between the first-order statistical moments of $p(w \mid x^{(l+1)})$ and $q(w \mid x^{(l+1)})$:

$\min_{p(\sigma \mid x^{(l+1)}),\, q(g \mid x^{(l+1)})} \big\| \mathbb{E}_{p(w \mid x^{(l+1)})}[w] - \mathbb{E}_{q(w \mid x^{(l+1)})}[w] \big\|_2^2$ (20)
Given that FFDNet is trained on pairs of clean and Gaussian-noisy images, we model the FFDNet posterior $p(w \mid x^{(l+1)}, \sigma)$ as a Gaussian distribution:

$p(w \mid x^{(l+1)}, \sigma) = \mathcal{N}\big(\mathrm{FFD}_\sigma(x),\ \sigma I\big)$ (21)

where $\mathrm{FFD}_\sigma(x)$ is the FFDNet output with $x$ as the input, $I$ is the identity matrix, and $\sigma I$ is the covariance matrix of the Gaussian distribution.
A Gaussian distribution likewise serves as a model for the posterior of the TV denoiser:

$q(w \mid x^{(l+1)}, g) = \mathcal{N}\big(\mathrm{TV}_g(x),\ g\big)$ (22)

where $\mathrm{TV}_g(x) \in \mathbb{R}^{m \times n \times B}$ is the mean, and $g$ is the covariance matrix.
The ultimate optimization problem is as follows:

$\min_{p(\sigma \mid x^{(l+1)}),\, q(g \mid x^{(l+1)})} \Big\| \sum_{\sigma \in E} p(\sigma \mid x^{(l+1)})\, \mathrm{FFD}_\sigma(x) - \sum_{g \in F} q(g \mid x^{(l+1)})\, \mathrm{TV}_g(x) \Big\|_2^2$
$\mathrm{s.t.} \quad \sum_{\sigma \in E} p(\sigma \mid x^{(l+1)}) = 1, \quad \sum_{g \in F} q(g \mid x^{(l+1)}) = 1, \quad p(\sigma \mid x^{(l+1)}) \ge 0, \quad q(g \mid x^{(l+1)}) \ge 0$ (23)
We employ quadratic programming and semidefinite programming to solve Equation (23), resulting in two similar posterior estimates that can be mutually beneficial.
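As an illustration, the weights of problem (23) can be found with a general-purpose constrained solver. The sketch below uses SciPy's SLSQP as a stand-in for the quadratic and semidefinite programming used in the paper; the function name and interface are ours.

```python
import numpy as np
from scipy.optimize import minimize

def match_first_moments(P, Q):
    """Sketch of problem (23): find simplex weights p, q minimizing
    ||P p - Q q||_2^2, where the columns of P are flattened FFDNet outputs
    FFD_sigma(x) for sigma in E, and the columns of Q are TV outputs
    TV_g(x) for g in F."""
    kp, kq = P.shape[1], Q.shape[1]
    z0 = np.concatenate([np.ones(kp) / kp, np.ones(kq) / kq])  # uniform start

    def objective(z):
        r = P @ z[:kp] - Q @ z[kp:]
        return r @ r

    cons = [{'type': 'eq', 'fun': lambda z: z[:kp].sum() - 1.0},   # p sums to 1
            {'type': 'eq', 'fun': lambda z: z[kp:].sum() - 1.0}]   # q sums to 1
    res = minimize(objective, z0, bounds=[(0, None)] * (kp + kq),  # nonnegativity
                   constraints=cons, method='SLSQP')
    return res.x[:kp], res.x[kp:]
```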
By averaging these posteriors, we obtain a denoiser weighted towards the mean of the combined posterior:

$w^{(l+1)} = \frac{1}{2} \Big( \sum_{\sigma \in E} p(\sigma \mid x^{(l+1)})\, \mathrm{FFD}_\sigma(x) + \sum_{g \in F} q(g \mid x^{(l+1)})\, \mathrm{TV}_g(x) \Big)$ (24)
Algorithm 1 delineates the procedure of our approach.
Algorithm 1. Proposed approach.
1. Input: the sparse representation of the HSI $y$ and the sparse observation matrix $A$.
2. Initialize: set $w^{(0)} = y$, $E = E_{\mathrm{initial}}$, $F = F_{\mathrm{initial}}$, $l = 0$.
3. While not converged, do
4. $l := l + 1$
5. Update $x$ via Equation (12).
6. Acquire a set of denoised images $\{\mathrm{FFD}_\sigma(x) : \sigma \in E\}$.
7. Acquire a set of denoised images $\{\mathrm{TV}_g(x) : g \in F\}$.
8. Solve problem (23).
9. Update $w$ via the denoiser average in Equation (24).
10. End while
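A high-level Python sketch of Algorithm 1 follows. The operator arguments are assumptions that avoid forming $A$ explicitly, the initialization maps the measurement into the image space via $A^T y$ (the paper's step 2 sets $w^{(0)} = y$), and `match_first_moments` refers to the weight-solver sketch given above.

```python
import numpy as np

def gap_pnp_denoise(y, A_mul, A_mul_T, AAt_inv_mul, ffd, tv, E, F, iters=100):
    """Sketch of Algorithm 1.
    A_mul(v), A_mul_T(v):    apply A and A^T without forming A;
    AAt_inv_mul(v):          apply (A A^T)^{-1}, diagonal for binary masks;
    ffd(x, sigma), tv(x, g): the two plug-in denoisers."""
    w = A_mul_T(y)  # initialization in the image space
    for _ in range(iters):
        # Equation (12): Euclidean projection onto {x : Ax = y}
        x = w + A_mul_T(AAt_inv_mul(y - A_mul(w)))
        # Steps 6-7: candidate denoised images from both priors
        P = np.stack([ffd(x, s).ravel() for s in E], axis=1)
        Q = np.stack([tv(x, g).ravel() for g in F], axis=1)
        # Step 8: solve problem (23); Step 9: weighted average, Equation (24)
        p, q = match_first_moments(P, Q)
        w = 0.5 * (P @ p + Q @ q).reshape(x.shape)
    return w
```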

2.3. Evaluation Metrics

In this section, we describe the evaluation metrics used to assess the performance of our proposed method. Specifically, we employ Mean Peak Signal-to-Noise Ratio (MPSNR), the Mean Structural Similarity Index Measure (MSSIM), and Spectral Angle Mapper (SAM) as our evaluation metrics [63,64].

2.3.1. Mean Peak Signal-to-Noise Ratio (MPSNR)

MPSNR is a widely used metric to evaluate the quality of image restoration or reconstruction. It measures the ratio between the maximum possible power of a signal and the power of corrupting noise that affects the fidelity of its representation. MPSNR is computed as the mean of the Peak Signal-to-Noise Ratio (PSNR) values obtained for each spectral band of the reconstructed HSI. Mathematically, MPSNR is defined as

$\mathrm{MPSNR} = \frac{1}{N} \sum_{i=1}^{N} \mathrm{PSNR}(i)$ (25)

where $N$ is the total number of bands, and $\mathrm{PSNR}(i)$ is the PSNR value for the $i$-th band, calculated as

$\mathrm{PSNR}(i) = 20 \log_{10} \left( \frac{\mathrm{MAX}_I}{\sqrt{\mathrm{MSE}(i)}} \right)$ (26)

where $\mathrm{MAX}_I$ is the maximum possible pixel value of the image and $\mathrm{MSE}(i)$ is the mean squared error between the original and reconstructed band.
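A minimal numpy sketch of Equations (25) and (26), averaging the PSNR over the spectral bands of an $(m, n, B)$ cube (the function name and interface are ours):

```python
import numpy as np

def mpsnr(clean, restored, max_val=1.0):
    """Mean PSNR over the B bands of an (m, n, B) HSI."""
    psnrs = []
    for b in range(clean.shape[2]):
        mse = np.mean((clean[:, :, b] - restored[:, :, b]) ** 2)
        psnrs.append(20.0 * np.log10(max_val / np.sqrt(mse)))
    return float(np.mean(psnrs))
```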

2.3.2. Mean Structural Similarity Index Measure (MSSIM)

MSSIM is a perceptual image quality metric that quantifies the similarity between two images based on their luminance, contrast, and structure. It is calculated as the mean of the Structural Similarity Index Measure (SSIM) values obtained for local windows in the images. Mathematically, MSSIM is defined as

$\mathrm{MSSIM} = \frac{1}{M} \sum_{j=1}^{M} \mathrm{SSIM}(j)$ (27)

where $M$ is the total number of local windows in the image, and $\mathrm{SSIM}(j)$ is the SSIM value for the $j$-th window, computed as

$\mathrm{SSIM}(j) = \frac{(2\mu_x \mu_y + C_1)(2\sigma_{xy} + C_2)}{(\mu_x^2 + \mu_y^2 + C_1)(\sigma_x^2 + \sigma_y^2 + C_2)}$ (28)

where $\mu_x$ and $\mu_y$ are the mean values of the local windows in the original and reconstructed images, $\sigma_x^2$ and $\sigma_y^2$ are their variances, $\sigma_{xy}$ is the covariance between the two windows, and $C_1$ and $C_2$ are small constants that avoid division by zero.
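MSSIM can be computed per band with scikit-image's SSIM implementation, which itself averages Equation (28) over local windows; the wrapper below is an illustrative sketch.

```python
import numpy as np
from skimage.metrics import structural_similarity as ssim

def mssim(clean, restored, data_range=1.0):
    """Mean SSIM over the B bands of an (m, n, B) HSI."""
    return float(np.mean([ssim(clean[:, :, b], restored[:, :, b],
                               data_range=data_range)
                          for b in range(clean.shape[2])]))
```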

2.3.3. Spectral Angle Mapper (SAM)

SAM is a metric commonly used in remote sensing and hyperspectral image analysis to compare the spectral signatures of pixels. It measures the angle between two spectral vectors and quantifies their similarity. For our purposes, we apply SAM to compare the spectral signatures of corresponding pixels in the original and reconstructed images. Mathematically, SAM is defined as

$\mathrm{SAM}(s_1, s_2) = \arccos \left( \frac{s_1 \cdot s_2}{\|s_1\| \, \|s_2\|} \right)$ (29)

where $s_1$ and $s_2$ are the spectral vectors of corresponding pixels in the original and reconstructed images, $\cdot$ denotes the dot product, and $\|s\|$ represents the magnitude of the spectral vector. A lower SAM value indicates a higher similarity between the spectral signatures.
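A short numpy sketch of Equation (29), averaging the spectral angle over all pixels; the small `eps` guard against division by zero is our addition.

```python
import numpy as np

def sam(clean, restored, eps=1e-12):
    """Mean spectral angle (radians) over the pixels of an (m, n, B) HSI."""
    s1 = clean.reshape(-1, clean.shape[2])      # one spectral vector per pixel
    s2 = restored.reshape(-1, restored.shape[2])
    dots = (s1 * s2).sum(axis=1)
    norms = np.linalg.norm(s1, axis=1) * np.linalg.norm(s2, axis=1)
    cos = np.clip(dots / (norms + eps), -1.0, 1.0)
    return float(np.mean(np.arccos(cos)))
```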

3. Experiments

3.1. Experimental Configuration

In order to assess the efficacy of the introduced method (referred to as FFDNet-TV) for HSI restoration, we use a combination of visual and quantitative measures. We perform a thorough evaluation of our algorithm's performance by contrasting it with six other denoising techniques: low-rank matrix recovery (LRMR) [65], the weighted sum of nuclear norms (WSNM) [66], low-rank tensor decomposition with total variation (LRTDTV) [30], factor group sparsity-regularized nonconvex low-rank approximation (FGSLR) [33], FFDNet [36], and E3DTV [58]. The codes for these competing methods were supplied by their respective authors, with parameters either fine-tuned or set as described in their papers for optimal performance. Our approach leverages the deep network FFDNet, which has previously been trained for various tasks; we employ the existing network architecture and its parameters [36]. Before the simulation, we normalize the grayscale values of all HSI bands to the interval [0, 1]; after the denoising procedure, the grayscale values are restored to their initial scales.
Experiments are carried out on four datasets: the Pavia City Center dataset, the University of Pavia dataset (captured via the Reflective Optics System Imaging Spectrometer), the Washington DC Mall dataset (collected through the Hyperspectral Digital Imagery Collection Experiment, HYDICE), and the Salinas dataset. To simulate the blur that occurs during image capture, we convolve the images to yield blurred HSIs, operating under the assumption that the point spread function (PSF) is known. To mimic the blur incurred during imaging for the first three datasets, we use a 5 × 5 Gaussian blur kernel with a standard deviation of 3, a 3 × 3 Gaussian blur kernel with a standard deviation of 1, and a 6 × 6 Gaussian blur kernel with a standard deviation of 2, respectively. Thereafter, we add zero-mean Gaussian noise with variances of 0.13, 0.23, 0.15, and 0.10 to every spectral band of the four datasets, respectively.

3.2. Visual Quality Comparison

Considering the vast number of spectral bands in a hyperspectral dataset, we choose a single band from each dataset to demonstrate the visual outcomes of the various noise reduction techniques. Figure 1, Figure 2, Figure 3 and Figure 4 visually present the results of the denoising techniques applied to the four datasets. In Figure 1, we display one band of sub-images with notable features from the Pavia City Center dataset, with dimensions of 50 × 50 × 80, after discarding bands heavily affected by noise contamination. Similarly, Figure 2 shows one band of sub-images from the University of Pavia dataset with a data size of 50 × 50 × 103, and Figure 3 shows one band of sub-images from the Washington DC Mall dataset with a data size of 50 × 50 × 191. Figure 4 shows full-band images from the Salinas dataset with a data size of 217 × 217 × 204.
In Figure 1a,b, the original hyperspectral image (HSI) and its degraded counterpart from band 21 of the Pavia City Center dataset are presented. Similarly, Figure 2a,b showcase the pristine and corrupted images from band 82 of the University of Pavia dataset. Moving on to Figure 3a,b, we observe the untouched and distorted images from band 17 of the Washington DC Mall dataset, and Figure 4a,b show the original and corrupted HSIs of Salinas. Comparative results from the various denoising techniques can be observed in Figure 1c–i, Figure 2c–i, Figure 3c–i and Figure 4c–i. It is worth noting that the unsatisfactory noise removal results for LRMR and LRTDTV, particularly evident in Figure 1c,e, Figure 2c,e, Figure 3c,e and Figure 4c,e, are primarily attributable to the heightened noise levels. Furthermore, the visual quality assessment reveals that FFDNet underperforms in terms of visual fidelity, as evident from Figure 1g, Figure 2g, Figure 3g and Figure 4g; this deficiency is attributed to the absence of prior knowledge regarding blurring effects, resulting in excessive smoothness, blurriness, and the consequential loss of intricate details. In contrast, an examination of Figure 1d, Figure 2d, Figure 3d and Figure 4d reveals that WSNM, while adept at preserving edge details, exhibits subpar noise reduction. Compared to methods like E3DTV, which partially mitigates Gaussian noise but introduces excessive smoothing, as evident in Figure 1i, Figure 2i, Figure 3i and Figure 4i, the proposed approach excels in enhancing texture and edge details while maintaining clarity in smooth regions, surpassing the results achieved by FFDNet or E3DTV in isolation. This superiority stems from the synergistic combination of the FFDNet and TV priors, which optimizes their complementary strengths by reducing the discrepancy between their first-order moments. While FFDNet provides high-quality initial denoising results by learning the complex relationship between noise and signal, TV regularization further optimizes these results by reducing noise while preserving image details. Additionally, the early application of a random binary sparse observation matrix not only reduces the range of noise interference in the HSI, but also diminishes the blurring effect introduced during HSI acquisition, providing a better initial input for the subsequent denoising processes. Consequently, among all these advanced methods, ours best preserves local details while eliminating noise.

3.3. Quantitative Comparison

To furnish a comprehensive and numerical assessment of the noise reduction capabilities, three metrics—Peak Signal-to-Noise Ratio (PSNR), Structural Similarity Index (SSIM), and Spectral Angle Mapper (SAM)—are utilized to evaluate the outcomes of the conducted simulations. The PSNR and SSIM metrics are computed individually for each clean band and restored band of each HSI, and subsequently averaged to calculate their mean equivalents, referred to as the MPSNR and MSSIM indices. MPSNR and MSSIM are spatial-based indicators, whereas SAM is a spectral-based indicator. The evaluation results of various denoising strategies for the Pavia City Center dataset are presented comprehensively in Table 1. Table 2 showcases the quantitative assessment results of seven different denoising approaches for the University of Pavia dataset. Additionally, Table 3 presents a numerical assessment of the noise reduction methods used on the Washington DC Mall data, emphasizing the best value for each metric in boldface.
Although the aggregate results of the experiments might not show extraordinary performance, this can be attributed to the combination of blurring and significant Gaussian noise present in the images. Despite this, the method proposed in this study delivers superior quantitative outcomes compared to the alternative approaches, as illustrated in Table 1, Table 2 and Table 3, except for PSNR. The discrepancy in PSNR values can be attributed to its focus on pixel-level differences, which may not accurately reflect overall image quality. In scenarios where image structure and texture hold greater significance, such as image enhancement and denoising, evaluations based on SSIM offer a more consistent and meaningful assessment. As shown in Table 1, Table 2 and Table 3, our technique reliably secures the highest SSIM scores, which corresponds with the visual quality evaluations performed. Regarding Table 4, although our MPSNR, MSSIM, and SAM values are not the best among all methods, all three indices are relatively good for our method, whereas each of the other methods exhibits a deficiency in one of the three; this demonstrates that our denoising method possesses a certain level of reliability. Moreover, the low SAM values indicate that our approach successfully maintains the spectral fidelity of the initial HSI. As noted earlier, the use of a random binary sparse observation matrix not only reduces the range of noise interference in the HSI, but also decreases the blurring effect during HSI acquisition, providing a better initial input for the subsequent denoising processes. Additionally, by leveraging both the FFDNet and TV priors and minimizing the $L_2$ distance between their first-order moments, the proposed method brings these priors closer together, giving them more opportunity to leverage each other's strengths and achieve better results than either method alone.

4. Discussion

4.1. Comparison of the $L_1$ and $L_2$ Norms

The $L_1$ norm of a vector is defined as the sum of the absolute values of its elements [67]. The $L_1$ norm is commonly employed in sparse inference and feature selection, as it encourages certain elements to become zero, thereby facilitating sparsity. For $x = (x_1, x_2, \ldots, x_n)$, its $L_1$ norm is as follows:

$\|x\|_1 = |x_1| + |x_2| + \cdots + |x_n|$ (30)

The $L_2$ norm of a vector is defined as the square root of the sum of the squares of its elements; in Euclidean space, it represents the distance of the vector from the origin. The $L_2$ norm is widely used in various mathematical and optimization contexts due to properties such as differentiability and its emphasis on larger values in the vector. It is calculated as follows:

$\|x\|_2 = \sqrt{x_1^2 + x_2^2 + \cdots + x_n^2}$ (31)
The $L_1$ norm tends to produce sparse solutions by driving certain elements of the vector to zero, making it suitable for applications such as feature selection and compressive sensing. In contrast, the $L_2$ norm emphasizes all the elements of the vector, is useful for controlling overall magnitude, and is commonly used in ridge regression and neural network weight regularization. Additionally, the $L_2$ norm is mathematically well behaved due to its smoothness and differentiability, while the $L_1$ norm is not differentiable at zero. To achieve more reliable results, we choose the $L_2$ norm in this paper.
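A small numeric example makes the contrast concrete:

```python
import numpy as np

x = np.array([3.0, -4.0, 0.0, 1.0])
l1 = np.abs(x).sum()           # |3| + |-4| + |0| + |1| = 8.0
l2 = np.sqrt((x ** 2).sum())   # sqrt(9 + 16 + 0 + 1) = sqrt(26), about 5.099
print(l1, l2)
```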

4.2. Stopping Criteria for Denoising

In our experimental setup, we empirically determined that iterating the denoising process 100 times provided the optimal results for our specific task and dataset. This criterion was based on thorough experimentation and analysis, where we observed that fewer iterations failed to effectively remove noise from the images, while excessive iterations led to potential over-smoothing and a loss of important details. By iterating the denoising algorithm 100 times, we achieved a balance between noise reduction and the preservation of image quality, which ultimately contributed to the overall performance of our approach. This choice of iteration count demonstrates our commitment to thorough experimentation and the pursuit of optimal results for our research.

4.3. Run Time

Under the conditions of a Gaussian blur kernel size of 3 × 3 and a standard deviation of 1, Table 5 presents the computational duration of different methods applied to the Pavia University dataset. While our approach does not achieve the shortest computational duration, it is worth noting that the stopping criteria for denoising vary significantly among the various methods. Additionally, the algorithmic architecture employed in this experiment is intricate, rendering the computational duration a mere reference indicator.
In evaluating the performance of a denoising algorithm, not only the computational efficiency but also the denoising effect and preservation of image details are crucial factors. Our method, despite its relatively longer computational time, has demonstrated promising results in balancing these aspects. The intricate algorithmic design ensures that while reducing noise, vital image features and details are preserved, which is crucial for subsequent image analysis and interpretation. Therefore, while the computational duration serves as a valuable metric, it should not be the sole criterion for evaluating the superiority of a denoising method. Instead, a comprehensive analysis that takes into account multiple performance metrics, including the denoising effect, image quality preservation, and computational efficiency, is necessary to draw a more convincing conclusion.

4.4. Residual Image

The evaluation of the residual images resulting from the application of various denoising methods provides crucial insight into the performance and efficacy of these techniques. In this part, we compare the residual images after the normalization of the Pavia University dataset obtained from [30,33,36,58,65,66]. The objective is to analyze the degree of noise reduction achieved while preserving image detail and structural information.
Figure 5 shows a group of residual images of the Pavia University dataset after normalization. It is clear that the residual images obtained via the LRMR, WSNM, and LRTDTV methods contain less noise. However, some fine textural details are lost, indicating a possible trade-off between noise suppression and detail preservation. In contrast, the FGSLR, FFDNet, and E3DTV methods restore texture details with good fidelity while maintaining a moderate level of noise reduction, suggesting a more balanced approach between noise removal and feature retention. Future research should aim to develop more efficient and robust denoising techniques that can adapt to the diverse challenges encountered in real-world scenarios.

4.5. Performance on Different Degrees of Noise

This part examines the efficacy of the suggested method under different degrees of noise. To ensure equitable comparisons, we maintain consistent levels of original blurring in the datasets before subjecting them to different noise intensities. In addition to the Gaussian noise variance levels explored in the prior experiments (referred to as Case 1), we amplify the variance to introduce higher Gaussian noise levels, with variances of 0.2 (Case 2) and 0.3 (Case 3), to assess the efficacy of our method. The assessment of the noise reduction methods on the Pavia Center dataset is portrayed in Figure 6a,b, displaying the MSSIM and SAM metrics. Similarly, Figure 7a,b display the MSSIM and SAM metrics for the denoising techniques applied to the University of Pavia dataset, while Figure 8a,b show the same for the Washington DC Mall dataset. Quantitative evaluations across these three datasets clearly indicate that the proposed method effectively utilizes dual prior information from both the FFDNet deep prior and the TV prior, showing remarkable robustness in HSI denoising; it outperforms the other methods under different noise conditions. Moreover, compared to alternative approaches, the proposed method offers partial blur mitigation alongside Gaussian noise removal. This is attributed to the random binary sparse observation matrix, which not only limits noise interference in the acquired HSI, but also mitigates blurring effects during HSI acquisition. This enhances the initial input for the subsequent denoising processes, thereby improving the overall efficacy of the denoising operations.

5. Conclusions

This paper proposes a denoising method for HSI based on deep learning and total variation priors. We introduce a random binary sparse observation matrix for preliminary processing, which significantly improves denoising performance and reduces the blurring introduced during imaging. To combine the FFDNet deep prior and the E3DTV prior effectively, we employ the plug-and-play method within the framework of generalized alternating projection, achieving precise noise suppression. The experimental results indicate that, compared to existing methods, the proposed method performs excellently across multiple datasets, particularly in terms of MSSIM and SAM, and demonstrates significant advantages in preserving image detail and spectral fidelity.
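To make the overall pipeline concrete, the sketch below outlines one way a GAP-PnP loop with the two plugged-in priors could be organized. Here, `ffdnet_denoise` and `e3dtv_denoise` are placeholders for black-box implementations of the two priors, the simple averaging of their outputs stands in for the first-order moment matching used in our method, and the iteration count and noise level are illustrative.

```python
import numpy as np

def gap_pnp(y, mask, ffdnet_denoise, e3dtv_denoise,
            n_iters=30, sigma=0.1, eps=1e-8):
    """GAP-PnP sketch: alternate a projection onto the observation
    constraint y = mask * x with plug-and-play regularization."""
    x = y.copy()  # initialize from the sparse observation
    for _ in range(n_iters):
        # Generalized alternating projection step (the mask is binary,
        # so mask * mask acts as the diagonal of Phi^T Phi).
        x = x + mask * (y - mask * x) / (mask * mask + eps)
        # Plug-and-play step with the two priors; averaging their
        # outputs stands in for the first-order moment matching.
        x = 0.5 * (ffdnet_denoise(x, sigma) + e3dtv_denoise(x))
    return x
```

In this arrangement, the deep prior contributes learned spatial detail while the E3DTV prior enforces spatial-spectral smoothness, matching the complementary behavior observed in the experiments.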

Author Contributions

Conceptualization, P.W.; Methodology, T.S.; Software, T.S.; Validation, X.W.; Formal analysis, L.G.; Investigation, P.W.; Resources, T.S.; Data curation, P.W. and Y.C.; Writing—original draft preparation, T.S.; Writing—review and editing, P.W. and L.W.; Visualization, P.W.; Supervision, X.W.; Project administration, L.W.; Funding acquisition, P.W. and Y.C. All authors have read and agreed to the published version of the manuscript.

Funding

This research was financially supported by the Fundamental Research Funds for the Central Universities in Nanjing University of Aeronautics and Astronautics (Grant Nos. NS2023020 and QZJC20230206), the Joint Fund of the Collaborative Innovation Center of Geo-Information Technology for Smart Central Plains, Henan Province, and the Key Laboratory of Spatiotemporal Perception and Intelligent Processing, Ministry of Natural Resources (Grant No. 232205), the Fund of the National Dam Safety Research Center (Grant No. CX2023B04), the National Natural Science Foundation of Jiangsu Province (Grant No. BK20221478), the Hong Kong Scholars Program (Grant No. XJ2022043), the Youth Promotion Talent Project of the Jiangsu Association for Science and Technology (Grant No. TJ-2023-010), and the National Natural Science Foundation of China (Grant No. 61801211).

Data Availability Statement

The datasets used in this study were obtained from their respective creators upon legitimate request.

Acknowledgments

The authors express their sincere gratitude to the editors and peer reviewers of this work for their beneficial feedback and valuable recommendations.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Wang, P.; Wang, L.; Leung, H.; Zhang, G. Super-Resolution Mapping Based on Spatial–Spectral Correlation for Spectral Imagery. IEEE Trans. Geosci. Remote Sens. 2021, 59, 2256–2268.
  2. Ang, K.L.-M.; Seng, J.K.P. Big Data and Machine Learning with Hyperspectral Information in Agriculture. IEEE Access 2021, 9, 36699–36718.
  3. Kovalev, D.M.; Obukhova, N.A. Modern Trends in Hyperspectral Archival Document Image Processing: A Review. In Proceedings of the 2023 Seminar on Signal Processing, Saint Petersburg, Russia, 22–22 November 2023; pp. 48–52.
  4. Salut, M.M.; Anderson, D.V. Tensor Robust CUR for Compression and Denoising of Hyperspectral Data. IEEE Access 2023, 11, 77492–77505.
  5. Uss, M.L.; Vozel, B.; Lukin, V.V.; Chehdi, K. Local Signal-Dependent Noise Variance Estimation from Hyperspectral Textural Images. IEEE J. Sel. Top. Signal Process. 2011, 5, 469–486.
  6. Sarkar, S.; Sahay, R.R. A Non-Local Superpatch-Based Algorithm Exploiting Low Rank Prior for Restoration of Hyperspectral Images. IEEE Trans. Image Process. 2021, 30, 6335–6348.
  7. Vuong, A.T.; Tang, V.H.; Ngo, L.T. A Hyperspectral Image Denoising Approach via Low-Rank Matrix Recovery and Greedy Bilateral. In Proceedings of the 2021 RIVF International Conference on Computing and Communication Technologies (RIVF), Hanoi, Vietnam, 19–21 August 2021; pp. 1–6.
  8. Li, S.; Geng, X.; Zhu, L.; Ji, L.; Zhao, Y. Hyperspectral Image Denoising Based on Principal-Third-Order-Moment Analysis. Remote Sens. 2024, 16, 276.
  9. Han, J.; Pan, C.; Ding, H.; Zhang, Z. Double-Factor Tensor Cascaded-Rank Decomposition for Hyperspectral Image Denoising. Remote Sens. 2024, 16, 109.
  10. Lian, X.; Yin, Z.; Zhao, S.; Li, D.; Lv, S.; Pang, B.; Sun, D. A Neural Network for Hyperspectral Image Denoising by Combining Spatial–Spectral Information. Remote Sens. 2023, 15, 5174.
  11. Gallo, I.; Boschetti, M.; Rehman, A.U.; Candiani, G. Self-Supervised Convolutional Neural Network Learning in a Hybrid Approach Framework to Estimate Chlorophyll and Nitrogen Content of Maize from Hyperspectral Images. Remote Sens. 2023, 15, 4765.
  12. Li, M.; Fu, Y.; Zhang, Y. Spatial-spectral transformer for hyperspectral image denoising. Proc. AAAI Conf. Artif. Intell. 2023, 37, 1368–1376.
  13. Yuan, Y.; Ma, H.; Liu, G. Partial-DNet: A Novel Blind Denoising Model With Noise Intensity Estimation for HSI. IEEE Trans. Geosci. Remote Sens. 2022, 60, 5505913.
  14. Bayati, F.; Trad, D. 3-D Data Interpolation and Denoising by an Adaptive Weighting Rank-Reduction Method Using Multichannel Singular Spectrum Analysis Algorithm. Sensors 2023, 23, 577.
  15. Ghaderpour, E.; Liao, W.; Lamoureux, M.P. Antileakage least-squares spectral analysis for seismic data regularization and random noise attenuation. Geophysics 2018, 83, V157–V170.
  16. Dabov, K.; Foi, A.; Katkovnik, V.; Egiazarian, K. Image Denoising by Sparse 3-D Transform-Domain Collaborative Filtering. IEEE Trans. Image Process. 2007, 16, 2080–2095.
  17. Mairal, J.; Bach, F.; Ponce, J.; Sapiro, G.; Zisserman, A. Non-local sparse models for image restoration. In Proceedings of the 2009 IEEE 12th International Conference on Computer Vision, Kyoto, Japan, 29 September–2 October 2009; pp. 2272–2279.
  18. Xie, Q.; Zhao, Q.; Meng, D.; Xu, Z.; Gu, S.; Zuo, W.; Zhang, L. Multispectral Images Denoising by Intrinsic Tensor Sparsity Regularization. In Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA, 27–30 June 2016; pp. 1692–1700.
  19. Chen, G.; Qian, S. Denoising and dimensionality reduction of hyperspectral imagery using wavelet packets, neighbour shrinking and principal component analysis. Int. J. Remote Sens. 2009, 30, 4889–4895.
  20. Ye, M.; Qian, Y.; Zhou, J. Multitask Sparse Nonnegative Matrix Factorization for Joint Spectral–Spatial Hyperspectral Imagery Denoising. IEEE Trans. Geosci. Remote Sens. 2015, 53, 2621–2639.
  21. Xing, Z.; Zhou, M.; Castrodad, A.; Sapiro, G.; Carin, L. Dictionary Learning for Noisy and Incomplete Hyperspectral Images. SIAM J. Imaging Sci. 2012, 5, 33–56.
  22. Cai, W.; Jiang, J.; Ouyang, S. Hyperspectral Image Denoising Using Adaptive Weight Graph Total Variation Regularization and Low-Rank Matrix Recovery. IEEE Geosci. Remote Sens. Lett. 2022, 19, 5509805.
  23. He, W.; Zhang, H.; Zhang, L.; Shen, H. Total-variation-regularized low-rank matrix factorization for hyperspectral image restoration. IEEE Trans. Geosci. Remote Sens. 2015, 54, 178–188.
  24. He, W.; Zhang, H.; Shen, H.; Zhang, L. Hyperspectral Image Denoising Using Local Low-Rank Matrix Recovery and Global Spatial-Spectral Total Variation. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2018, 11, 713–729.
  25. Chen, Y.; Huang, T.-Z.; Zhao, X.-L.; Deng, L.-J. Hyperspectral image restoration using framelet-regularized low-rank nonnegative matrix factorization. Appl. Math. Model. 2018, 63, 128–147.
  26. Zeng, H.; Xie, X.; Ning, J. Hyperspectral Image Restoration via Global Total Variation Regularized Local Nonconvex Low-Rank Matrix Approximation. In Proceedings of the IGARSS 2020–2020 IEEE International Geoscience and Remote Sensing Symposium, Waikoloa, HI, USA, 26 September–2 October 2020; pp. 2312–2315.
  27. Fan, H.; Li, J.; Yuan, Q.; Liu, X.; Ng, M. Hyperspectral image denoising with bilinear low rank matrix factorization. Signal Process. 2019, 163, 132–152.
  28. Li, C.; Ma, Y.; Huang, J.; Mei, X.; Ma, J. Hyperspectral image denoising using the robust low-rank tensor recovery. J. Opt. Soc. Am. A 2015, 32, 1604–1612.
  29. Gao, L.; Yao, D.; Li, Q.; Zhuang, L.; Zhang, B.; Bioucas-Dias, J.M. A new low-rank representation based hyperspectral image denoising method for mineral mapping. Remote Sens. 2017, 9, 1145.
  30. Wang, Y.; Peng, J.; Zhao, Q.; Leung, Y.; Zhao, X.-L.; Meng, D. Hyperspectral image restoration via total variation regularized low-rank tensor decomposition. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2017, 11, 1227–1243.
  31. Kong, X.; Zhao, Y.; Xue, J.; Chan, J.C.-W.; Ren, Z.; Huang, H.; Zang, J. Hyperspectral image denoising based on nonlocal low-rank and TV regularization. Remote Sens. 2020, 12, 1956.
  32. Yang, Y.; Chen, S.; Zheng, J. Moreau-Enhanced Total Variation and Subspace Factorization for Hyperspectral Denoising. Remote Sens. 2020, 12, 212.
  33. Chen, Y.; Huang, T.-Z.; He, W.; Zhao, X.-L.; Zhang, H.; Zeng, J. Hyperspectral image denoising using factor group sparsity-regularized nonconvex low-rank approximation. IEEE Trans. Geosci. Remote Sens. 2021, 60, 5515916.
  34. Zhuang, L.; Bioucas-Dias, J.M. Fast Hyperspectral Image Denoising and Inpainting Based on Low-Rank and Sparse Representations. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2018, 11, 730–742.
  35. He, W.; Yao, Q.; Li, C.; Yokoya, N.; Zhao, Q. Non-Local Meets Global: An Integrated Paradigm for Hyperspectral Denoising. In Proceedings of the 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Long Beach, CA, USA, 15–20 June 2019; pp. 6861–6870.
  36. Zhang, K.; Zuo, W.; Chen, Y.; Meng, D.; Zhang, L. Beyond a Gaussian Denoiser: Residual Learning of Deep CNN for Image Denoising. IEEE Trans. Image Process. 2017, 26, 3142–3155.
  37. Krizhevsky, A.; Sutskever, I.; Hinton, G.E. ImageNet classification with deep convolutional neural networks. Commun. ACM 2017, 60, 84–90.
  38. Sun, H.; Liu, M.; Zheng, K.; Yang, D.; Li, J.; Gao, L. Hyperspectral image denoising via low-rank representation and CNN denoiser. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2021, 15, 716–728.
  39. Dian, R.; Li, S.; Kang, X. Regularizing hyperspectral and multispectral image fusion by CNN denoiser. IEEE Trans. Neural Netw. Learn. Syst. 2020, 32, 1124–1135.
  40. Nguyen, H.V.; Ulfarsson, M.O.; Sveinsson, J.R. Hyperspectral image denoising using SURE-based unsupervised convolutional neural networks. IEEE Trans. Geosci. Remote Sens. 2020, 59, 3369–3382.
  41. Lin, B.; Tao, X.; Lu, J. Hyperspectral image denoising via matrix factorization and deep prior regularization. IEEE Trans. Image Process. 2019, 29, 565–578.
  42. Guan, J.; Lai, R.; Li, H.; Yang, Y.; Gu, L. DnRCNN: Deep Recurrent Convolutional Neural Network for HSI Destriping. IEEE Trans. Neural Netw. Learn. Syst. 2022, 34, 3255–3268.
  43. Zhang, H.; Chen, H.; Yang, G.; Zhang, L. LR-Net: Low-rank spatial-spectral network for hyperspectral image denoising. IEEE Trans. Image Process. 2021, 30, 8743–8758.
  44. Zhuang, L.; Ng, M.K.; Gao, L.; Wang, Z. Eigen-CNN: Eigenimages Plus Eigennoise Level Maps Guided Network for Hyperspectral Image Denoising. IEEE Trans. Geosci. Remote Sens. 2024, 62, 5512018.
  45. Zhang, K.; Zuo, W.; Zhang, L. FFDNet: Toward a fast and flexible solution for CNN-based image denoising. IEEE Trans. Image Process. 2018, 27, 4608–4622.
  46. Dong, W.; Wang, H.; Wu, F.; Shi, G.M.; Li, X. Deep spatial–spectral representation learning for hyperspectral image denoising. IEEE Trans. Comput. Imaging 2019, 5, 635–648.
  47. Zhang, Q.; Zheng, Y.; Yuan, Q.; Song, M.; Yu, H.; Xiao, Y. Hyperspectral Image Denoising: From Model-Driven, Data-Driven, to Model-Data-Driven. IEEE Trans. Neural Netw. Learn. Syst. 2023, 1–21.
  48. Rasti, B.; Koirala, B.; Scheunders, P.; Ghamisi, P. How hyperspectral image unmixing and denoising can boost each other. Remote Sens. 2020, 12, 1728.
  49. Khan, M.J.; Khan, H.S.; Yousaf, A.; Khurshid, K.; Abbas, A. Modern trends in hyperspectral image analysis: A review. IEEE Access 2018, 6, 14118–14129.
  50. Zhang, T.; Fu, Y.; Zhang, J. Guided hyperspectral image denoising with realistic data. Int. J. Comput. Vis. 2022, 130, 2885–2901.
  51. Sidorov, O.; Hardeberg, J.Y. Deep Hyperspectral Prior: Single-Image Denoising, Inpainting, Super-Resolution. In Proceedings of the IEEE/CVF International Conference on Computer Vision Workshop (ICCVW), Seoul, Republic of Korea, 27–28 October 2019.
  52. Qiu, H.; Wang, Y.; Meng, D. Effective Snapshot Compressive-spectral Imaging via Deep Denoising and Total Variation Priors. In Proceedings of the 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Nashville, TN, USA, 20–25 June 2021; pp. 9123–9132.
  53. Yuan, X.; Liu, Y.; Suo, J.; Dai, Q. Plug-and-play algorithms for large-scale snapshot compressive imaging. In Proceedings of the 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Seattle, WA, USA, 14–19 June 2020; pp. 1444–1454.
  54. Liu, Y.; Yuan, X.; Suo, J.; Brady, D.J.; Dai, Q. Rank minimization for snapshot compressive imaging. IEEE Trans. Pattern Anal. Mach. Intell. 2019, 41, 2990–3006.
  55. Ioffe, S.; Szegedy, C. Batch normalization: Accelerating deep network training by reducing internal covariate shift. In Proceedings of the International Conference on Machine Learning (ICML), Lille, France, 6–11 July 2015; pp. 448–456. Available online: https://proceedings.mlr.press/v37/ioffe15.html (accessed on 4 June 2024).
  56. Rudin, L.I.; Osher, S.; Fatemi, E. Nonlinear total variation based noise removal algorithms. Phys. D Nonlinear Phenom. 1992, 60, 259–268.
  57. Chang, Y.; Yan, L.; Fang, H.; Luo, C. Anisotropic spectral-spatial total variation model for multispectral remote sensing image destriping. IEEE Trans. Image Process. 2015, 24, 1852–1866.
  58. Peng, J.; Xie, Q.; Zhao, Q.; Wang, Y.; Yee, L.; Meng, D. Enhanced 3DTV regularization and its applications on HSI denoising and compressed sensing. IEEE Trans. Image Process. 2020, 29, 7889–7903.
  59. Chan, R.H.; Li, R. A 3-Stage Spectral-Spatial Method for Hyperspectral Image Classification. Remote Sens. 2022, 14, 3998.
  60. Cui, K.; Camalan, S.; Li, R.; Pauca, V.P.; Alqahtani, S.; Plemmons, R.; Silman, M.; Dethier, E.N.; Lutz, D.; Chan, R. Semi-Supervised Change Detection of Small Water Bodies Using RGB and Multispectral Images in Peruvian Rainforests. In Proceedings of the 2022 12th Workshop on Hyperspectral Imaging and Signal Processing: Evolution in Remote Sensing (WHISPERS), Rome, Italy, 13–16 September 2022; pp. 1–5.
  61. Liao, X.; Li, H.; Carin, L. Generalized alternating projection for weighted-ℓ2,1 minimization with applications to model-based compressive sensing. SIAM J. Imaging Sci. 2014, 7, 797–823.
  62. Tao, S.; Dong, W.; Tang, Z.; Wang, Q. Fast non-blind deconvolution method for blurred image corrupted by Poisson noise. In Proceedings of the 2017 International Conference on Progress in Informatics and Computing (PIC), Nanjing, China, 15–17 December 2017; pp. 184–189.
  63. Sara, U.; Akter, M.; Uddin, M.S. Image quality assessment through FSIM, SSIM, MSE and PSNR—A comparative study. J. Comput. Commun. 2019, 7, 8–18.
  64. Horé, A.; Ziou, D. Image Quality Metrics: PSNR vs. SSIM. In Proceedings of the 2010 20th International Conference on Pattern Recognition, Istanbul, Turkey, 23–26 August 2010; pp. 2366–2369.
  65. Zhang, H.; He, W.; Zhang, L.; Shen, H.; Yuan, Q. Hyperspectral image restoration using low-rank matrix recovery. IEEE Trans. Geosci. Remote Sens. 2013, 52, 4729–4743.
  66. Xie, Y.; Qu, Y.; Tao, D.; Wu, W.; Yuan, Q.; Zhang, W. Hyperspectral image restoration via iteratively regularized weighted Schatten p-norm minimization. IEEE Trans. Geosci. Remote Sens. 2016, 54, 4642–4659.
  67. Kwak, N. Principal component analysis based on L1-norm maximization. IEEE Trans. Pattern Anal. Mach. Intell. 2008, 30, 1672–1680.
Figure 1. Outcomes for Band 21 from the Pavia City Center dataset. (a) Band 21 as the benchmark image; (b) the corrupted version; (c) LRMR; (d) WSNM; (e) LRTDTV; (f) FGSLR; (g) FFDNet; (h) E3DTV; (i) our suggested approach.
Figure 2. Outcomes for Band 82 from the University of Pavia dataset. (a) Band 82 as the benchmark image; (b) the corrupted version; (c) LRMR; (d) WSNM; (e) LRTDTV; (f) FGSLR; (g) FFDNet; (h) E3DTV; (i) our suggested approach.
Figure 3. Outcomes for Band 17 from the Washington DC Mall dataset. (a) Band 17 as the benchmark image; (b) the corrupted version; (c) LRMR; (d) WSNM; (e) LRTDTV; (f) FGSLR; (g) FFDNet; (h) E3DTV; (i) our suggested approach.
Figure 4. Outcomes for Band 1 from the Salinas dataset. (a) The benchmark image; (b) the corrupted version; (c) LRMR; (d) WSNM; (e) LRTDTV; (f) FGSLR; (g) FFDNet; (h) E3DTV; (i) our suggested approach.
Figure 5. Normalized residual images for the Pavia University dataset. (a) Original image; (b) LRMR; (c) WSNM; (d) LRTDTV; (e) FGSLR; (f) FFDNet; (g) E3DTV; (h) our suggested approach.
Figure 6. Efficacy of the compared denoising techniques under various noise intensities. (a) MSSIM values for the Pavia City Center dataset. (b) SAM values for the Pavia City Center dataset.
Figure 7. Efficacy of the compared denoising techniques under various noise intensities. (a) MSSIM values for the University of Pavia dataset. (b) SAM values for the University of Pavia dataset.
Figure 8. Efficacy of the compared denoising techniques under various noise intensities. (a) MSSIM values for the Washington DC Mall dataset. (b) SAM values for the Washington DC Mall dataset.
Table 1. Quantitative assessment using the Pavia City Center dataset.

| Metric | Noisy | LRMR | WSNM | LRTDTV | FGSLR | FFDNet | E3DTV | Ours |
|--------|-------|------|------|--------|-------|--------|-------|------|
| MPSNR | 17.2142 | 25.7200 | 26.2446 | 26.2574 | 26.2786 | 24.7698 | 26.0635 | 25.2010 |
| MSSIM | 0.2061 | 0.6503 | 0.6884 | 0.6820 | 0.6827 | 0.5846 | 0.6685 | 0.6986 |
| SAM | 0.6289 | 0.2543 | 0.2371 | 0.2380 | 0.2356 | 0.2828 | 0.2401 | 0.2341 |
Table 2. Quantitative assessment using the University of Pavia dataset.

| Metric | Noisy | LRMR | WSNM | LRTDTV | FGSLR | FFDNet | E3DTV | Ours |
|--------|-------|------|------|--------|-------|--------|-------|------|
| MPSNR | 12.7006 | 22.9192 | 23.8758 | 24.4816 | 23.3588 | 23.6282 | 23.6166 | 23.1644 |
| MSSIM | 0.1614 | 0.5829 | 0.7642 | 0.6590 | 0.6361 | 0.6137 | 0.6649 | 0.7950 |
| SAM | 0.7171 | 0.2527 | 0.2257 | 0.2141 | 0.2127 | 0.2189 | 0.2054 | 0.2034 |
Table 3. Quantitative assessment using the Washington DC Mall dataset.

| Metric | Noisy | LRMR | WSNM | LRTDTV | FGSLR | FFDNet | E3DTV | Ours |
|--------|-------|------|------|--------|-------|--------|-------|------|
| MPSNR | 15.9062 | 22.1283 | 22.0652 | 22.7005 | 22.3429 | 22.3168 | 22.5068 | 21.6217 |
| MSSIM | 0.1887 | 0.4605 | 0.5415 | 0.4906 | 0.4734 | 0.4433 | 0.4651 | 0.5417 |
| SAM | 0.6420 | 0.3422 | 0.3324 | 0.3200 | 0.3151 | 0.3282 | 0.3140 | 0.3131 |
Table 4. Quantitative assessment using the Salinas dataset.

| Metric | Noisy | LRMR | WSNM | LRTDTV | FGSLR | FFDNet | E3DTV | Ours |
|--------|-------|------|------|--------|-------|--------|-------|------|
| MPSNR | 10.0009 | 26.3611 | 28.5849 | 31.1776 | 32.8068 | 70.3252 | 32.4068 | 66.1001 |
| MSSIM | 0.0256 | 0.4474 | 0.5754 | 0.6746 | 0.7584 | 0.9875 | 0.8554 | 0.7517 |
| SAM | 0.7908 | 0.1615 | 0.7907 | 0.0942 | 0.1316 | 0.6769 | 0.0791 | 0.1737 |
Table 5. Run time of different methods applied to the Pavia University dataset.

| Method | LRMR | WSNM | LRTDTV | FGSLR | FFDNet | E3DTV | Ours |
|--------|------|------|--------|-------|--------|-------|------|
| Time (s) | 143.66 | 3591.13 | 1644.89 | 1574.36 | 102.82 | 40.48 | 997.66 |