Article

Removal of Mixed Noise in Hyperspectral Images Based on Subspace Representation and Nonlocal Low-Rank Tensor Decomposition

1 College of Geophysics, Chengdu University of Technology, Chengdu 610059, China
2 Geomathematics Key Laboratory of Sichuan Province, Chengdu University of Technology, Chengdu 610059, China
3 Education and Information Technology Center, China West Normal University, Nanchong 637009, China
4 College of Mathematics and Physics, Chengdu University of Technology, Chengdu 610059, China
5 Engineering and Technical College, Chengdu University of Technology, Leshan 614000, China
* Author to whom correspondence should be addressed.
Sensors 2024, 24(2), 327; https://doi.org/10.3390/s24020327
Submission received: 16 November 2023 / Revised: 30 December 2023 / Accepted: 1 January 2024 / Published: 5 January 2024
(This article belongs to the Section Remote Sensors)

Abstract
Hyperspectral images (HSIs) contain abundant spectral and spatial structural information, but they are inevitably contaminated by a variety of noises during data reception and transmission, degrading image quality and hindering subsequent applications. Hence, removing mixed noise from hyperspectral images is an important step in improving the performance of subsequent image processing. It is well established that the data information of hyperspectral images can be effectively represented by a global spectral low-rank subspace due to the high redundancy and correlation (RAC) in the spatial and spectral domains. Taking advantage of this property, a new algorithm based on subspace representation and nonlocal low-rank tensor decomposition is proposed to remove the mixed noise of hyperspectral images. The algorithm first obtains the subspace representation of the hyperspectral image by utilizing the spectral low-rank property, yielding the orthogonal basis and the representation coefficient image (RCI). Then, the representation coefficient image is grouped and denoised using tensor decomposition and wavelet decomposition, respectively, according to the spatial nonlocal self-similarity. Afterward, the orthogonal basis and the denoised representation coefficient image are optimized using the alternating direction method of multipliers (ADMM). Finally, iterative regularization is used to update the image and obtain the final denoised hyperspectral image. Experiments on both simulated and real datasets demonstrate that the proposed algorithm is superior to related mainstream methods in both quantitative metrics and visual quality. Because denoising is performed in the image subspace, the time complexity is greatly reduced, and the computational cost is lower than that of related denoising algorithms.

1. Introduction

Hyperspectral images (HSIs) contain a vast amount of spectral information and spatial structure information due to their unique imaging mechanism and are widely used in various applications [1,2], such as object recognition [3,4], forest monitoring [5], earth observation [6], environmental protection [7], natural disaster monitoring [8], and military security [9]. However, HSIs are unavoidably corrupted by different types of noise during acquisition due to the presence of random errors, dark current, and thermal electronics [10,11], which greatly reduces the performance of subsequent processing of HSIs, including classification [12,13], unmixing [14,15,16], and target detection [17,18,19]. For these reasons, removing noise from HSIs is an imperative step before subsequent image processing [20].
So far, quite a few HSI denoising approaches have been put forward that achieve certain denoising effects. Traditional denoising methods regard each band of an HSI as an independent grayscale image and denoise each band individually. These approaches ignore the redundancy and correlation (RAC) among bands and fail to utilize the spectral and spatial structure information within the HSI effectively; therefore, the overall denoising effect is not satisfactory. Typical traditional methods include BM3D [21], K-SVD [22], and WNNM [23].
To address the shortcomings of traditional denoising methods, many approaches utilizing spatial and spectral information have been put forward in recent years, including principal component analysis (PCA) [24], nonnegative matrix factorization [25,26], sparse representation (SR) [27,28,29], and low-rank learning [30,31]. In reference [32], the authors proposed a low-rank nonlocal (LRSNL) method, which takes both spectral and spatial information into account and can remove mixed noise effectively. However, the method converts the HSI cube into a 2-D image for processing, so the spatial structure information obtained through the nonlocal method is incomplete; therefore, there is still room for improvement. Reference [33] proposed a low-rank matrix recovery (LRMR) method, which considers a clean HSI as a low-rank matrix, allowing the mixed noise in the HSI to be effectively removed by using the low-rank property. However, the LRMR method only considers the spectral correlation and ignores the spatial correlation. A total variation regularized low-rank matrix factorization (LRTV) method was proposed in reference [34], where the total variation regularization term captures the spatial information of pixels but fails to utilize spectral information effectively and only performs well on low-intensity noise; therefore, there is also much room for improvement. The LRTDTV method [35] adds Tucker decomposition to LRTV, making full use of the spectral correlation information. Its denoising effect is significantly better than that of LRTV, but it neglects the protection of HSI edge information. Therefore, the algorithm still needs further improvement.
Because an HSI is composed of a set of 2-D images, it can be viewed as a 3-D tensor that naturally retains the spatial structure information of the HSI, and scholars have therefore proposed many denoising methods based on low-rank tensor decomposition [36,37,38]. Most of these methods use a tensor nuclear norm defined via the tensor singular value decomposition (t-SVD) for noise reduction. Using this property, a tensor robust principal component analysis (TRPCA) method was proposed [39], which exploits both the low-rankness and sparsity of the data and uses the tensor nuclear norm as a low-rank constraint, so that Gaussian noise and sparse noise in an HSI can be effectively removed. However, this approach uses the traditional $\ell_1$-norm to constrain the sparse term, making the algorithm less accurate. Reference [40] proposed a spatial–spectral total variation regularized low-rank tensor factorization method (SSTV-LRTF). The total variation (TV) regularization term maintains spatial piecewise smoothness while removing Gaussian noise; meanwhile, the low-rank property captures the correlation of adjacent bands. Overall, the denoising approaches via low-rank tensor decomposition achieve relatively effective denoising performance by fully preserving useful spatial structure information.
It is well known that an HSI has much RAC in both the spatial and spectral domains and lies in a global spectral low-rank subspace; therefore, spectral low-rank constraints can be used for dimension reduction as well as denoising. At present, there are relatively few subspace-based denoising algorithms for HSI. Reference [41] proposed an algorithm, FastHyDe, which obtains the subspace representation of an HSI through projection and then removes noise in the subspace. On this basis, the authors also proposed the GLF algorithm [42], which performs low-rank tensor decomposition on groups of similar nonlocal 3-D image patches in the subspace to remove noise and achieves good results. The FastHyMix algorithm [43] is a parameter-free mixed noise removal method for HSI, which uses a Gaussian mixture model to characterize the complex distribution of mixed noise and exploits two main properties of hyperspectral data, namely, the low-rankness in the spectral domain and the high correlation in the spatial domain. Because this method extracts a deep image prior with a neural denoising network, it runs much faster than other denoising algorithms. He et al. [44] utilized the spectral information and spatial structure of the HSI to denoise the image in the subspace and then optimized the denoising results through iterations, achieving ideal denoising performance. Reference [45] proposed that the spectral features of an HSI lie in a subspace and exploited low-rank decomposition to model spatial nonlocal self-similarity; meanwhile, the augmented Lagrangian method (ALM) was used to optimize the denoising model, and good denoising effects were also obtained. A tensor subspace low-rank learning method with a nonlocal prior, TSLRLN, was proposed in reference [46]. In this method, the original noisy HSI tensor is projected into a low-dimensional subspace, and then the orthogonal tensor basis and tensor coefficients of the subspace are learned alternately. The method also achieves positive denoising results by fully utilizing the spatial and spectral low-rank properties of the HSI tensor. For the time being, combining subspace representation with low-rank tensor decomposition for HSI denoising is a research hotspot, and although some achievements have been made, there is still a long way to go.
Because of this, a new mixed noise-removing algorithm for HSI via subspace and nonlocal low-rank tensor decomposition is put forward in this paper. Firstly, the algorithm obtains the subspace representation (i.e., orthogonal basis and RCI) of HSI by utilizing RAC among the bands. Then, similar 3-D image patches in the RCI are grouped to form 3-D tensors, which are then denoised using low-rank tensor decomposition and the improved wavelet threshold method according to the nonlocal self-similarity of HSI. ADMM is used to optimize the orthogonal basis and RCI alternately to obtain the denoised HSI. After that, the denoised HSI is regularized iteratively, and the final denoised HSI is obtained through iterations. We name the algorithm SNLTAI, and its contributions are listed as follows:
  • The subspace representation is realized by using spectral low-rank constraint, which is obtained by projecting noisy HSI onto the orthogonal basis. The algorithm denoises the RCI obtained from the subspace representation instead of denoising all band images, which greatly reduces the complexity and saves the running time of the algorithm.
  • The low-rank tensor decomposition and the improved wavelet threshold denoising algorithm are successively used to denoise 3-D tensors constructed using nonlocal similar 3-D image patches in RCI. The improved wavelet threshold denoising algorithm results in a more thorough denoising of RCI. The orthogonal basis and RCI are updated using the ADMM algorithm to improve the denoising performance. The denoised HSI is iteratively regularized, so the final denoised HSI is obtained through iterations.
  • At present, many HSI denoising algorithms have considered mixed noise, but the ability to remove heavy mixed noise is extremely limited, and some algorithms are entirely unable to do so. The denoising model constructed using this algorithm has an excellent ability to remove strong mixed noise from HSI, which helps in recovering the image disturbed by strong noise so that the image can be efficiently processed accordingly for subsequent processing.
The sections are arranged as follows: Section 2 introduces the proposed HSI denoising model; Section 3 illustrates the specific steps of this algorithm; the experimental results of different algorithms for the simulated dataset and the real dataset, as well as the analysis of the corresponding parameters, are presented in Section 4; and Section 5 summarizes the paper.

2. Denoising Model for HSI

Suppose that $\mathcal{Y} \in \mathbb{R}^{r \times c \times b}$ represents the observed noisy HSI cube, where $r \times c$ denotes the size of each band and $b$ is the number of spectral bands. During processing, every band image is stretched into a row vector, so the HSI cube is reshaped into a 2-D matrix $Y \in \mathbb{R}^{b \times h}$ ($h = r \times c$). The noise model for HSI can be formulated as:
$$Y = X + S + N, \tag{1}$$
where $Y$ and $X \in \mathbb{R}^{b \times h}$ represent the noisy HSI and the clean HSI, respectively, $S$ denotes sparse noise, which contains impulse noise, stripe noise, and deadlines, and $N$ denotes Gaussian noise.
Because the spectrum of the noisy HSI $Y$ contains a large amount of RAC, a reasonable assumption is that the valid information of the clean HSI $X$ lies in a spectral low-dimensional subspace that can be approximately estimated from the noisy HSI $Y$. The estimate is written as $X = EZ$, where $E \in \mathbb{R}^{b \times k}$ ($k \ll b$) is the orthogonal basis of the subspace, $k$ is the dimension of the subspace, and $Z \in \mathbb{R}^{k \times h}$ represents the representation coefficient of the subspace. Therefore, the noise model can be reformulated as:
$$Y = EZ + S + N \tag{2}$$
According to the noise model, the denoising model for HSI can be formulated as:
$$\{\hat{E},\hat{Z},\hat{S}\} = \arg\min_{E,Z,S} \; \frac{1}{2}\|Y - EZ - S\|_F^2 + \lambda_1\phi_1(Z) + \lambda_2\phi_2(S), \tag{3}$$
where the first term on the right side of Formula (3) is the data fidelity term, under the assumption of zero-mean, independent and identically distributed Gaussian noise (other covariance matrices can be handled with the methods in reference [47]), and $\|\cdot\|_F$ represents the Frobenius norm. The second term is a regularization term on the representation coefficient, which requires a low-rank constraint and can be handled with the nuclear norm [48]. The third term is the regularization term for sparse noise, which can be suppressed using the $\ell_1$-norm of the matrix; the $\ell_1$-norm is also known as Lasso regularization. The parameters $\lambda_1, \lambda_2 \ge 0$ are scale coefficients used to balance the overall denoising effect.
Because $E$ is an orthogonal basis, we have $E^T E = I_k$, where $I_k$ is the $k$-order identity matrix. Reference [49] highlighted that an orthogonal basis can reduce the complexity of the denoising model and accelerate the convergence of the algorithm.
After adding the constraint term of the orthogonal basis, the denoising model (3) can be rewritten as:
$$\{\hat{E},\hat{Z},\hat{S}\} = \arg\min_{E,Z,S} \; \frac{1}{2}\|Y - EZ - S\|_F^2 + \lambda_1\|Z\|_* + \lambda_2\|S\|_1, \quad \text{s.t.}\; E^T E = I_k, \tag{4}$$
where $\|Z\|_*$ represents the nuclear norm of $Z$, and $\|S\|_1$ denotes the $\ell_1$-norm of $S$. The key to solving model (4) lies in the continuous optimization of the orthogonal basis $E$ and the representation coefficient $Z$, which is described in detail later in our algorithm.

3. Proposed Denoising Method for HSI

In this part, we provide a detailed description of the proposed algorithm. The algorithm includes four steps: (1) spectral low-rank; (2) spatial nonlocal self-similarity; (3) updating orthogonal basis and representation coefficient; and (4) iterative regularization.
The overview of the proposed algorithm is shown in Figure 1.

3.1. Spectral Low-Rank

Due to the high RAC among the spectral bands, the valid information of a clean HSI lies in a low-dimensional subspace, which we denote by $S_k$, where $k$ is the dimension of the subspace and $k \ll b$. Hysime [50] is an efficient least-squares-based algorithm for estimating the subspace dimension of an HSI; it estimates the correlation matrices of the signal and the noise and thus simultaneously computes the dimension $k$ of the subspace and the orthogonal basis $E_{all}$ of the signal space. The first $k$ column vectors of $E_{all}$ form the orthogonal basis $E$ of the subspace $S_k$.
Because there are hundreds of spectral bands in an HSI, denoising all bands is time-consuming and often does not yield good results. Performing low-rank spectral denoising in the subspace instead saves a significant amount of processing time and achieves better denoising results. Projecting the observed HSI $Y$ onto the orthogonal basis $E$ of the subspace $S_k$ yields the representation coefficient $Z = E^T Y$, with $Z \in \mathbb{R}^{k \times h}$. Each row of the matrix $Z$ is reshaped into a matrix of size $r \times c$, which we call an eigen-image. All the eigen-images form the representation coefficient image (RCI) of the subspace; in other words, an RCI consists of $k$ eigen-images. Like all natural images, each eigen-image has nonlocal self-similarity, and there is a high correlation among the eigen-images [41,42]. In addition, if the noise in the HSI follows a zero-mean Gaussian distribution with variance $\sigma^2$, then the RCI follows the same distribution [44]. Therefore, denoising the HSI can be transformed into denoising its RCI.
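The projection step above can be sketched in a few lines of NumPy. This is a hedged illustration, not the paper's implementation: a plain SVD stands in for Hysime to obtain the basis, and the function name `subspace_rci` and the toy sizes are ours.

```python
import numpy as np

def subspace_rci(Y, k):
    """Project a noisy HSI matrix onto a k-dimensional spectral subspace.

    Y : (b, h) matrix whose rows are the stretched band images.
    Returns the orthogonal basis E (b, k) and the representation
    coefficients Z = E^T Y (k, h); each row of Z, reshaped to r x c,
    would be one eigen-image of the RCI.
    """
    # Leading left singular vectors of Y span the spectral subspace.
    # (The paper estimates k and the basis with Hysime; a plain SVD
    # is used here only to keep the sketch short.)
    U, _, _ = np.linalg.svd(Y, full_matrices=False)
    E = U[:, :k]      # orthogonal basis, E^T E = I_k
    Z = E.T @ Y       # representation coefficients (RCI)
    return E, Z

# toy example: 20 bands, 256 pixels, rank-4 signal plus mild noise
rng = np.random.default_rng(0)
clean = rng.standard_normal((20, 4)) @ rng.standard_normal((4, 256))
Y = clean + 0.01 * rng.standard_normal((20, 256))
E, Z = subspace_rci(Y, k=4)
```

Here `E.T @ E` is the $4 \times 4$ identity up to floating-point error, and `E @ Z` reconstructs `Y` up to roughly the noise level, which is what makes denoising the small RCI a valid substitute for denoising all bands.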
Because the model involves iterative regularization, the noisy image $Y_i$ in each round is obtained from the previous iteration, and the orthogonal basis $E_i$, representation coefficients $Z_i$, and sparse noise $S_i$ are generated from the noisy image $Y_i$. The denoising model with iterative regularization added is (the subscript $i$ denotes the $i$-th iteration):
$$\{\hat{E}_i,\hat{Z}_i,\hat{S}_i\} = \arg\min_{E_i,Z_i,S_i} \; \frac{1}{2}\|E_i^T Y_i - Z_i - E_i^T S_i\|_F^2 + \lambda_1\|Z_i\|_* + \lambda_2\|S_i\|_1, \quad \text{s.t.}\; E^T E = I_k \tag{5}$$

3.2. Spatial Nonlocal Self-Similarity

The processing in this stage includes the following three steps:
(1) Grouping similar 3-D image patches from the RCI to form 3-D tensors;
(2) Performing low-rank tensor decomposition to denoise the 3-D tensors;
(3) Applying the improved wavelet threshold method to the denoised 3-D tensors.
  • Grouping similar 3-D image patches to form 3-D tensors
In order to obtain the groups of similar 3-D image patches from the RCI, the mean value matrix $RCI_{mean}$ is first calculated; then all 2-D reference patches and 2-D image patches in $RCI_{mean}$ are extracted with step sizes of 5 and 1, respectively. Each 2-D reference patch corresponds to multiple overlapping 2-D image patches. Image patches need to overlap to avoid artifacts in the later image recovery process [45].
To obtain the $n$ 2-D image patches most similar to a 2-D reference patch, instead of using the Euclidean distance directly, we measure the similarity of two image patches with the inner product of the improved Gaussian kernel function and the Euclidean distance [51]. The Euclidean distance assigns the same weight to every position in the image patch and thus fails to highlight the role of the central pixel, whereas the closer a point is to the center, the greater its influence should be [52]. Compared with the original Gaussian kernel function, the improved Gaussian kernel function better highlights the weight of the center pixel of an image patch. The experimental results demonstrate that the improved distance provides a more accurate measure of similarity between image patches than the Euclidean distance and, hence, better denoising results.
The original Gaussian kernel function formula is as follows:
$$G(a,b) = \frac{1}{2\pi\sigma_0^2}\exp\!\left(-\frac{a^2+b^2}{2\sigma_0^2}\right), \quad (-r \le a, b \le r), \tag{6}$$
where $\sigma_0$ is the variance of the Gaussian kernel function and $r$ is the radius of the Gaussian kernel. $[a, b]$ is the row and column index in the Gaussian kernel, and the index of the central pixel is $[0, 0]$.
The improved Gaussian kernel function formula is as follows:
$$G(a,b) = \sum_{t=1}^{r} \frac{1}{2\pi\sigma_0^2}\exp\!\left(-\frac{a^2+b^2}{2\sigma_0^2}\right), \quad (-t \le a, b \le t), \tag{7}$$
Formula (7) accumulates Gaussian kernels of different radii, ranging from 1 to $r$, on top of the original Gaussian kernel function. The number of accumulations is therefore $r$, and the size of the final Gaussian kernel matrix is $(2r+1) \times (2r+1)$.
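The accumulation in Formula (7) can be sketched as follows. This is an illustrative reading of the formula, assuming the radius-$t$ kernels are summed on a common $(2r+1)\times(2r+1)$ grid with zeros outside $|a|,|b| \le t$; the function names `improved_gaussian_kernel` and `patch_distance` are ours.

```python
import numpy as np

def improved_gaussian_kernel(r, sigma0=1.0):
    """Accumulated Gaussian kernel of Formula (7): kernels with radii
    t = 1..r are summed on a common (2r+1) x (2r+1) grid, so the
    central region is counted r times and the border only once."""
    a = np.arange(-r, r + 1)
    A, B = np.meshgrid(a, a, indexing="ij")
    base = np.exp(-(A**2 + B**2) / (2 * sigma0**2)) / (2 * np.pi * sigma0**2)
    G = np.zeros_like(base)
    for t in range(1, r + 1):
        # add the radius-t kernel, zero outside |a|, |b| <= t
        G += base * ((np.abs(A) <= t) & (np.abs(B) <= t))
    return G

def patch_distance(p, q, kernel):
    """Kernel-weighted squared distance between two equally sized patches."""
    return float(np.sum(kernel * (p - q) ** 2))

G = improved_gaussian_kernel(r=2)
```

Relative to a plain Euclidean distance (a kernel of ones), a mismatch at the center of the patch now contributes far more to the distance than the same mismatch at the border, which is exactly the weighting effect the text describes.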
After obtaining the $n$ 2-D image patches most similar to each 2-D reference patch in $RCI_{mean}$, we can obtain the $n$ 3-D image patches most similar to each 3-D reference patch from the corresponding positions in the RCI. The operation of extracting similar 3-D image patches from the RCI is denoted by an operator $\mathcal{G}$. The 3-D reference patch with index $ind$ is denoted by $R_{ind} \in \mathbb{R}^{s \times s \times k}$, where $s \times s$ denotes the size of the reference patch and $k$ denotes the number of eigen-images. Then $\mathcal{G}_{ind}Z \in \mathbb{R}^{s^2 \times k \times n}$ denotes the 3-D tensor formed by the $n$ similar 3-D image patches corresponding to the 3-D reference patch $R_{ind}$. We wish to obtain a clean 3-D tensor $C_{ind}$ by estimating the noisy 3-D tensor $\mathcal{G}_{ind}Z$. To this end, we build the following model:
$$\hat{C}_{ind} = \arg\min_{C_{ind}} \; \sum_{ind}\left(\frac{1}{\sigma_{ind}^2}\|\mathcal{G}_{ind}Z - C_{ind}\|_F^2 + \varphi\|C_{ind}\|_*\right), \tag{8}$$
where $\sigma_{ind}^2$ represents the noise variance of the noisy 3-D tensor $\mathcal{G}_{ind}Z$, which is used to normalize the Frobenius-norm fidelity term, and $\varphi\|C_{ind}\|_*$ is the regularization term computing the nuclear norm of the tensor $C_{ind}$, which constrains the clean tensor to be low-rank.
From the analysis above, the denoising model for HSI based on subspace and nonlocal low-rank tensor decomposition proposed in this paper is as follows:
$$\{\hat{E}_i,\hat{Z}_i,\hat{S}_i,\hat{C}_{ind}\} = \arg\min_{E_i,Z_i,S_i,C_{ind}} \; \frac{1}{2}\|E_i^T Y_i - Z_i - E_i^T S_i\|_F^2 + \lambda_1\sum_{ind}\left(\frac{1}{\sigma_{ind}^2}\|\mathcal{G}_{ind}Z_i - C_{ind}\|_F^2 + \varphi\|C_{ind}\|_*\right) + \lambda_2\|S_i\|_1, \quad \text{s.t.}\; E^T E = I_k, \tag{9}$$
2. Denoising the 3-D tensor using nonlocal low-rank tensor decomposition
Model (9) is solved in the following way. The $n$ 3-D image patches corresponding to each 3-D reference patch in the RCI are processed as follows:
(1) Consider each 3-D image patch as a stack of $k$ 2-D patches, expand each of these $k$ 2-D patches into a column according to their order in the RCI, and stack these columns sequentially into a single column vector of $s \times s \times k$ elements. All $n$ 3-D image patches are converted into column vectors in the same way and placed side by side to form a 2-D matrix of size $(s \times s \times k) \times n$. Let $t_1$ denote this matrix, i.e., $t_1 \in \mathbb{R}^{(s \times s \times k) \times n}$.
(2) Compute the mean value of each row of the matrix $t_1$, giving a total of $s \times s \times k$ values, and then replicate this column vector of size $s \times s \times k$ across $n$ columns. Let $t_2$ denote this matrix, i.e., $t_2 \in \mathbb{R}^{(s \times s \times k) \times n}$.
(3) Compute $t = t_1 - t_2$, and use the WNNM algorithm [23] to denoise the matrix $t$. After processing, each 3-D image patch is placed back at its corresponding position in the RCI according to its index, accumulating the element values, denoted $\sum_{ind} V_{ind}$. The number (weight) of 3-D image patches placed at each position is also accumulated, denoted $\sum_{ind} W_{ind}$. Then $\sum_{ind} V_{ind} / \sum_{ind} W_{ind}$ is the denoised RCI.
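Steps (1)-(3) for one group of patches can be sketched as below. This is a simplified stand-in, not the paper's code: plain singular-value soft-thresholding replaces the weighted shrinkage of WNNM, the parameter `tau` is a placeholder, and the final write-back/weighting bookkeeping ($\sum V_{ind} / \sum W_{ind}$) is omitted.

```python
import numpy as np

def denoise_group(patches, tau):
    """Sketch of steps (1)-(3) for one group of n similar 3-D patches.

    patches : (n, s, s, k) array of similar 3-D image patches.
    tau     : singular-value soft-threshold, a simple stand-in for
              the weighted shrinkage that WNNM applies in the paper.
    """
    n, s, _, k = patches.shape
    t1 = patches.reshape(n, s * s * k).T                  # step (1): (s*s*k, n)
    t2 = np.tile(t1.mean(axis=1, keepdims=True), (1, n))  # step (2): row means
    t = t1 - t2                                           # step (3): center ...
    U, sv, Vt = np.linalg.svd(t, full_matrices=False)
    t_hat = (U * np.maximum(sv - tau, 0.0)) @ Vt          # ... low-rank shrinkage
    return (t_hat + t2).T.reshape(n, s, s, k)
```

In the full algorithm, the denoised patches are then written back to their RCI positions and the accumulated values are divided by the accumulated weights to form the denoised RCI.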
It should be noted that the noise level of the RCI must be estimated in advance during processing. It has been shown that the noise level $\sigma_{RCI}^2$ of the RCI is the same as that of the noisy HSI, so the noise level of the corresponding RCI can be obtained by estimating the noise level $\sigma_i^2$ of the current noisy HSI during the iterations.
3. Denoising using the improved wavelet threshold method
After denoising the grouped 3-D tensors using nonlocal low-rank tensor decomposition, we obtain the denoised RCI. However, there is still a certain amount of noise left, so it is necessary to further remove the residual noise to obtain better denoising results. In this step, we process the residual noise of the RCI using the improved wavelet threshold algorithm, which is processed by denoising each eigen-image in the RCI in turn.
The wavelet threshold algorithms have been proven to achieve good results in denoising 2-D images. Since Donoho et al. proposed the hard threshold [53] and soft threshold functions [54], many scholars have improved the threshold function by solving the problems of discontinuous image signals and the over-smoothing of images that exist in these two threshold functions, respectively. The new threshold function proposed by us not only solves the above problems but also has strong flexibility and adaptability because the threshold can change with the value of the high-frequency component of each layer in the wavelet decomposition.
The hard threshold and soft threshold functions proposed by Donoho are shown in Formulas (10) and (11), respectively.
$$\tilde{W}_{j,k} = \begin{cases} W_{j,k}, & |W_{j,k}| \ge T \\ 0, & |W_{j,k}| < T \end{cases} \tag{10}$$
$$\tilde{W}_{j,k} = \begin{cases} \mathrm{sign}(W_{j,k})\,(|W_{j,k}| - T), & |W_{j,k}| \ge T \\ 0, & |W_{j,k}| < T \end{cases} \tag{11}$$
where $W_{j,k}$ is the $k$-th wavelet coefficient at the $j$-th scale of the noisy image after wavelet decomposition; $\tilde{W}_{j,k}$ is the wavelet coefficient obtained after threshold processing; $T$ is the threshold; and sign is the sign function.
The following is the improved wavelet threshold function:
$$\tilde{W}_{j,k} = \begin{cases} \mathrm{sign}(W_{j,k})\left(|W_{j,k}| - \dfrac{2(1-\omega)T}{1+e^{\alpha(|W_{j,k}|-T)^z}}\right), & |W_{j,k}| \ge T \\[2mm] \mathrm{sign}(W_{j,k})\,\dfrac{\omega |W_{j,k}|^2}{T}, & |W_{j,k}| < T \end{cases} \tag{12}$$
where $\omega$, $\alpha$, and $z$ are adjustable parameters. When $|W_{j,k}| < T$, the threshold function is a quadratic function, which avoids the signal oscillation caused by setting the coefficients directly to 0 and solves the problem that the wavelet coefficients of the traditional threshold function have a constant deviation from the estimated wavelet coefficients.
The selection of the threshold is also very important; too large a threshold will filter out some useful information in the image and too small a threshold will leave much noise [51].
The threshold proposed by Donoho is shown as follows [53]:
$$T = \sigma\sqrt{2\ln(M \times N)}, \tag{13}$$
where $\sigma$ denotes the noise standard deviation and $M \times N$ denotes the size of the image.
The threshold in the improved algorithm is set as:
$$T = \frac{\sigma\sqrt{2\ln(M \times N)}}{m \times j - 1}, \tag{14}$$
where $m$ is an adjustable parameter and $j$ ($1 \le j \le n$) is the corresponding wavelet decomposition scale. $\sigma$ is the standard deviation of the noise, defined as $\sigma = \mathrm{median}(|W_{1,k}|)/0.6745$, where $\mathrm{median}(x)$ computes the median value, $\mathrm{median}(|W_{1,k}|)$ is the median of the absolute values of the first-layer wavelet decomposition coefficients, and 0.6745 is the adjustment coefficient for the standard deviation of Gaussian noise [51].
The improved threshold formula can obtain the adaptive threshold according to the current value of the wavelet decomposition scale, and the threshold satisfies the requirement of gradually decreasing with the increase in decomposition scale.
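The improved threshold function and the adaptive threshold can be sketched as follows. This is a hedged reading of Formulas (12) and (14) with placeholder parameter values; the function names are ours, and the exponent placement of $z$ follows our reconstruction of the garbled formula.

```python
import numpy as np

def improved_threshold(W, T, omega=0.5, alpha=1.0, z=1.0):
    """Improved threshold function of Formula (12). The function is
    continuous at |W| = T (both branches equal omega * T there) and
    approaches the hard threshold as |W| grows."""
    W = np.asarray(W, dtype=float)
    absW = np.abs(W)
    # clamp (|W| - T) at 0 so non-integer z never sees a negative base;
    # this only affects the branch that is discarded by np.where anyway
    shrink = 2 * (1 - omega) * T / (1 + np.exp(alpha * np.maximum(absW - T, 0.0) ** z))
    big = np.sign(W) * (absW - shrink)
    small = np.sign(W) * omega * absW**2 / T       # quadratic below T
    return np.where(absW >= T, big, small)

def adaptive_threshold(W1, M, N, m=2.0, j=1):
    """Scale-dependent threshold of Formula (14); W1 holds the
    first-level detail coefficients used for the noise estimate."""
    sigma = np.median(np.abs(W1)) / 0.6745
    return sigma * np.sqrt(2 * np.log(M * N)) / (m * j - 1)
```

At $|W| = T$ the upper branch gives $\mathrm{sign}(W)(T - (1-\omega)T) = \mathrm{sign}(W)\,\omega T$, matching the quadratic branch, which is the continuity property the text claims for the improved function.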

3.3. Updating Orthogonal Basis and Representation Coefficients

Using the above operations, the denoising of the RCI is completed, and the estimated clean 3-D tensors $C_{ind}$ are obtained. At this point, the denoising model becomes:
$$\{\hat{E}_i,\hat{Z}_i,\hat{S}_i\} = \arg\min_{E_i,Z_i,S_i} \; \frac{1}{2}\|E_i^T Y_i - Z_i - E_i^T S_i\|_F^2 + \lambda_1\sum_{ind}\left(\frac{1}{\sigma_{ind}^2}\|\mathcal{G}_{ind}Z_i - C_{ind}\|_F^2\right) + \lambda_2\|S_i\|_1, \quad \text{s.t.}\; E^T E = I_k \tag{15}$$
Because the denoised image $X_i = E_i Z_i$ obtained in this round of iteration will be used as the input noisy HSI in the next round, the orthogonal basis $E$ and the representation coefficients $Z$ need to be updated to obtain better denoising performance. The ADMM algorithm [55] is used to update these variables, with the basic idea of alternating optimization.
  • Updating $S_i$ by fixing $E_i$ and $Z_i$
The formula for optimizing the variable S i from the model (15) is as follows:
$$\hat{S}_i = \arg\min_{S_i} \; \frac{1}{2}\|E_i^T Y_i - Z_i - E_i^T S_i\|_F^2 + \lambda_2\|S_i\|_1 = \mathcal{S}_{\lambda_2}(Y_i - E_i Z_i) = \mathrm{sgn}(Y_i - E_i Z_i)\max(|Y_i - E_i Z_i| - \lambda_2, 0) \tag{16}$$
In fact, $\mathcal{S}_{\lambda_2}(Y_i - E_i Z_i)$ is the soft threshold operator proposed by Donoho [54].
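The closed-form update in Formula (16) is a one-liner in NumPy; this sketch assumes the matrices are already in the $(b, h)$ band-by-pixel layout, and the function name `update_S` is ours.

```python
import numpy as np

def update_S(Y, E, Z, lam2):
    """Closed-form update of the sparse term, Formula (16): element-wise
    soft thresholding of the residual Y - E Z with threshold lam2."""
    R = Y - E @ Z
    return np.sign(R) * np.maximum(np.abs(R) - lam2, 0.0)
```

Residual entries smaller than $\lambda_2$ are treated as Gaussian noise and zeroed, while larger entries (impulses, stripes, deadlines) are shrunk by $\lambda_2$ and retained in $S$.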
2. Updating $Z_i$ by fixing $E_i$ and $S_i$
The formula for optimizing the variable Z i from the model (15) is as follows:
$$\hat{Z}_i = \arg\min_{Z_i} \; \frac{1}{2}\|E_i^T Y_i - Z_i - E_i^T S_i\|_F^2 + \lambda_1\sum_{ind}\left(\frac{1}{\sigma_{ind}^2}\|\mathcal{G}_{ind}Z_i - C_{ind}\|_F^2\right) = \left(\lambda_1\sum_{ind}V_{ind} + E_i^T(Y_i - S_i)\right)\Big/\left(\lambda_1\sum_{ind}W_{ind} + I_k\right), \tag{17}$$
3. Updating $E_i$ by fixing $Z_i$ and $S_i$
The formula for optimizing the variable E i from the model (15) is as follows:
$$\hat{E}_i = \arg\min_{E_i,\; E_i^T E_i = I_k} \; \frac{1}{2}\|E_i^T Y_i - Z_i - E_i^T S_i\|_F^2 = U(\alpha)V(\alpha)^T, \tag{18}$$
where the matrix $\alpha = (Y_i - S_i) \times Z_i^T$, and $U(\alpha)$ and $V(\alpha)$ represent the left and right singular vectors obtained from the SVD of the matrix $\alpha$, respectively.
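The basis update in Formula (18) is an orthogonal Procrustes problem; it can be sketched as below, with the function name `update_E` being ours.

```python
import numpy as np

def update_E(Y, S, Z):
    """Update of the orthogonal basis, Formula (18): an orthogonal
    Procrustes problem solved via the SVD of alpha = (Y - S) Z^T."""
    alpha = (Y - S) @ Z.T
    U, _, Vt = np.linalg.svd(alpha, full_matrices=False)
    return U @ Vt          # satisfies E^T E = I_k by construction
```

As a sanity check, when $Y - S$ lies exactly in the span of some orthonormal basis $E_0$ (i.e., $Y - S = E_0 Z$), this update reproduces a basis that reconstructs the data exactly.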

3.4. Iterative Regularization

Iteration is often used in various algorithms to enhance their performance [56,57]. Through the previous three steps, the algorithm completes one round of denoising of the noisy HSI. To bring the denoised HSI closer to the clean HSI, the algorithm continues to iteratively optimize the denoised HSI $X_i$. After repeated experiments, we found that updating the next round's input noisy HSI $Y_{i+1}$ as follows has a better denoising effect than using $X_i$ directly as the input HSI:
$$Y_{i+1} = \mu X_i + (1-\mu)Y, \tag{19}$$
where $\mu \in (0, 1)$ balances the ratio of the denoised HSI $X_i$ to the original noisy HSI $Y$. Adding a certain proportion of the original noisy HSI improves the denoising performance of the algorithm.
In addition, the subspace dimension $k$ is also updated using iterative regularization. The more severe the noise corruption, the smaller the subspace dimension of the HSI, i.e., the fewer column vectors of the orthogonal basis are obtained from the decomposition of the noisy HSI [44]. As the noise variance of the HSI gradually decreases with iterative denoising, the subspace dimension $k$ gradually increases. So, after the subspace dimension of the original noisy HSI $Y$ is computed using the Hysime algorithm [50], we update the subspace dimension $k$ with the following formula:
$$k_{i+1} = k_i + \rho, \tag{20}$$
where $\rho \ge 1$ is an integer. Experiments have demonstrated that the denoised HSI obtained by updating $k$ via Formula (20) has a better denoising effect than using the subspace dimension computed by the Hysime algorithm [50] in each iteration. The reason is that the Hysime algorithm only considers Gaussian noise, whereas realistic HSIs are often corrupted by a variety of noises simultaneously [45].
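The two regularization updates, Formulas (19) and (20), amount to the following few lines; the function name `regularize` and the default values of `mu` and `rho` are placeholders, not the paper's tuned settings.

```python
def regularize(X_i, Y0, k_i, mu=0.8, rho=1):
    """Iterative-regularization updates, Formulas (19) and (20): mix the
    current estimate with the original noisy HSI and enlarge the
    subspace dimension by an integer step."""
    Y_next = mu * X_i + (1 - mu) * Y0
    k_next = k_i + rho
    return Y_next, k_next
```

With `mu = 0.8`, for example, 20% of the original noisy HSI is re-injected in each round, which is the mechanism the text credits with improving the final result.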
The proposed algorithm in this paper is shown in Algorithm 1:
Algorithm 1: HSI Denoising with the SNLTAI algorithm
Input: The noisy HSI Y , the patch size s , the number of similar 3-D image patches n , the regularization parameters λ 1 and λ 2 , the number of iterations iter, the parameters μ and ρ , the wavelet basis, the decomposition scale j , the adjustable parameters ω ,   α ,   z ,   m .
Output: The final denoised HSI X
1. X 1 = Y 1 = Y , estimate k 1 by using the Hysime algorithm;
2. for i = 1,2,3,…, iter do
3.   Spectral low-rank step:
   Obtain the orthogonal basis E_i and the representation coefficient image Z_i via SVD on Y_i;
4.   Spatial nonlocal similarity step:
   (1) Group similar 3-D image patches to construct the 3-D tensor G_ind(Z_i);
   (2) Denoise the RCI via low-rank tensor decomposition;
   (3) Denoise the RCI via the improved wavelet algorithm;
5.   Update the orthogonal basis E_i and the representation coefficient image Z_i with ADMM;
6.   Iterative regularization:
   (1) Update Y_{i+1} = μX_i + (1 − μ)Y;
   (2) Update k_{i+1} = k_i + ρ;
7. end for
8. Return the final denoised HSI X ;
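The spectral low-rank step (step 3 of Algorithm 1) can be sketched with a truncated SVD. The following is a minimal NumPy illustration, not the paper's full implementation; the function names are ours:

```python
import numpy as np

def subspace_decompose(Y, k):
    """Unfold an h x w x b HSI into a bands x pixels matrix and take a
    rank-k truncated SVD: M ~= E Z, where E (b x k) has orthonormal
    columns (the subspace basis) and Z = E^T M (k x pixels) is the
    representation coefficient image (RCI)."""
    h, w, b = Y.shape
    M = Y.reshape(h * w, b).T          # bands x pixels unfolding
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    E = U[:, :k]                       # orthogonal basis of the subspace
    Z = E.T @ M                        # representation coefficient image
    return E, Z

def subspace_reconstruct(E, Z, shape):
    """Map the (denoised) coefficients back to the image domain: X = E Z."""
    h, w, b = shape
    return (E @ Z).T.reshape(h, w, b)

# a spectrally rank-2 toy cube is reproduced exactly by a rank-2 subspace
h, w, b, k = 6, 5, 8, 2
A = np.random.rand(b, k)
C = np.random.rand(k, h * w)
Ytoy = (A @ C).T.reshape(h, w, b)
E, Z = subspace_decompose(Ytoy, k)
Xrec = subspace_reconstruct(E, Z, Ytoy.shape)
```

Because all subsequent denoising operates on the k-band RCI rather than the full b-band cube, the spatial nonlocal steps work on a much smaller volume, which is the source of the algorithm's reduced computational cost.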

4. Experiments and Analysis

In order to validate the effectiveness of the proposed algorithm, we conducted comparative experiments on simulated and real HSI datasets. Nine image denoising algorithms were included in the comparison: BM4D [58], LRTV [34], LRMR [33], FastHyDe [41], GLF [42], NGmeet [44], LRTDTV [35], FastHyMix [43], and SNLRSF [45]; the parameter settings of these algorithms follow their original papers. All the algorithms were run in MATLAB 2019a on a Lenovo computer with an Intel Core i5-6200U CPU and 8 GB of RAM. In addition, every band of each HSI dataset was normalized before the experiments.
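The per-band normalization mentioned above can be performed as follows (a sketch: the paper does not give its exact normalization formula, so min-max scaling of each band to [0, 1] is assumed here):

```python
import numpy as np

def normalize_bands(Y):
    """Min-max scale each spectral band of an h x w x b cube to [0, 1]
    independently (assumed normalization; the paper does not specify)."""
    Y = Y.astype(float)
    mins = Y.min(axis=(0, 1), keepdims=True)
    maxs = Y.max(axis=(0, 1), keepdims=True)
    return (Y - mins) / np.maximum(maxs - mins, 1e-12)

Ytoy = np.arange(24, dtype=float).reshape(2, 3, 4)  # tiny 2 x 3 x 4 cube
Yn = normalize_bands(Ytoy)
```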

4.1. Simulated HSI Experiments

Two simulated HSI datasets were used in this experiment: the Washington DC (WDC) dataset and the Pavia Center (Pavia C) dataset. The WDC dataset contains 191 bands, each of size 1208 × 307 pixels. We cropped a 256 × 256-pixel region from each band, so the experimental sub-image was 256 × 256 × 191. The Pavia C dataset has 102 bands, each of size 1096 × 715 pixels. Only the last 80 bands were used to simulate a clean dataset, because the first 22 bands contain a large amount of noise. We cropped a 201 × 201-pixel region, so the experimental sub-image was 201 × 201 × 80. The false color images of these two datasets are shown in Figure 2.
To simulate different noise environments, we added Gaussian noise, impulse noise, and stripe noise with different intensities to these two HSI datasets. The specific addition method was as follows:
1. For the WDC dataset
Case 1: Add zero-mean Gaussian noise with a standard deviation of 0.1 for each band.
Case 2: Randomly add zero-mean Gaussian noise with a standard deviation of [0.1,0.2] for each band.
Case 3: Based on Case 2, 45 bands are randomly selected to add impulse noise (salt and pepper noise) with a density of 20%.
Case 4: Continue to add stripe noise with intensity 0.3 to each band based on Case 3.
2. For the Pavia C dataset
Case 1: Add zero-mean Gaussian noise with a standard deviation of 0.2 for each band.
Case 2: Randomly add zero-mean Gaussian noise with a standard deviation of [0.1,0.2] for each band.
Case 3: Based on Case 2, impulse noise (salt and pepper noise) is added to each band with a density of 40%.
Case 4: Continue to add stripe noise with an intensity of 0.3 to each band based on Case 3.
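The four cases can be generated with a helper like the following (a hedged sketch: the paper does not state how stripe columns are chosen, so here a random fifth of the columns receive a constant offset):

```python
import numpy as np

rng = np.random.default_rng(0)

def add_mixed_noise(X, sigma=0.1, impulse_density=0.0, stripe_intensity=0.0):
    """Add zero-mean Gaussian noise with standard deviation sigma,
    salt-and-pepper noise on a fraction of pixels, and vertical stripes
    of a fixed intensity on randomly chosen columns."""
    Y = X + rng.normal(0.0, sigma, X.shape)
    if impulse_density > 0:
        mask = rng.random(X.shape) < impulse_density   # corrupted pixels
        salt = rng.random(X.shape) < 0.5               # half salt, half pepper
        Y[mask & salt] = 1.0
        Y[mask & ~salt] = 0.0
    if stripe_intensity > 0:
        cols = rng.choice(X.shape[1], size=max(1, X.shape[1] // 5),
                          replace=False)
        Y[:, cols, :] += stripe_intensity              # stripe noise
    return Y

clean = np.full((32, 32, 4), 0.5)
case4 = add_mixed_noise(clean, sigma=0.1, impulse_density=0.2,
                        stripe_intensity=0.3)
```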
In addition, five commonly used metrics were adopted to evaluate the denoised results of the various algorithms accurately and objectively: the mean peak signal-to-noise ratio (MPSNR), the mean structural similarity (MSSIM) [59], the erreur relative globale adimensionnelle de synthèse (ERGAS) [60], the mean spectral angle (MSA), and the spectral angle mapper (SAM) [61]. The larger the MPSNR and MSSIM values, the better the image quality; conversely, the smaller the ERGAS, MSA, and SAM values, the better the image quality. The running time of each algorithm is also included in the overall evaluation of its performance.
The formulas for these five evaluation metrics are given below.
MPSNR = (1/l) ∑_{i=1}^{l} psnr_i  (21)
MSSIM = (1/l) ∑_{i=1}^{l} ssim_i  (22)
ERGAS = (1/l) ∑_{i=1}^{l} mse(ref_i, den_i) / Mean²(ref_i)  (23)
MSA = (1/(mn)) ∑_{i=1}^{mn} (180/π) · arccos( (χ_i)ᵀ · χ̂_i / (‖χ_i‖ · ‖χ̂_i‖) )  (24)
SAM = arccos( ⟨u_R, u_F⟩ / (‖u_R‖₂ · ‖u_F‖₂) )  (25)
where psnr_i and ssim_i denote the PSNR and SSIM values of the i-th band, and ref_i and den_i denote the i-th band of the reference HSI and the denoised HSI, respectively. In Formula (23), Mean²(ref_i) denotes the squared mean pixel value of ref_i. In Formula (25), u_R and u_F denote the spectral vectors of the reference HSI and the denoised HSI, respectively.
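For reference, MPSNR, MSA, and SAM can be computed as follows (a sketch assuming data scaled to [0, 1]; the function names are ours):

```python
import numpy as np

def mpsnr(ref, den):
    """Mean PSNR over the bands of two h x w x b cubes in [0, 1]."""
    psnrs = []
    for i in range(ref.shape[2]):
        mse = np.mean((ref[:, :, i] - den[:, :, i]) ** 2)
        psnrs.append(10 * np.log10(1.0 / mse) if mse > 0 else np.inf)
    return np.mean(psnrs)

def sam_deg(u_r, u_f):
    """Spectral angle (degrees) between two spectral vectors."""
    cos = np.dot(u_r, u_f) / (np.linalg.norm(u_r) * np.linalg.norm(u_f))
    return np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)))

def msa_deg(ref, den):
    """Mean spectral angle over all pixels of an h x w x b cube."""
    angles = [sam_deg(ref[i, j], den[i, j])
              for i in range(ref.shape[0]) for j in range(ref.shape[1])]
    return np.mean(angles)

ref = np.random.rand(8, 8, 5) + 0.1   # strictly positive toy reference
den = ref + 0.01                      # lightly perturbed "denoised" cube
```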
Table 1 and Table 2 show the metric values obtained on the WDC and Pavia C datasets, respectively, for the different noise cases and denoising algorithms. The best value of each metric is shown in bold and the second best in italics. The tables demonstrate that, compared with state-of-the-art HSI denoising algorithms, the SNLTAI algorithm has clear advantages in every evaluation metric, which confirms its excellent denoising performance across different noise types and intensities. In terms of running time, the SNLTAI algorithm is faster and has a lower computational cost than all compared methods except the FastHyDe and FastHyMix algorithms. As mentioned above, short running time and fast speed are the greatest advantages of the FastHyMix algorithm.
The denoising results of the different algorithms on the WDC and Pavia C datasets in Case 3 and Case 4 are shown in Figure 3, Figure 4, Figure 5, Figure 6, Figure 7 and Figure 8. We mark a detail-rich part of each image with a small red box and enlarge it in a large red box in the upper left corner so that the visual denoising performance of each algorithm can be compared more clearly.
Figure 3 shows the denoised images of the 95th band of the WDC dataset with the Gaussian and impulse noise of Case 3, and Figure 4 shows the corresponding denoised false color images (R:95, G:114, B:153) of the different algorithms. Figure 5 shows the denoised images of the 114th band of the WDC dataset with the Gaussian, impulse, and stripe noise of Case 4, and Figure 6 shows the corresponding denoised false color images (R:95, G:114, B:153). The noisy images of Case 3 and Case 4 demonstrate that the original images are severely polluted. The BM4D and LRTV algorithms cannot remove all the noise, leaving many noise points and patches behind. The GLF and FastHyMix algorithms do not remove the stripe noise thoroughly. The other algorithms remove the noise to a certain extent, but in recovering the images' textures and details, the SNLTAI algorithm performs better. The denoised false color images likewise show that SNLTAI outperforms the other algorithms in both color reproduction and the presentation of image details and textures.
Figure 7 shows the denoising effect of each algorithm on the 34th band of the Pavia C dataset in Case 3, and Figure 8 shows the denoising effect of each algorithm on the 6th band in Case 4. As the images demonstrate, the 34th and 6th bands are severely polluted by various kinds of noise, and the noisy images no longer reflect any shape or detail of the objects in the original images. The BM4D and LRTV algorithms cannot repair such serious noise damage, and obvious noise points and stripes remain in the denoised image of the LRMR algorithm. The FastHyDe, GLF, NGmeet, LRTDTV, and FastHyMix algorithms leave residual severe stripe noise, and the SNLRSF algorithm loses too much image detail. Comparatively, the SNLTAI algorithm performs much better in both removing the noise and keeping the details.
Therefore, the SNLTAI algorithm has a strong ability to remove all types of noise with different intensities, which also reflects the unique advantages of subspace and spectral low-rank decomposition in image denoising.
Figure 9 and Figure 10 show the comparison of PSNR and SSIM in each band of these two denoised HSI datasets using various algorithms. The comparison of the curves in the figures proves the denoising advantages of the SNLTAI algorithm again.

4.2. Real HSI Experiments

To further validate the denoising performance of the SNLTAI algorithm, we also conducted experiments on two real HSI datasets. Because the noise level of a real dataset must be estimated before denoising, we either whitened the noise [42] or estimated the noise level with the algorithm in [54] before running each algorithm.

4.2.1. AVIRIS Indian Pines Dataset

The first real dataset is the AVIRIS Indian Pines dataset, acquired by the AVIRIS sensor over Indiana, USA, in 1992. The size of the dataset is 145 × 145 × 220. Some bands of this dataset are contaminated by a mixture of Gaussian noise, impulse noise, stripe noise, water absorption, and so on. Figure 11 shows a false color image of this dataset (R:50, G:27, B:19) and the 145th band image.
We chose the 1st and 109th bands as the experimental images for the single-band denoising comparison of each algorithm, as shown in Figure 12 and Figure 13.
The band 1 image is heavily corrupted by various types of noise and conveys no useful information. The BM4D and LRTV algorithms can barely remove the noise or recover image details, and residual stripe noise is even visible in the BM4D result. The LRMR algorithm also performs poorly under such heavy pollution: the region in the red box is recovered badly, lacking texture and detail. The NGmeet algorithm suffers from over-smoothing and serious detail loss; the gray-value contrast across the image is too large, and details are lost in both the bright and the dark regions. The FastHyMix algorithm also retains pronounced stripe noise. Although the remaining algorithms recover the textures and details of the image to some extent, the SNLTAI algorithm is comparatively better in detail recovery and visual quality.
The 109th band image is also severely corrupted by various types of noise: only the general outline of the buildings is visible, and the details and textures are completely lost. All the algorithms can recover this band to a certain extent. However, the BM4D, LRTV, and NGmeet algorithms produce unclear details and over-smooth the image, and the enlarged red boxes of the images processed by the LRMR, FastHyDe, GLF, and LRTDTV algorithms are somewhat blurry, with unclear boundaries. The FastHyMix algorithm also retains significant residual noise. Comparatively speaking, the images recovered by the SNLRSF and SNLTAI algorithms are close to each other, and both recover the detail and boundary information of the original image. This again shows that the SNLTAI algorithm achieves excellent denoising performance because it utilizes both subspace representation and nonlocal low-rank tensor decomposition.
In order to further compare the denoising performance of various algorithms, we also performed false color composition on the denoised images of the Indian Pines dataset (R:219, G:109, B:1). In Figure 14, it is shown that BM4D and FastHyMix still have obvious horizontal stripes, LRTV and NGmeet have blurred images with lost details, and NGmeet is even worse, whereas FastHyDe, GLF, and SNLRSF do not perform well in color restoration. Relatively speaking, LRMR, LRTDTV, and SNLTAI perform better in denoising, color restoration, and detail presentation.
In addition to analyzing and comparing the visual effects of the denoised images, we also provide the vertical and horizontal mean profiles of band 150 after processing it using different algorithms, as shown in Figure 15. The rapid fluctuation of the original curve indicates that the image contains a large amount of irregular noise [45], and the smooth curve after noise removal indicates that the algorithm has a strong noise removal ability. Meanwhile, the closer the average gray value of the curve after noise removal is to the original image, the more consistent the brightness of the denoised image is with that of the original image. It is not difficult to see from the figure that among all the algorithms, the SNLTAI algorithm performs the best in combining these two evaluation criteria, thus once again proving the advantages of the SNLTAI algorithm for HSI denoising.
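The mean profiles used in this comparison are simply the column-wise and row-wise averages of a band image:

```python
import numpy as np

def mean_profiles(band):
    """Vertical and horizontal mean profiles of a single band: the mean
    DN value of each column and of each row, respectively."""
    vertical = band.mean(axis=0)     # one value per column
    horizontal = band.mean(axis=1)   # one value per row
    return vertical, horizontal

band = np.tile(np.arange(4, dtype=float), (3, 1))   # rows identical: 0 1 2 3
v, h = mean_profiles(band)
```

Averaging along rows or columns suppresses pixel-level randomness, so any fluctuation that survives in the profile reflects structured noise such as stripes.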

4.2.2. HYDICE Urban Dataset

The original HYDICE Urban dataset was obtained from the HYDICE sensors with a size of 307 × 307 × 210. Because some bands are severely polluted by noise such as stripes, deadlines, and atmospheric and water absorption and cannot provide any valuable information, we finally chose a sub-image with a size of 200 × 200 × 162 as the final experimental object. Figure 16 shows a false color image (R:16, G:118, B:153) and the 79th band image of the Urban dataset.
Figure 17 shows the denoised images of the 83rd band processed by the various algorithms. The original image contains slight Gaussian noise and heavy horizontal stripe noise. The BM4D, LRTV, LRMR, FastHyDe, GLF, and FastHyMix algorithms have a weak ability to remove stripe noise, as the stripes are still clearly visible after denoising. Although the NGmeet algorithm can effectively remove the stripes, the denoised image is too smooth, resulting in blurring and the loss of a large amount of detail. The LRTDTV algorithm also removes the stripes, but the overall image is darker, and the brightness and contrast of the original image are greatly changed. The SNLRSF and SNLTAI algorithms can completely remove the stripes while maintaining the image details and textures quite well.
Moreover, we also provide the horizontal mean profiles of the 83rd band image before and after denoising with the different algorithms, as shown in Figure 18. The rapid fluctuations of the curve of the original image reflect the stripe noise [33]; the most obvious fluctuations are marked with a red ellipse. The curves show that the BM4D, LRTV, LRMR, FastHyDe, GLF, and FastHyMix algorithms leave these fluctuations largely unchanged, indicating that their ability to remove stripe noise is extremely limited, which is consistent with the denoised images. Although the curve of the NGmeet algorithm no longer contains these fluctuations, it is too smooth, indicating that much detail is lost, and its average DN value is far beyond the normal range: the overall gray value of the denoised image is much larger than that of the original, so the image appears too bright, which is again consistent with the denoised image. The LRTDTV algorithm removes these fluctuations to some extent, whereas the SNLRSF and SNLTAI algorithms eliminate them almost completely, with overall smoother curves, indicating that the stripe noise in this band is effectively removed.
Figure 19 and Figure 20 provide the image comparison and the vertical mean profiles of the 121st band before and after denoising. We can see that stripe noise still exists in band 121, and there is also a noticeable deadline in the red box. The denoised images and profiles reflect that the BM4D algorithm cannot successfully remove stripe noise and deadlines; the LRTV algorithm can suppress stripe noise effectively, but the deadlines still exist; and the LRMR, FastHyDe, GLF, and FastHyMix algorithms can filter the deadlines, but there is a certain degree of stripe noise left. The LRTDTV algorithm removes deadlines and stripes, but still reduces the contrast of the image, and the image is somewhat blurred, losing certain textures and details. The NGmeet algorithm still has the problem that the denoised image is too smooth, resulting in the loss of detail, and the overall gray value of the denoised image becomes larger. Both the SNLRSF and SNLTAI algorithms can completely suppress the stripe noise and deadlines. However, from the profiles, the curve of the SNLTAI algorithm is relatively smoother, which shows that the SNLTAI algorithm has a stronger denoising ability and better denoising performance.

4.3. Parameters Analysis

In this section, we analyze the parameters of the SNLTAI algorithm to determine their most appropriate values. One case each was selected from the WDC dataset and the Pavia C dataset for the corresponding analysis, with MPSNR and MSSIM used as the evaluation metrics.
1. Regularization parameters λ1 and λ2
Case 4 of the WDC dataset and Case 3 of the Pavia C dataset were used for this analysis. We selected λ1 from {0.001, 0.002, 0.004, 0.008, 0.01, 0.02, 0.025, 0.03, 0.04, 0.05, 0.1} and λ2 from {0.03, 0.06, 0.09, 0.12, 0.15, 0.2, 0.3, 0.5, 0.7, 0.9, 1.0}. As Figure 21 and Figure 22 demonstrate, although the surface shapes differ, the MPSNR and MSSIM of both WDC and Pavia C reach their highest values around λ1 = 0.02 and then gradually decrease until they stabilize. Therefore, without loss of generality, λ1 = 0.02 and λ2 = 0.06 were selected as the parameter values of the SNLTAI algorithm for both the simulated and real datasets.
2. Analysis of the number of iterations and algorithm convergence
We selected Case 1 of the WDC dataset and Case 4 of the Pavia C dataset to study the number of iterations iter. Figure 23 demonstrates that for Case 1 of the WDC dataset, where the noise intensity is relatively low, the best denoising results are obtained with 4 or 5 iterations, and in the algorithm we take iter = 5. Figure 24 shows the iterations for Case 4 of the Pavia C dataset. The best denoising effect is achieved after the first iteration because the noise intensity of Case 4 is very high and the textures and details of the original image are completely invisible; repeated denoising would only increase the deviation of the denoised image from the original image. Therefore, for Case 3 and Case 4, with their extremely high noise intensities, the number of iterations was set to 1. In addition, Figure 23 and Figure 24 demonstrate that as the number of iterations increases, the denoising effect decreases slowly until it converges to a certain value.
To further illustrate the convergence of the SNLTAI algorithm, Figure 25 and Figure 26 plot the gradual convergence of MPSNR and MSSIM with the number of iterations for Case 1 of the WDC dataset and Case 4 of the Pavia C dataset, respectively.
3. The number of nonlocal similar 3-D image patches n
We selected Case 1 of the WDC dataset and Case 4 of the Pavia C dataset to study the number of similar 3-D patches. Because the noise intensity of Case 1 of the WDC dataset is not high, a larger number of 3-D image patches helps the 3-D reference patch match and group similar patches, thereby improving the denoising performance. Figure 27 demonstrates that when the number of similar 3-D patches reaches 110, MPSNR and MSSIM reach their maxima almost simultaneously, and both metrics decrease slowly as the number of patches increases further. Therefore, for Case 1 and Case 2 of these two datasets, we take n = 110. In contrast, Figure 28 shows that under extremely high noise intensity, fewer similar 3-D patches are favorable for denoising: the more patches are grouped, the more the group deviates from the textures and details of the reference patch, so the best denoising effect in this case is achieved by setting the number of similar 3-D image patches to n = 1.
4. For the wavelet threshold denoising function in the proposed algorithm, the wavelet basis is set to sym15, the decomposition scale is set to j = 1, and the adjustable parameters are set to ω = 0.3, α = 21, z = 3.09, and m = 2. After extensive experiments, the parameter μ in Formula (20) was set to μ = 0.95 and ρ in Formula (21) to ρ = 2 in both the simulated and real dataset experiments.
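The nonlocal patch grouping controlled by n can be sketched as an exhaustive block-matching search (a simplification: a practical implementation typically restricts the search to a local window around the reference patch; the names here are ours):

```python
import numpy as np

def group_similar_patches(img, ref_xy, s, n):
    """Group the n 3-D patches (s x s x bands) most similar to the
    reference patch at ref_xy in Euclidean distance, and stack them
    into an n x s x s x bands tensor."""
    H, W, B = img.shape
    ry, rx = ref_xy
    ref = img[ry:ry + s, rx:rx + s, :]
    candidates = []
    for y in range(H - s + 1):
        for x in range(W - s + 1):
            patch = img[y:y + s, x:x + s, :]
            candidates.append((np.sum((patch - ref) ** 2), y, x))
    candidates.sort(key=lambda t: t[0])      # most similar first
    # stack the n best matches (the reference itself is always first)
    return np.stack([img[y:y + s, x:x + s, :] for _, y, x in candidates[:n]])

rng = np.random.default_rng(1)
img = rng.random((10, 10, 3))
group = group_similar_patches(img, (2, 2), s=3, n=4)
```

Under heavy noise, patches that look "similar" are matched largely on their noise rather than their structure, which is why a small n (down to n = 1) works better in Case 3 and Case 4.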

5. Conclusions

In this paper, a new HSI denoising algorithm based on subspace representation and nonlocal low-rank tensor decomposition is proposed to remove mixed noise; the algorithm takes full advantage of the spectral low-rank property and the spatial nonlocal self-similarity of HSIs. Because the valid information of an HSI lies in a low-dimensional subspace, denoising the HSI can be converted into denoising its subspace representation, which greatly reduces the time cost of the algorithm. The subspace representation not only exploits the spectral low rank; spatial nonlocal self-similarity can also be exploited within the subspace. The algorithm applies low-rank tensor decomposition and an improved wavelet threshold method to denoise the tensors composed of similar 3-D image patches, then updates the result with the ADMM algorithm, and finally applies iterative regularization to the updated denoised HSI to obtain the best denoising results. Comparisons with other typical HSI denoising algorithms on simulated and real HSI datasets show that the proposed algorithm achieves the best relative denoising performance in terms of both objective evaluation metrics and subjective visual quality. In future research, we will further improve the algorithm and consider dynamically adjusting the regularization parameters λ1 and λ2 and the number of nonlocal similar 3-D image patches n with the number of iterations to achieve better denoising performance.

Author Contributions

Conceptualization, C.H. and K.G.; methodology, C.H. and H.H.; software, C.H.; validation, C.H. and Y.W.; formal analysis, C.H.; investigation, Y.W. and H.H.; resources, C.H.; data curation, K.G. and H.H.; writing—original draft preparation, C.H.; writing—review and editing, Y.W.; visualization, C.H.; supervision, K.G.; project administration, C.H.; funding acquisition, Y.W. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the Creative Research Groups of the Natural Science Foundation of Sichuan (Grant number: 2023NSFSC1984).

Data Availability Statement

Data are contained within the article.

Acknowledgments

We would like to thank the handling editor and the anonymous reviewers for their careful reading and helpful remarks.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Ghamisi, P.; Yokoya, N.; Li, J.; Liao, W.; Plaza, A. Advances in hyperspectral image and signal processing: A comprehensive overview of the state of the art. IEEE Geosci. Remote Sens. Mag. 2018, 5, 37–78. [Google Scholar] [CrossRef]
  2. Wang, Z.; Ng, M.; Zhuang, L.; Gao, L.; Zhang, B. Nonlocal self-similarity-based hyperspectral remote sensing image denoising with 3-D convolutional neural network. IEEE Trans. Geosci. Remote Sens. 2022, 60, 5531617. [Google Scholar] [CrossRef]
  3. Zhang, L.; Zhang, L.; Tao, D.; Huang, X. Sparse Transfer Manifold Embedding for Hyperspectral Target Detection. IEEE Trans. Geosci. Remote Sens. 2013, 52, 1030–1043. [Google Scholar] [CrossRef]
  4. Tao, D.; Lin, X.; Jin, L.; Li, X. Principal Component 2-D Long Short-Term Memory for Font Recognition on Single Chinese Characters. IEEE Trans. Cybern. 2016, 46, 756–765. [Google Scholar] [CrossRef] [PubMed]
  5. Zhang, B. Progress and challenges in intelligent remote sensing satellite systems. IEEE J. Sel. Top. Appl. Earth Observ. Remote Sens. 2022, 15, 1814–1822. [Google Scholar] [CrossRef]
  6. Willett, R.M.; Duarte, M.F.; Davenport, M.A.; Baraniuk, R.G. Sparsity and Structure in Hyperspectral Imaging: Sensing, Reconstruction, and Target Detection. IEEE Signal Process. Mag. 2014, 31, 116–126. [Google Scholar] [CrossRef]
  7. Bioucas-Dias, J.M.; Plaza, A.; Camps-Valls, G.; Scheunders, P.; Nasrabadi, N.; Chanussot, J. Hyperspectral remote sensing data analysis and future challenges. IEEE Geosci. Remote Sens. Mag. 2013, 1, 6–36. [Google Scholar] [CrossRef]
  8. Plaza, A.; Bioucas-Dias, J.M.; Shimic, A.; Blackwell, W.J. Foreword to the special issue on hyperspectral image and signal processing. IEEE J. Sel. Top. Appl. Earth Observ. Remote Sens. 2012, 5, 347–353. [Google Scholar] [CrossRef]
  9. Zhao, Y.; Zhang, L.; Kong, S.G. Band-subset-based clustering and fusion for hyperspectral imagery classification. IEEE Trans. Geosci. Remote Sens. 2011, 49, 747–756. [Google Scholar] [CrossRef]
  10. Li, J.; Yuan, Q.; Shen, H.; Zhang, L. Noise removal from hyperspectral image with joint spectral-spatial distributed sparse representation. IEEE Trans. Geosci. Remote Sens. 2016, 54, 5425–5439. [Google Scholar] [CrossRef]
  11. Acito, N.; Diani, M.; Corsini, G. Signal-dependent noise modeling and model parameter estimation in hyperspectral images. IEEE Trans. Geosci. Remote Sens. 2011, 49, 2957–2971. [Google Scholar] [CrossRef]
  12. Ghamisi, P.; Plaza, J.; Chen, Y.; Li, J.; Plaza, A.J. Advanced spectral classifiers for hyperspectral images: A review. IEEE Geosci. Remote Sens. Mag. 2017, 5, 8–32. [Google Scholar] [CrossRef]
  13. Zeng, S.; Wang, Z.; Gao, C.; Kang, Z.; Feng, D. Hyperspectral image classification with global-local discriminant analysis and spatial-spectral context. IEEE J. Sel. Top. Appl. Earth Observ. Remote Sens. 2018, 11, 5005–5018. [Google Scholar] [CrossRef]
  14. Zhuang, L.; Lin, C.; Figueiredo, M.A.T.; Bioucas-Dias, J.M. Regularization parameter selection in minimum volume hyperspectral unmixing. IEEE Trans. Geosci. Remote Sens. 2019, 57, 9858–9877. [Google Scholar] [CrossRef]
  15. Gao, L.; Wang, Z.; Zhuang, L.; Yu, H.; Zhang, B.; Chanussot, J. Using low-rank representation of abundance maps and nonnegative tensor factorization for hyperspectral nonlinear unmixing. IEEE Trans. Geosci. Remote Sens. 2022, 60, 5504017. [Google Scholar] [CrossRef]
  16. Qian, Y.; Xiong, F.; Zeng, S.; Zhou, J.; Tang, Y. Matrix-vector nonnegative tensor factorization for blind unmixing of hyperspectral imagery. IEEE Trans. Geosci. Remote Sens. 2017, 55, 1776–1792. [Google Scholar] [CrossRef]
  17. Fu, X.; Jia, S.; Zhuang, L.; Xu, M.; Zhou, J.; Li, Q. Hyperspectral anomaly detection via deep plug-and-play denoising CNN regularization. IEEE Trans. Geosci. Remote Sens. 2021, 59, 9553–9568. [Google Scholar] [CrossRef]
  18. Zhuang, L.; Gao, L.; Zhang, B.; Fu, X.; Bioucas-Dias, J.M. Hyperspectral image denoising and anomaly detection based on low-rank and sparse representations. IEEE Trans. Geosci. Remote Sens. 2022, 60, 5500117. [Google Scholar]
  19. Zhang, Y.; Du, B.; Zhang, L.; Liu, T. Joint sparse representation and multitask learning for hyperspectral target detection. IEEE Trans. Geosci. Remote Sens. 2017, 55, 894–906. [Google Scholar] [CrossRef]
  20. Zhuang, L.; Ng, M.K. Hyperspectral mixed noise removal by ℓ1-norm-based subspace representation. IEEE J. Sel. Top. Appl. Earth Observ. Remote Sens. 2020, 13, 1143–1157. [Google Scholar] [CrossRef]
  21. Dabov, K.; Foi, A.; Katkovnik, V.; Egiazarian, K. Image denoising by sparse 3-D transform-domain collaborative filtering. IEEE Trans. Image Process. 2007, 16, 2080–2095. [Google Scholar] [CrossRef] [PubMed]
  22. Aharon, M.; Elad, M.; Bruckstein, A. K-SVD: An algorithm for designing over complete dictionaries for sparse representation. IEEE Trans. Signal Process. 2006, 54, 4311–4322. [Google Scholar] [CrossRef]
  23. Gu, S.; Zhang, L.; Zuo, W.; Feng, X. Weighted nuclear norm minimization with application to image denoising. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Columbus, OH, USA, 23–28 June 2014; pp. 2862–2869. [Google Scholar]
  24. Rasti, B.; Sveinsson, J.R.; Ulfarsson, M.O.; Sigurdsson, J. Wavelet based sparse principal component analysis for hyperspectral denoising. In Proceedings of the 2013 5th Workshop on Hyperspectral Image and Signal Processing: Evolution in Remote Sensing (WHISPERS), Gainesville, FL, USA, 26–28 June 2013; pp. 1–4. [Google Scholar]
  25. Ye, M.; Qian, Y.; Zhou, J. Multitask sparse nonnegative matrix factorization for joint spectral-spatial hyperspectral imagery denoising. IEEE Trans. Geosci. Remote Sens. 2015, 53, 2621–2639. [Google Scholar] [CrossRef]
  26. Fan, Y.; Huang, T.; Zhao, X.; Deng, L.; Fan, S. Multispectral image denoising via nonlocal multitask sparse learning. Remote Sens. 2018, 10, 116. [Google Scholar] [CrossRef]
  27. Fu, Y.; Lam, A.; Sato, I.; Sato, Y. Adaptive spatial-spectral dictionary learning for hyperspectral image restoration. Int. J. Comput. Vis. 2017, 122, 228–245. [Google Scholar] [CrossRef]
  28. Fu, Y.; Zheng, Y.; Sato, I.; Sato, Y. Exploiting spectral-spatial correlation for coded hyperspectral image restoration. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 26 June–1 July 2016; pp. 3727–3736. [Google Scholar]
  29. Wei, W.; Zhang, L.; Tian, C.; Plaza, A.; Zhang, Y. Structured sparse coding-based hyperspectral imagery denoising with intracluster filtering. IEEE Trans. Geosci. Remote Sens. 2017, 55, 6860–6876. [Google Scholar] [CrossRef]
  30. Rasti, B.; Ulfarsson, M.O.; Ghamisi, P. Automatic hyperspectral image restoration using sparse and low-rank modeling. IEEE Geosci. Remote Sens. Lett. 2017, 14, 2335–2339. [Google Scholar] [CrossRef]
  31. Rasti, B.; Scheunders, P.; Ghamisi, P.; Licciardi, G.; Chanussot, J. Noise reduction in hyperspectral imagery: Overview and application. Remote Sens. 2018, 10, 482. [Google Scholar] [CrossRef]
  32. Zhu, R.; Dong, M.; Xue, J.H. Spectral nonlocal restoration of hyperspectral images with low-rank property. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2014, 8, 3062–3067. [Google Scholar] [CrossRef]
  33. Zhang, H.; He, W.; Zhang, L.; Shen, H.; Yuan, Q. Hyperspectral image restoration using low-rank matrix recovery. IEEE Trans. Geosci. Remote Sens. 2014, 52, 4729–4743. [Google Scholar] [CrossRef]
  34. He, W.; Zhang, H.; Zhang, L.; Shen, H. Total-variation-regularized low-rank matrix factorization for hyperspectral image restoration. IEEE Trans. Geosci. Remote Sens. 2015, 54, 178–188. [Google Scholar] [CrossRef]
  35. Wang, Y.; Peng, J.; Zhao, Q.; Zhao, Y.; Meng, D. Hyperspectral Image Restoration Via Total Variation Regularized Low-Rank Tensor Decomposition. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2018, 11, 1227–1243. [Google Scholar] [CrossRef]
  36. Xu, Y.; Wu, Z.; Chanussot, J.; Wei, Z. Nonlocal patch tensor sparse representation for hyperspectral image super-resolution. IEEE Trans. Image Process. 2019, 28, 3034–3047. [Google Scholar] [CrossRef] [PubMed]
  37. Kilmer, M.E.; Martin, C.D. Factorization strategies for third-order tensors. Linear Algebra Its Appl. 2011, 435, 641–658. [Google Scholar]
  38. Semerci, O.; Hao, N.; Kilmer, M.E.; Miller, E.L. Tensor-based formulation and nuclear norm regularization for multi-energy computed tomography. IEEE Trans. Image Process. 2014, 23, 1678–1693. [Google Scholar] [CrossRef] [PubMed]
  39. Lu, C.; Feng, J.; Chen, Y.; Liu, W.; Lin, Z.; Yan, S. Tensor robust principal component analysis: Exact recovery of corrupted low-rank tensors via convex optimization. In Proceedings of the IEEE International Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 26 June–1 July 2016; pp. 5249–5257. [Google Scholar]
  40. Fan, H.; Li, C.; Guo, Y.; Kuang, G.; Ma, J. Spatial-spectral total variation regularized low-rank tensor decomposition for hyperspectral image denoising. IEEE Trans. Geosci. Remote Sens. 2018, 56, 6196–6213. [Google Scholar] [CrossRef]
  41. Zhuang, L.; Bioucas-Dias, J.M. Fast hyperspectral image denoising and inpainting based on low-rank and sparse representations. IEEE J. Sel. Top. Appl. Earth Observ. Remote Sens. 2018, 11, 730–742. [Google Scholar] [CrossRef]
  42. Zhuang, L.; Bioucas-Dias, J.M. Hyperspectral image denoising based on global and non-local low-rank factorizations. In Proceedings of the 2017 IEEE International Conference on Image Processing (ICIP), Beijing, China, 17–20 September 2017; pp. 1900–1904. [Google Scholar]
  43. Zhuang, L.; Ng, M. FastHyMix: Fast and Parameter-Free Hyperspectral Image Mixed Noise Removal. IEEE Trans. Neural Netw. Learn. Syst. 2023, 34, 4702–4716. [Google Scholar] [CrossRef]
  44. He, W.; Yao, Q.; Li, C.; Yokoya, N.; Zhao, Q. Non-local meets global: An integrated paradigm for hyperspectral denoising. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–23 June 2018. [Google Scholar]
  45. Cao, C.; Yu, J.; Zhou, C. Hyperspectral Image Denoising via Subspace-Based Nonlocal Low-Rank and Sparse Factorization. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2019, 12, 973–988. [Google Scholar] [CrossRef]
  46. He, C.; Sun, L.; Huang, W. TSLRLN: Tensor subspace low-rank learning with non-local prior for hyperspectral image mixed denoising. Signal Process. 2021, 184, 108060. [Google Scholar] [CrossRef]
  47. Zhuang, L.; Bioucas-Dias, J.M. Fast hyperspectral image denoising based on low rank and sparse representations. In Proceedings of the IEEE International Geoscience and Remote Sensing Symposium (IGARSS), Beijing, China, 10–15 July 2016. [Google Scholar]
  48. Oh, T.H.; Matsushita, Y.; Tai, Y.W.; Kweon, I.S. Fast randomized singular value thresholding for nuclear norm minimization. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Boston, MA, USA, 7–12 June 2015; pp. 4484–4493. [Google Scholar]
  49. Sun, L.; Jeon, B.; Soomro, B.N.; Zheng, Y.; Wu, Z.; Xiao, L. Fast superpixel based subspace low rank learning method for hyperspectral denoising. IEEE Access. 2018, 6, 12031–12043. [Google Scholar] [CrossRef]
  50. Bioucas-Dias, J.M.; Nascimento, J.M.P. Hyperspectral subspace identification. IEEE Trans. Geosci. Remote Sens. 2008, 46, 2435–2445. [Google Scholar] [CrossRef]
  51. He, C.; Guo, K.; Chen, H. An improved image filtering algorithm for mixed noise. Appl. Sci. 2021, 11, 10358. [Google Scholar] [CrossRef]
  52. Budianto, B.; Lun, D.P.K. Discrete Periodic Radon Transform Based Weighted Nuclear Norm Minimization for Image Denoising. In Proceedings of the International Symposium on Computing & Networking, Aomori, Japan, 19–22 November 2017. [Google Scholar]
  53. Donoho, D.L.; Johnstone, I.M. Ideal spatial adaptation via wavelet shrinkage. Biometrika 1994, 81, 425–455. [Google Scholar] [CrossRef]
  54. Donoho, D.L. De-noising by soft-thresholding. IEEE Trans. Inf. Theory 1995, 41, 613–627. [Google Scholar] [CrossRef]
  55. Boyd, S.; Parikh, N.; Chu, E. Distributed optimization and statistical learning via the alternating direction method of multipliers. Found. Trends Mach. Learn. 2011, 3, 1–122. [Google Scholar] [CrossRef]
  56. Chang, Y.; Yan, L.; Zhong, S. Hyper-Laplacian Regularized Unidirectional Low-Rank Tensor Recovery for Multispectral Image Denoising. In Proceedings of the 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR) IEEE, Honolulu, HI, USA, 21–26 July 2017. [Google Scholar]
  57. Xie, Q.; Zhao, Q.; Meng, D.; Xu, Z. Kronecker-Basis-Representation Based Tensor Sparsity and Its Applications to Tensor Recovery. IEEE Trans. Pattern Anal. Mach. Intell. 2017, 40, 1888–1902. [Google Scholar] [CrossRef]
  58. Maggioni, M.; Katkovnik, V.; Egiazarian, K.; Foi, A. Nonlocal transform-domain filter for volumetric data denoising and reconstruction. IEEE Trans. Image Process. 2013, 22, 119–133. [Google Scholar] [CrossRef]
  59. Zhou, W.; Bovik, A.C.; Sheikh, H.R.; Simoncelli, E.P. Image quality assessment: From error visibility to structural similarity. IEEE Trans. Image Process. 2004, 13, 600–612. [Google Scholar]
  60. Wald, L. Data Fusion: Definitions and Architectures: Fusion of Images of Different Spatial Resolutions; Presses de l’Ecole des Mines: Paris, France, 2002. [Google Scholar]
  61. Yuhas, R.H.; Boardman, J.W.; Goetz, A.F.H. Determination of semi-arid landscape endmembers and seasonal trends using convex geometry spectral unmixing techniques. In Proceedings of the 4th Annual JPL Airborne Geoscience Workshop, Washington, DC, USA, 25–29 October 1993. [Google Scholar]
Figure 1. Flowchart of the proposed algorithm.
Figure 2. False color images of (a) WDC dataset (R:60 G:27 B:17); (b) Pavia C dataset (R:68 G:24 B:19).
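A false-color composite like those in Figure 2 simply stacks three selected bands into an RGB image. A minimal NumPy sketch (our illustration, not the authors' code; the captions' band indices are assumed to be 1-based, hence the 0-based offsets below):

```python
import numpy as np

def false_color(cube, bands, percentile=2):
    """Build a false-color RGB from three bands of an HSI cube
    (rows x cols x bands), with per-channel percentile stretching."""
    rgb = cube[:, :, list(bands)].astype(np.float64)
    for c in range(3):
        # Stretch each channel between its low/high percentiles, then clip.
        lo, hi = np.percentile(rgb[:, :, c], [percentile, 100 - percentile])
        rgb[:, :, c] = np.clip((rgb[:, :, c] - lo) / (hi - lo + 1e-12), 0, 1)
    return rgb

# Hypothetical WDC-sized cube; bands 60/27/17 (1-based) -> indices 59/26/16.
cube = np.random.default_rng(0).random((8, 8, 191))
img = false_color(cube, (59, 26, 16))
```

The percentile stretch is a common display convention for reflectance data; the published figures may use a different contrast mapping.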
Figure 3. Denoised images of different algorithms on the 95th band of the WDC dataset (CASE3): (a) Original; (b) Noisy; (c) BM4D; (d) LRTV; (e) LRMR; (f) FastHyDe; (g) GLF; (h) NGmeet; (i) LRTDTV; (j) FastHyMix; (k) SNLRSF; and (l) SNLTAI.
Figure 4. Denoised false color images (R:95, G:114, B:153) of different algorithms of the WDC dataset (CASE3): (a) Original; (b) Noisy; (c) BM4D; (d) LRTV; (e) LRMR; (f) FastHyDe; (g) GLF; (h) NGmeet; (i) LRTDTV; (j) FastHyMix; (k) SNLRSF; and (l) SNLTAI.
Figure 5. Denoised images of different algorithms on the 114th band of the WDC dataset (CASE4): (a) Original; (b) Noisy; (c) BM4D; (d) LRTV; (e) LRMR; (f) FastHyDe; (g) GLF; (h) NGmeet; (i) LRTDTV; (j) FastHyMix; (k) SNLRSF; and (l) SNLTAI.
Figure 6. Denoised false color images (R:95, G:114, B:153) of different algorithms of the WDC dataset (CASE4): (a) Original; (b) Noisy; (c) BM4D; (d) LRTV; (e) LRMR; (f) FastHyDe; (g) GLF; (h) NGmeet; (i) LRTDTV; (j) FastHyMix; (k) SNLRSF; and (l) SNLTAI.
Figure 7. Denoised images of different algorithms on the 34th band of the Pavia C dataset (CASE3): (a) Original; (b) Noisy; (c) BM4D; (d) LRTV; (e) LRMR; (f) FastHyDe; (g) GLF; (h) NGmeet; (i) LRTDTV; (j) FastHyMix; (k) SNLRSF; and (l) SNLTAI.
Figure 8. Denoised images of different algorithms on the 6th band of the Pavia C dataset (CASE4): (a) Original; (b) Noisy; (c) BM4D; (d) LRTV; (e) LRMR; (f) FastHyDe; (g) GLF; (h) NGmeet; (i) LRTDTV; (j) FastHyMix; (k) SNLRSF; and (l) SNLTAI.
Figure 9. PSNR/SSIM of each denoised band in the WDC dataset with different algorithms: (a) PSNR_WDC_CASE3; (b) SSIM_WDC_CASE3; (c) PSNR_WDC_CASE4; and (d) SSIM_WDC_CASE4.
Figure 10. PSNR/SSIM of each denoised band in the Pavia C dataset with different algorithms: (a) PSNR_PaviaC_CASE3; (b) SSIM_PaviaC_CASE3; (c) PSNR_PaviaC_CASE4; and (d) SSIM_PaviaC_CASE4.
Figure 11. (a) False color images of the Indian Pines dataset (R:50, G:27, B:19); (b) The 145th band image of the Indian Pines dataset.
Figure 12. Denoised images of different algorithms on the 1st band of the Indian Pines dataset: (a) Original; (b) BM4D; (c) LRTV; (d) LRMR; (e) FastHyDe; (f) GLF; (g) NGmeet; (h) LRTDTV; (i) FastHyMix; (j) SNLRSF; and (k) SNLTAI.
Figure 13. Denoised images of different algorithms on the 109th band of the Indian Pines dataset: (a) Original; (b) BM4D; (c) LRTV; (d) LRMR; (e) FastHyDe; (f) GLF; (g) NGmeet; (h) LRTDTV; (i) FastHyMix; (j) SNLRSF; and (k) SNLTAI.
Figure 14. Denoised false color images (R:219, G:109, B:1) of different algorithms of the Indian Pines dataset: (a) Original; (b) BM4D; (c) LRTV; (d) LRMR; (e) FastHyDe; (f) GLF; (g) NGmeet; (h) LRTDTV; (i) FastHyMix; (j) SNLRSF; and (k) SNLTAI.
Figure 15. Mean profiles of band 150 of the Indian Pines dataset with different algorithms: (a) vertical mean profiles and (b) horizontal mean profiles.
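The mean profiles in Figures 15, 18, and 20 average a single band along its rows or columns; residual stripes and impulse noise show up as jitter in these curves, while a well-denoised band yields a smooth profile. A minimal NumPy sketch of how such profiles can be computed (ours, for illustration only):

```python
import numpy as np

def mean_profiles(band_image):
    """Column-wise (horizontal) and row-wise (vertical) mean profiles
    of a single 2-D band image."""
    horizontal = band_image.mean(axis=0)  # one value per column
    vertical = band_image.mean(axis=1)    # one value per row
    return horizontal, vertical

# Tiny 3x4 example band.
band = np.arange(12.0).reshape(3, 4)
h, v = mean_profiles(band)
# h has one entry per column, v one entry per row.
```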
Figure 16. (a) False color images of the HYDICE Urban Dataset (R:16, G:118, B:153); (b) the 79th band image of the HYDICE Urban Dataset.
Figure 17. Denoised images using different algorithms on the 83rd band of the Urban dataset: (a) Original; (b) BM4D; (c) LRTV; (d) LRMR; (e) FastHyDe; (f) GLF; (g) NGmeet; (h) LRTDTV; (i) FastHyMix; (j) SNLRSF; and (k) SNLTAI.
Figure 18. Horizontal mean profiles for the Urban dataset (Band 83): (a) Original; (b) BM4D; (c) LRTV; (d) LRMR; (e) FastHyDe; (f) GLF; (g) NGmeet; (h) LRTDTV; (i) FastHyMix; (j) SNLRSF; and (k) SNLTAI.
Figure 19. Denoised images of different algorithms on the 121st band of the Urban dataset: (a) Original; (b) BM4D; (c) LRTV; (d) LRMR; (e) FastHyDe; (f) GLF; (g) NGmeet; (h) LRTDTV; (i) FastHyMix; (j) SNLRSF; and (k) SNLTAI.
Figure 20. Vertical mean profiles for the Urban dataset (Band 121): (a) Original; (b) BM4D; (c) LRTV; (d) LRMR; (e) FastHyDe; (f) GLF; (g) NGmeet; (h) LRTDTV; (i) FastHyMix; (j) SNLRSF; and (k) SNLTAI.
Figure 21. Parameter analysis for λ1 and λ2 in CASE4 of the WDC dataset: (a) MPSNR and (b) MSSIM.
Figure 22. Parameter analysis for λ1 and λ2 in CASE3 of the Pavia C dataset: (a) MPSNR and (b) MSSIM.
Figure 23. Analysis of the number of iterations for the WDC dataset CASE1: (a) MPSNR and (b) MSSIM.
Figure 24. Analysis of the number of iterations for the Pavia C dataset CASE4: (a) MPSNR and (b) MSSIM.
Figure 25. Convergence analysis for the WDC dataset CASE1: (a) MPSNR and (b) MSSIM.
Figure 26. Convergence analysis for the Pavia C dataset CASE4: (a) MPSNR and (b) MSSIM.
Figure 27. Analysis of the number of similar 3-D patches for the WDC dataset CASE1: (a) MPSNR and (b) MSSIM.
Figure 28. Analysis of the number of similar 3-D patches for the Pavia C dataset CASE4: (a) MPSNR and (b) MSSIM.
Table 1. Comparison of denoising metrics of different algorithms on the WDC dataset.

| Noise Case | Index | Noisy Data | BM4D | LRTV | LRMR | FastHyDe | GLF | NGmeet | LRTDTV | FastHyMix | SNLRSF | SNLTAI |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| CASE1 | MPSNR | 20.00 | 32.88 | 25.74 | 34.23 | 34.90 | 38.72 | 38.68 | 33.57 | 38.25 | 38.39 | 38.84 |
| | MSSIM | 0.4218 | 0.9128 | 0.6762 | 0.9384 | 0.9654 | 0.9781 | 0.9787 | 0.9274 | 0.9736 | 0.9774 | 0.9789 |
| | ERGAS | 388.54 | 85.79 | 198.45 | 72.85 | 76.25 | 43.77 | 43.66 | 78.49 | 46.02 | 45.84 | 42.94 |
| | MSA | 0.4614 | 0.0933 | 0.2594 | 0.0921 | 0.0909 | 0.0480 | 0.0468 | 0.0750 | 0.0517 | 0.0502 | 0.0472 |
| | SAM | 26.44 | 5.35 | 14.86 | 5.28 | 8.69 | 2.83 | 2.83 | 4.29 | 2.96 | 3.20 | 2.70 |
| | Time (s) | | 673 | 624 | 200 | 38 | 1066 | 160 | 337 | 13 | 770 | 103 |
| CASE2 | MPSNR | 16.58 | 30.48 | 25.25 | 31.49 | 32.28 | 36.73 | 35.02 | 32.71 | 36.30 | 35.86 | 36.77 |
| | MSSIM | 0.2796 | 0.8558 | 0.6053 | 0.8937 | 0.9336 | 0.9665 | 0.9529 | 0.9124 | 0.9605 | 0.9631 | 0.9672 |
| | ERGAS | 589.78 | 112.43 | 206.59 | 99.87 | 96.66 | 54.56 | 67.71 | 86.95 | 57.10 | 60.93 | 60.34 |
| | MSA | 0.6380 | 0.1180 | 0.1565 | 0.1277 | 0.1144 | 0.0678 | 0.0775 | 0.0907 | 0.0623 | 0.0636 | 0.0630 |
| | SAM | 36.55 | 6.76 | 8.97 | 7.31 | 8.79 | 3.42 | 3.81 | 5.20 | 3.57 | 3.64 | 3.66 |
| | Time (s) | | 654 | 618 | 194 | 38 | 1026 | 151 | 333 | 13 | 778 | 102 |
| CASE3 | MPSNR | 15.50 | 27.63 | 24.29 | 30.98 | 29.58 | 33.49 | 30.11 | 30.50 | 33.37 | 33.83 | 34.10 |
| | MSSIM | 0.2364 | 0.7313 | 0.5596 | 0.8879 | 0.9065 | 0.9348 | 0.8870 | 0.8542 | 0.9349 | 0.9564 | 0.9601 |
| | ERGAS | 703.66 | 277.50 | 250.67 | 105.90 | 180.37 | 134.88 | 154.29 | 113.28 | 129.39 | 69.36 | 66.17 |
| | MSA | 0.6996 | 0.3503 | 0.2551 | 0.1380 | 0.2438 | 0.1787 | 0.1934 | 0.0996 | 0.1743 | 0.0707 | 0.0666 |
| | SAM | 40.09 | 20.07 | 14.61 | 7.90 | 11.05 | 10.24 | 11.08 | 5.70 | 9.99 | 3.93 | 5.32 |
| | Time (s) | | 663 | 662 | 198 | 38 | 1123 | 149 | 351 | 15 | 1212 | 103 |
| CASE4 | MPSNR | 12.05 | 14.85 | 15.44 | 15.69 | 15.57 | 15.55 | 15.56 | 15.81 | 15.52 | 15.74 | 15.78 |
| | MSSIM | 0.1788 | 0.4165 | 0.4690 | 0.6495 | 0.6940 | 0.6884 | 0.6942 | 0.6301 | 0.6657 | 0.7347 | 0.7382 |
| | ERGAS | 972.76 | 702.21 | 653.25 | 635.87 | 642.77 | 647.47 | 647.86 | 625.63 | 650.13 | 632.47 | 632.01 |
| | MSA | 0.5333 | 0.3172 | 0.2778 | 0.2659 | 0.2977 | 0.2526 | 0.2521 | 0.2439 | 0.2562 | 0.2366 | 0.2360 |
| | SAM | 30.55 | 18.18 | 15.92 | 15.24 | 17.59 | 14.48 | 14.33 | 13.97 | 14.68 | 13.48 | 13.94 |
| | Time (s) | | 673 | 625 | 196 | 37 | 1179 | 161 | 362 | 15 | 1129 | 107 |
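The tables report MPSNR (dB, higher is better), MSSIM, ERGAS, MSA/SAM (lower is better), and runtime. As a rough reference for what three of these columns measure, the standard definitions can be sketched in NumPy (generic textbook formulas, not the authors' evaluation code; function names are ours):

```python
import numpy as np

def mpsnr(ref, est, peak=1.0):
    """Mean PSNR over all bands of a cube (rows x cols x bands)."""
    mse = ((ref - est) ** 2).mean(axis=(0, 1))          # per-band MSE
    return float(np.mean(10 * np.log10(peak ** 2 / mse)))

def sam_degrees(ref, est):
    """Mean spectral angle (degrees) between per-pixel spectra."""
    r = ref.reshape(-1, ref.shape[-1])
    e = est.reshape(-1, est.shape[-1])
    cos = (r * e).sum(1) / (np.linalg.norm(r, axis=1) *
                            np.linalg.norm(e, axis=1) + 1e-12)
    return float(np.degrees(np.arccos(np.clip(cos, -1, 1))).mean())

def ergas(ref, est, ratio=1.0):
    """ERGAS: relative dimensionless global error in synthesis."""
    rmse = np.sqrt(((ref - est) ** 2).mean(axis=(0, 1)))  # per-band RMSE
    mean = ref.mean(axis=(0, 1))                          # per-band mean
    return float(100 * ratio * np.sqrt(np.mean((rmse / (mean + 1e-12)) ** 2)))
```

For example, an estimate equal to 0.9 times a constant unit-valued reference gives a per-band MSE of 0.01, hence MPSNR = 20 dB, a spectral angle near zero (the spectra are proportional), and ERGAS = 10.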
Table 2. Comparison of denoising metrics of different algorithms on the Pavia C dataset.

| Noise Case | Index | Noisy Data | BM4D | LRTV | LRMR | FastHyDe | GLF | NGmeet | LRTDTV | FastHyMix | SNLRSF | SNLTAI |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| CASE1 | MPSNR | 13.98 | 29.77 | 24.35 | 27.95 | 31.84 | 33.76 | 34.43 | 29.96 | 33.49 | 33.76 | 34.67 |
| | MSSIM | 0.1734 | 0.8423 | 0.5910 | 0.7996 | 0.9274 | 0.9394 | 0.9475 | 0.8495 | 0.9352 | 0.9373 | 0.9504 |
| | ERGAS | 666.14 | 107.37 | 200.45 | 133.37 | 86.28 | 68.15 | 62.98 | 106.30 | 70.60 | 68.17 | 61.26 |
| | MSA | 0.7287 | 0.1080 | 0.2477 | 0.1604 | 0.0878 | 0.0601 | 0.0516 | 0.1319 | 0.0643 | 0.0592 | 0.0474 |
| | SAM | 41.75 | 6.19 | 14.19 | 9.19 | 4.11 | 3.48 | 2.80 | 7.56 | 3.68 | 3.37 | 2.73 |
| | Time (s) | | 156 | 222 | 61 | 21 | 416 | 90 | 109 | 6 | 517 | 39 |
| CASE2 | MPSNR | 16.61 | 31.32 | 25.59 | 29.85 | 33.75 | 35.67 | 32.97 | 32.20 | 35.15 | 35.66 | 36.52 |
| | MSSIM | 0.2730 | 0.8836 | 0.6580 | 0.8550 | 0.9452 | 0.9593 | 0.8963 | 0.8883 | 0.9536 | 0.9580 | 0.9601 |
| | ERGAS | 512.32 | 90.12 | 186.27 | 109.48 | 69.61 | 54.86 | 108.84 | 95.79 | 58.68 | 55.60 | 55.35 |
| | MSA | 0.6118 | 0.0927 | 0.2309 | 0.1376 | 0.0703 | 0.0511 | 0.1377 | 0.1159 | 0.0564 | 0.0515 | 0.0510 |
| | SAM | 35.05 | 5.31 | 13.23 | 7.88 | 3.54 | 2.97 | 6.82 | 6.64 | 3.23 | 2.95 | 2.72 |
| | Time (s) | | 150 | 227 | 65 | 22 | 416 | 77 | 108 | 7 | 612 | 39 |
| CASE3 | MPSNR | 8.42 | 10.10 | 11.53 | 25.09 | 14.83 | 17.98 | 17.63 | 27.17 | 18.09 | 19.01 | 26.50 |
| | MSSIM | 0.0403 | 0.0546 | 0.0721 | 0.6820 | 0.6529 | 0.5866 | 0.4785 | 0.7827 | 0.6180 | 0.6276 | 0.7788 |
| | ERGAS | 1268.98 | 1047.52 | 891.19 | 185.26 | 638.07 | 428.61 | 456.35 | 158.97 | 425.77 | 383.12 | 157.92 |
| | MSA | 0.8110 | 0.7076 | 0.6144 | 0.1737 | 0.1719 | 0.1370 | 0.1838 | 0.1326 | 0.1375 | 0.1235 | 0.1069 |
| | SAM | 46.47 | 40.54 | 35.20 | 9.95 | 10.88 | 7.84 | 9.69 | 7.59 | 7.88 | 6.92 | 6.13 |
| | Time (s) | | 142 | 223 | 62 | 21 | 717 | 77 | 124 | 9 | 670 | 35 |
| CASE4 | MPSNR | 6.23 | 7.18 | 7.99 | 12.76 | 9.18 | 10.07 | 10.00 | 13.48 | 10.00 | 10.27 | 13.13 |
| | MSSIM | 0.0298 | 0.0392 | 0.0540 | 0.4690 | 0.3769 | 0.4360 | 0.3628 | 0.5709 | 0.3506 | 0.4766 | 0.5686 |
| | ERGAS | 1637.38 | 1469.89 | 1340.42 | 769.22 | 1203.47 | 1057.30 | 1067.07 | 797.07 | 1065.81 | 1016.81 | 736.68 |
| | MSA | 0.6220 | 0.5394 | 0.4619 | 0.1744 | 0.2116 | 0.1591 | 0.1767 | 0.1410 | 0.1796 | 0.1270 | 0.1359 |
| | SAM | 35.64 | 30.90 | 26.46 | 10.00 | 11.51 | 9.21 | 11.77 | 7.50 | 10.29 | 7.31 | 7.79 |
| | Time (s) | | 147 | 223 | 62 | 55 | 728 | 117 | 124 | 9 | 679 | 35 |
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

Share and Cite

MDPI and ACS Style

He, C.; Wei, Y.; Guo, K.; Han, H. Removal of Mixed Noise in Hyperspectral Images Based on Subspace Representation and Nonlocal Low-Rank Tensor Decomposition. Sensors 2024, 24, 327. https://doi.org/10.3390/s24020327


