Article

Weighted Group Sparse Regularized Tensor Decomposition for Hyperspectral Image Denoising

Shuo Wang, Zhibin Zhu, Yufeng Liu and Benxin Zhang
1
School of Electronic Engineering and Automation, Key Laboratory of Automatic Detecting Technology and Instruments, Guilin University of Electronic Technology, Guilin 541004, China
2
School of Mathematics and Computing Science, Guangxi Colleges and Universities Key Laboratory of Data Analysis and Computation, Guilin University of Electronic Technology, Guilin 541004, China
3
Center for Applied Mathematics of Guangxi (GUET), Guilin 541004, China
*
Author to whom correspondence should be addressed.
Appl. Sci. 2023, 13(18), 10363; https://doi.org/10.3390/app131810363
Submission received: 4 July 2023 / Revised: 7 September 2023 / Accepted: 14 September 2023 / Published: 16 September 2023

Abstract

Hyperspectral imaging (HSI) has been used in a wide range of applications in recent years, but during image acquisition, hyperspectral images are subject to various types of noise interference. Noise reduction algorithms can enhance image quality and make features of interest easier to detect and analyze. To achieve better image recovery, we propose a weighted group sparsity-regularized low-rank tensor ring decomposition (LRTRDGS) method for hyperspectral image recovery. This approach uses tensor ring decomposition to exploit self-similarity and global spectral correlation, and weighted group sparsity regularization to describe the group sparsity structure along the spectral dimension of the spatial difference images. Moreover, we solve the proposed model using a symmetric alternating direction method of multipliers with an added proximity term. The experimental results verify the effectiveness of our proposed method.

1. Introduction

Hyperspectral imaging (HSI) is a powerful tool for studying and analyzing data in a variety of applications. HSI uses a combination of visible, infrared and ultraviolet wavelengths to capture images in a way that traditional imaging methods cannot. HSI combines two essential attributes, spatial and spectral resolution: it possesses two spatial dimensions and one spectral dimension. As a result, it generates an almost continuous reflectance spectrum for each pixel in the scene, which in turn enables a comprehensive spectral analysis of scene attributes that would be difficult to identify with coarser multispectral scanners [1]. The Flying Laboratory of Imaging Systems (FLIS) provides a single platform that combines hyperspectral, thermal and laser scanning to produce high-quality remote sensing data [2].
HSI has many applications, such as mining [3], analyzing crops in agriculture [4,5], astroplanetary exploration [6], medical images [7], mineral exploration [8], the detection of soil-borne viruses [9] and the measurement of water quality [10]. However, high-spectral-resolution images often contain noise, which can significantly degrade their usefulness and accuracy. Noise attenuation is therefore an important step in analyzing hyperspectral images: it improves image quality and makes features of interest easier to identify and examine. With noise reduction algorithms, HSI becomes more useful and accurate across applications.
Tensors are a powerful mathematical tool for extracting information from HSI data. They reveal patterns and relationships between different parts of an image that are not visible with traditional imaging techniques, making it possible to analyze and interpret the data in a much more sophisticated way. Many researchers apply tensor methods, such as Tucker decomposition [11,12], canonical polyadic (CP) decomposition [13,14] and tensor singular-value decomposition (t-SVD) [15], to hyperspectral image analysis. For example, Chen et al. proposed a weighted group sparsity-regularized low-rank tensor decomposition model (LRTDGS) based on Tucker decomposition [16]. Using Tucker decomposition, Wang et al. developed a hybrid noise removal method for HSI [17]. Xue et al. introduced a nonlocal low-rank regularized canonical polyadic tensor decomposition method (NLR-CPTD) that relies on two priors: global correlation across the spectrum (GCS) and nonlocal self-similarity [18]. Xie et al. developed the weighted Schatten $l_p$-norm low-rank matrix approximation (WSN-LRMA) method, in which the eigenvalues are obtained through t-SVD [19]. Despite the satisfactory denoising results obtained with these techniques, there is still significant room to improve the quality of hyperspectral image denoising.
Many scholars have recently proposed decomposition methods based on tensor rings (TRs), which represent a higher-order tensor by cyclically contracted third-order tensors [20,21]. Circular shifting and equivalence can be performed according to the tensor ring factors, which effectively balances the correlation between the modes, so TR decomposition can better approximate higher-order tensors. For tensor completion, refs. [22,23] verified that the TR method obtains better results than the Tucker and CP decomposition methods. TR can also improve accuracy and enhance security: the ring structure allows data to be stored and processed with a more precise representation than traditional methods, which helps maintain accuracy during data processing, and it makes the stored data difficult to access without authorization.
The total variation (TV) regularization method was first proposed by Rudin et al. [24]. This technique effectively preserves the spatial sparsity and smoothness of images, making it applicable to various image processing tasks such as denoising, magnetic resonance imaging and superresolution. He et al. proposed total variation regularized low-rank matrix decomposition (LRTV) [25]. Wang et al. proposed a spectral–spatial total variation (SSTV) regularization method to construct a smooth structure in the spectral and spatial domains [17]. He et al. also presented a local low-rank matrix recovery method with spatial–spectral total variation regularization (LLRSSTV), which exploits the total variation in each direction of hyperspectral images to reorganize the local low-rank patches [26]. However, most of these methods impose one-norm constraints on the spatial difference images to promote piecewise-smooth structure. To address the resulting failure to depict the group sparse structure of the spatial difference images along the spectral dimension, Chen et al. introduced a hyperspectral image recovery approach that employs weighted group sparsity-regularized low-rank tensor decomposition to overcome the limitation of the one-norm [16].
In light of the preceding discussion, we present a novel approach for enhancing hyperspectral image quality through weighted group sparsity-regularized low-rank tensor ring decomposition (LRTRDGS). The method combines the advantages of tensor ring decomposition and weighted group sparse regularization. A symmetric alternating direction method of multipliers with a proximity point operator is used to solve the proposed model. The main contributions of this paper are as follows:
(1)
Exploiting the global spatial and spectral correlation of hyperspectral images, tensor ring decomposition is employed to separate clean hyperspectral images from raw observations corrupted by complex noise.
(2)
Because the gradient components in smooth areas of hyperspectral images are typically zero along the spectral dimension, whereas those in edge regions are non-zero, we incorporate a weighted group sparsity regularization term into the tensor ring decomposition framework. It explores the group structure of the spatial difference images along the spectral dimension.
(3)
A symmetric alternating direction method of multipliers is employed to solve the weighted group sparsity-regularized low-rank tensor ring decomposition model, and a proximity point operator is incorporated to enhance its efficiency. Numerical experiments show that this approach outperforms other commonly used methods in both quantitative evaluation and visual comparison.
The structure of this paper is as follows. Several notations are presented, and the tensor ring is defined in Section 2. Section 3 proposes a hyperspectral image denoising model based on weighted group sparse regularized tensor ring decomposition and presents the model solving method. In Section 4, we present the experimental results for the simulated data and discuss the parametric analysis and convergence analysis. We summarize the proposed method in Section 5.

2. Notations and Tensor Ring

This section describes the notations used throughout this paper and introduces the tensor ring approach presented in [20].

2.1. Notations

Following the nomenclature in [27], we summarize the notation used in this paper as follows. $x$ is a scalar, $\mathbf{x}$ is a vector, $X$ is a matrix and $\mathcal{X}$ is a tensor. $x_{i_1,i_2,\dots,i_N}$ or $\mathcal{X}(i_1,i_2,\dots,i_N)$ is the $(i_1,i_2,\dots,i_N)$th element of $\mathcal{X}$. $\mathbf{x}_{:,i_2,i_3}$ or $\mathcal{X}(:,i_2,i_3)$ is the $(i_2,i_3)$th column of a third-order tensor $\mathcal{X}$; $\mathbf{x}_{i_1,:,i_3}$ or $\mathcal{X}(i_1,:,i_3)$ is the $(i_1,i_3)$th row; and $\mathbf{x}_{i_1,i_2,:}$ or $\mathcal{X}(i_1,i_2,:)$ is the $(i_1,i_2)$th tube. $\|\mathcal{X}\|_F=\sqrt{\sum_{i_1,i_2,\dots,i_N}|x_{i_1,i_2,\dots,i_N}|^2}$ is the Frobenius norm and $\|\mathcal{X}\|_1=\sum_{i_1,i_2,\dots,i_N}|x_{i_1,i_2,\dots,i_N}|$ is the $l_1$-norm. $X_{(k)}$ is the mode-$k$ unfolding of $\mathcal{X}$. $\langle\cdot,\cdot\rangle$ is the inner product, $\otimes$ is the Kronecker product, $\odot$ is componentwise multiplication and $\times_k$ is the mode-$k$ tensor–matrix product.
To increase the readability of this article, the abbreviations used are summarized in the Abbreviations section.

2.2. Tensor Ring

In this section, we present the tensor ring technique and the corresponding definitions. The tensor ring structure is an effective form of decomposition in comparison with other tensor decompositions: TR decomposition represents a higher-order tensor as a sequence of cyclically multiplied third-order tensors. Let $\mathcal{X}\in\mathbb{R}^{I_1\times\cdots\times I_n}$ denote an $n$th-order tensor. The tensor ring representation decomposes it into a series of latent tensors,
$$\mathcal{X}(i_1,i_2,\dots,i_n)=\mathrm{Tr}\{\mathbf{G}_1(i_1)\mathbf{G}_2(i_2)\cdots\mathbf{G}_n(i_n)\}=\mathrm{Tr}\Big\{\prod_{k=1}^{n}\mathbf{G}_k(i_k)\Big\}, \tag{1}$$
where $\mathcal{X}(i_1,i_2,\dots,i_n)$ is the $(i_1,i_2,\dots,i_n)$th element of the tensor and $\mathbf{G}_k(i_k)$ is the $i_k$th lateral slice matrix, of size $r_k\times r_{k+1}$, of the latent tensor $\mathcal{G}_k$. Any two adjacent latent tensors $\mathcal{G}_k$ and $\mathcal{G}_{k+1}$ share the dimension $r_{k+1}$ on their corresponding modes, and the last latent tensor $\mathcal{G}_n$ has size $r_n\times I_n\times r_1$, i.e., $r_{n+1}=r_1$, which ensures that the product of these matrices is a square matrix. The latent tensor $\mathcal{G}_k$ is also called the $k$th core; $r_k$ $(k=1,2,\dots,n)$ denotes the size of the cores, and $\mathbf{r}=[r_1,r_2,\dots,r_n]$ is known as the TR rank. According to (1), the trace of the sequential product of the matrices $\{\mathbf{G}_k(i_k)\}$ equals $\mathcal{X}(i_1,i_2,\dots,i_n)$. In addition, (1) can be rewritten as follows:
$$\mathcal{X}(i_1,i_2,\dots,i_n)=\sum_{\alpha_1,\dots,\alpha_n=1}^{r_1,\dots,r_n}\prod_{k=1}^{n}\mathcal{G}_k(\alpha_k,i_k,\alpha_{k+1}), \tag{2}$$
where $k$ indexes the tensor modes, $i_k$ is the data dimension index and $\alpha_k$ is the latent dimension index, with $1\le\alpha_k\le r_k$ and $1\le i_k\le I_k$ for $k\in\{1,\dots,n\}$. Equation (2) can also be written as follows:
$$\mathcal{X}=\sum_{\alpha_1,\dots,\alpha_n=1}^{r_1,\dots,r_n}\mathbf{g}_1(\alpha_1,\alpha_2)\circ\mathbf{g}_2(\alpha_2,\alpha_3)\circ\cdots\circ\mathbf{g}_n(\alpha_n,\alpha_1), \tag{3}$$
where '$\circ$' is the outer product of vectors and $\mathbf{g}_k(\alpha_k,\alpha_{k+1})\in\mathbb{R}^{I_k}$ is the $(\alpha_k,\alpha_{k+1})$th mode-2 fiber of the core $\mathcal{G}_k$.
Figure 1 graphically represents the tensor ring as a linear tensor network. Each node represents a tensor, and the number of edges attached to a node gives its order; the number on each edge indicates the size of the corresponding mode. Connecting two tensors along a specific mode denotes tensor contraction, also known as the multilinear product. For a more comprehensive explanation, please refer to [20].
Definition 1.
(Multilinear Product). For two adjacent cores $\mathcal{G}_k$ and $\mathcal{G}_{k+1}$ of a TR decomposition, their multilinear product $\mathcal{G}_{k,k+1}\in\mathbb{R}^{r_k\times I_kI_{k+1}\times r_{k+2}}$ is given slice-wise by
$$\mathcal{G}_{k,k+1}\big(I_n(i_k-1)+j_k\big)=\mathbf{G}_k(i_k)\,\mathbf{G}_{k+1}(j_k), \tag{4}$$
where $i_k=1,\dots,I_k$ and $j_k=1,\dots,I_{k+1}$.
According to Definition 1, the tensor $\mathcal{X}$ can be expressed as the multilinear product of a sequence of third-order core tensors, known as TR decomposition, i.e.,
$$\mathcal{X}=\Phi([\mathcal{G}]), \tag{5}$$
where $\Phi$ is the reconfiguration operator of TR and $[\mathcal{G}]=\{\mathcal{G}_1,\mathcal{G}_2,\dots,\mathcal{G}_n\}$.
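To make the reconstruction operator $\Phi$ of (5) concrete, the following minimal NumPy sketch rebuilds a full tensor from a list of TR cores by contracting the shared rank modes and closing the ring with a trace. It is an illustration of Eqs. (1) and (5) under our own conventions, not the authors' implementation.

```python
import numpy as np

def tr_to_tensor(cores):
    """Reconstruct a full tensor from its TR cores.

    cores: list of n third-order arrays; cores[k] has shape
    (r_k, I_k, r_{k+1}), with the last rank wrapping around to the
    first (r_{n+1} = r_1) so that the trace in Eq. (1) is well defined.
    """
    # Absorb the cores one by one, contracting the shared rank mode.
    full = cores[0]                                  # (r_1, I_1, r_2)
    for core in cores[1:]:
        # (r_1, ..., r_k) x (r_k, I_k, r_{k+1}) -> (r_1, ..., I_k, r_{k+1})
        full = np.tensordot(full, core, axes=([-1], [0]))
    # Close the ring: the trace over the first and last rank modes
    # realizes Tr{ G_1(i_1) ... G_n(i_n) } for every index tuple at once.
    return np.trace(full, axis1=0, axis2=-1)

# Sanity check: random cores for a (4, 5, 6) tensor with TR rank [2, 3, 2].
rng = np.random.default_rng(0)
cores = [rng.standard_normal(s) for s in [(2, 4, 3), (3, 5, 2), (2, 6, 2)]]
print(tr_to_tensor(cores).shape)  # (4, 5, 6)
```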

3. Proposed Method

Assuming that the noise components are independent, the denoising problem is usually based on the following degradation model:
$$\mathcal{Y}=\mathcal{X}+\mathcal{S}+\mathcal{N}, \tag{6}$$
where $\mathcal{X}\in\mathbb{R}^{I_1\times I_2\times I_3}$ is the clean hyperspectral image, $\mathcal{N}\in\mathbb{R}^{I_1\times I_2\times I_3}$ is Gaussian noise, $\mathcal{S}\in\mathbb{R}^{I_1\times I_2\times I_3}$ is sparse noise, $I_1\times I_2$ is the spatial size of the hyperspectral image and $I_3$ is the number of spectral bands.
Based on this degradation model, we propose LRTRDGS, which recovers the image via tensor ring constraints and preserves fine details via group sparse regularization. The framework of the proposed LRTRDGS method is depicted in Figure 2. The LRTRDGS model is formulated as follows:
$$\min_{\mathcal{X},\mathcal{N},\mathcal{S}}\;\delta\|D\mathcal{X}\|_{W,2,1}+\lambda_1\|\mathcal{S}\|_1+\lambda_2\|\mathcal{N}\|_F^2\quad\text{s.t.}\quad\mathcal{Y}=\mathcal{X}+\mathcal{N}+\mathcal{S},\;\mathcal{X}=\Phi([\mathcal{G}]), \tag{7}$$
where $D$ consists of $D_x$ and $D_y$, the differential operators along the two spatial dimensions. The weighted $l_{2,1}$-norm of $D\mathcal{X}$ is defined as
$$\|D\mathcal{X}\|_{W,2,1}=\sum_{i=1}^{m}\sum_{j=1}^{n}W_x(i,j)\,\|D_x\mathcal{X}(i,j,:)\|_2+\sum_{i=1}^{m}\sum_{j=1}^{n}W_y(i,j)\,\|D_y\mathcal{X}(i,j,:)\|_2. \tag{8}$$
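As an illustration of Eq. (8), the sketch below evaluates the weighted $l_{2,1}$-norm of the spatial difference images for an $(m, n, b)$ cube. The use of forward differences with periodic boundaries and the function name `weighted_l21` are our own assumptions.

```python
import numpy as np

def weighted_l21(X, Wx, Wy):
    """Weighted l_{2,1} norm of the spatial difference images, Eq. (8).

    X      : (m, n, b) hyperspectral cube.
    Wx, Wy : (m, n) weight maps for the two spatial difference directions.
    Each spatial position contributes the weighted l2 norm of the spectral
    tube of its difference image, which promotes group sparsity along the
    spectral dimension.
    """
    Dx = np.roll(X, -1, axis=0) - X   # forward difference, first spatial mode
    Dy = np.roll(X, -1, axis=1) - X   # forward difference, second spatial mode
    return (Wx * np.linalg.norm(Dx, axis=2)).sum() + \
           (Wy * np.linalg.norm(Dy, axis=2)).sum()
```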
To solve the LRTRDGS model (7), we propose a symmetric alternating direction method of multipliers (ADMM) that incorporates a proximity point operator. Introducing two auxiliary variables, model (7) can be rewritten as follows:
$$\min_{\mathcal{X},\mathcal{N},\mathcal{S}}\;\delta\|\mathcal{P}\|_{W,2,1}+\lambda_1\|\mathcal{S}\|_1+\lambda_2\|\mathcal{N}\|_F^2\quad\text{s.t.}\quad\mathcal{Y}=\mathcal{X}+\mathcal{N}+\mathcal{S},\;\mathcal{X}=\mathcal{Q},\;\mathcal{P}=D\mathcal{Q},\;\mathcal{X}=\Phi([\mathcal{G}]).$$
Under the tensor ring decomposition constraint $\mathcal{X}=\Phi([\mathcal{G}])$, this problem has the following augmented Lagrangian function:
$$\begin{aligned}
L_\mu\big(\mathcal{X},\mathcal{S},\mathcal{N},\mathcal{P},\mathcal{Q},\Lambda_1,\Lambda_2,\Lambda_3\big)=\;&\delta\|\mathcal{P}\|_{W,2,1}+\lambda_1\|\mathcal{S}\|_1+\lambda_2\|\mathcal{N}\|_F^2\\
&+\langle\Lambda_1,\mathcal{Y}-\mathcal{X}-\mathcal{N}-\mathcal{S}\rangle+\langle\Lambda_2,\mathcal{X}-\mathcal{Q}\rangle+\langle\Lambda_3,D\mathcal{Q}-\mathcal{P}\rangle\\
&+\frac{\mu}{2}\Big(\|\mathcal{Y}-\mathcal{X}-\mathcal{N}-\mathcal{S}\|_F^2+\|\mathcal{X}-\mathcal{Q}\|_F^2+\|D\mathcal{Q}-\mathcal{P}\|_F^2\Big),
\end{aligned} \tag{9}$$
where $\Lambda_1$, $\Lambda_2$ and $\Lambda_3$ are the Lagrange multipliers and $\mu$ is a positive scalar. To optimize the augmented Lagrangian function in (9), we fix the other variables and update one block at a time. In addition, we use the symmetric ADMM with an added proximity point operator to improve the convergence speed of the algorithm. The overall scheme is as follows:
$$\begin{aligned}
\mathcal{X}^{p+1}&=\arg\min_{\mathcal{X}} L_\mu\big(\mathcal{X},\mathcal{S}^p,\mathcal{N}^p,\mathcal{P}^p,\mathcal{Q}^p,\Lambda_1^p,\Lambda_2^p,\Lambda_3^p\big)\\
\Lambda_1^{p+\frac12}&=\Lambda_1^p+v_1\mu\big(\mathcal{Y}-\mathcal{X}^{p+1}-\mathcal{N}^p-\mathcal{S}^p\big)\\
\Lambda_2^{p+\frac12}&=\Lambda_2^p+v_2\mu\big(\mathcal{X}^{p+1}-\mathcal{Q}^p\big)\\
\mathcal{Q}^{p+1}&=\arg\min_{\mathcal{Q}} L_\mu\big(\mathcal{X}^{p+1},\mathcal{S}^p,\mathcal{N}^p,\mathcal{P}^p,\mathcal{Q},\Lambda_1^{p+\frac12},\Lambda_2^{p+\frac12},\Lambda_3^p\big)+\frac{\sigma\mu}{2}\|\mathcal{Q}-\mathcal{Q}^p\|_F^2\\
\Lambda_3^{p+\frac12}&=\Lambda_3^p+v_3\mu\big(D\mathcal{Q}^{p+1}-\mathcal{P}^p\big)\\
\mathcal{P}^{p+1}&=\arg\min_{\mathcal{P}} L_\mu\big(\mathcal{X}^{p+1},\mathcal{S}^p,\mathcal{N}^p,\mathcal{P},\mathcal{Q}^{p+1},\Lambda_1^{p+\frac12},\Lambda_2^{p+\frac12},\Lambda_3^{p+\frac12}\big)\\
\mathcal{S}^{p+1}&=\arg\min_{\mathcal{S}} L_\mu\big(\mathcal{X}^{p+1},\mathcal{S},\mathcal{N}^p,\mathcal{P}^{p+1},\mathcal{Q}^{p+1},\Lambda_1^{p+\frac12},\Lambda_2^{p+\frac12},\Lambda_3^{p+\frac12}\big)\\
\mathcal{N}^{p+1}&=\arg\min_{\mathcal{N}} L_\mu\big(\mathcal{X}^{p+1},\mathcal{S}^{p+1},\mathcal{N},\mathcal{P}^{p+1},\mathcal{Q}^{p+1},\Lambda_1^{p+\frac12},\Lambda_2^{p+\frac12},\Lambda_3^{p+\frac12}\big)\\
\Lambda_1^{p+1}&=\Lambda_1^{p+\frac12}+v_4\mu\big(\mathcal{Y}-\mathcal{X}^{p+1}-\mathcal{N}^{p+1}-\mathcal{S}^{p+1}\big)\\
\Lambda_2^{p+1}&=\Lambda_2^{p+\frac12}+v_5\mu\big(\mathcal{X}^{p+1}-\mathcal{Q}^{p+1}\big)\\
\Lambda_3^{p+1}&=\Lambda_3^{p+\frac12}+v_6\mu\big(D\mathcal{Q}^{p+1}-\mathcal{P}^{p+1}\big)
\end{aligned} \tag{10}$$
In the $(p+1)$th iteration, we update the variables in model (10) as follows:
(1) Subproblem $\mathcal{X}$:
$$\min_{\mathcal{X}=\Phi([\mathcal{G}])}\;\langle\Lambda_1^p,\mathcal{Y}-\mathcal{X}-\mathcal{N}^p-\mathcal{S}^p\rangle+\langle\Lambda_2^p,\mathcal{X}-\mathcal{Q}^p\rangle+\frac{\mu}{2}\Big(\|\mathcal{Y}-\mathcal{X}-\mathcal{N}^p-\mathcal{S}^p\|_F^2+\|\mathcal{X}-\mathcal{Q}^p\|_F^2\Big). \tag{11}$$
Problem (11) can be expressed as
$$\min_{[\mathcal{G}]}\;\mu\left\|\Phi([\mathcal{G}])-\frac{1}{2}\left(\mathcal{Y}-\mathcal{N}^p-\mathcal{S}^p+\mathcal{Q}^p+\frac{\Lambda_1^p-\Lambda_2^p}{\mu}\right)\right\|_F^2. \tag{12}$$
Through tensor ring decomposition, we can easily obtain $[\mathcal{G}^{p+1}]$, and $\mathcal{X}$ is updated as
$$\mathcal{X}^{p+1}=\Phi([\mathcal{G}^{p+1}]). \tag{13}$$
(2) Subproblem $\mathcal{Q}$:
$$\mathcal{Q}^{p+1}=\arg\min_{\mathcal{Q}}\;\langle\Lambda_2^{p+\frac12},\mathcal{X}^{p+1}-\mathcal{Q}\rangle+\langle\Lambda_3^p,D\mathcal{Q}-\mathcal{P}^p\rangle+\frac{\mu}{2}\Big(\|\mathcal{X}^{p+1}-\mathcal{Q}\|_F^2+\|D\mathcal{Q}-\mathcal{P}^p\|_F^2\Big)+\frac{\sigma\mu}{2}\|\mathcal{Q}-\mathcal{Q}^p\|_F^2. \tag{14}$$
This problem can be optimized through the following linear system:
$$\big(\mu I+\mu D^*D+\sigma\mu I\big)\,\mathcal{Q}=\mu\mathcal{X}^{p+1}+\mu D^*\mathcal{P}^p+\Lambda_2^{p+\frac12}-D^*\Lambda_3^p+\sigma\mu\mathcal{Q}^p,$$
where $D^*$ is the adjoint operator of $D$. The matrix corresponding to the operator $D^*D$ is block circulant and can be diagonalized by the 3-D fast Fourier transform (FFT). Thus, we have
$$\begin{aligned}
H_Q&=\mu\mathcal{X}^{p+1}+\mu D^*\mathcal{P}^p+\Lambda_2^{p+\frac12}-D^*\Lambda_3^p+\sigma\mu\mathcal{Q}^p,\\
T_Q&=W_x^2\,|\mathrm{fftn}(D_x)|^2+W_y^2\,|\mathrm{fftn}(D_y)|^2,\\
\mathcal{Q}^{p+1}&=\mathrm{ifftn}\!\left(\frac{\mathrm{fftn}(H_Q)}{\mu I+\mu T_Q+\sigma\mu}\right),
\end{aligned} \tag{15}$$
where $\mathrm{fftn}$ and $\mathrm{ifftn}$ denote the 3-D fast Fourier transform and its inverse, respectively, $|\cdot|^2$ is the elementwise squared magnitude and the division is executed elementwise.
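The following sketch shows how the Fourier-domain division of Eq. (15) can be realized in NumPy. The first-order periodic difference kernels in `diff_eigenvalues` are assumptions, and the weight maps $W_x$, $W_y$ are omitted (taken as all ones) for brevity.

```python
import numpy as np

def diff_eigenvalues(shape):
    """Eigenvalues of D*D for first-order periodic differences
    (assumed kernels), i.e., |fftn(Dx)|^2 + |fftn(Dy)|^2 in Eq. (15)."""
    dx = np.zeros(shape); dx[0, 0, 0] = -1; dx[1, 0, 0] = 1  # mode-1 difference
    dy = np.zeros(shape); dy[0, 0, 0] = -1; dy[0, 1, 0] = 1  # mode-2 difference
    return np.abs(np.fft.fftn(dx)) ** 2 + np.abs(np.fft.fftn(dy)) ** 2

def solve_q(HQ, TQ, mu, sigma):
    """Solve (mu I + mu D*D + sigma mu I) Q = HQ elementwise in the
    Fourier domain; D*D is block circulant, hence diagonalized by fftn."""
    return np.real(np.fft.ifftn(np.fft.fftn(HQ) / (mu + mu * TQ + sigma * mu)))
```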
(3) Subproblem $\mathcal{P}$:
$$\begin{aligned}
\mathcal{P}^{p+1}&=\arg\min_{\mathcal{P}}\;\delta\|\mathcal{P}\|_{W,2,1}+\langle\Lambda_3^{p+\frac12},D\mathcal{Q}^{p+1}-\mathcal{P}\rangle+\frac{\mu}{2}\|D\mathcal{Q}^{p+1}-\mathcal{P}\|_F^2\\
&=\arg\min_{\mathcal{P}}\;\delta\|\mathcal{P}\|_{W,2,1}+\frac{\mu}{2}\left\|\mathcal{P}-\left(D\mathcal{Q}^{p+1}+\frac{\Lambda_3^{p+\frac12}}{\mu}\right)\right\|_F^2.
\end{aligned} \tag{16}$$
Using the soft threshold operator
$$\mathcal{R}_\Delta(x)=\begin{cases}
x-\Delta, & \text{if } x>\Delta,\\
x+\Delta, & \text{if } x<-\Delta,\\
0, & \text{otherwise},
\end{cases}$$
where $x\in\mathbb{R}$ and $\Delta>0$, we have
$$\mathcal{P}^{p+1}=\mathcal{R}_{\delta/\mu}\!\left(D\mathcal{Q}^{p+1}+\frac{\Lambda_3^{p+\frac12}}{\mu}\right). \tag{17}$$
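A one-line NumPy version of the soft threshold operator, together with the P update of Eq. (17); the names `DQ`, `Lambda3` and `mu` are hypothetical placeholders for $D\mathcal{Q}^{p+1}$, the multiplier and the penalty parameter. The same operator is reused for the S update of Eq. (19).

```python
import numpy as np

def soft_threshold(x, delta):
    """Elementwise soft-thresholding operator R_delta defined above."""
    return np.sign(x) * np.maximum(np.abs(x) - delta, 0.0)

# P update of Eq. (17), as a usage sketch:
# P_new = soft_threshold(DQ + Lambda3 / mu, delta / mu)
```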
(4) Subproblem $\mathcal{S}$:
$$\begin{aligned}
\mathcal{S}^{p+1}&=\arg\min_{\mathcal{S}}\;\lambda_1\|\mathcal{S}\|_1+\langle\Lambda_1^{p+\frac12},\mathcal{Y}-\mathcal{X}^{p+1}-\mathcal{N}^p-\mathcal{S}\rangle+\frac{\mu}{2}\|\mathcal{Y}-\mathcal{X}^{p+1}-\mathcal{N}^p-\mathcal{S}\|_F^2\\
&=\arg\min_{\mathcal{S}}\;\lambda_1\|\mathcal{S}\|_1+\frac{\mu}{2}\left\|\mathcal{S}-\left(\mathcal{Y}-\mathcal{X}^{p+1}-\mathcal{N}^p+\frac{\Lambda_1^{p+\frac12}}{\mu}\right)\right\|_F^2. \end{aligned} \tag{18}$$
By applying the aforementioned soft threshold operator, we obtain the solution
$$\mathcal{S}^{p+1}=\mathcal{R}_{\lambda_1/\mu}\!\left(\mathcal{Y}-\mathcal{X}^{p+1}-\mathcal{N}^p+\frac{\Lambda_1^{p+\frac12}}{\mu}\right). \tag{19}$$
(5) Subproblem $\mathcal{N}$:
$$\begin{aligned}
\mathcal{N}^{p+1}&=\arg\min_{\mathcal{N}}\;\lambda_2\|\mathcal{N}\|_F^2+\langle\Lambda_1^{p+\frac12},\mathcal{Y}-\mathcal{X}^{p+1}-\mathcal{N}-\mathcal{S}^{p+1}\rangle+\frac{\mu}{2}\|\mathcal{Y}-\mathcal{X}^{p+1}-\mathcal{S}^{p+1}-\mathcal{N}\|_F^2\\
&=\arg\min_{\mathcal{N}}\;\left(\lambda_2+\frac{\mu}{2}\right)\left\|\mathcal{N}-\frac{\mu\big(\mathcal{Y}-\mathcal{X}^{p+1}-\mathcal{S}^{p+1}\big)+\Lambda_1^{p+\frac12}}{\mu+2\lambda_2}\right\|_F^2. \end{aligned} \tag{20}$$
A simple calculation gives the solution
$$\mathcal{N}^{p+1}=\frac{\mu\big(\mathcal{Y}-\mathcal{X}^{p+1}-\mathcal{S}^{p+1}\big)+\Lambda_1^{p+\frac12}}{\mu+2\lambda_2}. \tag{21}$$
After solving these subproblems, we summarize the steps in Algorithm 1.
Algorithm 1 ADMM for HSI Denoising.
Input: the noisy hyperspectral image $\mathcal{Y}$; the parameters $\delta$, $\lambda_1$, $\lambda_2$, $\mu$ and $r=[r_1,r_2,r_3]$; and the constants $\sigma$ and $v_1,v_2,v_3,v_4,v_5,v_6$.
Initialization: $\mathcal{X}=\mathcal{N}=\mathcal{S}=\mathcal{Q}=\mathcal{P}=0$, $p=1$.
While not converged do:
1. Update $\mathcal{X}^{p+1}$ via (13).
2. Update $\Lambda_1^{p+1/2}$ via (10).
3. Update $\Lambda_2^{p+1/2}$ via (10).
4. Update $\mathcal{Q}^{p+1}$ via (15).
5. Update $\Lambda_3^{p+1/2}$ via (10).
6. Update $\mathcal{P}^{p+1}$ via (17).
7. Update $\mathcal{S}^{p+1}$ via (19).
8. Update $\Lambda_1^{p+1}$, $\Lambda_2^{p+1}$ and $\Lambda_3^{p+1}$ via (10), after updating $\mathcal{N}^{p+1}$ via (21).
9. Let $\mu=\min(1.5\mu,10^6)$.
10. Check the convergence condition $\|\mathcal{X}^p-\mathcal{X}^{p+1}\|_F^2/\|\mathcal{Y}\|_F^2\le 10^{-6}$.
End while
Output: the restored hyperspectral image $\mathcal{X}$.
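Putting the pieces together, the sketch below mirrors the update order of Algorithm 1. It is a simplified, assumption-laden skeleton: the TR approximation step of Eqs. (12) and (13) is abstracted as a user-supplied `tr_approx` callable (passing the identity reduces the sketch to a plain group-sparse ADMM denoiser), the difference operators use periodic boundaries and the weight maps are taken as all ones.

```python
import numpy as np

def dx(T):  return np.roll(T, -1, axis=0) - T   # forward difference, mode 1
def dy(T):  return np.roll(T, -1, axis=1) - T   # forward difference, mode 2
def dxT(T): return np.roll(T, 1, axis=0) - T    # adjoint of dx (periodic)
def dyT(T): return np.roll(T, 1, axis=1) - T    # adjoint of dy (periodic)

def soft(x, t):
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def lrtrdgs_sketch(Y, tr_approx, delta=0.05, lam1=1.0, lam2=210.0,
                   mu=0.006, sigma=0.01, v=(1.0, 0.8, 0.9, 1.1, 1.0, 1.0),
                   tol=1e-6, max_iter=100):
    """Skeleton of Algorithm 1 for a 3-D cube Y (illustrative defaults)."""
    X = np.zeros_like(Y); N = np.zeros_like(Y); S = np.zeros_like(Y)
    Q = np.zeros_like(Y); Px = np.zeros_like(Y); Py = np.zeros_like(Y)
    L1 = np.zeros_like(Y); L2 = np.zeros_like(Y)
    L3x = np.zeros_like(Y); L3y = np.zeros_like(Y)
    kx = np.zeros_like(Y); kx[0, 0, 0] = -1; kx[1, 0, 0] = 1
    ky = np.zeros_like(Y); ky[0, 0, 0] = -1; ky[0, 1, 0] = 1
    TQ = np.abs(np.fft.fftn(kx)) ** 2 + np.abs(np.fft.fftn(ky)) ** 2
    for _ in range(max_iter):
        X_old = X
        # X update, Eqs. (11)-(13): TR approximation of the balanced estimate
        X = tr_approx(0.5 * (Y - N - S + Q + (L1 - L2) / mu))
        # first (symmetric) multiplier half-updates
        L1 = L1 + v[0] * mu * (Y - X - N - S)
        L2 = L2 + v[1] * mu * (X - Q)
        # Q update, Eq. (15), solved elementwise in the Fourier domain
        H = mu * X + mu * (dxT(Px) + dyT(Py)) + L2 - dxT(L3x) - dyT(L3y) \
            + sigma * mu * Q
        Q = np.real(np.fft.ifftn(np.fft.fftn(H) / (mu + mu * TQ + sigma * mu)))
        L3x = L3x + v[2] * mu * (dx(Q) - Px)
        L3y = L3y + v[2] * mu * (dy(Q) - Py)
        # P update, Eq. (17); S update, Eq. (19); N update, Eq. (21)
        Px = soft(dx(Q) + L3x / mu, delta / mu)
        Py = soft(dy(Q) + L3y / mu, delta / mu)
        S = soft(Y - X - N + L1 / mu, lam1 / mu)
        N = (mu * (Y - X - S) + L1) / (mu + 2 * lam2)
        # remaining multiplier updates and penalty growth
        L1 = L1 + v[3] * mu * (Y - X - N - S)
        L2 = L2 + v[4] * mu * (X - Q)
        L3x = L3x + v[5] * mu * (dx(Q) - Px)
        L3y = L3y + v[5] * mu * (dy(Q) - Py)
        mu = min(1.5 * mu, 1e6)
        if np.linalg.norm(X - X_old) ** 2 <= tol * np.linalg.norm(Y) ** 2:
            break
    return X, S, N
```

Calling `lrtrdgs_sketch(Y, tr_approx=lambda T: T)` runs the skeleton end to end; in the full method, `tr_approx` would be a tensor ring approximation of rank $r$.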

4. Experimental Results

In this section, we present simulation results assessing the recovery effectiveness of the proposed LRTRDGS technique. The first dataset is the Washington DC Mall dataset, an aerial hyperspectral image of Washington, DC, acquired by the HYDICE sensor, with a size of 256 × 256 × 160. The second is the Pavia City Center dataset, an image of the city of Pavia in northern Italy captured by the ROSIS (Reflective Optics System Imaging Spectrometer) sensor, with dimensions 200 × 200 × 80 [28]. For comparison, we implement four typical HSI denoising methods, namely total variation regularized low-rank matrix factorization (LRTV) [25], patchwise low-rank matrix approximation (LRMR) [29], weighted group sparsity-regularized low-rank tensor decomposition (LRTDGS) [16] and an $L_{1-2}$ spatial–spectral total variation regularized local low-rank tensor recovery model (TLR-$L_{1-2}$SSTV) [30]. The experiments use the parameters suggested in the corresponding papers or in the authors' released code.
In this paper, three quantitative image quality indexes are used to evaluate the compared methods: the peak signal-to-noise ratio based on the characteristics of the human visual system (PSNR-HVS), the mean structural similarity index measure (MSSIM) and the erreur relative globale adimensionnelle de synthèse (ERGAS). PSNR-HVS is a full-reference quality metric that accounts for the characteristics of the human visual system. MSSIM assesses structural consistency to determine the similarity between the original and denoised hyperspectral images. ERGAS evaluates the fidelity of the denoised hyperspectral image by combining the MSE of each band with proper weights. The three metrics are defined as follows:
$$\mathrm{PSNR\text{-}HVS}=10\log_{10}\frac{255^2}{MSE_{HVS}}, \tag{22}$$
$$MSE_{HVS}=K\sum_{i=1}^{I-7}\sum_{j=1}^{J-7}\sum_{m=1}^{8}\sum_{n=1}^{8}\Big(\big(X[m,n]_{ij}-X^e[m,n]_{ij}\big)\,T_c[m,n]\Big)^2. \tag{23}$$
PSNR-HVS can be adapted to different block sizes and is not computationally intensive. Here, $I$ and $J$ denote the height and width of the image; $X_{ij}$ contains the DCT coefficients of the 8 × 8 image block whose upper left corner is at $(i,j)$, and $X^e_{ij}$ those of the corresponding block in the original image; $T_c$ is the quantization table specified in the JPEG standard; and $K=1/[(I-7)(J-7)\cdot 64]$ [31].
$$\mathrm{SSIM}=\frac{\big(2\mu_x\mu_y+C_1\big)\big(2\sigma_{xy}+C_2\big)}{\big(\mu_x^2+\mu_y^2+C_1\big)\big(\sigma_x^2+\sigma_y^2+C_2\big)},\qquad \mathrm{MSSIM}(X,Y)=\frac{1}{Q}\sum_{j=1}^{Q}\mathrm{SSIM}\big(x_j,y_j\big), \tag{24}$$
where $\mu_x$ and $\mu_y$ are the means of $x$ and $y$, $\sigma_x$ and $\sigma_y$ are their standard deviations and $\sigma_{xy}$ is their covariance; $C_1$ and $C_2$ are small constants that stabilize the division. The original image and the altered image are denoted as $X$ and $Y$, respectively; $x_j$ and $y_j$ are the contents of the $j$th local window, and $Q$ is the total number of local windows.
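As an illustration of Eq. (24), the sketch below computes the MSSIM of a single band using non-overlapping 8 × 8 local windows; the window scheme and the stabilizing constants (chosen for an 8-bit dynamic range) follow common SSIM practice and are assumptions here.

```python
import numpy as np

def mssim_band(x, y, win=8, c1=(0.01 * 255) ** 2, c2=(0.03 * 255) ** 2):
    """MSSIM of one band over non-overlapping win x win windows, Eq. (24)."""
    h, w = x.shape
    vals = []
    for i in range(0, h - win + 1, win):
        for j in range(0, w - win + 1, win):
            xw = x[i:i + win, j:j + win].ravel()
            yw = y[i:i + win, j:j + win].ravel()
            mx, my = xw.mean(), yw.mean()          # local means
            vx, vy = xw.var(), yw.var()            # local variances
            cov = ((xw - mx) * (yw - my)).mean()   # local covariance
            vals.append(((2 * mx * my + c1) * (2 * cov + c2)) /
                        ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2)))
    return float(np.mean(vals))
```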
$$\mathrm{ERGAS}=100\sqrt{\frac{1}{P}\sum_{i=1}^{P}\frac{\mathrm{mse}\big(ref_i,res_i\big)}{\mathrm{Mean}^2\big(ref_i\big)}}, \tag{25}$$
where $ref_i$ and $res_i$ denote the reference image and the recovered image of the $i$th band, respectively, and $P$ is the number of bands.
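For reference, a small NumPy routine for Eq. (25); the 100× scale factor is the conventional ERGAS normalization and is an assumption here.

```python
import numpy as np

def ergas(ref, res, scale=100.0):
    """ERGAS between reference and restored HSI cubes of shape (h, w, P).

    Each band's MSE is normalized by the squared band mean and the
    normalized errors are averaged over the P bands; lower is better.
    """
    P = ref.shape[-1]
    acc = 0.0
    for i in range(P):
        mse_i = np.mean((ref[..., i] - res[..., i]) ** 2)
        acc += mse_i / (np.mean(ref[..., i]) ** 2)
    return scale * np.sqrt(acc / P)
```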
In the experiments, all reflectance values of the hyperspectral images are normalized to [0, 1] for numerical calculation and visualization. To verify the performance of the method under different noise conditions, we add simulated noise to the HSI data. Three types of noise are considered: Gaussian noise, impulse noise and deadline noise. Deadline noise is caused by the uneven distribution of the sensor dark current and dark voltage, which produces dead lines in the image and degrades image quality. We consider the following five noise situations:
Case 1 (i.i.d. Gaussian noise): Every band is corrupted by zero-mean i.i.d. Gaussian noise.
Case 2 (i.i.d. Gaussian noise + deadline noise): In addition to the noise of case 1, bands 60–80 are corrupted by deadline noise. Each stripe is assigned a random width between 1 and 3, and the number of stripes per band is drawn randomly from 3 to 10.
Case 3 (i.i.d. Gaussian noise + i.i.d. impulse noise): All bands are corrupted by zero-mean i.i.d. Gaussian noise as well as i.i.d. impulse noise.
Case 4 (i.i.d. Gaussian noise + i.i.d. impulse noise + deadline noise): In addition to the noise of case 3, bands 60–80 are corrupted by deadline noise, with stripe widths drawn randomly from 1 to 3 and the number of stripes from 3 to 10 (a simulation sketch for this case is given after this list).
Case 5 (non-i.i.d. Gaussian noise + non-i.i.d. impulse noise + deadline noise): Every band is corrupted by zero-mean non-i.i.d. Gaussian noise and non-i.i.d. impulse noise; in addition, bands 60–80 contain deadline noise, with stripe widths drawn randomly from 1 to 3 and the number of stripes from 3 to 10.
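As an example of the simulation protocol, the sketch below generates case 4 noise for a normalized cube `X`. Treating the impulse noise as salt-and-pepper and realizing dead lines by zeroing whole columns are our assumptions about conventions the text leaves open.

```python
import numpy as np

rng = np.random.default_rng(0)

def add_case4_noise(X, sigma=0.4, rho=0.15, bands=range(59, 80)):
    """Case 4: i.i.d. Gaussian + i.i.d. impulse + deadline noise on
    bands 60-80 (1-based); stripe widths 1-3 and counts 3-10 per band."""
    Y = X + sigma * rng.standard_normal(X.shape)      # Gaussian noise
    mask = rng.random(X.shape) < rho                  # impulse (salt & pepper)
    Y[mask] = rng.choice([0.0, 1.0], size=int(mask.sum()))
    for b in bands:                                   # deadline stripes
        for _ in range(rng.integers(3, 11)):          # 3 to 10 stripes
            w = rng.integers(1, 4)                    # stripe width 1 to 3
            c = rng.integers(0, X.shape[1] - w)       # stripe position
            Y[:, c:c + w, b] = 0.0                    # dead columns
    return Y
```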
We now report the quantitative assessment of the compared methods. Table 1 displays the comparative results of the five HSI techniques on the Pavia City Center dataset, and Table 2 those on the Washington DC Mall dataset; the best results are highlighted in bold. The Gaussian noise level is denoted by θ and the impulse noise level by ρ. To visually compare the HSI denoising methods, the denoising results at band 68 of the Pavia City Center dataset are shown in Figure 3, Figure 4 and Figure 5, and those at band 61 of the Washington DC Mall dataset in Figure 6, Figure 7 and Figure 8.
Figure 3 and Figure 6 show the denoising results for case 1 and case 3, respectively. The proposed LRTRDGS delivers the best visual quality among the compared methods: it eliminates both Gaussian and impulse noise while maintaining the fundamental structure of the HSI. LRMR and TLR-$L_{1-2}$SSTV leave some residual noise, while LRTV and LRTDGS remove the noise but lose some details in the process. Cases 2, 4 and 5 focus on deadline noise removal. Figure 4 and Figure 5 depict the denoising results on the Pavia City Center dataset, and Figure 7 and Figure 8 those on the Washington DC Mall dataset. The proposed LRTRDGS effectively removes the deadline noise while preserving the underlying HSI details, whereas the results of LRTV and LRTDGS still show a small amount of deadline noise. Additionally, we present the per-band PSNR-HVS and MSSIM values on the Washington DC Mall dataset for case 3-1 (Figure 9 and Figure 10) and case 5-1 (Figure 11 and Figure 12). The results demonstrate that the proposed LRTRDGS outperforms the other methods in terms of both PSNR-HVS and MSSIM. On both hyperspectral images, LRTRDGS achieves superior results compared with the four competing methods, suggesting that tensor ring decomposition plays a positive role in denoising hyperspectral images.
The proposed LRTRDGS has no advantage in CPU time because the tensor ring structure is slightly more complex, resulting in longer computation times. In exchange for this cost, the use of tensor rings yields better quantitative (Table 1 and Table 2) and visual (Figure 6, Figure 7, Figure 8, Figure 9, Figure 10, Figure 11 and Figure 12) results.
In the LRTRDGS method, the choice of parameters determines the denoising quality. The parameters include the total variation regularization parameter $\delta$, the sparse noise regularization parameter $\lambda_1$, the Gaussian noise regularization parameter $\lambda_2$, the tensor ring rank $r=[r_1,r_2,r_3]$, the proximity parameter $\sigma$ and the step-size parameters $v_1,v_2,v_3,v_4,v_5,v_6$. Their influence is discussed below based on the case 3 simulation on the Washington DC Mall dataset.
We first examine the total variation regularization parameter $\delta$. Figure 13 shows the PSNR for $\delta$ values of 0.01, 0.05, 0.1, 0.5, 1 and 1.5; $\delta=0.05$ performs best.
Next, we examine the sparse noise regularization parameter $\lambda_1$, which controls the sparsity of the sparse noise. We set $\lambda_1=\frac{A\times 10^{6}}{M\times N}$, where $M$ and $N$ are the height and width of each hyperspectral band. Figure 14 shows the PSNR values when $A$ is set to 1, 2, 5, 8, 10, 15, 20, 25, 30, 40 and 50. The results show that $A=130$ achieves the best PSNR value.
We then consider the Gaussian noise regularization parameter $\lambda_2$, which limits the energy of the Gaussian noise. Figure 15 shows the PSNR values when $\lambda_2$ is set to 100, 200, 210, 220, 230, 240, 250, 260, 270, 280, 290 and 300; $\lambda_2=210$ yields the best PSNR value.
We next discuss the tensor ring rank $r=[r_1,r_2,r_3]$, an important parameter that controls the correlation of the tensors. To streamline the determination of the TR rank, we assume that the ranks of the second and third dimensions are identical, i.e., $r_2=r_3$. The PSNR values under different TR ranks are shown in Figure 16. To balance the robustness of the TR rank against the denoising performance, we set the TR rank differently in different noise environments; for the case discussed above, we set $r=[7,14,14]$.
We also consider the proximity parameter $\sigma$, which limits the step size of the proximity operator. We test the values 0.0001, 0.0005, 0.001, 0.005, 0.01 and 0.05; Figure 17 shows the results, and $\sigma=0.01$ is selected as the optimal value.
Regarding the influence of the parameter $\mu$, Figure 18 shows the recovery results for values of 0.001, 0.005, 0.006, 0.007, 0.01, 0.05 and 0.1; $\mu=0.006$ performs best.
Regarding the parameters $v_1,v_2,v_3,v_4,v_5$ and $v_6$, Figure 19, Figure 20, Figure 21, Figure 22, Figure 23 and Figure 24 show the analyzed data. We finally choose $v_1=1$, $v_2=0.8$, $v_3=0.9$, $v_4=1.1$, $v_5=1$ and $v_6=1$.
Because the subproblems for $\mathcal{P}$, $\mathcal{S}$, $\mathcal{N}$ and the multipliers are solved with simple algebraic operations, the main computational cost lies in the $\mathcal{X}$ and $\mathcal{Q}$ updates. Let $h\times w\times b$ be the size of the noisy hyperspectral image. A tensor ring representation is used to estimate $[\mathcal{G}]$ when updating $\mathcal{X}$; assuming $\mathcal{X}\in\mathbb{R}^{I\times I\times I}$ and TR ranks $r_1=r_2=r_3=r$, the $\mathcal{X}$ update costs $O(r^6+I^2r^4+I^3r^2)$ per iteration. $O(D_1\log D_1)$ is the computational complexity of the FFT, where $D_1$ is the data size, so the $\mathcal{Q}$ update costs $O(hwb\log(hwb))$. Overall, the proposed algorithm has a per-iteration complexity of $O(r^6+I^2r^4+I^3r^2+hwb\log(hwb))$.
Finally, numerical experiments are conducted to verify the convergence of the proposed algorithm. Figure 25 shows the variation in $R$ with the number of LRTRDGS iterations for the hyperspectral reconstructions, where $R=\|\mathcal{X}^p-\mathcal{X}^{p+1}\|_F^2/\|\mathcal{Y}\|_F^2$. As the number of iterations increases, $R$ approaches zero.

5. Conclusions

We have proposed a weighted group sparse regularized tensor ring decomposition method for hyperspectral image restoration. Tensor ring decomposition models the global spatial–spectral correlation of hyperspectral images, and constraining the spectral dimension of the spatial difference images with weighted group sparsity is more reasonable than plain total variation regularization. The experimental findings demonstrate the effectiveness of the proposed LRTRDGS method for hyperspectral image restoration: it effectively mitigates noise while preserving the intricate textural features of the HSI. For further exploration of the spatial domain of HSI, representation-based subspace low-rank learning methods, which provide more accurate regularization of nonlocal self-similarity, are worth considering, as are image compression techniques with a low rank degree.

Author Contributions

Conceptualization, Y.L.; Methodology, S.W., Y.L. and B.Z.; Software, S.W.; Resources, B.Z.; Writing—Review and Editing, S.W. and Z.Z.; Supervision, Z.Z. All authors have read and agreed to the published version of the manuscript.

Funding

This work is supported by the National Natural Science Foundation of China (61967004, 11901137, 11961011, 72061007 and 62171147), the Guangxi Key Laboratory of Automatic Detection Technology and Instruments (YQ23105, YQ20113 and YQ20114) and the Guangxi Key Laboratory of Cryptography and Information Security (GCIS201621 and GCIS201927).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The data used to support the findings of this study are available from the corresponding author upon request.

Conflicts of Interest

The authors declare that they have no conflict of interest.

Abbreviations

| Abbreviation | Name |
| --- | --- |
| FLIS | Flying Laboratory of Imaging Systems |
| HSI | Hyperspectral Imaging |
| CP | Canonical Polyadic |
| t-SVD | Tensor Singular-Value Decomposition |
| LRTDGS | Weighted Group Sparsity-Regularized Low-Rank Tensor Decomposition Model |
| GCS | Global Correlation Across Spectrum |
| NLR-CPTD | Nonlocal Low-Rank Regularized CP Tensor Decomposition Method |
| WSN-LRMA | Weighted Schatten $l_p$-Norm Low-Rank Matrix Approximation |
| TR | Tensor Ring |
| TV | Total Variation |
| LRTV | Total Variation Regularized Low-Rank Matrix Decomposition |
| SSTV | Spectral–Spatial Total Variation Regularization |
| LLRSSTV | Spatial–Spectral Total Variation Regularized Local Low-Rank Matrix Recovery Method |
| LRMR | Patchwise Low-Rank Matrix Approximation |
| TLR-$L_{1-2}$SSTV | $L_{1-2}$ Spatial–Spectral Total Variation Regularized Local Low-Rank Tensor Recovery Model |
| FFT | Fast Fourier Transform |
| PSNR-HVS | Peak Signal-to-Noise Ratio Based on the Characteristics of the Human Visual System |
| MSSIM | Mean Structural Similarity Index Measure |
| ERGAS | Erreur Relative Globale Adimensionnelle de Synthèse |

References

  1. Stuart, M.B.; McGonigle, A.J.S.; Willmott, J.R. Hyperspectral Imaging in Environmental Monitoring: A Review of Recent Developments and Technological Advances in Compact Field Deployable Systems. Sensors 2019, 19, 3071. [Google Scholar] [CrossRef]
  2. Hanuš, J.; Slezák, L.; Fabiánek, T.; Fajmon, L.; Hanousek, T.; Janoutová, R.; Kopkáně, D.; Novotný, J.; Pavelka, K.; Pikl, M.; et al. Flying Laboratory of Imaging Systems: Fusion of Airborne Hyperspectral and Laser Scanning for Ecosystem Research. Remote Sens. 2023, 15, 3130. [Google Scholar] [CrossRef]
  3. Schodlok, M.C.; Frei, M.; Segl, K. Implications of new hyperspectral satellites for raw materials exploration. Miner. Econ. 2022, 35, 495–502. [Google Scholar] [CrossRef]
  4. Avola, G.; Matese, A.; Riggi, E. Precision Agriculture Using Hyperspectral Images. Remote Sens. 2023, 15, 1917. [Google Scholar] [CrossRef]
  5. Moncholi-Estornell, A.; Cendrero-Mateo, M.P.; Antala, M.; Cogliati, S.; Moreno, J.; Van Wittenberghe, S. Enhancing Solar-Induced Fluorescence Interpretation: Quantifying Fractional Sunlit Vegetation Cover Using Linear Spectral Unmixing. Remote Sens. 2023, 15, 4274. [Google Scholar] [CrossRef]
  6. Naß, A.; van Gasselt, S. A Cartographic Perspective on the Planetary Geologic Mapping Investigation of Ceres. Remote Sens. 2023, 15, 4209. [Google Scholar] [CrossRef]
  7. Sharma, S.R.; Singh, B.; Kaur, M. A hybrid encryption model for the hyperspectral images: Application to hyperspectral medical images. Multimed. Tools Appl. 2023. [Google Scholar] [CrossRef]
  8. Bedini, E. The use of hyperspectral remote sensing for mineral exploration: A review. J. Hyperspectral Remote Sens. 2017, 7, 189–211. [Google Scholar] [CrossRef]
  9. Haagsma, M.; Hagerty, C.H.; Kroese, D.R.; Selker, J.S. Detection of soil-borne wheat mosaic virus using hyperspectral imaging: From lab to field scans and from hyperspectral to multispectral data. Precis. Agric. 2023, 24, 1030–1048. [Google Scholar] [CrossRef]
  10. Adjovu, G.E.; Stephen, H.; James, D.; Ahmad, S. Measurement of Total Dissolved Solids and Total Suspended Solids in Water Systems: A Review of the Issues, Conventional, and Remote Sensing Techniques. Remote Sens. 2023, 15, 3534. [Google Scholar] [CrossRef]
  11. Renard, N.; Bourennane, S.; Blanc-Talon, J. Denoising and dimensionality reduction using multilinear tools for hyperspectral images. IEEE Trans. Geosci. Remote Sens. 2008, 5, 138–142. [Google Scholar] [CrossRef]
  12. Chen, Y.; Huang, T.Z.; Zhao, X.L. Destriping of multispectral remote sensing image using low-rank tensor decomposition. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2018, 11, 4950–4967. [Google Scholar] [CrossRef]
  13. Liu, X.; Bourennane, S.; Fossati, C. Denoising of hyperspectral images using the PARAFAC model and statistical performance analysis. IEEE Trans. Geosci. Remote Sens. 2012, 50, 3717–3724. [Google Scholar] [CrossRef]
  14. Guo, X.; Huang, X.; Zhang, L.; Zhang, L. Hyperspectral image noise reduction based on rank-1 tensor decomposition. ISPRS J. Photogramm. Remote Sens. 2013, 83, 50–63. [Google Scholar] [CrossRef]
  15. Fan, H.; Li, C.; Guo, Y.; Kuang, G.; Ma, J. Spatial–spectral total variation regularized low-rank tensor decomposition for hyperspectral image denoising. IEEE Trans. Geosci. Remote Sens. 2018, 56, 6196–6213. [Google Scholar] [CrossRef]
  16. Chen, Y.; He, W.; Yokoya, N.; Huang, T.Z. Hyperspectral Image Restoration Using Weighted Group Sparsity-Regularized Low-Rank Tensor Decomposition. IEEE Trans. Cybern. 2020, 50, 3556–3570. [Google Scholar] [CrossRef]
  17. Wang, Y.; Peng, J.; Zhao, Q.; Meng, D.; Leung, Y.; Zhao, X.-L. Hyperspectral image restoration via total variation regularized low-rank tensor decomposition. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2018, 11, 1227–1243. [Google Scholar] [CrossRef]
  18. Xue, J.; Zhao, Y.; Liao, W.; Chan, J.C. Nonlocal low-rank regularized tensor decomposition for hyperspectral image denoising. IEEE Trans. Geosci. Remote Sens. 2019, 57, 5174–5189. [Google Scholar] [CrossRef]
  19. Xie, Y.; Qu, Y.; Tao, D.; Wu, W.; Yuan, Q.; Zhang, W. Hyperspectral Image Restoration via Iteratively Regularized Weighted Schatten p-Norm Minimization. IEEE Trans. Geosci. Remote Sens. 2016, 54, 4642–4659. [Google Scholar] [CrossRef]
  20. Zhao, Q.; Zhou, G.; Xie, S.; Zhang, L.; Cichocki, A. Tensor Ring Decomposition. arXiv 2016, arXiv:1606.05535. [Google Scholar] [CrossRef]
  21. Zhao, Q.; Sugiyama, M.; Yuan, L.; Cichocki, A. Learning Efficient Tensor Representations with Ring-structured Networks. In Proceedings of the ICASSP 2019—2019 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Brighton, UK, 12–17 May 2019. [Google Scholar]
  22. Wang, W.; Aggarwal, V.; Aeron, S. Efficient low rank tensor ring completion. In Proceedings of the IEEE International Conference on Computer Vision, Venice, Italy, 22–29 October 2017. [Google Scholar]
  23. He, W.; Yokoya, N.; Yuan, L.; Zhao, Q. Remote Sensing Image Reconstruction Using Tensor Ring Completion and Total Variation. IEEE Trans. Geosci. Remote Sens. 2019, 57, 8998–9009. [Google Scholar] [CrossRef]
  24. Rudin, L.I.; Osher, S.; Fatemi, E. Nonlinear total variation based noise removal algorithms. Phys. D Nonlinear Phenom. 1992, 60, 259–268. [Google Scholar] [CrossRef]
  25. He, W.; Zhang, H.; Zhang, L.; Shen, H. Total-variation-regularized low-rank matrix factorization for hyperspectral image restoration. IEEE Trans. Geosci. Remote Sens. 2016, 54, 178–188. [Google Scholar] [CrossRef]
  26. He, W.; Zhang, H.; Shen, H.; Zhang, L. Hyperspectral Image Denoising Using Local Low-Rank Matrix Recovery and Global Spatial–Spectral Total Variation. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2018, 11, 1–17. [Google Scholar]
  27. Kolda, T.G.; Bader, B.W. Tensor decompositions and applications. SIAM Rev. 2009, 51, 455–500. [Google Scholar] [CrossRef]
  28. Hyperspectral Images. Available online: https://engineering.purdue.edu/~biehl/MultiSpec/hyperspectral.html (accessed on 3 July 2023).
  29. Zhang, H.; He, W.; Zhang, L.; Shen, H.; Yuan, Q. Hyperspectral image restoration using low-rank matrix recovery. IEEE Trans. Geosci. Remote Sens. 2014, 52, 4729–4743. [Google Scholar] [CrossRef]
  30. Zeng, H.; Xie, X.; Cui, H.; Yin, H.; Ning, J. Hyperspectral Image Restoration via Global L1-2 Spatial-Spectral Total Variation Regularized Local Low-Rank Tensor Recovery. IEEE Trans. Geosci. Remote Sens. 2021, 59, 3309–3325. [Google Scholar] [CrossRef]
  31. Valizadeh, S.; Nasiopoulos, P.; Ward, R. Perceptual rate distortion optimization of 3D–HEVC using PSNR-HVS. Multimed. Tools Appl. 2018, 77, 22985–23008. [Google Scholar] [CrossRef]
Figure 1. A graphical representation of tensor ring decomposition.
Figure 2. Framework of the proposed HSI denoising method.
Figure 3. Denoising results of case 3-1 on the Pavia City Center dataset.
Figure 4. Denoising results of case 4-2 on the Pavia City Center dataset.
Figure 5. Denoising results of case 5-1 on the Pavia City Center dataset.
Figure 6. Denoising results of case 1-2 on the Washington DC Mall dataset.
Figure 7. Denoising results of case 4-1 on the Washington DC Mall dataset.
Figure 8. Denoising results of case 5-2 on the Washington DC Mall dataset.
Figure 9. PSNR-HVS of the different methods for each band of case 3-1 on the Washington DC Mall dataset.
Figure 10. MSSIM of the different methods for each band of case 3-1 on the Washington DC Mall dataset.
Figure 11. PSNR-HVS of the different methods for each band of case 5-1 on the Washington DC Mall dataset.
Figure 12. MSSIM of the different methods for each band of case 5-1 on the Washington DC Mall dataset.
Figure 13. Influence of parameter $\delta$.
Figure 14. Influence of parameter $A$.
Figure 15. Influence of parameter $\lambda_2$.
Figure 16. Influence of parameter $r$.
Figure 17. Influence of parameter $\sigma$.
Figure 18. Influence of parameter $\mu$.
Figure 19. Influence of parameter $v_1$.
Figure 20. Influence of parameter $v_2$.
Figure 21. Influence of parameter $v_3$.
Figure 22. Influence of parameter $v_4$.
Figure 23. Influence of parameter $v_5$.
Figure 24. Influence of parameter $v_6$.
Figure 25. Change in the value $R$ for the hyperspectral reconstructed images versus the number of iterations.
Table 1. Results of removing noise from the Pavia City Center dataset in different cases.

| Case | Noise level | Indicator | Noise | LRTV | LRMR | LRTDGS | TLR-L1-2SSTV | LRTRDGS |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Case 1-1 | θ = 0.2 | PSNR-HVS | 13.9932 | 28.7625 | 29.1682 | 26.9825 | 27.7324 | **29.8258** |
| | | MSSIM | 0.1733 | 0.7904 | 0.8469 | 0.7195 | 0.8438 | **0.8471** |
| | | ERGAS | 665.9520 | 168.4686 | 117.701 | 150.0049 | 148.9314 | **108.9523** |
| | | time (s) | | 117.272 | 99.203 | 72.022 | 88.274 | 100.32 |
| Case 1-2 | θ = 0.4 | PSNR-HVS | 7.9962 | 24.8261 | 24.3712 | 24.9912 | 24.6279 | **25.7813** |
| | | MSSIM | 0.0510 | 0.6418 | 0.6469 | 0.6003 | 0.6834 | **0.6887** |
| | | ERGAS | 1332.676 | 331.8135 | 206.591 | 188.9809 | 217.5825 | **178.4174** |
| | | time (s) | | 122.123 | 101.691 | 84.024 | 91.271 | 110.276 |
| Case 2-1 | θ = 0.2 + deadline | PSNR-HVS | 10.7621 | 26.1072 | 26.4181 | 25.5719 | 27.4619 | **27.5891** |
| | | MSSIM | 0.0871 | 0.6982 | 0.7445 | 0.6435 | 0.8401 | **0.8416** |
| | | ERGAS | 999.3376 | 233.1479 | 163.708 | 182.5346 | 155.4789 | **152.6477** |
| | | time (s) | | 119.718 | 100.178 | 81.651 | 87.874 | 103.782 |
| Case 2-2 | θ = 0.4 + deadline | PSNR-HVS | 7.9963 | 23.9574 | 24.3571 | 24.7893 | 23.9965 | **24.9979** |
| | | MSSIM | 0.0505 | 0.6202 | 0.6477 | 0.5918 | 0.6736 | **0.6762** |
| | | ERGAS | 1330.449 | 348.6010 | 208.053 | 201.1706 | 230.4959 | **190.4651** |
| | | time (s) | | 124.267 | 104.781 | 86.983 | 91.784 | 113.926 |
| Case 3-1 | θ = 0.3 + ρ = 0.1 | PSNR-HVS | 9.1271 | 25.2875 | 24.3688 | 25.2870 | 25.2769 | **25.9945** |
| | | MSSIM | 0.0640 | 0.6658 | 0.6700 | 0.6207 | 0.7212 | **0.7223** |
| | | ERGAS | 1169.856 | 276.2930 | 202.819 | 184.5704 | 200.0335 | **166.8273** |
| | | time (s) | | 98.962 | 119.267 | 74.232 | 85.127 | 103.261 |
| Case 3-2 | θ = 0.4 + ρ = 0.15 | PSNR-HVS | 6.9612 | 22.9659 | 22.4892 | 23.7926 | 23.8920 | **23.9864** |
| | | MSSIM | 0.0362 | 0.5578 | 0.5565 | 0.5448 | 0.6130 | **0.6188** |
| | | ERGAS | 1526.545 | 351.0957 | 257.677 | 216.6734 | 247.5929 | **215.5840** |
| | | time (s) | | 97.122 | 116.968 | 78.197 | 85.969 | 101.206 |
| Case 4-1 | θ = 0.4 + ρ = 0.15 + deadline | PSNR-HVS | 7.2871 | 23.3789 | 23.9865 | **24.8547** | 23.6794 | 24.2688 |
| | | MSSIM | 0.0404 | 0.5800 | 0.5887 | 0.5598 | 0.6042 | **0.6277** |
| | | ERGAS | 1462.063 | 359.3642 | 232.914 | 211.6882 | 253.7716 | **208.3434** |
| | | time (s) | | 100.271 | 120.861 | 81.994 | 89.275 | 111.788 |
| Case 4-2 | θ = 0.3 + ρ = 0.075 + deadline | PSNR-HVS | 10.1122 | **25.9769** | 25.0153 | 25.1878 | 24.9962 | 25.9014 |
| | | MSSIM | 0.0685 | 0.6798 | 0.6880 | 0.6200 | 0.7242 | **0.7350** |
| | | ERGAS | 1128.251 | 269.8426 | 189.770 | 191.0023 | 205.3501 | **174.1756** |
| | | time (s) | | 99.882 | 117.962 | 80.997 | 87.291 | 108.782 |
| Case 5-1 | θ = U(0.1, 0.2) + ρ = U(0.2, 0.3) + deadline | PSNR-HVS | 11.5786 | 26.8971 | 26.2788 | 26.0165 | 26.8961 | **26.9987** |
| | | MSSIM | 0.1003 | 0.7242 | 0.7498 | 0.6700 | 0.8258 | **0.8666** |
| | | ERGAS | 942.5571 | 264.2848 | 162.700 | 170.5190 | 165.8885 | **154.6694** |
| | | time (s) | | 100.978 | 118.653 | 82.878 | 87.998 | 109.004 |
| Case 5-2 | θ = U(0.4, 0.5) + ρ = U(0.01, 0.1) + deadline | PSNR-HVS | 6.8768 | 23.6871 | 23.0902 | **24.8910** | 23.8760 | 24.6891 |
| | | MSSIM | 0.0373 | 0.5704 | 0.5776 | 0.5545 | 0.6139 | **0.6391** |
| | | ERGAS | 1551.881 | 381.8465 | 236.612 | 212.0504 | 250.6892 | **199.8251** |
| | | time (s) | | 100.101 | 118.433 | 84.903 | 88.022 | 110.057 |

The best results are highlighted in bold.
Table 2. Results of removing noise from the Washington DC Mall dataset in different cases.

| Case | Noise level | Indicator | Noise | LRTV | LRMR | LRTDGS | TLR-L1-2SSTV | LRTRDGS |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Case 1-1 | θ = 0.2 | PSNR-HVS | 14.0163 | 27.901 | 29.7813 | 28.082 | 27.9870 | **30.8762** |
| | | MSSIM | 0.2560 | 0.8160 | 0.9046 | 0.8233 | 0.8928 | **0.9120** |
| | | ERGAS | 751.882 | 162.6289 | 120.4509 | 145.6978 | 151.9219 | **105.1567** |
| | | time (s) | | 65.903 | 125.355 | 101.05 | 168.771 | 129.875 |
| Case 1-2 | θ = 0.4 | PSNR-HVS | 8.0724 | 24.8962 | 24.9809 | 25.8565 | 21.6804 | **26.8577** |
| | | MSSIM | 0.0885 | 0.6706 | 0.7563 | 0.7345 | 0.4456 | **0.8046** |
| | | ERGAS | 1503.76 | 265.8668 | 211.9395 | 188.3212 | 314.1924 | **168.3608** |
| | | time (s) | | 68.384 | 129.074 | 116.272 | 171.283 | 134.115 |
| Case 2-1 | θ = 0.2 + deadline | PSNR-HVS | 14.0758 | 27.9572 | 29.6927 | 28.0184 | 26.9483 | **30.0078** |
| | | MSSIM | 0.2543 | 0.8121 | **0.9031** | 0.8234 | 0.8689 | 0.8996 |
| | | ERGAS | 753.640 | 159.1033 | 122.6426 | 146.1582 | 167.6949 | **120.0915** |
| | | time (s) | | 66.282 | 118.784 | 110.709 | 165.203 | 130.673 |
| Case 2-2 | θ = 0.4 + deadline | PSNR-HVS | 8.0034 | 24.9894 | 24.8892 | 25.8762 | 24.0708 | **26.9950** |
| | | MSSIM | 0.0880 | 0.6662 | 0.7556 | 0.7393 | 0.7898 | **0.7987** |
| | | ERGAS | 1503.85 | 258.5927 | 213.5223 | 190.8474 | 212.9540 | **181.1324** |
| | | time (s) | | 63.228 | 112.783 | 106.889 | 159.995 | 127.257 |
| Case 3-1 | θ = 0.2 + ρ = 0.1 | PSNR-HVS | 11.9947 | 27.7842 | 25.0749 | 27.9302 | 27.8404 | **30.7730** |
| | | MSSIM | 0.1701 | 0.7808 | 0.7993 | 0.8056 | 0.8755 | **0.9012** |
| | | ERGAS | 1047.90 | 175.0467 | 213.6028 | 153.4816 | 161.8219 | **112.6899** |
| | | time (s) | | 65.893 | 116.291 | 109.680 | 160.003 | 127.982 |
| Case 3-2 | θ = 0.3 + ρ = 0.2 | PSNR-HVS | 7.9956 | 24.0949 | 21.0092 | 24.8709 | 24.0825 | **26.9904** |
| | | MSSIM | 0.0827 | 0.6465 | 0.6547 | 0.7162 | 0.7646 | **0.8026** |
| | | ERGAS | 1529.86 | 258.4262 | 340.4201 | 204.1038 | 230.8225 | **80.2526** |
| | | time (s) | | 66.113 | 116.982 | 111.293 | 160.904 | 128.040 |
| Case 4-1 | θ = 0.4 + ρ = 0.1 + deadline | PSNR-HVS | 7.2837 | 23.8392 | 23.0392 | 25.1039 | 23.9029 | **25.9372** |
| | | MSSIM | 0.0716 | 0.6058 | 0.6908 | 0.7066 | 0.7450 | **0.7553** |
| | | ERGAS | 1672.07 | 303.2406 | 264.5094 | 207.8214 | 236.6696 | **201.1609** |
| | | time (s) | | 70.284 | 119.103 | 117.739 | 165.492 | 130.265 |
| Case 4-2 | θ = 0.2 + ρ = 0.3 + deadline | PSNR-HVS | 8.9829 | 25.0284 | 18.9271 | 26.0174 | 25.0172 | **26.8740** |
| | | MSSIM | 0.0849 | 0.6982 | 0.6003 | 0.7508 | 0.8120 | **0.8245** |
| | | ERGAS | 1472.33 | 241.7256 | 449.2673 | 185.5257 | 204.7093 | **174.2441** |
| | | time (s) | | 69.027 | 118.685 | 115.336 | 162.870 | 128.278 |
| Case 5-1 | θ = U(0.3, 0.4) + ρ = U(0.1, 0.2) + deadline | PSNR-HVS | 7.7944 | 23.8902 | 21.9080 | 24.9823 | 23.9802 | **25.6890** |
| | | MSSIM | 0.0780 | 0.6163 | 0.6715 | 0.7083 | 0.7525 | **0.7677** |
| | | ERGAS | 1584.39 | 307.9037 | 301.3547 | 208.4486 | 236.4430 | **198.0863** |
| | | time (s) | | 73.282 | 125.394 | 117.003 | 166.682 | 131.082 |
| Case 5-2 | θ = U(0.2, 0.3) + ρ = U(0.2, 0.3) + deadline | PSNR-HVS | 8.5902 | 24.6823 | 19.9322 | 25.8948 | 24.7839 | **26.7735** |
| | | MSSIM | 0.0860 | 0.6667 | 0.6278 | 0.7293 | 0.7847 | **0.8052** |
| | | ERGAS | 1494.11 | 271.4693 | 392.7230 | 198.8723 | 222.9171 | **184.1283** |
| | | time (s) | | 71.942 | 123.463 | 110.228 | 160.382 | 129.735 |

The best results are highlighted in bold.