Article

Hyperspectral Anomaly Detection Based on Regularized Background Abundance Tensor Decomposition

1 Computer Science and Engineering, Nanjing University of Science and Technology, Nanjing 210094, China
2 Institute of Engineering, Université Grenoble Alpes, CNRS, Grenoble INP, GIPSA-Lab, 38000 Grenoble, France
3 Institut Universitaire de France (IUF), 75231 Paris, France
* Author to whom correspondence should be addressed.
Remote Sens. 2023, 15(6), 1679; https://doi.org/10.3390/rs15061679
Submission received: 29 January 2023 / Revised: 8 March 2023 / Accepted: 14 March 2023 / Published: 20 March 2023
(This article belongs to the Section Remote Sensing Image Processing)

Abstract: The low spatial resolution of hyperspectral images means that existing mixed pixels rely heavily on spectral information, making it difficult to differentiate between the target of interest and the background. Endmember extraction methods are powerful in enhancing spatial structure information for hyperspectral anomaly detection, whereas most approaches are based on matrix representation, which inevitably destroys the spatial or spectral information. In this paper, we treated the hyperspectral image as a third-order tensor and proposed a novel anomaly detection method based on a low-rank linear mixing model of the scene background. The obtained abundance maps possessed more distinctive features than the raw data, which was beneficial for identifying anomalies in the background. Specifically, the low-rank tensor background was approximated as the mode-3 product of an abundance tensor and an endmember matrix. Due to the distinctive features of the background’s abundance maps, we characterized them by tensor regularization and imposed low rankness through CP decomposition, smoothness, and sparsity. In addition, we utilized the $\ell_{1,1,2}$-norm to characterize the tube-wise sparsity of the anomaly, since it accounted for a small portion of the scene. The experimental results obtained using five real datasets demonstrated the outstanding performance of our proposed method.

1. Introduction

In contrast to multispectral sensors, hyperspectral sensors enable hundreds of narrow and contiguous bands to be obtained in order to characterize each pixel in a real scene. They have attracted great attention from researchers in signal and image processing, as hyperspectral images (HSIs) contain valuable spatial and spectral information. Over the past few decades, hyperspectral images have been applied in various areas [1], such as data fusion, unmixing, classification, and target detection [2,3,4,5,6,7].
Anomaly detection (AD) aims to detect points of interest, such as natural materials, man-made targets, and other interferers, and to label each pixel as a target or background. AD is one of the most important research fields in HSI processing [8]. Specifically, it is assumed that the spectral signatures of an anomaly are different from those of its surrounding neighbors, which makes it feasible to distinguish anomalies from the background. AD is, in essence, an unsupervised classification problem without prior knowledge of the anomaly or background, leading to an incredibly challenging detection task.
To date, a large number of AD methods have been proposed for HSIs, grouped into three main categories: deep-learning-based, statistics-based, and geometric-modeling-based methods. Deep-learning-based AD algorithms usually use deep neural networks to mine deep features of spectra in HSIs. Since no prior information can be exploited in AD, many researchers train the deep model in an unsupervised way for AD tasks [9,10,11]. However, the deep model for off-the-shelf HSIs still takes a lot of time to train while exhibiting limited generalization ability. Reed–Xiaoli (RX) [12], a benchmark statistics-based method, assumes that the background follows a Gaussian distribution and accomplishes AD via the generalized likelihood ratio test (GLRT). Specifically, the Mahalanobis distance between the pixel being tested and the surrounding background is calculated to determine whether the pixel is an anomaly. Many RX-based methods have been developed [13,14,15,16,17,18]. For example, global RX (GRX) [13] and local RX (LRX) [14] are two typical versions of this method, and they estimate the background statistical variables using the entire image and the surrounding neighbors, respectively. The above methods suffer from the limitation of relying on the assumption that the background obeys a single distribution, which is difficult to satisfy in real hyperspectral scenes. Hence, the kernel-RX algorithm [15,16] was developed, which employs a kernel function and maps the data into a higher-dimensional feature space to characterize non-Gaussian distributions. Geometric-modeling-based methods [19,20,21,22,23,24,25,26,27,28] are another category of AD methods. Representation-based methods [19,20,21] have been successfully applied to AD because they do not need a specific distribution assumption, but they fail to capitalize on the high spectral correlation of HSIs.
Thus, robust principal component analysis (RPCA) was proposed, which assumes that the background is represented by a single subspace and aims to recover the low-rank background and separate a sparse anomaly from an observed HSI [22,23]. Nevertheless, the low-rank representation (LRR)-based model assumes that the data lie in multiple subspaces, since most pixels are mixed pixels, which necessitates the exploration of other methods for HSI AD [24,25]. An accurate background dictionary construction for LRR is still a challenging task. Linear mixing model (LMM)-based methods [29] have attracted considerable attention in the AD field due to their explicit physical descriptions obtained through background modeling and their ability to enhance the spatial structure [26,27,28]. Here, the background mixed pixels can be linearly represented by pure material signatures, which is normally accomplished by non-negative matrix factorization (NMF) [30], and they can be approximately written as the product of two non-negative matrices: an endmember matrix and an abundance matrix. Initially, Qu et al. [26] applied spectral unmixing to model original HSI data and regarded the obtained abundance maps as the input for LRR. To enhance the HSI spatial structure smoothness, more recently, the authors of [28] proposed an enhanced total variation (ETV) model with an endmember background dictionary (EBD) by applying ETV to the row vectors of the representation coefficient matrix. The abovementioned methods both achieved promising performance with respect to background modeling using the LMM, though they were matrix-based methods that reshaped the 3D HSI into a 2D matrix and could not avoid destroying the spatial or spectral structure of the HSI.
Since an HSI is essentially a cube, the aforementioned matrix-based background modeling methods failed to explore the intrinsic multidimensional structure of HSIs [31]. In comparison, HSI processing based on tensor decomposition can simultaneously preserve the spatial and spectral structural information [32,33]. Recent developments in tensor-based AD methods have heightened the need for inner-structure preservation [34] and the exploration of tensor physical characteristics [35,36,37]. The spectral signatures of the background pixels in the homogeneous regions have a high correlation, resulting in the background having a strong spatial linear correlation and therefore a low-rank characteristic. Conventional low-rank tensor decomposition [38] includes prior-based tensor approximation, canonical polyadic (CP) decomposition, and Tucker decomposition. Recently, several new tensor decomposition models [39,40,41] were proposed to capture the low-rank structural information in a tensoral manner. Li et al. [35] proposed a prior-based tensor approximation (PTA) method for hyperspectral AD, assuming the low rankness of the spectra and piecewise-smoothness in the spatial domain. Note that PTA actually operates on matrices and not tensors, as the hyperspectral cube is unfolded in 2D structures. Thus, these tensor-based approaches do not allow one to preserve the inner structure of the data. Song et al. [36] proposed an AD algorithm based on endmember extraction and LRR, whereas the physical meaning of the abundance maps underlying the LMM were not exploited.
In this paper, motivated by the fact that abundance maps possess more distinctive features than raw data, contributing to an accurate separation of anomalies from the background, we propose an abundance tensor regularization procedure with low rankness and smoothness based on sparse unmixing (ATLSS) for hyperspectral AD. With the proper modeling of the physical conditions, an observed third-order tensor HSI can be decomposed into a background tensor, an anomaly tensor, and Gaussian noise. Based on the LMM, the background tensor is approximated as a mode-3 product of an abundance tensor and an endmember matrix to enhance the spatial structure information for the existing mixed pixels. Peng [42] also demonstrated that abundance maps inherit the spatial structure of the original HSI, and it is rational to characterize this property on abundance maps. The spectral signatures of the background pixels in the homogeneous regions have a high correlation, which yields a spatial linear correlation and, therefore, a low-rank property. In [31,43], the authors imposed low rankness on the abundance tensor to effectively capture the HSI’s low-dimensional structure. In our paper, the abundance tensor is characterized by tensor regularization with low rankness through CP decomposition. Moreover, each pixel contains limited materials, and neighboring pixels are constituted by similar materials, which display sparsity and spatial smoothness properties. The anomaly part accounts for a small portion of the whole scene, and each tube-wise fiber (i.e., the spectral bands of each pixel) contains few non-zero values, which indicates the tube-wise sparsity of the anomaly tensor. In a real observed HSI, the spectra are usually corrupted with noise caused by the precision limits of the imaging spectrometer and errors in analog-to-digital conversion. This issue was dealt with by modeling the noise as Gaussian random variables [44,45,46]. Here, Gaussian noise was modeled separately from the anomalies to suppress the noise confusion in the anomalies.
The main contributions of this paper can be summarized as follows:
(1)
The low spatial resolution of HSIs means that existing mixed pixels rely heavily on spectral information, which makes it difficult to differentiate between the target of interest and the background. Therefore, in view of the LMM, we proposed a completely blind tensor-based model where the background is decomposed using the mode-3 product of the abundance tensor and endmember matrix, with the spatial structure and spectral information preserved. Moreover, the obtained abundance tensor of the background provides more robust structural information, which is essential for AD performance improvement.
(2)
Considering the distinctive features of the background’s abundance maps, we characterized them by tensor regularization and imposed low rankness, smoothness, and sparsity. Specifically, the $\ell_1$-norm was introduced to enforce sparseness, and the total variation (TV) regularizer was adopted to encourage spatial smoothness. Moreover, the typically high correlation between abundance vectors implies the low-rank structure of abundance maps. In contrast to the rigorous low-rank constraint, soft low-rank regularization was imposed on the background in order to leverage its spatial homogeneity. Its strictness was controlled by scalar parameters. In addition, for the anomaly part of HSI AD, the $\ell_{1,1,2}$-norm was utilized to characterize the tube-wise sparsity, since the anomalies accounted for a small portion of the scene. Gaussian noise was also incorporated into the model to suppress the noise confusion in the anomalies.
The experimental results obtained using five different datasets, with extensive metrics and illustrations, demonstrated that the proposed method significantly outperformed the other competing methods.
The remainder of this article is organized as follows. Section 2 introduces the problem formulation and the proposed method. In Section 3, we evaluate the performance of the proposed method and compare it with some traditional and state-of-the-art AD methods using five real hyperspectral datasets. Finally, the conclusion and future works are presented in Section 4.

2. Problem Formulation and Proposed Method

In this section, we first introduce HSI AD based on the background LMM. The proposed ATLSS algorithm is then illustrated. In addition, the overall flowchart of the proposed ATLSS algorithm for hyperspectral AD is shown in Figure 1.

2.1. Tensor Notation and Definition

This subsection introduces some mathematical notation and preliminaries related to tensors, so that our proposed method can be clearly described. We use lowercase bold symbols for vectors, e.g., $\mathbf{x}$, and capital letters for matrices, e.g., $X$. The paper denotes third-order tensors using bold Euler script letters, e.g., $\mathcal{X}$. A scalar is written as $x$.
Definition 1.
The dimension of a tensor is called the mode, and $\mathcal{X} \in \mathbb{R}^{I_1 \times I_2 \times \cdots \times I_N}$ has $N$ modes. Slices are two-dimensional sections of a tensor and are defined by fixing all but two indices. For a third-order tensor $\mathcal{Y} \in \mathbb{R}^{I_1 \times I_2 \times I_3}$, $\mathcal{Y}_{:,:,k}$ is the $k$-th frontal slice.
Definition 2
(the mode-n unfolding and folding of a tensor). The “unfold” operation along mode $n$ on an $N$-mode tensor $\mathcal{X} \in \mathbb{R}^{I_1 \times I_2 \times \cdots \times I_N}$ is defined as $\mathrm{unfold}_n(\mathcal{X}) = X_{(n)} \in \mathbb{R}^{I_n \times (I_1 \cdots I_{n-1} I_{n+1} \cdots I_N)}$. Its inverse operation is mode-$n$ folding, denoted as $\mathcal{X} = \mathrm{fold}_n(X_{(n)})$.
Definition 3
(rank-one tensors). An $N$-way tensor $\mathcal{X} \in \mathbb{R}^{I_1 \times I_2 \times \cdots \times I_N}$ is a rank-one tensor if it can be written as the outer product of $N$ vectors.
Definition 4
(mode-n product). The mode-$n$ product of a tensor $\mathcal{X} \in \mathbb{R}^{I_1 \times I_2 \times \cdots \times I_N}$ and a matrix $U \in \mathbb{R}^{J \times I_n}$ is defined as:
$$(\mathcal{X} \times_n U)_{i_1 \cdots i_{n-1}\, j\, i_{n+1} \cdots i_N} = \sum_{i_n=1}^{I_n} x_{i_1 i_2 \cdots i_N}\, u_{j i_n}$$
Equivalently, the mode-$n$ product can be computed by matrix multiplication:
$$\mathcal{Y} = \mathcal{X} \times_n U \iff Y_{(n)} = U X_{(n)}$$
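As a concrete illustration of Definitions 2 and 4, the unfolding/folding operations and the mode-$n$ product can be sketched in NumPy (an illustrative Python sketch; the helper names `unfold`, `fold`, and `mode_n_product` are ours, not from the paper):

```python
import numpy as np

def unfold(X, n):
    """Mode-n unfolding X_(n): move mode n to the front and flatten the rest."""
    return np.moveaxis(X, n, 0).reshape(X.shape[n], -1)

def fold(M, n, shape):
    """Inverse of unfold: reshape M and move mode n back into place."""
    rest = [s for i, s in enumerate(shape) if i != n]
    return np.moveaxis(M.reshape([shape[n]] + rest), 0, n)

def mode_n_product(X, U, n):
    """Mode-n product via the matrix identity Y_(n) = U X_(n)."""
    shape = list(X.shape)
    shape[n] = U.shape[0]
    return fold(U @ unfold(X, n), n, shape)
```

For a third-order tensor, `mode_n_product(X, U, 2)` is exactly the mode-3 product used throughout the paper.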
Definition 5
(CP decomposition). CP decomposition factorizes an $N$-order tensor $\mathcal{X} \in \mathbb{R}^{I_1 \times I_2 \times \cdots \times I_N}$ into a sum of component rank-one tensors as:
$$\mathcal{X} \approx [\![\lambda; B^{(1)}, B^{(2)}, \ldots, B^{(N)}]\!] \equiv \sum_{r=1}^{R} \lambda_r\, b_r^{(1)} \circ b_r^{(2)} \circ \cdots \circ b_r^{(N)}$$
where $\lambda \in \mathbb{R}^{R}$ is the weight vector and $B^{(n)} = [b_1^{(n)} \cdots b_R^{(n)}] \in \mathbb{R}^{I_n \times R}$, $n \in \{1, \ldots, N\}$, is the $n$-th factor matrix. $R$ is the rank of the tensor $\mathcal{X}$, and we denote $\mathrm{rank}(\mathcal{X}) = R$.
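Definition 5 can be made concrete with a short sketch that rebuilds a tensor from its CP factors as a sum of weighted rank-one outer products (illustrative Python/NumPy; `cp_reconstruct` is a hypothetical helper name, not from the paper):

```python
import numpy as np

def cp_reconstruct(weights, factors):
    """Sum of R weighted rank-one tensors: sum_r lambda_r b_r^(1) o ... o b_r^(N)."""
    shape = tuple(B.shape[0] for B in factors)
    X = np.zeros(shape)
    for r in range(len(weights)):
        term = np.array(weights[r])
        for B in factors:
            term = np.multiply.outer(term, B[:, r])  # outer product with the next mode
        X += term
    return X
```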

2.2. HSI AD Based on Background LMM

Since an HSI cube can be naturally treated as a third-order tensor, we used a tensor-based representation to avoid spatial and spectral information loss. In such an AD application, an HSI tensor can be decomposed into so-called background and anomaly tensors [47]. However, in real-world applications, the scenes are usually corrupted by noise [45]; hence, in this paper, noise was also added into the model to suppress its confusion with the estimated anomaly term, and the model is expressed as follows:
$$\mathcal{Y} = \mathcal{B} + \mathcal{E} + \mathcal{N}$$
where $\mathcal{Y} \in \mathbb{R}^{H \times W \times D}$ is the observed HSI and $\mathcal{B}, \mathcal{E}, \mathcal{N} \in \mathbb{R}^{H \times W \times D}$ are the background, anomaly, and noise, respectively. $H$, $W$, and $D$ represent the height, width, and number of bands of each tensor, respectively. The purpose of AD is to reconstruct an accurate background image so as to more accurately separate the anomalies from the background and noise, yielding superior performance.
There are typically several mixed pixels in natural HSI scenes, implying that more than one material constitutes each mixed pixel. This refers to an explicit physical interpretation under the assumption of the LMM; that is, the spectrum of each pixel of a low-rank background can be expressed as a linear combination of a few pure spectral endmembers. NMF is an ideal solution for the LMM, as it decomposes the original data into the product of two low-dimensional non-negative matrices. Alternatively, under tensor notation, the tensor background approximates the mode-3 product of a non-negative abundance tensor and a non-negative endmember matrix. Mathematically, inspired by NMF, Equation (4) can be extended to the following form:
$$\mathcal{Y} = \mathcal{X} \times_3 A + \mathcal{E} + \mathcal{N} \quad \text{s.t.}\ \mathcal{X} \geq 0,\ A \geq 0$$
where $\mathcal{X} \in \mathbb{R}^{H \times W \times R}$ is the third-order abundance tensor, $A \in \mathbb{R}^{D \times R}$ is the endmember matrix, and $R$ is the number of endmembers. The third-order abundance tensor is obtained by reshaping each abundance vector into a frontal slice matrix $\mathcal{X}_{:,:,r}$, $r \in \{1, \ldots, R\}$, of dimensions $H \times W$ and then stacking these slices along the mode-3 direction. The newly formed abundance tensor preserves the underlying inner low-rank structure information for thorough characterization.
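A toy NumPy sketch of the model in (5): a synthetic scene is generated as the mode-3 product of a non-negative abundance tensor and a non-negative endmember matrix, plus a tube-wise sparse anomaly and Gaussian noise (all sizes and values here are illustrative, not the datasets used in the paper):

```python
import numpy as np

H, W, D, R = 20, 20, 30, 3                     # toy dimensions (illustrative only)
rng = np.random.default_rng(2)

A = np.abs(rng.standard_normal((D, R)))        # non-negative endmember matrix
Xab = np.abs(rng.standard_normal((H, W, R)))   # non-negative abundance tensor

# Background as the mode-3 product B = X x_3 A (each pixel spectrum is A @ abundances)
B = np.einsum('hwr,dr->hwd', Xab, A)

# Tube-wise sparse anomaly: a single pixel fiber carries its own signature
E = np.zeros((H, W, D))
E[5, 5, :] = rng.standard_normal(D)

N = 0.01 * rng.standard_normal((H, W, D))      # dense Gaussian noise
Y = B + E + N                                  # observed HSI, as in Equation (5)
```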

2.3. Proposed Method

An HSI usually consists of a few materials, so it lies in low-rank subspaces. Moreover, similar substances are distributed in adjacent regions, which gives the HSI a locally smooth property. Compared to the background, anomalies are distributed randomly; thus, anomalies are often assumed to be sparse. Therefore, in this paper, we modeled the problem based on the assumption that the observed HSI was a superposition of a low-rank background, sparse anomalies, and a noise term.
The LMM assumes that each mixed pixel in the background linearly consists of a few endmembers, indicating that many zero entries are contained in the abundance tensor, which can be represented by a sparse property. Remarkably, the $\ell_0$-norm can directly minimize the number of non-zero components, but this leads to an NP-hard problem. The $\ell_1$-norm, therefore, is introduced to promote the sparsity of the abundance tensor, where the sparsity prior narrows the solution space and achieves accurate abundance tensor estimation. The anomaly pixels occupy a small proportion of the scene, indicating that the anomaly matrix has a column-sparse property and is characterized by the $\ell_{2,1}$-norm [48]. Here, we have $\|E_{(3)}\|_{2,1} = \|\mathcal{E}\|_{1,1,2}$; therefore, due to the physical meaning and the definition of the $\ell_{1,1,2}$-norm in [49], it is reasonable to assume that the tensor anomaly has tube-wise sparsity. In addition, an HSI is usually corrupted by noise, which is assumed to consist of independent and identically distributed Gaussian random variables [44]; therefore, the noise was modeled as $\|\mathcal{N}\|_F^2$ to suppress it from being confused with the anomaly. In general, the background model based on the LMM can be rewritten as follows:
$$\min_{\mathcal{X}, A, \mathcal{E}, \mathcal{N}} \frac{1}{2}\|\mathcal{N}\|_F^2 + \lambda_1 \|\mathcal{X}\|_1 + \beta \|\mathcal{E}\|_{1,1,2} \quad \text{s.t.}\ \mathcal{Y} = \mathcal{X} \times_3 A + \mathcal{E} + \mathcal{N},\ \mathcal{X} \geq 0,\ A \geq 0$$
where $\|\cdot\|_1$ and $\|\cdot\|_F$ denote the $\ell_1$-norm and the Frobenius norm, respectively, and $\lambda_1$ and $\beta$ are the trade-off parameters that control the sparsity of the abundance tensor and the tube-wise sparsity of the anomaly, respectively.
In addition to the single-pixel sparsity, the spatial correlation between neighboring pixels also deserves to be exploited. It is supposed that there is a high correlation among the spectra of the pixels lying in homogeneous regions. For this, we imposed a soft low-rank property on the abundance tensor $\mathcal{X}$ of the background tensor $\mathcal{B}$ in order to model the aforementioned high correlation property of homogeneous regions.
Assuming that $K_X$ designates the rank of the abundance tensor, the loss function for HSI AD could be written as:
$$\min_{\mathcal{X}, A, \mathcal{E}, \mathcal{N}} \frac{1}{2}\|\mathcal{N}\|_F^2 + \lambda_1 \|\mathcal{X}\|_1 + \beta \|\mathcal{E}\|_{1,1,2} \quad \text{s.t.}\ \mathrm{rank}(\mathcal{X}) = K_X,\ \mathcal{Y} = \mathcal{X} \times_3 A + \mathcal{E} + \mathcal{N},\ \mathcal{X} \geq 0,\ A \geq 0.$$
However, the rank constraint on $\mathcal{X}$ in (7) is non-convex, so the optimization is difficult. Referring to [50], we introduced $\mathcal{Q}$, which was assumed to be low-rank and represents a low-rank prior. Then, CP decomposition was employed to measure the low rankness of $\mathcal{Q}$ via the summation of $K_X$ rank-one components. Consequently, the CP decomposition of $\mathcal{Q}$ could be written as:
$$\mathcal{Q} = [\![Z^{(1)}, Z^{(2)}, Z^{(3)}]\!] = \sum_{i=1}^{K_X} z_i^{(1)} \circ z_i^{(2)} \circ z_i^{(3)}$$
where $Z^{(1)} = [z_1^{(1)} \cdots z_{K_X}^{(1)}] \in \mathbb{R}^{H \times K_X}$, $Z^{(2)} = [z_1^{(2)} \cdots z_{K_X}^{(2)}] \in \mathbb{R}^{W \times K_X}$, and $Z^{(3)} = [z_1^{(3)} \cdots z_{K_X}^{(3)}] \in \mathbb{R}^{D \times K_X}$ are the factor matrices.
Subsequently, we introduced a new regularization term controlled by a low-rank tensor $\mathcal{Q}$ to enforce a non-strict constraint on $K_X$, as shown in (9). A very large $K_X$ value would lead to the mixing of anomalies into the background, which would undermine the low-rank characterization of the background by the regularization term. Therefore, the parameter $\lambda_3$ aimed to modify the strictness of the low-rank constraint on $\mathcal{X}$. Thus, not only was the non-convex issue addressed, but the small-scale details that were necessary for the background could also be preserved. The function based on (7) was approximately rewritten as:
$$\min_{\mathcal{X}, A, \mathcal{E}, \mathcal{Q}, \mathcal{N}} \frac{1}{2}\|\mathcal{N}\|_F^2 + \lambda_1 \|\mathcal{X}\|_1 + \frac{\lambda_3}{2}\|\mathcal{X} - \mathcal{Q}\|_F^2 + \beta \|\mathcal{E}\|_{1,1,2} \quad \text{s.t.}\ \mathrm{rank}(\mathcal{Q}) = K_X,\ \mathcal{Y} = \mathcal{X} \times_3 A + \mathcal{E} + \mathcal{N},\ \mathcal{X} \geq 0,\ A \geq 0$$
where the rank is controlled by $K_X$. Then, the model of (9) could be rewritten as:
$$\min_{\mathcal{X}, A, \mathcal{E}, Z^{(1)}, Z^{(2)}, Z^{(3)}, \mathcal{N}} \frac{1}{2}\|\mathcal{N}\|_F^2 + \lambda_1 \|\mathcal{X}\|_1 + \frac{\lambda_3}{2}\|\mathcal{X} - [\![Z^{(1)}, Z^{(2)}, Z^{(3)}]\!]\|_F^2 + \beta \|\mathcal{E}\|_{1,1,2} \quad \text{s.t.}\ \mathcal{Y} = \mathcal{X} \times_3 A + \mathcal{E} + \mathcal{N},\ \mathcal{X} \geq 0,\ A \geq 0.$$
Notably, pixels with spatial homogeneity are more likely to contain the same materials, which indicates that the fractional abundances of adjacent pixels tend to be similar. Here, the spatial context information of the abundance tensor was characterized by the TV regularizer [51], which encourages a piecewise-smooth structure while preserving distinct edges. The spatial TV norm of the abundance tensor $\mathcal{X}$ is defined as [52]:
$$\|\mathcal{X}\|_{TV} = \|\mathbf{H}\mathcal{X}\|_1 = \|\mathbf{H}_h \mathcal{X}\|_1 + \|\mathbf{H}_v \mathcal{X}\|_1.$$
Let $\mathcal{X}_{h,w,r}$, $h \in \{1, \ldots, H\}$, $w \in \{1, \ldots, W\}$, $r \in \{1, \ldots, R\}$, indicate the intensity of the voxel $(h, w, r)$, and let $\mathbf{H}_h$ and $\mathbf{H}_v$ be the horizontal and vertical differential operators in the spatial domain, respectively. Then, we have:
$$(\mathbf{H}_h \mathcal{X})_{h,w,r} = \mathcal{X}_{h,w+1,r} - \mathcal{X}_{h,w,r}, \qquad (\mathbf{H}_v \mathcal{X})_{h,w,r} = \mathcal{X}_{h+1,w,r} - \mathcal{X}_{h,w,r}$$
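The forward differences in (12) and the anisotropic TV norm in (11) can be sketched as follows (illustrative NumPy; a periodic boundary is assumed here, which is one common convention consistent with FFT-based solvers, and the function names are ours):

```python
import numpy as np

def tv_diffs(X):
    """Forward spatial differences of an H x W x R abundance tensor (periodic boundary)."""
    Dh = np.roll(X, -1, axis=1) - X   # horizontal: X[h, w+1, r] - X[h, w, r]
    Dv = np.roll(X, -1, axis=0) - X   # vertical:   X[h+1, w, r] - X[h, w, r]
    return Dh, Dv

def tv_norm(X):
    """Anisotropic spatial TV norm ||H_h X||_1 + ||H_v X||_1."""
    Dh, Dv = tv_diffs(X)
    return np.abs(Dh).sum() + np.abs(Dv).sum()
```

A spatially constant abundance slice has zero TV, while isolated jumps are penalized in proportion to their magnitude.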
The TV-based cost function corresponding to (7) could be modeled as:
$$\min_{\mathcal{X}, A, \mathcal{E}, \mathcal{N}} \frac{1}{2}\|\mathcal{N}\|_F^2 + \lambda_1 \|\mathcal{X}\|_1 + \lambda_2 \|\mathcal{X}\|_{TV} + \beta \|\mathcal{E}\|_{1,1,2} \quad \text{s.t.}\ \mathrm{rank}(\mathcal{X}) = K_X,\ \mathcal{Y} = \mathcal{X} \times_3 A + \mathcal{E} + \mathcal{N},\ \mathcal{X} \geq 0,\ A \geq 0$$
where $\|\cdot\|_{TV}$ is the TV norm and $\lambda_2$ is a parameter that adjusts the strength of the piecewise smoothness.
Similarly to (7) and (8), this could be further rewritten as:
$$\min_{\mathcal{X}, A, \mathcal{E}, Z^{(1)}, Z^{(2)}, Z^{(3)}, \mathcal{N}} \frac{1}{2}\|\mathcal{N}\|_F^2 + \lambda_1 \|\mathcal{X}\|_1 + \lambda_2 \|\mathcal{X}\|_{TV} + \frac{\lambda_3}{2}\|\mathcal{X} - [\![Z^{(1)}, Z^{(2)}, Z^{(3)}]\!]\|_F^2 + \beta \|\mathcal{E}\|_{1,1,2} \quad \text{s.t.}\ \mathcal{Y} = \mathcal{X} \times_3 A + \mathcal{E} + \mathcal{N},\ \mathcal{X} \geq 0,\ A \geq 0.$$

2.4. Optimization Procedure

The optimization problem in (14) could be solved by ADMM. We needed to introduce three auxiliary variables, $\mathcal{V}_1$, $\mathcal{V}_2$, and $\mathcal{V}_3$, and then transform it into the following equivalent problem:
$$\min_{\mathcal{X}, A, \mathcal{E}, Z^{(1)}, Z^{(2)}, Z^{(3)}, \mathcal{N}} \frac{1}{2}\|\mathcal{N}\|_F^2 + \lambda_1 \|\mathcal{V}_3\|_1 + \lambda_2 \|\mathcal{V}_2\|_1 + \frac{\lambda_3}{2}\|\mathcal{X} - [\![Z^{(1)}, Z^{(2)}, Z^{(3)}]\!]\|_F^2 + \beta \|\mathcal{E}\|_{1,1,2} \quad \text{s.t.}\ A \geq 0,\ \mathcal{X} \geq 0,\ \mathcal{X} = \mathcal{V}_1,\ \mathbf{H}\mathcal{V}_1 = \mathcal{V}_2,\ \mathcal{X} = \mathcal{V}_3,\ \mathcal{Y} = \mathcal{X} \times_3 A + \mathcal{E} + \mathcal{N}.$$
The problem in (15) could be solved by ALM [53] to minimize the following augmented Lagrangian function:
$$\begin{aligned} \mathcal{L}(\mathcal{X}, A, \mathcal{Q}, \mathcal{N}, \mathcal{E}, \mathcal{V}_1, \mathcal{V}_2, \mathcal{V}_3, \mathcal{D}_1, \mathcal{D}_2, \mathcal{D}_3, \mathcal{D}_4) = {}& \frac{1}{2}\|\mathcal{N}\|_F^2 + \lambda_1 \|\mathcal{V}_3\|_1 + \lambda_2 \|\mathcal{V}_2\|_1 + \frac{\lambda_3}{2}\|\mathcal{X} - \mathcal{Q}\|_F^2 + \beta \|\mathcal{E}\|_{1,1,2} \\ & + \frac{\mu}{2}\big(\|\mathcal{X} - \mathcal{V}_1 + \mathcal{D}_1\|_F^2 + \|\mathbf{H}\mathcal{V}_1 - \mathcal{V}_2 + \mathcal{D}_2\|_F^2 + \|\mathcal{X} - \mathcal{V}_3 + \mathcal{D}_3\|_F^2 + \|\mathcal{Y} - \mathcal{X} \times_3 A - \mathcal{E} - \mathcal{N} + \mathcal{D}_4\|_F^2\big) \\ & \text{s.t.}\ A \geq 0,\ \mathcal{X} \geq 0 \end{aligned}$$
where $\mathcal{D}_1$, $\mathcal{D}_2$, $\mathcal{D}_3$, and $\mathcal{D}_4$ are the Lagrange multipliers, and $\mu$ is the penalty parameter. The above problem could be solved by updating one variable while fixing the others. Specifically, in the $(t+1)$-th iteration, the problem could be divided into several subproblems, and the variables were updated as follows:
Update $\mathcal{X}$:
$$\mathcal{X}^{(t+1)} = \arg\min_{\mathcal{X}} \frac{\lambda_3}{2}\|\mathcal{X} - \mathcal{Q}^{(t)}\|_F^2 + \frac{\mu}{2}\big(\|\mathcal{X} - \mathcal{V}_1^{(t)} + \mathcal{D}_1^{(t)}\|_F^2 + \|\mathcal{X} - \mathcal{V}_3^{(t)} + \mathcal{D}_3^{(t)}\|_F^2 + \|\mathcal{Y} - \mathcal{X} \times_3 A^{(t)} - \mathcal{E}^{(t)} - \mathcal{N}^{(t)} + \mathcal{D}_4^{(t)}\|_F^2\big) \quad \text{s.t.}\ \mathcal{X} \geq 0$$
which, under the non-negativity constraint, could be solved with the following multiplicative update rule:
$$\mathcal{X}^{(t+1)} = \mathcal{X} \odot \frac{\mu(\mathcal{Y} - \mathcal{E}^{(t)} - \mathcal{N}^{(t)} + \mathcal{D}_4^{(t)}) \times_3 A^{(t)T} + \lambda_3 \mathcal{Q}^{(t)} + \mu(\mathcal{V}_1^{(t)} - \mathcal{D}_1^{(t)} + \mathcal{V}_3^{(t)} - \mathcal{D}_3^{(t)})}{\mu\, \mathcal{X} \times_3 (A^{(t)T} A^{(t)}) + (\lambda_3 + 2\mu)\mathcal{X}}$$
where $\odot$ and the fraction bar denote element-wise multiplication and division, respectively.
Update $A$:
$$A^{(t+1)} = \arg\min_{A} \|\mathcal{Y} - \mathcal{X}^{(t+1)} \times_3 A - \mathcal{E}^{(t)} - \mathcal{N}^{(t)} + \mathcal{D}_4^{(t)}\|_F^2 \quad \text{s.t.}\ A \geq 0.$$
Similarly to (18), we obtained the following solution:
$$A^{(t+1)} = A \odot \frac{(\mathcal{Y} - \mathcal{E}^{(t)} - \mathcal{N}^{(t)} + \mathcal{D}_4^{(t)})_{(3)}\, X_{(3)}^{(t+1)T}}{A\, X_{(3)}^{(t+1)}\, X_{(3)}^{(t+1)T}}.$$
Update $\mathcal{B}$:
We combined (18) and (20) and computed the result as follows:
$$\mathcal{B}^{(t+1)} = \mathcal{X}^{(t+1)} \times_3 A^{(t+1)}.$$
Update $\mathcal{Q}$:
$$\mathcal{Q}^{(t+1)} = \arg\min_{\mathcal{Q}} \frac{\lambda_3}{2}\|\mathcal{X}^{(t+1)} - \mathcal{Q}\|_F^2.$$
By substituting (8) into (22), the optimization problem became:
$$\{Z^{(1)}, Z^{(2)}, Z^{(3)}\} = \arg\min_{Z^{(1)}, Z^{(2)}, Z^{(3)}} \frac{\lambda_3}{2}\|\mathcal{X} - [\![Z^{(1)}, Z^{(2)}, Z^{(3)}]\!]\|_F^2$$
where the CP decomposition was solved by the alternating least-squares (ALS) algorithm [54]. Consequently, each factor matrix was calculated through a linear least-squares approach by fixing the other two matrices.
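A minimal sketch of CP-ALS for the subproblem in (23): each factor matrix is obtained in closed form via the Khatri-Rao product of the other two while they are held fixed (illustrative NumPy; the paper uses the ALS algorithm of [54], and this plain version omits the normalization and convergence checks a production implementation would include):

```python
import numpy as np

def unfold(X, n):
    """Mode-n unfolding: move mode n to the front and flatten the rest."""
    return np.moveaxis(X, n, 0).reshape(X.shape[n], -1)

def khatri_rao(A, B):
    """Column-wise Kronecker product of two factor matrices."""
    return np.einsum('ir,jr->ijr', A, B).reshape(-1, A.shape[1])

def cp_als(X, rank, n_iter=100, seed=0):
    """Plain CP-ALS: each factor solves a linear least-squares problem in turn."""
    rng = np.random.default_rng(seed)
    Z = [rng.standard_normal((s, rank)) for s in X.shape]
    for _ in range(n_iter):
        for n in range(3):
            others = [Z[m] for m in range(3) if m != n]
            KR = khatri_rao(others[0], others[1])        # matches the unfolding's column order
            G = (others[0].T @ others[0]) * (others[1].T @ others[1])
            Z[n] = unfold(X, n) @ KR @ np.linalg.pinv(G)
    return Z
```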
Update $\mathcal{N}$:
$$\mathcal{N}^{(t+1)} = \arg\min_{\mathcal{N}} \frac{1}{2}\|\mathcal{N}\|_F^2 + \frac{\mu}{2}\|\mathcal{Y} - \mathcal{X}^{(t+1)} \times_3 A^{(t+1)} - \mathcal{E}^{(t)} - \mathcal{N} + \mathcal{D}_4^{(t)}\|_F^2$$
for which the solution is:
$$\mathcal{N}^{(t+1)} = \frac{\mu\big(\mathcal{Y} - \mathcal{X}^{(t+1)} \times_3 A^{(t+1)} - \mathcal{E}^{(t)} + \mathcal{D}_4^{(t)}\big)}{\mu + 1}.$$
Update $\mathcal{V}_1$:
$$\mathcal{V}_1^{(t+1)} = \arg\min_{\mathcal{V}_1} \|\mathcal{X}^{(t+1)} - \mathcal{V}_1 + \mathcal{D}_1^{(t)}\|_F^2 + \|\mathbf{H}\mathcal{V}_1 - \mathcal{V}_2^{(t)} + \mathcal{D}_2^{(t)}\|_F^2.$$
Next, we solved the subproblem of $\mathcal{V}_1$:
$$(\mathbf{H}^T\mathbf{H} + \mathcal{I})\mathcal{V}_1^{(t+1)} = \mathcal{X}^{(t+1)} + \mathcal{D}_1^{(t)} + \mathbf{H}^T(\mathcal{V}_2^{(t)} - \mathcal{D}_2^{(t)})$$
where $\mathcal{I}$ is the identity tensor; $\mathbf{H}$ is a convolution, as defined in (12), that operates in the spatial domain; and $\mathbf{H}^T$ indicates the adjoint operator of $\mathbf{H}$. Therefore, $\mathcal{V}_1$ could be quickly computed by:
$$\mathcal{V}_1^{(t+1)} = \mathrm{ifft}\left(\frac{\mathrm{fft}\big(\mathcal{X}^{(t+1)} + \mathcal{D}_1^{(t)} + \mathbf{H}^T(\mathcal{V}_2^{(t)} - \mathcal{D}_2^{(t)})\big)}{1 + |\mathrm{fft}(\mathbf{H}_h)|^2 + |\mathrm{fft}(\mathbf{H}_v)|^2}\right)$$
where $\mathrm{fft}$ and $\mathrm{ifft}$ denote the fast Fourier transform [55] and its inverse, respectively.
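The FFT-based solve in (28) amounts to a pointwise division in the frequency domain, since a convolution operator with periodic boundaries is diagonalized by the DFT. A minimal NumPy sketch (the function name and the periodic-boundary assumption are ours):

```python
import numpy as np

def solve_v1(RHS, H, W):
    """Solve (H^T H + I) V = RHS slice-by-slice in the Fourier domain (periodic boundary)."""
    kh = np.zeros((H, W)); kh[0, 0] = -1.0; kh[0, 1] = 1.0   # horizontal difference kernel
    kv = np.zeros((H, W)); kv[0, 0] = -1.0; kv[1, 0] = 1.0   # vertical difference kernel
    denom = 1.0 + np.abs(np.fft.fft2(kh))**2 + np.abs(np.fft.fft2(kv))**2
    F = np.fft.fft2(RHS, axes=(0, 1)) / denom[:, :, None]
    return np.real(np.fft.ifft2(F, axes=(0, 1)))
```

One can verify the solution by applying the operator explicitly: circular forward differences followed by their adjoints, plus the identity, should reproduce the right-hand side.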
Update $\mathcal{V}_2$ and $\mathcal{V}_3$:
$$\mathcal{V}_2^{(t+1)} = \arg\min_{\mathcal{V}_2} \lambda_2 \|\mathcal{V}_2\|_1 + \frac{\mu}{2}\|\mathbf{H}\mathcal{V}_1^{(t+1)} - \mathcal{V}_2 + \mathcal{D}_2^{(t)}\|_F^2$$
which could be solved by a soft-thresholding function. The update rule for $\mathcal{V}_3$ was similar to that of $\mathcal{V}_2$.
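The proximal operator for these $\ell_1$ terms is the standard elementwise soft-thresholding function, sketched below (illustrative Python):

```python
import numpy as np

def soft_threshold(X, tau):
    """Proximal operator of tau * ||.||_1: shrink each entry toward zero by tau."""
    return np.sign(X) * np.maximum(np.abs(X) - tau, 0.0)
```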
Update $\mathcal{E}$:
$$\mathcal{E}^{(t+1)} = \arg\min_{\mathcal{E}} \frac{\mu}{2}\|\mathcal{Y} - \mathcal{X}^{(t+1)} \times_3 A^{(t+1)} - \mathcal{E} - \mathcal{N}^{(t+1)} + \mathcal{D}_4^{(t)}\|_F^2 + \beta \|\mathcal{E}\|_{1,1,2}.$$
Given $\mathcal{F}^{(t)} = \mathcal{Y} - \mathcal{X}^{(t+1)} \times_3 A^{(t+1)} - \mathcal{N}^{(t+1)} + \mathcal{D}_4^{(t)}$, the closed-form solution of (30) could be calculated as:
$$\mathcal{E}^{(t+1)}(h, w, :) = \max\left(1 - \frac{\beta}{\mu^{(t)}\|\mathcal{F}^{(t)}(h, w, :)\|_F},\ 0\right)\mathcal{F}^{(t)}(h, w, :)$$
where $h \in \{1, \ldots, H\}$ and $w \in \{1, \ldots, W\}$.
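The closed-form update in (31) is a group soft-thresholding applied to each spectral fiber; it can be sketched as follows (illustrative NumPy, vectorized over all pixels at once; the function name is ours):

```python
import numpy as np

def tube_shrink(F, tau):
    """Group soft-thresholding of every spectral fiber F[h, w, :] (prox of tau * ||.||_{1,1,2})."""
    norms = np.linalg.norm(F, axis=2, keepdims=True)
    scale = np.maximum(1.0 - tau / np.maximum(norms, 1e-12), 0.0)
    return scale * F
```

Fibers whose norm falls below the threshold are zeroed entirely, which is exactly what enforces the tube-wise sparsity of the anomaly tensor.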
Update $\mathcal{D}_1$, $\mathcal{D}_2$, $\mathcal{D}_3$, $\mathcal{D}_4$, and $\mu$:
$$\mathcal{D}_1^{(t+1)} = \mathcal{D}_1^{(t)} + \mathcal{X}^{(t+1)} - \mathcal{V}_1^{(t+1)},$$
$$\mathcal{D}_2^{(t+1)} = \mathcal{D}_2^{(t)} + \mathbf{H}\mathcal{V}_1^{(t+1)} - \mathcal{V}_2^{(t+1)},$$
$$\mathcal{D}_3^{(t+1)} = \mathcal{D}_3^{(t)} + \mathcal{X}^{(t+1)} - \mathcal{V}_3^{(t+1)},$$
$$\mathcal{D}_4^{(t+1)} = \mathcal{D}_4^{(t)} + \mathcal{Y} - \mathcal{X}^{(t+1)} \times_3 A^{(t+1)} - \mathcal{E}^{(t+1)} - \mathcal{N}^{(t+1)},$$
$$\mu^{(t+1)} = \min(\rho\mu^{(t)}, \mu_{\max}).$$
Finally, according to the anomaly $\mathcal{E}$, the AD map $T$ could be obtained by $T_{h,w} = \|\mathcal{E}(h, w, :)\|_F$, $h \in \{1, \ldots, H\}$, $w \in \{1, \ldots, W\}$. Then, we arrived at an augmented Lagrangian alternating direction method to solve the proposed ATLSS model.
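The final detection map simply scores each pixel by the $\ell_2$ norm of its anomaly fiber, e.g. (illustrative NumPy; the function name is ours):

```python
import numpy as np

def detection_map(E):
    """Per-pixel anomaly score: l2 norm of each tube-wise fiber of the anomaly tensor."""
    return np.linalg.norm(E, axis=2)
```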

2.5. Initialization and Termination Conditions

In the proposed ATLSS solver, the input terms were the observed HSI, the basis matrix $A$, the abundance tensor $\mathcal{X}$, and the number of endmembers $R$. It is worth noting that the initialization of $A$, $\mathcal{X}$, and $R$ influenced the AD results of the proposed method. Hence, we initialized the background $\mathcal{B}$ with the RX algorithm. Then, HySime [56] was employed to estimate the number of background endmembers $R$. Afterward, NMF [57] and the related update rules were utilized to iterate over $A$ and $\mathcal{X}$, respectively. The CP rank $K_X$ was determined by the algorithm referred to in [50] to provide an accurate rank estimation for the proposed method. In general, the iterative process continued until a maximum of 100 iterations was reached or the residual error fell below a tolerance.

2.6. Computational Complexity Analysis

Each iteration’s computational cost consisted of updating all the relevant factors. The time complexity of the CP decomposition (updating $\mathcal{Q}$) is $O(T_{\max} R^2 (W^2D^2 + H^2D^2 + H^2W^2))$; the time complexity of the FFT (updating $\mathcal{V}_1$) is $O(HWR\log(HW))$; the time complexity of the soft-threshold operator (updating $\mathcal{V}_2$, $\mathcal{V}_3$) is $O(HWR)$; the time complexity of the other matrix multiplication operations (updating $\mathcal{X}$, $A$, $\mathcal{B}$, $Z^{(1)}$, $Z^{(2)}$, $Z^{(3)}$, $\mathcal{N}$, $\mathcal{E}$, $\mathcal{D}_1$, $\mathcal{D}_2$, $\mathcal{D}_3$, and $\mathcal{D}_4$) is $O(HWR + DR + HK_X + 4HWD) = O(HW(D + R))$. Thus, the time complexity of the proposed method is $O(R^2(W^2D^2 + H^2D^2 + H^2W^2) + HWR\log(HW) + HW(D + 2R))$.

3. Experimental Results and Discussion

In this section, the proposed ATLSS algorithm is applied to five real HSI datasets for AD, and a detailed description of the experiments is provided below. All the experimental algorithms were performed in MATLAB 2016b on a computer with a 64-bit quad-core Intel Xeon 2.40 GHz CPU and 32.0 GB of RAM in Windows 7.

3.1. Experimental Datasets

(1)
AVIRIS airplane data: The AVIRIS airplane dataset was collected by AVIRIS in San Diego. One hundred and eighty-nine bands were retained after the water absorption regions, low-SNR bands, and bad bands (1–6, 33–35, 97, 107–113, 153–166, and 221–224) were removed. In Figure 1b, the subimage is named the AVIRIS-1 dataset, and it is located in the top-left corner of the AVIRIS image with a size of $150 \times 150 \times 186$. The anomalies contained in the image were three airplanes, and the ground truth is shown in Figure 1c. AVIRIS-2 was located in the center of San Diego, as shown in Figure 2, and it contained 120 anomaly pixels with a size of $100 \times 200 \times 186$.
(2)
HYDICE data: The real data were collected by the Hyperspectral Digital Imagery Collection Experiment (HYDICE) sensor, and the original image had a size of $307 \times 307 \times 210$. After removing the low-SNR and water-vapor absorption bands, 162 bands remained. An $80 \times 100$ subimage was cropped from the top right of the whole image, and the cars and roofs in the image scene were considered anomalies. The false-color image and the corresponding ground truth are shown in Figure 3.
(3)
Urban (ABU) data: The Urban dataset was collected with AVIRIS sensors and contained five images of five different scenes. In this paper, we selected the Urban-1 and Urban-2 images, captured at different locations on the Texas coast, to perform the experiment. The spatial size of the Urban-1 dataset was $100 \times 100$ with 204 spectral bands; its false-color image and ground truth are presented in Figure 4a,b. For the Urban-2 dataset, with a size of $100 \times 100 \times 207$, Figure 5a,b shows the corresponding false-color image and ground truth.

3.2. Evaluation Metrics and Parameter Settings

The AD performance of the proposed ATLSS method is demonstrated in this section. Table 1 shows the formulations of ATLSS and its four degradation models (Dm), annotated as Dm-1, Dm-2, Dm-3, and Dm-4, based on the linear spectral unmixing method. In addition, we chose to compare our models with RX [12], RPCA [22], LRASR [24], GTVLRR [25], GVAE [9], PTA [35], TRPCA [49], and LRTDTV [34]. RX is a statistics-based method in the HSI AD field, and it is the baseline in almost all reference articles. RPCA, LRASR, and GTVLRR are based on matrix modeling. PTA is a tensor-based method, but it relies on matrix operations. GVAE is a deep learning method. TRPCA and the proposed ATLSS algorithm are based on tensor modeling. We classified the RX, RPCA, LRASR, GTVLRR, and PTA methods as matrix-based operations, and TRPCA, LRTDTV, Dm-1, Dm-2, Dm-3, Dm-4, and ATLSS as tensor-based operations.
To evaluate the detector more effectively, the 3D ROC curve [58] was employed, which introduces the threshold $\tau$ in addition to the parameters $P_d$ and $P_f$ used in the 2D ROC curve [59] to specify a third dimension $(P_d, P_f, \tau)$. In addition, the 2D ROC curves of $(P_d, P_f)$ and $(P_f, \tau)$ were used to measure the AD result; an efficient detector has a $(P_d, P_f)$ AUC close to 1 but a $(P_f, \tau)$ AUC close to 0, and it is desired that the curves of $(P_d, P_f)$ and $(P_f, \tau)$ be close to the upper-left and lower-left corners of the coordinate axes, respectively. In addition, box and whisker plots were used to evaluate the separability between the anomaly and background. The boxes in the box and whisker plot reflect the distribution range of the detection values of the anomaly and background; that is, a larger gap between the anomaly and background boxes indicates better discrimination by the detector.
In the proposed ATLSS method, the number of endmembers R was first estimated in the initialization phase using the HySime algorithm, whereas the most significant task was to search for the best set of parameters λ 1 , λ 2 , λ 3 , β , which needed to be carefully identified. Specifically, λ 1 , λ 2 , λ 3 , and β each ranged within the set { 0.5 , 0.1 , 0.05 , 0.01 , 0.005 , 0.001 , 0.0005 , 0.0001 } , while μ was selected from { 0.01 , 0.001 } . We carefully determined the optimal parameters for the five datasets and for all the algorithms. To demonstrate the contribution of the different parameters to AD, we took the example of the AVIRIS−2 airplane data while changing the parameters λ 1 , λ 2 , λ 3 , and β to illustrate the tuning procedure in detail. Specifically, we fixed λ 2 , λ 3 , and β and traversed λ 1 until a favorable result was observed; then, we alternated the other parameters in the same manner. We applied ATLSS with different parameter settings to the AVIRIS airplane datasets (including AVIRIS−1 and AVIRIS−2), the HYDICE dataset, and the Urban (ABU) datasets (including Urban−1 and Urban−2) in turn to obtain detection results under the optimal parameter combination.
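The fix-one-traverse-one tuning procedure described above amounts to a coordinate-wise search over the shared parameter grid. A minimal sketch, where `score_fn` is a hypothetical callable that runs the detector with the given parameters and returns the AUC of ( P d , P f ) to be maximized (the helper names are ours, not the paper's):

```python
# Shared grid for lambda1, lambda2, lambda3, and beta, as in the paper.
GRID = [0.5, 0.1, 0.05, 0.01, 0.005, 0.001, 0.0005, 0.0001]

def coordinate_search(score_fn, init, grid=GRID, sweeps=2):
    """Tune (lambda1, lambda2, lambda3, beta) one at a time, holding the rest fixed.

    score_fn(params) -> value to maximize (e.g., AUC(Pd,Pf) of the detector).
    init: starting parameter list; sweeps: full passes over all coordinates.
    """
    params = list(init)
    for _ in range(sweeps):
        for i in range(len(params)):
            best_v, best_s = params[i], score_fn(params)
            for v in grid:
                trial = params.copy()
                trial[i] = v
                s = score_fn(trial)
                if s > best_s:  # keep the grid value that improves the score
                    best_v, best_s = v, s
            params[i] = best_v
    return params
```

With 8 grid values, 4 parameters, and 2 sweeps, this costs at most 64 detector runs, far fewer than the 4096 runs of an exhaustive grid search, at the price of possibly missing jointly optimal combinations.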

3.3. Detection Performance

We investigated the contribution of the regularization terms, including sparsity regularization, TV regularization, and CP regularization, in the proposed ATLSS method with regard to the accuracy of AD. We refer again to Table 1 for the ATLSS, Dm-1, Dm-2, Dm-3, and Dm-4 models. Furthermore, we compared the performance of ATLSS and its degradation models Dm-1, Dm-2, Dm-3, and Dm-4 with that of RX, RPCA, LRASR, GTVLRR, TRPCA, GVAE, PTA, and LRTDTV. The 2D plots comparing the detection results of the algorithms on the five datasets are shown sequentially in Figure 6, Figure 7, Figure 8, Figure 9 and Figure 10. Table 2 shows the AUC values of ( P d , P f ) / ( P f , τ ) obtained by the different AD algorithms on the five real datasets. Each algorithm was run ten times on each dataset to avoid randomness, and the average AUC values were used. Figure 11 and Figure 12 show the corresponding performance curves and box and whisker plots of the different methods under comparison for the five real datasets. We also took the AVIRIS−2 dataset as an example to illustrate the superior performance of the proposed method in detail.
For Dm-1, with the parameter settings λ 1 = 0.01 and β = 0.1 , very good results were achieved on all five datasets.

3.3.1. AVIRIS−2

For the AVIRIS−2 dataset, the estimated number of endmembers for the background was two, as shown in Figure 2c, and the estimated rank was K = 18 . In the following two paragraphs, we study (1.a) the effects of the regularization terms and (1.b) the comparison of different anomaly detectors.
(1.a) Effects of the Regularization Terms: After a large number of parameter traversals, the trade-off parameters of the ATLSS algorithm and Dm-2 both achieved their optimal performance, i.e., λ 1 = 0.5 , λ 2 = 0.005 , λ 3 = 0.005 , and β = 0.005 for ATLSS and λ 1 = 0.1 , λ 3 = 0.5 , and β = 0.001 for Dm-2. Figure 13 shows the detection accuracy of ATLSS on the AVIRIS−2 dataset when one parameter varied within a predefined range and the other three trade-off parameters were fixed. In Figure 13a, it can be observed that when λ 1 increased from 0.0001 to 0.5 , the curve showed an upward trend, and the highest detection result was obtained at λ 1 = 0.5 , which indicated a positive effect of controlling the sparsity of the abundance tensor. Table 2 and Figure 7 corroborate this through quantitative analysis and visual qualitative analysis, respectively. The peak of the curve in Figure 13b was located at λ 2 = 0.005 when λ 2 was in the interval [ 0.0001 , 0.005 ] . The increasing curve implied that TV had a positive effect on suppressing noise, while λ 2 > 0.005 imposed an excessive smoothness constraint on the abundance tensor, leading to a dramatic decline in the detection results. λ 3 was imposed on Q to control its low rankness, and Figure 13c reveals that setting λ 3 = 0.005 balanced the low-rank regularization so that the most important information was retained while small-scale details were still captured. The curve first steadily increased and then fell as it deviated from the optimal parameter value, implying that overly large parameter values enforced strict low rankness in X and thus resulted in a significant residual loss in the reconstructed abundance tensor. In Figure 13d, one can clearly see that the curve was stable when β ∈ [ 0.005 , 0.1 ] , exhibiting superior performance. Outside this interval, the detection result was sensitive to changes in the parameter, with the detection performance declining steeply.
When β was set to 0.005 , the best performance was achieved.
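The low-rank prior discussed above is imposed through a CP decomposition. As a self-contained illustration of that building block (not the authors' ADMM solver), a rank-K CP factorization of a third-order tensor can be computed by alternating least squares; the helper names and the HOSVD-style initialization below are our own choices:

```python
import numpy as np

def unfold(T, mode):
    """Mode-n matricization: move `mode` to the front and flatten the rest."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def khatri_rao(U, V):
    """Column-wise Kronecker product of U (I x R) and V (J x R) -> (I*J) x R."""
    I, R = U.shape
    J = V.shape[0]
    return (U[:, None, :] * V[None, :, :]).reshape(I * J, R)

def cp_als(T, R, iters=200):
    """Rank-R CP decomposition of a 3-way tensor by alternating least squares."""
    # HOSVD-style init: leading left singular vectors of each unfolding
    # (assumes R does not exceed any mode dimension).
    A, B, C = (np.linalg.svd(unfold(T, m), full_matrices=False)[0][:, :R]
               for m in range(3))
    for _ in range(iters):
        # Each update is the exact least-squares solution for one factor.
        A = unfold(T, 0) @ khatri_rao(B, C) @ np.linalg.pinv((B.T @ B) * (C.T @ C))
        B = unfold(T, 1) @ khatri_rao(A, C) @ np.linalg.pinv((A.T @ A) * (C.T @ C))
        C = unfold(T, 2) @ khatri_rao(A, B) @ np.linalg.pinv((A.T @ A) * (B.T @ B))
    return A, B, C
```

On an exactly rank-R tensor this recovers the factors up to scaling and permutation; in the ATLSS setting the analogous step regularizes the abundance-related tensor toward such a low-rank form rather than fitting it exactly.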
(1.b) Comparison with Different Anomaly Detectors: Table 2 demonstrates the AUC values of ( P d , P f ) and ( P f , τ ) for the different methods under comparison on the five datasets. In Table 2, the anomaly detection accuracy ( P d , P f ) of ATLSS obtained the highest score among all the methods under comparison when τ = 0.087 , as illustrated by the ROC curve in the upper-left corner, which also shows the efficiency of the model. The ( P f , τ ) score was lower than those of the others, and the ( P f , τ ) ROC curve is closest to the lower-left corner, as shown in Figure 11b. ATLSS performed well on the AVIRIS−2 dataset and achieved optimal AD accuracy and a low false-alarm rate. The proposed ATLSS model and its degradation models Dm-1, Dm-2, Dm-3, and Dm-4 better separated the anomaly and background, as shown in Figure 12b. The 2D plots of the detection results are presented in Figure 7i, showing that the low-rank structure imposed by the regularization constraint (i.e., CP decomposition on the introduced low-rank prior term) granted the abundance tensor adequate flexibility to model fine-scale spatial details while preserving most of its spatial distribution. The exploration of the background tensor enabled the background to be suppressed more efficiently. Moreover, TV regularization, which smoothed the estimated abundance map, effectively suppressed Gaussian noise. When compared with the degradation models Dm-1, Dm-2, Dm-3, and Dm-4, ATLSS demonstrated better background suppression, as shown in Figure 7, and the best AD performance, as shown in Table 2. In Figure 11b, in addition to the curves of ( P d , P f ) and ( P f , τ ) , the 3D ROC curve also comprehensively shows the performance of the proposed ATLSS method.
The methods that assumed the background to have a low-rank property and the anomaly a sparse property performed better than the RX method in terms of the AUC value and false-alarm rate, as can be observed from Table 2 and Figure 11b. The 2D plots in Figure 7 also reveal that the background and Gaussian noise were both effectively suppressed; moreover, the anomalous airplanes were more clearly detected by ATLSS than by RX. The GTVLRR, PTA, Dm-2, Dm-3, Dm-4, LRTDTV, and ATLSS methods utilized TV regularization to smooth away the noise signature while strengthening the outlines of the airplanes. One can observe that GTVLRR performed well in comparison to the other methods. However, the proposed ATLSS method based on tensor decomposition was outstanding compared to the other methods in terms of ( P d , P f ) / ( P f , τ ) and the power of anomaly and background separation.
Table 3 shows the computational times of the ten algorithms for the AVIRIS−2 dataset. We observed that the running time of ATLSS was longer than those of Dm-1, Dm-2, Dm-3, and Dm-4, increased by the added regularization terms, but the AD performance improved. The execution time of ATLSS was shorter than those of GTVLRR and TRPCA, while it was higher than that of LRTDTV; however, the detection result of ATLSS was superior to that of LRTDTV. Compared to RX, RPCA, LRASR, and PTA, the time cost of ATLSS was much higher, because the former are based on matrix operations. As shown in Table 3, the deep learning method GVAE had a training time/test time of 80 × 2000 / 25.56 , presenting not only high time demands but also a far inferior experimental performance compared to our proposed ATLSS model.
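Runtime comparisons of this kind can be reproduced with a small harness that runs each method several times and averages, in the spirit of the ten-run protocol used for the AUC values. The sketch below is our own; `detector` stands in for any of the compared methods as a hypothetical callable:

```python
import time
import statistics

def benchmark(detector, data, runs=10):
    """Run `detector` on `data` several times; return mean runtime and outputs.

    Uses time.perf_counter for wall-clock timing, which is monotonic and
    high-resolution; averaging over runs smooths out system jitter.
    """
    times, outputs = [], []
    for _ in range(runs):
        t0 = time.perf_counter()
        outputs.append(detector(data))
        times.append(time.perf_counter() - t0)
    return statistics.mean(times), outputs
```

The same loop naturally supports the averaged-AUC protocol: replace the timing with an AUC computation on each run's output and average the scores instead.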

3.3.2. AVIRIS−1

For the AVIRIS−1 dataset, the estimated number of endmembers for the background was three, as shown in Figure 1d, and the estimated rank was K = 46 . The best performance was achieved when we set the four parameters as ( 0.1 , 0.1 , 0.1 , 0.001 ) .
When comparing Dm-1, Dm-2, Dm-3, Dm-4, and ATLSS, we observed that the anomaly included three planes, which are clearly identified in Figure 6. In Table 2 and Figure 11a, one can observe that the value of ( P d , P f ) was the largest for ATLSS, and the curve is in the upper-left corner. The ( P f , τ ) value of ATLSS was also relatively low. The superior performance of ATLSS demonstrated that imposing low-rank tensor regularization and TV regularization on the abundance tensor efficiently suppressed the background and smoothed away the noise. Among all the compared methods, as shown in Table 2, the best detection result was also achieved by ATLSS. One can observe that even though RX had the lowest false-alarm rate, the AD accuracy and the separation of the anomaly and background shown in Figure 11a and Figure 12a demonstrate that its performance was not very good. Notably, our proposed method was superior to the tensor-based PTA method on the AVIRIS−1 dataset for all measurements. Figure 11a shows that the ROC curve of ATLSS was closer to the top left than those of all the other methods, and Table 2 further proves the remarkable performance of ATLSS.

3.3.3. HYDICE

For the HYDICE urban data, the estimated number of endmembers was R = 4 , as shown in Figure 3d, and the estimated CP rank was K = 67 .
Compared to Dm-1, Dm-2, Dm-3, and Dm-4, as shown in Table 2, the ATLSS method showed a competitive performance when we set the parameters as ( 0.1 , 0.1 , 0.01 , 0.1 ) : the AD AUC value was the highest, and the false-alarm rate was the lowest. ATLSS was also powerful in separating anomalies and the background, as shown in Figure 12c. In Figure 8i, the anomaly was accurately detected; furthermore, the background and the noise were both well-suppressed, a benefit of the tensor low-rank regularization and TV smoothness imposed on the background abundance tensor. Compared with PTA, the ( P d , P f ) / ( P f , τ ) values of ATLSS shown in Table 2 reflect the fact that abundance maps possess more distinctive features than raw data, which is beneficial for identifying anomalies within a background and achieving outstanding performance. The qualitative and quantitative results are presented in Table 2 and Figure 11c, as well as in Figure 12c. These results were enabled by the background low-rank decomposition, which allowed a more accurate background reconstruction.

3.3.4. Urban (ABU-1)

For the ABU Urban−1 dataset, the estimated number of endmembers was two, as shown in Figure 4c, and the estimated CP rank was K = 25 .
In contrast with Dm-1, Dm-2, Dm-3, and Dm-4, the ATLSS background model imposed a CP regularization constraint and TV regularization on the abundance tensor, and an excellent AD performance was achieved when the parameters were ( 0.1 , 0.05 , 0.5 , 0.001 ) , as shown in Table 2. A higher detection threshold would lead to a few anomaly pixels being regarded as background pixels, resulting in a lower false-alarm rate. Thus, as shown in Table 2 and Figure 11d, the ( P d , P f ) values of Dm-1, Dm-2, Dm-3, Dm-4, and ATLSS increased gradually, while the ( P f , τ ) values in Table 2 also increased. Furthermore, one can observe that the ( P f , τ ) of RX was the lowest and its ( P d , P f ) the highest among all the compared methods. However, Figure 12d shows that ATLSS and its degradation models achieved superior separation between the anomaly and background. In general, the evaluation metrics mentioned above validated that the proposed ATLSS method outperformed the other methods in both qualitative and quantitative aspects.

3.3.5. Urban (ABU-2)

For the ABU Urban−2 dataset, the estimated number of endmembers was three, as shown in Figure 5c, and the estimated CP rank was K = 25 .
We obtained the best performance when we set the parameters as ( 0.1 , 0.05 , 0.1 , 0.5 ) . As shown in Table 2, the detection result ( P d , P f ) of ATLSS was higher than those of Dm-1, Dm-2, Dm-3, and Dm-4. In contrast, the ( P f , τ ) value of ATLSS was slightly higher than those of the other four degradation models, since the model yielded much higher detection values, which caused a few anomaly pixels to be detected as background pixels. However, its efficient separation between the anomaly and background is shown in Figure 12e, demonstrating that it is still a competitive AD method. For the Urban (ABU-2) dataset, the results of the tensor-based PTA and ATLSS methods were evenly matched, as shown in Table 2 and Figure 11 and Figure 12. However, it is worth noting that the ( P f , τ ) value of ATLSS was lower than that of PTA, which demonstrated that the abundance maps possessed more distinctive features than the original data and enabled more accurate discrimination between the anomaly and the background; hence, the model achieved outstanding performance.
We applied the proposed ATLSS method and conducted extensive comparison experiments on the five datasets, and we summarize the advantages of the proposed method as follows:
(1)
Effectiveness: The proposed ATLSS method decomposed the background into an abundance tensor and endmember matrix. The structural characteristics of the abundance tensor were fully explored, i.e., the local spatial continuity and the high abundance vector correlations, which contributed to reconstructing a more accurate abundance tensor for the background. The proposed ATLSS model performed excellently compared to its degradation models Dm-1, Dm-2, Dm-3, and Dm-4.
(2)
Performance: Eight comparison algorithms were presented to sufficiently demonstrate the performance of the proposed method. Compared to RX, ATLSS had a more accurate AD performance thanks to the low-rank and sparse assumptions. The deep learning method GVAE had a high training time while achieving a moderate performance when compared with ATLSS. Compared to the tensor-based methods PTA, TRPCA, and LRTDTV, the proposed ATLSS method exploited the abundance tensor’s physical meaning, which provided more distinctive features than the raw data, achieving outstanding AD performance.

4. Conclusions

In this paper, we proposed a novel tensor-based method for hyperspectral AD that is powerful in enhancing spatial structure information while preserving the spatial and spectral information. Since the low spatial resolution of HSIs causes mixed pixels to rely heavily on spectral information, the background was decomposed into the mode-3 product of an abundance tensor and an endmember matrix to enhance the spatial structure information and the differentiation between the target and background. For the background part, considering the distinctive features of the background's abundance maps, we characterized them by tensor regularization and imposed low rankness, smoothness, and sparsity. In addition, for the anomaly part, we utilized the ℓ 1 , 1 , 2 -norm to characterize the tube-wise sparsity, since the anomalies account for a small portion of the scene. Additionally, Gaussian noise was incorporated into the model to prevent noise from being confused with the anomalies. The experimental results obtained using the five datasets demonstrated that the proposed method achieved an excellent AD performance compared to the other methods.
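As a compact illustration of the two model ingredients summarized above, the following numpy sketch shows the mode-3 product that reconstructs the background from an abundance tensor and an endmember matrix, and the ℓ1,1,2-norm that measures the tube-wise sparsity of the anomaly part. The H × W × B array layout and the function names are our own assumptions, not code from the paper:

```python
import numpy as np

def mode3_product(A, E):
    """Mode-3 product of an abundance tensor A (H x W x R) with an endmember
    matrix E (B x R): returns the reconstructed background tensor (H x W x B)."""
    return np.einsum('hwr,br->hwb', A, E)

def l112_norm(S):
    """l_{1,1,2} norm of an anomaly tensor S (H x W x B): the sum over all
    spatial positions of the l2 norm of each spectral tube S[h, w, :].
    Penalizing this promotes tube-wise (whole-pixel) sparsity."""
    return float(np.sqrt((S ** 2).sum(axis=2)).sum())
```

For each pixel, the mode-3 product mixes the R endmember spectra with the pixel's abundance vector, which is exactly the linear mixing model applied across the scene; the ℓ1,1,2 penalty then drives entire spectral tubes of the anomaly part to zero rather than individual band entries.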

Author Contributions

Formal analysis, W.S.; Methodology, W.S., M.J. and Z.W. (Zebin Wu); Supervision, Z.W. (Zebin Wu), Z.W. (Zhihui Wei) and Y.X.; Validation, W.S., Z.W. (Zebin Wu) and M.D.M.; Writing—original draft, W.S. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported in part by the National Natural Science Foundation of China under Grant 61772274, Grant 62071233, and Grant 61701238, in part by the Jiangsu Provincial Natural Science Foundation of China under Grant BK20211570, Grant BK20180018, and Grant BK20170858, in part by the Fundamental Research Funds for the Central Universities under Grant 30917015104, Grant 30919011103, Grant 30919011402, and Grant 30921011209, and in part by the China Postdoctoral Science Foundation under Grant 2017M611814.

Data Availability Statement

The hyperspectral datasets used in this study are available at http://xudongkang.weebly.com/ (accessed on 28 January 2023).

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Bioucas-Dias, J.M.; Plaza, A.; Camps-Valls, G.; Scheunders, P.; Nasrabadi, N.; Chanussot, J. Hyperspectral Remote Sensing Data Analysis and Future Challenges. IEEE Geosci. Remote Sens. Mag. 2013, 1, 6–36. [Google Scholar] [CrossRef] [Green Version]
  2. Yokoya, N.; Grohnfeldt, C.; Chanussot, J. Hyperspectral and Multispectral Data Fusion: A comparative review of the recent literature. IEEE Geosci. Remote Sens. Mag. 2017, 5, 29–56. [Google Scholar] [CrossRef]
  3. Xiong, F.; Qian, Y.; Zhou, J.; Tang, Y.Y. Hyperspectral Unmixing via Total Variation Regularized Nonnegative Tensor Factorization. IEEE Trans. Geosci. Remote Sens. 2019, 57, 2341–2357. [Google Scholar] [CrossRef]
  4. Jouni, M.; Dalla Mura, M.; Comon, P. Hyperspectral image classification based on mathematical morphology and tensor decomposition. Math. Morphol.-Theory Appl. 2020, 4, 1–30. [Google Scholar] [CrossRef] [Green Version]
  5. Xi, B.; Li, J.; Li, Y.; Song, R.; Xiao, Y.; Shi, Y.; Du, Q. Multi-Direction Networks With Attentional Spectral Prior for Hyperspectral Image Classification. IEEE Trans. Geosci. Remote Sens. 2022, 60, 1–15. [Google Scholar] [CrossRef]
  6. Xie, J.; He, N.; Fang, L.; Ghamisi, P. Multiscale Densely-Connected Fusion Networks for Hyperspectral Images Classification. IEEE Trans. Circuits Syst. Video Technol. 2021, 31, 246–259. [Google Scholar] [CrossRef]
  7. Nasrabadi, N. Hyperspectral Target Detection: An Overview of Current and Future Challenges. Signal Process. Mag. IEEE 2014, 31, 34–44. [Google Scholar] [CrossRef]
  8. Zhang, X.; Ma, X.; Huyan, N.; Gu, J.; Tang, X.; Jiao, L. Spectral-Difference Low-Rank Representation Learning for Hyperspectral Anomaly Detection. IEEE Trans. Geosci. Remote Sens. 2021, 59, 10364–10377. [Google Scholar] [CrossRef]
  9. Wei, J.; Zhang, J.; Xu, Y.; Xu, L.; Wu, Z.; Wei, Z. Hyperspectral Anomaly Detection Based On Graph Regularized Variational Autoencoder. IEEE Geosci. Remote Sens. Lett. 2022, 19, 1–5. [Google Scholar] [CrossRef]
  10. Zhang, J.; Xu, Y.; Zhan, T.; Wu, Z.; Wei, Z. Anomaly detection in hyperspectral image using 3D-convolutional variational autoencoder. In Proceedings of the 2021 IEEE International Geoscience and Remote Sensing Symposium IGARSS, Brussels, Belgium, 11–16 July 2021; pp. 2512–2515. [Google Scholar]
  11. Zhang, L.; Cheng, B. Transferred CNN Based on Tensor for Hyperspectral Anomaly Detection. IEEE Geosci. Remote Sens. Lett. 2020, 17, 2115–2119. [Google Scholar] [CrossRef]
  12. Reed, I.S.; Yu, X. Adaptive multiple-band CFAR detection of an optical pattern with unknown spectral distribution. IEEE Trans. Acoust. Speech Signal Process. 1990, 38, 1760–1770. [Google Scholar] [CrossRef]
  13. Molero, J.M.; Garzon, E.M.; Garcia, I.; Plaza, A. Analysis and optimizations of global and local versions of the RX algorithm for anomaly detection in hyperspectral data. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2013, 6, 801–814. [Google Scholar] [CrossRef]
  14. Borghys, D.; Kåsen, I.; Achard, V.; Perneel, C. Comparative evaluation of hyperspectral anomaly detectors in different types of background. In Proceedings of the Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XVIII, Baltimore, MD, USA, 23–27 April 2012; Volume 8390, p. 83902J. [Google Scholar]
  15. Kwon, H.; Nasrabadi, N.M. Kernel RX-algorithm: A nonlinear anomaly detector for hyperspectral imagery. IEEE Trans. Geosci. Remote Sens. 2005, 43, 388–397. [Google Scholar] [CrossRef]
  16. Gu, Y.; Liu, Y.; Zhang, Y. A selective KPCA algorithm based on high-order statistics for anomaly detection in hyperspectral imagery. IEEE Geosci. Remote Sens. Lett. 2008, 5, 43–47. [Google Scholar] [CrossRef]
  17. Guo, Q.; Zhang, B.; Ran, Q.; Gao, L.; Li, J.; Plaza, A. Weighted-RXD and linear filter-based RXD: Improving background statistics estimation for anomaly detection in hyperspectral imagery. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2014, 7, 2351–2366. [Google Scholar] [CrossRef]
  18. Liu, J.; Hou, Z.; Li, W.; Tao, R.; Orlando, D.; Li, H. Multipixel anomaly detection with unknown patterns for hyperspectral imagery. IEEE Trans. Neural Netw. Learn. Syst. 2021, 33, 5557–5567. [Google Scholar] [CrossRef]
  19. Li, J.; Zhang, H.; Zhang, L.; Ma, L. Hyperspectral anomaly detection by the use of background joint sparse representation. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2015, 8, 2523–2533. [Google Scholar] [CrossRef]
  20. Li, W.; Du, Q. Collaborative representation for hyperspectral anomaly detection. IEEE Trans. Geosci. Remote Sens. 2014, 53, 1463–1474. [Google Scholar] [CrossRef]
  21. Hou, Z.; Li, W.; Tao, R.; Ma, P.; Shi, W. Collaborative representation with background purification and saliency weight for hyperspectral anomaly detection. Sci. China Inf. Sci. 2022, 65, 1–12. [Google Scholar] [CrossRef]
  22. Candès, E.J.; Li, X.; Ma, Y.; Wright, J. Robust principal component analysis? J. ACM 2011, 58, 1–37. [Google Scholar] [CrossRef]
  23. Yao, W.; Li, L.; Ni, H.; Li, W.; Tao, R. Hyperspectral Anomaly Detection Based on Improved RPCA with Non-Convex Regularization. Remote Sens. 2022, 14, 1343. [Google Scholar] [CrossRef]
  24. Xu, Y.; Wu, Z.; Li, J.; Plaza, A.; Wei, Z. Anomaly detection in hyperspectral images based on low-rank and sparse representation. IEEE Trans. Geosci. Remote Sens. 2015, 54, 1990–2000. [Google Scholar] [CrossRef]
  25. Cheng, T.; Wang, B. Graph and total variation regularized low-rank representation for hyperspectral anomaly detection. IEEE Trans. Geosci. Remote Sens. 2019, 58, 391–406. [Google Scholar] [CrossRef]
  26. Qu, Y.; Wang, W.; Guo, R.; Ayhan, B.; Kwan, C.; Vance, S.; Qi, H. Hyperspectral anomaly detection through spectral unmixing and dictionary-based low-rank decomposition. IEEE Trans. Geosci. Remote Sens. 2018, 56, 4391–4405. [Google Scholar] [CrossRef]
  27. Song, X.; Zou, L.; Wu, L. Detection of subpixel targets on hyperspectral remote sensing imagery based on background endmember extraction. IEEE Trans. Geosci. Remote Sens. 2020, 59, 2365–2377. [Google Scholar] [CrossRef]
  28. Zhao, C.; Li, C.; Feng, S.; Jia, X. Enhanced Total Variation Regularized Representation Model With Endmember Background Dictionary for Hyperspectral Anomaly Detection. IEEE Trans. Geosci. Remote Sens. 2022, 60, 1–12. [Google Scholar] [CrossRef]
  29. Manolakis, D.; Siracusa, C.; Shaw, G. Hyperspectral subpixel target detection using the linear mixing model. IEEE Trans. Geosci. Remote Sens. 2001, 39, 1392–1409. [Google Scholar]
  30. Wang, N.; Du, B.; Zhang, L. An endmember dissimilarity constrained non-negative matrix factorization method for hyperspectral unmixing. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2013, 6, 554–569. [Google Scholar] [CrossRef]
  31. Imbiriba, T.; Borsoi, R.A.; Bermudez, J.C.M. A low-rank tensor regularization strategy for hyperspectral unmixing. In Proceedings of the 2018 IEEE Statistical Signal Processing Workshop (SSP), Freiburg, Germany, 10–13 June 2018; pp. 373–377. [Google Scholar]
  32. Hyperspectral Image Restoration via Global L1−2 Spatial–Spectral Total Variation Regularized Local Low-Rank Tensor Recovery. IEEE Trans. Geosci. Remote Sens. 2021, 59, 3309–3325. [CrossRef]
  33. Xue, J.; Zhao, Y.; Liao, W.; Chan, C.W. Nonlocal Low-Rank Regularized Tensor Decomposition for Hyperspectral Image Denoising. IEEE Trans. Geosci. Remote Sens. 2019, 57, 5174–5189. [Google Scholar] [CrossRef]
  34. Wang, Y.; Peng, J.; Zhao, Q.; Meng, D.; Leung, Y.; Zhao, X.L. Hyperspectral Image Restoration via Total Variation Regularized Low-rank Tensor Decomposition. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2017, 11, 1227–1243. [Google Scholar] [CrossRef] [Green Version]
  35. Li, L.; Li, W.; Qu, Y.; Zhao, C.; Tao, R.; Du, Q. Prior-Based Tensor Approximation for Anomaly Detection in Hyperspectral Imagery. IEEE Trans. Neural Netw. Learn. Syst. 2020, 33, 1037–1050. [Google Scholar] [CrossRef] [PubMed]
  36. Song, S.; Zhou, H.; Gu, L.; Yang, Y.; Yang, Y. Hyperspectral Anomaly Detection via Tensor-Based Endmember Extraction and Low-Rank Decomposition. IEEE Geosci. Remote Sens. Lett. 2019, 17, 1772–1776. [Google Scholar] [CrossRef]
  37. Wang, J.; Xia, Y.; Zhang, Y. Anomaly detection of hyperspectral image via tensor completion. IEEE Geosci. Remote Sens. Lett. 2020, 18, 1099–1103. [Google Scholar] [CrossRef]
  38. Kolda, T.G.; Bader, B.W. Tensor decompositions and applications. SIAM Rev. 2009, 51, 455–500. [Google Scholar] [CrossRef]
  39. Li, S.; Dian, R.; Fang, L.; Bioucas-Dias, J.M. Fusing Hyperspectral and Multispectral Images via Coupled Sparse Tensor Factorization. IEEE Trans. Image Process. 2018, 27, 4118–4130. [Google Scholar] [CrossRef]
  40. Xue, J.; Zhao, Y.; Huang, S.; Liao, W.; Kong, S.G. Multilayer Sparsity-Based Tensor Decomposition for Low-Rank Tensor Completion. IEEE Trans. Neural Netw. Learn. Syst. 2021, 33, 6916–6930. [Google Scholar] [CrossRef]
  41. Xue, J.; Zhao, Y.; Bu, Y.; Chan, J.C.W.; Kong, S.G. When Laplacian Scale Mixture Meets Three-Layer Transform: A Parametric Tensor Sparsity for Tensor Completion. IEEE Trans. Cybern. 2022, 52, 13887–13901. [Google Scholar] [CrossRef]
  42. Peng, J.; Wang, H.; Cao, X.; Liu, X.; Rui, X.; Meng, D. Fast Noise Removal in Hyperspectral Images via Representative Coefficient Total Variation. IEEE Trans. Geosci. Remote Sens. 2022, 60, 1–17. [Google Scholar] [CrossRef]
  43. Qian, Y.; Xiong, F.; Zeng, S.; Zhou, J.; Tang, Y.Y. Matrix-Vector Nonnegative Tensor Factorization for Blind Unmixing of Hyperspectral Imagery. IEEE Trans. Geosci. Remote Sens. 2017, 55, 1776–1792. [Google Scholar] [CrossRef] [Green Version]
  44. Zhang, Y.; Du, B.; Zhang, L.; Wang, S. A low-rank and sparse matrix decomposition-based Mahalanobis distance method for hyperspectral anomaly detection. IEEE Trans. Geosci. Remote Sens. 2015, 54, 1376–1389. [Google Scholar] [CrossRef]
  45. Cheng, T.; Wang, B. Total Variation and Sparsity Regularized Decomposition Model With Union Dictionary for Hyperspectral Anomaly Detection. IEEE Trans. Geosci. Remote Sens. 2021, 59, 1472–1486. [Google Scholar] [CrossRef]
  46. Sun, W.; Liu, C.; Li, J.; Lai, Y.M.; Li, W. Low-rank and sparse matrix decomposition-based anomaly detection for hyperspectral imagery. J. Appl. Remote Sens. 2014, 8, 083641. [Google Scholar] [CrossRef]
  47. Li, L.; Li, W.; Du, Q.; Tao, R. Low-Rank and Sparse Decomposition With Mixture of Gaussian for Hyperspectral Anomaly Detection. IEEE Trans. Cybern. 2021, 51, 4363–4372. [Google Scholar] [CrossRef]
  48. Nie, F.; Huang, H.; Cai, X.; Ding, C. Efficient and robust feature selection via joint ℓ2,1-norms minimization. Adv. Neural Inf. Process. Syst. 2010, 23, 1–9. [Google Scholar]
  49. Xu, Y.; Wu, Z.; Chanussot, J.; Wei, Z. Joint reconstruction and anomaly detection from compressive hyperspectral images using Mahalanobis distance-regularized tensor RPCA. IEEE Trans. Geosci. Remote Sens. 2018, 56, 2919–2930. [Google Scholar] [CrossRef]
  50. Imbiriba, T.; Borsoi, R.A.; Bermudez, J.C.M. Low-rank tensor modeling for hyperspectral unmixing accounting for spectral variability. IEEE Trans. Geosci. Remote Sens. 2019, 58, 1833–1842. [Google Scholar] [CrossRef] [Green Version]
  51. Chambolle, A. An algorithm for total variation minimization and applications. J. Math. Imaging Vis. 2004, 20, 89–97. [Google Scholar]
  52. He, W.; Zhang, H.; Zhang, L. Total variation regularized reweighted sparse nonnegative matrix factorization for hyperspectral unmixing. IEEE Trans. Geosci. Remote Sens. 2017, 55, 3909–3921. [Google Scholar] [CrossRef]
  53. Lin, Z.; Chen, M.; Ma, Y. The augmented lagrange multiplier method for exact recovery of corrupted low-rank matrices. arXiv 2010, arXiv:1009.5055. [Google Scholar]
  54. Mitchell, D.; Ye, N.; De Sterck, H. Nesterov acceleration of alternating least squares for canonical tensor decomposition: Momentum step size selection and restart mechanisms. Numer. Linear Algebra Appl. 2020, 27, e2297. [Google Scholar] [CrossRef]
  55. Vinchurkar, P.P.; Rathkanthiwar, S.; Kakde, S. HDL implementation of DFT architectures using Winograd fast Fourier transform algorithm. In Proceedings of the 2015 Fifth International Conference on Communication Systems and Network Technologies, Gwalior, India, 4–6 April 2015; pp. 397–401. [Google Scholar]
  56. Bioucas-Dias, J.M.; Nascimento, J. Hyperspectral Subspace Identification. IEEE Trans. Geosci. Remote Sens. 2008, 46, 2435–2445. [Google Scholar] [CrossRef] [Green Version]
  57. Lee, D.D.; Seung, H.S. Learning the parts of objects by non-negative matrix factorization. Nature 1999, 401, 788–791. [Google Scholar] [CrossRef]
  58. Chang, C.I. An effective evaluation tool for hyperspectral target detection: 3D receiver operating characteristic curve analysis. IEEE Trans. Geosci. Remote Sens. 2020, 59, 5131–5153. [Google Scholar] [CrossRef]
  59. Kerekes, J. Receiver operating characteristic curve confidence intervals and regions. IEEE Geosci. Remote Sens. Lett. 2008, 5, 251–255. [Google Scholar] [CrossRef] [Green Version]
Figure 1. AVIRIS−1 dataset. (a) False-color image of the whole scene. (b) False-color image of the detection area corresponding to the area surrounded by the red box in image (a). (c) Ground-truth map of the anomalies. (d) Spectra of endmembers #1–#3 of the background; the estimated number of endmembers was 3.
Figure 2. AVIRIS−2 dataset. (a) False-color image of the detection area corresponding to the area surrounded by the blue box in the whole scene. (b) Ground-truth map of the anomalies. (c) Spectra of endmembers #1–#2 of the background; the estimated number of endmembers was 2.
Figure 3. HYDICE dataset. (a) False-color image of the whole scene. (b) False-color image of the detection area. (c) Ground-truth map of the anomalies. (d) Spectra of endmembers #1–#4 of the background; the estimated number of endmembers was 4.
Figure 4. Urban−1 of the Urban (ABU) dataset. (a) False-color image of the detection area. (b) Ground-truth map of the anomalies. (c) Spectra of endmembers #1–#2 of the background; the estimated number of endmembers was 2.
Figure 5. Urban-2 of the Urban (ABU) dataset. (a) False-color image of the detection area. (b) Ground-truth map of the anomalies. (c) Spectra of endmembers #1–#3 of the background; the estimated number of endmembers was 3.
Figure 6. 2D plots of the detection results obtained by RX, RPCA, LRASR, GTVLRR, GVAE, TRPCA, PTA, LRTDTV, Dm-1, Dm-2, Dm-3, Dm-4, and ATLSS on the AVIRIS-1 dataset. (a) RX. (b) RPCA. (c) LRASR. (d) GTVLRR. (e) GVAE. (f) TRPCA. (g) PTA. (h) LRTDTV. (i) Dm-1. (j) Dm-2. (k) Dm-3. (l) Dm-4. (m) ATLSS.
Figure 7. 2D plots of the detection results obtained by RX, RPCA, LRASR, GTVLRR, GVAE, TRPCA, PTA, LRTDTV, Dm-1, Dm-2, Dm-3, Dm-4, and ATLSS on the AVIRIS-2 dataset. (a) RX. (b) RPCA. (c) LRASR. (d) GTVLRR. (e) GVAE. (f) TRPCA. (g) PTA. (h) LRTDTV. (i) Dm-1. (j) Dm-2. (k) Dm-3. (l) Dm-4. (m) ATLSS.
Figure 8. 2D plots of the detection results obtained by RX, RPCA, LRASR, GTVLRR, GVAE, TRPCA, PTA, LRTDTV, Dm-1, Dm-2, Dm-3, Dm-4, and ATLSS on the HYDICE dataset. (a) RX. (b) RPCA. (c) LRASR. (d) GTVLRR. (e) GVAE. (f) TRPCA. (g) PTA. (h) LRTDTV. (i) Dm-1. (j) Dm-2. (k) Dm-3. (l) Dm-4. (m) ATLSS.
Figure 9. 2D plots of the detection results obtained by RX, RPCA, LRASR, GTVLRR, GVAE, TRPCA, PTA, LRTDTV, Dm-1, Dm-2, Dm-3, Dm-4, and ATLSS on the ABU Urban-1 dataset. (a) RX. (b) RPCA. (c) LRASR. (d) GTVLRR. (e) GVAE. (f) TRPCA. (g) PTA. (h) LRTDTV. (i) Dm-1. (j) Dm-2. (k) Dm-3. (l) Dm-4. (m) ATLSS.
Figure 10. 2D plots of the detection results obtained by RX, RPCA, LRASR, GTVLRR, GVAE, TRPCA, PTA, LRTDTV, Dm-1, Dm-2, Dm-3, Dm-4, and ATLSS on the ABU Urban-2 dataset. (a) RX. (b) RPCA. (c) LRASR. (d) GTVLRR. (e) GVAE. (f) TRPCA. (g) PTA. (h) LRTDTV. (i) Dm-1. (j) Dm-2. (k) Dm-3. (l) Dm-4. (m) ATLSS.
Figure 11. The ROC curves of the different methods under comparison for the five datasets. (a) AVIRIS-1. (b) AVIRIS-2. (c) HYDICE. (d) ABU Urban-1. (e) ABU Urban-2. (Left to right) 3D ROC curve, 2D ROC curve of $(P_d, P_f)$, and 2D ROC curve of $(P_f, \tau)$.
Figure 12. Box and whisker plots of the different methods under comparison for the five real datasets: (a) AVIRIS-1. (b) AVIRIS-2. (c) HYDICE. (d) ABU Urban-1. (e) ABU Urban-2.
Figure 13. Detection accuracy of ATLSS on the AVIRIS-2 dataset with different parameter settings. (a) $\lambda_1$ varied. (b) $\lambda_2$ varied. (c) $\lambda_3$ varied. (d) $\beta$ varied.
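A sensitivity study like the one in Figure 13 varies one regularization weight at a time and records the resulting detection accuracy. The following is a minimal sketch of such a sweep; `detector` and `score` are hypothetical stand-ins for an ATLSS solver and an AUC scorer, not the paper's code, and the grid values are illustrative only.

```python
import itertools

def sweep(detector, score, Y, gt, grids):
    """Evaluate `score` at every combination of (lambda1, lambda2, lambda3, beta).

    `detector` and `score` are hypothetical callables standing in for an
    ATLSS solver and an AUC(Pd, Pf) scorer, respectively.
    """
    results = {}
    for l1, l2, l3, b in itertools.product(
            grids["lambda1"], grids["lambda2"], grids["lambda3"], grids["beta"]):
        results[(l1, l2, l3, b)] = score(detector(Y, l1, l2, l3, b), gt)
    return results

# Toy stand-ins so the sketch runs end to end.
dummy_detector = lambda Y, l1, l2, l3, b: Y
dummy_score = lambda det_map, gt: 0.5
grids = {"lambda1": [0.01, 0.1], "lambda2": [0.01, 0.1],
         "lambda3": [0.1], "beta": [0.1, 1.0]}
res = sweep(dummy_detector, dummy_score, None, None, grids)
print(len(res))  # 8 parameter settings (2 x 2 x 1 x 2)
```

In practice each weight is swept over its own range while the others are held at their best-performing values, which is how the per-parameter curves in Figure 13 are produced.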
Table 1. Formulations of the ATLSS model and its different degradation models.
Dm-1: $\min_{\mathcal{X},\mathbf{A},\mathcal{E},\mathcal{N}}\ \tfrac{1}{2}\|\mathcal{N}\|_F^2+\lambda_1\|\mathcal{X}\|_1+\beta\|\mathcal{E}\|_{1,1,2}$, s.t. $\mathcal{Y}=\mathcal{X}\times_3\mathbf{A}+\mathcal{E}+\mathcal{N}$, $\mathcal{X}\geq 0$, $\mathbf{A}\geq 0$.
Dm-2: $\min_{\mathcal{X},\mathbf{A},\mathcal{E},\mathcal{N}}\ \tfrac{1}{2}\|\mathcal{N}\|_F^2+\lambda_1\|\mathcal{X}\|_1+\beta\|\mathcal{E}\|_{1,1,2}$, s.t. $\operatorname{rank}(\mathcal{X})=K_{\mathcal{X}}$, $\mathcal{Y}=\mathcal{X}\times_3\mathbf{A}+\mathcal{E}+\mathcal{N}$, $\mathcal{X}\geq 0$, $\mathbf{A}\geq 0$.
Dm-3: $\min_{\mathcal{X},\mathbf{A},\mathcal{E},\mathcal{N}}\ \tfrac{1}{2}\|\mathcal{N}\|_F^2+\lambda_1\|\mathcal{X}\|_{\mathrm{TV}}+\beta\|\mathcal{E}\|_{1,1,2}$, s.t. $\mathcal{Y}=\mathcal{X}\times_3\mathbf{A}+\mathcal{E}+\mathcal{N}$, $\mathcal{X}\geq 0$, $\mathbf{A}\geq 0$.
Dm-4: $\min_{\mathcal{X},\mathbf{A},\mathcal{E},\mathcal{N}}\ \tfrac{1}{2}\|\mathcal{N}\|_F^2+\lambda_1\|\mathcal{X}\|_{\mathrm{TV}}+\beta\|\mathcal{E}\|_{1,1,2}$, s.t. $\operatorname{rank}(\mathcal{X})=K_{\mathcal{X}}$, $\mathcal{Y}=\mathcal{X}\times_3\mathbf{A}+\mathcal{E}+\mathcal{N}$, $\mathcal{X}\geq 0$, $\mathbf{A}\geq 0$.
ATLSS: $\min_{\mathcal{X},\mathbf{A},\mathcal{E},\mathcal{N}}\ \tfrac{1}{2}\|\mathcal{N}\|_F^2+\lambda_1\|\mathcal{X}\|_1+\lambda_2\|\mathcal{X}\|_{\mathrm{TV}}+\beta\|\mathcal{E}\|_{1,1,2}$, s.t. $\operatorname{rank}(\mathcal{X})=K_{\mathcal{X}}$, $\mathcal{Y}=\mathcal{X}\times_3\mathbf{A}+\mathcal{E}+\mathcal{N}$, $\mathcal{X}\geq 0$, $\mathbf{A}\geq 0$.
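Every term in these formulations can be evaluated directly from the tensors involved. The NumPy sketch below assumes an $H \times W \times K$ abundance tensor and a $B \times K$ endmember matrix; the anisotropic TV definition is one common choice and may differ in detail from the paper's TV operator.

```python
import numpy as np

def mode3_product(X, A):
    """Mode-3 product X x_3 A: contracts the third (endmember) axis of the
    abundance tensor X (H x W x K) with the endmember matrix A (B x K),
    yielding a background tensor of shape H x W x B."""
    return np.einsum('hwk,bk->hwb', X, A)

def tv_norm(X):
    """Anisotropic spatial total variation, summed over all abundance maps
    (one common definition; the paper's TV operator may differ in detail)."""
    return np.abs(np.diff(X, axis=0)).sum() + np.abs(np.diff(X, axis=1)).sum()

def l112_norm(E):
    """l_{1,1,2} norm: the l2 norm of each spectral tube E[i, j, :], summed
    over all pixels, which promotes tube-wise sparsity of the anomaly."""
    return np.sqrt((E ** 2).sum(axis=2)).sum()

def atlss_objective(Y, X, A, E, lam1, lam2, beta):
    """Objective value of the ATLSS model for a candidate (X, A, E), taking
    the noise term as the residual N = Y - X x_3 A - E."""
    N = Y - mode3_product(X, A) - E
    return (0.5 * (N ** 2).sum()
            + lam1 * np.abs(X).sum()   # sparsity of the abundances
            + lam2 * tv_norm(X)        # spatial smoothness
            + beta * l112_norm(E))     # tube-wise sparse anomaly
```

For example, a tube `E[i, j, :] = (3, 4)` contributes 5 to the $\ell_{1,1,2}$ term, while a spatially constant abundance tensor contributes nothing to the TV term.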
Table 2. The AUC values obtained by different AD algorithms for the five real datasets.
Each cell reports the AUC of $(P_d, P_f)$ followed by the AUC of $(P_f, \tau)$.

Matrix-based operations (RX, RPCA, LRASR, GTVLRR, PTA) and deep learning (GVAE):

| Dataset | RX | RPCA | LRASR | GTVLRR | PTA | GVAE |
| --- | --- | --- | --- | --- | --- | --- |
| AVIRIS-1 | 0.9551/0.0118 | 0.8935/0.0129 | 0.9716/0.1968 | 0.9822/0.0893 | 0.9890/0.2227 | 0.9860/0.0718 |
| AVIRIS-2 | 0.9213/0.0266 | 0.7969/0.0262 | 0.9672/0.0711 | 0.9816/0.0518 | 0.9609/0.0514 | 0.9616/0.2009 |
| HYDICE | 0.8511/0.0470 | 0.9436/0.0277 | 0.9311/0.0989 | 0.9393/0.0488 | 0.9829/0.0935 | 0.9311/0.1549 |
| ABU Urban-1 | 0.9934/0.0329 | 0.9916/0.0496 | 0.8666/0.2096 | 0.9093/0.1261 | 0.9852/0.1869 | 0.9778/0.0925 |
| ABU Urban-2 | 0.9946/0.0611 | 0.9960/0.0283 | 0.9867/0.0211 | 0.9967/0.0487 | 0.9992/0.0672 | 0.9828/0.1976 |

Tensor-based operations:

| Dataset | TRPCA | LRTDTV | Dm-1 | Dm-2 | Dm-3 | Dm-4 | ATLSS |
| --- | --- | --- | --- | --- | --- | --- | --- |
| AVIRIS-1 | 0.9801/0.0304 | 0.9843/0.0472 | 0.9950/0.0649 | 0.9954/0.0293 | 0.9945/0.0290 | 0.9949/0.0295 | 0.9967/0.0328 |
| AVIRIS-2 | 0.9549/0.0478 | 0.9890/0.0736 | 0.9967/0.0655 | 0.9970/0.0490 | 0.9966/0.0660 | 0.9969/0.0644 | 0.9982/0.0159 |
| HYDICE | 0.9600/0.0817 | 0.9725/0.0887 | 0.9829/0.0388 | 0.9847/0.0222 | 0.9812/0.0177 | 0.9830/0.0126 | 0.9893/0.0152 |
| ABU Urban-1 | 0.9823/0.2450 | 0.9870/0.0868 | 0.9940/0.1214 | 0.9960/0.1226 | 0.9921/0.0783 | 0.9956/0.1293 | 0.9962/0.1291 |
| ABU Urban-2 | 0.9456/0.0325 | 0.9904/0.0459 | 0.9976/0.0126 | 0.9989/0.0346 | 0.9973/0.0275 | 0.9981/0.0405 | 0.9992/0.0611 |
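The AUC pairs above come from the ROC analysis of Figure 11: a threshold $\tau$ is swept over the normalized detector output, and $(P_d, P_f)$ is recorded at each $\tau$. The sketch below shows one common convention for computing both AUCs; the exact threshold grid and normalization used in the paper may differ.

```python
import numpy as np

def roc_aucs(det_map, gt, n_tau=200):
    """AUC of (Pd, Pf) and of (Pf, tau) from a detection map and a binary
    ground-truth mask, via a uniform threshold sweep over [0, 1]."""
    d = det_map.astype(float)
    d = (d - d.min()) / (d.max() - d.min() + 1e-12)  # scale output to [0, 1]
    taus = np.linspace(0.0, 1.0, n_tau)
    anom, back = gt > 0, gt == 0
    # Pd(tau): fraction of anomaly pixels at or above the threshold.
    pd = np.array([(d[anom] >= t).mean() for t in taus])
    # Pf(tau): fraction of background pixels at or above the threshold.
    pf = np.array([(d[back] >= t).mean() for t in taus])
    # Trapezoidal integration; pd and pf both decrease as tau grows,
    # so they are reversed to integrate over increasing Pf.
    trapz = lambda y, x: float(np.sum(0.5 * (y[1:] + y[:-1]) * (x[1:] - x[:-1])))
    auc_pd_pf = trapz(pd[::-1], pf[::-1])  # higher is better (detection power)
    auc_pf_tau = trapz(pf, taus)           # lower is better (background suppression)
    return auc_pd_pf, auc_pf_tau
```

An ideal detector whose output equals the ground truth yields an AUC of $(P_d, P_f)$ of 1 and an AUC of $(P_f, \tau)$ close to 0, matching the reading of Table 2 that large first values and small second values are best.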
Table 3. The computational time (seconds) of different AD algorithms for the AVIRIS−2 dataset.
| Method | Category | Time (s) |
| --- | --- | --- |
| RX | Matrix-based | 3.577 |
| RPCA | Matrix-based | 15.457 |
| LRASR | Matrix-based | 26.399 |
| GTVLRR | Matrix-based | 241.099 |
| PTA | Matrix-based | 241.099 |
| GVAE | Deep learning | 80 × 2000 + 25.56 |
| TRPCA | Tensor-based | 234.465 |
| LRTDTV | Tensor-based | 23.709 |
| Dm-1 | Tensor-based | 70.494 |
| Dm-2 | Tensor-based | 149.949 |
| Dm-3 | Tensor-based | 138.588 |
| Dm-4 | Tensor-based | 194.204 |
| ATLSS | Tensor-based | 200.732 |