Article

Double Reweighted Sparse Regression and Graph Regularization for Hyperspectral Unmixing

1 School of Computer Science and Engineering, University of Electronic Science and Technology of China, Chengdu 611731, China
2 Institute of Computational Science, School of Mathematical Sciences, University of Electronic Science and Technology of China, Chengdu 611731, China
3 College of Applied Mathematics, Chengdu University of Information Technology, Chengdu 610225, China
4 School of Mathematics, University of Minnesota Twin Cities, Minneapolis, MN 55455, USA
* Authors to whom correspondence should be addressed.
Remote Sens. 2018, 10(7), 1046; https://doi.org/10.3390/rs10071046
Submission received: 14 May 2018 / Revised: 17 June 2018 / Accepted: 29 June 2018 / Published: 3 July 2018

Abstract:
Hyperspectral unmixing, which aims to estimate the fractional abundances of pure spectral signatures in each mixed pixel, has attracted considerable attention in the analysis of hyperspectral images. Plenty of sparse unmixing methods that achieve promising performance have been proposed in the literature. However, many of these methods overlook the latent geometrical structure of the hyperspectral data, which limits their performance to some extent. To address this issue, a double reweighted sparse and graph regularized unmixing method is proposed in this paper. Specifically, a graph regularizer is employed to capture the correlation information between abundance vectors, exploiting the property that similar pixels in a spectral neighborhood are more likely to share similar abundances. In this way, the latent geometrical structure of the hyperspectral data can be transferred to the abundance space. In addition, a double weighted sparse regularizer is used to enhance the sparsity of the endmembers and the fractional abundance maps: one weight promotes the sparsity of the endmembers, as a hyperspectral image typically contains fewer endmembers than the overcomplete spectral library, and the other weight improves the sparsity of the abundance matrix. The weights of the double weighted sparse regularizer used for the next iteration are adaptively computed from the current abundance matrix. Experimental results on synthetic and real hyperspectral data demonstrate the superiority of our method compared with some state-of-the-art approaches.


1. Introduction

Hyperspectral imaging has the capacity to record the same scene over many contiguous spectral bands, including the visible, near-infrared and shortwave infrared bands, and it has been widely used in practical applications such as terrain classification, mineral detection and exploration [1,2], military target discrimination, environmental monitoring and pharmaceutical counterfeiting [3]. Mixed pixels are commonly present in hyperspectral data due to the low spatial resolution of the sensors and the complexity of the terrain. The occurrence of mixed pixels severely degrades the applicability of hyperspectral data [4]. Thus, hyperspectral unmixing (HU) is an important process for hyperspectral data exploitation.
The task of HU is to identify a set of endmembers and to estimate their corresponding abundances in the formation of each hyperspectral image (HSI) pixel [5,6,7]. HU algorithms mainly depend on the assumed type of mixing, which can be classified as either nonlinear or linear [8]. The nonlinear mixing model assumes that part of the source radiation is affected by multiple scattering through several endmembers in the scene before being collected at the sensor. Therefore, the observed mixed pixel spectrum is generated by a nonlinear function of the abundances associated with the endmembers. On the other hand, the linear mixture model (LMM) assumes that the mixing only occurs within the measuring instrument itself and that interactions among distinct endmembers are negligible. Hence, the observed mixed pixel spectrum can be expressed as a linear combination of a collection of endmember signatures weighted by their corresponding fractional abundances. Even though the LMM is not always a precise model for many real scenarios, it is generally recognized as an acceptable approximation of the light scattering and instrument mechanism. It has been widely used for HU due to its simplicity, effectiveness in different applications and computational tractability.
Based on the LMM, many methods have been proposed, such as geometry-based and statistics-based approaches [9]. Geometry-based methods usually require that the pixel observations of hyperspectral data lie within a simplex whose vertices correspond to the endmembers [10]. This class of methods includes N-FINDR [11], vertex component analysis [12], minimum-volume simplex analysis [13], the simplex growing algorithm [14], etc. In general, these methods require the estimation of the number of endmembers or the presence of pure pixels. These assumptions may not hold for many hyperspectral data sets owing to the relatively low spatial resolution of imaging spectrometers and the presence of mixture phenomena at different scales.
Statistical approaches utilize statistical characteristics of the hyperspectral data; examples include independent component analysis (ICA) [15] and nonnegative matrix factorization (NMF) [16]. NMF learns a parts-based representation of the data and decomposes the high-dimensional data into two nonnegative matrices. Consequently, one can obtain the endmember signatures and their corresponding abundances simultaneously. This method does not require the pure pixel assumption. Nevertheless, NMF can be trapped in local minima owing to the nonconvexity of its objective function. To alleviate this disadvantage, many methods have been proposed by adding extra constraints under the NMF framework. For example, spatial and spectral regularization is considered in [4,17], a sparsity constraint on the abundances is considered in [18], and the manifold structure of the hyperspectral data is considered in [19,20]. All these methods are unsupervised, i.e., both the endmember signatures and the abundances are unknown.
As more spectral libraries become publicly available, many semisupervised methods have been proposed. In [21], Bioucas-Dias et al. proposed a sparse unmixing method, named SUnSAL, which is based on variable splitting and augmented Lagrangian. This method assumes that the observed image signatures can be expressed as a linear combination of a small number of pure spectral signatures. The high mutual coherence of the hyperspectral library limits the performance of SUnSAL. To mitigate the impact of the mutual coherence, Iordache et al. proposed a collaborative sparse regression method for hyperspectral unmixing (CLSUnSAL) [22]. To further improve the performance of CLSUnSAL, an HU method based on weighted sparse regression with a weighted $\ell_{d,1}$ ($d = 1$ or $2$) mixed norm was proposed by Zheng et al. [23]. In [24], Tang et al. observed that, in many situations, some materials in the spectral library are known to exist in the hyperspectral scene, and they proposed the SUnSPI algorithm, which employs this spectral a priori information. This method improves upon SUnSAL and CLSUnSAL. However, these methods do not exploit the spatial-contextual information of hyperspectral images. In [5], Iordache et al. proposed the SUnSAL-TV method, which utilizes the spatial information of the first-order pixel neighborhood system of the abundance map through a total variation (TV) term in the sparse unmixing formulation. In [25], Zhao et al. studied total variation regularization in deblurring and sparse unmixing of hyperspectral images, and their method achieves better performance than SUnSAL-TV. Taking the non-local spatial information of the whole abundance image into account, a non-local means unmixing method that exploits similar patterns and structures in the abundance map was proposed in [26]. Allowing for the low-rankness and sparsity of abundance matrices, Giampouras et al. proposed a method that combines the weighted $\ell_1$-norm and the weighted nuclear norm of the abundance matrix as a penalty term [27]. These methods ignore the underlying structure of the hyperspectral images. To take advantage of this property, some graph-based methods have been proposed [28,29], which employ graph topology and sparse group lasso regularization.
To fully utilize the underlying structure of hyperspectral images (HSIs) and the sparsity of the abundance matrix, we propose a double reweighted sparse and graph regularized hyperspectral unmixing model. The motivation is based on two priors. First, similar pixels in a spectral neighborhood are more likely to share the same abundances, according to the fundamental spatial–spectral correlation property of HSIs. In our method, this latent structure is modeled by a graph regularizer, which groups the highly related neighboring pixels on the graph. In this way, it transfers the structure of the hyperspectral data to the unmixing results. Second, a hyperspectral image typically contains fewer endmembers than the spectral library, and the abundance maps are naturally sparse. Thus, we employ a double reweighted sparse regularizer to capture this prior. In particular, we introduce one weight to promote the column sparsity of the fractional abundances, while another weight is employed to enhance the sparsity along the abundance vector corresponding to each endmember. Moreover, we design a symmetric alternating direction method of multipliers based algorithm to solve the proposed model.
There are some works related to ours (e.g., [4,29]). In [4], Wang et al. introduced a double reweighted sparse unmixing and TV (DRSU-TV) algorithm for hyperspectral unmixing. The difference is that they employ total variation regularization, which only utilizes local neighborhood pixels in the abundances, rather than graph regularization, which can exploit the structures hidden in the hyperspectral data and connect a pixel with arbitrarily many other pixels in the image as long as they are similar. In [29], the authors incorporated a graph-based total variation framework within the unmixing problem, which combines the strengths of graph TV regularization and $\ell_{2,1}$-norm regularization. Unlike [29], in which the graph TV regularization is imposed on the reconstructed spectra, we exploit the graph Laplacian directly on the abundances. In addition, we adopt a double reweighted $\ell_1$-norm rather than the $\ell_{2,1}$-norm regularization employed in [29].
The rest of this paper is organized as follows. In Section 2, we briefly review the linear mixture model and the symmetric alternating direction method of multipliers (symmetric ADMM). In Section 3, we present our formulation and the newly developed algorithm, called double reweighted sparse and graph regularized hyperspectral unmixing (DRSGHU). Section 4 describes the experimental results on simulated and real hyperspectral data. Section 5 discusses different choices of the graph Laplacian matrix L in the graph regularization term and analyzes the influence of the parameters. Finally, some conclusions are provided in Section 6.

2. Preliminary

In this section, we briefly review the LMM and symmetric ADMM algorithm for subsequent discussion. The former is the basic model of the linear hyperspectral unmixing and the latter is one of the representative optimization methods for unmixing.

2.1. Linear Mixture Model (LMM)

The LMM is a popular model for hyperspectral image unmixing, which assumes that the spectral response of a pixel is a linear combination of spectral signatures (called endmembers) [19,20,30,31,32]. Given an observed spectrum vector of a mixed pixel $\mathbf{y} \in \mathbb{R}^{l \times 1}$, it can be approximated by a nonnegative linear combination of $m$ endmembers, i.e.,

$$\mathbf{y} = A\mathbf{x} + \mathbf{e},$$

where $A \in \mathbb{R}^{l \times m}$ represents a spectral library containing $l$ spectral bands and $m$ spectral signatures, $\mathbf{x} \in \mathbb{R}^{m \times 1}$ is the abundance vector, and $\mathbf{e} \in \mathbb{R}^{l \times 1}$ is the error term (i.e., the noise affecting the measurements). More generally, if the observed data set contains $n$ mixed pixels organized in a matrix $Y = [\mathbf{y}_1, \mathbf{y}_2, \ldots, \mathbf{y}_n] \in \mathbb{R}^{l \times n}$, the linear mixture model can be written as

$$Y = AX + E,$$

where $X = [\mathbf{x}_1, \mathbf{x}_2, \ldots, \mathbf{x}_n] \in \mathbb{R}^{m \times n}$ is the fractional abundance matrix (each column $\mathbf{x}_j \in \mathbb{R}^{m \times 1}$ corresponds to the abundance fractions of a mixed pixel $\mathbf{y}_j \in \mathbb{R}^{l \times 1}$), and $E = [\mathbf{e}_1, \mathbf{e}_2, \ldots, \mathbf{e}_n] \in \mathbb{R}^{l \times n}$ is the noise matrix. In general, two constraints are imposed on the fractional abundances $X$ in the LMM due to physical considerations [5,21,24,26,33]. On the one hand, we often assume $X_{ij} \geq 0$, where $X_{ij}$ denotes the $(i,j)$-th entry of $X$, termed the abundance non-negativity constraint (ANC). On the other hand, it is common to assume $\mathbf{1}^T\mathbf{x}_j = 1$, where $\mathbf{1}^T \in \mathbb{R}^{1 \times m}$ is a row vector of ones compatible with $\mathbf{x}_j$, termed the abundance sum-to-one constraint (ASC). Under some circumstances, the sum-to-one constraint cannot be exactly satisfied because of signature variability [24]. Hence, we do not enforce the sum-to-one constraint.
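As a concrete illustration, the linear mixture model $Y = AX + E$ can be simulated in a few lines of NumPy; the library, abundances, and dimensions below are toy values for illustration, not data from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
l, m, n = 50, 8, 20                      # bands, library size, pixels (toy sizes)

A = rng.random((l, m))                   # toy spectral library
X = rng.random((m, n))
X[rng.random((m, n)) < 0.6] = 0.0        # most abundances are zero (ANC + sparsity)
E = 0.01 * rng.standard_normal((l, n))   # measurement noise
Y = A @ X + E                            # the linear mixture model Y = AX + E

# Relative size of the noise term in the observation:
residual = np.linalg.norm(Y - A @ X) / np.linalg.norm(Y)
```

With the ANC in force, every abundance entry is nonnegative, and the observation differs from the noiseless mixture only by the small noise term.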

2.2. Symmetric ADMM

The symmetric alternating direction method of multipliers (symmetric ADMM) is an accelerated variant of ADMM, which can be used to solve constrained optimization formulations in image processing [34,35,36,37,38,39,40,41,42,43].
Consider the following linearly separable convex minimization problem of the form:
$$\min \left\{ \theta_1(x) + \theta_2(y) \;\middle|\; Ax + By = b,\ x \in \Omega_1,\ y \in \Omega_2 \right\},$$

where $\theta_1 : \mathbb{R}^{n_1} \to \mathbb{R}$ and $\theta_2 : \mathbb{R}^{n_2} \to \mathbb{R}$ are convex functions, $b \in \mathbb{R}^m$, $\Omega_1 \subseteq \mathbb{R}^{n_1}$ and $\Omega_2 \subseteq \mathbb{R}^{n_2}$ are closed convex sets, and $A \in \mathbb{R}^{m \times n_1}$ and $B \in \mathbb{R}^{m \times n_2}$ are given matrices. The augmented Lagrangian function of Equation (3) can be written as:

$$\mathcal{L}_\beta(x, y, \lambda) = \theta_1(x) + \theta_2(y) - \lambda^T(Ax + By - b) + \frac{\beta}{2}\|Ax + By - b\|^2,$$

where $\lambda$ is the Lagrangian multiplier and $\beta > 0$ is a penalty parameter. Then, the symmetric ADMM scheme is:

$$\begin{aligned}
x^{k+1} &= \arg\min_x \left\{ \mathcal{L}_\beta(x, y^k, \lambda^k) \;\middle|\; x \in \Omega_1 \right\},\\
\lambda^{k+\frac{1}{2}} &= \lambda^k - r\beta\,(Ax^{k+1} + By^k - b),\\
y^{k+1} &= \arg\min_y \left\{ \mathcal{L}_\beta(x^{k+1}, y, \lambda^{k+\frac{1}{2}}) \;\middle|\; y \in \Omega_2 \right\},\\
\lambda^{k+1} &= \lambda^{k+\frac{1}{2}} - s\beta\,(Ax^{k+1} + By^{k+1} - b).
\end{aligned}$$
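To make the scheme concrete, the following minimal sketch applies symmetric ADMM to a toy scalar problem, $\min (x-a)^2 + (y-b)^2$ subject to $x - y = 0$, whose minimizer is $x = y = (a+b)/2$. The closed-form subproblem updates are obtained by setting the gradient of the augmented Lagrangian to zero; all numerical values are illustrative:

```python
# Toy symmetric ADMM run: minimize (x - a)^2 + (y - b)^2 subject to x - y = 0.
# The minimizer is x = y = (a + b)/2 = 2.0 for the values below.
a, b = 1.0, 3.0
beta, r, s = 1.0, 0.9, 0.9
x = y = lam = 0.0
for _ in range(500):
    x = (2 * a + lam + beta * y) / (2 + beta)   # x-subproblem (closed form)
    lam -= r * beta * (x - y)                   # intermediate multiplier step
    y = (2 * b - lam + beta * x) / (2 + beta)   # y-subproblem (closed form)
    lam -= s * beta * (x - y)                   # final multiplier step
# x and y both approach (a + b)/2 = 2.0
```

The extra multiplier update between the two subproblems is what distinguishes symmetric ADMM from standard ADMM, and it is what gives the method its acceleration in practice.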

3. Double Reweighted Sparse and Graph Regularized Unmixing Method

In this section, we present the details of the proposed DRSGHU method, including the model formulation and its optimization by symmetric ADMM.

3.1. Double Reweighted $\ell_1$ Sparse Prior

As each pixel can be expressed as a linear combination of a small number of endmembers, the abundance vector of a pixel in an HSI is usually assumed to be sparse. In other words, most of the entries of the abundance matrix are supposed to be zero. Many methods have been proposed in recent years to utilize the sparsity of the abundances. A very straightforward approach is to introduce an $\ell_0$ quasi-norm regularizer to count the nonzero entries in the abundance matrix. However, this leads to an NP-hard problem. To circumvent this disadvantage, a common surrogate regularizer is the $\ell_1$-norm, a convex substitute for the $\ell_0$ regularizer. To further improve the estimation performance over the $\ell_1$-norm and promote the sparsity of the solution, other regularizers, such as the $\ell_p$-norm regularizer, the collaborative sparse regularizer [22], and the reweighted collaborative sparse regularizer [23], have been proposed for unmixing problems. In this paper, inspired by [4,44,45], we propose to use a weighted formulation of $\ell_1$ minimization to improve the estimation performance over the $\ell_1$-norm. Considering the facts that a hyperspectral image usually includes fewer endmembers than the overcomplete spectral library and that the abundance map is inherently sparse, we employ the double reweighted $\ell_1$ sparse constraint, which is defined as
$$\min_X R_1(X) = \|W_2 \odot W_1 \odot X\|_{1,1}, \quad \text{s.t. } X \geq 0.$$
Then, we introduce the double reweighted sparse regularizer into the unmixing problem, and the new objective function is
$$(P1): \quad \min_{X \geq 0} \frac{1}{2}\|AX - Y\|_F^2 + \alpha_2 \|W_2 \odot W_1 \odot X\|_{1,1},$$
where the operator $\odot$ denotes the Hadamard (element-wise) product, and $W_1$ and $W_2$ are weighting matrices. The first weight $W_1$ is introduced to promote the sparsity of endmembers in the spectral library, as a hyperspectral image usually contains fewer endmembers than the overcomplete spectral library. Since $X$ is unavailable, we use the abundance matrix of the current solution to compute the weight matrix $W_1$ for the next iteration:
$$(W_1)_{ij}^{k} = \frac{1}{\|X^k(i,:)\|_1 + \epsilon},$$
where $(W_1)_{ij}^k$ denotes the $(i,j)$-th element of $W_1 \in \mathbb{R}^{m \times n}$ estimated in the $k$-th iteration, $\|X^k(i,:)\|_1$ denotes the $\ell_1$-norm of $X^k(i,:)$ (the $i$-th row of $X$ estimated in the $k$-th iteration), and $\epsilon > 0$ is a small positive constant introduced to avoid division by zero. Then,
$$\|W_1 \odot X\|_{1,1} = \sum_{i=1}^m \frac{\|X(i,:)\|_1}{\|X(i,:)\|_1 + \epsilon} \approx \|X\|_{1,0},$$
where $\|X\|_{1,0}$ is the $\ell_0$ quasi-norm of the vector $[\|X(1,:)\|_1, \|X(2,:)\|_1, \ldots, \|X(m,:)\|_1]$, i.e., the number of nonzero rows of $X$ [23]. In this way, $\|W_1 \odot X\|_{1,1}$ has a sparsity-promoting property similar to that of $\|X\|_{1,0}$, which measures the number of nonzero rows of the abundance matrix $X$. However, $\|W_1 \odot X\|_{1,1}$ treats all of the entries in each abundance vector equally; in other words, it implicitly assumes that the same material appears in every pixel, whereas the fractional abundance matrix is intrinsically sparse. Hence, we introduce another weight $W_2$ to improve the sparsity along the abundance vector corresponding to each pixel. Similar to $W_1$, we compute $W_2$ iteratively, i.e., the weighting coefficients of $W_2$ are based on the current estimate $X^k$:
$$(W_2)_{ij}^{k} = \frac{1}{|X_{ij}^k| + \epsilon}.$$
Small weights in $W_2$ thus permit nonzero entries in the estimated abundance matrix, and vice versa. Note that when the weighting matrices $W_1$ and $W_2$ are both all-ones matrices, formulation (P1) reduces to the sparse unmixing model proposed in [21].
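A small sketch of the adaptive weight computation on a toy abundance matrix; the helper name `double_weights` is ours, not from the paper, which computes these weights inside the solver loop:

```python
import numpy as np

def double_weights(X, eps=1e-6):
    """Compute the two reweighting matrices from the current abundance estimate X.

    W1 gives large weights to rows (endmembers) whose total abundance is small,
    promoting row sparsity; W2 gives large weights to small individual entries,
    promoting element-wise sparsity. Both have the same shape as X."""
    row_l1 = np.abs(X).sum(axis=1, keepdims=True)   # ||X(i,:)||_1 per row
    W1 = np.ones_like(X) / (row_l1 + eps)           # broadcast across columns
    W2 = 1.0 / (np.abs(X) + eps)
    return W1, W2

X = np.array([[0.0, 0.0, 0.0],
              [0.5, 0.2, 0.3]])
W1, W2 = double_weights(X, eps=0.1)
# Row 0 carries no abundance, so its W1 entries are large (1 / 0.1 = 10);
# row 1 sums to 1.0, so its W1 entries are 1 / 1.1.
```

Rows and entries that are already near zero receive large weights and are pushed harder toward zero at the next iteration, which is exactly the reweighting mechanism described above.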

3.2. Graph Regularizer

The sparse constraint only captures the sparsity of the abundance matrix; it ignores the latent geometrical structure of the HSI data. As is well known, similar pixels in a spatial or spectral neighborhood are more likely to share similar abundances [46,47,48]. Therefore, fully exploiting this property should enhance the accuracy of the unmixing. We consider the pixel similarity relationship among neighboring pixels in the spectral space, that is, the abundance vectors maintain the manifold structure of the spectral feature data, and we incorporate this information into the unmixing problem through a graph Laplacian regularizer. Given hyperspectral data $Y \in \mathbb{R}^{l \times n}$ with $n$ pixels, we view each pixel $y_i$ as a node, find the $p$ nearest neighbors of each data point $y_i$, and put an edge between $y_i$ and each of its neighbors to construct a weighted graph $W$ on the $n$ data points. There are many different ways to define the weight matrix of the graph. Here, we adopt heat kernel weighting, i.e., if nodes $i$ and $j$ are connected, we set
$$W_{ij} = e^{-\frac{\|y_i - y_j\|^2}{2\sigma^2}},$$
where $\sigma$ is the bandwidth of the heat kernel [48]. Otherwise, if two nodes $y_i$ and $y_j$ are not connected, we set $W_{ij} = 0$. Obviously, when $y_i$ and $y_j$ are very close, the weight $W_{ij}$ tends to 1; i.e., a larger $W_{ij}$ indicates that pixels $y_i$ and $y_j$ are more similar.
Through the above process, we obtain the weight matrix $W$ of size $n \times n$, which encodes the latent structure of the hyperspectral data. We aim to transfer the structure of $Y$ to the abundance space. As the abundances corresponding to any two close points $y_i$ and $y_j$ are expected to maintain the same structure, we adopt the following graph Laplacian regularizer:
$$\min_X R_2(X) = \frac{1}{2}\sum_{i=1}^n \sum_{j=1}^n \|x_i - x_j\|^2 W_{ij} = \sum_{i=1}^n x_i^T x_i D_{ii} - \sum_{i=1}^n \sum_{j=1}^n x_i^T x_j W_{ij} = \mathrm{Tr}(XDX^T) - \mathrm{Tr}(XWX^T) = \mathrm{Tr}(XLX^T), \quad \text{s.t. } X \geq 0,$$
where $\mathrm{Tr}(\cdot)$ denotes the trace of a matrix and $D$ is a diagonal matrix whose entries are the column (or row, since $W$ is symmetric) sums of $W$, $D_{ii} = \sum_j W_{ij}$. $L = D - W$ is called the graph Laplacian [46]. By minimizing $R_2(X)$, we expect that if two data points $y_i$ and $y_j$ are close in the HSI data space, then $x_i$ and $x_j$ are also close in the abundance space.
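The graph construction described above ($p$-nearest neighbors, heat-kernel weights, $L = D - W$) can be sketched as follows; the dense pairwise-distance computation is only suitable for the toy sizes used here, and the function name is ours:

```python
import numpy as np

def graph_laplacian(Y, p=4, sigma=1.0):
    """Build a p-nearest-neighbor heat-kernel graph over the pixels (columns
    of Y) and return its Laplacian L = D - W. A real HSI would need a fast
    neighbor search instead of the dense distance matrix."""
    n = Y.shape[1]
    d2 = ((Y[:, :, None] - Y[:, None, :]) ** 2).sum(axis=0)  # ||y_i - y_j||^2
    W = np.zeros((n, n))
    for i in range(n):
        nbrs = np.argsort(d2[i])[1:p + 1]        # p nearest neighbors, skipping self
        W[i, nbrs] = np.exp(-d2[i, nbrs] / (2 * sigma ** 2))
    W = np.maximum(W, W.T)                        # symmetrize the graph
    D = np.diag(W.sum(axis=1))
    return D - W

Y = np.random.default_rng(1).random((50, 30))     # toy data: 50 bands, 30 pixels
L = graph_laplacian(Y)
# Every row of a graph Laplacian sums to zero, and L is symmetric positive
# semidefinite -- both easy sanity checks on the construction.
```

These two properties (zero row sums and positive semidefiniteness) are what make $\mathrm{Tr}(XLX^T)$ a valid smoothness penalty on the abundances.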

3.3. The Double Reweighted Sparse and Graph Regularized Unmixing Model

To simultaneously utilize the structure information of the HSI data and the sparsity of the abundance matrix, we propose the double reweighted sparse and graph regularized unmixing model, which can be expressed as follows:
$$(P2): \quad \min_{X \geq 0} \frac{1}{2}\|AX - Y\|_F^2 + \frac{\alpha_1}{2}\mathrm{Tr}(XLX^T) + \alpha_2 \|W_2 \odot W_1 \odot X\|_{1,1},$$
where $\alpha_1 > 0$ and $\alpha_2 > 0$ are regularization parameters. The first term is the reconstruction error, the second term captures the correlation information between the abundance vectors that is hidden in the HSI data, and the third term promotes the sparseness of the abundance matrix.

3.4. Optimization Algorithm

Many optimization methods can be applied to solve the proposed formulation (P2), such as the split Bregman method and the alternating direction method of multipliers [35,36,37,38,39,40,42,43,49,50]. Here, we employ symmetric ADMM to solve (P2) due to its simplicity and efficiency [34].
The optimization problem (P2) can be reformulated as
$$\min_{X, V_1, V_2} \frac{1}{2}\|AX - Y\|_F^2 + \frac{\alpha_1}{2}\mathrm{Tr}(V_1 L V_1^T) + \alpha_2 \|W_2 \odot W_1 \odot V_2\|_{1,1}, \quad \text{subject to } V_1 = X,\ V_2 = X,\ V_2 \in \Omega := \{V_2 \in \mathbb{R}^{m \times n} \mid V_2 \geq 0\}.$$
Then, Equation (13) can be further expressed as
$$\min_{X, Z} \theta_1(X) + \theta_2(Z) \quad \text{subject to } BX + CZ = 0_{2m \times n},\ V_2 \in \Omega,$$
where $Z = (V_1; V_2)$, $\theta_1(X) = \frac{1}{2}\|AX - Y\|_F^2$, $\theta_2(Z) = \frac{\alpha_1}{2}\mathrm{Tr}(V_1 L V_1^T) + \alpha_2 \|W_2 \odot W_1 \odot V_2\|_{1,1}$, and $B$ and $C$ are, respectively, given by

$$B = \begin{pmatrix} I \\ I \end{pmatrix}, \qquad C = \begin{pmatrix} -I & 0 \\ 0 & -I \end{pmatrix}.$$
By introducing the Lagrangian multiplier $\Lambda = (\Lambda_1; \Lambda_2) \in \mathbb{R}^{2m \times n}$, the augmented Lagrangian function of Equation (13) can be written as
$$\begin{aligned}
\mathcal{L}(X, Z, \Lambda) &= \theta_1(X) + \theta_2(Z) + \langle \Lambda, BX + CZ \rangle + \frac{\beta}{2}\|BX + CZ\|_F^2\\
&= \frac{1}{2}\|AX - Y\|_F^2 + \frac{\alpha_1}{2}\mathrm{Tr}(V_1 L V_1^T) + \alpha_2 \|W_2 \odot W_1 \odot V_2\|_{1,1}\\
&\quad + \frac{\beta}{2}\left\|X - V_1 + \frac{\Lambda_1}{\beta}\right\|_F^2 + \frac{\beta}{2}\left\|X - V_2 + \frac{\Lambda_2}{\beta}\right\|_F^2,
\end{aligned}$$
where $\beta > 0$ is a penalty parameter. Note that, for any fixed value of $X$, the variables $V_1$ and $V_2$ are decoupled. Therefore, we can solve the $V_1$- and $V_2$-subproblems separately. In the framework of symmetric ADMM, we use the following iteration scheme to approach the solution of Equation (15):
$$\begin{aligned}
X^{k+1} &= \arg\min_X \frac{1}{2}\|AX - Y\|_F^2 + \frac{\beta}{2}\left\|X - V_1^k + \frac{1}{\beta}\Lambda_1^k\right\|_F^2 + \frac{\beta}{2}\left\|X - V_2^k + \frac{1}{\beta}\Lambda_2^k\right\|_F^2,\\
(W_1)_{ij}^{k+1} &= \frac{1}{\|X^{k+1}(i,:)\|_1 + \epsilon}, \qquad (W_2)_{ij}^{k+1} = \frac{1}{|X_{ij}^{k+1}| + \epsilon},\\
\Lambda_1^{k+\frac{1}{2}} &= \Lambda_1^k + r\beta\,(X^{k+1} - V_1^k), \qquad \Lambda_2^{k+\frac{1}{2}} = \Lambda_2^k + r\beta\,(X^{k+1} - V_2^k),\\
V_1^{k+1} &= \arg\min_{V_1} \frac{\alpha_1}{2}\mathrm{Tr}(V_1 L V_1^T) + \frac{\beta}{2}\left\|X^{k+1} - V_1 + \frac{1}{\beta}\Lambda_1^{k+\frac{1}{2}}\right\|_F^2,\\
V_2^{k+1} &= \arg\min_{V_2 \geq 0} \alpha_2\|W_2^{k+1} \odot W_1^{k+1} \odot V_2\|_{1,1} + \frac{\beta}{2}\left\|X^{k+1} - V_2 + \frac{1}{\beta}\Lambda_2^{k+\frac{1}{2}}\right\|_F^2,\\
\Lambda_1^{k+1} &= \Lambda_1^{k+\frac{1}{2}} + s\beta\,(X^{k+1} - V_1^{k+1}), \qquad \Lambda_2^{k+1} = \Lambda_2^{k+\frac{1}{2}} + s\beta\,(X^{k+1} - V_2^{k+1}).
\end{aligned}$$
For the X-subproblem, we get that
( A T A + 2 β I ) X k + 1 = A T Y + β ( V 1 k + V 2 k ) Λ 1 k Λ 2 k .
The solution of X k + 1 is explicitly given by:
X k + 1 = ( A T A + 2 β I ) 1 A T Y + β ( V 1 k + V 2 k ) Λ 1 k Λ 2 k .
For the $V_1$-subproblem, we solve the following equation:

$$V_1^{k+1}(\alpha_1 L + \beta I) = \beta X^{k+1} + \Lambda_1^{k+\frac{1}{2}},$$

the solution of which is

$$V_1^{k+1} = \left(\beta X^{k+1} + \Lambda_1^{k+\frac{1}{2}}\right)(\alpha_1 L + \beta I)^{-1}.$$
Denoting $\widetilde{W} = W_2 \odot W_1$, the $V_2$-subproblem can be written as

$$V_2^{k+1} = \arg\min_{V_2 \geq 0} \alpha_2 \|\widetilde{W}^{k+1} \odot V_2\|_{1,1} + \frac{\beta}{2}\left\|X^{k+1} - V_2 + \frac{1}{\beta}\Lambda_2^{k+\frac{1}{2}}\right\|_F^2.$$

As the $V_2$-subproblem is component-wise separable, its solution is

$$(V_2^{k+1})_{ij} = \max\left\{\left|\left(X^{k+1} + \frac{1}{\beta}\Lambda_2^{k+\frac{1}{2}}\right)_{ij}\right| - \frac{\alpha_2}{\beta}\widetilde{W}_{ij}^{k+1},\ 0\right\} \circ \mathrm{sign}\left(\left(X^{k+1} + \frac{1}{\beta}\Lambda_2^{k+\frac{1}{2}}\right)_{ij}\right),$$

where $\widetilde{W}_{ij}^{k+1} = \frac{1}{(|X_{ij}^{k+1}| + \epsilon)(\|X^{k+1}(i,:)\|_1 + \epsilon)}$, "$\circ$" represents the point-wise product, and $\mathrm{sign}(x)$ is the signum function.
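The component-wise shrinkage in the $V_2$ update is a weighted soft-thresholding operator, which can be sketched as follows; the names are ours, and the extra clipping line is an assumption reflecting the constraint $V_2 \geq 0$ of the subproblem:

```python
import numpy as np

def soft_threshold(M, T):
    """Entry-wise weighted soft-thresholding: shrink each entry of M toward
    zero by its own threshold T_ij, keeping the sign of the survivor."""
    return np.maximum(np.abs(M) - T, 0.0) * np.sign(M)

M = np.array([[0.8, -0.3],
              [0.1,  1.5]])
T = np.full_like(M, 0.5)                      # uniform toy thresholds
V2 = np.maximum(soft_threshold(M, T), 0.0)    # ANC: keep the result nonnegative
```

Entries whose magnitude falls below their threshold are zeroed outright; survivors are shrunk toward zero, which is how the weighted $\ell_1$ term manifests in the update.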
In summary, the proposed double reweighted sparse graph regularized hyperspectral unmixing algorithm (DRSGHU) is described in Algorithm 1.
Algorithm 1 Pseudocode of DRSGHU
1. Initialization: $V_1^0$, $V_2^0$, $\Lambda_1^0$, $\Lambda_2^0$;
2. set $k = 0$; choose $\alpha_1$, $\alpha_2$, $\beta$, $r$, $s$;
3. repeat
    $X^{k+1} = (A^TA + 2\beta I)^{-1}(A^TY + \beta(V_1^k + V_2^k) - \Lambda_1^k - \Lambda_2^k)$,
    $\widetilde{W}_{ij}^{k+1} = \frac{1}{(|X_{ij}^{k+1}| + \epsilon)(\|X^{k+1}(i,:)\|_1 + \epsilon)}$,
    $\Lambda_1^{k+\frac{1}{2}} = \Lambda_1^k + r\beta(X^{k+1} - V_1^k)$,
    $\Lambda_2^{k+\frac{1}{2}} = \Lambda_2^k + r\beta(X^{k+1} - V_2^k)$,
    $V_1^{k+1} = (\beta X^{k+1} + \Lambda_1^{k+\frac{1}{2}})(\alpha_1 L + \beta I)^{-1}$,
    $(V_2^{k+1})_{ij} = \max\{|(X^{k+1} + \frac{1}{\beta}\Lambda_2^{k+\frac{1}{2}})_{ij}| - \frac{\alpha_2}{\beta}\widetilde{W}_{ij}^{k+1},\ 0\}\,\mathrm{sign}((X^{k+1} + \frac{1}{\beta}\Lambda_2^{k+\frac{1}{2}})_{ij})$,
    $\Lambda_1^{k+1} = \Lambda_1^{k+\frac{1}{2}} + s\beta(X^{k+1} - V_1^{k+1})$,
    $\Lambda_2^{k+1} = \Lambda_2^{k+\frac{1}{2}} + s\beta(X^{k+1} - V_2^{k+1})$;
4. update iteration $k \leftarrow k + 1$;
5. until the stopping criterion is satisfied.
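A compact NumPy sketch of Algorithm 1, under simplifying assumptions: dense matrices, a precomputed graph Laplacian, a fixed iteration count instead of a stopping criterion, and a final projection onto the nonnegative orthant. All function and variable names are ours, not from the paper:

```python
import numpy as np

def drsghu(Y, A, L, alpha1=0.1, alpha2=0.1, beta=0.5,
           r=0.9, s=0.9, eps=1e-6, n_iter=100):
    """Sketch of Algorithm 1 (DRSGHU). Y: l x n data, A: l x m library,
    L: n x n graph Laplacian. Returns the estimated m x n abundance matrix."""
    m, n = A.shape[1], Y.shape[1]
    V1 = np.zeros((m, n)); V2 = np.zeros((m, n))
    Lam1 = np.zeros((m, n)); Lam2 = np.zeros((m, n))
    AtY = A.T @ Y
    inv_x = np.linalg.inv(A.T @ A + 2 * beta * np.eye(m))   # reused each iteration
    inv_v1 = np.linalg.inv(alpha1 * L + beta * np.eye(n))
    for _ in range(n_iter):
        X = inv_x @ (AtY + beta * (V1 + V2) - Lam1 - Lam2)  # X-subproblem
        row_l1 = np.abs(X).sum(axis=1, keepdims=True)
        W = 1.0 / ((np.abs(X) + eps) * (row_l1 + eps))      # adaptive double weights
        Lam1 = Lam1 + r * beta * (X - V1)                   # intermediate multipliers
        Lam2 = Lam2 + r * beta * (X - V2)
        V1 = (beta * X + Lam1) @ inv_v1                     # graph-regularized update
        M = X + Lam2 / beta
        V2 = np.maximum(np.abs(M) - (alpha2 / beta) * W, 0) * np.sign(M)
        V2 = np.maximum(V2, 0)                              # ANC on V2
        Lam1 = Lam1 + s * beta * (X - V1)                   # final multipliers
        Lam2 = Lam2 + s * beta * (X - V2)
    return np.maximum(X, 0)
```

With an all-zero Laplacian the graph term vanishes and the routine reduces to a plain double reweighted sparse unmixing sketch, which is a convenient way to smoke-test the solver before plugging in a real graph.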

4. Experimental Results

In this section, we test DRSGHU on two simulated hyperspectral data sets and one real data set to evaluate the effectiveness of the proposed algorithm. In the synthetic data experiments, we compare our method with several representative methods: (1) SUnSAL [21]; (2) CLSUnSAL [22]; (3) DRSU-TV [4]; (4) GLUP-Lap [28]; (5) the graph-regularized unmixing method (without sparse constraint on the abundances), abbreviated as GraphHU; and (6) the sparse graph-regularized unmixing method (setting $W_1 = W_2$ to all-ones matrices in problem (P2)), abbreviated as SGHU. For quantitative comparison, the signal-to-reconstruction error (SRE) is introduced to measure the quality of the estimated fractional abundances of the endmembers. The SRE is defined as:
$$\mathrm{SRE} := \frac{E\left[\|X_{\mathrm{true}}\|_F^2\right]}{E\left[\|X_{\mathrm{true}} - \widehat{X}\|_F^2\right]},$$
measured in dB: $\mathrm{SRE}(\mathrm{dB}) = 10\log_{10}(\mathrm{SRE})$, where $E$ denotes the expected value; see [22]. Here, $X_{\mathrm{true}}$ is the true fractional abundance of the endmembers and $\widehat{X}$ is the estimated abundance. Generally speaking, a larger SRE indicates a better approximation of the truth. For initialization, we set $V_1^0 = 0$, $V_2^0 = 0$, $\Lambda_1^0 = 0$, $\Lambda_2^0 = 0$, where $0$ is the zero matrix of size $m \times n$. The parameters $r$ and $s$ are both set to $0.9$, the same as in [51]. The stopping criterion is $\frac{\|X^k - X^{k-1}\|_F}{\|X^{k-1}\|_F} < 10^{-3}$ or that the maximum iteration number is reached, which is set to 500 in our experiments.
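For reference, the SRE in dB can be computed as follows for a single abundance estimate (dropping the expectation, as is common in practice); the toy matrices are illustrative:

```python
import numpy as np

def sre_db(X_true, X_hat):
    """Signal-to-reconstruction error in dB: larger values mean a better estimate."""
    num = np.linalg.norm(X_true, 'fro') ** 2
    den = np.linalg.norm(X_true - X_hat, 'fro') ** 2
    return 10.0 * np.log10(num / den)

X_true = np.array([[1.0, 0.0], [0.0, 1.0]])
X_hat = np.array([[0.9, 0.0], [0.0, 1.1]])
score = sre_db(X_true, X_hat)   # 10*log10(2 / 0.02), i.e. about 20 dB
```

An estimate with half the error energy gains about 3 dB, so differences of a few dB between methods are substantial.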
To achieve the best performance, we test the proposed method for different combinations of α 1 , α 2 and β according to different noise levels. Specifically, α 1 is selected from { 0.001 , 0.01 , 0.05 , 0.1 , 0.5 , 1 , 5 , 10 , 100 } , α 2 is selected from { 0.01 , 0.05 , 0.1 , 0.5 , 1 } , and the penalty parameter β is selected from { 0.01 , 0.05 , 0.1 , 0.5 , 1 , 5 } . For a fair comparison, we also carefully tune the parameters in all competing methods to get the best possible performance.
All experiments are performed under Windows 10 and MATLAB R2016b on a desktop with an Intel(R) Core(TM) i7-7700 CPU at 3.60 GHz and 16.0 GB of memory.

4.1. Experiments on Simulated Data

In the synthetic data experiments, we evaluate the unmixing performance of DRSGHU under different signal-to-noise ratios ($\mathrm{SNR} \equiv 10\log_{10}(\|AX\|_F^2 / \|N\|_F^2)$), different spectral libraries, and different simulated data. The simulated data are contaminated by Gaussian noise at three SNR levels: 20, 30 and 40 dB. We consider the following two spectral libraries in our simulated data experiments:
  • $A_1 \in \mathbb{R}^{100 \times 120}$ is generated from a library of 262 spectral signatures of materials generally found on satellites, from the National Aeronautics and Space Administration Johnson Space Center (NASA JSC) Spacecraft Materials Spectral Database [52], with 100 spectral bands.
  • $A_2 \in \mathbb{R}^{224 \times 240}$ is generated from a random selection of 240 materials from the U.S. Geological Survey (USGS) digital spectral library (splib06) [24]. The reflectance values are measured over 224 spectral bands uniformly distributed in the interval 0.4–2.5 $\mu$m.
In our simulation experiments, we test two different data sets. The details are presented in the following examples.
Example 1.
In this example, we test the performance on simulated data cube 1 (DC1). This simulated data cube contains $100 \times 100$ pixels generated from nine randomly selected signatures from $A_1$, and each pixel has 100 bands. Gaussian white noise with different SNRs (20 dB, 30 dB or 40 dB) is then added to DC1. The true fractional abundances of the nine endmembers, which are piecewise smooth with sharp transitions, are shown in Figure 1. Table 1 lists the SRE results for all simulated data sets. From this table, we observe that DRSGHU obtains a higher SRE than GLUP-Lap, GraphHU, SGHU and SUnSAL at all noise levels. This indicates that the double reweighted $\ell_1$ sparse prior and the graph regularizer mutually enhance the abundance estimation accuracy. As the noise level decreases, the margin by which DRSGHU improves over SUnSAL increases. This is because, when the observed HSI data contain less noise, the graph Laplacian matrix becomes more reliable. In most cases, DRSGHU achieves the best performance and DRSU-TV achieves the second-best results, except when the noise level is 20 dB. One possible reason is that the graph Laplacian L is obtained from the very noisy hyperspectral image rather than the clean image, making the graph Laplacian less reliable.
Figure 2 shows the abundance maps estimated by DRSGHU at a noise level of 30 dB. It can be seen that the unmixing results of DRSGHU are very close to the true fractional abundances presented in Figure 1. For a more detailed performance evaluation, we further present the relative errors between the estimated and true fractional abundances of all the endmembers in Table 2. In Table 2, the relative errors obtained by DRSGHU are the smallest for each endmember in most cases, and the average relative error over all endmembers is also the smallest. This indicates that the estimation by DRSGHU approximates the truth best.
Furthermore, to illustrate the sparsity of the fractional abundances, Figure 3 presents the estimated fractional abundances of all endmembers in the library $A_1$ for the first 150 pixels at a noise level of 30 dB. We observe that DRSU-TV and DRSGHU produce fewer outliers than the other methods. As both methods employ the double reweighted sparse prior, they obtain relatively sparser solutions. This suggests that the double weighting matrices promote the sparsity of the abundance maps. We also see that the unmixing results of SGHU are much better than those of SUnSAL. This is because the graph regularizer is imposed on top of the sparsity constraint, so the latent structure of the HSI data can be utilized. It indicates that incorporating the graph regularizer into the unmixing problem improves the abundance estimation accuracy.
Example 2.
In this example, we test our method on simulated data cube 2 (DC2). This simulated data cube contains $75 \times 75$ pixels generated from five randomly selected signatures from $A_2$. DC2 has 100 bands per pixel [5]. The true abundances of the five endmembers are shown in Figure 4. To illustrate the effectiveness of the proposed method, we present the abundance of endmember 4 (EM4) estimated by the different methods in Figure 5. We observe that, when the noise level is very high (SNR = 20 dB), no method performs very well and the unmixing results still contain some noise. As the noise level decreases, the unmixing results become much more satisfactory. Although the unmixing results of DRSU-TV are much better than those of SUnSAL and CLSUnSAL, some unexpected transitions occur and some squares are not correctly estimated, especially when the noise level is high, as shown in Figure 5. This is possibly because DRSU-TV links a pixel with only four neighbors and tends to produce piecewise smooth results. On the contrary, the unmixing results of DRSGHU are visually much more acceptable and recover the abundances almost perfectly. For quantitative comparison, we present the relative errors between the estimated fractional abundance of EM4 and the true one at different noise levels in Table 3. It can be seen that DRSGHU achieves the smallest relative errors for EM4.

4.2. Experiments on Real Data

In this subsection, we use the well-known Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) Cuprite data set (http://aviris.jpl.nasa.gov/html/aviris.freedata.html) in our real data experiments. We use a 350 × 350 -pixel subset with 188 spectral bands (see [5] for more details). A mineral map produced in 1995 by the USGS, in which the Tricorder 3.3 software product was used to map the different minerals present in the Cuprite mining district (http://speclab.cr.usgs.gov/spectral.lib06), serves as a reference. Before unmixing the AVIRIS Cuprite hyperspectral data, essential calibration was performed to mitigate the mismatches between the real image spectra and the spectra available in the library [5]. For the proposed method, the regularization parameters α 1 and α 2 are set to 10 and 0.1, respectively, and the penalty parameter β is set to 0.01.
The fractional abundance maps estimated for Alunite, Muscovite, and Chalcedony by DRSU-TV, GraphHU, SGHU and DRSGHU are shown in Figure 6. In this figure, we also present the spatial distribution of these materials extracted from the Tricorder software product, which provides a reference for visual assessment. All compared algorithms produce consistent unmixing results. Compared with DRSU-TV, the GraphHU, SGHU and DRSGHU results reveal more spatial details for Alunite and Muscovite and contain fewer outliers for Chalcedony.

5. Discussion

5.1. The Graph Laplacian Matrix L

Our numerical experiments show that choosing an appropriate graph Laplacian matrix L is very important for good performance, since L reflects the similarity relationships among the spectral pixels. There are many ways to construct L, most of which define the graph Laplacian directly from the observed data, such as the ones mentioned in Section 2. However, when the observed data are heavily corrupted by noise, a graph Laplacian constructed directly from them may be unreliable. It is therefore helpful to preprocess the observed data Y to obtain a better estimate of L; for example, a denoising method can be applied to the observed data first. Table 4 lists the unmixing results of DRSGHU after applying a Gaussian filter or a bilateral filter [53] to the observed data at SNR = 20 dB. The “reference” column reports the SREs obtained when the clean data (not contaminated by noise) are used to construct the graph Laplacian matrix. The table clearly demonstrates the advantage of pre-denoising before constructing L. Other denoising methods, such as non-local total variation, BM3D, and sparse representation [2,26], could also be applied to obtain better performance.
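To make the construction concrete, the sketch below builds a k-nearest-neighbor graph Laplacian L = D − W with heat-kernel weights from the pixel spectra, with an optional smoothing step standing in for the pre-denoising. The parameters k and sigma, and the use of a simple 1D Gaussian filter along the pixel axis as a crude surrogate for the spatial denoiser, are our own illustrative choices rather than the paper's exact construction.

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

def graph_laplacian(Y, k=4, sigma=1.0, denoise=False):
    """Build a k-NN graph Laplacian L = D - W from pixel spectra.

    Y : (bands, pixels) array; each column is one pixel spectrum.
    When `denoise` is True, the data are smoothed first, which can give
    a more reliable L under heavy noise (cf. Table 4). Here a 1D filter
    along the pixel axis is a stand-in for a true spatial filter.
    """
    if denoise:
        Y = gaussian_filter1d(Y, sigma=1.0, axis=1)
    n = Y.shape[1]
    # Pairwise squared Euclidean distances between pixel spectra.
    sq = np.sum(Y**2, axis=0)
    D2 = sq[:, None] + sq[None, :] - 2.0 * Y.T @ Y
    np.fill_diagonal(D2, np.inf)                      # exclude self-loops
    W = np.zeros((n, n))
    for i in range(n):
        nbrs = np.argsort(D2[i])[:k]                  # k nearest neighbors
        W[i, nbrs] = np.exp(-D2[i, nbrs] / sigma**2)  # heat-kernel weights
    W = np.maximum(W, W.T)                            # symmetrize
    return np.diag(W.sum(axis=1)) - W                 # L = D - W
```

By construction each row of L sums to zero and L is symmetric positive semidefinite, which is what the graph regularization term requires.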

5.2. Parameter Analysis

In this section, we analyze the influence of the parameters α 1 , α 2 and β on DRSGHU. For simplicity, we use the simulated data DC1 under different noise levels as an example. We separate the three parameters into two groups: the regularization parameters α 1 and α 2 , and the penalty parameter β . To analyze the influence of the regularization parameters, we fix the penalty parameter β . Figure 7 presents the SRE as a function of α 1 and α 2 . A general trend is that the optimal regularization parameters tend to be larger when the noise level is higher. In addition, the surface is not notably steep, which suggests that the algorithm is quite robust around the optimal parameters. To analyze the influence of the penalty parameter, we fix the regularization parameters α 1 and α 2 . Figure 8 presents the SRE as a function of β . The optimal penalty parameter also becomes larger at higher noise levels. Moreover, the SREs are relatively stable around the optimal penalty parameter under all noise levels, again suggesting the robustness of our algorithm.
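The SRE plotted in Figures 7 and 8 is, assuming the definition standard in the sparse unmixing literature [5], the signal-to-reconstruction error in decibels; higher values indicate better abundance estimates:

```python
import numpy as np

def sre_db(X_est, X_true):
    """Signal-to-reconstruction error in dB:
    SRE = 10 * log10( ||X_true||_F^2 / ||X_est - X_true||_F^2 )."""
    num = np.sum(X_true**2)
    den = np.sum((X_est - X_true)**2)
    return 10.0 * np.log10(num / den)

# Sanity check: halving every abundance leaves an error energy of
# 0.25 * ||X_true||^2, i.e. an SRE of 10*log10(4) ≈ 6.02 dB,
# independent of the actual abundance values.
X_true = np.random.rand(9, 100)
print(round(sre_db(0.5 * X_true, X_true), 2))  # 6.02
```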

5.3. Convergence Analysis

A theoretical convergence analysis for the reweighted norm minimization problem is difficult to establish [44]. Nevertheless, in our experiments we observe that the relative error between the restored abundance X and the true abundance X t r u e decreases as the iterations proceed. We take the simulated data DC1 as an example to illustrate the convergence of our method, setting the maximum number of iterations to 1000. As shown in Figure 9, the relative error changes little after 200 iterations, which indicates the stability of our method.
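The monitoring used to produce such a curve can be sketched generically as follows; the `step` function here is only a placeholder for one iteration of the actual solver (not reproduced), and the stopping rule based on relative change between iterates is our illustrative choice.

```python
import numpy as np

def run_with_monitor(step, X0, X_true, max_iter=1000, tol=1e-6):
    """Iterate `step` and record the relative error to the ground truth.

    Stops early when the relative change between consecutive iterates
    drops below `tol`; `errors` gives a curve like the one in Figure 9.
    """
    X, errors = X0, []
    for _ in range(max_iter):
        X_new = step(X)
        errors.append(np.linalg.norm(X_new - X_true) / np.linalg.norm(X_true))
        if np.linalg.norm(X_new - X) / max(np.linalg.norm(X), 1e-12) < tol:
            X = X_new
            break
        X = X_new
    return X, errors

# Toy contraction toward X_true: the error halves each step, so the
# recorded curve decreases monotonically, as observed for DRSGHU.
X_true = np.ones((3, 3))
X, errs = run_with_monitor(lambda X: X_true + 0.5 * (X - X_true),
                           np.zeros((3, 3)), X_true)
```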

6. Conclusions

In this paper, we propose a double reweighted sparse and graph regularization based formulation for hyperspectral unmixing and solve the resulting optimization problem by the symmetric alternating direction method of multipliers. The proposed method takes both the structure information of the HSI data and the sparsity of the abundance matrix into consideration: the graph regularization term captures the structure information of the HSI data, and the double reweighted l 1 -norm describes the sparsity of the endmembers and the fractional abundance maps. The experiments indicate that the graph regularizer and the weighted sparse regularizer both improve the unmixing performance. It is worth pointing out that the graph Laplacian matrix obtained from the observed data is important to the unmixing performance; when the noise level is very high, pre-denoising the hyperspectral data before computing the graph Laplacian matrix yields better results.
Some issues are worth investigating in the future. We only consider Gaussian noise in this work, whereas in real applications HSIs are usually degraded by several types of noise, such as mixed Gaussian and stripe noise. We will study how to remove these types of noise and perform unmixing simultaneously. In addition, we utilize the graph regularizer to capture the structure information of hyperspectral images; other regularizers, such as nonlocal TV and low-rank tensor models, may also be well suited for revealing the spatial and spectral information of hyperspectral images and deserve further study.

Author Contributions

All authors contributed to the design of the methodology and the validation of experiments. S.W. performed the numerical simulations, analyzed the results and wrote the manuscript. T.-Z.H., X.-L.Z., G.L. and Y.C. reviewed and revised the paper.

Funding

This research was funded by NSFC (61772003) and the Fundamental Research Funds for the Central Universities (ZYGX2016J132).

Acknowledgments

The authors would like to acknowledge the support of NSFC (61772003) and the Fundamental Research Funds for the Central Universities (ZYGX2016J132).

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Manolakis, D.; Siracusa, C.; Shaw, G. Hyperspectral subpixel target detection using the linear mixing model. IEEE Trans. Geosci. Remote Sens. 2001, 39, 1392–1409.
2. Zhao, Y.Q.; Yang, J. Hyperspectral image denoising via sparse representation and low-rank constraint. IEEE Trans. Geosci. Remote Sens. 2015, 53, 296–308.
3. Li, C.; Sun, T.; Kelly, K.F.; Zhang, Y. A compressive sensing and unmixing scheme for hyperspectral data processing. IEEE Trans. Image Process. 2012, 21, 1200–1210.
4. Wang, R.; Li, H.C.; Pizurica, A.; Li, J.; Plaza, A.; Emery, W.J. Hyperspectral unmixing using double reweighted sparse regression and total variation. IEEE Geosci. Remote Sens. Lett. 2017, 14, 1146–1150.
5. Iordache, M.D.; Bioucas-Dias, J.M.; Plaza, A. Total variation spatial regularization for sparse hyperspectral unmixing. IEEE Trans. Geosci. Remote Sens. 2012, 50, 4484–4502.
6. Du, Q.; Raksuntorn, N.; Younan, N.H.; King, R.L. End-member extraction for hyperspectral image analysis. Appl. Opt. 2008, 47, F77–F84.
7. Heinz, D.; Chang, C.I. Fully constrained least squares linear mixture analysis for material quantification in hyperspectral imagery. IEEE Trans. Geosci. Remote Sens. 2001, 39, 529–545.
8. He, W.; Zhang, H.; Zhang, L. Sparsity-regularized robust non-negative matrix factorization for hyperspectral unmixing. IEEE J. Sel. Top. Appl. Earth Observ. 2016, 9, 4267–4279.
9. Nascimento, J.M.; Dias, J.M.B. Vertex component analysis: A fast algorithm to unmix hyperspectral data. IEEE Trans. Geosci. Remote Sens. 2005, 43, 898–910.
10. Liu, X.; Xia, W.; Wang, B.; Zhang, L. An approach based on constrained nonnegative matrix factorization to unmix hyperspectral data. IEEE Trans. Geosci. Remote Sens. 2011, 49, 757–772.
11. Winter, M.E. N-FINDR: An algorithm for fast autonomous spectral end-member determination in hyperspectral data. In Proceedings of SPIE's International Symposium on Optical Science, Engineering, and Instrumentation, Denver, CO, USA, 27 October 1999; pp. 266–275.
12. Nascimento, J.M.; Dias, J.M.B. Does independent component analysis play a role in unmixing hyperspectral data? IEEE Trans. Geosci. Remote Sens. 2005, 43, 175–187.
13. Li, J.; Bioucas-Dias, J.M. Minimum volume simplex analysis: A fast algorithm to unmix hyperspectral data. In Proceedings of the IEEE International Geoscience and Remote Sensing Symposium (IGARSS), Boston, MA, USA, 7–11 July 2008; Volume 3, pp. 250–253.
14. Chang, C.I.; Wu, C.C.; Liu, W.M.; Ouyang, Y.C. A new growing method for simplex-based endmember extraction algorithm. IEEE Trans. Geosci. Remote Sens. 2006, 44, 2804–2819.
15. Hyvärinen, A. Fast and robust fixed-point algorithms for independent component analysis. IEEE Trans. Neural Netw. 1999, 10, 626–634.
16. Lee, D.D.; Seung, H.S. Learning the parts of objects by non-negative matrix factorization. Nature 1999, 401, 788–791.
17. Huck, A.; Guillaume, M. Robust hyperspectral data unmixing with spatial and spectral regularized NMF. In Proceedings of the 2nd Workshop on Hyperspectral Image and Signal Processing: Evolution in Remote Sensing (WHISPERS), Reykjavik, Iceland, 14–16 June 2010; pp. 1–4.
18. Qian, Y.; Jia, S.; Zhou, J.; Robles-Kelly, A. Hyperspectral unmixing via sparsity-constrained nonnegative matrix factorization. IEEE Trans. Geosci. Remote Sens. 2011, 49, 4282–4297.
19. Lu, X.; Wu, H.; Yuan, Y.; Yan, P.; Li, X. Manifold regularized sparse NMF for hyperspectral unmixing. IEEE Trans. Geosci. Remote Sens. 2013, 51, 2815–2826.
20. Zhu, F.; Wang, Y.; Xiang, S.; Fan, B.; Pan, C. Structured sparse method for hyperspectral unmixing. ISPRS J. Photogramm. 2014, 88, 101–118.
21. Bioucas-Dias, J.M.; Figueiredo, M.A. Alternating direction algorithms for constrained sparse regression: Application to hyperspectral unmixing. In Proceedings of the 2nd Workshop on Hyperspectral Image and Signal Processing: Evolution in Remote Sensing (WHISPERS), Reykjavik, Iceland, 14–16 June 2010; pp. 1–4.
22. Iordache, M.D.; Bioucas-Dias, J.M.; Plaza, A. Collaborative sparse regression for hyperspectral unmixing. IEEE Trans. Geosci. Remote Sens. 2014, 52, 341–354.
23. Zheng, C.Y.; Li, H.; Wang, Q.; Chen, C.P. Reweighted sparse regression for hyperspectral unmixing. IEEE Trans. Geosci. Remote Sens. 2016, 54, 479–488.
24. Tang, W.; Shi, Z.; Wu, Y.; Zhang, C. Sparse unmixing of hyperspectral data using spectral a priori information. IEEE Trans. Geosci. Remote Sens. 2015, 53, 770–783.
25. Zhao, X.L.; Wang, F.; Huang, T.Z.; Ng, M.K.; Plemmons, R.J. Deblurring and sparse unmixing for hyperspectral images. IEEE Trans. Geosci. Remote Sens. 2013, 51, 4045–4058.
26. Zhong, Y.; Feng, R.; Zhang, L. Non-local sparse unmixing for hyperspectral remote sensing imagery. IEEE J. Sel. Top. Appl. Earth Observ. 2014, 7, 1889–1909.
27. Giampouras, P.V.; Themelis, K.E.; Rontogiannis, A.A.; Koutroumbas, K.D. Hyperspectral image unmixing via simultaneously sparse and low rank abundance matrix estimation. In Proceedings of the 7th Workshop on Hyperspectral Image and Signal Processing: Evolution in Remote Sensing (WHISPERS), Tokyo, Japan, 2–5 June 2015; pp. 1–4.
28. Ammanouil, R.; Ferrari, A.; Richard, C. A graph Laplacian regularization for hyperspectral data unmixing. In Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Brisbane, Australia, 19–24 April 2015; pp. 1637–1641.
29. Ammanouil, R.; Ferrari, A.; Richard, C. Hyperspectral data unmixing with graph-based regularization. In Proceedings of the 7th Workshop on Hyperspectral Image and Signal Processing: Evolution in Remote Sensing (WHISPERS), Tokyo, Japan, 2–5 June 2015; pp. 1–4.
30. Rajabi, R.; Ghassemian, H. Sparsity constrained graph regularized NMF for spectral unmixing of hyperspectral data. J. Indian Soc. Remote. 2015, 43, 269–278.
31. Bioucas-Dias, J.M.; Plaza, A.; Dobigeon, N.; Parente, M.; Du, Q.; Gader, P.; Chanussot, J. Hyperspectral unmixing overview: Geometrical, statistical, and sparse regression-based approaches. IEEE J. Sel. Top. Appl. Earth Observ. 2012, 5, 354–379.
32. Heylen, R.; Parente, M.; Gader, P. A review of nonlinear hyperspectral unmixing methods. IEEE J. Sel. Top. Appl. Earth Observ. 2014, 7, 1844–1868.
33. Iordache, M.D.; Bioucas-Dias, J.M.; Plaza, A. Sparse unmixing of hyperspectral data. IEEE Trans. Geosci. Remote Sens. 2011, 49, 2014–2039.
34. He, B.; Ma, F.; Yuan, X. On the step size of symmetric alternating directions method of multipliers. Available online: http://www.optimization-online.org (accessed on 26 May 2015).
35. Wang, S.; Huang, T.Z.; Zhao, X.L.; Mei, J.J.; Huang, J. Speckle noise removal in ultrasound images by first- and second-order total variation. Numer. Algorithms 2018, 78, 513–533.
36. Liu, J.; Huang, T.Z.; Selesnick, I.W.; Lv, X.G.; Chen, P.Y. Image restoration using total variation with overlapping group sparsity. Inf. Sci. 2015, 295, 232–246.
37. Ji, T.Y.; Huang, T.Z.; Zhao, X.L.; Ma, T.H.; Liu, G. Tensor completion using total variation and low-rank matrix factorization. Inf. Sci. 2016, 326, 243–257.
38. Liu, J.; Huang, T.Z.; Lv, X.G.; Wang, S. High-order total variation-based Poissonian image deconvolution with spatially adapted regularization parameter. Appl. Math. Modell. 2017, 45, 516–529.
39. Mei, J.J.; Dong, Y.; Huang, T.Z.; Yin, W. Cauchy noise removal by nonconvex ADMM with convergence guarantees. J. Sci. Comput. 2018, 74, 743–766.
40. Huang, Y.M.; Ng, M.K.; Wen, Y.W. A new total variation method for multiplicative noise removal. SIAM J. Imaging Sci. 2009, 2, 20–40.
41. Boyd, S.; Parikh, N.; Chu, E.; Peleato, B.; Eckstein, J. Distributed optimization and statistical learning via the alternating direction method of multipliers. Found. Trends Mach. Learn. 2011, 3, 1–122.
42. Chen, Y.; Huang, T.Z.; Zhao, X.L.; Deng, L.J.; Huang, J. Stripe noise removal of remote sensing images by total variation regularization and group sparsity constraint. Remote Sens. 2017, 9, 559.
43. Zhao, X.L.; Wang, F.; Ng, M.K. A new convex optimization model for multiplicative noise and blur removal. SIAM J. Imaging Sci. 2014, 7, 456–475.
44. Candes, E.J.; Wakin, M.B.; Boyd, S.P. Enhancing sparsity by reweighted l1 minimization. J. Fourier Anal. Appl. 2008, 14, 877–905.
45. He, W.; Zhang, H.; Zhang, L. Total variation regularized reweighted sparse nonnegative matrix factorization for hyperspectral unmixing. IEEE Trans. Geosci. Remote Sens. 2017, 55, 3909–3921.
46. Cai, D.; He, X.; Han, J.; Huang, T.S. Graph regularized nonnegative matrix factorization for data representation. IEEE Trans. Pattern Anal. Mach. Intell. 2011, 33, 1548–1560.
47. Chung, F.R. Spectral Graph Theory; American Mathematical Society: Providence, RI, USA, 1997; Volume 92.
48. Xiang, S.; Nie, F.; Zhang, C. Semi-supervised classification via local spline regression. IEEE Trans. Pattern Anal. Mach. Intell. 2010, 32, 2039–2053.
49. Afonso, M.V.; Bioucas-Dias, J.M.; Figueiredo, M.A. An augmented Lagrangian approach to the constrained optimization formulation of imaging inverse problems. IEEE Trans. Image Process. 2011, 20, 681–695.
50. Chen, C.; Ng, M.K.; Zhao, X.L. Alternating direction method of multipliers for nonlinear image restoration problems. IEEE Trans. Image Process. 2015, 24, 33–43.
51. He, B.; Liu, H.; Wang, Z.; Yuan, X. A strictly contractive Peaceman–Rachford splitting method for convex programming. SIAM J. Optim. 2014, 24, 1011–1040.
52. Schildknecht, T.; Vannanti, A.; Krag, H.; Erd, C. Reflectance spectra of space debris in GEO. In Proceedings of the 2009 AMOS Technical Conference, Maui, HI, USA, 1–4 September 2009.
53. Tomasi, C.; Manduchi, R. Bilateral filtering for gray and color images. In Proceedings of the Sixth International Conference on Computer Vision, Bombay, India, 7 January 1998; pp. 839–846.
Figure 1. True fractional abundances of endmembers of DC1.
Figure 2. Abundances of DC1 estimated by DRSGHU ( α 1 = 0.01 , α 2 = 1 , β = 0.5 ); the data are contaminated with Gaussian white noise with an SNR of 30 dB.
Figure 3. Top panel, from left to right: Ground-truth fractional abundances in DC1 containing 150 pixels and k = 9 endmembers randomly extracted from library A 1 , and fractional abundances estimated by SUnSAL, CLSUnSAL, and DRSU-TV. Bottom panel: Fractional abundances estimated by GLUP-Lap, GraphHU, SGHU and DRSGHU. Note that the lines denote the abundance of a certain endmember in the first 150 pixels of the images.
Figure 4. True fractional abundances of endmembers of DC2.
Figure 5. Abundance maps obtained by different methods for the fourth endmember (EM4) in DC2. From left to right: the unmixing results for the noise levels 20 dB, 30 dB and 40 dB. From top to bottom: The unmixing results by SUnSAL, CLSUnSAL, DRSU-TV, GLUP-Lap, GraphHU, SGHU and DRSGHU.
Figure 6. Fractional abundance maps estimated for the AVIRIS Cuprite subscene with the USGS library. From top to bottom: Software product, the estimated fractional abundance maps by DRSU-TV, GraphHU, SGHU and DRSGHU.
Figure 7. SRE (dB) as a function of parameters α 1 and α 2 for DC1 with different SNR levels. From left to right: SNR = 20 dB, β = 1 ; SNR = 30 dB, β = 0.5 ; and SNR = 40 dB, β = 0.5 .
Figure 8. SRE (dB) as a function of parameter β for DC1 with different noise levels. From left to right: SNR = 20 dB, α 1 = 0.05 , α 2 = 1 ; SNR = 30 dB, α 1 = 0.01 , α 2 = 1 ; and SNR = 40 dB, α 1 = 0.001 , α 2 = 0.1 .
Figure 9. Relative error as a function of iteration number for DC1 with different noise levels. From left to right: SNR = 20 dB, SNR = 30 dB, and SNR = 40 dB.
Table 1. SRE (dB) values achieved by different methods for noise levels 20 dB, 30 dB and 40 dB on all simulation data.
| Data | SNR (dB) | SUnSAL | CLSUnSAL | DRSU-TV | GLUP-Lap | GraphHU | SGHU | DRSGHU |
|------|----------|--------|----------|---------|----------|---------|------|--------|
| DC1  | 20 | 3.89  | 4.84  | 10.69 | 5.23  | 4.19  | 5.52  | 7.72  |
| DC1  | 30 | 8.54  | 9.47  | 19.81 | 12.46 | 11.19 | 11.96 | 20.26 |
| DC1  | 40 | 14.13 | 15.84 | 28.64 | 19.28 | 16.49 | 17.40 | 28.85 |
| DC2  | 20 | 2.98  | 6.20  | 10.72 | 7.58  | 5.75  | 6.82  | 12.27 |
| DC2  | 30 | 6.20  | 7.03  | 25.86 | 18.07 | 11.63 | 20.27 | 26.03 |
| DC2  | 40 | 11.30 | 11.94 | 30.42 | 27.58 | 18.95 | 28.99 | 49.19 |
Table 2. Relative errors of the estimated fractional abundances of chosen endmembers for a noise level of 30 dB of DC1.
| No. | SUnSAL | CLSUnSAL | DRSU-TV | GLUP-Lap | GraphHU | SGHU | DRSGHU |
|-----|--------|----------|---------|----------|---------|------|--------|
| Endmember 1 | 0.2123 | 0.1808 | 0.0488 | 0.0768 | 0.1089 | 0.1084 | 0.0388 |
| Endmember 2 | 0.2400 | 0.1205 | 0.1006 | 0.1391 | 0.2306 | 0.1773 | 0.1232 |
| Endmember 3 | 0.0505 | 0.0658 | 0.0717 | 0.0861 | 0.1392 | 0.0690 | 0.0702 |
| Endmember 4 | 0.2140 | 0.2152 | 0.1533 | 0.2118 | 0.3095 | 0.2631 | 0.1328 |
| Endmember 5 | 0.3633 | 0.3972 | 0.0682 | 0.2328 | 0.2168 | 0.2015 | 0.0498 |
| Endmember 6 | 0.4986 | 0.4864 | 0.1154 | 0.2341 | 0.3873 | 0.3574 | 0.1079 |
| Endmember 7 | 0.4645 | 0.3654 | 0.1390 | 0.2049 | 0.3592 | 0.3619 | 0.1531 |
| Endmember 8 | 0.1715 | 0.1115 | 0.0625 | 0.0810 | 0.1042 | 0.1124 | 0.0496 |
| Endmember 9 | 0.0797 | 0.0290 | 0.1060 | 0.1086 | 0.1551 | 0.0517 | 0.0940 |
| Average     | 0.2549 | 0.2191 | 0.0962 | 0.1528 | 0.2234 | 0.2111 | 0.0819 |
Table 3. Relative errors of the estimated fractional abundances of EM4 for different noise levels of DC2.
| SNR | SUnSAL | CLSUnSAL | DRSU-TV | GLUP-Lap | GraphHU | SGHU | DRSGHU |
|-----|--------|----------|---------|----------|---------|------|--------|
| 20 dB | 0.5397 | 0.4010 | 0.1190 | 0.1986 | 0.1153 | 0.1594 | 0.0722 |
| 30 dB | 0.2451 | 0.2033 | 0.0453 | 0.0531 | 0.0170 | 0.0446 | 0.0133 |
| 40 dB | 0.1214 | 0.0976 | 0.0267 | 0.0152 | 0.0060 | 0.0104 | 0.0028 |
Table 4. Performance of different methods to construct W for noise level 20 dB.
| Data | Non-Filter | Gaussian Filter | Bilateral Filter | Reference |
|------|------------|-----------------|------------------|-----------|
| DC1  | 7.61  | 10.19 | 9.49  | 11.72 |
| DC2  | 12.27 | 12.84 | 13.00 | 14.38 |
