2.1. Related Work
The TV regularization term has also been successfully applied to HU, where it enjoys an extremely wide range of applications. For example, a TV regularization term is applied to SUnSAL (SUnSAL-TV) [23]; SUnSAL-TV involves a 2D total variation (2DTV) regularization that effectively improves the smoothness between pixels, thereby improving the unmixing accuracy. Building on the TV term, the 2DTV term has been successfully applied in many areas of image processing to exploit the information contained in space and spectrum, such as HU, denoising [24,25,26], medical imaging [27], etc. There are many regularization methods for graph signals [28], among which TV regularization terms are very widely used. The graph TV (g-TV) regularization term is therefore also a common regularizer for hyperspectral imaging [29]; examples include graph NMF (GNMF) [30], structured sparse regularization NMF (SSNMF) [31], and superpixel-based graph-regularized multilinear mixture models (G-MLM) [32]. Among them, GNMF aims to enhance the accuracy of traditional NMF methods and reduce the computational complexity; it considers not only the Euclidean internal structure of hyperspectral data but also its intrinsic Riemannian geometry. However, GNMF ignores the sparse nature of abundance. Therefore, a graph-regularized NMF (GLNMF) method [33] was proposed for spectral unmixing, which adds abundance sparsity constraints on top of graph regularization and obtains good results. Since 2DTV and g-TV are the most mature variants, most current studies applying TV terms to spectral unmixing use 2DTV or g-TV [34]. However, many of these g-TV methods are computationally expensive, particularly when calculating pairwise similarities between all pixels. To address this, the Nyström method [35] is employed, and unmixing can be accomplished using the low-rank approximation of the graph Laplacian matrix that it yields. The gTVMBO method [22] exploits the spectral information of different pixels that share similar features, preserves the obvious edges of the abundance maps, and adopts the Nyström method to reduce the computational complexity.
According to reference [22], a quadratic (ℓ2-norm) graph regularizer tends to over-smooth, especially on small datasets; to mitigate such over-smoothing artifacts, the g-TV regularization, which uses the ℓ1-norm instead, is proposed and is written as:

$$\|A\|_{\mathrm{gTV}}=\sum_{i=1}^{n}\sum_{j=1}^{n}w_{ij}\,\|\mathbf{a}_i-\mathbf{a}_j\|_1, \qquad(1)$$

where $\mathbf{a}_i$ represents the $i$th column of $A$, $A\in\mathbb{R}^{p\times n}$, and the weight $w_{ij}$ can represent the physical distance between the vertices $i$ and $j$. In this paper, $w_{ij}$ is taken to be the cosine similarity, a function used to calculate distances for hyperspectral data [36].

The margin of the abundance map of each material can be preserved in a non-local manner by minimizing $\|A\|_{\mathrm{gTV}}$. The addition of the g-TV regularization term to the blind HU model is then formulated as:

$$\min_{X,A}\ \frac{1}{2}\|Y-XA\|_F^2+\lambda\|A\|_{\mathrm{gTV}}\quad\text{s.t.}\ X\ge0,\ A\ge0,\ \mathbf{1}_p^{\top}A=\mathbf{1}_n^{\top}, \qquad(2)$$
where $Y=[\mathbf{y}_1,\ldots,\mathbf{y}_n]\in\mathbb{R}^{m\times n}$ is an HSI, $m$ represents the number of spectral bands, and $n$ represents the number of pixels. Furthermore, endmembers are defined as the spectral signatures of pure materials, and $X=[\mathbf{x}_1,\ldots,\mathbf{x}_p]\in\mathbb{R}^{m\times p}$ is used as the endmember matrix, which has $p$ pure materials. In addition, λ is a positive tuning parameter. It is noted that the sum-to-one constraint already enforces sparsity through the ℓ1-norm, since each non-negative abundance column has unit ℓ1-norm. The Merriman–Bence–Osher (MBO) scheme [37] is then used to solve sub-problems in ADMM; the solution process of the sub-problems is presented in Section 2.2. The g-TV regularization takes into account the similarity of spectral information between different pixels, so fine spatial features can be preserved in the abundance map.
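To make the g-TV term concrete, the following sketch computes cosine-similarity weights between pixel spectra and evaluates the graph-TV value of a toy abundance matrix. The function names and matrix sizes are illustrative assumptions, not taken from the paper; a practical implementation would combine this with the Nyström approximation rather than the full weight matrix.

```python
import numpy as np

def cosine_weights(Y):
    """Pairwise cosine similarity w_ij between pixel spectra (columns of Y)."""
    Yn = Y / np.linalg.norm(Y, axis=0, keepdims=True)
    return Yn.T @ Yn  # (n, n) weight matrix

def gtv(A, W):
    """Graph-TV value: sum_{i,j} W_ij * ||a_i - a_j||_1."""
    total = 0.0
    for i in range(A.shape[1]):
        # l1 distance from column i to every column j
        diff = np.abs(A - A[:, [i]]).sum(axis=0)
        total += (W[i] * diff).sum()
    return total

rng = np.random.default_rng(0)
Y = np.abs(rng.standard_normal((5, 8)))   # toy HSI: m=5 bands, n=8 pixels
A = np.abs(rng.standard_normal((3, 8)))   # toy abundances: p=3 endmembers
A /= A.sum(axis=0, keepdims=True)         # sum-to-one constraint per pixel
W = cosine_weights(Y)
value = gtv(A, W)
```

Note that the value is zero exactly when all abundance columns coincide, which is why minimizing it promotes non-local smoothness between similar pixels.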
2.2. E-2DTV Regularization Term
Since gTVMBO does not take full advantage of the spectral and spatial information of the HSI, the E-2DTV regularization term is introduced to improve the existing gTVMBO model. In conventional 2DTV, the ℓ1-norm is imposed on the gradient maps $G_n$ of $A$, where $G_n$ ($n=1,2$) represent the expansion matrices of the gradient maps computed from $A$ along its spatial and spectral modes. The term is called 2DTV regularization, and it takes the form:

$$\|A\|_{\mathrm{2DTV}}=\sum_{n=1}^{2}\|G_n\|_1. \qquad(3)$$

Extensive studies have shown that 2DTV is very helpful for various HSI processing tasks. However, 2DTV regularization has some obvious limitations, the most typical of which is that it imposes the same sparsity assumption across all bands of the spectrum and space. This often deviates from real-life situations: the gradient maps of different bands exhibit different degrees of sparsity. In addition, on account of the band-dependent properties of the original HSI, the different bands of the gradient map have a non-negligible correlation. To solve this problem, we propose an enhanced 2DTV (E-2DTV) term. It is very similar to the conventional 2DTV in that it also applies to gradient maps along the spectrum; however, the sparsity measure is applied to maps expressed over a subspace basis of all bands, where each basis is obtained as a linear combination of the original band-wise gradient-map vectors, $U_n=G_nV_n$, in which $V_n$ is a transformation matrix that promotes finding the basis of $G_n$.
Representing the gradients with these few computed bases provides sparse and correlated insight into the HSI; the differing coefficients represent the differences between the bands of the gradient maps. The E-2DTV term is described in detail below.
For an image $A$, the TV norm can be presented as $\|A\|_{\mathrm{TV}}=\sum_{n=1}^{2}\|D_nA\|_1$, where $D_n$ is the $n$th-dimensional difference operator. Because the TV norm contains the sum of the grayscale values of the horizontal and vertical difference maps, the difference can be equated with the gradient in the discrete case. In this way, $G_n=D_nA$. To enhance the unmixing performance, a sparsity term is constructed for each $G_n$. Based on past research [38,39], we have good reason to assume that $A$ is low-rank; thus, each $G_n$ also has the low-rank property.
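As a sketch of the discrete difference operators $D_n$, the following computes the two gradient maps of an abundance matrix whose rows are vectorized $n_1 \times n_2$ spatial maps. The circular (periodic) boundary handling is one common convention and an assumption here, not necessarily the paper's choice.

```python
import numpy as np

def gradient_maps(A, n1, n2):
    """Difference (gradient) maps G_1 = D_1 A and G_2 = D_2 A.

    A has shape (p, n1*n2): each row is a vectorized n1 x n2 abundance map.
    Circular (periodic) differences are assumed at the boundary.
    """
    p = A.shape[0]
    maps = A.reshape(p, n1, n2)
    G1 = (np.roll(maps, -1, axis=1) - maps).reshape(p, -1)  # vertical mode
    G2 = (np.roll(maps, -1, axis=2) - maps).reshape(p, -1)  # horizontal mode
    return G1, G2
```

With these maps, the anisotropic 2DTV value is simply `np.abs(G1).sum() + np.abs(G2).sum()`.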
$G_n$ can be represented as:

$$G_n=U_nV_n^{\top}, \qquad(4)$$

where $U_n\in\mathbb{R}^{p\times r}$, $V_n\in\mathbb{R}^{n\times r}$, and $r$ represents the rank of $A$. Each group of bases of the gradient map $G_n$ consists of columns of $V_n$. In particular, $U_n$ can be seen as the conversion result of $G_n$: when $V_n$ has orthonormal columns, we can derive from the formula that $U_n=G_nV_n$. As known from the literature [40], using a sparse regularization on the result of a spectral linear transformation of the gradient map works better than imposing the regularization term directly on the gradient map itself. Hence, we directly add a sparse regularization term to the linear transformation result of $G_n$, which becomes $U_n=G_nV_n$, $U_n\in\mathbb{R}^{p\times r}$. The problem is as follows:

$$\min_{U_n,V_n}\ \|U_n\|_1+\frac{1}{2}\|G_n-U_nV_n^{\top}\|_F^2\quad\text{s.t.}\ V_n^{\top}V_n=I, \qquad(5)$$

where $I$ is the identity matrix and $F$ represents the Frobenius norm.
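A minimal way to obtain such a factorization, under the orthonormal-basis convention $V_n^{\top}V_n=I$, is via the truncated SVD. The helper below is an illustrative sketch of that convention, not the paper's solver for Equation (5).

```python
import numpy as np

def factor_gradient_map(G, r):
    """Rank-r factorization G ≈ U V^T with orthonormal V (V^T V = I).

    With orthonormal V, the coefficient factor satisfies U = G V, so the
    sparsity measure ||U||_1 acts on the transformed gradient map.
    """
    _, _, Vt = np.linalg.svd(G, full_matrices=False)
    V = Vt[:r].T        # (n, r), orthonormal columns
    U = G @ V           # (p, r) coefficient factor
    return U, V
```

When `r` is at least the rank of `G`, the factorization is exact, i.e., `U @ V.T` reproduces `G`.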
Inspired by Equation (5), using such a sparse regularization term on HSI images can achieve great results. We construct a regularization term for the HSI that can exploit both the spectral and the spatial information. By adding the sparsity measure of the gradient maps of the spatial modes to the spectral mode, we can model our regularization term as:

$$\|A\|_{\mathrm{E2DTV}}=\sum_{n=1}^{2}\|G_nV_n\|_1, \qquad(6)$$

where $\|\cdot\|_1$ is the ℓ1-norm sparsity measure. Equation (6) can be converted to:

$$\|A\|_{\mathrm{E2DTV}}=\sum_{n=1}^{2}\|U_n\|_1\quad\text{s.t.}\ G_n=U_nV_n^{\top},\ V_n^{\top}V_n=I, \qquad(7)$$

where $U_n\in\mathbb{R}^{p\times r}$, $V_n\in\mathbb{R}^{n\times r}$, and $U_n=G_nV_n$.
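Putting the two steps together, the following is a self-contained sketch of evaluating the E-2DTV measure of an abundance matrix; the circular differences and SVD-derived bases are illustrative conventions assumed here, not necessarily the paper's.

```python
import numpy as np

def e2dtv(A, n1, n2, r):
    """E-2DTV value: sum over the two modes of ||U_n||_1, where
    G_n = U_n V_n^T with V_n^T V_n = I (so U_n = G_n V_n)."""
    p = A.shape[0]
    maps = A.reshape(p, n1, n2)
    total = 0.0
    for axis in (1, 2):  # the two spatial difference modes
        G = (np.roll(maps, -1, axis=axis) - maps).reshape(p, -1)
        _, _, Vt = np.linalg.svd(G, full_matrices=False)
        U = G @ Vt[:r].T          # sparse coefficient factor U_n = G_n V_n
        total += np.abs(U).sum()
    return total
```

A spatially constant abundance map has zero gradient maps and hence a zero E-2DTV value, matching the behavior of the plain 2DTV term.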
In this paper, the E-2DTV regularization term is introduced to improve the existing gTVMBO model; it takes full advantage of the information shared among all bands together with the spectral and spatial information of the HSI. Applying the E-2DTV term to a matrix decomposes the matrix into the sum of gradients along its dimensions. Therefore, Equation (2) is equivalent to:

$$\min_{X,A}\ \frac{1}{2}\|Y-X*A\|_F^2+\lambda\|A\|_{\mathrm{gTV}}\quad\text{s.t.}\ X\ge0,\ A\ge0,\ \mathbf{1}_p^{\top}A=\mathbf{1}_n^{\top},\ G_n=D_nA\ (n=1,2), \qquad(8)$$

where the * symbol means that the two matrices are multiplied. Based on Equations (8) and (9), we improve the original method by applying the E-2DTV term to the abundance matrix; that is, $A$ is expressed through its gradient maps $D_nA$. Therefore, adding the proposed E-2DTV regularization term to gTVMBO, the proposed E-gTVMBO model is as follows:

$$\min_{X,A,U_n,V_n}\ \frac{1}{2}\|Y-XA\|_F^2+\lambda\|A\|_{\mathrm{gTV}}+\mu\sum_{n=1}^{2}\|U_n\|_1\quad\text{s.t.}\ X\ge0,\ A\ge0,\ \mathbf{1}_p^{\top}A=\mathbf{1}_n^{\top},\ D_nA=U_nV_n^{\top},\ V_n^{\top}V_n=I, \qquad(9)$$

where μ is a positive parameter.
In order to solve the E-gTVMBO model, the ADMM algorithm is used. By using the indicator functions $\iota_{+}(X)$ and $\iota_{+}(A)$, we can absorb the non-negativity constraints of Equation (9) into the objective:

$$\min_{X,A,U_n,V_n}\ \frac{1}{2}\|Y-XA\|_F^2+\lambda\|A\|_{\mathrm{gTV}}+\mu\sum_{n=1}^{2}\|U_n\|_1+\iota_{+}(X)+\iota_{+}(A)\quad\text{s.t.}\ \mathbf{1}_p^{\top}A=\mathbf{1}_n^{\top},\ D_nA=U_nV_n^{\top},\ V_n^{\top}V_n=I, \qquad(10)$$

where $\iota_{+}(\cdot)$ is zero on a non-negative matrix of the appropriate size (e.g., on $\mathbb{R}_{+}^{m\times n}$, the non-negative matrices with $m$ rows and $n$ columns) and $+\infty$ otherwise. Then, we introduce two auxiliary variables $B\in\mathbb{R}^{p\times n}$ and $C\in\mathbb{R}^{p\times n}$; Equation (10) can be formulated as the augmented Lagrangian:

$$\begin{aligned}L=\;&\frac{1}{2}\|Y-XA\|_F^2+\lambda\|B\|_{\mathrm{gTV}}+\mu\sum_{n=1}^{2}\|U_n\|_1+\iota_{+}(X)+\iota_{+}(C)\\&+\frac{\gamma}{2}\|A-B+\Gamma_1\|_F^2+\frac{\gamma}{2}\|A-C+\Gamma_2\|_F^2+\frac{\rho}{2}\sum_{n=1}^{2}\|D_nA-U_nV_n^{\top}\|_F^2,\end{aligned} \qquad(11)$$

where $\Gamma_1$ and $\Gamma_2$ are dual variables, i.e., $\Gamma_n$ ($n=1,2$) are the Lagrange multipliers; γ is a positive parameter; and ρ is a positive scalar in the ADMM algorithm.
In the ADMM, we alternately optimize each variable involved in (11), keeping the other variables fixed while one is optimized. The ADMM algorithm in this paper solves six sub-problems in each iteration; that is, $L$ is minimized with respect to each of $U_n$, $V_n$, $C$, $X$, $A$, and $B$ in turn while the remaining variables are fixed. Appendix A presents the solution process of the six sub-problems in detail. In conclusion, each sub-problem in the ADMM algorithm can be solved efficiently. The flow of the E-gTVMBO method is shown in Algorithm 1.
Algorithm 1: E-gTVMBO
Input: The HSI Y; parameters λ, μ.
Output: X and A.
Initialize: initial values of the optimization variables.
1: While not converged do
2: Update U_n, V_n by Equations (12) and (14), respectively.
3: Update C, X, A by Equations (15)–(17), respectively.
4: Update B by Equation (19).
5: Check the convergence conditions.
6: End while
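The ℓ1 sub-problems arising in ADMM schemes such as Algorithm 1 typically reduce to elementwise soft-thresholding, the proximal operator of the ℓ1-norm. The sketch below shows this standard building block; it is a generic illustration, not the paper's exact update Equations (12)–(19).

```python
import numpy as np

def soft_threshold(Z, tau):
    """Prox of tau*||.||_1: solves min_B tau*||B||_1 + 0.5*||B - Z||_F^2."""
    return np.sign(Z) * np.maximum(np.abs(Z) - tau, 0.0)
```

For example, a $U_n$-type update with sparsity weight μ and penalty ρ would, for an appropriate residual matrix `Z`, take the hypothetical form `U = soft_threshold(Z, mu / rho)`.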