Article

Nonlocal Total Variation Subpixel Mapping for Hyperspectral Remote Sensing Imagery

1 State Key Laboratory of Information Engineering in Surveying, Mapping, and Remote Sensing, Wuhan University, Wuhan 430079, China
2 Collaborative Innovation Center of Geospatial Technology, Wuhan University, Wuhan 430079, China
3 The Second Surveying and Mapping Institute of Zhejiang Province, Hangzhou 310012, China
4 College of Surveying and Geo-Informatics, Tongji University, Shanghai 200092, China
* Author to whom correspondence should be addressed.
Remote Sens. 2016, 8(3), 250; https://doi.org/10.3390/rs8030250
Submission received: 12 December 2015 / Revised: 3 March 2016 / Accepted: 11 March 2016 / Published: 16 March 2016
(This article belongs to the Special Issue Spatial Enhancement of Hyperspectral Data and Applications)

Abstract
Subpixel mapping is a method of enhancing the spatial resolution of images, which involves dividing a mixed pixel into subpixels and assigning each subpixel to a definite land-cover class. Traditionally, subpixel mapping is based on the assumption of spatial dependence, and the spatial correlation information among pixels and subpixels is considered in the prediction of the spatial locations of land-cover classes within the mixed pixels. In this paper, a novel subpixel mapping method for hyperspectral remote sensing imagery based on a nonlocal method, namely nonlocal total variation subpixel mapping (NLTVSM), is proposed to use the nonlocal self-similarity prior to improve the performance of the subpixel mapping task. Differing from the existing spatial regularization subpixel mapping technique, in NLTVSM, the nonlocal total variation is used as a spatial regularizer to exploit the similar patterns and structures in the image. In this way, the proposed method can obtain an optimal subpixel mapping result and accuracy by considering the nonlocal spatial information. Compared with the classical and state-of-the-art subpixel mapping approaches, the experimental results using a simulated hyperspectral image, two synthetic hyperspectral remote sensing images, and a real hyperspectral image confirm that the proposed algorithm can obtain better results in both visual and quantitative evaluations.

Graphical Abstract

1. Introduction

Due to the impact of the sensors’ instantaneous field of view (IFOV) and the diversity of land-cover objects, mixed pixels are common in hyperspectral remote sensing images [1,2,3]. Spectral unmixing can effectively solve the mixed pixel problem by acquiring the spectra of the endmembers and the abundance fraction images of these endmembers [4,5,6,7]. However, the spatial distribution of each endmember or land-cover class in the pixel remains unknown [8,9], which is quite important in real applications such as subpixel classification and subpixel target detection. To solve this problem, the subpixel mapping technique was proposed to determine the subpixel location of each class or endmember based on the fractional abundances [10,11,12,13,14,15]. In recent years, subpixel mapping has been applied in many practical applications, including land-cover change detection [16], waterline mapping [17], and landscape pattern indices calculation [18].
Subpixel mapping algorithms aim to generate a finer classification result from low-resolution abundance maps by maximizing spatial dependence, which is the basic principle of many subpixel mapping methods [19]. They predict the spatial distribution of the different classes by dividing each pixel into smaller subpixels based on the abundance values of each endmember, and then assigning each subpixel to a single land-cover class, which can greatly enhance the spatial resolution of the image. A number of subpixel mapping methods have been proposed based on spatial dependence, which refers to the tendency of spatially close observations to be more similar than those that are further apart. Many subpixel mapping methods, both traditional and state-of-the-art, belong to this category, such as the linear optimization technique [20], the spatial attraction model (SASM) [21], the pixel-swapping subpixel mapping algorithm (PSSM) [22], neural networks [23,24], random fields [13,25], the geometric subpixel mapping algorithm (GSM) [26], evolutionary computing based subpixel mapping [27,28,29,30], and multi-agent systems [31].
Since the subpixel mapping issue is a typical ill-posed inverse problem whose aim is to predict unknown detail information at the subpixel scale from coarser pixels, subpixel mapping algorithms based on a spatial regularization prior have recently been studied by many researchers [12,14,32,33], as it is standard to use a regularization technique to make the original ill-posed problem well-posed [34]. These algorithms transform the ill-posed subpixel mapping problem into a well-posed regularization problem by assuming spatial prior information about the unknown subpixel mapping result based on a maximum a posteriori (MAP) model. A finer-resolution map can then be reconstructed from a series of lower-resolution abundance fraction images. Different spatial prior models have been applied for subpixel mapping, such as the Laplacian prior, the total variation prior, and the bilateral total variation prior. These spatial prior models, such as the subpixel mapping based on total variation (TVSM) model proposed in [12], incorporate the local spatial information in the subpixel mapping model, and they have achieved better subpixel mapping performances. However, both the total variation regularization method and the Laplacian prior model just consider the local spatial homogeneity of the neighborhood system (horizontal and vertical) and reconstruct the main geometrical configuration, but they do not utilize all the potential spatial information [35], especially in the nonlocal spatial neighborhood.
In this paper, a new spatial regularization subpixel mapping algorithm based on a nonlocal prior model, namely nonlocal total variation subpixel mapping (NLTVSM), is proposed. In NLTVSM, a nonlocal total variation prior model [36] is designed as the spatial regularizer by incorporating the nonlocal spatial information in the subpixel mapping model. Nonlocal means-based regularization has been applied in many real applications, including hyperspectral image denoising [36] and noise estimation [37], super-resolution reconstruction [38], image segmentation [39], image classification [40], sparse unmixing [41], and so on. Differing from the previous TVSM method, which utilizes the spatial information in a limited local window, the proposed NLTVSM method combines the nonlocal spatial information of the whole image with the total variation framework. NLTVSM can use all possible self-predictions in the image to take advantage of the high degree of redundancy of the image, and a variational framework-based nonlocal operator is utilized as the spatial prior by averaging all the pixels in a sliding window and then computing the problem as a weighted graph gradient [42]. The experimental results using a simulated image, two synthetic hyperspectral remote sensing images, and a real remote sensing image demonstrate that the proposed NLTVSM algorithm can obtain an improved subpixel mapping accuracy, slightly outperforming the recent TVSM algorithm.
The rest of this paper is organized as follows. Section 2 presents the basic idea of the proposed NLTVSM model. Section 3 provides the experimental results and analyses with simulated, synthetic, and real hyperspectral images. The conclusion is drawn in Section 4.

2. The Nonlocal Total Variation Subpixel Mapping Model for Hyperspectral Imagery

Differing from the previous TVSM [12], in this paper, the nonlocal total variation spatial operator is introduced into the subpixel mapping model to predict fine structure, details, and texture, and thereby enhance the subpixel mapping results.
In this section, the framework of the spatial regularization subpixel mapping model is first introduced, and the flowchart of the proposed NLTVSM is shown (Section 2.1). The spatial regularization method, the nonlocal total variation spatial operator, is presented in detail in Section 2.2. Finally, the proposed spatial regularization subpixel model, the NLTVSM algorithm, is described in Section 2.3.

2.1. Spatial Regularization Subpixel Mapping

Subpixel mapping techniques aim to obtain a higher-resolution classification image based on the low-resolution abundances, and divide a mixed pixel into several subpixels, each with a single land-cover class. The subpixel mapping problem involves estimating the possible distribution of every land-cover class inside a pixel at the subpixel scale, and its solution is not unique, making it a typical ill-posed inverse problem. A simple example of subpixel mapping with three classes is displayed in Figure 1. In this example, the scale factor is 5, which means that one pixel in the original low-resolution image is divided into 5 × 5 subpixels. Figure 1a shows the original low-resolution fraction image; the red, green, and blue pixels represent pure land-cover classes, while the white ones represent mixed pixels. The aim of subpixel mapping is to estimate the optimal distributions at the subpixel scale. Figure 1b,c show two possible distributions. Based on the spatial dependence rule, Figure 1b is more reasonable than Figure 1c.
To better solve the ill-posed problem, a regularization technique can be considered to make it well-posed. In this way, the subpixel mapping problem can be transformed into a regularization problem, and a MAP model is utilized to regularize the subpixel mapping problem to be well-posed. The basic spatial regularization subpixel mapping model can be formulated as the following minimization problem:
$$\min_X \frac{1}{2}\|DX - Y\|^2 + \lambda U(X) \quad (1)$$
where $Y \in \mathbb{R}^{N \times n}$ denotes the input low-resolution abundance images, with N pixels and n dimensions, where n is the number of land-cover classes. $X \in \mathbb{R}^{(s \times s \times N) \times n}$ is the high-resolution subpixel distribution map with $s \times s \times N$ subpixels, where s is the scale factor. In the process of subpixel mapping, each pixel in the low-resolution abundance images corresponds to a specific block of $s \times s$ subpixels. $D \in \mathbb{R}^{N \times (s \times s \times N)}$ is the downsampling matrix, whose elements take the values 0 and $1/s^2$.
In Equation (1), the first term describes the data fidelity term, and the second term is the spatial prior term. U(X) represents the spatial regularization term, and the parameter λ is the spatial regularization parameter which controls the balance of the data fidelity and regularization terms. Different spatial regularization models have previously been successfully applied for subpixel mapping, such as the Laplacian prior model, the total variation model, and the bilateral total variation (BTV) method. In this paper, the nonlocal total variation model is utilized to consider the spatial information in subpixel mapping, and the flowchart of the proposed NLTVSM method is shown in Figure 2.
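To make the data fidelity term in Equation (1) concrete, the following sketch (an illustrative NumPy construction, not the authors' code; the function name and toy sizes are assumptions) builds the downsampling matrix D for a toy image, where each coarse pixel averages its s × s block of subpixels:

```python
import numpy as np

def downsample_matrix(rows, cols, s):
    """Build D of shape (N, s*s*N): each coarse pixel is the mean of its
    s x s subpixel block, so the nonzero entries of D equal 1/s**2."""
    N = rows * cols
    hr_cols = cols * s
    D = np.zeros((N, N * s * s))
    for r in range(rows):
        for c in range(cols):
            coarse = r * cols + c
            for dr in range(s):
                for dc in range(s):
                    fine = (r * s + dr) * hr_cols + (c * s + dc)
                    D[coarse, fine] = 1.0 / (s * s)
    return D

# Toy example: a 2 x 2 coarse image with scale factor s = 2.
s, rows, cols = 2, 2, 2
D = downsample_matrix(rows, cols, s)
X = np.ones((rows * cols * s * s, 1))      # constant high-resolution map
Y = D @ X                                  # exact downsampling
fidelity = 0.5 * np.sum((D @ X - Y) ** 2)  # zero residual for a consistent X
```

Each row of D sums to one, so a constant high-resolution map downsamples to the same constant and the fidelity term vanishes.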
In the proposed method, the important improvement is the application of the nonlocal total variation method for the spatial consideration in the process of subpixel mapping. To better understand the proposed algorithm, the nonlocal total variation model should first be introduced.

2.2. The Nonlocal Total Variation Model

Unlike TVSM, the proposed NLTVSM algorithm utilizes the nonlocal total variation model as the spatial regularization term. This model deals with an unknown pixel by searching for similar pixels within a search window, with the aim of exploiting the similar patterns or patches in the image. The nonlocal method averages over the other pixels whose patches are similar to that of the current one, as shown in Equation (2) [36]:
$$\mathrm{NL}(X)(x) = \frac{1}{C(x)} \int e^{-\frac{\left(G_\sigma * |X(x+\cdot) - X(y+\cdot)|^2\right)(0)}{h^2}} X(y)\, \mathrm{d}y \quad (2)$$
where X describes the original image, x is the current pixel in the image, y is the central pixel in the similar window, $C(x) = \int e^{-\frac{\left(G_\sigma * |X(x+\cdot) - X(z+\cdot)|^2\right)(0)}{h^2}}\, \mathrm{d}z$ is the normalizing factor, $G_\sigma$ represents a Gaussian kernel with standard deviation σ, and h is a decay parameter. $x+\cdot$, $y+\cdot$, and $z+\cdot$ denote the neighborhoods (similar windows) of pixels x, y, and z, respectively.
Figure 3 describes the basic principle of the nonlocal means model. In Figure 3, the whole image is assumed to be the searching window; in real applications, a smaller searching window is used to restrict the search for similar windows. The small window centered at pixel P is the current window, and the windows centered at pixels q1, q2, and q3 are similar to the current one. The nonlocal means operator estimates the value of P as a weighted average of all the pixels similar to the current one, such as those at the centers of the windows around q1, q2, and q3, based on the similarity weights w(P, q1), w(P, q2), and w(P, q3).
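The patch-based weighting described above can be sketched as follows (a toy Python implementation for a single-band image; the function name, patch radius, and decay parameter are assumptions, and the Gaussian kernel $G_\sigma$ is replaced by a plain sum of squared patch differences for brevity):

```python
import numpy as np

def nl_weights(img, x, patch=1, h=0.5):
    """Normalized nonlocal weights w(x, y): exponential of the (negative)
    squared distance between the patch around pixel x and the patch around
    every other pixel y, using the whole image as the search window."""
    R, C = img.shape
    pad = np.pad(img, patch, mode='reflect')

    def get_patch(r, c):
        return pad[r:r + 2 * patch + 1, c:c + 2 * patch + 1]

    px = get_patch(*x)
    w = np.zeros((R, C))
    for r in range(R):
        for c in range(C):
            d2 = np.sum((px - get_patch(r, c)) ** 2)
            w[r, c] = np.exp(-d2 / h ** 2)
    return w / w.sum()   # normalize by Z(x)

img = np.zeros((6, 6))
img[:, 3:] = 1.0                 # two flat regions separated by an edge
w = nl_weights(img, (1, 1))      # weights with respect to a left-region pixel
```

Pixels lying in the same flat region as the current pixel receive much larger weights than pixels across the edge, which is what lets the nonlocal operator exploit repeated structures.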
The nonlocal total variation (NLTV) functional is defined as a gradient-based function, which concerns both the nonlocal structures and textures, shown as:
$$J_{NLTV}(X) = \sum_{i=1}^{n} \sum_{x \in \Omega} |\nabla_w X_i(x)| = \sum_{i=1}^{n} \sum_{x \in \Omega} \sqrt{\sum_{y \in \Omega} \left(X_i(y) - X_i(x)\right)^2 w(x, y)} \quad \left(x, y \in \Omega,\; \Omega \subset \mathbb{R}^2,\; X_i : \Omega \to \mathbb{R}\right) \quad (3)$$
where $X_i$ ($i \in [1, n]$) denotes one class of the distribution matrix X, and $\nabla_w X_i$ is the nonlocal gradient, defined as the vector of all the partial differences $\nabla_w X_i(x, \cdot)$ at pixel x. $X_i(x)$ and $X_i(y)$ are the values located at pixels x and y in the ith distribution map $X_i$. The weight $w(x, y)$ measures the similarity between pixels x and y. Two terms in Equation (3) need to be defined in detail: first, the magnitude of the nonlocal gradient $|\nabla_w X_i(x)|$, which is calculated as shown in Equation (4); and second, the weight $w(x, y)$, which is given in Equation (5):
$$|\nabla_w X_i(x)| = \sqrt{\sum_{y \in \Omega} \left(X_i(y) - X_i(x)\right)^2 w(x, y)} \quad (x, y \in \Omega) \quad (4)$$
$$w(x, y) = \frac{1}{Z(x)} e^{-\frac{\|X_i(N_x) - X_i(N_y)\|_{2,\sigma}^2}{h^2}} \quad (5)$$
where $X_i(N_x)$ and $X_i(N_y)$ respectively denote the square neighborhoods of pixels x and y, with a fixed window size, on the class subpixel image $X_i$. The Gaussian-weighted Euclidean distance between these two neighborhood vectors expresses the similarity of the two pixels x and y. In Equation (5), Z(x) is the normalizing constant, which is defined in Equation (6):
$$Z(x) = \sum_{y} e^{-\frac{\|X_i(N_x) - X_i(N_y)\|_{2,\sigma}^2}{h^2}} \quad (6)$$
where $\|X_i(N_x) - X_i(N_y)\|_{2,\sigma}^2$ represents the Gaussian-weighted Euclidean distance; σ is the standard deviation of the Gaussian kernel (σ > 0); and h acts as the degree of filtering, which controls the decay of the exponential function.
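Putting Equations (3)-(6) together, the NLTV functional for one class map can be evaluated as below (an illustrative sketch using a precomputed dense weight matrix; the function name and toy values are assumptions). A homogeneous map incurs zero penalty, while a map containing an edge incurs a positive one:

```python
import numpy as np

def j_nltv(Xi, w):
    """Nonlocal TV of one flattened class map Xi with dense weights w:
    sum over x of sqrt( sum over y of (Xi[y] - Xi[x])^2 * w[x, y] )."""
    diff2 = (Xi[None, :] - Xi[:, None]) ** 2   # entry [x, y] = (Xi[y]-Xi[x])^2
    return float(np.sum(np.sqrt(np.sum(diff2 * w, axis=1))))

w = np.full((4, 4), 0.25)                          # uniform toy weights
flat = j_nltv(np.array([0.5, 0.5, 0.5, 0.5]), w)   # homogeneous class map
edge = j_nltv(np.array([0.0, 0.0, 1.0, 1.0]), w)   # class map with an edge
```

Minimizing this functional therefore favors subpixel maps whose values agree across all pixel pairs that the weights deem similar, not just across adjacent pixels.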

2.3. The NLTVSM Algorithm

In the previous section, we described the nonlocal total variation spatial operator in detail. Based on the basic spatial regularization subpixel mapping model in Equation (1), together with the nonlocal total variation model acting as the spatial regularization term, the NLTVSM algorithm can be written as Equation (7), where JNLTV(X) denotes the nonlocal total variation defined in Equation (3):
$$\min_X \frac{1}{2}\|DX - Y\|^2 + \lambda J_{NLTV}(X) \quad (7)$$
As shown in Equation (7), the NLTVSM model has been built and the nonlocal total variation is designed as the spatial prior regularization term. Based on the equations introduced in Section 2.2, especially for the nonlocal total variation spatial operator, the objective function of the NLTVSM algorithm can be formulated. In this section, the Bregmanized operator splitting (BOS) algorithm [34] is used to optimize the NLTVSM model. The basic idea of the optimization algorithm can be stated as follows.
Firstly, the original NLTVSM optimization problem is enforced with the Bregman iteration [43] process as follows:
$$\begin{cases} X^{k+1} = \arg\min_X \frac{1}{2}\|DX - Y^k\|^2 + \lambda J_{NLTV}(X) \\ Y^{k+1} = Y^k + Y - DX^{k+1} \end{cases} \quad (8)$$
The operator splitting technique is then used to solve the unconstrained subproblem in Equation (8) as follows, for $j \geq 0$ and $X^{k+1,0} = X^k$, where j denotes the inner iteration index:
$$\begin{cases} V^{k+1,j+1} = X^{k+1,j} - \delta D^T \left( DX^{k+1,j} - Y^k \right) \\ X^{k+1,j+1} = \arg\min_X \left( \lambda J_{NLTV}(X) + \frac{1}{2\delta}\|X - V^{k+1,j+1}\|^2 \right) \end{cases} \quad (9)$$
where δ is a positive step size satisfying $0 < \delta < 2/\|D^T D\|$. The minimization problem in Equation (9) can be solved by the split Bregman method [44], with which the subproblem of X can be split into the following:
$$\begin{cases} (X^{k+1}, d^{k+1}) = \arg\min_{X, d} \left( \lambda\delta \sum_{i=1}^{n} \sum_{x \in \Omega} \|d_i(x)\|_2 + \frac{1}{2}\|X - V\|^2 + \frac{\mu}{2}\|d - \nabla_w X - b^k\|^2 \right) \\ b^{k+1} = b^k + \nabla_w X^{k+1} - d^{k+1} \end{cases} \quad (10)$$
where μ is the weight of the penalty term $\|d - \nabla_w X - b^k\|^2$, and its value is usually inversely proportional to the value of λδ. The nonlocal total variation operator, denoted as $J_{NLTV}(X)$ in Equation (9), is rewritten as $\sum_{i=1}^{n} \sum_{x \in \Omega} \|d_i(x)\|_2$ by introducing the auxiliary variable $d = \nabla_w X$. The solution of Equation (10), $(X^{k+1}, d^{k+1})$, can be acquired by alternating between the following two subproblems in Equation (11):
$$\begin{cases} X^{k+1} = \arg\min_X \left( \frac{1}{2}\|X - V\|^2 + \frac{\mu}{2}\|d^k - \nabla_w X - b^k\|^2 \right) \\ d^{k+1} = \arg\min_d \left( \lambda\delta \sum_{i=1}^{n} \sum_{x \in \Omega} \|d_i(x)\|_2 + \frac{\mu}{2}\|d - \nabla_w X^{k+1} - b^k\|^2 \right) \end{cases} \quad (11)$$
The subproblem for Xk+1 can be solved with the Euler–Lagrange equation, as follows:
$$\left( X^{k+1} - V \right) - \mu\, \mathrm{div}_w\!\left( \nabla_w X^{k+1} + b^k - d^k \right) = 0 \quad (12)$$
which provides:
$$X^{k+1} = \left( I - \mu \Delta_w \right)^{-1} \left( V + \mu\, \mathrm{div}_w (b^k - d^k) \right) \quad (13)$$
Therefore, Xk+1 can be solved by the Gauss–Seidel algorithm [44]. Meanwhile, the dk+1 in Equation (11) can be solved using a shrinkage operator as:
$$d_i^{k+1} = \mathrm{shrink}\!\left( \left( \nabla_w X^{k+1} + b^k \right)_i,\; \frac{\lambda\delta}{\mu} \right) \quad (14)$$
The shrinkage operator is defined as follows:
$$\mathrm{shrink}(p, t) = \mathrm{sign}(p)\,\max\{|p| - t,\, 0\} \quad (15)$$
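The shrinkage (soft-thresholding) operator is straightforward to implement elementwise (a minimal NumPy sketch; the function name is an assumption):

```python
import numpy as np

def shrink(p, t):
    """Soft-thresholding: sign(p) * max(|p| - t, 0), applied elementwise."""
    return np.sign(p) * np.maximum(np.abs(p) - t, 0.0)

out = shrink(np.array([-2.0, 0.3, 1.5]), 0.5)   # -> [-1.5, 0.0, 1.0]
```

Values with magnitude below the threshold are set to zero, which is what promotes sparsity in the auxiliary variable d.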
To decrease the influence of the spectral unmixing errors, the winner-takes-all class determination strategy [13] is then utilized to obtain the final subpixel mapping results.
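The winner-takes-all step can be sketched as follows (illustrative only; the reconstructed values are hypothetical): each subpixel is simply assigned the class whose reconstructed value in X is largest.

```python
import numpy as np

def winner_takes_all(X):
    """Assign each subpixel (row of X) to the class (column) with the
    largest reconstructed value."""
    return np.argmax(X, axis=1)

X = np.array([[0.7, 0.2, 0.1],   # subpixel 0 -> class 0
              [0.1, 0.8, 0.1],   # subpixel 1 -> class 1
              [0.3, 0.3, 0.4]])  # subpixel 2 -> class 2
labels = winner_takes_all(X)
```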

3. Experiments and Analysis

Experiments were conducted to test the performance of the proposed NLTVSM algorithm with one simulated image, two synthetic images, and one real image. The proposed algorithm was compared with five subpixel mapping algorithms: subpixel mapping based on a spatial attraction model (SASM) [21]; the pixel-swapping subpixel mapping algorithm (PSSM) [22]; the genetic algorithm (GASM) [27]; the geometric subpixel mapping algorithm (GSM) [26]; and the recent adaptive subpixel mapping method based on a MAP model and a winner-takes-all class determination strategy (AMCDSM) with a total variation prior model [12], denoted as TVSM in this paper. In addition, all the algorithms used the same fraction images for the subpixel mapping, which were obtained by the fully constrained least squares (FCLS) method [45] and the probabilistic support vector machine (P-SVM) [46].
Since a subpixel mapping result takes the form of a classification map, the producer's accuracy (PA) (for single classes), the overall accuracy (OA), and the Kappa coefficient were adopted to evaluate the performance of the different subpixel mapping algorithms, with the help of ENVI software [47]. For the simulated hyperspectral image and the synthetic images, the low-resolution hyperspectral remote sensing images were first obtained by downsampling the original high-resolution images with a fixed scale. A spectral unmixing technique was then utilized to obtain the unmixing results (the fractional abundance images) as the input of the subpixel mapping. With the same fractional abundance images as the input, the different subpixel mapping algorithms produced different results. The reference classification image used to verify the different subpixel mapping results was obtained by classifying the original high-resolution images with a hard classification method, such as a support vector machine (SVM) [48] or the minimum distance algorithm [49], as implemented in ENVI. For the real hyperspectral image, the low-resolution (LR) hyperspectral image was collected with a Nuance NIR imaging spectrometer, and the high-resolution (HR) image was taken by a digital camera for the same area at the same time. The spectral unmixing algorithm was then used with the LR image to obtain the fraction images, and the hard classification algorithm was used to obtain the ground truth from the HR data. More information about the experimental datasets is provided in the following section.
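The OA and Kappa measures used throughout the evaluation can be computed from a confusion matrix as sketched below (an illustrative implementation, not the ENVI routine; the toy labels are assumptions):

```python
import numpy as np

def oa_kappa(reference, predicted, n_classes):
    """Overall accuracy (OA) and Kappa coefficient from a confusion matrix."""
    cm = np.zeros((n_classes, n_classes))
    for r, p in zip(reference, predicted):
        cm[r, p] += 1
    total = cm.sum()
    oa = np.trace(cm) / total
    # Chance agreement from the row and column marginals.
    pe = np.sum(cm.sum(axis=0) * cm.sum(axis=1)) / total ** 2
    return oa, (oa - pe) / (1 - pe)

# Toy example: 4 reference labels, one misclassified sample.
oa, kappa = oa_kappa([0, 0, 1, 1], [0, 0, 1, 0], 2)
```

Kappa discounts the agreement expected by chance, which is why it is reported alongside the raw OA percentage.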

3.1. Experimental Design and Datasets

The first experimental dataset (simulated image) with 400 × 400 pixels and 50 bands, as shown in Figure 4a, was generated using the USGS spectral library following the methodology in [13]. This dataset consists of four typical land-cover classes: water, tree, agricultural field, and impervious layer. Figure 4b displays the true subpixel mapping image, acting as the reference image. In addition, a spectral unmixing algorithm was used with the low-resolution hyperspectral images (LR), which were acquired via a 4 × 4 mean filter from the original high-resolution simulated hyperspectral image, to obtain the fractional abundance images, as shown in Figure 4c. The subpixel scale factor (s) was set to 4 in this dataset.
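The 4 × 4 mean-filter downsampling used to generate the LR input can be sketched as follows (a single-band NumPy illustration; sizes and names are assumptions):

```python
import numpy as np

def mean_downsample(band, s):
    """Average non-overlapping s x s blocks of a 2-D band."""
    R, C = band.shape
    return band.reshape(R // s, s, C // s, s).mean(axis=(1, 3))

hr = np.arange(64, dtype=float).reshape(8, 8)  # toy high-resolution band
lr = mean_downsample(hr, 4)                    # 2 x 2 low-resolution band
```

In practice the same block averaging is applied to every spectral band, after which the unmixing step recovers the per-class abundances that serve as the subpixel mapping input.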
Two synthetic hyperspectral remote sensing images were also used to test the performance of the different subpixel mapping algorithms. The first image was the Washington DC Mall Hyperspectral Digital Imagery Collection Experiment (HYDICE) image, with 200 × 300 pixels and 191 bands (as shown in Figure 5a), which has four classes of land cover, i.e., water, grass, tree, and road. The other image with more classes was the HYDICE Urban image with 300 × 300 pixels and 187 bands, as shown in Figure 6a, which is displayed in false color. The area used in the experiment consisted of six major classes of roof1, tree, concrete road, roof/shadow, grass, and asphalt road.
The classification maps for the evaluation of these two synthetic datasets were obtained from the original hyperspectral images with the SVM hard classification algorithm, and the reference classification images for the two datasets are shown in Figure 5c and Figure 6c. In addition, the results of the classification with the SVM algorithm were also tested with the ROIs selected manually, as shown in Figure 5b and Figure 6b, and the accuracies were 97.12% [32] and 97.03% for the OA values, corresponding to the Washington DC Mall HYDICE image (shown in Table 1) and the HYDICE Urban image (shown in Table 2), respectively. These accuracies were also considered to be sufficient for the subpixel mapping experiments. To simulate the coarse-resolution hyperspectral remote sensing images, a 4 × 4 mean filter was utilized for the downsampling, and the fractional abundances were obtained with the spectral unmixing algorithm, as shown in Figure 5d and Figure 6d, as inputs. In these two experiments, the subpixel mapping scale factor (s) was set to 4.
The real experimental dataset was acquired by a Nuance NIR imaging spectrometer, as shown in Figure 7a, and consisted of 46 bands, with 50 × 50 pixels. The spectral unmixing algorithm was applied to undertake the unmixing and obtain the fractional abundances, as shown in Figure 7b. To assess the performance of the different subpixel mapping algorithms, the HR image (Figure 7c)—150 × 150 pixels and three bands—was also collected for the same area using a digital camera at the same time. To verify the subpixel mapping results, the reference classification map was then obtained by the SVM algorithm, with the ROIs selected manually, as shown in Figure 7d,e, respectively. Furthermore, the accuracy of the SVM classification method for the Nuance image was also determined, as shown in Table 3. The OA was 99.27%, which confirmed that the result was suitable for testing the other subpixel mapping methods. In addition, the subpixel mapping scale factor (s) was set to three for the Nuance dataset.

3.2. Experimental Results and Analysis

Figure 8, Figure 9, Figure 10 and Figure 11 show the subpixel mapping results using SASM, PSSM, GASM, GSM, TVSM, and the proposed NLTVSM for the simulated image, the Washington DC Mall HYDICE image, the HYDICE Urban image, and the Nuance image, respectively. The visual comparisons in Figure 8, Figure 9, Figure 10 and Figure 11 show the varying performances of the different subpixel mapping methods.
The subpixel mapping results for the simulated image are shown in Figure 8b–g. For convenience, the true subpixel mapping image is also shown in Figure 8a. It can be observed that the results obtained by SASM, PSSM, GASM, and TVSM display similar distributions to the true subpixel mapping image, but the result of GSM exhibits a great difference, especially for the simulated impervious layer of the linear feature in red. The NLTVSM algorithm performs the best on this simulated dataset, especially for the transverse line at the top of the image, eliminating most of the erroneous mappings at the subpixel scale. Since the nonlocal total variation spatial operator can obtain more spatial prior information from the larger nonlocal searching window, the proposed NLTVSM achieves an improved performance.
To evaluate the different performances quantitatively, Table 4 provides the quantitative assessment results of the above subpixel mapping methods for the simulated image. It can be seen that the OA and Kappa are the same for SASM, PSSM, and GASM. The performance of GSM is not as good, at only 89.04% for the OA and 0.84 for Kappa. For the spatial regularization subpixel mapping algorithms—TVSM and NLTVSM—they both obtain competitive accuracies. Owing to the nonlocal searching strategy considering more spatial information in the nonlocal neighborhood, which can find more similar spatial distribution patterns in a larger neighborhood, NLTVSM obtains the best quantitative performance.
For the two synthetic hyperspectral images, the clear advantage of the proposed NLTVSM algorithm can be seen. For the results of the Washington DC Mall HYDICE image shown in Figure 9, it can be observed that the result of NLTVSM exhibits less salt-and-pepper noise, compared with the other methods, and the edge of the meadow and the lake line in the top-left corner show clear boundaries.
The quantitative results in Table 5 allow the same conclusion to be made. The proposed NLTVSM algorithm obtains a higher accuracy than the traditional methods and the state-of-the-art ones. For instance, the OA value of NLTVSM is 78.11% and the Kappa is 0.69 for the Washington DC Mall dataset. This is an improvement of 1.15% and 0.02 in the subpixel mapping accuracy (OA and Kappa coefficient, respectively) compared with the TVSM approach.
The results for the HYDICE Urban image are presented in Figure 10b–g, and a detailed verification of the results is provided in Table 6. Here, it can be observed that all the algorithms display different performances. It can be seen that the flat zones, such as grass or asphalt road, consist of lots of scatter, perhaps due to the fraction errors derived from the spectral unmixing process. Compared with the SASM, PSSM, and GASM methods, the GSM, TVSM, and NLTVSM approaches exhibit smoother visual results, benefiting from the consideration of the spatial information. Table 6 shows consistent results. Compared with the SASM, PSSM, and GASM methods, GSM, TVSM, and NLTVSM possess obvious advantages, and show an improvement of at least 4.63% in the OA and 0.05 in Kappa. The NLTVSM algorithm obtains the best accuracy among these algorithms by a narrow margin: 70.12% for the OA and 0.62 for Kappa, an OA improvement of 2.09% compared with TVSM. The experimental results show that NLTVSM is better than TVSM for more complex images with more classes.
In the real hyperspectral data experiment, the six algorithms all obtain similar results for the withered vegetable class, as shown in Figure 11. The subpixel mapping results of the SASM, PSSM, and GASM algorithms with fraction constraints are poor, with significant salt-and-pepper noise. The other three subpixel mapping algorithms—GSM, TVSM, and the proposed NLTVSM—obtain better results with smoother and more continuous land-cover distributions. These algorithms relax the constraint of the fraction values, and can decrease the errors in the spectral unmixing. For the fresh vegetable and the background classes in particular, the subpixel mapping result of the NLTVSM method is better than those of the traditional algorithms. Comparing the proposed NLTVSM with the GSM and TVSM algorithms, although they all consider the spatial information as prior knowledge in the process of subpixel mapping, NLTVSM obtains a better result by utilizing the nonlocal total variation regularization operator, which can take advantage of the high degree of redundancy in the nonlocal spatial information of the image to predict more fine structure and details, and can suppress most of the salt-and-pepper noise, e.g., for the withered vegetable class.
The quantitative comparison using the OA and Kappa provided in Table 7 supports the same observation. As shown in Table 7, GSM, TVSM, and the proposed NLTVSM show a great improvement in subpixel mapping accuracy over the traditional subpixel mapping approaches. Compared with GSM and TVSM, NLTVSM obtains the highest subpixel mapping accuracies in Table 7. The OA and Kappa of NLTVSM are equal to 76.60% and 0.64, respectively, for the real Nuance image. The main reason for the higher accuracy of the NLTVSM algorithm is as follows. TVSM only accounts for the spatial homogeneity of the first-order pixel neighborhood system, whereas NLTVSM tries to make full use of the potential nonlocal spatial information by utilizing all possible self-similarities existing in the image. Based on the above analysis, the NLTVSM algorithm provides an effective option for performing the task of subpixel mapping for hyperspectral remote sensing imagery.

3.3. Sensitivity Analysis

(1) Sensitivity analysis for the spatial regularization parameter
In the optimization problem of the NLTVSM algorithm, there is an important parameter, λ—the spatial regularization parameter—which controls the balance of the data fidelity and regularization terms, and significantly influences the final subpixel mapping results. Figure 12 analyzes the effect of this parameter for the Washington DC Mall and Nuance images.
As shown in Figure 12, it can be seen that the best subpixel mapping result is acquired when the value of λ is equal to 0.05 and 0.5 for the two images, respectively. To achieve a good result and effectively exploit the useful nonlocal spatial information, appropriate regularization parameters should be found according to the different images or land-cover classes.
(2) The effect of different spectral unmixing approaches on NLTVSM
In this paper, the fractional abundances derived from spectral unmixing act as the input of the subpixel mapping. Since uncertainty exists between the different spectral unmixing methods, and different unmixing results lead to different subpixel mapping outputs, the sensitivity to the choice of spectral unmixing approach is discussed in this part. As an example, the Washington DC Mall image is used to analyze the sensitivity of the subpixel mapping to the unmixing results. The spectral unmixing algorithms adopted for comparison are FCLS, the combination of FCLS and P-SVM, and a sparse regression method, sparse unmixing by variable splitting and augmented Lagrangian (SUnSAL) [50]. Figure 13 shows the final subpixel mapping results.
It can be seen that different spectral unmixing results lead to different subpixel mapping outputs. FCLS, one of the classical linear spectral unmixing approaches, which is widely used for its robustness and tractability, leads to a satisfactory subpixel mapping result. The sparse unmixing method, SUnSAL, obtains a similar subpixel mapping result, although its spectral unmixing result, evaluated by the signal-to-reconstruction error (SRE) value [50], is slightly better (5.15 dB) than that of FCLS (4.96 dB). The possible reason for this is the spatial consideration in the nonlocal total variation subpixel mapping method, which reduces the effect of the unmixing difference. The combination of FCLS and P-SVM, as adopted in this paper, obtains the best subpixel mapping result.
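The SRE values quoted above follow the standard definition used in [50]; a minimal implementation (an illustrative helper, not the authors' code) might look like this:

```python
import numpy as np

def sre_db(x_true, x_est):
    """Signal-to-reconstruction error (SRE) in dB:
    10 * log10(||x||^2 / ||x - x_hat||^2); higher is better."""
    x = np.asarray(x_true, dtype=float).ravel()
    xh = np.asarray(x_est, dtype=float).ravel()
    return 10.0 * np.log10(np.sum(x**2) / np.sum((x - xh)**2))
```

Here `x_true` would be the reference abundance map and `x_est` the unmixing output, both flattened over pixels and endmembers.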
(3) The effect of adding a shade endmember
In this paper, most of the final subpixel mapping results consist of a set of typical materials, such as water, grass, tree, road, and roof. Shade is usually not considered as a typical endmember or land-cover component [51,52], but is instead treated as a variant of the brightness of other endmembers. However, it is interesting to discuss the effect of adding a shade endmember in the experiments, especially as water and shade have similar spectra. In this part, we discuss the effect of adding a shade endmember to the Washington DC Mall image. The comparison results are shown in Figure 14.
From Figure 14, it can be seen that adding the shade endmember removes most of the misclassification between water/shade and water/tree. As shown in Table 8, TVSM and NLTVSM with the shade endmember acquire relatively satisfactory results, since the spatial consideration can make full use of the prior knowledge in the image itself, and NLTVSM shows a slight improvement over TVSM. However, both the OA and Kappa in Table 8 are lower than the corresponding results obtained by the same methods without the shade endmember (as shown in Table 5). The possible reason for this is that adding the shade endmember increases the difficulty of the spectral unmixing, as shade is spectrally similar to the water component, and some trees are in the shade of other trees. It should be stressed that distinguishing shade from the other land-cover endmembers, such as water and trees, in the Washington DC Mall image is a very challenging task.

3.4. Running Times of the Different Models for Different Images

The running times of the different models for the two synthetic images were tested on a PC with an Intel Core i3-2100 CPU @ 3.1 GHz and 8 GB RAM, using MATLAB R2011b. Since the different strategies used in the algorithms may lead to different running times, especially on the MATLAB platform, we just compared the algorithms proposed by our research group, i.e., TVSM and the proposed NLTVSM. All of the parameters and the iteration times were set to obtain the optimal results. The running times of the different models are listed in Table 9.
It can be observed from Table 9 that the running time of the proposed NLTVSM is a little shorter than that of the other algorithm, TVSM, owing to the fact that the core process of NLTVSM is coded in Visual C++ 6.0. Furthermore, the use of the fast Fourier transform (FFT) greatly improves the computational efficiency of TVSM.

4. Conclusions

In this paper, the nonlocal total variation subpixel mapping algorithm, namely NLTVSM, has been proposed for hyperspectral remote sensing imagery. In NLTVSM, a nonlocal total variation model is designed as the spatial regularization term to utilize the nonlocal spatial information. Differing from the previous spatial regularization subpixel mapping algorithm based on total variation (TVSM), NLTVSM extracts the nonlocal spatial information by the use of a nonlocal operator, based on a variational framework, which averages the set of subpixels within a sliding window of a certain size in the image. In this way, NLTVSM combines the nonlocal spatial information of the whole image with the total variation framework, and utilizes all possible self-predictions in the subpixel mapping images to take advantage of the high degree of redundancy in the image. Experimental results using a simulated image, two synthetic hyperspectral remote sensing images, and a real hyperspectral image demonstrate that the proposed NLTVSM performs better than the traditional subpixel mapping algorithms and the latest TVSM algorithm. NLTVSM obtains a higher subpixel mapping precision and a more accurate distribution map with less salt-and-pepper effect at the subpixel scale, and is also efficient. Hence, it provides a new option for the task of subpixel mapping for hyperspectral remote sensing imagery. In our future work, an adaptive parameter selection method will be studied.
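As an illustration of the nonlocal averaging idea summarized above, the following sketch computes normalized patch-similarity weights for one pixel within a search window, in the spirit of the nonlocal means model of Figure 3. The patch size, window size, and filtering parameter h are illustrative choices, not the settings used in the paper:

```python
import numpy as np

def nonlocal_weights(img, i, j, patch=3, search=7, h=0.1):
    """Normalized nonlocal-means weights w = exp(-||P_ij - P_kl||^2 / h^2)
    between the patch around pixel (i, j) and every patch in the search
    window; illustrative parameters, not the paper's settings."""
    r, s = patch // 2, search // 2
    pad = np.pad(img, r, mode='reflect')   # reflect-pad so border patches exist
    ref = pad[i:i + patch, j:j + patch]    # reference patch around (i, j)
    H, W = img.shape
    weights = {}
    for k in range(max(0, i - s), min(H, i + s + 1)):
        for l in range(max(0, j - s), min(W, j + s + 1)):
            cand = pad[k:k + patch, l:l + patch]
            d2 = np.sum((ref - cand) ** 2)          # patch distance
            weights[(k, l)] = np.exp(-d2 / h ** 2)  # similarity weight
    z = sum(weights.values())
    return {kl: w / z for kl, w in weights.items()}  # weights sum to one
```

A nonlocal operator then replaces each value by the weighted average of the values at the window positions, so that repeated structures anywhere in the window reinforce each other rather than only the first-order neighbors, as in TVSM.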

Acknowledgments

This work was supported by the National Natural Science Foundation of China under Grant No. 41371344, by the State Key Laboratory of Earth Surface Processes and Resource Ecology under Grant No. 2015-KF-02, and by the Open Research Fund Program of the Shenzhen Key Laboratory of Spatial Smart Sensing and Services (Shenzhen University).

Author Contributions

All the authors made significant contributions to the work. Ruyi Feng and Yanfei Zhong designed the research and analyzed the results. Yunyun Wu, Da He and Xiong Xu assisted in the preparatory and validation work. Liangpei Zhang provided advice for the preparation and revision of the paper.

Conflicts of Interest

The authors declare no conflict of interest. The funding sponsors had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript; or in the decision to publish the results.

References

  1. Bioucas-Dias, J.; Plaza, A.; Camps-Valls, G.; Scheunders, P.; Nasrabadi, N.; Chanussot, J. Hyperspectral remote sensing data analysis and future challenges. IEEE Geosci. Remote Sens. Mag. 2013, 1, 6–36. [Google Scholar] [CrossRef]
  2. Johnson, B.; Tateishi, R.; Kobayashi, T. Remote sensing of fractional green vegetation cover using spatially-interpolated endmembers. Remote Sens. 2012, 4, 2619–2634. [Google Scholar] [CrossRef]
  3. Clasen, A.; Somers, B.; Pipkins, K.; Tits, L.; Segl, K.; Brell, M.; Kleinschmit, B.; Spengler, D.; Lausch, A.; Forster, M. Spectral unmixing of forest crown components at close range, airborne and simulated Sentinel-2 and EnMAP spectral image scale. Remote Sens. 2015, 7, 15361–15387. [Google Scholar] [CrossRef]
  4. Ma, W.; Bioucas-Dias, J.; Chan, T.; Gillis, N.; Gader, P.; Plaza, A.; Ambikapathi, A.; Chi, C. A signal processing perspective on hyperspectral unmixing: Insights from remote sensing. IEEE Signal Process. Mag. 2014, 1, 67–81. [Google Scholar] [CrossRef]
  5. Sun, X.; Yang, L.; Zhang, B.; Gao, L.; Gao, J. An endmember extraction method based on artificial bee colony algorithms for hyperspectral remote sensing images. Remote Sens. 2015, 7, 16363–16383. [Google Scholar] [CrossRef]
  6. Doxani, G.; Mitraka, Z.; Gascon, F.; Goryl, P.; Bojkov, B.R. A spectral unmixing model for the integration of multi-sensor imagery: A tool to generate consistent time series data. Remote Sens. 2015, 7, 14000–14018. [Google Scholar] [CrossRef]
  7. Sun, H.; Qie, G.; Wang, G.; Tan, Y.; Li, J.; Peng, Y.; Ma, Z.; Luo, C. Increasing the accuracy of mapping urban forest carbon density by combining spatial modeling and spectral unmixing analysis. Remote Sens. 2015, 7, 15114–15139. [Google Scholar] [CrossRef]
  8. Boucher, A.; Kyriakidis, P.C. Super-resolution land cover mapping with indicator geostatistics. Remote Sens. Environ. 2006, 104, 264–282. [Google Scholar] [CrossRef]
  9. Ling, F.; Du, Y.; Zhang, Y.; Li, X.; Xiao, F. Burned-area mapping at the subpixel scale with MODIS images. IEEE Geosci. Remote Sens. Lett. 2015, 12, 1963–1967. [Google Scholar] [CrossRef]
  10. Atkinson, P.M. Mapping sub-pixel boundaries from remotely sensed images. Innovat. GIS 1997, 4, 166–180. [Google Scholar]
  11. Wang, Q.; Atkinson, P.M.; Shi, W. Fast subpixel mapping algorithms for subpixel resolution change detection. IEEE Trans. Geosci. Remote Sens. 2015, 53, 1692–1706. [Google Scholar] [CrossRef]
  12. Zhong, Y.; Wu, Y.; Xu, X.; Zhang, L. An adaptive subpixel mapping method based on MAP model and class determination strategy for hyperspectral remote sensing imagery. IEEE Trans. Geosci. Remote Sens. 2015, 53, 1411–1426. [Google Scholar] [CrossRef]
  13. Zhao, J.; Zhong, Y.; Wu, Y.; Zhang, L.; Shu, H. Subpixel mapping based on conditional random fields for hyperspectral remote sensing imagery. IEEE J. Sel. Topics Signal Process. 2015, 9, 1049–1060. [Google Scholar] [CrossRef]
  14. Feng, R.; Zhong, Y.; Xu, X.; Zhang, L. Adaptive sparse subpixel mapping with a total variation model for remote sensing imagery. IEEE Trans. Geosci. Remote Sens. 2015. [Google Scholar] [CrossRef]
  15. Ling, F.; Li, X.; Xiao, F.; Du, Y. Superresolution land-cover mapping using spatial regularization. IEEE Trans. Geosci. Remote Sens. 2014, 52, 4424–4439. [Google Scholar] [CrossRef]
  16. Ling, F.; Li, W.; Du, Y.; Li, X. Land cover change mapping at the subpixel scale with different spatial-resolution remotely sensed imagery. IEEE Geosci. Remote Sens. Lett. 2011, 8, 182–186. [Google Scholar] [CrossRef]
  17. Muslim, A.M.; Foody, G.M.; Atkinson, P.M. Localized soft classification for super-resolution mapping of the shoreline. Int. J. Remote Sens. 2006, 27, 2271–2285. [Google Scholar] [CrossRef]
  18. Li, X.; Du, Y.; Ling, F.; Wu, S.; Feng, Q. Using a sub-pixel mapping model to improve the accuracy of landscape pattern indices. Ecol. Indic. 2011, 11, 1160–1170. [Google Scholar] [CrossRef]
  19. Atkinson, P.M. Downscaling in remote sensing. Int. J. Appl. Earth Observ. Geoinf. 2013, 22, 106–114. [Google Scholar] [CrossRef]
  20. Verhoeye, J.; De Wulf, R. Land cover mapping at sub-pixel scales using linear optimization techniques. Remote Sens. Environ. 2002, 79, 96–104. [Google Scholar] [CrossRef]
  21. Mertens, K.C.; De Baets, B.; Verbeke, L.P.C.; de Wulf, R.R. A sub-pixel mapping algorithm based on sub-pixel/pixel spatial attraction models. Int. J. Remote Sens. 2006, 27, 3293–3310. [Google Scholar] [CrossRef]
  22. Atkinson, P.M. Sub-pixel target mapping from soft-classified, remotely sensed imagery. Photogramm. Eng. Remote Sens. 2005, 71, 839–846. [Google Scholar] [CrossRef]
  23. Zhang, L.; Wu, K.; Zhong, Y.; Li, P. A new sub-pixel mapping algorithm based on a BP neural network with an observation model. Neurocomputing 2008, 71, 2046–2054. [Google Scholar] [CrossRef]
  24. Su, Y.-F.; Foody, G.M.; Muad, A.M.; Cheng, K.-S. Combining Hopfield neural network and contouring methods to enhance super-resolution mapping. IEEE J. Sel. Topics Appl. Earth Observ. Remote Sens. 2012, 5, 1403–1417. [Google Scholar]
  25. Kasetkasem, T.; Arora, M.K.; Varshney, P.K. Super-resolution land cover mapping using a Markov random field based approach. Remote Sens. Environ. 2005, 96, 302–314. [Google Scholar] [CrossRef]
  26. Ge, Y.; Li, S.; Lakhan, V.C. Development and testing of a subpixel mapping algorithm. IEEE Trans. Geosci. Remote Sens. 2009, 47, 2155–2164. [Google Scholar]
  27. Mertens, K.C.; Verbeke, L.P.C.; Ducheyne, E.I.; De Wulf, R.R. Using genetic algorithms in sub-pixel mapping. Int. J. Remote Sens. 2003, 24, 4241–4247. [Google Scholar] [CrossRef]
  28. Zhong, Y.; Zhang, L. Remote sensing image subpixel mapping based on adaptive differential evolution. IEEE Trans. Syst. Man Cybern. B: Cybern. 2012, 42, 1306–1329. [Google Scholar] [CrossRef] [PubMed]
  29. Wang, Q.; Wang, L.; Liu, D. Particle swarm optimization-based sub-pixel mapping for remote-sensing imagery. Int. J. Remote Sens. 2012, 33, 6480–6496. [Google Scholar] [CrossRef]
  30. Zhong, Y.; Zhang, L. Sub-pixel mapping based on artificial immune systems for remote sensing imagery. Pattern Recognit. 2013, 46, 2902–2926. [Google Scholar] [CrossRef]
  31. Xu, X.; Zhong, Y.; Zhang, L. Adaptive subpixel mapping based on a multiagent system for remote-sensing imagery. IEEE Trans. Geosci. Remote Sens. 2014, 52, 787–804. [Google Scholar] [CrossRef]
  32. Xu, X.; Zhong, Y.; Zhang, L.; Zhang, H. Sub-pixel mapping based on a MAP model with multiple shifted hyperspectral imagery. IEEE J. Sel. Topics Appl. Earth Observ. Remote Sens. 2013, 6, 580–593. [Google Scholar] [CrossRef]
  33. Zhong, Y.; Wu, Y.; Zhang, L.; Xu, X. Adaptive MAP sub-pixel mapping model based on regularization curve for multiple shifted hyperspectral imagery. ISPRS J. Photogramm. Remote Sens. 2014, 96, 134–148. [Google Scholar] [CrossRef]
  34. Zhang, X.; Burger, M.; Bresson, X.; Osher, S. Bregmanized nonlocal regularization for deconvolution and sparse reconstruction. SIAM J. Imag. Sci. 2010, 3, 253–276. [Google Scholar] [CrossRef]
  35. Buades, A.; Coll, B.; Morel, J.M. A review of image denoising algorithms, with a new one. Multiscale Model. Sim. (SIAM Interdiscip. J.) 2005, 4, 490–530. [Google Scholar] [CrossRef]
  36. Buades, A.; Coll, B.; Morel, J.-M. A non-local algorithm for image denoising. In Proceedings of the IEEE Computer Society Conf. Computer Vision and Pattern Recognition, San Diego, CA, USA, 20–26 June 2005; pp. 60–65.
  37. Qian, Y.; Ye, M. Hyperspectral imagery restoration using nonlocal spectral-spatial structured sparse representation with noise estimation. IEEE J. Sel. Topics Appl. Earth Observ. Remote Sens. 2013, 6, 499–515. [Google Scholar] [CrossRef]
  38. Protter, M.; Elad, M.; Takeda, H.; Milanfar, P. Generalizing the nonlocal-means to super-resolution reconstruction. IEEE Trans. Image Process. 2009, 18, 36–51. [Google Scholar] [CrossRef] [PubMed]
  39. Gilboa, G.; Osher, S.J. Nonlocal linear image regularization and supervised segmentation. Multiscale Model. Sim. (SIAM Interdiscip. J.) 2007, 6, 595–630. [Google Scholar] [CrossRef]
  40. Li, J.; Zhang, H.; Huang, Y.; Zhang, L. Hyperspectral image classification by nonlocal joint collaborative representation with a locally adaptive dictionary. IEEE Trans. Geosci. Remote Sens. 2014, 52, 3707–3719. [Google Scholar] [CrossRef]
  41. Zhong, Y.; Feng, R.; Zhang, L. Non-Local sparse unmixing for hyperspectral remote sensing imagery. IEEE J. Sel. Topics Appl. Earth Observ. Remote Sens. 2014, 7, 1889–1909. [Google Scholar] [CrossRef]
  42. Gilboa, G.; Osher, S. Nonlocal operators with applications to image processing. Multiscale Model. Sim. 2008, 7, 1005–1028. [Google Scholar] [CrossRef]
  43. Osher, S.; Burger, M.; Goldfarb, D.; Xu, J.; Yin, W. An iterative regularization method for total variation-based image restoration. SIAM Multiscale Model. Sim. 2005, 4, 460–489. [Google Scholar] [CrossRef]
  44. Goldstein, T.; Osher, S. The split Bregman method for L1-regularized problems. SIAM J. Imag. Sci. 2009, 2, 323–343. [Google Scholar] [CrossRef]
  45. Heinz, D.C.; Chang, C.-I. Fully constrained least squares linear spectral mixture analysis method for material quantification in hyperspectral imagery. IEEE Trans. Geosci. Remote Sens. 2001, 39, 529–545. [Google Scholar] [CrossRef]
  46. Villa, A.; Chanussot, J.; Benediktsson, J.A.; Jutten, C. Spectral unmixing for the classification of hyperspectral images at a finer spatial resolution. IEEE J. Sel. Top. Signal Process. 2011, 5, 521–533. [Google Scholar] [CrossRef]
  47. American ITT Visual Information Solutions Company. ENVI Online Tutorials. Available online: http://www.exelisvis.com/Learn/Resources/Tutorials.aspx (accessed on 14 March 2016).
  48. Cortes, C.; Vapnik, V. Support-vector networks. Mach. Learn. 1995, 20, 273–297. [Google Scholar] [CrossRef]
  49. Sisodia, P.S.; Tiwari, V.; Kumar, A. A comparative analysis of remote sensing image classification techniques. In Proceedings of the 2014 International Conference on Advanced in Computing, Communications and Informatics, New Delhi, India, 24–27 September 2014; pp. 1418–1421.
  50. Iordache, M.D.; Bioucas-Dias, J.M.; Plaza, A. Sparse unmixing of hyperspectral data. IEEE Trans. Geosci. Remote Sens. 2011, 49, 2014–2039. [Google Scholar] [CrossRef]
  51. Powell, R.L.; Roberts, D.A.; Dennison, P.E.; Hess, L.L. Sub-pixel mapping of urban land cover using multiple endmember spectral mixture analysis: Manaus, Brazil. Remote Sens. Environ. 2007, 106, 253–267. [Google Scholar] [CrossRef]
  52. Roberts, D.A.; Gardner, M.; Church, R.; Ustin, S.; Scheer, G.; Green, R.O. Mapping chaparral in the Santa Monica Mountains using multiple endmember spectral mixture models. Remote Sens. Environ. 1998, 65, 267–279. [Google Scholar] [CrossRef]
Figure 1. Simple example of subpixel mapping (3 × 3 low-resolution pixels, scale factor 5, three land-cover classes). (a) Fraction image; (b) optimal distribution; and (c) inferior distribution.
Figure 2. Flowchart of the proposed NLTVSM method.
Figure 3. The basic principle of the nonlocal means model.
Figure 4. Simulated image. (a) Original simulated hyperspectral image (400 × 400); (b) reference classification image; and (c) fractional abundances (100 × 100).
Figure 5. Washington DC Mall HYDICE image. (a) Original Washington DC Mall hyperspectral image (200 × 300); (b) the ROIs; (c) the reference classification image; and (d) fractional abundances (50 × 75).
Figure 6. HYDICE Urban image. (a) Original HYDICE Urban image (300 × 300); (b) the ROIs; (c) the reference classification image; and (d) fractional abundances (75 × 75).
Figure 7. Nuance dataset. (a) Original Nuance hyperspectral image (50 × 50); (b) fractional abundances (50 × 50); (c) original HR color image (150 × 150); (d) the ROIs; and (e) the reference classification map.
Figure 8. The subpixel mapping results for the simulated image. (a) Reference image; (b) SASM; (c) PSSM; (d) GASM; (e) GSM; (f) TVSM; and (g) NLTVSM.
Figure 9. The subpixel mapping results for the Washington DC Mall HYDICE image. (a) Reference classification image; (b) SASM; (c) PSSM; (d) GASM; (e) GSM; (f) TVSM; and (g) NLTVSM.
Figure 10. The subpixel mapping results for the HYDICE Urban image. (a) Reference classification image; (b) SASM; (c) PSSM; (d) GASM; (e) GSM; (f) TVSM; and (g) NLTVSM.
Figure 11. The subpixel mapping results for the Nuance hyperspectral image. (a) Reference classification image; (b) SASM; (c) PSSM; (d) GASM; (e) GSM; (f) TVSM; and (g) NLTVSM.
Figure 12. Sensitivity analysis for the spatial regularization parameter λ. (a) Washington DC Mall image; and (b) real Nuance image.
Figure 13. The NLTVSM results based on different fractional abundance images. (a) Reference classification image; (b) SUnSAL (OA = 73.63%, Kappa = 0.62); (c) FCLS (OA = 73.77%, Kappa = 0.63); and (d) FCLS and P-SVM (OA = 78.11%, Kappa = 0.69).
Figure 14. The subpixel mapping results for the Washington DC Mall HYDICE image with a shade endmember in the fraction image. (a) Reference classification image; (b) SASM; (c) PSSM; (d) GASM; (e) GSM; (f) TVSM; and (g) NLTVSM.
Table 1. Accuracy of the classification method for the Washington DC Mall HYDICE image.
Method | Class | Water | Grass | Tree | Road
SVM | Water | 3206 | 0 | 0 | 0
SVM | Grass | 0 | 2685 | 20 | 116
SVM | Tree | 0 | 162 | 3030 | 1
SVM | Road | 0 | 0 | 5 | 1318
OA = 97.12%, Kappa = 0.961
Table 2. Accuracy of the classification method for the HYDICE Urban image.
Method | Class | Roof1 | Tree | Concrete Road | Roof2 | Grass | Asphalt Road
SVM | Roof1 | 787 | 0 | 0 | 0 | 0 | 0
SVM | Tree | 0 | 1037 | 0 | 0 | 49 | 0
SVM | Concrete road | 0 | 0 | 685 | 0 | 0 | 0
SVM | Roof2 | 0 | 98 | 0 | 442 | 0 | 1
SVM | Grass | 0 | 19 | 0 | 0 | 1620 | 0
SVM | Asphalt road | 0 | 10 | 6 | 0 | 0 | 1401
OA = 97.03%, Kappa = 0.963
Table 3. Accuracy of the classification method for the Nuance dataset.
Method | Class | Withered Vegetable | Fresh Vegetable | Background
SVM | Withered vegetable | 2012 | 0 | 0
SVM | Fresh vegetable | 1 | 731 | 2
SVM | Background | 0 | 25 | 1087
OA = 99.27%, Kappa = 0.988
Table 4. Accuracy of the different algorithms with the simulated image. The four class columns give the producer's accuracy (%).
Methods | Simulated Water | Simulated Tree | Agricultural Field | Impervious Layer | OA (%) | Kappa
SASM | 98.09 | 85.03 | 93.82 | 81.33 | 92.18 | 0.88
PSSM | 98.09 | 85.03 | 93.82 | 81.33 | 92.18 | 0.88
GASM | 98.09 | 85.03 | 93.82 | 81.33 | 92.18 | 0.88
GSM | 96.75 | 75.29 | 94.90 | 77.10 | 89.04 | 0.84
TVSM | 99.25 | 90.15 | 95.21 | 86.92 | 94.86 | 0.92
NLTVSM | 99.28 | 91.11 | 96.01 | 86.21 | 95.10 | 0.93
Note: The highest accuracies in the table are marked in bold and underlined.
Table 5. Accuracy of the different algorithms with the Washington DC Mall image. The four class columns give the producer's accuracy (%).
Methods | Water | Grass | Tree | Road | OA (%) | Kappa
SASM | 93.13 | 77.08 | 66.38 | 38.18 | 70.16 | 0.58
PSSM | 93.25 | 77.01 | 65.65 | 37.64 | 69.81 | 0.58
GASM | 93.27 | 77.46 | 66.36 | 38.13 | 70.30 | 0.58
GSM | 93.35 | 74.40 | 76.07 | 37.63 | 72.69 | 0.61
TVSM | 96.49 | 85.39 | 73.93 | 42.94 | 76.96 | 0.67
NLTVSM | 97.07 | 85.71 | 77.80 | 39.77 | 78.11 | 0.69
Note: The highest accuracies in the table are marked in bold and underlined.
Table 6. Accuracy of the different algorithms with the HYDICE Urban image. The six class columns give the producer's accuracy (%).
Methods | Roof1 | Tree | Concrete Road | Roof2/Shadow | Grass | Asphalt Road | OA (%) | Kappa
SASM | 71.37 | 67.22 | 61.06 | 44.87 | 55.46 | 57.25 | 59.32 | 0.49
PSSM | 71.14 | 67.01 | 59.71 | 44.63 | 54.93 | 56.69 | 58.82 | 0.48
GASM | 71.28 | 67.75 | 61.38 | 45.02 | 55.56 | 57.02 | 59.50 | 0.49
GSM | 70.06 | 65.74 | 59.96 | 46.38 | 67.45 | 62.82 | 63.40 | 0.54
TVSM | 75.95 | 78.88 | 71.61 | 51.85 | 61.85 | 65.90 | 68.03 | 0.59
NLTVSM | 79.48 | 78.21 | 61.63 | 49.45 | 70.26 | 71.40 | 70.12 | 0.62
Note: The highest accuracies in the table are marked in bold and underlined.
Table 7. Accuracy of the different algorithms with the Nuance dataset. The three class columns give the producer's accuracy (%).
Methods | Withered Vegetable | Fresh Vegetable | Background | OA (%) | Kappa
SASM | 89.80 | 66.80 | 59.03 | 71.60 | 0.57
PSSM | 89.31 | 65.66 | 57.22 | 70.48 | 0.55
GASM | 89.74 | 67.11 | 59.05 | 71.73 | 0.57
GSM | 87.54 | 70.72 | 62.26 | 73.34 | 0.59
TVSM | 89.59 | 71.44 | 69.37 | 76.32 | 0.64
NLTVSM | 89.56 | 71.96 | 69.62 | 76.60 | 0.64
Note: The highest accuracies in the table are marked in bold and underlined.
Table 8. Accuracy of the different algorithms with a shade endmember for the Washington DC Mall image.
Method | SASM | PSSM | GASM | GSM | TVSM | NLTVSM
OA (%) | 63.57 | 63.33 | 63.85 | 60.94 | 72.16 | 72.19
Kappa | 0.50 | 0.50 | 0.51 | 0.47 | 0.61 | 0.61
Table 9. Running times of the different models for the different images.
Image | Number of Pixels | Number of Classes | Scale Factor | TVSM | NLTVSM
Washington DC | 50 × 75 | 4 | 4 | 65.30 s | 61.95 s
HYDICE Urban | 75 × 75 | 6 | 4 | 193.63 s | 110.04 s

Share and Cite

Feng, R.; Zhong, Y.; Wu, Y.; He, D.; Xu, X.; Zhang, L. Nonlocal Total Variation Subpixel Mapping for Hyperspectral Remote Sensing Imagery. Remote Sens. 2016, 8, 250. https://doi.org/10.3390/rs8030250