Article

Multiscale Union Regions Adaptive Sparse Representation for Hyperspectral Image Classification

Fei Tong, Hengjian Tong, Junjun Jiang and Yun Zhang
1 School of Computer Science, China University of Geosciences, Lumo Road 388, Wuhan 430074, China
2 Department of Geodesy and Geomatics Engineering, University of New Brunswick, 15 Dineen Drive, Fredericton, NB E3B 5A3, Canada
* Author to whom correspondence should be addressed.
Remote Sens. 2017, 9(9), 872; https://doi.org/10.3390/rs9090872
Submission received: 13 July 2017 / Revised: 14 August 2017 / Accepted: 21 August 2017 / Published: 23 August 2017
(This article belongs to the Special Issue Hyperspectral Imaging and Applications)

Abstract

Sparse representation has been widely applied to the classification of hyperspectral images (HSIs). Besides spectral information, the spatial context in HSIs also plays an important role in classification. The recently published Multiscale Adaptive Sparse Representation (MASR) classifier has shown good performance in exploiting spatial information for HSI classification. However, MASR exploits spatial information through multiscale patches with fixed sizes of square windows. Such a patch includes all nearest neighbors of the test pixel, but some of these neighbors may be noise pixels. A later study proposed the Multiscale Superpixel-Based Sparse Representation (MSSR) classifier: shape-adaptive superpixels can provide a more accurate representation than patches, but selecting scales for superpixels is difficult. Therefore, inspired by the merits and demerits of multiscale patches and superpixels, we propose a novel algorithm called Multiscale Union Regions Adaptive Sparse Representation (MURASR). The union region, formed by the union of a patch and its corresponding superpixel, makes full use of the advantages of both and overcomes the weaknesses of each. Experiments on several HSI datasets demonstrate that the proposed MURASR is superior to MASR and that the union region is better than the patch in sparse representation.

1. Introduction

Hyperspectral images (HSIs) have been widely used in remote sensing applications, such as land cover classification [1], target detection [2], anomaly detection [3], spectral unmixing [4] and others. Each pixel in an HSI has hundreds of narrow contiguous bands spanning the visible to infrared spectrum [5], which makes it possible to detect and distinguish various objects with higher accuracy [6]. However, increasing the number of spectral bands or features of an HSI pixel does not always increase the classification accuracy. Therefore, making full use of the information in HSIs is a problem in practical applications.
Many algorithms have been developed for the classification of HSIs. Among them are some well-known pixelwise classifiers, such as the support vector machine (SVM) [7,8,9], the support vector conditional random field classifier [10], multinomial logistic regression [11], neural networks [12] and the adaptive artificial immune network [13]. These pixelwise classifiers can make full use of the spectral information of HSIs, but the classification results are often noisy because the spatial information is not considered.
Therefore, recent studies have incorporated spatial information into HSI classification to enhance performance. The basic way to use spatial information is to assume that the pixels within a local region usually represent the same material and have similar spectral characteristics [1]. Various studies [14,15,16,17,18,19,20,21,22,23,24,25] build on this assumption. Beyond these, sparse representation (SR), which is based on the observation that spectral pixels of a particular class should lie in a low-dimensional subspace spanned by dictionary atoms (training pixels) from the same class, has also been employed. In [26], a Joint Sparse Representation Classification (JSRC) method was proposed to incorporate both spectral and spatial information. The spatial information is expressed by a fixed-size local square window centered on the test pixel, and all pixels in the window are jointly represented by a few common atoms in a given dictionary. JSRC can achieve good performance, but the optimal window size cannot be determined easily. In [27], a stepwise Markov random field (MRF) optimization was proposed to exploit spatial information based on the result of multitask joint sparse representation. In [28], MASR was proposed to alleviate the difficulty of choosing the region size: instead of choosing a single scale, it extends the spatial information to several scales to take advantage of correlations among multiple region scales. However, the multiscale regions used in MASR are multiscale patches, which may contain noise pixels. Compared with patch regions, shape-adaptive superpixels can provide more accurate spatial information. In [29], the superpixel was introduced to replace the patch region, and in [30] a shape-adaptive local smooth region was generated for each test pixel by a shape-adaptive algorithm. A more recent study proposed the Multiscale Superpixel-Based Sparse Representation (MSSR) [31], in which multiscale superpixels are generated, each scale is represented by JSRC, and a fused result is obtained from the multiscale results by majority voting. However, the selection of scales for superpixels remains a problem: although the multiscale scheme alleviates the difficulty of selecting a single segmentation scale, a base number of superpixels still has to be determined empirically.
In fact, patches and superpixels both have their own advantages and shortcomings. The patch includes all nearest neighbors, but it may also contain noise pixels. The shape-adaptive superpixel can exploit more accurate spatial information, but some mixed superpixels still arise when the scale is not optimal; in a mixed superpixel, some pixels are necessarily misrepresented because all pixels in the superpixel share the same representation. Inspired by the merits and demerits of patches and superpixels, we propose to replace both with a union region, i.e., the union of the patch and the corresponding superpixel. Compared with the patch, the union region includes more pixels similar to the test pixel, which reduces the effect of noise pixels. Compared with the superpixel, the union region provides more direct neighbors for the test pixel, which strengthens the representation of pixels located in a wrong superpixel. In addition, the superpixels required for generating union regions do not need empirically chosen scales: the scales are determined by the size of the image and the corresponding patch sizes. By replacing the patch in MASR with the union region, we obtain a new algorithm called Multiscale Union Regions Adaptive Sparse Representation (MURASR). MURASR also adopts a probability majority voting method to optimize the classification result generated by the sparse representation. Experimental results show that union region based algorithms consistently perform better than patch region based algorithms, and the proposed MURASR outperforms the other algorithms in terms of quantitative metrics and visual quality of the classification maps.
The rest of the paper is organized as follows. JSRC and MASR are briefly introduced in Section 2. The details of the proposed MURASR method are described in Section 3. The experimental results and discussion are presented in Section 4. Finally, Section 5 summarizes the paper and suggests future work. The outline of MURASR is illustrated in Figure 1.

2. Background

2.1. JSRC

The sparse representation classification (SRC) framework was first proposed for face recognition [32]. Chen et al. [26] then extended SRC to pixelwise HSI classification, relying on the observation that spectral pixels of a particular class should lie in a low-dimensional subspace spanned by dictionary atoms (training pixels) from the same class. However, spatial information is not considered by pixelwise sparse representation. Therefore, based on the observation that neighboring pixels belonging to the same class are usually strongly correlated, JSRC was introduced to capture such spatial correlations by assuming that neighboring pixels within a fixed-size region can be jointly represented by a few common atoms from a structural dictionary. Concretely, let $\mathbf{y} \in \mathbb{R}^{M \times 1}$ be a pixel, with $M$ denoting the number of spectral bands, and let $\mathbf{D} = [\mathbf{D}_1, \ldots, \mathbf{D}_c, \ldots, \mathbf{D}_C] \in \mathbb{R}^{M \times N}$ be a structural dictionary, where $\mathbf{D}_c \in \mathbb{R}^{M \times N_c}$, $c = 1, \ldots, C$, is the $c$th class subdictionary whose columns (atoms) are extracted from the training pixels; $C$ is the number of classes; $N_c$ is the number of atoms in subdictionary $\mathbf{D}_c$; and $N = \sum_{c=1}^{C} N_c$ is the total number of atoms in $\mathbf{D}$. Denote by $W \times W$ the size of a region surrounding the test pixel $\mathbf{y}_1$; the pixels within such a region form a matrix $\mathbf{Y} = [\mathbf{y}_1, \mathbf{y}_2, \ldots, \mathbf{y}_{W \times W}]$, which can be compactly represented as:

$$\mathbf{Y} = [\mathbf{y}_1, \mathbf{y}_2, \ldots, \mathbf{y}_{W \times W}] = [\mathbf{D}\mathbf{A}_1, \mathbf{D}\mathbf{A}_2, \ldots, \mathbf{D}\mathbf{A}_{W \times W}] = \mathbf{D}[\mathbf{A}_1, \mathbf{A}_2, \ldots, \mathbf{A}_{W \times W}] = \mathbf{D}\mathbf{A} \tag{1}$$

where $\mathbf{A} = [\mathbf{A}_1, \mathbf{A}_2, \ldots, \mathbf{A}_{W \times W}]$ is the sparse coefficient matrix corresponding to $\mathbf{Y}$. Since the indexes of the selected atoms in $\mathbf{D}$ are determined by the positions of the nonzero coefficients in $[\mathbf{A}_1, \mathbf{A}_2, \ldots, \mathbf{A}_{W \times W}]$, the neighboring pixels $[\mathbf{y}_1, \mathbf{y}_2, \ldots, \mathbf{y}_{W \times W}]$ can be represented by a small set of common atoms by enforcing a few nonzero rows in the sparse coefficient matrix $\mathbf{A}$. The matrix $\mathbf{A}$ can then be obtained by solving the following optimization problem:

$$\hat{\mathbf{A}} = \arg\min_{\mathbf{A}} \|\mathbf{Y} - \mathbf{D}\mathbf{A}\|_F \quad \text{subject to} \quad \|\mathbf{A}\|_{\mathrm{row},0} \le K \tag{2}$$

where $\|\mathbf{A}\|_{\mathrm{row},0}$ denotes the joint sparsity norm, which is used to select a small number of the most representative nonzero rows in $\mathbf{A}$, and $\|\cdot\|_F$ is the Frobenius norm. A variant of the orthogonal matching pursuit (OMP) algorithm, called simultaneous OMP (SOMP) [33,34], can be used to efficiently obtain an approximate solution. After $\hat{\mathbf{A}}$ is recovered, the label of the test pixel $\mathbf{y}_1$ is decided by the minimal total error:

$$\hat{c} = \arg\min_{c} \|\mathbf{Y} - \mathbf{D}_c \hat{\mathbf{A}}_c\|_F, \quad c = 1, \ldots, C \tag{3}$$

where $\hat{\mathbf{A}}_c$ denotes the rows in $\hat{\mathbf{A}}$ associated with the $c$th class.
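As a concrete illustration, SOMP admits a compact greedy implementation. The following is a minimal NumPy sketch, not the exact solver of [33,34]: the row-selection rule (here the ℓ2 norm of each atom's correlations with the residual) and the plain least-squares coefficient update are common choices that may differ in detail from the referenced algorithms.

```python
import numpy as np

def somp(Y, D, K):
    """Greedy SOMP sketch: jointly approximate the pixels Y (M x P) with at
    most K common atoms from the dictionary D (M x N)."""
    R = Y.copy()                                   # current residual
    support = []                                   # indexes of selected atoms
    for _ in range(K):
        corr = D.T @ R                             # atom/residual correlations (N x P)
        k = int(np.argmax(np.linalg.norm(corr, axis=1)))  # best atom across all pixels
        support.append(k)
        A_s, *_ = np.linalg.lstsq(D[:, support], Y, rcond=None)
        R = Y - D[:, support] @ A_s                # update residual
    A = np.zeros((D.shape[1], Y.shape[1]))
    A[support, :] = A_s                            # row-sparse coefficient matrix
    return A
```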

2.2. MASR

Compared with the pixelwise SRC model, JSRC can achieve more accurate classification results because it incorporates the spatial information of local regions. However, the region size (or region scale) has a great influence on the classification performance, so determining an optimal region scale is of great importance for JSRC.
Fang et al. [28] then proposed MASR to alleviate the difficulty of choosing a region scale. MASR effectively exploits spatial information at multiple scales via an adaptive sparse strategy: this strategy not only restricts pixels from different scales to be represented by training atoms from a particular class but also allows the selected atoms for these pixels to vary, thus providing an improved representation. Given a test pixel $\mathbf{y}_1$ in the HSI, its $T$ neighboring regions are selected via different predefined scales, where the neighboring regions are multiscale patches centered on the test pixel. A multiscale matrix $\mathbf{Y}^{mp} = [\mathbf{Y}_1, \ldots, \mathbf{Y}_t, \ldots, \mathbf{Y}_T]$ is then constructed from the pixels within the selected regions, where $\mathbf{Y}_t$ contains the pixels of the $t$th-scale region. Since the spatial structures and characteristics of regions at different scales are distinct, the multiscale matrix $\mathbf{Y}^{mp}$ provides complementary yet correlated information, which can be utilized to classify $\mathbf{y}_1$ more accurately.
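To illustrate how such a multiscale matrix might be assembled, the following sketch extracts square windows at several scales around one test pixel. The symmetric border padding is an assumption made here so that windows at image edges stay full-sized; it is not prescribed by [28].

```python
import numpy as np

def multiscale_patches(img, row, col, scales=(3, 5, 7)):
    """Collect Y_mp = [Y_1, ..., Y_T] for the pixel at (row, col).
    img is H x W x M; each Y_t is returned as an M x (s*s) matrix."""
    pad = max(scales) // 2
    padded = np.pad(img, ((pad, pad), (pad, pad), (0, 0)), mode='symmetric')
    r, c = row + pad, col + pad                    # position in the padded image
    Y_mp = []
    for s in scales:
        h = s // 2
        win = padded[r - h:r + h + 1, c - h:c + h + 1, :]   # s x s x M window
        Y_mp.append(win.reshape(-1, win.shape[2]).T)        # flatten to M x (s*s)
    return Y_mp
```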
In MASR, an adaptive sparse strategy is adopted to utilize the correlated information among the scales and achieve a flexible atom selection process. An important part of the adaptive strategy is the use of a collection of adaptive sets, where each adaptive set is the set of indexes of nonzero scalar coefficients belonging to the same class in the multiscale sparse matrix $\mathbf{A}^{mp}$. By combining the adaptive sets with the $\|\cdot\|_{\mathrm{row},0}$ norm, a new adaptive norm $\|\cdot\|_{\mathrm{adaptive},0}$ is defined on $\mathbf{A}^{mp}$, which can be used to select a small number of adaptive sets from $\mathbf{A}^{mp}$. The matrix $\mathbf{A}^{mp}$ can then be recovered by applying the adaptive norm as follows:

$$\hat{\mathbf{A}}^{mp} = \arg\min_{\mathbf{A}^{mp}} \|\mathbf{Y}^{mp} - \mathbf{D}\mathbf{A}^{mp}\|_F \quad \text{subject to} \quad \|\mathbf{A}^{mp}\|_{\mathrm{adaptive},0} \le K \tag{4}$$

After recovering the multiscale sparse representation matrix $\hat{\mathbf{A}}^{mp}$, a single decision is made for the test pixel $\mathbf{y}_1$ based on the lowest total representation error:

$$\hat{c} = \arg\min_{c} \|\mathbf{Y}^{mp} - \mathbf{D}_c \hat{\mathbf{A}}^{mp}_c\|_F, \quad c = 1, \ldots, C \tag{5}$$

where $\hat{\mathbf{A}}^{mp}_c$ represents the rows in $\hat{\mathbf{A}}^{mp}$ corresponding to the $c$th class.

3. Multiscale Union Regions Adaptive Sparse Representation

The aforementioned MASR shows good performance for HSI classification, but it utilizes multiscale patches to exploit spatial information. In a patch, most of the pixels may differ from the test pixel, for example when the test pixel lies on the edge of a building. The classification may then be misled by noise pixels from other classes that resemble atoms in the dictionary, producing an incorrect label for the test pixel. In computer vision, superpixels have been studied to provide an efficient representation that facilitates visual recognition [35,36,37]. Each superpixel is a perceptually meaningful region whose shape and size adapt to the local spatial structure. However, finding an optimal scale for superpixels is still a challenge, and without an optimal scale some mixed superpixels will be generated. Based on the fact that both patches and superpixels may include pixels from different classes, a multiscale union regions adaptive sparse representation model is proposed to decrease the influence of noise pixels on the test pixel. The union region is the union of the patch and the corresponding superpixel at the same scale (see Figure 2). For a test pixel, if the patch includes noise pixels, the superpixel can provide more similar pixels to reduce their impact; conversely, if the test pixel is located in a wrong superpixel that contains few pixels similar to it, the patch can provide more similar pixels to reinforce the correct representation.

3.1. Generation of Multiscale Union Regions

Before generating multiscale union regions, multiscale superpixels must be obtained. Various studies have focused on segmentation [36,37,38,39]; in this paper, an oversegmentation algorithm called entropy rate superpixel segmentation (ERS) [37] is applied to generate 2-D superpixel maps on the base images because of its high efficiency. Unlike a single-band gray or three-band color image, an HSI usually has hundreds of spectral bands. To improve computational efficiency, principal component analysis (PCA) [40] is first used to reduce the spectral bands of the HSI. Since the important information of the HSI is concentrated in the first principal components (e.g., the first three), these can serve as base images; in this paper, only the first principal component is chosen as the base image. Instead of choosing superpixel scales empirically, we calculate them from the corresponding patch sizes. Let $PS_t$ denote the patch size (number of pixels in the patch) of the $t$th scale and $N_{total}$ the total number of pixels in the image (note that the original image is extended at the borders for edge pixels); the number of superpixels $n_t$ for the $t$th segmentation is calculated as:

$$n_t = N_{total} / PS_t \tag{6}$$
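For concreteness, the following sketch computes $n_t$ for all scales under the assumption, consistent with the numbers reported in Table 1, that the image is extended by half of the largest patch width on every border and that the ratio is truncated to an integer:

```python
def superpixel_counts(height, width, patch_sides=(3, 5, 7, 9, 11, 13, 15)):
    """Number of superpixels per scale, Equation (6): n_t = N_total / PS_t."""
    pad = max(patch_sides) // 2                        # border extension for edge pixels
    n_total = (height + 2 * pad) * (width + 2 * pad)   # pixels in the extended image
    return [n_total // (side * side) for side in patch_sides]

# superpixel_counts(145, 145) reproduces the Indian Pines row of Table 1:
# [2809, 1011, 515, 312, 208, 149, 112]
```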
In this way, the average superpixel size equals the patch size, so most superpixels have sizes similar to the patches, which guarantees that superpixel and patch have comparable influence on the union region. Moreover, as the patch size increases, the number of superpixels decreases quickly, so only a limited number of segmentations needs to be generated; based on the performance of this limited number of segmentations, it becomes easier for users to determine the number of scales. After segmentation, $T$ superpixels are obtained for each test pixel $\mathbf{y}_1$, and these superpixels construct the corresponding multiscale matrix $\mathbf{Y}^{ms} = [\mathbf{Y}_1, \ldots, \mathbf{Y}_t, \ldots, \mathbf{Y}_T]$, where $\mathbf{Y}_t$ contains the pixels of the $t$th-scale superpixel. Then, for a specific $t$th scale, the union region $\mathbf{Y}^{mu}_t$ is defined as follows:

$$\mathbf{Y}^{mu}_t = \mathbf{Y}^{ms}_t \cup \mathbf{Y}^{mp}_t \tag{7}$$
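As an illustration, a union region can be materialized as the set union of the two pixel neighborhoods. The sketch below returns the pixel coordinates whose spectra would form $\mathbf{Y}^{mu}_t$; clipping the window at the image border (instead of extending the image) is a simplifying assumption made here.

```python
import numpy as np

def union_region(labels, row, col, patch_side):
    """Coordinates of the union region for the pixel at (row, col):
    the patch_side x patch_side window plus the superpixel containing
    the test pixel.  labels is the 2-D superpixel label map (e.g., ERS)."""
    H, W = labels.shape
    h = patch_side // 2
    rr, cc = np.meshgrid(np.arange(max(row - h, 0), min(row + h + 1, H)),
                         np.arange(max(col - h, 0), min(col + h + 1, W)),
                         indexing='ij')
    patch = set(zip(rr.ravel(), cc.ravel()))                # pixels of the square patch
    sp = set(zip(*np.nonzero(labels == labels[row, col])))  # pixels of the superpixel
    return patch | sp                                       # set union, Equation (7)
```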

3.2. Multiscale Union Regions Adaptive Sparse Representation

For a test pixel $\mathbf{y}_1$, the corresponding multiscale matrix is $\mathbf{Y}^{mu} = [\mathbf{Y}_1, \ldots, \mathbf{Y}_t, \ldots, \mathbf{Y}_T]$, where $\mathbf{Y}_t$ is the union of $\mathbf{Y}^{mp}_t$ and $\mathbf{Y}^{ms}_t$. The sparse coefficient matrix $\mathbf{A}^{mu}$ can then be recovered by solving the following problem:

$$\hat{\mathbf{A}}^{mu} = \arg\min_{\mathbf{A}^{mu}} \|\mathbf{Y}^{mu} - \mathbf{D}\mathbf{A}^{mu}\|_F \quad \text{subject to} \quad \|\mathbf{A}^{mu}\|_{\mathrm{adaptive},0} \le K \tag{8}$$

To solve this problem, the method used in MASR is applied. At each iteration, the current residual correlation matrix is calculated first; a new adaptive set is then selected based on this matrix and merged with the previously selected adaptive sets; the sparse coefficient matrix is estimated from the merged adaptive sets; and finally the residual is updated. The iterations stop when the termination criterion is satisfied. After the multiscale sparse representation matrix $\hat{\mathbf{A}}^{mu}$ is recovered, the final label of the test pixel $\mathbf{y}_1$ is determined by the minimal total representation error:

$$\hat{c} = \arg\min_{c} \|\mathbf{Y}^{mu} - \mathbf{D}_c \hat{\mathbf{A}}^{mu}_c\|_F, \quad c = 1, \ldots, C \tag{9}$$
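The labeling rule of Equation (9) (like Equations (3) and (5)) amounts to comparing class-wise reconstruction errors. A minimal sketch, assuming a hypothetical vector class_of_atom that records the class index of each dictionary column:

```python
import numpy as np

def classify_by_residual(Y_mu, D, A_hat, class_of_atom):
    """Return c_hat = argmin_c ||Y_mu - D_c A_c||_F (Equation (9))."""
    classes = np.unique(class_of_atom)
    errors = []
    for c in classes:
        idx = np.where(class_of_atom == c)[0]          # atoms of class c
        errors.append(np.linalg.norm(Y_mu - D[:, idx] @ A_hat[idx, :], 'fro'))
    return int(classes[np.argmin(errors)])
```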

3.3. Probability Majority Voting

Because multiscale union regions adaptive sparse representation is a pixel-based classifier, some salt-and-pepper noise pixels will remain inside ground truth objects. Therefore, a majority voting process helps to optimize the classification result. As mentioned above, a union region is generated for each test pixel at each scale, and for each union region the probabilities of belonging to each class are calculated. If a union region at the $i$th scale contains $N^{total}_i$ labeled pixels, of which $N_{ij}$ pixels are classified to the $j$th class, the probability $P_{ij}$ of belonging to the $j$th class is calculated as:

$$P_{ij} = N_{ij} / N^{total}_i \tag{10}$$

Assuming that there are $k$ classes and $T$ scales of segmentation maps, the class label $\hat{j}$ of the test pixel is obtained by:

$$\hat{j} = \arg\max_{j} \left( \sum_{i=1}^{T} P_{ij} \right), \quad j = 1, \ldots, k \tag{11}$$
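A minimal sketch of Equations (10) and (11) for one test pixel, assuming an initial classification map already assigns an integer label in [0, k) to every pixel and that region_labels_per_scale[i] collects those labels inside the $i$th-scale union region:

```python
import numpy as np

def probability_majority_vote(region_labels_per_scale, k):
    """Fuse P_ij = N_ij / N_i_total over scales (Equation (10)) and return
    j_hat = argmax_j sum_i P_ij (Equation (11))."""
    score = np.zeros(k)
    for labels in region_labels_per_scale:
        counts = np.bincount(labels, minlength=k)      # N_ij for every class j
        score += counts / counts.sum()                 # add this scale's P_ij row
    return int(np.argmax(score))
```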

4. Experimental Results and Discussion

4.1. Data Sets

To verify the effectiveness of the proposed MURASR method and the superiority of the union region, experiments were conducted on three hyperspectral data sets: the Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) Indian Pines data, the AVIRIS Salinas data, and the Reflective Optics System Imaging Spectrometer (ROSIS-03) University of Pavia data. The AVIRIS Indian Pines image has 220 data channels, a size of 145 × 145 pixels, and a spectral range from 0.2 to 2.4 μm. It was captured over the agricultural Indian Pines test site in northwestern Indiana with a spatial resolution of 20 m per pixel. Before classification, 20 water absorption bands (No. 104–108, 150–163 and 220) were discarded [41]. Figure 3a,b show the color composite of the Indian Pines image and the corresponding reference data with 16 reference classes of different crop types.
The Salinas image was also acquired by the AVIRIS sensor, over Salinas Valley, California. The image is of size 512 × 217 × 224 with a spatial resolution of 3.7 m per pixel. Similar to the Indian Pines image, 20 water absorption bands (No. 108–112, 154–167 and 224) were removed, and 16 reference classes are considered for this image. Figure 4a,b show the color composite of the Salinas image and the corresponding reference data.
The University of Pavia image, which covers an urban area surrounding the University of Pavia, Italy, was recorded by the ROSIS-03 sensor. The image is of size 610 × 340 × 115 with a spatial resolution of 1.3 m per pixel and a spectral coverage from 0.43 to 0.86 μm. Twelve very noisy channels were discarded before the experiments, and nine information classes are considered for this image. Figure 5a,b show the color composite of the University of Pavia image and the corresponding reference data.

4.2. Comparison of Experiment Results

In the experiments, all compared algorithms are based on sparse representation. In addition to the published algorithms SRC, JSRC and MASR, the experiments include JUSRC (Joint Union Sparse Representation Classification), MJSRC (Multiscale Joint Sparse Representation Classification), MJUSRC (Multiscale Joint Union Sparse Representation Classification), MURASR* and MURASR. To further verify the superiority of the union region, JUSRC replaces the patch used in JSRC with the union region. To demonstrate the superiority of the multiscale adaptive strategy, we extended JSRC and JUSRC with a simple multiscale scheme that applies majority voting to the results of all scales for the final decision; the extended algorithms are called MJSRC and MJUSRC. Furthermore, MURASR* is MURASR without the probability majority voting process, so the comparison between MURASR* and MURASR shows the effect of the probability majority voting method. The parameters of the SRC, JSRC and JUSRC algorithms were tuned to reach the best results in these experiments. For all multiscale algorithms, seven different scales were simultaneously adopted, with region scales of 3 × 3, 5 × 5, 7 × 7, 9 × 9, 11 × 11, 13 × 13 and 15 × 15. The superpixel numbers for segmentation were then calculated with Equation (6) and are listed in Table 1. Other parameters in MJSRC, MJUSRC, MASR, MURASR* and MURASR were the same as in [28]. To evaluate the performance of the classifiers, three objective metrics (overall accuracy (OA), average accuracy (AA) and the kappa coefficient) are adopted. In addition, McNemar's test is applied to analyse the experimental results. McNemar's test is based on the standardized normal test statistic described in [42]:

$$Z = \frac{h_{12} - h_{21}}{\sqrt{h_{12} + h_{21}}} \tag{12}$$

where $h_{12}$ is the number of samples correctly classified by method 1 but incorrectly classified by method 2, and $h_{21}$ the reverse. If $|Z| > 1.96$, the difference in accuracy between the two methods can be considered statistically significant. The sign of $Z$ indicates which method is better: if $Z > 0$, method 1 is more accurate than method 2.
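A direct transcription of Equation (12), included for illustration:

```python
import math

def mcnemar_z(pred1, pred2, truth):
    """Standardized McNemar's Z (Equation (12)).  |Z| > 1.96 marks a
    statistically significant difference; Z > 0 favors method 1."""
    h12 = sum(p1 == t and p2 != t for p1, p2, t in zip(pred1, pred2, truth))
    h21 = sum(p1 != t and p2 == t for p1, p2, t in zip(pred1, pred2, truth))
    return (h12 - h21) / math.sqrt(h12 + h21)   # assumes h12 + h21 > 0
```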
The Indian Pines data set was classified first. For each class, 10% of the labeled pixels were randomly sampled for training, while the remaining 90% were used to test the classifiers (see Table 2). The classification maps generated by the different classifiers on the Indian Pines image are shown in Figure 6, and the detailed classification results, averaged over ten runs with randomly sampled training sets, are tabulated in Table 3. The results of the McNemar's tests between classifiers are listed in Table 4. JUSRC, MJUSRC and MURASR* perform better than JSRC, MJSRC and MASR, respectively, which demonstrates the superiority of the union region over the patch region. In addition, the multiscale majority voting based MJSRC and MJUSRC perform worse than the multiscale adaptive strategy based MASR and MURASR* for this image: the accuracy improvements of MASR and MURASR* over MJSRC and MJUSRC exceed 3%. MURASR achieves a better result than MURASR* in both accuracy and classification map; as can be observed from their classification maps, many misclassifications in MURASR* are efficiently eliminated by the probability majority voting method. Moreover, MURASR performs best among all algorithms in terms of OA and AA, and the results of the McNemar's test are statistically significant and consistent with the obtained overall accuracies.
The second experiment was performed on the Salinas data set. To allow comparison with MASR, only 1% of the labeled pixels of each class were randomly selected for training, and the remaining 99% were classified to demonstrate the superiority of the proposed MURASR (see Table 5). The classification maps of the various classifiers are illustrated in Figure 7, the average quantitative results over ten runs are tabulated in Table 6, and the results of the McNemar's tests are shown in Table 7. As can be observed, the union region based algorithms JUSRC, MJUSRC and MURASR* again obtain more accurate results than the patch region based JSRC, MJSRC and MASR in terms of OA, AA and kappa coefficient. The classification maps of MJSRC and MJUSRC show more salt-and-pepper noise than those of MASR and MURASR*. Comparing the classification maps of MURASR* and MURASR, most misclassifications generated by MURASR* are corrected by the probability majority voting method. In addition, the overall accuracy of MURASR reaches 99.70%, which is very high. The McNemar's tests between classifiers are again statistically significant and consistent with the obtained overall accuracies.
The final experiment was conducted on the University of Pavia image, in which the shapes of surface objects are more complex than in the previous two images. For each reference class, 200 training samples were randomly selected from the labeled data, and the remaining pixels were used to test the various classifiers (see Table 8). The classification maps are shown in Figure 8, and the detailed results averaged over ten runs in terms of OA, AA and kappa coefficient are listed in Table 9. The McNemar's tests between classifiers were also conducted on this image, and the results are tabulated in Table 10. As with the previous two images, the union region based classifiers performed better than the patch region based classifiers, and the multiscale adaptive strategy again works better than the multiscale majority voting strategy. The accuracy improvement gained by probability majority voting is smaller than for the previous two images because the University of Pavia image has fewer large homogeneous regions. From Table 9, MASR is more accurate than MURASR for only one class, whereas MURASR performs best among all classifiers for seven classes, which further demonstrates its superiority. The results of the McNemar's tests also support this analysis.
Compared with many existing algorithms, MASR is time-consuming. The proposed MURASR is built on the multiscale adaptive representation of MASR, the generation of union regions consumes additional time, and the union region contains more pixels than the patch region. Therefore, MURASR is also time-consuming, with a time cost of about twice that of MASR. However, the proposed MURASR was coded in MATLAB (R2016a, MathWorks, Natick, MA, USA) and was not optimized for speed. MURASR could be significantly sped up by porting the code from MATLAB to C++ and by adopting a general-purpose graphics processing unit (GPU).

4.3. Effects of Region Scales

Except for the pixelwise SRC, all related algorithms are affected by the number of scales. In the experiments above, seven scales were chosen to compare the performance of all algorithms; the effect of region scales for JSRC, MJSRC and MASR has been presented in [28]. From Table 1, when the scale number is 7, the calculated superpixel scale is already large: if the scale continued to increase, more mixed superpixels would be generated. Moreover, the classification results of MURASR on the three images are already encouraging with seven scales. Therefore, this section analyses the effect of scale numbers up to 7, i.e., patch scales ranging from 3 × 3 to 15 × 15. Figure 9 shows the average OA over ten runs for JSRC, JUSRC, MJSRC, MJUSRC, MASR, MURASR* and the proposed MURASR; for the multiscale algorithms, each scale represents the combination of the current scale and all smaller scales. The union region based classifiers JUSRC, MJUSRC and MURASR* generally outperform the corresponding patch region based JSRC, MJSRC and MASR, and the probability majority voting method improves the classification result at every region scale. In addition, the proposed MURASR consistently outperforms the other algorithms at all region scales.

4.4. Effects of Training Samples Number

The number of training samples may affect the performance of the classifiers. Therefore, the effect of the number of training samples on JSRC, MJSRC, JUSRC, MJUSRC, MASR, MURASR* and the proposed MURASR was examined on the three images. For Indian Pines, the percentage of training samples selected per class varies from 1% to 20%; for Salinas, from 0.1% to 2%; and for the University of Pavia, 60–500 training samples were selected per reference class. The classification OA of each classifier with different numbers of training samples is illustrated in Figure 10; the OA is again averaged over ten runs. As can be observed, the union region based classifiers JUSRC, MJUSRC and MURASR* always perform better than the corresponding patch region based JSRC, MJSRC and MASR. Comparing MURASR* and MURASR, the improvement obtained from the probability majority voting method increases as the number of training samples decreases. Moreover, the proposed MURASR generally outperforms the other classifiers for all training sample sizes.

5. Conclusions

In this paper, a novel multiscale union regions adaptive sparse representation, MURASR, which uses union regions integrating patches and superpixels to exploit spatial information, is proposed for spectral-spatial HSI classification. Unlike the patch region based MASR, the proposed MURASR extends the patch region to the union region, which combines two observations: neighboring pixels belonging to the same material are usually strongly correlated with each other, and pixels within a superpixel usually belong to the same material. Before sparse representation, multiscale union regions are generated via the union operation on patch and superpixel; multiscale adaptive sparse representation is then adopted to classify the multiscale union regions, and an effective probability majority voting method is applied to generate the final result. Experiments on three HSIs demonstrate that union region based algorithms consistently perform better than patch region based algorithms and that the proposed MURASR outperforms the other algorithms in terms of quantitative metrics and visual quality of the classification maps.
Because MURASR is a pixel-based algorithm, replacing the superpixel with a region grown from each test pixel would give the generated union region an even more accurate representation of the spatial information; further research will therefore generate one superpixel per test pixel. In addition, the structural dictionary for sparse representation is currently constructed directly from selected training pixels; a learned dictionary may decrease the running time of the algorithm and provide a more accurate representation of test pixels.

Acknowledgments

This research was supported by the National Natural Science Foundation of China under Grants No. 41171339 and 61501413. The authors would like to thank Leyuan Fang for providing the source code of MASR and M.-Y. Liu for the over-segmentation methods on their websites (http://www.escience.cn/people/LeyuanFang/index.html, http://mingyuliu.net). The authors would like to thank David A. Landgrebe from Purdue University for providing the AVIRIS image of Indian Pines and Paolo Gamba from the University of Pavia for providing the ROSIS data set. The authors would like to thank the National Aeronautics and Space Administration Jet Propulsion Laboratory for providing the AVIRIS image of Salinas. The authors would also like to thank the handling editor and anonymous reviewers for their valuable comments and suggestions, which significantly improved the quality of this paper.

Author Contributions

Fei Tong and Hengjian Tong proposed the model and implemented the experiments. Fei Tong wrote the manuscript. Junjun Jiang provided overall guidance of the work and edited the manuscript. Yun Zhang reviewed and edited the manuscript.

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Fauvel, M.; Tarabalka, Y.; Benediktsson, J.A.; Chanussot, J.; Tilton, J.C. Advances in Spectral-Spatial Classification of Hyperspectral Images. Proc. IEEE 2013, 101, 652–675.
2. Zhang, L.; Zhang, L.; Tao, D.; Huang, X. Sparse Transfer Manifold Embedding for Hyperspectral Target Detection. IEEE Trans. Geosci. Remote Sens. 2014, 52, 1030–1043.
3. Du, B.; Zhang, L. Random-Selection-Based Anomaly Detector for Hyperspectral Imagery. IEEE Trans. Geosci. Remote Sens. 2011, 49, 1578–1589.
4. Zhong, Y.; Wang, X.; Zhao, L.; Feng, R.; Zhang, L.; Xu, Y. Blind spectral unmixing based on sparse component analysis for hyperspectral remote sensing imagery. ISPRS J. Photogramm. Remote Sens. 2016, 119, 49–63.
5. Borengasser, M.; Hungate, W.S.; Watkins, R. Hyperspectral Remote Sensing: Principles and Applications; CRC Press: Boca Raton, FL, USA, 2007.
6. Plaza, A.; Benediktsson, J.A.; Boardman, J.W.; Brazile, J.; Bruzzone, L.; Camps-Valls, G.; Chanussot, J.; Fauvel, M.; Gamba, P.; Gualtieri, A.; et al. Recent advances in techniques for hyperspectral image processing. Remote Sens. Environ. 2009, 113 (Suppl. 1), S110–S122.
7. Melgani, F.; Bruzzone, L. Classification of hyperspectral remote sensing images with support vector machines. IEEE Trans. Geosci. Remote Sens. 2004, 42, 1778–1790.
8. Bruzzone, L.; Chi, M.; Marconcini, M. A Novel Transductive SVM for Semisupervised Classification of Remote-Sensing Images. IEEE Trans. Geosci. Remote Sens. 2006, 44, 3363–3373.
9. Chi, M.; Bruzzone, L. Semisupervised Classification of Hyperspectral Images by SVMs Optimized in the Primal. IEEE Trans. Geosci. Remote Sens. 2007, 45, 1870–1880.
10. Zhong, Y.; Lin, X.; Zhang, L. A Support Vector Conditional Random Fields Classifier With a Mahalanobis Distance Boundary Constraint for High Spatial Resolution Remote Sensing Imagery. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2014, 7, 1314–1330.
11. Li, J.; Bioucas-Dias, J.M.; Plaza, A. Semisupervised Hyperspectral Image Classification Using Soft Sparse Multinomial Logistic Regression. IEEE Geosci. Remote Sens. Lett. 2013, 10, 318–322.
12. Ratle, F.; Camps-Valls, G.; Weston, J. Semisupervised Neural Networks for Efficient Hyperspectral Image Classification. IEEE Trans. Geosci. Remote Sens. 2010, 48, 2271–2282.
13. Zhong, Y.; Zhang, L. An Adaptive Artificial Immune Network for Supervised Classification of Multi-/Hyperspectral Remote Sensing Imagery. IEEE Trans. Geosci. Remote Sens. 2012, 50, 894–909.
14. Ji, R.; Gao, Y.; Hong, R.; Liu, Q.; Tao, D.; Li, X. Spectral–Spatial Constraint Hyperspectral Image Classification. IEEE Trans. Geosci. Remote Sens. 2014, 52, 1811–1824.
15. Khodadadzadeh, M.; Li, J.; Plaza, A.; Ghassemian, H.; Bioucas-Dias, J.M.; Li, X. Spectral–Spatial Classification of Hyperspectral Data Using Local and Global Probabilities for Mixed Pixel Characterization. IEEE Trans. Geosci. Remote Sens. 2014, 52, 6298–6314.
16. Ghamisi, P.; Benediktsson, J.A.; Ulfarsson, M.O. Spectral–Spatial Classification of Hyperspectral Images Based on Hidden Markov Random Fields. IEEE Trans. Geosci. Remote Sens. 2014, 52, 2565–2574.
17. Zhou, Y.; Peng, J.; Chen, C.L.P. Dimension Reduction Using Spatial and Spectral Regularized Local Discriminant Embedding for Hyperspectral Image Classification. IEEE Trans. Geosci. Remote Sens. 2015, 53, 1082–1095.
18. Falco, N.; Benediktsson, J.A.; Bruzzone, L. Spectral and spatial classification of hyperspectral images based on ICA and reduced morphological attribute profiles. IEEE Trans. Geosci. Remote Sens. 2015, 53, 6223–6240.
19. Fang, L.; Li, S.; Duan, W.; Ren, J.; Benediktsson, J.A. Classification of Hyperspectral Images by Exploiting Spectral–Spatial Information of Superpixel via Multiple Kernels. IEEE Trans. Geosci. Remote Sens. 2015, 53, 6663–6674.
20. Li, S.; Lu, T.; Fang, L.; Jia, X.; Benediktsson, J.A. Probabilistic fusion of pixel-level and superpixel-level hyperspectral image classification. IEEE Trans. Geosci. Remote Sens. 2016, 54, 7416–7430.
21. Wang, Y.; Song, H.; Zhang, Y. Spectral-spatial classification of hyperspectral images using joint bilateral filter and graph cut based model. Remote Sens. 2016, 8, 748.
22. Song, H.; Wang, Y. A spectral-spatial classification of hyperspectral images based on the algebraic multigrid method and hierarchical segmentation algorithm. Remote Sens. 2016, 8, 296.
23. Ma, L.; Ma, A.; Ju, C.; Li, X. Graph-based semi-supervised learning for spectral-spatial hyperspectral image classification. Pattern Recogn. Lett. 2016, 83, 133–142.
24. Jiang, J.; Chen, C.; Song, X.; Cai, Z. Hyperspectral image classification using set-to-set distance. In Proceedings of the ICASSP, Shanghai, China, 20–25 March 2016; pp. 3346–3350.
25. Jiang, J.; Chen, C.; Yu, Y.; Jiang, X.; Ma, J. Spatial-aware collaborative representation for hyperspectral remote sensing image classification. IEEE Geosci. Remote Sens. Lett. 2017, 14, 404–408.
26. Chen, Y.; Nasrabadi, N.M.; Tran, T.D. Hyperspectral Image Classification Using Dictionary-Based Sparse Representation. IEEE Trans. Geosci. Remote Sens. 2011, 49, 3973–3985.
27. Yuan, Y.; Lin, J.; Wang, Q. Hyperspectral image classification via multitask joint sparse representation and stepwise MRF optimization. IEEE Trans. Cybern. 2016, 46, 2966–2977.
28. Fang, L.; Li, S.; Kang, X.; Benediktsson, J.A. Spectral–Spatial Hyperspectral Image Classification via Multiscale Adaptive Sparse Representation. IEEE Trans. Geosci. Remote Sens. 2014, 52, 7738–7749.
29. Fang, L.; Li, S.; Kang, X.; Benediktsson, J.A. Spectral–Spatial Classification of Hyperspectral Images with a Superpixel-Based Discriminative Sparse Model. IEEE Trans. Geosci. Remote Sens. 2015, 53, 4186–4201.
30. Fu, W.; Li, S.; Fang, L.; Kang, X.; Benediktsson, J.A. Hyperspectral Image Classification Via Shape-Adaptive Joint Sparse Representation. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2016, 9, 556–567.
31. Zhang, S.; Li, S.; Fu, W.; Fang, L. Multiscale Superpixel-Based Sparse Representation for Hyperspectral Image Classification. Remote Sens. 2017, 9, 139.
32. Wright, J.; Yang, A.Y.; Ganesh, A.; Sastry, S.S.; Ma, Y. Robust Face Recognition via Sparse Representation. IEEE Trans. Pattern Anal. Mach. Intell. 2009, 31, 210–227.
33. Leviatan, D.; Temlyakov, V.N. Simultaneous approximation by greedy algorithms. Adv. Comput. Math. 2006, 25, 73–90.
34. Tropp, J.A.; Gilbert, A.C.; Strauss, M.J. Algorithms for simultaneous sparse approximation. Part I: Greedy pursuit. Signal Process. 2006, 86, 572–588.
35. Mori, G.; Ren, X.; Efros, A.A.; Malik, J. Recovering human body configurations: Combining segmentation and recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Washington, DC, USA, 27 June–2 July 2004; Volume 2, pp. 326–333.
36. Achanta, R.; Shaji, A.; Smith, K.; Lucchi, A.; Fua, P.; Süsstrunk, S. SLIC Superpixels Compared to State-of-the-Art Superpixel Methods. IEEE Trans. Pattern Anal. Mach. Intell. 2012, 34, 2274–2282.
37. Liu, M.Y.; Tuzel, O.; Ramalingam, S.; Chellappa, R. Entropy rate superpixel segmentation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Colorado Springs, CO, USA, 20–25 June 2011; pp. 2097–2104.
38. Felzenszwalb, P.F.; Huttenlocher, D.P. Efficient graph-based image segmentation. Int. J. Comput. Vis. 2004, 59, 167–181.
39. Zhong, Y.; Gao, R.; Zhang, L. Multiscale and multifeature normalized cut segmentation for high spatial resolution remote sensing imagery. IEEE Trans. Geosci. Remote Sens. 2016, 54, 6061–6075.
40. Vidal, R.; Ma, Y.; Sastry, S.S. Principal Component Analysis. In Generalized Principal Component Analysis; Springer: New York, NY, USA, 2016; pp. 25–62.
41. Gualtieri, J.A.; Cromp, R.F. Support vector machines for hyperspectral remote sensing classification. In Proceedings of the SPIE, Washington, DC, USA, 29 January 1999; pp. 221–232.
42. Foody, G.M. Thematic map comparison: Evaluating the statistical significance of differences in classification accuracy. Photogramm. Eng. Remote Sens. 2004, 70, 627–633.
Figure 1. Outline of the proposed MURASR framework.
Figure 2. Three kinds of spatial regions: (a) fixed-size patch; (b) adaptive-size superpixel; and (c) union of patch and superpixel. The blue pixel represents the test pixel, orange pixels are neighbors defined by the patch, green pixels are neighbors defined by the superpixel, and red pixels are the overlap of the neighbors defined by patch and superpixel.
Figure 3. Indian Pines image: (a) three-band color composite image; (b) reference image.
Figure 4. Salinas image: (a) three-band color composite image; (b) reference image.
Figure 5. University of Pavia image: (a) three-band color composite image; (b) reference image.
Figure 6. Classification maps for the Indian Pines image by different algorithms: (a) SRC-Pixel-Wise; (b) JSRC; (c) JUSRC; (d) MJSRC; (e) MJUSRC; (f) MASR; (g) MURASR*; and (h) MURASR.
Figure 7. Classification maps for the Salinas image by different algorithms: (a) SRC-Pixel-Wise; (b) JSRC; (c) JUSRC; (d) MJSRC; (e) MJUSRC; (f) MASR; (g) MURASR*; and (h) MURASR.
Figure 8. Classification maps for the University of Pavia image by different algorithms: (a) SRC-Pixel-Wise; (b) JSRC; (c) JUSRC; (d) MJSRC; (e) MJUSRC; (f) MASR; (g) MURASR*; and (h) MURASR.
Figure 9. Effect of the region scales on the single-scale algorithms JSRC, JUSRC and the multiscale algorithms MJSRC, MJUSRC, MASR, MURASR* and MURASR for the: (a) Indian Pines image; (b) Salinas image; and (c) University of Pavia image.
Figure 10. Effect of the number of training samples on JSRC, JUSRC, MJSRC, MJUSRC, MASR, MURASR* and MURASR for the: (a) Indian Pines image; (b) Salinas image; and (c) University of Pavia image.
Table 1. Number of superpixels in each scale.

| Image | Scale 1 | Scale 2 | Scale 3 | Scale 4 | Scale 5 | Scale 6 | Scale 7 |
| Indian Pines | 2809 | 1011 | 515 | 312 | 208 | 149 | 112 |
| Salinas | 13,500 | 4860 | 2479 | 1500 | 1004 | 718 | 540 |
| University of Pavia | 24,544 | 8835 | 4508 | 2727 | 1825 | 1307 | 981 |
Table 2. Sixteen reference classes in the Indian Pines image.

| Class | Name | Train | Test |
| 1 | Alfalfa | 5 | 41 |
| 2 | Corn-no till | 143 | 1285 |
| 3 | Corn-min | 83 | 747 |
| 4 | Corn | 24 | 213 |
| 5 | Grass/Pasture | 48 | 435 |
| 6 | Grass/Trees | 73 | 657 |
| 7 | Grass/Pasture-mowed | 3 | 25 |
| 8 | Hay-windrowed | 48 | 430 |
| 9 | Oats | 2 | 18 |
| 10 | Soybeans-no till | 97 | 875 |
| 11 | Soybeans-min | 246 | 2209 |
| 12 | Soybeans-clean | 59 | 534 |
| 13 | Wheat | 21 | 184 |
| 14 | Woods | 127 | 1138 |
| 15 | Building-Grass-Trees-Drives | 39 | 347 |
| 16 | Stone-steel Towers | 9 | 84 |
| | Total | 1027 | 9222 |
Table 3. Classification accuracy (averaged on ten runs with randomly sampled training samples) of the Indian Pines image. The best results are highlighted in bold typeface.

| Class | SRC-Pixel-Wise | JSRC | JUSRC | MJSRC | MJUSRC | MASR | MURASR* | MURASR |
| 1 | 35.12 | 87.56 | 96.83 | 95.37 | 96.34 | 93.66 | 96.83 | **98.54** |
| 2 | 54.63 | 94.87 | 96.48 | 94.39 | 95.25 | 97.77 | **97.93** | 97.84 |
| 3 | 51.99 | 93.44 | 97.00 | 91.67 | 95.69 | 98.17 | 98.77 | **99.54** |
| 4 | 36.53 | 89.62 | 95.31 | 91.50 | 92.77 | 94.89 | 95.77 | **98.78** |
| 5 | 82.44 | 94.28 | 95.38 | 92.11 | 93.17 | 95.59 | 96.23 | **96.51** |
| 6 | 93.32 | 97.43 | 98.95 | 96.19 | 98.42 | 99.83 | **100** | **100** |
| 7 | 66.80 | 96.80 | 94.40 | 66.40 | 66.80 | **98.80** | 98.40 | 96.00 |
| 8 | 95.93 | 99.44 | 99.79 | 98.60 | 98.70 | 99.95 | 99.98 | **100** |
| 9 | 17.78 | 60.56 | **91.11** | 12.22 | 19.44 | 64.44 | 79.44 | 71.67 |
| 10 | 65.99 | 95.67 | 97.58 | 89.14 | 91.55 | 97.68 | **98.23** | 97.67 |
| 11 | 71.52 | 96.68 | 97.78 | 95.91 | 95.76 | 99.01 | 99.11 | **99.85** |
| 12 | 41.82 | 89.76 | 95.30 | 87.83 | 92.47 | 96.55 | 98.15 | **99.25** |
| 13 | 92.28 | 94.95 | 98.37 | 90.43 | 97.83 | 98.75 | 99.29 | **99.89** |
| 14 | 88.93 | 98.99 | 99.33 | 99.24 | 99.92 | 99.95 | 99.96 | **100** |
| 15 | 35.45 | 89.05 | 93.83 | 92.54 | 91.84 | 97.52 | 98.70 | **99.48** |
| 16 | 89.40 | 88.33 | 92.02 | 81.90 | 89.40 | 96.07 | 96.90 | **98.69** |
| OA | 68.83 | 95.35 | 97.36 | 93.91 | 95.35 | 98.29 | 98.69 | **99.06** |
| AA | 64.40 | 94.69 | 96.98 | 93.06 | 94.71 | 95.55 | 97.11 | **98.93** |
| Kappa | 0.64 | 0.92 | 0.96 | 0.86 | 0.88 | 0.98 | **0.99** | 0.97 |
Table 4. The McNemar's tests between classifiers (averaged on ten runs with randomly sampled training samples) of the Indian Pines image.

| Method | JSRC | JUSRC | MJSRC | MJUSRC | MASR | MURASR* | MURASR |
| JSRC | — | −10.56 | 5.24 | −0.25 | −14.19 | −15.28 | −16.33 |
| JUSRC | 10.56 | — | 13.34 | 8.37 | −5.15 | −8.27 | −10.09 |
| MJSRC | −5.24 | −13.34 | — | −7.63 | −17.24 | −18.77 | −20.26 |
| MJUSRC | 0.25 | −8.37 | 7.63 | — | −12.29 | −14.84 | −17.17 |
| MASR | 14.19 | 5.15 | 17.24 | 12.29 | — | −3.51 | −5.65 |
| MURASR* | 15.28 | 8.27 | 18.77 | 14.84 | 3.51 | — | −3.40 |
| MURASR | 16.33 | 10.09 | 20.26 | 17.17 | 5.65 | 3.40 | — |
Table 5. Sixteen reference classes in the Salinas image.

| Class | Name | Train | Test |
| 1 | Weeds 1 | 20 | 1989 |
| 2 | Weeds 2 | 37 | 3689 |
| 3 | Fallow | 20 | 1956 |
| 4 | Fallow plow | 14 | 1380 |
| 5 | Fallow smooth | 27 | 2651 |
| 6 | Stubble | 40 | 3919 |
| 7 | Celery | 36 | 3543 |
| 8 | Grapes | 113 | 11,158 |
| 9 | Soil | 62 | 6141 |
| 10 | Corn | 33 | 3245 |
| 11 | Lettuce 4 wk | 11 | 1057 |
| 12 | Lettuce 5 wk | 19 | 1908 |
| 13 | Lettuce 6 wk | 9 | 907 |
| 14 | Lettuce 7 wk | 11 | 1059 |
| 15 | Vinyard untrained | 73 | 7195 |
| 16 | Vinyard trellis | 18 | 1789 |
| | Total | 543 | 53,586 |
Table 6. Classification accuracy (averaged on ten runs with randomly sampled training samples) of the Salinas image. The best results are highlighted in bold typeface.

| Class | SRC-Pixel-Wise | JSRC | JUSRC | MJSRC | MJUSRC | MASR | MURASR* | MURASR |
| 1 | 98.23 | **100** | **100** | 99.99 | **100** | 99.98 | **100** | **100** |
| 2 | 98.04 | 99.95 | **100** | 99.95 | 99.94 | 99.78 | 99.79 | **100** |
| 3 | 94.16 | 99.33 | 99.71 | 99.07 | 99.68 | 99.38 | 99.86 | **100** |
| 4 | 98.77 | 70.59 | 87.10 | 85.46 | 94.91 | 97.31 | 98.83 | **99.88** |
| 5 | 91.84 | 85.98 | 92.08 | 93.77 | 97.71 | 99.07 | 99.51 | **99.52** |
| 6 | 99.41 | 95.68 | 96.81 | 99.26 | 99.57 | **100** | **100** | **100** |
| 7 | 99.16 | 97.65 | 98.49 | 99.57 | 99.72 | 99.95 | 99.92 | **100** |
| 8 | 70.99 | 95.19 | 98.07 | 94.29 | 96.52 | 96.41 | 98.39 | **99.61** |
| 9 | 97.23 | 99.98 | 99.99 | 99.97 | **100** | 99.91 | 99.95 | **100** |
| 10 | 85.45 | 93.78 | 95.69 | 96.76 | 97.63 | 98.06 | 98.44 | **99.63** |
| 11 | 93.56 | 88.91 | 93.81 | 98.34 | 99.13 | 99.91 | 99.92 | **100** |
| 12 | 99.75 | 88.68 | 94.33 | 96.16 | 98.95 | 99.85 | 99.96 | **100** |
| 13 | 97.14 | 81.52 | 89.35 | 95.64 | 97.65 | 99.26 | 99.46 | **99.98** |
| 14 | 92.64 | 85.15 | 87.18 | 96.95 | 97.56 | 98.59 | 98.53 | **99.93** |
| 15 | 59.14 | 91.69 | 96.03 | 87.90 | 92.44 | 93.12 | 96.74 | **98.79** |
| 16 | 93.93 | 99.73 | 99.55 | 99.65 | 99.64 | 99.16 | 99.14 | **99.78** |
| OA | 85.79 | 94.32 | 96.96 | 95.87 | 97.65 | 97.97 | 98.98 | **99.70** |
| AA | 84.19 | 93.67 | 96.62 | 95.40 | 97.38 | 98.73 | 99.28 | **99.66** |
| Kappa | 0.92 | 0.92 | 0.96 | 0.96 | 0.98 | 0.98 | 0.99 | **1** |
Table 7. The McNemar's tests between classifiers (averaged on ten runs with randomly sampled training samples) of the Salinas image.

| Method | JSRC | JUSRC | MJSRC | MJUSRC | MASR | MURASR* | MURASR |
| JSRC | — | −32.24 | −17.31 | −34.78 | −37.06 | −46.17 | −53.08 |
| JUSRC | 32.24 | — | 11.43 | −9.24 | −11.66 | −26.65 | −36.90 |
| MJSRC | 17.31 | −11.43 | — | −26.12 | −28.24 | −37.55 | −44.79 |
| MJUSRC | 34.78 | 9.24 | 26.12 | — | −4.82 | −22.67 | −32.56 |
| MASR | 37.06 | 11.66 | 28.24 | 4.82 | — | −20.12 | −29.66 |
| MURASR* | 46.17 | 26.65 | 37.55 | 22.67 | 20.12 | — | −18.89 |
| MURASR | 53.08 | 36.90 | 44.79 | 32.56 | 29.66 | 18.89 | — |
Table 8. Nine reference classes in the University of Pavia image.

| Class | Name | Train | Test |
| 1 | Asphalt | 200 | 6431 |
| 2 | Meadows | 200 | 18,449 |
| 3 | Gravel | 200 | 1899 |
| 4 | Trees | 200 | 2864 |
| 5 | Metal sheets | 200 | 1145 |
| 6 | Bare soil | 200 | 4829 |
| 7 | Bitumen | 200 | 1130 |
| 8 | Bricks | 200 | 3482 |
| 9 | Shadows | 200 | 747 |
| | Total | 1800 | 40,976 |
Table 9. Classification accuracy (averaged on ten runs with randomly sampled training samples) of the University of Pavia image. The best results are highlighted in bold typeface.

| Class | SRC-Pixel-Wise | JSRC | JUSRC | MJSRC | MJUSRC | MASR | MURASR* | MURASR |
| 1 | 62.24 | 86.22 | 93.13 | 86.14 | 94.79 | 89.97 | 96.87 | **98.38** |
| 2 | 80.22 | 96.62 | 97.15 | 97.71 | 98.63 | 98.78 | 99.44 | **99.70** |
| 3 | 69.07 | 98.64 | 99.36 | 99.27 | 99.76 | 99.78 | 99.87 | **99.89** |
| 4 | 91.48 | 91.32 | 90.94 | 95.89 | 94.99 | **97.47** | 97.13 | 95.66 |
| 5 | 99.52 | 97.69 | 99.11 | 99.55 | 99.78 | **100** | **100** | **100** |
| 6 | 68.84 | 99.54 | 99.71 | 99.53 | 99.93 | 99.87 | 99.93 | **100** |
| 7 | 86.90 | 97.53 | 99.59 | 99.79 | 99.92 | **100** | **100** | **100** |
| 8 | 72.67 | 95.89 | 97.92 | 96.70 | 98.68 | 98.76 | 99.72 | **99.94** |
| 9 | **98.17** | 66.96 | 75.17 | 84.08 | 86.43 | 92.16 | 94.90 | 95.30 |
| OA | 76.74 | 94.51 | 96.27 | 95.83 | 97.83 | 97.42 | 98.92 | **99.21** |
| AA | 69.61 | 92.67 | 95.02 | 94.42 | 97.09 | 96.54 | 98.55 | **98.94** |
| Kappa | 0.81 | 0.92 | 0.95 | 0.95 | 0.97 | 0.97 | **0.99** | **0.99** |
Table 10. The McNemar's tests between classifiers (averaged on ten runs with randomly sampled training samples) of the University of Pavia image.

| Method | JSRC | JUSRC | MJSRC | MJUSRC | MASR | MURASR* | MURASR |
| JSRC | — | −16.69 | −13.38 | −31.89 | −28.46 | −40.43 | −42.27 |
| JUSRC | 16.69 | — | 4.06 | −19.51 | −11.12 | −29.82 | −32.77 |
| MJSRC | 13.38 | −4.06 | — | −22.57 | −20.97 | −32.92 | −34.68 |
| MJUSRC | 31.89 | 19.51 | 22.57 | — | 5.09 | −18.40 | −21.43 |
| MASR | 28.46 | 11.12 | 20.97 | −5.09 | — | −21.53 | −23.41 |
| MURASR* | 40.43 | 29.82 | 32.92 | 18.40 | 21.53 | — | −6.93 |
| MURASR | 42.27 | 32.77 | 34.68 | 21.43 | 23.41 | 6.93 | — |
