Article

Nonlinear Classification of Multispectral Imagery Using Representation-Based Classifiers

1 Department of Electrical and Computer Engineering, Mississippi State University, Starkville, MS 39762, USA
2 College of Information Science and Technology, Beijing University of Chemical Technology, Beijing 100029, China
3 Center for Research in Computer Vision, University of Central Florida, Orlando, FL 32816, USA
* Author to whom correspondence should be addressed.
Remote Sens. 2017, 9(7), 662; https://doi.org/10.3390/rs9070662
Submission received: 10 May 2017 / Revised: 19 June 2017 / Accepted: 25 June 2017 / Published: 28 June 2017
(This article belongs to the Special Issue Learning to Understand Remote Sensing Images)

Abstract: This paper investigates representation-based classification for multispectral imagery. Due to its small spectral dimension, classification performance may be limited, and, in general, it is difficult to discriminate between different classes with multispectral imagery. A nonlinear band generation method with explicit functions is proposed to provide additional spectral information for multispectral image classification. Specifically, we propose the simple band ratio function, which can yield better performance than the nonlinear kernel method with an implicit mapping function. Two representation-based classifiers, i.e., the sparse representation classifier (SRC) and the nearest regularized subspace (NRS) method, are evaluated on the nonlinearly generated datasets. Experimental results demonstrate that this dimensionality-expansion approach can outperform the traditional kernel method, achieving higher classification accuracy at lower computational cost when classifying multispectral imagery.


1. Introduction

Airborne and spaceborne optical remote sensors collect useful information from the Earth’s surface based on the radiance reflected by different materials. Hyperspectral sensors acquire images over contiguous spectral ranges with high spectral resolution. In contrast, multispectral sensors acquire only a few wide bands, but with high spatial resolution. The high spectral resolution of hyperspectral imagery provides major advantages for classification and detection. However, due to its high dimensionality, the vast data volume can cause issues in data transmission, storage, and analysis [1,2]. Although multispectral imagery has low spectral resolution, making it difficult to distinguish materials with similar spectral signatures, its high spatial resolution and wide coverage keep it popular in practical applications.
Recently, the sparse representation classifier (SRC) [3] and the collaborative representation classifier (CRC) [4] have gained much attention for hyperspectral image classification. Different from traditional classifiers such as the support vector machine (SVM), these representation-based classifiers do not follow the usual training-testing paradigm. Instead, a testing pixel is classified according to its representation residual computed from labeled samples. The nearest regularized subspace (NRS) [5] is an improved version of CRC, in which samples similar to the testing pixel are allowed to have high weights in the representation. Other variants of SRC or CRC have been proposed for hyperspectral imagery. For example, in [6], a local sparse representation-based nearest neighbor classifier is proposed to improve performance by utilizing class-specific sparse coefficients. A weighted joint collaborative representation-based classifier is presented in [7], which adopts more appropriate weights by considering the similarity between the centered pixel and its surrounding pixels. Bian et al. proposed a multi-layer spatial-spectral representation framework for hyperspectral classification [8]. NRS is implemented as a class-specific version by using the samples of each class separately in [9], and it is performed on Gabor features in [10], yielding improved classification accuracy. Representation-based approaches for hyperspectral classification and detection are summarized in [11]. However, the performance of such representation-based classifiers in multispectral image classification is limited, because low-dimensional pixel vectors cannot offer a significant discrepancy in representation residual across training samples of different classes, producing ambiguity in label assignment.
As a classical feature expansion approach, the kernel method has been successfully applied to hyperspectral and multispectral classification. Using the kernel trick, it maps the original data to a high-dimensional feature space without needing to know the actual mapping function. The kernel SVM (KSVM), which has been considered a standard classifier, has been applied to hyperspectral image classification [12]. Bernabe et al. employed kernel principal component analysis to extend the original principal component analysis to a nonlinear version [13]. Kernel collaborative representation with Tikhonov regularization (denoted as KNRS) is presented in [14], and a kernel sparse representation classifier (KSRC) is developed in [15]. The difficulties of the traditional kernel methods include the high computational cost of computing the Gram matrix and the exhaustive search required for parameter tuning.
In this paper, we propose a simple strategy to generate artificial bands for multispectral image classification. The goal of this approach is to use explicit nonlinear functions to enhance the dissimilarity between the original spectral measurements, which can provide additional spectral information for classification [16]. By generating new artificial bands, the spectral contrast between different classes can be increased. Our major contribution is to use the simple band ratio as the explicit nonlinear function for dimensionality expansion, which can offer better performance than the traditional kernel method, with higher classification accuracy and lower computational cost. Here, we limit the discussion to representation-based classifiers, although the band expansion is applicable to any other classifier.
The rest of this paper is organized as follows. Section 2 introduces the two representation-based classifiers, i.e., SRC and NRS. Section 3 presents the simple nonlinear band generation method. Section 4 discusses the experimental results. Conclusions are drawn in Section 5.

2. Representation-Based Algorithms

Let the dataset with n labeled samples in c classes be $X = \{X_1, X_2, \ldots, X_c\} \in \mathbb{R}^{d \times n}$, where d is the number of bands and $X_i$ contains the labeled samples of the i-th class.

2.1. SRC

In SRC [3], a testing sample y is linearly represented by all the training samples. The objective is to find a sparse weight vector a that minimizes the representation residual $\|y - Xa\|_2^2$, i.e.,
$\arg\min_{a} \|y - Xa\|_2^2 + \lambda \|a\|_1$  (1)
where λ is the regularization parameter. In this research, Equation (1) is solved by mexLassoWeighted.m in MATLAB [17].
After the sparse weight vector a is estimated, the residual error for each class i is calculated as
$r_i(y) = \|y - X_i a_i\|_2^2$  (2)
where $a_i$ denotes the entries of the sparse weight vector a associated with the i-th class. The testing sample is assigned as
$\mathrm{class}(y) = \arg\min_{i=1,2,\ldots,c} r_i(y)$  (3)
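For illustration, a minimal MATLAB sketch of the SRC decision rule is given below. The paper solves Equation (1) with mexLassoWeighted.m from the SPAMS toolbox [17]; here a plain ISTA loop is substituted so that the sketch stays self-contained, and the function and variable names are illustrative rather than those of any released implementation.

```matlab
% Sketch of SRC (Equations (1)-(3)): X (d x n) stacks all training samples
% column-wise, labels (1 x n) holds their class indices, y (d x 1) is the
% test pixel, and lambda is the regularization parameter.
function label = src_classify(X, labels, y, lambda)
    % --- Equation (1), solved here with a basic ISTA loop ---
    a = zeros(size(X, 2), 1);
    L = 2 * norm(X)^2;                      % Lipschitz constant of the gradient
    t = 1 / L;                              % step size
    for iter = 1:500                        % fixed iteration budget for simplicity
        g = 2 * X' * (X * a - y);           % gradient of ||y - Xa||_2^2
        a = soft_threshold(a - t * g, t * lambda);
    end
    % --- Equations (2)-(3): class-wise residuals and label assignment ---
    classes = unique(labels);
    r = zeros(numel(classes), 1);
    for i = 1:numel(classes)
        idx  = (labels == classes(i));
        r(i) = norm(y - X(:, idx) * a(idx))^2;
    end
    [~, k] = min(r);
    label  = classes(k);
end

function z = soft_threshold(v, tau)
    z = sign(v) .* max(abs(v) - tau, 0);    % proximal operator of the l1 norm
end
```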

2.2. NRS

It has been argued that it is the collaborative representation, rather than the l1-norm sparsity, that actually improves classification accuracy [4]. The NRS [5,9,10] adaptively adjusts the regularization per sample so that only samples similar to the testing sample effectively participate in the collaborative representation. Its objective function is expressed as
$\arg\min_{a} \|y - Xa\|_2^2 + \lambda \|\Gamma a\|_2^2$  (4)
where Γ is a diagonal matrix defined as
$\Gamma = \mathrm{diag}\left(\|y - x_{(1)}\|_2, \ldots, \|y - x_{(n)}\|_2\right)$  (5)
where $x_{(i)}$ is the i-th column of the dictionary X. The coefficient vector a has the closed-form solution
$a = (X^T X + \lambda \Gamma^T \Gamma)^{-1} X^T y$  (6)
Similarly, the residual error in Equation (2) is used to determine the class label as in Equation (3).
In both SRC and NRS, the regularization parameter λ needs to be tuned; it is set to the value that performs best on the training samples.
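A minimal MATLAB sketch of NRS under the above formulation follows; the variable names are illustrative, and the selection of λ is left to Section 4.3.

```matlab
% Sketch of NRS (Equations (4)-(6)): X (d x n) stacks all labeled samples,
% labels (1 x n) holds their class indices, y (d x 1) is the test pixel,
% and lambda is the regularization parameter.
function label = nrs_classify(X, labels, y, lambda)
    % Equation (5): diagonal matrix of distances between y and each column of X
    Gamma = diag(sqrt(sum((X - y) .^ 2, 1)));
    % Equation (6): closed-form collaborative representation coefficients
    a = (X' * X + lambda * (Gamma' * Gamma)) \ (X' * y);
    % Class-wise residuals decide the label, as in Equations (2)-(3)
    classes = unique(labels);
    r = zeros(numel(classes), 1);
    for i = 1:numel(classes)
        idx  = (labels == classes(i));
        r(i) = norm(y - X(:, idx) * a(idx))^2;
    end
    [~, k] = min(r);
    label  = classes(k);
end
```

Samples far from y receive large diagonal entries in Γ and are therefore heavily penalized, which is what makes the regularization adaptive per testing sample.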

3. Nonlinear Band Generation Method

A simple way to generate bands is to adopt explicit nonlinear functions to create artificial images that serve as additional, linearly independent spectral measurements [16,18]. Although any nonlinear function may be used, in this paper we limit our discussion to multiplication and division. Band multiplication is related to the correlation between bands, while the band ratio is often used to remove the illumination factor [19]. Three new datasets can be generated with these two operations: the first augments the original bands with pixel-wise multiplication, the second with division (i.e., the band ratio), and the third with both.
Note that, in the traditional kernel method, the kernel trick avoids explicitly identifying the nonlinear mapping function. We will show that simple multiplication and division can offer better classification than the kernel trick, and that the band ratio (division) is the best choice of nonlinear function while keeping the data dimensionality manageable.

3.1. Multiplication

Suppose two bands $B_i$ and $B_j$ are multiplied pixel-wise (i.e., pixels at the same locations); a new set of images $\{B_i \cdot B_j\}$, $i = 1, \ldots, N-1$, $j = i+1, \ldots, N$, is produced, where N is the total number of bands in the original multispectral imagery. Although multiplication could also be applied to a single band, we only apply it to distinct band pairs so that it yields the same number of bands as the division method $\{B_i / B_j\}$, $i = 1, \ldots, N-1$, $j = i+1, \ldots, N$. Combining the original multispectral dataset with the multiplication-generated bands gives a total of N²/2 + N/2 bands.

3.2. Division

Similarly, new bands $\{B_i / B_j\}$, $i = 1, \ldots, N-1$, $j = i+1, \ldots, N$, are created by dividing pixels at the same locations in the original bands of the multispectral dataset. Combining the original dataset with the division-generated bands yields the second dataset, which again contains a total of N²/2 + N/2 bands. Combining the original dataset with the artificial bands from both division and multiplication yields the third dataset, with a total of N² bands.
The proposed framework is shown in Figure 1, which compares four cases: the original bands; the original bands plus the multiplication bands (original + multiplication); the original bands plus the division bands (original + division); and the original bands plus both multiplication and division bands (original + multiplication + division).
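A minimal MATLAB sketch of the band generation step in Figure 1 is shown below. It assumes the multispectral image is stored as a rows × cols × N array that has already been normalized as in Section 3.3; the divider-selection rule of Section 3.3 is omitted for brevity, and the variable names are illustrative.

```matlab
% Generate pixel-wise multiplication and ratio bands for every band pair
% (i, j) with i < j, giving N(N-1)/2 artificial bands per operation.
function [mulBands, divBands] = generate_bands(cube)
    [rows, cols, N] = size(cube);
    pairs    = nchoosek(1:N, 2);            % all band pairs (i, j) with i < j
    nPairs   = size(pairs, 1);
    mulBands = zeros(rows, cols, nPairs);
    divBands = zeros(rows, cols, nPairs);
    for p = 1:nPairs
        Bi = cube(:, :, pairs(p, 1));
        Bj = cube(:, :, pairs(p, 2));
        mulBands(:, :, p) = Bi .* Bj;       % pixel-wise multiplication
        ratio             = Bi ./ Bj;       % pixel-wise band ratio
        ratio(Bj == 0)    = 0;              % zero divider: ratio set to 0
        divBands(:, :, p) = ratio;
    end
end
```

Concatenating cube with divBands along the third dimension then gives the original + division dataset with N²/2 + N/2 bands, and concatenating all three arrays gives the original + multiplication + division dataset with N² bands.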

3.3. Practical Consideration

In order for the generated bands to have dynamic ranges similar to those of the original bands, the data are normalized by dividing by the maximum value; in other words, after normalization, the maximum value over all data points becomes 1. In division, the band with the larger local maximum value is chosen as the divider, or the band with a non-zero minimum value is chosen as the divider.
In practice, zero values often occur at the same pixel locations in all bands, such as at shadow pixels; in this case the band ratio is set to 0. However, for pixels with very small non-zero values, such as water pixels, it may be necessary to introduce a small constant K into both the numerator and the denominator, as in [20]: $\{(B_i + K)/(B_j + K)\}$, $i = 1, \ldots, N-1$, $j = i+1, \ldots, N$. Note that, due to spectral correlation, such materials (e.g., water, shadow) consistently have low or zero reflectance values across bands without sudden changes.
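The normalization and the modified band ratio can be sketched as follows; cube, i, and j are assumed placeholders for the multispectral cube and a chosen band pair, and K = 0.01 matches the value found suitable in Section 4.4.

```matlab
% Normalize the cube by its global maximum, then form a K-adjusted ratio
% band that avoids instability at pixels with very small values.
cube  = cube / max(cube(:));                          % maximum value becomes 1
K     = 0.01;                                         % adjustment parameter
ratio = (cube(:, :, i) + K) ./ (cube(:, :, j) + K);   % modified band ratio
```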

4. Experimental Results

4.1. Data Description and Experimental Setup

Due to the lack of multispectral images with pixel-level ground truth, the data used in the experiments are simulated from hyperspectral images through band grouping.
The first multispectral dataset is simulated from the hyperspectral Indian Pines dataset acquired by the Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) over the Indian Pines test site in Indiana in June 1992. The spatial size is 145 × 145 pixels with a spatial resolution of 20 m/pixel, and the 220 spectral bands cover 0.4 to 2.5 μm. We generate six bands from this dataset since it has a wider spectral range. The six bands simulate the blue, green, red, near-infrared, and two shortwave-infrared channels by grouping band ranges 6~12, 13~21, 24~33, 40~54, 123~143, and 177~220 of the Indian Pines dataset [21]. Using the technique in Section 3, 15 bands are generated with multiplication and another 15 with division. The original ground truth contains 16 classes; however, we select eight classes from a statistical viewpoint [5]. The eight classes used in the experiments are Corn-no-till, Corn-min-till, Grass-pasture, Hay-windrowed, Soybean-no-till, Soybean-min-till, Soybean-clean, and Woods. The numbers of labeled samples are tabulated in Table 1. The false color-infrared image of this dataset is shown in Figure 2a.
The second multispectral dataset is generated from a hyperspectral image acquired by the Reflective Optics System Imaging Spectrometer (ROSIS) sensor. The image scene, covering the University of Pavia, originally has 115 spectral bands ranging from 0.43 to 0.86 μm, with a spatial size of 610 × 340 pixels and a spatial resolution of 1.3 m per pixel; 103 spectral bands remain after removing the 12 noisy bands. We generate four bands from this dataset according to [22]. The four bands (blue, green, red, and near-infrared channels) are simulated by grouping band ranges 6~24, 25~45, 54~69, and 89~103 of the original hyperspectral dataset. Based on these four bands, six bands are generated with multiplication and another six with division. The numbers of labeled samples of the nine classes are shown in Table 2. The false color-infrared image of this dataset is shown in Figure 2b.
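As a rough illustration, the band-grouping simulation for the Indian Pines data can be sketched as below. Averaging within each listed band range is assumed, since the grouping operator is not spelled out in the text, and hsiCube is a placeholder name for the 145 × 145 × 220 hyperspectral cube.

```matlab
% Simulate a six-band multispectral image from the hyperspectral cube by
% averaging the bands within each specified range.
ranges = {6:12, 13:21, 24:33, 40:54, 123:143, 177:220};
[rows, cols, ~] = size(hsiCube);
msCube = zeros(rows, cols, numel(ranges));
for b = 1:numel(ranges)
    msCube(:, :, b) = mean(hsiCube(:, :, ranges{b}), 3);
end
```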

4.2. Classification Results

The datasets produced by the nonlinear band generation method are evaluated with SRC, NRS, their kernel versions based on the kernel trick (i.e., KSRC and KNRS), and KSVM. Each experiment is conducted 10 times to avoid any bias in sampling, and the average overall accuracy (OA) is reported. The number of training samples per class is set to 10, 30, 50, 70, 90, and 110, selected randomly. The regularization parameter λ is critical to the performance of the two classifiers, and we adopt 10-fold cross-validation to choose λ. Figure 3 and Figure 4 show the thematic maps from NRS for the Indian Pines and University of Pavia datasets, respectively. There are still many misclassified pixels, but the maps using only the original bands are clearly worse than the others.
Figure 5 shows the results for the datasets generated from Indian Pines. The division method provides the best performance among the band generation methods. The OA using the division method increases by approximately 7% for both classifiers, compared with using the original data only. Combining multiplication and division provides approximately the same performance as using division only. KSRC performs slightly better than the original SRC. When the number of training samples is large, KNRS outperforms the original NRS; however, when the number of training samples becomes small, KNRS may be even worse than the linear NRS. KSVM using the original multispectral imagery is inferior to SRC or NRS on the generated bands. The advantage of the generated bands is more obvious when the number of training samples is small, which may be because the dimensionality is expanded to a reasonable level.
Figure 6 presents the SRC and NRS results for the University of Pavia dataset. For SRC, using the nonlinearly generated bands outperforms KSRC across the range of training-set sizes, and the three datasets containing nonlinearly generated bands provide comparable performance. When the number of training samples is small, KSRC offers performance similar to SRC on the original dataset; however, as the number of training samples increases, KSRC provides much better accuracy than the linear SRC. For NRS, with a small number of training samples, KNRS produces approximately the same performance as its linear version. When the training size is small, the nonlinear bands can outperform KSVM; when the training size is large, they provide approximately the same performance as KSVM.
Table 3 and Table 4 report the computational cost of the different algorithms in MATLAB when the number of training samples is 110 per class. The experiments were run on a computer with a 3.40 GHz CPU and 16.0 GB of RAM. KSRC is computationally expensive compared with the original SRC, whereas running SRC on the nonlinearly generated bands is only slightly more costly than on the original bands. The discrepancy in computational cost between NRS and KNRS is less significant; nevertheless, KNRS costs more time than NRS applied to the generated datasets. KSVM is the most time-consuming approach, being more computationally expensive than either the NRS or SRC approaches.

4.3. Parameter Tuning

The parameter λ is important to the representation-based classifiers. In this section, we present the effect of different λ values on both the Indian Pines and University of Pavia datasets using NRS and SRC. Figure 7a,b show how the classification accuracy changes with λ for the Indian Pines and University of Pavia datasets, respectively. The training set is 90 samples per class, and each experiment is conducted 10 times to report average results. Since original + division provides better performance at lower computational cost, we test the effect of different λ values on this dataset. We can conclude that a relatively small λ, e.g., 10^−2, can guarantee satisfactory performance for both NRS and SRC. NRS is less sensitive to λ because the Γ matrix adaptively adjusts the penalty according to the similarity between the training and testing pixels.
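As a rough sketch of how λ could be selected, the 10-fold cross-validation mentioned in Section 4.2 might look as follows; it reuses the nrs_classify sketch from Section 2.2, and the λ grid is an illustrative assumption rather than the grid used in the paper.

```matlab
% Pick lambda by 10-fold cross-validation on the training set.
% Xtr (d x n) and ytr (1 x n) hold the labeled training samples.
lambdas = 10 .^ (-4:0);                 % candidate regularization values (assumed grid)
k       = 10;
n       = size(Xtr, 2);
folds   = mod(randperm(n), k) + 1;      % random assignment of samples to folds
acc     = zeros(size(lambdas));
for L = 1:numel(lambdas)
    correct = 0;
    for f = 1:k
        trIdx = (folds ~= f);
        for t = find(folds == f)
            pred    = nrs_classify(Xtr(:, trIdx), ytr(trIdx), Xtr(:, t), lambdas(L));
            correct = correct + (pred == ytr(t));
        end
    end
    acc(L) = correct / n;
end
[~, best] = max(acc);
lambda    = lambdas(best);              % value used for the final classifier
```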
In KSRC and KNRS, the radial basis function (RBF) is chosen as the kernel function. According to [12], the parameter γ of the kernel function is set to the median value of $1/\|x_i - \bar{x}\|_2^2$, $i = 1, 2, \ldots, n$, where $\bar{x} = (1/n)\sum_{i=1}^{n} x_i$ is the mean of all available training samples. This simple strategy offers a performance similar to that of a parameter tuned by cross-validation. For the RBF kernel in KSVM, we choose the parameter γ and the regularization parameter C by cross-validation.
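A minimal MATLAB sketch of this γ heuristic, assuming the training samples are stored column-wise in a matrix X:

```matlab
% Median-based RBF parameter of [12].
xbar  = mean(X, 2);                     % mean of all training samples
d2    = sum((X - xbar) .^ 2, 1);        % squared distances ||x_i - xbar||_2^2
gamma = median(1 ./ d2);                % RBF kernel parameter
```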

4.4. Modified Band Ratio

To avoid a very small divider when calculating band ratio, a constant value of K can be added to both numerator and denominator. Figure 8 and Figure 9 show the results for the Indian Pines and University of Pavia datasets. Since the minimum value of the Indian Pines data is about 0.12 (after normalization), the original version of band ratio with K = 0 may be sufficient. In the University of Pavia dataset with many close-to-zero values, this strategy can improve the performance. Overall, a small value of K, such as K = 0.01, is an appropriate choice for both SRC and NRS.

5. Conclusions

This paper proposes the use of a nonlinear band generation method with explicit functions for multispectral image classification. Two classifiers, i.e., SRC and NRS, and their kernel versions are evaluated on the new datasets. The experimental results show that this method performs better than the traditional kernel methods, with higher classification accuracy and much lower computational cost. In particular, its advantage is most pronounced when the number of training samples is small.
The difficulty of nonlinear band generation lies in choosing an appropriate nonlinear function for different datasets collected by various sensors over all kinds of image scenes. In our experiments, the band ratio offers the best performance; considering its role in removing the illumination factor [19], it is a reasonable choice. The modified band ratio with a small adjustment parameter may further improve performance when an image scene contains materials with very low reflectance.

Author Contributions

Yan Xu and Qian Du designed the experiments, prepared the first draft, and edited the manuscript. Wei Li, Chen Chen, and Nicolas H. Younan reviewed and edited the manuscript.

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Xu, Y.; Du, Q.; Younan, N. Particle swarm optimization-based band selection for hyperspectral target detection. IEEE Geosci. Remote Sens. Lett. 2017, 14, 554–558.
2. Su, H.; Yang, H.; Du, Q.; Sheng, Y. Semi-supervised band clustering for dimensionality reduction of hyperspectral imagery. IEEE Geosci. Remote Sens. Lett. 2011, 8, 1135–1139.
3. Wright, J.; Yang, A.Y.; Ganesh, A.; Sastry, S.S.; Ma, Y. Robust face recognition via sparse representation. IEEE Trans. Pattern Anal. Mach. Intell. 2009, 31, 210–227.
4. Zhang, L.; Yang, M.; Feng, X. Sparse representation or collaborative representation: Which helps face recognition? In Proceedings of the 2011 International Conference on Computer Vision, Barcelona, Spain, 6–13 November 2011; pp. 471–478.
5. Li, W.; Tramel, E.W.; Prasad, S.; Fowler, J.E. Nearest regularized subspace for hyperspectral classification. IEEE Trans. Geosci. Remote Sens. 2014, 52, 477–489.
6. Zou, J.; Li, W.; Du, Q. Sparse representation-based nearest neighbor classifiers for hyperspectral imagery. IEEE Geosci. Remote Sens. Lett. 2015, 12, 2418–2422.
7. Xiong, M.; Ran, Q.; Li, W.; Zou, J.; Du, Q. Hyperspectral image classification using weighted joint collaborative representation. IEEE Geosci. Remote Sens. Lett. 2015, 12, 1209–1213.
8. Bian, X.; Chen, C.; Xu, Y.; Du, Q. Robust hyperspectral image classification by multi-layer spatial-spectral sparse representations. Remote Sens. 2016, 8, 985.
9. Li, W.; Du, Q. Joint within-class collaborative representation for hyperspectral image classification. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2014, 7, 2200–2208.
10. Li, W.; Du, Q. Gabor-filtering based nearest regularized subspace for hyperspectral image classification. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2014, 7, 1012–1022.
11. Li, W.; Du, Q. A survey on representation-based classification and detection in hyperspectral imagery. Pattern Recognit. Lett. 2016, 83, 115–123.
12. Melgani, F.; Bruzzone, L. Classification of hyperspectral remote sensing images with support vector machines. IEEE Trans. Geosci. Remote Sens. 2004, 42, 1778–1790.
13. Bernabe, S.; Marpu, P.R.; Plaza, A.; Mura, M.D.; Benediktsson, J.A. Spectral–spatial classification of multispectral images using kernel feature space representation. IEEE Geosci. Remote Sens. Lett. 2014, 11, 288–292.
14. Li, W.; Du, Q.; Xiong, M. Kernel collaborative representation with Tikhonov regularization for hyperspectral image classification. IEEE Geosci. Remote Sens. Lett. 2015, 12, 48–52.
15. Chen, Y.; Nasrabadi, N.M.; Tran, T.D. Hyperspectral image classification via kernel sparse representation. IEEE Trans. Geosci. Remote Sens. 2013, 51, 217–231.
16. Ren, H.; Chang, C.-I. A generalized orthogonal subspace projection approach to unsupervised multispectral image classification. IEEE Trans. Geosci. Remote Sens. 2000, 38, 2515–2528.
17. SPArse Modeling Software (SPAMS). Available online: http://spams-devel.gforge.inria.fr/ (accessed on 15 October 2016).
18. Du, Q.; Kopriva, I.; Szu, H. Independent component analysis for classifying multispectral images with dimensionality limitation. Int. J. Inf. Acquis. 2004, 1, 201–216.
19. Lillesand, T.; Kiefer, R.W. Remote Sensing and Image Interpretation; Wiley: Hoboken, NJ, USA, 2015.
20. Wang, Z.; Bovik, A.C.; Sheikh, H.R.; Simoncelli, E.P. Image quality assessment: From error visibility to structural similarity. IEEE Trans. Image Process. 2004, 13, 600–612.
21. Platt, R.V.; Goetz, A.F.H. A comparison of AVIRIS and Landsat for land use classification at the urban fringe. Photogramm. Eng. Remote Sens. 2004, 70, 813–819.
22. Kramer, H.J. Observations of the Earth and Its Environment; Springer: Berlin, Germany, 2002.
Figure 1. Framework of the band generation method.
Figure 2. Color-infrared composites for (a) Indian Pines dataset; (b) University of Pavia dataset.
Figure 3. Thematic maps using 110 samples per class for the multispectral Indian Pines dataset with eight classes (and OA values). (a) Ground truth; (b) Training; (c) Original + NRS (0.7492); (d) Original + Multiplication + NRS (0.7781); (e) Original + Division + NRS (0.8159); (f) Original + Multiplication + Division + NRS (0.8124); (g) Original + KNRS (0.7852); (h) Original + KSVM (0.8193).
Figure 4. Thematic maps using 110 samples per class for the multispectral University of Pavia dataset with nine classes (and OA values). (a) Ground truth; (b) Training; (c) Original + NRS (0.7698); (d) Original + Multiplication + NRS (0.7820); (e) Original + Division + NRS (0.7896); (f) Original + Multiplication + Division + NRS (0.7880); (g) Original + KNRS (0.7736); (h) Original + KSVM (0.7981).
Figure 5. Classification on the multispectral dataset generated from the hyperspectral Indian Pines dataset.
Figure 6. Classification on the multispectral dataset generated from the hyperspectral University of Pavia dataset.
Figure 7. Classification accuracy with different λ using NRS and SRC for: (a) the multispectral Indian Pines dataset; and (b) the multispectral University of Pavia dataset.
Figure 8. Classification on the multispectral Indian Pines dataset using the original plus division-generated bands (original + division) with different adjustment parameter K.
Figure 9. Classification on the multispectral University of Pavia dataset using the original plus division-generated bands (original + division) with different adjustment parameter K.
Table 1. Number of samples per class for the Indian Pines dataset (the eight classes studied are marked with *).

Class No. | Class Name | Number of Samples
C1 | Alfalfa | 46
C2 | Corn-no-till * | 1460
C3 | Corn-min-till * | 834
C4 | Corn | 237
C5 | Grass-pasture * | 483
C6 | Grass-trees | 730
C7 | Grass-pasture-mowed | 28
C8 | Hay-windrowed * | 478
C9 | Oats | 20
C10 | Soybean-no-till * | 972
C11 | Soybean-min-till * | 2455
C12 | Soybean-clean * | 593
C13 | Wheat | 205
C14 | Woods * | 1265
C15 | Building-grass-trees-drives | 386
C16 | Stone-steel-towers | 93
Total | | 10,249
Table 2. Number of samples per class for the University of Pavia dataset.

Class No. | Class Name | Number of Samples
C1 | Asphalt | 6631
C2 | Meadows | 18,649
C3 | Gravel | 2099
C4 | Trees | 3064
C5 | Painted metal sheets | 1345
C6 | Bare Soil | 5029
C7 | Bitumen | 1330
C8 | Self-Blocking Bricks | 3682
C9 | Shadows | 947
Total | | 42,776
Table 3. Computing time (in seconds) on the multispectral Indian Pines dataset using 110 training samples per class.

Datasets | SRC | KSRC | NRS | KNRS | KSVM
Original | 50.49 | 311.89 | 122.70 | 152.81 | 1572.29
Original + Multiplication | 54.84 | — | 131.27 | — | —
Original + Division | 56.78 | — | 135.90 | — | —
Original + Multiplication + Division | 57.88 | — | 137.05 | — | —

Table 4. Computing time (in seconds) on the multispectral University of Pavia dataset using 110 training samples per class.

Datasets | SRC | KSRC | NRS | KNRS | KSVM
Original | 228.54 | 2046.95 | 92.97 | 794.09 | 2122.59
Original + Multiplication | 240.34 | — | 611.35 | — | —
Original + Division | 245.34 | — | 604.75 | — | —
Original + Multiplication + Division | 251.48 | — | 620.75 | — | —
