Article

Improving Land Cover Classification Using Extended Multi-Attribute Profiles (EMAP) Enhanced Color, Near Infrared, and LiDAR Data

1 Applied Research LLC, Rockville, MD 20850, USA
2 Department of Computer Architecture and Automation, Complutense University of Madrid, 28040 Madrid, Spain
3 Department of Technology of Computers and Communications, University of Extremadura, 10003 Cáceres, Spain
4 Institute of Applied Physics “Nello Carrara”, IFAC-CNR, Research Area of Florence, 50019 Sesto Fiorentino (FI), Italy
* Author to whom correspondence should be addressed.
Remote Sens. 2020, 12(9), 1392; https://doi.org/10.3390/rs12091392
Submission received: 4 April 2020 / Revised: 22 April 2020 / Accepted: 26 April 2020 / Published: 28 April 2020

Abstract:
Hyperspectral (HS) data have found a wide range of applications in recent years. Researchers have observed that more spectral information helps land cover classification performance in many cases. However, in some practical applications, HS data may not be available due to cost, data storage, or bandwidth issues. Instead, users may only have RGB and near infrared (NIR) bands available for land cover classification. Sometimes, light detection and ranging (LiDAR) data may also be available to assist land cover classification. A natural research problem is to investigate how well land cover classification can be achieved under these data constraints. In this paper, we investigate the performance of land cover classification using only four bands (RGB+NIR) or five bands (RGB+NIR+LiDAR). A number of algorithms were applied to a well-known dataset (2013 IEEE Geoscience and Remote Sensing Society Data Fusion Contest). One key observation is that, with the help of synthetic bands generated by Extended Multi-attribute Profiles (EMAP), some algorithms achieve better land cover classification performance using only four bands than using all 144 bands of the original hyperspectral data. Moreover, LiDAR data improve the land cover classification performance even further.

1. Introduction

Hyperspectral (HS) images have been used in many applications [1]. Examples of HS sensors include the Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) [2] and the Adaptive Infrared Imaging Spectroradiometer (AIRIS) [3]. AVIRIS images have 224 bands in the range of 0.4 to 2.5 μm. AIRIS is a longwave infrared (LWIR) sensor with 20 bands for the remote detection of chemical agents, such as nerve gas. In the literature, HS data have been used for small target detection [4,5], fire damage assessment [6,7], anomaly detection [8,9,10,11,12,13,14], chemical agent detection and classification [3,15], border monitoring [16], change detection [17,18,19,20], and Mars mineral abundance map estimation [21,22]. There are also many papers on land cover classification. For instance, the fusion of HS and LiDAR data was proposed in [23] and applied to the 2013 IEEE Geoscience and Remote Sensing Society (GRSS) Data Fusion Contest dataset, using all 144 bands with help from Extended Multi-attribute Profiles (EMAP); the results achieved 90% overall accuracy. In another paper [24], a graph-based approach was proposed to fuse HS and LiDAR data for land cover classification.
However, HS sensors are expensive and usually demand large data storage. In some real-time applications, HS data may need to be transmitted via bandwidth-constrained channels to ground stations for processing. These factors prohibit the use of HS sensors in some applications, such as precision agriculture, where farmers may have a limited budget. In [25], some challenges in practical applications are mentioned; one of them is that, when no HS data are available, synthetic spectral bands generated with EMAP may be a good alternative. Some recent applications of EMAP to soil detection and change detection can be found in [26].
In this paper, we focus on this practical land cover classification problem, where only a few bands, namely the RGB and near infrared (NIR) bands, are available. Under such data-constrained conditions, our first question is what land cover classification performance can be achieved using only those four bands. Second, if synthetic bands generated by EMAP are added, what kind of performance boost can one obtain, and is the resulting performance close to that of using all hyperspectral bands? Third, if LiDAR data are available, can we see a further enhancement in land cover classification?
It should be noted that RGB and NIR images are mostly used for vegetation detection, which involves the simple calculation of the normalized difference vegetation index (NDVI); NDVI generation can be done in real time. Recent research has also used color images for visual object classification (VOC) with deep learning methods (DeepLabV3+ [27], SegNet [28], Pyramid Scene Parsing network (PSP) [29], and Fully Convolutional Network (FCN) [30]). In principle, these deep learning methods for object detection in color images can be adapted to land cover classification, and our team recently initiated an investigation in that direction [31]. However, deep learning methods require a lot of training data and may not yield better results when data are scarce, which is the case for the IEEE Houston dataset studied in this paper.
In our investigations, we applied nine algorithms to the 2013 IEEE GRSS Data Fusion Contest dataset [23] for land cover classification: three hyperspectral detection methods, Matched Subspace Detection (MSD), Adaptive Subspace Detection (ASD), and Reed-Xiaoli Detection (RXD), their kernel versions, and also Sparse Representation (SR), Joint SR (JSR), and Support Vector Machine (SVM). Some of these cannot be applied directly and required customization; for instance, RXD is a well-known technique for anomaly detection, and we modified it for land cover classification. This customization of existing algorithms is our first contribution. In our studies, we clearly saw the advantages of using EMAP: in most cases, the EMAP versions yielded a significant performance increase. Quantifying the performance gain from synthetic bands in land cover classification is our second contribution. Moreover, we confirmed that land cover classification performance can be further enhanced when LiDAR is combined with EMAP; confirming that LiDAR does help classification accuracy is our third contribution.
It is worth mentioning that the proposed methodology has also been applied to another dataset, known as the Trento dataset, which contains both hyperspectral and LiDAR data. The results clearly demonstrated that our approach achieved land cover classification results very close to those of state-of-the-art methods in the literature while using only RGB, NIR, and LiDAR with EMAP. However, we did not have permission to publish the results related to the Trento dataset; interested readers can contact us directly for them.
The paper is organized as follows. In Section 2, we review the classification algorithms, the 2013 IEEE GRSS Data Fusion Contest data, EMAP, and the evaluation metrics. In Section 3, we summarize our findings: the classification results for the different methods and combinations of spectral bands are presented with tables and classification maps, and we compare them with two representative results in the literature for the same data set. Our results using only four available bands with help from EMAP are very close to the results in [23,24], which used all 144 bands. Finally, we conclude the paper with a few remarks.

2. Methods and Data

2.1. Land Cover Classification Methods

Although land cover classification can be done using object-based detection methods, here we perform pixel-based classification, because the IEEE dataset only provides ground truth land cover labels at the pixel level rather than as land cover maps. There are 15 land cover classes in the 2013 IEEE GRSS Data Fusion Contest, and for each class a number of signatures from the training data are available. For a particular classifier, the classification process begins by detecting each class separately. The maximum detection value at each pixel location across all class maps is taken, and the index of that maximum becomes the class label. Aggregating all of those labels into a two-dimensional (2D) matrix yields the overall classification map; a sketch of this fusion step follows. The details of each method are given below.
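As a concrete illustration of this per-pixel fusion step, the sketch below (Python/NumPy; the array shapes and names are our own, not part of the dataset format) collapses a stack of per-class detection score maps into a label map with an argmax:

```python
import numpy as np

def fuse_detection_maps(score_maps):
    """Fuse per-class detection score maps into one classification map.

    score_maps: (n_classes, H, W) array, one detection map per class.
    Returns an (H, W) array of 1-based class labels: each pixel takes
    the label of the class with the maximum detection value.
    """
    return np.argmax(score_maps, axis=0) + 1

# Example with 15 classes on a toy 4 x 6 scene:
maps = np.random.rand(15, 4, 6)
labels = fuse_detection_maps(maps)  # label values in 1..15
```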

2.1.1. Matched Subspace Detection (MSD)

MSD [32] classifies a given pixel by matching it against target and background subspaces built from the dataset. Two hypotheses are considered, target absent (H0) and target present (H1):

$$H_0: \mathbf{y} = \mathbf{B}\boldsymbol{\zeta} + \mathbf{n}, \tag{1}$$

$$H_1: \mathbf{y} = \mathbf{T}\boldsymbol{\theta} + \mathbf{B}\boldsymbol{\zeta} + \mathbf{n} = [\mathbf{T}\ \ \mathbf{B}]\begin{bmatrix}\boldsymbol{\theta}\\ \boldsymbol{\zeta}\end{bmatrix} + \mathbf{n}, \tag{2}$$

where T and B are orthogonal matrices whose column vectors span the target and background (non-target) subspaces, respectively, θ and ζ are the unknown coefficient vectors associated with the columns of T and B, respectively, and n represents random noise. These hypotheses lead to a generalized likelihood ratio test (GLRT) that predicts whether a specific pixel is a target or background pixel:

$$L_2(\mathbf{y}) = \frac{\mathbf{y}^T(\mathbf{I} - \mathbf{P}_B)\,\mathbf{y}}{\mathbf{y}^T(\mathbf{I} - \mathbf{P}_{TB})\,\mathbf{y}}, \tag{3}$$

where $\mathbf{P}_B = \mathbf{B}(\mathbf{B}^T\mathbf{B})^{-1}\mathbf{B}^T$ and $\mathbf{P}_{TB} = [\mathbf{T}\ \ \mathbf{B}]\left([\mathbf{T}\ \ \mathbf{B}]^T[\mathbf{T}\ \ \mathbf{B}]\right)^{-1}[\mathbf{T}\ \ \mathbf{B}]^T$.
We chose MSD because it has been proven to work well in some hyperspectral target detection applications [32]. In this paper, we use MSD as follows. To detect class i, we use pixel signatures from the training samples in class i to form the target matrix Ti and the samples from the other classes to form the background matrix Bi. We then insert Ti and Bi into (1) and (2) and perform target detection for class i using (3) over the whole image cube, saving detection map i. We repeat the process for i = 1 to 15. The label corresponding to the maximum value across the 15 maps at each pixel location is the class identity.
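A minimal sketch of the MSD GLRT in Equation (3) is given below, assuming the subspace bases Ti and Bi are formed from the training spectra via an SVD (one common choice; the subspace ranks and the basis construction are our assumptions, not necessarily the authors' exact implementation):

```python
import numpy as np

def subspace_basis(samples, rank):
    """Orthonormal basis for the subspace spanned by training spectra.

    samples: (n_pixels, n_bands) training matrix for one class (or for
    the pooled other classes when building the background subspace).
    """
    U, _, _ = np.linalg.svd(samples.T, full_matrices=False)
    return U[:, :rank]

def msd_glrt(y, T, B):
    """MSD GLRT score L2(y) of Equation (3) for a single pixel y."""
    I = np.eye(y.shape[0])
    P_B = B @ np.linalg.pinv(B.T @ B) @ B.T        # projector onto span(B)
    TB = np.hstack([T, B])
    P_TB = TB @ np.linalg.pinv(TB.T @ TB) @ TB.T   # projector onto span([T B])
    return (y @ (I - P_B) @ y) / (y @ (I - P_TB) @ y)
```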

2.1.2. Adaptive Subspace Detection (ASD)

ASD [32] follows a process very similar to MSD, with a slightly different set of hypotheses:
$$H_0: \mathbf{x} = \mathbf{n}, \tag{4}$$

$$H_1: \mathbf{x} = \mathbf{U}\boldsymbol{\theta} + \sigma\mathbf{n}, \tag{5}$$

where U is the orthogonal matrix whose column vectors are eigenvectors of the target subspace, θ is the unknown vector whose entries are coefficients for the target subspace, and n is random Gaussian noise. These hypotheses are then solved to create a GLRT similar to that of MSD. Classification is performed for each class first, and the final decision picks the class label corresponding to the maximum detection value at each pixel location.
Similar to MSD, we adopted ASD simply because it has been proven to work quite well in target detection using hyperspectral images [32,33].

2.1.3. Reed-Xiaoli Detection (RXD)

In the hyperspectral image processing community, RXD [34] is usually used for anomaly detection; it is simple and efficient. We would like to emphasize that, to the best of our knowledge, RXD has not been applied to land cover classification before. Here, we apply it in a very different way. For land cover classification, RXD follows the same procedure as MSD and ASD, forming H0 and H1 hypotheses and combining them into a GLRT. To detect pixels in class i, the background pixels come from the training samples of the 14 other classes. That is, RXD can be expressed as:
$$RX(\mathbf{r}) = (\mathbf{r} - \boldsymbol{\mu}_b)^T\,\mathbf{C}_b^{-1}\,(\mathbf{r} - \boldsymbol{\mu}_b), \tag{6}$$

where r is the test pixel, μb is the estimated sample mean of the 14 background classes, and Cb is the covariance of the training samples in the 14 other classes. The process is repeated for all 15 classes, once per class. The final classification chooses the class label corresponding to the maximum detection value at each pixel location.
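The per-class RX score of Equation (6) is straightforward to compute. A sketch follows; the pseudo-inverse in place of the plain inverse is our own safeguard against an ill-conditioned background covariance:

```python
import numpy as np

def rx_score(r, background):
    """RX detection score for test pixel r, per Equation (6).

    background: (n_pixels, n_bands) training spectra pooled from the
    14 classes other than the class being detected.
    """
    mu = background.mean(axis=0)
    C = np.cov(background, rowvar=False)
    d = r - mu
    return d @ np.linalg.pinv(C) @ d
```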
The kernel versions of each of the above methods—ASD, MSD, and RXD—all follow a similar fashion.

2.1.4. Kernel MSD (KMSD)

In [32], it was demonstrated that KMSD performs better than MSD; in light of that, we also included KMSD in our investigations. In KMSD, the input data are implicitly mapped by a nonlinear function Φ into a high-dimensional feature space F. The detection model in F is then given by:
$$H_0^\Phi: \Phi(\mathbf{y}) = \mathbf{B}_\Phi\boldsymbol{\zeta}_\Phi + \mathbf{n}_\Phi \quad (\text{target absent}) \tag{7}$$

$$H_1^\Phi: \Phi(\mathbf{y}) = \mathbf{T}_\Phi\boldsymbol{\theta}_\Phi + \mathbf{B}_\Phi\boldsymbol{\zeta}_\Phi + \mathbf{n}_\Phi \quad (\text{target present}) \tag{8}$$
where the variables are defined in [32]. The kernelized GLRT for KMSD can be found in Equation (9) of [32].

2.1.5. Kernel ASD (KASD)

Similar to KMSD, KASD [32] was adopted in our investigations because of its good performance in [32]. The detection formulation can be written as
$$H_0^\Phi: \Phi(\mathbf{x}) = \mathbf{n}_\Phi \quad (\text{target absent}) \tag{9}$$

$$H_1^\Phi: \Phi(\mathbf{x}) = \mathbf{U}_\Phi\boldsymbol{\theta}_\Phi + \sigma_\Phi\mathbf{n}_\Phi \quad (\text{target present}) \tag{10}$$
The various variables are defined in [15]. The final detector in kernelized format can be found in Equation (30) of [15].

2.1.6. Kernel RXD (KRXD)

We included KRXD because it was demonstrated to perform much better than RXD in [34]. In KRXD, every pixel is transformed to a high-dimensional space via a nonlinear transformation. The kernel representation of the dot product in feature space between two arbitrary vectors xi and xj is

$$k(\mathbf{x}_i, \mathbf{x}_j) = \langle \Phi(\mathbf{x}_i), \Phi(\mathbf{x}_j) \rangle = \Phi(\mathbf{x}_i) \cdot \Phi(\mathbf{x}_j). \tag{11}$$

A commonly used kernel is the Gaussian radial basis function (RBF) kernel

$$k(\mathbf{x}, \mathbf{y}) = \exp\!\left(-\frac{\|\mathbf{x} - \mathbf{y}\|^2}{c}\right), \tag{12}$$

where c is a constant and x and y are spectral signatures of two pixels in a hyperspectral image. This kernel function embodies the well-known kernel trick, which avoids the explicit computation of high-dimensional features and enables the implementation of KRXD. Details of KRXD can be found in [19,34].
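For illustration, the RBF kernel of Equation (12) and the Gram matrix it induces over a set of pixels can be computed as in the sketch below (the kernel width c would be tuned in practice; the value here is a placeholder):

```python
import numpy as np

def rbf_kernel(x, y, c=1.0):
    """Gaussian RBF kernel k(x, y) = exp(-||x - y||^2 / c), Equation (12)."""
    return np.exp(-np.sum((x - y) ** 2) / c)

def gram_matrix(X, c=1.0):
    """Kernel (Gram) matrix over the rows of X: (n, n_bands) -> (n, n)."""
    sq = np.sum(X ** 2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * X @ X.T  # pairwise squared distances
    return np.exp(-d2 / c)
```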

2.1.7. Sparse Representation (SR)

In [16], we applied SR to detect soil disturbed by illegal tunnel excavation and observed that SR was one of the best performing methods, which is why we include it here. SR exploits the structure of having only a few nonzero coefficients by solving the convex ℓ1,q-norm minimization problem:

$$\min_{S} \|S\|_{q} \quad \text{s.t.} \quad Y = DS, \quad \|S\|_{q} \le s_0, \tag{13}$$

where ‖S‖q is defined as the number of non-zero rows of S (the coefficient matrix representing the signature values of a given pixel), s0 is a pre-defined maximum row-sparsity parameter, q > 1 is a norm on the matrix S that encourages sparsity patterns across multiple observations, and D is the dictionary of class signatures.
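One way to approximate the sparse coding step is greedy orthogonal matching pursuit, followed by the common rule of assigning the pixel to the class with the smallest class-wise reconstruction residual. The sketch below uses scikit-learn's OMP solver; both the solver choice and the residual rule are our assumptions, not necessarily the exact algorithm of [16]:

```python
import numpy as np
from sklearn.linear_model import OrthogonalMatchingPursuit

def sr_classify_pixel(y, D, atom_labels, n_nonzero=5):
    """Sparse-representation classification of one pixel (a sketch).

    y: (n_bands,) pixel spectrum; D: (n_bands, n_atoms) dictionary of
    training spectra; atom_labels: (n_atoms,) class label of each atom.
    """
    omp = OrthogonalMatchingPursuit(n_nonzero_coefs=n_nonzero,
                                    fit_intercept=False)
    omp.fit(D, y)
    s = omp.coef_
    best_class, best_res = None, np.inf
    for c in np.unique(atom_labels):
        s_c = np.where(atom_labels == c, s, 0.0)  # keep class-c coefficients
        res = np.linalg.norm(y - D @ s_c)         # class-wise residual
        if res < best_res:
            best_class, best_res = c, res
    return best_class
```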

2.1.8. Joint Sparse Representation (JSR)

Similar to SR, JSR was used in our earlier study on soil detection [16]. Although JSR is more computationally demanding, it exploits neighboring pixels for joint land type classification. In JSR, a 3 × 3 or 5 × 5 patch of pixels is used in the target matrix S. The formulation is the same as Equation (13), but with an added dimension to S that accounts for each pixel within the chosen patch size. Details of the mathematics can be found in [16].

2.1.9. Support Vector Machine (SVM)

SVMs were first suggested in the 1960s [35] for classification and have since been an area of intense research, owing to developments in techniques and theory coupled with extensions to regression and density estimation. An SVM is a general architecture that can be applied to pattern recognition and classification [36], regression estimation, and other problems such as speech and target recognition. An SVM can be constructed from a simple linear maximum-margin classifier trained by solving a convex quadratic programming problem with constraints.
We included SVM in our experiments because several past papers [23,24] also used SVM for land cover classification.
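A minimal sketch of such an SVM baseline with scikit-learn is shown below; the RBF kernel, feature standardization, and hyperparameter values are illustrative assumptions, as the exact settings are not discussed here:

```python
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def train_svm(X_train, y_train):
    """Train an RBF-kernel SVM on (n_pixels, n_bands) training spectra."""
    scaler = StandardScaler().fit(X_train)
    clf = SVC(kernel="rbf", C=100.0, gamma="scale")  # illustrative values
    clf.fit(scaler.transform(X_train), y_train)
    return scaler, clf

# Usage: scaler, clf = train_svm(X_train, y_train)
#        y_pred = clf.predict(scaler.transform(X_test))
```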

2.2. EMAP

In this section, we briefly introduce EMAP, which has been shown to yield good classification performance when only a few spectral bands are available. Mathematically, given an input grayscale image f and a sequence of threshold levels {Th1, Th2, …, Thn}, the attribute profile (AP) of f is obtained by applying a sequence of thinning and thickening attribute transformations to every pixel of f, as follows:

$$AP(f) = \{\phi_1(f), \phi_2(f), \ldots, \phi_n(f), f, \gamma_1(f), \gamma_2(f), \ldots, \gamma_n(f)\}, \tag{14}$$

where φi and γi (i = 1, 2, …, n) are the thickening and thinning operators at threshold Thi, respectively. The EMAP of f is then obtained by stacking two or more APs computed with different attributes, such as purely geometric attributes (e.g., area, length of the perimeter, image moments, shape factors) or textural attributes (e.g., range, standard deviation, entropy); for multispectral/hyperspectral data, the APs can be built on components obtained with any feature reduction technique [37,38,39,40]:

$$EMAP(f) = \{AP_1(f), AP_2(f), \ldots, AP_m(f)\}. \tag{15}$$
In this paper, the “area (a)” and “length of the diagonal of the bounding box (d)” attributes of EMAP [26] were used. For the area attribute, the two thresholds used by the morphological attribute filters were set to 10 and 15; for the length attribute, the thresholds were set to 50, 100, and 500. These thresholds were chosen based on experience, as we observed them to yield consistent results in our experiments. With this parameter setting, EMAP expands a single-band image into 11 bands, one of which is the original band itself.
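For the area attribute, the thinning and thickening operators reduce to grayscale area openings and closings, so a single-band profile can be sketched with scikit-image as below. This sketch covers only the area attribute (the bounding-box-diagonal attribute has no off-the-shelf scikit-image filter), and the band counts in the comment follow the thresholds just described:

```python
import numpy as np
from skimage.morphology import area_closing, area_opening

def area_attribute_profile(band, thresholds=(10, 15)):
    """Attribute profile of one band for the area attribute (a sketch).

    Stacks thickenings (area closings), the original band, and
    thinnings (area openings) along the last axis, giving an
    (H, W, 2 * len(thresholds) + 1) array.
    """
    thick = [area_closing(band, area_threshold=t) for t in thresholds]
    thin = [area_opening(band, area_threshold=t) for t in thresholds]
    return np.stack(thick + [band] + thin, axis=-1)

# With the area thresholds {10, 15} (5 images) and a second profile for
# the diagonal thresholds {50, 100, 500} (7 images), stacking both and
# dropping the duplicated original band yields 11 bands per input band.
```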
EMAP has been used in hyperspectral image processing before; more technical details and applications can be found in [37,38,39,40]. In fact, EMAP was used for land cover classification in [23,24]. One key difference between those references and our approach is that we applied EMAP to only RGB+NIR and RGB+NIR+LiDAR, whereas the above methods all used the original hyperspectral data.

2.3. Dataset Used

From the IEEE GRSS Data Fusion package [23], we obtained the ground truth classification pixels, the hyperspectral image of the University of Houston area, and the LiDAR data of the same area. The hyperspectral data contain 144 bands ranging from 380 to 1050 nm in wavelength, with a spatial resolution of 2.5 m; each band has a spectral width of 4.65 nm. The LiDAR data contain elevation information at the same 2.5 m resolution.
Table 1 lists the number of training and testing pixels per class. Figure 1 shows the Houston area with the ground truth classification pixels used to determine the overall accuracy.
The predetermined training data set includes 2832 pixels, and the testing data comprise the remaining 12,197 labeled pixels of the University of Houston dataset. Results using this predetermined training data were found to be considerably worse than those of a random subsampling approach, suggesting that the predetermined training data may not be entirely representative of the testing data. Nevertheless, we conducted our investigations with the predetermined data set in order to compare our results with past studies.
There are a number of datasets used for analysis, as shown in Table 2. The first group consists of the RGB bands (bands #60, #30, and #22 in the hyperspectral data) and the NIR band (band #103). It should be noted that this selection of bands is not the same as band selection in the literature [41]: in band selection, the objective is to select the most informative bands out of the available hyperspectral bands, whereas in our case we are restricted to having only a few bands. One might have some concerns about using narrow bands from the HS data to emulate RGB and NIR bands; we address this issue in Section 3.1.4, where it turns out that this simple selection is almost equivalent to creating “realistic” RGB and NIR bands by applying the spectral responses of actual color and NIR imagers to the hyperspectral bands. We call this group Dataset-4 (DS-4). The second group is the RGB and NIR bands coupled with LiDAR data; we denote it as Dataset-5 (DS-5). The third group is the four-band group put through EMAP augmentation to produce 44 bands, as each band yields ten bands in addition to the original band (denoted as Dataset-44 (DS-44)). The fourth group is the five-band group plus EMAP augmentation (denoted as Dataset-55 (DS-55)). The fifth group is the full hyperspectral image of 144 bands (denoted as Dataset-144 (DS-144)). Finally, the last group is the 144 bands plus LiDAR (denoted as Dataset-145 (DS-145)). The DS-4 and DS-5 cases should require less computational time, but with degraded accuracy, and are not expected to work well in land cover classification on their own; DS-5 should suffer a smaller accuracy loss than DS-4 at a minimal increase in time, since the LiDAR data help classify tall structures and differentiate trees, shrubs, and grass. Meanwhile, the DS-44 and DS-55 cases provide a middle ground in both time consumption and accuracy loss that, depending on the method, could prove useful in practical applications. The full hyperspectral image (DS-144) and the DS-145 case (144 bands + LiDAR) are simply used as benchmarks for the other combinations.
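To make the dataset construction concrete, the sketch below assembles the six band stacks, assuming `hsi` is the (H, W, 144) hyperspectral cube, `lidar` the co-registered (H, W) elevation raster, and `emap_expand` a function mapping one (H, W) band to its (H, W, 11) EMAP expansion, as in Section 2.2 (all names here are ours):

```python
import numpy as np

def build_datasets(hsi, lidar, emap_expand):
    """Assemble the DS-4/DS-5/DS-44/DS-55/DS-144/DS-145 band stacks."""
    rgbnir = hsi[:, :, [59, 29, 21, 102]]  # 1-based bands 60, 30, 22, 103
    ds4 = rgbnir
    ds5 = np.dstack([rgbnir, lidar])
    ds44 = np.dstack([emap_expand(ds4[:, :, i]) for i in range(4)])
    ds55 = np.dstack([emap_expand(ds5[:, :, i]) for i in range(5)])
    ds144 = hsi
    ds145 = np.dstack([hsi, lidar])
    return ds4, ds5, ds44, ds55, ds144, ds145
```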

2.4. Evaluation Metrics

We adopted overall accuracy (OA), average accuracy (AA), and the kappa (κ) coefficient as performance metrics, to be consistent with existing land cover classification methods in the literature. OA is the ratio between the number of correctly classified pixels across all classes and the total number of pixels in all classes. AA is the average of the individual class accuracies. The kappa coefficient is defined as (overall accuracy − random accuracy)/(1 − random accuracy), where random accuracy is the accuracy expected by chance. For more details, please see L3Harris' documentation at https://www.harrisgeospatial.com/docs/CalculatingConfusionMatrices.html.
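All three metrics follow directly from the confusion matrix; a minimal sketch:

```python
import numpy as np

def oa_aa_kappa(confusion):
    """OA, AA, and kappa from an (n_classes, n_classes) confusion matrix
    with rows as reference classes and columns as predicted classes."""
    total = confusion.sum()
    oa = np.trace(confusion) / total                          # overall accuracy
    aa = np.mean(np.diag(confusion) / confusion.sum(axis=1))  # average accuracy
    chance = (confusion.sum(axis=0) * confusion.sum(axis=1)).sum() / total**2
    kappa = (oa - chance) / (1.0 - chance)
    return oa, aa, kappa
```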

3. Land Cover Classification Results

3.1. Results

3.1.1. Results of Using Narrow Bands

Analysis was conducted using each method with the different band combinations in Table 2 to determine accuracy and computational efficiency. We report the overall accuracy (OA), average accuracy (AA), and kappa (κ) coefficient defined in Section 2.4.
Table 3, Table 4 and Table 5 summarize the key metrics (OA, AA, and Kappa) of each method for different band combinations. Red numbers indicate the best accuracy for each method and the bold numbers indicate the best performance for each dataset.
Table 6, Table 7 and Table 8 provide the detailed class-specific accuracies. In the majority of the tables, accuracy improves as more bands are used; the most obvious exception is the full hyperspectral image. It should be noted that, without EMAP, the performance metrics of SVM in DS-4 and DS-5 are, in general, inferior to the full HS cases. This answers our first question from Section 1 about the feasibility of using only RGB and NIR for land cover classification: if one uses only the RGB and NIR bands, the land cover classification performance is not accurate enough.
The advantage of using EMAP stands out here. In most cases, the EMAP versions yielded a significant performance increase. For instance, with the four bands plus EMAP (DS-44), the JSR and SVM methods achieved 80.77% and 82.64% overall accuracy, respectively, whereas JSR and SVM without EMAP (DS-4) reached only 59.83% and 70.43%. With EMAP and five bands (DS-55), JSR provided the highest overall accuracy at 86.86%, up from 70.81% without EMAP (DS-5) on the same five bands. SVM came next in the DS-55 case with an overall accuracy of 86.00%, up from 74.62% without EMAP (DS-5). This answers our second question from Section 1 about whether EMAP can help classification performance.
In several instances, accuracy drops when moving from the DS-55 case to the DS-144 case of a given method. Interestingly, for the two best performing methods (JSR and SVM), using all 144 original bands gave classification accuracies of only 72.57% (JSR, DS-144) and 78.68% (SVM, DS-144), which are considerably lower. This shows that using all of the hyperspectral bands for land cover classification can sometimes lead to poorer results. We discuss this point in Section 3.2.
From Table 3, Table 4 and Table 5, it is clear that LiDAR did help the performance, especially for SVM: DS-5 is better than DS-4, DS-55 is better than DS-44, and DS-145 is better than DS-144. This answers the third question from Section 1 about whether LiDAR can help classification performance.
The next important consideration is computational complexity. As a general rule, all of the standard methods are faster than their kernel counterparts, and the kernel methods are faster than the SR and JSR methods. Table 9 shows the elapsed time (ET) during training in minutes; it adopts the same layout as Table 3, Table 4, and Table 5, except that the best value is the minimum and bold values are not used. A Windows 7 PC without a GPU (16 GB RAM, Intel i7 CPU) was used in our experiments. Table 9 makes clear that certain methods have major advantages in ET. The most drastic difference involves the SR and JSR methods: while the kernel methods take up to two hours to process the image, JSR can take over a day and a half. The final three methods are the most consistently accurate, but only SVM is really worth the added consistency, given the vast amount of time SR and JSR take to generate results. At its slowest, JSR classifies about five pixels per second, whereas MSD at its slowest still processes over 1250 pixels per second and KASD about 85. JSR simply cannot keep up.
In the SVM row of Table 9, we noticed that the ET training times for DS-4 and DS-5 are actually higher than for the other cases. This observation might look strange, but we repeated our experiments multiple times and the ET times are correct. The reason is most likely that the support vectors in SVM are obtained via an iterative optimization process during training, which stops once the optimization metric reaches a pre-specified threshold. We believe that, in the DS-44, DS-55, DS-144, and DS-145 cases, convergence was faster than in the DS-4 and DS-5 cases, so less time was needed to reach good performance. This is corroborated by the fact that DS-4 and DS-5 indeed perform worse than the other cases: more iterations (more time) were needed, and yet the final performance metrics were still lower.
The accuracy values are generated from a small subsection of the full image, but the ET value is calculated across the entire image: the ground truth covers around twelve thousand pixels, while the full image encompasses more than 650,000 pixels. It is therefore important to look at the classification maps of the full image to obtain a better sense of the accuracy of each method. All of the classification maps (Figure A1, Figure A2, Figure A3, Figure A4, Figure A5, Figure A6, Figure A7, Figure A8 and Figure A9) are in Appendix A, but the DS-44 classification maps for the nine methods are shown in Figure 2, as DS-44 is the most practical case and gives a good sense of the accuracy of each method.
A large distortion is clearly present in the right quarter of each image in Figure 2. This is most likely caused by a cloud, which then degrades the detection performance of each method. It can also be seen that certain maps, such as KRXD (sub-image (f)), contain a fair amount of noise in their classifications. Even with that noise, the ground truth accuracy for KRXD is still around 64%. In contrast, the SVM map is quite consistent, with much less noise.

3.1.2. Comparison with Khodadadzadeh et al.’s Results [23]

Here, we extracted some numbers from Table V in [23] and put them in Table 10. According to [23], the case of Xh+EMAP(Xh) used all 144 bands and some additional EMAP bands; the case of Xh + AP(XL) used 144 bands and some additional bands from the LiDAR data; the last case is the combination of 144 bands, LiDAR, and EMAP bands.
We also extracted our best performing numbers from Table 3 and put them in Table 10. Our DS-44 case includes four optical bands (RGB+NIR) and 40 EMAP bands. This case is similar to Xh+EMAP(Xh), except that we used only four of the 144 bands. It can be seen that our results are only 2% to 4% lower than those using 144 bands. The DS-55 case includes LiDAR information; comparing our results to the two cases Xh + AP(XL) and Xh + AP(XL) + EMAP(Xh) in [23], our results are only 1% to 4% lower.
The above comparison shows that using only four or five bands with EMAP achieves results that are only a few percentage points lower than MLRsub in [23]. This means it is feasible to use RGB+NIR with EMAP for practical land cover classification.

3.1.3. Comparison with Liao et al.’s Results

Now, we compare our results with those generated by Generalized Graph-based Fusion (GGF) [24]. We extract some numbers from Table I of [24] and put them in Table 10. The HS case (known as Raw HS in [24]) is the result of directly applying SVM to the 144 bands. The MPSHSLi case applies SVM to features generated by morphological profiles of the HS and LiDAR data. The GGF case applies SVM to GGF features (both hyperspectral and LiDAR).
Comparing our DS-44 case with the Raw HS case in [24], one can see that our results are almost the same. Comparing our DS-55 case to the MPSHSLi case in [24], the results are also comparable. Finally, our DS-55 results are 7% lower than those of GGF. However, it is important to mention that the GGF method utilized some additional information from the test samples. In the paragraph below Figure 2 of [24], the authors state that “in our experiments, 5000 samples were randomly selected to train … our proposed GGF”. As there are only 2832 pixels in the training data, the 5000 samples must have come from the test data, so this is not a fair comparison between GGF and our results.

3.1.4. Wide RGB and NIR Bands

When we created the RGB and NIR bands, we directly selected narrow bands from the hyperspectral data. There are many color imagers with different wavelength ranges. Some imagers have narrow bands that indeed resemble hyperspectral bands; in others, the spectral response of each band is wider. See Figure 3 for the actual spectral responses of the RGB and NIR bands of a commercial imager [42]. It is important to investigate the robustness of our approach with respect to different sensors: does the bandwidth affect the observations in this paper? Here, we carry out a comparative study between our choice of narrow RGB and NIR bands and wide-band images obtained by combining multiple hyperspectral bands according to actual spectral response functions.
We first synthesize wide RGB and NIR bands from the hyperspectral bands. Each wide band is generated as a weighted average of the hyperspectral bands based on the spectral response functions shown in Figure 3: the weights come from the spectral response curves, and we multiply each weight with the corresponding hyperspectral band and sum the results. After generating the four wide bands, we apply the EMAP algorithm to generate the synthetic EMAP bands, and we then apply the SVM classifier to those wide bands. Table 11 shows the classification results. From Table 11, the land cover classification performance using narrow bands from the HS data differs by less than 0.5% from that of using wide bands. In four out of 12 cases, the narrow bands perform slightly better than the wide bands; in the remaining cases, the wide bands perform slightly better, which makes sense because the wide bands carry more spectral information. In short, the land cover classification performances of narrow and wide bands are comparable, and our results are consistent regardless of the bandwidths.
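The wide-band synthesis is a per-pixel weighted average; a sketch follows, assuming the response curves of Figure 3 have already been resampled to the 144 hyperspectral band centers (the resampling step is not shown):

```python
import numpy as np

def synthesize_wide_band(hsi, response):
    """Wide band as a response-weighted average of hyperspectral bands.

    hsi: (H, W, 144) cube; response: (144,) spectral response sampled
    at the hyperspectral band centers, normalized here to sum to 1.
    """
    w = np.asarray(response, dtype=float)
    w = w / w.sum()
    return np.tensordot(hsi, w, axes=([2], [0]))  # (H, W) wide band
```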

3.2. Discussion

It should be noted that our emphasis in this paper is to investigate the performance of using four bands (DS-4) and, if LiDAR data are available, five bands (DS-5) for land cover classification, with the four and five bands augmented by EMAP. We address some additional points in the following.

3.2.1. Full Hyperspectral Data vs. Synthetic Bands

One might expect the case of using all 144 hyperspectral bands to yield the best performance. This turns out not to be the case in our findings. Although we do not have a solid theory to explain this behavior, similar observations appear in other researchers' results. For instance, in Table IV of [43], at least two approaches (PCA and MPsHSI) utilized fewer bands and yet achieved better overall accuracies than using all bands. In Table 4 of [41], the method known as ISSC used fewer bands and yet performed better than using all bands; in Table 5 of [41], another method known as LP also used fewer bands and achieved higher accuracy than using all bands. One potential explanation is the curse of dimensionality in hyperspectral data: more bands may confuse the classifiers in some ways. Another potential explanation, suggested by an anonymous reviewer, is redundancy in hyperspectral data: more, but redundant, spectral bands may be more harmful than fewer, non-redundant bands. More theoretical research is needed to fully understand these observations.

3.2.2. EMAP Based Augmentation vs. Deep Learning

A natural question is whether EMAP augmentation is outdated in the era of deep learning, given that deep learning has some generalization capability. This viewpoint may or may not be valid, depending on the application. For the same data set, we are currently investigating two deep learning methods using only four (RGB+NIR) or five (RGB+NIR+LiDAR) bands. The first, which we developed for soil detection using multispectral images, is a customized structure with six layers; details of the architecture can be found in [44]. The second was developed by others for hyperspectral image classification; we used the open-source code from GitHub [45]. Both deep learning methods are based on convolutional neural networks (CNNs).
In our preliminary experiments, we observed that the overall accuracies are around 80% using four or five bands. However, with the EMAP-augmented bands, the deep learning results improve to close to 88%.
In our opinion, the power of deep learning emerges only when one has a vast amount of training data. Unfortunately, in the IEEE GRSS Data Fusion Contest dataset, the training samples are far fewer than the testing samples; consequently, the deep learning methods did not show their power in boosting land cover classification performance. We plan to wrap up this deep learning work in the near future.

3.3. Potential of Using Object Based Approaches

In recent years, object-based classification approaches have gained popularity in remote sensing. Object-based approaches involve segmentation and classification steps, and they avoid the salt-and-pepper classification maps generated by pixel-based approaches. Moreover, object-based approaches can incorporate geometric shapes, sizes, and spectral information into the land cover classification process. In theory, object-based approaches should perform better in land cover classification, and some researchers have concluded that object-based methods outperform pixel-based methods [46,47].
Unfortunately, the training, testing, and ground truth label data are all in pixels for both the IEEE and the Trento datasets; that is, we do not have ground truth land cover maps for either dataset. It would be a good future direction to work with the dataset owners to define and generate ground truth land cover maps for all of the land cover types, so that object-based approaches can be evaluated on those datasets.

4. Conclusions and Future Directions

In this paper, we have investigated the performance of land cover classification using only four bands (RGB+NIR) or five bands (RGB+NIR+LiDAR). Our first key observation is that the land cover classification performance using four or five bands without EMAP is not good enough, regardless of the classification algorithm. Our second key observation is that, with help from EMAP, the four or five bands can produce very good classification performance using the SVM and JSR algorithms. We also observed that LiDAR data further enhance the classification performance. Comparing our results with representative papers in the literature shows that using four or five bands with EMAP is feasible for land cover classification, as the accuracies are only a few percentage points lower than some of the best performing methods in the literature that utilize all of the hyperspectral bands. Computational time adds further nuance, as the best performing methods often take significantly longer to process the data. The one exception is SVM, which performs above average in all scenarios, is the best in all but the DS-55 case, and maintains a computational time of under five minutes in all but the DS-4 case.
There are several future directions for our work. The first is whether one can fuse multiple classification maps to further improve classification accuracy: we applied nine methods, each generating a land cover classification map, and it would be interesting to investigate fusing those nine maps via voting or Dempster–Shafer fusion algorithms. The second direction is to explore deep learning approaches for land cover classification using only RGB+NIR bands. A third direction is to investigate which bands in the EMAP-enhanced data are more useful than others.

Author Contributions

Conceptualization, C.K.; methodology, C.K., B.A., D.G., S.B., A.P.; writing—original draft preparation, C.K.; supervision, C.K.; project administration, C.K.; writing—review and editing, S.B., A.P., M.S.; funding acquisition, C.K. All authors have read and agreed to the published version of the manuscript.

Funding

This research was supported by US Department of Energy under grant # DE-SC0019936. The views, opinions and/or findings expressed are those of the author and should not be interpreted as representing the official views or policies of the Department of Defense or the U.S. Government.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A

ASD maps for DS-4, DS-5, DS-55, DS-144, and DS-145 cases
Figure A1. ASD classification maps.
MSD maps for DS-4, DS-5, DS-55, DS-144, and DS-145 cases
Figure A2. MSD classification maps.
RXD maps for DS-4, DS-5, DS-55, DS-144, and DS-145 cases
Figure A3. RXD classification maps.
KASD maps for DS-4, DS-5, DS-55, DS-144, and DS-145 cases
Figure A4. KASD classification maps.
KMSD maps for DS-4, DS-5, DS-55, DS-144, and DS-145 cases
Figure A5. KMSD classification maps.
KRXD maps for DS-4, DS-5, DS-55, DS-144, and DS-145 cases
Figure A6. KRXD classification maps.
SR maps for DS-4, DS-5, DS-55, DS-144, and DS-145 cases
Figure A7. SR classification maps.
JSR maps for DS-4, DS-5, DS-55, DS-144, and DS-145 cases
Figure A8. JSR classification maps.
SVM maps for DS-4, DS-5, DS-55, DS-144, and DS-145 cases
Figure A9. SVM classification maps.

References

1. Lee, C.M.; Cable, M.L.; Hook, S.J.; Green, R.O.; Ustin, S.L.; Mandl, D.J.; Middleton, E.M. An introduction to the NASA Hyperspectral InfraRed Imager (HyspIRI) mission and preparatory activities. Remote Sens. Environ. 2015, 167, 6–19.
2. AVIRIS. Available online: https://aviris.jpl.nasa.gov/aviris/index.html (accessed on 18 December 2019).
3. Ayhan, B.; Kwan, C.; Jensen, J.O. Remote vapor detection and classification using hyperspectral images. In Proceedings of the Chemical, Biological, Radiological, Nuclear, and Explosives (CBRNE) Sensing XX, Bellingham, WA, USA, 25 July 2019.
4. Harsanyi, J.C.; Chang, C.-I. Hyperspectral image classification and dimensionality reduction: An orthogonal subspace projection approach. IEEE Trans. Geosci. Remote Sens. 1994, 32, 779–785.
5. Heinz, D.; Chang, C.-I. Fully constrained least squares linear spectral mixture analysis method for material quantification in hyperspectral imagery. IEEE Trans. Geosci. Remote Sens. 2001, 39, 529–545.
6. Dao, M.; Kwan, C.; Ayhan, B.; Tran, T.D. Burn scar detection using cloudy MODIS images via low-rank and sparsity-based models. In Proceedings of the 2016 IEEE Global Conference on Signal and Information Processing (GlobalSIP), Washington, DC, USA, 7–9 December 2016; pp. 177–181.
7. Veraverbeke, S.; Dennison, P.; Gitas, I.; Hulley, G.; Kalashnikova, O.; Katagis, T.; Kuai, L.; Meng, R.; Roberts, D.; Stavros, N. Hyperspectral remote sensing of fire: State-of-the-art and future perspectives. Remote Sens. Environ. 2018, 216, 105–121.
8. Wang, W.; Li, S.; Qi, H.; Ayhan, B.; Kwan, C.; Vance, S. Identify anomaly component by sparsity and low rank. In Proceedings of the 2015 7th Workshop on Hyperspectral Image and Signal Processing: Evolution in Remote Sensing (WHISPERS), Tokyo, Japan, 2–5 June 2015; pp. 1–4.
9. Chang, C.-I. Hyperspectral Imaging; Springer: New York, NY, USA, 2003.
10. Li, S.; Wang, W.; Qi, H.; Ayhan, B.; Kwan, C.; Vance, S. Low-rank tensor decomposition based anomaly detection for hyperspectral imagery. In Proceedings of the 2015 IEEE International Conference on Image Processing (ICIP), Quebec City, QC, Canada, 27–30 September 2015; pp. 4525–4529.
11. Yang, Y.; Zhang, J.; Song, S.; Liu, D. Hyperspectral Anomaly Detection via Dictionary Construction-Based Low-Rank Representation and Adaptive Weighting. Remote Sens. 2019, 11, 192.
12. Qu, Y.; Guo, R.; Wang, W.; Qi, H.; Ayhan, B.; Kwan, C.; Vance, S. Anomaly detection in hyperspectral images through spectral unmixing and low rank decomposition. In Proceedings of the 2016 IEEE International Geoscience and Remote Sensing Symposium (IGARSS), Beijing, China, 10–15 July 2016; pp. 1855–1858.
13. Li, F.; Zhang, L.; Zhang, X.; Chen, Y.; Jiang, D.; Zhao, G.; Zhang, Y. Structured Background Modeling for Hyperspectral Anomaly Detection. Sensors 2018, 18, 3137.
14. Qu, Y.; Qi, H.; Ayhan, B.; Kwan, C.; Kidd, R. Does multispectral/hyperspectral pansharpening improve the performance of anomaly detection? In Proceedings of the 2017 IEEE International Geoscience and Remote Sensing Symposium (IGARSS), Fort Worth, TX, USA, 23–28 July 2017; pp. 6130–6133.
15. Kwan, C.; Ayhan, B.; Chen, G.; Wang, J.; Ji, B.; Chang, C.-I. A novel approach for spectral unmixing, classification, and concentration estimation of chemical and biological agents. IEEE Trans. Geosci. Remote Sens. 2006, 44, 409–419.
16. Dao, M.; Kwan, C.; Koperski, K.; Marchisio, G. A joint sparsity approach to tunnel activity monitoring using high resolution satellite images. In Proceedings of the 2017 IEEE 8th Annual Ubiquitous Computing, Electronics and Mobile Communication Conference (UEMCON), New York, NY, USA, 19–21 October 2017; pp. 322–328.
17. Radke, R.; Andra, S.; Al-Kofahi, O.; Roysam, B. Image change detection algorithms: A systematic survey. IEEE Trans. Image Process. 2005, 14, 294–307.
18. Ilsever, M.; Ünsalan, C. Two-Dimensional Change Detection Methods; Springer Science and Business Media LLC: London, UK, 2012.
19. Zhou, J.; Kwan, C.; Ayhan, B.; Eismann, M.T. A Novel Cluster Kernel RX Algorithm for Anomaly and Change Detection Using Hyperspectral Images. IEEE Trans. Geosci. Remote Sens. 2016, 54, 6497–6504.
20. Bovolo, F.; Bruzzone, L. The Time Variable in Data Fusion: A Change Detection Perspective. IEEE Geosci. Remote Sens. Mag. 2015, 3, 8–26.
21. Kwan, C.; Haberle, C.; Echavarren, A.; Ayhan, B.; Chou, B.; Budavari, B.; Dickenshied, S. Mars Surface Mineral Abundance Estimation Using THEMIS and TES Images. In Proceedings of the 2018 9th IEEE Annual Ubiquitous Computing, Electronics & Mobile Communication Conference (UEMCON), New York, NY, USA, 8–9 November 2018.
22. CRISM. Available online: http://crism.jhuapl.edu/ (accessed on 18 December 2019).
23. Khodadadzadeh, M.; Li, J.; Prasad, S.; Plaza, J. Fusion of Hyperspectral and LiDAR Remote Sensing Data Using Multiple Feature Learning. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2015, 8, 2971–2983.
24. Liao, W.; Pižurica, A.; Bellens, R.; Gautama, S.; Philips, W. Generalized Graph-Based Fusion of Hyperspectral and LiDAR Data Using Morphological Features. IEEE Geosci. Remote Sens. Lett. 2014, 12, 552–556.
25. Kwan, C. Remote Sensing Performance Enhancement in Hyperspectral Images. Sensors 2018, 18, 3598.
26. Dao, M.; Kwan, C.; Bernabe, S.; Plaza, J.; Koperski, K. A Joint Sparsity Approach to Soil Detection Using Expanded Bands of WV-2 Images. IEEE Geosci. Remote Sens. Lett. 2019, 16, 1869–1873.
27. Chen, L.-C.; Papandreou, G.; Kokkinos, I.; Murphy, K.; Yuille, A.L. DeepLab: Semantic Image Segmentation with Deep Convolutional Nets, Atrous Convolution, and Fully Connected CRFs. IEEE Trans. Pattern Anal. Mach. Intell. 2018, 40, 834–848.
28. Badrinarayanan, V.; Kendall, A.; Cipolla, R. SegNet: A Deep Convolutional Encoder-Decoder Architecture for Image Segmentation. IEEE Trans. Pattern Anal. Mach. Intell. 2017, 39, 2481–2495.
29. Zhao, H.; Shi, J.; Qi, X.; Wang, X.; Jia, J. Pyramid Scene Parsing Network. In Proceedings of the 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Honolulu, HI, USA, 21–26 July 2017; pp. 6230–6239.
30. Zhou, B.; Zhao, H.; Puig, X.; Fidler, S.; Barriuso, A.; Torralba, A. Scene Parsing through ADE20K Dataset. In Proceedings of the 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Honolulu, HI, USA, 21–26 July 2017; pp. 5122–5130.
31. Ayhan, B.; Kwan, C. Tree, Shrub, and Grass Classification Using Only RGB Images. Remote Sens. 2020, 12, 1333.
32. Nasrabadi, N.M. Kernel-Based Spectral Matched Signal Detectors for Hyperspectral Target Detection. In Proceedings of the Computer Vision; Springer Science and Business Media LLC: London, UK, 2007; Volume 4815, pp. 67–76.
33. Nguyen, D.; Kwan, C.; Ayhan, B. A comparative study of several supervised target detection algorithms for hyperspectral images. In Proceedings of the 2017 IEEE 8th Annual Ubiquitous Computing, Electronics and Mobile Communication Conference (UEMCON), New York, NY, USA, 19–21 October 2017; pp. 192–196.
34. Kwon, H.; Nasrabadi, N. Kernel RX-algorithm: A nonlinear anomaly detector for hyperspectral imagery. IEEE Trans. Geosci. Remote Sens. 2005, 43, 388–397.
35. Burges, C.J. A Tutorial on Support Vector Machines for Pattern Recognition. Data Min. Knowl. Discov. 1998, 2, 121–167.
36. Qian, T.; Li, X.; Ayhan, B.; Xu, R.; Kwan, C.; Griffin, T. Application of Support Vector Machines to Vapor Detection and Classification for Environmental Monitoring of Spacecraft; Springer Science and Business Media LLC: London, UK, 2006; Volume 3973, pp. 1216–1222.
37. Bernabé, S.; Marpu, P.; Plaza, J.; Mura, M.D.; Benediktsson, J.A. Spectral–Spatial Classification of Multispectral Images Using Kernel Feature Space Representation. IEEE Geosci. Remote Sens. Lett. 2014, 11, 288–292.
38. Bernabé, S.; Marpu, P.; Benediktsson, J.A. Spectral unmixing of multispectral satellite images with dimensionality expansion using morphological profiles. In Proceedings of the Satellite Data Compression, Communications, and Processing VIII, San Diego, CA, USA, 12–13 August 2012.
39. Mura, M.D.; Benediktsson, J.A.; Waske, B.; Bruzzone, L. Morphological Attribute Profiles for the Analysis of Very High Resolution Images. IEEE Trans. Geosci. Remote Sens. 2010, 48, 3747–3762.
40. Mura, M.D.; Benediktsson, J.A.; Waske, B.; Bruzzone, L. Extended profiles with morphological attribute filters for the analysis of hyperspectral data. Int. J. Remote Sens. 2010, 31, 5975–5991.
41. Sun, W.; Du, Q. Hyperspectral Band Selection: A Review. IEEE Geosci. Remote Sens. Mag. 2019, 7, 118–139.
42. Spectral Responses of RGB and NIR Bands. Available online: https://www.spectraldevices.com/content/multispectral-imaging-technology (accessed on 27 April 2020).
43. Debes, C.; Merentitis, A.; Heremans, R.; Hahn, J.; Frangiadakis, N.; Van Kasteren, T.; Liao, W.; Bellens, R.; Pižurica, A.; Gautama, S.; et al. Hyperspectral and LiDAR Data Fusion: Outcome of the 2013 GRSS Data Fusion Contest. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2014, 7, 2405–2418.
44. Lu, Y.; Perez, D.; Dao, M.; Kwan, C.; Li, J. Deep Learning with Synthetic Hyperspectral Images for Improved Soil Detection in Multispectral Imagery. In Proceedings of the 2018 9th IEEE Annual Ubiquitous Computing, Electronics & Mobile Communication Conference (UEMCON), New York, NY, USA, 8–9 November 2018; pp. 666–672.
45. Deep-Learning-for-HSI-Classification. Available online: https://github.com/luozm/Deep-Learning-for-HSI-classification (accessed on 27 April 2020).
46. Ma, L.; Li, M.; Ma, X.; Cheng, L.; Du, P.; Liu, Y. A review of supervised object-based land-cover image classification. ISPRS J. Photogramm. Remote Sens. 2017, 130, 277–293.
47. Blaschke, T. Object based image analysis for remote sensing. ISPRS J. Photogramm. Remote Sens. 2010, 65, 2–16.
Figure 1. RGB values in tandem with the ground truth pixel values.
Figure 2. DS-44 classification maps for (a) ASD, (b) MSD, (c) RXD, (d) KASD, (e) KMSD, (f) KRXD, (g) SR, (h) JSR, and (i) SVM.
Figure 3. Spectral response of RGB and near infrared (NIR) bands [42].
Table 1. Number of pixels per class in the IEEE Geoscience and Remote Sensing Society (GRSS) data. The total number of unlabeled pixels is 649,816 and the total number of pixels is 664,845.

| # | Class Name | Training Samples | Test Samples |
|---|---|---|---|
| 1 | Healthy grass | 198 | 1053 |
| 2 | Stressed grass | 190 | 1064 |
| 3 | Synthetic grass | 192 | 505 |
| 4 | Tree | 188 | 1056 |
| 5 | Soil | 186 | 1056 |
| 6 | Water | 182 | 143 |
| 7 | Residential | 196 | 1072 |
| 8 | Commercial | 191 | 1053 |
| 9 | Road | 193 | 1059 |
| 10 | Highway | 191 | 1036 |
| 11 | Railway | 181 | 1054 |
| 12 | Parking lot 1 | 192 | 1041 |
| 13 | Parking lot 2 | 184 | 285 |
| 14 | Tennis court | 181 | 247 |
| 15 | Running track | 181 | 473 |
| 1–15 | Total | 2832 | 12,197 |
Table 2. Dataset labels and the corresponding bands.

| Dataset Label | Short Label | Bands Present in the Corresponding Dataset |
|---|---|---|
| RGBNIR | DS-4 | RGB and the NIR bands (respectively bands #60, #30, #22, and #103 in the hyperspectral data). |
| RGBNIR_LiDAR | DS-5 | RGB and the NIR bands; LiDAR data. |
| EMAP_RGBNIR | DS-44 | RGB and the NIR bands; 40 bands obtained by EMAP augmentation applied to the RGB and NIR bands. |
| EMAP_RGBNIR_LiDAR | DS-55 | RGB and the NIR bands; LiDAR data; 50 bands obtained by EMAP augmentation applied to RGB, NIR, and LiDAR. |
| HYPER | DS-144 | Hyperspectral data set. |
| HYPER_LiDAR | DS-145 | Hyperspectral data set; LiDAR data. |
Table 3. Overall accuracy (OA) in percentage for each method and band combination. Red numbers indicate the best accuracy for each method and bold numbers indicate the best accuracy for each dataset.

| OA | DS-4 | DS-5 | DS-44 | DS-55 | DS-144 | DS-145 |
|---|---|---|---|---|---|---|
| ASD | 4.28 | 0.07 | 27.89 | 56.92 | 37.37 | 38.38 |
| MSD | 0.11 | 4.16 | 48.65 | 67.55 | 55.56 | 55.57 |
| RXD | 28.93 | 38.87 | 46.09 | 33.29 | 42.69 | 42.71 |
| KASD | 6.16 | 7.99 | 79.70 | 81.28 | 53.57 | 53.26 |
| KMSD | 26.32 | 39.15 | 69.26 | 51.40 | 53.61 | 53.10 |
| KRXD | 5.72 | 7.82 | 64.14 | 38.53 | 71.79 | 71.79 |
| SR | 39.99 | 42.9 | 64.4 | 70.97 | 57.46 | 57.46 |
| JSR | 59.83 | 70.81 | 80.77 | 86.86 | 72.57 | 59.04 |
| SVM | 70.43 | 74.62 | 82.64 | 86.00 | 78.68 | 81.76 |
Table 4. Average accuracy (AA) in percentage for each method and band combination. Red numbers indicate the best accuracy for each method and bold numbers indicate the best accuracy for each dataset.

| AA | DS-4 | DS-5 | DS-44 | DS-55 | DS-144 | DS-145 |
|---|---|---|---|---|---|---|
| ASD | 0.70 | 3.80 | 65.40 | 67.39 | 33.83 | 47.68 |
| MSD | 0.34 | 2.42 | 56.39 | 72.68 | 59.45 | 58.71 |
| RXD | 39.53 | 43.83 | 56.30 | 32.67 | 49.04 | 47.84 |
| KASD | 6.16 | 6.80 | 83.51 | 81.43 | 50.29 | 60.92 |
| KMSD | 41.34 | 48.98 | 68.23 | 58.22 | 65.89 | 56.06 |
| KRXD | 6.49 | 6.65 | 78.63 | 53.17 | 75.85 | 76.21 |
| SR | 44.24 | 46.86 | 69.82 | 74.58 | 61.72 | 61.72 |
| JSR | 60.19 | 71.21 | 83.27 | 88.45 | 74.80 | 60.90 |
| SVM | 70.74 | 73.12 | 85.61 | 86.48 | 81.16 | 81.04 |
Table 5. Kappa coefficient (κ) of each method and band combination. Italic numbers indicate the best accuracy for each method and bold numbers indicate the best accuracy for each dataset.

| Kappa | DS-4 | DS-5 | DS-44 | DS-55 | DS-144 | DS-145 |
|---|---|---|---|---|---|---|
| ASD | −0.02 | 0.01 | 0.21 | *0.53* | 0.33 | 0.38 |
| MSD | −0.04 | −0.01 | 0.45 | *0.65* | 0.52 | 0.56 |
| RXD | 0.23 | 0.34 | *0.42* | 0.28 | 0.38 | 0.38 |
| KASD | −0.01 | 0.001 | 0.78 | *0.80* | 0.50 | 0.496 |
| KMSD | 0.22 | 0.35 | *0.67* | 0.48 | 0.50 | 0.4912 |
| KRXD | −0.01 | 0.01 | 0.62 | 0.33 | *0.70* | *0.70* |
| SR | 0.358 | 0.390 | 0.615 | *0.685* | 0.541 | 0.541 |
| JSR | 0.567 | 0.684 | 0.791 | *0.857* | 0.704 | 0.557 |
| SVM | **0.704** | **0.750** | **0.812** | **0.859** | **0.765** | ***0.864*** |
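For reference, the three summary metrics in Tables 3–5, as well as the class-specific accuracies in Tables 6–8 below, can all be derived from the confusion matrix over the test pixels. A minimal sketch follows; the function and variable names are ours, labels are assumed to be encoded as 0–14, and every class is assumed to appear at least once in the test set.

```python
import numpy as np

def classification_scores(y_true, y_pred, n_classes=15):
    """OA, AA, per-class accuracy, and the kappa coefficient from a
    confusion matrix built over the test pixels."""
    cm = np.zeros((n_classes, n_classes), dtype=np.int64)
    for t, p in zip(y_true, y_pred):               # labels in 0..n_classes-1
        cm[t, p] += 1
    n = cm.sum()
    oa = np.trace(cm) / n                          # overall accuracy (Table 3)
    per_class = np.diag(cm) / cm.sum(axis=1)       # class-specific accuracy (Tables 6-8)
    aa = per_class.mean()                          # average accuracy (Table 4)
    p_e = (cm.sum(axis=0) * cm.sum(axis=1)).sum() / n**2   # chance agreement
    kappa = (oa - p_e) / (1 - p_e)                 # kappa coefficient (Table 5)
    return oa, aa, per_class, kappa
```

Because κ corrects OA for the agreement expected by chance on a 15-class problem, it can rank methods slightly differently than OA alone.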
Table 6. Class-specific accuracies (%) of Adaptive Subspace Detection (ASD), Matched Signature Detection (MSD), and Reed-Xiaoli Detection (RXD) with various band combinations.

ASD:

| Class | DS-4 | DS-5 | DS-44 | DS-55 | DS-144 | DS-145 |
|---|---|---|---|---|---|---|
| 1-Healthy grass | 0.00 | 0.00 | 43.97 | 66.00 | 75.78 | 75.78 |
| 2-Stressed grass | 0.00 | 0.09 | 13.91 | 4.04 | 42.29 | 44.08 |
| 3-Synthetic grass | 0.00 | 0.00 | 11.09 | 99.41 | 100.0 | 100.0 |
| 4-Trees | 0.00 | 0.00 | 43.18 | 83.62 | 45.83 | 47.16 |
| 5-Soil | 0.00 | 0.19 | 21.40 | 27.75 | 27.75 | 25.57 |
| 6-Water | 0.00 | 0.00 | 0.00 | 91.61 | 89.51 | 89.51 |
| 7-Residential | 0.00 | 0.09 | 92.63 | 67.63 | 12.50 | 12.22 |
| 8-Commercial | 0.00 | 0.00 | 57.08 | 80.06 | 59.64 | 66.29 |
| 9-Road | 0.00 | 40.98 | 0.38 | 68.84 | 11.80 | 13.88 |
| 10-Highway | 0.00 | 44.02 | 6.37 | 44.31 | 5.60 | 6.66 |
| 11-Railway | 4.65 | 0.00 | 0.28 | 33.49 | 6.93 | 6.74 |
| 12-Parking lot 1 | 0.00 | 0.00 | 10.57 | 39.48 | 13.26 | 15.66 |
| 13-Parking lot 2 | 0.00 | 0.35 | 22.11 | 57.89 | 51.23 | 51.93 |
| 14-Tennis Court | 0.00 | 0.00 | 45.75 | 97.98 | 74.90 | 74.90 |
| 15-Running Track | 100.0 | 0.00 | 21.14 | 99.15 | 87.32 | 84.78 |

MSD:

| Class | DS-4 | DS-5 | DS-44 | DS-55 | DS-144 | DS-145 |
|---|---|---|---|---|---|---|
| 1-Healthy grass | 0.00 | 0.00 | 13.01 | 55.46 | 51.28 | 51.28 |
| 2-Stressed grass | 0.00 | 0.00 | 23.50 | 43.98 | 84.68 | 84.68 |
| 3-Synthetic grass | 0.00 | 98.81 | 100.0 | 100.0 | 100.0 | 100.0 |
| 4-Trees | 0.00 | 0.00 | 78.22 | 75.38 | 54.45 | 54.45 |
| 5-Soil | 0.00 | 0.00 | 80.40 | 92.80 | 98.30 | 98.30 |
| 6-Water | 0.00 | 0.00 | 65.73 | 79.72 | 82.52 | 82.52 |
| 7-Residential | 0.00 | 0.00 | 32.93 | 73.79 | 67.44 | 67.54 |
| 8-Commercial | 0.00 | 0.00 | 20.13 | 46.06 | 49.10 | 49.10 |
| 9-Road | 0.00 | 0.19 | 29.65 | 43.63 | 39.47 | 39.47 |
| 10-Highway | 0.00 | 0.00 | 41.60 | 68.15 | 41.70 | 41.70 |
| 11-Railway | 0.38 | 0.00 | 64.23 | 94.50 | 23.06 | 23.06 |
| 12-Parking lot 1 | 0.00 | 0.00 | 42.75 | 46.78 | 9.41 | 9.41 |
| 13-Parking lot 2 | 0.00 | 2.46 | 58.25 | 58.25 | 8.42 | 8.42 |
| 14-Tennis Court | 3.64 | 0.00 | 99.60 | 98.79 | 72.06 | 72.06 |
| 15-Running Track | 0.00 | 0.00 | 90.70 | 96.19 | 98.73 | 98.73 |

RXD:

| Class | DS-4 | DS-5 | DS-44 | DS-55 | DS-144 | DS-145 |
|---|---|---|---|---|---|---|
| 1-Healthy grass | 83.10 | 83.00 | 42.26 | 34.09 | 60.59 | 60.68 |
| 2-Stressed grass | 0.00 | 0.00 | 44.08 | 8.27 | 80.73 | 80.73 |
| 3-Synthetic grass | 99.01 | 98.81 | 100.0 | 100.0 | 82.18 | 82.18 |
| 4-Trees | 4.73 | 91.19 | 71.02 | 45.93 | 45.45 | 45.45 |
| 5-Soil | 98.77 | 98.20 | 87.88 | 58.05 | 99.62 | 99.62 |
| 6-Water | 56.64 | 56.64 | 67.83 | 78.32 | 71.33 | 71.33 |
| 7-Residential | 10.35 | 20.24 | 14.09 | 0.28 | 46.74 | 46.74 |
| 8-Commercial | 37.51 | 51.57 | 42.92 | 53.56 | 21.84 | 21.84 |
| 9-Road | 0.00 | 0.00 | 0.00 | 0.00 | 5.57 | 5.57 |
| 10-Highway | 0.00 | 0.00 | 45.17 | 1.35 | 9.07 | 9.07 |
| 11-Railway | 0.00 | 0.00 | 31.88 | 19.54 | 5.12 | 5.22 |
| 12-Parking lot 1 | 0.00 | 0.00 | 28.34 | 37.18 | 3.55 | 3.55 |
| 13-Parking lot 2 | 0.00 | 18.95 | 40.35 | 54.04 | 15.44 | 15.44 |
| 14-Tennis Court | 0.00 | 0.00 | 55.87 | 39.27 | 72.06 | 72.06 |
| 15-Running Track | 100.0 | 100.0 | 100.0 | 100.0 | 98.10 | 98.10 |
Table 7. Class-specific accuracies (%) of Kernel ASD (KASD), Kernel MSD (KMSD), and Kernel RXD (KRXD) with various band combinations.

KASD:

| Class | DS-4 | DS-5 | DS-44 | DS-55 | DS-144 | DS-145 |
|---|---|---|---|---|---|---|
| 1-Healthy grass | 4.94 | 7.31 | 80.25 | 83.00 | 80.91 | 81.01 |
| 2-Stressed grass | 4.42 | 0.66 | 71.33 | 71.52 | 61.84 | 61.94 |
| 3-Synthetic grass | 1.39 | 3.17 | 99.60 | 100.0 | 99.41 | 99.60 |
| 4-Trees | 6.72 | 2.37 | 91.76 | 91.76 | 91.86 | 92.52 |
| 5-Soil | 4.36 | 0.95 | 98.96 | 96.69 | 72.63 | 72.73 |
| 6-Water | 15.38 | 3.50 | 95.80 | 97.20 | 100.0 | 100.0 |
| 7-Residential | 0.84 | 20.71 | 91.79 | 90.67 | 61.01 | 59.70 |
| 8-Commercial | 9.31 | 1.61 | 46.53 | 73.12 | 44.92 | 43.87 |
| 9-Road | 2.93 | 11.05 | 75.64 | 80.55 | 21.15 | 21.62 |
| 10-Highway | 13.90 | 2.03 | 66.70 | 45.46 | 11.39 | 12.07 |
| 11-Railway | 6.36 | 40.61 | 76.38 | 83.97 | 11.39 | 11.86 |
| 12-Parking lot 1 | 9.61 | 1.63 | 73.68 | 74.64 | 23.25 | 22.00 |
| 13-Parking lot 2 | 5.96 | 2.11 | 71.93 | 72.28 | 65.26 | 65.61 |
| 14-Tennis Court | 9.31 | 0.81 | 100.0 | 97.17 | 90.69 | 90.69 |
| 15-Running Track | 3.59 | 1.06 | 100.0 | 99.58 | 84.78 | 78.65 |

KMSD:

| Class | DS-4 | DS-5 | DS-44 | DS-55 | DS-144 | DS-145 |
|---|---|---|---|---|---|---|
| 1-Healthy grass | 28.02 | 45.58 | 82.91 | 65.81 | 97.25 | 97.44 |
| 2-Stressed grass | 13.44 | 33.55 | 74.25 | 48.68 | 59.02 | 59.87 |
| 3-Synthetic grass | 55.64 | 89.31 | 100.0 | 89.70 | 78.81 | 79.01 |
| 4-Trees | 47.06 | 59.38 | 91.67 | 20.08 | 80.02 | 80.40 |
| 5-Soil | 21.31 | 38.64 | 73.39 | 65.34 | 82.95 | 82.86 |
| 6-Water | 42.66 | 68.53 | 94.41 | 93.01 | 79.02 | 81.12 |
| 7-Residential | 18.38 | 31.44 | 61.47 | 29.20 | 45.62 | 45.62 |
| 8-Commercial | 24.98 | 25.17 | 37.80 | 25.07 | 20.51 | 21.18 |
| 9-Road | 21.25 | 31.73 | 55.43 | 36.64 | 39.00 | 32.29 |
| 10-Highway | 16.22 | 21.53 | 61.10 | 66.22 | 29.05 | 29.83 |
| 11-Railway | 16.89 | 22.87 | 73.62 | 67.17 | 37.10 | 39.28 |
| 12-Parking lot 1 | 21.90 | 27.76 | 54.47 | 43.71 | 24.98 | 22.00 |
| 13-Parking lot 2 | 25.61 | 33.33 | 27.72 | 16.49 | 18.60 | 13.33 |
| 14-Tennis Court | 31.17 | 53.44 | 95.14 | 99.19 | 93.52 | 93.52 |
| 15-Running Track | 63.21 | 92.18 | 98.94 | 98.10 | 63.64 | 63.21 |

KRXD:

| Class | DS-4 | DS-5 | DS-44 | DS-55 | DS-144 | DS-145 |
|---|---|---|---|---|---|---|
| 1-Healthy grass | 3.89 | 1.33 | 29.06 | 13.20 | 80.82 | 80.82 |
| 2-Stressed grass | 10.53 | 1.50 | 73.68 | 45.39 | 80.92 | 80.92 |
| 3-Synthetic grass | 2.18 | 0.00 | 87.13 | 0.00 | 99.80 | 99.80 |
| 4-Trees | 14.87 | 2.84 | 64.77 | 0.85 | 93.66 | 93.66 |
| 5-Soil | 3.69 | 2.75 | 87.78 | 74.34 | 97.92 | 97.92 |
| 6-Water | 6.29 | 5.59 | 53.15 | 0.00 | 95.10 | 95.10 |
| 7-Residential | 4.57 | 1.49 | 64.65 | 4.29 | 79.20 | 79.20 |
| 8-Commercial | 6.93 | 35.14 | 25.26 | 53.75 | 30.48 | 30.48 |
| 9-Road | 3.97 | 35.22 | 73.75 | 71.48 | 67.52 | 67.52 |
| 10-Highway | 7.34 | 0.39 | 55.02 | 36.20 | 43.73 | 43.73 |
| 11-Railway | 2.18 | 0.76 | 72.30 | 61.76 | 63.19 | 63.19 |
| 12-Parking lot 1 | 2.59 | 5.28 | 64.17 | 29.30 | 45.44 | 45.44 |
| 13-Parking lot 2 | 5.61 | 1.75 | 76.49 | 51.58 | 66.67 | 66.67 |
| 14-Tennis Court | 8.50 | 10.12 | 93.93 | 30.77 | 100.0 | 100.0 |
| 15-Running Track | 0.42 | 0.21 | 87.95 | 76.32 | 98.73 | 98.73 |
Table 8. Class-specific accuracies (%) of Sparse Representation (SR), Joint SR (JSR), and Support Vector Machine (SVM) with various band combinations.

SR:

| Class | DS-4 | DS-5 | DS-44 | DS-55 | DS-144 | DS-145 |
|---|---|---|---|---|---|---|
| 1-Healthy grass | 80.63 | 81.39 | 81.39 | 83.10 | 82.05 | 82.05 |
| 2-Stressed grass | 7.61 | 14.00 | 53.57 | 54.42 | 78.29 | 78.29 |
| 3-Synthetic grass | 98.42 | 98.81 | 100.0 | 100.0 | 99.60 | 99.60 |
| 4-Trees | 66.95 | 66.67 | 71.69 | 90.34 | 82.01 | 82.01 |
| 5-Soil | 94.89 | 95.27 | 91.38 | 97.35 | 99.53 | 99.53 |
| 6-Water | 99.30 | 99.30 | 99.30 | 93.71 | 97.20 | 97.20 |
| 7-Residential | 53.64 | 55.97 | 71.83 | 93.10 | 71.55 | 71.55 |
| 8-Commercial | 0.47 | 0.47 | 18.04 | 38.84 | 16.24 | 16.24 |
| 9-Road | 21.06 | 35.03 | 16.53 | 80.83 | 15.49 | 15.49 |
| 10-Highway | 14.67 | 28.96 | 58.01 | 43.73 | 49.61 | 49.61 |
| 11-Railway | 4.36 | 3.98 | 94.88 | 94.12 | 26.57 | 26.57 |
| 12-Parking lot 1 | 9.13 | 3.27 | 46.30 | 1.63 | 13.64 | 13.64 |
| 13-Parking lot 2 | 0.70 | 13.33 | 45.96 | 49.12 | 23.16 | 23.16 |
| 14-Tennis Court | 11.74 | 6.48 | 98.38 | 98.38 | 70.85 | 70.85 |
| 15-Running Track | 100.0 | 100.0 | 100.0 | 100.0 | 100.0 | 100.0 |

JSR:

| Class | DS-4 | DS-5 | DS-44 | DS-55 | DS-144 | DS-145 |
|---|---|---|---|---|---|---|
| 1-Healthy grass | 86.84 | 98.87 | 94.36 | 98.98 | 87.01 | 82.43 |
| 2-Stressed grass | 99.34 | 98.76 | 89.16 | 97.71 | 99.33 | 83.18 |
| 3-Synthetic grass | 92.45 | 100.0 | 100.0 | 100.0 | 100.0 | 97.23 |
| 4-Trees | 76.94 | 84.36 | 94.51 | 99.27 | 75.69 | 79.73 |
| 5-Soil | 95.50 | 97.08 | 97.05 | 98.05 | 97.48 | 99.62 |
| 6-Water | 44.94 | 45.79 | 100.0 | 96.45 | 39.81 | 68.53 |
| 7-Residential | 59.52 | 45.67 | 56.94 | 74.31 | 81.22 | 51.49 |
| 8-Commercial | 50.46 | 50.16 | 72.69 | 69.62 | 65.14 | 11.40 |
| 9-Road | 48.21 | 69.87 | 65.54 | 94.97 | 0.00 | 36.45 |
| 10-Highway | 43.64 | 54.04 | 81.89 | 84.43 | 0.00 | 48.17 |
| 11-Railway | 44.92 | 66.94 | 83.41 | 69.62 | 70.35 | 30.55 |
| 12-Parking lot 1 | 37.91 | 63.61 | 84.71 | 85.93 | 66.32 | 38.04 |
| 13-Parking lot 2 | 5.53 | 23.20 | 49.12 | 74.75 | 12.25 | 17.19 |
| 14-Tennis Court | 50.12 | 69.74 | 79.68 | 94.27 | 65.78 | 71.26 |
| 15-Running Track | 66.49 | 100.0 | 100.0 | 100.0 | 88.52 | 98.31 |

SVM:

| Class | DS-4 | DS-5 | DS-44 | DS-55 | DS-144 | DS-145 |
|---|---|---|---|---|---|---|
| 1-Healthy grass | 82.62 | 82.53 | 81.96 | 97.33 | 82.05 | 96.76 |
| 2-Stressed grass | 83.08 | 82.24 | 81.30 | 99.89 | 82.61 | 97.55 |
| 3-Synthetic grass | 99.60 | 99.60 | 100.00 | 100.0 | 99.80 | 39.28 |
| 4-Trees | 98.58 | 99.24 | 89.39 | 99.49 | 92.80 | 97.40 |
| 5-Soil | 96.78 | 97.25 | 97.82 | 98.01 | 98.48 | 97.48 |
| 6-Water | 93.01 | 91.61 | 95.10 | 22.70 | 94.41 | 47.06 |
| 7-Residential | 82.18 | 83.21 | 89.55 | 69.20 | 76.31 | 89.93 |
| 8-Commercial | 18.23 | 52.23 | 42.26 | 85.14 | 44.82 | 83.62 |
| 9-Road | 55.90 | 73.09 | 77.62 | 92.09 | 72.80 | 85.97 |
| 10-Highway | 53.38 | 48.46 | 68.44 | 87.75 | 56.95 | 86.55 |
| 11-Railway | 56.45 | 77.99 | 92.79 | 75.69 | 78.37 | 77.91 |
| 12-Parking lot 1 | 50.62 | 32.37 | 85.21 | 91.00 | 73.49 | 77.48 |
| 13-Parking lot 2 | 33.33 | 37.54 | 74.39 | 82.29 | 67.02 | 49.40 |
| 14-Tennis Court | 98.38 | 100.00 | 100.00 | 96.86 | 100.00 | 89.17 |
| 15-Running Track | 97.04 | 97.89 | 100.00 | 99.79 | 97.46 | 100.00 |
Table 9. Elapsed time (ET) values (minutes) for different methods and datasets in the training process. Italic numbers indicate the most efficient cases.

| ET (min) | DS-4 | DS-5 | DS-44 | DS-55 | DS-144 | DS-145 |
|---|---|---|---|---|---|---|
| ASD | 0.71 | 0.73 | 0.87 | 1.00 | 2.48 | 2.21 |
| MSD | 0.75 | 0.77 | 1.21 | 1.76 | 8.74 | 8.12 |
| RXD | *0.30* | *0.31* | *0.37* | *0.45* | *0.94* | *1.02* |
| KASD | 60.60 | 64.13 | 89.71 | 81.22 | 127.42 | 147.85 |
| KMSD | 16.33 | 16.43 | 21.56 | 23.01 | 29.44 | 29.89 |
| KRXD | 32.93 | 33.34 | 54.45 | 60.23 | 87.72 | 92.74 |
| SR | 492.83 | 694.33 | 941.06 | 921.45 | 1037.99 | 1056.98 |
| JSR | 629.23 | 891.71 | 2248.17 | 2198.56 | 2210.15 | 2310.42 |
| SVM | 5.32 | 3.76 | 0.69 | 0.47 | 1.30 | 1.41 |
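The SVM rows in Tables 3–9 correspond to training a kernel SVM on each band stack and timing the fit. A minimal sketch of such a run is given below; the RBF kernel, the hyperparameters (C, gamma), and the random stand-in data are our assumptions, not necessarily the paper's settings.

```python
import time
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Hypothetical stand-ins: 2832 training pixels with 44 features (DS-44).
X_train = np.random.rand(2832, 44)
y_train = np.random.randint(1, 16, size=2832)

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=10.0, gamma="scale"))

t0 = time.perf_counter()
clf.fit(X_train, y_train)
print(f"elapsed training time: {(time.perf_counter() - t0) / 60.0:.3f} min")
```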
Table 10. Comparison of the overall accuracies (%) of several methods.

| Reference | Dataset Adopted | Algorithm Adopted | Overall Accuracy |
|---|---|---|---|
| This paper | EMAP_RGBNIR (DS-44) | JSR | 80.77 |
| This paper | EMAP_RGBNIR (DS-44) | SVM | 82.64 |
| This paper | EMAP_RGBNIR_LiDAR (DS-55) | JSR | 86.86 |
| This paper | EMAP_RGBNIR_LiDAR (DS-55) | SVM | 86.00 |
| [23] | Hyperspectral data; EMAP augmentation applied to the hyperspectral data (Xh + EMAP(Xh)) | MLRsub | 84.40 |
| [23] | Hyperspectral data; additional bands from LiDAR data (Xh + AP(XL)) | MLRsub | 87.91 |
| [23] | Hyperspectral data; EMAP augmentation applied to the hyperspectral data; additional bands from LiDAR data (Xh + AP(XL) + EMAP(Xh)) | MLRsub | 90.65 |
| [24] | Hyperspectral data | SVM | 80.72 |
| [24] | Morphological profile of hyperspectral and LiDAR data (MPSHSLi) | SVM | 86.39 |
| [24] | Generalized graph-based fusion features from hyperspectral and LiDAR data (GGF) | SVM | 94 |
Table 11. Comparison of SVM classification results using narrow bands (N, taken directly from the HS data) and wide bands (W, RGB and NIR bands based on the spectral response). Bold numbers indicate the better value when comparing the same metric with the same number of bands.

| | N DS-4 | N DS-5 | N DS-44 | N DS-55 | W DS-4 | W DS-5 | W DS-44 | W DS-55 |
|---|---|---|---|---|---|---|---|---|
| OA (%) | 69.99 | **75.32** | 81.31 | 85.72 | **70.43** | 74.62 | **82.64** | **86.00** |
| AA (%) | **70.97** | **73.48** | 84.37 | **87.36** | 70.74 | 73.12 | **85.61** | 86.48 |
| κ | 0.677 | 0.733 | 0.799 | 0.846 | **0.704** | **0.750** | **0.812** | **0.859** |
| 1-Healthy grass | **82.72** | 82.43 | **82.43** | 83.10 | 82.62 | **82.53** | 81.96 | **97.33** |
| 2-Stressed grass | **83.65** | 82.14 | 80.55 | 80.92 | 83.08 | **82.24** | **81.30** | **99.89** |
| 3-Synthetic grass | 99.60 | 99.60 | 100.00 | 100.00 | 99.60 | 99.60 | 100.00 | 100.00 |
| 4-Trees | 95.36 | 99.15 | **92.61** | 96.78 | **98.58** | **99.24** | 89.39 | **99.49** |
| 5-Soil | **96.97** | **97.35** | **98.86** | 97.73 | 96.78 | 97.25 | 97.82 | **98.01** |
| 6-Water | 93.01 | **95.10** | 95.10 | **95.10** | 93.01 | 91.61 | 95.10 | 22.70 |
| 7-Residential | 78.45 | 81.62 | 80.69 | **84.42** | **82.18** | **83.21** | **89.55** | 69.20 |
| 8-Commercial | 16.81 | 51.28 | **44.92** | 71.60 | **18.23** | **52.23** | 42.26 | **85.14** |
| 9-Road | 51.46 | 63.46 | **81.78** | 89.99 | **55.90** | **73.09** | 77.62 | **92.09** |
| 10-Highway | **54.15** | **48.75** | 65.06 | 65.35 | 53.38 | 48.46 | **68.44** | **87.75** |
| 11-Railway | **59.30** | 77.42 | 73.24 | **88.33** | 56.45 | **77.99** | **92.79** | 75.69 |
| 12-Parking lot 1 | **55.04** | **48.41** | **90.87** | 82.61 | 50.62 | 32.37 | 85.21 | **91.00** |
| 13-Parking lot 2 | 29.82 | 36.84 | **74.74** | 78.60 | **33.33** | **37.54** | 74.39 | **82.29** |
| 14-Tennis Court | 97.98 | 100.00 | 100.00 | **100.00** | **98.38** | 100.00 | 100.00 | 96.86 |
| 15-Running Track | **97.25** | **98.73** | 100.00 | **100.00** | 97.04 | 97.89 | 100.00 | 99.79 |