Article

Hyperspectral Image Classification Based on 3D Asymmetric Inception Network with Data Fusion Transfer Learning

1 Key Laboratory of Modern Teaching Technology, Ministry of Education, Shaanxi Normal University, Xi’an 710062, China
2 ByteDance, Singapore 148957, Singapore
3 Intellifusion, Shenzhen 518000, China
* Author to whom correspondence should be addressed.
These authors contributed equally to this work.
Remote Sens. 2022, 14(7), 1711; https://doi.org/10.3390/rs14071711
Submission received: 21 February 2022 / Revised: 25 March 2022 / Accepted: 31 March 2022 / Published: 1 April 2022

Abstract

Hyperspectral image (HSI) classification has seen exceptional progress in recent years, much of it driven by advances in convolutional neural networks (CNNs). Unlike RGB images, HSIs are captured by various remote sensors with different spectral configurations. Moreover, each HSI dataset contains only very limited training samples, so deep CNNs are prone to overfitting. In this paper, we first propose a 3D asymmetric inception network, AINet, to overcome the overfitting problem. To emphasize spectral signatures over the spatial contexts of HSI data, each 3D convolution layer of AINet is replaced with two asymmetric inception units, i.e., a space inception unit and a spectrum inception unit, to convey and classify the features effectively. In addition, we exploit a data-fusion transfer learning strategy to improve model initialization and classification performance. Extensive experiments show that the proposed approach outperforms state-of-the-art methods on several HSI benchmarks, including Pavia University, Indian Pines and Kennedy Space Center (KSC).

1. Introduction

Hyperspectral image (HSI) classification is an important research problem in remote sensing (RS) and has a broad range of applications. Differing from RGB images, hyperspectral data is composed of spectral signatures and spatial contexts. On the one hand, it provides abundant spectral–spatial information for “over-band” classification. On the other hand, it raises challenges in extracting high-dimensional features [1,2,3,4].
The early HSI classification methods mainly focused on selecting or extracting spectral features, owing to the abundant spectral information carried by the hundreds of contiguous spectral bands. Feature selection (also known as band selection) methods try to find the most representative features (bands) in raw HSI data while preserving their physical meaning. For instance, Wang et al. [5] used manifold ranking as an unsupervised feature-selection method to choose the most representative bands for training the subsequent classifiers. Yin et al. [6] introduced a computational evolutionary strategy into supervised band selection, where candidate band combinations are evaluated through an affinity function driven by hyperspectral classification accuracy. Feature extraction approaches usually learn representative features through linear or nonlinear transformation. For instance, Huang et al. [7] extended the k-nearest neighbor technique and proposed a feature extraction method called double nearest proportion feature extraction to reduce the dimensionality. Based on nonparametric weighted feature extraction (NWFE), a linear transformation method, Kuo et al. [8] proposed kernel-based NWFE, which combines the advantages of linear and nonlinear transformations.
These spectrum-based approaches select or extract the features directly from the pixel-wise spectra while ignoring the intrinsic geographical structure in HSI data. Recent studies have shown that the combined use of spectral and spatial information can enhance the ability to represent the extracted features. There are two categories of methods to extract spectral–spatial information from HSI data. The first one extracts the spectral signatures and the spatial contexts separately, and then combines them to perform pixel-wise classification [9]. The second one treats the raw HSI data as a whole and extracts joint spatial–spectral features directly by using a 3D feature extractor. For example, spectral–spatial integrated features were extracted at different frequencies and scales using a series of 3D discrete wavelet filters [10], 3D Gabor wavelets [11], or 3D scattering wavelets [12]. Since hyperspectral data is typically presented in the format of 3D cubes, the second category of methods can result in a large number of discriminative features, which can effectively improve the classification performance.
In the above traditional approaches, handcrafted features are typically used, and they are expected to be discriminative and representative of the characteristics of HSI data. Because the extracted features are based on domain knowledge, some valuable details may be lost. For feature classification, support vector machines (SVMs) [13] are often employed because they handle high-dimensional vectors robustly, but their representation capacity is still limited.
Since 2012, with the emergence of deep learning, the performance of many vision tasks has been dramatically improved, including but not limited to object detection [14], segmentation [15] and tracking [16]. In recent years, deep-learning-based methods have been introduced into the field of HSI classification. In particular, supervised convolutional neural networks (CNNs) and their extensions, including 1D-CNN [17,18], 2D-CNN [19,20], 3D-CNN [18,21], and ResNet [22,23], have been successfully employed to extract deep spectral–spatial features and have demonstrated state-of-the-art performance. Usually, a CNN consists of at least three convolutional layers for extracting both low-level and high-level features. Moreover, instead of separating feature extraction and feature classification into two steps, the CNN structure integrates them into one framework trained through back-propagation [24]. Since the extracted features directly contribute to the final classification performance, deep learning methods achieve better performance than traditional methods.
However, two constraints limit the state-of-the-art deep CNNs from being used directly for HSI classification. The first is the different data format of RGB images and HSI. Specifically, RGB images can be well represented by a 2D CNN model, while a 3D CNN is preferable for preserving the abundant information contained in the spectral signatures and spatial contexts of HSI. However, the number of parameters grows dramatically when the convolution moves from 2D to 3D [25]. A 3D CNN has far more parameters than its 2D counterpart due to the additional kernel dimension, making it more difficult and expensive to train. The second constraint is the limited-training-sample dilemma. Generally, the feature representation ability of deep learning models strongly depends on a large number of training samples. However, manual annotation of hyperspectral data is difficult, which results in a lack of labeled pixels. Without sufficient training samples, a deep model with a powerful representation capacity may suffer from overfitting. Therefore, most of the existing CNN-based HSI classification methods use small-scale models with relatively shallow depth (generally no more than 10 layers) at the cost of a decrease in performance. However, leveraging large-scale networks is still desirable to jointly exploit the underlying nonlinear spectral and spatial structures of hyperspectral data residing in a high-dimensional feature space [26].
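As a rough illustration of this parameter growth (with channel counts assumed here purely for illustration), a 2D convolution layer with 3 × 3 kernels, 64 input channels and 64 output channels has 3 × 3 × 64 × 64 = 36,864 weights, whereas the corresponding 3D layer with 3 × 3 × 3 kernels has 3 × 3 × 3 × 64 × 64 = 110,592, i.e., three times as many per layer, and the gap widens further as the spectral kernel size increases.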
To address these inherent problems, in this paper, we propose a 3D asymmetric inception network (AINet) and a data-fusion transfer learning strategy for HSI classification, and our contributions can be summarized as four points:
  • A novel deep light-weight 3D CNN with an asymmetric structure, AINet, is proposed to handle HSI classification; it can be trained as a very deep network on the small available HSI datasets and thus fully exploits the potential of CNNs.
  • Considering the properties of hyperspectral images, in which spectral signatures are emphasized over spatial contexts, an asymmetric inception unit (AI unit) is proposed. To convey and classify the features effectively, we replace the 3D convolution layer with two asymmetric inception units, namely the space inception unit and the spectrum inception unit.
  • Data fusion transfer learning is exploited to improve model initialization. It increases training efficiency and classification performance while compensating for data limitations.
  • The proposed method was tested on three public HSI datasets. The experimental results show that it achieves better performance than other state-of-the-art deep-learning-based methods.

2. Related Works

2.1. Convolutional Neural Network Architectures

The convolutional neural network (CNN) is one of the most popular deep learning models, and many CNN-based HSI classification methods have been presented in recent years. Three typical supervised CNN architectures, referred to as 1D, 2D, and 3D CNNs, have been investigated for HSI classification. In 1D CNN-based HSI classification approaches, the kernels of a convolution layer convolve the input samples along the spectral dimension [17,18], and thus the spatial information is lost. The conventional way to obtain deep spectral–spatial representations with 2D CNNs is to train a model on patch-based samples, expanding the input data with more spatial information [27]. Meanwhile, HSI data are usually compressed via a dimension-reduction algorithm, such as principal component analysis (PCA), and then convolved with 2D kernels. For instance, Makantasis et al. [28] exploited randomized PCA to condense the spectral dimensionality of the entire HSI first, and then applied a 2D CNN to extract deep features from the compressed HSI. Furthermore, two-stream CNN models have been proposed to extract the spatial and spectral features separately. For instance, Zhang et al. [27] proposed a dual-channel CNN model in which spectral features and spatial features are extracted via a 1D CNN and a 2D CNN respectively, and a softmax regression classifier then combines the two kinds of features to produce the final classification results. As the spatial and spectral features are extracted separately, such models may not fully exploit the joint spatial–spectral correlation information, which can be important for classification.
Since hyperspectral imagery is naturally a 3D data cube, it is reasonable to extract deep spectral–spatial features through 3D CNNs. The first 3D CNN for HSI classification was proposed by Chen et al. [18] in 2016, using L2-norm regularization and dropout. However, this is a shallow network, and it still suffers from overfitting when annotated data is scarce. Similarly, a simpler 3D CNN structure using input cubes of HSIs with a smaller spatial size was presented in [21]. Later, Zhong et al. [22] proposed a supervised spectral–spatial residual network (SSRN) with consecutive spectral and spatial residual blocks to extract spectral and spatial features from HSI. Very recently, Fang et al. [29] proposed a 3D dense convolutional network with a spectral-wise attention mechanism (MSDN-SA) for HSI classification, in which 3D dilated convolutions capture spectral–spatial features at different scales and all 3D feature maps are densely connected to each other. The 3D CNN models can directly process raw HSI to generate classification maps. However, the classification accuracy decreases as the network becomes deeper, mainly because of the very limited HSI data available for training.

2.2. Efficient Deep Learning Models

Since AlexNet was proposed in 2012, a number of efficient deep learning models have been introduced. Three of them, GoogLeNet [30], ResNet [23], and MobileNet [31], are related to the model proposed below. They also illustrate the development trends of deep learning: increasing depth with decreasing computation. GoogLeNet is the most basic model of the so-called Inception series, ResNet is famous for its extreme depth, and MobileNet is well known for its low computation cost. The three models and their applications in HSI classification are described in detail below.
GoogLeNet consists of multiple inception modules, each of which contains four different convolution paths, and it is the most basic model of the Inception series [30]. Building on GoogLeNet, successive versions up to Inception-V4 were proposed [32,33,34,35]. The main advantage of an inception network is the ability to use kernels of multiple sizes in each branch, which allows the generation of more flexible feature maps [36]. Hidalgo et al. [37] proposed a data classification model that uses extended attribute profiles and an inception network to generate deep spatial–spectral features. Recently, a novel attention inception module was introduced to extract features dynamically from multiresolution convolutional filters [36].
ResNet employs shortcut connections to overcome the degradation problem, in which accuracy becomes saturated and then degrades rapidly as the network depth increases. In addition, in order to reduce the time complexity, He et al. [23] proposed a novel structure named the “bottleneck”. Based on shortcut connections and the newly introduced bottleneck layers, He et al. [23] increased the depth of the network to more than 1000 layers and obtained excellent performance in image classification. Based on the shortcut connection, a supervised spectral–spatial residual network (SSRN) was proposed to mitigate the decreasing-accuracy phenomenon and improve HSI classification accuracy [22].
MobileNet employs depthwise separable convolutions to reduce the computation in the network and applies pointwise convolutions to combine the features of separate channels. Based on MobileNet-V1, MobileNet-V2 was also proposed to employ inverted residuals and linear bottlenecks, leading to better performance [31,38]. MobileNetV3 [39] is tuned through a combination of hardware-aware network architecture search (NAS) complemented by the NetAdapt algorithm and then enhanced by novel architecture advances. Some researchers have applied depthwise separable convolutions and pointwise convolutions to convolutional neural network architecture to improve HSI classification performance [40,41,42].
Our proposed asymmetric residual network not only benefits from the much deeper and light-weight network design, but also from the asymmetric inception unit that we tailored for the HSI dataset. Specifically, we propose an asymmetric inception unit (AI unit), which consists of the space inception unit and the spectrum inception unit, to convey and classify the features effectively.

2.3. Transfer Learning

Compared with the millions of annotated images used in vision tasks, the annotated data in existing HSI datasets is insufficient. Moreover, the class imbalance within HSI datasets and the differences among datasets captured by different sensors also make it challenging to train neural networks. In the computer vision community, one common solution to this problem is transfer learning. Transfer learning focuses on storing knowledge gained while solving one problem and applying it to a different but related problem [43]. It is defined as the ability of a system to recognize and apply knowledge and skills learned in previous tasks to a novel task [44]. The concept behind transfer learning is that, in deep neural networks, the bottom-level and middle-level features take up the majority of the parameters stored in a CNN model and usually capture the textures and edges of objects. Those low-level features learned for simple tasks such as detection can then be reused for more complex tasks such as segmentation and tracking. A common strategy for transfer learning is to pretrain a model on one dataset where labeled samples are sufficient, such as ImageNet, and then transfer the pretrained model to the target dataset for fine-tuning.
Transfer learning offers two benefits: a better initialization of the model and a reduced training time for the network. It is especially beneficial for datasets with very limited training samples when the model is a deep CNN, which usually has a large number of parameters. Since the structure of HSI data is complex and the number of training samples is limited, transfer learning plays an instrumental role in HSI classification. In [45], transfer learning was adopted, but the source and target datasets were required to be gathered by the same sensor. Later, Lin et al. [46] used canonical correlation analysis (CCA) to transfer knowledge between two SAEs that were trained on source data and target data independently. Furthermore, multisource and heterogeneous transfer learning strategies have been investigated for HSI classification to alleviate the problem of small labeled samples [47,48]. In [49], Zhang et al. proposed a cross-modal transfer learning strategy that transfers models between datasets of different data modalities exhibiting different data characteristics, namely, from the natural RGB image modality to the HSI modality. It has been shown that the most significant benefit of transfer learning is the improvement of model initialization, which is very important for training a model with limited samples.
Our proposed network adopts a data fusion transfer learning strategy. Concretely, the designed model is pretrained on HSI datasets captured by different sensors with 3D pyramid pooling and then fine-tuned on the target datasets to achieve a better performance.

3. Methodology

Among the deep learning models used in the HSI literature, 3D-CNN performs better than 2D-CNN for HSI classification because HSI data are natively 3D. Different objects in HSI generally have different spectral structures, so convolving along the spectral dimension is critical. In addition, some different objects have similar spectral structures; for these objects, convolving along the spatial dimensions is also beneficial, as it captures important spatial variations observed in high-resolution data [27,28]. For 2D-CNN-based methods, without spectral dimension reduction, the number of parameters becomes extremely large due to the hundreds of bands. However, if dimension reduction is conducted, it may destroy the spectral structure information that is critical for discriminating different objects.
Generally speaking, 3D-CNN-based approaches perform better than 2D-CNN-based approaches [18,22]. However, the existing 3D-CNN-based approaches still have two deficiencies: (1) compared with 2D convolutions, 3D convolutions have more parameters and 3D-CNN models are computation-intensive; (2) limited by the training samples in HSI datasets, 3D-CNN models employed in HSI classification almost always consist of fewer than five convolution layers. Yet a large number of experiments in computer vision have proved that network depth is critically important for improving the performance of image-processing tasks [23,30].
In this section, we first introduce the proposed AINet and then describe the proposed data-fusion transfer learning strategy.

3.1. AINet for HSI Classification

Network Structure: Figure 1 shows the overall framework of the proposed AINet for HSI classification. In order to utilize the spectral and spatial information contained in HSI, we extract L × S × S-sized cubes from raw HSI data as samples, where L and S denote the number of spectral bands and the spatial size, respectively (following [18], we set S to 27 in this paper). The samples are fed into AINet to extract deep spectral–spatial features, and finally the classification results are calculated. Inspired by the design of ResNet [23], AINet employs a similar basic structure and introduces some key modifications tailored to HSI data. AINet starts with a 3D convolution layer, then stacks six AI units of increasing width, and ends with one 3D spatial pyramid pooling layer and one fully connected layer. Specifically, the channels of the six AI units are 32, 64, 64, 128, 128 and 256, respectively. In order to reduce the dimension of the features, four max-pooling layers with kernel [3, 3, 3] and stride [2, 2, 2] are added among the six AI units.
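To make this layout concrete, a minimal PyTorch-style sketch of the backbone is given below. It is an illustrative reconstruction rather than the authors' released code: `ai_unit` is a placeholder for the asymmetric inception unit of Section 3.2, and the pooling padding and the exact placement of the four max-pooling layers are assumptions.

```python
import torch
import torch.nn as nn

class AINetSketch(nn.Module):
    """Illustrative AINet backbone: a 3D stem, six AI units of increasing width,
    3D pyramid pooling and a fully connected classifier."""
    def __init__(self, ai_unit, num_classes, widths=(32, 64, 64, 128, 128, 256)):
        super().__init__()
        self.stem = nn.Conv3d(1, widths[0], kernel_size=3, padding=1)  # initial 3D convolution
        blocks, in_ch = [], widths[0]
        for i, out_ch in enumerate(widths):
            blocks.append(ai_unit(in_ch, out_ch))                      # AI unit (Section 3.2)
            if i < 4:                                                  # four max-pooling layers (padding assumed)
                blocks.append(nn.MaxPool3d(kernel_size=3, stride=2, padding=1))
            in_ch = out_ch
        self.features = nn.Sequential(*blocks)
        # Three-level 3D pyramid pooling: 1, 2 and 3 spectral bins, spatial size collapsed to 1 x 1.
        self.pyramid = nn.ModuleList([nn.AdaptiveMaxPool3d((k, 1, 1)) for k in (1, 2, 3)])
        self.fc = nn.Linear(widths[-1] * 6, num_classes)

    def forward(self, x):                                              # x: (batch, 1, L, S, S)
        x = self.features(self.stem(x))
        pooled = [p(x).flatten(1) for p in self.pyramid]               # one vector per pooling level
        return torch.log_softmax(self.fc(torch.cat(pooled, dim=1)), dim=1)
```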
3D Pyramid Pooling: Before the fully connected layer, a 3D pyramid pooling layer maps features of different sizes to vectors of fixed dimension. Different HSI datasets are usually captured by different sensors with various numbers of spectral bands; for example, the Pavia University dataset has 103 bands and the Indian Pines dataset contains 200 bands. With the 3D pyramid pooling layer, the same network can be applied to different HSI datasets without any modification. In this paper, the 3D spatial pyramid pooling layer is composed of three-level pooling (1 × 1 × 1, 2 × 1 × 1, 3 × 1 × 1). As the last AI unit has 256 channels, the output of the 3D pyramid pooling layer is a 256 × 6 × 1 × 1-sized cube.
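The key property is that the pooled vector has a fixed length regardless of the number of input bands, which the following small snippet illustrates (the intermediate feature-map shapes are assumed for illustration only):

```python
import torch
import torch.nn as nn

# Three-level 3D pyramid pooling: 1, 2 and 3 spectral bins, spatial size collapsed to 1 x 1.
pyramid = nn.ModuleList([nn.AdaptiveMaxPool3d((k, 1, 1)) for k in (1, 2, 3)])

for bands in (103, 200, 176):                        # Pavia University, Indian Pines, KSC
    feat = torch.randn(1, 256, bands // 16, 2, 2)    # assumed feature map after the last AI unit
    out = torch.cat([p(feat).flatten(1) for p in pyramid], dim=1)
    print(bands, tuple(out.shape))                   # always (1, 256 * (1 + 2 + 3)) = (1, 1536)
```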
Training and Loss: We employ log softmax [50] as the activation function of the fully connected layer. During training, we take the negative log likelihood as the loss function and add an L2 regularization term with weight 1 × 10−5 to alleviate over-fitting. The optimizer is stochastic gradient descent (SGD) with momentum [51]. The same setting is adopted for all experiments: momentum, weight decay, batch size, epochs and learning rate are 0.9, 1 × 10−5, 20, 60 and 0.01, respectively. In the last 12 epochs, the learning rate is decreased to 0.001.
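A minimal training-loop sketch with these hyperparameters is shown below. The stand-in linear model and random tensors are placeholders so the snippet runs on its own; in practice they would be replaced by AINet and real L × S × S HSI cubes.

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

# Stand-in model and toy data shaped like Pavia University cubes (103 bands, 27 x 27 patches).
model = nn.Sequential(nn.Flatten(), nn.Linear(103 * 27 * 27, 9), nn.LogSoftmax(dim=1))
cubes = torch.randn(200, 1, 103, 27, 27)
labels = torch.randint(0, 9, (200,))
train_loader = DataLoader(TensorDataset(cubes, labels), batch_size=20, shuffle=True)

criterion = nn.NLLLoss()                              # negative log likelihood on log-softmax outputs
optimizer = torch.optim.SGD(model.parameters(), lr=0.01, momentum=0.9,
                            weight_decay=1e-5)        # L2 regularization term with weight 1e-5

for epoch in range(60):
    if epoch == 48:                                   # learning rate drops to 0.001 for the last 12 epochs
        for group in optimizer.param_groups:
            group["lr"] = 0.001
    for x, y in train_loader:
        optimizer.zero_grad()
        loss = criterion(model(x), y)
        loss.backward()
        optimizer.step()
```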

3.2. AI Unit

Because 3D convolutions can learn spectral and spatial information jointly from raw HSI data, 3D-CNN-based methods achieve the most advanced performance in HSI classification. However, compared with 2D convolutions, 3D convolutions are prone to overfitting and are computation-intensive. In order to address these problems, we propose an asymmetric inception unit (AI unit), which consists of a space inception unit and a spectrum inception unit. The structure of the AI unit is illustrated in Figure 2.
In the space inception unit, there are three convolution paths. Path one has one pointwise convolution layer only, path two consists of one pointwise convolution layer and one 2D convolution layer with 1 × 3 × 3-sized kernels, and path three has one pointwise convolution layer and two 2D convolution layers. The outputs of the paths are concatenated along the channel dimension and added to the output of the shortcut connection. Inspired by the Inception networks [35], we set the three paths to different widths with a split ratio of 1:2:1. In the last two paths, the width of the pointwise convolution layer is half that of the other convolution layers. For instance, in the AI unit with 32 channels, the width of the first path is 8; for the second path, the widths of the pointwise convolution layer and the 1 × 3 × 3-sized convolution layer are 8 and 16, respectively; and the widths of the three layers of the last path are 4, 8 and 8. The structure of the spectrum inception unit is similar to that of the space inception unit, except that the 1 × 3 × 3-sized 2D convolution layers are replaced with 3 × 1 × 1-sized 1D convolution layers.
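Following this description, a hedged PyTorch sketch of the space inception unit is given below; the projection shortcut, the padding, and the placement of the activation are assumptions not spelled out in the text.

```python
import torch
import torch.nn as nn

class SpaceInceptionUnit(nn.Module):
    """Sketch of the space inception unit: three paths with a 1:2:1 width split,
    concatenated along the channel dimension and added to a shortcut connection."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        w1, w2, w3 = out_ch // 4, out_ch // 2, out_ch // 4     # 1:2:1 split of the output width
        pw = lambda i, o: nn.Conv3d(i, o, kernel_size=1)       # pointwise (1 x 1 x 1) convolution
        sp = lambda i, o: nn.Conv3d(i, o, kernel_size=(1, 3, 3), padding=(0, 1, 1))  # spatial conv
        self.path1 = pw(in_ch, w1)
        self.path2 = nn.Sequential(pw(in_ch, w2 // 2), sp(w2 // 2, w2))
        self.path3 = nn.Sequential(pw(in_ch, w3 // 2), sp(w3 // 2, w3), sp(w3, w3))
        self.shortcut = nn.Identity() if in_ch == out_ch else pw(in_ch, out_ch)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        y = torch.cat([self.path1(x), self.path2(x), self.path3(x)], dim=1)
        return self.relu(y + self.shortcut(x))
```

The spectrum inception unit would follow the same template, with (3, 1, 1) kernels and padding (1, 0, 0) replacing the (1, 3, 3) spatial convolutions.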
In HSI datasets, the spectral resolution is much higher than the spatial resolution, and the spectral information is much richer. Therefore, in extracting spectral–spatial features, we pay more attention to spectral feature extraction. The proposed AINet has six AI units; the four units in the middle can be divided into two groups, each stacking two units of equal width. Here, instead of stacking two identical AI units in each group, we stack one space inception unit and two spectrum inception units. This differs from popular networks such as ResNet [23] and MobileNet [31], which build the whole model by stacking identical units. Figure 3 shows the difference between one AI unit and two AI units.

3.3. Transfer Learning with Data Fusion

In RGB image classification, pretraining networks on the ImageNet dataset, which has over 14 million hand-annotated images in over 20,000 categories, is common, and it is very useful for improving performance and overcoming the problem of limited training samples. The diversity of the dataset used for pretraining is a key factor in transfer learning. For example, pretraining a model on a dataset with a million images and a thousand categories generally achieves better results than pretraining the same model on a dataset with 10 million images and only 10 categories. We believe that pretraining a model with more diverse samples may result in better generalization ability.
To further improve the performance of HSI classification, we propose a data-fusion transfer learning strategy. As shown in Figure 4, the strategy is composed of data-fusion pretraining and fine-tuning: (1) data-fusion pretraining—during pretraining, the proposed network is trained on two different HSI datasets to improve the diversity of samples and obtain a robust initialized model; (2) fine-tuning—after the pretrained model is acquired, the new model for the target HSI dataset is initialized with the parameters of the pretrained model, while the fully connected layers are randomly initialized with a Gaussian distribution.
During pretraining, the proposed network is trained on two source HSI datasets. Here, the Pavia Center and Salinas datasets are used as the source datasets since, among the public HSI datasets, these two have the largest numbers of labeled samples. To be more specific, the model is initialized with a Gaussian distribution and pretrained on one source HSI dataset for N epochs; then the feature extraction part is retained and the classifier is reinitialized with a Gaussian distribution. The feature extraction part and the classifier are then pretrained on the other source HSI dataset for N/2 epochs with different learning rates. In this paper, N is set to 10, and the learning rate used for the feature extraction part is one tenth of the base learning rate used on the second source HSI dataset.
After pretraining the model on the two source HSI datasets, we transfer the entire model except the classifier to initialize the fine-tuning model for the target HSI dataset. The transferred part and the new classifier are then fine-tuned at the same learning rate as that used for the second source HSI dataset.
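The full procedure can be summarized in the pseudocode-level PyTorch sketch below. The `features`/`fc` attribute names follow the earlier backbone sketch, and the data loaders, the `train_epochs` helper and the Gaussian standard deviation are assumptions standing in for the components described above.

```python
import torch.nn as nn
from torch.optim import SGD

def gaussian_init(module, std=0.01):
    """(Re)initialize convolution and linear layers with a Gaussian; std is an assumption."""
    if isinstance(module, (nn.Conv3d, nn.Linear)):
        nn.init.normal_(module.weight, mean=0.0, std=std)
        if module.bias is not None:
            nn.init.zeros_(module.bias)

def run_stage(model, loader, train_epochs, epochs, lr_features, lr_classifier):
    """One (pre)training stage with separate learning rates for features and classifier;
    train_epochs(model, loader, optimizer, epochs) is the loop from Section 3.1."""
    optimizer = SGD([
        {"params": model.features.parameters(), "lr": lr_features},
        {"params": model.fc.parameters(), "lr": lr_classifier},
    ], momentum=0.9, weight_decay=1e-5)
    train_epochs(model, loader, optimizer, epochs)

def data_fusion_transfer(model, source1, source2, target, n_classes, train_epochs,
                         N=10, base_lr=0.01):
    """Two-stage pretraining on the source loaders followed by fine-tuning on the target loader."""
    # Stage 1: Gaussian initialization, pretrain on the first source dataset for N epochs.
    model.apply(gaussian_init)
    run_stage(model, source1, train_epochs, N, base_lr, base_lr)
    # Stage 2: keep the feature extractor, reinitialize the classifier, and pretrain on the second
    # source dataset for N/2 epochs with the feature extractor at one tenth of the base rate.
    model.fc.apply(gaussian_init)
    run_stage(model, source2, train_epochs, N // 2, base_lr / 10, base_lr)
    # Fine-tuning: keep everything except the classifier, attach a new classifier for the target
    # dataset, and fine-tune all parameters at a single learning rate.
    model.fc = nn.Linear(model.fc.in_features, n_classes)
    run_stage(model, target, train_epochs, 60, base_lr, base_lr)
```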

4. Experiments

4.1. Datasets and Experiments Setting

In this paper, we compare the proposed AINet with a traditional approach and five CNN-based approaches for HSI classification on three public HSI datasets, including Pavia University, Indian Pines and KSC. In the transfer-learning experiment, the Pavia Center dataset and the Salinas dataset are employed as the source datasets. The false-color composite and ground truth of each dataset are shown in Figure 5. A brief introduction of each dataset is given in the following part and more information can be found on the website http://www.ehu.eus/ccwintco/index.php/Hyperspectral_Remote_Sensing_Scenes (accessed on 20 February 2022). The code of the proposed algorithm can be found at: https://github.com/UniLauX/AINet (accessed on 20 February 2022).
The Pavia University and Pavia Center datasets were captured by the Reflective Optics System Imaging Spectrometer (ROSIS) sensor in 2001. After the noisiest bands were removed, Pavia University has 103 bands and Pavia Center has 102 bands. Both datasets are divided into 9 classes.
Indian Pines and Salinas datasets were acquired by the Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) sensor in 1992. After correction, each dataset has 200 bands and contains 16 classes.
KSC was acquired by the AVIRIS sensor in 1996, and after removing water absorption and low SNR bands, 176 bands were used for analysis. For classification purposes, 13 classes are defined.
For the three target HSI datasets, samples are divided into training samples and test samples. For comparison purposes, we follow [18] to set the sample distribution for the Indian Pines and KSC datasets. For the Pavia University dataset, 200 random samples are taken from each class as training samples. Table 1, Table 2 and Table 3 provide the split details. Descriptions of the two source HSI datasets are given in Table 4 and Table 5.
In the transfer learning experiment, we randomly extracted 200 samples from each class of the Pavia Center dataset and 100 samples from each class of the Salinas dataset as test samples, and took the rest as training samples.
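A per-class random split of this kind can be sketched with NumPy as follows (an illustrative helper, not the authors' code; the label array layout is assumed):

```python
import numpy as np

def per_class_split(labels, n_per_class, seed=0):
    """Randomly draw n_per_class labeled pixels of every class; return (selected, remaining) indices.
    labels is a 1D array of class ids with unlabeled/background pixels already excluded."""
    rng = np.random.default_rng(seed)
    selected, remaining = [], []
    for c in np.unique(labels):
        idx = rng.permutation(np.flatnonzero(labels == c))
        selected.append(idx[:n_per_class])
        remaining.append(idx[n_per_class:])
    return np.concatenate(selected), np.concatenate(remaining)

# For Pavia University the selected indices serve as training samples (200 per class); for the
# source datasets they serve as test samples (200 or 100 per class) and the rest as training.
```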

4.2. Performance Comparison of Different Network Structures

In this section, we compare the proposed AINet with a traditional method and five CNN-based HSI classification methods, namely SVM-3DG [52], 1D-CNN, 2D-CNN, 3D-CNN [18], MSDN-SA [29] and SSRN [22]. Each experiment is run five times with the same settings to obtain the average performance. The experimental results are listed in Table 6, Table 7 and Table 8, which report the number of training samples, the number of parameters in the convolution layers, the depth of the CNN models, the overall accuracy (OA), the average accuracy (AA) and the kappa coefficient (K). OA is the ratio of correctly classified test samples to the total number of test samples, AA is the mean of the per-class accuracies, and K is a coefficient that measures inter-rater agreement for qualitative items [53]. The classification maps are shown in Figure 6, Figure 7 and Figure 8. From Table 6, Table 7 and Table 8, we can see that the proposed AINet achieves the highest classification performance on all of the datasets. For instance, on the Indian Pines dataset, the OA of AINet is 99.14%, which is 9.15% better than that of 2D-CNN, 1.58% better than that of 3D-CNN and 0.74% better than that of SSRN. The experiments indicate that all of the 3D-CNN-based HSI classification methods are superior to 2D-CNN. From 3D-CNN, MSDN-SA and SSRN to AINet, the depth of the models increases and the classification accuracy keeps improving; in particular, the depths of the four models are 4, 7, 12 and 32, respectively. Although AINet is much deeper than SSRN, it has only slightly more parameters than SSRN and far fewer than 3D-CNN.
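For reference, OA, AA and K follow standard definitions and can be computed from a confusion matrix as in the sketch below (not code from the paper):

```python
import numpy as np

def classification_metrics(conf):
    """OA, AA and Cohen's kappa from a confusion matrix (rows: true class, columns: predicted)."""
    conf = np.asarray(conf, dtype=float)
    n = conf.sum()
    oa = np.trace(conf) / n                                    # overall accuracy
    per_class = np.diag(conf) / conf.sum(axis=1)               # accuracy of each class
    aa = per_class.mean()                                      # average accuracy
    pe = (conf.sum(axis=0) * conf.sum(axis=1)).sum() / n ** 2  # expected chance agreement
    kappa = (oa - pe) / (1 - pe)
    return oa, aa, kappa
```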

4.3. Classification Results with Spatially Disjoint Samples

Previous research [4,54,55] has pointed out that the random-sampling strategy has a significant impact on the reliability and quality of the results, since it may make it easier for the networks to classify the test samples during inference (the network has already processed them, to some extent, during training). Compared to spatially disjoint samples, randomly selected samples may cause significant spatial overlap between the training and test samples, which can lead to an overestimation of classification performance and artificially optimistic results. To obtain a more realistic and accurate evaluation of the models, in this subsection, a sampling strategy based on selecting spatially separated samples is used. The classification results of all methods compared in Section 4.2 under the two sampling strategies are summarized in Table 9, Table 10 and Table 11.
As can be seen, 2D-CNN, 3D-CNN, MSDN-SA and SSRN suffer an accuracy deterioration, and the performance of 2D-CNN and 3D-CNN in particular declines drastically. As the spatial resolution of the Indian Pines dataset is lower than that of the other two datasets, the 2D-CNN and 3D-CNN algorithms, which rely more on spatial information, decline significantly on this dataset. Although AINet also experiences performance degradation, it still achieves the highest OA, AA and K.

4.4. Results of Transfer Learning

In this section, we combine the proposed AINet with data-fusion-based transfer learning to further improve the classification performance. In [45], the authors adopted transfer learning in their framework but required the data used for pretraining to be collected by the same sensor as the target data. In contrast, we impose no restrictions on the datasets used for pretraining, which makes our approach more widely applicable than previous works.
Here, we employ five HSI datasets in total. Three datasets, Pavia University, Indian Pines and KSC, are used as target datasets, and two datasets, Pavia Center and Salinas, are used as source datasets. The source dataset Pavia Center and the target dataset Pavia University were collected by the same ROSIS sensor, so their spatial and spectral properties are similar. The source dataset Salinas and the target dataset Indian Pines were acquired by the same AVIRIS sensor, and their spatial and spectral resolutions are roughly identical. The last target dataset, KSC, was also collected by AVIRIS, but KSC has 176 bands, fewer than Salinas and Indian Pines. As a result, the basic attributes of KSC are rather different from those of Salinas and Indian Pines.
In the transfer-learning experiments, we evaluate four different transfer-learning strategies, named AINet+T1, AINet+T2, AINet+T3 and AINet+T4. In AINet+T1, we first pretrain the proposed model on the Pavia Center dataset, then transfer the pretrained model to the target datasets and fine-tune it. Similarly, in AINet+T2, we first pretrain the model on Salinas, then transfer and fine-tune it on the target datasets. Unlike AINet+T1 and AINet+T2, both AINet+T3 and AINet+T4 have two pretraining stages using different source datasets. In AINet+T3, we pretrain the model on the Pavia Center dataset in the first stage and on the Salinas dataset in the second stage; in AINet+T4, we invert the order in which the source datasets are used.
The experimental results of transfer learning are listed in Table 12 and shown in Figure 9 and Figure 10. For each target dataset, we randomly choose 15 and 30 samples from each class as the training samples and reserve the rest as test samples.

5. Discussion

5.1. Assessment of the Asymmetric Inception Unit

In order to evaluate the contribution of the asymmetric inception unit (AI unit) in the proposed framework, we replace the AI units in AINet with residual units to form the basic model. The details of the AI unit have been introduced previously (Section 3.2). In addition, as described in Section 3.2, instead of stacking two AI units of the same type, we stack one space inception unit and two spectrum inception units to form AI unit × 2. To compare the basic network model, AINet (AI unit × 1) and AINet (AI unit × 2), we apply them to the three target datasets, using the same number of training samples as in Section 4.2. Table 13 lists the experimental results. From Table 13, we can clearly see that the AI unit improves the classification results on all three datasets; the performance is boosted by a larger margin on the Indian Pines and KSC datasets than on the Pavia University dataset. These results jointly demonstrate the effectiveness of AINet, which provides the highest performance with respect to a range of criteria, including OA, AA and K. From the basic model to AINet (AI unit × 2), the performance increases step by step, which we attribute to the increasingly effective structure of the AI unit.

5.2. Assessment of the Data Fusion Transfer Learning

The experimental results of transfer learning are listed in Table 12 and shown in Figure 9 and Figure 10. From these results, we can see that the transfer-learning strategies are beneficial for improving the performance of AINet, especially when the available training samples are relatively few. When we extract 15 samples per class for training, the transfer-learning strategy AINet+T1 obtains OA gains of 3.64% for Pavia University, 0.37% for Indian Pines and 2.33% for KSC. The gain provided by transfer learning drops as the number of training samples increases. We conjecture that, with more training samples, the model can obtain more guidance directly from the target HSI dataset, so AINet works well even without transfer learning.
Compared with pretraining the model on a single source dataset, pretraining on multiple source datasets is more effective. As can be seen from Table 12, excellent performance is consistently achieved by AINet+T3 and AINet+T4, which fuse two different source datasets in the pretraining stage. For instance, on Pavia University, AINet+T4 improved the OA from 84.22% to 90.31% (an improvement of 6.09%), whereas AINet+T1 only improved the OA to 87.86%, 2.45 percentage points lower than that of AINet+T4. We conjecture that this is mainly because a model pretrained on multiple source datasets has better generalization ability than a model pretrained on a single source dataset. From Figure 10, we can see that, when the number of training samples is increased to 30 per class, pretraining the model on a single heterogeneous dataset (a dataset collected by a different sensor) may harm the performance, but pretraining on multiple source datasets still boosts it.

6. Conclusions

This paper proposes a 3D asymmetric inception network (AINet) for hyperspectral image classification. Firstly, compared to traditional 3D CNNs, AINet adopts a light-weight but much deeper architecture that can exploit the potential of deep learning to extract representative features while alleviating the problems caused by limited annotated data. Secondly, considering the properties of hyperspectral images, spectral signatures are emphasized over spatial contexts in the proposed AI unit. Furthermore, a data-fusion transfer learning strategy is adopted to improve the initialization of the model and the classification accuracy.
We conducted comparison experiments on three challenging public HSI datasets, comparing the proposed AINet with deep-learning-based HSI classification methods. The results demonstrate that AINet achieves competitive performance. Although AINet is much deeper than SSRN, it has only slightly more parameters than SSRN and far fewer than 3D-CNN. In fact, benefiting from the AI unit, AINet contains far fewer parameters and achieves higher performance than the basic network model. In addition, we performed experiments to verify the effectiveness of the proposed data-fusion transfer learning strategy; the results show that pretraining the model on multiple source datasets is more effective than pretraining on a single source dataset.
In the future, there are two topics we are keen to pursue: investigating the reduction in training time brought by transfer learning, and applying strategies to overcome the data imbalance in HSI classification.

Author Contributions

Conceptualization, B.F., Y.L. and H.Z.; data curation, B.F. and Y.L.; investigation, B.F. and Y.L.; methodology, B.F. and H.Z.; validation, B.F. and Y.L.; visualization, B.F.; writing—original draft, H.Z.; writing—review and editing, B.F. and Y.L.; supervision, H.Z. and J.H.; funding acquisition, B.F. and J.H. All authors have read and agreed to the published version of the manuscript.

Funding

This research was supported in part by the National Natural Science Foundation of China (62107027, 62177032) and China Postdoctoral Science Foundation (No. 2021M692006).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The data presented in this study are available in this article.

Acknowledgments

The authors would like to thank the anonymous reviewers for their constructive comments.

Conflicts of Interest

The authors declare no competing financial interests. The funding sponsors had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript; or in the decision to publish the results.

References

  1. Zhou, Y.; Wei, Y. Learning hierarchical spectral–spatial features for hyperspectral image classification. IEEE Trans. Cybern. 2015, 46, 1667–1678. [Google Scholar] [CrossRef] [PubMed]
  2. Luo, F.; Du, B.; Zhang, L.; Zhang, L.; Tao, D. Feature learning using spatial-spectral hypergraph discriminant analysis for hyperspectral image. IEEE Trans. Cybern. 2018, 49, 2406–2419. [Google Scholar] [CrossRef] [PubMed]
  3. Yuan, H.; Tang, Y.Y. Spectral–spatial shared linear regression for hyperspectral image classification. IEEE Trans. Cybern. 2016, 47, 934–945. [Google Scholar] [CrossRef] [PubMed]
  4. Paoletti, M.; Haut, J.; Plaza, J.; Plaza, A. Deep learning classifiers for hyperspectral imaging: A review. Isprs J. Photogramm. Remote Sens. 2019, 158, 279–317. [Google Scholar] [CrossRef]
  5. Wang, Q.; Lin, J.; Yuan, Y. Salient band selection for hyperspectral image classification via manifold ranking. IEEE Trans. Neural Netw. Learn. Syst. 2016, 27, 1279–1289. [Google Scholar] [CrossRef]
  6. Yin, J.; Wang, Y.; Hu, J. A new dimensionality reduction algorithm for hyperspectral image using evolutionary strategy. IEEE Trans. Ind. Inform. 2012, 8, 935–943. [Google Scholar] [CrossRef]
  7. Huang, H.Y.; Kuo, B.C. Double nearest proportion feature extraction for hyperspectral-image classification. IEEE Trans. Geosci. Remote Sens. 2010, 48, 4034–4046. [Google Scholar] [CrossRef]
  8. Kuo, B.C.; Li, C.H.; Yang, J.M. Kernel nonparametric weighted feature extraction for hyperspectral image classification. IEEE Trans. Geosci. Remote Sens. 2009, 47, 1139–1155. [Google Scholar]
  9. Benediktsson, J.A.; Palmason, J.A.; Sveinsson, J.R. Classification of hyperspectral data from urban areas based on extended morphological profiles. IEEE Trans. Geosci. Remote Sens. 2005, 43, 480–491. [Google Scholar] [CrossRef]
  10. Qian, Y.; Ye, M.; Zhou, J. Hyperspectral image classification based on structured sparse logistic regression and three-dimensional wavelet texture features. IEEE Trans. Geosci. Remote Sens. 2012, 51, 2276–2291. [Google Scholar] [CrossRef] [Green Version]
  11. Jia, S.; Shen, L.; Zhu, J.; Li, Q. A 3-D Gabor phase-based coding and matching framework for hyperspectral imagery classification. IEEE Trans. Cybern. 2018, 48, 1176–1188. [Google Scholar] [CrossRef]
  12. Tang, Y.Y.; Lu, Y.; Yuan, H. Hyperspectral image classification based on three-dimensional scattering wavelet transform. IEEE Trans. Geosci. Remote Sens. 2014, 53, 2467–2480. [Google Scholar] [CrossRef]
  13. Melgani, F.; Bruzzone, L. Classification of hyperspectral remote sensing images with support vector machines. IEEE Trans. Geosci. Remote Sens. 2004, 42, 1778–1790. [Google Scholar] [CrossRef] [Green Version]
  14. Liu, W.; Anguelov, D.; Erhan, D.; Szegedy, C.; Reed, S.; Fu, C.Y.; Berg, A.C. Ssd: Single shot multibox detector. In Proceedings of the European Conference on Computer Vision (ECCV), Amsterdam, The Netherlands, 11–14 October 2016; pp. 21–37. [Google Scholar]
  15. Long, J.; Shelhamer, E.; Darrell, T. Fully convolutional networks for semantic segmentation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Boston, MA, USA, 7–12 June 2015; pp. 3431–3440. [Google Scholar]
  16. Bertinetto, L.; Valmadre, J.; Henriques, J.F.; Vedaldi, A.; Torr, P.H. Fully-convolutional siamese networks for object tracking. In Proceedings of the European Conference on Computer Vision (ECCV), Amsterdam, The Netherlands, 11–14 October 2016; pp. 850–865. [Google Scholar]
  17. Zhang, H.; Li, Y. Spectral-spatial classification of hyperspectral imagery based on deep convolutional network. In Proceedings of the 2016 International Conference on Orange Technologies (ICOT), Melbourne, VIC, Australia, 18–20 December 2016; pp. 44–47. [Google Scholar]
  18. Chen, Y.; Jiang, H.; Li, C.; Jia, X.; Ghamisi, P. Deep feature extraction and classification of hyperspectral images based on convolutional neural networks. IEEE Trans. Geosci. Remote Sens. 2016, 54, 6232–6251. [Google Scholar] [CrossRef] [Green Version]
  19. Li, W.; Wu, G.; Zhang, F.; Du, Q. Hyperspectral image classification using deep pixel-pair features. IEEE Trans. Geosci. Remote Sens. 2017, 55, 844–853. [Google Scholar] [CrossRef]
  20. Tarabalka, Y.; Benediktsson, J.A.; Chanussot, J. Spectral–spatial classification of hyperspectral imagery based on partitional clustering techniques. IEEE Trans. Geosci. Remote Sens. 2009, 47, 2973–2987. [Google Scholar] [CrossRef]
  21. Li, Y.; Zhang, H.; Shen, Q. Spectral–spatial classification of hyperspectral imagery with 3D convolutional neural network. Remote Sens. 2017, 9, 67. [Google Scholar] [CrossRef] [Green Version]
  22. Zhong, Z.; Li, J.; Luo, Z.; Chapman, M. Spectral–spatial residual network for hyperspectral image classification: A 3-D deep learning framework. IEEE Trans. Geosci. Remote Sens. 2017, 56, 847–858. [Google Scholar] [CrossRef]
  23. He, K.; Zhang, X.; Ren, S.; Sun, J. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016; pp. 770–778. [Google Scholar]
  24. LeCun, Y.; Bottou, L.; Bengio, Y.; Haffner, P. Gradient-based learning applied to document recognition. Proc. IEEE 1998, 86, 2278–2324. [Google Scholar] [CrossRef] [Green Version]
  25. Xie, S.; Sun, C.; Huang, J.; Tu, Z.; Murphy, K. Rethinking spatiotemporal feature learning for video understanding. arXiv 2017, arXiv:1712.04851. [Google Scholar]
  26. Lee, H.; Kwon, H. Going deeper with contextual CNN for hyperspectral image classification. IEEE Trans. Image Process. 2017, 26, 4843–4855. [Google Scholar] [CrossRef] [Green Version]
  27. Zhang, H.; Li, Y.; Zhang, Y.; Shen, Q. Spectral-spatial classification of hyperspectral imagery using a dual-channel convolutional neural network. Remote Sens. Lett. 2017, 8, 438–447. [Google Scholar] [CrossRef] [Green Version]
  28. Makantasis, K.; Karantzalos, K.; Doulamis, A.; Doulamis, N. Deep supervised learning for hyperspectral data classification through convolutional neural networks. In Proceedings of the 2015 IEEE International Geoscience and Remote Sensing Symposium (IGARSS), Milan, Italy, 26–31 July 2015; pp. 4959–4962. [Google Scholar]
  29. Fang, B.; Li, Y.; Zhang, H.; Chan, J.C.W. Hyperspectral images classification based on dense convolutional networks with spectral-wise attention mechanism. Remote Sens. 2019, 11, 159. [Google Scholar] [CrossRef] [Green Version]
  30. Szegedy, C.; Liu, W.; Jia, Y.; Sermanet, P.; Reed, S.; Anguelov, D.; Erhan, D.; Vanhoucke, V.; Rabinovich, A. Going deeper with convolutions. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Boston, MA, USA, 7–12 June 2015; pp. 1–9. [Google Scholar]
  31. Howard, A.G.; Zhu, M.; Chen, B.; Kalenichenko, D.; Wang, W.; Weyand, T.; Andreetto, M.; Adam, H. Mobilenets: Efficient convolutional neural networks for mobile vision applications. arXiv 2017, arXiv:1704.04861. [Google Scholar]
  32. Ma, N.; Zhang, X.; Zheng, H.T.; Sun, J. Shufflenet v2: Practical guidelines for efficient CNN architecture design. In Proceedings of the European Conference on Computer Vision (ECCV), Munich, Germany, 8–14 September 2018; pp. 116–131. [Google Scholar]
  33. Ioffe, S.; Szegedy, C. Batch normalization: Accelerating deep network training by reducing internal covariate shift. In Proceedings of the International Conference on Machine Learning, Lille, France, 6–11 July 2015; pp. 448–456. [Google Scholar]
  34. Szegedy, C.; Vanhoucke, V.; Ioffe, S.; Shlens, J.; Wojna, Z. Rethinking the inception architecture for computer vision. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016; pp. 2818–2826. [Google Scholar]
  35. Szegedy, C.; Ioffe, S.; Vanhoucke, V.; Alemi, A.A. Inception-v4, inception-resnet and the impact of residual connections on learning. In Proceedings of the Thirty-First AAAI Conference on Artificial Intelligence, San Francisco, CA, USA, 4–9 February 2017. [Google Scholar]
  36. Xiong, Z.; Yuan, Y.; Wang, Q. AI-NET: Attention inception neural networks for hyperspectral image classification. In Proceedings of the IGARSS 2018-2018 IEEE International Geoscience and Remote Sensing Symposium, Valencia, Spain, 22–27 July 2018; pp. 2647–2650. [Google Scholar]
  37. Ruiz Hidalgo, D.; Bacca Cortés, B.; Caicedo Bravo, E. Data classification of hyperspectral images based on inception networks and extended attribute profiles. Int. J. Remote Sens. 2020, 41, 8717–8738. [Google Scholar] [CrossRef]
  38. Sandler, M.; Howard, A.; Zhu, M.; Zhmoginov, A.; Chen, L.C. Mobilenetv2: Inverted residuals and linear bottlenecks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–23 June 2018; pp. 4510–4520. [Google Scholar]
  39. Howard, A.; Sandler, M.; Chu, G.; Chen, L.C.; Chen, B.; Tan, M.; Wang, W.; Zhu, Y.; Pang, R.; Vasudevan, V.; et al. Searching for mobilenetv3. In Proceedings of the IEEE/CVF International Conference on Computer Vision, Seoul, Korea, 27 October–2 November 2019; pp. 1314–1324. [Google Scholar]
  40. Fang, B.; Li, Y.; Zhang, H.; Chan, J.C.W. Collaborative learning of lightweight convolutional neural network and deep clustering for hyperspectral image semi-supervised classification with limited training samples. Isprs J. Photogramm. Remote Sens. 2020, 161, 164–178. [Google Scholar] [CrossRef]
  41. Li, K.; Ma, Z.; Xu, L.; Chen, Y.; Ma, Y.; Wu, W.; Wang, F.; Liu, Z. Depthwise separable ResNet in the MAP framework for hyperspectral image classification. IEEE Geosci. Remote Sens. Lett. 2020, 19, 1–5. [Google Scholar] [CrossRef]
  42. Meng, Z.; Jiao, L.; Liang, M.; Zhao, F. A lightweight spectral-spatial convolution module for hyperspectral image classification. IEEE Geosci. Remote Sens. Lett. 2021, 19, 1–5. [Google Scholar] [CrossRef]
  43. Pan, S.J.; Yang, Q. A survey on transfer learning. IEEE Trans. Knowl. Data Eng. 2009, 22, 1345–1359. [Google Scholar] [CrossRef]
  44. Quattoni, A.; Collins, M.; Darrell, T. Transfer learning for image classification with sparse prototype representations. In Proceedings of the 2008 IEEE Conference on Computer Vision and Pattern Recognition, Anchorage, AK, USA, 23–28 June 2008; pp. 1–8. [Google Scholar]
  45. Yang, J.; Zhao, Y.Q.; Chan, J.C.W. Learning and transferring deep joint spectral–spatial features for hyperspectral classification. IEEE Trans. Geosci. Remote Sens. 2017, 55, 4729–4742. [Google Scholar] [CrossRef]
  46. Lin, J.; Ward, R.; Wang, Z.J. Deep transfer learning for hyperspectral image classification. In Proceedings of the 2018 IEEE 20th International Workshop on Multimedia Signal Processing (MMSP), Vancouver, BC, Canada, 29–31 August 2018; pp. 1–5. [Google Scholar]
  47. He, X.; Chen, Y.; Ghamisi, P. Heterogeneous transfer learning for hyperspectral image classification based on convolutional neural network. IEEE Trans. Geosci. Remote Sens. 2019, 58, 3246–3263. [Google Scholar] [CrossRef]
  48. Zhao, X.; Liang, Y.; Guo, A.J.; Zhu, F. Classification of small-scale hyperspectral images with multi-source deep transfer learning. Remote Sens. Lett. 2020, 11, 303–312. [Google Scholar] [CrossRef]
  49. Zhang, H.; Li, Y.; Jiang, Y.; Wang, P.; Shen, Q.; Shen, C. Hyperspectral classification based on lightweight 3-D-CNN with transfer learning. IEEE Trans. Geosci. Remote Sens. 2019, 57, 5813–5828. [Google Scholar] [CrossRef] [Green Version]
  50. De Brébisson, A.; Vincent, P. An exploration of softmax alternatives belonging to the spherical loss family. arXiv 2015, arXiv:1511.05042. [Google Scholar]
  51. Krizhevsky, A.; Sutskever, I.; Hinton, G.E. Imagenet classification with deep convolutional neural networks. Advances in Neural Information Processing Systems; Curran Associates, Inc.: Red Hook, NY, USA, 2012; Volume 25. [Google Scholar]
  52. Cao, X.; Xu, L.; Meng, D.; Zhao, Q.; Xu, Z. Integration of 3-dimensional discrete wavelet transform and Markov random field for hyperspectral image classification. Neurocomputing 2017, 226, 90–100. [Google Scholar] [CrossRef]
  53. Thompson, W.D.; Walter, S.D. A reappraisal of the kappa coefficient. J. Clin. Epidemiol. 1988, 41, 949–958. [Google Scholar] [CrossRef]
  54. Hänsch, R.; Ley, A.; Hellwich, O. Correct and still wrong: The relationship between sampling strategies and the estimation of the generalization error. In Proceedings of the 2017 IEEE International Geoscience and Remote Sensing Symposium (IGARSS), Fort Worth, TX, USA, 23–28 July 2017; pp. 3672–3675. [Google Scholar]
  55. Xue, Z.; Zhang, M.; Liu, Y.; Du, P. Attention-based second-order pooling network for hyperspectral image classification. IEEE Trans. Geosci. Remote Sens. 2021, 59, 9600–9615. [Google Scholar] [CrossRef]
Figure 1. Framework of AINet. On the left, the L × S × S -sized samples from the neighborhood window centered around each target pixel are extracted, and then the samples are fed into AINet to extract deep spectral–spatial features. Finally, the classification scores are calculated by the classifier.
Figure 2. Illustration of an AI unit. In AI unit, 3D convolution layer is replaced with two asymmetric inception units, i.e., space inception unit and spectrum inception unit. In the space inception unit, the input cube is fed into three different paths. In path one, a pointwise convolution layer is applied. In path two, one pointwise convolution layer and one 2D convolution layer are used. In path three, one pointwise convolution layer and two 2D convolution layers are used. The outputs of each path are concatenated in channel, and are added to the output of the shortcut connection. The structure of spectrum inception unit is similar to the space inception unit, except that 1 × 3 × 3 -sized convolution layers are replaced with 3 × 1 × 1 -sized convolution layers in spectrum inception unit.
Figure 3. Illustration of stacking two AI units. (a) AI unit ×1; (b) AI unit ×2. Instead of stacking two AI units with the same type, we stack one space inception unit and two spectrum inception units to form AI unit ×2 as shown in (b).
Figure 4. Data-fusion-based transfer learning. (a) Data-fusion pretraining: the proposed network is pretrained on two different HSI datasets to increase sample diversity and obtain a robust initialization. (b) Fine-tuning: the model for the target HSI dataset is initialized with the pretrained parameters, while its fully connected layers are randomly re-initialized with a Gaussian distribution.
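A minimal sketch of the fine-tuning step in (b): backbone weights are copied from the data-fusion pretrained model and the fully connected classifier is re-initialized from a Gaussian. The attribute name `fc`, the key prefix used to skip the classifier, and the standard deviation 0.01 are assumptions of this sketch.

```python
import torch.nn as nn

def transfer_to_target(pretrained_model, target_model):
    """Initialize the target-dataset model from the data-fusion pretrained model.

    Assumes both models share backbone layer names and that the classifier
    lives in an attribute called `fc` (illustrative assumptions).
    """
    state = pretrained_model.state_dict()
    # Copy every backbone parameter, skipping the fully connected classifier.
    backbone = {k: v for k, v in state.items() if not k.startswith("fc")}
    target_model.load_state_dict(backbone, strict=False)
    # Randomly re-initialize the fully connected layers with a Gaussian distribution.
    for m in target_model.fc.modules():
        if isinstance(m, nn.Linear):
            nn.init.normal_(m.weight, mean=0.0, std=0.01)
            nn.init.zeros_(m.bias)
    return target_model
```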
Figure 5. False-color composites (first row) and ground truths (second row) of experimental HSI datasets. Each color represents one kind of object. (a) Pavia University; (b) Indian Pines; (c) Kennedy Space Center; (d) Pavia Center; (e) Salinas.
Figure 6. Classification maps for Pavia University dataset. (a) Ground-truth map; (b) SVM-3DG; (c) 1D-CNN; (d) 2D-CNN; (e) 3D-CNN; (f) MSDN-SA; (g) SSRN; (h) AINet.
Figure 7. Classification maps for Indian Pines dataset. (a) Ground-truth map; (b) SVM-3DG; (c) 1D-CNN; (d) 2D-CNN; (e) 3D-CNN; (f) MSDN-SA; (g) SSRN; (h) AINet.
Figure 8. Classification maps for KSC dataset. (a) Ground-truth map; (b) SVM-3DG; (c) 1D-CNN; (d) 2D-CNN; (e) 3D-CNN; (f) MSDN-SA; (g) SSRN; (h) AINet.
Figure 9. Transfer learning experiments with 15 training samples per class. (a) Pavia University; (b) Indian Pines; (c) Kennedy Space Center.
Figure 10. Transfer learning experiments with 30 training samples per class. (a) Pavia University; (b) Indian Pines; (c) Kennedy Space Center.
Table 1. Samples distribution for Pavia University dataset.
No. | Class Name | Training Samples | Test Samples
1 | Asphalt | 200 | 6431
2 | Meadows | 200 | 18,449
3 | Gravel | 200 | 1899
4 | Trees | 200 | 2864
5 | Painted metal sheets | 200 | 1145
6 | Bare Soil | 200 | 4829
7 | Bitumen | 200 | 1130
8 | Self-Blocking Bricks | 200 | 3482
9 | Shadows | 200 | 747
Total | | 1800 | 40,976
Table 2. Samples distribution for Indian Pines dataset.
No. | Class Name | Training Samples | Test Samples
1 | Alfalfa | 30 | 16
2 | Corn-notill | 150 | 1198
3 | Corn-mintill | 150 | 232
4 | Corn | 100 | 5
5 | Grass-pasture | 150 | 139
6 | Grass-trees | 150 | 580
7 | Grass-pasture-mowed | 20 | 8
8 | Hay-windrowed | 150 | 130
9 | Oats | 15 | 5
10 | Soybean-notill | 150 | 675
11 | Soybean-mintill | 150 | 2032
12 | Soybean-clean | 150 | 263
13 | Wheat | 150 | 55
14 | Woods | 150 | 793
15 | Buildings-Grass-Trees-Drives | 50 | 49
16 | Stone-Steel-Towers | 50 | 43
Total | | 1765 | 6223
Table 3. Samples distribution for KSC dataset.
No. | Class Name | Training Samples | Test Samples
1 | Scrub | 33 | 314
2 | Willow Swamp | 23 | 220
3 | Cabbage Palm Hammock | 24 | 232
4 | Cabbage Palm/Oak Hammock | 24 | 228
5 | Slash Pine | 15 | 146
6 | Oak/Broadleaf Hammock | 22 | 207
7 | Hardwood Swamp | 9 | 96
8 | Graminoid Marsh | 38 | 352
9 | Spartina Marsh | 51 | 469
10 | Cattail Marsh | 39 | 365
11 | Salt Marsh | 41 | 378
12 | Mud Flats | 49 | 454
13 | Water | 91 | 836
Total | | 459 | 4297
Table 4. Samples distribution for Pavia Center dataset.
No. | Class Name | Samples
1 | Water | 824
2 | Trees | 820
3 | Asphalt | 816
4 | Self-Blocking Bricks | 808
5 | Bitumen | 808
6 | Tiles | 1260
7 | Shadows | 476
8 | Meadows | 824
9 | Bare Soil | 820
Total | | 7456
Table 5. Samples distribution for Salinas dataset.
No. | Class Name | Samples
1 | Brocoli_green_weeds_1 | 2009
2 | Brocoli_green_weeds_2 | 3726
3 | Fallow | 1976
4 | Fallow_rough_plow | 1394
5 | Fallow_smooth | 2678
6 | Stubble | 3959
7 | Celery | 3579
8 | Grapes_untrained | 11,271
9 | Soil_vinyard_develop | 6203
10 | Corn_senesced_green_weeds | 3278
11 | Lettuce_romaine_4wk | 1068
12 | Lettuce_romaine_5wk | 1927
13 | Lettuce_romaine_6wk | 916
14 | Lettuce_romaine_7wk | 1070
15 | Vinyard_untrained | 7268
16 | Vinyard_vertical_trellis | 1807
Total | | 54,129
Table 6. Classification results for the Pavia University dataset.
Models | SVM-3DG [52] | 1D-CNN [18] | 2D-CNN [18] | 3D-CNN [18] | MSDN-SA [29] | SSRN [22] | AINet
# train | 3930 | 3930 | 3930 | 3930 | 3930 | 1800 | 1800
# param. | – | 2898 | 0.183 M | 5.849 M | 3.058 M | 0.453 M | 0.487 M
depth | – | 4 | 4 | 4 | 7 | 12 | 32
OA | 90.18 ± 0.95 | 89.01 ± 1.31 | 91.13 ± 1.49 | 95.63 ± 0.79 | 96.85 ± 0.71 | 98.98 ± 0.73 | 99.42 ± 0.89
AA | 91.47 ± 0.90 | 89.15 ± 0.87 | 92.58 ± 1.77 | 95.67 ± 0.86 | 97.36 ± 0.38 | 99.07 ± 1.46 | 99.51 ± 0.58
K | 87.39 ± 0.96 | 87.47 ± 1.60 | 89.63 ± 0.94 | 95.38 ± 1.58 | 95.85 ± 0.93 | 98.64 ± 1.31 | 99.22 ± 1.73
Table 7. Classification results for the Indian Pines dataset.
Models | SVM-3DG [52] | 1D-CNN [18] | 2D-CNN [18] | 3D-CNN [18] | MSDN-SA [29] | SSRN [22] | AINet
# train | 1765 | 1765 | 1765 | 1765 | 1765 | 1765 | 1765
# param. | – | 25,920 | 0.183 M | 44.893 M | 3.058 M | 0.453 M | 0.487 M
depth | – | 6 | 4 | 4 | 7 | 12 | 32
OA | 85.87 ± 0.91 | 87.81 ± 1.28 | 89.99 ± 1.62 | 97.56 ± 1.21 | 98.02 ± 1.85 | 98.40 ± 0.90 | 99.14 ± 1.74
AA | 89.74 ± 0.82 | 93.12 ± 0.86 | 97.19 ± 1.96 | 99.23 ± 1.94 | 98.69 ± 0.94 | 98.52 ± 1.98 | 99.47 ± 1.64
K | 84.08 ± 1.54 | 85.30 ± 1.69 | 87.95 ± 0.86 | 97.02 ± 1.97 | 97.75 ± 1.21 | 98.14 ± 0.75 | 99.00 ± 1.27
Table 8. Classification results for the KSC dataset.
Models | SVM-3DG [52] | 1D-CNN [18] | 2D-CNN [18] | 3D-CNN [18] | MSDN-SA [29] | SSRN [22] | AINet
# train | 459 | 459 | 459 | 459 | 459 | 459 | 459
# param. | – | 14,904 | 0.183 M | 5.849 M | 3.058 M | 0.453 M | 0.487 M
depth | – | 5 | 4 | 4 | 7 | 12 | 32
OA | 88.24 ± 1.36 | 89.23 ± 1.69 | 94.11 ± 1.36 | 96.31 ± 0.98 | 97.95 ± 1.91 | 98.65 ± 1.37 | 99.01 ± 0.69
AA | 85.68 ± 1.87 | 83.32 ± 1.05 | 91.98 ± 1.19 | 94.68 ± 2.04 | 97.80 ± 1.94 | 97.78 ± 1.32 | 98.65 ± 0.59
K | 87.04 ± 0.65 | 86.91 ± 1.47 | 93.44 ± 0.98 | 95.90 ± 1.08 | 97.70 ± 1.65 | 98.54 ± 0.89 | 98.90 ± 1.15
Table 9. Classification results with spatially disjoint samples for the Pavia University dataset.
Models | SVM-3DG [52] | 1D-CNN [18] | 2D-CNN [18] | 3D-CNN [18] | MSDN-SA [29] | SSRN [22] | AINet
OA | 79.97 ± 1.96 | 79.18 ± 1.39 | 75.41 ± 1.79 | 77.25 ± 1.08 | 76.88 ± 2.14 | 81.98 ± 1.24 | 83.04 ± 1.28
AA | 80.99 ± 0.79 | 79.47 ± 0.86 | 76.56 ± 1.36 | 78.56 ± 2.14 | 78.07 ± 1.35 | 83.67 ± 2.43 | 85.64 ± 0.86
K | 78.53 ± 1.42 | 77.31 ± 1.38 | 74.14 ± 1.03 | 75.61 ± 1.62 | 75.78 ± 1.94 | 80.49 ± 0.87 | 82.49 ± 1.36
Table 10. Classification results with spatially disjoint samples for the Indian Pines dataset.
Models | SVM-3DG [52] | 1D-CNN [18] | 2D-CNN [18] | 3D-CNN [18] | MSDN-SA [29] | SSRN [22] | AINet
OA | 77.54 ± 1.36 | 78.21 ± 1.79 | 76.06 ± 1.54 | 75.49 ± 2.16 | 78.29 ± 0.46 | 79.94 ± 1.84 | 85.97 ± 1.08
AA | 79.92 ± 1.24 | 79.72 ± 1.56 | 78.70 ± 0.89 | 76.66 ± 1.18 | 79.21 ± 0.79 | 80.78 ± 2.06 | 87.09 ± 1.27
K | 76.91 ± 0.98 | 76.32 ± 0.93 | 74.26 ± 1.28 | 73.87 ± 1.78 | 75.78 ± 1.47 | 76.47 ± 1.19 | 83.14 ± 0.86
Table 11. Classification results with spatially disjoint samples for the KSC dataset.
Models | SVM-3DG [52] | 1D-CNN [18] | 2D-CNN [18] | 3D-CNN [18] | MSDN-SA [29] | SSRN [22] | AINet
OA | 80.64 ± 1.58 | 79.36 ± 1.50 | 77.54 ± 1.26 | 79.10 ± 1.23 | 79.03 ± 1.36 | 78.12 ± 1.07 | 80.92 ± 1.02
AA | 82.74 ± 1.24 | 80.94 ± 1.22 | 79.65 ± 1.24 | 82.28 ± 0.98 | 82.93 ± 1.20 | 80.77 ± 1.65 | 83.26 ± 1.27
K | 78.48 ± 0.68 | 77.95 ± 1.81 | 76.34 ± 2.04 | 78.77 ± 1.42 | 77.41 ± 0.96 | 75.78 ± 1.34 | 77.57 ± 1.81
Table 12. Transfer learning results for the three target datasets.
Dataset: Pavia University (15 and 30 training samples per class)
Models | OA (15) | AA (15) | K (15) | OA (30) | AA (30) | K (30)
AINet | 84.22 ± 0.72 | 86.36 ± 0.89 | 79.67 ± 1.34 | 91.94 ± 0.87 | 93.52 ± 0.90 | 89.48 ± 0.82
AINet+T1 | 87.86 ± 0.69 | 88.94 ± 0.84 | 84.20 ± 0.76 | 93.25 ± 0.77 | 95.06 ± 0.98 | 91.17 ± 0.85
AINet+T2 | 84.29 ± 0.58 | 84.02 ± 1.19 | 79.64 ± 0.29 | 89.63 ± 0.59 | 90.41 ± 1.30 | 86.22 ± 0.79
AINet+T3 | 89.91 ± 1.67 | 89.58 ± 1.49 | 86.80 ± 0.89 | 93.73 ± 0.46 | 94.04 ± 1.07 | 91.77 ± 1.20
AINet+T4 | 90.31 ± 1.23 | 90.57 ± 0.85 | 87.32 ± 0.78 | 94.17 ± 1.16 | 94.52 ± 1.39 | 92.32 ± 0.89
Dataset: Indian Pines (15 and 30 training samples per class)
Models | OA (15) | AA (15) | K (15) | OA (30) | AA (30) | K (30)
AINet | 75.77 ± 1.82 | 86.43 ± 0.95 | 72.68 ± 1.79 | 87.79 ± 1.29 | 93.76 ± 0.98 | 86.12 ± 1.84
AINet+T1 | 76.14 ± 1.69 | 86.37 ± 1.54 | 73.14 ± 1.83 | 86.76 ± 1.65 | 93.21 ± 0.87 | 84.92 ± 0.75
AINet+T2 | 78.74 ± 0.74 | 88.09 ± 0.99 | 76.10 ± 1.18 | 88.93 ± 1.89 | 94.32 ± 1.05 | 87.42 ± 1.48
AINet+T3 | 79.35 ± 1.37 | 88.00 ± 1.79 | 76.70 ± 1.74 | 89.30 ± 0.87 | 94.34 ± 0.85 | 87.83 ± 1.24
AINet+T4 | 79.21 ± 0.47 | 88.39 ± 0.49 | 76.55 ± 1.13 | 89.07 ± 0.28 | 94.29 ± 0.86 | 87.64 ± 1.76
Dataset: KSC (15 and 30 training samples per class)
Models | OA (15) | AA (15) | K (15) | OA (30) | AA (30) | K (30)
AINet | 89.07 ± 0.67 | 88.61 ± 0.31 | 87.83 ± 0.87 | 96.75 ± 1.98 | 96.14 ± 0.74 | 96.36 ± 0.66
AINet+T1 | 91.40 ± 1.65 | 89.80 ± 1.49 | 90.39 ± 0.59 | 96.46 ± 1.28 | 96.07 ± 1.49 | 96.13 ± 0.89
AINet+T2 | 92.10 ± 1.62 | 91.75 ± 2.17 | 91.21 ± 0.91 | 97.05 ± 0.52 | 97.48 ± 0.58 | 96.70 ± 1.38
AINet+T3 | 93.68 ± 2.07 | 93.48 ± 1.49 | 92.97 ± 0.65 | 97.60 ± 1.20 | 97.71 ± 0.48 | 97.31 ± 1.27
AINet+T4 | 93.77 ± 0.55 | 93.03 ± 0.91 | 93.06 ± 0.52 | 97.74 ± 1.57 | 97.93 ± 2.21 | 97.87 ± 1.26
Table 13. Classification results for the three target datasets.
Dataset: Pavia University (1800 training samples)
Metric | Basic network model | AINet (AI unit ×1) | AINet (AI unit ×2)
OA | 99.27 ± 1.24 | 99.36 ± 0.54 | 99.42 ± 0.89
AA | 99.39 ± 1.41 | 99.44 ± 0.68 | 99.51 ± 0.58
K | 99.08 ± 1.60 | 99.11 ± 0.46 | 99.22 ± 1.73
Dataset: Indian Pines (1765 training samples)
Metric | Basic network model | AINet (AI unit ×1) | AINet (AI unit ×2)
OA | 98.85 ± 2.13 | 99.00 ± 0.90 | 99.14 ± 0.74
AA | 99.52 ± 1.24 | 99.30 ± 0.31 | 99.47 ± 1.64
K | 98.67 ± 1.13 | 98.71 ± 0.82 | 99.00 ± 1.27
Dataset: KSC (459 training samples)
Metric | Basic network model | AINet (AI unit ×1) | AINet (AI unit ×2)
OA | 97.12 ± 0.38 | 98.29 ± 1.04 | 99.01 ± 0.69
AA | 96.47 ± 0.88 | 97.01 ± 0.87 | 98.65 ± 0.59
K | 97.01 ± 0.94 | 97.15 ± 1.27 | 98.90 ± 1.15
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
