Article

A 3D-2D Multibranch Feature Fusion and Dense Attention Network for Hyperspectral Image Classification

1 Key Laboratory of Water Big Data Technology of Ministry of Water Resources, Hohai University, No. 8 Focheng Road, Nanjing 211100, China
2 College of Computer and Information, Hohai University, No. 8 Focheng Road, Nanjing 211100, China
3 College of Computer and Software, Nanjing Vocational University of Industry Technology, Nanjing 211100, China
* Author to whom correspondence should be addressed.
Micromachines 2021, 12(10), 1271; https://doi.org/10.3390/mi12101271
Submission received: 11 September 2021 / Revised: 8 October 2021 / Accepted: 15 October 2021 / Published: 18 October 2021

Abstract

In recent years, hyperspectral image (HSI) classification has attracted considerable attention. Various methods based on convolutional neural networks have achieved outstanding classification results. However, most of them suffer from underutilization of spectral-spatial features, redundant information, and convergence difficulty. To address these problems, a novel 3D-2D multibranch feature fusion and dense attention network is proposed for HSI classification. Specifically, the 3D multibranch feature fusion module integrates multiple receptive fields in the spatial and spectral dimensions to obtain shallow features. Then, a 2D densely connected attention module is built from densely connected layers and a spatial-channel attention block. The former alleviates gradient vanishing and enhances feature reuse during training. The latter emphasizes meaningful features and suppresses interfering information along the two principal dimensions: the channel and spatial axes. The experimental results on four benchmark hyperspectral image datasets demonstrate that the model can effectively improve the classification performance with great robustness.

1. Introduction

With the development of remote sensing, hyperspectral imaging technology has been widely applied in meteorological warning [1], agricultural monitoring [2], and marine safety [3]. Hyperspectral images are composed of hundreds of spectral bands and contain rich land-cover information. Hyperspectral image classification has received increasing attention as a crucial issue in the field of remote sensing.
The conventional classification methods include random forest (RF) [4], multinomial logistic regression (MLR) [5], and support vector machine (SVM) [6]. They all perform classification based on one-dimensional spectral information. Additionally, principal component analysis (PCA) [7] is often used to compress the spectral dimension while retaining essential spectral features, reducing band redundancy, and improving model robustness. Although these traditional methods obtain good results, they have limited representation capacity and can only extract low-level features due to their shallow nonlinear structure.
Recently, hyperspectral image classification methods based on deep learning (DL) [8,9,10] have been increasingly favored by researchers to make up for the shortcomings of traditional methods. CNNs have made great breakthroughs in the field of computer vision due to their excellent image representation ability and have proved successful in hyperspectral image classification. Makantasis et al. [11] developed a network based on 2D CNNs, where each pixel was packed into an image patch of fixed size for spatial feature extraction and sent to a multilayer perceptron for classification. However, 2D convolution can only extract features along the height and width dimensions, ignoring the rich information in the spectral bands. To further exploit the spectral dimension, researchers turned their attention to 3D CNNs [12,13,14]. He et al. [12] proposed a multiscale 3D deep convolutional neural network (M3D-DCNN) for HSI classification, which learns the spatial and spectral features from the raw hyperspectral data in an end-to-end manner. Zhong et al. [13] developed a 3D spectral-spatial residual network (SSRN) that continuously learns discriminative features and spatial context information from redundant spectral signatures. Although 3D CNNs can make up for the defects of 2D CNNs in this regard, they introduce a large number of computational parameters, increasing the training time and memory cost. In addition, because HSIs exhibit strong correlations between bands, the same material may present dissimilar spectra, and different materials may share similar spectral features, which seriously interferes with the extraction of spectral information and degrades classification performance. How to extract discriminative features from HSIs is therefore key to improving classification performance. Recent studies have shown that discriminative features can be enhanced using attention mechanisms [15,16]. Studies in cognitive biology reveal that human beings acquire significant information by paying attention to only a few critical features while ignoring the others. Similarly, attention has been effectively applied to various tasks in computer vision [17,18]. Numerous approaches based on existing attention mechanisms have also been applied to hyperspectral image classification, demonstrating their efficiency in improving performance.
Additionally, with the continuous deepening of feature extraction, the neural network inevitably becomes deeper, and the phenomena of gradient vanishing and network degradation become more serious, which deteriorates the classification performance. Furthermore, screening the crucial features from the complex features of hyperspectral images becomes critical for improving network performance. Considering the above problems, this paper proposes a 3D-2D multibranch feature fusion and dense attention network (MFFDAN) for hyperspectral image classification. In summary, the main contributions of this paper are as follows:
(1) The whole network structure is composed of 3D and 2D convolutional layers, which not only avoids the problem of insufficient feature extraction when only 2D CNNs are used but also reduces the large number of training parameters introduced by using 3D CNNs alone, thereby improving model efficiency.
(2) The proposed 3D multibranch feature fusion module obtains multiscale features by combining multiple convolutional filters. In general, filters with large sizes are unable to capture the fine-grained structure of the images, whereas filters with small sizes often fail to capture the coarse-grained features. Combining convolutional filters of varying sizes allows more detailed features to be extracted.
(3) A 2D densely connected attention module is developed to overcome the gradient vanishing problem and to select discriminative channel-spatial features from redundant hyperspectral images. A factorized spatial-channel attention block is proposed that can adaptively prioritize critical features and suppress less useful ones. Additionally, a simple 2D dense block is introduced to facilitate information propagation and feature reuse and to comprehensively utilize features of different scales in the 3D HSI cubes.
The rest of this paper is arranged as follows: Section 2 introduces the related CNN methods and the framework of MFFDAN. Section 3 presents the experiments on four benchmark datasets. Finally, the conclusions are presented in Section 4.

2. Materials and Methods

2.1. 3D Multibranch Fusion Module

Three-dimensional CNNs operate on the spectral and spatial dimensions of hyperspectral images simultaneously through 3D convolution kernels and can directly extract spatial and spectral information from the raw hyperspectral images. The formula is as follows:
$$v_{l,i}^{x,y,z} = f\left( \sum_{m} \sum_{h=0}^{H_l - 1} \sum_{w=0}^{W_l - 1} \sum_{d=0}^{D_l - 1} k_{l,i,m}^{h,w,d} \, v_{l-1,m}^{x+h,\, y+w,\, z+d} + b_{l,i} \right)$$
where $H_l$, $W_l$, and $D_l$ represent the height, width, and spectral depth of the convolution kernels, $k_{l,i,m}^{h,w,d}$ denotes the weight at position $(h, w, d)$ of the $i$-th convolution kernel in the $l$-th layer connected to the $m$-th feature map of the previous layer, $v_{l,i}^{x,y,z}$ is the output value at position $(x, y, z)$, $b_{l,i}$ is the bias term, and $f(\cdot)$ is the activation function.
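As a concrete illustration of the formula above, the following minimal PyTorch sketch applies a single 3D convolution to a hyperspectral patch; the patch size, channel count, and kernel size are illustrative assumptions rather than the exact configuration used in this paper.

    import torch
    import torch.nn as nn

    # Minimal sketch of a 3D convolution over an HSI patch (cf. the formula above).
    # The shapes below are illustrative, not the exact configuration of MFFDAN.
    patch = torch.randn(1, 1, 30, 15, 15)                  # (batch, channel, bands, height, width)
    conv3d = nn.Conv3d(in_channels=1, out_channels=8,
                       kernel_size=(3, 3, 3), padding=1)   # D_l = H_l = W_l = 3
    features = torch.relu(conv3d(patch))                   # f(.) realized as ReLU
    print(features.shape)                                  # torch.Size([1, 8, 30, 15, 15])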
Typical 3D CNN methods for hyperspectral image classification stack convolutional blocks of convolutional layers (Conv), batch normalization (BN), and activation functions to extract detailed and discriminative features from raw hyperspectral images. While these methods improve the classification results to a certain degree, they also introduce numerous parameters and increase the training time. Additionally, building deep convolutional neural networks tends to cause gradient vanishing and to suffer from classification performance degradation.
To solve the above problems, a 3D multibranch fusion module is proposed in this work. The architecture of the module is shown in Figure 1. First, 3 × 3 × 3 and 1 × 1 × 1 convolutional blocks are employed to form the shallow network, which expands the information flow and allows the network to learn texture features. Then, three branches composed of multiple convolution kernels are added in sequence. Convolutional filters of different sizes are used to extract multiscale features from the hyperspectral data. Merging these branches with the shallow network frequently yields better classification performance than simply stacking convolutional layers.
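A minimal sketch of how such a 3D multibranch fusion block could be assembled is given below. The branch kernel sizes, channel widths, and concatenation-based fusion are assumptions for illustration and do not reproduce the exact configuration of Figure 1.

    import torch
    import torch.nn as nn

    class MultibranchFusion3D(nn.Module):
        """Illustrative 3D multibranch fusion block: a shallow 3x3x3 + 1x1x1 stem
        followed by parallel branches with different receptive fields, concatenated."""
        def __init__(self, in_ch=1, ch=8):
            super().__init__()
            self.stem = nn.Sequential(
                nn.Conv3d(in_ch, ch, kernel_size=3, padding=1),
                nn.BatchNorm3d(ch), nn.ReLU(inplace=True),
                nn.Conv3d(ch, ch, kernel_size=1),
                nn.BatchNorm3d(ch), nn.ReLU(inplace=True),
            )
            # Three branches with different kernel sizes (assumed values).
            self.branch1 = nn.Conv3d(ch, ch, kernel_size=(1, 1, 1))
            self.branch2 = nn.Conv3d(ch, ch, kernel_size=(3, 3, 3), padding=1)
            self.branch3 = nn.Conv3d(ch, ch, kernel_size=(5, 3, 3), padding=(2, 1, 1))

        def forward(self, x):
            s = self.stem(x)
            # Multiscale branch outputs are fused with the shallow features by concatenation.
            return torch.cat([s, self.branch1(s), self.branch2(s), self.branch3(s)], dim=1)

    x = torch.randn(2, 1, 30, 15, 15)        # (batch, 1, bands, height, width)
    print(MultibranchFusion3D()(x).shape)    # torch.Size([2, 32, 30, 15, 15])

Concatenating the branch outputs with the shallow features lets later layers draw on several receptive fields at once.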

2.2. 3D-2D CNN

On the one hand, the features extracted by 2D CNNs alone are limited. On the other hand, 3D CNNs consume a substantial amount of computational resources. Combining 2D CNNs and 3D CNNs can effectively make up for these defects. Roy et al. [14] proposed a hybrid spectral-spatial neural network, HybridSN. First, 3D CNNs facilitate the joint spatial-spectral feature representation from a stack of spectral bands. Then, 2D CNNs on top of the 3D CNNs further learn more abstract spatial representations. Compared with 3D CNNs alone, the hybrid CNNs not only avoid the problem of insufficient feature extraction but also reduce the number of training parameters and improve model efficiency.
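The hand-off from 3D to 2D convolutions can be sketched as below, in the spirit of HybridSN-style hybrid networks; the feature-map sizes are assumed for illustration.

    import torch
    import torch.nn as nn

    # Illustrative 3D-to-2D hand-off: the spectral and channel axes of the 3D
    # feature maps are merged so that 2D convolutions can refine spatial features.
    feat3d = torch.randn(2, 16, 24, 13, 13)          # (batch, channels, bands, h, w)
    b, c, d, h, w = feat3d.shape
    feat2d = feat3d.reshape(b, c * d, h, w)          # fold the band axis into the channel axis
    conv2d = nn.Conv2d(c * d, 64, kernel_size=3, padding=1)
    out = torch.relu(conv2d(feat2d))
    print(out.shape)                                 # torch.Size([2, 64, 13, 13])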

2.3. Attention Mechanism

2.3.1. Channel Attention

A new global-context channel attention block is designed to make the network attend to the relationships between adjacent pixels along the channel dimension. At the same time, the channel attention block captures long-distance dependencies between pixels and improves the network's global perception. The structure of the channel attention is shown in Figure 2. Because fully connected layers increase the number of parameters, 1 × 1 convolutions are used to replace them, and the scaling factor is set to 4 to reduce the computational cost and prevent the network from overfitting. Additionally, layer normalization [19] is introduced to regularize the network weights:
$$\hat{z}^{(l)} = \frac{z^{(l)} - \mu^{(l)}}{\sqrt{\left(\sigma^{(l)}\right)^{2} + \varepsilon}} \odot \gamma + \beta \triangleq \mathrm{LN}_{\gamma,\beta}\left(z^{(l)}\right)$$
where $\gamma$ and $\beta$ represent the scaling and shifting parameter vectors, respectively. Layer normalization normalizes the weight matrix, which accelerates the convergence and regularization of the network. The formula of the channel attention is:
$$z_i = x_i + W_{v2}\, \mathrm{ReLU}\left( \mathrm{LN}\left( W_{v1} \sum_{j=1}^{N_p} \frac{e^{W_k x_j}}{\sum_{m=1}^{N_p} e^{W_k x_m}}\, x_j \right) \right)$$
where $\sum_{j=1}^{N_p} \frac{e^{W_k x_j}}{\sum_{m=1}^{N_p} e^{W_k x_m}}\, x_j$ represents global attention pooling over the $N_p$ spatial positions and $W_{v2}\,\mathrm{ReLU}(\mathrm{LN}(W_{v1}(\cdot)))$ denotes the bottleneck transform. The channel attention module uses global attention pooling to model long-distance dependencies and to capture discriminative channel features from the redundant hyperspectral images.
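A sketch of a global-context channel attention block of this form is shown below; the implementation details, such as the exact normalization placement and the channel count, are assumptions based on the description above rather than code released with the paper.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class GlobalContextChannelAttention(nn.Module):
        """Illustrative global-context channel attention: global attention pooling
        followed by a 1x1-conv bottleneck with layer normalization; the scaling
        factor of 4 follows the text."""
        def __init__(self, channels, reduction=4):
            super().__init__()
            self.wk = nn.Conv2d(channels, 1, kernel_size=1)       # attention logits W_k
            mid = channels // reduction
            self.transform = nn.Sequential(                       # bottleneck: W_v1, LN, ReLU, W_v2
                nn.Conv2d(channels, mid, kernel_size=1),
                nn.LayerNorm([mid, 1, 1]),
                nn.ReLU(inplace=True),
                nn.Conv2d(mid, channels, kernel_size=1),
            )

        def forward(self, x):
            b, c, h, w = x.shape
            logits = self.wk(x).view(b, 1, h * w)                 # (b, 1, N_p)
            weights = F.softmax(logits, dim=-1)                   # softmax over spatial positions
            context = torch.bmm(x.view(b, c, h * w), weights.transpose(1, 2))  # (b, c, 1)
            context = context.view(b, c, 1, 1)
            return x + self.transform(context)                    # residual fusion, as in the formula

    x = torch.randn(2, 64, 15, 15)
    print(GlobalContextChannelAttention(64)(x).shape)             # torch.Size([2, 64, 15, 15])

The softmax over spatial positions implements the global attention pooling, and the residual addition corresponds to the $x_i$ term in the channel attention formula.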

2.3.2. Spatial Attention

A spatial attention block based on the interspatial relationships of features is developed, inspired by CBAM [20]. Figure 3 illustrates the structure of the spatial attention block. To generate an efficient feature descriptor, average-pooling and max-pooling operations are applied along the channel axis and their outputs are concatenated; pooling along the channel axis has been shown to be effective at highlighting informative regions. A standard convolution layer is then applied to the concatenated feature descriptor to produce a two-dimensional spatial attention map that specifies which features to emphasize or suppress. In short, the spatial attention is computed as follows:
$$M(F) = \sigma\left( f^{3 \times 3}\left( [\mathrm{AvgPool}(F);\, \mathrm{MaxPool}(F)] \right) \right) = \sigma\left( f^{3 \times 3}\left( [F_{\mathrm{avg}};\, F_{\mathrm{max}}] \right) \right)$$
where $\sigma$ denotes the sigmoid function and $f^{3 \times 3}$ represents a convolution operation with a filter size of $3 \times 3$.
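The spatial attention above can be sketched in PyTorch as follows; applying the resulting map back onto the input by elementwise multiplication is an assumption about how the block is used, following common practice in CBAM-style modules.

    import torch
    import torch.nn as nn

    class SpatialAttention(nn.Module):
        """Illustrative spatial attention: channel-wise average- and max-pooling
        are concatenated and passed through a 3x3 convolution followed by a
        sigmoid to produce a 2D attention map."""
        def __init__(self, kernel_size=3):
            super().__init__()
            self.conv = nn.Conv2d(2, 1, kernel_size, padding=kernel_size // 2)

        def forward(self, x):
            avg_map = torch.mean(x, dim=1, keepdim=True)      # AvgPool along the channel axis
            max_map, _ = torch.max(x, dim=1, keepdim=True)    # MaxPool along the channel axis
            attn = torch.sigmoid(self.conv(torch.cat([avg_map, max_map], dim=1)))
            return x * attn                                   # reweight spatial positions

    x = torch.randn(2, 64, 15, 15)
    print(SpatialAttention()(x).shape)                        # torch.Size([2, 64, 15, 15])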

2.4. HSI Classification Based on MFFDAN

The architecture of MFFDAN is depicted in Figure 4. The University of Pavia dataset is used to demonstrate the algorithm's detailed process. For preprocessing, the raw data are normalized to zero mean and unit variance. Then, PCA is employed to compress the spectral dimension and eliminate band noise in the raw HSIs. Finally, the hyperspectral image data are segmented into 3D image patches of fixed spatial size centered on the labeled pixels. After these steps, the patches are sent to the 3D multibranch fusion module, which extracts multiscale features with convolutional filters of multiple sizes. Following that, the 3D feature maps are reshaped to 2D through a dimension transformation and sent to the 2D dense attention module, where a dense block [21] is arranged with the spatial and channel attention in the middle of the module to enhance the information flow and adaptively select discriminative spatial-channel features. Finally, a fully connected layer with a softmax function is used for classification.
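The preprocessing steps described above (zero-mean/unit-variance normalization, PCA compression, and extraction of fixed-size patches centered on labeled pixels) can be sketched as follows; the helper name and the reflect padding at the image borders are illustrative assumptions.

    import numpy as np
    from sklearn.decomposition import PCA

    def preprocess(cube, labels, n_components=30, patch=15):
        """Illustrative preprocessing: normalize, apply PCA along the spectral
        axis, and cut fixed-size 3D patches centered on labeled pixels."""
        h, w, bands = cube.shape
        flat = cube.reshape(-1, bands).astype(np.float32)
        flat = (flat - flat.mean(axis=0)) / (flat.std(axis=0) + 1e-8)   # zero mean, unit variance
        reduced = PCA(n_components=n_components).fit_transform(flat).reshape(h, w, n_components)

        r = patch // 2
        padded = np.pad(reduced, ((r, r), (r, r), (0, 0)), mode='reflect')
        patches, targets = [], []
        for i, j in zip(*np.nonzero(labels)):                           # labeled pixels only
            patches.append(padded[i:i + patch, j:j + patch, :])         # patch centered on (i, j)
            targets.append(labels[i, j] - 1)                            # classes re-indexed from 0
        return np.stack(patches), np.array(targets)

With the settings adopted later in the paper (30 principal components and a 15 × 15 spatial size), a call such as preprocess(cube, labels, 30, 15) would yield patches of size 15 × 15 × 30.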

3. Results

3.1. Datasets

The University of Pavia (PU) dataset was acquired by the reflective optics system imaging spectrometer (ROSIS) sensor. The dataset consists of 103 spectral bands. There are 610 × 340 pixels and nine ground-truth classes in total. The number of training, validation, and testing samples in experiments is given in Table 1.
The Kennedy Space Center (KSC) dataset was gathered in 1996 by AVIRIS, with wavelengths ranging from 400 to 2500 nm. The images have a spatial dimension of 512 × 614 pixels and 176 spectral bands. The KSC dataset consists of 5211 labeled samples in total, covering 13 upland and wetland classes. The number of training, validation, and testing samples in experiments is given in Table 2.
The Salinas Valley (SA) dataset contains 512 × 217 pixels with a spatial resolution of 3.7 m and 224 bands with wavelengths ranging from 0.36 to 2.5 μm. Additionally, 20 spectral bands of the dataset were eliminated due to water absorption. The SA dataset contains 16 labeled material classes in total. The number of training, validation, and testing samples in experiments is given in Table 3.
The GRSS_DFC_2013 dataset was acquired by the compact airborne spectrographic imager (CASI) over the campus of the University of Houston and the neighboring urban area [22]. The dataset contains 349 × 1905 pixels, with a spatial resolution of 2.5 m and 144 spectral bands ranging from 0.38 to 1.05 µm. It includes 15 classes in total. The number of training, validation, and testing samples in experiments is given in Table 4.

3.2. Experimental Setup

The model proposed in this paper is implemented in Python with the PyTorch deep learning framework. All experiments are carried out on a computer with the Windows 10 operating system, an NVIDIA RTX 2060 Super GPU, and 64 GB of RAM. The overall accuracy (OA), average accuracy (AA), and kappa coefficient (Kappa) are adopted as the evaluation criteria. Considering the unbalanced categories in the four benchmarks, different proportions of training, validation, and testing samples are used for each dataset to verify the effectiveness of the proposed model.
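For reference, the three evaluation criteria can be computed from a confusion matrix as in the sketch below (a standard formulation, not code from the paper).

    import numpy as np

    def classification_metrics(confusion):
        """Illustrative computation of OA, AA, and Kappa from a confusion matrix
        (rows: true classes, columns: predicted classes)."""
        total = confusion.sum()
        oa = np.trace(confusion) / total                                  # overall accuracy
        per_class = np.diag(confusion) / confusion.sum(axis=1)            # per-class accuracy
        aa = per_class.mean()                                             # average accuracy
        expected = (confusion.sum(axis=1) * confusion.sum(axis=0)).sum() / total ** 2
        kappa = (oa - expected) / (1 - expected)                          # kappa coefficient
        return oa, aa, kappa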
The batch size and the number of epochs are set to 16 and 200, respectively. Stochastic gradient descent (SGD) is adopted to optimize the training parameters. The initial learning rate is 0.05 and decreases by 1% every 50 epochs. All the experiments are repeated five times to reduce the effect of random error.
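A minimal training-loop sketch with these hyperparameters is given below; the synthetic data, the stand-in linear model, and the momentum value are assumptions, and the "1% every 50 epochs" decay is interpreted as a StepLR schedule with gamma = 0.99.

    import torch
    import torch.nn as nn
    from torch.utils.data import DataLoader, TensorDataset

    # Illustrative setup with the stated hyperparameters (batch size 16, 200 epochs,
    # SGD, initial learning rate 0.05); data and model below are placeholders.
    train_loader = DataLoader(
        TensorDataset(torch.randn(160, 30 * 15 * 15), torch.randint(0, 9, (160,))),
        batch_size=16, shuffle=True)
    model = nn.Linear(30 * 15 * 15, 9)                   # stand-in for MFFDAN
    criterion = nn.CrossEntropyLoss()
    optimizer = torch.optim.SGD(model.parameters(), lr=0.05, momentum=0.9)
    scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=50, gamma=0.99)

    for epoch in range(200):
        for patches, targets in train_loader:
            optimizer.zero_grad()
            loss = criterion(model(patches), targets)
            loss.backward()
            optimizer.step()
        scheduler.step()                                  # learning-rate decay every 50 epochs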

3.3. Analysis of Parameters

(1) Impact of Principal Components: In this section, the influence of the number of principal components C on the classification results is tested. PCA is first used to reduce the number of bands to 20, 30, 40, 50, and 60, respectively. The experimental results on the four datasets are shown in Figure 5. For the University of Pavia and Kennedy Space Center datasets, the values of OA, AA, and Kappa rise from 20 components (PU_OA = 98.81%, KSC_OA = 96.92%) and reach a peak at 30 (PU_OA = 98.96%, KSC_OA = 99.07%). The increase in OA on the KSC dataset is much larger than that on the PU dataset, so the number of principal components has a more significant impact on the KSC dataset. When the number of principal components exceeds 30, these indicators decline to varying degrees. For the Salinas Valley and GRSS_DFC_2013 datasets, the values of OA, AA, and Kappa show no such relationship with the number of principal components, and the OA values fluctuate across different numbers of components. This is most likely because the latter two datasets have a higher land-cover resolution but a lower spectral band sensitivity.
(2) Impact of Spatial Size: The choice of the spatial size of the input image block has a crucial influence on classification accuracy. To find the best spatial size, the model is tested with different spatial sizes: C × 9 × 9, C × 11 × 11, C × 13 × 13, C × 15 × 15, C × 17 × 17, and C × 19 × 19, where C is the fixed number of principal components. The number of principal components is set to 30 in all experiments to guarantee fairness.
Figure 6 shows the values of OA, AA, and Kappa for different spatial sizes on the four datasets. The values of OA, AA, and Kappa rise steadily from spatial sizes of C × 9 × 9 to C × 15 × 15 on the PU, KSC, and SA datasets and then decrease at larger spatial sizes. That is to say, a target pixel and its adjacent neighbors usually belong to the same class up to a certain spatial size, while oversized regions may introduce additional noise and deteriorate the classification performance. For the GRSS_DFC_2013 dataset, the three indicators fluctuate between spatial sizes of C × 11 × 11 and C × 15 × 15 and decrease at larger spatial sizes. To sum up, the patch size for all datasets is set to C × 15 × 15.

3.4. Ablation Study

In order to test the effectiveness of the proposed densely connected attention module, several ablation experiments are designed. The models used for comparison are identical to the proposed network except for the removal of the densely connected attention module. The number of principal components and the spatial size are set to 30 and 15 × 15, respectively. The results on the four datasets are displayed in Figure 7. The densely connected attention module improves the OA values by approximately 0.93–1.75% on the four datasets. Specifically, in most cases (the PU, SA, and GRSS datasets), a single channel attention block outperforms a single spatial attention block by approximately 0.06–0.34% in OA. However, this does not mean that the spatial attention mechanism does not work; it still plays a significant role in improving classification performance, as spatial attention alone improves OA by 0.52–1.27% compared with the model without attention. The improvement is likely because the densely connected attention module combines attention mechanisms with densely connected layers. On the one hand, the attention mechanism can adaptively assign different weights to spatial-channel regions and suppress the effects of interfering pixels. On the other hand, the densely connected layers relieve gradient vanishing as the model grows deeper and enhance feature reuse during the convergence of the network.

3.5. Comparison with Other Methods

To evaluate the performance of the proposed method, seven classification methods were selected for comparison: SVM with a radial basis function kernel (RBF-SVM), multinomial logistic regression (MLR), random forest (RF), a spatial CNN with 2D kernels [11], PyResNet [23], HybridSN [14], and SSRN [13]. Figure 8, Figure 9, Figure 10 and Figure 11 show the classification maps of the different methods on the PU, KSC, SA, and GRSS_DFC_2013 datasets.
The spatial size and the number of principal components are set to C × 15 × 15 and 30 for all DL methods to guarantee fairness. All the comparison experiments are carried out five times, and the average values and standard deviations are reported. Since the SSRN model does not perform PCA as described in its original paper, the same setting is kept and this process is omitted for SSRN in the experiments. The other hyperparameters of each network are configured according to the corresponding papers.
The numbers of training, validation, and testing samples on the University of Pavia dataset for comparison are in accordance with Table 1. Table 5 reports the overall accuracy, average accuracy, and kappa coefficient of the different methods. It is obvious that classic machine learning methods such as RBF-SVM, RF, and MLR achieve relatively lower overall accuracies than the DL methods, because they classify only along the spectral dimension of the HSIs and ignore the 2D spatial characteristics. The proposed method obtains the best results among all the comparison methods, with 98.96% overall accuracy, which is 1.63% higher than the second-best (97.33%) achieved by HybridSN. Figure 8 shows the classification maps of these methods.
The selection of samples for training, validation, and testing on the Kennedy Space Center dataset is consistent with Table 2. The training samples for the KSC dataset are increased to avoid underfitting of the network. The 2D CNN model achieves the worst results among all the DL methods, since it is difficult to obtain complex spectral-spatial features with 2D convolutional filters alone. The SSRN model obtains the second-best results thanks to its stacked 3D convolutional layers, which extract discriminative spectral-spatial features from the raw images.
The proposed method achieves the best results (OA = 99.07%, AA = 97.70%, and Kappa × 100 = 98.97). The quantitative classification results in terms of the three indices and the accuracies for each class are reported in Table 6. The classification maps of these methods are displayed in Figure 9.
The selection of samples for training, validation, and testing on the Salinas Valley and GRSS_DFC_2013 datasets is consistent with Table 3 and Table 4, and the quantitative results of the different methods on these two datasets are reported in Table 7 and Table 8, respectively. The proposed method outperforms the other comparison methods in terms of the OA, AA, and Kappa indicators; the 3D multibranch feature fusion module extracts multiscale features from the raw hyperspectral images and improves performance significantly. Figure 10 and Figure 11 show the classification maps on these two datasets, which clearly indicate that the proposed model produces better visual impressions than the other comparison methods. Among the other models, HybridSN and SSRN achieve better classification performance than the traditional machine learning methods and the shallow DL classifiers. Specifically, HybridSN achieves 98.97% OA, 98.95% AA, and 98.85% Kappa on the SA dataset, demonstrating the excellent feature representation ability of deep neural networks, while SSRN achieves 97.62% OA, 98.47% AA, and 97.36% Kappa; its large kernel filters are good at extracting features from the raw HSIs without the PCA process. By comparison, shallow 2D classifiers such as the 2D CNN and PyResNet cannot obtain comprehensive features and miss rich spectral information during training, so they do not achieve classification performance as competitive as HybridSN and SSRN.

4. Conclusions

In this paper, a novel deep learning method called the 3D-2D multibranch feature fusion and dense attention network is proposed for hyperspectral image classification. Both 3D and 2D CNNs are combined in an end-to-end network. Specifically, the 3D multibranch feature fusion module is designed to extract multiscale features from the spatial and spectral dimensions of the hyperspectral images. Following that, a 2D dense attention module is introduced, which consists of a densely connected block and a spatial-channel attention block. The dense block is intended to alleviate gradient vanishing in deep layers and enhance the reuse of features. The attention block includes a spatial attention block and a channel attention block, which adaptively select discriminative features from the spatial and spectral dimensions of the redundant hyperspectral images. Combining the densely connected block and the attention block significantly improves the classification performance and accelerates the convergence of the network. The elaborate hybrid module raises the OA by 0.93–1.75% on the four datasets. Additionally, the proposed model outperforms the other comparison methods in terms of OA by 1.63–18.11% on the PU dataset, 0.26–16.06% on the KSC dataset, 0.76–13.48% on the SA dataset, and 0.46–23.39% on the GRSS_DFC_2013 dataset. These experimental results demonstrate that the proposed model can achieve satisfactory classification performance.

Author Contributions

Y.Z. (Yiyan Zhang) and H.G. conceived the ideas; Z.C., C.L. and Y.Z. (Yunfei Zhang) gave suggestions for improvement; Y.Z. (Yiyan Zhang) and H.G. conducted the experiment and compiled the paper. H.Z. assisted and revised the paper. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the National Natural Science Foundation of China (62071168), the Natural Science Foundation of Jiangsu Province (BK20211201), the Fundamental Research Funds for the Central Universities (No. B200202183), the China Postdoctoral Science Foundation (No. 2021M690885), and the National Key R&D Program of China (2018YFC1508106).

Data Availability Statement

Some or all data used during the study are available online in accordance with funder data retention policies (http://www.ehu.eus/ccwintco/index.php?title=Hyperspectral_Remote_Sensing_Scenes and https://hyperspectral.ee.uh.edu, accessed on 20 August 2021).

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Banerjee, B.; Raval, S.; Cullen, P.J. UAV-hyperspectral imaging of spectrally complex environments. Int. J. Remote Sens. 2020, 41, 4136–4159. [Google Scholar] [CrossRef]
  2. Govender, M.; Chetty, K.; Bulcock, H. A review of hyperspectral remote sensing and its application in vegetation and water resource studies. Water SA 2007, 33, 145–151. [Google Scholar] [CrossRef] [Green Version]
  3. Mcmanamon, P.F. Dual Use Opportunities for EO Sensors-How to Afford Military Sensing. In Proceedings of the 15th Annual AESS/IEEE Dayton Section Symposium, Fairborn, OH, USA, 14–15 May 1998. [Google Scholar]
  4. Ham, J.; Chen, Y.; Crawford, M.M.; Ghosh, J. Investigation of the random forest framework for classification of hyperspectral data. IEEE Trans. Geosci. Remote Sens. 2005, 43, 492–501. [Google Scholar] [CrossRef] [Green Version]
  5. Li, J.; Bioucas-Dias, J.M.; Plaza, A. Semisupervised Hyperspectral Image Classification Using Soft Sparse Multinomial Logistic Regression. IEEE Geosci. Remote Sens. Lett. 2012, 10, 318–322. [Google Scholar] [CrossRef]
  6. Kuo, B.-C.; Ho, H.-H.; Li, C.-H.; Hung, C.-C.; Taur, J.-S. A Kernel-Based Feature Selection Method for SVM With RBF Kernel for Hyperspectral Image Classification. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2014, 7, 317–326. [Google Scholar] [CrossRef]
  7. Imani, M.; Ghassemian, H. Principal component discriminant analysis for feature extraction and classification of hyperspectral images. In Proceedings of the 2014 Iranian Conference on Intelligent Systems (ICIS), Bam, Iran, 4–6 February 2014; pp. 1–5. [Google Scholar]
  8. Gao, H.; Zhang, Y.; Chen, Z.; Li, C. A Multiscale Dual-Branch Feature Fusion and Attention Network for Hyperspectral Images Classification. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2021, 14, 8180–8192. [Google Scholar] [CrossRef]
  9. Hong, D.; Gao, L.; Yao, J.; Zhang, B.; Plaza, A.; Chanussot, J. Graph Convolutional Networks for Hyperspectral Image Classification. IEEE Trans. Geosci. Remote Sens. 2021, 59, 5966–5978. [Google Scholar] [CrossRef]
  10. Rasti, B.; Hong, D.; Hang, R.; Ghamisi, P.; Kang, X.; Chanussot, J.; Benediktsson, J.A. Feature Extraction for Hyperspectral Imagery: The Evolution From Shallow to Deep: Overview and Toolbox. IEEE Geosci. Remote Sens. Mag. 2020, 8, 60–88. [Google Scholar] [CrossRef]
  11. Makantasis, K.; Karantzalos, K.; Doulamis, A.; Doulamis, N. Deep supervised learning for hyperspectral data classification through convolutional neural networks. In Proceedings of the 2015 IEEE International Geoscience and Remote Sensing Symposium (IGARSS), Milan, Italy, 26–31 July 2015. [Google Scholar]
  12. He, M.; Li, B.; Chen, H. Multi-scale 3D deep convolutional neural network for hyperspectral image classification. In Proceedings of the 2017 IEEE International Conference on Image Processing (ICIP), Beijing, China, 17–20 September 2017. [Google Scholar]
  13. Zhong, Z.; Li, J.; Luo, Z.; Chapman, M. Spectral–Spatial Residual Network for Hyperspectral Image Classification: A 3-D Deep Learning Framework. IEEE Trans. Geosci. Remote Sens. 2017, 56, 847–858. [Google Scholar] [CrossRef]
  14. Roy, S.K.; Krishna, G.; Dubey, S.R.; Chaudhuri, B.B. HybridSN: Exploring 3-D–2-D CNN Feature Hierarchy for Hyperspectral Image Classification. IEEE Geosci. Remote Sens. Lett. 2020, 17, 277–281. [Google Scholar] [CrossRef] [Green Version]
  15. Itti, L.; Koch, C.; Niebur, E. A model of saliency-based visual attention for rapid scene analysis. IEEE Trans. Pattern Anal. Mach. Intell. 1998, 20, 1254–1259. [Google Scholar] [CrossRef] [Green Version]
  16. Itti, L.C.; Koch, C. Computational Modelling of Visual Attention. Nat. Rev. Neurosci. 2001, 2, 194–203. [Google Scholar] [CrossRef] [Green Version]
  17. Hu, J.; Shen, L.; Sun, G. Squeeze-and-Excitation Networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Salt Lake City, UT, USA, 18–23 June 2018; pp. 7132–7141. [Google Scholar]
  18. Chen, L.; Zhang, H.; Xiao, J.; Nie, L.; Shao, J.; Liu, W.; Chua, T.-S. SCA-CNN: Spatial and Channel-Wise Attention in Convolutional Networks for Image Captioning. In Proceedings of the 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Honolulu, HI, USA, 21–26 July 2017; pp. 6298–6306. [Google Scholar]
  19. Ba, J.L.; Kiros, J.R.; Hinton, G.E. Layer Normalization. arXiv 2016, arXiv:1607.06450. [Google Scholar]
  20. Woo, S.; Park, J.; Lee, J.-Y.; Kweon, I.S. CBAM: Convolutional Block Attention Module. In Proceedings of the European Conference on Computer Vision (ECCV), Munich, Germany, 8–14 September 2018. [Google Scholar]
  21. Huang, G.; Liu, Z.; Van Der Maaten, L.; Weinberger, K.Q. Densely Connected Convolutional Networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Honolulu, HI, USA, 21–26 July 2017; pp. 4700–4708. [Google Scholar] [CrossRef] [Green Version]
  22. Debes, C.; Merentitis, A.; Heremans, R.; Hahn, J.; Frangiadakis, N.; Van Kasteren, T.; Liao, W.; Bellens, R.; Pizurica, A.; Gautama, S.; et al. Hyperspectral and LiDAR Data Fusion: Outcome of the 2013 GRSS Data Fusion Contest. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2014, 7, 2405–2418. [Google Scholar] [CrossRef]
  23. Paoletti, M.E.; Haut, J.M.; Fernandez-Beltran, R.; Plaza, J.; Plaza, A.J.; Pla, F. Deep Pyramidal Residual Networks for Spectral–Spatial Hyperspectral Image Classification. IEEE Trans. Geosci. Remote Sens. 2018, 57, 740–754. [Google Scholar] [CrossRef]
Figure 1. The architecture of 3D multibranch fusion module.
Figure 2. The architecture of 2D channel attention block.
Figure 3. The architecture of 2D spatial attention block.
Figure 4. The flowchart of the MFFDAN network.
Figure 5. OA, AA, and Kappa accuracies with different principal components on four datasets. (a) Effect of principal components on University of Pavia dataset, (b) Effect of principal components on Kennedy Space Center, (c) Effect of principal components on Salinas Valley dataset, (d) Effect of principal components on GRSS_DFC_2013 dataset.
Figure 6. OA, AA, and Kappa accuracies with different spatial size on four datasets. (a) Effect of spatial size on University of Pavia dataset, (b) Effect of spatial size on Kennedy Space Center dataset, (c) Effect of spatial size on Salinas Valley dataset, (d) Effect of spatial size on GRSS_DFC_2013 dataset.
Figure 7. The overall accuracies of ablation experiments on four datasets.
Figure 8. The classification maps of different methods on University of Pavia dataset. (a) False color map with truth labels, (b) ground truth, (c) RBF-SVM, (d) MLR, (e) RF, (f) 2D-CNN, (g) PyResNet, (h) SSRN, (i) HybridSN, and (j) proposed.
Figure 9. The classification maps of different methods on Kennedy Space Center dataset. (a) false color map with truth labels, (b) ground truth, (c) RBF-SVM, (d) MLR, (e) RF, (f) 2D-CNN, (g) PyResNet, (h) SSRN, (i) HybridSN, and (j) proposed.
Figure 10. The classification maps of different methods on Salinas Valley dataset. (a) False color map with truth labels, (b) ground truth, (c) RBF-SVM, (d) MLR, (e) RF, (f) 2D-CNN, (g) PyResNet, (h) SSRN, (i) HybridSN, and (j) proposed.
Figure 11. The classification maps of different methods on GRSS_DFC_2013 dataset. (a) false color map with truth labels, (b) ground truth, (c) RBF-SVM, (d) MLR, (e) RF, (f) 2D-CNN, (g) PyResNet, (h) SSRN, (i) HybridSN, and (j) proposed.
Table 1. The number of training, validation and testing samples for University of Pavia dataset.
No. | Name | Train | Val | Test
1 | Asphalt | 66 | 657 | 5908
2 | Meadows | 186 | 1846 | 16,617
3 | Gravel | 21 | 208 | 1870
4 | Trees | 31 | 303 | 2730
5 | Painted-m-s | 13 | 133 | 1199
6 | Bare Soil | 50 | 498 | 4481
7 | Bitumen | 13 | 132 | 1185
8 | Self-B-Bricks | 37 | 365 | 3280
9 | Shadows | 9 | 94 | 844
Total | | 426 | 4236 | 38,114
Table 2. The number of training, validation, and testing samples for Kennedy Space Center dataset.
No. | Name | Train | Val | Test
1 | Scrub | 76 | 69 | 616
2 | Willow swamp | 24 | 22 | 197
3 | CP hammock | 26 | 23 | 207
4 | Slash pine | 25 | 23 | 204
5 | Oak/Broadleaf | 16 | 15 | 130
6 | Hardwood | 23 | 21 | 185
7 | Swamp | 11 | 9 | 85
8 | Graminoid marsh | 43 | 39 | 349
9 | Spartina marsh | 52 | 47 | 421
10 | Cattail marsh | 40 | 36 | 328
11 | Salt marsh | 42 | 38 | 339
12 | Mud flats | 50 | 45 | 408
13 | Water | 93 | 83 | 751
Total | | 521 | 470 | 4220
Table 3. The number of training, validation, and testing samples for Salinas Valley dataset.
No. | Name | Train | Val | Test
1 | Brocoli_green_1 | 20 | 199 | 1790
2 | Brocoli_green_2 | 37 | 369 | 3320
3 | Fallow | 20 | 196 | 1760
4 | Fallow_plow | 14 | 138 | 1242
5 | Fallow_smooth | 27 | 265 | 2386
6 | Stubble | 40 | 392 | 3527
7 | Celery | 36 | 354 | 3189
8 | Grapes_untrained | 113 | 1116 | 10,042
9 | Soil_develop | 62 | 614 | 5527
10 | Corn_weeds | 33 | 325 | 2920
11 | Lettuce_4wk | 11 | 106 | 951
12 | Lettuce_5wk | 19 | 191 | 1717
13 | Lettuce_6wk | 9 | 91 | 816
14 | Lettuce_7wk | 11 | 106 | 953
15 | Vinyard_untrain | 73 | 720 | 6475
16 | Vinyard_trellis | 18 | 179 | 1610
Total | | 543 | 5361 | 48,225
Table 4. The number of training, validation, and testing samples for GRSS_DFC_2013 dataset.
No. | Name | Train | Val | Test
1 | Healthy grass | 125 | 113 | 1013
2 | Stressed grass | 125 | 113 | 1016
3 | Synthetic grass | 70 | 63 | 564
4 | Tree | 124 | 112 | 1008
5 | Soil | 124 | 112 | 1006
6 | Water | 33 | 29 | 263
7 | Residential | 127 | 114 | 1027
8 | Commercial | 124 | 112 | 1008
9 | Road | 125 | 113 | 1014
10 | Highway | 123 | 110 | 994
11 | Railway | 124 | 111 | 1000
12 | Parking lot 1 | 123 | 111 | 999
13 | Parking lot 2 | 47 | 42 | 380
14 | Tennis court | 43 | 39 | 346
15 | Running track | 66 | 59 | 535
Total | | 1503 | 1353 | 12,173
Table 5. The categorized results of different methods on the University of Pavia dataset.
Class | RBF-SVM | MLR | RF | 2D-CNN | PyResNet | SSRN | HybridSN | Proposed
(RBF-SVM, MLR, and RF are conventional classifiers; 2D-CNN, PyResNet, SSRN, and HybridSN are classic neural networks.)
1 | 89.00 ± 1.10 | 90.21 ± 1.56 | 86.11 ± 2.21 | 93.30 ± 1.62 | 93.45 ± 1.14 | 99.19 ± 0.59 | 97.64 ± 1.37 | 99.50 ± 0.09
2 | 98.10 ± 0.65 | 96.35 ± 1.64 | 96.03 ± 1.23 | 99.39 ± 0.82 | 99.45 ± 0.50 | 98.18 ± 3.20 | 99.65 ± 0.22 | 99.97 ± 0.02
3 | 60.47 ± 5.17 | 42.36 ± 2.17 | 30.19 ± 3.89 | 71.09 ± 3.83 | 77.90 ± 2.74 | 85.28 ± 15.81 | 78.24 ± 4.87 | 88.53 ± 0.90
4 | 87.37 ± 4.35 | 79.68 ± 3.45 | 76.47 ± 5.17 | 94.30 ± 2.25 | 90.08 ± 2.95 | 95.85 ± 1.46 | 96.65 ± 0.32 | 97.88 ± 0.30
5 | 99.07 ± 0.32 | 98.89 ± 0.33 | 98.26 ± 0.44 | 33.26 ± 47.03 | 99.82 ± 0.09 | 100.00 ± 0.00 | 100.00 ± 0.00 | 100.00 ± 0.00
6 | 69.52 ± 3.50 | 50.48 ± 5.62 | 36.90 ± 4.85 | 95.24 ± 3.55 | 94.91 ± 1.41 | 96.42 ± 5.17 | 99.92 ± 0.10 | 99.98 ± 0.03
7 | 69.22 ± 12.02 | 5.82 ± 2.96 | 58.73 ± 6.89 | 57.18 ± 40.45 | 90.28 ± 0.75 | 94.85 ± 2.71 | 98.19 ± 1.26 | 97.92 ± 0.89
8 | 86.00 ± 3.04 | 87.85 ± 1.69 | 83.92 ± 5.04 | 87.33 ± 7.13 | 74.65 ± 8.45 | 84.49 ± 18.88 | 94.29 ± 2.79 | 98.83 ± 0.23
9 | 99.72 ± 0.08 | 99.47 ± 0.15 | 99.30 ± 0.20 | 19.30 ± 27.29 | 89.84 ± 1.86 | 99.57 ± 0.46 | 87.27 ± 9.37 | 97.12 ± 1.04
OA (%) | 88.84 ± 0.34 | 82.77 ± 0.42 | 80.85 ± 0.69 | 90.00 ± 2.38 | 93.64 ± 0.62 | 96.14 ± 2.27 | 97.33 ± 0.45 | 98.96 ± 0.09
AA (%) | 84.27 ± 1.65 | 72.34 ± 0.29 | 73.99 ± 1.40 | 72.26 ± 8.43 | 90.04 ± 0.71 | 94.87 ± 1.67 | 94.65 ± 1.04 | 97.75 ± 0.22
Kappa × 100 | 84.96 ± 0.50 | 76.50 ± 0.50 | 73.76 ± 0.89 | 86.53 ± 3.31 | 91.53 ± 0.82 | 94.90 ± 2.94 | 96.46 ± 0.60 | 98.62 ± 0.12
Table 6. The categorized results of different methods on the Kennedy Space Center dataset.
Class | RBF-SVM | MLR | RF | 2D-CNN | PyResNet | SSRN | HybridSN | Proposed
(RBF-SVM, MLR, and RF are conventional classifiers; 2D-CNN, PyResNet, SSRN, and HybridSN are classic neural networks.)
1 | 95.85 ± 0.66 | 95.83 ± 0.79 | 94.63 ± 1.12 | 99.22 ± 1.00 | 99.42 ± 0.24 | 100.00 ± 0.00 | 99.48 ± 0.46 | 99.97 ± 0.06
2 | 85.39 ± 3.40 | 86.48 ± 2.06 | 81.19 ± 6.49 | 90.26 ± 4.42 | 88.43 ± 2.64 | 100.00 ± 0.00 | 95.52 ± 2.76 | 99.91 ± 0.18
3 | 88.09 ± 4.18 | 90.87 ± 4.98 | 89.48 ± 2.32 | 88.69 ± 3.39 | 96.52 ± 2.16 | 100.00 ± 0.00 | 92.26 ± 5.44 | 99.22 ± 0.80
4 | 42.47 ± 5.36 | 34.45 ± 6.04 | 68.11 ± 5.63 | 68.43 ± 1.62 | 65.20 ± 4.68 | 95.30 ± 1.81 | 81.14 ± 7.23 | 90.82 ± 5.06
5 | 47.45 ± 5.56 | 24.55 ± 11.17 | 46.48 ± 4.11 | 20.92 ± 29.59 | 66.44 ± 1.98 | 73.56 ± 3.10 | 77.38 ± 4.87 | 98.01 ± 2.70
6 | 47.67 ± 3.57 | 44.95 ± 2.56 | 37.09 ± 4.22 | 50.97 ± 36.05 | 90.13 ± 1.60 | 98.38 ± 1.65 | 95.83 ± 4.21 | 99.61 ± 0.57
7 | 83.58 ± 5.51 | 79.37 ± 7.81 | 75.79 ± 10.16 | 32.98 ± 46.65 | 100.00 ± 0.00 | 100.00 ± 0.00 | 97.05 ± 3.15 | 82.52 ± 34.95
8 | 90.77 ± 1.36 | 70.93 ± 4.96 | 72.58 ± 5.28 | 87.20 ± 2.37 | 99.40 ± 0.53 | 100.00 ± 0.00 | 100.00 ± 0.00 | 100.00 ± 0.00
9 | 94.53 ± 2.41 | 83.89 ± 1.50 | 92.78 ± 4.10 | 100.00 ± 0.00 | 99.86 ± 0.20 | 99.22 ± 1.11 | 99.91 ± 0.17 | 99.92 ± 0.10
10 | 92.69 ± 4.21 | 86.37 ± 30.35 | 81.81 ± 3.19 | 88.37 ± 2.79 | 97.25 ± 2.06 | 100.00 ± 0.00 | 100.00 ± 0.00 | 100.00 ± 0.00
11 | 96.98 ± 1.52 | 94.96 ± 1.43 | 95.81 ± 1.77 | 99.56 ± 0.63 | 98.67 ± 0.43 | 100.00 ± 0.00 | 99.73 ± 0.53 | 100.00 ± 0.00
12 | 85.83 ± 3.29 | 84.81 ± 2.61 | 82.21 ± 1.93 | 84.91 ± 9.16 | 70.71 ± 4.69 | 100.00 ± 0.00 | 98.23 ± 1.89 | 100.00 ± 0.00
13 | 99.93 ± 0.14 | 99.86 ± 0.18 | 99.74 ± 0.09 | 99.84 ± 0.23 | 100.00 ± 0.00 | 100.00 ± 0.00 | 100.00 ± 0.00 | 100.00 ± 0.00
OA (%) | 87.59 ± 0.55 | 83.01 ± 0.69 | 84.87 ± 0.89 | 87.91 ± 1.77 | 92.84 ± 0.21 | 98.81 ± 0.17 | 97.28 ± 0.59 | 99.07 ± 0.82
AA (%) | 80.86 ± 0.87 | 75.18 ± 0.86 | 78.28 ± 0.86 | 77.80 ± 5.35 | 90.16 ± 0.06 | 97.42 ± 0.19 | 95.12 ± 0.73 | 97.70 ± 2.83
Kappa × 100 | 86.16 ± 0.61 | 81.03 ± 0.77 | 83.12 ± 0.98 | 86.51 ± 1.98 | 92.02 ± 0.24 | 98.67 ± 0.18 | 96.97 ± 0.65 | 98.97 ± 0.92
Table 7. The categorized results of different methods on the Salinas Valley dataset.
Class | RBF-SVM | MLR | RF | 2D-CNN | PyResNet | SSRN | HybridSN | Proposed
(RBF-SVM, MLR, and RF are conventional classifiers; 2D-CNN, PyResNet, SSRN, and HybridSN are classic neural networks.)
1 | 97.28 ± 1.25 | 97.76 ± 0.69 | 97.07 ± 1.28 | 99.89 ± 0.15 | 99.77 ± 0.33 | 98.74 ± 0.93 | 99.99 ± 0.02 | 100.00 ± 0.00
2 | 99.53 ± 0.29 | 99.62 ± 0.20 | 99.80 ± 0.10 | 98.92 ± 1.67 | 99.99 ± 0.01 | 99.99 ± 0.01 | 100.00 ± 0.00 | 100.00 ± 0.00
3 | 96.81 ± 1.75 | 95.36 ± 2.28 | 88.51 ± 3.57 | 100.00 ± 0.00 | 100.00 ± 0.00 | 99.06 ± 1.32 | 99.99 ± 0.02 | 100.00 ± 0.00
4 | 98.72 ± 0.61 | 98.78 ± 0.30 | 95.06 ± 2.66 | 82.72 ± 3.62 | 93.70 ± 1.90 | 99.71 ± 0.26 | 95.92 ± 0.86 | 99.81 ± 0.12
5 | 95.96 ± 1.94 | 98.29 ± 0.47 | 94.15 ± 3.07 | 96.58 ± 1.31 | 94.91 ± 1.29 | 94.76 ± 2.25 | 95.98 ± 0.91 | 97.45 ± 0.42
6 | 99.50 ± 0.41 | 99.77 ± 0.14 | 98.83 ± 0.88 | 99.96 ± 0.08 | 100.00 ± 0.00 | 100.00 ± 0.00 | 100.00 ± 0.00 | 100.00 ± 0.00
7 | 99.44 ± 0.18 | 99.48 ± 0.11 | 98.34 ± 1.48 | 99.52 ± 0.45 | 99.77 ± 0.25 | 99.66 ± 0.24 | 99.95 ± 0.04 | 99.95 ± 0.03
8 | 89.97 ± 1.28 | 86.38 ± 3.12 | 81.03 ± 1.51 | 90.90 ± 1.58 | 87.79 ± 1.38 | 93.28 ± 4.67 | 98.50 ± 0.55 | 99.63 ± 0.07
9 | 99.04 ± 0.47 | 99.16 ± 0.26 | 98.84 ± 0.16 | 99.92 ± 0.13 | 99.82 ± 0.17 | 99.60 ± 0.45 | 99.86 ± 0.19 | 100.00 ± 0.00
10 | 85.27 ± 2.60 | 83.70 ± 1.11 | 81.16 ± 3.21 | 97.78 ± 2.31 | 99.61 ± 0.36 | 99.11 ± 0.64 | 99.40 ± 0.31 | 99.87 ± 0.08
11 | 90.35 ± 2.53 | 89.12 ± 2.30 | 82.84 ± 3.58 | 99.77 ± 0.29 | 100.00 ± 0.00 | 99.91 ± 0.13 | 99.94 ± 0.08 | 100.00 ± 0.00
12 | 99.55 ± 0.44 | 99.70 ± 0.10 | 98.41 ± 0.68 | 99.31 ± 0.53 | 99.93 ± 0.07 | 98.38 ± 1.33 | 99.83 ± 0.34 | 99.93 ± 0.08
13 | 96.98 ± 0.96 | 97.89 ± 1.68 | 95.45 ± 3.21 | 96.54 ± 2.19 | 99.93 ± 0.05 | 98.13 ± 0.09 | 99.49 ± 0.60 | 99.96 ± 0.09
14 | 92.94 ± 1.50 | 91.62 ± 1.94 | 93.15 ± 1.48 | 88.42 ± 4.46 | 95.84 ± 0.71 | 98.08 ± 0.94 | 96.75 ± 3.65 | 99.75 ± 0.10
15 | 47.86 ± 1.19 | 50.73 ± 4.21 | 52.43 ± 2.55 | 88.39 ± 4.00 | 98.13 ± 0.70 | 97.21 ± 1.81 | 98.04 ± 0.43 | 99.80 ± 0.22
16 | 94.80 ± 4.26 | 92.25 ± 2.29 | 88.56 ± 3.21 | 99.72 ± 0.24 | 99.83 ± 0.04 | 99.94 ± 0.08 | 99.55 ± 0.49 | 100.00 ± 0.00
OA (%) | 88.78 ± 0.29 | 88.33 ± 0.40 | 86.25 ± 0.47 | 95.35 ± 0.35 | 96.63 ± 0.20 | 97.62 ± 0.73 | 98.97 ± 0.06 | 99.73 ± 0.02
AA (%) | 92.75 ± 0.41 | 92.47 ± 0.23 | 90.23 ± 0.70 | 96.15 ± 0.20 | 98.06 ± 0.09 | 98.47 ± 0.18 | 98.95 ± 0.20 | 99.72 ± 0.02
Kappa × 100 | 87.45 ± 0.32 | 86.96 ± 0.44 | 84.64 ± 0.54 | 94.82 ± 0.39 | 96.26 ± 0.22 | 97.36 ± 0.80 | 98.85 ± 0.07 | 99.70 ± 0.02
Table 8. The categorized results of different methods on the GRSS_DFC_2013 dataset.
Class | RBF-SVM | MLR | RF | 2D-CNN | PyResNet | SSRN | HybridSN | Proposed
(RBF-SVM, MLR, and RF are conventional classifiers; 2D-CNN, PyResNet, SSRN, and HybridSN are classic neural networks.)
1 | 92.34 ± 5.15 | 86.39 ± 0.38 | 94.55 ± 2.19 | 92.19 ± 0.88 | 97.81 ± 0.69 | 98.29 ± 1.22 | 98.35 ± 0.73 | 98.06 ± 0.74
2 | 84.79 ± 7.71 | 94.47 ± 4.11 | 97.78 ± 0.85 | 95.67 ± 3.15 | 98.46 ± 0.50 | 97.59 ± 0.08 | 98.60 ± 0.74 | 99.08 ± 0.23
3 | 97.22 ± 1.30 | 99.70 ± 0.00 | 91.81 ± 2.02 | 93.67 ± 2.09 | 99.15 ± 0.57 | 98.69 ± 0.72 | 99.64 ± 0.20 | 99.70 ± 0.06
4 | 90.29 ± 2.11 | 97.11 ± 0.38 | 92.65 ± 1.39 | 92.64 ± 2.04 | 96.02 ± 1.16 | 97.77 ± 0.10 | 99.60 ± 0.17 | 97.8 ± 0.10
5 | 94.86 ± 1.68 | 99.13 ± 0.40 | 96.00 ± 2.36 | 99.97 ± 0.04 | 99.92 ± 0.12 | 99.83 ± 0.18 | 99.81 ± 0.27 | 100.00 ± 0.00
6 | 81.86 ± 1.30 | 82.20 ± 1.34 | 78.71 ± 3.77 | 72.05 ± 9.31 | 91.05 ± 2.57 | 87.92 ± 0.61 | 96.96 ± 2.10 | 94.5 ± 1.90
7 | 79.14 ± 4.57 | 85.61 ± 2.66 | 81.26 ± 4.36 | 75.00 ± 1.11 | 85.39 ± 2.60 | 91.43 ± 0.61 | 91.70 ± 1.40 | 93.44 ± 0.25
8 | 58.72 ± 7.63 | 50.73 ± 2.72 | 71.57 ± 2.09 | 57.82 ± 3.61 | 92.78 ± 0.52 | 90.24 ± 0.33 | 90.08 ± 0.70 | 91.71 ± 0.79
9 | 75.64 ± 6.74 | 70.39 ± 5.03 | 74.33 ± 2.81 | 58.30 ± 1.33 | 92.60 ± 0.74 | 97.20 ± 1.00 | 96.22 ± 1.65 | 99.41 ± 0.61
10 | 54.95 ± 11.71 | 52.92 ± 9.58 | 74.25 ± 4.64 | 69.33 ± 6.48 | 99.26 ± 0.54 | 100.00 ± 0.00 | 99.91 ± 0.13 | 100.00 ± 0.00
11 | 62.75 ± 6.41 | 56.71 ± 1.61 | 72.87 ± 1.81 | 79.91 ± 8.67 | 97.02 ± 0.61 | 99.46 ± 0.76 | 99.74 ± 0.26 | 99.91 ± 0.07
12 | 53.50 ± 10.72 | 47.12 ± 3.31 | 70.78 ± 4.91 | 86.43 ± 2.02 | 97.10 ± 0.78 | 99.66 ± 0.00 | 99.11 ± 0.48 | 99.57 ± 0.08
13 | 20.13 ± 7.94 | 4.03 ± 2.95 | 7.13 ± 2.48 | 9.34 ± 5.32 | 81.99 ± 4.99 | 89.46 ± 1.50 | 94.93 ± 0.48 | 94.84 ± 0.25
14 | 81.08 ± 12.99 | 82.60 ± 8.35 | 93.32 ± 3.23 | 92.77 ± 2.82 | 100.00 ± 0.00 | 100.00 ± 0.00 | 100.00 ± 0.00 | 100 ± 0.00
15 | 98.53 ± 0.44 | 98.81 ± 0.13 | 89.51 ± 3.56 | 100.00 ± 0.00 | 99.95 ± 0.08 | 99.84 ± 0.22 | 100.00 ± 0.00 | 100 ± 0.00
OA (%) | 75.49 ± 0.71 | 74.57 ± 0.78 | 81.23 ± 0.60 | 80.10 ± 0.57 | 95.56 ± 0.51 | 96.96 ± 0.15 | 97.52 ± 0.30 | 97.96 ± 0.13
AA (%) | 75.05 ± 0.90 | 73.80 ± 0.97 | 79.10 ± 0.84 | 78.34 ± 0.72 | 95.23 ± 0.64 | 96.49 ± 0.06 | 97.64 ± 0.32 | 97.87 ± 0.18
Kappa × 100 | 73.47 ± 0.77 | 72.45 ± 0.84 | 79.67 ± 0.66 | 78.45 ± 0.61 | 95.20 ± 0.55 | 96.72 ± 0.16 | 97.31 ± 0.32 | 97.80 ± 0.14
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Gao, H.; Zhang, Y.; Zhang, Y.; Chen, Z.; Li, C.; Zhou, H. A 3D-2D Multibranch Feature Fusion and Dense Attention Network for Hyperspectral Image Classification. Micromachines 2021, 12, 1271. https://doi.org/10.3390/mi12101271
