Article

Densely Connected Pyramidal Dilated Convolutional Network for Hyperspectral Image Classification

1 School of Communications and Information Engineering (School of Artificial Intelligence), Xi’an University of Posts and Telecommunications, Xi’an 710121, China
2 School of Computer Science, Shaanxi Normal University, Xi’an 710119, China
* Author to whom correspondence should be addressed.
Remote Sens. 2021, 13(17), 3396; https://doi.org/10.3390/rs13173396
Submission received: 16 July 2021 / Revised: 20 August 2021 / Accepted: 24 August 2021 / Published: 26 August 2021

Abstract

Recently, with the extensive application of deep learning techniques in the hyperspectral image (HSI) field, particularly convolutional neural networks (CNNs), research on HSI classification has entered a new stage. To enlarge the small receptive field of naive convolution, dilated convolution has been introduced into HSI classification. However, dilated convolution usually generates blind spots in the receptive field, so the spatial information it captures is discontinuous. To solve this problem, a densely connected pyramidal dilated convolutional network (PDCNet) is proposed in this paper. Firstly, a pyramidal dilated convolutional (PDC) layer that integrates several sub-dilated convolutional layers is proposed, where the dilation factor of the sub-dilated convolutions increases exponentially, yielding multi-scale receptive fields. Secondly, the number of sub-dilated convolutional layers grows in a pyramidal pattern with the depth of the network, so more comprehensive hyperspectral information is captured in the receptive field. Furthermore, a feature fusion mechanism combining pixel-by-pixel addition and channel stacking is adopted to extract more abstract spectral–spatial features. Finally, to reuse the features of previous layers more effectively, dense connections are applied in densely pyramidal dilated convolutional (DPDC) blocks. Experiments on three well-known HSI datasets indicate that the proposed PDCNet achieves good classification performance compared with other popular models.

Graphical Abstract

1. Introduction

Hyperspectral remote sensing images are characterized by high dimensionality, high resolution, and rich spectral and spatial information [1], and they have been widely used in numerous real-world tasks, such as sea ice detection [2], ecosystem monitoring [3,4], vegetation species analysis [5] and classification tasks [6,7]. With the rapid progress of remote sensing technology and artificial intelligence (AI), many new deep learning theories and methods have been proposed to handle the challenges and problems faced by the hyperspectral image field [8].
Hyperspectral image classification is a vital branch of the HSI field, which has gradually become a crucial research direction for scholars in the AI community. It is worth noting that hyperspectral image pixel-level classification determines the category label of each pixel, whereas segmentation determines the boundary of a given category of objects; HSI classification and segmentation are related to each other, and segmentation involves the classification of individual pixels. A number of conventional spectral-based classifiers, such as support vector machines (SVM) [9,10], random forest [11,12,13], k-nearest neighbors (kNN) [14,15,16] and Bayesian methods [17], only show good classification performance when abundant labeled training samples are available. Recently, more and more deep learning methods have been applied to HSI classification tasks and have achieved good results, for instance, generative adversarial networks (GAN) [18,19,20], recurrent neural networks (RNN) [21,22,23], fully convolutional networks (FCN) [24,25,26] and convolutional neural networks (CNN) [27,28,29]. Among these methods, the capability of CNN is particularly salient. For most image classification problems, a CNN is mainly composed of an input layer, convolutional layers, pooling layers, fully connected layers and a softmax layer. The convolutional layer, which consists of multiple hidden layers, is the most crucial part of a CNN. Hu et al. designed a one-dimensional CNN (1D CNN) to extract feature maps of HSI in the spectral domain [30]. Cao et al. proposed a two-dimensional CNN (2D CNN) based on active learning, which reduces cost and improves accuracy by selecting the most informative pixels for labeling [31]. Hyperspectral images are typically divided into three-dimensional cubes, which makes it possible to classify them with a three-dimensional network structure. Xu et al. suggested a three-dimensional CNN (3-D CNN) framework for effectively extracting HSI information to acquire accurate classification results [27]. Li et al. designed a 3D CNN model combined with regularization to effectively extract the spectral–spatial features of hyperspectral images [32]. In addition, Woo et al. focused on another architectural design, the convolutional block attention module (CBAM), which can learn “what” and “where”, respectively, to enhance the network’s representation capability [33]. Ma et al. proposed a double-branch multi-attention (DBMA) mechanism for HSI classification, in which the branches of the network are used to capture more discriminative spectral and spatial features [34]. Li et al. proposed a double-branch dual-attention (DBDA) framework for HSI classification to capture and extract feature information [35].
Generally speaking, one way to obtain better classification results is to increase the depth of the network. However, vanishing and exploding gradient problems appear as the number of network layers increases, which leads to a decline in accuracy and to network degradation. He et al. proposed the residual network (ResNet) to solve these problems; it integrates the shallow-layer features of the network into subsequent layers through skip connections [36]. Zhong et al. applied the residual-block structure to HSI classification and achieved promising results [37]. Zhong et al. designed a supervised 3D spectral–spatial residual network (SSRN) to mitigate the decrease in model accuracy [38]. Meng et al. proposed a multipath ResNet (MPRN) framework that widens the network to realize effective gradient flow [39]. Inspired by ResNet, Han et al. proposed a novel residual structure, namely the pyramidal residual network (PresNet), which gradually increases the dimension of the feature maps to acquire as much information as possible [40]. Paoletti et al. designed a deep PresNet for HSI classification, which involves more location information as the depth of the network increases [41]. A fully dense multi-scale fusion network (FDMFN) for HSI classification was proposed in [42]; by fusing multi-scale spectral–spatial features, it can obtain good classification results. Another way to improve the performance of the network is to reuse the features of all previous layers, which reduces parameters while avoiding making the network too deep or too wide, and alleviates the vanishing gradient problem to a certain extent. Huang et al. proposed the densely connected network (DenseNet), which establishes dense connections between all preceding layers and subsequent layers to achieve feature reuse along the channel dimension [43]. A mixed link network for HSI classification was proposed in [44]; by combining the advantages of ResNet and DenseNet, it can obtain richer feature information and enhance the network’s learning ability. Paoletti et al. proposed a novel DenseNet-based framework for HSI classification, which can effectively alleviate overfitting and reduce excessive parameters [45].
In addition, Nalepa et al. proposed a resource-frugal quantized convolutional neural network for hyperspectral image segmentation, in which the quantization process is well integrated with the training process [46]. Moreover, the network can greatly reduce the complexity of the model without affecting the classification accuracy. Fu et al. proposed a superpixel segmentation algorithm [47]. For high-texture hyperspectral images, the algorithm can decompose them into inhomogeneous blocks, which well maintains their homogeneous characteristics. Sun et al. proposed a fully convolutional segmentation network, which can simultaneously recognize the true labels of all pixels in an HSI cube [48]. For cubes that contain more land cover categories, it has better recognition capability. The DeepLab v3+ network shows strong performance in the field of semantic segmentation. Si et al. applied it to HSI classification for feature extraction [49]; an SVM classifier is then used to obtain the final classification result.
A CNN will have better classification performance if its convolutional layers can capture more spectral–spatial information. Although the problem that the receptive field of naive convolution is too small can be effectively solved by dilated convolution, there are unrecognized regions (blind spots) in the receptive field of dilated convolution. Inspired by the densely connected multi-dilated DenseNet (D3Net) [50], a densely connected pyramidal dilated convolutional network for HSI classification is proposed in this paper to acquire more comprehensive feature information. The network is composed of several densely pyramidal dilated convolutional blocks and transition layers. In order to increase the size of the receptive field and eliminate blind spots without increasing parameters, dilated convolutions with different dilation factors are applied to build PDC layers. A hybrid feature fusion mechanism is applied to obtain richer information and reduce the depth of the network. The main contributions of the paper are summarized as follows. Firstly, a larger receptive field is obtained by applying dilated convolution to the CNN. Furthermore, in order to avoid blind spots in the receptive field of the feature maps extracted by dilated convolution, we set the dilation factors appropriately and increase the width of the network. Then, the hybrid feature fusion method of pixel-by-pixel addition and channel stacking is applied to extract more abstract feature information while effectively utilizing features. In addition, by combining dilated convolution and dense connections, our network (PDCNet) achieves better performance than some popular methods on well-known datasets (the Indian Pines, Pavia University and Salinas Valley datasets).
The remainder of this paper is organized as follows: state-of-the-art techniques related to convolutional neural networks for HSI classification are reviewed in Section 2. In Section 3, the methods and network architecture proposed in this paper are described in detail. The experimental settings and classification results are presented in Section 4. The training samples, the number of parameters and the running time of the networks are discussed in Section 5. The conclusion of the paper and the outlook for future work are given in Section 6.

2. Related Work

Before introducing the hyperspectral image classification network proposed in this paper, some relevant techniques are reviewed in this section, namely residual network structure, pyramidal network structure, and dilated convolution.

2.1. Residual Network Structure

CNNs can achieve good HSI classification performance. However, when the depth of the network reaches a certain degree, the vanishing gradient phenomenon becomes more and more obvious, which leads to degradation of network performance. ResNet [37] addresses this problem by adding identity mappings between layers. Recently, the idea of ResNet has been applied to various network models with good results. In order to solve the problems of the overly small receptive field and localized feature information obtained by naive convolution, Meng et al. proposed a deep residual involution network (DRIN) for hyperspectral image classification by combining residual connections and involution [51]. It can model long-range spatial interaction through enlarged involution kernels, which makes the feature information obtained by the network more comprehensive. Hyperspectral images often have high-dimensional characteristics. Treating all bands equally causes the neural network to learn features from bands that are useless for classification, which affects the final classification results. To solve this problem, Zhu et al. combined residual connections with the attention mechanism and proposed a residual spectral–spatial attention network (RSSAN) for HSI classification [6]. Firstly, the spectral–spatial attention mechanism is used to emphasize useful bands and suppress useless ones. Then, the resulting feature information is sent to the residual spectral–spatial attention (RSSA) module. However, how to judge which bands are useless and which are useful is a key problem. Moreover, the attention mechanism in the RSSA module increases the parameters and the computational cost.
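To make the identity-mapping idea concrete, the following is a minimal PyTorch sketch of a generic residual unit; it only illustrates the skip connection described above and is not the exact block used in ResNet, DRIN or RSSAN.

```python
import torch.nn as nn

class ResidualBlock(nn.Module):
    """Minimal residual unit: output = F(x) + x (identity mapping)."""
    def __init__(self, channels):
        super().__init__()
        self.body = nn.Sequential(
            nn.BatchNorm2d(channels), nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
            nn.BatchNorm2d(channels), nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
        )

    def forward(self, x):
        # The identity path lets gradients bypass the convolutional branch,
        # which mitigates the vanishing gradient problem in deep networks.
        return self.body(x) + x
```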

2.2. Pyramidal Network Structure

Based on the idea of ResNet, a pyramidal residual network (PresNet) for hyperspectral image classification was proposed in [41]. It involves more location information as the depth of the network increases. In the basic unit of the pyramidal residual network, the number of channels of each convolutional layer increases in a pyramid shape. In order to extract more discriminative and refined spectral–spatial features, Shi et al. proposed a double-branch network for hyperspectral image classification by combining the attention mechanism and pyramidal convolution [52]. Each branch contains two modules, namely the pyramidal spectral block (spectral attention) and the pyramidal spatial block (spatial attention). To overcome the limitation that the pyramidal convolutional layer has a single-size receptive field, Gong et al. proposed a pyramid pooling module, which can aggregate receptive fields of multiple scales and obtain more discriminative spatial context information [53]. The pyramid pooling module is mainly implemented by average pooling layers of different sizes, after which the feature map is restored to the original image size through deconvolution. However, the multi-path network model has more parameters than a single-path structure, which increases the running time of the network. In addition, the average pooling layers reduce the size of the feature map and lose some feature information.
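As a small illustration of the pyramidal widening idea, the helper below computes channel widths that grow gradually with depth; the starting width, total growth and block count are purely illustrative and do not reproduce the exact PresNet configuration.

```python
def pyramidal_widths(c_in: int, total_growth: int, num_blocks: int) -> list:
    """Channel widths that increase gradually (pyramid-like) with depth.
    Example: pyramidal_widths(32, 48, 4) -> [44, 56, 68, 80]."""
    step = total_growth / num_blocks
    return [round(c_in + step * (k + 1)) for k in range(num_blocks)]
```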

2.3. Dilated Convolution

Convolutional neural networks have shown outstanding performance in hyperspectral image classification in recent years. However, naive convolution focuses on the local feature information of hyperspectral images, which causes the network to fail to learn the spatial similarity of adjacent regions. As shown in Figure 1, the receptive field of dilated convolution is usually larger than that of naive convolution, so more spatial information can be obtained, which effectively avoids the problem of the limited features obtained by naive convolution. It is worth noting that, as shown in Figure 1b, there are unrecognized regions (blind spots) in the receptive field of dilated convolution, which cause the obtained spatial information to be discontinuous.
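The following short PyTorch sketch illustrates this: a 3 × 3 kernel with dilation d spans (3 − 1)·d + 1 pixels without adding any parameters, but the pixels between the sampled taps are skipped, which is exactly the blind-spot effect shown in Figure 1b. The 11 × 11 patch size used here is only illustrative.

```python
import torch
import torch.nn as nn

x = torch.randn(1, 1, 11, 11)            # a single-band 11 x 11 patch

naive   = nn.Conv2d(1, 1, kernel_size=3, padding=1, dilation=1)
dilated = nn.Conv2d(1, 1, kernel_size=3, padding=2, dilation=2)

# Both layers have the same number of weights (3 x 3 = 9), but the dilated
# kernel covers a 5 x 5 neighbourhood: (3 - 1) * 2 + 1 = 5. The pixels lying
# between the sampled taps are never touched, which creates the blind spots.
print(naive(x).shape, dilated(x).shape)  # both preserve the 11 x 11 spatial size
```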
A hybrid dilated convolution method combined with multi-scale residuals was proposed for HSI classification and obtains good classification results [54]. Although it achieves a larger receptive field through hybrid dilated convolution, there are still many blind spots in the receptive field. Furthermore, traditional CNNs mostly use fixed convolution kernels to extract features, which is not well suited to the multi-scale features in hyperspectral images. In order to solve these problems, Gao et al. proposed a multi-depth and multi-scale residual block (MDMSRB), which can fuse multi-scale receptive fields and multi-level features [55]. Although the MDMSRB can integrate multi-scale receptive fields, the problem of blind spots in the receptive field has not really been solved. In other words, when skip connections are introduced between different dilated convolution layers, there are still unrecognized areas in the receptive fields corresponding to the skip connections.
In order to take full advantage of dilated convolution, Xu et al. extended the idea of multi-scale feature fusion and dilated convolution from the spatial dimension to the spectral dimension by combining dilated convolution, 3D CNN and residual connections, which makes it better suited to HSI classification [27]. This method can obtain a wider range of spectral information, which is a unique advantage of dilated convolution in 3D CNNs. However, introducing dilated convolution into the spectral dimension brings about the problem of blind spots, which leads to discontinuity in the obtained spectral information. To overcome these problems, the PDCNet model is proposed in this paper.

3. Materials and Methods

3.1. Densely Connected Network Structure

With the development of deep learning, neural networks have shown excellent performance on image recognition tasks compared with traditional machine learning methods. Simonyan et al. proposed the famous VGGNet in 2014 [56], which is mainly used in large-scale image recognition. Then, ResNet [37] and DenseNet [45] for HSI classification came into being, which can extract more abstract spectral–spatial features with fewer parameters. DenseNet has an advantage over ResNet in that it applies more skip connections, which improve the reuse of the spectral–spatial features of previous layers and reduce the vanishing gradient.
All layers in DenseNet are directly connected to ensure the maximum transmission of information between network layers. Simply put, the input of each layer is the output of all previous layers. As depicted in Figure 2, the densely connected structure is composed of several basic units, where the input of the n-th basic unit, X^{(n)}, consists of the outputs of all previous units (1, 2, ..., n − 1) and the input of the 1st basic unit, and the output of each basic unit becomes the input of the next one. Each basic unit contains a batch normalization (BN) layer, the ReLU activation function and a convolutional layer. The input data are scaled to an appropriate range by the BN layer, and then the expression ability of the neural network is improved by the ReLU nonlinear activation function. The BN layer is defined as:
X^{(i)} = \gamma \times \frac{X^{(i)} - \mathrm{mean}[X^{(i)}]}{\sqrt{\mathrm{Var}[X^{(i)}]}} + \beta,
where γ and β are the scaling factor and the shift factor, respectively, and Var[·] is the variance of the input data. The BN layer can effectively avoid internal covariate shift and keep the data distribution stable. The output of the ReLU layer is sent to the convolutional layer to extract richer information.
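A minimal PyTorch sketch of the BN–ReLU–Conv basic unit and the dense connection pattern described above is given below; the channel counts and growth rate are arguments rather than the specific values used later in the paper.

```python
import torch
import torch.nn as nn

class DenseUnit(nn.Module):
    """Basic unit of Figure 2: BN -> ReLU -> 3 x 3 convolution."""
    def __init__(self, in_channels, growth_rate):
        super().__init__()
        self.bn = nn.BatchNorm2d(in_channels)
        self.relu = nn.ReLU(inplace=True)
        self.conv = nn.Conv2d(in_channels, growth_rate, kernel_size=3, padding=1)

    def forward(self, x):
        return self.conv(self.relu(self.bn(x)))

class DenseBlock(nn.Module):
    """Each unit receives the channel-wise stacking of all previous outputs."""
    def __init__(self, in_channels, growth_rate, num_units):
        super().__init__()
        self.units = nn.ModuleList(
            [DenseUnit(in_channels + i * growth_rate, growth_rate)
             for i in range(num_units)]
        )

    def forward(self, x):
        features = [x]
        for unit in self.units:
            features.append(unit(torch.cat(features, dim=1)))
        return torch.cat(features, dim=1)
```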

3.2. Densely Pyramidal Dilated Convolutional Block

Dilated convolution, rather than naive convolution, is applied in DPDC blocks, which can integrate more multi-scale context information without loss of resolution [54], thereby improving the spatial information utilization of HSI. Dilated convolution and the receptive field are described in detail in Section 3.3.
The three different convolution blocks are depicted in Figure 3. As shown in Figure 3a, three naive convolutional layers are densely connected. In order to increase the receptive field and obtain richer hyperspectral information without reducing the size of the feature maps, dilated convolution is applied to replace naive convolution. As depicted in Figure 3b, a larger receptive field is obtained by densely connecting multiple dilated convolutions with different dilation factors, but there are blind spots in the receptive field, which results in discontinuous acquired feature information. Reasonably setting the dilation factors of the dilated convolutions and increasing the width of the network like a pyramid are effective ways to obtain more abstract and comprehensive feature information (Figure 3c).
The DPDC block in this paper is composed of several PDC layers, and dense connections are adopted between different PDC layers to increase the flow of information within the network. Each PDC layer is composed of dilated convolutional layers with different dilation factors:
N_k = n_1^{1} \,\Lambda\, n_2^{2} \,\Lambda\, n_3^{4} \,\Lambda\, \cdots \,\Lambda\, n_k^{d},
where N_k represents the k-th PDC layer, and n_k^d indicates the k-th sub-dilated convolutional layer, with dilation factor d = 2^{k−1}, in the k-th PDC layer. Λ represents the stacking of sub-dilated convolutional layers. Different skip connections correspond to different dilation factors; generally speaking, a shallower skip connection corresponds to a smaller dilation factor. For instance, the skip connection between the input feature and the 3rd PDC layer corresponds to a sub-dilated convolutional layer with a dilation factor of 1, and the skip connection between the 1st PDC layer and the 3rd PDC layer corresponds to a sub-dilated convolutional layer with a dilation factor of 2. The width of the network increases as the number of PDC layers increases. The advantage of this structure is that a larger range of spatial information can be obtained while blind spots in the receptive field are avoided.
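A possible PyTorch sketch of a single PDC layer following this description is shown below: each incoming skip connection gets its own BN–ReLU–dilated-convolution branch with dilation factor 1, 2, 4, ... (shallower source, smaller factor), and the branch outputs are fused by pixel-by-pixel addition. The module and argument names are ours and are not taken from the original implementation.

```python
import torch.nn as nn

class PDCLayer(nn.Module):
    """One PDC layer: a sub-dilated 3 x 3 convolution per incoming skip
    connection, with dilation factors 1, 2, 4, ..., fused by addition."""
    def __init__(self, source_channels, out_channels):
        # source_channels: channel count of each skip connection, ordered from
        # the shallowest source (the block input) to the deepest PDC layer.
        super().__init__()
        self.branches = nn.ModuleList()
        for i, c in enumerate(source_channels):
            d = 2 ** i                            # shallower source -> smaller factor
            self.branches.append(nn.Sequential(
                nn.BatchNorm2d(c), nn.ReLU(inplace=True),
                nn.Conv2d(c, out_channels, kernel_size=3, padding=d, dilation=d),
            ))

    def forward(self, sources):
        # sources: list of feature maps, one per skip connection
        out = self.branches[0](sources[0])
        for branch, s in zip(self.branches[1:], sources[1:]):
            out = out + branch(s)                 # pixel-by-pixel addition
        return out
```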

3.3. Receptive Field

The receptive field is defined as the region dominated by each neuron in the model. In other words, in a convolutional neural network, the receptive field refers to the area of the original image onto which a pixel of a layer's output feature map is mapped. The receptive fields of the 3rd layer of the three blocks in Figure 3 are depicted in Figure 4, and the size of the convolutional kernel is 3 × 3. Red dots represent the points to which the filter is applied, and colored backgrounds represent the receptive field covered by the red dots.
Suppose that the input data is directly fed into these three blocks. The receptive field of the 3rd layer in the densely naive convolutional block: As shown in Figure 4a, firstly, the receptive field of 3 × 3 (purple shaded area) corresponds to the skip connection between the input and the 3rd layer (see Figure 3a). Secondly, the receptive field of 5 × 5 (green shaded area) corresponds to the skip connection between the 1st layer and the 3rd layer. Finally, the receptive field of 7 × 7 (blue shaded area) corresponds to the skip connection between the 2nd layer and the 3rd layer. Furthermore, they correspond to a grid point in the output feature map (yellow shaded area).
The receptive field of the 3rd layer in the densely naive dilated convolutional block: As shown in Figure 4b, the receptive field of 3 × 3 (purple shaded area) corresponds to the skip connection between the input and the 3rd layer (see Figure 3b), but it contains a large number of unrecognized areas, which leads to discontinuous hyperspectral information obtained. The skip connection from the 1st layer to the 3rd layer corresponds to a larger receptive field than the densely naive convolutional block, but there are still blind spots in the receptive field, which is caused by the unreasonable setting of the dilated factor.
The receptive field of the 3rd layer in the DPDC block: As shown in Figure 4c, compared with the receptive field of the densely naive convolutional block, the skip connection from the 1st layer to the 3rd layer in the densely pyramidal dilated convolutional block (see Figure 3c) has a larger receptive field. Compared with the densely naive dilated convolutional block, there are no blind spots in the receptive field corresponding to the skip connections from the 1st layer to the 3rd layer in the pyramidal dilated convolutional block. This mainly benefits from our reasonable setting of the dilation factors and the design of the PDC layer. The PDC layer performs different convolutional operations on the feature maps from different skip connections. For instance, the 3rd PDC layer in Figure 3c performs dilated convolutions with d = 1 and d = 2 on the feature maps from two different skip connections, respectively. In DenseNet, the feature maps of all previous k − 1 layers, [x_0, x_1, ..., x_{k−1}], are used as the input of the k-th layer:
X_k = R([x_0, x_1, \cdots, x_{k-1}]) \ast w_k,
where R(·) refers to the composite operation of batch normalization and the ReLU activation function, [x_0, x_1, ..., x_{k−1}] denotes the stacking of the feature maps (x_0: the input feature) on the channel dimension from layer 0 to layer k − 1, and the size of the convolutional kernel w_k is 3 × 3. The operator \ast_d with dilation factor d = 2^{k−1} is used to represent dilated convolution, and a variation of Equation (3) can be obtained by applying \ast_d to DenseNet:
Y_k = R([x_0, x_1, \cdots, x_{k-1}]) \ast_d w_k.
However, skip connections will cause blind spots in the receptive field, so that the feature information learned by the convolutional layer is not comprehensive. To overcome this problem, densely pyramidal dilated convolutional block is proposed and defined as follows:
H_k \ast_{d_i} W_k = \sum_{i=0}^{k-1} h_i \ast_{d_i} w_k^{i},
where H_k = [h_0, h_1, ..., h_{k−1}] = R([x_0, x_1, ..., x_{k−1}]) is the output of the composite layer, W_k refers to the set of convolutional kernels at the k-th layer and w_k^i denotes the convolutional kernel corresponding to the i-th skip connection of the k-th layer (w_k^i is a subset of W_k). The continuity of spatial information is well preserved in the DPDC block (Figure 4c). In other words, the blind spot problem of the densely naive dilated convolutional block is effectively solved by choosing appropriate dilation factors and increasing the network width like a pyramid. The more comprehensive feature information of the PDC layer is obtained by pixel-level addition of the feature maps of its internal sub-layers. Furthermore, the dense connection mode adopted between PDC layers makes the reuse of features more effective.
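As a small worked check (assuming the standard receptive-field recurrence for stacked convolutions), the span of a 3 × 3 kernel with dilation d is (3 − 1)d + 1, so the deepest path through three stacked sub-dilated convolutions with d = 1, 2, 4 gives:

```python
def effective_kernel(k: int, d: int) -> int:
    """Span of a k x k kernel with dilation d."""
    return (k - 1) * d + 1

# Each stacked layer enlarges the receptive field by effective_kernel(3, d) - 1.
rf = 1
for d in (1, 2, 4):
    rf += effective_kernel(3, d) - 1
print(rf)  # 15 -> a 15 x 15 receptive field; the gaps of each dilation (d - 1
           # pixels) are covered by the preceding layers, so no blind spots remain
```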

3.4. PDCNet Model

Taking PDCNet with three DPDC blocks as an example, its network structure is shown in Figure 5. BN + ReLU + Convolution (hereinafter referred to as Conv) is used as our basic structure; the BN and ReLU operations are omitted in Figure 5. The DenseNet model for HSI classification, the DPDC block, receptive fields and dilated convolution were introduced in Section 3.1, Section 3.2 and Section 3.3. Although the size of the receptive field can be effectively increased by dilated convolution, the feature information obtained is discontinuous due to the existence of blind spots. Therefore, while the dilation factors are carefully set in the DPDC block, the network width gradually increases like a pyramid, which helps to eliminate blind spots and acquire large-range, multi-scale feature information. Furthermore, to take advantage of the features of the previous layers, the dense connection pattern is introduced into PDCNet. High classification accuracy is achieved by combining dilated convolution and dense connections to extract more comprehensive and richer features. The Indian Pines dataset is used as an example input to the PDCNet model proposed in this paper.
PDCNet is composed of three DPDC blocks and two transition layers, with the transition layers embedded between the DPDC blocks. The hyperspectral image is divided into cubes and fed into the proposed network. Firstly, the input features are sent to a convolutional layer (kernel size 3 × 3) for feature extraction, and then they are sent to the subsequent modules of the network. Each DPDC block is densely connected by a different number of PDC layers, while each PDC layer N_k is stacked by different sub-dilated convolutional layers n_k^d. The input features of the DPDC block are allocated to the dilated convolutional layers in the PDC layer through skip connections. Secondly, a hybrid feature fusion mechanism is applied in PDCNet. As shown in Figure 3c, the DPDC block contains two feature fusion methods: pixel-by-pixel addition, which is used within each PDC layer, and channel stacking, which is applied between PDC layers. The hybrid feature fusion mechanism realizes the reuse of the output features of all previous layers while integrating large-range and multi-scale feature maps. In order to flexibly change the number of channels and reduce the parameters, a Conv with kernel size 1 × 1 is applied in the transition layer. Finally, the classification results are obtained by an adaptive average pooling layer and a fully connected layer.
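The skeleton below sketches how the pieces described above could be assembled. It assumes the PDCLayer class from the Section 3.2 sketch is in scope, and the stem width (64 channels) and the halving of channels in the transition layers are illustrative assumptions, not the paper's exact configuration.

```python
import torch
import torch.nn as nn

class DPDCBlock(nn.Module):
    """Densely connects several PDC layers: pixel-by-pixel addition inside each
    PDC layer, channel stacking between layers. Assumes `PDCLayer` (Section 3.2
    sketch) is in scope."""
    def __init__(self, in_channels, growth, num_layers=3):
        super().__init__()
        self.layers = nn.ModuleList()
        for k in range(1, num_layers + 1):
            # the k-th PDC layer receives k skip connections: the block input
            # plus the outputs of the k - 1 previous PDC layers
            self.layers.append(PDCLayer([in_channels] + [growth] * (k - 1), growth))
        self.out_channels = in_channels + num_layers * growth

    def forward(self, x):
        sources = [x]
        for layer in self.layers:
            sources.append(layer(sources))
        return torch.cat(sources, dim=1)          # channel stacking

class PDCNetSketch(nn.Module):
    """Stem conv -> [DPDC block -> 1 x 1 transition] -> pooling -> classifier."""
    def __init__(self, in_bands, num_classes, num_blocks=3, growth=52):
        super().__init__()
        channels, stages = 64, []                 # stem width is an assumption
        self.stem = nn.Conv2d(in_bands, channels, kernel_size=3, padding=1)
        for b in range(num_blocks):
            block = DPDCBlock(channels, growth)
            stages.append(block)
            channels = block.out_channels
            if b < num_blocks - 1:                # transition layer (1 x 1 Conv)
                stages.append(nn.Conv2d(channels, channels // 2, kernel_size=1))
                channels //= 2
        self.stages = nn.Sequential(*stages)
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.fc = nn.Linear(channels, num_classes)

    def forward(self, x):
        return self.fc(self.pool(self.stages(self.stem(x))).flatten(1))
```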

4. Experiments

4.1. Description of HSI Datasets

Indian Pines (IP): As a famous dataset for HSI classification, the IP dataset was captured by the Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) sensor over a remote sensing test site in northwestern Indiana, USA, in 1992. It is composed of 200 valid bands with a spectral range from 0.4 to 2.5 μm after discarding 20 water absorption bands. The IP image has 145 × 145 pixels with a spatial resolution of 20 m per pixel, and 16 vegetation classes are considered, e.g., alfalfa, oats, wheat, woods, etc. The ground-truth map, the false-color image and the corresponding color labels are given in Figure 6.
Pavia University (UP): It was gathered by the Reflective Optics System Imaging Spectrometer (ROSIS) over Pavia University, Italy, in 2001. It comprises 103 effective bands with a spectral range from 0.43 to 0.86 μm after removing 12 noisy bands. The UP image has 610 × 340 pixels with a spatial resolution of 1.3 m per pixel, and 9 feature categories are used, such as trees, gravel, bricks, etc. The ground-truth map, the false-color image and the corresponding color labels are shown in Figure 7.
Salinas Valley (SV): The SV dataset was obtained by the AVIRIS sensor over an agricultural region of Salinas Valley, California, USA, in 1998. It consists of 204 effective bands with a spectral range from 0.4 to 2.5 μm after discarding 20 bands with a low signal-to-noise ratio (SNR). The SV image has 512 × 217 pixels with a spatial resolution of 3.7 m per pixel, and 16 land cover classes are analyzed, e.g., fallow, stubble, celery, etc. The ground-truth map, the false-color image and the corresponding color labels are displayed in Figure 8.

4.2. Setting of Experimental Parameters

The PyTorch deep learning framework is used on a computer with a 2.90 GHz Intel Core i5-10400F central processing unit (CPU) and 16 GB of memory, and the average of five experimental runs is taken as the final classification result. Three evaluation indicators are used to evaluate the performance of the different networks: overall accuracy (OA), average accuracy (AA) and the kappa coefficient (Kappa).
As shown in Table 1, 15% of the labeled samples in the IP dataset are used as the training set. Similarly, 5% and 2% of the labeled samples in the UP and SV datasets, respectively, are used as the training sets, with the remaining labeled samples as the testing sets (Table 2 and Table 3). To better illustrate the robustness of the network, the performance of the compared networks under different proportions of training samples is shown in Section 5.
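A per-class random sampling sketch consistent with these proportions is shown below; whether the original experiments sample per class or over all labeled pixels at once is not stated here, so this is only one reasonable reading.

```python
import numpy as np

def split_labeled_pixels(labels, train_ratio, seed=0):
    """Per-class random split of labeled pixels (label 0 = unlabeled background).
    Returns flat pixel indices for the training and testing sets."""
    rng = np.random.default_rng(seed)
    flat = labels.ravel()
    train_idx, test_idx = [], []
    for c in np.unique(flat):
        if c == 0:                                # skip unlabeled pixels
            continue
        idx = np.flatnonzero(flat == c)
        rng.shuffle(idx)
        n_train = max(1, int(round(train_ratio * idx.size)))
        train_idx.extend(idx[:n_train])
        test_idx.extend(idx[n_train:])
    return np.array(train_idx), np.array(test_idx)

# e.g. 15% of the labeled IP pixels for training, the rest for testing:
# train_idx, test_idx = split_labeled_pixels(ip_ground_truth, 0.15)
```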
To verify the effectiveness of the method proposed in this paper, several network models are adopted for comparative experiments. The optimal parameters of SVM [9], a traditional machine learning method, are obtained by a grid search algorithm. In addition, comparative experiments are also carried out on some deep learning methods: 3-D CNN [32], FDMFN [42], PresNet [41] and DenseNet [45]. A baseline network (BMNet) composed of three densely naive convolutional blocks and two transition layers, and a dilated convolutional network (DCNet) composed of three densely naive dilated convolutional blocks and two transition layers, are also constructed for the comparison experiments in this paper.
The relevant hyper-parameters of the experiments are set as follows. The patch size in the comparative experiments with the other models is set to 11 × 11, and the number of epochs and the batch size are both set to 100. The learning rates of 3D-CNN, FDMFN, DenseNet and PDCNet are set to 0.001, and the learning rate of PresNet is 0.1. We use the Adaptive Moment Estimation (Adam) optimizer for 3D-CNN, FDMFN, DenseNet and PDCNet, and the Stochastic Gradient Descent (SGD) optimizer for PresNet. A cosine annealing learning-rate scheduler is used in the comparative experiments.
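A hedged sketch of these settings for PDCNet is given below; PDCNetSketch is the illustrative model from the Section 3.4 sketch, and T_max for the cosine annealing scheduler is our assumption (set to the number of epochs).

```python
import torch

# Hyper-parameters following Section 4.2 (IP dataset: 200 bands, 16 classes).
model = PDCNetSketch(in_bands=200, num_classes=16)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
scheduler = torch.optim.lr_scheduler.CosineAnnealingLR(optimizer, T_max=100)

for epoch in range(100):
    # ... one training pass over the 11 x 11 patches goes here ...
    scheduler.step()
```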

4.3. Influence of Parameters

Growth Rate g: This parameter controls the number of output channels of the convolutional layers. In the DPDC block, the number of output channels of each PDC layer increases by g. For instance, the final number of output channels of a DPDC block with three PDC layers increases by 3 × g. By adjusting this parameter, the information flow in the network can be controlled flexibly. The growth rate g in PDCNet is set to 52 because it achieves the highest classification accuracy, as shown in Table 4.
PDC Layer: The influence of PDCNet with different numbers of PDC layers in each DPDC block on the overall accuracy is shown in Table 5. Note that here the number of DPDC blocks is fixed to 3. PDCNet with 5 PDC layers in each DPDC block has the highest OA on the IP dataset. However, PDCNet with 3 PDC layers and PDCNet with 5 PDC layers in each block have little difference in accuracy on the IP dataset, so we choose PDCNet with 3 PDC layers in each block. PDCNet, which contains 3 PDC layers in each DPDC block, has the highest OA on the UP dataset. PDCNet with 2 PDC layers in each DPDC block reached the highest OA on the SV dataset.
DPDC Block: The influence of PDCNet with different numbers of DPDC blocks on the overall accuracy is shown in Table 6. Note that here the number of PDC layers in each block is fixed to 3. PDCNet with 2 DPDC blocks has the highest OA on the IP dataset. PDCNet with 3 DPDC blocks has the highest OA on the UP dataset in Table 6. PDCNet with 3 DPDC blocks has the highest accuracy on the SV dataset.
From the perspective of accuracy, comparing Table 5 and Table 6, firstly, we choose PDCNet with 2 DPDC blocks and 3 PDC layers in each block as the optimal PDCNet in the IP dataset. Secondly, PDCNet with 3 DPDC blocks and 3 PDC layers in each block is considered the optimal PDCNet in the UP dataset. Finally, the PDCNet with 3 DPDC blocks and 2 PDC layers in each block has the highest accuracy in the SV dataset.
Patch Size: The impact of different patches on the overall accuracy of the network is shown in Table 7. The network proposed in this paper achieves good results under the different patch sizes. When the patch size is 11 × 11 , PDCNet has the highest accuracy on the UP dataset, and has good performance on other datasets. In addition, considering the impact of patch size on training time, the patch size of the PDCNet model is set to 11 × 11 .

4.4. Ablation Experiments

As shown in Figure 3, three different blocks are designed in this paper. BMNet: BMNet is constructed by stacking three densely naive convolutional blocks (Figure 3a) and two transition layers, where the dilation factor of each convolutional layer is 1 and three naive convolutional layers are densely connected to form each block. DCNet: Compared with BMNet, DCNet is constructed by stacking three densely naive dilated convolutional blocks (Figure 3b) and two transition layers, where the dilation factor (d = 2^{k−1}) of each dilated convolutional layer increases in turn, which yields a larger receptive field. However, there are blind spots in the receptive field. PDCNet: In order to reasonably increase the receptive field without introducing blind spots, a PDC layer containing sub-dilated convolutional layers with different dilation factors is proposed (Figure 3c). The DPDC block is composed of three PDC layers, whose width increases with depth like a pyramid. The basic structure of PDCNet consists of three DPDC blocks and two transition layers stacked alternately. To illustrate the effectiveness of the proposed network, BMNet, DCNet and PDCNet are tested under the same parameter settings (i.e., patch size, learning rate, growth rate, etc.).
The overall accuracy of BMNet, DCNet and PDCNet under different proportions of training samples is shown in Figure 9. The overall accuracy on the IP dataset with different proportions of training samples is shown in Figure 9a. The overall accuracy of PDCNet is represented by the red line, which is the highest among the compared models. As depicted in Figure 9b, as the proportion of training samples increases, the overall accuracies of the three networks become closer and closer, but on the whole, PDCNet still shows good performance. As shown in Figure 9c, the overall accuracy of PDCNet is much higher than that of the other networks with 2% of the training samples. The OA, AA and Kappa of BMNet, DCNet and PDCNet with the same hyper-parameter settings on the three datasets (IP, UP and SV) are shown in Figure 10. As a whole, the proposed network has the highest classification performance.

4.5. Classification Results

4.5.1. Classification Results (IP Dataset)

The classification results of the PDCNet framework and other comparison methods on the IP dataset are shown in Table 8. Correspondingly, Figure 11 shows the classification maps of the model designed in this paper and the other models, where Figure 11a,b are the false-color image and the ground truth, respectively. Obviously, compared with the other networks, the model designed in this paper has higher accuracy.
As shown in Table 8, the OA, AA and Kappa of the proposed network (PDCNet) are 99.47%, 99.03% and 99.39%, respectively. According to the classification results for Alfalfa (class 1), the accuracy of PDCNet reaches 97.95%, which is higher than that of the other models. Compared with SVM, 3-D CNN, FDMFN, PresNet and DenseNet, the overall accuracy of the proposed network is increased by 14.93%, 2.22%, 1.00%, 0.73% and 0.35%, respectively. The average accuracy and Kappa coefficient are also improved to different degrees. As depicted in Figure 11, the classification accuracy of SVM is poor, and there are many noisy spots in its classification map. The 3-D CNN has a poor ability to process edge information, which leads to edge classification errors in many categories, such as Corn-notill (class 2) and Soybean-notill (class 10). FDMFN, PresNet, DenseNet and PDCNet have better classification performance, but FDMFN has poor classification ability on Alfalfa and Corn-notill. Furthermore, PresNet cannot correctly classify the edges of Soybean-mintill (class 11) and Buildings-Grass-Trees-Drives (class 15). DenseNet achieves accuracy close to that of PDCNet, but its classification map contains more internal noise than that of PDCNet. This problem can be avoided by setting the dilation factor of the dilated convolution reasonably.
While the PDC layer acquires a larger receptive field, it also ensures the continuity of spatial information, which can effectively reduce noise pollution in the receptive field. Therefore, compared with the classification results of other models, the classification map of PDCNet (Figure 11) has less noise and spots on the IP dataset.

4.5.2. Classification Results (UP Dataset)

The classification results of the proposed network and other comparison methods on the UP dataset are shown in Table 9. Correspondingly, Figure 12 shows the classification maps of the PDCNet model and the other models, where Figure 12a,b are the false-color image and the ground truth, respectively. In summary, the model proposed in this paper has the highest accuracy compared with the other networks.
As shown in Table 9, the OA, AA and Kappa of the designed network (PDCNet) reach 99.82%, 98.67% and 99.76%, respectively. Compared with SVM, 3-D CNN, FDMFN, PresNet and DenseNet, the Kappa coefficient of PDCNet is improved by 12.50%, 2.06%, 0.71%, 0.65% and 0.11%, respectively. The overall accuracy and average accuracy are also improved to different degrees. As depicted in Figure 12, there is much noise in the classification areas of Gravel (class 3), Bare Soil (class 6) and Bitumen (class 7) in the classification map of SVM. Relatively speaking, the deep learning methods can reduce the noise in the classification maps of the UP dataset. However, 3-D CNN, FDMFN and PresNet still have unsatisfactory classification results on Gravel and Bitumen. Although DenseNet has better classification performance on Bitumen, it still clearly misclassifies Gravel. It is worth noting that PDCNet has good classification results in areas that are difficult to classify, such as Gravel, Bare Soil and Bitumen.
Compared with the single feature fusion method of DenseNet, the feature fusion mechanism that combines pixel-by-pixel addition and channel stacking applied in PDCNet is more effective, and a larger receptive field is captured by dilated convolution. The spectral–spatial features obtained by PDCNet are more abstract and comprehensive, which makes it possible to classify some areas that are more difficult to distinguish accurately.

4.5.3. Classification Results (SV Dataset)

The classification results of the network proposed in this paper (PDCNet) and other comparison methods on the SV dataset are shown in Table 10. Correspondingly, Figure 13 shows the classification maps of PDCNet and the other models, where Figure 13a,b are the false-color image and the ground truth, respectively. In short, the proposed model has higher accuracy than the other networks on the SV dataset.
As shown in Table 10, the OA, AA and Kappa of the network proposed in this paper (PDCNet) reach 99.18%, 99.62% and 99.08%, respectively. Compared with SVM, 3-D CNN, FDMFN, PresNet and DenseNet, the overall accuracy of PDCNet is improved by 8.52%, 5.43%, 1.56%, 1.02% and 1.06%, respectively. The average accuracy and Kappa coefficient are also improved to different degrees. As depicted in Figure 13, SVM cannot classify Grapes_untrained (class 8) and Vinyard_untrained (class 15) well, and there is serious noise pollution in the classification areas of these categories. Although 3-D CNN alleviates the problem of noise pollution to a certain extent, it is more sensitive to edge information, for example, in Soil_vinyard_develop (class 9), Lettuce_romaine_7wk (class 14) and Corn_senesced_green_weeds (class 10). In addition, for Grapes_untrained and Vinyard_untrained, PDCNet has less pollution and higher classification accuracy than FDMFN, PresNet and DenseNet.
The higher classification results are mainly attributed to the combination of two ideas in PDCNet. Firstly, the blind spot problem in the receptive field is solved by setting the dilation factors reasonably and increasing the network width like a pyramid, which makes the classification map contain less noise and fewer spots. Secondly, the hybrid feature fusion method is adopted to obtain richer and more comprehensive feature information. Furthermore, a larger receptive field is acquired through dilated convolution, which allows the edge features of each category to be better distinguished.
From the perspective of the experimental results, compared with some traditional classification methods, the PDCNet proposed in this paper shows the best classification results on the three datasets. Firstly, in terms of classification accuracy, PDCNet obtains the highest accuracy on all three datasets. Secondly, the classification maps of PDCNet on the three datasets suffer the least pollution and contain the least noise and spots. From the point of view of the network structure, we have introduced dilated convolution and skip connections in the DPDC block; while obtaining a larger receptive field, this also eliminates the blind spot problem caused by dilated convolution, which allows PDCNet to obtain more continuous and comprehensive spatial information.

4.6. Comparison with Other Segmentation Method

In this section, we use the PDCNet model structure shown in Figure 5 to conduct a comparative experiment with another hyperspectral image segmentation method (DeepLab v3+) [49] on the UP and KSC (Kennedy Space Center) datasets. The corresponding classification results are shown in Table 11. We randomly select 5% of the labeled samples in the UP and KSC datasets for training.
As shown in Table 11, the proposed network and DeepLab v3+ achieve similar OA and Kappa on the KSC dataset, although the AA of PDCNet is lower than that of DeepLab v3+. It is worth noting that the accuracy of PDCNet on the UP dataset is higher than that of DeepLab v3+: the OA, AA and Kappa of PDCNet are 0.72%, 0.31% and 0.95% higher, respectively.

5. Discussion

5.1. Influence of Training Samples

Different proportions of training samples on the IP, UP and SV datasets are adopted to measure the performance of the different networks. The overall accuracies of SVM, 3D CNN, FDMFN, PresNet, DenseNet and PDCNet are shown in Table 12. Note that PDCNet with 3 DPDC blocks (3 PDC layers in each block) is used for comparison here. On the IP dataset, the network proposed in this paper is 0.79%, 0.66%, 0.69% and 0.45% higher than PresNet, and 0.26%, 0.27%, 0.31%, 0.31% and 0.14% higher than DenseNet, under the different training proportions. The overall accuracy of PDCNet is also improved on the UP and SV datasets. The designed network shows good overall accuracy under different proportions of training samples.

5.2. Analysis of Running Time and Number of Network Parameters

Table 13 shows the running time and number of parameters of the different networks on the IP, UP and SV datasets. Note that PDCNet with three DPDC blocks and three PDC layers in each block is used for comparison here. Since each PDC layer in PDCNet can contain several sub-dilated convolutional layers, the training time of the network designed in this paper is longer than that of the other networks. In addition, the proposed network has more parameters than 3D CNN and FDMFN, but fewer parameters than DenseNet and PresNet.

6. Conclusions

In this paper, we propose a densely connected pyramidal dilated convolutional neural network for hyperspectral image classification, which can capture more comprehensive spatial information. Firstly, the PDC layer is composed of different numbers of dilated convolutions with different dilation factors to obtain receptive fields of multiple scales. Secondly, in order to eliminate blind spots in the receptive field, we densely connect different numbers of PDC layers to form a DPDC block. It can be seen from the classification maps on the three datasets that the classification map of PDCNet suffers the least pollution and contains the least noise and spots, which is mainly due to the design of the DPDC block. Finally, a hybrid feature fusion mechanism of pixel-by-pixel addition and channel stacking is applied in PDCNet to improve the discriminative power of the features; this is another reason for the good classification accuracy. In addition, the experimental results on the three datasets show that our method obtains good classification performance compared with other popular models.
Since we have increased the width of the network, the training time of PDCNet is relatively long; therefore, in future work, methods to reduce the computing cost will be considered and applied to the network. In addition, in order to further obtain more abstract spectral–spatial features, some new techniques will be considered, such as channel shuffling and the utilization of more frequency-domain information in the pooling layer.

Author Contributions

Conceptualization, Z.M., J.Z. and H.L.; methodology, F.Z., Z.M. and J.Z.; software, J.Z. and Z.M.; writing—original draft preparation, J.Z., Z.M., F.Z. and H.L.; writing—review and editing, Z.M. and J.Z.; funding acquisition, F.Z. All authors have read and agreed to the published version of the manuscript.

Funding

This work is supported by the National Natural Science Foundation of China (Grant Nos. 62071379, 61901365 and 61571361), the Natural Science Basic Research Plan in Shaanxi Province of China (Grant Nos. 2021JM-461 and 2020JM-299), the Fundamental Research Funds for the Central Universities (Grant No. GK202103085), and New Star Team of Xi’an University of Posts & Telecommunications (Grant No. xyt2016-01).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Three public datasets used in this paper can be found and experimented at http://www.ehu.eus/ccwintco/index.php/Hyperspectral_Remote_Sensing_Scenes (accessed on 3 June 2021).

Acknowledgments

The authors would like to thank the editor and the anonymous reviewers who made suggestions for our paper, as well as the peer researchers who made their source code available to those who love research.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Shabbir, S.; Ahmad, M. Hyperspectral image classification–traditional to deep models: A survey for future prospects. arXiv 2021, arXiv:2101.06116. [Google Scholar]
  2. Han, Y.; Li, J.; Zhang, Y.; Hong, Z.; Wang, J. Sea ice detection based on an improved similarity measurement method using hyperspectral data. Sensors 2017, 17, 1124. [Google Scholar] [CrossRef] [Green Version]
  3. Stuart, M.B.; McGonigle, A.J.; Willmott, J.R. Hyperspectral imaging in environmental monitoring: A review of recent developments and technological advances in compact field deployable systems. Sensors 2019, 19, 3071. [Google Scholar] [CrossRef] [Green Version]
  4. Garzon-Lopez, C.X.; Lasso, E. Species classification in a tropical alpine ecosystem using UAV-Borne RGB and hyperspectral imagery. Drones 2020, 4, 69. [Google Scholar] [CrossRef]
  5. Borana, S.; Yadav, S.; Parihar, S. Hyperspectral data analysis for arid vegetation species: Smart & sustainable growth. In Proceedings of the 2019 International Conference on Computing, Communication, and Intelligent Systems (ICCCIS), Greater Noida, India, 18–19 October 2019; pp. 495–500. [Google Scholar]
  6. Zhu, M.; Jiao, L.; Liu, F.; Yang, S.; Wang, J. Residual spectral–spatial attention network for hyperspectral image classification. IEEE Trans. Geosci. Remote Sens. 2020, 59, 449–462. [Google Scholar] [CrossRef]
  7. Meng, Z.; Jiao, L.; Liang, M.; Zhao, F. A lightweight spectral-spatial convolution module for hyperspectral image classification. IEEE Geosci. Remote Sens. Lett. 2021, 1–5. [Google Scholar] [CrossRef]
  8. Signoroni, A.; Savardi, M.; Baronio, A.; Benini, S. Deep learning meets hyperspectral image analysis: A multidisciplinary review. J. Imaging 2019, 5, 52. [Google Scholar] [CrossRef] [Green Version]
  9. Melgani, F.; Bruzzone, L. Classification of hyperspectral remote sensing images with support vector machines. IEEE Trans. Geosci. Remote Sens. 2004, 42, 1778–1790. [Google Scholar] [CrossRef] [Green Version]
  10. Okwuashi, O.; Ndehedehe, C.E. Deep support vector machine for hyperspectral image classification. Pattern Recognit. 2020, 103, 107298. [Google Scholar] [CrossRef]
  11. Sabat-Tomala, A.; Raczko, E.; Zagajewski, B. Comparison of support vector machine and random forest algorithms for invasive and expansive species classification using airborne hyperspectral data. Remote Sens. 2020, 12, 516. [Google Scholar] [CrossRef] [Green Version]
  12. Wang, A.; Wang, Y.; Chen, Y. Hyperspectral image classification based on convolutional neural network and random forest. Remote Sens. Lett. 2019, 10, 1086–1094. [Google Scholar] [CrossRef]
  13. Zhang, Y.; Cao, G.; Li, X.; Wang, B. Cascaded random forest for hyperspectral image classification. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2018, 11, 1082–1094. [Google Scholar] [CrossRef]
  14. Cariou, C.; Le Moan, S.; Chehdi, K. Improving k-nearest neighbor approaches for density-based pixel clustering in hyperspectral remote sensing images. Remote Sens. 2020, 12, 3745. [Google Scholar] [CrossRef]
  15. Tu, B.; Wang, J.; Kang, X.; Zhang, G.; Ou, X.; Guo, L. KNN-based representation of superpixels for hyperspectral image classification. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2018, 11, 4032–4047. [Google Scholar] [CrossRef]
  16. Su, H.; Yu, Y.; Wu, Z.; Du, Q. Random subspace-based k-nearest class collaborative representation for hyperspectral image classification. IEEE Trans. Geosci. Remote Sens. 2020, 59, 6840–6853.
  17. Haut, J.M.; Paoletti, M.E.; Plaza, J.; Li, J.; Plaza, A. Active learning with convolutional neural networks for hyperspectral image classification using a new Bayesian approach. IEEE Trans. Geosci. Remote Sens. 2018, 56, 6440–6461.
  18. Zhan, Y.; Hu, D.; Wang, Y.; Yu, X. Semisupervised hyperspectral image classification based on generative adversarial networks. IEEE Geosci. Remote Sens. Lett. 2017, 15, 212–216.
  19. Zhu, L.; Chen, Y.; Ghamisi, P.; Benediktsson, J.A. Generative adversarial networks for hyperspectral image classification. IEEE Trans. Geosci. Remote Sens. 2018, 56, 5046–5063.
  20. Wang, H.; Tao, C.; Qi, J.; Li, H.; Tang, Y. Semi-supervised variational generative adversarial networks for hyperspectral image classification. In Proceedings of the 2019 IEEE International Geoscience and Remote Sensing Symposium (IGARSS), Yokohama, Japan, 28 July–2 August 2019; pp. 9792–9794.
  21. Mou, L.; Ghamisi, P.; Zhu, X.X. Deep recurrent neural networks for hyperspectral image classification. IEEE Trans. Geosci. Remote Sens. 2017, 55, 3639–3655.
  22. Zhang, X.; Sun, Y.; Jiang, K.; Li, C.; Jiao, L.; Zhou, H. Spatial sequential recurrent neural network for hyperspectral image classification. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2018, 11, 4141–4155.
  23. Hang, R.; Liu, Q.; Hong, D.; Ghamisi, P. Cascaded recurrent neural networks for hyperspectral image classification. IEEE Trans. Geosci. Remote Sens. 2019, 57, 5384–5394.
  24. Jiao, L.; Liang, M.; Chen, H.; Yang, S.; Liu, H.; Cao, X. Deep fully convolutional network-based spatial distribution prediction for hyperspectral image classification. IEEE Trans. Geosci. Remote Sens. 2017, 55, 5585–5599.
  25. Li, J.; Zhao, X.; Li, Y.; Du, Q.; Xi, B.; Hu, J. Classification of hyperspectral imagery using a new fully convolutional neural network. IEEE Geosci. Remote Sens. Lett. 2018, 15, 292–296.
  26. Zou, L.; Zhu, X.; Wu, C.; Liu, Y.; Qu, L. Spectral–spatial exploration for hyperspectral image classification via the fusion of fully convolutional networks. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2020, 13, 659–674.
  27. Xu, H.; Yao, W.; Cheng, L.; Li, B. Multiple spectral resolution 3D convolutional neural network for hyperspectral image classification. Remote Sens. 2021, 13, 1248.
  28. Qing, Y.; Liu, W. Hyperspectral image classification based on multi-scale residual network with attention mechanism. Remote Sens. 2021, 13, 335.
  29. Rao, M.; Tang, P.; Zhang, Z. A developed Siamese CNN with 3D adaptive spatial-spectral pyramid pooling for hyperspectral image classification. Remote Sens. 2020, 12, 1964.
  30. Miclea, A.V.; Terebes, R.; Meza, S. One dimensional convolutional neural networks and local binary patterns for hyperspectral image classification. In Proceedings of the 2020 IEEE International Conference on Automation, Quality and Testing, Robotics (AQTR), Cluj-Napoca, Romania, 21–23 May 2020; pp. 1–6.
  31. Cao, X.; Yao, J.; Xu, Z.; Meng, D. Hyperspectral image classification with convolutional neural network and active learning. IEEE Trans. Geosci. Remote Sens. 2020, 58, 4604–4616.
  32. Li, Y.; Zhang, H.; Shen, Q. Spectral–spatial classification of hyperspectral imagery with 3D convolutional neural network. Remote Sens. 2017, 9, 67.
  33. Woo, S.; Park, J.; Lee, J.Y.; Kweon, I.S. CBAM: Convolutional block attention module. In Proceedings of the European Conference on Computer Vision (ECCV), Munich, Germany, 8–14 September 2018; pp. 3–19.
  34. Ma, W.; Yang, Q.; Wu, Y.; Zhao, W.; Zhang, X. Double-branch multi-attention mechanism network for hyperspectral image classification. Remote Sens. 2019, 11, 1307.
  35. Li, R.; Zheng, S.; Duan, C.; Yang, Y.; Wang, X. Classification of hyperspectral image based on double-branch dual-attention mechanism network. Remote Sens. 2020, 12, 582.
  36. He, K.; Zhang, X.; Ren, S.; Sun, J. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA, 27–30 June 2016; pp. 770–778.
  37. Zhong, Z.; Li, J.; Ma, L.; Jiang, H.; Zhao, H. Deep residual networks for hyperspectral image classification. In Proceedings of the 2017 IEEE International Geoscience and Remote Sensing Symposium (IGARSS), Fort Worth, TX, USA, 23–28 July 2017; pp. 1824–1827.
  38. Zhong, Z.; Li, J.; Luo, Z.; Chapman, M. Spectral–spatial residual network for hyperspectral image classification: A 3-D deep learning framework. IEEE Trans. Geosci. Remote Sens. 2017, 56, 847–858.
  39. Meng, Z.; Li, L.; Tang, X.; Feng, Z.; Jiao, L.; Liang, M. Multipath residual network for spectral-spatial hyperspectral image classification. Remote Sens. 2019, 11, 1896.
  40. Han, D.; Kim, J.; Kim, J. Deep pyramidal residual networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Honolulu, HI, USA, 21–26 July 2017; pp. 5927–5935.
  41. Paoletti, M.E.; Haut, J.M.; Fernandez-Beltran, R.; Plaza, J.; Plaza, A.J.; Pla, F. Deep pyramidal residual networks for spectral–spatial hyperspectral image classification. IEEE Trans. Geosci. Remote Sens. 2018, 57, 740–754.
  42. Meng, Z.; Li, L.; Jiao, L.; Feng, Z.; Tang, X.; Liang, M. Fully dense multiscale fusion network for hyperspectral image classification. Remote Sens. 2019, 11, 2718.
  43. Huang, G.; Liu, Z.; Van Der Maaten, L.; Weinberger, K.Q. Densely connected convolutional networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Honolulu, HI, USA, 21–26 July 2017; pp. 4700–4708.
  44. Meng, Z.; Jiao, L.; Liang, M.; Zhao, F. Hyperspectral image classification with mixed link networks. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2021, 14, 2494–2507.
  45. Paoletti, M.E.; Haut, J.M.; Plaza, J.; Plaza, A. Deep&dense convolutional neural network for hyperspectral image classification. Remote Sens. 2018, 10, 1454.
  46. Nalepa, J.; Antoniak, M.; Myller, M.; Lorenzo, P.R.; Marcinkiewicz, M. Towards resource-frugal deep convolutional neural networks for hyperspectral image segmentation. Microprocess. Microsyst. 2020, 73, 102994.
  47. Fu, P.; Sun, X.; Sun, Q. Hyperspectral image segmentation via frequency-based similarity for mixed noise estimation. Remote Sens. 2017, 9, 1237.
  48. Sun, H.; Zheng, X.; Lu, X. A supervised segmentation network for hyperspectral image classification. IEEE Trans. Image Process. 2021, 30, 2810–2825.
  49. Si, Y.; Gong, D.; Guo, Y.; Zhu, X.; Huang, Q.; Evans, J.; He, S.; Sun, Y. An advanced spectral–spatial classification framework for hyperspectral imagery based on DeepLab v3+. Appl. Sci. 2021, 11, 5703.
  50. Takahashi, N.; Mitsufuji, Y. Densely connected multidilated convolutional networks for dense prediction tasks. arXiv 2020, arXiv:2011.11844.
  51. Meng, Z.; Zhao, F.; Liang, M.; Xie, W. Deep residual involution network for hyperspectral image classification. Remote Sens. 2021, 13, 3055.
  52. Shi, H.; Cao, G.; Ge, Z.; Zhang, Y.; Fu, P. Double-branch network with pyramidal convolution and iterative attention for hyperspectral image classification. Remote Sens. 2021, 13, 1403.
  53. Gong, H.; Li, Q.; Li, C.; Dai, H.; He, Z.; Wang, W.; Li, H.; Han, F.; Tuniyazi, A.; Mu, T. Multiscale information fusion for hyperspectral image classification based on hybrid 2D-3D CNN. Remote Sens. 2021, 13, 2268.
  54. Li, C.; Qiu, Z.; Cao, X.; Chen, Z.; Gao, H.; Hua, Z. Hybrid dilated convolution with multi-scale residual fusion network for hyperspectral image classification. Micromachines 2021, 12, 545.
  55. Gao, H.; Chen, Z.; Li, C. Hierarchical shrinkage multi-scale network for hyperspectral image classification with hierarchical feature fusion. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2021, 14, 5760–5772.
  56. Simonyan, K.; Zisserman, A. Very deep convolutional networks for large-scale image recognition. arXiv 2014, arXiv:1409.1556.
Figure 1. Two different convolution schemes. (a) Naive convolution. (b) Dilated convolution.
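To make the contrast in Figure 1 concrete, the short PyTorch sketch below (added here for illustration; the channel counts and patch size are arbitrary assumptions) builds a naive 3 × 3 convolution and a dilated 3 × 3 convolution and shows that the dilated version enlarges the receptive field without adding parameters.

```python
import torch
import torch.nn as nn

# Naive 3x3 convolution: effective receptive field of 3x3.
naive_conv = nn.Conv2d(in_channels=64, out_channels=64, kernel_size=3, padding=1)

# Dilated 3x3 convolution with dilation factor 2: the kernel samples every
# second pixel, so the effective receptive field grows to 5x5 while the
# number of parameters stays the same as in the naive convolution.
dilated_conv = nn.Conv2d(in_channels=64, out_channels=64, kernel_size=3,
                         padding=2, dilation=2)

x = torch.randn(1, 64, 11, 11)                      # a dummy 11x11 spatial patch
print(naive_conv(x).shape, dilated_conv(x).shape)   # both keep the 11x11 size
```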
Figure 2. Densely connected convolutional block of DenseNet.
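As a companion to Figure 2, the following sketch illustrates dense connectivity, in which each layer receives the channel-wise concatenation of all preceding feature maps. The layer count, growth rate, and BN-ReLU-Conv ordering are illustrative assumptions, not the exact configuration used in the models compared in this paper.

```python
import torch
import torch.nn as nn

class DenseBlock(nn.Module):
    """Toy densely connected block: layer i sees all earlier feature maps."""

    def __init__(self, in_channels: int, growth_rate: int, num_layers: int):
        super().__init__()
        self.layers = nn.ModuleList()
        channels = in_channels
        for _ in range(num_layers):
            self.layers.append(nn.Sequential(
                nn.BatchNorm2d(channels),
                nn.ReLU(inplace=True),
                nn.Conv2d(channels, growth_rate, kernel_size=3, padding=1),
            ))
            channels += growth_rate      # each layer adds growth_rate channels

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        features = [x]
        for layer in self.layers:
            out = layer(torch.cat(features, dim=1))  # reuse all previous features
            features.append(out)
        return torch.cat(features, dim=1)

block = DenseBlock(in_channels=32, growth_rate=16, num_layers=3)
print(block(torch.randn(1, 32, 11, 11)).shape)  # -> (1, 32 + 3 * 16, 11, 11)
```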
Figure 3. The structures of three different blocks. (a) Densely naive convolutional block. (b) Densely naive dilated convolutional block. (c) Densely pyramidal dilated convolutional block.
Figure 4. The receptive fields of the third layer in the three convolutional blocks (one-dimensional case). (a) Receptive field of the densely naive convolutional block. (b) Receptive field of the densely naive dilated convolutional block. (c) Receptive field of the densely pyramidal dilated convolutional block.
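The receptive-field comparison in Figure 4 can be checked with a few lines of arithmetic: for a stack of stride-1 convolutions with kernel size k and dilation factors d_i, the receptive field is 1 + sum_i (k - 1) * d_i. The snippet below (our illustration; the three dilation schedules are assumptions chosen to mirror the three block types) evaluates this for three stacked 3-tap convolutions.

```python
def receptive_field(kernel_size: int, dilations) -> int:
    """Receptive field of sequentially stacked 1-D convolutions with stride 1."""
    return 1 + sum((kernel_size - 1) * d for d in dilations)

# Three stacked 3-tap convolutions:
print(receptive_field(3, [1, 1, 1]))  # naive convolutions            -> 7
print(receptive_field(3, [2, 2, 2]))  # fixed dilation factor of 2    -> 13
print(receptive_field(3, [1, 2, 4]))  # exponentially growing factors -> 15
```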
Figure 5. The framework of PDCNet.
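To give a more concrete picture of the building blocks in Figure 5, the sketch below shows one plausible realization of a pyramidal dilated convolutional (PDC) layer: several sub-dilated convolutions with exponentially increasing dilation factors, fused by pixel-wise addition and then stacked with the input along the channel axis. This is only an illustrative reading of the block diagram; the actual kernel sizes, sub-layer counts, and fusion order in PDCNet may differ.

```python
import torch
import torch.nn as nn

class PDCLayer(nn.Module):
    """Illustrative pyramidal dilated convolutional layer (not the authors' code).

    num_sublayers parallel 3x3 convolutions use dilation factors 1, 2, 4, ...;
    their outputs are fused by element-wise addition and then concatenated with
    the input along the channel axis, so later layers can reuse earlier features.
    """

    def __init__(self, in_channels: int, growth_rate: int, num_sublayers: int):
        super().__init__()
        self.branches = nn.ModuleList([
            nn.Conv2d(in_channels, growth_rate, kernel_size=3,
                      padding=2 ** i, dilation=2 ** i)
            for i in range(num_sublayers)            # dilations 1, 2, 4, ...
        ])

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        fused = sum(branch(x) for branch in self.branches)   # pixel-wise addition
        return torch.cat([x, fused], dim=1)                  # channel stacking

# A toy "pyramidal" arrangement: deeper layers hold more sub-dilated convolutions.
layers, channels = [], 32
for n_sub in [1, 2, 3]:
    layers.append(PDCLayer(channels, growth_rate=16, num_sublayers=n_sub))
    channels += 16
block = nn.Sequential(*layers)
print(block(torch.randn(1, 32, 11, 11)).shape)  # -> (1, 32 + 3 * 16, 11, 11)
```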
Figure 6. The Indian Pines dataset. (a) The false-color image. (b) The ground-truth map. (c) The corresponding color labels.
Figure 7. The Pavia University dataset. (a) The false-color image. (b) The ground-truth map. (c) The corresponding color labels.
Figure 8. The Salinas Valley dataset. (a) The false-color image. (b) The ground-truth map. (c) The corresponding color labels.
Figure 9. OA of BMNet, DCNet, and PDCNet with different numbers of training samples on the three datasets. (a) OA on the IP dataset. (b) OA on the UP dataset. (c) OA on the SV dataset.
Figure 10. Performance of BMNet, DCNet, and PDCNet on the three HSI datasets. (a) Metrics on the IP dataset. (b) Metrics on the UP dataset. (c) Metrics on the SV dataset.
Figure 11. Classification performance of different network models on the IP dataset. (a) False-color image. (b) Ground truth. (c) SVM. (d) 3-D CNN. (e) FDMFN. (f) DenseNet. (g) PresNet. (h) PDCNet.
Figure 12. Classification performance of different network models on the UP dataset. (a) False-color image. (b) Ground truth. (c) SVM. (d) 3-D CNN. (e) FDMFN. (f) DenseNet. (g) PresNet. (h) PDCNet.
Figure 13. Classification performance of different network models on the SV dataset. (a) False-color image. (b) Ground truth. (c) SVM. (d) 3-D CNN. (e) FDMFN. (f) DenseNet. (g) PresNet. (h) PDCNet.
Table 1. The number of samples in the IP dataset.
Number | Class | Train Samples | Test Samples | Total Samples
1 | Alfalfa | 7 | 39 | 46
2 | Corn-notill | 214 | 1214 | 1428
3 | Corn-mintill | 125 | 705 | 830
4 | Corn | 36 | 201 | 237
5 | Grass-pasture | 72 | 411 | 483
6 | Grass-trees | 110 | 620 | 730
7 | Grass-pasture-mowed | 4 | 24 | 28
8 | Hay-windrowed | 72 | 406 | 478
9 | Oats | 3 | 17 | 20
10 | Soybean-notill | 146 | 826 | 972
11 | Soybean-mintill | 368 | 2087 | 2455
12 | Soybean-clean | 89 | 504 | 593
13 | Wheat | 31 | 174 | 205
14 | Woods | 190 | 1075 | 1265
15 | Buildings-grass-trees-drives | 58 | 328 | 386
16 | Stone-steel-towers | 14 | 79 | 93
Sum |  | 1539 | 8710 | 10,249
Table 2. The number of samples in the UP dataset.
Number | Class | Train Samples | Test Samples | Total Samples
1 | Asphalt | 332 | 6299 | 6631
2 | Meadows | 932 | 17,717 | 18,649
3 | Gravel | 105 | 1994 | 2099
4 | Trees | 153 | 2911 | 3064
5 | Painted metal sheets | 67 | 1278 | 1345
6 | Bare Soil | 251 | 4778 | 5029
7 | Bitumen | 67 | 1263 | 1330
8 | Self-Blocking Bricks | 184 | 3498 | 3682
9 | Shadows | 47 | 900 | 947
Sum |  | 2138 | 40,638 | 42,776
Table 3. The number of samples in the SV dataset.
Number | Class | Train Samples | Test Samples | Total Samples
1 | Brocoli_green_weeds_1 | 40 | 1969 | 2009
2 | Brocoli_green_weeds_2 | 75 | 3651 | 3726
3 | Fallow | 40 | 1936 | 1976
4 | Fallow_rough_plow | 28 | 1366 | 1394
5 | Fallow_smooth | 54 | 2624 | 2678
6 | Stubble | 79 | 3880 | 3959
7 | Celery | 72 | 3507 | 3579
8 | Grapes_untrained | 225 | 11,046 | 11,271
9 | Soil_vinyard_develop | 124 | 6079 | 6203
10 | Corn_senesced_green_weeds | 66 | 3212 | 3278
11 | Lettuce_romaine_4wk | 21 | 1047 | 1068
12 | Lettuce_romaine_5wk | 39 | 1888 | 1927
13 | Lettuce_romaine_6wk | 18 | 898 | 916
14 | Lettuce_romaine_7wk | 21 | 1049 | 1070
15 | Vinyard_untrained | 145 | 7123 | 7268
16 | Vinyard_vertical_trellis | 36 | 1771 | 1807
Sum |  | 1083 | 53,046 | 54,129
Table 4. OA (%) of PDCNet with different growth rate g in IP, UP, and SV datasets.
Dataset | g = 40 | g = 46 | g = 52 | g = 58 | g = 64
IP | 99.43 ± 0.28 | 99.39 ± 0.28 | 99.43 ± 0.20 | 99.39 ± 0.21 | 99.40 ± 0.22
UP | 99.78 ± 0.02 | 99.80 ± 0.05 | 99.82 ± 0.06 | 99.81 ± 0.03 | 99.78 ± 0.07
SV | 99.12 ± 0.25 | 99.09 ± 0.23 | 99.15 ± 0.13 | 99.05 ± 0.29 | 98.93 ± 0.24
Table 5. OA (%) of PDCNet with different numbers of PDC layers in IP, UP, and SV datasets.
Dataset | 2 layers | 3 layers | 4 layers | 5 layers | 6 layers
IP | 99.44 ± 0.20 | 99.43 ± 0.20 | 99.34 ± 0.23 | 99.45 ± 0.25 | 99.41 ± 0.24
UP | 99.78 ± 0.01 | 99.82 ± 0.06 | 99.76 ± 0.03 | 99.77 ± 0.05 | 99.78 ± 0.04
SV | 99.18 ± 0.17 | 99.15 ± 0.13 | 99.06 ± 0.25 | 99.02 ± 0.21 | 98.96 ± 0.15
Table 6. OA (%) of PDCNet with different numbers of DPDC blocks in IP, UP, and SV datasets.
Dataset | 1 block | 2 blocks | 3 blocks | 4 blocks | 5 blocks
IP | 99.40 ± 0.19 | 99.47 ± 0.17 | 99.43 ± 0.20 | 99.44 ± 0.20 | 99.42 ± 0.21
UP | 99.73 ± 0.09 | 99.81 ± 0.02 | 99.82 ± 0.06 | 99.78 ± 0.04 | 99.71 ± 0.06
SV | 98.97 ± 0.22 | 99.14 ± 0.25 | 99.15 ± 0.13 | 98.95 ± 0.34 | 98.88 ± 0.28
Table 7. OA (%) of PDCNet with different patch sizes in IP, UP, and SV datasets.
Dataset | 9 | 11 | 13 | 15 | 17
IP | 99.36 ± 0.09 | 99.43 ± 0.20 | 99.50 ± 0.18 | 99.38 ± 0.10 | 99.46 ± 0.06
UP | 99.74 ± 0.10 | 99.82 ± 0.06 | 99.80 ± 0.05 | 99.74 ± 0.05 | 99.71 ± 0.07
SV | 98.53 ± 0.13 | 99.15 ± 0.13 | 99.29 ± 0.21 | 99.45 ± 0.15 | 99.67 ± 0.12
Table 8. The classification results for the IP dataset based on 15% training samples. The best results are highlighted in bold font.
Class | SVM | 3-D CNN | FDMFN | PresNet | DenseNet | PDCNet
1 | 66.15 ± 8.17 | 95.90 ± 1.26 | 88.21 ± 13.63 | 95.90 ± 3.08 | 96.92 ± 2.99 | 97.95 ± 1.92
2 | 81.55 ± 2.35 | 96.06 ± 0.79 | 97.50 ± 0.80 | 98.93 ± 0.35 | 99.32 ± 0.30 | 99.34 ± 0.32
3 | 76.83 ± 3.44 | 95.73 ± 1.26 | 97.44 ± 1.22 | 98.66 ± 0.98 | 99.57 ± 0.29 | 99.46 ± 0.57
4 | 70.33 ± 4.14 | 94.26 ± 4.66 | 96.64 ± 2.70 | 97.13 ± 3.28 | 97.43 ± 2.60 | 99.11 ± 1.34
5 | 92.55 ± 2.56 | 96.74 ± 1.16 | 98.59 ± 0.90 | 98.93 ± 0.91 | 99.32 ± 0.64 | 99.07 ± 1.03
6 | 96.75 ± 0.70 | 99.29 ± 0.28 | 99.52 ± 0.35 | 99.52 ± 0.51 | 99.55 ± 0.26 | 99.71 ± 0.26
7 | 80.83 ± 5.65 | 97.50 ± 3.33 | 93.33 ± 6.77 | 96.67 ± 4.86 | 97.50 ± 5.00 | 96.67 ± 3.12
8 | 98.33 ± 0.57 | 100.00 ± 0.0 | 100.00 ± 0.0 | 99.95 ± 0.10 | 100.00 ± 0.0 | 100.00 ± 0.0
9 | 63.53 ± 12.0 | 87.06 ± 5.76 | 91.76 ± 16.5 | 98.82 ± 2.35 | 95.29 ± 4.40 | 98.82 ± 2.35
10 | 77.74 ± 4.91 | 94.49 ± 1.94 | 97.57 ± 1.16 | 96.89 ± 1.37 | 98.13 ± 2.11 | 98.59 ± 1.39
11 | 84.32 ± 2.52 | 98.06 ± 0.69 | 99.32 ± 0.35 | 99.29 ± 0.54 | 98.76 ± 0.94 | 99.75 ± 0.19
12 | 80.83 ± 3.73 | 96.67 ± 1.24 | 97.75 ± 1.24 | 96.88 ± 1.16 | 99.05 ± 0.70 | 98.93 ± 0.69
13 | 96.32 ± 2.04 | 99.31 ± 0.67 | 99.77 ± 0.46 | 99.66 ± 0.46 | 100.00 ± 0.0 | 99.66 ± 0.46
14 | 94.42 ± 1.16 | 99.13 ± 0.62 | 99.65 ± 0.23 | 99.74 ± 0.27 | 99.70 ± 0.24 | 100.00 ± 0.0
15 | 67.52 ± 3.90 | 96.01 ± 4.80 | 96.50 ± 4.23 | 96.79 ± 3.01 | 99.64 ± 0.48 | 99.70 ± 0.46
16 | 92.66 ± 0.51 | 98.48 ± 1.86 | 99.24 ± 0.62 | 98.99 ± 0.51 | 97.72 ± 1.24 | 97.72 ± 0.95
OA (%) | 84.54 ± 0.48 | 97.25 ± 0.09 | 98.47 ± 0.28 | 98.74 ± 0.26 | 99.12 ± 0.45 | 99.47 ± 0.17
AA (%) | 82.54 ± 1.07 | 96.54 ± 0.40 | 97.05 ± 1.50 | 98.30 ± 0.50 | 98.62 ± 0.49 | 99.03 ± 0.31
Kappa (%) | 82.39 ± 0.55 | 96.87 ± 0.10 | 98.25 ± 0.32 | 98.57 ± 0.30 | 99.00 ± 0.51 | 99.39 ± 0.20
Table 9. The classification results for the UP dataset based on 5% training samples. The best results are highlighted in bold font.
Class | SVM | 3-D CNN | FDMFN | PresNet | DenseNet | PDCNet
1 | 92.98 ± 0.76 | 98.44 ± 1.00 | 99.38 ± 0.23 | 99.36 ± 0.14 | 99.77 ± 0.14 | 99.69 ± 0.31
2 | 97.33 ± 0.21 | 99.71 ± 0.15 | 99.90 ± 0.06 | 99.88 ± 0.08 | 99.96 ± 0.03 | 99.99 ± 0.01
3 | 76.97 ± 2.25 | 90.73 ± 2.91 | 95.74 ± 2.42 | 95.70 ± 2.67 | 98.35 ± 0.75 | 99.82 ± 0.13
4 | 87.15 ± 1.93 | 97.62 ± 0.89 | 98.83 ± 0.38 | 98.52 ± 0.59 | 98.91 ± 0.37 | 98.89 ± 0.52
5 | 99.31 ± 0.09 | 99.66 ± 0.36 | 99.92 ± 0.09 | 100.00 ± 0.0 | 99.84 ± 0.13 | 99.83 ± 0.10
6 | 77.14 ± 1.28 | 98.64 ± 1.74 | 98.86 ± 1.45 | 99.88 ± 0.15 | 99.99 ± 0.02 | 99.99 ± 0.01
7 | 58.04 ± 5.75 | 92.29 ± 3.89 | 98.15 ± 1.52 | 97.31 ± 2.00 | 99.76 ± 0.40 | 99.87 ± 0.15
8 | 85.25 ± 1.59 | 96.62 ± 1.81 | 99.11 ± 0.24 | 98.81 ± 0.86 | 99.78 ± 0.33 | 99.89 ± 0.19
9 | 99.84 ± 0.15 | 98.60 ± 0.62 | 99.64 ± 0.20 | 99.93 ± 0.13 | 99.02 ± 0.68 | 99.07 ± 0.74
OA (%) | 90.67 ± 0.16 | 98.26 ± 0.79 | 99.28 ± 0.17 | 99.33 ± 0.09 | 99.73 ± 0.02 | 99.82 ± 0.06
AA (%) | 86.00 ± 0.79 | 96.92 ± 1.22 | 98.84 ± 0.33 | 98.82 ± 0.24 | 99.49 ± 0.06 | 99.67 ± 0.08
Kappa (%) | 87.26 ± 0.22 | 97.70 ± 1.04 | 99.05 ± 0.23 | 99.11 ± 0.12 | 99.65 ± 0.03 | 99.76 ± 0.07
Table 10. The classification results for the SV dataset based on 2% training samples. The best results are highlighted in bold font.
Class | SVM | 3-D CNN | FDMFN | PresNet | DenseNet | PDCNet
1 | 98.32 ± 0.64 | 99.22 ± 0.90 | 99.57 ± 0.67 | 99.80 ± 0.22 | 84.33 ± 13.24 | 100.00 ± 0.0
2 | 99.67 ± 0.31 | 99.78 ± 0.18 | 99.96 ± 0.04 | 99.75 ± 0.48 | 99.98 ± 0.03 | 100.00 ± 0.0
3 | 96.55 ± 4.33 | 97.96 ± 2.10 | 99.73 ± 0.39 | 99.71 ± 0.24 | 99.12 ± 1.68 | 99.98 ± 0.04
4 | 99.18 ± 0.31 | 99.33 ± 0.50 | 99.50 ± 0.56 | 99.33 ± 0.29 | 99.41 ± 0.69 | 99.37 ± 0.55
5 | 98.31 ± 0.97 | 97.26 ± 0.93 | 99.71 ± 0.26 | 99.66 ± 0.22 | 99.64 ± 0.26 | 99.70 ± 0.35
6 | 99.61 ± 0.18 | 99.92 ± 0.11 | 99.99 ± 0.01 | 100.00 ± 0.0 | 100.00 ± 0.0 | 100.00 ± 0.0
7 | 99.44 ± 0.28 | 99.41 ± 0.25 | 99.94 ± 0.13 | 99.90 ± 0.10 | 99.97 ± 0.03 | 99.99 ± 0.01
8 | 86.92 ± 1.67 | 88.68 ± 1.25 | 93.05 ± 1.07 | 95.07 ± 0.31 | 96.15 ± 1.70 | 97.62 ± 0.62
9 | 99.04 ± 0.82 | 99.38 ± 0.46 | 100.00 ± 0.0 | 99.89 ± 0.13 | 99.69 ± 0.55 | 100.00 ± 0.0
10 | 94.24 ± 0.73 | 96.37 ± 1.72 | 98.64 ± 0.96 | 98.85 ± 0.92 | 99.59 ± 0.41 | 99.68 ± 0.36
11 | 94.01 ± 3.98 | 97.25 ± 1.77 | 99.69 ± 0.35 | 98.93 ± 1.06 | 99.64 ± 0.36 | 99.90 ± 0.15
12 | 99.51 ± 0.41 | 99.59 ± 0.35 | 99.99 ± 0.02 | 99.99 ± 0.02 | 100.00 ± 0.0 | 100.00 ± 0.0
13 | 98.37 ± 0.58 | 99.35 ± 0.46 | 99.93 ± 0.09 | 100.00 ± 0.0 | 100.00 ± 0.0 | 100.00 ± 0.0
14 | 91.08 ± 2.20 | 97.16 ± 1.65 | 99.81 ± 0.13 | 99.92 ± 0.11 | 99.56 ± 0.74 | 99.96 ± 0.05
15 | 61.00 ± 2.57 | 77.19 ± 3.89 | 94.61 ± 1.09 | 95.58 ± 1.22 | 97.42 ± 1.81 | 98.03 ± 0.90
16 | 98.05 ± 0.98 | 97.38 ± 1.02 | 98.45 ± 0.70 | 99.00 ± 0.64 | 99.67 ± 0.28 | 99.74 ± 0.23
OA (%) | 90.66 ± 0.50 | 93.75 ± 0.78 | 97.62 ± 0.19 | 98.16 ± 0.15 | 98.12 ± 0.78 | 99.18 ± 0.17
AA (%) | 94.58 ± 0.62 | 96.58 ± 0.41 | 98.91 ± 0.05 | 99.09 ± 0.13 | 98.39 ± 1.04 | 99.62 ± 0.07
Kappa (%) | 89.59 ± 0.55 | 93.03 ± 0.87 | 97.35 ± 0.21 | 97.96 ± 0.17 | 97.90 ± 0.87 | 99.08 ± 0.19
Table 11. The classification results of PDCNet and DeepLab v3+ on the KSC and UP datasets based on 5% of the training samples.
Class | DeepLab v3+ (KSC, 5%) | PDCNet (KSC, 5%) | DeepLab v3+ (UP, 5%) | PDCNet (UP, 5%)
1 | 98.89 | 100.0 | 99.19 | 99.69
2 | 100.0 | 95.84 | 99.48 | 99.99
3 | 97.98 | 97.53 | 99.30 | 99.82
4 | 99.13 | 88.79 | 97.53 | 98.89
5 | 96.20 | 90.20 | 99.92 | 99.83
6 | 100.0 | 98.06 | 99.79 | 99.99
7 | 100.0 | 92.20 | 98.90 | 99.87
8 | 91.42 | 98.98 | 96.91 | 99.89
9 | 100.0 | 100.0 | 99.78 | 99.07
10 | 99.47 | 99.53 | / | /
11 | 98.74 | 99.65 | / | /
12 | 97.88 | 99.54 | / | /
13 | 100.0 | 100.0 | / | /
OA (%) | 98.47 | 98.40 | 99.10 | 99.82
AA (%) | 98.44 | 96.95 | 98.98 | 99.67
Kappa (%) | 98.29 | 98.22 | 98.81 | 99.76
Table 12. OA (%) of different methods with different proportions of training samples. The best results are highlighted in bold font.
Dataset | Training Samples | SVM | 3-D CNN | FDMFN | PresNet | DenseNet | PDCNet
IP | 12.0% | 83.19 ± 0.60 | 95.98 ± 0.23 | 97.60 ± 0.35 | 98.38 ± 0.27 | 98.91 ± 0.27 | 99.17 ± 0.19
IP | 13.0% | 83.88 ± 0.58 | 96.63 ± 0.09 | 98.04 ± 0.33 | 98.56 ± 0.31 | 98.95 ± 0.28 | 99.22 ± 0.34
IP | 14.0% | 84.27 ± 0.59 | 96.95 ± 0.09 | 98.30 ± 0.36 | 98.58 ± 0.45 | 99.03 ± 0.33 | 99.34 ± 0.20
IP | 15.0% | 84.54 ± 0.48 | 97.25 ± 0.09 | 98.47 ± 0.28 | 98.74 ± 0.26 | 99.12 ± 0.45 | 99.43 ± 0.20
IP | 16.0% | 84.82 ± 0.65 | 97.76 ± 0.26 | 98.70 ± 0.34 | 99.05 ± 0.24 | 99.36 ± 0.20 | 99.50 ± 0.19
UP | 4.00% | 90.32 ± 0.16 | 96.84 ± 1.86 | 98.82 ± 0.24 | 98.80 ± 0.26 | 99.57 ± 0.07 | 99.60 ± 0.06
UP | 5.00% | 90.67 ± 0.16 | 98.26 ± 0.79 | 99.28 ± 0.17 | 99.33 ± 0.09 | 99.73 ± 0.02 | 99.82 ± 0.06
UP | 6.00% | 90.85 ± 0.16 | 98.58 ± 0.65 | 99.54 ± 0.12 | 99.49 ± 0.21 | 99.75 ± 0.07 | 99.85 ± 0.03
UP | 7.00% | 90.95 ± 0.14 | 98.75 ± 0.43 | 99.61 ± 0.08 | 99.62 ± 0.19 | 99.79 ± 0.03 | 99.89 ± 0.04
UP | 8.00% | 91.05 ± 0.11 | 98.79 ± 0.57 | 99.65 ± 0.10 | 99.65 ± 0.15 | 99.77 ± 0.16 | 99.88 ± 0.05
SV | 2.00% | 90.66 ± 0.50 | 93.75 ± 0.78 | 97.62 ± 0.19 | 98.16 ± 0.15 | 98.12 ± 0.78 | 99.15 ± 0.13
SV | 3.00% | 91.35 ± 0.23 | 94.46 ± 0.50 | 97.98 ± 0.49 | 98.09 ± 0.32 | 97.98 ± 0.90 | 99.18 ± 0.33
SV | 4.00% | 91.90 ± 0.22 | 95.91 ± 0.40 | 98.98 ± 0.14 | 99.42 ± 0.12 | 99.48 ± 0.20 | 99.75 ± 0.07
SV | 5.00% | 92.12 ± 0.18 | 96.59 ± 0.50 | 99.27 ± 0.15 | 99.48 ± 0.18 | 99.64 ± 0.10 | 99.86 ± 0.07
SV | 6.00% | 92.36 ± 0.14 | 96.71 ± 0.27 | 99.36 ± 0.13 | 99.57 ± 0.19 | 99.66 ± 0.15 | 99.88 ± 0.05
Table 13. Comparison of network parameters and running time.
Dataset | Method | Training Time (s) | Testing Time (s) | Total Params (M)
IP | SVM | 16.562 | 5.270 | /
IP | 3-D CNN | 77.328 | 4.313 | 0.101
IP | FDMFN | 45.623 | 2.177 | 0.139
IP | PresNet | 77.152 | 3.975 | 1.126
IP | DenseNet | 108.55 | 5.229 | 4.749
IP | PDCNet | 180.27 | 3.578 | 1.020
UP | SVM | 12.294 | 12.575 | /
UP | 3-D CNN | 63.365 | 22.895 | 0.050
UP | FDMFN | 53.930 | 15.485 | 0.137
UP | PresNet | 96.108 | 34.141 | 1.110
UP | DenseNet | 138.71 | 47.872 | 4.651
UP | PDCNet | 234.46 | 29.790 | 0.927
SV | SVM | 6.967 | 9.888 | /
SV | 3-D CNN | 53.659 | 22.261 | 0.103
SV | FDMFN | 32.264 | 11.695 | 0.139
SV | PresNet | 54.216 | 21.372 | 1.127
SV | DenseNet | 75.681 | 28.788 | 4.753
SV | PDCNet | 125.25 | 19.043 | 1.024
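For context on the last column of Table 13, parameter counts in millions are usually obtained by summing the element counts of all learnable tensors; the helper below shows the common PyTorch idiom with an arbitrary toy model (not PDCNet itself).

```python
import torch.nn as nn

def count_params_millions(model: nn.Module) -> float:
    """Total number of learnable parameters, expressed in millions."""
    return sum(p.numel() for p in model.parameters() if p.requires_grad) / 1e6

# Example with an arbitrary small model (not the networks compared above).
model = nn.Sequential(nn.Conv2d(200, 64, 3, padding=1), nn.ReLU(), nn.Conv2d(64, 16, 1))
print(f"{count_params_millions(model):.3f} M")
```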