Article

Improving Feature Learning in Remote Sensing Images Using an Integrated Deep Multi-Scale 3D/2D Convolutional Network

Haron C. Tinega, Enqing Chen and Divinah O. Nyasaka
1 School of Electrical and Information Engineering, Zhengzhou University, Zhengzhou 450001, China
2 Department of Information Communication Technology, Kenya Forest Service, Nairobi 00100, Kenya
* Author to whom correspondence should be addressed.
Remote Sens. 2023, 15(13), 3270; https://doi.org/10.3390/rs15133270
Submission received: 9 April 2023 / Revised: 16 June 2023 / Accepted: 21 June 2023 / Published: 25 June 2023
(This article belongs to the Special Issue Kernel-Based Remote Sensing Image Analysis)

Abstract

Developing complex hyperspectral image (HSI) sensors that capture high-resolution spatial information and voluminous (hundreds) spectral bands of the earth’s surface has made HSI pixel-wise classification a reality. The 3D-CNN has become the preferred HSI pixel-wise classification approach because of its ability to extract discriminative spectral and spatial information while maintaining data integrity. However, HSI datasets are characterized by high nonlinearity, voluminous spectral features, and limited training sample data. Therefore, developing deep HSI classification methods that purely utilize 3D-CNNs in their network structure often results in computationally expensive models prone to overfitting when the model depth increases. In this regard, this paper proposes an integrated deep multi-scale 3D/2D convolutional network block (MiCB) for simultaneous low-level spectral and high-level spatial feature extraction, which can optimally train on limited sample data. The strength of the proposed MiCB model solely lies in the innovative arrangement of convolution layers, giving the network the ability (i) to simultaneously convolve the low-level spectral with high-level spatial features; (ii) to use multiscale kernels to extract abundant contextual information; (iii) to apply residual connections to solve the degradation problem when the model depth increases beyond the threshold; and (iv) to utilize depthwise separable convolutions in its network structure to address the computational cost of the proposed MiCB model. We evaluate the efficacy of our proposed MiCB model using three publicly accessible HSI benchmarking datasets: Salinas Scene (SA), Indian Pines (IP), and the University of Pavia (UP). When trained on small amounts of training sample data, MiCB is better at classifying than the state-of-the-art methods used for comparison. For instance, the MiCB achieves a high overall classification accuracy of 97.35%, 98.29%, and 99.20% when trained on 5% IP, 1% UP, and 1% SA data, respectively.

1. Introduction

Remote sensing involves the use of sophisticated camera sensors to remotely (from satellites or aircraft) detect and monitor the physical characteristics of a given portion of the Earth's surface using reflected and emitted radiation [1]. Rapid technological innovation in remote sensing has resulted in the development of complex hyperspectral image (HSI) sensors that capture both voluminous (hundreds of) spectral bands and high-resolution spatial information of the Earth's surface to produce a three-dimensional (3D) HSI data cube [2,3], as shown in Figure 1.
The original hyperspectral image H is depicted in Figure 1 as a three-dimensional (3D) data cube, with the X–Y planes denoting the spatial data and the Z-axis denoting the spectral bands. Assuming that there are b spectral bands in the original hyperspectral image, every pixel in H is composed of b spectral values. Since H is 3D data, the design of deep learning-based HSI classifiers has shifted to models that can handle both spectral and spatial features and can train optimally on limited HSI sample data [4]. The convolutional neural network (CNN) has emerged as the favorite among the deep learning methods proposed for HSI classification because of its ability to extract rich deep features, preserve the integrity of spatial and spectral information, and avoid the subjectivity and randomness of hand-crafted feature extraction. As depicted in Figure 2, Figure 3 and Figure 4, there are three categories of deep CNN feature learning methods based on how HSI features are processed: pre-processing-based, post-processing-based, and integrated.
The structural layout of the pre-processing-based methods is depicted in Figure 2. These methods acquire the raw HSI data's low-level spectral and spatial features separately. The extracted low-level spatial data are converted into a 1D vector before being fused with the original spectral vector. The resulting 1D spectral–spatial feature is channeled into a deep learning network, where further extraction of high-level spectral–spatial features occurs. The resulting highly discriminative spectral–spatial cues are input into the fully connected (FC) layer and subsequently passed to the classifier for classification [5,6]. These methods are complex to train, as they require high computational memory and large amounts of training data.
The structure of the post-processing-based networks is depicted in Figure 3. Highly discriminative spatial and spectral properties are extracted using a deep 2D spatial network and a deep 1D spectral network, respectively. The outputs of these two distinct networks are fused and passed through the FC layers before being fed into the classifier for classification [7,8]. Hao et al. [2], for example, proposed a two-stream deep architecture for HSI classification. However, these methods suffer from low classification accuracy, especially when subjected to very few training samples.
Unlike the pre-processing and post-processing approaches, which exploit the spectral and spatial data separately, the integrated approach shown in Figure 4 processes spectral–spatial features simultaneously, directly from the HSI data cube. This often increases the ability to obtain discriminative features that are robust to the nonlinearity of the data and hence yields better classification accuracy. Zhong et al. [9], Chen et al. [10], and Li et al. [11] incorporated 3D-CNN layers into their network designs to learn spectral–spatial features simultaneously from the raw HSI data. However, due to the limited training data available in HSI datasets, utilizing 3D-CNNs alone in developing deep HSI classifiers often results in computationally expensive models prone to overfitting as the network deepens [6,10,12].
Recently, several attempts have been made to reduce the dimensionality curse introduced by the voluminous spectral bands in HSI data. Among the methods proposed, principal component analysis (PCA) [13] has gained popularity in hyperspectral imaging [5,6,10,14,15,16]. Additionally, several methods have been introduced to address the challenge of overfitting in models utilizing the integrated network approach. Notable advancements include the work of Zhong et al. [9], who designed the SSRN structure with identity-mapping residual blocks for spectral–spatial feature learning. Lee and Kwon [17] also utilized residual connections in their design to develop a network that learns hierarchical features. Feng et al. [18] introduced residual connections into HybridSN to develop R-HybridSN, which can train optimally on limited HSI data without overfitting. Roy et al. [19] proposed a hybrid model that combines 3D-CNN and 2D-CNN layers in its network structure to extract spectral–spatial features. Cao and Guo [20] proposed an end-to-end hybrid dilated residual deep convolutional network composed of residual blocks and hybrid dilated convolutions (HDC). Wu et al. [21] designed the 3D ResNeXt structure using feature fusion and label-smoothing strategies. Tinega et al. [1] developed the GGBN model, which uses the biological genome concept to combine 3D-CNN and 2D-CNN layers and residual connections in a network structure. Other researchers who combined 3D-CNN and 2D-CNN layers with residual connections to produce state-of-the-art models include Zhao et al. [22] and Tinega et al. [23].
To further the research on developing deep HSI models that can optimally be trained using limited training samples, we propose an integrated deep multi-scale 3D/2D convolutional network (MiCB). The main contribution of the proposed MiCB model is the creative use of MiCB blocks, which allow the network to convolve low-level spectral features with high-level spatial features that are strengthened by multi-scale kernels, residual connections, and depthwise separable convolutions.
The remainder of this work is structured as follows: Section 2 explains the research methodology; Section 3 describes the experimental setup; Section 4 presents the experimental results and discussion; and Section 5 concludes the paper.

2. Methodology

2.1. The Proposed Model

Figure 5 depicts the whole framework of the suggested MiCB model, which can train optimally using a minimal amount of data. The MiCB network contains three parts: pre-processing, the spectral–spatial feature learning process (MiCB Architecture), and classification.

2.1.1. The Pre-Processing Part

Figure 6 outlines the pre-processing framework of the proposed MiCB network. The dimensionality of an original HSI data cube H can be stated as $H \in \mathbb{R}^{x \times y \times b}$, where x denotes the width, y denotes the height, and b denotes the number of spectral bands. Due to the presence of the voluminous spectral bands, the first step in pre-processing involves dimensionality reduction of these spectral bands. In this regard, we use PCA as the dimensionality reduction method. However, PCA is sensitive to variable variances. Therefore, the first step involves centering and standardizing the HSI data cube H by computing and subtracting the mean value and dividing by the standard deviation for each spectral band in the original data cube. Mathematically, this can be written as:
$$H = \frac{\text{value} - \text{mean}}{\text{standard deviation}} \quad (1)$$
This is followed by the computation of the covariance matrix and the identification of the principal components, which are the directions of maximal variance in the data. The larger the variance a component carries, the more information it holds. Therefore, PCA aims to find a number of components $t$ (with $t < b$) that carry most of the information without losing valuable detail, resulting in the data cube $I \in \mathbb{R}^{x \times y \times t}$. The data cube $I$ is further subjected to neighborhood extraction (NE), where G overlapping 3D patches of dimensionality $p \times p \times t$ are extracted. The ground truth label of each overlapping patch at a given spatial location is determined by the label of its central pixel.
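For illustration, a minimal sketch of this pre-processing pipeline (per-band standardization, PCA to t components, and neighborhood extraction of p × p × t patches labeled by their central pixels) is given below, assuming NumPy and scikit-learn; the function name and default window size are illustrative and do not represent the authors' released code.

```python
import numpy as np
from sklearn.decomposition import PCA

def preprocess_hsi(H, gt, t=15, p=17):
    """Standardize each band, reduce to t spectral components with PCA,
    and extract p x p x t patches labeled by their central pixel."""
    x, y, b = H.shape
    # Per-band standardization: (value - mean) / standard deviation, Eq. (1)
    flat = H.reshape(-1, b).astype(np.float64)
    flat = (flat - flat.mean(axis=0)) / flat.std(axis=0)
    # PCA keeps t components (t < b) that retain most of the variance
    I = PCA(n_components=t).fit_transform(flat).reshape(x, y, t)
    # Neighborhood extraction (NE) of overlapping 3D patches
    m = p // 2
    padded = np.pad(I, ((m, m), (m, m), (0, 0)), mode="reflect")
    patches, labels = [], []
    for r in range(x):
        for c in range(y):
            if gt[r, c] == 0:            # skip unlabeled pixels
                continue
            patches.append(padded[r:r + p, c:c + p, :])
            labels.append(gt[r, c] - 1)  # label of the central pixel
    return np.asarray(patches), np.asarray(labels)
```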

2.1.2. Spectral–Spatial Feature Learning Process

Figure 7 shows the detailed architecture of the proposed MiCB, which is based on a hybrid structure of 3D and 2D CNNs. It illustrates how the MiCB model simultaneously convolves low-level spectral cues with high-level spatial features, utilizing mixed 3D/2D CNN layers, multi-scale kernels, depthwise separable convolutions, and residual connections to extract highly discriminative HSI features. Substituting 3D-CNN with 2D-CNN layers increases the network’s ability to learn spatial information at the top levels and reduces the model complexity [17]. The usage of non-identity multi-residual connections drastically reduces the challenge of gradient disappearance in the MiCB network [15] while replacing the traditional 2D-CNN with the 2D depthwise separable convolutional layers promotes the reduction in network parameters and prevents overfitting as the model structure deepens. Lastly, the utilization of multi-scale kernels enhances the extraction of abundant contextual features [23,24,25,26].
At the bottom of the MiCB network architecture, we utilized the 3D convolution to extract HSI spectral–spatial features, as shown in Figure 8.
A 3D convolution operation, as illustrated in Figure 8, can be denoted as:
$$v_i^{x,y,z} = c_i + \sum_{j=1}^{J} \sum_{r=0}^{R_i-1} \sum_{q=0}^{Q_i-1} \sum_{p=0}^{P_i-1} w_{i,j}^{r,q,p} \times v_{i-1,j}^{x+r,\, y+q,\, z+p} \quad (2)$$
where $v_i^{x,y,z}$ is the neuron activation at spectral–spatial position $(x, y, z)$, and $w_{i,j}^{r,q,p}$ is the kernel weight of the jth feature map in the ith layer at $(r, q, p)$. $R$, $Q$, and $P$ are the length, width, and depth dimensions of the convolution kernel. $v_{i-1,j}^{x+r,\,y+q,\,z+p}$ is the activation at position $(x+r, y+q, z+p)$ in the jth feature map of the $(i-1)$th layer, and $c_i$ is the bias of the convolution filter in the ith layer.
We introduced nonlinearity to $v_i^{x,y,z}$ using the rectified linear unit (ReLU) activation function, as denoted in Equation (3):
$$R\left(v_i^{x,y,z}\right) = \max\left(0,\, v_i^{x,y,z}\right) \quad (3)$$
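For clarity, Equations (2) and (3) can be translated directly into a (deliberately unoptimized) NumPy loop for a single output feature map; the array shapes below are illustrative assumptions rather than the layer sizes used in MiCB.

```python
import numpy as np

def conv3d_relu_single_map(v_prev, w, c):
    """Naive Eq. (2) followed by the ReLU of Eq. (3) for one output feature map.
    v_prev: (J, X, Y, Z) activations of layer i-1 (J input feature maps)
    w:      (J, R, Q, P) 3D kernel weights for this output map
    c:      scalar bias
    """
    J, X, Y, Z = v_prev.shape
    _, R, Q, P = w.shape
    out = np.zeros((X - R + 1, Y - Q + 1, Z - P + 1))
    for xx in range(out.shape[0]):
        for yy in range(out.shape[1]):
            for zz in range(out.shape[2]):
                patch = v_prev[:, xx:xx + R, yy:yy + Q, zz:zz + P]
                out[xx, yy, zz] = c + np.sum(w * patch)   # Eq. (2)
    return np.maximum(out, 0.0)                            # Eq. (3)
```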
Similarly, a nonlinear 2D convolutional operation in the jth feature map of the ith layer is as shown in Figure 9.
Figure 9 can be expressed mathematically using Equation (4):
$$v_i^{x,y} = R\left(c_i + \sum_{j=1}^{J} \sum_{r=0}^{R_i-1} \sum_{q=0}^{Q_i-1} w_{i,j}^{r,q} \times v_{i-1,j}^{x+r,\, y+q}\right) \quad (4)$$
where $R$ represents the ReLU activation function.
At the top layers of the proposed MiCB model structure, the 3D feature maps are reshaped into 2D feature maps, as shown in Figure 10. For example, in Figure 10, the 3D layer of the network has 48 feature maps of size 11 × 11 × 15. To learn these features in the 2D space, we reshape the 48 3D feature maps into 720 2D feature maps with dimensions of 11 × 11.
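A minimal tf.keras sketch of this reshaping step, using the sizes from Figure 10 (48 feature maps of 11 × 11 × 15 become 720 two-dimensional 11 × 11 maps); the snippet is illustrative rather than the released implementation.

```python
import tensorflow as tf
from tensorflow.keras import layers

x3d = layers.Input(shape=(11, 11, 15, 48))    # 48 3D feature maps of size 11 x 11 x 15
x2d = layers.Reshape((11, 11, 15 * 48))(x3d)  # 720 2D feature maps of size 11 x 11
reshaper = tf.keras.Model(x3d, x2d)
reshaper.summary()                            # output shape: (None, 11, 11, 720)
```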
We utilized the bottleneck concept shown in Figure 11 to reduce the number of network parameters and mitigate overfitting by downscaling the number of feature maps after reshaping. This is achieved by eliminating redundant features and retaining the smallest possible number of highly discriminative features that preserve the predictive power of the data. In Figure 11, we used 1 × 1 filters to downsample the feature maps. A 1 × 1 filter has a single weight per input feature map, making it act like a single neuron that takes input from the same position across all input feature maps. Therefore, placing a convolutional layer with multiple 1 × 1 filters at any point in the CNN structure allows the depth of the summarized input feature maps to be decreased, improving the network's efficiency by reducing computational cost.
The MiCB structure integrates feature maps from different convolution layers using a concatenation operation, as shown in Figure 12. Feature concatenation is simply a feature-stacking operation. For example, in Figure 12, channel-wise concatenation of two layers with 125 feature maps each produces 125 × 2 = 250 feature maps.
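A small tf.keras sketch of the 1 × 1 bottleneck and the channel-wise concatenation described above; the filter counts (128 retained maps and two 125-map branches) are illustrative assumptions.

```python
import tensorflow as tf
from tensorflow.keras import layers

x = layers.Input(shape=(11, 11, 720))
# 1 x 1 bottleneck: summarize 720 maps into 128 highly discriminative maps
squeezed = layers.Conv2D(128, kernel_size=1, activation="relu")(x)

# Channel-wise concatenation of two branches with 125 maps each -> 250 maps
a = layers.Conv2D(125, kernel_size=3, padding="same", activation="relu")(squeezed)
b = layers.Conv2D(125, kernel_size=5, padding="same", activation="relu")(squeezed)
merged = layers.Concatenate(axis=-1)([a, b])  # output shape: (None, 11, 11, 250)

block = tf.keras.Model(x, merged)
```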
The 2D feature maps are flattened into a continuous linear vector before being passed into the FC layer. We employ two FC layers to extract deep spectral–spatial features before pushing the features into the softmax layer for classification.

2.1.3. MiCB Classifier

The MiCB model employs a softmax layer in the classification part. Softmax is a probabilistic function that measures the relationship between reference and output values. Thus, the chance that a particular input matches a specific class is calculated as follows:
$$\sigma(\mathbf{t})_i = \frac{e^{t_i}}{\sum_{j=1}^{C} e^{t_j}} \quad \text{for } i = 1, \ldots, C, \text{ and } \mathbf{t} = (t_1, \ldots, t_C) \in \mathbb{R}^{C} \quad (5)$$
where $\mathbf{t} = (t_1, \ldots, t_C) \in \mathbb{R}^{C}$ denotes the vector of input values $t_i$, and $C$ denotes the number of classes. $e^{t_i}$ denotes the standard exponential function, while $\sum_{j=1}^{C} e^{t_j}$ is the normalization term that ensures all output values sum to 1, with each value lying between 0 and 1, so that they constitute a proper probability distribution.
Fine-tuning the network is performed using backpropagation. We chose the categorical cross-entropy loss because the number of classes exceeds two. The categorical cross-entropy loss function measures how well our network models the training data. It aims to minimize the loss between the predicted and target outputs; the smaller the loss, the more accurate the model. Mathematically, the cross-entropy loss can be defined as shown in Equation (6):
$$L = -\sum_{i=1}^{C} r_i \log \sigma(\mathbf{t})_i \quad \text{for } C \text{ classes} \quad (6)$$
where $r_i$ is the ground truth label and $\sigma(\mathbf{t})_i$ is the softmax probability for the ith class.
Finally, the prediction label $\hat{t}_i$ is decided by taking the argmin of the loss function:
$$\hat{t}_i = \arg\min_{c} L_c \quad (7)$$
where the argmin operation selects the class with the smallest loss value from the target function.
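Equations (5)–(7) can be checked with a few lines of NumPy; the logits and one-hot label below are made-up values used purely for illustration.

```python
import numpy as np

def softmax(t):
    e = np.exp(t - t.max())          # shift for numerical stability
    return e / e.sum()               # Eq. (5): values in (0, 1) that sum to 1

def categorical_cross_entropy(r, p):
    return -np.sum(r * np.log(p))    # Eq. (6)

t = np.array([2.0, 0.5, -1.0])       # example logits for C = 3 classes
r = np.array([1.0, 0.0, 0.0])        # one-hot ground truth
p = softmax(t)
loss = categorical_cross_entropy(r, p)
# Picking the class with the smallest per-class loss -log(p_c) is equivalent
# to picking the class with the largest softmax probability.
pred = np.argmax(p)
print(p, loss, pred)
```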

2.2. Simultaneous Convolution of Low-Level and High-Level Spectral–Spatial Features

The main backbone of the MiCB block is the convolution of low-level spectral features with high-level spatial features. This is achieved through the use of 3D kernels that only convolve the spatial dimensions while retaining the spectral dimension. The MiCB model uses kernels of sizes 5 × 5 × 1 and 3 × 3 × 1 to learn the spatial features and preserve the low-level spectral features for convolution at higher network layers (see Figure 7).
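A tf.keras sketch of such spatial-only 3D kernels (5 × 5 × 1 and 3 × 3 × 1) that leave the spectral depth untouched; the patch size and filter counts are illustrative assumptions.

```python
import tensorflow as tf
from tensorflow.keras import layers

inp = layers.Input(shape=(17, 17, 15, 1))   # p x p spatial window, 15 spectral components
# A kernel depth of 1 along the spectral axis convolves only the spatial dimensions,
# so the 15-band spectral dimension is preserved for deeper layers.
x = layers.Conv3D(8, kernel_size=(5, 5, 1), padding="same", activation="relu")(inp)
x = layers.Conv3D(16, kernel_size=(3, 3, 1), padding="same", activation="relu")(x)
spatial_only = tf.keras.Model(inp, x)        # output shape: (None, 17, 17, 15, 16)
```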

2.3. Multi-Scale 3D Convolution Block

Figure 13 depicts the framework for multi-scale feature learning with kernels of various sizes to identify a broader range of significant characteristics [23,25,26,27]. The red, aqua, and blue boxes are distinct convolutional filters used to discover hidden characteristics.
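A sketch of one possible multi-scale 3D convolution block in tf.keras, where parallel branches with different kernel sizes are applied to the same input and fused by concatenation; the kernel sizes and filter counts are illustrative, not the exact configuration of Figure 13.

```python
import tensorflow as tf
from tensorflow.keras import layers

def multi_scale_block(x, filters=8):
    """Parallel 3D convolutions at several scales, fused channel-wise."""
    b1 = layers.Conv3D(filters, (1, 1, 1), padding="same", activation="relu")(x)
    b2 = layers.Conv3D(filters, (3, 3, 1), padding="same", activation="relu")(x)
    b3 = layers.Conv3D(filters, (5, 5, 1), padding="same", activation="relu")(x)
    return layers.Concatenate(axis=-1)([b1, b2, b3])

inp = layers.Input(shape=(17, 17, 15, 1))
out = multi_scale_block(inp)
block = tf.keras.Model(inp, out)    # output shape: (None, 17, 17, 15, 24)
```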

2.4. Depthwise Separable Convolution

Unlike the conventional 2D convolution that jointly maps spatial and channel information when generating feature maps, the depthwise separable convolution performs convolution in two steps: the depthwise spatial convolution and the point-wise convolution, as shown in Figure 14.
The depthwise spatial convolution performs a 2D convolution on each feature map separately. As illustrated in Figure 14, the input data comprise 7 feature maps of size 6 × 6. We use seven separate kernels of size 3 × 3, and each kernel convolves only one channel of the input layer, resulting in a feature map of dimension 4 × 4 (assuming zero padding and a stride of one). We then stack the resultant seven feature maps of dimensionality 4 × 4. In the second step, we apply point-wise convolution to extend the depth of the output: each 1 × 1 convolution with a kernel of size 1 × 1 × 7 results in a feature map of dimension 4 × 4, so after applying 128 such 1 × 1 × 7 convolutions we obtain the final output layer of dimension 4 × 4 × 128, as shown in Figure 14. Therefore, we transform an input of dimension 6 × 6 × 7 into an output layer of dimension 4 × 4 × 128. The depthwise separable convolution reduces the model complexity in terms of network parameters and calculation time. From the illustration in Figure 14, the method cuts down the number of multiplications from 129,024 (3 × 3 × 7 × 16 × 128) to 15,344 (3 × 3 × 7 × 16 + 1 × 1 × 7 × 16 × 128), where 16 corresponds to the 4 × 4 output positions. Its ability to reduce calculation time is illustrated in Section 4. Hyperspectral image classification has utilized these benefits, which help with information balancing for layers whose depth dimension is far larger than their spatial dimensions.
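A tf.keras sketch of the two-step operation in Figure 14, using the same illustrative sizes (7 input maps of 6 × 6, 3 × 3 depthwise kernels, and 128 point-wise filters). Note that the printed values count kernel weights, whereas the figures quoted above additionally count the 16 (= 4 × 4) output positions.

```python
import tensorflow as tf
from tensorflow.keras import layers

inp = layers.Input(shape=(6, 6, 7))                      # 7 feature maps of size 6 x 6

# Step 1: depthwise convolution, one 3 x 3 kernel per input map -> 7 maps of 4 x 4
dw = layers.DepthwiseConv2D(kernel_size=3, use_bias=False)(inp)
# Step 2: point-wise 1 x 1 x 7 convolutions extend the depth to 128 -> 4 x 4 x 128
pw = layers.Conv2D(128, kernel_size=1, use_bias=False)(dw)
separable = tf.keras.Model(inp, pw)

# Standard convolution producing the same 4 x 4 x 128 output in a single step
standard = tf.keras.Model(inp, layers.Conv2D(128, kernel_size=3, use_bias=False)(inp))

print(separable.count_params())   # 3*3*7 + 1*1*7*128 = 959 weights
print(standard.count_params())    # 3*3*7*128 = 8064 weights
```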

2.5. Residual Learning

Deep learning research indicates that the depth of the network is more advantageous than its width [9,28]. Deeper networks can learn highly discriminative features; however, training them on limited sample data often results in overfitting [23]. To address the challenge of overfitting, we applied residual connections to sufficiently recover the lost features in the MiCB model as the network depth increases [12]. The architecture of the proposed MiCB network employs non-identical residual connections, shown in Figure 15a,b in its design.
The first non-identical residual connection, shown in Figure 15a, is a 3D-CNN layer (convolving only the spatial dimensions) followed by ReLU, utilized at the bottom part of the MiCB network structure among the successive 3D/2D CNN layers to conduct dimension adjustment and facilitate the convolution of low-level spectral with high-level spatial features. We used a convolution layer + ReLU because it converts the network into layers of directed acyclic graphs in which each branch can independently learn highly discriminative features. In deep learning models, feature degradation is at its peak at the top part of the model. In order to recover lost features and control overfitting at this point, we utilized a max pooling layer (shown in Figure 15b) to conduct dimension adjustment. The max pooling function divides the input feature map into smaller regions and outputs the maximum value from each region. Mathematically, this can be expressed as:
$$g_{ij} = p_{\max}\left(h_{ij}\right) \quad (8)$$
where $g_{ij}$ denotes the pooled feature map and $p_{\max}(\cdot)$ is the max pooling function.
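A tf.keras sketch of the two non-identity shortcut types of Figure 15: (a) a convolution + ReLU on the skip path for dimension adjustment, and (b) a max pooling shortcut; the kernel sizes, strides, and filter counts are illustrative assumptions.

```python
import tensorflow as tf
from tensorflow.keras import layers

def conv_shortcut_block(x, filters):
    """Figure 15a: Conv3D + ReLU on the skip path adjusts the channel dimension."""
    main = layers.Conv3D(filters, (3, 3, 1), padding="same", activation="relu")(x)
    main = layers.Conv3D(filters, (3, 3, 1), padding="same", activation="relu")(main)
    skip = layers.Conv3D(filters, (1, 1, 1), padding="same", activation="relu")(x)
    return layers.Add()([main, skip])

def pool_shortcut_block(x):
    """Figure 15b: a max pooling layer on the skip path adjusts the spatial dimension."""
    channels = x.shape[-1]
    main = layers.SeparableConv2D(channels, 3, strides=2, padding="same",
                                  activation="relu")(x)
    skip = layers.MaxPooling2D(pool_size=2, padding="same")(x)
    return layers.Add()([main, skip])
```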

3. Experimental Setup

This section contains the dataset description, comparison methods, implementation details, evaluation indicators, experimental results, and discussion.

3.1. Dataset Description

Three publicly accessible HSI datasets, IP, UP, and SA, were used to evaluate the effectiveness of the MiCB method. The original IP dataset image dimensions are 145 × 145 × 224. However, due to water absorption, 24 bands were discarded [19,29,30]. Therefore, this experiment uses an IP image dataset with dimensions of 145 × 145 × 200. Its ground truth consists of sixteen classes that are not all mutually exclusive, as shown in Table 1 [26]. The IP dataset is the most unbalanced, followed by the UP and SA datasets (see Table 1). The original UP dataset image dimensions are 610 × 340 × 115. We reduced the spectral dimension from 115 to 103 bands by removing 12 noisy bands [19,29,30]. Its ground truth contains nine classes, as shown in Table 1. The SA image is 512 × 217 × 204 pixels after 20 water absorption bands are discarded, and its land cover is assigned 16 class labels [26]. We anticipated that the results obtained on the SA dataset, even when training on less than 1% of the samples, would be superior to those of IP and UP, as the majority of its classes are adequately represented even with 1% of the data.

3.2. Implementation Details

All the tests in this paper were performed online using Google Colab (Google Inc., Mountain View, CA, USA). We used random subsampling in MiCB, which randomly selects some data as the training data while the remaining data are used for testing the model. The results are reported as the mean of seven separate tests. A grid search was utilized to determine the optimal optimizer, learning rate, batch size, number of epochs, and dropout rate for the MiCB model. For both the IP and UP datasets, the Adam optimizer with a learning rate of 0.0009 and a dropout of 0.6 was selected. For SA, we adjusted the learning rate to 0.0007 but retained the 0.6 dropout. The number of epochs was set to 100 for all datasets, and the batch size was set to 64, 128, and 96 for IP, UP, and SA, respectively. For an objective evaluation, we extracted overlapping 3D patches of the same spatial–spectral dimension, 17 × 17 × 15, across all datasets (see Table 2, Table 3 and Table 4).
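A hedged sketch of how the reported training configuration could be set up in tf.keras; build_micb_model is a hypothetical builder for the network of Figure 7 and is not code released with this paper.

```python
import tensorflow as tf

def compile_and_train(build_micb_model, x_train, y_train, num_classes,
                      learning_rate=0.0009, batch_size=64, epochs=100):
    """Training setup matching the hyperparameters reported for the IP dataset;
    for SA, learning_rate=0.0007 and batch_size=96; for UP, batch_size=128."""
    model = build_micb_model(input_shape=(17, 17, 15, 1),   # p x p x t input patches
                             num_classes=num_classes,
                             dropout_rate=0.6)
    model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=learning_rate),
                  loss="categorical_crossentropy",
                  metrics=["accuracy"])
    history = model.fit(x_train, y_train, batch_size=batch_size, epochs=epochs)
    return model, history
```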

3.3. Evaluation Criteria

We evaluated the performance of the recommended HSI models using the Kappa Coefficient (Kappa), Overall Accuracy (OA), and Average Accuracy (AA) metrics. The OA calculates the percentage of correctly categorized samples. These are samples whose projected outcomes exactly matched the ground truth label. The AA calculates the mean of per-class accuracies. The Kappa coefficient’s values range between 0 and 1; a 0 value indicates no consistency while a 1 value indicates a perfect consistency between the classification map and its corresponding ground truth.
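A short sketch of the three evaluation indicators using NumPy and scikit-learn; y_true and y_pred are assumed to be integer class labels of the test pixels.

```python
import numpy as np
from sklearn.metrics import accuracy_score, cohen_kappa_score, confusion_matrix

def evaluate(y_true, y_pred):
    oa = accuracy_score(y_true, y_pred)          # Overall Accuracy (OA)
    cm = confusion_matrix(y_true, y_pred)
    per_class = np.diag(cm) / cm.sum(axis=1)     # per-class accuracy (recall)
    aa = per_class.mean()                        # Average Accuracy (AA)
    kappa = cohen_kappa_score(y_true, y_pred)    # Kappa coefficient
    return oa, aa, kappa
```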

4. Experimental Results and Discussion

This section presents the experimental results and discussion of the proposed MiCB model in contrast to various cutting-edge approaches, including M3D-CNN [27], SSRN [9], R-HybridSN [18], HybridSN [19], and GGBN [1], using the three (IP, UP, and SA) datasets.

4.1. Effect of Varying Window Size

This section examines the impact of altering the window size of the MiCB model across the three (IP, UP, and SA) datasets. We utilized a 1% training set for SA and UP and a 5% training set for IP.
Table 2, Table 3 and Table 4 show that a spatial window size of 17 × 17 attained better classification accuracy on the IP and UP datasets, while a 27 × 27 spatial window was optimal on the SA dataset. However, we chose a spatial window size of 17 × 17 across the three datasets for a fair comparison.

4.2. Ablation Results

Models A and B were developed to determine the impact of residual learning and depthwise separable convolution on MiCB classification results. Model A utilizes residual connections with a mixture of the traditional 2D- and 3D-CNN layers. On the other hand, Model B replaced the traditional 2D-CNN layers with depthwise separable convolutions but lacked residual connections. The classification outcomes are displayed in Table 5, Table 6 and Table 7.

4.2.1. The Summary of Classification Accuracies of the Selected Models Trained Using Very Minimal Sample Data

In this subsection, we present the summary result of the per-class accuracy, Kappa, OA, and AA of Model A, Model B, MiCB, and other selected models such as M3D-CNN, HybridSN, R-HybridSN, SSRN, and GGBN trained on 5% of the IP dataset’s total sample data and 1% of the UP and SA datasets’ total sample data.
The proposed MiCB outperforms the M3D-CNN, HybridSN, R-HybridSN, and SSRN models on the three (IP, UP, and SA) datasets, as shown in Table 5, Table 6 and Table 7, while its performance is comparable with the GGBN on all three datasets. For instance, over the IP dataset, as shown in Table 5, the MiCB model increased the overall classification accuracy of M3D-CNN, HybridSN, R-HybridSN, and SSRN by +28.47%, +3.11%, +0.89%, and +3.96%, respectively, and slightly increased the performance of the GGBN by +0.5%. A similar trend is observed for the UP and SA datasets, where the MiCB increased the overall classification accuracy of M3D-CNN, HybridSN, R-HybridSN, and SSRN by +13.66%, +3.2%, +1.7%, and +0.62% on UP (see Table 6) and by +11.18%, +0.48%, +0.95%, and +2.26% on SA (see Table 7), respectively. Similar to its performance on the IP dataset, the proposed model slightly increased the overall classification accuracy of the GGBN by +0.16% on UP and achieved comparable accuracy on the SA dataset.
The M3D-CNN achieved the lowest classification accuracies due to its structural nature, which primarily extracts multi-scale spectral information but insufficient spectral–spatial features. In addition, the M3D-CNN lacks residual connections to recover lost features and prevent overfitting in deep networks when training samples are very few. The SSRN method, on the other hand, recorded better classification accuracies than M3D-CNN across all the tested datasets because it introduced skip connections in its network structure, which prevent degradation. The HybridSN utilized 3D and 2D CNN layers in its network structure to extract highly discriminative HSI characteristics; however, its overfitting problem worsened as the amount of training data declined. The R-HybridSN addressed the HybridSN's limitations by introducing residual connections and depthwise separable convolutions in its network structure to achieve higher classification accuracy. To improve classification accuracy when training on very little sample data, the GGBN incorporated the biological genome approach to judiciously utilize the 3D-CNN and 2D-CNN layers in its network structure, achieving classification accuracies comparable to the proposed MiCB model.
The high performance of the proposed MiCB model in HSI classification can be attributed to the utilization of residual connections, depthwise separable layers, and multi-scale kernels for feature extraction, and to its ability to convolve the low-level spectral with high-level spatial features. The effect of the residual connections can be observed in classes with very few training samples. For example, the models with residual connections, such as Model A and MiCB, record very high classification accuracies in classes with extremely few training samples compared to Model B, which lacks residual connections in its network structure (see Table 5, classes 7 and 9). On the IP dataset, the proposed MiCB model improved the OA and AA of Model A by +0.43% and +0.40%, respectively, and when compared with Model B, an increase of +1.28% in OA and +11.06% in AA was recorded. A similar trend was recorded on the SA dataset (see Table 7), where the MiCB model increased the OA and AA of Model A by +0.20% and +0.10% and of Model B by +0.46% and +0.23%, respectively. On the UP dataset (see Table 6), the MiCB model recorded an increase in OA and AA over Model B; however, its OA and AA were slightly lower than those of Model A.

4.2.2. Computational Complexity of Model A, Model B, and MiCB over IP, UP, and SA Datasets

This section illustrates the computational complexity of the proposed MiCB model and its variants in terms of the network parameters and testing time.
We can observe in Table 8 that Model B, with no residual connection, recorded the lowest number of network parameters and the shortest test time length. A similar observation is made when replacing the traditional 2D layers in Model A with depthwise separable layers. Since the MiCB model is a hybrid of Model A and B, adding residual connection increases the computational cost, while replacing the traditional 2D layers with depthwise separable layers leads to reduced computational cost. Hence, the proposed model exhibits balanced network parameters and test time length.
For instance, Model A has 1,396,272 more parameters and increased the test time across all datasets by +0.48, +1.06, and +1.99 s, respectively, compared with the MiCB model. However, Model B recorded 532,320 fewer trainable parameters across all datasets than the proposed MiCB model. In terms of the testing time, Model B recorded −0.40, −1.83, and −1.71 s less than the MiCB model across all three datasets, respectively. These observations illustrate the effect of residual and depthwise separable layers on model complexity.

4.3. The Training Accuracy and Loss Convergence Graphs

This subsection uses the training accuracy and loss convergence graphs to demonstrate the competitiveness of the proposed MiCB model in comparison with the selected state-of-the-art models over the IP, UP, and SA datasets. Figure 16a–c shows that the proposed MiCB model converges faster than all the compared methods on the IP dataset and is commensurate with the R-HybridSN and HybridSN methods on SA and UP datasets but faster than the SSRN and GGBN models.

4.4. The Confusion Matrix

In this section, we further demonstrate the competitiveness and robustness of the proposed MiCB model against the GGBN, HybridSN, and R-HybridSN over the IP, UP, and SA datasets when a small amount of training sample data are used, as shown in Figure 17, Figure 18 and Figure 19.
From Figure 17, Figure 18 and Figure 19, it can be observed that most of the sample data of the proposed MiCB model lie in a diagonal line. This demonstrates the competitiveness and robustness of the model when trained on very limited sample data.

4.5. Classification Diagrams

This subsection demonstrates the competitiveness and robustness of the MiCB model using the classification diagrams.
Figure 20, Figure 21 and Figure 22 illustrate that, in contrast to the proposed MiCB model, the GGBN, HybridSN, and R-HybridSN classification maps exhibit more noisy spots over the three benchmarking datasets. Hence, with less training sample data, the proposed MiCB model produces less noisy dispersed points and delivers smoother classification results.

4.6. Comparison with Other Methods

This section details the experimental results when varying the amount of training sample data across all three datasets. Using the IP dataset, we randomly trained the models on 2%, 5%, 10%, and 20% of the total sample data. For UP and SA, we trained the models on 0.4%, 0.8%, 1%, 2%, and 5% of the total sample data and utilized the remaining sample data to test the models. The HSI classifiers performed significantly worse on the IP dataset, especially when the training sample was reduced to below 5%, because some classes might be missing from the training dataset. The outcomes are summarized in Table 9, Table 10 and Table 11 and Figure 23a–c.
The increase in training sample data narrows the OA gap between the compared methods, which converge at some point, as illustrated in Table 9, Table 10 and Table 11 and Figure 23a–c. For example, on the IP dataset, when 5% of the data are used to train the models, the MiCB improves on the OA of M3D-CNN, HybridSN, R-HybridSN, SSRN, and GGBN by +28.47%, +3.11%, +0.89%, +3.96%, and +0.5%, respectively. When the training sample data are 20% of the total data, the MiCB improves on the OA of M3D-CNN, HybridSN, R-HybridSN, SSRN, and GGBN by +9.49%, +0.22%, 0.0%, +0.61%, and +0.07%, respectively. This observation is clearly visualized in Table 9 and Figure 23a. In addition, we note that when the comparison models are trained on extremely small sample sizes, the MiCB model achieves the highest classification accuracy on almost all the datasets, showing the model's adaptability. For instance, the MiCB improved the OA of M3D-CNN, HybridSN, R-HybridSN, SSRN, and GGBN by +29.31%, +8.45%, +4.92%, +7.29%, and +2.22% on 2% IP train data (see Table 9); by +14.73%, +4.18%, +1.84%, +0.36%, and +0.02% on 0.8% UP train data (see Table 10 and Figure 23b); and by +11.75%, +0.79%, +1.6%, +1.7%, and +0.25% on 0.8% SA train data (see Table 11 and Figure 23c), respectively.

5. Conclusions

This paper aims to add to the scientific work on developing deep networks for HSI classification that can train optimally with meager training samples while mitigating the overfitting problem. It proposes an integrated deep multi-scale 3D/2D convolutional network block (MiCB) for simultaneous low-level spectral and high-level spatial feature extraction that can be trained optimally with a limited amount of training sample data. The primary contribution of the MiCB model is its creative use of MiCB blocks, which allow the network to convolve low-level spectral features with high-level spatial features strengthened by multi-scale kernels, residual connections, and depthwise separable convolutions. The use of non-identity multi-residual connections drastically reduces the challenge of gradient disappearance in the MiCB network. Exploding network parameters are addressed by replacing the traditional 2D-CNN with 2D depthwise separable convolutional layers, which also prevents overfitting as the model structure deepens. Lastly, utilizing multi-scale kernels promotes the extraction of highly discriminative features and increases the generalizability of the model. The innovative combination of these four approaches in the MiCB network structure enables the model to extract distinct and abundant contextual features and achieve high classification accuracy even with few training samples. We tested the robustness and competitiveness of our model against cutting-edge methods over the IP, UP, and SA datasets. Our proposed method achieves better classification accuracy than M3D-CNN, HybridSN, SSRN, and R-HybridSN, and comparable results with GGBN.

Author Contributions

Conceptualization, H.C.T., E.C. and D.O.N.; software, H.C.T. and D.O.N.; resources, E.C.; writing—original draft preparation, H.C.T. and D.O.N.; writing—review and editing, H.C.T., E.C. and D.O.N.; supervision, E.C.; funding acquisition, E.C. All authors have read and agreed to the published version of the manuscript.

Funding

This work was sponsored in part by grants 62101503 and 62101505 from the National Natural Science Foundation of China, and Henan Science and Technology Research Project under grant 222102210102.

Data Availability Statement

All datasets used in this research are openly accessible online (http://www.ehu.eus/ccwintco/index.php?title=Hyperspectral_Remote_Sensing_Scenes, accessed date on 20 June 2023).

Conflicts of Interest

There are no conflicts of interest declared by the authors.

References

1. Tinega, H.; Chen, E.; Ma, L.; Mariita, R.M.; Nyasaka, D. Hyperspectral Image Classification Using Deep Genome Graph-Based Approach. Sensors 2021, 21, 6467.
2. Hao, S.; Wang, W.; Ye, Y.; Nie, T.; Bruzzone, L. Two-Stream Deep Architecture for Hyperspectral Image Classification. IEEE Trans. Geosci. Remote Sens. 2018, 56, 2349–2361.
3. Khan, M.J.; Khan, H.S.; Yousaf, A.; Khurshid, K.; Abbas, A. Modern Trends in Hyperspectral Image Analysis: A Review. IEEE Access 2018, 6, 14118–14129.
4. Zhang, X.; Zheng, Y.; Liu, W.; Wang, Z. A hyperspectral image classification algorithm based on atrous convolution. EURASIP J. Wirel. Commun. Netw. 2019, 270.
5. Chen, Y.; Zhao, X.; Jia, X. Spectral–Spatial Classification of Hyperspectral Data Based on Deep Belief Network. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2015, 8, 2381–2392.
6. Lin, Z.; Chen, Y.; Zhao, X.; Wang, G. Spectral-spatial classification of hyperspectral image using autoencoders. In Proceedings of the 2013 9th International Conference on Information, Communications Signal Processing, Tainan, Taiwan, 10–13 December 2013; pp. 1–5.
7. Yue, J.; Mao, S.; Li, M. A deep learning framework for hyperspectral image classification using spatial pyramid pooling. Remote Sens. Lett. 2016, 7, 875–884.
8. Zhang, H.; Li, Y.; Zhang, Y.; Shen, Q. Spectral-spatial classification of hyperspectral imagery using a dual-channel convolutional neural network. Remote Sens. Lett. 2017, 8, 438–447.
9. Zhong, Z.; Li, J.; Luo, Z.; Chapman, M. Spectral–Spatial Residual Network for Hyperspectral Image Classification: A 3-D Deep Learning Framework. IEEE Trans. Geosci. Remote Sens. 2018, 56, 847–858.
10. Chen, Y.; Jiang, H.; Li, C.; Jia, X.; Ghamisi, P. Deep Feature Extraction and Classification of Hyperspectral Images Based on Convolutional Neural Networks. IEEE Trans. Geosci. Remote Sens. 2016, 54, 6232–6251.
11. Li, Y.; Zhang, H.; Shen, Q. Spectral–Spatial Classification of Hyperspectral Imagery with 3D Convolutional Neural Network. Remote Sens. 2017, 9, 67.
12. Nyabuga, D.O.; Song, J.; Liu, G.; Adjeisah, M. A 3D-2D Convolutional Neural Network and Transfer Learning for Hyperspectral Image Classification. Comput. Intell. Neurosci. 2021, 2021, 1759111.
13. Licciardi, G.; Marpu, P.R.; Chanussot, J.; Benediktsson, J.A. Linear Versus Nonlinear PCA for the Classification of Hyperspectral Data Based on the Extended Morphological Profiles. IEEE Geosci. Remote Sens. Lett. 2012, 9, 447–451.
14. Li, T.; Zhang, J.; Zhang, Y. Classification of hyperspectral image based on deep belief networks. In Proceedings of the 2014 IEEE International Conference on Image Processing (ICIP 2014), Paris, France, 27–30 October 2014; pp. 5132–5136.
15. Yue, J.; Zhao, W.; Mao, S.; Liu, H. Spectral–spatial classification of hyperspectral images using deep convolutional neural networks. Remote Sens. Lett. 2015, 6, 468–477.
16. Aptoula, E.; Ozdemir, M.C.; Yanikoglu, B. Deep Learning with Attribute Profiles for Hyperspectral Image Classification. IEEE Geosci. Remote Sens. Lett. 2016, 13, 1970–1974.
17. Lee, H.; Kwon, H. Going Deeper with Contextual CNN for Hyperspectral Image Classification. IEEE Trans. Image Process. 2017, 26, 4843–4855.
18. Feng, F.; Wang, S.; Wang, C.; Zhang, J. Learning Deep Hierarchical Spatial–Spectral Features for Hyperspectral Image Classification Based on Residual 3D-2D CNN. Sensors 2019, 19, 5276.
19. Roy, S.K.; Krishna, G.; Dubey, S.R.; Chaudhuri, B.B. HybridSN: Exploring 3-D–2-D CNN Feature Hierarchy for Hyperspectral Image Classification. IEEE Geosci. Remote Sens. Lett. 2019, 17, 277–281.
20. Cao, F.; Guo, W. Deep hybrid dilated residual networks for hyperspectral image classification. Neurocomputing 2020, 384, 170–181.
21. Wu, P.; Cui, Z.; Gan, Z.; Liu, F. Three-dimensional ResNeXt network using feature fusion and label smoothing for hyperspectral image classification. Sensors 2020, 20, 1652.
22. Zhao, C.; Zhao, H.; Wang, G.; Chen, H. Hybrid Depth-Separable Residual Networks for Hyperspectral Image Classification. Complexity 2020, 2020, 4608647.
23. Tinega, H.C.; Chen, E.; Ma, L.; Nyasaka, D.O.; Mariita, R.M. HybridGBN-SR: A Deep 3D/2D Genome Graph-Based Network for Hyperspectral Image Classification. Remote Sens. 2022, 14, 1332.
24. Bao, J.; Chen, Y.; Yu, L.; Chen, C. A multi-scale kernel learning method and its application in image classification. Neurocomputing 2017, 257, 16–23.
25. Zhou, W.; Lin, X.; Lei, J.; Yu, L.; Hwang, J.-N. MFFENet: Multiscale Feature Fusion and Enhancement Network For RGB–Thermal Urban Road Scene Parsing. IEEE Trans. Multimedia 2022, 24, 2526–2538.
26. Elizar, E.; Zulkifley, M.A.; Muharar, R.; Zaman, M.H.M.; Mustaza, S.M. A Review on Multiscale-Deep-Learning Applications. Sensors 2022, 22, 7384.
27. He, M.; Li, B.; Chen, H. Multi-Scale 3D Deep Convolutional Neural Network for Hyperspectral Image Classification. In Proceedings of the 2017 IEEE International Conference on Image Processing (ICIP), Beijing, China, 17–20 September 2017; pp. 3904–3908.
28. Mou, L.; Ghamisi, P.; Zhu, X.X. Unsupervised Spectral–Spatial Feature Learning via Deep Residual Conv–Deconv Network for Hyperspectral Image Classification. IEEE Trans. Geosci. Remote Sens. 2017, 56, 391–406.
29. Liu, G.; Qi, L.; Tie, Y.; Ma, L. Hyperspectral Image Classification Using Kernel Fused Representation via a Spatial-Spectral Composite Kernel with Ideal Regularization. IEEE Geosci. Remote Sens. Lett. 2019, 16, 1422–1426.
30. Li, S.; Song, W.; Fang, L.; Chen, Y.; Ghamisi, P.; Benediktsson, J.A. Deep Learning for Hyperspectral Image Classification: An Overview. IEEE Trans. Geosci. Remote Sens. 2019, 57, 6690–6709.
Figure 1. The original hyperspectral image data cube and pixel vectors.
Figure 2. The structural design of the pre-processing-based networks.
Figure 3. The structural design of the post-processing-based networks.
Figure 4. The structural design of the integrated networks.
Figure 5. Flowchart of the proposed MiCB network.
Figure 6. HSI data pre-processing framework.
Figure 7. The MiCB architecture.
Figure 8. A nonlinear 3D convolutional operation in the jth feature map of the ith layer.
Figure 9. A nonlinear 2D convolutional operation at position (x, y) in the jth feature map of the ith layer.
Figure 10. Transformation structure of 3D layer to 2D layer.
Figure 11. Downsampling feature maps using 1 × 1 filters.
Figure 12. Feature concatenation operation.
Figure 13. The framework of multi-scale feature learning.
Figure 14. Overall depthwise separable convolution operation.
Figure 15. Non-identical residual connection using (a) a 3D-CNN layer + ReLU for dimension adjustment, and (b) a max pooling layer for dimension adjustment.
Figure 16. Training and loss graphs of the SSRN, HybridSN, R-HybridSN, and MiCB networks over the (a) IP, (b) UP, and (c) SA datasets.
Figure 17. Confusion matrices over the IP dataset for (a) GGBN, (b) HybridSN, (c) R-HybridSN, and (d) MiCB.
Figure 18. Confusion matrices over the UP dataset for (a) GGBN, (b) HybridSN, (c) R-HybridSN, and (d) MiCB.
Figure 19. Confusion matrices over the SA dataset for (a) GGBN, (b) HybridSN, (c) R-HybridSN, and (d) MiCB.
Figure 20. Classification maps of the IP dataset: (a) ground truth; (b) R-HybridSN; (c) HybridSN; (d) GGBN; (e) MiCB.
Figure 21. Classification maps of the UP dataset: (a) ground truth; (b) R-HybridSN; (c) HybridSN; (d) GGBN; (e) MiCB.
Figure 22. Classification maps of the SA dataset: (a) ground truth; (b) R-HybridSN; (c) HybridSN; (d) GGBN; (e) MiCB.
Figure 23. The effect of varying the number of training samples for the M3D-CNN, HybridSN, R-HybridSN, SSRN, GGBN, and MiCB methods on the overall accuracy (OA) over the (a) IP; (b) UP; (c) SA datasets.
Table 1. The IP, UP, and SA datasets, class labels, and per-class sample data percentage.

| Class No | IP Class Label | IP Samples (%) | SA Class Label | SA Samples (%) | UP Class Label | UP Samples (%) |
|---|---|---|---|---|---|---|
| 1 | Alfalfa | 0.45 | Brocoli_green_weeds_1 | 3.71 | Asphalt | 15.50 |
| 2 | Corn-notill | 13.93 | Brocoli_green_weeds_2 | 6.88 | Meadows | 43.60 |
| 3 | Corn-mintill | 8.10 | Fallow | 3.65 | Gravel | 4.91 |
| 4 | Corn | 2.31 | Fallow_rough_plow | 2.58 | Trees | 7.16 |
| 5 | Grass-pasture | 4.71 | Fallow_smooth | 4.95 | Painted metal sheets | 3.14 |
| 6 | Grass-trees | 7.12 | Stubble | 7.31 | Bare Soil | 11.76 |
| 7 | Grass-pasture-mowed | 0.27 | Celery | 6.61 | Bitumen | 3.11 |
| 8 | Hay-windrowed | 4.66 | Grapes_untrained | 20.82 | Self-Blocking Bricks | 8.61 |
| 9 | Oats | 0.20 | Soil_vinyard_develop | 11.46 | Shadows | 2.21 |
| 10 | Soybean-notill | 9.48 | Corn_senesced_green_weeds | 6.06 | | |
| 11 | Soybean-mintill | 23.95 | Lettuce_romaine_4wk | 1.97 | | |
| 12 | Soybean-clean | 5.79 | Lettuce_romaine_5wk | 3.56 | | |
| 13 | Wheat | 2.00 | Lettuce_romaine_6wk | 1.69 | | |
| 14 | Woods | 12.34 | Lettuce_romaine_7wk | 1.98 | | |
| 15 | Buildings-Grass-Trees-Drives | 3.77 | Vinyard_untrained | 13.43 | | |
| 16 | Stone-Steel-Towers | 0.91 | Vinyard_vertical_trellis | 3.34 | | |
Table 2. The MiCB model's performance on different window sizes when trained using 5% of the IP data sample and assessed in terms of Kappa, OA, and AA.

| Evaluation | 15 × 15 | 17 × 17 | 19 × 19 | 21 × 21 | 23 × 23 | 25 × 25 | 27 × 27 |
|---|---|---|---|---|---|---|---|
| Kappa | 0.967 | 0.970 | 0.967 | 0.965 | 0.965 | 0.963 | 0.963 |
| OA | 97.14 | 97.35 | 97.13 | 96.90 | 96.95 | 96.76 | 96.71 |
| AA | 90.08 | 92.16 | 91.38 | 89.18 | 91.46 | 91.22 | 91.79 |
Table 3. The MiCB model's performance on different window sizes when trained using 1% of the UP data sample and assessed in terms of Kappa, OA, and AA.

| Evaluation | 15 × 15 | 17 × 17 | 19 × 19 | 21 × 21 | 23 × 23 | 25 × 25 | 27 × 27 |
|---|---|---|---|---|---|---|---|
| Kappa | 0.974 | 0.977 | 0.973 | 0.974 | 0.970 | 0.969 | 0.969 |
| OA | 98.03 | 98.23 | 97.96 | 98.07 | 97.73 | 97.66 | 97.66 |
| AA | 96.44 | 96.73 | 95.99 | 96.09 | 95.51 | 94.98 | 94.99 |
Table 4. The MiCB model's performance on different window sizes when trained using 1% of the SA data sample and assessed in terms of Kappa, OA, and AA.

| Evaluation | 15 × 15 | 17 × 17 | 19 × 19 | 21 × 21 | 23 × 23 | 25 × 25 | 27 × 27 |
|---|---|---|---|---|---|---|---|
| Kappa | 0.988 | 0.991 | 0.992 | 0.995 | 0.995 | 0.995 | 0.996 |
| OA | 98.93 | 99.20 | 99.28 | 99.51 | 99.54 | 99.56 | 99.64 |
| AA | 98.95 | 99.13 | 99.16 | 99.42 | 99.35 | 99.47 | 99.51 |
Table 5. The summary of the results of the per-class accuracy (%), Kappa, OA, and AA of Model A, Model B, MiCB, and other selected models trained using 5% of the IP data sample.

| Class Number | M3D-CNN | HybridSN | R-HybridSN | SSRN | GGBN | Model A | Model B | MiCB |
|---|---|---|---|---|---|---|---|---|
| 1 | 27.5 | 61.82 | 45 | 12.99 | 46.36 | 59.09 | 32.14 | 72.73 |
| 2 | 59.15 | 92.25 | 95.45 | 93.04 | 94.78 | 95.15 | 94.85 | 96.05 |
| 3 | 45.07 | 92.97 | 97.36 | 93.72 | 98.38 | 97.97 | 98.70 | 99.24 |
| 4 | 38.49 | 78.22 | 94.8 | 72.38 | 94.49 | 93.27 | 88.57 | 92.38 |
| 5 | 70.33 | 96.6 | 98.85 | 98.16 | 99.15 | 99.69 | 99.38 | 98.79 |
| 6 | 97.2 | 98.11 | 99.32 | 99.86 | 98.02 | 99.55 | 98.66 | 98.52 |
| 7 | 18.52 | 68.52 | 95.56 | 0 | 87.78 | 77.25 | 12.70 | 98.41 |
| 8 | 98.04 | 99.96 | 100 | 99.94 | 99.85 | 99.81 | 99.31 | 99.94 |
| 9 | 25.79 | 83.68 | 65.26 | 0 | 78.95 | 82.71 | 10.53 | 50.38 |
| 10 | 55.85 | 96.12 | 95.9 | 91.01 | 97.82 | 97.04 | 96.36 | 97.26 |
| 11 | 76.2 | 96.66 | 98.09 | 95.63 | 97.98 | 97.86 | 98.21 | 98.73 |
| 12 | 33.89 | 85.44 | 89.15 | 87.9 | 92.97 | 93.45 | 92.01 | 93.12 |
| 13 | 91.23 | 94.97 | 99.74 | 98.53 | 97.64 | 99.71 | 99.49 | 99.34 |
| 14 | 94.68 | 99.34 | 99.26 | 99.82 | 99.03 | 99.97 | 99.14 | 99.77 |
| 15 | 42.37 | 82.92 | 87.66 | 82.09 | 92.29 | 92.45 | 86.22 | 89.96 |
| 16 | 49.32 | 80 | 88.18 | 82.31 | 88.75 | 82.79 | 89.94 | 89.94 |
| Kappa | 0.642 | 0.934 | 0.96 | 0.923 | 0.96 | 0.965 | 0.955 | 0.97 |
| OA (%) | 68.88 | 94.24 | 96.46 | 93.39 | 96.85 | 96.92 | 96.07 | 97.35 |
| AA (%) | 57.73 | 87.97 | 90.6 | 75.28 | 91.51 | 91.67 | 81.01 | 92.16 |
Table 6. The summary of the results of the per-class accuracy (%), Kappa, OA, and AA of Model A, Model B, MiCB, and other selected models trained using 1% of the UP data sample.

| Class | M3D-CNN | HybridSN | R-HybridSN | SSRN | GGBN | Model A | Model B | MiCB |
|---|---|---|---|---|---|---|---|---|
| 1 | 90.56 | 95.72 | 96.94 | 98.76 | 98.50 | 97.65 | 98.56 | 99.40 |
| 2 | 89.47 | 99.68 | 99.69 | 99.91 | 99.70 | 99.68 | 99.77 | 99.38 |
| 3 | 59.11 | 84.38 | 87.17 | 85.72 | 89.03 | 95.37 | 89.70 | 95.01 |
| 4 | 93.25 | 87.7 | 89.15 | 94.85 | 93.28 | 92.57 | 93.26 | 93.90 |
| 5 | 93.66 | 98.99 | 99.51 | 99.76 | 99.71 | 98.83 | 99.77 | 99.47 |
| 6 | 69.63 | 96.82 | 98.44 | 96.11 | 99.79 | 99.26 | 97.54 | 99.73 |
| 7 | 65.71 | 84.42 | 95.82 | 95.98 | 98.14 | 97.13 | 84.40 | 93.97 |
| 8 | 78.35 | 89.18 | 93.28 | 94.96 | 96.03 | 97.93 | 91.50 | 96.89 |
| 9 | 94.41 | 71.71 | 77.82 | 99.89 | 97.35 | 97.77 | 92.42 | 92.79 |
| Kappa | 0.798 | 0.935 | 0.955 | 0.97 | 0.975 | 0.977 | 0.960 | 0.977 |
| OA (%) | 84.63 | 95.09 | 96.59 | 97.67 | 98.13 | 98.30 | 97.01 | 98.29 |
| AA (%) | 81.57 | 89.84 | 93.09 | 96.22 | 96.84 | 97.35 | 94.10 | 96.73 |
Table 7. The summary of the results of the per-class accuracy (%), Kappa, OA, and AA of Model A, Model B, MiCB, and other selected models trained using 1% of the SA data sample.

| Class | M3D-CNN | HybridSN | R-HybridSN | SSRN | GGBN | Model A | Model B | MiCB |
|---|---|---|---|---|---|---|---|---|
| 1 | 94.88 | 99.99 | 100 | 100 | 99.95 | 100.00 | 99.98 | 99.99 |
| 2 | 99.61 | 100 | 99.97 | 100 | 100 | 100.00 | 99.97 | 100 |
| 3 | 91.89 | 99.82 | 99.49 | 99.96 | 99.92 | 99.74 | 100.00 | 99.72 |
| 4 | 98.33 | 98.38 | 98.72 | 99.72 | 96.25 | 99.68 | 99.49 | 99.66 |
| 5 | 98.83 | 99.26 | 98.43 | 98.73 | 99.33 | 99.34 | 99.60 | 99.02 |
| 6 | 98.09 | 99.93 | 99.9 | 100 | 99.92 | 99.98 | 99.65 | 99.80 |
| 7 | 97.67 | 99.95 | 99.96 | 99.99 | 99.98 | 99.98 | 100.00 | 99.85 |
| 8 | 82.4 | 97.77 | 98.23 | 95.06 | 99.25 | 99.38 | 98.60 | 98.98 |
| 9 | 98.14 | 99.99 | 99.99 | 100 | 100 | 99.99 | 100 | 100 |
| 10 | 87.6 | 98.36 | 97.9 | 98.33 | 99.08 | 98.48 | 98.49 | 98.54 |
| 11 | 86.72 | 96.06 | 96.46 | 97.42 | 98.75 | 96.49 | 96.85 | 99.24 |
| 12 | 96.99 | 97.44 | 99.09 | 100 | 99.77 | 99.92 | 99.48 | 99.48 |
| 13 | 97.14 | 97.42 | 82.82 | 93.02 | 93.67 | 96.79 | 96.35 | 95.86 |
| 14 | 91.78 | 99.52 | 97.25 | 95.62 | 99.27 | 99.10 | 99.30 | 98.00 |
| 15 | 64.42 | 97.06 | 95.12 | 88.18 | 98.49 | 95.76 | 95.18 | 98.02 |
| 16 | 78.14 | 100 | 99.71 | 99.49 | 99.99 | 99.88 | 99.52 | 99.98 |
| Kappa | 0.867 | 0.985 | 0.98 | 0.966 | 0.992 | 0.989 | 0.986 | 0.991 |
| OA (%) | 88.02 | 98.72 | 98.25 | 96.94 | 99.29 | 99.00 | 98.74 | 99.20 |
| AA (%) | 91.41 | 98.81 | 97.69 | 97.84 | 98.98 | 99.03 | 98.90 | 99.13 |
Table 8. The network parameters and the training and testing time in seconds over the IP, UP, and SA datasets for Model A, Model B, and MiCB.

| Dataset | Model A Params | Model A Train Time (s) | Model A Test Time (s) | Model B Params | Model B Train Time (s) | Model B Test Time (s) | MiCB Params | MiCB Train Time (s) | MiCB Test Time (s) |
|---|---|---|---|---|---|---|---|---|---|
| IP | 2,354,700 | 43.01 | 2.95 | 426,108 | 35.08 | 2.07 | 958,428 | 39.96 | 2.47 |
| UP | 2,353,797 | 41.88 | 11.88 | 425,205 | 33.24 | 8.99 | 957,525 | 42.02 | 10.82 |
| SA | 2,354,700 | 47.83 | 14.78 | 426,108 | 40.92 | 11.08 | 958,428 | 45.44 | 12.79 |
Table 9. The influence of altering the training data samples on overall accuracy for MiCB in comparison with the selected models over the IP dataset.

| Model | 20% | 10% | 8% | 5% | 2% |
|---|---|---|---|---|---|
| M3D-CNN | 90.03 | 80.10 | 78.04 | 68.88 | 62.28 |
| SSRN | 98.91 | 97.25 | 96.33 | 93.39 | 84.30 |
| HybridSN | 99.30 | 97.66 | 96.37 | 94.24 | 83.14 |
| R-HybridSN | 99.52 | 98.44 | 98.12 | 96.46 | 86.67 |
| GGBN | 99.45 | 98.80 | 98.04 | 96.85 | 89.37 |
| MiCB | 99.52 | 98.75 | 98.43 | 97.35 | 91.59 |
Table 10. The influence of altering the training data samples on overall accuracy for MiCB in comparison with the selected models over the UP dataset.

| Model | 5% | 2% | 1% | 0.80% | 0.40% |
|---|---|---|---|---|---|
| M3D-CNN | 92.80 | 89.27 | 87.19 | 82.75 | 76.53 |
| SSRN | 99.57 | 99.07 | 97.67 | 97.12 | 93.41 |
| HybridSN | 99.45 | 97.86 | 95.86 | 93.30 | 85.95 |
| R-HybridSN | 99.47 | 98.47 | 96.40 | 95.64 | 91.60 |
| GGBN | 99.74 | 99.34 | 98.13 | 97.46 | 94.66 |
| MiCB | 99.74 | 99.16 | 98.29 | 97.48 | 94.21 |
Table 11. The influence of altering the training data samples on overall accuracy for MiCB in comparison with the selected models over the SA dataset.

| Model | 5% | 2% | 1% | 0.80% | 0.40% |
|---|---|---|---|---|---|
| M3D-CNN | 92.65 | 90.17 | 88.02 | 86.82 | 83.42 |
| SSRN | 98.7 | 98.02 | 96.94 | 96.87 | 93.64 |
| HybridSN | 99.83 | 99.57 | 98.72 | 97.78 | 94.88 |
| R-HybridSN | 99.82 | 99.36 | 98.25 | 96.97 | 94.33 |
| GGBN | 99.97 | 99.68 | 99.29 | 98.32 | 97.26 |
| MiCB | 99.93 | 99.73 | 99.20 | 98.57 | 96.07 |
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
