Article

Innovative Hyperspectral Image Classification Approach Using Optimized CNN and ELM

1
Key Lab of Earth Exploration & Information Techniques of Ministry Education, Chengdu University of Technology, Chengdu 610059, China
2
School of Computer Science, Chengdu University, Chengdu 610106, China
3
School of Information and Engineering, Sichuan Tourism University, Chengdu 610100, China
*
Author to whom correspondence should be addressed.
Electronics 2022, 11(5), 775; https://doi.org/10.3390/electronics11050775
Submission received: 21 January 2022 / Revised: 23 February 2022 / Accepted: 1 March 2022 / Published: 2 March 2022
(This article belongs to the Special Issue Advanced Machine Learning Applications in Big Data Analytics)

Abstract
In order to effectively extract features and improve classification accuracy for hyperspectral remote sensing images (HRSIs), an innovative classification method (IPCEHRIC) is proposed in this paper that fully exploits the advantages of an enhanced particle swarm optimization (PSO) algorithm, a convolutional neural network (CNN), and an extreme learning machine (ELM). In the IPCEHRIC, an enhanced PSO algorithm (CWLPSO) is developed by improving the learning factors and inertia weight to strengthen the global optimization performance; it is employed to optimize the parameters of the CNN, yielding an optimized CNN model that effectively extracts the deep features of HRSIs. A feature matrix is then constructed, and the ELM, with its strong generalization ability and fast learning speed, is employed to classify the HRSIs accurately. The Pavia University data set and actual HRSIs acquired after the Jiuzhaigou M7.0 earthquake are used to test the effectiveness of the IPCEHRIC. The experimental results show that the optimized CNN effectively extracts deep features from HRSIs, and that the IPCEHRIC accurately classifies the post-earthquake HRSIs into villages, bareland, grassland, trees, water, and rocks. Therefore, the IPCEHRIC offers stronger generalization, faster learning, and higher classification accuracy.

1. Introduction

Remote sensing image (RSI) classification divides an image into several regions using a specific rule or algorithm according to spectral features, geometric texture features, or other features [1,2,3]. Each region is a set of ground objects with the same characteristics; alternatively, a collection of RSIs is divided into several sets, each of which represents one ground-object category. Classification is a fundamental problem and occupies a very important position in the field of RSIs [4,5,6]. Therefore, research on remote sensing image classification methods has become an important direction with significant theoretical and practical value.
In recent years, many classification methods of RSIs have been proposed, which can be divided into two categories: manual visual interpretation and computer classification [7]. Manual visual interpretation is the most traditional classification method; it involves a large workload, has low efficiency, and requires rich professional knowledge and interpretation experience [8,9,10]. With the rapid development of computer techniques, automatic classification methods have replaced manual visual interpretation. More sophisticated computer methods use the spectral brightness values of pixels and the spatial relationships between pixels and their neighbors to realize pixel-level classification. Tran et al. [11] presented a sub-pixel and per-pixel classification method to analyze the impact of land cover heterogeneity. Khodadadzadeh et al. [12] presented a new hyperspectral spectral-spatial classifier. Li et al. [13] presented a novel classification method of RSIs based on the probabilistic fusion of pixel-level and superpixel-level classifiers. Li et al. [14] presented a novel pixel-pair method. Mei et al. [15] presented a novel pixel-level perceptual subspace learning method. Pan et al. [16] presented a new central pixel selection strategy based on gradient information to realize texture image classification. Bey et al. [17] presented a new land cover assessment methodology. Yan et al. [18] presented a triple counter domain adaptation approach for learning a domain-invariant classifier. Li et al. [19] presented a novel multi-view active learning approach based on sub-pixels and super-pixels. Ma and Chang [20] presented a novel mixed pixel classification approach.
Single-pixel spectral classification methods can obtain hyperspectral spectral-spatial classification results, but they still suffer from low classification accuracy and high time complexity. Computer-based signal processing methods involve a large amount of calculation, yet can achieve high classification accuracy. However, high-resolution RSIs have high spatial resolution and complexity, so it is very difficult to classify them with traditional classification methods. Therefore, there is an urgent need for fast classification approaches that can be effectively applied to high-resolution RSIs [21,22]. As a field of artificial intelligence, deep learning has attracted extensive attention and has gradually become one of the key technologies driving the development of artificial intelligence. Accordingly, many scholars have applied deep learning to remote sensing image classification and proposed numerous feature extraction and classification methods. Romero et al. [23] presented a sparse-feature unsupervised learning approach based on a greedy hierarchical unsupervised pretraining method. Sharma et al. [24] presented a new deep patch-based CNN. Maggiori et al. [25] presented a dense pixel-level classification model. Wang et al. [26] presented an HRSI classification method using principal component analysis (PCA), guided filtering, and a deep learning architecture. Ji et al. [27] presented a novel three-dimensional CNN to automatically classify crops. Ben et al. [28] presented a 3-D deep learning approach. Xu et al. [29] presented a novel RSI classification model using a generative adversarial network. Tao et al. [30] presented a novel reinforced deep neural network (DNN) with increased depth and width. Liang et al. [31] presented a new RSI classification approach using a stacked denoising autoencoder. Li et al. [32] presented a novel region-wise deep feature extraction model. Li et al. [33] presented an adaptive multiscale deep fusion residual network. Yuan et al. [34] presented a classification approach based on rearranged local features. Zhang et al. [35] presented a new multi-scale dense network. Zhang et al. [36] presented a new feature aggregation model based on a 3-D CNN. Chen et al. [37] presented a novel deep Boltzmann machine based on the conjugate gradient update algorithm. Xiong et al. [38] presented a novel deep multi-feature fusion network based on two different deep architecture branches. Tong et al. [39] presented a channel-attention-based DenseNet network. Zhu et al. [40] presented a new deep network with dual-branch attention fusion. Raza et al. [41] presented a four-layer classification network based on visual attention mechanisms. Li et al. [42] presented a classification approach combining a generative adversarial network (GAN) and a CNN with long short-term memory. Gu et al. [43] presented a pseudo-labeled sample generation method. Guo et al. [44] presented a novel self-supervised gated self-attention GAN. Li et al. [45] presented a novel locality-preserving deep cross-embedded classification network. Lei et al. [46] presented a novel deep convolutional capsule network using spectral-spatial features. Cui et al. [47] presented a dual-channel deep learning recognition model. Peng et al. [48] presented an efficient search framework to discover optimal network architectures. Guo et al. [49] presented a novel semi-supervised scene classification method using a GAN. Dong et al. [50] presented a pixel-cluster CNN. Li et al. [51] presented a new RSI classification approach using error-tolerant deep learning. Li et al. [52] presented a gated recursive neural network. Dong et al. [53] explored the potential of reference-based super-resolution methods. Wu et al. [54] presented a self-paced dynamic infinite mixture model. Karadal et al. [55] presented automated classification of remote sensing images based on multileveled MobileNetV2 and DWT. Ma et al. [56] presented a novel adaptive hybrid fusion network for multiresolution remote sensing image classification. Cai et al. [57] presented a novel cross-attention mechanism and graph convolution integration algorithm. Zhang et al. [58] presented a convolutional neural architecture for remote sensing image scene classification. Hilal et al. [59] presented a new deep transfer learning-based fusion model for remote-sensing image classification. Li et al. [60] presented a multi-scale fully convolutional network to exploit discriminative representations. In addition, some new optimization algorithms have been proposed [61,62,63,64,65,66,67,68,69,70,71,72], which can optimize the parameters of classification models.
Because the CNN has good feature extraction ability, CNN-based classification methods have obtained better classification results; the CNN has therefore attracted extensive attention and has been widely applied to RSIs. However, the structure and parameter selection of the CNN seriously affect its learning accuracy. Therefore, in this paper an enhanced PSO algorithm with global optimization ability is employed to optimize and determine the parameters of the CNN, and the optimized parameter values are used to construct an optimized CNN, which is applied to effectively extract the multi-layer features of HRSIs and form a multi-feature fusion matrix. The ELM is then employed to realize the classification of HRSIs. The effectiveness is verified on a typical data set and on actual HRSIs acquired after the Jiuzhaigou M7.0 earthquake.
The main contributions of this paper are described as follows.
(1)
To address the slow convergence and low accuracy of the PSO, an enhanced PSO fusing multiple strategies (CWLPSO) is proposed by adding a new acceleration-factor strategy and a linearly decreasing inertia weight strategy.
(2)
To address the difficulty of determining the parameters of the CNN, an optimized CNN model using the CWLPSO is developed to effectively extract the deep features of HRSIs.
(3)
The ELM, with its strong generalization ability and fast learning speed, is combined with the constructed feature matrix to realize the accurate classification of HRSIs.
(4)
An innovative classification method of HRSIs based on CWLPSO, CNN, and ELM, namely, IPCEHRIC is proposed.

2. Basic Methods

2.1. CNN

The CNN is a feedforward neural network that includes convolution calculations and is a representative deep learning algorithm. It has representation learning ability and can classify input information according to its hierarchical structure. The CNN includes an input layer, hidden layers, and an output layer, as shown in Figure 1.
The structure of the CNN is described in detail as follows.
Input layer. It can handle multidimensional data; the input features need to be standardized.
Hidden layer. It includes convolution layers, pooling layers, and fully connected layers. The convolution layer extracts features from the input data through the convolution operations of multiple convolution kernels to construct feature maps. The pooling layer selects features and filters information from the feature maps to retain the important features, using a preset pooling function. The fully connected layer is equivalent to the hidden layer of a conventional network, from which the output is obtained.
Convolution kernel. When the convolution kernel works, it regularly scans the input features, multiplies and sums them, and superimposes the bias. The output of the (l + 1)-th layer is described as follows.
$$Z^{l+1}(i,j) = \left[Z^{l}\otimes w^{l+1}\right](i,j) + b = \sum_{k=1}^{K_l}\sum_{x=1}^{f}\sum_{y=1}^{f}\left[Z_k^{l}(s_0 i + x,\ s_0 j + y)\,w_k^{l+1}(x,y)\right] + b,\quad i,j \in \{0,1,\ldots,L_{l+1}\},\quad L_{l+1} = \frac{L_l + 2p - f}{s_0} + 1$$
where b is the bias, Z^l and Z^{l+1} represent the convolution input and output of the (l + 1)-th layer, and L_{l+1} is the size of Z^{l+1} (the length and width of the feature map are assumed here to be equal). Z(i, j) corresponds to the pixels of the feature map, K is the number of channels, and f, s_0, and p are the convolution-layer parameters, corresponding to the kernel size, the convolution stride, and the number of padding layers, respectively. In particular, when the kernel size is f = 1, the stride is s_0 = 1, and no padding is included, the cross-correlation calculation is equivalent to matrix multiplication, and a fully connected network is established between the convolution layers:
$$Z^{l+1} = \sum_{k=1}^{K_l}\sum_{x=1}^{L}\sum_{y=1}^{L}\left(Z_{i,j}^{l}\,w_k^{l+1}\right) + b = \left(w^{l+1}\right)^{\mathrm{T}} Z^{l} + b,\quad L_{l+1} = L$$
Output layer. The output layer has the same structure as that of a conventional feedforward network, from which the output result is obtained.
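As a minimal sketch of the convolution described above (a hypothetical single-channel example with no padding, not the authors' implementation), the output size L_{l+1} = (L_l + 2p − f)/s_0 + 1 and the sliding-window multiply-and-sum can be written as:

```python
import numpy as np

def conv2d_output_size(L, f, p=0, s=1):
    """Feature-map size: L_out = (L + 2p - f) / s + 1."""
    return (L + 2 * p - f) // s + 1

def conv2d(Z, w, b=0.0, stride=1):
    """Valid 2-D cross-correlation of a single-channel input Z with kernel w."""
    f = w.shape[0]
    L_out = conv2d_output_size(Z.shape[0], f, p=0, s=stride)
    out = np.zeros((L_out, L_out))
    for i in range(L_out):
        for j in range(L_out):
            # scan the input, multiply and sum, then superimpose the bias b
            patch = Z[i * stride:i * stride + f, j * stride:j * stride + f]
            out[i, j] = np.sum(patch * w) + b
    return out

Z = np.arange(16, dtype=float).reshape(4, 4)  # toy 4x4 input
w = np.ones((3, 3))                           # toy 3x3 kernel
Y = conv2d(Z, w)                              # 2x2 feature map
```

With a 4 × 4 input, a 3 × 3 kernel, stride 1, and no padding, the formula gives a 2 × 2 output, matching the loop above.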

2.2. PSO

The PSO is an intelligent optimization algorithm proposed by Eberhart and Kennedy in 1995 [73]. It was originally developed to study the predation behavior of bird flocks, and models bird activities inspired by this behavior. In the PSO, the update formulas of the particle velocity and position are described as follows.
$$v_{m+1} = \omega v_m + c_1 r_1 (pbest_m - x_m) + c_2 r_2 (gbest_m - x_m)$$
$$x_{m+1} = x_m + v_{m+1}$$
where v_{m+1} represents the velocity of the particle, ω is the inertia weight, and c_1 and c_2 are learning factors; ω, c_1, and c_2 are usually preset in advance. r_1 and r_2 are random numbers, pbest_m is the individual optimum, and gbest_m is the swarm optimum. The function used to evaluate the fitness of the particles is called the fitness function, i.e., the objective function. In most cases, the smaller the fitness value, the better the particle. The individual optimum and the swarm optimum are generally updated by the following formulas.
$$pbest_{m+1} = \begin{cases} x_{m+1}, & f(x_{m+1}) < f(pbest_m) \\ pbest_m, & \text{otherwise} \end{cases}$$
$$gbest_{m+1} = \begin{cases} pbest_{m+1}, & f(pbest_{m+1}) < f(gbest_m) \\ gbest_m, & \text{otherwise} \end{cases}$$
That is, if the fitness of x_{m+1} is smaller than that of the individual extreme value pbest_m, then pbest_{m+1} is set to x_{m+1}; otherwise, the individual extreme value is not updated. Likewise, if the fitness of pbest_{m+1} is smaller than that of the swarm extreme value gbest_m, then gbest_{m+1} is set to pbest_{m+1}; otherwise, the swarm extreme value is not updated.
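The standard update rules above can be sketched in Python (a minimal illustration on the sphere function f(x) = Σx², with hypothetical parameter values; not the paper's implementation):

```python
import numpy as np

rng = np.random.default_rng(0)

def sphere(x):                        # fitness function: smaller is better
    return float(np.sum(x ** 2))

n_particles, dim, iters = 20, 2, 100
w, c1, c2 = 0.7, 1.5, 1.5             # fixed inertia weight and learning factors

x = rng.uniform(-5, 5, (n_particles, dim))   # positions
v = np.zeros((n_particles, dim))             # velocities
pbest = x.copy()                             # individual optima
pbest_f = np.array([sphere(p) for p in x])
gbest = pbest[np.argmin(pbest_f)].copy()     # swarm optimum

for _ in range(iters):
    r1 = rng.random((n_particles, dim))
    r2 = rng.random((n_particles, dim))
    # velocity and position updates, as in the two formulas above
    v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
    x = x + v
    f = np.array([sphere(p) for p in x])
    better = f < pbest_f                     # pbest update rule
    pbest[better], pbest_f[better] = x[better], f[better]
    gbest = pbest[np.argmin(pbest_f)].copy() # gbest update rule

# gbest is now close to the global minimum at the origin
```

Note that the pbest/gbest updates only replace an optimum when the new fitness is strictly smaller, matching the piecewise update formulas.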

2.3. ELM

The ELM is one of the commonly used neural network models in machine learning. In essence, it is a machine learning method based on a single-hidden-layer feedforward network (SLFN). Compared with the back propagation (BP) neural network model, which uses a gradient descent algorithm to update the weights, the ELM randomly generates its hidden-layer weights and thresholds. It has low computational complexity and is less time-consuming. For classification and regression problems, the structure of the ELM model is generally divided into an input layer, a hidden layer, and an output layer. The specific structure is shown in Figure 2.
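The core of the ELM is simple enough to sketch in a few lines: the hidden-layer weights are random and fixed, and only the output weights are solved in closed form via a pseudoinverse (a generic single-hidden-layer sketch on hypothetical toy data, not the authors' code):

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def elm_train(X, Y, n_hidden=50):
    """X: (N, d) features, Y: (N, c) one-hot labels.
    Hidden weights are random and fixed; only beta is solved for."""
    W = rng.normal(size=(X.shape[1], n_hidden))  # random input weights
    b = rng.normal(size=n_hidden)                # random hidden biases
    H = sigmoid(X @ W + b)                       # hidden-layer output matrix
    beta = np.linalg.pinv(H) @ Y                 # least-squares output weights
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.argmax(sigmoid(X @ W + b) @ beta, axis=1)

# toy two-class problem: two well-separated clusters
X = np.vstack([rng.normal(0, 0.3, (30, 2)), rng.normal(2, 0.3, (30, 2))])
Y = np.zeros((60, 2)); Y[:30, 0] = 1; Y[30:, 1] = 1
W, b, beta = elm_train(X, Y)
pred = elm_predict(X, W, b, beta)
```

Because no iterative weight updates are needed, training reduces to a single linear solve, which is the source of the ELM's speed advantage mentioned above.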

3. Improved Learning Factor and Inertia Weight

Although many researchers have proposed effective improvements to address the shortcomings of the PSO, it still suffers from slow convergence, high time complexity, and low accuracy. Therefore, an acceleration-factor strategy and a linearly decreasing inertia weight strategy are introduced to propose an enhanced PSO (CWLPSO) in this paper. That is, to address the slow convergence speed, a fast convergence strategy with a small deviation angle between particle velocity and position is adopted to accelerate the convergence of the particles. To address the poor search ability, a new improvement strategy for the learning factors is proposed: different c_1 and c_2 values are selected in order to improve the local search ability of the particles in the early stage, and to enhance the optimization ability of the swarm and strengthen the overall search ability of the particles in the later stage. To address premature convergence in the later stage, a new linearly decreasing inertia weight strategy is adopted to reduce the inertia weight linearly from its maximum to its minimum value, so as to avoid premature convergence and oscillation in the later stage of the algorithm.

3.1. Improve Learning Factors

The learning factors c_1 and c_2 in the PSO represent the influence of the particle itself and of the remaining particles on the trajectory of a moving particle; they also govern the information exchange between particles, which results in different particle trajectories. Therefore, an improved learning-factor strategy is designed here to improve the local search ability of particles, enhance the optimization ability of the swarm, and strengthen the overall search ability. In the early stage of the algorithm, the c_1 value is larger and the c_2 value is smaller, so that the particles enhance their self-cognition and weaken their swarm cognition. In the later stage, the c_1 value decreases and the c_2 value increases: the proportion of swarm learning is strengthened, so that more particles learn from the swarm optimum and fewer from the individual optimum, which is conducive to enhancing the optimization ability and strengthening the overall search ability of the particles. The improved learning-factor strategy is described as follows.
$$c_1 = c_{1max} - \left(c_{1max} - c_{1min}\right)\frac{i}{k}$$
$$c_2 = c_{2min} + \left(c_{2max} - c_{2min}\right)\frac{i}{k}$$
where c_{1max} and c_{1min} represent the maximum and minimum values of the learning factor c_1, c_{2max} and c_{2min} represent the maximum and minimum values of the learning factor c_2, i is the current iteration, and k is the maximum number of iterations.

3.2. Linear Decreasing of Inertia Weight

The inertia weight plays an important role in the PSO. It is generally set to a fixed value between 0.6 and 0.9, and an improper selection will cause errors. A larger inertia weight helps the particles jump out of local minima and facilitates the global search, but it weakens the local search ability. Therefore, to address premature convergence in the later stage of the algorithm, a new linearly decreasing inertia weight strategy is developed: the inertia weight is linearly reduced from ω_max to ω_min, as follows.
$$\omega = \omega_{max} - \frac{i\left(\omega_{max} - \omega_{min}\right)}{k}$$
where ω is the inertia weight, ω_max and ω_min are its maximum and minimum values, i is the current iteration, and k is the maximum number of iterations.
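The two CWLPSO schedules can be sketched together as functions of the iteration counter (the c_1 bounds and ω_max follow the values used later in this paper; c_2 bounds and ω_min are hypothetical placeholders):

```python
def cwlpso_schedule(i, k, c1_max=2.0, c1_min=0.5, c2_max=2.0, c2_min=0.5,
                    w_max=0.9, w_min=0.4):
    """Learning factors and inertia weight at iteration i of k (CWLPSO-style)."""
    t = i / k
    c1 = c1_max - (c1_max - c1_min) * t   # self-cognition: large early, small late
    c2 = c2_min + (c2_max - c2_min) * t   # swarm cognition: small early, large late
    w = w_max - (w_max - w_min) * t       # inertia weight decreases linearly
    return c1, c2, w
```

At i = 0 the particles favor individual exploration (large c_1, large ω); at i = k they favor swarm convergence (large c_2, small ω).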

4. Optimize CNN Using CWLPSO

4.1. Optimized Idea for CNN

The CNN, combining weight sharing and local connections, reduces the complexity of the model and the number of parameters. However, the selection of the number of filters, the activation function, and the learning rate seriously affects its learning accuracy. The parameters of the CNN are usually trained by the steepest gradient descent method, which has a great impact on the learning performance. The proposed CWLPSO has global search ability, population diversity, and fast convergence. Therefore, the CWLPSO is employed to optimize the parameters of the CNN, and an optimized CNN model based on the CWLPSO algorithm is developed in this paper. That is, each particle encodes a network configuration of the CNN: the number of filters, activation function, learning rate, initial weights, and initial biases are taken as the particle dimensions. After the CNN calculates the error between the expected and actual values, the obtained test error is taken as the fitness value, and the optimal CNN model is selected through the iterations of the CWLPSO.

4.2. Model of Optimized CNN

The optimization process of the CNN using CWLPSO is shown in Figure 3.
The specific optimization process of the CNN using the CWLPSO is described as follows.
Step 1. Initialize the parameters of the CNN, which include the number of nodes in hidden layer, the learning rate, and so on.
Step 2. Initialize the parameters of the CWLPSO, which include the number of the population, the maximum number of iterations, and the initial learning factor and inertia weight, and so on.
Step 3. Construct the optimization objective function.
Step 4. Calculate the individual fitness values in the population in order to obtain the initial fitness values of the population.
Step 5. Determine whether the end condition is met. If it is, then the optimal individual is taken as the optimal parameter values of the CNN; go to Step 7. Otherwise, execute Step 6.
Step 6. Update the velocities and positions, then update the learning factors and the inertia weight. Return to Step 4.
Step 7. Obtain the optimal parameter values of the CNN and an optimized CNN model is output.
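Steps 1–7 above can be sketched as a generic optimization loop. Since the real fitness is the test error of a trained CNN (an expensive evaluation), a hypothetical smooth surrogate stands in for it here; the particle dimensions, bounds, and the surrogate's optimum are all illustrative, not the paper's values:

```python
import numpy as np

rng = np.random.default_rng(1)

def cnn_fitness(params):
    """Surrogate for 'train the CNN with these hyperparameters, return test error'.
    A smooth toy function replaces the real (expensive) evaluation."""
    target = np.array([16.0, 0.01])      # hypothetical optimum: 16 filters, lr 0.01
    return float(np.sum(((params - target) / target) ** 2))

def cwlpso_optimize(fitness, lo, hi, n=15, k=60,
                    c1_max=2.0, c1_min=0.5, c2_max=2.0, c2_min=0.5,
                    w_max=0.9, w_min=0.4):
    lo, hi = np.asarray(lo, float), np.asarray(hi, float)
    x = rng.uniform(lo, hi, (n, lo.size))        # Steps 1-2: initialization
    v = np.zeros_like(x)
    pbest, pbest_f = x.copy(), np.array([fitness(p) for p in x])
    g = np.argmin(pbest_f)                       # Step 4: initial fitness
    gbest, gbest_f = pbest[g].copy(), pbest_f[g]
    for i in range(k):                           # Step 5: end condition
        t = i / k                                # Step 6: update schedules
        c1 = c1_max - (c1_max - c1_min) * t
        c2 = c2_min + (c2_max - c2_min) * t
        w = w_max - (w_max - w_min) * t
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = np.clip(x + v, lo, hi)
        f = np.array([fitness(p) for p in x])    # Step 4: fitness evaluation
        better = f < pbest_f
        pbest[better], pbest_f[better] = x[better], f[better]
        g = np.argmin(pbest_f)
        if pbest_f[g] < gbest_f:
            gbest, gbest_f = pbest[g].copy(), pbest_f[g]
    return gbest, gbest_f                        # Step 7: optimal parameters

best, err = cwlpso_optimize(cnn_fitness, lo=[2, 1e-4], hi=[64, 0.1])
```

In the real method, `cnn_fitness` would decode each particle into a CNN configuration, train it, and return the test error, as described in Section 4.1.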

5. An Innovative Classification Method of HRSIs Using Optimized CNN and ELM

Classification accuracy is an important indicator for evaluating a classification model for HRSIs, and effective feature extraction is the key factor affecting it. As a deep learning method, the CNN can effectively mine multi-layer representation feature information. Different levels of representation correspond to different feature attributes of the recognized object: the shallow layers mainly represent local information such as texture and edges, while the deep layers represent more abstract global information such as semantics and structure. A feature matrix is composed of these multi-layer feature attributes of the HRSIs. As a fast machine learning algorithm, the ELM does not need to iteratively adjust the weight and bias parameters of the hidden layer, which reduces the amount of calculation and shortens the training time. Therefore, in order to make full use of the feature extraction ability of the optimized CNN, the comprehensiveness of the multi-layer features, and the fast training speed of the ELM, an innovative classification model of HRSIs combining the optimized CNN and the ELM, namely the IPCEHRIC, is developed to improve the robustness and classification effect of the model. The classification process of HRSIs is shown in Figure 4.
The classification process of the IPCEHRIC is described as follows.
(1)
Preprocess HRSIs
Preprocessing methods such as whitening, normalization, gray-level transformation, image smoothing, and interpolation are used to eliminate irrelevant information in the hyperspectral remote sensing images, restore useful real information, enhance the detectability of relevant information, and simplify the data to the greatest extent. The preprocessing includes image denoising, enhancement, smoothing, and sharpening, which improves the reliability of feature extraction and image matching.
(2)
Optimize parameters of CNN
The CWLPSO, with its global optimization capability, is employed to optimize and determine the parameters of the CNN, taking the number of filters, activation function, learning rate, initial weights, and initial biases as the particle dimensions. The optimized parameter values are obtained, and an optimized CNN model is constructed.
(3)
Extract features
The optimized CNN is essentially a multi-layer perceptron, mainly characterized by its local connections and weight-sharing mode. When the input data are images, alternating convolution and max-pooling layers automatically complete the feature extraction layer by layer.
(4)
Construct feature matrix
The extracted local features are input into the first fully connected layer to form the global features. These features are taken from different layers and therefore cover different feature ranges. The extracted features are then assembled into a feature matrix, which is provided to the classifier.
(5)
Establish ELM classifier
The feature matrix is taken as the input of the ELM; the elmtrain( ) function and the training set are used to train the ELM. Then, the trained parameters and the elmpredict( ) function are applied to the test set, and finally the classification results are obtained.

6. Experiment Verification and Result Analysis

6.1. Experimental Environment and Parameter Setting

The experimental environment is an Intel i7-11700HQ CPU @ 2.5 GHz with 16 GB RAM under Windows 10, and the programming language is Matlab 2018b. The IPCEHRIC network structure consists of two convolution layers, two pooling layers, and an ELM classifier. The nonlinear activation function of the CNN is the ReLU function, and the ELM classifier uses the Sigmoid function. The initial parameters of the CWLPSO are c_{1max} = 2.0, c_{1min} = 0.5, ω_max = 0.9, and a maximum number of iterations k = 200. The initial parameters of the CNN are the number of convolution kernels (6) and the size of the convolution kernels (1 × 3). The initial parameters of the ELM are σ = 0.1 and regularity coefficient C = 0.5.

6.2. Pavia University Data

6.2.1. Data Description

The Pavia University data set is a hyperspectral remote sensing image data set collected over the University of Pavia in northern Italy by a German airborne reflective optical spectral imager. The image size is 610 × 340; after excluding a large number of background pixels, it contains 42,776 labeled pixels across 9 feature classes. Basic information on the Pavia University data is given in Table 1. A total of 20% of the samples are randomly selected as the training set, and the remaining 80% are used as the test set. The numbers of training and test samples are given in Table 2, and the HRSIs are described in Figure 5.

6.2.2. Experimental Results and Analysis

To verify the effectiveness of the IPCEHRIC, the CNN, local binary pattern with CNN (LBP-CNN), CNN with ELM (CNN-ELM), LBP with CNN and ELM (LBP-CNN-ELM), and LBP with PCA, CNN, and ELM (LBP-PCA-CNN-ELM) are selected as comparison methods. The experimental results on the Pavia University data are shown in Table 3. The overall accuracy (OA), average accuracy (AA), and standard deviation (STD) of the classification results are calculated for each algorithm.
It can be seen from Table 3 that the IPCEHRIC method obtains OA and AA classification accuracies of 99.21% and 99.83%, the best results among the CNN, LBP-CNN, CNN-ELM, LBP-CNN-ELM, LBP-PCA-CNN-ELM, and IPCEHRIC methods. The STD of the IPCEHRIC is 0.279, which is also the smallest among these methods. Among the other comparison methods, the LBP-PCA-CNN-ELM method obtains OA and AA accuracies of 98.95% and 99.15%, while the CNN-ELM method obtains 92.63% and 93.60%. Compared with the CNN-ELM, the OA and AA of the IPCEHRIC are improved by 6.58 and 6.23%. This shows that the feature extraction ability of the optimized CNN is better than that of the plain CNN, which demonstrates the global optimization ability of the CWLPSO algorithm. Therefore, the classification performance of the IPCEHRIC method is significantly better than those of the CNN, LBP-CNN, CNN-ELM, LBP-CNN-ELM, and LBP-PCA-CNN-ELM. The experimental results show that the IPCEHRIC method achieves higher classification accuracy than the other comparison methods and is an effective classification method for HRSIs.

6.3. Actual HRSI after Jiuzhaigou M7.0 Earthquake

6.3.1. Description of HRSI after Jiuzhaigou 7.0 Earthquake

Jiuzhaigou is located in Zhangzha Town, Jiuzhaigou County, Sichuan Province, in a transition zone more than 400 km from Chengdu. It is a mountain valley more than 50 km deep, with a total area of 64,297 hm² and a forest coverage rate of more than 80%. The hyperspectral remote sensing image acquired after the Jiuzhaigou M7.0 earthquake of 8 August 2017 is shown in Figure 6.
The HRSI after the Jiuzhaigou M7.0 earthquake is saved as a *.mat file, and the coordinates of the different areas are determined by manual frame drawing. A matrix consistent with the size of the image is then constructed, and the corresponding positions in the matrix are marked with different numbers according to the coordinates of the different areas, so as to attach different labels to different areas of the image; the result is saved as a labeled *.mat file. A data set containing four types of samples is constructed, comprising the villages, water, grassland, and trees in the HRSIs after the Jiuzhaigou M7.0 earthquake. The numbers of samples of the four types are shown in Table 4.
According to the gray values of the pixels, a color function is used to set thresholds, and the different areas of the HRSIs after the Jiuzhaigou M7.0 earthquake are marked with different colors. A matrix consistent with the image size is constructed, and the different areas are marked with colors. A data set with six types of samples is constructed, comprising the villages, bareland, grassland, trees, water, and rocks in the HRSIs after the Jiuzhaigou M7.0 earthquake. The numbers of samples of the six types are shown in Table 5.

6.3.2. Experimental Results and Analysis

To prove the ability of the IPCEHRIC to solve practical engineering problems, the hyperspectral remote sensing images acquired after the Jiuzhaigou M7.0 earthquake are used for experimental comparison and analysis. Similarly, the CNN, LBP-CNN, CNN-ELM, LBP-CNN-ELM, and LBP-PCA-CNN-ELM are selected for comparison. Each algorithm is executed ten times independently. The classification results of the HRSI after the Jiuzhaigou M7.0 earthquake for four types are shown in Table 6 and Table 7, and those for six types are shown in Table 8 and Table 9.
As can be seen from Table 6, Table 7, Table 8 and Table 9, the IPCEHRIC obtains AA classification accuracies of 90.30% for four types and 99.95% for six types, the best results among the CNN, LBP-CNN, CNN-ELM, LBP-CNN-ELM, LBP-PCA-CNN-ELM, and IPCEHRIC methods. The STD of the IPCEHRIC is 1.396 for four types and 0.086 for six types, which is also the smallest among these methods. Among the other comparison methods, the overall classification effect for the four-type samples is not ideal; in particular, the accuracies of the CNN and LBP-CNN are very unsatisfactory. For the six-type samples, the overall classification effect of these methods is better; in particular, the CNN-ELM performs best among the CNN, LBP-CNN, CNN-ELM, LBP-CNN-ELM, and LBP-PCA-CNN-ELM. Compared with the CNN-ELM, the AA of the IPCEHRIC is improved by 18.44 and 0.31%, respectively, which indicates that the optimized CNN has better feature extraction ability and classification performance, and that the CWLPSO has better global optimization ability. Therefore, the experimental results show that the classification accuracy of the IPCEHRIC is better than those of the other comparison methods. The CWLPSO can optimize and determine the parameters of the CNN to construct an optimized CNN model, which can effectively extract the deep features of the HRSIs after the Jiuzhaigou M7.0 earthquake and obtain a better classification result. The IPCEHRIC can effectively classify these HRSIs into villages, bareland, grassland, trees, water, and rocks.
The HRSIs after the Jiuzhaigou M7.0 earthquake are divided into four types and six types; the classification results are shown in Figure 7.
As can be seen from Figure 7, the six-type classification produced by the IPCEHRIC for the HRSIs after the Jiuzhaigou M7.0 earthquake is ideal. For actual HRSIs, the IPCEHRIC method achieves higher classification accuracy and is therefore an effective classification method in practice.
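The CWLPSO credited with this performance improves the standard PSO by adapting the inertia weight and the learning factors across iterations. The paper's exact adaptation formulas are not reproduced here, so the sketch below assumes common linear schedules (inertia 0.9 to 0.4, cognitive factor decreasing, social factor increasing); it is a generic illustration, not the authors' implementation:

```python
import numpy as np

rng = np.random.default_rng(1)

def cwlpso_minimize(f, dim, n_particles=30, iters=100, bound=5.0):
    """PSO with linearly varying inertia weight and learning factors.

    Hypothetical sketch of a CWLPSO-style optimizer: the schedules for
    w, c1, and c2 below are illustrative assumptions.
    """
    x = rng.uniform(-bound, bound, (n_particles, dim))
    v = np.zeros_like(x)
    pbest = x.copy()
    pbest_val = np.apply_along_axis(f, 1, x)
    g = pbest[pbest_val.argmin()].copy()

    for t in range(iters):
        frac = t / iters
        w = 0.9 - 0.5 * frac        # inertia weight: 0.9 -> 0.4
        c1 = 2.5 - 1.5 * frac       # cognitive factor: 2.5 -> 1.0
        c2 = 1.0 + 1.5 * frac       # social factor:    1.0 -> 2.5
        r1 = rng.random(x.shape)
        r2 = rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, -bound, bound)
        val = np.apply_along_axis(f, 1, x)
        better = val < pbest_val
        pbest[better], pbest_val[better] = x[better], val[better]
        g = pbest[pbest_val.argmin()].copy()
    return g, pbest_val.min()

# Sanity check on the 5-D sphere function, whose optimum is the origin.
best_x, best_val = cwlpso_minimize(lambda z: float((z ** 2).sum()), dim=5)
print(best_val)
```

In the IPCEHRIC this optimizer would search over the CNN parameters rather than a toy benchmark function.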

7. Conclusions

In this paper, an innovative hyperspectral remote sensing image classification method combining CWLPSO, CNN, and ELM, namely IPCEHRIC, is proposed to obtain accurate classification results. The CWLPSO, which fuses multiple strategies, is proposed to optimize the parameters of the CNN. Deep features are then extracted from the HRSIs and input into the ELM to realize accurate classification. Pavia University data and actual HRSIs after the Jiuzhaigou M7.0 earthquake are selected to verify the effectiveness of the IPCEHRIC. The experiment results show that the IPCEHRIC obtains classification accuracies of 99.21% for the Pavia University data, and 90.30% and 99.95% for the actual HRSIs after the Jiuzhaigou M7.0 earthquake. These results are better than those of the CNN, LBP-CNN, CNN-ELM, LBP-CNN-ELM, and LBP-PCA-CNN-ELM methods. Compared with the CNN-ELM, the classification accuracies of the IPCEHRIC are improved by 6.58%, 18.44%, and 0.31%, respectively. This shows that the CWLPSO algorithm can effectively optimize the parameters of the CNN and obtain reasonable parameter values, improving its feature extraction ability. Therefore, the IPCEHRIC has clear advantages in the classification of HRSIs. In particular, it accurately classifies the villages, bareland, grassland, trees, water, and rocks in the actual HRSIs after the Jiuzhaigou M7.0 earthquake and achieves good classification results.
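The ELM stage summarized above trains only its output weights: the hidden weights are random and fixed, and the output layer is obtained from a single least-squares solve, which is the source of the ELM's fast learning ability. A minimal, self-contained sketch follows; the random Gaussian "features" stand in for the CNN's deep features, and the layer sizes, sigmoid activation, and ridge term are illustrative assumptions, not the paper's settings:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for the deep-feature matrix produced by the optimized CNN:
# 200 samples, 64 features, 6 land-cover classes (illustrative sizes).
n_samples, n_features, n_classes, n_hidden = 200, 64, 6, 128
centers = rng.normal(size=(n_classes, n_features)) * 3.0
y = rng.integers(0, n_classes, size=n_samples)
X = centers[y] + rng.normal(size=(n_samples, n_features))
T = np.eye(n_classes)[y]                      # one-hot targets

# ELM: hidden weights and biases are random and never trained.
W = rng.normal(size=(n_features, n_hidden))
b = rng.normal(size=n_hidden)
H = 1.0 / (1.0 + np.exp(-(X @ W + b)))        # sigmoid hidden layer

# Output weights from one regularized least-squares solve
# (a ridge term keeps the normal equations well conditioned).
ridge = 1e-3
beta = np.linalg.solve(H.T @ H + ridge * np.eye(n_hidden), H.T @ T)

pred = (H @ beta).argmax(axis=1)              # class = largest output node
train_acc = (pred == y).mean()
print(f"training accuracy: {train_acc:.2f}")
```

Because no iterative backpropagation is needed for the output layer, training reduces to building H once and solving one linear system.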

Author Contributions

Conceptualization, A.Y. and X.Z.; Methodology, A.Y.; Software, X.Z.; Validation, F.M. and X.Z.; Resources, F.M.; Writing—original draft preparation, A.Y.; Writing—review and editing, X.Z.; Visualization, X.Z.; Project administration, F.M.; Funding acquisition, X.Z. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the Sichuan Science and Technology Program, grant numbers 2019ZYZF0169, 2019YFG0307, and 2021YFS0407; the A Ba Achievements Transformation Program, grant number R21CGZH0001; and the Chengdu Science and Technology Planning Project, grant number 2021-YF05-00933-SN.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Acknowledgments

The authors would like to acknowledge the UCI Machine Learning Repository.

Conflicts of Interest

The authors declare no conflict of interest.

Figure 1. The structure of the CNN.
Figure 2. The structure of ELM.
Figure 3. The optimization process of the CNN using CWLPSO.
Figure 4. The innovative classification model of HRSIs.
Figure 5. The HRSIs of Pavia University. (a) False color composite of HRSI. (b) Surface observations.
Figure 6. The HRSI after Jiuzhaigou M7.0 earthquake.
Figure 7. The classification effects of HRSIs after Jiuzhaigou M7.0 earthquake. (a) Four types. (b) Six types.
Table 1. Basic information of Pavia University data.
Data: Pavia University
Collection location: Northern Italy
Acquisition equipment: ROSIS
Spectral coverage (μm): 0.43–0.86
Data size (pixel): 610 × 340
Spatial resolution (m): 1.3
Number of bands: 115
Number of bands after denoising: 103
Sample size: 42,776
Number of categories: 9
Table 2. The number of samples in Pavia University.
Types | Class | Training Samples | Test Samples | Samples
1 | Asphalt | 1326 | 5305 | 6631
2 | Meadows | 3722 | 14,927 | 18,649
3 | Gravel | 418 | 1681 | 2099
4 | Trees | 612 | 2452 | 3064
5 | Painted metal sheets | 268 | 1077 | 1345
6 | Bare Soil | 1004 | 4025 | 5029
7 | Bitumen | 266 | 1064 | 1330
8 | Self-Blocking Bricks | 736 | 2946 | 3682
9 | Shadows | 188 | 759 | 947
Total | | 8540 | 34,236 | 42,776
Table 3. The experiment results of the Pavia University data (%).
Types | Class | CNN | LBP-CNN | CNN-ELM | LBP-CNN-ELM | LBP-PCA-CNN-ELM | IPCEHRIC
1 | Asphalt | 90.00 | 89.64 | 94.72 | 95.72 | 99.92 | 99.96
2 | Meadows | 89.99 | 89.79 | 93.00 | 95.00 | 99.12 | 99.67
3 | Gravel | 89.70 | 91.63 | 99.94 | 99.94 | 100.00 | 100.00
4 | Trees | 88.90 | 87.10 | 89.15 | 94.17 | 96.88 | 99.84
5 | Painted metal sheets | 86.00 | 89.91 | 92.26 | 96.68 | 99.72 | 100.00
6 | Bare Soil | 88.15 | 89.90 | 95.00 | 96.27 | 100.00 | 100.00
7 | Bitumen | 90.45 | 92.00 | 94.15 | 96.15 | 99.15 | 99.82
8 | Self-Blocking Bricks | 89.83 | 91.86 | 93.25 | 95.01 | 99.66 | 100.00
9 | Shadows | 87.50 | 93.87 | 90.90 | 97.74 | 97.94 | 99.15
OA (%) | | 85.67 | 88.75 | 92.63 | 95.64 | 98.95 | 99.21
AA (%) | | 88.95 | 90.63 | 93.60 | 96.30 | 99.15 | 99.83
STD | | 1.467 | 1.939 | 3.022 | 1.722 | 1.075 | 0.279
Table 4. The number of samples and four types.
Types | Class | Samples
1 | Villages | 12,575
2 | Water | 14,953
3 | Grassland | 38,790
4 | Trees | 39,159
Total | | 105,477
Table 5. The number of samples and six types.
Types | Class | Samples
1 | Villages | 1608
2 | Bareland | 25
3 | Grassland | 376,651
4 | Trees | 110,409
5 | Water | 5558
6 | Rocks | 2469
Total | | 495,087
Table 6. The classification results of HRSIs for 10 times for four types (%).
Times | CNN | LBP-CNN | CNN-ELM | LBP-CNN-ELM | LBP-PCA-CNN-ELM | IPCEHRIC
1 | 41.47 | 36.68 | 69.80 | 64.38 | 65.67 | 89.76
2 | 41.80 | 36.68 | 75.84 | 64.25 | 65.16 | 88.96
3 | 41.75 | 36.68 | 75.98 | 64.16 | 65.33 | 89.99
4 | 41.73 | 36.68 | 75.38 | 64.40 | 65.14 | 89.26
5 | 41.85 | 37.02 | 61.45 | 64.47 | 65.47 | 89.76
6 | 41.70 | 37.02 | 75.80 | 64.12 | 65.56 | 90.58
7 | 41.86 | 37.02 | 74.04 | 63.83 | 65.40 | 91.64
8 | 41.77 | 37.02 | 60.02 | 64.44 | 65.19 | 92.12
9 | 41.78 | 36.68 | 74.80 | 64.38 | 65.49 | 90.99
10 | 41.76 | 37.02 | 75.46 | 64.10 | 65.81 | 89.94
AA (%) | 41.75 | 36.85 | 71.86 | 64.25 | 65.42 | 90.30
STD | 0.109 | 0.179 | 6.145 | 0.201 | 0.223 | 1.019
Table 7. The classification results of HRSIs for four types (%).
Types | Class | CNN | LBP-CNN | CNN-ELM | LBP-CNN-ELM | LBP-PCA-CNN-ELM | IPCEHRIC
1 | Villages | 50.47 | 46.76 | 79.16 | 74.64 | 75.70 | 92.46
2 | Water | 41.80 | 35.43 | 78.37 | 70.47 | 73.28 | 90.73
3 | Grassland | 39.26 | 33.58 | 73.78 | 63.19 | 72.45 | 89.15
4 | Trees | 40.73 | 36.29 | 76.12 | 69.24 | 75.42 | 91.48
OA (%) | | 41.75 | 36.85 | 71.86 | 64.25 | 65.42 | 90.30
AA (%) | | 43.07 | 38.02 | 76.86 | 69.39 | 74.21 | 90.96
STD | | 5.046 | 5.939 | 2.422 | 4.733 | 1.597 | 1.396
Table 8. The classification results of HRSIs for 10 times for six types (%).
Times | CNN | LBP-CNN | CNN-ELM | LBP-CNN-ELM | LBP-PCA-CNN-ELM | IPCEHRIC
1 | 79.77 | 79.83 | 99.21 | 85.12 | 85.12 | 99.99
2 | 79.78 | 79.85 | 99.78 | 84.80 | 84.14 | 100.0
3 | 79.84 | 79.84 | 99.99 | 84.14 | 84.01 | 99.98
4 | 79.78 | 79.86 | 99.26 | 84.80 | 84.22 | 99.78
5 | 79.86 | 79.84 | 99.99 | 85.12 | 85.46 | 100.0
6 | 79.87 | 79.81 | 99.21 | 84.76 | 86.13 | 99.77
7 | 79.88 | 79.59 | 99.98 | 85.46 | 84.57 | 100.0
8 | 79.87 | 79.59 | 99.27 | 86.08 | 85.12 | 99.98
9 | 79.86 | 79.80 | 99.98 | 85.46 | 86.02 | 100.0
10 | 79.86 | 79.84 | 99.77 | 86.43 | 84.80 | 99.99
AA (%) | 79.84 | 79.79 | 99.64 | 85.22 | 84.96 | 99.95
STD | 0.043 | 0.104 | 0.360 | 0.672 | 0.753 | 0.092
Table 9. The classification results of HRSIs for six types (%).
Types | Class | CNN | LBP-CNN | CNN-ELM | LBP-CNN-ELM | LBP-PCA-CNN-ELM | IPCEHRIC
1 | Villages | 82.34 | 85.46 | 99.46 | 87.45 | 90.35 | 99.98
2 | Bareland | 86.05 | 86.04 | 99.64 | 89.62 | 93.46 | 100.0
3 | Grassland | 79.98 | 85.32 | 99.06 | 87.17 | 90.67 | 100.0
4 | Trees | 78.46 | 84.14 | 99.31 | 86.43 | 89.86 | 99.81
5 | Water | 83.49 | 87.25 | 99.78 | 87.69 | 90.34 | 100.0
6 | Rocks | 82.16 | 85.68 | 99.34 | 88.03 | 92.05 | 99.85
OA (%) | | 79.84 | 79.79 | 99.64 | 85.22 | 84.96 | 99.95
AA (%) | | 82.08 | 85.65 | 99.43 | 87.73 | 91.12 | 99.94
STD | | 2.658 | 1.013 | 0.256 | 1.072 | 1.367 | 0.086