Article

Hyperspectral Band Selection Method Based on Global Partition Clustering

1
State Key Laboratory of Earth Surface Processes and Hazards Risk Governance (ESPHR), Faculty of Geography Science, Beijing Normal University, Beijing 100875, China
2
State Key Laboratory of Remote Sensing Science, Faculty of Geographical Science, Beijing Normal University, Beijing 100875, China
3
Beijing Engineering Research Center for Global Land Remote Sensing Products, Faculty of Geographical Science, Beijing Normal University, Beijing 100875, China
4
School of Geomatics and Urban Spatial Informatics, Beijing University of Civil Engineering and Architecture, Beijing 100044, China
*
Author to whom correspondence should be addressed.
Remote Sens. 2025, 17(3), 435; https://doi.org/10.3390/rs17030435
Submission received: 31 December 2024 / Revised: 23 January 2025 / Accepted: 25 January 2025 / Published: 27 January 2025
(This article belongs to the Section Remote Sensing Image Processing)

Abstract

Band selection is an important step in the dimensionality reduction of hyperspectral images and is essential for eliminating redundant spectral information and reducing computational costs. In recent years, band selection methods based on ordered partition have been widely used for the dimensionality reduction of hyperspectral images. However, most of these methods rely on coarse and fine partition to divide the band subspaces, so the partition results are affected by the initial equal interval partition. Furthermore, existing methods usually select a representative band in each band subspace but do not consider the relationships among the output bands during the selection process, resulting in a certain degree of redundancy between the output bands. To solve these problems, we propose a band selection method based on global partition clustering that comprises band subspace partition and band selection. The band subspace partition builds on coarse and fine partition and the similarity-based ranking-structural similarity method, dividing the band space into band subspaces according to the relationship between the number of bands in the hyperspectral image and the number of selected bands. The band selection builds on the sequential forward selection method, iteratively selecting one band in each band subspace as an output band. The proposed method makes two main contributions. First, it avoids the negative effect of equal interval partition on the results. Second, it fully considers the relationship between the already selected bands and the bands still to be selected. The effectiveness of the proposed method is demonstrated via ablation and comparison experiments on three publicly available datasets. The comparison experiments show that the classification accuracy of the proposed method exceeds that of the comparison methods.

1. Introduction

Hyperspectral images are remote sensing images with dozens or even hundreds of band channels [1]. Owing to their high spectral resolution, researchers can exploit subtle spectral differences to distinguish features more clearly [2]; this ability is beneficial in many practical applications, such as remote sensing [3,4], geography [5,6], medicine [7,8], and other fields [9,10]. On the other hand, the use of hyperspectral images in applications poses the problems of highly redundant information, long processing times and the Hughes phenomenon [11,12,13]. In this context, the dimensionality reduction of hyperspectral images becomes particularly important [14,15].
The dimensionality reduction methods for hyperspectral images are mainly categorized as either feature extraction (FE) or feature selection (FS) [16]. Feature extraction projects high-dimensional hyperspectral images into low-dimensional features via data transformation [17]. Feature selection, also known as band selection, selects highly informative bands through metric evaluation [18]. In contrast to FE, FS does not destroy the original hyperspectral image and preserves its spatial and spectral information. FS can be further subdivided into supervised and unsupervised band selection methods according to whether labeled samples are used in the band selection process [19,20]. In practice, labeled samples are not easy to obtain [21]; therefore, this paper focuses on unsupervised band selection methods.
There are three main types of band selection methods: ranking-based [22,23], clustering-based [24], and searching-based [25,26]. Ranking-based methods evaluate each band of the hyperspectral image with chosen metrics and then select the top-ranked bands according to the number of selected bands. Clustering-based methods divide the hyperspectral band space into as many band subspaces as there are selected bands and then select a representative band in each subspace to form the output bands. Search-based methods define an objective function and then compute the set of output bands for which the objective function is optimal. Clustering-based band selection methods can be further subdivided into four main categories according to the clustering method used: density clustering [27], hierarchical clustering [28], graph clustering [29], and partition clustering [30]. Density clustering clusters bands based on the distance between them, obtained by calculating the density of each band and its nearest distance to a band of higher density. Hierarchical clustering processes the bands sequentially based on inter-band distances, separating the bands that are farthest apart or merging the bands that are closest together. Graph clustering clusters bands based on the weight matrix and the connectivity between bands. Partition clustering clusters bands based on their degree of correlation, dividing highly correlated bands into the same class and weakly correlated bands into different classes. Partition clustering can be further categorized as ordered partition [31] or unordered partition [32] according to whether or not the band numbers are consecutive after the partition.
Because adjacent bands of hyperspectral images are strongly correlated, Wang et al. [31] argued that hyperspectral images are more suitable for ordered partition and thus proposed the continuous band indexes constraint (CBIC). Therefore, this paper focuses on band selection methods based on ordered partition.
In recent years, researchers have extensively investigated band selection methods based on ordered partition. In 1999, Zhang et al. [33] first proposed adaptive subspace decomposition (ASD) to carry out the ordered partition of hyperspectral datasets. The method sets a threshold based on the correlation coefficients of neighboring bands, and the points with correlation smaller than the threshold are used as partition points. This method cannot partition the band subspaces according to the number of selected bands. In 2006, Chang et al. [34] proposed uniform band selection (UBS) for the ordered partition of hyperspectral datasets. This method directly divides the hyperspectral dataset into equal intervals with a roughly equal number of bands in each band subspace, solving the problem of not being able to partition the band subspaces according to the number of selected bands. However, this partition scheme lacks a theoretical basis and does not consider the degree of correlation between the bands. In 2018, Wang et al. [31] proposed the optimal clustering framework (OCF) for the ordered partition of hyperspectral datasets. This method uses dynamic programming to enumerate all possible partition cases; an objective function then measures the degree of correlation within each band subspace, and the optimal partition points are selected accordingly. This method compensates for the lack of a theoretical basis. However, because all possible partition cases are evaluated, it incurs a very large computational cost. In 2019, Wang et al. [35] proposed an adaptive subspace partition strategy (ASPS) for ordered hyperspectral partition, introducing a coarse–fine strategy to reduce the computational cost of partition clustering.
Their method first divides the entire band space at equal intervals, corresponding to the coarse partition. Then, the partition points are tuned according to the degree of correlation of the neighboring band subspaces, corresponding to the fine partition. The coarse partition substantially reduces the amount of computation; however, the final partition result is then affected by the equal interval partition. This limitation carries over to a series of ordered partition methods built on the same strategy, such as FNGBS [30], DIG [36] and E-SR–SSIM [37]. Therefore, it is particularly important to find an ordered partition method whose partition results are not affected by equal interval partition. Moreover, the OCF, ASPS, FNGBS and DIG band selection methods, which are based on ordered partition, select one representative band in each band subspace to form the output bands. However, the relationship between the output bands is not considered during selection, leading to a certain degree of redundancy between the output bands.
To solve the above problems, we propose a hyperspectral band selection method based on global partition clustering, which consists of two parts: band subspace partition and band selection. The band subspace partition is based on coarse and fine partition and the similarity-based ranking-structural similarity (SR–SSIM) method [38], which divides the band space into band subspaces according to the number of selected bands so that the partition result is not affected by equal interval partition. The band selection is based on the principle of the sequential forward selection (SFS) method [39], which iteratively selects one band as the output band within each band subspace, thus reducing the redundancy between the output bands. The main contributions of this paper are as follows:
(1)
The global partition clustering (GPC) method for band subspace partition, which is based on coarse and fine partition and the SR–SSIM method, is proposed. In contrast to the previous methods, this method uses the output bands of the ranking-based band selection method. This approach avoids the negative effect of equal interval partition on the results, thus increasing the flexibility and accuracy of the partition.
(2)
Forward band replacement (FBR), which is based on the SFS method for band selection, is used for band selection. This method fully considers the relationship between the selected bands and the bands to be selected, effectively reducing the redundancy between the output bands.
(3)
Based on the above work, global partition clustering band selection (GPCBS), a hyperspectral band selection method based on global partition clustering, is proposed in this paper. GPCBS allows the partition results to be independent of equal interval partition while reducing redundancy in the output bands.
The rest of this article is organized as follows. In Section 2, the main focus is on related work. The related methods include coarse and fine partition, the SR–SSIM method and the SFS method. In Section 3, the proposed method is introduced and its time complexity is analyzed. The method consists of band subspace partition and band selection. In Section 4, we introduce the datasets used in the experiment, the experimental setup, the comparison methods and the specific experimental results. In Section 5, this article is summarized, and future research directions are given.

2. Related Work

As discussed in Section 1, the proposed method is based on the coarse–fine strategy, the SR–SSIM method and the SFS method. These methods will be discussed in this section.

2.1. Coarse–Fine Strategy

In 2019, Wang et al. [35] proposed an adaptive subspace partition strategy for the ordered partition of the hyperspectral band space. The coarse–fine strategy proposed in the paper is used to reduce the computational cost of the ordered partition.
The band subspace partition method contains two steps: coarse partition and fine partition. First, the hyperspectral band space is divided evenly into corresponding band subspaces according to the number of selected bands; this is the so-called coarse partition. The partition point of the coarse partition is given by
$$T_k = \mathrm{floor}\left(\frac{L}{N} \times k\right) - \mathrm{floor}\left(\frac{L}{N} \times (k-1)\right) \tag{1}$$
In the formula, $T_k$ is the partition point of the $k$th band subspace, $\mathrm{floor}(\cdot)$ denotes rounding down, $L$ is the number of bands of the hyperspectral image, and $N$ is the number of selected bands.
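As a concrete illustration, Equation (1) can be sketched in a few lines of Python (a minimal sketch; the function name is ours, and we return the per-subspace band counts implied by the formula):

```python
import numpy as np

def coarse_partition_sizes(L, N):
    """Number of bands in each of the N subspaces under the
    equal-interval (coarse) partition of Eq. (1):
    floor(L/N * k) - floor(L/N * (k - 1)) for k = 1..N."""
    return [int(np.floor(L / N * k)) - int(np.floor(L / N * (k - 1)))
            for k in range(1, N + 1)]
```

Because the floors telescope, the sizes always sum to L, so every band lands in exactly one subspace.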
Second, the partition point is tuned according to the degree of correlation of the neighboring band subspaces; this is the so-called fine partition. Fine partition maximizes the degree of correlation within the band subspace groups while minimizing the degree of correlation between the band subspace groups. It is essential to note that different definitions for the degree of correlation are used by different band subspace partition methods.
Since the coarse partition allows the partition points to be limited to a small range, the number of computations can be drastically reduced. Moreover, when the number of selected bands is large, coarse partition can avoid the uneven distribution of the number of bands in the band subspace, which affects the classification effect of the method. However, this method makes the final partition results subject to coarse partition, which also affects the classification effect of the method.

2.2. SR–SSIM Method

In 2021, Xu et al. [38] proposed the SR–SSIM method, which is based on a ranking-based strategy. SR–SSIM first calculates the similarity index and dissimilarity index for each band, then calculates the score of each band by taking the product of these two indices and finally selects the band with the highest score based on the number of selected bands.
SSIM is the metric used by the SR–SSIM method, and thus it is necessary to calculate SSIM values for all pairs of bands. A greater structural similarity corresponds to a higher degree of similarity between the two bands. The structural similarity is calculated according to
$$S_{i,j} = \frac{(2\mu_{H_i}\mu_{H_j} + C_1)(2\sigma_{H_i,H_j} + C_2)}{(\mu_{H_i}^2 + \mu_{H_j}^2 + C_1)(\sigma_{H_i}^2 + \sigma_{H_j}^2 + C_2)} \tag{2}$$
In the formula, $S_{i,j}$ is the structural similarity of bands $i$ and $j$, $\mu_{H_i}$ denotes the mean of band $i$, $\sigma_{H_i,H_j}$ is the covariance of bands $i$ and $j$, and $\sigma_{H_i}$ is the standard deviation of band $i$. $C_1$ is $(0.01 \times L)^2$, and $C_2$ is $(0.03 \times L)^2$.
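A minimal NumPy sketch of Equation (2) follows; here the constant $L$ in $C_1$ and $C_2$ is treated as the dynamic range of the pixel values (our assumption), and the function name is illustrative:

```python
import numpy as np

def band_ssim(Hi, Hj, L=255.0):
    """Structural similarity between two band images, Eq. (2).
    L is assumed to be the dynamic range of the pixel values."""
    mu_i, mu_j = Hi.mean(), Hj.mean()
    var_i, var_j = Hi.var(), Hj.var()
    cov = ((Hi - mu_i) * (Hj - mu_j)).mean()  # covariance of the two bands
    C1, C2 = (0.01 * L) ** 2, (0.03 * L) ** 2
    return ((2 * mu_i * mu_j + C1) * (2 * cov + C2)) / \
           ((mu_i ** 2 + mu_j ** 2 + C1) * (var_i + var_j + C2))
```

A band compared with itself yields an SSIM of exactly 1, the maximum similarity.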
Then, the SSIM values are sorted in ascending order, and the mean of the values ranked between the top 5% and the top 10% is calculated as the cutoff similarity threshold. The similarity threshold is given by
$$d = \mathrm{mean}(S_{5\%} + \cdots + S_{10\%}) \tag{3}$$
In the formula, $d$ is the cutoff similarity threshold and $S_{5\%}$ denotes the top 5% of similarity values after sorting in ascending order.
Next, the degree of similarity and dissimilarity between a band and other bands are defined as the similarity index and the dissimilarity index of the band, respectively. The similarity and dissimilarity indices are defined by
$$\alpha_i = \begin{cases} \mathrm{mean}(S_{i,j}), & S_{i,j} > d \\ 0, & S_{i,j} \le d \end{cases} \tag{4}$$
$$\theta_i = \begin{cases} 1 - \left(\max_{j:\alpha_j > \alpha_i} S_{i,j}\right)^2, & \text{if } \alpha_i \ne \max(\alpha) \\ 1 - \left(\min_{j:j \ne i} \varphi_j\right)^2, & \text{if } \alpha_i = \max(\alpha) \end{cases} \tag{5}$$
In the formulas, $\alpha_i$ is the similarity index of band $i$ and $\theta_i$ is the dissimilarity index of band $i$.
Finally, the product of the similarity and dissimilarity indices is used as the score of each band. The largest score bands are selected as the output bands according to the number of selected bands. Since the similarity and dissimilarity indices are of different magnitudes, they must be normalized prior to calculating their product. The product of the similarity and dissimilarity indices is given by
$$\eta_i = \mathrm{norm}(\alpha_i) \times \mathrm{norm}(\theta_i) \tag{6}$$
In the formula, $\eta_i$ is the product of the similarity and dissimilarity indices for band $i$, and $\mathrm{norm}(\cdot)$ denotes the normalization process.
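The final scoring step of Equation (6) can be sketched as follows; min–max normalization is our assumption, as the paper does not specify which normalization is used:

```python
import numpy as np

def srssim_scores(alpha, theta):
    """Eq. (6) sketch: min-max normalize the similarity and dissimilarity
    indices (assumed normalization), then take their elementwise product."""
    def norm(x):
        x = np.asarray(x, dtype=float)
        return (x - x.min()) / (x.max() - x.min() + 1e-12)
    return norm(alpha) * norm(theta)
```

The bands with the largest $\eta_i$ are then kept, up to the number of selected bands.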
The SR–SSIM method measures the importance of each band via structural similarity and selects representative bands as the output bands. However, for a large number of selected bands, the output bands of the SR–SSIM method are often sequentially consecutive, affecting the classification effectiveness of the method.

2.3. SFS Method

In 1971, Whitney [39] proposed the sequential forward selection method. In this method, an empty set and an objective function are defined. Then, a feature is added sequentially to the empty set to optimize the objective function. The procedure is terminated when either the number of added features reaches a threshold or the objective function is no longer improved. The advantage of this method is that the relationship between the selected features is considered. The selected features are obtained according to
$$R(n) = \max\left(W(R_{n-1}, X)\right) \tag{7}$$
In the formula, $R(n)$ represents the $n$ selected features, $W(\cdot)$ is the objective function, and $X$ is the $n$th feature to be selected.
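The SFS procedure of Equation (7) can be sketched as a greedy loop; `objective` is a placeholder callable scoring a candidate feature set, and the function name is ours:

```python
def sequential_forward_selection(candidates, objective, n_select):
    """SFS sketch, Eq. (7): start from the empty set and repeatedly add
    the candidate that maximizes the objective on the grown set."""
    selected = []
    while len(selected) < n_select:
        best = max((c for c in candidates if c not in selected),
                   key=lambda c: objective(selected + [c]))
        selected.append(best)
    return selected
```

With `sum` as a toy objective, the loop simply picks the largest remaining values first, which illustrates how each choice depends on the features already selected.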

3. Materials and Methods

3.1. Design of Experimental Evaluation

3.1.1. Datasets

To ensure the reproducibility of the experiment and comparability of the method, this experiment uses the same three publicly available hyperspectral datasets as the DIG method, namely the Salinas, Botswana, and Pavia University datasets. Specific information for these hyperspectral datasets is shown in Table 1.
(1) Salinas: The dataset was acquired by the U.S. over the Salinas Valley, California, using the AVIRIS optical sensor with a spatial resolution of 3.7 m. The spectral range is 400–2500 nm with 224 spectral bands, of which 204 bands were finally selected for the experiment. Each spectral band contains 512 × 217 image elements, and the features are divided into 16 categories, as shown in Figure 1.
(2) Botswana: The dataset was acquired by the United States in 2001 over the Okavango Delta in northwestern Botswana with a spatial resolution of 30 m. The spectral range is 400–2500 nm with 242 spectral bands, of which 145 bands were finally selected for the experiment. Each spectral band contains 1476 × 256 image elements, and the features are divided into 14 categories, as shown in Figure 2.
(3) Pavia University: The dataset was acquired by the European Space Agency (ESA) over the northern Italian city of Pavia using the ROSIS sensor with a spatial resolution of 1.3 m. The spectral range is 430–860 nm and contains 105 spectral bands, of which only 103 were actually used; the features are grouped into 9 categories, as shown in Figure 3. The dataset initially contained 610 × 610 image elements but, since some bands did not contain any information, a total of 610 × 340 image elements remained after the uninformative bands were removed.

3.1.2. Experimental Parameters

Two classifiers are used in this experiment: support vector machine (SVM) and random forest (RF). In the SVM classifier, the kernel function is the Gaussian kernel, and the penalty coefficient is set to 1. In the RF classifier, the number of decision trees is 100, and the impurity criterion is Gini. Overall accuracy (OA) is chosen as the evaluation index. To prevent uneven data selection from influencing the classification accuracy, 10% of the data are extracted as training data and the remaining 90% are used as test data, and the average over 20 executions of each method is taken as the final result. The number of selected bands varies from 5 to 30 in steps of 5.
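The evaluation protocol above (OA on a random 10%/90% train/test split) can be sketched with two small helpers; the function names are illustrative:

```python
import numpy as np

def overall_accuracy(y_true, y_pred):
    """OA: fraction of correctly classified labeled pixels."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    return float((y_true == y_pred).mean())

def split_10_90(n_samples, seed=0):
    """Random 10%/90% train/test index split used in the experiments."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(n_samples)
    n_train = int(round(0.1 * n_samples))
    return idx[:n_train], idx[n_train:]
```

In practice this split would be redrawn for each of the 20 repetitions (e.g., by varying `seed`) before averaging the OA values.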

3.2. Proposed Method

In this subsection, we introduce the method used in this paper and analyze its time complexity. The method of this paper is divided into two main parts, band subspace partition and band selection, and its flowchart is shown in Figure 4. Band subspace partition divides the band space into band subspaces via an ordered partition according to the relationship between the number of bands in the hyperspectral image and the number of selected bands. Band selection iteratively selects a representative band in each band subspace to form the output bands. The time complexity of the method in this paper is also analyzed in terms of band subspace partition and band selection. The code is available at https://github.com/AuthorNg/GPCBS (accessed on 24 January 2025).

3.2.1. Band Subspace Partition

This paper proposes global partition clustering to avoid the effect of coarse partition on the results. This band subspace partition method divides the band subspace according to the output bands of the ranking-based band selection method while considering the relationship between the number of bands in the hyperspectral image and the number of selected bands.
First, the band selection ratio, which is the theoretical number of bands in each band subspace, is calculated. Lu et al. [40] argued that too few bands in a band subspace would leave too little choice during band selection; thus, the number of bands per subspace should be set to at least 3. However, when the number of selected bands is large, the output bands of the ranking-based band selection method are often sequentially consecutive, rendering the output-band-based partition method incapable of fulfilling the requirement on the number of bands per subspace. Therefore, the requirement is tightened, and the number of bands per subspace is set to at least 5. When the band selection ratio is less than 5, the output-band-based partition method is no longer applicable, and equal interval partition of the band space is used instead. The band selection ratio is given by
$$K = \frac{L}{N} \tag{8}$$
In the formula, $K$ is the band selection ratio.
(1) Band selection ratio greater than or equal to 5: When the band selection ratio is greater than or equal to 5, the output bands of the ranking-based band selection method are selected as the center band, and the partition points are set and tuned by fine partition.
First, the SR–SSIM method is used to compute the center bands in the band space. The input is the SSIM of all pairs of bands, and the output is the center band of the number of selected bands. The center bands are obtained according to
$$C = SR(S) \tag{9}$$
In the formula, $C$ denotes the center bands, $SR(\cdot)$ denotes the SR–SSIM method, and $S$ denotes the SSIM of all band pairs.
Next, the partition points are set and tuned using the fine partition method in the E-SR–SSIM method, where the partition points should be such that there is only one center band in each band subspace. The first fine partition is to set the partition points between the center bands using the adjacent center bands as the boundaries. Subsequent fine partitions are made by tuning the partition points between the center bands using the adjacent partition points as the boundaries. The fine partition is then repeated until the partition points no longer change. The goal of the partition points is to maximize the correlation within a single band subspace group while minimizing the correlation between the adjacent band subspace groups. The partition points are set and adjusted according to
$$T_k = \arg\min \frac{\sum_{i=1}^{Z_k} \sum_{j=Z_k+1}^{Z_k+Z_{k+1}} R_{i,j}}{\sum_{i=1}^{Z_k-1} \sum_{j=i+1}^{Z_k} R_{i,j} \times \sum_{i=1}^{Z_{k+1}-1} \sum_{j=i+1}^{Z_{k+1}} R_{i,j}} \tag{10}$$
$$R_{i,j} = \frac{\sigma_{H_i,H_j}}{\sigma_{H_i}\sigma_{H_j}} \tag{11}$$
In the formulas, $T_k$ is the partition point of the $k$th band subspace after fine partition, $Z_k$ is the number of bands in the $k$th band subspace, and $R_{i,j}$ is the correlation coefficient of bands $i$ and $j$.
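A small numerical sketch of the tuning objective in Equations (10) and (11): given a band-correlation matrix, we scan candidate boundaries between two adjacent subspaces and keep the one minimizing the between-group correlation relative to the two within-group correlations. The function name, argument names and candidate boundary range are illustrative assumptions:

```python
import numpy as np

def tune_partition_point(R, lo, nxt):
    """Pick the boundary t splitting bands [lo, nxt) into [lo, t) and
    [t, nxt) so that between-group correlation is small relative to the
    product of the two within-group correlations (Eq. 10 sketch)."""
    def objective(t):
        between = R[lo:t, t:nxt].sum()
        within_left = np.triu(R[lo:t, lo:t], 1).sum()
        within_right = np.triu(R[t:nxt, t:nxt], 1).sum()
        return between / (within_left * within_right + 1e-12)
    # require at least two bands on each side so the within terms exist
    return min(range(lo + 2, nxt - 1), key=objective)
```

On a correlation matrix with two clear blocks of correlated bands, the objective is minimized exactly at the block boundary.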
(2) Band selection ratio less than 5: When the band selection ratio is less than 5, equal interval partition can avoid the uneven distribution of the number of bands in the band subspace. Therefore, the partition point is obtained by equal interval partition according to (1). The center bands of the band space are subsequently calculated using the SR–SSIM method. The input is the SSIM of all band pairs, and the band with the highest score in each band subspace is selected as the center band.

3.2.2. Band Selection

The OCF, ASPS, FNGBS and DIG ordered partition-based band selection methods select a representative band in each band subspace to form the output bands but do not consider the relationship between the output bands in the selection process, leading to a certain degree of redundancy between the output bands. Therefore, inspired by the SFS and DIG methods, this paper proposes the forward band replacement method. This band selection method fully considers the relationship between the output bands, reducing the redundancy of the output bands.
First, the entropy value of each band is calculated and normalized. The entropy is calculated according to
$$E_i = -\sum_{k=1}^{m} p_k \log_2 p_k \tag{12}$$
In the formula, $E_i$ is the entropy value of band $i$, $m$ is the maximum pixel value of band $i$, and $p_k$ is the ratio of the number of image elements with value $k$ to the total number of image elements of band $i$.
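Equation (12) is the Shannon entropy of a band's pixel-value histogram, which can be sketched directly (the function name is ours):

```python
import numpy as np

def band_entropy(band):
    """Shannon entropy of a band's pixel-value histogram, Eq. (12)."""
    _, counts = np.unique(np.asarray(band).ravel(), return_counts=True)
    p = counts / counts.sum()  # p_k: fraction of pixels with each value
    return float(-(p * np.log2(p)).sum())
```

A band with four equally frequent values has entropy 2 bits, while a constant band carries no information and has entropy 0.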
In both the FNGBS and DIG methods, the local densities of the bands are computed in each band subspace, and their local densities are affected only by the bands in that band subspace. Therefore, the local densities of bands that are in different band subspaces are not comparable, making the output bands selected based on the band score unreliable. To correct this problem, in this paper, the SR–SSIM method is used to calculate the global density of each band, and the output band score is used as the global density of that band. The input is the correlation coefficient of all pairs of bands. The global density is given by
$$U_i = SR(R) \tag{13}$$
In the formula, $U_i$ is the global density of band $i$ and $R$ is the correlation coefficient of all band pairs.
The discrepancy degree is subsequently calculated for each band. The discrepancy degree is defined as the sum of the Euclidean distances from the band to the center bands and is normalized. Importantly, the distance between a band and the center band of its own subspace is not included. The discrepancy degree is given by
$$D_i = \sum_{k=1}^{N} \left\| H_i - H_{C_k} \right\|_2, \quad B(i) \ne B(C_k) \tag{14}$$
In the formula, $D_i$ is the discrepancy degree of band $i$, $\|\cdot\|_2$ denotes the Euclidean distance, and $B(i)$ is the ordinal number of the band subspace to which band $i$ belongs.
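Equation (14) can be sketched as follows (normalization omitted for brevity); `bands` holds the flattened band images and the argument names are our own:

```python
import numpy as np

def discrepancy_degree(bands, centers, subspace_of, i):
    """Eq. (14) sketch: sum of Euclidean distances from band i to every
    center band lying in a *different* subspace.

    bands: (L, P) array of flattened band images; centers: center band
    indices; subspace_of: maps a band index to its subspace ordinal."""
    return float(sum(np.linalg.norm(bands[i] - bands[c])
                     for c in centers
                     if subspace_of[c] != subspace_of[i]))
```

Bands far from the other subspaces' centers receive a large discrepancy degree, favoring spectrally distinctive choices.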
The entropy value, global density and discrepancy degree are multiplied to calculate the score for each band. The score is given by
$$F_i = E_i \times D_i \times U_i \tag{15}$$
In the formula, $F_i$ is the score of band $i$.
Finally, the band with the highest score is selected as the output band, and that band is replaced with the center band within its band subspace. The above process is repeated until a band is selected as the output band within each band subspace.
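The select-and-replace loop just described can be sketched as follows. This is an illustrative reconstruction, not the authors' implementation: `discrepancy_fn` stands in for Equation (14) recomputed against the current centers, and all names are ours:

```python
def forward_band_replacement(E, U, discrepancy_fn, centers, subspace_of):
    """FBR sketch: iteratively pick the globally highest-scoring band
    (Eq. 15: F = E * D * U), fix it as the output band of its subspace,
    and replace that subspace's center so later scores see the choice."""
    centers = list(centers)           # one center per subspace ordinal
    selected = {}                     # subspace ordinal -> chosen band
    while len(selected) < len(centers):
        best, best_f = None, -1.0
        for i in range(len(E)):
            k = subspace_of[i]
            if k in selected:         # this subspace already has its band
                continue
            f = E[i] * discrepancy_fn(centers, i) * U[i]
            if f > best_f:
                best, best_f = i, f
        k = subspace_of[best]
        selected[k] = best
        centers[k] = best             # replace the center with the choice
    return [selected[k] for k in sorted(selected)]
```

Because the centers are updated after each pick, the discrepancy degrees of later candidates are measured against the bands already selected, which is what reduces redundancy among the outputs.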
The pseudocode of GPCBS is presented in Algorithm 1 to show the method framework in more detail.
Algorithm 1 GPCBS Method
Input: $H \in \mathbb{R}^{W \times H \times L}$: hyperspectral dataset; $N$: the number of selected bands.
Output: $P$: selected bands.
1. Apply (8) to calculate the band selection ratio $K$.
2. if $K \ge 5$:
3.  Apply (9) to calculate the center bands $C$.
4.  Apply (10) to set the partition points $T_k$.
5.  Apply (10) to tune the partition points $T_k$.
6.  while $T_k$ changes:
7.   Apply (10) to tune the partition points $T_k$.
8. else:
9.  Apply (1) to calculate the partition points $T_k$.
10.  Apply (9) to calculate the center bands $C$.
11. Apply (12) to calculate the entropy values $E$.
12. Apply (13) to calculate the global densities $U$.
13. while $|P| \ne N$:
14.  Apply (14) to calculate the discrepancy degrees $D$.
15.  Apply (15) to calculate the scores $F$.
16.  Add the highest-scoring band to $P$.
17.  Replace the corresponding center band.

3.2.3. Time Complexity Analysis

The method in this paper consists of two main parts: band subspace partition and band selection. Next, we analyze the time complexity of each part separately.
(1)
Band Subspace Partition: The time complexity of the band subspace partition mainly comprises the coarse and fine partition and the SR–SSIM method. The time complexity of the coarse partition is negligible, whereas that of the fine partition is $O(L^2WH)$ [35]. The time complexity of the SR–SSIM method is also $O(L^2WH)$. Therefore, the time complexity of the band subspace partition is $O(L^2WH)$.
(2)
Band Selection: The time complexity of band selection is mainly due to the calculations of the entropy value, global density, and discrepancy degree, whose time complexities are $O(LWH)$, $O(L^2WH)$, and $O(N^2L)$, respectively. Therefore, the time complexity of band selection is $O(L^2WH + LWH + N^2L)$.
Considering the above two parts together, the time complexity of the method in this paper is $O(L^2WH + LWH + N^2L)$. Since $N < L$ and $L \le WH$, the $L^2WH$ term dominates, and the final time complexity of this method is $O(L^2WH)$. The time complexity of GPCBS is the same as that of the OCF, ASPS, FNGBS, DIG and SR–SSIM methods.

3.3. Comparison Methods

Six band selection methods are used for comparison with the proposed method in terms of classification accuracy. Their main steps are summarized below. Table 2 shows the specific information for the six comparison band selection methods.
(1)
OCF [31] uses a dynamic programming method to calculate all possible partition cases. Then, the objective function is set to measure the degree of correlation within the band subspace, and the optimal partition point is selected based on this degree of correlation. Finally, the E-FDPC method is used for the entire band space to select the highest-scoring band within each band subspace as the output band.
(2)
ASPS [35] first divides the entire band space into equal intervals, followed by tuning the partition points according to the degree of correlation of the neighboring band subspaces. The degree of intragroup correlation in the band subspace is maximized, whereas the degree of intergroup correlation in the adjacent band subspace is minimized. Finally, the mean and variance of each band patch are calculated to measure the noise level of the band, and the band with the lowest noise level in each band subspace is selected as the output band. The patch size is set to 3 × 3, and 10% of all patches in the band are selected to calculate the mean and variance.
(3)
FNGBS [30] first divides the entire band space into equal intervals and then identifies the bands lying between the center bands of the (k−1)th and (k+1)th band subspaces. The degree of correlation between these bands and the center band of the kth band subspace is calculated so that the partition points can be tuned. Finally, the local density and entropy value of each band are calculated, the score of each band is computed as the product of these two values, and the band with the highest score in each band subspace is selected as the output band. The value of k in the k-nearest-neighbor method is set to 3, and 1% of all rasters of the band are selected to calculate the entropy value.
(4)
DIG [36] first divides the entire band space into equal intervals and then calculates the degree of correlation between the bands at the partition points and all of the bands between the neighboring partition points to tune the partition points. Finally, the local density, discrepancy degree and entropy value of each band are calculated to compute the score of each band based on the product of these three values, and the band with the highest score in each subspace is selected as the output band.
(5)
SR–SSIM [38] first calculates the similarity and dissimilarity indices for each band, then calculates the score for each band by taking the product of these two indices and finally selects the band with the highest score based on the number of selected bands.
(6)
E-SR–SSIM [37] first divides the entire band space into equal intervals, and the partition points are then tuned according to the degree of correlation of adjacent band subspaces; the tuning is repeated until the partition points no longer change. Finally, a modified SR–SSIM method, in which a small value ε = 10⁻⁷ is subtracted from each similarity index α, is applied to each band subspace, and the band with the highest score in each subspace is selected as the output band.
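The ranking idea shared by SR–SSIM and E-SR–SSIM — score each band by the product of a similarity index and a dissimilarity index derived from pairwise SSIM — can be sketched as follows. This is a simplified illustration, not the published algorithms: the global SSIM formula, the neighborhood width used for the similarity index, and the way the two indices are aggregated are assumptions made here for brevity.

```python
import numpy as np

def ssim(x, y, c1=1e-4, c2=9e-4):
    """Global SSIM between two flattened bands (simplified, no sliding window)."""
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2))

def rank_bands(cube):
    """cube: (H, W, L) hyperspectral image.
    Score each band as (similarity to spectral neighbours) x
    (dissimilarity to all other bands) and rank descending."""
    L = cube.shape[-1]
    bands = [cube[..., i].ravel().astype(float) for i in range(L)]
    S = np.ones((L, L))                      # pairwise SSIM matrix
    for i in range(L):
        for j in range(i + 1, L):
            S[i, j] = S[j, i] = ssim(bands[i], bands[j])
    # similarity index: mean SSIM within a +/-2 spectral neighbourhood (assumed width)
    alpha = np.array([S[i, max(0, i - 2):i + 3].mean() for i in range(L)])
    # dissimilarity index: 1 - mean SSIM with all other bands
    beta = 1.0 - (S.sum(axis=1) - 1.0) / (L - 1)
    return np.argsort(-(alpha * beta))       # best-scoring bands first
```

A band that is redundant with the whole cube gets a low β, while a band representative of its local spectral neighborhood gets a high α, so the product favors representative yet non-redundant bands.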

4. Results

To verify the effectiveness of the method developed in this paper, experiments are conducted on publicly available hyperspectral datasets. First, the effectiveness of the proposed band subspace partition and band selection is analyzed through ablation experiments. Second, the effectiveness of the proposed band selection method is analyzed and discussed through comparison experiments.

4.1. Ablation Study

The OCF, ASPS, FNGBS, DIG and GPCBS methods consist of two parts: band subspace partition and band selection. To verify the effectiveness of the GPC and FBR methods used in the proposed GPCBS method, GPC and FBR are combined with the OCF, ASPS, FNGBS, DIG methods to compare the classification accuracies on the three datasets.

4.1.1. Effectiveness of the Band Subspace Partition

The OCF, ASPS, FNGBS, and DIG methods are integrated with the GPC of the GPCBS method to obtain the GPC–OCF, GPC–ASPS, GPC–FNGBS, and GPC–DIG methods. The OA values of the original and improved methods on the two classifiers are shown in Figure 5 and Figure 6.
Figure 5 and Figure 6 show that the classification accuracy of GPC-OCF on the Salinas and Botswana datasets is significantly better than that of OCF. In particular, when the number of selected bands is 5, GPC-OCF improves the classification accuracy by 6.72% and 4.97% for the SVM and RF classifiers on the Botswana dataset, respectively. The classification accuracy of GPC-ASPS improves slightly over that of ASPS under both classifiers on all three datasets. The classification accuracy of GPC-FNGBS is comparable to that of FNGBS on the Salinas dataset, increases significantly on the Botswana dataset, and decreases slightly on the Pavia University dataset. Notably, when the number of selected bands is 5, the classification accuracy of GPC-FNGBS improves significantly for both the SVM and RF classifiers on the Salinas dataset, by 1.38% and 1.60%, respectively. For the RF classifier, the classification accuracy of GPC-DIG is almost the same as that of DIG on all three datasets; for the SVM classifier, however, GPC-DIG improves the classification accuracy in most cases on all three datasets. In summary, GPC generally improves the classification accuracy of the comparison methods.

4.1.2. Effectiveness of Band Selection

The OCF, ASPS, FNGBS, and DIG methods are integrated with the FBR of the GPCBS method to obtain the OCF–FBR, ASPS–FBR, FNGBS–FBR, and DIG–FBR methods. The OA values of the original and improved methods on the two classifiers are shown in Figure 7 and Figure 8.
As shown in Figure 7 and Figure 8, the classification accuracy of OCF–FBR is significantly better than that of OCF. In particular, OCF–FBR improves the classification accuracy by an average of 1% and a maximum of 3.56% for the two classifiers on the Botswana dataset. For the two classifiers on the Salinas dataset, the classification accuracy of ASPS–FBR is nearly the same as that of ASPS. However, on the Botswana and Pavia University datasets, the classification accuracy of ASPS–FBR is significantly better than that of ASPS. In particular, when the number of selected bands is 5, ASPS–FBR improves the classification accuracy by 7.47% and 6.97% for the SVM and RF classifiers on the Botswana dataset, respectively. For both classifiers on the Salinas and Pavia University datasets, the classification accuracy of FNGBS–FBR is nearly the same as that of FNGBS. However, it increases significantly on the Botswana dataset for both classifiers: when the number of selected bands is 5, FNGBS–FBR improves the classification accuracy by 3.86% and 2.12% for the SVM and RF classifiers, respectively. The classification accuracy of DIG–FBR is almost the same as that of DIG for both classifiers on the Pavia University dataset, but DIG–FBR performs better than DIG on the Salinas and Botswana datasets. For the SVM and RF classifiers on the Salinas dataset, DIG–FBR improves the classification accuracy by up to 1.03% and 1.10%, respectively. In summary, FBR generally improves the classification accuracy of the compared methods.

4.2. Comparison of Classification Performance

The OCF, ASPS, FNGBS, DIG, SR–SSIM and E-SR–SSIM methods are compared with the GPCBS method in terms of classification effectiveness. The OA values of the comparison methods and the proposed method under the two classifiers are shown in Figure 9 and Figure 10. The GPCBS method generally outperforms the OCF and ASPS methods for the two classifiers on the three datasets. For the RF classifier on the Botswana dataset, when the number of selected bands is 5, the GPCBS method outperforms the OCF and ASPS methods by 6.29% and 8.05%, respectively. GPCBS also shows better classification results than the DIG, SR–SSIM and E-SR–SSIM methods for the SVM classifier on the three datasets. For the SVM classifier on the Botswana dataset, when the number of selected bands is 30, the GPCBS method outperforms the DIG and SR–SSIM methods by 1.12% and 1.31%, respectively. For the RF classifier on the Salinas dataset, when the number of selected bands is 5, the GPCBS method outperforms the E-SR–SSIM method by 5.53%. The FNGBS method is not stable: on the Pavia University dataset, it outperforms the GPCBS method only when the number of selected bands is 20. When the number of selected bands is 5, the classification accuracy of the GPCBS method for the SVM classifier on the Salinas and Botswana datasets is higher than that of the FNGBS method by 1.69% and 3.86%, respectively. In summary, GPCBS generally outperforms the comparison methods in classification.
The GPCBS method has the advantage of not being affected by equal interval partition. For a small number of selected bands, the classification accuracy of GPCBS is therefore expected to be better than that of the comparison methods. When the number of selected bands is 5, the output bands of this method and the comparison methods on the three datasets are shown in Table 3, and their classification accuracies under the two classifiers are shown in Table 4. Note that the indices of the output bands in Table 3 start from 1, and the method with the best classification accuracy is marked with an asterisk in Table 4. A good band selection generally needs to avoid three situations: (1) the selected bands have consecutive indices; (2) the selection includes the first band; (3) the selection includes the last band. Table 3 shows that the bands selected by the DIG method contain consecutive indices on the Salinas and Pavia University datasets, as do those of the ASPS method on the Pavia University dataset. The ASPS method selects the first band on the Botswana dataset. The E-SR–SSIM method selects the last band on the Salinas and Pavia University datasets, and the ASPS method selects the last band on the Botswana and Pavia University datasets. In theory, the ASPS, DIG, and E-SR–SSIM methods should therefore have poorer classification accuracy. Table 4 shows that the classification accuracy of GPCBS on the Salinas and Botswana datasets is better than that of the comparison methods; in particular, its classification results are 1–2% higher on the Salinas dataset. Although GPCBS is not the best on the Pavia University dataset, it differs by less than 0.8% from the best-performing OCF and E-SR–SSIM methods. Moreover, the OCF, ASPS, FNGBS and E-SR–SSIM methods are not stable. For the SVM classifier on the Botswana dataset, GPCBS outperforms the OCF, ASPS, FNGBS and E-SR–SSIM methods by 8.76%, 9.85%, 3.85% and 5.24%, respectively.
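The three undesirable situations above can be checked mechanically. The helper below is an illustrative check (hypothetical, not part of GPCBS), applied to the Salinas selections from Table 3; the Salinas dataset has 204 bands (Table 1).

```python
def band_selection_flags(bands, n_bands):
    """Flag the three undesirable situations for a set of selected bands.
    `bands` are 1-based band indices; `n_bands` is the total band count."""
    s = sorted(bands)
    return {
        "consecutive": any(b - a == 1 for a, b in zip(s, s[1:])),
        "first_band": s[0] == 1,
        "last_band": s[-1] == n_bands,
    }

# Salinas selections from Table 3 (204 bands in total):
print(band_selection_flags([40, 41, 102, 122, 172], 204))  # DIG: consecutive 40, 41
print(band_selection_flags([7, 101, 108, 147, 204], 204))  # E-SR-SSIM: last band 204
print(band_selection_flags([32, 46, 70, 92, 180], 204))    # GPCBS: no flags raised
```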
Compared with the DIG method, which inspired GPCBS, and the SR–SSIM method, which generates the center bands, GPCBS shows better classification results on the three datasets. In summary, for a small number of required bands, GPCBS yields better classification results than the comparison methods.

5. Conclusions

This paper proposes a hyperspectral band selection method based on global partition clustering. First, the band space is divided into band subspaces according to the number of selected bands, based on coarse and fine partition and the SR–SSIM method. Then, one band is iteratively selected as the output band within each band subspace according to the SFS method. The method makes two main contributions. First, it avoids the negative effect of equal interval partition on the results, thus increasing the flexibility and accuracy of the partition. Second, it fully considers the relationship between the selected bands and the bands to be selected, effectively reducing the redundancy between the output bands. Ablation and comparison experiments on three publicly available datasets demonstrate that the proposed method generally outperforms the comparison methods in classification accuracy. Three research directions can be pursued in future work. First, a more reasonable band selection ratio K could be determined based on previous work and experience. Second, because GPCBS must calculate SSIM values and correlation coefficients, which substantially increases its running time, finding alternative metrics for SSIM and correlation coefficients is particularly important. Third, the results of hyperspectral band selection methods can serve as input data for land change simulation and subsequent remote sensing research [41,42].
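As a rough illustration of the selection step, the sketch below runs a generic sequential-forward-selection (SFS) loop over precomputed band subspaces, greedily limiting redundancy with the bands already selected. The subspace ranges, the first-pick criterion, and the correlation-based objective are assumptions for the sake of a runnable example, not the actual GPCBS implementation.

```python
import numpy as np

def sfs_select(cube, subspaces):
    """Pick one band per subspace with a greedy SFS-style loop.
    cube: (H, W, L) image; subspaces: list of (start, end) 0-based
    index ranges, end exclusive. Later picks minimise the maximum
    absolute correlation with the bands selected so far."""
    flat = cube.reshape(-1, cube.shape[-1]).astype(float)
    corr = np.abs(np.corrcoef(flat.T))       # L x L band-correlation matrix
    selected = []
    for start, end in subspaces:
        candidates = range(start, end)
        if not selected:
            # first subspace: take the band most correlated with its own
            # subspace, i.e. the most representative band of the group
            best = max(candidates, key=lambda b: corr[b, start:end].sum())
        else:
            # later subspaces: take the band least redundant with the
            # bands already selected
            best = min(candidates, key=lambda b: max(corr[b, s] for s in selected))
        selected.append(best)
    return selected

# Toy usage: 12 bands split into three equal subspaces
sel = sfs_select(np.random.rand(6, 6, 12), [(0, 4), (4, 8), (8, 12)])
```

Each iteration conditions the next pick on all bands chosen so far, which is the property the paper highlights as the source of reduced inter-band redundancy.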

Author Contributions

Conceptualization, T.H. and P.G.; methodology, T.H., X.G. and P.G.; validation, T.H. and P.G.; formal analysis, T.H.; writing—original draft preparation, T.H., X.G. and P.G.; writing—review and editing, T.H., X.G. and P.G.; project administration, T.H. and P.G.; funding acquisition, T.H. and P.G. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the National Natural Science Foundation of China (Grant No. 42271418) and the Open Fund of the State Key Laboratory of Remote Sensing Science and Beijing Engineering Research Center for Global Land Remote Sensing Products (Grant No. OF202412).

Data Availability Statement

The three public hyperspectral datasets can be downloaded from https://www.ehu.eus/ccwintco/index.php?title=Hyperspectral_Remote_Sensing_Scenes (accessed on 31 December 2024).

Acknowledgments

The authors thank the four anonymous reviewers for their insightful comments.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Duan, P.; Ghamisi, P.; Kang, X.; Rasti, B.; Li, S.; Gloaguen, R. Fusion of dual spatial information for hyperspectral image classification. IEEE Trans. Geosci. Remote Sens. 2020, 59, 7726–7738. [Google Scholar] [CrossRef]
  2. Rasti, B.; Scheunders, P.; Ghamisi, P.; Licciardi, G.; Chanussot, J. Noise reduction in hyperspectral imagery: Overview and application. Remote Sens. 2018, 10, 482. [Google Scholar] [CrossRef]
  3. Ghamisi, P.; Plaza, J.; Chen, Y.; Li, J.; Plaza, A.J. Advanced spectral classifiers for hyperspectral images: A review. IEEE Geosci. Remote Sens. Mag. 2017, 5, 8–32. [Google Scholar] [CrossRef]
  4. Wang, Y.; Song, C.; Cheng, C.; Wang, H.; Wang, X.; Gao, P. Modelling and evaluating the economy-resource-ecological environment system of a third-polar city using system dynamics and ranked weights-based coupling coordination degree model. Cities 2023, 133, 104151. [Google Scholar] [CrossRef]
  5. Hu, T.; Zhang, X.; Sun, Z.; Ye, S.; Gao, P. Contrasting Effects of Structural Similarity and Entropic Metrics on Band Selection. IEEE Geosci. Remote Sens. Lett. 2024, 21, 5505205. [Google Scholar] [CrossRef]
  6. An, D.; Zhao, G.; Chang, C.; Wang, Z.; Li, P.; Zhang, T.; Jia, J. Hyperspectral field estimation and remote-sensing inversion of salt content in coastal saline soils of the Yellow River Delta. Int. J. Remote Sens. 2016, 37, 455–470. [Google Scholar] [CrossRef]
  7. Lu, G.; Fei, B. Medical hyperspectral imaging: A review. J. Biomed. Opt. 2014, 19, 010901. [Google Scholar] [CrossRef] [PubMed]
  8. Calin, M.A.; Parasca, S.V.; Savastru, D.; Manea, D. Hyperspectral imaging in the medical field: Present and future. Appl. Spectrosc. Rev. 2014, 49, 435–447. [Google Scholar] [CrossRef]
  9. Gao, P.; Gao, Y.; Ou, Y.; McJeon, H.; Zhang, X.; Ye, S.; Wang, Y.; Song, C. Fulfilling global climate pledges can lead to major increase in forest land on Tibetan Plateau. iScience 2023, 26, 106364. [Google Scholar] [CrossRef] [PubMed]
  10. Plaza, A.; Benediktsson, J.A.; Boardman, J.W.; Brazile, J.; Bruzzone, L.; Camps-Valls, G.; Chanussot, J.; Fauvel, M.; Gamba, P.; Gualtieri, A. Recent advances in techniques for hyperspectral image processing. Remote Sens. Environ. 2009, 113, S110–S122. [Google Scholar] [CrossRef]
  11. Gao, P.; Zhang, H.; Wu, Z.; Wang, J. A joint landscape metric and error image approach to unsupervised band selection for hyperspectral image classification. IEEE Geosci. Remote Sens. Lett. 2021, 19, 5505805. [Google Scholar] [CrossRef]
  12. Hughes, G. On the mean accuracy of statistical pattern recognizers. IEEE Trans. Inf. Theory 1968, 14, 55–63. [Google Scholar] [CrossRef]
  13. Zhou, Y.; Peng, J.; Chen, C.P. Dimension reduction using spatial and spectral regularized local discriminant embedding for hyperspectral image classification. IEEE Trans. Geosci. Remote Sens. 2014, 53, 1082–1095. [Google Scholar] [CrossRef]
  14. Wang, Q.; Lin, J.; Yuan, Y. Salient band selection for hyperspectral image classification via manifold ranking. IEEE Trans. Neural Netw. Learn. Syst. 2016, 27, 1279–1289. [Google Scholar] [CrossRef] [PubMed]
  15. Harsanyi, J.C.; Chang, C.-I. Hyperspectral image classification and dimensionality reduction: An orthogonal subspace projection approach. IEEE Trans. Geosci. Remote Sens. 1994, 32, 779–785. [Google Scholar] [CrossRef]
  16. Uddin, M.P.; Mamun, M.A.; Afjal, M.I.; Hossain, M.A. Information-theoretic feature selection with segmentation-based folded principal component analysis (PCA) for hyperspectral image classification. Int. J. Remote Sens. 2021, 42, 286–321. [Google Scholar] [CrossRef]
  17. Xu, Y.; Zhang, L.; Du, B.; Zhang, F. Spectral–spatial unified networks for hyperspectral image classification. IEEE Trans. Geosci. Remote Sens. 2018, 56, 5893–5909. [Google Scholar] [CrossRef]
  18. Zhu, M.; Jiao, L.; Liu, F.; Yang, S.; Wang, J. Residual spectral–spatial attention network for hyperspectral image classification. IEEE Trans. Geosci. Remote Sens. 2020, 59, 449–462. [Google Scholar] [CrossRef]
  19. Sun, H.; Ren, J.; Zhao, H.; Sun, G.; Liao, W.; Fang, Z.; Zabalza, J. Adaptive distance-based band hierarchy (ADBH) for effective hyperspectral band selection. IEEE Trans. Cybern. 2020, 52, 215–227. [Google Scholar] [CrossRef]
  20. Wang, J.; Tang, C.; Zheng, X.; Liu, X.; Zhang, W.; Zhu, E. Graph regularized spatial–spectral subspace clustering for hyperspectral band selection. Neural Netw. 2022, 153, 292–302. [Google Scholar] [CrossRef] [PubMed]
  21. Zeng, M.; Cai, Y.; Cai, Z.; Liu, X.; Hu, P.; Ku, J. Unsupervised hyperspectral image band selection based on deep subspace clustering. IEEE Geosci. Remote Sens. Lett. 2019, 16, 1889–1893. [Google Scholar] [CrossRef]
  22. Gao, P.; Wang, J.; Zhang, H.; Li, Z. Boltzmann entropy-based unsupervised band selection for hyperspectral image classification. IEEE Geosci. Remote Sens. Lett. 2018, 16, 462–466. [Google Scholar] [CrossRef]
  23. Jia, S.; Tang, G.; Zhu, J.; Li, Q. A novel ranking-based clustering approach for hyperspectral band selection. IEEE Trans. Geosci. Remote Sens. 2015, 54, 88–102. [Google Scholar] [CrossRef]
  24. Li, S.; Peng, B.; Fang, L.; Li, Q. Hyperspectral band selection via optimal combination strategy. Remote Sens. 2022, 14, 2858. [Google Scholar] [CrossRef]
  25. Sun, W.; Du, Q. Hyperspectral band selection: A review. IEEE Geosci. Remote Sens. Mag. 2019, 7, 118–139. [Google Scholar] [CrossRef]
  26. Wang, Q.; Zhang, F.; Li, X. Hyperspectral band selection via optimal neighborhood reconstruction. IEEE Trans. Geosci. Remote Sens. 2020, 58, 8465–8476. [Google Scholar] [CrossRef]
  27. Luo, X.; Xue, R.; Yin, J. Information-assisted density peak index for hyperspectral band selection. IEEE Geosci. Remote Sens. Lett. 2017, 14, 1870–1874. [Google Scholar] [CrossRef]
  28. Sun, H.; Zhang, L.; Ren, J.; Huang, H. Novel hyperbolic clustering-based band hierarchy (HCBH) for effective unsupervised band selection of hyperspectral images. Pattern Recognit. 2022, 130, 108788. [Google Scholar] [CrossRef]
  29. Sun, W.; Zhang, L.; Du, B.; Li, W.; Lai, Y.M. Band selection using improved sparse subspace clustering for hyperspectral imagery classification. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2015, 8, 2784–2797. [Google Scholar] [CrossRef]
  30. Wang, Q.; Li, Q.; Li, X. A fast neighborhood grouping method for hyperspectral band selection. IEEE Trans. Geosci. Remote Sens. 2020, 59, 5028–5039. [Google Scholar] [CrossRef]
  31. Wang, Q.; Zhang, F.; Li, X. Optimal clustering framework for hyperspectral band selection. IEEE Trans. Geosci. Remote Sens. 2018, 56, 5910–5922. [Google Scholar] [CrossRef]
  32. Santos, A.; Pedrini, H. A combination of k-means clustering and entropy filtering for band selection and classification in hyperspectral images. Int. J. Remote Sens. 2016, 37, 3005–3020. [Google Scholar] [CrossRef]
  33. Zhang, Y.; Desai, M.D.; Zhang, J.; Jin, M. Adaptive subspace decomposition for hyperspectral data dimensionality reduction. In Proceedings of the 1999 International Conference on Image Processing (Cat. 99CH36348), Kobe, Japan, 24–28 October 1999; pp. 326–329. [Google Scholar]
  34. Chang, C.-I.; Wang, S. Constrained band selection for hyperspectral imagery. IEEE Trans. Geosci. Remote Sens. 2006, 44, 1575–1585. [Google Scholar] [CrossRef]
  35. Wang, Q.; Li, Q.; Li, X. Hyperspectral band selection via adaptive subspace partition strategy. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2019, 12, 4940–4950. [Google Scholar] [CrossRef]
  36. Li, S.; Peng, B.; Fang, L.; Zhang, Q.; Cheng, L.; Li, Q. Hyperspectral band selection via difference between intergroups. IEEE Trans. Geosci. Remote Sens. 2023, 61, 5503310. [Google Scholar] [CrossRef]
  37. Hu, T.; Gao, P.; Ye, S.; Shen, S. Improved SR-SSIM band selection method based on band subspace partition. Remote Sens. 2023, 15, 3596. [Google Scholar] [CrossRef]
  38. Xu, B.; Li, X.; Hou, W.; Wang, Y.; Wei, Y. A similarity-based ranking method for hyperspectral band selection. IEEE Trans. Geosci. Remote Sens. 2021, 59, 9585–9599. [Google Scholar] [CrossRef]
  39. Whitney, A.W. A direct method of nonparametric measurement selection. IEEE Trans. Comput. 1971, 100, 1100–1103. [Google Scholar] [CrossRef]
  40. Lu, Y.; Ren, Y.; Cui, B. Noise robust band selection method for hyperspectral images. J. Remote Sens. 2022, 26, 2382–2398. [Google Scholar] [CrossRef]
  41. Gao, Y.; Song, C.; Liu, Z.; Ye, S.; Gao, P. Land-N2N: An effective and efficient model for simulating the demand-driven changes in multifunctional lands. Environ. Model. Softw. 2025, 185, 106318. [Google Scholar] [CrossRef]
  42. Lv, J.; Song, C.; Gao, Y.; Ye, S.; Gao, P. Simulation and analysis of the long-term impacts of 1.5° C global climate pledges on China’s land systems. Sci. China Earth Sci. 2025, 68, 457–472. [Google Scholar] [CrossRef]
Figure 1. Ground truth of the Salinas dataset.
Figure 2. Ground truth of the Botswana dataset.
Figure 3. Ground truth of the Pavia University dataset.
Figure 4. Flowchart of the proposed method.
Figure 5. Classification results of the global partition clustering method for different datasets with the SVM classifier. (a) Salinas. (b) Botswana. (c) Pavia University.
Figure 6. Classification results of the global partition clustering method for different datasets with the RF classifier. (a) Salinas. (b) Botswana. (c) Pavia University.
Figure 7. Classification results of the forward band replacement method for different datasets with the SVM classifier. (a) Salinas. (b) Botswana. (c) Pavia University.
Figure 8. Classification results of the forward band replacement method for different datasets with the RF classifier. (a) Salinas. (b) Botswana. (c) Pavia University.
Figure 9. Classification results of different band selection methods for different datasets with the SVM classifier. (a) Salinas. (b) Botswana. (c) Pavia University.
Figure 10. Classification results of different band selection methods for different datasets with the RF classifier. (a) Salinas. (b) Botswana. (c) Pavia University.
Table 1. Details of three hyperspectral datasets.

| Dataset          | Sensor | Pixel Number | Spatial Resolution (m) | Category | Band Number |
|------------------|--------|--------------|------------------------|----------|-------------|
| Salinas          | AVIRIS | 512 × 217    | 3.7                    | 16       | 204         |
| Botswana         | EO-1   | 1476 × 256   | 30                     | 14       | 145         |
| Pavia University | ROSIS  | 610 × 340    | 1.3                    | 9        | 103         |
Table 2. Details of six band selection methods.

| Method    | Type       | Method Parameter   | Publication Year |
|-----------|------------|--------------------|------------------|
| OCF       | Clustering |                    | 2018             |
| ASPS      | Clustering | B = 3 × 3, M = 10% | 2019             |
| FNGBS     | Clustering | K = 3, Z = 1%      | 2021             |
| DIG       | Clustering |                    | 2023             |
| SR–SSIM   | Ranking    |                    | 2020             |
| E-SR–SSIM | Clustering |                    | 2023             |
Table 3. Selected bands obtained by different band selection methods on three datasets.

| Dataset          | Method    | Selected Bands          |
|------------------|-----------|-------------------------|
| Salinas          | OCF       | 11, 32, 68, 88, 166     |
|                  | ASPS      | 15, 52, 94, 147, 203    |
|                  | FNGBS     | 32, 63, 69, 165, 194    |
|                  | DIG       | 40, 41, 102, 122, 172   |
|                  | SR–SSIM   | 6, 48, 62, 82, 202      |
|                  | E-SR–SSIM | 7, 101, 108, 147, 204   |
|                  | GPCBS     | 32, 46, 70, 92, 180     |
| Botswana         | OCF       | 53, 65, 92, 128, 137    |
|                  | ASPS      | 1, 50, 110, 137, 145    |
|                  | FNGBS     | 16, 53, 64, 88, 123     |
|                  | DIG       | 3, 28, 58, 87, 116      |
|                  | SR–SSIM   | 22, 34, 66, 98, 131     |
|                  | E-SR–SSIM | 22, 36, 54, 92, 138     |
|                  | GPCBS     | 20, 35, 71, 104, 133    |
| Pavia University | OCF       | 19, 33, 61, 66, 88      |
|                  | ASPS      | 27, 60, 61, 75, 103     |
|                  | FNGBS     | 18, 39, 52, 76, 88      |
|                  | DIG       | 20, 21, 41, 62, 82      |
|                  | SR–SSIM   | 23, 27, 61, 79, 90      |
|                  | E-SR–SSIM | 16, 45, 56, 81, 103     |
|                  | GPCBS     | 21, 32, 62, 80, 92      |
Table 4. OA for two classifiers on three datasets (%).

| Dataset          | Classifier | OCF     | ASPS  | FNGBS | DIG   | SR–SSIM | E-SR–SSIM | GPCBS   |
|------------------|------------|---------|-------|-------|-------|---------|-----------|---------|
| Salinas          | SVM        | 86.82   | 86.19 | 85.49 | 85.38 | 84.14   | 81.94     | 87.18 * |
|                  | RF         | 88.73   | 88.20 | 88.00 | 87.84 | 87.28   | 84.11     | 89.64 * |
| Botswana         | SVM        | 74.54   | 73.45 | 79.45 | 82.20 | 82.26   | 82.21     | 83.30 * |
|                  | RF         | 76.56   | 74.81 | 80.68 | 82.83 | 82.26   | 82.79     | 82.86 * |
| Pavia University | SVM        | 79.27 * | 79.15 | 79.26 | 78.57 | 78.88   | 78.57     | 78.99   |
|                  | RF         | 85.28   | 83.91 | 84.69 | 84.08 | 83.64   | 85.37 *   | 84.65   |

* The method with the best classification accuracy for the same classifier and dataset.