Article

Hyperspectral Rock Classification Method Based on Spatial-Spectral Multidimensional Feature Fusion

1 Institute of Remote Sensing and Earth Sciences, School of Information Science and Technology, Hangzhou Normal University, Hangzhou 311121, China
2 Zhejiang Provincial Key Laboratory of Urban Wetlands and Regional Change, Hangzhou Normal University, Hangzhou 311121, China
* Author to whom correspondence should be addressed.
Minerals 2024, 14(9), 923; https://doi.org/10.3390/min14090923
Submission received: 30 July 2024 / Revised: 5 September 2024 / Accepted: 9 September 2024 / Published: 10 September 2024

Abstract: The issues of the same material with different spectra and the same spectra for different materials pose challenges in hyperspectral rock classification. This paper proposes a multidimensional feature network based on 2-D convolutional neural networks (2-D CNNs) and recurrent neural networks (RNNs) for achieving deep combined extraction and fusion of spatial information, such as the rock shape and texture, with spectral information. Experiments are conducted on a hyperspectral rock image dataset obtained by scanning 81 common igneous and metamorphic rock samples using the HySpex hyperspectral sensor imaging system to validate the effectiveness of the proposed network model. The results show that the model achieved an overall classification accuracy of 97.925% and an average classification accuracy of 97.956% on this dataset, surpassing the performances of existing models in the field of rock classification.

1. Introduction

In recent years, hyperspectral rock image classification has been demonstrated to have significant advantages in geological exploration, such as mineral resource localization, geological structure analysis, and environmental effect assessment [1,2,3,4,5]. High-resolution hyperspectral data, derived from satellites, airborne systems, and laboratory settings, offer distinct advantages in various application scenarios when combined with advanced artificial intelligence algorithms such as machine learning and deep learning [6]. Satellite remote sensing images, characterized by wide coverage and high temporal resolution, have been commonly used to conduct large-scale rock and mineral classifications. For example, multispectral and hyperspectral data provided by satellites such as Landsat 8 and Sentinel-2 have been widely used in geological surveys [7,8]. Researchers have utilized these data and machine learning algorithms such as support vector machines (SVMs) [9] and random forest (RF) models [10] to precisely classify types of surface rocks. Similarly, airborne hyperspectral remote sensing systems, such as the Hyperspectral Mapper (HyMap) and the airborne visible/infrared imaging spectrometer (AVIRIS), can acquire high-spectral-resolution data that accurately reflect the spectral characteristics of objects [11,12]. For example, several studies have achieved high-precision mineral composition classification and spatial distribution analysis by combining convolutional neural networks (CNNs) [13] and long short-term memory (LSTM) networks [14] with geographic information system (GIS) technology.
However, hyperspectral rock imagery acquired using satellites and airborne platforms predominantly emphasizes the breadth and richness of geological remote sensing and falls short of meeting the demand for the precise classification of individual rock samples [15,16]. In contrast to conventional satellite and airborne remote sensing, fine spatial resolution imaging spectroscopy coupled with quantitative analysis of geological materials in the laboratory or via controlled indoor environments is gradually becoming a mainstream research method [17,18]. Laboratory imaging spectrometers are capable of conducting advanced imaging spectroscopy on samples, cuttings, and cores, delivering spectral information that mirrors the surface reflectance spectra of the specimens that are characterized by exceptionally high spatial resolution and lateral continuity. This technology not only facilitates the identification of minerals and their mixtures but also evaluates the spatial distribution and variability of these minerals across the sample surface [19]. Consequently, hyperspectral rock data derived from laboratory platforms can be used to validate and calibrate airborne and satellite hyperspectral imagery, offering a more comprehensive evaluation framework for geological research [20].
Currently, as hyperspectral imaging systems become more portable, research on rock classification using hyperspectral rock sample images obtained in the laboratory or via controlled indoor environments is gaining more attention, and several case studies have been published. For example, Guo et al. [21] found that using high-frequency features after discrete wavelet transform (DWT) [22] can improve the rock discrimination accuracy. They conducted classification experiments on hyperspectral images of six types of rock samples, including dolomite, andesite, and tuff, obtained from a laboratory platform combined with a random forest classifier. Their experiments verified that the DWT can capture signal mutations and reveal subtle differences between different rock spectra, improving the overall classification accuracy by 10%–20% compared to that without discrete wavelet transformation. Galdames et al. [23] constructed a hyperspectral rock dataset for 13 types of rocks, including andesite, granodiorite, and breccia, using a laboratory-based hyperspectral sensor. They applied the mask region-based convolutional neural network (R-CNN) framework [24] to dimensionally reduced data through a fully convolutional network for rock classification. This method performed instance segmentation on each rock, achieving an accuracy of over 95% in classifying the 13 types of rock. Hamedianfar et al. [25] proposed using core scanning in the long-wave infrared wavelength region to map hyperspectral images of five types of rock, namely, quartz, talc, feldspar, chlorite, and quartz-carbonate, with a laboratory platform. Using a deep learning framework with the U-net architecture called ENVINet5 [26], they achieved pixel-wise precise classification with an overall accuracy of 82.74%.
Although current research on hyperspectral rock image classification has achieved some results, there are still many shortcomings. In the classification of hyperspectral rock images obtained using satellites and airborne platforms, traditional machine learning methods only extract shallow spectral features and fail to fully exploit the deep feature values of hyperspectral data, making it difficult to accurately identify mixed and overlapping rock types [27,28,29]. In deep learning methods, such as the multilayer perceptron neural network used by Bahrambeygi et al. [30], the network structure ignores the spatial structure of the input data and only performs linear combinations, making it difficult to extract deep spatial information and resulting in a low accuracy when spectrally classifying mixed rocks. In studies on hyperspectral rock image classification conducted in the laboratory or in controlled indoor environments, the spectral features of different rock and mineral samples exhibit many similarities, leading to the issues of the same spectra for different materials and different spectra for the same material, thus resulting in both misclassification and low classification accuracy. For example, although Guo et al. [21] improved the classification accuracy to some extent using DWT, the large number of high-dimensional features generated via DWT were not effectively selected, resulting in a relatively low overall classification accuracy. To address these issues, researchers have introduced a series of new methods. Zhang et al. [31] combined a 1-D CNN with a 2-D CNN, which enhanced the model's ability to extract rock spectral features and improved the classification accuracy. Galdames et al. [23] used the mask R-CNN framework to conduct instance segmentation of rocks and achieved high accuracy in classifying certain types of rock. Hamedianfar et al. [25] used a deep learning framework with a U-net architecture called ENVINet5 to conduct precise pixel-wise classification, which improved the mineral category classification and captured subtle features. However, these methods still have limitations. For instance, Galdames et al.'s [23] method lacks an explanation of the phenomenon of different spectra for the same material, and Hamedianfar et al.'s [25] method still leaves some pixels unclassified when dealing with certain spectrally mixed categories.
Therefore, in this study, we utilized a push-broom scanning track on a laboratory platform combined with the HySpex hyperspectral sensor imaging system to perform hyperspectral imaging on 81 common igneous and metamorphic rock samples, acquiring hyperspectral rock images and establishing ground truth labels. Because the mineral compositions and spectral curve trends of the igneous and metamorphic rock samples exhibited distinguishing characteristics, the 81 rock samples were initially divided into 28 categories. Through spectral analysis of the rock samples, it was found that the spectral curves of single-point reflections from the rock surface were closely related to the mineral composition of the rock. However, when dealing with complex rock samples with mixed or similar mineral compositions, misclassification often occurs due to overlapping or similar spectral curves. To address this issue, in this study, we demonstrated that the synergistic fusion of deep spatial information extracted using a CNN with spectral features can significantly improve the classification accuracy of hyperspectral rock images.

2. Materials and Methods

2.1. Rock Dataset

In this study, spectral data of rocks were collected in a controlled darkroom laboratory environment using the HySpex shortwave infrared (SWIR)-384 hyperspectral camera imaging system, manufactured by Norsk Elektro Optikk (Oslo, Norway; as shown in Figure 1), together with a FieldSpec3 ASD spectroradiometer. The HySpex SWIR-384 hyperspectral camera system covers a spectral range of 950 to 2500 nm, with 288 spectral bands and a resolution of 5.45 nm. The FieldSpec3 ASD spectroradiometer measures the near-infrared region, with a range of 1001 to 2500 nm and a resolution of 8 nm.
The research team conducted scanning and imaging of 81 freshly exposed rock surfaces, establishing a publicly available hyperspectral rock image dataset [32], which includes 35 metamorphic rocks and 46 igneous rocks (as shown in Figure 2a). The effectiveness of the proposed model was validated on this dataset.
In the preliminary phase of the project, 50 random 5 × 5 pixel ROIs were sampled for each rock, and the average spectra were calculated to generate the spectral curves shown in Figure 2b. Based on the overall spectral similarity and local absorption–reflection characteristics of the rocks, a preliminary classification system of 28 rock categories was established [33]. The names of the rocks included in each category are shown in Figure 2c and Table 1.
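The ROI-averaging step above can be sketched as follows. This is a minimal numpy illustration, not the authors' code; the `cube` layout (rows, cols, bands) and function name are assumptions.

```python
import numpy as np

def mean_roi_spectrum(cube, n_rois=50, roi=5, rng=None):
    """Average the spectra of n_rois randomly placed roi x roi windows.

    cube: (rows, cols, bands) hyperspectral array (hypothetical layout).
    Returns a (bands,) mean spectral curve, as used for Figure 2b.
    """
    rng = np.random.default_rng(rng)
    rows, cols, bands = cube.shape
    acc = np.zeros(bands)
    for _ in range(n_rois):
        r = rng.integers(0, rows - roi + 1)
        c = rng.integers(0, cols - roi + 1)
        acc += cube[r:r + roi, c:c + roi, :].mean(axis=(0, 1))
    return acc / n_rois

# A spatially uniform cube should yield its constant spectrum back.
cube = np.ones((60, 60, 288)) * np.linspace(0.0, 1.0, 288)
curve = mean_roi_spectrum(cube, rng=0)
```

Averaging many small ROIs suppresses pixel-level noise while keeping the rock's characteristic absorption features.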
On this basis, we established ground truth labels for the 28 rock classes (Figure 3a). This facilitated the division of these classes into training, validation, and testing sets for the subsequent supervised classification network models. When generating the ground truth labels for the 28 rock classes, we encountered challenges due to the presence of blurred edges and shadows in the rock samples. To address this issue, a small portion of the data was excluded by trimming the edges of the rock fragments. To achieve this, we utilized image morphological processing techniques and applied a 5 × 5 erosion algorithm to remove the boundary parts of the rock samples (Figure 3b). The colors corresponding to the 28 rock labels and the number of pixel samples are shown in Table 2.
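The 5 × 5 erosion used to trim blurred rock edges can be sketched with a hand-rolled binary erosion; this is an illustrative numpy version (a library routine such as `scipy.ndimage.binary_erosion` would do the same), not the authors' exact pipeline.

```python
import numpy as np

def erode(mask, k=5):
    """Binary erosion with a k x k square structuring element.

    A pixel survives only if its entire k x k neighborhood is foreground,
    which strips the uncertain boundary pixels of each rock fragment.
    """
    pad = k // 2
    padded = np.pad(mask, pad, mode="constant", constant_values=0)
    out = np.ones_like(mask)
    for dr in range(k):
        for dc in range(k):
            out &= padded[dr:dr + mask.shape[0], dc:dc + mask.shape[1]]
    return out

mask = np.zeros((11, 11), dtype=int)
mask[1:10, 1:10] = 1          # a 9 x 9 rock region
eroded = erode(mask, 5)       # 5 x 5 erosion leaves only the 5 x 5 core
```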

2.2. Methods

2.2.1. Related Work

After years of development, significant progress has been made in hyperspectral image classification; however, handling high-dimensional spectral data remains challenging. Deep learning, especially CNNs, has become a focal point in hyperspectral rock image classification. Originally designed for 2-D images, CNNs excel at capturing spatial features [34,35,36] (Figure 4).
The formula for convolution is as follows:
$F_m = f(F_{m-1} \ast W_m + b_m)$
where $f(\cdot)$ is the nonlinear activation function, which enhances the algorithm's ability to process nonlinear data; $F_{m-1}$ is the input feature map of the $(m-1)$th layer; $F_m$ is the output feature map of the $m$th layer; $W_m$ is the convolution filter; and $b_m$ is the bias value for each output feature map.
The main objective of the convolution operation is to extract various features from the input data. While some convolution layers excel at capturing low-level features, such as edges, lines, and contours, the deeper convolution layers progressively extract more complex features, such as textures, based on these fundamental features. In the context of rock samples, the convolution layers are particularly effective at discerning spatial information, aiding in the accurate identification of key attributes such as rock locations, edges, and textures. After passing through the convolution layers, the features become high-dimensional. In this stage, pooling layers play a crucial role. These layers segment the features into different regions and compute the maximum or average value within these regions, thereby generating new, lower-dimensional features. Pooling layers are typically placed between consecutive convolution layers to reduce the volume of data and the parameters and prevent overfitting.
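The convolution-then-pooling pipeline described above can be made concrete with a minimal numpy sketch (single channel, "valid" padding). The edge-detecting kernel and toy image are illustrative, not taken from the paper's network.

```python
import numpy as np

def conv2d(x, w, b=0.0):
    """Valid 2-D convolution (cross-correlation, as in CNNs) of one feature
    map x with kernel w, followed by ReLU: F_m = f(F_{m-1} * W_m + b_m)."""
    kh, kw = w.shape
    H, W = x.shape
    out = np.zeros((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(x[i:i + kh, j:j + kw] * w) + b
    return np.maximum(out, 0.0)      # nonlinear activation f(.)

def max_pool(x, s=2):
    """Non-overlapping s x s max pooling: keeps the strongest response per region."""
    H, W = x.shape
    x = x[: H - H % s, : W - W % s]
    return x.reshape(H // s, s, W // s, s).max(axis=(1, 3))

img = np.zeros((8, 8)); img[:, 4:] = 1.0   # image with a vertical edge
edge_kernel = np.array([[-1.0, 1.0]])      # responds to left-to-right steps
feat = conv2d(img, edge_kernel)            # strong response along the edge
pooled = max_pool(feat)                    # lower-dimensional summary
```

The feature map lights up only where the edge sits, and pooling halves each spatial dimension while preserving that response, mirroring how early layers capture edges and pooling reduces data volume.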
In terms of spectral information extraction, we employed recurrent neural networks (RNNs) to extract the spectral information, combined with spatial data extracted using 2-D CNNs, to establish a spectral-spatial fusion network. The specific RNN variant chosen for this purpose was an enhanced model known as the gated recurrent unit (GRU) [37], the structure of which is shown in Figure 5.
Similar to long short-term memory (LSTM), each GRU unit contains an update gate, reset gate, and candidate hidden state, effectively capturing the long-term dependencies in sequences. Specifically, assuming that there are $h$ hidden units at a given time step $t$, a mini-batch input $X_t \in \mathbb{R}^{n \times d}$ (where $n$ is the number of samples and $d$ is the number of input features), and the hidden state from the previous time step $H_{t-1} \in \mathbb{R}^{n \times h}$, the reset gate $R_t$ and update gate $Z_t$ are calculated as follows:
$R_t = \mathrm{sigmoid}(X_t W_{xr} + H_{t-1} W_{hr} + b_r)$
$Z_t = \mathrm{sigmoid}(X_t W_{xz} + H_{t-1} W_{hz} + b_z)$
The sigmoid activation function transforms the value of each element to the range of 0 to 1; therefore, the values of the reset gate $R_t$ and the update gate $Z_t$ lie within $[0, 1]$. Next, the gated recurrent unit computes a candidate hidden state to aid in the subsequent hidden state computation. This is achieved by performing element-wise multiplication (denoted as $\odot$) between the output of the reset gate at the current time step and the hidden state from the previous time step. Specifically, the formula for computing the candidate hidden state $\tilde{H}_t \in \mathbb{R}^{n \times h}$ at time step $t$ is
$\tilde{H}_t = \tanh(X_t W_{xh} + (R_t \odot H_{t-1}) W_{hh} + b_h)$
The final hidden state then interpolates between the previous state and the candidate under the control of the update gate:
$H_t = Z_t \odot H_{t-1} + (1 - Z_t) \odot \tilde{H}_t$
In this network, each GRU layer's task is to process a portion of the input data and generate the corresponding hidden states. These hidden states encapsulate abstract features relevant to the sequential data, which can then be used in further operations. Previous research has shown that, in addition to CNNs, RNNs excel at handling spectral information in hyperspectral data. RNNs are particularly effective in scenarios where the data exhibit sequence-like characteristics, making them well-suited to processing the spectral information in hyperspectral images as sequential data. Mou et al. [38] previously applied RNNs in hyperspectral image classification tasks, highlighting their significant potential in such classification tasks. The GRU used in this study handles longer sequences and mitigates the vanishing gradient problem. Compared to the three-gate mechanism (forget gate, input gate, and output gate) of LSTM, the GRU has the advantage of being more concise and easier to train on datasets such as that in this study, which contains 288 spectral bands.
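The gate equations above can be implemented directly. The following numpy sketch runs one GRU time step for a mini-batch; weight shapes and initialization are illustrative assumptions, not the trained network's parameters.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gru_step(X, H, p):
    """One GRU time step for a mini-batch X (n, d) and previous state H (n, h),
    following the reset/update/candidate equations in the text."""
    R = sigmoid(X @ p["Wxr"] + H @ p["Whr"] + p["br"])              # reset gate
    Z = sigmoid(X @ p["Wxz"] + H @ p["Whz"] + p["bz"])              # update gate
    H_tilde = np.tanh(X @ p["Wxh"] + (R * H) @ p["Whh"] + p["bh"])  # candidate
    return Z * H + (1.0 - Z) * H_tilde                              # new hidden state

n, d, h = 4, 288, 64          # e.g. 288 spectral bands, 64 hidden units (assumed)
rng = np.random.default_rng(0)
shapes = {"Wxr": (d, h), "Whr": (h, h), "br": (h,),
          "Wxz": (d, h), "Whz": (h, h), "bz": (h,),
          "Wxh": (d, h), "Whh": (h, h), "bh": (h,)}
p = {k: rng.normal(scale=0.1, size=s) for k, s in shapes.items()}
H = np.zeros((n, h))
X = rng.normal(size=(n, d))
H = gru_step(X, H, p)
```

Stacking three such layers, each feeding its hidden states to the next, yields the Tri-GRU used in the proposed spectral branch.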

2.2.2. Proposed Method

In this paper, one of the primary reasons for introducing deep learning algorithms for rock classification is the inherent limitation of human judgment based on the 28 classes of rock spectral curves. Although rocks are typically classified based on the similarities of their spectral curve trends, different types of rocks may have similar compositions and spectral features. Therefore, relying solely on spectral data for classification can easily lead to confusion and low classification accuracy.
To this end, this study used HySpex hyperspectral images as research data, combined with rock sample spectra measured by the ASD spectroradiometer, to verify the spectral curves extracted from the corresponding samples in the images. Due to the potential variations in light intensity caused by light source non-uniformity and ambient light during the ASD spectroradiometer data collection process, a white reference was used to preprocess the measurement data, ultimately forming the rock reflectance spectral curves shown in Figure 6 and Figure 7.
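The white-reference preprocessing mentioned above typically normalizes each raw measurement by a reference panel spectrum. The sketch below shows the standard form of this correction; the optional dark-current term and all names are illustrative assumptions, not the authors' procedure.

```python
import numpy as np

def to_reflectance(raw, white, dark=None):
    """Convert raw counts to reflectance against a white reference.

    raw, white, dark: (bands,) arrays; dark (sensor dark current) defaults
    to zero. The epsilon floor guards against division by zero.
    """
    if dark is None:
        dark = np.zeros_like(raw)
    return (raw - dark) / np.maximum(white - dark, 1e-12)

raw = np.array([120.0, 300.0, 450.0])
white = np.array([600.0, 600.0, 600.0])
refl = to_reflectance(raw, white)
```

Dividing by the white reference removes the band-dependent effects of the light source and ambient illumination, leaving a reflectance curve comparable across measurement sessions.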
As shown in Figure 6a, in this paper, we compare fine-grained granite belonging to category r10 and granite gneiss belonging to category r16 in the initial rock classification system. These two types of rocks have similar compositions but different proportions. However, their spectral trend features are largely consistent, indicating that the similarity of their spectral curves may not be sufficient to distinguish between them. Conversely, Figure 6b compares the andesite belonging to category r15 and the diorite belonging to category r20 in the initial rock classification system. These two types of rock have similar compositions, but their spectral curves are significantly different, illustrating a case of similar composition but different spectra. This further underscores the challenge of relying solely on the spectral features for rock classification.
In addition to the factors mentioned above, another significant influence in pixel-based rock classification models is the variability in the spectral features across pixels on the surface of a rock composed of different materials. Taking potassium feldspar granite (rock type r11) as an example, it is composed of 66% potassium feldspar, 25% quartz, 4% plagioclase, and 5% biotite. As shown in Figure 7, by comparing the scanning results of the HySpex imaging spectrometer with the measurements obtained using an Analytical Spectral Devices, Inc. (ASD) spectrometer, it was found that different locations on the same rock had different spectral features. Specifically, the spectral feature curves of pixel 1 (Figure 7a), pixel 2 (Figure 7b), and pixel 3 (Figure 7c) exhibit significant differences, which also differ from the spectral curve of the entire rock (Figure 7d).
This phenomenon reveals that in the analysis of high-spectral rock images, pixels containing different proportions of the components of the rock may exhibit varying spectral responses. This variability can potentially impact the accuracy of pixel-based rock classification models. Therefore, in the design and training of these models, it is crucial to consider how to handle these pixel-level spectral differences to enhance the accuracy and robustness of the rock classification.
To address the aforementioned issues in existing hyperspectral rock classification research, where relying solely on spectral information leads to the ‘same material with different spectra’ and ‘different materials with the same spectra’ problems, as well as the regional spectral differences caused by rocks being mineral composites, this study proposes a network model capable of extracting both spectral and spatial information from rocks, integrating and learning from these two types of features.
The proposed network model adopts a dual-branch structure in which the CNN and RNN serve as spatial and spectral feature extractors, respectively. These branches are subsequently connected through fully connected layers for the classification task. The specific structural design of the model is illustrated in Figure 8. Regarding the data input, these dual-branch networks require two types of data. One involves obtaining raw hyperspectral image data, performing principal component analysis for dimensionality reduction, and segmenting it into uniformly sized three-dimensional cubic data as the input for spatial feature extraction by the CNN. The other type involves obtaining segmented raw hyperspectral data, extracting central pixel blocks, and subjecting them to appropriate dimensionality reduction. These processed data serve as spectral features and are input into the RNN.
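The fusion stage of the dual-branch design can be sketched as follows: features from the two extractors are concatenated and passed through a fully connected softmax classifier. The feature sizes (128 spatial, 64 spectral) and random weights are placeholders, not the paper's actual dimensions.

```python
import numpy as np

rng = np.random.default_rng(0)
n_classes = 28

# Hypothetical per-pixel features produced by the two branches:
spatial_feat = rng.normal(size=(1, 128))   # from the 2-D CNN branch (PCA patches)
spectral_feat = rng.normal(size=(1, 64))   # from the Tri-GRU branch (pixel spectrum)

# Feature-concatenation fusion followed by a fully connected softmax classifier.
fused = np.concatenate([spatial_feat, spectral_feat], axis=1)   # (1, 192)
W = rng.normal(scale=0.05, size=(192, n_classes))
logits = fused @ W
probs = np.exp(logits - logits.max()) / np.exp(logits - logits.max()).sum()
pred = int(probs.argmax())        # predicted rock class for this pixel
```

Concatenation keeps the two feature spaces intact and lets the classifier learn how much to trust each branch per class, which is the complementarity the text describes.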
In the architecture design of this network model, the upper branch is primarily composed of a CNN. This branch includes a series of 2-D convolutional layers tailored for spatial dimension convolution operations on the input data. To ensure these convolutional layers effectively extract spatial features, a 2-D CNN spatial feature extraction branch based on PCA dimensionality reduction was designed. This branch focuses on extracting the spatial details related to rock edges, textures, and positions. By applying PCA to reduce the dimensionality of the input data in the spectral dimension, the model not only reduces computational complexity but also removes redundant information, thereby enhancing model performance. These convolutional layers vary in parameters and kernel sizes and include batch normalization and activation functions. Notably, a significant departure from traditional 2-D convolutional neural networks is the omission of pooling layers, allowing for a higher spatial resolution to be maintained during feature propagation between the convolutional layers. In dense prediction tasks, this approach not only reduces computational complexity but also proves advantageous in preserving more detailed information by avoiding spatial dimension reduction through pooling.
The lower branch is primarily composed of an RNN, specifically consisting of gated recurrent unit layers. When spectral data are fed into this branch, they are initially processed by a 1-D CNN at the front end, which extracts the central pixel blocks and reduces the dimensionality of the input data blocks. Operating on the spectral dimension, the 1-D CNN captures the spectral features and patterns in the input data. Because recurrent neural networks may encounter challenges such as gradient vanishing or exploding, especially when dealing with extended sequences, their ability to capture long-range dependencies is limited; gated units were therefore adopted. The core of this network structure consists of three independent GRU layers, collectively forming a Tri-GRU, which is crucial for handling the temporal sequence features of the input data. In this context, the spectral dimension features of the hyperspectral images can be analogized to temporal sequence features. Each GRU layer includes a unidirectional GRU unit that is adept at capturing sequential patterns in the input data. By combining and stacking GRU units across different layers, the network effectively learns the patterns and features in the input data at different time scales. Recurrent neural networks have proven particularly advantageous in sequence modeling and the processing of time series data, aiding the model's understanding of temporal relationships within the data.
The pixel-by-pixel classification model, composed of the two branches described above, effectively utilizes and integrates both spatial and spectral features of hyperspectral rock data. Each rock’s spatial features (such as the shape, size, and surface texture) are inconsistent. The model avoids heavily relying on these specific spatial features during training while leveraging the spatial information from neighboring pixels to correct the misclassifications of spectral information by the GRU on individual pixels. Ultimately, the fusion strategy of feature concatenation under the complementary nature of these two types of information results in a more consistent and accurate overall classification.

3. Experiments and Discussion

3.1. Experiment

3.1.1. Experimental Environment and Evaluation Metric

The experiments in this study were conducted on a platform consisting of an Intel(R) Xeon(R) Platinum 8255C CPU, an NVIDIA GeForce RTX 2080Ti GPU, and the Ubuntu 20.04 operating system. All experiments were performed using PyTorch 1.11.0, Python 3.8, and CUDA 11.3. The specific Python modules and libraries used include numpy 0.2.4, scikit-learn 1.3.2, and torch 1.11.0+cu113.
The focus of this study was the classification of hyperspectral images. Therefore, common metrics in this field, including the overall accuracy (OA), average accuracy (AA), and kappa coefficient, were employed to evaluate the quality of the classification results. The OA represents the proportion of the correctly classified samples to the total number of samples, and it is calculated as follows:
$OA = \dfrac{\sum_{i=1}^{n} h_{ii}}{\sum_{i=1}^{n} N_i}$
where $n$ is the number of rock categories in the image, $N_i$ is the number of pixels in the $i$th rock category, and $h_{ii}$ is the number of pixels correctly classified in the $i$th category. The AA is the mean of the per-category accuracies, i.e., the proportion of correctly classified pixels within each category, averaged over the number of categories. It is calculated as follows:
$AA = \dfrac{1}{n}\sum_{i=1}^{n} \dfrac{h_{ii}}{N_i}$
The kappa coefficient is an indicator of the consistency of the model and is used to measure the effectiveness of the classification. It is calculated as follows:
$Kappa = \dfrac{N\sum_{i=1}^{r} x_{ii} - \sum_{i=1}^{r} x_{i+}\,x_{+i}}{N^2 - \sum_{i=1}^{r} x_{i+}\,x_{+i}}$
where $N$ is the total number of samples, $r$ is the number of rows in the confusion matrix, $x_{i+}$ and $x_{+i}$ are the sums of the elements in the $i$th row and $i$th column of the confusion matrix, respectively, and $x_{ii}$ is the value in the $i$th row and $i$th column of the confusion matrix.
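All three metrics can be computed from a single confusion matrix. The sketch below is a minimal numpy version (equivalently, scikit-learn's `accuracy_score` and `cohen_kappa_score` could be used); the toy matrix is for illustration only.

```python
import numpy as np

def metrics(cm):
    """OA, AA, and kappa from a confusion matrix cm (rows = truth, cols = prediction)."""
    N = cm.sum()
    oa = np.trace(cm) / N                          # overall accuracy
    aa = np.mean(np.diag(cm) / cm.sum(axis=1))     # mean per-class accuracy
    pe = np.sum(cm.sum(axis=1) * cm.sum(axis=0)) / N**2   # chance agreement
    kappa = (oa - pe) / (1 - pe)                   # same as the formula above
    return oa, aa, kappa

cm = np.array([[45, 5],
               [10, 40]])
oa, aa, kappa = metrics(cm)
```

Note that $(N\sum x_{ii} - \sum x_{i+}x_{+i}) / (N^2 - \sum x_{i+}x_{+i})$ reduces to $(OA - p_e)/(1 - p_e)$ after dividing numerator and denominator by $N^2$, which is what the code computes.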

3.1.2. Selection of the Network Backbone

Three main network architectures can be chosen: the 2-D CNN, the GRU, or the 2-D CNN-GRU fusion network. Compared to the 2-D CNN, the GRU has advantages in the spectral dimension of the hyperspectral data, but it cannot compensate for the spatial information that it ignores, which is important for some datasets. The 2-D CNN-GRU fusion network considers both the overall classification performance and the efficiency. Table 3 presents the classification performances of these three network structures on the hyperspectral rock dataset.
Table 3 shows that both the OA and AA were higher when trained using the 2-D CNN-GRU backbone network compared to training using the 2-D CNN or GRU individually. This improvement is attributed to their respective strengths in handling different data characteristics, and using them in combination compensates for the shortcomings of each and enhances the overall performance. The 2-D CNN excels in extracting spatial features from hyperspectral images by capturing the relationships and texture information between the pixels, while the GRU captures the temporal information between bands to better understand the spectral evolution.

3.1.3. Parameter Settings

The model weights were initialized using He normal distribution, with the biases set to zero. For layers that do not include biases (such as the batch normalization layers), bias initialization was not necessary. The optimizer used was Adam, with an initial learning rate of 0.001, which was dynamically adjusted. During model training, L2 regularization, early stopping, and cross-validation techniques were employed to prevent overfitting. Specifically, L2 regularization was implemented by setting the weight decay parameter of the optimizer to 0.01. Early stopping involved monitoring the accuracy of the validation set during training, and training was halted when the accuracy ceased to improve over several epochs. In this study, the optimal number of training epochs was determined to be 50.
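The early-stopping rule described above can be sketched as a framework-agnostic training loop. The `step` callback and patience value are illustrative assumptions; the paper only states that training halted once validation accuracy stopped improving over several epochs, with 50 epochs found optimal.

```python
def train_with_early_stopping(step, max_epochs=50, patience=5):
    """Training-loop skeleton with early stopping on validation accuracy.

    step(epoch) is assumed to run one training epoch and return the
    validation accuracy; the loop halts once accuracy has not improved
    for `patience` consecutive epochs.
    """
    best_acc, best_epoch = -1.0, 0
    for epoch in range(max_epochs):
        val_acc = step(epoch)
        if val_acc > best_acc:
            best_acc, best_epoch = val_acc, epoch
        elif epoch - best_epoch >= patience:
            break                 # accuracy plateaued: stop training
    return best_acc, best_epoch

# A toy accuracy curve that rises and then plateaus at 0.9 from epoch 10.
curve = lambda e: min(0.5 + 0.04 * e, 0.9)
best_acc, best_epoch = train_with_early_stopping(curve)
```

In PyTorch, the L2 regularization mentioned above corresponds to `torch.optim.Adam(params, lr=0.001, weight_decay=0.01)`.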
For dataset processing, to make more effective use of the data, k-fold cross-validation was employed to randomly split the training and testing image samples, thereby validating the model's performance across multiple subsets. Specifically, 5-fold cross-validation was applied across the entire hyperspectral image. In each fold, 20% of the pixels were randomly selected as the training set for model training, and the remaining pixels were used as the testing set. This process was repeated for all five folds. Additionally, within the testing set, 10% of the samples were selected as a validation set for parameter optimization.
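The pixel-level split described above can be sketched as follows; this is an assumed reading of the protocol (per fold: 20% train, 10% of all pixels validation, 70% test), not the authors' exact code.

```python
import numpy as np

def five_fold_splits(n_pixels, seed=0):
    """Per fold: the fold itself (20% of pixels) trains the model; of the
    remaining 80%, 10% of all pixels go to validation and the rest to testing."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(n_pixels)
    folds = np.array_split(idx, 5)
    n_val = n_pixels // 10
    splits = []
    for k in range(5):
        train = folds[k]
        rest = np.concatenate([folds[j] for j in range(5) if j != k])
        splits.append((train, rest[:n_val], rest[n_val:]))
    return splits

splits = five_fold_splits(1000)
train, val, test = splits[0]     # 200 / 100 / 700 pixels per fold
```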
One branch of the convolutional network in the model received fixed-size hyperspectral spatial neighborhoods as the input data. The amount of data the model received depended on the size of these patches, which may significantly impact the final classification performance. Therefore, in this study, experiments were conducted on the dataset with different batch sizes and three patch sizes (5, 7, and 9) to determine the optimal parameter size of the model. The results are shown in Table 4.
As shown in Table 4, the model achieved the highest classification accuracy when the dataset batch size was 128 and the adjacent pixel block size was 7. Therefore, in the subsequent experiment, all of the models were configured using these dataset size parameters.
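Extracting the fixed-size neighborhood around each pixel, as used by the spatial branch, can be sketched as below; the zero-padding choice for border pixels is an assumption, and `patch=7` reflects the best size found in Table 4.

```python
import numpy as np

def extract_patch(cube, r, c, patch=7):
    """Extract the patch x patch spatial neighborhood centered on pixel (r, c).

    cube: (rows, cols, bands). Edges are zero-padded (an illustrative choice)
    so every pixel has a full neighborhood.
    """
    pad = patch // 2
    padded = np.pad(cube, ((pad, pad), (pad, pad), (0, 0)))
    return padded[r:r + patch, c:c + patch, :]

cube = np.arange(5 * 5 * 3, dtype=float).reshape(5, 5, 3)
corner = extract_patch(cube, 0, 0)    # corner pixel: partly padded neighborhood
center = extract_patch(cube, 2, 2)    # interior pixel: fully real neighborhood
```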

3.1.4. Analysis of the Impact of PCA-Preserved Components

In this study, PCA dimensionality reduction was applied before feeding the data into the spatial feature extraction branch of the model. PCA significantly reduces the number of spectral features while preserving the primary spatial features in the hyperspectral data. Since the convolution operations of the 2-D CNN are influenced by the spectral dimension of the input pixel blocks, this could impact the computational complexity and classification performance. Therefore, based on the aforementioned parameters, experiments were conducted with different dimensions of input data for this branch (4, 16, 32, and the original 288 dimensions without reduction) to determine the optimal number of components to retain after PCA dimensionality reduction.
As shown in Table 5, retaining too few or too many components after PCA dimensionality reduction adversely affects the model’s final classification performance, and the training time of the model decreases as the number of retained components is reduced. This indicates that the convolution operations of the 2-D CNN are influenced by the spectral dimension of the input pixel blocks, requiring a trade-off between computational complexity and classification accuracy. Therefore, in this study, the spectral dimension of the training data input into this branch was ultimately reduced to 16, achieving the optimal balance.
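The PCA reduction applied before the spatial branch can be sketched via the SVD of the centered pixel matrix; this is a minimal numpy version (equivalently, `sklearn.decomposition.PCA`), with the cube size here chosen only for illustration.

```python
import numpy as np

def pca_reduce(cube, k=16):
    """Reduce the spectral dimension of a (rows, cols, bands) cube to the
    first k principal components, as done before the 2-D CNN branch."""
    rows, cols, bands = cube.shape
    X = cube.reshape(-1, bands)
    X = X - X.mean(axis=0)
    # Right singular vectors of the centered data are the principal axes.
    _, _, Vt = np.linalg.svd(X, full_matrices=False)
    return (X @ Vt[:k].T).reshape(rows, cols, k)

rng = np.random.default_rng(0)
cube = rng.normal(size=(10, 10, 288))
reduced = pca_reduce(cube, k=16)     # spectral dimension 288 -> 16
```

With k = 16, the 288-band input collapses to 16 components per pixel, giving the trade-off between computational cost and retained information that Table 5 quantifies.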

3.2. Results and Discussion

To evaluate the effectiveness of the rock classification algorithm developed in this study, we compared the developed model with existing models that are capable of classifying hyperspectral data, including Resnet-18 [39], 3-D CNN [40], Hamida [41], HybridSN [42], and CAE-SVM [43]. As previously described, we used a 5-fold cross-validation method to partition the data required for the training, validation, and testing of each model. In each fold, 20% of the data were allocated for training, and the remaining 80% of the data were randomly sampled to form a validation set, which included 10% of the data, leaving 70% of the data for testing. This experiment was repeated five times, and the results of all of the runs were averaged. The results are presented in Table 6.
As can be seen from Table 6, the 2-D CNN-GRU model outperforms the other models in terms of the OA and AA. Additionally, its kappa coefficient exceeds those of the other models, demonstrating the suitability and superiority of this model in the context of hyperspectral rock datasets.
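For reference, the OA, AA, and kappa values of the kind reported in Table 6 can be computed from a confusion matrix as in this minimal sketch (the small 3-class matrix is synthetic, for illustration only):

```python
import numpy as np

def classification_metrics(conf):
    """OA, AA, and Cohen's kappa from a confusion matrix
    (rows = true class, columns = predicted class)."""
    conf = np.asarray(conf, dtype=float)
    total = conf.sum()
    oa = np.trace(conf) / total                    # overall accuracy
    per_class = np.diag(conf) / conf.sum(axis=1)   # per-class accuracy
    aa = per_class.mean()                          # average accuracy
    # Expected chance agreement from row/column marginals
    pe = (conf.sum(axis=0) * conf.sum(axis=1)).sum() / total**2
    kappa = (oa - pe) / (1 - pe)                   # chance-corrected agreement
    return oa, aa, kappa

conf = np.array([[50, 2, 1],
                 [3, 40, 2],
                 [1, 1, 45]])
oa, aa, kappa = classification_metrics(conf)
print(round(oa, 3), round(aa, 3), round(kappa, 3))
```

OA weights every pixel equally, AA weights every class equally, and kappa corrects OA for agreement expected by chance, which is why all three are reported together.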
Figure 9 shows representative classification prediction maps from one fold for each model, with classification accuracies close to the respective averages. The maps display varying accuracies for the different rock types across the models; each model has specific strengths and weaknesses in classifying the rock types. Notably, the maps produced by the 3-D CNN and by the model developed in this study are smoother than those of the other models and contain fewer misclassified pixels. The other four models exhibit noticeable pixel misclassifications due to their weaker feature extraction capabilities.
Figure 10 presents the results of the comparison of the overall accuracies (OAs) of each classification model for different training set proportions. In Figure 10, the vertical axis represents the OA, and the horizontal axis represents the proportion of the training sample. It can be seen from Figure 10 that the OAs of the model developed in this study are higher than those of the other methods for five different proportions: 1%, 5%, 10%, 15%, and 20%. This indicates that our model is superior for both small and large sample classifications.
Table 7 presents detailed classification results for each rock type. All six models show wider error ranges and lower accuracies in identifying rock types r27 and r28, which may be due to poor spectral consistency within the pixel blocks of these two types, leading to varying accuracies across the five-fold dataset tests. Rock types r21 and r22 likewise have lower classification accuracies for every model. Additionally, in one fold, the model developed in this study achieved a lower accuracy for rock type r3, possibly because the random sampling produced a poorly representative training subset, inflating the accuracy error in the final assessment; the other five models identify r3 with higher accuracy. Nevertheless, the proposed model performs excellently on most rock types. For example, rock type r5 contains only one rock sample, and the other five models achieve lower accuracies because they struggle with such sparsely sampled categories, whereas our model retains a clear advantage and high classification accuracy. Conversely, rock type r26 comprises six rock samples with highly similar spectral curves, yet our model still achieves high accuracy. This demonstrates that the proposed model effectively classifies both categories containing several rock samples and categories with few samples.

4. Conclusions

This study addresses the typical issues of ‘same material, different spectra’ and ‘different materials, same spectrum’ by proposing a dual-branch multidimensional feature network that combines a CNN and an RNN, based on the integration of spatial and spectral information from hyperspectral image data. One branch, a 2-D CNN preceded by a PCA dimensionality reduction module, extracts the spatial features of the rocks, while the other branch, built primarily on a GRU network, extracts their spectral features. The features from both branches are then fused for classification, effectively overcoming the insufficient feature utilization that arises when rocks are classified from spectral features alone. Furthermore, this hyperspectral rock classification research based on laboratory platform data further validates the effectiveness of the preliminary rock classification system, achieving an overall classification accuracy of 97.925% and an average classification accuracy of 97.956% on a dataset of 81 igneous and metamorphic rock samples in 28 classes, significantly outperforming the other classification models.
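A minimal PyTorch sketch of the dual-branch idea summarized above: a 2-D CNN over a PCA-reduced spatial patch and a GRU over the full spectral sequence, with the two feature vectors concatenated before classification. All layer widths and kernel sizes here are assumptions for illustration, not the paper's exact architecture:

```python
import torch
import torch.nn as nn

class DualBranchNet(nn.Module):
    """Illustrative spatial-spectral fusion network (hypothetical sizes)."""
    def __init__(self, n_pca=16, n_bands=288, n_classes=28):
        super().__init__()
        # Spatial branch: 2-D CNN over a PCA-reduced patch (n_pca channels)
        self.cnn = nn.Sequential(
            nn.Conv2d(n_pca, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten())        # -> (N, 64)
        # Spectral branch: GRU reads the 288-band spectrum as a sequence
        self.gru = nn.GRU(input_size=1, hidden_size=64, batch_first=True)
        self.fc = nn.Linear(64 + 64, n_classes)           # fused classifier

    def forward(self, patch_pca, spectrum):
        spatial = self.cnn(patch_pca)                     # (N, 64)
        _, h = self.gru(spectrum.unsqueeze(-1))           # spectrum: (N, 288)
        spectral = h.squeeze(0)                           # (N, 64)
        return self.fc(torch.cat([spatial, spectral], dim=1))

model = DualBranchNet()
patches = torch.randn(4, 16, 7, 7)   # PCA-reduced 7x7 pixel neighbourhoods
spectra = torch.randn(4, 288)        # full 288-band spectra
logits = model(patches, spectra)
print(logits.shape)                  # torch.Size([4, 28])
```

The 7 × 7 patch and 16 PCA components match the settings selected in Tables 4 and 5; the fusion itself is a plain concatenation of the two 64-dimensional feature vectors.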

Author Contributions

Conceptualization, X.W. and S.X.; methodology and software, S.C.; validation, X.W.; formal analysis, W.W.; original draft, S.C. and X.W.; review and editing, W.W. and S.X.; visualization, S.X. All authors edited the paper. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the National Natural Science Foundation of China (No. 61701153 and No. 41402304) and Zhejiang Provincial Natural Science Foundation of China (LQ13D020002).

Data Availability Statement

Dataset details and downloads are available at https://uwrl.hznu.edu.cn/c/2023-02-24/2805144.shtml (accessed on 27 August 2024).

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Jia, J.; Wang, Y.; Chen, J.; Guo, R.; Shu, R.; Wang, J. Status and application of advanced airborne hyperspectral imaging technology: A review. Infrared Phys. Technol. 2020, 104, 103115. [Google Scholar] [CrossRef]
  2. Bedini, E. The use of hyperspectral remote sensing for mineral exploration: A review. J. Hyperspectral Remote Sens. 2017, 7, 189–211. [Google Scholar] [CrossRef]
  3. Krupnik, D.; Khan, S. Close-range, ground-based hyperspectral imaging for mining applications at various scales: Review and case studies. Earth-Sci. Rev. 2019, 198, 102952. [Google Scholar] [CrossRef]
  4. Tripathi, P.; Garg, R.D. Potential of DESIS and PRISMA hyperspectral remote sensing data in rock classification and mineral identification: a case study for Banswara in Rajasthan, India. Environ. Monit. Assess. 2023, 195, 575. [Google Scholar] [CrossRef]
  5. Monteiro, S.T.; Murphy, R.J.; Ramos, F.; Nieto, J. Applying boosting for hyperspectral classification of ore-bearing rocks. In Proceedings of the 2009 IEEE International Workshop on Machine Learning for Signal Processing, Grenoble, France, 1–4 September 2009; pp. 1–6. [Google Scholar]
  6. Kokaly, R.F.; Graham, G.E.; Hoefen, T.M.; Kelley, K.D.; Johnson, M.R.; Hubbard, B.E.; Buchhorn, M.; Prakash, A. Multiscale hyperspectral imaging of the Orange Hill Porphyry Copper Deposit, Alaska, USA, with laboratory-, field-, and aircraft-based imaging spectrometers. Proc. Explor. 2017, 17, 923–943. [Google Scholar]
  7. Wei, J.; Liu, X.; Liu, J. Integrating Textural and Spectral Features to Classify Silicate-Bearing Rocks Using Landsat 8 Data. Appl. Sci. 2016, 6, 283. [Google Scholar] [CrossRef]
  8. Dkhala, B.; Mezned, N.; Gomez, C.; Abdeljaouad, S. Hyperspectral field spectroscopy and SENTINEL-2 Multispectral data for minerals with high pollution potential content estimation and mapping. Sci. Total Environ. 2020, 740, 140160. [Google Scholar] [CrossRef] [PubMed]
  9. Kovacevic, M.; Bajat, B.; Trivic, B.; Pavlovic, R. Geological units classification of multispectral images by using support vector machines. In Proceedings of the 2009 International Conference on Intelligent Networking and Collaborative Systems, Barcelona, Spain, 4–6 November 2009; pp. 267–272. [Google Scholar] [CrossRef]
  10. Lobo, A.; Garcia, E.; Barroso, G.; Martí, D.; Fernandez-Turiel, J.-L.; Ibáñez-Insa, J. Machine Learning for Mineral Identification and Ore Estimation from Hyperspectral Imagery in Tin–Tungsten Deposits: Simulation under Indoor Conditions. Remote Sens. 2021, 13, 3258. [Google Scholar] [CrossRef]
  11. Buzzi, J.; Riaza, A.; García-Meléndez, E.; Weide, S.; Bachmann, M. Mapping Changes in a Recovering Mine Site with Hyperspectral Airborne HyMap Imagery (Sotiel, SW Spain). Minerals 2014, 4, 313–329. [Google Scholar] [CrossRef]
  12. Tripathi, M.K.; Govil, H. Evaluation of AVIRIS-NG hyperspectral images for mineral identification and mapping. Heliyon 2019, 5, e02931. [Google Scholar] [CrossRef]
  13. Hussain, M.; Bird, J.J.; Faria, D.R. A study on CNN transfer learning for image classification. In Proceedings of the Advances in Computational Intelligence Systems: Contributions Presented at the 18th UK Workshop on Computational Intelligence, Nottingham, UK, 5–7 September 2018; pp. 191–202. [Google Scholar]
  14. Hochreiter, S.; Schmidhuber, J. Long short-term memory. Neural Comput. 1997, 9, 1735–1780. [Google Scholar] [CrossRef] [PubMed]
  15. López, A.J.; Ramil, A.; Pozo-Antonio, J.S.; Fiorucci, M.P.; Rivas, T. Automatic Identification of Rock-Forming Minerals in Granite Using Laboratory Scale Hyperspectral Reflectance Imaging and Artificial Neural Networks. J. Nondestruct. Eval. 2017, 36, 52. [Google Scholar] [CrossRef]
  16. Xie, B.; Wu, L.; Mao, W.; Zhou, S.; Liu, S. An Open Integrated Rock Spectral Library (RockSL) for a Global Sharing and Matching Service. Minerals 2022, 12, 118. [Google Scholar] [CrossRef]
  17. Cardoso-Fernandes, J.; Silva, J.; Dias, F.; Lima, A.; Teodoro, A.C.; Barrès, O.; Cauzid, J.; Perrotta, M.; Roda-Robles, E.; Ribeiro, M.A. Tools for Remote Exploration: A Lithium (Li) Dedicated Spectral Library of the Fregeneda–Almendra Aplite–Pegmatite Field. Data 2021, 6, 33. [Google Scholar] [CrossRef]
  18. Schneider, S.; Murphy, R.J.; Monteiro, S.T.; Nettleton, E. On the development of a hyperspectral library for autonomous mining systems. In Proceedings of the Australasian Conference on Robotics and Automation, Sydney, Australia, 2–4 December 2009. [Google Scholar]
  19. Okyay, Ü.; Khan, S.; Lakshmikantha, M.; Sarmiento, S. Ground-Based Hyperspectral Image Analysis of the Lower Mississippian (Osagean) Reeds Spring Formation Rocks in Southwestern Missouri. Remote Sens. 2016, 8, 1018. [Google Scholar] [CrossRef]
  20. Douglas, A.; Kereszturi, G.; Schaefer, L.N.; Kennedy, B. Rock alteration mapping in and around fossil shallow intrusions at Mt. Ruapehu New Zealand with laboratory and aerial hyperspectral imaging. J. Volcanol. Geotherm. Res. 2022, 432, 107700. [Google Scholar] [CrossRef]
  21. Guo, S.; Jiang, Q. Improving Rock Classification with 1D Discrete Wavelet Transform Based on Laboratory Reflectance Spectra and Gaofen-5 Hyperspectral Data. Remote Sens. 2023, 15, 5334. [Google Scholar] [CrossRef]
  22. Gendrin, A.; Langevin, Y.; Bibring, J.P.; Forni, O. A new method to investigate hyperspectral image cubes: An application of the wavelet transform. J. Geophys. Res. Planets 2006, 111. [Google Scholar] [CrossRef]
  23. Galdames, F.J.; Perez, C.A.; Estévez, P.A.; Adams, M. Rock lithological instance classification by hyperspectral images using dimensionality reduction and deep learning. Chemom. Intell. Lab. Syst. 2022, 224, 104538. [Google Scholar] [CrossRef]
  24. He, K.; Gkioxari, G.; Dollár, P.; Girshick, R. Mask r-cnn. In Proceedings of the IEEE International Conference on Computer Vision, Venice, Italy, 22 October 2017; pp. 2961–2969. [Google Scholar]
  25. Hamedianfar, A.; Laakso, K.; Middleton, M.; Törmänen, T.; Köykkä, J.; Torppa, J. Leveraging High-Resolution Long-Wave Infrared Hyperspectral Laboratory Imaging Data for Mineral Identification Using Machine Learning Methods. Remote Sens. 2023, 15, 4806. [Google Scholar] [CrossRef]
  26. Abdolmaleki, M.; Consens, M.; Esmaeili, K. Ore-Waste Discrimination Using Supervised and Unsupervised Classification of Hyperspectral Images. Remote Sens. 2022, 14, 6386. [Google Scholar] [CrossRef]
  27. Fang, Y.; Xiao, Y.; Liang, S.; Ji, Y.; Chen, H. Lithological classification by PCA-QPSO-LSSVM method with thermal infrared hyper-spectral data. J. Appl. Remote Sens. 2022, 16, 044515. [Google Scholar] [CrossRef]
  28. Xu, Y.; Ma, H.; Peng, S. Study on identification of altered rock in hyperspectral imagery using spectrum of field object. Ore Geol. Rev. 2014, 56, 584–595. [Google Scholar] [CrossRef]
  29. Ghezelbash, R.; Maghsoudi, A.; Shamekhi, M.; Pradhan, B.; Daviran, M. Genetic algorithm to optimize the SVM and K-means algorithms for mapping of mineral prospectivity. Neural Comput. Appl. 2022, 35, 719–733. [Google Scholar] [CrossRef]
  30. Bahrambeygi, B.; Moeinzadeh, H. Comparison of support vector machine and neutral network classification method in hyperspectral mapping of ophiolite mélanges—A case study of east of Iran. Egypt. J. Remote Sens. Space Sci. 2017, 20, 1–10. [Google Scholar] [CrossRef]
  31. Zhang, C.; Yi, M.; Ye, F.; Xu, Q.; Li, X.; Gan, Q. Application and Evaluation of Deep Neural Networks for Airborne Hyperspectral Remote Sensing Mineral Mapping: A Case Study of the Baiyanghe Uranium Deposit in Northwestern Xinjiang, China. Remote Sens. 2022, 14, 5122. [Google Scholar] [CrossRef]
  32. Miao, Y.; Wu, W.-Y.; Hu, C.-H.; Xu, L.-X.; Fu, X.-H.; Lang, X.-Y.; He, B.-W.; Qian, J.-F. Rock and mineral image dataset based on HySpex hyperspectral imaging system. J. Hangzhou Norm. Univ. (Nat. Sci. Ed.) 2023, 22, 203–210, (In Chinese with English Abstract). [Google Scholar] [CrossRef]
  33. Hu, C.-H.; Wu, W.-Y.; Miao, Y.; Xu, L.-X.; Fu, X.-H.; Lang, X.-Y.; He, B.-W.; Qian, J.-F. Study on Hyperspectral Rock Classification Based on Initial Rock Classification System. Spectrosc. Spectr. Anal. 2024, 44, 784–792, (In Chinese with English Abstract). [Google Scholar]
  34. Li, Y.; Zhang, Y.; Huang, X.; Ma, J. Learning Source-Invariant Deep Hashing Convolutional Neural Networks for Cross-Source Remote Sensing Image Retrieval. IEEE Trans. Geosci. Remote Sens. 2018, 56, 6521–6536. [Google Scholar] [CrossRef]
  35. Makantasis, K.; Karantzalos, K.; Doulamis, A.; Doulamis, N. Deep supervised learning for hyperspectral data classification through convolutional neural networks. In Proceedings of the 2015 IEEE International Geoscience and Remote Sensing Symposium (IGARSS), Milan, Italy, 26–31 July 2015; pp. 4959–4962. [Google Scholar]
  36. Ma, X.; Dai, Z.; He, Z.; Ma, J.; Wang, Y.; Wang, Y. Learning Traffic as Images: A Deep Convolutional Neural Network for Large-Scale Transportation Network Speed Prediction. Sensors 2017, 17, 818. [Google Scholar] [CrossRef]
  37. Chung, J.; Gulcehre, C.; Cho, K.; Bengio, Y. Empirical evaluation of gated recurrent neural networks on sequence modeling. arXiv 2014, arXiv:1412.3555. [Google Scholar]
  38. Mou, L.; Ghamisi, P.; Zhu, X.X. Deep Recurrent Neural Networks for Hyperspectral Image Classification. IEEE Trans. Geosci. Remote Sens. 2017, 55, 3639–3655. [Google Scholar] [CrossRef]
  39. He, K.; Zhang, X.; Ren, S.; Sun, J. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016; pp. 770–778. [Google Scholar]
  40. Li, Y.; Zhang, H.; Shen, Q. Spectral–Spatial Classification of Hyperspectral Imagery with 3D Convolutional Neural Network. Remote Sens. 2017, 9, 67. [Google Scholar] [CrossRef]
  41. Ben Hamida, A.; Benoit, A.; Lambert, P.; Ben Amar, C. 3-D Deep Learning Approach for Remote Sensing Image Classification. IEEE Trans. Geosci. Remote Sens. 2018, 56, 4420–4434. [Google Scholar] [CrossRef]
  42. Roy, S.K.; Krishna, G.; Dubey, S.R.; Chaudhuri, B.B. HybridSN: Exploring 3-D–2-D CNN Feature Hierarchy for Hyperspectral Image Classification. IEEE Geosci. Remote Sens. Lett. 2020, 17, 277–281. [Google Scholar] [CrossRef]
  43. Deng, Y.; Deng, Y. A Method of SAR Image Automatic Target Recognition Based on Convolution Auto-Encode and Support Vector Machine. Remote Sens. 2022, 14, 5559. [Google Scholar] [CrossRef]
Figure 1. HySpex hyperspectral sensor.
Figure 2. (a) The 81 rock samples, (b) spectral curves of the 28 types of rocks, and (c) rock names corresponding to the 28 types of rocks.
Figure 3. (a) Ground truth for the 28 rock classes, and (b) ground truth after morphological processing.
Figure 4. Schematic diagram of the traditional 2-D CNN structure.
Figure 5. Schematic diagram of the internal structure of the GRU.
Figure 6. Comparison of ASD spectral characteristics of different rocks. (a) Aplite granite and granite gneiss; (b) andesite and diorite.
Figure 7. Image data results of potassium feldspar granite samples.
Figure 7. Image data results of potassium feldspar granite samples.
Figure 8. Proposed network structure.
Figure 8. Proposed network structure.
Figure 9. Classification effects of different algorithms on the rock dataset: (a) Resnet-18, (b) 3-D CNN, (c) Hamida, (d) HybridSN, (e) CAE-SVM, and (f) our model.
Figure 9. Classification effects of different algorithms on the rock dataset: (a) Resnet-18, (b) 3-D CNN, (c) Hamida, (d) HybridSN, (e) CAE-SVM, and (f) our model.
Figure 10. Comparison of OAs of all methods for different proportions of the training samples.
Figure 10. Comparison of OAs of all methods for different proportions of the training samples.
Table 1. Names of rocks included in the 28 types of rocks.
Class Name | Rock Name
r1 | komatiite, chlorite schist
r2 | garnet granulite, amphibolite, gabbro pegmatite, eclogite, amphibole schist, plagioclase amphibole schist
r3 | finely crystalline marble, mesocrystalline marble, red marble
r4 | intrusive carbonate
r5 | garnet skarn
r6 | epidote skarn, garnet epidote skarn, phyllite
r7 | striped migmatite, augen migmatite
r8 | pegmatite, biotite hornfels, greisen, mylonite, sillimanite schist
r9 | kyanite schist
r10 | granite, fine-grained granite, monzonitic granite, porphyritic granite, felsite, alaskite
r11 | k-feldspar granite, quartzite
r12 | anorthosite, graphic granite, rhyolite, pseudoleucite phonolite
r13 | lithophysa rhyolite, migmatitic granite
r14 | trachyandesite
r15 | andesite, trachyte, syenite, pitchstone
r16 | quartz diorite, orthoclase porphyry, granitic gneiss
r17 | biotite gneiss, ptygmatite
r18 | aplite
r19 | grayish white slate, rutile schist, muscovite quartz schist, staurolite schist
r20 | diorite, granodiorite, nepheline-syenite, nosean phonolite
r21 | peridotite, diabase, layered magnet quartzite
r22 | amphibole eclogite, leptite
r23 | diorite porphyrite, ijolite, lamprophyre
r24 | pyroxene quartz orthoclase porphyry, perlite
r25 | pyroxenite, gabbro, amygdaloidal basalt
r26 | kimberlite, basalt, vesicular basalt, andalusite hornfels, cordierite hornfels, serpentine
r27 | lapilli (slag), black slate
r28 | volcanic lava, obsidian, pumice
Table 2. The 28 types of rock labels corresponding to colors and number of pixel samples.
Class | Samples | Class | Samples | Class | Samples | Class | Samples
r1 | 11,415 | r8 | 22,042 | r15 | 19,547 | r22 | 10,746
r2 | 30,779 | r9 | 4718 | r16 | 14,048 | r23 | 14,108
r3 | 14,473 | r10 | 30,620 | r17 | 10,351 | r24 | 9242
r4 | 5121 | r11 | 9534 | r18 | 5033 | r25 | 14,145
r5 | 4434 | r12 | 19,353 | r19 | 19,531 | r26 | 26,830
r6 | 11,490 | r13 | 9590 | r20 | 21,145 | r27 | 10,381
r7 | 11,129 | r14 | 5471 | r21 | 13,703 | r28 | 11,259
TOTAL | 390,238
Table 3. Comparison of three network structures.
Model | OA (%) | AA (%) | Kappa × 100
2-D CNN | 84.011 ± 6.751 | 82.615 ± 7.349 | 83.30 ± 7.1
GRU | 84.610 ± 2.253 | 83.522 ± 2.666 | 83.90 ± 2.4
2-D CNN-GRU | 97.925 ± 2.831 | 97.956 ± 2.915 | 97.80 ± 3.0
Table 4. OA (%) of training datasets with different adjacent pixel block sizes and batch sizes.
Neighboring Pixel Block Size | Batch Size 64 | Batch Size 128 | Batch Size 256
5 × 5 | 96.451 | 95.328 | 95.986
7 × 7 | 96.894 | 97.543 | 97.325
9 × 9 | 97.145 | 97.249 | 96.874
Table 5. The influence of different preserved components on classification accuracy and training time after PCA dimensionality reduction.
Retained Components | OA (%) | AA (%) | Kappa × 100 | Average Training Duration (s)
4 | 89.802 ± 4.173 | 88.043 ± 5.922 | 89.00 ± 5.3 | 1698.775
16 | 97.925 ± 2.831 | 97.956 ± 2.915 | 97.80 ± 3.0 | 1764.083
32 | 93.119 ± 3.598 | 92.264 ± 6.744 | 92.60 ± 4.4 | 1852.693
288 | 92.563 ± 2.970 | 93.295 ± 2.642 | 92.20 ± 3.1 | 2023.253
Table 6. Classification results of the different methods on the rock datasets.
Method | OA (%) | AA (%) | Kappa × 100
Resnet-18 | 87.995 ± 7.486 | 85.678 ± 8.526 | 87.40 ± 7.8
3-D CNN | 94.075 ± 3.286 | 92.505 ± 3.717 | 93.80 ± 3.4
Hamida | 88.441 ± 6.411 | 88.003 ± 6.324 | 87.90 ± 6.7
HybridSN | 90.010 ± 2.212 | 89.078 ± 2.469 | 89.50 ± 2.3
CAE-SVM | 91.969 ± 2.798 | 92.228 ± 2.065 | 91.60 ± 2.9
Ours | 97.925 ± 2.831 | 97.956 ± 2.915 | 97.80 ± 3.0
Table 7. The classification results of each type of rock on the rock dataset using different methods.
Class Name | Resnet-18 | 3-D CNN | Hamida | HybridSN | CAE-SVM | Ours
r1 | 89.2 ± 10.4 | 97.6 ± 3.4 | 93.0 ± 4.5 | 96.0 ± 4.3 | 94.1 ± 3.8 | 98.6 ± 1.9
r2 | 88.8 ± 5.7 | 94.2 ± 5.9 | 93.1 ± 2.3 | 94.9 ± 3.3 | 95.1 ± 1.6 | 98.8 ± 1.6
r3 | 99.9 ± 0.1 | 99.0 ± 1.1 | 99.2 ± 1.5 | 99.9 ± 0.2 | 95.9 ± 4.7 | 83.8 ± 32.4
r4 | 99.8 ± 0.2 | 99.3 ± 0.5 | 89.3 ± 21.2 | 99.8 ± 0.2 | 90.9 ± 10.4 | 94.1 ± 11.5
r5 | 92.1 ± 5.4 | 95.9 ± 2.3 | 81.0 ± 13.9 | 89.0 ± 7.9 | 76.3 ± 15.3 | 99.2 ± 0.6
r6 | 98.6 ± 1.4 | 98.7 ± 0.6 | 98.8 ± 1.8 | 97.3 ± 1.5 | 97.0 ± 1.6 | 99.5 ± 0.9
r7 | 92.7 ± 2.6 | 92.2 ± 1.7 | 94.4 ± 2.7 | 95.2 ± 1.2 | 90.8 ± 5.0 | 99.4 ± 0.1
r8 | 98.4 ± 1.3 | 97.2 ± 2.3 | 98.5 ± 1.9 | 98.9 ± 0.4 | 97.1 ± 1.5 | 99.9 ± 0.1
r9 | 99.9 ± 0.1 | 99.0 ± 1.2 | 99.8 ± 0.2 | 99.4 ± 0.5 | 97.7 ± 2.0 | 99.4 ± 1.2
r10 | 92.9 ± 2.1 | 96.1 ± 4.1 | 84.9 ± 11.4 | 94.9 ± 2.8 | 94.4 ± 2.9 | 99.5 ± 0.4
r11 | 96.1 ± 2.9 | 98.4 ± 1.1 | 65.9 ± 37.4 | 94.1 ± 3.5 | 94.0 ± 3.0 | 99.8 ± 0.1
r12 | 98.8 ± 0.6 | 97.4 ± 1.8 | 97.8 ± 2.2 | 94.2 ± 1.5 | 95.6 ± 4.3 | 99.8 ± 0.3
r13 | 85.5 ± 3.4 | 88.1 ± 13.9 | 87.1 ± 5.3 | 80.9 ± 5.4 | 83.2 ± 7.1 | 99.5 ± 0.1
r14 | 89.4 ± 6.1 | 97.5 ± 2.8 | 95.3 ± 5.5 | 88.9 ± 8.1 | 90.3 ± 8.1 | 97.5 ± 4.8
r15 | 98.2 ± 2.2 | 98.2 ± 1.7 | 97.5 ± 2.0 | 98.1 ± 1.8 | 96.5 ± 3.0 | 99.5 ± 0.8
r16 | 87.3 ± 5.2 | 98.3 ± 0.8 | 84.7 ± 7.4 | 89.0 ± 4.2 | 93.1 ± 2.2 | 99.3 ± 0.6
r17 | 78.4 ± 20.0 | 94.9 ± 3.3 | 82.8 ± 9.0 | 93.1 ± 2.7 | 93.8 ± 3.8 | 96.7 ± 5.5
r18 | 98.0 ± 0.5 | 95.6 ± 5.7 | 99.6 ± 0.2 | 99.8 ± 0.3 | 94.5 ± 3.7 | 100.0 ± 0.0
r19 | 96.1 ± 1.0 | 94.9 ± 3.4 | 98.4 ± 0.6 | 96.7 ± 4.8 | 97.6 ± 1.8 | 97.9 ± 3.7
r20 | 75.2 ± 12.1 | 96.5 ± 2.3 | 76.0 ± 34.6 | 92.0 ± 4.8 | 94.0 ± 2.4 | 99.4 ± 0.8
r21 | 78.7 ± 17.1 | 96.4 ± 1.1 | 85.1 ± 9.2 | 78.0 ± 12.9 | 86.1 ± 8.0 | 99.5 ± 0.6
r22 | 78.9 ± 11.8 | 88.2 ± 4.0 | 80.2 ± 20.5 | 89.9 ± 5.2 | 90.4 ± 2.3 | 99.1 ± 0.5
r23 | 99.6 ± 0.2 | 98.8 ± 1.5 | 99.0 ± 0.9 | 96.3 ± 1.2 | 98.6 ± 0.6 | 99.9 ± 0.1
r24 | 88.8 ± 2.5 | 93.7 ± 2.8 | 76.6 ± 13.2 | 90.6 ± 2.2 | 86.7 ± 6.8 | 93.9 ± 11.8
r25 | 67.5 ± 10.3 | 91.1 ± 10.3 | 77.0 ± 14.5 | 76.0 ± 9.7 | 91.6 ± 7.0 | 99.4 ± 0.5
r26 | 76.3 ± 9.0 | 90.9 ± 3.0 | 83.5 ± 6.9 | 73.3 ± 9.6 | 88.3 ± 7.1 | 97.6 ± 2.4
r27 | 52.7 ± 10.1 | 87.3 ± 8.0 | 74.2 ± 12.3 | 73.5 ± 9.4 | 74.9 ± 19.7 | 91.9 ± 12.3
r28 | 56.8 ± 6.2 | 53.5 ± 22.1 | 66.8 ± 15.1 | 45.4 ± 11.3 | 78.3 ± 9.8 | 88.1 ± 13.4

Share and Cite

MDPI and ACS Style

Cao, S.; Wu, W.; Wang, X.; Xie, S. Hyperspectral Rock Classification Method Based on Spatial-Spectral Multidimensional Feature Fusion. Minerals 2024, 14, 923. https://doi.org/10.3390/min14090923

