Article

Leveraging Hyperspectral Images for Accurate Insect Classification with a Novel Two-Branch Self-Correlation Approach

1 College of Information and Intelligence, Hunan Agricultural University, Changsha 410128, China
2 Graduate School of Information, Production and Systems, Waseda University, Tokyo 163-8001, Japan
* Author to whom correspondence should be addressed.
Agronomy 2024, 14(4), 863; https://doi.org/10.3390/agronomy14040863
Submission received: 9 March 2024 / Revised: 14 April 2024 / Accepted: 18 April 2024 / Published: 20 April 2024

Abstract: Insect recognition, crucial for agriculture and ecology studies, benefits from advancements in RGB image-based deep learning, yet still confronts accuracy challenges. To address this gap, the HI30 dataset is introduced, comprising 2115 hyperspectral images across 30 insect categories, which offers richer information than RGB data for enhancing classification accuracy. To effectively harness this dataset, this study presents the Two-Branch Self-Correlation Network (TBSCN), a novel approach that combines spectrum correlation and random patch correlation branches to exploit both spectral and spatial information. The effectiveness of the HI30 and TBSCN is demonstrated through comprehensive testing. Notably, while ImageNet-pre-trained networks adapted to hyperspectral data achieved an 81.32% accuracy, models developed from scratch with the HI30 dataset saw a substantial 9% increase in performance. Furthermore, applying TBSCN to hyperspectral data raised the accuracy to 93.96%. Extensive testing confirms the superiority of hyperspectral data and validates TBSCN’s efficacy and robustness, significantly advancing insect classification and demonstrating these tools’ potential to enhance precision and reliability.

1. Introduction

Accurately identifying insects has profound significance in contemporary society, particularly within the realms of agriculture and the economy, as it influences strategies for pest management, crop protection, and sustainable economic development. Firstly, insects are the most diverse and widely distributed biological community on Earth [1], playing an important role in the stability and function of ecosystems. Secondly, accurate insect identification helps develop better strategies to reduce risks, protect crops, and ensure sustainable economic growth. More specifically, it enables agricultural workers to take timely and appropriate measures to protect crops from pests and reduce crop losses. Beneficial insects such as bees and ladybugs, which promote agricultural production, also need to be recognized. Furthermore, by assessing whether insects are beneficial to human society, policymakers and economists can make informed decisions in pest control, resource allocation, and risk management [2]. For example, identifying invasive species, preventing the spread of pests and diseases, and controlling fruit flies or weevils in food can not only prevent contamination but also help ensure food safety.
In the pre-technology era, people relied mainly on the professional knowledge of entomologists to identify insects. Experts identified and classified insects by observing morphological characteristics such as wing shape, color, and antennae; these methods were reliable but slow. Based on the visual insect recognition research of recent years, this study briefly divides existing works into traditional computer vision-based and deep learning-based methods. Traditional computer vision methods, exemplified by works such as [3,4], rely on feature extraction and automated classification, but they often exhibit lower accuracy or limited generalization. By contrast, applying deep learning based on convolutional neural networks (CNNs) to insect recognition can identify insects more accurately without the need for manual feature extraction. However, insect recognition methods based on deep learning require high-quality datasets. In the field of agricultural pest and disease control, much work has been devoted to constructing and researching relevant datasets [5,6,7,8,9,10,11,12]. However, some works focus only on specific species or domains, such as tiger beetles [9], Deng et al. [5], Alfarisy et al. [6], and Kusrini et al. [11]. The most prominent example among these efforts is IP102 [7], which provides a dataset source or baseline for many insect classification algorithms [13,14]. Acquiring insect data poses challenges, and current insect classification usually relies only on RGB images. The reliance on RGB data, which capture information across only three channels (red, green, and blue), inherently limits the depth of spectral information that can be gathered. Given the subtle variations in texture and color among different insect species, RGB-based classification methods may struggle to distinguish between them effectively. Hyperspectral imaging offers a solution by providing rich spectral information across many bands. This allows for a more comprehensive representation of insect features; even with the same number of samples, richer features can be extracted.
Spectra are the intensities of light reflected, emitted, or transmitted by an object at different wavelengths. Unlike traditional color imaging techniques, hyperspectral imaging captures the spectral information of a target object in hundreds of contiguous, densely sampled bands, thus providing a continuous spectral profile, or spectral signature, for each pixel (the molecular structure of each substance has unique absorption and reflection properties at specific wavelengths, which gives rise to the “spectral signature”). A three-dimensional (3D) data cube can be obtained with hyperspectral equipment: two spatial dimensions represent the width and height of the image, while the third dimension represents the spectral dimension (wavelength). Each spatial pixel has a spectral vector representing its reflectance, emissivity, and transmittance characteristics across all wavelengths, as seen in Figure 1.
Hyperspectral image classification typically refers to classifying the pixels of hyperspectral images based on their spectral characteristics in multiple narrow and contiguous spectral bands. Common strategies in hyperspectral classification tasks include feature selection [15], feature extraction [16], the gray-level co-occurrence matrix [17], and Gabor filters [18]. Techniques such as tLTSL [19] and $L_{2,p}$-RER [20] introduce innovative approaches for feature extraction and dimensionality reduction in hyperspectral imagery, overcoming the curse of dimensionality and noise issues, as validated by thorough experimentation. Although manual feature extraction is effective and widely applied, it requires domain expertise and generalizes poorly. Machine learning tools, including Support Vector Machines (SVM) [21], K-Nearest Neighbors [22], Random Forests [23], and Logistic Regression [24], have shown efficacy in hyperspectral image classification. Concurrently, deep learning-based hyperspectral classification methods [25,26,27,28] are emerging as a research trend, offering new insights and techniques for spectral data processing and analysis. Building on such rich spectral information, researchers are able to distinguish objects that appear identical in traditional RGB images, enabling non-destructive chemical and biological analysis. Hyperspectral technology has therefore been used in multiple fields such as agriculture and precision agriculture [29], mineral exploration [30], chili pepper root rot detection [31], and rice variety identification [32].
In recognition of the limitations of RGB data and the demonstrated advantages and broad applicability of hyperspectral imaging, some studies have emerged in this domain. Xiao et al. [33] applied hyperspectral imaging to insect classification, but used only nine insect samples in a single field of view and mainly performed pixel segmentation based on spectral information. This method has evident shortcomings in sample size and analysis depth, which constrain the generalization ability and credibility of the results. Another pertinent study introduced a flower classification dataset named HFD100 [34], comprising 100 species. Its comparison of common methods on manually selected 3-channel and 31-channel inputs, although of limited credibility, indicates that more channels carry more information.
Considering the limitations of traditional RGB models in hyperspectral classification tasks, the research community has begun exploring innovative approaches to overcome these challenges. Confronted with the challenge of limited labeled samples in hyperspectral data, Yonghao Xu et al. [35] proposed the Random Patch Network (RPNet), a cost-effective method that enriches information by correlating random patches of a given input, eliminating the need for three-dimensional scanning. The goal of RPNet is to achieve robust results with limited samples by integrating shallow and deep features. Later researchers made a series of improvements on this foundation. Cheng Chunbo et al. [36] incorporated spectral information, stacked the features extracted by SSRPNet layer by layer into high-dimensional vectors, and then used a graph-based learning model for classification. Qu Shenming et al. [37] additionally used Gabor filters, combining two-dimensional and three-dimensional features. However, as mentioned earlier, our work differs from traditional hyperspectral classification, which focuses on pixel-level classification of one-dimensional spectral vectors. In contrast, we focus on the classification of three-dimensional data cubes and emphasize the utilization of spectral information. Considering the distinctions between hyperspectral and RGB data, along with the fundamental differences between our task and traditional hyperspectral classification tasks, we propose a straightforward and effective processing method, termed TBSCN, for insect image classification using hyperspectral data.
The main contributions of this article can be summarized as follows:
  • A new benchmark hyperspectral dataset for the classification of insect species is established, captured via a line-scanning hyperspectral camera, consisting of 2115 samples across 30 insect species. This dataset is publicly available to the community at: https://github.com/Huwz95/HI30-dataset (accessed on 12 April 2024). To the best of our knowledge, this is the first work to use hyperspectral images for insect classification.
  • This paper develops a novel algorithm, TBSCN, which merges PCA dimensionality reduction with correlation processing, tailored for efficient classification of insect hyperspectral images. By combining spectral and spatial information, this method significantly enhances classification accuracy while maintaining processing speed.
  • A thorough evaluation is provided, comparing original and PCA-compressed hyperspectral data, as well as raw hyperspectral versus derived RGB data. This comparative analysis underscores the effects of hyperspectral data on classification efficiency and potential, offering crucial insights for the advancement of future algorithms and their applications.

2. Materials and Methods

2.1. Dataset

2.1.1. Data Collection

Hyperspectral Imaging System The hyperspectral imaging system used for data collection is shown in Figure 2, consisting of a SOC710-VP imaging spectrometer (spectral range 376.9 nm to 1050.16 nm; C-Mount lens), a variable focal length lens, a halogen lamp, a flat stage, and a control computer. The spectrometer covers wavelengths from 377 nm to 1050 nm (binned into 128 bands), recording 12-bit $696 \times 520$ images.
Data Collection Over 2000 samples of 30 species were collected by placing them on the imaging stage in batches. The illumination source, a halogen lamp, was fixed throughout the collection process for data uniformity. Three narrow bands representing an RGB triplet were selected for visualization. The LabelImg tool (https://github.com/HumanSignal/labelImg (accessed on 15 August 2023)) was used to mark the bounding boxes of insects. Each insect was cropped from its respective hyperspectral cube and then resized spatially to a consistent size of $128 \times 128$, yielding cubes of $128 \times 128 \times 128$. The data are stored in a format following ImageNet [38].
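The cropping and resizing scripts are not part of the released description; as a rough illustration, the sketch below (assuming each hyperspectral cube is loaded as a NumPy array of shape (H, W, B) and that the LabelImg boxes are available as pixel coordinates) shows how an insect could be cropped and resized only along the spatial axes:

```python
import numpy as np
from scipy.ndimage import zoom   # spatial interpolation

def crop_and_resize(cube: np.ndarray, box, out_hw=(128, 128)) -> np.ndarray:
    """Crop one insect from a hyperspectral cube (H, W, B) and resize it spatially
    to out_hw, leaving the spectral axis untouched."""
    x_min, y_min, x_max, y_max = box                 # LabelImg-style pixel box
    patch = cube[y_min:y_max, x_min:x_max, :]        # spatial crop, all B bands kept
    h, w, _ = patch.shape
    factors = (out_hw[0] / h, out_hw[1] / w, 1.0)    # no zoom along the band axis
    return zoom(patch, factors, order=1)             # bilinear spatial resize

# Example: one 520 x 696 frame with 128 bands and a single bounding box.
cube = np.random.default_rng(0).random((520, 696, 128), dtype=np.float32)
sample = crop_and_resize(cube, box=(100, 80, 300, 260))
print(sample.shape)  # (128, 128, 128): 128 x 128 pixels, 128 spectral bands
```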

2.1.2. Dataset Construction and Labeling

Pest Taxonomy The HI30 dataset is organized according to the biological taxonomic system, following the hierarchical tree structure shown in Figure 3. The chart provides a clear representation of the detailed classification of the dataset, which includes 30 species, 25 families, and 9 orders (Species represents the most fundamental unit of classification, while family is the most commonly used unit. As we move from the highest level, “kingdom”, to “species” at the lower level, the characteristics of the grouped organisms become increasingly similar). The orders include Hemiptera, Lepidoptera, Coleoptera, Diptera, Hymenoptera, Orthoptera, Dermaptera, Odonata, and Isoptera Brullé. Each order serves as a super-class. There are corresponding sub-classes below, for example, under the super-class of Hemiptera, there are two sub-classes: Pentatomidae and Fulgoridae, and Pentatomidae includes species such as Nezara viridula and Erthesina fullo. Each insect species belongs to a sub-class, and each sub-class is classified under the corresponding super-class based on the insect order.
Data Filtering and Expert Annotation Eight experienced insect experts contributed to data filtering and annotation. To enhance objective accuracy, the annotation strategy involved two phases: an initial classification of insects based on physical appearance and shape, followed by verification of their names, families, genera, and Latin names. Insect samples that could not be classified confidently were set aside temporarily, and eight professional entomologists were invited to perform a more detailed and in-depth classification to ensure the accuracy of the information. Experts thoroughly analyzed samples that posed classification challenges; samples whose identification remained elusive after careful review were excluded from the dataset. Following this procedure, a total of 2115 insect samples were categorized into 30 classes, which is of significant importance for the overall quality and reliability of the dataset.
Once classified, the HI30 dataset consisted of 2115 insect images distributed across 30 categories. In order to achieve more reliable test results, it was necessary for each category in the test set to have an adequate number of samples. An approximate 7:3 split ratio at the sub-class level was employed for dividing the dataset into training and testing sets. Specifically, the HI30 was divided into 1502 training images and 613 testing images for the classification task. More detailed information about HI30 can be found in Table A1.
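The exact split indices are not published alongside the dataset; the following minimal sketch shows how such an approximate 7:3, per-sub-class (stratified) split could be reproduced with scikit-learn, assuming a hypothetical list of sample paths and integer species labels:

```python
from sklearn.model_selection import train_test_split

# Hypothetical index: one file path per sample plus its species label (0-29).
image_paths = [f"HI30/sample_{i:04d}.npy" for i in range(2115)]
labels = [i % 30 for i in range(2115)]          # placeholder labels for illustration

# Stratifying on the label keeps roughly a 7:3 ratio inside every sub-class,
# mirroring the 1502 / 613 training / testing partition described above.
train_paths, test_paths, y_train, y_test = train_test_split(
    image_paths, labels, test_size=0.3, stratify=labels, random_state=0)
print(len(train_paths), len(test_paths))        # 1480 / 635 with these placeholder labels
```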

2.2. Methods

2.2.1. Framework

Each sample consists of 3D cube data. Most datasets used in previous classification tasks were collected in the same scene. Although the data acquisition environment was kept as free from interference as possible, a spectral library was created to complement each sample with spectral information and to fully utilize the illumination information carried by the spectrum. The preprocessing network is introduced first, followed by the classifiers used for the final decision.
A data processing method with a two-branch architecture was developed, as illustrated in Figure 4. Beginning with the original spectral input, the approach employed two network branches designed for extracting features from distinct perspectives: one focused on random spatial correlation and the other on random spectral correlation. The fusion of spectral and spatial information involved concatenating these two types and then performing PCA to extract features and downscale them. The original hyperspectral data could be seamlessly integrated into traditional methods or deep learning networks for subsequent feature extraction and classification.
Random spatial correlation involves selecting $c_p$ patches from the input itself as convolution kernel weights and performing convolution operations with these kernels, yielding $c_p$ feature maps, followed by L iterations. In contrast, random spectral correlation involves choosing and spatially averaging $c_s$ patches to create kernels, resulting in $c_s$ feature maps post-convolution. The resulting PCA-transformed data, along with the original PCA data obtained by reducing the dimensionality of the original data in the spectral domain, are combined and fed into the classifier.
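As a minimal sketch of the fusion step described above (the array names and the per-sample PCA fitting are assumptions, not the authors' exact implementation), the two branch outputs could be concatenated, compressed with PCA, and stacked with the PCA-reduced original cube as follows:

```python
import numpy as np
from sklearn.decomposition import PCA

def fuse_branches(spatial_feats, spectral_feats, original_pca, n_out=10):
    """Concatenate the two branch outputs along the channel axis, compress them with
    PCA, and stack the result with the PCA-reduced original cube (all arrays (H, W, C))."""
    fused = np.concatenate([spatial_feats, spectral_feats], axis=-1)
    h, w, c = fused.shape
    flat = fused.reshape(-1, c)                                  # one feature vector per pixel
    squeezed = PCA(n_components=n_out).fit_transform(flat).reshape(h, w, n_out)
    return np.concatenate([original_pca, squeezed], axis=-1)     # classifier input

# Toy shapes: 24 spatial maps, 8 spectral maps, 10-channel PCA of the original cube.
out = fuse_branches(np.random.rand(128, 128, 24),
                    np.random.rand(128, 128, 8),
                    np.random.rand(128, 128, 10))
print(out.shape)  # (128, 128, 20)
```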

2.2.2. PCA

Since the original hyperspectral data are high-dimensional, redundant, and noise-prone, it is common to apply PCA to reduce dimension before further processing.
For the original image $I_o \in \mathbb{R}^{W \times H \times B}$, where W, H, and B denote the width, height, and number of spectral channels, respectively, PCA projects the original data onto a new set of orthogonal coordinates ordered by variance, and the top M channels are selected to obtain the squeezed $I_p \in \mathbb{R}^{W \times H \times M}$. As seen in Figure 5, hyperspectral samples of Pieris rapae, Gryllidae, Nyctemera adversata Walker, and Melolontha are displayed separately. After dimensionality reduction to 10 channels using PCA, the visualization of each channel is presented.
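A minimal per-sample sketch of this spectral PCA compression with scikit-learn, treating each pixel's B-band spectrum as one observation (whether PCA is fitted per sample or over the whole dataset is not specified, so per-sample fitting here is an assumption):

```python
import numpy as np
from sklearn.decomposition import PCA

def pca_reduce(cube: np.ndarray, n_components: int = 10) -> np.ndarray:
    """Project a (H, W, B) hyperspectral cube onto its top principal components,
    treating every pixel's B-band spectrum as one observation."""
    h, w, b = cube.shape
    flat = cube.reshape(-1, b)                              # (H*W, B)
    reduced = PCA(n_components=n_components).fit_transform(flat)
    return reduced.reshape(h, w, n_components)              # (H, W, M)

squeezed = pca_reduce(np.random.rand(128, 128, 128).astype(np.float32), n_components=10)
print(squeezed.shape)  # (128, 128, 10)
```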

2.2.3. Random Spectrum Correlation

Random Spectrum Correlation borrows its idea from illumination chromaticity estimation, or auto white balance [39], which estimates the illumination property of a scene and color-corrects the processed image.
In our experimental context, the described method serves both to seek a “normalized” illumination setting and to augment the data in the spectral domain. Given a spectral input with B channels, a square patch of shape (h, w) is extracted at a random location on the background (using the object mask) and averaged spatially to obtain a vector of length B. This operation is repeated N times on each training sample to obtain a so-called spectral library (in Figure 4, the spectral library has the shape (1, 1, B, N)).
Before training, for an input $I_o \in \mathbb{R}^{W \times H \times B}$, the spectral correlation operation is formulated as
$$Z_i^s = \sum_{j=1}^{B} I_o^j * S_i^j, \quad i \in \{1, \ldots, c_s\},$$
where $*$ denotes the 2D convolution operator, $Z_i^s \in \mathbb{R}^{W \times H}$ is the resulting $i$th feature map, $I_o^j \in \mathbb{R}^{W \times H}$ the $j$th channel of the input, and $S_i^j \in \mathbb{R}^{1 \times 1}$ the $j$th entry of the $i$th random spectral vector drawn from the library. $Z^s = \mathrm{concat}(Z_1^s, Z_2^s, \ldots, Z_{c_s}^s) \in \mathbb{R}^{W \times H \times c_s}$ represents the output.
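The following sketch illustrates the idea under stated assumptions (an object mask per training cube, illustrative patch sizes and counts); it builds a spectral library from spatially averaged background patches and applies the spectral correlation as a per-pixel weighted sum over the B bands, which is equivalent to a 1 × 1 convolution:

```python
import numpy as np

def build_spectral_library(cubes, masks, n_per_sample=4, patch_hw=(8, 8), seed=0):
    """Sample square background patches (mask == 0), average each patch spatially,
    and stack the resulting length-B spectra into a library of shape (N, B)."""
    rng = np.random.default_rng(seed)
    ph, pw = patch_hw
    library = []
    for cube, mask in zip(cubes, masks):          # cube: (H, W, B), mask: (H, W)
        h, w, _ = cube.shape
        for _ in range(n_per_sample):
            for _ in range(100):                  # retry until a background patch is hit
                y, x = rng.integers(0, h - ph), rng.integers(0, w - pw)
                if mask[y:y + ph, x:x + pw].sum() == 0:
                    break
            library.append(cube[y:y + ph, x:x + pw, :].mean(axis=(0, 1)))
    return np.stack(library)                      # (N, B)

def spectral_correlation(cube, library, c_s=8, seed=0):
    """Pick c_s spectra from the library and correlate each with the cube: a 1x1
    'convolution', i.e. a weighted sum over the B bands at every pixel."""
    rng = np.random.default_rng(seed)
    idx = rng.choice(len(library), size=c_s, replace=False)
    return cube @ library[idx].T                  # (H, W, B) x (B, c_s) -> (H, W, c_s)
```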

2.2.4. Random Patch Correlation

Random patch correlation was first introduced by Yonghao Xu et al. [35]; it treats random patches extracted from the image itself as convolutional kernels. It originates from random projection, where boundaries between different classes are well preserved, and it is training-free. Unlike the hyperspectral segmentation tasks above, this work addresses a global classification task. The target is generally located in the middle of the image; thus, random patch locations are sampled from a Gaussian distribution centered on the image center.
Specifically, for the obtained $I_p \in \mathbb{R}^{W \times H \times M}$, we select $c_p$ points (for $c_p$ patches) around the image center following the Gaussian distribution and crop $c_p$ patches of size $k$. The process convolves the input with the random patch kernels as described:
$$Z_i^p = \sum_{j=1}^{M} I_p^j * W_i^j, \quad i \in \{1, \ldots, c_p\},$$
where $*$ denotes the 2D convolution operator, $Z_i^p \in \mathbb{R}^{W \times H}$ is the resulting $i$th feature map, $I_p^j \in \mathbb{R}^{W \times H}$ the $j$th channel of the input, and $W_i^j \in \mathbb{R}^{k \times k}$ the $j$th channel of the $i$th random patch $W_i \in \mathbb{R}^{k \times k \times M}$. $Z^p = \mathrm{concat}(Z_1^p, Z_2^p, \ldots, Z_{c_p}^p) \in \mathbb{R}^{W \times H \times c_p}$ represents the output.
Then, whitening is performed to shift the per-channel mean and rescale the standard deviation, followed by a ReLU, which is formulated as
$$F = \sigma(\mathrm{Norm}(Z^p)),$$
where $F \in \mathbb{R}^{W \times H \times c_p}$ and $\sigma$ denotes the ReLU operation. Repeating the operations above L times yields a list of feature maps $[F_1, F_2, \ldots, F_L] \in \mathbb{R}^{W \times H \times (L \cdot c_p)}$, representing the spatial information. This spatial information can be fused with the spectral information $Z^s$ as a complement to $I_p$ and fed to the classifier.
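A rough PyTorch sketch of this training-free branch is given below; the parameter values ($c_p$, $k$, the number of iterations, and the Gaussian spread) are illustrative assumptions, and the whitening is implemented as a simple per-channel standardization:

```python
import torch
import torch.nn.functional as F

def random_patch_correlation(i_p, c_p=8, k=21, n_layers=3, sigma=16.0, seed=0):
    """Training-free random-patch branch: crop c_p k x k patches around Gaussian-
    distributed centres, use them as convolution kernels, whiten per channel,
    apply ReLU, and repeat n_layers times.  i_p: (M, H, W) PCA-reduced cube."""
    g = torch.Generator().manual_seed(seed)
    feats = []
    x = i_p.unsqueeze(0)                                        # (1, M, H, W)
    for _ in range(n_layers):
        _, _, h, w = x.shape
        # Patch centres drawn from a Gaussian biased towards the image centre.
        cy = torch.normal(h / 2, sigma, (c_p,), generator=g).clamp(k // 2, h - k // 2 - 1).long()
        cx = torch.normal(w / 2, sigma, (c_p,), generator=g).clamp(k // 2, w - k // 2 - 1).long()
        kernels = torch.stack([x[0, :, y - k // 2:y + k // 2 + 1, z - k // 2:z + k // 2 + 1]
                               for y, z in zip(cy, cx)])        # (c_p, C, k, k)
        z_maps = F.conv2d(x, kernels, padding=k // 2)           # (1, c_p, H, W)
        z_maps = (z_maps - z_maps.mean(dim=(2, 3), keepdim=True)) / \
                 (z_maps.std(dim=(2, 3), keepdim=True) + 1e-6)  # per-channel whitening
        x = torch.relu(z_maps)
        feats.append(x[0])
    return torch.cat(feats, dim=0)                              # (n_layers * c_p, H, W)

spatial_feats = random_patch_correlation(torch.rand(10, 128, 128))
```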

2.2.5. The Classifier

Two classifiers were selected for evaluation: a non-linear SVM classifier and deep learning classifiers.
Support Vector Machine (SVM)
The core idea of SVM, originally proposed by Cortes and Vapnik in 1995, is to find an optimal hyperplane in a transformed feature space that maximizes the margin between two categories. Applying an SVM directly to the spectral input or rendered RGB images is impractical. Instead, a deep network was employed to extract lower-dimensional features, which were then classified by a non-linear SVM. After the deep classification network was trained, it served as a feature extractor for all data in combination with the SVM.
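A minimal sketch of this "deep feature extractor + SVM" pipeline, assuming a ResNet18 already trained on HI30 with a 10-channel input (the modified first convolution and the RBF kernel settings are illustrative assumptions, not the authors' exact configuration):

```python
import torch
import torch.nn as nn
from torchvision.models import resnet18
from sklearn.svm import SVC

# Hypothetical setup: a ResNet18 already trained on HI30 (10-channel PCA input here)
# is reused as a feature extractor; its penultimate features feed a non-linear SVM.
model = resnet18(num_classes=30)
model.conv1 = nn.Conv2d(10, 64, kernel_size=7, stride=2, padding=3, bias=False)
backbone = nn.Sequential(*list(model.children())[:-1])    # drop the final FC layer
backbone.eval()

@torch.no_grad()
def extract_features(batch: torch.Tensor):                 # batch: (N, 10, 128, 128)
    return backbone(batch).flatten(1).cpu().numpy()        # (N, 512) feature vectors

# In practice the features and labels come from the training split; toy values here.
x_train = extract_features(torch.randn(8, 10, 128, 128))
y_train = list(range(8))
svm = SVC(kernel="rbf", C=10.0).fit(x_train, y_train)
```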
Deep Net Classification Layer CNNs have evolved into strong image classifiers thanks to end-to-end training via back-propagation. A list of representative classification networks was selected to benchmark performance on HI30. ResNet [40], proposed in 2016, alleviated the degradation problem in deep networks by introducing residual blocks and has been the cornerstone of many subsequent studies. DenseNet [41], proposed in 2017, further enhances feature propagation by connecting each layer to all previous layers. MobileNetV2 [42], proposed by Sandler et al. in 2018, aims to provide efficient CNNs for mobile and embedded devices; it uses depthwise separable convolutions to reduce computation and model size without sacrificing much performance.

3. Results

3.1. Experimental Setup and Evaluation Metrics

To scientifically compare the effectiveness of hyperspectral data versus RGB data and to establish the superiority of hyperspectral information, the processing of [43] was adopted to convert the hyperspectral data into corresponding RGB representations. For traditional classification, the commonly employed SVM was implemented using the publicly available scikit-learn framework [44]. The performance of advanced deep convolutional networks was also evaluated, namely ResNet18 [40], ResNet34 [40], MobileNetV2 [42], and DenseNet121 [41]. For classification tasks, it has become common practice to pre-train models on large-scale datasets and fine-tune them on downstream task data, as demonstrated by the effectiveness of models pre-trained on ImageNet [38] and fine-tuned on IP102 [7]. However, common visual network architectures are designed for RGB images, and hyperspectral data, with their significantly higher dimensionality, differ inherently from RGB. Therefore, during the training stage, models were trained from scratch without dataset pre-training.
During the training of the deep networks, the batch size was 64. The AdamW optimizer was chosen with an initial learning rate of 0.001 and a decay rate of 0.0001, using cosine annealing. While keeping the basic architectures of these deep models unchanged, the only modification was adjusting the output of the last fully connected layer from 1000 to the number of classes. The experiments based on deep features were implemented in PyTorch 2.0 and executed on an NVIDIA RTX 3060 GPU with 12 GB of memory.
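A minimal training-loop sketch consistent with these settings is shown below; the number of epochs, the dummy data loader, the interpretation of the decay rate as AdamW weight decay, and the first-convolution adaptation to 128-band input are assumptions made only so the sketch runs:

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset
from torchvision.models import densenet121

NUM_CLASSES, NUM_BANDS, EPOCHS = 30, 128, 100   # EPOCHS is illustrative, not reported

model = densenet121(weights=None)               # trained from scratch, no pre-training
# The paper only reports changing the final fully connected layer; adapting the first
# convolution to 128-band input is an extra assumption needed for the sketch to run.
model.features.conv0 = nn.Conv2d(NUM_BANDS, 64, kernel_size=7, stride=2, padding=3, bias=False)
model.classifier = nn.Linear(model.classifier.in_features, NUM_CLASSES)

optimizer = torch.optim.AdamW(model.parameters(), lr=1e-3, weight_decay=1e-4)
scheduler = torch.optim.lr_scheduler.CosineAnnealingLR(optimizer, T_max=EPOCHS)
criterion = nn.CrossEntropyLoss()

# Tiny placeholder dataset standing in for the 1502 training cubes.
train_loader = DataLoader(
    TensorDataset(torch.randn(8, NUM_BANDS, 128, 128),
                  torch.randint(0, NUM_CLASSES, (8,))),
    batch_size=64, shuffle=True)

for epoch in range(EPOCHS):
    for cubes, labels in train_loader:
        optimizer.zero_grad()
        loss = criterion(model(cubes), labels)
        loss.backward()
        optimizer.step()
    scheduler.step()
```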
In the experiments, four evaluation metrics widely used in classification tasks were adopted for quantitative evaluation: Accuracy, Recall, F1-score, and the Kappa coefficient. Accuracy quantifies the ratio of the number of samples correctly predicted by the model to the total number of samples. Recall quantifies the model's capacity to recognize positive classes, defined as the proportion of correctly classified positive instances over all actual positive ones. The F1-score, a balanced blend of precision and recall, provides a more comprehensive performance measure than accuracy alone, particularly in scenarios with imbalanced positive and negative categories. The Kappa coefficient goes deeper than simple accuracy by taking chance agreement into account. Kappa values usually lie in the range [0, 1], with higher values implying higher classification accuracy and a more comprehensive representation of the model's performance across categories. The formulas for these evaluation metrics are as follows:
$$\mathrm{Accuracy} = \frac{TP + TN}{TP + TN + FP + FN},$$
$$\mathrm{Recall} = \frac{TP}{TP + FN},$$
$$\mathrm{Precision} = \frac{TP}{TP + FP},$$
$$\mathrm{F1\text{-}score} = \frac{2 \times \mathrm{Precision} \times \mathrm{Recall}}{\mathrm{Precision} + \mathrm{Recall}},$$
$$\mathrm{Kappa} = \frac{p_o - p_e}{1 - p_e},$$
where TP, TN, FP, and FN stand for true positive, true negative, false positive, and false negative samples, respectively, and $p_o$ and $p_e$ denote the observed accuracy and the expected (chance) accuracy, respectively.
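These metrics can be computed with scikit-learn as sketched below; macro averaging over the 30 classes is an assumption, since the averaging scheme is not stated, and the label arrays are toy placeholders:

```python
from sklearn.metrics import accuracy_score, recall_score, f1_score, cohen_kappa_score

# y_true / y_pred: integer class labels of the test samples (toy placeholders here).
y_true = [0, 1, 2, 2, 1, 0]
y_pred = [0, 1, 2, 1, 1, 0]

metrics = {
    "Accuracy": accuracy_score(y_true, y_pred),
    "Recall": recall_score(y_true, y_pred, average="macro"),   # macro: every class weighted equally
    "F1-score": f1_score(y_true, y_pred, average="macro"),
    "Kappa": cohen_kappa_score(y_true, y_pred),
}
print(metrics)
```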

3.2. Results and Analysis

Accuracy is the most intuitive performance metric, indicating the percentage of correct classifications. Table 1 reports the image classification accuracy, along with Kappa, for RGB data and hyperspectral data under different processing conditions using the same methods.
Table 1 reveals that the hyperspectral experiments achieve much higher accuracy (exceeding 80% for each network and reaching about 90% for DenseNet, whereas RGB data only reach about 70%). This clearly confirms the significant advantages of the hyperspectral modality for insect applications and also confirms the necessity of HI30. Specifically, we compared the per-category classification performance of original hyperspectral data and RGB data using DenseNet121, as seen in Table 2. Hyperspectral data perform well in almost all categories, with the majority achieving an accuracy above 80% and 12 categories even reaching 100%. In contrast, while RGB data perform admirably in specific categories, their performance falters in others, reaching only about half of that observed with hyperspectral data.
Given the challenges associated with collecting hyperspectral data and ensuring equitable data acquisition, we opted to pre-train neural networks on the comprehensive ImageNet dataset, which contains over 14 million labeled images across more than 20,000 categories, while maintaining consistent training parameters. This pre-training approach is widely recognized for enhancing accuracy and is extensively utilized in prior research. As illustrated in Figure 6, we compare experimental results between networks with and without pre-training. Although pre-training on ImageNet improves classification accuracy for RGB data, it still underperforms compared to hyperspectral data across all networks. As shown in Table 1 and Table 3, DenseNet121 achieves the highest RGB data accuracy of 81.73% after ImageNet pre-training. In contrast, unprocessed hyperspectral data without pre-training reach an accuracy of 90.05%, marking an 8.32% increase, which represents a substantial gap in classification performance. Additionally, some hyperspectral-based networks exhibit decreased performance when pre-trained on ImageNet, likely due to the spatial distribution discrepancies between RGB and higher-dimensional data. This mismatch suggests the need for alternative technical adaptations. However, reducing data to three-dimensional PCA, closely resembling RGB data, slightly enhances performance in certain networks.
The confusion matrix diagrams in Figure 7 and Figure 8 provide a detailed comparison of classification between original hyperspectral and RGB data. They highlight the differences between true and predicted categories, emphasizing the superior classification accuracy of hyperspectral data. This advantage is particularly evident in categories where RGB data struggle due to morphological resemblances or taxonomic similarities among species. As seen in Figure 8, Syrphidae is incorrectly identified as Apis cerana cerana Fabricius, and similar errors occur with other species such as Athetis furvula and Prodenia litura (Fabricius). Notably, RGB data also confuse Gryllidae with species with transparent wings and slender bodies, such as Oedipodidae, Atractomorpha sinensis Bolivar, and Vespidae (see Figure 9). This underscores RGB's limitations in accurately classifying related species and validates the superior efficacy of hyperspectral data for such classifications.
A noteworthy observation is that, with the hyperspectral channels reduced to three, matching the memory footprint of RGB data, the achieved results still surpass those obtained with RGB data. We believe the reason behind this improvement is that the axes provided by PCA are more canonically aligned. Among the various methods tested, the most naive one, the traditional "SIFT + SVM" approach, obtained only 28.06% accuracy, significantly lower than the other two handcrafted-feature methods. While the histogram-based algorithm comes close to deep learning in feature extraction and classification performance, overall, handcrafted feature extraction methods fall short of the effectiveness achieved by deep learning-based ones. We further operated on the hyperspectral data in Table 1 under three different input-channel settings: the untouched original data preserving all channels, three PCA channels, and ten PCA channels. More channels were tested and showed no noticeable difference. Shrinking the original data to three PCA channels yields superior classification accuracy compared to the RGB counterpart across various metrics, but performance deteriorates notably compared to the full-spectrum original data. This gap is particularly pronounced with the ResNet architectures, where the three-PCA-channel setting reports a drop of over 10 percentage points in accuracy, likely due to the information lost when reducing from 128 channels to 3. Furthermore, with 10 PCA channels, better results were achieved than with the full-spectrum original data. This indicates that the compressed data exhibit a "denoising" effect, effectively disregarding the noise present in the full hyperspectral input while accelerating the entire inference process.
A Recall score higher than the F1-score signifies the model's strength in detecting positive instances while potentially conceding lower precision. This gap may be attributable to the model's predisposition to designate more cases as positive, a factor that inflates Recall but also increases the number of instances misclassified as positive. As observed in Table 4, Recall slightly overshadows the F1-score in the original data's classification results across a range of methods. With the ResNet34 classifier in particular, Recall peaks at 86.27 while F1 lags at 82.74.
However, following PCA, the divergence between the two metrics grows smaller regardless of the channel number. With TBSCN, Recall and F1-score are both enhanced and their values converge, suggesting that the model obtains a high positive-class detection rate while avoiding classification errors. More information can be obtained from Figure 10, which illustrates the UMAP [45] visualization of features extracted by DenseNet, comparing RGB and hyperspectral inputs. Notably, RGB data points exhibit a higher degree of scatter and lack cohesive aggregation, unlike the hyperspectral data. This disparity is particularly evident when comparing with hyperspectral data reduced to three dimensions using PCA, which, despite matching the dimensional volume of RGB data, demonstrates superior clustering. However, while the PCA-reduced (10-dimensional) hyperspectral data show promising results, their UMAP representation reveals an excessive stacking of data points, leading to blurred inter-class boundaries. This effect suggests a potential overemphasis on data similarities due to the reduction to ten dimensions, possibly hindering feature extraction. Nevertheless, the application of TBSCN mitigates this issue, likely by optimizing the redundant information inherent in the 10-dimensional PCA data.
Accordingly, this procedure refines the model’s training quality, thereby boosting its ability to accurately recognize positive instances.
A study on different deep neural network backbones is also given: DenseNet [41] leads the leaderboard regardless of the number of channels and the input format, with ResNet ranking second. With fewer input channels, accuracy becomes more dependent on the classification network; on ResNet, the performance of RGB and PCA_3 differs significantly from that on DenseNet. Combining a deep network (as feature extractor) with an SVM (as classifier) is also viable, performing slightly less accurately than the pure deep solution while outperforming handcrafted features.

3.3. Further Analysis

This section presents several ablation studies. All experiments adopt the PCA_10 setting without pre-training, using the PCA_10 classification results as the baseline. Figure 11 illustrates the distinctions observed in the ablation studies.

3.3.1. Ablation Study on Random Correlation Section

To validate the effectiveness of the spectral and spatial correlation modules, separate experiments were conducted for each type of correlation, as seen in Table 5. When the spectral correlation was removed, the results of almost all deep learning-based classifiers were inferior to those with the spectral correlation included. Similarly, experiments focusing solely on the spatial level produced lower results than those combining spectral and spatial correlations. It is worth noting that on the DenseNet architecture, experiments involving only spectral correlation surpassed those involving only spatial correlation. Notably, integrating both types of information yielded a significant improvement, particularly in comparison with the other network architectures.

3.3.2. Ablation Study on the Fusion Method

Experiments were conducted to explore the fusion of spatial and spectral information using two additional fusion methods: element-wise addition and element-wise multiplication. Importantly, when the two types of information were added together, the results were generally even less favorable than when focusing exclusively on one type of information.
This suggests that spatial and spectral information represent distinct perspectives, and their additive combination can paradoxically degrade the data enhancement. Element-wise multiplication, by contrast, yielded promising results, and in a few instances networks performed well with it. Nevertheless, the adopted fusion approach, which concatenates both types of information and subsequently applies PCA for dimensionality reduction, remains the most effective overall, as seen in Table 6.

3.3.3. Ablation Study on Input Channel

The study concludes with the selection of the number of channels; the fusion approach here, as in TBSCN, is concatenation followed by dimensionality reduction. After dimensionality reduction to 5 or 15 channels, both outcomes surpassed the baseline results but exhibited a certain degree of decline compared to reducing to 10 channels.
It is observed that, regardless of whether dimensions are reduced to 5, 10, or 15, this fusion approach outperforms "s + s", i.e., direct element-wise addition of the two types of information. While certain networks, such as ResNet18, may exhibit slightly better results in the "s × s" (element-wise multiplication) scenario, overall, concatenating the two types of information followed by dimensionality reduction is more operationally convenient and yields effective results, as seen in Table 6.

4. Conclusions

The recognition of insects plays a crucial role in a wide range of practical applications, such as agriculture, biodiversity conservation, and environmental monitoring. However, the subtle spectral characteristics of insects often render them indistinguishable to the human eye in an RGB-dominated environment, highlighting the practical importance of insect recognition. Advances in deep learning have significantly propelled the field of insect identification, underscoring the importance of high-quality data to support data-driven deep learning methods in this domain. While many studies have achieved commendable results in insect recognition, they predominantly focus on RGB data and overlook the significant role of spectral information. The HI30 dataset fills this gap, comprising 30 categories and 2115 samples, thereby facilitating comprehensive research into hyperspectral insect classification. Experimental results strongly affirm the effectiveness of hyperspectral data in insect classification, outperforming RGB data. The TBSCN model exploits the spatial and spectral dimensions of hyperspectral data through random spatial correlation and random spectral correlation, thereby enhancing classification accuracy.
In future work, we will focus on expanding the exploration of hyperspectral data use in insect classification and enhancing our dataset. This initiative includes investigating the application of hyperspectral data in categorizing distinct insect subspecies, poised to make significant contributions to the realm of fine-grained classification tasks in this field.

Author Contributions

Conceptualization, S.T. and Y.D.; methodology, Y.D.; software, S.H. (Shuzhen Hu); validation, Y.D.; formal analysis, S.H. (Shaofang He); investigation, L.Z.; resources, S.T.; data curation, S.H. (Shaofang He); writing—original draft preparation, S.H. (Shuzhen Hu); writing—review and editing, Y.D. and Y.Q.; visualization, S.H. (Shaofang He); supervision, L.Z.; funding acquisition, S.T. All authors have read and agreed to the published version of the manuscript.

Funding

This research was supported in part by the National Key R&D Program of China (2023YFD1401100), the Hunan Provincial Key Research and Development Program (2023NK2011), the Provincial Science and Technology Innovation Team (S2021YZCXTD0024), the Financial Support for Changsha Science and Technology Planning Project (kh2303010), the National Natural Science Foundation of China under Grant (62202163) and the Natural Science Foundation of Hunan Province under Grant (2022JJ40190).

Data Availability Statement

The data presented in this study are available on request from the first author.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A

Table A1. This is the comprehensive information of HI30 dataset. “–” indicates that the expert identified the sample only up to the level of “family”.
| Order | Family | Species | Amount |
|---|---|---|---|
| Hemiptera | Pentatomidae | Nezara viridula (Linnaeus) | 139 |
| Hemiptera | Pentatomidae | Erthesina fullo (Thunberg) | 63 |
| Hemiptera | Fulgoridae | Lycorma delicatula | 50 |
| Lepidoptera | Arctiinae | Nyctemera adversata Walker | 112 |
| Lepidoptera | Arctiinae | Spilarctia subcarnea (Walker) | 60 |
| Lepidoptera | Gelechiidae | Sweetpotato leaf folder | 46 |
| Lepidoptera | Pyralidae | Diaphania indica (Saunders) | 84 |
| Lepidoptera | Pyralidae | – | 100 |
| Lepidoptera | Noctuidae | Ctenoplusia albostriata (Bremer et Grey) | 55 |
| Lepidoptera | Noctuidae | Athetis furvula | 55 |
| Lepidoptera | Noctuidae | Prodenia litura (Fabricius) | 83 |
| Lepidoptera | Noctuidae | – | 127 |
| Lepidoptera | Hesperiidae | Parnara guttata (Bremer et Grey) | 53 |
| Lepidoptera | Lycaenidae | Deudorix epijarbas Moore | 48 |
| Lepidoptera | Pieridae | Pieris rapae | 56 |
| Lepidoptera | Nymphalidae | Polygonia c-album (Linnaeus) | 60 |
| Coleoptera | Scarabaeoidea | Melolontha | 56 |
| Diptera | Tephritidae | – | 35 |
| Diptera | Syrphidae | – | 119 |
| Diptera | Calliphoridae | – | 66 |
| Hymenoptera | Vespidae | – | 48 |
| Hymenoptera | Apidae | Apis cerana cerana Fabricius | 132 |
| Dermaptera | Forficulidae | – | 74 |
| Odonata | Platycnemididae | – | 56 |
| Odonata | Gomphidae | – | 48 |
| Orthoptera | Gryllidae | – | 65 |
| Orthoptera | Oedipodidae | – | 60 |
| Orthoptera | Gryllotalpidae | Gryllotalpa orientalis Burmeister | 45 |
| Orthoptera | Acrididae | Atractomorpha sinensis Bolivar | 48 |
| Isoptera Brullé | Termitidae | Odontotermes formosanus Shiroki | 72 |

References

  1. Stork, N.E.; McBroom, J.; Gely, C.; Hamilton, A.J. New approaches narrow global species estimates for beetles, insects, and terrestrial arthropods. Proc. Natl. Acad. Sci. USA 2015, 112, 7519–7523. [Google Scholar] [CrossRef] [PubMed]
  2. Majeed, W.; Khawaja, M.; Rana, N.; de Azevedo Koch, E.B.; Naseem, R.; Nargis, S. Evaluation of insect diversity and prospects for pest management in agriculture. Int. J. Trop. Insect Sci. 2022, 42, 2249–2258. [Google Scholar] [CrossRef]
  3. Wen, C.; Guyer, D. Image-based orchard insect automated identification and classification method. Comput. Electron. Agric. 2012, 89, 110–115. [Google Scholar] [CrossRef]
  4. Zhang, T.; Long, C.F.; Deng, Y.J.; Wang, W.Y.; Tan, S.Q.; Li, H.C. Low-rank preserving embedding regression for robust image feature extraction. IET Comput. Vis. 2023, 18, 124–140. [Google Scholar] [CrossRef]
  5. Deng, L.; Wang, Y.; Han, Z.; Yu, R. Research on insect pest image detection and recognition based on bio-inspired methods. Biosyst. Eng. 2018, 169, 139–148. [Google Scholar] [CrossRef]
  6. Alfarisy, A.A.; Chen, Q.; Guo, M. Deep Learning Based Classification for Paddy Pests & Diseases Recognition. In Proceedings of the 2018 International Conference on Mathematics and Artificial Intelligence, ICMAI ‘18, New York, NY, USA, 9–15 July 2018; pp. 21–25. [Google Scholar] [CrossRef]
  7. Wu, X.; Zhan, C.; Lai, Y.K.; Cheng, M.M.; Yang, J. IP102: A Large-Scale Benchmark Dataset for Insect Pest Recognition. In Proceedings of the 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Long Beach, CA, USA, 15–20 June 2019; pp. 8779–8788. [Google Scholar] [CrossRef]
  8. Li, Y.; Wang, H.; Dang, L.M.; Sadeghi-Niaraki, A.; Moon, H. Crop pest recognition in natural scenes using convolutional neural networks. Comput. Electron. Agric. 2020, 169, 105174. [Google Scholar] [CrossRef]
  9. Abeywardhana, D.L.; Dangalle, C.D.; Nugaliyadde, A.; Mallawarachchi, Y. An ultra-specific image dataset for automated insect identification. Multimed. Tools Appl. 2022, 81, 3223–3251. [Google Scholar] [CrossRef]
  10. Hansen, O.L.; Svenning, J.C.; Olsen, K.; Dupont, S.; Garner, B.H.; Iosifidis, A.; Price, B.W.; Høye, T.T. Species-level image classification with convolutional neural network enables insect identification from habitus images. Ecol. Evol. 2020, 10, 737–747. [Google Scholar] [CrossRef] [PubMed]
  11. Kusrini, K.; Suputa, S.; Setyanto, A.; Agastya, I.M.A.; Priantoro, H.; Chandramouli, K.; Izquierdo, E. Data augmentation for automated pest classification in Mango farms. Comput. Electron. Agric. 2020, 179, 105842. [Google Scholar] [CrossRef]
  12. Wang, J.; Li, Y.; Feng, H.; Ren, L.; Du, X.; Wu, J. Common pests image recognition based on deep convolutional neural network. Comput. Electron. Agric. 2020, 179, 105834. [Google Scholar] [CrossRef]
  13. Zhang, L.; Zhao, C.; Feng, Y.; Li, D. Pests Identification of IP102 by YOLOv5 Embedded with the Novel Lightweight Module. Agronomy 2023, 13, 1583. [Google Scholar] [CrossRef]
  14. Li, W.; Zhu, T.; Li, X.; Dong, J.; Liu, J. Recommending advanced deep learning models for efficient insect pest detection. Agriculture 2022, 12, 1065. [Google Scholar] [CrossRef]
  15. De Backer, S.; Kempeneers, P.; Debruyn, W.; Scheunders, P. A band selection technique for spectral classification. IEEE Geosci. Remote Sens. Lett. 2005, 2, 319–323. [Google Scholar] [CrossRef]
  16. Kuo, B.C.; Li, C.H.; Yang, J.M. Kernel nonparametric weighted feature extraction for hyperspectral image classification. IEEE Trans. Geosci. Remote Sens. 2009, 47, 1139–1155. [Google Scholar]
  17. Huang, X.; Liu, X.; Zhang, L. A Multichannel Gray Level Co-Occurrence Matrix for Multi/Hyperspectral Image Texture Representation. Remote Sens. 2014, 6, 8424–8445. [Google Scholar] [CrossRef]
  18. Shen, L.; Jia, S. Three-dimensional Gabor wavelets for pixel-based hyperspectral imagery classification. IEEE Trans. Geosci. Remote Sens. 2011, 49, 5039–5046. [Google Scholar] [CrossRef]
  19. Deng, Y.J.; Li, H.C.; Tan, S.Q.; Hou, J.; Du, Q.; Plaza, A. t-Linear Tensor Subspace Learning for Robust Feature Extraction of Hyperspectral Images. IEEE Trans. Geosci. Remote Sens. 2023, 61, 5501015. [Google Scholar] [CrossRef]
  20. Deng, Y.J.; Yang, M.L.; Li, H.C.; Long, C.F.; Fang, K.; Du, Q. Feature Dimensionality Reduction with L2,p-Norm-Based Robust Embedding Regression for Classification of Hyperspectral Images. IEEE Trans. Geosci. Remote Sens. 2024, 62, 5509314. [Google Scholar] [CrossRef]
  21. Tarabalka, Y.; Fauvel, M.; Chanussot, J.; Benediktsson, J.A. SVM-and MRF-based method for accurate classification of hyperspectral images. IEEE Geosci. Remote Sens. Lett. 2010, 7, 736–740. [Google Scholar] [CrossRef]
  22. Huang, K.; Li, S.; Kang, X.; Fang, L. Spectral–spatial hyperspectral image classification based on KNN. Sens. Imaging 2016, 17, 1. [Google Scholar] [CrossRef]
  23. Xia, J.; Ghamisi, P.; Yokoya, N.; Iwasaki, A. Random forest ensembles and extended multiextinction profiles for hyperspectral image classification. IEEE Trans. Geosci. Remote Sens. 2017, 56, 202–216. [Google Scholar] [CrossRef]
  24. Khodadadzadeh, M.; Li, J.; Plaza, A.; Bioucas-Dias, J.M. A subspace-based multinomial logistic regression for hyperspectral image classification. IEEE Geosci. Remote Sens. Lett. 2014, 11, 2105–2109. [Google Scholar] [CrossRef]
  25. Chen, Y.; Lin, Z.; Zhao, X.; Wang, G.; Gu, Y. Deep learning-based classification of hyperspectral data. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2014, 7, 2094–2107. [Google Scholar] [CrossRef]
  26. Chen, Y.; Zhao, X.; Jia, X. Spectral–spatial classification of hyperspectral data based on deep belief network. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2015, 8, 2381–2392. [Google Scholar] [CrossRef]
  27. Ge, Z.; Cao, G.; Li, X.; Fu, P. Hyperspectral Image Classification Method Based on 2D–3D CNN and Multibranch Feature Fusion. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2020, 13, 5776–5788. [Google Scholar] [CrossRef]
  28. Wu, H.; Li, D.; Wang, Y.; Li, X.; Kong, F.; Wang, Q. Hyperspectral Image Classification Based on Two-Branch Spectral–Spatial-Feature Attention Network. Remote Sens. 2021, 13, 4262. [Google Scholar] [CrossRef]
  29. Jin, H.; Peng, J.; Bi, R.; Tian, H.; Zhu, H.; Ding, H. Comparing Laboratory and Satellite Hyperspectral Predictions of Soil Organic Carbon in Farmland. Agronomy 2024, 14, 175. [Google Scholar] [CrossRef]
  30. Carrino, T.A.; Crósta, A.P.; Toledo, C.L.B.; Silva, A.M. Hyperspectral remote sensing applied to mineral exploration in southern Peru: A multiple data integration approach in the Chapi Chiara gold prospect. Int. J. Appl. Earth Obs. Geoinf. 2018, 64, 287–300. [Google Scholar] [CrossRef]
  31. Shao, Y.; Ji, S.; Xuan, G.; Ren, Y.; Feng, W.; Jia, H.; Wang, Q.; He, S. Detection and Analysis of Chili Pepper Root Rot by Hyperspectral Imaging Technology. Agronomy 2024, 14, 226. [Google Scholar] [CrossRef]
  32. Long, C.F.; Wen, Z.D.; Deng, Y.J.; Hu, T.; Liu, J.L.; Zhu, X.H. Locality Preserved Selective Projection Learning for Rice Variety Identification Based on Leaf Hyperspectral Characteristics. Agronomy 2023, 13, 2401. [Google Scholar] [CrossRef]
  33. Xiao, Z.; Yin, K.; Geng, L.; Wu, J.; Zhang, F.; Liu, Y. Pest identification via hyperspectral image and deep learning. Signal Image Video Process. 2022, 16, 873–880. [Google Scholar] [CrossRef]
  34. Zheng, Y.; Zhang, T.; Fu, Y. A large-scale hyperspectral dataset for flower classification. Knowl.-Based Syst. 2022, 236, 107647. [Google Scholar] [CrossRef]
  35. Xu, Y.; Du, B.; Zhang, F.; Zhang, L. Hyperspectral image classification via a random patches network. ISPRS J. Photogramm. Remote Sens. 2018, 142, 344–357. [Google Scholar] [CrossRef]
  36. Cheng, C.; Li, H.; Peng, J.; Cui, W.; Zhang, L. Hyperspectral image classification via spectral-spatial random patches network. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2021, 14, 4753–4764. [Google Scholar] [CrossRef]
  37. Shenming, Q.; Xiang, L.; Zhihua, G. A new hyperspectral image classification method based on spatial-spectral features. Sci. Rep. 2022, 12, 1541. [Google Scholar] [CrossRef] [PubMed]
  38. Deng, J.; Dong, W.; Socher, R.; Li, L.J.; Li, K.; Fei-Fei, L. Imagenet: A large-scale hierarchical image database. In Proceedings of the 2009 IEEE Conference on Computer Vision and Pattern Recognition, Miami, FL, USA, 20–25 June 2009; pp. 248–255. [Google Scholar]
  39. Zapryanov, G.; Ivanova, D.; Nikolova, I. Automatic White Balance Algorithms for Digital Still Cameras—A Comparative Study. Inf. Technol. Control. 2012. Available online: http://acad.bg/rismim/itc/sub/archiv/Paper3_1_2012.pdf (accessed on 8 March 2024).
  40. He, K.; Zhang, X.; Ren, S.; Sun, J. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016; pp. 770–778. [Google Scholar]
  41. Huang, G.; Liu, Z.; Van Der Maaten, L.; Weinberger, K.Q. Densely connected convolutional networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2017; pp. 4700–4708. [Google Scholar]
  42. Sandler, M.; Howard, A.; Zhu, M.; Zhmoginov, A.; Chen, L.C. Mobilenetv2: Inverted residuals and linear bottlenecks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–23 June 2018; pp. 4510–4520. [Google Scholar]
  43. Magnusson, M.; Sigurdsson, J.; Armansson, S.E.; Ulfarsson, M.O.; Deborah, H.; Sveinsson, J.R. Creating RGB images from hyperspectral images using a color matching function. In Proceedings of the IGARSS 2020—2020 IEEE International Geoscience and Remote Sensing Symposium, Waikoloa, HI, USA, 26 September–2 October 2020; pp. 2045–2048. [Google Scholar]
  44. Pedregosa, F.; Varoquaux, G.; Gramfort, A.; Michel, V.; Thirion, B.; Grisel, O.; Blondel, M.; Prettenhofer, P.; Weiss, R.; Dubourg, V.; et al. Scikit-learn: Machine Learning in Python. J. Mach. Learn. Res. 2011, 12, 2825–2830. [Google Scholar]
  45. McInnes, L.; Healy, J.; Melville, J. Umap: Uniform manifold approximation and projection for dimension reduction. arXiv 2018, arXiv:1802.03426. [Google Scholar]
Figure 1. A sample in the HI30 dataset comprises 3D data, with H and W denoting the height and width of the image, respectively, while B represents the spectral dimension of the data.
Figure 2. Schematic of hyperspectral imaging system.
Figure 3. Taxonomy of the HI30 dataset. Displaying only a portion of the genera and species within the dataset as a taxonomic representation, with more comprehensive information available in the appendices.
Figure 4. The feature extractor proposed for tackling spectral pest classification. Starting with the original spectral input, two network branches featuring random spatial correlation and random spectral correlation extract features from different perspectives. For the random spatial correlation, $c_p$ patches are selected randomly and serve as convolutional kernel weights, resulting in $c_p$ feature maps after convolution (denoted as ⊗ in the figure); this is repeated L times. For the random spectral correlation, $c_s$ patches are selected and averaged along the spatial dimensions, saved as kernels, resulting in $c_s$ feature maps after convolution. The two kinds of feature maps are fused and compressed using PCA, then combined with the original spectral input to feed the classifier.
Figure 5. Visualization of the top 10 PCA channels of selected HI30 samples. The Viridis colormap is used for color encoding. The first column shows the rendered RGB data. (a–d) respectively represent Pieris rapae, Gryllidae, Nyctemera adversata Walker, and Melolontha.
Figure 6. Comparison of classification accuracy among neural network architectures using varied inputs: “PCA_i” denotes the i-dimensional PCA feature, and “-F” indicates fine-tuning on ImageNet pre-trained weights. Model abbreviations: RN18 (ResNet18), RN34 (ResNet34), MobileV2 (MobilenetV2), and Dense (Densenet121).
Figure 7. This confusion matrix illustrates the classification outcomes of original hyperspectral data using the DenseNet121 classification method. The vertical axis represents the true labels, while the horizontal axis indicates the predicted labels.
Figure 8. This confusion matrix presents the classification results using RGB data and indicates the underperformance in certain categories compared to the original hyperspectral data. It employs the DenseNet121 classification method, with the vertical axis representing the true labels and the horizontal axis representing the predicted labels.
Figure 9. The examples of misclassification derived from RGB data. Each row represents a unique category with correct label examples on the left column and misclassified samples on the right.
Figure 10. 2D UMAP feature visualization for different inputs. The features are extracted from the last layer of DenseNet121. The more separated the clusters of different colors, the more discriminative the features.
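The projection in Figure 10 can be reproduced with the umap-learn package once the DenseNet121 features have been extracted; the array shapes and colormap below are assumptions.

```python
# Sketch of the 2D UMAP feature visualization in Figure 10.
import matplotlib.pyplot as plt
import umap  # umap-learn


def plot_umap(features, labels):
    """features: (n_samples, n_features) DenseNet121 features; labels: (n_samples,)."""
    emb = umap.UMAP(n_components=2).fit_transform(features)
    plt.scatter(emb[:, 0], emb[:, 1], c=labels, cmap="tab20", s=5)
    plt.title("2D UMAP of DenseNet121 features")
    plt.show()
```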
Figure 11. Ablation studies alongside baseline metrics. (a) compares the results of spectral correlation and spatial correlation individually with the baseline; (b) illustrates the impact of different fusion methods for spectral and spatial information; (c) compares the final input dimensions.
Table 1. Comparison of classification Accuracy (Acc.) and Kappa score (multiplied by 100, denoted as κ) for different methods on varying inputs: rendered RGB, the original spectral data, and dimension-reduced PCA features. "PCA_i" refers to the i-dimensional PCA feature. Model abbreviations: RN18 (ResNet18), RN34 (ResNet34), MobileV2 (MobilenetV2), and Dense (Densenet121). A dash (—) means "not available". Color convention: best, 2nd-best.
| Group | Classifier | RGB (Acc. % / κ) | Original (Acc. % / κ) | PCA_3 (Acc. % / κ) | PCA_10 (Acc. % / κ) | TBSCN (Acc. % / κ) |
|---|---|---|---|---|---|---|
| DL | RN18 | 58.40 / 56.69 | 88.25 / 87.78 | 78.63 / 77.76 | 87.93 / 87.44 | 89.72 / 89.30 |
| DL | RN34 | 59.87 / 58.21 | 84.99 / 84.38 | 80.27 / 79.45 | 88.09 / 87.60 | 89.23 / 88.79 |
| DL | MobileV2 | 64.44 / 62.97 | 84.50 / 83.88 | 81.07 / 80.31 | 84.50 / 83.87 | 88.74 / 88.29 |
| DL | Dense121 | 71.77 / 70.62 | 90.05 / 89.64 | 87.28 / 87.60 | 91.68 / 91.34 | 93.96 / 93.72 |
| DL + SVM | RN18 + SVM | 58.70 / 56.73 | 89.23 / 88.79 | 89.56 / 89.13 | 90.54 / 90.15 | 89.39 / 88.97 |
| DL + SVM | RN34 + SVM | 56.42 / 55.36 | 85.97 / 85.40 | 87.44 / 86.92 | 87.92 / 87.44 | 91.57 / 91.18 |
| DL + SVM | Mobile + SVM | 63.49 / 61.56 | 84.99 / 84.38 | 82.38 / 81.66 | 86.30 / 85.74 | 89.23 / 88.80 |
| DL + SVM | Dense + SVM | 71.17 / 70.49 | 91.19 / 90.83 | 87.60 / 87.10 | 92.50 / 92.19 | 92.49 / 92.19 |
| Handcrafted + SVM | Gabor + SVM | 43.28 / 43.28 | — | 52.37 / 52.37 | — | — |
| Handcrafted + SVM | SIFT + SVM | 28.06 / 24.77 | — | 29.36 / 26.40 | — | — |
| Handcrafted + SVM | Histogram + SVM | 58.78 / 56.14 | — | 71.45 / 70.30 | — | — |
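The "DL + SVM" rows in Table 1 pair deep features with a support vector classifier, and κ is Cohen's kappa scaled by 100. A minimal sketch of that pipeline follows; the SVM kernel, C value, and the way the backbone's classification head is removed are assumptions, not the settings reported above.

```python
# Hedged sketch of the "DL + SVM" pipeline and the Acc/kappa metrics in Table 1.
import torch
from sklearn.metrics import accuracy_score, cohen_kappa_score
from sklearn.svm import SVC


@torch.no_grad()
def extract_features(backbone, loader, device="cpu"):
    """backbone: network with its classification head removed; loader yields (x, y)."""
    feats, labels = [], []
    for x, y in loader:
        feats.append(backbone(x.to(device)).flatten(1).cpu())
        labels.append(y)
    return torch.cat(feats).numpy(), torch.cat(labels).numpy()


# train_x, train_y = extract_features(backbone, train_loader)
# test_x, test_y = extract_features(backbone, test_loader)
# svm = SVC(kernel="rbf", C=10.0).fit(train_x, train_y)        # assumed hyperparameters
# pred = svm.predict(test_x)
# acc = 100 * accuracy_score(test_y, pred)
# kappa_x100 = 100 * cohen_kappa_score(test_y, pred)
```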
Table 2. Classification performance of different categories using the original hyperspectral data and RGB data with DenseNet121. In the "Species" column, only the first word of each species name is displayed; for complete names, please refer to Appendix A. Bold items indicate cases where the RGB data performed worse than half of the corresponding hyperspectral result. "Acc." stands for Accuracy.
| Species | RGB Acc. (%) | RGB Recall | RGB F1 | Hyperspectral Acc. (%) | Hyperspectral Recall | Hyperspectral F1 |
|---|---|---|---|---|---|---|
| Polygonia | 100 | 1.00 | 0.97 | 100 | 1.00 | 1.00 |
| Ctenoplusia | 31 | 0.31 | 0.42 | 62 | 0.62 | 0.77 |
| Oedipodidae | 64 | 0.65 | 0.65 | 82 | 0.82 | 0.82 |
| Pieris | 93 | 0.94 | 0.91 | 87 | 0.88 | 0.88 |
| Nezara | 97 | 0.98 | 0.99 | 100 | 1.00 | 1.00 |
| Gryllotalpa | 76 | 0.77 | 0.77 | 69 | 0.69 | 0.78 |
| Atractomorpha | 64 | 0.64 | 0.64 | 71 | 0.71 | 0.74 |
| Lycorma | 85 | 0.86 | 0.83 | 100 | 1.00 | 1.00 |
| Nyctemera | 87 | 0.88 | 0.86 | 93 | 0.94 | 0.94 |
| Sweetpotato | 76 | 0.77 | 0.80 | 100 | 1.00 | 1.00 |
| Diaphania | 52 | 0.52 | 0.63 | 96 | 0.96 | 0.98 |
| Odontotermes | 66 | 0.67 | 0.70 | 95 | 0.95 | 0.93 |
| Vespidae | 57 | 0.57 | 0.53 | 64 | 0.64 | 0.72 |
| Calliphoridae | 89 | 0.89 | 0.83 | 100 | 1.00 | 0.95 |
| Erthesina | 83 | 0.83 | 0.73 | 100 | 1.00 | 1.00 |
| Pyralidae | 89 | 0.90 | 0.81 | 89 | 0.90 | 0.93 |
| Forficulidae | 77 | 0.77 | 0.77 | 86 | 0.86 | 0.90 |
| Spilarctia | 94 | 0.94 | 0.76 | 100 | 1.00 | 0.94 |
| Melolontha | 87 | 0.88 | 0.76 | 100 | 1.00 | 1.00 |
| Platycnemididae | 81 | 0.81 | 0.79 | 93 | 0.94 | 0.91 |
| Syrphidae | 48 | 0.49 | 0.47 | 100 | 1.00 | 1.00 |
| Tephritidae | 30 | 0.30 | 0.43 | 70 | 0.70 | 0.78 |
| Gomphidae | 100 | 1.00 | 0.90 | 100 | 1.00 | 0.93 |
| Athetis | 43 | 0.44 | 0.48 | 100 | 1.00 | 0.94 |
| Deudorix | 71 | 0.71 | 0.77 | 92 | 0.93 | 0.96 |
| Prodenia | 54 | 0.54 | 0.53 | 91 | 0.92 | 0.83 |
| Gryllidae | 26 | 0.26 | 0.36 | 78 | 0.79 | 0.79 |
| Lycaenidae | 67 | 0.68 | 0.63 | 97 | 0.97 | 0.96 |
| Parnara | 46 | 0.47 | 0.54 | 93 | 0.93 | 0.82 |
| Apis | 74 | 0.74 | 0.74 | 100 | 1.00 | 0.99 |
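The per-species numbers in Table 2 can be derived from the same set of predictions; in the sketch below, per-class accuracy is reported as that class's recall expressed as a percentage, which is an assumption consistent with the paired Acc./Recall columns rather than a statement of the authors' exact computation.

```python
# Sketch of per-class Recall/F1 reporting as in Table 2 (variable names are placeholders).
from sklearn.metrics import precision_recall_fscore_support


def per_class_report(y_true, y_pred, class_names):
    _, recall, f1, _ = precision_recall_fscore_support(y_true, y_pred, zero_division=0)
    for name, r, f in zip(class_names, recall, f1):
        print(f"{name:20s}  Acc. {100 * r:5.1f}%  Recall {r:.2f}  F1 {f:.2f}")
```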
Table 3. Comparison of classification Accuracy (Acc.), Recall (multiplied by 100), F1 Score (F1, multiplied by 100), and Kappa score (κ, multiplied by 100) among neural networks fine-tuned on ImageNet pre-trained weights using varied inputs: rendered RGB, the original spectral data, and dimension-reduced PCA features. "PCA_i" represents the i-dimensional PCA feature. Model abbreviations: RN18 (ResNet18), RN34 (ResNet34), MobileV2 (MobilenetV2), and Dense (Densenet121). Color convention: best and second best.
| Model | Metric | RGB | Original | PCA_3 | PCA_10 |
|---|---|---|---|---|---|
| RN18 | Acc. (%) | 70.63 | 84.34 | 80.91 | 87.27 |
| RN18 | κ | 69.42 | 83.71 | 80.13 | 86.75 |
| RN18 | Recall | 69.06 | 83.08 | 78.84 | 85.74 |
| RN18 | F1 | 68.82 | 82.70 | 79.04 | 85.72 |
| RN34 | Acc. (%) | 70.96 | 82.87 | 81.40 | 86.62 |
| RN34 | κ | 69.77 | 82.18 | 80.62 | 86.08 |
| RN34 | Recall | 69.87 | 82.23 | 79.49 | 85.16 |
| RN34 | F1 | 69.70 | 81.73 | 79.78 | 85.18 |
| MobileV2 | Acc. (%) | 79.61 | 85.97 | 82.22 | 87.27 |
| MobileV2 | κ | 78.78 | 85.40 | 81.49 | 86.75 |
| MobileV2 | Recall | 77.52 | 84.69 | 80.33 | 85.88 |
| MobileV2 | F1 | 77.76 | 84.64 | 80.64 | 86.37 |
| Dense | Acc. (%) | 81.73 | 90.53 | 89.23 | 91.68 |
| Dense | κ | 80.98 | 90.15 | 88.79 | 91.34 |
| Dense | Recall | 80.83 | 88.68 | 87.98 | 90.68 |
| Dense | F1 | 81.32 | 88.87 | 88.20 | 90.85 |
Table 4. Comparison of classification Recall (multiplied by 100) and F1 Score (F1, multiplied by 100) among different methods on varying inputs: rendered RGB, the original spectral data, and dimension-reduced PCA features. "PCA_i" refers to the i-dimensional PCA feature. Model abbreviations: RN18 (ResNet18), RN34 (ResNet34), MobileV2 (MobilenetV2), and Dense (Densenet121). A dash (—) means "not available". Color convention: best, 2nd-best.
| Group | Classifier | RGB (Recall / F1) | Original (Recall / F1) | PCA_3 (Recall / F1) | PCA_10 (Recall / F1) | TBSCN (Recall / F1) |
|---|---|---|---|---|---|---|
| DL | RN18 | 55.67 / 55.25 | 86.15 / 85.59 | 77.08 / 78.00 | 86.00 / 86.00 | 88.26 / 88.24 |
| DL | RN34 | 58.41 / 59.05 | 86.27 / 82.74 | 78.00 / 78.20 | 86.34 / 86.31 | 87.60 / 87.76 |
| DL | MobileV2 | 63.41 / 63.80 | 83.82 / 83.55 | 80.15 / 79.96 | 81.74 / 82.12 | 87.79 / 87.76 |
| DL | Dense121 | 70.66 / 70.01 | 88.61 / 88.47 | 86.84 / 86.72 | 89.89 / 90.13 | 93.12 / 92.91 |
| DL + SVM | RN18 + SVM | 55.93 / 55.72 | 87.09 / 87.23 | 88.48 / 89.13 | 88.57 / 88.53 | 87.97 / 87.92 |
| DL + SVM | RN34 + SVM | 56.75 / 57.73 | 84.28 / 84.35 | 86.33 / 86.68 | 86.59 / 86.35 | 90.74 / 90.60 |
| DL + SVM | Mobile + SVM | 61.32 / 60.86 | 84.03 / 84.17 | 80.43 / 80.96 | 85.12 / 85.08 | 88.14 / 88.06 |
| DL + SVM | Dense + SVM | 70.86 / 69.44 | 89.27 / 89.38 | 86.02 / 85.99 | 91.22 / 91.23 | 91.25 / 91.96 |
| Handcrafted + SVM | Gabor + SVM | 40.43 / 43.28 | — | 50.17 / 52.37 | — | — |
| Handcrafted + SVM | SIFT + SVM | 24.70 / 23.88 | — | 28.51 / 28.40 | — | — |
| Handcrafted + SVM | Histogram + SVM | 55.68 / 54.51 | — | 70.14 / 69.72 | — | — |
Table 5. Comparison of classification Accuracy (Acc.), Recall (multiplied by 100), F1 Score (F1, multiplied by 100), and Kappa score (κ, multiplied by 100) among different methods on varied inputs: dimension-reduced PCA features. "PCA_i" represents the i-dimensional PCA feature. "Only spatial" denotes the two-branch structure restricted to spatial information, disregarding spectral information, whereas "Only spectral" attends exclusively to spectral information and neglects spatial aspects. Model abbreviations: RN18 (ResNet18), RN34 (ResNet34), MobileV2 (MobilenetV2), and Dense (Densenet121). Color convention: best, 2nd-best.
| Input | Metric | RN18 | RN34 | MobileV2 | Dense | RN18 + SVM | RN34 + SVM | MobileV2 + SVM | Dense + SVM |
|---|---|---|---|---|---|---|---|---|---|
| PCA_10 | Acc. (%) | 87.93 | 88.09 | 84.50 | 91.68 | 90.54 | 87.92 | 86.30 | 92.50 |
| PCA_10 | Recall | 86.00 | 86.34 | 81.74 | 89.89 | 88.57 | 86.59 | 85.12 | 91.22 |
| PCA_10 | F1 | 86.00 | 86.31 | 82.12 | 90.13 | 88.53 | 86.35 | 85.08 | 91.23 |
| PCA_10 | κ | 87.44 | 87.06 | 83.87 | 91.34 | 90.15 | 87.44 | 85.74 | 92.19 |
| Only spatial | Acc. (%) | 89.55 | 90.05 | 87.27 | 92.82 | 91.68 | 91.03 | 90.05 | 93.64 |
| Only spatial | Recall | 87.43 | 88.30 | 84.89 | 90.89 | 90.38 | 89.59 | 89.05 | 92.32 |
| Only spatial | F1 | 87.97 | 88.50 | 85.60 | 91.13 | 90.23 | 89.57 | 88.92 | 92.29 |
| Only spatial | κ | 89.13 | 89.64 | 86.75 | 92.53 | 91.34 | 90.66 | 89.65 | 93.38 |
| Only spectral | Acc. (%) | 88.42 | 88.90 | 87.11 | 93.31 | 91.57 | 89.72 | 89.56 | 93.96 |
| Only spectral | Recall | 86.20 | 87.08 | 86.40 | 92.32 | 90.19 | 88.50 | 88.22 | 92.97 |
| Only spectral | F1 | 86.00 | 87.20 | 86.31 | 92.50 | 90.00 | 88.51 | 88.39 | 92.68 |
| Only spectral | κ | 87.95 | 88.45 | 86.59 | 93.04 | 91.17 | 89.30 | 89.13 | 93.72 |
| TBSCN | Acc. (%) | 89.72 | 89.23 | 88.74 | 93.96 | 89.39 | 91.57 | 89.23 | 92.49 |
| TBSCN | Recall | 88.26 | 87.60 | 87.79 | 93.12 | 87.97 | 90.74 | 88.14 | 91.25 |
| TBSCN | F1 | 88.24 | 87.76 | 87.76 | 92.91 | 87.92 | 90.60 | 88.06 | 90.96 |
| TBSCN | κ | 89.30 | 88.79 | 88.29 | 93.72 | 88.97 | 91.18 | 88.80 | 92.19 |
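The "Only spatial" and "Only spectral" rows in Table 5 can be emulated by keeping a single branch before the compression step. The sketch below reuses the helper functions from the Figure 4 sketch above and is an assumption about how the ablation is wired, not the authors' exact code.

```python
# Ablation sketch for Table 5: keep one branch, or both (TBSCN), before PCA compression.
import torch


def build_input(cube, mode="tbscn"):
    if mode == "only_spatial":
        feats = random_spatial_correlation(cube)
    elif mode == "only_spectral":
        feats = random_spectral_correlation(cube)
    else:  # full TBSCN: concatenate both branches
        feats = torch.cat(
            [random_spatial_correlation(cube), random_spectral_correlation(cube)], dim=1
        )
    return torch.cat([cube.unsqueeze(0), pca_compress(feats)], dim=1)
```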
Table 6. Comparison of classification Accuracy (Acc.), Recall (multiplied by 100), F1 Score (F1, multiplied by 100), and Kappa score (κ, multiplied by 100) among different methods. Model abbreviations: RN18 (ResNet18), RN34 (ResNet34), MobileV2 (MobilenetV2), and Dense (Densenet121). "s × s" denotes dot (element-wise) multiplication for integrating the two branches, while "s + s" denotes summation of the two information types. "ss_i" denotes the fused information downscaled to i dimensions by PCA. Color convention: best, 2nd-best.
| Fusion | Metric | RN18 | RN34 | MobileV2 | Dense | RN18 + SVM | RN34 + SVM | MobileV2 + SVM | Dense + SVM |
|---|---|---|---|---|---|---|---|---|---|
| s + s | Acc. (%) | 88.41 | 90.86 | 86.62 | 91.68 | 91.35 | 91.03 | 88.58 | 92.98 |
| s + s | Recall | 86.99 | 89.42 | 89.44 | 90.00 | 89.87 | 89.42 | 87.12 | 92.27 |
| s + s | F1 | 86.70 | 89.44 | 84.81 | 90.14 | 89.88 | 89.56 | 87.05 | 92.07 |
| s + s | κ | 87.95 | 90.49 | 86.08 | 91.34 | 91.00 | 90.66 | 88.12 | 92.70 |
| s × s | Acc. (%) | 90.21 | 87.76 | 86.95 | 93.15 | 91.19 | 87.93 | 88.74 | 91.68 |
| s × s | Recall | 89.10 | 86.04 | 85.74 | 92.32 | 90.06 | 85.71 | 87.48 | 90.37 |
| s × s | F1 | 88.90 | 86.17 | 85.70 | 92.14 | 89.99 | 85.99 | 87.57 | 90.13 |
| s × s | κ | 89.82 | 87.77 | 86.42 | 92.87 | 90.83 | 87.43 | 88.29 | 91.34 |
| ss_15 | Acc. (%) | 88.79 | 87.93 | 86.79 | 92.99 | 91.35 | 89.56 | 87.60 | 93.15 |
| ss_15 | Recall | 87.10 | 86.24 | 85.00 | 91.83 | 89.50 | 88.93 | 85.96 | 92.16 |
| ss_15 | F1 | 87.29 | 86.32 | 84.80 | 91.88 | 89.66 | 88.54 | 85.87 | 92.05 |
| ss_15 | κ | 88.80 | 87.44 | 86.25 | 92.70 | 91.00 | 89.14 | 87.10 | 92.87 |
| ss_5 | Acc. (%) | 87.77 | 88.74 | 88.91 | 92.82 | 90.86 | 89.72 | 88.91 | 93.31 |
| ss_5 | Recall | 85.56 | 87.24 | 87.27 | 91.67 | 89.26 | 88.12 | 87.77 | 92.44 |
| ss_5 | F1 | 86.09 | 87.14 | 87.14 | 91.71 | 89.28 | 87.87 | 87.55 | 92.39 |
| ss_5 | κ | 87.26 | 88.29 | 88.46 | 92.53 | 90.49 | 89.30 | 88.46 | 93.04 |
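The fusion variants compared in Table 6 differ only in how the two feature stacks are merged. The following sketch mirrors the caption's notation and reuses pca_compress from the Figure 4 sketch; the assumption that "s + s" and "s × s" operate on stacks with matching channel counts is made for illustration only.

```python
# Fusion sketch for Table 6: "s + s" (summation), "s x s" (dot multiplication),
# and "ss_i" (concatenation followed by PCA to i dimensions).
import torch


def fuse(spatial_feats, spectral_feats, mode="ss", i_dims=10):
    if mode == "s+s":
        return spatial_feats + spectral_feats      # assumes matching channel counts
    if mode == "sxs":
        return spatial_feats * spectral_feats      # element-wise (dot) multiplication
    stacked = torch.cat([spatial_feats, spectral_feats], dim=1)
    return pca_compress(stacked, n_components=i_dims)
```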