Review

A Review of the Application of Hyperspectral Imaging Technology in Agricultural Crop Economics

1 School of Physics and Mechatronic Engineering, Guizhou Minzu University, Guiyang 550025, China
2 School of Equipment Manufacturing Polytechnic, Office of Academic Research, Guiyang 550008, China
3 School of Mechanical Engineering, Guizhou Institute of Technology, Guiyang 550003, China
* Author to whom correspondence should be addressed.
Coatings 2024, 14(10), 1285; https://doi.org/10.3390/coatings14101285
Submission received: 9 September 2024 / Revised: 2 October 2024 / Accepted: 4 October 2024 / Published: 9 October 2024
(This article belongs to the Special Issue Machine Learning-Driven Advancements in Coatings)

Abstract:
China is a large agricultural country, and the crop economy holds an important place in the national economy. The identification of crop diseases and pests, as well as the non-destructive classification of crops, has always been a challenge in agricultural development, hindering the rapid growth of the agricultural economy. Hyperspectral imaging technology combines imaging and spectral techniques, using hyperspectral cameras to acquire raw image data of crops. After correcting and preprocessing the raw image data to obtain the required spectral features, it becomes possible to achieve the rapid non-destructive detection of crop diseases and pests, as well as the non-destructive classification and identification of agricultural products. This paper first provides an overview of the current applications of hyperspectral imaging technology in crops both domestically and internationally. It then summarizes the methods of hyperspectral data acquisition and application scenarios. Subsequently, it organizes the processing of hyperspectral data for crop disease and pest detection and classification, deriving relevant preprocessing and analysis methods for hyperspectral data. Finally, it conducts a detailed analysis of classic cases using hyperspectral imaging technology for detecting crop diseases and pests and non-destructive classification, while also analyzing and summarizing the future development trends of hyperspectral imaging technology in agricultural production. The non-destructive rapid detection and classification technology of hyperspectral imaging can effectively select qualified crops and classify crops of different qualities, ensuring the quality of agricultural products. In conclusion, hyperspectral imaging technology can effectively serve the agricultural economy, making agricultural production more intelligent and holding significant importance for the development of agriculture in China.

1. Introduction

The development of agriculture, as the economic foundation of China, holds a significant position in the national economy. The yield and quality of crops therefore receive considerable attention: high yields and good quality bring enormous benefits to China’s economy and to people’s quality of life. The growth of crops from planting to maturity is a cyclical process during which crops may be affected by pests and diseases, so achieving the non-destructive detection of crop pests and diseases is crucial. Additionally, the classification of agricultural products is a key step in ensuring their quality, and how to classify and identify them non-destructively remains a critical issue.
For the issues above, traditional methods of detection and classification may damage crops and yield low classification accuracy. Compared with traditional methods, hyperspectral imaging technology can effectively address these problems. It integrates imaging and spectroscopic techniques to detect and classify targets; in agricultural development, it enables the non-destructive, rapid detection of crops and improves the accuracy of crop product classification.
Hyperspectral imaging (HSI) technology originated from hyperspectral remote sensing technology and, together with laser radar technology and synthetic aperture radar technology, is known as one of the three core technologies for obtaining Earth observation information. The research on hyperspectral imaging technology was pioneered by the Jet Propulsion Laboratory (JPL) in the United States, starting in the 1980s [1]. In 1983, with strong support from the National Aeronautics and Space Administration (NASA), JPL developed the first airborne imaging spectrometer (AIS) internationally [2]. China’s research on hyperspectral imaging technology began slightly later, at the beginning of the 1990s [3]. The Shanghai Institute of Technical Physics was the first to successfully develop the airborne imaging spectrometer OMIS and the wide-field push-broom hyperspectral imaging instrument PHI [4].
The working principle of hyperspectral imaging technology is to utilize the differences in the reflection, absorption, and scattering characteristics of different materials to acquire rich spectral and spatial image information about the target object. The working process involves illuminating the target object with a light source, which generates a unique response in the form of reflection, scattering, or transmission of light. This response is then dispersed into different wavelengths by a spectral dispersion device, and separated and imaged by a grating, prism, or acousto-optic tunable filter (AOTF) based on the principle of spectral separation. The detector collects the intensity data of the different wavelengths, which are further processed to construct a hyperspectral image, enabling in-depth analysis of the composition, structure, and state of the object. In contrast to traditional color or single-wavelength imaging methods, hyperspectral imaging can capture spectral information on the object’s reflection or emission over hundreds of continuous wavelengths. Moreover, hyperspectral imaging offers advantages such as providing richer information, enhancing identification and analysis capabilities, enabling non-destructive testing, and being applicable to a wide range of applications [5].
Hyperspectral imaging technology has been widely applied in various fields such as agriculture [6,7,8], environmental protection [9,10,11], medical diagnosis [12,13,14], and cultural heritage research [15,16,17] since its development. In agricultural production, it is commonly used for the non-destructive rapid detection of crop pests and diseases and the classification and identification of agricultural products. With the continuous development of this technology, the methods of acquiring original image data for detection objects using hyperspectral cameras have become more diversified and efficient, including indoor collection methods and UAV-based methods. The collected hyperspectral original image data are susceptible to environmental factors, equipment-related factors, and other influences, containing rich information and multidimensional data. Therefore, preprocessing is necessary before conducting data analysis. During data analysis, machine learning algorithms and deep learning algorithms are typically used to build recognition and classification models. After extensive data training, these models can effectively achieve the detection and identification of crop pests and diseases, as well as the classification of agricultural products.
In agricultural production, traditional methods for detecting and identifying crop diseases and pests are prone to crop damage, have low accuracy, and lack timeliness. Traditional methods for classifying and identifying agricultural products are time-consuming, labor-intensive, prone to product damage, and inefficient. Therefore, this article focuses on hyperspectral imaging technology, and combines common methods for hyperspectral data acquisition, data preprocessing, and data analysis to address the issues of detecting and identifying crop diseases and pests, as well as classifying and identifying agricultural products.

2. Development of Hyperspectral Imaging Applications

2.1. Crop Pest and Disease Detection and Identification Application Development

Since its successful implementation, hyperspectral imaging technology has gradually been commercialized, and a large number of hyperspectral imaging instrument manufacturers have emerged both domestically and internationally. The instruments they produce are progressively being applied in fields such as agriculture, forestry, and healthcare. Particularly in recent years, with the rapid development of agriculture and artificial intelligence, this technology combined with AI algorithms has been widely used in the agricultural production sector, finding extensive applications in the detection of crop pests and diseases and the classification and identification of agricultural products.
Hyperspectral imagers can capture spectral characteristic changes in crops when affected by pests and diseases, enabling detection and early warning before or at the onset of pests and diseases [18]. This assists farmers in promptly implementing preventive measures to reduce losses caused by pests and diseases. By analyzing spectral information from different bands, this technology can identify and classify various types of pests and diseases, including viruses, bacteria, fungi, and insects [19]. This helps farmers accurately determine the type and severity of pests and diseases, providing a basis for precise control measures.
Researchers have conducted extensive studies using hyperspectral imaging technology to address the issue of crop pest and disease detection. By conducting detailed spectral sampling of crops, this technology can acquire continuous spectral curves of crops, analyze subtle spectral features, and thereby achieve pest and disease detection in crops.
Yu Jiajia et al. proposed the independent soft modeling method (SIMCA) for the early detection of gray mold on tomato leaves. They extracted characteristic spectral band images of gray mold on tomato leaves and fused the bands using multiple linear regression (MLR) to obtain a merged image, establishing a technical route for obtaining disease information using the minimum distance method. The results indicate that the proposed method exhibited excellent predictive capabilities, providing a new approach to early detection of gray mold on tomatoes [20].
Zhang Jingyi et al. aimed to detect Alternaria leaf spot disease in melons. They utilized hyperspectral imaging technology to acquire raw spectral images, preprocessed the raw data using principal component analysis and minimum noise fraction methods, and established leaf lesion discrimination models using the K-nearest neighbors and support vector machine methods. The study demonstrated the effectiveness of this method in detecting Alternaria leaf spot disease in melons [21].
Oilseed rape mycosphaerella is a soil-borne disease with no visible symptoms on the leaves in its early stage, making it difficult to detect from the surface of the plant. Liang Wanjie et al. used hyperspectral imaging technology combined with a deep learning model to construct a recognition model for the early onset of this disease and achieved good recognition results [22].
Li Yang et al. sought to achieve the rapid online identification of cucumber diseases and pests. They used hyperspectral imaging and machine learning to study the rapid identification of cucumber downy mildew and two-spotted spider mite infestation. The model built using full-band spectral data preprocessed with the MA method showed the best identification performance, with the MA-RF model achieving an overall classification accuracy (OA) of 97.89% and a Kappa coefficient of 0.97. Research results indicated that using hyperspectral imaging technology for identifying cucumber downy mildew and two-spotted spider mite infestation yielded excellent results [23].
The above-mentioned study only identified a small number of samples, demonstrating the feasibility of using hyperspectral imaging technology in the detection of crop diseases and pests. However, due to the high cost and intricate operation of hyperspectral equipment, as well as the timeliness of detection, it is not feasible for large-scale use in agricultural production. With continuous technological innovation and the rapid development of IoT and unmanned aerial vehicle (UAV) technology, there is hope that hyperspectral imaging technology will be widely utilized in agricultural production.
In conclusion, the application of hyperspectral imaging technology in crop pest and disease detection can more efficiently assist farmers in managing their crops, promoting the rapid development of smart agriculture. This technology can make crop growth more intelligent and precise.

2.2. Development of Applications for Classification and Identification of Agricultural Products

Before the successful implementation of hyperspectral imaging technology, traditional methods were primarily utilized for agricultural product classification. These conventional techniques often require extensive manual intervention, are time-consuming, labor-intensive, and prone to errors. Hyperspectral imaging technology can capture multispectral information on the target, enabling the identification of distinct features of different agricultural products based on subtle spectral differences, thus achieving the precise and non-destructive classification of agricultural products [24].
Hyperspectral imaging technology enables rapid, non-contact detection of agricultural products, allowing for the identification of potential contaminants and harmful substances such as pesticide residues, thereby ensuring food safety [25]. Moreover, hyperspectral imaging technology plays a vital role in agricultural intelligence, facilitating the transformation of agricultural production methods and enhancing both the efficiency and quality of agricultural production.
Many researchers have effectively addressed the time-consuming, labor-intensive, and inefficient issues encountered in agricultural product classification and identification by utilizing hyperspectral imaging technology in conjunction with machine learning and deep learning methods. This technological approach enables the efficient and non-destructive identification of agricultural products.
Zhang Fu et al. proposed a support vector machine (SVM) classification model based on hyperspectral imaging technology to achieve the efficient and non-destructive identification of Ginkgo biloba varieties. They optimized the model parameters using genetic algorithms (GA) and particle swarm optimization (PSO) to enhance the accuracy of variety identification. Experimental results indicated that the SNV-CARS-PSO-SVM model had the best identification performance, with a classification accuracy of 96.67%. This suggests that the CARS feature wavelength variables can represent the full-wavelength information and that the PSO-SVM model demonstrates good variety identification performance, enabling the identification of Ginkgo biloba varieties [26].
Ma Lingkai et al. used hyperspectral imaging technology to differentiate between organic eggs and conventional eggs. They established egg category identification models using partial least squares discriminant analysis (PLS-DA) and support vector machines (SVM). Due to the large volume of hyperspectral data and the presence of significant redundant information, they applied the successive projection algorithm (SPA) and competitive adaptive reweighted sampling (CARS) to reduce the dimensionality of the egg yolk ROI data, eliminating redundant information to build the classification models. The experimental results showed that the SPA-SVM identification model achieved the highest accuracy rate of 94.2% on the test set [27].
Li Guohou et al. collected hyperspectral images of the front and back of wheat seeds from eight varieties using a hyperspectral imaging device. They proposed a hyperspectral wheat variety identification method based on an attention mechanism mixed with a convolutional neural network using this dataset. Experimental results demonstrated that the proposed method outperformed other methods with a classification accuracy of 97.92% [28].
Zhang Long et al. utilized hyperspectral imaging technology in combination with a Savitzky–Golay smoothing filter (SG), multiplicative scatter correction (MSC), and a support vector machine (SVM) to establish a classification method for tobacco leaves and contaminants. The experimental results indicated an overall classification accuracy of 99.92% for tobacco leaves and contaminants, with a Kappa coefficient of 0.998 [29].
In conclusion, the application of hyperspectral imaging technology in agricultural product classification and identification holds significant importance, as it can enhance classification accuracy, reduce manual costs, improve work efficiency, ensure food safety, and drive the development of intelligent agriculture.

3. Hyperspectral Data Acquisition

Nowadays, according to the different application requirements in agriculture, spectral data acquisition can be divided into two types: indoor acquisition and outdoor acquisition. An indoor acquisition system consists mainly of a hyperspectral camera, a halogen lamp, a dark box, and a slider. The main advantage of this setup is that it reduces the impact of environmental factors, thereby improving the final recognition accuracy and classification precision. Even indoors, however, the lighting conditions may be influenced by artificial light sources, natural light, and the objects themselves, leading to variations in light intensity and color that can affect the accuracy and stability of hyperspectral data. Objects may also be obscured or affected by shadow effects, resulting in occlusion or confusion in the hyperspectral data, which can impact data collection and analysis results. To achieve the classification and identification of sorghum varieties, Song Shaozhong et al. built an indoor spectral acquisition system based on hyperspectral imaging technology; by employing various preprocessing and data analysis methods, they achieved a classification accuracy of 97.16% [30]. The schematic diagram of the indoor acquisition system is shown in Figure 1.
Due to limitations in conditions, indoor acquisition is generally used for small-scale and limited sample testing. When there is a need to collect data on a large scale with multiple samples, outdoor acquisition methods are required. Outdoor acquisition methods mainly involve using drones equipped with hyperspectral cameras to collect spectral data. The outdoor acquisition system primarily consists of a drone, a gimbal, and a hyperspectral camera. The advantage of this method is the ability to collect data over large areas, which increases the collection rate. Outdoor data collection is susceptible to environmental factors such as lighting and weather, with lighting being one of the crucial environmental factors in hyperspectral data acquisition. Variations in lighting conditions can affect the reflectance spectral characteristics of objects, leading to color deviations or unstable reflectance rates in data collected under different lighting conditions. Weather factors such as overcast skies and cloudy conditions can also impact hyperspectral data collection. Under different weather conditions, factors like atmospheric water vapor content and aerosols can affect the transmission and scattering of light, thereby influencing the quality and accuracy of the data. Components in the atmosphere, such as gases and water vapor, can also affect the propagation of light and the reflection of objects. Atmospheric water vapor absorbs light at specific wavelengths, affecting the reflectance of specific bands in hyperspectral data. Factors such as lighting angles and vegetation coverage in different seasons can also impact hyperspectral data collection. Seasonal changes can alter the growth status and color of vegetation, thereby affecting the interpretation and analysis of the data. Feng Haikuan et al. utilized a drone equipped with a hyperspectral imager to monitor nitrogen levels in winter wheat [31].
Hyperspectral imaging is commonly performed using outdoor acquisition methods for detecting crop pests and diseases and monitoring their growth conditions. This method allows for the acquisition of a large amount of data, enabling the training of a significant dataset during data analysis, which can help reduce measurement errors. The outdoor data collection system primarily relies on data acquisition through unmanned aerial vehicles (UAVs). With continuous technological innovation, this collection system ensures real-time feedback on data acquisition information and collaborates with ground-based data analysis systems to facilitate the large-scale application of this data collection method in agricultural production.

4. Hyperspectral Data Processing

4.1. The Need for Hyperspectral Data Preprocessing

Hyperspectral raw data cannot be directly used for data analysis because various sources of noise may exist during the collection of the raw hyperspectral data of crops and agricultural products, such as atmospheric interference, instrument noise, and disturbances during the data acquisition process [32]. The hyperspectral raw data can also be influenced by factors like lighting conditions and sensor responses. By applying correction processes to the data, these influencing factors can be eliminated, making the data more accurate and comparable [33].
Hyperspectral raw data typically have high dimensions, containing a large amount of spectral information [34]. To enhance the efficiency of data processing and analysis, it is necessary to perform dimensionality reduction on the data, mapping the data from a high-dimensional space to a low-dimensional space while retaining the most representative information [35]. The raw hyperspectral data contain rich spectral information and extracting useful features from them is a critical issue. Feature extraction transforms the data into more representative and discriminative feature vectors, laying the foundation for subsequent modeling and analysis [36,37].
In summary, processing hyperspectral raw data involves noise removal, data correction, dimensionality reduction, and feature extraction. These preprocessing steps can enhance the quality and reliability of the data, reduce the impact of interfering factors, and provide a more reliable and effective basis for subsequent data analysis and applications, ultimately improving the accuracy of data analysis.

4.2. Hyperspectral Data Processing Flow

Hyperspectral raw data possess characteristics such as high dimensionality and diversity. Utilizing raw data directly for data analysis may lead to compromised detection results and classification accuracy. Therefore, it is crucial to process hyperspectral raw data with the main aims being to extract meaningful data, filter out irrelevant information, and eliminate various environmental influences [38]. The general data processing workflow is illustrated in Figure 2 below.

4.3. Hyperspectral Data Preprocessing Methods

4.3.1. Data Calibration and Pre-Processing

During the process of image data acquisition by a hyperspectral camera, the acquired images are raw and uncorrected, necessitating the post-acquisition correction of the image data. The primary reasons for this are as follows: on one hand, during image acquisition, the hyperspectral camera can be affected by dark currents and the surrounding environment, potentially impacting the stability of the acquired hyperspectral images [39]; on the other hand, since the raw hyperspectral image data consist of photon intensity information, they require reflectance correction to obtain relative reflectance. Therefore, black-and-white (dark/white reference) correction of hyperspectral raw data is a necessary step before data analysis [40,41]. Additionally, factors such as light scattering, irregularities in the detected object images, and random noise during spectral information acquisition can lead to issues like non-smooth spectral curves and a low signal-to-noise ratio. Hence, preprocessing of the data is typically conducted before any data analysis, with common preprocessing methods including smoothing, normalization, derivative methods, multi-scatter correction, standard normal variate transformation, wavelet transformation, etc. [42,43,44,45]. Data processed with these methods not only exhibit smoother curves and a higher signal-to-noise ratio but also improve the accuracy of subsequent modeling steps.
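As a concrete illustration of the black-and-white reflectance correction described above, the following minimal Python sketch applies the standard formula R = (raw − dark)/(white − dark); the array values are made-up illustrative digital numbers, not measurements from any cited study.

```python
import numpy as np

def reflectance_correction(raw, white_ref, dark_ref):
    """Black-and-white correction: R = (raw - dark) / (white - dark)."""
    raw = raw.astype(np.float64)
    denom = white_ref.astype(np.float64) - dark_ref.astype(np.float64)
    denom[denom == 0] = np.finfo(float).eps   # guard against dead pixels
    return (raw - dark_ref) / denom

# illustrative single-pixel spectrum over 5 bands (made-up digital numbers)
raw = np.array([120.0, 150.0, 180.0, 160.0, 140.0])
dark = np.array([10.0, 10.0, 12.0, 11.0, 10.0])
white = np.array([210.0, 230.0, 252.0, 211.0, 190.0])
R = reflectance_correction(raw, white, dark)
print(R[0])   # (120 - 10) / (210 - 10) = 0.55
```

The same formula is applied band-by-band to every pixel of a real hyperspectral cube; the dark and white frames are typically averaged over many exposures before use.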
  • Smoothing Processing
The collected raw spectral data are influenced by external factors such as noise and jitter, resulting in irregular, jagged spectral curves that are not conducive to later data analysis. After smoothing, the spectral curve not only loses the spurious spikes present in the original data but also becomes more continuous and smooth. Common smoothing methods include median smoothing, mean smoothing, and Savitzky–Golay (S–G) smoothing. The S–G smoothing filtering method [46], initially proposed by Savitzky and Golay in 1964, is a commonly used technique in spectral preprocessing. It is widely applied for data smoothing and noise reduction, utilizing local polynomial least squares fitting in the time domain [47,48]. Ma Yan et al. utilized different spectral preprocessing methods and support vector machine (SVM) techniques to identify defects in fresh apricots. The research results indicate that the C-SVM model established using Savitzky–Golay convolution smoothing of the raw spectra achieved the optimal recognition rate of 93.3% [49].
The core idea of the Savitzky–Golay (S–G) smoothing method is to perform weighted filtering on all data within a moving window, with the weights obtained through least squares fitting of a given polynomial. The main feature of this filter is that it removes noise while largely preserving the shape and width of the signal. The key lies in choosing appropriate parameters: the half-window width N (giving a window of 2N + 1 points) and the order of the polynomial fit. The primary approach to implementing S–G smoothing is as follows: for the i-th reference point and its 2N neighboring points (N points on each side), conduct a least squares polynomial fit on the spectral data to obtain the k coefficients a0, a1, …, ak−1, and then replace the original intensity value of the reference point with the fitted value Fi [50]:
Fi = a0 + a1x + … + ak−1x^(k−1),
where x = −N, −N + 1, …, N indexes positions within the window. After sliding the window across the entire data band, the smoothed spectral curve is obtained, as shown in Figure 3 below.
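The windowed polynomial fit described above is available off the shelf. The sketch below uses SciPy’s savgol_filter on a synthetic noisy sine curve (the signal shape and noise level are illustrative assumptions) to show that S–G smoothing reduces noise while preserving the curve’s shape.

```python
import numpy as np
from scipy.signal import savgol_filter

rng = np.random.default_rng(0)
x = np.linspace(0.0, 2.0 * np.pi, 101)
clean = np.sin(x)                              # underlying "true" curve
noisy = clean + rng.normal(0.0, 0.1, x.size)   # additive measurement noise

# window of 2N + 1 = 11 points, polynomial order 3
smoothed = savgol_filter(noisy, window_length=11, polyorder=3)

# mean absolute error drops after smoothing
print(np.abs(noisy - clean).mean(), np.abs(smoothed - clean).mean())
```

In practice the window width and polynomial order are tuned per instrument: too wide a window flattens real absorption features, too narrow leaves noise in place.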
  • Normalization
In spectral analysis, the intensity values of spectral data may vary due to factors such as the concentration, temperature, and pressure of different samples. To eliminate the interference of these factors, it is necessary to normalize the spectral data for a more accurate comparison and analysis of the spectral characteristics of different samples [51]. The basic principle of spectral normalization is to standardize the intensity values of each wavelength point in the spectral data, ensuring that they fluctuate within a certain range without being influenced by external factors. Common normalization methods include Min-Max normalization and Z-score normalization [52,53]. Zhou Chunxin et al. achieved the non-destructive identification of soybean varieties using different preprocessing methods and data analysis techniques; the detection accuracy of the model built using normalization-preprocessed spectral data reached 99.58% [54]. Min-Max normalization linearly transforms the values of each feature dimension so that they are mapped to the range [0, 1] (interval scaling), with the transformation function being as follows:
X′ = (X − Xmin)/(Xmax − Xmin),
The Z-score normalization method standardizes the values of each feature dimension using the mean and standard deviation of each dimension. Its transformation function is as follows:
X′ = (X − μ)/σ,
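The two normalization formulas above reduce to a few lines of array arithmetic; the example spectrum is an illustrative assumption.

```python
import numpy as np

def min_max_normalize(x):
    """Map values linearly onto [0, 1]: X' = (X - Xmin) / (Xmax - Xmin)."""
    return (x - x.min()) / (x.max() - x.min())

def z_score_normalize(x):
    """Standardize to zero mean and unit variance: X' = (X - mu) / sigma."""
    return (x - x.mean()) / x.std()

spectrum = np.array([0.21, 0.35, 0.52, 0.48, 0.30])  # illustrative intensities
mm = min_max_normalize(spectrum)
zs = z_score_normalize(spectrum)
print(mm.min(), mm.max())       # 0.0 1.0
print(zs.mean(), zs.std())      # approximately 0 and 1
```

Min-Max scaling preserves the relative shape of the curve within a fixed range, while Z-score standardization is often preferred when features have markedly different variances.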
  • Derivative Method
The derivative method can reduce baseline drift caused by instrumental interference, uneven sample surfaces, lighting variations, and other factors. It partially addresses the issue of spectral signal overlap, amplifies hidden weak but effective spectral information, and enhances spectral changes and resolution [55]. The spectral derivative method is commonly used in the identification of absorption peaks and valleys in near-infrared spectra and in the extraction of characteristic wavelengths. Derivative spectra include first-order derivative spectra, second-order derivative spectra, and higher-order derivative spectra, with practical applications often requiring only first and second-order derivative spectra to meet requirements [56,57]. The application of first and second-order derivative processing on spectra is illustrated in Figure 4. Shen Yu et al. utilized the methods of loading coefficient, continuous projection, and second derivative for spectral data preprocessing to establish an apple minor damage identification model based on Genetic Algorithm-optimized BP Neural Network (GA-BP) and Support Vector Machine (SVM). The results showed that the recognition accuracy of the model established after processing with the second derivative method reached 99.69% [58].
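Derivative spectra of the kind described above are commonly computed with the same S–G polynomial fit, via the deriv argument of SciPy’s savgol_filter. The sketch below applies first- and second-order derivatives to a synthetic Gaussian absorption peak; the band range and peak position are illustrative assumptions.

```python
import numpy as np
from scipy.signal import savgol_filter

wavelengths = np.linspace(400.0, 1000.0, 201)            # nm, uniform 3 nm step
step = wavelengths[1] - wavelengths[0]
spectrum = np.exp(-((wavelengths - 680.0) / 40.0) ** 2)  # synthetic peak at 680 nm

# first- and second-order derivative spectra via S-G polynomial fitting
d1 = savgol_filter(spectrum, window_length=11, polyorder=3, deriv=1, delta=step)
d2 = savgol_filter(spectrum, window_length=11, polyorder=3, deriv=2, delta=step)

# the first derivative crosses zero at the peak, where the second derivative is negative
region = np.where((wavelengths > 600.0) & (wavelengths < 760.0))[0]
peak_idx = region[np.argmin(np.abs(d1[region]))]
print(wavelengths[peak_idx], d2[peak_idx])
```

This zero-crossing/negative-curvature pattern is exactly what makes derivative spectra useful for locating absorption peaks and extracting characteristic wavelengths.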
  • Multiple Scattering Correction
Multiple scattering correction (MSC) is a linearization process applied to spectra. Uneven sample distribution or variations in sample size can lead to light scattering, resulting in the inability to obtain an “ideal” spectrum [59]. This algorithm assumes that the actual spectrum is linearly related to the “ideal” spectrum. In most cases, the “ideal” spectrum is unattainable, so during spectral data preprocessing, the average spectrum of the sample data set is often used as a replacement [60]. Multiple scattering correction can eliminate random variations, and the corrected spectrum differs from the original spectrum. When the spectrum is highly correlated with the chemical properties of the substance being measured, multiple scattering correction yields better results [61,62]. He Jiawei et al. used near-infrared hyperspectral imaging technology to discriminate between fresh and frozen-thawed beef. After the multi-scattering correction preprocessing of the hyperspectral data, the optimal model for this experiment was constructed using partial least squares regression in the full spectral range (950–1500 nm). The experiment demonstrated that the model had a high prediction accuracy, with a discrimination rate of 94.4% [63]. In practical applications, it is common to use the average value of all data as the ideal spectrum/standard spectrum, which is expressed as follows:
S̄ = (1/N) Σ_{i=1}^{N} S_i,
Next, perform a univariate linear regression of each sample’s spectrum against the standard spectrum to determine the scaling factor k_i and offset b_i for that sample:
S_i = k_i S̄ + b_i,
The corrected spectrum is then obtained as (S_i − b_i)/k_i.
The effect plot of the MSC correction is shown in Figure 5.
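The MSC steps above (mean reference, per-sample linear fit, and the (S_i − b_i)/k_i correction) can be sketched as follows; the two synthetic spectra are illustrative.

```python
import numpy as np

def msc(spectra):
    """Multiplicative scatter correction: regress each spectrum on the
    mean spectrum (standing in for the unreachable 'ideal' spectrum),
    then invert the fitted scaling k_i and offset b_i."""
    ref = spectra.mean(axis=0)
    corrected = np.empty_like(spectra, dtype=np.float64)
    for i, s in enumerate(spectra):
        k, b = np.polyfit(ref, s, 1)   # fit s_i ~ k_i * ref + b_i
        corrected[i] = (s - b) / k
    return corrected

# two synthetic spectra with the same shape but different scatter (slope/offset)
base = np.linspace(0.2, 0.8, 50)
spectra = np.vstack([1.2 * base + 0.05, 0.8 * base - 0.03])
out = msc(spectra)
print(np.abs(out[0] - out[1]).max())   # ~0: both collapse onto the common shape
```

Because the two rows differ only by a multiplicative slope and an additive offset, MSC maps both onto the same underlying curve, which is the idealized behavior the method assumes.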
  • Standard Normal Variate Transformation
Standard Normal Variate (SNV) transformation, similar to multiple scattering correction, can be used to eliminate scattering errors and path length variations [64]. However, the two methods have different processing principles. This method does not require an “ideal” spectrum; instead, it assumes that the spectral absorbance values at each wavelength in every spectrum satisfy certain conditions, such as following a normal distribution. The SNV transformation involves standardizing each spectrum [65,66]. Chen Shuyuan et al. proposed a non-destructive detection method for discriminating the storage years of white tea based on hyperspectral imaging technology. They utilized preprocessing algorithms including least squares smoothing filtering, standard normal transformation, normalization, and multi-scattering correction, and employed support vector machine, partial least squares, joint linear discriminant analysis, and logistic regression for the discriminant analysis of the spectral data after different preprocessing steps. The analysis results showed that the model established by combining standard normal transformation preprocessing with a support vector machine had the best discrimination effect, with precision rates of 90.83% and 86.02% for the training set and test set, respectively [67]. In practical processing, this method involves subtracting the original spectrum from the mean spectrum and then dividing by the standard deviation, effectively normalizing the original spectral data, satisfying the following equations:
$$Z_{ij} = \frac{x_{ij} - \bar{x}_i}{S_i},$$
$$\bar{x}_i = \frac{1}{N}\sum_{j=1}^{N} x_{ij},$$
$$S_i = \sqrt{\frac{\sum_{j=1}^{N} \left(x_{ij} - \bar{x}_i\right)^2}{N - 1}},$$
where i indexes the samples, j indexes the N spectral channels, $\bar{x}_i$ and $S_i$ are the mean and standard deviation of spectrum i, and $Z_{ij}$ is the SNV-transformed data. The effect plot of the SNV correction is shown in Figure 6.
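Following the equations above, SNV reduces to two lines of NumPy (the function name is illustrative; `ddof=1` matches the N − 1 divisor in the standard deviation):

```python
import numpy as np

def snv(spectra):
    """Standard normal variate: centre and scale each spectrum individually."""
    mean = spectra.mean(axis=1, keepdims=True)        # per-spectrum mean
    std = spectra.std(axis=1, ddof=1, keepdims=True)  # per-spectrum std (N-1 divisor)
    return (spectra - mean) / std
```

After the transformation, every spectrum has zero mean and unit standard deviation across its wavelengths.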
  • Wavelet Transform
Wavelet Transform (WT) is a time/frequency analysis method used for spectral data compression and noise reduction [68]. The wavelet transform inherits and refines the localization idea of the Fourier transform, with a window size that varies with frequency. Thanks to its good time/frequency localization and multi-resolution properties, the wavelet transform decomposes the spectral signal into scale components at different frequencies, and selects a sampling step appropriate to the size of each scale component, allowing it to focus on any part of the spectral signal [69].
Wavelet transform typically includes Continuous Wavelet Transform (CWT) and Discrete Wavelet Transform (DWT) [70]. The multi-scale resolution characteristics of wavelet transform enable it to quickly extract the original spectral signal from noisy signals. Utilizing wavelet analysis can eliminate spectral signals with significant noise, making it a crucial application in hyperspectral analysis [71]. Huang Qi et al. conducted non-destructive rapid detection of peanut moisture content using hyperspectral imaging technology. They preprocessed the data using wavelet transform, multi-scattering correction, and first-order derivative, and established a non-destructive detection model for peanut moisture content using PLS, XGBoost, and BO-XGBoost algorithms. Through experimental comparisons, it was determined that the BO-XGBoost model established using spectral data preprocessed with wavelet transform from the original spectra was optimal [72].
  • Basic formula for continuous wavelet transform
    $$W(a, b) = \int x(t) \cdot \psi\left(\frac{t - b}{a}\right) dt,$$
    where W(a,b) is the wavelet coefficient, x(t) is the original signal, ψ(t) is the wavelet basis function, a is the scale parameter, and b is the shift parameter;
  • Basic formula for discrete wavelet transform
    $$W(j, k) = \int x(t) \cdot \psi_{j,k}(t)\, dt,$$
    where W(j,k) is the discrete wavelet coefficient, and ψj,k(t) is the wavelet basis function.
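As a minimal illustration of wavelet-based denoising, the sketch below implements one level of the discrete Haar wavelet transform in NumPy and suppresses small detail coefficients; the Haar basis and hard thresholding are assumptions chosen for brevity (practical work would typically use a wavelet library with smoother bases and several decomposition levels):

```python
import numpy as np

def haar_dwt(x):
    """One level of the discrete Haar wavelet transform (orthonormal, even-length input)."""
    x = np.asarray(x, dtype=float)
    approx = (x[0::2] + x[1::2]) / np.sqrt(2)   # low-frequency (scale) component
    detail = (x[0::2] - x[1::2]) / np.sqrt(2)   # high-frequency (detail) component
    return approx, detail

def haar_idwt(approx, detail):
    """Inverse of haar_dwt: interleave reconstructed even/odd samples."""
    x = np.empty(2 * approx.size)
    x[0::2] = (approx + detail) / np.sqrt(2)
    x[1::2] = (approx - detail) / np.sqrt(2)
    return x

def denoise(x, threshold):
    """Zero out small detail coefficients (hard thresholding), then reconstruct."""
    a, d = haar_dwt(x)
    d = np.where(np.abs(d) < threshold, 0.0, d)
    return haar_idwt(a, d)
```

With the threshold set to zero the round trip reconstructs the signal exactly, since the transform is orthonormal.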

4.3.2. Data Dimension Reduction and Feature Wavelength Extraction

Due to the high number of spectral bands, often hundreds or even thousands, in the raw data captured by hyperspectral cameras, there exists high-dimensional spectral information and significant data redundancy. This not only complicates the computation process but also diminishes the accuracy of non-destructive detection models [73]. Therefore, it is an important step in data analysis to perform dimensionality reduction and extract spectral feature wavelengths from hyperspectral raw data before modeling. The most commonly used methods for dimensionality reduction and feature wavelength extraction include principal component analysis, independent component analysis, genetic algorithms, and minimum noise fraction transformation [74,75]. After processing with the corresponding dimensionality reduction algorithms, a significant amount of redundant information is eliminated. By extracting feature wavelengths, the most distinctive and information-rich wavelengths can be selected to better understand the data and perform target identification or classification [76]. These processes play a crucial role in simplifying the computation process and enhancing the accuracy of the models.
  • Principal Component Analysis
Principal Component Analysis (PCA) is a commonly used dimensionality reduction method in statistics and mathematics. It transforms multiple possibly correlated variables into a set of linearly uncorrelated variables through orthogonal transformation, known as principal components. It aims to convert high-dimensional features into a few key components that are mutually uncorrelated and contain the maximum amount of information [77].
The basic idea behind PCA is to recombine several originally correlated indicators into a new set of composite indicators that are mutually independent, while retaining as much information from the original variables as possible [78]. This method is highly effective in handling multivariable problems as it reduces data dimensionality, decreases computational complexity, and eliminates noise and irrelevant features, making the dataset more user-friendly and interpretable. The computational steps of PCA typically involve standardizing the samples, calculating the covariance matrix of the standardized samples, determining the eigenvalues and eigenvectors of the covariance matrix, computing the contribution rate and cumulative contribution rate of the principal components, and selecting the principal components that achieve a certain level of cumulative contribution rate [79]. These principal components can be utilized for subsequent data analyses such as cluster analysis, regression analysis, etc., [80].
For hyperspectral data represented as an i×j matrix:
$$X = \begin{pmatrix} x_{11} & \cdots & x_{1j} \\ \vdots & \ddots & \vdots \\ x_{i1} & \cdots & x_{ij} \end{pmatrix} = \left(x_1, x_2, \ldots, x_j\right),$$
where i is the number of samples, and j is the number of spectral channels. The main steps of the PCA method are as follows:
  • Standardize the matrix X using the mean $\bar{x}$ and standard deviation $S_n$
    $$X_{ij} = \frac{x_{ij} - \bar{x}}{S_n},$$
    whereby the original matrix is standardized to obtain the matrix $X' = (X_1, \ldots, X_j)$;
  • Calculate the correlation matrix along with its eigenvalues and eigenvectors;
  • Compute the variance contribution rates and determine the principal components. Typically, select the top k principal components whose cumulative contribution rate exceeds 90% for data analysis.
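The steps above can be sketched directly in NumPy; the function name and array layout are assumptions, while the 90% cumulative-contribution default follows the text:

```python
import numpy as np

def pca(X, cum_threshold=0.90):
    """PCA sketch for an (i samples x j channels) matrix, following the steps above."""
    Xs = (X - X.mean(axis=0)) / X.std(axis=0, ddof=1)   # 1. standardise
    R = np.cov(Xs, rowvar=False)                        # 2. correlation matrix
    eigval, eigvec = np.linalg.eigh(R)                  #    eigenvalues/eigenvectors
    order = np.argsort(eigval)[::-1]                    #    sort descending
    eigval, eigvec = eigval[order], eigvec[:, order]
    contrib = eigval / eigval.sum()                     # 3. contribution rates
    k = np.searchsorted(np.cumsum(contrib), cum_threshold) + 1
    scores = Xs @ eigvec[:, :k]                         #    top-k principal components
    return scores, contrib[:k]
```

The returned scores are mutually uncorrelated, and the retained components jointly exceed the requested cumulative contribution rate.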
  • Independent Component Analysis
Independent Component Analysis (ICA) is a statistical method used for separating multivariate signals, aimed at decomposing multiple mixed signals into independent components [81]. Unlike other linear transformation methods (such as PCA), ICA assumes that the original signals are mutually independent rather than simply having low correlation. Its core idea is based on the assumption of independence, positing that the observed multivariate signals are linear mixtures of multiple independent components, each with its own statistical distribution [82]. The assumption of independence is crucial to ICA, enabling the decomposition of mixed signals into independent components through appropriate transformations. This algorithm is currently widely applied in fields such as signal processing, image processing, and other domains [83].
Assuming that the observed mixed signal X is a linear combination of unknown independent signals S, satisfying:
$$X = AS,$$
in this case, X is the observed signal matrix, A is the unknown mixing matrix, and S is the source signal matrix.
The goal of ICA is to estimate the inverse or pseudo-inverse of the mixing matrix A in order to recover the source signal S. The main steps of ICA are:
  • Centering: subtract the mean of each feature (variable) of the data so that the data have a mean of zero in each dimension
    $$X_{centered} = X - E[X],$$
    where E[X] is the mean of each column (feature) in the original data matrix X;
  • Whitening: transform the observed data into a new space whose covariance matrix is the identity matrix, reducing data redundancy and speeding up the convergence of ICA. Whitening transforms the centered data into a coordinate system in which the signals are uncorrelated in every direction and have unit variance. It can be achieved by computing the covariance matrix C of the data and performing its eigendecomposition
    $$C = \frac{1}{n - 1} X_{centered} X_{centered}^{T},$$
    $$C = E D E^{T},$$
    where E is the matrix of eigenvectors, and D is the diagonal matrix of eigenvalues. The whitening transformation is as follows:
    $$X_{white} = D^{-1/2} E^{T} X_{centered},$$
  • Select a measure of non-Gaussianity (such as negentropy) and find the projection weights that maximize it. The ICA algorithm typically uses a nonlinear function g(·), with derivative g′(·), to approximate the maximization of negentropy, with the iterative update rule as follows
    $$w^{+} = E\left[X_{white}\, g\left(w^{T} X_{white}\right)\right] - E\left[g'\left(w^{T} X_{white}\right)\right] w,$$
    normalize the new weight vector w+:
    $$w = \frac{w^{+}}{\left\| w^{+} \right\|},$$
  • In multidimensional scenarios, it is necessary to ensure that the estimated components are orthogonal to each other. The Gram–Schmidt process is commonly employed for this purpose. For the i-th component, the update rule takes into account the previous i−1 components
    $$w_i = w_i - \sum_{j=1}^{i-1} \left(w_i^{T} w_j\right) w_j,$$
    and after normalizing $w_i$, check for convergence. If the difference between $w_i$ and $w_i^{+}$ is less than a certain threshold, it is considered that an independent component has been found.
  • Once all the weight vectors W = [ w1, w2, …, wn] are found, independent components can be estimated as follows
    $$S = W^{T} X_{white},$$
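A compact NumPy sketch of the centering, whitening, and one-unit update described above is given below; it assumes the tanh contrast function (g = tanh, g′ = 1 − tanh²) and a (signals × samples) layout, and omits the deflation loop over multiple components:

```python
import numpy as np

def whiten(X):
    """Centre the data and whiten it via eigendecomposition of the covariance."""
    Xc = X - X.mean(axis=1, keepdims=True)
    C = Xc @ Xc.T / (Xc.shape[1] - 1)
    D, E = np.linalg.eigh(C)
    return np.diag(D ** -0.5) @ E.T @ Xc

def fastica_one_unit(Xw, seed=0, max_iter=200, tol=1e-8):
    """Estimate one projection w with the tanh contrast (g = tanh, g' = 1 - tanh^2)."""
    rng = np.random.default_rng(seed)
    w = rng.normal(size=Xw.shape[0])
    w /= np.linalg.norm(w)
    for _ in range(max_iter):
        wx = w @ Xw
        w_new = (Xw * np.tanh(wx)).mean(axis=1) - (1 - np.tanh(wx) ** 2).mean() * w
        w_new /= np.linalg.norm(w_new)
        if np.abs(np.abs(w_new @ w) - 1) < tol:   # sign-invariant convergence check
            return w_new
        w = w_new
    return w
```

On a whitened mixture of two non-Gaussian sources, the estimated projection recovers one source up to sign and scale.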
  • Minimum Noise Fraction
Minimum Noise Fraction (MNF) is a method used to determine the intrinsic dimensionality of image data, i.e., the number of spectrally significant components [84]. This method can separate noise in the data, reducing computational demands in subsequent processing. MNF is essentially a two-stage stacked principal component transformation. The first transformation (based on the estimated noise covariance matrix) is used to separate and rescale the noise in the data, ensuring that the transformed noise has minimal variance and no inter-band correlation. The second step is a standard principal component transformation of the noise-whitened data. To further conduct spectral processing, the intrinsic dimension of the data is determined by examining the final eigenvalues and associated images [85]. The data space can be divided into two parts: one part is related to larger eigenvalues and the corresponding feature images, while the remaining part is related to approximately equal eigenvalues and images dominated by noise [86,87]. Main steps of MNF:
  • Utilize a high-pass filter template to filter the entire image or image data blocks with the same characteristics, obtaining the noise covariance matrix (CN). Diagonalize it into a matrix (DN)
    $$D_N = U^{T} C_N U,$$
    where DN is the diagonal matrix of eigenvalues of CN arranged in descending order; U is an orthogonal matrix composed of eigenvectors. Further transformation formulas can be derived as follows:
    $$I = P^{T} C_N P,$$
    $$P = U D_N^{-1/2},$$
    where I is the identity matrix and P is the transformation matrix. Applying P to the image data X through the transformation $Y = P^{T} X$ projects the original image into a new space, generating transformed data in which the noise has unit variance and is uncorrelated between bands.
  • Conduct a standard principal component transformation on the noise-whitened data with the formula
    $$C_{D\text{-}adj} = P^{T} C_D P,$$
    where $C_D$ is the covariance matrix of the image data X, and $C_{D\text{-}adj}$ is the matrix after transformation by P, which is further diagonalized into the matrix $D_{D\text{-}adj}$:
    $$D_{D\text{-}adj} = V^{T} C_{D\text{-}adj} V,$$
    where DD-adj is a diagonal matrix of eigenvalues of CD-adj arranged in descending order; V is an orthogonal matrix composed of eigenvectors.
After the MNF transformation, the resulting components are uncorrelated with each other, arranged in a descending order of signal-to-noise ratio. MNF transformation separates the noise and ensures inter-band decorrelation, making it superior to PCA transformation. Currently, it is widely used in image denoising, image fusion, image enhancement, and feature extraction, among other applications.
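The two-stage transformation can be sketched as follows; the band-wise pixel differences used as the noise estimate are an assumption standing in for the high-pass filtering described above, and the orientation of the transforms follows the noise-whitening condition PᵀC_N P = I:

```python
import numpy as np

def mnf(X, noise):
    """Minimum noise fraction sketch.

    X, noise: (bands x pixels) arrays; `noise` is a per-band noise estimate.
    """
    CN = np.cov(noise)                       # noise covariance matrix
    DN, U = np.linalg.eigh(CN)
    P = U @ np.diag(DN ** -0.5)              # noise-whitening transform
    Y = P.T @ (X - X.mean(axis=1, keepdims=True))
    CD = np.cov(Y)                           # covariance after noise whitening
    DD, V = np.linalg.eigh(CD)
    order = np.argsort(DD)[::-1]             # descending signal-to-noise ratio
    return V[:, order].T @ Y                 # second (standard PCA) transform
```

The resulting components are mutually uncorrelated, as the summary above states.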
  • Genetic Algorithm
The Genetic Algorithm (GA) is an optimization algorithm that simulates the evolutionary process in nature: by mimicking biological inheritance, crossover, and mutation, it progressively searches for the optimal solution [88]. GA-based dimensionality reduction is an effective method for high-dimensional data analysis: by simulating the biological evolutionary process, it extracts the most representative combination of features from high-dimensional data, achieving optimized dimensionality reduction [89]. The core idea is to use the GA's optimization process to select the most representative features and thereby map the high-dimensional data into a lower-dimensional space [90]. The main stages of the genetic algorithm process are illustrated in Figure 7. The specific steps are as follows:
  • Generate a set of solutions with random feature combinations;
  • Evaluate the quality of each solution using an evaluation function such as information gain, variance, etc.;
  • Choose excellent solutions based on fitness evaluation results as parents for the next generation of the population;
  • Exchange and recombine chromosomes in parents, such as single-point crossover or multi-point crossover;
  • Introduce randomness by mutating some solutions to increase population diversity, like bit flips or bit swaps;
  • Determine which solutions to retain based on replacement strategies like elitism or randomness;
  • Set stopping conditions for the algorithm, such as maximum number of iterations or fitness function reaching a threshold value.
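A toy version of these steps for spectral band selection might look as follows; the Fisher-ratio fitness, population size, and mutation rate are all illustrative assumptions (any wrapper or filter criterion could be substituted):

```python
import numpy as np

def ga_select(X, y, pop=20, gens=30, seed=0):
    """Evolve 0/1 masks over spectral bands for a two-class problem."""
    rng = np.random.default_rng(seed)
    n_feat = X.shape[1]

    def fitness(mask):  # mean Fisher-style class separation over selected bands
        if mask.sum() == 0:
            return -np.inf
        Xi = X[:, mask.astype(bool)]
        m0, m1 = Xi[y == 0].mean(axis=0), Xi[y == 1].mean(axis=0)
        s = Xi[y == 0].var(axis=0) + Xi[y == 1].var(axis=0) + 1e-9
        return ((m0 - m1) ** 2 / s).mean()

    population = (rng.random((pop, n_feat)) < 0.5).astype(int)  # 1. random solutions
    for _ in range(gens):
        scores = np.array([fitness(ind) for ind in population])  # 2. evaluate
        order = np.argsort(scores)[::-1]
        parents = population[order[: pop // 2]]                  # 3. select parents
        children = []
        for _ in range(pop - len(parents)):
            a, b = parents[rng.integers(len(parents), size=2)]
            cut = rng.integers(1, n_feat)                        # 4. single-point crossover
            child = np.concatenate([a[:cut], b[cut:]])
            flip = rng.random(n_feat) < 0.05                     # 5. bit-flip mutation
            child[flip] ^= 1
            children.append(child)
        population = np.vstack([parents, children])              # 6. elitist replacement
    best = population[np.argmax([fitness(ind) for ind in population])]
    return np.flatnonzero(best)
```

When only one band carries class information, the evolved mask reliably retains it.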
Data dimensionality reduction plays a crucial role in data processing, and researchers have proposed various methods over the years. Wu Qiang et al. introduced a convolution-based machine learning feature dimensionality reduction method for spectral data. Compared with principal component analysis, this method showed significantly better classification performance on theoretical data [91]. Feng Linlin et al. proposed a dimensionality reduction method based on a variational autoencoder combined with uniform manifold approximation and projection. To reduce the coupling between high-dimensional data, the variational autoencoder first preprocesses the data into latent variables, which are then further reduced with uniform manifold approximation and projection, better preserving the similarity relationships of the original high-dimensional data [92].
Currently, data dimensionality reduction can be classified into two types: linear and nonlinear. In addition to the methods described above, many other algorithms are suitable for data dimensionality reduction, and the choice can be made according to the characteristics of the data. Commonly used linear methods include Principal Component Analysis (PCA), Linear Discriminant Analysis (LDA), Multidimensional Scaling (MDS), Margin Discrete Projection (MDP), and Linear Local Tangent Space Alignment (LLTSA). Nonlinear methods include Local Linear Embedding (LLE), Locality Preserving Projections (LPP), and Isometric Mapping (ISOMAP), among others.

4.4. Hyperspectral Data Analysis Methods

4.4.1. Machine Learning Methods

Machine learning methods play a crucial role in the analysis of hyperspectral data. After data preprocessing is completed, machine learning methods are primarily used to model and analyze the data, thereby achieving the detection and identification of crop diseases and pests, as well as the classification and identification of agricultural products. Commonly used machine learning methods in different application scenarios include Support Vector Machine (SVM), Random Forest (RF), K-Nearest Neighbors (KNN), and Linear Discriminant Analysis (LDA).
  • Support Vector Machine
Support Vector Machine (SVM) is a commonly used supervised learning algorithm that is applicable to both binary and multiclass problems [93]. Its principle is based on the minimization of structural risk in statistical learning theory by finding an optimal hyperplane for classification. The basic principle of SVM is as follows:
  • Assuming the training dataset is linearly separable, meaning there exists a hyperplane that can completely separate samples from different classes. The hyperplane can be represented by the following equation:
    $$w \cdot x + b = 0,$$
    where w is the normal vector, x is the sample feature, and b is the bias term;
  • The goal of SVM is to find an optimal hyperplane that maximizes the margin between the sample points and the hyperplane. The margin refers to the distance from the sample points to the hyperplane, and maximizing this margin helps improve the generalization ability of SVM [94]. The margin can be expressed as
    $$\gamma_i = y_i \left( \frac{w}{\|w\|} \cdot x_i + \frac{b}{\|w\|} \right),$$
    where yi is the label of the sample (1 or −1) and ||w|| is the norm of the normal vector w;
  • During the process of maximizing the margin, only a subset of sample points has an impact on the position of the hyperplane, and these points are called support vectors. Support vectors are the sample points closest to the hyperplane, determining the position and orientation of the hyperplane;
  • Transforming the goal of maximizing the geometric margin into a convex optimization problem can be solved using the Lagrange multiplier method. By maximizing the margin, the optimization problem can be reformulated in its dual form, where the objective is to minimize a function involving Lagrange multipliers;
  • In practical applications, many problems cannot be perfectly separated by a linear hyperplane. To address this issue, SVM introduces the concept of kernel functions, mapping nonlinear problems from low-dimensional spaces to high-dimensional spaces to make them linearly separable. Common kernel functions include linear kernel, polynomial kernel, Gaussian kernel, etc., [95];
  • In real-world scenarios, data often contain noise and outliers. To enhance the robustness of the model, SVM introduces the concepts of soft margin and penalty terms. Soft margin allows some sample points to fall within the margin by introducing slack variables to control this. The penalty term balances maximizing the margin and penalizing misclassifications through regularization [96];
  • For multi-class problems, one can use the one-vs-one or one-vs-rest methods to construct multiple binary classifiers, then combine them for multiclass classification.
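As a sketch of the primal formulation above, the snippet below trains a soft-margin linear SVM by subgradient descent on the hinge loss; real applications would normally use a dual or kernel solver, and the learning rate and epoch count here are arbitrary assumptions:

```python
import numpy as np

def linear_svm(X, y, C=1.0, lr=0.01, epochs=500):
    """Soft-margin linear SVM trained by subgradient descent on the primal
    objective 0.5*||w||^2 + C * sum(max(0, 1 - y*(w.x + b)))."""
    n, d = X.shape
    w, b = np.zeros(d), 0.0
    for _ in range(epochs):
        margins = y * (X @ w + b)
        viol = margins < 1                               # points inside the margin
        grad_w = w - C * (y[viol, None] * X[viol]).sum(axis=0)
        grad_b = -C * y[viol].sum()
        w -= lr * grad_w                                 # gradient step
        b -= lr * grad_b
    return w, b
```

On linearly separable data the learned hyperplane classifies the training points correctly.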
  • Random Forest
The Random Forest (RF) algorithm is a classification and regression method based on ensemble learning [97]. Its core idea is to improve prediction performance by constructing multiple decision trees and combining them. Specifically, when building each decision tree, the random forest algorithm uses random sampling to extract samples from the original dataset, and randomly selects a subset of features at each node to split the decision tree [98]. This randomness helps reduce the correlation between models, increase model diversity, prevent overfitting, and enhance generalization capability. The main steps of RF are as follows:
First, assume there are T samples and M features available for a classification task, and each sample is associated with a corresponding label yi. Next, n decision trees are built, where each tree is constructed on a randomly selected subset of samples, considering only a subset of all features during the tree-building process [99]. Each feature is referred to as a predictor variable, and all predictor variables can be denoted as {X1, X2, …, XM}. For each tree, the random forest algorithm defines the following steps:
  • Sample training set samples from the sample set using sampling with replacement;
  • Randomly select k variables without replacement from all predictor variables, where k is much smaller than the total number of features. The selected set of variables can be represented as {f1, f2, …, fk};
  • Build a decision tree based on the training samples and the generated subset of variables, using a certain criterion to measure the importance of features to determine the feature for the first split node in the tree;
  • Repeat step (3) n times to generate n decision trees.
An example diagram is shown in Figure 8.
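The four steps can be illustrated with a deliberately simplified forest of depth-1 trees (decision stumps); the stump learner, the tree count, and the √d feature-subset size are assumptions for brevity:

```python
import numpy as np

def stump_fit(X, y):
    """Exhaustively choose the (feature, threshold, sign) split minimising error."""
    best = (np.inf, 0, 0.0, 1)
    for f in range(X.shape[1]):
        for t in np.unique(X[:, f]):
            for sign in (1, -1):
                pred = np.where(sign * (X[:, f] - t) > 0, 1, -1)
                err = np.mean(pred != y)
                if err < best[0]:
                    best = (err, f, t, sign)
    return best[1:]

def forest_fit(X, y, n_trees=25, seed=0):
    """Bootstrap the samples and restrict each tree to a random feature subset."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    k = max(1, int(np.sqrt(d)))
    trees = []
    for _ in range(n_trees):
        rows = rng.integers(n, size=n)                # sampling with replacement
        feats = rng.choice(d, size=k, replace=False)  # random feature subset
        f, t, s = stump_fit(X[rows][:, feats], y[rows])
        trees.append((feats[f], t, s))
    return trees

def forest_predict(trees, X):
    """Majority vote over all trees (labels are +1/-1)."""
    votes = np.array([np.where(s * (X[:, f] - t) > 0, 1, -1) for f, t, s in trees])
    return np.sign(votes.sum(axis=0))
```

Even with such weak trees, the bootstrap-plus-feature-subset ensemble votes its way to a good decision boundary on a simple threshold problem.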
  • K-Nearest Neighbors Algorithm
The K-Nearest Neighbors (KNN) algorithm is a classification method based on Euclidean distance to infer the category of items. It is used to accomplish hyperspectral classification by computing distances between two targets [100,101].
The distance calculation method is as follows:
$$D(x, y) = \sqrt{\sum_{k=1}^{n} \left(x_k - y_k\right)^2},$$
where x and y represent two targets, xk and yk denote the coordinates of the two targets in the k-th dimension, and n is the dimension of the target coordinates. The main implementation steps of the K-nearest neighbors algorithm are as follows:
  • Collect and prepare a dataset for training and testing. Each sample should include a set of features and corresponding labels (for classification) or target values (for regression);
  • Determine the value of K, which represents the number of nearest neighbor samples to consider. The choice of the K value affects the algorithm’s performance and is typically selected through methods like cross-validation;
  • For a given test sample, calculate the distance between it and each sample in the training set. Common distance metrics include Euclidean distance, Manhattan distance, etc.;
  • Based on the calculated distances, select the K nearest training samples to the test sample as its nearest neighbors;
  • For classification problems, determine the category of the test sample using a majority voting approach. Predict based on the most frequently occurring category among the labels of the K nearest neighbors. For regression problems, average the target values of the K nearest neighbors to obtain the predicted value for the test sample;
  • Based on the results of the majority voting or averaging operation, consider it as the final prediction result for the test sample.
When selecting the value of K, it is essential to balance the sensitivity and complexity of the model. A small K value is susceptible to noise and produces complex, highly variable decision boundaries, which risks overfitting; a large K value over-smooths the boundaries, which can cause underfitting, loss of detail, and reduced ability to capture local structure. In imbalanced data, an inappropriate K value can also bias predictions towards the majority class [102].
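A direct NumPy implementation of the distance-and-vote procedure above might look like this (function and variable names are illustrative):

```python
import numpy as np

def knn_predict(X_train, y_train, X_test, k=3):
    """Classify each test sample by majority vote among its k nearest
    training samples, using Euclidean distance."""
    preds = []
    for x in X_test:
        d = np.sqrt(((X_train - x) ** 2).sum(axis=1))   # distances to all samples
        nearest = np.argsort(d)[:k]                     # k nearest neighbours
        labels, counts = np.unique(y_train[nearest], return_counts=True)
        preds.append(labels[np.argmax(counts)])         # majority vote
    return np.array(preds)
```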
  • Linear Discriminant Analysis
Linear Discriminant Analysis (LDA) is a supervised classification method, which requires prior knowledge of the sample classification results in the training set. This method linearly transforms high-dimensional feature vectors into low-dimensional feature vectors. This transformation ensures that the data features of samples belonging to the same class are close to each other, while the features of samples from different classes are significantly different [103,104].
LDA primarily constructs a low-dimensional “boundary” to project each sample, such that after projection, the within-class variance is minimized and the between-class variance is maximized [105]. This boundary is obtained under supervised classification samples and can be used for classifying test sets and unknown samples. The basic steps of LDA are as follows:
  • Compute the within-class scatter matrix and between-class scatter matrix: The within-class scatter matrix (SW) represents the dispersion of data points within the same class, while the between-class scatter matrix (SB) represents the dispersion between different classes;
    $$S_W = \sum_{i=1}^{c} \sum_{x \in D_i} \left(x - m_i\right) \left(x - m_i\right)^{T},$$
    where c is the number of classes, Di is the dataset of the ith class, and mi is the mean vector of the ith class.
    $$S_B = \sum_{i=1}^{c} N_i \left(m_i - m\right) \left(m_i - m\right)^{T},$$
    where Ni is the number of samples in the ith class, and m is the global mean vector of all samples;
  • Calculate the eigenvalues and eigenvectors of matrix S W 1 S B : The eigenvalues of matrix S W 1 S B represent the ratio of between-class scatter to within-class scatter in the direction defined by the corresponding eigenvectors after projecting the data. These eigenvectors define the projection directions from the original high-dimensional space to the new low-dimensional space. The purpose of this step is to find a matrix that, when multiplied by the data points, maximizes between-class scatter while minimizing within-class scatter;
  • Select eigenvectors with the largest eigenvalues: Choose the eigenvectors corresponding to the largest eigenvalues. These eigenvectors determine the optimal new coordinate axes;
  • Project the data onto the new coordinate axes: project the original data onto the new coordinate system using the selected eigenvectors to achieve dimensionality reduction.
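The scatter-matrix computation and projection can be sketched as follows, assuming integer class labels and following the S_W and S_B definitions above:

```python
import numpy as np

def lda_fit(X, y):
    """Return projection directions: eigenvectors of SW^-1 SB, largest eigenvalues first."""
    classes = np.unique(y)
    d = X.shape[1]
    m = X.mean(axis=0)                         # global mean vector
    SW = np.zeros((d, d))
    SB = np.zeros((d, d))
    for c in classes:
        Xc = X[y == c]
        mc = Xc.mean(axis=0)                   # class mean vector
        SW += (Xc - mc).T @ (Xc - mc)          # within-class scatter
        diff = (mc - m)[:, None]
        SB += len(Xc) * diff @ diff.T          # between-class scatter
    eigval, eigvec = np.linalg.eig(np.linalg.inv(SW) @ SB)
    order = np.argsort(eigval.real)[::-1]      # largest eigenvalues first
    return eigvec[:, order].real

def lda_transform(X, W, n_components=1):
    """Project the data onto the leading discriminant directions."""
    return X @ W[:, :n_components]
```

After projection, samples of the same class cluster together while the class means stay far apart.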
To improve recognition accuracy and classification precision in hyperspectral data analysis, more complex neural network structures can be introduced when building models. Deep Convolutional Neural Networks (DCNNs) or Recurrent Neural Networks (RNNs) can be utilized to enhance the learning of complex data patterns. Ensembling multiple base models, such as random forests and gradient boosting trees, via voting or weighted averaging can further improve predictive accuracy. Transfer learning, which leverages knowledge from a source domain to aid learning in the target domain, can improve model performance when data are scarce or labeling costs are high. Adaptive learning-rate optimizers such as AdaGrad and Adam can speed up training, improve performance, and help avoid poor local optima.

4.4.2. Deep Learning Methods

Deep learning is a type of machine learning method based on artificial neural networks that utilizes multiple layers of neural networks to perform nonlinear transformations and feature extraction, enabling the modeling and processing of complex data [106]. Common deep learning algorithms used for hyperspectral data analysis include convolutional neural networks, deep neural networks, and residual networks, among others [107,108].
  • Convolutional Neural Networks
Convolutional Neural Networks (CNNs) are a type of feedforward neural network that include convolutional computations and are particularly well-suited for tasks such as image processing [109]. CNNs have the capability of learning representations and can perform translation-invariant classification of input information based on their hierarchical structure, hence also known as “translation-invariant artificial neural networks”.
The working principle involves first extracting various features from the input using convolutional and pooling layers, and then combining these features using fully connected layers to obtain the final output [110]. The convolution operation involves moving a small window (data window) over the image, multiplying and summing elements element-wise. This window essentially consists of a set of fixed weights and can be viewed as a specific filter or convolutional kernel.
The convolutional layer is the core component of convolutional neural networks, enabling feature extraction from input images through convolutional operations. In the convolution operation, each neuron in the convolutional layer performs convolution operations with a portion of pixels in the input image, resulting in a feature map [111]. Convolution operations help capture local features in the input image, such as edges, textures, and other image characteristics.
The pooling layer is another essential part of convolutional neural networks, reducing the complexity of the model by downsampling the feature maps. Pooling operations typically include max pooling and average pooling, where max pooling involves selecting the maximum value within a specific region, while average pooling involves computing the average value within a specific region. Pooling operations help reduce the number of parameters in the model, thereby enhancing the model’s generalization ability.
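The two core operations can be illustrated in plain NumPy; as in most deep learning libraries, the "convolution" below is actually cross-correlation, and the 'valid' padding and 2×2 pooling window are assumptions:

```python
import numpy as np

def conv2d(image, kernel):
    """'Valid' 2-D convolution: slide the kernel window and sum element-wise products."""
    kh, kw = kernel.shape
    h = image.shape[0] - kh + 1
    w = image.shape[1] - kw + 1
    out = np.empty((h, w))
    for i in range(h):
        for j in range(w):
            out[i, j] = (image[i:i + kh, j:j + kw] * kernel).sum()
    return out

def max_pool(feature_map, size=2):
    """Non-overlapping max pooling: keep the largest activation in each block."""
    h, w = feature_map.shape
    fm = feature_map[: h - h % size, : w - w % size]
    return fm.reshape(h // size, size, w // size, size).max(axis=(1, 3))
```

A simple [-1, 1] kernel responds only at the column where a vertical edge occurs, illustrating how convolution captures local features such as edges.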
  • Deep Neural Networks
Deep Neural Networks (DNNs) are recursive functions composed of multiple layers, each consisting of multiple neurons. Each neuron receives outputs from all neurons in the previous layer, calculates outputs based on input data, and passes them to neurons in the next layer, ultimately completing prediction or classification tasks [112,113]. The basic steps of deep neural networks are as follows:
  • Data preprocessing: Transforming raw data into a format acceptable by the model. Common preprocessing methods include normalization, standardization, encoding, etc.;
  • Building network structure: Deep neural networks consist of multiple layers, including input layers, hidden layers, and output layers. The input layer receives input data, hidden layers can contain multiple levels based on the model’s complexity, and the output layer is responsible for outputting the model’s prediction results;
  • Defining the loss function: The loss function measures the gap between the model’s prediction results and the ground truth. Common loss functions include Mean Squared Error (MSE), Cross Entropy, etc.;
  • Backpropagation: through the backpropagation algorithm, the model’s parameters are updated layer by layer based on the gradient information of the loss function to minimize the loss function;
  • Model training: input training data into the model, continuously update parameters through the backpropagation algorithm until the model converges or reaches a predefined stopping condition;
  • Model evaluation: evaluate the model’s performance using an independent test dataset, including metrics such as accuracy, precision, recall, etc.
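Steps 2-5 can be condensed into a one-hidden-layer example trained by backpropagation on mean squared error; the tanh/sigmoid activations, layer sizes, and learning rate are illustrative assumptions:

```python
import numpy as np

def train_mlp(X, y, hidden=8, lr=0.5, epochs=3000, seed=0):
    """One-hidden-layer network: forward pass, MSE loss, backpropagation."""
    rng = np.random.default_rng(seed)
    W1 = rng.normal(scale=0.5, size=(X.shape[1], hidden))
    b1 = np.zeros(hidden)
    W2 = rng.normal(scale=0.5, size=(hidden, 1))
    b2 = np.zeros(1)
    for _ in range(epochs):
        h = np.tanh(X @ W1 + b1)                   # forward: hidden layer
        out = 1 / (1 + np.exp(-(h @ W2 + b2)))     # forward: sigmoid output
        err = out - y[:, None]                     # gradient of MSE w.r.t. output
        d_out = err * out * (1 - out)              # through sigmoid derivative
        d_h = (d_out @ W2.T) * (1 - h ** 2)        # backpropagate through tanh
        W2 -= lr * h.T @ d_out / len(X)            # gradient-descent updates
        b2 -= lr * d_out.mean(axis=0)
        W1 -= lr * X.T @ d_h / len(X)
        b1 -= lr * d_h.mean(axis=0)
    return lambda Z: (1 / (1 + np.exp(-(np.tanh(Z @ W1 + b1) @ W2 + b2))) > 0.5).ravel()
```

On two well-separated clusters, the trained network reaches high training accuracy, which would then be checked on an independent test set (step 6).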
  • Residual Networks
Residual Networks (ResNets) make deep neural networks easier to optimize and improve accuracy by introducing residual blocks and skip connections. In traditional neural networks, the output of each layer comes from the output of the previous layer [114]. However, in residual networks, the output of each layer is obtained by adding the output of the previous layer to the input of that layer. This residual connection can be viewed as a skip connection that directly passes information from one layer to the next. Each residual block consists of several convolutional layers and a skip connection that adds the input directly to the output of the convolutional layers, forming a residual connection [115]. This design aims to enable the network to learn residual functions, the differences between inputs and outputs, rather than directly learning the mapping from inputs to outputs. This residual learning concept helps alleviate the vanishing gradient problem and makes the network easier to optimize.
The basic unit of residual networks is the residual module, which includes two or more convolutional layers and a skip connection [116]. The skip connection adds the input directly to the output of the convolutional layers, forming a residual connection. An example of a residual block is shown in Figure 9.
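The residual connection itself is a one-line idea, shown here for a fully connected block (the layer shapes are illustrative); when the residual branch outputs zero, the block reduces to an identity mapping, which is what makes very deep networks easier to optimize:

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def residual_block(x, W1, W2):
    """Forward pass of a simple residual block: output = ReLU(F(x) + x),
    where F is two weight layers and the skip connection adds the input
    straight onto the branch output."""
    out = relu(x @ W1)          # first weight layer + activation
    out = out @ W2              # second weight layer
    return relu(out + x)        # skip connection, then activation
```

With zero weights the branch F(x) vanishes and the block passes nonnegative inputs through unchanged, illustrating why learning the residual is easier than learning the full mapping.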

5. Specific Applications and Analyses

In a representative case study, hyperspectral imaging technology was combined with convolutional neural networks for the identification of potato late blight during its incubation period. In the data preprocessing stage, the S-G smoothing filter algorithm and the second-order derivative method were used for spectral preprocessing. For data analysis, classification models including CNN, LS-SVM, RF, KNN, and LDA were established with spectral data as input variables and disease severity levels as output variables. By integrating hyperspectral imaging technology with a CNN, incubation-period samples of potato late blight were successfully identified and classified. The overall accuracy rates of the CNN, LS-SVM, RF, KNN, and LDA models were 99.68%, 90.77%, 92.30%, 93.10%, and 92.34%, respectively. The CNN model performed best, with accuracy, sensitivity, and specificity reaching 99.76%, 98.82%, and 99.54%, respectively, and an identification rate of 99.73% for incubation-period samples, 5.55%–14.15% higher than the other methods [117].
In another case study, Zhao Sen et al. used hyperspectral imaging for the early detection and identification of black spot disease in Acanthopanax. Preprocessing involved removing bright and dark noise, applying smoothing, and using principal component analysis for dimensionality reduction. Support vector machines (SVMs) with different kernel functions were then used for classification, and the polynomial kernel achieved the best classification accuracy, 92.77% [118].
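The PCA dimensionality-reduction step in this kind of pipeline can be sketched with a plain SVD. This is a generic illustration on random stand-in data, not the authors' code:

```python
import numpy as np

def pca_reduce(X, n_components):
    """Project spectra (rows = samples, columns = bands) onto the
    top principal components via SVD of the centered data matrix."""
    Xc = X - X.mean(axis=0)                        # center each band
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    scores = Xc @ Vt[:n_components].T              # reduced features
    explained = (S[:n_components] ** 2).sum() / (S ** 2).sum()
    return scores, explained

rng = np.random.default_rng(1)
X = rng.normal(size=(60, 200))     # 60 samples x 200 spectral bands
scores, ratio = pca_reduce(X, n_components=10)
```

The reduced scores would then feed the classifier; with a polynomial kernel, as in the study, the SVM decision function operates on inner products of these low-dimensional feature vectors rather than on the full band set.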
Furthermore, in the study by Huang Yiqi et al., hyperspectral imaging was used for the early detection and identification of rust and leaf spot diseases in sugarcane. The spectral data were preprocessed with first-order derivatives, convolution smoothing, and the standard normal variate (SNV) transformation. SVM and random forest (RF) models were built on different combinations of preprocessing and dimensionality reduction methods: the SVM model with SG-SVM preprocessing achieved the best classification accuracy, 99.65%, while the RF model with SG-RF preprocessing reached 97.94% [119].
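The SNV transformation and first-derivative preprocessing named here are simple to express directly. The sketch below uses synthetic spectra and an illustrative wavelength step; it is a generic illustration, not the study's implementation:

```python
import numpy as np

def snv(spectra):
    """Standard normal variate: center and scale each spectrum
    individually, suppressing additive offsets and multiplicative
    scatter differences between samples."""
    mean = spectra.mean(axis=1, keepdims=True)
    std = spectra.std(axis=1, keepdims=True)
    return (spectra - mean) / std

def first_derivative(spectra, step=1.0):
    """Finite-difference first derivative along the wavelength axis."""
    return np.gradient(spectra, step, axis=1)

base = np.sin(np.linspace(0.0, 3.0, 150))
# The same underlying spectrum under two illumination gains/offsets:
spectra = np.vstack([1.0 * base + 0.1, 2.5 * base - 0.3])
corrected = snv(spectra)
deriv = first_derivative(corrected, step=4.0)  # e.g. 4 nm sampling

# After SNV the two measurements coincide exactly, since per-spectrum
# centering and scaling removes the gain and offset differences.
assert np.allclose(corrected[0], corrected[1])
```

This invariance to gain and offset is what makes SNV effective against scatter effects before the derivative step sharpens the remaining spectral features.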
In conclusion, the accuracy of identification and classification models built on hyperspectral data depends strongly on the preprocessing steps. Effective preprocessing of hyperspectral data enhances model accuracy, so the preprocessing method should be chosen to suit the data. Likewise, when building identification and classification models, choosing between machine learning and deep learning according to the characteristics of the data can improve accuracy: machine learning methods are simple and flexible but limited on complex tasks, whereas deep learning excels at learning complex patterns but requires more data and computational resources. Flexibility in selecting preprocessing and data analysis methods is therefore crucial for the effective application of hyperspectral imaging technology in agricultural tasks.

6. Concluding Remarks

Hyperspectral imaging technology plays an important role in agriculture and can be used for crop growth monitoring, pest and disease diagnosis, soil nutrient analysis, agricultural product quality testing, and crop variety identification. Through hyperspectral image analysis, it is possible to accurately assess the health status of crops, diagnose pest and disease problems, estimate soil nutrient content, analyze agricultural product quality indicators, and identify crop varieties, providing comprehensive data support for agricultural production. However, the application of hyperspectral imaging in agriculture still faces challenges, including high equipment costs, complex data processing requirements, the influence of environmental factors on results, and a lack of standardization. Existing research also has some shortcomings: (1) insufficient sample data, leading to inadequate model accuracy and generalization capability; (2) the high dimensionality of hyperspectral data, which makes feature extraction and selection difficult and calls for further research on effective methods; (3) over-reliance of some techniques on specific datasets and conditions, which limits model generalization; and (4) inadequate real-time performance and operability, necessitating faster and more convenient data acquisition and processing methods [120,121,122].
With the rapid development of hyperspectral imaging technology and artificial intelligence, their integration will expand into a wide range of application areas. As technology innovation and hardware performance improve, hyperspectral data collection will become more convenient and efficient. The use of drones equipped with hyperspectral cameras for data collection has been widely adopted in various fields. Many researchers have focused on preprocessing and analyzing hyperspectral data, improving and designing more algorithms to enhance data processing capabilities and analysis effectiveness.
Agriculture holds a significant position globally, and the transition from traditional agriculture to smart agriculture reflects its rapid development. The rise of smart agriculture, combined with advanced sensing technology, communication, artificial intelligence, and other technologies, is propelling agriculture into a high-speed development era. Hyperspectral imaging technology is increasingly applied in agriculture, particularly in crop pest and disease identification, and product classification and identification. Combining artificial intelligence, communication, and networking technologies allows for the simultaneous detection of multiple parameters of crops and products, enabling real-time dynamic monitoring. In terms of agricultural product quality classification and identification, simultaneous internal and external quality testing will improve classification accuracy.
In the future, technical improvement and cost reduction will remain key directions for hyperspectral imaging. As the technology continues to advance, the cost of hyperspectral imaging equipment is expected to fall gradually while its performance continues to improve; gains such as higher spectral resolution and faster imaging will make the technology easier to popularize and apply in agricultural production. Integration with the internet of things, big data, and artificial intelligence will likewise shape the development of hyperspectral imaging in agricultural production [123].

Author Contributions

Conceptualization, J.W., P.H. and Y.Z.; methodology, J.W., P.H. and Y.W.; investigation, Y.W. and Y.Z.; validation, J.W., P.H. and Y.Z.; writing—original draft preparation, Y.Z.; writing—review and editing, J.W., P.H. and Y.W.; supervision, J.W., P.H. and Y.W. All authors have read and agreed to the published version of the manuscript.

Funding

Natural Science Foundation of Guizhou Minzu University (No. GZMUZK [2022] YB01); Doctoral funding project of Guizhou Minzu University GZMUZK [2024] QD72; Guizhou Province Science and Technology Support Program Project: [2022] 165; Guiyang Science and Technology Support Plan Project: [2022] 3-8; Guizhou Provincial Department of Education supports the scientific research platform [2022] 014.

Data Availability Statement

The original contributions presented in the study are included in the article; further inquiries can be directed to the corresponding author.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Wang, J.Y.; Li, C.L. Development and Prospect of Hyperspectral Imager and Its Application. Chin. J. Space Sci. 2021, 41, 22–33. [Google Scholar] [CrossRef]
  2. Vane, G.; Goetz, A.F.H.; Wellman, J.B. Airborne Imaging Spectrometer: A New Tool for Remote Sensing. IEEE Trans. Geosci. Remote Sens. 1984, 22, 546–554. [Google Scholar] [CrossRef]
  3. Liu, Y.N.; Xue, Y.Q.; Wang, J.Y.; Wang, J.-Y.; Shen, M.-M. Operational Modular Imaging Spectrometer. J. Infrared Millim. Waves 2002, 21, 9–13. [Google Scholar]
  4. Dong, G.J.; Zhang, Y.S.; Fan, Y.H. Image Fusion for Hyperspectral Date of PHI and High-Resolution Aerial Image. J. Infrared Millim. Waves 2006, 25, 123–126. [Google Scholar]
  5. Yang, Z.H.; Han, J.F.; Gong, D.P.; Li, Z.X. The Development and Application of Hyperspectral-Remote-Sensing Technology. Hydrogr. Surv. Charting 2003, 55–58. [Google Scholar]
  6. Liu, Y.D.; Zhang, G.W. Hyperspectral Imaging Technology in Application of Agricultural Products Detection. Food Mach. 2012, 28, 223–226+242. [Google Scholar]
  7. Gui, J.S.; Wu, Z.X.; Gu, M.; Chi, Y.F.; Bao, X.A. Overview of the Application of Hyperspectral Imaging Technology in Agriculture. J. Zhejiang Agric. Sci. 2017, 58, 1101–1105. [Google Scholar]
  8. Wang, D.; Wang, K.; Wu, J.Z.; Han, P. Progress in Research on Rapid and Non-Destructive Detection of Seed Quality Based on Spectroscopy and Imaging Technology. Spectrosc. Spectr. Anal. 2021, 41, 52–59. [Google Scholar]
  9. Xiu, L.C.; Zheng, Z.Z.; Yang, B.; Yin, L.; Gao, Y.; Jiang, Y.H.; Huang, Y.; Zhou, Q.P.; Shi, J.L.; Dong, J.X.; et al. Application of Airborne Hyperspectral Imaging Technology to the Ecological Environment Protection of Jiangsu, Anhui and Zhejiang Provinces at Yangtze River Economic Belt. Geol. China 2021, 48, 1334–1356. [Google Scholar]
  10. Tao, H.H.; Yao, Y.T.; Lin, W.J. Research on the Application of UAV Technology in Monitoring Water Quality Pollution in Rivers and Lakes. China Hydropower Electrif. 2021, 42–45+10. [Google Scholar]
  11. Zhao, S.H.; Zhang, F.; Wang, Q.; Yao, Y.J.; Wang, Z.T.; You, D.A. Application of Hyper-Spectral Remote Sensing Technology in Environmental Protection. Spectrosc. Spectr. Anal. 2013, 33, 3343–3348. [Google Scholar]
  12. Yu, C.R.; Wang, T.R.; Li, S.; Zhang, D.X.; Zhang, J.B.; Wang, X.Q.; Qi, M.J. Extraction of Cancer Information from Liver Biopsy with Hyperspectral Imaging Technology. Sci. Technol. Eng. 2015, 15, 106–109. [Google Scholar]
  13. Jiang, H.M.; Mei, J.; Hu, X.X.; Yi, W.S. Discussions of Hyperspectral Imaging for Medical Diagnostics. Chin. J. Med. Phys. 2013, 30, 4148–4152+4158. [Google Scholar]
  14. Liu, L.X.; Li, M.Z.; Zhao, Z.G.; Qu, J.L. Recent Advances of Hyperspectral Imaging Application in Biomedicine. Chin. J. Lasers 2018, 45, 214–223. [Google Scholar]
  15. Shi, N.C.; Li, G.H.; Lei, Y.; Wu, T.X. Application of Hyperspectral Imaging Technology in the Protection of Palace Museum Calligraphy and Painting Cultural Relics. Cult. Relics Prot. Archaeol. Sci. 2017, 29, 23–29. [Google Scholar]
  16. Li, G.H.; Chen, Y.; Sun, X.J.; Zhang, L.F.; Qu, L. Application of Large-Format Hyperspectral Scanning System in Cultural Relics Research. Infrared 2024, 45, 42–52. [Google Scholar]
  17. Ding, L.; Yang, Q.; Jiang, P. Comprehensive Review of the Application of Hyperspectral Imaging Technology in the Analysis of Ancient Chinese Calligraphy and Painting. Cult. Relics Prot. Archaeol. Sci. 2023, 35, 128–141. [Google Scholar]
  18. Qiao, S.C.; Tian, Y.W.; He, K.; Yao, P.; Gu, W.J.; Wang, J.P. Recent Progress in Technologies for Non-destructive Detection of Fruit Diseases and Pests. Food Sci. 2019, 40, 227–234. [Google Scholar]
  19. Yang, X.; Liu, W.; Ma, M.J.; Wang, C.; Qiu, S.Z.; Li, J.Y.; Peng, J.; Liu, G.Q. Research Progress on the Application of Hyperspectral Imaging Technology in Tea Production. South-Cent. Agric. Sci. Technol. 2023, 44, 238–242+252. [Google Scholar]
  20. Yu, J.J.; He, Y. Study on Early Detection of Gray Mold on Tomato Leaves Using Hyperspectral Imaging Technique. Spectrosc. Spectr. Anal. 2013, 33, 2168–2171. [Google Scholar]
  21. Zhang, J.Y.; Chen, J.C.; Fu, X.P.; Ye, Y.F.; Fu, G.; Hong, R.X. Hyperspectral Imaging Detection of Cercospora Leaf Spot of Muskmelon. Spectrosc. Spectr. Anal. 2019, 39, 3184–3188. [Google Scholar]
  22. Liang, W.J.; Feng, H.; Jiang, D.; Zhang, W.Y.; Cao, J.; Cao, H.X. Early Recognition of Sclerotinia Stem Rot on Oilseed Rape by Hyperspectral Imaging Combined with Deep Learning. Spectrosc. Spectr. Anal. 2023, 43, 2220–2225. [Google Scholar]
  23. Li, Y.; Li, C.L.; Wang, X.; Fan, P.F.; Li, Y.K.; Zhai, C.Y. Identification of Cucumber Disease and Insect Pest Based on Hyperspectral Imaging. Spectrosc. Spectr. Anal. 2024, 44, 301–309. [Google Scholar]
  24. Li, B.X.; Zhou, B.; He, X.; Liu, H.X. Status and Prospects of Target Classification Methods Based on Hyperspectral Images. Laser Infrared 2020, 50, 259–265. [Google Scholar]
  25. Wang, D.; Luan, Y.Q.; Tan, Z.J.; Wei, W. Detection of Pesticide Residues in Broccoli Based on Hyperspectral Imaging and Convolutional Neural Network. J/OL. Food Ind. Technol. 2024, 1–15, [2024-08-09]. [Google Scholar]
  26. Zhang, F.; Zhang, F.Y.; Cui, X.H.; Wang, X.Y.; Cao, W.H.; Zhang, Y.K.; Fu, S.L. Identification of Ginkgo Fruit Species by Hyperspectral Image Combined With PSO-SVM. Spectrosc. Spectr. Anal. 2024, 44, 859–864. [Google Scholar]
  27. Ma, L.K.; Zhu, S.P.; Miao, Y.J.; Wei, X.; Li, S.; Jiang, Y.L.; Zhuo, J.X. The Discrimination of Organic and Conventional Eggs Based on Hyperspectral Technology. Spectrosc. Spectr. Anal. 2022, 42, 1222–1228. [Google Scholar]
  28. Li, G.H.; Li, Z.X.; Jin, S.L.; Zhao, W.Y.; Pan, X.P.; Liang, Z.; Qin, L.; Zhang, W.D. Mix Convolutional Neural Networks for Hyperspectral Wheat Variety Discrimination. Spectrosc. Spectr. Anal. 2024, 44, 807–813. [Google Scholar]
  29. Zhang, L.; Ma, X.Y.; Wang, R.L.; Li, Z.G.; Xu, D.Y.; Hong, W.L. Application of Hyperspectral Imaging Technology in the Classification of Tobacco Leaves and Foreign Matter. Tob. Sci. Technol. 2020, 53, 72–78. [Google Scholar]
  30. Song, S.Z.; Liu, Y.Y.; Zhou, Z.Y.; Teng, X.; Li, J.H.; Liu, J.L.; Gao, X. Identification of Sorghum Breed by Hyperspectral Image Technology. Spectrosc. Spectr. Anal. 2024, 44, 1392–1397. [Google Scholar]
  31. Feng, H.K.; Fan, Y.G.; Tao, H.L.; Yang, F.Q.; Yang, G.J.; Zhao, C.J. Monitoring of Nitrogen Content in Winter Wheat Based on UAV Hyperspectral Imagery. Spectrosc. Spectr. Anal. 2023, 43, 3239–3246. [Google Scholar]
  32. Wu, C.Q.; Tong, Q.X.; Zheng, L.F. Pretreatment of Flied and Image Spectra Data. Remote Sens. Technol. Appl. 2005, 50–55. [Google Scholar]
  33. Fan, N.N.; Li, Z.Y.; Fan, W.Y.; Wang, B.Y.; Tan, B.X.; Liu, G.F. Preprocessing of PHI-3 Hyperspectral Data. Res. Soil Water Conserv. 2007, 283–286+290. [Google Scholar]
  34. Liu, P.P.; Lin, H.; Sun, H.; Yan, E.P. Research on Dimensionality Reduction Methods for Hyperspectral Data. J. Cent. South Univ. For. Technol. 2011, 31, 34–38. [Google Scholar]
  35. Wang, X.H.; Xiao, P.; Guo, J.M. Research on Hyperspectral Data Dimensionality Reduction Techniques. Bull. Soil Water Conserv. 2006, 89–91. [Google Scholar]
  36. Su, H.J.; Du, P.J. Study on Feature Selection and Extraction of Hyperspectral Data. Remote Sens. Technol. Appl. 2006, 288–293. [Google Scholar]
  37. Lv, J.; Hao, N.Y.; Shi, X.L. Research on Feature Extraction of Soil Hyperspectral Data Based on Manifold Learning. Arid. Zone Res. Environ. 2015, 29, 176–180. [Google Scholar]
  38. Zhang, B. Advancement of Hyperspectral Image Processing and Information Extraction. J. Remote Sens. 2016, 20, 1062–1090. [Google Scholar] [CrossRef]
  39. Zhou, X.M.; Wang, N.; Wu, H. Comparison of two Methods for Atmospheric Correction of Hyper-spectral Thermal Infrared Data. J. Remote Sens. 2012, 16, 796–808. [Google Scholar]
  40. Ma, L.L.; Wang, X.H.; Tang, L.L. A Highly Efficient Atmospheric Correction Method for HJ-1A/HSI and the Exploration on Its Application Capability. Remote Sens. Technol. Appl. 2010, 25, 525–531. [Google Scholar]
  41. Wu, B.; Miao, F.; Ye, C.M.; Huang, S.H.M.; Bi, X.J. Atmospheric Correction of Hyperspectral Remote Sensing Image Based on FLAASH. Comput. Tech. Geophys. Geochem. Explor. 2010, 32, 442–445+340. [Google Scholar]
  42. Li, X.; Tang, W.R.; Zhang, Y.H.; Xie, Q.; Zhang, F.; Wu, R.S.; Chen, X.J.; Xia, C.; Zeng, S.H.; Liu, L. Discrimination Model of Tobacco Leaf Field Maturity Based on Hyperspectral Imaging Technology. Tob. Sci. Technol. 2022, 55, 17–24. [Google Scholar]
  43. Li, M.; Hu, C.L.; Tao, G.L. Research on Hyperspectral Remote Sensing Estimation of SPAD Value of Camellia Oleifera Leaves Based on Different Preprocessing Methods. Jiangsu For. Sci. Technol. 2022, 49, 1–5. [Google Scholar]
  44. Ding, Z.; Chang, B.S. Preprocessing Methods of Near-Infrared Reflectance Spectral Data for Coal Gangue Identification. Ind. Mine Autom. 2021, 47, 93–97. [Google Scholar]
  45. Wen, P.; Li, H.J.; Lei, H.Y.; Zhang, F. Research on Hyperspectral Data Acquisition and Preprocessing Methods of Water-Injected Lamb. J. Inn. Mong. Agric. Univ. (Nat. Sci. Ed.) 2021, 42, 79–84. [Google Scholar]
  46. Cai, T.J.; Tang, H. Overview of the Least Squares Fitting Principle of Savitzky-Golay Smoothing Filters. Digit. Commun. 2011, 38, 63–68+82. [Google Scholar]
  47. Zhao, A.X.; Tang, X.J.; Zhang, Z.H.; Liu, J.H. Optimizing Savitzky-Golay Parameters and Its Smoothing Pretreatment for FTIR Gas Spectra. Spectrosc. Spectr. Anal. 2016, 36, 1340–1344. [Google Scholar]
  48. Gao, N.; Du, Z.H.; Qi, R.B.; Li, J.Y.; Zhou, T.; Zhou, K.; Wang, Y. Data Preprocessing of Broad-Spectrum Tunable-Diode-Laser Absorption Spectroscopy. Acta Opt. Sin. 2012, 32, 304–309. [Google Scholar]
  49. Ma, Y.; Zhang, R.Y. Support Vector Machine Identification of Fresh Apricot Defects with Different Spectral Pre-Processing Methods. Xinjiang Farm Res. Sci. Technol. 2017, 40, 39–41. [Google Scholar]
  50. Zhang, B.; Sun, Y.S.; Li, X.J.; Liu, P. Development and Prospect of Hyperspectral Target Classification Technology. Infrared 2023, 44, 22. [Google Scholar]
  51. Lian, M.R.; Zhang, S.J.; Ren, R.; Chi, J.T.; Mu, B.Y.; Sun, S.S. Non-destructive Detection of Moisture Content in Fresh Sweet Corn Based on Hyperspectral Technology. Food Mach. 2021, 37, 127–132. [Google Scholar]
  52. Deng, J.M.; Wang, H.J.; Li, Z.Z.; Li, Y.H. External Quality Detection of Potatoes Based on Hyperspectral Technology. Food Mach. 2016, 32, 122–125+211. [Google Scholar]
  53. Chi, J.T.; Zhang, S.J.; Ren, R.; Lian, M.R.; Sun, S.S.; Mu, B.Y. External Defect Detection of Eggplants Based on Hyperspectral Imaging. Mod. Food Technol. 2021, 37, 279–284+178. [Google Scholar]
  54. Zhou, C.X.; Shen, J.G.; Jiang, M.L.; Zeng, L.G.; Zhang, C.J.; Fan, Y.L. Non-destructive Identification of Soybean Varieties based on Hyperspectral Imaging Technology and GBDT. China J. Cereals Oils 2023, 38, 183–190. [Google Scholar]
  55. Wang, B.N. Derivative Spectrophotometry Method. Chin. J. Anal. Chem. 1983, 149–158. [Google Scholar]
  56. Liang, L.; Zhang, L.P.; Lin, H.; Li, C.M.; Yang, M.H. Estimating Canopy Leaf Water Content in Wheat Based on Derivative Spectra. Sci. Agric. Sin. 2013, 46, 18–29. [Google Scholar]
  57. Liu, W.; Chang, Q.R.; Guo, M.; Xing, D.X.; Yuan, Y.S. Extraction of First Derivative Spectrum Features of Soil Organic Matter via Wavelet De-Noising. Spectrosc. Spectr. Anal. 2011, 31, 100–104. [Google Scholar]
  58. Shen, Y.; Fang, S.; Wang, F.Y.; Li, Z.; Zhang, C.; Zheng, J.Y. Research on effective bands for identifying minor apple damage based on hyperspectral imaging. China Agric. Sci. Technol. Guide 2020, 22, 64–71. [Google Scholar]
  59. Ni, Z.; Hu, C.Q.; Feng, F. Progress and effect of spectral data pretreatment in NIR analytical technique. Chin. J. Pharm. Anal. 2008, 28, 824–829. [Google Scholar]
  60. DiWu, P.Y.; Bian, X.H.; Wang, Z.F.; Liu, W. Study on the Selection of Spectral Preprocessing Methods. Spectrosc. Spectr. Anal. 2019, 39, 2800–2806. [Google Scholar]
  61. Chen, H.Z.; Pan, T.; Chen, J.M. Optimal Application of Combined Multiple Scattering Correction and Savitzky-Golay Smoothing Model in Near-Infrared Spectral Analysis of Soil Organic Matter. Comput. Appl. Chem. 2011, 28, 518–522. [Google Scholar]
  62. Lu, Y.J.; Qu, Y.L.; Song, M. Research on the Correlation Chart of Near Infrared Spectra by Using Multiple Scatter Correction Technique. Spectrosc. Spectr. Anal. 2007, 877–880. [Google Scholar]
  63. He, J.W.; Wang, H.W.; Ji, H.W. Research on identification technology of fresh and frozen-thawed beef based on near-infrared hyperspectral imaging. Food Ind. Sci. Technol. 2016, 37, 304–307+31. [Google Scholar]
  64. Li, J.; Li, S.K.; Jiang, L.W.; Liu, X.; Ding, S.H.; Li, P. A Nondestructive Method Identifying Varieties of Green Tea Based on Near Infrared Spectroscopy and Chemometrics. J. Instrum. Anal. 2020, 39, 1344–1350. [Google Scholar]
  65. Li, S.K.; Du, G.R.; Li, P.; Jiang, L.W.; Liu, X. Nondestructive Identification of Soybean Milk Powder Based on Near Infrared Spectroscopy and Optimized Pretreatment Method. Food Res. Dev. 2020, 41, 144–150. [Google Scholar]
  66. Wu, Y.R.; Jin, X.K.; Feng, J.Q.; Zhang, H.F.; Qiu, Y.J.; Yang, J.Y.; Cong, M.F.; Zhu, C.Y. Cotton Impurity Detection Based on Hyperspectral Imaging Technology. J/OL. Adv. Text. Technol. 1–11, [2024-08-13]. [Google Scholar]
  67. Chen, S.Y.; Zhang, Y.C.; Yang, J.; Cai, M.S.; Zhang, Q.B.; He, P.M.; Tu, Y.Y. Determination of Storage year of white Tea Based on Hyperspectral Imaging Technology. Food Ind. Sci. Technol. 2021, 42, 276–283. [Google Scholar]
  68. Wang, X.S.; Qi, D.W.; Huang, A.M. Study on Denoising Near Infrared Spectra of Wood Based on Wavelet Transform. Spectrosc. Spectr. Anal. 2009, 29, 2059–2062. [Google Scholar]
  69. Xie, J.C.; Zhang, D.L.; Xu, W.L. Overview on Wavelet Image Denoising. J. Image Graph. 2002, 3–11. [Google Scholar]
  70. Wen, L.; Liu, Z.S.; Ge, Y.J. Several Methods of Wavelet Denoising. J. Hefei Univ. Technol. (Nat. Sci.) 2002, 167–172. [Google Scholar]
  71. Qin, X.; Shen, L.S. Wavelet Transform and Its Application in Spectral Analysis. Spectrosc. Spectr. Anal. 2000, 892–897. [Google Scholar]
  72. Huang, Q.; Shen, J.G.; Jiang, M.L.; Feng, C.G.; Fang, X.S.; Zhang, C.J.; Shi, X.W. Study on Non-destructive Detection of Peanut Water Content Based on Hyperspectral Imaging Technology and BO-XGBoost. China J. Cereals Oils 2023, 38, 135–140. [Google Scholar]
  73. Su, H.J. Dimensionality Reduction for Hyperspectral Remote Sensing: Advances, Challenges, and Prospects. Natl. Remote Sens. Bull. 2022, 26, 1504–1529. [Google Scholar] [CrossRef]
  74. Zhou, Z.; Yang, Y.; Zhang, G.; Xu, L.B.; Wang, M.Q.; Zhu, Q.B. Dimensionality Reduction Algorithm for Hyperspectral Image Based on Self-Supervised Learning. Laser Optoelectron. Prog. 2024, 61, 363–371. [Google Scholar]
  75. Jiang, Y.H.; Wang, T.; Chang, H.W. An Overview of Hyperspectral Image Feature Extraction. Electron. Opt. Control 2020, 27, 73–77. [Google Scholar]
  76. Xiao, P.; Wang, X.H. Hyperspectral Data Classification based on Feature Extraction. J. Northwest Univ. (Nat. Sci. Ed.) 2006, 497–500. [Google Scholar]
  77. Wang, Y.F.; Tang, Z.N. Dimensionality Reduction Method for Multispectral Data Based on PCA and ICA. Opt. Tech. 2014, 40, 180–183. [Google Scholar] [CrossRef]
  78. Li, B.; Liu, Z.Y.; Huang, J.F.; Zhang, L.L.; Zhou, W.; Shi, J.J. Hyperspectral Identification of Rice Diseases and Pests based on Principal Component Analysis and Probabilistic Neural Network. Trans. Chin. Soc. Agric. Eng. 2009, 25, 143–147. [Google Scholar]
  79. Zhang, L. Study on the Hyperspectral Remote Sensed Image Classify based on PCA and SVM. Opt. Tech. 2008, 34, 184–187. [Google Scholar]
  80. Tian, Y.; Zhao, C.H.; Ji, Y.X. The Principal Component Analysis Applied to Hyperspectral Remote Sensing Image Dimensional Reduction. Nat. Sci. J. Harbin Norm. Univ. 2007, 58–60. [Google Scholar]
  81. Feng, Y.; He, M.Y.; Song, J.H.; Wei, J. ICA-Based Dimensionality Reduction and Compression of Hyperspectral Images. J. Electron. Inf. Technol. 2007, 2871–2875. [Google Scholar]
  82. Liang, L.; Yang, M.H.; Li, Y. Hyperspectral Remote Sensing Image Classification Based on ICA and SVM Algorithm. Spectrosc. Spectr. Anal. 2010, 30, 2724–2728. [Google Scholar]
  83. Luo, W.F.; Zhong, L.; Zhang, B.; Gao, L.R. Independent Component Analysis for Spectral Unmixing in Hyperspectral Remote Sensing Image. Spectrosc. Spectr. Anal. 2010, 30, 1628–1633. [Google Scholar]
  84. Li, H.T.; Gu, H.Y.; Zhang, B.; Gao, L.R. Research on Hyperspectral Remote Sensing Image Classification Based on MNF and SVM. Remote Sens. Inf. 2007, 12–15+25+103. [Google Scholar]
  85. Lin, N.; Yang, W.N.; Wang, B. Feature Extraction of Hyperspectral Remote Sensing Image Based on Kernel Minimum Noise Fraction Transformation. J. Wuhan Univ. (Inf. Sci. Ed.) 2013, 38, 988–992. [Google Scholar]
  86. Liu, B.X.; Zhang, Z.D.; Li, Y.; Chen, P. Oil Spill Information Extraction Method Based on Airborne Hyperspectral Remote Sensing Data. J. Dalian Marit. Univ. 2014, 40, 89–92. [Google Scholar]
  87. Bai, L.; Hui, M. Classification and Feature Extraction of Hyperspectral Images based on Improved Minimum Noise Fraction Transformation. Comput. Eng. Sci. 2015, 37, 1344–1348. [Google Scholar]
  88. Liu, Y.D.; Zhang, G.W.; Cai, L.J. Analysis of Chlorophyll in Gannan Navel Orange with Algorithm of GA and SPA Based on Hyperspectral. Spectrosc. Spectr. Anal. 2012, 32, 3377–3380. [Google Scholar]
  89. Wang, L.G.; Wei, F.J. Band Selection for Hyperspectral Imagery based on Combination of Genetic Algorithm and Ant Colony Algorithm. J. Image Graph. 2013, 18, 235–242. [Google Scholar]
  90. Zhang, T.T.; Zhao, B.; Yang, L.M.; Wang, J.H.; Sun, Q. Determination of Conductivity in Sweet Corn Seeds with Algorithm of GA and SPA Based on Hyperspectral Imaging Technique. Spectrosc. Spectr. Anal. 2019, 39, 2608–2613. [Google Scholar]
  91. Wu, Q.; Zhu, Y.T.; Shi, W.; Wang, T.Y.; Huang, Y.W.; Jiang, D.J.; Liu, X. A New Data Dimension Reduction Method Based on Convolution in The Application of Authenticity Identification of Traditional Chinese Medicine LongGu. J. Phys. Conf. Ser. 2023, 2504, 012035. [Google Scholar] [CrossRef]
  92. Feng, L.L.; Wang, C.P.; Wu, T.J.; Zhang, J.S. Dimensionality Reduction Method for Manifold Learning Based on Variational Autoencoder. J/OL J. Comput.-Aided Des. Comput. Graph. 2024, 1–7. [Google Scholar]
  93. Tan, K.; Du, P.J. Hyperspectral Remote Sensing Image Classification Based on Support Vector Machine. J. Infrared Millim. Waves 2008, 123–128. [Google Scholar] [CrossRef]
  94. Zhao, P.; Tang, Y.H.; Li, Z.Y. Wood Species Recognition with Microscopic Hyper-Spectral Imaging and Composite Kernel SVM. Spectrosc. Spectr. Anal. 2019, 39, 3776–3782. [Google Scholar]
  95. Sun, J.T.; Ma, B.X.; Dong, J.; Yang, J.; Xu, J.; Jiang, W.; Gao, Z.J. Study on Maturity Discrimination of Hami Melon with Hyperspectral Imaging Technology Combined with Characteristic Wavelengths Selection Methods and SVM. Spectrosc. Spectr. Anal. 2017, 37, 2184–2191. [Google Scholar]
  96. Lin, H.; Liang, L.; Zhang, L.P.; Du, P.J. Wheat Leaf Area Index Inversion with Hyperspectral Remote sensing based on Support Vector Regression Algorithm. Trans. Chin. Soc. Agric. Eng. 2013, 29, 139–146. [Google Scholar]
  97. Dong, J.J.; Tian, Y.; Zhang, J.X.; Luan, Z.D.; Du, Z.F. Research on the Classification Method of Benthic Fauna Based on Hyperspectral Data and Random Forest Algorithm. Spectrosc. Spectr. Anal. 2023, 43, 3015–3022. [Google Scholar]
  98. Ke, Y.C.; Shi, Z.K.; Li, P.J.; Zhang, X.Y. Lithological Classification and Analysis Using Hyperion Hyperspectral Data and Random Forest Method. Acta Petrol. Sin. 2018, 34, 2181–2188. [Google Scholar]
  99. Cheng, S.X.; Kong, W.W.; Zhang, C.; Liu, F.; He, Y. Variety Recognition of Chinese Cabbage Seeds by Hyperspectral Imaging Combined with Machine Learning. Spectrosc. Spectr. Anal. 2014, 34, 2519–2522. [Google Scholar]
  100. Zhao, J.L.; Hu, L.; Yan, H.; Chu, G.M.; Fang, Y.; Huang, L.S. Hyperspectral Image Classification Combing Local Binary Patterns and K-nearest Neighbors Algorithm. J. Infrared Millim. Waves 2021, 40, 400–412. [Google Scholar]
  101. Tu, B.; Zhang, X.F.; Zhang, G.Y.; Wang, J.P.; Zhou, Y. Hyperspectral Image Classification Via Recursive Filtering and KNN. Remote Sens. Land Resour. 2019, 31, 22–32. [Google Scholar]
  102. Huang, H.; Zheng, X.L. Hyperspectral Image Classification with Combination of Weighted Spatial-Spectral and KNN. Opt. Precis. Eng. 2016, 24, 873–881. [Google Scholar] [CrossRef]
  103. Wang, L.; Qin, H.; Li, J.; Zhang, X.B.; Yu, L.N.; Li, W.J.; Huang, L.Q. Geographical Origin Identification of Lycium Barbarum Using Near-Infrared Hyperspectral Imaging. Spectrosc. Spectr. Anal. 2020, 40, 1270–1275. [Google Scholar]
  104. Ji, H.Y.; Ren, Z.Q.; Rao, Z.H. Discriminant Analysis of Millet from Different Origins Based on Hyperspectral Imaging Technology. Spectrosc. Spectr. Anal. 2019, 39, 2271–2277. [Google Scholar]
  105. Zhu, M.Y.; Yang, H.B.; Li, Z.W. Early Detection and Identification of Rice Sheath Blight Disease Based on Hyperspectral Image and Chlorophyll Content. Spectrosc. Spectr. Anal. 2019, 39, 1898–1904. [Google Scholar]
  106. Zheng, Y.P.; Li, G.Y.; Li, Y. Survey of Application of Deep Learning in Image Recognition. Comput. Eng. Appl. 2019, 55, 20–36. [Google Scholar]
  107. Jia, S.P.; Gao, H.J.; Hang, X. Research Progress on Image Recognition Technology of Crop Pests and Diseases Based on Deep Learning. Trans. Chin. Soc. Agric. Mach. 2019, 50, 313–317. [Google Scholar]
  108. Wang, B.; Fan, D.L. A Review of Research Progress on Remote Sensing Image Classification and Recognition Based on Deep Learning. Bull. Surv. Mapp. 2019, 99–102+136. [Google Scholar]
  109. Liu, J.X.; Ban, W.; Chen, Y.; Sun, Y.Q.; Zhuang, H.F.; Fu, E.J.; Zhang, K.F. Multi-Dimensional CNN Fused Algorithm for Hyperspectral Remote Sensing Image Classification. Chin. J. Lasers 2021, 48, 159–169. [Google Scholar]
  110. Wei, X.P.; Yu, X.C.; Zhang, P.Q.; Zhi, L.; Yang, F. CNN with Local Binary Patterns for Hyperspectral Images Classification. J. Remote Sens. 2020, 24, 1000–1009. [Google Scholar] [CrossRef]
  111. Yu, C.C.; Zhou, L.; Wang, X.; Wu, J.Z.; Liu, Q. Hyperspectral Detection of Unsound Kernels of Wheat Based on Convolutional Neural Network. Food Sci. 2017, 38, 283–287. [Google Scholar]
  112. Huang, S.P.; Sun, C.; Qi, L.; Ma, X.; Wang, W.J. Rice Panicle Blast Identification Method based on Deep Convolution Neural Network. Trans. Chin. Soc. Agric. Eng. 2017, 33, 169–176. [Google Scholar]
  113. Luo, J.H.; Li, M.Q.; Zheng, Z.Z.; Li, J. Hyperspectral Remote Sensing Images Classification Using a Deep Convolutional Neural Network Model. J. Xihua Univ. (Nat. Sci. Ed.) 2017, 36, 13–20. [Google Scholar]
  114. Yan, M.J.; Su, X.Y. Hyperspectral Image Classification Based on Three-Dimensional Dilated Convolutional Residual Neural Network. Acta Opt. Sin. 2020, 40, 163–172. [Google Scholar]
  115. Zhang, X.D.; Wang, T.J.; Yang, Y. Classification of Small-Sized Sample Hyperspectral Images Based on Multi-Scale Residual Network. Laser Optoelectron. Prog. 2020, 57, 348–355. [Google Scholar]
  116. Lu, Y.S.; Li, Y.X.; Liu, B.; Liu, H.; Cui, L.L. Hyperspectral Data Haze Monitoring Based on Deep Residual Network. Acta Opt. Sin. 2017, 37, 314–324. [Google Scholar]
  117. Zhang, F.; Wang, W.X.; Wang, C.S.; Zhou, J.; Pan, Y.; Sun, J.F. Study on Hyperspectral Detection of Potato Dry Rot in Gley Stage Based on Convolutional Neural Network. Spectrosc. Spectr. Anal. 2024, 44, 480–489. [Google Scholar]
  118. Zhao, S.; Fu, Y.; Cui, J.N.; Lu, Y.; Du, X.D.; Li, Y.L. Application of Hyperspectral Imaging in the Diagnosis of Acanthopanax Senticosus Black Spot Disease. Spectrosc. Spectr. Anal. 2021, 41, 1898–1904. [Google Scholar]
  119. Huang, Y.Q.; Liu, X.H.; Huang, Z.Y.; Qian, W.Q.; Liu, S.Y.; Qiao, X. Identification of Early Wheel Spot and Rust on Sugarcane Leaves Based on Spectral Analysis. Trans. Chin. Soc. Agric. Mach. 2023, 54, 259–267. [Google Scholar]
  120. Kuang, R.; Long, T.; Liu, H.L.; Wu, J.H.; Lv, J.S.; Xie, Z.R.; Liu, W.T.; Lan, Y.B.; Long, Y.B.; Wang, Z.H.; et al. Hyperspectral Imaging Technology Combined with Ensemble Learning for Nitrogen Detection in Dendrobium nobile. Spectrosc. Spectr. Anal. 2024, 44, 1918–1927. [Google Scholar]
  121. Wu, L.G.; Zhang, Y.; Wang, S.L.; Li, J.S. Detection of Soil Salt Content based on Hyperspectral Imaging. Optoelectron.-Laser 2020, 31, 388–394. [Google Scholar]
  122. Zhang, F.; Yu, H.; Xiong, Y.; Zhang, F.Y.; Wang, X.Y.; Lv, Q.F.; Wu, Y.G.; Zhang, Y.K.; Fu, S.L. Hyperspectral Non-Destructive Detection of Heat-Damaged Maize Seeds. Spectrosc. Spectr. Anal. 2024, 44, 1165–1170. [Google Scholar]
  123. Yan, J.W. Introduction to the Application and Development of Airborne Hyperspectral Imaging Technology. Robot Tech. Appl. 2024, 60–64. [Google Scholar]
Figure 1. Schematic diagram of the indoor acquisition system.
Figure 2. Flowchart of data processing.
Figure 3. The effect of S–G smoothing treatment.
Figure 4. First-order and second-order effects.
Figure 5. Effect of MSC calibration.
Figure 6. Correction diagram of SNV effect.
Figure 7. Flowchart of the main stages of the genetic algorithm.
Figure 8. Random forest example diagram.
Figure 9. Example of a residual block.
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
