Article

Classification of Different Winter Wheat Cultivars on Hyperspectral UAV Imagery

1 School of Surveying and Land Information Engineering, Henan Polytechnic University, Jiaozuo 454003, China
2 Collaborative Innovation Center of Geo-Information Technology for Smart Central Plains, Zhengzhou 450045, China
3 Key Laboratory of Spatiotemporal Perception and Intelligent Processing, Ministry of Natural Resources, Zhengzhou 450045, China
4 Institute of Surveying and Mapping, Information Engineering University, Zhengzhou 450001, China
* Author to whom correspondence should be addressed.
Appl. Sci. 2024, 14(1), 250; https://doi.org/10.3390/app14010250
Submission received: 5 November 2023 / Revised: 20 December 2023 / Accepted: 26 December 2023 / Published: 27 December 2023
(This article belongs to the Special Issue New Advances of Remote Sensing in Agriculture)

Abstract:
Crop phenotype observation techniques via UAVs (unmanned aerial vehicles) are needed to identify different winter wheat cultivars, to better realize their future smart production, and to satisfy the requirements of smart agriculture. This study proposes a UAV-based hyperspectral remote sensing system for the fine classification of different winter wheat cultivars. Firstly, we set 90% heading overlap and 85% side overlap as the optimal flight parameters, which meet the requirements of the subsequent hyperspectral imagery mosaicking and spectral stitching across the areas of different winter wheat cultivars. Secondly, a mosaicking algorithm for UAV hyperspectral imagery was developed; the correlation coefficient of stitched spectral curves before and after mosaicking reached 0.97, which allowed this study to extract reliable spectral curves of six different winter wheat cultivars. Finally, hyperspectral imagery dimension reduction experiments compared principal component analysis (PCA), minimum noise fraction rotation (MNF), and independent component analysis (ICA), and winter wheat cultivar classification experiments compared support vector machines (SVM), maximum likelihood estimation (MLE), and the U-Net-based ENVINet5 model. Different dimension reduction and classification methods were compared to find the best combination for classifying different winter wheat cultivars. The results show that the mosaicked hyperspectral imagery effectively retains the original spectral feature information, and that the type 4 and type 6 winter wheat cultivars have the best classification results, with classification accuracies above 84%. Meanwhile, classification accuracy improves by about 30% after dimension reduction; the combination of MNF dimension reduction and ENVINet5 classification performs best, with an overall accuracy of 83% and a Kappa coefficient of 0.81.
The results indicate that the UAV-based hyperspectral remote sensing system can potentially be used for classifying different cultivars of winter wheat, and it provides a reference for the classification of crops with weak intra-class differences.

1. Introduction

Wheat is one of the staple food crops worldwide, with winter wheat as a primary cultivar in many countries [1,2,3]. There are many cultivars of winter wheat, and different cultivars have different requirements for water, soil fertility, climate, etc., in each phenological phase of their growth [4,5,6]. To accurately provide different cultivars of winter wheat with the best growth conditions and monitor their growth in time, the first step is to identify and classify them. This is also a main research topic in the development of smart agriculture [7]. The earliest fine classification of crops relied mainly on morphological and biological investigation methods [8]. Although these methods have high accuracy, they are too time-consuming and laborious to be effectively applied on a large scale [9]. In recent decades, the rapid development of satellite remote sensing technology has provided new options for the classification of winter wheat. Classifying different cultivars of winter wheat is essentially a task of classifying features with weak intra-class variation in remote sensing imagery [10], since different cultivars of winter wheat are sub-categories of wheat with similar phenotypes and consistent texture features [11]. Many studies have used satellite data, such as Landsat, MODIS, and Sentinel, for crop classification because of their wide coverage and low cost [12,13,14,15]. Satellite remote sensing can collect data at a large spatial scale, but its practical application in fine-scale agricultural monitoring is limited by long revisit periods and cloud occlusion [16,17]. In addition, although the spatial resolution of conventional multispectral remote sensing images has improved, their spectral resolution remains low [18]. These limitations are obvious in the classification of different crops with similar spectral and texture information, such as different cultivars of winter wheat.
UAVs are a quickly evolving technology, and the use of UAV-based hyperspectral cameras in smart agriculture is currently an area of great research interest and development [19,20,21]. Owing to their low flying height and limited ground coverage, UAV-based hyperspectral cameras can obtain images with a resolution of 2–5 cm or finer, giving these images both high spatial and high spectral resolution [22,23,24]. In addition, UAVs can be used to monitor areas that are difficult for humans to reach. Thus, such images are widely used to detect subtle spectral differences between crops and provide a unique data source for the precise classification of crops with similar spectral curves. For example, Yan et al. used UAV-based multi-angle hyperspectral remote sensing for the fine classification of rich vegetation types; classification based on multi-angle hyperspectral data (OA = 89.2%, Kappa = 0.870) was much better than classification based only on digital orthophoto maps (OA = 39.8%, Kappa = 0.301) [25]. Wei et al. employed UAV-based hyperspectral datasets to identify different crops in the cities of Hanchuan and Honghu in China; their proposed classification method effectively improved classification accuracy, protected the edges and shapes of the features, and relieved excessive smoothing while retaining detailed information [26]. Liu et al. used UAV hyperspectral imagery to evaluate the impact of different spatial resolutions on the identification of crop types in highly fragmented planting areas; their study also showed that the appropriate feature parameters vary with scale, so it is important to select them according to the classification task and user requirements [27]. These studies provide ideas for the classification of UAV-based hyperspectral imagery. However, their research subjects are mainly different kinds of crops; few studies focus on classifying the same crop with different cultivars from UAV-based hyperspectral imagery.
Although UAV-based hyperspectral technology has unique advantages and can make full use of ground object information for extraction and classification, several critical problems arise in processing hyperspectral data: (1) The curse of dimensionality, due to the large number of spectral channels [28]. Numerous channels with redundant information between them easily lead to the dimension disaster. For hyperspectral data, a higher spectral dimension does not guarantee higher classification accuracy; thus, it is necessary to effectively reduce the data dimension and remove unnecessary information during hyperspectral data processing [29]. (2) Mixed pixel phenomena [30]. The increased spatial resolution makes hyperspectral data contain more detailed features, resulting in spectral changes and heterogeneity within the same feature class. This characteristic reduces spectral separability and increases salt-and-pepper classification noise [31], a major interference when classifying data with complex feature types, such as urban features. (3) The limited number of labeled training samples [32]. In hyperspectral imagery classification, labeling samples is difficult, and the available training samples are limited. (4) The selection of appropriate classification methods [33]. Many classification methods exist for hyperspectral data, and appropriate ones must be selected to improve classification accuracy based on the classification task and user requirements.
In this context, this study aimed to improve the classification accuracy of different cultivars of winter wheat, which have weak intra-class differences. The wheat research base in Wuzhi County, Henan Province, was taken as the experimental area. Firstly, this study integrated a DJI M600 UAV (DJI Technology Co., Shenzhen, China) and a UHD185 hyperspectral camera (Cubert GmbH, Ulm, Germany) into a hyperspectral remote sensing system to obtain hyperspectral imagery of winter wheat in the study area. Secondly, based on the obtained hyperspectral data of different winter wheat cultivars, this study compared different data dimensionality reduction and classification methods to achieve better classification accuracy. Finally, this study also attempted to demonstrate the potential of deep learning in classifying hyperspectral imagery by using the U-Net-based ENVINet5 model in data processing. This user-friendly model can easily be adopted by both users and researchers. The study provides a reference for the accurate management and classification of different winter wheat cultivars and supports cultivar identification in related crop research.

2. Materials and Methods

2.1. Study Area

The study area (35°07′59″ N~35°08′0.822″ N; 113°15′26.91″ E~113°15′28″ E) is located in Wuzhi County, Jiaozuo City, Henan Province, as shown in Figure 1. It belongs to the Yellow River and Qinhe River alluvial plain in the Huang-Huai-Hai Plain. The average altitude is 92 m, the climate is a warm temperate continental monsoon climate, the annual average precipitation is 575.1 mm, and the annual average temperature is 14.4 °C. There are 211 frost-free days per year. The main soil type is tidal soil, which is fertile and conducive to crop cultivation. The main crops in this area are winter wheat and summer corn. The study area was divided into 90 rectangular blocks in 18 rows and 5 columns, each block with an area of 4 m × 3 m. Six cultivars of winter wheat (Aikang58, Bainong207, Jimai22, Jiamai8, Nongda5181, Pumai168) were selected for this study. All are mid-late maturity varieties that can be harvested around 228 days after sowing. The six cultivars were sown on 30 October 2022 from north to south and coded from type 1 to type 6, with every three rows sown with the same cultivar. The nitrogen fertilizer was urea (containing 46% pure nitrogen), applied at a rate of 90 kg/ha before sowing and 90 kg/ha during the tillering stage. The phosphorus fertilizer was calcium superphosphate (containing 12% P2O5), and the potassium fertilizer was potassium chloride (containing 60% K2O), both applied at a rate of 135 kg/ha together with the nitrogen fertilizer before sowing. The UHD185 hyperspectral data were collected at the booting stage of winter wheat for the subsequent fine classification experiments.

2.2. Data Acquisition

A UHD185 hyperspectral camera and a DJI M600 multi-rotor UAV were integrated for data collection in the study area, as shown in Figure 2. The UHD185 hyperspectral camera was developed by the Cubert company in Germany. It is a frame-type, non-scanning, real-time imaging hyperspectral imager and is easy to integrate with a UAV system. Its main parameters are as follows: spectral range, 450~950 nm; spectral resolution, 8 nm; spectral sampling interval, 4 nm; number of spectral channels, 125; cube resolution, 1000 × 1000 pixels; and digital resolution, 12 bit. The DJI M600 Pro multi-rotor UAV was used as the sensor platform in this study; it has sufficient power and excellent stability. Data collection was performed on 1 May 2023 (at the booting stage of winter wheat) from 11:30 to 12:10, when there was no cloud or wind. After several trials, the flight height, heading overlap, and side overlap were set to 100 m, 90%, and 85%, respectively. Before collecting data, black and white reference panels were used for radiometric calibration of the camera. Control points made of A4 paper were evenly distributed in the study area, and their locations were measured by RTK.
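The black-and-white panel calibration mentioned above amounts to a two-point (empirical line) correction from raw digital numbers to reflectance. The following NumPy sketch illustrates the idea; the function name and toy values are illustrative assumptions, not taken from the paper:

```python
import numpy as np

def two_point_radiometric_calibration(raw, dark_ref, white_ref, white_reflectance=1.0):
    """Convert a raw DN cube (H, W, bands) to reflectance using per-band
    mean DN values measured over the dark and white calibration panels."""
    raw = raw.astype(np.float64)
    denom = np.clip(white_ref - dark_ref, 1e-6, None)  # avoid division by zero
    reflectance = (raw - dark_ref) / denom * white_reflectance
    return np.clip(reflectance, 0.0, 1.0)

# Toy cube: 2 x 2 pixels, 3 bands
raw = np.array([[[100, 200, 300], [50, 150, 250]],
                [[0, 100, 200], [200, 400, 600]]], dtype=np.float64)
dark = np.array([0.0, 0.0, 0.0])
white = np.array([200.0, 400.0, 600.0])
refl = two_point_radiometric_calibration(raw, dark, white)
```

In practice the panel DN values would be averaged over many panel pixels per band before calibrating the whole flight line.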

2.3. Data Processing

The flowchart in Figure 3 illustrates the data processing methodology in this study, including the hyperspectral imagery collection, data pre-processing, method, and results and evaluation. Detailed methods of hyperspectral imagery dimension reduction and classification are shown in the flowchart.

2.3.1. Hyperspectral Imagery Pre-Processing

The pre-processing of UAV-acquired hyperspectral data mainly includes three parts: radiometric correction, imagery mosaicking, and spectral stitching. Radiometric correction was completed in Matlab based on the center wavelength and half-width of the UHD185 bands [34]. Considering the limited hardware configuration of the ground data-processing station and the large memory footprint of the stitched full-band hyperspectral imagery, the data were first exported in JPG and cue formats with the Cubert-PILOT software provided by Cubert. To mosaic the UHD185 imagery into a hyperspectral map, this study first used a pansharpening algorithm to fuse each single panchromatic image of 1000 × 1000 pixels with the corresponding hyperspectral image of 50 × 50 pixels. Secondly, image feature points were extracted and matched with the improved SIFT (scale-invariant feature transform) algorithm in PhotoScan software, and IDL (Interactive Data Language) programs were used to extract and merge the hyperspectral sub-band imagery. Finally, the geocoding of the hyperspectral imagery was completed in ArcGIS to form a hyperspectral map with a spatial resolution of 2.6 cm.
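The paper does not specify which pansharpening variant was used to fuse the 1000 × 1000 panchromatic image with the 50 × 50 hyperspectral cube; as one common option, a Brovey-style ratio sharpening can be sketched as follows (nearest-neighbour upsampling keeps the example dependency-free):

```python
import numpy as np

def brovey_pansharpen(pan, ms_lowres):
    """Sharpen a low-resolution spectral cube with a high-resolution pan band.
    pan: (H, W); ms_lowres: (h, w, B), where H and W are integer multiples of h and w."""
    H, W = pan.shape
    h, w, B = ms_lowres.shape
    fy, fx = H // h, W // w
    # Nearest-neighbour upsampling of the spectral cube onto the pan grid
    ms_up = np.repeat(np.repeat(ms_lowres, fy, axis=0), fx, axis=1).astype(np.float64)
    # Scale each pixel's spectrum so its mean intensity matches the pan band
    intensity = ms_up.mean(axis=2, keepdims=True)
    ratio = pan[..., None] / np.clip(intensity, 1e-6, None)
    return ms_up * ratio

pan = np.full((4, 4), 2.0)       # toy pan band
ms = np.ones((2, 2, 3))          # toy low-res 3-band cube
sharpened = brovey_pansharpen(pan, ms)
```

Ratio-based sharpening preserves relative band shapes per pixel, which matters when the sharpened spectra are later used for classification.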
This study then evaluated the overall quality of both the spatial and spectral information of the pre-processed image, because both play important roles in the subsequent classification. In terms of spatial information, the pre-processed hyperspectral orthophoto in Figure 4 was used for the overall evaluation. As Figure 4 shows, the stitched hyperspectral imagery effectively conveys spatial geometric information: the geometric positions of ground objects are stitched accurately, and the whole image is clear without stitching cracks.
To evaluate the spectral information of the pre-processed imagery, five typical ground objects corresponding to the mosaicked hyperspectral orthographic image and the original hyperspectral image were selected, respectively. Their spectral data information was extracted and then analyzed and evaluated based on qualitative and quantitative methods. Qualitative analysis was carried out by drawing the spectral curve of the extracted spectral data, and the results are shown in Figure 5. It can be seen from Figure 5 that the spectral curves of the selected ground objects of the original hyperspectral images are close to the spectral curves of the hyperspectral orthophoto after spectral curves stitching.
Quantitative analysis was carried out by analyzing the correlation of the extracted spectral data, and the results are shown in Figure 6. The correlation coefficient of stitched spectral curves before and after mosaicking reached 0.97, and the imagery after mosaicking effectively retained the original spectral feature information.
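The quantitative check above is a Pearson correlation between spectral curves before and after mosaicking; a minimal sketch with synthetic curves (the 0.02 noise level is an illustrative assumption, not a measured value):

```python
import numpy as np

def spectral_correlation(curve_a, curve_b):
    """Pearson correlation coefficient between two spectral curves."""
    a = np.asarray(curve_a, dtype=np.float64)
    b = np.asarray(curve_b, dtype=np.float64)
    return float(np.corrcoef(a, b)[0, 1])

# Synthetic stand-ins for the 125-band UHD185 curves (450-950 nm)
wavelengths = np.linspace(450, 950, 125)
original = np.sin(wavelengths / 100.0) + 1.5
mosaicked = original + np.random.default_rng(0).normal(0, 0.02, original.shape)
r = spectral_correlation(original, mosaicked)
```

A coefficient near 1 (such as the 0.97 reported here) indicates the mosaicking preserved the shape of the original spectra.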

2.3.2. Dimension Reduction of Hyperspectral Imagery

Hyperspectral data involve a large amount of data with rich information and high dimensionality, which brings processing problems such as information redundancy, low processing efficiency, and storage difficulties, so dimension reduction is needed [35]. In this context, principal component analysis (PCA), minimum noise fraction rotation (MNF), and independent component analysis (ICA), which are commonly used for dimension reduction, were applied in this study to reduce the redundancy of the hyperspectral data. The band eigenvalue transformation results of each method are shown in Figure 7. Each method has an inflection point: the eigenvalues are large before it and nearly zero after it. In other words, the eigenvalues of the first few bands change rapidly and contain more useful information, whereas those of the later bands change slowly and contain mostly redundant information. Thus, the inflection points can be used to select useful band information and discard redundant information. Based on the dimension reduction results, the first 10 components of PCA, the first 20 components of MNF, and the first 10 components of ICA were selected as the spectral features after dimensionality reduction.
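The eigenvalue inflection-point selection described above can be approximated with a cumulative explained-variance threshold. A minimal NumPy PCA sketch on synthetic 125-band pixels (the 99% threshold and the synthetic data are illustrative assumptions, not the paper's settings):

```python
import numpy as np

def pca_reduce(pixels, var_threshold=0.99):
    """PCA via eigendecomposition of the band covariance matrix.
    pixels: (n_pixels, n_bands). Components are kept up to the scree
    'inflection point', approximated by a cumulative-variance threshold."""
    centered = pixels - pixels.mean(axis=0)
    cov = np.cov(centered, rowvar=False)
    eigvals, eigvecs = np.linalg.eigh(cov)
    order = np.argsort(eigvals)[::-1]             # sort eigenvalues descending
    eigvals, eigvecs = eigvals[order], eigvecs[:, order]
    ratio = eigvals / eigvals.sum()
    n_keep = int(np.searchsorted(np.cumsum(ratio), var_threshold) + 1)
    return centered @ eigvecs[:, :n_keep], n_keep

rng = np.random.default_rng(42)
latent = rng.normal(size=(500, 10))               # 10 latent spectral sources
mixing = rng.normal(size=(10, 125))               # mixed into 125 bands
pixels = latent @ mixing + 0.01 * rng.normal(size=(500, 125))
reduced, n_keep = pca_reduce(pixels)
```

On this synthetic cube, the scree collapses after roughly the tenth component, mirroring the sharp inflection seen in Figure 7.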

2.3.3. Data Classification Methods

Maximum likelihood estimation (MLE), support vector machine (SVM), and the improved ENVINet5 classification model based on the U-Net neural network structure were each used to conduct fine classification of the obtained hyperspectral imagery. Overall accuracy and the Kappa coefficient were used to evaluate the classification results.
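Both evaluation metrics can be derived from the confusion matrix: overall accuracy is the observed agreement, and Cohen's Kappa corrects it for chance agreement. A small sketch (the example matrix is illustrative):

```python
import numpy as np

def overall_accuracy_and_kappa(conf):
    """Overall accuracy and Cohen's Kappa from a confusion matrix
    (rows = reference classes, columns = predicted classes)."""
    conf = np.asarray(conf, dtype=np.float64)
    total = conf.sum()
    po = np.trace(conf) / total                                     # observed agreement (OA)
    pe = (conf.sum(axis=0) * conf.sum(axis=1)).sum() / total ** 2   # chance agreement
    kappa = (po - pe) / (1.0 - pe)
    return po, kappa

conf = np.array([[45, 5],
                 [10, 40]])
oa, kappa = overall_accuracy_and_kappa(conf)
```

For this toy matrix, OA is 0.85 while Kappa is 0.70, showing how Kappa discounts the agreement expected by chance.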
MLE is a classification algorithm based on the Gaussian (normal) distribution and is one of the most frequently used supervised classification methods. Its discriminant function is given in Equation (1):

$$L_i(x) = P(\omega_i \mid x) = \frac{P(x \mid \omega_i)\,P(\omega_i)}{P(x)}, \quad i = 1, 2, \ldots, c \tag{1}$$

In Equation (1), $L_i(x)$ is the probabilistic discriminant function of class $i$; $P(x \mid \omega_i)$ is the conditional probability of $x$ given category $\omega_i$, i.e., the likelihood that vector $x$ belongs to category $\omega_i$; $P(\omega_i)$ is the prior probability of category $\omega_i$; and $P(x)$ is the probability density function of $x$.
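In practice, the discriminant in Equation (1) is evaluated in log space, dropping the class-independent P(x) term. A minimal Gaussian MLE classifier sketch on synthetic data (the small diagonal regularization of the covariance is an illustrative safeguard, not part of the paper's method):

```python
import numpy as np

def fit_gaussian_mle(X_train, y_train):
    """Per-class Gaussian parameters (mean, covariance, prior) from labeled samples."""
    params = {}
    for c in np.unique(y_train):
        Xc = X_train[y_train == c]
        cov = np.cov(Xc, rowvar=False) + 1e-6 * np.eye(X_train.shape[1])
        params[c] = (Xc.mean(axis=0), cov, len(Xc) / len(X_train))
    return params

def predict_mle(X, params):
    """Assign each sample to the class maximising log P(x|w_i) + log P(w_i)."""
    classes, scores = list(params), []
    for c in classes:
        mean, cov, prior = params[c]
        inv, logdet = np.linalg.inv(cov), np.linalg.slogdet(cov)[1]
        diff = X - mean
        mahal = np.einsum('ij,jk,ik->i', diff, inv, diff)   # squared Mahalanobis distance
        scores.append(-0.5 * (mahal + logdet) + np.log(prior))
    return np.array(classes)[np.argmax(scores, axis=0)]

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (100, 3)), rng.normal(4, 1, (100, 3))])
y = np.array([0] * 100 + [1] * 100)
pred = predict_mle(X, fit_gaussian_mle(X, y))
```

With well-separated Gaussian classes, this recovers nearly all labels; in the paper's setting the features would be the dimension-reduced spectral components.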
SVM is a machine learning algorithm based on statistical learning theory. Compared with other traditional classifiers, it offers higher classification accuracy and better generalization ability. In addition, SVM can effectively mitigate the dimension disaster, especially when classifying high-dimensional data with small samples [36]. It can be simplified as an optimal decision function in Equation (2):

$$f(x) = \operatorname{sign}\left( \sum_{i=1}^{n} a_i y_i \,(x_i \cdot x) + b \right) \tag{2}$$

In Equation (2), $x_i$ is a support vector, $a_i$ is the Lagrange multiplier, $y_i \in \{1, -1\}$ is the category label, $b$ is the classification threshold, and $n$ is the number of support vectors. In application, it is important to choose an appropriate SVM kernel function to obtain better classification results. This study tested the linear, polynomial, RBF, and sigmoid kernels for hyperspectral data classification. SVM with the linear kernel achieved the highest overall accuracy and Kappa coefficient when the C value and gamma value were set to 5 and 0.5, respectively; it was therefore used in the following study, and the classification results of the SVM classifiers with different kernel functions are shown in Table A1. Both MLE and SVM were implemented through ENVI and MATLAB. Firstly, the hyperspectral image was converted into matrix format, where each rectangular block represented a sample and each sample contained multiple spectral bands; each sample was labeled with its corresponding cultivar. Secondly, the spectral features of each sample were extracted, cross-validation was used to divide the samples into training and testing sets, and the classifiers were trained on the training set. Finally, the performance of the trained SVM and MLE classifiers was evaluated on the testing set, using overall accuracy and the Kappa coefficient to assess the classification results.
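The ENVI/MATLAB SVM solver used in the paper is not reproduced here; purely for illustration, a linear-kernel SVM can be trained by subgradient descent on the hinge loss, where C plays the same penalty role as the C = 5 setting above (learning rate and epoch count are illustrative assumptions):

```python
import numpy as np

def train_linear_svm(X, y, C=5.0, lr=0.01, epochs=200):
    """Minimal linear SVM: subgradient descent on 0.5*||w||^2 + (C/n)*hinge loss.
    Labels y must be in {-1, +1}."""
    n, d = X.shape
    w, b = np.zeros(d), 0.0
    for _ in range(epochs):
        margins = y * (X @ w + b)
        mask = margins < 1                                   # margin-violating samples
        grad_w = w - C * (y[mask, None] * X[mask]).sum(axis=0) / n
        grad_b = -C * y[mask].sum() / n
        w -= lr * grad_w
        b -= lr * grad_b
    return w, b

def predict_svm(X, w, b):
    return np.sign(X @ w + b)

rng = np.random.default_rng(1)
X = np.vstack([rng.normal(-2, 1, (100, 5)), rng.normal(2, 1, (100, 5))])
y = np.array([-1] * 100 + [1] * 100)
w, b = train_linear_svm(X, y)
acc = (predict_svm(X, w, b) == y).mean()
```

Production solvers (e.g., SMO-based ones) instead optimize the dual and expose the support vectors of Equation (2) explicitly; the sketch only conveys the margin-maximization objective.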
The ENVINet5 classification model was developed for the ENVI software by Harris Geospatial Solutions (now NV5 Geospatial) based on the TensorFlow deep learning framework. Since 2019, the ENVI deep learning module has been released in three versions: Deep Learning V1.0, Deep Learning V1.1 Tech Preview, and Deep Learning V1.1.2. Compared with the first two versions, Deep Learning V1.1.2 has enhanced parameter settings for stronger learning ability; for example, it adds Augmentation as a new training tool, which can both expand the training data and improve training and extraction accuracy. Deep Learning V1.1.2 was officially released in October 2020 and requires the latest ENVI 5.6. Thus, ENVI 5.6 and the ENVI Deep Learning 1.1.2 module were installed and activated from the ENVI official website to run the ENVINet5 classification model in this study. The basic architecture of the ENVINet5 neural network model is based on an improved U-Net neural network structure. U-Net is a classical semantic segmentation algorithm built on fully convolutional networks (FCN). It mainly consists of convolution, max pooling, up-convolution, and ReLU activation functions, as shown in Figure 8. Its operating mechanism can be divided into two parts: the left half is the down-sampling path, which focuses on feature extraction, and the right half is the up-sampling path, in which the extracted feature information is mapped to the category image. To realize pixel-level classification and carry image context information through to higher-resolution feature maps, U-Net concatenates feature maps from the down-sampling path with those of the up-sampling path to compensate for lost context information [37]. Thus, the most important improvement of U-Net lies in the up-sampling part [38].
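The down-sampling/up-sampling/skip-connection mechanism described above can be illustrated with a shape walk-through of one U-Net level, using NumPy stand-ins for max pooling and up-convolution (the channel counts are illustrative; a real U-Net would also apply learned convolutions at each step):

```python
import numpy as np

def max_pool2x2(x):
    """2x2 max pooling: (H, W, C) -> (H/2, W/2, C)."""
    H, W, C = x.shape
    return x.reshape(H // 2, 2, W // 2, 2, C).max(axis=(1, 3))

def upsample2x(x):
    """Nearest-neighbour 2x up-sampling, a stand-in for the learned 'up-conv'."""
    return np.repeat(np.repeat(x, 2, axis=0), 2, axis=1)

# One encoder/decoder level with a skip connection, at the paper's 256x256 patch size
features = np.random.default_rng(0).normal(size=(256, 256, 16))
pooled = max_pool2x2(features)                           # down-sampling path
upsampled = upsample2x(pooled)                           # up-sampling path
merged = np.concatenate([features, upsampled], axis=2)   # skip connection doubles channels
```

The concatenation step is what lets the decoder recover fine spatial detail lost during pooling, which is exactly the context-preserving behavior attributed to U-Net above.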
The advantages of the ENVINet5 classification model are that it is combined with the powerful ENVI image processing platform and can be used to build training samples, initialize models, train models, and run deep neural networks without any coding. This is convenient for researchers and users who hope to apply remote sensing and classify remote sensing imagery without a deep learning background. The module used in this study mainly consists of four steps: creating training samples, creating the model, training the model, and image classification; the first three steps cover model training and parameter setting. For image slicing in deep learning, manual slicing is not required in the ENVINet5 model; instead, the user inputs the images to be sliced (or representative parts of them) and selects an appropriate slice size, and the model automatically slices the input images. In addition, the ENVINet5 model adopts an inverse transformation sampling method and adds a weight parameter when selecting a large number of slices. This not only reduces the number of training iterations and improves training efficiency but also enables the trained model to better select slices containing target elements.
The ENVINet5 model is sensitive to parameters during sample feature learning, similar to parameter tuning in other deep neural network models, but its parameter adjustment is user-friendly. As the module supports multi-element target classification, this study mainly focused on parameter settings such as iteration and training quantity, fixed and fuzzy distance, slice sampling rate, and class weight and loss weight. Based on our computer hardware environment, a patch size of 256 × 256 was set. Since the UHD185 hyperspectral data contained 125 bands after image mosaicking and spectral stitching, directly setting 125 as the number of bands in model initialization would have given the first convolutional kernel 125 input layers and seriously reduced the patch input rate. Thus, the number of bands after optimal dimension reduction was used as the number of bands participating in training. For the iteration- and training-quantity-related parameters, the number of epochs was set to 25, the number of patches per epoch to 200, the number of patches per batch to 4, and the patch sampling rate to 16. As the resolution of the experimental imagery was high and the boundaries of each target element could be drawn, fixed distance and fuzzy distance settings were not required.
After model initialization, the model needed to be trained. Most parameters related to model iteration and training quantity must be set according to the computer hardware environment. Among all the parameters, the settings of class weight and loss weight can influence the accuracy of the model, so this study mainly focused on adjusting these two parameters to acquire satisfactory classification results. After training, the ENVINet5 model can finally perform classification. It should be noted that the image to be classified after dimension reduction should not be smaller than the patch size of the training model.

3. Results and Discussion

3.1. Comparing Spectral Curves of Different Winter Wheat Cultivars

The subtle spectral differences between crop species can be detected by hyperspectral imagery, and those differences help to identify crop types [39]. The more obvious the spectral difference, the easier a crop is to distinguish. The spectral curves of the different winter wheat cultivars in this study are presented in Figure 9. The general trends of the six spectral curves are consistent, with differences in some specific bands. From 750 nm to 850 nm, type 6 and type 4 are relatively easy to distinguish from the other winter wheat types, and type 1 is clearer from 450 nm to 700 nm. These distinctions may contribute to higher classification accuracy. In contrast, type 2 and type 3 are confused with each other across all bands, partly because of their similar chemical components and inherent physical characteristics [40]. According to previous research, both type 2 and type 3 are semi-winter wheat varieties, and their optimal sowing date is around October 10th [41]. However, in this study, all the winter wheat was sown on October 30th to control variables. The postponed sowing date influences plant height and leaf chlorophyll content, which might be reflected in their spectral information and result in confusion. This hypothesis is supported by Chen's and Wang's research showing that the sowing date can affect related agronomic characteristics of winter wheat: with delayed sowing, plant height decreases gradually, and leaf chlorophyll content first increases and then decreases [41,42].

3.2. Comparing Different Dimension Reduction Methods

The classification results of the different winter wheat types can be influenced by the dimension reduction method. Table 1 shows the classification accuracy of the different dimensionality reduction methods. For both the MLE and SVM (linear) classification methods, data obtained by MNF dimension reduction give the best classification results, followed by PCA and ICA. The results indicate that, for dimension reduction of UHD185 hyperspectral data, MNF is better suited to the fine classification of different winter wheat cultivars. The reason could be that the MNF method treats the original hyperspectral data as signal plus noise and separates the two before performing the transform to improve image features. Moreover, compared with PCA, MNF can involve two or more cascaded PCA-style rotations to isolate critical information. Lu used both PCA and MNF to identify apple bruises and found, according to the correct detection rate, that MNF reached more acceptable results [43]. Compared with hyperspectral data after dimension reduction, data without dimensionality reduction yield the poorest classification results. This finding is consistent with the opinion of Chen et al. that a higher spectral dimension does not equal higher classification accuracy, and it is necessary to effectively extract critical information and reduce redundant information and storage space during hyperspectral data processing [28]. However, useful spectral information and noise cannot be completely separated by these methods. From Figure 10, for data processed with PCA and MNF, the longitudinal field bank between the wheat plots cannot be classified as clearly as for data processed with ICA, probably because useful information about the field bank is treated as noise and discarded during feature extraction.
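MNF is often formulated as noise-whitened PCA: estimate the noise covariance, whiten the data by it, then apply PCA in the whitened space, so that components are ordered by signal-to-noise ratio rather than raw variance. A sketch under that formulation (the neighbour-difference noise estimate is a crude illustrative choice; ENVI's implementation may differ in detail):

```python
import numpy as np

def mnf_transform(pixels, n_components):
    """Minimum Noise Fraction as noise-whitened PCA.
    pixels: (n_pixels, n_bands); returns (n_pixels, n_components)."""
    centered = pixels - pixels.mean(axis=0)
    # Crude noise estimate: differences between neighbouring samples
    noise = np.diff(centered, axis=0)
    noise_cov = np.cov(noise, rowvar=False) / 2.0
    # Whitening matrix from the noise covariance eigendecomposition
    evals, evecs = np.linalg.eigh(noise_cov)
    whiten = evecs @ np.diag(1.0 / np.sqrt(np.clip(evals, 1e-12, None))) @ evecs.T
    whitened = centered @ whiten
    # PCA in the noise-whitened space, components sorted by eigenvalue
    evals2, evecs2 = np.linalg.eigh(np.cov(whitened, rowvar=False))
    order = np.argsort(evals2)[::-1]
    return whitened @ evecs2[:, order[:n_components]]

rng = np.random.default_rng(3)
signal = rng.normal(size=(400, 5)) @ rng.normal(size=(5, 60))   # 5 latent sources, 60 bands
pixels = signal + 0.1 * rng.normal(size=(400, 60))
mnf = mnf_transform(pixels, 20)
```

The two cascaded rotations (noise whitening, then PCA) are what the comparison with plain PCA above refers to.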

3.3. Comparing Different Classification Methods

3.3.1. Classification Results of MLE

After dimension reduction by PCA, MNF, and ICA, the obtained data were classified by the MLE method. The overall accuracy (OA) and Kappa coefficient of the classifications are shown in Table 2. Among the dimension reduction methods, data obtained after MNF have the highest classification accuracy, followed by PCA, with ICA lowest; their OAs and Kappa coefficients are 77.23% and 0.7339, 72.86% and 0.6831, and 64.74% and 0.5888, respectively. The overall accuracy difference between the two best dimensionality reduction methods, MNF and PCA, is 4.37%, but the classification accuracy of ICA is much lower than that of MNF and PCA, with a gap of 12.49% from the best accuracy. The MLE classification accuracy of hyperspectral data without dimensionality reduction is only about 50%.
The classification results of MLE are shown in Figure 10a–d. In Figure 10a, there is a large number of misclassifications within each winter wheat cultivar, and the salt-and-pepper noise is serious. In Figure 10b,c, the MLE classification results show less misclassification and salt-and-pepper noise after PCA and MNF dimensionality reduction. However, some types of winter wheat are mixed up, such as type 2 and type 3, and the field bank between the wheat plots cannot be classified. In Figure 10d, on the one hand, compared with PCA-MLE and MNF-MLE, the classification result of ICA-MLE is poor, with serious misclassification among type 3, type 4, and type 5; on the other hand, the classification of field banks is better, especially the longitudinal field bank, which is classified much better than with PCA and MNF. In summary, among the dimension reduction and classification combinations, MNF-MLE is the best, followed by PCA-MLE, with ICA-MLE the poorest. MNF-MLE and PCA-MLE differ little in their classification results, but both are far superior to ICA-MLE.

3.3.2. Classification Results of SVM

The performance of SVM in hyperspectral imagery classification is easily affected by the choice of kernel function and by high-dimensional data. To isolate the effect of dimension reduction, the linear kernel with identical parameters was used in all SVM classifications, which were carried out under the three dimension reduction conditions of PCA, ICA, and MNF. Table 3 shows the OAs and Kappa coefficients. The classification accuracy without dimensionality reduction is below 50%, whereas the accuracies after PCA/MNF/ICA dimension reduction are all above 74%. This improvement of about 30% suggests that classifying the dimension-reduced imagery rather than the full-band imagery not only reduces the data volume and improves computational efficiency but also improves the classification accuracy. The MNF-SVM (Linear) combination has the best accuracy, with an OA of 78.11% and Kappa of 0.7449; PCA-SVM (Linear) is second, with an OA of 74.15% and Kappa of 0.6989; ICA-SVM (Linear) is lowest, with an OA of 74.14% and Kappa of 0.6887, nearly the same OA as PCA but a lower Kappa.
The SVM (linear) classification results are shown in Figure 11 and are consistent with those of the MLE method. Comparing PCA-SVM (linear) and MNF-SVM (linear) in Figure 11a,b, the former shows more serious misclassification and salt-and-pepper noise; among the cultivars, type 2, type 3, and type 5 are most easily confused with each other, especially type 2 and type 3. PCA-SVM (linear) also performs poorly on the field banks between the wheat plots: some longitudinal strips of bare earth among type 3, type 4, and type 5 are misclassified as winter wheat. In contrast, ICA-SVM (linear) classifies the field banks better, although its overall accuracy is lower than that of the other combinations. Across cultivars, the SVM (linear) results show more misclassification and noise for type 2 and type 3, whereas type 4 and type 6 are classified well.
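The kernel comparison summarized in Appendix A (Table A1) can be reproduced in outline with scikit-learn: reduce the data, then train SVMs that differ only in their kernel. The sketch below uses synthetic three-class "spectra" in place of the real imagery, so the accuracies it produces are illustrative only:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

rng = np.random.default_rng(1)

# Synthetic stand-in for hyperspectral pixels: three "cultivars" with
# slightly shifted mean spectra over 40 bands (hypothetical data).
X = np.vstack([rng.normal(m, 0.08, (150, 40)) for m in (0.30, 0.38, 0.46)])
y = np.repeat([0, 1, 2], 150)

Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.3, random_state=0)

# Reduce to a handful of components, then compare kernels with
# otherwise identical parameters, as in Table A1.
pca = PCA(n_components=5).fit(Xtr)
Ztr, Zte = pca.transform(Xtr), pca.transform(Xte)

accs = {}
for kernel in ("linear", "poly", "rbf", "sigmoid"):
    clf = SVC(kernel=kernel, C=1.0).fit(Ztr, ytr)
    accs[kernel] = clf.score(Zte, yte)  # overall accuracy on held-out pixels
```

Swapping `PCA` for an MNF or ICA transform at the same point in the pipeline would reproduce the other rows of Table A1.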

3.3.3. Classification Results of ENVINet5

To obtain the best classification results from the ENVINet5 model, this study analyzed the influence of the classification weight and loss weight parameters. Since the valid range of the classification weight is 0 to 6, it was varied from 0 with a step size of 2; since the valid range of the loss weight is 0 to 3, it was varied with a step size of 1; the number of epochs was fixed at 25. In deep learning models, even with identical parameters, the random initialization and initial slicing influence each training run and can degrade the results [44]. The quality of a trained model is reflected by its loss after multiple iterations, and the minimum loss usually indicates the best combination of classification weight and loss weight; thus, the minimum cross-entropy loss was used as the final evaluation criterion for model training accuracy in this study. The results are shown in Table 4. When the classification weight is fixed, the cross-entropy loss tends to increase with the loss weight; when the loss weight is fixed, the loss generally first increases and then decreases as the classification weight grows. After repeated adjustment of both parameters, the final cross-entropy values range from 0.09 to 0.31, indicating that the model training could still be improved. Among all tested combinations, a classification weight of 2 and a loss weight of 0 gave the best result, with a minimum cross-entropy loss of 0.09.
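The parameter sweep described above amounts to a grid search over the classification weight (0 to 6, step 2) and loss weight (0 to 3, step 1), keeping the pair with the smallest cross-entropy loss. A sketch of that selection logic follows, where the hypothetical `train_once` stand-in simply replays the values published in Table 4 rather than training the network:

```python
# Minimum cross-entropy loss per (classification weight, loss weight)
# combination, taken from Table 4 of this study.
loss_table = {
    (0, 0): 0.12, (2, 0): 0.09, (4, 0): 0.13, (6, 0): 0.21,
    (0, 1): 0.14, (2, 1): 0.23, (4, 1): 0.31, (6, 1): 0.24,
    (0, 2): 0.13, (2, 2): 0.27, (4, 2): 0.25, (6, 2): 0.23,
    (0, 3): 0.17, (2, 3): 0.19, (4, 3): 0.24, (6, 3): 0.21,
}

def train_once(class_weight, loss_weight):
    """Hypothetical stand-in for one 25-epoch ENVINet5 training run;
    returns the minimum cross-entropy loss observed for that setting."""
    return loss_table[(class_weight, loss_weight)]

# Grid search: classification weight 0..6 step 2, loss weight 0..3 step 1.
best = min(((train_once(cw, lw), cw, lw)
            for cw in range(0, 7, 2)
            for lw in range(0, 4)),
           key=lambda t: t[0])
# best -> (0.09, 2, 0): classification weight 2, loss weight 0
```

In practice each `train_once` call is a full model training, so the 16-cell grid is the dominant cost of the tuning step.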
For training with the optimal parameters, the cross-entropy loss during training is plotted in Figure 12. The loss of the optimal parameter combination does not decrease monotonically over the iterations; there are fluctuations, such as an abnormal increase during the third and fourth iterations. However, as training proceeds, the cross-entropy loss reaches its minimum, indicating that the model converged to its best training status.
After this parameter analysis and adjustment, the optimal parameter combination of the ENVINet5 model was used to classify the hyperspectral imagery. Because the previous results indicate that MNF outperforms ICA and PCA, the bands obtained after MNF dimension reduction were used for training. The resulting OA and Kappa coefficient are 83% and 0.81, respectively, and the classification map is shown in Figure 13d. The different winter wheat cultivars can be distinguished, although misclassification and salt-and-pepper noise cannot be completely avoided. Among the cultivars, type 2 and type 3 are most easily confused with each other, whereas type 4 and type 6 have the best classification results. The classification of the field banks still needs improvement.

3.3.4. Comparing Different Classification Methods

Three approaches were compared: machine learning (MLE) on data without dimension reduction, machine learning (SVM and MLE) on dimension-reduced data, and the ENVINet5 model on dimension-reduced data. The best result of each classification method is shown in Figure 13, and the R2 and p value of each were computed. Figure 13 shows that the deep-learning-based ENVINet5 model performs best, followed by MNF-SVM (linear) and MNF-MLE. Original-MLE performs worst, with serious misclassification and salt-and-pepper noise, so fine classification of winter wheat is difficult without dimension reduction as a basic preprocessing step. After dimension reduction, misclassification and salt-and-pepper noise are reduced for both the machine learning and the deep learning classifiers, most markedly for the ENVINet5 model. However, the ENVINet5 model over-aggregates each wheat type: the field banks between the plots are under-segmented and mostly misclassified as wheat, where it is inferior to the two machine-learning-based methods. Statistically, the R2 and p value are 0.96 and 1.96518 × 10−4 for MNF-ENVINet5; 0.95 and 3.26785 × 10−4 for MNF-SVM (linear); 0.90 and 7.5626 × 10−4 for MNF-MLE; and 0.71 and 0.00649 for Original-MLE. The MNF-ENVINet5 method is therefore the most statistically significant.
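The OA and Kappa values quoted in Tables 1–3 are both derived from the classification confusion matrix: OA is the fraction of correctly classified validation pixels, while Cohen's Kappa discounts the agreement expected by chance. A minimal sketch with a hypothetical 3-class confusion matrix:

```python
import numpy as np

# Hypothetical confusion matrix (rows: reference classes, cols: predicted),
# used only to illustrate the computation; not the study's actual matrix.
cm = np.array([[50,  5,  5],
               [ 4, 45, 11],
               [ 6, 10, 44]], dtype=float)

n = cm.sum()
oa = np.trace(cm) / n                                  # overall accuracy
pe = (cm.sum(axis=0) * cm.sum(axis=1)).sum() / n**2    # chance agreement
kappa = (oa - pe) / (1 - pe)                           # Cohen's kappa
```

For this matrix, oa is 139/180 (about 0.772) and kappa is 79/120 (about 0.658); Kappa is always at or below OA because it subtracts the agreement a random assignment would achieve.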
To better evaluate the fine classification of the different winter wheat types, the producer accuracy (PA), user accuracy (UA), and average accuracy (AA) of each cultivar were calculated from the classification confusion matrix. Table 5 shows that type 4 and type 6 have the best results: their AA is above 84% for every classification method, and for the ENVINet5 classification of type 4 it exceeds 91%. Type 5 also performs well, with an AA between 81% and 88% across methods, followed by type 1, with an AA between 78% and 84%. Type 2 and type 3 have the lowest AA for every classification method, indicating frequent misclassification between them.
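The per-cultivar metrics in Table 5 also derive from the confusion matrix: PA is the fraction of reference pixels of a class that were labeled correctly, UA is the fraction of pixels labeled as a class that truly belong to it, and AA here appears to be their mean (e.g. (74.93 + 83.89)/2 = 79.41 for type 1 under SVM). A small NumPy sketch with a hypothetical 3-class matrix:

```python
import numpy as np

# Hypothetical confusion matrix (rows: reference classes, cols: predicted);
# illustrative values only.
cm = np.array([[90,  5,  5],
               [10, 80, 10],
               [ 5, 15, 80]], dtype=float)

pa = np.diag(cm) / cm.sum(axis=1)  # producer accuracy: correct / reference total
ua = np.diag(cm) / cm.sum(axis=0)  # user accuracy: correct / predicted total
aa = (pa + ua) / 2                 # per-class average accuracy
```

PA measures omission error (reference pixels missed) and UA measures commission error (pixels wrongly claimed), which is why a class such as "Other" in Table 5 can combine a low PA with a high UA.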
In this study, a UAV-based hyperspectral remote sensing system was used to acquire hyperspectral data during the winter wheat booting stage and to build a corresponding sample database. The aim is to achieve more accurate classification of different winter wheat cultivars at larger scales by training on a large number of samples. These samples will also help future research identify crops in more complex real-world scenarios, such as mining areas under land reclamation and farmland near urban buildings. Future work will focus on the spectral data of different landforms and crop growth stages, identify the sensitive bands that separate them, and use related indices to amplify these differences, thereby enlarging the data scope and improving classification accuracy across landforms and growth stages.

4. Conclusions

This study classified different cultivars of winter wheat at Wuzhi, Henan Province, by integrating a DJI M600 UAV and a UHD185 hyperspectral camera into a hyperspectral remote sensing system and by comparing different data dimension reduction and classification methods. The experiments first studied dimension reduction of the original hyperspectral imagery, then applied machine learning and deep learning theory to the fine classification of different varieties of winter wheat. The experimental results were evaluated and discussed both qualitatively and quantitatively. The main conclusions are as follows:
  • In terms of cultivars, this paper explored the hyperspectral imagery classification results of six winter wheat types. Type 4 and type 6 have the best classification results, with the average accuracy of every classification method above 84%, while type 2 and type 3 have the lowest accuracy. These differences can be explained by the separability of the cultivars' spectral curves, which is related to the chemical components and physical characteristics of the different winter wheat cultivars.
  • In terms of dimensionality reduction, three methods (PCA, MNF, ICA) were used to reduce the redundant data of the hyperspectral imagery. The results show an improvement of about 30% in classification accuracy after dimensionality reduction, with MNF giving the best effect compared with PCA and ICA.
  • In terms of classification methods, the dimension-reduced data were classified by MLE, SVM (Linear), and ENVINet5. The combined MNF-ENVINet5 model showed the best classification results, with an OA of 83% and a Kappa coefficient of 0.81. This study improves the precision of fine classification for winter wheat cultivars, whose intra-class features are weak, and the proposed hyperspectral data processing workflow may help solve crop fine classification problems with typically weak intra-class features.

Author Contributions

Conceptualization, X.L. and W.D.; methodology, H.Z. and Z.C.; software, W.G.; validation, W.D. and S.W.; formal analysis, X.L. and W.D.; investigation, X.L.; resources, W.G.; data curation, W.D. and S.W.; writing—original draft preparation, X.L.; writing—review and editing, X.L. and Z.C.; visualization, X.L.; supervision, W.D. and Z.C.; project administration, H.Z.; funding acquisition, H.Z. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the National Natural Science Foundation of China (grant numbers U21A20108, U22A20620, U22A20620/003); the Joint Fund of the Collaborative Innovation Center of Geo-Information Technology for Smart Central Plains, Henan Province, and the Key Laboratory of Spatiotemporal Perception and Intelligent Processing, Ministry of Natural Resources (grant number 211102); and the Provincial Key Technologies R&D Program of Henan (grant number 222102320306).

Institutional Review Board Statement

Not applicable.

Data Availability Statement

The data presented in this study are available on request from the corresponding author. The data are not publicly available due to privacy.

Acknowledgments

The ENVI 5.6 and ENVI Deep Learning 1.1.2 modules used to run the classification model were installed and activated from the ENVI official website at https://envi.geoscene.cn (accessed on 18 January 2023).

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A

In this work, the classification results of the SVM classifiers with different kernel functions are shown in Table A1.
Table A1. Classification results of the SVM classifiers with different kernel functions.

| Dimension Reduction Method | Kernel Function | Overall Accuracy (%) | Kappa Coefficient |
|---|---|---|---|
| Original | Linear | 46.88 | 0.3800 |
| MNF | Polynomial | 78.04 | 0.7438 |
| MNF | RBF | 77.48 | 0.7369 |
| MNF | Sigmoid | 73.06 | 0.6852 |
| MNF | Linear | 78.11 | 0.7449 |
| ICA | Polynomial | 73.16 | 0.6768 |
| ICA | RBF | 68.57 | 0.6330 |
| ICA | Sigmoid | 58.87 | 0.5212 |
| ICA | Linear | 74.14 | 0.6887 |
| PCA | Polynomial | 74.12 | 0.6981 |
| PCA | RBF | 74.10 | 0.6978 |
| PCA | Sigmoid | 67.98 | 0.6278 |
| PCA | Linear | 74.15 | 0.6989 |

References

  1. Jiang, T.C.; Wang, B.; Xu, X.J.; Cao, Y.X.; Liu, D.L.; He, L.; Jin, N.; Ma, H.J.; Chen, S.; Zhao, K.F.; et al. Identifying sources of uncertainty in wheat production projections with consideration of crop climatic suitability under future climate. Agric. For. Meteorol. 2022, 319, 108933. [Google Scholar] [CrossRef]
  2. Qu, C.; Li, P.J.; Zhang, C.M. A spectral index for winter wheat mapping using multi-temporal Landsat NDVI data of key growth stages. ISPRS J. Photogramm. Remote Sens. 2021, 175, 431–447. [Google Scholar] [CrossRef]
  3. Fu, Y.Y.; Yang, G.J.; Li, Z.H.; Song, X.Y.; Li, Z.H.; Xu, X.G.; Wang, P.; Zhao, C.J. Winter Wheat Nitrogen Status Estimation Using UAV-Based RGB Imagery and Gaussian Processes Regression. Remote Sens. 2020, 12, 3778. [Google Scholar] [CrossRef]
  4. Cousins, O.H.; Garnett, T.P.; Rasmussen, A.; Mooney, S.J.; Smernik, R.J.; Brien, C.J.; Cavagnaro, T.R. Frequency Versus Quantity: Phenotypic Response of Two Wheat Varieties to Water and Nitrogen Variability. J. Soil Sci. Plant Nutr. 2021, 21, 1631–1641. [Google Scholar] [CrossRef]
  5. Sun, J.S.; Zhou, G.S.; Sui, X.H. Climatic suitability of the distribution of the winter wheat cultivation zone in China. Eur. J. Agron. 2012, 43, 77–86. [Google Scholar] [CrossRef]
  6. Yu, H.Q.; Zhang, Q.; Sun, P.; Song, C.Q. Impact of Droughts on Winter Wheat Yield in Different Growth Stages during 2001–2016 in Eastern China. Int. J. Disaster Risk Sci. 2018, 9, 376–391. [Google Scholar] [CrossRef]
  7. Maddikunta, P.K.R.; Hakak, S.; Alazab, M.; Bhattacharya, S.; Gadekallu, T.R.; Khan, W.Z.; Pham, Q.V. Unmanned Aerial Vehicles in Smart Agriculture: Applications, Requirements, and Challenges. IEEE Sens. J. 2021, 21, 17608–17619. [Google Scholar] [CrossRef]
  8. Liu, S.W.; Peng, D.L.; Zhang, B.; Chen, Z.C.; Yu, L.; Chen, J.J.; Pan, Y.H.; Zheng, S.J.; Hu, J.K.; Lou, Z.H.; et al. The Accuracy of Winter Wheat Identification at Different Growth Stages Using Remote Sensing. Remote Sens. 2022, 14, 893. [Google Scholar] [CrossRef]
  9. Su, T.; Zhang, S. Winter wheat mapping using landsat 8 images and geographic object-based image analysis. Trans. ASABE 2017, 60, 625–633. [Google Scholar] [CrossRef]
  10. Zou, S.S.; Chen, H.; Zhou, H.Y.; Chen, J.G. An Intelligent Image Feature Recognition Algorithm With Hierarchical Attribute Constraints Based on Weak Supervision and Label Correlation. IEEE Access 2020, 8, 105744–105753. [Google Scholar] [CrossRef]
  11. Wadood, S.A.; Guo, B.L.; Zhang, X.W.; Raza, A.; Wei, Y.M. Geographical discrimination of Chinese winter wheat using volatile compound analysis by HS-SPME/GC-MS coupled with multivariate statistical analysis. J. Mass Spectrom. 2020, 55, e4453. [Google Scholar] [CrossRef]
  12. Hu, Q.; Ma, Y.X.; Xu, B.D.; Song, Q.; Tang, H.J.; Wu, W.B. Estimating Sub-Pixel Soybean Fraction from Time-Series MODIS Data Using an Optimized Geographically Weighted Regression Model. Remote Sens. 2018, 10, 491. [Google Scholar] [CrossRef]
  13. Xie, Y.H.; Lark, T.J.; Brown, J.F.; Gibbs, H.K. Mapping irrigated cropland extent across the conterminous United States at 30 m resolution using a semi-automatic training approach on Google Earth Engine. ISPRS J. Photogramm. Remote Sens. 2019, 155, 136–149. [Google Scholar] [CrossRef]
  14. Khan, A.; Hansen, M.C.; Potapov, P.V.; Adusei, B.; Pickens, A.; Krylov, A.; Stehman, S.V. Evaluating Landsat and RapidEye Data for Winter Wheat Mapping and Area Estimation in Punjab, Pakistan. Remote Sens. 2018, 10, 489. [Google Scholar] [CrossRef]
  15. Xu, F.; Li, Z.F.; Zhang, S.Y.; Huang, N.T.; Quan, Z.Y.; Zhang, W.M.; Liu, X.J.; Jiang, X.S.; Pan, J.J.; Prishchepov, A.V. Mapping Winter Wheat with Combinations of Temporally Aggregated Sentinel-2 and Landsat-8 Data in Shandong Province, China. Remote Sens. 2020, 12, 2065. [Google Scholar] [CrossRef]
  16. Radoglou-Grammatikis, P.; Sarigiannidis, P.; Lagkas, T.; Moscholios, I. A compilation of UAV applications for precision agriculture. Comput. Netw. 2020, 172, 107148. [Google Scholar] [CrossRef]
  17. Zhou, T.; Pan, J.J.; Zhang, P.Y.; Wei, S.B.; Han, T. Mapping Winter Wheat with Multi-Temporal SAR and Optical Images in an Urban Agricultural Region. Sensors 2017, 17, 1210. [Google Scholar] [CrossRef] [PubMed]
  18. Zhang, X.W.; Liu, J.F.; Qin, Z.Y.; Qin, F. Winter wheat identification by integrating spectral and temporal information derived from multi-resolution remote sensing data. J. Integr. Agric. 2019, 18, 2628–2643. [Google Scholar] [CrossRef]
  19. Zhang, H.D.; Wang, L.Q.; Tian, T.; Yin, J.H. A Review of Unmanned Aerial Vehicle Low-Altitude Remote Sensing (UAV-LARS) Use in Agricultural Monitoring in China. Remote Sens. 2021, 13, 1221. [Google Scholar] [CrossRef]
  20. Pan, Q.; Gao, M.F.; Wu, P.B.; Yan, J.W.; Li, S.L. A Deep-Learning-Based Approach for Wheat Yellow Rust Disease Recognition from Unmanned Aerial Vehicle Images. Sensors 2021, 21, 6540. [Google Scholar] [CrossRef]
  21. Fernandez-Gallego, J.A.; Lootens, P.; Borra-Serrano, I.; Derycke, V.; Haesaert, G.; Roldan-Ruiz, I.; Araus, J.L.; Kefauver, S.C. Automatic wheat ear counting using machine learning based on RGB UAV imagery. Plant J. 2020, 103, 1603–1613. [Google Scholar] [CrossRef] [PubMed]
  22. Yue, J.B.; Yang, G.J.; Tian, Q.J.; Feng, H.K.; Xu, K.J.; Zhou, C.Q. Estimate of winter-wheat above-ground biomass based on UAV ultrahigh-ground-resolution image textures and vegetation indices. ISPRS J. Photogramm. Remote Sens. 2019, 150, 226–244. [Google Scholar] [CrossRef]
  23. Yue, J.B.; Zhou, C.Q.; Guo, W.; Feng, H.K.; Xu, K.J. Estimation of winter-wheat above-ground biomass using the wavelet analysis of unmanned aerial vehicle-based digital images and hyperspectral crop canopy images. Int. J. Remote Sens. 2021, 42, 1602–1622. [Google Scholar] [CrossRef]
  24. Zhang, J.J.; Cheng, T.; Guo, W.; Xu, X.; Qiao, H.B.; Xie, Y.M.; Ma, X.M. Leaf area index estimation model for UAV image hyperspectral data based on wavelength variable selection and machine learning methods. Plant Methods 2021, 17, 49. [Google Scholar] [CrossRef]
  25. Yan, Y.A.; Deng, L.; Liu, X.L.; Zhu, L. Application of UAV-Based Multi-angle Hyperspectral Remote Sensing in Fine Vegetation Classification. Remote Sens. 2019, 11, 2753. [Google Scholar] [CrossRef]
  26. Wei, L.F.; Yu, M.; Zhong, Y.F.; Zhao, J.; Liang, Y.J.; Hu, X. Spatial-Spectral Fusion Based on Conditional Random Fields for the Fine Classification of Crops in UAV-Borne Hyperspectral Remote Sensing Imagery. Remote Sens. 2019, 11, 780. [Google Scholar] [CrossRef]
  27. Liu, M.; Yu, T.; Gu, X.F.; Sun, Z.S.; Yang, J.; Zhang, Z.W.; Mi, X.F.; Cao, W.J.; Li, J. The Impact of Spatial Resolution on the Classification of Vegetation Types in Highly Fragmented Planting Areas Based on Unmanned Aerial Vehicle Hyperspectral Images. Remote Sens. 2020, 12, 146. [Google Scholar] [CrossRef]
  28. Chen, Y.S.; Lin, Z.H.; Zhao, X.; Wang, G.; Gu, Y.F. Deep Learning-Based Classification of Hyperspectral Data. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2014, 7, 2094–2107. [Google Scholar] [CrossRef]
  29. Haut, J.M.; Paoletti, M.E.; Plaza, J.; Plaza, A. Fast dimensionality reduction and classification of hyperspectral images with extreme learning machines. J. Real-Time Image Process. 2018, 15, 439–462. [Google Scholar] [CrossRef]
  30. Li, S.T.; Song, W.W.; Fang, L.Y.; Chen, Y.S.; Ghamisi, P.; Benediktsson, J.A. Deep Learning for Hyperspectral Image Classification: An Overview. IEEE Trans. Geosci. Remote Sens. 2019, 57, 6690–6709. [Google Scholar] [CrossRef]
  31. Reichstein, M.; Camps-Valls, G.; Stevens, B.; Jung, M.; Denzler, J.; Carvalhais, N.; Prabhat, F. Deep learning and process understanding for data-driven Earth system science. Nature 2019, 566, 195–204. [Google Scholar] [CrossRef]
  32. Paoletti, M.E.; Haut, J.M.; Plaza, J.; Plaza, A. Deep learning classifiers for hyperspectral imaging: A review. ISPRS J. Photogramm. Remote Sens. 2019, 158, 279–317. [Google Scholar] [CrossRef]
  33. Ahmad, M.; Shabbir, S.; Roy, S.K.; Hong, D.F.; Wu, X.; Yao, J.; Khan, A.M.; Mazzara, M.; Distefano, S.; Chanussot, J. Hyperspectral Image Classification-Traditional to Deep Models: A Survey for Future Prospects. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2022, 15, 968–999. [Google Scholar] [CrossRef]
  34. Dmitriev, P.A.; Kozlovsky, B.L.; Kupriushkin, D.P.; Dmitrieva, A.A.; Rajput, V.D.; Chokheli, V.A.; Tarik, E.P.; Kapralova, O.A.; Tokhtar, V.K.; Minkina, T.M.; et al. Assessment of Invasive and Weed Species by Hyperspectral Imagery in Agrocenoses Ecosystem. Remote Sens. 2022, 14, 2442. [Google Scholar] [CrossRef]
  35. Hong, D.F.; Yokoya, N.; Chanussot, J.; Xu, J.; Zhu, X.X. Learning to propagate labels on graphs: An iterative multitask regression framework for semi-supervised hyperspectral dimensionality reduction. ISPRS J. Photogramm. Remote Sens. 2019, 158, 35–49. [Google Scholar] [CrossRef] [PubMed]
  36. Shao, Z.F.; Zhang, L.; Zhou, X.R.; Ding, L. A Novel Hierarchical Semisupervised SVM for Classification of Hyperspectral Images. IEEE Geosci. Remote Sens. Lett. 2014, 11, 1609–1613. [Google Scholar] [CrossRef]
  37. Paul, A.; Bhoumik, S. Classification of hyperspectral imagery using spectrally partitioned HyperUnet. Neural Comput. Appl. 2022, 34, 2073–2082. [Google Scholar] [CrossRef]
  38. Wang, S.; Chen, W.; Xie, S.M.; Azzari, G.; Lobell, D.B. Weakly Supervised Deep Learning for Segmentation of Remote Sensing Imagery. Remote Sens. 2020, 12, 207. [Google Scholar] [CrossRef]
  39. Zhang, N.; Su, X.; Zhang, X.; Yao, X.; Cheng, T.; Zhu, Y.; Cao, W.; Tian, Y. Monitoring daily variation of leaf layer photosynthesis in rice using UAV-based multi-spectral imagery and a light response curve model. Agric. For. Meteorol. 2020, 291, 108098. [Google Scholar] [CrossRef]
  40. Jia, B.B.; Wang, W.; Ni, X.Z.; Lawrence, K.C.; Zhuang, H.; Yoon, S.C.; Gao, Z.X. Essential processing methods of hyperspectral images of agricultural and food products. Chemom. Intell. Lab. Syst. 2020, 198, 103936. [Google Scholar] [CrossRef]
  41. Wang, Y.Y.; Li, X.H.; Qiao, H.; Ou, X.Q.; He, H.J.; Guo, J.L. Physiological and Biochemical Characteristics of New Wheat Cultivar Bainong 207 in the Overwintering Period Under Different Sowing Date. J. Hainan Norm. Univ. 2021, 34, 308–314. [Google Scholar]
  42. Chen, Q.Y.; Li, X.H.; Wang, Z.J.; Ou, Y.J.; Qiao, H.; Ou, X.Q. Effects of Sowing Date and Density on Yield and Related Agronomic Characters of Bainong 207. J. Henan Inst. Sci. Technol. 2021, 49, 1–7. [Google Scholar]
  43. Lu, R. Detection of bruises on apples using near-infrared hyperspectral imaging. Trans. ASAE 2003, 46, 523–530. [Google Scholar]
  44. Kemker, R.; Kanan, C. Self-Taught Feature Learning for Hyperspectral Image Classification. IEEE Trans. Geosci. Remote Sens. 2017, 55, 2693–2705. [Google Scholar] [CrossRef]
Figure 1. Location and orthophoto of the study area: (a) Map of China; (b) Map of Jiaozuo; (c) Orthophoto of the study area.
Figure 2. UHD185 integrated hyperspectral equipment: (a) DJI M600 multi-rotor UAV (b) UHD185 Hyperspectral camera.
Figure 3. Flowchart showing data processing methodology.
Figure 4. Hyperspectral orthophoto generated after imagery mosaicking and spectral stitching: (a) Panchromatic image; (b) Hyperspectral orthophoto image.
Figure 5. Spectral curves of five typical ground objects before and after spectral curves stitching.
Figure 6. Correlation analysis about spectral curves of typical ground objects before and after imagery mosaicking: (a) Roof; (b) Tree; (c) Earth; (d) Winter wheat; (e) Road.
Figure 7. Spectral imagery band eigenvalue transformation results of each dimension reduction method: (a) Transformation results based on PCA; (b) Transformation results based on MNF; (c) Transformation results based on ICA.
Figure 8. Illustration of complete architecture of network model.
Figure 9. Spectral curves of different winter wheat cultivars.
Figure 10. MLE results of different dimensionality reduction methods: (a) Original-MLE; (b) PCA-MLE; (c) MNF-MLE; (d) ICA-MLE.
Figure 11. SVM (linear) results of different dimensionality reduction methods: (a) PCA-SVM (linear); (b) MNF-SVM (linear); (c) ICA-SVM (linear).
Figure 12. The cross-entropy loss value during ENVINet5 model training.
Figure 13. The best classification results of different classification methods: (a) Original-MLE; (b) MNF-MLE; (c) MNF-SVM (linear); (d) MNF-ENVINet5.
Table 1. The influence of different dimensionality reduction methods on the classification accuracy (unit: %).

| Classification Method | Dimensionality Reduction Method | Overall Accuracy | Kappa Coefficient |
|---|---|---|---|
| MLE | Original (without dimensionality reduction) | 49.51 | 0.4101 |
| MLE | PCA | 72.86 | 0.6831 |
| MLE | MNF | 77.23 | 0.7339 |
| MLE | ICA | 64.74 | 0.5888 |
| SVM (Linear) | Original (without dimensionality reduction) | 46.88 | 0.3800 |
| SVM (Linear) | PCA | 74.15 | 0.6989 |
| SVM (Linear) | MNF | 78.11 | 0.7449 |
| SVM (Linear) | ICA | 74.14 | 0.6887 |
Table 2. Classification results based on MLE.

| Dimensionality Reduction | Classifier | Overall Accuracy (%) | Kappa Coefficient |
|---|---|---|---|
| Original | MLE | 49.51 | 0.4101 |
| PCA | MLE | 72.86 | 0.6831 |
| MNF | MLE | 77.23 | 0.7339 |
| ICA | MLE | 64.74 | 0.5888 |
Table 3. Classification results based on SVM.

| Dimensionality Reduction | Classifier | Overall Accuracy (%) | Kappa Coefficient |
|---|---|---|---|
| Original | SVM (Linear) | 46.88 | 0.3800 |
| MNF | SVM (Linear) | 78.11 | 0.7449 |
| ICA | SVM (Linear) | 74.14 | 0.6887 |
| PCA | SVM (Linear) | 74.15 | 0.6989 |
Table 4. The minimum cross-entropy loss of each combined parameter.

| Classification Weight | Loss Weight 0 | Loss Weight 1 | Loss Weight 2 | Loss Weight 3 |
|---|---|---|---|---|
| 0 | 0.12 | 0.14 | 0.13 | 0.17 |
| 2 | 0.09 | 0.23 | 0.27 | 0.19 |
| 4 | 0.13 | 0.31 | 0.25 | 0.24 |
| 6 | 0.21 | 0.24 | 0.23 | 0.21 |
Table 5. Classification accuracy table for each winter wheat cultivar (unit: %).

| Type | Winter Wheat Cultivar | SVM (Linear)-MNF (PA / UA / AA) | MLE-MNF (PA / UA / AA) | ENVINet5-MNF (PA / UA / AA) |
|---|---|---|---|---|
| 1 | Aikang 58 | 74.93 / 83.89 / 79.41 | 74.4 / 83.5 / 78.73 | 88.38 / 79.8 / 84.09 |
| 2 | Bainong 207 | 71.13 / 72.46 / 71.80 | 66.41 / 74.11 / 70.26 | 73.6 / 77.74 / 75.67 |
| 3 | Jimai 22 | 85.27 / 71.83 / 78.55 | 70.52 / 65.55 / 68.04 | 92.25 / 68.73 / 80.49 |
| 4 | Jiamai 8 | 98.65 / 81.15 / 89.9 | 96.72 / 71.55 / 84.14 | 99.41 / 82.69 / 91.05 |
| 5 | Nongda 5181 | 94.25 / 77.97 / 86.11 | 98.9 / 72.65 / 81.28 | 93.3 / 83.56 / 88.43 |
| 6 | Pumai 168 | 94.34 / 81.3 / 87.82 | 96.05 / 82.87 / 89.46 | 91.62 / 84.4 / 88.01 |
|  | Other | 44.48 / 80.03 / 62.26 | 47.42 / 76.92 / 62.17 | 36.73 / 77.83 / 57.28 |

MDPI and ACS Style

Lyu, X.; Du, W.; Zhang, H.; Ge, W.; Chen, Z.; Wang, S. Classification of Different Winter Wheat Cultivars on Hyperspectral UAV Imagery. Appl. Sci. 2024, 14, 250. https://doi.org/10.3390/app14010250

