Article

Damage Diagnosis of Pinus yunnanensis Canopies Attacked by Tomicus Using UAV Hyperspectral Images

1 Key Laboratory of Forest Disaster Warning and Control in Yunnan Province, College of Biodiversity Conservation, Southwest Forestry University, Kunming 650224, China
2 College of Landscape Architecture and Horticulture Sciences, Southwest Forestry University, Kunming 650224, China
3 Department of Geosciences, University of Arkansas, Fayetteville, AR 72701, USA
* Author to whom correspondence should be addressed.
Forests 2023, 14(1), 61; https://doi.org/10.3390/f14010061
Submission received: 1 December 2022 / Revised: 22 December 2022 / Accepted: 22 December 2022 / Published: 28 December 2022
(This article belongs to the Special Issue Forestry Remote Sensing: Biomass, Changes and Ecology)

Abstract

Tomicus spp., a fast-spreading pest, remains challenging to control; it has killed large numbers of Pinus yunnanensis (Franch.) and poses a severe threat to ecological security in southwest China. Therefore, it is crucial to effectively and accurately monitor the damage degree of Pinus yunnanensis attacked by Tomicus spp. at large geographical scales. Airborne hyperspectral remote sensing is an effective and accurate means to detect forest pests and diseases. In this study, we propose an innovative and precise classification framework to monitor the damage degree of Pinus yunnanensis infected by Tomicus spp. using hyperspectral UAV (unmanned aerial vehicle) imagery with machine learning algorithms. First, we revealed the hyperspectral characteristics of Pinus yunnanensis from a UAV-based hyperspectral platform. We obtained 22 vegetation indices (VIs), 4 principal components, and 16 continuous wavelet transform (CWT) features as damage-sensitive features. We classified the damage degree of Pinus yunnanensis canopies infected by Tomicus spp. via three methods, i.e., discriminant analysis (DA), support vector machine (SVM), and backpropagation (BP) neural network. The results showed that the damage degree detected from the BP neural network, combined with 16 CWT features, achieved the best performance (training accuracy: 94.05%; validation accuracy: 94.44%).

1. Introduction

Tomicus yunnanensis (Curculionidae, Scolytinae) has been identified as a destructive pest that has caused tremendous damage to 1.5 million hectares of Pinus yunnanensis (Franch.) since 1980, resulting in severe losses to local ecological security and the economy in Yunnan, China [1,2]. It is therefore imperative to develop an efficient and accurate method to monitor and control the spread of Tomicus yunnanensis, which is essential for maintaining the ecological security barrier and conserving biodiversity. Traditional field investigation methods rely primarily on manual ground surveys and expert experience. However, ground surveys are largely limited by high cost, strong seasonality, and the complex topography of investigation sites, and they fail to provide frequent, repeated information at large geographical scales [3,4]. In recent years, remote sensing has gradually been recognized as a powerful technique for identifying infected trees or crops in agriculture and forestry, owing to its extensive coverage and fine-grained spatio-temporal information [5,6,7]. In particular, hyperspectral remote sensing provides fine-band reflectance from which important information, including plant chlorophyll content, moisture, and structural and physiological indicators, can be retrieved, making the monitoring of forest pests and diseases possible [8,9,10,11,12].
Existing hyperspectral remote sensing studies can be divided into four main categories. (1) Satellite/airborne hyperspectral: satellite-based hyperspectral imagery is widely used to monitor forest diseases and pests owing to its extensive ground coverage and regular data acquisition capability [13,14,15]. Vegetation indices obtained from satellite-based hyperspectral imagery were used to evaluate and monitor stress symptoms induced by the invasion of cypress aphids [16]. Since then, hyperspectral satellite data with higher spatial resolution, such as GF-5 (30 m) and Zhuhai-1 (10 m), have gradually been applied to monitor pest and disease disasters. (2) Non-imaging hyperspectral: efforts have been made to predict chlorophyll content or other indicators from the spectral features measured by hand-held non-imaging spectrometers. Chlorophyll content and density were predicted from spectral characteristics [17]. The optimal hyperspectral bands for identifying Pinus massoniana (Lamb.) trees infected by Bursaphelenchus xylophilus (pine wood nematode) disease were determined, and the chlorophyll content of infected trees was assessed [18]. Non-imaging hyperspectral data have great potential for the accurate monitoring of pests and diseases but are not suitable for regional-scale monitoring [19]. (3) Ground-based imaging hyperspectrometers: ground-based hyperspectral imaging spectrometers can reach nanometer-level spectral resolution, which promotes widespread applications in forestry, agriculture, ecology, and the environment [20]. (4) UAV (unmanned aerial vehicle) hyperspectral platforms: UAVs have been widely used in forest pest and disease monitoring owing to their capability to cover relatively large areas and to discriminate healthy from sick trees based on spectral characteristics [21]. 3D-CNN models were used to detect early pine wilt disease from UAV hyperspectral images [22]. A new processing method for analyzing spectral characteristics in forested environments, as well as for mapping individual anomalous trees, was developed based on UAV hyperspectral data [23].
When vegetation is attacked by pests and diseases, water and nutrient delivery is impeded, changing the external color from green to red or gray and modifying the spectral reflectance [21]. Such a change is difficult to distinguish with the naked eye, and even with multispectral images and aerial RGB images, detecting the damage level remains challenging [4]. UAV hyperspectral sensors, with their low cost and narrow bandwidths, can achieve dynamic monitoring of forest pests and diseases thanks to their capability to identify minor changes in the spectral characteristics of individual tree crowns or canopies infected by pests and diseases [24]. However, existing studies mainly focus on the identification and classification of dead wood, and few have explored the potential of UAVs coupled with multispectral sensors in diagnosing the damage degree caused by pests and diseases. Notably, Liu et al. (2020) [19] distinguished healthy and damaged Pinus yunnanensis and identified the infestation severity of Tomicus spp. via diagnostic models, but with a relatively low accuracy of 79.17%~87.50%, presumably because only spectral features and statistical methods were used to discriminate the damage level of Pinus yunnanensis infected by Tomicus spp.
Thus, building upon the work of Liu et al. (2020) [19], we aim to develop an effective and accurate detection framework that distinguishes the damage degree by calculating the death rate. First, we obtained 120 high-resolution UAV photos of Pinus yunnanensis, calculated the death rate, and identified the damage degrees. Next, we derived spectral features from vegetation indices (VIs), principal component analysis (PCA), and the continuous wavelet transform (CWT). Finally, the damage degree was classified by discriminant analysis (DA), support vector machine (SVM), and backpropagation (BP) neural network. The results of this study are expected to provide a scientific basis and a technical reference for monitoring, detecting, and controlling the damage of Pinus yunnanensis caused by Tomicus spp.

2. Materials and Methods

2.1. Study Area

Our study area (Figure 1) is located beside the Heilongtan Reservoir in Shilin County, Yunnan Province (103°19′53.454″–103°20′17.78″ E, 24°45′58.536″–24°46′19″ N), at an altitude between 1700 and 1950 m. The county hosts a national central forest monitoring station, and Tomicus spp. is one of its primary monitoring targets. Pinus yunnanensis in Shilin County covers about 260.50 km2, accounting for 47.48% of the arbor forest, and the average annual area affected by Tomicus spp. exceeds 6.67 km2. Tomicus spp. lay eggs twice a year, with a population ratio of 8:2 between the two generations. The adults tend to migrate from the damaged tips to the trunk for overwintering, though most remain inside the damaged tips. Based on the records of the Forest Prevention Station of Shilin County, we selected a study area covering 0.23 km2 (Figure 1).

2.2. Data Acquisition and Processing

2.2.1. Sample Data Collection

1. Ground sample data collection
The damaged shoot ratio (DSR) is a commonly used parameter for identifying the damage level of Pinus yunnanensis infested by Tomicus spp. We therefore divided the damage degrees into four classes, i.e., healthy, mild damage, moderate damage, and severe damage (Table 1), based on the Standard of Forestry Pests Occurrence and Disaster LY/T 1681-2006. Field investigation is one of the most effective ways to accurately determine the damage of Pinus yunnanensis caused by Tomicus spp., and the beetles tend to transfer from the tips to the trunk to lay eggs and reproduce in November. Therefore, we investigated the ground damaged shoot ratio (Ground-DSR) of individual Pinus yunnanensis infested by Tomicus spp. from 18 to 30 November 2019. First, damaged Pinus yunnanensis samples were selected randomly and evenly across the study area. Then, the latitude and longitude of each sample spot, the number of dead shoots, and the total number of shoots were counted and recorded through visual ground measurements around the trees. The DSR is calculated as follows:
$$\mathrm{DSR} = \frac{n}{m} \times 100\%$$
where $n$ is the number of dead shoots and $m$ is the total number of shoots.
Finally, a total of 120 samples were randomly selected, evenly distributed across classes: 30 healthy trees, 30 mild-damage trees, 30 moderate-damage trees, and 30 severe-damage trees.
2. Canopy sample data collection
A significant positive linear correlation exists between Ground-DSR and Canopy-DSR, so Ground-DSR can be fitted from Canopy-DSR to identify the damage degree of Pinus yunnanensis. We therefore performed UAV canopy data collection of Pinus yunnanensis on 18 November 2019. The drone, a DJI M200 equipped with a Z30 camera, flew 2.5 to 5 m above the canopy (Figure 2). The Z30 camera provides 30× optical zoom and 6× digital zoom; at full zoom, dead shoots and boreholes can be imaged clearly from a height of 100 m, and the boreholes eaten by Tomicus spp. can be located by scanning with the lens to identify damaged Pinus yunnanensis. A total of 120 canopy photos were obtained; the dead shoots were identified and counted visually and recorded, and the DSR was calculated according to Equation (1).
3. Ground hyperspectral of Pinus yunnanensis needles
We measured ground hyperspectral data with a SOC710VP imaging spectrometer (Table 2). Of the 120 samples, 80 were selected for needle measurements. Using high-branch shears, 80 needle samples were collected in the test area (20 healthy, 20 mild-damaged, 20 moderate-damaged, and 20 severe-damaged). The samples were taken from different directions in the upper, middle, and lower parts of each sampled Pinus yunnanensis. Needle spectra for the different damage levels were measured in a laboratory under controlled lighting conditions.

2.2.2. Hyperspectral Imagery Acquisition Based on a UAV Platform

Hyperspectral data were collected on 17 November 2019, from 11:00 a.m. to 2:00 p.m., in windless and clear-sky conditions using a UHD S185 hyperspectral imager (Cubert GmbH, Ulm, Baden-Württemberg, Germany) on a DJI M600 Pro multi-rotor UAV (DJI, Shenzhen, China). With its snapshot imaging technology, the UHD S185 spectrometer can instantaneously obtain accurate hyperspectral cube data across the entire field of view; Table 2 lists its main parameters. The data acquisition route was planned with DJI GS Pro (DJI, Shenzhen, China). Reflectance between 0% and 100% was obtained via standard white and black calibration corrections (Figure 3b). Considering the actual terrain and vegetation conditions, the UAV flight altitude was set to 100 m above the ground (an average altitude of 1832 m above sea level), with a flight speed of 5 m/s, forward overlap of 70%, and side overlap of 60%. The obtained images contain 125 bands (450 to 950 nm) with a spatial resolution of 0.1 m. Global Positioning System (GPS) and Inertial Measurement Unit (IMU) modules are integrated into the UAV, with horizontal and vertical position errors of approximately 2.0 m and 5.0 m, respectively, and an attitude accuracy of approximately 1°. In small-area image analysis, relative position matters more than absolute horizontal or vertical position, so these error margins are acceptable in small-area forestry surveys (see Liu et al., 2020 [19]). We synchronized the position and orientation system (POS) data to correct the hyperspectral data and acquired a UAV RGB orthophoto for geometric correction of the hyperspectral imagery. All experimental data are listed in Table 3.

2.2.3. Data Preprocessing

It is important to extract the canopy spectrum of Pinus yunnanensis accurately, and the selection of the ROI (Region of Interest) significantly affects the extraction of canopy spectral information. In this study, the ROI selection tool in the ENVI software was used to delineate the canopy extent, combined with the latitude and longitude information from the ground survey.
Ground hyperspectral data were processed with the SRAnal710e software, including dark-field calibration, radiometric calibration, spectral calibration, and conversion of DN values into reflectance. The pre-processing of the SOC710VP data comprises three major steps: (1) spectral calibration: the data were calibrated using the calibration file accessed directly through SRAnal710e; (2) radiometric calibration: we conducted radiometric calibration in SRAnal710e; (3) conversion of DN values into reflectance: we selected the reference area, saved the DN values, and converted them into reflectance using the reference reflectance.
The radiometric calibration of the canopy hyperspectral data is provided by the manufacturer, and the atmospheric, geometric, and topographic corrections were processed on the ENVI 5.3 platform. We normalized the reflectance records:
$$R_i' = \frac{R_i}{\frac{1}{n}\sum_{i=1}^{n} R_i}$$
where $R_i$ is the initial reflectance, $n$ is the total number of bands ($n = 125$), and $R_i'$ is the normalized reflectance.
The captured hyperspectral data are noisy for various reasons, such as the equipment and the soil, topographic, and lighting conditions. We used the Savitzky–Golay (S–G) method to smooth the spectral reflectance [25,26], ensuring the quality of the spectral curve by removing noise:
$$\chi_{n,s} = \frac{1}{G}\sum_{i=-m}^{m} \chi_{n+i}\, g_i$$
where $\chi_{n,s}$ represents the smoothed spectral value at wavelength $n$; $g_i$ denotes the filter coefficient estimated by the least squares method; $G$ is the normalization factor; and $m$ is the half-width of the smoothing window.
To further improve the spectral quality, we processed the smoothed spectral curve by calculating the first derivative of the spectral reflectance [27]:
$$R'(\lambda_i) = \frac{dR(\lambda_i)}{d\lambda} = \frac{R(\lambda_{i+1}) - R(\lambda_{i-1})}{2\Delta\lambda}$$
where $\lambda_i$ is the wavelength of band $i$; $R'(\lambda_i)$ represents the first-derivative value at wavelength $\lambda_i$; and $\Delta\lambda$ is the wavelength interval between adjacent bands.
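The preprocessing chain above (normalization, S–G smoothing, and first-derivative computation, Equations (2)–(4)) can be sketched as follows. This is a minimal illustration, not the authors' code; the window length and polynomial order are assumed values.

```python
import numpy as np
from scipy.signal import savgol_filter

def preprocess(spectra, delta_lambda=4.0, window=11, polyorder=2):
    """Normalize, smooth, and differentiate reflectance spectra.

    spectra: (n_samples, 125) reflectance array covering 450-950 nm.
    delta_lambda: band spacing in nm (~4 nm for 125 bands over 500 nm).
    window, polyorder: Savitzky-Golay parameters (illustrative choices).
    """
    # Equation (2): divide each spectrum by its mean reflectance.
    normalized = spectra / spectra.mean(axis=1, keepdims=True)

    # Equation (3): Savitzky-Golay smoothing along the spectral axis.
    smoothed = savgol_filter(normalized, window_length=window,
                             polyorder=polyorder, axis=1)

    # Equation (4): central-difference first derivative with respect to
    # wavelength, computed as (R[i+1] - R[i-1]) / (2 * delta_lambda).
    first_derivative = np.gradient(smoothed, delta_lambda, axis=1)
    return smoothed, first_derivative
```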

2.3. Methods

We first preprocessed the spectral reflectance and derived canopy spectral features from vegetation indices (VI), principal components and continuous wavelet transform (CWT) to reduce the dimensionality. Then, we classified the damage degrees of Pinus yunnanensis by Tomicus spp. via algorithms that include DA, SVM, and BP. Finally, we validated the classification results and analyzed the feasibility of different methods for varying damage degrees.

2.3.1. Spectral Feature Extraction

1. Vegetation index (VI)
Vegetation indices reflect a plant's physiological characteristics; damage by Tomicus spp. alters chlorophyll content and thereby changes the spectral reflectance. In this study, we extracted 10 vegetation indices (Table 4) and selected 20 hyperspectral features (Table 4) by combining canopy spectral curves and first-order derivative features [28,29,30]. A sketch of how such indices can be computed from the hyperspectral bands follows.
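A minimal sketch of band-ratio index computation. The three indices shown (NDVI, NDVI705, PRI) use their standard published definitions; the helper names and the approximate band-center grid are illustrative assumptions, not taken from the paper.

```python
import numpy as np

WAVELENGTHS = np.linspace(450, 950, 125)  # approximate UHD S185 band centers

def band(spectra, nm):
    """Reflectance at the band closest to the requested wavelength (nm)."""
    return spectra[:, np.argmin(np.abs(WAVELENGTHS - nm))]

def vegetation_indices(spectra):
    """Three representative indices computed from (n_samples, 125) spectra."""
    r800, r670 = band(spectra, 800), band(spectra, 670)
    r750, r705 = band(spectra, 750), band(spectra, 705)
    r531, r570 = band(spectra, 531), band(spectra, 570)
    return {
        "NDVI": (r800 - r670) / (r800 + r670),     # broadband greenness
        "NDVI705": (r750 - r705) / (r750 + r705),  # red-edge chlorophyll
        "PRI": (r531 - r570) / (r531 + r570),      # photochemical reflectance
    }
```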
2. Principal component analysis (PCA)
Principal component analysis (PCA) is a feature extraction technique that reduces the dimensionality of a dataset and increases its interpretability while preserving the maximum amount of information [31,32,33]: an $n \times m$ matrix is transformed into an $n \times k$ matrix ($k < m$). The first principal component is the direction in space with maximized variance, calculated so that it accounts for the greatest possible variance in the dataset. The second principal component, uncorrelated with the first, is the direction orthogonal to it with the second-largest variance. In the same way, a total of $p$ principal components, equal to the original number of variables, can be derived. The transformation of the original variables to the principal components follows:
$$P = Q T^{\mathrm{T}}$$
where $P$ is the spectral matrix, the sum of the outer products of the $m$-dimensional spectral vectors of the $n$ samples; $Q = \{Q_1, Q_2, \ldots, Q_i\}$ ($i < m$) is the score matrix; and $T = \{T_1, T_2, \ldots, T_i\}$ is the principal component matrix. The principal components are eigenvectors of the data's covariance matrix and are commonly computed by eigendecomposition of that matrix.
In this study, we computed the eigenvalue of each component and the single and cumulative contribution rates for the 125 channels of the UAV hyperspectral data. A sketch of this selection procedure is given below.
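A minimal sketch of the component-selection rule used later in Section 3.1.2 (eigenvalue > 1 and cumulative contribution rate ≥ 99%), assuming spectra stored as an (n_samples, 125) array. scikit-learn stands in for whatever software the authors used, and the eigenvalue > 1 rule is usually applied to standardized data.

```python
import numpy as np
from sklearn.decomposition import PCA

def select_components(spectra, var_target=0.99):
    """Fit PCA and keep components with eigenvalue > 1 whose cumulative
    contribution rate reaches var_target (99% in the paper)."""
    pca = PCA()
    scores = pca.fit_transform(spectra)
    eigenvalues = pca.explained_variance_          # per-component eigenvalues
    cumulative = np.cumsum(pca.explained_variance_ratio_)
    # Smallest k whose cumulative contribution rate reaches the target...
    k = int(np.searchsorted(cumulative, var_target)) + 1
    # ...further restricted to components whose eigenvalue exceeds 1.
    k = min(k, int(np.sum(eigenvalues > 1)))
    return scores[:, :k], pca
```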
3. Continuous wavelet transform (CWT)
The continuous wavelet transform (CWT) is a signal-analysis method that provides an overcomplete representation of a signal by letting the translation and scale parameters of the wavelets vary continuously. CWT is an important method for hyperspectral vegetation feature extraction: it can separate multiple features of the reflectance spectrum through high- and low-frequency information and obtain wavelet energy coefficients at different locations and scales, generating stronger spectral features after correlation analysis [34,35,36]. First, the original reflectance spectrum is decomposed into wavelet coefficient spectra at multiple scales. Each scale corresponds to a frequency of spectral variation; different scales represent different frequencies, with lower scales corresponding to higher frequencies. Note that each wavelet coefficient spectrum has the same number of bands as the original spectrum. Then, important bands are extracted as wavelet coefficient features; the obtained wavelet features contain the spectra of a specific range of bands:
$$\varphi_{a,b}(\gamma) = \frac{1}{\sqrt{|a|}}\, \varphi\!\left(\frac{\gamma - b}{a}\right)$$
$$W_f(a,b) = \langle f, \varphi_{a,b} \rangle = \int_{-\infty}^{+\infty} f(\gamma)\, \varphi_{a,b}(\gamma)\, d\gamma$$
where $\varphi$ is the Daubechies (DbN) wavelet basis; $a$ is the scale factor; $b$ is the translation factor; the factor $1/\sqrt{|a|}$ ensures that the energy of the function remains constant at different resolutions; $W_f(a,b)$ is the wavelet energy coefficient (an $m \times n$ matrix); and $f(\gamma)$ is the reflectance spectrum ($\gamma = 1, 2, \ldots, m$).
In this study, DB4 from the Daubechies series was selected as the wavelet basis for decomposition and reconstruction because it yielded the minimum error. The candidate decomposition scales were $2^i$ ($i = 1, 2, 3, \ldots, 10$), and the maximum decomposition scale was set to 7 by examining the correlation coefficients between the wavelet coefficient matrix and the damage degree.
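A minimal sketch of the dyadic-scale decomposition, assuming spectra as an (n_samples, 125) array. PyWavelets' cwt only accepts continuous mother wavelets (e.g., 'morl'), so the Morlet wavelet stands in here for the DB4 basis the authors used; treat it as an illustrative substitute rather than a reproduction of their setup.

```python
import numpy as np
import pywt

def cwt_features(spectra, max_level=7):
    """Decompose each spectrum at dyadic scales 2^1 .. 2^max_level.

    Returns an array of shape (n_samples, max_level, n_bands): one wavelet
    coefficient spectrum per scale, the same length as the input spectrum.
    """
    scales = 2.0 ** np.arange(1, max_level + 1)
    coefficients = []
    for spectrum in spectra:
        # coeffs has shape (max_level, n_bands): one row per scale
        coeffs, _ = pywt.cwt(spectrum, scales, "morl")
        coefficients.append(coeffs)
    return np.stack(coefficients)
```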

2.3.2. Classification and Evaluation

1. Classification based on VIs, PCA and CWT
The selected features (VIs, PCA, and CWT) serve as inputs to algorithms that include DA, SVM, and BP for the classification of the damage degrees.
DA is a multivariate method commonly utilized for classification and dimensionality reduction. In DA, objects are separated into classes based on the known classification by minimizing the intra-class variance and maximizing the inter-class variance, finding linear combinations of the original directions called discriminant functions; the boundary between classes is a hyperplane. Among DA models, linear discriminant analysis (LDA) is particularly popular, as it can serve as a classifier while also acting as a dimensionality reduction technique. Quadratic discriminant analysis (QDA) is a variant of LDA that allows nonlinear data separation and can classify datasets with two or more classes.
LDA and QDA arise from a simple probability model in which the class-conditional distribution $P(X \mid y = k)$ for each category $k$ is combined via Bayes' theorem:
$$P(y = k \mid X) = \frac{P(y = k)\, P(X \mid y = k)}{P(X)} = \frac{P(y = k)\, P(X \mid y = k)}{\sum_{l} P(y = l)\, P(X \mid y = l)}$$
We maximize this conditional probability over the categories $k$. More specifically, $P(X \mid y = k)$ is modeled as a multivariate Gaussian distribution:
$$P(X \mid y = k) = \frac{1}{(2\pi)^{n/2}\, |\Sigma_k|^{1/2}}\, e^{-\frac{1}{2}(X - \mu_k)^{\mathrm{T}} \Sigma_k^{-1} (X - \mu_k)}$$
where $n$ represents the number of features. We estimate the prior probability $P(y = k)$, class mean $\mu_k$, and class covariance matrix $\Sigma_k$ from the training data. In LDA, the Gaussian distributions of all categories share one covariance matrix, and the linear decision surface between two categories can be derived by comparing their log-probability ratio, $\log \frac{P(y = k \mid X)}{P(y = l \mid X)}$. In QDA, which has a quadratic decision surface, no assumption is made about the Gaussian covariance matrices being shared. In this study, we use both LDA and QDA.
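A minimal sketch of fitting both discriminant models with scikit-learn (an assumed tool choice; the paper does not name its software). The function name and arguments are illustrative.

```python
from sklearn.discriminant_analysis import (
    LinearDiscriminantAnalysis,
    QuadraticDiscriminantAnalysis,
)

def fit_discriminant_models(X_train, y_train):
    """Fit LDA (shared covariance, linear boundaries) and QDA (per-class
    covariance, quadratic boundaries) on the selected spectral features.

    X_train: (n_samples, n_features) VIs, principal components, or CWT
    features; y_train: damage classes (healthy/mild/moderate/severe).
    """
    lda = LinearDiscriminantAnalysis().fit(X_train, y_train)
    qda = QuadraticDiscriminantAnalysis().fit(X_train, y_train)
    return lda, qda
```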
The SVM algorithm is a powerful and flexible supervised learning model that can be used for classification and regression. An SVM model represents the different classes by hyperplanes in a multidimensional space; the hyperplane is generated iteratively to minimize the error. The aim is to find a maximum-margin hyperplane in the multi-dimensional space that distinctly classifies the data points [37,38].
In practice, SVM algorithms are implemented with a kernel, which converts the input data space into the desired form. SVM uses a technique called the kernel trick: the kernel takes a low-dimensional input space and converts it into a higher-dimensional space, transforming a non-separable problem into a separable one by adding dimensions. This makes SVMs more powerful, flexible, and accurate. Kernel types include the linear kernel, polynomial kernel, and radial basis function (RBF) kernel, to list a few. We chose the linear kernel function (Linear) and the quadratic kernel function (Quadratic) for comparison. The linear kernel is the dot product between any two observations (Equation (10)):
$$K(x, x_i) = x \cdot x_i = \sum_{j} x_j\, x_{ij}$$
The polynomial kernel is a more generalized form of the linear kernel and can distinguish curved or nonlinear input spaces:
$$K(x, x_i) = \left(1 + x \cdot x_i\right)^{d}$$
where $d$ is the order of the polynomial, which must be specified manually.
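A minimal sketch of the two kernel variants with scikit-learn; the standardization step and the regularization constant C are illustrative assumptions, not settings reported in the paper.

```python
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def fit_svm_models(X_train, y_train):
    """Fit a linear-kernel SVM and a degree-2 polynomial ('quadratic') SVM.

    gamma=1 and coef0=1 make the polynomial kernel match the
    (1 + x . x_i)^d form of Equation (11) with d = 2.
    """
    linear = make_pipeline(StandardScaler(),
                           SVC(kernel="linear", C=1.0))
    quadratic = make_pipeline(StandardScaler(),
                              SVC(kernel="poly", degree=2,
                                  gamma=1, coef0=1, C=1.0))
    return linear.fit(X_train, y_train), quadratic.fit(X_train, y_train)
```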
Artificial neural networks (ANNs) simulate basic characteristics of the human brain and biological neural networks. Their learning rule adopts the steepest descent method, in which backpropagation is used to adjust the network's weights and thresholds to minimize the sum of squared errors. The BP neural network is a widely applied model relying on the error backpropagation algorithm [39,40]: a multi-layer feedforward network trained by error backpropagation. The BP model includes three layers: an input layer, a hidden layer, and an output layer. The BP algorithm has two processes: forward propagation of data and backward propagation of error. Forward propagation starts at the input layer, moves through the hidden layer, and finally reaches the output layer, with the state of each layer of neurons affecting only the neurons in the next layer. If the output layer does not reach the expected value, the backpropagation process is triggered, and the network's error function reaches its minimum through the alternation of the two propagation processes. BP is commonly used in the hyperspectral analysis of agricultural diseases and pests, where the nodal activation function usually adopts a Sigmoid function (Equation (12)):
$$f(x) = \frac{1}{1 + e^{-x/p}}$$
where $p$ is the Sigmoid parameter that adjusts the shape of the activation function.
In this study, we selected the Sigmoid function as the activation function for the hidden layer, the Purelin function at the output layer, and trainlm as the training function.
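A minimal sketch of a comparable BP-style network in scikit-learn, assuming the 9-node hidden layer of the paper's 22-9-1 and 4-9-1 structures. scikit-learn offers no Levenberg–Marquardt ('trainlm') solver, so 'lbfgs' stands in; this is an approximation, not the authors' configuration.

```python
from sklearn.neural_network import MLPClassifier

def fit_bp_network(X_train, y_train, n_hidden=9):
    """One hidden layer with a sigmoid (logistic) activation, trained by
    gradient-based backpropagation; hidden-layer size follows the paper."""
    model = MLPClassifier(hidden_layer_sizes=(n_hidden,),
                          activation="logistic",  # sigmoid hidden layer
                          solver="lbfgs",         # stands in for trainlm
                          max_iter=2000,
                          random_state=0)
    return model.fit(X_train, y_train)
```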
2. Evaluation
We derived a confusion matrix (CM) from the validation data for evaluation; the CM contains the number of correct classifications and the number misclassified into other categories. We also used the Overall Accuracy (OA), Producer Accuracy (PA), and User Accuracy (UA) to evaluate the classification results according to the confusion matrix. PA represents the probability that a reference sample of a given damage degree is correctly classified, while UA represents the proportion of samples assigned to a class that truly belong to it. PA and UA are, respectively, the complements of the omission error and the commission error: the omission error equals $1 - PA$ and the commission error equals $1 - UA$. OA is the ratio between the number of correctly classified samples and the total number of samples. The collected 120 samples were randomly divided into training (84 samples) and validation (36 samples) datasets, following a ratio of 7:3.
$$OA = \frac{\sum_{k=1}^{n} P_{kk}}{P}$$
$$PA_j = \frac{P_{jj}}{\sum_{i=1}^{n} P_{ij}}$$
$$UA_i = \frac{P_{ii}}{\sum_{j=1}^{n} P_{ij}}$$
where $OA$ denotes the overall classification accuracy; $P_{ij}$ is the number of samples assigned to category $i$ whose actual category is $j$; and $P$ is the total number of samples.
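These three metrics follow directly from the confusion matrix; a minimal sketch (assuming scikit-learn, where rows of the matrix are actual classes and columns are predictions) is given below.

```python
import numpy as np
from sklearn.metrics import confusion_matrix

def accuracy_report(y_true, y_pred):
    """Confusion matrix plus OA, per-class PA (recall), and UA (precision)."""
    cm = confusion_matrix(y_true, y_pred)  # rows: actual, columns: predicted
    oa = np.trace(cm) / cm.sum()           # overall accuracy
    pa = np.diag(cm) / cm.sum(axis=1)      # producer accuracy = 1 - omission
    ua = np.diag(cm) / cm.sum(axis=0)      # user accuracy = 1 - commission
    return cm, oa, pa, ua
```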

3. Results

3.1. Results of Canopy Spectral Feature Extraction

3.1.1. Vegetation Indices (VI)

Table 5 presents the results of the variance analysis of 30 spectral features (ten vegetation indices and 20 hyperspectral parameters) against the damage degrees. NDVI, NDVI705, RVI, GI, DVI, PRI, PSRI, TVI, SLAVI, YI, Rg, Rr, Db, Dr, Dnir, SDb, SDy, SDr, SDnir, Rg/Rr, (Rg−Rr)/(Rg+Rr) and (SDr−SDy)/(SDr+SDy) show significant differences, while Dy, SDr/SDb, SDr/SDy, SDnir/SDb, SDnir/SDr, (SDr−SDb)/(SDr+SDb), (SDnir−SDb)/(SDnir+SDb), and (SDnir−SDr)/(SDnir+SDr) show no significant difference.

3.1.2. Principal Component Analysis (PCA)

We obtained the principal components from the eigenvalues of the PCA and then calculated the cumulative contribution rates. To avoid information loss and improve classification accuracy, we selected the principal components with eigenvalues > 1 whose cumulative contribution rate exceeded 99%. The contribution rate of the first principal component reached 73.84%, indicating that it represents most of the spectral information; the cumulative contribution rate of the first four principal components was 99.14% (Figure 4).

3.1.3. Continuous Wavelet Transform (CWT)

The original spectra were decomposed using DB4 as the wavelet basis function to calculate the wavelet coefficients for the different damage degrees. The number of spectral reflectance bands was 125, and the decomposition scales were set to $2^i$ ($i = 1, 2, 3, \ldots, 10$). We then calculated the wavelet coefficient matrix and the correlation coefficients between the wavelet coefficient matrix and DSR. The results suggested that the correlation coefficients remained stable at scales 8 to 10; therefore, the maximum decomposition scale was set to 7 (Figure 5). Selecting the wavelet coefficients with higher correlation at scales 1–7 alone failed to achieve dimensionality reduction, so we conducted a multicollinearity analysis on the wavelet coefficient features.
The multicollinearity analysis aims to remove redundancy among the wavelet coefficients. Multiple linear regression and multicollinearity diagnostics were applied to the variables corresponding to the 125 × 7 wavelet coefficients and DSR. We observed that the R2 values gradually increased while the standard estimation errors decreased. At the 22nd fitting, we obtained a total of 16 features with weak collinearity and a strong correlation with DSR (0.991). The variance inflation factor (VIF) of each of the 16 features was less than 10 (Table 6), confirming their weak collinearity, and the p-values (significance, Sig.) of the t-tests were all less than 0.05, indicating that these features correlate significantly with DSR.
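The VIF screening step can be illustrated as follows. This sketch uses statsmodels and an iterative drop-the-worst-feature loop as a stand-in for the stepwise regression procedure described above, so the function and the threshold handling are assumptions rather than the authors' exact procedure.

```python
import pandas as pd
import statsmodels.api as sm
from statsmodels.stats.outliers_influence import variance_inflation_factor

def vif_filter(features, threshold=10.0):
    """Iteratively drop the feature with the largest VIF until every VIF is
    below the threshold (the paper retains features with VIF < 10).

    features: DataFrame whose columns are candidate wavelet coefficients.
    """
    X = features.copy()
    while True:
        design = sm.add_constant(X)  # intercept column for proper VIFs
        vifs = pd.Series(
            [variance_inflation_factor(design.values, i + 1)  # skip const
             for i in range(X.shape[1])],
            index=X.columns)
        if vifs.max() < threshold:
            return X, vifs
        X = X.drop(columns=[vifs.idxmax()])  # remove most collinear feature
```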

3.2. Classification Results

3.2.1. Results of VI-Based Classification

The classification performances of all methods based on VIs, PCA, and CWT are shown in Figure 6 and Figure 7. To avoid covariance among the VI indices, LDA was used to construct the VI classification model; its training and validation accuracies were 79.76% and 80.56%, respectively. The SVM with Linear and Quadratic kernel functions yielded training accuracies of 76.19% and 83.33%, and validation accuracies of 77.78% and 80.56%, respectively. For BP, the number of nodes in the hidden layer was set to 9, giving a network structure of 22-9-1 and achieving a training accuracy of 77.38% and a validation accuracy of 77.78%.
For the healthy category, VI-LDA, VI-SVM-Linear, VI-SVM-Quadratic, and VI-BP yielded relatively high PA values (86.36%, 81.82%, 82.61%, and 80.00%) and UA values (90.48%, 85.71%, 90.48%, and 76.19%), respectively (Figure 6 and Figure 7). The confusion was mainly observed among mild, moderate, and severe damage (SVM-Quadratic) (Figure 8). For mild damage, PA was lower with DA (76.47%), SVM-Linear (72.22%), and BP (69.57%), but not with SVM-Quadratic (93.75%); UA was lower with DA (61.90%), SVM-Linear (61.90%), and SVM-Quadratic (71.43%). The omission and commission errors were mainly observed among healthy, mild, and moderate damage. For moderate damage, the PA was 65.55%, 65.38%, 68.97%, and 78.95%, and the UA was 90.48%, 80.95%, 95.24%, and 71.43% for VI-DA, VI-SVM-Linear, VI-SVM-Quadratic, and VI-BP, respectively; the PA was significantly reduced with VI-DA and VI-SVM, and the confusion was mainly between mild and moderate damage. The PA of severe damage was 100%, 88.89%, 100%, and 81.82%, and the UA was 76.19%, 76.19%, 76.19%, and 85.71%.
For the validation samples, the omission error of the healthy category increased compared with the training data for VI-LDA (PA of 80%), VI-SVM-Linear and VI-SVM-Quadratic (61.54% and 75%), and VI-BP (75%), while the UA slightly increased for SVM-Linear and SVM-Quadratic (88.89% and 100%) and VI-BP (100%); the confusion was mainly between healthy and mild damage (Figure 9). For mild damage, the PA increased significantly with VI-SVM-Linear, VI-SVM-Quadratic, and VI-BP (all 100%), but the UA decreased significantly with VI-SVM-Linear and VI-SVM-Quadratic (44.44% and 55.56%). For moderate damage, the PA was 77.78% (VI-LDA), 88.89% (VI-SVM-Linear), 77.78% (VI-SVM-Quadratic), and 66.67% (VI-BP). For severe damage, the PA of VI-LDA, VI-SVM-Linear, VI-SVM-Quadratic, and VI-BP was 87.5%, 80%, 72.73%, and 85.71%, respectively, and the UA was 77.78%, 88.89%, 88.89%, and 66.67%, respectively.

3.2.2. PCA-Based Classification and Diagnosis Results

The LDA and QDA yielded OAs of 85.71% and 71.43%, respectively, for the training samples (Figure 6) and OAs of 83.33% and 72.22%, respectively, for the validation samples (Figure 7). The training accuracies of SVM-Linear and SVM-Quadratic were 83.33% and 75.00%, respectively (Figure 6), with validation accuracies of 80.56% and 72.22%, respectively (Figure 7). The structure of the BP neural network was set to 4-9-1, and the training accuracy reached 91.67% (Figure 6), with a validation accuracy of 86.11% (Figure 7).
For the training samples (healthy category), PCA-LDA, PCA-QDA, PCA-SVM-Linear, PCA-SVM-Quadratic, and PCA-BP achieved PAs of 91.30%, 89.47%, 100%, 73.68%, and 95%, and UAs of 100%, 80.95%, 90.47%, 66.67%, and 90.48%, respectively; health was mainly confused with mild damage (Figure 8). For mild damage, the PA was 80.95%, 57.89%, 75%, 50%, and 90.91%, and the UA was 80.95%, 52.38%, 85.71%, 52.38%, and 95.24% for PCA-LDA, PCA-QDA, PCA-SVM-Linear, PCA-SVM-Quadratic, and PCA-BP, respectively; PCA-QDA and PCA-SVM-Quadratic performed relatively poorly, and mild damage was confused with healthy and moderate damage. For moderate damage, the PA values were 86.67%, 52.38%, 68%, 77.27%, and 100%, while the UA values were 61.90%, 52.38%, 80.95%, 80.95%, and 80.95%, respectively. For severe damage, PA and UA increased notably for PCA-LDA (84% and 100%), PCA-QDA (84% and 100%), PCA-SVM-Linear (100% and 76.19%), PCA-SVM-Quadratic (100% and 100%), and PCA-BP (84% and 100%).
In the validation samples, PCA-LDA, PCA-QDA, PCA-SVM-Linear, PCA-SVM-Quadratic, and PCA-BP achieved PAs of 88.89%, 87.5%, 88.89%, 75%, and 90% for the healthy category, with UAs of 88.89%, 77.78%, 88.89%, 66.67%, and 100%, respectively. For mild damage, the PA was 77.78%, 62.50%, 75%, 50%, and 87.5%, and the UA was 77.78%, 55.56%, 66.67%, 55.56%, and 77.78%, respectively. For moderate damage, the PA was 85.71%, 55.56%, 66.67%, 70%, and 77.78%, and the UA was 66.67%, 55.56%, 88.89%, 77.78%, and 77.78%, respectively. For severe damage, the PA was 81.82%, 81.82%, 100%, 100%, and 88.89%, and the UA was 100%, 100%, 77.78%, 88.89%, and 88.89%, respectively.

3.2.3. CWT-Based Classification and Diagnosis Results

To avoid collinearity among the wavelet features, we selected LDA to classify the damage degrees of the Yunnan pine canopies. The OA for the training and validation samples based on CWT-LDA was 86.90% and 86.11%, respectively (Figure 6 and Figure 7). The SVM with Linear and Quadratic kernel functions produced OAs of 77.38% and 80.95% for the training data and 75.00% and 80.56% for the validation data, respectively. The number of hidden-layer nodes of BP was set to eight, producing the highest training accuracy of 94.05% and a validation accuracy of 94.44%.
For the training data (healthy category), PA and UA were 94.74% and 85.71% (CWT-DA), 100% and 61.90% (CWT-SVM-Linear), 85.71% and 85.71% (CWT-SVM-Quadratic), and 90.91% and 95.24% (CWT-BP), respectively. For mild damage, CWT-SVM-Linear and CWT-SVM-Quadratic yielded lower PAs of 61.29% and 66.67%; the confusion was mainly among healthy, mild, and moderate damage. For moderate damage, PA and UA reached their highest value of 95.24% with CWT-BP, whereas the PA was lower for CWT-DA (76.92%), CWT-SVM-Linear (70.83%), and CWT-SVM-Quadratic (78.95%); the maximum commission error was 71.43% (CWT-SVM-Quadratic). For severe damage, the PA of all methods was 100%, but the UA was 76.19%, 76.19%, 80.95%, and 95.24% for CWT-DA, CWT-SVM-Linear, CWT-SVM-Quadratic, and CWT-BP, respectively; significant misclassification between moderate and severe damage was observed with the CWT-DA and CWT-SVM-Linear models.
For the validation samples, the PA of the healthy category was 100% (CWT-DA), 100% (CWT-SVM-Linear), 70% (CWT-SVM-Quadratic), and 81.82% (CWT-BP); commission errors were observed for CWT-DA (21.22%), CWT-SVM-Linear (33.3%), and CWT-SVM-Quadratic (21.22%). Mild damage produced lower PAs of 75% (CWT-DA), 63.64% (CWT-SVM-Linear), and 70% (CWT-SVM-Quadratic), with a maximum of 100% for CWT-BP; the UA was 100%, 77.78%, 77.78%, and 77.78% for CWT-DA, CWT-SVM-Linear, CWT-SVM-Quadratic, and CWT-BP, respectively. For moderate damage, the PA of CWT-DA, CWT-SVM-Linear, CWT-SVM-Quadratic, and CWT-BP was 80%, 61.54%, 87.50%, and 100%, respectively, and the UA was 88.89%, 88.89%, 77.78%, and 100%, respectively. The PA values for severe damage were all 100%, and the UA was 77.78%, 66.67%, 88.89%, and 100% for CWT-DA, CWT-SVM-Linear, CWT-SVM-Quadratic, and CWT-BP, respectively.
Overall, the classification accuracies based on CWT and PCA were higher than those based on VIs. The BP algorithm achieved the highest accuracy, followed by DA, and BP coupled with CWT outperformed all other models.

4. Discussion

4.1. Comparison between Needles and Canopy

4.1.1. Comparison of Spectral Characteristics

1. Comparison of Original Spectral Reflectance
The spectral reflectance curves (Figure A1) and the first-derivative curves (Figure A2) of Pinus yunnanensis canopies with different degrees of damage by Tomicus spp. are shown in Appendix A.
The spectral reflectance curves differ notably at the 'red valley' (640–700 nm) and the 'green peak'. The reflectance differences among damage degrees are not significant at the 'green peak' but are notable at the 'red valley' (Figure 10a). As the damage to Pinus yunnanensis progressed, the needle spectral reflectance increased gradually and the 'red valley' disappeared (Figure 10a). The canopy spectral reflectance of healthy, mild, and moderate damage showed no significant variation at the 'green peak', with the lowest reflectance observed for severe damage (Figure A1). These phenomena can be explained by the different parameters of the spectral equipment used for needles and canopy. The needle and canopy reflectance curves are similar at the 'red valley' (Figure 10a and Figure A1), where healthy trees have the lowest reflectance and the 'red valley' reflectance increases once Yunnan pine needles are infested by Tomicus spp.
The results showed that the damage spectra were similar at the two scales. The spectral reflectance curves of different damage degrees differed at the 'red valley' (640~700 nm) and fluctuated significantly in the near-infrared band. The spectral reflectance of Pinus yunnanensis needles decreased significantly in the near-infrared band as the damage degree increased; thus, the optimal spectral reflectance monitoring band for needles lies within the 765–838 nm range (Table 7).
2. Comparison of first-order derivative reflectance
The first derivative of the spectrum has an obvious peak at the 'red edge' (680–740 nm) at both scales (Figure 10b and Figure A2). With increasing damage degree, the peak height gradually decreases, and the 'red edge' position shifts slightly toward the blue (Figure 10b and Figure A2). At 750–950 nm, we observed peaks and troughs, and the overall trend of the four damage levels was similar (Figure 10b and Figure A2). The optimal first-order derivative monitoring band lies in 702–744 nm (i.e., the visible red band) (Table 7). As the damage degree increases, the spectral reflectance decreases significantly in the near-infrared band but increases at the 'red valley', which disappears when the damage is severe (Figure 10b and Figure A2).

4.1.2. Comparison of Sensitive Bands

The optimal monitoring band for needles is roughly 765~838 nm, and that for the canopy is about 746~802 nm for the original spectral reflectance (Table 7); both fall in the near-infrared region. This indicates that the spectral reflectance of the near-infrared bands can effectively monitor the different damage degrees of Pinus yunnanensis caused by Tomicus spp. For the first-order derivative reflectance, the best monitoring bands for needles and canopy are approximately 702~744 nm and 694~726 nm, respectively (located in the red wavelengths) (Table 7).
The differences in the correlation coefficients were mainly concentrated in the range of 450–715 nm (Figure 11a and Figure A3a). The needles showed highly significant positive correlations at 450–514 nm and 569–708 nm (Figure 11a), whereas the correlation coefficients of the canopy were negative at 526–562 nm but positive at 654–678 nm (Figure A3a). Both the needles and the canopy showed highly significant negative linear correlations within 715–950 nm (Figure A3a). For the first derivative, the difference in correlation coefficients was mainly observed in the 800–950 nm range for both the needles and the canopy, where their correlation coefficients were inconsistent (Figure 11b and Figure A3b).
The sensitive bands of the spectral reflectance for needles and canopy are mainly concentrated in the near-infrared and red wavelengths (Figure 12a and Figure A4a,b), while those of the first-order spectral derivatives are primarily concentrated in 450–750 nm and 900–950 nm (Figure 12b and Figure A4c,d).

4.2. Existing Deficiencies and Future Prospects

We compared the spectral features of the canopy and needles based on single samples; further comparisons over extensive areas would improve our understanding of their spectral reflectance differences. Moreover, some studies have found that image features such as texture play an important role in pest and disease monitoring with high-spatial-resolution multispectral imagery, so future studies are encouraged to integrate multi-source data, such as UAV-based hyperspectral and multispectral imagery. In addition, we only collected samples damaged by Tomicus spp.; the differences in characteristics between trees infested by Tomicus spp. and those attacked by other pests should be considered in future studies. Advanced machine learning algorithms, such as deep learning, show great potential in plant disease detection, but due to the limited samples in this study we did not compare their performance on our dataset. We also acknowledge that the growth conditions of Pinus yunnanensis vary across regions, so the sensitive spectral bands of trees damaged by Tomicus spp. may differ slightly, and the effectiveness of the monitoring model should be further evaluated in more study areas. Finally, LiDAR data have been widely adopted in forestry, and we encourage future efforts to consider LiDAR as an auxiliary source to improve forest health monitoring.

5. Conclusions

In this study, we developed an innovative and precise approach to monitoring the damage degree of Pinus yunnanensis infected by Tomicus spp. using hyperspectral UAV (unmanned aerial vehicle) imagery with discriminant algorithms. We revealed the hyperspectral characteristics of Pinus yunnanensis from a UAV-based hyperspectral platform and extracted 22 vegetation indices (VIs), 4 principal components, and 16 continuous wavelet transform (CWT) features. We analyzed the damage degree of Pinus yunnanensis canopies infected by Tomicus spp. via three methods, i.e., discriminant analysis (DA), support vector machine (SVM), and BP neural network. The results showed that the damage degree detected by the BP neural network combined with the 16 CWT features achieved the best performance (training accuracy: 94.05%; validation accuracy: 94.44%). We further observed that the canopy spectral reflectance in the near-infrared band decreased significantly with increasing damage degree.

Author Contributions

Conceptualization, Y.M.; Formal analysis, Y.M.; Funding acquisition, Y.M. and J.L.; Investigation, Y.M. and J.L.; Methodology, Y.M.; Resources, J.L.; Visualization, Y.M. and J.L.; Writing—original draft, Y.M.; Writing—review and editing, Y.M., J.L. and X.H. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the Yunnan Provincial Science and Technology Plan Project (992122060), the Yunnan Youth Talent Training Program (99012274), and the Scientific Research Starting Foundation for Doctors of Southwest Forestry University (112006).

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A

Figure A1 and Figure A2 show the spectral reflectance and first-derivative curves of Pinus yunnanensis canopies with different degrees of damage by Tomicus spp. A spectral characteristics analysis of the damaged Pinus yunnanensis canopy is given in Liu et al., 2020 [19].
Figure A1. Spectral reflectance curves of Pinus yunnanensis canopies with different damage degrees by Tomicus: (a) the original curves; (b) the average reflectance curves.
Figure A2. First-derivative curves of Pinus yunnanensis canopies with different damage degrees by Tomicus: (a) the first-derivative curves of the canopy spectra; (b) the average first-derivative curves of the canopy spectra.

Appendix B

The spectrally sensitive bands of Pinus yunnanensis canopies with different degrees of damage by Tomicus spp. are shown in Figure A3, Figure A4 and Table A1. A detailed analysis of the damaged Pinus yunnanensis canopy is given by Liu et al., 2020 [19].
Figure A3. Correlation analysis between the damage degrees by Tomicus and the spectral parameters of the Pinus yunnanensis canopy: (a) the spectral reflectance; (b) the first derivative of the spectra.
Figure A4. Sensitive bands of the spectral reflectance and the first derivative for the damage degrees by Tomicus in the Pinus yunnanensis canopy: (a) spectral reflectance of 120 samples; (b) mean spectral reflectance; (c) spectral first-derivative curves of 120 samples; (d) mean spectral first derivative.
Table A1. Extraction of sensitive bands for the spectral reflectance and the first derivative of the Pinus yunnanensis canopy.

Spectral Reflectance of Pinus yunnanensis Canopy | The Spectral First Derivative of Pinus yunnanensis Canopy
Sensitive Wavelength (nm) | Correlation Analysis | Stepwise Regression Analysis | Sensitive Wavelength (nm) | Correlation Analysis | Stepwise Regression Analysis
698 | −0.199 * | <0.001 ** | 718 | −0.660 ** | <0.001 **
— | — | — | 650 | 0.595 ** | <0.001 **
806 | −0.577 ** | <0.001 ** | 690 | −0.495 ** | <0.001 **
— | — | — | 878 | 0.332 ** | 0.007 **
858 | −0.544 ** | <0.001 ** | 754 | −0.327 ** | 0.013 *
— | — | — | 662 | 0.430 ** | 0.019 *
— | — | — | 586 | 0.278 ** | 0.039 *
Note: * and ** represent significance levels of 5% and 1%, respectively.
We used the Pearson correlation to analyze the original spectral reflectance and the first derivative against the damage degrees (Figure A3). The spectral reflectance of the Pinus yunnanensis canopy at 522 nm, 566 nm, 698 nm, 526~562 nm, and 702~946 nm has a significantly negative correlation with the damage degrees, and a significantly positive correlation in the range of 654 to 678 nm.
The first derivative showed both positive and negative correlations with the damage levels in the range of 450 to 946 nm (Figure A3b). The spectral first-order derivative of the Yunnan pine canopy showed a highly significant negative correlation with the damage levels in the ranges of 498–538 nm and 674–754 nm, and a significant negative correlation at 542 nm and 758 nm. A highly significant positive correlation with the damage levels was displayed in the ranges of 554–622 nm, 802–830 nm, 846–858 nm, 874–886 nm, 898–918 nm, and 930–946 nm, and at 470 nm, 550 nm, 666 nm, 798 nm, 834 nm, 862 nm, 870 nm, 890–894 nm, and 922–926 nm. At the 'red edge' (670–750 nm), the first-order derivative showed a highly significant negative correlation and decreased as the damage degree increased. These results are consistent with the spectral first-order derivative values at the different damage degrees (Figure A2).
The sensitive bands of the spectral reflectance and the first derivative for the damage degrees by Tomicus in the Pinus yunnanensis canopy are shown in Figure A4. The original spectral curves of the canopy are sensitive to the damage degree at 526–562 nm, 654–678 nm, and 698–946 nm (Figure A4a,b); a stepwise regression analysis shows that the most sensitive bands are 698 nm, 806 nm, and 858 nm (Table A1). The first-order derivative curves of the Yunnan pine canopy are sensitive to the damage degree in the bands 522–526 nm, 654–678 nm, and 698–946 nm (Figure A4c,d); the most sensitive bands are 586 nm, 650 nm, 662 nm, 690 nm, 718 nm, 754 nm, and 878 nm (Table A1).

References

1. Lieutier, F.; Ye, H.; Yart, A. Shoot damage by Tomicus sp. (Coleoptera: Scolytidae) and effect on Pinus yunnanensis resistance to subsequent reproductive attacks in the stem. Agric. For. Entomol. 2003, 5, 227–233.
2. Yu, L.; Zhan, Z.; Ren, L.; Zong, S.; Luo, Y.; Huang, H. Evaluating the potential of WorldView-3 data to classify different shoot damage ratios of Pinus yunnanensis. Forests 2020, 11, 417.
3. Pause, M.; Schweitzer, C.; Rosenthal, M.; Keuck, V.; Bumberger, J.; Dietrich, P.; Heurich, M.; Jung, A.; Lausch, A. In situ/remote sensing integration to assess forest health—A review. Remote Sens. 2016, 8, 471.
4. Liu, Y.; Zhan, Z.; Ren, L.; Ze, S.; Yu, L.; Jiang, Q.; Luo, Y. Hyperspectral evidence of early-stage pine shoot beetle attack in Yunnan pine. For. Ecol. Manag. 2021, 497, 119505.
5. Coops, N.C.; Johnson, M.; Wulder, M.A.; White, J.C. Assessment of QuickBird high spatial resolution imagery to detect red attack damage due to mountain pine beetle infestation. Remote Sens. Environ. 2006, 103, 67–80.
6. White, J.C.; Wulder, M.A.; Brooks, D.; Reich, R.; Wheate, R.D. Detection of red attack stage mountain pine beetle infestation with high spatial resolution satellite imagery. Remote Sens. Environ. 2005, 96, 340–351.
7. Lausch, A.; Erasmi, S.; King, D.J.; Magdon, P.; Heurich, M. Understanding forest health with remote sensing—Part I—A review of spectral traits, processes and remote-sensing characteristics. Remote Sens. 2016, 8, 1029.
8. Underwood, E.; Ustin, S.; DiPietro, D. Mapping nonnative plants using hyperspectral imagery. Remote Sens. Environ. 2003, 86, 150–161.
9. Carlson, K.M.; Asner, G.P.; Hughes, R.F.; Ostertag, R.; Martin, R.E. Hyperspectral remote sensing of canopy biodiversity in Hawaiian lowland rainforests. Ecosystems 2007, 10, 536–549.
10. Kokaly, R.F.; Asner, G.P.; Ollinger, S.V.; Martin, M.E.; Wessman, C.A. Characterizing canopy biochemistry from imaging spectroscopy and its application to ecosystem studies. Remote Sens. Environ. 2009, 113, S78–S91.
11. Calderón, R.; Navas-Cortés, J.A.; Lucena, C.; Zarco-Tejada, P.J. High-resolution airborne hyperspectral and thermal imagery for early detection of Verticillium wilt of olive using fluorescence, temperature and narrow-band spectral indices. Remote Sens. Environ. 2013, 139, 231–245.
12. Kim, S.R.; Lee, W.K.; Lim, C.H.; Kim, M.; Kafatos, M.C.; Lee, S.H.; Lee, S.S. Hyperspectral analysis of pine wilt disease to determine an optimal detection index. Forests 2018, 9, 115.
13. Coops, N.; Stanford, M.; Old, K.; Dudzinski, M.; Culvenor, D.; Stone, C. Assessment of Dothistroma needle blight of Pinus radiata using airborne hyperspectral imagery. Phytopathology 2003, 93, 1524–1532.
14. Lausch, A.; Heurich, M.; Gordalla, D.; Dobner, H.J.; Gwillym-Margianto, S.; Salbach, C. Forecasting potential bark beetle outbreaks based on spruce forest vitality using hyperspectral remote-sensing techniques at different scales. For. Ecol. Manag. 2013, 308, 76–89.
15. White, J.C.; Coops, N.C.; Hilker, T.; Wulder, M.A.; Carroll, A.L. Detecting mountain pine beetle red attack damage with EO-1 Hyperion moisture indices. Int. J. Remote Sens. 2007, 28, 2111–2121.
16. Peña, M.A.; Altmann, S.H. Use of satellite-derived hyperspectral indices to identify stress symptoms in an Austrocedrus chilensis forest infested by the aphid Cinara cupressi. Int. J. Pest Manag. 2009, 55, 197–206.
17. Wang, J.; Wu, W.; Wang, T.; Cai, C. Estimation of leaf chlorophyll content and density in Populus euphratica based on hyperspectral characteristic variables. Spectrosc. Lett. 2018, 51, 485–495.
18. Ju, Y.; Pan, J.; Wang, X.; Zhang, H. Detection of Bursaphelenchus xylophilus infection in Pinus massoniana from hyperspectral data. Nematology 2014, 16, 1197–1207.
19. Liu, M.; Zhang, Z.; Liu, X.; Yao, J.; Du, T.; Ma, Y.; Shi, L. Discriminant analysis of the damage degree caused by pine shoot beetle to Yunnan pine using UAV-based hyperspectral images. Forests 2020, 11, 1258.
20. Moreno, R.; Corona, F.; Lendasse, A.; Graña, M.; Galvão, L.S. Extreme learning machines for soybean classification in remote sensing hyperspectral images. Neurocomputing 2014, 128, 207–216.
21. Iordache, M.D.; Mantas, V.; Baltazar, E.; Pauly, K.; Lewyckyj, N. A machine learning approach to detecting pine wilt disease using airborne spectral imagery. Remote Sens. 2020, 12, 2280.
22. Yu, R.; Luo, Y.; Li, H.; Yang, L.; Huang, H.; Yu, L.; Ren, L. Three-dimensional convolutional neural network model for early detection of pine wilt disease using UAV-based hyperspectral images. Remote Sens. 2021, 13, 4065.
23. Näsi, R.; Honkavaara, E.; Lyytikäinen-Saarenmaa, P.; Blomqvist, M.; Litkey, P.; Hakala, T.; Viljanen, N.; Kantola, T.; Tanhuanpää, T.; Holopainen, M. Using UAV-based photogrammetry and hyperspectral imaging for mapping bark beetle damage at tree-level. Remote Sens. 2015, 7, 15467–15493.
24. Yu, R.; Luo, Y.; Zhou, Q.; Zhang, X.; Wu, D.; Ren, L. Early detection of pine wilt disease using deep learning algorithms and UAV-based multispectral imagery. For. Ecol. Manag. 2021, 497, 119493.
25. Savitzky, A.; Golay, M.J.E. Smoothing and differentiation of data by simplified least squares procedures. Anal. Chem. 1964, 36, 1627–1639.
26. Press, W.H.; Teukolsky, S.A. Savitzky–Golay smoothing filters. Comput. Phys. 1990, 4, 669.
27. Tsai, F.; Philpot, W. Derivative analysis of hyperspectral data. Remote Sens. Environ. 1998, 66, 41–51.
28. Horler, D.N.H.; Dockray, M.; Barber, J. The red edge of plant leaf reflectance. Int. J. Remote Sens. 1983, 4, 273–288.
29. Boochs, F.; Kupfer, G.; Dockter, K.; Kühbauch, W. Shape of the red edge as vitality indicator for plants. Int. J. Remote Sens. 1990, 11, 1741–1753.
30. Filella, I.; Peñuelas, J. The red edge position and shape as indicators of plant chlorophyll content, biomass and hydric status. Int. J. Remote Sens. 1994, 15, 1459–1470.
31. Pearson, K. On lines and planes of closest fit to systems of points in space. Lond. Edinb. Dublin Philos. Mag. J. Sci. 1901, 2, 559–572.
32. Hotelling, H. Analysis of a complex of statistical variables into principal components. J. Educ. Psychol. 1933, 24, 417–441.
33. Hotelling, H. Relations between two sets of variates. Biometrika 1936, 28, 321.
34. Liu, N.; Xing, Z.; Zhao, R.; Qiao, L.; Li, M.; Liu, G.; Sun, H. Analysis of chlorophyll concentration in potato crop by coupling continuous wavelet transform and spectral variable optimization. Remote Sens. 2020, 12, 2826.
35. Zhao, L.; Li, Q.; Zhang, Y.; Wang, H.; Du, X. Integrating the continuous wavelet transform and a convolutional neural network to identify vineyards using time series satellite images. Remote Sens. 2019, 11, 2641.
36. Zhang, J.; Sun, H.; Gao, D.; Qiao, L.; Liu, N.; Li, M.; Zhang, Y. Detection of canopy chlorophyll content of corn based on continuous wavelet transform analysis. Remote Sens. 2020, 12, 2741.
37. Mountrakis, G.; Im, J.; Ogole, C. Support vector machines in remote sensing: A review. ISPRS J. Photogramm. Remote Sens. 2011, 66, 247–259.
38. Vapnik, V. Estimation of Dependences Based on Empirical Data; Springer Science & Business Media: Berlin, Germany, 2006.
39. Atkinson, P.M.; Tatnall, A.R.L. Introduction: Neural networks in remote sensing. Int. J. Remote Sens. 1997, 18, 699–709.
40. Elanayar, S.; Shin, Y.C. Radial basis function neural network for approximation and estimation of nonlinear stochastic dynamic systems. IEEE Trans. Neural Netw. 1994, 5, 594–603.
Figure 1. Location of the study area in Yunnan, China.
Figure 2. Example canopy photos.
Figure 3. UAV hyperspectral imagery collection: (a) the UAV on the ground; (b) radiometric calibration; (c) the UAV in operation.
Figure 4. Eigenvalues, contribution rates, and cumulative contribution rate of the top ten principal components.
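For reference, the quantities plotted in Figure 4 follow directly from an eigendecomposition of the band covariance matrix. A minimal Python sketch, assuming the canopy spectra are arranged as a samples × bands matrix (the function and variable names are illustrative, not from the study):

```python
import numpy as np

def pca_contribution_rates(X, n_top=10):
    """Eigenvalues and (cumulative) contribution rates of the leading PCs.

    X: samples x bands matrix of canopy reflectance (illustrative input).
    """
    Xc = X - X.mean(axis=0)                  # center each band
    cov = np.cov(Xc, rowvar=False)           # bands x bands covariance matrix
    eigvals = np.linalg.eigvalsh(cov)[::-1]  # eigenvalues, descending order
    rates = eigvals / eigvals.sum()          # contribution rate of each PC
    return eigvals[:n_top], rates[:n_top], np.cumsum(rates)[:n_top]
```

The contribution rate of each component is its eigenvalue divided by the eigenvalue sum; the cumulative rate is the running total shown in Figure 4.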
Figure 5. Characteristic analysis and correlation analysis of the CWT features.
Figure 6. Comparison of accuracy of different features and discrimination methods on the training samples (n = 84).
Figure 7. Comparison of accuracy of different features and discrimination methods on the validation samples (n = 36).
Figure 8. Training accuracy of different classification algorithms (n = 84).
Figure 9. Validation accuracy of different classification methods (n = 36).
Figure 10. Needle spectral characteristics of Pinus yunnanensis with different degrees of damage by Tomicus spp. (a) Spectral reflectance of needles with different damage degrees; (b) first derivative spectra of needles with different damage degrees.
Figure 11. Correlation coefficients between damage degree and the needle spectra of Pinus yunnanensis attacked by Tomicus spp. (a) Correlation coefficients between damage degree and needle spectral reflectance; (b) correlation coefficients between damage degree and the first derivative spectra of the needles.
Figure 12. Comparison of sensitive bands in the spectral reflectance and first derivative spectra of Pinus yunnanensis needles with different degrees of damage by Tomicus spp. (a) Needle spectral reflectance; (b) needle first derivative spectra.
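The first-derivative spectra in Figures 10–12 are conventionally computed on smoothed reflectance, for example with a Savitzky–Golay derivative filter [25,26,27]. A minimal sketch, assuming a constant band interval; the window length and polynomial order below are illustrative defaults, not the settings reported in this study:

```python
from scipy.signal import savgol_filter

def first_derivative(reflectance, band_interval_nm=4.0):
    """Smoothed first-derivative spectrum (per nm) via a Savitzky-Golay filter.

    Window length and polynomial order are illustrative choices.
    """
    return savgol_filter(reflectance, window_length=9, polyorder=2,
                         deriv=1, delta=band_interval_nm)
```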
Table 1. Damage degree of Pinus yunnanensis (Franch.) by Tomicus spp. and the canopy characteristics.

| Damage Degree | DSR (%) | Canopy Characteristics |
|---|---|---|
| Health | <10 | The tree crown is normal |
| Mild Damage | 10~20 | A few damaged shoots begin to turn yellow or reddish brown |
| Moderate Damage | 21~50 | A few damaged shoots turn reddish brown |
| Severe Damage | >51 | Damaged shoots are reddish brown or gray |
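Read as a decision rule, Table 1 maps the damaged shoot ratio (DSR) of a crown to one of four classes. A sketch of that mapping in Python (the handling of values falling between 20% and 21% and between 50% and 51% is a simplification of the table as printed):

```python
def damage_degree(dsr_percent: float) -> str:
    """Classify a crown by its damaged shoot ratio (DSR, %) per Table 1."""
    if dsr_percent < 10:
        return "Health"
    elif dsr_percent <= 20:
        return "Mild Damage"
    elif dsr_percent <= 50:
        return "Moderate Damage"
    else:
        return "Severe Damage"
```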
Table 2. Detailed technical parameters of the SOC710VP imaging spectrometer and the airborne hyperspectral S185 imager.

| SOC710VP Parameter | Value | S185 Parameter | Value |
|---|---|---|---|
| Spectral range | 400~1000 nm | Spectral range | 450~950 nm |
| Objective lens height and f-number | 50 cm, f/5.6 | Sampling interval | 4 nm |
| Spectral resolution | 4.68 nm | Spectral resolution | 8 nm @ 532 nm |
| Channels | 128 | Channels | 125 |
| Exposure time | 35 ms | Measurement time | 0.1~1000 ms |
| Imaging speed | 32 cubes/s | Hyperspectral imaging speed | 5 cubes/s |
| Dynamic range | 12 bit | Dynamic range | 12 bit |
Table 3. Summary of experimental data.

| Sample Type | Quantity | Sampling Method |
|---|---|---|
| DSR collection | 120 samples | Manual visual counting |
| Damaged needles | 80 samples | Cut branches of sampled trees |
| Canopy photos | 120 samples | UAV-mounted Z30 camera |
| Needle hyperspectral data | 80 samples | SOC710VP |
| Airborne hyperspectral imagery | 6.67 km² | UHD S185 |
| Airborne hyperspectral canopy spectra | 120 samples | Manual delineation |
Table 4. Vegetation indices and spectral features.

| Variable Category | Parameter | Definition and Formula |
|---|---|---|
| Vegetation indices | NDVI | Normalized difference vegetation index; NDVI = (R(760~850) − R(650~670))/(R(760~850) + R(650~670)) |
| | NDVI705 | NDVI705 = (R750 − R705)/(R750 + R705) |
| | RVI | Ratio vegetation index; RVI = R_NIR/R_RED |
| | GI | GI = R544/R677 |
| | DVI | Difference vegetation index; DVI = R_NIR − R_RED |
| | PRI | PRI = (R531 − R570)/(R531 + R570) |
| | PSRI | PSRI = (R682 − R498)/R749 |
| | TVI | TVI = 0.5[120(R750 − R550) − 200(R670 − R550)] |
| | SAVI | SAVI = R800/(R700 + R800) |
| | YI | YI = (R580 − 2R630 + R680)/2500 |
| Location parameters | Rg | Green peak: the maximum reflectance in the wavelength range 510~560 nm |
| | Rr | Red valley: the minimum reflectance in the wavelength range 640~680 nm |
| Position parameters | Db | The maximum first-derivative value in the blue edge region (490~530 nm) |
| | Dy | The maximum first-derivative value in the yellow edge region (550~582 nm) |
| | Dr | The maximum first-derivative value in the red edge region (680~780 nm) |
| | Dnir | The maximum first-derivative value in the near-infrared region (780~1300 nm) |
| Area parameters | SDb | The sum of first-derivative values in the blue edge region (490~530 nm) |
| | SDy | The sum of first-derivative values in the yellow edge region (550~582 nm) |
| | SDr | The sum of first-derivative values in the red edge region (680~780 nm) |
| | SDnir | The sum of first-derivative values in the near-infrared region (780~1300 nm) |
| Characteristic parameters of vegetation indices | Rg/Rr | The ratio of green peak reflectance (Rg) to red valley reflectance (Rr) |
| | SDr/SDb | The ratio of SDr to SDb |
| | SDr/SDy | The ratio of SDr to SDy |
| | SDnir/SDb | The ratio of SDnir to SDb |
| | SDnir/SDr | The ratio of SDnir to SDr |
| | (Rg − Rr)/(Rg + Rr) | Normalized value of Rg and Rr |
| | (SDr − SDb)/(SDr + SDb) | Normalized value of SDr and SDb |
| | (SDr − SDy)/(SDr + SDy) | Normalized value of SDr and SDy |
| | (SDnir − SDb)/(SDnir + SDb) | Normalized value of SDnir and SDb |
| | (SDnir − SDr)/(SDnir + SDr) | Normalized value of SDnir and SDr |
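To make the Table 4 definitions concrete, the sketch below computes a few of the listed variables from a single spectrum. The nearest-band lookup and band-averaging helpers are illustrative simplifications; the study's exact band handling may differ:

```python
import numpy as np

def table4_variables(wl, refl):
    """Selected Table 4 variables from wavelengths wl (nm) and reflectance refl."""
    R = lambda nm: refl[np.argmin(np.abs(wl - nm))]      # nearest-band reflectance
    band = lambda lo, hi: refl[(wl >= lo) & (wl <= hi)].mean()
    ndvi = (band(760, 850) - band(650, 670)) / (band(760, 850) + band(650, 670))
    pri = (R(531) - R(570)) / (R(531) + R(570))
    tvi = 0.5 * (120 * (R(750) - R(550)) - 200 * (R(670) - R(550)))
    d = np.gradient(refl, wl)                            # first-derivative spectrum
    red_edge = (wl >= 680) & (wl <= 780)
    d_r, sd_r = d[red_edge].max(), d[red_edge].sum()     # Dr and SDr
    return {"NDVI": ndvi, "PRI": pri, "TVI": tvi, "Dr": d_r, "SDr": sd_r}
```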
Table 5. Results of variance analysis of the vegetation indices and hyperspectral characteristic parameters of Pinus yunnanensis canopies with different degrees of damage.

| Name | F-Value | p | Name | F-Value | p |
|---|---|---|---|---|---|
| NDVI | 19.368 | 0.000 | Dnir | 5.453 | 0.002 |
| NDVI705 | 15.137 | 0.000 | SDb | 18.740 | 0.000 |
| RVI | 21.486 | 0.000 | SDy | 20.178 | 0.000 |
| GI | 22.854 | 0.000 | SDr | 30.361 | 0.000 |
| DVI | 29.059 | 0.000 | SDnir | 22.676 | 0.000 |
| PRI | 9.703 | 0.000 | Rg/Rr | 22.214 | 0.000 |
| PSRI | 13.936 | 0.000 | SDr/SDb | 1.335 | 0.266 |
| TVI | 31.099 | 0.000 | SDr/SDy | 1.599 | 0.193 |
| SLAVI | 14.577 | 0.000 | SDnir/SDb | 2.838 | 0.041 |
| YI | 6.665 | 0.000 | SDnir/SDr | 1.837 | 0.144 |
| Rg | 6.344 | 0.001 | (Rg − Rr)/(Rg + Rr) | 20.947 | 0.000 |
| Rr | 5.332 | 0.002 | (SDr − SDb)/(SDr + SDb) | 1.455 | 0.231 |
| Db | 14.462 | 0.000 | (SDr − SDy)/(SDr + SDy) | 8.570 | 0.000 |
| Dy | 3.745 | 0.013 | (SDnir − SDb)/(SDnir + SDb) | 3.262 | 0.024 |
| Dr | 30.868 | 0.000 | (SDnir − SDr)/(SDnir + SDr) | 1.134 | 0.338 |
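The F-values and p-values in Table 5 are one-way ANOVA statistics comparing each feature across the four damage classes. A minimal reproduction sketch (the array names and class labels are placeholders for the study's own data):

```python
from scipy.stats import f_oneway

def anova_by_damage(feature, labels,
                    classes=("Health", "Mild", "Moderate", "Severe")):
    """One-way ANOVA F and p for a single feature across damage classes.

    feature: 1-D NumPy array of a Table 5 variable (e.g., NDVI) per crown.
    labels:  1-D NumPy array of damage-class names per crown.
    """
    groups = [feature[labels == c] for c in classes]
    return f_oneway(*groups)  # F-value and p-value, as tabulated in Table 5
```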
Table 6. CWT characteristics and collinearity statistics.

| Number | Scale | λ (nm) | Sig. | VIF | Number | Scale | λ (nm) | Sig. | VIF |
|---|---|---|---|---|---|---|---|---|---|
| WF1 | 3 | 822 | 0.000 | 8.725 | WF9 | 1 | 678 | 0.000 | 1.183 |
| WF2 | 3 | 854 | 0.000 | 1.631 | WF10 | 4 | 662 | 0.007 | 2.165 |
| WF3 | 2 | 542 | 0.000 | 1.643 | WF11 | 2 | 910 | 0.000 | 2.059 |
| WF4 | 2 | 626 | 0.000 | 1.758 | WF12 | 6 | 474 | 0.000 | 1.964 |
| WF5 | 3 | 630 | 0.000 | 2.284 | WF13 | 1 | 714 | 0.000 | 5.072 |
| WF6 | 2 | 474 | 0.000 | 4.355 | WF14 | 1 | 930 | 0.000 | 3.712 |
| WF7 | 6 | 614 | 0.000 | 1.472 | WF15 | 3 | 462 | 0.000 | 1.823 |
| WF8 | 5 | 862 | 0.000 | 1.366 | WF16 | 5 | 838 | 0.008 | 1.663 |
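Each WF feature in Table 6 is a continuous wavelet coefficient at a particular decomposition scale and wavelength, screened for multicollinearity with the variance inflation factor (VIF). The sketch below extracts coefficients at such (scale, wavelength) pairs and computes VIFs; the Mexican-hat mother wavelet and the scale set are assumptions, not the configuration reported in the paper:

```python
import numpy as np
import pywt
from statsmodels.stats.outliers_influence import variance_inflation_factor

SCALES = (1, 2, 3, 4, 5, 6)  # assumed decomposition scales

def cwt_features(wl, refl, pairs):
    """Wavelet coefficients of one spectrum at (scale, wavelength-nm) pairs."""
    coefs, _ = pywt.cwt(refl, scales=np.array(SCALES), wavelet="mexh")
    return np.array([coefs[SCALES.index(s), np.argmin(np.abs(wl - nm))]
                     for s, nm in pairs])

def vif_per_feature(X):
    """Variance inflation factor of each column of X (samples x features)."""
    return [variance_inflation_factor(X, j) for j in range(X.shape[1])]
```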
Table 7. Comparison of the best monitoring windows of different damage degrees. All values are wavelengths (nm) of the optimal spectral monitoring window.

| Damage Degree | Spectral Reflectance of Needle | Spectral Reflectance of Canopy | First Derivative of Needle | First Derivative of Canopy |
|---|---|---|---|---|
| Health/Mild Damage | 781, 775, 786, 791, 770, 796, 801, 765 | 750, 746, 754, 794, 758, 798, 762, 782 | 718, 723, 713, 728, 708, 734, 739, 702 | 714, 710, 718, 706, 722, 702, 726, 698 |
| Health/Moderate Damage | 781, 786, 775, 791, 796, 770, 828, 801 | 770, 774, 758, 778, 766, 754, 762, 782 | 723, 718, 728, 713, 734, 739, 708, 744 | 714, 718, 710, 722, 706, 726, 730, 702 |
| Health/Severe Damage | 781, 786, 775, 791, 770, 796, 801, 828 | 778, 782, 774, 790, 786, 770, 758, 794 | 723, 718, 713, 728, 734, 708, 739, 702 | 714, 718, 710, 706, 722, 702, 726, 698 |
| Mild/Moderate Damage | 786, 781, 791, 828, 823, 833, 775, 796 | 774, 770, 766, 778, 786, 762, 758, 782 | 723, 728, 718, 734, 713, 739, 744, 708 | 722, 718, 726, 714, 886, 730, 710, 890 |
| Mild/Severe Damage | 781, 786, 775, 791, 796, 770, 801, 828 | 778, 774, 782, 786, 790, 770, 766, 762 | 723, 718, 728, 713, 734, 739, 708, 744 | 718, 714, 710, 722, 706, 702, 726, 698 |
| Moderate/Severe Damage | 833, 828, 838, 786, 823, 844, 781, 791 | 790, 802, 794, 806, 778, 782, 798, 786 | 718, 723, 713, 728, 708, 734, 702, 839 | 714, 710, 706, 718, 702, 698, 722, 694 |
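The eight-band monitoring windows in Table 7 are the bands most sensitive to each pairwise class contrast. One plausible way to derive such a ranking is by the absolute correlation between a binary class label and each band, as sketched below; the ranking statistic is an assumption, and the paper may use a different criterion:

```python
import numpy as np

def top_bands(wl, spectra, labels, class_a, class_b, k=8):
    """Rank bands by |correlation| with a binary damage-class contrast."""
    mask = np.isin(labels, (class_a, class_b))
    y = (labels[mask] == class_b).astype(float)    # binary class contrast
    X = spectra[mask]                              # samples x bands
    r = np.array([np.corrcoef(X[:, j], y)[0, 1] for j in range(X.shape[1])])
    return wl[np.argsort(-np.abs(r))[:k]]          # top-k wavelengths (nm)
```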