Article

Enhancing Winter Wheat Soil–Plant Analysis Development Value Prediction through Evaluating Unmanned Aerial Vehicle Flight Altitudes, Predictor Variable Combinations, and Machine Learning Algorithms

1 Jiangsu Key Laboratory of Crop Genetics and Physiology/Jiangsu Key Laboratory of Crop Cultivation and Physiology, Agricultural College of Yangzhou University, Yangzhou 225009, China
2 Jiangsu Co-Innovation Center for Modern Production Technology of Grain Crops, Yangzhou University, Yangzhou 225009, China
3 College of Life and Health Sciences, Anhui Science and Technology University, Chuzhou 233100, China
4 Joint International Research Laboratory of Agriculture and Agricultural Product Safety, Yangzhou University, Yangzhou 225009, China
* Author to whom correspondence should be addressed.
Plants 2024, 13(14), 1926; https://doi.org/10.3390/plants13141926
Submission received: 9 May 2024 / Revised: 11 June 2024 / Accepted: 9 July 2024 / Published: 12 July 2024

Abstract

Monitoring winter wheat Soil–Plant Analysis Development (SPAD) values using Unmanned Aerial Vehicles (UAVs) is an effective and non-destructive method. However, predicting SPAD values during the booting stage is less accurate than during other growth stages. Existing research on UAV-based SPAD value prediction has mainly focused on low-altitude flights of 10–30 m, neglecting the potential benefits of higher-altitude flights. This study evaluates predictions of winter wheat SPAD values during the booting stage using Vegetation Indices (VIs) derived from UAV images at six flight altitudes (20, 40, 60, 80, 100, and 120 m, using a DJI P4-Multispectral UAV as an example, with resolutions from 1.06 to 6.35 cm/pixel). Additionally, we compare the predictive performance of various predictor variables (VIs, Texture Indices (TIs), and Discrete Wavelet Transform (DWT) variables), individually and in combination. Four machine learning algorithms (Ridge, Random Forest, Support Vector Regression, and Back Propagation Neural Network) are employed. The results demonstrate a prediction performance using UAV images at 120 m (with a resolution of 6.35 cm/pixel) comparable to that using images at 20 m (with a resolution of 1.06 cm/pixel). This finding significantly improves the efficiency of UAV monitoring, since flying UAVs at higher altitudes yields greater coverage, reducing the time needed for scouting at the same heading and side overlap rates. The overall trend in prediction accuracy is as follows: VIs + TIs + DWT > VIs + TIs > VIs + DWT > TIs + DWT > TIs > VIs > DWT. The VIs + TIs + DWT set adds frequency information (DWT), compensating for the limitations of the VIs + TIs set. This study enhances the effectiveness of using UAVs in agricultural research and practice.

1. Introduction

The SPAD (Soil–Plant Analysis Development) value represents the relative chlorophyll content and is significant in crop cultivation and breeding to evaluate crops’ photosynthetic capacity and nutritional health. It provides important indicators for rapid fertilization diagnoses and crop variety screenings [1,2,3]. Winter wheat is one of the vital staple crops in China, and is essential for maintaining national food security and driving economic development [4]. Therefore, the accurate and efficient monitoring of winter wheat SPAD values holds immense importance.
Advancements in remote sensing (RS) technology have led to numerous studies confirming that monitoring winter wheat SPAD values through RS is the most effective and non-destructive method available [5,6,7]. In particular, optical sensors carried by unmanned aerial vehicles (UAVs) can obtain RS images with a fine spatial (cm level) and spectral resolution. They can adjust flight altitude and coverage area according to specific requirements, providing detailed spectral and spatial information on winter wheat [8]. When using a fixed focal length for UAV image acquisition, the spatial resolution decreases with the increase in UAV flight altitude. Researchers generally believe that a higher spatial resolution usually means more detailed information on winter wheat growth [9,10]. Therefore, when obtaining UAV images, there is a tendency to lower the flight altitude as much as possible [11,12,13]. For example, the widely used DJI Phantom 4 multispectral UAV (DJI, Inc., Shenzhen, China) typically sets the flight altitude at 10–30 m (with a resolution of 0.52–1.59 cm/pixel) in studies predicting winter wheat SPAD values [14,15,16]. However, a higher spatial resolution requires lower UAV flight altitude, often resulting in a longer image acquisition time.
At present, the primary limitation of UAVs is flight duration, which is governed by the trade-off between battery capacity and battery weight [17]. An ordinary UAV can usually operate safely for about 10–20 min, and recharging becomes necessary if the battery’s charge drops below 10% [18]. Since UAVs typically need to hover to capture images, a lower flight altitude means more hovering points, significantly increasing the flight time and making it difficult to collect more field images with limited batteries. Moreover, a longer flight time increases the likelihood of encountering lighting changes. While the utilization of UAVs for crop monitoring is growing swiftly, a significant challenge arises from the varying illumination caused by fluctuating solar radiation and cloud cover. The incident spectral irradiance captured by UAV-mounted sensors blends plant properties with solar spectral irradiance. Consequently, image data acquired under variable illumination can yield misleading crop information [19]. For example, vegetation indices (VIs) derived from UAV images for crop monitoring and phenotyping can be affected by these variations. Discrepancies observed in these image data may stem from genuine crop variability or from changing lighting conditions. Although certain VIs are less affected by brightness, they are inadequate for handling variable sunlight, given that cloud cover both alters brightness and modifies the illumination’s spectral attributes.
Previous studies have begun to explore whether crop parameters can be effectively predicted at higher flight altitudes. For example, Xu et al. [20] collected original images at a flight altitude of 200 m, employing a DJI M600Pro UAV equipped with a Rikola hyperspectral camera (Senop Ltd., Oulu, Finland). They resampled the original images to multiple spatial resolutions (26, 39, 52, 65, 78, 91, and 100 cm/pixel) to simulate images collected at various higher flight altitudes, exploring the “appropriate monitoring scale domain” for predicting the above-ground biomass (AGB) of rice. However, predicting SPAD values (physiological parameters of crops) at different flight altitudes clearly differs from predicting AGB (a morphological characteristic of crops) at different flight altitudes. More importantly, according to the “Interim Regulations on the Management of Unmanned Aircraft Flights”, implemented in China on 1 January 2024, the maximum altitude in light and small flight areas is capped at 120 m [21]. The study by Xu et al. [20], based on original images collected at a UAV flight altitude of 200 m, therefore has limited practical value within China. Accordingly, the current study should explore the highest flight altitude within 120 m (using a DJI P4-Multispectral UAV as an example, for which 120 m corresponds to a resolution of 6.35 cm/pixel) that can still accurately estimate the SPAD values of winter wheat. This will facilitate rapid fertilization diagnosis in large-scale farmland and efficient variety selection in breeding fields with a large number of experimental plots.
Moreover, the booting stage is a stage in which the vegetative and reproductive growth of winter wheat occur simultaneously, exerting the most significant influence on final yield and quality [22]. In the Yangtze River’s middle and lower reaches and the Huang-Huai-Hai Plain in China, the sowing period of winter wheat usually runs from mid-September to late November; the tillering stage typically lasts from early December to early March of the following year (duration ≈ 100 days); the jointing–booting stage usually lasts from mid-March to early April (duration ≈ 30 days); the heading stage typically lasts from mid-April to early May (duration ≈ 25 days); and the maturity stage usually lasts from early May to late May (duration ≈ 20 days) [23,24]. The booting stage of winter wheat thus typically occurs in late March or early April. This stage is the peak period of photosynthesis and nutrient absorption in winter wheat: plants require large amounts of nutrients and water to support growth and development, as well as the formation of spikes and grains. During the booting stage, winter wheat is also less resistant to stress, and drought, high temperatures, pests, and diseases can significantly affect growth, development, and yield formation [25]. Therefore, the timely and efficient monitoring of SPAD values during winter wheat booting is crucial to ensuring final yield. In previous studies, spectral indices composed of linear or nonlinear combinations of spectral reflectances at various bands were the most commonly used method for predicting SPAD values during the wheat booting stage [22,26]. However, several studies have reported that the accuracy of SPAD value prediction during the winter wheat booting stage is lower than during other growth stages [27,28]. Yin et al. [16] concluded that, compared to the other growth stages, the model developed for predicting winter wheat booting stage SPAD values exhibits underestimation issues. Wang et al. [26] reported that the accuracy of SPAD value prediction varied significantly across growth stages, improving in the following sequence: booting stage < heading stage < milk filling stage < flowering stage.
Optical RS, as a passive RS method, often faces saturation and insufficient sensitivity issues when using VIs to predict SPAD values in the reproductive growth stage of winter wheat [29]. Moreover, spectral heterogeneity, where weak plants within high-density areas and strong plants within low-density areas exhibit similar spectral characteristics, further restricts the efficacy of VIs [30]. Therefore, predicting SPAD values during the winter wheat booting stage (a stage where nutritional and reproductive growth occur simultaneously) using VIs may lead to significant uncertainty. To overcome the limitations of VIs, researchers have begun to explore the potential of texture indices (TIs) in predicting the SPAD values of winter wheat. TIs describe the variability between target pixels and their neighboring pixels, offering insights into vegetation’s spatial dimension and reflecting the canopy structure. TIs improve the ability to detect subtle changes in canopy structure compared to VIs. Yin et al. [16] demonstrated the potential of TIs in predicting the SPAD values of winter wheat. Additionally, the fusion of VIs and TIs can improve the accuracy of the estimated SPAD values of winter wheat during the booting stage compared to using VIs or TIs alone. Nevertheless, the improvement in SPAD value predictions during the winter wheat booting stage obtained through the fusion of VIs and TIs is still limited.
VIs convey the spectral characteristics of RS images, whereas TIs capture the spatial information within RS images. Wavelet variables obtained through discrete wavelet transform (DWT) capture the frequency and spectral details within RS images to some extent [31,32], thereby compensating for the limitations associated with using solely spectral or spatial variables. This is one of the reasons why we attempt to introduce wavelet variables to predict SPAD values during the winter wheat booting stage. DWT is an effective signal-processing technique that decomposes the original spectral signal into low-frequency and high-frequency signals [33,34,35], effectively separating useful information from noise and using existing information [36]. The extensive literature searches we conducted indicate that there is currently no research using DWT to predict crop SPAD values remotely.
In summary, this study aims to (1) assess whether higher flight altitudes (40 to 120 m, using a DJI P4-Multispectral UAV as an example, with a resolution from 2.12 to 6.35 cm/pixel) can accurately predict SPAD values during the winter wheat booting stage compared to a baseline altitude of 20 m (using a DJI P4-Multispectral UAV as an example, with a resolution of 1.06 cm/pixel); (2) assess the different potentials of VIs, TIs, and DWT in predicting SPAD values during the winter wheat booting stage; and (3) assess whether various combinations of predictor variables (VIs + DWT, TIs + DWT, VIs + TIs, and VIs + TIs + DWT) can enhance the prediction of SPAD values during the winter wheat booting stage.

2. Materials and Methods

2.1. Study Site and Experiment Design

The experiment was carried out at the Jingxian Farm in Jiangyan District, Taizhou City, Jiangsu Province, China (32°34′23.43″ N, 120°5′25.80″ E) during the winter wheat cultivation period of 2022–2023 (Figure 1). The experimental site is situated in a rice–wheat rotation zone within the Yangtze River’s middle and lower reaches, characterized by a subtropical climate. This region’s annual average rainfall and temperature are approximately 1185.7 mm and 16.7 °C, respectively.
Experiment 1 involved four winter wheat varieties: Yangmai22 (YM22), Yangmai25 (YM25), Yangmai39 (YM39), and Ningmai26 (NM26). Each variety included four nitrogen treatment groups: a control group (0 kg/ha, N0) and treatment groups with nitrogen application rates of 150 kg/ha (N10), 240 kg/ha (N16), and 330 kg/ha (N22). Based on the growth stage of the winter wheat, the nitrogen application regime was divided into basal fertilizer, tillering fertilizer, jointing fertilizer, and booting fertilizer in a ratio of 5:1:2:2. The experiment utilized a split-plot design, with main plots corresponding to the four nitrogen application rate treatments and subplots corresponding to the four winter wheat varieties. Each treatment combination was replicated three times, totaling 48 plots.
Experiment 2 involved two winter wheat varieties: YM22 and YM39. Each variety was subjected to four different nitrogen application methods (Figure 2): broadcasting (M1), furrow application (M2), and two types of spaced furrow application (M3 and M4). Both urea and resin-coated urea were used as nitrogen fertilizers at a rate of 240 kg/ha. The experiment also used a split-plot design, with the varieties as the main plots and fertilizer types as the subplots. Like Experiment 1, each combination was replicated three times, totaling 24 plots.
In total, the experimental field was divided into 72 plots. The first 48 plots (starting from the south side of the experimental area) were assigned to Experiment 1, and the subsequent 24 plots were assigned to Experiment 2. Phosphorus (P₂O₅) and potassium (K₂O) fertilizers were applied at a rate of 135 kg/ha each as basal fertilizer. Each plot was manually furrowed for sowing with a row spacing of 25 cm, covering an area of 12 m² per plot. The sowing date was 8 November 2022. The basic seedling density was 240 × 10⁴ plants/ha (28.8 × 10² plants per 12 m² plot).

2.2. Data Acquisition and Processing

2.2.1. UAV Image Acquisition and Preprocessing

The research used the DJI P4-Multispectral UAV to acquire multispectral RS data during the winter wheat booting stage, capturing spectral bands: red band (R): 650 nm ± 16 nm; green band (G): 560 nm ± 16 nm; blue band (B): 450 nm ± 16 nm; near-infrared band (NIR): 840 nm ± 26 nm; Rededge (RE): 730 nm ± 16 nm. The data collection occurred at noon on 11 April 2023 (155 days after sowing (DAS)) under stable lighting conditions.
We employed the DJI GS Pro 2.0 iOS app (DJI, Inc., Shenzhen, China) to plan UAV flight missions and capture spectral images along predefined flight paths; high-definition RGB images were captured concurrently. The UAV flew at an altitude of 20 m (with a resolution of 1.06 cm/pixel), with the sensor lens oriented vertically downward, heading and side overlap rates of 80%, and a flight duration of 39 min. The collected radiometric calibration panels and multispectral images of the winter wheat booting stage were imported into DJI Terra 2.3 software (DJI, Inc., Shenzhen, China) for image processing, resulting in the synthesis of original UAV images (20 m) for the winter wheat booting stage.
Subsequently, the original images with a flight altitude of 20 m (with a resolution of 1.06 cm/pixel) were resampled to multiple spatial resolutions in ENVI 5.6 software (ITT Exelis; Boulder, CO, USA) to simulate RS images captured at multiple UAV flight altitudes. The resampling was performed using the nearest neighbor algorithm, which selects the nearest pixel values to interpolate the original pixels to multiple sizes, ensuring grayscale recombination within the image [37].
In resampling, nearest neighbor interpolation assigns the pixel values of each point in the target image to the closest points in the source image, ensuring that mixed pixels are not generated [38]. This method does not modify the numerical value of the pixels, referred to as the digital number, and is widely used for resampling because of the speed with which it can be implemented and its sheer simplicity [39,40]. The study resampled the original UAV images to resolutions corresponding to flight altitudes of 40 m (with a resolution of 2.12 cm/pixel), 60 m (with a resolution of 3.18 cm/pixel), 80 m (with a resolution of 4.23 cm/pixel), 100 m (with a resolution of 5.29 cm/pixel), and 120 m (with a resolution of 6.35 cm/pixel). This was performed to align with the regulations of China’s Interim Measures for the Management of Unmanned Aircraft Flights [21], which came into effect on 1 January 2024, setting the upper limit for flights in light and small airspaces at a true altitude of 120 m.
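To make the resampling step concrete, the following is a minimal numpy sketch of nearest-neighbor downsampling; the study itself performed this step in ENVI 5.6, so the function name and usage here are illustrative assumptions rather than the exact implementation.

```python
import numpy as np

def resample_nearest(band: np.ndarray, scale: float) -> np.ndarray:
    """Nearest-neighbor resampling of one band by `scale` (< 1 coarsens).
    Pixel values are copied, not interpolated, so no mixed pixels arise."""
    h, w = band.shape
    new_h, new_w = max(1, round(h * scale)), max(1, round(w * scale))
    # Map each target pixel back to the nearest source pixel.
    rows = np.minimum((np.arange(new_h) / scale).astype(int), h - 1)
    cols = np.minimum((np.arange(new_w) / scale).astype(int), w - 1)
    return band[np.ix_(rows, cols)]

# Simulate the 120 m image (6.35 cm/pixel) from the 20 m image (1.06 cm/pixel):
# band_120m = resample_nearest(band_20m, scale=1.06 / 6.35)
```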

2.2.2. In Situ Wheat SPAD Measurements

During UAV data collection, simultaneous field measurements of SPAD values were conducted on 11 April 2023. We employed the SPAD-502Plus handheld chlorophyll meter (Konica Minolta, Tokyo, Japan) to measure 72 plots within the study area. The main specifications of the SPAD-502Plus handheld chlorophyll meter can be found in Table 1 [41]. The maximum temperature on the day of field SPAD value data collection was 27 °C, the minimum temperature was 11 °C, and there was no condensation, complying with the usage specifications of the SPAD-502Plus handheld chlorophyll meter, ensuring accurate SPAD value data acquisition.
A five-point sampling method was employed within each plot. One sampling point was located at the center of the plot, and the remaining four points were positioned near the four corners. At each sampling point, 10 flag leaves were randomly selected, giving a total of 50 flag leaves per plot. SPAD values were measured with the SPAD-502Plus handheld chlorophyll meter at three evenly spaced points (the top, middle, and base portions) of each flag leaf, avoiding the leaf stem, so that each leaf was measured three times. Subsequently, the average SPAD value of the 50 flag leaves was calculated as the field-measured SPAD value for that plot.

2.3. Acquisition of RS Variables

2.3.1. Selection of VIs

VIs amalgamate variations in reflectance across various wavelengths, thereby partially mitigating the impact of background factors on vegetation spectral properties. This process enhances the precision of expressing SPAD values using RS data [42]. In this experiment, the spectral reflectance (R, G, B, NIR, Rededge) of 72 plots was extracted in ENVI 5.6 software, and VIs (Table 2) were constructed through linear or nonlinear combinations.
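As an illustration of how such VIs are constructed from the extracted band reflectances (Table 2 itself is not reproduced here), the sketch below computes a few indices that appear later in the variable selection; the formulas are the standard published definitions, and the WDRVI weighting coefficient is an assumed value.

```python
import numpy as np

def vegetation_indices(r, g, nir, a=0.12):
    """A few standard VIs from band reflectance arrays (r, g, nir).
    `a` is the WDRVI weighting coefficient (assumed here; commonly 0.1-0.2)."""
    eps = 1e-10                                   # guard against division by zero
    return {
        "NDVI":  (nir - r) / (nir + r + eps),     # normalized difference VI
        "RVI":   nir / (r + eps),                 # ratio VI
        "GRVI":  (g - r) / (g + r + eps),         # green-red VI
        "WDRVI": (a * nir - r) / (a * nir + r + eps),
    }
```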

2.3.2. Extraction of TIs

The Gray-Level Co-occurrence Matrix (GLCM), reported by Haralick in 1973 [56], stands out as the most widely adopted texture extraction method. Its popularity stems from properties such as rotation invariance, multiscale applicability, and computational efficiency [57]. Our study extracted eight GLCM-TIs from the original spectral band images of UAV multispectral data in ENVI. The extraction process employed a window size of 7 × 7 and an offset of (2, 2), generating 40 TIs (eight per band) across the five original spectral bands.
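A hedged sketch of GLCM texture extraction is given below using scikit-image instead of ENVI; the gray-level quantization and the distance/angle pair approximating the (2, 2) offset are assumptions, and the statistics are computed over a whole plot image rather than a 7 × 7 moving window.

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops

def glcm_textures(band: np.ndarray, levels: int = 32) -> dict:
    """GLCM texture measures for one spectral band (illustrative sketch)."""
    # Quantize reflectance to `levels` gray levels; GLCM needs integer input.
    bins = np.linspace(band.min(), band.max(), levels)
    q = (np.digitize(band, bins) - 1).astype(np.uint8)
    # Distance 2 at 45 degrees roughly corresponds to a (2, 2) pixel shift.
    glcm = graycomatrix(q, distances=[2], angles=[np.pi / 4],
                        levels=levels, symmetric=True, normed=True)
    props = ["contrast", "dissimilarity", "homogeneity",
             "energy", "correlation", "ASM"]
    out = {p: float(graycoprops(glcm, p)[0, 0]) for p in props}
    # GLCM mean (one of the TIs selected later), computed from the matrix.
    P = glcm[:, :, 0, 0]
    out["mean"] = float((np.arange(levels)[:, None] * P).sum())
    return out
```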

2.3.3. Extraction of DWT

DWT is a signal processing technique that decomposes a signal into frequency components of varying scales [58]. Unlike traditional transform methods such as Fourier Transform, DWT provides both time and frequency domain information simultaneously, making it advantageous in processing non-stationary signals and extracting local variables.
DWT decomposes a signal into different scales using a set of basic functions (wavelets). In the decomposition stage, the signal undergoes separation into approximation coefficients and detail coefficients across various frequency ranges [33]. The approximation coefficients encapsulate the overall trend and low-frequency components of the signal, whereas the detail coefficients capture specific local details and high-frequency components. This decomposition allows for signal frequency characteristics to be analyzed at different scales, leading to a better understanding of the signal structure and variables. DWT finds wide application across signal processing, image processing, data compression, pattern recognition, and other fields [59]. In image processing, DWT is used for tasks such as image compression, denoising, and variable extraction [60]. The study selected the bior 1.3 wavelet basis function for decomposition, as illustrated in Figure 3. After DWT’s application to the original single-band images, four sub-images are obtained: approximate sub-image (LL), horizontal detail sub-image (LH), vertical detail sub-image (HL), and diagonal detail sub-image (HH). Transforming each single-band image of UAV multispectral imagery into wavelets resulted in 20 discrete wavelet variables being calculated.
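The decomposition step can be sketched with PyWavelets as follows; returning the mean absolute coefficient of each sub-image as a plot-level predictor is an assumption made for illustration, not necessarily the exact statistic used in this study.

```python
import numpy as np
import pywt

def dwt_variables(band: np.ndarray) -> dict:
    """One-level 2-D DWT of a single-band image with the bior1.3 wavelet.
    pywt.dwt2 returns the approximation (LL) and the horizontal (LH),
    vertical (HL), and diagonal (HH) detail sub-images."""
    LL, (LH, HL, HH) = pywt.dwt2(band, "bior1.3")
    return {name: float(np.mean(np.abs(sub)))
            for name, sub in (("LL", LL), ("LH", LH), ("HL", HL), ("HH", HH))}
```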

2.4. Variable Selection and Machine Learning Algorithms

This research employed Recursive Feature Elimination (RFE) for variable selection. RFE progressively reduces the size of the variable set until a certain number of variables is reached and optimal performance is achieved [61]. This approach helps to reduce overfitting, improve model generalization, and identify the most critical variables for model performance [62]. RFE was combined with cross-validation to enhance the robustness and reliability of variable selection. In this study, cross-validated RFE was implemented using the Random Forest (RF) estimator.
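A minimal scikit-learn sketch of cross-validated RFE with an RF estimator is shown below; the estimator settings and scoring metric are illustrative assumptions.

```python
from sklearn.ensemble import RandomForestRegressor
from sklearn.feature_selection import RFECV

# X: (n_plots, n_variables) matrix of RS variables; y: measured SPAD values.
selector = RFECV(
    estimator=RandomForestRegressor(n_estimators=500, random_state=42),
    step=1,                 # eliminate one variable per iteration
    cv=5,                   # 5-fold cross-validation, as in Section 2.5
    scoring="neg_root_mean_squared_error",
)
# selector.fit(X, y)
# X_selected = selector.transform(X)   # optimal variable subset
# n_optimal = selector.n_features_     # point read off the learning curve
```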
Furthermore, four machine-learning algorithms were employed to develop models for predicting the SPAD values. These algorithms include Ridge Regression, RF, Support Vector Regression (SVR), and Back Propagation Neural Network (BPNN). Each algorithm possesses unique strengths in SPAD value prediction, addressing data complexity, and handling nonlinear (or linear) relationships to improve prediction accuracy and stability.
Ridge Regression is a linear regression method used to handle cases where the number of variables exceeds the number of samples or where there is multicollinearity among variables [63]. It controls model complexity by adding an L2 regularization term to prevent overfitting. Ridge Regression can effectively handle multicollinearity and noise in the dataset, improving model generalization.
RF is a type of ensemble learning algorithm that utilizes multiple decision trees. Each tree is built by randomly selecting subsets of variables and samples. The predictions from these trees are then combined through voting or averaging to produce the final prediction [64]. RF is known for its robustness and ability to generalize, effectively modeling complex relationships within high-dimensional datasets. These characteristics make it a suitable choice for predicting winter wheat SPAD values.
SVR is a regression technique derived from support vector machines (SVM), aiming to identify the maximum margin hyperplane within a high-dimensional variable space specifically for regression purposes. It is suitable for modeling nonlinear data and can handle nonlinear relationships by choosing appropriate kernel functions [65]. SVR can effectively model nonlinear relationships and exhibits robustness against outliers in predicting winter wheat SPAD values.
BPNN trains the model using the backpropagation algorithm to adjust weights to minimize the loss function continuously. BPNN is suitable for complex nonlinear problems and can learn complex patterns and variables in the data [66]. In predicting winter wheat SPAD values, BPNN can flexibly capture the dataset’s nonlinear relationships and complex patterns.
For parameter optimization, this research used a combination of cross-validation and grid search to optimize parameter combinations within a given parameter space, thereby improving model performance and generalization [67], leading to better results in predicting the SPAD values.
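The tuning step might look like the following scikit-learn sketch; the parameter grids are illustrative assumptions, and MLPRegressor (a feed-forward network trained by backpropagation) stands in for the BPNN.

```python
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import Ridge
from sklearn.model_selection import GridSearchCV
from sklearn.neural_network import MLPRegressor
from sklearn.svm import SVR

models = {
    "Ridge": (Ridge(), {"alpha": [0.01, 0.1, 1.0, 10.0]}),
    "RF":    (RandomForestRegressor(random_state=42),
              {"n_estimators": [200, 500], "max_depth": [None, 5, 10]}),
    "SVR":   (SVR(kernel="rbf"),
              {"C": [1, 10, 100], "gamma": ["scale", 0.01, 0.1]}),
    "BPNN":  (MLPRegressor(max_iter=5000, random_state=42),
              {"hidden_layer_sizes": [(32,), (64, 32)], "alpha": [1e-4, 1e-3]}),
}

# Grid search with 5-fold cross-validation for each algorithm:
# best = {name: GridSearchCV(est, grid, cv=5,
#                            scoring="neg_root_mean_squared_error"
#                            ).fit(X_train, y_train)
#         for name, (est, grid) in models.items()}
```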

2.5. Dataset Splitting and Model Evaluation

The dataset was randomly divided into training and testing datasets in a ratio of 8:2, and K-fold (K = 5) cross-validation was employed to enhance the model’s generalization ability. The performance of the models was evaluated using four metrics: Coefficient of Determination (R2), Root Mean Square Error (RMSE), Relative Root Mean Square Error (RRMSE), and Ratio of Performance to Deviation (RPD). RPD aids in mitigating assessment biases arising from varying units or data scales [68].
The formulas of R2, RMSE, RRMSE, and RPD are presented in Equations (1)–(4):
$$R^2 = \frac{\sum_{i=1}^{n}\left(\hat{y}_i - \bar{y}\right)^2}{\sum_{i=1}^{n}\left(y_i - \bar{y}\right)^2} \quad (1)$$

$$\mathrm{RMSE} = \sqrt{\frac{\sum_{i=1}^{n}\left(\hat{y}_i - y_i\right)^2}{n}} \quad (2)$$

$$\mathrm{RRMSE} = \frac{\mathrm{RMSE}}{\bar{y}} \quad (3)$$

$$\mathrm{RPD} = \frac{\mathrm{SD}}{\mathrm{RMSE}} \quad (4)$$

where $y_i$ is the measured SPAD value of sample $i$; $\hat{y}_i$ is the predicted SPAD value of sample $i$; $\bar{y}$ is the mean of the measured SPAD values; $n$ is the number of samples; and $\mathrm{SD}$ is the standard deviation of the measured SPAD values.
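These metrics can be implemented directly, as in the sketch below; note that the SD entering the RPD is taken here as the standard deviation of the measured SPAD values, the conventional definition assumed in this reconstruction.

```python
import numpy as np

def evaluate(y_true: np.ndarray, y_pred: np.ndarray) -> dict:
    """R2, RMSE, RRMSE, and RPD as defined in Equations (1)-(4)."""
    y_bar = y_true.mean()
    rmse  = float(np.sqrt(np.mean((y_pred - y_true) ** 2)))
    r2    = float(np.sum((y_pred - y_bar) ** 2) / np.sum((y_true - y_bar) ** 2))
    return {"R2": r2,
            "RMSE": rmse,
            "RRMSE": rmse / y_bar,
            "RPD": float(np.std(y_true)) / rmse}
```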

3. Results

3.1. RS Variable Selection

In the RFE variable selection process, this study employed learning curves derived from RFE to identify the appropriate number of RS variables. The RFE variable importance rankings were employed to determine the optimal set for subsequent modeling.
Based on the RFE learning curves (Figure 4), the study identified the appropriate number of VIs at multiple UAV flight altitudes. At 20 m altitude (with a resolution of 1.06 cm/pixel), the appropriate number of VIs was identified as 13. At 40 m (with a resolution of 2.12 cm/pixel) and 60 m (with a resolution of 3.18 cm/pixel) altitudes, the appropriate number of VIs remained consistent at 12. At altitudes of 80 m (with a resolution of 4.23 cm/pixel), 100 m (with a resolution of 5.29 cm/pixel), and 120 m (with a resolution of 6.35 cm/pixel), the optimal number of VIs was 11. These optimal sets of VIs are listed in Table 3 and served as inputs for subsequent modeling. Across different altitudes, the optimal VIs selected for modeling include G, B, NIR, RVI, GRVI, TCARI/OSAVI, and WDRVI. Overall, the optimal number of selected VIs at different altitudes is roughly the same, but subtle differences exist in the specific VIs chosen. This suggests that VIs at different altitudes may exhibit slight variations in reflecting the growth status of winter wheat. Therefore, modeling and analysis should use the appropriate VIs selected for each altitude. This result further emphasizes the significance of screening RS variables at different altitudes to optimize model performance and accuracy.
The appropriate numbers of RS variables determined through the RFE variable selection learning curves (Figure 4 and Figure 5) under the different variable sets (VIs, TIs, DWT, VIs + TIs, VIs + DWT, TIs + DWT, and VIs + TIs + DWT) were 13, 29, 18, 32, 33, 58, and 57, respectively. Subsequently, optimal RS variable sets for the different variable combinations were determined based on the RFE variable importance ranking. In the TIs set, mean and correlation were selected as the optimal RS variables across different channels. Within the DWT set, LL and HH were chosen as the optimal RS variables across different channels. The specific RS variables selected in the VIs, TIs, and DWT sets are shown in Table 4, and those selected in the VIs + TIs and VIs + TIs + DWT sets are provided in Table 5. These results provide important clues for subsequent modeling and analysis, aiding in a deeper understanding of the relationship between SPAD values and various RS variables.

3.2. Development and Validation of Winter Wheat Booting Stage SPAD Value Prediction Models at Different UAV Flight Altitudes

In this study, we first examined the performance of predicting SPAD for winter wheat by using UAV (DJI P4-Multispectral UAV) images at higher flight altitudes of 40 m (with a resolution of 2.12 cm/pixel), 60 m (with a resolution of 3.18 cm/pixel), 80 m (with a resolution of 4.23 cm/pixel), 100 m (with a resolution of 5.29 cm/pixel), and 120 m (with a resolution of 6.35 cm/pixel), through a comparison with the prediction performance using images at a baseline altitude of 20 m (with a resolution of 1.06 cm/pixel). Four different machine learning algorithms, including Ridge, RF, SVR, and BPNN, were employed in this study. In this objective, we only used VIs as predictor variables.
The performance of models based on VIs combined with multiple machine-learning algorithms varied significantly at different UAV flight altitudes (Table 6). For instance, the Ridge model performed best at 60 m altitude (with a resolution of 3.18 cm/pixel), achieving an RPD of 2.2435. The RF model showed optimal performance at 20 m altitude (with a resolution of 1.06 cm/pixel) with an RPD of 1.8232. Both the SVR and BPNN models performed best at 40 m altitude (with a resolution of 2.12 cm/pixel), with RPD values of 2.0617 and 1.8388, respectively. Overall, the Ridge and SVR models exhibited superior accuracy in predicting winter wheat booting stage SPAD values at multiple UAV flight altitudes compared to the RF and BPNN models. In particular, the Ridge model developed at 60 m flight altitude (with a resolution of 3.18 cm/pixel) emerged as the optimal model for predicting the SPAD values based on VIs (with an R2 of 0.7821, RMSE of 1.4424, RRMSE of 0.0293, and RPD of 2.2435 on the test dataset). It is also noteworthy that at flight altitudes of 80 m (with a resolution of 4.23 cm/pixel) and 100 m (with a resolution of 5.29 cm/pixel), the Ridge model achieved RPD values of 2.1459 and 2.1545, respectively. At a flight altitude of 120 m (with a resolution of 6.35 cm/pixel), the SVR model achieved an RPD value of 2.0547.
Notably, at different flight altitudes, VIs can be combined with specific machine-learning algorithms to develop winter wheat booting stage SPAD value prediction models with a very good performance (RPD > 2.0). For example, at 120 m altitude (with a resolution of 6.35 cm/pixel), despite the slightly lower performance of the RF and BPNN models (with RPDs of 1.6635 and 1.7720 on the test set, respectively), the Ridge and SVR models still demonstrated an outstanding performance (with RPDs of 2.0237 and 2.0547 on the test set, respectively). This indicates that at a flight altitude of 120 m (with a resolution of 6.35 cm/pixel), UAV-based models combining VIs with certain machine learning methods can still deliver highly effective winter wheat booting stage SPAD value predictions.
To further analyze the effectiveness of winter wheat booting stage SPAD value prediction models developed based on VIs at multiple flight altitudes, Figure 6 presents scatter plots comparing measured SPAD values with the predicted SPAD values for all optimal models at multiple flight altitudes. The small errors observed between the predicted and measured values highlight the effectiveness of predicting winter wheat booting stage SPAD values using the developed models.

3.3. Development and Validation of Winter Wheat Booting Stage SPAD Value Prediction Models under Different Variable Combinations

In this study, we investigated and compared the SPAD prediction performance for winter wheat between using individual types of predictor variable and using various combinations of predictor variables. Three different types of predictor variables were used in this study, encompassing VIs, TIs, and DWT variables. The same four machine learning algorithms were employed for the prediction. In this objective, we only used the images at an altitude of 20 m (with a resolution of 1.06 cm/pixel).
The performance differences in the winter wheat booting stage SPAD value prediction models developed based on different types of predictor variable were obvious (Table 7). For the VIs set, the winter wheat booting stage SPAD value prediction model developed using the SVR model exhibited the best performance (with an R2 of 0.7635, RMSE of 1.5204, RRMSE of 0.0309, and RPD of 2.1284 on the test dataset). Similarly, within the TIs set, the prediction model developed using the SVR model demonstrated the best performance (with R2 of 0.7812, RMSE of 1.4623, RRMSE of 0.0297, and RPD of 2.2130 on the test dataset). For the DWT set, the prediction model developed using RF achieved the best performance (with R2 of 0.7023, RMSE of 1.7057, RRMSE of 0.0347, and RPD of 1.8972 on the test dataset). Overall, when developing winter wheat booting stage SPAD value prediction models using a single variable set, the overall accuracy ranking is TIs > VIs > DWT.
When combining multiple variable sets, the winter wheat booting stage SPAD value prediction model developed using SVR in the VIs + TIs set exhibited the best performance (with an R2 of 0.8148, RMSE of 1.3455, RRMSE of 0.0274, and RPD of 2.4050) on the test dataset. For the VIs + DWT set, the model developed using SVR also demonstrated the best performance (with an R2 of 0.7940, RMSE of 1.4189, RRMSE of 0.0288, and RPD of 2.2807) on the test dataset. For the TIs + DWT set, the model developed using SVR also demonstrated the best performance (with an R2 of 0.7909, RMSE of 1.4294, RRMSE of 0.0291, and RPD of 2.2639) on the test dataset. Similarly, in the VIs + TIs + DWT set, the model developed using SVR also demonstrated the best performance (with an R2 of 0.8390, RMSE of 1.2544, RRMSE of 0.0255, and RPD of 2.5798) on the test dataset.
The overall accuracy of the winter wheat booting stage SPAD value prediction models developed using different variable sets follows the order: VIs + TIs + DWT > VIs + TIs > VIs + DWT > TIs + DWT > TIs > VIs > DWT. Models developed by combining multiple variable sets performed notably better than those developed using a single variable set.
Furthermore, compared to the commonly used VIs + TIs set, the winter wheat booting stage SPAD value prediction model developed using the VIs + TIs + DWT set not only showed improved accuracy but also demonstrated a more stable performance (Figure 7). Under the VIs + TIs + DWT set, models developed using any machine learning algorithm performed excellently (the Ridge, RF, and SVR models all had R2 values greater than 0.8; only the BPNN model fell short). The winter wheat booting stage SPAD value prediction model developed using SVR in the VIs + TIs + DWT set, which achieved an RPD of 2.5798 on the test set, is particularly noteworthy. This model is the only one among the developed models to achieve an RPD > 2.5 on the test dataset, demonstrating an excellent prediction performance. This further underscores the importance of including the DWT set in winter wheat booting stage SPAD value prediction.

4. Discussion

4.1. Comparison of SPAD Value Prediction Accuracy at Varying UAV Flight Altitudes

In this study, we first examined the performance of predicting SPAD for winter wheat by using UAV (DJI P4-Multispectral UAV) images at higher flight altitudes of 40 m (with a resolution of 2.12 cm/pixel), 60 m (with a resolution of 3.18 cm/pixel), 80 m (with a resolution of 4.23 cm/pixel), 100 m (with a resolution of 5.29 cm/pixel), and 120 m (with a resolution of 6.35 cm/pixel) through a comparison with the prediction performance when using images at a baseline altitude of 20 m (with a resolution of 1.06 cm/pixel). Four different machine learning algorithms, including Ridge, RF, SVR, and BPNN, were employed in this study. In this objective, we only used VIs, which have been commonly used as predictor variables in similar previous studies. To enhance the reliability of the study results, winter wheat with various canopy structures was created by planting different varieties of winter wheat and applying different nitrogen fertilizer treatments within different plots (Figure 1 and Figure 2).
Within the flight altitude limit of 120 m (40 to 120 m, with resolutions from 2.12 to 6.35 cm/pixel), models for predicting winter wheat SPAD values during the booting stage were successfully developed using VIs combined with specific machine learning regressions (Ridge and SVR, using the flight altitude of 120 m (with a resolution of 6.35 cm/pixel) as an example), with RPD values exceeding 2.0. According to Viscarra Rossel et al. [68], models with RPD values exceeding 2.0 demonstrate a very good prediction performance, exceeding our expectations. Compared to the flight altitude of 20 m (with a resolution of 1.06 cm/pixel), the UAV at higher altitudes (40 to 120 m, with resolutions from 2.12 to 6.35 cm/pixel) was still able to capture clear spectral band reflectance values, facilitating the prediction of winter wheat SPAD values. Comparable findings were reported by Yang et al. [69] and Njane et al. [9], who suggested that VIs-based models are less affected by variations in UAV flight altitude within the range of 20–100 m (using a DJI P4-Multispectral UAV as an example, with resolutions from 1.06 to 5.29 cm/pixel).
The flight altitude of UAVs typically determines the flight duration, image pixel size, and field coverage area [70]. Previous studies have suggested that a higher spatial resolution (lower flight altitude) captures more detailed crop growth information and yields a more accurate prediction of crop parameters [10,11,12], particularly for biomass and plant height [9]; this conclusion does not contradict our findings. Because UAV images are taken from above, increasing the flight altitude enlarges the coverage area and makes plants farther from the UAV appear smaller in the images, which complicates the accurate prediction of morphological characteristics such as volume (biomass) and height (plant height) [71]. However, predicting SPAD values (physiological parameters of crops) using VIs at different flight altitudes differs significantly from predicting the morphological characteristics of crops using VIs at different UAV flight altitudes.
Moreover, a similar accuracy in predicting winter wheat SPAD values during the booting stage was achieved at higher flight altitudes (40 to 120 m, with a resolution from 2.12 to 6.35 cm/pixel) compared to the flight altitude of 20 m (with a resolution of 1.06 cm/pixel), indicating that higher UAV flight altitudes are a preferable option, facilitating the prediction of winter wheat SPAD values. This is because higher flight altitudes save time and battery during field missions, allowing for the collection of more plot images under limited battery conditions. Additionally, shorter flight activities reduce the likelihood of encountering lighting changes, avoiding the provision of misleading information about winter wheat due to images obtained under variable illumination.

4.2. Influence of Multiple Variable Sets on Winter Wheat SPAD Value Prediction during the Booting Stage

In this study, we investigated and compared the SPAD prediction performance for winter wheat when using individual types of predictor variable and using various combinations of predictor variables. Three different types of predictor variables were used in this study, encompassing VIs, TIs, and DWT variables. The same four machine learning algorithms (Ridge, RF, SVR, and BPNN) were employed for the prediction. In this objective, we only used the images at an altitude of 20 m (with a resolution of 1.06 cm/pixel).
The differences in model performance based on different variable sets were significant. Generally, when only one variable set was used to develop winter wheat SPAD value prediction models, the overall accuracy was as follows: TIs > VIs > DWT. This study found that models developed with the TIs set achieved higher accuracy in predicting winter wheat SPAD values during the booting stage than the VIs commonly used in previous studies. This may be because, under different nitrogen fertilizer treatments, some plots still had small winter wheat plants with more exposed soil. This condition potentially disrupted the canopy spectra’s responsiveness to SPAD value characteristics. TIs are sensitive to boundaries between soil and green plants [72], and accordingly, TIs (especially those under the R channel) demonstrated a stronger correlation with winter wheat SPAD values.
Although the accuracy of DWT in predicting winter wheat SPAD values slightly lagged behind that of TIs and VIs, acceptable prediction models could still be developed. The LL, HH, HL, and LH channels under different bands showed some degree of correlation with winter wheat SPAD values. This is because DWT effectively separates useful information from weak information, thereby utilizing existing information [36], which is a key rationale for introducing DWT in this study.
The overall accuracy of winter wheat SPAD value prediction models developed with different variable combinations was as follows: VIs + TIs + DWT > VIs + TIs > VIs + DWT > TIs + DWT > TIs > VIs > DWT. Models combining multiple variable sets performed significantly better than models developed with a single variable set. Although the accuracy of predicting winter wheat SPAD values using the VIs + TIs set was higher than that of using any single variable set alone, the overall improvement in accuracy was not significant. This may be because these two variable sets are already closely related to SPAD values. Therefore, their combination did not produce particularly significant synergistic effects [73].
Furthermore, compared to the VIs + TIs set, models developed with the VIs + TIs + DWT set not only showed improved accuracy but also demonstrated a more stable performance in predicting winter wheat SPAD values. The main reason for this may be that the VIs + TIs + DWT set combines the spectral (VIs), frequency (DWT), and spatial information (TIs) of multispectral images, compensating for the shortcomings of using only spectral and spatial variables [74]. Under the VIs + TIs + DWT set, prediction models developed with any machine learning algorithm performed excellently.
Notably, under the VIs + TIs + DWT set, the prediction model developed with SVR achieved an RPD of 2.5798 on the test set. This model was the only one built under different variable combinations with an RPD exceeding 2.5 on the test set, demonstrating an excellent prediction performance [68]. This further underscores the importance of combining the DWT set for predicting winter wheat SPAD values. Combining VIs, TIs, and DWT can achieve a better prediction of winter wheat SPAD values during the booting stage, serving as an alternative to advanced cameras or longer lenses.

4.3. Performance Comparison of Four Machine Learning Models

Under different UAV flight altitudes, VIs combined with specific machine learning models were able to develop highly accurate models for predicting winter wheat SPAD values. Ridge and SVR models demonstrated distinct advantages over RF and BPNN models at different altitudes, exhibiting notable stability and accuracy. Across different variable sets (VIs, TIs, VIs + TIs, VIs + DWT, TIs + DWT, VIs + TIs + DWT), models developed by SVR performed best in predicting the SPAD values during the booting stage. This suggests that SVR models are more suitable for predicting winter wheat SPAD values during the booting stage. This may be attributed to the objective of the SVR optimization problem, which aims to minimize training errors while maximizing the margin, resulting in models that are typically globally optimal [75] and enabling SVR to better generalize to new data in some cases.
Some studies have suggested that RF models outperform SVR models in predicting crop parameters. For instance, Osco et al. [76] found that RF models could more accurately predict leaf nitrogen content (LNC) in maize compared to SVR models. Likewise, Zha et al. [77] demonstrated that RF models outperformed SVR and Artificial Neural Network models in estimating the rice nitrogen nutrition index (NNI). However, given the excellent performance of SVR models in this study, especially their achieving the highest accuracy in VIs, TIs, VIs + TIs, and VIs + TIs + DWT sets, the superiority of RF models may require further research verification.
Additionally, in this research, the optimal number of input variables for the models was identified using the RFE learning curve. It was observed that increasing the number of input variables beyond a specific point did not improve accuracy; instead, it led to a decrease in accuracy. This finding underscores the importance of identifying the optimal number of input variables to reduce information redundancy, ultimately enhancing model efficiency and prediction accuracy.

4.4. Limitations and Future Directions

In future research, we will fly the UAV to acquire images directly at 40 m (with a resolution of 2.12 cm/pixel), 60 m (with a resolution of 3.18 cm/pixel), 80 m (with a resolution of 4.23 cm/pixel), 100 m (with a resolution of 5.29 cm/pixel), and 120 m (with a resolution of 6.35 cm/pixel). In this way, we can obtain raw UAV images at different flight altitudes rather than resampled images, reducing uncertainties resulting from the choice of resampling algorithm. In addition, flying at each altitude will let us record the actual time required to survey the field, providing direct evidence when discussing the efficiency of flying at elevated altitudes.
Additionally, given that previous studies have highlighted the lower accuracy in predicting SPAD values during the winter wheat booting stage compared to other growth stages, this study concentrated solely on this stage, with plans for future research to encompass additional growth stages. Moreover, this study relied on data from a single year of experimentation, emphasizing the need for further validation in subsequent research endeavors.
Furthermore, neglecting the significant vertical gradients in SPAD values and treating the canopy as a uniform plane can compromise the robustness of canopy RS and diminish its practical applicability, as suggested by earlier studies [78]. Future research will consider the issue of the uneven vertical distribution of SPAD values and use advanced sensors such as LiDAR to obtain more winter wheat SPAD value-related characteristics to address these issues.

5. Conclusions

This study demonstrates that VIs combined with specific machine learning algorithms can achieve similar accuracy in predicting winter wheat SPAD values during the booting stage at higher flight altitudes (40 to 120 m, using a DJI P4-Multispectral drone as an example, with a resolution from 2.12 to 6.35 cm/pixel) to the flight altitude of 20 m (with a resolution of 1.06 cm/pixel). The result suggests that the flight altitude of 120 m (with a resolution of 6.35 cm/pixel) is an alternative that can achieve comparable results to a lower flight altitude at 20 m (with a resolution of 1.06 cm/pixel) with a balanced tradeoff between accuracy and efficiency. This allows for the collection of more field images under limited battery conditions. It also avoids providing misleading information about winter wheat due to images being obtained under variable illumination, thereby facilitating the large-scale monitoring of winter wheat in actual agricultural production.
The overall accuracy of winter wheat SPAD value prediction models developed with different variable sets was VIs + TIs + DWT > VIs + TIs > VIs + DWT > TIs + DWT > TIs > VIs > DWT. Models developed with the TIs set achieved a higher accuracy in predicting winter wheat SPAD values than the VIs commonly used in previous studies, presenting a promising alternative approach. Additionally, although the accuracy of DWT in predicting winter wheat SPAD values slightly lagged behind that of TIs and VIs, acceptable prediction models could still be developed.
Models combining multiple variable sets performed significantly better than models developed with a single variable set. Furthermore, compared to the commonly used VIs + TIs set in previous studies, the VIs + TIs + DWT set used in this study combined the spectral (VIs), frequency (DWT), and spatial (TIs) information of multispectral images. This combination compensates for the limitations of solely using spectral and spatial variables. The resulting winter wheat SPAD value prediction models not only showed improved accuracy but also demonstrated a more stable performance. This provides more meaningful technical support for the RS prediction of winter wheat SPAD values, facilitating more sophisticated field management practices in precision agriculture.

Author Contributions

Conceptualization, J.W. and Q.Y.; methodology, J.W. and Q.Y.; software, J.W. and Q.Y.; formal analysis, J.W. and Q.Y.; validation, J.W., Q.Y. and L.C.; visualization, J.W., Q.Y. and L.C.; investigation, Q.Y., W.L., Y.Z., J.W., W.W. and G.Z.; writing—original draft preparation, J.W., Q.Y. and G.Z.; writing—review and editing, J.W. and Q.Y.; supervision, J.W. and Z.H.; funding acquisition, Z.H. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the Jiangsu Agricultural Science and Technology Innovation fund (grant number: CX(22)1001), the Scientific and Technological Innovation Fund of Carbon Emissions Peak and Neutrality of Jiangsu Provincial Department of Science and Technology (grant number: BE2022424-2), and the Priority Academic Program Development of Jiangsu Higher Education Institutions (PAPD), China.

Data Availability Statement

The data are available from the authors upon reasonable request, as they are required for further use.

Acknowledgments

Special thanks to Zhi Ding and Junhan Zhang, Agricultural College of Yangzhou University, for their valuable assistance during the field surveys.

Conflicts of Interest

The authors declare no conflicts of interest.

References

1. Zhang, L.; Han, W.; Niu, Y.; Chavez, J.L.; Shao, G.; Zhang, H. Evaluating the sensitivity of water stressed maize chlorophyll and structure based on UAV derived vegetation indices. Comput. Electron. Agric. 2021, 185, 106174.
2. Liu, Q.; Wang, C.; Jiang, J.; Wu, J.; Wang, X.; Cao, Q.; Tian, Y.; Zhu, Y.; Cao, W.; Liu, X. Multi-source data fusion improved the potential of proximal fluorescence sensors in predicting nitrogen nutrition status across winter wheat growth stages. Comput. Electron. Agric. 2024, 219, 108786.
3. Mohammadi, S.; Uhlen, A.K.; Lillemo, M.; Ergon, Å.; Shafiee, S. Enhancing phenotyping efficiency in faba bean breeding: Integrating UAV imaging and machine learning. Precis. Agric. 2024, 25, 1502–1528.
4. Liu, Y.; Su, L.; Wang, Q.; Zhang, J.; Shan, Y.; Deng, M. Comprehensive and quantitative analysis of growth characteristics of winter wheat in China based on growing degree days. Adv. Agron. 2020, 159, 237–273.
5. Yang, X.; Yang, R.; Ye, Y.; Yuan, Z.; Wang, D.; Hua, K. Winter wheat SPAD estimation from UAV hyperspectral data using cluster-regression methods. Int. J. Appl. Earth Obs. Geoinf. 2021, 105, 102618.
6. Guo, Y.; Chen, S.; Li, X.; Cunha, M.; Jayavelu, S.; Cammarano, D.; Fu, Y. Machine learning-based approaches for predicting SPAD values of maize using multi-spectral images. Remote Sens. 2022, 14, 1337.
7. Li, W.; Weiss, M.; Jay, S.; Wei, S.; Zhao, N.; Comar, A.; López-Lozano, R.; de Solan, B.; Yu, Q.; Wu, W.; et al. Daily monitoring of Effective Green Area Index and Vegetation Chlorophyll Content from continuous acquisitions of a multi-band spectrometer over winter wheat. Remote Sens. Environ. 2024, 300, 113883.
8. Chianucci, F.; Disperati, L.; Guzzi, D.; Bianchini, D.; Nardino, V.; Lastri, C.; Rindinella, A.; Corona, P. Estimation of canopy attributes in beech forests using true colour digital images from a small fixed-wing UAV. Int. J. Appl. Earth Obs. Geoinf. 2016, 47, 60–68.
9. Njane, S.N.; Tsuda, S.; van Marrewijk, B.M.; Polder, G.; Katayama, K.; Tsuji, H. Effect of varying UAV height on the precise estimation of potato crop growth. Front. Plant Sci. 2023, 14, 1233349.
10. Mesas-Carrascosa, F.J.; Torres-Sánchez, J.; Clavero-Rumbao, I.; García-Ferrer, A.; Peña, J.M.; Borra-Serrano, I.; López-Granados, F. Assessing optimal flight parameters for generating accurate multispectral orthomosaicks by UAV to support site-specific crop management. Remote Sens. 2015, 7, 12793–12814.
11. Hu, P.; Guo, W.; Chapman, S.C.; Guo, Y.; Zheng, B. Pixel size of aerial imagery constrains the applications of unmanned aerial vehicle in crop breeding. ISPRS J. Photogramm. Remote Sens. 2019, 154, 1–9.
12. Jin, X.; Liu, S.; Baret, F.; Hemerlé, M.; Comar, A. Estimates of plant density of wheat crops at emergence from very low altitude UAV imagery. Remote Sens. Environ. 2017, 198, 105–114.
13. Wang, W.; Gao, X.; Cheng, Y.; Ren, Y.; Zhang, Z.; Wang, R.; Geng, H. QTL mapping of leaf area index and chlorophyll content based on UAV remote sensing in wheat. Agriculture 2022, 12, 595.
14. Wang, J.; Zhou, Q.; Shang, J.; Liu, C.; Zhuang, T.; Ding, J.; Xian, Y.; Zhao, L.; Wang, W.; Zhou, G.; et al. UAV- and machine learning-based retrieval of wheat SPAD values at the overwintering stage for variety screening. Remote Sens. 2021, 13, 5166.
15. Wu, Q.; Zhang, Y.; Zhao, Z.; Xie, M.; Hou, D. Estimation of relative chlorophyll content in spring wheat based on multi-temporal UAV remote sensing. Agronomy 2023, 13, 211.
16. Yin, Q.; Zhang, Y.; Li, W.; Wang, J.; Wang, W.; Ahmad, I.; Huo, Z. Better inversion of wheat canopy SPAD values before heading stage using spectral and texture indices based on UAV multispectral imagery. Remote Sens. 2023, 15, 4935.
17. Maddikunta, P.K.R.; Hakak, S.; Alazab, M.; Bhattacharya, S.; Gadekallu, T.R.; Khan, W.Z.; Pham, Q.V. Unmanned aerial vehicles in smart agriculture: Applications, requirements, and challenges. IEEE Sens. J. 2021, 21, 17608–17619.
18. Rahman, M.F.F.; Fan, S.; Zhang, Y.; Chen, L. A comparative study on application of unmanned aerial vehicle systems in agriculture. Agriculture 2021, 11, 22.
19. Wang, Y.; Yang, Z.; Kootstra, G.; Khan, H.A. The impact of variable illumination on vegetation indices and evaluation of illumination correction methods on chlorophyll content estimation using UAV imagery. Plant Methods 2023, 19, 51.
20. Xu, T.; Wang, F.; Shi, Z.; Miao, Y. Multi-scale monitoring of rice aboveground biomass by combining spectral and textural information from UAV hyperspectral images. Int. J. Appl. Earth Obs. Geoinf. 2024, 127, 103655.
21. Central People’s Government of the People’s Republic of China. Available online: https://www.gov.cn/zhengce/zhengceku/202306/content_6888800.htm (accessed on 15 March 2024).
22. Cui, H.; Zhang, H.; Ma, H.; Ji, J. Research on SPAD Estimation Model for Spring Wheat Booting Stage Based on Hyperspectral Analysis. Sensors 2024, 24, 1693.
23. Chen, W.; Yao, R.; Sun, P.; Zhang, Q.; Singh, V.P.; Sun, S.; AghaKouchak, A.; Ge, C.; Yang, H. Drought Risk Assessment of Winter Wheat at Different Growth Stages in Huang-Huai-Hai Plain Based on Nonstationary Standardized Precipitation Evapotranspiration Index and Crop Coefficient. Remote Sens. 2024, 16, 1625.
24. Liu, L.; Huang, R.; Cheng, J.; Liu, W.; Chen, Y.; Shao, Q.; Duan, D.; Wei, P.; Chen, Y.; Huang, J. Monitoring meteorological drought in southern China using remote sensing data. Remote Sens. 2021, 13, 3858.
25. Liang, Z.; Luo, J.; Wei, B.; Liao, Y.; Liu, Y. Trehalose can alleviate decreases in grain number per spike caused by low-temperature stress at the booting stage by promoting floret fertility in wheat. J. Agron. Crop Sci. 2021, 207, 717–732.
26. Wang, Q.; Chen, X.; Meng, H.; Miao, H.; Jiang, S.; Chang, Q. UAV Hyperspectral Data Combined with Machine Learning for Winter Wheat Canopy SPAD Values Estimation. Remote Sens. 2023, 15, 4658.
27. Su, X.; Nian, Y.; Shaghaleh, H.; Hamad, A.A.; Yue, H.; Zhu, Y.; Li, J.; Wang, W.; Wang, H.; Ma, Q.; et al. Combining features selection strategy and features fusion strategy for SPAD estimation of winter wheat based on UAV multispectral imagery. Front. Plant Sci. 2024, 15, 1404238.
28. Sun, H.; Li, M.Z.; Zhao, Y.; Zhang, Y.E.; Wang, X.M.; Li, X.H. The spectral characteristics and chlorophyll content at winter wheat growth stages. Spectrosc. Spectr. Anal. 2010, 30, 192–196.
29. Deng, L.; Mao, Z.; Li, X.; Hu, Z.; Duan, F.; Yan, Y. UAV-based multispectral remote sensing for precision agriculture: A comparison between different cameras. ISPRS J. Photogramm. Remote Sens. 2018, 146, 124–136.
30. Tao, W.; Dong, Y.; Su, W.; Li, J.; Huang, J.; Li, X.; Zeng, Y. Mapping the corn residue-covered types using multi-scale feature fusion and supervised learning method by Chinese GF-2 PMS image. Front. Plant Sci. 2022, 13, 901042.
31. Liao, Q.; Wang, J.; Yang, G.; Zhang, D.; Li, H.; Fu, Y.; Li, Z. Comparison of spectral indices and wavelet transform for estimating chlorophyll content of maize from hyperspectral reflectance. J. Appl. Remote Sens. 2013, 7, 073575.
32. Ouma, Y.O.; Tetuko, J.; Tateishi, R. Analysis of co-occurrence and discrete wavelet transform textures for differentiation of forest and non-forest vegetation in very-high-resolution optical-sensor imagery. Int. J. Remote Sens. 2008, 29, 3417–3456.
33. Mallat, S.G. A theory for multiresolution signal decomposition: The wavelet representation. IEEE Trans. Pattern Anal. Mach. Intell. 1989, 11, 674–693.
34. Chen, D.; Hu, B.; Shao, X.; Su, Q. Variable selection by modified IPW (iterative predictor weighting)-PLS (partial least squares) in continuous wavelet regression models. Analyst 2004, 129, 664–669.
35. Arai, K.; Ragmad, C. Image retrieval method utilizing texture information derived from discrete wavelet transformation together with color information. Image 2016, 5, 367–380.
36. Xu, X.; Li, Z.; Yang, X.; Yang, G.; Teng, C.; Zhu, H.; Liu, S. Predicting leaf chlorophyll content and its nonuniform vertical distribution of summer maize by using a radiation transfer model. J. Appl. Remote Sens. 2019, 13, 034505.
  37. LeMay, V.; Maedel, J.; Coops, N.C. Estimating stand structural details using nearest neighbor analyses to link ground data, forest cover maps, and Landsat imagery. Remote Sens. Environ. 2008, 112, 2578–2591. [Google Scholar] [CrossRef]
  38. Zhang, L.; Niu, Y.; Zhang, H.; Han, W.; Tang, J. Maize canopy temperature extracted from UAV thermal and RGB imagery and its application in water stress monitoring. Front. Plant Sci. 2019, 10, 461668. [Google Scholar] [CrossRef]
  39. Borra-Serrano, I.; Peña, J.M.; Torres-Sánchez, J.; Mesas-Carrascosa, F.J.; López-Granados, F. Spatial quality evaluation of resampled unmanned aerial vehicle-imagery for weed mapping. Sensors 2015, 15, 19688–19708. [Google Scholar] [CrossRef]
  40. Putkiranta, P.; Räsänen, A.; Korpelainen, P.; Erlandsson, R.; Kolari, T.H.; Pang, Y.; Villoslada, M.; Wolff, F.; Kumpula, T.; Virtanen, T. The value of hyperspectral UAV imagery in characterizing tundra vegetation. Remote Sens. Environ. 2024, 308, 114175. [Google Scholar] [CrossRef]
  41. Konica Minolta. Available online: https://www.konicaminolta.com.cn/instruments/products/color/chlorophyll-meter/spad502plus/specifications.html (accessed on 8 June 2024).
  42. Hirooka, Y.; Homma, K.; Shiraiwa, T. Parameterization of the vertical distribution of leaf area index (LAI) in rice (Oryza sativa L.) using a plant canopy analyzer. Sci. Rep. 2018, 8, 6387. [Google Scholar] [CrossRef]
  43. Tucker, C.J. Red and photographic infrared linear combinations for monitoring vegetation. Remote Sens. Environ. 1979, 8, 127–150. [Google Scholar] [CrossRef]
  44. Gitelson, A.A.; Viña, A.; Ciganda, V.; Rundquist, D.C.; Arkebauer, T.J. Remote estimation of canopy chlorophyll content in crops. Geophys. Res. Lett. 2005, 32, L08403. [Google Scholar] [CrossRef]
  45. Haboudane, D.; Miller, J.R.; Tremblay, N.; Zarco-Tejada, P.J.; Dextraze, L. Integrated narrow-band vegetation indices for prediction of crop chlorophyll content for application to precision agriculture. Remote Sens. Environ. 2002, 81, 416–426. [Google Scholar] [CrossRef]
  46. Rouse, J.W.; Haas, R.H.; Schell, J.A.; Deering, D.W. Monitoring vegetation systems in the Great Plains with ERTS. NASA Spec. Publ. 1974, 351, 309. [Google Scholar]
  47. Gitelson, A.A.; Gritz, Y.; Merzlyak, M.N. Relationships between leaf chlorophyll content and spectral reflectance and algorithms for non-destructive chlorophyll assessment in higher plant leaves. J. Plant Physiol. 2003, 160, 271–282. [Google Scholar] [CrossRef] [PubMed]
  48. Gitelson, A.A.; Merzlyak, M.N. Remote estimation of chlorophyll content in higher plant leaves. Int. J. Remote Sens. 1997, 18, 2691–2697. [Google Scholar] [CrossRef]
  49. Hassan, M.A.; Yang, M.; Rasheed, A.; Jin, X.; Xia, X.; Xiao, Y.; He, Z. Time-series multispectral indices from unmanned aerial vehicle imagery reveal senescence rate in bread wheat. Remote Sens. 2018, 10, 809. [Google Scholar] [CrossRef]
  50. Raper, T.B.; Varco, J.J. Canopy-scale wavelength and vegetative index sensitivities to cotton growth parameters and nitrogen status. Precis. Agric. 2015, 16, 62–76. [Google Scholar] [CrossRef]
  51. Huete, A.; Didan, K.; Miura, T.; Rodriguez, E.P.; Gao, X.; Ferreira, L.G. Overview of the radiometric and biophysical performance of the MODIS vegetation indices. Remote Sens. Environ. 2002, 83, 195–213. [Google Scholar] [CrossRef]
  52. Jiang, Z.; Huete, A.R.; Didan, K.; Miura, T. Development of a two-band enhanced vegetation index without a blue band. Remote Sens. Environ. 2008, 112, 3833–3845. [Google Scholar] [CrossRef]
  53. Rondeaux, G.; Steven, M.; Baret, F. Optimization of soil-adjusted vegetation indices. Remote Sens. Environ. 1996, 55, 95–107. [Google Scholar] [CrossRef]
  54. Daughtry, C.S.; Walthall, C.L.; Kim, M.S.; De Colstoun, E.B.; McMurtrey, J.E., III. Estimating corn leaf chlorophyll concentration from leaf and canopy reflectance. Remote Sens. Environ. 2000, 74, 229–239. [Google Scholar] [CrossRef]
  55. Gitelson, A.A. Wide dynamic range vegetation index for remote quantification of biophysical characteristics of vegetation. J. Plant Physiol. 2004, 161, 165–173. [Google Scholar] [CrossRef] [PubMed]
  56. Haralick, R.M.; Shanmugam, K.; Dinstein, I.H. Textural features for image classification. IEEE Trans. Syst. Man Cybern. 1973, SMC-3, 610–621. [Google Scholar] [CrossRef]
  57. Hall-Beyer, M. Practical guidelines for choosing GLCM textures to use in landscape classification tasks over a range of moderate spatial scales. Int. J. Remote Sens. 2017, 38, 1312–1338. [Google Scholar] [CrossRef]
  58. Alessio, S.M. Discrete wavelet transform (DWT). In Digital Signal Processing and Spectral Analysis for Scientists: Concepts and Applications; Springer: Berlin, Germany, 2016; pp. 645–714. [Google Scholar]
  59. Kavitha, S.; Thyagharajan, K.K. Efficient DWT-based fusion techniques using genetic algorithm for optimal parameter estimation. Soft Comput. 2017, 21, 3307–3316. [Google Scholar] [CrossRef]
  60. Varuna Shree, N.; Kumar, T.N.R. Identification and classification of brain tumor MRI images with feature extraction using DWT and probabilistic neural network. Brain Inf. 2018, 5, 23–30. [Google Scholar] [CrossRef] [PubMed]
  61. Chen, L.; Xing, M.; He, B.; Wang, J.; Shang, J.; Huang, X.; Xu, M. Estimating soil moisture over winter wheat fields during growing season using machine-learning methods. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2021, 14, 3706–3718. [Google Scholar] [CrossRef]
  62. Elavarasan, D.; Vincent PM, D.R.; Srinivasan, K.; Chang, C.Y. A hybrid CFS filter and RF-RFE wrapper-based feature extraction for enhanced agricultural crop yield prediction modeling. Agriculture 2020, 10, 400. [Google Scholar] [CrossRef]
  63. Dorugade, A.V. New ridge parameters for ridge regression. J. Assoc. Arab Univ. Basic Appl. Sci. 2014, 15, 94–99. [Google Scholar] [CrossRef]
  64. Afacan, E.; Lourenço, N.; Martins, R.; Dündar, G. Machine learning techniques in analog/RF integrated circuit design, synthesis, layout, and test. Integration 2021, 77, 113–130. [Google Scholar] [CrossRef]
  65. Awad, M.; Khanna, R.; Awad, M.; Khanna, R. Support vector regression. In Efficient Learning Machines: Theories, Concepts, and Applications for Engineers and System Designers; Springer: Berlin, Germany, 2015; pp. 67–80. [Google Scholar]
  66. Zou, P.; Yang, J.; Fu, J.; Liu, G.; Li, D. Artificial neural network and time series models for predicting soil salt and water content. Agric. Water Manag. 2010, 97, 2009–2019. [Google Scholar] [CrossRef]
  67. Adnan, M.; Alarood, A.A.S.; Uddin, M.I.; ur Rehman, I. Utilizing grid search cross-validation with adaptive boosting for augmenting performance of machine learning models. PeerJ Comput. Sci. 2022, 8, e803. [Google Scholar] [CrossRef] [PubMed]
  68. Viscarra Rossel, R.A.; Taylor, H.J.; McBratney, A.B. Multivariate calibration of hyperspectral γ-ray energy spectra for proximal soil sensing. Eur. J. Soil Sci. 2007, 58, 343–353. [Google Scholar] [CrossRef]
  69. Yang, G.; Liu, J.; Zhao, C.; Li, Z.; Huang, Y.; Yu, H.; Xu, B.; Yang, X.; Zhu, D.; Zhang, X.; et al. Unmanned aerial vehicle remote sensing for field-based crop phenotyping: Current status and perspectives. Front. Plant Sci. 2017, 8, 1111. [Google Scholar] [CrossRef] [PubMed]
  70. Flores-de-Santiago, F.; Valderrama-Landeros, L.; Rodríguez-Sobreyra, R.; Flores-Verdugo, F. Assessing the effect of flight altitude and overlap on orthoimage generation for UAV estimates of coastal wetlands. J. Coast. Conserv. 2020, 24, 35. [Google Scholar] [CrossRef]
  71. Chen, Y.; Xin, R.; Jiang, H.; Liu, Y.; Zhang, X.; Yu, J. Refined feature fusion for in-field high-density and multi-scale rice panicle counting in UAV images. Comput. Electron. Agric. 2023, 211, 108032. [Google Scholar] [CrossRef]
  72. Qiao, L.; Zhao, R.; Tang, W.; An, L.; Sun, H.; Li, M.; Wang, N.; Liu, Y.; Liu, G. Estimating maize LAI by exploring deep features of vegetation index map from UAV multispectral images. Field Crops Res. 2022, 289, 108739. [Google Scholar] [CrossRef]
  73. Zhang, X.; Zhang, K.; Sun, Y.; Zhao, Y.; Zhuang, H.; Ban, W.; Chen, Y.; Fu, E.; Chen, S.; Liu, J.; et al. Combining spectral and texture features of UAS-based multispectral images for maize leaf area index estimation. Remote Sens. 2022, 14, 331. [Google Scholar] [CrossRef]
  74. Zhou, L.; Nie, C.; Su, T.; Xu, X.; Song, Y.; Yin, D.; Liu, S.; Liu, Y.; Bai, Y.; Jia, X.; et al. Evaluating the canopy chlorophyll density of maize at the whole growth stage based on multi-scale UAV image feature fusion and machine learning methods. Agriculture 2023, 13, 895. [Google Scholar] [CrossRef]
  75. Yang, H.; Huang, K.; King, I.; Lyu, M.R. Localized support vector regression for time series prediction. Neurocomputing 2009, 72, 2659–2669. [Google Scholar] [CrossRef]
  76. Osco, L.P.; Junior, J.M.; Ramos, A.P.M.; Furuya, D.E.G.; Santana, D.C.; Teodoro, L.P.R.; Gonçalves, W.N.; Baio, F.H.; Pistori, H.; Junior, C.A.; et al. Leaf nitrogen concentration and plant height prediction for maize using UAV-based multispectral imagery and machine learning techniques. Remote Sens. 2020, 12, 3237. [Google Scholar] [CrossRef]
  77. Zha, H.; Miao, Y.; Wang, T.; Li, Y.; Zhang, J.; Sun, W.; Feng, Z.; Kusnierek, K. Improving unmanned aerial vehicle remote sensing-based rice nitrogen nutrition index prediction with machine learning. Remote Sens. 2020, 12, 215. [Google Scholar] [CrossRef]
  78. Li, H.; Zhao, C.; Yang, G.; Feng, H. Variations in crop variables within wheat canopies and responses of canopy spectral characteristics and derived vegetation indices to different vertical leaf layers and spikes. Remote Sens. Environ. 2015, 169, 358–374. [Google Scholar] [CrossRef]
Figure 1. Study site and experimental design: (a) the geographical location of Jiangyan; (b) the NDVI image of the experimental field on 11 April 2023; (c) distribution of the plots and the various treatments. Note: N0 (0 kg/mu), N10 (10 kg/mu), N16 (16 kg/mu), and N22 (22 kg/mu) correspond to nitrogen fertilizer application rates of 0 kg/ha, 150 kg/ha, 240 kg/ha, and 330 kg/ha, respectively.
Figure 2. Four different nitrogen application methods (Experiment 2): (a) M1; (b) M2; (c) M3; (d) M4. Note: M1–M4 each consist of 16 seed furrows, with a row spacing of 25 cm and a seeding depth of 3 cm; for M2, M3, and M4, the fertilizer application depth is 5 cm. M2 includes 17 fertilizer furrows, where resin-coated urea and urea are applied as a mixed strip between rows, with a seeding-to-fertilizer distance of 12.5 cm. M3 includes 17 fertilizer furrows, with resin-coated urea and urea applied separately between rows. M4 comprises eight fertilizer furrows, with resin-coated urea and urea applied as a mixed strip within the inter-row spaces.
Figure 3. Decomposition process using the bior 1.3 wavelet basis function for DWT. Note: L1–L4 denote the first to fourth low-pass filters; H1–H4 denote the first to fourth high-pass filters; D1–D4 denote the first to fourth downsampling steps. LL is the approximation sub-image; LH, HL, and HH are the horizontal, vertical, and diagonal detail sub-images, respectively.
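As a concrete illustration of the Figure 3 pipeline, the sketch below performs one level of the 2-D DWT on a single band with the bior 1.3 wavelet using the PyWavelets library; the random input array is a placeholder for a reflectance band raster, and the printed statistics are only one example of how sub-image features might be summarized, not the authors' exact feature pipeline.

```python
# Minimal sketch: one-level 2-D DWT of a band image with the bior1.3 wavelet,
# mirroring Figure 3. Requires PyWavelets (pip install PyWavelets).
import numpy as np
import pywt

band = np.random.rand(256, 256)  # placeholder for a single reflectance band

# dwt2 applies the low-/high-pass filtering and downsampling along rows and
# columns, returning the approximation (LL) and the horizontal, vertical,
# and diagonal detail sub-images (LH, HL, HH).
LL, (LH, HL, HH) = pywt.dwt2(band, "bior1.3")

for name, sub in {"LL": LL, "LH": LH, "HL": HL, "HH": HH}.items():
    print(name, sub.shape, float(sub.mean()))  # e.g., sub-image mean as a feature
```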
Figure 4. RFE learning curves based on VIs at different UAV flight altitudes: (a) 20 m VIs; (b) 40 m VIs; (c) 60 m VIs; (d) 80 m VIs; (e) 100 m VIs; (f) 120 m VIs.
Figure 5. RFE learning curves under different variable combinations: (a) TIs; (b) DWT; (c) VIs + TIs; (d) VIs + DWT; (e) TIs + DWT; (f) VIs + TIs + DWT. Note: The RFE learning curve based on VIs is shown in Figure 4a.
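Figures 4 and 5 trace prediction accuracy as RFE discards candidate variables one at a time. A hedged sketch of producing such a curve with scikit-learn's RFECV is given below; the synthetic data, the random-forest base estimator, and the R2 scoring are illustrative assumptions rather than the study's exact configuration.

```python
# Illustrative sketch of recursive feature elimination with cross-validation,
# of the kind behind the learning curves in Figures 4 and 5.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.feature_selection import RFECV

rng = np.random.default_rng(0)
X = rng.random((120, 22))                       # e.g., 22 candidate VIs
y = 40 + 30 * X[:, 0] + rng.normal(0, 1, 120)   # synthetic SPAD-like target

selector = RFECV(
    estimator=RandomForestRegressor(n_estimators=200, random_state=0),
    step=1,          # drop one variable per iteration
    cv=5,
    scoring="r2",
)
selector.fit(X, y)

print("optimal number of variables:", selector.n_features_)
print("selected mask:", selector.support_)
# In recent scikit-learn versions, selector.cv_results_["mean_test_score"]
# gives the accuracy-vs-number-of-variables curve plotted in the figures.
```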
Figure 6. Scatter plots of all optimal models at multiple altitudes: (a) 20 m SVR; (b) 40 m Ridge; (c) 60 m Ridge; (d) 80 m Ridge; (e) 100 m Ridge; (f) 120 m SVR.
Figure 7. Scatter plots of all SPAD value prediction models under VIs + TIs and VIs + TIs + DWT combinations: (a) Ridge-(VIs + TIs); (b) Ridge-(VIs + TIs + DWT); (c) RF-(VIs + TIs); (d) RF-(VIs + TIs + DWT); (e) SVR-(VIs + TIs); (f) SVR-(VIs + TIs + DWT); (g) BPNN-(VIs + TIs); (h) BPNN-(VIs + TIs + DWT).
Table 1. Specifications of the SPAD-502Plus handheld chlorophyll meter.
Main Specifications | Specification Parameters
Measurement principle | Difference in optical density at two wavelengths *
Measurement range | 0 to 99.9 SPAD units
Sample area | 2 × 3 mm
Measurement time | Approximately 2 s per sample
Sample thickness | Maximum 1.2 mm
Accuracy | ±1.0 SPAD units
Operating temperature | 0–50 °C, relative humidity up to 85% (at 35 °C), no condensation
* Note: The SPAD-502Plus measures transmittance at two wavelengths, 650 nm (where chlorophyll absorbs strongly) and 940 nm (a near-infrared reference), and uses the difference in optical density to estimate relative chlorophyll content.
Table 2. Twenty-two VIs were used in this study for predicting SPAD values during the winter wheat booting stage.
VIs | Formulation | References
R, G, B, RE, NIR | / | /
RVI | NIR/R | [43]
GCI | (NIR/G) − 1 | [44]
RECI | (NIR/RE) − 1 | [44]
TCARI | 3 × [(RE − R) − 0.2 × (RE − G) × (RE/R)] | [45]
NDVI | (NIR − R)/(NIR + R) | [46]
GNDVI | (NIR − G)/(NIR + G) | [47]
GRVI | (G − R)/(G + R) | [43]
NDRE | (NIR − RE)/(NIR + RE) | [48]
NDREI | (RE − G)/(RE + G) | [49]
SCCCI | NDRE/NDVI | [50]
EVI | 2.5 × (NIR − R)/(1 + NIR − 2.4 × R) | [51]
EVI2 | 2.5 × (NIR − R)/(NIR + 2.4 × R + 1) | [52]
OSAVI | (NIR − R)/(NIR + R + L) (L = 0.16) | [53]
MCARI | [(RE − R) − 0.2 × (RE − G)] × (RE/R) | [54]
TCARI/OSAVI | TCARI/OSAVI | [55]
MCARI/OSAVI | MCARI/OSAVI | [54]
WDRVI | (a × NIR − R)/(a × NIR + R) (a = 0.12) | [55]
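To make the Table 2 formulations concrete, a short sketch computing several of the indices from co-registered band rasters follows; the function name, the eps guard against division by zero, and the NumPy-array inputs are illustrative assumptions, not code from the study.

```python
# Hedged sketch: a few of the Table 2 indices computed from the five band
# rasters (R, G, B, RE, NIR), assumed to be co-registered reflectance arrays.
import numpy as np

def vegetation_indices(R, G, B, RE, NIR, eps=1e-9):
    ndvi = (NIR - R) / (NIR + R + eps)
    ndre = (NIR - RE) / (NIR + RE + eps)
    gci = NIR / (G + eps) - 1.0
    osavi = (NIR - R) / (NIR + R + 0.16)                # L = 0.16 [53]
    sccci = ndre / (ndvi + eps)                          # NDRE/NDVI [50]
    wdrvi = (0.12 * NIR - R) / (0.12 * NIR + R + eps)    # a = 0.12 [55]
    return {"NDVI": ndvi, "NDRE": ndre, "GCI": gci,
            "OSAVI": osavi, "SCCCI": sccci, "WDRVI": wdrvi}

# Example call on synthetic 2 x 2 band patches:
bands = {k: np.random.rand(2, 2) for k in ("R", "G", "B", "RE", "NIR")}
print(vegetation_indices(**bands)["NDVI"])
```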
Table 3. Optimal VIs selected at multiple UAV flight altitudes.
Table rows list the 22 candidate variables evaluated by RFE (R, G, B, NIR, Rededge, RVI, GCI, RECI, NDVI, GNDVI, GRVI, NDRE, NDREI, SCCCI, OSAVI, EVI, EVI2, MCARI, TCARI, MCARI/OSAVI, TCARI/OSAVI, WDRVI); “√” marks the optimal variables that were used to develop the predictive models at each flight altitude (20, 40, 60, 80, 100, and 120 m).
Table 4. Optimal sets of RS variables selected from TIs set and DWT set.
Table rows list, for each band (R, G, B, NIR, Rededge), the candidate TIs (mean, variance, homogeneity, contrast, dissimilarity, entropy, second moment, correlation) and the candidate DWT sub-images (LL, LH, HL, HH); “√” marks the optimal variables that were used to develop the predictive models. Note: The optimal variable set selected from the VIs set is shown in Table 3.
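The TIs in Table 4 are standard grey-level co-occurrence matrix (GLCM) statistics [56,57], and the DWT rows correspond to the sub-images of Figure 3. A rough sketch of extracting the eight texture measures for one band with scikit-image is shown below; graycoprops yields five of them directly, while mean, variance, and entropy are derived from the normalized GLCM here, and the 8-bit placeholder band plus the single distance/angle setting are assumptions.

```python
# Rough sketch of the eight GLCM texture measures in Table 4 for one band.
import numpy as np
from skimage.feature import graycomatrix, graycoprops

band8 = (np.random.rand(64, 64) * 255).astype(np.uint8)  # placeholder band
glcm = graycomatrix(band8, distances=[1], angles=[0], levels=256,
                    symmetric=True, normed=True)

# Five measures available directly; "ASM" is the (angular) second moment.
tis = {prop: float(graycoprops(glcm, prop)[0, 0])
       for prop in ("contrast", "dissimilarity", "homogeneity",
                    "ASM", "correlation")}

# Mean, variance, and entropy from the normalized co-occurrence matrix.
p = glcm[:, :, 0, 0]
i = np.arange(p.shape[0])[:, None]
tis["mean"] = float((i * p).sum())
tis["variance"] = float(((i - tis["mean"]) ** 2 * p).sum())
tis["entropy"] = float(-(p[p > 0] * np.log2(p[p > 0])).sum())
print(tis)
```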
Table 5. Optimal sets of RS variables selected from the VIs + TIs set, VIs + DWT set, TIs + DWT set, and VIs + TIs + DWT set.
Variable Set | Optimal Variables
VIs + TIs | R, NIR, RE, RVI, GCI, RECI, NDRE, SCCCI, OSAVI, EVI, EVI2, TCARI, MCARI/OSAVI, TCARI/OSAVI, WDRVI, R-mean, R-variance, R-homogeneity, R-dissimilarity, R-correlation, G-mean, G-secondmoment, G-correlation, B-homogeneity, B-contrast, B-dissimilarity, NIR-mean, NIR-homogeneity, NIR-contrast, RE-mean, RE-secondmoment, RE-correlation
VIs + DWT | R, G, NIR, RE, RVI, GCI, GNDVI, NDRE, NDREI, SCCCI, EVI2, MCARI, TCARI, MCARI/OSAVI, WDRVI, R_HH, R_HL, R_LH, R_LL, G_HH, G_LH, G_LL, B_HL, B_LH, B_LL, NIR_HH, NIR_HL, NIR_LH, NIR_LL, RE_HH, RE_HL, RE_LH, RE_LL
TIs + DWT | R-mean, R-variance, R-homogeneity, R-contrast, R-dissimilarity, R-entropy, R-correlation, G-mean, G-variance, G-homogeneity, G-contrast, G-dissimilarity, G-entropy, G-secondmoment, G-correlation, B-mean, B-variance, B-homogeneity, B-contrast, B-dissimilarity, B-entropy, B-secondmoment, NIR-mean, NIR-variance, NIR-homogeneity, NIR-contrast, NIR-dissimilarity, NIR-entropy, NIR-secondmoment, NIR-correlation, RE-mean, RE-variance, RE-homogeneity, RE-contrast, RE-dissimilarity, RE-entropy, RE-secondmoment, RE-correlation, R_HH, R_HL, R_LH, R_LL, G_HH, G_HL, G_LH, G_LL, B_HH, B_HL, B_LH, B_LL, NIR_HH, NIR_HL, NIR_LH, NIR_LL, RE_HH, RE_HL, RE_LH, RE_LL
VIs + TIs + DWT | R, NIR, RE, RVI, GCI, GNDVI, NDRE, NDREI, SCCCI, TCARI, MCARI/OSAVI, WDRVI, R-mean, R-variance, R-homogeneity, R-contrast, R-entropy, R-correlation, G-mean, G-contrast, G-correlation, B-mean, B-variance, B-homogeneity, B-contrast, B-dissimilarity, B-entropy, B-secondmoment, NIR-mean, NIR-variance, NIR-dissimilarity, NIR-entropy, NIR-secondmoment, NIR-correlation, RE-mean, RE-variance, RE-homogeneity, RE-contrast, RE-dissimilarity, RE-entropy, RE-secondmoment, RE-correlation, R_HH, R_HL, R_LH, R_LL, G_LL, B_HH, B_HL, B_LH, B_LL, NIR_HH, NIR_HL, NIR_LL, RE_HH, RE_LH, RE_LL
Table 6. Comparison of modeling accuracy of VIs at different UAV flight altitudes.
Altitude | Model | Train R2 | Train RMSE | Train RRMSE | Train RPD | Test R2 | Test RMSE | Test RRMSE | Test RPD
20 m | Ridge | 0.7400 | 1.6298 | 0.0327 | 1.9787 | 0.7092 | 1.6858 | 0.0343 | 1.9196
20 m | RF | 0.9364 | 0.6119 | 0.0123 | 5.2705 | 0.6777 | 1.7750 | 0.0361 | 1.8232
20 m | SVR | 0.8038 | 1.4159 | 0.0284 | 2.2777 | 0.7635 | 1.5204 | 0.0309 | 2.1284
20 m | BPNN | 0.6640 | 1.8529 | 0.0371 | 1.7405 | 0.6609 | 1.8206 | 0.0370 | 1.7775
40 m | Ridge | 0.7994 | 1.4317 | 0.0287 | 2.2526 | 0.7755 | 1.4813 | 0.0301 | 2.1864
40 m | RF | 0.9628 | 0.6163 | 0.0123 | 5.2328 | 0.6098 | 1.9530 | 0.0397 | 1.6570
40 m | SVR | 0.7921 | 1.4576 | 0.0292 | 2.2125 | 0.7479 | 1.5696 | 0.0319 | 2.0617
40 m | BPNN | 0.6920 | 1.7741 | 0.0355 | 1.8178 | 0.6831 | 1.7599 | 0.0358 | 1.8388
60 m | Ridge | 0.7961 | 1.4433 | 0.0289 | 2.2345 | 0.7871 | 1.4424 | 0.0293 | 2.2435
60 m | RF | 0.9735 | 0.5208 | 0.0104 | 6.1920 | 0.6131 | 1.9447 | 0.0395 | 1.6640
60 m | SVR | 0.7722 | 1.5256 | 0.0306 | 2.1139 | 0.7203 | 1.6533 | 0.0336 | 1.9574
60 m | BPNN | 0.7010 | 1.7480 | 0.0350 | 1.8450 | 0.6817 | 1.7637 | 0.0359 | 1.8348
80 m | Ridge | 0.8060 | 1.4079 | 0.0282 | 2.2906 | 0.7673 | 1.5080 | 0.0307 | 2.1459
80 m | RF | 0.9606 | 0.6344 | 0.0127 | 5.0837 | 0.5924 | 1.9961 | 0.0406 | 1.6212
80 m | SVR | 0.7413 | 1.6257 | 0.0326 | 1.9837 | 0.7078 | 1.6900 | 0.0344 | 1.9148
80 m | BPNN | 0.7006 | 1.7491 | 0.0350 | 1.8438 | 0.6773 | 1.7760 | 0.0361 | 1.8221
100 m | Ridge | 0.7884 | 1.4704 | 0.0295 | 2.1932 | 0.7692 | 1.5020 | 0.0305 | 2.1545
100 m | RF | 0.9605 | 0.6355 | 0.0127 | 5.0746 | 0.6201 | 1.9270 | 0.0392 | 1.6793
100 m | SVR | 0.7944 | 1.4493 | 0.0290 | 2.2251 | 0.7350 | 1.6094 | 0.0327 | 2.0107
100 m | BPNN | 0.6922 | 1.7735 | 0.0355 | 1.8185 | 0.6617 | 1.8185 | 0.0370 | 1.7795
120 m | Ridge | 0.7797 | 1.5002 | 0.0301 | 2.1497 | 0.7384 | 1.5991 | 0.0325 | 2.0237
120 m | RF | 0.9598 | 0.6410 | 0.0128 | 5.0315 | 0.6128 | 1.9453 | 0.0395 | 1.6635
120 m | SVR | 0.7614 | 1.5613 | 0.0313 | 2.0656 | 0.7462 | 1.5749 | 0.0320 | 2.0547
120 m | BPNN | 0.7043 | 1.7381 | 0.0348 | 1.8554 | 0.6588 | 1.8262 | 0.0371 | 1.7720
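For orientation, an illustrative training loop for the four algorithms compared in Table 6, using grid-search cross-validation (cf. [67]), is sketched below; the synthetic data and the hyperparameter grids are assumptions, not the authors' settings, and MLPRegressor stands in for the BPNN.

```python
# Illustrative sketch: fitting the four regressors with grid-search CV.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import Ridge
from sklearn.model_selection import GridSearchCV
from sklearn.neural_network import MLPRegressor
from sklearn.svm import SVR

rng = np.random.default_rng(1)
X_train = rng.random((100, 15))                              # placeholder predictors
y_train = 40 + 10 * X_train[:, 0] + rng.normal(0, 1, 100)    # placeholder SPAD values

models = {
    "Ridge": (Ridge(), {"alpha": [0.01, 0.1, 1.0, 10.0]}),
    "RF": (RandomForestRegressor(random_state=0),
           {"n_estimators": [100, 300], "max_depth": [None, 5, 10]}),
    "SVR": (SVR(kernel="rbf"), {"C": [1, 10, 100], "gamma": ["scale", 0.1]}),
    "BPNN": (MLPRegressor(max_iter=5000, random_state=0),
             {"hidden_layer_sizes": [(16,), (32, 16)]}),
}

best = {}
for name, (estimator, grid) in models.items():
    search = GridSearchCV(estimator, grid, cv=5,
                          scoring="neg_root_mean_squared_error")
    search.fit(X_train, y_train)
    best[name] = search.best_estimator_
    print(name, search.best_params_)
```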
Table 7. Comparison of modeling accuracy under different variable combinations.
Variable Set | Model | Train R2 | Train RMSE | Train RRMSE | Train RPD | Test R2 | Test RMSE | Test RRMSE | Test RPD
VIs | Ridge | 0.7400 | 1.6298 | 0.0327 | 1.9787 | 0.7092 | 1.6858 | 0.0343 | 1.9196
VIs | RF | 0.9364 | 0.6119 | 0.0123 | 5.2705 | 0.6777 | 1.7750 | 0.0361 | 1.8232
VIs | SVR | 0.8038 | 1.4159 | 0.0284 | 2.2777 | 0.7635 | 1.5204 | 0.0309 | 2.1284
VIs | BPNN | 0.6640 | 1.8529 | 0.0371 | 1.7405 | 0.6609 | 1.8206 | 0.0370 | 1.7775
TIs | Ridge | 0.9160 | 0.9263 | 0.0186 | 3.4816 | 0.7212 | 1.6507 | 0.0336 | 1.9603
TIs | RF | 0.9576 | 0.6580 | 0.0132 | 4.9015 | 0.6866 | 1.7502 | 0.0356 | 1.8489
TIs | SVR | 0.8684 | 1.1594 | 0.0232 | 2.7816 | 0.7812 | 1.4623 | 0.0297 | 2.2130
TIs | BPNN | 0.8889 | 1.0654 | 0.0213 | 3.0269 | 0.6265 | 1.9106 | 0.0388 | 1.6938
DWT | Ridge | 0.6948 | 1.7658 | 0.0354 | 1.8263 | 0.6563 | 1.8327 | 0.0373 | 1.7657
DWT | RF | 0.9502 | 0.7135 | 0.0143 | 4.5198 | 0.7023 | 1.7057 | 0.0347 | 1.8972
DWT | SVR | 0.6838 | 1.7975 | 0.0360 | 1.7941 | 0.6744 | 1.7840 | 0.0363 | 1.8139
DWT | BPNN | 0.7864 | 1.4772 | 0.0296 | 2.1831 | 0.5076 | 2.1937 | 0.0446 | 1.4752
VIs + DWT | Ridge | 0.7372 | 1.6388 | 0.0328 | 1.9679 | 0.5324 | 2.1377 | 0.0435 | 1.5138
VIs + DWT | RF | 0.9679 | 0.5724 | 0.0115 | 5.6345 | 0.7296 | 1.6258 | 0.0331 | 1.9904
VIs + DWT | SVR | 0.7899 | 1.4651 | 0.0294 | 2.2012 | 0.7940 | 1.4189 | 0.0288 | 2.2807
VIs + DWT | BPNN | 0.7399 | 1.6302 | 0.0327 | 1.9783 | 0.4418 | 2.3357 | 0.0475 | 1.3855
TIs + DWT | Ridge | 0.8797 | 1.1102 | 0.0222 | 2.9048 | 0.7824 | 1.4582 | 0.0296 | 2.2191
TIs + DWT | RF | 0.9570 | 0.6625 | 0.0133 | 4.8679 | 0.7151 | 1.6688 | 0.0339 | 1.9392
TIs + DWT | SVR | 0.9008 | 1.0068 | 0.0202 | 3.2031 | 0.7909 | 1.4294 | 0.0291 | 2.2639
TIs + DWT | BPNN | 0.8403 | 1.2773 | 0.0256 | 2.5248 | 0.6043 | 1.9665 | 0.0400 | 1.6455
VIs + TIs | Ridge | 0.8651 | 1.1742 | 0.0235 | 2.7465 | 0.7959 | 1.4124 | 0.0287 | 2.2919
VIs + TIs | RF | 0.9687 | 0.5658 | 0.0113 | 5.7003 | 0.7651 | 1.5152 | 0.0308 | 2.1357
VIs + TIs | SVR | 0.8816 | 1.1000 | 0.0220 | 2.9319 | 0.8148 | 1.3455 | 0.0274 | 2.4050
VIs + TIs | BPNN | 0.8996 | 1.0126 | 0.0203 | 3.1847 | 0.6819 | 1.7631 | 0.0358 | 1.8354
VIs + TIs + DWT | Ridge | 0.8959 | 1.0315 | 0.0207 | 3.1265 | 0.8208 | 1.3233 | 0.0269 | 2.4455
VIs + TIs + DWT | RF | 0.9664 | 0.5860 | 0.0117 | 5.5029 | 0.8050 | 1.3805 | 0.0281 | 2.3440
VIs + TIs + DWT | SVR | 0.8504 | 1.2364 | 0.0248 | 2.6083 | 0.8390 | 1.2544 | 0.0255 | 2.5798
VIs + TIs + DWT | BPNN | 0.8316 | 1.3117 | 0.0263 | 2.4586 | 0.7025 | 1.7051 | 0.0347 | 1.8978
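The four accuracy metrics reported in Tables 6 and 7 can be reproduced from predictions as sketched below, assuming the usual definitions: RRMSE as RMSE divided by the observed mean, and RPD as the standard deviation of the observations divided by RMSE (cf. [68]); the function name and the toy input values are illustrative.

```python
# Sketch of the four accuracy metrics in Tables 6 and 7.
import numpy as np

def evaluate(y_true, y_pred):
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    rmse = float(np.sqrt(np.mean((y_true - y_pred) ** 2)))
    ss_res = float(np.sum((y_true - y_pred) ** 2))
    ss_tot = float(np.sum((y_true - y_true.mean()) ** 2))
    return {
        "R2": 1.0 - ss_res / ss_tot,
        "RMSE": rmse,
        "RRMSE": rmse / float(y_true.mean()),            # relative RMSE
        "RPD": float(y_true.std(ddof=1)) / rmse,         # SD(obs) / RMSE
    }

# Toy usage with made-up SPAD observations and predictions:
print(evaluate([48.2, 50.1, 46.7, 52.3], [47.5, 49.8, 47.9, 51.6]))
```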