Article

Predicting Equivalent Water Thickness in Wheat Using UAV Mounted Multispectral Sensor through Deep Learning Techniques

1
Key Laboratory of Crop Water Use and Regulation, Ministry of Agriculture, Farmland Irrigation Research Institute, Chinese Academy of Agricultural Sciences, Xinxiang 453003, China
2
Graduate School of Agricultural and Life Sciences, The University of Tokyo, Tokyo 113-8657, Japan
3
Metropolitan Solar Inc., Washington, DC 20032, USA
*
Author to whom correspondence should be addressed.
Remote Sens. 2021, 13(21), 4476; https://doi.org/10.3390/rs13214476
Submission received: 6 October 2021 / Revised: 30 October 2021 / Accepted: 3 November 2021 / Published: 8 November 2021
(This article belongs to the Special Issue Proximal and Remote Sensing for Precision Crop Management)

Abstract
The equivalent water thickness (EWT) is an important biophysical indicator of water status in crops. The effective monitoring of EWT in wheat under different nitrogen and water treatments is important for irrigation management in precision agriculture. This study aimed to investigate the performance of machine learning (ML) algorithms in retrieving wheat EWT. For this purpose, a rain shelter experiment (Exp. 1) with four irrigation quantities (0, 120, 240, 360 mm) and two nitrogen levels (75 and 255 kg N/ha), and field experiments (Exps. 2–3) with the same irrigation and rainfall water levels (360 mm) but different nitrogen levels (varying from 75 to 255 kg N/ha), were conducted in the North China Plain. The canopy reflectance was measured for all plots at an altitude of 30 m using an unmanned aerial vehicle (UAV)-mounted multispectral camera. Destructive sampling was conducted immediately after the UAV flights to measure total fresh and dry weight. The deep neural network (DNN), a special type of neural network that has shown strong performance in regression analysis, was compared with other ML models. A feature selection (FS) algorithm based on the decision tree (DT) was used as the automatic relevance determination method to obtain the relative relevance of 5 out of 67 vegetation indices (VIs), which were used for estimating EWT. The selected VIs were used to estimate EWT using multiple linear regression (MLR), deep neural network multilayer perceptron (DNN-MLP), artificial neural network multilayer perceptron (ANN-MLP), boosted regression tree (BRT), and support vector machines (SVMs). The results show that the DNN-MLP, with R2 = 0.934, NSE = 0.933, RMSE = 0.028 g/cm2, and MAE = 0.017 g/cm2, outperformed the other ML algorithms (ANN-MLP, BRT, and SVM-Polynomial) owing to its high capacity for estimating EWT. Our findings support the conclusion that ML can potentially be applied in combination with VIs for retrieving EWT. Despite the complexity of the ML models, the EWT map should help farmers improve the real-time irrigation efficiency of wheat by quantifying field water content and addressing its variability.

Graphical Abstract

1. Introduction

Wheat production accounts for nearly 50% of China’s national agricultural output. Increasing wheat consumption requires effective decision-making during the wheat growth period. Classically, field data collection has been employed to diagnose plant biophysical parameters, including equivalent water thickness (EWT) [1]. Improving crop water management requires the accurate and timely monitoring of water in the plant [2]. EWT has previously been used to derive the expected grain yield [3] by managing irrigation scheduling [4]. Although the classical method is accurate, it is time-consuming, laborious, and does not indicate the spatial variability of EWT in the entire field [5,6,7]. Airborne remote sensing has been used to detect plant water variability. Recently, UAV-mounted multispectral, hyperspectral, and thermal sensors have been used to acquire very high spectral and spatial resolutions [8]. Many studies have explored the accuracy of crop biophysical parameter retrieval using UAV remote sensing platforms [8,9,10,11,12,13]. The UAV platform not only allows non-destructive and timely data acquisition but also captures variability at the field scale. Therefore, it is also possible to use UAV-based remote sensing techniques to increase the accuracy of EWT retrieval while reducing the assessment time.
In addition, EWT has been assessed using broad wavebands [14] and narrow wavebands in the near-infrared (NIR) and shortwave infrared (SWIR) regions [15,16,17,18]. Ceccato et al. [15,17] confirmed that the absorption bands of water molecules in plants lie in the region of 900 to 2500 nm. The reflectance spectrum of green vegetation in this region is reported to be affected by strong liquid water absorption. Tucker [19] suggested that the 1555–1750 nm region is best suited for the remote sensing of plant canopy water status from satellite platforms. In the past, data analysis techniques such as laboratory spectral reflectance measurements and remote sensing platforms using hyperspectral sensors have been used to estimate EWT [1,20,21,22,23]. Physical methods have shown that the simple ratio between 1600 and 820 nm is heavily influenced by EWT [15]. Recently, multispectral sensors with higher spatial and spectral resolutions have been used for site-specific crop management, providing a reliable and effective remotely sensed source of information for agriculture. For instance, multispectral data, with wavelengths located in the 500 to 800 nm region, have been used to assess chlorophyll content [24,25] and plant nitrogen content [26,27]. However, the most commonly used multispectral vegetation indices (VIs) have limitations when used for predicting plant water content via a linear or multilinear model. In a previous study by Thornton et al. [28], the change in plant nitrogen concentration was explained by changes in shoot water content, suggesting that changes in chlorophyll or nitrogen content affect plant water content. Poblete et al. [13] have suggested that several indices applied to information between 500 and 800 nm can help to indirectly estimate water status using nonlinear methods.
Estimating EWT at the canopy level using optical remotely sensed data can be achieved via several methodologies, including the empirical method based on hyperspectral VIs [29]. However, the indices generated to estimate EWT in this context yield poor performance. Physical models have also been used to assess EWT from remotely sensed data, based on physical laws such as radiative transfer models that describe the transfer and interaction of radiation within the atmosphere and plant canopy [30,31]. The biggest disadvantage of radiative transfer modeling is its requirement for site-specific information for proper model parameterization, which is not always available. Both VIs-based and physical models are thus critically limited in their ability to accurately estimate EWT. Consequently, alternative methods for the retrieval of EWT using multispectral VIs are needed. Machine learning (ML) is an advanced computational approach that has shown good performance in retrieving crop biophysical parameters from multispectral and hyperspectral reflectance data. ML models have been used to produce accurate and robust models in engineering [32], agriculture [33,34,35], hydrology [36], and forestry [37]. ML is a step toward implementing artificial intelligence without the need for explicit programming, while deep learning, a subset of ML, makes intuitive and intelligent decisions using a neural network stacked layer-wise. ML algorithms such as neural networks (NNs) are known to learn the underlying relationship between input and output data [38]. The NN is a powerful learning algorithm with exemplary performance in regression problems [39,40,41]. The success of NNs captured the attention of researchers, who explored the possibility of training NNs with several hidden layers. However, this success was limited to one or two hidden layers.
For instance, the difficulty of training NNs such as the artificial neural network (ANN) with several hidden layers lies in vanishing or exploding gradients as the number of hidden layers (the depth of the network) increases. The introduction of the deep neural network (DNN) mitigated the issue of vanishing and exploding gradients, thereby speeding up training [42]. An attractive feature of the DNN is its ability to exploit new activation functions and learning algorithms. In addition, deep architectures often have an advantage over shallow ANN architectures when dealing with complex learning problems [38].
However, ML techniques have not yet been fully employed to bridge the knowledge gap between VIs from the visible–infrared region and EWT. Consequently, it is important to define an ML method to accurately assess vegetation EWT using only VIs. Despite its advantages, ML presents some major drawbacks. ML models can include superfluous input variables, increasing complexity and providing erroneous information about the variables that actually affect model performance [43,44]. To solve these problems, dimensionality reduction approaches, such as feature selection (FS) methods, are applied to reduce the number of input variables to only the relevant ones. As such, FS is used to clean up redundant, irrelevant, and noisy data, thereby boosting performance. In FS, a subset of features is selected from the original set based on their redundancy and relevance [44]. In recent studies [45,46,47], FS methods such as filter, wrapper, and embedded methods have been applied based on their interaction with the learning model. Filter and wrapper methods have been widely used in various remote sensing applications; nevertheless, their large computational time is a major drawback. Embedded methods, such as decision tree (DT) algorithms, use ensemble learning and hybrid learning methods for FS [48]. The DT offers excellent sparseness performance and is less computationally intensive than the filter and wrapper methods [47].
The EWT varies with vegetation growth, and thus estimating EWT across the field at different growth stages using ML methods and VIs could offer farmers critical time- and location-specific information for monitoring their crops and managing farming activities, thus enabling them to increase yields. Therefore, the objective of the present study was to investigate the feasibility of different ML algorithms, such as the DNN, ANN, boosted regression tree (BRT), and support vector machine (SVM) models, for predicting wheat EWT using multispectral single-band images and VIs. The key idea behind using several ML methods is to take advantage of each method’s capability to predict EWT. We thus investigated an FS algorithm based on the DT to determine the best VIs for use as input parameters in the different machine learning models.

2. Materials and Methods

2.1. Description of The Study Area and Experimental Design

Three experiments with varied water and nitrogen rates were conducted from September 2019 to June 2020 at the Agricultural Station of the Chinese Academy of Agricultural Sciences (CAAS), located in Qiliying county of the North China Plain (Figure 1). For all experiments, UAV flights and destructive sampling were performed from March to May 2020. This area is characterized by a subtropical warm climate with rainfall varying between 900 and 1200 mm, about 65 to 70% of which occurs between June and September. The average solar radiation is about 4900 MJ m−2 yr−1. The mean annual temperature is about 14.5 °C. Exp. 1 was conducted in a rain-out shelter facility using a split-plot design with four treatments of different irrigation quantities (0, 120, 240, 360 mm) and two nitrogen (N) levels (75 and 255 kg N ha−1). In Exp. 1, twenty-four plot experiments were conducted in the irrigation treatment involving one wheat cultivar, Zhoumai27. All of the plot experiments adopted a randomized complete block design. The size of each plot was 3.5 m × 1.9 m, and this did not change during the study period. N fertilizer was applied via a two-plot split design with three replications, and the N was applied only at the top dressing. In addition to the plot experiments, two field experiments (Exps. 2–3) were conducted under field conditions to precisely compare the wheat water statuses. The plot size for each treatment varied from 14 m × 10 m to 15 m × 10 m. The wheat cultivar was Zhoumai22, and each treatment was performed once. All the plots received a basic N application of 75 kg N ha−1, and at the top dressing the application rate varied from 0 to 150 kg N ha−1. Previous field plot experiments have had various objectives, but this study took advantage of the varying N status to evaluate different UAV remote sensing-based water status estimation methods. Areas of 0.36 m2 were selected as sampling areas in each experiment. In Exps. 2–3, however, the samples were taken at three different locations, and the average value was used.
The abbreviations and the parameters and variables used are summarized in Table 1 and Table 2, respectively.

2.2. Sampling and Measurements

Destructive sampling was undertaken immediately after UAV remote sensing data collection. Wheat plants were harvested from 0.36 m2 quadrats in all experiments. The flights and the ground samplings occurred on 7 March, 4 April, and 28 May 2020, reflecting the three stages of growth: the early stem elongation, late stem elongation, and anthesis growth stages. The leaves and stems of harvested wheat plants were separated and weighed, followed by oven drying for 24 h at 105 °C before the dry weight was determined. The canopy equivalent water thickness (EWT_canopy) corresponds to the hypothetical thickness of a single layer of water averaged over the whole ground area (Ag). Wocher et al. [21] defined EWT_canopy as the sum of the EWT of the leaf, stem, and fruit over one square meter of ground. To derive an unbiased estimation of EWT, we considered the fresh and dry masses of the leaf, stem, and ear for the calculation of EWT_canopy over one square meter of ground.
$$\mathrm{Total}\ EWT_{canopy} = \left(FW_{leaf+stem+ears} - DW_{leaf+stem+ears}\right) \cdot A_g^{-1} \quad \left[\mathrm{g\ cm^{-2}}\right]\ \mathrm{or}\ \left[\mathrm{cm}\right]$$
where $A_g$ denotes the ground area, FW is the fresh sample weight, and DW is the oven-dry weight. The EWT_leaf, EWT_stem, EWT_ear, and EWT_canopy per cm2 were calculated from the specific water content per ground area.
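As an arithmetic check of the equation above, EWT_canopy can be computed directly from a quadrat’s fresh and dry weights. This is a minimal Python sketch with hypothetical sample weights (the values are illustrative, not measured data; the study reported weights per 0.36 m2 quadrat):

```python
def ewt_canopy(fw_g, dw_g, ground_area_cm2):
    """Canopy equivalent water thickness: the water mass (fresh minus
    dry weight, in g) divided by the ground area (cm^2). Because 1 g
    of water occupies 1 cm^3, g/cm^2 is numerically equal to cm."""
    return (fw_g - dw_g) / ground_area_cm2

# Hypothetical quadrat: 0.36 m^2 = 3600 cm^2 of ground area
ewt = ewt_canopy(fw_g=1530.0, dw_g=450.0, ground_area_cm2=3600.0)
print(ewt)  # 1080 g of water over 3600 cm^2 -> 0.3 g/cm^2
```

The g/cm2-to-cm equivalence is why the same quantity can be read as a water layer thickness.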

2.3. UAV Data Collection and Processing

The multirotor UAV Spreading Wings S900 (DJI-Innovations Inc., Shenzhen, China), with six rotors, GPS, and flight control stabilizers, was used in this study. To ensure maximum overlap in the fields, a flight path was set before the flights. The multispectral camera mounted on the UAV, a Micasense RedEdge-MX (MicaSense, Seattle, WA, USA) (https://micasense.com/rededge-mx/, accessed on 3 November 2021), has five bands in the VIS–NIR spectral range (Red, Green, Blue, NIR, and Red Edge). The details of the multispectral sensor are shown in Table 3. Images were acquired across the entire field at the nadir viewing angle. The data were collected between 11 a.m. and 2 p.m. to minimize the shading effect of the canopy, at a speed of 2 m s−1 and an altitude of 30 m (Table 4). Radiometric calibration was performed using images of a spectral white panel taken before flight. The calibration images were used to calibrate each band during image processing in the Pix4D mapper (PIX4d, Lausanne, Switzerland) (https://www.pix4d.com/, accessed on 3 November 2021). The following steps were used for UAV image processing after data collection. The Pix4D mapper was used to process all the images into one large image using the calibration images. The individual images were combined to form a large ortho-mosaic. The ortho-mosaic images were then radiometrically corrected and converted into reflectance. The resulting large ortho-mosaic images were Blue, Green, Near-infrared (NIR), Red, and Red Edge (RE). The large ortho-mosaic images, or their converted reflectance values, were used to calculate the different VIs used in this study. The UAV data acquisition, processing methodology, image segmentation, and VIs calculation are described in Figure 2.
An automatic image segmentation algorithm called the Otsu segmentation algorithm [49] was implemented to minimize the effects of soil background and other non-leaf materials on canopy information pixels. The normalized difference vegetation index (NDVI) was first calculated and then used with the Otsu algorithm to separate soil from the canopy. The NDVI was employed for separating vegetation from other materials (soil included) [50]. The result of this was used in each plot in the form of a threshold value representing the border between soil and canopy. Soil or other background pixels were defined as 0 and wheat pixels as 1 during the extraction of the NDVI map. The integration of NDVI with the Otsu algorithm yielded efficient results in separating canopy from soil. The entire process of the NDVI–Otsu method was implemented using MATLAB and ArcMap.
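The NDVI–Otsu step was implemented in MATLAB and ArcMap. Purely to illustrate the thresholding logic, a NumPy sketch on synthetic NDVI values (the soil and canopy modes below are invented) might look like this:

```python
import numpy as np

def otsu_threshold(values, bins=256):
    """Otsu's method: pick the threshold that maximizes the
    between-class variance of a 1-D sample (here, NDVI values)."""
    hist, edges = np.histogram(values, bins=bins)
    p = hist / hist.sum()                      # probability per bin
    centers = (edges[:-1] + edges[1:]) / 2
    best_t, best_var = centers[0], -1.0
    for k in range(1, bins):
        w0, w1 = p[:k].sum(), p[k:].sum()      # class weights
        if w0 == 0 or w1 == 0:
            continue
        m0 = (p[:k] * centers[:k]).sum() / w0  # class means
        m1 = (p[k:] * centers[k:]).sum() / w1
        var = w0 * w1 * (m0 - m1) ** 2         # between-class variance
        if var > best_var:
            best_var, best_t = var, centers[k]
    return best_t

# Synthetic NDVI map: soil pixels near 0.15, wheat canopy near 0.75
rng = np.random.default_rng(0)
ndvi = np.concatenate([rng.normal(0.15, 0.04, 5000),
                       rng.normal(0.75, 0.05, 5000)])
t = otsu_threshold(ndvi)
mask = (ndvi > t).astype(np.uint8)  # 1 = wheat canopy, 0 = soil/background
```

The binary mask plays the same role as the 0/1 labeling described above for the NDVI map extraction.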

2.4. EWT_canopy Regression Model Development

2.4.1. Machine Learning for Regression

In this subsection, we present the ML regression methods used to predict EWT_canopy from the VIs. Figure 3 illustrates a typical workflow of a supervised ML algorithm for regression in MATLAB. In the first step, the FS method based on the DT is used to reduce the input data to only five relevant input parameters. The next step is to normalize the data using the minimum–maximum normalization technique. Seventy (70) percent of the normalized data were used as a training dataset to model EWT, while 15% were used for validation and the remaining 15% for testing all the machine learning models used in this study. Next, the VIs and measured EWT_canopy values pass into the training phase, where machine learning algorithms are used to identify a good model that can map the inputs to the desired outputs. The validation and testing phases provide feedback to the ML phase so as to improve model accuracy. The training process is repeated until the desired accuracy level is achieved. Once a model is constructed, it is used to predict EWT_canopy from new VIs data. FW and DW are estimated separately using the vegetation indices and the SVM, BRT, ANN-MLP, and DNN-MLP algorithms. The resulting predicted FW and DW are used to estimate EWT_canopy. MATLAB (MathWorks) was used to simulate all the machine learning models used in this study. Constructing the EWT_canopy maps involved two steps. The first was to convert the calculated VIs maps (G, MTVI2, RE, OSAVI, NIR) to an “n × m” matrix with n rows and m columns. To be fed into the ML model, the matrix has to be presented as “n × 1”, with n rows and one column; we used the reshape function in MATLAB for this. The output data are the predicted EWT_canopy values, given in the form of an “n × 1” matrix. The second step was to reshape the “n × 1” matrix back to “n × m” and convert it to an image using the MATLAB imwrite function.
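The workflow above (min–max normalization, a 70/15/15 split, and the reshape between an n × m VI raster and the n × 1 model input) can be sketched in Python with random stand-in data; the study itself performed these steps in MATLAB, and all array sizes here are illustrative:

```python
import numpy as np

rng = np.random.default_rng(42)
X = rng.random((267, 5))           # 5 selected VIs (G, MTVI2, RE, OSAVI, NIR)
y = rng.random(267)                # stand-in for measured EWT_canopy

# Min-max normalization of each VI column to [0, 1]
Xn = (X - X.min(0)) / (X.max(0) - X.min(0))

# 70/15/15 split into training, validation, and test sets
idx = rng.permutation(len(y))
n_tr, n_val = int(0.70 * len(y)), int(0.15 * len(y))
train, val, test = np.split(idx, [n_tr, n_tr + n_val])

# Map step: an n x m VI raster is flattened to (n*m, 1) for prediction,
# then the predictions are reshaped back to n x m to render the EWT map
vi_map = rng.random((120, 80))
flat = vi_map.reshape(-1, 1)
restored = flat.reshape(vi_map.shape)
```

The flatten-and-restore round trip mirrors the MATLAB reshape calls described in the text.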

2.4.2. Multiple Linear Regression

Multiple regression is generally used to explain the relationships between multiple independent/input variables and one dependent/target variable. The general form of the multiple regression equation is:
$$y = a + \beta_1 X_1 + \beta_2 X_2 + \dots + \beta_n X_n + \varepsilon$$
where y is the dependent variable; a is the intercept; the parameters $\beta_1, \beta_2, \dots, \beta_n$ are the regression coefficients associated with $X_1, X_2, \dots, X_n$, respectively; and $\varepsilon$ is the regression residual reflecting the difference between the observed and fitted linear relationships. The independent variables are the VIs, represented as $X_1, X_2, \dots, X_n$.
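The regression equation above can be fitted by ordinary least squares. A short NumPy sketch with synthetic VI data (the coefficients are invented for illustration, not the study’s fitted values):

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.random((100, 5))                                  # 5 VIs as predictors
beta_true = np.array([0.4, -0.5, 1.4, 0.9, -1.4, -0.35])  # intercept + 5 slopes
y = beta_true[0] + X @ beta_true[1:] + rng.normal(0, 0.01, 100)

# Ordinary least squares: augment X with a column of ones for the intercept
A = np.column_stack([np.ones(len(X)), X])
beta_hat, *_ = np.linalg.lstsq(A, y, rcond=None)
```

With low noise, the recovered coefficients closely match the generating ones.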

2.4.3. Support Vector Machine

The support vector machine (SVM), introduced by Boser et al. [51], is one of the most commonly applied supervised learning methods for regression as well as classification problems. When used for regression problems, the SVM model is known as support vector regression (SVR), which predicts a target from input variables. SVR is a powerful algorithm with the flexibility to tolerate an error margin (ϵ): predictions deviating from the target by less than ϵ incur no penalty, and this tolerance can be tuned to the acceptable error rate. The kernel function is important for SVM analysis. Fan et al. [52] used SVM to estimate daily maize transpiration, and they found that SVM, alongside decision tree and deep learning models, can successfully estimate daily maize transpiration. Durbha et al. [53] retrieved the leaf area index from a multiangle imaging spectroradiometer.
In the present study, the SVM analyses were performed using different kernel functions, such as the Gaussian radial basis function (RBF) and polynomial kernel functions.
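A hedged sketch of SVR with the two kernel families used in this study, written with scikit-learn on synthetic data (the original analyses were run in MATLAB, and the hyperparameter values below are illustrative assumptions):

```python
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(0)
X = rng.random((200, 5))                  # stand-in for normalized VIs
y = np.sin(2 * X[:, 0]) + 0.5 * X[:, 1]   # smooth nonlinear target

# The two kernels compared in the study: Gaussian (RBF) and polynomial
svr_rbf = SVR(kernel="rbf", C=10.0, epsilon=0.01).fit(X, y)
svr_poly = SVR(kernel="poly", degree=3, coef0=1.0,
               C=10.0, epsilon=0.01).fit(X, y)

r2_rbf = svr_rbf.score(X, y)    # coefficient of determination on the fit
r2_poly = svr_poly.score(X, y)
```

Points inside the ϵ-tube contribute no loss, which is the tolerance property described above.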

2.4.4. Boosted Regression Tree

Regression tree methods are commonly used to construct a model that can predict and explain target data from input data. The regression tree can capture the effect of each input variable on the target variable. The gradient boosting method combines several simple models to improve the prediction performance over a single model [54]. For instance, Zhang et al. [55] used boosted regression trees to model upland rice yield responses to climate factors in the Sahel. A boosted regression tree (BRT) was employed in this study to model EWT_canopy using spectral indices. BRT incorporates the strengths of regression trees and boosting algorithms [55]. BRT is commonly used because it does not require data transformation or outlier elimination.
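To make the boosting idea concrete, here is a scikit-learn sketch on synthetic data (the study used MATLAB; tree depth, learning rate, and ensemble size are illustrative assumptions):

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)
X = rng.random((300, 5))                 # stand-in for the 5 selected VIs
y = 0.3 * X[:, 0] + X[:, 1] * X[:, 2]    # target with an interaction term

# Boosting fits many shallow regression trees sequentially, each one
# trained on the residuals of the current ensemble
brt = GradientBoostingRegressor(n_estimators=300, max_depth=3,
                                learning_rate=0.05).fit(X, y)
r2 = brt.score(X, y)
```

The shallow trees capture variable interactions (here x1·x2) that a single linear model would miss.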

2.4.5. Artificial Neural Network Regression Model

The artificial neural network model is inspired by the structure of neural networks in the brain. Zha et al. [39] evaluated the ANN and other ML methods for estimating rice (Oryza sativa L.) aboveground biomass, plant N uptake, and N nutrition index. The ANN-MLP structure used in this study is a three-layer learning network consisting of an input layer, a hidden layer, and an output layer. The model minimizes the mean square error during the training process, using tangent and sigmoid transfer functions [55]. A maximum epoch of 10,000 iterations was set during training, and we employed gradient descent with momentum and a learning rate of 0.2. A Bayesian regularized neural network model with the Levenberg–Marquardt (LM) backpropagation algorithm was used during the training process to improve the generalization of the model. Sigmoid and logistic sigmoid functions were used as the activation function in each neuron, while a linear transfer function was used to calculate the network output. Five spectral indices were used as the inputs in this study, while EWT_canopy was regarded as the target. The data were divided into three subsets: training, validation, and testing. The “dividerand” function of MATLAB was used to randomly divide the data.
Forward (transfer) equation:
$$y_i = f(net_i) = f\left(\sum_j w_{ij} x_j + b_i\right)$$
where $f(net_i)$ is the transfer function, with an output range of [0, 1] for the logistic sigmoid and [−1, 1] for the tangent sigmoid; $x_j$ is the input from unit j; $w_{ij}$ is the weight of the connection between units i and j; and $b_i$ is the bias.
Backpropagation (error) equation:
$$e_i = \varepsilon_i + \sum_{j>i} w_{ij} \delta_j$$
where the summation index j runs over units with j > i, $\delta_j$ is the error signal of unit j, and $e_i$ and $\varepsilon_i$ are the propagated and injected errors, respectively. The error is propagated from the output layer back to the input layer in order to update the connection weights using a gradient equation.
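As a rough stand-in for the MATLAB ANN-MLP (scikit-learn does not offer Levenberg–Marquardt or Bayesian regularization, so L-BFGS training is used here instead), a single-hidden-layer network with a tangent sigmoid activation can be sketched as:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
X = rng.random((267, 5))                                   # 5 VIs
y = 0.5 * np.tanh(X @ np.array([1.0, -0.5, 0.8, 0.3, -0.2]))

# Three-layer network (input -> one hidden layer -> linear output),
# tanh activation, as in the paper's ANN-MLP; the hidden size and
# solver are assumptions, not the study's exact configuration
ann = MLPRegressor(hidden_layer_sizes=(10,), activation="tanh",
                   solver="lbfgs", max_iter=10000,
                   random_state=0).fit(X, y)
r2 = ann.score(X, y)
```

The linear output layer matches the linear transfer function used for the network output above.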

2.4.6. DNN-MLP Model Deployment

Deep learning algorithms have been used for image classification [56] and regression analyses [34,52,57]. The addition of more data during deep learning improves the performance of the model, which makes this technique superior to other learning techniques, such as ANNs, SVMs, and RF, whose performance reaches a plateau after a certain quantity of data is fed into the model. This study employed a DNN-MLP model with a ReLU transfer function to estimate EWT_canopy using spectral indices, setting the maximum epoch at 10,000 iterations. Adaptive moment estimation (Adam) was used as the optimization algorithm in the DNN-MLP model. The DNN-MLP structure is a three-layered feed-forward neural network consisting of an input layer, a hidden layer, and an output layer (Figure 4). The “dividerand” function in MATLAB was also used to randomly divide the data.
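The ReLU + Adam configuration described above can be mimicked with scikit-learn’s MLPRegressor on synthetic data; the hidden-layer sizes below are illustrative assumptions, not the paper’s architecture:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
X = rng.random((267, 5))                          # 5 VIs
y = np.maximum(X[:, 0] - 0.3, 0.0) + 0.4 * X[:, 1]  # piecewise-linear target

# ReLU activation trained with the Adam optimizer, matching the
# DNN-MLP configuration named in the text; 10,000 iteration cap
dnn = MLPRegressor(hidden_layer_sizes=(32, 16), activation="relu",
                   solver="adam", max_iter=10000, tol=1e-6,
                   random_state=0).fit(X, y)
r2 = dnn.score(X, y)
```

ReLU units are a natural fit here because the target itself contains a ReLU-shaped kink.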

2.5. Data Pre-Processing Techniques

2.5.1. Data Normalization

The input variables were scaled to the same range (0, 1) using the minimum–maximum normalization technique. Data normalization is carried out as part of data preparation for ML. The goal of normalization is to change the values of the input and output variables to a common scale [58]. $X_{min}$ and $X_{max}$ are the minimum and maximum values of the i-th attribute involved in the normalization process. Each input variable (VIs, EWT_canopy, FW, and DW) was normalized using the following Equation (5):
$$X_{norm} = \left(X_i - X_{min}\right)/\left(X_{max} - X_{min}\right)$$
where $X_{norm}$, $X_i$, $X_{min}$, and $X_{max}$ represent the normalized value, the real value of the input variable, the minimum input value, and the maximum input value, respectively. The predicted value was denormalized after training, validation, and testing according to the following Equation (6):
$$Y_i = Y_{min} + Y_{norm}\left(Y_{max} - Y_{min}\right)$$
where $Y_{norm}$, $Y_i$, $Y_{min}$, and $Y_{max}$ represent the normalized value, the real value of the output variable, the minimum output value, and the maximum output value, respectively.
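Equations (5) and (6) form a round trip: a value normalized with the sample minimum and maximum is recovered exactly by denormalization. A minimal Python sketch with hypothetical EWT values:

```python
import numpy as np

def minmax_normalize(x):
    """Equation (5): scale to [0, 1]; also return the min/max
    needed later for denormalization."""
    return (x - x.min()) / (x.max() - x.min()), x.min(), x.max()

def denormalize(y_norm, y_min, y_max):
    """Equation (6): map a [0, 1] value back to the original scale."""
    return y_min + y_norm * (y_max - y_min)

ewt = np.array([0.05, 0.12, 0.30, 0.21])   # hypothetical EWT_canopy, g/cm^2
ewt_norm, lo, hi = minmax_normalize(ewt)
restored = denormalize(ewt_norm, lo, hi)   # recovers the original values
```

Keeping lo and hi from the training data is what allows model outputs to be mapped back to g/cm2.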

2.5.2. Feature Selection

Feature selection is a pre-processing technique used in ML to reduce under- and over-fitting problems [46]. It is used to remove irrelevant input variables and thus improve the learning accuracy of ML algorithms [58]. Several feature selection methods, such as functional discriminant analysis, principal component analysis (PCA), and sensitivity analysis, have been implemented. Feature selection based on a DT was used in this study to score the importance of each input variable in the model. With a DT, the contribution of each feature to the regression and its relative significance can be determined from how high or low the feature sits in the tree relative to the leaf nodes, i.e., the outputs (DW, FW, and EWT_canopy). The five best VIs were used to model DW, FW, and EWT_canopy.
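A sketch of tree-based feature scoring with scikit-learn, using synthetic data standing in for the 67 candidate VIs (which features matter here is invented purely for illustration; the study ran its DT-based FS in MATLAB):

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(0)
n_features = 67                                   # 67 candidate VIs
X = rng.random((267, n_features))
# Only features 3 and 10 actually drive the synthetic target
y = 2.0 * X[:, 3] + 0.5 * X[:, 10] + rng.normal(0, 0.05, 267)

# A fitted tree exposes impurity-based importances; the top-k
# scoring features are kept as model inputs (k = 5 in the study)
tree = DecisionTreeRegressor(random_state=0).fit(X, y)
top5 = np.argsort(tree.feature_importances_)[::-1][:5]
```

Features used in splits near the root accumulate the largest impurity reductions and therefore the highest scores.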

2.5.3. Model Performance

The determination coefficient (R2), Nash–Sutcliffe efficiency (NSE) [59], root mean square error (RMSE), and mean absolute error (MAE) were used to assess the predictive accuracy of the regression models used in this study. The statistical indices are as follows:
$$R^2 = \frac{\left[\sum_{i=1}^{n}\left(y_i-\bar{y}\right)\left(\hat{y}_i-\bar{\hat{y}}\right)\right]^2}{\sum_{i=1}^{n}\left(y_i-\bar{y}\right)^2 \sum_{i=1}^{n}\left(\hat{y}_i-\bar{\hat{y}}\right)^2}$$
$$\mathrm{NSE} = 1 - \frac{\sum_{i=1}^{n}\left(y_i-\hat{y}_i\right)^2}{\sum_{i=1}^{n}\left(y_i-\bar{y}\right)^2}$$
$$\mathrm{RMSE} = \sqrt{\frac{1}{n}\sum_{i=1}^{n}\left(\hat{y}_i-y_i\right)^2}$$
$$\mathrm{MAE} = \frac{1}{n}\sum_{i=1}^{n}\left|\hat{y}_i-y_i\right|$$
where $\hat{y}_i$ and $y_i$ are the predicted and actual values; $\bar{y}$ and $\bar{\hat{y}}$ are the means of the observed and predicted values; and n is the number of data points. The larger the values of R2 and NSE, and the smaller the values of RMSE and MAE, the greater the precision and accuracy of the model in predicting EWT_canopy.
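The four statistics translate directly into code. A NumPy sketch with a small hypothetical pair of observed and predicted EWT_canopy vectors:

```python
import numpy as np

def r2(y, yhat):
    """Squared correlation between observed and predicted values."""
    yc, yhc = y - y.mean(), yhat - yhat.mean()
    return (yc @ yhc) ** 2 / ((yc @ yc) * (yhc @ yhc))

def nse(y, yhat):
    """Nash-Sutcliffe efficiency: 1 minus SSE over total variance."""
    return 1 - ((y - yhat) ** 2).sum() / ((y - y.mean()) ** 2).sum()

def rmse(y, yhat):
    return np.sqrt(((yhat - y) ** 2).mean())

def mae(y, yhat):
    return np.abs(yhat - y).mean()

# Hypothetical observed vs. predicted EWT_canopy values (g/cm^2)
y = np.array([0.10, 0.20, 0.30, 0.40])
yhat = np.array([0.12, 0.19, 0.31, 0.38])
```

Unlike R2, which only measures correlation, NSE also penalizes systematic bias between predictions and observations.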

3. Results

3.1. Dynamic Changes of FW, DW, and EWT_canopy during the Growth Stages

Fresh weight (FW) and dry weight (DW) are amongst the most important crop growth indices. The accurate estimation of wheat FW and DW at different crop growth stages leads to effective agricultural field management. The dynamic changes in FW, DW, and EWT_canopy observed at different growth stages are shown in Figure 5. Table 5 also summarizes the statistics (range, mean, standard deviation) of the in situ measured EWT_leaf, EWT_stem, EWT_ear, and EWT_canopy. From the early stem elongation stage to the late stem elongation stage, there was an increase in FW, DW, and EWT_canopy. However, from the late stem elongation to the anthesis growth stage, there was a decrease in EWT_canopy, which can be explained by DW continuing to increase while FW remained nearly constant. The mean value of FW did not change significantly from the late elongation growth stage to the anthesis growth stage. Our results show that wheat growth was more vigorous in the late stem elongation growth stage compared to the other stages. The mean values of FW at the early stem elongation, late stem elongation, and anthesis growth stages were 14.89, 28.71, and 29.05 t ha−1, respectively, while the mean DW at the same growth stages was 2.67, 6.49, and 10.40 t ha−1, respectively. The FW and DW increased by 92% and 143%, respectively, from the early to late stem elongation growth stages (Figure 5). In contrast, FW and DW increased by only 1.2% and 60%, respectively, from the late stem elongation to the anthesis growth stage.

3.2. Effects of the Input Variables on FW, DW, and EWT_canopy Estimation

Decision tree (DT) methods were implemented in this study to reduce the VIs to only the most useful input variables. The selected input variables were used in different ML models to predict the target variables (EWT_canopy, FW, and DW). Reducing the number of input variables is very important for improving model performance. The importance of the VIs for estimating EWT_canopy, FW, and DW is shown in Figure 6. In this study, the sixty-seven calculated VIs were fed into the DT algorithm to select the five most relevant input variables on the basis of their feature weights (Appendix A Table A1 and Table A2). The FS method was applied separately to the input variables (VIs) and each of the target variables EWT_canopy, FW, and DW. The five VIs chosen as input variables by the algorithm, according to their scores, were G, MTVI2, RE, OSAVI, and NIR, and their relative importance to EWT_canopy varied from 2.4 to 78%. The top five VIs selected by the DT algorithm while modeling FW were G, RE, IKAW, R, and RESR, and their relative importance ranged between 1.48 and 58.54%. MTVI2, RE, RESAVI, R, and NNIR were selected while modeling DW, and their relative importance varied from 2.3 to 80.34%. Red edge (RE) was consistently selected among the top five VIs for estimating EWT_canopy, FW, and DW. MTVI2 was among the top five VIs for EWT_canopy and DW. The R band was also among the top five VIs used for FW and DW.

3.3. EWT_canopy Responses from the Multiple Regression Model

Multiple linear regression (MLR) has been extensively used to determine the relationships between target and input variables. VIs and EWT from the three experiments were used to develop a linear model in this study. Figure 7a shows the scatter plot between E W T c a n o p y and the input VIs using the MLR model. The relationship between E W T c a n o p y and the VIs shows acceptable prediction performance. As such, VIs could be used to predict E W T c a n o p y by MLR. The R2, NSE, RMSE, and MAE values of the MLR were 0.843, 0.843, 0.0433 g/cm2, and 0.0313 g/cm2, respectively. Nevertheless, MLR demonstrated poor performance for predicting E W T c a n o p y as compared to the other ML models (Table 6 and Table 7). The multiple regression model for E W T c a n o p y (y) was expressed as follows:
y = 0.4311 − 0.5454 × G + 1.4150 × MTVI2 + 0.8637 × RE − 1.3980 × OSAVI − 0.3555 × NIR
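Applied as a plain function, the fitted MLR model is a one-liner. This sketch uses the reported coefficients and reads the final term as multiplying NIR, the fifth VI selected by the DT; the input values are hypothetical reflectance/VI values, not measurements from the study.

```python
# Coefficients are those reported for the fitted MLR model; we read the final
# term as -0.3555 * NIR, since NIR was the fifth VI selected by the DT.
def ewt_mlr(G, MTVI2, RE, OSAVI, NIR):
    """Predict EWT_canopy (g/cm^2) from the five selected VIs."""
    return (0.4311 - 0.5454 * G + 1.4150 * MTVI2
            + 0.8637 * RE - 1.3980 * OSAVI - 0.3555 * NIR)

# Hypothetical plot-level values (within the ranges of Appendix A, Table A2).
print(round(ewt_mlr(G=0.07, MTVI2=0.34, RE=0.16, OSAVI=0.55, NIR=0.38), 4))  # → 0.1082
```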

3.4. Modeling EWT_canopy Using DNN-MLP, ANN-MLP, BRT, and SVM

To predict EWT_canopy, we performed regression analysis using the DNN-MLP model with the ReLU activation function and the Adam optimizer; the ANN-MLP model with sigmoid and tangent-sigmoid activation functions; BRT; and SVM with Gaussian and polynomial kernel functions. In this study, 267 data samples were used, of which 15% were reserved for validation and 15% for testing, and the number of epochs was set to 10,000. Here, we compare the results of DNN-MLP with those of the MLR, ANN-MLP, BRT, and SVM models. Table 7 reports the statistical indices of the models calibrated to relate FW, DW, and EWT_canopy to the VIs. All the ML methods (Table 7) performed better than the MLR model (Table 6). The SVM-Gaussian regression had an R2 of 0.941, an NSE of 0.937, an RMSE of 0.0274 g/cm2, and an MAE of 0.0181 g/cm2. DNN-MLP had an R2 of 0.934, an NSE of 0.933, an RMSE of 0.0283 g/cm2, and an MAE of 0.0165 g/cm2. ANN-MLP had an R2 of 0.914, an NSE of 0.914, an RMSE of 0.0321 g/cm2, and an MAE of 0.0211 g/cm2. BRT had an R2 of 0.926, an NSE of 0.926, an RMSE of 0.0298 g/cm2, and an MAE of 0.0194 g/cm2. SVM-Polynomial had an R2 of 0.892, an NSE of 0.891, an RMSE of 0.0362 g/cm2, and an MAE of 0.0231 g/cm2.

3.5. Relationship between Measured and Predicted EWT_canopy

The scatter plots of the EWT_canopy predicted by the models versus the observed EWT_canopy are given in Figure 7. For the ML models, the ability to accurately predict EWT_canopy depends on the type of algorithm. SVM-Gaussian (R2 = 0.941) was found to be the best for predicting EWT_canopy from the VIs. DNN-MLP (R2 = 0.934) also performed better than BRT (R2 = 0.926), ANN-MLP (R2 = 0.914), and SVM-Polynomial (R2 = 0.892); of the ML networks, SVM-Polynomial produced the weakest model. A cross-comparison of all the models showed that they ranked, in order of performance, SVM-Gaussian, DNN-MLP, BRT, ANN-MLP, SVM-Polynomial, and multiple regression (Table 8). The performances of the SVM-Gaussian and DNN-MLP models were judged satisfactory, as the slopes of their regression lines are close to 1. Figure 8 plots the model predictions against the observed EWT_canopy; from this figure, we can conclude that the SVM-Gaussian and DNN-MLP results are the closest to the observed EWT_canopy.
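The comparison statistics used throughout this section (NSE, RMSE, MAE) can be computed directly from observed and predicted values. A minimal sketch with hypothetical numbers:

```python
import numpy as np

def nse(obs, pred):
    """Nash-Sutcliffe efficiency: 1 - SSE / variance of the observations."""
    obs, pred = np.asarray(obs), np.asarray(pred)
    return 1.0 - np.sum((obs - pred) ** 2) / np.sum((obs - np.mean(obs)) ** 2)

def rmse(obs, pred):
    """Root mean square error, in the units of the data (here g/cm^2)."""
    return float(np.sqrt(np.mean((np.asarray(obs) - np.asarray(pred)) ** 2)))

def mae(obs, pred):
    """Mean absolute error."""
    return float(np.mean(np.abs(np.asarray(obs) - np.asarray(pred))))

# Hypothetical observed vs. predicted EWT_canopy values (g/cm2).
obs = [0.10, 0.15, 0.22, 0.30, 0.18]
pred = [0.11, 0.14, 0.20, 0.31, 0.19]
print(nse(obs, pred), rmse(obs, pred), mae(obs, pred))
```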

3.6. Calculating EWT_canopy Using Predicted DW and FW

The DW and FW predicted from the VIs using DNN-MLP were used to calculate EWT_canopy. Calibration, validation, and testing were carried out using the DW and FW collected during the three experiments. The results show that the direct estimation of EWT_canopy from the VIs generally performed better than the indirect estimation from predicted DW and FW, whose R2, RMSE, and MAE worsened (Figure 9). However, the planned comparisons revealed that, for the MLR and SVM-Polynomial models, EWT_canopy computed from the FW and DW predicted from the VIs performed slightly better than EWT_canopy estimated directly from the VIs. Overall, the results demonstrate that VIs are suitable for assessing EWT_canopy directly, without recourse to the predicted FW and DW, and that they are sensitive to water stress, the major factor influencing EWT_canopy retrieval.
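For the indirect route, EWT_canopy is computed from the predicted fresh and dry weights. A common definition, and the one assumed in this sketch, is canopy water mass (FW − DW) per unit ground area; the sample values are hypothetical.

```python
def ewt_canopy(fw_g, dw_g, area_cm2):
    """Canopy equivalent water thickness (g/cm^2): water mass per ground area.

    fw_g, dw_g: fresh and dry weight of the sampled canopy (g);
    area_cm2: sampled ground area (cm^2). With water density ~1 g/cm^3,
    g/cm^2 corresponds to a water layer thickness in cm.
    """
    return (fw_g - dw_g) / area_cm2

# Hypothetical sample: 1 m^2 (10,000 cm^2) plot, 3.2 kg fresh, 1.0 kg dry.
print(ewt_canopy(fw_g=3200.0, dw_g=1000.0, area_cm2=10_000.0))  # → 0.22
```

Because FW and DW each carry their own prediction error, the subtraction can accumulate both errors, which is consistent with the weaker indirect results reported above.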

3.7. Model Visualization

The results of the ML models and the multiple linear regression were used to generate the EWT_canopy map of Exp. 2. Although SVM-Gaussian achieved the best prediction statistics for EWT_canopy, the maps in Figure 10 show that it could not accurately assign EWT_canopy across the field because of overfitting. In terms of the practical application of ML to EWT_canopy estimation, DNN-MLP gave results closest to the observed EWT_canopy over the study areas and achieved the highest accuracy in the visual comparison. The results (Table 8) therefore suggest that DNN-MLP is the most suitable model for assessing EWT_canopy from VIs, and that it is a powerful computational tool for modeling EWT_canopy from multispectral data. The model accurately identified the regions of low and high EWT_canopy. Such maps allow water deficiencies in plants to be identified so that appropriate irrigation action can be taken, and will allow agricultural decision-makers to remotely quantify plant water content (EWT_canopy) and address its variability so as to improve input efficiency and irrigation management. The ANN-MLP, DNN-MLP, BRT, and SVM-Polynomial models were successfully used to quantify EWT_canopy in the field, but the SVM-Gaussian model failed to estimate EWT_canopy because of the overfitting problem. The maps produced by all the models except SVM-Gaussian can thus assist farmers in managing irrigation.
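Generating an EWT_canopy map amounts to applying a trained model pixel-by-pixel to the VI rasters. A minimal numpy sketch, using a hypothetical linear stand-in for the fitted model:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical 4 x 5-pixel rasters for the five selected VIs, stacked last.
vi_stack = rng.random((4, 5, 5))  # (rows, cols, n_VIs)

def predict_pixelwise(model_fn, stack):
    """Apply a per-sample prediction function to every pixel of a VI stack."""
    rows, cols, n = stack.shape
    flat = stack.reshape(-1, n)            # one row per pixel
    return model_fn(flat).reshape(rows, cols)

# Stand-in 'model': any callable mapping (n_pixels, n_VIs) -> (n_pixels,),
# e.g. a fitted DNN-MLP's predict method would slot in here.
weights = np.array([0.2, 0.5, 0.1, 0.15, 0.05])
ewt_map = predict_pixelwise(lambda X: X @ weights, vi_stack)
print(ewt_map.shape)  # → (4, 5)
```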

4. Discussion

4.1. Dynamic Changes in FW, DW, and EWT_canopy during the Growth Stages

In this study, we evaluated dynamic changes in wheat during the early and late stem elongation and anthesis growth stages. EWT_canopy depends on FW and DW, and so reached higher values during the late stem elongation stage. These results reflect those of Jin et al. [34], who also found that DW increased steadily with crop development. Changes in FW affect the value of EWT_canopy; at the anthesis stage, the wheat dries and holds only a small amount of water. The increases in DW, FW, and EWT during late stem elongation are explained by the vigorous wheat growth during this period. During the anthesis stage, however, EWT decreased and DW increased, while FW did not change significantly. These results agree with the study of Wocher et al. [21], who showed that EWT decreased in the late growth stages.

4.2. Performance of the Machine Learning Models

The performances of the SVM-Gaussian and DNN-MLP models were remarkable in the training, cross-validation, and testing periods. Calibration, cross-validation, and testing were carried out using the observed EWT_canopy, FW, and DW values. Other studies have employed DNN- and ANN-MLP models to predict crop biophysical parameters, such as biomass and yield, finding DNN to be the best model for these purposes [34,45,60]. In addition to MLR, four different ML algorithms were applied to predict EWT_canopy in this study, and the ML models performed significantly better than MLR. Our results are consistent with a previous study [55] that used BRT and ANN to forecast upland rice yield under climate change conditions in the Sahel. MLR can only model a linear combination of predictors, while the ML models can also capture nonlinear relationships. DNN-MLP is a regression architecture with a great capacity for stacking many hidden layers [61,62]. This study's results corroborate those of Jin et al. [34], who indicated that the DNN-MLP algorithm can accurately estimate plant biomass from VIs. The SVM algorithm is based on statistical learning theory and provided accurate results [63]. ANN-MLP regression is a nonparametric, nonlinear model that builds a neural network between the inputs and target data. Fan et al. [52] confirmed the utility of SVM, extreme gradient boosting (XGBoost), ANN-MLP, and DNN-MLP models for estimating the daily temperature of maize in Northwest China. EWT_canopy estimation from predicted FW and DW was not as successful as the direct estimation approach, which may be because the indirect approach accumulates the errors of two models.

4.3. Feature Selection Methods

In this study, 67 VIs were fed into the decision algorithm, and only the 5 best-scoring (relative importance of 1.48 to 80.34%) were retained for estimating EWT_canopy, FW, and DW. The RE band was among the five most important inputs for every target when applying the DT. MTVI2 was among the top five VIs for EWT_canopy and DW, while the R band was among the top five for FW and DW. Other VIs were also important for EWT_canopy estimation. This is consistent with other studies showing that feature selection methods increase model performance and decrease computational time [48,58,64,65]. Haq et al. [64] stated that feature selection methods significantly increase model accuracy. In DT-based feature selection, the process of constructing the tree is itself the process of selecting features. The main advantages of the DT algorithm are its high classification and regression accuracy and its strong robustness [46,48,65]. Consistent with previous works [46,66], this study also found that feature selection improves model performance by reducing the number of input variables. These results further support the idea that the feature vector, through the inclusion or exclusion of input variables, plays a significant role in ML model performance and is essential to training, testing, and validation.

4.4. Advantages and Limitations of Machine Learning

In this study, the multispectral data and VIs were obtained using a multi-rotor UAV remote sensing platform, while the EWT_canopy distribution maps were created from the DNN-MLP, ANN-MLP, BRT, and SVM model predictions. The EWT_canopy map can be used to guide farmers in applying irrigation water. A UAV remote sensing platform combined with ML methods could help overcome the limitations of airborne and satellite platforms and provide a reliable data source for EWT_canopy assessment throughout the growing season. In addition to the commonly used blue, green, red, red edge, and NIR bands, other spectral regions, such as shortwave infrared (SWIR)-based indices or hyperspectral cameras, should be used to diagnose EWT_canopy. Other studies found that combining multispectral and thermal images via ML refines the estimation of plant chlorophyll concentration [67], improves vegetation monitoring [68], and can help predict water stress. In the future, thermal and SWIR hyperspectral remote sensing data should be used to improve the performance of ML models.
Although the use of UAVs combined with ML enabled the prediction of EWT_canopy, several limitations prevent their wider use. The multispectral camera was relatively expensive; these costs could be reduced by using low-cost RGB imaging instead. Sánchez-Sastre et al. [69] successfully used RGB VIs to estimate chlorophyll content in sugar beet leaves, and similar results might be achieved by using RGB VIs to assess EWT_canopy. Hence, further work is needed to assess the applicability of RGB imaging for EWT_canopy assessment with RGB VIs and ML models. Regarding the main limitations of the ML approach, an average farmer may require training in operating the UAV platform and processing the data with ML methods, which may be costly. This may prohibit the adoption of UAV technologies by individual farmers with only small agricultural fields, and may affect the uptake of UAV remote sensing technology reported in the literature [70]. We propose that scientists and experts should assist farmers in the field. Furthermore, crop biophysical parameters are known to be influenced by UAV flight height, as reported by Oniga et al. [71]; further research should be undertaken to determine the influence of flight height on EWT_canopy retrieval. Another drawback of commercial UAV technology is the short flight time, which ranges from 20 min to 1 h, so only a restricted area can be covered in each flight. In addition, UAVs cannot be used effectively on very windy or rainy days, meaning flights must be postponed. Finally, feature selection is essential for optimizing model accuracy and enhancing model interpretability; however, in the FS method used here, all 67 VIs were fed blindly into the FS algorithm to select the 5 best VIs for EWT_canopy modeling.
Other FS methods need to be investigated in order to assess the effects of other VIs on EWT_canopy assessment. Despite the complex nature of ML and DL models, RS scientists and progressive farmers with larger farms can generate these valuable maps on demand using the methods described in this study. Furthermore, the newly developed models might assist policymakers and agricultural extensionists in making recommendations that guide smallholder farmers in water management. However, further investigations with different regions and wheat cultivars are recommended to test the applicability of the newly developed models for crop water diagnosis.

5. Conclusions

In our study, the ML algorithms DNN-MLP, ANN-MLP, BRT, SVM-Gaussian, and SVM-Polynomial, together with MLR, were used to improve the estimation accuracy of EWT_canopy. The results show that only five VIs, chosen by a feature selection algorithm, are needed to accurately estimate EWT_canopy. This study also demonstrated the power of the DT algorithm for automatic relevance determination when evaluating the most relevant input parameters for modeling EWT_canopy with ML. SVM-Gaussian gave the best statistical performance, followed by DNN-MLP, BRT, ANN-MLP, and SVM-Polynomial. In terms of the EWT_canopy map, however, the SVM-Gaussian model performed poorly compared with the other ML models, making DNN-MLP the most suitable ML method to assist farmers with irrigation. The linear model performed poorly in EWT_canopy estimation. These findings contribute to the accurate estimation of EWT_canopy, and thus to improved irrigation efficiency and grain yield. Other advanced ML models and thermal imagery may also be used to model EWT_canopy, and other feature selection methods may be implemented to improve its retrieval accuracy. More studies are needed to further improve these ML-based models by using thermal, SWIR, and hyperspectral images for irrigation and crop management.

Author Contributions

Conceptualization: A.T., B.Z. and A.D.; methodology, A.T. and B.Z.; software, A.T.; validation, A.T. and B.Z.; formal analysis, A.T. and B.Z.; investigation, A.T. and B.Z.; resources, A.T. and B.Z.; data curation, A.T. and B.Z.; writing—original draft preparation, A.T. and B.Z.; writing—review and editing, A.T., B.Z., A.D., M.K.S., S.T. and S.T.A.-U.-K.; visualization, A.T.; supervision B.Z. and A.D.; project administration, B.Z. and A.D.; funding acquisition, B.Z. and A.D. All authors have read and agreed to the published version of the manuscript.

Funding

This study was supported by the National Natural Science Foundation of China (51609247), the Science and Technology Project of Henan Province (212102110278), and the China Agriculture Research System (CARS-02, CARS-3-1-30).

Data Availability Statement

Data available on request due to privacy.

Acknowledgments

The authors are thankful to the Chinese Academy of Agricultural Science and the Farmland Irrigation Research Institute for kindly providing the equipment.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A

Table A1. The vegetation indices evaluated in this study. B, G, R, RE, and NIR indicate blue, green, red, red edge, and near-infrared band reflectance.
Vegetation Index (VI) | Formula | Reference
Blue Normalized Difference Vegetation Index (BNDV) | (NIR − B)/(NIR + B) | [72]
Green Chlorophyll Index (CIg) | NIR/G − 1 | [73]
Red Edge Chlorophyll Index (CIre) | NIR/RE − 1 | [25]
DATT Index (DATT) | (NIR − RE)/(NIR + R) | [24]
Excess Blue Vegetation Index (ExB) | (1.4 × B − G)/(G + R + B) | [74]
Excess Green minus Excess Red (EXGR) | ExR − ExG | [75]
Excess Green Index (ExG) | 2 × G − R − B | [76]
Excess Red Vegetation Index (ExR) | (1.4 × R − G)/(G + R + B) | [77]
Green Difference Vegetation Index (GDVI) | NIR − G | [78]
Green Leaf Index (GLI) | (2 × G − R − B)/(2 × G + R + B) | [79]
Green Normalized Difference Vegetation Index (GNDVI) | (NIR − G)/(NIR + G) | [80]
Green Optimal Soil Adjusted Vegetation Index (GOSAVI) | (1 + 0.16) × (NIR − G)/(NIR + G + 0.16) | [81]
Green Re-normalized Difference Vegetation Index (GRDVI) | (NIR − G)/SQRT(NIR + G) | [26]
Green Ratio Vegetation Index (GRVI) | (G − R)/(G + R) | [82]
Green Red Vegetation Index (GRVI_Ratio) | NIR/G | [82]
Green Soil Adjusted Vegetation Index (GSAVI) | 1.5 × (NIR − G)/(NIR + G + 0.5) | [81]
Green Wide Dynamic Range Vegetation Index (GWDRVI) | (0.12 × NIR − G)/(0.12 × NIR + G) | [83]
Kawashima Index (IKAW) | (R − B)/(R + B) | [84]
Modified Chlorophyll Absorption in Reflectance Index 1 (MCARI1) | ((NIR − RE) − 0.2 × (NIR − G)) × (NIR/RE) | [85]
Modified Chlorophyll Absorption in Reflectance Index 2 (MCARI2) | 1.5 × (2.5 × (NIR − RE) − 1.3 × (NIR − G))/SQRT(SQ(2 × NIR + 1) − (6 × NIR − 5 × SQRT(RE)) − 0.5) | [86]
Modified Chlorophyll Absorption in Reflectance Index 3 (MCARI3) | ((NIR − RE) − 0.2 × (NIR − R))/(NIR/RE) | [26]
Modified Chlorophyll Absorption in Reflectance Index 4 (MCARI4) | 1.5 × (2.5 × (NIR − G) − 1.3 × (NIR − RE))/SQRT(SQ(2 × NIR + 1) − (6 × NIR − 5 × SQRT(G)) − 0.5) | [26]
Modified Double Difference Index Green (MDD_G) | (NIR − RE) − (RE − G) | [87]
Modified Double Difference Index Red (MDD_R) | (NIR − RE) − (RE − R) | [27]
Modified Green Red Vegetation Index (MGRVI) | (SQ(G) − SQ(R))/(SQ(G) + SQ(R)) | [88]
Modified Nonlinear Index (MNLI) | 1.5 × (SQ(NIR) − R)/(SQ(NIR) + R + 0.5) | [89]
Modified Red Edge Difference Vegetation Index (MREDVI) | RE − R | [26]
Modified Red Edge Soil Adjusted Vegetation Index (MRESAVI) | 0.5 × (2 × NIR + 1 − SQRT(SQ(2 × NIR + 1) − 8 × (NIR − RE))) | [90]
Modified Red Edge Transformed Vegetation Index (MRETVI) | 1.2 × (1.2 × (NIR − R) − 2.5 × (RE − R)) | [86]
Modified Soil Adjusted Vegetation Index (MSAVI) | 0.5 × (2 × NIR + 1 − SQRT(SQ(2 × NIR + 1) − 8 × (NIR − G))) | [90]
Modified Simple Ratio (MSR) | (NIR/R − 1)/SQRT(NIR/R + 1) | [91]
Modified Green Simple Ratio (MSR_G) | (NIR/G − 1)/SQRT(NIR/G + 1) | [91]
Modified Red Edge Simple Ratio (MSR_RE) | (NIR/RE − 1)/SQRT(NIR/RE + 1) | [91]
Modified Transformed Chlorophyll Absorption in Reflectance Index (MTCARI) | 3 × ((NIR − RE) − 0.2 × (NIR − R) × (NIR/RE)) | [92]
Modified Triangular Vegetation Index (MTVI2) | 1.5 × (1.2 × (NIR − G) − 2.5 × (R − G))/SQRT(SQ(2 × NIR + 1) − (6 × NIR − 5 × SQRT(R)) − 0.5) | [86]
Normalized Difference Red Edge (NDRE) | (NIR − RE)/(NIR + RE) | [93]
Normalized Difference Vegetation Index (NDVI) | (NIR − R)/(NIR + R) | [94]
Normalized Green Index (NGI) | G/(NIR + RE + G) | [81]
Nonlinear Index (NLI) | (SQ(NIR) − R)/(SQ(NIR) + R) | [95]
Normalized NIR Index (NNIR) | NIR/(NIR + RE + G) | [81]
Normalized Near Infrared Index (NNIRI) | NIR/(NIR + RE + R) | [27]
Normalized Red Edge Index (NREI_G) | RE/(NIR + RE + G) | [81]
Normalized Red Edge Index (NREI_R) | RE/(NIR + RE + R) | [27]
Normalized Red Index (NRI) | R/(NIR + RE + R) | [27]
Optimized SAVI (OSAVI) | (1 + 0.16) × (NIR − R)/(NIR + R + 0.16) | [96]
Renormalized Difference Vegetation Index (RDVI) | (NIR − R)/SQRT(NIR + R) | [97]
Red Edge Difference Vegetation Index (REDVI) | NIR − RE | [26]
Red Edge Normalized Difference Vegetation Index (RENDVI) | (RE − R)/(RE + R) | [98]
Red Edge Optimal Soil Adjusted Vegetation Index (REOSAVI) | (1 + 0.16) × (NIR − RE)/(NIR + RE + 0.16) | [96]
Red Edge Re-normalized Difference Vegetation Index (RERDVI) | (NIR − RE)/SQRT(NIR + RE) | [26]
Red Edge Ratio Vegetation Index (RERVI) | NIR/RE | [99]
Red Edge Soil Adjusted Vegetation Index (RESAVI) | 1.5 × (NIR − RE)/(NIR + RE + 0.5) | [81]
Red Edge Simple Ratio (RESR) | RE/R | [100]
Red Edge Transformed Vegetation Index (RETVI) | 0.5 × (120 × (NIR − R) − 200 × (RE − R)) | [91]
Optimized Red Edge Vegetation Index (REVIopt) | 100 × (Ln(NIR) − Ln(RE)) | [101]
Red Edge Wide Dynamic Range Vegetation Index (REWDRVI) | (0.12 × NIR − RE)/(0.12 × NIR + RE) | [83]
Red Green Blue Vegetation Index (RGBVI) | (SQ(G) − B × R)/(SQ(G) + B × R) | [88]
Ratio Vegetation Index (RVI) | NIR/R | [102]
Soil-Adjusted Vegetation Index (SAVI) | 1.5 × (NIR − R)/(NIR + R + 0.5) | [103]
Transformed Normalized Vegetation Index (TNDVI) | SQRT((NIR − R)/(NIR + R) + 0.5) | [104]
Optimal Vegetation Index (VIopt) | 1.45 × (SQ(NIR) + 1)/(R + 0.45) | [101]
Wide Dynamic Range Vegetation Index (WDRVI) | (0.12 × NIR − R)/(0.12 × NIR + R) | [83]
Table A2. Mean, standard deviation, minimum, and maximum values of vegetation indices used in the study.
Vegetation Index/Band | Mean ± SD | Min. | Max.
B | 0.03 ± 0.02 | 0.01 | 0.08
BNDV | 0.82 ± 0.11 | 0.62 | 0.95
CIg | 6.48 ± 4.33 | 1.59 | 14.99
CIre | 1.88 ± 1.41 | 0.26 | 4.51
DATT | 0.48 ± 0.25 | 0.12 | 0.8
ExB | −0.13 ± 0.07 | −0.26 | −0.05
ExG | 0.01 ± 0.04 | −0.07 | 0.06
ExGR | 0.09 ± 0.31 | −0.33 | 0.55
ExR | 0.1 ± 0.28 | −0.29 | 0.49
G | 0.07 ± 0.03 | 0.03 | 0.14
GDVI | 0.31 ± 0.1 | 0.14 | 0.48
GLI | −0.57 ± 0.62 | −1.68 | 0.21
GNDVI | 0.69 ± 0.15 | 0.44 | 0.88
GOSAVI | 0.46 ± 0.12 | 0.25 | 0.65
GRDVI | 0.59 ± 0.14 | 0.34 | 0.79
GRVI | 0.07 ± 0.28 | −0.32 | 0.47
GRVI_Ratio | 7.48 ± 4.33 | 2.59 | 15.99
GSAVI | 0.49 ± 0.13 | 0.26 | 0.69
GWDRVI | −0.14 ± 0.28 | −0.53 | 0.31
IKAW | 0.25 ± 0.21 | −0.06 | 0.54
MCARI1 | 0.61 ± 0.55 | 0.03 | 1.77
MCARI2 | 0.29 ± 0.6 | −0.75 | 1.16
MCARI3 | 0.05 ± 0.01 | 0.03 | 0.08
MCARI4 | −0.1 ± 0.64 | −1.21 | 0.89
MDD_G | 0.13 ± 0.14 | −0.09 | 0.36
MDD_R | 0.15 ± 0.1 | 0.02 | 0.34
MGRVI | 0.12 ± 0.51 | −0.58 | 0.77
MNLI | 0.14 ± 0.24 | −0.2 | 0.49
MREDVI | 0.07 ± 0.02 | 0.03 | 0.12
MRESAVI | −0.89 ± 0.48 | −1.69 | −0.22
MRETVI | 0.2 ± 0.15 | 0.03 | 0.48
MSAVI | 0.49 ± 0.16 | 0.23 | 0.76
MSR | 2.65 ± 2.01 | 0.32 | 6.38
MSR_G | 2.04 ± 0.92 | 0.84 | 3.63
MSR_RE | 1.25 ± 0.55 | 0.51 | 2.12
MTCARI | 0.04 ± 0.16 | −0.39 | 0.23
MTVI2 | 0.34 ± 0.35 | −0.17 | 0.84
NDRE | 0.41 ± 0.2 | 0.11 | 0.69
NDVI | 0.64 ± 0.29 | 0.2 | 0.95
NGI | 0.11 ± 0.04 | 0.05 | 0.18
NIR | 0.38 ± 0.08 | 0.23 | 0.53
NLI | 0.31 ± 0.5 | −0.46 | 0.91
NNIR | 0.63 ± 0.12 | 0.46 | 0.8
NNIRI | 0.62 ± 0.15 | 0.41 | 0.83
NREI_G | 0.26 ± 0.08 | 0.15 | 0.37
NREI_R | 0.25 ± 0.06 | 0.15 | 0.32
NRI | 0.13 ± 0.1 | 0.02 | 0.27
OSAVI | 0.55 ± 0.25 | 0.18 | 0.85
R | 0.08 ± 0.07 | 0.01 | 0.27
RDVI | 0.43 ± 0.2 | 0.14 | 0.69
RE | 0.16 ± 0.05 | 0.09 | 0.33
REDVI | 0.23 ± 0.12 | 0.06 | 0.42
RENDVI | 0.44 ± 0.26 | 0.09 | 0.78
REOSAVI | 0.36 ± 0.19 | 0.1 | 0.63
RERDVI | 0.41 ± 0.32 | 0.09 | 1.36
RERVI | 2.77 ± 1.57 | 0.09 | 5.51
RESAVI | 0.42 ± 0.29 | 0.09 | 1.23
RESR | 3.63 ± 2.27 | 1.19 | 8.1
RETVI | 10.35 ± 6.55 | 2.01 | 21.96
REVIopt | 92.05 ± 51.11 | 2.94 | 170.32
REWDRVI | −0.51 ± 0.18 | −0.74 | −0.21
RGBVI | 0.36 ± 0.3 | −0.06 | 0.75
RVI | 13.69 ± 13.08 | 1.5 | 44.02
SAVI | 0.46 ± 0.21 | 0.15 | 0.73
TNDVI | 1.06 ± 0.14 | 0.84 | 1.21
VIopt | 3.21 ± 0.53 | 2.39 | 3.98
WDRVI | −0.06 ± 0.5 | −0.7 | 0.67

References

  1. Li, L.; Cheng, Y.-B.; Ustin, S.; Hu, X.-T.; Riaño, D. Retrieval of vegetation equivalent water thickness from reflectance using genetic algorithm (GA)-partial least squares (PLS) regression. Adv. Space Res. 2008, 41, 1755–1763. [Google Scholar] [CrossRef]
  2. Penuelas, J.; Filella, I.; Biel, C.; Serrano, L.; Savé, R. The reflectance at the 950–970 nm region as an indicator of plant water status. Int. J. Remote Sens. 1993, 14, 1887–1905. [Google Scholar] [CrossRef]
  3. Hunt, E.R.; Rock, B.N. Detection of Changes in Leaf Water Content Using Near-and Middle-Infrared Reflectances. Remote Sens. Environ. 1989, 30, 43–54. [Google Scholar] [CrossRef]
  4. Jackson, T.J.; Chen, D.; Cosh, M.; Li, F.; Anderson, M.; Walthall, C.; Doriaswamy, P.; Hunt, E.B. Vegetation water content mapping using Landsat data derived normalized difference water index for corn and soybeans. Remote Sens. Environ. 2004, 92, 427–435. [Google Scholar] [CrossRef]
  5. Zhang, Z.; Tang, B.H.; Li, Z.L. Retrieval of leaf water content from remotely sensed data using a vegetation index model constructed with shortwave infrared reflectances. Int. J. Remote Sens. 2019, 40, 2313–2323. [Google Scholar] [CrossRef]
  6. Gao, B.-C.; Goetzt, A.F.H. Retrieval of equivalent water thickness and information related to biochemical components of vegetation canopies from AVIRIS data. Remote Sens. Environ. 1995, 52, 155–162. [Google Scholar] [CrossRef]
  7. Zhang, F.; Zhou, G. Estimation of vegetation water content using hyperspectral vegetation indices: A comparison of crop water indicators in response to water stress treatments for summer maize. BMC Ecol. 2019, 19, 18. [Google Scholar] [CrossRef] [Green Version]
  8. Zhang, L.; Niu, Y.; Zhang, H.; Han, W.; Li, G.; Tang, J.; Peng, X. Maize Canopy Temperature Extracted From UAV Thermal and RGB Imagery and Its Application in Water Stress Monitoring. Front. Plant Sci. 2019, 10, 1270. [Google Scholar] [CrossRef]
  9. Hassan, M.A.; Yang, M.; Fu, L.; Rasheed, A.; Zheng, B. Accuracy assessment of plant height using an unmanned aerial vehicle for quantitative genomic analysis in bread wheat. Plant Methods 2019, 15, 1–12. [Google Scholar] [CrossRef] [Green Version]
  10. Shafian, S.; Rajan, N.; Schnell, R.; Bagavathiannan, M.; Valasek, J.; Shi, Y.; Olsenholler, J. Unmanned aerial systems-based remote sensing for monitoring sorghum growth and development. PLoS ONE 2018, 15, e0196605. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  11. Xia, X.; Xiao, Y.; He, Z. Time-Series Multispectral Indices from Unmanned Aerial Vehicle Imagery Reveal Senescence Rate in Bread Wheat. Remote Sens. 2018, 10, 809. [Google Scholar] [CrossRef] [Green Version]
  12. Smigaj, M.; Gaulton, R.; Suárez, J.C.; Barr, S.L. Forest Ecology and Management Canopy temperature from an Unmanned Aerial Vehicle as an indicator of tree stress associated with red band needle blight severity. For. Ecol. Manag. 2019, 433, 699–708. [Google Scholar] [CrossRef]
  13. Poblete, T.; Ortega-Farías, S.; Bardeen, M. Artificial Neural Network to Predict Vine Water Status Spatial Variability Using Multispectral Information Obtained from an Unmanned Aerial Vehicle (UAV). Sensors 2017, 17, 2488. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  14. Gao, B.-c. NDWI—A normalized difference water index for remote sensing of vegetation liquid water from space. Remote Sens. Environ. 1996, 58, 257–266. [Google Scholar] [CrossRef]
  15. Ceccato, P.; Flasse, S.; Tarantola, S.; Jacquemoud, S.; Grégoire, J.M. Detecting vegetation leaf water content using reflectance in the optical domain. Remote Sens. Environ. 2001, 77, 22–33. [Google Scholar] [CrossRef]
  16. Mobasheri, M.R.; Fatemi, S.B. Leaf Equivalent Water Thickness assessment using reflectance at optimum wavelengths. Theor. Exp. Plant Physiol. 2013, 25, 196–202. [Google Scholar] [CrossRef] [Green Version]
  17. Ceccato, P.; Gobron, N.; Flasse, S.; Pinty, B.; Tarantola, S. Designing a spectral index to estimate vegetation water content from remote sensing data: Part 2. Validation and applications. Remote Sens. Environ. 2002, 82, 198–207. [Google Scholar] [CrossRef]
  18. Sibanda, M.; Mutanga, O.; Dube, T.; Mothapo, M.C.; Mafongoya, P.L. Remote sensing equivalent water thickness of grass treated with different fertiliser regimes using resample HyspIRI and EnMAP data. Phys. Chem. Earth Parts A/B/C 2019, 112, 246–254. [Google Scholar] [CrossRef]
  19. Tucker, C.J. Remote Sensing of Leaf Water Content in the Near Infrared. Remote Sens. Environ. 1980, 10, 23–32. [Google Scholar] [CrossRef]
  20. Liu, S.; Peng, Y.; Du, W.; Le, Y.; Li, L. Remote estimation of leaf and canopy water content in winter wheat with different vertical distribution of water-related properties. Remote Sens. 2015, 7, 4626–4650. [Google Scholar] [CrossRef] [Green Version]
  21. Wocher, M.; Berger, K.; Danner, M.; Mauser, W.; Hank, T. Physically-Based Retrieval of Canopy Equivalent Water Thickness Using Hyperspectral Data. Remote Sens. 2018, 10, 1924. [Google Scholar] [CrossRef] [Green Version]
  22. Dennison, P.E.; Roberts, D.A.; Thorgusen, S.R.; Regelbrugge, J.C.; Weise, D.; Lee, C. Modeling seasonal changes in live fuel moisture and equivalent water thickness using a cumulative water balance index. Remote Sens. Environ. 2003, 88, 442–452. [Google Scholar] [CrossRef]
  23. Yao, X.; Jia, W.; Si, H.; Guo, Z.; Tian, Y.; Liu, X.; Cao, W.; Zhu, Y. Exploring novel bands and key index for evaluating leaf equivalent water thickness in wheat using hyperspectra influenced by nitrogen. PLoS ONE 2014, 9, e96352. [Google Scholar] [CrossRef] [PubMed]
  24. Datt, B. Visible/near infrared reflectance and chlorophyll content in Eucalyptus leaves. Int. J. Remote Sens. 1999, 20, 2741–2759. [Google Scholar] [CrossRef]
  25. Gitelson, A.A.; Gritz, Y.; Merzlyak, M.N. Relationships between leaf chlorophyll content and spectral reflectance and algorithms for non-destructive chlorophyll assessment in higher plant leaves. J. Plant Physiol. 2003, 160, 271–280. [Google Scholar] [CrossRef]
  26. Cao, Q.; Miao, Y.; Wang, H.; Huang, S.; Cheng, S.; Khosla, R.; Jiang, R. Non-destructive estimation of rice plant nitrogen status with Crop Circle multispectral active canopy sensor. Field Crop. Res. 2013, 154, 133–144. [Google Scholar] [CrossRef]
27. Lu, J.; Miao, Y.; Shi, W.; Li, J.; Yuan, F. Evaluating different approaches to non-destructive nitrogen status diagnosis of rice using portable RapidSCAN active canopy sensor. Sci. Rep. 2017, 7, 14073.
28. Thornton, B.; Lemaire, G.; Millard, P.; Duff, E.I. Relationships Between Nitrogen and Water Concentration in Shoot Tissue of Molinia caerulea During Shoot Development. Ann. Bot. 1999, 83, 631–636.
29. Sims, D.A.; Gamon, J.A. Estimation of vegetation water content and photosynthetic tissue area from spectral reflectance: A comparison of indices based on liquid water and chlorophyll absorption features. Remote Sens. Environ. 2003, 84, 526–537.
30. Dawson, T.P.; Curran, P.J. The biochemical decomposition of slash pine needles from reflectance spectra using neural networks. Int. J. Remote Sens. 1998, 19, 37–41.
31. Ustin, S.L.; Roberts, D.A.; Pinzon, J.; Jacquemoud, S.; Gardner, M.; Scheer, G.; Castaneda, C.M. Estimating Canopy Water Content of Chaparral Shrubs Using Optical Methods. Remote Sens. Environ. 1998, 65, 280–291.
32. Mia, M.; Dhar, N.R. Prediction of surface roughness in hard turning under high pressure coolant using Artificial Neural Network. Meas. J. Int. Meas. Confed. 2016, 92, 464–474.
33. Khan, M.S.; Semwal, M.; Sharma, A.; Verma, R.K. An artificial neural network model for estimating Mentha crop biomass yield using Landsat 8 OLI. Precis. Agric. 2020, 21, 18–33.
34. Jin, X.; Li, Z.; Feng, H.; Ren, Z.; Li, S. Deep neural network algorithm for estimating maize biomass based on simulated Sentinel 2A vegetation indices and leaf area index. Crop J. 2020, 8, 87–97.
35. Veenadhari, S.; Misra, B.; Singh, C.D. Machine learning approach for forecasting crop yield based on climatic parameters. In Proceedings of the 2014 International Conference on Computer Communication and Informatics, Coimbatore, India, 3–5 January 2014.
36. Worland, S.C.; Farmer, W.H.; Kiang, J.E. Improving predictions of hydrological low-flow indices in ungaged basins using machine learning. Environ. Model. Softw. 2018, 101, 169–182.
37. Nguyen, V.T.; Constant, T.; Kerautret, B.; Debled-Rennesson, I.; Colin, F. A machine-learning approach for classifying defects on tree trunks using terrestrial LiDAR. Comput. Electron. Agric. 2020, 171, 105332.
38. Khan, A.; Sohail, A.; Zahoora, U.; Qureshi, A.S. A Survey of the Recent Architectures of Deep Convolutional Neural Networks. Artif. Intell. Rev. 2020, 53, 5455–5516.
39. Zha, H.; Miao, Y.; Wang, T.; Li, Y.; Zhang, J.; Sun, W. Improving Unmanned Aerial Vehicle Remote Sensing-Based Rice Nitrogen Nutrition Index Prediction with Machine Learning. Remote Sens. 2020, 12, 215.
40. Li, S.; Ding, X.; Kuang, Q.; Ata-Ul-Karim, S.T.; Cheng, T.; Liu, X.; Tian, Y.; Zhu, Y.; Cao, W.; Cao, Q. Potential of UAV-based active sensing for monitoring rice leaf nitrogen status. Front. Plant Sci. 2018, 9, 1834.
41. Colombo, R.; Meroni, M.; Marchesi, A.; Busetto, L.; Rossini, M.; Giardino, C.; Panigada, C. Estimation of leaf and canopy water content in poplar plantations by means of hyperspectral indices and inverse modeling. Remote Sens. Environ. 2008, 112, 1820–1834.
42. Wang, N.; Wang, Y.; Er, M.J. Review on deep learning techniques for marine object recognition: Architectures and algorithms. Control Eng. Pract. 2020, 1–18.
43. Batlles, F.J.; Tovar-Pescador, J.; López, G. Selection of input parameters to model direct solar irradiance by using artificial neural networks. Energy 2005, 30, 1675–1684.
44. Venkatesh, B.; Anuradha, J. A Review of Feature Selection and Its Methods. Bulg. Acad. Sci. Cybern. 2019, 19, 3–26.
45. Khaki, S.; Wang, L. Crop Yield Prediction Using Deep Neural Networks. Front. Plant Sci. 2019, 10, 621.
46. Ruggieri, S. Complete search for feature selection in decision trees. J. Mach. Learn. Res. 2019, 20, 104.
47. Romalt, A.A.; Kumar, R.M.S. An Analysis on Feature Selection Methods, Clustering and Classification used in Heart Disease Prediction—A Machine Learning Approach. J. Crit. Rev. 2020, 7, 138–142.
48. Zhou, H.F.; Zhang, J.W.; Zhou, Y.Q.; Guo, X.J.; Ma, Y.M. A feature selection algorithm of decision tree based on feature weight. Expert Syst. Appl. 2021, 164, 113842.
49. Otsu, N. A Threshold Selection Method from Gray-Level Histograms. IEEE Trans. Syst. Man Cybern. 1979, 9, 62–66.
50. Corti, M.; Marino, P.; Cavalli, D.; Cabassi, G. Hyperspectral imaging of spinach canopy under combined water and nitrogen stress to estimate biomass, water, and nitrogen content. Biosyst. Eng. 2017, 158, 38–50.
51. Boser, B.E.; Guyon, I.M.; Vapnik, V.N. A Training Algorithm for Optimal Margin Classifiers. In Proceedings of the Fifth Annual Workshop on Computational Learning Theory, Pittsburgh, PA, USA, 1 July 1992; pp. 144–152.
52. Fan, J.; Zheng, J.; Wu, L.; Zhang, F. Estimation of daily maize transpiration using support vector machines, extreme gradient boosting, artificial and deep neural networks models. Agric. Water Manag. 2021, 245, 106547.
53. Durbha, S.S.; King, R.L.; Younan, N.H. Support vector machines regression for retrieval of leaf area index from multiangle imaging spectroradiometer. Remote Sens. Environ. 2007, 107, 348–361.
54. Shadkani, S.; Abbaspour, A.; Samadianfard, S.; Hashemi, S.; Mosavi, A.; Band, S.S. Comparative study of multilayer perceptron-stochastic gradient descent and gradient boosted trees for predicting daily suspended sediment load: The case study of the Mississippi River, U.S. Int. J. Sediment Res. 2020, 36, 512–523.
55. Zhang, L.; Traore, S.; Ge, J.; Li, Y.; Wang, S.; Zhu, G.; Cui, Y.; Fipps, G. Using boosted tree regression and artificial neural networks to forecast upland rice yield under climate change in Sahel. Comput. Electron. Agric. 2019, 166, 105031.
56. Puri, P.; Comfere, N.; Drage, L.A.; Shamim, H.; Bezalel, S.A.; Pittelkow, M.R.; Davis, M.D.P.; Wang, M.; Mangold, A.R.; Tollefson, M.M.; et al. Deep learning for dermatologists: Part II. Current applications. J. Am. Acad. Dermatol. 2020.
57. Singh, P.; Dash, M.; Mittal, P.; Nandi, S.; Nandi, S. Misbehavior Detection in C-ITS Using Deep Learning Approach. Adv. Intell. Syst. Comput. Intell. Syst. Des. Appl. 2020, 641–652.
58. Yekkala, I.; Dixit, S. Prediction of Heart Disease Using Random Forest and Rough Set Based Feature Selection. Int. J. Big Data Anal. Healthc. 2018, 3, 1–12.
59. Nash, J.E.; Sutcliffe, J.V. River flow forecasting through conceptual models part I—A discussion of principles. J. Hydrol. 1970, 10, 282–290.
60. Gong, L.; Yu, M.; Jiang, S.; Cutsuridis, V.; Pearson, S. Deep Learning Based Prediction on Greenhouse Crop Yield Combined TCN and RNN. Sensors 2021, 21, 4537.
61. Huang, T.; Wang, S.; Sharma, A. Highway crash detection and risk estimation using deep learning. Accid. Anal. Prev. 2020, 135, 105392.
62. Kim, A.; Yang, Y.; Lessmann, S.; Ma, T.; Sung, M.C.; Johnson, J.E.V. Can deep learning predict risky retail investors? A case study in financial risk behavior forecasting. Eur. J. Oper. Res. 2020, 283, 217–234.
63. Shao, M.; Wang, X.; Bu, Z.; Chen, X.; Wang, Y. Prediction of energy consumption in hotel buildings via support vector machines. Sustain. Cities Soc. 2020, 57, 102128.
64. Haq, A.U.; Zeb, A.; Lei, Z.; Zhang, D. Forecasting daily stock trend using multi-filter feature selection and deep learning. Expert Syst. Appl. 2021, 168, 114444.
65. Al-Harbi, O. A Comparative Study of Feature Selection Methods for Dialectal Arabic Sentiment Classification Using Support Vector Machine. arXiv 2019, arXiv:1902.06242.
66. Rao, H.; Shi, X.; Rodrigue, A.K.; Feng, J.; Xia, Y.; Elhoseny, M.; Yuan, X.; Gu, L. Feature selection based on artificial bee colony and gradient boosting decision tree. Appl. Soft Comput. J. 2019, 74, 634–642.
67. Elarab, M. The Application of Unmanned Aerial Vehicle to Precision Agriculture: Chlorophyll, Nitrogen, and Evapotranspiration Estimation; Utah State University: Logan, UT, USA, 2014.
68. Berni, J.A.J.; Zarco-Tejada, P.J.; Suárez, L.; Fereres, E. Thermal and Narrowband Multispectral Remote Sensing for Vegetation Monitoring From an Unmanned Aerial Vehicle. IEEE Trans. Geosci. Remote Sens. 2009, 47, 722–738.
69. Sánchez-Sastre, L.F.; Alte da Veiga, N.M.S.; Ruiz-Potosme, N.M.; Carrión-Prieto, P.; Marcos-Robles, J.L.; Navas-Gracia, L.M.; Martín-Ramos, P. Assessment of RGB Vegetation Indices to Estimate Chlorophyll Content in Sugar Beet Leaves in the Final Cultivation Stage. AgriEngineering 2019, 2, 128–149.
70. Tsouros, D.C.; Bibi, S.; Sarigiannidis, P.G. A Review on UAV-Based Applications for Precision Agriculture. Information 2019, 10, 349.
71. Oniga, V.; Breaban, A.; Statescu, F. Determining the Optimum Number of Ground Control Points for Obtaining High Precision Results Based on UAS Images. Proceedings 2018, 2, 165.
72. Wang, F.; Huang, J.; Tang, Y.; Wang, X. New Vegetation Index and Its Application in Estimating Leaf Area Index of Rice. Chinese J. Rice Sci. 2007, 14, 195–203.
73. Gitelson, A.A.; Vina, A.; Ciganda, V.; Rundquist, D.C.; Arkebauer, T.J. Remote estimation of canopy chlorophyll content in crops. Geophys. Res. Lett. 2005, 32, 4–7.
74. Mao, W.; Wang, Y.; Wang, Y. Real-time Detection of Between-row Weeds Using Machine Vision. Soc. Eng. Agric. Food Biol. Syst. 2003, 031004.
75. Neto, J.C. A Combined Statistical-Soft Computing Approach for Classification and Mapping Weed Species in Minimum-Tillage Systems; ProQuest Dissertations: Lincoln, NE, USA, 2004.
76. Woebbecke, D.M.; Meyer, G.E.; Von Bargen, K.; Mortensen, D.A. Color Indices for Weed Identification Under Various Soil, Residue, and Lighting Conditions. Am. Soc. Agric. Eng. 1995, 3, 259–269.
77. Meyer, G.E.; Neto, J.C. Verification of color vegetation indices for automated crop imaging applications. Comput. Electron. Agric. 2008, 63, 282–293.
78. Tucker, C.J. Red and Photographic Infrared Linear Combinations for Monitoring Vegetation. Remote Sens. Environ. 1979, 8, 127–150.
79. Louhaichi, M.; Borman, M.M.; Johnson, D.E. Spatially Located Platform and Aerial Photography for Documentation of Grazing Impacts on Wheat. Geocarto Int. 2001, 16, 65–70.
80. Gitelson, A.A.; Kaufman, Y.J.; Merzlyak, M.N. Use of a Green Channel in Remote Sensing of Global Vegetation from EOS-MODIS. Remote Sens. Environ. 1996, 58, 289–298.
81. Sripada, R.P.; Heiniger, R.W.; White, J.G.; Meijer, A.D. Aerial Color Infrared Photography for Determining Early In-Season Nitrogen Requirements in Corn. Agron. J. 2006, 98, 968–977.
82. Buschmann, C.; Nagel, E. In vivo spectroscopy and internal optics of leaves as basis for remote sensing of vegetation. Int. J. Remote Sens. 1993, 14, 711–722.
83. Gitelson, A.A. Wide Dynamic Range Vegetation Index for Remote Quantification of Biophysical Characteristics of Vegetation. J. Plant Physiol. 2004, 161, 165–173.
84. Kawashima, S.; Nakatani, M.M. An Algorithm for Estimating Chlorophyll Content in Leaves Using a Video Camera. Ann. Bot. 1998, 81, 49–54.
85. Daughtry, C.S.T.; Walthall, C.L.; Kim, M.S.; de Colstoun, E.B. Estimating Corn Leaf Chlorophyll Concentration from Leaf and Canopy Reflectance. Remote Sens. Environ. 2000, 74, 229–239.
86. Haboudane, D.; Miller, J.R.; Pattey, E.; Zarco-Tejada, P.J.; Strachan, I.B. Hyperspectral vegetation indices and novel algorithms for predicting green LAI of crop canopies: Modeling and validation in the context of precision agriculture. Remote Sens. Environ. 2004, 90, 337–352.
87. le Maire, G.; François, C.; Dufrêne, E. Towards universal broad leaf chlorophyll indices using PROSPECT simulated database and hyperspectral reflectance measurements. Remote Sens. Environ. 2004, 89, 1–28.
88. Bendig, J.; Yu, K.; Aasen, H.; Bolten, A.; Bennertz, S.; Broscheit, J.; Gnyp, M.L.; Bareth, G. Combining UAV-based plant height from crop surface models, visible, and near infrared vegetation indices for biomass monitoring in barley. Int. J. Appl. Earth Obs. Geoinf. 2015, 39, 79–87.
89. Gong, P.; Pu, R.; Biging, G.S.; Larrieu, M.R. Estimation of Forest Leaf Area Index Using Vegetation Indices Derived From Hyperion Hyperspectral Data. IEEE Trans. Geosci. Remote Sens. 2003, 41, 1355–1362.
90. Qi, J.; Chehbouni, A.; Huete, A.R.; Kerr, Y.H.; Sorooshian, S. A modified soil adjusted vegetation index. Remote Sens. Environ. 1994, 48, 119–126.
91. Chen, J.M. Evaluation of vegetation indices and a modified simple ratio for boreal applications. Can. J. Remote Sens. 1996, 22, 229–242.
92. Haboudane, D.; Miller, J.R.; Tremblay, N.; Zarco-Tejada, P.J.; Dextraze, L. Integrated narrow-band vegetation indices for prediction of crop chlorophyll content for application to precision agriculture. Remote Sens. Environ. 2002, 81, 416–426.
93. Barnes, E.; Colaizzi, P.; Haberland, J.; Waller, P. Coincident detection of crop water stress, nitrogen status, and canopy density using ground based multispectral data. In Proceedings of the 5th International Conference on Precision Agriculture and Other Resource Management, Bloomington, MN, USA, 16–19 July 2000; pp. 1–16.
94. Rouse, J.W.; Haas, R.H.; Schell, J.A.; Deering, D.W. Monitoring vegetation systems in the Great Plains with ERTS. In Proceedings of the Third ERTS-1 Symposium; NASA Goddard Space Flight Center: Washington, DC, USA, 1974; pp. 309–317.
95. Goel, N.S.; Qin, W. Influences of Canopy Architecture on Relationships between Various Vegetation Indices and LAI and FPAR: A Computer Simulation. Remote Sens. Rev. 1994, 10, 309–347.
96. Rondeaux, G.; Steven, M.; Baret, F. Optimization of Soil-Adjusted Vegetation Indices. Remote Sens. Environ. 1996, 55, 95–107.
97. Stagakis, S.; González-Dugo, V.; Cid, P.; Guillén-Climent, M.L.; Zarco-Tejada, P.J. Monitoring water stress and fruit quality in an orange orchard under regulated deficit irrigation using narrow-band structural and physiological remote sensing indices. ISPRS J. Photogramm. Remote Sens. 2012, 71, 47–61.
98. Gitelson, A.; Merzlyak, M.N. Spectral Reflectance Changes Associated with Autumn Senescence of Aesculus hippocastanum L. and Acer platanoides L. Leaves. Spectral Features and Relation to Chlorophyll Estimation. J. Plant Physiol. 1994, 143, 286–292.
99. Roujean, J.; Breon, F. Estimating PAR Absorbed by Vegetation from Bidirectional Reflectance Measurements. Remote Sens. Environ. 1995, 51, 375–384.
100. Erdle, K.; Mistele, B.; Schmidhalter, U. Comparison of active and passive spectral sensors in discriminating biomass parameters and nitrogen status in wheat cultivars. Field Crop. Res. 2011, 124, 74–84.
101. Jasper, J.; Reusch, S.; Link, A. Active sensing of the N status of wheat using optimized wavelength combination: Impact of seed rate, variety and growth stage. Precis. Agric. 2009, 9, 23–30.
102. Pearson, R.L.; Miller, L.D. Remote Mapping of Standing Crop Biomass for Estimation of the Productivity of the Shortgrass Prairie, Pawnee National Grasslands, Colorado; Department of Watershed Sciences, College of Forestry and Natural Resources, Colorado State University: Fort Collins, CO, USA, 1972.
103. Huete, A.R. A Soil-Adjusted Vegetation Index (SAVI). Remote Sens. Environ. 1988, 25, 295–309.
104. Zietsman, H.L.; Sandham, L. Surface Temperature Measurement from Space: A Case Study in the South Western Cape of South Africa. S. Afr. J. Enol. Vitic. 1997, 18, 25–30.
Figure 1. Location of the study area.
Figure 2. An example of UAV data acquisition, processing, and canopy feature extraction workflow.
Figure 3. Flowchart of a supervised machine learning model in MATLAB.
Figure 4. An example of DNN-MLP structure, composed of input, hidden, and output layers.
Figure 5. Dynamic changes in FW (a), DW (b), and EWT_canopy (c) at different growth stages.
Figure 6. Feature importance selection of vegetation indices (VIs) based on the decision tree for EWT_canopy (a), FW (b), and DW (c).
Figure 7. Scatter plots of predicted EWT_canopy according to multiple linear regression (MLR, a), boosted regression tree (BRT, b), artificial neural network (ANN, c), deep neural network (DNN, d), support vector machine Gaussian (SVM-Gaussian, e), and support vector machine polynomial (SVM-Polynomial, f) models versus the EWT_canopy values measured from all datasets.
Figure 8. Comparative plots between EWT_canopy values predicted using multiple linear regression (MLR, a), boosted regression tree (BRT, b), artificial neural network (ANN, c), deep neural network (DNN, d), support vector machine Gaussian (SVM-Gaussian, e), and support vector machine polynomial (SVM-Polynomial, f) models and the measured EWT_canopy values from all datasets.
Figure 9. Scatter plots of EWT_canopy calculated from predicted FW and DW according to multiple linear regression (MLR, a), boosted regression tree (BRT, b), artificial neural network (ANN, c), deep neural network (DNN, d), support vector machine Gaussian (SVM-Gaussian, e), and support vector machine polynomial (SVM-Polynomial, f) models versus the measured EWT_canopy values from all datasets.
Figure 10. EWT_canopy maps of the field experiments at the anthesis stage derived via multiple linear regression (MLR, a), boosted regression tree (BRT, b), artificial neural network (ANN, c), deep neural network (DNN, d), support vector machine Gaussian (SVM-Gaussian, e), and support vector machine polynomial (SVM-Polynomial, f) models.
Table 1. Description of the abbreviations.

| Name | Description |
|---|---|
| ANN-MLP | artificial neural network–multilayer perceptron |
| BRT | boosted tree regression |
| DT | decision tree |
| DNN-MLP | deep neural network–multilayer perceptron |
| FS | feature selection |
| ML | machine learning |
| NIR | near-infrared region |
| NN | neural network |
| SWIR | short-wavelength infrared region |
| SVMs | support vector machines |
| UAV | unmanned aerial vehicle |
| VIs | vegetation indices |
Table 2. Description of parameters and variables.

| Parameter/Variable | Description | Unit |
|---|---|---|
| DW | dry weight | t·ha−1 |
| EWT | equivalent water thickness | g·cm−2 or cm |
| FW | fresh weight | t·ha−1 |
| MAE | mean absolute error | g·cm−2 or cm |
| MLR | multiple linear regression | g·cm−2 or cm |
| NSE | Nash–Sutcliffe efficiency | – |
| RMSE | root mean square error | – |
| R2 | coefficient of determination | – |
| X_n | input variable | – |
| β_n | regression coefficient associated with the X_n input variable | – |
| X_norm | normalized value of the input variable | – |
| X_i | real value of the input variable | – |
| X_min | minimum input variable | – |
| X_max | maximum input variable | – |
| Y_norm | denormalized value of the output variable | g·cm−2 or cm |
| Y_i | real value of the output variable | g·cm−2 or cm |
| Y_min | minimum output variable | g·cm−2 or cm |
| Y_max | maximum output variable | g·cm−2 or cm |
| f(net_i) | transfer function of the neural network | – |
| x_i | input from unit i | – |
| w_ij | weight of the connection between unit i and unit j | – |
| b_i | bias | – |
| e_i | backpropagation error | – |
| δ | summation index that enforces j > i | – |
| e | products errors | – |
| ε | injected errors | – |
| ŷ_i | predicted values | g·cm−2 or cm |
| ŷ̄_i | actual values | g·cm−2 or cm |
| ȳ_i | mean of the observed values | g·cm−2 or cm |
| y_i | mean of predicted values | g·cm−2 or cm |
| n | number of data points | – |
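Table 2 lists the normalization variables (X_norm, X_min, X_max) used before network training and the denormalization of the output (Y_norm, Y_min, Y_max). The exact formulas are not given in the table, so the sketch below assumes the standard min-max forms.

```python
# Min-max scaling assumed from the X_norm / Y_norm entries in Table 2.
def normalize(x, x_min, x_max):
    # map a real-valued input onto [0, 1]
    return (x - x_min) / (x_max - x_min)

def denormalize(y_norm, y_min, y_max):
    # map a network output in [0, 1] back to physical units (e.g., g·cm−2)
    return y_min + y_norm * (y_max - y_min)

# Example using the overall EWT_canopy range reported in Table 5 (0.011–0.567
# g·cm−2) to scale a sample value; the sample itself is illustrative.
scaled = normalize(0.184, 0.011, 0.567)
restored = denormalize(scaled, 0.011, 0.567)
```

Denormalizing a normalized value recovers the original input, which is why the predicted EWT can be reported back in g·cm−2 after training on scaled data.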
Table 3. Specifications of sensors used in the present study.

| Band | Bandwidth (nm) | Center Wavelength (nm) | Picture Resolution |
|---|---|---|---|
| Blue | 20 | 475 | 1280 × 960 |
| Green | 20 | 560 | 1280 × 960 |
| NIR | 40 | 840 | 1280 × 960 |
| Red | 10 | 668 | 1280 × 960 |
| Red Edge | 10 | 717 | 1280 × 960 |
Table 4. Flight details for the automated unmanned aerial vehicle imagery system during the 2020 wheat growing season.

| Period | Flight Altitude (m) | Speed (m·s−1) | Snapshot Interval (s) | Growth Stage |
|---|---|---|---|---|
| 7 March 2020 | 30 | 2.5 | 2.5 | Early stem elongation |
| 4 April 2020 | 30 | 2.5 | 2.5 | Late stem elongation |
| 28 May 2020 | 30 | 2.5 | 2.5 | Anthesis |
Table 5. Statistics (range, mean, standard deviation) for in situ measured EWT_leaf, EWT_stem, EWT_ear, and EWT_canopy.

| Variable | Statistic | Exp. 1 | Exp. 2 | Exp. 3 |
|---|---|---|---|---|
| EWT_leaf | Range (g·cm−2) | 0.003–0.296 | 0.014–0.186 | 0.001–0.158 |
| EWT_leaf | Mean (std) (g·cm−2) | 0.084 (0.072) | 0.077 (0.041) | 0.056 (0.05) |
| EWT_stem | Range (g·cm−2) | 0.013–0.175 | 0.041–0.382 | 0.004–0.259 |
| EWT_stem | Mean (std) (g·cm−2) | 0.081 (0.037) | 0.166 (0.094) | 0.091 (0.062) |
| EWT_ear | Range (g·cm−2) | 0.01–0.112 | 0.039–0.108 | 0.006–0.082 |
| EWT_ear | Mean (std) (g·cm−2) | 0.055 (0.016) | 0.077 (0.019) | 0.039 (0.022) |
| EWT_canopy | Range (g·cm−2) | 0.03–0.442 | 0.09–0.567 | 0.011–0.459 |
| EWT_canopy | Mean (std) (g·cm−2) | 0.184 (0.089) | 0.268 (0.126) | 0.173 (0.113) |
Table 6. Summary statistics of the multiple linear regression model.

| Statistic | Value |
|---|---|
| Multiple R | 0.9185 |
| R Square | 0.8436 |
| Adjusted R Square | 0.8406 |
| Standard Error | 0.0439 |
| Intercept | 0.4312 |
| G (β1) | −0.5454 |
| MTVI2 (β2) | 1.4150 |
| RE (β3) | 0.8637 |
| OSAVI (β4) | −1.3980 |
| NIR (β5) | −0.3555 |
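The coefficients in Table 6 fully specify the fitted MLR model, EWT_canopy = intercept + β1·G + β2·MTVI2 + β3·RE + β4·OSAVI + β5·NIR. The sketch below assembles that equation; the sample VI values passed in are hypothetical illustration inputs, not measurements from the study.

```python
# MLR prediction assembled from the Table 6 intercept and beta coefficients.
INTERCEPT = 0.4312
BETA = {"G": -0.5454, "MTVI2": 1.4150, "RE": 0.8637,
        "OSAVI": -1.3980, "NIR": -0.3555}

def predict_ewt(vis):
    """Predict EWT_canopy (g·cm−2) from a dict of the five selected VIs."""
    return INTERCEPT + sum(BETA[name] * value for name, value in vis.items())

# Hypothetical VI values for a single plot, for illustration only.
sample = {"G": 0.10, "MTVI2": 0.50, "RE": 0.20, "OSAVI": 0.40, "NIR": 0.30}
ewt_hat = predict_ewt(sample)   # → 0.59105
```

The large positive weight on MTVI2 and negative weight on OSAVI show that the MLR prediction is driven by contrasts between the selected indices rather than by any single band.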
Table 7. Statistical performance indices of the different machine learning (ML) algorithms during the training, cross-validation, and testing stages for EWT_canopy, FW, and DW. Each cell lists R2, E_NS, RMSE, and MAE.

| Model | Variable | Training (R2, E_NS, RMSE, MAE) | Cross-Validation (R2, E_NS, RMSE, MAE) | Testing (R2, E_NS, RMSE, MAE) |
|---|---|---|---|---|
| ANN-MLP | EWT_canopy (%) | 0.916, 0.916, 0.032, 0.021 | 0.922, 0.915, 0.034, 0.025 | 0.905, 0.894, 0.0334, 0.019 |
| ANN-MLP | FW (t/ha) | 0.905, 0.904, 3.889, 2.583 | 0.909, 0.907, 3.944, 2.395 | 0.954, 0.952, 2.629, 1.976 |
| ANN-MLP | DW (t/ha) | 0.868, 0.868, 0.680, 0.508 | 0.875, 0.864, 0.574, 0.449 | 0.924, 0.922, 0.512, 0.391 |
| DNN-MLP | EWT_canopy (%) | 0.938, 0.937, 0.027, 0.015 | 0.933, 0.930, 0.030, 0.021 | 0.913, 0.909, 0.034, 0.022 |
| DNN-MLP | FW (t/ha) | 0.934, 0.934, 3.215, 1.953 | 0.914, 0.897, 4.085, 2.413 | 0.903, 0.902, 3.943, 2.659 |
| DNN-MLP | DW (t/ha) | 0.900, 0.900, 0.571, 0.413 | 0.894, 0.893, 0.508, 0.421 | 0.882, 0.881, 0.701, 0.531 |
| BRT | EWT_canopy (%) | 0.948, 0.947, 0.026, 0.017 | 0.893, 0.868, 0.035, 0.025 | 0.872, 0.868, 0.039, 0.027 |
| BRT | FW (t/ha) | 0.928, 0.928, 3.377, 2.175 | 0.885, 0.883, 4.204, 2.922 | 0.902, 0.900, 3.972, 2.738 |
| BRT | DW (t/ha) | 0.917, 0.917, 0.537, 0.423 | 0.778, 0.758, 0.854, 0.690 | 0.814, 0.803, 0.759, 0.531 |
| SVM-Gaussian | EWT_canopy (%) | 0.955, 0.950, 0.024, 0.015 | 0.908, 0.904, 0.032, 0.026 | 0.915, 0.907, 0.035, 0.025 |
| SVM-Gaussian | FW (t/ha) | 0.937, 0.936, 3.192, 1.873 | 0.880, 0.878, 4.299, 3.041 | 0.925, 0.898, 4.007, 2.937 |
| SVM-Gaussian | DW (t/ha) | 0.922, 0.920, 0.516, 0.352 | 0.864, 0.864, 0.626, 0.502 | 0.924, 0.922, 0.519, 0.434 |
| SVM-Polynomial | EWT_canopy (%) | 0.900, 0.899, 0.035, 0.023 | 0.846, 0.843, 0.043, 0.027 | 0.902, 0.899, 0.036, 0.023 |
| SVM-Polynomial | FW (t/ha) | 0.892, 0.892, 4.097, 2.769 | 0.857, 0.857, 4.783, 2.978 | 0.852, 0.850, 4.976, 2.868 |
| SVM-Polynomial | DW (t/ha) | 0.861, 0.860, 0.684, 0.514 | 0.821, 0.812, 0.735, 0.571 | 0.894, 0.888, 0.623, 0.505 |
Table 8. Comparative performance statistics of the machine learning (ML) models employed in the study.

| Performance Rank | Model | R2 | NSE | RMSE | MAE |
|---|---|---|---|---|---|
| 1 | SVM-Gaussian | 0.942 | 0.937 | 0.027 | 0.018 |
| 2 | DNN-MLP | 0.934 | 0.933 | 0.028 | 0.017 |
| 3 | BRT | 0.926 | 0.926 | 0.030 | 0.019 |
| 4 | ANN-MLP | 0.914 | 0.914 | 0.032 | 0.021 |
| 5 | SVM-Polynomial | 0.892 | 0.891 | 0.036 | 0.023 |
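The four statistics used in Tables 7 and 8 can be computed as below. R2 is taken here as the squared Pearson correlation and NSE as the Nash–Sutcliffe efficiency [59]; these are the conventional formulations, assumed rather than quoted from the authors' code.

```python
# Sketch of the evaluation statistics reported in Tables 7 and 8.
import numpy as np

def r2(obs, pred):
    # coefficient of determination as squared Pearson correlation
    return np.corrcoef(obs, pred)[0, 1] ** 2

def nse(obs, pred):
    # Nash–Sutcliffe efficiency: 1 − residual variance / observed variance
    return 1.0 - np.sum((obs - pred) ** 2) / np.sum((obs - obs.mean()) ** 2)

def rmse(obs, pred):
    # root mean square error, same units as the variable (e.g., g·cm−2)
    return np.sqrt(np.mean((obs - pred) ** 2))

def mae(obs, pred):
    # mean absolute error, same units as the variable
    return np.mean(np.abs(obs - pred))
```

A perfect prediction yields R2 = NSE = 1 and RMSE = MAE = 0, so the ranking in Table 8 orders models by how closely they approach those limits.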
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Traore, A.; Ata-Ul-Karim, S.T.; Duan, A.; Soothar, M.K.; Traore, S.; Zhao, B. Predicting Equivalent Water Thickness in Wheat Using UAV Mounted Multispectral Sensor through Deep Learning Techniques. Remote Sens. 2021, 13, 4476. https://doi.org/10.3390/rs13214476