Article

Monitoring Maize Canopy Chlorophyll Content throughout the Growth Stages Based on UAV MS and RGB Feature Fusion

1 Yunnan International Joint Laboratory of Crop Smart Production, Yunnan Agricultural University, Kunming 650231, China
2 The Key Laboratory of Crop Production and Smart Agriculture of Yunnan Province, Yunnan Agricultural University, Kunming 650231, China
3 Dehong Agricultural Technology Extension Centre, Dehong 678499, China
4 School of Agriculture, Yunnan University, Kunming 650231, China
* Author to whom correspondence should be addressed.
Agriculture 2024, 14(8), 1265; https://doi.org/10.3390/agriculture14081265
Submission received: 13 June 2024 / Revised: 25 July 2024 / Accepted: 31 July 2024 / Published: 1 August 2024

Abstract

Chlorophyll content is an important physiological indicator reflecting the growth status of crops. Traditional methods for obtaining crop chlorophyll content are time-consuming and labor-intensive. The rapid development of UAV remote sensing platforms offers new possibilities for monitoring chlorophyll content in field crops. To improve the efficiency and accuracy of monitoring chlorophyll content in maize canopies, this study collected RGB, multispectral (MS), and SPAD data from maize canopies at the jointing, tasseling, and grouting stages, constructing a dataset with fused features. We developed maize canopy chlorophyll content monitoring models based on four machine learning algorithms: BP neural network (BP), multilayer perceptron (MLP), support vector regression (SVR), and gradient boosting decision tree (GBDT). The results showed that, compared to single-feature methods, the MS and RGB fused feature method achieved higher monitoring accuracy, with R² values ranging from 0.808 to 0.896, RMSE values between 2.699 and 3.092, and NRMSE values between 10.36% and 12.26%. The SVR model combined with MS–RGB fused feature data outperformed the BP, MLP, and GBDT models in monitoring maize canopy chlorophyll content, achieving an R² of 0.896, an RMSE of 2.746, and an NRMSE of 10.36%. In summary, this study demonstrates that by using the MS–RGB fused feature method and the SVR model, the accuracy of chlorophyll content monitoring can be effectively improved. This approach reduces the need for traditional methods of measuring chlorophyll content in maize canopies and facilitates real-time management of maize crop nutrition.

1. Introduction

As one of the major food crops, maize (Zea mays L.) holds a significant position in the global agricultural industry [1]. Effectively monitoring maize growth, especially chlorophyll content, is crucial for increasing maize yield and ensuring food supply. Chlorophyll content is an important physiological indicator reflecting light interception and absorption and crop growth status [2]. It is closely related to biomass accumulation and yield [3]. Traditional methods for obtaining crop chlorophyll content, such as laboratory analysis and field surveys, are destructive, time-consuming, labor-intensive, and susceptible to subjective factors, resulting in poor timeliness and accuracy of information [4]. The SPAD-502 Plus handheld chlorophyll meter, manufactured by Konica Minolta Co., Ltd., in Japan, allows for non-destructive measurement and real-time display of leaf chlorophyll content [5]. However, this device is inefficient in field environments and unsuitable for monitoring chlorophyll content over large crop areas. Therefore, finding efficient and rapid methods for chlorophyll content monitoring has become a research focus.
In recent years, Unmanned Aerial Vehicles (UAVs) have rapidly developed in the civilian sector, especially in photogrammetry and remote sensing [6]. Unlike ground-based measurement methods, UAV remote sensing high-throughput phenotyping platforms have become crucial for acquiring field crop phenotypic information due to their flexibility, low cost, and extensive spatial coverage [7]. While satellite remote sensing also offers advantages such as wide-area coverage, it suffers from poor timeliness and low spatial resolution, posing certain limitations in crop phenotyping [8]. UAV remote sensing, as an effective complement to satellite platforms, provides a cost-effective means of data acquisition [9]. UAV remote sensing technology not only addresses the shortcomings of satellite-based agricultural remote sensing but also extends its application scope. UAV remote sensing is especially valuable in monitoring small crops and inverting crop phenotypes such as leaf area index and leaf chlorophyll content, which are closely related to canopy structure and light energy status [10]. The high mobility of UAV remote sensing significantly improves monitoring accuracy [11,12,13]. Yin et al. [14] collected leaf chlorophyll content (LCC) inversion data for four winter wheat varieties and demonstrated that combining the optimal feature selection method with machine learning algorithms can more accurately estimate LCC values. Guo et al. [15] used UAV hyperspectral and multispectral data combined with radiative transfer models and machine learning regression algorithms to construct a maize leaf area index (LAI) inversion algorithm. This model was optimized using active learning (AL) algorithms, significantly improving model accuracy. Qi et al. [16] calculated eight vegetation indices based on multispectral UAV photography at different plant densities and established various monitoring models, finding that the BP neural network was the most suitable model for monitoring peanut chlorophyll content. Most related studies have focused on UAV multispectral data. Multispectral vegetation indices (such as NDVI, EVI, etc.) have been widely used in crop phenotyping monitoring, assessing vegetation growth status by analyzing plant reflectance at different wavelengths [14,17]. However, the spectral information contained in multispectral images is limited [18]. Relying solely on multispectral vegetation indices to monitor crop phenotypes does not yield ideal results. Besides vegetation indices, crop phenotypes can also be monitored by analyzing color and texture feature information from RGB images [19]. MS images and RGB images are two commonly used data sources, but research on using low-cost RGB images to monitor field crop chlorophyll content is limited [20]. Among the hundreds of vegetation indices proposed globally, most involve the visible–near-infrared bands, requiring multispectral UAVs. Given the current state of the UAV market, RGB band UAVs are more accessible and cost-effective [21,22,23]. UAV remote sensing with RGB images has lower environmental requirements, making it valuable for monitoring field crop phenotypic characteristics [24]. Xu et al. [19] developed a method for estimating leaf chlorophyll content index (CCI) based on the relationship between in situ leaf CCI measurements and UAV RGB images, finding that linear regression models and backpropagation neural network models performed well in estimating leaf CCI.
The studies mentioned predominantly used relatively simple single features, which may be affected by factors such as lighting conditions and pest stress. Previous research has shown that multi-source feature fusion can significantly enhance crop phenotyping accuracy compared to single-feature approaches [25,26]. Yu et al. [20] proposed an algorithm that fused visible and infrared images obtained simultaneously by UAV, overcoming the blockiness and color distortion issues inherent in traditional image fusion methods, resulting in smooth, detailed, and high-resolution images. Liu et al. [27] used UAV-collected visible and thermal infrared images, applying particle swarm optimization and support vector machine algorithms to develop a comprehensive rice lodging identification model. Their results demonstrated that combining visible and thermal infrared image features significantly improved the accuracy of rice lodging identification, with an R2 greater than 0.9 for lodging rate estimation. Similarly, the fusion of MS and RGB features can be an effective method to improve the accuracy of crop chlorophyll content monitoring. MS data contain reflectance information from various bands, while RGB data include color and texture features, describing the spatial correlation of pixels and reflecting changes in vegetation structure [15,19]. Combining these advantages can bring new possibilities for crop phenotyping monitoring. López-Granados et al. [28] developed an automated OBIA (Object-Based Image Analysis) procedure based on visible (red–green–blue band) and multispectral (red–green–blue and near-infrared bands) camera orthophotos to enable object-based early monitoring of grass weeds in grass crops.
In this study, a UAV platform was employed to collect RGB and MS data for monitoring maize chlorophyll content. Four models were constructed using four machine learning algorithms, and the optimal model was selected. The specific objectives of this study were as follows: (1) compare the efficacy of RGB and MS data sources in canopy chlorophyll content monitoring; and (2) evaluate the potential of RGB and MS feature fusion in monitoring maize canopy chlorophyll content.

2. Materials and Methods

2.1. Study Area and Experimental Design

Field experiments were conducted in the Daxingzhai and Mabozi Villages (98°40′24″ E, 24°25′37″ N, and 98°35′90″ E, 24°25′76″ N, respectively), Mangshi, Dehong Prefecture, Yunnan Province, China, in 2023 (Figure 1). The region experiences a South Asian tropical monsoon climate. The annual average temperature is 20.2 °C, with a maximum of 36.2 °C and a minimum of −0.6 °C. The frost-free period exceeds 300 days. The annual average precipitation is 1659.3 mm, with spring averaging 225 mm, summer averaging 1015.9 mm, autumn averaging 361.9 mm, and winter averaging 51.8 mm. The annual average sunshine hours amount to 2252.9 h, with an average of 6.2 h per day. The soil in the experimental area is sandy loam with a pH of 7.27. It has an organic matter content of 16.9 g/kg, available nitrogen of 72.1 mg/kg, available phosphorus of 15 mg/kg, and available potassium of 91.3 mg/kg. The maize cultivar Beiyu 1521, with strong disease resistance, was used in the experiments. Maize was sown on 14 June 2023 and 14 July 2023, covering an area of approximately 1021 m2. Seeds were manually sown at a depth of 3–5 cm. Three planting densities were used: 5.7 plants/m2 (D1), 6.3 plants/m2 (D2), and 6.9 plants/m2 (D3). Row spacing was 70 cm for all densities, with plant spacing within rows set at 24, 22.7, and 20.7 cm for 5.7, 6.3, and 6.9 plants/m2, respectively. Each treatment was replicated three times. After emergence, pest control was conducted four times, with applications spaced 10 days apart, alternating between lambda-cyhalothrin and chlorfenapyr. All other field management measures followed local best practices.

2.2. Data Acquisition

2.2.1. Acquisition of Canopy Remote Sensing Data

Multispectral (MS) and RGB data were collected using a DJI Phantom 4 RTK UAV (SZ DJI Technology Co., Shenzhen, China) equipped with an integrated multispectral imaging system (Figure 2). The UAV includes one RGB sensor and five MS sensors (blue, green, red, red-edge, and near-infrared). A multispectral light intensity sensor on the UAV captures solar irradiance data, which is used to compensate for illumination variations during post-processing. The technical specifications of the sensors are shown in Table 1.
MS and RGB data were collected during the jointing stage (17 July 2023), tasseling stage (3 August 2023), and grouting stage (17 August 2023), between 12:00 and 14:00. On 17 July 2023, at 12:00, the temperature was 28.08 °C with a humidity of 52.99%, no precipitation, horizontal surface radiation of 922.99 W/m2, and ground wind speed of 0.869 m/s. At 14:00, the temperature rose to 29.39 °C with humidity decreasing to 47.29%, still no precipitation, horizontal surface radiation increasing to 943.99 W/m2, and ground wind speed increasing to 1.145 m/s. On 3 August 2023, at 12:00, the temperature was 24.06 °C with a higher humidity of 74.71%, a small amount of precipitation at 0.459 mm, horizontal surface radiation decreased to 647.49 W/m2, and ground wind speed increased to 1.449 m/s. At 14:00, the temperature increased to 25.56 °C, humidity decreased slightly to 68.51%, precipitation was 0.338 mm, horizontal surface radiation increased to 797.24 W/m2, and ground wind speed decreased to 0.835 m/s. On 17 August 2023, at 12:00, the temperature was 26.64 °C with a humidity of 59.93%, no precipitation, horizontal surface radiation of 865.74 W/m2, and ground wind speed of 0.870 m/s. At 14:00, the temperature increased to 27.55 °C with humidity slightly higher at 62.08%, no precipitation, horizontal surface radiation decreased to 836.49 W/m2, and ground wind speed increased to 1.145 m/s. Flight paths were planned using DJI GS PRO 2.0.17 software. Consistent parameters were maintained across all flights: a 15 m altitude, 12 m/s speed, 80% fore-and-aft overlap, and 70% side overlap, with a Ground Sampling Distance (GSD) of 0.8 cm/pixel. The camera gimbal pitch angle was −90.0 degrees. Waypoint hover scanning was used, with a reprojection error RMS of 0.84 pixels and an xy georeferencing RMSE of 0.78 cm. Gray plate radiometric correction was applied, employing two 3 m × 3 m gray plates to correct the reflectance of the images and remove interfering factors. The acquired UAV images were processed using DJI Terra 3.8.0 software to generate digital orthophoto maps.

2.2.2. Acquisition of Canopy Chlorophyll Content

SPAD values reflect the level of chlorophyll content, allowing chlorophyll levels to be determined from SPAD readings [29,30]. Leaf chlorophyll content was measured with a handheld SPAD-502 Plus chlorophyll meter (Konica Minolta Holdings, Tokyo, Japan) on five well-grown maize plants randomly selected in the experimental plots and marked on the mulch with reflective tape; measurement positions avoided the leaf veins. Chlorophyll content was measured on two to five leaves per plant, working from the top of the plant downwards; three to five readings were taken at random positions on each leaf according to its size, and their average was taken as that leaf's chlorophyll content. The mean chlorophyll content of the measured leaves was taken to represent the canopy chlorophyll content of the labeled plant.
The statistical analysis of maize leaf chlorophyll content at growth stages revealed an accumulation of chlorophyll content as the maize grew (Figure 3). The smallest mean value of chlorophyll content was observed at the jointing stage of maize, measuring 42.209, with a range of 29.2 to 49.8. The largest mean value of chlorophyll content was observed at the grouting stage of maize, measuring 51.718, with a range of 43.8 to 60.1. The chlorophyll content data in the growth stages of maize exhibited a decreasing trend in standard deviation and variance from the jointing stage to the grouting stage.

2.3. Features Extraction

2.3.1. MS Features Extraction

The remote sensing images were processed using DJI Terra 3.8.0 and ENVI 5.6 software. Firstly, lens distortion and vignetting effects were corrected. Then, the images were radiometrically corrected using solar irradiance data captured by the multispectral light intensity sensor in combination with the standard reflectance values from the calibration whiteboard. The images captured by the UAV were combined in DJI Terra 3.8.0 software to create an orthorectified reflectance map of the test area in each band. This map was then opened in ENVI 5.6 software, and the reflectance values of the region of interest in each band were extracted to calculate the vegetation index. Eleven vegetation indices known to be strongly correlated with crop growth status were selected for this study, and the calculation method for each VI is presented in Table 2.
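As an illustration, the minimal sketch below computes three of the vegetation indices used as MS features (NDVI, GNDVI, and GCI) from plot-mean band reflectance; the function name and example reflectance values are hypothetical, and the remaining indices in Table 2 follow the same pattern.

```python
import numpy as np

def vegetation_indices(red, green, nir):
    """Compute three MS vegetation indices from mean band reflectance (0-1)
    of a region of interest; inputs may be scalars or arrays."""
    red, green, nir = map(np.asarray, (red, green, nir))
    ndvi = (nir - red) / (nir + red)        # Normalized Difference Vegetation Index
    gndvi = (nir - green) / (nir + green)   # Green NDVI
    gci = nir / green - 1.0                 # Green Chlorophyll Index
    return {"NDVI": ndvi, "GNDVI": gndvi, "GCI": gci}

# Hypothetical plot-mean reflectance values for a healthy maize canopy
print(vegetation_indices(red=0.05, green=0.08, nir=0.45))
```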

2.3.2. RGB Features Extraction

This study extracted color and texture features from RGB data; texture features describe the spatial arrangement of pixels and the grayscale variations between them. To extract texture features, the image was first converted to grayscale. The Gray-Level Co-occurrence Matrix (GLCM) was then used to extract texture information by describing the co-occurrence relationship of grayscale levels between neighboring pixels. Texture features of the RGB data were extracted in the 45° direction, and the average value was taken as the texture feature of the region of interest.
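A minimal sketch of this GLCM texture extraction using scikit-image is shown below; the one-pixel distance, gray-level quantization, and the six properties returned are illustrative choices (the 45° direction follows the text), and additional texture statistics in Table 3 would be computed from the same matrix.

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops

def glcm_features_45deg(gray_patch, levels=32):
    """GLCM texture features of a grayscale canopy patch in the 45-degree direction."""
    # Quantize the 8-bit image to `levels` gray levels so the GLCM stays compact.
    quantized = (gray_patch.astype(np.float32) / 256.0 * levels).astype(np.uint8)
    glcm = graycomatrix(quantized, distances=[1], angles=[np.pi / 4],
                        levels=levels, symmetric=True, normed=True)
    props = ["contrast", "dissimilarity", "homogeneity", "energy", "correlation", "ASM"]
    return {p: float(graycoprops(glcm, p)[0, 0]) for p in props}
```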
To minimize the impact of factors such as weeds, soil, and plastic film, and to maximize the correlation between chlorophyll content and the color and texture features, this study combined threshold segmentation and morphological processing to remove irrelevant background. First, the RGB images were converted to the HSV color space, where the color of the maize canopy falls within specific ranges of hue, saturation, and brightness. The images were then thresholded based on the canopy’s color range to generate a canopy mask. The mask underwent a closing operation to fill small holes within the canopy area, ensuring the continuity of the canopy region and preventing fragmentation in subsequent processing. Next, the mask underwent an opening operation to smooth the boundaries of the canopy region, reducing unnecessary jagged edges and making the background-removed image appear more natural. The resulting canopy mask, when applied to the original RGB images, effectively removed the irrelevant background, as shown in Figure 4.
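This background-removal step can be sketched with OpenCV as follows; the HSV bounds and kernel size are illustrative placeholders that would need tuning to the actual canopy imagery.

```python
import cv2
import numpy as np

def remove_background(bgr_image, lower_hsv=(30, 40, 40), upper_hsv=(90, 255, 255)):
    """Threshold the maize canopy in HSV space, then close and open the mask
    before masking out soil, mulch, and weeds."""
    hsv = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, np.array(lower_hsv, np.uint8), np.array(upper_hsv, np.uint8))
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
    mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)  # fill small holes in the canopy
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)   # smooth jagged canopy boundaries
    return cv2.bitwise_and(bgr_image, bgr_image, mask=mask)  # background removed
```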
This study selected 10 color features and 8 texture features as input variables for the model. The parameters and their calculation formulas for each feature are shown in Table 3.

2.4. Data Processing

The overall workflow of this study comprises three main parts: data acquisition, data processing, and model assessment (Figure 5). The specific steps are as follows:
(1) Three datasets were constructed, including vegetation index data from MS data, color and texture feature data from RGB data, and the merging of feature datasets from MS and RGB data into a comprehensive feature dataset. These datasets represent the entire growth period of maize through three growth stages.
(2) Models were constructed using four algorithms: BP, MLP, SVR, and GBDT. These models were developed to monitor the canopy chlorophyll content of maize. The performance of the models was evaluated using R2, RMSE, and NRMSE.
(3) The potential of MS data, RGB data, and fusion MS and RGB data for monitoring maize canopy chlorophyll content was evaluated.
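As a concrete illustration of step (1), the sketch below merges per-plot MS, RGB, and SPAD tables into the fused feature dataset; the table sizes, column names, and random stand-in values are hypothetical, not the study's actual data.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
plot_id = np.arange(27)  # illustrative number of plot-level samples
ms_features = pd.DataFrame(rng.normal(size=(27, 11)),
                           columns=[f"VI_{i}" for i in range(11)]).assign(plot_id=plot_id)
rgb_features = pd.DataFrame(rng.normal(size=(27, 18)),
                            columns=[f"CT_{i}" for i in range(18)]).assign(plot_id=plot_id)
spad = pd.DataFrame({"plot_id": plot_id, "SPAD": rng.normal(45, 5, size=27)})

# Fused MS-RGB feature dataset used as model input; SPAD is the target.
fused = ms_features.merge(rgb_features, on="plot_id").merge(spad, on="plot_id")
X = fused.drop(columns=["plot_id", "SPAD"])
y = fused["SPAD"]
```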

2.5. Estimation Methods

Four canopy chlorophyll content monitoring models were developed using four algorithms, including backpropagation neural network (BP), Multilayer Perceptron (MLP), support vector regression (SVR), and Gradient Boosting Decision Tree (GBDT).
The BP neural network is a neural network model trained with the backpropagation algorithm [50]. In the BP model, gradient descent optimization is implemented through backpropagation, which continuously adjusts the weights and bias terms so that the model’s predictions approach the true values [51]. In this study, a custom layer (CustomLayer) was defined as the input layer; in the forward pass it applies the sigmoid activation function, and in the backward pass it uses the error-correction learning rule. During training, the model iterates over the training data and automatically updates the weights and bias terms via backpropagation to minimize the loss function. In each iteration, the adjustments to the weights and bias terms are computed from the gradient of the loss function and then applied with the Adam optimizer. In this way, the model learns appropriate weights and bias terms over multiple iterations, improving prediction accuracy.
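A minimal numpy sketch of a one-hidden-layer BP regressor with a sigmoid hidden layer is given below; it uses plain gradient descent for brevity (the study reports updating parameters with Adam), and the layer size, learning rate, and epoch count are illustrative.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def train_bp(X, y, hidden=16, lr=0.01, epochs=2000, seed=0):
    """One-hidden-layer BP regressor: sigmoid hidden layer, linear output,
    parameters updated by gradient descent on the mean-squared error."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    W1, b1 = rng.normal(0, 0.1, (d, hidden)), np.zeros(hidden)
    W2, b2 = rng.normal(0, 0.1, (hidden, 1)), np.zeros(1)
    y = np.asarray(y, float).reshape(-1, 1)
    for _ in range(epochs):
        h = sigmoid(X @ W1 + b1)          # forward pass through the hidden layer
        pred = h @ W2 + b2                # linear output layer
        err = pred - y                    # gradient of 0.5 * MSE w.r.t. the prediction
        dW2 = h.T @ err / n
        db2 = err.mean(axis=0)
        dh = (err @ W2.T) * h * (1 - h)   # backpropagate through the sigmoid
        dW1 = X.T @ dh / n
        db1 = dh.mean(axis=0)
        W1 -= lr * dW1; b1 -= lr * db1
        W2 -= lr * dW2; b2 -= lr * db2
    return W1, b1, W2, b2

# Example: params = train_bp(X_train_scaled, y_train)  # features should be standardized first
```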
The MLP is a machine learning model based on artificial neural networks; it consists of multiple layers of interconnected neurons trained with activation functions and the backpropagation algorithm [52]. The MLP can learn complex nonlinear relationships and automatically adjusts its parameters through backpropagation [53,54,55]. In this study, the MLP model was constructed by stacking multiple dense layers, each with a specified number of neurons and an activation function, and trained with the Adam optimizer using the mean-squared error (MSE) as the loss function. Adam adaptively adjusts the learning rate for each parameter according to its gradient, and because the MSE is continuously differentiable, the optimizer can effectively minimize it and optimize the model parameters.
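A hedged sketch of such an MLP using scikit-learn's MLPRegressor (Adam solver, squared-error loss) is shown below; the hidden-layer sizes are illustrative, not those used in the study, and the random stand-in data replace the 8:2 training split of the fused feature dataset described in Section 2.6.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Stand-in for the training split of the fused dataset (11 VIs + 18 color/texture features).
rng = np.random.default_rng(0)
X_train, y_train = rng.normal(size=(150, 29)), rng.normal(45, 5, size=150)

mlp = make_pipeline(
    StandardScaler(),
    MLPRegressor(hidden_layer_sizes=(64, 32), activation="relu",
                 solver="adam", max_iter=3000, random_state=0),
)
mlp.fit(X_train, y_train)
spad_pred = mlp.predict(X_train[:5])
```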
SVR is a machine learning model for regression problems, based on the Support Vector Machine (SVM) algorithm and extended for regression tasks [56,57]. SVR handles nonlinear relationships and noisy data and can be flexibly adapted to different regression tasks by choosing an appropriate kernel function and adjusting its parameters [58,59,60]; tuning the tolerance and regularization parameters balances model accuracy and complexity. In this study, the Radial Basis Function (RBF) was selected as the kernel function, and k-fold cross-validation was used to find the optimal parameter combination and improve the generalization ability of the model.
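A sketch of the RBF-kernel SVR tuned with k-fold cross-validation via scikit-learn follows; the parameter grid, the five folds, and the stand-in training data are illustrative assumptions.

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.model_selection import GridSearchCV
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Stand-in for the training split of the fused MS-RGB feature dataset.
rng = np.random.default_rng(0)
X_train, y_train = rng.normal(size=(150, 29)), rng.normal(45, 5, size=150)

param_grid = {"svr__C": [1, 10, 100],
              "svr__gamma": ["scale", 0.01, 0.1],
              "svr__epsilon": [0.1, 0.5, 1.0]}
svr = make_pipeline(StandardScaler(), SVR(kernel="rbf"))
search = GridSearchCV(svr, param_grid, cv=5, scoring="neg_root_mean_squared_error")
search.fit(X_train, y_train)
best_svr = search.best_estimator_   # model refit on the whole training split
```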
GBDT is a machine learning model commonly used for regression and classification problems. It is based on the idea of ensemble learning: an ensemble of multiple decision trees is constructed and the model’s performance is optimized iteratively [61,62]. GBDT is fairly robust to outliers and missing values, performs well on high-dimensional sparse data, and has advantages in dealing with complex problems [63,64]. The learning rate, the number of base decision trees, and the maximum tree depth are the key parameters to be tuned. In this study, the model was fitted on the training set with a parameter search and cross-validation: all combinations in the parameter grid were exhaustively searched, and the performance of each combination was evaluated by cross-validation to automatically determine the best parameter combination during fitting.
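A corresponding sketch for GBDT, exhaustively searching the three key parameters named above with cross-validation, is given below; the grid values and stand-in data are illustrative.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import GridSearchCV

# Stand-in for the training split of the fused MS-RGB feature dataset.
rng = np.random.default_rng(0)
X_train, y_train = rng.normal(size=(150, 29)), rng.normal(45, 5, size=150)

param_grid = {
    "learning_rate": [0.01, 0.05, 0.1],   # shrinkage applied to each tree
    "n_estimators": [100, 300, 500],      # number of base decision trees
    "max_depth": [2, 3, 4],               # maximum depth of each tree
}
gbdt = GridSearchCV(GradientBoostingRegressor(random_state=0), param_grid,
                    cv=5, scoring="neg_root_mean_squared_error")
gbdt.fit(X_train, y_train)
print(gbdt.best_params_)
```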

2.6. Assessment Methods

This study divided the dataset into a training set and a test set in an 8:2 ratio to ensure the accuracy and reliability of model training and evaluation. After training, the model’s predictions were compared with the true values on the test set to assess its performance and prediction accuracy. Three metrics were selected for evaluation: the root-mean-squared error (RMSE), the normalized root-mean-squared error (NRMSE), and the coefficient of determination (R2). RMSE is the square root of the average squared difference between the predicted and actual values; NRMSE normalizes RMSE by the range of the target variable, allowing comparisons across target variables at different scales; and R2 evaluates the predictive power and explanatory effect of the model on the target variable. The formulas of RMSE, NRMSE, and R2 are given in Equations (1)–(3):
RMSE = \sqrt{\dfrac{\sum_{i=1}^{n} (y_p - y_a)^2}{n}}    (1)
NRMSE = \dfrac{RMSE}{y_{max} - y_{min}}    (2)
R^2 = 1 - \dfrac{\sum_{i=1}^{n} (y_a - y_p)^2}{\sum_{i=1}^{n} (y_a - y_m)^2}    (3)
where n is the number of samples, yp is the predicted value of the model, ya is the actual value, ymax and ymin are the maximum and minimum values of the target variable, and ym denotes the average of the actual values.
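The 8:2 split and the three metrics can be sketched as follows; the implementation mirrors Equations (1)–(3), with the random seed and the X, y variables (the fused feature table from the sketch in Section 2.4) as illustrative assumptions.

```python
import numpy as np
from sklearn.model_selection import train_test_split

# 8:2 split of the fused feature dataset (X, y as constructed in the earlier sketch).
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

def evaluate(y_actual, y_pred):
    """R2, RMSE, and NRMSE as defined in Equations (1)-(3)."""
    y_actual, y_pred = np.asarray(y_actual, float), np.asarray(y_pred, float)
    rmse = np.sqrt(np.mean((y_pred - y_actual) ** 2))
    nrmse = rmse / (y_actual.max() - y_actual.min())
    r2 = 1.0 - np.sum((y_actual - y_pred) ** 2) / np.sum((y_actual - y_actual.mean()) ** 2)
    return {"R2": r2, "RMSE": rmse, "NRMSE": nrmse}
```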

3. Results

3.1. Relationship between Features and Canopy Chlorophyll Content

When numerous input variables are involved in modeling, information redundancy can arise, reducing model performance. Correlation analysis between canopy chlorophyll content and the selected RGB and MS features was therefore conducted for the jointing, tasseling, and grouting stages and for the whole growth period. Features with correlations greater than or equal to 0.3 or less than or equal to −0.3 were chosen as input variables for the model. The results indicate that over the whole growth period, all features except Tcon (−0.229) are significantly correlated with canopy chlorophyll content (see Figure 6 and Figure 7). GCI, NDVI, GNDVI, R, G, and NB exhibit the highest correlations, suggesting that these features can effectively monitor canopy chlorophyll content.
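This correlation-based screening can be sketched as below, keeping features whose Pearson correlation with canopy SPAD is at least 0.3 in absolute value; `fused` refers to the hypothetical merged table from the Section 2.4 sketch, and the screening is applied to the pooled data here for simplicity rather than stage by stage.

```python
# Pearson correlation of every feature with the SPAD target, thresholded at |r| >= 0.3.
corr_with_spad = fused.drop(columns=["plot_id"]).corr()["SPAD"].drop("SPAD")
selected_features = corr_with_spad[corr_with_spad.abs() >= 0.3].index.tolist()
X_selected = fused[selected_features]
```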

3.2. Features Fusion Canopy Chlorophyll Content Monitoring

The scatter plots in Figure 8, Figure 9 and Figure 10 depict the monitoring of maize canopy chlorophyll content using vegetation index features from MS data, color and texture features from RGB data, and the fusion of MS and RGB data features, respectively. From Figure 9, it can be observed that the scatter plot exhibits significant dispersion, indicating considerable uncertainty and fluctuation in monitoring maize canopy chlorophyll content when using color and texture features as input variables for the model. In contrast, the scatter plot using the fusion of MS and RGB data features shows a closer alignment with the 1:1 line, suggesting better monitoring effectiveness for maize canopy chlorophyll content. Thus, utilizing spectral feature fusion methods can effectively overcome the limitations of single features, enhancing the stability and accuracy of the monitoring model.
The vegetation index features from MS data, color and texture features from RGB data, and the fusion of MS and RGB features were selected as input variables for the maize canopy chlorophyll content monitoring model. The monitoring was conducted using BP, MLP, SVR, and GBDT models. When utilizing vegetation index features for monitoring canopy chlorophyll content, SVR and GBDT models exhibited the highest accuracy, with R2 values of 0.797 and 0.758, RMSE values of 3.366 and 3.265, and NRMSE values of 13.16% and 12.50%, respectively. For monitoring canopy chlorophyll content using color and texture features, SVR and MLP models demonstrated the highest accuracy, with R2 values of 0.751 and 0.730, RMSE values of 3.893 and 3.994, and NRMSE values of 16.93% and 16.64%, respectively. It is noteworthy that compared to individual features, the fusion of MS and RGB data features significantly improved monitoring accuracy. The R2 values ranged from 0.808 to 0.896, RMSE values ranged from 2.699 to 3.092, and NRMSE values ranged from 10.36% to 12.26% across the four models. Among them, the SVR model performed the best with an R2 value of 0.896, RMSE of 2.746, and NRMSE of 10.36%.

3.3. BP, MLP, SVR, and GBDT Canopy Chlorophyll Content Monitoring

Comparing the accuracies of BP, MLP, SVR, and GBDT in canopy chlorophyll content monitoring shows that SVR achieves a higher R2 and lower RMSE and NRMSE than the other three machine learning algorithms. This demonstrates the robust stability and modeling capability of the SVR model in monitoring maize canopy chlorophyll content. Comparing BP, MLP, and GBDT shows that BP and GBDT exhibit similar accuracy, superior to that of MLP.

3.4. Comparison of Other Fusion Methods

To maximize the accuracy of crop phenotyping monitoring models based on UAV remote sensing, researchers are not satisfied with using a single data source but instead construct models using multi-source data. UAV remote sensing platforms can be equipped with hyperspectral sensors, multispectral sensors, RGB sensors, thermal infrared sensors, LiDAR, and more. Different sensors capture various types of crop phenotypic information. For example, thermal infrared sensors can obtain canopy temperature, which is influenced by leaf stomatal water vapor flux and is related to photosynthesis and transpiration, thereby reflecting crop growth status [65]. RGB sensors can capture canopy color and texture features, with texture features based on spatial variations between image pixels, providing additional information beyond reflectance [66]. Integrating information extracted from different sensors can often overcome the inherent limitations or deficiencies of individual sensors, thereby effectively improving the accuracy of crop phenotyping monitoring.
Ding et al. [67] collected MS, RGB, and thermal infrared (TIR) images, calculating vegetation index data from MS, canopy shade coverage from RGB, and the Normalized Relative Canopy Temperature Index from TIR to construct a multi-source data fusion dataset. They used machine learning algorithms to predict nitrogen content in winter wheat and found that MS images had higher accuracy than RGB and TIR images when used alone. This demonstrated that multi-source data fusion technology can enhance the prediction capability for winter wheat nitrogen content. Zhou et al. [25] collected MS and RGB data, calculating vegetation index features from MS (Dataset I), texture features from RGB (Dataset II), and wavelet coefficient features from RGB (Dataset III). They constructed a multi-feature dataset (Dataset IV) and used machine learning algorithms to estimate canopy chlorophyll density (CCD) for the entire growth period of maize. The results indicated that the RF model trained on Dataset IV provided the best estimation of CCD.
The aforementioned research demonstrates that multi-source data fusion technology can effectively improve the accuracy of crop phenotyping monitoring. However, in practical production and life, cost is also a crucial factor. Hyperspectral sensors and thermal infrared sensors are expensive, whereas RGB sensors are affordable and readily available. Extracting as many crop phenotyping-related features as possible from RGB sensors can improve monitoring accuracy while expanding the application of RGB drones in agriculture, enhancing the efficiency of agricultural producers, and reducing costs. This study, based on a UAV remote sensing platform, extracted 18 color and texture features from RGB images and 11 vegetation index features from MS images to construct an MS-RGB fusion feature dataset. The study found that the SVR algorithm, combined with the fusion feature dataset, achieved the highest monitoring accuracy, with an R2 of 0.896, an RMSE of 2.746, and an NRMSE of 10.36%.

3.5. Temporal and Spatial Distribution of Canopy Chlorophyll Content

The SVR model using fused MS and RGB data exhibited the highest monitoring accuracy. However, due to the relatively complex modeling process and longer model runtime of SVR, GBDT was ultimately chosen as the model for monitoring maize canopy chlorophyll content. Figure 11 illustrates the spatiotemporal distribution of maize canopy chlorophyll content across the three growth stages. In terms of time, there is an upward trend in canopy chlorophyll content from the jointing stage to the grouting stage, consistent with the observed changes in maize canopy chlorophyll content (as shown in Figure 3). Regarding spatial distribution, during the jointing stage, maize growth improves progressively with increasing planting density. However, during the tasseling and grouting stages, the impact of density on maize canopy chlorophyll content is minimal, with similar chlorophyll content observed across different density treatments.

4. Discussion

4.1. Comparison of the MS and RGB Features from UAV

UAV remote sensing platforms equipped with various types of sensors play a crucial role in field crop phenotyping monitoring. Hyperspectral sensors can capture continuous narrow spectral bands ranging from dozens to hundreds, but in practice, only a few bands are typically considered when constructing models [68,69]. LiDAR sensors can penetrate the crop canopy to obtain crop height information, monitor canopy structure, and biomass [70]. However, LiDAR sensors often struggle to accurately estimate relatively short crop canopies [71]. Multispectral sensors, due to their low-cost multi-band data acquisition advantage, are widely used in agriculture, forestry, and environmental monitoring. Numerous studies have shown that UAV remote sensing platforms equipped with multispectral cameras can effectively monitor crop phenotypic traits [72]. RGB sensors provide highly interpretable data, making it easier to relate extracted features to the physiological state of plants [21]. The use of RGB data is not limited to monitoring crop chlorophyll content but can also be applied to various other crop traits, such as maturity detection and pest and disease monitoring [27].
In this study, the canopy chlorophyll content of maize was monitored throughout its entire growth period using a UAV remote sensing platform equipped with a multispectral sensor. Eleven vegetation indices (VIs) were extracted from the MS data, and 18 color and texture features were extracted from the RGB data. Three datasets were constructed: one from the MS data, one from the RGB data, and a fused dataset combining features from both RGB and MS data. Four algorithms—Back Propagation (BP), Multilayer Perceptron (MLP), Support Vector Regression (SVR), and Gradient Boosting Decision Tree (GBDT)—were used to build four maize canopy chlorophyll content monitoring models. The results indicate that the SVR model demonstrated the best monitoring performance. Given that RGB sensors are typically cheaper and more readily available compared to MS sensors, utilizing UAVs equipped with RGB sensors holds significant potential for monitoring maize canopy chlorophyll content [23]. Figure 12 compares the performance of SVR models using MS data and RGB data in monitoring maize canopy chlorophyll content during the jointing, tasseling, and grouting stages. The comparison indicates that using RGB data for monitoring maize canopy chlorophyll content is slightly less effective than using MS data. Notably, at the jointing stage, the monitoring performance of the SVR model using RGB data is similar to that using MS data, with identical R2 values; RMSE is 3.023, higher by 0.367; and NRMSE is 14.81%, lower by 1.33%. Overall, UAVs equipped with RGB sensors also demonstrate good effectiveness in monitoring maize canopy chlorophyll content. This finding aligns with the study by Qiu et al. [71], which found that the performance of RGB UAV systems in monitoring rice growth is comparable to that of multispectral UAV systems. This analysis highlights the practicality and effectiveness of using RGB data for agricultural monitoring, particularly given the cost and availability advantages of RGB sensors. The comparable performance to MS data, especially in the early stages of growth, suggests that RGB-equipped UAVs are a viable option for large-scale agricultural monitoring.

4.2. Analysis of MS and RGB Feature Fusion Potential

Multispectral features are widely used for monitoring crop phenotypes such as chlorophyll content, leaf area index, and yield [73,74,75]. However, they are susceptible to errors due to sensor characteristics and environmental conditions. This study investigated whether fusing MS and RGB data features can enhance the accuracy of canopy chlorophyll content monitoring models compared to using either dataset alone. The results (Figure 10) demonstrate that feature fusion outperforms single features regarding monitoring accuracy. This can be attributed to three factors.
(1) Information Utilization: The fusion of MS and RGB data features can fully exploit the informational advantages of each type [76]. RGB data provide color and texture feature information, describing the spatial correlation of pixels and reflecting changes in vegetation structure [66]. MS data offer richer band information, enabling the monitoring of crop growth status by analyzing the reflectance characteristics of crops across different wavelengths [17].
(2) Improved Monitoring Accuracy: Using relatively simple single features can result in research outcomes being affected by factors such as lighting conditions and pest stress. The spectral information contained in MS data is limited, and solely relying on multispectral vegetation indices to monitor crop phenotypes often yields suboptimal results [18]. RGB data are significantly influenced by lighting intensity [19]. The fusion of MS and RGB features can enhance the model’s discriminative capability, thereby improving monitoring accuracy.
(3) Reduction of Environmental Impact: Fusion of MS and RGB information can reduce the influence of environmental conditions (such as changes in light and weather) on monitoring results [77]. By integrating information from different bands, the model exhibits better robustness and can better adapt to environmental changes.
Figure 13 illustrates the R2, RMSE, and NRMSE values of models using MS data, RGB data, and fused MS–RGB data across the three growth stages of maize and the entire growth period. The figure demonstrates that most algorithm models achieve higher R2 values and lower RMSE and NRMSE values when utilizing fused data compared to using either MS or RGB data alone. This trend is evident in the graph, where the third point of each polyline consistently shows the highest or lowest values. This indicates that, irrespective of the algorithm used or the maize growth stage, the feature fusion approach consistently enhances model monitoring accuracy, aligning with previous research findings [20,26,27]. An exception appears at the (M + R)3 coordinate in the figure, representing the maize grouting stage using fused MS–RGB data: the MLP model shows no improvement in R2 and higher RMSE and NRMSE values compared to the MLP model using only MS data during the same stage. Specifically, both models achieve an R2 of 0.667, but the RMSE values are 3.118 and 2.865, respectively, and the NRMSE values are 18.02% and 17.25%. This discrepancy suggests that simply concatenating MS and RGB features may not be the optimal fusion strategy; more sophisticated feature extraction and selection methods are necessary to fully exploit the advantages of different feature types. Monitoring models for canopy chlorophyll content during the grouting stage generally show lower R2 and higher RMSE values compared to those during the jointing stage. This could be attributed to the grouting stage being a later phase in maize growth, characterized by changes in leaf color and structure, a gradual decrease in chlorophyll content, and alterations in the plant’s spectral reflectance properties. Additionally, denser maize canopies during the grouting stage introduce more shadows and uneven reflections, increasing noise and complexity in images, thereby affecting feature extraction and fusion effectiveness.
The SVR model, utilizing kernel methods, effectively manages high-dimensional data and captures complex nonlinear relationships [78]. With robust regularization mechanisms and minimal hyperparameter tuning requirements, SVR demonstrates resilience to noise and efficiently extracts valuable information in complex multidimensional data environments, thereby ensuring accurate monitoring results. Zhang et al. [79] found in their study that among three multivariate regression algorithms combining spectral and texture features to estimate maize Leaf Area Index (LAI), Support Vector Regression (SVR) significantly outperformed Random Forest (RF) and Multiple Linear Regression (MLR). Luan et al. [80], on the other hand, used high-resolution remote sensing images obtained by UAV equipped with multispectral sensors to estimate Chlorophyll Content Concentration (CCC) in slash pine (Pinus elliottii). Their research showed that the SVR method outperformed Random Forest Regression (RFR) in estimating Leaf Chlorophyll Content (LCC), achieving a high R2 value of 0.692 and an RMSE of 0.168 mg·g−1. In this study’s specific dataset and problem setting, the SVR algorithm performed exceptionally well. However, in other situations, algorithms like BP, MLP, and GBDT might also perform well. SVR involves solving a quadratic programming problem during training, which can be computationally intensive, especially with large datasets or high-dimensional feature spaces. This may result in long training times and significant memory consumption. For smaller, lower-dimensional datasets, models like SVR, BP, and MLP are all suitable choices. However, for large-scale datasets or high-dimensional data, GBDT offers greater advantages.

4.3. Limitations of Experimental and Modeling Approaches

This study employs MS and RGB feature fusion alongside machine learning methods to monitor maize canopy chlorophyll content. Using an unmanned aerial vehicle (UAV) platform equipped with RGB and MS sensors, images are captured and data extracted, encompassing color and texture features from RGB data, vegetation index features from MS data, and fused MS–RGB features. Four machine learning algorithms are applied to develop maize canopy chlorophyll content monitoring models, and their accuracies are compared to identify the optimal approach. The results demonstrate that integrating MS vegetation indices with RGB color and texture features significantly enhances the accuracy of maize canopy chlorophyll content monitoring models. Comparative analysis of BP, MLP, SVR, and GBDT models reveals that SVR, particularly when incorporating MS and RGB feature fusion, achieves superior accuracy in monitoring. SVR and GBDT models perform most effectively when using MS data for canopy chlorophyll content monitoring. Future research could explore the comparison of advanced machine learning and deep learning algorithms to further enhance monitoring model accuracy and stability. Additionally, efforts could focus on refining color and texture feature extraction methods and exploring advanced computer vision and deep learning techniques to bolster the efficacy of these features in canopy chlorophyll content monitoring. Practical applications should consider the performance of monitoring models across diverse regions, seasons, and maize varieties. Extracting and integrating multi-level features from images, or cross-modal fusion of image and text features, can enhance model recognition accuracy and data utilization efficiency [81,82]. Due to experimental constraints, the maize chlorophyll content estimation method proposed in this study applies specifically to field maize at varying densities and growth stages. Future research should encompass variables such as different maize varieties, seasons, and regions in experimental designs to ensure the model’s robustness and applicability under varied conditions. Moreover, integrating the model into agricultural intelligence systems for real-time monitoring and precision agricultural management could provide an effective approach for chlorophyll content monitoring in agriculture.

5. Conclusions

This study developed maize canopy chlorophyll content monitoring models based on three different feature datasets using four machine learning algorithms. The models’ performances using different features were compared. The main conclusions are summarized as follows: (1) The canopy chlorophyll content monitoring model using the SVR algorithm demonstrated superior accuracy and stability compared to the BP, MLP, and GBDT models; (2) Using RGB feature data alone as the model input yielded satisfactory monitoring results, but incorporating MS–RGB fused feature data as input showed even better monitoring performance. In conclusion, the maize canopy chlorophyll content monitoring model that combines MS and RGB features with the SVR algorithm can efficiently, intuitively, and accurately monitor maize canopy chlorophyll content. This research provides valuable technical support for precise field management.

Author Contributions

Conceptualization, K.P., P.S., and W.L. (Wenfeng Li); methodology, K.P. and W.L. (Wenrong Liu); software, K.P. and W.L. (Wenrong Liu); validation, K.P. and P.S.; formal analysis, X.C.; investigation, X.C. and W.L. (Wenrong Liu); resources, W.X. and S.N.; data curation, K.P. and P.S.; writing—original draft, K.P.; writing—review and editing, K.P., W.L. (Wenfeng Li), and T.L.; supervision, W.L. (Wenfeng Li) and T.L.; project administration, W.L. (Wenfeng Li) and T.L.; funding acquisition, W.L. (Wenfeng Li) and T.L. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the National Natural Science Foundation of China (32160420), the Major Science and Technology Special Projects in Yunnan Province (202202AE09002103).

Institutional Review Board Statement

Not applicable.

Data Availability Statement

The datasets in this study are available from the corresponding author upon reasonable request.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Jin, X.; Ma, J.; Wen, Z.; Song, K. Estimation of maize residue cover using Landsat-8 OLI image spectral information and textural features. Remote Sens. 2015, 7, 14559–14575. [Google Scholar] [CrossRef]
  2. Croft, H.; Chen, J.; Wang, R.; Mo, G.; Luo, S.; Luo, X.; He, L.; Gonsamo, A.; Arabian, J.; Zhang, Y.; et al. The global distribution of leaf chlorophyll content. Remote Sens. Environ. 2020, 236, 111479. [Google Scholar] [CrossRef]
  3. Khangura, R.S.; Johal, G.S.; Dilkes, B.P. Variation in maize chlorophyll biosynthesis alters plant architecture. Plant Physiol. 2020, 184, 300–315. [Google Scholar] [CrossRef] [PubMed]
  4. Steele, M.R.; Gitelson, A.A.; Rundquist, D.C. A comparison of two techniques for nondestructive measurement of chlorophyll content in grapevine leaves. Agron. J. 2008, 100, 779–782. [Google Scholar] [CrossRef]
  5. Markwell, J.; Osterman, J.C.; Mitchell, J.L. Calibration of the Minolta SPAD-502 leaf chlorophyll meter. Photosynth. Res. 1995, 46, 467–472. [Google Scholar] [CrossRef] [PubMed]
  6. Xiang, T.-Z.; Xia, G.-S.; Zhang, L. Mini-unmanned aerial vehicle-based remote sensing: Techniques, applications, and prospects. IEEE Geosci. Remote Sens. Mag. 2019, 7, 29–63. [Google Scholar] [CrossRef]
  7. Maes, W.H.; Steppe, K. Perspectives for remote sensing with unmanned aerial vehicles in precision agriculture. Trends Plant Sci. 2019, 24, 152–164. [Google Scholar] [CrossRef] [PubMed]
  8. Xie, Q.; Dash, J.; Huete, A.; Jiang, A.; Yin, G.; Ding, Y.; Peng, D.; Hall, C.C.; Brown, L.; Shi, Y.; et al. Retrieval of crop biophysical parameters from Sentinel-2 remote sensing imagery. Int. J. Appl. Earth Obs. Geoinf. 2019, 80, 187–195. [Google Scholar] [CrossRef]
  9. Gong, Y.; Yang, K.; Lin, Z.; Fang, S.; Wu, X.; Zhu, R.; Peng, Y. Remote estimation of leaf area index (LAI) with unmanned aerial vehicle (UAV) imaging for different rice cultivars throughout the entire growing season. Plant Methods 2021, 17, 88. [Google Scholar] [CrossRef]
  10. Sun, Q.; Gu, X.; Chen, L.; Xu, X.; Wei, Z.; Pan, Y.; Gao, Y. Monitoring maize canopy chlorophyll density under lodging stress based on UAV hyperspectral imagery. Comput. Electron. Agric. 2022, 193, 106671. [Google Scholar] [CrossRef]
  11. Mukherjee, A.; Misra, S.; Raghuwanshi, N.S. A survey of unmanned aerial sensing solutions in precision agriculture. J. Netw. Comput. Appl. 2019, 148, 102461. [Google Scholar] [CrossRef]
  12. Yang, G.; Liu, J.; Zhao, C.; Li, Z.; Huang, Y.; Yu, H.; Xu, B.; Yang, X.; Zhu, D.; Zhang, X.; et al. Unmanned aerial vehicle remote sensing for field-based crop phenotyping: Current status and perspectives. Front. Plant Sci. 2017, 8, 1111. [Google Scholar] [CrossRef] [PubMed]
  13. Meivel, S.; Maheswari, S. Remote sensing analysis of agricultural drone. J. Indian Soc. Remote Sens. 2021, 49, 689–701. [Google Scholar] [CrossRef]
  14. Yin, Q.; Zhang, Y.; Li, W.; Wang, J.; Wang, W.; Ahmad, I.; Zhou, G.; Huo, Z. Estimation of winter wheat SPAD values based on UAV multispectral remote sensing. Remote Sens. 2023, 15, 3595. [Google Scholar] [CrossRef]
  15. Guo, A.; Ye, H.; Huang, W.; Qian, B.; Wang, J.; Lan, Y.; Wang, S. Inversion of maize leaf area index from UAV hyperspectral and multispectral imagery. Comput. Electron. Agric. 2023, 212, 108020. [Google Scholar] [CrossRef]
  16. Qi, H.; Wu, Z.; Zhang, L.; Li, J.; Zhou, J.; Jun, Z.; Zhu, B. Monitoring of peanut leaves chlorophyll content based on drone-based multispectral image feature extraction. Comput. Electron. Agric. 2021, 187, 106292. [Google Scholar] [CrossRef]
  17. Tucker, C.J. Red and photographic infrared linear combinations for monitoring vegetation. Remote Sens. Environ. 1979, 8, 127–150. [Google Scholar] [CrossRef]
  18. Ghasemi, N.; Sahebi, M.R.; Mohammadzadeh, A. Biomass estimation of a temperate deciduous forest using wavelet analysis. IEEE Trans. Geosci. Remote Sens. 2012, 51, 765–776. [Google Scholar] [CrossRef]
  19. Xu, H.; Wang, J.; Qu, Y.; Hu, L.; Tang, Y.; Zhou, Z.; Xu, X.; Zhou, Y. Estimating Leaf Chlorophyll Content of Moso Bamboo Based on Unmanned Aerial Vehicle Visible Images. Remote Sens. 2022, 14, 2864. [Google Scholar] [CrossRef]
  20. Yu, J.; Zhou, C.; Zhao, J. Improvement of Wheat Growth Information by Fusing UAV Visible and Thermal Infrared Images. Agronomy 2022, 12, 2087. [Google Scholar] [CrossRef]
  21. Istiak, M.A.; Syeed, M.M.; Hossain, M.S.; Uddin, M.F.; Hasan, M.; Khan, R.H.; Azad, N.S. Adoption of Unmanned Aerial Vehicle (UAV) imagery in agricultural management: A systematic literature review. Ecol. Inform. 2023, 78, 102305. [Google Scholar] [CrossRef]
  22. Delavarpour, N.; Koparan, C.; Nowatzki, J.; Bajwa, S.; Sun, X. A technical study on UAV characteristics for precision agriculture applications and associated practical challenges. Remote Sens. 2021, 13, 1204. [Google Scholar] [CrossRef]
  23. Liu, J.; Xiang, J.; Jin, Y.; Liu, R.; Yan, J.; Wang, L. Boost precision agriculture with unmanned aerial vehicle remote sensing and edge intelligence: A survey. Remote Sens. 2021, 13, 4387. [Google Scholar] [CrossRef]
  24. Jin, X.; Liu, S.; Baret, F.; Hemerlé, M.; Comar, A. Estimates of plant density of wheat crops at emergence from very low altitude UAV imagery. Remote Sens. Environ. 2017, 198, 105–114. [Google Scholar] [CrossRef]
  25. Zhou, L.; Nie, C.; Su, T.; Xu, X.; Song, Y.; Yin, D.; Liu, S.; Liu, Y.; Bai, Y.; Jia, X.; et al. Evaluating the canopy chlorophyll density of maize at the whole growth stage based on multi-scale UAV image feature fusion and machine learning methods. Agriculture 2023, 13, 895. [Google Scholar] [CrossRef]
  26. Zhai, W.; Li, C.; Cheng, Q.; Ding, F.; Chen, Z. Exploring multisource feature fusion and stacking ensemble learning for accurate estimation of maize chlorophyll content using unmanned aerial vehicle remote sensing. Remote Sens. 2023, 15, 3454. [Google Scholar] [CrossRef]
  27. Liu, T.; Li, R.; Zhong, X.; Jiang, M.; Jin, X.; Zhou, P.; Liu, S.; Sun, C.; Guo, W. Estimates of rice lodging using indices derived from UAV visible and thermal infrared images. Agric. For. Meteorol. 2018, 252, 144–154. [Google Scholar] [CrossRef]
  28. López-Granados, F.; Torres-Sánchez, J.; De Castro, A.-I.; Serrano-Pérez, A.; Mesas-Carrascosa, F.-J.; Peña, J.-M. Object-based early monitoring of a grass weed in a grass crop using high resolution UAV imagery. Agron. Sustain. Dev. 2016, 36, 67. [Google Scholar] [CrossRef]
  29. Narmilan, A.; Gonzalez, F.; Salgadoe, A.S.A.; Kumarasiri, U.W.L.M.; Weerasinghe, H.A.S.; Kulasekara, B.R. Predicting canopy chlorophyll content in sugarcane crops using machine learning algorithms and spectral vegetation indices derived from UAV multispectral imagery. Remote Sens. 2022, 14, 1140. [Google Scholar] [CrossRef]
  30. Li, J.; Feng, Y.; Mou, G.; Xu, G.; Luo, Q.; Luo, K.; Huang, S.; Shi, X.; Guan, Z.; Ye, Y. Construction and application effect of the leaf value model based on SPAD value in rice. Sci. Agric. Sin. 2017, 50, 4714–4724. [Google Scholar]
  31. Candiago, S.; Remondino, F.; De Giglio, M.; Dubbini, M.; Gattelli, M. Evaluating multispectral images and vegetation indices for precision farming applications from UAV images. Remote Sens. 2015, 7, 4026–4047. [Google Scholar] [CrossRef]
  32. Rondeaux, G.; Steven, M.; Baret, F. Optimization of soil-adjusted vegetation indices. Remote Sens. Environ. 1996, 55, 95–107. [Google Scholar] [CrossRef]
  33. Zarco-Tejada, P.J.; Haboudane, D.; Miller, J.R.; Tremblay, N.; Dextraze, L. Leaf Chlorophyll a+ b and canopy LAI estimation in crops using RT models and Hyperspectral Reflectance Imagery. Remote Sens. Environ. 2002, 72, 229–239. [Google Scholar]
  34. Daughtry, C.S.; Walthall, C.; Kim, M.; De Colstoun, E.B.; McMurtrey, J., III. Estimating corn leaf chlorophyll concentration from leaf and canopy reflectance. Remote Sens. Environ. 2000, 74, 229–239. [Google Scholar] [CrossRef]
  35. Broge, N.H.; Mortensen, J.V. Deriving green crop area index and canopy chlorophyll density of winter wheat from spectral reflectance data. Remote Sens. Environ. 2002, 81, 45–57. [Google Scholar] [CrossRef]
  36. Hatfield, J.L.; Prueger, J.H. Value of using different vegetative indices to quantify agricultural crop characteristics at different growth stages under varying management practices. Remote Sens. 2010, 2, 562–578. [Google Scholar] [CrossRef]
  37. Ballester, C.; Brinkhoff, J.; Quayle, W.C.; Hornbuckle, J. Monitoring the effects of water stress in cotton using the green red vegetation index and red edge ratio. Remote Sens. 2019, 11, 873. [Google Scholar] [CrossRef]
  38. Xie, Q.; Dash, J.; Huang, W.; Peng, D.; Qin, Q.; Mortimer, H.; Casa, R.; Pignatti, S.; Laneve, G.; Pascucci, S.; et al. Vegetation indices combining the red and red-edge spectral information for leaf area index retrieval. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2018, 11, 1482–1493. [Google Scholar] [CrossRef]
  39. Gitelson, A.A.; Gritz, Y.; Merzlyak, M.N. Relationships between leaf chlorophyll content and spectral reflectance and algorithms for non-destructive chlorophyll assessment in higher plant leaves. J. Plant Physiol. 2003, 160, 271–282. [Google Scholar] [CrossRef]
  40. Zhen, Z.; Chen, S.; Yin, T.; Gastellu-Etchegorry, J.-P. Globally quantitative analysis of the impact of atmosphere and spectral response function on 2-band enhanced vegetation index (EVI2) over Sentinel-2 and Landsat-8. ISPRS J. Photogramm. Remote Sens. 2023, 205, 206–226. [Google Scholar] [CrossRef]
  41. Zhu, X.; Yang, Q.; Chen, X.; Ding, Z. An approach for joint estimation of grassland leaf area index and leaf chlorophyll content from UAV hyperspectral data. Remote Sens. 2023, 15, 2525. [Google Scholar] [CrossRef]
  42. Zhao, X.; Li, Y.; Chen, Y.; Qiao, X.; Qian, W. Water chlorophyll a estimation using UAV-based multispectral data and machine learning. Drones 2022, 7, 2. [Google Scholar] [CrossRef]
  43. Ban, S.; Liu, W.; Tian, M.; Wang, Q.; Yuan, T.; Chang, Q.; Li, L. Rice leaf chlorophyll content estimation using UAV-based spectral images in different regions. Agronomy 2022, 12, 2832. [Google Scholar] [CrossRef]
  44. Yin, H.; Huang, W.; Li, F.; Yang, H.; Li, Y.; Hu, Y.; Yu, K. Multi-temporal UAV imaging-based mapping of chlorophyll content in potato crop. PFG J. Photogramm. Remote Sens. Geoinf. Sci. 2023, 91, 91–106. [Google Scholar] [CrossRef]
  45. Huang, Y.; Ma, Q.; Wu, X.; Li, H.; Xu, K.; Ji, G.; Qian, F.; Li, L.; Huang, Q.; Long, Y. Estimation of chlorophyll content in Brassica napus based on unmanned aerial vehicle images. Oil Crop Sci. 2022, 7, 149–155. [Google Scholar] [CrossRef]
  46. Gamon, J.; Surfus, J. Assessing leaf pigment content and activity with a reflectometer. New Phytol. 1999, 143, 105–117. [Google Scholar] [CrossRef]
  47. Woebbecke, D.M.; Meyer, G.E.; Von Bargen, K.; Mortensen, D.A. Color indices for weed identification under various soil, residue, and lighting conditions. Trans. ASAE 1995, 38, 259–269. [Google Scholar] [CrossRef]
  48. Kandhway, P. A novel adaptive contextual information-based 2D-histogram for image thresholding. Expert Syst. Appl. 2024, 238, 122026. [Google Scholar] [CrossRef]
  49. Qian, B.; Shao, W.; Gao, R.; Zheng, W.; Hua, D.; Li, H. The extended digital image correlation based on intensity change model. Measurement 2023, 221, 113416. [Google Scholar] [CrossRef]
  50. Dong, Y.; Fu, Z.; Peng, Y.; Zheng, Y.; Yan, H.; Li, X. Precision fertilization method of field crops based on the Wavelet-BP neural network in China. J. Clean. Prod. 2020, 246, 118735. [Google Scholar] [CrossRef]
  51. Zhao, Z.; Feng, G.; Zhang, J. The simplified hybrid model based on BP to predict the reference crop evapotranspiration in Southwest China. PLoS ONE 2022, 17, e0269746. [Google Scholar] [CrossRef] [PubMed]
  52. Park, J.-G.; Jo, S. Approximate Bayesian MLP regularization for regression in the presence of noise. Neural Netw. 2016, 83, 75–85. [Google Scholar] [CrossRef] [PubMed]
  53. Shao, Y.; Liu, J.; Yang, J.; Wu, Z. Spatial–spectral involution mlp network for hyperspectral image classification. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2022, 15, 9293–9310. [Google Scholar] [CrossRef]
  54. Bazrafshan, O.; Ehteram, M.; Latif, S.D.; Huang, Y.F.; Teo, F.Y.; Ahmed, A.N.; El-Shafie, A. Predicting crop yields using a new robust Bayesian averaging model based on multiple hybrid ANFIS and MLP models. Ain Shams Eng. J. 2022, 13, 101724. [Google Scholar] [CrossRef]
  55. Bazrafshan, O.; Ehteram, M.; Moshizi, Z.G.; Jamshidi, S. Evaluation and uncertainty assessment of wheat yield prediction by multilayer perceptron model with bayesian and copula bayesian approaches. Agric. Water Manag. 2022, 273, 107881. [Google Scholar] [CrossRef]
  56. Li, Y.; Sun, H.; Yan, W.; Zhang, X. Multi-output parameter-insensitive kernel twin SVR model. Neural Netw. 2020, 121, 276–293. [Google Scholar] [CrossRef] [PubMed]
  57. Sun, Y.; Ding, S.; Zhang, Z.; Jia, W. An improved grid search algorithm to optimize SVR for prediction. Soft Comput. 2021, 25, 5633–5644. [Google Scholar] [CrossRef]
  58. Verma, B.; Prasad, R.; Srivastava, P.K.; Yadav, S.A.; Singh, P.; Singh, R. Investigation of optimal vegetation indices for retrieval of leaf chlorophyll and leaf area index using enhanced learning algorithms. Comput. Electron. Agric. 2022, 192, 106581. [Google Scholar] [CrossRef]
  59. Wang, J.; Zhou, Q.; Shang, J.; Liu, C.; Zhuang, T.; Ding, J.; Xian, Y.; Zhao, L.; Wang, W.; Zhou, G.; et al. UAV-and machine learning-based retrieval of wheat SPAD values at the overwintering stage for variety screening. Remote Sens. 2021, 13, 5166. [Google Scholar] [CrossRef]
  60. Chungcharoen, T.; Donis-Gonzalez, I.; Phetpan, K.; Udompetaikul, V.; Sirisomboon, P.; Suwalak, R. Machine learning-based prediction of nutritional status in oil palm leaves using proximal multispectral images. Comput. Electron. Agric. 2022, 198, 107019. [Google Scholar] [CrossRef]
  61. Li, L.; Dai, S.; Cao, Z.; Hong, J.; Jiang, S.; Yang, K. Using improved gradient-boosted decision tree algorithm based on Kalman filter (GBDT-KF) in time series prediction. J. Supercomput. 2020, 76, 6887–6900. [Google Scholar] [CrossRef]
  62. Zhang, Z.; Jung, C. GBDT-MO: Gradient-boosted decision trees for multiple outputs. IEEE Trans. Neural Netw. Learn. Syst. 2020, 32, 3156–3167. [Google Scholar] [CrossRef]
  63. Yao, H.; Huang, Y.; Wei, Y.; Zhong, W.; Wen, K. Retrieval of chlorophyll-a concentrations in the coastal waters of the Beibu Gulf in Guangxi using a gradient-boosting decision tree model. Appl. Sci. 2021, 11, 7855. [Google Scholar] [CrossRef]
  64. Yuan, Z.; Ye, Y.; Wei, L.; Yang, X.; Huang, C. Study on the optimization of hyperspectral characteristic bands combined with monitoring and visualization of pepper leaf SPAD value. Sensors 2021, 22, 183. [Google Scholar] [CrossRef]
  65. Elsayed, S.; Elhoweity, M.; Ibrahim, H.H.; Dewir, Y.H.; Migdadi, H.M.; Schmidhalter, U. Thermal imaging and passive reflectance sensing to estimate the water status and grain yield of wheat under different irrigation regimes. Agric. Water Manag. 2017, 189, 98–110. [Google Scholar] [CrossRef]
  66. Fu, Y.; Yang, G.; Song, X.; Li, Z.; Xu, X.; Feng, H.; Zhao, C. Improved estimation of winter wheat aboveground biomass using multiscale textures extracted from UAV-based digital images and hyperspectral feature analysis. Remote Sens. 2021, 13, 581. [Google Scholar] [CrossRef]
  67. Ding, F.; Li, C.; Zhai, W.; Fei, S.; Cheng, Q.; Chen, Z. Estimation of nitrogen content in winter wheat based on multi-source data fusion and machine learning. Agriculture 2022, 12, 1752. [Google Scholar] [CrossRef]
  68. Camps-Valls, G.; Bruzzone, L.; Rojo-Álvarez, J.L.; Melgani, F. Robust support vector regression for biophysical variable estimation from remotely sensed images. IEEE Geosci. Remote Sens. Lett. 2006, 3, 339–343. [Google Scholar] [CrossRef]
  69. Wang, Q.; Chen, X.; Meng, H.; Miao, H.; Jiang, S.; Chang, Q. UAV hyperspectral data combined with machine learning for winter wheat canopy SPAD values estimation. Remote Sens. 2023, 15, 4658. [Google Scholar] [CrossRef]
  70. Chen, Z.; Miao, Y.; Lu, J.; Zhou, L.; Li, Y.; Zhang, H.; Lou, W.; Zhang, Z.; Kusnierek, K.; Liu, C. In-season diagnosis of winter wheat nitrogen status in smallholder farmer fields across a village using unmanned aerial vehicle-based remote sensing. Agronomy 2019, 9, 619. [Google Scholar] [CrossRef]
  71. Qiu, Z.; Ma, F.; Li, Z.; Xu, X.; Du, C. Development of prediction models for estimating key rice growth variables using visible and nir images from unmanned aerial systems. Remote Sens. 2022, 14, 1384. [Google Scholar] [CrossRef]
  72. Li, F.; Piasecki, C.; Millwood, R.J.; Wolfe, B.; Mazarei, M.; Stewart, C.N., Jr. High-throughput switchgrass phenotyping and biomass modeling by UAV. Front. Plant Sci. 2020, 11, 574073. [Google Scholar] [CrossRef] [PubMed]
  73. Yang, H.; Hu, Y.; Zheng, Z.; Qiao, Y.; Zhang, K.; Guo, T.; Chen, J. Estimation of potato chlorophyll content from UAV multispectral images with stacking ensemble algorithm. Agronomy 2022, 12, 2318. [Google Scholar] [CrossRef]
  74. Jiang, J.; Johansen, K.; Stanschewski, C.S.; Wellman, G.; Mousa, M.A.A.; Fiene, G.M.; Asiry, K.A.; Tester, M.; McCabe, M.F. Phenotyping a diversity panel of quinoa using UAV-retrieved leaf area index, SPAD-based chlorophyll and a random forest approach. Precis. Agric. 2022, 23, 961–983. [Google Scholar] [CrossRef]
  75. Qiao, L.; Gao, D.; Zhao, R.; Tang, W.; An, L.; Li, M.; Sun, H. Improving estimation of LAI dynamic by fusion of morphological and vegetation indices based on UAV imagery. Comput. Electron. Agric. 2022, 192, 106603. [Google Scholar] [CrossRef]
  76. Maimaitijiang, M.; Sagan, V.; Sidike, P.; Hartling, S.; Esposito, F.; Fritschi, F.B. Soybean yield prediction from UAV using multimodal data fusion and deep learning. Remote Sens. Environ. 2020, 237, 111599. [Google Scholar] [CrossRef]
  77. Jin, X.; Li, Z.; Feng, H.; Ren, Z.; Li, S. Deep neural network algorithm for estimating maize biomass based on simulated Sentinel 2A vegetation indices and leaf area index. Crop J. 2020, 8, 87–97. [Google Scholar] [CrossRef]
  78. Zhang, J.; Cheng, T.; Guo, W.; Xu, X.; Qiao, H.; Xie, Y.; Ma, X. Leaf area index estimation model for UAV image hyperspectral data based on wavelength variable selection and machine learning methods. Plant Methods 2021, 17, 49. [Google Scholar] [CrossRef]
  79. Zhang, X.; Zhang, K.; Sun, Y.; Zhao, Y.; Zhuang, H.; Ban, W.; Chen, Y.; Fu, E.; Chen, S.; Liu, J.; et al. Combining spectral and texture features of UAS-based multispectral images for maize leaf area index estimation. Remote Sens. 2022, 14, 331. [Google Scholar] [CrossRef]
  80. Luan, Q.; Xu, C.; Tao, X.; Chen, L.; Jiang, J.; Li, Y. Estimating canopy chlorophyll in slash pine using multitemporal vegetation indices from uncrewed aerial vehicles (UAVs). Precis. Agric. 2024, 25, 1086–1105. [Google Scholar] [CrossRef]
  81. Dai, G.; Fan, J.; Dewi, C. ITF-WPI: Image and text based cross-modal feature fusion model for wolfberry pest recognition. Comput. Electron. Agric. 2023, 212, 108129. [Google Scholar] [CrossRef]
  82. Sunil, C.; Jaidhar, C.; Patil, N. Tomato plant disease classification using multilevel feature fusion with adaptive channel spatial and pixel attention mechanism. Expert Syst. Appl. 2023, 228, 120381. [Google Scholar]
Figure 1. Location and layout of the study area. Note: D1–D3 represent planting densities of 5.7, 6.3, and 6.9 plants/m², respectively.
Figure 2. The UAV and its sensors. Note: The blue box marks the MS and RGB sensors, and the green box marks the light intensity sensor.
Figure 3. Statistics of the measured maize SPAD values at the three growth stages and over the entire growth period.
Figure 4. Original RGB image (a) and RGB image after removal of the soil background (b).
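The caption above only states that the soil background was removed; one common way to do this for RGB canopy imagery, offered purely as an illustrative sketch and not as the procedure used in the study, is excess-green (ExG) thresholding with Otsu's method. The file name and thresholding strategy below are assumptions.

```python
# Illustrative sketch of soil-background removal from an RGB canopy image using
# the excess-green (ExG) index and Otsu thresholding. Not necessarily the
# procedure used in the study; file names and threshold choice are assumptions.
import cv2
import numpy as np

img = cv2.imread("maize_plot_rgb.png").astype(np.float32)  # BGR, 8-bit assumed
b, g, r = cv2.split(img / 255.0)

# Normalized chromatic coordinates and excess-green index: ExG = 2g - r - b
total = r + g + b + 1e-6
exg = 2 * (g / total) - (r / total) - (b / total)

# Otsu threshold on the rescaled 8-bit ExG separates vegetation from soil
exg_u8 = cv2.normalize(exg, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
_, mask = cv2.threshold(exg_u8, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

# Keep only canopy pixels; soil pixels are set to zero
canopy_only = cv2.bitwise_and(img.astype(np.uint8), img.astype(np.uint8), mask=mask)
cv2.imwrite("maize_plot_canopy_only.png", canopy_only)
```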
Figure 5. Workflow diagram of this study.
Figure 6. Correlation between vegetation indices and maize canopy chlorophyll content at the three growth stages and over the entire growth period.
Figure 7. Correlation between RGB color and texture features and maize canopy chlorophyll content at the three growth stages and over the entire growth period. Note: NR, normalized red light parameter; NG, normalized green light parameter; NRGD, normalized red–green difference; NB, normalized blue light parameter; NRBD, normalized red–blue difference; R, red light parameter; G, green light parameter; B, blue light parameter; GRD, green–red difference; GRR, green-to-red ratio; ENE, texture energy; COR, texture correlation; STD, texture standard deviation; CON, texture contrast; SMO, texture smoothness; UNI, texture uniformity; THM, texture third moment; ENT, texture entropy.
Figure 8. Scatterplot of chlorophyll content monitoring by BP, MLP, SVR, and GBDT with vegetation index features. Note: The solid red line indicates the fitted line, and the dashed black line indicates the 1:1 line.
Figure 9. Scatterplot of chlorophyll content monitoring by BP, MLP, SVR, and GBDT with color and texture features. Note: The solid red line indicates the fitted line, and the dashed black line indicates the 1:1 line.
Figure 10. Scatterplot of chlorophyll content monitoring by BP, MLP, SVR, and GBDT with RGB and MS fusion features. Note: The solid red line indicates the fitted line, and the dashed black line indicates the 1:1 line.
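For readers who want a concrete starting point for the modeling step behind Figures 8–10, the sketch below fits an SVR to a fused MS–RGB feature table; the file name, column names, split ratio, and hyperparameter grid are illustrative assumptions, not the configuration used in the study.

```python
# Hedged sketch: SVR on fused MS + RGB features to predict canopy SPAD values.
# CSV path, column names, and hyperparameters are illustrative assumptions.
import pandas as pd
from sklearn.model_selection import train_test_split, GridSearchCV
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

df = pd.read_csv("maize_fused_features.csv")   # one row per sample plot
X = df.drop(columns=["SPAD"])                  # vegetation indices + color/texture features
y = df["SPAD"]

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=42)

pipe = make_pipeline(StandardScaler(), SVR(kernel="rbf"))
grid = GridSearchCV(
    pipe,
    {"svr__C": [1, 10, 100], "svr__gamma": ["scale", 0.01, 0.1]},
    cv=5, scoring="r2",
)
grid.fit(X_train, y_train)
print("Test R2:", grid.score(X_test, y_test))
```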
Figure 11. Spatial and temporal distribution of monitored maize canopy chlorophyll content at the jointing, tasseling, and grouting stages.
Figure 12. Comparison of the accuracy of the SVR model with MS data and the SVR model with RGB data for monitoring canopy chlorophyll content at the maize jointing, tasseling, and grouting stages. Note: (a.1–a.3) show the jointing, tasseling, and grouting stages using MS data; (b.1–b.3) show the same stages using RGB data. The solid red line indicates the fitted line, and the dashed black line indicates the 1:1 line.
Figure 13. Distribution of R², RMSE, and NRMSE for maize at the three growth stages and the entire growth period. Note: On the abscissa, R, M, and (M + R) represent RGB data, MS data, and fused RGB and MS data, respectively; the numbers 1, 2, 3, and 4 correspond to the jointing, tasseling, and grouting stages and the entire growth period, respectively. For example, R1 denotes the jointing stage using RGB data, and the other labels follow the same pattern.
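The NRMSE plotted in Figure 13 is presumably the RMSE normalized by the mean of the observed SPAD values and expressed as a percentage; the small helper below makes that assumed convention explicit alongside R² and RMSE.

```python
# Assumed metric definitions for Figure 13: R2, RMSE, and NRMSE (% of observed mean).
import numpy as np
from sklearn.metrics import r2_score, mean_squared_error

def evaluate(y_obs, y_pred):
    y_obs, y_pred = np.asarray(y_obs, float), np.asarray(y_pred, float)
    r2 = r2_score(y_obs, y_pred)
    rmse = np.sqrt(mean_squared_error(y_obs, y_pred))
    nrmse = 100.0 * rmse / y_obs.mean()  # normalization by the observed mean is an assumption
    return r2, rmse, nrmse

print(evaluate([42.1, 50.3, 55.8], [44.0, 49.5, 54.2]))  # toy values only
```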
Table 1. Technical parameters of the sensors.
Designation | Technical parameters
Acquisition efficiency | Approx. 0.63 km²
Image sensor | 6 × 1/2.9-inch CMOS; 2.08 million effective pixels (2.12 million total pixels)
ISO range for color sensors | 200–800
Monochromatic sensor gain | 1–8×
Maximum photo resolution | 1600 × 1300 (4:3.25)
Photo format | JPEG (visible imaging) + TIFF (multispectral imaging)
Table 2. The vegetation indices selected for this study.
Vegetation index | Formula | Reference
Normalized difference vegetation index (NDVI) | (RNIR − RR)/(RNIR + RR) | [17]
Green normalized difference vegetation index (GNDVI) | (RNIR − RG)/(RNIR + RG) | [31]
Optimized soil-adjusted vegetation index (OSAVI) | 1.16 × (RNIR − RR)/(RNIR + RR + 0.16) | [32]
Transformed chlorophyll absorption reflectance index (TCARI) | 3 × [(RREG − RR) − 0.2 × (RREG − RG) × (RREG/RR)] | [33]
Modified chlorophyll absorption ratio index (MCARI) | [(RNIR − RR) − 0.2 × (RREG − RG)] × (RREG/RR) | [34]
Ratio vegetation index (RVI) | RNIR/RR | [35]
Soil-adjusted vegetation index (SAVI) | 1.5 × (RNIR − RR)/(RNIR + RR + 0.5) | [36]
Green–red vegetation index (GRVI) | (RG − RR)/(RG + RR) | [37]
Modified simple ratio index (MSR) | (RNIR/RR − 1)/√(RNIR/RR + 1) | [38]
Green chlorophyll index (GCI) | RNIR/RG − 1 | [39]
Enhanced vegetation index 2 (EVI2) | 2.5 × (RNIR − RR)/(RNIR + 2.4 × RR + 1) | [40]
Note: RR represents the reflectance of the red band, RG the green band, RB the blue band, RNIR the near-infrared band, and RREG the red-edge band.
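To make the index definitions in Table 2 concrete, the sketch below computes a few of them from per-plot mean band reflectances; the variable names and toy values are illustrative assumptions, not the study's processing code.

```python
# Sketch: computing a few of the Table 2 vegetation indices from mean plot
# reflectances of the MS bands. Variable names and values are assumptions.
import numpy as np

# Mean reflectance per plot for each band (toy values, two plots)
r_red   = np.array([0.08, 0.10])
r_green = np.array([0.12, 0.14])
r_nir   = np.array([0.45, 0.50])
r_reg   = np.array([0.30, 0.33])

ndvi  = (r_nir - r_red) / (r_nir + r_red)
gndvi = (r_nir - r_green) / (r_nir + r_green)
osavi = 1.16 * (r_nir - r_red) / (r_nir + r_red + 0.16)
tcari = 3 * ((r_reg - r_red) - 0.2 * (r_reg - r_green) * (r_reg / r_red))
gci   = r_nir / r_green - 1
evi2  = 2.5 * (r_nir - r_red) / (r_nir + 2.4 * r_red + 1)

print(np.round(ndvi, 3), np.round(evi2, 3))
```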
Table 3. Characteristic parameters selected for this study.
Characteristic parameter | Formula | Reference
Red light parameters | R |
Green light parameters | G |
Blue light parameters | B |
Normalized red light parameters | R/(R + B + G) |
Normalized green light parameters | G/(R + B + G) |
Normalized blue light parameters | B/(R + B + G) |
Green–red difference | G − R |
Texture energy | ∑_{i=0}^{L−1} ∑_{j=0}^{L−1} p(i,j)² | [25]
Green-to-red ratio | G/R | [41]
Texture entropy | −∑_{i=0}^{L−1} P(i) log P(i) | [42]
Texture correlation | ∑_{i=0}^{L−1} ∑_{j=0}^{L−1} (i − μ)(j − μ) p(i,j)/σ² | [43]
Texture contrast | ∑_{i=0}^{L−1} ∑_{j=0}^{L−1} (i − j)² p(i,j) | [44]
Texture smoothness | ∑_{i=0}^{L−1} ∑_{j=0}^{L−1} p(i,j)/(1 + (i − j)²) | [45]
Texture standard deviation | √(∑_{i=0}^{L−1} ∑_{j=0}^{L−1} (i − μ)² p(i,j)) | [46]
Normalized red–green difference | (R − G)/(R + G + 0.01) | [47]
Normalized red–blue difference | (R − B)/(R + B + 0.01) | [47]
Texture uniformity | ∑_{i=0}^{L−1} ∑_{j=0}^{L−1} p(i,j,d,θ)² | [48]
Texture third moment | ∑_{i=0}^{L−1} ∑_{j=0}^{L−1} (i − μ)³ p(i,j) | [49]
Note: L denotes the number of gray levels, µ denotes the mean of the image, p(i,j) denotes the joint probability of gray levels i and j in the image, and σ² denotes the variance. p(i,j,d,θ) is the gray-level co-occurrence matrix of adjacent pixel pairs in the image, where i and j are pixel gray levels, d is the distance between pixel pairs, and θ is the direction between pixel pairs.
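The gray-level co-occurrence texture statistics in Table 3 can be computed once the normalized matrix p(i, j) is available. The sketch below is a minimal illustration using scikit-image to build the matrix and NumPy for the statistics; the gray-level count, distance, angle, and toy image are assumptions rather than the settings used in the study.

```python
# Sketch: GLCM-based texture statistics from Table 3, computed with scikit-image
# and NumPy. Gray-level count, distance, and angle are illustrative assumptions.
import numpy as np
from skimage.feature import graycomatrix

levels = 32
img = np.random.default_rng(0).integers(0, levels, (64, 64)).astype(np.uint8)  # toy gray image

glcm = graycomatrix(img, distances=[1], angles=[0], levels=levels,
                    symmetric=True, normed=True)
p = glcm[:, :, 0, 0]                                   # normalized p(i, j)
i, j = np.meshgrid(np.arange(levels), np.arange(levels), indexing="ij")
mu = (i * p).sum()                                     # mean gray level weighted by p(i, j)
var = ((i - mu) ** 2 * p).sum()

energy       = (p ** 2).sum()
entropy      = -(p[p > 0] * np.log(p[p > 0])).sum()
correlation  = ((i - mu) * (j - mu) * p).sum() / var
contrast     = ((i - j) ** 2 * p).sum()
smoothness   = (p / (1 + (i - j) ** 2)).sum()
third_moment = ((i - mu) ** 3 * p).sum()

print(round(energy, 4), round(contrast, 2), round(correlation, 3))
```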
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
