Review

Advancements in Utilizing Image-Analysis Technology for Crop-Yield Estimation

by Feng Yu 1,2,†, Ming Wang 2,†, Jun Xiao 1, Qian Zhang 2,*, Jinmeng Zhang 2, Xin Liu 2, Yang Ping 2 and Rupeng Luan 2

1 School of Artificial Intelligence, University of Chinese Academy of Sciences, Beijing 100049, China
2 Institute of Data Science and Agricultural Economics, Beijing Academy of Agriculture and Forestry Sciences, Beijing 100097, China
* Author to whom correspondence should be addressed.
† These authors contributed equally to this work.
Remote Sens. 2024, 16(6), 1003; https://doi.org/10.3390/rs16061003
Submission received: 4 February 2024 / Revised: 4 March 2024 / Accepted: 7 March 2024 / Published: 12 March 2024
(This article belongs to the Special Issue Advanced Sensing and Image Processing in Agricultural Applications)

Abstract: Yield calculation is an important component of modern precision agriculture and an effective means of improving breeding efficiency and adjusting planting and marketing plans. With the continuous progress of artificial intelligence and sensing technology, yield-calculation schemes based on image-processing technology have many advantages, such as high accuracy, low cost, and non-destructive calculation, and they have been favored by a large number of researchers. This article reviews the research progress of crop-yield calculation based on remote sensing images and visible light images, describes the technical characteristics and applicable objects of the different schemes, and focuses on detailed explanations of data acquisition, independent-variable screening, algorithm selection, and optimization. Common issues are also discussed and summarized. Finally, solutions are proposed for the main problems that have arisen so far, and future research directions are predicted, with the aim of achieving further progress and wider popularization of yield-calculation solutions based on image technology.


1. Introduction

The growth process of crops is complex, and their yields are often influenced by various factors such as crop variety, soil, irrigation, fertilization, light, diseases and pests [1]. Therefore, predicting crop yield is difficult. At the same time, yield estimation is a necessary step in adjusting breeding plans and improving traits. Calculating current and final crop yields accurately and efficiently is of great significance for actual production.
At present, yield calculation mainly relies on traditional methods such as artificial field surveys, meteorological models, and growth models. Among them, the artificial field survey method has a low technical threshold and strong universality, and it is the most frequently used in actual yield calculation; however, it is cumbersome to operate and inefficient. Yield-calculation methods based on meteorological and growth models require a large amount of historical data for support [2] and, with their numerous parameters, are only applicable to specific planting areas or varieties. In recent years, with the continuous development of sensors and artificial intelligence technology [3], yield calculation based on remote sensing or visible light images [4,5] has developed rapidly. Remote sensing can capture multi-band reflectance information from the crop canopy, which accurately reflects the internal growth status and phenotype of crops and is particularly suitable for large-scale grain-crop-yield calculation. Yield-calculation methods based on visible light images are suitable for crops with relatively regular target shapes and textures, such as wheat ears, apples, grapes, and citrus; they work mainly by extracting color, texture, morphology, and other features [6] and achieving object segmentation and counting with machine learning algorithms [7,8]. Alternatively, deep learning algorithms can automatically perform object detection and counting [9,10]; neural network models represented by CNNs (Convolutional Neural Networks) in particular have achieved good calculation performance [11]. The main methods of crop-yield calculation, together with their advantages and disadvantages, are shown in Table 1.
Existing reviews have surveyed crop-yield calculation from the perspective of model algorithms [12]. This article instead analyzes the research and application progress of image-based crop-yield calculation technology and compares and summarizes its technical points, main problems, and development trends. To reflect the latest results, this article mainly focuses on research published after 2020. More than 1200 relevant scientific papers were found by searching the Web of Science database with keywords such as images, crops, and yield calculation; after further screening and exclusion, 142 papers closely related to this topic were selected for in-depth study. The first part of the article analyzes the common research objects in the literature and the main characteristics of crops and yield-calculation methods; the second part introduces the progress of the surveyed research according to the different technical routes; the third part discusses the main algorithms and common problems in current research; finally, the entire article is summarized and future development trends are discussed.

2. Yield-Calculation Indicators for Different Crops

Different types of crops have different external performance traits, and the parameter indicators and technical solutions used for yield calculation also differ. Table 2 lists the main varieties studied in the literature, with a focus on their yield-calculation indicators. Among them, grain crops are mainly multi-seed crops, such as corn, wheat, and rice; their yield is typically determined from the number of ears or panicles per unit area, the grain number per ear, and the unit grain weight. Economic crops are categorized into fruit, tuber, stem, and leaf crops, with yield-calculation indicators based on their physiological structural characteristics.
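As a concrete form of these indicators, grain yield is commonly decomposed into three measurable components. The relation below is the standard agronomic decomposition, given here for illustration rather than drawn from any single study cited in this review:

```latex
\mathrm{Yield}\;(\mathrm{kg\,ha^{-1}})
  = \underbrace{N}_{\text{ears per m}^2}
  \times \underbrace{G}_{\text{grains per ear}}
  \times \underbrace{\tfrac{W_{1000}}{1000}}_{\text{g per grain}}
  \times 10
```

where \(W_{1000}\) is the thousand-grain weight in grams and the factor 10 converts g·m⁻² to kg·ha⁻¹.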

3. The Application of Image Technology in Crop-Yield Calculation

With the development of artificial intelligence technology, image-analysis technology has been widely applied in fields such as crop disease detection, soil analysis, crop management, agricultural-product quality inspection, and farm safety monitoring [16]. By imaging modality, these techniques can be divided into visible light, hyperspectral, infrared, near-infrared, thermal, fluorescence, 3D, laser, CT, and other imaging. Among them, visible light, hyperspectral, infrared, and thermal imaging can accurately reflect the internal growth status and phenotype parameters of crops [17,18], and these methods have been widely used in crop-growth monitoring and yield prediction. At the same time, the growth process of crops is extremely complex: crop yields are related to various factors such as variety, planting environment, and cultivation methods, the external structures of different types of crops differ, and so do the corresponding yield-calculation methods. It is therefore necessary to choose a suitable single- or composite-imaging technique for the specific object and scene. For convenience of description, this article distinguishes two technical solutions, based on remote sensing images and on visible light images. Remote sensing images generally cover multiple imaging categories and often include multi-channel data sources, while visible light images are mainly captured with digital cameras, mobile phones, and similar devices. In Figure 1, the process of calculating crop yield based on image technology is illustrated.

3.1. Yield Calculation by Remote Sensing Image

Remote sensing images mainly record the electromagnetic waves emitted or reflected by objects and can effectively express their internal or phenotypic information. After remote sensing images are processed and features are extracted, much key target information can be captured [19]. Remote sensing technology plays a crucial role in precision agriculture; by continuously and extensively collecting remote sensing images of planting areas, crop-growth status can be monitored and understood well [20]. Agricultural remote sensing data mainly include vegetation indices, crop physical parameters, and environmental data; these have clear big-data characteristics and serve as the main data sources for monitoring crop growth [21]. Feature information closely related to crop growth can therefore be extracted from remote sensing data to predict crop yield [22]. Remote sensing has the advantages of wide coverage, a short revisit cycle, low cost, and the capacity for long-term repeated observation, and it plays an important role in crop-growth monitoring and yield calculation. Remote sensing data obtained from spaceborne, airborne, and unmanned-aerial-vehicle (UAV) platforms have been successfully used for crop-yield prediction [23]. The main advantages of remote sensing-based crop-yield prediction are reliability, time savings, and cost-effectiveness, and it can be applied to yield calculation across growth regions, categories, and cultivation methods.
At present, remote sensing platforms are mainly divided into low-altitude remote sensing based on drones and high-altitude remote sensing based on satellite platforms [24]. Compared with spaceborne and airborne platforms, UAV remote sensing equipped with visible light cameras, thermal infrared cameras, and spectral cameras has many advantages [25,26], for example, high spatiotemporal resolution, flexible acquisition windows, and less atmospheric attenuation, which make it more suitable for crop monitoring and yield prediction at the farm or field scale. Satellite remote sensing is mainly based on various artificial satellites for data collection [27] and has the advantages of good continuity and high stability [28], making it especially suitable for long-term monitoring of crops grown in large fields, such as wheat, rice, and corn. Although satellite remote sensing data are highly valuable due to their large-scale coverage, insufficient spatial resolution remains a prominent issue: many prediction models can only provide accurate crop-yield predictions at large scales and cannot describe detailed changes in crop yield at smaller scales (such as individual fields). In addition, because satellites can be obstructed by clouds and affected by severe weather, timely information over the entire growth cycle of crops cannot always be obtained.
The spectral information obtained by remote sensing is generally divided into multispectral (MSI) and hyperspectral (HSI) information [29]. The multispectral sensors installed on drones cover suitable spectral bands in the visible and near-infrared (VNIR) range, which are highly effective for obtaining various vegetation indices (VIs) sensitive to crop health [13], such as the Normalized Difference Vegetation Index (NDVI) [30], Green Normalized Difference Vegetation Index (GNDVI), and Triangle Vegetation Index (TVI). Drone-based multispectral data combined with machine learning (ML) models [31] have been used effectively to monitor biomass [32] and predict yield for various crops, such as corn, wheat, rice, soybeans, and cotton. Compared with natural light and multispectral imaging, hyperspectral imaging [33,34] has over 100 narrowly spaced bands, which can express plant canopy reflectance more accurately and capture rich crop-structure information, making it more advantageous for analyzing crop row shapes. At the same time, hyperspectral data suffer from redundancy, spectral overlap, and interference [14], and the increase in data volume complicates model construction, so suitable band-selection algorithms are needed for dimensionality reduction, as in the sketch below.
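As a minimal sketch of this dimensionality-reduction step, the example below applies PCA to a hyperspectral cube before regression modeling. The array shapes, the dummy data, and the choice of PCA (rather than, e.g., band ranking or recursive feature elimination) are illustrative assumptions, not a method taken from any specific study cited here:

```python
# Reduce a hyperspectral cube to a few informative components with PCA.
import numpy as np
from sklearn.decomposition import PCA

cube = np.random.rand(120, 160, 224)           # (rows, cols, bands), dummy data
pixels = cube.reshape(-1, cube.shape[-1])      # flatten to (n_pixels, n_bands)

pca = PCA(n_components=10)                     # keep 10 components
scores = pca.fit_transform(pixels)             # (n_pixels, 10)
print(pca.explained_variance_ratio_.cumsum())  # variance retained cumulatively
```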
The key to remote sensing-based crop biomass or yield monitoring is identifying the spectral bands most sensitive to canopy reflectance; for example, NDVI is calculated from the red and near-infrared bands [14], while EVI is obtained from a combination of the red, near-infrared, and blue bands [35]. Vegetation indices, biophysical parameters, growth-environment parameters, and other indicators are extracted from the remote sensing data, and correlations with the crop-dependent variables are established through machine learning or deep learning algorithms [36]. The vegetation indices (VIs) formed from the spectral information of various bands correlate strongly with yield and can reliably provide spatiotemporal information on vegetation coverage, making them the most widely used spectral features at present. Table 3 lists the commonly extracted types of remote sensing feature information [37,38].
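For concreteness, the two indices named above can be computed per pixel from band reflectance arrays (values in [0, 1]). The band arrays here are placeholders; the EVI coefficients are the standard MODIS constants:

```python
import numpy as np

red  = np.random.rand(100, 100)   # placeholder reflectance rasters
nir  = np.random.rand(100, 100)
blue = np.random.rand(100, 100)

ndvi = (nir - red) / (nir + red + 1e-9)                    # avoid divide-by-zero
evi  = 2.5 * (nir - red) / (nir + 6*red - 7.5*blue + 1.0)  # MODIS EVI constants
```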

3.1.1. Yield Calculation by Low-Altitude Remote Sensing Imaging

Different crops have different spectral characteristics, and their absorbed, radiated, and reflected spectra also differ. Low-altitude remote sensing, which is mainly based on the spectral characteristics of plants, can carry multi-channel image sensors [41], collect images in different bands, and analyze the different characteristic parameters of crops. With the continuous advancement of flight control technology, unmanned aerial vehicles equipped with multiple sensors offer high degrees of freedom in flight and flexible control [42,43]. Compared with satellite remote sensing, drone remote sensing has the advantages of a fine-grained observation range, high image resolution, and the ability to capture video. Using low-altitude drone flights to obtain high-resolution remote sensing images has therefore become an ideal choice for agricultural applications. In Figure 2, the low-altitude remote sensing imaging devices and imaging effects are shown.
  • Yield Calculation of Food Crops
Food crops are an indispensable food source in daily life, with corn, rice, and wheat accounting for more than half of the world's food. They are planted over wide areas and have high yields, which makes them a focus of research. In terms of crop-growth monitoring, the analysis methods for food crops are fairly similar: remote sensing images are collected to obtain information on crop optics, structure, thermal characteristics, and so on, and indicator prediction is achieved by fitting biomass or yield models with machine learning or deep learning algorithms [46], as sketched below.
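A schematic of this common VI-to-yield fitting workflow follows. The feature names, data, and the choice of a random forest are illustrative; the studies below swap in SVR, GPR, DNNs, and other learners:

```python
# Fit plot-level yield from vegetation-index features and cross-validate.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score

# One row per plot: e.g., [NDVI, GNDVI, TVI] at two growth stages (dummy data)
X = np.random.rand(200, 6)
y = np.random.rand(200) * 10          # measured plot yield (t/ha), dummy data

model = RandomForestRegressor(n_estimators=300, random_state=0)
r2 = cross_val_score(model, X, y, cv=5, scoring="r2")
print("mean cross-validated R2:", r2.mean())
```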
Corn is a commodity widely cultivated in countries such as the United States, China, Brazil, Argentina, and Mexico, and there is a large body of related research. Yang et al. [47] used a drone platform to collect hyperspectral images of corn at different growth stages, extracted spectral and color image features, and used a CNN model to achieve a prediction accuracy of 75.5% for corn yield, with a Kappa coefficient of 0.69, better than single-channel feature extraction and traditional neural network algorithms. Danilevicz et al. [48] proposed a multimodal corn-yield prediction model: drones were used to obtain multispectral corn images and extract eight vegetation indices, and, combined with field management and variety gene information, a multimodal prediction model based on tab-DNN and sp-DNN was established. The results showed a relative mean square error of 7.6% and an R2 of 0.73, better than modeling with a single data type. Kumar et al. [49] obtained vegetation indices (VIs) of corn at different stages using drones and used multiple machine learning algorithms, such as LR (Linear Regression), KNN (k-Nearest Neighbor), RF (Random Forest), SVR (Support Vector Regression), and DNN (Deep Neural Network), to predict corn yield; the effects of the various variables on the prediction results were evaluated and screened, proving that the combination of VIs and ML models can be used for corn-yield prediction. Yu et al. [50] obtained RGB and multispectral (MS) images of corn using drones, constructed raster data for crop surface models (CSMs), and extracted vegetation indices (VIs). Corn aboveground-biomass (AGB) prediction models based on DCNN and traditional machine learning algorithms were constructed, and the effects of different remote sensing datasets and models were compared; the results showed that data fusion and deep learning algorithms had clear advantages. Marques et al. [39] obtained hyperspectral images of corn using drones and extracted 33 vegetation indices; a prediction model was established using the random forest (RF) algorithm, and the contribution of each vegetation index to yield was evaluated and ranked. The optimal model achieved a correlation coefficient of 0.78 for corn-yield prediction.
Most rice cultivation is concentrated in East, Southeast, and South Asia, and its growth period generally includes milk ripening, wax ripening, full ripening, and withering. Mia et al. [51] studied a multimodal rice-yield prediction model that combined multispectral data collected by drones with weather data, establishing prediction models using multiple CNNs (convolutional neural networks); the optimal model had an RMSPE of 14%, indicating that multimodal modeling predicted better than single-data-source modeling. Bellis et al. [52] used drones to obtain hyperspectral and thermal images of rice and extracted vegetation indices. Two deep models, 3D-CNN and 2D-CNN, were used to establish rice-yield prediction models, resulting in RMSEs of 8.8% and 7.4–8.2%, respectively, indicating the superiority of convolutional autoencoders in yield prediction.
Most countries in the world rely on wheat as a main source of food, making it the world's largest crop in terms of planting area, yield, and distribution. There have been numerous research reports on wheat breeding, planting management, storage, and transportation, and yield prediction is particularly important. Wheat maturity is generally divided into the milk, wax, and complete maturity stages, and the characteristics expressed at each stage differ. The calculation of wheat yield based on remote sensing images is mainly achieved through spectral data. In multispectral research on wheat-yield calculation, Bian et al. [53] used drones to obtain multi-stage multispectral data of wheat and extracted multiple vegetation indices; machine learning models such as Gaussian process regression (GPR), support vector regression (SVR), and random forest regression (RFR) were used to establish a wheat-yield prediction model based on vegetation indices, and the GPR model R2 reached a maximum of 0.88. Han et al. [54] used drones to capture multispectral images of wheat and extracted feature indices; using the GOA-XGB model based on the grasshopper optimization algorithm, the optimal prediction accuracy (R2) for the aboveground biomass (AGB) of wheat was 0.855. Zhou et al. [55] studied the correlation between multispectral reflectance and wheat yield and protein content and evaluated the performance of machine learning models such as random forest (RF), artificial neural network (ANN), and support vector regression (SVR) against linear models based on vegetation indices; the results demonstrated the modeling advantages of machine learning algorithms. Sharma et al. [56] used a drone equipped with multiple sensors to collect multispectral images of oats at different growth stages in three experimental fields and extracted multiple vegetation indices (VIs); the performance of four machine learning models, namely partial least squares (PLS), support vector machine (SVM), artificial neural network (ANN), and random forest (RF), was evaluated. Across the collected images, the Pearson coefficient r was between 0.2 and 0.65, and the reasons for the unsatisfactory prediction performance were analyzed. Similar studies include those by Wang et al. [57] and Roy et al. [58], which combined spectral indices with machine learning to calculate yield from multispectral data collected during the growth period. In terms of hyperspectral imaging, Fu et al. [59] used drones to obtain hyperspectral images of wheat, used multiscale Gabor and GLCM methods to extract canopy texture features, combined them with vegetation indices and other spectral features, and used filtered parameter variables and an LSSVM algorithm to obtain the highest accuracy in wheat biomass calculation, with an R2 of 0.87. Tanabe et al. [60] applied CNN networks to wheat-yield prediction based on UAV hyperspectral data, achieving better performance than traditional machine learning algorithms. Li et al. [61] used drones to obtain hyperspectral images of winter wheat during the flowering and filling stages, extracted a large number of spectral indices, and used three feature-filtering algorithms to reduce dimensionality; the highest prediction result was obtained using an integrated model based on SVM, GP, LRR, and RF, with an R2 of 0.78, superior to single machine learning algorithms and to independent variables without feature optimization. In terms of data-fusion yield calculation, the main approach is to use multi-sensor, multi-channel data to establish a wheat-yield calculation model, which has achieved more than single-dimensional data; for example, Fei et al. [38], Li et al. [62], Sharif et al. [63], and Ma et al. [64] studied wheat-yield calculation models based on the fusion of multi-channel data such as RGB images, multispectral and thermal infrared images, and meteorological data. The results obtained through multiple machine learning algorithms were superior to single-channel modeling, with better calculation accuracy and robustness.
  • Yield Calculation of Economic Crops
Economic crops play an important role in the food industry and as industrial raw materials; examples include soybeans, potatoes, cotton, and grapes. Soybeans occupy an important position in global crop trade, with Brazil, the United States, and Argentina contributing over 90% of global soybean yield. The combination of spectral indices and machine learning algorithms is likewise a common research topic in yield prediction. Maimaitijiang et al. [65] used a drone equipped with multiple sensors to collect RGB, multispectral, and thermal images of soybeans and extracted multimodal features such as canopy spectra, growth structures, thermal information, and textures. Multiple algorithms, such as PLSR, RFR, SVR, and DNN, were used to predict soybean yield, verifying that multimodal information was more accurate than single-channel data sources; the highest R2, 0.72, was reached using DNN-F2, with an RMSE of 15.9%. Zhou et al. [66] extracted seven feature indicators from drone-acquired hyperspectral images, combined them with maturity and drought-resistance classification factors, and built a hybrid CNN model to predict soybean yield; the model's prediction was 78% of the actual yield. Teodoro et al. [37] used drones to collect multi-temporal spectral data of soybeans, extracted multiple spectral indices, and used multi-layer deep regression networks to predict the maturity stage (DM), plant height (PH), and grain yield (GY) of soybeans; the modeling effect was superior to traditional machine learning algorithms, providing a good solution for soybean-yield prediction. Yoosefzadeh-Najafabadi et al. [67] extracted hyperspectral vegetation indices (HVIs) of soybeans for predicting yield and fresh biomass (FBIO), established a prediction model using DNN-SPEA2, studied the effects of different band and index selections on the prediction results, and compared the model with traditional machine learning algorithms, achieving good results. Yoosefzadeh-Najafabadi et al. [68] obtained hyperspectral reflectance data of soybeans, used recursive feature elimination (RFE) to reduce data dimensionality and screen variables, evaluated the MLP, SVM, and RF machine learning algorithms, and found the optimal combination of index independent variables and models. Shi et al. [69] studied the feasibility of estimating the AGB and LAI of mung beans and red beans using multispectral data collected by drones [70], compared and analyzed the sensitive bands and spectral parameters affecting AGB and LAI, and evaluated machine learning algorithms such as LR, SMLR, SVM, PLSR, and BPNN, finally achieving the best fit with the SVM model; the predicted R2 for the AGB of red beans and mung beans reached 0.811 and 0.751, respectively. Ji et al. [40] obtained RGB images of fava beans using a drone and extracted vegetation-index, structural, and texture information to predict aboveground biomass (AGB) and yield (BY); the impacts of different growth stages, variable combinations, and learning models on prediction performance were evaluated, and, finally, an ensemble learning model predicted fava bean yield with an R2 of 0.854.
Yield prediction based on drone remote sensing is also common for crops such as potatoes, cotton, sugarcane, tea, and alfalfa. Different types of crops have different spectral reflectance characteristics and sensitive feature indices, and gradual screening according to actual contribution is necessary to establish a high-precision, robust prediction model. Liu et al. [71] studied aboveground-biomass (AGB) prediction of potatoes based on UAV multispectral images. Multiple kinds of variable information, such as COS (canopy original spectra), FDS (first-derivative spectra), VIs (vegetation indices), and CH (crop height), were extracted from the spectral images; the analysis focused on the correlations among different channel-variable characteristics, growth stages, regression models, and AGB, and the independent variables and combinations with the highest correlation were selected. Sun et al. [72] used drones to collect hyperspectral images of potatoes to predict tuber yield and tuber setting rate; ridge regression predicted tuber yield with an R2 of 0.63, and partial least squares predicted setting rate with an R2 of 0.69. Xu et al. [45] studied cotton-yield calculation based on time-series UAV remote sensing data: a U-Net network was used for semantic segmentation, multiple features were extracted, and a nonlinear prediction model was established using a BP neural network; through variable screening and evaluation, an optimal yield-prediction model with an average R2 of 0.854 was obtained. Poudyal et al. [73] and de Oliveira et al. [74] used hyperspectral and multispectral methods, respectively, to calculate sugarcane yield. He et al. [44] used drones to collect hyperspectral images of the spring-tea canopy to predict fresh yield, extracted several common chlorophyll and leaf-area spectral indices, studied the differences between single and multiple spectral indices, and evaluated the prediction accuracy of LMSV (linear model with a single index variable), PLMSVs (piecewise linear models with the same index variables), and PLMCVs (piecewise models with combined index variables); good prediction results were achieved, demonstrating the potential of hyperspectral remote sensing for estimating spring-tea fresh yield. Feng et al. [75] used drone-collected hyperspectral images to predict alfalfa yield: a large number of spectral indices were first extracted from the images and reduced in dimensionality, three machine learning algorithms (random forest (RF), support vector regression (SVR), and KNN) were then trained, and the best prediction performance was finally achieved by an ensemble of the models, with an R2 of 0.854. Wengert et al. [76] collected multispectral images of grasslands in different seasons using drones, analyzed characteristic bands and vegetation indices, and evaluated the performance of four machine learning algorithms, finding that the model based on the CBR algorithm had the best prediction performance, with high accuracy and robustness. Pranga et al. [77] fused drone-derived structural and spectral data to predict ryegrass yield, extracting canopy height and vegetation-index information from the sensors; the performance of the PLSR, RF, and SVM machine learning algorithms for predicting dry matter yield (DMY) was evaluated, showing that prediction based on multi-channel fusion was more accurate and that the RF algorithm performed best, with a maximum error of no more than 308 kg ha−1. Li et al. [78] used drones to collect multispectral images of red clover and extracted six spectral indices to predict its dry matter yield; the predictive performance of three machine learning algorithms was evaluated, and the model established with artificial neural networks performed best, with an R2 of 0.90 and an NRMSE of 0.12.
For economic crops, fruit counting is often used to calculate yield, mainly via visible light image segmentation or detection. However, some scholars have also evaluated overall yield through remote sensing for varieties such as tomatoes, grapes, apples, and almonds. Tatsumi et al. [79] used high-resolution RGB and multispectral images of tomatoes, collected by drones, to measure biomass and yield. A total of 756 first-order and second-order features were extracted from the images, multiple variable-screening algorithms were used to identify the independent variables contributing most to the SM, FW, and FN of tomato, and the impacts of three machine learning algorithms on model performance were evaluated; the best biomass-indicator calculation models were established through multiple experiments. Ballesteros et al. [80] used drones to obtain hyperspectral images of vineyards and extracted vegetation indices (VIs) and vegetation-coverage information, which were fitted to yield through artificial neural networks; the impacts of different variables on yield-prediction accuracy were evaluated, providing a good reference for remote sensing-based grape-yield prediction. Chen et al. [81] studied apple-tree yield prediction based on drone multispectral images and sensors, evaluated the contributions of spectral and morphological features to yield, and established an ensemble learning model combining the SVR and KNN machine learning algorithms; through feature prioritization and model optimization, the R2 of the optimal model reached 0.813 on the validation set and 0.758 on the test set, providing a good case for apple-yield prediction based on remote sensing images. Tang et al. [82] collected multispectral aerial images of almonds and established an improved CNN network for almond-yield prediction, achieving good accuracy; the results were significantly better than those of machine learning algorithms based on vegetation indices, demonstrating the advantage of deep learning algorithms in automatically extracting features. In Table 4, the research progress on crop-yield calculation based on low-altitude remote sensing is shown.
With the continuous advancement of flight control technology, the cost of obtaining high-resolution remote sensing data keeps falling, and significant progress has been made in crop-yield monitoring from drone platforms, in which machine learning (ML) algorithms have played an irreplaceable role. However, many problems remain, such as the inability to obtain stable and continuous image data; how to filter feature indices is likewise an important issue affecting prediction accuracy and requires further consideration in machine learning pipelines.

3.1.2. Yield Calculation by High-Altitude Satellite Remote Sensing Imaging

Compared with drone platforms, satellite platforms have advantages in coverage and stability: they can continuously monitor crop growth in different spectral bands, extract multiple vegetation indices for yield prediction, and collect data across growth periods and over large areas more efficiently [35,83]. However, their resolution is lower for small plots, and they are more affected by weather changes and more expensive. Remote sensing satellites can provide free, continuous data-collection tools for constructing crop-growth models. The most common representative satellites include the LANDSAT series operated jointly by the National Aeronautics and Space Administration (NASA) and the United States Geological Survey (USGS), the SPOT series developed and operated by the French National Centre for Space Studies (CNES), the NOAA series operated by the National Oceanic and Atmospheric Administration (NOAA) of the United States, the Sentinel series developed by the European Space Agency (ESA), and the ZY-3 and GF-2 systems developed and operated by the China National Space Administration. The entire process of satellite-based crop-yield prediction and data processing includes data acquisition; preprocessing; image correction; feature extraction, classification, and interpretation; accuracy evaluation; post-processing; and analysis. Each step has its specific methods and techniques, and the order and implementation of these steps may vary depending on the calculation indicators and data characteristics. In Figure 3, the remote sensing satellite imaging process overview and effect diagram are shown.
Bebie et al. [83] used the Sentinel-2 satellite to obtain remote sensing image data of wheat and extracted reflectance data from multiple growth stages as input parameters; an evaluation model was established using random forest (RF), KNN, and BR, and the highest R2, 0.91, was reached when images from all growth stages were used. Kamir et al. [84] studied wheat-yield prediction based on satellite images and climate time-series data and analyzed the effects of different vegetation indices, machine learning algorithms, growth stages, and other factors on model accuracy; the best prediction was finally obtained with a support vector machine, with an R2 of 0.77, better than other single machine learning algorithms or ensemble models. Liu et al. [85] used satellite remote sensing, climate, and crop-yield data to predict wheat yield; multiple linear regression models, machine learning algorithms, and deep learning methods were compared, and the impacts of different satellite variables, vegetation indices, and other factors on the prediction results were evaluated. Son et al. [35] studied field-level rice-yield prediction based on Sentinel-2 satellite images, evaluating three machine learning algorithms (RF, SVM, and ANN) and predicting the yield of four consecutive growing seasons with satisfactory results. Abreu et al. [86] estimated the yield of coffee trees from satellite multispectral images using machine learning algorithms; the correlations between different band and vegetation-index selections and yield were analyzed, the prediction accuracy of various model algorithms was evaluated, and the neural network algorithm (NN) finally achieved the highest accuracy. Bhumiphan et al. [87] obtained remote sensing image data for rubber trees from the Sentinel-2 satellite over one entire year; by extracting six vegetation indices, including GSAVI, MSR, NBR, NDVI, NR, and RVI, the impacts of single and multiple indices on prediction accuracy were evaluated, and the optimal yield-prediction model was established through multiple linear regression, with an R2 of 0.80, providing a reference for yield prediction based on high-altitude remote sensing images. Filippi et al. [88] studied cotton-yield prediction based on temporal and spatial remote sensing datasets, including satellite images, terrain data, soil, and weather; a random forest prediction model was constructed to evaluate the effects of different resolutions, time spans, coverage areas, and other factors on the prediction results. Desloires et al. [15] studied a corn-yield prediction model based on Sentinel-2, temperature, and other data, evaluated the effects of time, spectral information, machine learning algorithms, and other factors on yield, and finally obtained the best result by integrating multiple machine learning algorithms, with an average error of 15.2%. Liu et al. [89] proposed a hybrid neural network algorithm for predicting grain yield: based on remote sensing image data from MODIS satellites combined with channel data such as vegetation indices and temperature, a convolutional neural network incorporating a CBAM (Convolutional Block Attention Module) attention mechanism was used to enhance the extraction of vegetation-index and temperature features, and an LSTM (Long Short-Term Memory) network was then used to analyze the time-series data; the final model reached an R2 of up to 0.989 in application. In Table 5, the research progress on crop-yield calculation based on high-altitude satellite remote sensing is shown.
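A minimal sketch of this CNN-plus-LSTM idea follows: a small CNN summarizes each period's multi-band tile, and an LSTM models the resulting temporal sequence. All layer sizes and input shapes are assumptions for illustration, not the architecture of Liu et al. [89]:

```python
import torch
import torch.nn as nn

class YieldNet(nn.Module):
    def __init__(self, n_channels=4, hidden=64):
        super().__init__()
        self.cnn = nn.Sequential(                        # per-time-step encoder
            nn.Conv2d(n_channels, 16, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten())       # -> (batch, 16)
        self.lstm = nn.LSTM(16, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)                 # scalar yield

    def forward(self, x):                                # x: (B, T, C, H, W)
        b, t = x.shape[:2]
        feats = self.cnn(x.flatten(0, 1)).view(b, t, -1) # encode each time step
        out, _ = self.lstm(feats)                        # model the sequence
        return self.head(out[:, -1])                     # use the last time step

pred = YieldNet()(torch.rand(8, 10, 4, 32, 32))          # 10 periods, 4-band tiles
```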
Over the years, with continuous technological iteration, satellite-based remote sensing data acquisition has become more convenient, and so has calculating various vegetation indices (VIs). However, issues such as limited spatial resolution and cloud cover remain, and optical remote sensing satellites are heavily affected by weather conditions. They can therefore be combined with microwave remote sensing, which receives longer-wavelength electromagnetic information from the surface; these longer wavelengths effectively penetrate clouds and mist and are not easily affected by meteorological conditions or sunlight levels, enabling microwave remote sensing to monitor the surface continuously. In addition, microwave remote sensing can penetrate vegetation and detect subsurface targets, and the resulting microwave images have a clear three-dimensional quality, providing information beyond visible light photography and infrared remote sensing. Combining the two approaches therefore offers strong synergistic potential.

3.2. Yield Calculation by Visible Light Image

Visible light images record the absorption and reflection of white light by crops, and high-resolution digital images contain rich color, structural, and morphological information [90] that can be used to analyze crop growth and predict yield once feature information is fully extracted. Extracting color features from digital images is the most effective and widely used approach for monitoring crop-growth characteristics: information such as crop coverage, leaf-area index, biomass, plant nutrition, and pests and diseases is reflected in color, and commonly used color indices include VARI, ExR, ExG, GLI, ExGR, and NDI. Texture describes the grayscale of image pixels; compared with color features, texture better balances the overall and detailed aspects of an image. Texture analysis therefore plays a very important role in image analysis and usually includes two aspects: the extraction of detailed texture features, such as contrast (CON), correlation (COR), and entropy (ENT), and the classification of the image based on the extracted features. Morphological features are often associated with crop features and used together with them to describe image content. However, because crop growth is complex, the information expressed by a single channel's features is often incomplete, and features such as color, texture, and morphology need to be studied together to monitor crop-growth characteristics more accurately. With the continuous maturity of digital imaging technology and the widespread use of high-resolution cameras, there is an increasing amount of research on evaluating crop growth by analyzing crop images. By processing method, there are two main types: traditional image processing based on segmentation, and deep learning-based processing.
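As a concrete example of one of the color indices listed above, the excess-green index (ExG) can be computed from the chromatic coordinates of an RGB image. The image path is hypothetical; OpenCV loads images in BGR channel order, hence the split order below:

```python
import cv2
import numpy as np

img = cv2.imread("canopy.jpg").astype(np.float64)  # hypothetical file path
b, g, r = cv2.split(img)
total = b + g + r + 1e-9                           # avoid divide-by-zero
r_n, g_n, b_n = r / total, g / total, b / total    # chromatic coordinates
exg = 2 * g_n - r_n - b_n                          # ExG = 2g - r - b
```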

3.2.1. Yield Calculation by Traditional Image Processing

Traditional image processing is mainly achieved through information extraction and segmentation. Image segmentation is the core of plant-phenotype image processing; its main purpose is to extract the parts of interest and remove the background and other irrelevant noise from the image. When performing segmentation, the object of interest is defined by the internal similarity of pixels in features such as texture and color. The simplest approach is threshold segmentation, which groups pixels on the grayscale by intensity level to separate the background from the target [91], as in the sketch below. Feature extraction is one of the key technologies for computer vision-based target recognition and classification [92]; its main purpose is to provide inputs for machine learning, with the features extracted from the image processed into "feature vectors", including edges, pixel intensity, geometric shapes, combinations of pixels in different color spaces, and so on. Feature extraction is a challenging task that, in traditional image processing, often requires manual screening and testing with multiple feature-extraction algorithms until satisfactory feature information is obtained.
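A minimal example of the thresholding step: Otsu's method picks the grayscale cut automatically and splits foreground from background. The file path is hypothetical:

```python
import cv2

gray = cv2.imread("plot.jpg", cv2.IMREAD_GRAYSCALE)
_, mask = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
n_fg = int((mask == 255).sum())   # foreground pixel count, e.g., canopy cover
```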
The traditional image-processing route is relatively complex and requires manual feature selection, followed by building calculation models with classification or regression algorithms. Massah et al. [93] used a self-developed robot platform to collect images and extracted features such as the grayscale histogram, histogram of oriented gradients, shape context, and local binary patterns to count kiwifruit. The image was segmented with RGB-threshold segmentation, and a support vector machine achieved quantity prediction with an R2 of 0.96, superior to the FCN-8S, ZFNet, AlexNet, GoogLeNet, and ResNet deep networks. Zhang et al. [94] extracted color and texture features from high-resolution RGB images obtained by a drone to predict the LAI of kiwifruit orchards; two regression algorithms (SWR and RFR) were used for modeling and comparative analysis, and the highest estimated R2 for LAI was 0.972, with an RMSE of 0.035, providing a good reference for kiwifruit growth monitoring and yield calculation. Guo et al. [95] developed a new vegetation index, MRBVI, for predicting the chlorophyll content and yield of corn; their experiments showed determination coefficients (R2) of 0.462 for estimating chlorophyll content and 0.570 for predicting yield, better than the seven other commonly used VI methods. Zhang et al. [96] used consumer-grade drones to capture RGB images of corn, extracted color features using ExG, and established corn-yield prediction models using regression algorithms; the yield-prediction models for three samples were significant, with MAPE ranging from 6.2% to 15.1% and R2 not exceeding 0.5, and the reasons were analyzed. In addition, this research evaluated the impact of nitrogen application on crop growth through ExG characteristics. Saddik et al. [97] developed a low-complexity apple-counting algorithm based on apple color and geometric shape: the RGB images were subjected to HSV and Hough transformations, achieving a maximum accuracy of 97.22% on the test dataset and counting apples without relying on large amounts of data and computing power. Liu et al. [98] estimated the plant height and aboveground biomass (AGB) of Toona sinensis seedlings from RGB and depth images of the canopy: first, a U-Net model was used to segment the foreground and extract multiple feature indicators; then, SLR was used to predict plant height; the performance of the ML, RF, and MLP machine learning algorithms in predicting AGB was compared, and the key factors for predicting AGB were analyzed; finally, the selected model's predicted R2 for fresh weight reached 0.83. Rodriguez-Sanchez et al. [99] obtained RGB images of cotton through aerial photography and trained an SVM supervised learning algorithm; the accuracy of cotton-pixel recognition reached 89%, and, after further morphological processing, the R2 for fitting the number of cotton bolls reached 0.93. This machine learning method reduced the performance requirements for model deployment. In Table 6, the research progress on crop-yield calculation based on traditional image processing is shown.
The RGB and HSI color models are the most common models in image processing. The RGB model mixes natural colors from red, green, and blue primaries in different proportions and is an additive color-light model. In the HSI model, H represents hue, S represents saturation, and I represents intensity: saturation describes the purity of a color, and intensity is determined by the object's reflection coefficient. Compared with the RGB model, the HSI model corresponds more closely to human visual perception and can be used more conveniently in image processing and computer vision algorithms. There is a conversion relationship between the RGB and HSI models, so they can be interchanged easily, providing more options for image processing; a conversion sketch follows. The combination of color indices, texture features [91], and morphological information with machine learning algorithms enhances the predictive performance of a model and can meet the requirements of biomass or yield calculation in general scenarios. In addition, feature fusion offers better robustness and accuracy than prediction models built on single-dimensional features [100].
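The sketch below implements the RGB-to-HSI conversion using the standard textbook formulas (Gonzalez and Woods); inputs are assumed to be RGB values scaled to [0, 1]:

```python
import numpy as np

def rgb_to_hsi(r, g, b, eps=1e-9):
    i = (r + g + b) / 3.0                                 # intensity
    s = 1.0 - 3.0 * np.minimum(np.minimum(r, g), b) / (r + g + b + eps)
    num = 0.5 * ((r - g) + (r - b))
    den = np.sqrt((r - g) ** 2 + (r - b) * (g - b)) + eps
    theta = np.arccos(np.clip(num / den, -1.0, 1.0))      # angle in radians
    h = np.where(b <= g, theta, 2 * np.pi - theta)        # hue in [0, 2*pi)
    return h, s, i
```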

3.2.2. Yield Calculation by Deep Learning Imaging

Deep learning algorithms mainly include convolutional neural networks, recurrent neural networks, long short-term memory networks, generative adversarial networks, autoencoders, and reinforcement learning. These algorithms have achieved significant results in fields such as computer vision, natural language processing, and generative modeling. Convolutional neural network models [101,102] have been widely applied in crop-phenotype-parameter acquisition and biomass calculation [103]. Crop-yield calculation based on deep learning is generally achieved through object detection or segmentation [104], counting the number of fruits in a single image. For densely planted or heavily occluded crops, only a portion of the total yield is visible in the image, so regression statistics are often needed to extrapolate. In addition, with the continuous progress of digital imaging technology, ever higher image resolutions can be obtained, and there is increasing research on refined detection of individual plants and grains [105]. Object detection locates and identifies the seeds or fruits of interest in an image or video, while object segmentation accurately separates the target from the background. Object-detection algorithms are mainly divided into region-based and single-stage detectors [103]. Region-based algorithms mainly include R-CNN, Fast R-CNN, and Faster R-CNN [106]; this type of algorithm first generates candidate regions, extracts features from each candidate region, and then performs target classification and bounding-box regression on the extracted features through a classifier. By introducing candidate-region-generation modules and deep learning-based feature extraction, the accuracy and efficiency of object detection are greatly improved. Single-stage algorithms mainly include YOLO and SSD, which perform detection directly on feature maps by dividing anchor boxes and bounding boxes; these algorithms have faster detection speeds but slightly lower accuracy. Deep learning-based object segmentation mainly includes semantic segmentation and instance segmentation. Semantic segmentation algorithms mainly include FCN, SegNet, and DeepLab; these introduce convolutional neural networks and dilated convolution to achieve pixel-level classification of images, improving segmentation accuracy and efficiency. Instance segmentation builds on target segmentation by further segmenting and extracting each target instance to achieve fine recognition of each target, mainly using Mask R-CNN and PANet. Feature extraction based on deep learning is completed automatically by machines, greatly improving the accuracy of feature extraction and simplifying operation. Research into deep learning-based yield calculation mainly focuses on target segmentation, detection, and counting; a detection-based counting sketch follows.
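The sketch below illustrates detection-based fruit counting with an off-the-shelf Faster R-CNN from torchvision; in the studies reviewed below, the detector is fine-tuned on crop-specific images, and per-image counts are then related to whole-plant or whole-plot yield. The input tensor and confidence cut are placeholders:

```python
import torch
from torchvision.models.detection import fasterrcnn_resnet50_fpn

model = fasterrcnn_resnet50_fpn(weights="DEFAULT").eval()  # pretrained detector
img = torch.rand(3, 480, 640)                 # placeholder RGB tensor in [0, 1]
with torch.no_grad():
    out = model([img])[0]                     # dict of boxes, labels, scores
count = int((out["scores"] > 0.5).sum())      # detections kept above a cut
```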
  • Yield Calculation of Food Crops
For food crops, the main goal is to achieve detection and counting of grain tassels, which is common in research on corn, wheat, and rice. Obtaining high-resolution images of grain tassels in specific scenarios can also be useful for detecting single grains. Mota-Delfin et al. [107] used unmanned aerial vehicles to capture RGB images of corn-growth stages, and used a series of models such as YOLOv4, YOLOv4 tiny, YOLOv4 tiny 3l, and YOLOv5 to detect and count corn plants. After comparison, the best prediction results were achieved by YOLOv5s, with an average accuracy of 73.1%. Liu et al. [108] used the Faster R-CNN network to detect and count corn ears, and they compared the performance of ResNet and VGG as feature-extraction networks. The highest recognition accuracy for corn-growth images captured by drones and mobile phones reached 94.99%. Jia et al. [109] combined deep learning and image morphology processing methods to achieve the detection and counting of corn ears. First, a deep learning network based on VGG16 was used to complete the recognition of the entire corn plant, and then multiple features of image color, texture, and morphology in the known area were extracted to achieve recognition of corn ears. Finally, the recognition accuracy of the plant reached 99.47%, and the average accuracy of corn ears reached 97.02%.
In terms of wheat-yield calculation, Maji et al. [110] developed a two-stage deep learning framework called SlypNet for wheat-ear detection, which combined the Mask R-CNN and U-Net algorithms to automatically extract rich morphological features from images; it could effectively overcome interference such as leaf overlap and occlusion in spike detection, and the validation accuracy of the spikelet-detection model reached 99%. Nevavuori et al. [111] used unmanned aerial vehicles to obtain RGB images and weather data across wheat-growth stages, studied the feasibility of spatiotemporal-sequence datasets for yield prediction, and compared the predictive performance of three model architectures (CNN-LSTM, ConvLSTM, and 3D-CNN), finding more accurate predictions than with a single temporal phase. Qiu et al. [112] studied an automatic wheat-spike detection and counting method based on unsupervised image learning: color images of four wheat strains were collected, unsupervised spike labeling was achieved with the watershed algorithm, and a prediction model was established using a DCNN and transfer learning, obtaining a maximum R2 of 0.84 and greatly improving the efficiency of wheat-spike recognition. Zhaosheng et al. [113] applied an improved YOLOX-m object-detection algorithm to detect wheat ears and evaluated the prediction accuracy on datasets with different growth stages, planting densities, and drone-flight heights; the highest accuracy of the improved model reached 88.03%, an increase of 2.54% over the original. Zang et al. [114] integrated an ECA attention module into the backbone of YOLOv5s for rapid wheat-spike detection, enhancing the extraction of detailed features; the accuracy of wheat-spike counting reached 71.61%, which was 4.95% higher than standard YOLOv5s and could effectively handle mutual occlusion and interference between wheat plants. Zhao et al. [115] studied an improved YOLOv4 network for detecting and counting wheat ears, mainly by adding a spatial pyramid pooling (SPP) module to enhance feature fusion at different scales; the average accuracies on two datasets were 95.16% and 97.96%, respectively, and the highest R2 of the fit to the true values was 0.973.
Lin et al. [116] used drones to obtain RGB images of sorghum canopy and labeled them with masks. A CNN segmentation model was established by using U-Net, and a prediction mask was used to detect and count sorghum, with the final accuracy reaching 95.5%. Guo et al. [117] combined image segmentation and deep learning to automatically calculate the rice seed setting rate (RSSR) based on RGB images captured by mobile phones. During the experiment, multiple convolutional neural network algorithms were compared, and the best-performing YOLOv4 algorithm was ultimately selected to calculate RSSR. The detection accuracies for full grain, empty grain, and RSSR of rice were 97.69%, 93.20%, and 99.43%, respectively. Han et al. [118] proposed an image-driven-data-assimilation framework for rice-yield calculation. The framework included error calculation schemes, image CNN models, and data assimilation models, which could estimate multi-phenotype and yield parameters of rice, providing a good and innovative approach.
  • Yield Calculation of Economic Crops
There are significant differences between the foreground and background of fruit crops, with obvious target features such as shape, boundary regions, and color, so target segmentation or detection with deep learning algorithms is relatively easy; this is most commonly reported for crops such as kiwifruit, mango, grape, and apple. Zhou et al. [119] used MobileNetV2, InceptionV3, and the corresponding quantized networks to establish a fast kiwifruit-detection model for orchards. Considering both the true detection rate and model performance, the quantized MobileNetV2 network, with a TDR of 89.7% and the lowest recognition time and size, was selected to develop a lightweight mobile application. Xiong et al. [120] studied a mango-detection method based on the deep learning YOLOv2 algorithm, which achieved an accuracy of 96.1% under different fruit quantities and light conditions; finally, a fruit-tree calculation model was used to fit the actual mango quantity, with an error rate of 1.1%, a relatively good prediction.
Grapes are among the most popular fruits and an important raw material for wine, so predicting grape yield is of great significance for adjusting production and marketing plans. Image-based grape-yield prediction mainly focuses on grape-cluster detection and single-berry counting. Santos et al. [121] used the convolutional neural networks Mask R-CNN, YOLOv2, and YOLOv3 to achieve grape-instance segmentation for cluster detection; the highest F1 score reached 0.91, and the method could accurately evaluate the size and shape of fruits. Shen et al. [122] applied channel pruning to the YOLOv5s model when studying grape-cluster counting, which effectively reduced the number of model parameters, the model size, and FLOPs; NMS was introduced to improve detection performance during prediction, resulting in mAP and F1 scores of 82.3% and 79.5%, respectively, on the image datasets, and the results were validated with video data. Cecotti et al. [123] studied grape detection based on convolutional neural networks and compared three feature spaces: color images, grayscale images, and color histograms. The model trained with a ResNet combined with transfer learning performed best, with an accuracy of over 99% for both red and white grapes. Palacios et al. [124] combined machine learning with deep learning to achieve grape-berry detection and counting: SegNet was used to segment individual berries and extract canopy features, three yield-prediction models were compared, and support vector machine regression proved the most effective, with an NRMSE of 24.99% and an R2 of 0.83. Chen et al. [125] designed an improved grape-cluster segmentation method based on the PSPNet model, embedding a CBAM attention mechanism and atrous convolution in the backbone network to enhance detail feature extraction and multi-layer feature fusion; the improved model increased IoU and pixel accuracy (PA) by 4.36% and 9.95%, reaching 87.42% and 95.73%, respectively. Olenskyj et al. [126] used three approaches, object detection, CNN regression, and a Transformer, to count grape clusters and found that the Transformer architecture had the highest prediction accuracy, with a MAPE of 18%, while eliminating the step of manually labeling images, demonstrating significant advantages. Sozzi et al. [127] applied YOLOv3, YOLOv3-tiny, YOLOv4-tiny, YOLOv5-tiny, YOLOv5x, and YOLOv5s to grape-cluster detection and counting, compared the prediction results of the different models on different datasets, and finally identified YOLOv5x as the best performer, with an average error of 13.3%. Palacios et al. [128] used the deep convolutional network SegNet with VGG19 as the encoder for grapevine-flower detection and counting, achieving good detection accuracy; the predicted flower count per plant achieved an R2 of over 0.7 against actual counts, and a mobile automatic detection device was developed.
In terms of apple-yield calculation, Sun et al. [129] proposed the YOLOv5-PRE model for apple detection and counting based on YOLOv5s. By introducing lightweight structures from ShuffleNet and GhostNet, as well as attention mechanisms, the YOLOv5-PRE model reached an average accuracy of 94.03%, showing significant improvements in accuracy and detection efficiency compared to YOLOv5s. Apolo-Apolo et al. [130] explored apple-detection technology based on CNNs; aerial images collected by drones were used as the training set, and Faster R-CNN was used as the training network. The R2 value reached 0.86, and linear regression was used to fit the total number of apples on each tree to mitigate the occlusion of some fruit, providing a good solution for apple-yield calculation.
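The detection-and-counting pipelines reviewed above share a common skeleton: run a trained detector over an orchard image, keep boxes above a confidence threshold, and take the box count as the fruit or cluster count. The following minimal sketch is ours, not from any cited study; it assumes the open-source Ultralytics YOLO package and a hypothetical fine-tuned weights file grape_clusters.pt:

```python
from ultralytics import YOLO

# Load a detector fine-tuned on grape clusters (hypothetical weights file).
model = YOLO("grape_clusters.pt")

# Run inference on one vineyard image; keep detections with confidence >= 0.4.
results = model.predict("vineyard_row_03.jpg", conf=0.4)

# Each result holds one image's boxes; the cluster count is simply the box count.
boxes = results[0].boxes
print(f"Detected {len(boxes)} grape clusters")

# Per-detection confidences and coordinates, useful when calibrating
# a count-to-yield regression against ground-truth harvest weights.
for b in boxes:
    print(float(b.conf), b.xyxy.tolist())
```

Image-level counts produced this way are then typically regressed against ground-truth yield, as in the count-to-yield fitting of [120].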
In addition, similar studies have been conducted for weed detection [131], chili-biomass calculation, and pod detection and counting. Weeds are one of the important factors hindering the healthy growth of crops, and in recent years image-recognition techniques based on deep learning have been increasingly used for weed detection, with RGB images being the most common image category. Quan et al. [132] developed a two-stream dense-feature-fusion convolutional neural network based on RGB-D data to achieve weed detection and aboveground fresh-weight calculation in field plots, obtaining richer information than RGB images alone; by constructing a NiN-Block module to enhance feature extraction and fusion, the average accuracy of predicting weed fresh weight reached 75.34% at an IoU of 0.5. Moon et al. [133] combined simple formulas with deep learning networks to calculate the fresh weight and leaf area of greenhouse sweet peppers: the fresh weight was calculated from the total weight and volumetric water content measured by the cultivation device, a ConvNet was used to calculate leaf area, and the R2 values were 0.7 and 0.95, respectively. This solution is general and can be promoted in practical application scenarios. Lu et al. [134] used a camera to capture RGB images of plants; Faster R-CNN, FPN, SSD, and YOLOv3 were first compared for pod recognition, YOLOv3 was selected for its highest recognition accuracy, and on this basis the loss function, anchor-box clustering algorithm, and parts of the network were improved for the detection and counting of soybean pods and leaves. Finally, the GRNN algorithm was used to model the number of pods and leaves, yielding an optimal soybean-yield prediction model with an average accuracy of 97.43%. Riera et al. [135] developed a yield-calculation framework based on multi-view RGB images captured by cameras and established a pod-recognition and counting model using RetinaNet, effectively overcoming pod occlusion in counting. Table 7 summarizes the research progress of deep learning in image-based crop-yield calculation.
Image-processing methods based on deep learning are relatively complex; feature extraction is completed autonomously by the machine without manual intervention, resulting in relatively high accuracy. At the same time, deep learning requires a large amount of computation and uses networks with many layers. As network depth increases, feature maps and conceptual information are continuously abstracted, which reduces resolution and sensitivity to fine detail and can lead to missed or false detections. In addition, during application, occlusion among plants, leaves, and fruits is also quite serious, so optimization is needed in image acquisition, preprocessing, and training-network construction, through methods such as background removal, branch-and-leaf reconstruction, and video-stream capture.

4. Discussion

From the large body of research literature, it can be seen that current image-based crop-yield calculation mainly relies on remote sensing images and visible light images. Remote sensing can capture large amounts of data describing almost all aspects of plant physiology, and even internal changes in plants, depending on sensor resolution: multi-channel sensors or remote sensing satellites extract absorption spectra or reflected electromagnetic information, and machine learning algorithms are then used to establish crop-biomass or yield-calculation models, which reflect overall crop growth and are suitable for large-scale cultivation of grain crops. Crop-yield calculation based on visible light mainly achieves fruit counting through image segmentation or detection and is suitable for economic crops such as eggplants and melons. High-resolution image acquisition, image preprocessing, feature-variable selection, and model-algorithm selection are all key factors that affect prediction accuracy. Table 8 compares yield-calculation schemes based on the two image types, covering image-acquisition methods, preprocessing, extracted indicators, main advantages, main disadvantages, and representative algorithms.
Image acquisition. With the continuous advancement of digital imaging and sensor technology, obtaining high-resolution visible light or spectral images has become more convenient and efficient. At present, unmanned aerial vehicles (UAVs) are widely used for data collection in agricultural and ecological applications, with advantages in economy and flexibility. Satellite-based remote sensing acquisition offers better stability, making it easy to obtain data across multiple growth stages and convenient for long-term monitoring. In addition, factors such as shooting angle, motion blur, backlighting, shadows, and occlusion lead to incomplete target segmentation; measures such as improving image contrast and applying light compensation can effectively weaken these impacts.
Image preprocessing. The near-infrared region expresses the absorption of hydrogen-containing groups, but the absorption is weak and the spectra overlap, so denoising and filtering are required to improve the signal-to-noise ratio and highlight the distribution of vegetation characteristics and canopy-structure changes. For visible light images, random aspect-ratio cropping, horizontal flipping, vertical flipping, saturation enhancement, saturation reduction, Gaussian blur, grayscale conversion, CutMix, Mosaic, and similar operations are commonly used to adjust image geometry and expand the number of samples, which enhances the model's generalization ability.
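As a minimal sketch of such an augmentation pipeline (assuming PyTorch's torchvision; CutMix and Mosaic are usually supplied by detection frameworks rather than composed this way):

```python
import torchvision.transforms as T

# A typical visible-light augmentation pipeline for yield-related classification
# or regression tasks; each transform matches an operation named in the text.
train_transforms = T.Compose([
    T.RandomResizedCrop(224, scale=(0.6, 1.0)),        # random aspect-ratio cropping
    T.RandomHorizontalFlip(p=0.5),                     # horizontal flipping
    T.RandomVerticalFlip(p=0.5),                       # vertical flipping
    T.ColorJitter(saturation=0.4),                     # saturation up/down
    T.RandomApply([T.GaussianBlur(kernel_size=5)], p=0.3),  # Gaussian blur
    T.RandomGrayscale(p=0.1),                          # grayscale conversion
    T.ToTensor(),
])
```

Applied independently to each sample at training time, such a pipeline expands the effective dataset without any additional field collection.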
Feature-variable screening. There are over 40 crop vegetation indices, and selecting those closely related to crop growth and yield still faces many difficulties. Determining how to select highly correlated independent variables from a large amount of feature information is key to building a high-performance model: eliminating redundant or irrelevant variables improves model robustness and reduces computational complexity. Principal component analysis (PCA) is a common dimensionality-reduction algorithm that projects the original variables onto a small number of components capturing the dominant variation in the data. Similar approaches include decision trees (DT), genetic algorithms (GA), and simulated annealing (SA).
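A minimal sketch of PCA-based screening with scikit-learn, assuming a hypothetical matrix X of per-plot vegetation indices (rows = plots, columns = indices such as NDVI, GNDVI, SAVI):

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

# Hypothetical design matrix: 200 plots x 40 vegetation indices.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 40))

# Indices are on different scales, so standardize before PCA.
X_std = StandardScaler().fit_transform(X)

# Keep enough components to explain 95% of the variance.
pca = PCA(n_components=0.95)
X_reduced = pca.fit_transform(X_std)

print("components kept:", pca.n_components_)
print("explained variance ratios:", pca.explained_variance_ratio_)
# The loadings (pca.components_) show which original indices dominate
# each retained component, guiding which variables to keep or discard.
```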
Selection of model algorithms. There are two main model algorithms used for yield calculation: machine learning (ML) and deep learning (DL) [136].
(1) Machine learning (ML)
The growth process of crops is complex, and responses to different environmental changes are generally nonlinear, so traditional statistical methods are not always sufficient to accurately estimate plant growth. Machine learning (ML) [12] performs regression analysis on highly nonlinear problems and identifies nonlinear relationships between input and output datasets [137]; it can learn patterns of change from large amounts of data, achieve autonomous decision-making, and provide a good solution for complex data analysis, and it is widely used in scenarios such as image segmentation and target recognition. Compared with traditional crop models and statistical methods, yield-prediction models built with ML can handle nonlinear relationships and identify the independent variables that most strongly affect yield, but their interpretability is limited [138], and the resulting models are usually tailored to specific application scenarios, requiring special attention to overfitting. In crop-yield calculation research, commonly used machine learning algorithms include artificial neural networks (ANN), support vector machines (SVM), Gaussian process regression (GPR), partial least squares regression (PLSR), multi-layer perceptrons (MLP), random forests (RF), and k-nearest neighbors (KNN), which must be applied according to the specific dataset, variable types, crop type, growth stage, and other conditions. A minimal sketch of such a model follows.
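The sketch below uses scikit-learn and assumes hypothetical arrays X (per-plot features, e.g., screened vegetation indices) and y (measured yields); it is illustrative, not a model from the cited studies:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score, mean_squared_error

# Hypothetical data: 200 plots, 12 screened features, yield as the target.
rng = np.random.default_rng(1)
X = rng.normal(size=(200, 12))
y = X[:, 0] * 2.0 + np.sin(X[:, 1]) + rng.normal(scale=0.3, size=200)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=42)

# Random forest regression, one of the ML algorithms named above.
model = RandomForestRegressor(n_estimators=300, random_state=42)
model.fit(X_tr, y_tr)

pred = model.predict(X_te)
print("R2:", r2_score(y_te, pred))
print("RMSE:", mean_squared_error(y_te, pred) ** 0.5)
# Feature importances indicate which indices drive the prediction.
print("importances:", model.feature_importances_)
```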
(2) Deep learning (DL)
Deep learning (DL) is a higher-order form of machine learning that stacks multiple layers of neural networks; it can mine internal relationships in data and automatically learn hierarchical representations from large datasets using complex nonlinear functions. Compared with classical ML, deep learning generally achieves higher accuracy. In recent years, DL has been increasingly used for crop-biomass monitoring and yield calculation, proving its powerful feature-extraction and self-learning abilities. Convolutional neural networks (CNN) and recurrent neural networks (RNN) are the DL methods most commonly used to explore correlations between independent variables and yield [139]. Among them, the CNN, composed mainly of convolutional and pooling layers, is the most widely used deep learning architecture in image processing [101]. CNNs take images as input and automatically extract features such as color, geometry, and texture [140], and they have been widely applied in field weed and pest identification, environmental-stress analysis, agricultural image segmentation, and yield calculation. CNN models mainly capture spatial features of images, while RNNs mainly analyze temporal data, especially remote sensing and meteorological data spanning multiple growth periods and long time series. Long short-term memory (LSTM) is a refined RNN variant that effectively mitigates gradient explosion and vanishing; combined with a CNN, it is more accurate for yield-calculation models based on multimodal fusion of remote sensing data, meteorological data, phenological information, and other modalities.
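A minimal PyTorch sketch of such a CNN+LSTM arrangement (shapes and layer sizes are illustrative assumptions, not a published architecture): a small CNN embeds each growth-stage image, an LSTM models the sequence, and a linear head regresses yield.

```python
import torch
import torch.nn as nn

class CnnLstmYield(nn.Module):
    """Per-timestep CNN encoder + LSTM over growth stages + regression head."""
    def __init__(self, embed_dim=64, hidden_dim=64):
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv2d(4, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
            nn.Flatten(), nn.Linear(32, embed_dim),
        )
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.head = nn.Linear(hidden_dim, 1)

    def forward(self, x):             # x: (batch, time, 4 bands, H, W)
        b, t = x.shape[:2]
        feats = self.cnn(x.flatten(0, 1)).view(b, t, -1)  # embed each timestep
        out, _ = self.lstm(feats)
        return self.head(out[:, -1])  # yield predicted from the last stage

# Dummy batch: 8 plots, 5 growth stages, 4-band 64x64 multispectral patches.
yield_hat = CnnLstmYield()(torch.randn(8, 5, 4, 64, 64))
print(yield_hat.shape)  # torch.Size([8, 1])
```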
Meanwhile, deep learning models generally have complex structures and require large numbers of data samples and substantial computing power to achieve the expected results. Small samples can easily cause overfitting, so data augmentation is particularly important. In addition, the numerous model hyperparameters are important factors affecting prediction accuracy. In many studies, hyperparameters are determined by experience or model evaluation, and some algorithms are combined to achieve hyperparameter optimization, such as Bayesian optimization, genetic algorithms, and particle swarm optimization.
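As a minimal sketch of automated hyperparameter search (here with the open-source Optuna library, whose default sampler is a Bayesian-style TPE; the search space below is an illustrative assumption):

```python
import optuna
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score

# Synthetic stand-in for a yield dataset.
X, y = make_regression(n_samples=300, n_features=12, noise=0.3, random_state=0)

def objective(trial):
    # Search space: two common random forest hyperparameters.
    params = {
        "n_estimators": trial.suggest_int("n_estimators", 100, 600),
        "max_depth": trial.suggest_int("max_depth", 3, 20),
    }
    model = RandomForestRegressor(**params, random_state=0)
    # Maximize mean cross-validated R2.
    return cross_val_score(model, X, y, cv=3, scoring="r2").mean()

study = optuna.create_study(direction="maximize")
study.optimize(objective, n_trials=30)
print(study.best_params, study.best_value)
```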

5. Conclusions and Outlooks

With the continuous progress of artificial intelligence and sensor technology, image-analysis technology is being studied more and more for agricultural yield estimation, and remote sensing images and visible light images are being applied to crop-target segmentation, detection, counting, biomass monitoring, and yield calculation. Spectral indices, geometric shape, texture, and other image information can effectively reflect the internal growth status of crops and have been proven applicable to yield calculation for a variety of food and economic crops. With improving image resolution and continuous optimization of model algorithms, the accuracy of crop-yield calculation is also increasing, but the field still faces problems and challenges.
Model-algorithm optimization. Deep learning-based object detection and segmentation algorithms still have problems in application, such as poor performance on small objects and low accuracy at target boundaries. To address these issues, researchers have proposed many improvements. On one hand, performance can be improved by changing the network structure and loss function; for example, introducing attention mechanisms and multiscale fusion can strengthen the network's attention to and perception of targets and thereby improve detection and segmentation accuracy (see the sketch below). On the other hand, data augmentation is also effective: transformations such as rotation, scaling, and translation increase the diversity of the training data and improve the robustness and generalization ability of the model. In addition, pre-training model parameters can accelerate training convergence and further improve performance. In summary, deep learning-based detection and segmentation have broad application prospects in computer vision, and their accuracy and efficiency can be raised through continued algorithmic improvement.
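A minimal sketch of one such attention mechanism, a squeeze-and-excitation (SE) channel-attention block in PyTorch, which can be dropped into a detection backbone (layer sizes are illustrative):

```python
import torch
import torch.nn as nn

class SEBlock(nn.Module):
    """Squeeze-and-excitation: reweight feature channels by global context."""
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)      # squeeze: global average pool
        self.fc = nn.Sequential(                 # excitation: channel gating
            nn.Linear(channels, channels // reduction), nn.ReLU(),
            nn.Linear(channels // reduction, channels), nn.Sigmoid(),
        )

    def forward(self, x):
        b, c, _, _ = x.shape
        w = self.fc(self.pool(x).view(b, c)).view(b, c, 1, 1)
        return x * w                             # emphasize informative channels

feat = torch.randn(2, 64, 32, 32)   # a backbone feature map
print(SEBlock(64)(feat).shape)      # torch.Size([2, 64, 32, 32])
```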
The fusion of multimodal and multi-channel data. The crop growth process is complex and variable and is greatly influenced by factors such as light, precipitation, and temperature. Yield prediction is closely related to environmental factors, so time-series samples are particularly important [141]. Data collection therefore needs to cover remote sensing and meteorological data across multiple growth periods and long time series, and multi-feature fusion (multispectral, thermal infrared, weather) achieves higher accuracy than any single dimension. After fusing image texture with multi-channel spectral information, the model can be trained further on private or publicly available datasets. In addition, multimodal frameworks can be extended by integrating environmental factors such as meteorology, geography, soil, and altitude into yield prediction, which can significantly improve prediction results.
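A minimal sketch of feature-level fusion, concatenating an image embedding with an environmental-feature vector before the regression head (dimensions are illustrative assumptions):

```python
import torch
import torch.nn as nn

class FusionYieldHead(nn.Module):
    """Concatenate image and environmental embeddings, then regress yield."""
    def __init__(self, img_dim=64, env_dim=8):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(img_dim + env_dim, 64), nn.ReLU(), nn.Linear(64, 1),
        )

    def forward(self, img_feat, env_feat):
        return self.mlp(torch.cat([img_feat, env_feat], dim=1))

img_feat = torch.randn(8, 64)  # e.g., from a CNN over multispectral imagery
env_feat = torch.randn(8, 8)   # e.g., rainfall, temperature, soil, altitude
print(FusionYieldHead()(img_feat, env_feat).shape)  # torch.Size([8, 1])
```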
Compensation for insufficient sample size by transfer learning. Deep learning requires large numbers of data samples as support [142]. Transfer learning fine-tunes the parameters of a model pre-trained on a large dataset, using a limited number of new samples, to achieve good performance on new problems. Two aspects are typical: region-based transfer and parameter-based transfer. In the first, a model is learned in regions with sufficient samples and then extended to regions with fewer samples. In the second, partial parameters or prior distributions of hyperparameters are shared between models for related tasks to improve overall performance. Although both approaches improve model performance, the complexity and diversity of data mean there is currently no unified method for defining dataset similarity, and similarity-based transfer requires more quantitative and qualitative justification. In the future, as data accumulate, the advantages of deep learning models will become more prominent. Moreover, in region-based transfer, the environments of different regions are heterogeneous, and how to achieve transfer across heterogeneous environments is a future research direction.
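A minimal parameter-transfer sketch with torchvision: load an ImageNet-pretrained ResNet-18, freeze the backbone, and fine-tune only a new regression head on a small yield dataset (an illustrative recipe, not a prescription from the cited studies):

```python
import torch
import torch.nn as nn
from torchvision import models

# Start from ImageNet-pretrained weights (parameter-based transfer).
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)

# Freeze the pretrained backbone so a small dataset cannot overfit it.
for p in model.parameters():
    p.requires_grad = False

# Replace the classifier with a 1-output regression head for yield.
model.fc = nn.Linear(model.fc.in_features, 1)

# Only the new head's parameters are optimized.
optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)

x = torch.randn(4, 3, 224, 224)   # a small batch of canopy images
print(model(x).shape)             # torch.Size([4, 1])
```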
The combination of multiple collection platforms. When monitoring crop growth and estimating yield at the field scale, satellite remote sensing struggles to overcome the impact of spatial heterogeneity on accuracy, whereas drone platforms can better capture heterogeneity information. It is therefore possible to combine drone and satellite platforms, using drone data as an intermediate variable for scale conversion in the spatiotemporal fusion of satellite data, to preserve accuracy during downscaling.
The interpretability of the yield-calculation model. The mechanisms of deep learning algorithms are difficult to explain, because feature extraction is performed automatically from the data. Growth models, by contrast, can better express information such as the crop-growth process, environment, and cultivation technology [9], thereby describing growth and development mechanistically. Combining crop-growth models with deep learning can improve the explanatory power of yield-calculation models.
Power requirements of model computing. Yield monitoring and estimation require complex deep learning networks to fully learn from high-resolution data, which demands long training times and high-performance hardware, so lightweight model algorithms that preserve accuracy are particularly important. How to learn features efficiently and quickly, ensure the completeness of the learned features, and minimize the learning of redundant information are therefore key issues in applying deep learning to field-scale growth monitoring.
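One common lightweighting device (used by the MobileNet and ShuffleNet families mentioned earlier) is the depthwise separable convolution, which splits a standard convolution into a per-channel spatial filter plus a 1x1 channel mixer; a minimal PyTorch sketch with illustrative sizes:

```python
import torch
import torch.nn as nn

def count_params(m):
    return sum(p.numel() for p in m.parameters())

# Standard 3x3 convolution: 64 -> 128 channels.
standard = nn.Conv2d(64, 128, kernel_size=3, padding=1)

# Depthwise separable equivalent: per-channel 3x3, then 1x1 pointwise mixing.
separable = nn.Sequential(
    nn.Conv2d(64, 64, kernel_size=3, padding=1, groups=64),  # depthwise
    nn.Conv2d(64, 128, kernel_size=1),                        # pointwise
)

x = torch.randn(1, 64, 56, 56)
assert standard(x).shape == separable(x).shape
print(count_params(standard), "vs", count_params(separable))  # ~73k vs ~9k params
```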

Author Contributions

F.Y., M.W. and Q.Z. analyzed the data, prepared the tables, and drafted the manuscript; J.X. and J.Z. designed the project and finalized the manuscript; X.L., Y.P. and R.L. assisted with reference collection, reorganization, and partial data analysis. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by National Key Research and Development Program Project, grant number 2023YFD2201805; Beijing Smart Agriculture Innovation Consortium Project, grant number BAIC10-2024; Beijing Science and Technology Plan, grant number Z231100003923005.

Data Availability Statement

The data presented in this study are available on request from the corresponding author.

Conflicts of Interest

The authors declare no conflicts of interest.

References

1. Paudel, D.; Boogaard, H.; de Wit, A.; Janssen, S.; Osinga, S.; Pylianidis, C.; Athanasiadis, I.N. Machine learning for large-scale crop yield forecasting. Agric. Syst. 2021, 187, 103016.
2. Zhu, Y.; Wu, S.; Qin, M.; Fu, Z.; Gao, Y.; Wang, Y.; Du, Z. A deep learning crop model for adaptive yield estimation in large areas. Int. J. Appl. Earth Obs. Geoinf. 2022, 110, 102828.
3. Akhtar, M.N.; Ansari, E.; Alhady, S.S.N.; Abu Bakar, E. Leveraging on Advanced Remote Sensing- and Artificial Intelligence-Based Technologies to Manage Palm Oil Plantation for Current Global Scenario: A Review. Agriculture 2023, 13, 504.
4. Torres-Sánchez, J.; Souza, J.; di Gennaro, S.F.; Mesas-Carrascosa, F.J. Editorial: Fruit detection and yield prediction on woody crops using data from unmanned aerial vehicles. Front. Plant Sci. 2022, 13, 1112445.
5. Wen, T.; Li, J.-H.; Wang, Q.; Gao, Y.-Y.; Hao, G.-F.; Song, B.-A. Thermal imaging: The digital eye facilitates high-throughput phenotyping traits of plant growth and stress responses. Sci. Total Environ. 2023, 899, 165626.
6. Farjon, G.; Huijun, L.; Edan, Y. Deep-learning-based counting methods, datasets, and applications in agriculture: A review. Precis. Agric. 2023, 24, 1683–1711.
7. Rashid, M.; Bari, B.S.; Yusup, Y.; Kamaruddin, M.A.; Khan, N. A Comprehensive Review of Crop Yield Prediction Using Machine Learning Approaches With Special Emphasis on Palm Oil Yield Prediction. IEEE Access 2021, 9, 63406–63439.
8. Attri, I.; Awasthi, L.K.; Sharma, T.P. Machine learning in agriculture: A review of crop management applications. Multimed. Tools Appl. 2023, 83, 12875–12915.
9. Di, Y.; Gao, M.; Feng, F.; Li, Q.; Zhang, H. A New Framework for Winter Wheat Yield Prediction Integrating Deep Learning and Bayesian Optimization. Agronomy 2022, 12, 3194.
10. Teixeira, I.; Morais, R.; Sousa, J.J.; Cunha, A. Deep Learning Models for the Classification of Crops in Aerial Imagery: A Review. Agriculture 2023, 13, 965.
11. Bali, N.; Singla, A. Deep Learning Based Wheat Crop Yield Prediction Model in Punjab Region of North India. Appl. Artif. Intell. 2021, 35, 1304–1328.
12. van Klompenburg, T.; Kassahun, A.; Catal, C. Crop yield prediction using machine learning: A systematic literature review. Comput. Electron. Agric. 2020, 177, 105709.
13. He, L.; Fang, W.; Zhao, G.; Wu, Z.; Fu, L.; Li, R.; Majeed, Y.; Dhupia, J. Fruit yield prediction and estimation in orchards: A state-of-the-art comprehensive review for both direct and indirect methods. Comput. Electron. Agric. 2022, 195, 106812.
14. Li, K.-Y.; Sampaio de Lima, R.; Burnside, N.G.; Vahtmäe, E.; Kutser, T.; Sepp, K.; Cabral Pinheiro, V.H.; Yang, M.-D.; Vain, A.; Sepp, K. Toward Automated Machine Learning-Based Hyperspectral Image Analysis in Crop Yield and Biomass Estimation. Remote Sens. 2022, 14, 1114.
15. Desloires, J.; Ienco, D.; Botrel, A. Out-of-year corn yield prediction at field-scale using Sentinel-2 satellite imagery and machine learning methods. Comput. Electron. Agric. 2023, 209, 107807.
16. Thakur, A.; Venu, S.; Gurusamy, M. An extensive review on agricultural robots with a focus on their perception systems. Comput. Electron. Agric. 2023, 212, 108146.
17. Abebe, A.M.; Kim, Y.; Kim, J.; Kim, S.L.; Baek, J. Image-Based High-Throughput Phenotyping in Horticultural Crops. Plants 2023, 12, 2061.
18. Alkhaled, A.; Townsend, P.A.A.; Wang, Y. Remote Sensing for Monitoring Potato Nitrogen Status. Am. J. Potato Res. 2023, 100, 1–14.
19. Pokhariyal, S.; Patel, N.R.; Govind, A. Machine Learning-Driven Remote Sensing Applications for Agriculture in India—A Systematic Review. Agronomy 2023, 13, 2302.
20. Joshi, A.; Pradhan, B.; Gite, S.; Chakraborty, S. Remote-Sensing Data and Deep-Learning Techniques in Crop Mapping and Yield Prediction: A Systematic Review. Remote Sens. 2023, 15, 2014.
21. Muruganantham, P.; Wibowo, S.; Grandhi, S.; Samrat, N.H.; Islam, N. A Systematic Literature Review on Crop Yield Prediction with Deep Learning and Remote Sensing. Remote Sens. 2022, 14, 1990.
22. Ren, Y.; Li, Q.; Du, X.; Zhang, Y.; Wang, H.; Shi, G.; Wei, M. Analysis of Corn Yield Prediction Potential at Various Growth Phases Using a Process-Based Model and Deep Learning. Plants 2023, 12, 446.
23. Zhou, S.; Xu, L.; Chen, N. Rice Yield Prediction in Hubei Province Based on Deep Learning and the Effect of Spatial Heterogeneity. Remote Sens. 2023, 15, 1361.
24. Darra, N.; Anastasiou, E.; Kriezi, O.; Lazarou, E.; Kalivas, D.; Fountas, S. Can Yield Prediction Be Fully Digitilized? A Systematic Review. Agronomy 2023, 13, 2441.
25. Shahi, T.B.; Xu, C.-Y.; Neupane, A.; Guo, W. Recent Advances in Crop Disease Detection Using UAV and Deep Learning Techniques. Remote Sens. 2023, 15, 2450.
26. Istiak, A.; Syeed, M.M.M.; Hossain, S.; Uddin, M.F.; Hasan, M.; Khan, R.H.; Azad, N.S. Adoption of Unmanned Aerial Vehicle (UAV) imagery in agricultural management: A systematic literature review. Ecol. Inform. 2023, 78, 102305.
27. Fajardo, M.; Whelan, B.M. Within-farm wheat yield forecasting incorporating off-farm information. Precis. Agric. 2021, 22, 569–585.
28. Yli-Heikkilä, M.; Wittke, S.; Luotamo, M.; Puttonen, E.; Sulkava, M.; Pellikka, P.; Heiskanen, J.; Klami, A. Scalable Crop Yield Prediction with Sentinel-2 Time Series and Temporal Convolutional Network. Remote Sens. 2022, 14, 4193.
29. Safdar, L.B.; Dugina, K.; Saeidan, A.; Yoshicawa, G.V.; Caporaso, N.; Gapare, B.; Umer, M.J.; Bhosale, R.A.; Searle, I.R.; Foulkes, M.J.; et al. Reviving grain quality in wheat through non-destructive phenotyping techniques like hyperspectral imaging. Food Energy Secur. 2023, 12, e498.
30. Tende, I.G.; Aburada, K.; Yamaba, H.; Katayama, T.; Okazaki, N. Development and Evaluation of a Deep Learning Based System to Predict District-Level Maize Yields in Tanzania. Agriculture 2023, 13, 627.
31. Leukel, J.; Zimpel, T.; Stumpe, C. Machine learning technology for early prediction of grain yield at the field scale: A systematic review. Comput. Electron. Agric. 2023, 207, 107721.
32. Elangovan, A.; Duc, N.T.; Raju, D.; Kumar, S.; Singh, B.; Vishwakarma, C.; Gopala Krishnan, S.; Ellur, R.K.; Dalal, M.; Swain, P.; et al. Imaging Sensor-Based High-Throughput Measurement of Biomass Using Machine Learning Models in Rice. Agriculture 2023, 13, 852.
33. Hassanzadeh, A.; Zhang, F.; van Aardt, J.; Murphy, S.P.; Pethybridge, S.J. Broadacre Crop Yield Estimation Using Imaging Spectroscopy from Unmanned Aerial Systems (UAS): A Field-Based Case Study with Snap Bean. Remote Sens. 2021, 13, 3241.
34. Sanaeifar, A.; Yang, C.; Guardia, M.d.l.; Zhang, W.; Li, X.; He, Y. Proximal hyperspectral sensing of abiotic stresses in plants. Sci. Total Environ. 2023, 861, 160652.
35. Son, N.-T.; Chen, C.-F.; Cheng, Y.-S.; Toscano, P.; Chen, C.-R.; Chen, S.-L.; Tseng, K.-H.; Syu, C.-H.; Guo, H.-Y.; Zhang, Y.-T. Field-scale rice yield prediction from Sentinel-2 monthly image composites using machine learning algorithms. Ecol. Inform. 2022, 69, 101618.
36. Elavarasan, D.; Vincent, P.M.D. Crop Yield Prediction Using Deep Reinforcement Learning Model for Sustainable Agrarian Applications. IEEE Access 2020, 8, 86886–86901.
37. Teodoro, P.E.; Teodoro, L.P.R.; Baio, F.H.R.; da Silva Junior, C.A.; dos Santos, R.G.; Ramos, A.P.M.; Pinheiro, M.M.F.; Osco, L.P.; Gonçalves, W.N.; Carneiro, A.M.; et al. Predicting Days to Maturity, Plant Height, and Grain Yield in Soybean: A Machine and Deep Learning Approach Using Multispectral Data. Remote Sens. 2021, 13, 4632.
38. Fei, S.; Hassan, M.A.; Xiao, Y.; Su, X.; Chen, Z.; Cheng, Q.; Duan, F.; Chen, R.; Ma, Y. UAV-based multi-sensor data fusion and machine learning algorithm for yield prediction in wheat. Precis. Agric. 2022, 24, 187–212.
39. Marques Ramos, A.P.; Prado Osco, L.; Elis Garcia Furuya, D.; Nunes Gonçalves, W.; Cordeiro Santana, D.; Pereira Ribeiro Teodoro, L.; Antonio da Silva Junior, C.; Fernando Capristo-Silva, G.; Li, J.; Henrique Rojo Baio, F.; et al. A random forest ranking approach to predict yield in maize with uav-based vegetation spectral indices. Comput. Electron. Agric. 2020, 178, 105791.
40. Ji, Y.; Liu, R.; Xiao, Y.; Cui, Y.; Chen, Z.; Zong, X.; Yang, T. Faba bean above-ground biomass and bean yield estimation based on consumer-grade unmanned aerial vehicle RGB images and ensemble learning. Precis. Agric. 2023, 24, 1439–1460.
41. Ayankojo, I.T.T.; Thorp, K.R.R.; Thompson, A.L.L. Advances in the Application of Small Unoccupied Aircraft Systems (sUAS) for High-Throughput Plant Phenotyping. Remote Sens. 2023, 15, 2623.
42. Zualkernan, I.; Abuhani, D.A.; Hussain, M.H.; Khan, J.; ElMohandes, M. Machine Learning for Precision Agriculture Using Imagery from Unmanned Aerial Vehicles (UAVs): A Survey. Drones 2023, 7, 382.
43. Zhang, Z.; Zhu, L. A Review on Unmanned Aerial Vehicle Remote Sensing: Platforms, Sensors, Data Processing Methods, and Applications. Drones 2023, 7, 398.
44. He, Z.; Wu, K.; Wang, F.; Jin, L.; Zhang, R.; Tian, S.; Wu, W.; He, Y.; Huang, R.; Yuan, L.; et al. Fresh Yield Estimation of Spring Tea via Spectral Differences in UAV Hyperspectral Images from Unpicked and Picked Canopies. Remote Sens. 2023, 15, 1100.
45. Xu, W.; Chen, P.; Zhan, Y.; Chen, S.; Zhang, L.; Lan, Y. Cotton yield estimation model based on machine learning using time series UAV remote sensing data. Int. J. Appl. Earth Obs. Geoinf. 2021, 104, 102511.
46. Gonzalez-Sanchez, A.; Frausto-Solis, J.; Ojeda-Bustamante, W. Predictive ability of machine learning methods for massive crop yield prediction. Span. J. Agric. Res. 2014, 12, 313.
47. Yang, W.; Nigon, T.; Hao, Z.; Dias Paiao, G.; Fernández, F.G.; Mulla, D.; Yang, C. Estimation of corn yield based on hyperspectral imagery and convolutional neural network. Comput. Electron. Agric. 2021, 184, 106092.
48. Danilevicz, M.F.; Bayer, P.E.; Boussaid, F.; Bennamoun, M.; Edwards, D. Maize Yield Prediction at an Early Developmental Stage Using Multispectral Images and Genotype Data for Preliminary Hybrid Selection. Remote Sens. 2021, 13, 3976.
49. Kumar, C.; Mubvumba, P.; Huang, Y.; Dhillon, J.; Reddy, K. Multi-Stage Corn Yield Prediction Using High-Resolution UAV Multispectral Data and Machine Learning Models. Agronomy 2023, 13, 1277.
50. Yu, D.; Zha, Y.; Sun, Z.; Li, J.; Jin, X.; Zhu, W.; Bian, J.; Ma, L.; Zeng, Y.; Su, Z. Deep convolutional neural networks for estimating maize above-ground biomass using multi-source UAV images: A comparison with traditional machine learning algorithms. Precis. Agric. 2022, 24, 92–113.
51. Mia, M.S.; Tanabe, R.; Habibi, L.N.; Hashimoto, N.; Homma, K.; Maki, M.; Matsui, T.; Tanaka, T.S.T. Multimodal Deep Learning for Rice Yield Prediction Using UAV-Based Multispectral Imagery and Weather Data. Remote Sens. 2023, 15, 2511.
52. Bellis, E.S.; Hashem, A.A.; Causey, J.L.; Runkle, B.R.K.; Moreno-García, B.; Burns, B.W.; Green, V.S.; Burcham, T.N.; Reba, M.L.; Huang, X. Detecting Intra-Field Variation in Rice Yield With Unmanned Aerial Vehicle Imagery and Deep Learning. Front. Plant Sci. 2022, 13, 716506.
53. Bian, C.; Shi, H.; Wu, S.; Zhang, K.; Wei, M.; Zhao, Y.; Sun, Y.; Zhuang, H.; Zhang, X.; Chen, S. Prediction of Field-Scale Wheat Yield Using Machine Learning Method and Multi-Spectral UAV Data. Remote Sens. 2022, 14, 1474.
54. Han, Y.; Tang, R.; Liao, Z.; Zhai, B.; Fan, J. A Novel Hybrid GOA-XGB Model for Estimating Wheat Aboveground Biomass Using UAV-Based Multispectral Vegetation Indices. Remote Sens. 2022, 14, 3506.
55. Zhou, X.; Kono, Y.; Win, A.; Matsui, T.; Tanaka, T.S.T. Predicting within-field variability in grain yield and protein content of winter wheat using UAV-based multispectral imagery and machine learning approaches. Plant Prod. Sci. 2020, 24, 137–151.
56. Sharma, P.; Leigh, L.; Chang, J.; Maimaitijiang, M.; Caffé, M. Above-Ground Biomass Estimation in Oats Using UAV Remote Sensing and Machine Learning. Sensors 2022, 22, 601.
57. Wang, F.; Yang, M.; Ma, L.; Zhang, T.; Qin, W.; Li, W.; Zhang, Y.; Sun, Z.; Wang, Z.; Li, F.; et al. Estimation of Above-Ground Biomass of Winter Wheat Based on Consumer-Grade Multi-Spectral UAV. Remote Sens. 2022, 14, 1251.
58. Roy Choudhury, M.; Das, S.; Christopher, J.; Apan, A.; Chapman, S.; Menzies, N.W.; Dang, Y.P. Improving Biomass and Grain Yield Prediction of Wheat Genotypes on Sodic Soil Using Integrated High-Resolution Multispectral, Hyperspectral, 3D Point Cloud, and Machine Learning Techniques. Remote Sens. 2021, 13, 3482.
59. Fu, Y.; Yang, G.; Song, X.; Li, Z.; Xu, X.; Feng, H.; Zhao, C. Improved Estimation of Winter Wheat Aboveground Biomass Using Multiscale Textures Extracted from UAV-Based Digital Images and Hyperspectral Feature Analysis. Remote Sens. 2021, 13, 581.
60. Tanabe, R.; Matsui, T.; Tanaka, T.S.T. Winter wheat yield prediction using convolutional neural networks and UAV-based multispectral imagery. Field Crops Res. 2023, 291, 108786.
61. Li, Z.; Chen, Z.; Cheng, Q.; Duan, F.; Sui, R.; Huang, X.; Xu, H. UAV-Based Hyperspectral and Ensemble Machine Learning for Predicting Yield in Winter Wheat. Agronomy 2022, 12, 202.
62. Li, R.; Wang, D.; Zhu, B.; Liu, T.; Sun, C.; Zhang, Z. Estimation of grain yield in wheat using source–sink datasets derived from RGB and thermal infrared imaging. Food Energy Secur. 2022, 12, e434.
63. Sharifi, A. Yield prediction with machine learning algorithms and satellite images. J. Sci. Food Agric. 2020, 101, 891–896.
64. Ma, J.; Liu, B.; Ji, L.; Zhu, Z.; Wu, Y.; Jiao, W. Field-scale yield prediction of winter wheat under different irrigation regimes based on dynamic fusion of multimodal UAV imagery. Int. J. Appl. Earth Obs. Geoinf. 2023, 118, 103292.
65. Maimaitijiang, M.; Sagan, V.; Sidike, P.; Hartling, S.; Esposito, F.; Fritschi, F.B. Soybean yield prediction from UAV using multimodal data fusion and deep learning. Remote Sens. Environ. 2020, 237, 111599.
66. Zhou, J.; Zhou, J.; Ye, H.; Ali, M.L.; Chen, P.; Nguyen, H.T. Yield estimation of soybean breeding lines under drought stress using unmanned aerial vehicle-based imagery and convolutional neural network. Biosyst. Eng. 2021, 204, 90–103.
67. Yoosefzadeh-Najafabadi, M.; Tulpan, D.; Eskandari, M. Using Hybrid Artificial Intelligence and Evolutionary Optimization Algorithms for Estimating Soybean Yield and Fresh Biomass Using Hyperspectral Vegetation Indices. Remote Sens. 2021, 13, 2555.
68. Yoosefzadeh-Najafabadi, M.; Earl, H.J.; Tulpan, D.; Sulik, J.; Eskandari, M. Application of Machine Learning Algorithms in Plant Breeding: Predicting Yield From Hyperspectral Reflectance in Soybean. Front. Plant Sci. 2021, 11, 624273.
69. Shi, Y.; Gao, Y.; Wang, Y.; Luo, D.; Chen, S.; Ding, Z.; Fan, K. Using Unmanned Aerial Vehicle-Based Multispectral Image Data to Monitor the Growth of Intercropping Crops in Tea Plantation. Front. Plant Sci. 2022, 13, 820585.
70. Ji, Y.; Chen, Z.; Cheng, Q.; Liu, R.; Li, M.; Yan, X.; Li, G.; Wang, D.; Fu, L.; Ma, Y.; et al. Estimation of plant height and yield based on UAV imagery in faba bean (Vicia faba L.). Plant Methods 2022, 18, s13007–s13022.
71. Liu, Y.; Feng, H.; Yue, J.; Fan, Y.; Jin, X.; Zhao, Y.; Song, X.; Long, H.; Yang, G. Estimation of Potato Above-Ground Biomass Using UAV-Based Hyperspectral images and Machine-Learning Regression. Remote Sens. 2022, 14, 5449.
72. Sun, C.; Feng, L.; Zhang, Z.; Ma, Y.; Crosby, T.; Naber, M.; Wang, Y. Prediction of End-Of-Season Tuber Yield and Tuber Set in Potatoes Using In-Season UAV-Based Hyperspectral Imagery and Machine Learning. Sensors 2020, 20, 5293.
73. Poudyal, C.; Costa, L.F.; Sandhu, H.; Ampatzidis, Y.; Odero, D.C.; Arbelo, O.C.; Cherry, R.H. Sugarcane yield prediction and genotype selection using unmanned aerial vehicle-based hyperspectral imaging and machine learning. Agron. J. 2022, 114, 2320–2333.
74. de Oliveira, R.P.; Barbosa Júnior, M.R.; Pinto, A.A.; Oliveira, J.L.P.; Zerbato, C.; Furlani, C.E.A. Predicting Sugarcane Biometric Parameters by UAV Multispectral Images and Machine Learning. Agronomy 2022, 12, 1992.
75. Feng, L.; Zhang, Z.; Ma, Y.; Du, Q.; Williams, P.; Drewry, J.; Luck, B. Alfalfa Yield Prediction Using UAV-Based Hyperspectral Imagery and Ensemble Learning. Remote Sens. 2020, 12, 2028.
76. Wengert, M.; Wijesingha, J.; Schulze-Brüninghoff, D.; Wachendorf, M.; Astor, T. Multisite and Multitemporal Grassland Yield Estimation Using UAV-Borne Hyperspectral Data. Remote Sens. 2022, 14, 2068.
77. Pranga, J.; Borra-Serrano, I.; Aper, J.; De Swaef, T.; Ghesquiere, A.; Quataert, P.; Roldán-Ruiz, I.; Janssens, I.A.; Ruysschaert, G.; Lootens, P. Improving Accuracy of Herbage Yield Predictions in Perennial Ryegrass with UAV-Based Structural and Spectral Data Fusion and Machine Learning. Remote Sens. 2021, 13, 3459.
78. Li, K.-Y.; Burnside, N.G.; Sampaio de Lima, R.; Villoslada Peciña, M.; Sepp, K.; Yang, M.-D.; Raet, J.; Vain, A.; Selge, A.; Sepp, K. The Application of an Unmanned Aerial System and Machine Learning Techniques for Red Clover-Grass Mixture Yield Estimation under Variety Performance Trials. Remote Sens. 2021, 13, 1994.
79. Tatsumi, K.; Igarashi, N.; Mengxue, X. Prediction of plant-level tomato biomass and yield using machine learning with unmanned aerial vehicle imagery. Plant Methods 2021, 17, s13007–s13021.
80. Ballesteros, R.; Intrigliolo, D.S.; Ortega, J.F.; Ramírez-Cuesta, J.M.; Buesa, I.; Moreno, M.A. Vineyard yield estimation by combining remote sensing, computer vision and artificial neural network techniques. Precis. Agric. 2020, 21, 1242–1262.
81. Chen, R.; Zhang, C.; Xu, B.; Zhu, Y.; Zhao, F.; Han, S.; Yang, G.; Yang, H. Predicting individual apple tree yield using UAV multi-source remote sensing data and ensemble learning. Comput. Electron. Agric. 2022, 201, 107275.
82. Tang, M.; Sadowski, D.L.; Peng, C.; Vougioukas, S.G.; Klever, B.; Khalsa, S.D.S.; Brown, P.H.; Jin, Y. Tree-level almond yield estimation from high resolution aerial imagery with convolutional neural network. Front. Plant Sci. 2023, 14, 1070699.
83. Bebie, M.; Cavalaris, C.; Kyparissis, A. Assessing Durum Wheat Yield through Sentinel-2 Imagery: A Machine Learning Approach. Remote Sens. 2022, 14, 3880.
84. Kamir, E.; Waldner, F.; Hochman, Z. Estimating wheat yields in Australia using climate records, satellite image time series and machine learning methods. ISPRS J. Photogramm. Remote Sens. 2020, 160, 124–135.
85. Liu, Y.; Wang, S.; Wang, X.; Chen, B.; Chen, J.; Wang, J.; Huang, M.; Wang, Z.; Ma, L.; Wang, P.; et al. Exploring the superiority of solar-induced chlorophyll fluorescence data in predicting wheat yield using machine learning and deep learning methods. Comput. Electron. Agric. 2022, 192, 106612.
86. Abreu Júnior, C.A.M.d.; Martins, G.D.; Xavier, L.C.M.; Vieira, B.S.; Gallis, R.B.d.A.; Fraga Junior, E.F.; Martins, R.S.; Paes, A.P.B.; Mendonça, R.C.P.; Lima, J.V.d.N. Estimating Coffee Plant Yield Based on Multispectral Images and Machine Learning Models. Agronomy 2022, 12, 3195.
87. Bhumiphan, N.; Nontapon, J.; Kaewplang, S.; Srihanu, N.; Koedsin, W.; Huete, A. Estimation of Rubber Yield Using Sentinel-2 Satellite Data. Sustainability 2023, 15, 7223.
88. Filippi, P.; Whelan, B.M.; Vervoort, R.W.; Bishop, T.F.A. Mid-season empirical cotton yield forecasts at fine resolutions using large yield mapping datasets and diverse spatial covariates. Agric. Syst. 2020, 184, 102894.
89. Liu, F.; Jiang, X.; Wu, Z. Attention Mechanism-Combined LSTM for Grain Yield Prediction in China Using Multi-Source Satellite Imagery. Sustainability 2023, 15, 9210.
90. Tang, Y.; Qiu, J.; Zhang, Y.; Wu, D.; Cao, Y.; Zhao, K.; Zhu, L. Optimization strategies of fruit detection to overcome the challenge of unstructured background in field orchard environment: A review. Precis. Agric. 2023, 24, 1183–1219.
91. Darwin, B.; Dharmaraj, P.; Prince, S.; Popescu, D.E.; Hemanth, D.J. Recognition of Bloom/Yield in Crop Images Using Deep Learning Models for Smart Agriculture: A Review. Agronomy 2021, 11, 646.
92. Abbas, A.; Zhang, Z.; Zheng, H.; Alami, M.M.; Alrefaei, A.F.; Abbas, Q.; Naqvi, S.A.H.; Rao, M.J.; Mosa, W.F.A.; Abbas, Q.; et al. Drones in Plant Disease Assessment, Efficient Monitoring, and Detection: A Way Forward to Smart Agriculture. Agronomy 2023, 13, 1524.
93. Massah, J.; Asefpour Vakilian, K.; Shabanian, M.; Shariatmadari, S.M. Design, development, and performance evaluation of a robot for yield estimation of kiwifruit. Comput. Electron. Agric. 2021, 185, 106132.
94. Zhang, Y.; Ta, N.; Guo, S.; Chen, Q.; Zhao, L.; Li, F.; Chang, Q. Combining Spectral and Textural Information from UAV RGB Images for Leaf Area Index Monitoring in Kiwifruit Orchard. Remote Sens. 2022, 14, 1063.
95. Guo, Y.; Wang, H.; Wu, Z.; Wang, S.; Sun, H.; Senthilnath, J.; Wang, J.; Robin Bryant, C.; Fu, Y. Modified Red Blue Vegetation Index for Chlorophyll Estimation and Yield Prediction of Maize from Visible Images Captured by UAV. Sensors 2020, 20, 5055.
96. Zhang, M.; Zhou, J.; Sudduth, K.A.; Kitchen, N.R. Estimation of maize yield and effects of variable-rate nitrogen application using UAV-based RGB imagery. Biosyst. Eng. 2020, 189, 24–35.
97. Saddik, A.; Latif, R.; Abualkishik, A.Z.; El Ouardi, A.; Elhoseny, M. Sustainable Yield Prediction in Agricultural Areas Based on Fruit Counting Approach. Sustainability 2023, 15, 2707.
98. Liu, W.; Li, Y.; Liu, J.; Jiang, J. Estimation of Plant Height and Aboveground Biomass of Toona sinensis under Drought Stress Using RGB-D Imaging. Forests 2021, 12, 1747.
99. Rodriguez-Sanchez, J.; Li, C.; Paterson, A.H. Cotton Yield Estimation From Aerial Imagery Using Machine Learning Approaches. Front. Plant Sci. 2022, 13, 870181.
100. Gong, L.; Yu, M.; Cutsuridis, V.; Kollias, S.; Pearson, S. A Novel Model Fusion Approach for Greenhouse Crop Yield Prediction. Horticulturae 2022, 9, 5.
101. Kamilaris, A.; Prenafeta-Boldú, F.X. A review of the use of convolutional neural networks in agriculture. J. Agric. Sci. 2018, 156, 312–322.
102. Chin, R.; Catal, C.; Kassahun, A. Plant disease detection using drones in precision agriculture. Precis. Agric. 2023, 24, 1663–1682.
103. Jiang, Y.; Li, C. Convolutional Neural Networks for Image-Based High-Throughput Plant Phenotyping: A Review. Plant Phenomics 2020, 2020, 4152816.
104. Sanaeifar, A.; Guindo, M.L.; Bakhshipour, A.; Fazayeli, H.; Li, X.; Yang, C. Advancing precision agriculture: The potential of deep learning for cereal plant head detection. Comput. Electron. Agric. 2023, 209, 107875.
105. Buxbaum, N.; Lieth, J.H.; Earles, M. Non-destructive Plant Biomass Monitoring With High Spatio-Temporal Resolution via Proximal RGB-D Imagery and End-to-End Deep Learning. Front. Plant Sci. 2022, 13, 758818.
106. Lu, H.; Cao, Z. TasselNetV2+: A Fast Implementation for High-Throughput Plant Counting From High-Resolution RGB Imagery. Front. Plant Sci. 2020, 11, 541960.
107. Mota-Delfin, C.; López-Canteñs, G.d.J.; López-Cruz, I.L.; Romantchik-Kriuchkova, E.; Olguín-Rojas, J.C. Detection and Counting of Corn Plants in the Presence of Weeds with Convolutional Neural Networks. Remote Sens. 2022, 14, 4892.
108. Liu, Y.; Cen, C.; Che, Y.; Ke, R.; Ma, Y.; Ma, Y. Detection of Maize Tassels from UAV RGB Imagery with Faster R-CNN. Remote Sens. 2020, 12, 338.
109. Jia, H.; Qu, M.; Wang, G.; Walsh, M.J.; Yao, J.; Guo, H.; Liu, H. Dough-Stage Maize (Zea mays L.) Ear Recognition Based on Multiscale Hierarchical Features and Multifeature Fusion. Math. Probl. Eng. 2020, 2020, 9825472.
110. Maji, A.K.; Marwaha, S.; Kumar, S.; Arora, A.; Chinnusamy, V.; Islam, S. SlypNet: Spikelet-based yield prediction of wheat using advanced plant phenotyping and computer vision techniques. Front. Plant Sci. 2022, 13, 889853.
111. Nevavuori, P.; Narra, N.; Linna, P.; Lipping, T. Crop Yield Prediction Using Multitemporal UAV Data and Spatio-Temporal Deep Learning Models. Remote Sens. 2020, 12, 4000.
112. Qiu, R.; He, Y.; Zhang, M. Automatic Detection and Counting of Wheat Spikelet Using Semi-Automatic Labeling and Deep Learning. Front. Plant Sci. 2022, 13, 872555.
113. Zhaosheng, Y.; Tao, L.; Tianle, Y.; Chengxin, J.; Chengming, S. Rapid Detection of Wheat Ears in Orthophotos From Unmanned Aerial Vehicles in Fields Based on YOLOX. Front. Plant Sci. 2022, 13, 851245.
114. Zang, H.; Wang, Y.; Ru, L.; Zhou, M.; Chen, D.; Zhao, Q.; Zhang, J.; Li, G.; Zheng, G. Detection method of wheat spike improved YOLOv5s based on the attention mechanism. Front. Plant Sci. 2022, 13, 993244.
115. Zhao, F.; Xu, L.; Lv, L.; Zhang, Y. Wheat Ear Detection Algorithm Based on Improved YOLOv4. Appl. Sci. 2022, 12, 12195.
116. Lin, Z.; Guo, W. Sorghum Panicle Detection and Counting Using Unmanned Aerial System Images and Deep Learning. Front. Plant Sci. 2020, 11, 534853.
117. Guo, Y.; Li, S.; Zhang, Z.; Li, Y.; Hu, Z.; Xin, D.; Chen, Q.; Wang, J.; Zhu, R. Automatic and Accurate Calculation of Rice Seed Setting Rate Based on Image Segmentation and Deep Learning. Front. Plant Sci. 2021, 12, 770916.
118. Han, J.; Shi, L.; Yang, Q.; Chen, Z.; Yu, J.; Zha, Y. Rice yield estimation using a CNN-based image-driven data assimilation framework. Field Crops Res. 2022, 288, 108693.
119. Zhou, Z.; Song, Z.; Fu, L.; Gao, F.; Li, R.; Cui, Y. Real-time kiwifruit detection in orchard using deep learning on Android™ smartphones for yield estimation. Comput. Electron. Agric. 2020, 179, 105856.
120. Xiong, J.; Liu, Z.; Chen, S.; Liu, B.; Zheng, Z.; Zhong, Z.; Yang, Z.; Peng, H. Visual detection of green mangoes by an unmanned aerial vehicle in orchards based on a deep learning method. Biosyst. Eng. 2020, 194, 261–272.
121. Santos, T.T.; de Souza, L.L.; dos Santos, A.A.; Avila, S. Grape detection, segmentation, and tracking using deep neural networks and three-dimensional association. Comput. Electron. Agric. 2020, 170, 105247.
122. Shen, L.; Su, J.; He, R.; Song, L.; Huang, R.; Fang, Y.; Song, Y.; Su, B. Real-time tracking and counting of grape clusters in the field based on channel pruning with YOLOv5s. Comput. Electron. Agric. 2023, 206, 107662.
123. Cecotti, H.; Rivera, A.; Farhadloo, M.; Pedroza, M.A. Grape detection with convolutional neural networks. Expert Syst. Appl. 2020, 159, 113588.
124. Palacios, F.; Melo-Pinto, P.; Diago, M.P.; Tardaguila, J. Deep learning and computer vision for assessing the number of actual berries in commercial vineyards. Biosyst. Eng. 2022, 218, 175–188.
125. Chen, S.; Song, Y.; Su, J.; Fang, Y.; Shen, L.; Mi, Z.; Su, B. Segmentation of field grape bunches via an improved pyramid scene parsing network. Int. J. Agric. Biol. Eng. 2021, 14, 185–194.
126. Olenskyj, A.G.; Sams, B.S.; Fei, Z.; Singh, V.; Raja, P.V.; Bornhorst, G.M.; Earles, J.M. End-to-end deep learning for directly estimating grape yield from ground-based imagery. Comput. Electron. Agric. 2022, 198, 107081.
127. Sozzi, M.; Cantalamessa, S.; Cogato, A.; Kayad, A.; Marinello, F. Automatic Bunch Detection in White Grape Varieties Using YOLOv3, YOLOv4, and YOLOv5 Deep Learning Algorithms. Agronomy 2022, 12, 319.
128. Palacios, F.; Bueno, G.; Salido, J.; Diago, M.P.; Hernández, I.; Tardaguila, J. Automated grapevine flower detection and quantification method based on computer vision and deep learning from on-the-go imaging using a mobile sensing platform under field conditions. Comput. Electron. Agric. 2020, 178, 105796.
129. Sun, L.; Hu, G.; Chen, C.; Cai, H.; Li, C.; Zhang, S.; Chen, J. Lightweight Apple Detection in Complex Orchards Using YOLOV5-PRE. Horticulturae 2022, 8, 1169.
130. Apolo-Apolo, O.E.; Pérez-Ruiz, M.; Martínez-Guanter, J.; Valente, J. A Cloud-Based Environment for Generating Yield Estimation Maps From Apple Orchards Using UAV Imagery and a Deep Learning Technique. Front. Plant Sci. 2020, 11, 1086.
131. Murad, N.Y.; Mahmood, T.; Forkan, A.R.M.; Morshed, A.; Jayaraman, P.P.; Siddiqui, M.S. Weed Detection Using Deep Learning: A Systematic Literature Review. Sensors 2023, 23, 3670.
132. Quan, L.; Li, H.; Li, H.; Jiang, W.; Lou, Z.; Chen, L. Two-Stream Dense Feature Fusion Network Based on RGB-D Data for the Real-Time Prediction of Weed Aboveground Fresh Weight in a Field Environment. Remote Sens. 2021, 13, 2288.
133. Moon, T.; Kim, D.; Kwon, S.; Ahn, T.I.; Son, J.E. Non-Destructive Monitoring of Crop Fresh Weight and Leaf Area with a Simple Formula and a Convolutional Neural Network. Sensors 2022, 22, 7728.
134. Lu, W.; Du, R.; Niu, P.; Xing, G.; Luo, H.; Deng, Y.; Shu, L. Soybean Yield Preharvest Prediction Based on Bean Pods and Leaves Image Recognition Using Deep Learning Neural Network Combined With GRNN. Front. Plant Sci. 2022, 12, 791256.
135. Riera, L.G.; Carroll, M.E.; Zhang, Z.; Shook, J.M.; Ghosal, S.; Gao, T.; Singh, A.; Bhattacharya, S.; Ganapathysubramanian, B.; Singh, A.K.; et al. Deep Multiview Image Fusion for Soybean Yield Estimation in Breeding Applications. Plant Phenomics 2021, 2021, 9846470.
136. Sandhu, K.; Patil, S.S.; Pumphrey, M.; Carter, A. Multitrait machine- and deep-learning models for genomic selection using spectral information in a wheat breeding program. Plant Genome 2021, 14, e20119.
137. Vinson Joshua, S.; Selwin Mich Priyadharson, A.; Kannadasan, R.; Ahmad Khan, A.; Lawanont, W.; Ahmed Khan, F.; Ur Rehman, A.; Junaid Ali, M. Crop Yield Prediction Using Machine Learning Approaches on a Wide Spectrum. Comput. Mater. Contin. 2022, 72, 5663–5679.
138. Wolanin, A.; Mateo-García, G.; Camps-Valls, G.; Gómez-Chova, L.; Meroni, M.; Duveiller, G.; Liangzhi, Y.; Guanter, L. Estimating and understanding crop yields with explainable deep learning in the Indian Wheat Belt. Environ. Res. Lett. 2020, 15, 024019.
139. Gong, L.; Yu, M.; Jiang, S.; Cutsuridis, V.; Pearson, S. Deep Learning Based Prediction on Greenhouse Crop Yield Combined TCN and RNN. Sensors 2021, 21, 4537.
140. de Oliveira, G.S.; Marcato Junior, J.; Polidoro, C.; Osco, L.P.; Siqueira, H.; Rodrigues, L.; Jank, L.; Barrios, S.; Valle, C.; Simeão, R.; et al. Convolutional Neural Networks to Estimate Dry Matter Yield in a Guineagrass Breeding Program Using UAV Remote Sensing. Sensors 2021, 21, 3971.
141. Meng, Y.; Xu, M.; Yoon, S.; Jeong, Y.; Park, D.S. Flexible and high quality plant growth prediction with limited data. Front. Plant Sci. 2022, 13, 989304.
142. Oikonomidis, A.; Catal, C.; Kassahun, A. Deep learning for crop yield prediction: A systematic literature review. N. Z. J. Crop Hortic. Sci. 2022, 51, 1–26.
Figure 1. Crop-yield-calculation process based on image technology.
Figure 2. Low-altitude remote sensing imaging devices and imaging effects. (a) Drone remote sensing set-up, (b) multi-channel imaging devices, (c) hyperspectral image [44], and (d) multispectral image [45].
Figure 3. Remote sensing satellite imaging process overview and effect diagram. (a) Principles of remote sensing satellite imaging, and (b) satellite images based on Sentinel-2 [83].
Table 1. Crop-yield-calculation methods and comparison of advantages and disadvantages [12,13,14,15].
| Calculation Method | Implementation Method | Advantages | Disadvantages |
|---|---|---|---|
| Artificial field investigation | Manual statistical calculation with counting tools | Low technical threshold, simple operation, and strong universality | Each step of the operation is cumbersome and prone to errors, and some crops require destructive measurement |
| Meteorological model | Analyze the correlation of meteorological factors and establish models using statistical, simulation, and other methods | Strong regularity and strong guiding significance for crop production | Needs a large amount of accumulated historical data; suitable only for large-scale crops |
| Growth model | Mining a large amount of growth data to digitally describe the entire growth cycle of crops | Strong mechanism, high interpretability, and high accuracy | Numerous parameters that are difficult to obtain; only suitable for specific varieties and regions, with limited applications |
| Remote sensing calculation | Obtaining multi-channel remote sensing data, such as multispectral and hyperspectral data, to establish regression models | Expresses internal and external characteristics of crops and can reflect agronomic traits | Applicable to specific regions, environments, and large-scale crops |
| Image detection | Implementing statistics and counting through target segmentation or detection | Low cost and high precision | Requires a large number of sample images; the occlusion problem is not easy to solve |
Table 2. Yield-calculation methods for main crop varieties.
| Classification | Variety | Crop Characteristics | Yield-Calculation Indicators |
|---|---|---|---|
| Food crops | Corn | Important grain crop with strong adaptability, planted in many countries; also an important source of feed | Number of plants, empty-stem rate, number of grains per spike |
| | Wheat | The food crop with the world's largest sowing area, yield, and distribution; high planting density and severe mutual occlusion | Number of ears, number of grains per ear, and thousand-grain weight |
| | Rice | One of the world's most important food crops, accounting for over 40% of total global food production | Number of ears, number of grains per ear, seed-setting rate, thousand-grain weight |
| Economic crops | Cotton | One of the world's important economic crops; an important industrial raw material of strategic significance | Total number of cotton plants per unit area, number of bolls per plant, and seed-cotton weight per boll |
| | Soybean | One of the world's important economic crops, widely used in food, feed, and industrial raw materials | Number of pods, number of seeds per plant, and 100-seed weight |
| | Potato | The world's fourth largest food crop after wheat, corn, and rice | Tuber weight and fruiting rate |
| | Sugarcane | Important economic crop grown globally; an important raw material for sugar | Single-stem weight and number of stems |
| | Sunflower | Important economic and oil crop | Flower-disk size and number of seeds |
| | Tea | Important beverage raw material | Number and density of tender leaves |
| | Apple | The third largest fruit crop in the world | Number of plants per mu, number of fruits per plant, and fruit weight |
| | Grape | Consumed fresh and used as a winemaking raw material, with high social and economic impact | Number of plants, clusters, and berries |
| | Orange | The world's largest fruit category; a leading industry in many countries | Number of plants per mu, number of fruits per plant, and fruit weight |
| | Tomato | One of the main greenhouse vegetable varieties and an important raw material for sauces | Number of fruit trusses per plant, number of fruits, and fruit weight |
| | Almond | Common food and traditional Chinese medicine raw material | Number of plants per mu, number of fruits per plant, and fruit weight |
| | Kiwifruit | One of the most consumed fruits in the world, known as the "King of Fruits" and "World Treasure Fruit" | Number of plants per mu, number of fruits per plant, and fruit weight |
Table 3. Common remote sensing indicator information [37,38,39,40].

| Type | Title | Extraction Method or Description | Remarks |
| --- | --- | --- | --- |
| Vegetation index | Normalized Difference Vegetation Index (NDVI) | (NIR − R)/(NIR + R) | Reflects plant coverage and health status |
| | Red-Edge Chlorophyll Index (ReCl) | (NIR/RE) − 1 | Reflects the photosynthetic activity of the canopy |
| | Enhanced Vegetation Index 2 (EVI2) | 2.5 × (NIR − R)/(NIR + 2.4 × R + 1) | Accurately reflects vegetation growth |
| | Ratio Vegetation Index (RVI) | NIR/R | Sensitive indicator for green plants; can be used to estimate biomass |
| | Difference Vegetation Index (DVI) | NIR − R | Sensitive to soil background; useful for monitoring the vegetation ecological environment |
| | Perpendicular Vegetation Index (PVI) | √((S_R − V_R)² + (S_NIR − V_NIR)²) | S denotes soil reflectance and V denotes vegetation reflectance |
| | Transformed Vegetation Index (TVI) | √(NDVI + 0.5) | Transformation of chlorophyll absorption |
| | Green Normalized Difference Vegetation Index (GNDVI) | (NIR − G)/(NIR + G) | Strong correlation with nitrogen |
| | Normalized Difference Red-Edge Index (NDRE) | (NIR − RE)/(NIR + RE) | RE denotes the reflectance of the red-edge band |
| | Red–Green–Blue Vegetation Index (RGBVI) | (G² − B × R)/(G² + B × R) | Measures vegetation and surface red-color characteristics |
| | Green Leaf Index (GLI) | (2G − B − R)/(2G + B + R) | Measures the degree of surface-vegetation coverage |
| | Excess Green (ExG) | 2G − R − B | Small-scale plant detection |
| | Excess Green minus Excess Red (ExGR) | ExG − ExR = 3G − 2.4R − B | Small-scale plant detection |
| | Excess Red (ExR) | 1.4R − G | Soil background extraction |
| | Visible Atmospherically Resistant Index (VARI) | (G − R)/(G + R − B) | Reduces the impact of illumination differences and atmospheric effects |
| | Leaf Area Index (LAI) | Leaf area (m²)/ground area (m²) | Ratio of leaf area to the soil surface covered |
| | Atmospherically Resistant Vegetation Index (ARVI) | (NIR − 2 × R + B)/(NIR + 2 × R − B) | Used in areas with high atmospheric aerosol content |
| | Modified Soil Adjusted Vegetation Index (MSAVI) | (2 × NIR + 1 − √((2 × NIR + 1)² − 8 × (NIR − R)))/2 | Reduces the impact of soil on crop-monitoring results |
| | Soil Adjusted Vegetation Index (SAVI) | (NIR − R) × (1 + L)/(NIR + R + L) | L is a soil-adjustment factor that varies with vegetation density |
| | Optimized Soil Adjusted Vegetation Index (OSAVI) | (NIR − R)/(NIR + R + 0.16) | Uses reflectance from the NIR and red bands |
| | Normalized Difference Water Index (NDWI) | (G − NIR)/(G + NIR) | Used to study vegetation moisture or soil moisture |
| | Vegetation Condition Index (VCI) | Ratio of the current NDVI to the historical maximum and minimum NDVI for the same period of the year | Reflects vegetation growth status within the same phenological period |
| Biophysical parameters | Leaf Area Index (LAI) | Total leaf area/land area | Total plant leaf area per unit land area; closely related to crop transpiration, soil water balance, and canopy photosynthesis |
| | Fraction of absorbed Photosynthetically Active Radiation (FPAR) | Proportion of photosynthetically active radiation (PAR) absorbed by the canopy | Important biophysical parameter commonly used to estimate vegetation biomass |
| Growth environment parameters | Temperature Condition Index (TCI) | Ratio of the current land surface temperature to the historical maximum and minimum for the same period of the year | Reflects surface temperature conditions; widely used in drought inversion and monitoring |
| | Vegetation Temperature Condition Index (VTCI) | Ratio of land surface temperature (LST) differences among pixels whose NDVI equals a given value within the study area | Quantitatively characterizes crop water-stress information |
| | Temperature Vegetation Dryness Index (TVDI) | Inversion of surface soil moisture in vegetation-covered areas | Analyzes spatial variation in drought severity |
| | Perpendicular Drought Index (PDI) | Based on the distance to the soil line passing through the origin in the two-dimensional NIR–red reflectance scatter space | Commonly used for the spatial distribution of soil moisture |
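To make the band-ratio definitions in Table 3 concrete, the sketch below computes a few of the listed indices from per-pixel reflectance arrays. It is a minimal illustration rather than code from any cited study; the synthetic reflectance values and the small epsilon guard are assumptions made for the example.

```python
import numpy as np

def ndvi(nir, red):
    """NDVI = (NIR - R) / (NIR + R); epsilon avoids division by zero on dark pixels."""
    return (nir - red) / (nir + red + 1e-10)

def savi(nir, red, L=0.5):
    """SAVI = (NIR - R)(1 + L) / (NIR + R + L); L = 0.5 is a common default."""
    return (nir - red) * (1.0 + L) / (nir + red + L)

def evi2(nir, red):
    """Two-band EVI2 = 2.5 (NIR - R) / (NIR + 2.4 R + 1)."""
    return 2.5 * (nir - red) / (nir + 2.4 * red + 1.0)

# Synthetic reflectance values in [0, 1] standing in for a calibrated image.
nir = np.array([0.45, 0.60, 0.30])
red = np.array([0.10, 0.08, 0.20])
print(ndvi(nir, red))  # dense, healthy canopies push NDVI toward 1
print(savi(nir, red))
print(evi2(nir, red))
```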
Table 4. Research progress on crop-yield calculation based on low-altitude remote sensing.

| Crop Varieties | Literature | Year | Task | Network Framework and Algorithms | Result |
| --- | --- | --- | --- | --- | --- |
| Corn | [47] | 2021 | Predict the yield of corn | CNN | AP: 75.5% |
| | [48] | 2021 | Predict the yield of corn | tab-DNN and sp-DNN | R²: 0.73 |
| | [49] | 2023 | Predict the yield of corn | LR, KNN, RF, SVR, DNN | R²: 0.84 |
| | [50] | 2022 | Estimate biomass of corn | DCNN, MLR, RF, SVM | R²: 0.94 |
| | [39] | 2020 | Predict the yield of corn | RF | R²: 0.78 |
| Rice | [51] | 2023 | Predict the yield of rice | CNN | RMSPE: 14% |
| | [52] | 2022 | Predict the yield of rice | 3D-CNN, 2D-CNN | RMSE: 8.8% |
| Wheat | [53] | 2022 | Predict the yield of wheat | GPR | R²: 0.88 |
| | [38] | 2023 | Predict the yield of wheat | Ensemble ML algorithms | R²: 0.692 |
| | [61] | 2022 | Predict the yield of wheat | Ensemble ML algorithms | R²: 0.78 |
| | [54] | 2022 | Estimate biomass (AGB) of wheat | GOA-XGB | R²: 0.855 |
| | [62] | 2022 | Estimate yield of wheat | RF | R²: 0.86 |
| | [55] | 2021 | Calculate the yield and protein content of wheat | SVR, RF, and ANN | R²: 0.62 |
| | [59] | 2021 | Estimate biomass of wheat | LSSVM | R²: 0.87 |
| | [57] | 2022 | Estimate biomass of wheat | RF | R²: 0.97 |
| | [58] | 2021 | Calculate the yield of wheat | ANN | R²: 0.88 |
| | [64] | 2023 | Predict the yield of wheat | MultimodalNet | R²: 0.7411 |
| | [60] | 2023 | Predict the yield of wheat | CNN | RMSE: 0.94 t·ha⁻¹ |
| | [56] | 2022 | Estimate biomass of oats | PLS, SVM, ANN, RF | r: 0.65 |
| | [63] | 2020 | Calculate the yield of barley | GPR | R²: 0.84 |
| Beans | [65] | 2020 | Predict the yield of soybean | DNN-F2 | R²: 0.72 |
| | [66] | 2021 | Predict the yield of soybean | CNN | R²: 0.78 |
| | [37] | 2021 | Predict the yield of soybean | DL and ML | r: 0.44 |
| | [67] | 2021 | Predict yield and biomass | DNN-SPEA2 | R²: 0.77 |
| | [68] | 2021 | Predict the yield of soybean seed | RF | AP: 93% |
| | [69] | 2022 | Predict AGB and LAI | SVM | R²: 0.811 |
| | [70] | 2022 | Estimate plant height and yield of broad beans | SVM | R²: 0.7238 |
| | [40] | 2023 | Predict biomass and yield of broad beans | Ensemble ML algorithms | R²: 0.854 |
| Potato | [71] | 2022 | Estimate biomass of potatoes | SVM, RF, GPR | R²: 0.76 |
| | [72] | 2020 | Predict the yield of potato tuber | Ridge regression | R²: 0.63 |
| Cotton | [45] | 2021 | Predict the yield of cotton | BP neural network | R²: 0.854 |
| Sugarcane | [73] | 2022 | Predict component yields of sugarcane | GBRT | AP: 94% |
| | [74] | 2022 | Predict characteristic parameters of sugarcane | RF | R²: 0.7 |
| Spring tea | [44] | 2023 | Predict fresh yield of spring tea | PLMSVs | R²: 0.625 |
| Alfalfa | [75] | 2020 | Predict yield | Ensemble ML algorithms | R²: 0.854 |
| Meadow | [76] | 2022 | Predict the yield of the meadow | CBR | R²: 0.87 |
| Ryegrass | [77] | 2021 | Predict the yield of ryegrass | PLSR, RF, SVM | RMSE: 13.1% |
| Red clover | [78] | 2021 | Estimate the yield of red clover | ANN | R²: 0.90 |
| Tomato | [79] | 2021 | Predict biomass and yield of tomato | RF, RI, SVM | rMSE: 8.8% |
| Grape | [80] | 2020 | Estimate the yield of the vineyard | ANN | RE: 21.8% |
| Apple | [81] | 2022 | Predict the yield of apple trees | SVR, KNN | R²: 0.813 |
| Almond | [82] | 2020 | Estimate yield of almond | Improved CNN | R²: 0.96 |
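Most of the studies in Table 4 share one workflow: aggregate vegetation indices to plot level, fit a regressor (RF, SVR, GPR, etc.) against measured yield, and report R². The sketch below illustrates that pattern with scikit-learn's RandomForestRegressor; the feature layout, synthetic data, and hyperparameters are placeholders for the example, not settings from any cited study.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import r2_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Placeholder data: one row per plot, columns standing in for plot-level index
# statistics (e.g., mean NDVI, GNDVI, NDRE at several growth stages);
# y stands in for measured yield in t/ha.
X = rng.uniform(0.1, 0.9, size=(200, 6))
y = 2.0 + 6.0 * X[:, 0] + 1.5 * X[:, 3] + rng.normal(0.0, 0.3, size=200)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)
model = RandomForestRegressor(n_estimators=300, random_state=0)
model.fit(X_train, y_train)

# Held-out R² is the figure of merit reported by most entries in Table 4;
# feature importances hint at which indices (or growth stages) drive the model.
print("R2:", r2_score(y_test, model.predict(X_test)))
print("Feature importances:", model.feature_importances_.round(3))
```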
Table 5. Research progress on crop-yield calculation based on high-altitude-satellite remote sensing.

| Crop Varieties | Literature | Year | Task | Network Framework and Algorithms | Result |
| --- | --- | --- | --- | --- | --- |
| Wheat | [83] | 2022 | Predict the yield of wheat | RF, KNN, BR | R²: 0.91 |
| | [84] | 2020 | Predict the yield of wheat | SVM | R²: 0.77 |
| | [85] | 2022 | Predict the yield of wheat | SVR | R²: 0.87 |
| Rice | [35] | 2022 | Predict the yield of rice | SVM, RF, ANN | MAPE: 3.5% |
| Coffee tree | [86] | 2022 | Predict the yield of coffee trees | NN | R²: 0.82 |
| Rubber | [87] | 2023 | Predict the yield of rubber | LR | R²: 0.80 |
| Cotton | [88] | 2020 | Predict the yield of cotton | RF | LCCC: 0.65 |
| Corn | [15] | 2023 | Predict the yield of corn | Ensemble ML algorithms | R²: 0.42 |
| Foodstuff | [89] | 2023 | Predict the yield of a foodstuff | LSTM | R²: 0.989 |
Table 6. Research progress on crop-yield calculation based on traditional image processing.

| Crop Varieties | Literature | Year | Task | Network Framework and Algorithms | Result |
| --- | --- | --- | --- | --- | --- |
| Kiwifruit | [93] | 2021 | Count fruit quantity | SVM | R²: 0.96 |
| | [94] | 2022 | Calculate the leaf area index of kiwifruit | RFR | R²: 0.972 |
| Corn | [95] | 2020 | Predict the yield of corn | BP, SVM, RF, ELM | R²: 0.570 |
| | [96] | 2020 | Estimate yield of corn | Regression analysis | MAPE: 15.1% |
| Apple | [97] | 2023 | Count apple fruit | Raspberry | AP: 97.22% |
| Toona sinensis | [98] | 2021 | Predict aboveground biomass | MLR | R²: 0.83 |
| Cotton | [99] | 2022 | Estimate the yield of cotton | SVM | R²: 0.93 |
Table 7. Research progress of deep learning in crop-image yield calculation.

| Crop Varieties | Literature | Year | Task | Network Framework and Algorithms | Result |
| --- | --- | --- | --- | --- | --- |
| Corn | [107] | 2022 | Detect and count corn plants | YOLOv4, YOLOv5 series | mAP: 73.1% |
| | [108] | 2020 | Detect and count corn ears | Faster R-CNN | AP: 94.99% |
| | [109] | 2020 | Detect and count corn ears | VGG16 | mAP: 97.02% |
| Wheat | [110] | 2022 | Detect and count wheat ears | SlypNet | mAP: 99% |
| | [111] | 2020 | Predict wheat yield | 3D-CNN | R²: 0.962 |
| | [112] | 2022 | Detect and count wheat ears | DCNN | R²: 0.84 |
| | [113] | 2022 | Rapidly detect wheat spikes | YOLOX | mAP: 88.03% |
| | [114] | 2022 | Detect and count wheat ears | YOLOv5s | AP: 71.61% |
| | [115] | 2022 | Detect and count wheat ears | YOLOv4 | R²: 0.973 |
| Sorghum | [116] | 2020 | Detect and count sorghum spikes | U-Net | AP: 95.5% |
| Rice | [117] | 2021 | Calculate rice seed-setting rate (RSSR) | YOLOv4 | mAP: 99.43% |
| | [118] | 2022 | Estimate rice yield | CNN | R²: 0.646 |
| Kiwifruit | [119] | 2020 | Count fruit quantity | MobileNetV2, InceptionV3 | TDR: 89.7% |
| Mango | [120] | 2020 | Detect and count mangos | YOLOv2 | Error rate: 1.1% |
| Grape | [121] | 2020 | Detect and count grape clusters | Mask R-CNN, YOLOv3 | F1 score: 0.91 |
| | [122] | 2023 | Detect and count grape clusters | YOLOv5s | mAP: 82.3% |
| | [123] | 2020 | Detect grapes | ResNet | mAP: 99% |
| | [124] | 2022 | Detect and count grape berries | SegNet, SVR | R²: 0.83 |
| | [125] | 2021 | Segment grape clusters | PSPNet | PA: 95.73% |
| | [126] | 2022 | Detect and count grape clusters | Object detection, CNN, Transformer | MAPE: 18% |
| | [127] | 2022 | Detect and count grape clusters | YOLO | MAPE: 13.3% |
| | [128] | 2020 | Detect and count grapevine flowers | SegNet | R²: 0.70 |
| Apple | [129] | 2022 | Detect and count apples | YOLOv5-PRE | mAP: 94.03% |
| | [130] | 2020 | Detect and count apples | Faster R-CNN | R²: 0.86 |
| Weed | [132] | 2021 | Estimate aboveground fresh weight of weeds | YOLOv4 | mAP: 75.34% |
| Capsicum | [133] | 2022 | Estimate fresh weight and leaf area | ConvNet | R²: 0.95 |
| Soybean | [134] | 2022 | Predict soybean yield | YOLOv3, GRNN | mAP: 97.43% |
| | [135] | 2021 | Count soybean pods | RetinaNet | mAP: 0.71 |
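The detection-and-count pattern behind most entries in Table 7 is simple: run a trained detector over a plot image, filter detections by confidence, and take the box count as the organ count. The sketch below illustrates this with a YOLOv5s model loaded from the public ultralytics/yolov5 hub repository. Note the assumptions: the image path is a placeholder, the 0.5 threshold is arbitrary, and the COCO-pretrained weights used here would not recognize wheat ears or grapes — in practice a model fine-tuned on annotated crop images would be substituted.

```python
import torch

# Load a pretrained YOLOv5s detector from the official hub repo
# (downloads weights on first run; requires an internet connection).
model = torch.hub.load("ultralytics/yolov5", "yolov5s", pretrained=True)

# Run inference on a field image; the path is a placeholder.
results = model("field_plot.jpg")

# results.xyxy[0] is an (N, 6) tensor: x1, y1, x2, y2, confidence, class.
detections = results.xyxy[0]
count = int((detections[:, 4] > 0.5).sum())  # keep detections above a confidence threshold
print(f"Objects counted: {count}")
```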
Table 8. Comparison of crop-yield calculation schemes by different technical categories.

| Image Types | Obtaining Methods | Image Preprocessing | Extracted Indicators | Main Advantages | Main Disadvantages | Representative Algorithms |
| --- | --- | --- | --- | --- | --- | --- |
| Remote sensing images | Low-altitude drone equipped with multispectral, visible light, thermal imaging, or hyperspectral cameras | Size correction; multi-channel image fusion; projection conversion; resampling | Surface reflectance; multispectral vegetation indices; biophysical parameters; growth environment parameters | Multi-channel images containing time, space, temperature, and band information; multi-channel fusion; rich information | Spatiotemporal and band attributes are difficult to exploit fully; the long shooting distance suits large-scale parcels but limits accuracy; easily affected by weather | ML, ANN, CNN-LSTM, and 3D-CNN |
| | Satellite | (as above) | (as above) | (as above) | Low spatial and temporal resolution, long revisit cycle, and pixel mixing | (as above) |
| Visible light images | Digital camera | Size adjustment; rotation; cropping; Gaussian blur; color enhancement; brightening; noise reduction; annotation; dataset partitioning | Color indices; texture indices; morphological indices | Images are easy to obtain at low cost | Only three bands (red, green, and blue), with limited information content | Linear regression, ML, YOLO, ResNet, SSD, Mask R-CNN |
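As a concrete illustration of the visible-light preprocessing column in Table 8, the OpenCV sketch below applies size adjustment, rotation, cropping, Gaussian blur, brightening, and noise reduction to a single image. It is a minimal sketch under stated assumptions: the file paths and parameter values are chosen for the example, not taken from the reviewed papers.

```python
import cv2

img = cv2.imread("sample_plot.jpg")  # placeholder path
if img is None:
    raise FileNotFoundError("sample_plot.jpg not found")

resized = cv2.resize(img, (640, 640))                          # size adjustment
rotated = cv2.rotate(resized, cv2.ROTATE_90_CLOCKWISE)         # rotation
cropped = resized[100:540, 100:540]                            # cropping
blurred = cv2.GaussianBlur(resized, (5, 5), 0)                 # Gaussian blur
brightened = cv2.convertScaleAbs(resized, alpha=1.2, beta=30)  # brightening / contrast
denoised = cv2.fastNlMeansDenoisingColored(resized, None, 10, 10, 7, 21)  # noise reduction

# Write each variant to disk, as a dataset-augmentation step might.
for name, out in [("resized", resized), ("rotated", rotated), ("cropped", cropped),
                  ("blurred", blurred), ("brightened", brightened), ("denoised", denoised)]:
    cv2.imwrite(f"aug_{name}.jpg", out)
```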