Review

Image Analysis Artificial Intelligence Technologies for Plant Phenotyping: Current State of the Art

by
Chrysanthos Maraveas
Department of Natural Resources Development and Agricultural Engineering, Agricultural University of Athens, 75 Iera Odos Street, 11855 Athens, Greece
AgriEngineering 2024, 6(3), 3375-3407; https://doi.org/10.3390/agriengineering6030193
Submission received: 23 July 2024 / Revised: 5 September 2024 / Accepted: 12 September 2024 / Published: 17 September 2024

Abstract

Modern agriculture is characterized by the use of smart technology and precision agriculture to monitor crops in real time. These technologies enhance total yields by identifying requirements based on environmental conditions. Plant phenotyping is used in solving problems of basic science and allows scientists to characterize crops and select the best genotypes for breeding, hence eliminating manual and laborious methods. Additionally, plant phenotyping is useful in solving problems, such as identifying subtle differences or complex quantitative trait locus (QTL) mapping, that are impossible to solve using conventional methods. This review article examines the latest developments in image analysis for plant phenotyping using AI, 2D, and 3D image reconstruction techniques, limiting the literature to studies published from 2020 onward. The article collects data from 84 current studies and showcases novel applications of plant phenotyping in image analysis using various technologies. AI algorithms are showcased in predicting issues expected during the growth cycles of lettuce plants, predicting soybean yields in different climates and growth conditions, and identifying high-yielding genotypes to improve yields. The use of high-throughput analysis techniques also facilitates monitoring crop canopies for different genotypes, root phenotyping, and late-time harvesting of crops and weeds. The high-throughput image analysis methods are also combined with AI to guide phenotyping applications, leading to higher accuracy than approaches that use either method alone. Finally, 3D reconstruction, and its combination with AI, is showcased for different operations in applications involving automated robotic harvesting. Future research directions are outlined, recommending the uptake of smartphone-based AI phenotyping and the use of time-series and ML methods.

1. Introduction

Food insecurity is a pertinent challenge faced in modern society despite the various initiatives implemented to mitigate the problem. The second United Nations Sustainable Development Goal (SDG) advocates for universal access to safe and nutritious food to end world hunger in all its forms and ensure food security is attained by 2030 [1]. However, the United Nations (UN) also reported that the number of people facing food insecurity increased by 391 million between 2019 and 2022, reaching 2.4 billion [1]. Subsequently, tackling food insecurity has remained a pervasive problem that has received increased attention from scientists and researchers seeking more effective solutions for sustainable food production.
Two key challenges are identified to hinder food security and the attainment of UN SDG 2 on ending world hunger. The first is the low effectiveness of current food production systems, where available arable land does not meet current demand. A study by Meraj et al. [2] revealed that arable land used in food production showed increasing usability deficits following the rise in population within urban regions. Sharma et al. [3] aligned with [2] and reported that, in other cases, water scarcity hindered the production of different crops in arid and semi-arid areas, for example, wheat (Triticum aestivum L.), which relies on high water availability. The study by [3] revealed that an insufficient water supply reduces wheat yield by closing stomata, decreasing leaf pigments, and reducing the rate of photosynthesis. Chiuyari and Cruvinel [4] also reported that food security was challenged by climate change and economic recessions, hence the need to increase agricultural production by 70% to feed the planet's inhabitants by 2050. As such, food production systems are still not able to meet current demand.
The second challenge to food security pertains to the lack of timely knowledge of crop growth and phenological development for evidence-based decision-making in agriculture. Graf et al. [5] accentuate that the accurate estimation of current and historical growth conditions is important in increasing the efficiency of using resources and inputs, including water and pesticides. Without such timely knowledge, farmers are likely to apply the wrong inputs at the wrong time, causing low yields from arable land [5]. Zahid et al. [6] add to Graf et al. [5] and posit that precision agriculture utilizing information technology is important in the precise management of crops based on their environmental conditions, management requirements, and physiological traits. As such, farmers who employ precision agriculture, in which crops are monitored in real time and production elements are precisely adjusted, can achieve high yields from arable land. Waqas et al. [7] resonated with [6], reporting that growing maize (Zea mays L.) at temperatures 1 °C above the seasonal temperature reduced crop yield by 3–13%, with higher temperatures also deteriorating the quality of the maize grains. Figure 1 illustrates the impact of cold stress on the leaves of maize at the late reproductive phase.
As shown in Figure 1, cold stress at the late reproductive phase prolongs seedling duration and cell cycles, leading to repressed chlorophyll, and also delays harvest maturity. As such, timely knowledge of management requirements and environmental conditions, obtained through precision agriculture, facilitates increasing crop yields.
Plant phenotypes describe the externally visible crop traits that comprehensively reflect information about a crop, such as its growth status and genetic features [5]. As such, plant phenotyping refers to a science associated with the agronomy, ecophysiology, and genomics of plants that facilitates understanding of the complex structure of plant traits [1]. Mangalraj and Cho [8] add to [1] and reveal that scientists rely on plant phenotyping to characterize crops when selecting the best genotype for breeding. In modern agriculture, plant phenotyping is adopted to enhance other applications, including precision agriculture and crop monitoring. Different plant phenotyping methods have been widely adopted throughout history. Ref. [2] reports that plant breeders traditionally relied on manual, laborious, and time-consuming methods. The shortcoming of such processes was that they were neither reliable nor precise, hence leading to the uptake of automatic methods that are more accurate.
Different scholars have also demonstrated the effectiveness of these plant phenotyping techniques in characterizing plant traits. Pappula-Reddy et al. [9] used a variety of image-based plant phenotyping techniques, including RGB, near-infrared, infrared, and chlorophyll fluorescence imaging, to examine the agro-physiological features of chickpea (Cicer arietinum L.) genotypes under drought stress, such as plant phenology, yield components, yield, and physiological parameters. The results revealed a strong positive correlation between manually recorded and image-based traits, hence underlining the importance of embracing image-based screening techniques in breeding methods [9]. Geng et al. [10] also demonstrated the effectiveness of high-throughput phenotyping and deep-learning methods in extracting image-based traits of rice, where the presence of cloned genes such as TACI, Ghd7.1, Ghd7, and Hd1, and their effects at different stages of development, was reported. A different study by [5] outlined various image phenotyping techniques, such as satellite remote sensing, which were used to quantify the growth conditions of wheat, where the Green Leaf Area Index (GLAI) and Canopy Chlorophyll Content (CCC) were extracted. Such varied studies [5,10] reveal the wide applications of image phenotyping techniques in plant breeding, where they provide information about the traits of plants, hence improving breeding operations.
The author observes that while many disjointed studies have demonstrated the use of image analysis phenotyping for different plant traits, review articles that examine a broad range of such studies are not available. The examination of various review articles on the research topic also indicates existing gaps that are addressed in the current review article. For example, Ref. [11] only focused on studies using deep learning convolutional neural networks (CNNs) for plant phenotyping and considered studies published between 2015 and 2020. A different review by [12] was also limited in scope, focusing on summarizing the features, benefits, and shortcomings of optical sensing and data-processing methods applied in different plant phenotyping scenarios. As such, that review was only directed at guiding the selection of optical sensors and data-processing methods that acquire plant leaf phenotypes cost-effectively and rapidly. A third review article by [13] further focused on UAS-based phenotyping platforms and examined the state of the art in their deployment, collection, curation, storage, and data analysis. The synthesis of current review articles on plant phenotyping indicates existing gaps in the scope of the methods and the publication period. The core focus of this review article is to investigate the latest developments in image analysis for plant phenotyping using AI, ML, software, and other algorithms, limiting the literature to studies published from 2020 onward. The article also examines the latest image analysis technology developments, including three-dimensional (3D) approaches. The novelty of this research is that it is the first review study to undertake a comprehensive approach in examining the current state of the art in using artificial intelligence (AI), machine learning (ML), and other algorithms for image analysis in plant phenotyping from 2020 to date.
The exclusion of studies published before 2020 ensures that only current information is reported in the review article.
The objectives of the review article are:
i. To investigate the latest developments, benefits, limitations, and future directions of image analysis phenotyping technologies based on AI, ML, 3D imaging, and software solutions.
ii. To investigate the challenges associated with the use of different image analysis phenotyping technologies based on AI, ML, 3D imaging, and software solutions.
The rest of the article is organized into four sections. The methods and materials are showcased in the subsequent section, and results collected from the review of articles are presented. The discussion is then undertaken where the research objectives are reexamined. The conclusion finally showcases the findings from the review article.

2. Materials and Methods

This study employed the systematic literature review method to facilitate data collection. The choice of systematic literature review (SLR) arose from its emphasis on a transparent and reproducible process in reviewing literature to address a specific problem [14]. To undertake the SLR, an explicit and stepwise process was adhered to as showcased in the subsequent section.

2.1. Literature Search

The second phase of the SLR involved the development of a search strategy in which parameters guiding the data search were outlined. First, appropriate scientific research databases were selected to facilitate the search for articles. These databases were preferred because they provide access to high-quality peer-reviewed articles [15]. The selected database publishers included Science Direct, Scopus, Springer Nature, and MDPI.
Second, keywords were generated from the research question to facilitate the search for relevant articles. The selected keywords included the following: “advanced”, “AI algorithms”, “ML algorithms”, “image analysis”, “plant phenotyping”, “challenges”, and “future directions”. The keywords were further combined using the Boolean operators AND/OR to create search phrases. The phrases were the actual queries used in the literature databases and included the following:
  • “advanced” AND “AI algorithms” AND “plant phenotyping” AND “challenges”
  • “future directions” AND “AI algorithms” AND (“image analysis” OR “plant phenotyping”)
  • “challenges” AND “AI algorithms” AND (“image analysis” OR “plant phenotyping”)
The use of the search phrases expanded the scope of the search from the various databases [16]. As a result, a wide range of resourceful articles were identified and examined in this study.
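The expansion of keyword groups into full Boolean query strings can be sketched programmatically. The snippet below is purely illustrative (not the authors' actual tooling): it treats each AND slot as a list of OR-alternatives and emits one flat query per combination.

```python
from itertools import product

def build_queries(groups):
    """groups: list of lists; each inner list holds OR-alternatives
    for one AND slot. Returns the flat list of quoted query strings."""
    queries = []
    for combo in product(*groups):
        queries.append(" AND ".join(f'"{kw}"' for kw in combo))
    return queries

# Hypothetical slot layout mirroring the phrases listed above
queries = build_queries([
    ["challenges", "future directions"],
    ["AI algorithms"],
    ["image analysis", "plant phenotyping"],
])
for q in queries:
    print(q)  # e.g. "challenges" AND "AI algorithms" AND "image analysis"
```

In practice, most database search interfaces accept the parenthesized OR form directly, so this expansion is only needed for engines that lack grouping support.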

2.2. Study Selection

In the third phase, the literature was selected. The use of keywords and Boolean operators led to the generation of 840 articles from the selected databases. To narrow the search, inclusion and exclusion criteria were defined. The research considered studies published in the last four years (2020–2024) to ensure current and updated information was evaluated in the review article. Additionally, it was important to limit the studies to focus on addressing the formulated research question in the study. The inclusion and exclusion criteria considered in the search are showcased in Table 1 below.
The inclusion criteria also focused on primary studies and excluded all secondary and critical review papers to ensure the findings were supported by empirical evidence. Preference was also given to articles published in English to eliminate the need for third-party translation, which would require more time. Full-text articles were also prioritized to ensure comprehensive findings were selected.
The exclusion criteria specified in this study excluded articles published in non-English languages such as Italian, Spanish, German, and Chinese. Studies that were beyond the scope of the research and which did not elaborate on the current state-of-the-art in using AI algorithms for image analysis in plant phenotyping were not considered.
The adoption of the inclusion and exclusion criteria reduced the search results further. During the sorting and filtering processes, 340 duplicate articles were excluded. The remaining 500 articles were further screened to ensure they adhered to the publication period. As a result, 200 articles published before 2020 were excluded, and 300 articles remained. The next phase focused on the scope of the study to facilitate addressing the research question, which led to the elimination of 160 articles that did not cover the current state-of-the-art use of AI algorithms for image analysis in plant phenotyping. In the subsequent process, 140 articles were screened to ensure they were full-text and complete studies. A further 56 articles were eliminated as they only provided abstract and review information. In total, this study reviewed 84 articles.
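The screening arithmetic above can be checked stage by stage; the short sketch below simply replays the reported counts.

```python
# Sanity check of the screening arithmetic reported above: each stage
# removes a stated number of records from the running total.
identified = 840
removed = {
    "duplicates": 340,
    "published before 2020": 200,
    "outside review scope": 160,
    "abstract/review only": 56,
}
remaining = identified
for stage, n in removed.items():
    remaining -= n
    print(f"after removing {stage}: {remaining}")
# 840 - 340 - 200 - 160 - 56 = 84 articles included
```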

2.3. Reporting the Findings

The screening process of the articles using the inclusion and exclusion criteria led to a total of 84 articles that met the requirements to address the research question. The Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) flowchart demonstrating the filtration process of the articles is showcased in Figure 2 below.
As showcased in the PRISMA flowchart, this review article considered 84 experimental studies that detailed various AI, ML, and 3D methods in image analysis.

3. Results

3.1. Overview of Methods Used in the Selected Articles

The studies reviewed in the research are showcased in Appendix A.
The evaluation of the selected studies indicated that the quantitative experimental method was employed in all of them. However, individual differences emerged in the state-of-the-art techniques adopted for image analysis in plant phenotyping. The analysis demonstrated that a range of techniques was used, including deep learning algorithms, unmanned aerial vehicle (UAV) image analysis, high-throughput imaging, hyperspectral imaging, convolutional neural networks, 3D reconstruction from multi-view images, transfer learning, digital imaging, and multiple image sensors. The findings related to the different studies are detailed in the subsequent sections.

3.2. Latest Techniques Using AI and ML Algorithms in Plant Phenotyping

The focus of the review article was to examine the latest developments, benefits, limitations, and future directions of image analysis phenotyping technologies based on AI, high-throughput phenotyping (HTP) analysis, and 3D image reconstruction. The AI category further includes classifications of unspecified algorithms and software solutions.

3.2.1. AI (ML) Algorithms

Random Forest (RF) algorithms were adopted in plant phenotyping. Ref. [17] employed the RF algorithm to extract crop leaf pixels from multi-source RGB imagery captured using different platforms such as cameras, smartphones, and UAVs. Ref. [18] demonstrated how ML algorithms such as random forests (RF), gradient boosting (GB), and deep neural networks (DNN) could enhance soybean yield prediction under different climate and growth conditions. The noteworthy finding from [18] was the use of classification machine learning algorithms and a transfer learning approach in genotype selection and the categorization of soybean yield for screening high-yield varieties. As such, Ref. [18] argued that classification ML techniques were the most useful in crop breeding applications due to their effectiveness in identifying high-yielding genotypes and improving yield prediction. Ref. [19] also used RFs in a hyperspectral imaging (HSI) pipeline to detect chlorophyll distribution, hence enhancing crop management strategies.
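Classifiers such as the RF in [17] consume per-pixel color features. A standard hand-crafted feature for separating green vegetation from soil in RGB imagery is the excess-green index, ExG = 2g − r − b, computed on chromatic coordinates. The sketch below is illustrative only and is not the pipeline of [17]; the threshold value is an assumption.

```python
# Illustrative sketch (not the pipeline of [17]): the excess-green index
# on chromatic coordinates is a common per-pixel feature for separating
# vegetation from background; an RF would typically consume such features.
def excess_green(r, g, b):
    total = r + g + b
    if total == 0:
        return 0.0
    rn, gn, bn = r / total, g / total, b / total  # chromatic coordinates
    return 2 * gn - rn - bn

def is_leaf_pixel(r, g, b, threshold=0.1):  # threshold chosen for illustration
    return excess_green(r, g, b) > threshold

print(is_leaf_pixel(60, 180, 50))   # strongly green pixel -> True
print(is_leaf_pixel(120, 100, 90))  # brownish soil pixel -> False
```

A trained RF generalizes this idea: instead of one fixed threshold on one index, it learns many thresholds over many color features from labeled pixels.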
The evaluation of the results also indicated that other classification algorithms, such as support vector machines (SVMs), were essential in different phenotyping tasks, such as crop counting [18]. The research by [18] reported that problems were faced in counting corn plants (Zea mays L.) using manual methods, especially over large tracts of land. To address the problem, multispectral images were acquired using cameras embedded in unmanned aerial vehicles (drones), and the dataset was processed using digital image processing techniques. SVMs were then used to classify the multispectral images, with accuracies above 84% reported when counting maize plants [18]. The studies indicated that classification image analysis methods using SVMs were applied on large-scale farms to undertake crop counting and enhance crop yield under different climates and growth conditions. Ref. [20] employed SVMs coupled with Recursive Feature Elimination (RFE) and Boruta algorithms for high-light stress phenotyping of tomato genotypes using chlorophyll fluorescence and reported high reliability. Ref. [21] reiterated [20] and used a combination of SVM, CNN, and RF algorithms in UAV-based imaging to estimate plant height. The results indicated that the techniques were effective in monitoring plant features such as height, yield, and biomass. Ref. [22] demonstrated the use of regression trees and ANNs in improving crop yield by correlating hyperspectral data with photosynthetic parameters, hence leading to sustainable crop development. In Ref. [23], the AGSS algorithm was used to detect root length and diameter, where an accuracy of 90% was reported, hence enhancing the repair of root images in small embedded systems. In Ref. [24], the DFU-Net model was adopted in the automated detection of seed melons and cotton plants, providing valuable technical support for intelligent crop management at 92% accuracy.
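Once a per-pixel classifier (e.g., an SVM) has produced a binary vegetation mask, counting plants reduces to labeling connected components. The flood-fill sketch below is a generic baseline for that step, not the exact counting pipeline of [18].

```python
# Count plants as 4-connected components of a binary vegetation mask,
# e.g. the output of an SVM pixel classifier. Iterative flood fill.
def count_plants(mask):
    rows, cols = len(mask), len(mask[0])
    seen = [[False] * cols for _ in range(rows)]
    count = 0
    for i in range(rows):
        for j in range(cols):
            if mask[i][j] and not seen[i][j]:
                count += 1
                stack = [(i, j)]          # flood-fill this component
                seen[i][j] = True
                while stack:
                    y, x = stack.pop()
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if 0 <= ny < rows and 0 <= nx < cols \
                           and mask[ny][nx] and not seen[ny][nx]:
                            seen[ny][nx] = True
                            stack.append((ny, nx))
    return count

mask = [
    [1, 1, 0, 0, 1],
    [1, 0, 0, 0, 1],
    [0, 0, 1, 1, 0],
]
print(count_plants(mask))  # 3 separate plants
```

Real pipelines add morphological cleanup and split touching canopies, but the component-labeling core is the same.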
The CNN algorithm was adopted for plant phenotyping in [25] to obtain the maize seedling emergence rate and speed of leaf emergence, where an accuracy of 99.53% was reported. Ref. [26] added to [25] and demonstrated the effectiveness of R-CNN in the accurate reconstruction of fruits and branches in guava plants. In Ref. [27], a CNN was combined with a 3D instance segmentation algorithm for apple counting and volume segmentation to extract traits from orchards. Ref. [28] reiterated [27] and revealed that the R-CNN algorithm achieved a high accuracy of 97.3% in determining fruit numbers. A similar case was [29], where non-destructive measurement phenotype technology was adopted in watermelon seedling breeding, management, and quality breeding. Ref. [29] showed that the Mask R-CNN algorithm delivered good measurement performance for watermelon plug seedlings from the one-true-leaf to the three-true-leaf stage. Ref. [30] added to [29], where deep convolutional neural networks (CNNs) were used to enhance high-spatiotemporal-resolution UAV imagery to characterize the effects of genetics and the environment on key traits of miscanthus crops. In Ref. [31], the CNN algorithm was adopted with 3D imaging to enhance crop breeding and precision agriculture capabilities. A 3D-CNN was also used in the diagnosis of yellow rust infection in spring wheat [30].
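The CNN models cited above all build on one primitive: sliding a learned kernel over an image. The minimal sketch below shows a single 2D (valid) convolution with a hand-picked edge kernel; real models stack many such layers with learned kernels, nonlinearities, and pooling.

```python
# Minimal sketch of the core CNN operation: a single 2D (valid)
# convolution of a kernel over a grayscale image, pure Python.
def conv2d(image, kernel):
    kh, kw = len(kernel), len(kernel[0])
    oh = len(image) - kh + 1
    ow = len(image[0]) - kw + 1
    out = [[0] * ow for _ in range(oh)]
    for i in range(oh):
        for j in range(ow):
            out[i][j] = sum(
                image[i + u][j + v] * kernel[u][v]
                for u in range(kh) for v in range(kw)
            )
    return out

# A simple vertical-edge kernel applied to a step image:
image = [
    [0, 0, 1, 1],
    [0, 0, 1, 1],
    [0, 0, 1, 1],
]
edge = [[-1, 1], [-1, 1]]
print(conv2d(image, edge))  # responds strongly at the 0->1 boundary
```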
In Figure 3, deep learning algorithms were showcased classifying large flowers, including canola and pea, where rectangles identify noise and ovals represent missed flowers.

3.2.2. Unspecified AI Algorithms

The review of different studies indicated that unspecified deep and machine learning algorithms were adopted for image analysis and plant phenotyping. The first such approach regarded the use of deep and machine learning algorithms, as demonstrated in numerous studies. The evaluation of the studies was undertaken to identify how various techniques were employed to derive plant traits in unique phenotyping contexts. The results underlined the importance of deep learning in monitoring and managing plant growth, where issues in different growth cycles were addressed [18]. The study by [33] developed a predictive model for lettuce growth and trained it to improve the accuracy of predicted lettuce growth images and phenotypic indices. The insight from [18] showed high accuracy in predicting lettuce phenotypic indices, with errors of less than 0.55% for geometric indices and less than 1.7% for color and texture indices. The role of deep learning in predicting the issues that would be faced in each phase of the lettuce life cycle was reported to enhance quality and efficient lettuce production [18]. The insights also indicated that deep learning algorithms could enhance crop sustainability by enabling scientists to address issues faced in different growth phases, hence promoting higher productivity. Ref. [34] also demonstrated the use of successive projections algorithm (SPA) classification in classifying tea plant stresses with 98% accuracy. The uniqueness of [34] was the adopted algorithm, where a continuous wavelet projections algorithm (CWPA) was developed from continuous wavelet analysis (CWA) and the SPA to generate an optimal spectral feature set for crop detection. In Ref. [35], the Otsu algorithm was used to accurately obtain the target region of maize stems, demonstrating feasibility for the assessment of field conditions.
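Otsu's method, used in [35] for stem-region extraction, picks the gray-level threshold that maximizes the between-class variance of the foreground/background split. The pure-Python sketch below illustrates the textbook algorithm; it is not the exact implementation of [35], and the test data are synthetic.

```python
# Sketch of Otsu's method: choose the threshold t maximising the
# between-class variance w0*w1*(mu0 - mu1)^2 over the grey histogram.
def otsu_threshold(pixels, levels=256):
    hist = [0] * levels
    for p in pixels:
        hist[p] += 1
    total = len(pixels)
    best_t, best_var = 0, -1.0
    for t in range(1, levels):
        w0 = sum(hist[:t])          # background pixel count
        w1 = total - w0             # foreground pixel count
        if w0 == 0 or w1 == 0:
            continue
        mu0 = sum(i * hist[i] for i in range(t)) / w0
        mu1 = sum(i * hist[i] for i in range(t, levels)) / w1
        var_between = w0 * w1 * (mu0 - mu1) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, t
    return best_t

# Two well-separated intensity clusters: the chosen threshold
# falls between them, isolating the brighter "stem" pixels.
pixels = [10, 12, 11, 13] * 5 + [200, 205, 210, 198] * 5
print(otsu_threshold(pixels))
```

This O(levels²)-ish form is fine for illustration; production code uses the cumulative-sum formulation for a single pass over the histogram.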
In Ref. [36], VGG16, VGG19, and ResNet-50 were used in classifying different types of weeds, including deep, corn, tomato, and cotton weeds, leading to significant performance improvements. In Ref. [37], a Generative Adversarial Network (GAN) and U-Net were used to segment plant regions and biomass areas for emergence counting and crop biomass estimation, leading to high yield. Ref. [38] further combined U-Net with DeepLabv3+, PSPNet, and HRNet networks to segment soybean images, hence improving the robustness of reconstruction. The low-light image-based crop and weed segmentation network (LCW-Net) was used to segment crops and weeds in sugar beet fields, covering carrot weeds, grass weeds, and dicot weeds. Ref. [26] used PointNet and YoloV5, demonstrating deep-learning regression techniques for predicting grain counts in sorghum panicles. The study combined RGB images with a deep learning regression framework for point clouds, where 147 sorghum panicles were considered. The results showed mean absolute percentage errors of 6.5% and 6.8% on low-resolution point cloud datasets [26].
Further work in [28] demonstrated clustering algorithms developed for corn population point clouds, enabling accurate extraction of the 3D morphology of individual crops. The study by [28] applied a similar approach to [18,26], where multispectral image data were first obtained using drones and corn plant models were then generated using SfM algorithms. The subsequent phase involved simplifying the point clouds and extracting crop point clouds using color and distance thresholds. Clustering algorithms then segmented the corn point cloud data into individual corn plants in the field. The results from [28] revealed an accuracy of 93.1% in the segmentation of corn plants, hence providing an automated, low-cost solution to measure phenotypic information in corn accurately.
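The distance-threshold clustering idea behind splitting a field point cloud into individual plants can be sketched with a simple region-growing (single-linkage) scheme: points closer than `eps` are grown into the same cluster. This is a 2D toy illustration under assumed coordinates; the actual pipeline in [28] also uses color thresholds and SfM-derived 3D points.

```python
import math

# Region-growing / single-linkage clustering with a distance threshold:
# points within `eps` of a cluster member join that cluster.
def cluster_points(points, eps):
    unassigned = set(range(len(points)))
    clusters = []
    while unassigned:
        seed = unassigned.pop()
        cluster, frontier = [seed], [seed]
        while frontier:
            p = frontier.pop()
            near = [q for q in unassigned
                    if math.dist(points[p], points[q]) <= eps]
            for q in near:
                unassigned.remove(q)
                cluster.append(q)
                frontier.append(q)
        clusters.append(cluster)
    return clusters

# Two plants ~3 m apart; points within each plant are ~0.2 m apart
pts = [(0.0, 0.0), (0.1, 0.1), (0.2, 0.0),
       (3.0, 0.0), (3.1, 0.2)]
print(len(cluster_points(pts, eps=0.5)))  # 2 clusters = 2 plants
```

Choosing `eps` between typical within-plant point spacing and between-plant row spacing is what makes the segmentation work; the brute-force neighbor scan here is O(n²) and would use a spatial index at field scale.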
A further application was identified by [32], where C-RGB, MS1, and MS2 ML classification algorithms were adopted to gather flowering information to facilitate escaping abiotic stresses such as heat and drought. In their study, Ref. [32] used multispectral imaging sensors, such as RGB and multispectral cameras, as well as both supervised and unsupervised techniques, to monitor the flowering intensity of canola, chickpea, pea, and camelina. The results from [32] showed that features extracted from RGB drone images could accurately detect and quantify the large flowers of spring canola, winter canola, and pea, but could not process the data for chickpea and camelina. The findings indicate limitations in extracting flower features using classification algorithms, where only some types of data are suitable. Figure 3 showcases the use of clustering algorithms to detect winter canola and pea flowers.
The findings from studies indicated that using classification and deep learning algorithms facilitated addressing different laborious tasks such as manual counting of maize crops, grain counting from sorghum panicles, classification of tea plant stress, and prediction of best climates and conditions of crop growth. The results underscored the relevance of ML and deep learning in applications such as crop yield prediction, estimating plant density, monitoring the growth of crops such as lettuce throughout their life cycles, and measuring the performance of miscanthus perennial crops.
In addition to the traditional ML and deep learning algorithms adopted for plant phenotyping, the review also examined emerging and recently published alternatives. In Ref. [32], high-resolution time-series data from field soybean populations were obtained using UAVs, where red, green, and blue (RGB) and infrared images were used as inputs to construct a multimodal image segmentation model. The resulting RGB and infrared feature fusion segmentation network (RIFSeg-Net) outperformed traditional deep learning segmentation networks, with a precision of 0.94 and an F1-score of 0.93. As such, the proposed method has high practical value for germplasm resource identification and could become a practical tool for genotypic differentiation analysis and the selection of target genes. The findings by [32] were reiterated in a study by [39], where multi-source, time-series, and deep learning methods were used in the classification of poplar seed varieties and drought stress. The results showed that the LSTM method achieved a classification accuracy of 96.56%, while the ResNet18-LSTM and ResNet18-CBAM-LSTM generated accuracies of 98.12% and 99.69%, respectively [39]. The findings indicated that combining time-series and deep learning methods improved classification performance when monitoring multiple varieties of poplar seedlings under different drought conditions.
In further studies, the use of smartphone-based applications for plant phenotyping was also identified. In Ref. [40], a geographic-scale monitoring approach for counting coffee cherries was developed using an AI-powered citizen science approach. The method used basic smartphones to capture images of coffee trees, and a model was trained and validated using the YOLO (You Only Look Once) approach to detect cherries. The results in [40] showed an R2 of 0.59 in Peru and 0.71 in Colombia, where coffee was grown under different biogeoclimatic conditions. The findings indicated that the smartphone-based method could be applied at broader scales and transferred to different regions and countries, and that photo-based phenotypic monitoring using smartphones was suitable for low-income regions. A different study by [41] supported [40] and revealed a smartphone-based, whole-plant, on-device phenotyping Android application that could assess 25 leaf traits, 5 stem traits, and 15 plant traits from images. The study used a DeepLabV3+ model for image segmentation and revealed that maize phenotypes could be measured in real time to benefit future crop breeding. Another study by [42] reiterated [35], where an open-source Android app, PhenoApp, was developed to guide the digital recording of phenotypical data in greenhouses and in the field. The findings from [42] indicated that PhenoApp allowed descriptors to be customized and improved the efficiency of digital data acquisition in genebank management, hence contributing to breeding research. The insights from the studies indicated that using smartphones for plant phenotyping was a promising future trend.

3.2.3. AI Algorithms (Software)

The AI analysis also involved the use of software applications in which 2D image reconstruction was undertaken using deep learning algorithms. The synthesis of the studies indicated that the applications were based on ML and high-throughput phenotyping methods.
In Ref. [43], vision software known as AIseed was developed based on machine learning to automatically capture plant traits, such as the shape, texture, and color of different seeds, hence guiding the assessment of seed quality to improve breeding and yield. Further inspection of [43] indicated that the motivation for AIseed was the need to assess seed quality automatically at large scale, where 54 seed features were evaluated to enhance seed quality detection and prediction. Ref. [43] demonstrated that the AIseed vision software offers numerous advantages, including its non-destructive nature, high performance in the extraction of phenotypic features, and high speed and precision in performing analysis. Figure 4 showcases the AIseed software interface used in the analysis of seed phenotyping traits.
Figure 5 illustrates the original and processed seed images using the AIseed software. The software generated satisfactory results that facilitated the extraction of plant phenotyping traits.
In a further study, Ref. [44] added to [43] by demonstrating that software systems focused on high-throughput phenotyping (HTP) techniques could also be developed based on computer vision algorithms. Ref. [44] developed CottonSense, an advanced HTP system addressing the challenges of deployment across different growth phases of cotton crops, including the square, flower, closed boll, and open boll, yielding a 79% accuracy score in segmenting the different fruit categories. The similarity between [43,44] was the use of computer vision algorithms to develop software that automatically assesses plant traits. The studies also showed that such software was an effective tool in plant phenotyping, especially in large-scale operations. In Ref. [27], the seedscreener software was used to phenotype seeds automatically, achieving a high accuracy rate of 94%.

3.2.4. HTP Analysis

Further analysis also showed that researchers relied on high-throughput image analysis methods for plant phenotyping, with most studies employing diverse image analysis phenotyping models. In Ref. [45], a high-throughput field phenotyping method using a process-based biophysical model was adopted to detect the early vigor of 231 cereal oat lines in a commercial breeding program. The results in [45] showed that the phenotyping method enabled multi-metric evaluations of spatial and temporal variability in the cereals, hence improving crop breeding applications. Ref. [46] adopted aerial phenotyping techniques based on the normalized difference vegetation index (NDVI) to enhance sugarcane breeding by identifying genotypes with superior yield traits under different watering conditions. Ref. [20] added to [46] and showed that 3D and NDVI techniques had high utility in evaluating the lifespan of pepper crops. Further analysis showed that other high-throughput imaging methods included spatial aggregation approaches [47]. In that study, Ref. [47] demonstrated that spatial aggregation approaches were effective in the airborne measurement of sun-induced chlorophyll fluorescence (SIF) to monitor plant functions at different scales.
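The NDVI used in [46] and [20] is a simple band ratio computed per pixel from near-infrared and red reflectance. The sketch below shows the standard formula in Python; the zero-denominator convention is an implementation choice, not something specified in the cited studies.

```python
def ndvi(nir: float, red: float) -> float:
    """Normalized Difference Vegetation Index for one pixel.

    NDVI = (NIR - Red) / (NIR + Red), ranging from -1 to 1; dense green
    canopy pushes values toward 1, bare soil toward 0.
    """
    denom = nir + red
    if denom == 0.0:
        return 0.0  # convention: map undefined pixels to 0
    return (nir - red) / denom


def ndvi_map(nir_band, red_band):
    """Apply NDVI pixel-wise to two equally sized reflectance rasters."""
    return [[ndvi(n, r) for n, r in zip(nrow, rrow)]
            for nrow, rrow in zip(nir_band, red_band)]
```

Plot-level statistics of such an NDVI map (mean, percentiles) are what aerial phenotyping pipelines typically compare across genotypes.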
In Ref. [48], it was shown that low-light image-based crop and weed segmentation networks were accurate in late-time harvesting applications and outperformed other state-of-the-art methods. In a further study, a large-scale cable-suspended field phenotyping system was proposed to measure the bidirectional reflectance distribution [49]. The results in [49] showed that hot spots were visible in the backscatter direction at visible and near-infrared bands at the largest sensor zenith angle. Ref. [50] demonstrated the high accuracy of image-based semi-automatic root phenotyping in examining root morphology and branching features.
The results in [50] revealed that interspecific advantages for maize occurred within 5 cm from the root base in the nodal roots, and soybean inhibition occurred within 20 cm from the root base. Ref. [32] also demonstrated the effectiveness of a multimodal image segmentation model comprising an RGB and infrared feature fusion segmentation network (RIFSeg-Net) in assessing the rate of soybean canopy development on a large scale, a task that is laborious and time-consuming when performed manually. The application of the various image analysis phenotyping models facilitated diverse applications, including monitoring soybean canopies for different genotypes, root phenotyping of field-grown crops, and late-time harvesting of crops and weeds.
In Ref. [51], the CIAT Pheno-I image analysis framework was used to extract plot-level vegetation canopy metrics of cassava based on its effectiveness and speed. In Refs. [26,27], CCO imaging techniques were used to obtain canopy features of different types of crops from UAV images. In Ref. [38], the watershed algorithm was proposed for bean image segmentation, where it achieved higher performance.

3.2.5. HSI Analysis

The reviewed studies further showcased the use of hyperspectral imaging (HSI) and related spectral techniques in plant phenotyping. In Ref. [52], the influence of fractional vegetation cover (FVC) on digital terrain model (DTM) reconstruction accuracy was investigated. The study used UAV imagery to examine plant height features and reported that the accuracy of a DTM constructed using an inverse distance weighted algorithm was influenced by FVC conditions. The study by [53] reiterated the findings of [52], where UAV imagery obtained at three growth stages using complete and incomplete data was used to compare four rapeseed height estimation methods. The results showed that where complete data were used, optimal results were obtained with an R2 of 0.932 and a root mean square error (RMSE) of 0.026 m. Further work by [47] developed and tested two spatial aggregation approaches to make airborne sun-induced chlorophyll fluorescence (SIF) data usable in experimental settings when monitoring plant functioning at different scales. The first spatial aggregation approach aggregated pixel values directly on SIF maps, while the second aggregated at-sensor radiance before SIF retrieval. The results showed that the second approach generated a better representation of the ground truth, R2 = 0.61, compared to the first approach, R2 = 0.55, when combined with robust outlier detection and weighted averaging [54]. Bhadra et al. [55] showed that UAV-based spectral sensors and advanced ML models could be adapted to accurately estimate leaf chlorophyll concentration (LCC) and average leaf angle (ALA), hence bridging crop breeding and precision agriculture. The work by [39] demonstrated the effectiveness of a transfer learning-based dual-stream neural network, PROSAIL-Net, in estimating LCC from UAV images, with R2 values of 0.66 and 0.57 for LCC and ALA, respectively.
Figure 4 illustrates the data collection scenarios using the UAV imaging setup adopted in the study for chlorophyll concentration estimation.
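The R2 and RMSE values quoted throughout this subsection follow their standard definitions, sketched below in plain Python for paired observed and predicted trait values (e.g., measured versus estimated plant heights).

```python
import math


def rmse(observed, predicted):
    """Root mean square error between paired observations and predictions."""
    n = len(observed)
    return math.sqrt(sum((o - p) ** 2 for o, p in zip(observed, predicted)) / n)


def r_squared(observed, predicted):
    """Coefficient of determination: 1 - SS_res / SS_tot."""
    mean_obs = sum(observed) / len(observed)
    ss_res = sum((o - p) ** 2 for o, p in zip(observed, predicted))
    ss_tot = sum((o - mean_obs) ** 2 for o in observed)
    return 1.0 - ss_res / ss_tot
```

An R2 of 0.932 with an RMSE of 0.026 m, as in [53], thus means the model explains 93.2% of the variance in measured heights with a typical error of about 2.6 cm.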
Further analysis also indicated that combining ML algorithms with high-throughput phenotyping image analysis techniques led to higher performance. In Ref. [18], the use of multispectral image analysis for maize plant counting and assessment was showcased. The dataset of spectral images was gathered from flights over an agricultural area, and orthomosaics were processed in the red, green, and blue spectral channels. Classification of the image data was then undertaken using SVM classifiers. The study demonstrated that combining the multispectral image analysis method with the SVM technique led to an accurate and timely counting methodology for maize plants, with an accuracy rate of 88.47%, that guided cultivation to ensure high yields [18]. In the study by Zhu et al. [19], close-range hyperspectral imaging (HSI) combined with ML techniques such as random forests (RF) and Savitzky-Golay standard normal variate (SG-SNV) preprocessing was adopted to quantify the chlorophyll distribution of basil crops. The insights from [19] indicated that canopy HSI methods presented challenges due to the complex interaction of illumination with the canopy geometry. Combining HSI with the ML techniques led to satisfactory chlorophyll distribution maps consistent with the observed chlorophyll levels, hence being applicable to the timely monitoring of the chlorophyll status of whole canopies. The findings from [51] also reiterated [19], demonstrating the use of multispectral imagery and ML techniques to accurately predict cassava traits. The insights indicated that coupling the two methods could facilitate the rapid extraction of phenotypic information. The analysis of these case studies [19,55] indicated that ML techniques were adopted at the image-processing phase to overcome the limitations of hyperspectral imaging techniques.
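The SNV step in the SG-SNV preprocessing used by [19] normalizes each spectrum individually to suppress scatter effects before a model such as a random forest is fitted. A minimal sketch is shown below (using the sample standard deviation; some toolboxes divide by n instead). In the cited study, this would be preceded by Savitzky-Golay smoothing, which is omitted here.

```python
import math


def snv(spectrum):
    """Standard Normal Variate: centre and scale one reflectance spectrum.

    Each value is shifted by the spectrum's own mean and divided by its
    standard deviation, reducing multiplicative scatter effects before
    chemometric modelling.
    """
    n = len(spectrum)
    mean = sum(spectrum) / n
    # Sample standard deviation (n - 1); convention varies across toolboxes.
    std = math.sqrt(sum((x - mean) ** 2 for x in spectrum) / (n - 1))
    return [(x - mean) / std for x in spectrum]
```

After SNV, every spectrum has zero mean and unit variance, so the downstream model learns spectral shape rather than overall brightness.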
The findings from [51] supported [32], where it was demonstrated that combining ML approaches for predicting plant traits with multispectral imagery from unmanned aerial vehicles was effective in measuring canopy metrics and vegetation index traits of cassava at different time points during the growth cycle. The evaluation of the studies also highlighted various limitations of the hyperspectral imaging techniques, including subjectivity in their application [32] and the complicated interaction of illumination with canopy geometry [19]. However, the analysis revealed that high-throughput phenotyping techniques could be combined with ML algorithms to achieve higher accuracy in identifying plant traits. The findings suggested that future applications would see more combinations of different high-throughput phenotyping techniques with deep learning and ML algorithms.

3.2.6. 3D Image Reconstruction

The final application area involved 3D image reconstruction, as demonstrated in diverse studies. The evaluation of these studies showed that relying on 3D reconstruction techniques ensured fast and accurate phenotyping of plant traits by setting the scene of the agricultural land, and that 3D image reconstruction was feasible across diverse plant phenotyping applications. Ref. [56] demonstrated a stereo-vision-based system mounted on a mobile platform to estimate plant height. In their research, Ref. [56] argued that computer vision-based systems overcame manual crop monitoring approaches, which were time-consuming and labor-intensive. In Ref. [20], point-cloud preprocessing was coupled with different techniques, including mesh generation and the optimization of feature-point positions, to achieve highly accurate and consistent reconstruction of maize leaves. In Ref. [28], X-ray MicroCT was used to measure the internal traits of bean seeds and establish their morphological traits. In Ref. [26], neural radiance fields (NeRF) were coupled with Instant-NGP and Instant-NSR, and the insights showed that high-quality reconstruction results were obtained, comparable to those of reality capture software. In Ref. [57], a system was developed using VisualSFM and Agisoft PhotoScan to perform 3D image reconstruction of red pepper plants (Capsicum annuum L.) and undertake automatic analysis. The study used a Kinect v2 with a depth sensor and a high-resolution RGB camera to obtain accurately reconstructed 3D images that were compared with conventional images in aspects such as plant height, leaf number, and width. The results in [57] showed an error of 5 mm or less when analyzing the 3D images, hence their suitability for plant phenotypic analysis. Ref. [58] added to [57] and reported that a 3D reconstruction method based on SfM combined with MVS achieved a higher accuracy of 43.3% in wheat phenotyping. Ref. [38] also combined MVS and SfM and reported the successful 3D reconstruction of dummy and root systems. Ref. [58] supported [38] and revealed high accuracy using SfM in phenotyping vegetables by recording video clips using smartphones. Ref. [27] also adopted SfM to automatically detect and measure plant height, petiole inclination, and single-leaf area. Ref. [26] added to [57] by showing that 3D reconstruction techniques were important in the development of a garden robot using an FCSN that navigated towards rose bushes and clipped them guided by pruning rules. In Ref. [27], 3D reconstruction was adopted to recover the morphology of plants and segment branches, hence leading to a highly accurate method for pruning rose plants.
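Several of the reconstruction studies above derive plant height from a point cloud. One common, robust approach (a generic sketch, not taken from any specific cited study) is to measure the spread between a low and a high percentile of the points' z-coordinates rather than max(z) - min(z), so that stray reconstruction noise does not inflate the estimate; the percentile settings here are illustrative.

```python
def plant_height(z_values, ground_pct=1.0, top_pct=99.0):
    """Estimate canopy height from the z-coordinates of a point cloud.

    Height is taken as the spread between a low percentile (ground level)
    and a high percentile (canopy top), which is more robust to isolated
    noisy points than the raw min-to-max range.
    """
    zs = sorted(z_values)

    def percentile(p):
        # Nearest-rank percentile; adequate for a sketch.
        idx = min(len(zs) - 1, max(0, round(p / 100.0 * (len(zs) - 1))))
        return zs[idx]

    return percentile(top_pct) - percentile(ground_pct)
```

The same trimming idea applies to other point-cloud traits, such as canopy width along the x or y axis.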
Ref. [51] supported [26] in demonstrating the use of multi-vision technology for fruit picking in an orchard, where 3D triangulation techniques identified the positions of fruit clusters and helped the robots avoid obstacles. The insights from these studies showed that 3D reconstruction enabled robots to perceive different plant traits, hence allowing them to undertake pruning and fruit harvesting with high accuracy. Ref. [28] also adopted 3D reconstruction to create a large-scale agricultural orchard environment, hence facilitating robot navigation in the scene. These findings indicated that 3D reconstruction allowed agricultural robots to undertake pruning and harvesting more accurately, as they could distinguish branches from fruits. In Ref. [28], 3D reconstruction was also applied to root phenotyping, where roots with a diameter above 1 cm showed a length similarity of 76.2% between the algorithm-reconstructed root system architecture and the simulated RSA. Ref. [27] also reiterated [28] in showing that 3D reconstruction techniques were cost-effective and efficient in phenotyping water use in the rooting systems of sorghum crops. The insights indicate the multiple areas where 3D reconstruction methods could be adopted in plant phenotyping, including root activity, pruning, and fruit harvesting.
In Ref. [26], normal distributions transform (NDT), intrinsic shape signatures (ISS), and iterative closest point (ICP) algorithms were used in phenotyping tomato canopies using 3D reconstruction, where a high correlation was reported. In Ref. [27], an ICP-based 3D registration algorithm for an eye-in-hand configuration was adopted to ensure real-time ICP-based 3D reconstruction of different types of food items, including apples, pears, and bananas. Ref. [59] also used ICP and reported a 93.42% accuracy in the reconstruction of peanut plants, hence identifying their plant traits. In Ref. [28], a root topological structure and geometric feature reconstruction algorithm was used for the 3D reconstruction of the root system architecture, revealing high feasibility and effectiveness.
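ICP, used in [26,27,59], alternates between matching each source point to its nearest target point and solving for the transform that best aligns the matches. The sketch below restricts the transform to a translation to keep it short; full ICP also recovers rotation, typically via an SVD of the cross-covariance of the matched pairs. All names here are illustrative and not taken from the cited implementations.

```python
def nearest(p, cloud):
    """Brute-force nearest neighbour of point p in a list of 3D points."""
    return min(cloud, key=lambda q: sum((a - b) ** 2 for a, b in zip(p, q)))


def icp_translation(source, target, iters=10):
    """Minimal ICP restricted to translation.

    Each iteration matches every source point to its nearest target point,
    then shifts the whole source cloud by the mean residual. Returns the
    accumulated (dx, dy, dz) that aligns source onto target.
    """
    src = [tuple(p) for p in source]
    shift = (0.0, 0.0, 0.0)
    for _ in range(iters):
        matches = [nearest(p, target) for p in src]
        delta = tuple(
            sum(m[k] - p[k] for p, m in zip(src, matches)) / len(src)
            for k in range(3)
        )
        src = [tuple(p[k] + delta[k] for k in range(3)) for p in src]
        shift = tuple(shift[k] + delta[k] for k in range(3))
    return shift
```

In practice, libraries accelerate the nearest-neighbour step with k-d trees and estimate a full rigid transform per iteration; the alternation itself is unchanged.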
In Ref. [20], MTLS image analysis using 3D characterization revealed a high correlation with other methods. In Ref. [28], eye-in-hand stereo vision and SLAM systems were used to construct a global map for the phenotyping analysis of passion fruit, litchi, and pineapple. Ref. [26] also used SLAM, combined with the BiSeNetV1 algorithm, in the reconstruction of a semantic map of citrus orchards. In Ref. [28], intrinsic shape signatures-coherent point drift (ISS-CPD) and distance field-based segmentation (DFSP) were adopted, demonstrating a high accuracy of 79.99% in the reconstruction of soybean canopies.
The use of the motion algorithm to construct the 3D morphology of maize plants and thereby extract phenotyping parameters was also demonstrated [35]. Additionally, high-throughput 3D phenotyping technology was adopted to understand the association between canopy architecture and the light environment of maize and soybean [58]. In Ref. [38], cylinder fitting and disk stacking were adopted to extract 3D point cloud data from UAV images of sorghum panicles. In Ref. [35], a logistic growth model was used to extract phenotypic traits of soybean plants and identify patterns in phenotypic changes. In Ref. [38], the Point Cloud Library (PCL) and computational geometry algorithms (CGAL) were used to map the plant's environment with high accuracy, hence improving crop efficiency.
Further work revealed that in other instances, the 3D reconstruction techniques were combined with deep learning methods, as reported in [59]. In Ref. [59], a machine vision-based fruit grading system achieved a 96.8% accuracy in measuring mango fruit structures. The insights in [20] showcased that combining deep learning and 3D reconstruction led to higher accuracy in complex applications, including detecting branches within orchards, where SPGNet and DBSCAN were used. A similarity was identified between [20,26], where 3D techniques and Mask R-CNN algorithms were shown to be important in detecting guava branches and fruits, which were represented as 3D components; as a result, obstacle avoidance paths could be generated for harvesting robots. In Ref. [27], PointNet++ was used in combination with 3D modeling to assess the effectiveness of 3D data in measuring cucumber plants with curved growth patterns. In Ref. [20], the U2-Net algorithm was combined with 3D reconstruction to assess the minimum average track length and minimum reprojection error for carex, cabbage, and kale. In Ref. [27], a probabilistic voxel carving algorithm was combined with 3D reconstruction to automatically extract plant traits from maize. In Ref. [27], a stereo algorithm was combined with GwcNet to quickly and accurately obtain depth information about oranges, making it suitable for commercial fruit sorting lines.

4. Discussion

In addressing the first objective, the examination of the reviewed studies indicated four categories of emerging developments in image analysis phenotyping. A popular approach was the use of machine and deep learning techniques to phenotype different plant traits. Studies such as [18] used predictive deep learning to monitor the different growth cycles of lettuce and identify issues as they emerged. Close inspection of [18] revealed a similarity to [33], where machine and deep learning methods such as random forests and deep neural networks were adopted to predict the yield of soybeans under different climate and growth conditions. Further analysis revealed that deep learning approaches were important for undertaking manual and laborious tasks on large-scale farms; case examples were counting maize crops on large farms [18] and grain counts from sorghum panicles [26]. Segmentation of plant traits was a second application area where deep and machine learning algorithms were widely adopted in the reviewed studies. Ref. [28] showcased the use of clustering algorithms in segmenting phenotyping information from corn plants, while [32] used ML techniques to extract flowering information and abiotic stresses, including drought and heat. Ref. [34] also demonstrated that ML algorithms were effective in classifying tea plant stresses with 98% accuracy. A further application area of plant phenotyping using ML and deep learning algorithms was plant breeding and management, where [59] used them to measure the performance of plug watermelon seedlings.
The discussion further delved into the use of high-throughput image phenotyping techniques for extracting plant traits. The difference between the high-throughput techniques and the deep learning approaches reviewed earlier lay in the methods adopted to extract plant traits. Ref. [45] demonstrated high-throughput field phenotyping techniques used in the multi-metric assessment of the spatial and temporal variability of cereals, hence improving crop breeding applications. Further work in [46] highlighted the efficacy of aerial phenotyping, where the normalized difference vegetation index (NDVI) was used to identify whether genotypes were high- or low-yielding based on canopy temperatures. The discussions also revealed the effectiveness of spatial aggregation techniques in measuring sun-induced chlorophyll fluorescence using airborne mechanisms [47]. Additionally, the techniques were applied to roots, where [50] demonstrated image-based semi-automatic root phenotyping in the examination of root morphology and branching features.
The discussions also observed that, in some instances, researchers were keen on leveraging the advantages of both deep and machine learning combined with high throughput imaging techniques. The inspection of the results indicated multiple case studies where high throughput imaging techniques were used to extract imaging details, whereas deep learning methods were adopted to classify plant traits. The close inspection of these studies indicated that combining high throughput imaging techniques and deep learning led to higher accuracy in detecting different plant traits and enhancing phenotyping applications.
The final section in the results discussed the importance of software, both 3D reconstruction and algorithm-based alternatives, in plant phenotyping. The discussion of the results revealed that different algorithm-based software was available, including AIseed, used in [43] to automatically capture seed traits such as shape and texture. A similar observation was made in [44], where CottonSense was adopted as a high-throughput software system for segmenting different plant traits. A vision-based system was showcased in [56] to estimate plant height, hence eliminating the need for intense manual labor.
The second class of software showcased the use of 3D applications to reconstruct different aspects of the plants, such as roots, fruits, and branches, and combinations of deep learning, machine learning, and 3D image reconstruction. The synthesis of these studies revealed that relying on multiple techniques led to higher accuracy in precision agriculture applications. Ref. [28] demonstrated 3D image reconstruction to scan the roots of a 9-year-old ash tree, extracting the spatial coordinates of the root points and estimating the root point diameters. As such, farmers could leverage the advantages of 3D reconstruction to implement robotics within agricultural farms, such as fruit harvesting robots that can accurately detect fruits and separate them from branches.
To further enhance the performance of 3D reconstruction tools, researchers also incorporated deep and machine learning methods. In Ref. [59], a 3D reconstruction and image processing ML algorithm was adopted to estimate the volume and shape of mango fruits, revealing a 96.8% accuracy compared to other methods that generated 91.7% and 91.5% accuracy. Ref. [28] added to [59] and demonstrated the 3D reconstruction of blueberries as they developed in clusters, hence extracting their cluster traits. The 3D reconstruction generated images of the blueberries, whereas a Mask R-CNN was adopted to segment individual blueberries at high accuracy, hence facilitating development monitoring, yield estimation, and prediction of harvest times [28]. The studies revealed that researchers could leverage the benefits of both 3D reconstruction and ML and deep learning algorithms to enhance plant phenotyping. The cost-effectiveness and efficiency of the methods also emerged as advantages of adopting the latest image analysis phenotyping techniques. The findings aligned with the previous literature, where studies such as [5,10,11] had underscored the effectiveness of plant phenotyping techniques in understanding plant traits to improve yield and breeding. Further advantages arose in the real-time monitoring of plant growth to improve breeding applications, for instance, in observing how plants responded to environmental factors such as water stress. The findings relating to the use of deep and machine learning algorithms in enhancing plant phenotyping applications and image analysis also aligned with [35], where smart agricultural engineering tasks, such as farm machinery optimization, were enhanced using bio-inspired algorithms such as ant colony and firefly algorithms. The insights indicate the increasing application of state-of-the-art technologies in improving smart agriculture to promote sustainability, as showcased by AI used in smart greenhouses [58,60].
In a further study, Ref. [61] also demonstrated the efficacy of an image processing methodology that combined deep learning and high-resolution imaging to separate disease and senescence in wheat canopies. The findings showed that tracking the necrotic and chlorotic fractions in leaves enabled the separate quantification of the contributions of physiological senescence and biotic stress to the overall green leaf area dynamics, as well as the investigation of the interactions between biotic stress and physiological senescence. As a result, combining high-resolution imaging and deep learning further facilitated genetic studies of disease resistance and tolerance.
The discussion further addressed the second research objective, where the limitations and weaknesses of advanced imaging methods for plant phenotyping, such as deep learning and 3D image reconstruction, were identified. A key limitation was that the deep learning algorithms were applied only to specific crops, hence lowering the generalizability of the findings to other crops. Further synthesis of the results showed that implementing the latest techniques involved complex processes and the interaction of different systems to identify plant traits. For example, mobile systems used to collect images were combined with ML algorithms for data analysis and processing, and with 3D structures to map the agricultural environment and facilitate the extraction of plant phenotyping traits. The complexities involved in the individual studies lowered the generalizability of the techniques for future applications. As such, future work should focus on the development of off-the-shelf 3D reconstruction and image analysis software that can be adopted in agricultural settings to generate plant phenotyping data. Additionally, future research ought to identify multiple case studies and diverse crops where the algorithms can be adopted to enhance crop monitoring.

5. Conclusions

The overarching aim of this research was to investigate the latest developments in image analysis for plant phenotyping using AI, ML, software, and other algorithms, limiting the literature to studies from 2020 onwards. The evaluation of the results from different quantitative experimental studies revealed that significant advancements have been made in image analysis for plant phenotyping, where different technologies are employed to extract plant traits. The synthesis of the findings showed that machine and deep learning methods were effective in diverse phenotyping applications, such as predicting issues expected during the growth cycles of lettuce plants, predicting soybean yields under different climate and growth conditions, and identifying high-yielding genotypes to improve yields. The predictive abilities of deep and machine learning have thus been harnessed in plant phenotyping applications such as yield prediction.
Furthermore, the insights indicate that farmers can rely on deep and machine learning to ease laborious tasks such as crop and grain counting on large-scale farms using classification algorithms such as SVMs. High-throughput image analysis is also emphasized as a relevant technique for detecting plant traits to facilitate breeding. Hyperspectral imaging (HSI), UAV image analysis, and spatial aggregation techniques were used to monitor plant functions. The combination of high-throughput image analysis with both machine and deep learning has also enhanced plant phenotyping in applications such as leaf chlorophyll concentration estimation. Finally, the research has demonstrated an increasing trend towards the use of 3D reconstruction and software technologies for plant phenotyping. The reviewed software is vision-based and processes seed information to enhance the prediction and detection of seed quality. 3D reconstruction techniques have also been adopted to identify plant traits accurately, including in applications involving automated robotic harvesting.
Therefore, the review article underscores that significant development has occurred in the technology space with novel variations of deep learning, ML, 3D reconstruction, and software being integrated into plant phenotyping. Emerging insights also highlight the adoption of smartphone-based plant phenotyping and combining ML and time series data to generate higher performance.
However, a few limitations have been highlighted, which are potential areas for future research. A limitation was that the studies focused on specific crops, which limited the generalizability of the findings across different settings. For example, questions emerged regarding the applicability of deep learning algorithms in classifying other crop traits based on the effectiveness observed with plants such as maize and sorghum. A second limitation was the complexities involved in the implementation of the different technologies, such as 3D reconstruction coupled with deep learning.
To encourage more farmers to adopt these technologies in crop breeding and yield estimation, further work should focus on easing their integration within small-scale farms. The development of off-the-shelf software to estimate yield and count crops from collected UAV images is a potential research avenue to ease their use on farms. Future directions in the area are expected to involve advancements in hybrid solutions that combine these strategies, for example, vision-based software with AI functionality that undertakes autonomous plant phenotyping on large-scale farms to predict crop issues at different growth stages.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Data Availability Statement

No new data were created or analyzed in this study. Data sharing is not applicable to this article.

Conflicts of Interest

The author declares no conflicts of interest.

Appendix A. Summary of the Selected Articles in the Review Article

ArticleFocusData processing
Methods
Image
Sensing Tools
Target
Crop
Main FindingsRelevance
Yu et al. [33]Using deep
learning
to assess RGB
images
showcasing
lettuce growth
cycle to identify
and assess
phenotypic indices
Not Specified
AI(ML)
Not SpecifiedLettuceThe model demonstrated high accuracy in predicting lettuce phenotypic indices with an average error of less
than 0.55% for
geometric indices and less than 1.7% for color and texture indices.
The study demonstrates the use of deep learning models in plant phenotypic applications.
Tu et al. [43]To test the
relevance of
AIseed that
captures and
analyzes plant
traits, including color, texture,
and shape of
individual seeds
Not SpecifiedAISeedRice,
wheat, maize,
scutellaria baicalensis,
platycodon
grandiflorum
AI seed has a high performance in the extraction of phenotypic features and testing seed quality from images for seeds of different plants of different sizes.The study reveals the non-destructive method for seed phenotyping and seed quality assessment.
Ji et al.
[62]
To demonstrate a labor-free and
automated method for isolating crop leaf pixels from
RGB imagery using a random forest algorithm
Random
Forest
Not SpecifiedTea, maize, rice, soybean, tomato, arabidopsisThe algorithm’s performance was comparable to or exceeded that of the latest methods. It also showed improvement in evaluation indicators from 9% to 310% higher.The study showed that the methods were relevant in extracting crop leaf pixels from multisource RGB imagery captured using multiple platforms, including cameras, smartphones, and unmanned aerial vehicles.
Skobalski et al. [18]To investigate the transferability and generalization capabilities of yield prediction models for crop breeding using different machine-learning techniquesRandom Forest (RF) and Gradient Boosting (GB)Not SpecifiedSoybeanThe results showed that datasets from Argentina and the US representing different climate regimes had the highest performance with R2 of 0.76 using Random Forest (RF) and Gradient Boosting (GB) algorithms.The use of transfer learning in real-world breeding scenarios improved decision-making for agricultural productivity.
Wang et al.
[17]
To investigate lodging phenotypes in the field reconstructed from UAV images using a geometric model
Not Specified 3D image
reconstruction
Not SpecifiedrapeseedThe results showed a high accuracy of 95.4% in classifying lodging types, where the rapeseed cultivars Zhongshuang 11 and Dadi 199 were the most dominant cultivarsThe study showed that the lodging phenotyping method had the potential to enhance mechanized harvesting; hence, accurate low-yield estimates were obtained
Niu et al. [52]. Objective: investigate the influence of fractional vegetation cover (FVC) on digital terrain model (DTM) reconstruction accuracy. Method: 3D image reconstruction. Crop: maize. Results: the accuracy of a DTM constructed with an inverse-distance-weighted algorithm was influenced by FVC conditions. Contribution: demonstrated the effectiveness of DTM reconstruction and the impact of view angle and spatial resolution on plant height estimation from UAV RGB images.

Haghshenas and Emam [63]. Objective: evaluate the quantitative characterization of shading patterns using a green-gradient-based canopy segmentation model. Method: high-throughput phenotyping (HTP). Crop: wheat. Results: the model generated a graph that could be used for accurate prediction of different canopy properties, including canopy coverage. Contribution: demonstrates the effectiveness of a multipurpose HTP platform.
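Green-based canopy segmentation of the kind used in such HTP pipelines is often built on simple color indices. As an illustration (a generic technique, not the authors' model), the Excess Green index ExG = 2g - r - b computed on chromatic coordinates can be thresholded to estimate canopy coverage from an RGB image:

```python
import numpy as np

def canopy_coverage(rgb):
    """Fraction of pixels classified as canopy via the Excess Green index.

    rgb: H x W x 3 array of floats in [0, 255].
    """
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    total = r + g + b + 1e-9               # avoid division by zero
    # Chromatic coordinates make the index robust to brightness changes.
    exg = 2 * g / total - r / total - b / total
    return float((exg > 0).mean())         # ExG > 0 is a common canopy threshold

# Toy image: left half green (canopy), right half grey (soil).
img = np.zeros((2, 4, 3))
img[:, :2] = [40, 180, 40]
img[:, 2:] = [120, 120, 120]
print(canopy_coverage(img))  # -> 0.5
```

The ExG > 0 cutoff is a conventional default; field pipelines typically calibrate the threshold (e.g., with Otsu's method) per lighting condition.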
Haque et al. [64]. Objective: demonstrate the effectiveness of high-throughput imagery in quantifying the shape and size features of sweet potato. Method: HTP. Crop: sweet potato. Results: the model had 84.59% accuracy in predicting the shape features of sweet potato cultivars. Contribution: demonstrated the effectiveness of big data analytics in industrial sweet potato agriculture.

Xie et al. [53]. Objective: compare four rapeseed height estimation methods using UAV images obtained at three growth stages, with complete and incomplete data. Method: AI (ML). Crop: rapeseed. Results: with complete data, optimal results were obtained, with an R2 of 0.932. Contribution: systematic strategies for selecting appropriate methods to acquire crop height with reasonable accuracy.
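UAV-based height estimation of this kind typically differences a digital surface model (DSM) against a digital terrain model (DTM) and summarizes the resulting canopy height model (CHM) per plot. A minimal numpy sketch (illustrative only; the percentile choice is an assumption, not taken from [53]):

```python
import numpy as np

def plot_height(dsm, dtm, percentile=99):
    """Estimate plot-level crop height from a canopy height model.

    dsm, dtm: 2D elevation rasters (m) covering one plot. A high
    percentile of the CHM is used instead of the maximum to reduce
    sensitivity to outlier pixels.
    """
    chm = dsm - dtm                      # canopy height model
    chm = np.clip(chm, 0, None)          # negative heights are noise
    return float(np.percentile(chm, percentile))

dtm = np.zeros((10, 10))                 # flat ground at 0 m
dsm = np.full((10, 10), 1.2)             # canopy surface at 1.2 m
dsm[0, 0] = 3.0                          # one spurious spike
print(round(plot_height(dsm, dtm), 2))   # -> 1.22, spike largely ignored
```

Using the maximum instead of a high percentile would return 3.0 here, which is why percentile summaries are a common choice for noisy photogrammetric CHMs.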
Mingxuan et al. [23]. Objective: demonstrate the effectiveness of an anti-gravity stem-seeking (AGSS) root image restoration algorithm in repairing root images and extracting root phenotype information for maize seeds of different resistance. Method: AGSS. Crop: maize. Results: detection accuracy higher than 90% for root length and diameter, but weaker performance on lateral root number. Contribution: the AGSS algorithm enables quick and effective repair of root images in small embedded systems.

Teshome et al. [21]. Objective: evaluate the effectiveness of combining unmanned aerial vehicle (UAV)-based imaging and machine learning (ML) for monitoring sweet corn yield, height, and biomass. Method: kNN and SVM. Crop: sweet corn. Results: UAV-based imaging was effective in estimating plant height, and the kNN and SVM algorithms outperformed other models. Contribution: UAV imaging and ML models were effective in monitoring plant phenotypic features, including height, yield, and biomass.

Bolouri et al. [44]. Objective: investigate the effectiveness of the CottonSense high-throughput phenotyping (HTP) system in assessing multiple growth phases of cotton. Method: CottonSense. Crop: cotton. Results: the model had an average AP score of 79% in segmentation across fruit categories and an R2 value of 0.94. Contribution: the proposed HTP system was cost-effective and power-efficient, and therefore suited to high-yield cotton breeding and crop improvement.

Zhu et al. [19]. Objective: demonstrate the effectiveness of hyperspectral imaging (HSI) in mitigating illumination influences in complex canopies. Method: Random Forest. Crop: basil. Results: the HSI pipeline mitigated illumination influences and enabled in situ detection of canopy chlorophyll distribution. Contribution: demonstrates the importance of monitoring chlorophyll status in whole canopies, enhancing planting management strategies.

Shi et al. [24]. Objective: investigate automated detection of seed melons and cotton plants using a U-Net built with double-depth convolutional and fusion blocks (DFU-Net). Method: DFU-Net. Crops: seed melons, cotton. Results: the DFU-Net model had an accuracy exceeding 92% on the dataset. Contribution: a novel approach to optimizing crop-detection algorithms, providing valuable technical support for intelligent crop management.
Jayasuriya et al. [56]. Objective: demonstrate a machine learning vision-based plant height estimation system for protected crop facilities. Method: AI (ML). Crop: capsicum. Results: imaged and manually measured plant heights showed similar growth patterns, with R2 scores of 0.87, 0.96, and 0.79 under unfiltered ambient light, smart glass film, and shifted light, respectively. Contribution: the method was feasible for a vertically supported capsicum crop in a commercial-scale protected crop facility.

Zhuang et al. [25]. Objective: investigate the effectiveness of a solution based on a convolutional neural network and a field rail-based phenotyping platform in obtaining the maize seedling emergence rate and leaf emergence speed. Method: R-CNN. Crop: maize. Results: the box mAP of the maize seedling detection model was 0.969, with an accuracy rate of 99.53%, outperforming manual counting. Contribution: the developed model is relevant for exploring genotypic differences affecting seed emergence and leafing.

Yang et al. [54]. Objective: demonstrate a phenotyping platform for 3D reconstruction of complex plants from multi-view images, with a joint evaluation criterion for the reconstruction algorithm and platform. Method: 3D image reconstruction. Crops: carex, cabbage, and kale. Results: the proposed algorithm had the minimum average track length, the minimum reprojection error, and the highest number of points. Contribution: the platform provides a cost-effective, automated, and integrated solution for fine-scale plant 3D reconstruction.

Varghese et al. [22]. Objective: demonstrate ML in photosynthetic research and photosynthetic pigment studies. Method: regression trees, ANN. Crops: Zea mays, Brassica oleracea. Results: ML algorithms were pivotal in improving crop yield by correlating hyperspectral data with photosynthetic parameters. Contribution: the use of ML in photosynthesis research supports sustainable crop development.
Krämer et al. [47]. Objective: develop and test two spatial aggregation approaches to make airborne solar-induced fluorescence (SIF) data usable in experimental settings. Method: HTP. Crops: wheat, barley. Results: the selected approaches led to a better representation of ground truth in the SIF data. Contribution: spatial aggregation techniques were effective in extracting remotely sensed SIF for crop phenotyping applications.

Teixeira et al. [45]. Objective: demonstrate the effectiveness of high-throughput field phenotyping in quantifying canopy development rates from aerial photographs. Method: HTP. Crop: oats. Results: a decline in plant trait effects occurred across different production system components, accompanied by high response variability. Contribution: combining high-throughput phenotyping, crop physiology field experiments, and biophysical modeling increased understanding of plant trait benefits.

Hoffman, Singels and Joshi [46]. Objective: evaluate the feasibility of using aerial phenotyping to rapidly identify genotypes with superior yield traits. Method: NDVI. Crop: sugarcane (ratoon crop). Results: water treatment differences in canopy temperature and stalk dry mass showed potential for identifying drought-tolerant genotypes. Contribution: aerial phenotyping methods have the potential to enhance breeding efficiency and genetic gains toward productive sugarcane cultivars.
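Several of the aerial studies above rely on the normalized difference vegetation index (NDVI), computed per pixel from the near-infrared and red bands. A minimal sketch (illustrative; the small constant guarding against division by zero is a common convention):

```python
import numpy as np

def ndvi(nir, red, eps=1e-9):
    """NDVI = (NIR - Red) / (NIR + Red), computed per pixel."""
    nir = np.asarray(nir, dtype=float)
    red = np.asarray(red, dtype=float)
    return (nir - red) / (nir + red + eps)

# Dense canopy reflects strongly in NIR; bare soil does not.
nir = np.array([[0.60, 0.20]])
red = np.array([[0.10, 0.15]])
print(ndvi(nir, red).round(3))  # canopy pixel ~0.714, soil pixel ~0.143
```

Plot-level NDVI statistics (mean, percentiles) over such rasters are what typically feed the genotype comparisons reported in these studies.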
Bhadra et al. [55]. Objective: demonstrate the use of a transfer learning-based dual-stream neural network (PROSAIL) in estimating leaf chlorophyll concentration (LCC) and average leaf angle (ALA) in corn. Method: PROSAIL. Crop: corn. Results: PROSAIL outperformed all other modeling scenarios in predicting LCC and ALA. Contribution: large amounts of PROSAIL-simulated data combined with transfer learning and multi-angular UAV observations were effective in precision agriculture.

Kim et al. [48]. Objective: demonstrate a low-light image-based crop and weed segmentation network (LCW-Net) for crop harvesting. Method: LCW-Net. Crops: sugar beet, dicot weeds, grass weeds, carrot, weeds. Results: the mean intersection over union of segmentation for crops and weeds was 0.8718 and 0.8693 on the BoniRob dataset and 0.8337 and 0.8221 on the CWFID dataset. Contribution: LCW-Net outperformed the state-of-the-art methods.

Bai et al. [49]. Objective: examine the potential of a large-scale cable-suspended field phenotyping system to quantify the bidirectional reflectance distribution function (BRDF). Method: BRDF, NDVI. Crops: maize, soybean. Results: a strong correlation emerged between reflectance, vegetation indices, and green pixel fraction; hotspots were identified in the backscatter direction at visible and near-infrared (NIR) bands. Contribution: the system can generate rapid and detailed BRDF data at high spatiotemporal resolution using multiple sensors.

Hao et al. [65]. Objective: assess a method for evaluating the degree of wilting of cotton varieties based on phenotype. Method: AI (ML). Crop: cotton. Results: the PointSegAt deep learning network model performed well in leaf and stem segmentation. Contribution: an effective approach to measuring the degree of wilting of cotton varieties based on phenotype.

Debnath et al. [37]. Objective: demonstrate a method based on the Taylor-Coot algorithm to segment plant regions and biomass areas, detect emergence counts, and estimate crop biomass. Method: Taylor-Coot. Crops: various. Results: the proposed model had minimal mean absolute difference (MAD), standard absolute difference (SDAD), and percentage difference (%D) of 0.073, 0.074, and 16.45 in emergence counting. Contribution: the model was effective in phenotypic trait estimation of crops.

Hasan et al. [36]. Objective: demonstrate a weed classification pipeline in which one image patch is considered at a time to improve performance. Method: AI (ML). Datasets: DeepWeeds, corn weeds, cotton weeds, tomato weeds. Results: the pipeline showed significant performance improvement on all four datasets and addressed intra-class dissimilarity and inter-class similarity in the data. Contribution: the pipeline is effective for crop and weed recognition in farming applications.
Fang et al. [50]. Objective: evaluate an image-based semi-automatic root phenotyping method for field-grown crops. Method: HTP. Crop: maize. Results: interspecific advantages for maize occurred within 5 cm of the root base in the nodal roots, and the inhibition of soybean was reflected 20 cm from the root base. Contribution: the system had high accuracy for field research on root morphology and branching features.

Zhao et al. [34]. Objective: evaluate the continuous wavelet projections algorithm (CWPA) and the successive projections algorithm (SPA) in generating optimal spectral feature sets for crop detection. Method: CWPA and SPA. Crop: tea. Results: an overall accuracy of 98% in classifying tea plant stresses and a high coefficient of determination in retrieving corn leaf chlorophyll content. Contribution: the novel algorithm shows potential for crop monitoring and phenotyping with hyperspectral data.

Yang and Cho [57]. Objective: undertake automatic analysis of 3D image reconstruction of a red pepper plant. Method: VisualSFM. Crop: red pepper. Results: the proposed method had an error of 5 mm or less in reconstructing the 3D images and was suitable for phenotypic analysis. Contribution: the imaging and analysis employed in the 3D reconstruction can be applied in other image processing studies.

Yu et al. [32]. Objective: demonstrate high-throughput phenotyping methods based on UAV systems to monitor and quantitatively describe the development of soybean canopy populations. Method: AI (ML). Crop: soybean. Results: multimodal image segmentation outperformed traditional deep learning image segmentation networks. Contribution: the method has high practical value in germplasm resource identification and can further enhance genotypic differentiation analysis.

Selvaraj et al. [51]. Objective: evaluate an image analysis framework, CIAT Pheno-i, to extract plot-level vegetation canopy metrics. Method: HTP. Crop: cassava. Results: the developed pheno-image analysis was easier and more rapid than manual methods. Contribution: the model can be adopted for phenotype analysis of cassava's below-ground traits.

Zhang et al. [66]. Objective: evaluate multiple imaging sensors, image resolutions, and image processing techniques to monitor the flowering intensity of canola, chickpea, pea, and camelina. Method: AI (ML). Crops: canola, chickpea, pea, camelina. Results: standard image processing with unsupervised machine learning and thresholds gave comparable results in flower detection and feature extraction. Contribution: demonstrated the feasibility of imaging for monitoring flowering intensity in multiple varieties of the evaluated crops.

Rossi et al. [67]. Objective: test a 3D model for early detection of plant responses to water stress. Method: structure from motion (SfM). Crop: tomato. Results: the algorithm automatically detected and measured plant height, petiole inclination, and single-leaf area. Contribution: plant height traits may be used in subsequent analyses to identify how plants respond to water stress.

Boogaard, van Henten and Kootstra [68]. Objective: evaluate the effectiveness of 3D data in measuring plants with a curved growing pattern. Method: PointNet++. Crop: cucumber. Results: the 3D method was more accurate than the 2D method. Contribution: revealed the effectiveness of computer vision methods for measuring plant architecture and internode length.

Cuevas-Velasquez, Gallego and Fisher [69]. Objective: evaluate the segmentation and 3D reconstruction of rose plants from stereoscopic images. Method: FSCN. Crop: rose. Results: the segmentation accuracy improved on other methods, leading to robust reconstruction results. Contribution: showcased the effectiveness of 3D reconstruction in plant phenotyping.

Chen et al. [70]. Objective: test 3D perception of the orchard banana central stock improved by adaptive multi-vision technology. Method: 3D image reconstruction. Crop: banana. Results: the proposed method was accurate and delivered stable performance. Contribution: showcased the relevance of 3D sensing of banana central stocks in complex environments.
Chen et al. [71]. Objective: develop a 3D unstructured, real-time environment using VR- and Kinect-based immersive teleoperation for agricultural field robots. Method: 3D image reconstruction. Setting: orchard. Results: the proposed algorithm and system showed the potential applicability of immersive teleoperation within an unstructured environment. Contribution: the algorithms could be adopted to enhance teleoperation in agricultural environments.

Isachsen, Theoharis and Misimi [72]. Objective: demonstrate a real-time ICP-based 3D registration algorithm for an eye-in-hand configuration using an RGB-D camera. Method: 3D image reconstruction. Items: apple, banana, pear. Results: the GPU-based 3D reconstruction was faster than the CPU version and eight times faster than similar library implementations. Contribution: the method is valid for eye-in-hand robotic scanning and 3D reconstruction of different agricultural food items.

Lin et al. [73]. Objective: demonstrate a 3D reconstruction method to detect branches and fruits so robots can avoid obstacles and plan paths. Method: 3D image reconstruction. Crop: guava. Results: highly accurate fruit and branch reconstruction. Contribution: the method could be adopted for reconstructing fruits and branches and planning obstacle-avoidance paths for harvesting robots.

Ma et al. [74]. Objective: test the automatic detection of jujube tree branches based on 3D reconstruction for dormant pruning using deep learning methods. Method: 3D image reconstruction. Crop: jujube trees. Results: the proposed method detected significant information, including branch length and branch diameter. Contribution: the model could be adopted in developing automated robots for pruning jujube trees in orchard fields.

Fan et al. [75]. Objective: reconstruct root system architecture from the topological structure and geometric features of plants. Method: 3D image reconstruction. Crop: ash tree. Results: for roots with a diameter greater than 1 cm, high agreement was observed between the reconstructed and simulated roots. Contribution: the high similarity between reconstructed and simulated roots showed the method to be feasible and effective.

Zhao et al. [76]. Objective: characterize water usage in crops and root activity in 3D for field agronomic research. Method: 3D image reconstruction. Crop: maize. Results: the method was highly accurate and cost-efficient in phenotyping root activity. Contribution: the method is vital for understanding crop water usage.

Zhu et al. [77]. Objective: compute phenotypic traits of a tomato canopy using 3D reconstruction. Method: 3D image reconstruction. Crop: tomato. Results: a high correlation between measured and calculated values. Contribution: the method could be adopted for rapid detection of quantitative indices of tomato canopy phenotypic traits, supporting breeding and scientific cultivation.

Torres-Sánchez et al. [78]. Objective: compare UAV digital aerial photogrammetry (DAP) and mobile terrestrial laser scanning (MTLS) in measuring geometric parameters of a vineyard and peach and pear orchards. Method: HTP. Crops: peach, pear. Results: the two methods were highly correlated, with MTLS giving higher values than the UAV. Contribution: showed that 3D crop characterization could be adopted in precision fruticulture.

Chen et al. [79]. Objective: detail a global map involving eye-in-hand stereo vision and a SLAM system. Method: AI (ML). Crops: passion fruit, litchi, pineapple. Results: the constructed global map attained large-scale high resolution. Contribution: more stable and practical mobile fruit-picking robots can be developed in the future.

Dong et al. [80]. Objective: demonstrate a method to extract individual 3D apple traits and 3D mapping for apple training systems. Method: 3D image reconstruction. Crop: apple. Results: high accuracy in apple counting and apple volume estimation. Contribution: combining 3D photography and 3D instance segmentation improved the extraction of apple phenotypic traits from orchards.

Xiong et al. [81]. Objective: propose a real-time localization and semantic map reconstruction method for unstructured citrus orchards. Method: 3D image reconstruction. Crop: citrus. Results: high accuracy and real-time performance in reconstructing the semantic map. Contribution: advances theoretical and technical work supporting intelligent agriculture.

Feng et al. [82]. Objective: propose a voxel carving algorithm to reconstruct 3D models of maize and extract leaf traits for phenotyping. Method: probabilistic voxel carving, 3D image reconstruction. Crop: maize. Results: the algorithm was robust and automatically extracted plant traits, including leaf angles and the number of leaves. Contribution: 3D reconstruction of plants from multi-view images can accurately extract multiple phenotypic traits.
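Voxel carving of the kind used by Feng et al. [82] starts from a solid voxel grid and removes voxels whose projections fall outside the plant silhouette in any view. A deliberately simplified numpy sketch with orthographic projections along the three grid axes (real systems use calibrated perspective cameras, and [82] uses a probabilistic formulation rather than this hard intersection):

```python
import numpy as np

def carve(grid_shape, silhouettes):
    """Keep voxels whose orthographic projection lies inside every silhouette.

    silhouettes: dict mapping a projection axis (0, 1, or 2) to a 2D
    boolean mask over the remaining two axes.
    """
    occupied = np.ones(grid_shape, dtype=bool)
    for axis, sil in silhouettes.items():
        # Broadcast the 2D silhouette back along its projection axis.
        occupied &= np.expand_dims(sil, axis=axis)
    return occupied

n = 8
yy, xx = np.mgrid[:n, :n]
disc = (yy - n / 2 + 0.5) ** 2 + (xx - n / 2 + 0.5) ** 2 <= (n / 2) ** 2
# Three circular silhouettes carve the cube down to a sphere-like volume.
volume = carve((n, n, n), {0: disc, 1: disc, 2: disc})
print(volume.sum(), "voxels remain of", n ** 3)
```

The result is the visual hull: an upper bound on the true shape, which is why leaf traits are usually measured after further surface fitting on the carved volume.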
Gao et al. [26]. Objective: develop a clustering algorithm for corn population point clouds to accurately extract the 3D morphology of individual crops. Method: 3D image reconstruction. Crop: corn. Results: high accuracy of the improved quick-shift method for segmentation of the corn plants. Contribution: an automated and efficient solution for accurately measuring crop phenotypic information.

Wu et al. [83]. Objective: design and deploy a prototype for automatic, high-throughput seed phenotyping based on SeedScreener. Method: SeedScreener. Crop: wheat. Results: the platform achieved a 94% success rate in predicting wheat traits. Contribution: effective in obtaining the endophenotype and exophenotype features of wheat seeds.

James et al. [84]. Objective: demonstrate a method to predict grain count for sorghum panicles. Method: YOLOv5. Crop: sorghum. Results: the models predicted grain counts for the high-resolution point cloud dataset with high accuracy. Contribution: the multimodal approach is viable for estimating grain count per panicle and can be adopted in future crop development.

Wen et al. [85]. Objective: present an accurate method for semantic 3D reconstruction of maize leaves. Method: 3D image reconstruction. Crop: maize. Results: high accuracy in reconstructing maize leaves, with high consistency between the reconstructed mesh and the corresponding point cloud. Contribution: the technology can be adopted in crop phenomics to improve functional-structural plant modeling.

Gargiulo, Sorrentino and Mele [27]. Objective: measure the internal traits of bean seeds using X-ray micro-CT imaging. Method: X-ray micro-CT. Crop: beans. Results: the micropyle was the most influential structure in the initial hydration of the bean seeds. Contribution: the approach can be adopted to study the morphological traits of beans.

Hu et al. [20]. Objective: demonstrate the effectiveness of Neural Radiance Fields (NeRF) in 3D reconstruction of plants. Method: NeRF, 3D image reconstruction. Crops: pitahaya, grape, fig, orange. Results: NeRF achieved reconstruction results comparable to Reality Capture, a commercial 3D reconstruction package. Contribution: NeRF is identified as a novel paradigm in plant phenotyping, generating multiple representations from a single process.

Chang et al. [28]. Objective: apply 3D and hyperspectral imaging (HSI) in investigating heterosis and cytoplasmic effects in pepper (Capsicum annuum). Method: NDVI. Crop: pepper (Capsicum annuum). Results: HSI data recorded throughout the plant life span showed potential utility in analyzing cytoplasmic effects in crops. Contribution: demonstrated the potential of 3D and HSI in evaluating the combining ability of crops.

Ni et al. [86]. Objective: develop a complete framework for 3D segmentation of individual blueberries as they develop in clusters and extract the cluster traits. Method: R-CNN. Crop: blueberries. Results: a high accuracy of 97.3% in determining fruit number. Contribution: 3D photogrammetry and 2D instance segmentation were effective in determining blueberry cluster traits, facilitating yield estimation and monitoring of fruit development.

Xiao et al. [87]. Objective: construct organ-scale traits of canopy structures in large-scale fields. Method: CCO. Crops: maize, cotton, sugar beet. Results: high accuracy in obtaining complete canopy structures throughout the growth stages of the crops. Contribution: the CCO method offered high affordability, accuracy, and efficiency in accelerating precision agriculture and plant breeding.

Xiao et al. [88]. Objective: examine the capability of UAV platforms to execute precise yield estimation of 3D cotton bolls. Method: CCO. Crop: cotton. Results: the CCO results surpassed the nadir-derived cotton boll route. Contribution: demonstrated the effectiveness of CCO combined with UAV images in high-throughput acquisition of organ-scale traits.

Shomali et al. [31]. Objective: demonstrate the effectiveness of ANN-based algorithms for high-light stress phenotyping of tomato genotypes using chlorophyll fluorescence features. Method: SVM, RFE. Crop: tomato. Results: the algorithms were reliable for high-light stress screening. Contribution: advocated deep learning algorithms for high-light stress phenotyping using chlorophyll fluorescence features.

Ma et al. [29]. Objective: explore 3D reconstruction in soybean organ segmentation and phenotypic growth simulation. Method: ISS-CPD, ICP, DFSP. Crop: soybean. Results: an accuracy of 79.99% using the distance-field-based segmentation pipeline (DFSP) algorithm. Contribution: highly accurate and reliable for 3D reconstruction of the soybean canopy, phenotype calculation, and growth simulation.

Mon and ZarAung [30]. Objective: propose an image processing algorithm to estimate the volume and 3D shape of mango fruit. Method: 3D image reconstruction. Crop: mango. Results: the proposed method was 96.8% accurate relative to the measured structures of mango fruits. Contribution: the reconstructed mango shapes closely agreed with the measured shapes.

Liu et al. [89]. Objective: test a 3D image analysis method for counting potato eyes and estimating eye depth based on curvature evaluation. Method: SfM. Crop: potato. Results: high accuracy in estimating the number of potato eyes and their depth. Contribution: effective in phenotyping potato traits.

Zhou et al. [90]. Objective: demonstrate the effectiveness of the soybean plant phenotype extractor (SPP-extractor) algorithm in acquiring phenotypic traits. Method: YOLOv5. Crop: soybean. Results: the model accurately identified pods and stems and could count the entire pod set of a plant in a single scan. Contribution: the method could be adopted for phenotype extraction of soybean plants.

Liu et al. [91]. Objective: evaluate the effectiveness of an improved watershed algorithm for bean image segmentation. Method: watershed algorithm. Crop: bean. Results: the proposed algorithm performed better than the traditional watershed algorithm. Contribution: the improved watershed algorithm could be adopted in future bean image segmentation applications.
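Liu et al. [91] improve on the classical watershed transform for separating touching beans. The baseline idea their method builds on can be sketched as marker-based priority flooding: labeled seeds grow outward in order of increasing pixel value (e.g., a negated distance transform), and regions meet at ridges. A simplified stdlib/numpy version (not the authors' improved algorithm):

```python
import heapq
import numpy as np

def watershed(image, markers):
    """Minimal marker-based watershed by priority flooding.

    image: 2D array; lower values are flooded first (e.g., a negated
    distance transform). markers: 2D int array, 0 = unlabeled.
    Returns a label map covering every pixel.
    """
    labels = markers.copy()
    heap = [(image[y, x], y, x) for y, x in zip(*np.nonzero(markers))]
    heapq.heapify(heap)
    h, w = image.shape
    while heap:
        _, y, x = heapq.heappop(heap)
        for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ny, nx = y + dy, x + dx
            if 0 <= ny < h and 0 <= nx < w and labels[ny, nx] == 0:
                labels[ny, nx] = labels[y, x]       # inherit the flooding label
                heapq.heappush(heap, (image[ny, nx], ny, nx))
    return labels

# Two touching "beans": low values at the ends, a ridge in the middle columns.
img = np.array([[0, 0, 1, 1, 0, 0]] * 3, dtype=float)
markers = np.zeros_like(img, dtype=int)
markers[1, 0] = 1   # seed for the left bean
markers[1, 5] = 2   # seed for the right bean
print(watershed(img, markers))  # columns 0-2 -> 1, columns 3-5 -> 2
```

In practice the markers come from local maxima of a distance transform of the binary bean mask, which is where improved variants such as [91] concentrate their changes.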
Li et al.
[92]
To demonstrate a non-destructive measurement method for the canopy phenotype of watermelon plug seedlingsMask R-CNNYoloV5Watermelon The results showed that the non-destructive measurement algorithm achieved good measurement performance for the watermelon plug seedlings from the one true-leaf to 3 true-leaf stages.The proposed deep learning algorithm provided an effective solution for non-destructive measurement of the canopy phenotype of plug seedlings.
Zhou et al.
[93]
To propose a phenotyping monitoring technology based on an internal gradient algorithm to acquire the target region and diameter of maize stemsOtsu Internal
Gradient
Algorithm
Not SpecifiedMaize The results showed that the internal gradient algorithm could accurately obtain the target region of maize stems.The adoption of the internal gradient algorithm is feasible in potential smart agriculture applications to assess field conditions.
Liu et al.
[94]
To propose a wheat point cloud generation and 3D reconstruction method based on SfM and MVS using sequential wheat crop imagesNot SpecifiedSfMWheat The results showed that the method achieved non-invasive reconstruction of the 3D phenotypic structure of realistic objects with accuracy being improved by 43.3% and overall value enhanced by 14.3%The proposed model can be adopted in future virtual 3D digitization applications.
Sun et al.
[95]
To propose a deep learning semantic segmentation technology to preprocess images of soybean plantsNot Specified
AI(ML)
Not SpecifiedSoybean The results showed that semantic segmentation
improved image preprocessing and long
reconstruction time, hence improving
the robustness of noise input
The proposed model can be adopted in future semantic segmentation for the preprocessing of 3D reconstruction in other crops
Begot et al. [96]
To implement micro-computed tomography (micro-CT) to study floral morphology and honey bees in the context of nectar-related traitsNot SpecifiedX-ray micro-computed
tomography
Male and female flowersThe results showed that microcomputed tomography
allowed for easy and rapid generation of 3D volumetric data on nectar, nectary, flower, and honey bee body sizes.
The protocol can be adopted to evaluate flower accessibility to pollinators at high resolution; hence, comparative analysis can be conducted to identify nectar-pollination-related traits.
Li et al.
[97]
To subject maize seedlings to 3D reconstruction using imaging technology to assess their phenotypic traitsNot Specified
3D Image Reconstruction
Not SpecifiedMaize The results from the model were highly correlated with manually measured values, showing that the method was accurate in nondestructive extraction.The proposed model accurately constructed the 3D morphology of maize plants, hence extracting the phenotypic parameters of the maize plants.
Zhu et al.
[98]
To combine 3D plant architecture with a radiation model to quantify and assess the impact of differences in planting patterns and row orientations on canopy light interceptionNot Specified 3D Image
Reconstruction
Not SpecifiedMaize,
soybean
The results showed good agreement between measured and calculated phenotypic traits.The study demonstrated that high throughput 3D phenotyping technology could be adopted to gain a better understanding of the association between the light environment and canopy architecture.
Chang et al.
[99]
To develop a detecting and characterizing method for individual sorghum panicles using a 3D point cloud from UAV imagesNot Specified
3D Image
Reconstruction
Not SpecifiedSorghum The results showed a high correlation between UAV-based and ground measurements. The study demonstrated that the 3D point cloud derived from UAV images provided reliable and consistent individual sorghum panicle parameters that were highly correlated with the ground measurements of panicle weight.
Varela, Pederson and Leakey
[100]
To implement spatio-temporal 3D CNN and UAV time series to predict lodging damage in sorghumCNNNot SpecifiedSorghum The results showed that the 3D-CNN outperformed the 2D-CNN with high accuracy and precision.The study demonstrated that using spatiotemporal CNN based on UAV time series images can enhance plant phenotyping capabilities in crop breeding and precision agriculture.
Varela et al.
[101]
Varela et al. [101]. Aim: to use deep CNNs with UAV time-series imagery to determine flowering time, biomass yield traits, and culm length. AI algorithm: CNN. 3D technique: not specified. Crop: Miscanthus. Results: the 3D spatiotemporal CNN architecture outperformed the 2D spatial CNN architectures. Implications: integrating high-spatiotemporal-resolution UAV imagery with a 3D-CNN enhanced monitoring of the key phenological and yield-related crop traits.

Nguyen et al. [102]. Aim: to demonstrate the versatility of aerial remote sensing for the timely diagnosis of yellow rust infection in spring wheat. AI algorithm: CNN. 3D technique: not specified. Crop: wheat. Results: a high correlation emerged between the proposed method and the 3D-CNN. Implications: low-cost multispectral UAVs could be adopted in crop breeding and pathology applications.

Okamoto et al. [103]. Aim: to test the applicability of the structure-from-motion multi-view stereo (SfM-MVS) method in the morphological measurement of tree root systems. AI algorithm: not specified. 3D technique: SfM. Crop: black pine. Results: 3D reconstructions of the dummy and the root systems were successful. Implications: SfM-MVS is a new tool that can be adopted to obtain the 3D structure of tree root systems.

Liu et al. [104]. Aim: to demonstrate a fast reconstruction method for a 3D model of peanut plants based on dual RGB-D cameras. AI algorithm: not specified. 3D technique: 3D image reconstruction. Crop: peanut. Results: the average accuracy of the reconstructed peanut plant 3D model was 93.42%, higher than that of the iterative closest point (ICP) method. Implications: the 3D reconstruction model can be adopted in future modeling applications to identify plant traits.

Zhu et al. [105]. Aim: to propose a method based on 3D reconstruction to evaluate phenotype development over the whole growth period. AI algorithm: not specified. 3D technique: 3D image reconstruction. Crop: soybean. Results: phenotypic fingerprints of the soybean plant varieties could be established to identify patterns in phenotypic changes. Implications: the proposed method could be adopted in future breeding and field management of soybeans and other crops.

Yang and Han [38]. Aim: to develop a novel approach for determining the 3D phenotype of vegetables by recording video clips with smartphones. AI algorithm: not specified. 3D technique: SfM. Crop: leafy vegetables. Results: highly accurate results were obtained compared with other photogrammetry methods. Implications: the proposed method can be adopted to reconstruct high-quality point cloud models from crop videos.

Sampaio, Silva and Marengoni [59]. Aim: to present a 3D model of non-rigid corn plants that can be adopted in phenotyping processes. AI algorithm: not specified. 3D technique: 3D image reconstruction. Crop: corn. Results: high accuracy in plant structural measurements and in mapping the plant's environment, providing higher crop efficiency. Implications: the proposed solution can be adopted in future phenotyping applications for non-rigid plants.

Tagarakis et al. [35]. Aim: to investigate the use of RGB-D cameras and unmanned ground vehicles in mapping commercial orchard environments to provide information about tree height and canopy volume. AI algorithm: not specified. 3D technique: 3D image reconstruction. Crop: tree orchards. Results: the sensing methods provided similar height measurements, and tree volumes were also calculated accurately. Implications: the proposed method, which uses UAV and UGV systems, could be adopted in future mapping applications.
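Liu et al. [104] report that their dual RGB-D reconstruction outperformed the iterative closest point (ICP) baseline. For readers unfamiliar with that baseline, the point-to-point ICP loop can be sketched in a few lines of NumPy. This is an illustrative toy under stated assumptions: the synthetic cloud, iteration count, and variable names are inventions of this sketch, not details from [104].

```python
import numpy as np

def best_rigid_transform(src, dst):
    # Kabsch algorithm: closed-form least-squares rotation and
    # translation mapping paired points src[i] -> dst[i].
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T  # guard against reflections
    t = cd - R @ cs
    return R, t

def icp(src, dst, iters=30):
    # Point-to-point ICP: alternate nearest-neighbour matching and
    # closed-form rigid alignment until the clouds agree.
    cur = src.copy()
    for _ in range(iters):
        d2 = ((cur[:, None, :] - dst[None, :, :]) ** 2).sum(-1)
        matched = dst[d2.argmin(axis=1)]  # brute-force nearest neighbours
        R, t = best_rigid_transform(cur, matched)
        cur = cur @ R.T + t
    return best_rigid_transform(src, cur)  # total transform src -> dst

# Synthetic cloud and a known rigid motion to recover.
rng = np.random.default_rng(0)
src = rng.normal(size=(200, 3))
ang = 0.25
R_true = np.array([[np.cos(ang), -np.sin(ang), 0.0],
                   [np.sin(ang),  np.cos(ang), 0.0],
                   [0.0, 0.0, 1.0]])
t_true = np.array([0.2, -0.1, 0.05])
dst = src @ R_true.T + t_true

R_est, t_est = icp(src, dst)
```

Real pipelines replace the brute-force nearest-neighbour search with a k-d tree and add outlier rejection; libraries such as Open3D ship tuned ICP implementations.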

References

  1. United Nations Sustainable Development. Available online: https://www.un.org/sustainabledevelopment/hunger/ (accessed on 10 May 2024).
  2. Meraj, T.; Sharif, M.I.; Raza, M.; Alabrah, A.; Kadry, S.; Gandomi, A.H. Computer vision-based plant phenotyping: A comprehensive survey. iScience 2024, 27, 108709. [Google Scholar] [CrossRef] [PubMed]
  3. Sharma, V.; Honkavaara, E.; Hayden, M.; Kant, S. UAV Remote Sensing Phenotyping of Wheat Collection for Response to Water Stress and Yield Prediction Using Machine Learning. Plant Stress 2024, 12, 100464. [Google Scholar] [CrossRef]
  4. Chiuyari, W.N.; Cruvinel, P.E. Method for maize plant counting and crop evaluation based on multispectral image analysis. Comput. Electron. Agric. 2024, 216, 108470. [Google Scholar] [CrossRef]
  5. Graf, L.V.; Merz, Q.N.; Walter, A.; Aasen, H. Insights from field phenotyping improve satellite remote sensing based in-season estimation of winter wheat growth and phenology. Remote Sens. Environ. 2023, 299, 113860. [Google Scholar] [CrossRef]
  6. Zahid, A.; Mahmud, M.S.; He, L.; Choi, D.; Heinemann, P.; Schupp, J. Development of an integrated 3R end-effector with a cartesian manipulator for pruning apple trees. Comput. Electron. Agric. 2020, 179, 105837. [Google Scholar] [CrossRef]
  7. Waqas, M.A.; Wang, X.; Zafar, S.A.; Noor, M.A.; Hussain, H.A.; Azher Nawaz, M.; Farooq, M. Thermal Stresses in Maize: Effects and Management Strategies. Plants 2021, 10, 293. [Google Scholar] [CrossRef]
  8. Mangalraj, P.; Cho, B.-K. Recent trends and advances in hyperspectral imaging techniques to estimate solar induced fluorescence for plant phenotyping. Ecol. Indic. 2022, 137, 108721. [Google Scholar] [CrossRef]
  9. Pappula-Reddy, S.-P.; Kumar, S.; Pang, J.; Chellapilla, B.; Pal, M.; Millar, A.H.; Siddique, K.H.M. High-throughput phenotyping for terminal drought stress in chickpea (Cicer arietinum L.). Plant Stress 2024, 11, 100386. [Google Scholar] [CrossRef]
  10. Geng, Z.; Lu, Y.; Duan, L.; Chen, H.; Wang, Z.; Zhang, J.; Liu, Z.; Wang, X.; Zhai, R.; Ouyang, Y.; et al. High-throughput phenotyping and deep learning to analyze dynamic panicle growth and dissect the genetic architecture of yield formation. Crop Environ. 2024, 3, 1–11. [Google Scholar] [CrossRef]
  11. Jiang, Y.; Li, C. Convolutional Neural Networks for Image-Based High-Throughput Plant Phenotyping: A Review. Plant Phenomics 2020, 2020, 4152816. [Google Scholar] [CrossRef]
  12. Zhang, H.; Wang, L.; Jin, X.; Bian, L.; Ge, Y. High-throughput phenotyping of plant leaf morphological, physiological, and biochemical traits on multiple scales using optical sensing. Crop J. 2023, 11, 1303–1318. [Google Scholar] [CrossRef]
  13. Guo, W.; Carroll, M.E.; Singh, A.; Swetnam, T.L.; Merchant, N.; Sarkar, S.; Singh, A.K.; Ganapathysubramanian, B. UAS-Based Plant Phenotyping for Research and Breeding Applications. Plant Phenomics 2021, 2021, 9840192. [Google Scholar] [CrossRef] [PubMed]
  14. Carrera-Rivera, A.; Ochoa, W.; Larrinaga, F.; Lasa, G. How-to Conduct a Systematic Literature review: A Quick Guide for Computer Science Research. MethodsX 2022, 9, 101895. Available online: https://www.sciencedirect.com/science/article/pii/S2215016122002746 (accessed on 10 May 2024).
  15. Gusenbauer, M.; Haddaway, N.R. Which Academic Search Systems Are Suitable for Systematic Reviews or meta-analyses? Evaluating Retrieval Qualities of Google Scholar, PubMed, and 26 Other Resources. Res. Synth. Methods 2020, 11, 181–217. [Google Scholar] [CrossRef]
  16. Grewal, A.; Kataria, H.; Dhawan, I. Literature Search for Research Planning and Identification of Research Problem. Indian J. Anaesth. 2016, 60, 635–639. [Google Scholar] [CrossRef] [PubMed]
  17. Wang, C.; Xu, S.; Yang, C.; You, Y.; Zhang, J.; Kuai, J.; Xie, J.; Zuo, Q.; Yan, M.; Du, H.; et al. Determining rapeseed lodging angles and types for lodging phenotyping using morphological traits derived from UAV images. Eur. J. Agron. 2024, 155, 127104. [Google Scholar] [CrossRef]
  18. Skobalski, J.; Sagan, V.; Alifu, H.; Al Akkad, O.; Lopes, F.A.; Grignola, F. Bridging the gap between crop breeding and GeoAI: Soybean yield prediction from multispectral UAV images with transfer learning. ISPRS J. Photogramm. Remote Sens. 2024, 210, 260–281. [Google Scholar] [CrossRef]
  19. Zhu, F.; Qiao, X.; Zhang, Y.; Jiang, J. Analysis and mitigation of illumination influences on canopy close-range hyperspectral imaging for the in situ detection of chlorophyll distribution of basil crops. Comput. Electron. Agric. 2024, 217, 108553. [Google Scholar] [CrossRef]
  20. Hu, K.; Ying, W.; Pan, Y.; Kang, H.; Chen, C. High-fidelity 3D reconstruction of plants using Neural Radiance Fields. Comput. Electron. Agric. 2024, 220, 108848. [Google Scholar] [CrossRef]
  21. Teshome, F.T.; Bayabil, H.K.; Hoogenboom, G.; Schaffer, B.; Singh, A.; Ampatzidis, Y. Unmanned aerial vehicle (UAV) imaging and machine learning applications for plant phenotyping. Comput. Electron. Agric. 2023, 212, 108064. [Google Scholar] [CrossRef]
  22. Varghese, R.; Cherukuri, A.K.; Doddrell, N.H.; Priya, G.; Simkin, A.J.; Ramamoorthy, S. Machine learning in photosynthesis: Prospects on sustainable crop development. Plant Sci. 2023, 335, 111795. [Google Scholar] [CrossRef] [PubMed]
  23. Mingxuan, Z.; Wei, L.; Hui, L.; Ruinan, Z.; Yiming, D. Anti-gravity stem-seeking restoration algorithm for maize seed root image phenotype detection. Comput. Electron. Agric. 2022, 202, 107337. [Google Scholar] [CrossRef]
  24. Shi, H.; Shi, D.; Wang, S.; Li, W.; Wen, H.; Deng, H. Crop plant automatic detecting based on in-field images by lightweight DFU-Net model. Comput. Electron. Agric. 2024, 217, 108649. [Google Scholar] [CrossRef]
  25. Zhuang, L.; Wang, C.; Hao, H.; Li, J.; Xu, L.; Liu, S.; Guo, X. Maize emergence rate and leaf emergence speed estimation via image detection under field rail-based phenotyping platform. Comput. Electron. Agric. 2024, 220, 108838. [Google Scholar] [CrossRef]
  26. Gao, Y.; Wang, Q.; Rao, X.; Xie, L.; Ying, Y. OrangeStereo: A navel orange stereo matching network for 3D surface reconstruction. Comput. Electron. Agric. 2024, 217, 108626. [Google Scholar] [CrossRef]
  27. Gargiulo, L.; Sorrentino, G.; Mele, G. 3D imaging of bean seeds: Correlations between hilum region structures and hydration kinetics. Food Res. Int. 2020, 134, 109211. [Google Scholar] [CrossRef]
  28. Chang, S.; Lee, U.; Kim, J.-B.; Jo, Y.D. Application of 3D-volumetric analysis and hyperspectral imaging systems for investigation of heterosis and cytoplasmic effects in pepper. Sci. Hortic. 2022, 302, 111150. [Google Scholar] [CrossRef]
  29. Ma, X.; Wei, B.; Guan, H.; Cheng, Y.; Zhuo, Z. A method for calculating and simulating phenotype of soybean based on 3D reconstruction. Eur. J. Agron. 2024, 154, 127070. [Google Scholar] [CrossRef]
  30. Mon, T.; ZarAung, N. Vision based volume estimation method for automatic mango grading system. Biosyst. Eng. 2020, 198, 338–349. [Google Scholar] [CrossRef]
  31. Shomali, A.; Aliniaeifard, S.; Bakhtiarizadeh, M.R.; Lotfi, M.; Mohammadian, M.; Sadegh, M.; Rastogi, A. Artificial neural network (ANN)-based algorithms for high light stress phenotyping of tomato genotypes using chlorophyll fluorescence features. Plant Physiol. Biochem. 2023, 201, 107893. [Google Scholar] [CrossRef]
  32. Yu, H.; Weng, L.; Wu, Q.; He, J.; Yuan, Y.; Wang, J.; Xu, X.; Feng, X. Time-Series & High-Resolution UAV Data for Soybean Growth Analysis by Combining Multimodal Deep Learning and Dynamic Modelling. Plant Phenomics 2024, 6, 0158. [Google Scholar] [CrossRef] [PubMed]
  33. Yu, H.; Dong, M.; Zhao, R.; Zhang, L.; Sui, Y. Research on precise phenotype identification and growth prediction of lettuce based on deep learning. Environ. Res. 2024, 252, 118845. [Google Scholar] [CrossRef] [PubMed]
  34. Zhao, X.; Zhang, J.; Pu, R.; Shu, Z.; He, W.; Wu, K. The continuous wavelet projections algorithm: A practical spectral-feature-mining approach for crop detection. Crop J. 2022, 10, 1264–1273. [Google Scholar] [CrossRef]
  35. Tagarakis, A.C.; Filippou, E.; Kalaitzidis, D.; Benos, L.; Busato, P.; Bochtis, D. Proposing UGV and UAV Systems for 3D Mapping of Orchard Environments. Sensors 2022, 22, 1571. [Google Scholar] [CrossRef]
  36. Hasan, A.; Diepeveen, D.; Laga, H.; Jones, K.; Sohel, F. Image patch-based deep learning approach for crop and weed recognition. Ecol. Inform. 2023, 78, 102361. [Google Scholar] [CrossRef]
  37. Debnath, S.; Preetham, A.; Vuppu, S.; Prasad, N. Optimal weighted GAN and U-Net based segmentation for phenotypic trait estimation of crops using Taylor Coot algorithm. Appl. Soft Comput. 2023, 144, 110396. [Google Scholar] [CrossRef]
  38. Yang, Z.; Han, Y. A Low-Cost 3D Phenotype Measurement Method of Leafy Vegetables Using Video Recordings from Smartphones. Sensors 2020, 20, 6068. [Google Scholar] [CrossRef] [PubMed]
  39. Wang, L.; Zhang, H.; Bian, L.; Zhou, L.; Wang, S.; Ge, Y. Poplar seedling varieties and drought stress classification based on multi-source, time-series data and deep learning. Ind. Crop. Prod. 2024, 218, 118905. [Google Scholar] [CrossRef]
  40. Camilo, J.; Bunn, C.; Rahn, E.; Little-Savage, D.; Schimidt, P.; Ryo, M. Geographic-scale coffee cherry counting with smartphones and deep learning. Plant Phenomics 2024, 6, 0165. [Google Scholar] [CrossRef]
  41. Liu, L.; Yu, L.; Wu, D.; Ye, J.; Feng, H.; Liu, Q.; Yang, W. PocketMaize: An Android-Smartphone Application for Maize Plant Phenotyping. Front. Plant Sci. 2021, 12, 770217. [Google Scholar] [CrossRef]
  42. Röckel, F.; Schreiber, T.; Schüler, D.; Braun, U.; Krukenberg, I.; Schwander, F.; Peil, A.; Brandt, C.; Willner, E.; Gransow, D.; et al. PhenoApp: A mobile tool for plant phenotyping to record field and greenhouse observations. F1000Research 2022, 11, 12. [Google Scholar] [CrossRef] [PubMed]
  43. Tu, K.; Wu, W.; Cheng, Y.; Zhang, H.; Xu, Y.; Dong, X.; Wang, M.; Sun, Q. AIseed: An automated image analysis software for high-throughput phenotyping and quality non-destructive testing of individual plant seeds. Comput. Electron. Agric. 2023, 207, 107740. [Google Scholar] [CrossRef]
  44. Bolouri, F.; Kocoglu, Y.; Lorraine, I.; Ritchie, G.L.; Sari-Sarraf, H. CottonSense: A high-throughput field phenotyping system for cotton fruit segmentation and enumeration on edge devices. Comput. Electron. Agric. 2024, 216, 108531. [Google Scholar] [CrossRef]
  45. Teixeira, E.; George, M.; Johnston, P.; Malcolm, B.; Liu, J.; Ward, R.; Brown, H.; Cichota, R.; Kersebaum, K.C.; Richards, K.; et al. Phenotyping early-vigour in oat cover crops to assess plant-trait effects across environments. Field Crop. Res. 2023, 291, 108781. [Google Scholar] [CrossRef]
  46. Hoffman, N.; Singels, A.; Joshi, S. Aerial phenotyping for sugarcane yield and drought tolerance. Field Crop. Res. 2024, 308, 109275. [Google Scholar] [CrossRef]
  47. Krämer, J.; Siegmann, B.; Kraska, T.; Muller, O.; Rascher, U. The potential of spatial aggregation to extract remotely sensed sun-induced fluorescence (SIF) of small-sized experimental plots for applications in crop phenotyping. Int. J. Appl. Earth Obs. Geoinf. 2021, 104, 102565. [Google Scholar] [CrossRef]
  48. Kim, Y.H.; Lee, S.J.; Yun, C.; Im, S.J.; Park, K.R. LCW-Net: Low-light-image-based crop and weed segmentation network using attention module in two decoders. Eng. Appl. Artif. Intell. 2023, 126, 106890. [Google Scholar] [CrossRef]
  49. Bai, G.; Ge, Y.; Leavitt, B.; Gamon, J.A.; Scoby, D. Goniometer in the air: Enabling BRDF measurement of crop canopies using a cable-suspended plant phenotyping platform. Biosyst. Eng. 2023, 230, 344–360. [Google Scholar] [CrossRef]
  50. Fang, H.; Xie, Z.; Li, H.; Guo, Y.; Li, B.; Liu, Y.; Ma, Y. Image-based root phenotyping for field-grown crops: An example under maize/soybean intercropping. J. Integr. Agric. 2022, 21, 1606–1619. [Google Scholar] [CrossRef]
  51. Selvaraj, M.G.; Valderrama, M.; Guzman, D.; Valencia, M.; Ruiz, H.; Acharjee, A. Machine learning for high-throughput field phenotyping and image processing provides insight into the association of above and below-ground traits in cassava (Manihot esculenta Crantz). Plant Methods 2020, 16, 87. [Google Scholar] [CrossRef]
  52. Niu, Y.; Han, W.; Zhang, H.; Zhang, L.; Chen, H. Estimating maize plant height using a crop surface model constructed from UAV RGB images. Biosyst. Eng. 2024, 241, 56–67. [Google Scholar] [CrossRef]
  53. Xie, T.; Li, J.; Yang, C.; Jiang, Z.; Chen, Y.; Guo, L.; Zhang, J. Crop height estimation based on UAV images: Methods, errors, and strategies. Comput. Electron. Agric. 2021, 185, 106155. [Google Scholar] [CrossRef]
  54. Yang, D.; Yang, H.; Liu, D.; Wang, X. Research on automatic 3D reconstruction of plant phenotype based on Multi-View images. Comput. Electron. Agric. 2024, 220, 108866. [Google Scholar] [CrossRef]
  55. Bhadra, S.; Sagan, V.; Sarkar, S.; Braud, M.; Mockler, T.C.; Eveland, A.L. PROSAIL-Net: A transfer learning-based dual stream neural network to estimate leaf chlorophyll and leaf angle of crops from UAV hyperspectral images. ISPRS J. Photogramm. Remote Sens. 2024, 210, 1–24. [Google Scholar] [CrossRef]
  56. Jayasuriya, N.; Guo, Y.; Hu, W.; Ghannoum, O. Machine vision based plant height estimation for protected crop facilities. Comput. Electron. Agric. 2024, 218, 108669. [Google Scholar] [CrossRef]
  57. Yang, M.; Cho, S.-I. High-Resolution 3D Crop Reconstruction and Automatic Analysis of Phenotyping Index Using Machine Learning. Agriculture 2021, 11, 1010. [Google Scholar] [CrossRef]
  58. Maraveas, C.; Asteris, P.G.; Arvanitis, K.G.; Bartzanas, T.; Loukatos, D. Application of Bio and Nature-Inspired Algorithms in Agricultural Engineering. Arch. Comput. Methods Eng. 2023, 30, 1979–2012. [Google Scholar] [CrossRef]
  59. Sampaio, G.S.; Silva, L.A.; Marengoni, M. 3D Reconstruction of Non-Rigid Plants and Sensor Data Fusion for Agriculture Phenotyping. Sensors 2021, 21, 4115. [Google Scholar] [CrossRef]
  60. Maraveas, C. Incorporating Artificial Intelligence Technology in Smart Greenhouses: Current State of the Art. Appl. Sci. 2023, 13, 14. [Google Scholar] [CrossRef]
  61. Anderegg, J.; Zenkl, R.; Walter, A.; Hund, A.; McDonald, B.A. Combining high-resolution imaging, deep learning, and dynamic modelling to separate disease and senescence in wheat canopies. Plant Phenomics 2023, 5, 0053. [Google Scholar] [CrossRef]
  62. Ji, X.; Zhou, Z.; Gouda, M.; Zhang, W.; He, Y.; Ye, G.; Li, X. A novel labor-free method for isolating crop leaf pixels from RGB imagery: Generating labels via a topological strategy. Comput. Electron. Agric. 2024, 218, 108631. [Google Scholar] [CrossRef]
  63. Haghshenas, A.; Emam, Y. Green-gradient based canopy segmentation: A multipurpose image mining model with potential use in crop phenotyping and canopy studies. Comput. Electron. Agric. 2020, 178, 105740. [Google Scholar] [CrossRef]
  64. Haque, S.; Lobaton, E.; Nelson, N.; Yencho, G.C.; Pecota, K.V.; Mierop, R.; Kudenov, M.W.; Boyette, M.; Williams, C.M. Computer vision approach to characterize size and shape phenotypes of horticultural crops using high-throughput imagery. Comput. Electron. Agric. 2021, 182, 106011. [Google Scholar] [CrossRef]
  65. Hao, H.; Wu, S.; Li, Y.; Wen, W.; Fan, J.; Zhang, Y.; Zhuang, L.; Xu, L.; Li, H.; Guo, X.; et al. Automatic acquisition, analysis and wilting measurement of cotton 3D phenotype based on point cloud. Biosyst. Eng. 2024, 239, 173–189. [Google Scholar] [CrossRef]
  66. Zhang, C.; Craine, W.; McGee, R.; Vandemark, G.; Davis, J.; Brown, J.; Hulbert, S.; Sankaran, S. Image-Based Phenotyping of Flowering Intensity in Cool-Season Crops. Sensors 2020, 20, 1450. [Google Scholar] [CrossRef]
  67. Rossi, R.; Costafreda-Aumedes, S.; Leolini, L.; Leolini, C.; Bindi, M.; Moriondo, M. Implementation of an algorithm for automated phenotyping through plant 3D-modeling: A practical application on the early detection of water stress. Comput. Electron. Agric. 2022, 197, 106937. [Google Scholar] [CrossRef]
  68. Boogaard, F.P.; van Henten, E.J.; Kootstra, G. The added value of 3D point clouds for digital plant phenotyping—A case study on internode length measurements in cucumber. Biosyst. Eng. 2023, 234, 1–12. [Google Scholar] [CrossRef]
  69. Cuevas-Velasquez, H.; Gallego, A.; Fisher, R.B. Segmentation and 3D reconstruction of rose plants from stereoscopic images. Comput. Electron. Agric. 2020, 171, 105296. [Google Scholar] [CrossRef]
  70. Chen, M.; Tang, Y.; Zou, X.; Huang, K.; Huang, Z.; Zhou, H.; Wang, C.; Lian, G. Three-dimensional perception of orchard banana central stock enhanced by adaptive multi-vision technology. Comput. Electron. Agric. 2020, 174, 105508. [Google Scholar] [CrossRef]
  71. Chen, Y.; Zhang, B.; Zhou, J.; Wang, K. Real-time 3D unstructured environment reconstruction utilizing VR and Kinect-based immersive teleoperation for agricultural field robots. Comput. Electron. Agric. 2020, 175, 105579. [Google Scholar] [CrossRef]
  72. Isachsen, U.J.; Theoharis, T.; Misimi, E. Fast and accurate GPU-accelerated, high-resolution 3D registration for the robotic 3D reconstruction of compliant food objects. Comput. Electron. Agric. 2021, 180, 105929. [Google Scholar] [CrossRef]
  73. Lin, G.; Tang, Y.; Zou, X.; Wang, C. Three-dimensional reconstruction of guava fruits and branches using instance segmentation and geometry analysis. Comput. Electron. Agric. 2021, 184, 106107. [Google Scholar] [CrossRef]
  74. Ma, B.; Du, J.; Wang, L.; Jiang, H.; Zhou, M. Automatic branch detection of jujube trees based on 3D reconstruction for dormant pruning using the deep learning-based method. Comput. Electron. Agric. 2021, 190, 106484. [Google Scholar] [CrossRef]
  75. Fan, G.; Liang, H.; Zhao, Y.; Li, Y. Automatic reconstruction of three-dimensional root system architecture based on ground penetrating radar. Comput. Electron. Agric. 2022, 197, 106969. [Google Scholar] [CrossRef]
  76. Zhao, D.; Eyre, J.X.; Wilkus, E.; de Voil, P.; Broad, I.; Rodriguez, D. 3D characterization of crop water use and the rooting system in field agronomic research. Comput. Electron. Agric. 2022, 202, 107409. [Google Scholar] [CrossRef]
  77. Zhu, T.; Ma, X.; Guan, H.; Wu, X.; Wang, F.; Yang, C.; Jiang, Q. A calculation method of phenotypic traits based on three-dimensional reconstruction of tomato canopy. Comput. Electron. Agric. 2023, 204, 107515. [Google Scholar] [CrossRef]
  78. Torres-Sánchez, J.; Escolà, A.; Isabel de Castro, A.; López-Granados, F.; Rosell-Polo, J.R.; Sebé, F.; Manuel Jiménez-Brenes, F.; Sanz, R.; Gregorio, E.; Peña, J.M.; et al. Mobile terrestrial laser scanner vs. UAV photogrammetry to estimate woody crop canopy parameters—Part 2: Comparison for different crops and training systems. Comput. Electron. Agric. 2023, 212, 108083. [Google Scholar] [CrossRef]
  79. Chen, M.; Tang, Y.; Zou, X.; Huang, Z.; Zhou, H.; Chen, S. 3D global mapping of large-scale unstructured orchard integrating eye-in-hand stereo vision and SLAM. Comput. Electron. Agric. 2021, 187, 106237. [Google Scholar] [CrossRef]
  80. Dong, X.; Kim, W.-Y.; Zheng, Y.; Oh, J.-Y.; Ehsani, R.; Lee, K.-H. Three-dimensional quantification of apple phenotypic traits based on deep learning instance segmentation. Comput. Electron. Agric. 2023, 212, 108156. [Google Scholar] [CrossRef]
  81. Xiong, J.; Liang, J.; Zhuang, Y.; Hong, D.; Zheng, Z.; Liao, S.; Hu, W.; Yang, Z. Real-time localization and 3D semantic map reconstruction for unstructured citrus orchards. Comput. Electron. Agric. 2023, 213, 108217. [Google Scholar] [CrossRef]
  82. Feng, J.; Saadati, M.; Jubery, T.; Jignasu, A.; Balu, A.; Li, Y.; Attigala, L.; Schnable, P.S.; Sarkar, S.; Ganapathysubramanian, B.; et al. 3D reconstruction of plants using probabilistic voxel carving. Comput. Electron. Agric. 2023, 213, 108248. [Google Scholar] [CrossRef]
  83. Wu, T.; Dai, J.; Shen, P.; Liu, H.; Wei, Y. Seedscreener: A novel integrated wheat germplasm phenotyping platform based on NIR-feature detection and 3D-reconstruction. Comput. Electron. Agric. 2023, 215, 108378. [Google Scholar] [CrossRef]
  84. James, C.; Smith, D.; He, W.; Chandra, S.S.; Chapman, S.C. GrainPointNet: A deep-learning framework for non-invasive sorghum panicle grain count phenotyping. Comput. Electron. Agric. 2024, 217, 108485. [Google Scholar] [CrossRef]
  85. Wen, W.; Wu, S.; Lu, X.; Liu, X.; Gu, S.; Guo, X. Accurate and semantic 3D reconstruction of maize leaves. Comput. Electron. Agric. 2024, 217, 108566. [Google Scholar] [CrossRef]
  86. Ni, X.; Li, C.; Jiang, H.; Takeda, F. Three-dimensional photogrammetry with deep learning instance segmentation to extract berry fruit harvestability traits. ISPRS J. Photogramm. Remote Sens. 2021, 171, 297–309. [Google Scholar] [CrossRef]
  87. Xiao, S.; Ye, Y.; Fei, S.; Chen, H.; Zhang, B.; Li, Q.; Cai, Z.; Che, Y.; Wang, Q.; Ghafoor, A.; et al. High-throughput calculation of organ-scale traits with reconstructed accurate 3D canopy structures using a UAV RGB camera with an advanced cross-circling oblique route. ISPRS J. Photogramm. Remote Sens. 2023, 201, 104–122. [Google Scholar] [CrossRef]
  88. Xiao, S.; Fei, S.; Ye, Y.; Xu, D.; Xie, Z.; Bi, K.; Guo, Y.; Li, B.; Zhang, R.; Ma, Y. 3D reconstruction and characterization of cotton bolls in situ based on UAV technology. ISPRS J. Photogramm. Remote Sens. 2024, 209, 101–116. [Google Scholar] [CrossRef]
  89. Liu, J.; Xu, X.; Liu, Y.; Rao, Z.; Smith, M.L.; Jin, L.; Li, B. Quantitative potato tuber phenotyping by 3D imaging. Biosyst. Eng. 2021, 210, 48–59. [Google Scholar] [CrossRef]
  90. Zhou, W.; Chen, Y.; Li, W.; Zhang, C.; Xiong, Y.; Zhan, W.; Huang, L.; Wang, J.; Qiu, L. SPP-extractor: Automatic phenotype extraction for densely grown soybean plants. Crop J. 2023, 11, 1569–1578. [Google Scholar] [CrossRef]
  91. Liu, H.; Zhang, W.; Wang, F.; Sun, X.; Wang, J.; Wang, C.; Wang, X. Application of an improved watershed algorithm based on distance map reconstruction in bean image segmentation. Heliyon 2023, 9, e15097. [Google Scholar] [CrossRef]
  92. Li, L.; Bie, Z.; Zhang, Y.; Huang, Y.; Peng, C.; Han, B.; Xu, S. Nondestructive Detection of Key Phenotypes for the Canopy of the Watermelon Plug Seedlings Based on Deep Learning. Hortic. Plant J. 2023, in press. [Google Scholar] [CrossRef]
  93. Zhou, J.; Cui, M.; Wu, Y.; Gao, Y.; Tang, Y.; Chen, Z.; Hou, L.; Tian, H. Maize (Zea mays L.) Stem Target Region Extraction and Stem Diameter Measurement Based on an Internal Gradient Algorithm in Field Conditions. Agronomy 2023, 13, 1185. [Google Scholar] [CrossRef]
  94. Liu, H.; Xin, C.; Lai, M.; He, H.; Wang, Y.; Wang, M.; Li, J. RepC-MVSNet: A Reparameterized Self-Supervised 3D Reconstruction Algorithm for Wheat 3D Reconstruction. Agronomy 2023, 13, 1975. [Google Scholar] [CrossRef]
  95. Sun, Y.; Miao, L.; Zhao, Z.; Pan, T.; Wang, X.; Guo, Y.; Xin, D.; Chen, Q.; Zhu, R. An Efficient and Automated Image Preprocessing Using Semantic Segmentation for Improving the 3D Reconstruction of Soybean Plants at the Vegetative Stage. Agronomy 2023, 13, 2388. [Google Scholar] [CrossRef]
  96. Begot, L.; Slavkovic, F.; Oger, M.; Pichot, C.; Morin, H.; Boualem, A.; Favier, A.-L.; Bendahmane, A. Precision Phenotyping of Nectar-Related Traits Using X-ray Micro Computed Tomography. Cells 2022, 11, 3452. [Google Scholar] [CrossRef] [PubMed]
  97. Li, Y.; Liu, J.; Zhang, B.; Wang, Y.; Yao, J.; Zhang, X.; Fan, B.; Li, X.; Hai, Y.; Fan, X. Three-dimensional reconstruction and phenotype measurement of maize seedlings based on multi-view image sequences. Front. Plant Sci. 2022, 13, 974339. [Google Scholar] [CrossRef]
  98. Zhu, B.; Liu, F.; Xie, Z.; Guo, Y.; Li, B.; Ma, Y. Quantification of light interception within image-based 3-D reconstruction of sole and intercropped canopies over the entire growth season. Ann. Bot. 2020, 126, 701–712. [Google Scholar] [CrossRef]
  99. Chang, A.; Jung, J.; Yeom, J.; Landivar, J. 3D Characterization of Sorghum Panicles Using a 3D Point Cloud Derived from UAV Imagery. Remote Sens. 2021, 13, 282. [Google Scholar] [CrossRef]
  100. Varela, S.; Pederson, T.L.; Leakey, A.D.B. Implementing Spatio-Temporal 3D-Convolution Neural Networks and UAV Time Series Imagery to Better Predict Lodging Damage in Sorghum. Remote Sens. 2022, 14, 733. [Google Scholar] [CrossRef]
  101. Varela, S.; Zheng, X.; Njuguna, J.N.; Sacks, E.J.; Allen, D.P.; Ruhter, J.; Leakey, A.D.B. Deep Convolutional Neural Networks Exploit High-Spatial- and -Temporal-Resolution Aerial Imagery to Phenotype Key Traits in Miscanthus. Remote Sens. 2022, 14, 5333. [Google Scholar] [CrossRef]
  102. Nguyen, C.; Sagan, V.; Skobalski, J.; Severo, J.I. Early Detection of Wheat Yellow Rust Disease and Its Impact on Terminal Yield with Multi-Spectral UAV-Imagery. Remote Sens. 2023, 15, 3301. [Google Scholar] [CrossRef]
  103. Okamoto, Y.; Ikeno, H.; Hirano, Y.; Tanikawa, T.; Yamase, K.; Todo, C.; Dannoura, M.; Ohashi, M. 3D reconstruction using Structure-from-Motion: A new technique for morphological measurement of tree root systems. Plant Soil 2022, 477, 829–841. [Google Scholar] [CrossRef]
  104. Liu, Y.; Yuan, H.; Zhao, X.; Fan, C.; Cheng, M. Fast reconstruction method of three-dimension model based on dual RGB-D cameras for peanut plant. Plant Methods 2023, 19, 17. [Google Scholar] [CrossRef]
  105. Zhu, R.; Sun, K.; Yan, Z.; Xue-hui, Y.; Jiang-lin, Y.; Shi, J.; Hu, Z.; Jiang, H.; Xin, D.; Zhang, Z.; et al. Analysing the phenotype development of soybean plants using low-cost 3D reconstruction. Sci. Rep. 2020, 10, 7055. [Google Scholar] [CrossRef]
Figure 1. Impact of cold stress on the late reproductive phase in the growth of maize [7].
Figure 2. PRISMA Flowchart.
Figure 3. Detection of winter canola and pea flowers using k-means clustering in (a,c) and thresholding in (b,d) [32]. The rectangles mark noise detected by k-means clustering, while the ovals mark flowers missed by thresholding. In (a), k-means clustering accurately detected the yellow winter canola flowers under both sunlight and shadow. In (b), thresholding did not capture stems and flowers under shadow. For the pea flowers in (c,d), thresholding removed the non-flower pixels properly, whereas k-means clustering incorrectly placed light-colored parts of the leaves, stems, and extension poles in the same cluster as the flowers.
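Figure 3 contrasts k-means clustering with fixed colour thresholding for flower-pixel detection. The essence of that comparison can be reproduced on synthetic RGB pixels; the colours, cluster count, initialisation, and threshold values below are assumptions of this toy sketch, not the settings of the cited study.

```python
import numpy as np

rng = np.random.default_rng(1)
# Synthetic RGB pixels in [0, 1]: 300 "flower" pixels (yellow) followed
# by 700 "canopy" pixels (green); colours are illustrative only.
flowers = np.clip(rng.normal([0.90, 0.85, 0.10], 0.05, (300, 3)), 0, 1)
canopy = np.clip(rng.normal([0.20, 0.55, 0.20], 0.05, (700, 3)), 0, 1)
pixels = np.vstack([flowers, canopy])

def kmeans(X, init, iters=20):
    # Lloyd's algorithm: assign each pixel to its nearest centroid,
    # then move each centroid to the mean of its members.
    centroids = X[list(init)].copy()
    for _ in range(iters):
        labels = ((X[:, None] - centroids[None]) ** 2).sum(-1).argmin(1)
        for j in range(len(centroids)):
            if np.any(labels == j):  # skip empty clusters
                centroids[j] = X[labels == j].mean(0)
    return labels, centroids

# Deterministic init for this toy: one pixel from each end of the array.
labels, centroids = kmeans(pixels, init=(0, len(pixels) - 1))
# The "yellower" centroid (high red, low blue) is the flower cluster.
flower_cluster = int(np.argmax(centroids[:, 0] - centroids[:, 2]))
kmeans_mask = labels == flower_cluster

# A fixed colour threshold, the alternative compared in Figure 3.
threshold_mask = (pixels[:, 0] > 0.7) & (pixels[:, 2] < 0.3)
```

As the figure illustrates, the threshold is simpler but brittle under shadow, while k-means needs no tuned cutoff but can absorb light-coloured non-flower pixels into the flower cluster.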
Figure 4. Data collection using UAV images for chlorophyll concentration estimation. The UAV system (a) consisted of a gimbal holding the hyperspectral sensor and a GNSS/IMU (b). The UAV was flown in a cross-grid pattern (c), and radiometric calibration was performed using a reflectance tarp (d). Ground truth data were collected by manual measurement alongside the UAV flights (e): the ALA was calculated using a handmade protractor tool (f), which measures the angle between the stem and leaf, and the LCC was measured in (g) [44].
Figure 5. Original and processed seed images using AIseed software [43].
Table 1. Inclusion and exclusion criteria.
| Focus | Inclusion | Exclusion |
| --- | --- | --- |
| Scope | Studies focused on the latest developments, benefits, limitations, and future directions of image analysis phenotyping technologies based on AI, ML, 3D imaging, and software solutions | Studies not focused on algorithms used in plant phenotyping |
| Period | 2020–2024 | Before 2020 |
| Language | English | All non-English languages |
| Design | Primary experimental studies | Secondary reviews |
| Type | Peer-reviewed journal articles | Grey literature, blogs |
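The screening step that Table 1 summarizes can be expressed as a simple predicate over study records. The field names and example records below are hypothetical illustrations; the actual PRISMA screening for this review was performed manually.

```python
# Hypothetical record fields; the real screening was done by hand (PRISMA).
STUDY_TYPES = {"peer-reviewed journal article"}

def passes_screening(record):
    """Apply the Table 1 inclusion/exclusion criteria to one record."""
    return (
        2020 <= record["year"] <= 2024           # Period
        and record["language"] == "English"      # Language
        and record["design"] == "primary"        # Design
        and record["type"] in STUDY_TYPES        # Type
        and record["on_phenotyping_algorithms"]  # Scope
    )

records = [
    {"year": 2022, "language": "English", "design": "primary",
     "type": "peer-reviewed journal article", "on_phenotyping_algorithms": True},
    {"year": 2018, "language": "English", "design": "primary",
     "type": "peer-reviewed journal article", "on_phenotyping_algorithms": True},
    {"year": 2023, "language": "English", "design": "secondary review",
     "type": "peer-reviewed journal article", "on_phenotyping_algorithms": True},
]
kept = [r for r in records if passes_screening(r)]  # only the 2022 primary study survives
```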
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
