Search Results (107)

Search Parameters:
Keywords = automated plant phenotyping

29 pages, 59556 KB  
Review
Application of Deep Learning Technology in Monitoring Plant Attribute Changes
by Shuwei Han and Haihua Wang
Sustainability 2025, 17(17), 7602; https://doi.org/10.3390/su17177602 - 22 Aug 2025
Viewed by 334
Abstract
With the advancement of remote sensing imagery and multimodal sensing technologies, monitoring plant trait dynamics has emerged as a critical area of research in modern agriculture. Traditional approaches, which rely on handcrafted features and shallow models, struggle to effectively address the complexity inherent in high-dimensional and multisource data. In contrast, deep learning, with its end-to-end feature extraction and nonlinear modeling capabilities, has substantially improved monitoring accuracy and automation. This review summarizes recent developments in the application of deep learning methods—including CNNs, RNNs, LSTMs, Transformers, GANs, and VAEs—to tasks such as growth monitoring, yield prediction, pest and disease identification, and phenotypic analysis. It further examines prominent research themes, including multimodal data fusion, transfer learning, and model interpretability. Additionally, it discusses key challenges related to data scarcity, model generalization, and real-world deployment. Finally, the review outlines prospective directions for future research, aiming to inform the integration of deep learning with phenomics and intelligent IoT systems and to advance plant monitoring toward greater intelligence and high-throughput capabilities. Full article
(This article belongs to the Section Sustainable Agriculture)

23 pages, 18349 KB  
Article
Estimating Radicle Length of Germinating Elm Seeds via Deep Learning
by Dantong Li, Yang Luo, Hua Xue and Guodong Sun
Sensors 2025, 25(16), 5024; https://doi.org/10.3390/s25165024 - 13 Aug 2025
Viewed by 224
Abstract
Accurate measurement of seedling traits is essential for plant phenotyping, particularly in understanding growth dynamics and stress responses. Elm trees (Ulmus spp.), ecologically and economically significant, pose unique challenges due to their curved seedling morphology. Traditional manual measurement methods are time-consuming, prone to human error, and often lack consistency. Moreover, automated approaches remain limited and often fail to accurately process seedlings with nonlinear or curved morphologies. In this study, we introduce GLEN, a deep learning-based model for detecting germinating elm seeds and accurately estimating the lengths of their germinating structures. It leverages a dual-path architecture that combines pixel-level spatial features with instance-level semantic information, enabling robust measurement of curved radicles. To support training, we construct GermElmData, a curated dataset of annotated elm seedling images, and introduce a novel synthetic data generation pipeline that produces high-fidelity, morphologically diverse germination images. This reduces the dependence on extensive manual annotations and improves model generalization. Experimental results demonstrate that GLEN achieves an estimation error on the order of millimeters, outperforming existing models. Beyond quantifying germinating elm seeds, the architectural design and data augmentation strategies in GLEN offer a scalable framework for morphological quantification in both plant phenotyping and broader biomedical imaging domains. Full article
(This article belongs to the Section Intelligent Sensors)
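
For illustration only, here is a minimal sketch of one way the length of a curved radicle can be approximated from a binary segmentation mask, assuming a hypothetical mask file and mm-per-pixel calibration; it is not the GLEN architecture described above.

```python
# Illustrative sketch only (not the GLEN model): approximate the length of a
# curved radicle from a binary segmentation mask by skeletonizing it and
# counting centerline pixels. File name and mm-per-pixel scale are assumptions.
import numpy as np
from skimage.io import imread
from skimage.morphology import skeletonize

MM_PER_PIXEL = 0.05  # hypothetical calibration factor from the imaging setup

mask = imread("radicle_mask.png", as_gray=True) > 0.5  # binary radicle mask
skeleton = skeletonize(mask)                           # 1-pixel-wide centerline

# A simple length proxy: number of centerline pixels times the pixel pitch.
# This slightly underestimates diagonal runs but needs no curve ordering.
length_mm = skeleton.sum() * MM_PER_PIXEL
print(f"Estimated radicle length: {length_mm:.2f} mm")
```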

11 pages, 638 KB  
Communication
Millet in Bioregenerative Life Support Systems: Hypergravity Resilience and Predictive Yield Models
by Tatiana S. Aniskina, Arkady N. Kudritsky, Olga A. Shchuklina, Nikita E. Andreev and Ekaterina N. Baranova
Life 2025, 15(8), 1261; https://doi.org/10.3390/life15081261 - 7 Aug 2025
Viewed by 435
Abstract
The prospects for long-distance space flights are becoming increasingly realistic, and one of the key factors for their implementation is the creation of sustainable systems for producing food on site. Therefore, the aim of our work is to assess the prospects for using millet in biological life support systems and to create predictive models of yield components for automating plant cultivation control. The study found that stress from hypergravity (800 g, 1200 g, 2000 g, and 3000 g) in the early stages of millet germination does not affect seedlings or yield. In a closed system, millet yield reached 0.31 kg/m2, the weight of 1000 seeds was 8.61 g, and the yield index was 0.06. The paper describes 40 quantitative traits, including six leaf and trichome traits and nine grain traits from the lower, middle and upper parts of the inflorescence. The compiled predictive regression equations allow predicting the accumulation of biomass in seedlings on the 10th and 20th days of cultivation, as well as the weight of 1000 seeds, the number of productive inflorescences, the total above-ground mass, and the number and weight of grains per plant. These equations open up opportunities for the development of computer vision and high-speed plant phenotyping programs that will allow automatic correction of the plant cultivation process and modeling of the required yield. Predicting biomass yield will also be useful in assessing the load on the waste-free processing system for plant waste at planetary stations. Full article
(This article belongs to the Special Issue Physiological Responses of Plants Under Abiotic Stresses)
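
As a hedged illustration of the kind of predictive regression equations mentioned above, the sketch below fits a multiple linear regression predicting 1000-seed weight from seedling traits; the trait columns and values are placeholders, not the paper's data or coefficients.

```python
# Hedged sketch: fitting a multiple linear regression of the kind described
# (predicting 1000-seed weight from seedling traits). The trait names and
# numbers below are placeholders, not the study's measurements.
import numpy as np
from sklearn.linear_model import LinearRegression

# Columns: seedling biomass on day 10 (g), seedling biomass on day 20 (g),
# productive inflorescences per plant (hypothetical example values).
X = np.array([[0.12, 0.45, 3],
              [0.15, 0.52, 4],
              [0.10, 0.40, 3],
              [0.18, 0.60, 5],
              [0.14, 0.48, 4]])
y = np.array([8.1, 8.7, 7.9, 9.0, 8.5])  # 1000-seed weight (g), placeholder

model = LinearRegression().fit(X, y)
print("Coefficients:", model.coef_, "Intercept:", model.intercept_)
print("Predicted 1000-seed weight:", model.predict([[0.13, 0.50, 4]])[0])
```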

16 pages, 5301 KB  
Article
TSINet: A Semantic and Instance Segmentation Network for 3D Tomato Plant Point Clouds
by Shanshan Ma, Xu Lu and Liang Zhang
Appl. Sci. 2025, 15(15), 8406; https://doi.org/10.3390/app15158406 - 29 Jul 2025
Viewed by 324
Abstract
Accurate organ-level segmentation is essential for achieving high-throughput, non-destructive, and automated plant phenotyping. To address the challenge of intelligent acquisition of phenotypic parameters in tomato plants, we propose TSINet, an end-to-end dual-task segmentation network designed for effective and precise semantic labeling and instance recognition of tomato point clouds, based on the Pheno4D dataset. TSINet adopts an encoder–decoder architecture, where a shared encoder incorporates four Geometry-Aware Adaptive Feature Extraction Blocks (GAFEBs) to effectively capture local structures and geometric relationships in raw point clouds. Two parallel decoder branches are employed to independently decode shared high-level features for the respective segmentation tasks. Additionally, a Dual Attention-Based Feature Enhancement Module (DAFEM) is introduced to further enrich feature representations. The experimental results demonstrate that TSINet achieves superior performance in both semantic and instance segmentation, particularly excelling in challenging categories such as stems and large-scale instances. Specifically, TSINet achieves 97.00% mean precision, 96.17% recall, 96.57% F1-score, and 93.43% IoU in semantic segmentation and 81.54% mPrec, 81.69% mRec, 81.60% mCov, and 86.40% mWCov in instance segmentation. Compared with state-of-the-art methods, TSINet achieves balanced improvements across all metrics, significantly reducing false positives and false negatives while enhancing spatial completeness and segmentation accuracy. Furthermore, we conducted ablation studies and generalization tests to systematically validate the effectiveness of each TSINet component and the overall robustness of the model. This study provides an effective technological approach for high-throughput automated phenotyping of tomato plants, contributing to the advancement of intelligent agricultural management. Full article
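
The segmentation scores quoted above can in principle be reproduced from predicted and ground-truth point labels; the sketch below shows the standard per-class precision, recall, F1, and IoU computation on placeholder arrays and is not the TSINet evaluation code.

```python
# Minimal sketch of the per-class metrics reported above (precision, recall,
# F1, IoU) computed from predicted vs. ground-truth point labels.
import numpy as np

def per_class_metrics(pred, gt, cls):
    tp = np.sum((pred == cls) & (gt == cls))
    fp = np.sum((pred == cls) & (gt != cls))
    fn = np.sum((pred != cls) & (gt == cls))
    prec = tp / (tp + fp + 1e-9)
    rec = tp / (tp + fn + 1e-9)
    f1 = 2 * prec * rec / (prec + rec + 1e-9)
    iou = tp / (tp + fp + fn + 1e-9)
    return prec, rec, f1, iou

gt = np.array([0, 0, 1, 1, 2, 2, 2])    # e.g. 0=soil, 1=stem, 2=leaf (placeholder)
pred = np.array([0, 1, 1, 1, 2, 2, 0])
for cls, name in enumerate(["soil", "stem", "leaf"]):
    print(name, per_class_metrics(pred, gt, cls))
```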

25 pages, 3721 KB  
Article
Phenotyping for Drought Tolerance in Different Wheat Genotypes Using Spectral and Fluorescence Sensors
by Guilherme Filgueiras Soares, Maria Lucrecia Gerosa Ramos, Luca Felisberto Pereira, Beat Keller, Onno Muller, Cristiane Andrea de Lima, Patricia Carvalho da Silva, Juaci Vitória Malaquias, Jorge Henrique Chagas and Walter Quadros Ribeiro Junior
Plants 2025, 14(14), 2216; https://doi.org/10.3390/plants14142216 - 17 Jul 2025
Viewed by 492
Abstract
Wheat planted at the end of the rainy season in the Cerrado suffers from a strong water deficit, so the selection of drought-tolerant genetic material is necessary. In improvement programs that evaluate a large number of materials, efficient, automated, and non-destructive phenotyping is essential, which requires the use of sensors. The experiment was conducted in 2016 using a phenotyping platform, where irrigation gradients ranging from 184 mm (WR4) to 601 mm (WR1) were created, allowing for the comparison of four genotypes. In addition to productivity, we evaluated plant height, hectoliter weight, the number of spikes per square meter, ear length, photosynthesis, and the indices calculated by the sensors. For most morphophysiological parameters, extreme stress made it difficult to discriminate materials. WR1 (601 mm) and WR2 (501 mm) showed similar trends in almost all variables. The data validated the phenotyping platform, which creates an irrigation gradient, considering that the results obtained, in general, were proportional to the water levels. The similar trend between sensors (NDVI, PRI, and LIFT) and morphophysiological, plant growth, and crop yield evaluations validated the use of sensors as a tool for selecting drought-tolerant wheat genotypes using a non-invasive methodology. Considering that only four genotypes were used, none showed absolute and unequivocal tolerance to drought; however, each genotype exhibited some desirable characteristics related to drought tolerance mechanisms. Full article
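
For reference, two of the sensor indices named above (NDVI and PRI) follow standard normalized-difference formulas; the sketch below computes them from placeholder reflectance values and is not the study's processing chain.

```python
# Sketch of two reflectance indices referenced above. Band arrays are
# placeholders for per-plot reflectance values, not the study's data.
import numpy as np

def ndvi(nir, red):
    """Normalized Difference Vegetation Index: (NIR - Red) / (NIR + Red)."""
    return (nir - red) / (nir + red + 1e-9)

def pri(r531, r570):
    """Photochemical Reflectance Index from 531 nm and 570 nm reflectance."""
    return (r531 - r570) / (r531 + r570 + 1e-9)

nir, red = np.array([0.45, 0.50]), np.array([0.08, 0.10])
r531, r570 = np.array([0.12, 0.11]), np.array([0.13, 0.13])
print("NDVI:", ndvi(nir, red), "PRI:", pri(r531, r570))
```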

16 pages, 1934 KB  
Article
Research on Obtaining Pepper Phenotypic Parameters Based on Improved YOLOX Algorithm
by Yukang Huo, Rui-Feng Wang, Chang-Tao Zhao, Pingfan Hu and Haihua Wang
AgriEngineering 2025, 7(7), 209; https://doi.org/10.3390/agriengineering7070209 - 2 Jul 2025
Cited by 7 | Viewed by 500
Abstract
Pepper is a vital crop with extensive agricultural and industrial applications. Accurate phenotypic measurement, including plant height and stem diameter, is essential for assessing yield and quality, yet manual measurement is time-consuming and labor-intensive. This study proposes a deep learning-based phenotypic measurement method for peppers. A Pepper-mini dataset was constructed using offline augmentation. To address challenges in multi-plant growth environments, an improved YOLOX-tiny detection model incorporating a CA attention mechanism was developed, achieving a mAP of 95.16%. A detection box filtering method based on Euclidean distance was introduced to identify target plants. Further processing using HSV threshold segmentation, morphological operations, and connected component denoising enabled accurate region selection. Measurement algorithms were then applied, yielding high correlations with true values: R2 = 0.973 for plant height and R2 = 0.842 for stem diameter, with average errors of 0.443 cm and 0.0765 mm, respectively. This approach demonstrates a robust and efficient solution for automated phenotypic analysis in pepper cultivation. Full article
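
A hedged sketch of the post-detection steps described above (HSV thresholding, morphological opening, and connected-component denoising) with OpenCV follows; the threshold bounds, area cutoff, and image path are assumptions rather than the paper's parameters.

```python
# Hedged sketch of HSV thresholding, morphological opening, and
# connected-component denoising. Values and paths are illustrative only.
import cv2
import numpy as np

img = cv2.imread("pepper_plant.jpg")                    # BGR image (assumed path)
hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)

# Keep green-ish pixels; the bounds below are illustrative, not the paper's.
mask = cv2.inRange(hsv, (35, 40, 40), (85, 255, 255))

# Morphological opening removes small speckle noise.
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)

# Drop connected components below an area threshold (denoising).
n, labels, stats, _ = cv2.connectedComponentsWithStats(mask)
clean = np.zeros_like(mask)
for i in range(1, n):                                   # label 0 is background
    if stats[i, cv2.CC_STAT_AREA] >= 500:
        clean[labels == i] = 255
cv2.imwrite("pepper_mask_clean.png", clean)
```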

12 pages, 2844 KB  
Article
End-to-End Deep Learning Approach to Automated Phenotyping of Greenhouse-Grown Plant Shoots
by Evgeny Gladilin, Narendra Narisetti, Kerstin Neumann and Thomas Altmann
Agronomy 2025, 15(5), 1117; https://doi.org/10.3390/agronomy15051117 - 30 Apr 2025
Viewed by 464
Abstract
High-throughput image analysis is a key tool for the efficient assessment of quantitative plant phenotypes. A typical approach to the computation of quantitative plant traits from image data consists of two major steps including (i) image segmentation followed by (ii) calculation of quantitative traits of segmented plant structures. Despite substantial advancements in deep learning-based segmentation techniques, minor artifacts of image segmentation cannot be completely avoided. For several commonly used traits including plant width, height, convex hull, etc., even small inaccuracies in image segmentation can lead to large errors. Ad hoc approaches to cleaning ’small noisy structures’ are, in general, data-dependent and may lead to substantial loss of relevant small plant structures and, consequently, falsified phenotypic traits. Here, we present a straightforward end-to-end approach to direct computation of phenotypic traits from image data using a deep learning regression model. Our experimental results show that image-to-trait regression models outperform a conventional segmentation-based approach for a number of commonly sought plant traits of plant morphology and health including shoot area, linear dimensions and color fingerprints. Since segmentation is missing in predictions of regression models, visualization of activation layer maps can still be used as a blueprint to model explainability. Although end-to-end models have a number of limitations compared to more complex network architectures, they can still be of interest for multiple phenotyping scenarios with fixed optical setups (such as high-throughput greenhouse screenings), where the accuracy of routine trait predictions and not necessarily the generalizability is the primary goal. Full article
(This article belongs to the Special Issue Novel Approaches to Phenotyping in Plant Research)
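
To make the image-to-trait idea concrete, the sketch below defines a small CNN that regresses a trait vector directly from an RGB image, with no segmentation step; the architecture and sizes are assumptions, not the authors' network.

```python
# Minimal sketch of an image-to-trait regression model in the spirit described
# above: a small CNN mapping an RGB image directly to a vector of traits
# (e.g. shoot area, width, height). Layer sizes are assumptions.
import torch
import torch.nn as nn

class TraitRegressor(nn.Module):
    def __init__(self, n_traits=3):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(64, n_traits)  # direct trait prediction, no segmentation

    def forward(self, x):
        return self.head(self.features(x).flatten(1))

model = TraitRegressor()
dummy = torch.randn(4, 3, 224, 224)       # batch of 4 greenhouse images
print(model(dummy).shape)                 # torch.Size([4, 3])
# Training would minimize e.g. nn.MSELoss() between predictions and measured traits.
```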

15 pages, 3818 KB  
Article
Measurement of Maize Leaf Phenotypic Parameters Based on 3D Point Cloud
by Yuchen Su, Ran Li, Miao Wang, Chen Li, Mingxiong Ou, Sumei Liu, Wenhui Hou, Yuwei Wang and Lu Liu
Sensors 2025, 25(9), 2854; https://doi.org/10.3390/s25092854 - 30 Apr 2025
Cited by 1 | Viewed by 598
Abstract
Plant height (PH), leaf width (LW), and leaf angle (LA) are critical phenotypic parameters in maize that reliably indicate plant growth status, lodging resistance, and yield potential. While various lidar-based methods have been developed for acquiring these parameters, existing approaches face limitations, including low automation, prolonged measurement duration, and weak environmental interference resistance. This study proposes a novel estimation method for maize PH, LW, and LA based on point cloud projection. The methodology comprises four key stages. First, 3D point cloud data of maize plants are acquired during middle–late growth stages using lidar sensors. Second, a Gaussian mixture model (GMM) is employed for point cloud registration to enhance plant morphological features, resulting in spliced maize point clouds. Third, filtering techniques remove background noise and weeds, followed by a combined point cloud projection and Euclidean clustering approach for stem–leaf segmentation. Finally, PH is determined by calculating vertical distance from plant apex to base, LW is measured through linear fitting of leaf midveins with perpendicular line intersections on projected contours, and LA is derived from plant skeleton diagrams constructed via linear fitting to identify stem apex, stem–leaf junctions, and midrib points. Field validation demonstrated that the method achieves 99%, 86%, and 97% accuracy for PH, LW, and LA estimation, respectively, enabling rapid automated measurement during critical growth phases and providing an efficient solution for maize cultivation automation. Full article
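
As a minimal illustration of the final stage above, plant height can be read off a single-plant point cloud as its vertical extent; the array below is a dummy placeholder, and ground removal and units depend on the sensor setup.

```python
# Illustrative computation of plant height (PH) as the vertical extent of a
# single-plant point cloud. The point array is a placeholder.
import numpy as np

points = np.random.rand(10000, 3) * [0.4, 0.4, 2.1]   # x, y, z in metres (dummy)

plant_height = points[:, 2].max() - points[:, 2].min()
# A percentile-based variant is more robust to stray noise points.
robust_height = np.percentile(points[:, 2], 99.5) - np.percentile(points[:, 2], 0.5)

print(f"PH: {plant_height:.2f} m (robust: {robust_height:.2f} m)")
```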

16 pages, 1415 KB  
Review
Advancing Crop Resilience Through High-Throughput Phenotyping for Crop Improvement in the Face of Climate Change
by Hoa Thi Nguyen, Md Arifur Rahman Khan, Thuong Thi Nguyen, Nhi Thi Pham, Thu Thi Bich Nguyen, Touhidur Rahman Anik, Mai Dao Nguyen, Mao Li, Kien Huu Nguyen, Uttam Kumar Ghosh, Lam-Son Phan Tran and Chien Van Ha
Plants 2025, 14(6), 907; https://doi.org/10.3390/plants14060907 - 14 Mar 2025
Cited by 2 | Viewed by 2118
Abstract
Climate change intensifies biotic and abiotic stresses, threatening global crop productivity. High-throughput phenotyping (HTP) technologies provide a non-destructive approach to monitor plant responses to environmental stresses, offering new opportunities for both crop stress resilience and breeding research. Innovations, such as hyperspectral imaging, unmanned aerial vehicles, and machine learning, enhance our ability to assess plant traits under various environmental stresses, including drought, salinity, extreme temperatures, and pest and disease infestations. These tools facilitate the identification of stress-tolerant genotypes within large segregating populations, improving selection efficiency for breeding programs. HTP can also play a vital role by accelerating genetic gain through precise trait evaluation for hybridization and genetic enhancement. However, challenges such as data standardization, phenotyping data management, high costs of HTP equipment, and the complexity of linking phenotypic observations to genetic improvements limit its broader application. Additionally, environmental variability and genotype-by-environment interactions complicate reliable trait selection. Despite these challenges, advancements in robotics, artificial intelligence, and automation are improving the precision and scalability of phenotypic data analyses. This review critically examines the dual role of HTP in assessment of plant stress tolerance and crop performance, highlighting both its transformative potential and existing limitations. By addressing key challenges and leveraging technological advancements, HTP can significantly enhance genetic research, including trait discovery, parental selection, and hybridization scheme optimization. While current methodologies still face constraints in fully translating phenotypic insights into practical breeding applications, continuous innovation in high-throughput precision phenotyping holds promise for revolutionizing crop resilience and ensuring sustainable agricultural production in a changing climate. Full article
(This article belongs to the Section Crop Physiology and Crop Production)

18 pages, 4900 KB  
Article
Stem-Leaf Segmentation and Morphological Traits Extraction in Rapeseed Seedlings Using a Three-Dimensional Point Cloud
by Binqian Sun, Muhammad Zain, Lili Zhang, Dongwei Han and Chengming Sun
Agronomy 2025, 15(2), 276; https://doi.org/10.3390/agronomy15020276 - 22 Jan 2025
Cited by 3 | Viewed by 1228
Abstract
Developing accurate, non-destructive, and automated methods for monitoring the phenotypic traits of rapeseed is crucial for improving yield and quality in modern agriculture. We used a line laser binocular stereo vision technology system to obtain the three-dimensional (3D) point cloud data of different rapeseed varieties (namely Qinyou 7, Zheyouza 108, and Huyou 039) at the seedling stage, and the phenotypic traits of rapeseed were extracted from those point clouds. After pre-processing the rapeseed point clouds with denoising and segmentation, the plant height, leaf length, leaf width, and leaf area of the rapeseed in the seedling stage were extracted by a series of algorithms and were evaluated for accuracy with the manually measured values. The following results were obtained: the R2 values for plant height data between the extracted values of the 3D point cloud and the manually measured values reached 0.934, and the RMSE was 0.351 cm. Similarly, the R2 values for leaf length of the three kinds of rapeseed were all greater than 0.95, and the RMSEs for Qinyou 7, Zheyouza 108, and Huyou 039 were 0.134 cm, 0.131 cm, and 0.139 cm, respectively. Regarding leaf width, R2 was greater than 0.92, and the RMSEs were 0.151 cm, 0.189 cm, and 0.150 cm, respectively. Further, the R2 values for leaf area were all greater than 0.98 with RMSEs of 0.296 cm2, 0.231 cm2 and 0.259 cm2, respectively. The results extracted from the 3D point cloud are reliable and have high accuracy. These results demonstrate the potential of 3D point cloud technology for automated, non-destructive phenotypic analysis in rapeseed breeding programs, which can accelerate the development of improved varieties. Full article
(This article belongs to the Special Issue Unmanned Farms in Smart Agriculture)
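
The accuracy figures above are R2 and RMSE values between point-cloud-extracted and manually measured traits; a minimal sketch of that comparison on placeholder numbers follows.

```python
# Sketch of the accuracy evaluation described above: R^2 and RMSE between
# extracted and manually measured values. Numbers are placeholders.
import numpy as np

def r2_rmse(extracted, manual):
    extracted, manual = np.asarray(extracted), np.asarray(manual)
    ss_res = np.sum((manual - extracted) ** 2)
    ss_tot = np.sum((manual - manual.mean()) ** 2)
    r2 = 1.0 - ss_res / ss_tot
    rmse = np.sqrt(np.mean((manual - extracted) ** 2))
    return r2, rmse

manual = [12.1, 13.4, 11.8, 14.0, 12.9]     # e.g. leaf length (cm), measured by hand
extracted = [12.3, 13.1, 11.9, 13.8, 13.0]  # values extracted from the 3D point cloud
print(r2_rmse(extracted, manual))
```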

31 pages, 7647 KB  
Systematic Review
Applications of Raspberry Pi for Precision Agriculture—A Systematic Review
by Astina Joice, Talha Tufaique, Humeera Tazeen, C. Igathinathane, Zhao Zhang, Craig Whippo, John Hendrickson and David Archer
Agriculture 2025, 15(3), 227; https://doi.org/10.3390/agriculture15030227 - 21 Jan 2025
Cited by 4 | Viewed by 5452
Abstract
Precision agriculture (PA) is a farm management data-driven technology that enhances production with efficient resource usage. Existing PA methods rely on data processing, highlighting the need for a portable computing device for real-time, infield decisions. Raspberry Pi, a cost-effective multi-OS single-board computer, addresses this gap. However, information on Raspberry Pi’s use in PA remains limited. This review consolidates details on Raspberry Pi versions, sensors, devices, algorithm deployment, and PA applications. A systematic literature review of three academic databases (Scopus, Web of Science, IEEE Xplore) yielded 84 (as of 22 November 2024) articles based on four research questions and screening criteria (exclusion and inclusion). Narrative synthesis and subgroup analysis were used to synthesize the results. Findings suggest Raspberry Pi can be a central unit to control sensors, enabling cost-effective automated decision support for PA, particularly in plant disease detection, site-specific weed management, plant phenotyping, biomass estimation, and irrigation systems. Despite focusing on these areas, further research is essential on other PA applications such as livestock monitoring, UAV-based applications, and farm management software. Additionally, Raspberry Pi can be used as a valuable learning tool for students, researchers, and farmers and can promote PA adoption globally, helping stakeholders realize its potential. Full article
(This article belongs to the Section Artificial Intelligence and Digital Agriculture)
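
Purely as a hedged illustration of the infield decision support the review surveys, the sketch below has a Raspberry Pi read a digital soil-moisture sensor and switch an irrigation relay; the wiring, pin numbers, sensor polarity, and polling interval are assumptions.

```python
# Hedged illustration: a Raspberry Pi polling a digital soil-moisture sensor
# and driving an irrigation relay. Pins and timing are hypothetical.
import time
import RPi.GPIO as GPIO

MOISTURE_PIN = 17   # digital output of a soil-moisture sensor (assumed wiring)
RELAY_PIN = 27      # relay driving an irrigation valve (assumed wiring)

GPIO.setmode(GPIO.BCM)
GPIO.setup(MOISTURE_PIN, GPIO.IN)
GPIO.setup(RELAY_PIN, GPIO.OUT, initial=GPIO.LOW)

try:
    while True:
        dry = GPIO.input(MOISTURE_PIN) == GPIO.HIGH   # many sensors read HIGH when dry
        GPIO.output(RELAY_PIN, GPIO.HIGH if dry else GPIO.LOW)
        print("soil dry, irrigating" if dry else "soil moist", flush=True)
        time.sleep(600)                               # check every 10 minutes (assumed)
finally:
    GPIO.cleanup()
```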

22 pages, 15791 KB  
Article
Automated Phenotypic Analysis of Mature Soybean Using Multi-View Stereo 3D Reconstruction and Point Cloud Segmentation
by Daohan Cui, Pengfei Liu, Yunong Liu, Zhenqing Zhao and Jiang Feng
Agriculture 2025, 15(2), 175; https://doi.org/10.3390/agriculture15020175 - 14 Jan 2025
Cited by 5 | Viewed by 1770
Abstract
Phenotypic analysis of mature soybeans is a critical aspect of soybean breeding. However, manually obtaining phenotypic parameters not only is time-consuming and labor intensive but also lacks objectivity. Therefore, there is an urgent need for a rapid, accurate, and efficient method to collect the phenotypic parameters of soybeans. This study develops a novel pipeline for acquiring the phenotypic traits of mature soybeans based on three-dimensional (3D) point clouds. First, soybean point clouds are obtained using a multi-view stereo 3D reconstruction method, followed by preprocessing to construct a dataset. Second, a deep learning-based network, PVSegNet (Point Voxel Segmentation Network), is proposed specifically for segmenting soybean pods and stems. This network enhances feature extraction capabilities through the integration of point cloud and voxel convolution, as well as an orientation-encoding (OE) module. Finally, phenotypic parameters such as stem diameter, pod length, and pod width are extracted and validated against manual measurements. Experimental results demonstrate that the average Intersection over Union (IoU) for semantic segmentation is 92.10%, with a precision of 96.38%, recall of 95.41%, and F1-score of 95.87%. For instance segmentation, the network achieves an average precision (AP@50) of 83.47% and an average recall (AR@50) of 87.07%. These results indicate the feasibility of the network for the instance segmentation of pods and stems. In the extraction of plant parameters, the predicted values of pod width, pod length, and stem diameter obtained through the phenotypic extraction method exhibit coefficients of determination (R2) of 0.9489, 0.9182, and 0.9209, respectively, with manual measurements. This demonstrates that our method can significantly improve efficiency and accuracy, contributing to the application of automated 3D point cloud analysis technology in soybean breeding. Full article
(This article belongs to the Section Artificial Intelligence and Digital Agriculture)
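
As an illustrative aside (not PVSegNet), once a single pod's points have been segmented, pod length and width can be approximated as extents along the principal axes of the point set, as sketched below on dummy points.

```python
# Illustrative sketch: approximate pod length and width as extents along the
# principal axes of a segmented pod's point cloud. The point array is a dummy.
import numpy as np

pod = np.random.rand(2000, 3) * [0.05, 0.012, 0.012]     # dummy pod points (m)

centered = pod - pod.mean(axis=0)
_, _, vt = np.linalg.svd(centered, full_matrices=False)  # principal axes (rows of vt)
proj = centered @ vt.T                                    # coordinates in the PCA frame

extents = proj.max(axis=0) - proj.min(axis=0)             # ordered by variance
pod_length, pod_width = extents[0], extents[1]
print(f"pod length ~ {pod_length * 100:.1f} cm, pod width ~ {pod_width * 100:.1f} cm")
```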

18 pages, 5467 KB  
Article
Stem and Leaf Segmentation and Phenotypic Parameter Extraction of Tomato Seedlings Based on 3D Point Cloud
by Xuemei Liang, Wenbo Yu, Li Qin, Jianfeng Wang, Peng Jia, Qi Liu, Xiaoyu Lei and Minglai Yang
Agronomy 2025, 15(1), 120; https://doi.org/10.3390/agronomy15010120 - 5 Jan 2025
Cited by 3 | Viewed by 1905
Abstract
High-throughput measurements of phenotypic parameters in plants generate substantial data, significantly improving agricultural production optimization and breeding efficiency. However, these measurements face several challenges, including environmental variability, sample heterogeneity, and complex data processing. This study presents a method applicable to stem and leaf segmentation and parameter extraction during the tomato seedling stage, utilizing three-dimensional point clouds. Data from tomato seedlings were captured using a depth camera to create point cloud models. The RANSAC, region-growing, and greedy projection triangulation algorithms were employed to extract phenotypic parameters such as plant height, stem thickness, leaf area, and leaf inclination angle. The results showed strong correlations, with coefficients of determination for manually measured parameters versus extracted 3D point cloud parameters being 0.920, 0.725, 0.905, and 0.917, respectively. The root-mean-square errors were 0.643, 0.168, 1.921, and 4.513, with absolute percentage errors of 3.804%, 5.052%, 5.509%, and 7.332%. These findings highlight a robust relationship between manual measurements and the extracted parameters, establishing a technical foundation for high-throughput automated phenotypic parameter extraction in tomato seedlings. Full article
(This article belongs to the Section Precision and Digital Agriculture)
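
A hedged Open3D sketch of the early steps named above follows: RANSAC plane fitting to drop the dominant planar surface, then clustering the remaining points. DBSCAN stands in here for the paper's region-growing step, and the file name and parameters are assumptions.

```python
# Hedged sketch: RANSAC plane removal plus clustering with Open3D.
# DBSCAN is a stand-in for region growing; values and paths are illustrative.
import numpy as np
import open3d as o3d

pcd = o3d.io.read_point_cloud("tomato_seedling.ply")      # assumed input file

# RANSAC plane segmentation removes the dominant planar surface (e.g. the tray).
plane_model, inliers = pcd.segment_plane(distance_threshold=0.005,
                                         ransac_n=3, num_iterations=1000)
plant = pcd.select_by_index(inliers, invert=True)

# Cluster the remaining points into candidate plants or organs.
labels = np.array(plant.cluster_dbscan(eps=0.01, min_points=30))
print(f"{labels.max() + 1} clusters found")
```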

26 pages, 20467 KB  
Article
Three-Dimensional Time-Series Monitoring of Maize Canopy Structure Using Rail-Driven Plant Phenotyping Platform in Field
by Hanyu Ma, Weiliang Wen, Wenbo Gou, Yuqiang Liang, Minggang Zhang, Jiangchuan Fan, Shenghao Gu, Dongsheng Zhang and Xinyu Guo
Agriculture 2025, 15(1), 6; https://doi.org/10.3390/agriculture15010006 - 24 Dec 2024
Viewed by 1019
Abstract
The spatial and temporal dynamics of crop canopy structure are influenced by cultivar, environment, and crop management practices. However, continuous and automatic monitoring of crop canopy structure is still challenging. A three-dimensional (3D) time-series phenotyping study of maize canopy was conducted using a rail-driven high-throughput plant phenotyping platform (HTPPP) in field conditions. An adaptive sliding window segmentation algorithm was proposed to obtain plots and rows from canopy point clouds. Maximum height (Hmax), mean height (Hmean), and canopy cover (CC) of each plot were extracted, and quantification of plot canopy height uniformity (CHU) and marginal effect (MEH) was achieved. The results showed that the average mIoU, mP, mR, and mF1 of canopy–plot segmentation were 0.8118, 0.9587, 0.9969, and 0.9771, respectively, and the average mIoU, mP, mR, and mF1 of plot–row segmentation were 0.7566, 0.8764, 0.9292, and 0.8974, respectively. The average RMSE of plant height across the 10 growth stages was 0.08 m. The extracted time-series phenotypes show that CHU tended to vary from uniformity to nonuniformity and continued to fluctuate during the whole growth stages, and the MEH of the canopy tended to increase negatively over time. This study provides automated and practical means for 3D time-series phenotype monitoring of plant canopies with the HTPPP. Full article
(This article belongs to the Section Artificial Intelligence and Digital Agriculture)
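
For illustration, the plot-level traits named above (Hmax, Hmean, and canopy cover) can be computed from a plot point cloud rasterized onto a 2D grid, as in the sketch below; the points, grid size, and height threshold are placeholder assumptions.

```python
# Illustrative computation of Hmax, Hmean, and canopy cover (CC) from a plot
# point cloud. Points, grid resolution, and threshold are placeholders.
import numpy as np

points = np.random.rand(50000, 3) * [2.0, 2.0, 2.5]  # x, y, z (m), dummy plot
cell = 0.02                                           # 2 cm grid resolution (assumed)

h_max = points[:, 2].max()
h_mean = points[:, 2].mean()

# Canopy cover: fraction of grid cells containing points above a height threshold.
ix = (points[:, 0] / cell).astype(int)
iy = (points[:, 1] / cell).astype(int)
canopy = points[:, 2] > 0.05                          # 5 cm above ground (assumed)
covered = len({(x, y) for x, y in zip(ix[canopy], iy[canopy])})
total = (ix.max() + 1) * (iy.max() + 1)
print(f"Hmax={h_max:.2f} m, Hmean={h_mean:.2f} m, CC={covered / total:.2%}")
```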

19 pages, 4990 KB  
Article
A 3D Surface Reconstruction Pipeline for Plant Phenotyping
by Lina Stausberg, Berit Jost, Lasse Klingbeil and Heiner Kuhlmann
Remote Sens. 2024, 16(24), 4720; https://doi.org/10.3390/rs16244720 - 17 Dec 2024
Cited by 1 | Viewed by 1750
Abstract
Plant phenotyping plays a crucial role in crop science and plant breeding. However, traditional methods often involve time-consuming and manual observations. Therefore, it is essential to develop automated, sensor-driven techniques that can provide objective and rapid information. Various methods rely on camera systems, including RGB, multi-spectral, and hyper-spectral cameras, which offer valuable insights into plant physiology. In recent years, 3D sensing systems such as laser scanners have gained popularity due to their ability to capture structural plant parameters that are difficult to obtain using spectral sensors. Unlike images, point clouds are not structured and require pre-processing steps to extract precise information and handle noise or missing points. One approach is to generate mesh-based surface representations using triangulation. A key challenge in the 3D surface reconstruction of plants is the pre-processing of point clouds, which involves removing non-plant noise from the scene, segmenting point clouds from populations to individual plants, and further dividing individual plants into their respective organs. In this study, we will not focus on the segmentation aspect but rather on the other pre-processing steps, like denoising parameters, which depend on the data type. We present an automated pipeline for converting high-resolution point clouds into surface models of plants. The pipeline incorporates additional pre-processing steps such as outlier removal, denoising, and subsampling to ensure the accuracy and quality of the reconstructed surfaces. Data were collected using three different sensors: a handheld scanner, a terrestrial laser scanner (TLS), and a mobile mapping platform, under varying conditions from controlled laboratory environments to complex field settings. The investigation includes five different plant species, each with distinct characteristics, to demonstrate the potential of the pipeline. In a next step, phenotypic traits such as leaf area, leaf area index (LAI), and leaf angle distribution (LAD) were calculated to further illustrate the pipeline’s potential and effectiveness. The pipeline is based on the Open3D framework and is available open source. Full article
(This article belongs to the Section Environmental Remote Sensing)
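
A hedged sketch of the pre-processing-plus-meshing idea, built on the Open3D framework the authors mention, follows; Poisson reconstruction is shown as one possible surface step rather than the paper's exact method, and the file name and parameters are assumptions.

```python
# Hedged sketch: outlier removal, subsampling, normal estimation, and surface
# reconstruction with Open3D. Parameters and paths are illustrative only.
import open3d as o3d

pcd = o3d.io.read_point_cloud("plant_scan.ply")           # assumed input scan

# Pre-processing: outlier removal and subsampling.
pcd, _ = pcd.remove_statistical_outlier(nb_neighbors=20, std_ratio=2.0)
pcd = pcd.voxel_down_sample(voxel_size=0.002)             # 2 mm voxels (assumed)
pcd.estimate_normals(
    search_param=o3d.geometry.KDTreeSearchParamHybrid(radius=0.01, max_nn=30))

# Surface reconstruction into a triangle mesh (one possible choice of method).
mesh, densities = o3d.geometry.TriangleMesh.create_from_point_cloud_poisson(
    pcd, depth=9)
mesh = mesh.simplify_quadric_decimation(target_number_of_triangles=50000)
o3d.io.write_triangle_mesh("plant_surface.ply", mesh)
```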