

Imaging for Plant Phenotyping

A special issue of Remote Sensing (ISSN 2072-4292). This special issue belongs to the section "Remote Sensing in Agriculture and Vegetation".

Deadline for manuscript submissions: closed (20 December 2021) | Viewed by 20841

Special Issue Editors


Guest Editor
Department of Plant Sciences, University of Saskatchewan, Saskatoon, SK, Canada
Interests: abiotic stress plant physiology; crop physiology; plant adaptation to climate change; plant phenotyping

Guest Editor
Center for Advanced Bioenergy and Bioproducts Innovation, University of Illinois, Champaign, IL, USA
Interests: multi-scale crop phenotyping; remote sensing of crops; machine learning; spatiotemporal modelling

Special Issue Information

Dear Colleagues,

Climate change is taking a toll on crop production worldwide: warming, climate variability, and abiotic stresses, along with resource limitations, are changing agronomic conditions and pose significant challenges to our dependence on crops. Over the next three decades, production of food, feed, and biofuel crops will have to double to meet the projected demands of the global population. Genetic improvement remains key to raising crop productivity in the face of climate change, but the current rate of improvement cannot meet the needs of sustainability and food security. The last 20 years have seen significant progress in genomics for plant breeding research. Linking these advances to crop phenotypes is critical for the successful identification of superior cultivars, but phenotyping remains the limiting step. To overcome this challenge, high-throughput phenotyping has emerged as a multidisciplinary area of research that combines non-invasive state-of-the-art sensors, image analysis, and predictive modelling to estimate plant phenotypic traits at scale with reduced manual effort. Rapid developments in sensors and low-cost platforms are expected to ease the current phenotyping bottleneck and offer researchers novel insights to help guide ways to improve crop productivity and adaptation. This Special Issue, "Imaging for Plant Phenotyping", focuses on the latest innovative research integrating sensing technologies and methodological advances to estimate crop phenotypic traits. We welcome papers from the global research community actively involved in novel applications of remote sensing to plant phenotyping, discussing current advances, challenges, and future directions.

In this Special Issue, potential topics include but are not limited to:

  • Aerial and ground high-throughput phenotyping platforms, such as low-orbit satellites, unmanned aerial vehicles, close-range mobile sensing platforms, and fixed ground stations;
  • Innovative approaches of using different imaging sensors (e.g., 3-D photogrammetry, hyperspectral, thermal sensors, LIDAR) to collect novel phenotypic traits;
  • Multi-scale integration of sensors;
  • Image-analysis algorithms (machine learning, deep learning, spatial and spatiotemporal approaches) and novel approaches to estimate crop phenotypic traits and improve throughput in field conditions.

Dr. Dilip Kumar Biswas
Dr. Sebastian Varela
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Remote Sensing is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2700 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • Abiotic stress (drought, heat, waterlogging and salinity) and resource-use efficiency
  • Crop phenotyping
  • Climate change
  • Imaging
  • Machine learning

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • e-Book format: Special Issues with more than 10 articles can be published as dedicated e-books, ensuring wide and rapid dissemination.

Further information on MDPI's Special Issue policies can be found here.

Published Papers (5 papers)


Research

20 pages, 5178 KiB  
Article
In Situ Measuring Stem Diameters of Maize Crops with a High-Throughput Phenotyping Robot
by Zhengqiang Fan, Na Sun, Quan Qiu, Tao Li, Qingchun Feng and Chunjiang Zhao
Remote Sens. 2022, 14(4), 1030; https://doi.org/10.3390/rs14041030 - 21 Feb 2022
Cited by 7 | Viewed by 3031
Abstract
Robotic High-Throughput Phenotyping (HTP) technology has been a powerful tool for selecting high-quality crop varieties among large quantities of traits. Due to the advantages of multi-view observation and high accuracy, ground HTP robots have been widely studied in recent years. In this paper, we study an ultra-narrow wheeled robot equipped with RGB-D cameras for inter-row maize HTP. The challenges of the narrow operating space, intensive light changes, and messy cross-leaf interference in rows of maize crops are considered. An in situ, inter-row stem diameter measurement method for HTP robots is proposed. To this end, we first introduce the stem diameter measurement pipeline, in which a convolutional neural network is employed to detect stems, and the point cloud is analyzed to estimate the stem diameters. Second, we present a clustering strategy based on DBSCAN for extracting stem point clouds under the condition that the stem is shaded by dense leaves. Third, we present a point cloud filling strategy to fill the stem region with depth values missing due to occlusion by other organs. Finally, we employ convex hull and plane projection of the point cloud to estimate the stem diameters. The results show that the R2 and RMSE of stem diameter measurement are up to 0.72 and 2.95 mm, demonstrating the method's effectiveness.
(This article belongs to the Special Issue Imaging for Plant Phenotyping)
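The pipeline above ends with a geometric step: reducing a stem's extracted point cloud to a diameter via convex hull and plane projection. As a rough, framework-free sketch of that idea (not the authors' code; the synthetic point cloud, the projection, and the widest-extent heuristic below are all illustrative assumptions):

```python
import numpy as np

def estimate_stem_diameter(stem_points: np.ndarray) -> float:
    """Estimate a stem diameter from a 3-D point cloud slice.

    Illustrative only: project the points onto the horizontal (x, y)
    plane and take the maximum pairwise distance as the diameter. The
    paper's actual pipeline (CNN detection, DBSCAN extraction, depth
    filling, convex-hull fitting) is more involved.
    """
    xy = stem_points[:, :2]                 # drop the height (z) axis
    diff = xy[:, None, :] - xy[None, :, :]  # all pairwise offsets
    dists = np.linalg.norm(diff, axis=-1)   # all pairwise distances
    return float(dists.max())

# Synthetic cylinder of radius 1.5 mm, sampled on its surface
theta = np.linspace(0, 2 * np.pi, 200)
pts = np.stack([1.5 * np.cos(theta), 1.5 * np.sin(theta),
                np.linspace(0, 10, 200)], axis=1)
print(round(estimate_stem_diameter(pts), 2))  # ~3.0 (diameter = 2 * r)
```

The authors' method additionally handles leaf occlusion (DBSCAN extraction) and depth holes (point cloud filling) before this geometric step.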

17 pages, 3601 KiB  
Article
Implementing Spatio-Temporal 3D-Convolution Neural Networks and UAV Time Series Imagery to Better Predict Lodging Damage in Sorghum
by Sebastian Varela, Taylor L. Pederson and Andrew D. B. Leakey
Remote Sens. 2022, 14(3), 733; https://doi.org/10.3390/rs14030733 - 4 Feb 2022
Cited by 13 | Viewed by 4920
Abstract
Unmanned aerial vehicle (UAV)-based remote sensing is gaining momentum in a variety of agricultural and environmental applications. Very-high-resolution remote sensing image sets collected repeatedly throughout a crop growing season are becoming increasingly common. Analytical methods able to learn from both the spatial and time dimensions of the data may allow for improved estimation of crop traits, as well as of the effects of genetics and the environment on these traits. Multispectral and geometric time series imagery was collected by UAV on 11 dates, along with ground-truth data, in a field trial of 866 genetically diverse biomass sorghum accessions. We compared the performance of Convolutional Neural Network (CNN) architectures that used image data from single dates (two spatial dimensions, 2D) versus multiple dates (two spatial dimensions + temporal dimension, 3D) to detect lodging and estimate its severity. Lodging was detected by 3D-CNN analysis of time series imagery with 0.88 accuracy, 0.92 precision, and 0.83 recall. This outperformed the best 2D-CNN on a single date, with 0.85 accuracy, 0.84 precision, and 0.76 recall. The variation in lodging severity was estimated by the best 3D-CNN analysis with 9.4% mean absolute error (MAE), 11.9% root mean square error (RMSE), and goodness-of-fit (R2) of 0.76. This was a significant improvement over the best 2D-CNN analysis, with 11.84% MAE, 14.91% RMSE, and 0.63 R2. The success of the improved 3D-CNN analysis approach depended on the inclusion of "before and after" data, i.e., images collected on dates before and after the lodging event. The integration of geometric and spectral features with the 3D-CNN architecture was also key to the improved assessment of lodging severity, which is an important and difficult-to-assess phenomenon in bioenergy feedstocks such as biomass sorghum. This demonstrates that spatio-temporal CNN architectures based on UAV time series imagery have significant potential to enhance plant phenotyping capabilities in crop breeding and precision agriculture applications.
(This article belongs to the Special Issue Imaging for Plant Phenotyping)
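The paper's central contrast is between 2-D kernels, which see a single date, and 3-D kernels, which also slide along the time axis. A minimal, framework-free sketch of a "valid" 3-D convolution (purely illustrative; the shapes, kernel, and data are invented, and the authors' actual networks are learned, multi-layer models):

```python
import numpy as np

def conv3d_valid(volume: np.ndarray, kernel: np.ndarray) -> np.ndarray:
    """Naive 'valid' 3-D convolution over a (time, height, width) stack.

    Illustrates why a 3-D kernel can respond to temporal patterns
    (e.g. the before/after signature of a lodging event) that a
    per-date 2-D kernel cannot see.
    """
    T, H, W = volume.shape
    t, h, w = kernel.shape
    out = np.zeros((T - t + 1, H - h + 1, W - w + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            for k in range(out.shape[2]):
                out[i, j, k] = np.sum(volume[i:i+t, j:j+h, k:k+w] * kernel)
    return out

# Toy stack of 3 imaging dates: the canopy signal changes at date 2
stack = np.zeros((3, 4, 4))
stack[2] = 1.0
# A temporal-difference kernel: subtract date i, add date i+1
kernel = np.zeros((2, 1, 1))
kernel[0, 0, 0], kernel[1, 0, 0] = -1.0, 1.0
response = conv3d_valid(stack, kernel)
print(response[:, 0, 0])  # zero between dates 0-1, nonzero between 1-2
```

A temporal-difference kernel like this one only produces a response where the stack changes between dates, which is exactly the "before and after" information the abstract identifies as critical.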

18 pages, 24955 KiB  
Article
Occluded Apple Fruit Detection and Localization with a Frustum-Based Point-Cloud-Processing Approach for Robotic Harvesting
by Tao Li, Qingchun Feng, Quan Qiu, Feng Xie and Chunjiang Zhao
Remote Sens. 2022, 14(3), 482; https://doi.org/10.3390/rs14030482 - 20 Jan 2022
Cited by 28 | Viewed by 4764
Abstract
Precise localization of occluded fruits is crucial and challenging for robotic harvesting in orchards. Occlusions from leaves, branches, and other fruits make the point cloud acquired from Red Green Blue Depth (RGBD) cameras incomplete. Moreover, an insufficient filling rate and noise on the depth images of RGBD cameras usually occur in the shade cast by occlusions, leading to distortion and fragmentation of the point cloud. These challenges complicate fruit localization and size estimation for robotic harvesting. In this paper, a novel 3D fruit localization method is proposed based on a deep learning segmentation network and a new frustum-based point-cloud-processing method. A one-stage deep learning segmentation network is presented to locate apple fruits on RGB images. From the output masks and 2D bounding boxes, a 3D viewing frustum is constructed to estimate the depth of the fruit center. Based on the estimated centroid coordinates, a position and size estimation approach is proposed for partially occluded fruits to determine the approaching pose for robotic grippers. Experiments in orchards were performed, and the results demonstrated the effectiveness of the proposed method. On 300 testing samples, the proposed method reduced the median error and mean error of fruit locations by 59% and 43%, respectively, compared to the conventional method. Furthermore, the approaching direction vectors were correctly estimated.
(This article belongs to the Special Issue Imaging for Plant Phenotyping)
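The frustum idea can be sketched simply: a 2-D detection defines a cone of viewing rays, and the depth pixels inside its image footprint are used to estimate the fruit center. The code below is a hypothetical simplification, not the paper's method: the camera intrinsics are assumed values, and a robust median depth stands in for the paper's centroid estimation and occlusion handling.

```python
import numpy as np

def frustum_center(depth, bbox, fx, fy, cx, cy):
    """Estimate a fruit's 3-D center from a depth image and a 2-D box.

    Simplified frustum sketch: keep only depth pixels inside the
    detected bounding box, reject missing (zero) values, and
    back-project the median depth at the box center using pinhole
    intrinsics (fx, fy, cx, cy).
    """
    u0, v0, u1, v1 = bbox
    roi = depth[v0:v1, u0:u1]           # frustum footprint in the image
    valid = roi[roi > 0]                # drop holes in the depth map
    z = float(np.median(valid))         # robust to occluding leaves
    u, v = (u0 + u1) / 2, (v0 + v1) / 2  # box center in pixels
    return np.array([(u - cx) * z / fx, (v - cy) * z / fy, z])

# Synthetic 480x640 depth map with a fruit region at 0.8 m
depth = np.zeros((480, 640))
depth[200:280, 300:380] = 0.8
center = frustum_center(depth, (300, 200, 380, 280), 600, 600, 320, 240)
print(center.round(3))  # ≈ [0.027, 0.0, 0.8] in camera coordinates (m)
```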

19 pages, 5247 KiB  
Article
Morphological and Physiological Screening to Predict Lettuce Biomass Production in Controlled Environment Agriculture
by Changhyeon Kim and Marc W. van Iersel
Remote Sens. 2022, 14(2), 316; https://doi.org/10.3390/rs14020316 - 11 Jan 2022
Cited by 11 | Viewed by 3054
Abstract
Fast growth and rapid turnover is an important crop trait in controlled environment agriculture (CEA) because of its high cost. An ideal screening approach for fast-growing cultivars should detect desirable phenotypes non-invasively at an early growth stage, based on morphological and/or physiological traits. Hence, we established a rapid screening protocol based on a simple chlorophyll fluorescence imaging (CFI) technique to quantify the projected canopy size (PCS) of plants, combined with electron transport rate (ETR) measurements using a chlorophyll fluorometer. Eleven lettuce cultivars (Lactuca sativa), selected based on morphological differences, were grown in a greenhouse and imaged twice a week. Shoot dry weight (DW) of green cultivars at harvest, 51 days after germination (DAG), was correlated with PCS at 13 DAG (R2 = 0.74), when the first true leaves had just appeared and the PCS was <8.5 cm2. However, early PCS of high-anthocyanin (red) cultivars was not predictive of DW. Because light absorption by anthocyanins reduces the number of photons available for photosynthesis, anthocyanins lower light use efficiency (LUE; DW/total incident light on the canopy over the cropping cycle) and reduce growth. Additionally, the total incident light on the canopy throughout the cropping cycle explained 90% and 55% of the variability in DW within green and red cultivars, respectively. Estimated leaf-level ETR at a photosynthetic photon flux density (PPFD) of 200 or 1000 µmol m−2 s−1 was not correlated with DW in either green or red cultivars. In conclusion, early PCS quantification is a useful tool for the selection of fast-growing green lettuce phenotypes. However, this approach may not work in cultivars with high anthocyanin content, because anthocyanins direct excitation energy away from photosynthesis and growth, weakening the correlation between incident light and growth.
(This article belongs to the Special Issue Imaging for Plant Phenotyping)
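The headline result is a regression statistic: early PCS explaining variation in final dry weight (R2 = 0.74 for green cultivars). As a sketch of how such an R2 is obtained from per-cultivar measurements (the numbers below are invented for illustration, not the paper's data):

```python
import numpy as np

def r_squared(x: np.ndarray, y: np.ndarray) -> float:
    """Goodness of fit (R^2) of a simple linear regression of y on x."""
    slope, intercept = np.polyfit(x, y, 1)      # least-squares line
    resid = y - (slope * x + intercept)
    ss_res = np.sum(resid ** 2)                 # residual sum of squares
    ss_tot = np.sum((y - y.mean()) ** 2)        # total sum of squares
    return float(1 - ss_res / ss_tot)

# Hypothetical cultivars: early canopy size (cm^2) vs final dry weight (g)
pcs_13dag = np.array([2.1, 3.4, 4.0, 5.2, 6.1, 7.3, 8.2])
dry_weight = np.array([10.5, 13.2, 15.9, 18.1, 22.4, 24.0, 28.3])
print(round(r_squared(pcs_13dag, dry_weight), 2))
```

With real data, an R2 near 0.74 at 13 DAG is what justifies using a single early canopy image as a non-invasive proxy for end-of-cycle biomass.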

24 pages, 8432 KiB  
Article
Registration and Fusion of Close-Range Multimodal Wheat Images in Field Conditions
by Sébastien Dandrifosse, Alexis Carlier, Benjamin Dumont and Benoît Mercatoris
Remote Sens. 2021, 13(7), 1380; https://doi.org/10.3390/rs13071380 - 3 Apr 2021
Cited by 16 | Viewed by 3756
Abstract
Multimodal image fusion has the potential to enrich the information gathered by multi-sensor plant phenotyping platforms. Fusion of images from multiple sources is, however, hampered by the technical bottleneck of image registration. The aim of this paper is to provide a solution to the registration and fusion of multimodal wheat images in field conditions and at close range. Eight registration methods were tested on nadir wheat images acquired by a pair of red, green and blue (RGB) cameras, a thermal camera, and a multispectral camera array. The most accurate method, relying on a local transformation, aligned the images with an average error of 2 mm but was not reliable for thermal images. More generally, the suggested registration method and the preprocessing steps necessary before fusion (plant mask erosion, pixel intensity averaging) depend on the application. Consequently, the main output of this study was the identification of four registration-fusion strategies: (i) the REAL-TIME strategy, based solely on the cameras' positions; (ii) the FAST strategy, suitable for all types of images tested; (iii) and (iv) the ACCURATE and HIGHLY ACCURATE strategies, which handle local distortion but cannot deal with images of very different natures. These suggestions are, however, limited to the methods compared in this study. Further research should investigate how recent cutting-edge registration methods would perform in the specific case of wheat canopy.
(This article belongs to the Special Issue Imaging for Plant Phenotyping)
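At its core, registration means estimating a geometric transform between cameras. As a minimal sketch of the global case only (a least-squares 2-D affine fit to matched keypoints; the keypoints below are synthetic, and the paper's local, per-region methods go well beyond this):

```python
import numpy as np

def fit_affine(src: np.ndarray, dst: np.ndarray) -> np.ndarray:
    """Least-squares 2-D affine transform mapping src points to dst.

    A stand-in for the global step of multimodal registration: given
    matched keypoints from two cameras, solve for the 2x3 affine
    matrix [a b tx; c d ty] minimizing squared residuals.
    """
    n = src.shape[0]
    A = np.hstack([src, np.ones((n, 1))])        # [x, y, 1] design matrix
    coeffs, *_ = np.linalg.lstsq(A, dst, rcond=None)
    return coeffs.T                              # 2 x 3 affine matrix

# Synthetic matched points related by a pure shift of (+5, -3) pixels
src = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0]])
dst = src + np.array([5.0, -3.0])
M = fit_affine(src, dst)
print(M.round(2))  # identity rotation/scale, translation column [5, -3]
```

A global fit like this corresponds to the cheap end of the paper's spectrum (the REAL-TIME/FAST strategies); handling local leaf-level distortion requires the per-region transformations the authors evaluate.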
