Imaging Technology for Detecting Crops and Agricultural Products-II

A special issue of Agronomy (ISSN 2073-4395). This special issue belongs to the section "Precision and Digital Agriculture".

Deadline for manuscript submissions: closed (31 January 2024) | Viewed by 8094

Special Issue Editors


Guest Editor
Intermountain Research & Extension Center, University of California, Tulelake, CA 96134, USA
Interests: precision agriculture; remote sensing; digital agriculture; yield monitoring

Guest Editor
Food, Water, Waste Research Group (FWW), Faculty of Engineering, University of Nottingham, University Park, Nottingham NG7 2RD, UK
Interests: non-invasive food quality assessment; digital food; machine learning; postharvest engineering

Special Issue Information

Dear Colleagues,

Imaging applications in agriculture are rapidly improving at multiple scales and have the potential to become key elements of sustainable agricultural intensification systems. In particular, satellite and drone imagery provide solutions for monitoring field crops and their within-field variability in crop health status, weed detection, and yield. Low-altitude imagery and machine vision applied to agricultural products are having a clear impact on sorting and harvesting automation. Moreover, the current availability of multispectral and hyperspectral sensors and images, combined with modern data processing and machine-learning techniques, enables unprecedented ideas and applications in agriculture. Imaging applications are usually coupled with machine-learning algorithms to develop classification and regression models. Deep learning is a relatively new machine-learning technique that has gained importance across the agri-food chain, driven by significant advances in image acquisition hardware and by the computational power now available from personal computers with high-capability GPUs and from high-performance cloud-based servers. There is no doubt that imaging applications in agriculture will continue to drive promising solutions in the current digital agriculture revolution. Further research efforts and application ideas are still needed to improve the quality of agricultural products and to support farmers' decisions under diverse field and crop conditions. This Special Issue aims to exchange knowledge, ideas, analytical techniques, applications, and experiments that use imagery solutions in agricultural applications.

Dr. Ahmed Kayad
Dr. Ahmed Rady
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com after registering and logging in to the website. Once registered, proceed to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Agronomy is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • digital agriculture
  • remote sensing
  • weed detection
  • drone
  • RGB imaging
  • thermal imagery
  • object detection
  • hyperspectral imaging
  • machine learning
  • deep learning

Published Papers (7 papers)


Research

15 pages, 2623 KiB  
Article
Use of Indices in RGB and Random Forest Regression to Measure the Leaf Area Index in Maize
by Leonardo Pinto de Magalhães and Fabrício Rossi
Agronomy 2024, 14(4), 750; https://doi.org/10.3390/agronomy14040750 - 5 Apr 2024
Viewed by 604
Abstract
In the cultivation of maize, the leaf area index (LAI) serves as an important metric of plant development. Unmanned aerial vehicles (UAVs) that capture RGB images, along with random forest regression (RFR), can be used to measure LAI indirectly through vegetative indices. Research using these techniques is at an early stage, especially in the context of maize for silage. Therefore, this study aimed to evaluate which vegetative indices correlate most strongly with maize LAI and to compare regression methods. RFR, ridge regression (RR), support vector machine (SVM), and multiple linear regression (MLR) models were fitted in Python using images obtained in an area cultivated with maize for silage. The results showed that the RGB spectral indices saturated when the LAI reached 3 m2 m−2, with the VEG (vegetative index), COM (combination), ExGR (red–green excess), and TGI (triangular greenness index) indices selected for modeling. Among the regression methods, RFR showed superior performance, with an R2 value of 0.981 and a root mean square error (RMSE) of 0.138 m2 m−2. Therefore, it can be concluded that RFR using RGB indices is a good way to obtain the LAI indirectly. Full article
(This article belongs to the Special Issue Imaging Technology for Detecting Crops and Agricultural Products-II)
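As a sketch of the approach described above (regressing LAI on RGB vegetative indices with random forest regression), the snippet below fits an RFR model on synthetic data. The index values, the saturating LAI response, and the train/validation split are illustrative assumptions, not the authors' dataset.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(0)
# Synthetic stand-ins for the four selected RGB indices (VEG, COM, ExGR, TGI)
X = rng.uniform(0.0, 1.0, size=(200, 4))
# Hypothetical LAI response that saturates, echoing the reported plateau near 3 m2 m-2
lai = 3.0 * (1.0 - np.exp(-2.0 * X.mean(axis=1))) + rng.normal(0.0, 0.1, 200)

# Train on 150 plots, validate on the remaining 50
model = RandomForestRegressor(n_estimators=500, random_state=0)
model.fit(X[:150], lai[:150])
pred = model.predict(X[150:])
rmse = mean_squared_error(lai[150:], pred) ** 0.5
print(f"validation RMSE: {rmse:.3f} m2 m-2")
```

With real imagery, the four feature columns would be the per-plot index values computed from orthomosaics rather than random numbers.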

17 pages, 2513 KiB  
Article
Combining Image-Based Phenotyping and Multivariate Analysis to Estimate Fruit Fresh Weight in Segregation Lines of Lowland Tomatoes
by Muh Farid, Muhammad Fuad Anshori, Riccardo Rossi, Feranita Haring, Katriani Mantja, Andi Dirpan, Siti Halimah Larekeng, Marlina Mustafa, Adnan Adnan, Siti Antara Maedhani Tahara, Nirwansyah Amier, M. Alfan Ikhlasul Amal and Andi Isti Sakinah
Agronomy 2024, 14(2), 338; https://doi.org/10.3390/agronomy14020338 - 6 Feb 2024
Viewed by 793
Abstract
Fruit weight is an important guideline for breeders and farmers seeking to increase marketable production, although it conventionally requires destructive measurements. Combining image-based phenotyping (IBP) approaches with multivariate analysis has the potential to further improve line selection based on economic traits such as fruit weight. Therefore, this study aimed to evaluate the potential of image-derived phenotypic traits as proxies for individual fruit weight estimation using multivariate analysis. To this end, an IBP experiment was carried out on five populations of lowland tomato. Specifically, the Mawar (M; 10 plants), Karina (K; 10 plants), and F2 generation cross (100 lines) samples were used to extract training data for the proposed estimation model, while data derived from M/K//K backcross population (35 lines) and F5 population (50 lines) plants were used for destructive and non-destructive validation, respectively. Several phenotypic traits were extracted from each imaged tomato fruit, including the slice and whole fruit area (FA), roundness (FR), width (FW), height (FH), and red (RI), green (GI), and blue (BI) indices, and used as inputs of a genetic- and multivariate-based method for non-destructively predicting fruit fresh weight (FFW). Based on this research, the whole FA has the greatest potential for predicting tomato FFW regardless of the analyzed cultivar. The relevant model exhibited high power in predicting FFW, as shown by the R2-adjusted, R2-deviation, and RMSE statistics obtained for calibration (81.30%, 0.20%, 3.14 g), destructive validation (69.80%, 0.90%, 4.46 g), and non-destructive validation (80.20%, 0.50%, 2.12 g).
These results suggest the potential applicability of the proposed IBP approach in guiding field robots or machines for precision harvesting based on non-destructive estimations of fruit weight from image-derived area, thereby enhancing agricultural practices in lowland tomato cultivation. Full article
(This article belongs to the Special Issue Imaging Technology for Detecting Crops and Agricultural Products-II)
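A minimal sketch of the core idea above, predicting fruit fresh weight from image-derived whole-fruit area with a linear model. All values are synthetic, and the roughly linear area-weight relationship is an assumption made only for illustration.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(2)
# Synthetic whole-fruit projected areas (cm^2), as if extracted from segmented images
area = rng.uniform(4.0, 30.0, 120)
# Assumed near-linear area-weight relationship plus measurement noise
weight = 3.0 * area + rng.normal(0.0, 4.0, 120)

# Calibrate on 80 fruits, validate on the remaining 40
train, val = slice(0, 80), slice(80, None)
model = LinearRegression().fit(area[train, None], weight[train])
pred = model.predict(area[val, None])
rmse = float(np.sqrt(np.mean((pred - weight[val]) ** 2)))
print(f"validation RMSE: {rmse:.2f} g")
```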

15 pages, 46100 KiB  
Article
An Improved Rotating Box Detection Model for Litchi Detection in Natural Dense Orchards
by Bin Li, Huazhong Lu, Xinyu Wei, Shixuan Guan, Zhenyu Zhang, Xingxing Zhou and Yizhi Luo
Agronomy 2024, 14(1), 95; https://doi.org/10.3390/agronomy14010095 - 30 Dec 2023
Viewed by 810
Abstract
Accurate litchi identification is of great significance for orchard yield estimation. Litchi in natural scenes show large differences in scale and are occluded by leaves, reducing the accuracy of litchi detection models. Traditional horizontal bounding boxes introduce a large amount of background and overlap with adjacent boxes, further reducing detection accuracy. Therefore, this study introduces a rotating-box detection model and explores its capabilities in scenarios with occlusion and small targets. First, a dataset for litchi rotation detection in natural scenes was constructed. Second, three improvements to YOLOv8n are proposed: a transformer module is introduced after the C2f module in the eighth layer of the backbone network, an ECA attention module is added to the neck network to improve feature extraction, and a 160 × 160 scale detection head is introduced to enhance small-target detection. The test results show that, compared to the baseline YOLOv8n model, the proposed model improves precision, recall, and mAP by 11.7%, 5.4%, and 7.3%, respectively. In addition, four state-of-the-art mainstream backbone networks, namely MobileNetv3-small, MobileNetv3-large, ShuffleNetv2, and GhostNet, were compared against the proposed model. The proposed model exhibits the best performance on the litchi dataset, with precision, recall, and mAP reaching 84.6%, 68.6%, and 79.4%, respectively. This research can provide a reference for litchi yield estimation in complex orchard environments. Full article
(This article belongs to the Special Issue Imaging Technology for Detecting Crops and Agricultural Products-II)
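The motivation for rotated boxes, namely that a horizontal bounding box around a tilted fruit cluster encloses much more background, can be illustrated with a small geometry sketch (all dimensions hypothetical):

```python
import numpy as np

def rotated_rect_corners(cx, cy, w, h, angle_deg):
    """Corner coordinates of a rotated rectangle in image space."""
    a = np.deg2rad(angle_deg)
    rot = np.array([[np.cos(a), -np.sin(a)], [np.sin(a), np.cos(a)]])
    pts = np.array([[-w / 2, -h / 2], [w / 2, -h / 2], [w / 2, h / 2], [-w / 2, h / 2]])
    return pts @ rot.T + [cx, cy]

# A hypothetical elongated litchi cluster tilted 45 degrees
corners = rotated_rect_corners(100.0, 100.0, 80.0, 30.0, 45.0)
rot_area = 80.0 * 30.0  # the rotated box fits the cluster tightly
(xmin, ymin), (xmax, ymax) = corners.min(axis=0), corners.max(axis=0)
hbb_area = (xmax - xmin) * (ymax - ymin)  # smallest horizontal box covering the same cluster
print(f"extra background enclosed by the horizontal box: {hbb_area / rot_area - 1:.0%}")
```

For this 45-degree case the horizontal box covers more than twice the rotated box's area; that surplus is exactly the background and inter-box overlap that rotated detection avoids.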

14 pages, 7470 KiB  
Article
Detection of Fundamental Quality Traits of Winter Jujube Based on Computer Vision and Deep Learning
by Zhaojun Ban, Chenyu Fang, Lingling Liu, Zhengbao Wu, Cunkun Chen and Yi Zhu
Agronomy 2023, 13(8), 2095; https://doi.org/10.3390/agronomy13082095 - 10 Aug 2023
Cited by 1 | Viewed by 986
Abstract
Winter jujube (Ziziphus jujuba Mill. cv. Dongzao) has a long history of cultivation in China, and its maturity grade determines its postharvest quality. Traditional methods for identifying the fundamental quality of winter jujube are time-consuming and labor-intensive, creating significant difficulties for winter jujube resource management. Applying deep learning here can help manufacturers and orchard workers quickly identify fundamental quality information. In this study, the best fundamental quality of winter jujube was determined from the correlation between maturity and fundamental quality by testing three simple physicochemical indexes, total soluble solids (TSS), total acid (TA), and puncture force, at five maturity stages classified by color and appearance. The results showed that the fully red fruits (the 4th grade) had the optimal eating-quality parameters. Additionally, winter jujube at the five maturity grades was photographed to build datasets used to train the ResNet-50 and iResNet-50 models. The iResNet-50 model was improved to overlap double residuals in the first Main Stage, achieving an accuracy of 98.35%, a precision of 98.40%, a recall of 98.35%, and an F1 score of 98.36%, which provides an important basis for automatic fundamental quality detection of winter jujube. This study offers ideas for fundamental quality classification of winter jujube during harvesting, fundamental quality screening in assembly-line production, and real-time monitoring during transportation and storage. Full article
(This article belongs to the Special Issue Imaging Technology for Detecting Crops and Agricultural Products-II)
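The accuracy, precision, recall, and F1 figures quoted above are standard multi-class classification metrics. A minimal sketch of how they are computed for a five-grade maturity problem, using made-up predictions rather than the authors' model outputs:

```python
from sklearn.metrics import accuracy_score, precision_recall_fscore_support

# Made-up predictions over five maturity grades (labels 0-4), two samples per grade
y_true = [0, 0, 1, 1, 2, 2, 3, 3, 4, 4]
y_pred = [0, 0, 1, 2, 2, 2, 3, 3, 4, 3]

acc = accuracy_score(y_true, y_pred)
# Weighted averaging accounts for per-grade support, as is common for imbalanced sets
prec, rec, f1, _ = precision_recall_fscore_support(
    y_true, y_pred, average="weighted", zero_division=0
)
print(f"accuracy={acc:.2%} precision={prec:.2%} recall={rec:.2%} F1={f1:.2%}")
```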

19 pages, 7955 KiB  
Article
Cross-Platform Wheat Ear Counting Model Using Deep Learning for UAV and Ground Systems
by Baohua Yang, Ming Pan, Zhiwei Gao, Hongbo Zhi and Xiangxuan Zhang
Agronomy 2023, 13(7), 1792; https://doi.org/10.3390/agronomy13071792 - 4 Jul 2023
Cited by 1 | Viewed by 1190
Abstract
Wheat is one of the most widely cultivated crops. Accurate and efficient high-throughput ear counting is important for wheat production, yield evaluation, and seed breeding. The traditional wheat ear counting method is inefficient owing to its small scope of investigation. In field scenes in particular, images obtained from different platforms, including ground systems and unmanned aerial vehicles (UAVs), differ in density, scale, and wheat ear distribution, so the counting task still faces challenges. To this end, a density map counting network (LWDNet) model was constructed for cross-platform wheat ear statistics. First, CA-MobileNetV3 was constructed by introducing a collaborative attention (CA) mechanism to optimize the lightweight MobileNetV3 network, which was used as the front end of the feature extraction network, aiming to solve the problem of occlusion and adhesion of wheat ears in the field. Second, to enhance the model's ability to learn detailed features of wheat ears, the CARAFE upsampling module was introduced in the feature fusion layer to better restore wheat ear characteristics and improve counting accuracy. Finally, density map regression was used to achieve high-density, small-target ear counting, and the model was tested on datasets from different platforms. The results showed that our method can efficiently count wheat ears at different spatial scales, achieving good accuracy while maintaining a competitive number of parameters (2.38 million, with a size of 9.24 MB), providing technical support for wheat breeding and screening analysis. Full article
(This article belongs to the Special Issue Imaging Technology for Detecting Crops and Agricultural Products-II)
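The density-map regression idea above rests on a simple property: if point annotations (one per ear) are smoothed with a Gaussian kernel, the resulting map integrates back to the ear count. A minimal sketch with hypothetical ear positions:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

# One 1-valued pixel per annotated wheat ear (hypothetical positions)
density = np.zeros((128, 128))
ears = [(20, 30), (21, 33), (64, 64), (100, 90), (101, 92)]
for row, col in ears:
    density[row, col] = 1.0

# Gaussian smoothing spreads each dot into a blob while preserving total mass,
# so a counting network trained to regress this map is summed at test time
density = gaussian_filter(density, sigma=3.0)
print(f"estimated count: {density.sum():.2f}")
```

This is what lets a density-map model count overlapping, adhering ears that a per-box detector would merge.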

17 pages, 2610 KiB  
Article
Assessing Within-Field Variation in Alfalfa Leaf Area Index Using UAV Visible Vegetation Indices
by Keegan Hammond, Ruth Kerry, Ryan R. Jensen, Ross Spackman, April Hulet, Bryan G. Hopkins, Matt A. Yost, Austin P. Hopkins and Neil C. Hansen
Agronomy 2023, 13(5), 1289; https://doi.org/10.3390/agronomy13051289 - 30 Apr 2023
Cited by 4 | Viewed by 1615
Abstract
This study examines the use of leaf area index (LAI) to inform variable-rate irrigation (VRI) for irrigated alfalfa (Medicago sativa). LAI is useful for predicting zone-specific evapotranspiration (ETc). One approach toward estimating LAI is to utilize the relationship between LAI and visible vegetation indices (VVIs) using unmanned aerial vehicle (UAV) imagery. This research has three objectives: (1) to measure and describe the within-field variation in LAI and canopy height for an irrigated alfalfa field, (2) to evaluate the relationships between the alfalfa LAI and various VVIs with and without field average canopy height, and (3) to use UAV images and field average canopy height to describe the within-field variation in LAI and the potential application to VRI. The study was conducted in 2021–2022 in Rexburg, Idaho. Over the course of the study, the measured LAI varied from 0.23 m2 m−2 to 11.28 m2 m−2 and canopy height varied from 6 cm to 65 cm. There was strong spatial clustering in the measured LAI but the spatial patterns were dynamic between dates. Among eleven VVIs evaluated, the four that combined green and red wavelengths but excluded blue wavelengths showed the most promise. For all VVIs, adding average canopy height to multiple linear regression improved LAI prediction. The regression model using the modified green–red vegetation index (MGRVI) and canopy height (R2 = 0.93) was applied to describe the spatial variation in the LAI among VRI zones. There were significant (p < 0.05) but not practical differences (<15%) between pre-defined zones. UAV imagery coupled with field average canopy height can be a useful tool for predicting LAI in alfalfa. Full article
(This article belongs to the Special Issue Imaging Technology for Detecting Crops and Agricultural Products-II)
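A sketch of the reported regression, modeling LAI as a linear function of MGRVI and canopy height. The MGRVI formula, (green^2 - red^2) / (green^2 + red^2), is the standard definition; the reflectance values and the LAI response below are synthetic assumptions, not the Rexburg field data.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)
# Synthetic green/red reflectance and canopy height (6-65 cm, matching the study's range)
green = rng.uniform(0.2, 0.6, 100)
red = rng.uniform(0.1, 0.5, 100)
mgrvi = (green**2 - red**2) / (green**2 + red**2)  # modified green-red vegetation index
height = rng.uniform(0.06, 0.65, 100)  # field average canopy height, m
# Hypothetical LAI generated from both predictors plus noise
lai = 2.0 + 4.0 * mgrvi + 8.0 * height + rng.normal(0.0, 0.2, 100)

X = np.column_stack([mgrvi, height])
model = LinearRegression().fit(X, lai)
print(f"R2 = {model.score(X, lai):.3f}")
```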

12 pages, 1500 KiB  
Article
Non-Destructive Method for Estimating Seed Weights from Intact Peanut Pods Using Soft X-ray Imaging
by Guangjun Qiu, Yuanyuan Liu, Ning Wang, Rebecca S. Bennett and Paul R. Weckler
Agronomy 2023, 13(4), 1127; https://doi.org/10.3390/agronomy13041127 - 15 Apr 2023
Viewed by 1232
Abstract
In the U.S., peanut farmers receive premium prices for crops with high seed grades. One component of seed grade is the proportion of seed weight to that of pod hulls and other matter. Seed weight and size are also important traits for food processors. Current methods for evaluating peanut seed grade require the opening of the pod and are time-consuming and labor-intensive. In this study, a non-destructive and efficient method to determine peanut seed weights was investigated. X-ray images of a total of 513 peanut pods from three commercial cultivars, each representing three market types, were taken using a soft X-ray imaging system. The region of interest of each image, the seeds, was extracted two ways, manually and with a differential evolution segmentation algorithm. The comprehensive attenuation index (CAI) value was calculated from the segmented regions of interest. Lastly, linear regression models were established between peanut seed weights and the CAI. The results demonstrated that the X-ray imaging technology, coupled with the differential evolution segmentation algorithm, may be used to estimate seed weights efficiently from intact peanut pods. Full article
(This article belongs to the Special Issue Imaging Technology for Detecting Crops and Agricultural Products-II)
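The final step described above, a linear regression between seed weight and the comprehensive attenuation index (CAI), can be sketched as follows. The CAI and weight values are hypothetical, not measurements from the study.

```python
import numpy as np
from scipy.stats import linregress

# Hypothetical CAI values and measured seed weights (g) for a handful of pods
cai = np.array([0.42, 0.55, 0.61, 0.70, 0.78, 0.85, 0.93])
weight = np.array([0.51, 0.63, 0.70, 0.82, 0.88, 0.97, 1.05])

# Fit weight = slope * CAI + intercept and report the goodness of fit
fit = linregress(cai, weight)
print(f"weight ~ {fit.slope:.2f} * CAI + {fit.intercept:.2f} (r2 = {fit.rvalue**2:.3f})")
```

In practice the model would be calibrated per market type, then applied to CAI values extracted from X-ray images of unopened pods.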
