Computer Vision and Deep Learning Technology in Agriculture: 2nd Edition

A special issue of Agronomy (ISSN 2073-4395). This special issue belongs to the section "Precision and Digital Agriculture".

Deadline for manuscript submissions: closed (31 October 2024) | Viewed by 11573

Special Issue Editor


Dr. Juncheng Ma
Guest Editor
College of Water Resources and Civil Engineering, China Agricultural University, Beijing, China
Interests: crop trait estimation; crop growth monitoring

Special Issue Information

Dear Colleagues,

Agriculture is a complex and unpredictable field. Computer vision, through the processing of visual data, may improve our understanding of many aspects of agriculture, and deep learning—a modern approach to image processing and data analysis—offers promising results. Recent advances in deep learning have significantly advanced computer vision applications in agriculture, providing solutions to many long-standing challenges.

This Special Issue of Agronomy will address advances in the development and application of computer vision, machine learning, and deep learning techniques in agriculture. Papers describing new techniques for processing high-resolution images collected via RGB, multispectral, and hyperspectral sensors from the air (e.g., UAVs) or from the ground are also welcome. We encourage you to submit your research on state-of-the-art applications of computer vision and deep learning in agriculture to this Special Issue.

Dr. Juncheng Ma
Guest Editor

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to the website, then using the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers are published continuously in the journal (as soon as accepted) and listed together on the Special Issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for the submission of manuscripts is available on the Instructions for Authors page. Agronomy is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and written in good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • precision agriculture
  • computer vision
  • deep learning
  • machine learning
  • image processing

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad-scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • e-Book format: Special Issues with more than 10 articles can be published as dedicated e-books, ensuring wide and rapid dissemination.

Further information on MDPI's Special Issue policies can be found on the MDPI website.

Published Papers (9 papers)


Research

16 pages, 5675 KiB  
Article
Utilization of Machine Learning and Hyperspectral Imaging Technologies for Classifying Coated Maize Seed Vigor: A Case Study on the Assessment of Seed DNA Repair Capability
by Kris Wonggasem, Papis Wongchaisuwat, Pongsan Chakranon and Damrongvudhi Onwimol
Agronomy 2024, 14(9), 1991; https://doi.org/10.3390/agronomy14091991 - 2 Sep 2024
Viewed by 818
Abstract
The conventional evaluation of maize seed vigor is a time-consuming and labor-intensive process. By contrast, this study introduces an automated, nondestructive framework for classifying maize seed vigor with different seed DNA repair capabilities using hyperspectral images. The selection of coated maize seeds for our case study also aligns well with practical applications. To ensure the accuracy and reliability of the results, rigorous data preprocessing steps were implemented to extract high-quality information from the raw spectral data obtained from the hyperspectral images, and commonly used pretreatment methods were explored. Instead of analyzing all wavelengths of the spectral data, a competitive adaptive reweighted sampling (CARS) method was used to select the more informative wavelengths, improving analysis efficiency. Furthermore, this study leveraged machine learning models, enriched through oversampling techniques to address data imbalance at the seed level. A support vector machine combined with these enhancements achieved promising results: 100% sensitivity, 96.91% specificity, and a Matthews correlation coefficient (MCC) of 0.9807. This study thus highlights the effectiveness of hyperspectral imaging and machine learning in modern seed assessment practices. By introducing a seed vigor classification system that can even accommodate coated seeds, it offers a potential pathway for empowering seed producers in practical, real-world applications.
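
For readers unfamiliar with this pipeline, the sketch below shows what the modeling stage might look like in scikit-learn: SMOTE oversampling to address class imbalance, an RBF support vector machine, and sensitivity/MCC as metrics. The spectra, labels, and wavelength indices are synthetic stand-ins (CARS itself is not part of scikit-learn, so a placeholder band-index list is used); this illustrates the technique, not the authors' code.

```python
# Modeling-stage sketch: SMOTE oversampling + RBF SVM + MCC/sensitivity.
# Requires scikit-learn and imbalanced-learn.
import numpy as np
from imblearn.over_sampling import SMOTE
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.metrics import matthews_corrcoef, recall_score

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 224))        # 500 seeds x 224 wavelengths (synthetic)
y = rng.integers(0, 2, size=500)       # 0 = low vigor, 1 = high vigor (synthetic)
selected = np.arange(0, 224, 8)        # placeholder for CARS-selected bands

X_tr, X_te, y_tr, y_te = train_test_split(
    X[:, selected], y, test_size=0.3, stratify=y, random_state=0)
X_tr, y_tr = SMOTE(random_state=0).fit_resample(X_tr, y_tr)  # balance classes

scaler = StandardScaler().fit(X_tr)
clf = SVC(kernel="rbf", C=10.0).fit(scaler.transform(X_tr), y_tr)

y_pred = clf.predict(scaler.transform(X_te))
print("sensitivity:", recall_score(y_te, y_pred))
print("MCC:", matthews_corrcoef(y_te, y_pred))
```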

18 pages, 7096 KiB  
Article
Prune-FSL: Pruning-Based Lightweight Few-Shot Learning for Plant Disease Identification
by Wenbo Yan, Quan Feng, Sen Yang, Jianhua Zhang and Wanxia Yang
Agronomy 2024, 14(9), 1878; https://doi.org/10.3390/agronomy14091878 - 23 Aug 2024
Cited by 1 | Viewed by 629
Abstract
The high performance of deep learning networks relies on large datasets and powerful computational resources, yet collecting enough diseased training samples is a daunting challenge. In addition, existing few-shot learning models tend to be large, which makes their deployment on edge devices difficult. To address these issues, this study proposes a pruning-based lightweight few-shot learning (Prune-FSL) approach, which uses a very small number of labeled samples to identify unknown classes of crop diseases while keeping the model lightweight. First, a disease few-shot learning model was built on a metric-based meta-learning framework to address sample scarcity. Second, a slimming pruning method was used to trim the network channels according to the γ coefficients of the BN layers, achieving efficient network compression. Finally, a meta-learning pruning strategy was designed to enhance the generalization ability of the model. The experimental results show that with an 80% parameter reduction, Prune-FSL reduces the MACs computation from 3.52 G to 0.14 G while achieving accuracies of 77.97% and 90.70% in the 5-way 1-shot and 5-way 5-shot settings, respectively. The pruned model was also compared with other representative lightweight models and outperformed five mainstream lightweight networks, including ShuffleNet, matching the performance of an 18-layer model with one-fifth the number of parameters. This study further demonstrated that pruning after sparse pre-training was superior to pruning after meta-learning, an advantage that becomes more significant as the network parameters are reduced. The experiments also showed that model performance decreases as the number of ways increases and increases as the number of shots increases. Overall, this study presents a few-shot learning method for crop disease recognition on edge devices that has fewer parameters and higher performance than existing related studies, providing a feasible technical route for future small-sample disease recognition under edge device conditions.
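
The channel-selection step at the heart of slimming-style pruning can be sketched in a few lines of PyTorch: rank the γ (scale) coefficients of all BatchNorm layers and keep only channels above a global threshold. The toy network and prune ratio below are illustrative; actual channel removal and the paper's meta-learning pruning schedule are omitted.

```python
# Slimming-style channel selection: build per-layer keep masks from the
# absolute BatchNorm gamma (scale) values under a global prune ratio.
import torch
import torch.nn as nn

def bn_channel_masks(model: nn.Module, prune_ratio: float):
    # Gather all gammas across the network to set one global threshold.
    gammas = torch.cat([m.weight.abs().detach().flatten()
                        for m in model.modules()
                        if isinstance(m, nn.BatchNorm2d)])
    threshold = torch.quantile(gammas, prune_ratio)
    masks = {}
    for name, m in model.named_modules():
        if isinstance(m, nn.BatchNorm2d):
            masks[name] = m.weight.abs().detach() > threshold  # True = keep
    return masks

# Example on a toy backbone:
net = nn.Sequential(nn.Conv2d(3, 16, 3), nn.BatchNorm2d(16), nn.ReLU(),
                    nn.Conv2d(16, 32, 3), nn.BatchNorm2d(32), nn.ReLU())
for name, keep in bn_channel_masks(net, 0.5).items():
    print(name, "keep", int(keep.sum()), "of", keep.numel(), "channels")
```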

29 pages, 13843 KiB  
Article
Hybridizing Long Short-Term Memory and Bi-Directional Long Short-Term Memory Models for Efficient Classification: A Study on Xanthomonas axonopodis pv. phaseoli (XaP) in Two Bean Varieties
by Ramazan Kursun, Aysegul Gur, Kubilay Kurtulus Bastas and Murat Koklu
Agronomy 2024, 14(7), 1495; https://doi.org/10.3390/agronomy14071495 - 10 Jul 2024
Viewed by 813
Abstract
This study addresses Xanthomonas axonopodis pv. phaseoli (XaP), a bacterial pathogen that causes common bacterial blight of bean and significant economic losses in the agricultural sector. We study the disease on the Üstün42 and Akbulut bean varieties. In this study, a total of 4000 images of healthy and diseased plants were used across both varieties. These images were classified by AlexNet, VGG16, and VGG19 models, and the raw images were then reclassified after pre-processing. According to the results obtained, the accuracy rates for the pre-processed images classified by the VGG19, VGG16, and AlexNet models were 0.9213, 0.9125, and 0.8950, respectively. The models were then hybridized with LSTM and BiLSTM for both raw and pre-processed images, creating new models. When the performance of these hybrid models was evaluated, the models hybridized with LSTM were more successful than the plain models, while the models hybridized with BiLSTM gave better results than those hybridized with LSTM. In particular, the VGG19+BiLSTM model stood out by achieving 94.25% classification accuracy on pre-processed images. This study emphasizes the effectiveness of image processing techniques for disease detection in agriculture and contributes a new dataset to the literature for evaluating the performance of hybridized models.
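
A hedged sketch of the hybridization idea: VGG16 convolutional features are unrolled into a spatial sequence and passed through a bidirectional LSTM whose output drives the classifier. The layer sizes and sequence construction below are assumptions for illustration, not the authors' exact configuration.

```python
# CNN+BiLSTM hybrid sketch: VGG16 feature maps -> spatial sequence -> BiLSTM.
import torch
import torch.nn as nn
from torchvision.models import vgg16

class VGGBiLSTM(nn.Module):
    def __init__(self, num_classes: int = 2):
        super().__init__()
        self.backbone = vgg16(weights=None).features   # (B, 512, 7, 7) @ 224x224
        self.bilstm = nn.LSTM(input_size=512, hidden_size=128,
                              batch_first=True, bidirectional=True)
        self.head = nn.Linear(2 * 128, num_classes)

    def forward(self, x):
        f = self.backbone(x)                 # (B, 512, 7, 7)
        seq = f.flatten(2).transpose(1, 2)   # (B, 49, 512): 49 spatial "steps"
        out, _ = self.bilstm(seq)
        return self.head(out[:, -1])         # classify from the last step

model = VGGBiLSTM(num_classes=2)
logits = model(torch.randn(4, 3, 224, 224))
print(logits.shape)  # torch.Size([4, 2])
```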

19 pages, 7940 KiB  
Article
Light-FC-YOLO: A Lightweight Method for Flower Counting Based on Enhanced Feature Fusion with a New Efficient Detection Head
by Xiaomei Yi, Hanyu Chen, Peng Wu, Guoying Wang, Lufeng Mo, Bowei Wu, Yutong Yi, Xinyun Fu and Pengxiang Qian
Agronomy 2024, 14(6), 1285; https://doi.org/10.3390/agronomy14061285 - 14 Jun 2024
Viewed by 950
Abstract
Fast and accurate counting and positioning of flowers is the foundation of automated flower cultivation. However, counting and positioning high-density flowers against a complex background remains a challenge. Therefore, this paper proposes a lightweight flower counting and positioning model, Light-FC-YOLO, based on YOLOv8s. By integrating lightweight convolutions, the model is easier to port and deploy. At the same time, a new efficient detection head, Efficient head, and the LSKA large-kernel attention mechanism are introduced to enhance the model's feature detail extraction and change the weight ratio of shallow edge and key-point information in the network. Finally, the SIoU loss function, with its target angle deviation calculation, is introduced to improve the model's detection accuracy and target positioning ability. Experimental results show that Light-FC-YOLO, with a 27.2% reduction in model size and a 39.0% reduction in parameters, has a mean Average Precision (mAP) and recall that are 0.8% and 1.4% higher than those of YOLOv8s, respectively. In the counting comparison experiment, the coefficient of determination (R²) and root mean squared error (RMSE) of Light-FC-YOLO reached 0.9577 and 8.69, respectively, both superior to lightweight models such as YOLOv8s. The lightweight flower detection method proposed in this paper can efficiently complete flower positioning and counting tasks, providing technical support and reference solutions for automated flower production management.
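
The counting evaluation reported above (R² and RMSE between detected and manually counted flowers) can be reproduced with scikit-learn; the per-image counts below are synthetic stand-ins, not the paper's data.

```python
# Counting-evaluation sketch: compare per-image predicted counts against
# manual counts using R² and RMSE.
import numpy as np
from sklearn.metrics import r2_score, mean_squared_error

manual = np.array([52, 31, 78, 44, 90, 63])      # ground-truth counts (synthetic)
predicted = np.array([50, 33, 75, 47, 88, 60])   # model-detected counts (synthetic)

r2 = r2_score(manual, predicted)
rmse = mean_squared_error(manual, predicted) ** 0.5
print(f"R2 = {r2:.4f}, RMSE = {rmse:.2f}")
```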

16 pages, 2833 KiB  
Article
Intelligent Detection of Muskmelon Ripeness in Greenhouse Environment Based on YOLO-RFEW
by Defang Xu, Rui Ren, Huamin Zhao and Shujuan Zhang
Agronomy 2024, 14(6), 1091; https://doi.org/10.3390/agronomy14061091 - 21 May 2024
Cited by 1 | Viewed by 1179
Abstract
Accurate detection of muskmelon fruit ripeness is crucial to ensure fruit quality, optimize picking time, and enhance economic benefits. This study proposes an improved lightweight YOLO-RFEW model based on YOLOv8n, aiming to address the low efficiency of muskmelon ripeness detection and the complexity of deploying a target detection model on a muskmelon picking robot. Firstly, RFAConv replaces the Conv in the backbone of YOLOv8n, allowing the network to focus on the regions that contribute most to feature extraction. Secondly, feature extraction and fusion are enhanced by upgrading the C2f module to a C2f-FE module based on FasterNet and an efficient multi-scale attention (EMA) mechanism within the lightweight model. Finally, weighted intersection over union (WIoU) is adopted as the loss function to improve bounding box prediction and detection accuracy. The experimental results demonstrate that the YOLO-RFEW model achieves high accuracy, with precision, recall, F1 score, and mean Average Precision (mAP) values of 93.16%, 83.22%, 87.91%, and 90.82%, respectively, while maintaining a lightweight design and high efficiency, with a model size of 4.75 MB and an inference time of 1.5 ms. For the two maturity classes (M-u and M-r), the model obtains APs of 87.70% and 93.94%, respectively. Compared to YOLOv8n, the proposed approach achieves significant improvements in detection accuracy while reducing both model size and computational complexity, meeting the real-time detection requirements of muskmelon picking robots. Furthermore, compared with the lightweight models YOLOv3-Tiny, YOLOv4-Tiny, YOLOv5s, YOLOv7-Tiny, YOLOv8s, and YOLOv8n, YOLO-RFEW is only 28.55%, 22.42%, 24.50%, 40.56%, 22.12%, and 79.83% of their respective model sizes while achieving the highest F1 score and mAP values among these seven models. The feasibility and effectiveness of the improved scheme are verified through comparisons of heatmaps and detection images generated by YOLOv8n and YOLO-RFEW. In summary, the YOLO-RFEW model not only improves the accuracy of muskmelon ripeness detection but also remains lightweight and efficient, offering both theoretical support and practical value for the development of muskmelon picking robots.
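
As a quick arithmetic check, the reported F1 score is consistent with the reported precision and recall, since F1 is their harmonic mean:

```python
# Sanity check: F1 is the harmonic mean of precision and recall.
precision, recall = 93.16, 83.22   # % values from the abstract
f1 = 2 * precision * recall / (precision + recall)
print(f"F1 = {f1:.2f}%")           # -> F1 = 87.91%, matching the paper
```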

20 pages, 4630 KiB  
Article
U-Net with Coordinate Attention and VGGNet: A Grape Image Segmentation Algorithm Based on Fusion Pyramid Pooling and the Dual-Attention Mechanism
by Xiaomei Yi, Yue Zhou, Peng Wu, Guoying Wang, Lufeng Mo, Musenge Chola, Xinyun Fu and Pengxiang Qian
Agronomy 2024, 14(5), 925; https://doi.org/10.3390/agronomy14050925 - 28 Apr 2024
Viewed by 1154
Abstract
Currently, the classification of grapevine black rot disease relies on assessing the percentage of affected spots in the total area, with a primary focus on accurately segmenting these spots in images. Particularly challenging are cases in which lesion areas are small and boundaries are ill-defined, hampering precise segmentation. In our study, we introduce an enhanced U-Net network tailored for segmenting black rot spots on grape leaves. Leveraging VGG as the U-Net's backbone, we position the atrous spatial pyramid pooling (ASPP) module at the base of the U-Net to serve as a link between the encoder and decoder. Additionally, channel and spatial dual-attention modules are integrated into the decoder, alongside a feature pyramid network that fuses feature maps from diverse levels to enhance the segmentation of diseased regions. Our model outperforms traditional plant disease semantic segmentation approaches such as DeeplabV3+, U-Net, and PSPNet, achieving pixel accuracy (PA) and mean intersection over union (MIoU) scores of 94.33% and 91.09%, respectively. Performing strongly across various levels of spot segmentation, our method demonstrates its efficacy in enhancing the segmentation accuracy of black rot spots on grapevines.
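
An ASPP block of the kind placed between the encoder and decoder can be sketched in PyTorch as parallel atrous convolutions at several dilation rates, concatenated and projected back down. The rates and channel widths below are common defaults, not the paper's exact settings.

```python
# ASPP sketch: parallel atrous (dilated) convolutions plus a 1x1 branch,
# concatenated and projected to the output width.
import torch
import torch.nn as nn

class ASPP(nn.Module):
    def __init__(self, in_ch: int, out_ch: int, rates=(1, 6, 12, 18)):
        super().__init__()
        self.branches = nn.ModuleList([
            nn.Sequential(
                nn.Conv2d(in_ch, out_ch, 1 if r == 1 else 3,
                          padding=0 if r == 1 else r, dilation=r, bias=False),
                nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True))
            for r in rates])
        self.project = nn.Sequential(
            nn.Conv2d(len(rates) * out_ch, out_ch, 1, bias=False),
            nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True))

    def forward(self, x):
        # Each branch keeps the spatial size; concatenate along channels.
        return self.project(torch.cat([b(x) for b in self.branches], dim=1))

y = ASPP(512, 256)(torch.randn(1, 512, 32, 32))
print(y.shape)  # torch.Size([1, 256, 32, 32])
```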

23 pages, 10296 KiB  
Article
RICE-YOLO: In-Field Rice Spike Detection Based on Improved YOLOv5 and Drone Images
by Maoyang Lan, Changjiang Liu, Huiwen Zheng, Yuwei Wang, Wenxi Cai, Yingtong Peng, Chudong Xu and Suiyan Tan
Agronomy 2024, 14(4), 836; https://doi.org/10.3390/agronomy14040836 - 17 Apr 2024
Cited by 2 | Viewed by 1949
Abstract
The rice spike, a crucial part of the rice plant, plays a vital role in yield estimation, pest detection, and growth stage management in rice cultivation. When drones capture photos of rice fields, the high shooting angle and wide coverage can make rice spikes appear small in the captured images and can angularly distort objects at the image edges, resulting in significant occlusion and dense arrangements of rice spikes. These challenges, unique to drone image acquisition, may affect the accuracy of rice spike detection. This study proposes a rice spike detection method that combines deep learning algorithms with drone imagery. Building on an enhanced version of YOLOv5, the EMA (efficient multi-scale attention) mechanism is introduced, a novel neck network structure is designed, and the SIoU (SCYLLA intersection over union) loss is integrated. Experimental results demonstrate that RICE-YOLO achieves an mAP@0.5 of 94.8% and a recall of 87.6% on the rice spike dataset. Across growth stages, it attains an mAP@0.5 of 96.1% and a recall of 93.1% at the heading stage, and an mAP@0.5 of 86.2% and a recall of 82.6% at the filling stage. Overall, the results indicate that the proposed method enables real-time, efficient, and accurate detection and counting of rice spikes in field environments, offering a theoretical foundation and technical support for spike detection in the management of rice growth processes.
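
In deployment terms, spike counting reduces to running the trained detector over drone images and counting boxes above a confidence threshold. The sketch below uses the stock YOLOv5s weights from torch.hub as a stand-in for the trained RICE-YOLO weights (which are not distributed with this listing), and the image path is hypothetical.

```python
# Inference-and-count sketch with a YOLOv5 detector from torch.hub.
import torch

model = torch.hub.load("ultralytics/yolov5", "yolov5s")  # stand-in weights
model.conf = 0.25                                        # confidence threshold

results = model(["drone_field_001.jpg"])                 # hypothetical image path
detections = results.pandas().xyxy[0]                    # one DataFrame per image
print("spikes detected:", len(detections))
```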

13 pages, 10486 KiB  
Article
A Method for Analyzing the Phenotypes of Nonheading Chinese Cabbage Leaves Based on Deep Learning and OpenCV Phenotype Extraction
by Haobin Xu, Linxiao Fu, Jinnian Li, Xiaoyu Lin, Lingxiao Chen, Fenglin Zhong and Maomao Hou
Agronomy 2024, 14(4), 699; https://doi.org/10.3390/agronomy14040699 - 28 Mar 2024
Viewed by 1437
Abstract
Nonheading Chinese cabbage is an important leafy vegetable, and the quantitative identification and automated analysis of its leaves are crucial for cultivating new varieties with higher quality, yield, and resistance. Traditional leaf phenotypic analysis relies mainly on visual observation and the practical experience of breeders, making it time-consuming, labor-intensive, and imprecise, which results in low breeding efficiency. Considering these issues, a method for extracting and analyzing the phenotypes of nonheading Chinese cabbage leaves is proposed that integrates deep learning and OpenCV image processing, targeting four qualitative traits and ten quantitative traits across 1500 samples. First, a leaf classification model is trained using YOLOv8 to infer the qualitative traits of the leaves, after which the quantitative traits are extracted and calculated using OpenCV image processing. The results indicate that the model achieved an average accuracy of 95.25%, an average precision of 96.09%, an average recall of 96.31%, and an average F1 score of 0.9620 for the four qualitative traits. Among the ten quantitative traits, the OpenCV-calculated values for whole leaf length, leaf width, and total leaf area were compared with manually measured values, showing RMSEs of 0.19 cm, 0.1762 cm, and 0.2161 cm², respectively. Bland–Altman analysis indicated that the error values all fell within the 95% confidence intervals, and the average detection time per image was 269 ms. This method achieved good results in extracting phenotypic traits from nonheading Chinese cabbage leaves, significantly reducing the labor and time costs of genetic resource analysis, and provides a new high-throughput, precise, and automated technique for analyzing nonheading Chinese cabbage genetic resources.
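
The quantitative-trait step can be illustrated with plain OpenCV: threshold the leaf, take the largest contour, and derive area, length, and width from it. The image path and the px_per_cm calibration factor below are hypothetical, and the authors' exact measurement definitions may differ.

```python
# Leaf-measurement sketch: Otsu threshold -> largest contour -> area,
# length, and width from a rotated bounding box.
import cv2

img = cv2.imread("leaf.jpg")                       # hypothetical image path
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
_, mask = cv2.threshold(gray, 0, 255,
                        cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)

contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                               cv2.CHAIN_APPROX_SIMPLE)
leaf = max(contours, key=cv2.contourArea)          # assume leaf is largest blob

px_per_cm = 37.8                                   # calibration factor (assumed)
area_cm2 = cv2.contourArea(leaf) / px_per_cm**2
(_, _), (w, h), _ = cv2.minAreaRect(leaf)          # rotated bounding box
length_cm, width_cm = max(w, h) / px_per_cm, min(w, h) / px_per_cm
print(f"area={area_cm2:.2f} cm^2, "
      f"length={length_cm:.2f} cm, width={width_cm:.2f} cm")
```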

24 pages, 8939 KiB  
Article
YOLOv7-GCA: A Lightweight and High-Performance Model for Pepper Disease Detection
by Xuejun Yue, Haifeng Li, Qingkui Song, Fanguo Zeng, Jianyu Zheng, Ziyu Ding, Gaobi Kang, Yulin Cai, Yongda Lin, Xiaowan Xu and Chaoran Yu
Agronomy 2024, 14(3), 618; https://doi.org/10.3390/agronomy14030618 - 19 Mar 2024
Viewed by 1485
Abstract
Existing deep learning-based disease detection models for monitoring and preventing pepper diseases struggle to identify diseases accurately under inter-crop occlusion and various complex backgrounds. To address this issue, we propose a modified YOLOv7-GCA model based on YOLOv7 for pepper disease detection that can effectively overcome these challenges. The model introduces three key enhancements: firstly, the lightweight GhostNetV2 is used as the feature extraction network to improve detection speed. Secondly, a cascading fusion network (CFNet) replaces the original feature fusion network, which improves the model's expressive ability in complex backgrounds and realizes multi-scale feature extraction and fusion. Finally, the convolutional block attention module (CBAM) is introduced to focus on the important features in the images and improve the model's accuracy and robustness. The collected images were processed into a dataset of 1259 images covering four types of pepper diseases: anthracnose, bacterial disease, umbilical rot, and viral disease; data augmentation was then applied, and experiments were carried out on this dataset. The results demonstrate that the YOLOv7-GCA model reduces the parameter count by 34.3% compared to the original YOLOv7 while improving mAP by 13.4% and detection speed by 124 frames/s. The model size was also reduced from 74.8 MB to 46.9 MB, which facilitates deployment on mobile devices. Compared with seven other mainstream detection models, YOLOv7-GCA achieves a balance among speed, model size, and accuracy, proving to be a high-performance, lightweight pepper disease detection solution that can provide accurate and timely diagnoses for farmers and researchers.
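
CBAM is a published, widely used module; a compact PyTorch version is sketched below, with channel attention (a shared MLP over average- and max-pooled descriptors) followed by spatial attention (a 7×7 convolution over pooled channel maps). The reduction ratio is an illustrative default.

```python
# CBAM sketch: channel attention then spatial attention.
import torch
import torch.nn as nn

class CBAM(nn.Module):
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        # Shared MLP (as 1x1 convs) for the channel-attention branch.
        self.mlp = nn.Sequential(
            nn.Conv2d(channels, channels // reduction, 1, bias=False),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1, bias=False))
        # 7x7 conv over stacked avg/max channel maps for spatial attention.
        self.spatial = nn.Conv2d(2, 1, kernel_size=7, padding=3, bias=False)

    def forward(self, x):
        # Channel attention from average- and max-pooled descriptors.
        ca = torch.sigmoid(self.mlp(x.mean((2, 3), keepdim=True)) +
                           self.mlp(x.amax((2, 3), keepdim=True)))
        x = x * ca
        # Spatial attention from per-pixel channel statistics.
        sa = torch.sigmoid(self.spatial(torch.cat(
            [x.mean(1, keepdim=True), x.amax(1, keepdim=True)], dim=1)))
        return x * sa

y = CBAM(64)(torch.randn(1, 64, 40, 40))
print(y.shape)  # torch.Size([1, 64, 40, 40])
```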
