Computer Vision and Artificial Intelligence in Agriculture

A special issue of Agriculture (ISSN 2077-0472). This special issue belongs to the section "Digital Agriculture".

Deadline for manuscript submissions: closed (10 July 2024) | Viewed by 11637

Special Issue Editors

School of Electrical and Information Engineering, Jiangsu University, Zhenjiang 212013, China
Interests: image processing; computer vision; deep learning; agricultural robotics; artificial intelligence

Guest Editor
School of Information Science and Engineering, Shandong Normal University, Jinan 250014, China
Interests: artificial intelligence; computer vision; smart orchard; fruit detection and segmentation; agricultural information technology and equipment

Special Issue Information

Dear Colleagues,

Rapid urbanization, population growth, climate change, and depleting natural resources have raised global food security concerns. In response, smart farming built on innovative technologies has begun to enter agricultural practice, and computer vision (CV) and artificial intelligence (AI) in particular have been gaining traction. From reducing production costs through intelligent automation to boosting productivity, CV and AI have significant potential to enhance the overall functioning of smart farming. CV- and AI-based systems are increasingly used for smart agriculture applications such as agricultural automation and robotics, the non-destructive detection of living organisms, livestock and poultry behavior recognition, crop growth monitoring, pest and disease detection, crop yield mapping, targeted spraying, smart irrigation, and nutrient management.

Therefore, this Special Issue aims to promote a deeper understanding of the major conceptual and technical challenges and to facilitate the spread of recent breakthroughs in computer vision and artificial intelligence for smart agriculture. All article types, including original research, opinions, and reviews, are welcome. Topics of interest include, but are not limited to, the following:

  • Computer vision for agricultural automation and robotics;
  • Computer vision for plant phenotyping;
  • Computer vision for greenhouses, plant factories, and vertical farms;
  • Monitoring/decision support systems for crop/livestock management;
  • IoT, big data, and data analytics for smart agriculture;
  • Edge AI applications for smart agriculture;
  • UAV-based sensing and computer vision for smart agriculture.

Dr. Bo Xu
Dr. Weikuan Jia
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to the website. Once registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers are published continuously in the journal (as soon as accepted) and listed together on the Special Issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and a short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Agriculture is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • smart agriculture
  • computer vision
  • artificial intelligence
  • image processing technology
  • deep learning
  • artificial neural network
  • machine learning
  • guidance, navigation, and control
  • autonomy, perception, and decision making
  • data analysis and decision support

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • e-Book format: Special Issues with more than 10 articles can be published as dedicated e-books, ensuring wide and rapid dissemination.

Further information on MDPI's Special Issue policies can be found here.

Published Papers (10 papers)


Research

19 pages, 7600 KiB  
Article
SPCN: An Innovative Soybean Pod Counting Network Based on HDC Strategy and Attention Mechanism
by Ximing Li, Yitao Zhuang, Jingye Li, Yue Zhang, Zhe Wang, Jiangsan Zhao, Dazhi Li and Yuefang Gao
Agriculture 2024, 14(8), 1347; https://doi.org/10.3390/agriculture14081347 - 12 Aug 2024
Viewed by 837
Abstract
Soybean pod count is a crucial aspect of soybean plant phenotyping, offering valuable reference information for breeding and planting management. Traditional manual counting methods are not only costly but also prone to errors. Existing detection-based soybean pod counting methods face challenges due to the crowded and uneven distribution of soybean pods on the plants. To tackle this issue, we propose a Soybean Pod Counting Network (SPCN) for accurate soybean pod counting. SPCN is a density map-based architecture that uses a Hybrid Dilated Convolution (HDC) strategy and an attention mechanism for feature extraction, with the Unbalanced Optimal Transport (UOT) loss function supervising density map generation. Additionally, we introduce a new diverse dataset, BeanCount-1500, comprising 24,684 images of 316 soybean varieties with various backgrounds and lighting conditions. Extensive experiments on BeanCount-1500 demonstrate the advantages of SPCN in soybean pod counting, with a Mean Absolute Error (MAE) of 4.37 and a Mean Squared Error (MSE) of 6.45, substantially outperforming the current competing method. Its excellent performance on the Renshou2021 dataset further confirms its outstanding generalization potential. Overall, the proposed method can provide technical support for the intelligent breeding and planting management of soybean, promoting the digital and precise management of agriculture in general. Full article
(This article belongs to the Special Issue Computer Vision and Artificial Intelligence in Agriculture)
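The density-map paradigm underlying SPCN reduces counting to integrating a predicted map. A minimal Python sketch (illustrative only, not the paper's code; the toy map and helper names are invented) of counting by summation and of the MAE/MSE metrics reported above:

```python
import numpy as np

def count_from_density_map(density_map):
    """A density-map count is simply the integral (sum) of the map."""
    return float(np.sum(density_map))

def mae_mse(predicted_counts, true_counts):
    """Per-image counting errors aggregated into MAE and MSE."""
    pred = np.asarray(predicted_counts, dtype=float)
    true = np.asarray(true_counts, dtype=float)
    errors = pred - true
    mae = float(np.mean(np.abs(errors)))
    # Counting benchmarks conventionally report the root of the mean squared error as "MSE".
    mse = float(np.sqrt(np.mean(errors ** 2)))
    return mae, mse

# Toy example: a map with three unit-mass blobs counts three pods.
density = np.zeros((8, 8))
density[1, 1] = density[4, 5] = density[6, 2] = 1.0
assert count_from_density_map(density) == 3.0
```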

29 pages, 27671 KiB  
Article
Prediction of Feed Quantity for Wheat Combine Harvester Based on Improved YOLOv5s and Weight of Single Wheat Plant without Stubble
by Qian Zhang, Qingshan Chen, Wenjie Xu, Lizhang Xu and En Lu
Agriculture 2024, 14(8), 1251; https://doi.org/10.3390/agriculture14081251 - 29 Jul 2024
Cited by 1 | Viewed by 785
Abstract
In complex field environments, wheat grows densely with overlapping organs and varying plant weights. It is difficult to accurately predict the feed quantity for a wheat combine harvester using the existing YOLOv5s and a uniform single-plant weight across a whole field. This paper proposes a feed quantity prediction method based on an improved YOLOv5s and the weight of a single wheat plant without stubble. The improved YOLOv5s optimizes the Backbone with compact bases to enhance wheat spike detection and reduce computational redundancy. The Neck incorporates a hierarchical residual module to enhance YOLOv5s’ representation of multi-scale features. The Head enhances the detection accuracy of small, dense wheat spikes in a large field of view. In addition, the height of a single wheat plant without stubble is estimated from the depth distribution of the wheat spike region and the stubble height. The relationship model between the height and weight of a single wheat plant without stubble is fitted by experiments. Feed quantity can then be predicted from the single-plant weight estimated by the relationship model and the number of wheat plants detected by the improved YOLOv5s. The proposed method was verified through experiments with the 4LZ-6A combine harvester. Compared with the existing YOLOv5s, YOLOv7, SSD, Faster R-CNN, and the other enhancements in this paper, the improved YOLOv5s increased the mAP50 of wheat spike detection by over 6.8%. It achieved an average relative error of 4.19% with a prediction time of 1.34 s. The proposed method can accurately and rapidly predict the feed quantity for wheat combine harvesters and further enable closed-loop control of intelligent harvesting operations. Full article
(This article belongs to the Special Issue Computer Vision and Artificial Intelligence in Agriculture)
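The final prediction step described above combines a fitted height–weight relationship with the detected plant count. A hedged sketch, assuming a simple linear fit (the paper fits its own experimentally determined model; all calibration numbers below are invented):

```python
import numpy as np

# Hypothetical calibration data: stubble-free plant height (cm) vs. weight (g).
heights = np.array([55.0, 60.0, 65.0, 70.0, 75.0])
weights = np.array([9.8, 11.1, 12.0, 13.2, 14.1])

# Fit the height-weight relationship (a linear form is assumed here for illustration).
slope, intercept = np.polyfit(heights, weights, deg=1)

def predict_feed_quantity(n_plants_detected, mean_height_cm):
    """Feed quantity = plants detected by the spike detector x per-plant weight (g)."""
    per_plant_weight = slope * mean_height_cm + intercept
    return n_plants_detected * per_plant_weight

# e.g. 500 detected plants at an average stubble-free height of 68 cm
total_g = predict_feed_quantity(500, 68.0)
```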

16 pages, 115251 KiB  
Article
RpTrack: Robust Pig Tracking with Irregular Movement Processing and Behavioral Statistics
by Shuqin Tu, Hua Lei, Yun Liang, Enli Lyu and Hongxing Liu
Agriculture 2024, 14(7), 1158; https://doi.org/10.3390/agriculture14071158 - 16 Jul 2024
Viewed by 767
Abstract
Pig behavioral analysis based on the multi-object tracking (MOT) of surveillance videos is vital for precision livestock farming. To address the challenges posed by uneven lighting and irregular pig movements in the MOT task, we propose a pig MOT method named RpTrack. Firstly, RpTrack addresses tracking losses caused by irregular pig movements by using an appropriate Kalman Filter and improved trajectory management. Then, RpTrack utilizes BIoU for the second matching strategy to alleviate the influence of missed detections on tracking performance. Finally, the method post-processes the tracking results to generate behavioral statistics and activity trajectories for each pig. Experiments under uneven lighting and irregular pig movements show that RpTrack significantly outperforms four other state-of-the-art MOT methods, including SORT, OC-SORT, ByteTrack, and Bot-SORT, on both public and private datasets, combining the best tracking performance with high-speed processing. In conclusion, RpTrack effectively addresses the challenges of uneven scene lighting and irregular pig movements, enabling accurate pig tracking and the monitoring of behaviors such as eating, standing, and lying. This research supports the advancement and application of intelligent pig farming. Full article
(This article belongs to the Special Issue Computer Vision and Artificial Intelligence in Agriculture)
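The BIoU second-matching strategy mentioned above can be illustrated by computing IoU on buffered boxes, so a track and a detection separated by a sudden movement can still be associated. A sketch under that common definition of BIoU (the buffer ratio is an invented example value, not the paper's setting):

```python
def expand_box(box, b=0.3):
    """Expand (x1, y1, x2, y2) by a buffer ratio b of its width and height."""
    x1, y1, x2, y2 = box
    dw, dh = b * (x2 - x1), b * (y2 - y1)
    return (x1 - dw, y1 - dh, x2 + dw, y2 + dh)

def iou(a, b):
    """Plain intersection-over-union of two axis-aligned boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    return inter / (area(a) + area(b) - inter)

def biou(a, b, buffer=0.3):
    """IoU computed on buffered boxes: tolerant of sudden, irregular movement."""
    return iou(expand_box(a, buffer), expand_box(b, buffer))

# Two boxes that just miss each other: IoU = 0, but BIoU can still match them.
track, detection = (0, 0, 10, 10), (11, 0, 21, 10)
assert iou(track, detection) == 0.0
assert biou(track, detection) > 0.0
```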

19 pages, 5332 KiB  
Article
Open-Set Sheep Face Recognition in Multi-View Based on Li-SheepFaceNet
by Jianquan Li, Ying Yang, Gang Liu, Yuanlin Ning and Ping Song
Agriculture 2024, 14(7), 1112; https://doi.org/10.3390/agriculture14071112 - 10 Jul 2024
Viewed by 767
Abstract
Deep learning-based sheep face recognition improves the efficiency and effectiveness of individual sheep recognition and provides technical support for the development of intelligent livestock farming. However, frequent changes within the flock and variations in facial features in different views significantly affect the practical application of sheep face recognition. In this study, we proposed the Li-SheepFaceNet, a method for open-set sheep face recognition in multi-view. Specifically, we employed the Seesaw block to construct a lightweight model called SheepFaceNet, which significantly improves both performance and efficiency. To enhance the convergence and performance of low-dimensional embedded feature learning, we used Li-ArcFace as the loss function. The Li-SheepFaceNet achieves an open-set recognition accuracy of 96.13% on a self-built dataset containing 3801 multi-view face images of 212 Ujumqin sheep, which surpasses other open-set sheep face recognition methods. To evaluate the robustness and generalization of our approach, we conducted performance testing on a publicly available dataset, achieving a recognition accuracy of 93.33%. Deploying Li-SheepFaceNet on an open-set sheep face recognition system enables the rapid and accurate identification of individual sheep, thereby accelerating the development of intelligent sheep farming. Full article
(This article belongs to the Special Issue Computer Vision and Artificial Intelligence in Agriculture)

23 pages, 21902 KiB  
Article
WH-DETR: An Efficient Network Architecture for Wheat Spike Detection in Complex Backgrounds
by Zhenlin Yang, Wanhong Yang, Jizheng Yi and Rong Liu
Agriculture 2024, 14(6), 961; https://doi.org/10.3390/agriculture14060961 - 19 Jun 2024
Cited by 1 | Viewed by 1012
Abstract
Wheat spike detection is crucial for estimating wheat yields and has a significant impact on the modernization of wheat cultivation and the advancement of precision agriculture. This study explores the application of the DETR (Detection Transformer) architecture in wheat spike detection, introducing a new perspective to this task. We propose a high-precision end-to-end network named WH-DETR, which is based on an enhanced RT-DETR architecture. Initially, we employ data augmentation techniques such as image rotation, scaling, and random occlusion on the GWHD2021 dataset to improve the model’s generalization across various scenarios. A lightweight feature pyramid, GS-BiFPN, is implemented in the network’s neck section to effectively extract the multi-scale features of wheat spikes in complex environments, such as those with occlusions, overlaps, and extreme lighting conditions. Additionally, the introduction of GSConv enhances the network precision while reducing the computational costs, thereby controlling the detection speed. Furthermore, the EIoU metric is integrated into the loss function, refined to better focus on partially occluded or overlapping spikes. The testing results on the dataset demonstrate that this method achieves an Average Precision (AP) of 95.7%, surpassing current state-of-the-art object detection methods in both precision and speed. These findings confirm that our approach more closely meets the practical requirements for wheat spike detection compared to existing methods. Full article
(This article belongs to the Special Issue Computer Vision and Artificial Intelligence in Agriculture)
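The EIoU term integrated into WH-DETR's loss penalizes, beyond plain IoU, the center distance and the width/height differences, each normalized by the smallest enclosing box. A hedged standalone sketch of that standard formulation (not the paper's implementation):

```python
def eiou_loss(pred, gt):
    """EIoU loss for axis-aligned boxes (x1, y1, x2, y2): 1 - IoU plus a
    center-distance penalty and separate width/height penalties, each
    normalized by the smallest box enclosing both inputs."""
    px1, py1, px2, py2 = pred
    gx1, gy1, gx2, gy2 = gt
    # IoU term
    ix1, iy1 = max(px1, gx1), max(py1, gy1)
    ix2, iy2 = min(px2, gx2), min(py2, gy2)
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    pa = (px2 - px1) * (py2 - py1)
    ga = (gx2 - gx1) * (gy2 - gy1)
    iou = inter / (pa + ga - inter)
    # Smallest enclosing box
    cw = max(px2, gx2) - min(px1, gx1)
    ch = max(py2, gy2) - min(py1, gy1)
    # Center-distance penalty, normalized by the enclosing diagonal
    pcx, pcy = (px1 + px2) / 2, (py1 + py2) / 2
    gcx, gcy = (gx1 + gx2) / 2, (gy1 + gy2) / 2
    center_pen = ((pcx - gcx) ** 2 + (pcy - gcy) ** 2) / (cw ** 2 + ch ** 2)
    # Width/height penalties, normalized separately
    wh_pen = ((px2 - px1) - (gx2 - gx1)) ** 2 / cw ** 2 \
           + ((py2 - py1) - (gy2 - gy1)) ** 2 / ch ** 2
    return 1.0 - iou + center_pen + wh_pen

# A perfect prediction incurs zero loss; any offset increases it.
assert eiou_loss((0, 0, 10, 10), (0, 0, 10, 10)) == 0.0
assert eiou_loss((1, 0, 11, 10), (0, 0, 10, 10)) > 0.0
```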

20 pages, 36086 KiB  
Article
Generalized Focal Loss WheatNet (GFLWheatNet): Accurate Application of a Wheat Ear Detection Model in Field Yield Prediction
by Yujie Guan, Jiaqi Pan, Qingqi Fan, Liangliang Yang, Li Xu and Weikuan Jia
Agriculture 2024, 14(6), 899; https://doi.org/10.3390/agriculture14060899 - 6 Jun 2024
Cited by 1 | Viewed by 967
Abstract
Wheat ear counting is crucial for calculating wheat phenotypic parameters and scientifically managing fields, which is essential for estimating wheat field yield. In wheat fields, detecting wheat ears can be challenging due to factors such as changes in illumination, wheat ear growth posture, and the appearance color of wheat ears. To improve the accuracy and efficiency of wheat ear detection and meet the demands of intelligent yield estimation, this study proposes an efficient model, Generalized Focal Loss WheatNet (GFLWheatNet), for wheat ear detection. This model precisely counts small, dense, and overlapping wheat ears. Firstly, in the feature extraction stage, we discarded the C4 feature layer of the ResNet50 and added the Convolutional block attention module (CBAM) to this location. This step maintains strong feature extraction capabilities while reducing redundant feature information. Secondly, in the reinforcement layer, we designed a skip connection module to replace the multi-scale feature fusion network, expanding the receptive field to adapt to various scales of wheat ears. Thirdly, leveraging the concept of distribution-guided localization, we constructed a detection head network to address the challenge of low accuracy in detecting dense and overlapping targets. Validation on the publicly available Global Wheat Head Detection dataset (GWHD-2021) demonstrates that GFLWheatNet achieves detection accuracies of 43.3% and 93.7% in terms of mean Average Precision (mAP) and AP50 (Intersection over Union (IOU) = 0.5), respectively. Compared to other models, it exhibits strong performance in terms of detection accuracy and efficiency. This model can serve as a reference for intelligent wheat ear counting during wheat yield estimation and provide theoretical insights for the detection of ears in other grain crops. Full article
(This article belongs to the Special Issue Computer Vision and Artificial Intelligence in Agriculture)

25 pages, 9459 KiB  
Article
BerryNet-Lite: A Lightweight Convolutional Neural Network for Strawberry Disease Identification
by Jianping Wang, Zhiyu Li, Guohong Gao, Yan Wang, Chenping Zhao, Haofan Bai, Yingying Lv, Xueyan Zhang and Qian Li
Agriculture 2024, 14(5), 665; https://doi.org/10.3390/agriculture14050665 - 25 Apr 2024
Cited by 2 | Viewed by 1294
Abstract
With the rapid advancements in computer vision, using deep learning for strawberry disease recognition has emerged as a new trend. However, traditional identification methods heavily rely on manual discernment, consuming valuable time and imposing significant financial losses on growers. To address these challenges, this paper presents BerryNet-Lite, a lightweight network designed for precise strawberry disease identification. First, a comprehensive dataset, encompassing various strawberry diseases at different maturity levels, is curated. Second, BerryNet-Lite is proposed, utilizing transfer learning to expedite convergence through pre-training on extensive datasets. Subsequently, we introduce expansion convolution into the receptive field expansion, promoting more robust feature extraction and ensuring accurate recognition. Furthermore, we adopt the efficient channel attention (ECA) as the attention mechanism module. Additionally, we incorporate a multilayer perceptron (MLP) module to enhance the generalization capability and better capture the abstract features. Finally, we present a novel classification head design approach which effectively combines the ECA and MLP modules. Experimental results demonstrate that BerryNet-Lite achieves an impressive accuracy of 99.45%. Compared to classic networks like ResNet34, VGG16, and AlexNet, BerryNet-Lite showcases superiority across metrics, including loss value, accuracy, precision, F1-score, and parameters. It holds significant promise for applications in strawberry disease identification. Full article
(This article belongs to the Special Issue Computer Vision and Artificial Intelligence in Agriculture)
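The ECA module adopted here replaces the fully connected layers of squeeze-and-excitation with a cheap 1-D convolution across per-channel descriptors. A NumPy sketch of the idea (the kernel weights below are placeholders for parameters that would be learned in training):

```python
import numpy as np

def eca_attention(feature_map, conv_weights):
    """Efficient Channel Attention: global average pool per channel, a 1-D
    convolution across neighboring channel descriptors, then a sigmoid gate.
    feature_map: (C, H, W); conv_weights: 1-D kernel of odd length."""
    c = feature_map.shape[0]
    gap = feature_map.mean(axis=(1, 2))                       # (C,) channel descriptors
    k = len(conv_weights)
    padded = np.pad(gap, k // 2, mode="edge")
    conv = np.array([padded[i:i + k] @ conv_weights for i in range(c)])
    gate = 1.0 / (1.0 + np.exp(-conv))                        # sigmoid, in (0, 1)
    return feature_map * gate[:, None, None]                  # rescale each channel

x = np.random.rand(8, 4, 4)
out = eca_attention(x, conv_weights=np.array([0.2, 0.6, 0.2]))  # placeholder kernel
assert out.shape == x.shape
```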

22 pages, 10436 KiB  
Article
A Lightweight Deep Learning Semantic Segmentation Model for Optical-Image-Based Post-Harvest Fruit Ripeness Analysis of Sugar Apples (Annona squamosa)
by Zewen Xie, Zhenyu Ke, Kuigeng Chen, Yinglin Wang, Yadong Tang and Wenlong Wang
Agriculture 2024, 14(4), 591; https://doi.org/10.3390/agriculture14040591 - 8 Apr 2024
Cited by 2 | Viewed by 1382
Abstract
The sugar apple (Annona squamosa) is valued for its taste, nutritional richness, and versatility, making it suitable for fresh consumption and medicinal use, with significant commercial potential. Widely grown in the tropical Americas and in tropical and subtropical Asia, it faces challenges in post-harvest ripeness assessment, which predominantly relies on manual inspection, leading to inefficiency and high labor costs. This paper explores the application of computer vision techniques to detecting the ripeness levels of harvested sugar apples and proposes an improved deep learning model (ECD-DeepLabv3+) specifically designed for ripeness detection tasks. Firstly, the proposed model adopts a lightweight backbone (MobileNetV2), reducing complexity while maintaining performance through MobileNetV2's unique design. Secondly, it incorporates the efficient channel attention (ECA) module to enhance focus on the input image and capture crucial feature information. Additionally, a Dense ASPP module is introduced, which enhances the model's perceptual ability and expands the receptive field by stacking feature maps processed with different dilation rates. Lastly, the proposed model emphasizes the spatial information of sugar apples at different ripeness levels through the coordinate attention (CA) module. Model performance is validated on a self-built dataset of optical images of harvested fruit categorized into three ripeness levels. The proposed model (ECD-DeepLabv3+) achieves 89.95% MIoU, 94.58% MPA, 96.60% PA, and 94.61% MF1. Compared to the original DeepLabv3+, it greatly reduces the number of model parameters (Params) and floating-point operations (FLOPs), by 89.20% and 69.09%, respectively. Moreover, the proposed method can be applied directly to optical images of the sugar apple's surface, providing a potential solution for the detection of post-harvest fruit ripeness. Full article
(This article belongs to the Special Issue Computer Vision and Artificial Intelligence in Agriculture)
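The MIoU, MPA, and PA figures reported above are standard confusion-matrix metrics for semantic segmentation. A small NumPy sketch of how they are computed (toy data, not the paper's):

```python
import numpy as np

def segmentation_metrics(pred, target, num_classes):
    """PA, MPA, and MIoU from a pixel-level confusion matrix.
    pred, target: integer class maps of identical shape; every class in
    [0, num_classes) is assumed to appear in target."""
    cm = np.zeros((num_classes, num_classes), dtype=np.int64)
    for t, p in zip(target.ravel(), pred.ravel()):
        cm[t, p] += 1
    tp = np.diag(cm).astype(float)
    pa = tp.sum() / cm.sum()                                     # Pixel Accuracy
    mpa = np.mean(tp / cm.sum(axis=1))                           # Mean Pixel Accuracy
    miou = np.mean(tp / (cm.sum(axis=1) + cm.sum(axis=0) - tp))  # Mean IoU
    return pa, mpa, miou

# Toy 2-class example: one mislabeled pixel out of four.
target = np.array([[0, 0], [1, 1]])
pred = np.array([[0, 1], [1, 1]])
pa, mpa, miou = segmentation_metrics(pred, target, num_classes=2)
assert pa == 0.75
```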

15 pages, 5353 KiB  
Article
The Detection of Ear Tag Dropout in Breeding Pigs Using a Fused Attention Mechanism in a Complex Environment
by Fang Wang, Xueliang Fu, Weijun Duan, Buyu Wang and Honghui Li
Agriculture 2024, 14(4), 530; https://doi.org/10.3390/agriculture14040530 - 27 Mar 2024
Viewed by 1052
Abstract
The utilization of ear tags for identifying breeding pigs is a widely used technique in animal production. Ear tag dropout can lead to the loss of pig identity information, resulting in missing data and ambiguity in production management and genetic breeding data. Therefore, the identification of ear tag dropout is crucial for intelligent breeding on pig farms. In the production environment, promptly detecting breeding pigs with missing ear tags is challenging due to clustering overlap, small tag targets, and uneven sample distributions. This study proposes a method for detecting the dropout of breeding pigs’ ear tags in a complex environment by integrating an attention mechanism. Firstly, the approach involves designing a lightweight feature extraction module called IRDSC using depthwise separable convolution and an inverted residual structure; secondly, the SENet channel attention mechanism is integrated to enhance deep semantic features; and finally, the IRDSC and SENet modules are incorporated into the backbone network of Cascade Mask R-CNN and the loss function is optimized with Focal Loss. The proposed algorithm, Cascade-TagLossDetector, achieves an accuracy of 90.02% in detecting ear tag dropout in breeding pigs, with a detection speed of 25.33 frames per second (fps), representing a 2.95% improvement in accuracy and a 3.69 fps increase in speed compared to the previous method. The model size is reduced to 443.03 MB, a decrease of 72.90 MB, enabling real-time and accurate dropout detection while minimizing storage requirements and providing technical support for the intelligent breeding of pigs. Full article
(This article belongs to the Special Issue Computer Vision and Artificial Intelligence in Agriculture)

18 pages, 3554 KiB  
Article
Diagnosis of Cotton Nitrogen Nutrient Levels Using Ensemble MobileNetV2FC, ResNet101FC, and DenseNet121FC
by Peipei Chen, Jianguo Dai, Guoshun Zhang, Wenqing Hou, Zhengyang Mu and Yujuan Cao
Agriculture 2024, 14(4), 525; https://doi.org/10.3390/agriculture14040525 - 26 Mar 2024
Cited by 2 | Viewed by 1146
Abstract
Nitrogen plays a crucial role in cotton growth, making the precise diagnosis of its nutrition levels vital for the scientific and rational application of fertilizers. Addressing this need, our study introduced an EMRDFC-based diagnosis model specifically for cotton nitrogen nutrition levels. In our field experiments, cotton was subjected to five different nitrogen application rates. To enhance the diagnostic capabilities of our model, we employed ResNet101, MobileNetV2, and DenseNet121 as base models and integrated the CBAM (Convolutional Block Attention Module) into each to improve their ability to differentiate among various nitrogen levels. Additionally, the Focal loss function was introduced to address issues of data imbalance. The model’s effectiveness was further augmented by employing integration strategies such as relative majority voting, simple averaging, and weighted averaging. Our experimental results indicated significant accuracy improvements in the enhanced ResNet101, MobileNetV2, and DenseNet121 models by 2.3%, 2.91%, and 2.93%, respectively. Notably, the integration of these models consistently improved accuracy, with gains of 0.87% and 1.73% compared to the highest-performing single model, DenseNet121FC. The optimal ensemble model, which utilized the weighted average method, demonstrated superior learning and generalization capabilities. The proposed EMRDFC model shows great promise in precisely identifying cotton nitrogen status, offering critical insights into the diagnosis of crop nutrient status. This research contributes significantly to the field of agricultural technology by providing a reliable tool for nitrogen-level assessment in cotton cultivation. Full article
(This article belongs to the Special Issue Computer Vision and Artificial Intelligence in Agriculture)
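The integration strategies mentioned above, relative majority voting and weighted averaging, can be sketched directly; the per-model probabilities and weights below are invented for illustration:

```python
import numpy as np

def relative_majority_vote(predictions):
    """Each model casts one class vote; the most frequent class wins."""
    values, counts = np.unique(predictions, return_counts=True)
    return values[np.argmax(counts)]

def weighted_average_probs(prob_rows, weights):
    """Weighted average of per-model class-probability vectors."""
    prob_rows = np.asarray(prob_rows, dtype=float)
    weights = np.asarray(weights, dtype=float)
    weights = weights / weights.sum()          # normalize model weights
    return weights @ prob_rows                 # fused probability vector

# Three hypothetical models scoring five nitrogen levels:
probs = [
    [0.1, 0.6, 0.1, 0.1, 0.1],   # placeholder for "ResNet101FC"
    [0.2, 0.5, 0.1, 0.1, 0.1],   # placeholder for "MobileNetV2FC"
    [0.1, 0.2, 0.5, 0.1, 0.1],   # placeholder for "DenseNet121FC"
]
votes = [int(np.argmax(p)) for p in probs]          # votes: [1, 1, 2]
assert relative_majority_vote(votes) == 1
# With a large enough weight, the third model overrides the vote outcome.
fused = weighted_average_probs(probs, weights=[1.0, 1.0, 4.0])
assert int(np.argmax(fused)) == 2
```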
