Application of Vision Technology and Artificial Intelligence in Smart Farming—2nd Edition

A special issue of Agriculture (ISSN 2077-0472). This special issue belongs to the section "Digital Agriculture".

Deadline for manuscript submissions: 5 December 2024 | Viewed by 9647

Special Issue Editors

Guest Editor
College of Artificial Intelligence, Nanjing Agricultural University, Nanjing 210031, China
Interests: microclimate analytics of poultry houses; intelligent agricultural equipment; smart farming; non-destructive detection of meat quality; agricultural robot

Guest Editor
School of Applied Meteorology, Nanjing University of Information Science and Technology, Nanjing 210044, China
Interests: climate-smart agriculture; AI meteorology

Guest Editor
Faculty of Engineering and Information Technology, University of Technology Sydney, Sydney, NSW 2007, Australia
Interests: prediction model; computer simulation

Guest Editor
College of Artificial Intelligence, Nanjing Agricultural University, Nanjing 210031, China
Interests: intelligent agricultural equipment; three-dimensional reconstruction

Guest Editor
College of Engineering, Nanjing Agricultural University, Nanjing 210031, China
Interests: intelligent agricultural equipment; disease detection

Special Issue Information

Dear Colleagues,

Computer vision (CV) and artificial intelligence (AI) have been gaining traction in agriculture. From reducing production costs through intelligent automation to boosting productivity, CV and AI have massive potential to enhance the overall functioning of smart farming. Monitoring and analyzing the specific behaviors of livestock and poultry in large-scale farms based on CV and AI improves our knowledge of intensively raised animals' behaviors in relation to modern management techniques, allowing for improved health, welfare, and performance. In the field of planting, CV approaches are required to extract plant phenotypes from images and to automate the detection of plants and plant organs. AI approaches give growers tools to combat pests. Smart farming requires considerable processing power, and the application of CV and AI helps growers bring crops to the ideal stage of ripeness.

Based on the first volume, a Special Issue focused on the application of CV and AI in smart farming, we have decided to continue with a second volume. Topics of interest include, but are not limited to, the following:

  • the design and optimization of agricultural sensors;
  • behavior recognition of livestock and poultry based on vision technology and deep learning;
  • automation technology in agricultural equipment based on vision technology;
  • the design and optimization of robots for livestock and poultry breeding based on vision technology and artificial intelligence;
  • the non-destructive detection of meat quality;
  • agricultural big data analytics based on sensor data and deep learning.

Both original research articles and reviews are accepted.

Dr. Xiuguo Zou
Dr. Xiaochen Zhu
Dr. Wentian Zhang
Dr. Yan Qian
Dr. Yuhua Li
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Agriculture is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • agricultural sensors
  • behavior recognition of livestock and poultry
  • agricultural automation equipment
  • agricultural intelligent robot
  • intelligent robotic arm
  • non-destructive detection of meat quality
  • agricultural big data analytics

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • e-Book format: Special Issues with more than 10 articles can be published as dedicated e-books, ensuring wide and rapid dissemination.

Further information on MDPI's Special Issue policies can be found here.

Published Papers (8 papers)


Research

19 pages, 12366 KiB  
Article
An Effective Yak Behavior Classification Model with Improved YOLO-Pose Network Using Yak Skeleton Key Points Images
by Yuxiang Yang, Yifan Deng, Jiazhou Li, Meiqi Liu, Yao Yao, Zhaoyuan Peng, Luhui Gu and Yingqi Peng
Agriculture 2024, 14(10), 1796; https://doi.org/10.3390/agriculture14101796 - 12 Oct 2024
Viewed by 616
Abstract
Yak behavior is a valuable indicator of welfare and health. Information about important statuses, including fattening, reproductive health, and diseases, can be reflected and monitored through several indicative behavior patterns. In this study, an improved YOLOv7-pose model was developed to detect six yak behavior patterns in real time using labeled yak key-point images. The model was trained on labeled key-point image data of six behavior patterns, namely walking, feeding, standing, lying, mounting, and eliminative behaviors, collected from seventeen 18-month-old yaks over two weeks. Four other YOLOv7-pose series models were trained as comparison methods for yak behavior pattern detection. The improved YOLOv7-pose model achieved the best detection performance, with precision, recall, mAP0.5, and mAP0.5:0.95 of 89.9%, 87.7%, 90.4%, and 76.7%, respectively. A limitation of this study is that the model detected behaviors with relatively lower precision under complex conditions, such as scene variation, subtle leg postures, and varying light conditions. Future work will increase the sample size of the dataset and utilize data streams such as optical and video streams for real-time yak monitoring. Additionally, the model will be deployed on edge computing devices for large-scale agricultural applications.
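The paper couples pose estimation with behavior classification. As a hedged illustration of the final step only, the sketch below maps detected skeleton key points to one of the six behavior labels with a small PyTorch head; the key-point count (17) and layer sizes are assumptions, not the authors' architecture.

```python
import torch
import torch.nn as nn

BEHAVIORS = ["walking", "feeding", "standing", "lying", "mounting", "eliminative"]

class KeypointBehaviorHead(nn.Module):
    """Hypothetical head mapping K pose key points (x, y, confidence)
    from a YOLO-pose detector to one of six yak behavior labels."""
    def __init__(self, num_keypoints: int = 17, num_behaviors: int = len(BEHAVIORS)):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(num_keypoints * 3, 128),
            nn.ReLU(),
            nn.Linear(128, num_behaviors),
        )

    def forward(self, keypoints: torch.Tensor) -> torch.Tensor:
        # keypoints: (batch, K, 3) -> flatten to (batch, K * 3)
        return self.mlp(keypoints.flatten(start_dim=1))

# Usage: classify one detection with 17 assumed key points.
head = KeypointBehaviorHead()
logits = head(torch.randn(1, 17, 3))
print(BEHAVIORS[logits.argmax(dim=1).item()])
```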

21 pages, 12493 KiB  
Article
Chinese Bayberry Detection in an Orchard Environment Based on an Improved YOLOv7-Tiny Model
by Zhenlei Chen, Mengbo Qian, Xiaobin Zhang and Jianxi Zhu
Agriculture 2024, 14(10), 1725; https://doi.org/10.3390/agriculture14101725 - 1 Oct 2024
Viewed by 537
Abstract
The precise detection of Chinese bayberry locations using object detection technology is a crucial step toward unmanned harvesting of these berries. Because of the small size and frequent occlusion of bayberry fruit, existing detection algorithms have low recognition accuracy for such objects. To realize the fast and accurate recognition of bayberry on fruit trees, and thereby guide a robotic arm to harvest the fruit accurately, this paper proposes a detection algorithm based on an improved YOLOv7-tiny model. The model introduces partial convolution (PConv), a SimAM attention mechanism, and SIoU into YOLOv7-tiny, which improves the model's feature extraction capability for the target without adding extra parameters. Experimental results on a self-built Chinese bayberry dataset demonstrate that the improved algorithm achieved a recall rate of 97.6% with a model size of only 9.0 MB. Meanwhile, the precision of the improved model is 88.1%, which is 26%, 2.7%, 4.7%, 6.5%, and 4.7% higher than that of Faster R-CNN, YOLOv3-tiny, YOLOv5-m, YOLOv6-n, and YOLOv7-tiny, respectively. In addition, the proposed model was tested against the five models mentioned above under natural conditions, and the results showed that it more effectively reduces the rates of misdetections and omissions in bayberry recognition. Finally, the improved algorithm was deployed on a mobile harvesting robot for field harvesting experiments, further verifying its practicability.
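SimAM, one of the three modifications named above, is a published parameter-free attention mechanism with a closed-form energy function. Below is a standard reference-style PyTorch implementation of SimAM, not the authors' exact code.

```python
import torch
import torch.nn as nn

class SimAM(nn.Module):
    """Parameter-free SimAM attention: weight each neuron by the inverse
    of its energy, so positions that deviate from the channel mean get
    higher attention. Adds no learnable parameters."""
    def __init__(self, e_lambda: float = 1e-4):
        super().__init__()
        self.e_lambda = e_lambda

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, channels, height, width)
        n = x.shape[2] * x.shape[3] - 1
        # Squared deviation of each position from its channel mean.
        d = (x - x.mean(dim=[2, 3], keepdim=True)).pow(2)
        # Channel-wise variance estimate.
        v = d.sum(dim=[2, 3], keepdim=True) / n
        # Inverse energy: more distinctive neurons receive larger weights.
        e_inv = d / (4 * (v + self.e_lambda)) + 0.5
        return x * torch.sigmoid(e_inv)

# Usage on a feature map.
out = SimAM()(torch.randn(1, 64, 40, 40))
print(out.shape)
```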

19 pages, 17349 KiB  
Article
Research on an Identification and Grasping Device for Dead Yellow-Feather Broilers in Flat Houses Based on Deep Learning
by Chengrui Xin, Hengtai Li, Yuhua Li, Meihui Wang, Weihan Lin, Shuchen Wang, Wentian Zhang, Maohua Xiao and Xiuguo Zou
Agriculture 2024, 14(9), 1614; https://doi.org/10.3390/agriculture14091614 - 14 Sep 2024
Viewed by 491
Abstract
The presence of dead broilers in flat broiler houses poses significant challenges to large-scale and welfare-oriented broiler breeding. To ensure the timely identification and removal of dead broilers, a mobile device based on vision technology for grasping them was designed in this study. Among the multiple recognition models explored, the YOLOv6 model was selected for its performance, attaining 86.1% identification accuracy. This model, integrated with a specially designed robotic arm, forms an effective combination for grasping dead broilers. Extensive experiments were conducted to validate the efficacy of the device. The results reveal that the device achieved an average grasping rate of dead broilers of 81.3%. These findings indicate that the proposed device holds great potential for practical field deployment, offering a reliable solution for the prompt identification and grasping of dead broilers and thereby enhancing the overall management and welfare of broiler populations.
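As a rough illustration of how detection and grasping interlock in such a device, here is a minimal detect-then-grasp cycle. The detector stub, arm-control function, and confidence threshold are hypothetical stand-ins for the paper's YOLOv6 model and custom robotic arm.

```python
# Hedged sketch of one patrol cycle: detect dead broilers in a frame,
# then command the arm to each confident detection.
from dataclasses import dataclass

@dataclass
class Detection:
    label: str
    confidence: float
    cx: float  # box centre, image coordinates
    cy: float

def detect_dead_broilers(frame) -> list[Detection]:
    """Placeholder for YOLOv6 inference on one camera frame."""
    return [Detection("dead_broiler", 0.92, 310.0, 240.0)]

def grasp_at(cx: float, cy: float) -> bool:
    """Placeholder for the arm controller; True on a successful grasp."""
    print(f"moving gripper to pixel ({cx:.0f}, {cy:.0f})")
    return True

def patrol_step(frame, conf_threshold: float = 0.5) -> int:
    """Run one detect-and-grasp cycle; returns grasps attempted."""
    grasps = 0
    for det in detect_dead_broilers(frame):
        if det.label == "dead_broiler" and det.confidence >= conf_threshold:
            grasp_at(det.cx, det.cy)
            grasps += 1
    return grasps

print(patrol_step(frame=None))
```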

21 pages, 3754 KiB  
Article
YOLOv8-Pearpollen: Method for the Lightweight Identification of Pollen Germination Vigor in Pear Trees
by Weili Sun, Cairong Chen, Tengfei Liu, Haoyu Jiang, Luxu Tian, Xiuqing Fu, Mingxu Niu, Shihao Huang and Fei Hu
Agriculture 2024, 14(8), 1348; https://doi.org/10.3390/agriculture14081348 - 12 Aug 2024
Viewed by 712
Abstract
Pear trees must be artificially pollinated to ensure yield, and the efficiency of pollination and the quality of pollen germination affect the size, shape, taste, and nutritional value of the fruit. Detecting the pollen germination vigor of pear trees is therefore important for improving the efficiency of artificial pollination and, consequently, the fruiting rate of pear trees. To overcome the limitations of traditional manual detection methods, such as low efficiency, low accuracy, and high cost, and to meet the requirement of screening high-quality pollen to promote the yield and production of fruit trees, we propose a detection method for pear pollen germination vigor named YOLOv8-Pearpollen, an improved version of YOLOv8-n. A pear pollen germination dataset was constructed, and the images were augmented using Blend Alpha to improve the robustness of the data. A combination of knowledge distillation and model pruning was used to reduce the complexity of the model and the difficulty of deployment on hardware while ensuring that the model matched or approached the detection performance of a large model, adapting to the actual requirements of agricultural production. Various ablation tests on knowledge distillation and model pruning were conducted to obtain a high-quality lightweighting method suitable for this model. Test results showed that the mAP of YOLOv8-Pearpollen reached 96.7%. The Params, FLOPs, and weights were only 1.5 M, 4.0 G, and 3.1 MB, respectively, and the detection speed was 147.1 FPS. A high degree of lightweighting and superior detection accuracy were achieved simultaneously.
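The knowledge-distillation component can be illustrated with the classic soft-label objective. Below is a generic logit-distillation loss in PyTorch; the temperature and the two-class setup (germinated vs. not germinated) are assumptions, not the paper's exact configuration.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits: torch.Tensor,
                      teacher_logits: torch.Tensor,
                      temperature: float = 4.0) -> torch.Tensor:
    """Generic logit distillation: soften both distributions with a
    temperature and penalise their KL divergence, so a small student
    mimics a larger teacher."""
    t = temperature
    soft_teacher = F.softmax(teacher_logits / t, dim=-1)
    log_soft_student = F.log_softmax(student_logits / t, dim=-1)
    # t**2 rescales gradients to match the hard-label loss magnitude.
    return F.kl_div(log_soft_student, soft_teacher,
                    reduction="batchmean") * t**2

# Usage with dummy logits for an assumed two-class task.
s, te = torch.randn(8, 2), torch.randn(8, 2)
print(distillation_loss(s, te).item())
```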

17 pages, 5936 KiB  
Article
Evaluation of the Habitat Suitability for Zhuji Torreya Based on Machine Learning Algorithms
by Liangjun Wu, Lihui Yang, Yabin Li, Jian Shi, Xiaochen Zhu and Yan Zeng
Agriculture 2024, 14(7), 1077; https://doi.org/10.3390/agriculture14071077 - 4 Jul 2024
Viewed by 877
Abstract
Torreya, with its dual roles in food and medicine, has faced multiple cultivation challenges in Zhuji city due to frequent climate disasters in recent years. Conducting a study on suitable zoning for Torreya habitats based on climatic, topographic, and soil factors is therefore highly important. In this study, we utilized the latitude and longitude coordinates of Torreya distribution points and ecological factor raster data. We thoroughly analyzed the ecological environmental characteristics of the climate, topography, and soil at Torreya distribution points via both physical modeling and machine learning methods. Zhuji city was classified into suitable, moderately suitable, and unsuitable zones to determine regions conducive to Torreya growth. The results indicate that suitable zones for Torreya cultivation in Zhuji city are distributed mainly in mountainous and hilly areas, while unsuitable zones are found predominantly in central basins and northern river plain networks. Moderately suitable zones lie in transitional areas between the two. Compared with climatic factors, soil and topographic factors more significantly restrict Torreya cultivation. Machine learning algorithms can also achieve suitability zoning with a more concise and efficient classification process. In this study, the random forest (RF) algorithm demonstrated greater predictive accuracy than the support vector machine (SVM) and naive Bayes (NB) algorithms, achieving the best classification results.
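The machine-learning side of such a study reduces to a three-class supervised problem. A minimal scikit-learn sketch, assuming synthetic stand-ins for the climatic, topographic, and soil features, looks like this; feature_importances_ is the usual handle for comparing how strongly each factor constrains suitability.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
# Assumed columns: e.g. annual mean temperature, elevation, soil pH.
X = rng.normal(size=(300, 3))
# 0: unsuitable, 1: moderately suitable, 2: suitable (synthetic labels).
y = rng.integers(0, 3, size=300)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print("accuracy:", accuracy_score(y_te, rf.predict(X_te)))
# Relative weight of each ecological factor in the fitted forest.
print("importances:", rf.feature_importances_)
```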

19 pages, 22454 KiB  
Article
Walnut Recognition Method for UAV Remote Sensing Images
by Mingjie Wu, Lijun Yun, Chen Xue, Zaiqing Chen and Yuelong Xia
Agriculture 2024, 14(4), 646; https://doi.org/10.3390/agriculture14040646 - 22 Apr 2024
Cited by 2 | Viewed by 1430
Abstract
During walnut identification and counting using UAVs in hilly areas, complex lighting conditions on the surface of walnuts affect the detection effectiveness of deep learning models. To address this issue, we propose a lightweight walnut small-object recognition method called w-YOLO. We reconstructed the feature extraction and feature fusion networks of the model to reduce its volume and complexity. Additionally, to improve recognition accuracy for walnut objects under complex lighting conditions, we adopted an attention-mechanism detection layer and redesigned a set of detection heads better suited to small walnut objects. A series of experiments showed that, when identifying walnut objects in UAV remote sensing images, w-YOLO outperforms other mainstream object detection models, achieving a mean average precision (mAP0.5) of 97% and an F1-score of 92%, with parameters reduced by 52.3% compared to the YOLOv8s model. The method effectively addresses the identification of walnut targets in Yunnan, China, under complex lighting conditions.
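The reported 52.3% parameter reduction is straightforward to verify for any PyTorch model. The two Sequential modules below are illustrative stand-ins, not w-YOLO or YOLOv8s.

```python
import torch.nn as nn

def count_params(model: nn.Module) -> int:
    """Total number of parameters, as reported in model-size comparisons."""
    return sum(p.numel() for p in model.parameters())

# Stand-in baseline and slimmed models.
baseline = nn.Sequential(nn.Conv2d(3, 64, 3), nn.Conv2d(64, 128, 3))
slim = nn.Sequential(nn.Conv2d(3, 32, 3), nn.Conv2d(32, 64, 3))

reduction = 1 - count_params(slim) / count_params(baseline)
print(f"parameter reduction: {reduction:.1%}")
```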

16 pages, 12037 KiB  
Article
Optimizing the YOLOv7-Tiny Model with Multiple Strategies for Citrus Fruit Yield Estimation in Complex Scenarios
by Juanli Jing, Menglin Zhai, Shiqing Dou, Lin Wang, Binghai Lou, Jichi Yan and Shixin Yuan
Agriculture 2024, 14(2), 303; https://doi.org/10.3390/agriculture14020303 - 13 Feb 2024
Cited by 3 | Viewed by 1497
Abstract
The accurate identification of citrus fruits is important for fruit yield estimation in complex citrus orchards. In this study, the YOLOv7-tiny-BVP network is constructed based on the YOLOv7-tiny network, with citrus fruits as the research object. The network introduces a BiFormer bilevel routing attention mechanism, replaces regular convolution with GSConv, adds the VoVGSCSP module to the neck network, and replaces the simplified efficient layer aggregation network (ELAN) with partial convolution (PConv) in the backbone network. The improved model significantly reduces the number of model parameters and the model inference time while maintaining a high recognition rate for citrus fruits. The results showed that the fruit recognition accuracy of the modified model was 97.9% on the test dataset. Compared with YOLOv7-tiny, the number of parameters and the size of the improved network were reduced by 38.47% and 4.6 MB, respectively. Moreover, the recognition accuracy, frames per second (FPS), and F1 score improved by 0.9%, 2.02, and 1%, respectively. The proposed network retains 97.9% accuracy even after the parameters are reduced by 38.47%, and the model size is only 7.7 MB, which provides a new idea for the development of lightweight target detection models.
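PConv, the partial convolution adopted in the backbone, comes from the FasterNet work: it convolves only a fraction of the channels and passes the rest through unchanged, cutting FLOPs and parameters. A reference-style PyTorch sketch (not the authors' code):

```python
import torch
import torch.nn as nn

class PConv(nn.Module):
    """Partial convolution: apply a conv to the first channels // n_div
    channels and concatenate the untouched remainder."""
    def __init__(self, channels: int, n_div: int = 4, kernel_size: int = 3):
        super().__init__()
        self.conv_ch = channels // n_div
        self.conv = nn.Conv2d(self.conv_ch, self.conv_ch, kernel_size,
                              padding=kernel_size // 2, bias=False)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        head, tail = torch.split(
            x, [self.conv_ch, x.shape[1] - self.conv_ch], dim=1)
        return torch.cat((self.conv(head), tail), dim=1)

# Usage: only 16 of 64 channels are convolved; spatial size is preserved.
out = PConv(64)(torch.randn(1, 64, 40, 40))
print(out.shape)
```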

15 pages, 2808 KiB  
Article
AG-YOLO: A Rapid Citrus Fruit Detection Algorithm with Global Context Fusion
by Yishen Lin, Zifan Huang, Yun Liang, Yunfan Liu and Weipeng Jiang
Agriculture 2024, 14(1), 114; https://doi.org/10.3390/agriculture14010114 - 10 Jan 2024
Cited by 8 | Viewed by 2395
Abstract
Citrus fruits hold a pivotal position within the agricultural sector. Accurate yield estimation for citrus fruits is crucial in orchard management, especially when facing fruit occlusion due to dense foliage or overlapping fruits. This study addresses the low detection accuracy and the significant missed detections of citrus fruit detection algorithms in occlusion scenarios. It introduces AG-YOLO, an attention-based network designed to fuse contextual information. Leveraging NextViT as its primary architecture, AG-YOLO captures holistic contextual information within nearby scenes. Additionally, it introduces a Global Context Fusion Module (GCFM) that facilitates the interaction and fusion of local and global features through self-attention, significantly improving the model's occluded-target detection capabilities. An independent dataset comprising over 8000 outdoor images was collected to evaluate AG-YOLO's performance. After a careful selection process, a subset of 957 images meeting the criteria for citrus-fruit occlusion scenarios was obtained, covering instances of occlusion, severe occlusion, overlap, and severe overlap. AG-YOLO demonstrated exceptional performance on this dataset, achieving a precision (P) of 90.6%, a mean average precision (mAP)@50 of 83.2%, and an mAP@50:95 of 60.3%, surpassing existing mainstream object detection methods. It runs at 34.22 frames per second (FPS) while maintaining a high level of detection accuracy, striking a commendable balance between speed and accuracy. Compared to existing models, AG-YOLO offers high localization accuracy, minimal missed detections, and swift detection speed, particularly in scenes with severe occlusion, making it an efficient and reliable solution for occluded object detection.
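The GCFM's local-global fusion can be sketched as cross-attention in which local feature tokens query a global feature map. The module below is a hedged illustration under assumed dimensions; the paper's GCFM details differ.

```python
import torch
import torch.nn as nn

class GlobalContextFusion(nn.Module):
    """Sketch of local-global fusion: local tokens attend to global
    context tokens via cross-attention, with a residual connection so
    local detail is preserved. Dimensions are assumptions."""
    def __init__(self, dim: int = 128, heads: int = 4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, local_tokens: torch.Tensor,
                global_tokens: torch.Tensor) -> torch.Tensor:
        # Local tokens query the global context.
        fused, _ = self.attn(local_tokens, global_tokens, global_tokens)
        return self.norm(local_tokens + fused)

local = torch.randn(1, 400, 128)    # e.g. 20x20 local feature tokens
context = torch.randn(1, 100, 128)  # e.g. 10x10 global context tokens
print(GlobalContextFusion()(local, context).shape)
```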
