Sensor and AI Technologies in Intelligent Agriculture: 2nd Edition

A special issue of Sensors (ISSN 1424-8220). This special issue belongs to the section "Smart Agriculture".

Deadline for manuscript submissions: 31 October 2024 | Viewed by 2063

Special Issue Editors


Prof. Dr. Jiangtao Qi
Guest Editor
College of Biological and Agricultural Engineering, Jilin University, 5988 Renmin Street, Changchun 130025, China
Interests: bionic intelligent agricultural machinery; autonomous navigation; target recognition based on visual bionics; agricultural drones; agricultural artificial intelligence; soil and plant sensing; agricultural machinery information collection and control

Dr. Gang Wang
Guest Editor
College of Biological and Agricultural Engineering, Jilin University, 5988 Renmin Street, Changchun 130025, China
Interests: agricultural machinery; conservation tillage; sensors; automation; intelligence; plant protection

Special Issue Information

Dear Colleagues,

In recent years, sensor and artificial intelligence (AI) technologies have attracted growing interest from both academia and industry and have been widely applied in intelligent agriculture. Accelerating their adoption is essential to the development of modern, smart agriculture. This Special Issue aims to showcase successful implementations of agricultural sensors and AI technologies in intelligent agricultural applications and to give researchers an opportunity to publish work on this topic. Articles addressing agricultural sensors and AI technologies applied to crop and animal production are welcome, including both original research articles and reviews. The scope of this Special Issue includes, but is not limited to, the following topics:

  • Crop sensing and sensors;
  • Animal perception and sensors;
  • Environmental information perception and sensors;
  • Agricultural equipment information collection and processing;
  • Key technologies of smart agriculture;
  • Artificial intelligence in agriculture;
  • Intelligent farm equipment;
  • Intelligent orchard equipment;
  • Intelligent garden equipment;
  • Intelligent pasture equipment;
  • Intelligent fishing-ground equipment.

Prof. Dr. Jiangtao Qi
Dr. Gang Wang
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the Special Issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Sensors is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • sensors
  • artificial intelligence
  • intelligent agriculture

Published Papers (2 papers)


Research

12 pages, 2248 KiB  
Communication
Automatic Shrimp Fry Counting Method Using Multi-Scale Attention Fusion
by Xiaohong Peng, Tianyu Zhou, Ying Zhang and Xiaopeng Zhao
Sensors 2024, 24(9), 2916; https://doi.org/10.3390/s24092916 - 2 May 2024
Viewed by 455
Abstract
Shrimp fry counting is an important task for biomass estimation in aquaculture. Accurately counting the shrimp fry in breeding tanks makes it possible to estimate both the eventual production of mature shrimp and the stocking density in the tanks, which supports subsequent growth monitoring, transportation management, and yield assessment. However, traditional manual counting methods are inefficient and prone to errors, so a more efficient and accurate counting method is urgently needed. In this paper, we first collected and labeled images of shrimp fry in breeding tanks in a constructed experimental environment and generated corresponding density maps using a Gaussian kernel function. We then proposed a multi-scale attention fusion-based shrimp fry counting network, the SFCNet. Experiments showed that the proposed SFCNet achieved the best counting performance among CNN-based baseline counting models, with an MAE of 3.96 and an RMSE of 4.682. The approach effectively counts shrimp fry and offers a practical solution for accurate shrimp fry enumeration.
(This article belongs to the Special Issue Sensor and AI Technologies in Intelligent Agriculture: 2nd Edition)
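The density-map approach the abstract describes is a standard counting technique: each point annotation is smoothed with a Gaussian kernel into a map whose integral equals the ground-truth count, a network regresses such maps from images, and counts are read off by summation. The sketch below illustrates the idea only; the function names and sigma value are assumptions, not the authors' published code.

```python
# Illustrative sketch of density-map counting (not the SFCNet implementation).
import numpy as np
from scipy.ndimage import gaussian_filter

def make_density_map(points, height, width, sigma=4.0):
    """Turn point annotations (one (y, x) per shrimp fry) into a density map.

    The Gaussian smoothing preserves total mass, so the map's sum still
    equals the annotated count. sigma=4.0 is a hypothetical choice.
    """
    density = np.zeros((height, width), dtype=np.float32)
    for y, x in points:
        if 0 <= y < height and 0 <= x < width:
            density[int(y), int(x)] += 1.0
    return gaussian_filter(density, sigma=sigma)

def count_from_density(density_map):
    """The predicted count is simply the sum over all pixels."""
    return float(density_map.sum())

def mae_rmse(pred_counts, true_counts):
    """MAE and RMSE over per-image counts, the metrics reported above."""
    diff = np.asarray(pred_counts, dtype=np.float64) - np.asarray(true_counts, dtype=np.float64)
    return float(np.mean(np.abs(diff))), float(np.sqrt(np.mean(diff ** 2)))
```

A counting network such as SFCNet would be trained to regress maps of this form from input images, with MAE and RMSE computed over the summed counts.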

19 pages, 10732 KiB  
Article
Intrarow Uncut Weed Detection Using You-Only-Look-Once Instance Segmentation for Orchard Plantations
by Rizky Mulya Sampurno, Zifu Liu, R. M. Rasika D. Abeyrathna and Tofael Ahamed
Sensors 2024, 24(3), 893; https://doi.org/10.3390/s24030893 - 30 Jan 2024
Cited by 3 | Viewed by 1354
Abstract
Mechanical weed management is a laborious task that requires manpower and carries risks when conducted within orchard rows. Intrarow weeding must still be performed by manual labor, because the confined structure of the rows, with nets and poles, restricts the movement of riding mowers. Autonomous robotic weeders also face challenges in identifying uncut weeds, since poles and tree canopies obstruct Global Navigation Satellite System (GNSS) signals. A properly designed intelligent vision system could achieve the desired outcome by enabling an autonomous weeder to operate in uncut sections. Therefore, the objective of this study was to develop a vision module, trained on a custom dataset with YOLO instance segmentation algorithms, to help autonomous robotic weeders recognize uncut weeds and obstacles (i.e., fruit tree trunks and fixed poles) within rows. The training dataset was acquired from a pear orchard at the Tsukuba Plant Innovation Research Center (T-PIRC), University of Tsukuba, Japan. In total, 5000 images were preprocessed and labeled for training and testing the YOLO models. Four edge-device-oriented YOLO instance segmentation variants (YOLOv5n-seg, YOLOv5s-seg, YOLOv8n-seg, and YOLOv8s-seg) were evaluated for real-time use on an autonomous weeder. A comparison study assessed all YOLO models in terms of detection accuracy, model complexity, and inference speed. The smaller YOLOv5- and YOLOv8-based models proved more efficient than the larger ones, and YOLOv8n-seg was selected as the vision module for the autonomous weeder. In the evaluation, YOLOv8n-seg offered better segmentation accuracy than YOLOv5n-seg, while the latter had the fastest inference time. The performance of YOLOv8n-seg was also acceptable when deployed on a resource-constrained device suitable for robotic weeders. The results indicate that the achieved detection accuracy and inference speed are adequate for object recognition on edge devices during intrarow weeding operations in orchards.
(This article belongs to the Special Issue Sensor and AI Technologies in Intelligent Agriculture: 2nd Edition)
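For readers unfamiliar with the models compared above, the following is a minimal sketch of instance-segmentation inference with the Ultralytics YOLO package. The checkpoint name and image path are placeholders: the generic pretrained "yolov8n-seg.pt" weights report COCO classes, whereas the study's custom-trained model (not reproduced here) would report classes such as uncut weed, tree trunk, and pole.

```python
# Minimal sketch of YOLOv8 segmentation inference (Ultralytics package);
# weight file and image path are placeholders, not the study's assets.
from ultralytics import YOLO

model = YOLO("yolov8n-seg.pt")      # nano segmentation variant, as selected in the paper
results = model("orchard_row.jpg")  # hypothetical example image

for r in results:
    for box in r.boxes:
        cls_id = int(box.cls)       # predicted class index
        conf = float(box.conf)      # detection confidence
        # A custom-trained model would report weed / trunk / pole classes;
        # the pretrained checkpoint reports COCO classes instead.
        print(model.names[cls_id], f"{conf:.2f}", box.xyxy[0].tolist())
    if r.masks is not None:
        print("instance masks:", len(r.masks))  # one mask per detection
```

On a resource-constrained edge device, such a nano-scale model is typically the practical choice, which matches the paper's selection of YOLOv8n-seg as the weeder's vision module.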
