Computational, AI and IT Solutions Helping Agriculture

A special issue of Agriculture (ISSN 2077-0472). This special issue belongs to the section "Digital Agriculture".

Deadline for manuscript submissions: closed (25 March 2025) | Viewed by 19246

Special Issue Editor


Guest Editor
Department of University Transfer, Faculty of Arts & Sciences, NorQuest College, Edmonton, AB T5J 1L6, Canada
Interests: mathematical process-based and machine learning modeling; ecohydrology; biogeochemistry; ecosystem productivity

Special Issue Information

Dear Colleagues,

This Special Issue is a natural continuation of our previous Special Issue, “Internet and Computers for Agriculture”, and extends it with the aim of covering recent and current progress in the application of computational solutions, artificial intelligence (AI), and information technology (IT) in modern agriculture. Rapid changes are taking place at a planetary scale, including human population growth and global climatic and ecological change, calling for immediate, sustainable, and secure smart solutions for food production, water supply, greenhouse gas (GHG) emissions, and environmental health.

This Special Issue provides a stage for innovative research by scientists and entrepreneurs developing and applying software products and digital solutions for agriculture, agroecosystems, and natural ecosystems with agricultural applications. We welcome the submission of original articles and reviews involving mobile apps, web applications, internet platforms, Internet of Things (IoT) devices, cloud technologies, and AI and machine learning (ML) methods and applications for precision agriculture, monitoring, cultivation, harvesting, marketing, management, decision making, weather forecasting, optimization, natural language processing, computer/machine vision, drones, real-time detection systems, sensors for field operations, smart agricultural machinery, diagnostics, species and disease recognition, big data collection, scientific process-based mathematical modeling, and machine learning modeling, all of which can contribute to modern agriculture now and in the future.

Dr. Dimitre Dimitrov
Guest Editor

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles as well as short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Agriculture is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • Smart agriculture
  • Web applications
  • Web platforms
  • Mobile apps
  • IoT devices
  • Cloud computing
  • AI and machine learning
  • Big data
  • Data-driven modeling
  • Process-based modeling

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • e-Book format: Special Issues with more than 10 articles can be published as dedicated e-books, ensuring wide and rapid dissemination.

Further information on MDPI's Special Issue policies can be found here.

Published Papers (14 papers)


Research


23 pages, 5105 KiB  
Article
Research on Ginger Price Prediction Model Based on Deep Learning
by Fengyu Li, Xianyong Meng, Ke Zhu, Jun Yan, Lining Liu and Pingzeng Liu
Agriculture 2025, 15(6), 596; https://doi.org/10.3390/agriculture15060596 - 11 Mar 2025
Viewed by 205
Abstract
To help stabilize the prices of niche agricultural products and enhance farmers’ incomes, this study examines the patterns of ginger price fluctuations and their main influencing factors. By combining seasonal-trend decomposition (STL), a long short-term memory (LSTM) network, an attention mechanism (ATT), and a Kolmogorov-Arnold network (KAN), a combined STL-LSTM-ATT-KAN prediction model is developed, and its parameters are fine-tuned using a multi-population adaptive particle swarm optimisation algorithm (AMP-PSO). Based on an in-depth analysis of actual ginger price data from the past decade, the STL-LSTM-ATT-KAN model demonstrated excellent prediction accuracy: its mean absolute error (MAE) was 0.111, mean squared error (MSE) was 0.021, root mean squared error (RMSE) was 0.146, and coefficient of determination (R2) was 0.998. This study provides the ginger industry, agricultural traders, farmers, and policymakers with digitalised and intelligent aids, which are important for improving market monitoring, risk control, and competitiveness and for guaranteeing stability of supply and price. Full article
(This article belongs to the Special Issue Computational, AI and IT Solutions Helping Agriculture)
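The MAE, MSE, RMSE, and R² values reported above are standard regression metrics. As a generic illustration (not the authors' code), they can be computed as:

```python
import numpy as np

def regression_metrics(y_true, y_pred):
    """Compute MAE, MSE, RMSE and R^2 for a forecasting model."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    err = y_true - y_pred
    mae = np.mean(np.abs(err))            # mean absolute error
    mse = np.mean(err ** 2)               # mean squared error
    rmse = np.sqrt(mse)                   # root mean squared error
    ss_res = np.sum(err ** 2)             # residual sum of squares
    ss_tot = np.sum((y_true - y_true.mean()) ** 2)
    r2 = 1.0 - ss_res / ss_tot            # coefficient of determination
    return mae, mse, rmse, r2
```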
Show Figures

Figure 1

17 pages, 7698 KiB  
Article
Plant Disease Segmentation Networks for Fast Automatic Severity Estimation Under Natural Field Scenarios
by Chenyi Zhao, Changchun Li, Xin Wang, Xifang Wu, Yongquan Du, Huabin Chai, Taiyi Cai, Hengmao Xiang and Yinghua Jiao
Agriculture 2025, 15(6), 583; https://doi.org/10.3390/agriculture15060583 - 10 Mar 2025
Viewed by 255
Abstract
The segmentation of plant disease images enables researchers to quantify the proportion of disease spots on leaves, known as disease severity. Current deep learning methods predominantly focus on single diseases, simple lesions, or laboratory-controlled environments. In this study, we established and publicly released image datasets of field scenarios for three diseases: soybean bacterial blight (SBB), wheat stripe rust (WSR), and cedar apple rust (CAR). We developed Plant Disease Segmentation Networks (PDSNets) based on LinkNet with ResNet-18 as the encoder, including three versions: ×1.0, ×0.75, and ×0.5. The ×1.0 version incorporates a 4 × 4 embedding layer to enhance prediction speed, while versions ×0.75 and ×0.5 are lightweight variants with reduced channel numbers within the same architecture. Their parameter counts are 11.53 M, 6.50 M, and 2.90 M, respectively. PDSNet ×0.5 achieved an overall F1 score of 91.96%, an Intersection over Union (IoU) of 85.85% for segmentation, and a coefficient of determination (R2) of 0.908 for severity estimation. On a local central processing unit (CPU), PDSNet ×0.5 demonstrated a prediction speed of 34.18 images (640 × 640 pixels) per second, which is 2.66 times faster than LinkNet. Our work provides an efficient and automated approach for assessing plant disease severity in field scenarios. Full article
(This article belongs to the Special Issue Computational, AI and IT Solutions Helping Agriculture)
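Severity estimation of the kind described above reduces to a ratio of mask areas, and segmentation quality to IoU. A minimal NumPy sketch (the function name and inputs are illustrative assumptions, not the paper's API):

```python
import numpy as np

def severity_and_iou(pred_lesion, gt_lesion, leaf_mask):
    """Severity = lesion pixels / leaf pixels; IoU compares the two masks."""
    pred = np.asarray(pred_lesion, dtype=bool)
    gt = np.asarray(gt_lesion, dtype=bool)
    leaf = np.asarray(leaf_mask, dtype=bool)
    severity = pred.sum() / leaf.sum()            # fraction of leaf diseased
    inter = np.logical_and(pred, gt).sum()        # overlap of the masks
    union = np.logical_or(pred, gt).sum()
    iou = inter / union if union else 1.0
    return severity, iou
```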

23 pages, 26465 KiB  
Article
DHS-YOLO: Enhanced Detection of Slender Wheat Seedlings Under Dynamic Illumination Conditions
by Xuhua Dong and Jingbang Pan
Agriculture 2025, 15(5), 510; https://doi.org/10.3390/agriculture15050510 - 26 Feb 2025
Viewed by 394
Abstract
The precise identification of wheat seedlings in unmanned aerial vehicle (UAV) imagery is fundamental for implementing precision agricultural practices such as targeted pesticide application and irrigation management. This detection task presents significant technical challenges due to two inherent complexities: (1) environmental interference from variable illumination conditions and (2) morphological characteristics of wheat seedlings characterized by slender leaf structures and flexible posture variations. To address these challenges, we propose DHS-YOLO, a novel deep learning framework optimized for robust wheat seedling detection under diverse illumination intensities. Our methodology builds upon the YOLOv11 architecture with three principal enhancements: First, the Dynamic Slender Convolution (DSC) module employs deformable convolutions to adaptively capture the elongated morphological features of wheat leaves. Second, the Histogram Transformer (HT) module integrates a dynamic-range spatial attention mechanism to mitigate illumination-induced image degradation. Third, we implement the ShapeIoU loss function that prioritizes geometric consistency between predicted and ground truth bounding boxes, particularly optimizing for slender plant structures. The experimental validation was conducted using a custom UAV-captured dataset containing wheat seedling images under varying illumination conditions. Compared to the existing models, the proposed model achieved the best performance with precision, recall, mAP50, and mAP50-95 values of 94.1%, 91.0%, 95.2%, and 81.9%, respectively. These results demonstrate our model’s effectiveness in overcoming illumination variations while maintaining high sensitivity to fine plant structures. This research contributes an optimized computer vision solution for precision agriculture applications, particularly enabling automated field management systems through reliable crop detection in challenging environmental conditions. 
Full article
(This article belongs to the Special Issue Computational, AI and IT Solutions Helping Agriculture)
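ShapeIoU builds on the standard Intersection over Union between predicted and ground-truth boxes. For reference, the plain IoU core looks like the sketch below; the shape- and scale-aware terms that ShapeIoU adds are omitted here:

```python
def box_iou(a, b):
    """IoU of two axis-aligned boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])   # intersection corners
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    iw, ih = max(0.0, ix2 - ix1), max(0.0, iy2 - iy1)
    inter = iw * ih
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0
```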

16 pages, 3967 KiB  
Article
Potato Disease and Pest Question Classification Based on Prompt Engineering and Gated Convolution
by Wentao Tang and Zelin Hu
Agriculture 2025, 15(5), 493; https://doi.org/10.3390/agriculture15050493 - 25 Feb 2025
Viewed by 240
Abstract
Currently, there is no publicly available dataset for the classification of potato pest and disease-related queries. Moreover, traditional query classification models generally adopt a single maximum-pooling strategy when performing down-sampling operations. This mechanism only extracts the extreme value responses within the local receptive field, which leads to the degradation of fine-grained feature representation and significantly amplifies text noise. To address these issues, a dataset construction method based on prompt engineering is proposed, along with a question classification method utilizing a gated fusion–convolutional neural network (GF-CNN). By interacting with large language models, prompt words are used to generate potato disease and pest question templates and efficiently construct the Potato Pest and Disease Question Classification Dataset (PDPQCD) by batch importing named entities. The GF-CNN combines outputs from convolutional kernels of varying sizes, and after processing with max-pooling and average-pooling, a gating mechanism is employed to regulate the flow of information, thereby optimizing the text feature extraction process. Experiments using GF-CNN on the PDPQCD, Subj, and THUCNews datasets show F1 scores of 100.00%, 96.70%, and 93.55%, respectively, outperforming other models. The prompt engineering-based method provides a new paradigm for constructing question classification datasets, and the GF-CNN can also be extended for application in other domains. Full article
(This article belongs to the Special Issue Computational, AI and IT Solutions Helping Agriculture)
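The gating idea described above — blending max-pooled and average-pooled responses rather than committing to either — can be sketched as follows (a toy NumPy version with a scalar gate; the real GF-CNN learns its gate parameters within the network):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gated_pool(feature_map, gate_logit):
    """Blend max- and average-pooling with a gate g in (0, 1).

    g near 1 keeps the sharp max-pool response; g near 0 keeps the
    smoother average-pool response. gate_logit stands in for a learned
    parameter (hypothetical here).
    """
    mx = feature_map.max(axis=-1)
    avg = feature_map.mean(axis=-1)
    g = sigmoid(gate_logit)
    return g * mx + (1.0 - g) * avg
```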

19 pages, 14103 KiB  
Article
DCFA-YOLO: A Dual-Channel Cross-Feature-Fusion Attention YOLO Network for Cherry Tomato Bunch Detection
by Shanglei Chai, Ming Wen, Pengyu Li, Zhi Zeng and Yibin Tian
Agriculture 2025, 15(3), 271; https://doi.org/10.3390/agriculture15030271 - 26 Jan 2025
Viewed by 935
Abstract
To better utilize multimodal information for agriculture applications, this paper proposes a cherry tomato bunch detection network using dual-channel cross-feature fusion. It aims to improve detection performance by employing the complementary information of color and depth images. Using the existing YOLOv8_n as the baseline framework, it incorporates a dual-channel cross-fusion attention mechanism for multimodal feature extraction and fusion. In the backbone network, a ShuffleNetV2 unit is adopted to optimize the efficiency of initial feature extraction. During the feature fusion stage, two modules are introduced by using re-parameterization, dynamic weighting, and efficient concatenation to strengthen the representation of multimodal information. Meanwhile, the CBAM mechanism is integrated at different feature extraction stages, combined with the improved SPPF_CBAM module, to effectively enhance the focus and representation of critical features. Experimental results using a dataset obtained from a commercial greenhouse demonstrate that DCFA-YOLO excels in cherry tomato bunch detection, achieving an mAP50 of 96.5%, a significant improvement over the baseline model, while drastically reducing computational complexity. Furthermore, comparisons with other state-of-the-art YOLO and object detection models validate its detection performance. This provides an efficient solution for multimodal fusion for real-time fruit detection in the context of robotic harvesting, running at 52 fps on a regular computer. Full article
(This article belongs to the Special Issue Computational, AI and IT Solutions Helping Agriculture)

25 pages, 10093 KiB  
Article
Research and Experiments on Adaptive Root Cutting Using a Garlic Harvester Based on a Convolutional Neural Network
by Ke Yang, Yunlong Zhou, Hengliang Shi, Rui Yao, Zhaoyang Yu, Yanhua Zhang, Baoliang Peng, Jiali Fan and Zhichao Hu
Agriculture 2024, 14(12), 2236; https://doi.org/10.3390/agriculture14122236 - 6 Dec 2024
Viewed by 708
Abstract
Aimed at the problems of a high leakage rate, a high cutting injury rate, and uneven root cutting in existing combined garlic harvesting and root-cutting technology, we researched the key technologies used in a garlic harvester for adaptive root cutting based on machine vision. Firstly, research was carried out on the conveyor alignment and assembly of the garlic harvester to adjust the garlic plant position and align the bulb’s upper surface before the roots were cut, to establish the parameter equations, and to modify the structure of the conveyor to form the adaptive garlic root-cutting system. Then, a root-cutting test using the double-knife disk-type cutting device was carried out to examine the device's root-cutting ability. Finally, a bulb detector trained with the IRM-YOLO model was deployed on a Jetson Nano device (NVIDIA Jetson Nano (4 GB), Santa Clara, CA, USA) for a harvester field trial. The pass rate for root cutting was 82.8%, and the cutting injury rate was 2.7%, verifying the performance of the adaptive root-cutting system and its adaptability to the field environment and providing a reference for research into combined garlic harvesting technology. Full article
(This article belongs to the Special Issue Computational, AI and IT Solutions Helping Agriculture)

12 pages, 2178 KiB  
Article
Detection and Classification of Agave angustifolia Haw Using Deep Learning Models
by Idarh Matadamas, Erik Zamora and Teodulfo Aquino-Bolaños
Agriculture 2024, 14(12), 2199; https://doi.org/10.3390/agriculture14122199 - 2 Dec 2024
Cited by 1 | Viewed by 968
Abstract
In Oaxaca, Mexico, there are more than 30 species of the Agave genus, and its cultivation is of great economic and social importance. The incidence of pests, diseases, and environmental stress cause significant losses to the crop. The identification of damage through non-invasive tools based on visual information is important for reducing economic losses. The objective of this study was to evaluate and compare five deep learning models: YOLO versions 7, 7-tiny, and 8, and two from the Detectron2 library, Faster-RCNN and RetinaNet, for the detection and classification of Agave angustifolia plants in digital images. In the town of Santiago Matatlán, Oaxaca, 333 images were taken in an open-air plantation, and 1317 plants were labeled into five classes: sick, yellow, healthy, small, and spotted. Models were trained with a 70% random partition, validated with 10%, and tested with the remaining 20%. The results obtained from the models indicate that YOLOv7 is the best-performing model, in terms of the test set, with a mAP of 0.616, outperforming YOLOv7-tiny and YOLOv8, both with a mAP of 0.606 on the same set; demonstrating that artificial intelligence for the detection and classification of Agave angustifolia plants under planting conditions is feasible using digital images. Full article
(This article belongs to the Special Issue Computational, AI and IT Solutions Helping Agriculture)
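The 70/10/20 train/validation/test partition used above is a routine preprocessing step; a minimal sketch (generic code, not the authors' pipeline):

```python
import random

def split_dataset(items, seed=0):
    """Random 70/10/20 train/validation/test split of labeled samples."""
    items = list(items)
    rng = random.Random(seed)          # fixed seed for reproducibility
    rng.shuffle(items)
    n = len(items)
    n_train = int(0.7 * n)
    n_val = int(0.1 * n)
    train = items[:n_train]
    val = items[n_train:n_train + n_val]
    test = items[n_train + n_val:]     # remainder (~20%)
    return train, val, test
```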

26 pages, 10449 KiB  
Article
AI-Based Monitoring for Enhanced Poultry Flock Management
by Edmanuel Cruz, Miguel Hidalgo-Rodriguez, Adiz Mariel Acosta-Reyes, José Carlos Rangel and Keyla Boniche
Agriculture 2024, 14(12), 2187; https://doi.org/10.3390/agriculture14122187 - 30 Nov 2024
Cited by 2 | Viewed by 2672
Abstract
The exponential growth of global poultry production highlights the critical need for efficient flock management, particularly in accurately counting chickens to optimize operations and minimize economic losses. This study advances the application of artificial intelligence (AI) in agriculture by developing and validating an AI-driven automated poultry flock management system using the YOLOv8 object detection model. The scientific objective was to address challenges such as occlusions, lighting variability, and high-density flock conditions, thereby contributing to the broader understanding of computer vision applications in agricultural environments. The practical objective was to create a scalable and reliable system for automated monitoring and decision-making, optimizing resource utilization and improving poultry management efficiency. The prototype achieved high precision (93.1%) and recall (93.0%), demonstrating its reliability across diverse conditions. Comparative analysis with prior models, including YOLOv5, highlights YOLOv8’s superior accuracy and robustness, underscoring its potential for real-world applications. This research successfully achieves its objectives by delivering a system that enhances poultry management practices and lays a strong foundation for future innovations in agricultural automation. Full article
(This article belongs to the Special Issue Computational, AI and IT Solutions Helping Agriculture)
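The precision and recall figures quoted above follow the usual detection definitions; for reference (the counts in the test are illustrative, not taken from the study):

```python
def precision_recall(tp, fp, fn):
    """Detection precision and recall from true positives, false
    positives, and missed detections (false negatives)."""
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    return precision, recall
```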

17 pages, 75550 KiB  
Article
FRESH: Fusion-Based 3D Apple Recognition via Estimating Stem Direction Heading
by Geonhwa Son, Seunghyeon Lee and Yukyung Choi
Agriculture 2024, 14(12), 2161; https://doi.org/10.3390/agriculture14122161 - 27 Nov 2024
Viewed by 885
Abstract
In 3D apple detection, the challenge of direction for apple stem harvesting for agricultural robotics has not yet been resolved. Addressing the issue of determining the stem direction of apples is essential for the harvesting processes employed by automated robots. This research proposes a 3D apple detection framework to identify stem direction. First, we constructed a dataset for 3D apple detection that considers the 3-axis rotation of apples based on stem direction. Secondly, we designed a 3D detection algorithm that not only recognizes the dimensions and location of apples, as existing methods do, but also predicts their 3-axis rotation. Furthermore, we effectively fused 3D point clouds with 2D images to leverage the geometric data from point clouds and the semantic information from images, enhancing the apple detection performance. Experimental results indicated that our method achieved AP@0.25 89.56% for 3D detection by considering apple rotation, surpassing the existing methods. Moreover, we experimentally validated that the proposed loss function most effectively estimated the rotation among the various approaches we explored. This study shows the effectiveness of 3D apple detection with consideration of rotation, emphasizing its potential for practical application in autonomous robotic systems. Full article
(This article belongs to the Special Issue Computational, AI and IT Solutions Helping Agriculture)

27 pages, 10395 KiB  
Article
Intelligent Fault Diagnosis of Inter-Turn Short Circuit Faults in PMSMs for Agricultural Machinery Based on Data Fusion and Bayesian Optimization
by Mingsheng Wang, Wuxuan Lai, Hong Zhang, Yang Liu and Qiang Song
Agriculture 2024, 14(12), 2139; https://doi.org/10.3390/agriculture14122139 - 25 Nov 2024
Viewed by 608
Abstract
The permanent magnet synchronous motor (PMSM) plays an important role in the power system of agricultural machinery. Inter-turn short circuit (ITSC) faults are among the most common failures in PMSMs, and early diagnosis of these faults is crucial for enhancing the safety and reliability of motor operation. In this article, a multi-source data-fusion algorithm based on convolutional neural networks (CNNs) has been proposed for the early fault diagnosis of ITSCs. The contributions of this paper can be summarized in three main aspects. Firstly, synchronizing data from different signals extracted by different devices presents a significant challenge. To address this, a signal synchronization method based on maximum cross-correlation is proposed to construct a synchronized dataset of current and vibration signals. Secondly, applying a traditional CNN to the data fusion of different signals is challenging. To solve this problem, a multi-stream high-level feature fusion algorithm based on a channel attention mechanism is proposed. Thirdly, to tackle the issue of hyperparameter tuning in deep learning models, a hyperparameter optimization method based on Bayesian optimization is proposed. Experiments are conducted based on the derived early-stage ITSC fault-severity indicator, validating the effectiveness of the proposed fault-diagnosis algorithm. Full article
(This article belongs to the Special Issue Computational, AI and IT Solutions Helping Agriculture)
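The maximum-cross-correlation idea for synchronizing current and vibration signals can be illustrated in a few lines of NumPy: locate the peak of the full cross-correlation and read off the lag (a simplified sketch of the general technique, not the authors' implementation):

```python
import numpy as np

def estimate_lag(ref, sig):
    """Estimate the sample offset of `sig` relative to `ref` by locating
    the maximum of their full cross-correlation."""
    corr = np.correlate(sig, ref, mode="full")
    # index (len(ref) - 1) corresponds to zero lag in 'full' mode
    return int(np.argmax(corr)) - (len(ref) - 1)
```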

16 pages, 7753 KiB  
Article
Fault Diagnosis of Rolling Bearings in Agricultural Machines Using SVD-EDS-GST and ResViT
by Fengyun Xie, Yang Wang, Gan Wang, Enguang Sun, Qiuyang Fan and Minghua Song
Agriculture 2024, 14(8), 1286; https://doi.org/10.3390/agriculture14081286 - 4 Aug 2024
Cited by 5 | Viewed by 1409
Abstract
In the complex and harsh environment of agriculture, rolling bearings, as the key transmission components in agricultural machinery, are very prone to failure, so research on the intelligent fault diagnosis of agricultural machinery components is critical. Therefore, this paper proposes a new method based on SVD-EDS-GST and ResNet-Vision Transformer (ResViT) for the fault diagnosis of rolling bearings in agricultural machines. Firstly, an experimental platform for rolling bearing failure in agricultural machinery is built, and one-dimensional vibration signals are obtained using acceleration sensors. Next, the signal is preprocessed for noise reduction using singular value decomposition (SVD) combined with the energy difference spectrum (EDS) to solve for the interference of complex noise and redundant components in the vibration signal. Secondly, generalized S-transform (GST) is used to process vibration signals into images. Then, the ResViT model is proposed, where the ResNet34 network is used to replace the image chunking mechanism in the original Vision Transformer model for feature extraction. Finally, an improved Vision Transformer (ViT) is utilized to synthesize global and local information for fault classification. The experimental results show that the proposed method’s average accuracy in rolling bearing fault classification for agricultural machinery reaches 99.08%. In addition, compared with SVD-EDS-GST-CNN, SVD-EDS-GST-LSTM, STFT-ViT, GST-ViT, and SVD-EDS-GST-ViT, the accuracy rate was improved by 3.5%, 3.84%, 4.8%, 8.02%, and 0.56%, and the standard deviation was also minimized. Full article
(This article belongs to the Special Issue Computational, AI and IT Solutions Helping Agriculture)
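The energy difference spectrum (EDS) used alongside SVD selects how many singular components to keep by finding the largest drop in singular-value energy. A simplified sketch of that rank-selection step (a full SVD-EDS denoiser would also embed the signal in a trajectory matrix and reconstruct it, which this sketch omits):

```python
import numpy as np

def eds_rank(matrix):
    """Effective rank via the energy difference spectrum: the index of
    the largest drop between consecutive squared singular values."""
    s = np.linalg.svd(matrix, compute_uv=False)  # sorted descending
    energy = s ** 2
    diffs = energy[:-1] - energy[1:]             # energy difference spectrum
    return int(np.argmax(diffs)) + 1             # components to retain
```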

24 pages, 9320 KiB  
Article
Precision Corn Pest Detection: Two-Step Transfer Learning for Beetles (Coleoptera) with MobileNet-SSD
by Edmond Maican, Adrian Iosif and Sanda Maican
Agriculture 2023, 13(12), 2287; https://doi.org/10.3390/agriculture13122287 - 18 Dec 2023
Cited by 9 | Viewed by 3194
Abstract
Using neural networks on low-power mobile systems can aid in controlling pests while preserving beneficial species for crops. However, low-power devices require simplified neural networks, which may lead to reduced performance. This study was focused on developing an optimized deep-learning model for mobile devices for detecting corn pests. We propose a two-step transfer learning approach to enhance the accuracy of two versions of the MobileNet SSD network. Five beetle species (Coleoptera), including four harmful to corn crops (belonging to the genera Anoxia, Diabrotica, Opatrum and Zabrus) and one beneficial (Coccinella sp.), were selected for preliminary testing. We employed two datasets. The first, used for the first transfer learning procedure, comprises 2605 images with the general classes ‘Beetle’ and ‘Ladybug’; it was used to recalibrate the networks’ trainable parameters for these two broader classes. The models were then retrained on a second dataset of 2648 images of the five selected species. Performance was compared with a baseline model in terms of average accuracy per class and mean average precision (mAP). MobileNet-SSD-v2-Lite achieved an mAP of 0.8923, ranking second but close to the highest mAP (0.908) obtained by MobileNet-SSD-v1 and outperforming the baseline mAP by 6.06%. It demonstrated the highest accuracy for Opatrum (0.9514) and Diabrotica (0.8066). For Anoxia it reached third-place accuracy (0.9851), close to the top value of 0.9912. For Zabrus it achieved second place (0.9053), while Coccinella was reliably distinguished from all other species, with an accuracy of 0.8939 and zero false positives; moreover, no pest species were mistakenly identified as Coccinella. Analyzing the errors in the MobileNet-SSD-v2-Lite model revealed good overall accuracy despite the reduced size of the training set, with one misclassification, 33 non-identifications, 7 double identifications and 1 false positive across the 266 images from the test set, yielding an overall relative error rate of 0.1579. The preliminary findings validated the two-step transfer learning procedure and placed MobileNet-SSD-v2-Lite in first place, showing high potential for using neural networks in real-time pest control while protecting beneficial species. Full article
(This article belongs to the Special Issue Computational, AI and IT Solutions Helping Agriculture)

22 pages, 18514 KiB  
Article
Efficient and Lightweight Automatic Wheat Counting Method with Observation-Centric SORT for Real-Time Unmanned Aerial Vehicle Surveillance
by Jie Chen, Xiaochun Hu, Jiahao Lu, Yan Chen and Xin Huang
Agriculture 2023, 13(11), 2110; https://doi.org/10.3390/agriculture13112110 - 7 Nov 2023
Cited by 8 | Viewed by 2026
Abstract
The number of wheat ears per unit area is crucial for assessing wheat yield, but automated wheat ear counting still faces significant challenges due to factors like lighting, orientation, and density variations. Departing from most static image analysis methodologies, this study introduces Wheat-FasterYOLO, an efficient real-time model designed to detect, track, and count wheat ears in video sequences. This model uses FasterNet as its foundational feature extraction network, significantly reducing the model’s parameter count and improving its inference speed. We also incorporate deformable convolutions and dynamic sparse attention into the feature extraction network to enhance its ability to capture wheat ear features while reducing the effects of intricate environmental conditions. To address information loss during up-sampling and strengthen the model’s capacity to extract wheat ear features across varying feature map scales, we integrate a path aggregation network (PAN) with the content-aware reassembly of features (CARAFE) up-sampling operator. Furthermore, the incorporation of the Kalman filter-based target-tracking algorithm, Observation-centric SORT (OC-SORT), enables real-time tracking and counting of wheat ears within expansive field settings. Experimental results demonstrate that Wheat-FasterYOLO achieves a mean average precision (mAP) score of 94.01% with a small memory footprint of 2.87 MB, surpassing popular detectors such as YOLOX and YOLOv7-Tiny. With the integration of OC-SORT, the composite higher order tracking accuracy (HOTA) and counting accuracy reached 60.52% and 91.88%, respectively, while maintaining a frame rate of 92 frames per second (FPS). This technology has promising applications in wheat ear counting tasks.
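OC-SORT, like SORT before it, builds on a Kalman filter with a constant-velocity motion model to predict where each tracked object will appear in the next frame. The following is a minimal, self-contained sketch of that predict/update core for a 1-D position; it is illustrative only, not the authors' implementation, and the state layout and noise magnitudes are assumptions:

```python
import numpy as np

# Constant-velocity Kalman filter (1-D position + velocity), the motion-model
# core that trackers such as OC-SORT build on. Noise values are illustrative.
F = np.array([[1.0, 1.0],    # state transition: position += velocity
              [0.0, 1.0]])
H = np.array([[1.0, 0.0]])   # we observe position only
Q = np.eye(2) * 1e-2         # process noise covariance
R = np.array([[1.0]])        # measurement noise covariance

x = np.array([[0.0], [1.0]])  # initial state: position 0, velocity 1
P = np.eye(2)                 # initial state covariance

def predict(x, P):
    """Propagate the state one frame ahead under the motion model."""
    return F @ x, F @ P @ F.T + Q

def update(x, P, z):
    """Correct the prediction with a noisy position measurement z."""
    y = z - H @ x                   # innovation (measurement residual)
    S = H @ P @ H.T + R             # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)  # Kalman gain
    return x + K @ y, (np.eye(2) - K @ H) @ P

# Track a target moving roughly 1 unit/frame with noisy position readings.
for z in [1.1, 1.9, 3.05, 4.0]:
    x, P = predict(x, P)
    x, P = update(x, P, np.array([[z]]))

print(float(x[0, 0]))  # estimated position, close to 4
print(float(x[1, 0]))  # estimated velocity, close to 1
```

In a full tracker, predicted boxes are matched to the current frame's detections (e.g., by IoU), and OC-SORT additionally re-anchors the filter on observations to reduce drift during occlusions.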
(This article belongs to the Special Issue Computational, AI and IT Solutions Helping Agriculture)

Review

32 pages, 5944 KiB  
Review
Emerging Technologies for Precision Crop Management Towards Agriculture 5.0: A Comprehensive Overview
by Mohamed Farag Taha, Hanping Mao, Zhao Zhang, Gamal Elmasry, Mohamed A. Awad, Alwaseela Abdalla, Samar Mousa, Abdallah Elshawadfy Elwakeel and Osama Elsherbiny
Agriculture 2025, 15(6), 582; https://doi.org/10.3390/agriculture15060582 - 9 Mar 2025
Viewed by 947
Abstract
Agriculture 5.0 (Ag5.0) represents a groundbreaking shift in agricultural practices, addressing the global food security challenge by integrating cutting-edge technologies such as artificial intelligence (AI), machine learning (ML), robotics, and big data analytics. To support the transition to Ag5.0, this paper comprehensively reviews the role of AI, ML, and other emerging technologies in overcoming current and future crop management challenges. Crop management has progressed significantly from early agricultural methods to the advanced capabilities of Ag5.0, marking a notable leap in precision agriculture. Emerging technologies such as collaborative robots, 6G, digital twins, the Internet of Things (IoT), blockchain, cloud computing, and quantum technologies are central to this evolution. The paper also highlights how machine learning and modern agricultural tools are improving the way we perceive, analyze, and manage crop growth. Additionally, it explores real-world case studies showcasing the application of machine learning and deep learning in crop monitoring. Innovations in smart sensors, AI-based robotics, and advanced communication systems are driving the next phase of agricultural digitalization and decision-making. The paper addresses the opportunities and challenges that come with adopting Ag5.0, emphasizing the transformative potential of these technologies in improving agricultural productivity and tackling global food security issues. Finally, we highlight future trends and research needs, such as multidisciplinary approaches, regional adaptation, and advances in AI and robotics. Ag5.0 represents a paradigm shift towards precision crop management, fostering sustainable, data-driven farming systems that optimize productivity while minimizing environmental impact.
(This article belongs to the Special Issue Computational, AI and IT Solutions Helping Agriculture)
