Innovation of Intelligent Detection and Pesticide Application Technology for Horticultural Crops

A special issue of Agronomy (ISSN 2073-4395). This special issue belongs to the section "Precision and Digital Agriculture".

Deadline for manuscript submissions: 30 September 2024 | Viewed by 5990

Special Issue Editors


Dr. Hongxing Peng
Guest Editor
College of Mathematics and Informatics, South China Agricultural University, Guangzhou 510642, China
Interests: machine vision; artificial intelligence; intelligent agriculture; agricultural robots

Dr. Yuanyuan Shao
Guest Editor
College of Mechanical and Electronic Engineering, Shandong Agricultural University, Tai'an 271002, China
Interests: intelligent agriculture; agricultural product detection; hyperspectral image processing

Special Issue Information

Dear Colleagues,

Intelligent detection and pesticide application technologies have long been key areas of research in horticultural crop production. With advances in technology and the demands of precision agriculture, these technologies have become increasingly important for solving core problems in agricultural production, such as ensuring crop yield and quality, reducing pesticide usage, and protecting the environment. The topic has attracted widespread attention from scholars worldwide.

This Special Issue aims to collect and publish cutting-edge research on intelligent detection and pesticide application technologies for horticultural crops, providing a platform for scholars to share their experiences, ideas, and latest research results. The scope of this Special Issue includes, but is not limited to, the following topics:

  • Intelligent detection technology for horticultural crop diseases, pests, and weeds;
  • Pesticide application technology for horticultural crops;
  • Numerical simulation and optimized design of pesticide applications;
  • Evaluation methods and standards for pesticide residue in horticultural products;
  • Intelligent agriculture;
  • Agricultural product detection;
  • Hyperspectral image processing;
  • Machine vision;
  • Artificial intelligence.

This Special Issue welcomes high-quality papers related to intelligent detection and pesticide application technologies for horticultural crops. Papers should be original works that have not been published elsewhere, or review articles summarizing relevant research progress in this field.

Dr. Hongxing Peng
Dr. Yuanyuan Shao
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the Special Issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Agronomy is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • intelligent detection
  • precision agriculture
  • machine vision
  • hyperspectral image processing
  • pesticide application
  • agricultural robots
  • agricultural big data
  • agricultural product quality and safety

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • e-Book format: Special Issues with more than 10 articles can be published as dedicated e-books, ensuring wide and rapid dissemination.

Further information on MDPI's Special Issue policies can be found here.

Published Papers (5 papers)


Research

19 pages, 5989 KiB  
Article
YOLO-PEM: A Lightweight Detection Method for Young “Okubo” Peaches in Complex Orchard Environments
by Jianping Jing, Shujuan Zhang, Haixia Sun, Rui Ren and Tianyu Cui
Agronomy 2024, 14(8), 1757; https://doi.org/10.3390/agronomy14081757 - 11 Aug 2024
Viewed by 753
Abstract
The intelligent detection of young peaches is the main technology of fruit-thinning robots, which is crucial for enhancing peach fruit quality and reducing labor costs. This study presents the lightweight YOLO-PEM model based on YOLOv8s to achieve high-precision and automatic detection of young “Okubo” peaches. Firstly, the C2f_P module was devised by partial convolution (PConv), replacing all C2f modules in YOLOv8s to make the model lightweight. Secondly, embedding the efficient multi-scale attention (EMA) module in the lightweight C2f_P_1 module of the backbone network enhanced the feature extraction capability and accuracy for young peaches. Finally, the MPDIoU loss function was utilized to replace the original CIoU loss function, which improved the detection accuracy of the bounding box while speeding up the convergence of the model. The experimental results demonstrate that the YOLO-PEM model achieved an average precision (AP) of 90.86%, an F1 score of 86.70%, and a model size of 16.1 MB, representing a 1.85% improvement in the AP, a 0.85% improvement in the F1 score, and a 5.3 MB reduction in the model size compared with YOLOv8s. The AP was 6.26%, 6.01%, 2.05%, 2.12%, and 1.87% higher than those of the other lightweight detection models YOLOv3-tiny, YOLOv4-tiny, YOLOv5s, YOLOv6s, and YOLOv7-tiny, respectively. Furthermore, the FPS of YOLO-PEM was 196.2 f·s⁻¹, which can fulfill the demand for the real-time detection of young peaches. YOLO-PEM effectively detects young peaches in complex orchard environments and can offer a theoretical basis for the design of the vision system of the “Okubo” peach fruit-thinning robot and the scientific management of orchards. Full article
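
For readers unfamiliar with partial convolution, the following is a minimal PyTorch sketch of the PConv idea behind the paper's C2f_P module: only a fraction of the channels is convolved, and the rest pass through untouched. It is an illustrative re-implementation, not the authors' code; the module name, the 1/4 channel ratio, and the 3×3 kernel are assumptions.

```python
# Minimal sketch of partial convolution (PConv); illustrative only, not the paper's C2f_P module.
import torch
import torch.nn as nn

class PConv(nn.Module):
    def __init__(self, channels: int, ratio: float = 0.25, kernel_size: int = 3):
        super().__init__()
        self.conv_channels = max(1, int(channels * ratio))   # channels that are actually convolved
        self.idle_channels = channels - self.conv_channels   # channels passed through unchanged
        self.conv = nn.Conv2d(self.conv_channels, self.conv_channels,
                              kernel_size, padding=kernel_size // 2, bias=False)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Convolve only a fraction of the channels and copy the rest, which cuts FLOPs and parameters.
        x1, x2 = torch.split(x, [self.conv_channels, self.idle_channels], dim=1)
        return torch.cat((self.conv(x1), x2), dim=1)

if __name__ == "__main__":
    x = torch.randn(1, 64, 80, 80)   # dummy feature map
    print(PConv(64)(x).shape)        # torch.Size([1, 64, 80, 80])
```

Because the untouched channels skip the convolution entirely, compute and parameter counts drop roughly in proportion to the channel ratio, which is what makes a C2f_P-style replacement lighter than the original C2f.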

23 pages, 14538 KiB  
Article
Rep-ViG-Apple: A CNN-GCN Hybrid Model for Apple Detection in Complex Orchard Environments
by Bo Han, Ziao Lu, Jingjing Zhang, Rolla Almodfer, Zhengting Wang, Wei Sun and Luan Dong
Agronomy 2024, 14(8), 1733; https://doi.org/10.3390/agronomy14081733 - 7 Aug 2024
Viewed by 597
Abstract
Accurately recognizing apples in complex environments is essential for automating apple picking operations, particularly under challenging natural conditions such as cloudy, snowy, foggy, and rainy weather, as well as low-light situations. To overcome the challenges of reduced apple target detection accuracy due to branch occlusion, apple overlap, and variations between near and far field scales, we propose the Rep-ViG-Apple algorithm, an advanced version of the YOLO model. The Rep-ViG-Apple algorithm features a sophisticated architecture designed to enhance apple detection performance in difficult conditions. To improve feature extraction for occluded and overlapped apple targets, we developed the inverted residual multi-scale structural reparameterized feature extraction block (RepIRD Block) within the backbone network. We also integrated the sparse graph attention mechanism (SVGA) to capture global feature information, concentrate attention on apples, and reduce interference from complex environmental features. Moreover, we designed a feature extraction network with a CNN-GCN architecture, termed Rep-Vision-GCN. This network combines the local multi-scale feature extraction capabilities of a convolutional neural network (CNN) with the global modeling strengths of a graph convolutional network (GCN), enhancing the extraction of apple features. The RepConvsBlock module, embedded in the neck network, forms the Rep-FPN-PAN feature fusion network, which improves the recognition of apple targets across various scales, both near and far. Furthermore, we implemented a channel pruning algorithm based on LAMP scores to balance computational efficiency with model accuracy. Experimental results demonstrate that the Rep-ViG-Apple algorithm achieves precision, recall, and average accuracy of 92.5%, 85.0%, and 93.3%, respectively, marking improvements of 1.5%, 1.5%, and 2.0% over YOLOv8n. Additionally, the Rep-ViG-Apple model benefits from a 22% reduction in size, enhancing its efficiency and suitability for deployment in resource-constrained environments while maintaining high accuracy. Full article
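
The RepIRD and RepConvs blocks described above rely on structural reparameterization: parallel branches used during training are algebraically merged into a single convolution for inference. The sketch below shows the core trick on a simplified two-branch (3×3 plus 1×1) block in PyTorch; the BN and identity branches of the paper's actual blocks are omitted, so treat it as an illustration of the general technique only.

```python
# Structural reparameterization sketch: merge a parallel 3x3 + 1x1 pair into one 3x3 conv.
# Illustrative only; the paper's RepIRD/RepConvs blocks are more elaborate.
import torch
import torch.nn as nn
import torch.nn.functional as F

class RepBranchPair(nn.Module):
    def __init__(self, channels: int):
        super().__init__()
        self.conv3 = nn.Conv2d(channels, channels, 3, padding=1, bias=True)
        self.conv1 = nn.Conv2d(channels, channels, 1, bias=True)

    def forward(self, x):          # training time: two parallel branches
        return self.conv3(x) + self.conv1(x)

    def merge(self) -> nn.Conv2d:  # inference time: one equivalent 3x3 conv
        fused = nn.Conv2d(self.conv3.in_channels, self.conv3.out_channels, 3, padding=1, bias=True)
        # Pad the 1x1 kernel to 3x3 (its weight lands in the kernel centre) and add the kernels;
        # biases add directly because convolution is linear in its weights.
        fused.weight.data = self.conv3.weight.data + F.pad(self.conv1.weight.data, [1, 1, 1, 1])
        fused.bias.data = self.conv3.bias.data + self.conv1.bias.data
        return fused

if __name__ == "__main__":
    m = RepBranchPair(8).eval()
    x = torch.randn(1, 8, 32, 32)
    print(torch.allclose(m(x), m.merge()(x), atol=1e-5))   # True: same output from a single conv
```

At inference the merged convolution produces the same output with fewer branches to execute, which is why reparameterized blocks can add training-time capacity without an inference-time cost.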

20 pages, 6514 KiB  
Article
Inversion of Glycyrrhiza Chlorophyll Content Based on Hyperspectral Imagery
by Miaomiao Xu, Jianguo Dai, Guoshun Zhang, Wenqing Hou, Zhengyang Mu, Peipei Chen, Yujuan Cao and Qingzhan Zhao
Agronomy 2024, 14(6), 1163; https://doi.org/10.3390/agronomy14061163 - 29 May 2024
Viewed by 653
Abstract
Glycyrrhiza is an important medicinal crop that has been extensively utilized in the food and medical sectors, yet studies on hyperspectral remote sensing monitoring of glycyrrhiza are currently scarce. This study analyzes glycyrrhiza hyperspectral images, extracts characteristic bands and vegetation indices, and constructs inversion models using different input features. The study obtained ground and unmanned aerial vehicle (UAV) hyperspectral images and chlorophyll content (Soil and Plant Analyzer Development (SPAD) values) from sampling sites at three growth stages of glycyrrhiza (regreening, flowering, and maturity). Hyperspectral data were smoothed using the Savitzky–Golay filter, and the feature vegetation index was selected using the Pearson Correlation Coefficient (PCC) and Recursive Feature Elimination (RFE). Feature extraction was performed using Competitive Adaptive Reweighted Sampling (CARS), the Genetic Algorithm (GA), and the Successive Projections Algorithm (SPA). The SPAD values were then inverted using Partial Least Squares Regression (PLSR), Support Vector Regression (SVR), Random Forest (RF), and Extreme Gradient Boosting (XGBoost), and the results were analyzed visually. The results indicate that in the ground glycyrrhiza inversion model, the GA-XGBoost combination performed best during the regreening period, with R², RMSE, and MAE values of 0.95, 0.967, and 0.825, respectively, showing improved model accuracy compared to full-spectrum methods. In the UAV glycyrrhiza inversion model, the CARS-PLSR combination yielded the best results during the maturity stage, with R², RMSE, and MAE values of 0.83, 1.279, and 1.215, respectively. This study proposes a method combining feature selection techniques and machine learning algorithms that can provide a reference for the rapid, nondestructive inversion of glycyrrhiza SPAD at different growth stages using hyperspectral sensors. This is significant for monitoring the growth of glycyrrhiza, managing fertilization, and advancing precision agriculture. Full article
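
As a concrete illustration of this kind of inversion pipeline, the sketch below chains Savitzky–Golay smoothing, a band-selection step, and an XGBoost regressor on synthetic spectra. The band count, hyperparameters, and the use of scikit-learn's RFE as a stand-in for CARS/GA/SPA are all assumptions made for demonstration; it is not the study's pipeline.

```python
# Hedged sketch of a SPAD inversion pipeline: smoothing -> band selection -> regression.
# Synthetic data stand in for the ground/UAV hyperspectral measurements.
import numpy as np
from scipy.signal import savgol_filter
from sklearn.feature_selection import RFE
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score, mean_squared_error, mean_absolute_error
from xgboost import XGBRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(120, 200))                                      # 120 samples x 200 spectral bands (synthetic)
y = 3 * X[:, 50] + 2 * X[:, 120] + rng.normal(scale=0.3, size=120)   # synthetic SPAD values

X = savgol_filter(X, window_length=11, polyorder=2, axis=1)          # smooth each spectrum

# Select a handful of informative bands (stand-in for CARS/GA/SPA selection).
selector = RFE(LinearRegression(), n_features_to_select=10).fit(X, y)
X_sel = X[:, selector.support_]

X_tr, X_te, y_tr, y_te = train_test_split(X_sel, y, test_size=0.3, random_state=0)
model = XGBRegressor(n_estimators=300, max_depth=4, learning_rate=0.05).fit(X_tr, y_tr)
pred = model.predict(X_te)
print(f"R²={r2_score(y_te, pred):.2f}  "
      f"RMSE={mean_squared_error(y_te, pred) ** 0.5:.3f}  "
      f"MAE={mean_absolute_error(y_te, pred):.3f}")
```

Swapping the regressor for PLSR, SVR, or RF and the selector for CARS, GA, or SPA would mirror the model combinations compared in the abstract.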

24 pages, 7433 KiB  
Article
Improved YOLOv8 and SAHI Model for the Collaborative Detection of Small Targets at the Micro Scale: A Case Study of Pest Detection in Tea
by Rong Ye, Quan Gao, Ye Qian, Jihong Sun and Tong Li
Agronomy 2024, 14(5), 1034; https://doi.org/10.3390/agronomy14051034 - 13 May 2024
Cited by 8 | Viewed by 1718
Abstract
Pest target identification in agricultural production environments is challenging due to the dense distribution and small size of pests. Additionally, changeable environmental lighting and complex backgrounds further complicate the detection process. This study focuses on enhancing the recognition performance of tea pests by introducing a lightweight pest image recognition model based on the improved YOLOv8 architecture. First, slicing-aided fine-tuning and slicing-aided hyper inference (SAHI) are proposed to partition input images for enhanced model performance on low-resolution images and small-target detection. Then, based on an ELAN, a generalized efficient layer aggregation network (GELAN) is designed to replace the C2f module in the backbone network, enhance its feature extraction ability, and construct a lightweight model. Additionally, the MS structure is integrated into the neck network of YOLOv8 for feature fusion, enhancing the extraction of fine-grained and coarse-grained semantic information. Furthermore, the BiFormer attention mechanism, based on the Transformer architecture, is introduced to amplify target characteristics of tea pests. Finally, the inner-MPDIoU, based on auxiliary borders, is utilized as a replacement for the original loss function to enhance its learning capacity for complex pest samples. Our experimental results demonstrate that the enhanced YOLOv8 model achieves a precision of 96.32% and a recall of 97.95%, surpassing those of the original YOLOv8 model. Moreover, it attains an mAP@50 score of 98.17%. Compared to Faster R-CNN, SSD, YOLOv5, YOLOv7, and YOLOv8, its average accuracy is 17.04, 11.23, 5.78, 3.75, and 2.71 percentage points higher, respectively. The improved YOLOv8 model outperforms current mainstream detection models overall, with a detection speed of 95 FPS. This model effectively balances lightweight design with high accuracy and speed in detecting small targets such as tea pests. It can serve as a valuable reference for the identification and classification of various insect pests in tea gardens within complex production environments, effectively addressing practical application needs and offering guidance for the future monitoring and scientific control of tea insect pests. Full article
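
Slicing-aided hyper inference is available as the open-source sahi package. The snippet below shows standard sliced inference around stock YOLOv8 weights, assuming a recent sahi build with YOLOv8 support; the weights path and image name are placeholders, and the paper's improved GELAN/BiFormer/inner-MPDIoU model is not reproduced here.

```python
# Hedged sketch of slicing-aided hyper inference (SAHI) with a stock YOLOv8 detector.
from sahi import AutoDetectionModel
from sahi.predict import get_sliced_prediction

detection_model = AutoDetectionModel.from_pretrained(
    model_type="yolov8",
    model_path="yolov8n.pt",          # placeholder weights, not the paper's trained model
    confidence_threshold=0.3,
    device="cpu",
)

# Tile the image into overlapping 512x512 slices, detect on each slice,
# then merge the per-slice boxes back into full-image coordinates.
result = get_sliced_prediction(
    "tea_canopy.jpg",                 # hypothetical field image
    detection_model,
    slice_height=512,
    slice_width=512,
    overlap_height_ratio=0.2,
    overlap_width_ratio=0.2,
)

for pred in result.object_prediction_list:
    print(pred.category.name, round(pred.score.value, 3))
```

Slicing keeps small pests at a usable pixel scale within each tile, which is why the abstract pairs SAHI with the architectural changes for small-target detection.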

16 pages, 3571 KiB  
Article
Detection and Analysis of Chili Pepper Root Rot by Hyperspectral Imaging Technology
by Yuanyuan Shao, Shengheng Ji, Guantao Xuan, Yanyun Ren, Wenjie Feng, Huijie Jia, Qiuyun Wang and Shuguo He
Agronomy 2024, 14(1), 226; https://doi.org/10.3390/agronomy14010226 - 21 Jan 2024
Cited by 1 | Viewed by 1448
Abstract
The objective is to develop a portable device capable of promptly identifying root rot in the field. This study employs hyperspectral imaging technology to detect root rot by analyzing spectral variations in chili pepper leaves during the healthy, incubation, and diseased stages under root rot stress. Two types of chili pepper seeds (Manshanhong and Shanjiao No. 4) were cultured until they had grown two to three pairs of true leaves. Subsequently, robust young plants were infected with Fusarium root rot fungi by the root-irrigation technique. The effective wavelengths for discriminating between the distinct stages were determined using the successive projections algorithm (SPA) after capturing hyperspectral images. The optimal root-rot-related index among the normalized difference spectral indices (NDSIs) was obtained using the Pearson correlation coefficient. The early detection of root rot can be modeled using spectral information at the effective wavelengths and in the NDSI, together with partial least squares discriminant analysis (PLS-DA), least squares support vector machine (LSSVM), and back-propagation (BP) neural network technology. The SPA-BP model demonstrates outstanding predictive capability compared with the other models, with a classification accuracy of 92.3% for the prediction set. However, SPA selects an excessive number of effective wavelengths, which is not advantageous for immediate detection in practical field scenarios. In contrast, the NDSI (R445, R433)-BP model uses only two wavelengths of spectral information, yet its prediction accuracy reaches 89.7%, making it more suitable for the rapid detection of root rot. This study can provide theoretical support for the early detection of chili root rot and technical support for the design of a portable root rot detector. Full article
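
To make the two-wavelength route concrete, the sketch below computes NDSI(R445, R433) from leaf reflectance and trains a small neural network on it. The data are entirely synthetic and scikit-learn's MLPClassifier stands in for the paper's BP network, so this is an assumption-laden illustration rather than the study's model.

```python
# Hedged sketch: NDSI(R445, R433) feature + a small neural-network classifier on synthetic data.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(1)
n = 300
labels = rng.integers(0, 3, n)                          # 0 = healthy, 1 = incubation, 2 = diseased (synthetic)
r445 = rng.uniform(0.10, 0.25, n)                       # synthetic reflectance at 445 nm
r433 = r445 - 0.01 * labels - rng.normal(0, 0.004, n)   # make the index loosely track disease stage

ndsi = (r445 - r433) / (r445 + r433)                    # normalized difference spectral index
X = ndsi.reshape(-1, 1)

X_tr, X_te, y_tr, y_te = train_test_split(X, labels, test_size=0.3, random_state=1)
clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=1).fit(X_tr, y_tr)
print(f"accuracy: {accuracy_score(y_te, clf.predict(X_te)):.2f}")
```

A two-band index like this is attractive for a portable detector because it needs only two narrow-band measurements rather than a full hyperspectral cube.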
