Article

Precision Weed Management for Straw-Mulched Maize Field: Advanced Weed Detection and Targeted Spraying Based on Enhanced YOLO v5s

1 College of Engineering, China Agricultural University, Beijing 100083, China
2 Key Laboratory of Agricultural Equipment for Conservation Tillage, Ministry of Agriculture and Rural Affairs, Beijing 100083, China
* Author to whom correspondence should be addressed.
Agriculture 2024, 14(12), 2134; https://doi.org/10.3390/agriculture14122134
Submission received: 17 October 2024 / Revised: 13 November 2024 / Accepted: 21 November 2024 / Published: 25 November 2024
(This article belongs to the Section Digital Agriculture)

Abstract

Straw mulching in conservation tillage farmland can effectively promote land utilization and conservation. However, in this farming mode, surface straw suppresses weed growth, affects weed size and position distribution, and obscures the weeds, which hampers effective weed management in the field. Accurate weed identification and localization, along with efficient herbicide application, are crucial for achieving precise, efficient, and intelligent precision agriculture. To address these challenges, this study proposes a weed detection model for a targeted spraying system. Firstly, we collected a dataset of weeds in a straw-covered environment. Secondly, we proposed an improved YOLO v5s network, incorporating a Convolutional Block Attention Module (CBAM), the FasterNet feature extraction network, and the WIoU loss function to optimize the network structure and training strategy. Thirdly, we designed a targeted spraying system by combining the proposed model with a targeted spraying device. Model tests and spraying experiments demonstrated that, although the model exhibited a 0.9% decrease in average weed detection accuracy, it achieved an 8.46% increase in detection speed, with model memory and computational load reduced by 50.36% and 53.16%, respectively. In the spraying experiments, the proposed method achieved a weed identification accuracy of 90%, a target localization error within 4%, an effective spraying rate of 96.3%, a missed spraying rate of 13.3%, and an erroneous spraying rate of 3.7%. These results confirm the robustness of the model and the feasibility of the targeted spraying method. This approach also promotes the application of deep learning algorithms in precision weed management within targeted spraying systems.

1. Introduction

Conservation tillage techniques, generally involving straw mulching management, have been widely promoted to enhance agricultural ecological and economic benefits [1]. Although straw coverage inhibits weed growth, reduces weed density, and benefits crop competition [2], weed control becomes more complex due to the random distribution of weeds and heavy shading by the straw [3,4,5]. Targeted spraying is an intelligent technology that controls pesticide usage based on the recognition and localization of weeds [6], yet precision weed management in straw-mulched maize fields remains a challenging problem [7]. Therefore, the objective of this study is to contribute a targeted spraying strategy based on an advanced weed detection method that precisely identifies and localizes weeds, thereby improving sprayer performance.
An intelligent sprayer generally includes a targeted detection system and a spraying system, in which the detection system collects information on the weeds and makes spraying decisions [8,9,10]. Based on the weed detection results, the spraying system controls the sprayer to carry out spraying. Driven by the demand for precision detection, real-time sensing has developed rapidly [11]. Although ultrasonic [12], laser [13], and spectral imaging [14] have been applied, color imagery is often preferable, offering low cost and comprehensive information on the position and size of field targets. Numerous studies have employed traditional image processing in conjunction with image color, texture, shape, and other features to achieve weed detection [15]. Tang et al. proposed a segmentation and extraction method for weeds in farmland based on the YCrCb color model, which achieved segmentation accuracy exceeding 90% [16]. Bakhshipour et al. utilized the Haar wavelet transform to extract weed texture features, achieving a weed detection rate of 96% in sugar beet fields [17]. This indicates that various features play a significant role in weed identification, but the recognition rate is low when a single feature is used. Therefore, the extraction of mixed features is the current trend in object recognition. Wang et al. established a weed identification model based on the shape, color, texture, and fused features of test samples, effectively detecting weeds in asparagus fields [18]. Although traditional image processing methods have been validated for weed detection, they are significantly affected by factors such as lighting and environment. These methods lack robust feature extraction and generalization capabilities, making them difficult to apply in dense, small-target scenarios under complex working conditions [19].
To explore more features of weeds and improve detection robustness, deep learning methods are considered; they can automatically extract features from raw data and construct classifiers with higher accuracy and stronger generalization ability [20,21], demonstrating rapid progress in complex agricultural tasks. In terms of weed detection, Quan et al. proposed an improved Faster R-CNN algorithm using VGG19 in place of the original backbone to rapidly identify corn seedlings and field weeds with an accuracy of 97.71%, enhancing the precision of crop detection [22]. Wang et al. developed a convolutional neural network (CNN) model that accurately detects corn seedlings and weeds by combining multiscale features and superpixel segmentation [23]. However, in agricultural practice, these algorithms have high computational complexity and excessive memory consumption [24], making them difficult to deploy on embedded devices with limited hardware resources. One of the most pressing issues in weed detection is achieving real-time and accurate results with limited hardware resources [25]. Among the available methods, single-stage detection algorithms represented by the YOLO series have achieved a good balance between accuracy and speed [26], with potential for weed detection in complex farmland environments.
As the demand for deep learning algorithms on mobile devices has increased, research on lightweight network structures has begun. Wu et al. proposed a small-target weed detection model based on YOLO v4, which adopts depthwise-separable convolution and a feature fusion structure, effectively realizing real-time weed detection in cabbage farmland [27]. Wang et al. constructed a pixel-level synthetic data enhancement method and a TIA-YOLO v5 network targeting sugar beet crops and weeds [28]. The method was deployed on a mobile platform, effectively solving the problem of imbalanced weed and crop data distribution and achieving rapid and accurate weed detection in the field. Gong et al. proposed an automatic maize seedling recognition and navigation localization method based on an improved YOLO v5s object detection model [29]. This method uses lightweight backbone networks such as MobileNetV3 for feature extraction and integrates the Convolutional Block Attention Module (CBAM) to enhance the recognition of small targets, demonstrating the strong application potential of YOLO models in complex backgrounds.
Constructing a targeted spraying platform based on identified weed targets and realizing synchronized control of targeted spraying is crucial for promoting the intelligent application of targeted spraying technology [30]. In terms of spraying devices and equipment, targeted spraying systems and adjustable spraying devices are gradually being developed and applied to spray weed targets. Zhao et al. developed a targeted spraying robotic system based on visual servo technology to spray precisely according to canopy information [31]. James P. Underwood et al. designed a robotic arm-based adjustable spraying device that adjusts nozzle movement according to target position information detected in images, achieving directional spraying in vegetable crop fields [32]. Raja et al. designed a micro-jet sprayer composed of 12 solenoid valves capable of spraying herbicide at high frequency, with low volume, and within a narrow area, significantly enhancing weed control effectiveness [33]. These studies have demonstrated the feasibility of targeted spraying technology. However, existing targeted spraying systems are designed primarily for fruit tree canopies with large targets or vegetables with wide plant spacing, and few studies focus on on-site targeted spraying of weeds in complex environments.
Deep learning-based object detection models have been applied to some extent in weed management. Diao and Gong et al. utilized deep learning techniques to identify and extract maize row lines to guide weeding machines in row-following and spraying operations [29,34]. This approach can significantly reduce crop damage and improve inter-row weeding efficiency. However, mechanical weeding is not suitable for straw-covered environments, and using navigation lines to guide sprayers cannot achieve precise spraying on individual weed plants, which could further save herbicides. Zhao et al. used an improved support vector machine classification algorithm to detect cabbage and weeds, designing a targeted spraying system with an average effective spraying rate of 92.9% [35]. Fu et al. introduced a transformer module into the YOLO v5 model to achieve precise crop identification and targeted spraying of cabbage on mobile devices under strong light conditions [36]. Li et al. combined deep learning with spraying technology to design a targeted spraying system that showed promising results, but its effectiveness in maize seedling scenarios under straw mulching has yet to be verified [11]. Hederson de S. Sabóia et al. developed an embedded system for weed detection in cotton and soybean crops using Faster R-CNN and YOLO v3, obtaining more than 78% accuracy [37]. Chen et al. proposed a YOLO v4-based model for detecting weeds in sesame fields, which performed well [38]. Amlan Balabantaray et al. developed an intelligent weeding robot based on YOLO v7, which realized real-time recognition and on-site spraying of Palmer amaranth with an average accuracy of 60.4% [39]. Based on the above studies, deep learning-based machine vision technology achieves a certain accuracy and efficiency in precise weed control. However, in straw-covered farmland, straw affects weed density, obstructs weeds and reduces their integrity and clarity in images, and increases light reflection intensity, leading to image color distortion and reducing the differential information between weeds and crops [36,40]. Traditional detection and spraying methods are inefficient and insufficiently accurate, failing to meet the requirements for detection accuracy and speed, and the effectiveness of weed detection under straw cover remains to be verified [11,40]. In the complex and variable environment of straw-covered maize seedling farmland, with diverse phenotypic information among weed targets, the recognition accuracy and generalization ability of existing weed control technologies need to be further strengthened, and the integration of recognition technologies and spraying systems in agricultural equipment remains limited. Precision-targeted spraying technology requires further breakthroughs in target identification and localization, target-synchronized control, and intelligent application.
This study proposed a method for identifying and localizing weeds based on an improved YOLO v5s network in the context of straw-covered farmland. By integrating FasterNet and CBAM, this approach reduces memory consumption, enhances weed feature extraction capability, and improves model accuracy. Additionally, the introduction of WIoU enhances the model’s generalization ability. This method was integrated with a directional spraying system, resulting in the development of an intelligent weeding agricultural device. The proposed solution addresses challenges in weed identification and localization, synchronized control of targets, and intelligent application in complex straw-covered cornfields during the seedling stage. It advances the application of deep learning algorithms in precision-targeted spraying systems for weed management, providing theoretical foundations and technical support for precise weed control under these conditions.
The following contributions are reported: (1) A dataset of weeds during the seedling stage of corn in straw-covered farmland was constructed, with weed images collected under ambient light conditions. (2) A method for identifying and localizing weeds was proposed based on an improved YOLO v5s algorithm that integrates the FasterNet feature extraction network and the Convolutional Block Attention Module (CBAM). This method significantly enhances the efficacy and speed of weed detection, enabling automated real-time identification and precise localization of weeds during the corn seedling stage under straw-covered conditions, and provides location information for herbicide-targeted spraying operations. (3) A targeted spraying system integrated with the spraying device was developed to enable precise targeted spraying based on the location information of the weeds. (4) Field experiments were designed and conducted to evaluate the weed identification performance and the effectiveness of the spraying method.

2. Materials and Methods

2.1. Overall Overview

The method comprises four parts, as shown in Figure 1. ① Image acquisition: images of corn seedling crops and weeds were collected, and a dataset was constructed in the context of straw cover. ② Weed detection: a lightweight YOLO v5s_FasterNet_CBAM_WIoU weed detection network model incorporating an attention mechanism, based on the YOLO v5s model, was constructed to identify and locate weeds. ③ Weed location extraction: weed localization and target extraction were achieved based on the detection information, which was combined with the targeted spraying device. ④ Accurate targeted spraying of weeds: offline experiments were conducted to validate the efficacy of the proposed weed detection and targeted spraying weed control methods. Weed detection comprises weed identification and position localization; once the weed targets are determined and their locations output, targeted spraying proceeds. The method can identify corn seedlings and weeds in straw-covered farmland in real time and spray herbicides on weed targets. This approach can effectively improve the efficiency of pesticide utilization, reduce crop resistance, and minimize environmental pollution.

2.2. Image Acquisition

2.2.1. Dataset Preparation

Experimental image data were collected from Waibao Town, Tieling City, Liaoning Province, and Fuxin Mongolian Autonomous County, Fuxin City, Liaoning Province. The sample diversity and model robustness were enhanced by capturing images of weeds using multiple devices, including a DFK 33UX265 CCD industrial camera with an HT-VM0816-5MP lens mounted on a tripod, a standard aluminum profile collection frame, a Canon DS126402 camera (Canon, Tokyo, Japan), and a Huawei Nova7 cell phone (Huawei, Shenzhen, China). The camera was set at a height of 1 m above the ground, and angles of 0°, 30°, and 45° were used to capture multi-view information on corn and its accompanying weeds at the 2–5 leaf stage. The photographs were taken under natural lighting conditions. To enhance the diversity of image samples and improve the applicability of the detection model to various complex field conditions, images of 2–5 leaf corn and its early associated weeds were also collected in various districts around Beijing in June 2022. The camera and cell phone captured images at resolutions of 5472 × 3072 pixels and 4608 × 3456 pixels, respectively, and all images were saved in JPG format. A total of 2088 images of weeds under different light intensities and complex environments were selected to reduce data overlap and ensure image quality. The dataset contains images of target weeds under various background conditions, such as straw shading, different shooting angles, and densities similar to those of the corn crop, as shown in Figure 2. The main technical parameters of the camera and lens are shown in Table 1.

2.2.2. Image Annotation

Image preprocessing involves image labeling, image enhancement, and dataset partitioning [41]. The image data were manually labeled with corn and weed labels using the visual annotation tool LabelImg, with the minimum enclosing rectangle of each target used for bounding box labeling. The image paths, width and height dimensions, number of channels, and location information of the corn and weed bounding boxes were recorded in standard XML format and stored in PASCAL VOC format.
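For illustration, the following minimal Python sketch reads one LabelImg-produced PASCAL VOC annotation of the kind described above; the field layout follows the standard VOC schema, while the file name and class strings are assumptions.

```python
# Minimal sketch: reading one PASCAL VOC annotation produced by LabelImg.
# The file name and class names ("corn", "weed") are illustrative assumptions.
import xml.etree.ElementTree as ET

def read_voc_annotation(xml_path):
    """Return image size and a list of (label, xmin, ymin, xmax, ymax) boxes."""
    root = ET.parse(xml_path).getroot()
    size = root.find("size")
    w = int(size.find("width").text)
    h = int(size.find("height").text)
    boxes = []
    for obj in root.findall("object"):
        label = obj.find("name").text          # e.g., "corn" or "weed"
        bb = obj.find("bndbox")
        boxes.append((label,
                      int(bb.find("xmin").text), int(bb.find("ymin").text),
                      int(bb.find("xmax").text), int(bb.find("ymax").text)))
    return (w, h), boxes

# Example: (w, h), boxes = read_voc_annotation("weed_0001.xml")
```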

2.2.3. Image Preprocessing

Images collected in the complex environment of actual fields are subject to blurred images and poor contrast due to lighting conditions, camera shake, fast movement, etc. The raw data are not always suitable for the detection model, and it is necessary to perform preprocessing operations on the images to prepare the training data.
To further expand the experimental dataset and improve the diversity of image samples, a combination of mosaic data enhancement and traditional image enhancement techniques was used to augment the image samples of corn seedlings and weeds in straw-covered plots. Image enhancement techniques such as cropping, translation, brightness adjustment, noise addition, rotation, and mirroring were used to expand the dataset. By adjusting the brightness coefficient and rotation angle within a certain range, the impact of unstable lighting intensity and the dependence of the weed recognition model on particular image attributes are reduced. A small amount of noise is randomly added to perturb the RGB pixels of the image, which reduces overfitting of the training model. Cropping and translating images further expand the dataset, improving the stability and robustness of the detection model. The image enhancement code has been uploaded to a public GitHub repository (https://github.com/XXSwxh/Precision-Weed-Management, accessed on 12 November 2024). The image data enhancement results are shown in Figure 3, with the number of images increased from the initial 2088 to 5000. Of these, 80%, 10%, and 10% were randomly selected as the training, test, and validation sets, respectively.
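As a simplified illustration of these operations (the authors' full pipeline is in the GitHub repository above; the parameter ranges here are assumptions), the following sketch applies brightness scaling, rotation, Gaussian noise, and mirroring to one image:

```python
# Simplified illustration of the augmentations described above; parameter
# ranges are assumptions, not the values used by the authors.
import cv2
import numpy as np

def augment(img, rng=None):
    if rng is None:
        rng = np.random.default_rng()
    out = []
    # Brightness: scale pixel intensities within a bounded range.
    out.append(cv2.convertScaleAbs(img, alpha=rng.uniform(0.7, 1.3), beta=0))
    # Rotation about the image center by a small random angle.
    h, w = img.shape[:2]
    M = cv2.getRotationMatrix2D((w / 2, h / 2), rng.uniform(-15, 15), 1.0)
    out.append(cv2.warpAffine(img, M, (w, h)))
    # Additive Gaussian noise on the RGB pixels.
    noise = rng.normal(0, 8, img.shape)
    out.append(np.clip(img.astype(np.float32) + noise, 0, 255).astype(np.uint8))
    # Horizontal mirroring.
    out.append(cv2.flip(img, 1))
    return out
```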

2.3. Construction of Weed Detection Model

At the outset of this study, preliminary experiments were conducted to compare the efficacy of the detection models Fast R-CNN, SSD, YOLO v4, YOLO v5s, and YOLOv8 in weed identification. The findings from these initial tests indicated that while YOLO v5s surpassed its counterparts in terms of weed identification capabilities, further enhancements could be achieved under specific conditions. Consequently, YOLO v5s was chosen as the foundational model for our investigation.
To ensure model detection accuracy while further improving detection efficiency and achieving a balanced trade-off between detection accuracy and speed, we initially selected three neural network architectures, FasterNet, MobileViT, and EfficientViT, to be integrated separately with YOLO v5s and to compare their performance in weed recognition. The experimental results indicate that the FasterNet architecture demonstrated superior performance in weed recognition, effectively reducing memory consumption and enhancing detection speed. The original structure was replaced with the FasterNet feature extraction network, as shown in Figure 4①. In addition, to improve the ability of the model to extract features of small-targeted weeds and enhance robustness, we incorporated the CBAM attention mechanism (as shown in Figure 4②) into the neck part and optimized the loss function to WIoU. These modifications enhanced the YOLO v5s model architecture, addressing the high complexity and computational challenges associated with image processing and improved deployment on embedded devices. Detailed explanations of these modules are provided in the following sections. The improved model, abbreviated as YOLO v5s_FasterNet_CBAM_WIoU, is shown in Figure 4.
1. FasterNet feature extraction network
The performance of the target detection model heavily relies on the design of the backbone network, which is crucial for extracting and representing target features. To enhance the precision and efficiency of weed detection and reduce the model size for real-time target detection and deployment on edge devices, this study introduced the FasterNet network architecture in the backbone section, replacing the original network, as shown in Figure 5.
The model comprised four hierarchical stages, each constructed using a FasterNet Block [42]. An embedding layer was added before Stage 1, and a merging layer was applied before Stages 2 to 4 for spatial downsampling and channel expansion. The FasterNet Block comprised one Partial Convolution (PConv) and two 1 × 1 point-wise convolutions. Normalization and activation layers were placed in the middle layer of the PConv structure to preserve the diversity of the weed target features, achieve lower latency, and effectively reduce redundant computation and memory access in convolutional neural networks. The architecture effectively captured weed target information of varying scales and complexities. For input $I \in \mathbb{R}^{c \times h \times w}$, conventional convolution (Conv) employed filters $W \in \mathbb{R}^{k \times k}$ on all channels to obtain output $O \in \mathbb{R}^{c \times h \times w}$, as shown in Figure 5a. PConv applied regular convolution to only some channels for spatial feature extraction while leaving the remaining channels unchanged, as shown in Figure 5b. The numbers of memory accesses for Conv and PConv are, respectively, as follows:
$$h \times w \times 2c + k^2 \times c^2 \approx h \times w \times 2c, \tag{1}$$
$$h \times w \times 2c_p + k^2 \times c_p^2 \approx h \times w \times 2c_p, \tag{2}$$
where h and w are the height and width of the feature map, and k, c, and c_p are the convolution kernel size, the total number of channels, and the number of channels to which conventional convolution is applied, respectively. The remaining (c − c_p) channels are not involved in the computation; in practice, c_p/c = 1/4 is typically used.
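The following PyTorch sketch illustrates PConv and the FasterNet Block as described above (channel ratio c_p/c = 1/4, one PConv followed by two 1 × 1 point-wise convolutions); the expansion ratio and normalization choices are assumptions, not the authors' exact configuration.

```python
# Minimal sketch of Partial Convolution (PConv): a k x k convolution is applied
# to the first c_p = c/4 channels only, and the remaining channels pass through.
import torch
import torch.nn as nn

class PConv(nn.Module):
    def __init__(self, channels, kernel_size=3, ratio=0.25):
        super().__init__()
        self.cp = int(channels * ratio)            # channels convolved (c_p/c = 1/4)
        self.conv = nn.Conv2d(self.cp, self.cp, kernel_size,
                              padding=kernel_size // 2, bias=False)

    def forward(self, x):
        x1, x2 = torch.split(x, [self.cp, x.size(1) - self.cp], dim=1)
        return torch.cat((self.conv(x1), x2), dim=1)  # untouched channels kept

# FasterNet Block: PConv followed by two 1 x 1 point-wise convolutions, with
# normalization and activation in the middle layer; residual connection assumed.
class FasterNetBlock(nn.Module):
    def __init__(self, channels, expansion=2):
        super().__init__()
        hidden = channels * expansion
        self.pconv = PConv(channels)
        self.pw = nn.Sequential(
            nn.Conv2d(channels, hidden, 1, bias=False),
            nn.BatchNorm2d(hidden),
            nn.ReLU(inplace=True),
            nn.Conv2d(hidden, channels, 1, bias=False))

    def forward(self, x):
        return x + self.pw(self.pconv(x))
```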
2. CBAM attention mechanism
The attention mechanism addresses the issue of information overload and allows the network to process critical information efficiently. In straw-covered farmland, varying weed target sizes, local occlusion, and the similarity of features between corn plants and weeds during the maize seedling stage are prominent challenges. To enhance the ability of the model to extract and represent significant features, this study integrated the CBAM into the neck section of the network. The CBAM functions in the spatial and channel dimensions to highlight weed information in the images.
The channel attention module performed global max pooling and global average pooling on the input weed feature maps to aggregate weed spatial information [43]. These weed features were compressed, merged, and activated in the spatial dimensions through a shared neural network. The spatial attention module applied global max pooling and global average pooling along the channel dimension to compress the channel size of the feature maps output by the channel attention module, integrating channel information, effectively fusing weed-relevant details at various scales, and minimizing the impact of irrelevant information on the detection task. The computational formulas are presented in Equations (3) and (4).
$$M_C(F) = \sigma\big(MLP(AvgPool(F)) + MLP(MaxPool(F))\big) = \sigma\big(W_1(W_0(F^c_{avg})) + W_1(W_0(F^c_{max}))\big), \tag{3}$$
$$M_S(F) = \sigma\big(f^{7 \times 7}([AvgPool(F); MaxPool(F)])\big) = \sigma\big(f^{7 \times 7}([F^s_{avg}; F^s_{max}])\big), \tag{4}$$
where σ denotes the sigmoid function. To reduce parameter overhead, the hidden activation size is set to $\mathbb{R}^{C/r \times 1 \times 1}$, where r is the reduction ratio, with $W_0 \in \mathbb{R}^{C/r \times C}$ and $W_1 \in \mathbb{R}^{C \times C/r}$. Note that the MLP weights W_0 and W_1 are shared for both inputs, the ReLU activation function follows W_0, and $f^{7 \times 7}$ represents a convolution operation with a filter size of 7 × 7.
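A compact PyTorch sketch of this module, following Equations (3) and (4), is given below; the reduction ratio of 16 is a common default and an assumption here.

```python
# Sketch of CBAM: channel attention with a shared MLP over average- and
# max-pooled descriptors (Eq. 3), then spatial attention with a 7x7
# convolution over pooled channel maps (Eq. 4).
import torch
import torch.nn as nn

class CBAM(nn.Module):
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.mlp = nn.Sequential(                    # shared W1(W0(.))
            nn.Conv2d(channels, channels // reduction, 1, bias=False),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1, bias=False))
        self.spatial = nn.Conv2d(2, 1, 7, padding=3, bias=False)  # f^{7x7}

    def forward(self, x):
        # Channel attention: sigmoid(MLP(AvgPool(F)) + MLP(MaxPool(F))).
        avg = self.mlp(x.mean(dim=(2, 3), keepdim=True))
        mx = self.mlp(x.amax(dim=(2, 3), keepdim=True))
        x = x * torch.sigmoid(avg + mx)
        # Spatial attention: sigmoid(conv7x7([AvgPool(F); MaxPool(F)])).
        s = torch.cat([x.mean(dim=1, keepdim=True),
                       x.amax(dim=1, keepdim=True)], dim=1)
        return x * torch.sigmoid(self.spatial(s))
```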
3. Optimization of loss function
The loss function significantly influences the convergence speed and accuracy of the network. A well-defined loss function can substantially enhance model performance. Significant regression errors often occurred because of the small sizes of the maize seedling crops and weed targets in the calculation of the original CIOU loss function, leading to unbalanced training samples and compromised generalizability. Research has shown that WIoU, which incorporates factors such as aspect ratio, position, and scale of the target frame, measures the match between predicted and actual frames more accurately [44]. Therefore, in this study, WIoU replaced the original CIoU. The schematic of WIoU is shown in Figure 6.
WIoU utilized “outliers” as an alternative to CIoU for quality assessment of anchor frames, aiming to reduce the competitiveness of high-quality anchor frames while masking the effects of low-quality weed samples. A gradient gain assignment strategy was also used to reduce competition from high-quality anchor frames and minimize negative gradients generated by low-quality samples. The position and scale losses were effectively balanced to improve the weed detection accuracy and generalization ability of the model. The calculation formula is provided in Equation (5).
$$L_{WIoU} = r \cdot \exp\!\left(\frac{(x - x_{gt})^2 + (y - y_{gt})^2}{W_g^2 + H_g^2}\right) L_{IoU}, \quad L_{IoU} = 1 - \frac{W_i H_i}{wh + w_{gt}h_{gt} - W_i H_i}, \quad r = \frac{\beta}{\delta \alpha^{\beta - \delta}}, \quad \beta = \frac{L_{IoU}}{\overline{L_{IoU}}} \in [0, +\infty), \tag{5}$$
where x and y are the coordinates of the center point of the prediction frame; w and h are the width and height of the prediction frame, respectively; x_gt and y_gt are the coordinates of the center point of the real frame; w_gt and h_gt are the width and height of the real frame, respectively; W_g and H_g are the width and height, respectively, of the smallest closed area formed by the prediction frame and the real frame; and W_i and H_i are the width and height, respectively, of the overlapped area between the prediction frame and the real frame. $\overline{L_{IoU}}$ is the sliding average of L_IoU. β is the outlier degree, with larger values indicating poorer sample quality. r is the focusing coefficient. α and δ are hyperparameters with values of 1.9 and 3, respectively.
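The sketch below implements Equation (5) for center-format boxes (cx, cy, w, h) in PyTorch; the running mean of L_IoU is supplied by the caller, and detaching the enclosing-box terms and β from the gradient follows common WIoU practice rather than an explicit statement in the paper.

```python
# Hedged sketch of the WIoU loss in Equation (5); alpha=1.9 and delta=3 as
# stated in the text, iou_mean is the sliding average of L_IoU.
import torch

def wiou_loss(pred, gt, iou_mean, alpha=1.9, delta=3.0):
    (x, y, w, h), (xg, yg, wgt, hgt) = pred.unbind(-1), gt.unbind(-1)
    # Intersection width/height W_i, H_i.
    wi = (torch.min(x + w / 2, xg + wgt / 2) - torch.max(x - w / 2, xg - wgt / 2)).clamp(0)
    hi = (torch.min(y + h / 2, yg + hgt / 2) - torch.max(y - h / 2, yg - hgt / 2)).clamp(0)
    l_iou = 1 - wi * hi / (w * h + wgt * hgt - wi * hi)
    # Smallest enclosing box W_g, H_g (gradients detached, as in WIoU practice).
    wg = (torch.max(x + w / 2, xg + wgt / 2) - torch.min(x - w / 2, xg - wgt / 2)).detach()
    hg = (torch.max(y + h / 2, yg + hgt / 2) - torch.min(y - h / 2, yg - hgt / 2)).detach()
    r_dist = torch.exp(((x - xg) ** 2 + (y - yg) ** 2) / (wg ** 2 + hg ** 2))
    beta = (l_iou / iou_mean).detach()               # outlier degree
    r = beta / (delta * alpha ** (beta - delta))     # focusing coefficient
    return (r * r_dist * l_iou).mean()
```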

2.4. Evaluation Index of Weed Identification Accuracy

To evaluate the detection performance of the proposed corn seedling weed detection model, we selected recognition precision (P), recall (R), and mean average precision (mAP) as the evaluation indices of the weed detection model in this study. The formulas for calculating these values are as follows:
$$P = \frac{TP}{TP + FP} \times 100\%, \quad R = \frac{TP}{TP + FN} \times 100\%, \quad AP = \int_0^1 P(R)\,dR, \quad mAP = \frac{1}{N}\sum_{i=1}^{N} AP_i, \tag{6}$$
where TP is the number of true positive samples detected. FP represents the number of false positive samples detected. FN is the number of false negative samples detected. AP denotes the combined effect of precision and recall, reflected by the area under the PR curve. mAP is the mean of the AP values for all target categories, and N is the total number of target categories.
Frames per second (FPS) is the number of weed images the model can process per second. This metric was employed to assess the real-time performance of the model; a reduction in the recognition time of a single image indicates enhanced algorithmic efficiency.
Parameters denote the total number of parameters in the model, which affects the memory footprint and is a common metric for evaluating model size; the smaller the number of parameters, the smaller the model.
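For reference, the following sketch computes the indices in Equation (6) and an FPS measurement; the step that matches detections to ground-truth boxes at an IoU threshold (which yields the TP/FP/FN counts) is omitted for brevity.

```python
# Sketch of the evaluation indices in Equation (6) plus an FPS measurement.
import time
import numpy as np

def precision_recall(tp, fp, fn):
    """P and R as percentages, per Equation (6)."""
    return tp / (tp + fp) * 100, tp / (tp + fn) * 100

def average_precision(precisions, recalls):
    # Area under the P-R curve (trapezoidal rule over recall sorted ascending).
    p, r = np.asarray(precisions, float), np.asarray(recalls, float)
    order = np.argsort(r)
    p, r = p[order], r[order]
    return np.sum((r[1:] - r[:-1]) * (p[1:] + p[:-1]) / 2)

def measure_fps(model_fn, images):
    """FPS: images processed per second by the detection function."""
    start = time.perf_counter()
    for img in images:
        model_fn(img)
    return len(images) / (time.perf_counter() - start)
```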

2.5. Implementation Details

The training and testing experiments for the weed detection model were conducted using the PyTorch open-source framework. The system was equipped with an Intel(R) Core(TM) i7-10700K processor. The development environment consisted of Python 3.8 running on the Windows 10 (64-bit) operating system with Anaconda 3.5.0, CUDA 11.1, and cuDNN 8.2.1. The model was trained and optimized using the Adam optimizer. For the image dataset, the input size was set to 640 × 640 pixels, the number of training epochs was 400, and the batch size was 32.
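Gathered into one place, the reported setup corresponds to a configuration like the following; the key names mirror common YOLO v5 conventions and are assumptions, not the authors' exact files.

```python
# Hedged sketch of the reported training configuration.
train_config = {
    "img_size": 640,       # network input resolution (pixels)
    "epochs": 400,         # training epochs
    "batch_size": 32,
    "optimizer": "Adam",   # adaptive optimizer, as stated
    "cuda": "11.1",        # CUDA / cuDNN 8.2.1 environment
}
```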

3. Results and Discussion of Weed Identification Accuracy

3.1. Performance Evaluation of Different Detect Models

In this section, we compared and analyzed common target detection methods to verify the effectiveness of the proposed method. The target detection models were Faster R-CNN, SSD, YOLO v4, YOLO v5s, and YOLO v8. The detection results obtained on the dataset are listed in Table 2.
Accurate weed detection in the complex farmland environment of straw mulching and returning is essential for effective targeted spraying for weed control. Among the commonly used detection models Faster R-CNN, SSD, YOLO v4, and YOLO v8, YOLO v5s exhibited the highest precision, recall, and mAP values. Although second to YOLO v8 in speed, YOLO v5s achieved a detection speed of 70.92 frames per second (f/s), satisfying real-time requirements. Therefore, we selected YOLO v5s as the baseline network for weed detection, prioritizing model precision while ensuring sufficient detection speed.

3.2. Performance Evaluation of Different Network Architectures

In this section, in order to improve the computational efficiency and performance of the model, three efficient neural networks, FasterNet, MobileViT, and EfficientViT, were selected for comparison and analysis to verify their effectiveness in terms of weed detection accuracy and efficiency. The detection results obtained on the dataset are shown in Table 3.
In the complex farmland environment of straw mulching and returning, it is necessary to reduce model complexity and improve detection speed to realize real-time, accurate detection while maintaining detection accuracy. The experimental results show that among the three network architectures FasterNet, MobileViT, and EfficientViT, the precision, recall, mAP, and detection speed of FasterNet were 90.3%, 87.5%, 92.3%, and 74.6 f/s, respectively, higher than those of the other two architectures. Although FasterNet has the largest model memory of the three at 6.34 MB, it is still relatively small and can be effectively deployed on mobile devices. Therefore, we chose the FasterNet architecture to integrate with the YOLO v5s model to further improve weed detection accuracy and efficiency.

3.3. Performance Evaluation of Improvement Modules

To enable better model deployment on mobile platforms and high-speed agricultural machinery, further reductions in model memory size and enhancement in detection speed were necessary. We improved the weed detection performance by replacing the backbone network, incorporating an attention mechanism, and optimizing the loss function. To evaluate the performance of each module, we conducted training using uniform hyperparameters, the same dataset, and identical experimental conditions. The results are presented in Table 4.
Based on the data presented in the table, introducing the FasterNet backbone network significantly reduced the number of model parameters and the computational load compared to the original YOLO v5s. The model size and computational load were reduced to 46.3% and 44.9% of the baseline model, respectively. However, this resulted in a 1.3% decrease in detection accuracy.
Applying the CBAM attention mechanism enhanced the detection accuracy of the model. As shown in Figure 7, Grad-CAM visualizations of the baseline and improved networks demonstrated that the CBAM module allows effective features to cover more areas of the maize crops and weeds. This improves the ability of the model to identify small-target features in images, resulting in a 0.2% increase in average detection accuracy compared to the original YOLO v5s. The improved network better extracted the deep feature information of weeds at different growth stages, thereby enhancing the feature extraction and representation capabilities.
Introducing the WIoU loss function improved the detection accuracy and speed of the model. The combined FasterNet backbone network, CBAM attention mechanism, and optimized loss function effectively reduced the memory usage and computational load of the model while improving detection speed. Compared to the original YOLO v5s, despite the 0.9% decrease in model accuracy, the model memory usage and computational load were reduced by 50.36% and 53.16%, respectively, and the detection speed was increased by 8.46%.
In summary, the improvement strategy that combined FasterNet, CBAM, and the WIoU loss function significantly reduced the model parameters and computational load while maintaining accuracy and enhancing the detection performance and practicality of the model.

3.4. Effectiveness Evaluation of Weed Identification Model

3.4.1. Comparative Analysis of Training Results of Weed Identification

The experiment recorded the changes in loss, mAP values, and model memory size during the training process (Figure 8). The loss curve stabilized after 400 iterations, indicating consistent convergence of the detection model. Compared to the basic network model, the improved YOLO v5s_FasterNet_CBAM_WIoU model exhibits a lower loss value, which effectively enhances feature extraction for complex weeds. mAP reflects the difference in the learning ability of the model; although the mAP of the improved YOLO v5s_FasterNet_CBAM_WIoU model showed a slight decrease compared to the basic model, the difference is minimal. In addition, the model memory size was significantly reduced, making it more suitable for deployment on mobile platforms. Overall, considering the changes in each metric, the improved YOLO v5s_FasterNet_CBAM_WIoU model demonstrated better comprehensive performance and practical applicability.

3.4.2. Comparative Analysis of Model Identification Applications Effects

To evaluate the weed detection performance of both the baseline and improved YOLO v5s models and verify the effectiveness of the improved model in straw-covered complex farmland environments, six images of corn and weeds under different lighting conditions and complex environments were selected. Figure 9 compares the detection results before and after improving the YOLO v5s model.
A comparison of the figures demonstrates that the improved YOLO v5s model exhibits greater accuracy in detecting weed targets under various lighting conditions, weeds obscured by straw and corn crops, and weeds with small targets that are dense and similar to crops. Figure 9a, b show that the YOLO v5s model does not detect the weeds obscured by straw and corn seedlings, whereas the improved YOLO v5s model accurately detects these weeds and captures small weed features. Figure 9c shows significant differences in weed target scales and the presence of small, dense weeds similar to corn seedlings. Both detection models can accurately distinguish crop seedlings from weeds, but the improved YOLO v5s model performs better at detecting small weed targets. Figure 9d shows weed images captured in the nighttime environment. Under low-light conditions, the YOLO v5s model fails to identify small weed targets at the image edges, erroneously classifying them as corn seedlings. In contrast, the improved YOLO v5s model accurately detects all weeds without omissions or misclassifications. A comparison of the performance of the two models in various complex scenarios shows that the improved YOLO v5s model significantly enhances the inter-class differences between the weeds and corn seedling leaves. The model also improves the extraction and representation of small-target weed and crop seedling features in the images. Furthermore, the improved model mitigates the detection inaccuracies caused by straw occlusion and dense small-target weeds, as well as the low detection efficiency resulting from the coexistence of multiscale targets and the overlap between crop seedlings and weeds.

4. Application of Targeted Spraying Weeding

4.1. Targeted Spraying Device and Its Structure Composition

To reduce reliance on traditional herbicide application without affecting crop production and to achieve precise management of weeds, an intelligent targeted spraying weed control device was designed, consisting mainly of an image acquisition unit, a walking device, and a targeted spraying device, as shown in Figure 10. The image acquisition unit was an industrial camera responsible for collecting image data of corn seedling crops and weeds. The walking device was the motion carrier for weed detection and precisely targeted spraying in straw-covered farmland. The targeted spraying device primarily comprised a target detection module, an electronically controlled spraying module, and a pressure-regulated pesticide delivery module [45]. The target detection module deploys the trained detection model on the upper computer controller, an Nvidia Jetson TX2 development board kit, to receive and process weed image information for weed target identification and localization. The electronically controlled spraying module, which included a lower computer controller and the targeted spraying device, used an STM32 development board kit for sensing information acquisition and actuator drive control. The upper computer controller and visual sensors established a weed detection node and communication network through serial communication, facilitating the transmission of weed target location information. This process is shown in Figure 11. The targeted spraying device consisted of a two-dimensional steering servo equipped with a nozzle. The servo received drive control instructions from the lower computer controller to adjust the spatial angle in two dimensions, ensuring precise weed targeting.
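As an illustration of the serial link between the upper computer (Jetson TX2) and the lower computer (STM32), the sketch below sends one weed's world coordinates to the spray controller; the port name, baud rate, and message framing are all assumptions, since the paper does not specify the protocol.

```python
# Hedged sketch of the upper-computer side of the serial link; framing
# (0xAA header, two little-endian int16 coordinates, XOR checksum) is assumed.
import serial   # pyserial
import struct

def send_weed_target(port, x1_mm, y1_mm):
    """Transmit one weed's world coordinates (in mm) to the spray controller."""
    payload = struct.pack("<hh", int(x1_mm), int(y1_mm))
    checksum = 0
    for b in payload:
        checksum ^= b
    port.write(bytes([0xAA]) + payload + bytes([checksum]))

# Example usage (port name assumed for a Jetson TX2 UART):
# with serial.Serial("/dev/ttyTHS2", 115200, timeout=0.1) as port:
#     send_weed_target(port, 125, -80)
```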

4.2. Localization and Position Extraction of Weeds Target

After identifying and locating the weed target in the image frame using the weed identification model based on the improved YOLO v5s, the coordinates of the weed positions were first calculated and extracted. For each weed, the model identified and output the pixel values of the diagonal vertices of the weed prediction box. By defining the machine operation direction as the y-axis and the direction perpendicular to it as the x-axis, we established the pixel coordinate system. The position coordinates of the diagonal vertices of the weed target prediction box were denoted as A (x01, y01) and B (x02, y02), as shown in Figure 12. According to Equation (7), the image frame position coordinates of the center point of the target weed prediction box were calculated as (x0, y0).
$$x_0 = \frac{x_{01} + x_{02}}{2}, \quad y_0 = \frac{y_{01} + y_{02}}{2}, \tag{7}$$
After detecting and localizing the weed target in the image frame, the weed coordinates (x0, y0) within the image frame were mapped to the global frame in the world coordinate system, resulting in the coordinates (x1, y1). Figure 12 shows the coordinate systems of the image frame and global frame. The camera was mounted on the front bar at a height of H above the ground and was perpendicular to the ground, as shown in Figure 13. Based on the invariant perspective of the weed in both the pixel coordinate system and the world coordinate system, the transformation model between the weed pixel coordinates and world coordinates was established as follows:
$$\beta = \arctan\frac{l}{2H}, \quad \Delta\beta = \frac{2x_0 - p_1}{p_1}\,\beta, \quad x_1 = H \tan(\Delta\beta), \tag{8}$$
$$\alpha = \arctan\frac{w}{2H}, \quad \Delta\alpha = \frac{p_2 - 2y_0}{p_2}\,\alpha, \quad y_1 = H \tan(\Delta\alpha), \tag{9}$$
where α and β are the maximum viewing angles of the camera in the vertical and horizontal directions, respectively; Δα and Δβ are the vertical and horizontal viewing angles of the weed location, respectively; p1 × p2 is the pixel size of the weed image; and l and w are the actual distances in the x- and y-directions in the camera field of view.
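A small Python sketch of this mapping (Equations (7)–(9)) is given below; the sign conventions follow the symmetric reading of the equations and should be verified against the actual camera rig.

```python
# Sketch of the pixel-to-world mapping: H is the camera mounting height,
# (p1, p2) the image size in pixels, (l, w) the ground footprint of the view.
import math

def box_center(x01, y01, x02, y02):
    """Center of the prediction box, Equation (7)."""
    return (x01 + x02) / 2, (y01 + y02) / 2

def pixel_to_world(x0, y0, H, p1, p2, l, w):
    beta = math.atan(l / (2 * H))              # max horizontal half-angle
    alpha = math.atan(w / (2 * H))             # max vertical half-angle
    d_beta = (2 * x0 - p1) / p1 * beta         # Equation (8)
    d_alpha = (p2 - 2 * y0) / p2 * alpha       # Equation (9)
    return H * math.tan(d_beta), H * math.tan(d_alpha)
```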

4.3. Evaluation of Model Application Effect

Herein, an evaluation system was established to assess the target localization error and the targeted spraying weed control performance. This system was designed to validate the feasibility of the intelligent deep learning-based targeted spraying weed control technology proposed herein.

4.3.1. Evaluation of Weed Localization Error

Weed target extraction and on-target control were essential for achieving precise targeted spraying. The weeds were randomly placed at different positions within the camera view of the spraying device, and the actual spraying position relative to the placement position was recorded; the principle is shown in Figure 14, with the red dots representing the placement positions of weeds and the yellow × marks representing the actual spraying positions. The relative offset error and root mean square error (RMSE) were calculated to validate the target-positioning accuracy of the spraying device. The RMSE quantifies the root mean square of the average error between the weed placement and actual spraying positions; a smaller RMSE indicates higher targeting accuracy. The relative offset error and RMSE were calculated as in Equations (10) and (11):
$$\delta = \left(\frac{|x - x_a|}{x} + \frac{|y - y_a|}{y}\right) \times 100\%, \tag{10}$$
$$RMSE = \sqrt{\frac{\sum_{i=1}^{n}\left[(x - x_a)^2 + (y - y_a)^2\right]}{n}}, \tag{11}$$
where (x, y) is the weed placement position, (x_a, y_a) is the actual spraying position, and n is the number of test points.
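These two metrics can be computed over a batch of trials as in the following sketch; positions are assumed to be expressed in a frame where the placement coordinates are nonzero.

```python
# Sketch of the localization-error metrics in Equations (10) and (11);
# placed and sprayed are (n, 2) arrays of (x, y) positions.
import numpy as np

def relative_offset(placed, sprayed):
    dx = np.abs(placed[:, 0] - sprayed[:, 0]) / np.abs(placed[:, 0])
    dy = np.abs(placed[:, 1] - sprayed[:, 1]) / np.abs(placed[:, 1])
    return (dx + dy) * 100                     # Equation (10), percent

def rmse(placed, sprayed):
    return np.sqrt(np.mean(np.sum((placed - sprayed) ** 2, axis=1)))  # Eq. (11)
```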

4.3.2. Evaluation of Targeted Spraying Weeding Performance

The deep learning-based targeted spraying weed control system should identify and localize weed targets in real time to achieve accurate targeted spraying. The overall weed control effect depends on the identification performance of the weed detection model and the target localization accuracy of the spraying device. To effectively evaluate the overall spraying and weed control effectiveness of the proposed method, this study defined several metrics: effective recognition rate (ERR), relative spraying rate (RSR), effective spraying rate (ESR), leakage spraying rate (LSR), and mistaken spraying rate (MSR). These metrics were used to construct the evaluation model and assess the weed control performance of the proposed method, as shown in Table 5. The overall principle is illustrated in Figure 15.
ERR refers to the proportion of weed targets correctly identified as weeds (n) relative to the total number of weeds (N) within the experimental area. ESR represents the proportion of weed targets correctly identified as weeds and successfully sprayed (n1) relative to the total number of sprayed targets (n1 + n1′) within the experimental area. RSR indicates the proportion of sprayed targets (n1 + n1′) relative to the total number of targets identified as weeds (including the number of weed targets correctly identified as weeds, n, and the number of corn plants incorrectly identified as weeds, n3′). MSR is the proportion of corn plants incorrectly identified as weeds and successfully sprayed (n1′) relative to the total number of sprayed targets (n1 + n1′) within the experimental area. LSR refers to the proportion of weed targets that were not sprayed (including the number of weed targets correctly identified as weeds but unsuccessfully sprayed, n2; the number of weed targets incorrectly identified as corn, n3; and the number of weeds not detected, n4) relative to the total number of weeds (N) within the experimental area.
ERR reflects the weed identification accuracy of the improved YOLOv5s model proposed in this study. RSR indicates whether the spraying system can quickly respond and perform the spraying action based on the detected weed information. ESR, MSR, and LSR reflect the weeding performance of the integrated weed detection model and spraying system in the targeted spraying system.
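From the definitions above, these rates follow directly from the raw counts, as in the sketch below (variable names mirror the notation in the text, with n1p and n3p standing for n1′ and n3′):

```python
# Sketch computing the spraying metrics defined above from raw counts.
def spray_metrics(N, n, n1, n1p, n2, n3, n3p, n4):
    err = n / N * 100                      # effective recognition rate
    esr = n1 / (n1 + n1p) * 100            # effective spraying rate
    rsr = (n1 + n1p) / (n + n3p) * 100     # relative spraying rate
    msr = n1p / (n1 + n1p) * 100           # mistaken spraying rate
    lsr = (n2 + n3 + n4) / N * 100         # leakage (missed) spraying rate
    return err, esr, rsr, msr, lsr
```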

4.4. Testing and Analysis of Targeted Spraying Application

4.4.1. Experimental Results and Analysis of Weed Localization Error

To assess the target-positioning performance of the spraying system, weeds were randomly positioned in four quadrants of the coordinate system within the field of view of the camera. A laser diode mounted on the servo gimbal replaced the nozzle to simulate the spraying process, and positional testing was conducted to verify the target-positioning accuracy of the system. The test results are shown in Table 6.
The results revealed a slight positional deviation between the actual spraying position represented by the laser point and the actual locations of the weeds, typically with differences within the millimeter scale. The relative errors ranged from 1.9% to 3.5%, with an RMSE of 0.12. This suggests that the disparity between the spraying position of the device and the actual weed location was minimal, thus meeting the precision requirements for targeted spraying. These findings validate the feasibility of the targeted weed control method proposed in this study.

4.4.2. Experimental Results and Analysis of Targeted Spraying Weeding

The overall weed control efficacy of the proposed method was assessed by conducting weed detection and targeted spraying experiments in Shuangta District, Chaoyang City, Liaoning Province. The corn experimental field had a row spacing of 60 cm and a plant spacing of 25 cm. A total of 30 corn plants and 30 weeds were selected as experimental targets, as shown in Figure 16. The results are detailed in Table 7.
Based on the data in Table 7, the herbicide-targeted spraying test achieved an effective weed recognition rate of 90% and an effective spraying rate of 96.3%. The missed spraying rate was 13.3%, and the incorrect spraying rate was 3.7%. These results indicate that the deep learning-based weed detection and targeted spraying method proposed in this study is feasible. The improved YOLO v5s model showed outstanding performance in detecting corn and weeds under experimental conditions, with good detection results for weed targets that were shaded by straw, overlapped with corn seedlings, small and dense, or captured in low light at night, confirming the robustness and effectiveness of the model. Compared with the Peucedani Radix and weed detection algorithm proposed by Zhang et al., the improved YOLO v5s model can effectively identify individual weeds and accurately locate weed centroids in more complex farmland environments involving straw occlusion, crop overlap, and small, dense targets [46]. In terms of detection model performance, our model achieved a 9.3% reduction in memory usage, a 187.8% increase in detection speed, and a 5.1% improvement in weed detection accuracy compared with the corn and weed detection algorithm proposed by Li et al. [11]. In terms of targeted spraying accuracy, the effective weed identification rate and spraying accuracy of our model and system were higher than those of the corn and weed detection and spraying system proposed by Wang et al., demonstrating the superior performance of our model and system [19].
In the weed-targeted spraying experiment, both missed and incorrect spraying incidents were observed. Leakage (missed) spraying refers to weed targets within the experimental area that were not successfully sprayed, which includes three cases: weed targets correctly identified but not sprayed, weed targets misidentified as corn and thus not sprayed, and weed targets not detected by the model. Missed and incorrect recognition are attributed to the uneven ground, exacerbated by the disorderly distribution of straw in the farmland, which intensifies camera shake as the machine moves; this reduces image acquisition quality and, consequently, lowers the effective recognition rate. Cases in which the model successfully recognized weeds but no spray was delivered can result from ground unevenness causing nozzle vibration and reduced targeting accuracy, or from delayed pump activation due to a slower control system response, leading to spraying failure. Incorrect identification of corn as weeds was the primary cause of incorrect spraying: at the seedling stage, corn resembles certain weeds, leading the model to mistakenly identify corn seedlings as weeds.
Missed and incorrect spraying, which prevents herbicide from accurately targeting weeds and may cause damage to corn crops, results from a combination of model detection performance, the complex farmland environment, and the targeted spraying control system. To further enhance the weed control effectiveness of the targeted spraying machine and develop more intelligent agricultural equipment, it is necessary to gather additional weed characteristics across different environments and periods under straw cover. Continuous adjustments to the model’s detection threshold are required to improve detection performance. Additionally, installing damping devices or modifying wheel structures can help reduce motion blur and spraying errors caused by uneven ground. Adjusting and optimizing the nozzle control algorithm will also improve nozzle precision, enhancing overall spraying performance.

5. Conclusions

Under conservation tillage with straw mulching, the surface straw suppresses weed growth and obscures the weeds. This results in random weed target sizes and positions, overlapping or blurred targets, and insufficient feature information, posing new challenges for weed management in the field. To address these issues, we conducted research on precise weed identification and localization, as well as a targeted spraying system based on weed positions. We propose an improved YOLO v5s-based weed identification model and a targeted spraying system. The main conclusions are as follows:
  • Weed Image Dataset Construction: We constructed a weed image dataset during the seedling stage of corn under straw mulching conditions.
  • Precise Weed Identification and Localization: We proposed an improved YOLO v5s model by incorporating a Convolutional Block Attention Module (CBAM), FasterNet feature extraction network, and an improved Weighted Intersection over Union (WIoU) loss function to optimize the network structure and training strategy. This enhancement highlights weed feature information, reduces network redundancy and model memory, and enhances the model’s robustness and stability. Compared to the original network, our model has a 0.9% decrease in average detection accuracy for weeds and corn seedlings but achieves an 8.46% increase in detection speed, with model memory and computational load reduced by 50.36% and 53.16%, respectively. These improvements enhance the model’s ability to recognize and extract small weed features, making it more suitable for deployment on edge intelligent agricultural devices and achieving accurate real-time weed detection.
  • Targeted Spraying System: Based on the improved YOLO v5s model, we designed an intelligent targeted spraying system. This system can accurately spray herbicides according to the identified weed positions, avoiding the waste and environmental pollution associated with traditional spraying methods. This not only improves weeding efficiency but also reduces herbicide usage, contributing to environmental protection.
  • Application Effectiveness Verification: We conducted targeted spraying experiments, and the results showed that the proposed method achieved an effective weed identification rate of 90%, an effective spraying rate of 96.3%, a missed spraying rate of 13.3%, and an erroneous spraying rate of 3.7%. These results demonstrate the robustness of the model and the feasibility of the spraying device for weed-targeted spraying.
Compared to traditional weed management, the method proposed in this paper focuses on the challenges of weed control in straw-covered farmland under conservation tillage. It effectively addresses the interference of unrelated features, such as surface straw and complex backgrounds, with weed detection; accurately distinguishes between crops and weeds; and realizes precise management of weed targets. This approach promotes the development of agriculture in the direction of precision, efficiency, and intelligence, greatly reducing herbicide waste and the environmental pollution caused by traditional large-scale spraying, while ensuring the safety of corn crops and the environment. The successful implementation of this method can effectively promote the application of deep learning algorithms in precision agriculture for weed control and management, playing a significant role in land conservation and utilization and in achieving sustainable agricultural production.
The model demonstrated strong recognition performance on the weed dataset collected from straw-covered farmland. However, our study still has limitations. The image sample size within the dataset is small and cannot encompass the full range of weed characteristics under diverse conditions. With the rapid development of object detection algorithms, enhancing sample diversity and expanding the image dataset is essential. Ground unevenness causes vibration in structures such as the camera and nozzle, leading to reduced image quality and nozzle position shifts, which in turn decreases recognition accuracy and spray precision. Additionally, the spraying system fails to activate the pump promptly based on weed location information, resulting in missed spraying.
In future research, we plan to further expand the image dataset by collecting weed images from different regions and environments to improve the model’s generalization capability. To address the issue of ground unevenness, we will incorporate damping devices or improve walking wheel structures to reduce the negative impact of machine vibration. Additionally, optimizing the response time of the solenoid valve will increase system speed, enabling real-time, precise spraying in complex environments. Enhancing system stability and introducing adaptive algorithms will improve the timeliness and accuracy of spraying decisions while extending the system’s application scope to achieve intelligent variable-rate spraying. These efforts will further enhance the potential of intelligent agricultural equipment.

Author Contributions

Conceptualization, X.W., Q.W., Y.Q. and X.Z.; data curation, Y.Q. and C.W.; formal analysis, X.W. and Y.Q.; funding acquisition, Q.W. and C.L.; investigation, X.Z. and C.W.; methodology, X.W., Q.W. and C.L.; project administration, Q.W.; software, X.W. and C.L.; supervision, Q.W. and C.L.; validation, C.W.; visualization, X.W.; writing—original draft, X.W.; writing—review and editing, X.W., Q.W. and C.L. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the National Key R & D Projects (CN) (Grant No. 2023YFD1500401) and the Research and Development of Key Technologies for Efficient Plant Protection Machine for Soybean and Corn Strip Compound Planting System (China Agricultural University Intramural No. 202405510710112).

Institutional Review Board Statement

Data Availability Statement

The original contributions presented in the study are included in the article; further inquiries can be directed to the corresponding author.

Acknowledgments

We express our gratitude to all members of the Conservation Tillage Research Centre.

Conflicts of Interest

The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

References

  1. Wen, L.; Peng, Y.; Zhou, Y.; Cai, G.; Lin, Y.; Li, B. Effects of conservation tillage on soil enzyme activities of global cultivated land: A meta-analysis. J. Environ. Manag. 2023, 345, 118904. [Google Scholar] [CrossRef] [PubMed]
  2. Salimi Koochi, M.; Madandoust, M. Integrated weed management of cumin (Cuminum cyminum L.) using reduced rates of herbicides and straw mulch. Iran. J. Med. Aromat. Plants Res. 2023, 39, 352–366. [Google Scholar] [CrossRef]
  3. Su, Y.; Ye, S.; Lu, M.; Ma, Y.; Wang, Y.; Wang, S.; Chai, R.; Ye, X.; Zhang, Z.; Ma, C. Effects of straw return on farmland weed abundance and diversity: A meta-analysis. Acta Prataculturae Sin. 2024, 33, 150–160. [Google Scholar] [CrossRef]
  4. Zhang, X.; Xing, S.; Wu, Y. Effects of different straw returning methods on farmland ecological environment: A review. Jiangsu Agric. Sci. 2023, 51, 31–39. [Google Scholar]
  5. Mao, Y.; Li, G.; Shen, J. Weed control efficiency of corn straw residue mulching combining herbicide application in paddy field and its effect on rice yield. Jiangsu Agric. Sci. 2014, 30, 1336–1344. [Google Scholar] [CrossRef]
  6. Fonteyne, S.; Singh, R.G.; Govaerts, B.; Verhulst, N. Rotation, Mulch and Zero Tillage Reduce Weeds in a long-Term Conservation Agriculture Trial. Agronomy 2020, 10, 962. [Google Scholar] [CrossRef]
  7. Huang, H.; Deng, J.; Lan, Y.; Yang, A.; Deng, X.; Zhang, L. A fully convolutional network for weed mapping of unmanned aerial vehicle (UAV) imagery. PLoS ONE 2018, 13, e0196302. [Google Scholar] [CrossRef]
  8. Liu, B.; Bruch, R. Weed Detection for Selective Spraying: A Review. Curr. Robot. Rep. 2020, 1, 19–26. [Google Scholar] [CrossRef]
  9. Meshram, A.T.; Vanalkar, A.V.; Kalambe, K.B.; Badar, A.M. Pesticide spraying robot for precision agriculture: A categorical literature review and future trends. J. Field Robot. 2022, 39, 153–171. [Google Scholar] [CrossRef]
  10. Darbyshire, M.; Salazar-Gomez, A.; Gao, J.; Sklar, E.I.; Parsons, S. Towards practical object detection for weed spraying in precision agriculture. Front. Plant Sci. 2023, 14, 1183277. [Google Scholar] [CrossRef]
  11. Li, H.; Guo, C.; Yang, Z.; Chai, J.; Shi, Y.; Liu, J.; Zhang, K.; Liu, D.; Xu, Y. Design of field real-time target spraying system based on improved YOLOv5. Front. Plant Sci. 2022, 13, 1072631. [Google Scholar] [CrossRef] [PubMed]
  12. Zhao, X.; Li, Y.; Liu, X.; Niu, Z.; Yuan, J. Ultrasonic Sensing System Design and Accurate Target Identification for Targeted Spraying. Adv. Manuf. Autom. VII 2018, 451, 245–253. [Google Scholar] [CrossRef]
  13. Mahmud, M.S.; Zahid, A.; He, L.; Choi, D.; Krawczyk, G.; Zhu, H. LiDAR-sensed tree canopy correction in uneven terrain conditions using a sensor fusion approach for precision sprayers. Comput. Electron. Agric. 2021, 191, 106565. [Google Scholar] [CrossRef]
  14. Fawakherji, M.; Potena, C.; Pretto, A.; Bloisi, D.D.; Nardi, D. Multi-Spectral Image Synthesis for Crop/Weed Segmentation in Precision Farming. Robot. Auton. Syst. 2021, 146, 103861. [Google Scholar] [CrossRef]
  15. Hasan, A.S.M.M.; Sohel, F.; Diepeveen, D.; Laga, H.; Jones, M.G.K. A survey of deep learning techniques for weed detection from images. Comput. Electron. Agric. 2021, 184, 106067. [Google Scholar] [CrossRef]
  16. Tang, J.; Chen, X.; Miao, R.; Wang, D. Weed detection using image processing under different illumination for site-specific areas spraying. Comput. Electron. Agric. 2016, 122, 103–111. [Google Scholar] [CrossRef]
  17. Bakhshipour, A.; Jafari, A.; Nassiri, S.M.; Zare, D. Weed segmentation using texture features extracted from wavelet sub-images. Biosyst. Eng. 2017, 157, 1–12. [Google Scholar] [CrossRef]
  18. Wang, Y.; Zhang, X.; Ma, G.; Du, X.; Shaheen, N.; Mao, H. Recognition of weeds at asparagus fields using multi-feature fusion and backpropagation neural network. Int. J. Agric. Biol. Eng. 2021, 14, 190–198. [Google Scholar] [CrossRef]
  19. Wang, B.; Yan, Y.; Lan, Y.; Wang, M.; Bian, Z. Accurate Detection and Precision Spraying of Corn and Weeds Using the Improved YOLOv5 Model. IEEE Access 2023, 11, 29868–29882. [Google Scholar] [CrossRef]
  20. Wu, Z.; Chen, Y.; Zhao, B.; Kang, X.; Ding, Y. Review of Weed Detection Methods Based on Computer Vision. Sensors 2021, 21, 3647. [Google Scholar] [CrossRef]
  21. Deng, X.; Qi, L.; Liu, Z.; Liang, S.; Gong, K.; Qiu, G. Weed target detection at seedling stage in paddy fields based on YOLOX. PLoS ONE 2023, 18, e0294709. [Google Scholar] [CrossRef] [PubMed]
  22. Quan, L.; Feng, H.; Lv, Y.; Wang, Q.; Zhang, C.; Liu, J.; Yuan, Z. Maize seedling detection under different growth stages and complex field environments based on an improved Faster R–CNN. Biosyst. Eng. 2019, 184, 1–23. [Google Scholar] [CrossRef]
  23. Wang, C.; Wu, X.; Li, Z. Recognition of maize and weed based on multi-scale hierarchical features extracted by convolutional neural network. Trans. Chin. Soc. Agric. Eng. (Trans. CSAE) 2018, 34, 144–151. [Google Scholar] [CrossRef]
  24. Ding, P.; Qian, H.; Bao, J.; Zhou, Y.; Yan, S. L-YOLOv4: Lightweight YOLOv4 based on modified RFB-s and depth-wise separable convolution for multi-target detection in complex scenes. J. Real-Time Image Process. 2023, 20, 71. [Google Scholar] [CrossRef]
  25. Rai, N.; Zhang, Y.; Villamill, M.; Howatt, K.; Ostlie, M.; Sun, X. Agricultural weed identification in images and videos by integrating optimized deep learning architecture on an edge computing technology. Comput. Electron. Agric. 2024, 216, 108442. [Google Scholar] [CrossRef]
  26. Redmon, J.; Divvala, S.; Girshick, R.; Farhadi, A. You Only Look Once: Unified, Real-Time Object Detection. In Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA, 27–30 June 2016; pp. 779–788. [Google Scholar]
  27. Wu, H.; Wang, Y.; Zhao, P.; Qian, M. Small-target weed-detection model based on YOLO-V4 with improved backbone and neck structures. Precis. Agric. 2023, 24, 2149–2170. [Google Scholar] [CrossRef]
  28. Wang, A.; Peng, T.; Cao, H.; Xu, Y.; Wei, X.; Cui, B. TIA-YOLOv5: An improved YOLOv5 network for real-time detection of crop and weed in the field. Front. Plant Sci. 2022, 13, 1091655. [Google Scholar] [CrossRef]
  29. Gong, H.; Wang, X.; Zhuang, W. Research on Real-Time Detection of Maize Seedling Navigation Line Based on Improved YOLOv5s Light-weighting Technology. Agriculture 2024, 14, 124. [Google Scholar] [CrossRef]
  30. Vijayakumar, V.; Ampatzidis, Y.; Schueller, J.K.; Burks, T. Smart spraying technologies for precision weed management: A review. Smart Agric. Technol. 2023, 6, 100337. [Google Scholar] [CrossRef]
  31. Zhao, D.; Zhao, Y.; Wang, X.; Zhang, B. Theoretical Design and First Test in Laboratory of a Composite Visual Servo-Based Target Spray Robotic System. J. Robot. 2016, 2016, 1801434. [Google Scholar] [CrossRef]
  32. Underwood, J.P.; Calleija, M.; Taylor, Z.; Hung, C.; Nieto, J.I.; Fitch, R.; Sukkarieh, S. Real-time target detection and steerable spray for vegetable crops. In Proceedings of the International Conference on Robotics and Automation: Robotics in Agriculture Workshop, Seattle, WA, USA, 26–30 May 2015. [Google Scholar]
  33. Raja, R.; Slaughter, D.C.; Fennimore, S.A.; Siemens, M.C. Real-time control of high-resolution micro-jet sprayer integrated with machine vision for precision weed control. Biosyst. Eng. 2023, 228, 31–48. [Google Scholar] [CrossRef]
  34. Diao, Z.; Guo, P.; Zhang, B.; Zhang, D.; Yan, J.; He, Z.; Zhao, S.; Zhao, C.; Zhang, J. Navigation line extraction algorithm for corn spraying robot based on improved YOLOv8s network. Comput. Electron. Agric. 2023, 212, 108049. [Google Scholar] [CrossRef]
  35. Zhao, X.; Wang, X.; Li, C.; Fu, H.; Yang, S.; Zhai, C. Cabbage and Weed Identification Based on Machine Learning and Target Spraying System Design. Front. Plant Sci. 2022, 13, 924973. [Google Scholar] [CrossRef] [PubMed]
  36. Fu, H.; Zhao, X.; Wu, H.; Zheng, S.; Zheng, K.; Zhai, C. Design and Experimental Verification of the YOLOV5 Model Implanted with a Transformer Module for Target-Oriented Spraying in Cabbage Farming. Agronomy 2022, 12, 2551. [Google Scholar] [CrossRef]
  37. Sabóia, H.D.S.; Mion, R.L.; Silveira, A.D.O.; Mamiya, A.A. Real-time selective spraying for viola rope control in soybean and cotton crops using deep learning. Eng. Agric. 2022, 42, e20210163. [Google Scholar] [CrossRef]
  38. Chen, J.; Wang, H.; Zhang, H.; Luo, T.; Wei, D.; Long, T.; Wang, Z. Weed detection in sesame fields using a yolo model with an enhanced attention mechanism and feature fusion. Comput. Electron. Agric. 2022, 202, 107412. [Google Scholar] [CrossRef]
  39. Balabantaray, A.; Behera, S.; Liew, C.; Chamara, N.; Singh, M.; Jhala, A.J.; Pitla, S. Targeted weed management of palmer amaranth using robotics and deep learning (yolov7). Front. Robot. AI 2024, 11, 1441371. [Google Scholar] [CrossRef]
  40. Song, Y.; Sun, H.; Li, M.; Zhang, Q. Technology Application of Smart Spray in Agriculture: A Review. Intell. Autom. Soft Comput. 2015, 21, 319–333. [Google Scholar] [CrossRef]
  41. Fan, X.; Chai, X.; Zhou, J.; Sun, T. Deep learning based weed detection and target spraying robot system at seedling stage of cotton field. Comput. Electron. Agric. 2023, 214, 108317. [Google Scholar] [CrossRef]
  42. Chen, J.; Kao, S.-h.; He, H.; Zhuo, W.; Wen, S.; Lee, C.-H.; Chan, S.-H.G. Run, Don’t Walk: Chasing Higher FLOPS for Faster Neural Networks. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Vancouver, BC, Canada, 17–24 June 2023; pp. 12021–12031. [Google Scholar] [CrossRef]
  43. Woo, S.H.; Park, J.; Lee, J.Y.; Kweon, I.S. CBAM: Convolutional Block Attention Module. In Proceedings of the 15th European Conference on Computer Vision (ECCV), PT VII, Munich, Germany, 8–14 September 2018; p. 11211. [Google Scholar] [CrossRef]
  44. Tong, Z.; Chen, Y.; Xu, Z.; Yu, R. Wise-IoU: Bounding Box Regression Loss with Dynamic Focusing Mechanism. arXiv 2023, arXiv:2301.10051. [Google Scholar]
  45. Zhang, R.; Li, L.; Fu, W.; Chen, L.; Yi, T.; Tang, Q.; Andrew, J.H. Spraying atomization performance by pulse width modulated variable and droplet deposition characteristics in wind tunnel. Trans. Chin. Soc. Agric. Eng. (Trans. CSAE) 2019, 35, 42–51. [Google Scholar] [CrossRef]
  46. Zhang, X.; Cao, C.; Luo, K.; Wu, Z.; Qin, K.; An, M.; Ding, W.; Xiang, W. Design and operation of a peucedani radix weeding device based on yolov5 and a parallel manipulator. Front. Plant Sci. 2023, 14, 1171737. [Google Scholar] [CrossRef] [PubMed]
Figure 1. Overall technical route in the present study.
Figure 2. Corn and weed image acquisition: (a,b) weed images with partial straw shading; (c,d) weed images under different lighting; (e,f) weed images at different shooting angles; (g,h) dense small-target weed images.
Figure 3. The image data enhancement results.
Figure 4. Structure of the improved YOLO v5 weed detection model (* represents the convolution).
Figure 5. Difference between Pconv and regular convolution.
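To make the contrast in Figure 5 concrete, the following is a minimal PyTorch sketch of partial convolution (PConv) as described in FasterNet [42]: a regular convolution is applied to only a fraction of the channels while the rest pass through unchanged, which is what cuts FLOPs and memory access. The 1/4 split ratio is the FasterNet paper’s default; this is our illustration, not the authors’ released code.

```python
import torch
import torch.nn as nn

class PConv(nn.Module):
    """Partial convolution (FasterNet, Chen et al., 2023): convolve only
    the first channels // n_div channels; pass the rest through untouched."""

    def __init__(self, channels: int, n_div: int = 4):
        super().__init__()
        self.dim_conv = channels // n_div            # channels actually convolved
        self.dim_untouched = channels - self.dim_conv
        self.conv = nn.Conv2d(self.dim_conv, self.dim_conv,
                              kernel_size=3, padding=1, bias=False)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x1, x2 = torch.split(x, [self.dim_conv, self.dim_untouched], dim=1)
        return torch.cat((self.conv(x1), x2), dim=1)

# Example: on a 64-channel feature map, only 16 channels are convolved.
feat = torch.randn(1, 64, 80, 80)
print(PConv(64)(feat).shape)  # torch.Size([1, 64, 80, 80])
```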
Figure 6. Schematic of the WIoU.
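For reference, below is a minimal sketch of the Wise-IoU v1 bounding-box loss [44] that Figure 6 illustrates, written from our reading of the paper rather than from a training codebase; `pred` and `target` are (N, 4) tensors in (x1, y1, x2, y2) form.

```python
import torch

def wiou_v1_loss(pred: torch.Tensor, target: torch.Tensor) -> torch.Tensor:
    """Wise-IoU v1: the IoU loss is scaled by a distance-based attention
    term built from the smallest enclosing box, whose denominator is
    detached so it rescales, but does not backpropagate through, the loss."""
    lt = torch.max(pred[:, :2], target[:, :2])
    rb = torch.min(pred[:, 2:], target[:, 2:])
    wh = (rb - lt).clamp(min=0)
    inter = wh[:, 0] * wh[:, 1]
    area_p = (pred[:, 2] - pred[:, 0]) * (pred[:, 3] - pred[:, 1])
    area_t = (target[:, 2] - target[:, 0]) * (target[:, 3] - target[:, 1])
    iou = inter / (area_p + area_t - inter + 1e-7)

    # Smallest enclosing box (Wg, Hg) and squared distance between box centers
    enc = torch.max(pred[:, 2:], target[:, 2:]) - torch.min(pred[:, :2], target[:, :2])
    dist2 = (((pred[:, :2] + pred[:, 2:]) / 2
              - (target[:, :2] + target[:, 2:]) / 2) ** 2).sum(dim=1)

    # R_WIoU = exp(d^2 / (Wg^2 + Hg^2)*), the asterisk denoting detachment
    r_wiou = torch.exp(dist2 / (enc[:, 0] ** 2 + enc[:, 1] ** 2 + 1e-7).detach())
    return (r_wiou * (1.0 - iou)).mean()
```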
Figure 7. Grad-CAM visualization results (red areas indicate important features and blue areas indicate redundant features).
Figure 8. Model training and performance comparison: (a) loss curves of the improved model and the base model; (b) comparison of mAP values and model memory size before and after the model improvement.
Figure 9. Comparison of detection results before and after improving the YOLO v5s model: (a) multiscale weed detection effects with partial straw shading, (b) detection effects when corn seedlings and weeds overlap, (c) detection effects for dense small-target weeds, and (d) weed detection effects taken at night under varying illumination conditions. Yellow circles represent missed weeds and red circles represent misidentified weeds.
Figure 10. Working principle of targeted spraying weed control.
Figure 11. Weed detection and data communication process.
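The communication chain in Figure 11 implies a timing budget between detection and nozzle actuation, and the future-work section above singles out solenoid-valve response time as the quantity to optimize. The sketch below shows one way such a trigger delay could be derived from travel speed, the camera-to-nozzle offset, and the valve lag; all names and numbers are illustrative assumptions, not the paper’s control firmware.

```python
def valve_trigger_delay(offset_cm: float, speed_cm_s: float,
                        valve_lag_s: float) -> float:
    """Hypothetical look-ahead timing: a detected weed must travel the
    camera-to-nozzle offset, and the valve's own response lag is
    subtracted so the spray arrives on target."""
    travel_s = offset_cm / speed_cm_s
    return max(travel_s - valve_lag_s, 0.0)

# e.g., nozzle 40 cm behind the camera view, 50 cm/s travel, 30 ms valve lag
print(f"{valve_trigger_delay(40.0, 50.0, 0.03):.3f} s")  # 0.770 s
```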
Figure 12. Schematic of weed coordinate extraction.
Figure 13. Camera field of view in the world coordinate system.
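A minimal sketch of the kind of pixel-to-ground mapping that Figures 12 and 13 depict, assuming a downward-looking camera whose field of view spans a known ground rectangle with the world origin at the image centre. The field-of-view dimensions here are placeholders, not the paper’s calibration values.

```python
def pixel_to_world(u: int, v: int, img_w: int = 2048, img_h: int = 1536,
                   fov_w_cm: float = 60.0, fov_h_cm: float = 45.0):
    """Hypothetical linear mapping from image pixels (u, v) to ground-plane
    coordinates (x, y), with the world origin at the image centre."""
    x = (u - img_w / 2) * fov_w_cm / img_w
    y = (img_h / 2 - v) * fov_h_cm / img_h   # image v grows downward
    return x, y

# A detection centred at pixel (512, 384) maps one quarter-FOV left and up.
print(pixel_to_world(512, 384))  # (-15.0, 11.25)
```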
Figure 14. Schematic of weed target position error assessment.
Figure 15. Evaluation of weed control effect by targeted spraying.
Figure 16. Experiment of field-targeted spraying.
Table 1. The main technical parameters of the camera and lens.

| Equipment | Parameter | Value/Type |
|---|---|---|
| Industrial camera (The Imaging Source, Bremen, Germany) | dynamic range | 12 bit |
| | sensor type | CMOS Pregius |
| | data interface | USB 3.0, compatible with USB 2.0 |
| | resolution/(pixels × pixels) | 2048 × 1536 |
| | maximum frame rate/(f·s−1) | 100 |
| | CCD size/(inch) | 1/1.8 |
| | pixel size/(μm × μm) | 3.45 × 3.45 |
| | shutter/(s) | 5 × 10−6~4 |
| Lens (AZURE, Fuzhou, China) | resolution/(pixels) | 5 million |
| | image size | 2/3 inch |
| | aperture | F1.6 |
| | focal length/(mm) | 8 |
| Canon camera (Canon, Tokyo, Japan) | sensor type | CMOS |
| | effective pixels/(pixels) | 20.2 million |
| | pixels/(pixels × pixels) | 3648 × 2432 |
| | image format | JPEG |
Table 2. Identification accuracy results of different models.

| Models | P/% | R/% | mAP/% | FPS/(f·s−1) | Parameters/MB |
|---|---|---|---|---|---|
| Fast R-CNN | 52.71 | 78.73 | 71.89 | 18.02 | 108 |
| SSD | 88.29 | 79.56 | 88.29 | 33.66 | 91.1 |
| YOLO v4 | 90.1 | 87.3 | 92.2 | 48.08 | 17.6 |
| YOLO v5s | 90.3 | 87.5 | 92.3 | 70.92 | 13.7 |
| YOLO v8 | 89.4 | 85.3 | 91 | 94.43 | 7.01 |
Table 3. Identification accuracy results of different network infrastructures.

| Models | P/% | R/% | mAP/% | FPS/(f·s−1) | Parameters/MB |
|---|---|---|---|---|---|
| FasterNet | 90.3 | 87.5 | 92.3 | 74.6 | 6.34 |
| MobileViT | 85.0 | 78.2 | 84.9 | 65.7 | 2.24 |
| EfficientViT | 87.5 | 84.6 | 89.8 | 67.5 | 3.24 |
Table 4. Identification accuracy results of different improvements.

| Models | mAP/% | FPS/(f·s−1) | Parameters/MB | GFLOPs |
|---|---|---|---|---|
| Original YOLO v5s | 92.3 | 70.92 | 13.7 | 15.8 |
| YOLO v5s-FasterNet | 91 | 74.63 | 6.34 | 7.1 |
| YOLO v5s-CBAM | 92.5 | 64.10 | 15.1 | 16.9 |
| YOLO v5s-WIoU | 92.4 | 83.33 | 13.7 | 15.8 |
| YOLO v5s-FasterNet-CBAM | 91.2 | 68.03 | 6.8 | 7.4 |
| YOLO v5s-FasterNet-WIoU | 91.3 | 75.76 | 6.46 | 7.1 |
| YOLO v5s-CBAM-WIoU | 92.4 | 72.46 | 30.2 | 16.9 |
| YOLO v5s-FasterNet-CBAM-WIoU | 91.4 | 76.92 | 6.8 | 7.4 |
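For readers who want to map the CBAM rows of Table 4 onto code, below is a minimal PyTorch sketch of the Convolutional Block Attention Module [43]: channel attention from average- and max-pooled descriptors passed through a shared MLP, followed by spatial attention from a 7 × 7 convolution over channel-wise average and max maps. It is illustrative, not the implementation used in this study.

```python
import torch
import torch.nn as nn

class CBAM(nn.Module):
    """Minimal CBAM block (Woo et al., 2018): channel attention, then
    spatial attention, each applied multiplicatively to the feature map."""

    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.mlp = nn.Sequential(                      # shared across avg/max
            nn.Conv2d(channels, channels // reduction, 1, bias=False),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1, bias=False),
        )
        self.spatial = nn.Conv2d(2, 1, kernel_size=7, padding=3, bias=False)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Channel attention: shared MLP over global average- and max-pooling
        avg = self.mlp(torch.mean(x, dim=(2, 3), keepdim=True))
        mx = self.mlp(torch.amax(x, dim=(2, 3), keepdim=True))
        x = x * torch.sigmoid(avg + mx)
        # Spatial attention: 7x7 conv over stacked channel-wise avg/max maps
        s = torch.cat([x.mean(dim=1, keepdim=True),
                       x.amax(dim=1, keepdim=True)], dim=1)
        return x * torch.sigmoid(self.spatial(s))
```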
Table 5. Evaluation model for weed control effect of targeted spraying.

| Index | Definition | Formula |
|---|---|---|
| Effective recognition rate (ERR) | The proportion of weed targets correctly identified as weeds ($n$) relative to the total number of weeds ($N$) within the experimental area. | $ERR = \frac{n}{N}$ |
| Relative spraying rate (RSR) | The proportion of sprayed targets ($n_1 + n_1'$) relative to the total number of targets identified as weeds ($n + n_3'$) within the experimental area. | $RSR = \frac{n_1 + n_1'}{n + n_3'}$ |
| Effective spraying rate (ESR) | The proportion of weed targets correctly identified as weeds and successfully sprayed ($n_1$) relative to the total number of sprayed targets ($n_1 + n_1'$) within the experimental area. | $ESR = \frac{n_1}{n_1 + n_1'}$ |
| Leakage spraying rate (LSR) | The proportion of unsprayed weed targets ($n_2 + n_3 + n_4$) relative to the total number of weeds ($N$) within the experimental area. | $LSR = \frac{n_2 + n_3 + n_4}{N}$ |
| Mistaken spraying rate (MSR) | The proportion of corn incorrectly identified as weeds and successfully sprayed ($n_1'$) relative to the total number of sprayed targets ($n_1 + n_1'$) within the experimental area. | $MSR = \frac{n_1'}{n_1 + n_1'}$ |

$N$ and $N'$ represent the total numbers of weeds and corn in the experimental area; $n$ and $n'$ denote the numbers of weeds and corn correctly identified by the model; $n_1$ and $n_1'$ represent the numbers of targets identified as weeds and successfully sprayed; $n_2$ and $n_2'$ indicate the numbers of targets identified as weeds but unsuccessfully sprayed; $n_3$ denotes the number of weeds incorrectly identified as corn; $n_3'$ represents the number of corn plants incorrectly identified as weeds; $n_4$ and $n_4'$ represent the numbers of weed and corn targets not identified by the model.
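To make the Table 5 definitions concrete, the following minimal Python sketch (our illustration, not the authors’ analysis code) computes the five indices; variable names mirror the table footnote, with a trailing `p` standing for the prime symbol, and the Table 7 counts are used as a worked check.

```python
def spray_metrics(N, n, n1, n2, n3, n4, n1p, n3p):
    """Evaluation indices of Table 5."""
    return {
        "ERR": n / N,                       # effective recognition rate
        "RSR": (n1 + n1p) / (n + n3p),      # relative spraying rate
        "ESR": n1 / (n1 + n1p),             # effective spraying rate
        "LSR": (n2 + n3 + n4) / N,          # leakage (missed) spraying rate
        "MSR": n1p / (n1 + n1p),            # mistaken spraying rate
    }

# Worked check with the Table 7 counts: 30 weeds, 27 identified, 26 sprayed,
# 1 identified weed not sprayed, 1 weed taken for corn, 2 weeds undetected;
# 1 corn plant identified as a weed (n3') and sprayed (n1').
m = spray_metrics(N=30, n=27, n1=26, n2=1, n3=1, n4=2, n1p=1, n3p=1)
print({k: f"{100 * v:.1f}%" for k, v in m.items()})
# {'ERR': '90.0%', 'RSR': '96.4%', 'ESR': '96.3%', 'LSR': '13.3%', 'MSR': '3.7%'}
# matching the reported 90% identification accuracy, 96.3% effective spraying
# rate, 13.3% missed spraying rate, and 3.7% erroneous spraying rate.
```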
Table 6. Verification experiment of weed target position error.

| Coordinate of Weed Placement Position | Coordinate of Actual Spraying Position | Relative Error/% | RMSE |
|---|---|---|---|
| (−6.3, −9.9) | (−6.508, −9.915) | 3.4 | 0.12 |
| (7.3, −5.8) | (7.189, −5.822) | 1.9 | |
| (−26.9, 11.2) | (−27.388, 11.171) | 2.1 | |
| (18.6, 13.9) | (18.184, 13.724) | 3.5 | |
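As a quick consistency check on Table 6, the per-target relative errors can be reproduced, to within rounding, as the sum of the per-axis relative deviations; the paper does not state the formula explicitly, so this convention is our inference.

```python
# Placement vs. actual spraying coordinates from Table 6.
placed  = [(-6.3, -9.9), (7.3, -5.8), (-26.9, 11.2), (18.6, 13.9)]
sprayed = [(-6.508, -9.915), (7.189, -5.822), (-27.388, 11.171), (18.184, 13.724)]

for (x0, y0), (x1, y1) in zip(placed, sprayed):
    # Sum of per-axis relative deviations; reproduces Table 6's
    # 3.4 / 1.9 / 2.1 / 3.5% values to within rounding.
    rel = abs(x1 - x0) / abs(x0) + abs(y1 - y0) / abs(y0)
    print(f"{100 * rel:.1f}%")   # 3.5%, 1.9%, 2.1%, 3.5%
```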
Table 7. Precision spraying experiment results.

| Category | N(N′) | n(n′) | n1(n1′) | n2(n2′) | n3(n3′) | n4(n4′) | ERR/% | LSR/% | MSR/% |
|---|---|---|---|---|---|---|---|---|---|
| Weeds | 30 | 27 | 26 | 1 | 1 | 2 | 90 | 13.3 | 3.7 |
| Corn | 30 | 29 | 1 | 0 | 1 | 0 | 96.7 | – | – |
