1. Introduction
Currently, weeds are widely recognized as the primary biological factor affecting crop growth and causing yield reduction [1]. According to Oerke’s research [2], weeds can reduce crop yields by approximately 34%, resulting in substantial economic losses for agriculture. Lettuce, a staple in human diets, is cultivated extensively worldwide, and weed control is one of the most critical factors determining lettuce production [3]. Weed control in lettuce farming consists of inter-row and intra-row weeding. However, intra-row weed control poses greater challenges than inter-row weeding due to the close proximity of weeds to the crop row [4] and the high planting density of lettuce. At present, manual weeding and chemical herbicides remain the primary control methods [5,6]. Although manual weeding can accurately remove intra-row weeds, this labor-intensive approach is incompatible with the demands of smart agriculture and significantly increases production costs [7]. Moreover, the prolonged use of chemical herbicides leads to herbicide resistance and severe environmental degradation [8,9]. In contrast, mechanical weeding has gained attention as a research focus due to its environmental friendliness and cost-effectiveness. Different weed species also exhibit varying growth habits, root structures, and resistance, which directly affect the efficiency and effectiveness of weeding systems, and some weeds closely resemble lettuce in appearance, leading to misidentifications that reduce overall operational efficiency. By utilizing weed classification data, it is possible to further analyze the types and distribution of weeds in lettuce fields, providing a scientific basis for decision-making in precision agriculture. Therefore, developing an intelligent, real-time, fast, and cost-effective weeding machine for lettuce fields, equipped with technology for real-time weed identification, localization, and classification, is of great significance for improving lettuce production efficiency and advancing modern agriculture.
The emergence and advancement of deep learning and computer vision technologies have opened new avenues for smart and precision agriculture [10], offering new possibilities for the automation and intelligence of intra-row lettuce weeding. Over the past two decades, deep learning has produced numerous renowned models [11] that automatically extract features in high-dimensional spaces and have been widely applied across agricultural production scenarios [12,13].
Furthermore, with the rapid improvement in camera performance and computational power, computer vision has shown tremendous potential for tasks such as the rapid detection, classification, and localization of weeds and crops [14,15,16]. Earlier methods for detecting and classifying field weeds typically relied on preprocessing, segmentation, feature extraction, and classification techniques. However, these methods often exhibited poor robustness when crops overlapped with weeds or under suboptimal lighting conditions [17], which motivated the development of machine-learning-based weed detection algorithms. Rumpf et al. [18] proposed a weed classification method for small-grain crops using a support vector machine (SVM) combined with near-infrared spectroscopy, but it achieved only 80% classification accuracy. Pérez-Ortiz et al. [19] developed a semi-supervised weed detection system for sunflower crops based on SVM and the Hough transform. Lottes et al. [20] combined random forest algorithms with near-infrared spectroscopy, using drone-acquired data to classify objects and weeds. However, machine learning classifiers require handcrafted feature vectors created by experts based on the visual texture, spectral characteristics, and spatial context of weeds and crops [13], which significantly limits the applicability of these methods to specific weed or crop types.
Deep learning overcomes these limitations by enabling automatic feature extraction. Tao et al. [21] proposed a hybrid convolutional neural network–support vector machine classifier, achieving 92.1% accuracy in weed detection for winter oilseed rape fields. Mohd Anul Haq [22] developed a CNN-LVQ model for weed classification in soybean fields. Zhang et al. [23] introduced a YOLOv8-based segmentation model for localizing weed apical meristems, achieving an outstanding segmentation accuracy of 97.2%, though the overall weed detection accuracy was only 81%. Hu et al. [24] developed a lightweight multimodule YOLOv7 model for lettuce recognition and weed severity classification by integrating ECA and CA mechanisms, ELAN-B3, and DownC modules, achieving a detection accuracy of 97.5%. Kong et al. [25] proposed the Efficient Crop Segmentation Net (ECSNet) based on the YOLO architecture for weed management in maize fields, achieving 90.9% mIoU50 and 90.2% accuracy in maize segmentation. In intelligent weed control systems, merely achieving classification or detection is insufficient; the ultimate goal is the rapid and precise localization of crops and weeds. Quan et al. [26] developed a YOLOv3-based intelligent weeding robot for inter-row weed removal in cornfields, achieving detection accuracies of 98.5% for maize and 90.9% for weeds, with a weeding rate of 85.91% and a seedling injury rate of 1.19%. Ju et al. [27] developed an adaptive weeding robot for paddy fields based on YOLOv5x, achieving an accuracy of 90.05%, a weeding rate of 82.4%, and a seedling injury rate of 2.8%. These studies demonstrate the immense potential of deep learning methods for fast weed and crop detection and localization. Additionally, YOLO object detectors have shown high precision in detecting weeds even in highly dynamic and challenging unstructured environments [28]. However, beyond accurate weed–crop recognition and localization, the development of real-time, fast, and efficient weeding equipment is also critical. Therefore, in addition to proposing a novel deep-learning-based lettuce–weed detection, classification, and localization model, this study also optimized a previously developed weeding system to validate the effectiveness of the proposed model [3]. The main contributions of this study are as follows:
This study proposed an optimized YOLOv8l model which, to the best of our knowledge, is the first to incorporate both the Global Attention Mechanism (GAM) and Coordinate Attention (CA) for the rapid detection of lettuce and weeds, as well as the classification of six common weed species.
A high-efficiency vision system was developed to identify the emergence point of lettuce stems (i.e., the center point), integrating the LettWd-YOLOv8l model with a lettuce–weed localization method (an illustrative sketch of this center-point extraction is given after this list of contributions).
The intra-row weeding device, based on the vision system and pneumatic servo technology, was further optimized in this study, representing a significant advance over previous designs.
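To make the localization step in the second contribution concrete, the following is a minimal sketch of extracting lettuce center points from detector output. It assumes the ultralytics YOLOv8 inference API; the weight file name and the lettuce class index are hypothetical, and the snippet is not the exact localization pipeline used in this study.

```python
# Minimal sketch: derive lettuce center points from YOLOv8 detections
# (hypothetical weight file and class index; illustrative only).
import cv2
from ultralytics import YOLO

model = YOLO("lettwd_yolov8l.pt")     # hypothetical trained weights
LETTUCE_CLASS_ID = 0                   # hypothetical class index for lettuce

frame = cv2.imread("field_frame.jpg")  # one frame from the camera
result = model(frame, verbose=False)[0]

centers = []
for box, cls in zip(result.boxes.xyxy.tolist(), result.boxes.cls.tolist()):
    if int(cls) == LETTUCE_CLASS_ID:
        x1, y1, x2, y2 = box
        # approximate the stem emergence point by the bounding-box center
        centers.append(((x1 + x2) / 2.0, (y1 + y2) / 2.0))

print("lettuce center points (pixel coordinates):", centers)
```

In the actual system, such pixel coordinates would still need to be mapped into the coordinate frame of the weeding blade before actuation.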
Overall, this research aims to advance the automation and intelligence of intra-row lettuce weeding by optimizing an intelligent weeding system and introducing a novel deep learning model to enhance the efficiency of lettuce–weed detection, classification, and localization, thereby providing valuable insights for the future development of precision and smart agriculture. The remainder of this study is organized into five sections: Section 2 describes the creation of the dataset, the structure of LettWd-YOLOv8l, the localization algorithm, and the optimized intelligent intra-row weeding system; Section 3 details the parameter settings, experimental environment, and evaluation metrics for the various experiments (including model training and conveyor belt experiments); Section 4 presents and discusses the experimental results; Section 5 discusses the findings and potential future improvements; and Section 6 provides a summary and conclusion of the study. The full forms of the abbreviations used in this article are provided in Table A1, and the symbols and their corresponding meanings are listed in Table A2.
5. Discussion
In this study, we successfully designed and implemented an efficient and low-cost autonomous intra-row lettuce-weeding system. The system integrates a vision recognition module to control the activation and deactivation of the weeding blade, thereby facilitating effective weed removal between lettuce plants. The proposed vision recognition system is built upon the LettWd-YOLOv8l model, specifically tailored for lettuce–weed identification and localization tasks. Compared to traditional computer vision methods, the YOLOv8 model has demonstrated superior adaptability to the complex field conditions of agricultural environments [28,39]. Leveraging the proposed deep-learning-based intelligent vision system, we simulated the operation of the lettuce-weeding device on a conveyor belt to mimic field conditions. The experimental results indicated that the proposed device and method are both feasible and efficient. However, the damage rate to lettuce plants caused by the system has yet to be evaluated. In this study, 0.7 MPa was selected as the standard operating pressure for the cylinder. Pressure and installed power may influence the efficiency and effectiveness of the weeding system, and future research will further investigate the specific impact of these factors on overall weeding performance.
High precision typically indicates that the model performs well in identifying positive samples. However, analysis of the experimental results showed that the recall for Amaranthus blitum L. (AL) was lower than for the other weed species, which can be attributed to the smaller number of AL samples in our dataset. With so few examples, the model appears to apply overly strict matching criteria for this class and therefore misses some true positives, a symptom of overfitting to the limited training data. To address this, future work should focus on expanding the dataset, particularly by acquiring more images of AL, and on applying additional data augmentation techniques. Implementing k-fold cross-validation will also allow the model’s performance to be evaluated more reliably across different data splits, reducing the bias introduced by relying on a single train–test partition, and introducing regularization methods could further improve the model’s generalization ability.
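As a concrete illustration of the k-fold evaluation proposed above, the following is a minimal sketch using stratified folds so that the scarce AL class is represented in every split. The file layout, label encoding, and fold count are illustrative assumptions, not the configuration used in this study.

```python
# Minimal sketch: stratified k-fold splits for re-evaluating the detector
# across different data partitions (hypothetical file layout and labels).
from pathlib import Path
from sklearn.model_selection import StratifiedKFold

image_paths = sorted(Path("dataset/images").glob("*.jpg"))   # hypothetical layout
labels = [p.stem.split("_")[0] for p in image_paths]          # hypothetical label encoding

skf = StratifiedKFold(n_splits=5, shuffle=True, random_state=42)
for fold, (train_idx, val_idx) in enumerate(skf.split(image_paths, labels)):
    train_files = [image_paths[i] for i in train_idx]
    val_files = [image_paths[i] for i in val_idx]
    # Placeholder: retrain and evaluate the model on each split, e.g.
    # score = evaluate_model(train_model(train_files), val_files)
    print(f"fold {fold}: {len(train_files)} train / {len(val_files)} val images")
```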
Under poor light conditions, the lettuce localization success rate and the weeding rate declined only slightly, which we attribute to the data augmentation techniques applied during dataset creation, such as adding noise, cutout, and brightness adjustment; these significantly enhanced the model’s robustness to suboptimal lighting. Notably, under good light conditions, the LettWd-YOLOv8l model showed a substantial reduction in misclassifying weeds as lettuce and erroneously extracting their center points. However, we observed that the extracted lettuce center points were slightly affected by changes in the position of the light source.
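For reference, the three augmentations mentioned above can be implemented as in the minimal sketch below, using NumPy only; the noise level, brightness factor, and cutout size are illustrative values, not the exact parameters used when the dataset was created.

```python
# Minimal sketch of noise, brightness, and cutout augmentations (NumPy only;
# parameter values are illustrative).
import numpy as np

rng = np.random.default_rng(0)

def add_gaussian_noise(img: np.ndarray, sigma: float = 10.0) -> np.ndarray:
    noisy = img.astype(np.float32) + rng.normal(0.0, sigma, img.shape)
    return np.clip(noisy, 0, 255).astype(np.uint8)

def adjust_brightness(img: np.ndarray, factor: float = 1.2) -> np.ndarray:
    return np.clip(img.astype(np.float32) * factor, 0, 255).astype(np.uint8)

def cutout(img: np.ndarray, size: int = 40) -> np.ndarray:
    out = img.copy()
    h, w = out.shape[:2]
    y = rng.integers(0, max(1, h - size))
    x = rng.integers(0, max(1, w - size))
    out[y:y + size, x:x + size] = 0   # mask a random square region
    return out

# Example: apply all three to a dummy image
dummy = rng.integers(0, 256, (480, 640, 3), dtype=np.uint8)
augmented = cutout(adjust_brightness(add_gaussian_noise(dummy)))
```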
As illustrated in Figure 18, this study employed the response surface methodology to investigate the effects of light conditions and weed density on the lettuce localization success rate and the weeding rate. Light conditions positively influenced both metrics, with better lighting significantly improving performance, particularly under lower weed density. Weed density emerged as a critical factor affecting lettuce localization success, as both metrics declined notably with increasing weed density. Furthermore, an interaction effect between light conditions and weed density was observed for both metrics, though the influence of weed density was more pronounced.
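For readers unfamiliar with the response surface methodology, the sketch below shows how a second-order surface with an interaction term can be fitted to two factors (coded light level and weed density) by least squares. The data points are placeholders for illustration, not measurements from this study.

```python
# Minimal sketch: second-order response surface fit
# y = b0 + b1*x1 + b2*x2 + b3*x1*x2 + b4*x1^2 + b5*x2^2
# (placeholder data; not the measurements reported in this study).
import numpy as np

x1 = np.array([0.0, 0.0, 1.0, 1.0, 0.5, 0.5, 0.0, 1.0, 0.5])  # coded light level
x2 = np.array([0.0, 1.0, 0.0, 1.0, 0.5, 0.0, 0.5, 0.5, 1.0])  # coded weed density
y  = np.array([0.86, 0.78, 0.90, 0.82, 0.85, 0.88, 0.83, 0.87, 0.80])  # placeholder response

X = np.column_stack([np.ones_like(x1), x1, x2, x1 * x2, x1**2, x2**2])
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
print("fitted coefficients:", np.round(coef, 3))
```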
During the experiments, we observed that some weeds tended to cluster in certain areas, which occasionally caused the weeding blade to become entangled with the weeds. This, in turn, affected the precise localization of the lettuce plants and disrupted the normal operation of the weeding blade, thereby reducing the weeding efficiency and effectiveness of the proposed system. The experimental results indicate that excessively high weed density significantly reduces weeding efficiency. Therefore, it is recommended that farmers implement weed control measures early in the lettuce growth stage to enhance the system’s efficiency and weeding effectiveness. Future research should focus on improving the algorithm and optimizing the weeding strategy to enhance the overall weeding rate of the proposed autonomous intra-row lettuce-weeding system. Further studies should also aim to optimize the design of the weeding blade to improve its resistance to weed entanglement and its ability to penetrate the soil. Moreover, enhancements in the motion control algorithms and strategies of the weeding blade [50] will be critical to ensuring high-efficiency weed removal under complex weed conditions. Given that lighting conditions may affect the performance of the vision recognition system, future research should also include experimental validation under different lighting scenarios to assess the system’s robustness and reliability in varying environmental conditions.
Table 6 presents recent advancements in intelligent weeding equipment for various crops. Notably, the proposed system demonstrated outstanding performance in both weed removal rate and crop detection accuracy. However, it is important to acknowledge that during the conveyor belt testing phase, LettWd-YOLOv8l did not fully achieve the performance level observed during training. These discrepancies may be attributed to both software and hardware factors: on the software side, communication issues between the laptop and the STM32 microcontroller could be the root cause, while on the hardware side, the performance of both the laptop and the microcontroller plays a crucial role in overall system accuracy. Although the crop detection accuracy in the conveyor belt experiments did not reach the level achieved during model training, this does not directly indicate poor real-world performance of the model; the recognition performance was limited by hardware conditions, and there remains room for improvement through technical enhancements.

Several studies cited in the table conducted field experiments, whereas this study was limited to simulation experiments on a conveyor belt platform, which does not fully replicate the complex production conditions of actual lettuce fields. To more comprehensively validate the proposed optimized YOLOv8l model and the mechanical lettuce-weeding system, future research should not be restricted to conveyor belt experiments but should instead develop a suitable mobile platform that enables field experiments on real lettuce crops. Through such field trials, the system’s weeding performance and robustness can be better assessed under unstructured environmental factors such as varying terrain and lettuce planting densities, providing a more complete validation of the feasibility and effectiveness of the device and model in real-field applications.
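To illustrate the kind of host-to-microcontroller link discussed above, the following is a minimal sketch of sending a lettuce center coordinate from the host computer to the STM32 over a serial connection. The port name, baud rate, and frame format (header byte, two little-endian uint16 pixel coordinates, checksum) are assumptions for illustration, not the protocol used in this study.

```python
# Minimal sketch: send one lettuce center point to the STM32 over serial
# (hypothetical port, baud rate, and frame format).
import struct
import serial  # pyserial

def send_center(ser: serial.Serial, cx: int, cy: int) -> None:
    payload = struct.pack("<BHH", 0xAA, cx, cy)      # header + x + y (pixels)
    checksum = sum(payload) & 0xFF                    # simple additive checksum
    ser.write(payload + bytes([checksum]))

if __name__ == "__main__":
    with serial.Serial("/dev/ttyUSB0", 115200, timeout=0.01) as ser:  # hypothetical port
        send_center(ser, 320, 240)
```

A fixed, checksummed frame of this kind makes dropped or corrupted messages easy to detect on the microcontroller side, which is one way to narrow down the communication issues mentioned above.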
6. Conclusions
In this study, we developed and implemented an intelligent mechanical intra-row lettuce-weeding device based on a deep-learning-powered vision recognition system. The device utilizes the LettWd-YOLOv8l model, which detects lettuce and weeds, precisely locates lettuce plants, and classifies weeds. The proposed model improves on the original YOLOv8l through the integration of CA (Coordinate Attention) and GAM (Global Attention Mechanism) modules at appropriate layers. As a result, the model achieved outstanding performance across several metrics, with precision, recall, F1-score, mAP50, and mAP95 reaching 99.732%, 99.907%, 99.5%, 99.5%, and 98.995%, respectively. To evaluate the system’s weeding performance, we simulated field conditions using a conveyor belt setup. The experimental results showed that the proposed autonomous intra-row lettuce-weeding system achieved 89.273% accuracy in crop detection and lettuce localization tasks and an 83.729% weeding rate at a speed of 3.28 km/h under different light conditions and weed densities. The findings of this study provide valuable insights for the development of autonomous weeding robots, offering an innovative solution for precision weeding in modern agriculture.