Article

Lightweight Deep Learning-Based Laser Irradiation System for Intra-Row Weed Control in Lettuce

1 Department of Agricultural Engineering, College of Engineering, China Agricultural University, 17 Qinghua East Road, Haidian, Beijing 100083, China
2 School of Mechanical Engineering, Shanghai Jiao Tong University, 800 Dongchuan Road, Minhang, Shanghai 200240, China
* Author to whom correspondence should be addressed.
These authors contributed equally to this work.
Agronomy 2025, 15(4), 925; https://doi.org/10.3390/agronomy15040925
Submission received: 6 March 2025 / Revised: 6 April 2025 / Accepted: 8 April 2025 / Published: 10 April 2025
(This article belongs to the Section Precision and Digital Agriculture)

Abstract
Laser weeding is an innovative, environmentally friendly method for intra-row weed control. However, its effectiveness depends on accurate weed identification and an efficient control system. This study developed an intra-row laser weeding system for lettuce that combines deep learning and laser technology. The system consisted of three modules: perception, decision, and execution. An MV-UB130GM industrial camera captured images, which were transmitted to a computer for processing. A target detection algorithm located weeds by calculating the central coordinates of anchor frames. The multi-task learning (MTL) decision system then planned the weeding path, generated instructions, and controlled the laser for weeding tasks. The YOLOv8 model, enhanced with an attention mechanism, formed the foundation of target detection. To compress the model, a class knowledge distillation method based on transfer learning was applied, resulting in a lightweight YOLOv8s-CBAM model with an mAP@0.5 of 98.9% and a size of just 6.2 MB. A simulation prototype of the laser weeding system was built, and initial experiments demonstrated that a 450 nm blue semiconductor laser killed weeds within 1 s at 30 W output. Experimental results showed that the system detected and eliminated 100% of weeds in low-density scenes and achieved an 88.9% detection rate in high-density areas. The real-time detection speed reached 21.27 FPS, and the overall weeding success rate was 76.9%. This study provides valuable insights for the development of intra-row weed control systems based on laser technology, contributing to the advancement of precision agriculture.

1. Introduction

Weed infestation and management are critical production challenges in agricultural fields [1]. Weeds reproduce rapidly and compete with crops for resources such as nutrients, water, and light, directly or indirectly affecting both the yield and quality of the crop [2]. To reduce crop production losses, timely and effective weed control is particularly important.
Intra-row weed control is challenging due to the high risk of damaging crops when targeting nearby weeds [3]. Although manual weeding can remove weeds with precision, it is labor-intensive, costly, and often ineffective, misidentifying or missing approximately 35% of weeds [4]. Chemical herbicides are widely used because they are efficient, convenient, and simple to apply, and they effectively reduce weed competition with crops. However, herbicide use generates considerable chemical waste and environmental pollution and promotes herbicide resistance in weeds, which is not conducive to the sustainable development of agricultural production [5,6,7,8,9]. Mechanical weeding is environmentally friendly and efficient, but it causes a higher rate of crop damage during the weeding process, which can hinder crop growth and lead to secondary injuries through pathogen infection, thus reducing yield [10,11]. Laser weeding is an emerging technology and represents a precise, non-contact physical method of weed control [12]. It uses high-energy laser beams that deliver a high density of energy to selected points. By directing the beam to heat and damage or kill weed tissue, laser weeding rapidly and precisely targets and removes weeds [13]. Compared to traditional mechanical and chemical weeding methods, laser weeding offers environmental protection, high efficiency, flexibility, and automation, thereby improving the efficiency and accuracy of weed control [14]. However, the main challenge of laser weed control technology is to accurately differentiate between weeds and crops and to pinpoint weed locations [15]. Therefore, developing fast and accurate real-time identification and localization technology is crucial for the practical application of intelligent weeding equipment.
The accurate identification of weeds is a prerequisite for precise weed control management in the field, and machine vision technology is an effective means of achieving it [16]. With the continuous development of deep learning and computer vision technology, the weed recognition accuracy of weeding robots has improved rapidly, and the transformation from theory to practical application is also accelerating [17]. Traditional methods mainly achieve target detection by manually designing features in combination with classifiers, but they are limited in efficiency and generalization ability [18,19,20,21,22] and are gradually being replaced by end-to-end detection methods based on deep learning. While modern approaches such as DETR and SAM demonstrate strong capabilities in open-domain segmentation, their computational demands and hardware requirements limit practical deployment on agricultural edge devices. Although U-Net combined with SAM has demonstrated excellent performance in weed segmentation, its training and inference require substantial hardware resources (such as an NVIDIA RTX A6000 GPU) [23]. DETR [24] achieves 28 FPS on high-end GPUs, and an improved Swin-Unet [25] achieves 15.1 FPS on an Ubuntu 20.04 system with the PyTorch 1.6 deep learning framework, the CUDA 11.6 parallel computing architecture, and the cuDNN 8.4.0 GPU acceleration library, whereas You Only Look Once (YOLO) enables real-time weeding on edge devices. EfficientDet-D7 achieves 55.1 AP on the COCO test set with significantly fewer parameters and FLOPs (floating-point operations) than other detectors, and EfficientDet-D0 requires only 1/28 of the FLOPs needed by YOLOv3 to achieve a similar level of accuracy. However, for applications that require rapid deployment or have limited computational resources, especially those demanding minimal latency and quick response times, YOLO stands out due to its simplicity and efficiency of implementation [10]. This highlights the trade-off between model complexity and robustness, where YOLO’s architecture excels in balancing speed and accuracy [26].
YOLO is an advanced object detection algorithm proposed by Joseph Redmon et al. that can detect multiple objects in an image within a short time [27]. The YOLO algorithm detects objects in a single pass in near real time, making it both fast and accurate. Therefore, YOLO is widely used in agriculture to identify and locate crops and weeds. Zhu et al. [14] designed a blue laser weeding robot based on the YOLOX neural network for weed control in cornfields. Compared to traditional convolutional neural networks, the YOLOX network had an advantage in recognizing small targets; on flat ground, the model achieved average recognition rates of 92.45% for corn and 88.94% for weeds, demonstrating strong robustness and reliability. Hu et al. [28] proposed a YOLOv4 network for detecting 12 types of rice weeds. Experimental results showed that the performance of the proposed algorithm was 11.6% higher than that of YOLOv3, making it suitable for real-time detection in precision agriculture. Chen et al. [29] added an attention mechanism to YOLOv4 to detect weeds in sesame fields. The average accuracy for detecting sesame crops and weeds was 96.16%, with a detection speed of 27.17 FPS. Ying et al. [30] replaced the original backbone of the YOLOv4 network with MobileNetV3-Small and introduced a lightweight attention mechanism, reducing the memory required for image processing and thus improving the efficiency of the detection model. Hua et al. [31] proposed a BEM-YOLOv7-tiny target detection model for peanut and weed identification and localization at different weeding periods. The mAP@0.5 and F1 score of the model were 88.3% and 92.4%, respectively, while the model size was only 14.1 MB, meeting the requirements of real-time seedling and weed detection and positioning. In summary, adding an attention mechanism and optimizing the model architecture are important methods for improving the detection performance and speed of YOLO models. While emerging models such as DETR and SAM offer specialized capabilities, YOLO’s balance of speed, accuracy, and deployability makes it the best choice for real-time agricultural robots. The improvement and development of the YOLO model provides favorable technical conditions for the development of intelligent agricultural robots. Developing lightweight, highly accurate target detection models that can be easily deployed to edge devices is important for achieving effective weed control in lettuce fields.
The primary contributions of this study are as follows: (1) A lightweight object detection model based on the YOLOv8 network was built. (2) A deep learning-based intra-row laser weeding system for lettuce was developed. (3) The proposed lightweight YOLOv8s-CBAM model has a size of only 6.2 MB, yet its mAP@0.5 is 98.9%. (4) The established laser weeding system achieved real-time and effective weed control, contributing to green and sustainable development.

2. Materials and Methods

2.1. Overall Design of the Lettuce Intra-Row Laser Weeding System

The main work of this study was to develop a deep learning-based intra-row laser weeding system for lettuce, which can accurately identify and locate lettuce plants and weeds and then control the laser to kill the weeds. The laser weeding system consisted of a perception module, a decision module, and an execution module. The system structure is shown in Figure 1.
As the core component of the perception module, the industrial camera is the eye of the whole laser weeding system. Its primary function is to capture images of weeds between lettuce plants under the control of an upper-level computer program. The camera is a 1.2-megapixel MV-UB130GM compact industrial camera (Mindvision Technology Co., Ltd., Shenzhen, China), with a maximum image resolution of 1280 × 960 and a frame rate of 39 FPS. It transmits image data to the computer via a USB interface.
The decision module primarily consisted of a computer and an MTL decision system. As shown in Figure 2, the computer processed the images of weeds between lettuce plants acquired by the perception module to obtain the positional information of the weeds. Based on this positional information, the MTL decision system performed path planning and output weeding instructions, thereby controlling the weeding execution mechanism to carry out the corresponding actions. The system was equipped with an Intel® Core™ i9-14900K CPU (Intel Corporation, Santa Clara, CA, USA), an NVIDIA GeForce RTX4080 GPU (NVIDIA Corporation, Santa Clara, CA, USA), and 64 GB of RAM (Kingston, Fountain Valley, CA, USA), which met all processing requirements. The path planning algorithm adopted by the MTL decision system was the 2-OPT local search ant colony algorithm, chosen for its robust performance.
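To illustrate the local search step used in path planning, the following is a minimal, hypothetical sketch of a 2-opt improvement pass over detected weed centre coordinates; it is not the authors' full 2-OPT ant colony implementation, and the coordinates and function names are illustrative only.

```python
import math
from itertools import combinations

def path_length(points, order):
    """Total Euclidean length of a weeding path visiting points in the given order."""
    return sum(math.dist(points[order[i]], points[order[i + 1]])
               for i in range(len(order) - 1))

def two_opt(points, order):
    """Repeatedly reverse path segments while doing so shortens the total path (2-opt)."""
    improved = True
    while improved:
        improved = False
        for i, j in combinations(range(1, len(order) - 1), 2):
            candidate = order[:i] + order[i:j + 1][::-1] + order[j + 1:]
            if path_length(points, candidate) < path_length(points, order):
                order, improved = candidate, True
    return order

# Example: order five detected weed centres (x, y in pixels) for the laser head to visit
weeds = [(120, 80), (340, 95), (200, 210), (60, 300), (410, 280)]
route = two_opt(weeds, list(range(len(weeds))))
```

In an ant colony variant, such a 2-opt pass is typically applied to the tours constructed by the ants before pheromone updating; the sketch above shows only the local improvement step.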
The execution module consisted of a microcontroller, stepper motor, linear slide and carriage, laser, and laser head (as shown in Figure 3), with the microcontroller, stepper motor, and laser being the core components. The microcontroller received the weed positions and path planning information sent by the host computer and, based on these instructions, controlled the movement of the laser beam to the designated position to execute the laser weeding operation. The stepper motor drove the carriage movement, and the laser's on/off state was switched as instructed. The microcontroller used in this study is an Arduino board based on the ATmega328 (Microchip Technology Inc., Chandler, AZ, USA), which operates with an external input voltage of 7~12 V DC. The stepper motor is model 57CME13D (Leadshine, Shenzhen, Guangdong Province, China), with a holding torque of 1.3 N·m and a rated current of 4 A. The laser is a 450 nm blue semiconductor laser (Han's TCS Semiconductor Co., Ltd., Beijing, China). It uses a multi-chip coupled fiber output port connected to the laser emission head, with a maximum output power of 50 W, and supports three control modes: local, RS232, and AD. The key functional parameters of the laser are listed in Table 1.
The workflow of the laser weeding system for lettuce was divided into three parts: image acquisition, recognition and decision, and laser weeding. First, the perception module captures images of weeds between the lettuce plants, which are transmitted to the computer. The MTL decision system processes these images, makes decisions, and outputs weeding instructions. Finally, the Arduino microcontroller controls the weeding execution mechanism to perform the laser weeding action. The main frame of the laser weeding prototype was built from aluminum profiles and was mounted over a conveyor belt running at 0.91 km/h. The laser head was fixed vertically at a distance of 13 cm from the seedling tray, ensuring that the laser beam was perpendicular (90°) to the plane of the weeds. This setup ensured precise mapping between the laser's position coordinates and the weed coordinates, as shown in Figure 3. During the experiment, a seedling tray with lettuce and weeds was placed on the conveyor belt to simulate movement along the vegetable patch.
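As a rough illustration of how the host computer could issue weeding instructions to the Arduino microcontroller over a serial link, the sketch below uses pyserial. The port name, baud rate, and message format are assumptions for illustration; the actual firmware protocol used in this study is not described in the text.

```python
import serial  # pyserial

# Assumed serial parameters; adjust to the actual port and firmware settings.
arduino = serial.Serial(port="COM3", baudrate=115200, timeout=1)

def send_weeding_command(x_mm: float, y_mm: float, dwell_s: float = 1.0) -> None:
    """Send one hypothetical target position and laser dwell time to the microcontroller."""
    msg = f"WEED {x_mm:.1f} {y_mm:.1f} {dwell_s:.2f}\n"
    arduino.write(msg.encode("ascii"))
    ack = arduino.readline().decode("ascii").strip()  # e.g. an acknowledgement from the firmware
    print("controller replied:", ack)

send_weeding_command(42.5, 108.0)
```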

2.2. Acquisition and Processing of Lettuce/Weed Image Dataset

A prerequisite for the deep learning-based weed detection task was a large amount of high-quality image data for training the model. A high-quality lettuce/weed dataset should cover samples from different angles and under various environmental conditions to help the neural network fully learn the appearance characteristics and distinguishing criteria of lettuce and weeds.
The original dataset of vegetables and weeds was collected in Beijing, China, with the shooting environments mainly being outdoor greenhouses and indoor laboratories. The target objects were early-stage lettuce seedlings and a common accompanying weed (purslane), with sample images shown in Figure 4. During image acquisition, the camera position was adjusted manually, and the rotation angle and direction of the camera relative to the target were varied randomly. The resulting original dataset contained 295 lettuce images and 318 purslane weed images.
Since deep learning requires a sufficient amount of data to train the model, data augmentation was used to obtain diverse image samples. First, a cycle generative adversarial network (CycleGAN) was employed to perform image style transfer between lettuce and weeds, generating 80 novel samples in an unsupervised manner that retain key features while exhibiting distinct details. On this basis, randomly selected images were augmented with techniques such as darkening, mirroring, rotation, and the addition of noise to enhance the generalization capability of the trained model. After augmentation, the new dataset included 1228 images of lettuce and 1445 images of weeds. The image annotation software LabelImg 1.8.6 (https://github.com/tzutalin/labelImg, accessed on 11 October 2024) was used for manual tagging, and labels were saved in the format required by the models. Finally, the dataset was randomly split into a training set and a validation set in a 7:3 ratio for subsequent model training and validation.
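A minimal sketch of the offline augmentation and 7:3 split described above is given below, assuming a simple directory layout (which is not specified in the text). In practice the corresponding LabelImg annotations must be transformed together with the images, and the CycleGAN style-transfer step is omitted here.

```python
import random
from pathlib import Path

import numpy as np
from PIL import Image, ImageEnhance

def augment(img: Image.Image) -> Image.Image:
    """Apply one randomly chosen augmentation: darkening, mirroring, rotation, or Gaussian noise."""
    choice = random.choice(["darken", "mirror", "rotate", "noise"])
    if choice == "darken":
        return ImageEnhance.Brightness(img).enhance(random.uniform(0.5, 0.8))
    if choice == "mirror":
        return img.transpose(Image.Transpose.FLIP_LEFT_RIGHT)
    if choice == "rotate":
        return img.rotate(random.choice([90, 180, 270]), expand=True)
    arr = np.asarray(img).astype(np.float32)
    arr += np.random.normal(0, 10, arr.shape)  # additive Gaussian noise
    return Image.fromarray(np.clip(arr, 0, 255).astype(np.uint8))

# Assumed directory layout: raw images under dataset/raw, outputs under dataset/train and dataset/val
src = sorted(Path("dataset/raw").glob("*.jpg"))
random.shuffle(src)
split = int(0.7 * len(src))  # 7:3 train/validation split
for subset, files in (("train", src[:split]), ("val", src[split:])):
    out = Path(f"dataset/{subset}")
    out.mkdir(parents=True, exist_ok=True)
    for f in files:
        augment(Image.open(f)).save(out / f.name)
```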

2.3. Lightweight Object Detection Method Based on Improved YOLOv8

To enhance detection accuracy and reduce the size of the YOLOv8 model for deployment on a mobile laser weeding platform, this study adopted the following improvement strategies. On the one hand, attention mechanism modules were added to the network backbone to help the model more accurately detect and locate weeds in images of the space between vegetable plants. On the other hand, to address the low efficiency and excessive storage consumption of object detection algorithms on embedded devices, a class knowledge distillation method based on transfer learning was proposed to train the improved model. This approach aimed to save computational power and compress the model size while maintaining performance, thereby shortening the model training period and inference time.

2.3.1. YOLOv8 Model

YOLOv8 was the latest state-of-the-art (SOTA) model released by the Ultralytics team at the time of this study, offering network architectures at several scales (N, S, M, L, X) to meet the needs of different scenarios. YOLOv8 was built upon the architectural framework of YOLOv5, incorporating the C2f module in place of the C3 module used in YOLOv5 to improve gradient flow. In addition, YOLOv8 retained the spatial pyramid pooling fast (SPPF) module, which enabled the network to better handle changes in object size in images. The neck adopted multi-scale feature fusion to combine feature maps from different stages of the backbone and enhance feature representation. During prediction, YOLOv8 employed an anchor-free detection method and a dynamic matching approach called task alignment learning (TAL). Based on the established index, high-quality prior boxes were dynamically selected as positive samples and incorporated into the loss function design, while a decoupled head structure separated the box loss from the classification (cls) loss.
The latest YOLOv8 model was an improved and more efficient version built upon previous generations of YOLO. These improvements allowed YOLOv8 to retain the advantages of the YOLO series' network architecture while making more refined adjustments and optimizations, making the model scalable across different scenarios.

2.3.2. Attention Mechanism

An attention mechanism is a technique that enhances the ability of deep learning models to focus on important features. The attention mechanism allows the network to generate a vector of weights that quantifies the importance of each position in the input image data. By introducing attention mechanisms, the model can selectively focus on relevant features of the target while filtering out unrelated background information. Squeeze-and-excitation (SE), convolutional block attention module (CBAM), coordinate attention (CA), and global attention mechanism (GAM) are four mainstream attention mechanism strategies.
(1)
SE module
SE Net is a typical channel attention mechanism known for its simplicity and ease of deployment. It enhances feature representation by adaptively emphasizing important feature channels and suppressing less important ones [32]. The core process of the SE mechanism includes global average pooling (GAP), which compresses each channel feature to a single value to evaluate its importance; subsequently, based on this assessment, a weight vector is generated through fully connected layers and activation functions, which is used to recalibrate the original feature maps. However, the SE module is computationally complex and focuses only on channel information, paying insufficient attention to spatial position information within the feature maps.
(2)
CBAM module
The introduction of CBAM addresses the shortcomings of SE Net. Compared to SE Net, CBAM combines a channel attention module (CAM) and a spatial attention module (SAM), emphasizing key features and their spatial localization and thus providing a more comprehensive and precise feature representation. The design of this structure takes into account both inter-channel relationships and correlations in the spatial dimension [33]. Additionally, CBAM can be flexibly integrated into existing CNN architectures at negligible additional parameter and computational cost (a minimal code sketch of this structure follows this list).
(3)
CA module
Compared to other types of attention mechanisms, CA not only captures differential information between channels but also acquires perceptual information at the directional and positional levels, allowing the detection algorithm to more accurately identify target regions. The computation process of the CA mechanism is divided into two parallel stages. First, the input feature map undergoes global average pooling separately in the X and Y directions. This is followed by concatenation, convolution, and non-linear transformation to obtain feature weights for both directions. Finally, these weights are fused with the original features, enhancing the representation power of the algorithm’s image features. Compared to CBAM, the CA module offers finer spatial localization, making it particularly suitable for tasks that require high spatial precision.
(4)
GAM module
As a global attention mechanism, GAM enhances network performance by suppressing information diffusion and reinforcing global interactions [34]. The overall structure of GAM is similar to that of CBAM, as it also employs both CAM and SAM, but the processing within the two modules differs. Considering the importance of cross-dimensional interaction, the GAM module mainly uses convolutional processing in its spatial attention and introduces a 3D permutation with a multi-layer perceptron in its channel attention to improve the model's understanding of image context information.
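The following PyTorch sketch illustrates the standard CBAM structure referenced in (2), with sequential channel and spatial attention. The reduction ratio and kernel size shown are the commonly used defaults from the original CBAM paper, not values reported in this study.

```python
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    """Channel attention: shared MLP over global average- and max-pooled descriptors."""
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Conv2d(channels, channels // reduction, 1, bias=False),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1, bias=False),
        )

    def forward(self, x):
        avg = self.mlp(torch.mean(x, dim=(2, 3), keepdim=True))
        mx = self.mlp(torch.amax(x, dim=(2, 3), keepdim=True))
        return x * torch.sigmoid(avg + mx)

class SpatialAttention(nn.Module):
    """Spatial attention: convolution over concatenated channel-wise mean and max maps."""
    def __init__(self, kernel_size: int = 7):
        super().__init__()
        self.conv = nn.Conv2d(2, 1, kernel_size, padding=kernel_size // 2, bias=False)

    def forward(self, x):
        avg = torch.mean(x, dim=1, keepdim=True)
        mx, _ = torch.max(x, dim=1, keepdim=True)
        return x * torch.sigmoid(self.conv(torch.cat([avg, mx], dim=1)))

class CBAM(nn.Module):
    """Convolutional block attention module: channel attention followed by spatial attention."""
    def __init__(self, channels: int, reduction: int = 16, kernel_size: int = 7):
        super().__init__()
        self.ca = ChannelAttention(channels, reduction)
        self.sa = SpatialAttention(kernel_size)

    def forward(self, x):
        return self.sa(self.ca(x))
```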

2.3.3. Class Knowledge Distillation

(1)
Transfer learning
Transfer learning is a common training technique in deep learning. It involves transferring the weight parameters of a model trained on a source domain to a new model that needs to learn a different target, which not only accelerates convergence of the new model but also helps mitigate overfitting to some extent [35]. The COCO dataset is a widely recognized benchmark in computer vision, containing more than 330,000 annotated images covering 80 object categories and 91 stuff categories. The selected network model can use pre-trained weights from the COCO dataset, and the network parameters can then be fine-tuned to adapt to the lettuce/weed dataset. By using transfer learning, random initialization of the network is skipped, thereby improving the efficiency of model training.
(2)
Knowledge distillation
Knowledge distillation is a technique that extracts knowledge from a large model and transfers it to a smaller, more lightweight model. In knowledge distillation, the large model is not deployed directly but serves as a “teacher” to guide the training of the smaller “student” model. Because terminal devices have limited computing power, the large model is compressed, and the guided small model saves computing resources and is easier to deploy. Compared with other compression methods, knowledge distillation preserves as much of the knowledge learned by the original model as possible and does not alter the structure of the original model; this knowledge is transferred to the compressed model, resulting in a much smaller and more lightweight model (a generic sketch of the standard distillation loss follows this list).
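For reference, the sketch below shows the standard soft-target knowledge distillation loss in which a teacher's softened outputs guide the student. It illustrates the general idea only; it is not the class knowledge distillation procedure adopted in this study, which is described in Section 2.3.4, and the temperature and weighting values are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits: torch.Tensor,
                      teacher_logits: torch.Tensor,
                      targets: torch.Tensor,
                      T: float = 4.0,
                      alpha: float = 0.5) -> torch.Tensor:
    """Standard soft-target distillation: KL divergence between temperature-softened
    teacher and student distributions, combined with the hard-label cross-entropy."""
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=1),
        F.softmax(teacher_logits / T, dim=1),
        reduction="batchmean",
    ) * (T * T)
    hard = F.cross_entropy(student_logits, targets)
    return alpha * soft + (1.0 - alpha) * hard
```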

2.3.4. Lightweight YOLOv8s-CBAM Object Detection Model

In this study, we improved the YOLOv8s model to develop a lightweight YOLOv8s-CBAM object detection model, as shown in Figure 5. To enhance the accuracy of lettuce and weed recognition, the CBAM attention mechanism was inserted before the SPPF module in the backbone. This allowed the model to learn the weights of each channel before multi-scale pooling, thereby reinforcing key information and reducing the impact of unnecessary features. To address the low efficiency and excessive storage consumption of object detection algorithms on embedded devices, a class knowledge distillation method based on transfer learning was proposed to train the YOLOv8s-CBAM model. The COCO128 object detection dataset (https://ultralytics.com/assets/coco128.zip, accessed on 11 October 2024) was used as the source domain to train the YOLOv8n model and allow it to learn complex features, and the pre-trained YOLOv8n weights were then loaded. The YOLOv8s network architecture with the embedded CBAM attention mechanism was defined as the configuration file. Finally, the custom lettuce-weed dataset was used as the target domain to train a model that combines the YOLOv8n weights with the YOLOv8s-CBAM structure. This approach gave the model the capability to recognize lettuce and weeds, resulting in a lightweight, high-performance object detection model, and compressed the model to save computational resources while preserving the performance of the original model.
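A minimal sketch of this training pipeline using the Ultralytics API is shown below. The file names yolov8s-cbam.yaml and lettuce_weed.yaml are assumptions for illustration, and the hyperparameters follow Table 2. Note that referencing a CBAM block in a model YAML additionally requires registering the module in the Ultralytics code base, which is not part of the stock library.

```python
from ultralytics import YOLO

# Assumed files: "yolov8s-cbam.yaml" defines YOLOv8s with a CBAM block before SPPF;
# "lettuce_weed.yaml" points to the 7:3 train/validation split of the annotated dataset.
model = YOLO("yolov8s-cbam.yaml").load("yolov8n.pt")  # transfer matching pre-trained YOLOv8n weights

model.train(
    data="lettuce_weed.yaml",
    epochs=200,      # values from Table 2
    batch=8,
    imgsz=640,
    optimizer="Adam",
    lr0=0.01,
    workers=8,
)

metrics = model.val()          # mAP@0.5 and mAP@0.5:0.95 on the validation split
model.export(format="onnx")    # optional export for deployment on the weeding platform
```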

2.4. Weed Position Information Extraction

Since the working principle of laser weeding relies on burning the apical meristem of the weed to inhibit its growth, an effective localization algorithm is crucial. Considering the opposite leaves and overall symmetrical morphology of purslane, together with the set laser spot diameter of 3 mm, it is simple and feasible to use the center point of the target detection frame to represent the apical meristematic tissue of the weed. The center point of the object detection box was therefore used to represent the position of the weed's top. The equations are as follows, where (X, Y) are the coordinates of the center of the weed's top, and x1, y1 and x2, y2 are the X and Y coordinates of the bottom-left and top-right corners of the predicted bounding box for the weed.
X = \frac{x_2 - x_1}{2} + x_1

Y = \frac{y_2 - y_1}{2} + y_1
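A direct translation of these equations into code is straightforward; the helper below is illustrative, and the variable names are assumptions.

```python
def box_center(x1: float, y1: float, x2: float, y2: float) -> tuple[float, float]:
    """Return the centre of a predicted bounding box, used here as a proxy for the
    apical meristem of a roughly symmetric purslane weed."""
    return (x2 - x1) / 2 + x1, (y2 - y1) / 2 + y1

# Hypothetical usage with an Ultralytics result object:
# for box in results[0].boxes.xyxy.tolist():
#     cx, cy = box_center(*box)
```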
As shown in Figure 6, the laser weeding device moved along the ridges, spanning the rows of lettuce. The lettuce field was divided into inter-row areas (white parts) and intra-row areas (blue boxes), with the area around the lettuce plants defined as the safety zone (yellow boxes). Finally, the apical center coordinates of all weeds between two lettuce plants were output as a point bitmap and a position-information TXT file, which provided inputs for subsequent operations and ensured readability.

2.5. Experimental Configuration

To ensure fairness in model comparisons and the reliability of training outcomes, all models were trained and verified multiple times using the same dataset and equipment, with a consistent set of training parameters. Detailed information on the software and hardware configuration and the model parameters is shown in Table 2.

2.6. Evaluation Metrics

To comprehensively assess the performance of the model in weed detection, this study established a set of evaluation metrics, including precision, recall, mean average precision (mAP), intersection over union (IoU), and F1 score. Among these, mAP is the primary evaluation metric in object detection, typically reported as mAP@0.5 and mAP@0.5:0.95, with higher values indicating better algorithm performance. Additionally, the loss value is used to evaluate the error between predicted and actual values: the training loss reflects the model's ability to fit the dataset, while the validation loss reflects its generalization capability. The loss value comprises three components: obj-loss (objectness loss), cls-loss (classification loss), and box-loss (bounding box regression loss). The equations for calculating each metric are as follows:
Precision = \frac{TP}{TP + FP} \times 100\%

Recall = \frac{TP}{TP + FN} \times 100\%

mAP = \frac{\sum_{i=1}^{C} AP_i}{C} \times 100\%

IoU(E, F) = \frac{|E \cap F|}{|E \cup F|}

F1\,score = \frac{2 \times Precision \times Recall}{Precision + Recall}

Loss = obj\_loss + cls\_loss + box\_loss
In the equations, TP stands for true positives, which is the number of correctly detected targets; FP stands for false positives, which represents the number of instances incorrectly identified as targets; FN stands for false negatives, which represents the number of targets that were not detected but should have been. Average precision (AP) is the area under the precision-recall curve; C is the total number of classes in the object detection task. E represents the ground truth bounding boxes annotated by humans, and F represents the predicted bounding boxes generated by the model. For mAP@0.5, if the intersection over union (IoU) value of the model’s predicted region is above the preset threshold of 0.5, then the prediction is considered a true positive (TP). Similarly, mAP@0.5:0.95 represents the average mAP over a range of IoU thresholds from 0.5 to 0.95 (with a step size of 0.05), making this metric more stringent.
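For completeness, the sketch below computes IoU and the precision/recall/F1 metrics defined above from raw counts; mAP additionally requires integrating the precision-recall curve per class and is normally taken from the detection framework's validation routine rather than computed by hand.

```python
def iou(box_a, box_b):
    """Intersection over union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

def precision_recall_f1(tp: int, fp: int, fn: int):
    """Precision, recall, and F1 score from true/false positive and false negative counts."""
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1
```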

3. Results

3.1. Results of Model Training

To evaluate the performance of the YOLOv8s-CBAM model in lettuce and weed object detection, this study trained different object detection models under the same deep learning environment. The training results of the different models for lettuce and weed detection are shown in Table 3.
The training results showed that the YOLOv8s model excelled at detecting lettuce and weed targets, outperforming the other YOLO variants. After the attention mechanism was introduced, the overall performance of the model was better than that of the plain YOLOv8s network, showing a better lettuce-weed recognition effect. Comparison of the four attention mechanism variants shows that CBAM yields the greatest improvement: the comprehensive mAP and F1 score indicators of the YOLOv8s-CBAM model improved the most, and its precision increased by 0.6% compared with the original model. However, model complexity is generally proportional to the required computational resources and storage space. With its more advanced architecture, the YOLOv8s-CBAM model improved accuracy but also increased the number of floating-point operations and the model size.
In object detection, the use of neural networks is an end-to-end process, similar to a black box, which makes it difficult to directly judge whether the network has been optimized. To verify whether the model with the introduced attention mechanism is more focused on the features of the weeds and lettuces themselves, this study employed the class activation map (CAM) visualization algorithm to intuitively demonstrate whether the improved network pays more attention to the target features, thereby evaluating the effectiveness of the network optimization. As shown in Figure 7, the visualization heatmaps provided a more intuitive explanation of the effects of the four attention mechanisms, presenting the visualization results of lettuce-weed images after detection by the improved model.
From the visualization heatmaps, it can be observed that the YOLOv8s model was distracted by background information not related to the features of weeds and lettuces. After incorporating attention mechanisms, different neural networks exhibited varying degrees of response to the lettuce and weed areas in the images. In particular, the YOLOv8s-CBAM model showed a significantly higher focus on lettuce and weeds, which can be directly observed. This further proved that the introduction of the CBAM module enabled the YOLOv8s to reduce background interference, thereby effectively extracting the appearance features of lettuces and weeds.

3.2. Results of Model Lightweighting

The influence of the attention mechanism on the YOLO model was two-sided: while it improved detection accuracy, it also increased model complexity and consumed considerable memory and computing resources. Therefore, to meet the real-time, high-precision detection requirements of laser weeding, this study proposed a class knowledge distillation strategy based on the concepts of transfer learning and knowledge distillation. This strategy aimed to achieve model lightweighting while maintaining performance as much as possible, enabling efficient and accurate object detection in resource-constrained scenarios. To determine the effectiveness of the class knowledge distillation strategy in object detection tasks, comparative experiments were conducted, with the results shown in Table 4 and Table 5.
By comparing the speed and performance metrics, it can be observed that the lightweight YOLOv8s-CBAM model, through the use of class knowledge distillation techniques, has achieved a significant reduction in both training and inference times while maintaining high performance, and it has reached the same model size as YOLOv8n. Compared to the YOLOv8s-CBAM, the lightweight model showed only a 0.6% decrease in accuracy, but its size was reduced by 16.8 MB. Additionally, the number of parameters and the floating-point operations (FLOPs) were one-quarter of those of the original model. This demonstrated its advantage in resource-constrained environments, making it suitable for the vision system of the laser weeding simulation platform.
Figure 8 shows the training and validation losses of the models, and Figure 9 shows the inference performance in a simulated weeding scenario. The lightweight YOLOv8s-CBAM model had the fastest convergence of training and validation loss, demonstrating good fitting and generalization capabilities. As shown in Figure 9, during the inference phase the detection performance of this model was comparable to that of the uncompressed model and outperformed the similarly sized YOLOv8n, further proving the effectiveness of this strategy in the weed detection task for laser weeding applications.

3.3. Lettuce Intra-Row Laser Weeding Experiment

3.3.1. Pre-Experiment Preparation

Although lasers can quickly kill weeds, the effectiveness of laser weeding is influenced by the laser output power and the duration of exposure. To determine the appropriate laser output power and the exposure time for individual weeds, this study conducted a preliminary laser weeding experiment, as shown in Figure 10. The results of the pre-experiment indicated that a 450 nm blue semiconductor laser could kill weeds within 1 s at a laser output power of 30 W without harming the lettuce.

3.3.2. Results of Laser Weeding Experiment

Two sets of experimental samples were tested by using the system, with the test results shown in Figure 11. In the case of low-density weeds between plants, there were no instances of misidentification or omission, and the average time to complete laser weeding was 9.65 s. In the high-density weed test, the system missed one weed, and the average execution speed was about 21.27 FPS. Overall, the system performed well in scenarios with low-density weeds, but there was still room for optimization in high-density environments.
Figure 12 and Table 6 show the actual weed control effect of the simulated laser weeding prototype. The laser destroyed the apical meristems of the weeds, preventing them from regenerating and reproducing, and thus achieved a good weeding effect. However, laser weeding requires high precision in weed localization: if the laser burn point is off target, it may fail to completely stop weed growth and could even damage the crops. In the two sets of laser weeding experiments, the system achieved 100% weed detection and laser removal in the low-density weed scenario; in the high-density scenario, one weed was missed because it was not detected by the system, and two weeds were not successfully removed because the laser ablation point was offset. As can be seen in Figure 11, the only undetected weed in the high-density environment was small and close to a lettuce plant, which may be because the weed target was too small and similar in color to the soil background; the offset of the laser ablation point may be due to servo delay. In summary, the overall weed removal success rate of this laser weed control experiment was 76.9% and the average positional deviation was 0.6 ± 0.2 cm, which basically met the expected results.

4. Discussion

Real-time and accurate target detection of crops and weeds is a prerequisite for precise weeding [36]. The integration of object detection methods such as Faster R-CNN, DETR, and YOLO with advanced weeding techniques has propelled the modernization and intelligence of agriculture. In this study, deep learning technology was used to improve the YOLOv8 network, and a lightweight YOLOv8s-CBAM target detection model was deployed on the laser weeding device, achieving good weeding results. In crop and weed target detection, a model that is both lightweight and highly accurate has long been a goal of researchers, but the two are difficult to achieve simultaneously. Fan et al. [37] developed a new Faster R-CNN detection model to distinguish various complex growth states of weeds and cotton seedlings. Their results showed an average inference time of 0.165 s and a mAP@0.5 of 98.43% for weed and cotton seedling detection; compared to the improved YOLOv8s-CBAM in this study, that model was relatively heavier and less efficient on mobile platforms. Li et al. [38] constructed a YOLOv5-MobileNet-SE model for real-time recognition of field weeds. The model size was only 7.5 MB, meeting the requirements for lightweight detection, but the mAP@0.5 for weed detection was only 87%, significantly lower than the 98.9% achieved by the lightweight YOLOv8s-CBAM in this study. Currently, direct detection methods are commonly used in crop weeding to identify and locate crops and weeds, and they require large datasets for training. However, the variety of field weeds and the complex, variable conditions of actual agricultural production environments present significant challenges for crop and weed detection. Sun et al. [39] proposed an indirect cabbage weed detection method that detects crops by generating bounding boxes that cover them, with any green pixels outside the bounding boxes treated as weeds. Experimental results showed that this indirect detection method was more effective than direct weed detection and did not require a large dataset, offering a new approach to the target detection of crops and weeds.
Laser weeding is a novel thermal weeding technology that can focus energy precisely on specific parts of weeds, causing them to rapidly disintegrate and die [40]. Compared to traditional mechanical and chemical weeding methods, laser weeding is environmentally friendly, efficient, flexible, and automated [41]. During laser weeding, factors such as the treatment site on the plant, weed species, laser power, laser wavelength, and treatment duration all influence the weeding effectiveness. Therefore, optimizing laser weeding parameters and constructing laser weeding systems are significant challenges currently facing the technology. Lünsmann et al. [42] investigated the impact of plant treatment points on weeding effectiveness in laser weeding. Through experimental comparisons, they found that irradiating the base of the stem caused more severe damage, leading to plant death or growth inhibition, than irradiating the apical meristems. Their results also showed that the high temperatures generated during laser irradiation could cause the water in plant cells to boil and even evaporate. Marx et al. [43] developed a laser damage model for two types of weeds (the monocot ECHCG and the dicot AMARE) that predicts the probability of successful weed control based on factors such as weed species, growth stage, laser power, laser spot size, and position. This model allows a laser weeding system to adjust parameters according to actual conditions and thus achieve precise weed control. Model validation indicated an accuracy of 93% for ECHCG and 84% for AMARE. This research provides an important theoretical foundation for the development of laser weeding technology, but further evaluation of its practical application is still needed. Additionally, the mechanisms by which laser irradiation affects crop growth and the field environment are not yet fully understood and warrant further research.
The weed dataset used in this study is limited in both variety and quantity. Although the CycleGAN method was employed to augment the dataset, it may still affect the model’s generalization ability. Additionally, the central localization algorithm adopted in this research is only suitable for symmetrically shaped weeds. When more types of weeds are introduced, this localization method may not be rigorous, which is a primary reason for the laser irradiation point offset in actual weeding experiments. Therefore, in future research, methods such as YOLO-pose anchor regression and refined skeleton extraction algorithms will be considered to more accurately locate weeds.

5. Conclusions

Overall, laser weeding is a highly efficient, environmentally friendly, and promising weed control technology with significant efficiency and environmental advantages over traditional mechanical and chemical weeding, and the real-time, high-precision target detection of the YOLO model provides strong technical support for its practical application. This study enhanced the YOLOv8 architecture by incorporating the CBAM attention mechanism and employed class knowledge distillation to compress the model, ultimately producing the lightweight YOLOv8s-CBAM model. The lightweight YOLOv8s-CBAM model has a size of only 6.2 MB, yet it achieved an mAP@0.5 of 98.9%, realizing both performance enhancement and model lightweighting. This research completed the design and construction of a laser weeding prototype, developed the corresponding control program, and conducted lettuce intra-row laser weeding experiments in a laboratory setting that simulated field conditions. The experimental results showed that the laser weeding system achieved a 100% detection and weeding success rate in low-density weed scenarios. Even with high-density weed distributions, the system successfully identified and located 88.9% of the intra-row weeds and completed the laser weeding task. The overall weeding success rate for the experiments was 76.9%, demonstrating the value and potential of this work in practical applications. However, given the current state of the technology and the challenges facing the industry, the laser weed control system still needs to be improved in the following aspects: (1) establishing diverse and rich datasets and optimizing deep learning models to improve the accuracy of weed identification; (2) exploring the effects of different laser action points (the apical meristematic tissue of weeds, weed stalks, weed emergence points, etc.) on the weed control effect and further optimizing the weed localization methods; (3) exploring the effects of different laser powers on crops and soil and determining the safe laser dose; and (4) designing control algorithms to achieve tunable laser power output, minimizing energy loss while ensuring safety. Further research on laser weed control will be carried out with a view to realizing practical applications.

Author Contributions

Q.W.: writing—review & editing, writing—original draft, visualization, validation, methodology, investigation, formal analysis, data curation, conceptualization. Y.-H.W.: writing—original draft, methodology, investigation. W.-F.D.: visualization, validation, data curation, conceptualization. W.-H.S.: writing—review & editing, validation, supervision, project administration, methodology, investigation, funding acquisition, data curation, conceptualization. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the National Natural Science Foundation of China (Funding No. 32371991).

Data Availability Statement

The data used in this study are available from the corresponding author upon reasonable request.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Billy Graham, R.; Yu, Z.; Cristiano André da, C.; Mohammed Raju, A.; Thomas, J.P.; Amit, J.J.; Kirk, H.; Xiaorui, S. Palmer amaranth identification using hyperspectral imaging and machine learning technologies in soybean field. Comput. Electron. Agric. 2023, 215, 108444. [Google Scholar] [CrossRef]
  2. Zou, K.; Ge, L.; Zhang, C.; Yuan, T.; Li, W. Broccoli Seedling Segmentation Based on Support Vector Machine Combined with Color Texture Features. IEEE Access 2019, 7, 168565–168574. [Google Scholar] [CrossRef]
  3. Gai, J.; Tang, L.; Steward, B.L. Automated crop plant detection based on the fusion of color and depth images for robotic weed control. J. Field Robot. 2020, 37, 35–52. [Google Scholar] [CrossRef]
  4. Lan, T.; Li, D.; Zhang, Z.; Yu, G.; Jin, X. Analysis on research status and trend of intelligent agricultural weeding robot. Comput. Meas. Control 2021, 29, 1–7. [Google Scholar]
  5. Slaughter, D.C.; Giles, D.; Downey, D. Autonomous robotic weed control systems: A review. Comput. Electron. Agric. 2008, 61, 63–78. [Google Scholar] [CrossRef]
  6. Valeria, P.; Alvaro Santiago, L.; Valeria, E.P.; Andrea, K.M.; Hugo, R.P. Herbicide resistant weeds: A call to integrate conventional agricultural practices, molecular biology knowledge and new technologies. Plant Sci. 2020, 290, 110255. [Google Scholar] [CrossRef]
  7. Maor, M.; Zvi, P.; Ran Nisim, L. Herbicide Resistance in Weed Management. Agronomy 2021, 11, 280. [Google Scholar] [CrossRef]
  8. He, B.; Hu, Y.; Wang, W.; Yan, W.; Ye, Y. The Progress towards Novel Herbicide Modes of Action and Targeted Herbicide Development. Agronomy 2022, 12, 2792. [Google Scholar] [CrossRef]
  9. Stephen, B.P.; Qin, Y. Evolution in Action: Plants Resistant to Herbicides. Annu. Rev. Plant Biol. 2010, 61, 317–347. [Google Scholar] [CrossRef]
  10. Lisek, J. Diversity of Summer Weed Communities in Response to Different Plum Orchard Floor Management in-Row. Agronomy 2023, 13, 1421. [Google Scholar] [CrossRef]
  11. Jannis, M.; Gerassimos, G.P.; Benjamin, K.; Dionisio, A.; Roland, G. Sensor-based mechanical weed control: Present state and prospects. Comput. Electron. Agric. 2020, 176, 105638. [Google Scholar] [CrossRef]
  12. Raja, R.; Nguyen, T.T.; Slaughter, D.C.; Fennimore, S.A. Real-time weed-crop classification and localisation technique for robotic weed control in lettuce. Biosyst. Eng. 2020, 192, 257–274. [Google Scholar] [CrossRef]
  13. Pérez-Ruíz, M.; Slaughter, D.C.; Fathallah, F.A.; Gliever, C.J.; Miller, B.J. Co-robotic intra-row weed control system. Biosyst. Eng. 2014, 126, 45–55. [Google Scholar] [CrossRef]
  14. Zhu, H.; Zhang, Y.; Mu, D.; Bai, L.; Zhuang, H.; Li, H. YOLOX-based blue laser weeding robot in corn field. Front. Plant Sci. 2022, 13, 1017803. [Google Scholar] [CrossRef] [PubMed]
  15. Xu, Y.; Liu, Z.; Li, J.; Huang, D.; Chen, Y.; Zhou, Y. Real-Time Detection and Localization of Weeds in Dictamnus dasycarpus Fields for Laser-Based Weeding Control. Agronomy 2024, 14, 2363. [Google Scholar] [CrossRef]
  16. Deng, X.W.; Qi, L.; Ma, X.; Jiang, Y.; Chen, X.S.; Liu, H.Y.; Chen, W.F. Recognition of weeds at seedling stage in paddy fields using multi-feature fusion and deep belief networks. Trans. Chin. Soc. Agric. Eng. 2018, 34, 165–172. [Google Scholar]
  17. Mu, D. Research on the Actuator and Recognition Algorithm of Laser Weeding Robot. Master’s Thesis, Kunming University of Science and Technology, Kunming, China, 2022. [Google Scholar]
  18. Mao, W.; Cao, J.; Jiang, H.; Wang, Y.; Zhang, X. In-field weed detection method based on multi-features. Trans. CSAE 2007, 23, 206–209. [Google Scholar]
  19. Zhang, C.; Huang, X.; Liu, W.; Zhang, Y.; Li, N.; Zhang, J.; Li, W. Information acquisition method for mechanical intra-row weeding robot. Trans. Chin. Soc. Agric. Eng. 2012, 28, 142–146. [Google Scholar]
  20. Zhu, W.; Zhu, X. The Application of Support Vector Machine in Weed Classification. In Proceedings of the 2009 IEEE International Conference on Intelligent Computing and Intelligent Systems, Shanghai, China, 20–22 November 2009; pp. 532–536. [Google Scholar]
  21. Tang, J.-L.; Chen, X.-Q.; Miao, R.-H.; Wang, D. Weed detection using image processing under different illumination for site-specific areas spraying. Comput. Electron. Agric. 2016, 122, 103–111. [Google Scholar] [CrossRef]
  22. Sujaritha, M.; Annadurai, S.; Satheeshkumar, J.; Sharan, S.K.; Mahesh, L. Weed detecting robot in sugarcane fields using fuzzy real time classifier. Comput. Electron. Agric. 2017, 134, 160–171. [Google Scholar] [CrossRef]
  23. James, Y.K.; Balamurugan, R.; Madhava Sarma, V.; Umamaheswara Rao, T. Weed Identification Using U-Net Machine Learning Model and SAM Segmentation. In Proceedings of the 2024 ASABE Annual International Meeting, Anaheim, CA, USA, 28–31 July 2024. [Google Scholar] [CrossRef]
  24. Zhou, Q.; Li, X.; He, L.; Yang, Y.; Cheng, G.; Tong, Y.; Ma, L.; Tao, D. TransVOD: End-to-End Video Object Detection with Spatial-Temporal Transformers. IEEE Trans. Pattern Anal. Mach. Intell. 2023, 45, 7853–7869. [Google Scholar] [CrossRef] [PubMed]
  25. Zhang, J.; Gong, J.; Zhang, Y.; Mostafa, K.; Yuan, G. Weed Identification in Maize Fields Based on Improved Swin-Unet. Agronomy 2023, 13, 1846. [Google Scholar] [CrossRef]
  26. Cai, Y.; Zeng, F.; Xiao, J.; Ai, W.; Kang, G.; Lin, Y.; Cai, Z.; Shi, H.; Zhong, S.; Yue, X. Attention-aided semantic segmentation network for weed identification in pineapple field. Comput. Electron. Agric. 2023, 210, 107881. [Google Scholar] [CrossRef]
  27. Joseph, R.; Santosh, D.; Ross, G.; Ali, F. You Only Look Once: Unified, Real-Time Object Detection. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA, 27–30 June 2016. [Google Scholar] [CrossRef]
  28. Hu, D.; Ma, C.; Tian, Z.; Shen, G.; Li, L. Rice Weed Detection Method on YOLOv4 Convolutional Neural Network. In Proceedings of the 2021 International Conference on Artificial Intelligence, Big Data and Algorithms (CAIBDA), Xi’an, China, 28–30 May 2021; pp. 41–45. [Google Scholar]
  29. Chen, J.; Wang, H.; Zhang, H.; Luo, T.; Wei, D.; Long, T.; Wang, Z. Weed detection in sesame fields using a YOLO model with an enhanced attention mechanism and feature fusion. Comput. Electron. Agric. 2022, 202, 107412. [Google Scholar] [CrossRef]
  30. Ying, B.; Xu, Y.; Zhang, S.; Shi, Y.; Liu, L. Weed Detection in Images of Carrot Fields Based on Improved YOLO v4. Trait. Du Signal 2021, 38, 341–348. [Google Scholar] [CrossRef]
  31. Hua, Y.; Xu, H.; Liu, J.; Quan, L.; Wu, X.; Chen, Q. A peanut and weed detection model used in fields based on BEM-YOLOv7-tiny. Math. Biosci. Eng. 2023, 31, 1. [Google Scholar] [CrossRef]
  32. Hu, J.; Shen, L.; Sun, G. Squeeze-and-excitation networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–23 June 2018; pp. 7132–7141. [Google Scholar]
  33. Woo, S.; Park, J.; Lee, J.-Y.; Kweon, I.S. Cbam: Convolutional Block Attention Module. In Proceedings of the European conference on computer vision (ECCV), Munich, Germany, 8–14 September 2018; pp. 3–19. [Google Scholar]
  34. Liu, Y.; Shao, Z.; Hoffmann, N. Global attention mechanism: Retain information to enhance channel-spatial interactions. arXiv 2021, arXiv:2112.05561. [Google Scholar]
  35. Pan, S.J.; Yang, Q. A survey on transfer learning. IEEE Trans. Knowl. Data Eng. 2009, 22, 1345–1359. [Google Scholar] [CrossRef]
  36. Hu, R.; Su, W.-H.; Li, J.-L.; Peng, Y. Real-time lettuce-weed localization and weed severity classification based on lightweight YOLO convolutional neural networks for intelligent intra-row weed control. Comput. Electron. Agric. 2024, 226, 109404. [Google Scholar] [CrossRef]
  37. Fan, X.; Chai, X.; Zhou, J.; Sun, T. Deep learning based weed detection and target spraying robot system at seedling stage of cotton field. Comput. Electron. Agric. 2023, 214, 108317. [Google Scholar] [CrossRef]
  38. Li, H.; Guo, C.; Yang, Z.; Chai, J.; Shi, Y.; Liu, J.; Zhang, K.; Liu, D.; Xu, Y. Design of field real-time target spraying system based on improved YOLOv5. Front. Plant Sci. 2022, 13, 1072631. [Google Scholar] [CrossRef] [PubMed]
  39. Sun, H.; Liu, T.; Wang, J.; Zhai, D.; Yu, J. Evaluation of two deep learning-based approaches for detecting weeds growing in cabbage. Pest Manag. Sci. 2024, 80, 2817–2826. [Google Scholar] [CrossRef] [PubMed]
  40. Martin, V.B.; Christian, M.; Fabienne, B.; Dominique, M.F.; Tammo, R.; Bernhard, S. Thermal weed control technologies for conservation agriculture—A review. Weed Res. 2020, 60, 241–250. [Google Scholar] [CrossRef]
  41. Christian, A.; Karsten, S.; Mahin, S. Laser Weeding with Small Autonomous Vehicles: Friends or Foes? Front. Agron. 2022, 4, 841086. [Google Scholar] [CrossRef]
  42. Lünsmann, L.A.; Lautenschläger, M.; Schmidt, T.; Ripken, T.; Wollweber, M. Investigating the treatment point of plants for laser weeding. In Proceedings of the Photonic Technologies in Plant and Agricultural Science, San Francisco, CA, USA, 27 January–1 February 2024; pp. 32–38. [Google Scholar]
  43. Marx, C.; Barcikowski, S.; Hustedt, M.; Haferkamp, H.; Rath, T. Design and application of a weed damage model for laser-based weed control. Biosyst. Eng. 2012, 113, 148–157. [Google Scholar] [CrossRef]
Figure 1. Laser weeding system architecture.
Figure 2. Laser weeding decision control flowchart.
Figure 3. Photograph of the laser weeding simulation prototype; (a) laser weed control platform; (b) laser weeding actuators.
Figure 4. (a) Example images of lettuce and purslane weeds; (b) labelling the dataset using LabelImg 1.8.6.
Figure 5. YOLOv8s-CBAM network architecture diagram.
Figure 6. Schematic diagram of lettuce field area division.
Figure 7. Model attention visualization heatmaps. The class activation map algorithm calculates the attention scores of different regions in an image for a specific class by combining the convolutional feature maps from the last layer of the network with the weights of the classifier. These attention scores are visualized as heatmaps, where a deeper red color indicates higher attention or relevance to the class.
Figure 8. (a) Model training loss value comparison; (b) model validation loss value comparison.
Figure 9. The actual scene detection effect of the models; (a) YOLOv8n; (b) YOLOv8s; (c) YOLOv8s-CBAM; (d) lightweight YOLOv8s-CBAM.
Figure 10. Pre-experiment of laser weeding; (a) 30 W power laser emission; (b) 50 W power laser emission.
Figure 11. Actual decision-making effect of the system.
Figure 12. Actual weed control effect of the lettuce intra-row laser weeding system; (a) original figure; (b) results of target detection; (c) results of laser weeding; (d) local zoom display.
Table 1. Core functional parameters of the laser.
Name | Parameter
Output power | 50 W
Output power adjustment range | 10~100%
Central wavelength | 450 ± 10 nm
Fiber length | 5 m
Dimensions | 440 × 436 × 88
Cooling method | Air-cooled
Pulse output frequency | 1~1000 Hz
Control modes | RS232/AD/Manual setting
Protection and closed-loop feedback | Overvoltage/Overcurrent/Overtemperature protection
Table 2. Software and hardware configuration, model parameters.
Configuration | Name | Details
Hardware configuration | CPU | Intel(R) Core(TM) i9-14900K
Hardware configuration | RAM size | 64 GB
Hardware configuration | GPU | NVIDIA GeForce RTX4080
Hardware configuration | VRAM size | 16 GB
Software configuration | Operating system | Windows 11
Software configuration | Development platform | Visual Studio Code 2022
Software configuration | Programming language | Python 3.9.18
Software configuration | Deep learning framework | PyTorch 2.1.2
Software configuration | CUDA version | 12.1
Training hyperparameters | Epochs | 200
Training hyperparameters | Batch size | 8
Training hyperparameters | Image size | 640 × 480
Training hyperparameters | Number of workers | 8
Training hyperparameters | Optimizer | Adam
Training hyperparameters | Learning rate | 0.01
Table 3. Training results of different models for lettuce and weed object detection.
Model | mAP@0.5 | Precision | F1 Score | Recall | Model Size
YOLOv5 | 98.7% | 97.1% | 97.1% | 97.2% | 14.3 MB
YOLOv6 | 98.4% | 95.4% | 95.9% | 96.5% | 18.5 MB
YOLOv7 | 98.5% | 95.8% | 96.3% | 96.8% | 74.8 MB
YOLOv8s | 98.8% | 97.6% | 97.4% | 97.3% | 22.5 MB
YOLOv8s-SE | 98.8% | 98.3% | 97.4% | 97.3% | 22 MB
YOLOv8s-CBAM | 98.9% | 98.2% | 97.5% | 96.7% | 22.5 MB
YOLOv8s-CA | 98.8% | 97.8% | 97% | 96.9% | 22 MB
YOLOv8s-GAM | 98.8% | 95.8% | 96.8% | 96.2% | 34.8 MB
Table 4. Comparison results of model speed metrics.
Model | Model Size | Number of Parameters | Floating Point Operations | Training Time | Inference Time
YOLOv8n | 6.2 MB | 3,006,038 | 8.1 G | 1.355 h | 60.7 ms
YOLOv8s | 22.5 MB | 11,126,358 | 28.4 G | 1.301 h | 74.0 ms
YOLOv8s-CBAM | 23 MB | 11,389,112 | 28.7 G | 1.317 h | 73.0 ms
Lightweight YOLOv8s-CBAM | 6.2 MB | 3,006,038 | 8.1 G | 1.209 h | 60.4 ms
Table 5. Comparison results of model performance metrics.
Model | Precision | Recall | mAP@0.5 | mAP@0.5:0.95
YOLOv8n | 97.2% | 96.2% | 98.5% | 96.1%
YOLOv8s | 97.6% | 97.3% | 98.8% | 96.1%
YOLOv8s-CBAM | 98.2% | 96.9% | 98.9% | 96.3%
Lightweight YOLOv8s-CBAM | 97.6% | 96.6% | 98.9% | 96.5%
Table 6. Test results of laser weed control experiment.
Weed Density | FPR | FNR | Target Detection Success Rate | Overall Weeding Success Rate | Average Positional Deviation
Low density | 0 | 0 | 100% | 76.9% (overall) | 0.6 ± 0.2 cm (overall)
High density | 0 | 9.09% | 90.11% | |
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
