Article

Intelligent Rice Field Weed Control in Precision Agriculture: From Weed Recognition to Variable Rate Spraying

1 School of Information and Electrical Engineering, Shenyang Agricultural University, Shenyang 110866, China
2 National Digital Agriculture Regional Innovation Center (Northeast), Shenyang 110866, China
3 Key Laboratory of Smart Agriculture Technology in Liaoning Province, Shenyang 110866, China
4 Key Laboratory of Smart Agriculture in the South China Tropical Region, Ministry of Agriculture and Rural Affairs, Guangzhou 510640, China
* Author to whom correspondence should be addressed.
Agronomy 2024, 14(8), 1702; https://doi.org/10.3390/agronomy14081702
Submission received: 4 July 2024 / Revised: 29 July 2024 / Accepted: 31 July 2024 / Published: 2 August 2024
(This article belongs to the Section Precision and Digital Agriculture)

Abstract

A precision agriculture approach that uses drones for crop protection and variable rate application has become the main method of rice weed control, but it suffers from excessive spraying, which can pollute soil and water environments and harm ecosystems. This study proposes a method to generate variable spray prescription maps based on the actual distribution of weeds in rice fields and to use DJI plant protection UAVs to perform automatic variable spraying operations according to those prescription maps, achieving precise pesticide application. We first construct the YOLOv8n-DT model by transferring the "knowledge features" learned by the larger YOLOv8l model, with its strong feature extraction capability, to the smaller YOLOv8n model through knowledge distillation. We use this model to identify weeds in the field and generate an actual distribution map of rice field weeds from the recognition results. The number of weeds in each experimental plot is counted, and the specific amount of pesticide for each plot is determined from the weed count and the spraying strategy proposed in this study. Variable spray prescription maps are then generated accordingly. DJI plant protection UAVs perform automatic variable spraying operations based on the prescription maps. Water-sensitive papers are used to collect droplets during the automatic variable operation of the UAVs, and the variable spraying effect is evaluated through droplet analysis. YOLOv8n-DT improved model accuracy by 3.1% while keeping the number of parameters constant, and its accuracy in identifying rice field weeds reached 0.82, close to that of the teacher network. Compared with the traditional extensive spraying method, the approach in this study saves approximately 15.28% of herbicide. This study demonstrates a complete workflow from UAV image acquisition to the evaluation of the variable spraying effect of plant protection UAVs. The proposed method may provide an effective solution for balancing the use of chemical herbicides with the protection of ecological safety.

1. Introduction

Weeds have strong adaptability to the environment and can compete with crops for nutrient resources and growth space in the field, leading to a decline in crop quality and yield [1,2]. When there are too many weeds in the field, especially in rice fields with dense weed growth, a humid and enclosed microenvironment forms, creating favorable conditions for the breeding of pests and diseases [3]. Weed control itself also adds to agricultural production costs. For example, in the Australian grain production system, the annual cost attributable to weeds is as high as USD 3.3 billion, of which USD 2.6 billion is for weed control and USD 700 million is for yield losses [4].
In recent decades, extensive herbicide spraying over large areas in weed control operations has caused herbicide residues, which not only pollute the soil and water environment, causing damage to the ecosystem, but also adversely affect human health through environmental media and food chain transmission [5]. If the overreliance and abuse of chemical herbicides continues, it may lead to serious ecological and health safety issues [6].
Precision spraying technology provides an effective solution for balancing the use of chemical herbicides with the protection of ecological safety. Precision spraying combines advanced target recognition and automatic control technologies to apply herbicides only to the areas or plants that require weed control [7,8]. The goal is to maximize the reduction in pesticide use and minimize environmental pollution while ensuring the effectiveness of the application. Specific forms of precision spraying include site-specific application (SSA) and variable rate technology (VRT) [9,10].
Site-specific application (SSA) refers to an operation mode that precisely applies pesticides to specific targets based on the spatial locations of weeds and other targets in crop fields [11]. The key is to first use machine vision, remote sensing, and other technologies to detect and locate field targets in real time, and then drive the spraying device through the navigation control system to spray precisely at the determined target locations. At the same time, the application rate and the spray droplet size are adjusted according to the size, density, and other characteristics of different targets [12,13]. The site-specific application mode is better suited to ground-based weeding robot platforms, mainly because ground robots work at a lower height with higher sensor resolution and a wider field of view, which favors high-precision recognition and positioning of individual weed patches; their movement is also more stable, enabling precise spraying of each target [14,15]. In recent years, scholars have developed various site-specific spraying robots based on machine vision that identify weeds between crop rows in real time and spray them precisely. Utstumo et al. verified through indoor pot experiments that a "Drop-on-Demand" (DoD) system can effectively control weeds. On this basis, they developed a weeding robot based on machine vision. The robot obtains information about weeds around radishes through machine vision, classifies the weeds, and applies pesticides on demand according to their quantity and type. Field experiments showed that, compared with traditional spraying methods, the weeding robot can save more than 73% of pesticide, effectively reducing waste [16]. Li et al. developed a site-specific spraying plant protection vehicle for weeds in soybean fields. The equipment uses deep learning to identify and locate weeds in the field and switches 20 solenoid valves on and off through a microprocessor to spray weeds between soybean rows precisely. Field experiments showed a weed recognition rate above 89% in the field environment, a maximum operating speed of 4 km/h, and a spraying accuracy above 79% [17].
Variable rate technology (VRT) uses global satellite positioning systems (GPSs), geographic information systems (GISs), and other technologies to divide fields into multiple zones with different spraying requirements [18]. According to the spraying demands of each zone, the spraying rate of the application device is dynamically adjusted through a precision control system to achieve variable application of pesticides [19]. The characteristics of rice fields, such as wet and soft soil, low-lying terrain, and narrow space, impose limitations on traditional machinery for site-specific weeding operations. In this context, unmanned aerial vehicles (UAVs) have demonstrated unique applicability, leveraging their agile maneuverability. Plant protection UAVs are equipped with high-precision global navigation satellite systems (GNSSs), which enable precise navigation over field zones. Moreover, UAVs spray continuously and can use pulse width modulation (PWM) to adjust the opening frequency and duty cycle of the spraying solenoid valve, thereby achieving variable rate spray control. Plant protection UAVs operating in variable rate spraying mode are therefore well suited to rice field weeding operations.
Huang et al. used a fully convolutional network (FCN) to perform semantic segmentation on rice field remote sensing images collected by UAVs and generated a weed distribution map of the entire field from the segmentation results. They then divided the distribution map into multiple operation grids and classified each grid into spraying and non-spraying areas according to a pre-set weed threshold, thereby generating a precise spraying prescription map for the entire field. The experimental results showed that the weed recognition accuracy of this method reached 83.6%, and the generated prescription map could save 35.5% of the spraying amount [20]. Wen et al. considered the non-linear relationship between droplet deposition and factors such as the flight parameters and body structure of plant protection UAVs. They developed a variable spray control system for plant protection UAVs that combines a BP neural network with variable rate spray control. The system collects real-time information through multiple sensors, the neural network model predicts the deposition rate, and the flow rate of the spraying system is adjusted according to the predicted deposition. The experimental results showed that the error between the predicted and actual droplet deposition was less than 20%, and the system response time was less than 0.25 s [21].
Currently, many scholars have made outstanding contributions to individual aspects of UAV variable spraying. However, relatively few studies have integrated these aspects into a complete solution, and most remain at the experimental stage. We believe there are two main reasons for this: (1) The accuracy of weed recognition needs to be improved. UAV variable spraying first requires recognizing weeds in remote sensing images, but weeds occupy a small proportion of the pixel area in these images, making this a typical small-target detection problem. Recognition of such small targets is easily affected by factors such as lighting conditions and mutual occlusion, resulting in low weed recognition accuracy. (2) There is no unified prescription map format or production method. Because different scholars have designed their own UAV variable spraying systems, the required prescription map formats and production methods vary, which has become the main obstacle to the large-scale application of prescription map-based variable spraying technology.
To address the above problems, we propose the following solutions: (1) To improve the recognition accuracy of small-target weeds in high-resolution UAV remote sensing images, we propose a knowledge distillation-based rice field weed recognition model, YOLOv8n-DT. This model transfers the feature knowledge learned by the teacher network YOLOv8l to the lightweight student network YOLOv8n, enhancing the student network's feature extraction capability. As a result, it improves the accuracy of identifying small-target weeds while keeping the model lightweight. (2) Since DJI plant protection UAVs are widely used in China and the related products support variable spraying operations based on prescription maps, we have developed a prescription map production method supported by DJI plant protection UAVs, making it compatible with mainstream plant protection UAV products. This is key to promoting the large-scale application of prescription map-based variable spraying technology for UAVs.
The research goals are as follows: (1) To use deep learning methods to identify weeds in rice field drone images and generate a spatial distribution map of weeds; (2) To produce a prescription map for the variable rate application based on the weed distribution map; (3) To use a DJI agricultural drone to perform automatic variable rate application operations according to the prescription map.

2. Materials and Methods

2.1. Experiment Design

2.1.1. Collection of Rice Field Remote Sensing Image Data

This study was conducted from May to June in both 2022 and 2023 at the experimental field of the Haicheng Training Base of Shenyang Agricultural University in Haicheng City, Liaoning Province; the field experiments thus spanned two years. In rice production, farmers usually adopt a "two seals and one kill" weed control strategy: soil-sealing herbicides are applied before rice transplanting to preliminarily control weeds in the field; foliar herbicides are sprayed again during the regreening stage of rice to further remove weeds; and if weeds remain during the tillering stage, farmers apply targeted stem and leaf spraying to areas with weeds in order to eradicate them completely. Using drones to implement precise variable spraying according to the distribution of weeds in the field during the tillering stage of rice therefore meets farmers' actual needs for precise weed control. Accordingly, from May to June in 2022 and 2023, we used drones to collect remote sensing image data from rice fields during the regreening and tillering stages. A knowledge distillation-based rice field weed identification model, YOLOv8n-DT, was constructed from the UAV remote sensing image data collected in 2022. We applied this model to the UAV remote sensing image data collected in 2023 to identify weeds in the rice fields and generated variable rate prescription maps from the identification results. On 22 June 2023, a DJI T50 plant protection drone (DJI Innovation Company, Shenzhen, China) was used to perform automatic variable spraying operations based on the variable spraying prescription map. A schematic diagram of the experimental area is shown in Figure 1.
The experimental area is 165 m long and 97 m wide, with a total area of 16,005 m2. It is divided into 16 experimental plots, each with an area of approximately 1000 m2. For collecting remote sensing data on weeds in the rice fields, a DJI M300 drone (DJI Innovation Company, Shenzhen, China) was used as the flying platform, equipped with a ZENMUSE P1 camera with 45 million effective pixels. To balance flight efficiency and image quality, remote sensing images of the rice paddies were collected at an altitude of 30 m. To ensure the accuracy of image registration, both the heading overlap and the lateral overlap were set to 80%. The drone flew along a predetermined route and captured images of the entire experimental field from a vertical, top-down angle at a resolution of 8192 × 5460 pixels. The collected rice field remote sensing images were registered and fused using DJI Terra. To avoid disturbance of the rice and weeds by strong winds and to ensure the accuracy of subsequent image registration and fusion, UAV remote sensing data were collected only under wind conditions below force 4.

2.1.2. Design of Variable Application Experiment for Plant Protection Drones

Based on the YOLOv8n-DT model's weed recognition results from the 2023 rice remote sensing images, the experimental plots were divided into four application rate levels, represented by different colors: red, blue, yellow, and green. The application rates were as follows: red zone, 1.5 L/hm2, equivalent to the normal application rate of local farmers; blue zone, 1.27 L/hm2, equivalent to 85% of the normal rate; yellow zone, 1.05 L/hm2, equivalent to 70% of the normal rate; and green zone, 0.75 L/hm2, equivalent to 50% of the normal rate. Before the plant protection drone operation, five water-sensitive papers were placed in each application rate plot using the five-point sampling method to collect the spray droplets during the operation and to analyze the spray coverage rate and deposition density. The water-sensitive paper layout is shown in Figure 2. The spray liquid used in this study was prepared as follows: 1.5 L of water as the solvent, 60 mL of concentrated Domethin (Dow Chemical Company, Midland, MI, USA), and 50 mL of Linguou (Corteva Agriscience, Wilmington, DE, USA). The active ingredient in Domethin is penoxsulam, applied at 18 g a.i./ha; the active ingredient in Linguou is florpyrauxifen, applied at 27 g a.i./ha. Additionally, Assist (MRS AgriScience, Beijing, China) was added.
The DJI T50 plant protection drone is used to carry out pesticide spraying operations, with a flight altitude set at 3 m relative to the height of the rice, a flight speed of 5 m/s, and a spraying width of 3 m.

2.2. Dataset Generation

During the drone's image collection, adjacent images have an 80% overlap region. If the original images were annotated directly, human error could cause the same target to be labeled inconsistently, or omitted, across different images, which would affect the training accuracy of the deep learning model. To avoid this, we first used DJI Terra to align and fuse the 2022 rice weed remote sensing data, and then cut the aligned and fused image into non-overlapping 600 × 600 pixel sub-images. A total of 3094 rice weed remote sensing images and 438 rice field border remote sensing images were obtained after cutting. During the rice tillering period, the distribution of weeds was uneven, with dense areas showing continuous growth and sparse areas showing individual plants. Therefore, during manual annotation, we classified weeds into two classes: continuous weeds and individual weeds. We used Labelme software (v4.5.6) for manual annotation and divided the dataset into training, validation, and testing sets at a ratio of 7:2:1, with no repeated data across sets. We expanded the training set by randomly cropping, color jittering, adding noise, and randomly rotating samples, creating a dataset with a total of 10,943 images. The sample counts of the dataset are shown in Table 1. A minimal sketch of the tiling step appears below.
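To make the tiling step concrete, the following is a minimal Python sketch of cutting a fused orthoimage into non-overlapping 600 × 600 pixel sub-images. The file paths and the use of OpenCV are illustrative assumptions, not the authors' exact tooling.

```python
# A minimal sketch of the tiling step: cutting a fused orthoimage into
# non-overlapping 600 x 600 px sub-images. Paths are hypothetical, and the
# orthoimage is assumed to fit in memory as a single array.
from pathlib import Path

import cv2

TILE = 600  # sub-image edge length in pixels

def tile_orthoimage(image_path: str, out_dir: str) -> int:
    """Cut a fused orthoimage into non-overlapping TILE x TILE sub-images."""
    img = cv2.imread(image_path)
    h, w = img.shape[:2]
    out = Path(out_dir)
    out.mkdir(parents=True, exist_ok=True)
    count = 0
    for y in range(0, h - TILE + 1, TILE):      # partial edge tiles discarded
        for x in range(0, w - TILE + 1, TILE):
            cv2.imwrite(str(out / f"tile_{y}_{x}.png"),
                        img[y:y + TILE, x:x + TILE])
            count += 1
    return count

# e.g. n_tiles = tile_orthoimage("fused_orthoimage.tif", "tiles/")
```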

2.3. YOLOv8n-DT Network Structure

In agricultural production, weed control is time-sensitive: if the optimal period for control is missed, subsequent control becomes more difficult and less effective [22]. It is therefore crucial to quickly assess the growth status of weeds during the weeding period. In recent years, the YOLO series of object detection models has received significant attention for its outstanding real-time performance and precise detection capability. This series innovatively adopts a single-stage design, avoiding the complex region proposal generation step, which significantly reduces the computational load and enables deployment on embedded devices with limited computing power [23]. Consequently, this study uses the YOLOv8 model for the recognition of weeds in UAV remote sensing images of rice fields. To meet both the recognition accuracy and lightweight requirements of the rice field weed recognition network, this study employs knowledge distillation to transfer the "knowledge features" learned by a larger, more powerful teacher network to a smaller student network, thereby enhancing the object detection performance of the student network without modifying its structure.
YOLOv8 provides network models of increasing depth and width, including YOLOv8n, YOLOv8s, YOLOv8m, and YOLOv8l [24]. These four versions of YOLOv8 were trained separately, and the corresponding detection results are shown in Table 2.
As shown in Table 2, the YOLOv8l model achieved the highest recognition accuracy in the rice field weed recognition task, with a mean average precision (mAP) of 0.824, 3.6% higher than the worst-performing model, YOLOv8n (0.795). However, this higher detection performance comes at the cost of a larger model, as the YOLOv8l model has 77.5 M more parameters than the YOLOv8n model. Therefore, the YOLOv8l model, with the highest recognition accuracy, was chosen as the teacher network, and the YOLOv8n model, with the lowest recognition accuracy and the smallest size, was selected as the student network. A knowledge distillation model for rice field weed recognition, YOLOv8n-DT, was designed. It consists of three main components: the teacher network, the student network, and the distillation loss function module, as illustrated in Figure 3.

2.4. Distillation Loss Function

Knowledge distillation aims to use the prediction response distribution (logits) produced by the teacher model at the output layer as soft targets, guiding the student model to fit this softened probability distribution, thereby enhancing the generalization ability of the student model [25,26]. For object detection models like YOLOv8, which employ multiscale feature fusion, the final detection performance relies on the rich feature representations learned by these feature layers (such as p3, p4, and p5) that are fused at the detection head [24]. Therefore, introducing feature distillation between these critical feature layers is essential. The knowledge distillation loss function of YOLOv8n-DT consists of two parts: logit distillation loss and feature distillation loss.
During distillation training, soft labels enable the student network to learn to distinguish between classes with similar features, thereby improving its prediction accuracy. Soft labels represent the predicted probability distribution over the classes in the dataset, and logit distillation aims to minimize the distance between the student's and the teacher's soft labels. We used the KL divergence to measure the relative entropy between the student's logit output and the teacher's soft labels, facilitating the distillation of logit knowledge. The formulas involved in logit distillation are as follows:
$$P_c^{S,t}\left(\mathrm{Net}^S\right) = \frac{\exp\left(Z_c^S / t_{obj}\right)}{\sum_{j=1}^{C} \exp\left(Z_j^S / t_{obj}\right)}$$

$$P_c^{T,t}\left(\mathrm{Net}^T\right) = \frac{\exp\left(Z_c^T / t_{obj}\right)}{\sum_{j=1}^{C} \exp\left(Z_j^T / t_{obj}\right)}$$

$$l_{logit} = L_{kl}\left(P^{S,t}, P^{T,t}\right)$$
The distillation temperature hyperparameter $t_{obj}$ controls the degree of smoothing of the soft labels; it balances the importance between categories and enhances the model's ability to handle uncertain categories. $\mathrm{Net}$ denotes the network, $Z_c^S$ is the student network's logit for an object belonging to category c, $Z_c^T$ is the teacher network's logit for an object belonging to category c, $P_c^{S,t}$ is the probability that the student network predicts category c at distillation temperature t, $P_c^{T,t}$ is the probability that the teacher network predicts category c at distillation temperature t, and C is the number of categories in the rice field weed dataset.
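To illustrate the logit distillation term, the following PyTorch sketch softens both networks' class logits with the temperature $t_{obj}$ and measures the KL divergence between the resulting distributions, as in the equations above. The tensor shapes, the default temperature value, and the gradient rescaling by $t_{obj}^2$ are illustrative assumptions, not details reported by the authors.

```python
# A minimal sketch of temperature-scaled KL logit distillation: the student's
# softened class distribution is pulled toward the teacher's soft labels.
import torch
import torch.nn.functional as F

def logit_distillation_loss(student_logits: torch.Tensor,
                            teacher_logits: torch.Tensor,
                            t_obj: float = 4.0) -> torch.Tensor:
    """KL divergence between temperature-softened class distributions."""
    # log-probabilities of the student, probabilities of the teacher
    log_p_s = F.log_softmax(student_logits / t_obj, dim=-1)
    p_t = F.softmax(teacher_logits / t_obj, dim=-1)
    # t_obj**2 rescales gradients back to the magnitude of the raw loss
    return F.kl_div(log_p_s, p_t, reduction="batchmean") * (t_obj ** 2)

# e.g. loss = logit_distillation_loss(z_s, z_t)  # z_*: (batch, num_classes)
```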
Each channel of a feature map corresponds to a visual pattern, but the importance of each channel's pattern differs [27]. Since the teacher outperforms the student, we assume the visual patterns learned by the teacher are more accurate and want the student to learn them. The student and teacher each compute attention information for every channel from their respective feature maps, and the teacher supervises the student in learning this per-channel attention. In this way, the student absorbs the teacher's per-channel attention information, thereby improving its performance [28]. The loss function for feature distillation is defined as follows:
$$CD(s,t) = \frac{\sum_{i=1}^{n} \sum_{j=1}^{c} \left(w_s^{ij} - w_t^{ij}\right)^2}{n \times c}$$
where $CD(s,t)$ is the CD loss between the student and the teacher, n is the number of samples, c is the number of channels, and $w_s^{ij}$ is the weight of the j-th channel of the i-th sample, computed as shown in Equation (5).
$$w_c = \frac{1}{H \times W} \sum_{i=1}^{H} \sum_{j=1}^{W} u_c(i,j)$$
where $w_c$ is the weight of the c-th channel, H and W are the spatial dimensions of the feature map, and $u_c(i,j)$ is the activation at position (i, j) of channel c.
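A minimal PyTorch sketch of the channel distillation term defined in Equations (4) and (5) follows. It assumes the student and teacher feature maps at a fused level share the same channel count; otherwise a projection layer would be needed.

```python
# A sketch of the channel distillation (CD) loss: each channel's attention
# weight is its spatial mean activation (Eq. 5), and the student is trained
# to match the teacher's per-channel weights (Eq. 4).
import torch

def channel_distillation_loss(feat_s: torch.Tensor,
                              feat_t: torch.Tensor) -> torch.Tensor:
    """feat_*: feature maps of shape (n, c, H, W) from one fused layer."""
    w_s = feat_s.mean(dim=(2, 3))   # Eq. (5): per-channel spatial mean
    w_t = feat_t.mean(dim=(2, 3))
    n, c = w_s.shape
    return ((w_s - w_t) ** 2).sum() / (n * c)   # Eq. (4)

# Applied to each fused feature level (p3, p4, p5) and summed with the
# logit term to form the full distillation loss.
```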

2.5. Generation of Variable Rate Application Maps

In this study, three professional geographic information system software packages, DJI Terra (v3.4.4), SuperMap (v11.0.0), and Global Mapper (v24.10.0), were used to generate variable rate application maps that can be recognized and executed by DJI agricultural drones. During the acquisition of remote sensing images, the drone simultaneously recorded flight trajectories, headings, altitudes, and the precise GPS coordinates of each photograph. Using DJI Terra, the rice field weed distribution remote sensing data underwent image registration and fusion. All geometrically corrected images were mosaicked accurately according to their GPS coordinates, generating a complete orthoimage in which each pixel is associated with real latitude and longitude coordinates.
The prescription map generation process began by importing the registered orthoimage into SuperMap (as shown in Figure 4a). A new vector polygon dataset was created to store the application rate data, and its coordinate system was set to WGS 1984 to keep it consistent with the geographic coordinate reference system of the image data. The polygon dataset was displayed in the orthoimage window, and the "draw polygon" command was used to delineate the operational area of the agricultural drone (as shown in Figure 4b). New fields were added to the attribute structure of the polygon dataset, and the corresponding variable application rate was entered for each experimental plot. The vector polygon dataset was then converted to raster format using the vector-to-raster conversion function, maintaining consistency with the orthoimage, and the rasterized dataset was exported as a TIF file for further processing. A sketch of an equivalent scripted conversion follows.
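For readers who prefer a scripted route, the following is a hedged Python sketch of the same vector-to-raster step using geopandas and rasterio in place of the SuperMap GUI workflow. The shapefile name, the attribute field "rate", and the raster grid size are illustrative assumptions; the polygons are assumed to carry one application rate per plot in WGS 1984.

```python
# Burn per-plot application rates (L/hm2) from a polygon layer into a
# georeferenced GeoTIFF, mirroring the vector-to-raster conversion above.
import geopandas as gpd
import rasterio
from rasterio import features
from rasterio.transform import from_bounds

plots = gpd.read_file("plots.shp").to_crs("EPSG:4326")  # WGS 1984
xmin, ymin, xmax, ymax = plots.total_bounds
width, height = 2048, 2048  # raster grid size (assumption)
transform = from_bounds(xmin, ymin, xmax, ymax, width, height)

# Each cell covered by a plot polygon receives that plot's rate; cells
# outside any polygon are filled with 0 (no application).
raster = features.rasterize(
    ((geom, rate) for geom, rate in zip(plots.geometry, plots["rate"])),
    out_shape=(height, width), transform=transform, fill=0, dtype="float32")

with rasterio.open("prescription.tif", "w", driver="GTiff", height=height,
                   width=width, count=1, dtype="float32",
                   crs="EPSG:4326", transform=transform) as dst:
    dst.write(raster, 1)
```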
Since the DJI agricultural drone flight control system requires elevation grid data as input, the TIF image containing the application rate information was converted to an elevation grid format using the Global Mapper software. The resulting image included application rate information, geographic coordinates, and elevation data. The geographic coordinates and elevation data in this image are essential for precise variable rate application operations that need to account for the influence of terrain undulations.
The final elevation grid format image is the variable rate application map that can be recognized by the DJI agricultural drone flight control system (as shown in Figure 4c). This map can be imported into the remote controller through the DJI Agricultural Service Platform or an SD card, allowing the drone to perform variable rate application operations based on the prescription map.

2.6. Model Training

To ensure fairness in the experiments, the same initial training parameters were set for each experimental group. Taking into account physical memory and learning efficiency, the batch size was set at 4 images, and the maximum number of iterations was set at 500. During training, the model used the Stochastic Gradient Descent (SGD) optimizer, and the learning rate (lr) decay strategy can be described as follows:
$$lr = base\_lr \cdot \left(1 - \frac{iter\_num}{max\_iterations}\right)^{p}$$

Here, $base\_lr$ is the base learning rate, $max\_iterations$ is the maximum number of iterations, $iter\_num$ is the iteration index, and p is the polynomial decay exponent (power). In this study, the base learning rate was set to 0.001, the momentum to 0.9, the weight decay to 1 × 10−4, and the lower bound for learning rate updates to 0. All models were trained with these settings.
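As a concrete illustration, the following minimal Python sketch implements the polynomial decay above with the hyperparameters stated in this subsection. The decay exponent p = 0.9 is an illustrative assumption, since its value is not reported here.

```python
# Polynomial learning-rate decay as in the equation above, with the stated
# base_lr = 0.001, max_iterations = 500, and a lower bound of 0.
def poly_lr(iter_num: int, base_lr: float = 0.001,
            max_iterations: int = 500, p: float = 0.9) -> float:
    lr = base_lr * (1 - iter_num / max_iterations) ** p
    return max(lr, 0.0)  # lower bound for learning-rate updates

# e.g. for it in range(500):
#          optimizer.param_groups[0]["lr"] = poly_lr(it)
```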
In this study, the cross-entropy loss function was used to measure the distance between the model's predicted probability distribution over pixel categories and the true label distribution. It is calculated as follows:
$$Loss = -\frac{1}{M} \sum_{i=1}^{M} \sum_{c=1}^{N} h(b_i) \log\left(p_{ic}\right)$$
In the equation, M is the number of pixels, N is the number of categories, i indexes the current pixel, c indexes the current category, $b_i$ is the true label category of pixel i, h is a 0-1 indicator function (1 if $b_i = c$, 0 otherwise), and $p_{ic}$ is the predicted probability that pixel i belongs to category c, obtained by applying the sigmoid function to the predicted category scores. The loss computed during each iteration evaluates the training performance of the model, and the weights are adjusted through backpropagation, gradually reducing the error represented by the loss value until the training objective is reached.
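The following short numpy sketch shows the same computation: the indicator h(b_i) reduces the inner sum to the predicted probability of the true category. Shapes and the numerical epsilon are illustrative.

```python
# Cross-entropy as in the equation above: mean negative log-probability of
# the true category over M pixels.
import numpy as np

def cross_entropy(p: np.ndarray, labels: np.ndarray) -> float:
    """p: (M, N) predicted probabilities; labels: (M,) true category indices."""
    M = p.shape[0]
    eps = 1e-12  # numerical floor to avoid log(0)
    # h(b_i) selects only the true-category probability for each pixel i
    return float(-np.log(p[np.arange(M), labels] + eps).sum() / M)
```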

2.7. Evaluation Metrics

2.7.1. Model Evaluation Metrics

To quantitatively analyze the performance of the model, this study uses precision, recall, AP (average precision), and mAP (mean average precision) to evaluate the proposed YOLOv8n-DT model. Precision and recall are computed from the four possible outcomes of a prediction: true positive (TP), false positive (FP), true negative (TN), and false negative (FN); in the mAP formula below, N is the total number of detection categories. The definitions are as follows:
$$Precision = \frac{TP}{TP + FP}$$

$$Recall = \frac{TP}{TP + FN}$$

$$AP = \int_0^1 P(R)\,dR$$

$$mAP = \frac{\sum_{i=1}^{N} AP_i}{N}$$
The recall and precision rates are computed at a threshold of 0.5. A sketch of the AP computation appears below.
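For illustration, the following Python sketch computes AP as the area under the precision-recall curve from per-detection confidences and TP/FP flags for one class. This bookkeeping is an assumption about a typical implementation, not the authors' exact code.

```python
# AP as the area under P(R), using all-point interpolation: precision is
# replaced by its running maximum over higher recalls before integration.
import numpy as np

def average_precision(conf: np.ndarray, is_tp: np.ndarray, n_gt: int) -> float:
    """conf: detection confidences; is_tp: bool TP flags; n_gt: ground truths."""
    order = np.argsort(-conf)                 # sort detections by confidence
    tp = np.cumsum(is_tp[order])
    fp = np.cumsum(~is_tp[order])
    recall = tp / n_gt
    precision = tp / (tp + fp)
    envelope = np.maximum.accumulate(precision[::-1])[::-1]
    ap, prev_r = 0.0, 0.0
    for r, p in zip(recall, envelope):        # integrate P(R) stepwise
        ap += (r - prev_r) * p
        prev_r = r
    return ap
```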

2.7.2. Evaluation Metrics for the Unmanned Aerial Vehicle Spraying Effect

The collected water-sensitive papers were analyzed using DepositScan software (United States Department of Agriculture, v1.6.0) to calculate metrics including droplet deposition coverage rate (%), deposition density (droplets·cm−2), deposition amount, and the coefficient of variation of deposition. The deposition coverage rate is the proportion of pesticide deposited on the target crop surface and is a fundamental parameter for evaluating the level of pesticide utilization. Its calculation formula is as follows:
$$P = \frac{n}{N} \times 100\%$$
In the equation, P is the deposition coverage rate, n is the amount of pesticide deposited on the target crop surface (μL·cm−2), and N is the application rate (L·hm−2).
In this study, the coefficient of variation (CV) was used to evaluate the uniformity of droplet deposition among the collection points within the unmanned aerial vehicle spraying area. A lower CV value indicates a more uniform droplet deposition and better penetration. The calculation formula for the coefficient of variation is as follows:
$$CV = \frac{S}{\bar{X}} \times 100\%$$

$$S = \sqrt{\frac{\sum_{i=1}^{n} \left(X_i - \bar{X}\right)^2}{n - 1}}$$
In the equations, S is the standard deviation of the deposition amounts at the collection points in the experimental plot, $X_i$ is the deposition amount (μL·cm−2) at each collection point, $\bar{X}$ is the mean deposition amount (μL·cm−2) over the collection points, and n is the number of collection points in the experimental plot. A sketch of this computation follows.
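A short Python sketch of this CV computation over one plot's five collection points follows; the sample deposition values are illustrative.

```python
# Coefficient of variation of droplet deposition across collection points,
# as in the two equations above (sample standard deviation, n - 1 divisor).
import numpy as np

def deposition_cv(deposits: np.ndarray) -> float:
    """deposits: deposition amounts (uL/cm^2) at each collection point."""
    s = deposits.std(ddof=1)        # sample standard deviation
    return float(s / deposits.mean() * 100)

# e.g. deposition_cv(np.array([0.58, 0.52, 0.61, 0.49, 0.55]))
```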

2.8. Experimental Platform Configuration

The experimental platform configuration is shown in Table 3.

3. Experimental Results

3.1. Ablation Study

To compare the performance of different knowledge distillation methods in the task of weed object detection based on UAV remote sensing images of rice fields, we evaluated various knowledge distillation methods, including feature-based, logit-based, and combined methods. The results are shown in Table 4.
According to the data shown in Table 4, among the feature-based knowledge distillation strategies, the CD [28] method achieved the optimal performance, with a recognition accuracy 0.4% higher than the Mimic [29] method. Furthermore, in the logit-based knowledge distillation process, the KL divergence method achieved the highest accuracy of 0.815. To further enhance the model’s performance, the CD and KL methods were integrated, using both feature loss and logit loss as the loss function for knowledge distillation. Ultimately, this approach improved the accuracy of the YOLOv8n-DT model to 0.820. Therefore, the CD and KL methods were chosen as loss functions for knowledge distillation.
Through the ablation study, this paper quantitatively evaluated the impact of knowledge distillation techniques on the model performance in the rice field weed recognition task. Figure 5 illustrates the change in loss values during the training process for different models.
Observing Figure 5, in the initial stage of training the YOLOv8n-DT model converges fastest, indicating that it can effectively learn the key characteristic information of the data early in training. As the number of training iterations increases, the loss values of all models stabilize after around the 300th round, with the YOLOv8l model ultimately showing the lowest loss value of 0.134. The continuous convergence of the loss and its eventual stabilization indicate that the models gradually reach their optimal states. The weights with the minimum validation loss during training were selected as the optimal weights and tested on the rice field weed test set, yielding the final average precision (AP) values shown in Table 5.
As can be seen in Table 5, compared to the student network YOLOv8n, YOLOv8n-DT improved the recognition accuracy of every category to varying degrees, with the accuracy for single barnyard grass, continuous patches of barnyard grass, and field ridge increasing by 4.4%, 3.4%, and 1.9%, respectively. The mAP of the YOLOv8n-DT network improved by 3.1% and is close to that of the teacher network. Therefore, the YOLOv8n-DT algorithm proposed in this study can effectively improve the accuracy of rice field weed recognition without increasing the number of parameters.

3.2. Visualization

To intuitively demonstrate the internal working mechanism of the proposed YOLOv8n-DT model in rice field weed detection, and to verify the effectiveness of the model improvement method, the attention maps of the YOLOv8n, YOLOv8l, and YOLOv8n-DT models in extracting rice field weed features were compared using Grad-CAM visualization. All attention maps are taken from the last encoding layer of each model's encoder, as shown in Figure 6.
Figure 6 shows the class activation maps for the weed categories in the paddy fields; darker colors indicate areas the model attends to more strongly. From Figure 6a, in the single barnyard grass detection task, the attention of the YOLOv8n model is scattered, and it incorrectly identifies some rice plants that resemble barnyard grass as single barnyard grass. In addition, the single barnyard grass area in YOLOv8n's class activation map is lighter in color, indicating a lower degree of focus on this target. In comparison, the attention of the YOLOv8l model is more concentrated, and it can accurately locate the single barnyard grass in the image; the corresponding area of its activation map is deeper in color, indicating that the model identifies the target-class regions more accurately and with greater certainty. The YOLOv8n-DT model after knowledge distillation shows relatively concentrated attention compared to the YOLOv8l model, and the activation color over the single barnyard grass is likewise deeper, indicating that through distillation the YOLOv8n-DT model has learned the characteristics of single barnyard grass from the YOLOv8l model.
From Figure 6b, in recognizing the continuous patches of barnyard grass category, the class activation map of the YOLOv8l model shows red areas forming continuous patches concentrated mainly on the barnyard grass, indicating that the model captures the overall structure and distribution of the barnyard grass well when identifying this category. In the class activation map of the YOLOv8n model, the continuous patch areas appear scattered, indicating that YOLOv8n focuses only on local features of the barnyard grass and identifies the continuous patches as independent individuals. In comparison, the red areas over the continuous patches in the YOLOv8n-DT activation map are relatively concentrated, indicating that the YOLOv8n-DT model has learned an understanding of the overall characteristics of the continuous patches of barnyard grass from the YOLOv8l model.
From Figure 6c, it can be seen that the YOLOv8l model pays more accurate attention to the features of the field ridge compared to the YOLOv8n model, indicating that the YOLOv8l model can more effectively capture the key features of the field ridge. In addition, the YOLOv8n-DT model after knowledge distillation is also superior to the YOLOv8n model in recognizing the characteristics of the field ridge. This suggests that through knowledge distillation, the YOLOv8n-DT model has successfully learned the important features of the field ridge from the YOLOv8l model.

3.3. Comparison with Other Classic Algorithms

To evaluate the performance of the YOLOv8n-DT model in the rice field weed detection task, we conducted comparative experiments on the rice field weed dataset using YOLOv7-tiny [30], YOLOv5n [31], SSD [32], and Faster R-CNN [33] models. The experimental results are shown in Table 6.
Table 6 compares the performance of YOLOv8n-DT and the other models along dimensions such as parameter count, latency, GFLOPs, recall, and mAP50. From Table 6, it can be observed directly that Faster R-CNN is significantly higher than the other models in parameters, latency, and GFLOPs. This is because Faster R-CNN, as a two-stage object detection model, must use a Region Proposal Network (RPN) to generate candidate regions and perform separate feature extraction and classification for each candidate region, increasing latency and computational cost. By contrast, the YOLO series models, as one-stage detectors, omit the cumbersome process of candidate region generation and screening, thus maintaining relatively low parameter counts, latency, and computational complexity. Among these one-stage models, the YOLOv8n-DT model proposed in this paper shows relatively small overhead in parameters, latency, and computational complexity: it has 3,157,184 parameters, a latency of 0.0104 s, and 8.9 GFLOPs, all between YOLOv5n and YOLOv7-tiny. Nevertheless, YOLOv8n-DT achieves the highest values on the key performance indicators, with an mAP of 0.82 and a recall of 0.77. It is worth noting that YOLOv5n uses CSPDarknet as its backbone, focusing on small convolution kernels and residual connections for feature extraction, with cross-stage partial connections that enhance gradient flow while reducing computational cost. YOLOv8n-DT, on the other hand, uses an architecture that combines Feature Pyramid Network (FPN) and Path Aggregation Network (PAN) modules. The FPN generates rich feature maps that can detect objects of different scales and resolutions by reducing the spatial resolution of the input image while increasing the number of feature channels, and the PAN module further aggregates features from different network layers to improve detection accuracy. In addition, the soft non-maximum suppression (Soft-NMS) technique adopted by YOLOv8n-DT improves on traditional non-maximum suppression (NMS): traditional NMS directly discards overlapping bounding boxes and retains only the one with the highest confidence, whereas Soft-NMS applies a soft threshold to overlapping boxes, better preserving target information (see the sketch below). In summary, while maintaining relatively low latency and parameter count, YOLOv8n-DT delivers higher model performance, making it suitable for real-time object detection tasks such as rice field weed recognition.
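As an illustration of the Soft-NMS idea mentioned above, the following Python sketch applies a linear confidence decay to overlapping boxes instead of discarding them. The box format, thresholds, and IoU helper are illustrative assumptions; a Gaussian decay is another common variant.

```python
# Linear Soft-NMS: boxes overlapping the current top-scoring box keep a
# confidence scaled by (1 - IoU) rather than being removed outright.
import numpy as np

def iou(box, boxes):
    """IoU between one box and an array of boxes, all as (x1, y1, x2, y2)."""
    x1 = np.maximum(box[0], boxes[:, 0]); y1 = np.maximum(box[1], boxes[:, 1])
    x2 = np.minimum(box[2], boxes[:, 2]); y2 = np.minimum(box[3], boxes[:, 3])
    inter = np.clip(x2 - x1, 0, None) * np.clip(y2 - y1, 0, None)
    a = (box[2] - box[0]) * (box[3] - box[1])
    b = (boxes[:, 2] - boxes[:, 0]) * (boxes[:, 3] - boxes[:, 1])
    return inter / (a + b - inter)

def soft_nms(boxes: np.ndarray, scores: np.ndarray,
             nt: float = 0.5, min_score: float = 0.01) -> np.ndarray:
    keep = []
    boxes, scores = boxes.copy(), scores.copy()
    while len(boxes):
        i = scores.argmax()
        keep.append(boxes[i])
        box, boxes = boxes[i], np.delete(boxes, i, axis=0)
        scores = np.delete(scores, i)
        if len(boxes):
            ov = iou(box, boxes)
            scores = np.where(ov > nt, scores * (1 - ov), scores)  # decay
            mask = scores > min_score     # drop boxes whose score collapsed
            boxes, scores = boxes[mask], scores[mask]
    return np.array(keep)
```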

3.4. Results of Rice Field Weed Distribution

To accurately capture the distribution of weeds in the field, this study first processed the remote sensing image of the experimental field, dividing the complete image in Figure 1 into 3712 non-overlapping 600 × 600 pixel sub-images. We then used the proposed YOLOv8n-DT model to automatically recognize the rice field weeds in these sub-images. After recognition, we stitched the results of the individual sub-images together to form a complete rice field weed distribution map, as shown in Figure 7. In this figure, red bounding boxes indicate single barnyard grass targets, pink bounding boxes indicate field ridge targets, and yellow bounding boxes indicate continuous patches of barnyard grass.

3.5. Formulation of the Spraying Strategy

This study formulates the amount of spraying based on two indicators: the total number of weeds within the experimental plots, and the proportion of continuous patches of barnyard grass in the total number of weeds. Therefore, this study counted the number of weeds in each experimental plot, and the results are shown in Figure 8.
  • Consideration of the quantity of weeds: The spraying amount for each experimental plot is adjusted according to its total weed count. Specifically, for plots with weed counts greater than 400, the spray amount is set at 1.5 L/hm2, the normal spraying amount for local farmers, corresponding to the red area in Figure 10. For plots with weed counts between 300 and 400, the spraying amount is 85% of the normal amount, i.e., 1.27 L/hm2, corresponding to the blue area in Figure 10. When the weed count is between 200 and 300, the spraying amount is adjusted to 70% of the normal amount, i.e., 1.05 L/hm2, corresponding to the yellow area in Figure 10. If the weed count is between 100 and 200, the spray amount is reduced to 50% of the normal amount, i.e., 0.75 L/hm2, corresponding to the green area in Figure 10.
  • Consideration of the proportion of continuous patches of barnyard grass: Since the continuous patch category contains a relatively large number of barnyard grass plants, the number of continuous patches must be considered comprehensively. This study counted the proportion of continuous patches of barnyard grass among the total weeds in each experimental plot, with the results shown in Figure 9. The results show that 10% is a fairly clear boundary. Therefore, experimental plots where the proportion of continuous patches of barnyard grass exceeds 10% are treated at the normal spraying amount of 1.5 L/hm2 for the Haicheng area, regardless of their total weed count. For plots where the proportion is below 10%, the count-based spraying rules above apply. A minimal sketch of this decision rule follows the list.
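The following minimal Python sketch encodes this two-rule strategy. The fallback for plots with fewer than 100 weeds is an assumption, since the text does not assign such plots a tier.

```python
# Two-rule spraying strategy: the continuous-patch proportion overrides the
# count-based tiers described above.
def plot_spray_rate(weed_count: int, continuous_ratio: float) -> float:
    """Return the application rate (L/hm2) for one experimental plot."""
    if continuous_ratio > 0.10:   # rule 2: dense patches force the full rate
        return 1.5
    if weed_count > 400:
        return 1.5                # 100% of the local normal rate
    if weed_count > 300:
        return 1.27               # 85%
    if weed_count > 200:
        return 1.05               # 70%
    return 0.75                   # 50% (counts of 100-200 in the text)
```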
The results of the spraying amount for different experimental plots based on the above spraying strategy are shown in Figure 10. The total amount of spraying for the entire experimental field is 30.48 L, which is approximately 15.28% less than the traditional spraying method.

3.6. Evaluation of the Spraying Effect

Before the plant protection UAV operation based on the variable rate application map, this study adopted the five-point sampling method and placed five water-sensitive papers in the experimental plots with different spraying amount levels. These water-sensitive papers were used to capture the sprayed droplets during the plant protection UAV operation, and then analyze the spraying amount and droplet coverage rate of the UAV.
Figure 11 shows some of the water-sensitive papers collected in the experimental plots with different spraying amount levels. We used the DepositScan (USDA, v1.6.0) software to analyze the water-sensitive papers collected from the experimental plots with different spraying amount levels, and obtained the droplet coverage rate, deposition density, deposition amount, and deposition coefficient of variation for the corresponding experimental plots, as shown in Figure 12.
Based on the data shown in Figure 12, we can observe that as the spraying amount decreases, the droplet deposition amount, deposition density, and droplet coverage rate on the leaves of the rice field weeds all show a consistent downward trend. Specifically, the droplet coverage rate decreased from 7.812% to 3.466%, the deposition density decreased from 52.78 droplets per square centimeter to 18.56 droplets per square centimeter, and the deposition amount also decreased from 0.5832 microliters per square centimeter to 0.32 microliters per square centimeter. This result indicates that when the UAV performs variable rate spraying based on the prescription map, it can accurately adjust the spraying amount.
This study further explored the variability of the deposition amount in the experimental plots with different spraying amounts; the results are detailed in Figure 12d. The coefficient of variation (CV) of the deposition amount is an important indicator of the uniformity of droplet deposition across the different operation areas of the field trial: the closer the CV is to 0, the more uniform the deposition and the better the spraying quality. From Figure 12d, the plot with the 85% spraying amount shows the best droplet deposition uniformity, while the plots with the 70% and 50% spraying amounts (shown in yellow and green, respectively) show relatively poorer uniformity. The main reason is the response delay of the plant protection UAV's spraying system under variable flow control. When operating in the 70% and 50% plots, the UAV must perform continuous variable control; because of the delayed response of the spraying system, it could not adjust quickly enough, resulting in a higher coefficient of variation in deposition. Therefore, when setting the grid size of prescription maps in the future, the response time of the spraying system should be fully considered to optimize the spraying effect and improve operation quality.

4. Discussion

Current research on variable rate precision spraying mainly focuses on developing spray systems based on targeted spraying and integrating them into ground-based weed control machinery to achieve precise herbicide application on specific weeds. However, the traditional mechanical weeding operation faces various limitations due to the characteristics of paddy field environments, such as soft and wet soil, low-lying terrain, and limited space.
To address this problem, our study proposes an innovative solution. We utilized the DJI plant protection drone, combined with a variable rate application prescription map, to successfully achieve precise variable rate spraying in paddy fields. To our knowledge, this is the first attempt to implement variable rate precision spraying using a prescription map at the drone scale. This method not only overcomes the limitations of traditional ground-based operations but also establishes a complete technical process for paddy weed management. The process includes the following steps: drone-based remote sensing image acquisition, weed target detection model construction, efficient weed identification, determination of appropriate herbicide application rates based on weed quantity, creation of a variable rate application prescription map, automated variable rate spraying operation, and final efficacy evaluation.
When constructing the paddy field weed detection model, considering the potential future integration of the detection model into embedded platforms and the consequent need for model miniaturization, we adopted a knowledge distillation approach. We transferred the "knowledge features" learned by the larger, more powerful feature extraction model, YOLOv8l, to the smaller YOLOv8n network. Using both feature loss and logit loss as the distillation loss function, we constructed a knowledge distillation model for paddy field weed recognition, YOLOv8n-DT. Experimental results show that YOLOv8n-DT achieved a 3.1% increase in model accuracy without changing the model parameters. This indicates that although the parameter count of the YOLOv8n-DT model remains unchanged, its feature extraction capability was significantly enhanced by learning from the larger model's "knowledge", enabling it to better capture the key features of paddy field weeds. A possible reason is that this study introduced a loss function combining feature loss and logit loss on top of the traditional logit loss. This allowed the student model not only to learn from the hard labels but also to obtain inter-class relationship information from the teacher's soft labels. In particular, the CD method applies a channel attention mechanism, dynamically learning a weight for each channel and multiplying these weights with the original channels to enhance important channels and suppress unimportant ones. This made the student model's feature extraction more directed, thereby improving its overall performance.
This study achieved variable rate precision pesticide application using UAVs based on prescription maps and detailed a method for creating prescription maps compatible with DJI agricultural drones. Given the widespread use of DJI agricultural drones in China’s agricultural sector, this compatibility is crucial for promoting variable rate application technology based on prescription maps. Farmers can utilize their existing DJI agricultural drone equipment to achieve precision variable rate pesticide application without the need for additional investment in new agricultural drones. This compatibility provides convenient conditions for the large-scale promotion of variable rate precision pesticide application technology. However, the process from collecting weed data in rice fields to generating variable rate application prescription maps takes about one day. This process is not in real time and is relatively complex, which is a major obstacle to its large-scale application. To overcome these limitations, we plan to integrate the rice field weed identification model into the embedded systems of drones in future research, significantly improving weed identification efficiency and reducing human intervention. Additionally, we will work on simplifying the process of creating variable rate application prescription maps, which will be the focus of our next research phase.
We divided the entire field into 16 experimental plots with different pesticide application requirements. Using the prescription maps, the agricultural UAV dynamically adjusted its spray system rate to achieve variable pesticide application. The results indicated that droplet deposition uniformity was relatively poor in the plots with 70% and 50% application rates. The main reason is that the plant protection drone's spraying system must process a large amount of sensor data and perform complex calculations when adjusting the flow rate. These calculations take time, and the nozzles, solenoid valves, and flow sensors also need time to respond after receiving control signals. As a result, especially in plots where the drone must perform variable rate spraying multiple times in succession, these factors prevent the spraying system from adjusting the flow rate in a timely manner, which affects the uniformity of droplet deposition and thus the spraying effect. In future research, we aim to further explore grid size settings and improve the spray control system of agricultural UAVs to enhance nozzle responsiveness and reduce response delay, enabling more precise and efficient pesticide application.
Compared to the traditional extensive method of broad-spectrum herbicide application, our approach saves approximately 15.28% of the herbicide used. This not only reduces the production costs for farmers, thereby increasing their income, but also helps improve soil health, protect biodiversity, and reduce groundwater pollution. Furthermore, the promotion of this method has the potential to drive a transformation in agricultural production practices and promote the development of precision agriculture.

5. Conclusions

This study used the knowledge distillation method to construct a rice field weed target detection model, YOLOv8n-DT. Using this model for rice field weed recognition, we created a variable rate application map based on the recognition results, and then used a DJI plant protection UAV to conduct variable rate spraying in the field, thereby minimizing the use of chemical herbicides in rice production. The main conclusions of this work are as follows:
(1) A rice field weed target detection model—YOLOv8n-DT—was constructed using the knowledge distillation method. The YOLOv8l model was used as the teacher network, and the YOLOv8n model was used as the student network. The CD method and the KL method were selected as the loss functions for knowledge distillation. After knowledge distillation, the recognition accuracy of the YOLOv8n-DT model was improved by 3.1% compared to the YOLOv8n model, and the mAP reached 0.820.
(2) Based on the weed count in each experimental plot and the spray strategy proposed in this study, the specific spraying amount for each experimental plot was determined, and a variable rate application map was generated accordingly. The total amount of spraying for the entire experimental field was 30.48 L, which is approximately 15.28% less than the traditional spraying method.

Author Contributions

Z.G.: data collection, manuscript writing, chart making, data analysis, validation, methodology. D.C.: literature search, visualization, validation, methodology, data collection, data analysis. J.B.: data organization, writing—review and editing, visualization, validation, data collection, chart making. T.X.: concept proposal, writing—review and editing, validation, literature search. F.Y.: research design, funding acquisition, writing—review and editing, literature search. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the Liaoning Province Applied Basic Research Program (2023JH2/101300120), the National Natural Science Foundation of China (32201652), the Liaoning Province "Xingliao Talent Plan" project (XLYC2203005), and the Open Project of the Key Laboratory of Smart Agriculture in the South China Tropical Region, Ministry of Agriculture and Rural Affairs (HNZHNY-KFKT-202208).

Data Availability Statement

Data may be available from the authors upon reasonable request and with the permission of the research participants.

Conflicts of Interest

The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

References

  1. MacLaren, C.; Storkey, J.; Menegat, A.; Metcalfe, H.; Dehnen-Schmutz, K. An ecological future for weed science to sustain crop production and the environment. A review. Agron. Sustain. Dev. 2020, 40, 24. [Google Scholar] [CrossRef]
  2. Sharma, G.; Shrestha, S.; Kunwar, S.; Tseng, T.-M. Crop diversification for improved weed management: A review. Agriculture 2021, 11, 461. [Google Scholar] [CrossRef]
  3. Qu, S.; Yang, X.; Zhou, H.; Xie, Y. Improved YOLOv5-based for small traffic sign detection under complex weather. Sci. Rep. 2023, 13, 16219. [Google Scholar] [CrossRef]
  4. Llewellyn, R.; Ronning, D.; Clarke, M.; Mayfield, A.; Walker, S.; Ouzman, J. Impact of Weeds in Australian Grain Production; Grains Research and Development Corporation: Canberra, ACT, Australia, 2016. [Google Scholar]
  5. Taiwo, A.M. A review of environmental and health effects of organochlorine pesticide residues in Africa. Chemosphere 2019, 220, 1126–1140. [Google Scholar] [CrossRef]
  6. Sharma, A.; Kumar, V.; Shahzad, B.; Tanveer, M.; Sidhu, G.P.; Handa, N.; Kohli, S.K.; Yadav, P.; Bali, A.S.; Parihar, R.D.; et al. Worldwide pesticide usage and its impacts on ecosystem. SN Appl. Sci. 2019, 1, 1446. [Google Scholar] [CrossRef]
  7. Allmendinger, A.; Spaeth, M.; Saile, M.; Peteinatos, G.G.; Gerhards, R. Precision chemical weed management strategies: A review and a design of a new CNN-based modular spot sprayer. Agronomy 2022, 12, 1620. [Google Scholar] [CrossRef]
  8. Vijayakumar, V.; Ampatzidis, Y.; Schueller, J.K.; Burks, T. Smart spraying technologies for precision weed management: A review. Smart Agric. Technol. 2023, 6, 100337. [Google Scholar] [CrossRef]
  9. Monteiro, A.; Santos, S. Sustainable approach to weed management: The role of precision weed management. Agronomy 2022, 12, 118. [Google Scholar] [CrossRef]
  10. Meena, B.R.; Jatav, H.S.; Dudwal, B.L.; Kumawat, P.; Meena, S.S.; Singh, V.K.; Khan, M.A.; Sathyanarayana, E. Fertilizer Recommendations by Using Different Geospatial Technologies in Precision Farming or Nanotechnology. Ecosyst. Serv. 2022, 14, 241–257. [Google Scholar]
  11. Zhao, X.; Wang, X.; Li, C.; Fu, H.; Yang, S.; Zhai, C. Cabbage and weed identification based on machine learning and target spraying system design. Front. Plant Sci. 2022, 13, 924973. [Google Scholar] [CrossRef]
  12. Meshram, A.T.; Vanalkar, A.V.; Kalambe, K.B.; Badar, A.M. Pesticide spraying robot for precision agriculture: A categorical literature review and future trends. J. Field Robot. 2022, 39, 153–171. [Google Scholar] [CrossRef]
  13. Abbas, I.; Liu, J.; Faheem, M.; Noor, R.S.; Shaikh, S.A.; Solangi, K.A.; Raza, S.M. Different sensor based intelligent spraying systems in Agriculture. Sens. Actuators A Phys. 2020, 316, 112265. [Google Scholar] [CrossRef]
  14. Quan, L.; Jiang, W.; Li, H.; Li, H.; Wang, Q.; Chen, L. Intelligent intra-row robotic weeding system combining deep learning technology with a targeted weeding mode. Biosyst. Eng. 2022, 216, 13–31. [Google Scholar] [CrossRef]
  15. Li, H.; Guo, C.; Yang, Z.; Chai, J.; Shi, Y.; Liu, J.; Zhang, K.; Liu, D.; Xu, Y. Design of field real-time target spraying system based on improved YOLOv5. Front. Plant Sci. 2022, 13, 1072631. [Google Scholar] [CrossRef]
  16. Utstumo, T.; Urdal, F.; Brevik, A.; Dørum, J.; Netland, J.; Overskeid, Ø.; Berge, T.W.; Gravdahl, J.T. Robotic in-row weed control in vegetables. Comput. Electron. Agric. 2018, 154, 36–45. [Google Scholar] [CrossRef]
  17. Li, Y.; Guo, Z.; Shuang, F.; Zhang, M.; Li, X. Key technologies of machine vision for weeding robots: A review and benchmark. Comput. Electron. Agric. 2022, 196, 106880. [Google Scholar] [CrossRef]
  18. Udoumoh, U.I.; Ikrang, E.G.; Ehiomogue, P.O. Precision farming and fertilizer recommendation using geographic information system (GIS): A review. Int. J. Agricult. Earth Sci. 2021, 7, 68–75. [Google Scholar]
  19. Dou, H.; Zhang, C.; Li, L.; Hao, G.; Ding, B.; Gong, W.; Huang, P. Application of variable spray technology in agriculture. In IOP Conference Series: Earth and Environmental Science; IOP Publishing: Bristol, UK, 2018; Volume 186, p. 012007. [Google Scholar]
  20. Huang, H.; Deng, J.; Lan, Y.; Yang, A.; Deng, X.; Wen, S.; Zhang, H.; Zhang, Y. Accurate weed mapping and prescription map generation based on fully convolutional networks using UAV imagery. Sensors 2018, 18, 3299. [Google Scholar] [CrossRef]
  21. Wen, S.; Zhang, Q.; Yin, X.; Lan, Y.; Zhang, J.; Ge, Y. Design of plant protection UAV variable spray system based on neural networks. Sensors 2019, 19, 1112. [Google Scholar] [CrossRef]
  22. Sapkota, B.; Sarkar, S.; Baath, G.S.; Flynn, K.C.; Smith, D.R. Using UAS-multispectral images to predict corn yield under different planting dates. In Proceedings of the ASA, CSSA, SSSA International Annual Meeting, Baltimore, MD, USA, 6–9 November 2022. [Google Scholar]
  23. Qu, H.-R.; Su, W.-H. Deep Learning-Based Weed–Crop Recognition for Smart Agricultural Equipment: A Review. Agronomy 2024, 14, 363. [Google Scholar] [CrossRef]
  24. Reis, D.; Kupec, J.; Hong, J.; Daoudi, A. Real-time flying object detection with YOLOv8. arXiv 2023, arXiv:2305.09972. [Google Scholar]
  25. Bang, D.; Lee, J.; Shim, H. Distilling from professors: Enhancing the knowledge distillation of teachers. Inf. Sci. 2021, 576, 743–755. [Google Scholar] [CrossRef]
  26. Li, G.; Li, X.; Wang, Y.; Zhang, S.; Wu, Y.; Liang, D. Knowledge distillation for object detection via rank mimicking and prediction-guided feature imitation. In Proceedings of the AAAI Conference on Artificial Intelligence, Virtual, 22 February–1 March 2022; Volume 36, pp. 1306–1313. [Google Scholar]
  27. Simon, M.; Rodner, E. Neural activation constellations: Unsupervised part model discovery with convolutional networks. In Proceedings of the IEEE International Conference on Computer Vision, Santiago, Chile, 7–13 December 2015; pp. 1143–1151. [Google Scholar]
  28. Zhou, Z.; Zhuge, C.; Guan, X.; Liu, W. Channel distillation: Channel-wise attention for knowledge distillation. arXiv 2020, arXiv:2006.01683. [Google Scholar]
  29. Li, Q.; Jin, S.; Yan, J. Mimicking very efficient network for object detection. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2017; pp. 6356–6364. [Google Scholar]
  30. Wang, C.-Y.; Bochkovskiy, A.; Liao, H.-Y.M. YOLOv7: Trainable bag-of-freebies sets new state-of-the-art for real-time object detectors. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Vancouver, BC, Canada, 17–24 June 2023; pp. 7464–7475. [Google Scholar]
  31. Redmon, J.; Divvala, S.; Girshick, R.; Farhadi, A. You only look once: Unified, real-time object detection. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016; pp. 779–788. [Google Scholar]
  32. Liu, W.; Anguelov, D.; Erhan, D.; Szegedy, C.; Reed, S.; Fu, C.-Y.; Berg, A.C. SSD: Single shot multibox detector. In Proceedings of the Computer Vision–ECCV 2016: 14th European Conference, Proceedings, Part I 14, Amsterdam, The Netherlands, 11–14 October 2016; Springer: Cham, Switzerland, 2016; pp. 21–37. [Google Scholar]
  33. Ren, S.; He, K.; Girshick, R.; Sun, J. Faster R-CNN: Towards real-time object detection with region proposal networks. In Proceedings of the Advances in Neural Information Processing Systems, Montreal, QC, Canada, 7–12 December 2015; Volume 28. [Google Scholar]
Figure 1. The experimental area is located at the Haicheng Training Base of Shenyang Agricultural University in Haicheng, Liaoning Province. The experimental area is 165 m long and 97 m wide, with a total area of 16,005 m². It is divided into 16 plots, each with an area of approximately 1000 m².
Figure 2. (a) Five pieces of water-sensitive paper were placed at the five-point sampling positions in plots with different application rates. The yellow squares represent the water-sensitive paper, while the red, blue, green, and yellow areas indicate plots with different application rates. (b) Each piece of water-sensitive paper was secured with a double-sided clip, one end attached to a PVC pipe and the other holding the paper. The paper was kept level, at a height equivalent to that of the rice canopy.
Figure 3. The YOLOv8n-DT network architecture, which consists of three main parts: the teacher network, the student network, and the distillation loss function module. The model uses both the feature loss and the logic loss as distillation losses.
Figure 4. The workflow for generating a variable rate application map. (a) Registered orthoimage, in which each pixel is associated with real latitude and longitude coordinates. (b) The operational area of the plant protection drone overlaid on the registered orthoimage, shown as the blue area. (c) The final generated variable rate application prescription map.
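The georeferencing step in (a) can be illustrated with a short sketch: given a GDAL-style affine geotransform, a detection's pixel position on the orthoimage maps directly to geographic coordinates. The geotransform values and the example pixel below are hypothetical, not taken from this study's orthoimage.

```python
# GDAL geotransform convention:
# gt = (x_origin, pixel_width, row_rotation, y_origin, col_rotation, pixel_height)

def pixel_to_geo(col: float, row: float, gt: tuple) -> tuple:
    """Map image (col, row) to georeferenced (x, y) via the affine geotransform."""
    x = gt[0] + col * gt[1] + row * gt[2]
    y = gt[3] + col * gt[4] + row * gt[5]
    return x, y

# Hypothetical example: an orthoimage anchored near Haicheng with a
# north-up grid (zero rotation terms) and a very fine ground sampling distance.
gt = (122.70, 3e-7, 0.0, 40.85, 0.0, -3e-7)
lon, lat = pixel_to_geo(col=1520.0, row=880.0, gt=gt)
print(f"detection at lon={lon:.6f}, lat={lat:.6f}")
```

Applying this mapping to every detected weed box yields the georeferenced weed distribution from which the prescription map in (c) is drawn.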
Figure 5. The trend of loss values during the training of the different models. The loss function values of all models gradually stabilize after the 300th epoch; the YOLOv8l model ultimately exhibits the lowest loss value, 0.134.
Figure 6. (a) Single barnyard grass plant. (b) Continuous patches of barnyard grass. (c) Field ridge.
Figure 7. Comprehensive distribution map of weeds in a rice field.
Figure 8. Weed quantity statistics in different experimental plots. The blue bars represent the number of single barnyard grass plants, while the orange bars represent the number of continuous patches of barnyard grass.
Figure 9. The proportion of continuous barnyard grass patches in different experimental plots. A proportion of 10% forms a distinct demarcation line.
Figure 10. Variable rate application map.
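To make the plot-level dose assignment behind the prescription map concrete, the sketch below maps a plot's weed statistics to one of the four application tiers used in this experiment (100%, 85%, 70%, and 50% of the standard recommended dose, cf. Figure 11). Only the 10% patch-proportion demarcation comes from Figure 9; the standard dose value, the single-plant cutoffs, and the exact tier mapping are illustrative assumptions rather than the paper's precise strategy.

```python
STANDARD_DOSE_L_PER_HA = 22.5  # hypothetical standard recommended dose

def dose_ratio(single_plants: int, patch_ratio: float) -> float:
    """Pick a spray tier from the weed counts of one plot."""
    if patch_ratio > 0.10:     # heavy, patchy infestation (demarcation from Figure 9)
        return 1.00
    if single_plants > 150:    # assumed cutoff
        return 0.85
    if single_plants > 50:     # assumed cutoff
        return 0.70
    return 0.50

def plot_dose_litres(area_ha: float, single_plants: int, patch_ratio: float) -> float:
    """Spray volume for one plot = area x standard dose x tier ratio."""
    return area_ha * STANDARD_DOSE_L_PER_HA * dose_ratio(single_plants, patch_ratio)

# Example: a 0.1 ha plot with 80 single plants and 4% patch coverage
# would receive the 70% tier under these assumed cutoffs.
print(plot_dose_litres(0.1, 80, 0.04))
```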
Figure 11. Water-sensitive paper samples collected in the experimental plots with spraying amounts of (a) 100%, (b) 85%, (c) 70%, and (d) 50% of the standard recommended dose. As the application rate decreases, the number of droplets on the water-sensitive paper also decreases.
Figure 12. The results of the quantitative analysis of droplets on the water-sensitive paper are as follows: (a) shows the trend of droplet deposition amount with varying application rates, (b) illustrates the trend of deposition density with varying application rates, (c) displays the trend of droplet coverage with varying application rates, and (d) depicts the trend of the coefficient of variation with varying application rates. It can be observed that as the application rate decreases, the droplet deposition amount, deposition density, and droplet coverage all show a consistent downward trend. Additionally, the uniformity of droplet deposition is relatively poor.
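As a companion to Figure 12, the following is a minimal sketch of how such droplet metrics can be computed from a binarized scan of a water-sensitive paper (stained pixels = True). The scan resolution and the use of connected components as a droplet count are assumptions, not the authors' exact procedure.

```python
import numpy as np
from scipy import ndimage

DPI = 600                          # assumed scan resolution
PX_PER_CM2 = (DPI / 2.54) ** 2     # pixels per square centimetre at that DPI

def droplet_metrics(binary: np.ndarray) -> dict:
    """Coverage and deposition density from one binarized paper scan."""
    coverage = float(binary.mean())               # fraction of stained area
    _, n_droplets = ndimage.label(binary)         # connected stain components
    area_cm2 = binary.size / PX_PER_CM2
    return {"coverage": coverage,
            "density_per_cm2": n_droplets / area_cm2}

def coefficient_of_variation(values: np.ndarray) -> float:
    """CV of a deposition metric across the five sampling points of one plot."""
    return float(np.std(values, ddof=1) / np.mean(values))
```

Evaluated across the five sampling points of each plot, the coverage and density feed panels (a)–(c) of Figure 12, while the coefficient of variation quantifies the deposition uniformity shown in panel (d).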
Table 1. The sample size of the self-built rice field weed dataset is 3532. The training set samples were expanded fourfold through data augmentation, resulting in a total sample size of 10,943 after data augmentation.

Label Category                          Numbers of Original Images    Total Number of Images after Augmentation
Field ridge                             438                           1224
Continuous patches of barnyard grass    218                           608
Single barnyard grass plant             2876                          8052
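For object detection data, the fourfold expansion in Table 1 requires transforms that adjust the annotation boxes along with the pixels. As one hedged example (the specific transform set used in this study is not restated here), the sketch below applies a horizontal flip to an image together with its bounding boxes.

```python
import numpy as np

def hflip_with_boxes(image: np.ndarray, boxes: np.ndarray) -> tuple:
    """Horizontally flip an HxWxC image and its [x_min, y_min, x_max, y_max] boxes."""
    h, w = image.shape[:2]
    flipped = image[:, ::-1].copy()          # mirror the width axis
    out = boxes.astype(float).copy()
    out[:, [0, 2]] = w - boxes[:, [2, 0]]    # swap and mirror the x-coordinates
    return flipped, out
```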
Table 2. Results of training the five versions of YOLOv8 separately. YOLOv8n has the lowest recognition accuracy and is suitable as the student model, while YOLOv8l has the highest recognition accuracy and is suitable as the teacher model.

Models     mAP      Model Size (MB)
YOLOv8n    0.795    6.2
YOLOv8s    0.823    21.5
YOLOv8m    0.808    49.7
YOLOv8l    0.824    83.7
YOLOv8x    0.804    130.0
Table 3. Experimental platform configuration and environment, detailing the operating system, hardware environment, and software environment used in this study.

Operating system: Windows 10. Hardware environment: CPU, Intel(R) Core(TM) i7-9700 @ 3.0 GHz (Intel Corporation, Santa Clara, CA, USA); hard drive capacity, 64 GB; GPU, NVIDIA GeForce RTX 5000 (Nvidia Corporation, Santa Clara, CA, USA). Software environment: Python 3.9; cuDNN 8.5.0; CUDA 11.7.
Table 4. The results of different distillation methods on the rice field weed dataset. Combining the feature loss and the logic loss for knowledge distillation yields the highest accuracy for the YOLOv8n-DT model.

Type              Method     YOLOv8n-DT mAP
Feature           CD         0.808
Feature           Mimic      0.804
Logic             L1         0.812
Logic             L2         0.807
Logic             KL         0.815
Feature + Logic   CD + KL    0.820
Table 5. Comparison of the accuracy of YOLOv8n, YOLOv8n-DT, and YOLOv8l on different weed recognition tasks. The knowledge-distilled YOLOv8n-DT model outperforms the original YOLOv8n student model in all tasks.

Category                                YOLOv8n    YOLOv8n-DT    YOLOv8l
Single barnyard grass plant             0.680      0.710         0.725
Continuous patches of barnyard grass    0.871      0.901         0.881
Field ridge                             0.835      0.851         0.866
mAP                                     0.795      0.820         0.824
Table 6. Performance comparison of YOLOv8n-DT and other models across various dimensions, including Parameters, Latency, GFLOPs, Recall, and mAP50.

Model         YOLOv8n-DT    YOLOv7       YOLOv5       SSD           Faster R-CNN
Parameters    3,157,184     6,219,709    1,763,224    13,312,560    41,583,062
Latency       0.0104        0.011        0.010        0.015         0.054
GFLOPs        8.9           13.2         4.1          15.1          134.9
Recall        0.77          0.74         0.73         0.61          0.62
mAP50         0.82          0.795        0.81518      0.7303        0.751
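The latency row in Table 6 reports per-image inference time. A common way to measure such figures, sketched below under the assumptions of a PyTorch model on a CUDA GPU with 640 × 640 input, is to average wall-clock time over many forward passes after a warm-up; this is an illustrative protocol, not necessarily the one used in this study.

```python
import time
import torch

@torch.no_grad()
def mean_latency(model, n_warmup: int = 20, n_runs: int = 200,
                 size=(1, 3, 640, 640)) -> float:
    """Average seconds per forward pass on a CUDA device."""
    x = torch.randn(size, device="cuda")
    model.eval().cuda()
    for _ in range(n_warmup):          # warm-up passes (cuDNN autotune, caching)
        model(x)
    torch.cuda.synchronize()           # ensure queued kernels finish before timing
    start = time.perf_counter()
    for _ in range(n_runs):
        model(x)
    torch.cuda.synchronize()
    return (time.perf_counter() - start) / n_runs
```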
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
