Article

SMC-YOLO: A High-Precision Maize Insect Pest-Detection Method

1 College of Electronic Engineering, Heilongjiang University, Harbin 150000, China
2 Harbin Dongshui Intelligent Agriculture Technology Co., Harbin 150000, China
3 Nantong Xikerui Intelligent Technology Co., Ltd., Nantong 226010, China
* Author to whom correspondence should be addressed.
Agronomy 2025, 15(1), 195; https://doi.org/10.3390/agronomy15010195
Submission received: 12 December 2024 / Revised: 9 January 2025 / Accepted: 12 January 2025 / Published: 15 January 2025
(This article belongs to the Section Pest and Disease Management)

Abstract: Maize is a high-yield, versatile crop, and the extent and frequency of pest outbreaks seriously affect its yield. Helping growers accurately identify pest species is therefore important for improving corn yields. In this study, we propose a pest detector called SMC-YOLO, which uses You Only Look Once (YOLO) v8 as the reference model. First, a Spatial Pyramid Convolutional Pooling Module (SPCPM) is utilized in lieu of the Spatial Pyramid Pooling-Fast (SPPF) module to enrich the diversity of feature information. Subsequently, a Multi-Dimensional Feature-Enhancement Module (MDFEM) is incorporated into the neck network to augment the feature information associated with pests. Finally, a cross-scale feature-level non-local module (CSFLNLM) is placed in front of the detection head to improve its global perception. The results showed that SMC-YOLO achieved excellent results across several metrics, with its F1 Score (F1), mean Average Precision (mAP)@0.50, mAP@0.50:0.95, and mAP@0.75 reaching 83.18%, 86.7%, 60.6%, and 70%, respectively, outperforming YOLOv11. This study provides a more reliable method of pest identification for the development of smart agriculture.

1. Introduction

Against the backdrop of continued global population growth, extreme weather, and regional conflicts, the world's demand for food is rising year after year. Maize, with its high yields and extensive array of applications, helps address this need. China has a substantial demand for corn and ranks prominently worldwide in both cultivation scale and annual yield: the acreage dedicated to corn cultivation in China reaches 29.9 million hectares, and annual production amounts to 165.9 million tons. Two pests alone, the Asian corn borer and the cotton bollworm, cost China USD 1.3 billion [1]. Traditional maize pest control relies mostly on farmers' experience and large quantities of insecticides, which place enormous pressure on the ecosystem. By using object-detection technology to quickly and accurately count pest species and numbers, we can provide reliable data support for the allocation of insecticides, maximize their effect, promote high-quality and high-yield grain production, and protect the environment at the same time.
With the evolution of computer technology, a growing number of scientific and technological practitioners have begun to focus on applying computers in agriculture. Agricultural scientists are experimenting with machine-learning methods for agricultural pest detection. For example, Li et al. utilized a Support Vector Machine classifier to distinguish between whiteflies and thrips, achieving accuracies of 93.9% and 89.9%, respectively [2]. Zhao et al. used K-means to detect agricultural pests and attained satisfactory outcomes [3]. Although machine learning has achieved good results in agriculture, machine-learning algorithms require manual selection of the feature information that facilitates classification. In practical applications this is a major limitation, because the environments faced by sensors are complex and diverse, and the extracted features are affected by many factors.
Owing to advancements in deep learning, deep-learning detection methods have addressed the challenge of feature screening in machine learning, and their application to pest detection has increasingly become the predominant approach within the field [4,5,6,7,8,9]. Bhujel et al. incorporated a Convolutional Block Attention Module (CBAM) into a Convolutional Neural Network (CNN) architecture, reducing the parameters by a factor of 16 while sustaining a high level of accuracy [10]. Jiao et al. integrated an adaptive feature fusion pyramid (AFFP) into a backbone and performed object detection with a three-stage detector, reaching up to 77.0% accuracy [11]. Liu et al. used global information to construct a pyramid network combining a pest-localization network and a detector for localization and classification, with a mAP of 75.03% [12]. Although the algorithms proposed in the above studies fulfill the task of pest identification, they also reveal issues such as low accuracy and difficulties in platform deployment.
The ability of trained models to be deployed to other platforms with relative ease is critical to the generalization and application of these algorithms. YOLO is a pioneering single-stage detection algorithm, widely favored by researchers for its high accuracy, streamlined network structure, and ease of deployment compared with algorithms such as Faster R-CNN [13,14]. S et al. improved YOLOv7 by applying a multi-head attention mechanism to solve the leaf-occlusion problem during apple detection and showed good performance [15]. Dai et al. integrated the C3 module with the SPPF and Swin transformer modules, and the enhanced network achieved a substantially increased mAP of 96.4% [16]. Hu et al. integrated the Global Context Network (GCNet) into the C3 module, switched to a Bidirectional Feature Pyramid Network (Bi-FPN), and increased the number of heads; the mAP of the optimized network reaches 79.8% [17]. Roy et al. added a residual block to YOLOv4 and modified the Path Aggregation Network (PANet), achieving a mAP of 96.29% on the Tomato Leaf Disease dataset using Hard-swish as the activation function [18]. Kumar et al. tested different versions of YOLOv5 and introduced CBAM in the YOLOv5x backbone, which improved both mAP and F1 [19]. Li et al. developed a passion fruit pest-detection method incorporating the CBAM module and an enhanced loss function, which led to a 1.51% performance gain for the improved YOLOv5 [20]. Lin et al. proposed TSBA-YOLO, which integrates a transformer module and a Shuffle Attention (SA) model and replaces the neck network with a Bi-FPN architecture, yielding a remarkable 6.03% enhancement in mAP [21]. Other scientists have combined the features of YOLO with other technologies, greatly expanding YOLO's application scenarios [22,23].
However, although the YOLO algorithm has been used by scientists for pest detection [24,25,26,27,28], most algorithms are designed on samples that include pest images from all life stages, even though most pests vary greatly in morphology and in the damage they cause from one stage to the next. Therefore, most of the samples in the dataset selected here are images of pests at the stage in which their impact on corn is most serious. The improved algorithm is deployed on the server side to assist corn growers in determining pest species, and experiments are conducted with YOLOv8 as the baseline, as follows:
  • In this study, the SPPF module in YOLOv8 is substituted with the proposed SPCPM, thereby augmenting the depth of the backbone and enabling the extraction of more comprehensive feature information.
  • To mitigate the potential loss of features following fusion, the MDFEM is incorporated prior to the neck.
  • A new non-local attention mechanism, CSFLNLM, is added in front of the head to improve the ability to discriminate between objects and backgrounds.
  • Ablation studies and comparative experiments were conducted on the proposed SPCPM, MDFEM, and CSFLNLM, and SMC-YOLO was contrasted with other leading models.

2. Materials and Methods

2.1. Maize Pest Dataset

Field images of nine maize-affecting pests were collected in Heilongjiang, Shandong, and Hebei provinces in China; the remaining samples were taken from the IP102 dataset. To improve the robustness of the algorithm and mitigate the problem of limited samples, the dataset was augmented as follows: random values in the range 0–255 were superimposed on the images to simulate Gaussian noise, images were randomly rotated by 0–90°, and pixel values were scaled to 0.5–2.5 times their original values to change the brightness. All images were carefully annotated under the supervision of agricultural experts to ensure accuracy and reliability.
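As an illustration, the augmentation strategy above could be sketched as follows in Python; the noise standard deviation is an assumed value, since the text only gives the 0–255 range for the superimposed random values.

```python
import random
import numpy as np
from PIL import Image

def augment(img: Image.Image, noise_sigma: float = 15.0) -> Image.Image:
    """Augmentations described in Section 2.1: simulated Gaussian noise,
    random 0-90 degree rotation, and 0.5-2.5x pixel-value scaling (which
    also changes the brightness). noise_sigma is an assumed value."""
    arr = np.asarray(img).astype(np.float32)
    arr += np.random.normal(0.0, noise_sigma, arr.shape)   # simulated Gaussian noise
    arr *= random.uniform(0.5, 2.5)                        # pixel-value / brightness scaling
    out = Image.fromarray(np.clip(arr, 0, 255).astype(np.uint8))
    return out.rotate(random.uniform(0, 90), expand=True)  # random rotation in [0, 90] degrees
```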
A dataset comprising 4547 images was curated and subsequently divided into training, validation, and test subsets following a 7:2:1 ratio. Figure 1 shows the images, names and numbers of the nine pests tested, and Figure 2 shows the distribution of instances of each pest.

2.2. Proposed SMC-YOLO Network

The best-known object detection algorithms belong to the YOLO family, of which YOLOv8 is favored for its small parameter count and high accuracy. When this experiment was designed, YOLOv10 was the newest model, and on our dataset YOLOv8 outperformed the newer models available at the time. YOLOv11 was introduced later and outperformed YOLOv8 on the dataset, but the three proposed modules show no performance improvement on YOLOv11 (see Section 3.6), so YOLOv8 was retained as the benchmark model. The original YOLOv8 has drawbacks such as insufficient backbone feature extraction, a need to enhance the features fused by the neck network, and insufficient focus of the detection head on the object to be detected. To solve these problems, the authors analyzed them carefully and put forward corresponding solutions. First, SPCPM replaces the SPPF module at the top of the backbone, using rich gradient paths to guide the convolution modules toward richer feature information. Then, three MDFEM modules are introduced in the neck to enhance the feature information from multiple angles. Finally, three CSFLNLMs are added at the front of the head, exploiting CSFLNLM's ability to capture global information to improve the efficiency of the detection head. The overall architecture is depicted in Figure 3.

2.2.1. SPCPM

He et al. introduced spatial pyramid pooling (SPP), a structure that solves the image-distortion problem [29]. The authors of the YOLOv5 network proposed the SPPF structure, an improved spatial pyramid pooling that is much faster than SPP. Both SPP and SPPF rely on max-pooling modules, through which the receptive field is expanded and redundant information removed. Inspired by SPP, Chen et al. proposed the ASPP module; the difference between ASPP and SPP is that the max-pooling modules in SPP are replaced by dilated (atrous) convolutions, which expand the receptive field while acquiring richer features [30].
Nevertheless, the ASPP module overlooks the output of shallow features, which are crucial for accurately determining object positions; this oversight persists despite the inclusion of a residual structure intended to address it, which has minimal effect. The analysis suggests that the issue may originate from the relatively small proportion of shallow features within the overall feature set, as well as the significant variance in feature information across layers. It is well known that a neural network can learn more informative weights when its gradient paths are enriched [31], and that changing the length of the gradient paths can effectively prevent overfitting [32]. Inspired by ELAN and SPPF, this paper proposes SPCPM, which significantly reduces the parameter count while enhancing accuracy, as shown in Figure 4.
After successive max-pooling operations, shallow features with different receptive fields are obtained while a large amount of redundant information is removed. After four consecutive nested convolution operations, feature information at different levels is obtained. The SPCPM proposed in this experiment thus provides both high-quality shallow features and deep features at different levels.
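A minimal PyTorch sketch of the SPCPM layout described above follows, assuming the 4-layer configuration and the channel-compression ratio r = 8 selected in Section 3.2.1. The exact wiring of the pooling and convolution chains is inferred from the text and Figure 4, so treat this as illustrative rather than the authors' exact implementation.

```python
import torch
import torch.nn as nn

class CBS(nn.Module):
    """Conv-BatchNorm-SiLU block, as used throughout YOLOv8."""
    def __init__(self, c_in, c_out, k=1):
        super().__init__()
        self.conv = nn.Conv2d(c_in, c_out, k, 1, k // 2, bias=False)
        self.bn = nn.BatchNorm2d(c_out)
        self.act = nn.SiLU()
    def forward(self, x):
        return self.act(self.bn(self.conv(x)))

class SPCPM(nn.Module):
    """Sketch of SPCPM: a 1x1 CBS compresses channels by r; a max-pooling
    chain yields shallow features with growing receptive fields, while a
    nested 3x3 convolution chain yields deep features at different levels;
    all branches are concatenated and fused by a final 1x1 CBS."""
    def __init__(self, c_in, c_out, r=8, depth=4):
        super().__init__()
        c_hidden = c_in // r
        self.compress = CBS(c_in, c_hidden, k=1)
        self.pool = nn.MaxPool2d(kernel_size=5, stride=1, padding=2)
        self.convs = nn.ModuleList(CBS(c_hidden, c_hidden, k=3) for _ in range(depth))
        self.depth = depth
        # compressed input + `depth` pooled maps + `depth` convolved maps
        self.fuse = CBS(c_hidden * (2 * depth + 1), c_out, k=1)

    def forward(self, x):
        x = self.compress(x)
        feats = [x]
        p = x
        for _ in range(self.depth):   # shallow branch: successive max pooling
            p = self.pool(p)
            feats.append(p)
        c = x
        for conv in self.convs:       # deep branch: nested convolutions
            c = conv(c)
            feats.append(c)
        return self.fuse(torch.cat(feats, dim=1))
```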

2.2.2. MDFEM

Due to natural selection by ecosystems, pests increasingly resemble their environments, which allows individuals to better adapt to environmental changes and survive. The backbone network therefore sometimes has difficulty distinguishing foreground from background when extracting features and may extract environmental features irrelevant to the pest; adding a feature-enhancement module is thus necessary for the neural network. Most previous feature-enhancement modules operate along a single dimension, for example, expanding the receptive field, extracting deeper features, or increasing feature richness. Inspired by the Receptive Field Block (RFB) module, the authors synthesized these three directions of improvement and, based on them, proposed MDFEM, as shown in Figure 5.
A 1 × 1 convolution is first utilized to compress the channels while fully integrating the features of each channel. Considering that a multi-branch structure can extract richer features, a multi-branch structure is adopted. Next, more efficient features are extracted by a second layer of 3 × 3 convolution. Finally, a layer of dilated convolution expands the receptive field; by setting different dilation rates, the dilated convolutions capture more comprehensive feature information.
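The following is a minimal PyTorch sketch of that branch layout. The number of branches, the dilation rates (1, 3, 5), the branch widths, and the residual connection are assumptions based on the text, Figure 5, and the RFB design it cites.

```python
import torch
import torch.nn as nn

class MDFEM(nn.Module):
    """Sketch of MDFEM: each parallel branch compresses/mixes channels with
    a 1x1 convolution, extracts features with a 3x3 convolution, and ends
    with a dilated 3x3 convolution whose dilation rate differs per branch
    to enlarge the receptive field. Outputs are fused back to c channels."""
    def __init__(self, c, dilations=(1, 3, 5)):
        super().__init__()
        c_hidden = c // len(dilations)
        self.branches = nn.ModuleList(
            nn.Sequential(
                nn.Conv2d(c, c_hidden, 1, bias=False),                   # channel compression/mixing
                nn.BatchNorm2d(c_hidden), nn.SiLU(),
                nn.Conv2d(c_hidden, c_hidden, 3, padding=1, bias=False), # feature extraction
                nn.BatchNorm2d(c_hidden), nn.SiLU(),
                nn.Conv2d(c_hidden, c_hidden, 3, padding=d, dilation=d,  # receptive-field expansion
                          bias=False),
                nn.BatchNorm2d(c_hidden), nn.SiLU(),
            )
            for d in dilations
        )
        self.fuse = nn.Conv2d(c_hidden * len(dilations), c, 1)

    def forward(self, x):
        # Residual connection (assumed) keeps the original features intact.
        return x + self.fuse(torch.cat([b(x) for b in self.branches], dim=1))
```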

2.2.3. CSFLNLM

Incorporating an attention mechanism in front of the detection head augments its target-perception capacity. Wang et al. put forward the non-local neural network (NL_Net), which can capture global information; nevertheless, its computational overhead is considerable [33]. By studying the attention maps generated at different query locations, Cao et al. found that the attention score is nearly independent of the query location and proposed GC_Net [34]. Both NL_Net and GC_Net are pixel-level non-local attention mechanisms that generate attention scores by capturing pixel-to-pixel relationships, while block-level non-local attention mechanisms are less studied. Mei et al. first introduced the cross-scale non-local (CS_NL) attention mechanism, which extracts feature information among diverse small regions of an image, and applied it to image reconstruction [35]. Inspired by GC_Net and CS_NL, this paper proposes the CSFLNLM, the structure of which is depicted in Figure 6. Figure 7 compares how GC_Net and CSFLNLM obtain attention scores and how each pixel acquires information about other pixels.
As presented in Figure 7a, GC_Net performs channel compression and channel-feature fusion on the input feature maps and then derives a pixel-wise global attention score, which yields global information. Viewed from the perspective of the pixel to which attention is applied, (C × 1 × 1) pixel information is first applied to that pixel, and the process is then traversed (H × W) times, as illustrated in Figure 7b.
In Figure 7c, CSFLNLM first performs a depthwise convolution on the input to extract features, compressing the feature maps from (H × W) to (H/s × W/s). It then computes global attention scores on the compressed feature maps, thereby capturing global information. Viewed from the perspective of the pixel to which attention is applied, (C × s × s) pixel information is first applied to that pixel, and the process is then traversed (H/s × W/s) times, as shown in Figure 7d.
In CSFLNLM, every pixel in each channel first obtains global information, and sufficient channel-feature fusion is then performed so that each pixel can also access information from other channels. The whole process is shown in Figure 7, and its mathematical expression is given below:
Q_{ij} = P_{ij} + W_t[M]   (1)
W_t = \mathrm{Conv2d}\big[\mathrm{LayerNorm}\big[\mathrm{ReLU}\big[\mathrm{Conv2d}[\,\cdot\,]\big]\big]\big]   (2)
M = \sum_{g=1}^{n} \frac{\exp\big(Y_g^{\,j,\,s\times s}\big)}{\sum_{m=1}^{n} \exp\big(Y_m^{\,j,\,s\times s}\big)}\, Y_g^{\,j,\,s\times s}   (3)
n = \frac{H}{s} \times \frac{W}{s}   (4)
CSFLNLM is designed to enable each pixel to capture contextual information from other pixels within the feature map. This process is executed independently across each channel. Therefore, when determining the position of a pixel, it is essential to specify both the channel in which the pixel resides and its spatial coordinates within that channel.
Q_{ij} denotes the i-th pixel of the j-th channel after adding global feature information, and P_{ij} is the i-th pixel of the j-th channel before adding it. W_t denotes the Transform operation, and M is a feature map that captures global information. Y_m^{j, s×s} denotes the m-th pixel of the feature map output by the j-th channel after the depthwise convolution; s denotes the stride of the depthwise convolution, which equals the height and width of its kernel, so s × s is the receptive field of a pixel in the output feature. H and W denote the height and width after padding.
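To make the computation concrete, the following is a minimal PyTorch sketch of CSFLNLM assembled from Equations (1)–(4) and Figure 6. The compression scale s = 4 and the channel-reduction factor of the transform are illustrative assumptions (the paper does not state them), and the input height and width are assumed to be already padded to multiples of s.

```python
import torch
import torch.nn as nn

class CSFLNLM(nn.Module):
    """Sketch of CSFLNLM: a depthwise s x s convolution with stride s
    compresses each channel from H x W to H/s x W/s; a per-channel softmax
    over the compressed positions produces the global descriptor M (Eq. 3);
    the transform W_t (Conv -> ReLU -> LayerNorm -> Conv, Eq. 2) fuses
    channel information; the result is broadcast-added to every pixel (Eq. 1)."""
    def __init__(self, c, s=4, reduction=4):
        super().__init__()
        # Depthwise convolution: kernel size and stride both equal s (Section 2.2.3).
        self.compress = nn.Conv2d(c, c, kernel_size=s, stride=s, groups=c, bias=False)
        self.transform = nn.Sequential(
            nn.Conv2d(c, c // reduction, 1),
            nn.ReLU(inplace=True),
            nn.LayerNorm([c // reduction, 1, 1]),
            nn.Conv2d(c // reduction, c, 1),
        )

    def forward(self, x):
        # H and W are assumed padded to multiples of s.
        b, c, _, _ = x.shape
        y = self.compress(x).flatten(2)            # (B, C, n) with n = H/s * W/s, Eq. (4)
        w = torch.softmax(y, dim=-1)               # per-channel attention scores
        m = (w * y).sum(dim=-1).view(b, c, 1, 1)   # global descriptor M, Eq. (3)
        return x + self.transform(m)               # Eq. (1): broadcast residual addition
```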

2.3. Training Procedures

The experimental platform and training parameters utilized are presented in Table 1 and Table 2.
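For reference, the settings in Table 2 map directly onto the Ultralytics YOLOv8 training interface. The following is a sketch of an equivalent baseline training run; the dataset YAML name is illustrative, and the SMC-YOLO architecture changes themselves require a modified model definition.

```python
from ultralytics import YOLO

# Train a YOLOv8n baseline with the hyperparameters of Table 2.
# 'maize_pests.yaml' is an illustrative dataset config, not from the paper.
model = YOLO("yolov8n.yaml")
model.train(
    data="maize_pests.yaml",
    epochs=200,          # training epochs
    batch=16,            # batch size
    lr0=0.001,           # initial learning rate
    momentum=0.937,
    weight_decay=0.005,
    dropout=0.3,
)
```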

2.4. Performance Evaluation

In this study, F1, mAP@0.5, mAP@0.75, and mAP@0.5:0.95 were chosen as measures of the network. Precision (P) and Recall (R) typically exhibit an inverse relationship, and F1 synthesizes the P and R of the network; the higher the value, the better the detection of pests. The corresponding equations are presented in (5)–(7). Specifically, mAP@0.5 and mAP@0.75 denote the mean Average Precision at Intersection over Union (IoU) thresholds of 0.5 and 0.75, respectively, while mAP@0.5:0.95 is the mean of the mAP values computed at IoU thresholds from 0.5 to 0.95 in steps of 0.05. The formulas are given in (8)–(9).
P = \frac{TP}{TP + FP}   (5)
R = \frac{TP}{TP + FN}   (6)
F1 = \frac{2PR}{P + R}   (7)
AP = \int_{0}^{1} P(r)\,dr   (8)
mAP = \frac{1}{k} \sum_{i=1}^{k} AP_i   (9)
TP, FP, and FN denote true positives, false positives, and false negatives, respectively. k indicates the number of pest species; k = 9 in this experiment.
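As a quick illustration of Equations (5)–(9), the sketch below computes F1 from raw counts and approximates AP and mAP numerically; it is a simplified stand-in for the full box-matching pipeline used by detection toolkits.

```python
import numpy as np

def f1_score(tp: int, fp: int, fn: int) -> float:
    """Equations (5)-(7): precision, recall, and their harmonic mean F1."""
    p = tp / (tp + fp)
    r = tp / (tp + fn)
    return 2 * p * r / (p + r)

def average_precision(recall: np.ndarray, precision: np.ndarray) -> float:
    """Equation (8): area under the precision-recall curve, approximated
    here by trapezoidal integration (recall sorted ascending)."""
    return float(np.trapz(precision, recall))

def mean_ap(ap_per_class) -> float:
    """Equation (9): mAP over k classes (k = 9 for this experiment)."""
    return float(np.mean(ap_per_class))

# mAP@0.50:0.95 repeats the AP computation at IoU thresholds
# 0.50, 0.55, ..., 0.95 and averages the resulting mAP values.
iou_thresholds = np.arange(0.50, 0.96, 0.05)
```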

3. Results

In this section, the three innovative modules proposed in this paper are evaluated, covering the role of each module, ablation experiments to determine their parameters, and comparison experiments against existing similar modules.

3.1. Ablation Experiments

To assess the significance of the three modules within the network, comparative experiments were carried out using YOLOv8n as the baseline model. The outcomes are displayed in Table 3.
Replacing the SPPF in the baseline model with the proposed SPCPM improves all four metrics, and the improvement is most pronounced on the more demanding mAP@0.75, which rises by 2.2%. The authors attribute this to the fact that SPCPM can make full use of shallow features and extract more deep features that benefit target detection. Adding MDFEM improved all four metrics, verifying that augmenting features from multiple dimensions is effective. Adding CSFLNLM likewise improved all metrics, indicating that CSFLNLM effectively improves the detection capability of the detection head. Section 3.2 describes these three innovations in more detail.

3.2. Effect of Different Parameters and Comparison with Other Modules

The impacts of different parameters and different structures in each module on the network model will be explored, followed by a description of the process of determining the model structure and parameters and a comparative analysis with other mainstream modules.

3.2.1. Comparative Experiments with SPCPM

To investigate the effect of stacking different numbers of convolutional and pooling layers on the network's detection results, networks with stacks of 3, 4, and 5 layers were trained and tested; the structures are illustrated in Figure 4a,c,d. The outcomes are presented in Table 4.
As the number of layers increases, the detection speed decreases while the parameter count and model complexity gradually grow. Compared with the SPPF module, SPCPM compresses the number of feature-map channels to one eighth of the original through its first CBS module. Thus, although SPCPM has four more convolutional modules and one more max-pooling module than SPPF, it still holds an advantage in detection speed and model complexity.
Combined with Table 4, it can be concluded that deepening the network structure improves performance, but the stacking cannot be repeated indefinitely. As the number of layers grows, performance gradually stabilizes, and when the model is deepened further, detection performance decreases. The authors' analysis is that stacking more convolutional layers may cause the module to extract redundant information, reducing the overall efficiency of the network.
In order to reduce the parameters of the network model, channel compression is performed during the execution of the first CBS module, with a compression factor of r indicating that the number of output channels is 1/r of the number of input channels. The outcomes are presented in Table 5.
As the degree of compression increases, each channel of the output feature map carries more feature information and parameter utilization rises, but when the compression is increased further, model performance begins to decrease. The authors posit that too few output channels may disrupt the original feature relationships; consequently, selecting an appropriate compression ratio is of great significance for the network model.
This experiment also compares the proposed SPCPM with eight other improved networks. The outcomes are presented in Table 6, and the comparison of PR curves before and after the improvement is shown in Figure 8.
From Table 6, except for SPPCSPPC, SPCPM achieves both an increase in accuracy and a decrease in parameters compared with the other modules, making it the optimal structure. Compared with SPPCSPPC, only two metrics, F1 and mAP@0.75, were improved, by 0.09% and 1.1%, but the module parameters were reduced by 93.3%; therefore, we conclude that SPCPM is still more efficient than SPPCSPPC. SPCPM achieves excellent results compared with the other modules, and the authors attribute this to its ability to output pure shallow features and rich deep features at the same time.

3.2.2. Comparative Experiments with MDFEM

In this subsection, the impact of the various parts of MDFEM on the network is explored by comparing the variants MDFEM_C and MDFEM_D, and a joint experiment with SPCPM is conducted to explore the impact of combining the two modules on the baseline model. The network structures of MDFEM_C and MDFEM_D are shown in Figure 5c,d. The outcomes are presented in Table 7.
Observing Table 7, the combination of SPCPM and MDFEM outperforms the other two combinations proposed in this paper, although MDFEM alone does not show an absolute advantage over MDFEM_C and MDFEM_D. Considering the improvement of the overall network, MDFEM is therefore selected as the feature-enhancement module.

3.2.3. Comparative Experiments with CSFLNLM

In this section, CSFLNLM is compared with mainstream non-local attention mechanisms such as NL_Net and GC_Net and with state-of-the-art attention mechanisms such as EMA_Net [36] and BiFormer_Net [37]. The outcomes are presented in Table 8.
Observing Table 8, CSFLNLM outperforms NL_Net, GC_Net, and EMA_Net on all four metrics. Compared with BiFormer_Net, CSFLNLM performs slightly worse on mAP@0.75 but best on the other three metrics, so the authors believe the results still demonstrate that the proposed CSFLNLM improves performance. A comparison of feature maps from the baseline model and after adding CSFLNLM is shown in Figure 9, and a comparison of PR curves in Figure 10.

3.3. Error Analyses

As observed in Figure 11 and Figure 12, the accuracy of the improved network increased by 1.91%. In the per-class analysis, the false-negative rate decreased for 6 classes, remained the same for 1, and increased slightly for only 2. This may be because, even within the same larval species, individuals still differ in body shape and color, which affects the recognition performance of the network.

3.4. Comparison of SMC-YOLO with Other Methods

To validate the superiority of SMC-YOLO for corn pest detection, this section compares the improved network with seven mainstream target detection networks: Faster R-CNN, SSD, YOLOv5, YOLOv8, YOLOv9, YOLOv10, and YOLOv11. The outcomes are displayed in Table 9.
As observed in Table 9, compared with Faster R-CNN, SMC-YOLO improved in all four metrics: F1 by 14.4%, mAP@0.50 by 4.1%, mAP@0.75 by 22.1%, and mAP@0.50:0.95 by 13.4%. It also improved in all metrics relative to SSD, YOLOv5, YOLOv8, YOLOv9, and YOLOv10. Compared with the latest YOLOv11 model, F1 improved by 1.45%, mAP@0.5 by 1.7%, mAP@0.75 by 1.2%, and mAP@0.50:0.95 by 1.7%; the most significant improvements were seen in mAP@0.75 and mAP@0.50:0.95. The performance of the various detection models in actual detection is shown in Figure 13.
Two images were randomly selected for prediction, and the results are depicted in Figure 13. In Figure 13a, every detection algorithm misses targets to some degree: YOLOv9, YOLOv10, and YOLOv11 detect no targets at all; SSD, YOLOv5, and YOLOv8 each miss 7; Faster R-CNN misses 5; while the proposed SMC-YOLO misses only 4, fewer than any other model. As demonstrated in Figure 13b, all models except SMC-YOLO and YOLOv11 show some degree of missed detection: Faster R-CNN, SSD, YOLOv5, YOLOv8, and YOLOv10 each leave one target undetected, and YOLOv9 leaves two. In addition, Faster R-CNN produces one erroneous detection. Only the proposed SMC-YOLO and YOLOv11 accurately detect all targets.
Together, the data in Table 9 and the detection results in Figure 13 show that SMC-YOLO outperforms a series of algorithms, including YOLOv11, which further validates the superiority of the proposed network model.

3.5. Experiment for Model Generalization Ability Verification

The superiority of SMC-YOLO has been verified through the above experiments. In the field of object detection, applying the improved algorithm to other datasets is a common way to study generalization ability. In this experiment, the authors applied the modified SMC-YOLO to the detection of wheat pests to verify its generalization ability.
The wheat pest dataset used in this experiment was selected from IP102 and augmented by noise addition, rotation, and similar operations. In total, 3312 images were obtained, covering four common types of wheat pests, and divided in a 7:2:1 ratio. Network parameters were set as in the previous experiments. The experimental results are shown in Table 10 and Table 11.
As can be seen from Table 10 and Table 11, the proposed SMC-YOLO still outperforms YOLOv8 on the wheat pest dataset, showing an advantage in all four metrics. This indicates that SMC-YOLO remains competitive on a different pest dataset and demonstrates that the improved model has some generalization ability.

3.6. The Performance of the Three Modules in the Latest YOLO Model

The team also combined the three modules proposed in this experiment with YOLOv11; the results are shown in Table 12.
As depicted in Table 12, integrating the three modules into YOLOv11 led to a decline in performance. The authors' view is that in YOLOv11, the modified C2f, the detection head, and the newly introduced attention mechanism may impede these three modules from enhancing the network's performance. Consequently, when applying the three modules proposed herein to other YOLO models, appropriate adjustments to the network structure and parameters are essential.

4. Discussion

As analyzed above, SMC-YOLO outperforms all current mainstream detection algorithms, including YOLOv11, which stems from analyzing and remedying the principal defects of the baseline network. The SPCPM structure grew out of a critique of the residual structure, the ELAN structure, and the study of gradient paths; in future work, we will attempt to replace residual structures with the SPCPM structure to develop more advanced algorithms. The proposal of CSFLNLM stems from reflection on depthwise separable convolution and on how ordinary attention mechanisms operate, and it is a first attempt to apply an attention mechanism at the feature level. Although it outperforms networks such as EMA_Net, it still carries the limitation of more parameters, and in future work we will try to develop more efficient modules from the perspective of pointwise convolution. Moreover, the planned secondary development of MDFEM will introduce a lightweight attention mechanism to achieve dual optimization of performance and parameters.
The pest-detection model presented in this experiment is mainly suited to detecting larvae rather than adults or full-cycle pests. This choice reflects the fact that pest characteristics differ significantly across life stages: the damage caused to corn and the growth environments of larvae and adults vary from period to period. Most current pest-detection algorithms use datasets that encompass both adults and larvae, which saves computational resources but sacrifices recognition quality, which is not what farmers want. Considering the concealed growth environment of larvae and related issues, larval-detection algorithms can currently only be deployed in the cloud, so the research focus can lean toward improving accuracy.
The group’s goal is to build a monitoring system for crop pests, including larval identification, adult monitoring, and other analysis platforms that affect the growth and development of corn and the formation of pests. The future research direction is to build a maize adult insect detection platform and an environmental factor analysis platform. The maize adult insect detection platform includes highly efficient pest traps and high-precision and lightweight adult insect detection algorithms. The environmental factor analysis platform is used to analyze the influence of weather temperature, humidity and other factors on the formation of pests. The difficulty in establishing an environmental awareness platform lies in the preservation and analysis of historical data. Experienced growers can judge the emergence of insect pests with their experience, but farmers are not able to quantify this experience in the form of numbers, so tracking the weather and the impact of the climatic environment on the insect pests is a long-term endeavor.
In the future, the completion of the agricultural pest monitoring system will continue to contribute to the protection of the ecological environment and the preservation of crop yields.

5. Conclusions

To effectively alleviate the detrimental effects of pests on maize yield, this research puts forward an innovative maize pest-detection algorithm, namely SMC-YOLO. Its superior performance is meticulously verified via a series of experiments. The algorithm is intended for deployment on a cloud platform, aiming to offer corn cultivators convenient pest species identification services.
In this study, pests are categorized into adult and larval stages based on their living environments, flight capabilities, and the extent of damage inflicted on corn. Current pest-detection algorithms typically train networks using a mixture of adults and larvae, yielding models with limited practical applicability. Consequently, this paper focuses on addressing the larval-recognition problem. Given the concealed habitats of larvae and their significant harm to corn, manual photography followed by cloud-based upload is deemed the optimal approach for their identification.
Growers may face many problems during shooting, such as insufficient light, varying shooting angles, and low-resolution equipment. To attenuate these unfavorable effects, we use strategies such as adjusting image brightness, introducing noise, and rotating images, which effectively expand the dataset while enhancing the robustness of the algorithm. Meanwhile, most of the data used in this experiment came from China, so if the algorithm is applied to corn pest detection in other regions, the parameters may need to be adjusted accordingly. In addition, we verified the generalization performance of the network on the wheat pest dataset and achieved satisfactory results.
When applying these modules to other models, e.g., YOLOv5 and YOLOv11, the target model needs in-depth analysis and adjustment. Given the intrinsic disparities in network structure design concepts and parameter configurations among different models, parameter settings must be optimized accordingly.
The SMC-YOLO algorithm proposed in this study focuses on detecting larvae. However, adult insects can also pose a threat to corn. Adults, which are less harmful to corn and can fly, can be exterminated by traps. Because the uploading of pest images by traps is strongly affected by the mobile-network environment, direct local identification is the preferable method for them.
Therefore, exploring high-performance, lightweight detection networks and deploying them on pest traps for adult insect identification and classification is also a focus of our future research. By utilizing advanced technologies to promote smart agriculture and the high-quality and high-yield production of agricultural products, this research has far-reaching significance in alleviating global hunger and poverty.

Author Contributions

Conceptualization and methodology, software and writing—original draft preparation, Q.W.; Investigation and sources, formal analysis, Y.L. (Yongkang Liu); validation and data curation, visualization, Q.Z.; project administration, R.T.; writing—review and editing and supervision, funding acquisition, Y.L. (Yong Liu). All authors have read and agreed to the published version of the manuscript.

Funding

This study was funded by the following projects: 2023 Heilongjiang Provincial Department of Education “Double First-class” Discipline Collaborative Innovation Achievement Project: Research and Development of Intelligent Irrigation Controller (LJGXCG2023-063), and 2024 Heilongjiang Provincial Department of Education “Double First-class” Discipline Collaborative Innovation Achievement Project (202310212169S).

Data Availability Statement

The code and data sets in the present study may be available from the corresponding author upon request.

Conflicts of Interest

Author Rui Tao was employed by the company Harbin Dongshui Intelligent Agriculture Technology Co. Author Yong Liu was employed by the company Nantong Xikerui Intelligent Technology Co., Ltd. The remaining authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

References

  1. Li, X.; Liu, Y.; Pei, Z.; Tong, G.; Yue, J.; Li, J.; Dai, W.; Xu, H.; Shang, D.; Ban, L. The Efficiency of Pest Control Options against Two Major Sweet Corn Ear Pests in China. Insects 2023, 14, 929. [Google Scholar] [CrossRef] [PubMed]
  2. Li, W.; Yang, Z.; Lv, J.; Zheng, T.; Li, M.; Sun, C. Detection of Small-Sized Insects in Sticky Trapping Images Using Spectral Residual Model and Machine Learning. Front. Plant Sci. 2022, 13, 915543. [Google Scholar] [CrossRef] [PubMed]
  3. Zhao, X.; Zhang, J.; Huang, Y.; Tian, Y.; Yuan, L. Detection and discrimination of disease and insect stress of tea plants using hyperspectral imaging combined with wavelet analysis. Comput. Electron. Agric. 2022, 193, 106717. [Google Scholar] [CrossRef]
  4. Amrani, A.; Sohel, F.; Diepeveen, D.; Murray, D.; Jones, M.G.K.; Cammaran, D. Insect detection from imagery using YOLOv3-based adaptive feature fusion convolution network. Crop Pasture Sci. 2022, 74, 615–627. [Google Scholar] [CrossRef]
  5. Albanese, A.; Nardello, M.; Brunelli, D. Automated Pest Detection with DNN on the Edge for Precision Agriculture. IEEE J. Emerg. Sel. Top. Circuits Syst. 2021, 11, 458–467. [Google Scholar] [CrossRef]
  6. Anitha, G.; Harini, P.; Chandru, V.; Abdullah, S.; Rahman, B.S.A. Pest Detection and Identification in Rice crops using Yolo V3 Convolutional Neural Network. In Proceedings of the 2024 OPJU International Technology Conference (OTCON) on Smart Computing for Innovation and Advancement in Industry 4.0, Raigarh, India, 5–7 June 2024; pp. 1–5. [Google Scholar]
  7. Sozzi, M.; Cantalamessa, S.; Cogato, A.; Kayad, A.; Marinello, F. Automatic Bunch Detection in White Grape Varieties Using YOLOv3, YOLOv4, and YOLOv5 Deep Learning Algorithms. Agronomy 2022, 12, 319. [Google Scholar] [CrossRef]
  8. Lippi, M.; Bonucci, N.; Carpio, R.F.; Contarini, M.; Speranza, S.; Gasparri, A. A YOLO-Based Pest Detection System for Precision Agriculture. In Proceedings of the 2021 29th Mediterranean Conference on Control and Automation (MED), Puglia, Italy, 22–25 June 2021; pp. 342–347. [Google Scholar]
  9. Bari, B.S.; Islam, M.N.; Rashid, M.; Hasan, M.J.; Razman, M.A.M.; Musa, R.M.; Ab Nasir, A.F.; Majeed, A.P.A. A real-time approach of diagnosing rice leaf disease using deep learning-based faster R-CNN framework. PeerJ Comput. Sci. 2021, 7, e432. [Google Scholar] [CrossRef]
  10. Bhujel, A.; Kim, N.-E.; Arulmozhi, E.; Basak, J.K.; Kim, H.-T. A Lightweight Attention-Based Convolutional Neural Networks for Tomato Leaf Disease Classification. Agriculture 2022, 12, 228. [Google Scholar] [CrossRef]
  11. Jiao, L.; Xie, C.; Chen, P.; Du, J.; Li, R.; Zhang, J. Adaptive feature fusion pyramid network for multi-classes agricultural pest detection. Comput. Electron. Agric. 2022, 195, 106827. [Google Scholar] [CrossRef]
  12. Liu, L.; Xie, C.; Wang, R.; Yang, P.; Sudirman, S.; Zhang, J.; Li, R.; Wang, F. Deep Learning Based Automatic Multiclass Wild Pest Monitoring Approach Using Hybrid Global and Local Activated Features. IEEE Trans. Ind. Inform. 2021, 17, 7589–7598. [Google Scholar] [CrossRef]
  13. Sharma, A.; Kumar, V.; Longchamps, L. Comparative performance of YOLOv8, YOLOv9, YOLOv10, YOLOv11 and Faster R-CNN models for detection of multiple weed species. Smart Agric. Technol. 2024, 9, 100648. [Google Scholar] [CrossRef]
  14. Abdullah, A.; Amran, G.A.; Tahmid, S.M.A.; Alabrah, A.; Al-Bakhrani, A.A.; Ali, A. A Deep-Learning-Based Model for the Detection of Diseased Tomato Leaves. Agronomy 2024, 14, 1593. [Google Scholar] [CrossRef]
  15. S, P.K.; K, N.K. Drone-based apple detection: Finding the depth of apples using YOLOv7 architecture with multi-head attention mechanism. Smart Agric. Technol. 2023, 5, 100311. [Google Scholar] [CrossRef]
  16. Dai, M.; Dorjoy, M.M.H.; Miao, H.; Zhang, S. A New Pest Detection Method Based on Improved YOLOv5m. Insects 2023, 14, 54. [Google Scholar] [CrossRef] [PubMed]
  17. Hu, Y.; Deng, X.; Lan, Y.; Chen, X.; Long, Y.; Liu, C. Detection of Rice Pests Based on Self-Attention Mechanism and Multi-Scale Feature Fusion. Insects 2023, 14, 280. [Google Scholar] [CrossRef]
  18. Roy, A.M.; Bose, R.; Bhaduri, J. A fast accurate fine-grain object detection model based on YOLOv4 deep neural network. Neural Comput. Appl. 2022, 34, 3895–3921. [Google Scholar] [CrossRef]
  19. Kumar, N.; Nagarathna; Flammini, F. YOLO-Based Light-Weight Deep Learning Models for Insect Detection System with Field Adaption. Agriculture 2023, 13, 741. [Google Scholar] [CrossRef]
  20. Li, K.; Wang, J.; Jalil, H.; Wang, H. A fast and lightweight detection algorithm for passion fruit pests based on improved YOLOv5. Comput. Electron. Agric. 2023, 204, 107534. [Google Scholar] [CrossRef]
  21. Lin, J.; Bai, D.; Xu, R.; Lin, H. TSBA-YOLO: An Improved Tea Diseases Detection Model Based on Attention Mechanisms and Feature Fusion. Forests 2023, 14, 619. [Google Scholar] [CrossRef]
  22. Vilar-Andreu, M.; García, L.; Garcia-Sanchez, A.-J.; Asorey-Cacheda, R.; Garcia-Haro, J. Enhancing Precision Agriculture Pest Control: A Generalized Deep Learning Approach with YOLOv8-Based Insect Detection. IEEE Access 2024, 12, 84420–84434. [Google Scholar] [CrossRef]
  23. Bazame, H.C.; Molin, J.P.; Althoff, D.; Martello, M. Detection, classification, and mapping of coffee fruits during harvest with computer vision. Comput. Electron. Agric. 2021, 183, 106066. [Google Scholar] [CrossRef]
  24. Arjun, K.; Shine, L.; S, D. Pest Detection and Disease Categorization in Tomato Crops using YOLOv8. In Proceedings of the 2024 IEEE Recent Advances in Intelligent Computational Systems (RAICS), Kothamangalam, India, 16–18 May 2024; pp. 1–6. [Google Scholar]
  25. Khalid, S.; Oqaibi, H.M.; Aqib, M.; Hafeez, Y. Small Pests Detection in Field Crops Using Deep Learning Object Detection. Sustainability 2023, 15, 6815. [Google Scholar] [CrossRef]
  26. Alves, A.; Pereira, J.; Khanal, S.; Morais, A.J.; Filipe, V. Pest Detection in Olive Groves Using YOLOv7 and YOLOv8 Models. In Proceedings of the International Conference on Optimization, Learning Algorithms and Applications, Ponta Delgada, Portugal, 27–29 September 2023; pp. 50–62. [Google Scholar]
  27. Mamdouh, N.; Khattab, A. YOLO-Based Deep Learning Framework for Olive Fruit Fly Detection and Counting. IEEE Access 2021, 9, 84252–84262. [Google Scholar] [CrossRef]
  28. Tetila, E.C.; da Silveira, F.A.G.; da Costa, A.B.; Amorim, W.P.; Astolfi, G.; Pistori, H.; Barbedo, J.G.A. YOLO performance analysis for real-time detection of soybean pests. Smart Agric. Technol. 2024, 7, 100405. [Google Scholar] [CrossRef]
  29. He, K.; Zhang, X.; Ren, S.; Sun, J. Spatial Pyramid Pooling in Deep Convolutional Networks for Visual Recognition. IEEE Trans. Pattern Anal. Mach. Intell. 2015, 37, 1904–1916. [Google Scholar] [CrossRef]
  30. Chen, L.C.; Papandreou, G.; Kokkinos, I.; Murphy, K.; Yuille, A.L. DeepLab: Semantic Image Segmentation with Deep Convolutional Nets, Atrous Convolution, and Fully Connected CRFs. IEEE Trans. Pattern Anal. Mach. Intell. 2018, 40, 834–848. [Google Scholar] [CrossRef]
  31. Wang, C.Y.; Bochkovskiy, A.; Liao, H.-Y.M. Scaled-YOLOv4: Scaling Cross Stage Partial Network. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Nashville, TN, USA, 20–25 June 2021; pp. 13024–13033. [Google Scholar]
  32. Wang, C.Y.; Liao, H.-Y.M.; Yeh, I.-H. Designing network design strategies through gradient path analysis. arXiv 2022, arXiv:2211.04800. [Google Scholar]
  33. Wang, X.; Girshick, R.; Gupta, A.; He, K. Non-local neural networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–22 June 2018; pp. 7794–7803. [Google Scholar]
  34. Cao, Y.; Xu, J.; Lin, S.; Wei, F.; Hu, H. GCNet: Non-Local Networks Meet Squeeze-Excitation Networks and Beyond. In Proceedings of the IEEE International Conference on Computer Vision Workshop, Seoul, Republic of Korea, 27–28 October 2019; pp. 1971–1980. [Google Scholar]
  35. Mei, Y.; Fan, Y.; Zhou, Y.; Huang, L.; Huang, T.S.; Shi, H. Image Super-Resolution with Cross-Scale Non-Local Attention and Exhaustive Self-Exemplars Mining. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA, 13–19 June 2020; pp. 5690–5699. [Google Scholar]
  36. Ouyang, D.; He, S.; Zhang, G.; Luo, M.; Guo, H.; Zhan, J.; Huang, Z. Efficient Multi-Scale Attention Module with Cross-Spatial Learning. In Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing, Rhodes, Greece, 4–10 June 2023; pp. 1–5. [Google Scholar]
  37. Zhu, L.; Wang, X.; Ke, Z.; Zhang, W.; Lau, R. BiFormer: Vision Transformer with Bi-Level Routing Attention. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA, 18–22 June 2023; pp. 10323–10333. [Google Scholar]
Figure 1. Pictures, names, and serial numbers of pests.
Figure 2. Number of instances of each type of pest.
Figure 3. General framework structure of the SMC-YOLO network model.
Figure 4. (a) is the SPCPM; (b) is the parameter description of each module; (c,d) are the comparative models of the ablation experiments for this module.
Figure 5. (a) is the MDFEM proposed in this paper; (b) is the parameter description of each module; (c,d) are the comparative ablation experimental modules.
Figure 6. CSFLNLM general architecture.
Figure 7. Comparison of GC_Net and CSFLNLM. (a) GC_Net generates attention scores; (b) GC_Net single-pixel capture of global information; (c) CSFLNLM generates attention scores; (d) CSFLNLM single-pixel capture of global information.
Figure 8. Comparison of PR curves of SPPF and SPCPM.
Figure 9. Gradient CAM visualization results.
Figure 10. PR curves for the baseline model vs. after adding CSFLNLM.
Figure 11. YOLOv8's confusion matrix.
Figure 12. SMC-YOLO's confusion matrix.
Figure 13. Prediction results of SMC-YOLO with other networks. Red circles indicate missed targets; yellow circles indicate targets that were detected incorrectly.
Table 1. Platforms for hosting experiments.

Platform | Configuration
Operating system | Ubuntu 22.04
Central Processing Unit (CPU) | AMD EPYC 9754 128-Core
Graphics Processing Unit (GPU) | NVIDIA GeForce RTX 3090
GPU accelerator | CUDA 12.1
Deep-learning framework | PyTorch 2.3.1
Scripting language | Python 3.10
Table 2. Experimental parameters.

Parameters | Configuration
Weight decay | 0.005
Momentum | 0.937
Learning rate | 0.001
Batch size | 16
Training epochs | 200
Dropout | 0.3
Table 3. Impact of modules and their combinations on network modeling.

SPCPM | MDFEM | CSFLNLM | F1 (%) | mAP@0.50 (%) | mAP@0.75 (%) | mAP@0.50:0.95 (%)
– | – | – | 82.29 | 84.9 | 67.1 | 58.5
√ | – | – | 83.09 | 86.2 | 69.3 | 59.5
– | √ | – | 82.71 | 85.2 | 67.9 | 58.7
– | – | √ | 84.21 | 86.6 | 67.3 | 59.8
√ | √ | – | 82.20 | 86.1 | 69.9 | 59.7
√ | – | √ | 81.35 | 85.6 | 69.7 | 59.5
– | √ | √ | 81.46 | 84.7 | 68.0 | 59.5
√ | √ | √ | 83.18 | 86.7 | 70.0 | 60.6
√ indicates that this module is added to the original model.
Table 4. Effect of stacking different layers on the network.

Name | Params | F1 (%) | mAP@0.50 (%) | mAP@0.75 (%) | mAP@0.50:0.95 (%)
3-layer | 93,952 | 82.26 | 85.4 | 69.0 | 59.4
4-layer | 119,516 | 83.09 | 86.2 | 69.3 | 59.5
5-layer | 145,280 | 83.31 | 85.8 | 67.8 | 59.1
Table 5. Effect of scaling factors on the model.

r | Parameters | F1 (%) | mAP@0.50 (%) | mAP@0.75 (%) | mAP@0.50:0.95 (%)
4 | 312,448 | 81.38 | 85.0 | 68.4 | 59.1
8 | 119,516 | 83.09 | 86.2 | 69.3 | 59.5
12 | 70,358 | 81.95 | 84.3 | 65.7 | 58.1
Table 6. Comparison of SPCPM with other mainstream models.

Name | Parameters | F1 (%) | mAP@0.50 (%) | mAP@0.75 (%) | mAP@0.50:0.95 (%)
SPP | 164,608 | 82.94 | 86.2 | 68.4 | 59.1
SimSPPF | 164,608 | 82.61 | 85.3 | 68.9 | 58.7
ASPP | 2,229,760 | 81.59 | 84.5 | 67.9 | 58.5
BasicRFB | 330,272 | 81.30 | 85.0 | 67.9 | 58.6
SPPCSPPC | 1,773,056 | 83.00 | 86.7 | 68.2 | 60.0
SPPCSPC_group | 445,952 | 81.52 | 84.6 | 67.1 | 58.7
SPPFCSPC | 1,773,056 | 81.80 | 84.8 | 68.3 | 59.2
SPPF | 164,608 | 82.29 | 84.9 | 67.1 | 58.5
SPCPM (Ours) | 119,616 | 83.09 | 86.2 | 69.3 | 59.5
Table 7. Impact of MDFEM components on the network.

Name | F1 (%) | mAP@0.50 (%) | mAP@0.75 (%) | mAP@0.50:0.95 (%)
MDFEM_C | 83.25 | 86.0 | 69.4 | 59.3
MDFEM_D | 83.65 | 86.2 | 67.5 | 59.3
MDFEM | 82.71 | 85.2 | 67.9 | 58.7
SPCPM + MDFEM_C | 81.60 | 85.3 | 68.3 | 59.5
SPCPM + MDFEM_D | 83.05 | 86.4 | 69.2 | 59.4
SPCPM + MDFEM | 82.20 | 86.1 | 69.9 | 59.7
Table 8. Comparison with other attention mechanisms.

Name | F1 (%) | mAP@0.50 (%) | mAP@0.75 (%) | mAP@0.50:0.95 (%)
YOLOv8n | 82.29 | 84.9 | 67.1 | 58.5
YOLOv8n + NL_Net | 81.98 | 84.9 | 66.8 | 58.2
YOLOv8n + GC_Net | 81.45 | 84.8 | 66.2 | 58.4
YOLOv8n + EMA_Net | 83.13 | 85.8 | 67.0 | 58.9
YOLOv8n + BiFormer_Net | 83.53 | 86.4 | 67.7 | 59.7
YOLOv8n + CSFLNLM (Ours) | 84.21 | 86.6 | 67.3 | 59.8
Table 9. Comparison of SMC-YOLO with seven other target detection networks.

Name | F1 (%) | mAP@0.50 (%) | mAP@0.75 (%) | mAP@0.50:0.95 (%)
Faster R-CNN | 68.78 | 82.6 | 47.9 | 47.2
SSD | 82.8 | 80.5 | 57.4 | 42.6
YOLOv5 | 81.56 | 85.2 | 64.7 | 57.3
YOLOv8 | 82.29 | 84.9 | 67.1 | 58.5
YOLOv9 | 77.14 | 80.5 | 58.1 | 51.8
YOLOv10 | 78.16 | 82.1 | 58.0 | 51.8
YOLOv11 | 81.73 | 85.0 | 68.8 | 58.9
SMC-YOLO (Ours) | 83.18 | 86.7 | 70.0 | 60.6
Table 10. Results of YOLOv8 on the wheat pest dataset.

Class | F1 | mAP@0.50 (%) | mAP@0.75 (%) | mAP@0.50:0.95 (%)
Wheat sawfly | 0.9729 | 98.40 | 72.80 | 62.30
Penthaleus major | 0.7359 | 74.30 | 17.60 | 29.20
Wheat blossom midge | 0.9820 | 98.70 | 62.10 | 58.60
Wheat phloeothrips | 0.8404 | 89.90 | 84.10 | 67.10
All | 0.8828 | 90.40 | 59.10 | 54.30
Table 11. Results of SMC-YOLO on the wheat pest dataset.

Class | F1 | mAP@0.50 (%) | mAP@0.75 (%) | mAP@0.50:0.95 (%)
Wheat sawfly | 0.9375 | 96.70 | 72.00 | 62.20
Penthaleus major | 0.7101 | 71.30 | 18.80 | 27.30
Wheat blossom midge | 0.9950 | 99.50 | 66.30 | 59.70
Wheat phloeothrips | 0.9074 | 94.90 | 89.40 | 71.70
All | 0.8875 | 90.60 | 61.60 | 55.20
Table 12. Performance of the three modules on YOLOv11.

Name | F1 (%) | mAP@0.50 (%) | mAP@0.75 (%) | mAP@0.50:0.95 (%)
SMC-YOLO (v8) | 83.18 | 86.7 | 70.0 | 60.6
YOLOv11 | 81.73 | 85.0 | 68.8 | 58.9
SMC-YOLO (v11) | 81.22 | 84.5 | 65.6 | 57.7