Article

Automatic Paddy Planthopper Detection and Counting Using Faster R-CNN

by Siti Khairunniza-Bejo 1,2,3, Mohd Firdaus Ibrahim 2,4,*, Marsyita Hanafi 5, Mahirah Jahari 2,3, Fathinul Syahir Ahmad Saad 6 and Mohammad Aufa Mhd Bookeri 7

1 Institute of Plantation Studies, Universiti Putra Malaysia, Serdang 43400, Malaysia
2 Department of Biological and Agricultural Engineering, Faculty of Engineering, Universiti Putra Malaysia, Serdang 43400, Malaysia
3 Smart Farming Technology Research Centre, Faculty of Engineering, Universiti Putra Malaysia, Serdang 43400, Malaysia
4 Department of Agrotechnology, Faculty of Mechanical Engineering Technology, Universiti Malaysia Perlis, Arau 02600, Malaysia
5 Department of Computer and Communication Systems Engineering, Faculty of Engineering, Universiti Putra Malaysia, Serdang 43400, Malaysia
6 Department of Mechatronic, Faculty of Electrical Engineering Technology, Universiti Malaysia Perlis, Arau 02600, Malaysia
7 Engineering Research Centre, Malaysian Agriculture Research and Development Institute, Seberang Perai 13200, Malaysia
* Author to whom correspondence should be addressed.
Agriculture 2024, 14(9), 1567; https://doi.org/10.3390/agriculture14091567
Submission received: 29 March 2024 / Revised: 31 August 2024 / Accepted: 2 September 2024 / Published: 10 September 2024
(This article belongs to the Special Issue Advanced Image Processing in Agricultural Applications)

Abstract

Counting planthoppers manually is laborious and yields inconsistent results, particularly when dealing with species with similar features, such as the brown planthopper (Nilaparvata lugens; BPH), whitebacked planthopper (Sogatella furcifera; WBPH), zigzag leafhopper (Maiestas dorsalis; ZIGZAG), and green leafhopper (Nephotettix malayanus and Nephotettix virescens; GLH). Most of the available automated counting methods are limited to populations of a small density and often do not consider those with a high density, which require more complex solutions due to overlapping objects. Therefore, this research presents a comprehensive assessment of an object detection algorithm specifically developed to precisely detect and quantify planthoppers. It utilises annotated datasets obtained from sticky light traps, comprising 1654 images across four distinct classes of planthoppers and one class of benign insects. The datasets were subjected to data augmentation and utilised to train four convolutional object detection models based on transfer learning. The results indicated that Faster R-CNN VGG 16 outperformed other models, achieving a mean average precision (mAP) score of 97.69% and exhibiting exceptional accuracy in classifying all planthopper categories. The correctness of the model was verified by entomologists, who confirmed a classification and counting accuracy rate of 98.84%. Nevertheless, the model fails to recognise certain samples because of the high density of the population and the significant overlap among them. This research effectively resolved the issue of low- to medium-density samples by achieving very precise and rapid detection and counting.

1. Introduction

Global agriculture is at a crucial point, as it must provide food for a growing population while facing increasing environmental difficulties. In Malaysia, where rice is an essential crop, ensuring the protection of its production is vital. Rice is not only a fundamental food source but also a key element of the cultural and economic foundation of the nation, providing sustenance for millions. Over the last decade, the average national rice production as measured by the rice self-sufficiency ratio (SSR) was approximately 70%, and the country still relies on rice importation to cater to its growing population [1]. As such, the protection of rice crops from the pervasive threat of pests takes on added urgency, as any compromise in production can reverberate across the entire socio-economic spectrum [2]. Several planthopper species are recognised as a major threat to paddy fields, as these highly migratory pests are capable of traversing long distances and have invaded all the major rice-growing areas in the world [3]. The brown planthopper (Nilaparvata lugens; BPH) and whitebacked planthopper (Sogatella furcifera; WBPH) are monophagous sap-sucking insects that can cause leaves to initially turn orange–yellow before becoming brown and dry [4]. Zigzag leafhopper (Maiestas dorsalis; ZIGZAG) feeding damages the leaf tips, which dry up, and causes whole leaves to become orange and curled [5]. Another species is the green leafhopper (Nephotettix malayanus and Nephotettix virescens; GLH), which transmits the viral tungro disease, causing stunted plants, reduced vigour and fewer productive tillers, and withering or complete plant drying [6]. The past few years have witnessed GLH becoming one of the major pests of rice, causing immense yield loss [7]. Conventional strategies to control planthoppers have proven unsuccessful, with planthopper populations acquiring resistance to most pesticides, thereby rendering them ineffective within a few generations [8].
Effective prevention and control of insect pests are crucial for minimising agricultural losses. An essential aspect of achieving this goal is the ability to accurately identify and classify these pests, differentiate between various species, and assess their population levels for targeted pest control measures. The Malaysian Agricultural Research and Development Institute (MARDI) has developed a manual identification and counting technique, performed by highly skilled experts, as part of an early warning system for planthopper outbreaks. To support this method, a solar-powered light trap device was created specifically to capture insects during the nighttime [9]. The device consists of a light bulb contained within a translucent plastic sheet with dimensions approximately similar to an A3 paper. The plastic sheet is covered with adhesive to trap the pests. Flying insects exhibit positive phototaxis, a behaviour that draws them towards sources of light. The adhesive light trap is retrieved the next day, and the experts manually count the trapped insects. Nevertheless, counting a single light trap requires up to six hours. Moreover, the precision and effectiveness of the manual counting method may be influenced by variables such as tiredness and inconsistent evaluation by the inspectors. Technical factors, such as the planthoppers’ condition (whether they are damaged or incomplete), the placement of samples on traps, and the presence of overlapping insects, can result in differing judgements among the inspectors. Additionally, the chosen classes of planthoppers share numerous common characteristics, which makes it challenging to categorise a sample accurately without a complete view from all angles. The counting process is further hampered by overlapping insects. Therefore, implementing this manual approach on a large scale presents issues [10,11,12]. The laborious and expensive nature of this task has also prompted a significant increase in interest in the automated identification of insect pests in recent years [13].
The primary features for object recognition are broadly categorised into manual features and deep features. In a study by Wu et al. [14], manual features such as GIST [15], SIFT [16], and SURF [17] were compared with deep features like AlexNet [18], GoogleNet [19], VGGNet [20], and ResNet [21] for pest identification. The results demonstrated a substantial advantage for deep features, with accuracy surpassing that of manual features by approximately 30%. Presently, the focus in pest detection mainly centres on detecting larger pests [22,23,24]. This is because cameras in the field are relatively sparsely arranged, so individual pests occupy only a few pixels in the overall image.
Currently, two-stage detectors have achieved state-of-the-art performance in small object datasets [25] and demonstrated improved performance in small pest datasets [14]. The core process of a two-stage detector involves the following steps: (1) The input feature map is processed through a Region Proposal Network (RPN), generating several Regions of Interest (RoIs). The RPN aims to retain as much foreground information (pests) as possible while filtering background information irrelevant for subsequent classification. (2) The location and classification of each RoI are determined by several fully connected layers. The use of an RPN can effectively eliminate environmental backgrounds in small pest images, thereby enhancing multi-classification accuracy.
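As a concrete illustration of this two-stage flow, the sketch below builds a Faster R-CNN detector with torchvision, swaps in a box predictor for the five classes used in this study (four planthopper classes plus BENIGN, with index 0 reserved for background), and runs it on a dummy image. It is a minimal orientation example rather than the authors' published implementation, and the class count and input size are assumptions based on the dataset described later.

```python
# Minimal two-stage (RPN + RoI head) sketch using torchvision's Faster R-CNN.
# Not the authors' code; the 6-way output (background + 5 study classes) is assumed.
import torch
import torchvision
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor

NUM_CLASSES = 6  # background + BPH, GLH, WBPH, ZIGZAG, BENIGN

# Backbone -> feature map -> RPN proposals (RoIs) -> RoI-head classification/regression
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")

# Replace the RoI-head box predictor so it outputs the study's classes.
in_features = model.roi_heads.box_predictor.cls_score.in_features
model.roi_heads.box_predictor = FastRCNNPredictor(in_features, NUM_CLASSES)

model.eval()
with torch.no_grad():
    # One dummy RGB tensor (C, H, W) in [0, 1]; the real grid images are 3072 x 2048 pixels.
    detections = model([torch.rand(3, 512, 768)])
print(detections[0]["boxes"].shape, detections[0]["labels"][:5])
```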
A widely utilised architecture for object detection is the Faster R-CNN (Region-based Convolutional Neural Network) [26]. The Faster R-CNN algorithm is renowned for its capacity to effectively and precisely identify objects in images by merging a region proposal network with a convolutional neural network backbone. A previous study examined the use of automated methods to detect diseases in paddy leaves through Faster R-CNN and multiple deep CNN models, including VGG16, ResNet-50, ResNet-101, and YOLOv3 [27]. Another study utilised multiple model detectors, such as Faster R-CNN ResNet 101, Faster R-CNN ResNet 50, Faster R-CNN Inception v.2, R-FCN ResNet 101, and Retinanet ResNet 50, to identify moths captured in pheromone traps [28]. The Faster R-CNN ResNet 101 detector achieved the highest accuracy with a mean average precision (mAP) of 90.25%. A separate study employed the VGG16 and SSD models to identify six distinct pest species in a light trap [29]. Faster R-CNN was also used as part of a novel approach for automated pest counting, showcasing its effectiveness in accurately detecting and counting Matsucoccus thunbergianae pests in pheromone trap images [28]. Additionally, Faster R-CNN has been used to detect other pests, including aphids, whiteflies, thrips [30,31,32,33,34,35], bagworm [36], stored-grain insects [37], pine larvae [38], Spodoptera frugiperda [39], and apple pests [40].
Several studies have investigated the application of advanced image processing techniques and machine learning algorithms for automated planthopper detection. Various techniques, such as image processing, machine learning, and deep learning, have been explored to automate the identification and counting of planthoppers in paddy fields. A number of studies have focused on segmentation methods, such as multi-feature fusion, Markov random field models, and morphological approaches, to accurately delineate planthoppers from complex backgrounds [41,42,43]. Additionally, the development of innovative tools, such as handheld devices for image capture and software systems for automated counting, has been promising in streamlining the planthopper detection process [44,45,46,47]. While extensive studies have been conducted on planthoppers, the majority of them mainly focused on small population samples, where instances of planthopper overlapping are infrequent.
Based on the literature, it is evident that there is a dearth of research on effectively addressing the detection of planthoppers in cases where there is sample overlap. To address this research gap, our study created a custom-built ground device that employed a customised deep-learning image processing algorithm to precisely identify and count planthoppers. The system’s robustness was assessed to determine its efficacy and reliability in identifying planthoppers in images with high population densities, where there is a high occurrence of overlapping insects. An automated detection method was employed as an essential element of this investigation, which entailed identifying planthoppers and employing classification algorithms to classify and quantify them into their respective planthopper classes using Faster R-CNN. Attaining this goal can significantly aid planters by equipping them with an automated planthopper counting system, allowing them to conduct a comprehensive census before implementing control measures. Consequently, this can significantly reduce pesticide consumption by accurately timing interventions to control planthoppers.

2. Materials and Methods

Figure 1 shows the overall process of this research. The procedure is divided into two stages, with the first stage concerning model development. The samples were annotated and labelled within the training dataset. Next, four different models were trained and fine-tuned on the dataset and optimised to achieve the best detection accuracy. The second stage entailed the validation of the model's results by entomologists. The selected model was applied to new samples to detect and quantify the four distinct categories of planthoppers. All results (labelled images and the number of counts) were uploaded into a developed web system. The entomologists then confirmed the outcomes and provided input on cases that were incorrectly classified and samples that had not been identified.

2.1. Dataset Preparation

This study was conducted at two specific locations: MADA Kodiang, Kedah (latitude: 6.389501346945703, longitude: 100.3044763625799) and MARDI Parit, Perak (latitude: 4.435004707006803, longitude: 100.89111414850355). The data were gathered between 2021 and 2023 using a light trap apparatus designed by MARDI (Serdang, Malaysia), which comprised a light bulb, a support pole, and an adhesive trap. The sticky trap was constructed by applying adhesive to one side of a transparent plastic sheet with the same dimensions as an A3 paper sheet. It was then wrapped around the light bulb to lure and trap insects. A transparent box was used to house the light trap. Each side of the box comprised numerous small apertures with a 5 mm diameter that only permitted entry to insects of comparable size to planthoppers. Figure 2 shows an image of the transparent box. The light bulb was illuminated from 7:30 p.m. to 8:30 p.m., coinciding with the period of peak insect activity. Insects were attracted to the light source, moved towards it, and were trapped on the adhesive trap in several orientations. Some insects were damaged while attempting to escape from the trap. On the subsequent day, the adhesive trap was retrieved and transported to the laboratory for the image acquisition process.
Figure 3 depicts an example of the adhesive light trap. The machine vision system described in [48] was used to capture numerous smaller images of the A3 light trap. Each image depicted a fragment of the light trap, which was partitioned into 323 grids with a field of view (FOV) of 25 mm × 15 mm and a resolution of 3072 × 2048 pixels. Ten light traps were used, resulting in a total of 3230 images. However, only 1654 images contained planthoppers and were used for further analysis. The annotation of each image was done by entomologists from MARDI using free graphical image annotation software, LabelImg v1.8.6 [49]. The XML files containing the annotations, including the categories of planthoppers and the coordinates of their bounding boxes, were saved in the PASCAL VOC format. The research utilised four categories of planthoppers: BPH, GLH, WBPH, and ZIGZAG. Additionally, a new category called BENIGN was added, which includes insects that are similar in appearance to planthoppers. The 1654 images contained 852 annotated samples of BPH, 142 of GLH, 2286 of WBPH, 742 of ZIGZAG, and 207 of BENIGN. The images were partitioned into three sets in a proportion of 70:15:15, yielding 1158 images for the training set, 248 for testing, and 248 for validation. Figure 4 exhibits examples of the four primary planthopper categories, Figure 5 contains examples of the BENIGN category that shares a resemblance to the main planthopper categories, and Figure 6 displays instances of annotated images created using the LabelImg software.
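For orientation, a minimal sketch of this preparation step is shown below: it parses LabelImg's PASCAL VOC XML files, keeps only images containing one of the five study classes, and applies the 70:15:15 split. The `annotations` folder name is a placeholder, not the study's actual directory layout.

```python
# Illustrative sketch (assumed file layout): reading LabelImg PASCAL VOC XML
# annotations and splitting the annotated images 70:15:15 into train/test/validation.
import random
import xml.etree.ElementTree as ET
from pathlib import Path

CLASSES = {"BPH", "GLH", "WBPH", "ZIGZAG", "BENIGN"}

def read_voc_annotation(xml_path):
    """Return (filename, [(label, xmin, ymin, xmax, ymax), ...]) from one XML file."""
    root = ET.parse(xml_path).getroot()
    boxes = []
    for obj in root.findall("object"):
        label = obj.findtext("name")
        bb = obj.find("bndbox")
        boxes.append((label,
                      int(float(bb.findtext("xmin"))), int(float(bb.findtext("ymin"))),
                      int(float(bb.findtext("xmax"))), int(float(bb.findtext("ymax")))))
    return root.findtext("filename"), boxes

xml_files = sorted(Path("annotations").glob("*.xml"))  # hypothetical folder name
# Keep only images that actually contain one of the five study classes.
annotated = [f for f in xml_files
             if any(lbl in CLASSES for lbl, *_ in read_voc_annotation(f)[1])]

random.seed(0)
random.shuffle(annotated)
n = len(annotated)
train = annotated[: int(0.70 * n)]
test = annotated[int(0.70 * n): int(0.85 * n)]
val = annotated[int(0.85 * n):]
print(len(train), len(test), len(val))
```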

2.2. Data Training and Evaluation

The selection of convolutional object detection models has a substantial impact on both processing speed and accuracy. This decision depends on several criteria, such as meta-architecture, feature extractor type, and input size [50]. Choosing and optimising a model that is customised to the specific needs of the detection task is therefore crucial. This study utilised four different model configurations: Faster R-CNN Resnet 50, Faster R-CNN Resnet 101, Faster R-CNN Resnet 152, and Faster R-CNN VGG-16. The selection of these models was based on previous research on the classification of planthoppers [48]. These four models exhibit excellent performance in accurately recognising the four different categories of planthoppers. The models were created for planthoppers trapped inside a black box with a consistent light source and a white background. The TensorFlow Object Detection API [51] was used to implement these configurations. To enhance the models' robustness, transfer learning was employed by utilising pre-trained weights from the Pascal-VOC 2007 challenge dataset [52]. Additionally, data augmentation techniques, such as vertical and horizontal flipping, random cropping and padding, and adjustments to contrast and brightness, were applied.
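The augmentations listed above can be expressed with a bounding-box-aware augmentation library. The sketch below uses Albumentations as one possible implementation (the library choice and parameter values are assumptions, since the paper does not specify them), keeping the PASCAL VOC boxes synchronised with the transformed image.

```python
# Illustrative augmentation pipeline: flips, random crop/pad, brightness/contrast.
# Parameter values are assumptions, not the study's reported settings.
import albumentations as A
import numpy as np

transform = A.Compose(
    [
        A.HorizontalFlip(p=0.5),
        A.VerticalFlip(p=0.5),
        A.RandomCrop(height=512, width=768, p=0.3),
        A.PadIfNeeded(min_height=512, min_width=768),
        A.RandomBrightnessContrast(brightness_limit=0.2, contrast_limit=0.2, p=0.5),
    ],
    bbox_params=A.BboxParams(format="pascal_voc", label_fields=["labels"]),
)

image = np.random.randint(0, 255, (2048, 3072, 3), dtype=np.uint8)  # dummy grid image
boxes = [[100, 120, 180, 200]]                                      # one dummy planthopper box
out = transform(image=image, bboxes=boxes, labels=["BPH"])
print(out["image"].shape, out["bboxes"], out["labels"])
```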
The Faster R-CNN model was implemented using the PyTorch v2.1.1 open-source deep learning framework and an NVIDIA RTX 3060 deep learning graphics processing unit (GPU). The model underwent training for 30 epochs, employing a momentum optimiser with a momentum factor of 0.9. The initial learning rate was set to 0.02 and decayed at each epoch step.
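A compact training-loop sketch with these settings is given below (SGD with momentum 0.9, an initial learning rate of 0.02, decayed at each epoch over 30 epochs). The per-epoch decay factor of 0.9 and the single synthetic training sample are illustrative assumptions; in torchvision's Faster R-CNN, calling the model in training mode returns the RPN and RoI-head losses directly.

```python
# Training-loop sketch; the decay factor and dummy data are assumptions for illustration.
import torch
import torchvision
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
in_features = model.roi_heads.box_predictor.cls_score.in_features
model.roi_heads.box_predictor = FastRCNNPredictor(in_features, 6)
model.to(device)

# One synthetic sample so the loop runs end to end; a real loader would yield
# batches of (images, targets) built from the annotated dataset.
dummy_image = torch.rand(3, 512, 768)
dummy_target = {"boxes": torch.tensor([[100.0, 120.0, 180.0, 200.0]]),
                "labels": torch.tensor([1])}
train_loader = [([dummy_image], [dummy_target])]

optimizer = torch.optim.SGD([p for p in model.parameters() if p.requires_grad],
                            lr=0.02, momentum=0.9)
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=1, gamma=0.9)

model.train()
for epoch in range(30):
    for images, targets in train_loader:
        images = [img.to(device) for img in images]
        targets = [{k: v.to(device) for k, v in t.items()} for t in targets]
        loss_dict = model(images, targets)   # RPN + RoI-head losses
        loss = sum(loss_dict.values())
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
    scheduler.step()                          # learning-rate decay at each epoch
    print(f"epoch {epoch + 1:02d}  loss {loss.item():.4f}")
```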
Subsequently, the trained models were evaluated using the average precision (AP) metric, a widely adopted assessment tool in object detection. This metric represents the mean of the precision values corresponding to changes in the recall value within the precision-recall distribution of the model. The matching process between ground-truth boxes and detected boxes determined whether a box localisation was acceptable enough to be counted as a correct detection, contingent on the intersection over union (IoU) threshold. IoU quantifies the extent of overlap between two boxes, calculated by dividing the area of overlap by the area of union. In this study, APs were calculated at an IoU threshold of 0.5. This choice stemmed from the nature of the task, wherein precise box localisation is less crucial than in more general object detection tasks. The precision and recall, integral components in AP computation, are defined in Equations (1) and (2), with TP representing true positives, FP representing false positives, and FN representing false negatives.
The mean average precision (mAP) served as the primary metric for assessing the detection accuracy of the proposed model. It represents the mean value of the APs across all classes. AP itself, a critical metric in object detection, can be computed from precision and recall values, as indicated in Equation (3), where $r_n$ denotes the recall value and $\rho_{\text{interp}}(r_{n+1})$ signifies the maximum precision value within varying recall value intervals.
$\text{Precision}\ (\%) = \dfrac{TP}{TP + FP} \times 100$    (1)

$\text{Recall}\ (\%) = \dfrac{TP}{TP + FN} \times 100$    (2)

$\text{mAP}\ (\%) = \sum_{r=0}^{1} (r_{n+1} - r_n)\, \rho_{\text{interp}}(r_{n+1}) \times 100$    (3)
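The quantities in Equations (1) to (3) can be traced with a small worked example. The sketch below computes IoU at the 0.5 threshold, cumulative precision and recall, and the interpolated AP sum for three made-up ground-truth boxes and three ranked predictions; the greedy matching is simplified and does not forbid two predictions matching one ground truth.

```python
# Worked example of IoU matching (threshold 0.5), precision, recall, and interpolated AP.
import numpy as np

def iou(box_a, box_b):
    """Intersection over union of two [xmin, ymin, xmax, ymax] boxes."""
    xa, ya = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    xb, yb = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0, xb - xa) * max(0, yb - ya)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / float(area_a + area_b - inter)

def average_precision(recalls, precisions):
    """Sum of (r_{n+1} - r_n) * interpolated precision at recall >= r_{n+1}."""
    r = np.concatenate(([0.0], recalls, [1.0]))
    p = np.concatenate(([0.0], precisions, [0.0]))
    for i in range(len(p) - 2, -1, -1):      # interpolation: make precision monotone
        p[i] = max(p[i], p[i + 1])
    idx = np.where(r[1:] != r[:-1])[0]
    return float(np.sum((r[idx + 1] - r[idx]) * p[idx + 1]))

# Three made-up ground-truth boxes and three predictions ranked by confidence.
gt = [[10, 10, 50, 50], [60, 60, 100, 100], [120, 10, 160, 50]]
pred = [[12, 11, 49, 52], [62, 58, 98, 99], [200, 200, 240, 240]]
tp = [1 if max(iou(p, g) for g in gt) >= 0.5 else 0 for p in pred]
fp = [1 - t for t in tp]
cum_tp, cum_fp = np.cumsum(tp), np.cumsum(fp)
recalls = cum_tp / len(gt)
precisions = cum_tp / (cum_tp + cum_fp)
print("precision:", precisions[-1], "recall:", recalls[-1],
      "AP:", average_precision(recalls, precisions))
```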

2.3. Verification of the Model

The model’s verification process involved a thorough examination by five groups of MARDI entomologists, with two entomologists assigned to each group. The purpose was to assess the model’s precision in classifying four types of planthoppers: BPH, GLH, WBPH, and ZIGZAG. A total of twenty adhesive light traps with the size of 297 mm × 420 mm were employed. A sticky light trap was classified as high-density if it contained over 1000 objects. As a result, 13 light traps contained low-density objects and 7 contained high-density objects. A grid-based capture technique was employed for each sticky light trap, generating smaller images with dimensions of 15 mm × 25 mm. This approach produced 323 images per trap, resulting in a total of 6460 images from the 20 light traps. The grid was designed with overlapping regions of 2 mm on each side. Throughout the verification process, the built model was applied to all images and the identified objects were immediately annotated on each image. The entomologists evaluated two specific criteria: mislabelled cases and the detection rate. Mislabelled cases refer to situations where the identified object was incorrectly categorised as a different class or did not fall into any of the four designated classes. Concerning the detection rate, the entomologists quantified the number of planthoppers that were not identified by the model and designated it as the error rate. Subsequently, all images that contained identified planthoppers were submitted to a developed web system for the verification procedure.
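The per-trap bookkeeping behind this verification can be illustrated with a short aggregation sketch: per-image detection labels are collapsed into per-trap class counts, and the entomologists' misclassified and undetected tallies are attached. The data structures are invented for illustration; the toy numbers are chosen to resemble Light Trap 7 in Table 3.

```python
# Illustrative per-trap aggregation of detections and verification figures
# (data structures are assumptions, not the study's actual files).
from collections import Counter

def summarise_trap(per_image_detections, misclassified, undetected):
    """per_image_detections: list of lists of class labels, one list per grid image."""
    counts = Counter(label for image in per_image_detections for label in image)
    total = sum(counts.values())
    return {
        "counts": dict(counts),
        "total_detected": total,
        "misclassified": misclassified,
        "undetected": undetected,
        "correct_detection_rate": (total - misclassified) / total if total else 0.0,
    }

per_image = [
    ["GLH", "GLH", "WBPH"],
    ["BPH", "WBPH", "WBPH", "GLH"],
    ["BPH", "GLH", "GLH", "GLH", "WBPH", "WBPH", "WBPH", "WBPH", "WBPH"],
]
print(summarise_trap(per_image, misclassified=1, undetected=7))
```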
Figure 7 shows the user interface of the developed web system. The detected classes are labelled in the image and the results of the correct number of detected objects for each class are also displayed on the screen.
The web-based system was developed using the LabVIEW platform and the Python language, building on the research findings. The device was equipped with a graphical user interface to capture the image of the sticky light trap within a black enclosure. LabVIEW was utilised to integrate the camera with the motorised platform responsible for camera movement. Python was integrated with LabVIEW to perform image processing on the sticky light trap image using the AI model, and the results were saved in a local database. The database was then synchronised with the cloud database, providing users with a remote view of the data. Figure 8 displays the graphical user interface of the system. The implementation of the AI model in the counting process has significantly accelerated the procedure, reducing the time required to count an A3-size light trap to just two minutes.
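On the Python side, the described flow of writing each trap's counts to a local database before cloud synchronisation could look like the sketch below. The SQLite choice, table name, and column names are assumptions, as the actual system's storage design is not published.

```python
# Illustrative local-database write for one trap's counts (schema is assumed).
import sqlite3
from datetime import datetime, timezone

def save_trap_counts(db_path, trap_id, counts):
    conn = sqlite3.connect(db_path)
    conn.execute(
        """CREATE TABLE IF NOT EXISTS trap_counts (
               trap_id TEXT, captured_at TEXT, class TEXT, count INTEGER)"""
    )
    now = datetime.now(timezone.utc).isoformat()
    conn.executemany(
        "INSERT INTO trap_counts VALUES (?, ?, ?, ?)",
        [(trap_id, now, cls, n) for cls, n in counts.items()],
    )
    conn.commit()
    conn.close()

save_trap_counts("planthopper_counts.db", "LT-07",
                 {"BPH": 2, "GLH": 6, "WBPH": 8, "ZIGZAG": 0})
```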

3. Results and Discussion

Figure 9 illustrates the mAP values for all models. It can be observed that the mAP value for each model exhibits distinct patterns. Training was stopped at 30 epochs, after which no significant changes (i.e., less than 2% of the mAP) were observed. The Resnet50 model initially demonstrated a relatively modest mAP, ranging from 45.78% to 94.46%. However, it displayed a consistent upward trend over the epochs, indicating continuous learning and refinement. Resnet101 showed a similar pattern, with mAP values ranging from 58.81% to 94.77%. It began with a lower mAP but exhibited improvement over time, with occasional fluctuations. On the other hand, Resnet152 started with a relatively higher mAP, ranging from 56.72% to 96.63%. It maintained a generally upward trend, suggesting consistent learning and enhancement with minor fluctuations. In contrast, VGG16 started with a relatively high mAP, ranging from 53.45% to 99.68%. It displayed a consistently positive trend, indicating a strong learning curve. Notably, VGG16 consistently outperformed the Resnet models, showcasing higher mAP values throughout the training process.
Figure 10 illustrates the loss values for all models. Observing the loss trends, it is evident that all models exhibited a consistently significant reduction in loss values as the epochs progressed until 30. Initially, the models started with relatively high loss values, which was typical in the early stages of training when the model was yet to learn the underlying patterns in the data. As the number of epochs increased, there was a notable downward trend, indicative of the models’ improving performance.
VGG16 consistently demonstrated the lowest loss values across the epochs, indicating its superior performance in terms of minimising prediction discrepancies. Resnet152 exhibited the second-lowest loss values, showcasing its efficacy in learning complex patterns. Despite displaying substantial improvements, Resnet101 and Resnet50 tended to have marginally higher loss values compared to VGG16 and Resnet152. In summary, the loss data revealed that all models underwent significant learning and improvement over the training epochs. VGG16 consistently maintained the lowest loss values, followed closely by Resnet152. This information is crucial to understand the learning dynamics of the models and aids in selecting the most appropriate model for a given task.
Table 1 shows the performance of the different object detection models based on their respective inference time and mean average precision (mAP) scores. Among the models assessed, Faster R-CNN VGG 16 exhibited the highest level of precision with a noteworthy mAP score of 97.69%. This indicates an exceptional proficiency in accurately identifying objects within images. However, it is essential to note that this model requires slightly more time for inference, averaging 13.73 milliseconds per image compared to Faster R-CNN Resnet 50 (12.71 milliseconds). Nonetheless, Faster R-CNN Resnet 50 achieved the lowest mAP (92.95%) among the models, signifying a minor compromise in accuracy compared to the VGG 16 model. The Faster R-CNN Resnet 101 and Resnet 152 models exhibited an incremental improvement in accuracy compared to Faster R-CNN Resnet 50, with mAP scores of 94.77% and 96.63%, respectively. However, they required marginally more time for inference, averaging 14.42 and 15.16 milliseconds, respectively.
The advantage of the VGG16 architecture, which contributes to its higher accuracy compared to ResNet, can be attributed to its deep architecture with a homogeneous structure. VGG16 is characterised by its simplicity and uniformity, encompassing a series of convolutional layers followed by max-pooling layers, which allows it to learn a rich hierarchical representation of features in the input images. This deep architecture enables VGG16 to capture intricate patterns and details within the images, leading to superior performance in object detection tasks. Additionally, the VGG16 model is effective in feature extraction and transfer learning, making it proficient in learning discriminative features for object identification [53]. The deep hierarchical representations learned by VGG16 contribute to its ability to achieve high precision in object detection tasks, as evidenced by its exceptional mAP score of 97.69% in the context of Faster R-CNN [54].
Table 2 presents the classification accuracy of the different object detection models across distinct planthopper classes. Notably, the Faster R-CNN VGG 16 model excelled and achieved exceptional correct detection across all classes. It recorded an accuracy of 99.8% for BPH, 100% for GLH, and 99.4% for WBPH while maintaining a highly commendable accuracy of 95.8% for ZIGZAG. Additionally, Faster R-CNN VGG 16 excelled in distinguishing BENIGN with an accuracy of 93.5%. It also demonstrated superior performance for BPH and BENIGN with an average accuracy of 96.65%. In comparison, PENYEK [55] achieved 95% accuracy in classifying the BPH and BENIGN classes. The VGG 16 model’s outstanding performance, particularly in discerning BPH and GLH, underscores its potential for precision agriculture applications.
The Faster R-CNN Resnet 152 model also deserves attention as it demonstrated robust accuracy rates and achieved an impressive 98.7% accuracy in identifying BPH while maintaining a perfect score for GLH. The model’s performance underscores its suitability for applications demanding highly accurate planthopper classification. Despite being slightly less accurate compared to VGG 16 and Resnet 152, the Faster R-CNN Resnet 50 and Resnet 101 models still demonstrated commendable proficiency. They exhibited accuracies ranging from 95.5% to 96.5% across various pest classes, confirming their capability in reliable planthopper identification. These results provide a detailed insight into the specific strengths and aptitudes of each model, allowing for informed decisions when selecting the most appropriate model for precise planthopper classification in agricultural contexts. Figure 11 shows the results of the detected planthoppers using Faster R-CNN VGG16.
Figure 12 illustrates instances of detection errors. The blue dashed line squares signify false negative cases, indicating objects that were not detected by the model. Conversely, the red dashed line squares represent false positive cases where objects were incorrectly identified as belonging to another class. As seen in Figure 12, instances of false negatives predominantly occurred when multiple insects overlapped. There were numerous instances where even experienced entomologists found it challenging to determine the correct class for planthoppers, especially in cases involving BPH and WBPH, due to overlapping insects. The majority of the detection errors occurred in these scenarios. It is worth noting that false positive errors were less common in comparison to false negative errors.
The entomologists’ verification results were split into two categories, namely misclassified cases and undetected cases. Table 3 shows the model prediction and verification results. “Total” represents the combined sum of detections across these four classes for each light trap; “Misclassified” denotes instances where the detected object was inaccurately classified; and “Undetected” reveals the total count of planthoppers that, as per the entomologists’ assessment, should have been identified by the model but were not.
Several significant observations become apparent. For instance, Light Trap 2 recorded the highest total count of detected planthoppers (4062). However, it also exhibited a noteworthy number of mislabelled cases (20) and a substantial count of undetected planthoppers (1252). Despite its high total count of detected planthoppers (3514), Light Trap 18 also displayed a significant number of mislabelled cases (59) and undetected planthoppers (1069). This indicates that the undetected cases mostly occurred in high-density samples (detected samples > 1000), where overlapping cases are very common. Figure 13 illustrates a sample from Light Trap 2 where the most undetected cases occurred. From the image, it is evident that the undetected cases (marked by blue dashed line boxes) occurred in areas with heavy overlapping. Finally, Light Trap 7 had the lowest number of detections (16), followed by Light Trap 6 (55) and Light Trap 9 (80).
The model’s performance in detecting planthoppers was assessed based on various metrics. In total, 24,720 planthoppers were identified by the entomologists during the validation process. The model managed to detect 19,368 planthoppers while missing 5352, resulting in a detection rate of 78.34% for the actual validation process. Among the cases detected by the model, 225 were mislabelled, leading to an impressive correct detection accuracy of 98.84%.
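These rates follow directly from the totals in Table 3, as the short check below shows.

```python
# Arithmetic check of the reported rates using the Table 3 totals.
detected_by_model = 19368
missed = 5352
mislabelled = 225

ground_truth_total = detected_by_model + missed                 # 24,720 planthoppers
detection_rate = detected_by_model / ground_truth_total         # ~0.7835 (quoted as 78.34%)
correct_detection = (detected_by_model - mislabelled) / detected_by_model  # ~0.9884

print(round(detection_rate * 100, 2), round(correct_detection * 100, 2))
```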
The analysis of misclassified and undetected cases revealed compelling insights into the challenges associated with insect detection in high-density scenarios. The results presented in Table 4 demonstrate a notable prevalence of misclassified and undetected cases in high-density images, indicating the complexities and difficulties inherent in accurately identifying and labelling insects in densely infested areas. Specifically, the findings indicate that the incidence of undetected cases is substantially higher in high-density images, with 1672 insects found over 290 images, resulting in a rate of 5.77%. This is significantly higher than the low-density category, where 3680 insects were found over 6170 images, yielding a lower rate of 0.6%. Similarly, the misclassified cases exhibited a higher rate of occurrence in high-density images, with 29 insects (0.1%), compared to 196 insects (0.03%) in low-density images. These findings underscore the heightened challenges and complexities associated with accurately detecting and classifying insects in high-density images.
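A quick check suggests that the quoted rates correspond to the error counts divided by the number of images in each density category (that is, insects per image):

```python
# Reproducing the quoted rates from the Table 4 counts (errors per image).
print(round(1672 / 290, 2))   # 5.77  undetected, high density
print(round(3680 / 6170, 2))  # 0.6   undetected, low density
print(round(29 / 290, 2))     # 0.1   misclassified, high density
print(round(196 / 6170, 2))   # 0.03  misclassified, low density
```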

4. Conclusions

This study investigated the capabilities of object detection algorithms in automatically identifying planthoppers. Annotated datasets were obtained using adhesive light traps, which included a wide variety of planthopper classes, namely BPH, GLH, WBPH, and ZIGZAG, with the later addition of BENIGN. The test results demonstrated that the Faster R-CNN VGG-16 model attained an impressive mAP score of 97.69%, indicating its real-world applicability. Nevertheless, the presence of densely populated samples with significant planthopper overlap posed a difficulty, leading to an overall detection rate of 78.34% while maintaining a small false detection rate of 1.16%. This emphasises the necessity for more research that concentrates on creating sophisticated models to tackle the complexities of high-density infestations. These developments may include integrating advanced feature extraction techniques to utilise the deep hierarchical representations acquired by VGG-16. Investigating new object detection algorithms and refining methodologies for fine-tuning can also enhance performance in scenarios with a large density of objects. Moreover, the potential for improving the generalisation and adaptability of pre-trained models to real-world difficulties can be achieved by utilising transfer learning techniques on various high-density insect datasets. In summary, this study lays a solid groundwork for the creation of reliable automated pest detection systems, which will lead to notable progress in precision agriculture.

Author Contributions

Conceptualization, M.F.I. and S.K.-B.; methodology, M.F.I. and S.K.-B.; software, M.F.I.; formal analysis, M.F.I. and S.K.-B.; investigation, M.F.I. and M.J.; resources, M.A.M.B.; data curation, M.F.I. and M.H.; validation, M.F.I., S.K.-B. and M.A.M.B.; writing—original draft preparation, M.F.I.; writing—review and editing, S.K.-B.; supervision, S.K.-B., M.H., M.J. and F.S.A.S.; project administration, S.K.-B. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Data Availability Statement

The data presented in this study are available on request from the corresponding author. The data are not publicly available due to restrictions.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Nodin, M.N.; Mustafa, Z.; Hussain, S.I. Eco-efficiency assessment of Malaysian rice self-sufficiency approach. Socioecon. Plann. Sci. 2023, 85, 101436. [Google Scholar] [CrossRef]
  2. Nodin, M.N.; Mustafa, Z.; Hussain, S.I. Assessing rice production efficiency for food security policy planning in Malaysia: A non-parametric bootstrap data envelopment analysis approach. Food Policy 2022, 107, 102208. [Google Scholar] [CrossRef]
  3. Gupta, A.; Sinha, D.K.; Nair, S. Shifts in Pseudomonas species diversity influence adaptation of brown planthopper to changing climates and geographical locations. iScience 2022, 25, 104550. [Google Scholar] [CrossRef] [PubMed]
  4. IRRI. “Planthopper-IRRI Rice Knowledge Bank”, IRRI Rice Knowledge Bank. Available online: http://www.knowledgebank.irri.org/training/fact-sheets/pest-management/insects/item/planthopper (accessed on 26 September 2023).
  5. CABI International. Recilia dorsalis (Zigzag leafhopper); PlantwisePlus Knowledge Bank: Wallingford, UK, 2022. [Google Scholar] [CrossRef]
  6. Green Leafhopper. Available online: http://www.knowledgebank.irri.org/training/fact-sheets/pest-management/insects/item/green-leafhopper (accessed on 26 September 2023).
  7. Xiao, L.; Huang, L.-L.; He, H.-M.; Xue, F.-S.; Tang, J.-J. Life history responses of the small brown planthopper Laodelphax striatellus to temperature change. J. Therm. Biol. 2023, 115, 103626. [Google Scholar] [CrossRef]
  8. Horgan, F.G. Slowing virulence adaptation in Asian rice planthoppers through migration-based deployment of resistance genes. Curr. Opin. Insect Sci. 2023, 55, 101004. [Google Scholar] [CrossRef]
  9. Bookeri, M.A.M.; Masaruddin, M.F.; Shah, N.A.A.; Noh, A.M.; Samsuri, N.S.; Abu Bakar, B.H.; Khadzir, M.K. Evaluation of Light Trap System in Monitoring of Rice Pests, Brown Planthopper (Nilaparvata lugens). Adv. Agric. Food Res. J. 2021, 3, a0000187. [Google Scholar] [CrossRef]
  10. Georgantopoulos, P.S.; Papadimitriou, D.; Constantinopoulos, C.; Manios, T.; Daliakopoulos, I.N.; Kosmopoulos, D. A Multispectral Dataset for the Detection of Tuta absoluta and Leveillula taurica in Tomato Plants. Smart Agric. Technol. 2023, 4, 100146. [Google Scholar] [CrossRef]
  11. Yasmin, R.; Das, A.; Rozario, L.J.; Islam, M.E. Butterfly detection and classification techniques: A review. Intell. Syst. Appl. 2023, 18, 200214. [Google Scholar] [CrossRef]
  12. Ding, W.; Taylor, G. Automatic moth detection from trap images for pest management. Comput. Electron. Agric. 2016, 123, 17–28. [Google Scholar] [CrossRef]
  13. Li, W.; Zheng, T.; Yang, Z.; Li, M.; Sun, C.; Yang, X. Classification and detection of insects from field images using deep learning for smart pest management: A systematic review. Ecol. Inform. 2021, 66, 101460. [Google Scholar] [CrossRef]
  14. Wu, X.; Zhan, C.; Lai, Y.-K.; Cheng, M.-M.; Yang, J. IP102: A Large-Scale Benchmark Dataset for Insect Pest Recognition. In Proceedings of the 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Long Beach, CA, USA, 15–20 June 2019; pp. 8779–8788. [Google Scholar] [CrossRef]
  15. Oliva, A.; Torralba, A. Modeling the Shape of the Scene: A Holistic Representation of the Spatial Envelope. Int. J. Comput. Vis. 2001, 42, 145–175. [Google Scholar] [CrossRef]
  16. Lindeberg, T. Scale Invariant Feature Transform. Comput. Sci. Comput. Vis. Robot. (Auton. Syst.) 2012, 7, 10491. [Google Scholar] [CrossRef]
  17. Bay, H.; Ess, A.; Tuytelaars, T.; Van Gool, L. Speeded-Up Robust Features (SURF). Comput. Vis. Image Underst. 2008, 110, 346–359. [Google Scholar] [CrossRef]
  18. Krizhevsky, A.; Sutskever, I.; Hinton, G.E. ImageNet Classification With Deep Convolutional Neural Networks. Commun. Acm 2017, 60, 84–90. [Google Scholar] [CrossRef]
  19. Szegedy, C.; Liu, W.; Jia, Y.; Sermanet, P.; Reed, S.; Anguelov, D.; Erhan, D.; Vanhoucke, V.; Rabinovich, A. Going Deeper with Convolutions 2014. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Boston, MA, USA, 7–12 June 2015; pp. 1–9. [Google Scholar] [CrossRef]
  20. Simonyan, K.; Zisserman, A. Very deep convolutional networks for large-scale image recognition. In Proceedings of the 3rd International Conference on Learning Representations, ICLR 2015 Conference Track Proceedings, San Diego, CA, USA, 7–9 May 2015; pp. 1–14. [Google Scholar]
  21. He, K.; Zhang, X.; Ren, S.; Sun, J. Deep Residual Learning for Image Recognition. In Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA, 27–30 June 2016; pp. 770–778. [Google Scholar]
  22. Huang, M.L.; Chuang, T.C.; Liao, Y.C. Application of transfer learning and image augmentation technology for tomato pest identification. Sustain. Comput. Inform. Syst. 2022, 33, 100646. [Google Scholar] [CrossRef]
  23. Yu, H.; Liu, J.; Chen, C.; Heidari, A.A.; Zhang, Q.; Chen, H. Optimized deep residual network system for diagnosing tomato pests. Comput. Electron. Agric. 2022, 195, 106805. [Google Scholar] [CrossRef]
  24. Wei, D.; Chen, J.; Luo, T.; Long, T.; Wang, H. Classification of crop pests based on multi-scale feature fusion. Comput. Electron. Agric. 2022, 194, 106736. [Google Scholar] [CrossRef]
  25. Liu, Y.; Sun, P.; Wergeles, N.; Shang, Y. A survey and performance evaluation of deep learning methods for small object detection. Expert. Syst. Appl. 2021, 172, 114602. [Google Scholar] [CrossRef]
  26. Ren, S.; He, K.; Girshick, R.; Sun, J. Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks. arXiv 2015, arXiv:1506.01497. [Google Scholar] [CrossRef]
  27. Islam, M.A.; Shuvo, M.N.R.; Shamsojjaman, M.; Hasan, S.; Hossain, M.S.; Khatun, T. An Automated Convolutional Neural Network Based Approach for Paddy Leaf Disease Detection. Int. J. Adv. Comput. Sci. Appl. 2021, 12. [Google Scholar] [CrossRef]
  28. Hong, S.J.; Nam, I.; Kim, S.Y.; Kim, E.; Lee, C.H.; Ahn, S.; Park, I.K.; Kim, G. Automatic pest counting from pheromone trap images using deep learning object detectors for matsucoccus thunbergianae monitoring. Insects 2021, 12, 342. [Google Scholar] [CrossRef]
  29. Nam, N.T.; Hung, P.D. Pest detection on traps using deep convolutional neural networks. In Proceedings of the ACM International Conference Proceeding Series, Tokyo, Japan, 25–28 November 2018; Association for Computing Machinery: New York, NY, USA; pp. 33–38. [Google Scholar]
  30. Guo, Q.; Wang, C.; Xiao, D.; Huang, Q. An enhanced insect pest counter based on saliency map and improved non-maximum suppression. Insects 2021, 12, 705. [Google Scholar] [CrossRef]
  31. Ye, Y.; Huang, Q.; Rong, Y.; Yu, X.; Liang, W.; Chen, Y.; Xiong, S. Field detection of small pests through stochastic gradient descent with genetic algorithm. Comput. Electron. Agric. 2023, 206, 107694. [Google Scholar] [CrossRef]
  32. Patel, D.; Bhatt, N. Improved accuracy of pest detection using augmentation approach with Faster R-CNN. IOP Conf. Ser. Mater. Sci. Eng. 2021, 1042, 012020. [Google Scholar] [CrossRef]
  33. Li, W.; Wang, D.; Li, M.; Gao, Y.; Wu, J.; Yang, X. Field detection of tiny pests from sticky trap images using deep learning in agricultural greenhouse. Comput. Electron. Agric. 2021, 183, 106048. [Google Scholar] [CrossRef]
  34. Xia, C.; Chon, T.S.; Ren, Z.; Lee, J.M. Automatic identification and counting of small size pests in greenhouse conditions with low computational cost. Ecol. Inform. 2015, 29, 139–146. [Google Scholar] [CrossRef]
  35. Zhao, N.; Zhou, L.; Huang, T.; Taha, M.F.; He, Y.; Qiu, Z. Development of an automatic pest monitoring system using a deep learning model of DPeNet. Meas. J. Int. Meas. Confed. 2022, 203, 111970. [Google Scholar] [CrossRef]
  36. Ahmad, M.N.; Shariff, A.R.M.; Aris, I.; Halin, I.A. A four stage image processing algorithm for detecting and counting of bagworm, metisa plana walker (Lepidoptera: Psychidae). Agric. Switz. 2021, 11, 1265. [Google Scholar] [CrossRef]
  37. Shen, Y.; Zhou, H.; Li, J.; Jian, F.; Jayas, D.S. Detection of stored-grain insects using deep learning. Comput. Electron. Agric. 2018, 145, 319–325. [Google Scholar] [CrossRef]
  38. Lee, S.H.; Gao, G. A Study on Pine Larva Detection System Using Swin Transformer and Cascade R-CNN Hybrid Model. Appl. Sci. Switz. 2023, 13, 1330. [Google Scholar] [CrossRef]
  39. Du, L.; Sun, Y.; Chen, S.; Feng, J.; Zhao, Y.; Yan, Z.; Zhang, X.; Bian, Y. A Novel Object Detection Model Based on Faster R-CNN for Spodoptera frugiperda According to Feeding Trace of Corn Leaves. Agric. Switz. 2022, 12, 248. [Google Scholar] [CrossRef]
  40. Wang, T.; Zhao, L.; Li, B.; Liu, X.; Xu, W.; Li, J. Recognition and counting of typical apple pests based on deep learning. Ecol. Inform. 2022, 68, 101556. [Google Scholar] [CrossRef]
  41. Yue, H.; Cai, K.; Lin, H.; Man, H.; Zeng, Z. A markov random field model for image segmentation of rice planthopper in rice fields. J. Eng. Sci. Technol. Rev. 2016, 9, 31–38. [Google Scholar] [CrossRef]
  42. Zhu, S.; Zhang, J.; Lin, X.; Liu, D. Classification of rice planthoppers based on shape descriptors. J. Eng. 2019, 2019, 8378–8382. [Google Scholar] [CrossRef]
  43. Hongwei, Y.; Ken, C.; Hanhui, L.; Zhihui, C.; Zhaofeng, Z. Segmentation of rice planthoppers in rice fields based on an improved level-set approach. INMATEH-Agric. Eng. 2016, 48, 67–74. [Google Scholar]
  44. Ayob, M.Z.; Rahman, A.H.A.; Kadir, M.K.A.; Hashim, N.H.I.; Sahlan, N.S.; Hassim, M.D. Prototype development of brown planthopper (BPH) detector and data logger. In Proceedings of the 2014 4th International Conference on Engineering Technology and Technopreneuship (ICE2T), Kuala Lumpur, Malaysia, 27–29 August 2014; pp. 252–255. [Google Scholar] [CrossRef]
  45. Yao, Q.; Chen, G.-T.; Wang, Z.; Zhang, C.; Yang, B.-J.; Tang, J. Automated detection and identification of white-backed planthoppers in paddy fields using image processing. J. Integr. Agric. 2017, 16, 1547–1557. [Google Scholar] [CrossRef]
  46. Watcharabutsarakham, S.; Methasate, I.; Watcharapinchai, N.; Sinthupinyo, W.; Sriratanasak, W. An approach for density monitoring of brown planthopper population in simulated paddy fields. In Proceedings of the 2016 13th International Joint Conference on Computer Science and Software Engineering (JCSSE), Khon Kaen, Thailand, 13–15 July 2016; pp. 22–25. [Google Scholar] [CrossRef]
  47. Yao, Q.; Xian, D.-x.; Liu, Q.-j.; Yang, B.-j.; Diao, G.-q.; Tang, J. Automated counting of rice planthoppers in paddy fields based on image processing. J. Integr. Agric. 2014, 13, 1736–1745. [Google Scholar] [CrossRef]
  48. Ibrahim, M.F.; Khairunniza-Bejo, S.; Hanafi, M.; Jahari, M.; Ahmad Saad, F.S.; Mhd Bookeri, M.A. Deep CNN-Based Planthopper Classification Using a High-Density Image Dataset. Agric. Switz. 2023, 13, 1155. [Google Scholar] [CrossRef]
  49. HumanSignal, ‘labelImg’, GitHub repository. 2024. Available online: https://github.com/HumanSignal/labelImg/tree/master (accessed on 1 April 2024).
  50. Huang, J.; Rathod, V.; Sun, C.; Zhu, M.; Korattikara, A.; Fathi, A.; Fischer, I.; Wojna, Z.; Song, Y.; Guadarrama, S.; et al. Speed/accuracy trade-offs for modern convolutional object detectors. In Proceedings of the 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Honolulu, HI, USA, 21–26 July 2017. [Google Scholar]
  51. Tensorflow “Object Detection” GitHub. Available online: https://github.com/tensorflow/models/tree/master/research/object_detection (accessed on 28 May 2024).
  52. Everingham, M.; Eslami, S.M.A.; Van Gool, L.; Williams, C.K.I.; Winn, J.; Zisserman, A. The Pascal Visual Object Classes Challenge: A Retrospective. Int. J. Comput. Vis. 2015, 111, 98–136. [Google Scholar] [CrossRef]
  53. Gupta, A.; Gupta, D.; Gupta, S. Identification of Alzheimer’s disease from MRI image employing a probabilistic deep learning-based approach and the VGG16. 2023; preprint. [Google Scholar]
  54. Badrinarayanan, V.; Kendall, A.; Cipolla, R. SegNet: A Deep Convolutional Encoder-Decoder Architecture for Image Segmentation. IEEE Trans. Pattern Anal. Mach. Intell. 2017, 39, 2481–2495. [Google Scholar] [CrossRef] [PubMed]
  55. Nazri, A.; Mazlan, N.; Muharam, F. PENYEK: Automated brown planthopper detection from imperfect sticky pad images using deep convolutional neural network. PLoS ONE 2018, 13, e0208501. [Google Scholar] [CrossRef] [PubMed]
Figure 1. The process involved in this research.
Figure 2. The transparent box used to house the light trap. Each side of the box has hundreds of small holes.
Figure 3. Sample of a sticky light trap image placed on top of a white paper.
Figure 4. Sample of four major classes of planthoppers: (a) BPH; (b) GLH; (c) WBPH; and (d) ZIGZAG.
Figure 5. Sample of images in the BENIGN class, exhibiting similarities with major planthopper classes.
Figure 6. Examples of annotated images using the LabelImg software.
Figure 7. Graphical user interface of the developed verification web system.
Figure 8. The user interface of the system used to capture the image and execute the counting process.
Figure 9. mAP values for each epoch.
Figure 10. Loss for each epoch.
Figure 11. Results of detected planthoppers using Faster-RCNN with VGG16.
Figure 12. Detection errors. Red dashed line squares indicate false positive cases, while blue dashed line squares indicate false negative cases.
Figure 13. Sample from image no. 134 of Light Trap 2, containing 15 undetected samples and 0 misclassified cases. Undetected samples are labelled by blue dashed line boxes.
Table 1. Test results of planthoppers detection.
Model | Inference Time (ms) | mAP (%)
Faster R-CNN Resnet 50 | 12.71 | 92.95
Faster R-CNN Resnet 101 | 14.42 | 94.77
Faster R-CNN Resnet 152 | 15.16 | 96.63
Faster R-CNN VGG 16 | 13.73 | 97.69
Table 2. Accuracy by class.
Model | Accuracy by Class (%): BPH | GLH | WBPH | ZIGZAG | BENIGN
Faster R-CNN Resnet 50 | 96.8 | 99.9 | 95.5 | 94.3 | 74.7
Faster R-CNN Resnet 101 | 96.5 | 100 | 96.3 | 95.3 | 85.8
Faster R-CNN Resnet 152 | 98.7 | 100 | 97.9 | 95.8 | 90.8
Faster R-CNN VGG 16 | 99.8 | 100 | 99.4 | 95.8 | 93.5
Table 3. Results obtained from the detection model and verification process done by the MARDI entomologists.
Light Trap | Model Detection Results: BPH | GLH | WBPH | ZIGZAG | Total | Verification Results: Misclassified | Undetected
1 | 295 | 258 | 265 | 252 | 1070 | 4 | 270
2 | 559 | 1784 | 371 | 1348 | 4062 | 20 | 1252
3 | 173 | 468 | 664 | 687 | 1992 | 11 | 859
4 | 820 | 106 | 419 | 785 | 2130 | 6 | 504
5 | 8 | 70 | 23 | 4 | 105 | 9 | 37
6 | 7 | 29 | 16 | 3 | 55 | 4 | 16
7 | 2 | 6 | 8 | 0 | 16 | 1 | 7
8 | 5 | 25 | 73 | 0 | 103 | 6 | 31
9 | 2 | 17 | 45 | 16 | 80 | 0 | 11
10 | 7 | 2 | 121 | 0 | 130 | 7 | 27
11 | 98 | 3 | 230 | 160 | 491 | 5 | 101
12 | 32 | 2 | 10 | 8 | 52 | 11 | 1
13 | 80 | 8 | 103 | 203 | 394 | 31 | 210
14 | 33 | 62 | 30 | 765 | 890 | 0 | 114
15 | 60 | 5 | 45 | 190 | 300 | 2 | 101
16 | 164 | 96 | 152 | 846 | 1258 | 20 | 194
17 | 46 | 13 | 54 | 349 | 462 | 21 | 50
18 | 173 | 430 | 99 | 2812 | 3514 | 59 | 1069
19 | 38 | 29 | 22 | 207 | 296 | 6 | 33
20 | 294 | 259 | 118 | 1297 | 1968 | 2 | 465
Total | 2896 | 3672 | 2868 | 9932 | 19,368 | 225 | 5352
Table 4. Results based on the density level of the image.
Density Level | Number of Images | Detected Planthopper Classes: BPH | GLH | WBPH | ZIGZAG | Verification Results: Misclassified | Undetected
High | 290 | 525 | 1162 | 432 | 2814 | 29 (0.1%) | 1672 (5.77%)
Low | 6170 | 2371 | 2510 | 2436 | 7118 | 196 (0.03%) | 3680 (0.6%)
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
