Proceeding Paper

Advancements in Plant Pests Detection: Leveraging Convolutional Neural Networks for Smart Agriculture †

by
Gopalakrishnan Nagaraj
1,
Dakshinamurthy Sungeetha
2,
Mohit Tiwari
3,
Vandana Ahuja
4,*,
Ajit Kumar Varma
5 and
Pankaj Agarwal
6
1
Department of Mechanical Engineering, Sethu Institute of Technology, Virudhunagar 626115, India
2
Department Electronics and Communication Engineering, Saveetha School of Engineering, Saveetha Institute of Medical and Technical Science, Chennai 602105, India
3
Department of Computer Science and Engineering, Bharati Vidyapeeth’s College of Engineering, Delhi 110063, India
4
Department of Computer Science and Engineering, Swami Vivekanand Institute of Engineering and Technology, Ramnagar, Banur 140601, India
5
Department of Pharmaceutical Sciences, Rama University, Mandhana, Kanpur 209217, India
6
School of Engineering and Technology, K.R. Mangalam University, Gurugram 122103, India
*
Author to whom correspondence should be addressed.
Presented at the International Conference on Recent Advances on Science and Engineering, Dubai, United Arab Emirates, 4–5 October 2023.
Eng. Proc. 2023, 59(1), 201; https://doi.org/10.3390/engproc2023059201
Published: 22 January 2024
(This article belongs to the Proceedings of Eng. Proc., 2023, RAiSE-2023)

Abstract

Insects and illnesses that affect plants can have a major negative effect on both their quality and their yield. Digital image processing may be applied to diagnose plant illnesses and detect plant pests. Recent developments in digital image processing have shown that deep learning eclipses more conventional methods by a wide margin. Researchers are therefore concentrating on how deep learning may be applied to the problem of identifying plant diseases and pests. In this paper, the difficulties that arise when diagnosing plant pathogens and pests are outlined, and the diagnostic approaches currently in use are evaluated and contrasted. The article summarizes recent research on deep learning applied to the detection of plant diseases and pests from three perspectives, each based on a different network design. We developed a convolutional neural network (CNN)-based framework for identifying pest-borne diseases in tomato leaves using the Plant Village Dataset and the MobileNetV2 architecture. We compared the performance of the proposed MobileNetV2 model with existing methods and demonstrated its effectiveness in pest detection. The model achieved an accuracy of 93% and outperformed fully trained models such as GoogleNet and VGG16 in terms of speed.

1. Introduction

Because of the extremely high rate of infant mortality, the human population of the world did not begin to increase rapidly until around the year 1700. The world’s population reached its first billion in the early 1800s, its second billion in 1928, and its third billion in 1960. By 2017, an estimated seven billion people were living around the globe. The current rapid increase in population is largely attributable to the progress that has been made in medical treatment. The United Nations projects that the world’s population will range from 9.7 billion in 2050 to 10.9 billion in 2100 [1]. Farming has seen a considerable uptick because of the dramatic rise in the world’s population over the last few decades. Meeting the requirements of the growing population calls for a 2.4% annual growth in the yields of major crops; however, we are only seeing a 1.3% annual gain. Even if this demand is met, the ecosystem will endure a reduction in biodiversity as well as an increase in greenhouse gas emissions. In addition, agriculture is perpetually vulnerable to problems such as the invasion of unwanted insects and illnesses. In recent years, widespread outbreaks of crop diseases and insect pests have become increasingly common because of the continuing deterioration of the ecological environment [1]. Because of the widespread presence of crop diseases and insect pests, agricultural production suffers not only in quantity but also in quality, which ultimately results in economic losses. It is essential to conduct research on the treatment and prevention of agricultural illnesses and insect pests to reduce the economic losses that result from failed crop production.
The Food and Agriculture Organization estimates that plant diseases and insect attacks destroy 20 to 40 percent of total agricultural output, costing the world economy between roughly USD 70 billion and USD 220 billion annually. Pests are among the most significant threats to the safety of our food supply, and their presence has a ripple effect that extends to the economy, society, and the environment [2]. Their spread also raises the chance that pests will develop resistance to pesticides, which makes it more difficult to rely on the traditional method of diagnosing plant illnesses: expert observation and detection of plant diseases with the naked eye.
This article discusses the challenges associated with diagnosing plant diseases and insect pests, as well as the various approaches that are currently in use. The manuscript proposes a convolutional neural network (CNN)-based approach for automated pest detection in plants. It introduces a curated dataset of labeled images depicting healthy plants and plants infected with pests [2]. The CNN architecture, augmented with transfer learning and data augmentation techniques, is designed to achieve robust and accurate pest identification across various plant species and environmental conditions. Through comparative analysis, the manuscript demonstrates the superiority of the proposed method over existing approaches. The research contributes to the advancement of precision agriculture by enabling timely and efficient pest monitoring in crops.

2. Literature Review

Different diagnostic approaches for plant pathogens and pests offer varying advantages and limitations. Visual inspection is cost-effective but subjective and labor-intensive. Molecular techniques like PCR and ELISA are highly specific but require expensive equipment and expertise. Remote sensing provides large-scale monitoring capabilities, yet its resolution may not be suitable for detecting specific pathogens. Sensor-based technologies offer continuous monitoring, but setup and maintenance costs can be high. Machine learning and CNNs present automated solutions with potential for high accuracy, but they require extensive and diverse datasets for training and might struggle with rare or unseen pathogens. The effectiveness of each approach depends on factors such as scale, budget, and the specific goals of the diagnosis. Combining these approaches can lead to a more robust and comprehensive pest and disease diagnosis in plants. Aruvansh et al. [3] demonstrated that the random forest machine learning approach provides the most accurate yield forecasts. Their simple recurrent neural network, a sequential model, predicts rainfall more accurately than an LSTM predicts temperature. The article considers factors such as rainfall, temperature, season, and region when making predictions, and the findings indicate that random forest is the most effective classifier when all of the variables are considered. Preetha Rajan et al. [4] proposed another original method for locating and identifying unwanted organisms. They explored using a color characteristic to differentiate whiteflies from the other flies on the leaves in a dataset, because the background color and the color of the object of interest differ greatly; support vector machines are a useful tool for categorizing such data. Wen et al. [5] carried out a study on a novel method to automate the classification and identification of insects in an image-based orchard setting. The purpose of this project was to develop a more dependable automated method that can handle cropped photographs of insects appropriately while also considering other factors. In total, eight different species of insects were considered, and three distinct feature models were used: local, global, and hybrid. The performance of the hybrid model was superior to that of both the local and the global models [6]. Occlusion, rotation, and lighting all have a substantial influence on global features, yet global features can be described in a style that is both obvious and succinct. Local characteristics, on the other hand, do away with these limits; however, it is quite challenging to analyze them by combining a model with the classification of pest images obtained in the field.
According to Hassan et al. [7], two different kinds of insects were used to demonstrate the proof of concept. Pests can be classified by exploiting traits such as color and shape, since each sample class has a distinct body form and color. The feature-extraction step of this technology was employed to initiate the process of pest detection. Another novel method, based on multifractal analysis, was presented by Yan et al. [8] for identifying whiteflies, which are pests of a smaller size; they gave particular attention to the locations depicted in in situ images of the leaf surfaces. Graham Taylor and his colleagues [9] presented a monitoring method that makes use of field traps. It is essential to carry out ongoing monitoring of insect pest population levels when utilizing pheromone-based methods of pest control. Based on the findings of this study, it was suggested that a deep learning technique and an automated identification pipeline be used to detect and count pests inside field traps. Using a commercial codling moth dataset, this method demonstrates considerable quantitative and qualitative performance.
To illustrate the methodology, Ebrahimi et al. [10] used the SVM classification technique. To prevent the introduction of unwanted pests into greenhouses, automated insect-detection technologies are required. A method for identifying pests on grapevines and removing them was introduced. The support vector machine (SVM) and the scale-invariant feature transform (SIFT) technique were utilized to recognize and classify local characteristics, which were then used for categorization. The model achieved a precision of 0.86 and an accuracy of 90.8% and was not sensitive to changes in the scale factor. However, despite the exceptional effectiveness of the SIFT algorithm in locating interest points, it needs to perform 23 comparisons to decide whether a point qualifies. In the publication by Ashok and his colleagues [11], a full explanation of the procedure was provided, which they determined to be 98% accurate; moreover, the implementation of hybrid algorithms, fuzzy logic, and artificial neural networks is a possibility. A computer vision system [12] that integrates LoRa (long range) communication and deep learning was shown to rapidly diagnose illnesses in grape leaves from low-resolution pictures. The authors used two different technologies, LoRa and deep learning, to accomplish disease diagnosis and picture transfer. To accomplish this, the system incorporates both real-world and simulated testing, a wide variety of LoRa parameters, and fine-tuning of the convolutional neural network (CNN) model. The analysis determined that the proposed framework can successfully adhere to the protocol limitations while delivering images over LoRa, and the improved method can accurately identify diseases that affect grape leaves.
Barburiceanu et al. [13] noted that the deep learning models AlexNet, VggNet, and ResNet are applied in practical applications requiring texture categorization, such as plant disease diagnosis and evaluations of leaf diseases during the phases of corn growth and production, because they are already trained on ImageNet object categories. Farmers face a significant issue: the exact diagnosis of maize crop illnesses. K-means clustering and deep learning are two potential solutions proposed for this issue by Yu et al. [14]. Their paper suggests a method for consistently identifying the three most frequent maize leaf diseases (rust, leaf spot, and gray spot) by combining K-means clustering with a revised deep learning model. In their experiments, the effectiveness of the VGG-16, ResNet-18, and Inception v3 models, with K values of 2, 4, 8, 16, 32, and 64, in detecting agricultural diseases is evaluated. Roopashree et al. [15] proposed the use of Xception features in a vision-based system for the identification of medicinal plants. Computer vision is combined with deep learning neural network techniques to provide an automated system for identifying plants used in traditional medicine. The limited accessibility of datasets for medicinal plants is the source of the problem, so the work showcases a newly compiled collection of photographs of medicinal leaves known as DeepHerb, which includes 2515 leaf images from 40 distinct types of Indian herbs. To demonstrate the usefulness of the dataset, different pre-trained deep convolutional neural network architectures, such as VGG16, VGG19, InceptionV3, and Xception, are compared. Pham et al. [16] surveyed established DL techniques used in much of the research on identifying leaf diseases.
Most of them built their models from low-resolution photographs using convolutional neural networks (CNNs). An artificial neural network (ANN) technique is used to find the disease spots on the plant leaves, which are so small that they are only visible in higher-resolution images. All the contaminated blobs are singled out and removed after the entire dataset has been pre-processed using a contrast-enhancement method. Kibriya et al. [17] advise employing two convolutional neural network (CNN)-based models, GoogLeNet and VGG16, for diseases that can affect tomato plants. The suggested research employs deep learning to identify the most effective strategy for diagnosing tomato leaf disease. On the 10,735 leaf photos included in the Plant Village dataset, VGG16 achieved an accuracy of 98%, whereas GoogLeNet achieved an accuracy of 99.23%. The recommended method could be adopted for tomato crops, allowing the prevention of output losses and the early diagnosis of illness.

3. Methodology

The framework proposes a sequence of steps leading to a state of science as a knowledge base for modelling challenges. Two models based on prior data and experience are then used, and finally the results and outputs of the models are compared. We demonstrate the utility of the proposed framework by developing two models for predicting coffee leaf rust: one based on expert knowledge and represented by a hierarchical multi-criteria decision structure (56.03% accuracy), and another based on data and built using a gradient boosting algorithm (XGBoost) (7.19% mean absolute error). In a supplementary study to our case study, we showed how knowledge-based modelling works for coffee leaves by investigating how components of a data-based model can improve a knowledge-based model and boost its accuracy by 7.07%. An alternative to data-based modelling is described for cases in which the given dataset has only around 60 instances. In general, data-based models perform better, but the difficulty of replicating them depends on the variety of datasets used. Knowledge-based models can be simpler and do not always require a specific physical location, but they also benefit from human oversight. The problem to be solved involves developing a convolutional neural network-based framework for identifying pest-borne diseases. To detect diseases in tomato leaves, this article introduces the Plant Village Dataset and a lightweight CNN method based on the MobileNetV2 architecture. Weighted models such as data-based and knowledge-based models are time- and resource-consuming to use, but their results are more accurate. The samples in the Plant Village and Pests databases are restricted since they were gathered in a lab.

3.1. Network of Neural Convolutions

Often employed for image classification and recognition, convolutional neural networks (CNNs) are a form of deep learning architecture. Convolutional layers, pooling layers, and fully connected layers make up the various levels of a CNN. Convolutional layers filter the input image to extract features, pooling layers downsample the feature maps to decrease computation, and the fully connected layers make the final prediction. The network discovers the best filters via backpropagation and gradient descent. Figure 1 shows the configuration of a general CNN architecture.
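The convolution and pooling operations described above can be sketched in a few lines of numpy. This is a minimal illustration of the mechanics only (a hand-written loop over one channel with a toy edge filter), not the implementation used in the paper, which relies on a pre-trained MobileNetV2.

```python
import numpy as np

def conv2d_valid(image, kernel):
    """Slide a kernel over the image (no padding, stride 1) and sum products."""
    h, w = image.shape
    kh, kw = kernel.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

def max_pool2d(feature_map, size=2):
    """Downsample by taking the max of each size x size window."""
    h, w = feature_map.shape
    h2, w2 = h // size, w // size
    trimmed = feature_map[:h2 * size, :w2 * size]
    return trimmed.reshape(h2, size, w2, size).max(axis=(1, 3))

image = np.arange(36, dtype=float).reshape(6, 6)   # toy 6x6 grayscale "leaf"
edge_kernel = np.array([[1.0, -1.0], [1.0, -1.0]]) # crude vertical-edge filter
features = conv2d_valid(image, edge_kernel)        # (5, 5) feature map
pooled = max_pool2d(features)                      # (2, 2) after 2x2 pooling
```

In a real CNN the kernel values are not hand-chosen; they are the weights learned via backpropagation, and many such filters run in parallel over all channels.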

3.2. Dataset Used

The “Pests” subset of the Plant Village Dataset was obtained from Kaggle. The full dataset contains a huge number of image samples; however, due to limited computing resources, we randomly selected only 2700 images. The collection contains images of leaves from various plants, classified into 9 categories, for a total of 2700 photos. There are 450 test images included, ranging from full color to grayscale to segmented. Figure 2 depicts the class distribution of this sample, which is divided into 9 categories of equal proportion.
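A balanced random sample like the one described (2700 images across 9 equal categories) could be drawn as in the sketch below. The category names and file names are placeholders, not the actual Plant Village labels.

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical full image pool: 9 balanced categories of 1000 files each
# (names are placeholders, not the real Plant Village class labels).
pool = {f"category_{i}": [f"img_{i}_{j}.jpg" for j in range(1000)]
        for i in range(9)}

# Randomly draw 300 images per category -> 2700 images in equal proportion.
sample = {cls: list(rng.choice(files, size=300, replace=False))
          for cls, files in pool.items()}
total = sum(len(v) for v in sample.values())   # 2700
```

Sampling per class (rather than from the pooled list) is what keeps the 9 categories in equal proportion, matching the distribution shown in Figure 2.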

3.3. Workflow and Its Phases

The workflow of the model predicts whether a plant does or does not have disease using the steps shown in Figure 3. The input layer expects an image with dimensions M × N × 3. In our case, we resized the photos to 224 × 224 × 3 before feeding them into the main pre-trained model. The functional layer is the model’s representation after it has been trained. By employing transfer learning, we make use of the model weights obtained from these various datasets. A global average pooling layer obtains data from the functional layer and averages those values to turn the feature map into a 1D vector. The one-dimensional vector generated by this layer is sent to a fully connected dense layer. Using this extra filter, we can reduce the number of categories to a more workable four. In terms of both efficiency and speed, the model is superior to rival algorithms. Throughout training, a batch size of 30 was used in each epoch. In Figure 3, part (A) shows the training process and part (B) shows data validation.
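The global-average-pooling and dense-head steps above can be sketched with numpy. The feature-map shape (7 × 7 × 1280 for a 224 × 224 input) matches the public MobileNetV2 backbone, but the weights below are random placeholders, not trained values, and a 9-way head (one per dataset category) is assumed.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for the feature map emitted by a pre-trained backbone such as
# MobileNetV2: a 7 x 7 spatial grid with 1280 channels for a 224x224 input.
feature_map = rng.normal(size=(7, 7, 1280))

# Global average pooling: average each channel over its 7x7 grid,
# turning the feature map into a 1280-dimensional vector.
gap = feature_map.mean(axis=(0, 1))           # shape (1280,)

# Fully connected classification head over the 9 pest classes
# (random placeholder weights, for shape illustration only).
W = rng.normal(size=(1280, 9))
b = np.zeros(9)
logits = gap @ W + b
probs = np.exp(logits - logits.max())
probs /= probs.sum()                          # softmax over the 9 classes
predicted_class = int(np.argmax(probs))
```

With transfer learning, only the head weights `W` and `b` would be trained on the pest images; the backbone that produced `feature_map` stays frozen.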

3.4. Data Preprocessing

The images were preprocessed after the dataset was loaded. Preprocessing refers to any modifications made to the raw data before they are fed to the deep learning model. The preprocessing methods include size normalization and standardization, balanced batch production, random oversampling, and image data generation. To facilitate preprocessing, the software imports the Keras and OpenCV (cv2) packages. In particular, we augmented the images from the dataset for better training of the fully connected layers.
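The scaling and standardization steps can be sketched as follows. The batch here is random placeholder data standing in for resized Plant Village images; this is one common recipe, not necessarily the exact transform used by the authors.

```python
import numpy as np

rng = np.random.default_rng(1)

# A batch of raw 8-bit RGB leaf images already resized to 224 x 224
# (placeholder data; in practice these come from the Plant Village files).
batch = rng.integers(0, 256, size=(30, 224, 224, 3)).astype(np.float32)

# Scale pixel values from [0, 255] to [0, 1] ...
batch /= 255.0

# ... then standardize per channel (zero mean, unit variance), a common
# normalization step before feeding images to a CNN.
mean = batch.mean(axis=(0, 1, 2), keepdims=True)
std = batch.std(axis=(0, 1, 2), keepdims=True)
standardized = (batch - mean) / (std + 1e-7)
```

Note that when using a pre-trained backbone, the preprocessing must match what the backbone was trained with (e.g., Keras ships a dedicated `preprocess_input` per architecture).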

3.5. Image Augmentation

Image augmentation is used to generate image data by improving upon the source pictures. It entails creating multiple iterations of the same picture by applying various modifications to the source picture [19,20]. In such contexts, data augmentation is a powerful tool for training deep learning models. MobileNetV2’s overall structure consists of a 32-filter fully convolutional first layer followed by 19 residual bottleneck layers. MobileNetV2 employs two types of basic modules: a residual block with a stride of 1, and a block with a stride of 2 used for downsizing. It employs depth-wise separable convolutions, which factorize standard convolutions into separate spatial and channel-wise operations, reducing computational complexity while maintaining accuracy. MobileNetV2 introduces inverted residual blocks with linear bottlenecks and shortcut connections, enabling the efficient flow of information through the network, as shown in Figure 4.
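Two points from this section can be made concrete in a short sketch: simple flip/rotate augmentations that multiply the training data, and the parameter saving from factorizing a standard convolution into a depth-wise plus point-wise pair. The channel counts (32 in, 64 out) are illustrative, not taken from a specific MobileNetV2 layer.

```python
import numpy as np

def augment(image):
    """Produce simple variants of one image: flips and a 90-degree rotation."""
    return [
        image,
        np.fliplr(image),      # horizontal flip
        np.flipud(image),      # vertical flip
        np.rot90(image, k=1),  # 90-degree rotation
    ]

# Weight count of a standard k x k convolution vs. the depthwise-separable
# factorization (ignoring biases), mapping c_in channels to c_out channels.
def conv_params(c_in, c_out, k=3):
    return k * k * c_in * c_out

def depthwise_separable_params(c_in, c_out, k=3):
    return k * k * c_in + c_in * c_out   # depthwise + 1x1 pointwise

image = np.zeros((224, 224, 3))
variants = augment(image)                # 4 training images from 1 source
saving = conv_params(32, 64) / depthwise_separable_params(32, 64)
```

For this 3 × 3, 32-to-64-channel example the factorized form needs roughly 8× fewer weights, which is the main reason MobileNetV2 is light enough for fast training and edge deployment.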

3.6. Training the Model

Here, the entire dataset is split into two parts, a training set and a test set, in the ratio of 85:15. Only a subset of the full dataset is used while training the deep learning model, and the test data were used to determine the model’s accuracy. The efficiency of the model was measured through several performance indicators. To fine-tune the model’s hyperparameters, a validation set was incorporated into the training phase. As the model does not gain any knowledge from this information and uses it only for evaluation purposes, the results are unbiased and objective. By pausing model training when loss on the validation dataset exceeds loss on the training dataset (thus minimizing bias and variance), the validation dataset can also be used for regularization.
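The 85:15 split and the early-stopping idea can be sketched as below. The loss values are illustrative numbers, and the patience-based stopping rule shown is a common variant of the validation-loss criterion the text describes, not necessarily the authors' exact rule.

```python
import numpy as np

rng = np.random.default_rng(2)

n = 2700                               # images sampled from Plant Village
indices = rng.permutation(n)
split = int(0.85 * n)                  # 85:15 train/test split
train_idx, test_idx = indices[:split], indices[split:]

# Early-stopping sketch: stop once validation loss has not improved for
# `patience` consecutive epochs (losses below are illustrative numbers).
val_losses = [0.9, 0.7, 0.6, 0.55, 0.56, 0.58, 0.60]
patience, best, wait, stopped_at = 2, float("inf"), 0, None
for epoch, loss in enumerate(val_losses):
    if loss < best:
        best, wait = loss, 0           # improvement: reset the counter
    else:
        wait += 1
        if wait >= patience:
            stopped_at = epoch         # stop before overfitting sets in
            break
```

Shuffling before splitting ensures the held-out 15% is drawn from all 9 categories rather than from whichever classes happen to sit at the end of the file list.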

4. Results and Discussion

The results were calculated based on the accuracy of each model in different epochs; training was carried out in sets of 20 epochs.

4.1. Comparison of Accuracy Achieved with Existing Methods

Table 1 compares different CNNs, fully trained and pre-trained, for pest detection. GoogleNet and VGG16 are fully trained on the pest dataset, which explains why their detection accuracy is high compared to our methodology. However, GoogleNet and VGG16 have far more training parameters than our methodology, which is very fast and was trained for only 20 epochs. This is the merit of our methodology compared to other CNNs: MobileNetV2 is very fast, as shown in the model configuration in Figure 5.
Figure 6 and Figure 7 show the accuracy and training loss of our method, which was trained for only 20 epochs. The graphs plot the loss value on the Y-axis and epochs on the X-axis and compare the training and validation data for the loss and accuracy values obtained while training the model. Even after 20 epochs, the training loss keeps decreasing while the training accuracy keeps increasing, which suggests that the method could achieve better accuracy if trained for a greater number of epochs.

4.2. Disease Detection Results of the Images

Using deep-learning algorithms based on CNN models, which demonstrate improved performance, it is possible to detect and classify pests and illnesses. We achieved 93.99% accuracy using the MobileNetV2 model, which is far higher than previous attempts at image categorization that relied on “manual” feature extraction. A visual output of the model is shown in Figure 8. The proposed CNN-based approach demonstrates promising scalability and generalizability for application across various plant species and agricultural settings. By leveraging transfer learning and data augmentation techniques during training, the model can effectively learn and extract meaningful features from diverse datasets, allowing it to adapt to different plant species and their unique visual characteristics. The use of large and curated datasets contributes to the model’s robustness and ability to generalize to a wide range of agricultural settings. Moreover, the automated nature of the CNN-based system makes it feasible for large-scale monitoring, reducing the need for manual inspection and enabling the timely detection of pests and pathogens in crops. The approach’s scalability and generalizability offer significant potential to enhance the precision of agriculture practices, support crop health management, and contribute to sustainable farming practices worldwide.

5. Conclusions

In conclusion, this study presented a CNN-based approach for pest detection in tomato leaves using the MobileNetV2 architecture and the Plant Village Dataset. The results showed that, even with limited data and computing resources, we could achieve impressive accuracy, and that with more extensive training, the model’s performance could further improve. The proposed framework can serve as a foundation for developing advanced plant disease-detection systems, helping farmers and agricultural experts make informed decisions to protect their crops and enhance agricultural productivity. By integrating cutting-edge deep learning techniques with domain-specific knowledge, we contribute to the growing field of precision agriculture and pave the way for more sustainable and efficient farming practices. Future research directions in using deep learning for smart agriculture and pest control applications can focus on addressing the scalability and interpretability of models. Scalability can be achieved by developing efficient and lightweight deep learning architectures that can run on resource-constrained edge devices, ensuring real-time and on-site pest monitoring. Moreover, interpretability is crucial for building trust and understanding in AI-driven systems. Research efforts can explore techniques to make deep learning models more transparent and explainable, providing farmers with insights into the reasoning behind pest predictions and control recommendations. Additionally, integrating deep learning with Internet of Things (IoT) devices and drones can enhance data collection and enable targeted pest-management strategies. Exploring these directions will pave the way for the widespread adoption of deep learning in agriculture, leading to more sustainable and environmentally friendly pest control practices.

Author Contributions

Conceptualization, G.N. and D.S.; methodology, M.T., V.A., A.K.V. and P.A.; validation, G.N. and D.S.; formal analysis, M.T., V.A., A.K.V. and P.A.; investigation, G.N. and D.S.; resources, M.T., V.A., A.K.V. and P.A.; data curation, G.N. and D.S.; writing—original draft preparation, G.N. and D.S.; validation, M.T., V.A., A.K.V. and P.A.; writing—review and editing, G.N. and D.S.; validation, M.T., V.A., A.K.V. and P.A.; visualization, G.N. and D.S.; supervision, M.T., V.A., A.K.V. and P.A. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Dataset used in this research is publicly available on https://www.kaggle.com/datasets/adilmubashirchaudhry/plant-village-dataset (accessed on 14 August 2023).

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Aparecido, L.E.; Rolim, G.; Costa, C.T.S.; Souza, P.S. Machine learning algorithms for forecasting the incidence of Coffea arabica pests and diseases. Int. J. Biometeorol. 2020, 64, 671–688. [Google Scholar] [CrossRef] [PubMed]
  2. Robert, M.; Dury, J.; Thomas, A.; Therond, O.; Sekhar, M. CMFDM: A methodology to guide the design of a conceptual model of farmers’ decision-making processes. Agric. Syst. 2016, 148, 86–94. [Google Scholar] [CrossRef]
  3. Nigam, A.; Garg, S.; Agrawal, A.; Agrawal, P. Crop yield prediction using machine learning algorithms. In Proceedings of the 2019 Fifth International Conference on Image Information Processing (ICIIP), Shimla, India, 15–17 November 2019; pp. 125–130. [Google Scholar]
  4. Rajan, P.; Radhakrishnan, B. A survey on different image processing techniques for pest identification and plant disease detection. Int. J. Comput. Sci. Netw. (IJCSN) 2016, 5, 137–141. [Google Scholar]
  5. Wen, C.; Guyer, D. Image-based orchard insect automated identification and classification method. Comput. Electron. Agric. 2012, 89, 110–115. [Google Scholar] [CrossRef]
  6. Dora Pravina, C.T.; Buradkar, M.U.; Jamal, M.K.; Tiwari, A.; Mamodiya, U.; Goyal, D. A Sustainable and Secure Cloud resource provisioning system in Industrial Internet of Things (IIoT) based on Image Encryption. In Proceedings of the 4th International Conference on Information Management & Machine Intelligence, Jaipur, India, 23–24 December 2022; pp. 1–5. [Google Scholar]
  7. Srivastava, P.K.; Kumar, S.; Tiwari, A.; Goyal, D.; Mamodiya, U. Internet of thing uses in materialistic ameliorate farming through AI. In Proceedings of the AIP Conference Proceedings, Jaipur, India, 6–7 May 2023; Volume 2782. [Google Scholar]
  8. Hassan, S.N.; Rahman, N.S.; Win, Z.Z. Automatic classification of insects using color-based and shape-based descriptors. Int. J. Appl. Control. Electr. Electron. Eng. 2014, 2, 23–35. [Google Scholar]
  9. Li, Y.; Xia, C.; Lee, J. Detection of small-sized insect pest in greenhouses based on multifractal analysis. Opt.-Int. J. Light Electron Opt. 2015, 126, 2138–2143. [Google Scholar] [CrossRef]
  10. Ding, W.; Taylor, G. Automatic moth detection from trap images for pest management. Comput. Electron. Agric. 2016, 123, 17–28. [Google Scholar] [CrossRef]
  11. Ebrahimi, M.A.; Khoshtaghaza, M.H.; Minaei, S.; Jamshidi, B. Vision-based pest detection based on SVM classification method. Comput. Electron. Agric. 2017, 137, 52–58. [Google Scholar] [CrossRef]
  12. Ashok, S.; Kishore, G.; Rajesh, V.; Suchitra, S.; Sophia, S.G.; Pavithra, B. Tomato leaf disease detection using deep learning techniques. In Proceedings of the 2020 5th International Conference on Communication and Electronics Systems (ICCES), Coimbatore, India, 10–12 June 2020; pp. 979–983. [Google Scholar]
  13. Zinonos, Z.; Gkelios, S.; Khalifeh, A.F.; Hadjimitsis, D.G.; Boutalis, Y.S.; Chatzichristofis, S.A. Grape leaf diseases identification system using convolutional neural networks and Lora technology. IEEE Access 2021, 10, 122–133. [Google Scholar] [CrossRef]
  14. Barburiceanu, S.; Meza, S.; Orza, B.; Malutan, R.; Terebes, R. Convolutional neural networks for texture feature extraction. Applications to leaf disease classification in precision agriculture. IEEE Access 2021, 9, 160085–160103. [Google Scholar] [CrossRef]
  15. Yu, H.; Liu, J.; Chen, C.; Heidari, A.A.; Zhang, Q.; Chen, H.; Mafarja, M.; Turabieh, H. Corn leaf diseases diagnosis based on K-means clustering and deep learning. IEEE Access 2021, 9, 143824–143835. [Google Scholar] [CrossRef]
  16. Mamodiya, U.; Raigar, G.; Meena, H. Design & simulation of tiffin food problem using fuzzy logic. Int. J. Sci. Adv. Res. Technol. 2018, 4, 55–60. [Google Scholar]
  17. Pham, T.N.; Dao, S.V. A hybrid metaheuristic algorithm for intelligent nurse scheduling. In Enabling Healthcare 4.0 for Pandemics: A Roadmap Using AI, Machine Learning, IoT and Cognitive Technologies; John Wiley & Sons, Inc.: Hoboken, NJ, USA, 2021; Volume 21, pp. 211–235. [Google Scholar]
  18. Tiwari, A.; Garg, R. Orrs Orchestration of a Resource Reservation System Using Fuzzy Theory in High-Performance Computing: Lifeline of the Computing World. Int. J. Softw. Innov. (IJSI) 2022, 10, 1–28. [Google Scholar] [CrossRef]
  19. Manikandan, R.; Maurya, R.K.; Rasheed, T.; Bose, S.C.; Arias-Gonzáles, J.L.; Mamodiya, U.; Tiwari, A. Adaptive cloud orchestration resource selection using rough set theory. J. Interdiscip. Math. 2023, 26, 311–320. [Google Scholar] [CrossRef]
  20. Kamble, S.; Saini, D.K.J.; Kumar, V.; Gautam, A.K.; Verma, S.; Tiwari, A.; Goyal, D. Detection and tracking of moving cloud services from video using saliency map model. J. Discret. Math. Sci. Cryptogr. 2022, 25, 1083–1092. [Google Scholar] [CrossRef]
Figure 1. A general CNN architecture [14].
Figure 2. Data set labels.
Figure 3. Flow chart of CNN classification. A sample image taken from [18]. (A) the training process, (B) data validation.
Figure 4. MobileNetV2.
Figure 5. MobileNetV2 Model.
Figure 6. Training and validation accuracy of MobileNetV2.
Figure 7. Training and validation loss.
Figure 8. Pests’ disease detection.
Table 1. Model accuracy and comparison with other works trained on the Plant Village dataset.

Reference | Algorithm | Accuracy | F1-Score | Remarks
[17] | GoogleNet | 99.16% | - | Fully trained
[17] | VGG16 | 98% | - | Fully trained
[11] | CNN with feature extraction | 98% | - | Fully trained with feature engineering
[10] | XgBoost with SIFT | 90.3% | 86% | Feature extraction
Our proposed method | MobileNetV2 | 93% | 90% | Pre-trained and very fast
