Article

Comparison of Deep Learning Models for Multi-Crop Leaf Disease Detection with Enhanced Vegetative Feature Isolation and Definition of a New Hybrid Architecture

by Sajjad Saleem 1, Muhammad Irfan Sharif 2, Muhammad Imran Sharif 3, Muhammad Zaheer Sajid 4,* and Francesco Marinello 5

1 Department of Information and Technology, Washington University of Science and Technology, Alexandria, VA 22314, USA
2 Department of Information Sciences, University of Education Lahore, Jauharabad Campus, Jauharabad 41200, Pakistan
3 Department of Computer Science, Kansas State University, Manhattan, KS 66506, USA
4 Department of Computer Software Engineering, Military College of Signals, National University of Sciences and Technology, Islamabad 44000, Pakistan
5 Department of Land, Environment, Agriculture and Forestry, University of Padova, 35020 Legnaro, Italy
* Author to whom correspondence should be addressed.
Agronomy 2024, 14(10), 2230; https://doi.org/10.3390/agronomy14102230
Submission received: 31 August 2024 / Revised: 13 September 2024 / Accepted: 23 September 2024 / Published: 27 September 2024
(This article belongs to the Section Precision and Digital Agriculture)

Abstract

Agricultural productivity is one of the critical factors in ensuring food security across the globe. However, major crops such as potato, tomato, and mango are frequently affected by leaf diseases, which considerably lower yield and quality. The traditional practice of diagnosing disease through visual inspection is labor-intensive, time-consuming, and error-prone. To address these challenges, this study introduces the AgirLeafNet model, a deep learning-based solution that combines NASNetMobile for feature extraction with Few-Shot Learning (FSL) for classification. The Excess Green Index (ExG), a specialized vegetation index, is incorporated to strengthen the model's ability to isolate and detect vegetative features even in scenarios with minimal labeled data. AgirLeafNet demonstrates outstanding accuracy, achieving 100% for potato, 92% for tomato, and 99.8% for mango leaf disease detection, outperforming the models already described in the literature. By demonstrating the viability of a combined deep learning/IoT system architecture, this study goes beyond the current state of multi-crop disease detection and provides a practical, effective, and efficient deep learning solution for sustainable agricultural production systems. The model's novelty lies in its multi-crop capability, the precision of its results, and the use of ExG to support more robust disease detection methods. The AgirLeafNet model thus sets a new benchmark for future research in this area.

1. Introduction

Maintaining the health of key crops such as Solanum tuberosum (potato), Solanum lycopersicum (tomato), and Mangifera indica (mango) is essential to ensuring global food security and addressing the challenges of feeding a growing population. Potatoes are a staple food in many parts of the world, tomatoes are widely consumed both fresh and processed, and mangoes hold significant economic importance as a tropical fruit. However, these vital crops are susceptible to various leaf diseases that drastically reduce yield and quality. For potatoes, common diseases such as early blight, late blight, and leaf spot cause severe damage to crop productivity. Traditional methods for detecting these diseases often rely on expert visual assessments, which can be time-consuming and prone to errors. Deep learning (DL) and machine learning (ML) techniques have revolutionized this process by automating the detection and classification of plant diseases, offering faster, more accurate solutions. Mohanty et al. [1] demonstrated that convolutional neural networks (CNNs) could classify 26 crop species, including potatoes, based on their diseases, achieving remarkable accuracy. Picon et al. [2] and Sibiya and Sumbwanyambe [3] further showed that deep learning models outperform traditional machine learning methods in detecting potato leaf diseases. Ramcharan et al. [4] applied transfer learning (TL) to CNNs, achieving high accuracy in identifying late blight in potato crops, a significant breakthrough for large-scale potato farming. Tomato (Solanum lycopersicum), another widely cultivated crop, is similarly vulnerable to various leaf diseases such as bacterial spot, early blight, late blight, and viral infections like the Tomato Yellow Leaf Curl Virus (TYLCV). These diseases can cause significant losses if not identified early.
Ferentinos (2018) demonstrated the successful application of CNNs in detecting multiple tomato diseases with high accuracy, showing the scalability of these methods in large agricultural settings [5]. Brahimi et al. (2017) expanded on this work by using transfer learning techniques to improve disease classification accuracy for tomatoes, particularly in real-world agricultural applications where annotated datasets are scarce [6]. Integrating Internet of Things (IoT) technologies with DL models has further enhanced tomato disease detection. Zhang et al. [7] demonstrated that IoT devices combined with DL models could provide real-time monitoring of tomato plants, enabling early detection and intervention. Plants are essential for global food production, but environmental factors often lead to diseases, causing significant crop losses. Manual detection methods are inefficient, so machine learning (ML) and deep learning (DL) have emerged as effective solutions for early and accurate plant disease identification. This research reviews recent advancements in ML and DL for plant disease detection, highlighting their benefits, limitations, and proposed solutions to overcome challenges such as data quality and availability [8]. Mango (Mangifera indica), an economically significant tropical fruit, is also at risk from various leaf diseases, such as anthracnose, bacterial canker, and powdery mildew. Applying deep learning methods has shown promising results in detecting these diseases. Amara et al. (2017) applied CNNs to identify several mango leaf diseases with high accuracy, providing an automated alternative to traditional disease identification methods [9]. Brahimi et al. (2017) demonstrated that transfer learning could further improve the accuracy of mango disease detection by utilizing pre-trained models adapted to the Mango Dataset [6]. 
Mango plant diseases require timely control for high yield, but manual detection is impractical due to high costs, limited experts, and symptom variations. This study proposes a novel vein-based segmentation approach to accurately identify diseased areas on mango leaves [10]. India’s growing population and food demands make increasing crop productivity crucial, but plant diseases significantly reduce yields. Machine learning and deep learning techniques are being used for accurate plant disease detection, offering better performance and solutions in disease diagnosis. These AI-based methods help prevent major crop losses by identifying diseases early, especially through advancements in deep learning for image-based disease recognition [11]. Potato blight significantly threatens global potato crops, impacting livelihoods and economies, especially in developing countries. Neural networks have been used for early detection, but issues with accuracy and computation time persist [12].
Integrating deep learning architectures with IoT technologies offers a transformative approach to disease monitoring in crops like potatoes, tomatoes, and mangoes. These technologies provide the speed, scalability, and accuracy required for effective disease detection across large-scale agricultural operations. However, challenges remain, including the need for large, annotated datasets, more interpretable models, and practical implementations that farmers can easily adopt. Continued research and collaboration between technologists, agronomists, and farmers are essential to overcoming these challenges. This study aims to explore the latest advancements in detecting and classifying leaf diseases in potatoes, tomatoes, and mangoes using state-of-the-art machine learning and deep learning techniques. We evaluate the performance of various models and methodologies presented in recent research, discuss the practical applications of these technologies, and highlight the challenges and future directions in this rapidly evolving field. By leveraging advanced deep learning architectures and IoT technologies, this study aims to contribute to developing innovative solutions that can help farmers maintain healthy crops, promote sustainable agricultural practices, and enhance global food security. The diseases affecting these crops are categorized into several types, including Potato Early Blight, Potato Healthy, Potato Late Blight, Tomato Bacterial Spot, Tomato Early Blight, Tomato Healthy, Tomato Late Blight, Tomato Leaf Mold, Tomato Septoria Leaf Spot, Tomato Spider Mites (Two-spotted spider mite), Tomato Target Spot, Tomato Mosaic Virus, Tomato Yellow Leaf Curl Virus, Mango Anthracnose, Mango Bacterial Canker, Mango Cutting Weevil, Mango Die Back, Mango Gall Midge, Mango Healthy, Mango Powdery Mildew, and Mango Sooty Mould. These classifications are summarized in Table 1, which outlines the key findings for potato, tomato, and mango leaf diseases and their classification.

1.1. Research Motivation

This research is motivated by pressing challenges in agriculture, especially the detection and management of leaf diseases affecting key crops like potatoes, tomatoes, and mangoes. These crops are essential to food security; any threat to their health translates into losses in yield and quality, affecting farmers' livelihoods and the availability of food supplies. Traditional methods of disease detection that rely primarily on expert visual examination are not only time- and labor-intensive but also subject to errors due to human fallibility. Timely and correct identification of diseases is essential for proper management and control in large-scale agricultural production, and the traditional approach is increasingly insufficient. ML and DL innovations have found rapid application in automating and improving the accuracy of crop disease diagnosis. Convolutional neural networks, which automatically learn and extract relevant features from images, have arguably revolutionized image-based disease detection. However, it remains a challenge to develop a model that is not only accurate but also scales and applies across multiple crops and under varying environmental conditions. In this article, we propose the AgirLeafNet model, a deep learning architecture that takes advantage of NASNetMobile for feature extraction and employs Few-Shot Learning for classification. The proposed approach innovatively adopts the Excess Green Index to boost the model's capability to isolate vegetative features, even in the most challenging cases with scarce labeled data.
The current research focuses on developing leaf disease diagnostics for multiple crops, such as potatoes, tomatoes, and mangoes, to provide a versatile and robust solution with broader applicability in various agricultural contexts. In the future, this will be supplemented by embedding the deep learning models in IoT technologies to carry out real-time monitoring and management of crop diseases, greatly boosting the dependability and effectiveness of disease control measures and supporting agricultural sustainability and improved food security. The approach addresses critical challenges faced by farmers in maintaining healthy crops by reducing losses, ensuring high productivity, and achieving sustainability in farming. It is important to underline that the AgirLeafNet model represents a novel benchmark in multi-crop disease detection, providing a scalable solution that is both accurate and suited to different agricultural conditions.

1.2. Research Contribution

The following work makes several critical contributions to the classification of potato, tomato, and mango leaf diseases:
  • This work proposes a deep learning framework, AgirLeafNet, that combines a feature extraction method (NASNetMobile) with a Few-Shot Learning classification mechanism. This is the first method of its kind for detecting agricultural leaf diseases.
  • The significant novelty of this work is the use of the innovative Excess Green Index in image preprocessing. The index strengthens the green component in images and increases the model’s ability to identify and analyze vegetative features more effectively, especially where there is a limited amount of labeled data.
  • The AgirLeafNet framework is designed to be scalable and versatile, allowing it to effectively manage disease detection across multiple crops, unlike conventional models that are often restricted to single-crop applications. By leveraging advanced feature extraction and classification techniques, the model demonstrates robust performance in a variety of agricultural contexts, showing its potential to be widely adopted for multi-crop disease detection.
  • The paper discusses the potential of AgirLeafNet integration with IoT technologies in order to track and manage crop diseases in real-time. This can significantly enhance the timeliness and effectiveness of disease detection and intervention in agricultural practices.
  • In that respect, the AgirLeafNet model provides a new benchmark in multi-crop disease detection, thus setting a foundation for further research in this area. This work therefore opens up future research related to the generalization capacity of the model across different environments and crop varieties and, more generally, other crops.
  • The research thus underlines the importance of the AgirLeafNet model in enabling sustainable agricultural practices. It enhances the accuracy and scalability of disease detection for the reduction of crop losses, resulting in higher productivity critical for global food security.
This study provides a new state of the art in categorizing diseases of potato, tomato, and mango leaves and creates opportunities for further research on integrating deep learning techniques into agricultural applications. A detailed comparison with the state of the art shows that the proposed model outperforms other works found in the literature, achieving better accuracy in classifying potato, tomato, and mango leaf diseases.

2. Related Work

The detection and categorization of potato diseases have improved significantly with the inclusion of deep learning and machine learning techniques. Most conventional techniques relied on labor-intensive, error-prone manual feature extraction and image processing. The development of deep learning, particularly CNNs, significantly improved the precision and effectiveness of disease detection systems. Several research studies have examined various CNN architectures to improve potato disease detection. For example, the accuracy of models for diagnosing and categorizing common potato diseases such as bacterial wilt, late blight, and early blight is relatively high. These models generalize well because they learn disease features from very large, annotated datasets. Additionally, to increase classification accuracy, hybrid models have been developed that combine CNNs with other machine learning methods such as support vector machines and random forests [13].
One study introduced a potato disease classification algorithm that exploits the distinct visual symptoms of plant diseases, applying deep convolutional neural networks. The model classifies potatoes into five categories, including four disease classes and one healthy class, using an expert-labeled image dataset. Various train-test splits were employed to determine the amount of data required for accurate classification using deep learning [14]. This dramatically speeds up the training process and reduces the requirement for large, labeled datasets. A few studies have combined DL with multispectral imaging to create more sophisticated disease detection algorithms, demonstrating that modern imaging techniques may be instrumental in agricultural applications [15]. Another line of research applies Deep Convolutional Neural Networks (DCNNs) for plant disease detection, using image preprocessing techniques to enhance accuracy. By applying Gaussian and Median filters and converting images to HSI and CMYK color models, the study identifies the best combination for optimal classification. The highest accuracies were achieved with ResNet-50 (99.53%), VGG-19 (98.27%), and MobileNet-V2 (94.98%) using Gaussian Blur and CMYK conversion, demonstrating significant improvements in agricultural disease classification [16].
Developing strong DL models to detect several tomato diseases, including leaf mold, early blight, and late blight, has been the focus of several research studies [17]. Tomato diseases significantly impact crop quality and yield, making fast and accurate identification crucial for smart agriculture. One study proposes a lightweight CNN model, MFRCNN, which uses a multi-scale and feature reuse structure to improve efficiency. Tested on both laboratory and field-based datasets, MFRCNN outperformed popular CNN models, achieving 99.01% and 98.75% accuracy, respectively. With fewer trainable parameters and only 2.7 MB of storage space, MFRCNN offers a highly accurate and resource-efficient solution for plant disease diagnosis on low-performance devices [18]. Agriculture faces numerous challenges, including labor shortages, climate change, plant diseases, and market instability, while global food demand continues to rise. One research effort addresses plant leaf diseases at early stages using deep learning techniques, specifically through the AgroDeep mobile application, which collects real diseased leaf images for analysis. The CNN-based model achieved 97% accuracy in classifying diseases on tomato leaves, offering an effective solution for farmers. By improving disease detection and supporting higher crop yields, this research plays a crucial role in enhancing agricultural productivity and profitability [19]. Another study focuses on using deep convolutional neural networks, specifically the YOLOv5 model, for the real-time detection of plant leaf diseases. By creating a dataset of money plant leaf images, classified as healthy or unhealthy, the study demonstrated how mobile devices can help farmers quickly identify diseases, even in remote regions with limited infrastructure. The YOLOv5 model achieved 93% accuracy, offering a practical solution for farmers to detect early signs of disease, prevent its spread, and improve crop yields.
This approach is vital for enhancing disease management in agriculture [20].
An efficient and successful AI-based solution for the early identification and categorization of diseases affecting mango leaves is offered by the proposed Ensemble Stacked Deep Learning model. This algorithm recognizes diseases, including powdery mildew and anthracnose, with an accuracy of up to 98.57% thanks to a combination of deep neural networks and machine learning, significantly assisting farmers in taking appropriate action at the right moment, reducing yield losses and enhancing the quality of mango output [21]. In this regard, DenseNet78, a lightweight deep learning model, was built and presented to classify mango leaf diseases accurately. The optimized DenseNet architecture with custom layers achieved an exceptional accuracy of 99.47% for recognizing healthy leaves and 99.44% for detecting various diseased ones. While the DenseNet model may be used with small datasets, it generalizes well across regions and disease variations, which is crucial for the automated detection of diseases affecting mango leaves in agriculture [22]. Further research focuses on using Convolutional Neural Networks (CNNs) for the early diagnosis and identification of mango leaf diseases, which are crucial for maintaining mango quality and yield. The CNN model, requiring minimal preprocessing, was designed to automatically detect diseases by analyzing images of mango leaves. Techniques like image augmentation were applied to prevent overfitting and enhance the model's generalization. This approach improves disease prediction accuracy, helping manage the spread of mango diseases and supporting better crop management [23].
Mango cultivation plays a critical role in the economy and food security of tropical and subtropical regions, but mango leaf diseases can severely reduce yield and quality. Early detection is vital for sustainable production. This research systematically analyzes deep learning techniques for mango leaf disease detection, using pre-trained models like VGG19, InceptionV3, and ResNet152V2. Among these, InceptionV3 achieved the highest accuracy at 99.87%, outperforming other models. The study also compares its findings with earlier research, highlighting the effectiveness of deep learning in improving disease diagnosis for mango crops [24]. This study presents the VGG16 Convolutional Neural Network model for classifying mango leaf diseases into eight categories using a dataset of 4000 images. After training the model over five epochs with a batch size of 64, the VGG16 achieved an accuracy of 94%. This research underscores the potential of deep learning techniques in agriculture, providing farmers with an efficient tool for early disease detection in mango crops. This enables timely intervention, supporting the health and productivity of mango trees [25].
The field of plant disease detection and classification has been revolutionized by advances in deep learning (DL) and CNNs. The combination of these technologies with IoT and cutting-edge imaging methods greatly encourages the creation of reliable, effective, and scalable agricultural disease management systems. To solve the issues of data accessibility, model interpretability, and practical application, further study in this field is necessary; this will ultimately lead to more sustainable farming methods and increased food security. Table 2 gives a tabular summary of the literature mentioned previously. Figure 1 compares the number of deep learning methods with machine learning methods applied in the agricultural disease detection area. Most of the recent works are shifting toward deep learning methods due to their powerful feature extraction and classification performance.

3. Material and Methods

The research method involves constructing a robust deep learning architecture that combines NASNetMobile for feature extraction with a Multi-Feature Fusion Network and Few-Shot Learning for accurate classification. The approach begins with acquiring a diverse image dataset, which is resized to 224 × 224 pixels to ensure consistency and effective input into the NASNetMobile model. NASNetMobile serves as the feature extractor, passing images through convolutional blocks, each designed to capture varying degrees of feature complexity. These blocks employ depthwise separable convolutions to filter image information efficiently while retaining a high degree of detail in the extracted features. The feature vectors produced by NASNetMobile are then refined by a Multi-Feature Fusion Network, which integrates the features extracted from multiple layers, allowing the model to incorporate multi-scale detail and improving its ability to detect fine distinctions in leaf disease. The approach incorporates Few-Shot Learning by defining support and query sets: the support images produce class prototypes by averaging their feature vectors, while the query images are classified based on the Euclidean distance between their feature vectors and these prototypes. To enhance classification performance, a Weight Generation Component is introduced, assigning weights to features based on their significance; these weights are adapted throughout training. The model is trained by iterating over the support and query sets, using the Adam optimizer to update the parameters and cross-entropy loss to evaluate prediction accuracy. Classification probabilities are computed from the weighted distances, with the class of highest probability determining the final prediction.
The model's performance is regularly evaluated on the validation set, with training and validation losses and accuracies plotted over epochs to verify generalization and to detect any overfitting or underfitting. This detailed method offers a comprehensive approach to multi-crop disease detection by combining multi-feature fusion, Few-Shot Learning, and the robustness of NASNetMobile for scalable and accurate classification. The systematic flow of these steps is shown in Figure 2.
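The prototypical classification step described above (class prototypes from averaged support vectors, Euclidean distances to queries, softmax over negative distances) can be sketched as follows. This is a minimal NumPy illustration, not the paper's implementation: the learned Weight Generation Component is approximated here by a fixed per-feature weight vector, and the feature vectors are assumed to have already been produced by the extractor.

```python
import numpy as np

def class_prototypes(support_feats, support_labels, n_classes):
    # Each class prototype is the mean of that class's support feature vectors.
    return np.stack([support_feats[support_labels == c].mean(axis=0)
                     for c in range(n_classes)])

def classify_queries(query_feats, protos, feat_weights=None):
    # Differences of every query vector to every prototype: (n_query, n_class, dim).
    diff = query_feats[:, None, :] - protos[None, :, :]
    if feat_weights is not None:
        # Stand-in for the learned Weight Generation Component: a fixed
        # per-feature weight scaling each dimension's contribution.
        diff = diff * feat_weights
    dists = np.linalg.norm(diff, axis=-1)        # Euclidean distances
    logits = -dists                              # closer prototype -> larger logit
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability
    probs = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)
    return probs.argmax(axis=1), probs
```

In training, the cross-entropy loss would be computed on these probabilities and backpropagated through the feature extractor and the weight generator; here both are fixed for clarity.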

3.1. Data Acquisition

The study undertaken for this project uses three groups of plant disease datasets: mango, tomato, and potato leaf diseases. There are 2152 images in the Potato Village Leaf Dataset [26] and 2152 images in the Potato Plant Leaf Disease dataset [26]. The tomato leaf disease collection contains 11,000 images [27], and the mango leaf disease dataset contains 4000 images [28]. Data were collected from well-reputed online repositories, while professional farmers helped organize the data for training. Table 3 details the datasets used for this research, including their dimension settings, for building and evaluating the testing and training image sets. After processing, these images were assigned multiple labels, and data augmentation was applied to balance the datasets and ensure fairness. The images were resized to 700 × 600 pixels during preprocessing before input into the AgirLeafNet model. Experimental analysis determined that 700 × 600 pixels is the optimal size for processing, as reducing larger images to this size is generally more efficient than enlarging smaller ones; in practice, deep learning models also typically train more quickly on smaller images. The various leaf image datasets, including the images utilized in the study, are displayed in Figure 3, Figure 4, Figure 5 and Figure 6. The datasets utilized to create the training and testing image sets are shown in Table 3.
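The paper does not specify which augmentation operations were used to balance the datasets; the sketch below is an illustrative NumPy-only approach that grows an under-represented class to a target size by cycling simple label-preserving transforms (flips and 90-degree rotations).

```python
import numpy as np

def augment_to_balance(images, target_count):
    # Cycle simple label-preserving transforms over the originals until the
    # class reaches target_count images. The specific operations here are
    # illustrative; rotations assume (approximately) square inputs, since
    # np.rot90 swaps height and width.
    ops = [np.fliplr, np.flipud,
           lambda im: np.rot90(im, 1), lambda im: np.rot90(im, 3)]
    out = list(images)
    i = 0
    while len(out) < target_count:
        out.append(ops[i % len(ops)](images[i % len(images)]))
        i += 1
    return out
```

A pipeline would apply this per class so that every disease category contributes the same number of training images.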

3.2. Data Preprocessing

The Excess Green Index (ExG) is a specialized vegetation index used predominantly in remote sensing, agricultural monitoring, and environmental studies to enhance the visibility and detection of green vegetation in RGB images. ExG exploits the distinct reflectance characteristics of green plants to emphasize the green portion of an image while suppressing its red and blue components, thus separating vegetation from the rest of the scene. The index is calculated with the following formula:
ExG = 2G − R − B
where G, R, and B represent the green, red, and blue color channels, respectively. By doubling the green component and subtracting the red and blue components, the index amplifies strongly green pixels, which helps separate vegetation from non-vegetation areas. Algorithm 1 (RGB image input, enhanced green channel output) lists the steps of this approach: channel separation, application of the ExG formula, normalization, and visualization with or without a colormap. ExG is particularly important for monitoring tasks in agriculture and the environment, such as weed detection, canopy cover estimation, and plant health assessment, and it is simple, computationally inexpensive, and easily parallelizable across pixels. However, its empirical sensitivity to changing sunlight and noise levels limits its reliability in detecting green vegetation.
In practice, ExG remains a valuable tool, since it offers a simple and efficient method for enhancing green vegetation in digital photos. Combining this index with other vegetation indices, like the NDVI, can increase the accuracy and robustness of vegetation assessments in different settings, from environmental to agricultural. Results after processing images with ExG can be seen in Figure 7, Figure 8 and Figure 9, showing results on the potato, tomato, and mango leaf datasets, respectively. These images depict the augmentation of the green component through the ExG method, thus locating the diseased areas in leaves.
Figure 7. The visualization of the applied preprocessing technique on Potato Leaf Disease.
Figure 8. The visualization of the applied preprocessing technique on Tomato Leaf Disease.
Figure 9. The visualization of the applied preprocessing technique on Mango Leaf Disease.
Algorithm 1: Computing the Excess Green Index (ExG) in Digital Images.
Step 1. Input Image: Load the RGB image. Output: RGB image ready for processing.
Step 2. Channel Separation: Split the image into its red, green, and blue channels. Operation: R, G, B = cv2.split(image).
Step 3. Compute Excess Green Index: Apply the ExG formula to enhance the green channel and suppress the red and blue channels. Operation: ExG = 2G − R − B.
Step 4. Normalization (Optional): Normalize the ExG values so that pixel intensities fall within the desired range (e.g., 0–255). Operation: ExG = cv2.normalize(ExG, None, 0, 255, cv2.NORM_MINMAX).
Step 5. Visualization (Optional): Apply a colormap to enhance visual interpretation of the ExG values. Operation: ExG_img = cv2.applyColorMap(ExG.astype(np.uint8), cv2.COLORMAP_JET).
Step 6. Save/Display Image: Save or display the processed ExG image with an appropriate filename (e.g., image_exg.jpg).
Step 7. Output Image: The final processed image highlighting enhanced green vegetation.
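Steps 1–4 of Algorithm 1 can be sketched without OpenCV as a plain NumPy function; this is a minimal equivalent of the table's cv2 calls, computing ExG in float (so negative values are preserved) and then min-max normalizing to the 0–255 range.

```python
import numpy as np

def excess_green(rgb):
    # ExG = 2G - R - B, computed in float32 so negative values survive,
    # then min-max normalized to 0-255 (the optional step 4 of Algorithm 1).
    rgb = rgb.astype(np.float32)
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    exg = 2.0 * g - r - b
    exg -= exg.min()                      # shift so the minimum becomes 0
    if exg.max() > 0:
        exg = exg / exg.max() * 255.0     # scale so the maximum becomes 255
    return exg.astype(np.uint8)
```

Colormapping (step 5) is purely for visualization and can be applied afterwards with any plotting library.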

3.3. Model Architecture

The proposed hybrid model integrates NASNetMobile with Few-Shot Learning techniques to classify images efficiently. NASNetMobile is used for its feature extraction capability, combined with a prototypical network for robust classification, particularly in cases with limited labeled data. The dataset used for training and testing contains images organized into multiple classes. Preprocessing involves resizing images to 224 × 224 pixels and normalizing their pixel values, ensuring consistency of input size and intensity. All images were resized to 224 × 224 pixels using bilinear interpolation, a method in which each new pixel value is computed as the weighted average of the four nearest pixels, maintaining smooth transitions during resizing. This ensures consistency across all input images, which is required for efficient processing by the deep learning model.
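The bilinear resizing described above, together with the subsequent 0–1 normalization, can be sketched with Pillow and NumPy; this is an illustrative helper, not the paper's code, and assumes Pillow is available.

```python
import numpy as np
from PIL import Image

def preprocess(img, size=(224, 224)):
    # Bilinear resize to the fixed model input size, then scale pixel
    # values from [0, 255] to [0, 1] as float32 for stable training.
    if not isinstance(img, Image.Image):
        img = Image.open(img)              # accept a path or an open image
    img = img.convert("RGB").resize(size, Image.BILINEAR)
    return np.asarray(img, dtype=np.float32) / 255.0
```

Each dataset image would be passed through this helper before being fed to the feature extractor.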
Additionally, after resizing, the pixel values were normalized to the range 0 to 1 by dividing the original values (0 to 255) by 255. This normalization puts the input values on a common scale, enhancing training stability and helping the model converge efficiently, since large variations in raw pixel values no longer disturb the learning process. Together, resizing and normalization make the model input uniform and well scaled, allowing efficient training with higher accuracy. After preprocessing, the dataset is divided into training and validation sets, which are further divided into support and query sets to enable Few-Shot Learning. As shown in Figure 8, each class of the dataset contains a specified number of support samples ($n_{shot}$) and query samples ($n_{query}$). Feature extraction uses the lightweight, computationally efficient NASNetMobile CNN architecture with its final classification layer removed. The network's initial convolutional layers contain filters that capture edges, textures, and progressively more complex patterns at increasing levels of abstraction. These convolutional layers are followed by batch normalization to stabilize learning, with ReLU activation functions introducing the non-linearity that enables the model to learn complex patterns. Pooling layers reduce the spatial dimensions of the feature maps, summarizing information while reducing computational complexity. Passing through these layers, NASNetMobile produces a high-dimensional feature vector denoted as
$z_i^{\mathrm{NASNet}} = f_{\mathrm{NASNet}}(x_i)$
where $z_i^{\mathrm{NASNet}}$ is the feature vector extracted from image $x_i$ and $f_{\mathrm{NASNet}}$ is the NASNetMobile feature-extraction function applied to each input image. To make the feature vectors more manageable for subsequent processing, a fully connected (dense) layer is applied to reduce their dimensionality. This transformation is represented by the following equation:
$z_i = \mathrm{ReLU}(W_f z_i^{\mathrm{NASNet}} + b_f)$
Here, $W_f$ is the weight matrix, $b_f$ is the bias vector, and $\mathrm{ReLU}$ is the activation applied after the linear transformation. The result is a reduced, feature-transformed vector $z_i$ that is more efficient to use in the classification stage.
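This dense reduction step can be illustrated with NumPy; the 1056 → 256 dimensions follow Algorithm 2 (Step 3), while the weight values below are random placeholders rather than trained parameters:

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(x, 0.0)

# Illustrative parameters: project NASNetMobile's 1056-dim feature vector
# (dimensions from Algorithm 2, Step 3) down to 256 dimensions.
W_f = rng.normal(0.0, 0.02, size=(256, 1056))  # weight matrix (placeholder values)
b_f = np.zeros(256)                            # bias vector

def reduce_features(z_nasnet):
    """z_i = ReLU(W_f z_i^NASNet + b_f): the dense dimensionality-reduction step."""
    return relu(W_f @ z_nasnet + b_f)
```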
For classification, the model uses a Prototypical Network, a metric-based Few-Shot Learning method that classifies images by comparing query feature vectors with prototypes, the representative feature vectors of each class. Specifically, the prototype of each class is the average of the feature vectors of all support examples in that class:
$C_k = \frac{1}{|S_k|} \sum_{(x_i, y_i) \in S_k} z_i$
where $S_k$ is the set of support examples for class $k$ and $C_k$ is the resulting prototype. Thereafter, the distance between the feature vector of a query example and each class prototype is computed using the Euclidean distance metric, given by
$d(z_q, C_k) = \lVert z_q - C_k \rVert_2$
The query example is classified based on the nearest prototype, with the predicted class determined by
$\hat{y}_q = \arg\min_k \, d(z_q, C_k)$
meaning that the class whose prototype is most similar to the query example in the feature space will be chosen as the predicted class.
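The prototype, distance, and nearest-prototype computations above translate directly into a few lines of NumPy (an illustrative sketch, not the authors' code):

```python
import numpy as np

def prototypes(support_feats, support_labels, n_classes):
    """C_k: mean of the support feature vectors belonging to class k."""
    return np.stack([support_feats[support_labels == k].mean(axis=0)
                     for k in range(n_classes)])

def classify(query_feats, protos):
    """Assign each query to the class of its nearest prototype (Euclidean)."""
    # dists[q, k] = ||z_q - C_k||_2
    dists = np.linalg.norm(query_feats[:, None, :] - protos[None, :, :], axis=-1)
    return dists.argmin(axis=1)
```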
The model is trained and evaluated by iterating over the support and query sets, with training minimizing the cross-entropy loss. The training loss is defined as
$L_{\mathrm{train}} = \frac{1}{N} \sum_{i=1}^{N} \mathrm{CrossEntropy}(\hat{y}_i, y_i)$
where $N$ is the total number of training examples, $\hat{y}_i$ is the predicted class for the $i$-th example, and $y_i$ is the true class label. This loss measures the difference between the predicted and true classes, encouraging the model to output accurate class probabilities. The Adam optimizer is used to update the model parameters during training.
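In the standard Prototypical Network formulation, the class probabilities fed into the cross-entropy loss come from a softmax over negative prototype distances; under that assumption, a per-episode NumPy sketch of the loss is:

```python
import numpy as np

def episode_loss(query_feats, query_labels, protos):
    """L_train: mean cross-entropy over the N query examples, with class
    probabilities taken as a softmax over negative Euclidean distances to
    the class prototypes (the standard Prototypical Network choice)."""
    dists = np.linalg.norm(query_feats[:, None, :] - protos[None, :, :], axis=-1)
    logits = -dists
    logits = logits - logits.max(axis=1, keepdims=True)   # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -log_probs[np.arange(len(query_labels)), query_labels].mean()
```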
The overall architecture of this hybrid model combines the strong feature-extraction ability of NASNetMobile with the robust classification of Prototypical Networks. The design is particularly suited to image-classification tasks with limited labeled data, integrating convolutional layers, pooling layers, a fully connected layer for dimensionality reduction, and a prototypical network for classification. The mathematical basis of the model, from feature extraction through dimensionality reduction, prototype computation, distance measurement, and classification, is laid out above, and the process steps are well defined (see Algorithm 2), ensuring good performance. This architecture, illustrated in Figure 10, is an effective solution for image-classification tasks, particularly when limited labeled data are available.
Algorithm 2: Structured workflow of the hybrid model for feature extraction, classification, and training, with corresponding inputs and outputs.
| Step | Explanation | Input | Output |
|---|---|---|---|
| 1. Data Preparation | Load the image dataset; preprocess by resizing to 224 × 224 pixels and normalizing pixel values. Split the dataset into training and testing sets, then further divide into support and query sets for Few-Shot Learning. | Raw image dataset | Preprocessed support and query sets for training and testing |
| 2. Define Feature Extractor (NASNetMobile) | Initialize the NASNetMobile architecture and remove its final classification layer so it serves as a feature extractor. | Preprocessed images | Feature vectors extracted by NASNetMobile |
| 3. Dimensionality Reduction | Pass the feature vectors through a fully connected (dense) layer to reduce their dimensionality from 1056 to 256. | Feature vectors from NASNetMobile | Feature vectors with reduced dimensionality |
| 4. Prototypical Network Initialization | Compute the prototype vector for each class by averaging the feature vectors of that class's support examples. | Feature vectors of support images, class labels | Prototype vectors for each class |
| 5. Distance Calculation | Calculate the Euclidean distance between the query feature vectors and the class prototypes. | Query feature vectors, class prototypes | Distances between query feature vectors and class prototypes |
| 6. Classification | Classify query examples by the nearest prototype (minimum distance). | Distances between query feature vectors and class prototypes | Predicted classes for query examples |
| 7. Training Loop | For each epoch, perform a forward pass, compute the cross-entropy loss, update model weights with a backward pass, and track training loss and accuracy. | Support and query sets, initialized model | Trained model, training loss, and accuracy |
| 8. Evaluation | Evaluate the trained model on the testing set; compute testing loss and accuracy. | Testing support and query sets, trained model | Testing loss and accuracy |

4. Results

The dataset used for training and testing consisted of 19,304 images of potato, tomato, and mango leaves showing different phases of disease development in the respective crops. These images were obtained from online databases, and each was proportionally scaled to 700 × 600 pixels to optimize feature extraction and classification. The AgirLeafNet system developed in this work builds on NASNetMobile for feature extraction and Few-Shot Learning methods for classification. Training was performed for a total of 100 epochs; the optimal model, identified at the 30th epoch for each dataset, attained an F1-score of 0.99 for potato, 0.98 for tomato, and 0.92 for mango. Evaluation of accuracy, specificity, and sensitivity indicated that the NASNet-based system benefited markedly from image enhancement: without enhancement, accuracy was 97% for potato, 89% for tomato, and 96% for mango; with enhancement, it reached 99% for potato, 99.8% for tomato, and 92% for mango. The AgirLeafNet system was developed on an HP machine with a Core i9 CPU, 32 GB RAM, a 4 GB NVIDIA video card, and 64-bit Windows 10, using the Anaconda platform with Python. The dataset was split into 70% for training and 30% for testing, with a learning rate of 0.0001 for 100 batches.

4.1. Experiment 1

In this experiment, seven contemporary deep learning models were employed to evaluate the efficacy of the proposed architecture: VGG16, VGG19, Xception, InceptionV3, DenseNet, ResNet, and EfficientNet. These models were trained for the same number of epochs, and their results were compared with those of the proposed AgirLeafNet system. Figure 11 presents the accuracy comparison between the AgirLeafNet system and these models, Table 4 illustrates the accuracy comparison across different datasets, and Table 5 highlights the real-time speedup of the proposed AgirLeafNet model over the other standard deep learning architectures. The results demonstrate that the AgirLeafNet system outperformed the other models. Figure 12 shows the receiver operating characteristic (ROC) curves and the corresponding area under the curve (AUC).

4.2. Experiment 2

In this experiment, we assessed the effectiveness of our proposed AgirLeafNet method on the Potato Plant dataset [26]. We first evaluated the loss function and the model's performance on both the training and validation sets. Figure 13 and Figure 14 graphically depict the confusion matrix and the training and validation accuracy of the AgirLeafNet model when trained on this dataset. The results demonstrate the model's high efficacy in both settings: it achieved 100% accuracy on both the training and validation sets of this dataset [26].

4.3. Experiment 3

To further validate the effectiveness of our proposed model, we applied AgirLeafNet to the Potato Village dataset [26], collected from Kaggle. We first examined the model's performance on the training and validation sets and verified, based on the loss function, that the developed model was efficient. The training and validation accuracies obtained during training are shown in Figure 15 and Figure 16, respectively. The model performed almost perfectly, achieving 99.8% accuracy for both training and validation on this dataset [26].

4.4. Experiment 4

In this experiment, we used the Tomato Dataset [27], collected from Kaggle, to evaluate the efficacy of our proposed AgirLeafNet technique. We compared the model's performance on the training and validation sets and examined the loss function on both. The accuracy and confusion matrix of AgirLeafNet during training and validation on the Tomato Dataset are shown in Figure 17 and Figure 18. The results show that the model performed well on both sets; specifically, it achieved a 92% accuracy rate on both the training and validation sets [27].

4.5. Experiment 5

In this experiment, we evaluated the efficacy of the proposed AgirLeafNet technique on the Mango Dataset [28] from Kaggle. We first compared the model's performance on the training and validation sets and estimated the loss function on both. The accuracy and confusion matrix of the AgirLeafNet model for training and validation on the Mango Dataset are shown in Figure 19 and Figure 20. The results show that the model performed excellently in both settings, achieving an accuracy of 99.8% on the test and validation sets of the Mango Dataset [28].

4.6. State of the Art Comparison

Deep learning models have improved the accuracy and efficiency of agricultural disease detection across various crops. In particular, the model presented in [30] uses a traditional convolutional architecture coupled with max-pooling layers for potato leaf diseases and achieves a high accuracy of 98.83% on the Plant Village dataset, demonstrating the capacity of conventional CNN frameworks in single-crop, well-structured contexts. The MDSCIRNet architecture [29] implements a more complex approach, combining depthwise separable convolution with a multi-head attention mechanism to reach 99.24% accuracy. Its combination with a Support Vector Machine further increases the accuracy to 99.33%, showing how powerful the fusion of deep learning with classical machine learning techniques can be.
Moreover, state-of-the-art image enhancement methods, such as CLAHE and ESRGAN, improve data quality and strengthen models for potato leaf disease classification. Against this background, AgirLeafNet sets a new benchmark in multi-crop detection, covering potato, tomato, and mango. By integrating NASNetMobile for feature extraction with Few-Shot Learning for classification, AgirLeafNet achieves state-of-the-art accuracy rates of 100% for potato, 92% for tomato, and 99.8% for mango leaf disease detection, as shown in Figure 21 and Figure 22. The model's original use of ExG further refines its ability to isolate vegetative features, making it effective even when the labeled dataset is small. Its versatility and scalability across different crops take AgirLeafNet beyond a single-crop model, providing a robust and complete solution for the early detection of agricultural diseases and raising the bar for future developments in this area.

5. Discussion

Agricultural productivity is one of the global foundations of food security, and the health of crops such as potato, tomato, and mango is paramount to it. These crops are regularly ravaged by leaf diseases that cause considerable losses in yield and quality, reducing economic returns for farmers. Among the most dangerous are late blight and early blight of potato, tomato leaf mold, and mango anthracnose, which can cause major losses when diagnosis and control are not timely.

Potato Leaf Diseases: Potato (Solanum tuberosum) is an important staple worldwide, yet it is highly susceptible to leaf diseases, most importantly late blight, early blight, and leaf spot. These diseases not only reduce crop yield but also deteriorate produce quality, making it less suitable for consumption and processing. Traditional diagnostic methods based on visual inspection are labor-intensive and error-prone, as the symptoms of different diseases show only minute differences. For example, late blight, caused by the pathogen Phytophthora infestans, can destroy an entire field within days if not detected early. Fast and correct identification is therefore imperative so that control measures can be initiated before the disease progresses.

Tomato Leaf Diseases: The tomato (Solanum lycopersicum) is another significant crop under attack from several categories of leaf disease, including late blight, early blight, and leaf mold. These diseases can cause large yield losses, especially in extensive farming. A particular challenge with tomato leaf diseases is the variability of symptoms, which can be influenced by environmental factors, making diagnosis by visual appearance difficult. There is therefore a growing demand for automated systems that ensure continuous and correct disease diagnosis and enable farmers to take timely action.
Mango Leaf Diseases: Mango is one of the most prized tropical fruit crops, and it is under continuous threat from diseases such as anthracnose, powdery mildew, and leaf spot. These diseases lower yield and reduce the fruit's market value through blemishes and other quality defects. Diagnosing mango leaf diseases is particularly difficult because variable symptoms can occur on almost all parts of the plant, and standard diagnostic methods are inefficient in large orchards, where manual examination is impractical. The effect of these diseases goes beyond the economic scope: they can push millions of smallholder farmers into destitution. Early diagnosis of leaf diseases can therefore save these crops from large-scale failure and help ensure food security. Traditional disease-diagnosis methods are rapidly being supplemented by advanced technological solutions offering better accuracy and scalability.

Advancements in Deep Learning Models for Disease Detection: Recent progress in machine learning (ML) and deep learning (DL) has transformed agricultural disease detection, providing powerful tools for the challenges posed by leaf diseases in crops like potato, tomato, and mango. Deep learning models, especially Convolutional Neural Networks, automatically learn and extract features from images, detecting and classifying plant diseases with a high level of accuracy.

Sequential Deep Learning Model [30]: The sequential deep learning model is a representative application of CNNs to plant disease detection, particularly in potato crops. It uses a traditional convolutional architecture with max-pooling layers to extract features from input images. Trained on the Plant Village dataset, it achieved an accuracy of 98.83%.
This model has proved very effective in detecting late and early blight in potato leaves; its strength lies in its simplicity and effectiveness for single-crop applications. However, its reliance on a highly structured dataset and its focus on a single crop limit its scalability to other crops or more diverse datasets.

MDSCIRNet Architecture [29]: The MDSCIRNet architecture is a notable milestone among deep learning models for plant disease detection. It combines depthwise separable convolution with a multi-head attention mechanism, which helps the model focus on the most important features of the input images. MDSCIRNet recorded 99.24% accuracy in potato leaf disease detection, which improved to 99.33% when combined with a Support Vector Machine. Advanced image enhancement techniques, such as CLAHE and ESRGAN, improve the quality of the input data and help the model discriminate better between healthy and diseased leaves. MDSCIRNet thus illustrates the power of combining deep learning with classical machine learning techniques.

AgirLeafNet Model: The AgirLeafNet model sets a new benchmark by widening the recognition scope of disease diagnosis to multiple crops: potato, tomato, and mango. The model integrates NASNetMobile, a state-of-the-art CNN architecture, for feature extraction with Few-Shot Learning for classification. Few-Shot Learning is especially relevant for agricultural applications, where labeled data are scarce. In testing, AgirLeafNet achieved accuracy rates of 100%, 92%, and 99.8% on potato, tomato, and mango leaf diseases, respectively.
One of the cardinal innovations of this model is the use of the ExG index, which amplifies the green component of the image, making vegetative features easier to isolate and analyze. This works particularly well for the subtle visual differences between healthy and diseased leaves. The versatility and scalability built into AgirLeafNet make it a robust solution for multi-crop disease detection, a significant improvement over traditional single-crop models.

Future Directions and Challenges: While deep learning models for plant disease detection have improved considerably in recent years, a number of challenges remain if these technologies are to realize their full potential in agriculture.

Data Availability and Quality: Among the bottlenecks to developing robust DL models are data availability and quality. Although large datasets such as Plant Village offer a great resource, their applicability to real-world situations is limited. Diverse datasets are needed that cover a variety of crops and disease symptoms under a range of environmental conditions. Image quality also plays an important role: poor-quality images can mislead models into errors. Although methods such as CLAHE and ESRGAN have considerably improved model performance, applying them is challenging, as it demands a deep understanding of both the specific and general characteristics of the dataset.

Model Interpretability and Explainability: Another major challenge of deep learning is model interpretability. Although MDSCIRNet and AgirLeafNet provide high accuracy, it is often difficult to understand how these models arrive at their predictions.
This lack of transparency may deter farmers and agricultural professionals who rely on these models from adopting them, leaving them skeptical of the "black box" approach. Future work should improve the interpretability of DL models, for example through attention mechanisms, which indicate the features most influential on a decision, and explainable AI (XAI) techniques. Substantial challenges also arise in moving DL models from controlled research environments to practical agricultural fields: environmental variation, differences between crop varieties, and the uncertainty introduced by co-occurring diseases all affect model performance. More experiments on practical deployment in widespread agricultural settings are therefore needed, integrating DL models with technologies like IoT for real-time disease monitoring and management. Wide adoption will require these systems to be user-friendly and accessible to all farmers, particularly in resource-constrained settings.

Sustainability and Ethical Considerations: As DL models become an integral part of agriculture, their impact on sustainability and ethics must be considered. These models consume substantial computational resources for training and deployment, raising questions about their environmental impact. Moreover, the use of automated systems in agriculture needs careful management so that it does not displace labor or aggravate existing inequalities. Future research on DL models should focus on energy efficiency and equitable benefit sharing.

Integration with IoT and Advanced Sensing Technologies: Integrating DL models with IoT and advanced sensing technologies such as hyperspectral imaging and UAV surveillance is very promising for improving the accuracy and efficiency of plant disease detection.
These technologies can provide real-time data on crop health, allowing earlier detection and more targeted interventions. However, their integration with existing agricultural practices brings technical and logistical challenges in data management and analysis. How much meaningful change DL models can bring to agriculture depends on future research developing scalable, interoperable systems that seamlessly integrate data from multiple sources (Table 6).

6. Conclusions

This study presents a significant advancement in agricultural disease diagnosis: AgirLeafNet combines NASNetMobile feature extraction with Few-Shot Learning (FSL) classification. The work addresses the problems of scarce labeled data and the need for multi-crop disease detection in agriculture, helping farmers improve production and enabling sustainable agriculture. The model yielded high accuracy rates across several crops, including 100% for potato, 92% for tomato, and 99.8% for mango leaf disease detection, demonstrating that it performs well and effectively for real-world applications. The use of ExG for image preprocessing strengthens the isolation of vegetative features and is highly effective for agricultural applications where early and accurate disease detection is a priority. The research sets a new benchmark in agricultural disease detection for sustainable farming and improved food security, and it demonstrates the potential of advanced deep-learning architectures combined with practical techniques. In addition, the high scalability and adaptability of AgirLeafNet across multiple crops reveal its versatility for broader applications within agriculture. Future research should extend the model's applicability to other crops and fine-tune its performance across a wide range of environmental conditions, ultimately contributing to more resilient and sustainable agricultural systems.

Author Contributions

Conceptualization, S.S.; methodology, M.I.S. (Muhammad Irfan Sharif), M.I.S. (Muhammad Imran Sharif), M.Z.S. and F.M.; validation, S.S., M.I.S. (Muhammad Irfan Sharif) and M.I.S. (Muhammad Imran Sharif); formal analysis, S.S. and M.I.S. (Muhammad Imran Sharif); investigation, M.I.S. (Muhammad Irfan Sharif); resources, M.I.S. (Muhammad Imran Sharif) and F.M.; writing—original draft preparation, S.S., M.I.S. (Muhammad Irfan Sharif) and M.I.S. (Muhammad Imran Sharif); writing—review and editing, M.Z.S. and F.M.; visualization, M.I.S. (Muhammad Imran Sharif); supervision, M.Z.S.; project administration, M.Z.S. and F.M.; funding acquisition, F.M. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

These data were derived from the following resources available in the public domain: Potato Plant Diseases Data (https://www.kaggle.com/datasets/hafiznouman786/potato-plant-diseases-data) (accessed on 22 September 2024), Tomato Leaf Diseases Dataset (https://www.kaggle.com/datasets/kaustubhb999/tomatoleaf) (accessed on 22 September 2024), Mango Leaf Disease Dataset (https://www.kaggle.com/datasets/aryashah2k/mango-leaf-disease-dataset) (accessed on 22 September 2024).

Conflicts of Interest

The authors declare that there are no conflicts of interest regarding the publication of this study.

References

  1. Mohanty, S.P.; Hughes, D.P.; Salathé, M. Using Deep Learning for Image-Based Plant Disease Detection. Front. Plant Sci. 2016, 7, 1419. [Google Scholar] [CrossRef] [PubMed]
  2. Picon, A.; Alvarez-Gila, A.; Seitz, M.; Ortiz-Barredo, A.; Echazarra, J.; Johannes, A. Deep Convolutional Neural Networks for Mobile Capture Device-Based Crop Disease Classification in the Wild. Comput. Electron. Agric. 2019, 161, 280–290. [Google Scholar] [CrossRef]
  3. Sibiya, M.; Sumbwanyambe, M. A Computational Procedure for the Recognition and Classification of Maize Leaf Diseases Out of Healthy Leaves Using Convolutional Neural Networks. Agriculture 2019, 9, 67. [Google Scholar] [CrossRef]
  4. Ramcharan, A.; McCloskey, P.; Baranowski, K.; Legg, J.; Achieng, F.; Zia, A.; Hughes, D.P. A Mobile-Based Deep Learning Model for Cassava Disease Diagnosis. Front. Plant Sci. 2017, 8, 1852. [Google Scholar] [CrossRef] [PubMed]
  5. Ferentinos, K.P. Deep Learning Models for Plant Disease Detection and Diagnosis. Comput. Electron. Agric. 2018, 145, 311–318. [Google Scholar] [CrossRef]
  6. Brahimi, M.; Arsenovic, M.; Laraba, S.; Sladojevic, S.; Boukhalfa, K.; Moussaoui, A. Deep Learning for Plant Diseases: Detection and Saliency Map Visualization. In Human and Machine Learning; Springer: Cham, Switzerland, 2017; pp. 93–112. Available online: https://www.springerprofessional.de/en/deep-learning-for-plant-diseases-detection-and-saliency-map-visu/15829084 (accessed on 22 September 2024).
  7. Zhang, X.; Qiao, Y.; Meng, Q.; Fan, C.; Zhang, M. Identification of Maize Leaf Diseases Using Improved Deep Convolutional Neural Networks. IEEE Access 2020, 8, 5163–5171. [Google Scholar] [CrossRef]
  8. Shoaib, M.; Shah, B.; EI-Sappagh, S.; Ali, A.; Ullah, A.; Alenezi, F.; Gechev, T.; Hussain, T.; Ali, F. An advanced deep learning models-based plant disease detection: A review of recent research. Front. Plant Sci. 2023, 14, 1158933. [Google Scholar] [CrossRef]
  9. Amara, J.; Bouaziz, B.; Algergawy, A. A Deep Learning-based Approach for Banana Leaf Diseases Classification. 2017. Available online: www.semanticscholar.org/paper/A-Deep-Learning-based-Approach-for-Banana-Leaf-Amara-Bouaziz/9fcecc67da35c6af6defd6825875a49954f195e9 (accessed on 22 September 2024).
  10. Saleem, R.; Shah, J.H.; Sharif, M.; Yasmin, M.; Yong, H.-S.; Cha, J. Mango Leaf Disease Recognition and Classification Using Novel Segmentation and Vein Pattern Technique. Appl. Sci. 2021, 11, 11901. [Google Scholar] [CrossRef]
  11. Munawar, S.M.; Rajendiran, D.; Sabjan, K.B. Plant Diseases Diagnosis with Artificial Intelligence (AI). In Microbial Data Intelligence and Computational Techniques for Sustainable Computing. Microorganisms for Sustainability; Khamparia, A., Pandey, B., Pandey, D.K., Gupta, D., Eds.; Springer: Singapore, 2024; Volume 47. [Google Scholar] [CrossRef]
  12. Al-Adhaileh, M.H.; Verma, A.; Aldhyani, T.H.H.; Koundal, D. Potato Blight Detection Using Fine-Tuned CNN Architecture. Mathematics 2023, 11, 1516. [Google Scholar] [CrossRef]
  13. Barbedo, J.G.A. Plant Disease Identification from Individual Lesions and Spots Using Deep Learning. Biosyst. Eng. 2019, 180, 96–107. [Google Scholar] [CrossRef]
  14. Oppenheim, D.; Shani, G. Potato Disease Classification Using Convolution Neural Networks. Adv. Anim. Biosci. 2017, 8, 244–249. [Google Scholar] [CrossRef]
  15. Singh, V.; Misra, A.K. Detection of Plant Leaf Diseases Using Image Segmentation and Soft Computing Techniques. Inf. Process. Agric. 2017, 4, 41–49. [Google Scholar] [CrossRef]
  16. Hossain, M.I.; Jahan, S.; Al Asif, M.R.; Samsuddoha, M.; Ahmed, K. Detecting tomato leaf diseases by image processing through deep convolutional neural networks. Smart Agric. Technol. 2023, 5, 100301. [Google Scholar] [CrossRef]
  17. Atila, Ü.; Uçar, M.; Akyol, K.; Uçar, E. Plant Leaf Disease Classification Using EfficientNet Deep Learning Model. Ecol. Inform. 2021, 61, 101182. [Google Scholar] [CrossRef]
  18. Li, P.; Zhong, N.; Dong, W.; Zhang, M.; Yang, D. Identification of tomato leaf diseases using convolutional neural network with multi-scale and feature reuse. Int. J. Agric. Biol. Eng. 2023, 16, 226–235. [Google Scholar] [CrossRef]
  19. Paymode, A.S.; Magar, S.P.; Malode, V.B. Tomato Leaf Disease Detection and Classification using Convolution Neural Network. In Proceedings of the 2021 International Conference on Emerging Smart Computing and Informatics (ESCI), Pune, India, 5–7 March 2021; pp. 564–570. [Google Scholar] [CrossRef]
  20. Khalid, M.; Sarfraz, M.S.; Iqbal, U.; Aftab, M.U.; Niedbała, G.; Rauf, H.T. Real-Time Plant Health Detection Using Deep Convolutional Neural Networks. Agriculture 2023, 13, 510. [Google Scholar] [CrossRef]
  21. Gautam, V.; Ranjan, R.K.; Dahiya, P.; Kumar, A. ESDNN: A novel ensembled stack deep neural network for mango leaf disease classification and detection. Multimed. Tools Appl. 2024, 83, 10989–11015. [Google Scholar] [CrossRef]
  22. Mahmud, B.U.; Al Mamun, A.; Hossen, M.J.; Hong, G.Y.; Jahan, B. Light-Weight Deep Learning Model for Accelerating the Classification of Mango-Leaf Disease. Emerg. Sci. J. 2024, 8, 28–42. [Google Scholar] [CrossRef]
  23. Saravanan, T.M.; Jagadeesan, M.; Selvaraj, P.A.; Aravind, M.; Dharun Raj, G.; Lokesh, P. Prediction of Mango Leaf Diseases Using Convolutional Neural Network. In Proceedings of the 2023 International Conference on Computer Communication and Informatics (ICCCI), Coimbatore, India, 23–25 January 2023. [Google Scholar]
  24. Varma, T.; Mate, P.; Azeem, N.A.; Sharma, S.; Singh, B. Automatic mango leaf disease detection using different transfer learning models. Multimed. Tools Appl. 2024. [Google Scholar] [CrossRef]
  25. Kaur, G.; Sharma, N.; Malhotra, S.; Devliyal, S.; Gupta, R. Mango Leaf Disease Detection using VGG16 Convolutional Neural Network Model. In Proceedings of the 2024 3rd International Conference for Innovation in Technology (INOCON), Bangalore, India, 1–3 March 2024; pp. 1–6. [Google Scholar] [CrossRef]
  26. Nouman, H. Potato Plant Diseases Data. Kaggle, Last modified 28 May 2024. Available online: https://www.kaggle.com/datasets/hafiznouman786/potato-plant-diseases-data (accessed on 22 September 2024).
  27. Kaustubh, B. Tomato Leaf Disease Detection. Kaggle, 24 April 2020. Available online: https://www.kaggle.com/datasets/kaustubhb999/tomatoleaf (accessed on 22 September 2024).
  28. Shah, A. Mango Leaf Disease Dataset. Kaggle, 14 April 2023. Available online: https://www.kaggle.com/datasets/aryashah2k/mango-leaf-disease-dataset (accessed on 22 September 2024).
  29. Kumar, R.; Agrawal, T.; Dwivedi, V.D.; Khatter, H. Potato Leaf Disease Classification Using Deep Learning Model. In Communications in Computer and Information Science, Proceedings of the Machine Learning, Image Processing, Network Security and Data Sciences (MIND 2023), Hamirpur, India, 21–22 December 2023; Chauhan, N., Yadav, D., Verma, G.K., Soni, B., Lara, J.M., Eds.; Springer: Cham, Switzerland, 2024; Volume 2128. [Google Scholar] [CrossRef]
  30. Reis, H.C.; Turk, V. Potato leaf disease detection with a novel deep learning model based on depthwise separable convolution and transformer networks. Eng. Appl. Artif. Intell. 2024, 133, 108307. [Google Scholar] [CrossRef]
Figure 1. Comparison of Deep Learning vs. Machine Learning methods in plant disease detection.
Figure 2. Workflow of Proposed Methodology.
Figure 3. Different classes of potato leaf diseases, including healthy leaves and those infected by early blight and late blight. Images show the visual difference between healthy and diseased leaves, which helps the model in its classification process.
Figure 4. A few classes of tomato leaf diseases, showcasing healthy leaves along with those affected by bacterial spot, early blight, and late blight. Examples like these help the model differentiate the various disease symptoms of tomato plants.
Figure 5. Several classes of mango leaf diseases, from healthy to infected by anthracnose or bacterial canker. The variety in the visual presentation underlines the versatility of the model in the way it can detect diseases across a wide range of crops.
Figure 6. Representative images from the different leaf disease datasets.
Figure 10. The model architecture diagram considering NASNetMobile for feature extraction and a multi-feature fusion network with prototypical networks for classification.
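The classification stage in Figure 10 relies on prototypical networks: each class is represented by the mean embedding of its few labeled support examples, and a query is assigned to the class of its nearest prototype. The core operation can be sketched in plain NumPy; the 2-D "embeddings" below are toy stand-ins for NASNetMobile features, not values from the paper:

```python
import numpy as np

def prototypes(support_embeddings, support_labels):
    """Class prototypes: the mean embedding of each class's support examples."""
    classes = np.unique(support_labels)
    protos = np.stack([
        support_embeddings[support_labels == c].mean(axis=0) for c in classes
    ])
    return classes, protos

def classify(query_embeddings, classes, protos):
    """Assign each query to the class of its nearest (Euclidean) prototype."""
    # Pairwise distances, shape (n_queries, n_classes), via broadcasting.
    d = np.linalg.norm(query_embeddings[:, None, :] - protos[None, :, :], axis=-1)
    return classes[d.argmin(axis=1)]

# Toy 2-D embeddings for two disease classes (illustrative only).
support = np.array([[0.0, 0.0], [0.0, 1.0], [5.0, 5.0], [5.0, 6.0]])
labels = np.array([0, 0, 1, 1])
classes, protos = prototypes(support, labels)

queries = np.array([[0.2, 0.4], [4.8, 5.6]])
print(classify(queries, classes, protos))  # [0 1]
```

In the full pipeline, the embeddings would come from the NASNetMobile feature extractor rather than being hand-specified.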
Figure 11. The accuracy comparison of different deep learning models.
Figure 12. The comparison of all models using ROC and AUC.
Figure 13. The training and validation accuracy and loss of the proposed model with the Potato Plant dataset.
Figure 14. The confusion matrix for Potato Plant dataset.
Figure 15. The training and validation accuracy and loss on the Potato Village dataset.
Figure 16. The confusion matrix for Potato Village Dataset [26].
Figure 17. The training and validation accuracy and loss on the Tomato Dataset [27].
Figure 18. The confusion matrix for Tomato Dataset.
Figure 19. The training and validation accuracy and loss on the Mango Dataset [28].
Figure 20. The confusion matrix for Mango Dataset [28].
Figure 21. Accuracy comparison of deep learning models for agricultural disease detection.
Figure 22. ROC and AUC for Hypothetical Models.
Table 1. Potato, tomato and mango findings as per severity level.
Serial No. | Disease Name | Disease Description
1 | Potato Early Blight | Caused by Alternaria solani; small, dark brown, circular spots appear on the leaves and can enlarge, causing severe defoliation and yield reduction.
2 | Potato Healthy | Healthy potato plants have fresh green leaves without spots or discoloration, indicating proper growth and no disease.
3 | Potato Late Blight | Caused by Phytophthora infestans; water-soaked lesions appear on leaves and stems, followed by rapid decay that can lead to total crop loss.
4 | Tomato Bacterial Spot | Caused by Xanthomonas campestris; first appears as small, dark, water-soaked spots on leaves and fruit, eventually causing defoliation and rendering the fruit unmarketable.
5 | Tomato Early Blight | Caused by Alternaria solani; recognized by concentric rings on older leaves, leading to premature defoliation and reduced fruit production.
6 | Tomato Healthy | Healthy tomato plants have dark green foliage, sturdy stems, and a steady production of blemish-free fruit.
7 | Tomato Late Blight | Large, irregular, water-soaked lesions appear on leaves and stems, progressing rapidly to plant decline.
8 | Tomato Leaf Mold | Caused by Passalora fulva; in severe cases, pale yellowish spots on the upper leaf surface bear grayish-brown mold on the underside, impairing photosynthesis.
9 | Tomato Septoria Leaf Spot | Caused by Septoria lycopersici; produces small, circular leaf spots with dark borders that can lead to defoliation and reduced yield.
10 | Tomato Spider Mites (Two-Spotted Spider Mite) | Infestation results in yellowing or stippling of tomato leaves, potentially causing leaf drop and reduced photosynthesis.
11 | Tomato Target Spot | Caused by Corynespora cassiicola; small, dark lesions occur on leaves, stems, and fruit, later leading to defoliation and rotting.
12 | Tomato Mosaic Virus | Causes mottling, leaf curling, and stunted growth in tomato plants, reducing yield and fruit quality.
13 | Tomato Yellow Leaf Curl Virus | Causes yellowing, leaf curling, and stunted growth, severely affecting fruit production and plant vigor.
14 | Mango Anthracnose | Caused by Colletotrichum gloeosporioides; presents as black, sunken lesions on leaves, stems, and fruit, leading to significant crop loss.
15 | Mango Bacterial Canker | Caused by Xanthomonas campestris; manifests as raised, water-soaked lesions on leaves, stems, and fruit, leading to defoliation and fruit drop.
16 | Mango Cutting Weevil | Damages young shoots and fruit, with larvae tunneling into the seed, affecting fruit quality and yield.
17 | Mango Die Back | A fungal disease in which shoots gradually die from the tips toward the base, affecting the health of the whole tree.
18 | Mango Gall Midge | Causes gall formation on leaves and flower panicles, reducing fruit set and overall plant vigor.
19 | Mango Healthy | Healthy mango trees have dark green, glossy leaves and strong shoots, and produce high-quality fruit without signs of disease or pest infestation.
20 | Mango Powdery Mildew | Infection by Oidium mangiferae appears as a white, powdery growth on leaves, flowers, and young fruit, which may cause premature fruit drop and reduced yield.
21 | Mango Sooty Mould | A black, velvety fungal growth on leaves and fruit, often following sap-sucking insect infestations, which reduces photosynthesis and fruit quality.
Table 2. Existing work for potato, tomato, and mango prediction by various former researchers.
Ref. | Methods | Datasets | Limitations
[1] | CNNs | Annotated potato images | Requires large, annotated datasets
[2] | CNNs and traditional ML | Individual lesion images | Manual feature extraction required
[3] | CNNs with transfer learning | Pre-trained datasets and specific potato disease images | Limited by availability of labeled data
[4] | Image segmentation and soft computing | Plant leaf images | High computational cost
[5] | CNNs | Annotated tomato images | Need for large, labeled datasets
[6] | EfficientNet | Large tomato leaf dataset | Complexity of EfficientNet architecture
[7] | CNNs | Plant Village | Model overfitting concerns, scalability
[8] | Neural networks | Tomato leaf images | Methodological limitations
[9] | CNNs | Real-time plant images | Requires continuous data feed
[10] | CNNs | Mango leaf images | Limited dataset size
[11] | Image processing and ML | Plant leaves | Lack of implementation
[12] | Fine-tuned CNNs | Plant Village | Limited dataset
[13] | Deep learning | Pre-trained datasets and mango leaf images | Limited labeled data
[14] | CNNs | - | Manually labeled dataset
Table 3. Summary of Datasets Used for Training and Testing the AgirLeafNet Model.
Dataset | Classes | Size of Images | Number of Images
Potato Village Dataset [26] | 03 | 700 × 600 | 2152
Potato Plants Dataset [26] | 03 | 700 × 600 | 2152
Tomato Dataset [27] | 10 | 700 × 600 | 11,000
Mango Dataset [28] | 08 | 700 × 600 | 4000
Total | 24 | - | 19,304
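Datasets like those in Table 3 are typically partitioned per class before training so that every disease class is proportionally represented in both splits. A minimal stratified-split sketch follows; the 80/20 ratio is an assumption for illustration, not a figure stated in the paper:

```python
import numpy as np

def stratified_split(labels, train_frac=0.8, seed=0):
    """Return train/validation index arrays that preserve the class
    proportions of `labels` (the 80/20 ratio here is an assumption)."""
    rng = np.random.default_rng(seed)
    train, val = [], []
    for c in np.unique(labels):
        # Shuffle this class's indices, then cut at the chosen fraction.
        idx = rng.permutation(np.flatnonzero(labels == c))
        cut = int(len(idx) * train_frac)
        train.extend(idx[:cut])
        val.extend(idx[cut:])
    return np.array(train), np.array(val)

# Toy label vector: 100 images of class 0, 60 of class 1.
labels = np.array([0] * 100 + [1] * 60)
tr, va = stratified_split(labels)
print(len(tr), len(va))  # 128 32
```

Each class contributes the same fraction to the validation set, which matters for imbalanced collections such as the 11,000-image tomato set versus the 2152-image potato sets.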
Table 4. Proposed architecture comparison with state-of-the-art models in terms of F1-score, recall, and accuracy.
Model | F1-Score | Recall | Accuracy
VGG16 | 79.5% | 77% | 84%
VGG19 | 80% | 78% | 83%
Xception | 82% | 78% | 85%
InceptionV3 | 82% | 81% | 86%
DenseNet | 83% | 81% | 88%
ResNet | 85% | 82% | 89%
EfficientNet | 86% | 82% | 89%
AgirLeafNet | 98% | 97% | 98%
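The F1-score, recall, and accuracy figures in Table 4 all derive from each model's confusion matrix. A minimal sketch of the standard computations, using an illustrative 2-class matrix rather than values from the paper:

```python
import numpy as np

def per_class_metrics(cm):
    """Accuracy plus per-class recall and F1 from a confusion matrix
    (rows = true class, columns = predicted class)."""
    cm = np.asarray(cm, dtype=float)
    tp = np.diag(cm)
    recall = tp / cm.sum(axis=1)      # TP / (TP + FN), per class
    precision = tp / cm.sum(axis=0)   # TP / (TP + FP), per class
    f1 = 2 * precision * recall / (precision + recall)
    accuracy = tp.sum() / cm.sum()
    return accuracy, recall, f1

# Illustrative matrix: 50 true samples of each of two classes.
cm = [[45, 5],
      [10, 40]]
acc, rec, f1 = per_class_metrics(cm)
print(round(acc, 2))  # 0.85
```

Multi-class tables such as the 10-class tomato results follow the same pattern, with per-class scores usually macro-averaged into the single figures reported.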
Table 5. State-of-the-Art Comparison of Deep Learning Models for Agricultural Disease Detection.
Aspect | Sequential Deep Learning Model [29] | MDSCIRNet Architecture [30] | AgirLeafNet Model
Focus | Potato leaf disease detection | Potato leaf disease classification | Multi-crop disease detection (potato, tomato, mango)
Architecture | Sequential convolutional layers with max pooling | Depthwise Separable Convolution (DSC) with multi-head attention | NASNetMobile for feature extraction with Few-Shot Learning (FSL)
Key Features | Traditional CNN with max pooling | DSC, multi-head attention, advanced image enhancement techniques | Excess Green Index (ExG) for enhanced vegetative feature extraction
Dataset | Plant Village dataset (2152 potato leaf images) | Enhanced dataset using CLAHE, ESRGAN | Multi-crop dataset (potato, tomato, mango)
Accuracy Achieved | 98.83% | 99.24% (MDSCIRNet), 99.33% (MDSCIRNet + SVM) | 100% (potato), 92% (tomato), 99.8% (mango)
Innovative Techniques | Sequential deep learning with max pooling | Integration of SVM with MDSCIRNet; CLAHE and ESRGAN for image quality | Few-Shot Learning, Excess Green Index (ExG)
Scalability | Focused on a single crop | Primarily focused on potato leaf disease | Versatile across multiple crops
Strengths | Effective for single-crop disease detection | High accuracy with advanced feature extraction and classification | High versatility, effective with limited labeled data
Limitations | Limited to single-crop focus | High complexity, focused on single crop | Lower accuracy for tomato compared to the other crops
Overall Contribution | Demonstrates the effectiveness of traditional CNNs | Sets a new benchmark in accuracy with a complex architecture | Provides a scalable, versatile solution for multi-crop detection
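The Excess Green Index highlighted in Table 5 has a widely used formulation, ExG = 2g − r − b, computed on chromatic coordinates (r = R/(R+G+B), and likewise for g and b), which emphasizes vegetative pixels over soil and background. A minimal sketch; the 0.2 threshold is an illustrative assumption, not a value from the paper:

```python
import numpy as np

def excess_green(rgb):
    """Excess Green Index, ExG = 2g - r - b, on chromatic coordinates."""
    rgb = np.asarray(rgb, dtype=float)
    total = rgb.sum(axis=-1, keepdims=True)
    total[total == 0] = 1.0  # avoid division by zero on black pixels
    r, g, b = np.moveaxis(rgb / total, -1, 0)
    return 2 * g - r - b

# A green (vegetation-like) pixel next to a brownish (soil-like) pixel.
img = np.array([[[40, 180, 30], [120, 100, 80]]])
exg = excess_green(img)
mask = exg > 0.2  # simple threshold to isolate vegetative regions
print(mask)  # [[ True False]]
```

Masking the input with such an index before feature extraction is one plausible way to suppress background clutter, which is the role the paper assigns to ExG.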
Table 6. Limitations of the AgirLeafNet Model.
Limitation | Description
Data Quality and Augmentation | The AgirLeafNet model's performance depends strongly on input image quality and the preprocessing steps applied; inadequate data augmentation lowers accuracy and robustness.
Computational Complexity | Combining NASNetMobile with Few-Shot Learning is computationally expensive, making the model difficult to run on standard or low-resource hardware.
Limited Generalization | While AgirLeafNet performs well on the evaluated datasets, its ability to generalize across diverse environmental conditions and crop varieties requires further validation.
Integration with Agricultural Practices | Deploying the model in real-world agricultural settings can be challenging due to the need for specialized hardware, software, and technical expertise.
Data Dependency and Privacy | The model's accuracy depends on large, high-quality datasets, raising concerns about data privacy and the ethical implications of collecting data from farmers.
Real-Time Deployment | Real-time, in-field disease detection is challenging because the model is computationally intensive and requires sustained data-processing throughput.
Model Complexity and Interpretability | The model's complex deep-learning architecture raises interpretability issues, making its decision-making process difficult for non-experts to understand.
Cost of Implementation | Setting up AgirLeafNet can be expensive for small-scale or resource-constrained farmers, given the hardware, software, and data acquisition involved.