Article

Automatic and Reliable Leaf Disease Detection Using Deep Learning Techniques

by Muhammad E. H. Chowdhury 1,*, Tawsifur Rahman 1, Amith Khandakar 1, Mohamed Arselene Ayari 2,*, Aftab Ullah Khan 3, Muhammad Salman Khan 3,4, Nasser Al-Emadi 1, Mamun Bin Ibne Reaz 5, Mohammad Tariqul Islam 5 and Sawal Hamid Md Ali 5

1 Department of Electrical Engineering, Qatar University, Doha 2713, Qatar
2 Technology Innovation and Engineering Education (TIEE), College of Engineering, Qatar University, Doha 2713, Qatar
3 AI in Healthcare, Intelligent Information Processing Laboratory, National Center for Artificial Intelligence, Peshawar 25120, Pakistan
4 Department of Electrical Engineering (JC), University of Engineering and Technology, Peshawar 25120, Pakistan
5 Department of Electrical, Electronics and Systems Engineering, Universiti Kebangsaan Malaysia, Bangi 43600, Selangor, Malaysia
* Authors to whom correspondence should be addressed.
AgriEngineering 2021, 3(2), 294-312; https://doi.org/10.3390/agriengineering3020020
Submission received: 9 April 2021 / Revised: 11 May 2021 / Accepted: 17 May 2021 / Published: 20 May 2021
(This article belongs to the Special Issue Intelligent Systems and Their Applications in Agriculture)

Abstract
Plants are a major source of food for the world population. Plant diseases contribute to production loss, which can be tackled with continuous monitoring. Manual plant disease monitoring is both laborious and error-prone. Early detection of plant diseases using computer vision and artificial intelligence (AI) can help to reduce the adverse effects of diseases and also overcome the shortcomings of continuous human monitoring. In this work, we propose the use of a deep learning architecture based on a recent convolutional neural network called EfficientNet on 18,161 plain and segmented tomato leaf images to classify tomato diseases. The performance of two segmentation models, i.e., U-net and Modified U-net, for the segmentation of leaves is reported. The comparative performance of the models for binary classification (healthy and unhealthy leaves), six-class classification (healthy and various groups of diseased leaves), and ten-class classification (healthy and various types of unhealthy leaves) is also reported. The modified U-net segmentation model showed accuracy, IoU, and Dice score of 98.66%, 98.5%, and 98.73%, respectively, for the segmentation of leaf images. EfficientNet-B7 showed superior performance for the binary classification and six-class classification using segmented images, with accuracies of 99.95% and 99.12%, respectively. Finally, EfficientNet-B4 achieved an accuracy of 99.89% for ten-class classification using segmented images. It can be concluded that all the architectures performed better in classifying the diseases when trained with deeper networks on segmented images. The performance of each of the experimental studies reported in this work outperforms the existing literature.

1. Introduction

Agriculture contributed to the domestication of today’s major food crops and animals thousands of years ago. Food insecurity [1], to which plant diseases are a major contributor [2], is one of the major global problems that humanity faces today. According to one study, plant diseases account for around 16 percent of global crop yield loss [3]. Pest losses are projected to be about 50 percent for wheat and 26–29 percent for soybeans globally [3]. Fungi, fungus-like species, bacteria, viruses, viroids, virus-like organisms, nematodes, protozoa, algae, and parasitic plants are the main classes of plant pathogens. Many applications have benefited from artificial intelligence (AI), machine learning (ML), and computer vision, including power prediction from renewable resources [4,5] and biomedical applications [6,7]. During the COVID-19 pandemic, AI was used extensively for the identification of lung-related diseases [8,9,10,11] as well as other prognostic applications [12]. Similar advanced technologies can be used to mitigate the negative effects of plant diseases by detecting and diagnosing them at an early stage. The application of AI and computer vision to automatic detection and diagnosis of plant diseases is currently being extensively studied because manual plant disease monitoring is tedious, time-consuming, and labor-intensive. Chouhan et al. [13] applied a Bacterial Foraging Optimization-based Radial Basis Function Network (BRBFNN) to automatically identify and classify plant diseases, achieving a classification accuracy of 83.07%. The convolutional neural network (CNN) is a very popular neural network architecture that has been used successfully for a variety of computer vision tasks in diverse fields [14]. Researchers have used the CNN architecture and its various versions for the classification and identification of plant diseases. Arya et al. [15] compared different CNN architectures for disease detection in potato and mango leaves, with AlexNet achieving 98.33% accuracy and a shallow CNN model achieving 90.85% accuracy. Wang et al. [16] used a pre-trained VGG16 model to predict disease severity in apple plants and achieved a 90.40% accuracy rate. Amara et al. [17] used the LeNet model to accurately distinguish healthy and diseased banana leaves, achieving a 99.72% accuracy rate.
Tomatoes are a major food crop in many parts of the world, with a per capita consumption of 20 kg per year, accounting for around 15% of total vegetable consumption. North America consumes 42 kg of tomatoes per person per year, while Europe consumes 31 kg per person per year [18,19]. To meet the global demand for tomatoes, techniques for improving crop yield and for early detection of pest, bacterial, and viral infections must be developed. Several studies have been performed to enhance tomato plant survival through early disease detection and subsequent disease management using artificial intelligence-based techniques. Kaur et al. [20] classified seven tomato diseases with an accuracy of 98.8% using a pre-trained CNN-based architecture known as Residual Network, or ResNet. Rahman et al. [21] proposed a fully-connected deep learning-based network to distinguish Bacterial Spot, Late Blight, and Septoria Spot disease from tomato leaf images with 99.25% accuracy. Fuentes et al. [22] used three types of detectors to identify 10 diseases from tomato leaf images: Faster Region-based Convolutional Neural Network (Faster R-CNN), Region-based Fully Convolutional Network (R-FCN), and Single Shot Multibox Detector (SSD). For real-time disease and pest recognition, these detectors were combined with deep feature extractors: VGG16, ResNet-50, and ResNet-152 for Faster R-CNN, and ResNet-50 for SSD and R-FCN, with VGG16 on top of Faster R-CNN achieving the highest average precision of 83%. Agarwal et al. [23] proposed the Tomato Leaf Disease Detection (ToLeD) model, a CNN-based architecture for the classification of 10 diseases from tomato leaf photographs, with a 91.2% accuracy rate. Durmuş et al. [24] used AlexNet and SqueezeNet architectures to classify 10 diseases from photographs of tomato leaves and achieved a 95.5% accuracy. While disease classification and identification in plant leaves have been extensively studied in tomatoes and other plants, there has been little research on segmenting leaf images from the background. Since real-world images can differ greatly in terms of lighting conditions, better segmentation techniques can help AI models learn from the correct region of interest rather than from the background.
U-net is a cutting-edge deep learning-based image segmentation architecture. It was created with biomedical image segmentation in mind [25]. The U-shape network architecture gives U-net its name. Unlike traditional CNN models, U-net includes convolutional layers that are used to up-sample or recombine feature maps into a complete image.
The authors have previously published articles using the state-of-the-art U-Net for segmentation [25] with very promising results [8]. EfficientNet is a recent classification network [26] that has not previously been applied to this task; therefore, the authors used it in this application and obtained promising results.
The paper has the following main contributions:
  • Different variants of U-net architecture are investigated to propose the best segmentation model by comparing the model predictions to the ground truth segmented images.
  • Investigation of different variants of the CNN architecture for binary and multi-class classification of tomato diseases. Several experiments employing different CNN architectures were conducted. Three types of classification were performed in this work: (a) binary classification of healthy and diseased leaves, (b) six-class classification of healthy leaves and five groups of diseased leaves, and (c) ten-class classification of healthy leaves and nine different disease classes.
  • The performance achieved in this work outperforms the existing state-of-the-art works in this domain.
The rest of the paper is organized in the following manner: Section 1 gives a brief introduction, literature review, and motivation for the study. Section 2 provides background on the deep learning architecture, segmentation models, visualization techniques, and tomato leaf pathogens. Section 3 provides the methodology of the study with technical details such as the dataset description, pre-processing techniques, and details of the experiments. Section 4 reports the results of the studies, followed by discussions in Section 5 and, finally, the conclusion in Section 6.

2. Background Study

2.1. Deep Convolutional Neural Networks (CNN)

For detecting tomato leaf diseases, we fine-tuned the EfficientNet CNN proposed by Tan et al. [26]. The authors ensure that the network’s width, depth, and resolution are all balanced. They are the first to empirically measure the relationship between all three dimensions, unlike other CNN scaling approaches that use one-dimensional scaling. The authors developed their baseline architecture using the MnasNet network [27], which employs a multi-objective neural architecture search that prioritizes accuracy and FLOPS. They built the EfficientNet-B0 network, which is similar to MnasNet [27] but slightly larger due to the higher FLOPS target. The mobile inverted bottleneck MBConv [28] is its key building block, and it also includes squeeze-and-excitation optimization [29]. Starting from EfficientNet-B0, the authors use the compound scaling method, which uses a compound coefficient φ to uniformly scale the network width, depth, and resolution, as given in Equation (1):
Depth: D = a^φ
Width: W = b^φ
Resolution: R = c^φ
subject to a ≥ 1, b ≥ 1, c ≥ 1 (1)
where a, b, and c are constants that can be identified through a quick grid scan. φ is a user-specified coefficient that regulates the number of additional resources used for model scaling, while a, b, and c specify how those extra resources would be allocated to network width, depth, and resolution, respectively.
The baseline network (shown in Table 1) is scaled up by fixing a, b, and c as constants and varying φ to build a family of EfficientNets (B0 to B7). EfficientNet-B7 achieves state-of-the-art performance on ImageNet, with a top-five accuracy of 97.1%, while being 8.4 times smaller and 6.1 times faster at inference than the best existing ConvNets such as SENet [29] and GPipe [30].
To build our model, we used EfficientNet-B0, EfficientNet-B4, and EfficientNet-B7. To enhance accuracy and minimize overfitting, we added a Global Average Pooling (GAP) layer after the network’s final convolution layer. Following GAP, we added a Dense layer with a size of 1024 and 25% dropout. Finally, the probability prediction scores for detecting tomato leaf diseases are given by a Softmax layer.
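As a concrete illustration, a minimal PyTorch sketch of this classifier head (GAP, a 1024-unit dense layer with 25% dropout, and a Softmax output) is given below. The torchvision backbone, the pretrained weight identifier, and the class name are our assumptions; the paper does not state which implementation package was used.

```python
import torch
import torch.nn as nn
from torchvision.models import efficientnet_b0  # assumed backbone source

class TomatoLeafClassifier(nn.Module):
    """EfficientNet-B0 features + GAP + Dense(1024) + 25% dropout + Softmax."""

    def __init__(self, num_classes: int = 10):
        super().__init__()
        self.features = efficientnet_b0(weights="IMAGENET1K_V1").features
        self.gap = nn.AdaptiveAvgPool2d(1)    # Global Average Pooling
        self.head = nn.Sequential(
            nn.Flatten(),
            nn.Linear(1280, 1024),            # B0's final conv stage outputs 1280 channels
            nn.Dropout(p=0.25),               # 25% dropout, as described above
            nn.Linear(1024, num_classes),
        )

    def forward(self, x):
        logits = self.head(self.gap(self.features(x)))
        return torch.softmax(logits, dim=1)   # probability prediction scores
```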
Starting with EfficientNet-B0 as a baseline, the compound scaling method is applied in two steps:
First step—assuming twice the resources are available, we first fix φ = 1 and perform a small grid search for a, b, and c. The best values found for EfficientNet-B0 are a = 1.2, b = 1.1, c = 1.15.
Second step—we then fix a, b, and c as constants and scale up the baseline network with different φ using Equation (1) to obtain EfficientNet-B1 to B7.
Notably, searching for a, b, and c directly around a large model will yield even better results, but the search cost becomes prohibitively costly on larger models. Our approach overcomes this problem by performing a single search on a small baseline network (first step) and then applying the same scaling coefficients to all other models (second step).
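As a worked example of Equation (1) and the two-step procedure, the following sketch (the function name is ours) computes the depth, width, and resolution multipliers for a given φ with the grid-searched constants reported above:

```python
def compound_scale(phi: float, a: float = 1.2, b: float = 1.1, c: float = 1.15):
    """Depth, width, and resolution multipliers per Equation (1)."""
    return a ** phi, b ** phi, c ** phi

# phi = 1 roughly doubles FLOPS, since a * (b ** 2) * (c ** 2) = 1.2 * 1.21 * 1.3225 ~ 1.92
print(compound_scale(1))  # (1.2, 1.1, 1.15)
print(compound_scale(2))  # approximately (1.44, 1.21, 1.32): deeper, wider, higher resolution
```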

2.2. Segmentation

In the literature, there are many variants of segmentation models based on U-net. In order to use the best performing one, two variants, the original U-Net [25] and the Modified U-Net [31], were investigated in this study. The designs of the original U-Net and the Modified U-Net are shown in Figure 1. The original U-net consists of a contracting path and an expanding path. The contracting path consists of two 3 × 3 convolutions (unpadded convolutions) that are applied repeatedly, each followed by a ReLU and a 2 × 2 max pooling operation with stride 2 for downsampling. The expanding path consists of an upsampling of the feature map, a 2 × 2 convolution (“up-convolution”) that halves the number of feature channels, a concatenation with the contracting path’s correspondingly cropped feature map, and two 3 × 3 convolutions, each followed by a ReLU. The network employs a total of 23 convolutional layers.
The Modified U-Net, a U-Net model with a small variation in its decoding part, is also utilized [31]. In the U-Net model, a contracting path with four encoding blocks is followed by an expanding path with four decoding blocks. Each encoding block is made up of two consecutive 3 × 3 convolutional layers, followed by a downsampling max pooling layer with a stride of 2. The decoding blocks are made up of a transposed convolutional layer for upsampling, two 3 × 3 convolutional layers, and a concatenation with the corresponding feature map from the contracting path. In the modified U-Net architecture, each decoding block uses three convolutional layers instead of two: an upsampling layer is followed by two 3 × 3 convolutional layers, a concatenation layer, and another 3 × 3 convolutional layer. Batch normalization and ReLU activation are applied to all convolutional layers. At the final layer, a 1 × 1 convolution maps the output of the last decoding block to a two-channel feature map, and a pixel-wise Softmax maps each pixel into a binary class of leaf or background.
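For illustration, below is a minimal PyTorch sketch of one decoding block of the modified U-Net as described above; the channel arithmetic, class names, and use of transposed convolution for upsampling follow the text, but the exact layer hyperparameters are our assumptions.

```python
import torch
import torch.nn as nn

def conv_bn_relu(in_ch: int, out_ch: int) -> nn.Sequential:
    # 3x3 convolution with batch normalization and ReLU, as described above
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1),
        nn.BatchNorm2d(out_ch),
        nn.ReLU(inplace=True),
    )

class ModifiedDecoderBlock(nn.Module):
    """Upsampling, two 3x3 convs, concatenation with the encoder skip feature
    map, then a third 3x3 conv (three convolutional layers in total)."""

    def __init__(self, in_ch: int, skip_ch: int, out_ch: int):
        super().__init__()
        self.up = nn.ConvTranspose2d(in_ch, out_ch, kernel_size=2, stride=2)
        self.conv1 = conv_bn_relu(out_ch, out_ch)
        self.conv2 = conv_bn_relu(out_ch, out_ch)
        self.conv3 = conv_bn_relu(out_ch + skip_ch, out_ch)

    def forward(self, x: torch.Tensor, skip: torch.Tensor) -> torch.Tensor:
        x = self.conv2(self.conv1(self.up(x)))   # upsample, then two 3x3 convs
        x = torch.cat([x, skip], dim=1)          # fuse the encoder feature map
        return self.conv3(x)                     # final 3x3 conv of the block
```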

2.3. Visualization Techniques

The development of visualization techniques has resulted from an increased interest in the internal mechanisms of CNNs and the logic behind a network making particular decisions. Visualization techniques aid in the interpretation of CNN decision-making processes by providing a more visual representation. This also improves the model’s transparency by presenting the reasoning behind the inference in a way that is readily understood by humans, thus raising trust in the neural network’s outcomes. Among the numerous visualization techniques available, such as SmoothGrad [32], Grad-CAM [33], Grad-CAM++ [34], and Score-CAM [35], the recently proposed Score-CAM was used in this study due to its promising output. Score-CAM eliminates the dependency on gradients by obtaining the weight of each activation map through its forward-passing score on the target class; the final result is a linear combination of the weights and activation maps. Figure 2 shows a sample image visualization with Score-CAM, with the heat map showing that the leaf regions controlled the decision making of the CNN. This can help users understand how the network makes decisions and can also raise end-user trust if it can be verified that the network always makes decisions based on the leaf area.
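The core of Score-CAM, as described above, can be sketched in a few lines of PyTorch. In this hedged illustration, the function and variable names are ours, and acts is assumed to hold the activation maps of the last convolutional layer, captured beforehand with a forward hook:

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def score_cam(model, acts, image, target_class):
    """Minimal Score-CAM sketch. `acts` has shape (1, K, h, w): the activation
    maps of the last convolutional layer for one input image."""
    num_maps = acts.shape[1]
    height, width = image.shape[-2:]
    cam = torch.zeros(height, width)
    for k in range(num_maps):
        # Upsample one activation map to the input resolution, normalize to [0, 1].
        m = F.interpolate(acts[:, k:k + 1], size=(height, width),
                          mode="bilinear", align_corners=False)
        m = (m - m.min()) / (m.max() - m.min() + 1e-8)
        # The weight is the forward-passing softmax score of the masked input
        # on the target class -- no gradients are involved.
        weight = torch.softmax(model(image * m), dim=1)[0, target_class]
        cam += weight * m[0, 0]
    # Linear combination of weights and activation maps, followed by ReLU.
    return torch.relu(cam)
```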

2.4. Pathogens of Tomato Leaves

The most common plant pathogens are fungi, which can cause a variety of diseases such as early blight, Septoria leaf spot, target spot, and leaf mold. Fungi can invade plants from a variety of sources, including infected soil and seeds. Animals, humans, equipment, and soil pollution may all spread fungal infections from one plant to another. The fungus that causes early blight disease in tomato plants affects the plant leaves. Collar rot, stem lesion, and fruit rot are the terms for the rot that affects the seedlings’ basal stems, adult plant stems, and fruits, respectively [36,37]. The most important methods for controlling early blight are cultural control, which involves effective soil, nutrient, and crop management, as well as the use of fungicidal chemicals. The fungus that induces Septoria leaf spot on tomato plants [38,39] releases the tomatinase enzyme, which speeds up the degradation of the tomato steroidal glycoalkaloid α-tomatine [40,41]. A fungus also induces the target spot disease in tomato plants [42,43]. In tomato plants, necrotic lesions with a light brown color in the middle are symptoms of target spot disease [44,45]. Early defoliation occurs when the lesions spread to a wider blighted leaf region [44,45]. Target spot also does direct damage to the fruit by penetrating the pulp [44,45]. A fungus is likewise responsible for plant leaf mold disease [46,47], which occurs when the leaves remain damp for a long time. Bacteria are a major plant pathogen as well. They enter plants through wounds such as insect bites, pruning cuts, and other injuries, as well as natural openings like stomata. Temperature, humidity, soil conditions, nutrient availability, weather conditions, and airflow are all important factors in bacterial growth on plants and the harm they cause. Bacterial spot is a bacterial-caused plant disease [48,49]. Molds are also a significant contributor to plant disease. Mold causes late blight disease in tomato and potato plants [50,51]. A few of the symptoms include the presence of dark uneven blemishes on leaf tips and plant stems. The Tomato Yellow Leaf Curl Virus (TYLCV) is a disease-causing virus that affects tomatoes. The plant is infected by this virus, which is spread by an insect vector. Although tomato plants are its primary host, studies have shown that the virus can infect a number of other plants, including beans, peppers, tobacco, potatoes, and eggplants [52,53]. Owing to the disease’s rapid spread in recent decades, the focus of research has shifted to damage control of yellow leaf curl disease [54,55,56,57]. Tomato mosaic virus (ToMV) is another viral disease that directly affects tomato plants. This virus is present all over the world and affects not only tomatoes but also other plants. Twisted and fern-like stems, infected fruit with yellow patches, and necrotic blemishes are all symptoms of ToMV infection [58,59].

3. Methodology

The overall methodology of the study is summarized in Figure 3. This study used tomato leaf data from the Plant Village dataset [60,61], where tomato leaf images and corresponding segmented tomato leaf masks are provided. As explained earlier, the paper comprises three different studies: (i) binary classification of healthy and unhealthy leaves; (ii) six-class classification of healthy leaves and five disease groups; and (iii) ten-class classification of healthy leaves and nine individual diseases, all performed using segmented leaf images. The paper also explored different variants of the U-net segmentation model to identify the best network for segmenting leaves from the background. The benefit of using segmented tomato leaf images for classification was further verified with the Score-CAM visualization technique, which has been found very reliable in different applications. The classification was done using EfficientNet networks, which have been successful in previous publications by the authors.

3.1. Datasets Description

In this study, the Plant Village tomato leaf images and the corresponding leaf mask dataset were used [60,61], comprising 18,161 tomato leaf images and their corresponding segmented leaf masks. The dataset was used for training both the tomato leaf segmentation models and the classification models. All images were divided into 10 different classes, where one class is healthy and the other nine classes are unhealthy (bacterial spot, early blight, leaf mold, Septoria leaf spot, target spot, two-spotted spider mite, late blight mold, mosaic virus, and yellow leaf curl virus), and the nine unhealthy classes are categorized into five subgroups (namely bacterial, viral, fungal, mold, and mite diseases). Some sample tomato leaf images for the healthy and different unhealthy classes, along with leaf masks from the Plant Village dataset, are shown in Figure 4. Moreover, a detailed breakdown of the number of images in the dataset is shown in Table 2, which is useful for the classification tasks discussed in detail in the next section.

3.2. Preprocessing

Resizing and Normalizing: The various CNN networks (for both the segmentation and classification experiments) have specific input image size requirements. Thus, the images were resized to 256 × 256 for the U-net segmentation networks and to 224 × 224 for the EfficientNet classifiers (EfficientNet-B0, EfficientNet-B4, and EfficientNet-B7). Using the mean and standard deviation of the images in the dataset, z-score normalization was applied to normalize the images, as sketched below.
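A sketch of this preprocessing using torchvision transforms is shown below; the per-channel mean and standard deviation values are placeholders, since the paper computes them from the Plant Village images themselves:

```python
from torchvision import transforms

# Placeholder statistics; the paper derives them from the Plant Village images.
DATASET_MEAN = [0.45, 0.48, 0.41]   # hypothetical per-channel means
DATASET_STD = [0.20, 0.18, 0.21]    # hypothetical per-channel standard deviations

classification_tf = transforms.Compose([
    transforms.Resize((224, 224)),                    # EfficientNet input size
    transforms.ToTensor(),
    transforms.Normalize(DATASET_MEAN, DATASET_STD),  # z-score normalization
])

segmentation_tf = transforms.Compose([
    transforms.Resize((256, 256)),                    # U-net input size
    transforms.ToTensor(),
    transforms.Normalize(DATASET_MEAN, DATASET_STD),
])
```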
Augmentation: Since the dataset is imbalanced, i.e., it does not have a similar number of images for the different categories, training with it directly can produce a biased model. Data augmentation helps to equalize the number of images in the various classes, which can provide reliable results, as stated in many recent publications [6,7,8,9,11]. In this study, three augmentation strategies (rotation, scaling, and translation) were used to balance the training images. Rotation was done by rotating the images clockwise and counterclockwise by an angle between 5 and 15 degrees. Scaling is the magnification or reduction of the frame size of the image, and magnifications of 2.5% to 10% were used in this work. Translation was done by shifting images horizontally and vertically by 5% to 20%.
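These three operations map naturally onto a single torchvision RandomAffine transform, sketched below; note that RandomAffine samples uniformly around zero, so the lower bounds of the stated ranges (5 degrees, 5%) are only approximated:

```python
from torchvision import transforms

# Hedged sketch of the augmentation policy above.
augment = transforms.RandomAffine(
    degrees=15,              # rotation in [-15, 15] degrees (both directions)
    translate=(0.20, 0.20),  # up to 20% horizontal and vertical shift
    scale=(1.025, 1.10),     # 2.5% to 10% magnification
)
```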

3.3. Experiments

Leaf Segmentation: Different U-net models were trained separately on the Plant Village tomato leaf images and leaf mask dataset to identify the best performing segmentation model for leaf segmentation. Five-fold cross-validation was used, where 80% of the 18,161 tomato leaf images and their corresponding ground truth masks were randomly selected for training, and the remaining 20% were used for testing (Table 3). The class distribution in the test set is similar to that of the train set. Out of the 80% training data, 90% was used for actual training and 10% for validation, which helps to avoid overfitting. In this study, three different loss functions (Negative Log-Likelihood (NLL) loss, Binary Cross-Entropy (BCE) loss, and Mean-Squared Error (MSE) loss) were compared to achieve the best performance metrics and to identify the best tomato leaf segmentation model. Moreover, an early stopping criterion of five maximum epochs with no improvement in validation loss was used, as reported in some recent works [9,11]; a sketch of this criterion follows.
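A minimal sketch of this early stopping criterion (the class name is ours, and the five-epoch patience follows the text above):

```python
class EarlyStopping:
    """Stop once `patience` consecutive epochs show no validation-loss improvement."""

    def __init__(self, patience: int = 5):
        self.patience = patience
        self.best = float("inf")
        self.stale = 0

    def step(self, val_loss: float) -> bool:
        if val_loss < self.best:
            self.best, self.stale = val_loss, 0   # improvement: reset the counter
        else:
            self.stale += 1                       # no improvement this epoch
        return self.stale >= self.patience        # True means halt training

# Typical use inside a training loop:
# stopper = EarlyStopping(patience=5)
# for epoch in range(max_epochs):
#     ...train one epoch, then compute val_loss on the validation fold...
#     if stopper.step(val_loss):
#         break
```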
Tomato leaf disease classification:
The study investigated a deep learning architecture based on a recent convolutional neural network called EfficientNet to classify segmented tomato leaf disease images. Three different classification experiments were carried out in this study. Table 4 summarizes the details of the images used in the three classification schemes with segmented leaf images. The parameters of the classification and segmentation experiments are summarized in Table 5.
All the experiments were conducted using the PyTorch library with Python 3.7 on an Intel® Xeon® CPU E5-2697 v4 @ 2.30 GHz with 64 GB RAM and a 16 GB NVIDIA GeForce GTX 1080 GPU.

3.4. Performance Metrics

Tomato leaf segmentation: The performance metrics for the segmentation experiment are stated in Equations (2)–(4):
Accuracy = (TP + TN) / (TP + FN + FP + TN) (2)
IoU (Jaccard Index) = TP / (TP + FN + FP) (3)
Dice Coefficient (F1 score) = 2TP / (2TP + FN + FP) (4)
Tomato leaf disease classification: The performance metrics for the classification experiment are stated in Equations (5)–(9):
Accuracy = (TP + TN) / (TP + FN + FP + TN) (5)
Sensitivity = TP / (TP + FN) (6)
Specificity = TN / (TN + FP) (7)
Precision = TP / (TP + FP) (8)
F1_score = 2TP / (2TP + FN + FP) (9)
Here, true positive (TP) is the number of correctly classified healthy leaf images and true negative (TN) is the number of correctly classified unhealthy leaf images. False-positive (FP) and false-negative (FN) are the misclassified healthy and unhealthy leaf images, respectively.
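These definitions translate directly into code; the small helpers below (function names are ours) compute Equations (2)–(4) and (5)–(9) from the confusion-matrix counts:

```python
def segmentation_metrics(tp: int, tn: int, fp: int, fn: int) -> dict:
    """Equations (2)-(4): pixel-level accuracy, IoU, and Dice."""
    return {
        "accuracy": (tp + tn) / (tp + tn + fp + fn),
        "iou": tp / (tp + fn + fp),
        "dice": 2 * tp / (2 * tp + fn + fp),
    }

def classification_metrics(tp: int, tn: int, fp: int, fn: int) -> dict:
    """Equations (5)-(9): accuracy, sensitivity, specificity, precision, F1."""
    return {
        "accuracy": (tp + tn) / (tp + tn + fp + fn),
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "precision": tp / (tp + fp),
        "f1_score": 2 * tp / (2 * tp + fn + fp),
    }
```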
Moreover, the segmentation and classification networks were compared in terms of the testing time per image, i.e., the time taken by each network to segment or classify an input image, as represented in Equation (10):
T = t″ − t′ (10)
where t′ is the time at which a network starts to segment or classify an image I, and t″ is the time at which the network has finished segmenting or classifying the same image I.
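Per-image inference time can be measured as sketched below; the synchronization calls are our addition to ensure GPU work has finished before each timestamp is taken:

```python
import time
import torch

@torch.no_grad()
def inference_time(model: torch.nn.Module, image: torch.Tensor) -> float:
    """Equation (10): T = t'' - t' for a single input image."""
    if torch.cuda.is_available():
        torch.cuda.synchronize()          # flush pending GPU work before timing
    t_prime = time.perf_counter()         # t': start of segmentation/classification
    _ = model(image)
    if torch.cuda.is_available():
        torch.cuda.synchronize()          # make sure the forward pass has finished
    t_double_prime = time.perf_counter()  # t'': end of segmentation/classification
    return t_double_prime - t_prime
```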

4. Results

The performance of various networks in the different experiments is reported in this section.

4.1. Tomato Leaf Segmentation

In this study, two segmentation models, the original U-net [25] and the modified U-net [31], were trained, validated, and tested for the segmentation of tomato leaf images. Table 6 shows the comparative performance of the two segmentation models using three different loss functions (namely, the NLL, BCE, and MSE loss functions) in image segmentation. It can be noted that the modified U-net with the NLL loss function outperformed the original U-net in segmenting the leaf region from the whole images, both quantitatively and qualitatively. The test loss, test accuracy, IoU, and Dice score for the segmentation of tomato leaves using the modified U-net with the NLL loss function were found to be 0.0076, 98.66%, 98.5%, and 98.73%, respectively. Figure 5 shows some example test tomato leaf images, corresponding ground truth masks, and segmented leaf images generated by the modified U-net model with the NLL loss function for the Plant Village dataset.

4.2. Tomato Leaf Disease Classification

In this study, three different classification experiments were conducted on segmented tomato leaf images. The comparative performance of three EfficientNet variants (EfficientNet-B0, EfficientNet-B4, and EfficientNet-B7) for the three classification schemes on segmented leaf images is shown in Table 7. It is apparent from Table 7 that all the evaluated pre-trained models perform very well in classifying healthy and unhealthy tomato leaf images in the two-class, six-class, and ten-class problems. The performance also improved compared with that using non-segmented images (see Supplementary Table S1).
Among the networks trained on leaf images for the two-class, six-class, and ten-class problems, EfficientNet-B7 outperformed the other trained models, except for the ten-class problem, where EfficientNet-B4 was slightly better than EfficientNet-B7. It can also be seen that as the EfficientNet models scale up, the testing time (T) increases due to the scaled depth, width, and resolution of the network. The authors tried different versions of EfficientNet and observed that as the network is scaled in terms of depth, width, and resolution, the performance improves. However, as the classification scheme becomes more complicated, the performance does not improve much with the scaled versions of EfficientNet.
For segmented tomato leaf images, EfficientNet-B7 outperformed the others for the two-class and six-class problems, showing accuracy, sensitivity, and specificity of 99.95%, 99.95%, and 99.77%, and 99.12%, 99.11%, and 99.81%, respectively. In contrast, EfficientNet-B4 produced the best result for the ten-class problem, with accuracy, sensitivity, and specificity of 99.89%, 99.44%, and 99.94%, respectively. Figure 6 shows the receiver operating characteristic (ROC) curves for the two-class, six-class, and ten-class problems using segmented tomato leaf images. It is evident that network performance improves slightly with more parameters for the 2-class, 6-class, and 10-class problems; however, deeper networks provide a larger performance gain for the 2-class and 6-class problems.
The confusion matrices for the best performing networks on the different classification problems using tomato leaf images are shown in Figure 7. It can be noticed that even with the best performing network, EfficientNet-B7, for two-class classification, 6 out of 16,570 unhealthy tomato leaf images were misclassified as healthy and 4 out of 1591 healthy tomato leaf images were misclassified as unhealthy.
For the six-class problem, which consisted of one healthy class and five different unhealthy classes, only 3 out of 1591 healthy tomato leaf images were misclassified as unhealthy, and 159 out of 16,570 unhealthy tomato leaf images were misclassified as healthy or as another unhealthy class. Moreover, for the ten-class problem, which consisted of one healthy class and nine different unhealthy classes, the best performing network was EfficientNet-B4; only 4 out of 1591 healthy tomato leaf images were misclassified as unhealthy and 105 out of 16,570 unhealthy tomato leaf images of the nine different categories were misclassified as healthy or as another unhealthy class.

4.3. Visualization Using Score-Cam

In this study, the reliability of the trained networks was investigated using visualization techniques; Score-CAM heat maps for segmented tomato leaf images were used. Figure 8 shows the original tomato leaf samples along with the heat maps on the segmented leaves. As can be seen from Figure 8, the networks learn from the leaf region in the segmented images, which makes the network decisions more reliable. This helps to counter the criticism that CNNs take decisions from non-relevant regions and are therefore not reliable [62]. It can also be seen in Figure 9 that segmentation has helped classification by making the network learn from the region of interest, and this reliable learning has contributed to correct classification. In addition, the authors also experimented to confirm that segmentation helps the network learn and take decisions from relevant areas compared to non-segmented images (see Supplementary Figure S1).

5. Discussion

Plant diseases are a major threat to global food security. The latest technologies need to be applied to the agriculture sector to curb diseases. Artificial intelligence-based technologies are extensively investigated for plant disease detection. Computer vision-based disease detection systems are popular for their robustness, ease of data acquisition, and quick results. This research investigates how model-scaling CNN-based architectures perform against each other in two tasks, i.e., segmentation and classification of tomato leaf images. The study was divided into three sub-studies: 2-class classification (Healthy and Unhealthy), 6-class classification (Healthy, Fungi, Bacteria, Mold, Virus, and Mite), and 10-class classification (Healthy, Early blight, Septoria leaf spot, Target spot, Leaf mold, Bacterial spot, Late blight mold, Tomato Yellow Leaf Curl Virus, Tomato Mosaic Virus, and Two-spotted spider mite). Overall, the EfficientNet-B7 model outperformed every other model for binary and 6-class classification with segmented images, while the EfficientNet-B4 model outperformed the others in 10-class classification. In the binary classification of healthy and diseased tomato leaves, EfficientNet-B7 showed an overall accuracy of 99.95% with segmented images. In 6-class classification, EfficientNet-B7 showed an overall accuracy of 99.12% with segmented images. Furthermore, in 10-class classification, EfficientNet-B4 showed an overall accuracy of 99.89% with segmented images. The results in the paper are comparable to state-of-the-art results and are summarized in Table 8. Although the Plant Village dataset used in this study contains images taken in diverse environmental conditions, the dataset was collected in a specific region and covers specific breeds of tomatoes. A study conducted using a dataset containing images of other breeds of tomato plants from different regions of the world may result in a more robust framework for early disease detection in tomato plants. Furthermore, lighter CNN architectures with non-linearity in the feature extraction layers might be worth investigating for portable solutions.

6. Conclusions

In this work, we developed a deep convolutional neural network (CNN) based on the recently developed EfficientNet CNN model. The model was fine-tuned and trained for the detection of healthy and different unhealthy tomato leaf images. The obtained results show that our model outperforms some recent deep learning techniques on the most popular publicly available Plant Village dataset [60,61]. It was also found that the modified U-net was best suited for the segmentation of leaf images from the background, and EfficientNet-B7 was better at extracting discriminative features from images than the other architectures. Moreover, the performance of the networks generally improved further when trained with more parameters. The trained models can be used for the early automatic detection of plant diseases. Experts need years of training and experience to detect diseases early by visual inspection, but our model can be used by anyone, even non-experts. The network can run in the background, take input from a camera, and immediately inform the user of the output so that the user can take the necessary action. Thus, preventive measures can be taken earlier. This work can be beneficial for early and automatic disease detection in tomato crops, enabled by the latest technologies such as smartphones, drone cameras, and robotic platforms. The proposed framework can be incorporated with a feedback system that gives valuable suggestions, remedies, and disease management and control strategies, thus ensuring better crop yields. The authors will extend this work to validate the performance of the proposed solution in a real-time application in which microcontrollers with cameras will be used. Future work will also cover much more diverse environments, where the authors are confident the approach will perform well.

Supplementary Materials

The following are available online at https://www.mdpi.com/article/10.3390/agriengineering3020020/s1, Figure S1: Score-CAM visualization to confirm how segmentation has helped in classification, even for incorrectly classified images; Table S1: Summary of the tomato leaf disease classification performance using non-segmented original leaf images.

Author Contributions

M.E.H.C.: Conceptualization, Writing—Original Draft, Writing—Review & Editing, Supervision, and Project Administration. T.R.: Data Curation, Methodology, Software, Validation, Formal analysis, Writing—Review & Editing. A.K.: Data Curation, Investigation, Resources, Writing—Original Draft, Writing—Review & Editing. A.U.K.: Data Curation, Investigation, Resources, Writing—Original Draft, Writing—Review & Editing. M.A.A.: Investigation, Resources, Writing—Original Draft, Writing—Review & Editing, A.K.: Writing—Original Draft, Writing—Review & Editing. M.S.K.: Visualization, Writing—Original Draft. N.A.-E.: Writing—Review & Editing, Supervision, M.B.I.R.: Writing—Review & Editing, Supervision, Conceptualization. M.T.I.: Writing—Review & Editing, supervision, S.H.M.A.: Writing—Review & Editing. All authors have read and agreed to the published version of the manuscript.

Funding

The open-access publication of this article was funded by the Qatar National Library and this work was made possible by HSREP02-1230-190019 from the Qatar National Research Fund, a member of Qatar Foundation, Doha, Qatar. The statements made herein are solely the responsibility of the authors.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The data presented in this study are openly available in [SpMohanty/PlantVillage-Dataset. Available online: https://github.com/spMohanty/PlantVillage-Dataset (accessed on 24 January 2021)].

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Chowdhury, M.E.; Khandakar, A.; Ahmed, S.; Al-Khuzaei, F.; Hamdalla, J.; Haque, F.; Mamun, B.I.R.; Ahmed, A.S.; Nasser, A.E. Design, construction and testing of iot based automated indoor vertical hydroponics farming test-bed in Qatar. Sensors 2020, 20, 5637. [Google Scholar] [CrossRef] [PubMed]
  2. Strange, R.N.; Scott, P.R. Plant disease: A threat to global food security. Annu. Rev. Phytopathol. 2005, 43, 83–116. [Google Scholar] [CrossRef]
  3. Oerke, E.-C. Crop losses to pests. J. Agric. Sci. 2006, 144, 31–43. [Google Scholar] [CrossRef]
  4. Touati, F.; Khandakar, A.; Chowdhury, M.E.; Antonio, S., Jr.; Sorino, C.K.; Benhmed, K. Photo-Voltaic (PV) monitoring system, performance analysis and power prediction models in Doha, Qatar. In Renewable Energy; IntechOpen: London, UK, 2020. [Google Scholar]
  5. Khandakar, A.; Chowdhury, M.E.H.; Kazi, M.K.; Benhmed, K.; Touati, F.; Al-Hitmi, M.; Gonzales, A.S.P., Jr. Machine learning based photovoltaics (PV) power prediction using different environmental parameters of Qatar. Energies 2019, 12, 2782. [Google Scholar] [CrossRef] [Green Version]
  6. Chowdhury, M.H.; Shuzan, M.N.I.; Chowdhury, M.E.; Mahbub, Z.B.; Uddin, M.M.; Khandakar, A.; Mamun, B.I.R. Estimating blood pressure from the photoplethysmogram signal and demographic features using machine learning techniques. Sensors 2020, 20, 3127. [Google Scholar] [CrossRef]
  7. Chowdhury, M.E.; Khandakar, A.; Alzoubi, K.; Mansoor, S.; Tahir, A.M.; Reaz, M.B.I.; Nasser, A.-E. Real-time smart-digital stethoscope system for heart diseases monitoring. Sensors 2019, 19, 2781. [Google Scholar]
  8. Rahman, T.; Khandakar, A.; Kadir, M.A.; Islam, K.R.; Islam, K.F.; Mazhar, R.; Tahir, R.; Mohammad, T.I.; Saad, B.A.K.; Mohamed, A.A.; et al. Reliable tuberculosis detection using chest X-ray with deep learning, segmentation and visualization. IEEE Access 2020, 8, 191586–191601. [Google Scholar] [CrossRef]
  9. Tahir, A.; Qiblawey, Y.; Khandakar, A.; Rahman, T.; Khurshid, U.; Musharavati, F.; Islam, M.T.; Kiranyaz, S.; Chowdhury, M.E.H. Coronavirus: Comparing COVID-19, SARS and MERS in the eyes of AI. arXiv 2020, arXiv:2005.11524. [Google Scholar]
  10. Chowdhury, M.E.; Rahman, T.; Khandakar, A.; Mazhar, R.; Kadir, M.A.; Mahbub, Z.B.; Atif, I.; Nasser, A.-E.; Khan, M.S.; Islam, K.R. Can AI help in screening viral and COVID-19 pneumonia? IEEE Access 2020, 8, 132665–132676. [Google Scholar] [CrossRef]
  11. Rahman, T.; Chowdhury, M.E.; Khandakar, A.; Islam, K.R.; Islam, K.F.; Mahbub, Z.B.; Muhammad, A.K.; Saad, K. Transfer learning with deep convolutional neural network (CNN) for pneumonia detection using chest X-ray. Appl. Sci. 2020, 10, 3233. [Google Scholar] [CrossRef]
  12. Chowdhury, M.E.; Rahman, T.; Khandakar, A.; Al-Madeed, S.; Zughaier, S.M.; Hassen, H.; Mohammad, T.I. An early warning tool for predicting mortality risk of COVID-19 patients using machine learning. arXiv 2020, arXiv:2007.15559. [Google Scholar]
  13. Chouhan, S.S.; Kaul, A.; Singh, U.P.; Jain, S. Bacterial foraging optimization based radial basis function neural network (BRBFNN) for identification and classification of plant leaf diseases: An automatic approach towards plant pathology. IEEE Access 2018, 6, 8852–8863. [Google Scholar] [CrossRef]
  14. LeCun, Y.; Haffner, P.; Bottou, L.; Bengio, Y. Object recognition with gradient-based learning. In Shape, Contour and Grouping in Computer Vision; Springer: Berlin/Heidelberg, Germany, 1999; pp. 319–345. [Google Scholar]
  15. Arya, S.; Singh, R. A Comparative Study of CNN and AlexNet for Detection of Disease in Potato and Mango leaf. In Proceedings of the 2019 International Conference on Issues and Challenges in Intelligent Computing Techniques (ICICT), Ghaziabad, India, 27–28 September 2019; pp. 1–6. [Google Scholar]
  16. Wang, G.; Sun, Y.; Wang, J. Automatic image-based plant disease severity estimation using deep learning. Comput. Intell. Neurosci. 2017, 2017, 2917536. [Google Scholar] [CrossRef] [Green Version]
  17. Amara, J.; Bouaziz, B.; Algergawy, A. A deep learning-based approach for banana leaf diseases classification. In Proceedings of the Datenbanksysteme für Business, Technologie und Web (BTW 2017)—Workshopband, Stuttgart, Germany, 6–10 March 2017. [Google Scholar]
  18. Statistics, F. Food and Agriculture Organization of the United Nations. Retrieved 2010, 3, 2012. [Google Scholar]
  19. Adeoye, I.; Aderibigbe, O.; Amao, I.; Egbekunle, F.; Bala, I. Tomato Products’market Potential and Consumer Preference In Ibadan, Nigeria. Sci. Pap. Manag. Econ. Eng. Agric. Rural Dev. 2017, 17, 9–15. [Google Scholar]
  20. Kaur, M.; Bhatia, R. Development of an Improved Tomato Leaf Disease Detection and Classification Method. In Proceedings of the IEEE Conference on Information and Communication Technology, Allahabad, India, 6–8 December 2019; pp. 1–5. [Google Scholar]
  21. Rahman, M.A.; Islam, M.M.; Mahdee, G.S.; Kabir, M.W.U. Improved Segmentation Approach for Plant Disease Detection. In Proceedings of the 1st International Conference on Advances in Science, Engineering and Robotics Technology (ICASERT), Dhaka, Bangladesh, 3–5 May 2019; pp. 1–5. [Google Scholar]
  22. Fuentes, A.; Yoon, S.; Kim, S.C.; Park, D.S. A robust deep-learning-based detector for real-time tomato plant diseases and pests recognition. Sensors 2017, 17, 2022. [Google Scholar] [CrossRef] [Green Version]
  23. Agarwal, M.; Singh, A.; Arjaria, S.; Sinha, A.; Gupta, S. ToLeD: Tomato leaf disease detection using convolution neural network. Procedia Comput. Sci. 2020, 167, 293–301. [Google Scholar] [CrossRef]
  24. Durmuş, H.; Güneş, E.O.; Kırcı, M. Disease detection on the leaves of the tomato plants by using deep learning. In Proceedings of the 6th International Conference on Agro-Geoinformatics, Fairfax, VA, USA, 7–10 August 2017; pp. 1–5. [Google Scholar]
  25. Ronneberger, O.; Fischer, P.; Brox, A.T. U-net: Convolutional networks for biomedical image segmentation. In Proceedings of the International Conference on Medical Image Computing and Computer-Assisted Intervention, Munich, Germany, 5–9 October 2015; Springer: Cham, Switzerland, 2015; pp. 234–241. [Google Scholar]
  26. Tan, M.; Le, Q. Efficientnet: Rethinking model scaling for convolutional neural networks. In Proceedings of the International Conference on Machine Learning, Long Beach, CA, USA, 10–15 June 2019; pp. 6105–6114. [Google Scholar]
  27. Tan, M.; Chen, B.; Pang, R.; Vasudevan, V.; Sandler, M.; Howard, A.; Le, Q.V. Mnasnet: Platform-aware neural architecture search for mobile. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Long Beach, CA, USA, 16–21 June 2019; pp. 2820–2828. [Google Scholar]
  28. Howard, A.; Zhmoginov, A.; Chen, L.-C.; Sandler, M.; Zhu, M. Inverted Residuals and Linear Bottlenecks: Mobile Networks for Classification, Detection and Segmentation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Salt Lake City, UT, USA, 18–23 June 2018. [Google Scholar]
  29. Hu, J.; Shen, L.; Sun, G. Squeeze-and-excitation networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–23 June 2018; pp. 7132–7141. [Google Scholar]
  30. Huang, Y.; Cheng, Y.; Bapna, A.; Firat, O.; Chen, M.X.; Chen, D.; HyoukJoong, L.; Jiquan, N.; Quoc, V.L.; Yonghui, W.; et al. Gpipe: Efficient training of giant neural networks using pipeline parallelism. arXiv 2018, arXiv:1811.06965. [Google Scholar]
  31. Lung-Segmentation-2d. Available online: https://github.com/imlab-uiip/lung-segmentation-2d#readme (accessed on 1 August 2020).
  32. Smilkov, D.; Thorat, N.; Kim, B.; Viégas, F.; Wattenberg, M. Smoothgrad: Removing noise by adding noise. arXiv 2017, arXiv:1706.03825. [Google Scholar]
  33. Selvaraju, R.R.; Cogswell, M.; Das, A.; Vedantam, R.; Parikh, D.; Batra, D. Grad-cam: Visual explanations from deep networks via gradient-based localization. In Proceedings of the IEEE International Conference on Computer Vision, Venice, Italy, 22–29 October 2017; pp. 618–626. [Google Scholar]
  34. Chattopadhay, A.; Sarkar, A.; Howlader, P.; Balasubramanian, V.N. Grad-cam++: Generalized gradient-based visual explanations for deep convolutional networks. In Proceedings of the IEEE Winter Conference on Applications of Computer Vision (WACV), Lake Tahoe, NV, USA, 12–15 March 2018; pp. 839–847. [Google Scholar]
  35. Wang, H.; Wang, Z.; Du, M.; Yang, F.; Zhang, Z.; Ding, S.; Mardziel, P.; Hu, X. Score-CAM: Score-weighted visual explanations for convolutional neural networks. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, Seattle, WA, USA, 14–19 June 2020; pp. 24–25. [Google Scholar]
  36. Chaerani, R.; Voorrips, R.E. Tomato early blight (Alternaria solani): The pathogen, genetics, and breeding for resistance. J. Gen. Plant Pathol. 2006, 72, 335–347. [Google Scholar] [CrossRef]
  37. Wu, Q.; Chen, Y.; Meng, J. DCGAN-based data augmentation for tomato leaf disease identification. IEEE Access 2020, 8, 98716–98728. [Google Scholar] [CrossRef]
  38. Cabral, R.N.; Marouelli, W.A.; Lage, D.A.; Café-Filho, A.C. Septoria leaf spot in organic tomatoes under diverse irrigation systems and water management strategies. Hortic. Bras. 2013, 31, 392–400. [Google Scholar] [CrossRef] [Green Version]
  39. Café-Filho, A.C.; Lopes, C.A.; Rossato, M. Management of plant disease epidemics with irrigation practices. In Irrigation in Agroecosystems; Books on Demand: Norderstedt, Germany, 2019; p. 123. [Google Scholar]
  40. Zou, J.; Rodriguez-Zas, S.; Aldea, M.; Li, M.; Zhu, J.; Gonzalez, D.O.; Lila, O.V.; DeLucia, E.; Steven, J.C. Expression profiling soybean response to Pseudomonas syringae reveals new defense-related genes and rapid HR-specific downregulation of photosynthesis. Mol. Plant. Microbe Interact. 2005, 18, 1161–1174. [Google Scholar] [CrossRef] [Green Version]
  41. Li, G.; Chen, T.; Zhang, Z.; Li, B.; Tian, S. Roles of Aquaporins in Plant-Pathogen Interaction. Plants 2020, 9, 1134. [Google Scholar] [CrossRef] [PubMed]
  42. Schlub, R.; Smith, L.; Datnoff, L.; Pernezny, K. An overview of target spot of tomato caused by Corynespora cassiicola. In Proceedings of the II International Symposium on Tomato Diseases 808, Kusadasi, Turkey, 8 October 2007; pp. 25–28. [Google Scholar]
  43. Zhu, J.; Zhang, L.; Li, H.; Gao, Y.; Mu, W.; Liu, F. Development of a LAMP method for detecting the N75S mutant in SDHI-resistant Corynespora cassiicol. Anal. Biochem. 2020, 597, 113687. [Google Scholar] [CrossRef] [PubMed]
  44. Pernezny, K.; Stoffella, P.; Collins, J.; Carroll, A.; Beaney, A. Control of target spot of tomato with fungicides, systemic acquired resistance activators, and a biocontrol agent. Plant. Prot. Sci. Prague 2002, 38, 81–88. [Google Scholar] [CrossRef] [Green Version]
  45. Abdulridha, J.; Ampatzidis, Y.; Kakarla, S.C.; Roberts, P. Detection of target spot and bacterial spot diseases in tomato using UAV-based and benchtop-based hyperspectral imaging techniques. Precis. Agric. 2020, 21, 955–978. [Google Scholar] [CrossRef]
  46. De Jong, C.F.; Takken, F.L.; Cai, X.; de Wit, P.J.; Joosten, M.H. Attenuation of Cf-mediated defense responses at elevated temperatures correlates with a decrease in elicitor-binding sites. Mol. Plant. Microbe Interact. 2002, 15, 1040–1049. [Google Scholar] [CrossRef] [Green Version]
  47. Calleja-Cabrera, J.; Boter, M.; Oñate-Sánchez, L.; Pernas, M. Root growth adaptation to climate change in crops. Front. Plant. Sci. 2020, 11, 544. [Google Scholar] [CrossRef]
  48. Louws, F.; Wilson, M.; Campbell, H.; Cuppels, D.; Jones, J.; Shoemaker, P.; Sahin, F.; Miller, S. A Field control of bacterial spot and bacterial speck of tomato using a plant activator. Plant. Dis. 2001, 85, 481–488. [Google Scholar] [CrossRef] [Green Version]
  49. Qiao, K.; Liu, Q.; Huang, Y.; Xia, Y.; Zhang, S. Management of bacterial spot of tomato caused by copper-resistant Xanthomonas perforans using a small molecule compound carvacrol. Crop. Prot. 2020, 132, 105114. [Google Scholar] [CrossRef]
  50. Nowicki, M.; Foolad, M.R.; Nowakowska, M.; Kozik, E.U. Potato and tomato late blight caused by Phytophthora infestans: An overview of pathology and resistance breeding. Plant. Dis. 2012, 96, 4–17. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  51. Buziashvili, A.; Cherednichenko, L.; Kropyvko, S.; Yemets, A. Transgenic tomato lines expressing human lactoferrin show increased resistance to bacterial and fungal pathogens. Biocatal. Agric. Biotechnol. 2020, 25, 101602. [Google Scholar] [CrossRef]
  52. Glick, E.; Levy, Y.; Gafni, Y. The viral etiology of tomato yellow leaf curl disease–a review. Plant. Prot. Sci. 2009, 45, 81–97. [Google Scholar] [CrossRef] [Green Version]
  53. Dhaliwal, M.; Jindal, S.; Sharma, A.; Prasanna, H. Tomato yellow leaf curl virus disease of tomato and its management through resistance breeding: A review. J. Hortic. Sci. Biotechnol. 2020, 95, 425–444. [Google Scholar] [CrossRef]
  54. Ghanim, M.; Czosnek, H. Tomato yellow leaf curl geminivirus (TYLCV-Is) is transmitted among whiteflies (Bemisia tabaci) in a sex-related manner. J. Virol. 2000, 74, 4738–4745. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  55. Ghanim, M.; Morin, S.; Zeidan, M.; Czosnek, H. Evidence for transovarial transmission of tomato yellow leaf curl virus by its vector, the whiteflyBemisia tabaci. Virology 1998, 240, 295–303. [Google Scholar] [CrossRef] [Green Version]
  56. He, Y.-Z.; Wang, Y.-M.; Yin, T.-Y.; Fiallo-Olivé, E.; Liu, Y.-Q.; Hanley-Bowdoin, L.; Xiao-Wei, W. A plant DNA virus replicates in the salivary glands of its insect vector via recruitment of host DNA synthesis machinery. Proc. Natl. Acad. Sci. USA 2020, 117, 16928–16937. [Google Scholar] [CrossRef] [PubMed]
  57. Choi, H.; Jo, Y.; Cho, W.K.; Yu, J.; Tran, P.-T.; Salaipeth, L.; Hae-Ryun, K.; Hong-Soo, C.; Kook-Hyung, K. Identification of Viruses and Viroids Infecting Tomato and Pepper Plants in Vietnam by Metatranscriptomics. Int. J. Mol. Sci. 2020, 21, 7565. [Google Scholar] [CrossRef]
  58. Broadbent, L. Epidemiology and control of tomato mosaic virus. Annu. Rev. Phytopathol. 1976, 14, 75–96. [Google Scholar] [CrossRef]
  59. Xu, Y.; Zhang, S.; Shen, J.; Wu, Z.; Du, Z.; Gao, F. The phylogeographic history of tomato mosaic virus in Eurasia. Virology 2021, 554, 42–47. [Google Scholar] [CrossRef] [PubMed]
  60. Hughes, D.; Salathé, M. An open access repository of images on plant health to enable the development of mobile disease diagnostics. arXiv 2015, arXiv:1511.08060. [Google Scholar]
  61. SpMohanty/PlantVillage-Dataset. Available online: https://github.com/spMohanty/PlantVillage-Dataset (accessed on 24 January 2021).
  62. Schlemper, J.; Oktay, O.; Schaap, M.; Heinrich, M.; Kainz, B.; Glocker, B.; Daniel, R. Attention gated networks: Learning to leverage salient regions in medical images. Med. Image Anal. 2019, 53, 197–207. [Google Scholar] [CrossRef]
  63. Tm, P.; Pranathi, A.; SaiAshritha, K.; Chittaragi, N.B.; Koolagudi, S.G. Tomato leaf disease detection using convolutional neural networks. In Proceedings of the 2018 Eleventh International Conference on Contemporary Computing (IC3), Surat, India, 7–8 February 2020; pp. 1–5. [Google Scholar]
  64. Zhang, K.; Wu, Q.; Liu, A.; Meng, X. Can deep learning identify tomato leaf disease? Adv. Multimed. 2018, 2018, 6710865. [Google Scholar] [CrossRef] [Green Version]
  65. Dookie, M.; Ali, O.; Ramsubhag, A.; Jayaraman, J. Flowering gene regulation in tomato plants treated with brown seaweed extracts. Sci. Hortic. 2021, 276, 109715. [Google Scholar] [CrossRef]
Figure 1. Architecture of (A) original U-Net and (B) modified U-Net.
Figure 2. Score-CAM visualization of tomato leaf images, showing the regions on which the CNN model mainly bases its decision.
Figure 3. Overall Methodology of the study.
Figure 4. Sample images of healthy and different unhealthy tomato leaves from the Plant Village database.
Figure 5. Sample tomato leaf images from the database (left), masks generated by the trained modified U-net model with the NLL loss function (second from left), corresponding segmented leaves (second from right), and the ground truth from the database (right).
Figure 6. Comparison of the ROC curves for (A) binary, (B) six-class, and (C) ten-class classification of segmented leaf images.
Figure 7. Confusion matrices for healthy and unhealthy tomato leaf image classification using compound-scaling CNN-based models on segmented leaf images for (A) binary, (B) six-class, and (C) ten-class classification.
Figure 8. Score-CAM visualization of correctly classified tomato leaf images: Original leaves, score-CAM heat map on segmented leaves.
Figure 9. Score-CAM visualization to confirm how segmentation has helped in classification.
Table 1. EfficientNet baseline network.

| Stage | Operator | Resolution | Channels | Layers |
|-------|----------|------------|----------|--------|
| 1 | Conv 3 × 3 | 224 × 224 | 32 | 1 |
| 2 | MBConv1, k3 × 3 | 112 × 112 | 16 | 1 |
| 3 | MBConv6, k3 × 3 | 112 × 112 | 24 | 2 |
| 4 | MBConv6, k5 × 5 | 56 × 56 | 40 | 2 |
| 5 | MBConv6, k3 × 3 | 28 × 28 | 80 | 3 |
| 6 | MBConv6, k5 × 5 | 14 × 14 | 112 | 3 |
| 7 | MBConv6, k5 × 5 | 14 × 14 | 192 | 4 |
| 8 | MBConv6, k3 × 3 | 7 × 7 | 320 | 1 |
| 9 | Conv 1 × 1 & Pooling & FC | 7 × 7 | 1280 | 1 |
Table 2. The number of tomato leaf images for healthy and different unhealthy classes.

| Class | Sub-class (image count) |
|-------|-------------------------|
| Healthy | Healthy (1591) |
| Fungi | Early blight (1000), Septoria leaf spot (1771), Target spot (1404), Leaf mold (952) |
| Bacteria | Bacterial spot (2127) |
| Mold | Late blight mold (1910) |
| Virus | Tomato Yellow Leaf Curl Virus (5357), Tomato Mosaic Virus (373) |
| Mite | Two-spotted spider mite (1676) |
| Total | Tomato (18,161) |
Table 3. Summary of the U-net segmentation experiment.

| Dataset | Total Images and Masks | Train Set Count/Fold | Validation Set Count/Fold | Test Set Count/Fold |
|---------|------------------------|----------------------|---------------------------|---------------------|
| Plant Village tomato leaf images | 18,161 | 13,076 | 1453 | 3632 |
Table 4. Summary of different classification experiments (for both segmented and unsegmented experiments).

| Classification | Types | Total No. of Images/Class | Train Set Count/Fold | Validation Set Count/Fold | Test Set Count/Fold |
|----------------|-------|---------------------------|----------------------|---------------------------|---------------------|
| Binary-class | Healthy | 1591 | 1147 × 10 = 11,470 | 127 | 317 |
| | Unhealthy (9 diseases) | 16,570 | 11,930 | 1326 | 3314 |
| Six-class | Healthy | 1591 | 1147 × 3 = 3441 | 127 | 317 |
| | Fungi | 5127 | 3692 | 410 | 1025 |
| | Bacteria | 2127 | 1532 × 2 = 3064 | 170 | 425 |
| | Mold | 1910 | 1375 × 3 = 4125 | 153 | 382 |
| | Virus | 5730 | 4126 | 458 | 1146 |
| | Mite | 1676 | 1207 × 3 = 3621 | 134 | 335 |
| Ten-class | Healthy | 1591 | 1147 × 3 = 3441 | 127 | 317 |
| | Early Blight | 1000 | 720 × 5 = 3600 | 80 | 200 |
| | Septoria Leaf Spot | 1771 | 1275 × 3 = 3825 | 142 | 354 |
| | Target Spot | 1404 | 1011 × 3 = 3033 | 112 | 281 |
| | Leaf Mold | 952 | 686 × 5 = 3430 | 76 | 190 |
| | Bacterial Spot | 2127 | 1532 × 2 = 3064 | 170 | 425 |
| | Late Blight Mold | 1910 | 1375 × 3 = 4125 | 153 | 382 |
| | Tomato Yellow Leaf Curl Virus | 5357 | 3857 | 429 | 1071 |
| | Tomato Mosaic Virus | 373 | 268 × 13 = 3484 | 30 | 75 |
Table 5. Summary of training parameters for segmentation and classification experiments.

| Parameters | Segmentation Model | Classification Model |
|------------|--------------------|----------------------|
| Batch size | 16 | 16 |
| Learning rate | 0.001 | 0.001 |
| Epochs | 50 | 15 |
| Epochs patience | 8 | 6 |
| Stopping criteria | 8 | 5 |
| Loss function | NLL/BCE/MSE | BCE |
| Optimizer | ADAM | ADAM |
Table 6. Comparative performance of the original U-net and the modified U-net.

| Loss Function | Network | Test Loss | Test Accuracy | IoU | Dice | Inference Time T (s) |
|---------------|---------|-----------|---------------|-----|------|----------------------|
| NLL loss | U-net | 0.0168 | 97.25 | 96.83 | 97.11 | 14.05 |
| BCE loss | U-net | 0.0162 | 97.32 | 96.9 | 97.02 | 13.89 |
| MSE loss | U-net | 0.0134 | 97.52 | 97.25 | 97.35 | 13.66 |
| NLL loss | Modified U-net | 0.0076 | 98.66 | 98.5 | 98.73 | 12.12 |
| BCE loss | Modified U-net | 0.016 | 97.12 | 96.82 | 97.1 | 12.04 |
| MSE loss | Modified U-net | 0.089 | 98.19 | 98.25 | 98.43 | 11.76 |
Table 7. Summary of the tomato leaf disease classification performance using segmented leaf images (results with 95% CI).

| Scheme | Model | Overall Accuracy | Weighted Precision | Weighted Sensitivity | Weighted F1-Score | Weighted Specificity | Inference Time T (s) |
|--------|-------|------------------|--------------------|----------------------|-------------------|----------------------|----------------------|
| 2 Class | EfficientNet-B0 | 99.74 ± 0.07 | 99.75 ± 0.07 | 99.73 ± 0.08 | 99.73 ± 0.08 | 99.75 ± 0.07 | 19.32 |
| | EfficientNet-B4 | 99.82 ± 0.06 | 99.83 ± 0.06 | 99.82 ± 0.06 | 99.82 ± 0.06 | 98.74 ± 0.16 | 34.25 |
| | EfficientNet-B7 | 99.95 ± 0.03 | 99.94 ± 0.03 | 99.95 ± 0.03 | 99.95 ± 0.03 | 99.77 ± 0.07 | 44.12 |
| 6 Class | EfficientNet-B0 | 97.34 ± 0.23 | 97.38 ± 0.23 | 97.34 ± 0.23 | 97.33 ± 0.23 | 99.47 ± 0.11 | 20.45 |
| | EfficientNet-B4 | 98.49 ± 0.18 | 98.51 ± 0.18 | 98.49 ± 0.18 | 98.49 ± 0.18 | 99.73 ± 0.08 | 38.02 |
| | EfficientNet-B7 | 99.12 ± 0.14 | 99.1 ± 0.14 | 99.11 ± 0.14 | 99.1 ± 0.14 | 99.81 ± 0.06 | 45.18 |
| 10 Class | EfficientNet-B0 | 99.71 ± 0.08 | 98.69 ± 0.17 | 98.68 ± 0.17 | 98.68 ± 0.17 | 99.87 ± 0.05 | 22.16 |
| | EfficientNet-B4 | 99.89 ± 0.05 | 99.45 ± 0.11 | 99.44 ± 0.11 | 99.4 ± 0.11 | 99.94 ± 0.04 | 41.24 |
| | EfficientNet-B7 | 99.84 ± 0.06 | 99.15 ± 0.13 | 99.13 ± 0.14 | 99.13 ± 0.14 | 99.92 ± 0.04 | 51.23 |
Table 8. Results of this paper compared with other state-of-the-art results.

| Paper | Classification | Dataset | Accuracy | Precision | Recall | F1-Score | Images |
|-------|----------------|---------|----------|-----------|--------|----------|--------|
| Agarwal et al. [23] | Ten-class | Plant Village | 91% | 90% | 92% | 91% | Non-segmented |
| Tm et al. [63] | Ten-class | Plant Village | 94% | 94.81% | 94.78% | 94.8% | Segmented |
| Zhang et al. [64] | Two-class | Own dataset | 95% | - | - | - | Non-segmented |
| Dookie et al. [65] | Two-class | Own dataset | 85% | - | 84% | - | Non-segmented |
| Proposed study | Two-class | Plant Village | 99.95% | 99.94% | 99.95% | 99.95% | Segmented |
| | Six-class | Plant Village | 99.12% | 99.10% | 99.11% | 99.10% | Segmented |
| | Ten-class | Plant Village | 99.89% | 99.45% | 99.44% | 99.4% | Segmented |