Article

Mango Leaf Disease Recognition and Classification Using Novel Segmentation and Vein Pattern Technique

1 Department of Computer Science, COMSATS University Islamabad, Wah Campus, Islamabad 45550, Pakistan
2 Department of Computer Science & Engineering, Ewha Womans University, Seoul 120-750, Korea
3 Department of Computer Science, Hanyang University, Seoul 04763, Korea
* Author to whom correspondence should be addressed.
Appl. Sci. 2021, 11(24), 11901; https://doi.org/10.3390/app112411901
Submission received: 29 October 2021 / Revised: 6 December 2021 / Accepted: 7 December 2021 / Published: 14 December 2021
(This article belongs to the Special Issue Sustainable Agriculture and Advances of Remote Sensing)

Abstract

Mango fruit is in high demand, so timely control of mango plant diseases is necessary to secure high returns. Automated recognition of mango plant leaf diseases remains a challenge: manual disease detection is not a feasible choice in this computerized era owing to its high cost, the scarcity of mango experts, and the variation in symptoms. Among these challenges, the segmentation of the diseased part is a major issue, being the prerequisite for correct recognition and identification. For this purpose, a novel segmentation approach is proposed in this study that segments the diseased part by considering the vein pattern of the leaf. After this leaf vein-seg step, features are extracted and fused using canonical correlation analysis (CCA)-based fusion. As the final identification step, a cubic support vector machine (SVM) is implemented to validate the results. The highest accuracy achieved by the proposed model is 95.5%, which shows that it can help mango plant growers with the timely recognition and identification of diseases.

1. Introduction

Countries dependent upon agriculture face a serious threat and great loss from plant diseases, which reduce the quality and quantity of fruits and yields [1]. Pakistan is among those countries, earning a large share of its income by producing and exporting a variety of crops, vegetables, and fruits cultivated in different areas of the country [2]. It is therefore necessary to identify diseased plants by applying computer vision and image processing techniques [3,4]. Recently, deep learning (DL) techniques, specifically convolutional neural networks (CNNs), have achieved extraordinary results in many applications, including the classification of plant diseases [5,6]. The mango is a highly popular fruit, available in summer [7], and is important in the agricultural industry of Pakistan due to its huge production volume. Several approaches to the detection and identification of mango plant leaf diseases have been proposed in the literature. Although a large number of diseases affect mango orchards, only some of them cause great loss to the economy of the country. A few of the more common diseases [8] include powdery mildew, sooty mold, anthracnose, and apical necrosis, as shown in Figure 1. In the current era, computer scientists aim to devise computer-based solutions that identify diseases in their initial phase. This will help farmers safeguard the crop until it is harvested, reducing economic loss [9]. According to agricultural experts, naked-eye observation is the traditional method used to recognize plant diseases; it is very expensive and time-consuming, as it requires continuous monitoring [3,10]. Hence, it is almost impossible to accurately recognize plant diseases at an early stage with this method.
Unfortunately, very few techniques addressing the diseases of the mango, also called the king of fruits, have been reported to date [11,12], owing to the complex structure and pattern of the plant. Hence, there is a need for robust techniques that identify mango diseases automatically, accurately, and efficiently [4,13,14,15]. For this purpose, the images used as a baseline can be captured by digital and mobile cameras [16,17].
Machine learning (ML), a sub-branch of artificial intelligence (AI) [19], plays an important role in the identification of diseases [16,18]. It enables computer-based systems to provide accurate and precise results, and ML techniques are often inspired by real-world objects and processes [20]. A few computerized techniques, for instance, segmentation by K-means and classification using an SVM to identify the diseased area, are reported in [21].
Hence, as per the above discussion, there is a strong need for the automatic detection, identification, and classification of the diseases of the mango plant. Keeping this in mind, this article covers the following issues: data augmentation to increase the dataset; tracking the color, size, and texture of the diseased part of the leaf; handling background variation and the diseased part; the proper segmentation of the unhealthy part of the leaf; and, at a later stage, robust feature extraction and fusion to classify the disease. The key contributions of this work are:
  • Image resizing and augmentation in order to set query images.
  • A method for the segmentation of the diseased part.
  • Fusion of color and LBP features by performing canonical correlation analysis (CCA).
  • Using classifiers of ten different types to perform identification and recognition.

2. Literature Review

In recent years, various techniques and methods have been established for the detection of plant leaf diseases. These are generally categorized into disease finding (detection) methods and disease sorting (classification) methods [22]. Many techniques use segmentation, feature fusion, and image classification, implemented on cotton, strawberry, mango, tomato, rice, sugarcane, and citrus. These methods are likewise appropriate for leaf, flower, and fruit diseases because they use proper segmentation, feature extraction, and classification. To make a computer-based system work efficiently, ML techniques are generally used to enhance the visualization of disease symptoms and to segment the diseased part for classification purposes. Mango fruit is important in the agricultural sector due to its massive production volume in Pakistan; therefore, several approaches for the detection and identification of mango plant leaf diseases have been proposed to prevent a loss of harvest.
Iqbal et al. [23] considered segmentation, recognition, and identification techniques. They found that almost all the techniques are in the initial stage. Moreover, they discussed almost all existing methods along with their advantages, limitations, challenges, and models of ML (image processing) for the recognition and identification of diseases.
Shin et al. [24], in their study on powdery mildew of strawberry leaves, achieved an accuracy of 94.34% by combining an artificial neural network (ANN) and speeded-up robust features (SURF). However, by using an SVM and GLCM, their highest classification accuracy was 88.98%. They used HOG, SURF, GLCM, and two supervised ML methods (ANN and SVM).
Pane et al. [25] adopted an ML technique using wavelengths between 403 and 446 nm (the blue band) for detection. They precisely distinguished unhealthy and healthy leaves of wild rocket, also called salad leaves. Bhatia et al. [26] used the Friedman test to rank multiple classifiers, and post hoc analysis was also performed using the Nemenyi test. In their study, they found the MGSVM to be the superior classifier, with an accuracy of 94.74%. Lin et al. [17] proved their results to be 3.15% more accurate than the traditional method used on pumpkin leaves; they used PCA to obtain 97.3% accurate results.
Shah [27] extracted color features to detect diseases on cotton leaves. Kahlout et al. [28] developed an expert system for the detection of diseases, including powdery mildew and sooty mold, on all members of the citrus family. Sharif et al. [29] recommended a computerized system to segment and classify the diseases of citrus plants: first, an optimized weight technique was used to recognize unhealthy parts of the leaf; second, color, geometric, and texture descriptors were combined; lastly, the best features were selected by a hybrid feature selection technique combining PCA and entropy, and 90% accuracy was obtained. Udayet et al. [30] proposed a method to classify anthracnose (diseased) leaves of mango plants using a multiple-layer convolutional neural network (CNN). The method was applied to 1070 images collected with their own cameras and gadgets, and the classification accuracy reached 97.13%. Kestur et al. [31] presented a deep learning-based segmentation method in 2019 called MangoNet, with results that are 73.6% accurate. Arivazhagan et al. [11] used a CNN and showed 96.6% accurate outcomes; no preprocessing or feature extraction was performed in their technique. Srunitha [32] detected unhealthy regions for mango diseases, including red rust, anthracnose, powdery mildew, and sooty mold.
Sooty mold and powdery mildew both create a layer on the leaf by disturbing its vein pattern, as the vein is a vital part of the plant. No techniques have been proposed to make this specific diagnosis, which is a major weakness in the available literature. So, a novel segmentation technique was constructed in this work that segments the diseased part by considering the vein pattern of mango leaves. After segmentation, two features (color and texture) are extracted and used for fusion and classification purposes. We used a self-collected dataset to accomplish this task. The dataset was collected using mobile cameras and digital gadgets. Details about this work are given in Section 3. A concise and precise description of the experimental results and their interpretation is given in Section 4, while Section 5 presents the conclusion drawn from those results.

3. Material and Method

The primary step of this work was the preparation of a dataset. The dataset used for this work was a collection of self-collected images captured using different types of image-capturing gadgets. These RGB images were collected from different mango-growing regions in Pakistan, including Multan, Lahore, and Faisalabad. The collected images were resized to 256 × 256 after being annotated by expert pathologists, as they were of dissimilar sizes. Figure 1 presents the workflow of the proposed technique, which comprises the following steps: (1) the preprocessing of images, consisting of data augmentation followed by image resizing; (2) the use of the proposed model to segment the images obtained as a result of the resizing operation (codebook); (3) color and texture (LBP) feature extraction. Finally, the images were classified using 10 different types of classifiers.

3.1. Preprocessing

The purpose of preprocessing is to improve the segmentation and classification accuracy by enhancing the quality of the image. The detailed sketch of each phase implemented for this purpose is as follows:

Resizing and Data Augmentation

A total of 29 images of healthy and unhealthy mango plant leaves were collected. Some of the images (2 out of 29) were distorted upon applying the resize operation. The distorted images were discarded, and the remaining 27 images were augmented by flipping and rotating [33] them horizontally, vertically, and both horizontally and vertically, as well as by using power-law transformations with gamma = 0.5 and c = 1, as shown in Figure 2. As such, 135 images were made available for tuning the proposed algorithm, as shown in Table 1.
A uniform image size of 256 × 256 was utilized throughout the current study. From the whole dataset, only the diseased images were used for segmentation, while the other 45 images, of healthy leaves, were used for classification.
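The flipping and power-law (gamma) steps described above can be sketched as follows. This is an illustrative pure-Python reimplementation, not the authors' code; the function names are our own, and images are represented as lists of intensity rows.

```python
# Sketch of the augmentation step: horizontal/vertical flips and a
# power-law transform s = c * r^gamma with c = 1, gamma = 0.5 (as in the paper).

def flip_horizontal(img):
    """Mirror each row of a 2D intensity image (list of lists)."""
    return [row[::-1] for row in img]

def flip_vertical(img):
    """Mirror the order of the rows."""
    return img[::-1]

def power_law(img, c=1.0, gamma=0.5):
    """Apply s = c * r^gamma to intensities in [0, 255]."""
    return [[round(255 * c * (p / 255) ** gamma) for p in row] for row in img]

img = [[0, 64], [128, 255]]
print(flip_horizontal(img))   # [[64, 0], [255, 128]]
print(flip_vertical(img))     # [[128, 255], [0, 64]]
print(power_law(img))         # gamma = 0.5 brightens mid-tone intensities
```

Combining the two flips reproduces the "both horizontally and vertically" variant; applying the power-law transform to each variant yields the additional augmented copies.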

3.2. Proposed Leaf Vein-Seg Architecture

The second step after preprocessing is the proposed leaf vein-seg architecture. The powder-like, purplish-white fungus growing mainly on the leaves, which makes the plants dry and brown, is called powdery mildew [34]. The honeydew-like insect secretions that form a brownish layer on the leaf of mango plants are called sooty mold.
The proposed architecture is a stepwise process that extracts the veins of the leaf. The extracted veins are helpful in further processing and in classifying the diseases of the mango plant. First, the RGB input image is converted to a binary image, which is then converted to a gray-scale image to extract a single channel. CLAHE is applied to this image to improve its quality by enhancing its contrast; CLAHE operates on small regions of the image called tiles and limits the over-amplification of contrast [35]. Bilinear interpolation is then used to remove artificial boundaries by combining the neighboring tiles.
An average filter of size 9 × 9 is then applied to the output gray-scale image to suppress its background. A mean (average) filter smooths the image by reducing the intensity variation among neighboring pixels: it replaces each pixel value with the average of that pixel and its neighbors, moving pixel by pixel through the whole image. Mathematically, it can be represented by Equation (1).
$$I_{new}(x,y) = \frac{1}{81} \sum_{j=-4}^{4} \sum_{i=-4}^{4} I_{old}(x+i,\, y+j) \quad (1)$$
The output is then normalized to make the image pixel values between 0 and 255 as given in Equation (2).
$$I_{new}(x,y)_{normalized} = 255 \times \frac{I_{new}(x,y) - \min I_{new}}{\max I_{new} - \min I_{new}} \quad (2)$$
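The averaging and normalization steps of Equations (1) and (2) can be sketched as below. This is a plain-Python illustration under two assumptions of ours: borders are handled by clamping coordinates, and Equation (2) is read as min-max rescaling to [0, 255]; the window size k is a parameter (the paper uses k = 9).

```python
# k x k mean filter with clamped borders, then min-max normalization.

def mean_filter(img, k=9):
    h, w = len(img), len(img[0])
    r = k // 2
    out = []
    for y in range(h):
        row = []
        for x in range(w):
            total = 0
            for j in range(-r, r + 1):
                for i in range(-r, r + 1):
                    # clamp neighbor coordinates at the image border
                    yy = min(max(y + j, 0), h - 1)
                    xx = min(max(x + i, 0), w - 1)
                    total += img[yy][xx]
            row.append(total / (k * k))
        out.append(row)
    return out

def normalize(img):
    """Rescale all pixel values to the range [0, 255]."""
    lo = min(min(row) for row in img)
    hi = max(max(row) for row in img)
    scale = 255 / (hi - lo) if hi != lo else 0
    return [[round((p - lo) * scale) for p in row] for row in img]

smooth = mean_filter([[10, 20, 30], [40, 50, 60], [70, 80, 90]], k=3)
print(normalize(smooth))  # extreme corners map to 0 and 255
```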
The edges of the image are detected, and noise is removed by sharpening the image. In the next step, the difference is calculated between the images obtained in the previous two steps, namely, the gray-scale image and the image obtained after the application of the average filter and its normalization, which are the same size. This step is performed to compare the images and correct uneven luminance. Its mathematical representation is shown in Equation (3):
$$D(x,y) = I_{1}(x,y) - I_{new}(x,y)_{normalized} \quad (3)$$
where I<sub>1</sub>(x, y) is the gray-scale image and I<sub>new</sub>(x, y)<sub>normalized</sub> is the output after the application of the average filter and its normalization. A threshold is then applied to the obtained image D(x, y). A method was designed to perform this task: a global threshold level is computed for D(x, y), obtained in Equation (3), and this level is used to convert the intensity image to a binary image. Normalized intensity values lie between 0 and 1. The histogram is segmented into two parts using a starting threshold value equal to half of the maximum dynamic range, where the maximum dynamic range for a B-bit image is
$$H = 2^{B} - 1 \quad (4)$$
Foreground and background values are computed from the sample means: m<sub>f,0</sub> is the mean gray value associated with the foreground, and m<sub>b,0</sub> is the mean gray value associated with the background. Threshold value 1 is computed from these two means, and this process is repeated until the threshold value no longer changes. The resulting image is converted to a binary image D. The next step is to take the complement of D: all zeros become ones and all ones become zeros. Mathematically, this is represented in Equation (5).
$$\bar{D}(x,y) = 1 - D(x,y) \quad (5)$$
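A common reading of this iterative threshold computation is the Ridler-Calvard (isodata) scheme, sketched below under that assumption: start at half the dynamic range (Equation (4)), split the pixels into foreground and background, and update the threshold to the mean of the two class means until it stabilizes; the resulting binary image is then complemented as in Equation (5).

```python
def isodata_threshold(pixels, bits=8):
    """Iteratively refine a global threshold from the class means."""
    t = (2 ** bits - 1) / 2                   # start at half the dynamic range
    while True:
        fg = [p for p in pixels if p > t]
        bg = [p for p in pixels if p <= t]
        mf = sum(fg) / len(fg) if fg else t   # foreground mean
        mb = sum(bg) / len(bg) if bg else t   # background mean
        t_new = (mf + mb) / 2
        if t_new == t:                        # stop when the threshold is stable
            return t
        t = t_new

pixels = [10, 12, 14, 200, 210, 220]
t = isodata_threshold(pixels)
binary = [1 if p > t else 0 for p in pixels]
complement = [1 - b for b in binary]          # Equation (5): invert the binary image
print(t, binary, complement)
```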
Another method was designed to obtain the final segmented output in the form of edges and veins, which is used to detect the diseased leaf of the mango plant. It takes as input the binary image obtained in Equation (5) and produces a mask of indicated colors for the output image, using a 1 × 3 vector with values between 0 and 1 for the color. The mask is represented as a logical two-dimensional matrix, where [0 0 0] denotes black and [1 1 1] denotes white. The output of this step is a segmented RGB image. The processed images, including both diseased and healthy leaves of a mango plant, are presented in Figure 3.

3.3. Features Extraction and Fusion

Feature extraction is a key phase of an ML-based, artificially intelligent automated system. Color and texture features are helpful descriptors: color features capture information about color, whereas texture features provide texture analysis of the diseased leaves of the mango plant. To classify the diseases (powdery mildew, sooty mold) of the mango plant, color, shape, and texture features are extracted in the proposed method. The structure of the feature (color, shape, and texture) extraction is shown in Figure 4.
The comprehensive explanation of each phase is as follows: preprocessed images are passed through the codebook and a segmented image is achieved in the training phase; the segmented image is used for feature extraction; and then color, shape, and texture features are extracted.
Color Features: These are a significant resource for the detection and recognition of diseased/unhealthy parts of mango plants, as every mango plant leaf disease has a different color and shade. In this paper, the diseases on mango plant leaves are recognized by the extraction of color features. As stated earlier, each disease has its own pattern and shading, so four types of color spaces are utilized to obtain the extracted colors: RGB; hue, saturation, and value (HSV); luminance with a and b components (LAB); and hue, saturation, and intensity (HSI). Different information is obtained along every channel; therefore, the maximum information about the defective part of the mango leaf is obtained by applying these color spaces. Color features are obtained using the six statistical metrics given in Equations (6)–(11). One vector, sized 1 × 3000, is used to combine these features for each channel. The statistical metrics are calculated by the following formulae:
$$\bar{A} = \frac{1}{n} \sum_{i=1}^{n} a_{i} \quad (6)$$
$$\sigma = \sqrt{\frac{1}{n} \sum_{i=1}^{n} (a_{i} - \bar{a})^{2}} \quad (7)$$
$$V = \frac{1}{n} \sum_{i=1}^{n} (a_{i} - \bar{a})^{2} \quad (8)$$
$$Entropy = -\sum_{i=1}^{c} p_{i} \log_{2} p_{i} \quad (9)$$
$$KR = \frac{n \sum_{i=1}^{n} (A_{i} - A_{avg})^{4}}{\left( \sum_{i=1}^{n} (A_{i} - A_{avg})^{2} \right)^{2}} \quad (10)$$
$$Skewness = \frac{\frac{1}{n} \sum_{i=1}^{n} (a_{i} - \bar{a})^{3}}{\left( \frac{1}{n} \sum_{i=1}^{n} (a_{i} - \bar{a})^{2} \right)^{3/2}} \quad (11)$$
where A ¯ denotes the mean feature, σ denotes the standard deviation feature, V   represents the variance feature,   E n t r o p y describes the entropy, K R represents the kurtosis feature, and S k e w n e s s indicates the skewness feature.
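The six statistics in Equations (6)–(11) can be computed for one color channel as sketched below. This is a plain-Python illustration, not the authors' code; in particular, the entropy here is taken over the empirical histogram of distinct values, which is one possible reading of the probabilities p_i.

```python
import math

def channel_stats(a):
    """Mean, std, variance, entropy, kurtosis, skewness of a value list."""
    n = len(a)
    mean = sum(a) / n                                    # Eq. (6)
    var = sum((x - mean) ** 2 for x in a) / n            # Eq. (8)
    std = math.sqrt(var)                                 # Eq. (7)
    counts = {}
    for x in a:                                          # empirical histogram
        counts[x] = counts.get(x, 0) + 1
    entropy = -sum((c / n) * math.log2(c / n)
                   for c in counts.values())             # Eq. (9)
    m2 = sum((x - mean) ** 2 for x in a)
    m4 = sum((x - mean) ** 4 for x in a)
    kurtosis = n * m4 / m2 ** 2                          # Eq. (10)
    skewness = (sum((x - mean) ** 3 for x in a) / n) / var ** 1.5  # Eq. (11)
    return mean, std, var, entropy, kurtosis, skewness

print(channel_stats([10, 10, 20, 40]))
```

Running this per channel of each color space and concatenating the results yields the combined 1 × 3000 color feature vector described above.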
Texture Features: Texture features alone cannot identify identical images; other features, such as color, work alongside texture features to segregate texture and non-texture regions. To handle complications in image texture, a classic but simple technique, the local binary pattern (LBP), is implemented [36]. Hence, LBP was used in this research to extract texture features. It is defined as follows:
$$LBP_{A,B} = \sum_{a=0}^{A-1} F(g_{a} - g_{c})\, 2^{a}, \qquad F(x) = \begin{cases} 1 & \text{if } x \ge 0 \\ 0 & \text{otherwise} \end{cases} \quad (12)$$
In Equation (12), A denotes the number of neighborhood pixels, B the radius, g<sub>a</sub> a neighborhood pixel value, g<sub>c</sub> the center pixel value, and 2<sup>a</sup> the binomial factor. As a result, a 1 × 800 vector was generated after extracting features from the different channels and color spaces.
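Equation (12) for a single pixel can be sketched as follows (an illustrative implementation; the neighbor ordering and sample values are our own):

```python
# LBP code for one center pixel: threshold the A circular neighbors
# against the center g_c and weight the results by powers of two.

def lbp_code(neighbors, center):
    """neighbors: values g_0 .. g_{A-1} in a fixed circular order."""
    code = 0
    for a, g in enumerate(neighbors):
        if g - center >= 0:          # F(x) = 1 if x >= 0
            code += 2 ** a           # binomial weight 2^a
        # F(x) = 0 otherwise: contributes nothing
    return code

# A = 8 neighbors around a center value of 90:
print(lbp_code([100, 80, 95, 90, 60, 120, 85, 91], 90))  # 173
```

Computing this code for every pixel and histogramming the results per region gives the texture descriptor that is concatenated into the 1 × 800 vector.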

3.4. Features Fusion and Classification

Finally, CCA-based feature reduction is applied to the extracted features, and a serial-based fusion approach is used to fuse the resultant reduced vectors. A vector of dimensions n × 2000 is obtained simply by concatenating the features; this is fed into the classifiers. Ten different types of classification techniques were implemented for the analysis of classification accuracy. In the past, agricultural applications have suffered from the unavailability of data due to its complex structure and collection cost, especially for mango plants, because these plants grow only in select regions; the cost of labeling for data acquisition is also very high [37]. This issue encouraged us to collect data ourselves from mango-growing areas in Pakistan. We adopted two strategies in this research: (1) data augmentation, and (2) a segmentation technique to segment the diseased parts and veins of mango leaves. This is a unique technique, as it has not been applied in earlier agricultural applications, especially to the mango plant. As a final point, the segmented images were entered into a computer-based system for feature fusion and then for identification. The total computation took about 45 min, as the segmentation of one image took almost 7.5 s. All simulations were performed on a personal computer with the following specifications: 64-bit Windows operating system with MATLAB version 2018, 32 GB RAM, an Intel® Xeon® CPU at 2.2 GHz, and a GeForce GTX 1080 GPU.
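The serial fusion step, i.e., concatenating the reduced color and texture vectors per sample, can be sketched as below. The dimensions here are illustrative only, not the paper's 1 × 3000 / 1 × 800 / n × 2000 vectors, and the CCA reduction itself is assumed to have already produced the inputs.

```python
# Serial (concatenation-based) fusion of two per-sample feature matrices.

def serial_fuse(color_features, texture_features):
    """Concatenate the feature vectors of corresponding samples."""
    assert len(color_features) == len(texture_features)
    return [c + t for c, t in zip(color_features, texture_features)]

color = [[0.1, 0.2], [0.3, 0.4]]     # n x d1 reduced color features
texture = [[0.5], [0.6]]             # n x d2 reduced texture features
fused = serial_fuse(color, texture)  # n x (d1 + d2) fused descriptor
print(fused)                         # [[0.1, 0.2, 0.5], [0.3, 0.4, 0.6]]
```

The fused n × (d1 + d2) matrix is what gets fed to the classifiers.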

4. Experimental Results and Analysis

In this section, the results of the proposed algorithm are discussed in both graphic and tabular form. For validation, of the total 135 images used, 45 were of sooty mold, 45 were of powdery mildew, and 45 were of healthy mango plants. As a primary step, the proposed segmentation technique was applied. Second, the classification results for this segmentation technique were tabulated by implementing different standardized classifiers. The detailed results, with their descriptions, are discussed in the following subsections. The identification of each disease is analyzed against the images of the healthy leaves of the mango plant, and then the classification accuracy for all diseased leaves is compared with that for healthy leaves. The proposed technique was tested with 10 standard classifiers using 10-fold cross-validation.

4.1. Test 1: Powdery Mildew vs. Healthy

In this test, 45 powdery mildew and 45 healthy mango leaf images were classified. Table 2 shows that an accuracy of 96.6% was attained through cubic SVM. It proved to be the highest amongst all the other competing classifiers. Furthermore, 0.16, 0.97, 0.97, and 3.4 were the obtained values of sensitivity, specificity, AUC, and FNR, respectively. Sensitivity and specificity mathematically describe the accuracy of a test that reports the presence or absence of a condition. The confusion matrix of this test is also given in Table 3.
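The reported metrics all follow from a binary confusion matrix such as the one in Table 3; the relationship can be sketched as below. The counts used here are made up for illustration, not Table 3's actual values.

```python
# Accuracy, sensitivity, specificity, and FNR from a binary confusion matrix.

def binary_metrics(tp, fn, fp, tn):
    total = tp + fn + fp + tn
    accuracy = (tp + tn) / total
    sensitivity = tp / (tp + fn)       # true-positive rate
    specificity = tn / (tn + fp)       # true-negative rate
    fnr = fn / (fn + tp) * 100         # false-negative rate, in percent
    return accuracy, sensitivity, specificity, fnr

# Hypothetical counts for a 45-vs-45 test:
acc, sens, spec, fnr = binary_metrics(tp=43, fn=2, fp=1, tn=44)
print(acc, sens, spec, fnr)
```

Note that sensitivity and FNR are complementary: FNR (%) = 100 × (1 − sensitivity), which is why an accuracy of 96.6% pairs with an FNR of 3.4 in Table 2.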

4.2. Test 2: Sooty Mold vs. Healthy

In this test, 45 images of sooty mold and 45 healthy mango leaf images were classified. Table 4 shows that an accuracy of 95.5% was attained by using linear SVM. It proved to be the highest amongst all the other competing classifiers. Furthermore, 0.05, 0.95, 0.95, and 4.5 are the obtained values of sensitivity, specificity, AUC, and FNR, respectively. The confusion matrix of this test is also given in Table 5.

4.3. Test 3: Diseased vs. Healthy

This section presents the findings of the classification of all unhealthy and healthy images of mango plant leaves. Table 6 shows the classification results for all the diseases obtained after CCA-based feature fusion. This test was performed on all 135 images. An accuracy of 95.5% was attained using a cubic SVM, the highest among all the competing classifiers. Moreover, 0.03, 0.93, 0.99, and 4.5 were the values of the sensitivity, specificity, AUC, and FNR, respectively. The confusion matrix of this test is also given in Table 7.

4.4. Discussion

The achieved results are more efficient than those presented by Srunitha et al. [32] in 2018, who introduced k-means for the segmentation of the diseased part and a multiclass SVM for classification and obtained an accuracy of 96%, as shown in Table 8. However, the list of diseases they detected did not contain powdery mildew, whereas our proposed method showed an accuracy of 95.5% while detecting powdery mildew as well. None of the available studies has yet examined the vein pattern of plant leaves; as the vein is the most important part of the plant for food and water transportation, the proposed technique compares favorably with the available techniques. Furthermore, the mobile-based system proposed by Anantrasirichai et al. [38] in 2019 uses classification techniques and obtained an accuracy of 80% when detecting the diseases of mango plants. In comparison, the achieved accuracy of the proposed model is much better, at 95.5%.

5. Conclusions

A novel segmentation technique was introduced in this paper to recognize two types of mango leaf disease, powdery mildew and sooty mold, using a self-collected dataset. A leaf vein segmentation technique that detects the vein pattern of the leaf was proposed. After segmentation, the leaf's features were extracted on the basis of color and texture, and ten different classifiers were used to obtain the results. The overall performance of the proposed method is much improved compared with already available methods. However, the following improvements will be considered in the future: (1) increasing the number of images in the dataset, (2) minimizing the identification time through feature optimization algorithms [39,40,41] to enable real-time implementation, and (3) implementing some of the latest deep learning models [42,43,44,45].

Author Contributions

Conceptualization, R.S. and J.H.S.; methodology, R.S., J.H.S., and M.S.; software, R.S. and M.S.; validation, J.H.S., M.Y., and M.S.; formal analysis, M.Y. and J.C.; investigation, H.-S.Y. and J.C.; resources, M.Y. and J.C.; data curation, H.-S.Y. and J.C.; writing—original draft preparation, R.S. and M.S.; writing—review and editing, J.C. and M.Y.; visualization, M.S. and M.Y.; supervision, J.H.S. and J.C.; project administration, H.-S.Y.; funding acquisition, H.-S.Y. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the National Research Foundation of Korea (NRF) grant funded by the Korea government (Ministry of Science and ICT; MSIT) under Grant RF-2018R1A5A7059549.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Tran, T.-T.; Choi, J.-W.; Le, T.-T.H.; Kim, J.-W. A comparative study of deep CNN in forecasting and classifying the macronutrient deficiencies on development of tomato plant. Appl. Sci. 2019, 9, 1601. [Google Scholar] [CrossRef] [Green Version]
  2. Shah, F.A.; Khan, M.A.; Sharif, M.; Tariq, U.; Khan, A.; Kadry, S.; Thinnukool, O. A Cascaded Design of Best Features Selection for Fruit Diseases Recognition. Comput. Mater. Contin. 2021, 70, 1491–1507. [Google Scholar] [CrossRef]
  3. Akram, T.; Sharif, M.; Saba, T. Fruits diseases classification: Exploiting a hierarchical framework for deep features fusion and selection. Multimed. Tools Appl. 2020, 79, 25763–25783. [Google Scholar]
  4. Rehman, M.Z.U.; Ahmed, F.; Khan, M.A.; Tariq, U.; Jamal, S.S.; Ahmad, J.; Hussain, I. Classification of Citrus Plant Diseases Using Deep Transfer Learning. CMC Comput. Mater. Contin. 2022, 70, 1401–1417. [Google Scholar]
  5. Maeda-Gutierrez, V.; Galvan-Tejada, C.E.; Zanella-Calzada, L.A.; Celaya-Padilla, J.M.; Galván-Tejada, J.I.; Gamboa-Rosales, H.; Luna-Garcia, H.; Magallanes-Quintanar, R.; Guerrero Mendez, C.A.; Olvera-Olvera, C.A. Comparison of convolutional neural network architectures for classification of tomato plant diseases. Appl. Sci. 2020, 10, 1245. [Google Scholar] [CrossRef] [Green Version]
  6. Hussain, N.; Khan, M.A.; Tariq, U.; Kadry, S.; Yar, M.A.E.; Mostafa, A.M.; Alnuaim, A.A.; Ahmad, S. Multiclass Cucumber Leaf Diseases Recognition Using Best Feature Selection. CMC Comput. Mater. Contin. 2022, 70, 3281–3294. [Google Scholar] [CrossRef]
  7. Khan, M.A.; Sharif, M.I.; Raza, M.; Anjum, A.; Saba, T.; Shad, S.A. Skin lesion segmentation and classification: A unified framework of deep neural network features fusion and selection. Expert Syst. 2019, 14, 1–23. [Google Scholar] [CrossRef]
  8. Rosman, N.F.; Asli, N.A.; Abdullah, S.; Rusop, M. Some common disease in mango. In AIP Conference Proceedings; AIP Publishing LLC: Melville, NY, USA, 2019; p. 020019. [Google Scholar]
  9. Jogekar, R.N.; Tiwari, N. A review of deep learning techniques for identification and diagnosis of plant leaf disease. In Smart Trends in Computing and Communications: Proceedings of SmartCom 2020; Springer: New York, NY, USA, 2021; pp. 435–441. [Google Scholar]
  10. Runno-Paurson, E.; Lääniste, P.; Nassar, H.; Hansen, M.; Eremeev, V.; Metspalu, L.; Edesi, L.; Kännaste, A.; Niinemets, Ü. Alternaria Black Spot (Alternaria brassicae) Infection Severity on Cruciferous Oilseed Crops. Appl. Sci. 2021, 11, 8507. [Google Scholar] [CrossRef]
  11. Arivazhagan, S.; Ligi, S.V. Mango leaf diseases identification using convolutional neural network. Int. J. Pure Appl. Math. 2018, 120, 11067–11079. [Google Scholar]
  12. Saleem, R.; Shah, J.H.; Sharif, M.; Ansari, G.J. Mango Leaf Disease Identification Using Fully Resolution Convolutional Network. Comput. Mater. Contin. 2021, 69, 3581–3601. [Google Scholar] [CrossRef]
  13. Latif, M.R.; Khan, M.A.; Javed, M.Y.; Masood, H.; Tariq, U.; Nam, Y.; Kadry, S. Cotton Leaf Diseases Recognition Using Deep Learning and Genetic Algorithm. Comput. Mater. Contin. 2021, 69, 2917–2932. [Google Scholar] [CrossRef]
  14. Adeel, A.; Khan, M.A.; Akram, T.; Sharif, A.; Yasmin, M.; Saba, T.; Javed, K. Entropy-controlled deep features selection framework for grape leaf diseases recognition. Expert Syst. 2020. [Google Scholar] [CrossRef]
  15. Aurangzeb, K.; Akmal, F.; Khan, M.A.; Sharif, M.; Javed, M.Y. Advanced machine learning algorithm based system for crops leaf diseases recognition. In Proceedings of the 6th Conference on Data Science and Machine Learning Applications (CDMA), Riyadh, Saudi Arabia, 4–5 March 2020; pp. 146–151. [Google Scholar]
  16. Saeed, F.; Khan, M.A.; Sharif, M.; Mittal, M.; Goyal, L.M.; Roy, S. Deep neural network features fusion and selection based on PLS regression with an application for crops diseases classification. Appl. Soft Comput. 2021, 103, 107164. [Google Scholar] [CrossRef]
  17. Tariq, U.; Hussain, N.; Nam, Y.; Kadry, S. An Integrated Deep Learning Framework for Fruits Diseases Classification. Comput. Mater. Contin. 2021, 71, 1387–1402. [Google Scholar]
  18. Khan, M.A.; Akram, T.; Sharif, M.; Javed, K.; Raza, M.; Saba, T. An automated system for cucumber leaf diseased spot detection and classification using improved saliency method and deep features selection. Multimed. Tools Appl. 2020, 79, 18627–18656. [Google Scholar]
  19. Webster, C.; Ivanov, S. Robotics, artificial intelligence, and the evolving nature of work. In Digital Transformation in Business and Society; Springer: New York, NY, USA, 2020; pp. 127–143. [Google Scholar]
  20. Adeel, A.; Khan, M.A.; Sharif, M.; Azam, F.; Shah, J.H.; Umer, T.; Wan, S. Diagnosis and recognition of grape leaf diseases: An automated system based on a novel saliency approach and canonical correlation analysis based multiple features fusion. Sustain. Comput. Inform. Syst. 2019, 24, 100349. [Google Scholar] [CrossRef]
  21. Febrinanto, F.G.; Dewi, C.; Triwiratno, A. The implementation of k-means algorithm as image segmenting method in identifying the citrus leaves disease. In Proceedings of the IOP Conference Series: Earth and Environmental Science, East Java, Indonesia, 17–18 November 2019; p. 012024. [Google Scholar]
  22. Khan, M.A.; Akram, T.; Sharif, M.; Alhaisoni, M.; Saba, T.; Nawaz, N. A probabilistic segmentation and entropy-rank correlation-based feature selection approach for the recognition of fruit diseases. EURASIP J. Image Video Process. 2021, 2021, 1–28. [Google Scholar] [CrossRef]
23. Iqbal, Z.; Khan, M.A.; Sharif, M.; Shah, J.H.; ur Rehman, M.H.; Javed, K. An automated detection and classification of citrus plant diseases using image processing techniques: A review. Comput. Electron. Agric. 2018, 153, 12–32. [Google Scholar] [CrossRef]
  24. Shin, J.; Chang, Y.K.; Heung, B.; Nguyen-Quang, T.; Price, G.W.; Al-Mallahi, A. Effect of directional augmentation using supervised machine learning technologies: A case study of strawberry powdery mildew detection. Biosyst. Eng. 2020, 194, 49–60. [Google Scholar] [CrossRef]
  25. Pane, C.; Manganiello, G.; Nicastro, N.; Cardi, T.; Carotenuto, F. Powdery Mildew Caused by Erysiphe cruciferarum on Wild Rocket (Diplotaxis tenuifolia): Hyperspectral Imaging and Machine Learning Modeling for Non-Destructive Disease Detection. Agriculture 2021, 11, 337. [Google Scholar] [CrossRef]
  26. Bhatia, A.; Chug, A.; Singh, A.P. Statistical analysis of machine learning techniques for predicting powdery mildew disease in tomato plants. Int. J. Intell. Eng. Inform. 2021, 9, 24–58. [Google Scholar] [CrossRef]
  27. Shah, N.; Jain, S. Detection of disease in cotton leaf using artificial neural network. In Proceedings of the 2019 Amity International Conference on Artificial Intelligence (AICAI), Dubai, United Arab Emirates, 4–6 February 2019; pp. 473–476. [Google Scholar]
  28. El Kahlout, M.I.; Abu-Naser, S.S. An Expert System for Citrus Diseases Diagnosis. Int. J. Acad. Eng. Res. (IJAER) 2019, 3, 1–7. [Google Scholar]
29. Sharif, M.; Khan, M.A.; Iqbal, Z.; Azam, M.F.; Lali, M.I.U.; Javed, M.Y. Detection and classification of citrus diseases in agriculture based on optimized weighted segmentation and feature selection. Comput. Electron. Agric. 2018, 150, 220–234. [Google Scholar] [CrossRef]
  30. Singh, U.P.; Chouhan, S.S.; Jain, S.; Jain, S. Multilayer convolution neural network for the classification of mango leaves infected by anthracnose disease. IEEE Access 2019, 7, 43721–43729. [Google Scholar] [CrossRef]
  31. Kestur, R.; Meduri, A.; Narasipura, O. MangoNet: A deep semantic segmentation architecture for a method to detect and count mangoes in an open orchard. Eng. Appl. Artif. Intell. 2019, 77, 59–69. [Google Scholar] [CrossRef]
  32. Srunitha, K.; Bharathi, D. Mango leaf unhealthy region detection and classification. In Computational Vision and Bio Inspired Computing; Springer: New York, NY, USA, 2018; pp. 422–436. [Google Scholar]
33. Hussain, Z.; Gimenez, F.; Yi, D.; Rubin, D. Differential data augmentation techniques for medical imaging classification tasks. In Proceedings of the AMIA Annual Symposium Proceedings, Washington, DC, USA, 6–8 November 2017; pp. 979–984. [Google Scholar]
  34. Ajitomi, A.; Takushi, T.; Sato, Y.; Arasaki, C.; Ooshiro, A. First report of powdery mildew of mango caused by Erysiphe quercicola in Japan. J. Gen. Plant Pathol. 2020, 86, 316–321. [Google Scholar] [CrossRef]
  35. Sepasian, M.; Balachandran, W.; Mares, C. Image enhancement for fingerprint minutiae-based algorithms using CLAHE, standard deviation analysis and sliding neighborhood. In Proceedings of the World Congress on Engineering and Computer Science, San Francisco, CA, USA, 22–24 October 2008; pp. 22–24. [Google Scholar]
  36. Yosinski, J.; Clune, J.; Bengio, Y.; Lipson, H. How transferable are features in deep neural networks? arXiv 2014, arXiv:1411.1792. [Google Scholar]
  37. Shin, H.C.; Roth, H.R.; Gao, M.; Lu, L.; Xu, Z.; Nogues, I.; Yao, J.; Mollura, D.; Summers, R.M. Deep convolutional neural networks for computer-aided detection: CNN architectures, dataset characteristics and transfer learning. IEEE Trans. Med. Imaging 2016, 35, 1285–1298. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  38. Anantrasirichai, N.; Hannuna, S.; Canagarajah, N. Towards automated mobile-phone-based plant pathology management. arXiv 2019, arXiv:1912.09239. [Google Scholar]
  39. Khan, S.; Alhaisoni, M.; Tariq, U.; Yong, H.-S.; Armghan, A.; Alenezi, F. Human Action Recognition: A Paradigm of Best Deep Learning Features Selection and Serial Based Extended Fusion. Sensors 2021, 21, 7941. [Google Scholar] [CrossRef]
  40. Saleem, F.; Alhaisoni, M.; Tariq, U.; Armghan, A.; Alenezi, F.; Choi, J.-I.; Kadry, S. Human gait recognition: A single stream optimal deep learning features fusion. Sensors 2021, 21, 7584. [Google Scholar] [CrossRef]
  41. Arshad, M.; Khan, M.A.; Tariq, U.; Armghan, A.; Alenezi, F.; Younus Javed, M.; Aslam, S.M.; Kadry, S. A Computer-Aided Diagnosis System Using Deep Learning for Multiclass Skin Lesion Classification. Comput. Intell. Neurosci. 2021, 21, 1–27. [Google Scholar] [CrossRef]
  42. Nasir, M.; Sharif, M.; Javed, M.Y.; Saba, T.; Ali, H.; Tariq, J. Melanoma detection and classification using computerized analysis of dermoscopic systems: A review. Curr. Med Imaging 2020, 16, 794–822. [Google Scholar] [CrossRef]
  43. Muhammad, K.; Sharif, M.; Akram, T.; Kadry, S. Intelligent fusion-assisted skin lesion localization and classification for smart healthcare. Neural Comput. Appl. 2021, 12, 1–16. [Google Scholar]
  44. Khan, M.; Sharif, M.; Akram, T.; Kadry, S.; Hsu, C.H. A two-stream deep neural network-based intelligent system for complex skin cancer types classification. Int. J. Intell. Syst. 2021, 14, 1–28. [Google Scholar]
  45. Akram, T.; Sharif, M.; Kadry, S.; Nam, Y. Computer decision support system for skin cancer localization and classification. Comput. Mater. Contin. 2021, 70, 1–15. [Google Scholar]
Figure 1. Framework of the proposed computerized system.
Figure 2. Data augmentation steps.
Figure 3. Stepwise output of leaves: (a) healthy, (b) powdery mildew, and (c) sooty mold.
Figure 4. Structure of feature extraction.
Table 1. Distribution of the data.

| Sooty Mold | Powdery Mildew | Healthy | Total |
|---|---|---|---|
| 45 | 45 | 45 | 135 |
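Because the dataset in Table 1 is small (135 images in total), the augmentation steps of Figure 2 are used to expand it with simple geometric transforms. A minimal sketch of such transforms on a toy 2-D array, assuming flips and rotations are among the augmentation steps (the paper's exact operations may differ):

```python
def hflip(img):
    """Horizontal flip: reverse each row of the image grid."""
    return [row[::-1] for row in img]

def rot90(img):
    """Rotate the image grid 90 degrees clockwise."""
    return [list(col) for col in zip(*img[::-1])]

# Toy 2x2 "image"; each transform yields one augmented sample
img = [[1, 2],
       [3, 4]]
augmented = [img, hflip(img), rot90(img), rot90(rot90(img))]
```

Each original leaf image would contribute several augmented variants, multiplying the effective training-set size without new data collection.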
Table 2. Powdery mildew vs. healthy mango leaves.

| Methods | Sensitivity | Specificity | AUC | FNR (%) | Accuracy (%) |
|---|---|---|---|---|---|
| Linear discriminant | 0.24 | 0.89 | 0.82 | 17.8 | 82.2 |
| Linear SVM | 0.22 | 0.91 | 0.85 | 5.6 | 94.4 |
| Quadratic SVM | 0.22 | 0.91 | 0.86 | 6.7 | 93.3 |
| Cubic SVM | 0.16 | 0.97 | 0.97 | 3.4 | 96.6 |
| Fine KNN | 0.22 | 0.91 | 0.84 | 11.2 | 88.8 |
| Medium KNN | 0.18 | 0.84 | 0.86 | 16.7 | 83.3 |
| Cubic KNN | 0.20 | 0.71 | 0.84 | 18.9 | 81.1 |
| Weighted KNN | 0.18 | 0.82 | 0.86 | 11.2 | 88.8 |
| Subspace discriminant | 0.20 | 0.82 | 0.86 | 12.3 | 87.7 |
| Subspace KNN | 0.18 | 0.84 | 0.86 | 16.7 | 83.3 |
Table 3. Confusion matrix of powdery mildew vs. healthy mango leaves.

| | Powdery Mildew | Healthy |
|---|---|---|
| Powdery mildew | 97.7% | <1% |
| Healthy | <1% | 95.6% |
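The sensitivity, specificity, FNR, and accuracy columns in the tables above follow directly from the confusion-matrix counts. A small self-contained sketch of that derivation, using hypothetical counts rather than the study's raw data:

```python
def binary_metrics(tp, fn, fp, tn):
    """Derive the reported rates from raw binary confusion-matrix counts."""
    sensitivity = tp / (tp + fn)               # true positive rate
    specificity = tn / (tn + fp)               # true negative rate
    fnr = 100 * fn / (fn + tp)                 # false negative rate, in percent
    accuracy = 100 * (tp + tn) / (tp + fn + fp + tn)
    return sensitivity, specificity, fnr, accuracy

# Hypothetical counts for a 90-leaf diseased-vs.-healthy split
sens, spec, fnr, acc = binary_metrics(tp=44, fn=1, fp=2, tn=43)
print(round(sens, 3), round(spec, 3), round(fnr, 1), round(acc, 1))
# → 0.978 0.956 2.2 96.7
```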
Table 4. Sooty mold vs. healthy mango leaves.

| Methods | Sensitivity | Specificity | AUC | FNR (%) | Accuracy (%) |
|---|---|---|---|---|---|
| Linear discriminant | 0.47 | 0.74 | 0.63 | 36.7 | 63.3 |
| Linear SVM | 0.05 | 0.95 | 0.95 | 4.5 | 95.5 |
| Quadratic SVM | 0.22 | 0.91 | 0.86 | 6.7 | 93.3 |
| Cubic SVM | 0.22 | 0.91 | 0.85 | 5.6 | 94.4 |
| Fine KNN | 0.24 | 0.67 | 0.71 | 28.9 | 71.1 |
| Medium KNN | 0.20 | 0.82 | 0.86 | 12.3 | 87.7 |
| Cubic KNN | 0.23 | 0.82 | 0.86 | 11.2 | 88.8 |
| Weighted KNN | 0.20 | 0.87 | 0.93 | 12.5 | 83.3 |
| Subspace discriminant | 0.29 | 0.91 | 0.82 | 18.9 | 81.1 |
| Subspace KNN | 0.24 | 0.60 | 0.75 | 32.2 | 67.8 |
Table 5. Confusion matrix of sooty mold vs. healthy mango leaves.

| | Sooty Mold | Healthy |
|---|---|---|
| Sooty mold | 95.5% | <1% |
| Healthy | <1% | 95.5% |
Table 6. Diseased vs. healthy mango leaves.

| Methods | Sensitivity | Specificity | AUC | FNR (%) | Accuracy (%) |
|---|---|---|---|---|---|
| Linear discriminant | 0.13 | 0.69 | 0.78 | 27.4 | 72.6 |
| Linear SVM | 0.03 | 0.88 | 0.98 | 6.7 | 93.3 |
| Quadratic SVM | 0.14 | 0.76 | 0.88 | 25.2 | 74.8 |
| Cubic SVM | 0.03 | 0.93 | 0.99 | 4.5 | 95.5 |
| Fine KNN | 0.10 | 0.56 | 0.73 | 33.3 | 66.7 |
| Medium KNN | 0.06 | 0.83 | 0.88 | 11.2 | 88.8 |
| Cubic KNN | 0.03 | 0.89 | 0.98 | 7.5 | 92.5 |
| Weighted KNN | 0.11 | 0.79 | 0.86 | 20.0 | 80.0 |
| Subspace discriminant | 0.13 | 0.71 | 0.87 | 28.1 | 71.9 |
| Subspace KNN | 0.08 | 0.47 | 0.77 | 35.6 | 64.4 |
Table 7. Confusion matrix of diseased vs. healthy mango leaves.

| | Healthy | Powdery Mildew | Sooty Mold |
|---|---|---|---|
| Healthy | 97.8% | <1% | - |
| Powdery mildew | - | 97.8% | <1% |
| Sooty mold | - | - | 91.1% |
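The overall accuracy of a multiclass run such as Table 7 is the diagonal (correctly classified) count over the total sample count. A brief sketch using hypothetical counts, not the study's raw data:

```python
def overall_accuracy(cm):
    """Overall accuracy in percent: diagonal (correct) counts over all samples."""
    correct = sum(cm[i][i] for i in range(len(cm)))
    total = sum(sum(row) for row in cm)
    return 100 * correct / total

# Hypothetical 3-class counts (rows: true class healthy / powdery mildew / sooty mold)
cm = [[44, 1, 0],
      [0, 44, 1],
      [2, 2, 41]]
print(round(overall_accuracy(cm), 1))
# → 95.6
```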
Table 8. Comparison of different segmentation and classification techniques with proposed model.

| Methods | Year | Technique | Accuracy (%) |
|---|---|---|---|
| K-means by Srunitha et al. [32] | 2018 | Multiclass SVM | 96% |
| Mobile phone based by Anantrasirichai et al. [38] | 2019 | Classification | 80% |
| Proposed | | Leaf vein-seg | 95.5% |
Share and Cite

MDPI and ACS Style

Saleem, R.; Shah, J.H.; Sharif, M.; Yasmin, M.; Yong, H.-S.; Cha, J. Mango Leaf Disease Recognition and Classification Using Novel Segmentation and Vein Pattern Technique. Appl. Sci. 2021, 11, 11901. https://doi.org/10.3390/app112411901
