Article

The Fast Detection of Crop Disease Leaves Based on Single-Channel Gravitational Kernel Density Clustering

1 School of Electronic Information, Xijing University, Xi’an 710123, China
2 Department of Geology, Northwest University, Xi’an 710127, China
* Author to whom correspondence should be addressed.
Appl. Sci. 2023, 13(2), 1172; https://doi.org/10.3390/app13021172
Submission received: 28 November 2022 / Revised: 6 January 2023 / Accepted: 8 January 2023 / Published: 15 January 2023
(This article belongs to the Special Issue Application of Artificial Intelligence in Visual Signal Processing)

Abstract

Plant diseases and pests may seriously affect the yield of crops and even threaten the survival of human beings. The characteristics of plant diseases and insect pests are mainly reflected in the occurrence of lesions on crop leaves. Machine vision disease detection is of great significance for the early detection and prevention of plant diseases and insect pests. A fast detection method for lesions based on a single-channel gravitational kernel density clustering algorithm was designed to examine the complexity and ambiguity of diseased leaf images. Firstly, a polynomial was used to fit the R-channel feature histogram curve of a diseased leaf image in the RGB color space, and then the peak point and peak area of the fitted feature histogram curve were determined according to the derivative attribute. Secondly, the cluster numbers and the initial cluster center of the diseased leaf images were determined according to the peak area and peak point. Thirdly, according to the clustering center of the preliminarily determined diseased leaf images, the single-channel gravity kernel density clustering algorithm in this paper was used to achieve the rapid segmentation of the diseased leaf lesions. Finally, the experimental results showed that our method could segment the lesions quickly and accurately.

1. Introduction

Crops are the main source of human energy and nutrition. According to the Food and Agriculture Organization of the United Nations (FAO), 650 million people in the world were undernourished in 2019 and 768 million in 2020. In addition, the global report on food crises stated that nearly 40 million more people would be affected by food crises in 2021 than in 2020. In Asia and Africa, more than 50% of the population relies on agricultural production for employment and income [1]. It is estimated that 30–40% of crops are lost every year through the production chain [2], of which at least 10% of the world’s food production is lost to crop diseases. Therefore, it is of great significance to actively monitor and control crop diseases and insect pests. Since the characteristics of crop diseases are mainly reflected by leaf lesions, the automatic detection of crop leaves through machine vision technology may be effective in realizing early warning and control of crop diseases, which could greatly improve the efficiency of disease control and reduce labor costs [3].
In crop disease detection, the image segmentation of diseased leaves is very important, which directly affects the reliability of feature extractions and the accuracy of disease recognition. The detection algorithms for leaf lesions are mainly image threshold segmentation algorithms and image clustering algorithms [4]. In recent years, many scholars have introduced statistical pattern recognition, K-means clustering, fuzzy C-means clustering, Otsu, EM, and other methods into the segmentation of diseased crop leaves [5,6,7,8]. Due to the different types of pathogens of crop diseases, specific symptoms with different shapes, colors, and textures could be generated. At present, it is difficult to segment lesions from leaves using the global threshold segmentation algorithm, so a multi-threshold segmentation algorithm is needed to solve this problem. However, multiple thresholds are difficult to determine automatically with algorithms and often need to be supplemented with human participation, which renders the algorithm non-universal and unintelligent. When the leaf lesions are segmented using image clustering algorithms, the number of class families needs to be determined manually in advance, which renders the algorithm poorly adaptable and easily leads to segmentation errors. In addition, with the development of deep learning techniques, deep learning-based methods have also been applied to the field of crop disease detection. The existing architectures have also been used to extract the features and combine them with other models. However, these deep models are difficult to apply to actual scenarios due to the single content of segmentation, high network complexity, and long training time.
In order to solve these problems, we propose a fast detection method for crop disease leaves based on single-channel gravitational kernel density clustering. Firstly, the R-channel values of the diseased leaf images were extracted in the RGB color space. Then a polynomial was used to fit the R-channel feature histogram curve. The cluster number and initial cluster center of the diseased leaf images were determined by locating the peak value and peak area. Finally, according to the initially determined clustering centers of the diseased leaf images, the gravitational kernel density clustering algorithm was used to complete the segmentation of the lesions on the diseased leaves. The experimental results showed that this method could segment the lesions quickly and accurately.

2. Related Works

Various image segmentation and clustering-based methods were used for detecting the regions of crop diseases. Zhang et al. [9] used a method of diseased leaf image segmentation based on hybrid clustering, which divides the image into uniform superpixels through superpixel clustering to guide the rapid convergence of the algorithm. Yuan et al. [10] designed a segmentation method for crop lesion leaves with a complex background in which the optimal segmentation threshold could be obtained by using a weighted Parzen window. Arivazhagan et al. [11] introduced a scheme for the automatic detection and classification of plant leaf diseases, which used specific thresholds to mask and delete green pixels and effectively segment the images based on texture statistics. Gui et al. [12] designed a disease detection system for color soybean leaf images based on complex backgrounds. Based on the K-means algorithm and combined with empirical thresholds, a bump map was used to segment the salient regions from soybean leaf disease images. Tajane et al. [13] built a content image database retrieval system, which used a Canny edge detection operator and histogram analysis to identify the leaf diseases of medicinal plants. This method highlighted the regions with high spatial derivatives by searching for image gradients and finally detected the diseases according to the edge features of each medicinal plant leaf. Tucker et al. [14] explored the influence of different segmentation thresholds on the detection of diseased oat and sunflower leaves. Harshadkumar et al. [15] constructed a database of rice leaves in a simple background, then used the K-means clustering method to extract the diseased parts in the leaf image, and used threshold technology to remove the unnecessary green areas of the diseased parts. Hiary et al. 
[16] designed a leaf lesion image segmentation algorithm that first recognized the greenest pixels and masked them using specific thresholds calculated by the Otsu segmentation method, then masked pixels with zero red, green, and blue values and pixels on the border to completely remove the infected cluster. Kaur et al. [17] applied a color image K-means clustering algorithm to grape leaf image segmentation. Yu et al. [18] designed a watershed algorithm for crop disease leaf segmentation. Chaudhary et al. [19] designed a segmentation algorithm for diseased spots using expectation-maximization (EM) processing technology and compared the effects of the CIELAB, HSI, and YCbCr color spaces in the detection of disease spots. Ren et al. [20] improved the traditional Sobel operator. Based on the preprocessing of graying, filtering, and noise reduction of the leaf image, the Sobel operator was improved by adding multiple directional templates, which effectively solved the problem that the boundary points of the traditional Sobel operator were too rough during the edge detection. Wang et al. [21] designed a multi-scale analysis algorithm for the edge extraction of strawberry disease leaf images. This method first used wavelet functions to reconstruct leaf images at different scales, then used the Otsu algorithm to obtain the binary image regions of the reconstructed images at different scales, used the Canny operator to obtain the diseased edge part in the image, and obtained the complete area of the leaf lesion through a spatial fusion of the different-scale images. Shantkumari et al. [22] designed a two-stage segmentation model that includes both normal and absolute segmentations. Fast segmentation is realized with ordinary segmentation, and good segmentation accuracy is achieved with absolute segmentation.
Many scholars have tried to use CNNs to solve the problem of plant disease image segmentation. Liu et al. [23] combined Markov and CNNs to complete the segmentation of cotton spots. In this method, a neural network was used to extract the deep semantic features of the image, and the conditional random field energy function was constructed and optimized by combining the relative relation between the feature image pixels. Xiong et al. [24] used the combination of superpixel segmentation and a neural network to complete the segmentation of rice ears at different growth stages, but this method had poor segmentation effects under complex backgrounds. Zhao et al. [25] extended an image segmentation method for maize disease leaves based on FCN, which outputted the segmentation results through an encoder and decoder structure. Due to complex preprocessing, this method is not universal. Chen et al. [26] designed a new model named BLSNet, which uses an attention mechanism and multi-scale extraction integration to improve the accuracy of lesion segmentation. Xu et al. [27] constructed a lightweight multi-scale extended U-Net (LWMSDU-Net), which applied multi-scale extended convolution to encoders and greatly reduced model parameters. The above deep learning-based segmentation methods were applied to the segmentation of diseased leaves in different crops, but there are problems such as the single segmentation content, high network complexity, and long model training time.

3. Image Acquisition and Methodology

We first obtained the dataset through field photography and online collection. Then, in order to find the color features that could best distinguish normal leaves from diseased and insect-damaged leaves, the statistics of the pixel values were calculated in the RGB color space. Finally, the initial clustering values were obtained by fitting the characteristic curve with a polynomial, and the final segmentation result was obtained by combining them with the gravity kernel clustering algorithm.

3.1. Image Acquisition of the Diseased Leaves

More than 3000 pictures of pests and diseases were collected, 80% of which were collected online, and the rest were taken by mobile phones in the field. The database mainly included the diseased leaf images of wheat, corn, tomato, and cucumber, as shown in Figure 1. The first column of each row is the image collected with mobile phones, and the last three columns are the images collected online. Table 1 shows the quantity information of the different crop images. These images are stored in a database, named according to the different crop, pest, and disease categories, and could be queried by category and time. In addition, the vast majority of images collected online had simple backgrounds and less noise. For the photos taken in the field, the background interference could be reduced by shortening the shooting distance.

3.2. Image Characteristics Analysis

The color information of the diseased leaves is the most direct and effective feature to reflect the disease. As the most obvious feature of leaf lesions, the color information effectively reflects the status of lesions. Therefore, this section explores the color changes of the lesion images.

3.2.1. Characteristics Analysis of the Lesion Leaf Images in the RGB Color Space

In order to segment the lesions in the collected images, the color information should be used to analyze the variation rules of the different color components in different color spaces. In the RGB color space, through the analysis of the R, G, and B color components of a large number of diseased leaves, it was found that the red component changes the most during the transition from a normal leaf to a diseased leaf. The color change curves in a1, b1, c1, and d1 in Figure 2 correspond to the R, G, and B values of the pixels under the white lines in a, b, c, and d in Figure 2, respectively. Since the marked white lines cover both normal leaf pixels and diseased leaf pixels, the color with the most obvious change was found by analyzing the three color-change curves. As can be seen from Figure 2, the R-value is small in the normal area but increases suddenly in the diseased area, even reaching the maximum value among the three components. This indicates that the R-value difference between the normal leaf pixels and the diseased leaf pixels is the largest of the three components, so it could be used as the basis for segmenting the diseased image.
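The per-channel comparison described above can be sketched in a few lines of Python (the paper's experiments used MATLAB; the toy image here is a hypothetical stand-in for the scanlines of Figure 2):

```python
import numpy as np

def channel_profiles(image, row):
    """Return the R, G, B value profiles of the pixels in one image row.

    image: H x W x 3 uint8 array in RGB order (assumed layout).
    row:   index of the scanline to inspect.
    """
    line = image[row].astype(int)          # shape (W, 3)
    return line[:, 0], line[:, 1], line[:, 2]

# Toy image: left half "normal leaf" (low R), right half "lesion" (high R).
img = np.zeros((4, 8, 3), dtype=np.uint8)
img[:, :4] = (60, 120, 40)   # greenish, normal tissue
img[:, 4:] = (200, 90, 40)   # reddish, lesion tissue
r, g, b = channel_profiles(img, 0)
print(r.max() - r.min(), g.max() - g.min(), b.max() - b.min())
# the R profile jumps far more across the lesion boundary than G or B
```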

3.2.2. Characteristics Analysis of the Lesion Leaf Images in the Lab Color Space

Images a, b, c, and d in Figure 3 are the diseased leaf images in the Lab color space. Images a1, b1, c1, and d1 in Figure 3 show the corresponding L, a, and b component change curves of the pixels under the white lines marked in the original images. The curves show that the brightness L is small in the normal leaf but suddenly increases at the lesion, even reaching the maximum of the three components. So, from the normal leaf to the lesion, brightness changes with the largest amplitude of the three components, while chroma a and chroma b change relatively gently. This indicates that the brightness difference between the pixels in the normal areas and those in the lesions is the largest, which could also serve as a basis for the segmentation of the diseased leaf images.

3.3. Single-Channel Gravitational Kernel Density Clustering Algorithm

3.3.1. Automatic Identification of the Initial Value of a Cluster

The number of clusters and the cluster centers of the clustering algorithm were automatically obtained by fitting the characteristic histogram of the R-channel. The analysis of a large number of collected leaf images showed that the diseased leaf images usually had two composition modes: in the first, the diseased leaf image consists of the lesions and the leaf, while in the second, it is composed of three regions: the lesion, the leaf, and the background. Therefore, most of the diseased leaves showed a bimodal or trimodal shape in the histogram. The red-channel histograms of the diseased leaves are shown in a1, b1, c1, and d1 in Figure 4, and the corresponding diseased leaf images are shown in a, b, c, and d. The different peak areas of the red R histogram represent the different areas of the diseased leaf image, and the peak point of each peak area represents the pixel value that appears most frequently in that area.
The abscissa of the feature histogram in Figure 4 represents the R-value of the image pixels, ranging from 0 to 255, and the ordinate represents the number of pixels corresponding to each R-value. It is difficult to find the peak areas and peak points mathematically because the histogram is a sharply oscillating curve with many local peak points. To this end, this paper used a k-th-order polynomial to fit the histogram, as shown in Equation (1):
$$y = a_0 + a_1 x + a_2 x^2 + \cdots + a_k x^k. \tag{1}$$
In Equation (1), the input x represents the R-value of a pixel, and the output y represents the number of pixels in the image with that R-value. Assume the inputs x and outputs y constitute the training dataset $\{(x_i, y_i) \mid i = 0, 1, 2, \dots, 255\}$, in which $N = 256$ is the number of samples and satisfies $N > k$. Then, Equation (1) becomes:
$$y_i = a_0 + a_1 x_i + a_2 x_i^2 + \cdots + a_k x_i^k, \quad i = 0, 1, \dots, 255. \tag{2}$$
Establish the evaluation function model J so that the mean square error is minimized, and estimate the polynomial parameters $\{a_0, a_1, a_2, \dots, a_k\}$ using the least squares algorithm. The evaluation function model J is:
$$J = \sum_{i=0}^{N-1} \left[ y_i - a_0 - a_1 x_i - a_2 x_i^2 - \cdots - a_k x_i^k \right]^2. \tag{3}$$
Based on the above formula, set the partial derivatives with respect to $\{a_0, a_1, a_2, \dots, a_k\}$ equal to 0 to obtain:
$$\begin{cases}
\dfrac{\partial J}{\partial a_0} = -2\sum_{i=0}^{N-1}\left(y_i - a_0 - a_1 x_i - a_2 x_i^2 - \cdots - a_k x_i^k\right) = 0\\[6pt]
\dfrac{\partial J}{\partial a_1} = -2\sum_{i=0}^{N-1}\left(y_i - a_0 - a_1 x_i - a_2 x_i^2 - \cdots - a_k x_i^k\right)x_i = 0\\[2pt]
\qquad\vdots\\[2pt]
\dfrac{\partial J}{\partial a_k} = -2\sum_{i=0}^{N-1}\left(y_i - a_0 - a_1 x_i - a_2 x_i^2 - \cdots - a_k x_i^k\right)x_i^k = 0
\end{cases}\tag{4}$$
Then, the equation is expressed in the matrix form to obtain Equation (5):
$$\begin{bmatrix}
N & \sum_{i=0}^{N-1} x_i & \cdots & \sum_{i=0}^{N-1} x_i^k\\
\sum_{i=0}^{N-1} x_i & \sum_{i=0}^{N-1} x_i^2 & \cdots & \sum_{i=0}^{N-1} x_i^{k+1}\\
\vdots & \vdots & \ddots & \vdots\\
\sum_{i=0}^{N-1} x_i^k & \sum_{i=0}^{N-1} x_i^{k+1} & \cdots & \sum_{i=0}^{N-1} x_i^{2k}
\end{bmatrix}
\begin{bmatrix} a_0\\ a_1\\ \vdots\\ a_k \end{bmatrix}
=
\begin{bmatrix}
\sum_{i=0}^{N-1} y_i\\ \sum_{i=0}^{N-1} x_i y_i\\ \vdots\\ \sum_{i=0}^{N-1} x_i^k y_i
\end{bmatrix}\tag{5}$$
So, Equation (5) may be written as:
$$XA = Y. \tag{6}$$
Then, the polynomial parameter vector A is calculated using the least squares method:
$$A = (X^{T} X)^{-1} X^{T} Y. \tag{7}$$
The peak points are found by differentiating Equation (1) and setting the derivative equal to 0. The result of the differentiation is given in Equation (8):
$$\frac{dy}{dx} = a_1 + 2 a_2 x + \cdots + k a_k x^{k-1} = 0. \tag{8}$$
After substituting all integer values of x into Equation (8), an x value meeting the following two conditions is a peak point x_p:
(1)
The derivative of the polynomial at this point is 0;
(2)
The value of the polynomial at this point is greater than the values of the polynomial at the neighboring points.
The feature histogram curves fitted by the polynomial are shown in a2, b2, c2, and d2 in Figure 4. After finding all the peak points of the fitted feature histogram curves, we found all the peak areas of the diseased leaf histograms and roughly located the different areas of the diseased leaf. The peak points were used as the initial clustering centers to cluster the diseased leaves. Therefore, according to the peak areas and peak points of the feature histogram, the cluster number and cluster centers of the diseased leaves could be preliminarily determined.
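The fitting and peak-finding procedure above can be sketched in Python (the paper's experiments used MATLAB). The polynomial degree and the synthetic bimodal R-channel are illustrative assumptions; `np.polyfit` solves the same least-squares problem as Equation (7):

```python
import numpy as np

def histogram_peaks(r_channel, degree=8):
    """Fit a degree-k polynomial to the R-channel histogram (Eq. (1)) and
    return the R-values where the fitted curve is a local maximum,
    i.e. the peak-point conditions (1)-(2) on the derivative (Eq. (8))."""
    hist = np.bincount(np.asarray(r_channel).ravel(), minlength=256).astype(float)
    x = np.arange(256)
    xn = x / 255.0                         # rescale for numerical stability
    coeffs = np.polyfit(xn, hist, degree)  # least-squares fit, cf. Eq. (7)
    fitted = np.polyval(coeffs, xn)
    # a peak point: fitted value greater than at both neighbouring points
    return [i for i in range(1, 255)
            if fitted[i] > fitted[i - 1] and fitted[i] > fitted[i + 1]]

# Synthetic bimodal R-channel: "leaf" pixels near R=60, "lesion" near R=200.
rng = np.random.default_rng(0)
r = np.clip(np.concatenate([rng.normal(60, 15, 5000),
                            rng.normal(200, 15, 3000)]), 0, 255).astype(np.uint8)
peaks = histogram_peaks(r)
print(peaks)  # one main peak near each mode -> two clusters, two initial centers
```

The number of peaks gives the cluster number, and the peak R-values serve as the initial cluster centers.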

3.3.2. Gravity Calculations between the Pixels

It was derived from Section 3.3.1 that the number of clusters and initial cluster centers of the diseased leaves may be obtained from the feature histograms of the diseased leaves. According to the law of universal gravitation, the attraction between an object and another object is not only related to the mass of the object but also to the distance between the objects, as shown in Equation (9),
$$F = \frac{G M m}{r^2}. \tag{9}$$
In the formula, F represents the gravitational force, m and M represent the masses of the two objects, G is the universal gravitational constant, and r is the distance between the objects. According to the formula of universal gravitation, the greater the mass of an object, the more strongly it attracts other objects, and the closer the distance between them, the greater the attraction. In the universe, if the mass of a star is very large, many smaller bodies will take this star as the center and rotate around it to form a system. Based on this, this paper proposes a gravity kernel density clustering algorithm: each cluster center is analogous to a massive star, and the pixels are classified according to the gravitational pull of the cluster centers on them.
First, we defined the mass of the pixels. According to the histogram, the number of pixels n_i (i = 0, 1, …, 255) with each red R-value was calculated. Therefore, a pixel with the value i, whose count is n_i, has the mass m_i:
$$m_i = \frac{n_i}{\sum_{j=0}^{255} n_j}, \quad i = 0, 1, \dots, 255. \tag{10}$$
Suppose an image I has a size of M × N, and I(x, y) represents an arbitrary pixel in the image I, where x and y are the coordinate values of the pixel, x = 1, 2, …, M, y = 1, 2, …, N. Consider two pixels $I(x_s, y_s)$ and $I(x_t, y_t)$ with the values $I_s$ and $I_t$, respectively. The kernel density distance $r_{st}$ between these two pixels is:
$$r_{st} = \exp\!\left[\frac{(x_s - x_t)^2 + (y_s - y_t)^2}{h_1}\right]\exp\!\left[\frac{(I_s - I_t)^2}{h_2}\right], \tag{11}$$
where the first factor, exp[((x_s − x_t)² + (y_s − y_t)²)/h_1], represents the spatial kernel density distance between the pixels, the second factor, exp[((I_s − I_t)²)/h_2], represents the gray kernel density distance between the pixels, and h_1 and h_2 are the bandwidths of the kernel density functions, respectively.
Then, according to Equation (9), the gravitational force F s t between the pixels I ( x s , y s ) and I ( x t , y t ) may be calculated using Equation (12). For calculation convenience, use G = 1 here.
$$F_{st} = \frac{G\, m_s m_t}{r_{st}}. \tag{12}$$
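Equations (10)–(12) can be sketched in Python as follows (the paper's own code was MATLAB); the bandwidth values and the tiny example image are illustrative assumptions:

```python
import numpy as np

def pixel_mass(r_channel):
    """Eq. (10): the mass of R-value i is its relative frequency."""
    counts = np.bincount(np.asarray(r_channel).ravel(), minlength=256).astype(float)
    return counts / counts.sum()

def kernel_distance(ps, pt, Is, It, h1=100.0, h2=100.0):
    """Eq. (11): kernel density distance between pixels at positions ps, pt
    with R-values Is, It; h1 and h2 are bandwidths (assumed values here)."""
    spatial = np.exp(((ps[0] - pt[0]) ** 2 + (ps[1] - pt[1]) ** 2) / h1)
    gray = np.exp(((Is - It) ** 2) / h2)
    return spatial * gray  # grows as pixels get farther apart or less similar

def gravity(ms, mt, rst, G=1.0):
    """Eq. (12): gravitational force between two pixels, with G = 1."""
    return G * ms * mt / rst

# Toy 2x2 R-channel: value 10 occurs three times, value 200 once.
img = np.array([[10, 10], [10, 200]], dtype=np.uint8)
mass = pixel_mass(img)
# A pixel is pulled more strongly towards a near, similar-valued neighbour.
near = gravity(mass[10], mass[10], kernel_distance((0, 0), (1, 0), 10, 12))
far = gravity(mass[10], mass[200], kernel_distance((0, 0), (9, 9), 10, 200))
print(near > far)
```

Note that a larger kernel distance (more dissimilar, farther apart) yields a weaker pull, so nearby, similar pixels are attracted to the same center.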

3.3.3. Steps of the Single-Channel Gravity Kernel Density Clustering Algorithm

Assuming that the number of clusters determined according to the feature histogram of the diseased leaf is k and the initial cluster center corresponding to each cluster is I_p, the gravity kernel density clustering algorithm is described as follows:
(1)
Calculate the gravity F s p of each pixel I s to a different I p ;
(2)
Divide each pixel I_s into the corresponding cluster according to the magnitude of the gravity F_sp; that is, select the cluster center with the greatest attraction to the pixel and assign the pixel to that class;
(3)
For the divided clusters, use the centroid of each class as the new cluster center to update the original cluster center;
(4)
Go to (1) and iterate repeatedly until the cluster centers no longer change;
(5)
Algorithm stops.
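Steps (1)–(5) can be sketched as the following Python loop. The representation of a cluster center as an (x, y, R-value) triple, its initial placement at the image center, the bandwidths, and the convergence test on the assignments are implementation assumptions filling in details the description leaves open:

```python
import numpy as np

def gravity_cluster(img_r, centers, h1=1000.0, h2=500.0, max_iter=100):
    """Single-channel gravity kernel density clustering, steps (1)-(5).
    img_r: H x W array of R-values; centers: peak R-values from the
    histogram (the initial cluster centers of Section 3.3.1)."""
    H, W = img_r.shape
    ys, xs = np.mgrid[0:H, 0:W]
    coords = np.stack([xs.ravel(), ys.ravel()], axis=1).astype(float)
    vals = img_r.ravel().astype(float)
    mass = np.bincount(img_r.ravel(), minlength=256).astype(float)
    mass /= mass.sum()                                   # Eq. (10)
    # each center: (x, y, R-value); start at the image center (assumption)
    cent = np.array([[W / 2.0, H / 2.0, v] for v in centers])
    labels = np.full(len(vals), -1)
    for _ in range(max_iter):
        F = np.empty((len(vals), len(cent)))
        for j, (cx, cy, cv) in enumerate(cent):
            # (1) gravity of every pixel towards center j, Eqs. (11)-(12);
            # the center's mass is read at its rounded R-value (assumption)
            r = (np.exp(((coords[:, 0] - cx) ** 2 +
                         (coords[:, 1] - cy) ** 2) / h1) *
                 np.exp(((vals - cv) ** 2) / h2))
            F[:, j] = mass[vals.astype(int)] * mass[int(round(cv)) % 256] / r
        # (2) assign each pixel to its most attractive cluster center
        new_labels = F.argmax(axis=1)
        # (3) move each center to the centroid of its cluster
        for j in range(len(cent)):
            sel = new_labels == j
            if sel.any():
                cent[j] = [coords[sel, 0].mean(), coords[sel, 1].mean(),
                           vals[sel].mean()]
        # (4)-(5) stop once the assignment no longer changes
        if (new_labels == labels).all():
            break
        labels = new_labels
    return labels.reshape(H, W)

# Toy leaf: left half "leaf" (R=60), right half "lesion" (R=200).
img = np.zeros((10, 10), dtype=np.uint8)
img[:, :5] = 60
img[:, 5:] = 200
seg = gravity_cluster(img, [60, 200])
print(seg[0, 0] != seg[0, 9])   # the two halves land in different clusters
```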

4. Experimental Results and Analysis

4.1. Experimental Setup

In this section, many experiments were conducted to validate our method. The comparative experiments and results were then analyzed and discussed. All experiments were carried out on a Windows 8 64-bit operating system with a 2.8 GHz CPU and 16 GB of RAM; the programming environment was MATLAB 2017a.

4.2. Evaluation Criteria

In this paper, the gravity kernel density clustering algorithm was compared with traditional and deep learning methods. The algorithm was compared with the K-means algorithm and the mean shift algorithm in terms of the execution time, number of iterations, and segmentation accuracy. The lower the execution time and the number of iterations, the faster the convergence rate of the algorithm. The higher the segmentation accuracy, the more accurate the segmentation of the diseased leaf region.
By analyzing and comparing the segmentation accuracy, over-segmentation rate, and under-segmentation rate of the FCN and U-Net segmentation algorithms based on deep learning, the performances of different algorithms in time and space were obtained. Segmentation accuracy (SA) refers to the ratio between the pixel value of the lesion area correctly segmented with the algorithm and the pixel value of the lesion area manually marked. The over-segmentation ratio (OSR) refers to the ratio of the pixel value of the lesion incorrectly segmented with the algorithm to the pixel value of the manually labeled lesion area. The under-segmentation rate (USR) is the ratio of the pixel value of an unsegmented lesion to the pixel value of a manually labeled lesion area. The relevant evaluation indicators were calculated with Equation (13).
$$SA = \frac{N_{tp}}{N_{gt}}, \quad OSR = \frac{N_{fp}}{N_{gt}}, \quad USR = \frac{N_{fn}}{N_{gt}}, \tag{13}$$
where N g t represents the number of pixel values in the manually labeled lesion area, N t p represents the number of pixel values in the correctly segmented lesion area, N f p represents the number of pixel values of the incorrectly segmented lesions, and N f n represents the number of pixel values in the unsegmented lesion area.
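The three indices of Equation (13) can be computed directly from binary masks; the following Python sketch uses a hypothetical 4×4 example:

```python
import numpy as np

def segmentation_metrics(pred, gt):
    """SA, OSR and USR from Eq. (13).  pred and gt are boolean masks where
    True marks lesion pixels (gt is the manually labelled ground truth)."""
    n_gt = gt.sum()                    # manually labelled lesion pixels
    n_tp = (pred & gt).sum()           # correctly segmented lesion pixels
    n_fp = (pred & ~gt).sum()          # incorrectly segmented pixels
    n_fn = (~pred & gt).sum()          # missed lesion pixels
    return n_tp / n_gt, n_fp / n_gt, n_fn / n_gt

gt = np.zeros((4, 4), dtype=bool); gt[:2, :2] = True        # 4 lesion pixels
pred = np.zeros((4, 4), dtype=bool); pred[:2, :3] = True    # 6 predicted
sa, osr, usr = segmentation_metrics(pred, gt)
print(sa, osr, usr)   # 1.0 0.5 0.0 -- all lesions found, two extra pixels
```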

4.3. Main Results

Figure 5 shows the comparative analysis of the segmentation results of the diseased leaves using our algorithm, the K-means clustering algorithm, and the mean shift clustering algorithm. The second row of Figure 5 is the result after segmentation using the K-means algorithm. As seen from the figure, the boundary of the segmented result is fuzzy. In addition, the segmentation accuracy of the K-means algorithm is low, and it easily misclassifies normal leaves as lesions. The third row in Figure 5 is the segmentation result of the mean shift algorithm. As seen from the figure, the segmentation result of the mean shift algorithm has a clear boundary, but its segmentation accuracy is relatively low, and some lesions cannot be accurately segmented. In addition, the mean shift algorithm displayed over-segmentation and under-segmentation problems. The last row of Figure 5 is the segmentation result of the algorithm in this paper. It can be seen that the algorithm designed in this paper had good anti-noise ability and could segment the areas of the pests and diseases more completely.
Table 2 compares the performance of our algorithm with the K-means and mean shift algorithms in terms of the running time, number of iterations, and segmentation accuracy. As seen in Table 2, our algorithm has the lowest execution time and number of iterations, followed by the K-means algorithm. In particular, the number of iterations of our algorithm is less than half that of K-means and one-third that of the mean shift algorithm. Figure 6 shows the average iterative convergence of the three algorithms. The number of iterations to convergence of our algorithm is 145, while that of the K-means algorithm is 352 and that of the mean shift algorithm is 474. Therefore, the iterative convergence rate of our algorithm is much faster than that of the other two algorithms.
Table 3 compares the performance of our method with the FCN and U-Net segmentation algorithms based on deep learning. In the experiment, 100 diseased leaf images were used for testing, and the segmentation results were statistically analyzed. It can be seen that our algorithm is superior to the other methods in every index. Figure 7 shows the segmentation results of some diseased leaf images. FCN has the worst segmentation effect: the diseased area could not be completely segmented, leading to over-segmentation and under-segmentation. U-Net could segment the contours of the diseased areas in the leaves but could not segment the boundaries of the diseased leaf. The algorithm in this paper could segment the diseased area effectively and could accurately segment the boundary and details of the diseased leaf, which fully proves the effectiveness of our method.

5. Discussions and Conclusions

As seen in Figure 5, compared with the other methods, the algorithm designed in this paper has good anti-noise performance and higher segmentation accuracy, so it could better segment diseased leaf regions. The K-means algorithm only considers the color similarity calculated with the Euclidean distance between the pixels, not the spatial relationship between the pixels, so it easily misclassifies normal leaves as lesions. The mean shift algorithm had a certain anti-noise performance and higher segmentation accuracy, but only with an appropriate bandwidth. The algorithm designed in this paper fully exploits the relationship between the pixels, considering not only the color information and spatial information between the pixels but also the pixel count (density); therefore, its segmentation effect is better than that of the other methods. In addition, Table 2 shows that our algorithm converges faster than the other methods. The K-means clustering algorithm needs the number of clusters and initial cluster centers to be specified manually, and the whole segmentation process is time-consuming. The mean shift clustering algorithm needs a bandwidth value to be selected, which requires manual participation. The algorithm designed in this paper has a certain adaptability: it may automatically select the number of clusters and the initial cluster centers, which greatly speeds up the convergence of the algorithm. Compared with the methods based on deep learning, the algorithm designed in this paper has a faster convergence speed and higher segmentation accuracy. The FCN and U-Net methods have high requirements for datasets, and more data would be needed to improve their performance. At the same time, due to their large number of model parameters, our method outperforms these two methods in the prediction stage.
The method in this paper overcomes the shortcomings of the traditional K-means clustering algorithm and mean shift clustering algorithm and could automatically determine the cluster centers and the number of clusters. It could also comprehensively consider the color information, spatial information, and density information between the pixels. This method could automatically, quickly, and accurately complete the segmentation of diseased leaves and obtain high segmentation accuracy. In future research, work may focus on deploying the different algorithms to hardware devices to compare their performance. In addition, the pictures collected with such hardware would contain rich backgrounds, so how to accurately segment the pest and diseased areas from leaves with complex backgrounds is also the next research focus.

Author Contributions

Conceptualization, Z.L.; methodology, Y.R.; software, Q.L. and Z.L.; validation, Q.L.; writing—original draft preparation, Y.R. All authors have read and agreed to the published version of the manuscript.

Funding

This work was partly supported by the College Research Funding of Xijing University (XJ170128) and the Key Development Projects of Shaanxi Science and Technology Department (2017ZDXM-NY-088).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Informed consent was obtained from all subjects involved in the study. Written informed consent has been obtained from the patient(s) to publish this paper.

Data Availability Statement

All data included in this study are available upon request by contacting the corresponding author.

Conflicts of Interest

The authors declare no conflict of interest.

Figure 1. Examples of diseased crop leaves.
Figure 2. Three-color change curves of the diseased leaves in the RGB color space.
Figure 3. L, a, and b changes of the diseased leaves in the Lab color space.
Figure 4. Feature histograms of diseased leaves.
Figure 5. Lesion segmentation results based on different algorithms. (a–d) Diseased leaf images; (a1–d1) segmentation results with the K-means method; (a2–d2) segmentation results with the mean shift method; (a3–d3) segmentation results using our method.
Figure 6. Iteration times of different algorithms.
Figure 7. Lesion segmentation results based on different algorithms.
Table 1. Statistics on the number of images.

Crop        Phone   Online   All
Wheat         115      528   643
Corn          187      594   781
Tomato        152      649   801
Cucumber      201      676   877
Total         655     2447  3102
Table 2. Algorithm performance comparisons.

Image             Method       Execution Time (s)   Iterations   Segmentation Accuracy (%)
Original image a  K-means       5.324                320          65.634
                  Mean shift   10.328                450          70.286
                  Our method    4.473                140          79.548
Original image b  K-means       5.475                290          67.983
                  Mean shift   11.076                410          71.612
                  Our method    4.469                105          81.673
Original image c  K-means       6.038                380          70.845
                  Mean shift   10.871                510          76.732
                  Our method    5.679                160          84.975
Original image d  K-means       5.804                360          63.781
                  Mean shift   10.936                480          67.742
                  Our method    5.026                150          74.657
Table 3. Algorithm performance comparisons.

Method       Execution Time (s)   SA (%)   OSR (%)   USR (%)
FCN          11.49                61.34    37.36     38.66
U-Net         9.63                73.55    30.14     26.45
Our method    5.76                81.21    20.78     18.79
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

Share and Cite

MDPI and ACS Style

Ren, Y.; Li, Q.; Liu, Z. The Fast Detection of Crop Disease Leaves Based on Single-Channel Gravitational Kernel Density Clustering. Appl. Sci. 2023, 13, 1172. https://doi.org/10.3390/app13021172


