Article

Fusarium Wilt of Radish Detection Using RGB and Near Infrared Images from Unmanned Aerial Vehicles

1 Department of Computer Science and Engineering, Sejong University, Seoul 05006, Korea
2 School of Electrical Engineering, Korea University, Seoul 02841, Korea
3 Department of Bioresource Engineering, Sejong University, Seoul 05006, Korea
* Author to whom correspondence should be addressed.
Remote Sens. 2020, 12(17), 2863; https://doi.org/10.3390/rs12172863
Submission received: 9 July 2020 / Revised: 24 August 2020 / Accepted: 1 September 2020 / Published: 3 September 2020
(This article belongs to the Section AI Remote Sensing)

Abstract

The radish is a delicious, healthy vegetable and an important ingredient in many side dishes and main recipes. However, climate change, pollinator decline, and especially Fusarium wilt cause a significant reduction in the cultivation area and the quality of the radish yield. Previous studies on plant disease identification have relied heavily on extracting features manually from images, which is time-consuming and inefficient. In addition to Red-Green-Blue (RGB) images, the development of near-infrared (NIR) sensors has enabled a more effective way to monitor diseases and evaluate plant health based on multispectral imagery. Thus, this study compares two distinct approaches to detecting radish wilt using RGB images and NIR images taken by unmanned aerial vehicles (UAV). The main research contributions include (1) a high-resolution RGB and NIR radish field dataset captured by drone from low to high altitudes, which can serve several research purposes; (2) the implementation of a superpixel segmentation method to segment the captured radish field images into separate segments; (3) a customized deep learning-based radish identification framework for the extracted segmented images, which achieved remarkable performance in terms of accuracy and robustness with a highest accuracy of 96%; (4) a disease severity analysis that can detect different stages of the wilt disease; and (5) a demonstration that the approach based on NIR images is more straightforward and effective in detecting wilt disease than the learning approach based on the RGB dataset.

Graphical Abstract

1. Introduction

Radishes, or daikons, are native vegetables of Asian countries and are particularly important in Korea, where the radish is the most widely grown crop and is regarded as the national vegetable because it accounts for about 10 percent of vegetable production [1]. Nevertheless, the yield of radishes has dropped sharply in recent years because of Fusarium wilt, a fungal disease that infects radishes at an unprecedented rate [2]. The yellowing starts on one side of the bottom leaves, shoot, or branch. It then slowly spreads and deteriorates the vascular system in the roots, stems, and petioles, resulting in a stunted plant [3]. The fungus remains in the soil for a long time and invades nearby regions through contaminated seeds and infested equipment. Therefore, the disease is challenging to prevent or treat because it can spread quickly from infected plants to healthy plants and lead to severe harvest losses. As a result, early symptoms of the disease must be identified so that infected plants can be isolated at an early stage to minimize the damage. For a long time, manual inspection has been the primary approach to examining the radish field for early signs of disease [4]. However, it is time-consuming and unproductive [5,6]. Therefore, it is essential to implement an automatic system that accurately detects Fusarium wilt in its early stages by utilizing recent technologies [7].
Low-altitude remote sensing (LARS) is an essential component of a geographic information system (GIS) that provides measurements of an agricultural area at low altitudes [8]. Manned airplanes carrying many sensor devices and high-resolution cameras have long been the standard platform in LARS. However, they are costly to deploy when the study areas are small [9]. In recent years, the use of unmanned aerial vehicles (UAV) has grown rapidly; they have emerged as an alternative to airplanes because they are small aircraft capable of capturing high-resolution images and videos of agricultural areas at low altitudes [9]. In addition to conventional RGB images, recent advances in camera technology have made near-infrared (NIR) cameras more affordable and accessible, and these can be mounted on UAVs to provide additional information that cannot be observed by conventional cameras.
For the RGB dataset, after the UAV captures the images, several pre-processing techniques can be applied to improve performance before the data are fed into learning algorithms [10]. For instance, when the UAV flies at a low altitude above the field, it has to capture a series of images to cover the field. It is more straightforward for humans or machines to investigate an area if a set of two or more UAV images is merged into one wide-field image, a process known as mosaicking [11]. The main procedures of the mosaicking technique include image calibration, image registration, and image blending [12].
After the mosaicking step, the output image contains many objects, such as radish, soil, and plastic mulch. The primary purpose of this study is to classify wilt disease on radish, so the radish is the only object that needs to be retained for analysis, while the other objects can be discarded. This can be achieved by applying object detection algorithms such as YOLOv3 [13] and Faster R-CNN [14]. However, preparing the dataset for these algorithms is time-consuming, labor-intensive, and prone to human error [15]. On the other hand, superpixel segmentation, which divides an image into hundreds of non-overlapping superpixels, is a more efficient approach than object detection in terms of time and implementation. As a result, it has been extensively studied and applied to a considerable number of applications [16]. Felzenszwalb–Huttenlocher (FH) [17], mean shift [18], Ncuts [19], and quick shift [20] are four widely used algorithms for creating superpixels. Nevertheless, owing to their high computational complexity, many new algorithms have been proposed to address this problem by considering compactness and reducing the computational cost of generating superpixels.
After this series of pre-processing steps, a collection of patches covering different radish field regions is extracted. Previous approaches to classifying such patches have relied mainly on feature engineering to select distinctive features, such as color [21], texture and shape features [22], the scale-invariant feature transform (SIFT), and the histogram of oriented gradients (HOG) [23,24]. However, these features were manually selected, which is time-consuming and inefficient when huge datasets are involved [25]. Recent achievements of deep learning-based methods in different areas of computer science have attracted huge interest from researchers [3,25]. Deep learning is a subset of artificial intelligence (AI) that uses deep architectures to extract more abstract features [26,27]. Moreover, it can automatically determine whether a prediction is accurate or wrong through cost functions [28]. Several studies have applied deep learning to the identification of plant diseases. For example, deep learning-based disease identification in vineyard fields using images captured by UAV was introduced by [29]. The authors implemented the model using a combination of color spaces and vegetation indices. They then applied a sliding window approach with window sizes of 16 × 16, 32 × 32, and 64 × 64 pixels to divide each image into a collection of smaller windows, and each window was categorized as healthy or diseased. The highest disease identification accuracy reached 95%. However, the study was restricted by the relatively small amount of labeled data and a recurring issue. In another study, multispectral imagery captured by UAV was applied to detect phytoplasma disease in vine vegetation [30]. The authors manually acquired multispectral training images from four selected vineyards in Southwest France. Then, univariate and multivariate classification methods were applied to classify 20 variables from the training data. The results revealed that the classifier achieved higher performance on red cultivars than on white cultivars. However, the method suffered from misclassification of healthy pixels. In the same year, a benchmark disease detection dataset was published [31] to promote the development of effective methodologies for monitoring plant stress from a UAV platform. The obtained results proved that multispectral images captured by UAV are useful for detecting stress in grown plants and can even be used to recognize stress early. Moreover, the authors found that diseases could be easily identified in data from the red-edge and NIR channels. The research also revealed a major challenge for similar platforms: manual plant health assessments must be performed for comparison with the automatic model. A comprehensive review of UAV applications for monitoring and analyzing plant stress was recently conducted [32]. The authors explored over 100 articles in the field, discussed weaknesses that had already been solved and those that remained challenging, and offered suggestions for future research. They summarized and discussed the leading causes of plant stress, namely drought, nutrition disorders, and diseases, as well as other factors such as weeds. Through this discussion, the authors demonstrated that it would become easier to examine new methods because a growing number of datasets have been made freely available.
Taking a different approach from RGB, NIR can be used to compute a vegetation index (VI) to detect areas with varying levels of plant biomass [33]. A VI combines surface reflectance at two or more wavelengths to emphasize a specific feature of vegetation, and computing reflectance properties has been the standard method for analyzing plant health in agriculture for many years [34]. The normalized difference vegetation index (NDVI) is a popular VI that identifies vegetation and measures a plant’s overall health [35]. Radishes infected by Fusarium wilt show a drop in the NDVI accompanied by changes in spectral reflectance [3]: healthy plants absorb most visible light while reflecting a large amount of NIR light, and infected plants do the opposite. For example, twelve VIs, including NDVI, were computed to analyze laurel wilt disease on avocado [36]. The authors showed considerable variations in spectral values between wilted and healthy avocado. The best results were obtained with a mixture of excess green (ExG), NDVI, the color index of vegetation (CIVE), and the vegetative index (VEG), which proved that the differences between the red-edge, green, and blue channels were adequate for the precise classification of laurel wilt on avocado. However, the NDVI is highly sensitive to soil color and brightness, cloud shade, leaf canopy shadow, and the atmosphere, so calibration procedures are required to minimize these effects.
Having analyzed the different aspects of a typical plant disease detection system and reviewed previous work on radish wilt detection, we set out to implement an efficient framework for early-stage radish wilt detection. Based on the obtained results, the strengths and weaknesses of implementing radish wilt detection on the RGB and NIR datasets are shown. For the RGB dataset, superpixel segmentation was applied to extract only the radish regions from the radish field. After that, a customized convolutional neural network (CNN) model was trained to recognize the radish regions, so that the output contained only radish regions. The model is motivated by recent research on deep learning that demonstrated the potential to detect diseases in plants [5,23,32]. Finally, we evaluated the severity of the wilt disease (healthy, light disease, and serious disease) by applying various computer vision (CV) methods. For the NIR dataset, with a more straightforward approach than the complicated RGB process, the NDVI value was calculated and analyzed to identify wilt disease in the radish field. Finally, the performance of the introduced framework was examined through various experiments.
The main objectives of this paper include four parts:
  • Collect a high-resolution radish field dataset that contains both regular RGB images and multispectral NIR images.
  • Introduce a customized CNN model to precisely classify radish, mulching film, and soil in RGB images.
  • Apply a series of image processing methods to identify different stages of the wilt disease.
  • Comprehensively compare the NIR-based approach and the conventional RGB-based approach.
The rest of the article is organized as follows. Section 2 explains the data collection and preparation processes of the proposed dataset. The proposed wilt disease identification model is described thoroughly in Section 3. Section 4 presents a series of experiments conducted to examine the proposed framework on the RGB and NIR datasets. Section 5 analyzes the obtained results and discusses the performance of the presented framework. Finally, we summarize the article, identify some limitations, and address potential solutions in Section 6.

2. Radish Field Dataset

This study focuses on Fusarium wilt disease in radish. The radish fields were strictly controlled and drip-irrigated with a nutrient solution containing nitrogen, potassium, phosphorus, and small amounts of other compounds. Therefore, abiotic stresses (nutrient deficiencies or drought stress), other diseases, and pests, which can cause symptoms similar to wilt disease, were significantly minimized. In addition, the fields were inspected once per day by experts and farmers to prevent other diseases, pests, or abiotic stress.

2.1. Data Collection

The dataset used in this research was collected by two photography drones (Phantom 4 Pro, DJI Co., Ltd., Shenzhen, China) over two agricultural areas in Jeju, Korea, between January 2018 and February 2018. The two fields cover areas of 85 × 28 m and 76 × 35 m, respectively, and the distance between rows is 0.3 m. The drones’ maximum flight time was about 30 min, and the onboard camera sensors of the two drones differed in Complementary Metal Oxide Semiconductor (CMOS) size, lens, and focal length. One drone carried the stock 1/2.3 inch (1.10 cm) RGB CMOS camera sensor with 12.4 M effective pixels (DJI 2019). The other used a modified camera with a Near Infrared, Red, Green (NRG) filter (520–575 nm (green), 600–670 nm (red), and 600–1030 nm (NIR)) to capture multispectral images (Store 2019). While a low altitude enables small details of a specific region (such as wilted versus healthy plants) to be captured, images captured at a high altitude cover a large area and give inspectors a general view of the field. As a result, the drones were controlled to acquire the images at three different altitudes of approximately 3 m, 7 m, and 15 m above ground level. Therefore, the dataset can later serve different research purposes, as represented in Figure 1.
The data collection was conducted within one hour, between 11:30 and 12:30 (around solar noon), with wind speed under 7 m/s and temperature above 20 °C. Times when clouds partly obscured the sun were avoided to minimize the differences between flights and ensure relatively consistent lighting conditions. Moreover, a white reference image was taken by framing a calibrated reflectance panel (CRP) (MicaSense, Inc., Seattle, WA, USA) before and after each flight. During image processing, these white reference images were used to compensate for the lighting conditions at the time of image capture. The transfer function from radiance to reflectance F_i for the ith band was calculated as:
F_i = ρ_i / avg(L_i)
where ρ_i is the average reflectance of the CRP for the ith band and avg(L_i) is the average radiance of the pixels inside the panel for band i.
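As an illustration of this calibration step, the following minimal Python sketch computes the transfer factor for one band; the function name and the assumption that the panel pixels are supplied as a NumPy array are ours, not part of the original processing pipeline.

import numpy as np

def radiance_to_reflectance_factor(panel_radiance, panel_reflectance):
    # panel_radiance: array of radiance values for the pixels inside the
    #                 calibrated reflectance panel for band i
    # panel_reflectance: known average panel reflectance rho_i for band i
    avg_L_i = float(np.mean(panel_radiance))   # avg(L_i)
    return panel_reflectance / avg_L_i         # F_i = rho_i / avg(L_i)

# Multiplying a whole radiance band by F_i converts it to reflectance:
# reflectance_band = F_i * radiance_band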
Table 1 shows the flight dates, altitudes, and geographical conditions of the sites where this study was conducted.
In total, 2814 RGB images and 1997 NIR images were collected. Figure 2 presents the map of the two fields, where the data collection was conducted.
The proposed dataset has three main characteristics: (1) radish field images in 4K resolution (4000 × 3000) captured by UAV; (2) radish field images at various altitudes (3 m, 7 m, and 15 m); and (3) both RGB images, which are used for regular wilt identification, and NIR images, which can be used in remote sensing analysis.

2.2. Description of Training Data

The altitude of 3 m was chosen for the experiments on the RGB dataset because each region in the radish field (radish, soil, and mulching film) is clear and well separated, and the wilt regions are detected more easily at 3 m than at the other altitudes (7 m and 15 m). We then randomly picked 50 RGB images and manually annotated them to construct a new RGB-ROI dataset. It contains 600 regions of interest (ROIs) for radish, 580 ROIs for soil, and 520 ROIs for plastic mulch, as shown in Figure 3. The proposed deep learning model was trained on the RGB-ROI dataset to classify these three ROI classes in a radish field.
Similarly, 50 NIR images at the altitude of 3 m were chosen to perform experiments on the NIR dataset. Low altitude images enable a more comprehensive observation when the NDVI value is computed, such as identifying which region of the field is severely affected by the wilt disease and tracking the spreading path of the wilt disease.

3. Methodology

Figure 4 shows the overall architecture of the proposed Fusarium wilt disease identification framework for the radish field. As explained in Section 2, two Phantom 4 Pro drones (one equipped with an RGB sensor, the other with an NIR sensor) were used to capture radish field images at a 3 m altitude. From these, 60 RGB images and 50 NIR images were selected for the following experiments. Then, to compare the performance of radish wilt disease detection on the RGB and NIR datasets, two different processes were implemented. (1) For the approach using the RGB dataset, 50 RGB images were first selected to construct the RGB-ROI dataset, as described in Section 2.2. After the dataset was created, a customized CNN model (RadRGB) containing several convolutional layers was trained to extract multi-level abstract features from the RGB-ROI dataset and to classify whether an input image was a radish, soil, or mulching film region. The remaining 10 RGB images were reserved for testing. A mosaicking algorithm was implemented to combine these images into a mosaic image. As mentioned in the previous section, radish regions were the primary ROIs in this study, thus a superpixel segmentation approach was implemented to segment the mosaic image into distinct regions based on color similarity. The trained RadRGB model was then used to keep only the radish segments produced by the superpixel segmentation process. Finally, a disease severity evaluation was conducted to further classify the radish regions into healthy, light disease, and serious disease. (2) For the approach based on the NIR dataset, the NIR images were used to calculate the NDVI to detect the wilt disease and analyze the disease severity. Finally, the advantages and drawbacks of each approach are discussed in detail.

3.1. Radish Wilt Detection Using RGB Dataset

This section describes all methods that were applied to perform wilt disease detection on the RGB images: (1) a detailed explanation of the stitching algorithm, (2) implementation details of the superpixel algorithm, (3) a description of the proposed RadRGB model, and (4) the disease severity analysis algorithm.
After the stitching process, the stitched images were segmented into distinctive segments using the superpixel approach. Then, to classify each superpixel segment as a radish, mulching film, or soil region, the RadRGB model was trained using the manually created RGB-ROI dataset. Furthermore, we categorized radish segments into healthy, light disease, and serious disease using multiple image processing techniques. The stitching technique is explained in Section 3.1.1. Next, the segmentation process is described in Section 3.1.2. The RadRGB framework is discussed in Section 3.1.3. Finally, the disease severity analysis is presented in Section 3.1.4.

3.1.1. Mosaicking Technique (Stitching Technique)

There are several crucial steps in a typical image stitching algorithm. First, key points are identified in the input images. Then, local invariant descriptors are extracted and matched between the images. After that, a homography estimation algorithm, such as random sample consensus (RANSAC), is used to construct the homography matrix from the matched feature vectors. Finally, the homography matrix is fed into a warping transformation to generate the mosaic image. In this paper, we used the stitching algorithm proposed by [37] for the mosaicking of the RGB dataset. Unlike earlier image stitching algorithms, its use of a probabilistic model and invariant local features to validate image matches makes the method more stable and insensitive to the ordering and orientation of the images, illumination changes, and noisy images. Furthermore, it can produce high-quality panorama images by using image blending and gain compensation.
We picked 13 images at an altitude of 3 m to create the stitched RGB image. Upon extracting the features from the input images, the four nearest neighbors of each feature were found in feature space. After that, candidate images with more than six feature matches were identified and retained from the feature matching step. As shown in Figure 5, the stitched RGB image has a size of 8404 × 3567 pixels.
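For readers who want to reproduce a comparable mosaic, the sketch below uses OpenCV's high-level Stitcher, which follows the same pipeline as [37] (keypoint detection, descriptor matching, RANSAC homography estimation, gain compensation, and blending); the file names and the choice of the SCANS mode for near-nadir imagery are illustrative assumptions, not the exact implementation used in this study.

import cv2

paths = ["rgb_3m_%02d.jpg" % i for i in range(13)]   # hypothetical names for the 13 frames
images = [cv2.imread(p) for p in paths]

# SCANS mode assumes a roughly planar scene, which suits near-nadir UAV frames
stitcher = cv2.Stitcher_create(cv2.Stitcher_SCANS)
status, mosaic = stitcher.stitch(images)

if status == cv2.Stitcher_OK:
    cv2.imwrite("mosaic_rgb_3m.jpg", mosaic)
else:
    print("Stitching failed with status code", status)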

3.1.2. Radish Field Segmentation

Linear spectral clustering (LSC) is a superpixel algorithm proposed by [38]. It extracts perceptually salient global features from the input image and runs in linear time with high memory efficiency. If an image contains N pixels, the complexity of the feature mapping is O(N), which is lower than that of previous superpixel approaches [38]. In the LSC algorithm, every pixel is mapped to a point in a ten-dimensional feature space. Next, a weighted clustering algorithm, such as K-means, is used for the segmentation process. Thanks to the clustering in the ten-dimensional feature space and the normalization in the initial pixel space, non-local information is fully preserved. As a result, the LSC algorithm achieves low computational complexity and high memory efficiency while maintaining the global features of the images. Reported results show that LSC is superior to previous state-of-the-art superpixel segmentation methods. Two crucial parameters, the compactness ratio r_c and the number of superpixels K, must be carefully selected. Higher r_c values bias the algorithm towards superpixels with higher shape regularity, whereas smaller r_c values lead to better boundary adherence. In this study, the ratio r_c was set to 0.075, as reported in the original paper. The number of superpixels was set to 500, 1000, and 2000 because the stitched image was larger than the images used in the original paper. Additionally, the serious wilt regions had a color very similar to the soil regions. Figure 6 visualizes the superpixel segmentation with different numbers of superpixels. The soil and the serious wilt regions tended to be grouped together when the number of superpixels was 500, because these regions differ only slightly in color (Figure 6(c1)). However, when the number of superpixels was higher (1000 or 2000), the segmentation separated the soil regions and the serious wilt regions into different superpixels (Figure 6(c2),(c3)). As a result, K = 2000 was applied to extract the superpixel segments.
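A minimal sketch of this segmentation step with OpenCV's contrib module (opencv-contrib-python) is given below; because the OpenCV API controls superpixel size rather than the count K directly, the region size is derived from the desired K, and the input file name is a placeholder.

import cv2
import numpy as np

img = cv2.imread("mosaic_rgb_3m.jpg")              # placeholder for the stitched RGB mosaic
lab = cv2.cvtColor(img, cv2.COLOR_BGR2Lab)         # LSC operates in the CIELAB color space

K = 2000                                           # target number of superpixels
region_size = int(np.sqrt(img.shape[0] * img.shape[1] / K))

lsc = cv2.ximgproc.createSuperpixelLSC(lab, region_size=region_size, ratio=0.075)
lsc.iterate(10)                                    # iterations of the weighted K-means refinement

labels = lsc.getLabels()                           # per-pixel superpixel id
print("superpixels:", lsc.getNumberOfSuperpixels())

# Each labeled region can then be cropped via its bounding box and resized to
# 64 x 64 before being passed to the RadRGB classifier of Section 3.1.3.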

3.1.3. Detailed Implementation of the RadRGB Model

In this part, we explain the RadRGB radish region detection model in detail. During the data collection process for the RGB-ROI dataset, the collected regions were usually small (less than 100 × 100 pixels). Thus, a CNN containing five convolutional layers with an input size of 64 × 64 was selected. An appropriate kernel size was chosen for each convolutional layer to map an input image of 64 × 64 to an output of 1 × 3 and to effectively manage the flow of parameters. Figure 7 illustrates the complete architecture of the proposed CNN model, including input size, kernel size, and output size. In general, the RadRGB model receives 64 × 64 images as the input. They are forwarded through five convolutional layers (C1 to C5), three pooling layers (M1 to M3), and two dense layers. The max-pooling layers reduce the spatial size of the feature maps and the risk of overfitting. The output of the last dense layer is radish, plastic mulch, or soil [39].
Table 2 presents the detailed structure and the output in the RadRGB model.
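To make the architecture description concrete, the following Keras sketch builds a RadRGB-like network with five convolutional layers, three max-pooling layers, and two dense layers; the filter counts and kernel sizes are illustrative assumptions rather than the exact values of Table 2, while the dropout rates and Adam settings follow Section 4.1.

from tensorflow.keras import layers, models, optimizers

def build_radrgb(input_shape=(64, 64, 3), n_classes=3):
    model = models.Sequential([
        layers.Conv2D(32, 3, activation="relu", input_shape=input_shape),  # C1
        layers.Conv2D(32, 3, activation="relu"),                           # C2
        layers.MaxPooling2D(2),                                            # M1
        layers.Dropout(0.2),
        layers.Conv2D(64, 3, activation="relu"),                           # C3
        layers.MaxPooling2D(2),                                            # M2
        layers.Dropout(0.2),
        layers.Conv2D(64, 3, activation="relu"),                           # C4
        layers.Conv2D(128, 3, activation="relu"),                          # C5
        layers.MaxPooling2D(2),                                            # M3
        layers.Flatten(),
        layers.Dense(128, activation="relu"),
        layers.Dropout(0.5),
        layers.Dense(n_classes, activation="softmax"),   # radish / soil / plastic mulch
    ])
    model.compile(optimizer=optimizers.Adam(learning_rate=0.001, beta_1=0.9,
                                            beta_2=0.999, epsilon=1e-8),
                  loss="categorical_crossentropy", metrics=["accuracy"])
    return model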
In order to train the proposed deep learning model to distinguish different regions of the radish field, the RGB-ROI dataset was divided into two subsets with a split ratio of 85/15. The first subset was used as the training dataset (1445 images), while the second subset was reserved for testing (255 images). The training and testing datasets are described in detail in Table 3.
Four-fold cross-validation [40] was then implemented on the training set, randomly dividing the dataset into four subsets. The model’s performance was examined four times, and three subsets were used as training data for each fold. The remaining subset was used for testing. After that, the three subsets used as training data were further separated into two subsets; 80% of training data was fed to the proposed model to optimize the network weights, and 20% of the data were utilized as a validation dataset to determine the optimized parameters.
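The cross-validation protocol can be sketched as follows; X and y are stand-ins for the 1445 training ROIs and their one-hot labels, and build_radrgb refers to the illustrative model sketch above.

import numpy as np
from sklearn.model_selection import KFold, train_test_split

kf = KFold(n_splits=4, shuffle=True, random_state=0)
for fold, (train_idx, test_idx) in enumerate(kf.split(X), start=1):
    # the three training folds are further split 80/20 into fit and validation data
    X_fit, X_val, y_fit, y_val = train_test_split(
        X[train_idx], y[train_idx], test_size=0.2, random_state=0)

    model = build_radrgb()
    model.fit(X_fit, y_fit, validation_data=(X_val, y_val),
              epochs=30, batch_size=32, verbose=0)

    loss, acc = model.evaluate(X[test_idx], y[test_idx], verbose=0)
    print("fold %d: loss=%.3f, accuracy=%.3f" % (fold, loss, acc))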
At the end of the classification steps, a list of classified radish regions was extracted to use as the input for the disease severity classification process.

3.1.4. Disease Severity Analysis

Color space conversion and thresholding are two fundamental image pre-processing operations used in this section. All the input images were converted from RGB to hue, saturation, value (HSV) because, unlike RGB, HSV separates the image intensity from the color information. Therefore, image descriptors extracted from the HSV color space are robust to inconsistent illumination and shadows [8]. Before the wilt severity analysis is performed, the number of black pixels in the HSV images (the result of the superpixel segmentation process) is counted. These pixels are then omitted from the computation of the healthy rate value, as shown in Algorithm 1.
Algorithm 1: Wilt Severity Analysis
 0: Initialize count_seg_black and count_white to 0, and total_pixel to the total number of pixels in img
 1: for pixel in img do
 2:   if pixel equals 0 then
 3:     count_seg_black = count_seg_black + 1
 4:   end if
 5: end for
 6: bin_img = adaptiveThreshold(img, blockSize, C)
 7: for pixel in bin_img do
 8:   if pixel equals 255 then
 9:     count_white = count_white + 1
 10:  end if
 11: end for
 12: healthy_rate = count_white / (total_pixel − count_seg_black) × 100
The adaptive thresholding process from Algorithm 1 is described as follows:
des(x, y) = 0 if src(x, y) > T(x, y), and 255 otherwise
where des(x, y) is the pixel at location (x, y) of the binary image and src(x, y) is the pixel at location (x, y) of the source image. The threshold value T(x, y) is a weighted sum (Gaussian cross-correlation) over the block_Size × block_Size neighborhood of src(x, y) minus C, where block_Size is the pixel neighborhood size used to compute the threshold for des(x, y) and C is the constant subtracted from the weighted mean.
The severity of the disease can be analyzed from the binary image. After the thresholding process, black regions correspond to the yellowish wilted radishes, while white pixels indicate the healthy greenish radishes. Radishes at the final stage of wilt disease have larger black areas than healthy radishes or radishes at an early stage of the disease. This characteristic was established by observing the results of various experiments. Moreover, the specific threshold range for each level of disease severity was determined by following two previous studies on radish wilt detection [5,6].
As a result, these characteristics were applied in the thresholding process to categorize each input image as healthy, light disease, or serious disease, as shown in Figure 8. When the healthy_rate was over 90, meaning that white pixels occupied over 90% of the entire image, the image was classified as healthy, because wilt pixels are insignificant when they cover less than 10% of the region. If the percentage of white pixels was between 60% and 90%, the image was classified as a light disease region. Finally, the image was classified as serious disease if the white portion occupied less than 60%.
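A minimal OpenCV sketch of this severity analysis is shown below; thresholding the V channel with a Gaussian adaptive threshold, and the block_size and C values, are illustrative assumptions, and whether the binary image must be inverted depends on the channel actually thresholded.

import cv2
import numpy as np

def wilt_severity(segment_bgr, block_size=21, C=2):
    hsv = cv2.cvtColor(segment_bgr, cv2.COLOR_BGR2HSV)
    value = hsv[:, :, 2]                               # intensity (V) channel

    # pixels blacked out by the superpixel masking step are excluded
    count_seg_black = int(np.sum(value == 0))

    bin_img = cv2.adaptiveThreshold(value, 255,
                                    cv2.ADAPTIVE_THRESH_GAUSSIAN_C,
                                    cv2.THRESH_BINARY, block_size, C)
    count_white = int(np.sum(bin_img == 255))
    total_pixel = value.size

    healthy_rate = 100.0 * count_white / max(total_pixel - count_seg_black, 1)
    if healthy_rate > 90:
        return "healthy"
    elif healthy_rate >= 60:
        return "light disease"
    return "serious disease"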

3.2. Radish Wilt Detection Using NIR Dataset

This section describes the detection of wilt disease in the radish fields by computing the NDVI from the NIR dataset. We followed previous studies to compute the NDVI from the multispectral images [30,35]. In the spectral data, the red channel (600–700 nm) and the NIR channel (700–900 nm) were the key variables for calculating the NDVI. Green and dense vegetation strongly absorbs red light (R) because of chlorophyll, whereas the cell walls in the leaves strongly reflect light in the NIR channel. The NDVI normalizes the R and NIR channels to provide a single signal that indicates the health condition of the plants [35]. The NDVI is defined as:
NDVI = (NIR − R) / (NIR + R)
Because the NDVI is a normalized index, its values range from −1 to 1, with vegetated areas typically falling between 0 and 1. Moreover, the index responds well to green vegetation and works even for regions with low vegetation cover. The NDVI is commonly used in vegetation assessment studies and has been shown to be related to both canopy photosynthesis and the leaf area index (LAI) [41].
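A per-pixel NDVI computation can be sketched in a few lines of NumPy; nir and red are assumed to be the NIR and red channels extracted from an NRG image, converted to floating point.

import numpy as np

def ndvi(nir, red):
    nir = nir.astype(np.float32)
    red = red.astype(np.float32)
    # small epsilon avoids division by zero on dark or masked pixels
    return (nir - red) / (nir + red + 1e-6)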

4. Experimental Results

In this section, we describe all experiments conducted on the RGB and the NIR datasets. After that, the obtained results are visualized for analysis. Section 4.1 shows several experiments conducted to verify the identification of wilt disease on RGB images. Section 4.2 describes experiments on NDVI for NIR images.

4.1. Experiments on RGB Dataset

The RGB images were used as the input data for a series of experiments. Section 3.1.1 explains the results of the image stitching algorithm. The superpixel segmentation process was used to extract a list of segmented regions, as described in Section 3.1.2. After that, Section 3.1.3 explains how the proposed RadRGB model was trained to filter out only the radish regions. Finally, the disease severity process classified each radish region as healthy, light, or serious.
An NVIDIA deep learning graphics processing unit (GPU) training system (DIGITS) workstation was used to train the CNN model. It ran Ubuntu 16.04 with an Intel Core i7-5930K CPU, 64 GB of DDR4 RAM, and four Titan X 12 GB GPUs (each with 3072 compute unified device architecture (CUDA) cores). All programming was implemented in Python, with the TensorFlow deep learning library used for constructing and validating the model. The model was trained for 30 epochs, which took approximately 20 min. The batch size was 32; the first two dropout rates were set to 0.2 and the third to 0.5. For the Adam optimizer, the learning rate was 0.001, beta_1 was 0.9, beta_2 was 0.999, and epsilon was 1 × 10^−8.
Figure 9 reveals that accuracy rose dramatically to 80%, whereas the loss dropped significantly to 35% during the first five epochs. The training accuracy and the validation accuracy grew steadily and stabilized around 86% for the rest of the training process. In contrast, the loss of both training and validation processes dropped slightly and hit bottom at 30%. Among those folds, fold four achieved the best results regarding the robustness of the model and the validation accuracy.
Table 4 reports the confusion matrix results on the RGB-ROI testing dataset to examine the performance of the RadRGB model in identifying distinctive regions in the radish field. The CNN model correctly recognized three types of objects on the radish field, with an average accuracy of 96%. The high precision values of over 0.95 in three classes indicated that over 95% of total relevant results were accurately classified by the model. In addition, the average recall value of over 0.96 proved that the model correctly categorized over 96% of the relevant results. Based on analyzing different statistics, the model showed its superiority in classifying different regions in the radish field.
In addition, the proposed model was compared with previous state-of-the-art methods on the proposed dataset. The first model is a fine-tuned Inception-V3 model that distinguished three different levels of wilt disease (healthy plant, light disease, and severe disease) [3]. The second model is a customized VGG-16 model proposed to classify healthy plants and wilted plants [2]. A performance comparison between the proposed RadRGB, the fine-tuned Inception-V3 model [3], and the customized VGG-16 model [2] is given in Table 5. These two models were trained on the same dataset as the proposed RadRGB model and were implemented with the hyperparameters described in [2] and [3].
Overall, all models performed well on the proposed dataset with an accuracy of over 90%. The proposed RadRGB achieved the highest classification accuracy of 96.4%. With a slightly lower performance, the customized VGG-16 model and the fine-tuned inception-V3 model reached accuracies of 93.1% and 95.7%, respectively.
Regarding time complexity, the proposed model demands the lowest testing time of 0.043 s per image thanks to its simple eight-layer structure, followed by the VGG-16 model with 16 layers, which requires 0.1 s per test image. Finally, Inception-V3, with 48 layers, needs the longest testing time of 0.22 s per test image.
Finally, the trained model was used to classify the 7336 segments produced by the superpixel segmentation in Section 3.1.2. In total, 4763 segments were classified as radish regions, 1956 as soil, and the remaining 617 as plastic mulch. After the RadRGB model identified the 4763 radish regions, the disease severity analysis classified them into three levels of wilt disease (healthy, light disease, and serious disease) based on leaf color; the leaf color changes from yellow to brown when an infected radish reaches the late stage of wilt disease. After the disease severity analysis, 2876 regions were categorized as healthy, 1397 as light disease, and 490 as serious disease.
Figure 10 shows the radish field classification and the disease severity analysis results. Figure 10a shows extracted radish regions, which were classified by the trained RadRGB model. After that, the disease severity analysis was implemented to select only the regions that were infected by wilt disease. Figure 10b highlights three regions infected by Fusarium wilt of radish (highlighted in blue). Finally, Figure 10c shows the effectiveness of the proposed method in detecting wilt areas by displaying the three regions and their corresponding original images.
However, the proposed model sometimes failed to detect the late stage of wilt disease (Figure 11c): parts of the yellow leaves were missed because soil regions and late-stage wilt regions have a very similar appearance. Such wilt regions were therefore segmented together with the soil in the superpixel segmentation process and discarded during the classification process. We manually checked all of the soil regions classified by the proposed model and found that, among 695 soil segments, 38 contained late wilt disease regions. Because these regions were discarded, they lowered the wilt disease detection rate.

4.2. Experiments on the NIR Dataset

Figure 12 shows three NDVI images computed from original NIR images of radishes at three different stages (healthy, early disease, and late disease). The NDVI was calculated using an open-source Python library named infrapix. The NDVI values of the radish were always above 0 in all three cases; the values in Figure 12a were between 0.7 and 1, indicating the highest possible density of healthy leaves. The leaf color began to change when a radish was infected with Fusarium wilt, which led to lower NDVI values of between 0.45 and 0.7, and to values below 0.45 in the late stage of the disease.
The NIR dataset covered two radish fields: one healthy field, and one reported by the farmers to show early symptoms of Fusarium wilt disease. We randomly picked ten radishes from the field without wilt disease and ten radishes from the infected field. Next, the drone was used to capture one image of each spot at two different times (12 January 2018 and 28 January 2018) to check the NDVI when the disease became more serious. Therefore, the NDVI was calculated for a total of 20 images collected on each observation date. Figure 13 shows the NDVI values of the ten radish samples taken from the field without wilt disease and the ten radish samples taken from the field infected with wilt disease on 12 and 28 January 2018. On 12 January 2018, the highest NDVI values were recorded for samples from the field without wilt disease, whereas lower NDVI values occurred for samples from the field infected by Fusarium wilt. NDVI values acquired 16 days later for the infected field were even lower because the disease had slowly spread and deteriorated the vascular elements in the petioles, resulting in stunted plants.

5. Discussion

To the best of our knowledge, no previous dataset supported wilt disease detection for radish. The proposed radish wilt detection dataset is the first that contains both high-resolution RGB and multispectral NIR images captured by UAV at various altitudes. Using widely available UAVs to survey the radish field is faster and more reliable than traditional manual evaluation techniques. Thus, this data can become a public benchmark dataset for radish wilt detection and can be used for different research purposes.
The proposed wilt disease detection framework for the RGB dataset is flexible, as it can be applied to images of different sizes by using the sliding window technique. Moreover, it can identify the wilt disease at varying levels, from healthy to heavy wilt. Although previous state-of-the-art models such as VGG-16 [2] and Inception-V3 [3] have been applied to disease identification and achieved high performance, they require substantial computational time. In contrast, the proposed model demands less training and testing time thanks to its simple structure while achieving comparable performance. The proposed framework can also be used for other types of plants, such as banana, tobacco, and tomato, because Fusarium wilt is a common vascular disease. Finally, the wilt identification framework based on the RGB dataset requires several steps compared to wilt detection on the NIR dataset.
However, conventional RGB images are susceptible to variations such as shading, sunlight, and cloud, so wilt disease analyses of images captured at the same spot at different times may not be comparable. As a result, it is essential to apply several techniques to minimize environmental impacts, such as performing data collection at a fixed time, conducting image calibration, and using several pre-processing methods. Moreover, because the RGB-based approach relies mainly on analyzing leaf color, the model fails to differentiate late-stage wilt regions from soil regions, as they have similar texture and color. This issue can be addressed by increasing the number of segments to force the superpixel segmentation to produce finer superpixels, or by adding a late-wilt class to the classification model; however, these solutions substantially increase model complexity.
In contrast to the RGB dataset, the NDVI is robust against such variations, making it a valuable tool for analyzing the health of a radish field over time (week to week, month to month). Moreover, plants show stress in the NIR spectrum earlier than in the visible spectrum, which allows even slight crop stress to be detected before it becomes visible. Vegetation indices are a valuable tool for farmers and researchers because their performance as an indicator of vegetation coverage provides a robust and cost-effective way to monitor plant health across large areas in a short time. However, NIR sensors are costly compared to widely available RGB sensors; as a result, they are found only on specific drones or have to be added separately.

6. Conclusions

This study provides an efficient framework for detecting Fusarium wilt disease in radishes using UAV-acquired RGB and NIR images. A large, high-resolution RGB and NIR radish dataset at three altitudes (3 m, 7 m, and 15 m) was collected. For the RGB dataset, we proposed a customized deep learning model that classifies three different regions of the radish field (radish, soil, and mulching film) with an average accuracy of over 96%. We further categorized the disease severity of the radish regions into healthy, light wilt, and serious wilt by applying different computer vision techniques. The proposed framework achieved high performance, flexibility, and robustness; it can therefore reduce the labor cost of controlling the wilt disease and stopping it from spreading further. In addition, we showed the possibility of constructing an NDVI map from the proposed NIR dataset to identify different stages of wilt disease with high precision. Although the NIR approach is more straightforward than the procedure required to recognize radish wilt in the RGB dataset, it requires a specialized sensor that is not always available. Overall, the proposed end-to-end framework presents several advantages over existing systems for wilt detection on radish.
Some limitations need to be addressed in the future to improve the performance of the model. The RGB dataset was used directly, and no pre-processing techniques were applied; in the future, processing methods such as augmentation and transformation can be applied to improve the overall performance. Moreover, it would be better to collect the NIR data over time, because the disease path and patterns could then be analyzed and tracked.

Author Contributions

Conceptualization, H.M. and L.M.D., H.W.; methodology, L.M.D., Y.L.; validation, J.T.K., O.N.L., and H.P.; resources, H.W.; writing—original draft preparation, L.M.D.; writing-review and editing, Y.L.; visualization, L.M.D.; validation, K.M.; supervision, H.M., H.P.; funding acquisition, H.M. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Acknowledgments

This work was supported by Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Education (2020R1A6A1A03038540) and by Institute of Information & communications Technology Planning & Evaluation (IITP) grant funded by the Korea government (MSIT) (2019-0-00136, Development of AI-Convergence Technologies for Smart City Industry Productivity Innovation).

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Lee, G.-G.; Lee, H.-W.; Lee, J.-H. Greenhouse gas emission reduction effect in the transportation sector by urban agriculture in Seoul, Korea. Landsc. Urban Plan. 2015, 140, 1–7. [Google Scholar] [CrossRef] [Green Version]
  2. Ha, J.G.; Moon, H.; Kwak, J.T.; Hassan, S.I.; Dang, L.M.; Lee, O.N.; Park, H.Y. Deep convolutional neural network for classifying Fusarium wilt of radish from unmanned aerial vehicles. J. Appl. Remote Sens. 2017, 11, 042621. [Google Scholar] [CrossRef]
  3. Dang, L.M.; Hassan, S.I.; Suhyeon, I.; Sangaiah, A.K.; Mehmood, I.; Rho, S.; Seo, S.; Moon, H.; Syed, I.H. UAV based wilt detection system via convolutional neural networks. Sustain. Comput. Inform. Syst. 2018. [Google Scholar] [CrossRef] [Green Version]
  4. Drapikowska, M.; Drapikowski, P.; Borowiak, K.; Hayes, F.; Harmens, H.; Dziewiątka, T.; Byczkowska, K. Application of novel image base estimation of invisible leaf injuries in relation to morphological and photosynthetic changes of Phaseolus vulgaris L. exposed to tropospheric ozone. Atmos. Pollut. Res. 2016, 7, 1065–1071. [Google Scholar] [CrossRef]
  5. Khirade, S.D.; Patil, A. Plant disease detection using image processing. In Proceedings of the 2015 International Conference on Computing Communication Control and Automation, Maharashtra, India, 26–27 February 2015. [Google Scholar]
  6. Singh, V.; Misra, A.K. Detection of plant leaf diseases using image segmentation and soft computing techniques. Inf. Process. Agric. 2017, 4, 41–49. [Google Scholar] [CrossRef] [Green Version]
  7. Dang, L.M.; Piran, J.; Han, D.; Min, K.; Moon, H. A Survey on Internet of Things and Cloud Computing for Healthcare. Electronics 2019, 8, 768. [Google Scholar] [CrossRef] [Green Version]
  8. Huang, Y.; Reddy, K.N.; Fletcher, R.S.; Pennington, D. UAV low-altitude remote sensing for precision weed management. Weed Technol. 2018, 32, 2–6. [Google Scholar] [CrossRef]
  9. Matese, A.; Toscano, P.; Di Gennaro, S.F.; Genesio, L.; Vaccari, F.P.; Primicerio, J.; Belli, C.; Zaldei, A.; Bianconi, R.; Gioli, B.; et al. Intercomparison of UAV, aircraft and satellite remote sensing platforms for precision viticulture. Remote Sens. 2015, 7, 2971–2990. [Google Scholar] [CrossRef] [Green Version]
  10. Li, Y.; Wang, H.; Dang, L.M.; Sadeghi-Niaraki, A.; Moon, H. Crop pest recognition in natural scenes using convolutional neural networks. Comput. Electron. Agric. 2020, 169, 105174. [Google Scholar] [CrossRef]
  11. Zhao, J.; Zhang, X.; Gao, C.; Qiu, X.; Tian, Y.; Zhu, Y.; Cao, W. Rapid Mosaicking of Unmanned Aerial Vehicle (UAV) Images for Crop Growth Monitoring Using the SIFT Algorithm. Remote Sens. 2019, 11, 1226. [Google Scholar] [CrossRef] [Green Version]
  12. Wu, Y.; Ji, Q. Robust facial landmark detection under significant head poses and occlusion. In Proceedings of the IEEE International Conference on Computer Vision, Santiago, Chile, 7–13 December 2015. [Google Scholar]
  13. Redmon, J.; Farhadi, A. YOLOv3: An Incremental Improvement. arXiv 2018, arXiv:1804.02767. [Google Scholar]
  14. Ren, S.; He, K.; Girshick, R.; Sun, J. Faster R-CNN: Towards real-time object detection with region proposal networks. In Proceedings of the Advances in Neural Information Processing Systems 2015, Montreal, QC, Canada, 7–12 December 2015. [Google Scholar]
  15. Li, Y.; Hou, X.; Koch, C.; Rehg, J.M.; Yuille, A.L. The secrets of salient object segmentation. In Proceedings of the 27th IEEE Conference on Computer Vision and Pattern Recognition, Columbus, OH, USA, 24–27 June 2014. [Google Scholar]
  16. Ren, X.; Malik, J. Learning a classification model for segmentation. In Proceedings of the Ninth IEEE International Conference on Computer Vision, Nice, France, 13–16 October 2003. [Google Scholar]
  17. Felzenszwalb, P.F.; Huttenlocher, D.P. Efficient graph-based image segmentation. Int. J. Comput. Vis. 2004, 59, 167–181. [Google Scholar] [CrossRef]
  18. Comaniciu, D.; Meer, P. Mean shift: A robust approach toward feature space analysis. IEEE Trans. Pattern Anal. Mach. Intell. 2002, 24, 603–619. [Google Scholar] [CrossRef] [Green Version]
  19. Shi, J.; Malik, J. Normalized cuts and image segmentation. IEEE Trans. Pattern Anal. Mach. Intell. 2000, 22, 888–905. [Google Scholar] [CrossRef] [Green Version]
  20. Vedaldi, A.; Soatto, S. Quick shift and kernel methods for mode seeking. In Proceedings of the European Conference on Computer Vision, Marseille, France, 12–18 October 2008. [Google Scholar]
  21. Patil, J.K.; Kumar, R. Analysis of content based image retrieval for plant leaf diseases using color, shape and texture features. Eng. Agric. Environ. Food 2017, 10, 69–78. [Google Scholar] [CrossRef]
  22. Dubey, S.R.; Jalal, A.S. Apple disease classification using color, texture and shape features from images. Signal Image Video Process. 2016, 10, 819–826. [Google Scholar] [CrossRef]
  23. Rançon, F.; Bombrun, L.; Keresztes, B.; Germain, C. Comparison of SIFT Encoded and Deep Learning Features for the Classification and Detection of Esca Disease in Bordeaux Vineyards. Remote Sens. 2019, 11, 1. [Google Scholar] [CrossRef] [Green Version]
  24. Zhang, S.; Wang, H.; Huang, W.; You, Z. Plant diseased leaf segmentation and recognition by fusion of superpixel, K-means and PHOG. Optik 2018, 157, 866–872. [Google Scholar] [CrossRef]
  25. Mohanty, S.P.; Hughes, D.P.; Salathé, M. Using deep learning for image-based plant disease detection. Front. Plant Sci. 2016, 7, 1419. [Google Scholar] [CrossRef] [Green Version]
  26. Nguyen, T.N.; Lee, S.; Nguyen-Xuan, H.; Lee, J. A novel analysis-prediction approach for geometrically nonlinear problems using group method of data handling. Comput. Methods Appl. Mech. Eng. 2019, 354, 506–526. [Google Scholar] [CrossRef]
  27. Nguyen, T.N.; Nguyen-Xuan, H.; Lee, J. A novel data-driven nonlinear solver for solid mechanics using time series forecasting. Finite Elem. Anal. Des. 2020, 171, 103377. [Google Scholar] [CrossRef]
  28. LeCun, Y.; Bengio, Y.; Hinton, G. Deep learning. Nature 2015, 521, 436. [Google Scholar] [CrossRef] [PubMed]
  29. Kerkech, M.; Hafiane, A.; Canals, R. Deep leaning approach with colorimetric spaces and vegetation indices for vine diseases detection in UAV images. Comput. Electron. Agric. 2018, 155, 237–243. [Google Scholar] [CrossRef]
  30. Albetis, J.; Duthoit, S.; Guttler, F.; Jacquin, A.; Goulard, M.; Poilvé, H.; Féret, J.-B.; Dedieu, G. Detection of Flavescence dorée grapevine disease using unmanned aerial vehicle (UAV) multispectral imagery. Remote Sens. 2017, 9, 308. [Google Scholar] [CrossRef] [Green Version]
  31. Dash, J.P.; Watt, M.S.; Pearse, G.D.; Heaphy, M.; Dungey, H.S. Assessing very high resolution UAV imagery for monitoring forest health during a simulated disease outbreak. ISPRS J. Photogramm. Remote Sens. 2017, 131, 1–14. [Google Scholar] [CrossRef]
  32. Barbedo, J.G.A. A Review on the Use of Unmanned Aerial Vehicles and Imaging Sensors for Monitoring and Assessing Plant Stresses. Drones 2019, 3, 40. [Google Scholar] [CrossRef] [Green Version]
  33. Wierzbicki, D.; Kedzierski, M.; Fryskowska, A.; Jasinski, J. Quality Assessment of the Bidirectional Reflectance Distribution Function for NIR Imagery Sequences from UAV. Remote Sens. 2018, 10, 1348. [Google Scholar] [CrossRef] [Green Version]
  34. Xue, J.; Su, B. Significant remote sensing vegetation indices: A review of developments and applications. J. Sens. 2017, 2017, 1–17. [Google Scholar] [CrossRef] [Green Version]
  35. Gandhi, G.M.; Parthiban, S.; Thummalu, N.; Christy, A. NDVI: Vegetation change detection using remote sensing and GIS—A case study of Vellore District. Procedia Comput. Sci. 2015, 57, 1199–1210. [Google Scholar] [CrossRef] [Green Version]
  36. De Castro, A.I.; Ehsani, R.; Ploetz, R.C.; Crane, J.H.; Buchanon, S. Detection of laurel wilt disease in avocado using low altitude aerial imaging. PLoS ONE 2015, 10, e0124642. [Google Scholar] [CrossRef]
  37. Brown, M.; Lowe, D.G. Automatic Panoramic Image Stitching Using Invariant Features. Int. J. Comput. Vis. 2007, 74, 59–73. [Google Scholar] [CrossRef] [Green Version]
  38. Li, Z.; Chen, J. Superpixel segmentation using linear spectral clustering. In Proceedings of the 28th IEEE Conference on Computer Vision and Pattern Recognition, Boston, MA, USA, 7–12 June 2015. [Google Scholar]
  39. Li, Y.; Yuan, Y. Convergence analysis of two-layer neural networks with relu activation. In Proceedings of the Advances in Neural Information Processing Systems, Long Beach, CA, USA, 4–9 December 2017; pp. 597–607. [Google Scholar]
  40. Wang, H.; Li, Y.; Dang, L.M.; Ko, J.; Han, D.; Moon, H. Smartphone-based bulky waste classification using convolutional neural networks. Multimed. Tools Appl. 2020, 79, 1–21. [Google Scholar] [CrossRef]
  41. Yang, H.; Yang, X.; Heskel, M.A.; Sun, S.; Tang, J. Seasonal variations of leaf and canopy properties tracked by ground-based NDVI imagery in a temperate forest. Sci. Rep. 2017, 7, 1267. [Google Scholar] [CrossRef] [PubMed]
Figure 1. Sample images captured by unmanned aerial vehicle (UAV) at three different altitudes (3 m, 7 m, and 15 m). The top three images belong to the near-infrared (NIR) dataset, whereas the bottom three images are from the Red-Green-Blue (RGB) dataset.
Figure 2. Satellite map of the radish fields (33°30′5.44″ N, 126°51′25.39″ E), where the radish dataset was collected (Google map image).
Figure 3. An RGB radish sample captured by UAV illustrates three distinctive types of objects (the green line shows radish regions, the blue line indicates soil, and the black line indicates plastic mulch).
Figure 4. Overall structure of the proposed wilt detection framework. Two UAV were used to capture the RGB and the NIR images at the 3-m altitude. After that, two distinctive sub-processes were implemented to perform radish wilt detection on the RGB and the NIR dataset.
Figure 5. The stitching result for 13 RGB images at an altitude of 3 m (the mosaic image size is 8404 × 3567 pixels).
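For readers who want to reproduce a mosaic like the one in Figure 5, the sketch below uses OpenCV's high-level Stitcher, a feature-based pipeline in the spirit of [37]. This is only an illustrative stand-in: the paper's exact stitching implementation and parameters are not given here, and the file paths are hypothetical.

```python
# Minimal sketch: stitching overlapping UAV frames into a single mosaic with
# OpenCV's high-level Stitcher. File names are hypothetical placeholders.
import cv2
import glob

frames = [cv2.imread(p) for p in sorted(glob.glob("rgb_3m/*.jpg"))]
stitcher = cv2.Stitcher_create(cv2.Stitcher_SCANS)  # SCANS mode suits near-planar aerial imagery
status, mosaic = stitcher.stitch(frames)

if status == cv2.Stitcher_OK:
    cv2.imwrite("radish_mosaic_3m.jpg", mosaic)
else:
    print(f"Stitching failed with status code {status}")
```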
Figure 6. Superpixel segmentation with different numbers of superpixels. A specific wilt region (b) was zoomed in from the original image (a), and superpixel segmentation was then applied to the zoomed image with (c1) 500, (c2) 1000, and (c3) 2000 superpixels.
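The segmentation step in Figure 6 can be sketched as follows. The paper cites the linear spectral clustering method of [38]; here scikit-image's SLIC is used as a readily available stand-in, with the superpixel counts mirroring those in the figure and a hypothetical input file name.

```python
# Sketch: splitting a field image into superpixels and visualizing boundaries.
# SLIC is used as a stand-in for the linear spectral clustering method in [38].
import cv2
from skimage.segmentation import slic, mark_boundaries
from skimage.util import img_as_float

image = img_as_float(cv2.cvtColor(cv2.imread("radish_crop.jpg"), cv2.COLOR_BGR2RGB))

for n in (500, 1000, 2000):
    labels = slic(image, n_segments=n, compactness=10, start_label=1)
    overlay = mark_boundaries(image, labels)
    cv2.imwrite(f"superpixels_{n}.png",
                cv2.cvtColor((overlay * 255).astype("uint8"), cv2.COLOR_RGB2BGR))
```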
Figure 7. RadRGB architecture, which takes 64 × 64 input images and comprises five convolutional layers, three max-pooling layers, and three output classes (C indicates a convolutional layer and M a max-pooling layer).
Figure 8. Disease severity classification process (scaled up to 100% from the original image). Each 64 × 64 radish image (first column) was converted to the hue, saturation, value (HSV) colorspace (second column), and a thresholding process (third column) was then conducted to categorize the disease severity as healthy (first row), light (second row), or serious (third row).
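A minimal sketch of the idea behind Figure 8 is given below: convert a radish patch to HSV, mask the yellowed (wilt-coloured) pixels, and grade severity by the wilted-pixel fraction. The hue bounds and the 10%/40% cut-offs are illustrative assumptions only, not the calibrated thresholds used in the paper.

```python
# Sketch of HSV-threshold severity grading; bounds and cut-offs are assumptions.
import cv2
import numpy as np

def wilt_severity(patch_bgr: np.ndarray) -> str:
    hsv = cv2.cvtColor(patch_bgr, cv2.COLOR_BGR2HSV)
    # OpenCV hue runs 0-179; roughly 15-35 covers yellowish foliage.
    wilt_mask = cv2.inRange(hsv, (15, 60, 60), (35, 255, 255))
    ratio = np.count_nonzero(wilt_mask) / wilt_mask.size
    if ratio < 0.10:
        return "healthy"
    elif ratio < 0.40:
        return "light"
    return "serious"

patch = cv2.imread("radish_patch_64x64.jpg")  # hypothetical 64 x 64 segment
print(wilt_severity(patch))
```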
Figure 9. Performance of the RadRGB model on each fold.
Figure 10. Example of the radish field classification and disease severity analysis on the RGB image, including (a) radish regions extracted by the RadRGB model, (b) wilt severity analysis, with blue boxes indicating examples of wilt regions, and (c) comparison between detected wilt regions and the original images (scaled up to 150% from (b)).
Figure 11. Wilt disease severity analysis for extracted radish regions (scaled up to 100% from the original image). (a) Healthy radish regions, (b) light disease regions, and (c) serious disease regions.
Figure 12. The distinction between three corresponding normalized difference vegetation index (NDVI) maps for radish at different stages (scaled up to 100% from the original image), including (a) healthy, (b) light disease, and (c) serious disease.
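The NDVI maps in Figure 12 follow the standard per-pixel definition NDVI = (NIR − Red) / (NIR + Red) [34,35]. The sketch below computes such a map, assuming the red and NIR bands are already co-registered arrays; the band layout of the NIR camera used in the study is not specified here.

```python
# Sketch: per-pixel NDVI = (NIR - Red) / (NIR + Red) for co-registered band arrays.
import numpy as np

def ndvi(nir: np.ndarray, red: np.ndarray) -> np.ndarray:
    nir = nir.astype(np.float32)
    red = red.astype(np.float32)
    denom = nir + red
    denom[denom == 0] = 1e-6          # avoid division by zero on dark pixels
    return (nir - red) / denom        # values fall in [-1, 1]

# Healthy canopies give NDVI close to +1, bare soil near 0, and wilted
# foliage drifts toward lower values (cf. Figure 13).
```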
Figure 13. Boxplot showing the correlation between NDVI values and Fusarium wilt disease on 12 January 2018 and 28 January 2018. NDVI decreases with disease severity: values are highest for healthy radish, lower for radish infected with wilt disease, and lowest for radishes at the late stage of the disease.
Table 1. Basic information of the proposed radish dataset (RGB + NIR) collected on Jeju, Korea (#Fields is the number of fields, #RGB Images is the number of RGB images, and #NIR Images is the number of NIR images).

Position (Latitude, Longitude)  | Date        | #Fields | Altitude (m) | #RGB Images | #NIR Images
33°30′5.44″ N, 126°51′25.39″ E  | 1 May 2018  | 2       | 3            | 627         | 406
                                |             |         | 7            | 434         | 267
                                |             |         | 15           | 207         | 187
                                | 2 May 2018  |         | 3            | 853         | 535
                                |             |         | 7            | 431         | 368
                                |             |         | 15           | 262         | 234
Total number of images          |             |         |              | 2814        | 1997
Table 2. A comprehensive description of the proposed RadRGB architecture.

Name          | Structure        | (Width, Height, Channels)
Input         |                  | 64 × 64
Convolution_1 | 7 × 7            | (58, 58, 32)
Convolution_2 | 5 × 5            | (54, 54, 64)
Maxpool_1     | 2 × 2            | (27, 27, 64)
Dropout_1     | Probability: 0.2 |
Convolution_3 | 3 × 3            | (25, 25, 128)
Maxpool_2     | 2 × 2            | (12, 12, 128)
Dropout_2     | Probability: 0.2 |
Convolution_4 | 3 × 3            | (10, 10, 256)
Maxpool_3     | 2 × 2            | (5, 5, 256)
Dropout_3     | Probability: 0.2 |
Convolution_5 | 3 × 3            | (3, 3, 512)
BatchNorm     |                  | (3, 3, 512)
Dropout_4     | Probability: 0.5 |
Flatten       |                  | (4608)
Dense         |                  | (3)
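To make the layer dimensions in Table 2 concrete, the tf.keras sketch below reproduces the listed feature-map sizes with valid padding. The ReLU activations and the softmax output are assumptions for illustration, since the table does not state them explicitly.

```python
# Sketch of the RadRGB layer stack from Table 2 (tf.keras).
from tensorflow.keras import layers, models

model = models.Sequential([
    layers.Input(shape=(64, 64, 3)),
    layers.Conv2D(32, 7, activation="relu"),   # -> (58, 58, 32)
    layers.Conv2D(64, 5, activation="relu"),   # -> (54, 54, 64)
    layers.MaxPooling2D(2),                    # -> (27, 27, 64)
    layers.Dropout(0.2),
    layers.Conv2D(128, 3, activation="relu"),  # -> (25, 25, 128)
    layers.MaxPooling2D(2),                    # -> (12, 12, 128)
    layers.Dropout(0.2),
    layers.Conv2D(256, 3, activation="relu"),  # -> (10, 10, 256)
    layers.MaxPooling2D(2),                    # -> (5, 5, 256)
    layers.Dropout(0.2),
    layers.Conv2D(512, 3, activation="relu"),  # -> (3, 3, 512)
    layers.BatchNormalization(),
    layers.Dropout(0.5),
    layers.Flatten(),                          # -> 4608
    layers.Dense(3, activation="softmax"),     # radish / soil / plastic mulch
])
model.summary()
```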
Table 3. The number of images for each class (radish, soil, and plastic mulch) of the training set and the testing set.

Class         | Training Set | Testing Set
Radish        | 510          | 90
Soil          | 493          | 87
Plastic mulch | 442          | 78
Total         | 1445         | 255
Table 4. Confusion matrix on the RGB-region of interest (ROI) testing dataset using the proposed deep learning model.

              | Radish | Soil  | Plastic Mulch
Radish        | 87     | 3     | 1
Soil          | 1      | 84    | 2
Plastic mulch | 2      | 0     | 75
Accuracy (%)  | 96.6   | 96.5  | 96.1
Precision     | 0.967  | 0.966 | 0.962
Recall        | 0.956  | 0.966 | 0.974
F-measure     | 0.961  | 0.966 | 0.968
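The per-class F-measure in Table 4 is the harmonic mean of precision and recall; the short check below reproduces the reported values from the precision and recall rows.

```python
# Quick arithmetic check: F-measure = 2PR / (P + R), matching Table 4.
def f_measure(precision: float, recall: float) -> float:
    return 2 * precision * recall / (precision + recall)

for name, p, r in [("Radish", 0.967, 0.956),
                   ("Soil", 0.966, 0.966),
                   ("Plastic mulch", 0.962, 0.974)]:
    print(f"{name}: F = {f_measure(p, r):.3f}")
# -> 0.961, 0.966, 0.968
```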
Table 5. Performance of RadRGB, VGG-16, and Inception-V3 models on the RGB-ROI dataset.

Model            | Accuracy (%)
RadRGB           | 96.4
Inception-V3 [3] | 95.7
VGG-16 [2]       | 93.1
