Article

Feasibility Study of Detection of Ochre Spot on Almonds Aimed at Very Low-Cost Cameras Onboard a Drone

by Juana M. Martínez-Heredia 1,*, Ana I. Gálvez 1, Francisco Colodro 1, José Luis Mora-Jiménez 1 and Ons E. Sassi 2

1 Department of Electronic Engineering, Universidad de Sevilla, 41092 Seville, Spain
2 College of Aviation and Technology, University College of Aviation and Technology ESAT, Carthage 2035, Tunisia
* Author to whom correspondence should be addressed.
Drones 2023, 7(3), 186; https://doi.org/10.3390/drones7030186
Submission received: 29 January 2023 / Revised: 21 February 2023 / Accepted: 2 March 2023 / Published: 8 March 2023
(This article belongs to the Special Issue UAS in Smart Agriculture)

Abstract

Drones can be very helpful in precision agriculture. Currently, most drone-based solutions for plant disease detection incorporate multispectral, hyperspectral, or thermal cameras, which are expensive. In addition, there is a trend nowadays to apply machine learning techniques to precision agriculture, and these are computationally complex and intensive. In this work, we explore the feasibility of detecting ochre spot disease in almond plantations based on conventional computer vision techniques and images from a very low-cost RGB camera placed on board a drone. Such an approach allows the detection system to be simple and inexpensive. First, we carried out a color study of ochre spot disease. Second, we developed a specific algorithm capable of processing and analyzing limited-quality images from a very low-cost camera and of estimating the percentage of healthy and unhealthy parts of the plant. Thanks to the GPS on board the drone, the system can provide the location of every sick almond tree. Third, we checked the operation of the algorithm with a variety of photographs of ochre spot disease in almonds. The study demonstrates that the efficiency of the algorithm depends to a great extent on environmental conditions, but, despite the limitations, the results obtained with the analyzed photographs show a maximum discrepancy of 10% between the estimated percentage and the ground truth percentage of the unhealthy area. This approach shows great potential for extension to other crops after carrying out the corresponding color studies and adaptations.

1. Introduction

In the last decade, commercial and civilian use of unmanned aerial vehicles (UAVs) has increased rapidly. In particular, the Federal Aviation Administration estimates that the recreational UAV sector in the United States will have around 1.48 million units by 2024, and the commercial UAV sector will have around 828,000 aircraft [1].
One of the most valuable industries for the application of drone technologies is precision agriculture (PA), after infrastructure [2]. Agricultural production has increased drastically in recent years. Studies [3] predict that agricultural consumption will increase by 69% between 2010 and 2050, and this increase will be mainly stimulated by population growth from 7 billion to 9 billion by 2050. PA is a concept of agriculture management that uses technology to monitor and optimize agricultural production processes. These technologies allow farmers to maximize yields while reducing the environmental impact by controlling every variable of crop farming, such as moisture levels, pest stress, soil conditions, and microclimates.
In this paper, we address the detection and quantification of plant diseases caused by pathogens or insects that affect agricultural production, in particular those that manifest in crops through visually detectable symptoms and that are abundantly present in Spanish fields. In addition, an inexpensive and small tool is needed to assist farmers in their work. Traditional techniques to detect diseases and pests in many crops rely on human scouting, which is time-consuming, expensive, and sometimes impractical or prone to human error. Modern PA techniques are based on the use of remote sensing platforms that can obtain images, such as satellites [4,5,6,7], manned aircraft [8,9,10,11], UAVs [12,13,14,15], or combinations of them [16,17,18,19,20]. In this work, the use of satellite imagery has been ruled out due to the high costs and low availability of high-resolution imagery. In addition, satellites may suffer from cloud cover and constraints due to fixed-time acquisitions. The use of aircraft imagery has also been ruled out because, although aircraft surveys can be planned more flexibly, they pose a difficult and costly organizational effort. We have chosen a drone-based solution because UAVs present some main advantages over satellites and general airplanes. First, collecting images from UAVs is easier and much less expensive. Second, UAVs can fly at a lower altitude and collect high-resolution imagery. Third, they can fly and capture images on short notice or during a small window of opportunity, which can be crucial in some PA applications.
Several reviews [12,13,14,15,21,22,23,24] provide more detailed information on the use of UAVs in agriculture and the different types of sensors that can be carried on board a UAV for agricultural applications. There are a variety of remote sensors, but this paper is focused on the most common imaging sensors: multispectral, hyperspectral, thermal, and RGB (visible spectrum) cameras. Other sensors such as spectrometers [25,26], stereo cameras [27,28], light detection and range (LiDAR) [29,30], and sonar [31,32] move beyond the scope of this article and will not be discussed.
Multispectral and hyperspectral cameras have been widely used in PA [33,34,35,36,37,38,39]. Multispectral cameras can be built from RGB cameras in which the near-infrared filter has been removed and replaced with a red filter, or from a set of sensors with different lenses, each sensitive in one spectral region. The main difference between these two types of cameras is the number of bands in the electromagnetic spectrum that they capture and the narrowness of the bands. Multispectral imagery generally refers to between 3 and 10 bands (e.g., visible bands, red-edge bands, and near-infrared bands), whereas hyperspectral imagery can have hundreds of narrower bands (5–10 nm each).
Thermal cameras use information about the radiation emitted in the thermal infrared range (TIR) of the electromagnetic spectrum (0.7–100 µm) and convert it into temperature [40,41]. Generally, elements such as vegetation, soil, water, and people emit TIR radiation in the long-wave region (3–14 µm). In recent years, all of these types of cameras have gained popularity due to improvements in sensor technology and a reduction in costs. Nevertheless, RGB cameras are the least expensive of all the cameras. The comparison of price ranges according to the technical characteristics of different types of sensors presented in [24] establishes these ranges: $900–$35,000 for digital cameras, $5000–$16,000 for multispectral cameras, $70,000–$150,000 for hyperspectral cameras, and $10,000–$15,000 for thermal cameras. RGB cameras capture only visible light (red, green, and blue), but they have also been used in PA [42,43,44].
In order to choose a type of camera for a particular PA application, it must be taken into account that all cameras have advantages and drawbacks. RGB sensors are affordable and easy to handle, and they can have a high spatial resolution (10–100 MP, depending on their price) alongside a poor spectral resolution. Spectral cameras provide very useful spectral information per pixel, but the required image processing is more complex and involves calibration and data correction to consider aspects such as atmospheric weather and noise. Hyperspectral cameras are the ones that provide the most spectral information, but currently, they are also the most voluminous of all. Thermal cameras typically have a low spatial resolution (0.33 MP), with only one band sensitively measured in the long-wave infrared region (7–12 µm). Some thermal cameras require a cooling system, which makes them larger and more expensive. Depending on the particular application of PA, one or another type of camera may be more suitable [45]. In yield prediction, the optimum option would be multispectral cameras. In insect or pathogen detection in the early stages, hyperspectral and thermal cameras appear to be more suitable, although in the detection of disease severity, RGB, multispectral, and hyperspectral cameras are all suitable.
In this work, we addressed the ochre spot disease in almonds. According to the Food and Agriculture Organization (FAO) of the United Nations, the total area dedicated to almond cultivation in 2019 was 2,139,115 ha, with Spain being the country with the largest cultivation area (687,230 ha), followed by the United States with 477,530 ha. In Spain, the region or autonomous community with the highest almond production is Andalusia. This expansion has encouraged an improvement in production techniques and varieties, as well as the study and research of the main diseases that affect this type of cultivation, which harm the productivity and profitability of the plantation and the quality of the product. The ochre spot is one of the main diseases of the almond tree and is widespread in the Mediterranean basin. The symptoms of this disease are spots on almond leaves that can be recognized by visual tracking. These spots have different sizes and shades depending on the stage of the disease. For our work, cost has been the main reason for choosing an RGB sensor to be carried on board our drone. In addition, we have chosen a camera with a very low price, under $100, which directly limits its resolution. Therefore, with a camera of such low cost and resolution, carrying out disease detection in plants is a challenge: research efforts must be made in algorithms that are capable of processing and analyzing these limited-quality data in an effective way. In the case of ochre spot disease in almond plantations, the structure of the almond tree poses an additional challenge: its leafiness can be scarce, and new divergent branches are born radially around the trunk. Hence, it is difficult for an algorithm to delimit the contours and to distinguish the real pixels of the tree from those of the environment between the branches and leaves. Furthermore, in our case, almond plantations in Andalusia (South of Spain), the challenge is even greater because the almond trees are usually planted on very dry vacant land, and it is difficult to distinguish the ochre tones of almond disease from the soil around the tree.
In addition, although there is a trend nowadays to apply machine learning (ML) or deep learning (DL) techniques to PA [46,47,48,49,50], we have chosen to apply conventional computer vision (CV) techniques in our application. CV techniques require manual feature extraction from images by an expert and have lower accuracy rates than DL techniques. Nevertheless, they have three advantages over DL that make them very suitable for obtaining an easy and inexpensive approach to detecting plant diseases on farms. First, they do not need heavy computational resources (graphics processing units and additional machines, which increase costs to the users). Second, they do not need huge, labeled datasets to guarantee the results and overcome the current lack of a suitable database to train a neural network. Third, they usually generate a small model of just a few megabytes that can be shipped inside a microprocessor (DL models can occupy from a few hundred megabytes to one or two gigabytes).
To address these challenges and investigate the feasibility of this approach, the objectives of this study are: (1) to perform a study on the color of ochre spot disease; (2) to develop a specific algorithm that is capable of processing and analyzing limited quality images from a very low-cost camera, detecting the disease and quantifying the percentage of healthy and unhealthy areas; (3) to test the operation of the algorithm with a variety of photographs of ochre spot disease in almonds.

2. Materials and Method

2.1. System Overview and Materials

Figure 1 shows the proposed scheme for our UAV-based detection system.
A technician is required to perform the flight and assist the farmer; this technician is the remote pilot. In addition, the technician retrieves the information and images previously saved on a memory card (SD card) on board and stores them in a folder on a laptop. The detection algorithm is run on-premise, and the technician adjusts some initial parameters depending on the type of crop and the characteristics of the farm under study before using it to detect possible diseases. After applying the detection algorithm to the photographs, the technician can indicate to the farmer whether there is ochre spot in the almond crop, the quantification of the damage, and the location of the affected plants.
Regarding the selection of the RGB camera to take photographs of plants, the requirements were a low price and a size and weight as small as possible. Cameras used in other articles about agriculture exceed $1000. For this work, we chose the Victure Action Camera AC700. Its low price (less than $55), weight (61 g), small size (8 cm × 6 cm × 2.5 cm), and 16 MP resolution made it appropriate for our application. Additionally, the camera had a special operating mode (electronic image stabilization) to reduce vibration problems caused by drone flight. We undertook a variety of tests with this camera, both in a laboratory with single leaves and in a field of almond trees, at different distances from the object of interest. We concluded that the drone should take photographs at a distance of around 1 m or 1.5 m from the tree to distinguish the symptoms of the disease. In addition, in general, the value of the ISO parameter (or sensitivity to light) of the camera should be set as low as possible (ISO 100, for example).
We initially performed laboratory tests with cameras to check these characteristics. We also took photographs from five different fields. They are located in Andalusia (South of Spain), mainly in Granada and Seville, and present common characteristics that were mentioned in Section 1 (almond trees are usually planted on very dry and vacant land, and the soil around the tree presents similar shades to the ocher tones of the almond disease).

2.2. Study of Color Carried out for Ochre Spot

As we presented, the symptoms of ochre spot disease can be recognized by the visual tracking of spots on almond leaves. We used color-recognition techniques to detect them. In a preliminary study, we decided to process the images directly from the RGB image file. This method is simple and robust, but it has certain drawbacks, such as color inconstancy; that is, it is affected by the lighting at the time of shooting, by shadows, and, as seen in the laboratory tests we carried out, by the capabilities of the camera. Furthermore, although image capture and display devices generate a color by mixing primary colors (red, green, blue), the amount applied to each one is not intuitive for a human being at first glance due to the great amalgam of tonalities, and consequently, the variation in the RGB values according to the disease does not follow a logical pattern. For this reason, we finally decided to use the HSL (hue, saturation, and lightness) color model instead of the RGB model to define the color range associated with the different symptoms. The HSL model is a representation of the color space that is more similar to the human perception of color. It is a cylindrical representation of the RGB color space (Figure 2), based on how humans organize colors in terms of hue, saturation, and lightness, as well as on classic pigment mixing methods. The hue H is identified from the dominant wavelength and is the angle around the cylindrical axis, so it takes a value within the interval [0, 360]. Saturation S shows the purity of the color: it is the proportion between pure light, associated with the dominant wavelength, and white light, and it is generally expressed as a percentage. A pure color has a saturation of 100% and is visually recognized as an intense and vivid color. Lightness L is the amount of light present in a certain color, so 100% corresponds to white, while, on the contrary, 0% represents black. It can also be expressed in the range [0, 1], as in this paper.
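For reference, the standard conversion from an RGB triple to HSL coordinates can be sketched as follows (a minimal, illustrative MATLAB sketch; the function name is ours, and MATLAB's built-in rgb2hsv returns the HSV model, whose H component coincides with the HSL hue):

    % Minimal sketch: convert one RGB triple (each value in [0, 1]) to HSL.
    function [H, S, L] = rgb2hsl_sketch(r, g, b)
        M = max([r g b]);  m = min([r g b]);  C = M - m;   % chroma
        L = (M + m) / 2;                                   % lightness
        if C == 0
            H = 0;  S = 0;                                 % achromatic pixel (gray)
            return;
        end
        S = C / (1 - abs(2*L - 1));                        % saturation
        if M == r
            H = 60 * mod((g - b) / C, 6);                  % red is the dominant channel
        elseif M == g
            H = 60 * ((b - r) / C + 2);                    % green is the dominant channel
        else
            H = 60 * ((r - g) / C + 4);                    % blue is the dominant channel
        end
    end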
The HSL model allows one to effectively narrow the range of tones in which, regardless of the lightness and saturation of the image, the point of interest can be considered to have a color traditionally associated by humans with the disease. For our study, we used one of the many online tools available. As shown in Figure 3, the value of H increases as one advances down the color band located on the right, which linearly represents the angular variation in H in Figure 2b. The upper end of the color band is associated with 0° and the lower one with 360°. In this way, we can narrow the range of shades that are identified with traditional colors. For example, in Figure 3, the value H = 23 is commonly related to the color ‘orange-brown’. Although the RGB values vary as the color matrix on the left is traversed, corresponding to an S-L plane of the cylindrical HSL model (changing saturation and lightness), the value of H remains constant and can never be confused with other clearly differentiated tones, such as “green”.
Then, our purpose was to study and narrow down the approximate range of tones in the region of interest: the color associated with the diseased part of the leaf and the green color associated with its healthy part. To do this, the aforementioned converter was used, choosing random pixels from the images of leaves associated with almonds and the ochre spot. The ochre spot, caused by the fungus Polystigma ochraceum, is one of the diseases of almond cultivation, especially affecting Mediterranean countries. Representative symptoms are yellow-red spots on the leaves [51]. As shown in Figure 4, a visual assessment of the amount of affected leaf area can be made, from level 0, corresponding to an ‘absence of symptoms’, up to 4, with a ‘high presence of ochre spots caused by the pathogen’ [51]. The ochre spot can also appear as reddish spots caused by the fungus Polystigma amygdalinum, which results in damage similar to that mentioned previously [52].
We conducted a preliminary study in the laboratory on individual leaves with ochre spots collected from almond trees in the field. We took a variety of photos of each leaf with the Victure Action Camera AC700 in 16 MP resolution mode at different distances and with different ISO parameters. As we stated before, in order to distinguish the ochre spots in a photo, the drone should take the photographs at a distance of around 1 m or 1.5 m from the tree with a low value of the ISO parameter. Afterward, we used the selected photos to perform the color study, manually analyzing the photos and collecting multiple samples from healthy and unhealthy areas. Table 1 collects the RGB data and the H values after conversion. We then concluded that the value of H associated with the green color of the healthy leaf could be approximately between 69 and 105. As far as the ochre spot is concerned, it would be located with an H in the range from 12 to 64, which is difficult to limit at the upper end as it is a derivation of the yellow tone, which is very close to the green one of the leaves themselves.
Once the algorithm of this work, presented in the following subsection, was developed, we used it to redefine the ranges of H in an even finer way, using the photographs of groups of leaves on trees. Finally, this H range was decided: from 6 to 62, for the ochre spot, and from 64 to 128, for the healthy leaf (green).
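For illustration, with these final ranges a single pixel hue H (expressed in degrees) can be classified as in the following sketch (the variable names are ours):

    % Classification of one hue value with the final ranges adopted in this work.
    if H >= 6 && H <= 62
        label = 'ochre spot';        % unhealthy part of the leaf
    elseif H >= 64 && H <= 128
        label = 'healthy leaf';      % green, healthy part
    else
        label = 'not recognized';    % neither range (background, shadows, etc.)
    end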

2.3. Algorithm

The algorithm takes the images of the leaves and processes them, providing a quantitative result of the damage. MATLAB® was used to implement the algorithm. Figure 5 shows the flow diagram with the main steps of the algorithm for each image. After reading an image, the contours of interest (that is, the contours of the leaves) are identified, and uninteresting objects are removed from the image. The leaves are then rebuilt with their original RGB color; that is, the area within the contours is recovered. The following step is to study the color of the leaves and identify the symptoms of the disease under study. Finally, the system provides a percentage value for the level of disease progression. This value allows optimal decision-making based on disease progression, following the recommendations and thresholds that are set by what is called integrated production (IP). IP is a dynamic approach to agriculture in which farmers must adjust agricultural practices and the use of alternatives over time, taking into account new knowledge and new methods. Additionally, the system can provide the location (GPS coordinates) where each photograph was taken and warn if the established disease thresholds have been exceeded. The system also saves the original images with the located and highlighted symptoms so that a person can display and check them later.
Those main steps will be explained in more detail in the following, applying them to a particular image to help the reader understand them.

2.3.1. Shape Recognition

This step consists of identifying the contours of the objects of interest in a previously read image and discarding the rest of the objects. We have built the secondary flow chart in Figure 6 to illustrate this process.
We used the image in Figure 7a, which was taken in the laboratory, as an example to illustrate the operation of the algorithm. It is a single leaf on a plain, light background, that is, an idealized version of what a photograph of almond leaves looks like in practice, but it allows the reader to understand in depth, and in an easy way, the details of how the algorithm works. The reader can visually check the result of each of the main steps of the algorithm on that photograph in Figure 7, Figure 8 and Figure 9. After this section, in Section 2.4, we will show the results of applying the algorithm to more realistic photos.
First, the image intensity values were adjusted. After numerous tests with the photographs that we had selected in the laboratory, among those taken of individual leaves with an ochre stain, we decided to linearly map intensity values from [0.1, 0.5] in the original image to [0.2, 1] in the resulting image in order to increase the contrast of the RGB image. Thus, if a pixel in the input image had an intensity value of less than 0.1, it was saturated to a value of 0.2 in the output image, and if a pixel in the input image had an intensity value greater than 0.5, it was saturated to 1 in the output image. The intensified image is shown in Figure 7b.
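This contrast stretch can be sketched with MATLAB's imadjust (the input file name is illustrative; the same limits are applied to the three RGB channels):

    % Contrast stretch described above: map intensities [0.1, 0.5] to [0.2, 1].
    rgbIn  = imread('leaf.jpg');    % illustrative file name
    rgbAdj = imadjust(rgbIn, [0.1 0.1 0.1; 0.5 0.5 0.5], [0.2 0.2 0.2; 1 1 1]);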
The next step was to convert the RGB image to grayscale by eliminating the hue and saturation information while retaining luminance. The grayscale image is shown in Figure 7c. We then applied the edge detection image processing technique to identify points in the image that experience sharp changes in brightness. These points, where the brightness of the image varies considerably, are called the edges (or boundaries) of the image. The ‘edge’ function of MATLAB® returns a binary mask containing 1 where edges are found and 0 elsewhere. In particular, the Sobel edge detector was used. The resulting binary gradient mask is shown in Figure 7d.
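A minimal sketch of these two operations, reusing the intensified image from the previous sketch, is:

    % Grayscale conversion and Sobel edge detection; 'edge' returns a binary
    % mask that is 1 at detected boundaries and 0 elsewhere.
    grayImg = rgb2gray(rgbAdj);
    bwEdges = edge(grayImg, 'sobel');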
The following step was to dilate the binary mask and make the gaps in the lines surrounding the object of interest disappear (‘dilate’ function). For this process, two perpendicular linear structuring elements (‘strel’ function) were used. The dilated image is shown in Figure 7e.
Then, we filled the holes in the interior of the objects outlined (function ‘imfill’). The filled image is shown in Figure 7f.
After that, we removed all the small objects from the binary image, i.e., those that presented fewer than 10,000 pixels. The resulting final open binary image is shown in Figure 7g.
Before progressing to the next step of the main algorithm, as an illustration of the result achieved so far, Figure 7h shows the result of combining the final mask of Figure 7g with the original image (function ‘labeloverlay’): the blue pixels are part of the recognized object, and it can be seen that leaf shape recognition was achieved.
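The remaining steps of the shape recognition (dilation with two perpendicular line elements, hole filling, removal of small objects, and overlay of the final mask) can be sketched as follows; in current MATLAB releases, dilation is performed with imdilate, and the lengths of the structuring elements are an assumption of this sketch:

    % Morphological processing of the edge mask and overlay of the final mask.
    se90   = strel('line', 3, 90);           % vertical line element (length assumed)
    se0    = strel('line', 3, 0);            % horizontal line element (length assumed)
    bwDil  = imdilate(bwEdges, [se90 se0]);  % close gaps in the object outline
    bwFill = imfill(bwDil, 'holes');         % fill the interior of the outlined objects
    bwFin  = bwareaopen(bwFill, 10000);      % remove objects with fewer than 10,000 pixels
    overlay = labeloverlay(rgbAdj, bwFin);   % tint the pixels of the recognized object
    imshow(overlay);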

2.3.2. Object RGB Reconstruction

This step consists of isolating the object of interest from the original image with its RGB colors, discarding the uninteresting parts of the image based on the binary mask of the object obtained in the previous step. For this, we created a three-dimensional matrix of zeros in which the element (i, j) in the three planes was filled with the original RGB value of the pixel (i, j), but only where the corresponding element (i, j) in the binary mask is true (value 1). To illustrate this step, we took the image shown in Figure 7b as the original image (that is, the intensified image); the result of the RGB reconstruction is shown in Figure 8.
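A minimal sketch of this reconstruction, reusing the mask and the intensified image from the previous sketches, is:

    % Keep the original RGB values only where the binary mask is true.
    maskRGB = repmat(bwFin, [1 1 3]);               % replicate the mask over the three planes
    leafRGB = zeros(size(rgbAdj), 'like', rgbAdj);  % all-zero image of the same class
    leafRGB(maskRGB) = rgbAdj(maskRGB);             % copy only the masked pixels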

2.3.3. Disease Recognition

In this step, the recognition of colors associated with the disease under study was carried out. For this, as mentioned in Section 2.2, the H ranges taken into account in our algorithm were from 6 to 62 for the ochre spot and from 64 to 128 for the healthy leaf. From the original intensified image, a matrix with the three RGB planes was generated. With that matrix and the corresponding one for the image with the RGB object of interest, we generated another matrix, called H_matrix, with two dimensions, where the element (i, j) was equal to zero if the element (i, j) in the three RGB planes of the image with the RGB object of interest was also zero. For the rest of the elements, we converted RGB to HSL [53], and the element (i, j) of the H_matrix took the hue value (H) of element (i, j) in the intensified image. Therefore, going through this matrix pixel by pixel and comparing each hue value with the H ranges that were established for the diseased leaf and the healthy leaf, we can count the pixels corresponding to the healthy part and the diseased part, as well as the pixels not recognized as being of one type or the other. In addition, we generated an auxiliary matrix called Ochre_matrix with ones at the positions (i, j) of the pixels of the H_matrix corresponding to the unhealthy part and zeros elsewhere. Analogously, a second auxiliary matrix, called Healthy_matrix, was generated, with ones at the positions (i, j) of the pixels of the H_matrix corresponding to the healthy part and zeros elsewhere. This allowed us to locate and save the parts of interest in the original image in order to perform the necessary checks later. In addition, these matrices allowed us to show those parts as a blue mask over the original image.
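A minimal sketch of this step, reusing the variables of the previous sketches, is given below. MATLAB's rgb2hsv is used here because its H channel coincides with the HSL hue; how pixels outside both ranges enter the ‘not recognized’ count is simplified and is only an assumption of the sketch:

    % Hue matrix over the reconstructed object and pixel counts.
    hsvImg   = rgb2hsv(rgbAdj);                 % H channel in [0, 1]
    H_matrix = hsvImg(:,:,1) * 360;             % hue in degrees
    inObject = any(leafRGB ~= 0, 3);            % pixels kept by the RGB reconstruction
    H_matrix(~inObject) = 0;
    Ochre_matrix   = inObject & H_matrix >= 6  & H_matrix <= 62;
    Healthy_matrix = inObject & H_matrix >= 64 & H_matrix <= 128;
    Nu  = nnz(Ochre_matrix);                    % unhealthy pixels
    Nh  = nnz(Healthy_matrix);                  % healthy pixels
    Nnr = nnz(inObject) - Nu - Nh;              % object pixels in neither hue range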
To illustrate this step, we took the image shown in Figure 7b as an original image (that is, the intensified image) and the image in Figure 8 as an image with the RGB object of interest. The graphic result of the recognition of the diseased part (with an ochre spot) is shown in Figure 9a. The graphic result of the recognition of the healthy part is shown in Figure 9b.

2.3.4. Damage Quantification

This is the last step of the algorithm. After saving Nu (the number of pixels classified as the unhealthy part), Nh (the number of pixels classified as the healthy part), Nr (the number of pixels classified as ‘recognized’, that is, the sum of Nu and Nh), Nnr (the number of pixels classified as ‘not recognized’), and N (the total number of pixels), we calculated the percentage of the unhealthy area (Pu), the percentage of the healthy area (Ph), and the percentage of the not recognized area (Pnr) as:
Pu = (Nu / Nr) × 100
Ph = (Nh / Nr) × 100
Pnr = (Nnr / N) × 100
These percentages were calculated for each image, but, since they are based on a pixel count, they can be easily extended to the whole collection of images collected from a crop, giving the representative percentages according to the case: either over the entire field, if it has been photographed in its entirety, or over the part of the field sampled.
With the definition of these metrics, we could say that the method worked well when the percentage of unrecognized pixels was not considerable (less than 30%), and, in that case, we could also affirm that the plant was healthy if the percentage of sick pixels was low or very low (depending on the thresholds set by integrated production in Andalusia). In order to compare the results of the algorithm with ground truth data in different contexts, we defined Pug as the ground truth percentage of the unhealthy area of the plant in the photograph. Accordingly, Phg is the ground truth percentage of the healthy area of the plant in the photograph, which is equal to 100% minus Pug. It is expected that Pu ≥ Pug because Nr ≤ N (Nnr should ideally be zero and, in that case, Nr = N and Pu = Pug). For the image of Figure 8, we obtained Pu = 26.0274% and Ph = 73.9726%. Furthermore, Pnr = 3.7674%, which corresponds at most to the external pixels of the main outline of the leaf. For this image, Pug was around 25%, while Phg was around 75%. Therefore, we can claim that for this example photograph, the algorithm was successful in detecting the ochre stain. In Section 2.4, the application of the algorithm to a variety of photographs is presented. Finally, we compared the estimated percentage of damage with the damage threshold established by the specific regulation of the integrated production of almonds in Andalusia, which establishes that if more than 5% of the leaves in the almond tree are affected by the ochre spot, treatment needs to be applied urgently to avoid further deterioration of the plantation.
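A minimal sketch of this final quantification and of the integrated-production check, reusing the counts from the previous sketch, is:

    % Percentages of unhealthy, healthy and not-recognized area, and IP check.
    N   = numel(H_matrix);      % total number of pixels in the image
    Nr  = Nu + Nh;              % recognized (classified) pixels
    Pu  = Nu  / Nr * 100;       % percentage of unhealthy area
    Ph  = Nh  / Nr * 100;       % percentage of healthy area
    Pnr = Nnr / N  * 100;       % percentage of not-recognized area
    if Pu > 5                   % 5% threshold of the Andalusian almond IP regulation
        fprintf('Warning: ochre spot above the IP threshold (Pu = %.2f%%)\n', Pu);
    end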

2.4. Operation of the Algorithm with Simple Photographs

Before using the final real images, we worked with two simpler types of photos to study the weaknesses of the algorithm and overcome them: individual leaves and a set of leaves. After that, we faced actual photographs of the trees in different contexts in order to evaluate the conditions where our algorithm and method would be valid or not.
First, we checked individual leaves in an ideal scenario (a leaf on a white background). In Figure 10a (taken in the laboratory), an example is shown. It is an image with a leaf to process, which is of poor quality and even has shadows. After applying the algorithm, Figure 10b shows the image with unhealthy parts highlighted (blue mask), and Figure 10c shows the image with healthy parts highlighted (blue mask). It can be seen that shadows are identified as non-recognized areas. Thus, they do not affect the percentages of Pu and Ph. In this case, we had Pu = 30.0667%, Ph = 69.9333%, and Pnr = 25.3894%. In Figure 10a, Pug = 22% and Phg = 78% approximately. Therefore, although the algorithm did not recognize 25% of the pixels in the photo, the percentage of the unhealthy part estimated by the algorithm can be considered a good and useful result for a farmer. We have checked other similar photographs with single leaves, and these good results are maintained.
The second type of test was with photos of a small set of leaves, still on a white background. In that case, shape recognition is more complex than before. In Figure 11a, we show an example of an image to process, which is of poor quality and also has shadows. It was taken in the laboratory. For a better comparison, it is a combination of a solitary leaf and a set of leaves. After applying the algorithm, Figure 11b shows the image with the unhealthy parts highlighted (blue mask), and Figure 11c shows the image with the healthy parts highlighted (blue mask). In this case, we have Pu = 65.5576%, Ph = 34.4224%, and Pnr = 28.6558%. In this case, Pug = 57% and Phg = 43% approximately. We can see in Figure 11b that it is more difficult for the algorithm to recognize the shape of the individual leaves of the set, with the number Nu being larger in this case (the pixels within the set of leaves that are not actually part of the leaves). However, the results of Pu agree quite well with reality. In addition, for this kind of photograph, the image sensitivity threshold is calculated and multiplied by a fudge factor of 0.8 so that all edges whose intensity values are smaller than that product are ignored, as sketched after this paragraph. This helps to form a better prediction. For example, in Figure 12a, we show an example of a very unfavorable image, with a human hand, taken in the laboratory. Figure 12b shows the image with unhealthy parts highlighted (blue mask), and Figure 12c shows the image with healthy parts highlighted (blue mask). For this example, we have Pu = 64.9266%, Ph = 35.0734%, and Pnr = 25.6595%. As expected, the human hand is not recognized. In this case, Pug = 54% approximately and, again, the results of Pu and Pug are in quite good agreement.
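The fudge-factor variant can be sketched as follows: the ‘edge’ function first estimates its own sensitivity threshold, and the detection is then rerun at 0.8 times that value:

    % Edge detection with a fudge factor of 0.8 on the automatic threshold.
    [~, autoThresh] = edge(grayImg, 'sobel');             % first pass returns the threshold
    bwEdges = edge(grayImg, 'sobel', 0.8 * autoThresh);   % ignore edges below 0.8 * threshold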

3. Results

In this section, we present some of the results obtained, both numerical and graphic, from the tests carried out with realistic photographs corresponding to entire real almond trees. These are photographs taken in the field. To this day, although we have not yet been able to fly our particular drone over our almond crop, we have taken a variety of photographs of almond trees affected by the ochre stain, emulating the camera being on board a flying drone. We have taken photos at different heights and different angles. In addition, to test the robustness of our algorithm against the camera model, we used another camera to complete the set of real photographs of the almond field under study. The other camera was an old Nikon Coolpix L830 with a 16 MP resolution. In addition, some researchers have provided us with real aerial photographs of an almond grove taken from a drone, and although they are taken from a much greater height than we need, they serve to establish the scope of the proposed method.
Then, within the type of photographs of whole trees, the next kind of photograph that we studied was similar to the one shown in the example of Figure 13a. It was taken just above the tree top, pointing the camera downward, and not only does the top of the tree appear in it, but also a large part of the ground. Here, we found what could be the first important problem in applying our drone algorithm successfully. If the terrain under study is very dry, as is usual in Spain (especially Andalusia, where it is common to dedicate the poorest land to the almond tree), the part of the ground appearing in the photos is brown, just the color of the ochre stain. This makes it very difficult to detect healthy and unhealthy parts of the tree: sometimes, part of the soil is taken as the ochre part of the tree, and, other times, the contours of the plant are not well delimited. In addition, in this kind of photograph, it is necessary to carry out a last step after shape recognition, which is to invert the binary matrix of the mask obtained for the RGB reconstruction of the object. In this way, the algorithm recognizes the object of study and not its environment. By way of illustration, we show the result of applying the algorithm to photos as described above. In Figure 13a, the plant is surrounded by dry brown soil with some green weeds. After applying the algorithm, Figure 13b shows the image with unhealthy parts highlighted (blue mask), and Figure 13c shows the image with healthy parts highlighted (blue mask). In this case, we have Pu = 9.7368%, Ph = 90.2632%, and Pnr = 58.0339%. It can be seen that there are many pixels of the reconstructed RGB object that are actually ground pixels but end up being treated as unrecognized and, thus, do not affect the good estimate of the percentages Pu and Ph. Additionally, there are some soil pixels (in the upper left corner of the photo and in some gaps among the leaves of the plant) that are considered part of the unhealthy area; therefore, they contribute a bit to the overestimation of Pu. In this case, Pug = 5.5% approximately, so the result of Pu would be useful for a farmer. Another illustrative example is based on the image in Figure 14a. The background of this photo is cream-colored. In this case, we have Pu = 35.2909%, Ph = 64.7091%, and Pnr = 69.1765%. The number of pixels not recognized may seem too high, but it mostly corresponds to the gaps among the leaves that are part of the soil, and it hardly affects the estimations of the unhealthy and healthy areas. In this case, Pug = 29% and Phg = 71% approximately. Thus, the result of Pu agrees quite well with reality.
Thus, for our algorithm, it is necessary to ensure that the land where it is going to be applied is not too brown and dry and that it is as free of weeds as possible. In this way, the images can have a light, not brown, background. An example of valid terrain for our method would be the one shown in Figure 15. Figure 15a shows two rows of almond trees, whereas Figure 15b shows one of those trees. It can be verified that the soil is dry but of a very light color. Unfortunately, although we obtained quite a few photos from that field, taken from a drone and courtesy of other researchers, the drone was not flying at the right height for the images to be processed by our algorithm. We have estimated that useful photographs must be taken at a distance of around one meter from the tree. We have also tested the operation of the algorithm with photos of entire trees and a light background, taking them from the side of every tree and pointing a little towards the sky in clear weather so that the background of the photograph corresponds to the blue sky. An illustrative example of this kind of image is shown in Figure 16a. With this kind of photo, we have found other difficulties and have proposed improvements to the algorithm and certain strategies to obtain better results, summarized in the sketch after this paragraph. First, it is not necessary to intensify this kind of image at the beginning of the algorithm. Second, with such a mixture of green leaves and brown parts in the image, it is necessary to limit the H range from [6, 62] to [6, 40] so that we can better distinguish the unhealthy part from the healthy part. In addition, as previously stated, the structure of the almond tree may be inconvenient for the precise quantification of the disease. This is because they are trees with sympodial branching; that is, new divergent branches are born radially around the trunk. This aspect and the scarce leafiness that this plant can present hinder the delimitation of contours and allow for the incorrect recognition of the environment between branches and leaves (sky or ground). In any case, in this type of photograph, the algorithm does not count as unrecognized those pixels within the reconstructed object that are actually part of the image background; the estimated Nnr, in this case, would be lower than it should be. However, those pixels within the reconstructed object, due to their color, would not be taken into account to estimate Pu or Ph, so the estimation of Pu and Ph would still be in good agreement with reality.
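A minimal sketch of these adjustments, reusing the variable names from the sketches in Section 2.3, is:

    % Adjustments for whole-tree photographs (assumptions of this sketch noted inline).
    rgbAdj = rgbIn;               % sky-background photos: skip the contrast stretch
    bwFin  = ~bwFin;              % downward photos over brown ground: invert the mask
                                  % so that the tree, not its surroundings, is reconstructed
    ochreRange = [6 40];          % narrower ochre range for mixed green/brown scenes
    Ochre_matrix = inObject & H_matrix >= ochreRange(1) & H_matrix <= ochreRange(2);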
Then, after applying the algorithm to the image of Figure 16a, Figure 16b shows the image with unhealthy areas highlighted (blue mask), and Figure 16c shows the image with healthy areas highlighted (blue mask). In this case, we have Pu = 15.2212%, Ph = 84.7788%, and Pnr = 41.2325%, with Pug = 16% and Phg = 84% approximately. So, the result of Pu agrees quite well with reality.
Another illustrative example is based on the image in Figure 17a. After applying the algorithm, Figure 17b shows the image with unhealthy parts, and Figure 17c shows the image with healthy parts. In this case, we have Pu = 15.0005%, Ph = 84.9995%, and Pnr = 53.4982%, with Pug = 10% approximately. So, again, the result of Pu agrees quite well with reality.
Furthermore, we also checked that the variation in the capture angle of the photograph did not have a significant effect on the recognition of the object of study: our algorithm would analyze the image correctly as long as the background of the photograph was light, not brown.

4. Discussion

To date, we have found few articles in scientific databases addressing this particular case of the utilization of drones for the detection of diseases in crops using RGB cameras and traditional computer vision techniques simultaneously. Some of them are based on aerial images taken at a height of hundreds of meters, and they are very different from those that we process, which are taken only a few meters from the ground. Therefore, those images include wide areas with many trees or plants, and solutions are proposed for the detection of dead, infected, or withered pines [54,55,56] or the detection of plant failures in rows of planted crops [57,58]. Other articles [43,59,60,61] are based on aerial images taken at a low height from the ground and are more comparable to ours. In [43], the authors proposed a methodology to discriminate healthy plants from those infected with leaf rust and stripe rust in winter wheat. The approach presented in [59] is geared towards the detection of potato pathogens. In [60], the authors used a UAV flying at 46 m and equipped with a multispectral camera to calculate the vegetation index of turfgrass fields, but they also used another UAV flying at 30 m and equipped with an RGB camera to estimate the percentage of green ground cover. However, the RGB cameras in [43,59,60] can still be considered expensive (more than $500). The authors in [61] show that a low-cost UAV flying at 30 m with an RGB camera can be used to monitor the vegetation cover of the coffee crop. The authors in [62] present a method that is specifically oriented to the diagnosis of rust disease in spruce forests and applicable to individual branches and entire juvenile and adult trees, based on RGB photographs taken by a semi-professional UAV flying at a distance from the ground of between 7 m and 15 m. Despite these advances, to our knowledge, the application to almond plantations has not yet become widespread, and even less so the use of very low-cost tools.

5. Conclusions

This work has demonstrated that, under certain environmental conditions, the detection of ochre spot disease in almond plantations based on conventional computer vision techniques and images from a very low-cost RGB camera on board a drone is feasible. This detection system would be simple and inexpensive compared to other proposals based on machine learning techniques and/or more expensive cameras (such as multispectral, hyperspectral, or thermal cameras). A study of the color of ochre spot disease in almonds has been presented. It is based on an HSL (hue, saturation, and lightness) color model instead of an RGB model in order to avoid color inconstancy. An algorithm that is able to process limited-quality images from a very low-cost camera has been developed. It is able to estimate the percentage of the healthy and unhealthy parts of the plant. A variety of tests with different types of photographs have been made to evaluate the conditions under which the algorithm is valid. We have concluded that it can be effective if the aerial photographs taken by the drone have a light, not brown, background and are taken around one meter away from the tree. In the event that the terrain is very unfavorable (brown and dry soil), the algorithm may still work well if the photographs are taken from the side of the tree, pointing a little toward the sky: the method would be feasible if the distance between the outer branches of adjacent trees were large enough to allow the drone to fly (two meters); if the rows of trees are too close, then it would be more practical for a ground vehicle to take those photos. For images such as those required, the results show a maximum discrepancy of 10% between the estimated percentage and the ground truth percentage of the unhealthy area.
This approach shows the potential to be extended to other crop diseases by carrying out color studies, algorithm parameter adjustments, and tests in different scenarios. It could even provide better results with other diseases than those presented here for the ochre spot disease in almond trees. In the future, we intend to extend the number and coverage of the experiments to demonstrate the performance of the system under other circumstances and even under bad weather conditions. By carrying out several flight tests, we could verify aspects such as the proposed flight mode of operation, the correct functioning of the different subsystems on board, the flying range and its adaptation to the farm to be mapped, or the influence of vibrations on the acquisition of the images. We would also like to study the possibility of performing photogrammetric mapping to increase the accuracy of disease localization using commercial software and to study the cost of the entire system after incorporating that tool. We also aspire to adapt our prototype UAV to carry a ‘sense and avoid’ system and fly it with more safety.

Author Contributions

Conceptualization, J.M.M.-H., A.I.G., F.C., J.L.M.-J. and O.E.S.; methodology, J.M.M.-H., A.I.G., F.C., J.L.M.-J. and O.E.S.; software, J.M.M.-H., A.I.G., F.C., J.L.M.-J. and O.E.S.; validation, J.M.M.-H., A.I.G., F.C., J.L.M.-J. and O.E.S.; writing—original draft preparation, J.M.M.-H.; writing—review and editing, J.M.M.-H., A.I.G., F.C., J.L.M.-J. and O.E.S. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

Not applicable.

Acknowledgments

The authors thank Baltasar Gálvez-Ruiz, Belén Cárceles-Rodríguez, and Iván F. García-Tejero who voluntarily engaged in this work. Gálvez-Ruiz allowed us to take samples at his farm. Cárceles-Rodríguez provided us with technical information on crops and the Spanish regulation in this regard. García-Tejero provided us with photographs taken from a drone.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Federal Aviation Administration. FAA Aerospace Forecast Fiscal Years 2020–2040. 2020. Available online: https://www.faa.gov/data_research/aviation/aerospace_forecasts/media/FY2020-40_faa_aerospace_forecast.pdf (accessed on 9 September 2022).
  2. PwC. Clarity from above: PwC Global Report on the Commercial Applications of Drone Technology. 2016. Available online: https://www.pwc.pl/pl/pdf/clarity-from-above-pwc.pdf (accessed on 9 September 2022).
  3. Pardey, P.G.; Beddow, J.M.; Hurley, T.M.; Beatty, T.K.; Eidman, V.R. A Bounds Analysis of World Food Futures: Global Agriculture Through to 2050. Aust. J. Agric. Resour. Econ. 2014, 58, 571–589. [Google Scholar] [CrossRef] [Green Version]
  4. Wang, K.; Huggins, D.R.; Tao, H. Rapid mapping of winter wheat yield, protein, and nitrogen uptake using remote and proximal sensing. Int. J. Appl. Earth Obs. Geoinf. 2019, 82, 101921. [Google Scholar] [CrossRef]
  5. Vizzari, M.; Santaga, F.; Benincasa, P. Sentinel 2-Based Nitrogen VRT Fertilization in Wheat: Comparison between Traditional and Simple Precision Practices. Agronomy 2019, 9, 278. [Google Scholar] [CrossRef] [Green Version]
  6. Saifuzzaman, M.; Adamchuk, V.; Buelvas, R.; Biswas, A.; Prasher, S.; Rabe, N.; Aspinall, D.; Ji, W. Clustering Tools for Integration of Satellite Remote Sensing Imagery and Proximal Soil Sensing Data. Remote Sens. 2019, 11, 1036. [Google Scholar] [CrossRef] [Green Version]
  7. Zhao, B.; Liu, M.; Wu, J.; Liu, X.; Liu, M.; Wu, L. Parallel Computing for Obtaining Regional Scale Rice Growth Conditions Based on WOFOST and Satellite Images. IEEE Access 2020, 8, 223675–223685. [Google Scholar] [CrossRef]
  8. Hu, X.; Chen, W.; Xu, W. Adaptive Mean Shift-Based Identification of Individual Trees Using Airborne LiDAR Data. Remote Sens. 2017, 9, 148. [Google Scholar] [CrossRef] [Green Version]
  9. White, J.C.; Tompalski, P.; Coops, N.C.; Wulder, M.A. Comparison of airborne laser scanning and digital stereo imagery for characterizing forest canopy gaps in coastal temperate rainforests. Remote Sens. Environ. 2018, 208, 1–14. [Google Scholar] [CrossRef]
  10. Olanrewaju, S.; Rajan, N.; Ibrahim, A.M.; Rudd, J.C.; Liu, S.; Sui, R.; Jessup, K.E.; Xue, Q. Using aerial imagery and digital photography to monitor growth and yield in winter wheat. Int. J. Remote Sens. 2019, 40, 6905–6929. [Google Scholar] [CrossRef]
  11. Zhao, J.; Zhong, Y.; Hu, X.; Wei, L.; Zhang, L. A robust spectral-spatial approach to identifying heterogeneous crops using remote sensing imagery with high spectral and spatial resolutions. Remote Sens. Environ. 2020, 239, 111605. [Google Scholar] [CrossRef]
  12. Hassler, S.C.; Baysal-Gurel, F. Unmanned Aircraft System (UAS) Technology and Applications in Agriculture. Agronomy 2019, 9, 618. [Google Scholar] [CrossRef] [Green Version]
  13. Mukherjee, A.; Misra, S.; Raghuwanshi, N.S. A survey of unmanned aerial sensing solutions in precision agriculture. J. Netw. Comput. Appl. 2019, 148, 102461. [Google Scholar] [CrossRef]
  14. Del Cerro, J.; Ulloa, C.C.; Barrientos, A.; Rivas, J.D.L. Unmanned Aerial Vehicles in Agriculture: A Survey. Agronomy 2021, 11, 203. [Google Scholar] [CrossRef]
  15. Olson, D.; Anderson, J. Review on unmanned aerial vehicles, remote sensors, imagery processing, and their applications in agriculture. Agron. J. 2021, 113, 971–992. [Google Scholar] [CrossRef]
  16. Yang, C.; Everitt, J.H.; Du, Q.; Luo, B.; Chanussot, J. Using High-Resolution Airborne and Satellite Imagery to Assess Crop Growth and Yield Variability for Precision Agriculture. Proc. IEEE 2013, 101, 582–592. [Google Scholar] [CrossRef]
  17. Gevaert, C.M.; Suomalainen, J.; Tang, J.; Kooistra, L. Generation of Spectral—Temporal Response Surfaces by Combining Multispectral Satellite and Hyperspectral UAV Imagery for Precision Agriculture Applications. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2015, 8, 3140–3146. [Google Scholar] [CrossRef]
  18. Murugan, D.; Garg, A.; Singh, D. Development of an Adaptive Approach for Precision Agriculture Monitoring with Drone and Satellite Data. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2017, 10, 5322–5328. [Google Scholar] [CrossRef]
  19. Cai, Y.; Guan, K.; Nafziger, E.; Chowdhary, G.; Peng, B.; Jin, Z.; Wang, S.; Wang, S. Detecting In-Season Crop Nitrogen Stress of Corn for Field Trials Using UAV- and CubeSat-Based Multispectral Sensing. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2019, 12, 5153–5166. [Google Scholar] [CrossRef]
  20. Vargas, J.Q.; Khot, L.R.; Peters, R.T.; Chandel, A.K.; Molaei, B. Low Orbiting Satellite and Small UAS-Based High-Resolution Imagery Data to Quantify Crop Lodging: A Case Study in Irrigated Spearmint. IEEE Geosci. Remote Sens. Lett. 2020, 17, 755–759. [Google Scholar] [CrossRef]
  21. Barbedo, J.G.A. A Review on the Use of Unmanned Aerial Vehicles and Imaging Sensors for Monitoring and Assessing Plant Stresses. Drones 2019, 3, 40. [Google Scholar] [CrossRef] [Green Version]
  22. Kim, J.; Kim, S.; Ju, C.; Son, H.I. Unmanned Aerial Vehicles in Agriculture: A Review of Perspective of Platform, Control, and Applications. IEEE Access 2019, 7, 105100–105115. [Google Scholar] [CrossRef]
  23. Zhang, C.; Valente, J.; Kooistra, L.; Guo, L.; Wang, W. Orchard management with small unmanned aerial vehicles: A survey of sensing and analysis approaches. Precis. Agric. 2021, 22, 2007–2052. [Google Scholar] [CrossRef]
  24. Zhang, H.; Wang, L.; Tian, T.; Yin, J. A Review of Unmanned Aerial Vehicle Low-Altitude Remote Sensing (UAV-LARS) Use in Agricultural Monitoring in China. Remote Sens. 2021, 13, 1221. [Google Scholar] [CrossRef]
  25. Franceschini, M.H.D.; Bartholomeus, H.; van Apeldoorn, D.; Suomalainen, J.; Kooistra, L. Assessing Changes in Potato Canopy Caused by Late Blight in Organic Production Systems through UAV-based Pushroom Imaging Spectrometer. ISPRS Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2017, 42, 109–112. [Google Scholar] [CrossRef] [Green Version]
  26. Vargas, J.Q.; Bendig, J.; Mac Arthur, A.; Burkart, A.; Julitta, T.; Maseyk, K.; Thomas, R.; Siegmann, B.; Rossini, M.; Celesti, M.; et al. Unmanned Aerial Systems (UAS)-Based Methods for Solar Induced Chlorophyll Fluorescence (SIF) Retrieval with Non-Imaging Spectrometers: State of the Art. Remote Sens. 2020, 12, 1624. [Google Scholar] [CrossRef]
27. Sanchez-Rodriguez, J.-P.; Aceves-Lopez, A.; Martinez-Carranza, J.; Flores-Wysocka, G. Onboard plane-wise 3D mapping using super-pixels and stereo vision for autonomous flight of a hexacopter. Intell. Serv. Robot. 2020, 13, 273–287.
28. Busch, C.A.M.; Stol, K.A.; van der Mark, W. Dynamic tree branch tracking for aerial canopy sampling using stereo vision. Comput. Electron. Agric. 2021, 182, 106007.
29. Lin, Y.-C.; Habib, A. Quality control and crop characterization framework for multi-temporal UAV LiDAR data over mechanized agricultural fields. Remote Sens. Environ. 2021, 256, 112299.
30. Luo, S.; Liu, W.; Zhang, Y.; Wang, C.; Xi, X.; Nie, S.; Ma, D.; Lin, Y.; Zhou, G. Maize and soybean heights estimation from unmanned aerial vehicle (UAV) LiDAR data. Comput. Electron. Agric. 2021, 182, 106005.
31. Reiser, D.; Martín-López, J.M.; Memic, E.; Vázquez-Arellano, M.; Brandner, S.; Griepentrog, H.W. 3D Imaging with a Sonar Sensor and an Automated 3-Axes Frame for Selective Spraying in Controlled Conditions. J. Imaging 2017, 3, 9.
32. Cooper, I.; Hotchkiss, R.; Williams, G. Extending Multi-Beam Sonar with Structure from Motion Data of Shorelines for Complete Pool Bathymetry of Reservoirs. Remote Sens. 2021, 13, 35.
33. Candiago, S.; Remondino, F.; De Giglio, M.; Dubbini, M.; Gattelli, M. Evaluating Multispectral Images and Vegetation Indices for Precision Farming Applications from UAV Images. Remote Sens. 2015, 7, 4026–4047.
34. Potgieter, A.B.; George-Jaeggli, B.; Chapman, S.; Laws, K.; Cadavid, L.A.S.; Wixted, J.; Watson, J.; Eldridge, M.; Jordan, D.; Hammer, G. Multi-Spectral Imaging from an Unmanned Aerial Vehicle Enables the Assessment of Seasonal Leaf Area Dynamics of Sorghum Breeding Lines. Front. Plant Sci. 2017, 8, 1532.
35. Zhong, Y.; Wang, X.; Xu, Y.; Wang, S.; Jia, T.; Hu, X.; Zhao, J.; Wei, L.; Zhang, L. Mini-UAV-Borne Hyperspectral Remote Sensing: From Observation and Processing to Applications. IEEE Geosci. Remote Sens. Mag. 2018, 6, 46–62.
36. Jay, S.; Baret, F.; Dutartre, D.; Malatesta, G.; Héno, S.; Comar, A.; Weiss, M.; Maupas, F. Exploiting the centimeter resolution of UAV multispectral imagery to improve remote-sensing estimates of canopy structure and biochemistry in sugar beet crops. Remote Sens. Environ. 2019, 231, 110898.
37. Gao, D.; Sun, Q.; Hu, B.; Zhang, S. A Framework for Agricultural Pest and Disease Monitoring Based on Internet-of-Things and Unmanned Aerial Vehicles. Sensors 2020, 20, 1487.
38. Xavier, T.W.F.; Souto, R.N.V.; Statella, T.; Galbieri, R.; Santos, E.S.; Suli, G.S.; Zeilhofer, P. Identification of Ramularia Leaf Blight Cotton Disease Infection Levels by Multispectral, Multiscale UAV Imagery. Drones 2019, 3, 33.
39. Heim, R.H.; Wright, I.J.; Scarth, P.; Carnegie, A.J.; Taylor, D.; Oldeland, J. Multispectral, Aerial Disease Detection for Myrtle Rust (Austropuccinia psidii) on a Lemon Myrtle Plantation. Drones 2019, 3, 25.
40. Messina, G.; Modica, G. Applications of UAV Thermal Imagery in Precision Agriculture: State of the Art and Future Research Outlook. Remote Sens. 2020, 12, 1491.
41. Maguire, M.; Neale, C.; Woldt, W. Improving Accuracy of Unmanned Aerial System Thermal Infrared Remote Sensing for Use in Energy Balance Models in Agriculture Applications. Remote Sens. 2021, 13, 1635.
42. Calvario, G.; Alarcón, T.; Dalmau, O.; Sierra, B.; Hernandez, C. An Agave Counting Methodology Based on Mathematical Morphology and Images Acquired through Unmanned Aerial Vehicles. Sensors 2020, 20, 6247.
43. Dehkordi, R.H.; El Jarroudi, M.; Kouadio, L.; Meersmans, J.; Beyer, M. Monitoring Wheat Leaf Rust and Stripe Rust in Winter Wheat Using High-Resolution UAV-Based Red-Green-Blue Imagery. Remote Sens. 2020, 12, 3696.
44. dos Santos, L.M.; Ferraz, G.A.E.S.; Barbosa, B.D.D.S.; Diotto, A.V.; Andrade, M.T.; Conti, L.; Rossi, G. Determining the Leaf Area Index and Percentage of Area Covered by Coffee Crops Using UAV RGB Images. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2020, 13, 6401–6409.
45. Maes, W.H.; Steppe, K. Perspectives for Remote Sensing with Unmanned Aerial Vehicles in Precision Agriculture. Trends Plant Sci. 2019, 24, 152–164.
46. Narmilan, A.; Gonzalez, F.; Salgadoe, A.S.A.; Powell, K. Detection of White Leaf Disease in Sugarcane Using Machine Learning Techniques over UAV Multispectral Images. Drones 2022, 6, 230.
47. Chaschatzis, C.; Karaiskou, C.; Mouratidis, E.G.; Karagiannis, E.; Sarigiannidis, P.G. Detection and Characterization of Stressed Sweet Cherry Tissues Using Machine Learning. Drones 2022, 6, 3.
48. Niu, H.; Hollenbeck, D.; Zhao, T.; Wang, D.; Chen, Y. Evapotranspiration Estimation with Small UAVs in Precision Agriculture. Sensors 2020, 20, 6427.
49. Zhou, Z.; Majeed, Y.; Naranjo, G.D.; Gambacorta, E.M. Assessment for crop water stress with infrared thermal imagery in precision agriculture: A review and future prospects for deep learning applications. Comput. Electron. Agric. 2021, 182, 106019.
50. Raj, M.; Gupta, S.; Chamola, V.; Elhence, A.; Garg, T.; Atiquzzaman, M.; Niyato, D. A survey on the role of Internet of Things for adopting and promoting Agriculture 4.0. J. Netw. Comput. Appl. 2021, 187, 103107.
51. Marimon, N.; Luque, J.; Vargas, F.J.; Alegre, S.; Miarnau, X. Susceptibilidad varietal a la ‘mancha ocre’ (Polystigma ochraceum (Whalenb.) Sacc.) en el cultivo del almendro. In Proceedings of the XVI Spanish National Congress of the Phytopathology Society, Barcelona, Spain, 21–23 September 2012.
52. López-López, M.; Calderón, R.; González-Dugo, V.; Zarco-Tejada, P.J.; Fereres, E. Early Detection and Quantification of Almond Red Leaf Blotch Using High-Resolution Hyperspectral and Thermal Imagery. Remote Sens. 2016, 8, 276.
53. Saravanan, G.; Yamuna, G.; Nandhini, S. Real time implementation of RGB to HSV/HSI/HSL and its reverse color space models. In Proceedings of the 2016 International Conference on Communication and Signal Processing (ICCSP), Melmaruvathur, India, 6–8 April 2016; pp. 0462–0466.
54. Jung, K.Y.; Park, J.K. Analysis of Vegetation Infection Information Using Unmanned Aerial Vehicle with Optical Sensor. Sens. Mater. 2019, 31, 3319.
55. Nazir, M.N.M.M.; Terhem, R.; Norhisham, A.R.; Razali, S.M.; Meder, R. Early Monitoring of Health Status of Plantation-Grown Eucalyptus pellita at Large Spatial Scale via Visible Spectrum Imaging of Canopy Foliage Using Unmanned Aerial Vehicles. Forests 2021, 12, 1393.
56. Sun, Z.; Wang, Y.; Pan, L.; Xie, Y.; Zhang, B.; Liang, R.; Sun, Y. Pine wilt disease detection in high-resolution UAV images using object-oriented classification. J. For. Res. 2021, 33, 1377–1389.
57. Oliveira, H.C.; Guizilini, V.C.; Nunes, I.P.; Souza, J.R. Failure Detection in Row Crops From UAV Images Using Morphological Operators. IEEE Geosci. Remote Sens. Lett. 2018, 15, 991–995.
58. Marques, P.; Pádua, L.; Adão, T.; Hruška, J.; Peres, E.; Sousa, A.; Sousa, J.J. UAV-Based Automatic Detection and Monitoring of Chestnut Trees. Remote Sens. 2019, 11, 855.
59. Siebring, J.; Valente, J.; Franceschini, M.H.D.; Kamp, J.; Kooistra, L. Object-Based Image Analysis Applied to Low Altitude Aerial Imagery for Potato Plant Trait Retrieval and Pathogen Detection. Sensors 2019, 19, 5477.
60. Zhang, J.; Virk, S.; Porter, W.; Kenworthy, K.; Sullivan, D.; Schwartz, B. Applications of Unmanned Aerial Vehicle Based Imagery in Turfgrass Field Trials. Front. Plant Sci. 2019, 10, 279.
61. Barbosa, B.D.S.; Ferraz, G.A.E.S.; Santos, L.M.; Santana, L.S.; Marin, D.B.; Rossi, G.; Conti, L. Application of RGB Images Obtained by UAV in Coffee Farming. Remote Sens. 2021, 13, 2397.
62. Ganthaler, A.; Losso, A.; Mayr, S. Using image analysis for quantitative assessment of needle bladder rust disease of Norway spruce. Plant Pathol. 2018, 67, 1122–1130.
Figure 1. System architecture for the detection of plant diseases.
Figure 2. Color models: (a) RGB cube; (b) HSL cylinder.
Figure 3. Color study with a converter tool.
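The color study illustrated in Figures 2 and 3 amounts to converting RGB samples to the HSL space and reading off the hue channel. As a minimal sketch of that conversion (using Python's standard colorsys module rather than the converter tool shown in Figure 3), the hue in degrees can be obtained as follows:

import colorsys

def rgb_to_hue_degrees(r: int, g: int, b: int) -> float:
    """Convert 8-bit RGB values to an HSL hue angle in degrees [0, 360)."""
    h, _l, _s = colorsys.rgb_to_hls(r / 255.0, g / 255.0, b / 255.0)
    return h * 360.0

# Example: a healthy (green) sample and an ochre-spot sample from Table 1
print(round(rgb_to_hue_degrees(142, 170, 68)))  # ~76 (green leaf)
print(round(rgb_to_hue_degrees(144, 146, 84)))  # ~62 (ochre spot)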
Figure 4. Scale with different levels of surface affected by ochre spot [51].
Figure 5. Flow chart of the main program for each image.
Figure 6. Flow chart of the block ‘shape recognition’ of the main program.
Figure 7. (a) Original image of a leaf with ochre spot; (b) Intensified image; (c) Grayscale image; (d) Binary gradient mask; (e) Dilated image; (f) Filled image; (g) Final open binary image; (h) Fused image of the original image and the binary mask after shape recognition.
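Figure 7 summarizes the shape-recognition stages: the original image is intensified, converted to grayscale, turned into a binary gradient mask, and then dilated, filled, and opened before being fused with the original image. The sketch below reproduces that general kind of morphological pipeline with OpenCV; the kernel size, threshold, iteration counts, and corner seed for the fill are illustrative assumptions, not the values used in the paper.

import cv2
import numpy as np

def leaf_mask(image_bgr: np.ndarray) -> np.ndarray:
    """Rough leaf segmentation: grayscale -> gradient -> binarize -> dilate -> fill -> open."""
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)

    # Binary gradient mask from the Sobel edge magnitude
    gx = cv2.Sobel(gray, cv2.CV_32F, 1, 0)
    gy = cv2.Sobel(gray, cv2.CV_32F, 0, 1)
    magnitude = np.clip(cv2.magnitude(gx, gy), 0, 255).astype(np.uint8)
    _, edges = cv2.threshold(magnitude, 40, 255, cv2.THRESH_BINARY)

    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
    dilated = cv2.dilate(edges, kernel, iterations=2)      # close gaps in the leaf outline

    # Fill interior holes: flood-fill the background from a corner and invert it
    flood = dilated.copy()
    h, w = flood.shape
    ff_mask = np.zeros((h + 2, w + 2), np.uint8)
    cv2.floodFill(flood, ff_mask, (0, 0), 255)              # assumes the corner is background
    filled = dilated | cv2.bitwise_not(flood)

    # Opening removes small spurious blobs left around the leaf
    return cv2.morphologyEx(filled, cv2.MORPH_OPEN, kernel, iterations=2)

# Usage sketch, fusing the mask with the original image as in Figure 7h:
# image = cv2.imread("leaf.jpg"); fused = cv2.bitwise_and(image, image, mask=leaf_mask(image))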
Figure 8. Image with the RGB object of interest.
Figure 9. Image with parts highlighted with a blue mask: (a) Unhealthy and (b) Healthy.
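Figures 8 and 9 show the step that follows segmentation: each pixel of the object of interest is labeled healthy or unhealthy from its hue, the corresponding area is highlighted with a blue mask, and the two areas can be expressed as percentages. A hedged sketch of this classification is given below, with hue limits loosely suggested by Table 1 (OpenCV stores hue as degrees divided by two); the actual thresholds of the algorithm may differ.

import cv2
import numpy as np

# Illustrative hue limits (in OpenCV units, i.e. degrees / 2) suggested by Table 1;
# the paper's exact thresholds may differ.
HEALTHY_H = (66 // 2, 110 // 2)   # greenish hues
UNHEALTHY_H = (0, 62 // 2)        # ochre/brown hues

def health_percentages(image_bgr: np.ndarray, mask: np.ndarray):
    """Return (% healthy, % unhealthy) of the masked area and a blue-highlighted copy."""
    hsv = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2HSV)
    hue = hsv[:, :, 0]
    region = mask > 0

    healthy = region & (hue >= HEALTHY_H[0]) & (hue <= HEALTHY_H[1])
    unhealthy = region & (hue >= UNHEALTHY_H[0]) & (hue <= UNHEALTHY_H[1])

    total = max(int(region.sum()), 1)
    highlighted = image_bgr.copy()
    highlighted[unhealthy] = (255, 0, 0)     # paint unhealthy pixels blue (BGR order)

    return (100.0 * healthy.sum() / total,
            100.0 * unhealthy.sum() / total,
            highlighted)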
Figure 10. (a) Image to process; (b) Image with unhealthy areas highlighted; (c) Image with healthy areas highlighted.
Figure 11. (a) Image to process; (b) Image with unhealthy areas highlighted; (c) Image with healthy areas highlighted.
Figure 12. (a) Image to process; (b) Image with unhealthy areas highlighted; (c) Image with healthy areas highlighted.
Figure 13. (a) Image to process; (b) Image with unhealthy areas highlighted; (c) Image with healthy areas highlighted.
Figure 14. (a) Image to process; (b) Image with unhealthy areas highlighted; (c) Image with healthy areas highlighted.
Figure 15. Example of a valid terrain for our algorithm: (a) Almond crop and (b) One of the almond trees on it.
Figure 16. (a) Image to process; (b) Image with unhealthy areas highlighted; (c) Image with healthy areas highlighted.
Figure 17. (a) Image to process; (b) Image with unhealthy areas highlighted; (c) Image with healthy areas highlighted.
Table 1. Data from the tonality study for ochre spot on almonds.
Healthy Part: Green            Unhealthy Part: Ochre Spot
R     G     B     H            R     G     B     H
142   170   68    76           144   146   84    62
150   172   104   79           196   166   124   35
114   144   62    82           212   188   136   41
90    114   66    90           108   84    56    32
96    126   60    87           178   136   102   27
124   140   66    73           92    62    52    15
184   196   104   68           90    66    51    23
138   146   100   70           164   126   92    28
180   206   114   77           112   102   58    50
156   192   96    82           74    74    58    60
192   210   120   72           80    62    52    21
90    110   62    85           70    62    56    26
150   180   94    81           110   78    78    0
134   178   86    89           98    70    64    11
85    130   57    97           92    56    52    6
42    60    36    105          146   100   88    12
170   184   80    68           92    64    62    4
190   218   129   77           76    60    54    16
182   192   84    66           66    51    51    0
67    89    43    89           110   112   51    62
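As a consistency check on Table 1, the H column follows from the RGB columns through the standard RGB-to-HSL hue formula: when green is the largest component, H = 60 × (2 + (B − R)/(max − min)). For the first healthy sample this gives 60 × (2 + (68 − 142)/(170 − 68)) ≈ 76°, and for the first ochre-spot sample 60 × (2 + (84 − 144)/(146 − 84)) ≈ 62°, matching the tabulated values.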