Article

A Novel Approach for Weed Type Classification Based on Shape Descriptors and a Fuzzy Decision-Making Method

by Pedro Javier Herrera 1,*, José Dorado 2,* and Ángela Ribeiro 1
1 Centre for Automation and Robotics, CSIC-UPM, 28500 Madrid, Spain
2 Institute of Agricultural Sciences, CSIC, 28006 Madrid, Spain
* Authors to whom correspondence should be addressed.
Sensors 2014, 14(8), 15304-15324; https://doi.org/10.3390/s140815304
Submission received: 5 March 2014 / Revised: 7 July 2014 / Accepted: 8 August 2014 / Published: 19 August 2014
(This article belongs to the Special Issue Agriculture and Forestry: Sensors, Technologies and Procedures)

Abstract

An important objective in weed management is the discrimination between grasses (monocots) and broad-leaved weeds (dicots), because these two weed groups can be appropriately controlled by specific herbicides. In fact, efficiency is higher if selective treatment is performed for each type of infestation instead of applying a broadcast herbicide over the whole surface. This work proposes a strategy in which weeds are characterised by a set of shape descriptors (the seven Hu moments and six geometric shape descriptors). The weeds appear in outdoor field images, acquired with an RGB camera, which reflect real situations; the images therefore present a mixture of both weed species under varying lighting conditions. In the presented approach, four decision-making methods were adapted to use the best shape descriptors as attributes, and the best-performing one was selected. This proposal establishes a novel methodology with a high success rate in weed species discrimination.

1. Introduction

The Precision Agriculture (PA) concept advocates adjusting resources and agronomic practices to the requirements of the soil and crop, seeking greater sustainability and efficiency. Among the practices associated with a PA schema, site-specific weed management is effective in decreasing herbicide costs, optimising weed control and preventing unnecessary environmental contamination [1–4]. Several authors have shown that the distribution of the most harmful weeds for a particular crop is not uniform, generally affecting less than 40% of the crop [5,6]. However, weeds are usually managed uniformly across the whole field, with consequent damage to the environment and waste of money. The variable spatial distribution of weeds must, therefore, be considered in weed-management strategies: by using targeted chemical applications, agriculture and the environment can be made more sustainable [7].

To carry out suitable site-specific weed management, it is essential to have accurate information on within-field variation of weeds, i.e.: (i) where the weeds are located; (ii) the weed seedling density; and (iii) the type of infestation present. This information can be obtained by different methods, including cameras located on aerial platforms or ground platforms.

Nowadays, the growing availability of satellite and aerial imagery is driving demand for image-based applications in which it is often useful to classify the different textures underlying the images. In this context, textures can help distinguish weeds from crop. So far, the identification of agricultural textures in aerial and satellite images has relied on strategies that require costly field sampling [8,9]. Moreover, information collection depends heavily on the weather (no clouds or fog) and, although remote sensing in agriculture has experienced a resurgence in recent years due to the use of hyperspectral and multispectral cameras [10], it is expensive and produces low-resolution images. In general, satellite images are better suited for large, contiguous areas, while aerial images are better for smaller or non-contiguous areas. Satellite images may be too expensive for small survey areas, but they cover a relatively large geographic area compared to aerial images. In contrast, Unmanned Aerial Vehicles (UAVs), commonly known as drones, allow a higher resolution, but many countries have passed legislation limiting their use. In addition, the less dangerous drones also have lower energy autonomy, flying on average for less than half an hour, which limits the area that can be inspected. On the other hand, images acquired from ground platforms also allow a high (centimetre-level) resolution, but the information contained in each image only covers a small crop area. Nevertheless, treatments are applied at ground level, which largely justifies performing detection from ground platforms.

The development of methods for weed detection from images is an important open field for PA [11–15] and a challenge with no simple solution, owing to the great diversity of crops and weeds, changes in ambient lighting, differences in the texture of the terrain (fundamentally due to humidity), the different growth stages of crop and weed infestation, and the great similarity between crop and weeds [16,17]. Many weed-detection techniques share a first step: segmenting vegetation from background. They mostly exploit the fact that all pixels belonging to vegetation (crop or weed) have a strong green component. Based on this, the vegetation is separated from the background (soil) independently of the type of crop treated. This characteristic can be used directly through the RGB colour model or by creating colour indices that represent the "greenness" of a given pixel [12,18]. These indices are designed to cope with variable lighting conditions, and all strategies oriented toward green detection need to fix a threshold for the final segmentation.

These difficulties make the discrimination between crop, weed and soil a complex task, and the difficulty increases when the objective is to discriminate between weed species or to apply herbicide in real time as the position of the infestation is detected [1,19–23].

Precision treatment of weeds involves applying well-adjusted doses of herbicides directly to the target at the seedling stage. The effectiveness in controlling weeds and the crop yield can be significantly improved if, in addition to precisely locating herbicides, the herbicides are applied early in the weed growth cycle (i.e., the seedling stage) [24–26]. Nevertheless, the lack of commercially available precision application equipment continues to prevent the technology from reaching its full potential, owing to limited robustness under a wide variety of field conditions, including fluctuating weather and changing plant canopy and structure [27]. In addition, targeted recognition and application technology for precision weed control must be easily incorporated into current systems or used as stand-alone implements. While several research studies and a few commercial-grade systems are being developed for targeted applications, little is known about the precise rates of herbicides and other treatments needed to control very small weed seedlings.

The use of high-tech machinery to target individual weeds in real time would provide valuable tools for weed control in the field at any time [28]. For this reason, it is essential to analyse the viability of running the proposed image processing in real time. The final aim of this work is to integrate an RGB camera as part of the sensor systems on board an autonomous tractor in order to achieve weed classification in real time. Figure 1 illustrates the location on a tractor of the equipment used in the initial experiments conducted to analyse the performance of our approach. In this case, the camera (an EOS 7D, Canon, Tokyo, Japan) is connected to a computer by a USB port; the computer acquires images at a rate of 6 frames per second, and the images are geo-referenced during the acquisition process. The type and aspect of the images obtained are shown at the bottom of Figure 1. The image perspective is corrected following the process described in [29]. Furthermore, the crop rows that appear in the image are eliminated by a process based on the method proposed in [30], to isolate the vegetation cover areas between the crop rows. After applying this process, the images obtained are similar to those shown in Figure 2. Therefore, the main objective of this work is to design, develop and assess a strategy for discriminating between weed species, able to work in real time with images such as those shown in Figure 2. Finally, the developed method will be integrated on the computer on board the tractor with the aim of taking effective treatment decisions.

Herbicide application efficiency would be higher if selective treatment were performed for each type of weed instead of using a single broadcast herbicide [31,32]. Most studies during the last twenty years have addressed the classification of only two classes of plants, either crop or weed, or have distinguished between weed groups, broadleaves and grasses [31,33,34], e.g., based on their heights [33,34]. Broad-leaved weeds and grasses are distinguished because the selectivity of some herbicides is based on differences between monocots and dicots. Therefore, characterising the spatial distribution of both groups is essential to the development of an autonomous treatment system that can adjust the type of herbicide and the dose to the dominant infestation. However, precisely classifying a plant species that may be mixed with other species is challenging from the point of view of image processing.

The success with which these aspects can be adapted to classification depends on the type of crop and weeds, and on how and when the images are gathered. In other words, early weed detection in row crops can be planned at two levels of increasing difficulty: (1) estimation of the presence or absence of weeds according to their location in the bare soil or in the crop rows and (2) differentiation between weed species (e.g., monocots vs. dicots) according to discriminant parameters (e.g., spectral characteristics, size and shape). Previous works have tackled this problem. For example, the performance of three neural-network models for classifying images from six classes of plants is presented in [35]; the images were taken under controlled lighting and the different species always appeared isolated. A promising method based on ultraviolet (UV) induced fluorescence is proposed in [36]; in the latter case, experiments were conducted with plants grown in a greenhouse. A complex method that combines well-known image processing techniques, clustering and genetic algorithms to extract leaf shapes and discriminate plant species is presented in [37], although that method is hardly suitable for real-time processing.

Furthermore, shape descriptors are used in many computer vision tasks [38]. In general, a descriptor characterises a given shape, and the descriptors of different shapes should differ enough for the shapes to be discriminated. Regions can be described either by contour-based properties or by region-based properties [39]. Invariance with respect to translation, rotation and scaling is demanded in object recognition applications, whose aim is to identify an object independently of its position, orientation and size in the scene [40,41]. In 1962, Hu [42] defined a set of seven invariant moments for two-dimensional objects, derived from the second and third central moments, that have proved useful in many pattern recognition tasks [40,41]. In addition, geometric shape descriptors assess the geometric shape of the contours of the regions, e.g., the perimeter, the diameter, the eccentricity, etc. [38,39]. Since the spatial distributions of weeds are distinctive, with monocot infestations being patchier than dicot infestations [6], and monocots differ structurally from dicots (as can be seen in Figure 2), a strategy based on shape descriptors may be suitable for recognising plant shape.

In [43], new shape features based on a skeleton operation are proposed for weed/crop discrimination, but the method needs to be adapted to varying field conditions, and the detection of late growth stages and of overlaps in the images needs improvement. In [44], plant seedling recognition is addressed by means of two approaches to shape feature generation based on plant silhouettes. The performance assessment is based on the classification accuracy of four different classifiers (KNN, Naive Bayes, linear SVM, nonlinear SVM), but a description of how the shape descriptors are used by each classifier is not included. Moreover, the method needs to be adapted to the varying conditions of real field situations, since the images were taken under controlled lighting and the plants appear isolated. A preliminary study in [45] shows the Hu invariant moments obtained from regions that belong to the two weed species under study. The data from monocot weeds show that the fifth, sixth and seventh Hu moments tend to have negative values, while the moments with positive values are close to 1 or 2. In dicot weeds, the values of all seven moments are close to zero and never reach 1; the fifth, sixth and seventh moments may be negative, but these values are very close to zero. Therefore, these results suggest that the Hu moments could be a valid basis for appropriately characterising different weed species.

In summary, this work presents a novel method for discriminating between monocot and dicot weeds in images taken in real field situations, which consequently display a mixture of both types. The proposed method assumes that each region belonging to weeds can be characterised by a set of thirteen attributes based on the seven invariant Hu moments and six geometric shape descriptors (perimeter, diameter, minor axis length, major axis length, eccentricity and area). Based on these attributes, each region can be classified as monocot or dicot using an appropriate decision method. For the final validation, several decision-making methods were analysed: the Choquet fuzzy integral (CFI), the Sugeno fuzzy integral (SFI), the Dempster-Shafer theory (DES) and fuzzy multicriteria decision making (FMCDM). Each of them has been reported to give excellent results as a classifier combiner [46,47]. Moreover, based on the conclusions reported in [48–50], each strategy appears to be a suitable method for combining attributes; in fact, with a little adjustment they can be used to combine attributes in this proposal, in outdoor images with similar characteristics (lighting conditions, shadows, occlusions, etc.) [46–50]. Furthermore, Support Vector Machines (SVMs) are used in this work for comparative purposes, because they have been successfully applied to crop/weed discrimination with features related to colour, texture and shape [44,51–53], although in a different context, since the plants appeared isolated in the images [44,52,53]. A discussion of SVM performance is presented in [51].

The organisation of this paper is as follows: Section 2 describes the proposed approach, including a brief overview of the decision-making strategies studied and how they are adjusted for combining attributes; the section also includes a brief description of the images used to assess the performance of the proposal, as well as the characteristics of the sensor and the camera settings used in the acquisition process. Section 3 analyses the performance of the proposed method. Section 4 presents the conclusions and future work.

2. Methods and Material

The proposed approach consists of the following four stages: (1) segmentation of vegetation cover and definition of regions; (2) labelling of disconnected regions; (3) extraction of the seven Hu invariant moments and six geometric shape descriptors for each region; and (4) classification of both monocot and dicot regions by means of a decision-making method; in this paper, several decision-making methods are analysed. Figure 3 shows the described process.

2.1. Segmentation of Vegetation Cover and Definition of Regions

The segmentation of the vegetation cover is a two-step process. First, a linear combination (Equation (1)) is applied to each pixel of the original image (I) to create a colour index:

$$G_I = r \cdot I(R) + g \cdot I(G) + b \cdot I(B) \tag{1}$$
where r = −0.884, g = 1.262, b = −0.311 [23]. These coefficients were found by genetic algorithm optimisation and proved to perform better than the Excess Green coefficients (r = −1, g = 2, b = −1) on similar images. The original RGB image is thus transformed into a one-dimensional greyscale image (GI). The values of these coefficients are the key to obtaining the most appropriate vegetation/soil segmentation, as discussed at length in [12]. The resulting GI image is binarised using a threshold, set to 10 in this case to cope with the variability of daylight conditions [12,30]. Figure 4b shows the binarised image obtained from Figure 4a.

Next, to enhance the regions, a morphological opening is applied to avoid overlap between regions belonging to different plants and to remove noise pixels with minimal alteration of the vegetation-cover pixels. To obtain the isolated regions, the opening is performed with a structuring element that operates symmetrically in all spatial directions, i.e., the classical 5 × 5 matrix of ones. The choice of this structuring element is based on the resolution of the images and on the size and characteristics of the weeds under study. Moreover, the selection of this window size is consistent with the real-time requirements.
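As an illustration, the two-step segmentation can be sketched in a few lines of Python with OpenCV (the authors' implementation was in Matlab, so the function below is only an illustrative re-creation using the coefficients, threshold and 5 × 5 structuring element given above):

```python
import cv2
import numpy as np

def segment_vegetation(img_bgr, r=-0.884, g=1.262, b=-0.311, threshold=10):
    """Greenness index (Equation (1)), fixed-threshold binarisation,
    then a 5x5 morphological opening, as described in Section 2.1."""
    # OpenCV stores channels in B, G, R order
    B, G, R = cv2.split(img_bgr.astype(np.float32))
    gi = r * R + g * G + b * B                  # greyscale greenness image G_I
    binary = (gi > threshold).astype(np.uint8)  # vegetation = 1, soil = 0
    # Opening with the classical 5x5 matrix of ones separates touching
    # plants and removes isolated noise pixels
    kernel = np.ones((5, 5), np.uint8)
    return cv2.morphologyEx(binary, cv2.MORPH_OPEN, kernel)
```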

2.2. Labelling of Disconnected Regions and Extraction of the Hu Invariant Moments

Labelling is an operation that identifies distinct self-contained objects in a binary image. Two types of connectivity can be defined, 4-connectivity and 8-connectivity, as shown in Figure 5. The connectivity determines which pixels belong to the same object (region, in this case).

In the second stage, the regions are labelled following the algorithm described in [54], which finds the connected components in a binary image as follows:

(1) Run-length encode the input image.
(2) Scan the runs, assigning preliminary labels and recording label equivalences in a local equivalence table.
(3) Resolve the equivalence classes.
(4) Relabel the runs based on the resolved equivalence classes.

In this method, all pixels in the same region are assigned the same label using 8-connected labelling. The connected components are searched in top-to-bottom scan order, i.e., all pixels in the first connected component are labelled 1, those in the second 2, and so on.
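A minimal sketch of this labelling stage is shown below; SciPy's `ndimage.label` uses a different internal algorithm than the run-length method of [54], but with a 3 × 3 structuring element of ones it produces the same 8-connected components, numbered in scan order:

```python
import numpy as np
from scipy import ndimage

def label_regions(binary):
    """Assign one integer label per 8-connected region of a binary image."""
    eight_connectivity = np.ones((3, 3), dtype=int)  # include diagonal neighbours
    labels, num_regions = ndimage.label(binary, structure=eight_connectivity)
    return labels, num_regions                       # labels 1, 2, ... in scan order
```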

Once all regions have been labelled, the seven Hu invariant moments and the following six geometric shape descriptors are computed for each region:

(1) Perimeter: the distance around the boundary of the region (measured over the pixels on the inside of the object's boundary).
(2) Diameter: the diameter of the circle that circumscribes the region.
(3) Minor axis length: the length (in pixels) of the minor axis of the ellipse that has the same normalised second central moments as the region.
(4) Major axis length: the length (in pixels) of the major axis of that same ellipse.
(5) Eccentricity: the eccentricity of the ellipse, i.e., the ratio between its minor axis and its major axis.
(6) Area: the number of pixels in the region.

Therefore, each region is characterised by thirteen attributes, i.e., Ω1 ≡ {φ1, φ2, φ3, φ4, φ5, φ6, φ7} and Ω2 ≡ {d1, d2, d3, d4, d5, d6}, where φi is associated with the ith Hu moment and d1: perimeter, d2: diameter, d3: minor axis length, d4: major axis length, d5: eccentricity, d6: area, with φi, di ∈ [0,1]. The six geometric descriptors are normalised for each region following Equation (2), where yi represents one of the six geometric descriptors defined above:

$$d_i = \frac{y_i - \min(y)}{\max(y) - \min(y)} \tag{2}$$
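A sketch of the descriptor-extraction stage with scikit-image follows. Two caveats: `equivalent_diameter` (the diameter of a circle with the same area) is used here as a stand-in for the circumscribed-circle diameter defined above, scikit-image's `eccentricity` follows the standard ellipse definition rather than the minor/major axis ratio, and for simplicity the sketch applies the min-max normalisation of Equation (2) to all thirteen columns:

```python
import numpy as np
from skimage.measure import regionprops

def region_descriptors(labels):
    """Return a (num_regions, 13) array: 7 Hu moments + 6 geometric
    descriptors per region, min-max normalised over the image."""
    raw = []
    for rp in regionprops(labels):
        hu = rp.moments_hu                   # the seven invariant Hu moments
        geom = [rp.perimeter,
                rp.equivalent_diameter,      # stand-in for circumscribed diameter
                rp.minor_axis_length,
                rp.major_axis_length,
                rp.eccentricity,
                rp.area]
        raw.append(np.concatenate([hu, geom]))
    raw = np.asarray(raw, dtype=float)
    # Equation (2): normalise each descriptor over all regions to [0, 1]
    rng = raw.max(axis=0) - raw.min(axis=0)
    return (raw - raw.min(axis=0)) / np.where(rng > 0, rng, 1.0)
```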

2.3. Classification of Weeds Types

Once each region is characterised by the thirteen shape descriptors, the classification stage must decide the class to which each region belongs. For classification, four decision-making methods (CFI, SFI, DES and FMCDM), which have been reported to give excellent results as classifier combiners [46,47], were adapted to combine the shape descriptors as attributes. With this aim, they were trained and their performances were compared in classifying monocot vs. dicot weeds. The following subsections briefly explain the decision-making methods analysed. Each method requires a prior training step in which it is tuned to the specific problem (here, the classification between monocots and dicots) using a set of positive and negative training examples.

2.3.1. CFI Method

The CFI method requires the computation of the relevance of each attribute, from which the so-called fuzzy densities can be computed. The relevance of each attribute is determined by computing the λ-fuzzy measure [46]. In the proposed approach, the calculation starts by selecting a set of thirteen fuzzy measures, called g1, g2, …, g13 following [46]. Each measure represents the individual relevance (strength or competence) of the associated attribute in Ω = Ω1 ∪ Ω2. The value of λ needed to calculate gi is obtained as the unique real root greater than −1 of the following polynomial:

$$\lambda + 1 = \prod_{i \in \Omega} (1 + \lambda g_i), \quad \lambda \neq 0 \tag{3}$$

In the approach proposed for the CFI, the method computes the relevance of each attribute to determine its specific contribution to the decision through the fuzzy densities. The relevance of each attribute is assessed by considering a number of reliable true and false training examples obtained from a set of different regions. The process is as follows: for each region in an image, the grade of support for its class (monocot or dicot) is computed considering each of the thirteen attributes separately. Thus, the averaged percentages of error, p1, …, p13, are obtained for the selected regions and for each attribute, based on the expert criterion. The relevance of attribute i is then computed by Equation (4):

$$g_i = p_i \Big/ \sum_{j=1}^{13} p_j \tag{4}$$

Once the g1, …, g13 are obtained and λ is found, the CFI is performed following the process described in [48] as follows:

  • For a given region, the vector [a1, a2, a3, a4, a5, a6, a7, a8, a9, a10, a11, a12, a13]T is obtained, with ai ∈ Ω. Without loss of generality, assume that a1 is the highest value and a13 the lowest, i.e., the vector is arranged in descending order, so that ai ≤ aj for all i > j, with i, j ∈ [1,13].

  • Arrange the fuzzy densities corresponding with the mentioned arrangement, i.e., g1, …, g13, and set the first fuzzy density g(1) = g1.

  • For t = 2 to 13, g(t) is calculated recursively by Equation (5):

    $$g(t) = g_t + g(t-1) + \lambda \, g_t \, g(t-1) \tag{5}$$

  • Calculate, for each candidate region i, the final degree of support for matching each class l as:

    $$S_i(l) = a_{13} + \sum_{h=2}^{13} \left[ a_{h-1} - a_h \right] g(h-1) \tag{6}$$

  • The class to which a region belongs is chosen by selecting the maximum support Si(l) among all classes, in this case two classes, monocots and dicots.
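The steps above can be condensed into the following sketch; `lambda_measure` solves Equation (3) numerically, and the bracketing intervals, as well as the additive shortcut when the densities sum to one (which is the case for the densities of Equation (4)), are implementation assumptions of this sketch:

```python
import numpy as np
from scipy.optimize import brentq

def lambda_measure(g):
    """Unique real root lambda > -1 of Equation (3); lambda = 0 (additive
    measure) when the densities sum exactly to one."""
    s = g.sum()
    if abs(s - 1.0) < 1e-9:
        return 0.0
    f = lambda lam: np.prod(1.0 + lam * g) - (lam + 1.0)
    # the nonzero root lies in (-1, 0) if sum(g) > 1, in (0, inf) otherwise
    return brentq(f, -1.0 + 1e-9, -1e-9) if s > 1.0 else brentq(f, 1e-6, 1e9)

def choquet_support(a, g):
    """CFI support for one region: a = 13 attribute values, g = 13 densities."""
    lam = lambda_measure(g)
    order = np.argsort(a)[::-1]              # a_1 highest ... a_13 lowest
    a, g = a[order], g[order]
    gt = np.empty(len(a))
    gt[0] = g[0]                             # g(1) = g_1
    for t in range(1, len(a)):               # Equation (5)
        gt[t] = g[t] + gt[t - 1] + lam * g[t] * gt[t - 1]
    # Equation (6): Choquet integral over the sorted attributes
    return a[-1] + np.sum((a[:-1] - a[1:]) * gt[:-1])
```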

2.3.2. SFI Method

The SFI method is very similar to CFI, coinciding exactly for the first three steps and differing in the way in which the support is estimated, which is reformulated as Equation (7):

$$S_i(l) = \max_{h \in [1,13]} \left\{ \min \left\{ a_h, g(h) \right\} \right\} \tag{7}$$

The decision about the best match is made by selecting the maximum support Si(l) among all classes, in this case monocots and dicots.
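Since the SFI shares the sorting and the g(t) recursion with the CFI, only the final aggregation changes; the sketch below reuses `lambda_measure` from the CFI sketch above:

```python
import numpy as np

def sugeno_support(a, g):
    """SFI support (Equation (7)); lambda_measure as in the CFI sketch."""
    lam = lambda_measure(g)
    order = np.argsort(a)[::-1]
    a, g = a[order], g[order]
    gt = np.empty(len(a))
    gt[0] = g[0]
    for t in range(1, len(a)):
        gt[t] = g[t] + gt[t - 1] + lam * g[t] * gt[t - 1]
    return float(np.max(np.minimum(a, gt)))  # max_h min(a_h, g(h))
```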

2.3.3. DES Theory

The Dempster-Shafer theory (DES) owes its name to the works of its two authors [55,56]. The DES method is applied in our approach following the process described in [46,49], as follows:

(1) A region l is matched correctly or incorrectly with its class of weed. Hence, two classes are identified: the class of true matches and the class of false matches, C1 and C2, respectively. Given a set of samples from both classes, a 13-dimensional mean vector v̄j is built, whose components are the mean values of the thirteen descriptors; v̄1 and v̄2 are the means for C1 and C2, respectively. This is carried out in a previous phase, equivalent to the training phase in classification problems.

(2) Given a region i and Ωi, the 13-dimensional vector xi is computed, whose components are the thirteen shape descriptors, i.e., xi = [ei1, ei2, …, ei13]T, eij ∈ Ω. Then, the proximity Φ between each component of xi and the corresponding component of v̄j is calculated, based on the Euclidean norm ‖·‖, using Equation (8):

$$\Phi_j^A(x_i) = \frac{\left(1 + \|e_i^A - \bar{v}_j^A\|^2\right)^{-1}}{\sum_{k=1}^{2} \left(1 + \|e_i^A - \bar{v}_k^A\|^2\right)^{-1}}, \quad A \in \Omega \tag{8}$$

(3) For every class wj and every region i, the membership degrees are calculated according to:

$$b_j^i(A) = \frac{\Phi_j^A(x_i) \prod_{k \neq j} \left(1 - \Phi_k^A(x_i)\right)}{1 - \Phi_j^A(x_i) \left[1 - \prod_{k \neq j} \left(1 - \Phi_k^A(x_i)\right)\right]}; \quad j = 1, 2 \tag{9}$$

(4) The final degree of support that each region i, represented by xi, receives for each class wj is given by:

$$\mu_j(x_i) = \prod_{A \in \Omega} b_j^i(A) \tag{10}$$

(5) The class to which a region belongs is chosen based on the maximum support received for the class of true matches (w1), i.e., max_i {μ1(xi)}.
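For the two-class case used here, the products over k ≠ j in Equation (9) reduce to the single term of the other class, which allows a compact sketch (the `means` array holding v̄1 and v̄2 is assumed to come from the training phase):

```python
import numpy as np

def des_support(x, means):
    """Support of a region vector x (13 descriptors) for the two classes;
    means is a (2, 13) array of the class mean vectors v1 and v2."""
    # Equation (8): proximity of each descriptor value to each class mean
    inv = 1.0 / (1.0 + (x[None, :] - means) ** 2)   # shape (2, 13)
    phi = inv / inv.sum(axis=0, keepdims=True)
    # Equation (9): membership degrees (two-class simplification)
    b = np.empty_like(phi)
    for j in range(2):
        other = phi[1 - j]
        b[j] = phi[j] * (1.0 - other) / (1.0 - phi[j] * other)
    # Equation (10): product over the thirteen attributes
    return b.prod(axis=1)                           # supports for (C1, C2)
```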

2.3.4. FMCDM Method

The decision based on the FMCDM method first requires the definition of two elements [57]: (a) a triangular fuzzy number u as a triplet (u1, u2, u3) and (b) a distance (d) between two triangular fuzzy numbers u and z, which can be estimated with Equation (11):

$$d(u, z) = \sqrt{\frac{(u_1 - z_1)^2 + (u_2 - z_2)^2 + (u_3 - z_3)^2}{3}} \tag{11}$$

Taking into account the thirteen shape descriptors obtained for each region, they can be separated into thirteen groups C1, C2, …, C13. Each group defines a criterion ranging from 0 to 1; i.e., in this approach there are thirteen criteria available for making decisions. Assuming there are m classes, the fuzzy MCDM paradigm [57,58] can be formulated as the choice of the best alternative Ai (i = 1, …, m; here m = 2), where each alternative represents a class. In other words, the FMCDM problem can be expressed in matrix format as follows:

$$M = [x_{ij}]_{m \times n}; \quad W = [w_j]_{1 \times n}; \quad i = 1, \ldots, m; \; j = 1, \ldots, n \tag{12}$$

M is the decision matrix, where xij is the rating of alternative Ai with respect to criterion Cj, and wj is the weight assigned to criterion Cj. The shape descriptors are treated as triangular fuzzy numbers, so that xi = (ai1, ai2, ai3) ∈ [0,1], where ai1 = ϕi − ε1, ai2 = ϕi, ai3 = ϕi + ε2 and ε1 ≠ ε2. The random numbers ε1 and ε2 must be no larger than a threshold T, set here to 0.1; one must be smaller than the other, and the resulting triplet must not exceed the range [0,1]. The weights associated with the criteria are w1, w2, …, w13, respectively, calculated as follows:

$$w_h = \frac{p_h}{\sum_k p_k}, \quad h, k = 1, \ldots, n \tag{13}$$
where p1, …, p13 are the averaged percentages of error for each attribute, as in CFI and SFI. Without loss of generality, the values in xi are ordered so that ai1 ≤ ai2 ≤ ai3. The normalised fuzzy decision matrix (N) is then obtained as follows:
$$N = [r_{ij}]_{m \times n}; \quad r_{ij} = \left( \frac{a_{i1}}{a_{i3}^*}, \frac{a_{i2}}{a_{i3}^*}, \frac{a_{i3}}{a_{i3}^*} \right) \tag{14}$$
where $a_{i3}^* = \max_i \{a_{i3}\}$. This normalisation preserves the property that the ranges of the normalised triangular fuzzy numbers belong to the interval [0,1]. Considering the importance assigned to each criterion, the weighted normalised fuzzy decision matrix (WN) is constructed by:
$$WN = [v_{ij}]_{m \times n}, \quad v_{ij} = r_{ij} \cdot w_j \tag{15}$$

In WN, the elements vij, ∀i, j, are normalised positive triangular fuzzy numbers ranging in the closed interval [0,1]. Then, the fuzzy positive-ideal solution p+ = (1,1,1) and the fuzzy negative-ideal solution p− = (0,0,0) are defined. The distances for each alternative can be calculated as follows:

$$d_i^+ = \sum_{j=1}^{n} d(v_{ij}, p^+) \quad \text{and} \quad d_i^- = \sum_{j=1}^{n} d(v_{ij}, p^-) \tag{16}$$
where d(·,·) is the distance between two fuzzy numbers defined in Equation (11). Following [57], once both di+ and di− have been computed for each alternative, a closeness coefficient (CCi) is defined to determine the ranking order of all alternatives. This coefficient is:
$$CC_i = \frac{d_i^-}{d_i^+ + d_i^-} \tag{17}$$

An alternative Ai is closer to the fuzzy positive-ideal solution and farther from the fuzzy negative-ideal solution as CCi approaches 1. Thus, given a region l, the chosen class i is the one with the maximum CCi.
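The whole FMCDM ranking can be sketched as a small fuzzy-TOPSIS routine; the array layout (`tfns` of shape (m, n, 3), holding one ordered triangular fuzzy number per alternative and criterion) is an assumption of this sketch:

```python
import numpy as np

def fmcdm_closeness(tfns, weights):
    """Closeness coefficients (Equation (17)) for m alternatives rated on
    n criteria; tfns: (m, n, 3) with a1 <= a2 <= a3, weights: (n,)."""
    def dist(u, z):
        # Equation (11): distance between triangular fuzzy numbers
        return np.sqrt(((u - z) ** 2).sum(axis=-1) / 3.0)
    a3_star = tfns[..., 2].max(axis=0)           # Equation (14) normaliser
    r = tfns / a3_star[None, :, None]
    v = r * weights[None, :, None]               # Equation (15)
    p_pos, p_neg = np.ones(3), np.zeros(3)       # fuzzy ideal / anti-ideal
    d_pos = dist(v, p_pos).sum(axis=1)           # Equation (16)
    d_neg = dist(v, p_neg).sum(axis=1)
    return d_neg / (d_pos + d_neg)               # Equation (17)
```

The region is then assigned to the alternative (class) with the largest closeness coefficient.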

2.4. Acquisition System and Images

The sixty-six images used in this work were taken in maize crops located in Madrid (Spain) on different days and therefore under varying lighting conditions. A D70 camera (Nikon, Tokyo, Japan) equipped with an 18–70 mm AF-S DX Nikon lens was used to capture the images. Image collection was performed by placing the camera on a tripod at approximately 1.5 m height, pointing vertically downward (Figure 6).

The images, with dimensions of 1700 × 1696 pixels and a resolution of 72 × 72 dpi, were acquired over the inter-row area, each image covering 0.25 m2 (0.5 m × 0.5 m). The images were taken under natural lighting. Table 1 summarises the main characteristics of the camera sensor, the images taken and the camera settings used during the acquisition process.

Multi-zone metering was selected, whereby the camera sets the exposure automatically to suit the scene by dividing the frame into zones, taking separate readings from each zone, and then estimating which parts of the scene are important in order to choose the exposure accordingly; this procedure is considered good for landscapes. Furthermore, the camera was left to automatically control contrast, saturation, sharpening and white balance as conditions changed.

3. Results

Regarding the scenes shown in the images used in this work, the vegetation always coincided with weeds (i.e., monocots, dicots or a mixture of both) because the images were taken in the inter-row area. From the sixty-six images available, twenty-eight presented a mixture of weeds, nineteen presented only monocots and nineteen only dicots. A high level of infestation was observed in 14% of the images (Figure 7).

In this work, fifty-six images were selected to represent a wide range of situations. After applying Step 1 (vegetation cover segmentation) and Step 2 (labelling of disconnected regions) to the selected set of images, as described in Sections 2.1 and 2.2, respectively, four hundred different regions were extracted and manually analysed. In general, the number of regions extracted per image ranged from five to twenty; in cases of heavy infestation, fewer than five regions could be extracted.

Figure 8 displays, as an example, the regions extracted by the approach in Steps 1 and 2; specifically, the original image in Figure 4a is represented in Figure 8a. Each region is assigned a unique label, represented by a colour scale for visualisation purposes. Figure 9 shows different regions belonging to the weed classes, where the structure of the regions plays a key role in the proposed discrimination process.

The tests of the proposed decision-making strategies (CFI, SFI, DES and FMCDM) were carried out with the fifty-six images, containing four hundred different regions belonging to monocots and dicots. Sixteen of the images, containing one hundred and fourteen regions, were used for computing the relevance of each attribute for CFI and SFI based on Equation (4), the mean vectors for DES, as explained in Section 2.3.3, and the weights for FMCDM, as described in Equation (13).

At this point, the information of class membership provided by the expert criterion was available. Thus, the correct class for each region in an image was known according to the expert knowledge, and this information was used to compute the percentage of error of the proposed approach. The averaged percentages of error, p1, …, p13, were p1 = 18 (ϕ1), p2 = 20 (ϕ2), p3 = 30 (ϕ3), p4 = 28 (ϕ4), p5 = 24 (ϕ5), p6 = 23 (ϕ6), p7 = 21 (ϕ7), p8 = 27.5 (d1), p9 = 27.5 (d2), p10 = 27.5 (d3), p11 = 15 (d4), p12 = 40 (d5) and p13 = 40 (d6). Based on Equation (4), the fuzzy values g1, …, g13 were obtained. From Equations (4) and (13), wi = gi, i = 1, …, n. Finally, considering the true and false matches under the expert knowledge, the mean vectors v̅1 and v̅2, were obtained and normalized from 0 to 1.

The best individual results among the thirteen shape descriptors were obtained with the first, second and seventh Hu moments and with the geometric shape descriptor major axis length. The worst attributes in terms of error percentage were eccentricity and area; these attributes do not contribute in any way (positively or negatively) to the final decision.

In a second stage, for each of the 286 regions obtained from the remaining forty images used for testing, the proposed decision-making strategies (CFI, SFI, DES and FMCDM, as described in Sections 2.3.1–2.3.4) were applied, and the success for each region, as well as the average hit rate, was computed.

Based on the best individual results, the proposed four-stage approach was applied again with only these four shape descriptors (ϕ1, ϕ2, ϕ7 and d4, the major axis length). This decision was made because some attributes do not contribute to the final decision. The proposed decision-making strategies were applied in the same way as described in Section 2.3, but in this case taking into account four shape descriptors. The results show that the strategies based on combining only the best attributes improve the accuracy by six percentage points on average. The reason is simply that some shape descriptors do not properly characterise the regions belonging to the two weed species under study and therefore do not contribute to taking the right decision.

Table 2 displays the averaged classification accuracy and standard deviations obtained with the four decision-making strategies when all seven Hu moments and six geometric shape descriptors take part in the final decision (first column). Based on the individual results explained above, the second column shows the results obtained when only the best shape descriptors (ϕ1, ϕ2, ϕ7 and d4) take part in the final decision. Finally, for comparative purposes, SVM was tested as described in [51,52]. The Gaussian Radial Basis Function (RBF) kernel was used for both training and testing, since RBF outperformed the other two kernels tested, polynomial and sigmoid [51]. These results are also presented in Table 2.

The combined strategy showed the best results for FMCDM. Similar results were observed for CFI and SFI in terms of accuracy and low standard deviation. Although SVM has good generalisation performance and fast decision computation once trained, it has some drawbacks, e.g., the selection of the kernel function parameters, the risk of over-fitting when optimising the parameters during model selection, and limited robustness against illumination variability. Taking into account the final requirements of our problem, as described in Section 1, the results are considered good enough to solve the decision problem. The proposed strategy was implemented in Matlab. The processing time with the best proposal is less than 0.5 s, which is enough to achieve real-time processing, even though Matlab is not generally used for real-time analysis. The image acquisition frequency of around 6 fps is sufficient because our proposal guarantees overlap between images at the treatment speed (around 1.6 m/s).

4. Conclusions

This paper proposes a strategy for discriminating between monocot and dicot weeds. The method, which has proven effective and simple, is based on colour segmentation, morphological operations and well-known shape descriptors and classifiers, common operations in image processing.

The shape descriptors and the classifier are the two most important factors affecting the performance of the proposed approach. The Hu moments have been shown to be relevant in shape recognition processes because they are invariant to translation, rotation and scaling. Accordingly, for each region in an image, the seven Hu moments are obtained to characterise the region in order to discriminate between monocot and dicot weeds. With the same aim, six geometric shape descriptors (perimeter, diameter, minor axis length, major axis length, eccentricity and area) were proposed. Under the FMCDM method, the values of the best descriptors are combined and a single class is chosen for each region. This strategy outperforms the SVM, CFI, SFI and DES decision-making methods.

The proposed combined strategy works properly when the weeds are at an early stage of growth, which coincides with the right timing for herbicide application. If the crop is further developed, the weeds will most likely overlap and the segmentation process will become difficult, mainly due to occlusions leading to incorrect differentiation of the weed shapes. Nevertheless, the proposed approach provides a useful methodology for discriminating seedlings of monocots and dicots in real field situations, which consequently display a mixture of both types.

Although the results can be considered satisfactory, better results could be obtained by improving the segmentation stage, where a greenness index is calculated to distinguish vegetation from non-vegetation, and by selecting descriptors able to successfully characterise regions in order to discriminate between weed species. The four decision-making strategies proposed cope with the variability of lighting conditions, as was observed in previous works [48–50]. However, an improvement in accuracy is desirable; other classifiers, or combinations of classifiers, may be applied to automate the classification [46,47], using knowledge induced from the shape descriptors.

As future work, we propose to improve the processing time of the system. By taking advantage of the overlap between images, it would also be possible to obtain better results during the segmentation stage. As mentioned in Section 1, the final aim is to integrate an RGB camera as part of the sensor systems on board an autonomous tractor in order to achieve weed classification in real time.

The proposed combined approach can be extrapolated to any situation where monocots and dicots are present, e.g., to discriminate between maize (a monocot crop) and dicot weeds. Moreover, it can be used on images obtained by low-altitude UAVs. In this context, site-specific weed management could significantly reduce herbicide use, with undoubted benefits for the environment. In addition, efficiency is higher if selective treatment is performed for each type of infestation instead of using a broadcast herbicide. In summary, this proposal improves a capability that is essential for the future of site-specific weed management.

Acknowledgments

This research was partly financed by the Spanish Ministry of Economy and Competitiveness (project AGL2011-30442-C02-02) and by the 7th Framework Programme of the European Union under Grant Agreement CP-IP245986-2 (RHEA project). The research of Herrera was supported by the JAE-Doc Programme, financed by the Spanish National Research Council (CSIC) and the European Social Fund (ESF). Thanks to the anonymous referees for their valuable comments and suggestions.

Author Contributions

The work presented here was carried out in collaboration between all authors. P. Javier Herrera, José Dorado and Ángela Ribeiro designed the study. The original idea of characterizing weed by a set of shape descriptors was proposed by P. Javier Herrera, who also carried out the programming. José Dorado established the experimental field and performed the field sampling, providing agronomic knowledge. Ángela Ribeiro directed the research, collaborating in testing and discussion of results. The manuscript was mainly drafted by P. Javier Herrera and revised and corrected by all co-authors. All authors have read and approved the final manuscript.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Tian, L.; Reid, J.F.; Hummel, J.W. Development of a precision sprayer for site-specific weed management. Trans. Am. Soc. Agric. Eng. 1999, 42, 893–900. [Google Scholar]
  2. Timmermann, C.; Gerhards, R.; Kühbauch, W. The economic impact of site-specific weed control. Precis. Agric. 2003, 4, 249–260. [Google Scholar]
  3. Gerhards, R.; Oebel, H. Practical experiences with a system for site-specific weed control in arable crops using real-time image analysis and GPS-controlled patch spraying. Weed Res. 2006, 46, 185–193. [Google Scholar]
  4. Nordmeyer, H. Patchy weed distribution and site-specific weed control in winter cereals. Precis. Agric. 2006, 7, 219–231. [Google Scholar]
  5. Marshall, E.J.P. Field-scale estimates of grass weed populations in arable land. Weed Res. 1988, 28, 191–198. [Google Scholar]
  6. Johnson, G.A.; Mortensen, D.A.; Martin, A.R. A simulation of herbicide use based on weed spatial distribution. Weed Res. 1995, 35, 197–205. [Google Scholar]
  7. Gerhards, R. Precision Weed Management. In Precision Agriculture for Sustainability and Environmental Protection; Oliver, M., Bishop, T., Marchant, B., Eds.; Routledge (Verlag): New York, NY, USA, 2013; Chapter 9; pp. 158–171. [Google Scholar]
  8. De Castro, A.I.; Jurado-Expósito, M.; Peña-Barragán, J.M.; López-Granados, F. Airborne multi-spectral imagery for mapping cruciferous weeds in cereal and legume crops. Precis. Agric. 2012, 13, 302–321. [Google Scholar]
  9. López-Granados, F. Weed detection for site-specific weed management: Mapping and real-time approaches. Weed Res. 2011, 51, 1–11. [Google Scholar]
  10. López-Granados, F.; Jurado-Expósito, M.; Atenciano, S.; García-Ferrer, A.; Sánchez de la Orden, M.; García-Torres, L. Spatial variability of agricultural soils in southern Spain. Plant Soil 2002, 246, 97–105. [Google Scholar]
  11. Onyango, C.M.; Marchant, J.A. Segmentation of row crop plants from weeds using colour and morphology. Comput. Electron. Agric. 2003, 39, 141–155. [Google Scholar]
  12. Ribeiro, A.; Fernández-Quintanilla, C.; Barroso, J.; García-Alegre, M.C. Development of an image analysis system for estimation of weed. Proceedings of the 5th European Conference on Precision Agriculture (5ECPA), Uppsala, Sweden, 9–12 June 2005; Stafford, J.V., Ed.; pp. 169–174.
  13. Tellaeche, A.; Burgos-Artizzu, X.; Pajares, G.; Ribeiro, A.; Fernández-Quintanilla, C. A new vision-based approach to differential spraying in precision agriculture. Comput. Electron. Agric. 2008, 60, 144–155. [Google Scholar]
  14. Tellaeche, A.; Burgos-Artizzu, X.P.; Pajares, G.; Ribeiro, A. A vision-based method for weeds identification through the Bayesian decision theory. Pattern Recognit. 2008, 41, 521–530. [Google Scholar]
  15. Burgos-Artizzu, X.P.; Ribeiro, A.; Tellaeche, A.; Pajares, G.; Fernández-Quintanilla, C. Improving weed pressure assessment using digital images from an experience-based reasoning approach. Comput. Electron. Agric. 2009, 65, 176–185. [Google Scholar]
  16. Tian, L.F.; Slaughter, D.C. Environmentally adaptive segmentation algorithm for outdoor image segmentation. Comput. Electron. Agric. 1998, 21, 153–168. [Google Scholar]
  17. Brown, R.B.; Noble, S.D. Site-specific weed management: Sensing requirements—What do we need to see? Weed Sci. 2005, 53, 252–258. [Google Scholar]
  18. Guijarro, M.; Pajares, G.; Riomoros, I.; Herrera, P.J.; Burgos-Artizzu, X.P.; Ribeiro, A. Automatic segmentation of relevant textures in agricultural images. Comput. Electron. Agric. 2011, 75, 75–83. [Google Scholar]
  19. Lee, W.S.; Slaughter, D.C.; Giles, D.K. Robotic weed control system for tomatoes. Precis. Agric. 1999, 1, 95–113. [Google Scholar]
  20. Meyer, G.E.; Mehta, T.; Kocher, M.F.; Mortensen, D.A.; Samal, A. Textural imaging and discriminant analysis for distinguishing weeds for spot spraying. Trans. ASABE 1998, 41, 1189–1197. [Google Scholar]
  21. Ishak, A.J.; Hussain, A.; Mustafa, M.M. Weed image classification using Gabor wavelet and gradient field distribution. Comput. Electron. Agric. 2009, 66, 53–61. [Google Scholar]
  22. Hemming, J.; Rath, T. Precision agriculture: Computer-vision-based weed identification under field conditions using controlled lighting. J. Agric. Eng. Res. 2001, 78, 233–243. [Google Scholar]
  23. Burgos-Artizzu, X.P.; Ribeiro, A.; Guijarro, M.; Pajares, G. Real-time image processing for crop/weed discrimination in maize fields. Comput. Electron. Agric. 2011, 75, 337–346. [Google Scholar]
  24. Giles, D.K.; Downey, D.; Slaughter, D.C.; Brevis-Acuna, J.C.; Lanini, W.T. Herbicide microdosing for weed control in field grown processing tomatoes. Appl. Eng. Agric. 2004, 20, 735–743. [Google Scholar]
  25. Sogaard, H.T.; Lund, I. Application accuracy of a machine vision-controlled robotic microdosing system. Biosyst. Eng. 2007, 96, 315–322. [Google Scholar]
  26. Jeon, H.Y.; Tian, L.F. Direct application end effector for a precise weed control robot. Biosyst. Eng. 2009, 104, 458–464. [Google Scholar]
  27. Christensen, S.; Sogaard, H.T.; Kudsk, P.; Norremark, M.; Lund, I.; Nadimi, E.S.; Jorgensen, R. Site-specific weed control technologies. Weed Res. 2009, 49, 233–241. [Google Scholar]
  28. Young, S.L. True integrated weed management. Weed Res. 2012, 52, 107–111. [Google Scholar]
  29. Sainz-Costa, N.; Ribeiro, A.; Burgos-Artizzu, X.P.; Guijarro, M.; Pajares, G. Mapping Wide Row Crops with Video Sequences Acquired from a Tractor Moving at Treatment Speed. Sensors 2011, 11, 7095–7109. [Google Scholar]
  30. Burgos-Artizzu, X.P.; Ribeiro, A.; Tellaeche, A.; Pajares, G.; Fernández-Quintanilla, C. Analysis of natural images processing for the extraction of agricultural elements. Image Vis. Comput. 2010, 28, 138–149. [Google Scholar]
  31. Tang, L.; Tian, L.; Steward, B.L. Classification of broadleaf and grass weeds using Gabor wavelets and an Artificial Neural Network. Trans. ASAE 2003, 46, 1247–1254. [Google Scholar]
  32. Wiles, L.J. Beyond patch spraying: Site-specific weed management with several herbicides. Precis. Agric. 2009, 10, 277–290. [Google Scholar]
  33. Andújar, D.; Escolà, A.; Dorado, J.; Fernández-Quintanilla, C. Weed discrimination using ultrasonic sensors. Weed Res. 2011, 51, 543–547. [Google Scholar]
  34. Andújar, D.; Escolà, A.; Rosell-Polo, J.R.; Fernández-Quintanilla, C.; Dorado, J. Potential of a terrestrial LiDAR-based system to characterise weed vegetation in maize crops. Comput. Electron. Agric. 2013, 92, 11–15. [Google Scholar]
  35. Burks, T.F.; Shearer, S.A.; Heath, J.R.; Donohue, K.D. Evaluation of Neural-network Classifiers for Weed Species Discrimination. Biosyst. Eng. 2005, 91, 293–304. [Google Scholar]
  36. Panneton, B.; Guillaume, S.; Samson, G.; Roger, J. Discrimination of Corn from Monocotyledonous Weeds with Ultraviolet (UV) Induced Fluorescence. Appl. Spectrosc. 2011, 65, 10–19. [Google Scholar]
  37. Camargo-Neto, J.; Meyer, G.E. Crop species identification using machine vision of computer extracted individual leaves. Proceedings of the Optical Sensors and Sensing Systems for Natural Resources and Food Safety and Quality, Bellingham, WA, USA, 8 November 2005; Chen, Y.R., Meyer, G.E., Tu, S., Eds.; SPIE 5996. pp. 64–74.
  38. Gonzalez, R.C.; Woods, R.E. Digital Image Processing; Prentice Hall: Upper Saddle River, NJ, USA, 2008. [Google Scholar]
  39. Zhang, D.S.; Lu, G. Review of shape representation and description techniques. Pattern Recognit. 2004, 37, 1–19. [Google Scholar]
  40. Mercimek, M.; Gulez, K.; Mumcu, T.K. Real object recognition using moment invariants. Sadhana 2005, 30, 765–775. [Google Scholar]
  41. Flusser, J.; Suk, T.; Zitová, B. Moments and Moment Invariants in Pattern Recognition; John Wiley & Sons, Ltd: Chichester, UK, 2009. [Google Scholar]
  42. Hu, M.K. Visual pattern recognition by moment invariants. IRE Trans. Inf. Theory 1962, 8, 179–187. [Google Scholar]
  43. Weis, M.; Gerhards, R. Feature extraction for the identification of weed species in digital images for the purpose of site-specific weed control. Proceedings of the 6th European Conference on Precision Agriculture (6ECPA), Skiathos, Greece, 3–6 June 2007; Stafford, J.V., Ed.; pp. 537–543.
  44. Giselsson, T.M.; Midtiby, H.S.; Jørgensen, R.N. Seedling Discrimination with Shape Features Derived from a Distance Transform. Sensors 2013, 13, 5585–5602. [Google Scholar]
  45. Herrera, P.J.; Dorado, J.; Ribeiro, A. A new combined strategy for discrimination between types of weed. In ROBOT 2013 Advances in Robotics. Advances in Intelligent Systems and Computing; Armada, M.A., Sanfeliu, A., Ferre, M., Eds.; Springer International Publishing: Switzerland, 2014; AISC 252; pp. 469–480. [Google Scholar]
  46. Kuncheva, L. Combining Pattern Classifiers: Methods and Algorithms; Wiley: Hoboken, NJ, USA, 2004. [Google Scholar]
  47. Guijarro, M.; Pajares, G. On combining classifiers through a fuzzy multicriteria decision making approach: Applied to natural textured images. Expert Syst. Appl. 2009, 36, 7262–7269. [Google Scholar]
  48. Herrera, P.J.; Pajares, G.; Guijarro, M.; Ruz, J.J.; Cruz, J.M. Choquet Fuzzy Integral applied to stereovision matching for fish-eye lenses in forest analysis. In Advances in Computational Intelligence; Yu, W., Sanchez, E.N., Eds.; Springer-Verlag: Berlin, Germany, 2009; AISC 61; pp. 179–187. [Google Scholar]
  49. Herrera, P.J.; Pajares, G.; Guijarro, M.; Ruz, J.J.; Cruz, J.M. Combination of attributes in stereovision matching for fish-eye lenses in forest analysis. In Advanced Concepts for Intelligent Vision Systems; Blanc-Talon, J., Philips, W., Popescu, D., Scheunders, P., Eds.; Springer-Verlag: Berlin, Germany, 2009; LNCS 5807; pp. 277–287. [Google Scholar]
  50. Herrera, P.J.; Pajares, G.; Guijarro, M.; Ruz, J.J.; Cruz, J.M. Fuzzy multi-criteria decision making in stereovision matching for fish-eye lenses in forest analysis. In Intelligent Data Engineering and Automated Learning; Yin, H., Corchado, E., Eds.; Springer-Verlag: Berlin, Germany, 2009; LNCS 5788; pp. 325–332. [Google Scholar]
  51. Tellaeche, A.; Pajares, G.; Burgos-Artizzu, X.P.; Ribeiro, A. A computer vision approach for weeds identification through support vector machines. Appl. Soft Comput. 2011, 11, 908–915. [Google Scholar]
  52. Ahmed, F.; Al-Mamun, H.A.; Hossain-Bari, A.S.M.; Hossain, E.; Kwanb, P. Classification of crops and weeds from digital images: A support vector machine approach. Crop Prot. 2012, 40, 98–104. [Google Scholar]
  53. Pereira, L.A.M.; Nakamura, R.Y.M.; de Souza, G.F.S.; Martins, D.; Papa, J.P. Aquatic weed automatic classification using machine learning techniques. Comput. Electron. Agric. 2012, 87, 56–63. [Google Scholar]
  54. Haralick, R.M.; Shapiro, L.G. Computer and Robot Vision; Addison-Wesley: Reading, MA, USA, 1992; Volumes I–II. [Google Scholar]
  55. Dempster, A.P. A generalization of Bayesian inference. J. R. Stat. Soc. 1968, 30, 205–247. [Google Scholar]
  56. Shafer, G. A Mathematical Theory of Evidence; Princeton University Press: Princeton, NJ, USA, 1976. [Google Scholar]
  57. Chen, C.T. Extensions of the TOPSIS for group decision-making under fuzzy environment. Fuzzy Sets Syst. 2000, 114, 1–9. [Google Scholar]
  58. Wang, W.; Fenton, N. Risk and confidence analysis for fuzzy multi criteria decision making. Knowl. Based Syst. 2006, 19, 430–437. [Google Scholar]
Figure 1. An RGB camera on board an autonomous tractor acquiring geo-referenced images in real conditions (lighting and tractor speed) in a maize crop.
Figure 2. Images (a,b) show monocots (long and slender leaf), whereas images (c,d) present dicots (broadleaf and short). Images (e,f) display a mixture of both weed species.
Figure 3. The proposed approach consists of four stages.
Figure 4. (a) Mixed monocots and dicots; (b) Segmentation of the vegetation cover of image (a).
Figure 5. Two types of connectivity: (a) 4-connectivity; (b) 8-connectivity.
Figure 6. Acquisition process: camera on a tripod at approximately 1.5 m height pointing vertically downward.
Figure 7. Images showing high levels of infestation of (a) monocot weeds and (b) a mixture of monocots and dicots.
Figure 8. Labelling regions. Each isolated region is identified by a unique colour.
Figure 9. (a–d) Four regions belonging to monocots; (e–h) four regions belonging to dicots.
Table 1. Several important characteristics of the camera sensor, the images taken and the setting of the camera used in the acquisition process.

Element | Property | Value
Sensor | Sensor type | CCD RGB, 23.7 mm × 15.6 mm
Sensor | Total pixels | 6.24 million
Sensor | Crop factor (35 mm) | ≅1.5
Image | Bit depth | 24
Image | Resolution unit | 2
Image | Colour representation | sRGB
Image | Compressed bits/pixel | 4
Camera | Relative aperture | f/9
Camera | Exposure time | 1/320 s
Camera | Focal length | 29 mm
Camera | Max. aperture | 3.9
Camera | Metering mode | Multi-zone
Table 2. Averaged classification accuracy and standard deviations obtained for the SVM, CFI, SFI, DES and FMCDM decision-making strategies, using all thirteen shape descriptors and only the best four.

Decision-Making Strategy | All of Them (13): % | σ | The Best Ones (4): % | σ
SVM | 82.8 | 2.1 | 87.1 | 2.0
CFI | 84.2 | 1.7 | 90.2 | 1.7
SFI | 84.1 | 1.5 | 90.1 | 1.6
DES | 81.9 | 2.3 | 89.1 | 2.2
FMCDM | 85.8 | 1.8 | 92.9 | 1.7
