Article

Spatial Resolution as a Factor for Efficient UAV-Based Weed Mapping—A Soybean Field Case Study

Institute of Computer Science, University of Osnabrueck, Wachsbleiche 27, D-49090 Osnabrueck, Germany
* Author to whom correspondence should be addressed.
Remote Sens. 2024, 16(10), 1778; https://doi.org/10.3390/rs16101778
Submission received: 27 March 2024 / Revised: 3 May 2024 / Accepted: 13 May 2024 / Published: 17 May 2024
(This article belongs to the Special Issue UAS Technology and Applications in Precision Agriculture)

Abstract

The influence of spatial resolution on classification accuracy strongly depends on the research object. With regard to unmanned aerial vehicle (UAV)-based weed mapping, contradictory results on the influence of spatial resolution have been attained so far. Thus, this study evaluates the effect of spatial resolution on the classification accuracy of weeds in a soybean field located in Belm, Lower Saxony, Germany. RGB imagery of four spatial resolutions (0.27, 0.55, 1.10, and 2.19 cm ground sampling distance) corresponding to flight altitudes of 10, 20, 40, and 80 m was assessed. Multinomial logistic regression was used to classify the study area, using both pixel- and object-based approaches. Additionally, the flight and processing times were monitored. For the accuracy assessment, the producer’s, user’s, and overall accuracies as well as the F1 scores were computed and analyzed for statistical significance. Furthermore, McNemar’s test was conducted to ascertain whether statistically significant differences existed between the classifications. A linear relationship between resolution and accuracy was found, with accuracy diminishing as the resolution decreased. Pixel-based classification outperformed object-based classification across all the resolutions examined, with statistical significance (p < 0.05) for 10 and 20 m. The overall accuracies of the pixel-based approach ranged from 80 to 93 percent, while those of the object-based approach ranged from 75 to 87 percent. The most substantial drops in weed-detection accuracy with regard to altitude occurred between 20 and 40 m for the pixel-based approach and between 10 and 20 m for the object-based approach. While the decline in accuracy was roughly linear as the flight altitude increased, the decrease in the total time required was exponential, providing guidance for the planning of future UAV-based weed-mapping missions.

Graphical Abstract

1. Introduction

In the realm of agriculture, the proliferation of unwanted weeds constitutes a well-acknowledged problem [1,2], even if the potential use and valorization of some weeds is increasingly under investigation [3]. Weeds flourishing alongside crop plants engender interspecific competition for vital resources such as water, nutrients, space, and sunlight [4]. As a result, crop yields and, consequently, the financial viability of the farm can become significantly reduced [2]. To combat this issue, various weed management strategies have been devised, including both mechanical and chemical methods. While mechanical weed control is often viewed as being more eco-friendly, it is generally less efficacious than chemical control [5,6]. However, the latter is often criticized for its detrimental impact on the environment, including harm to soil and water organisms as well as to human health [2]. In order to mitigate the disadvantages associated with extensive weed control, site-specific weed management (SSWM) was developed in the 1990s, based on the understanding that weeds usually do not occur uniformly, but rather in locally varying intensities [7]. Weed control strategies that align with this spatial variability can be implemented to attenuate ecological damage and cut costs [5,8,9].
For the successful implementation of SSWM, it is necessary to obtain knowledge on the spatial distribution of weeds prior to the initiation of control measures. In this regard, the utilization of unmanned aerial vehicles (UAVs) provides an expedient solution due to its ability to generate high-resolution images in a relatively time-efficient manner [10]. These vehicles can be equipped with various camera systems, ranging from RGB to hyperspectral capturing systems. Due to their low flight altitude, spatial resolutions of well below 1 cm can be achieved. The high-resolution images procured through UAV flights enable various approaches to weed detection. For instance, weeds can be detected on a pixel-by-pixel basis, with each pixel being classified as either a weed or a non-weed. Additionally, an object-based image analysis (OBIA) has gained popularity for weed mapping [11,12]. In this approach, the image is first segmented into semantic units, and based on these, features are extracted that provide information on the spectral and textural properties of the segments. By classifying entire segments instead of single pixels, the classification result is typically much more homogeneous and often of higher accuracy [11,12]. In addition to these classic image segmentation methods, recently evolved deep learning approaches offer versatile opportunities for weed detection that extend beyond the scope of this study [13,14]: Object detection techniques have proven efficacious when the primary objective is not the precise delineation of weed contours, but rather the recognition of their mere presence. Instance segmentation complements conventional image segmentation, as conducted in this study, by additionally recognizing individual plants. However, this added capability comes at the cost of higher hardware, time, and training demands [14,15], rendering it impractical for large-scale weed mapping in extensive agricultural fields, as aimed for in this study.
Effective weed detection requires not only a methodology, but also a data basis aligned to the specific needs of a project. While increasing the spectral resolution can lead to the more accurate differentiation of species, studies examining the effect of spatial resolution have shown varying results, with some finding better results at comparatively higher resolutions [16,17] and others finding better results at comparatively lower resolutions [18,19,20]. As the aforementioned studies demonstrate, the relationship between spatial resolution and accuracy is not straightforward and depends on the object under investigation. However, to the best of our knowledge, only three articles exist to date that have systematically investigated spatial resolution in weed-detection scenarios, and they obtained contradictory results. Peña et al. [21] compared different resolutions for weed detection in sunflower fields using an object-based approach, and found that the detection accuracy consistently decreased with a diminishing resolution. López-Granados et al. [22] found a similar trend when detecting Johnsongrass in maize, observing the best results at the highest resolution. Sanders et al. [23] investigated the effect of different growth stages, plant densities, and flight altitudes on the spectral separability of soybeans and Palmer amaranth. The authors found complex relationships between these variables, but, in contrast to Peña et al. [21] and López-Granados et al. [22], they stated that the resolution had no impact on the spectral separability or the overall classification accuracy. The small number of studies, accompanied by conflicting results, leaves a research gap of high relevance for UAV-based weed mapping. Specifically for soybeans, the question arises as to what impact the spatial resolution has on the accuracy of weed mapping. The significance of this question is not the least reflected in practice, as spatial resolution is directly connected to the flight altitude and flight times, affecting the requisite work time, battery charging plans, and ultimately, the agronomic productivity. Against this backdrop, relating attainable accuracies with time requirements becomes vital to optimize future UAV missions for efficiency.
In light of these considerations, this case study aims to further investigate the effect of spatial resolution on the accuracy of weed detection in a soybean field. For this purpose, resolutions according to the altitudes of 10, 20, 40, and 80 m were examined. Multinomial logistic regression (MLR) was performed to classify the resulting RGB images using both pixel- and object-based approaches. This study explored the extent to which the accuracies of pixel- and object-based classification differ from one another and at different resolutions. Additionally, this study evaluated the so-far-disregarded aspect of how flight and processing times vary for different resolutions, and how they relate to the achieved accuracies.

2. Materials and Methods

2.1. Study Area

The soybean field investigated for this study was located in Belm, Lower Saxony, Germany (LAT: 52.3217, LON: 8.1584, reference system: WGS84) and was characterized by a generally high occurrence of weeds. However, due to the partial use of herbicides in the previous year, the weed occurrence varied greatly within the field. While certain rows exhibited a complete absence of weeds, other rows were heavily infested. Two sub-areas were selected for the study, as shown in Figure 1: one for collecting training data (768.6 m²) and the other for validating the classifications (981.6 m²), with an effort to ensure a comparable soybean and weed distribution for both areas while maintaining spatial independence as much as possible. The soybean cultivar Abelina was present in both areas (BBCH stage 14), and the most prevalent weed by far in the study area was white goosefoot (Chenopodium album L.). Moreover, there was a sparse occurrence of weeds in the form of black nightshade (Solanum nigrum L.), chickweed (Stellaria media (L.) Vill.), and thistles (Sonchus arvensis L. and Cirsium arvense (L.) Scop.), as well as individual cockspur (Echinochloa crus-galli (L.) P.Beauv.) and grass shoots.

2.2. Image Acquisition and Preprocessing

RGB imagery of the study areas was taken on June 8, 2022, using a DJI Phantom 4 RTK drone connected to a D-RTK 2 mobile station. Images were captured at altitudes of 10, 20, 40, and 80 m, resulting in ground sampling distances (GSD) of 0.27, 0.55, 1.10, and 2.19 cm. The flights were conducted around noon under partly cloudy weather conditions, with 80 percent longitudinal and 80 percent lateral overlap. The images were processed in Agisoft Metashape (image alignment with tie-point generation, derivation of a digital elevation model, and generation of an orthomosaic). For the assessment of time efficiency, the flight and preprocessing times were recorded (see Section 2.7).
In order to eliminate potential distortions caused by positional inaccuracies when comparing the classification results (see Section 2.6), classifications were performed using resampled variants of the 10 m image rather than the original 20, 40, and 80 m images. Even with a high georeferencing accuracy, slight changes in leaf and shadow positions, UAV position, and lighting between flights are inevitable. To exclude these confounding factors from the examination of spatial resolution effects, a resampling approach was preferred. Accordingly, the 10 m image was resampled to 20, 40, and 80 m images by pixel aggregation. In this process, the pixel values of the highest available resolution were averaged to form larger units. Thus, the pixel values of the simulated 20 m flight altitude corresponded to the mean of the four underlying pixels of the 10 m image. The pixel values of the simulated 40 and 80 m images were accordingly based on the mean values of 16 and 64 pixels, respectively, of the original 10 m image. In summary, the resampled 10 m orthomosaic was used for classification, while the flight and preprocessing times of the original imagery were used for the time assessment. An overview of the subsequent procedure, including the training, classification, and accuracy assessment, is provided in Figure 2.
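As a minimal sketch of the pixel aggregation described above, the block averaging can be expressed with a single numpy reshape. The function name and array layout below are illustrative assumptions, not the code used in the study.

```python
import numpy as np

def aggregate_pixels(band: np.ndarray, factor: int) -> np.ndarray:
    """Resample one band by averaging blocks of `factor` x `factor` pixels.

    For the simulated 20, 40, and 80 m images, `factor` is 2, 4, and 8, so each
    output pixel is the mean of 4, 16, or 64 pixels of the original 10 m image.
    """
    h, w = band.shape
    h_c, w_c = h - h % factor, w - w % factor          # crop to a multiple of `factor`
    blocks = band[:h_c, :w_c].reshape(h_c // factor, factor, w_c // factor, factor)
    return blocks.mean(axis=(1, 3))

# Example: simulate the 20 m image from a 10 m RGB orthomosaic stored as an (H, W, 3) array.
# rgb_20m = np.dstack([aggregate_pixels(rgb_10m[..., b], 2) for b in range(3)])
```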

2.3. Generating Training Data

For the supervised classification process delineated in the following sections, 200 sample points were created for each of the three classes: soil, soy, and weeds. These were used to extract training pixels for each resolution level. Care was taken to represent the spectral variation within each class, to avoid points that turned into mixed pixels at lower resolutions, and to avoid points located so close together that two of them would fall into a single segment in the object-based classification.

2.4. Pixel-Based Classification

In pixel-based classification, each pixel is assigned to one of the three classes, soil, soy, or weeds, based on its red, green, and blue reflectance values. To accomplish this, a multinomial logistic regression (MLR) model was employed. MLR is a special form of logistic regression in which the dependent variable is nominally scaled and can have more than two categories. It is also known as polytomous logistic regression, softmax regression, multinomial logit (mlogit), and the maximum entropy (MaxEnt) classifier. The probability $\Pr(k)$ that a pixel belongs to class $k$ is determined by the explanatory variables $x_i$ and their coefficients $\beta_k$ [24]:
$$\Pr(k) = \frac{\exp(x_i \beta_k)}{1 + \sum_{j=1}^{K} \exp(x_i \beta_j)}$$
Newton’s method was used to iteratively optimize the coefficients [25]. While MLR may yield a lower accuracy compared to other classifiers, such as the support vector machine or artificial neural networks [26,27], it has the advantage of not requiring any hyperparameter tuning. Given that the primary focus of this study was to investigate the effect of spatial resolution on accuracy, a hyperparameter-free method such as MLR was preferred, as it eliminates the potential impact of the inevitably suboptimally tuned parameters on the results. MLR was preferred to other hyperparameter-free classification methods, such as the maximum likelihood classification or a linear discriminant analysis, because it does not make any assumptions about the data structure (such as a normal distribution, homoscedasticity, multicollinearity, etc.), making it in theory more suitable for this specific application, since non-normally distributed and highly correlated band values are to be expected for the RGB imagery.
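To illustrate how such an MLR classifier might be set up with scikit-learn (the library named in Section 2.8), a minimal sketch is given below. The stand-in training data, variable names, and the choice of the "newton-cg" solver as the Newton-type optimizer are assumptions for illustration; this is not the authors' original code.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Stand-in training data: X_train holds the R, G, B values of the training pixels,
# y_train the class labels (0 = soil, 1 = soy, 2 = weeds).
rng = np.random.default_rng(0)
X_train = rng.random((600, 3))
y_train = rng.integers(0, 3, 600)

mlr = LogisticRegression(
    multi_class="multinomial",  # polytomous (softmax) formulation
    solver="newton-cg",         # Newton-type optimization of the coefficients
    penalty="none",             # plain MLR without regularization ('none' is valid in scikit-learn 1.0.x)
    max_iter=1000,
)
mlr.fit(X_train, y_train)

def classify_pixels(rgb: np.ndarray) -> np.ndarray:
    """Assign every pixel of an (H, W, 3) orthomosaic to soil, soy, or weeds."""
    return mlr.predict(rgb.reshape(-1, 3)).reshape(rgb.shape[:2])
```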

2.5. Object-Based Classification

In contrast to the pixel-based approach, object-based classification does not classify individual pixels, but segments composed of several pixels that ideally align to objects in the image (here: mainly plant leaves). Several features (spectral, textural, shape, and neighborhood) can subsequently be extracted from the segments. For the segmentation of the underlying images, the SLICO algorithm was chosen, which has been proven effective in remote sensing applications [28,29,30]. SLICO is a zero-parameter variant of the simple linear iterative clustering (SLIC) algorithm. SLIC itself is an adapted k-means clustering method in which pixels are clustered by spectral similarity. In the SLIC method, however, the clusters are not generated globally over the entire image, but only locally within a certain spatial perimeter. Since each pixel does not have to be compared with each cluster center as in k-means, it is much less computationally intensive [31,32]. SLICO differs from SLIC in that no prior assumption about the compactness of the clusters needs to be made and superpixels are formed with a comparatively simple geometry [32].
Upon the completion of segmentation, feature extraction based on statistical metrics was executed. The mean, standard deviation, skewness, and kurtosis of the three spectral bands were extracted for each segment, resulting in a 12-dimensional feature space. Utilizing MLR, the study area was classified based on these 12 features into the classes of soil, soy, and weeds.
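Assuming that scikit-image's slic function with slic_zero=True is used for the SLICO variant (plausible given the libraries listed in Section 2.8), a segmentation and feature-extraction sketch could look as follows. The function name and the way the number of segments is derived from a target average segment size are illustrative assumptions, not the original implementation.

```python
import numpy as np
from scipy.stats import kurtosis, skew
from skimage.segmentation import slic

def segment_and_extract(rgb: np.ndarray, avg_segment_px: int = 15):
    """SLICO segmentation followed by per-segment feature extraction.

    Returns the label image and an (n_segments, 12) feature matrix holding the
    mean, standard deviation, skewness, and kurtosis of the three RGB bands.
    """
    h, w, _ = rgb.shape
    n_segments = max(1, (h * w) // (avg_segment_px ** 2))   # target average segment size
    labels = slic(rgb, n_segments=n_segments, slic_zero=True, start_label=0)

    features = []
    for seg_id in np.unique(labels):
        pixels = rgb[labels == seg_id].astype(float)         # (n_pixels_in_segment, 3)
        features.append(np.concatenate([
            pixels.mean(axis=0),
            pixels.std(axis=0),
            skew(pixels, axis=0),
            kurtosis(pixels, axis=0),
        ]))
    return labels, np.asarray(features)

# The resulting feature matrix can then be classified with the same MLR model as in
# the pixel-based approach, assigning whole segments to soil, soy, or weeds.
```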
Even though SLICO is often referred to as parameter-free, the number of clusters to be generated must still be specified as part of the segmentation process. Since the number of segments (and thus, the average segment size) can have a significant impact on the classification result [32,33], 10-fold cross-validation was conducted on the training data to find the approximately optimal average segment sizes, testing average segment sizes of 5 × 5, 10 × 10, 15 × 15, 20 × 20, and 25 × 25 pixels. Based on the cross-validation results, final segment sizes were chosen for each flight level.

2.6. Accuracy Assessment

To determine the classification accuracies of each flight level/resolution, a dataset of 1000 validation points was created. Although weed detection was the focus of the study, weeds themselves accounted for the smallest proportion of the study area, while the soil accounted for over 50 percent of the area. Simple random sampling would, therefore, have led to a situation in which only a small proportion of the validation points would be allocated to weeds, limiting the certainty regarding the weed-detection accuracy. To mitigate this concern, but still allow for randomly distributed sampling, a staged approach to the distribution of validation points was chosen. To this end, the study area was first divided into soil and vegetation using the green leaf index (GLI), with a threshold value of 0.1 [34,35]:
$$GLI = \frac{2 \cdot \mathrm{Green} - \mathrm{Red} - \mathrm{Blue}}{2 \cdot \mathrm{Green} + \mathrm{Red} + \mathrm{Blue}}$$
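A brief numpy sketch of the GLI computation and the resulting soil/vegetation split (names assumed for illustration, not the original code):

```python
import numpy as np

def gli_vegetation_mask(rgb: np.ndarray, threshold: float = 0.1) -> np.ndarray:
    """Boolean mask that is True for vegetated pixels (GLI > threshold)."""
    r, g, b = (rgb[..., i].astype(float) for i in range(3))
    gli = (2 * g - r - b) / (2 * g + r + b + 1e-12)   # small epsilon guards against division by zero
    return gli > threshold
```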
A total of 200 points were randomly assigned to soil areas (GLI ≤ 0.1), while the remaining 800 points were assigned to vegetated areas (GLI > 0.1). The resulting points were then labeled, yielding a total of 245 soil, 430 soy, and 325 weed points. From these validation points, the overall (OA), producer’s (PA), and user’s accuracies (UA) with 95 percent confidence intervals were calculated for the classification results. Because of the stratified nature of the sampling, the class proportions of the validation points were inconsistent with the true class proportions of the respective image. Therefore, OA, PA, and UA were area-adjusted according to Olofsson et al. [36]. Additionally, to facilitate the comparison of class accuracies, the F1 score for each class was calculated as the harmonic mean of PA and UA.
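The area adjustment of Olofsson et al. [36] amounts to converting the validation counts into estimated area proportions using the mapped class proportions and deriving OA, UA, and PA from those proportions. The sketch below, including the confusion matrix and class proportions in the usage comment, is a hedged reconstruction for illustration rather than the exact implementation used in the study.

```python
import numpy as np

def area_adjusted_accuracies(confusion: np.ndarray, area_proportions: np.ndarray):
    """Error-adjusted OA, UA, and PA following Olofsson et al. [36].

    confusion[i, j]: number of validation points mapped as class i with reference class j.
    area_proportions[i]: proportion of the mapped area covered by class i (sums to 1).
    """
    n_i = confusion.sum(axis=1, keepdims=True)            # validation points per map class
    p = area_proportions[:, None] * confusion / n_i       # estimated area proportions p_ij
    oa = np.trace(p)                                      # overall accuracy
    ua = np.diag(p) / p.sum(axis=1)                       # user's accuracy per class
    pa = np.diag(p) / p.sum(axis=0)                       # producer's accuracy per class
    return oa, ua, pa

# Usage with a purely illustrative confusion matrix (classes ordered soil, soy, weeds):
# conf = np.array([[230, 10, 5], [10, 390, 30], [5, 30, 290]])
# oa, ua, pa = area_adjusted_accuracies(conf, np.array([0.55, 0.30, 0.15]))
```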
To further evaluate the differences between the classification results at different resolutions, McNemar’s test was used. While comparing the confidence intervals of the classification accuracies provides insight into the statistical significance of differences in accuracy, McNemar’s test assesses whether and to what degree the classification results themselves differ (based on related samples) [37,38,39]. In contrast to the standalone accuracy confidence intervals, McNemar’s test compares two results in terms of commonly correctly, commonly incorrectly, and differently classified pixels. Since, with 1000 validation points, a relatively large dataset is available, McNemar’s test was calculated based on a chi-squared distribution, with Edwards’ continuity correction applied [40]. With this test, the significance of the difference between the pixel- and object-based classifications at each altitude, as well as the differences between flight altitudes within each of the two approaches, were assessed.
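A minimal implementation of McNemar's test as described (chi-squared approximation with continuity correction, computed from the discordant pairs of two classification results) might look as follows; the function name and inputs are assumptions for illustration:

```python
import numpy as np
from scipy.stats import chi2

def mcnemar_test(correct_a: np.ndarray, correct_b: np.ndarray):
    """McNemar's chi-squared test with continuity correction [40].

    correct_a, correct_b: boolean arrays over the same validation points,
    True where classification A (respectively B) matches the reference label.
    Only the discordant pairs enter the test statistic.
    """
    b = int(np.sum(correct_a & ~correct_b))   # correct in A only
    c = int(np.sum(~correct_a & correct_b))   # correct in B only
    if b + c == 0:
        return 0.0, 1.0                       # identical results: no evidence of a difference
    stat = (abs(b - c) - 1) ** 2 / (b + c)
    return stat, chi2.sf(stat, df=1)

# Usage with hypothetical prediction arrays and reference labels:
# stat, p = mcnemar_test(pred_pixel == reference, pred_object == reference)
```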

2.7. Time Comparison

In order to compare the time required for each classification, the flight time, the processing time in Agisoft Metashape and, if applicable, the segmentation time were measured. The time for collecting training points was disregarded, as it was constant for all the procedures and subject to the user’s individual speed, making it not reproducible. The time needed for training the classifier and classifying the image was also omitted, as it was negligibly low (well below 1 s on a modern computer). Since the flight and processing times strongly depend on individual circumstances (area covered, hardware) and are thus hard to compare, the times were normalized (the pixel-based classification of the 10 m image corresponds to 100 percent) and are therefore given as relative values.

2.8. Software

The preprocessing of the imagery (aligning single images taken by UAV to an orthomosaic for further analysis) was carried out using Agisoft Metashape 1.5.1. The selection and labeling of training samples were performed in ArcGIS Pro 2.8. The classification procedures were implemented in Python, utilizing the scikit-learn (version 1.0.2) and scikit-image (version 0.19.3) libraries. The computation of segment features for the object-based classification, including the mean, standard deviation, skewness, and kurtosis, was executed using the numpy (version 1.12.6) and scipy (version 1.8.0) libraries. For time measurements, the timeit library integrated in Python 3.10 was used. Processing times in Agisoft Metashape were obtained from the generated reports.

3. Results

3.1. Pixel-Based Classification Results

To facilitate comprehension, the following results are presented with reference to the altitude rather than the spatial resolution. The producer’s, user’s, and overall accuracies, including 95 percent confidence intervals, are presented in Table 1. The overall accuracy was found to be the highest at the lowest altitude of 10 m, at 93 percent, and decreased consistently with increasing altitude. At 20 m, the OA was still high at just under 90 percent. The 95 percent confidence intervals of the 10 and 20 m accuracies overlapped, indicating that the difference in accuracy between the two resolutions was not statistically significant. The OA corresponding to an altitude of 40 m was 83.48 percent, which is approximately 6 percent lower than the accuracy at 20 m and almost 10 percent lower than at 10 m, with confidence intervals showing statistical significance. The highest altitude investigated yielded the worst performance, with an overall accuracy of 79.80 percent. This accuracy was found to be significantly worse than the accuracy at flight altitudes of 10 and 20 m, but not significantly worse than the accuracy at 40 m.
The results of McNemar’s test were mostly consistent with the accuracy findings: the difference between the classifications at 10 and 20 m was not statistically significant (p = 0.6198). In contrast, the 10 and 20 m classifications differed highly significantly from those at the two higher altitudes of 40 and 80 m (p < 0.0001). The 40 m classification also differed from the classification at 80 m with high confidence (p = 0.0002), although, as previously mentioned, the overall accuracies of these two classifications were not significantly different.
The high overall accuracies, particularly at high resolutions, were primarily attributed to the high accuracy in detecting soil, as evidenced in Table 1. The PA of soil was over 90 percent across all flight altitudes, with the UA of soil also surpassing 90 percent for the flight altitudes of 10 and 20 m, with tight confidence intervals throughout. Accordingly, the soy and weed accuracies were usually below the OA. At an altitude of 10 m, approximately 84 percent of the validation pixels were correctly identified as weeds. This value dropped significantly with a decreasing resolution, from 75 percent at 20 m to 60 and 59 percent at 40 and 80 m, respectively. In contrast, the percentage of weed pixels in the classification result that truly corresponded to weeds was consistently higher (87–90 percent for 10–40 m), but significantly reduced to 77 percent at 80 m. Thus, it was observed that the UA was consistently greater than the PA. The same trend can be seen for soy, with the exception of the 10 m classification, where the UA and PA were about the same, at 87 percent.
An illustrative comparison of the accuracies of the individual classes across flight altitudes is provided in Figure 3, depicting the F1 scores of each class as well as the OA. The comparison reveals that the soil accuracy was clearly the highest, followed by that of the soy and weeds. A notable decline in accuracy was discernible within the altitude range of 20 to 40 m, with the weed class exhibiting the most substantial drop of over 10 percent.

3.2. Object-Based Classification Results

Table 2 exhibits the outcomes of the initial cross-validation regarding segment sizes. For 10 and 20 m, the best result was attained with a segment size of 15 × 15 pixels, whereas the best performance for 40 and 80 m was achieved with 10 × 10 and 5 × 5 pixels, respectively.
An exemplary illustration of the effect of the segment sizes on segmentation in relation to spatial resolution is provided in Figure 4. While the segments covered one or two leaves in the 10 m scenario, beginning from 20 m on, several leaves were conflated into a segment. At 40 and especially at 80 m, an increasing number of segments consisting of numerous leaves, partially from multiple plants, can be observed.
The 10 m classification achieved the highest OA of approximately 87 percent, followed by the 20 m classification, with about an 82 percent OA. According to the 95 percent confidence interval, no statistically significant differences in the accuracy between the two altitudes were found. The 40 m classification yielded an accuracy of approximately 79 percent, which was significantly lower than that of the 10 m classification, but not significantly lower than the OA of the 20 m classification. At 80 m, the OA decreased to about 75 percent, which was not significantly lower than that at 40 m, yet was significantly lower than the 10 and 20 m classifications. The PA, UA, and OA, including 95 percent confidence intervals, are presented in Table 3.
However, assessing the classifications with McNemar’s test for dissimilarity yielded a different result. All four flight levels were significantly different from each other, with the difference between the 40 and 80 m classifications being the weakest, yet still significant with 99 percent confidence (p = 0.0087), followed by the difference between the 10 and 20 m classifications (p = 0.0017). All other combinations displayed even higher levels of significance (p < 0.0001). Consequently, there were multiple significantly dissimilar classifications with non-significantly dissimilar accuracies.
Upon closer inspection of the class accuracies in Table 3, the highest PAs were found for the soil, exceeding 90 percent up to 40 m. However, the UAs were notably lower, decreasing from 89 percent at 10 m to 74 percent at 80 m. Furthermore, the PAs for weeds were found to be relatively low, even for low altitudes, with only 64 percent at 10 m and 55 percent at 20 m. On the other hand, the UAs were much higher, with 88 and 84 percent at 10 and 20 m, respectively, but then they dropped drastically to 70 and 72 percent at 40 and 80 m, respectively. The accuracies for soy were higher, with PAs and UAs beyond 80 percent for 10 and 20 m. Here, the PA was higher than the UA, in contrast to the case for 40 and 80 m, where the accuracies decreased significantly. In comparison with the class accuracies over the flight altitudes (Figure 5), the F1 score confirmed the high accuracy for soil, followed by soy.

3.3. Comparison of Pixel- and Object-Based Classification Results

When comparing the overall accuracies of the pixel-based and object-based classifications, it became evident that the former consistently outperformed the latter. The difference in accuracy was statistically significant for the flight altitudes of 10 and 20 m. At altitudes of 40 and 80 m, the disparity remained at approximately 5 percentage points, but it was not statistically significant.
McNemar’s test for differences between the pixel-based and object-based classifications yielded significant results for each flight altitude (p < 0.001 for 10 and 40 m; p < 0.0001 for 20 m). For 80 m, however, the p-value was the highest (p = 0.0129), so significance was only established at the 95 percent confidence level, but not at the 99 percent level. Distinctions between the pixel-based and object-based methods were, thus, not only reflected in their respective accuracies, but also in the specific classifications of the validation pixels. Figure 6 illustrates these distinctions graphically. The soy rows were clearly visible in both approaches. In general, object-based classification provided a much more homogeneous, but also coarser, picture.
The soil and weeds performed worse in the object-based classification, while the accuracies for soy were relatively similar, as measured by the F1 score. The discrepancy was particularly pronounced for the weeds: at 10 and 20 m, the F1 scores of the two methods differed by more than 10 percentage points. For 40 and 80 m, the accuracies converged. Nonetheless, the F1 score for the weed class in the pixel-based classification at 80 m surpassed that of the object-based classification at 20 m. Furthermore, both methods displayed a marked decline in accuracy across flight altitudes. The drop in the pixel-based approach was particularly large between 20 and 40 m, with a loss exceeding 10 percentage points, while the most substantial decrease in the object-based approach, also surpassing 10 percentage points, occurred between the altitudes of 10 and 20 m. These observations are illustrated in Figure 7.

3.4. Time Requirements

On average, the flight time increased by a factor of 3.76 with each halving of the flight altitude. A similar exponential trend, albeit less pronounced, was observed in terms of the data-processing time: the average time required for pixel-based classification increased by a factor of 3.20 with each halving of the flight altitude, while the average time required for object-based classification increased by a factor of 2.94. A comparison of the two approaches indicated that object-based classification took 33.26 percent longer on average. However, the absolute difference in time diminished as the quantity of data to be processed decreased. To facilitate a hardware- and area-size-independent comparison, a normalized diagram is presented (Figure 8), displaying the time curves for the flight and processing times and illustrating the exponential trend of the relationship.

4. Discussion

4.1. General Assessment of Classification Results

Firstly, the results demonstrate that weed mapping utilizing RGB imagery is viable in the present scenario (Abelina soybeans with primarily white goosefoot weeds). The achieved accuracies, an OA of 0.79–0.93 for pixel-based classification and 0.75–0.87 for object-based classification, were within the typical range that can be found in the literature, despite the comparatively simple design of the classification approaches. Gray et al. [41] attained accuracies of 80 percent for a similarly designed three-class classification into soil, soybeans, and weeds based on multispectral data and a maximum likelihood classification. Sanders et al. [23] achieved accuracies of up to 90 percent using multispectral UAV data and a maximum likelihood classification to distinguish weeds from soy. Sivakumar et al. [42] performed weed detection in a soybean field based on RGB data taken at a 20 m flight altitude (0.5 cm GSD), utilizing convolutional neural networks (CNNs) for object detection and achieving a mean intersection over union of up to 0.85. Xu et al. [43], using a CNN-based segmentation approach, attained up to a 97.2 percent accuracy for weed segmentation in soybean fields. This range of accuracy is largely consistent with weed mapping in other types of crops, such as maize, where Gao et al. [44] achieved accuracies of up to 94.5 percent using a semi-automatic OBIA approach with random forest, and Peña et al. [45], also utilizing an OBIA approach on a multispectral basis, attained an overall accuracy of 86 percent.
Despite the generally satisfactory accuracy level attained by our efficiency-oriented, classical machine learning approach, studies have demonstrated that the accuracy could likely be further enhanced by deploying deep learning methodologies. Dos Santos Ferreira et al. [46] achieved superior accuracies (99 percent) by utilizing a CNN compared to classical machine learning algorithms such as random forest, support vector machines, and AdaBoost (94 to 97 percent) for weed classification within soybean fields. Slightly lower accuracies, yet consistent trends, were observed by Zhang et al. [47], who attained an OA of 96.88 percent in weed classification in pastures using CNN, in contrast to SVM’s accuracy of 89.4 percent. While the potential for higher accuracies exists, it is important to consider the increased resource consumption. Practical users must contemplate the economic justifiability of this increase in accuracy against the background of a limited efficiency concerning training, time, and hardware requirements [15].

4.2. Relationship between Spatial Resolution and Accuracy

The classification results highlight a distinct relationship between flight altitude and accuracy. For both pixel-based and object-based classification, the accuracy diminished significantly with a decreasing resolution. In particular, the F1 scores of the weeds demonstrated that the accuracy of object-based classification experienced a significant decline between the altitudes of 10 and 20 m. In contrast, the weed accuracy of the pixel-based classification dropped later between 20 and 40 m. Therefore, if comprehensive weed mapping that can also detect small weed patches is desired, flying at altitudes of 10 or 20 m and performing pixel-wise classification would be suitable. When only a broad overview of the study area is required, the flight can be conducted between 40 and 80 m. These lower resolutions are particularly appropriate when the weeds occur in extensive patches, as this ensures the availability of adequate training pixels and minimizes the weakness of not detecting small weed patches, which is less crucial in this context.
This work ascertains a relationship between resolution and accuracy, corresponding to the findings of Peña et al. [21] and López-Granados et al. [22]. Peña et al. [21] discovered a similar decline in accuracy while detecting weeds in sunflower fields with an OBIA approach. In that study, RGB images were captured at 40, 60, 80, and 100 m and at different growth stages. Peña et al. [21] discerned a greater difference in accuracy between 40 and 80 m compared to the findings of this present study. However, it has to be considered that, even though Peña et al. [21] and this work investigated the same altitudes of 40 and 80 m, the corresponding ground sampling distances differed due to different camera systems (40 m: 1.10 cm (own) vs. 1.52 cm (Peña et al. [21]); 80 m: 2.19 cm (own) vs. 3.04 cm (Peña et al. [21])). On top of this, the differing classification methodologies and study environments impede a one-to-one comparison of the outcomes. López-Granados et al. [22] observed a similar trend when detecting Johnsongrass in maize, also using an OBIA approach. In their study, flights were carried out with an RGB camera at flight altitudes of 30, 60, and 100 m, corresponding to resolutions of 1.14 to 3.8 cm. For all the fields investigated, the accuracy dropped with diminishing resolution. In contrast to these two studies, Sanders et al. [23] did not recognize this relationship of decreasing accuracies with increasing altitudes when detecting Palmer amaranth in soybean fields. During the growth phase, images were collected at 15, 30, and 45 m. Considering the authors’ statement that the GSD at 120 m is 8.2 cm, these flight altitudes should correspond to resolutions of 1.03, 2.05, and 3.08 cm, respectively. Soybean and Palmer amaranth were distinguished by a two-class maximum likelihood classification. Despite investigating resolutions similar to those of the aforementioned studies and this present study, no discernible effect of spatial resolution on the classification accuracy was observed.

4.3. Relationship between Accuracy and Time

Since spatial resolution is directly linked with time requirements, significant amounts of time can be saved by reducing the resolution/increasing the altitude, at the cost of decreasing the accuracy. While the 10 m classification offered the highest accuracy, its advantage over 20 m was not statistically significant, yet it took almost four times longer than the 20 m classification. Hence, opting for 20 m presents a viable time-saving option. When comparing the curves of accuracy and time across altitude, it is evident that, while the time required decreased exponentially with increasing altitude, the accuracy declined in a more or less linear fashion. This makes the higher flight altitudes particularly attractive when time is strongly limited. In concrete terms, although the 40 m pixel-based classification lags 10 percentage points behind the best classification at 10 m, it consumes less than 10 percent of the time for flight and processing.

4.4. Classificatory Challenges with Respect to Spatial Resolution

In this case study, the pixel-based approach was found to yield higher accuracies than the object-based approach throughout all the investigated altitudes. An analysis of the weeds’ F1 scores revealed that even the 80 m pixel-based classification was superior to the 20 m object-based classification in this regard. Looking at Figure 6, the segmentation led to a coarser classification, which provided a less fragmented picture, but became increasingly blind to small weed patches. Comparing the pixel-based and object-based approaches, the UAs for soybeans and weeds tended to be higher than the PAs, suggesting an underestimation of these classes. This type of misclassification frequently occurs in mixed-pixel transition areas. Misclassifications between soy and weeds also occurred, but here classic classifier failures were observed due to the spectral similarity of both species’ foliage. A specific problem of the object-based approach in this study, which might be decisive for the lower accuracies, was a limitation of the segmentation approach against the background of these spectral overlaps. This was particularly evident at higher altitudes with larger segments, resulting in superpixels that no longer represented a single class, but were composed of both soy and weeds. Accordingly, not only individual pixels, but whole segments were potentially misclassified. Similarly, sparse weeds such as grasses fell into soil segments. The lower the resolution, the more likely two classes are aggregated into one segment. Looking at the high-altitude segmentations makes this point obvious (Figure 4). The spatial resolution is so low that the mixed-pixel fraction is relatively large. By further consolidating areas through segmentation, a precise semantic assignment is no longer possible. However, it is surprising that the accuracy of the object-based method was already significantly lower than that of the pixel-based classification at 10 m, where the issue of ambiguous segments should only have had a minor impact on object-based classification.
Looking at other studies, this outperformance of pixel-based over object-based classification appears less common. Gao and Mas [48] showed in a study on the accuracy of pixel- and object-based classification that, at high resolutions, the object-based method achieves better results than the pixel-based classification, and that this relationship becomes reversed with a decreasing resolution. Other studies that have compared pixel- and object-based approaches reported similar results, with the object-based approach regularly achieving superior classification results [49,50,51]. Despite this, some studies can be found in which a pixel-based approach similarly outperformed an object-based method. Mattivi et al. [52], for instance, achieved higher accuracies using a pixel-based artificial neural network than with an OBIA approach for weed mapping in maize.
It is conceivable that the quality of the segmentation could be enhanced by the use of alternative segmentation algorithms. However, comparative studies have found that the differences among competing methods such as simple linear iterative clustering (SLIC), superpixels extracted via energy-driven sampling (SEEDS), linear spectral clustering (LSC), etc., are relatively small [33]. The segment size could also be a factor in need of improvement. Although the most advantageous segment sizes according to cross-validation were selected, a qualitative examination of the segments revealed that, in some cases, quite large areas were aggregated (Figure 4). Possibly, better results could be achieved with smaller segment sizes oriented towards the actual objects, e.g., individual leaves for higher resolutions and single plants for coarser resolutions. Speculatively, the object-based approach could be more effective using multispectral rather than RGB data because of the augmented spectral discriminability, causing less aggregation of soy and weeds within a segment.

4.5. Limitations

Even though the results of this study give important insights into the relationship between spatial resolution and accuracy for soybean weed mapping, we want to stress that UAV-based weed mapping is a complex endeavor, facing numerous instability factors.
Firstly, this study covers a typical weed-infested soybean field with a certain weed spectrum. In other scenarios, however, different plant species, plant densities, and soil properties could be present, where spectral discriminability is less distinct. The accuracies can be expected to suffer if the classes cannot be clearly distinguished in a feature space. Secondly, this case study was carried out in June and at an early growth stage, as this is usually an appropriate time for applying weed control measures. No reliable statement can be made for other growth stages or seasons. Furthermore, the results could have been affected by deviating weather conditions or by the use of different equipment. For the stated reasons, the findings of this article should be interpreted as those of a case study rather than being universally applicable.

5. Conclusions

The influence of spatial resolution on classification accuracy strongly depends on the research object. With regard to weed mapping, there are no consistent results on the influence of spatial resolution so far. To further investigate this topic, this study examined how the classification accuracy of weeds in a soybean field changes as a function of spatial resolution. For this purpose, the resolutions corresponding to flight altitudes of 10, 20, 40, and 80 m were investigated, using both a pixel-based and an object-based approach. A clear relationship was found in that the accuracy decreased with a decreasing resolution. The best OA of 93 percent was achieved with the pixel-based classification approach at the spatial resolution corresponding to an altitude of 10 m. The greatest loss of accuracy in weed detection with respect to flight altitude was found for the pixel-based classification between 20 and 40 m, and for the object-based classification between 10 and 20 m. For all four resolutions studied, the pixel-based method outperformed the object-based approach, which is rather atypical compared to the findings of other studies.
While the accuracies declined approximately linearly with a decreasing resolution, the required flight and processing times decreased exponentially. This study showed that flight and processing times could be considerably reduced without a statistically significant loss of accuracy. While the difference in the overall accuracy between the 10 and 20 m pixel-based classifications was not statistically significant, the time savings of operating at 20 m compared to 10 m were about 75 percent in terms of the flight time and about 66 percent in terms of the processing time.
To validate the findings of this case study, further investigations with different soybean and weed species, plant densities, growth stages, weather conditions, and camera systems should be conducted.

Author Contributions

Conceptualization, N.U. and T.J.; methodology, N.U.; software, N.U.; validation, N.U.; formal analysis, N.U.; investigation, N.U.; resources, N.U., M.P. and T.J.; data curation, N.U.; writing—original draft preparation, N.U.; writing—review and editing, M.P. and T.J.; visualization, N.U.; supervision, T.J.; project administration, T.J. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

The data presented in this study are available from the corresponding author upon reasonable request.

Conflicts of Interest

The authors declare no conflicts of interest.

Abbreviations

The following abbreviations are used in this manuscript.
BBCH: Biologische Bundesanstalt, Bundessortenamt and Chemical Industry
CNN: convolutional neural network
GLI: green leaf index
GSD: ground sampling distance
LSC: linear spectral clustering
MLR: multinomial logistic regression
OA: overall accuracy
OBIA: object-based image analysis
PA: producer’s accuracy
RGB: red, green, blue
SEEDS: superpixels extracted via energy-driven sampling
SLIC: simple linear iterative clustering
SLICO: simple linear iterative clustering (zero-parameter version)
SSWM: site-specific weed management
UA: user’s accuracy
UAV: unmanned aerial vehicle

References

  1. Thorp, K.R.; Tian, L.F. A Review on Remote Sensing of Weeds in Agriculture. Precis. Agric. 2004, 5, 477–508. [Google Scholar] [CrossRef]
  2. Zimdahl, R.L. Fundamentals of Weed Science, 5th ed.; Elsevier: Amsterdam, The Netherlands, 2018; ISBN 9780128111437. [Google Scholar]
  3. Perrino, E.V.; Calabrese, G. Endangered segetal species in southern Italy: Distribution, conservation status, trends, actions and ethnobotanical notes. Genet. Resour. Crop Evol. 2018, 65, 2107–2134. [Google Scholar] [CrossRef]
  4. Kraehmer, H.; Laber, B.; Rosinger, C.; Schulz, A. Herbicides as Weed Control Agents: State of the Art: I. Weed Control Research and Safener Technology: The Path to Modern Agriculture. Plant Physiol. 2014, 166, 1119–1131. [Google Scholar] [CrossRef]
  5. Idowu, J.; Angadi, S. Understanding and Managing Soil Compaction in Agricultural Fields. Circular 2013, 672, 1–8. [Google Scholar]
  6. Morales, M.A.M.; de Camargo, B.C.V.; Hoshina, M.M. Toxicity of Herbicides: Impact on Aquatic and Soil Biota and Human Health. In Herbicides: Current Research and Case Studies in Use; Price, A., Kelton, J., Eds.; IntechOpen Limited: London, UK, 2013; pp. 399–443. [Google Scholar]
  7. Cardina, J.; Johnson, G.A.; Sparrow, D.H. The nature and consequence of weed spatial distribution. Weed Sci. 1997, 45, 364–373. [Google Scholar] [CrossRef]
  8. Castaldi, F.; Pelosi, F.; Pascucci, S.; Casa, R. Assessing the potential of images from unmanned aerial vehicles (UAV) to support herbicide patch spraying in maize. Precis. Agric. 2017, 18, 76–94. [Google Scholar] [CrossRef]
  9. Christensen, S.; Søgaard, H.T.; Kudsk, P.; Nørremark, M.; Lund, I.; Nadimi, E.S.; Jørgensen, R. Site-specific weed control technologies. Weed Res. 2009, 49, 233–241. [Google Scholar] [CrossRef]
  10. del Cerro, J.; Cruz Ulloa, C.; Barrientos, A.; de León Rivas, J. Unmanned Aerial Vehicles in Agriculture: A Survey. Agronomy 2021, 11, 203. [Google Scholar] [CrossRef]
  11. López-Granados, F. Weed detection for site-specific weed management: Mapping and real-time approaches. Weed Res. 2011, 51, 1–11. [Google Scholar] [CrossRef]
  12. Liu, B.; Bruch, R. Weed detection for selective spraying: A review. Curr. Robot. Rep. 2020, 1, 19–26. [Google Scholar] [CrossRef]
  13. Hasan, A.S.M.M.; Sohel, F.; Diepeveen, D.; Laga, H.; Jones, M.G.K. A survey of deep learning for weed detection from images. Comput. Electron. Agric. 2021, 184, 106067. [Google Scholar] [CrossRef]
  14. Coleman, G.R.Y.; Bender, A.; Hu, K.; Sharpe, S.M.; Schumann, A.W.; Wang, Z.; Bagavathiannan, M.V.; Boyd, N.S.; Walsh, M.J. Weed detection to weed recognition: Reviewing 50 years of research to identify constraints and opportunities for large-scale cropping systems. Weed Technol. 2022, 36, 741–757. [Google Scholar] [CrossRef]
  15. Dargan, S.; Kumar, M.; Ayyagari, M.R.; Kumar, G. A Survey of Deep Learning and Its Applications: A New Paradigm to Machine Learning. Arch. Comput. Methods Eng. 2020, 27, 1071–1092. [Google Scholar] [CrossRef]
  16. Underwood, E.C.; Ustin, S.L.; Ramirez, C.M. A Comparison of Spatial and Spectral Image Resolution for Mapping Invasive Plants in Coastal California. Environ. Manag. 2007, 39, 63–83. [Google Scholar] [CrossRef]
  17. Fisher, J.R.B.; Acosta, E.A.; Dennedy-Frank, P.J.; Kroeger, T.; Boucher, T.M. Impact of satellite imagery spatial resolution on land use classification accuracy and modeled water quality. Remote Sens. Ecol. Conserv. 2018, 4, 137–149. [Google Scholar] [CrossRef]
  18. Meddens, A.J.H.; Hicke, J.A.; Vierling, L.A. Evaluating the potential of multispectral imagery to map multiple stages of tree mortality. Remote Sens. Environ. 2011, 115, 1632–1642. [Google Scholar] [CrossRef]
  19. Roth, K.L.; Roberts, D.A.; Dennison, P.E.; Peterson, S.H.; Alonzo, M. The impact of spatial resolution on the classification of plant species and functional types within imaging spectrometer. Remote Sens. Environ. 2015, 171, 45–57. [Google Scholar] [CrossRef]
  20. Liu, M.; Yu, T.; Gu, X.; Sun, Z.; Yang, J.; Zhang, Z.; Mi, X.; Cao, W.; Li, J. The Impact of Spatial Resolution on the Classification of Vegetation Types in Highly Fragmented Planting Areas Based on Unmanned Aerial Vehicle Hyperspectral Images. Remote Sens. 2020, 12, 146. [Google Scholar] [CrossRef]
  21. Peña, J.M.; Torres-Sánchez, J.; Serrano-Pérez, A.; de Castro, A.I.; López-Granados, F. Quantifying Efficacy and Limits of Unmanned Aerial Vehicle (UAV) Technology for Weed Seedling Detection as Affected by Sensor Resolution. Sensors 2015, 15, 5609–5626. [Google Scholar] [CrossRef]
  22. López-Granados, F.; Torres-Sánchez, J.; De Castro, A.I.; Serrano-Pérez, A.; Mesas-Carrascosa, F.J.; Peña, J.M. Object-based early monitoring of a grass weed in a grass crop using high resolution UAV imagery. Agron. Sustain. Dev. 2016, 36, 67. [Google Scholar] [CrossRef]
  23. Sanders, J.T.; Jones, E.A.L.; Austin, R.; Roberson, G.T.; Richardson, R.J.; Everman, W.J. Remote Sensing for Palmer Amaranth (Amaranthus palmeri S. Wats.) Detection in Soybean (Glycine max (L.) Merr.). Agronomy 2021, 11, 1909. [Google Scholar] [CrossRef]
  24. Böhning, D. Multinomial logistic regression algorithm. Ann. Inst. Stat. Math. 1992, 44, 197–200. [Google Scholar] [CrossRef]
  25. Galántai, A. The theory of Newton’s method. J. Comput. Appl. Math. 2000, 124, 25–44. [Google Scholar] [CrossRef]
  26. Gutiérrez, P.A.; López-Granados, F.; Peña-Barragán, J.M.; Jurado-Expósito, M.; Hervás-Martínez, C. Logistic regression product-unit neural networks for mapping Ridolfia segetum infestations in sunflower crop using multitemporal remote sensed data. Comput. Electron. Agric. 2008, 64, 293–306. [Google Scholar] [CrossRef]
  27. Mohajane, M.; Costache, R.; Karimi, F.; Pham, Q.B.; Essahlaoui, A.; Nguyen, H.; Laneve, G.; Oudija, F. Application of remote sensing and machine learning algorithms for forest fire mapping in a Mediterranean area. Ecol. Indic. 2021, 129, 107869. [Google Scholar] [CrossRef]
  28. Lv, X.; Ming, D.; Chen, Y.; Wang, M. Very high resolution remote sensing image classification with SEEDS-CNN and scale effect analysis for superpixel CNN classification. Int. J. Remote Sens. 2019, 40, 506–531. [Google Scholar] [CrossRef]
  29. He, P.; Shi, W.; Zhang, H. Adaptive superpixel based Markov random field model for unsupervised change detection using remotely sensed images. Remote Sens. Lett. 2018, 9, 724–732. [Google Scholar] [CrossRef]
  30. Tu, J.; Sui, H.; Feng, W.; Song, Z. Automatic Building Damage Detection Method Using High-Resolution Remote Sensing Images and 3D GIS Model. ISPRS Ann. Photogramm. Remote Sens. Spat. Inf. Sci. 2016, 3, 43–50. [Google Scholar] [CrossRef]
  31. Achanta, R.; Shaji, A.; Smith, K.; Lucchi, A.; Fua, P.; Süsstrunk, S. SLIC superpixels compared to state-of-the-art superpixel methods. IEEE Trans. Pattern Anal. Mach. Intell. 2012, 34, 2274–2282. [Google Scholar] [CrossRef]
  32. Csillik, O. Fast segmentation and classification of very high resolution remote sensing data using SLIC superpixels. Remote Sens. 2017, 9, 243. [Google Scholar] [CrossRef]
  33. Csillik, O. Superpixels: The end of pixels in obia. A comparison of state-of-the-art superpixel methods for remote sensing data. In Proceedings of the 6th International Conference on Geographic Object-Based Image Analysis, GEOBIA 2016: Solutions & Synergies, Enschede, The Netherlands, 14–16 September 2016; University of Twente, Faculty of Geo-Information Science and Earth Observation (ITC): Enschede, The Netherlands, 2016. [Google Scholar]
  34. Louhaichi, M.; Borman, M.M.; Johnson, D.E. Spatially located platform and aerial photography for documentation of grazing impacts on wheat. Geocarto Int. 2001, 16, 65–70. [Google Scholar] [CrossRef]
  35. Hunt, E.R., Jr.; Doraiswamy, P.C.; McMurtrey, J.E.; Daughtry, C.S.; Perry, E.M.; Akhmedov, B. A visible band index for remote sensing leaf chlorophyll content at the canopy scale. Int. J. Appl. Earth Obs. Geoinf. 2013, 21, 103–112. [Google Scholar] [CrossRef]
  36. Olofsson, P.; Foody, G.M.; Herold, M.; Stehman, S.V.; Woodcock, C.E.; Wulder, M.A. Good practices for estimating area and assessing accuracy of land change. Remote Sens. Environ. 2014, 148, 45–57. [Google Scholar] [CrossRef]
  37. Agresti, A. An Introduction to Categorical Data Analysis, 3rd ed.; John Wiley & Sons: Hoboken, NJ, USA, 2018. [Google Scholar]
  38. Foody, G.M. Thematic map comparison: Evaluating the Statistical Significance of Differences in Classification Accuracy. Photogramm. Eng. Remote Sens. 2004, 70, 627–633. [Google Scholar] [CrossRef]
  39. Foody, G.M. Classification accuracy comparison: Hypothesis tests and the use of confidence intervals in evaluations of difference, equivalence and non-inferiority. Remote Sens. Environ. 2009, 113, 1658–1663. [Google Scholar] [CrossRef]
  40. Edwards, A.L. Note on the “correction for continuity” in testing the significance of the difference between correlated proportions. Psychometrika 1948, 13, 185–187. [Google Scholar] [CrossRef]
  41. Gray, C.J.; Shaw, D.R.; Gerard, P.D.; Bruce, L.M. Utility of multispectral imagery for soybean and weed species differentiation. Weed Technol. 2008, 22, 713–718. [Google Scholar] [CrossRef]
  42. Sivakumar, A.N.V.; Li, J.; Scott, S.; Psota, E.; Jhala, A.J.; Luck, J.D.; Shi, Y. Comparison of Object Detection and Patch-Based Classification Deep Learning Models on Mid- to Late-Season Weed Detection in UAV Imagery. Remote Sens. 2020, 12, 2136. [Google Scholar] [CrossRef]
  43. Xu, B.; Fan, J.; Chao, J.; Arsenijevic, N.; Werle, R.; Zhang, Z. Instance segmentation method for weed detection using UAV imagery in soybean fields. Comput. Electron. Agric. 2023, 211, 107994. [Google Scholar] [CrossRef]
  44. Gao, J.; Liao, W.; Nuyttens, D.; Lootens, P.; Vangeyte, J.; Pižurica, A.; He, Y.; Pieters, J.G. Fusion of pixel and object-based features for weed mapping using unmanned aerial vehicle imagery. Int. J. Appl. Earth Obs. Geoinf. 2018, 67, 43–53. [Google Scholar] [CrossRef]
  45. Peña, J.M.; Torres-Sánchez, J.; de Castro, A.I.; Kelly, M.; López-Granados, F. Weed Mapping in Early-Season Maize Fields Using Object-Based Analysis of Unmanned Aerial Vehicle (UAV) Images. PLoS ONE 2013, 8, e77151. [Google Scholar] [CrossRef] [PubMed]
  46. Dos Santos Ferreira, A.; Freitas, D.M.; da Silva, G.G.; Pistori, H.; Folhes, M.T. Weed detection in soybean crops using ConvNets. Comput. Electron. Agric. 2017, 143, 314–324. [Google Scholar] [CrossRef]
  47. Zhang, W.; Hansen, M.F.; Volonakis, T.N.; Smith, M.; Smith, L.; Wilson, J.; Ralston, G.; Broadbent, L.; Wright, G. Broad-leaf weed detection in pasture. In Proceedings of the 2018 IEEE 3rd International Conference on Image, Vision and Computing (ICIVC), Chongqing, China, 27–29 June 2018; pp. 101–105. [Google Scholar]
  48. Gao, Y.; Mas, J.F. A comparison of the performance of pixel-based and object-based classifications over images with various spatial resolutions. Online J. Earth Sci. 2008, 2, 27–35. [Google Scholar]
  49. Keyport, R.N.; Oommen, T.; Martha, T.R.; Sajinkumar, K.S.; Gierke, J.S. A comparative analysis of pixel- and object-based detection of landslides from very high-resolution images. Int. J. Appl. Earth Obs. Geoinf. 2018, 64, 1–11. [Google Scholar] [CrossRef]
  50. Duro, D.C.; Franklin, S.E.; Dubé, M.G. A comparison of pixel-based and object-based image analysis with selected machine learning algorithms for the classification of agricultural landscapes using SPOT-5 HRG imagery. Remote Sens. Environ. 2012, 118, 259–272. [Google Scholar] [CrossRef]
  51. Myint, S.W.; Gober, P.; Brazel, A.; Grossman-Clarke, S.; Weng, Q. Per-pixel vs. object-based classification of urban land cover extraction using high spatial resolution imagery. Remote Sens. Environ. 2011, 115, 1145–1161. [Google Scholar] [CrossRef]
  52. Mattivi, P.; Pappalardo, S.E.; Nikolić, N.; Mandolesi, L.; Persichetti, A.; De Marchi, M.; Masin, R. Can commercial low-cost drones and open-source GIS technologies be suitable for semi-automatic weed mapping for smart farming? A case study in NE Italy. Remote Sens. 2021, 13, 1869. [Google Scholar] [CrossRef]
Figure 1. True color composite of the study area as captured by UAV (A) and pictures demonstrating the weed situation in the study area, taken in situ (B,C).
Figure 2. Schematic overview of the methodology.
Figure 3. F1 scores of classes and overall accuracy for the pixel-based classifications.
Figure 4. SLICO segmentations with best-suited segment sizes according to cross-validation (see Table 2) at the different altitudes.
Figure 5. F1 scores of classes and overall accuracy of object-based classifications.
Figure 6. Comparison of pixel- and object-based classifications for the investigated altitudes. Soy plants are depicted in green, and weeds are depicted in red.
Figure 7. F1 scores of weeds and overall accuracy of pixel- and object-based classification.
Figure 8. Progression of time required for flight and processing.
Table 1. Accuracies of the pixel-based classifications with 95 percent confidence intervals.

Altitude/Resolution | Soil PA (%) | Soil UA (%) | Soy PA (%) | Soy UA (%) | Weed PA (%) | Weed UA (%) | Overall Accuracy (%)
10 m/0.27 cm/pixel | 97.33 (±0.85) | 96.77 (±2.36) | 87.38 (±5.41) | 86.52 (±3.12) | 83.89 (±5.20) | 87.00 (±3.67) | 93.02 (±1.72)
20 m/0.55 cm/pixel | 97.61 (±0.84) | 90.76 (±3.69) | 80.48 (±6.46) | 86.21 (±3.16) | 74.98 (±7.04) | 90.16 (±3.35) | 89.64 (±2.49)
40 m/1.10 cm/pixel | 96.31 (±1.09) | 83.67 (±4.64) | 75.07 (±6.49) | 80.17 (±3.57) | 59.84 (±6.74) | 88.41 (±3.78) | 83.48 (±3.06)
80 m/2.19 cm/pixel | 92.23 (±1.72) | 82.57 (±5.05) | 74.76 (±5.51) | 76.06 (±3.85) | 58.99 (±6.31) | 77.10 (±4.69) | 79.80 (±3.07)
Table 2. Cross-validation scores for average segment sizes.

Average Segment Size (pixels) | 10 m | 20 m | 40 m | 80 m
5 × 5 | 0.957 | 0.940 | 0.937 | 0.920
10 × 10 | 0.943 | 0.963 | 0.950 | 0.800
15 × 15 | 0.963 | 0.967 | 0.913 | 0.717
20 × 20 | 0.943 | 0.940 | 0.803 | 0.640
25 × 25 | 0.953 | 0.923 | 0.780 | 0.643
Table 3. Accuracies of the object-based classifications, with 95 percent confidence intervals.

Altitude/Resolution | Soil PA (%) | Soil UA (%) | Soy PA (%) | Soy UA (%) | Weed PA (%) | Weed UA (%) | Overall Accuracy (%)
10 m/0.27 cm/pixel | 95.37 (±1.19) | 88.58 (±4.22) | 88.77 (±5.00) | 83.40 (±3.33) | 64.38 (±7.28) | 88.29 (±3.65) | 87.28 (±2.73)
20 m/0.55 cm/pixel | 94.67 (±1.33) | 82.33 (±4.92) | 83.67 (±5.26) | 81.30 (±3.51) | 55.04 (±6.38) | 83.90 (±4.22) | 82.34 (±3.14)
40 m/1.10 cm/pixel | 92.59 (±1.64) | 79.82 (±5.28) | 69.16 (±5.26) | 84.50 (±3.55) | 60.71 (±6.64) | 70.03 (±4.63) | 78.53 (±3.28)
80 m/2.19 cm/pixel | 86.97 (±2.43) | 74.30 (±5.87) | 73.82 (±5.01) | 76.65 (±3.90) | 54.96 (±5.86) | 71.99 (±4.84) | 74.57 (±3.29)