Article

Automated Crop Residue Estimation via Unsupervised Techniques Using High-Resolution UAS RGB Imagery

Lyles School of Civil Engineering, Purdue University, West Lafayette, IN 47907, USA
*
Author to whom correspondence should be addressed.
Remote Sens. 2024, 16(7), 1135; https://doi.org/10.3390/rs16071135
Submission received: 31 January 2024 / Revised: 7 March 2024 / Accepted: 22 March 2024 / Published: 24 March 2024
(This article belongs to the Section Remote Sensing in Agriculture and Vegetation)

Abstract

Crop Residue Cover (CRC) is crucial for enhancing soil quality and mitigating erosion in agricultural fields. Accurately estimating CRC in near real-time presents challenges due to the limitations of traditional and remote sensing methods. This study addresses the challenge of accurately estimating CRC using unsupervised algorithms on high-resolution Unmanned Aerial System (UAS) imagery. We employ two methods to perform CRC estimation: (1) the K-means unsupervised algorithm and (2) Principal Component Analysis (PCA) combined with the Otsu thresholding technique. The advantage of these methods lies in their independence from human intervention, as neither requires a supervised training stage. Additionally, these methods are rapid and suitable for near real-time estimation of CRC as decision-making support in agricultural management. Our analysis reveals that the K-means method, with an $R^2 = 0.79$, achieves superior accuracy in CRC estimation over the PCA-Otsu method, with an $R^2 = 0.46$. The accuracy of CRC estimation for both corn and soybean crops is significantly higher in winter than in spring, attributable to the more weathered state of crop residue. Furthermore, CRC estimations in corn fields exhibit a stronger correlation, likely due to the larger size of corn residue, which enhances detectability in images; nevertheless, the variance in CRC estimation accuracy between corn and soybean fields is minimal. Finally, CRC estimation achieves the highest correlation in no-till fields and the lowest in conventionally tilled fields, a difference likely due to the soil disturbance caused by plowing in conventional tillage practices.

1. Introduction

Crop residue plays a crucial role in mitigating soil erosion caused by wind and runoff [1]; it retains moisture and nutrients in the soil [2] and increases the soil’s organic content [3,4]. Additionally, crop residue improves water use efficiency, enhances soil fertility [5], and ultimately contributes to overall soil quality [6]. Consequently, maintaining soil surface coverage with crop residue is a vital and strongly recommended practice in conservation agriculture [7]. In the 1980s, growing awareness of the adverse environmental impacts associated with conventional tillage methods prompted agronomists to introduce conservation tillage practices to improve soil health and promote environmentally friendly agricultural approaches [8]. The percentage of Crop Residue Cover (CRC) observed in a field can be considered an indicator of tillage intensity and crop management practices [9]. Recognizing the significance of the relationship between CRC and tillage intensity, the Conservation Technology Information Center (CTIC) established the following definitions for standard tillage practices (https://www.ctic.org/resource_display/?id=322&title=Tillage+Type+Definitions, accessed on 21 March 2024):
  • Intensive or conventional tillage leaves less than 15% CRC or fewer than 500 pounds per acre (560 kg/ha) of crop residue. Intensive tillage disturbs all the soil by involving multiple operations with implements such as a moldboard, disk, or chisel plow.
  • Strip-tillage merges the benefits of conventional tillage with the soil-protecting advantages of no-till by disturbing only the portion of the soil that contains the seed row (about one-third of the row width).
  • No-till aims to achieve a 100% CRC, leaving most of the soil undisturbed. In this practice, the only disturbance between harvest and planting is nutrient injection (https://www.extension.purdue.edu/extmedia/ct/ct-1.html, accessed on 21 March 2024).
Traditionally, the collection of tillage data relied on manual field data collection, survey responses, agricultural censuses, or visual assessment of the field. Among the traditional methods used to estimate CRC in fields, the line-point transect method stands out for its simplicity and high accuracy in CRC estimation. Many studies utilize the line-point transect method to validate the results obtained by other approaches [7,10,11]. However, collecting ground reference data using this method can be time-consuming, labor-intensive, and costly [11], and it is often hindered by challenges related to weather conditions and the availability of personnel to conduct such surveys. Moreover, the limited spatial and temporal coverage provided by these on-site methods makes them insufficient for thorough assessment and monitoring of agricultural practices [1]. As a result, acquiring data systematically and consistently across large geographical areas through these methods poses substantial challenges [8]. Recognizing the limitations of these traditional approaches, a study conducted by Karan and Hamelin [12] focused on using different methods to estimate 16 distinct primary crop residues in France. Their research unveiled significant variations in the estimations, with statistical tests revealing notable disparities among these estimates. The findings underscored the considerable unreliability of current methods for estimating crop residues and highlighted the necessity for more accurate and consistent approaches.
To respond to this challenge, remote sensing data have emerged as a promising alternative for estimating CRC. Remote sensing techniques offer a cost-effective and efficient means of systematically surveying tillage practices on a broad scale [7,8]. Additionally, unlike the line-point transect method, which collects three to five samples in a field and extrapolates the results to the entire field, remote sensing imagery captures the spatial distribution of CRC across the entire field [13]. Beeson et al. [11] evaluated CRC and tillage intensity in corn and soybean fields in central Iowa by employing multispectral satellite imagery, the line-point transect method, and visual estimates through roadside surveys. Their findings showed that remote sensing imagery yields reasonable classification accuracy ranging from 64% to 92%. However, remote sensing techniques for estimating CRC are negatively impacted by soil moisture and crop water content, which complicate the accurate identification of crop residues against the background of soil or living crop cover [6,14,15,16]. This challenge increases when attempting to distinguish weathered residues or crops that have reached advanced stages of growth [17,18,19]. Yue et al. [6] highlighted that moisture levels significantly influence the remote sensing imagery, resulting in weaker correlations with CRC under mixed moisture conditions (correlation coefficient r = 0.58) compared with stronger correlations in uniformly dry conditions (r = 0.869). On the other hand, Quemada et al. [20] found that adjusting spectral bands to account for moisture content significantly improved the accuracy of CRC estimates using these indices.
Spectral indices can magnify specific features of the intended objects in images, making them widely utilized in remote sensing studies. Stern et al. [10] evaluated five spectral indices and classification methods using Landsat 5 TM, 7 ETM+, and 8 OLI data; their findings show that no single approach is consistently superior for CRC estimation. Cai et al. [21] identified the Normalized Difference Tillage Index (NDTI) as particularly effective, while Hively et al. [9] noted higher accuracy with indices like the Shortwave Infrared Normalized Difference Residue Index (SINDRI) and the Lignin Cellulose Absorption Index (LCA) in predicting CRC, emphasizing the importance of the Shortwave Infrared (SWIR) spectrum, particularly within the 2100–2300 nm range [2,3,16,19,20,22]. Despite progress, the limited availability of free hyperspectral data and their low signal-to-noise ratio poses challenges [13,18,23]. This is why de Paul Obade et al. [13] suggested using alternative indices within the 450–1750 nm range, finding green band reflectance most correlated with CRC variability. Yet, differentiating crop residues from soils remains difficult due to similar spectral signatures in the visible and near-infrared regions, underlining the complexities in selecting effective spectral indices for CRC estimation [2]. One specific challenge with using spectral indices from space-borne imagery is the coarse spatial resolution of satellite images, which fails to capture the detailed spatial distribution of CRC, making it unsuitable for accurate CRC estimation [24]. Moreover, satellite data collection is often hindered on cloudy days, limiting the reliability of space-borne imagery for decision-making purposes. This inability to capture the details of fields and the spatial distribution of CRC also makes space-borne imagery inadequate for advanced object detection algorithms and deep learning methods.
An effective alternative is the use of high-resolution imagery captured by UAS which allows for the detailed observation of fields. The captured details are crucial for the training of supervised learning algorithms and the implementation of unsupervised methods. Such detailed imagery significantly improves the accuracy of CRC estimations [24].
Previous studies using machine learning (ML) and deep learning (DL) methods for estimating CRC in agricultural fields highlight significant advancements in addressing the inherent challenges and complexities associated with CRC estimation. These studies collectively underscore the high accuracy and efficiency of ML approaches in CRC estimation [2,25,26]. For instance, Bannari et al. [27] demonstrated that Artificial Neural Networks (ANNs) not only offer greater accuracy but also align more closely with ground reference data. Bocco et al. [28] compared the performance of a neural network (NN) model against the Crop Residue Index Multiband (CRIM) index derived from Landsat images, revealing the NN model’s superior capability in estimating corn and soybean residue cover. Further, a comparative study by Tao et al. [29] highlighted the marked superiority of DL-based models over traditional ML models like Support Vector Machines (SVM) in mapping corn residue cover, with DL models significantly outperforming their ML counterparts. Ding et al. [26] enhanced CRC estimation by employing ML models with Sentinel-2 multispectral data, indicating that ML models with optimally selected input variables surpass univariate regression approaches. Ribeiro et al. [30] explored automated CRC quantification using a genetic algorithm to refine the segmentation process of RGB images, achieving up to 92% similarity between automated segmentation outputs and manually traced templates. Upadhyay et al. [31] utilized high-resolution UAS-borne RGB imagery and identified Recursive Feature Elimination with SVM (RFE-SVM) as the best feature selection method, pinpointing texture features, especially Local Binary Pattern (LBP) features, as most effective, while deeming shape features irrelevant. Despite these advancements, the primary limitation of supervised ML models lies in their dependency on extensive ground measurements for accurate model calibration [32].
This dependence not only demands substantial human effort but also introduces challenges in applying these models universally across varied agricultural landscapes, where soil types, moisture levels, and crop varieties differ markedly. Thus, the need for supervised training further complicates their use due to the necessity of human intervention, which can limit their effectiveness and applicability in diverse settings. Furthermore, gathering data on residue management practices is time-sensitive, emphasizing the importance of timely and spatially detailed soil quality information for sustainable agroecosystem management [1,13].
This study addresses the need for an unsupervised method capable of estimating CRC in near real-time using high-resolution, UAS-based RGB imagery, specifically focusing on corn and soybeans, the primary crops in Indiana. The analysis is concentrated on the period from postharvest in December 2021 to preharvest in early May 2022. This time frame was selected because the fields lack vegetation and are most vulnerable to soil erosion during these months. The results of this study are validated with ground reference data collected through the line-point transect method and can serve as a vital decision-support tool for tillage practices and agricultural management. The structure of the paper is organized as follows: Section 2 introduces the data and study area. Section 3 outlines the methodology employed. Section 4 presents the results, with Section 5 discussing these findings. The paper concludes in Section 6, offering a summary and proposing directions for future research to overcome the identified limitations.

2. Study Region and Data

2.1. Study Region

This study focuses on corn and soybean fields in Indiana, a central region within the U.S. Corn Belt (https://www.ncei.noaa.gov/access/monitoring/reference-maps/corn-belt, accessed on 21 March 2024). Indiana allocates around 5.25 million acres to corn and 5.85 million acres to soybean cultivation (https://www.nass.usda.gov/Quick_Stats/Ag_Overview/stateOverview.php?state=INDIANA, accessed on 21 March 2024), underscoring the substantial economic importance of these crops in the state. Our assessment of CRC involved 18 fields—nine planted with corn and nine with soybeans. Each field measured approximately 30 acres. The fields, chosen for their varied geographical spread across Indiana, are depicted in Figure 1. We distributed the fields to cover different parts of the state: two in the north, two in the east, and several in proximity to Purdue University. This geographical grouping was strategically planned to streamline the data collection process. Furthermore, this study encompasses an analysis of diverse tillage practices, including no-till, strip-till, and conventional-till, to capture a wide array of farming techniques. Table 1 lists the specifics for each field, including names, locations, crop types, and tillage practices, offering a detailed snapshot of the areas under investigation.

2.2. Data

2.2.1. Field Measurements/Line-Point Transect Method

The line-point transect method is widely applied across various fields for quantifying animal populations, plant diversity, and CRC in agricultural settings. Designed to efficiently sample a small yet representative section of larger ecosystems, this technique is invaluable for collecting data on ecosystem communities. For estimating CRC, while several methods like surveys, direct observations, and photo comparisons are available, the line-point transect method is renowned for its precision and simplicity in accurately measuring ground coverage by plant or crop residues. Implementing this method necessitates careful selection of the field area to ensure representativeness, avoiding areas like end rows or those adversely affected by conditions such as flooding, drought, or pest infestations, as these could compromise result accuracy. A 50-foot tape measure is used, laid either directly across the crop rows or at a 45-degree angle to them, as illustrated in Figure 2. The method focuses on counting only the crop residues that are large enough to intercept raindrops, specifically targeting residues at least 3/32 inches in diameter. Counts are made at each footmark along the tape. To calculate CRC, the total count of qualifying residues is divided by the number of foot marks along the 50-foot tape, providing an estimated measure of CRC.
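The CRC arithmetic of the line-point transect method reduces to dividing residue interceptions by the number of foot marks along the tape. A minimal sketch, assuming illustrative counts rather than real field data:

```python
# Hypothetical transect counts: 1 = residue intercepted at a foot mark, 0 = none.
def crc_from_transect(hits, n_marks=50):
    """Percent CRC: qualifying residue counts divided by foot marks (50-ft tape)."""
    return 100.0 * sum(hits) / n_marks

marks = [1, 0, 1, 1, 0] * 10   # 50 foot marks, 30 interceptions
print(crc_from_transect(marks))  # → 60.0
```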
We conducted the line-point transect method a total of sixteen times within each field, arranging the count locations in a cross-shaped layout across the field. For clarity, we refer to each of these configurations as a “Plot”. Therefore, four plots exist within a single field, with four line-point transect measurements carried out in each plot. With the tape measure extending to 50 feet, the area covered by each plot is approximately 100 by 100 feet. Figure 3 provides a visual representation of the plot locations and the detailed line-point transect measurements for each plot and field. Temporal changes in crop residue were investigated by visiting each field twice during the off-season: first, between December 2021 and March 2022, representing the winter period, and then between April and May 2022, to capture the springtime conditions. Due to logistical challenges such as distance and adverse weather conditions, four fields in the north were only visited once during the winter. Detailed information about the field visits, including specific dates and average crop residue percentages calculated from 16 measurements, is presented in Table 2. Figure 4 represents the variance in CRC across all fields between the two visits. This variation highlights differing trends: in some cases, CRC decreased by the second visit, while in others, it increased. These changes could be influenced by several factors, including the crop type, timing of the visits, choice of measurement locations within each field, as well as possible residue displacement by wind or errors in measurement.

2.2.2. Global Positioning System (GPS) Survey Data

In our study, we utilized the Reach RS2 (Emlid, Hong Kong, China) (Figure 5a) dual-frequency GNSS receiver for precise positioning of ground control points (GCPs) (Figure 5b) and line-point transect plot locations, using the nearest base station to each field for RTK positioning. Prior to fieldwork, we identified four representative plots in each field and established three GCPs within each field whose coordinates were employed during the georeferencing process. Additionally, within each field, we measured the GPS coordinates at the start and end points of the tape measure, so we acquired five distinct GPS coordinates for each plot, marking the center, top, bottom, left, and right positions. These precise coordinates clearly outline each plot, supporting in-depth analysis and further research activities.

2.2.3. Unmanned Aircraft Systems (UAS) Imagery

For Unmanned Aerial System (UAS) data collection, we used the DJI Matrice 300 RTK (Figure 6a), a 6.3 kg quadcopter by DJI Technology, China (https://www.dji.com/company, accessed on 21 March 2024). Operating at altitudes of up to 5000 m with a maximum endurance of 55 min without payload, the UAV provides positioning precision of 1.0 cm + 1 ppm horizontally and 1.5 cm + 1 ppm vertically (https://enterprise.dji.com/zenmuse-p1/specs, accessed on 21 March 2024). Images were captured with the Zenmuse P1 camera (Figure 6b), which offers 3 cm horizontal and 5 cm vertical absolute accuracy and features a 4.4 µm pixel size. The UAV was flown at an altitude of 40 m and maintained an average speed of 10 m/s, adjusted according to wind conditions, to achieve a 1 cm Ground Sampling Distance (GSD) in the orthomosaic images. We ensured an 80% overlap and sidelap for comprehensive coverage. Each mission over the fields lasted approximately 20 min and captured between 600 and 1000 RGB images. To achieve complete coverage, some areas required two flights, totaling 40 min of flight time. The RGB orthomosaics, which cover the entire field, were generated with Agisoft Metashape software (version 1.7.1). This process involved a precise workflow using the “high-accuracy” setting for photo alignment to ensure the utmost precision in the creation of the orthomosaics.

3. Methods

The methodology of this study is illustrated as a four-stage process in the flowchart shown in Figure 7. First, the data collection stage includes the acquisition of ground reference data, the collection of GPS data for precise spatial referencing, and the capturing of RGB images using UAS. Second, the processing stage involves implementing two unsupervised methods: the K-means clustering algorithm and PCA-Otsu. Third, these processes result in a binary map, which clearly distinguishes crop residues from their surrounding environment. Lastly, the validation stage is twofold: it begins with a comparison of the binary map against the ground reference data to assess accuracy. This is followed by validation using reference data filtered by visual assessment. Together, these interconnected steps form a comprehensive methodology, ensuring this study progresses logically from data collection to the validation of results.

3.1. K-Means Clustering Algorithm

Clustering algorithms exploit the natural structure of data sets, categorizing elements with similar characteristics into groups [33]. The K-means algorithm, a prominent clustering method, assigns data to the nearest cluster’s centroid, aiming to increase intracluster similarity and reduce intercluster similarity [34]. This is achieved through distance functions that gauge the proximity between data points and centroids, ensuring that elements are grouped based on their closeness to the center of the cluster [35]. Renowned for its straightforwardness and efficiency, K-means is widely regarded as a leading technique in partitional clustering and a significant tool in data mining [34]. Despite its advantages, K-means encounters challenges such as the arbitrary initialization of centroids, which can lead to unpredictable results, and the requirement to predefine the number of clusters, which can complicate its application [36]. To address the challenge of selecting an optimal number of clusters for the K-means algorithm, we explored the elbow method, alongside the sum of square errors (SSE) and the silhouette score, as analytical tools.
  • Elbow method: The elbow method assists in identifying the most appropriate number of clusters by analyzing the SSE graph. A sharp “elbow” or a significant drop in SSE values with a changing number of clusters suggests the optimal number of clusters [34]. This method systematically evaluates changes in SSE to pinpoint the best number of clusters in K-means clustering.
  • Sum of squared error (SSE): In K-means clustering, each data point $x_j$ is assigned to a cluster $C_i$ based on its Euclidean distance from the cluster’s centroid $m_i$. The SSE, a metric for clustering effectiveness [37], quantifies the variance within a cluster as the total squared difference between each data point and its cluster mean. An SSE of zero indicates perfect homogeneity within the cluster. Mathematically, SSE is defined as
    $$\mathrm{SSE} = \sum_{i=1}^{K} \sum_{x_j \in C_i} \left\| x_j - m_i \right\|^2$$
    where $K$ is the number of clusters, $x_j$ is a data point assigned to cluster $C_i$, $m_i$ is the centroid of cluster $C_i$, and $\|\cdot\|$ denotes the Euclidean distance from $x_j$ to $m_i$.
  • Silhouette score: The silhouette score measures how well a data point fits within its assigned cluster. It is calculated by comparing the mean intracluster distance (cohesion) to the mean nearest-cluster distance (separation) [38]. Scores range from −1 to 1, with values close to 1 indicating strong cluster fit, values near 0 suggesting overlapping clusters, and values around −1 highlighting misclassified data points. A high silhouette score denotes clear differentiation between clusters and tight grouping within them, whereas a low score may indicate clustering inaccuracies. The silhouette score is calculated as
    $$S(x_i) = \frac{b_i - a_i}{\max(a_i, b_i)}$$
    Here, $S(x_i)$ is the silhouette score of data point $x_i$, $a_i$ represents cohesion, and $b_i$ indicates separation. The average silhouette score across all data points is then calculated for each $K$:
    $$\text{average silhouette} = \operatorname{mean}\{S(x_i)\}$$
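The SSE and silhouette diagnostics above can be sketched with scikit-learn, where SSE corresponds to the fitted model’s `inertia_`. The `pixels` array below is a random stand-in for the flattened RGB values of a plot image, so the printed scores are illustrative only:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

rng = np.random.default_rng(0)
# Stand-in for the flattened RGB pixels of one plot: (n_pixels, 3) array.
pixels = rng.integers(0, 256, size=(500, 3)).astype(float)

for k in range(2, 8):
    km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(pixels)
    sse = km.inertia_                            # sum of squared errors (SSE)
    sil = silhouette_score(pixels, km.labels_)   # mean silhouette over all pixels
    print(k, round(sse), round(sil, 3))
```

In practice these two curves are compared across candidate values of k, as in Figures 8 and 9, to pick the cluster count that balances low SSE against a high silhouette score.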
Figure 8 displays the relationship between the number of clusters and both SSE and the average silhouette score. It is evident that although SSE decreases with an increase in the number of clusters, the rate of decrease becomes negligible after exceeding six clusters. Similarly, the silhouette score shows the same pattern after two clusters, suggesting minimal improvement in cluster definition beyond this point. The lack of a clear “elbow” in Figure 8 reflects the complexity of the data and the challenge of categorizing it into distinct groups. Moreover, the highest silhouette score does not always align with the lowest SSE value. An in-depth examination of both SSE and silhouette scores across different cluster numbers, as shown in Figure 9, reveals varying trends and values. To identify the optimal cluster count, we analyzed the variance between the silhouette scores and SSE for each potential number of clusters (k). The ideal number of clusters achieves a balance between minimizing SSE and maximizing the silhouette score across all considerations. Figure 9 demonstrates that the optimal number of clusters, K = 6, is determined where the variance of SSE and the variance of silhouette scores between all plots are minimized. The variance for each SSE and silhouette score are provided in Table 3.
The K-means algorithm, set with $K = 6$, results in six clusters. However, the random assignment of cluster labels by K-means complicates the automated identification of clusters as either crop residue or background solely based on these labels. This is due to the algorithm’s unsupervised nature, where a cluster labeled “1” might represent crop residue in one instance and background soil in another. Figure 10 illustrates how the same cluster labels appear across four different images, with identical colors indicating the same K-means-assigned labels. For example, a blue-colored cluster labeled “5” may indicate crop residue in one image (due to high RGB values) and soil in another (due to low RGB values), complicating the distinction between foreground and background. To mitigate this issue, we use the clusters’ centroids as the definitive labels, enabling us to categorize clusters into crop residue or background based on their average pixel intensity. By establishing a threshold value of 110, clusters with an average intensity above this threshold are classified as crop residue, while those below are considered background. This threshold was derived from analyzing multiple images, noting that crop residue generally exhibits higher RGB values. Subsequently, we create a binary map to distinguish crop residue and apply the line-point transect method for CRC estimation. By evaluating the presence of crop residue at every foot along a transect, and replicating the line-point transect method on the field, we achieve a precise assessment of CRC.
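The centroid-intensity relabeling described above can be sketched as follows. The threshold of 110 is the paper’s value; the toy centroids, label map, and function name are invented for illustration:

```python
import numpy as np

def clusters_to_binary(labels, centroids, threshold=110.0):
    """Relabel K-means clusters as residue (1) or background (0).

    labels:    (H, W) array of cluster ids from K-means.
    centroids: (K, 3) array of cluster centers in RGB space.
    A cluster whose mean RGB intensity exceeds the threshold is treated
    as crop residue (bright pixels), per the rule described in the text.
    """
    mean_intensity = centroids.mean(axis=1)             # (K,) per-cluster brightness
    is_residue = (mean_intensity > threshold).astype(np.uint8)
    return is_residue[labels]                           # per-pixel lookup

# Toy example: two clusters, one bright (residue) and one dark (soil).
centroids = np.array([[200.0, 190.0, 180.0], [60.0, 55.0, 50.0]])
labels = np.array([[0, 1], [1, 0]])
print(clusters_to_binary(labels, centroids))
# → [[1 0]
#    [0 1]]
```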

3.2. PCA-Otsu Method

To overcome the limitations of the K-means method in determining cluster numbers and setting thresholds for crop residue and background differentiation, we introduced a novel approach using Principal Component Analysis (PCA) to convert images into grayscale, followed by Otsu thresholding for precise foreground-background separation. Unlike K-means, which relies on a predefined number of clusters with an arbitrary threshold for detecting crop residue, this method automates threshold determination, enhancing both accuracy and efficiency. This process starts by projecting RGB data onto a principal component that captures the maximum variance, using the largest eigenvalue vector to indicate the most informative dimension. Consequently, pixel values are transformed onto this primary vector. The Otsu method then automatically sets the threshold, distinguishing crop residue from soil. This technique, unlike the initial cluster mean comparison, adapts to varying field conditions, offering improved segmentation accuracy.

3.2.1. Principal Component Analysis

Principal Component Analysis (PCA) simplifies datasets by transforming them into a new space where the most significant variations are captured with fewer dimensions. This process generates principal components that are linearly uncorrelated [39], with the first capturing the most variance and each subsequent component capturing progressively less variance orthogonal to the previous components [39]. PCA identifies the vector associated with the largest eigenvalue, signifying the dimension with the highest variance, and containing the most information about the dataset. As depicted in Table 4, the first eigenvalue markedly differs from the second and third eigenvalues, indicating that the vector corresponding to the first eigenvalue contains substantial information, while the second and third eigenvectors hold comparatively less information. Utilizing PCA, we can reduce the complexity of RGB spectral data into a single dimension that highlights the distinction between soil and crop residue. For effective segmentation between crop residue and background, we apply Otsu’s method to identify an optimal threshold, thus dividing the image into foreground (crop residue) and background (soil or vegetation).
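The projection onto the first principal component can be sketched in NumPy, assuming an (H, W, 3) RGB array; the toy image and function name below are invented for illustration:

```python
import numpy as np

def pca_grayscale(rgb):
    """Project an RGB image onto its first principal component.

    rgb: (H, W, 3) float array. Returns an (H, W) 'grayscale' image in which
    soil/residue contrast is concentrated along the direction of maximum
    variance (the eigenvector with the largest eigenvalue of the pixel
    covariance matrix).
    """
    pixels = rgb.reshape(-1, 3)
    centered = pixels - pixels.mean(axis=0)
    cov = np.cov(centered, rowvar=False)       # 3x3 covariance of RGB channels
    eigvals, eigvecs = np.linalg.eigh(cov)     # eigenvalues in ascending order
    pc1 = eigvecs[:, -1]                       # eigenvector of largest eigenvalue
    return (centered @ pc1).reshape(rgb.shape[:2])

# Toy 2x2 image: two bright (residue-like) and two dark (soil-like) pixels
# should land on opposite sides of the projected axis.
img = np.array([[[200., 195., 190.], [50., 45., 40.]],
                [[210., 205., 200.], [55., 50., 45.]]])
print(pca_grayscale(img).round(1))
```

Note that the eigenvector’s sign is arbitrary, so bright pixels may map to either the positive or negative end; Otsu’s thresholding of the projected values is unaffected by this ambiguity.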

3.2.2. Otsu Threshold

Binarization involves setting a grayscale threshold to differentiate foreground from background pixels. Otsu’s method optimizes this threshold by minimizing within-class variance, effectively partitioning pixels into two groups: those above the threshold (foreground) and those below (background) [40]. It calculates the best threshold value by maximizing the variance between these classes, considering the intensity distribution of the entire image [41]. The Otsu threshold is computed as:
$$\sigma_B^2 = W_B \, W_f \, (\mu_B - \mu_f)^2$$
where $W_B$ and $W_f$ are the fractions of pixels in the background and foreground, respectively, computed by dividing the number of pixels in each class by the total number of pixels, and $\mu_B$, $\mu_f$ denote the average intensity values of the background and foreground pixels. To better understand the clusters’ distribution, Figure 11 depicts each cluster in a unique color, with RGB values ranging from 0 to 255. This representation shows that brighter pixels (higher RGB values) are typically crop residues, whereas darker pixels (lower RGB values) correspond to the background soil. The arrangement of pixel values in Figure 11a forms a cylindrical shape aligned with the gray line vector illustrated in Figure 11b, providing a clear visual distinction between soil and crop residue in the field.
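A from-scratch sketch of Otsu’s search, scanning candidate thresholds and maximizing the between-class variance defined above (the toy grayscale values and function name are illustrative; a library routine such as scikit-image’s `threshold_otsu` would serve the same purpose):

```python
import numpy as np

def otsu_threshold(gray):
    """Otsu's threshold for an 8-bit grayscale image.

    Scans all candidate thresholds t and maximizes the between-class variance
    sigma_B^2 = W_B * W_f * (mu_B - mu_f)^2, where W and mu are the pixel
    fractions and mean intensities of the background/foreground classes.
    """
    hist, _ = np.histogram(gray.ravel(), bins=256, range=(0, 256))
    total = gray.size
    best_t, best_var = 0, -1.0
    for t in range(1, 256):
        w_b = hist[:t].sum() / total          # background pixel fraction
        w_f = 1.0 - w_b                       # foreground pixel fraction
        if w_b == 0 or w_f == 0:
            continue
        mu_b = (np.arange(t) * hist[:t]).sum() / hist[:t].sum()
        mu_f = (np.arange(t, 256) * hist[t:]).sum() / hist[t:].sum()
        var_between = w_b * w_f * (mu_b - mu_f) ** 2
        if var_between > best_var:
            best_t, best_var = t, var_between
    return best_t

# Bimodal toy image: dark soil-like values (~40) and bright residue (~200).
gray = np.array([40, 42, 38, 41, 200, 198, 202, 199])
print(otsu_threshold(gray))  # a threshold between the two modes
```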

3.3. Validation

To validate the binary maps of crop residue created using both the K-means and PCA-Otsu methods, we simulated the line-point transect method and compared these simulations with ground truth data collected in the field. This simulation was facilitated by accurately locating the transect line’s start and end points via GPS. Starting from the plot center, the method progresses by examining 10 cm by 10 cm sections along the transect. Each section is checked for crop residue presence; sections with residue are counted and sections without are ignored. This window size compensates for possible inaccuracies in marking transect points and tape measure alignment. The percentage of CRC is calculated by dividing the count of residue-containing sections by the total sections, typically 50, reflecting the 50-foot tape measure length. To streamline data processing across multiple fields, an automated Python script was developed for counting. The results from the counting process, which replicated the line-point transect method, were then validated against ground reference data collected using the line-point transect method. In a subsequent step to reduce biases associated with the transect method and human error, a visual assessment phase was introduced. This involved using references from Iowa State University Integrated Crop Management (Iowa State University Integrated Crop Management: pages 69–71 of the IC-488 (8)—13 May 2002 issue. https://crops.extension.iastate.edu/encyclopedia/methods-measuring-crop-residue, accessed on 21 March 2024) and Purdue University’s guide on estimating corn and soybean residue cover for comparison with orthoimages of the fields (Figure 12). This comparison aimed to approximate CRC. Conducting this assessment twice and averaging the results minimized biases. Plots with a CRC discrepancy greater than ±20% from ground truth were excluded from further regression analysis.
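The simulated transect count described above might be sketched as follows. The window half-width of 5 pixels approximates the 10 cm by 10 cm check window at the 1 cm GSD reported earlier; the binary map, endpoint coordinates, and helper names are assumptions for illustration, not the paper’s actual script:

```python
import numpy as np

def simulate_transect(binary_map, start, end, n_marks=50, win=5):
    """Replicate the line-point transect on a residue binary map.

    binary_map: (H, W) array, 1 = residue pixel, 0 = background.
    start, end: (row, col) pixel coordinates of the tape ends (from GPS).
    At each of n_marks evenly spaced points, a square window of side
    2*win + 1 pixels (~10 cm at 1 cm GSD with win=5) is checked for any
    residue; CRC is the percentage of windows containing residue.
    """
    rows = np.linspace(start[0], end[0], n_marks).round().astype(int)
    cols = np.linspace(start[1], end[1], n_marks).round().astype(int)
    h, w = binary_map.shape
    hits = 0
    for r, c in zip(rows, cols):
        r0, r1 = max(r - win, 0), min(r + win + 1, h)
        c0, c1 = max(c - win, 0), min(c + win + 1, w)
        if binary_map[r0:r1, c0:c1].any():
            hits += 1
    return 100.0 * hits / n_marks

# Toy map with residue covering the left half only: a horizontal transect
# across the full width should score roughly 50% cover.
m = np.zeros((20, 100), dtype=np.uint8)
m[:, :50] = 1
print(simulate_transect(m, (10, 0), (10, 99)))
```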
While this visual assessment helps filter outliers and improve dataset quality, it is still susceptible to human bias and error, underscoring its role as a supplementary evaluation rather than a primary data source.
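The simulated transect described above can be sketched in a few lines of NumPy. This is a minimal illustration, not the study's actual script: the function name, the (row, col) endpoint convention, and the 11-pixel window (~10 cm square at ~1 cm ground sampling distance) are assumptions.

```python
import numpy as np

def transect_crc(binary_map, start, end, n_sections=50, half_win=5):
    """Replicate the line-point transect on a binary residue map.

    binary_map : 2D array, 1 = crop residue, 0 = background soil
    start, end : (row, col) pixel coordinates of the transect endpoints
    n_sections : number of check points (50, one per foot of tape)
    half_win   : half-width in pixels of the square window inspected at
                 each point (~10 cm x 10 cm at ~1 cm ground sampling)
    """
    rows = np.linspace(start[0], end[0], n_sections).round().astype(int)
    cols = np.linspace(start[1], end[1], n_sections).round().astype(int)
    hits = 0
    for r, c in zip(rows, cols):
        window = binary_map[max(r - half_win, 0):r + half_win + 1,
                            max(c - half_win, 0):c + half_win + 1]
        if window.any():               # residue found in this section
            hits += 1
    return 100.0 * hits / n_sections   # CRC percentage for this transect
```

Since each plot in the study has multiple transects radiating from its center, a per-plot CRC estimate would be the mean of this function over the plot's transects.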

4. Results

4.1. Result of CRC Estimation Using K-Means Unsupervised Method

In this section, we analyze the results of replicating the line-point transect method on the binary maps (crop residue vs. background) generated by the K-means algorithm. Figure 13a illustrates the estimation of corn residue with an R 2 = 0.26 , slightly outperforming the estimation of soybean residue in Figure 13b with an R 2 = 0.18 . This difference can be attributed to the larger size of corn residue, which enhances its detectability in orthophoto images compared with soybean residue. Next, the data were analyzed by the season during which they were collected: winter (December 2021 to March 2022) and spring (April 2022 to May 2022). The CRC estimation results for winter and spring are displayed in Figure 14a and Figure 14b, respectively. Notably, the winter CRC estimation yields an R 2 = 0.6 , significantly outperforming the spring estimation with an R 2 = 0.21 . This aligns with findings from [19], which indicate the difficulty of differentiating bare soil from advanced crop residue stages with remote sensing techniques, particularly when residues have weathered or crops have progressed in their phenological stages. It is worth noting that the higher winter accuracy may partly reflect the smaller winter sample size, as we visited fewer fields during this season due to adverse weather conditions.
To investigate the effect of different tillage practices on CRC estimation, we analyzed the results of the K-means algorithm by tillage practice, as shown in Figure 15. Figure 15a depicts CRC estimation using the K-means algorithm against ground reference data for conventionally tilled fields. The estimation exhibits poor accuracy, as the K-means algorithm underestimates CRC for many plots. One possible reason for this poor performance in conventional tillage is that the soil is fully disturbed, leading to a thorough mixing of soil and crop residue and making it challenging for the algorithm to separate CRC from the background soil. Furthermore, mixing soil and crop residue during plowing can increase the transfer of soil moisture to the crop residue, and this added moisture makes the residue less discernible in remote sensing images. As illustrated in Figure 15b, CRC estimation in strip tillage fields shows a slightly higher correlation with the reference data than in conventional tillage fields. However, the accuracy remains lower than that observed in no-till fields (Figure 15c). The higher accuracy in no-till fields is attributed to the undisturbed nature of the soil, which minimizes moisture transfer from soil to crop residue, thereby making the residue more visible in remote sensing imagery.
The analysis of replicating the line-point transect method on binary images generated from the K-means algorithm is detailed in Figure 16a. While the estimation for most measurements appears reasonably accurate, with several points aligning closely with the one-to-one line, a significant underestimation is highlighted by a low R 2 = 0.21 . This poor performance is attributed primarily to two factors. First, the K-means algorithm occasionally missed crop residue, particularly in shadowed areas, where the lower RGB values of residue pixels fell below the threshold and were classified as background. Second, the manual counting of crop residue in the field introduced human error and biases, as is common in experimental procedures involving human judgment. Upon comparing the estimation results with ground reference data and applying a visual assessment filter, Figure 16b reveals significant improvements, with the R 2 increasing to 0.79 and a noticeable reduction in error rates. These results include both corn and soybean fields across winter and spring seasons and all tillage practices. It is important to note that the visual assessment is subject to human bias and error, because it relies solely on reference images and human judgment. Its primary purpose is to filter outliers from the line-point transect method and improve the quality of the dataset; relying on visual assessment alone as reference data is not recommended.
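The before/after-filtering comparison can be reproduced with standard formulas. The sketch below computes R 2 and RMSE and applies the ±20% visual-assessment filter described in the validation section; note that R 2 here is computed against the one-to-one line for simplicity, whereas the paper's scatter plots may report R 2 from a fitted regression line:

```python
import numpy as np

def regression_metrics(estimated, reference):
    """R^2 (against the one-to-one line) and RMSE of estimated vs. reference CRC."""
    est = np.asarray(estimated, dtype=float)
    ref = np.asarray(reference, dtype=float)
    ss_res = ((ref - est) ** 2).sum()              # residuals about the 1:1 line
    ss_tot = ((ref - ref.mean()) ** 2).sum()       # total variance of reference
    rmse = np.sqrt(((ref - est) ** 2).mean())
    return 1.0 - ss_res / ss_tot, rmse

def filter_by_visual_assessment(estimated, reference, visual, tol=20.0):
    """Drop plots whose visually assessed CRC differs from the ground
    reference by more than +/- tol percentage points."""
    est, ref, vis = (np.asarray(a, dtype=float)
                     for a in (estimated, reference, visual))
    keep = np.abs(vis - ref) <= tol
    return est[keep], ref[keep]
```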
Figure 17 shows the residue on the field (orthophoto) alongside a binary crop residue map generated by the K-means algorithm. For improved visualization, the binary map has been segmented and color-coded, with each distinct color representing different crop residue types.
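A binary residue map of the kind shown in Figure 17 can be produced by clustering RGB pixels and labeling the brighter clusters as residue, consistent with the observation in Figure 11 that residue pixels are brighter than the background soil. The sketch below uses a small Lloyd's K-means in NumPy; the brightness-midpoint rule for merging clusters into two classes is an illustrative choice, not necessarily the study's exact labeling rule:

```python
import numpy as np

def kmeans_residue_map(rgb, n_clusters=4, n_iter=20, seed=0):
    """Cluster RGB pixels with Lloyd's K-means, then merge the brighter
    clusters into a residue class (1) and the darker ones into soil (0)."""
    h, w, _ = rgb.shape
    X = rgb.reshape(-1, 3).astype(float)
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=n_clusters, replace=False)]
    for _ in range(n_iter):
        # assign every pixel to its nearest cluster center
        d = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
        labels = d.argmin(axis=1)
        # recompute each center as the mean of its members
        for k in range(n_clusters):
            if (labels == k).any():
                centers[k] = X[labels == k].mean(axis=0)
    brightness = centers.mean(axis=1)                   # mean RGB per cluster
    cut = (brightness.min() + brightness.max()) / 2.0   # dark/bright midpoint
    residue_clusters = np.where(brightness > cut)[0]
    binary = np.isin(labels, residue_clusters).astype(np.uint8)
    return binary.reshape(h, w)
```

The resulting binary map is what the simulated line-point transect is sampled from.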

4.2. Result of CRC Estimation Using PCA-Otsu Method

The PCA-Otsu method’s CRC estimation results, depicted in Figure 18, show corn residue estimations ( R 2 = 0.31 ) slightly outperforming soybean residue estimations ( R 2 = 0.24 ). This trend aligns with the K-means findings, indicating minimal variance in accuracy across crop types. The marginally better detection of corn residue is attributed to its larger size, which facilitates identification in UAS imagery. Seasonal comparisons in Figure 19 reveal a stronger correlation in winter ( R 2 = 0.33 , Figure 19a) than in spring ( R 2 = 0.19 , Figure 19b), suggesting that the less weathered residues present in winter are more detectable. Nonetheless, the noted seasonal variation in accuracy may be partly due to the smaller number of samples collected during winter.
The analysis of CRC estimation across different tillage practices, as depicted in Figure 20, reveals distinct patterns. Conventional tillage, shown in Figure 20a, exhibits a low correlation with ground reference data, largely due to the underestimation of CRC in many fields. This poor performance is likely due to the blending of crop residue with soil during plowing and the resultant increase in residue moisture, which negatively impacts its visibility in remote sensing images. Strip tillage, presented in Figure 20b, shows a marginally improved correlation. This improvement is attributed to the limited soil disturbance, preserving the integrity of undisturbed areas and reducing the mixing of crop residue and soil, thus enhancing residue detection. No-till, illustrated in Figure 20c, achieves the highest correlation ( R 2 = 0.27 ), echoing findings from the K-means analysis. However, this correlation level remains insufficient for definitive conclusions. The primary issue with the model’s underestimation of CRC could be compounded by the overestimation of CRC in ground reference data. This overestimation may stem from human errors and biases during the implementation of the line-point transect method.
The outcomes of PCA-Otsu, validated against ground reference data before and after filtering by visual assessment, are illustrated in Figure 21a and Figure 21b, respectively. This method exhibits poorer performance compared with the K-means algorithm. The Otsu method typically demonstrates effectiveness when applied to histograms with a distinct bimodal distribution, featuring a well-defined threshold between two clusters [41]. However, the challenge arises from the absence of a clear threshold between crop residue and the background, making it difficult for the algorithm to determine the optimal threshold between crop residue and soil.
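For completeness, the PCA-Otsu pipeline can be sketched as follows: project the RGB pixels onto the first principal component, rescale the scores to 8 bits, and binarize with Otsu's threshold. This is an illustrative NumPy implementation; the 8-bit rescaling and the brightness-based class flip are assumptions, not details taken from the study:

```python
import numpy as np

def pca_otsu_residue_map(rgb):
    """Project RGB pixels onto the first principal component and binarize
    with Otsu's threshold; the brighter class is labeled residue (1)."""
    h, w, _ = rgb.shape
    X = rgb.reshape(-1, 3).astype(float)
    Xc = X - X.mean(axis=0)
    # first principal component = eigenvector of the covariance matrix
    # with the largest eigenvalue (np.linalg.eigh sorts ascending)
    _, vecs = np.linalg.eigh(np.cov(Xc, rowvar=False))
    pc1 = Xc @ vecs[:, -1]
    # rescale PC1 scores to 0..255 so a histogram-based Otsu applies
    g = np.round(255 * (pc1 - pc1.min()) / (np.ptp(pc1) + 1e-12)).astype(int)
    p = np.bincount(g, minlength=256) / g.size
    idx = np.arange(256)
    best_t, best_var = 0, 0.0
    for t in range(1, 256):
        w_b, w_f = p[:t].sum(), p[t:].sum()
        if w_b == 0 or w_f == 0:
            continue
        mu_b = (idx[:t] * p[:t]).sum() / w_b
        mu_f = (idx[t:] * p[t:]).sum() / w_f
        v = w_b * w_f * (mu_b - mu_f) ** 2
        if v > best_var:
            best_t, best_var = t, v
    binary = (g >= best_t).astype(np.uint8)
    # PC1 sign is arbitrary: flip so the residue class is the brighter one
    intensity = X.mean(axis=1)
    if intensity[binary == 1].mean() < intensity[binary == 0].mean():
        binary = 1 - binary
    return binary.reshape(h, w)
```

When the PC1 histogram lacks a clear bimodal structure, as noted above, the chosen threshold becomes unreliable, which is consistent with this method's weaker results.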

5. Discussion

In this study, we applied two unsupervised algorithms—K-means and PCA-Otsu—to UAS-borne RGB imagery and validated the results against ground reference data collected through the line-point transect method. Our findings revealed a superior accuracy of the K-means algorithm over the PCA-Otsu method. The lower performance of the PCA-Otsu method is attributed to the difficulty of the Otsu thresholding method in distinguishing between crop residue and soil when the surface reflectance of these elements does not differ significantly. Additionally, our analysis showed that corn residue estimation correlated more closely with the ground reference data than soybean residue estimation for both algorithms. This higher accuracy in corn residue detection can be linked to the larger size of corn residues, which are easier to identify in the imagery than the smaller soybean residues. Despite this, the accuracy disparity between corn and soybean residue detection was not significant, suggesting that the use of very high-resolution imagery (1 cm) may mitigate some differences between the estimation of CRC in corn and soybean fields. This points to the potential value of exploring even higher resolution datasets to better understand the impact of resolution on the accuracy of crop residue detection.
Moreover, this study found a significantly higher accuracy in CRC estimation during winter compared with spring. This aligns with prior research indicating that crop residues are more weathered in spring, making it challenging to distinguish them from the background soil using remote sensing techniques. It is important to note, however, that the winter and spring datasets were of unequal size, since fewer fields were visited in winter due to adverse weather conditions. This imbalance could affect the results, suggesting the need for future studies to collect equal numbers of samples in both seasons to gain clearer insights into the effects of weathering on CRC detectability. Our investigation into tillage practices revealed that the K-means and PCA-Otsu algorithms are more accurate in estimating CRC for no-till practices than for conventional and strip tillage. This is likely because no-till practices leave the soil and crop residue less disturbed, making it easier for the algorithms to detect residue. In contrast, in conventionally tilled fields, the soil is heavily disturbed, blending soil and crop residue together, which reduces the algorithms’ ability to accurately distinguish between the two and results in low estimation accuracy. Additionally, the mixing during plowing transfers moisture from the soil to the crop residue, increasing the residue’s moisture content and complicating the estimation process. However, despite variations in accuracy across crop types, seasons, and tillage practices, both algorithms were effective in capturing these distinctions, indicating their utility in predicting tillage practices based on CRC estimation.
This research faces several limitations and challenges that could impact the accuracy of CRC estimation. These include biases and errors in collecting ground reference data through the line-point transect method, potential inaccuracies in GPS readings, and the effect of varying soil moisture and water content in crop residue. For future studies, we recommend conducting repeated measurements using the line-point transect method by various individuals to minimize bias and errors. Additionally, incorporating the moisture content of soil and residue into the estimation model is crucial, as it influences surface reflectance in remote sensing imagery. Implementing these adjustments could enhance the accuracy of CRC estimation and further advance its application in agricultural management practices.

6. Conclusions

Crop Residue Cover (CRC), the remnants of crops left in fields after harvest, is crucial for retaining soil moisture and enhancing agricultural productivity. Accurately measuring the percentage of CRC is essential for informed decision making in agriculture. Traditional methods for CRC measurement, such as roadside surveys and photo comparisons, suffer from subjective biases and low accuracy. Moreover, these methods are time-consuming and costly, particularly when covering large areas or conducting systematic assessments. Space-borne remote sensing, while useful, typically offers resolutions too coarse to capture the spatial variability of CRC and is thus unsuitable for precise CRC estimation. This paper introduces a novel method leveraging high-resolution UAS imagery to estimate CRC in corn and soybean fields, utilizing the K-means unsupervised algorithm and PCA combined with the Otsu thresholding technique. Validation against ground reference data reveals that both methods are capable of estimating CRC, with the K-means method achieving an R 2 = 0.79 and an RMSE (Root Mean Square Error) of 10.04%, while the PCA-Otsu method showed an R 2 = 0.46 with an RMSE of 13.27%. The lower accuracy of the PCA-Otsu method is attributed to the algorithm’s difficulty in identifying thresholds in images where crop residue and soil are not spectrally distinct, a common scenario in corn and soybean fields. Moreover, our study found that CRC estimations were more accurate during winter, aligning with literature suggesting that weathered residue cover is harder to detect in remote sensing imagery. Estimations for corn fields were slightly more accurate due to the larger size of corn residue, which improves detectability, though the difference from soybean fields was minimal.
Analyzing results by tillage practice, we found that no-till fields yielded the best CRC estimations, while conventionally tilled fields performed the poorest. This is likely because conventional tillage mixes soil and residue, complicating the distinction between the two and increasing the residue’s moisture content. Despite these promising findings, this study faced some limitations, including potential biases from line-point transect measurements, fewer winter samples due to weather conditions, the relatively coarse resolution of UAS imagery for detecting the smaller soybean residue, and interference from rain puddles, all of which adversely impact accuracy. To improve CRC estimation accuracy, future research should focus on several key areas. Repeating the line-point transect method with multiple observers can help minimize biases introduced by individual measurements. Integrating soil and crop residue moisture into the CRC estimation model is another recommended approach for enhancing precision. Additionally, investigating the use of Synthetic Aperture Radar (SAR) for CRC estimation across various crops and larger expanses could provide valuable insights. These strategies are poised to enhance CRC estimation techniques, offering significant benefits for agricultural management practices.

Author Contributions

Conceptualization, F.A. and J.J.; methodology, F.A. and J.J.; validation, F.A. and J.J.; formal analysis, F.A.; investigation, F.A.; resources, J.J.; writing—original draft preparation, F.A.; writing—review and editing, F.A. and J.J.; visualization, F.A.; supervision, J.J.; project administration, J.J.; funding acquisition, J.J. All authors have read and agreed to the published version of the manuscript.

Funding

This research was supported by Hummingbird Technologies. Award No.: 22036025.

Data Availability Statement

All orthomosaic images are publicly viewable but not available for download at https://hub.digitalforestry.org (accessed on 21 March 2024).

Acknowledgments

We extend our gratitude to the members of the Geospatial Data Science Lab (GDSL) for their invaluable assistance and dedication during the data collection phase of this research from December 2021 to May 2022. More information about GDSL members can be found at https://gdsl.org/our-team/ (accessed on 21 March 2024).

Conflicts of Interest

The authors declare no conflict of interest.

Abbreviations

The following abbreviations are used in this manuscript:
UAS	Unmanned Aerial Systems
CRC	Crop Residue Cover
PCA	Principal Component Analysis

References

  1. McNairn, H.; Brisco, B. The Application of C-Band Polarimetric SAR for Agriculture: A Review. Can. J. Remote Sens. 2004, 30, 525–542. [Google Scholar] [CrossRef]
  2. Wang, S.; Guan, K.; Zhang, C.; Zhou, Q.; Wang, S.; Wu, X.; Jiang, C.; Peng, B.; Mei, W.; Li, K.; et al. Cross-Scale Sensing of Field-Level Crop Residue Cover: Integrating Field Photos, Airborne Hyperspectral Imaging, and Satellite Data. Remote Sens. Environ. 2023, 285, 113366. [Google Scholar] [CrossRef]
  3. Daughtry, C.S.T.; Hunt, E.R.; Doraiswamy, P.C.; McMurtrey, J.E. Remote Sensing the Spatial Distribution of Crop Residues. Agron. J. 2005, 97, 864–871. [Google Scholar] [CrossRef]
  4. Singh, R.B.; Ray, S.S.; Bal, S.K.; Sekhon, B.S.; Gill, G.S.; Panigrahy, S. Crop Residue Discrimination Using Ground-Based Hyperspectral Data. J. Indian Soc. Remote Sens. 2013, 41, 301–308. [Google Scholar] [CrossRef]
  5. Derpsch, R.; Friedrich, T.; Kassam, A.; Hongwen, L. Current Status of Adoption of No-till Farming in the World and Some of Its Main Benefits. Int. J. Agric. Biol. Eng. 2010, 3, 1–25. [Google Scholar] [CrossRef]
  6. Yue, J.; Tian, Q.; Dong, X.; Xu, K.; Zhou, C. Using Hyperspectral Crop Residue Angle Index to Estimate Maize and Winter-Wheat Residue Cover: A Laboratory Study. Remote Sens. 2019, 11, 807. [Google Scholar] [CrossRef]
  7. Kosmowski, F.; Stevenson, J.; Campbell, J.; Ambel, A.; Haile Tsegay, A. On the Ground or in the Air? A Methodological Experiment on Crop Residue Cover Measurement in Ethiopia. Environ. Manag. 2017, 60, 705–716. [Google Scholar] [CrossRef]
  8. Zheng, B.; Campbell, J.B.; Serbin, G.; Galbraith, J.M. Remote Sensing of Crop Residue and Tillage Practices: Present Capabilities and Future Prospects. Soil Tillage Res. 2014, 138, 26–34. [Google Scholar] [CrossRef]
  9. Hively, W.D.; Lamb, B.T.; Daughtry, C.S.T.; Shermeyer, J.; McCarty, G.W.; Quemada, M. Mapping Crop Residue and Tillage Intensity Using WorldView-3 Satellite Shortwave Infrared Residue Indices. Remote Sens. 2018, 10, 1657. [Google Scholar] [CrossRef]
  10. Stern, A.J.; Daughtry, C.S.T.; Hunt, E.R.; Gao, F. Comparison of Five Spectral Indices and Six Imagery Classification Techniques for Assessment of Crop Residue Cover Using Four Years of Landsat Imagery. Remote Sens. 2023, 15, 4596. [Google Scholar] [CrossRef]
  11. Beeson, P.C.; Daughtry, C.S.T.; Hunt, E.R.; Akhmedov, B.; Sadeghi, A.M.; Karlen, D.L.; Tomer, M.D. Multispectral Satellite Mapping of Crop Residue Cover and Tillage Intensity in Iowa. J. Soil Water Conserv. 2016, 71, 385–395. [Google Scholar] [CrossRef]
  12. Karan, S.K.; Hamelin, L. Crop Residues May Be a Key Feedstock to Bioeconomy but How Reliable Are Current Estimation Methods? Resour. Conserv. Recycl. 2021, 164, 105211. [Google Scholar] [CrossRef]
  13. de Paul Obade, V.; Gaya, C.O.; Obade, P.T. Statistical Diagnostics for Sensing Spatial Residue Cover. Precis. Agric. 2023, 24, 1932–1964. [Google Scholar] [CrossRef]
  14. Quemada, M.; Hively, W.D.; Daughtry, C.S.T.; Lamb, B.T.; Shermeyer, J. Improved Crop Residue Cover Estimates Obtained by Coupling Spectral Indices for Residue and Moisture. Remote Sens. Environ. 2018, 206, 33–44. [Google Scholar] [CrossRef]
  15. Sullivan, D.G.; Fulmer, J.L.; Strickland, T.C.; Masters, M.; Yao, H. Field Scale Evaluation of Crop Residue Cover Distribution Using Airborne and Satellite Remote Sensing. In Proceedings of the 2007 Georgia Water Resources Conference, Athens, GA, USA, 27–29 March 2007. [Google Scholar]
  16. Quemada, M.; Daughtry, C.S.T. Spectral Indices to Improve Crop Residue Cover Estimation under Varying Moisture Conditions. Remote Sens. 2016, 8, 660. [Google Scholar] [CrossRef]
  17. Yue, J.; Tian, Q. Estimating Fractional Cover of Crop, Crop Residue, and Soil in Cropland Using Broadband Remote Sensing Data and Machine Learning. Int. J. Appl. Earth Obs. Geoinf. 2020, 89, 102089. [Google Scholar] [CrossRef]
  18. Serbin, G.; Daughtry, C.S.T.; Hunt, E.R.; Brown, D.J.; McCarty, G.W. Effect of Soil Spectral Properties on Remote Sensing of Crop Residue Cover. Soil Sci. Soc. Am. J. 2009, 73, 1545–1558. [Google Scholar] [CrossRef]
  19. Bannari, A.; Pacheco, A.; Staenz, K.; McNairn, H.; Omari, K. Estimating and Mapping Crop Residues Cover on Agricultural Lands Using Hyperspectral and IKONOS Data. Remote Sens. Environ. 2006, 104, 447–459. [Google Scholar] [CrossRef]
  20. Quemada, M.; Hively, W.D.; Daughtry, C.S.T.; Lamb, B.T.; Shermeyer, J. Improved Crop Residue Cover Estimates from Satellite Images by Coupling Residue and Water Spectral Indices. In Proceedings of the IGARSS 2018—2018 IEEE International Geoscience and Remote Sensing Symposium, Valencia, Spain, 22–27 July 2018; pp. 5425–5428. [Google Scholar] [CrossRef]
  21. Cai, W.; Zhao, S.; Zhang, Z.; Peng, F.; Xu, J. Comparison of Different Crop Residue Indices for Estimating Crop Residue Cover Using Field Observation Data. In Proceedings of the 2018 7th International Conference on Agro-Geoinformatics, Hangzhou, China, 6–9 August 2018. [Google Scholar]
  22. Sonmez, N.K.; Slater, B. Measuring Intensity of Tillage and Plant Residue Cover Using Remote Sensing. Eur. J. Remote Sens. 2016, 49, 121–135. [Google Scholar] [CrossRef]
  23. Chi, J.; Crawford, M.M. Spectral Unmixing-Based Crop Residue Estimation Using Hyperspectral Remote Sensing Data: A Case Study at Purdue University. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2014, 7, 2531–2539. [Google Scholar] [CrossRef]
  24. Ruwaimana, M.; Satyanarayana, B.; Otero, V.; Muslim, A.M.; Syafiq, A.M.; Ibrahim, S.; Raymaekers, D.; Koedam, N.; Dahdouh-Guebas, F. The advantages of using drones over space-borne imagery in the mapping of mangrove forests. PLoS ONE 2018, 13, e0200288. [Google Scholar] [CrossRef]
  25. Barnes, M.L.; Yoder, L.; Khodaee, M. Detecting Winter Cover Crops and Crop Residues in the Midwest US Using Machine Learning Classification of Thermal and Optical Imagery. Remote Sens. 2021, 13, 1998. [Google Scholar] [CrossRef]
  26. Ding, Y.; Zhang, H.; Wang, Z.; Xie, Q.; Wang, Y.; Liu, L.; Hall, C.C. A Comparison of Estimating Crop Residue Cover from Sentinel-2 Data Using Empirical Regressions and Machine Learning Methods. Remote Sens. 2020, 12, 1470. [Google Scholar] [CrossRef]
  27. Bannari, A.; Chevrier, M.; Staenz, K.; McNairn, H. Senescent Vegetation and Crop Residue Mapping in Agricultural Lands Using Artificial Neural Networks and Hyperspectral Remote Sensing. In Proceedings of the IGARSS 2003—2003 IEEE International Geoscience and Remote Sensing Symposium, Toulouse, France, 21–25 July 2003; Proceedings (IEEE Cat. No.03CH37477). Volume 7, pp. 4292–4294. [Google Scholar]
  28. Bocco, M.; Sayago, S.; Willington, E. Neural network and crop residue index multiband models for estimating crop residue cover from Landsat TM and ETM+ images. Int. J. Remote Sens. 2014, 35, 3651–3663. [Google Scholar] [CrossRef]
  29. Tao, W.; Xie, Z.; Zhang, Y.; Li, J.; Xuan, F.; Huang, J.; Li, X.; Su, W.; Yin, D. Corn Residue Covered Area Mapping with a Deep Learning Method Using Chinese GF-1 B/D High Resolution Remote Sensing Images. Remote Sens. 2021, 13, 2903. [Google Scholar] [CrossRef]
  30. Ribeiro, A.; Ranz, J.; Burgos-Artizzu, X.P.; Pajares, G.; del Arco, M.J.S.; Navarrete, L. An Image Segmentation Based on a Genetic Algorithm for Determining Soil Coverage by Crop Residues. Sensors 2011, 11, 6480–6492. [Google Scholar] [CrossRef]
  31. Upadhyay, P.C.; Lory, J.A.; DeSouza, G.N.; Lagaunne, T.A.P.; Spinka, C.M. Classification of Crop Residue Cover in High-Resolution RGB Images Using Machine Learning. J. ASABE 2022, 65, 75–86. [Google Scholar] [CrossRef]
  32. Yue, J.; Fu, Y.; Guo, W.; Feng, H.; Qiao, H. Estimating fractional coverage of crop, crop residue, and bare soil using shortwave infrared angle index and Sentinel-2 MSI. Int. J. Remote Sens. 2022, 43, 1253–1273. [Google Scholar] [CrossRef]
  33. Jain, A.K.; Murty, M.N.; Flynn, P.J. Data Clustering: A Review. ACM Comput. Surv. 1999, 31, 264–323. [Google Scholar] [CrossRef]
  34. Nainggolan, R.; Perangin-angin, R.; Simarmata, E.; Tarigan, A.F. Improved the Performance of the K-Means Cluster Using the Sum of Squared Error (SSE) Optimized by Using the Elbow Method. J. Phys. Conf. Ser. 2019, 1361, 12015. [Google Scholar] [CrossRef]
  35. Muningsih, E.; Kiswati, S. Sistem Aplikasi Berbasis Optimasi Metode Elbow Untuk Penentuan Clustering Pelanggan. Joutica 2018, 3, 117. [Google Scholar] [CrossRef]
  36. Ahmed, M.; Seraj, R.; Islam, S.M.S. The K-Means Algorithm: A Comprehensive Survey and Performance Evaluation. Electronics 2020, 9, 1295. [Google Scholar] [CrossRef]
  37. Thinsungnoen, T.; Kaoungku, N.; Durongdumronchai, P.; Kerdprasop, K.; Kerdprasop, N. The Clustering Validity with Silhouette and Sum of Squared Errors. In Proceedings of the ICIAE 2015: The 3rd International Conference on Industrial Application Engineering, Kitakyushu, Japan, 28–31 March 2015; pp. 44–51. [Google Scholar]
  38. Shahapure, K.R.; Nicholas, C. Cluster Quality Analysis Using Silhouette Score. In Proceedings of the 2020 IEEE 7th International Conference on Data Science and Advanced Analytics (DSAA), Sydney, NSW, Australia, 6–9 October 2020; pp. 747–748. [Google Scholar]
  39. Kherif, F.; Latypova, A. Principal Component Analysis. In Machine Learning: Methods and Applications to Brain Disorders; Academic Press: Cambridge, MA, USA, 2020; pp. 209–225. [Google Scholar] [CrossRef]
  40. Melgani, F. Robust Image Binarization with Ensembles of Thresholding Algorithms. J. Electron. Imaging 2006, 15, 23010. [Google Scholar] [CrossRef]
  41. Yousefi, J. Image Binarization Using Otsu Thresholding Algorithm; University of Guelph: Guelph, ON, Canada, 2015. [Google Scholar]
Figure 1. Geographical distribution of the corn and soybean fields selected for this study: (a) two fields in northern IN, (b) two fields in eastern IN, and (c,d) seven fields at the Agronomy Center for Research and Education (ACRE) near Purdue University. The use of different colors to represent various groups of fields serves solely for distinction purposes.
Figure 2. The process of collecting ground reference data by implementing the line-point transect method and manually counting crop residue along each foot of a 50-foot tape measure.
Figure 3. (a) Illustrates the spatial distribution of plots within a field, and (b) shows the line-point transect measurements conducted within each plot. Each line extends from the center to each side, covering a distance of 50 feet.
Figure 4. The crop residue percentage for the first and second visits across all fields is calculated by averaging all 16 measurements collected by the line-point transect method within each field.
Figure 5. (a) Emlid Reach RS2 (https://emlid.com/reachrs2plus/, accessed on 21 March 2024). (b) Recording the coordinates of a GCP, as well as the start and end points of the tape measure for the line-point transect method.
Figure 6. (a) DJI Matrice 300. (b) Zenmuse L1, utilized for collecting RGB imagery in this study.
Figure 7. Flowchart illustrating the methodology of this study, representing the four primary phases: data collection, processing, analysis of results, and validation. Detailed tasks within each phase are explained.
Figure 8. A sum of squared error (SSE) and silhouette score variation with different numbers of clusters.
Figure 9. (a) SSE variation and (b) silhouette score across all fields. Different colors represent different plots.
Figure 10. The K-means clustering results are represented by different colors, each indicating a cluster label. In (a), the cluster labeled “purple” has the lowest average pixel value of 42, while in (b), the same “purple” cluster has an average of 68. Similarly, in (c), the cluster shown in “blue” has the lowest average pixel value within the cluster, whereas in (b), the same “blue” cluster has the highest average pixel value. However, the cluster shown in “black” has the highest average pixel value both in (c,d). This variability among cluster labels and their properties presents a challenge in classifying the clusters into two classes: crop residue and background.
Figure 11. (a) Three-dimensional representation of RGB pixel values within a single plot, where each color in the cylinder symbolizes a distinct cluster derived from the K-means clustering. (b) Visualization of the color cube with the corresponding gray line.
Figure 12. Visual assessment sources for crop residue over (a) corn fields and (b) soybean fields. Image source: “Estimating Corn and Soybean Residue Cover”, Agronomy Guide, Purdue University Cooperative Extension Service.
Figure 13. CRC estimation for (a) corn and (b) soybeans against ground reference data.
Figure 14. CRC estimation for (a) data collected during winter and (b) spring against ground reference data.
Figure 15. CRC estimation for (a) conventional tillage, (b) strip tillage, and (c) no-till against ground reference data using the K-means method.
Figure 16. CRC estimation for the result of the K-means algorithm against ground reference data: (a) before filtering the data with visual assessment and (b) after filtering the data with visual assessment.
Figure 17. (a) An RGB image displaying Crop Residue Cover (CRC) across the field. (b) The corresponding binary map generated by the K-means algorithm, where white pixels indicate crop residue and black pixels denote background soil.
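Given a binary map like the one in Figure 17b, the plot-level CRC is simply the fraction of residue (white) pixels. A one-line sketch:

```python
import numpy as np

def crc_percent(binary_map):
    """Crop Residue Cover (%) as the share of residue (white/True)
    pixels in a binary residue map such as the one in Figure 17b."""
    binary_map = np.asarray(binary_map)
    return 100.0 * np.count_nonzero(binary_map) / binary_map.size
```

This is the quantity plotted against the line-point transect ground reference in the scatter plots above.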
Figure 18. CRC estimation for the result of the PCA-Otsu method for (a) corn and (b) soybeans against ground reference data.
Figure 19. CRC estimation for data collected during (a) winter and (b) spring against ground reference data using the PCA-Otsu method.
Figure 20. CRC estimation for (a) conventional tillage, (b) strip tillage, and (c) no-till against ground reference data using the PCA-Otsu method.
Figure 21. CRC estimation using the PCA-Otsu method compared against the ground reference data: (a) before filtering the data with visual assessment and (b) after filtering the data with visual assessment.
Table 1. Details of the fields including field name, crop type, tillage practice, latitude, and longitude.
| Field | Crop | Tillage Practice | Longitude | Latitude |
|---|---|---|---|---|
| Mckinnis North | Corn | Conventional | −86.985901 | 40.4813251 |
| Mckinnis South | Corn | Conventional | −86.983845 | 40.4742143 |
| Griner–Wag–Rus | Corn | Conventional | −86.961483 | 40.5629297 |
| Field 57 | Soybeans | Conventional | −86.999923 | 40.4892381 |
| Field 69 | Soybeans | Conventional | −87.000129 | 40.4895974 |
| Chris 40 | Soybeans | Conventional | −86.952211 | 40.541898 |
| Field 267 East | Corn | Strip-till | −86.534623 | 41.7025976 |
| Field 267 West | Corn | Strip-till | −86.533099 | 41.7030615 |
| Field M1 | Corn | Strip-till | −85.15134 | 40.2488210 |
| Field J | Soybeans | Strip-till | −85.152896 | 40.2582106 |
| Field 354 North | Soybeans | Strip-till | −86.464354 | 41.7420796 |
| Field 354 South | Soybeans | Strip-till | −86.465298 | 41.7411566 |
| County Line | Corn | No-till | −86.965141 | 40.5627906 |
| Church East | Corn | No-till | −86.967117 | 40.5822479 |
| Church West | Corn | No-till | −86.969163 | 40.5807252 |
| Don 209 Top | Soybeans | No-till | −86.961481 | 40.5751723 |
| Don 209 Side | Soybeans | No-till | −86.956971 | 40.5750181 |
| Don 209 Bottom | Soybeans | No-till | −86.962394 | 40.5700434 |
Table 2. Visit dates for UAS data collection and ground reference data collection using the line-point transect method for CRC measurement.
| Field | First Visit (Date) | First Visit CRC | Second Visit (Date) | Second Visit CRC |
|---|---|---|---|---|
| Mckinnis North | 20 December 2021 | 43.25 | 27 April 2022 | 62.12 |
| Mckinnis South | 20 December 2021 | 43.86 | 27 April 2022 | 64.37 |
| Griner–Wag–Rus | 4 January 2022 | 56.75 | 12 May 2022 | 84.75 |
| County Line | 4 January 2022 | 91.00 | 29 April 2022 | 94.37 |
| Church East | 4 January 2022 | 89.87 | 12 May 2022 | 83.62 |
| Church West | 4 January 2022 | 95.75 | 12 May 2022 | 82.37 |
| Field 57 | 4 March 2022 | 47.25 | 27 April 2022 | 57.5 |
| Chris 40 | 4 March 2022 | 86.00 | 29 April 2022 | 85.00 |
| Field M1 | 3 April 2022 | 91.75 | 11 May 2022 | 65.5 |
| Field J | 3 April 2022 | 73.87 | 11 May 2022 | 52.75 |
| Field 69 | 10 April 2022 | 60.37 | 29 April 2022 | 67.00 |
| Don 209 Top | 10 April 2022 | 87.12 | 19 May 2022 | 78 |
| Don 209 Side | 10 April 2022 | 76.87 | 19 May 2022 | 79.25 |
| Don 209 Bottom | 2 April 2022 | 88.00 | 19 May 2022 | 70.5 |
| Field 354 North | 10 May 2022 | 82.75 | - | - |
| Field 354 South | 10 May 2022 | 85.75 | - | - |
| Field 267 North | 10 May 2022 | 95.12 | - | - |
| Field 267 South | 10 May 2022 | 93.37 | - | - |
Table 3. SSE and Silhouette score for a different number of clusters across all plots.
| Number of Clusters | SSE | Variance in SSE (×10¹³) | Average Silhouette | Variance in Silhouette (×10⁻³) |
|---|---|---|---|---|
| 2 | 126,905,848 | 348 | 0.58 | 1.8 |
| 3 | 64,285,609 | 73 | 0.53 | 0.76 |
| 4 | 39,426,688 | 25 | 0.51 | 0.31 |
| 5 | 26,977,695 | 11 | 0.498 | 0.15 |
| 6 | 19,876,733 | 6.4 | 0.483 | 0.17 |
| 7 | 15,421,452 | 4 | 0.472 | 0.24 |
| 8 | 12,471,223 | 2.7 | 0.461 | 0.33 |
| 9 | 10,397,093 | 2 | 0.451 | 0.44 |
| 10 | 8,891,208 | 1.5 | 0.44 | 0.52 |
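The SSE column in Table 3 is the K-means objective itself: the within-cluster sum of squared distances. A self-contained NumPy sketch of Lloyd's algorithm that reports this quantity, so a curve like Table 3 can be reproduced by looping over k (a generic implementation with random initialization, not the authors' exact code):

```python
import numpy as np

def kmeans_sse(X, k, n_iter=100, seed=0):
    """Plain NumPy K-means (Lloyd's algorithm). Returns the cluster
    labels and the sum of squared errors (SSE) used for the elbow
    analysis in Table 3. X has shape (n_samples, n_features)."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(n_iter):
        # Assign each point to its nearest center.
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        # Move each center to the mean of its assigned points.
        new = np.array([X[labels == j].mean(axis=0) if (labels == j).any()
                        else centers[j] for j in range(k)])
        if np.allclose(new, centers):
            break
        centers = new
    # Final assignment and within-cluster sum of squared errors.
    d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
    labels = d.argmin(axis=1)
    sse = ((X - centers[labels]) ** 2).sum()
    return labels, sse
```

SSE always decreases as k grows, so the elbow of the SSE curve, together with the silhouette score, guides the choice of k rather than the raw minimum.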
Table 4. Eigenvalues and eigenvectors corresponding to each principal component.
| Eigenvalue (×10²) | Eigenvector (First Component) | Eigenvector (Second Component) | Eigenvector (Third Component) |
|---|---|---|---|
| 7.05 | 0.57 | 0.42 | −0.15 |
| 0.03 | 0.57 | 0.09 | 0.28 |
| 0.01 | 0.56 | −0.50 | −0.11 |
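Table 4's leading eigenvector (the first-component column, roughly (0.57, 0.57, 0.56)) lies essentially along the gray line of the RGB cube shown in Figure 11, which is what makes projecting pixels onto the first principal component and thresholding with Otsu's method reasonable. A NumPy-only sketch of both steps (a generic reimplementation, not the authors' code; the `bins` default is an assumption):

```python
import numpy as np

def first_principal_component(pixels):
    """Eigen-decomposition of the covariance of an (n_pixels, 3) RGB
    array. Returns eigenvalues and eigenvectors in descending order;
    the leading eigenvector points along the gray line of the cube."""
    vals, vecs = np.linalg.eigh(np.cov(pixels, rowvar=False))
    order = np.argsort(vals)[::-1]
    return vals[order], vecs[:, order]

def otsu_threshold(values, bins=256):
    """Otsu's method on 1-D projected values: choose the histogram cut
    that maximizes the between-class variance w0*w1*(mu0 - mu1)^2."""
    hist, edges = np.histogram(values, bins=bins)
    p = hist / hist.sum()
    centers = (edges[:-1] + edges[1:]) / 2
    w0 = np.cumsum(p)             # class-0 weight at each cut
    mu = np.cumsum(p * centers)   # class-0 unnormalized mean
    mu_t = mu[-1]                 # overall mean
    with np.errstate(divide="ignore", invalid="ignore"):
        sigma_b = (mu_t * w0 - mu) ** 2 / (w0 * (1.0 - w0))
    return edges[np.nanargmax(sigma_b) + 1]
```

In the PCA-Otsu pipeline, `pixels @ vecs[:, 0]` gives the first-component projection and `otsu_threshold` splits it into residue and background soil.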