Project Report

Efficacy of Mapping Grassland Vegetation for Land Managers and Wildlife Researchers Using sUAS

Cooperative Wildlife Research Laboratory, Department of Zoology, Center for Ecology, Southern Illinois University, 1125 Lincoln Drive, Carbondale, IL 62901, USA
* Author to whom correspondence should be addressed.
Drones 2022, 6(11), 318; https://doi.org/10.3390/drones6110318
Submission received: 25 August 2022 / Revised: 19 October 2022 / Accepted: 20 October 2022 / Published: 26 October 2022
(This article belongs to the Special Issue Drones in the Wild)

Abstract

The proliferation of small unmanned aerial systems (sUAS) is making very high-resolution imagery attainable for vegetation classifications, potentially allowing land managers to monitor vegetation in response to management or wildlife activities and offering researchers opportunities to further examine relationships among wildlife species and their habitats. The broad adoption of sUAS for remote sensing among these groups may be hampered by complex coding, expensive equipment, and time-consuming protocols. We used a consumer sUAS, semiautomated flight planning software, and graphical user interface (GUI) GIS software to classify grassland vegetation with the aim of providing a user-friendly framework for managers and ecological researchers. We compared the overall accuracy from classifications using this sUAS imagery (89.22%) to classifications using freely available National Agriculture Imagery Program imagery (76.25%) to inform decisions about cost and accuracy. We also compared overall accuracy between manual classification (89.22%) and random forest classification (69.26%) to aid with similar decisions. Finally, we examined the impact of resolution and the addition of a canopy height model on classification accuracy, obtaining mixed results. Our findings can help new users make informed choices about imagery sources and methodologies, and our protocols can serve as a template for groups wanting to perform similar vegetation classifications on grassland sites without the need for survey-grade equipment or coding. These resources should help more land managers and researchers obtain appropriate grassland vegetation classifications for their projects within their budgetary and logistical constraints.

1. Introduction

The field of remote sensing has rapidly evolved over the past 30 years, changing how researchers gather data and providing opportunities to directly examine patterns and processes at scales not previously feasible [1,2,3,4]. Remote sensing techniques have become increasingly popular for ecologists, and the potential and realized applications for wildlife science are vast [5,6]. Of particular interest to many ecologists is the ability of remote sensing tools to map land cover and vegetation composition at scales too large for on-the-ground survey methods. Because remote sensing expands the feasible mapping scale, it has been used successfully in multiple applications for terrestrial and marine habitats [3,7,8]: mapping temporal changes in habitat availability for sensitive species [9], performing vegetation health assessments [10], and monitoring the spread of invasive species [11].
Remote sensing has a long history of use for classifying landscape and vegetation characteristics. In recent decades, low- or no-cost satellite imagery has been available at spatial resolutions as fine as tens of meters, with costlier alternatives offering submeter resolutions [12,13]. Within the United States, the National Agriculture Imagery Program (NAIP) currently provides free 0.6 m imagery from manned aircraft to authorized users [14]. However, even the relatively fine spatial resolution of satellite and traditional aerial imagery may be too coarse to effectively map changes in vegetation structure and composition in some systems. This is especially true for grasslands, which are largely composed of herbaceous plants too small to be discerned from satellite-derived images [13].
Imagery acquired using small unmanned aerial systems (sUAS), commonly known as “drones”, has emerged as an alternative to satellite and manned aerial imagery in recent years, driven in part by the ever-increasing availability of sUAS [15] and clarification of regulations on their use in the United States [16]. The centimeter-scale resolution of sUAS imagery better allows for the detection of differences in vegetation structure or composition than coarser sources [17]. sUAS imagery can be used to discern individual species and vegetation types [16] and has been used to reliably monitor species composition in grasslands [18] and rangelands [19]. sUAS often offer more flexible use patterns than satellite imagery, or even airplane-based remote sensing [20,21]. Because sUAS operate at low altitudes, they can usually provide clear views of the ground when satellite imagery is obscured by clouds. An sUAS can be programmed to record a specific area of interest, be repeatedly deployed with ease, and can be scheduled to coincide with biologically relevant events [22,23]. This ability to image and map vegetation on a repeatable, user-defined timeline could be a valuable tool to land managers for monitoring vegetation changes in response to management actions (e.g., burning, discing), wildlife interactions (e.g., grazing), or natural phenomena (e.g., wildfire, weather events).
Using an sUAS, however, can be an expensive option, especially when compared with satellite or aerial imagery that is often freely available. It is therefore important to compare the accuracy of maps produced from sUAS imagery to those from free sources, such as NAIP. In addition to purchasing the sUAS and completing the necessary legal registration and licensing, specific software is required to construct flight plans and to create usable maps from the sUAS images. The technical training and ability of ecologists interested in using sUAS are often somewhat limited. Thus, although open-source software options are available, they may not be adequately user-friendly, and potential users may be discouraged from adopting sUAS imagery for their research or monitoring needs due to software or coding complexity [11]. Additionally, although the peer-reviewed literature is rich with studies evaluating many different parameters and approaches to remotely mapping vegetation, many of those studies can appear overwhelmingly complex to novice potential users, discouraging sUAS use.
Our objective was to develop a repeatable and approachable workflow simple enough for most ecologists to effectively map the following grassland vegetation classes at our and other similar grassland sites: autumn olive (Elaeagnus umbellata), blackberry (Rubus allegheniensis), common reed (Phragmites australis), grass-dominated vegetation (hereafter "grasses", not including common reed), forb-dominated vegetation (hereafter "forbs"), and tree-shrub (not including autumn olive or blackberry). To decide if the added costs of sUAS under this simplified approach were justified, we compared (1) the accuracy of vegetation classifications generated using sUAS to those from NAIP imagery. To inform best practices for our workflow, we compared (2) the impacts of image resolution and the use of a canopy height model on classification accuracy, and (3) the accuracy of random forest and manual classification methods. We used a consumer-grade sUAS, semiautomated flight software, and graphical user interface (GUI) GIS software, representing a relatively user-friendly approach to vegetation classification using sUAS. We compared manual image classification to automatic classification using a random forest. A random forest is a machine learning method based on decision trees [24] that can be used to identify and differentiate researcher-specified objects and is thus often employed to classify images for land cover or vegetation mapping [25,26,27]. It represents a more hands-off solution than manual image classification and has demonstrated high accuracy in some cases [28,29], although manual classification may still achieve higher accuracy [11]. Information about the relative accuracy of each method would be valuable for land managers deciding whether to dedicate the extra person-hours required for manual classification. Our final objective (4) was to create a flowchart to guide decision-making by land and wildlife managers, helping them choose the most appropriate path based on their logistical constraints and desired outcomes.
We also evaluated how several image and classification parameters impacted classification accuracy. We tested sUAS imagery at several post hoc resampled resolutions and predicted that accuracy would decrease with decreasing resolution. One of the primary determinants of flight time is sensor altitude, which directly correlates with resolution. Time afield can be limited for land managers, so finding a balance between efficiency and accuracy is critical. Resolution also impacts computing demands and is thus an important consideration for those without very powerful computers. We also compared classifications using canopy height models (CHMs) to those without. Canopy height models are raster layers representing vegetation height that can be generated from LiDAR or photogrammetry and are commonly used to estimate aboveground biomass of different landcover types [30] and to identify plant species [31]. In cases like ours, where the vegetation height can differ substantially among vegetation classes, a CHM could be a useful addition, but temporally relevant CHMs are not always readily available and adding CHMs to the workflow can increase computing demands. Information about the relative contribution of a CHM to classification accuracy in a similar grassland system would help potential users decide if they should incorporate one in their workflow.

2. Materials and Methods

2.1. Data Acquisition

We conducted this study at 10 grassland fields (mean ± SD = 10.99 ± 10.57 ha, range = 1.8–35.9 ha) distributed across Burning Star State Fish and Wildlife Area in northeast Jackson County, Illinois, USA (−89.20315, 37.86606; Figure 1). Burning Star SFWA is a reclaimed coal-mine site with forest, surface water, grassland, agriculture, and shrubland components. Grassland fields were primarily dominated by grasses and forbs with some tree-shrub and wetland components. We used NAIP imagery and orthophotos collected by sUAS for mapping, in conjunction with ground-referenced data points collected for map validation using handheld GPS.

2.1.1. Collecting Remotely Sensed Imagery

We collected sUAS orthophotos from 7 to 14 October 2019 using a DJI Mavic 2 Pro with a 20 MP, 1-inch CMOS RGB sensor (DJI, Shenzhen, Guangdong, China). All flights were conducted between 10:24 and 14:01 local time to reduce shadows in the resulting imagery. We chose to fly the sUAS ~61 m above ground level (AGL) based on a qualitative evaluation of orthomosaics created from test flights at ~31, 47, 61, 91, and 122 m AGL, resulting in a 1.27 cm ground-sampling distance. We automated flights using DroneDeploy software (DroneDeploy, San Francisco, CA, USA) with 75% front overlap, 70% sidelap, an 18 km/h speed limit, and both perimeter 3D and enhanced 3D options activated. DroneDeploy simplified the construction of flight paths and image collection because it required only the input of a bounding polygon and the above-listed parameters; it then generated the flight plan automatically and guided the sUAS during flights. We then uploaded JPG files to DroneDeploy for processing in terrain mode and downloaded the resulting 1.58 cm orthomosaics, 1.58 cm "plant health" rasters, 4.3 cm digital surface models (DSMs), and 4.3 cm digital terrain models (DTMs). The "plant health" rasters were produced by DroneDeploy's implementation of the visible atmospherically resistant index (VARI) [32]. We obtained NAIP imagery of the study area collected between 2 August 2019 and 12 September 2019. These uncompressed .tif format scenes had a 0.6 m horizontal resolution and included visible (i.e., RGB) and near-infrared (NIR) bands [14].
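The trade-off between flight altitude and pixel size can be previewed before going afield with the standard ground-sampling-distance relationship. The sketch below is a minimal illustration; the sensor width, focal length, and image width are nominal values we assume for a 1-inch, 20 MP camera rather than figures reported in this study, so its output only approximates the 1.27 cm GSD obtained at ~61 m.

```python
# Minimal sketch: previewing ground-sampling distance (GSD) for candidate
# flight altitudes. Sensor constants are assumed nominal values for a
# 1-inch, 20 MP camera, not values taken from the study.

SENSOR_WIDTH_MM = 13.2    # assumed physical sensor width
FOCAL_LENGTH_MM = 10.26   # assumed true (not 35 mm equivalent) focal length
IMAGE_WIDTH_PX = 5472     # assumed image width in pixels


def gsd_cm(altitude_m: float) -> float:
    """Ground-sampling distance (cm/pixel) for a nadir image at altitude_m."""
    gsd_m = (SENSOR_WIDTH_MM / 1000.0 * altitude_m) / (
        FOCAL_LENGTH_MM / 1000.0 * IMAGE_WIDTH_PX
    )
    return gsd_m * 100.0


if __name__ == "__main__":
    for alt in (31, 47, 61, 91, 122):  # test-flight altitudes evaluated above
        print(f"{alt:>4} m AGL -> ~{gsd_cm(alt):.2f} cm/pixel")
```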

2.1.2. Collecting Ground Survey Data

Using handheld GPS (Garmin eTrex 10; manufacturer-reported ±3 m positional accuracy), we marked the locations of various vegetation classes on foot. These classes were customized to the needs of ongoing research and monitoring projects at the site and included autumn olive (Elaeagnus umbellata), blackberry (Rubus allegheniensis), common reed (Phragmites australis), grass-dominated vegetation (hereafter "grasses", not including common reed), forb-dominated vegetation (hereafter "forbs"), and tree-shrub (not including autumn olive or blackberry). Grasses largely consisted of native warm-season grasses such as big bluestem (Andropogon gerardii), Indian grass (Sorghastrum nutans), and switchgrass (Panicum virgatum), as well as cool-season non-native grasses such as smooth brome (Bromus inermis), Kentucky bluegrass (Poa pratensis), and green foxtail (Setaria viridis). Common forbs included goldenrod (Solidago canadensis), ragweed (Ambrosia artemisiifolia), and non-natives such as sericea lespedeza (Lespedeza cuneata) and yellow sweet clover (Melilotus officinalis). The number of waypoints collected for each class was scaled relative to presumed areal coverage based on prior experience at the site. A total of 425 ground-reference points (n = 66 autumn olive, 67 blackberry, 60 common reed, 69 forbs, 87 grasses, and 76 tree-shrub) were collected and subsequently digitized to polygons of known composition in the immediate vicinity of the waypoints for training and accuracy assessment. We randomly selected 80% of the ground-reference points for model training and 20% for accuracy assessment.
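For readers who prefer to script this step rather than perform it in a GIS, the sketch below shows one way to make the 80/20 split. The class counts match those above, but the stratification by class, the placeholder point identifiers, and the use of scikit-learn are our own assumptions, not details taken from the study.

```python
# Minimal sketch: an 80/20 split of ground-reference points, stratified by
# vegetation class (one reasonable way to implement the random selection
# described above). Point identifiers are placeholders.
from sklearn.model_selection import train_test_split

counts = {"autumn_olive": 66, "blackberry": 67, "common_reed": 60,
          "forbs": 69, "grasses": 87, "tree_shrub": 76}

point_ids, labels = [], []
for veg_class, n in counts.items():
    point_ids.extend(f"{veg_class}_{i}" for i in range(n))
    labels.extend([veg_class] * n)

train_ids, valid_ids = train_test_split(
    point_ids, test_size=0.20, stratify=labels, random_state=42
)
print(f"{len(train_ids)} training points, {len(valid_ids)} validation points")
```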

2.1.3. Topographic Models

Photogrammetric derivatives (i.e., DSMs and DTMs) from DroneDeploy contained artifacts that made the resulting canopy height models (CHMs) unfeasible for use in supervised classifications. These artifacts resulted in some trees having negative instead of positive CHM values. Because we could correctly interpret these artifacts when viewing them alongside the RGB DroneDeploy imagery, we still used the DroneDeploy derivatives for manual classifications. We expected that these artifacts would confound the random forest classifier, so we obtained DSM and DTM derivatives for Jackson County from the Illinois Height Modernization Project (ILHMP) dataset [33] for automatic classifications. The ILHMP dataset was created from LiDAR data collected in 2014 and has a 0.6 m horizontal accuracy, 6.18 cm vertical accuracy, and 0.95 m point spacing. Although it was collected five years prior to our study, the ILHMP was the best freely available option for developing a CHM and is representative of what many land managers could access; expecting current LiDAR data is not realistic in most cases. Based on extensive prior experience at the field site, we were confident that minimal changes had occurred in the positions of vegetation classes during that time. No active management (i.e., fire, mowing, grazing) had occurred on the site in the decade prior to our study, and we had observed no major shifts in the horizontal distribution of classes.

2.2. Image Preprocessing

2.2.1. Preprocessing sUAS and NAIP Imagery

All subsequent processing of raster and ground reference data was conducted in ArcGIS Pro 2.7.0 (Environmental Systems Research Institute, Inc., Redlands, CA, USA). We merged the maps of individual fields derived from DroneDeploy to create single maps encompassing all fields for each map type (i.e., orthomosaic, DTM, DSM, plant health) and projected them to NAD 1983 UTM zone 16. We resampled the original 1.58 cm orthomosaic to produce additional rasters with 5, 20, and 60 cm horizontal resolution to compare among resolutions. We then clipped all four orthomosaics to the study field extents. We created a canopy height model (CHM) by subtracting the DroneDeploy DTM from the DroneDeploy DSM. We followed similar procedures for NAIP imagery, mosaicking the four NAIP scenes into a single raster, projecting to NAD 1983 UTM zone 16, and clipping to the field edges. We produced true-color (i.e., RGB), color-infrared (CIR), and normalized difference vegetation index (NDVI) versions of the NAIP orthomosaic.
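We performed these steps through the ArcGIS Pro interface; as a rough open-source analogue, the sketch below resamples an orthomosaic to a coarser pixel size and computes NDVI = (NIR - Red) / (NIR + Red) from a four-band NAIP raster using rasterio and numpy. The file names are placeholders, and the band ordering follows the R, G, B, NIR arrangement described in Section 2.3.1.

```python
# Minimal sketch (open-source analogue of the ArcGIS Pro steps above):
# bilinear resampling to a coarser pixel size and NDVI from a 4-band NAIP
# raster ordered R, G, B, NIR. File names are placeholders.
import numpy as np
import rasterio
from rasterio.enums import Resampling


def resample(src_path: str, target_res_m: float, dst_path: str) -> None:
    """Write a copy of src_path resampled to target_res_m metres per pixel."""
    with rasterio.open(src_path) as src:
        scale = src.res[0] / target_res_m
        height, width = int(src.height * scale), int(src.width * scale)
        data = src.read(out_shape=(src.count, height, width),
                        resampling=Resampling.bilinear)
        transform = src.transform * src.transform.scale(
            src.width / width, src.height / height)
        profile = src.profile
        profile.update(height=height, width=width, transform=transform)
        with rasterio.open(dst_path, "w", **profile) as dst:
            dst.write(data)


def ndvi(naip_path: str, dst_path: str) -> None:
    """Compute NDVI from a 4-band NAIP raster ordered R, G, B, NIR."""
    with rasterio.open(naip_path) as src:
        red = src.read(1).astype("float32")
        nir = src.read(4).astype("float32")
        index = (nir - red) / np.maximum(nir + red, 1e-6)  # avoid divide-by-zero
        profile = src.profile
        profile.update(count=1, dtype="float32")
    with rasterio.open(dst_path, "w", **profile) as dst:
        dst.write(index, 1)


# Example usage with placeholder paths:
# resample("suas_orthomosaic_1_58cm.tif", 0.05, "suas_5cm.tif")
# ndvi("naip_mosaic.tif", "naip_ndvi.tif")
```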

2.2.2. Creating Topographic Derivatives

We also produced a LiDAR-based CHM by subtracting the ILHMP DTM from the ILHMP DSM for the random forest classifier due to concerns that shortcomings in the DroneDeploy-derived CHM would confound the analysis. To reduce computational demands when processing the ILHMP data, we converted the resulting raster to 8-bit unsigned data and clipped it to a 300 m buffer around each of the 10 study fields. We then projected it to NAD 1983 UTM zone 16. The ArcGIS Pro Image Classification tools require that all raster bands being classified have the same resolution, so we resampled the CHM to each of the four resolutions used for the sUAS-derived orthomosaics using bilinear interpolation and clipped each resulting raster to the field extents.
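The CHM derivation itself is a single raster operation. The sketch below shows an equivalent step with rasterio, assuming the DSM and DTM are already aligned on the same grid; the file names are placeholders, and clamping negative heights to zero is our own simple guard against the negative-height artifacts noted above, not a step taken from the study.

```python
# Minimal sketch: deriving a canopy height model (CHM) as DSM minus DTM.
# Assumes both rasters share the same grid; paths are placeholders.
import numpy as np
import rasterio

with rasterio.open("ilhmp_dsm.tif") as dsm_src, \
        rasterio.open("ilhmp_dtm.tif") as dtm_src:
    dsm = dsm_src.read(1).astype("float32")
    dtm = dtm_src.read(1).astype("float32")
    chm = np.clip(dsm - dtm, 0.0, None)  # clamp negative artifacts (our choice)
    profile = dsm_src.profile
    profile.update(count=1, dtype="float32")

with rasterio.open("ilhmp_chm.tif", "w", **profile) as dst:
    dst.write(chm, 1)
```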

2.3. Image Classification and Assessment

2.3.1. Classified Datasets

We created two datasets for manual classification (i.e., digitizing). The first comprised our sUAS-derived DroneDeploy products: the 1.58 cm orthomosaic and plant health layers and the 4.3 cm DTM and DSM layers. The second included the true-color, false-color, and NDVI layers from NAIP imagery. Investigators were "blind" to the ground-survey data collected via GPS for any field they manually classified (i.e., Investigator A collected GPS data for fields 1–4 but digitized only other fields) to reduce any positive bias on classification accuracy arising from their experience during ground-survey data collection. We viewed imagery at different scales to discern features but maintained a 1:300 scale for manual polygon creation and segmenting.
We created six base datasets for supervised classification: four different resolutions of sUAS orthomosaics, a true-color NAIP orthomosaic with NIR as a fourth band (i.e., R, G, B, NIR), and a color-infrared NAIP orthomosaic with blue as a fourth band (i.e., NIR, R, G, B). We composited each of the six base datasets with an ILHMP-derived CHM of equal resolution, generating six more datasets with an additional band each. Each of the 12 datasets, 6 with a CHM and 6 without, was used for random forest classification so that we could compare the influence of resolution, data source, and CHM on random forest classification accuracy.
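The compositing step simply appends the CHM as an extra band to an orthomosaic of matching resolution. A minimal rasterio sketch is shown below; the file names are placeholders, and casting the CHM to the orthomosaic's data type loosely mirrors the 8-bit conversion described in Section 2.2.2 but is otherwise our own simplification.

```python
# Minimal sketch: appending a CHM of equal resolution as an extra band of an
# orthomosaic. File names are placeholders.
import rasterio

with rasterio.open("suas_5cm.tif") as ortho, rasterio.open("chm_5cm.tif") as chm:
    profile = ortho.profile
    profile.update(count=ortho.count + 1)
    with rasterio.open("suas_5cm_with_chm.tif", "w", **profile) as dst:
        for band in range(1, ortho.count + 1):
            dst.write(ortho.read(band), band)  # copy the spectral bands
        # assumes canopy heights fit within the orthomosaic's band data type
        dst.write(chm.read(1).astype(profile["dtype"]), ortho.count + 1)
```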

2.3.2. Classification Parameters

Because our vegetation classes likely had spectral overlap, and due to potential noise associated with very high-resolution imagery, we used object-based classification to account for both spectral and spatial properties of the imagery. We used a supervised random forest classifier, implemented as the Random Trees tool within ArcGIS Pro’s Image Classification workflow.
We used default settings for segmentation (spatial detail 15.5, spectral detail 15) and training (maximum number of trees = 50, maximum tree depth = 30, maximum number of samples per class = 1000) but scaled the number of pixels for the minimum segment size to maintain a similar minimum segment footprint (~2.5 m2) across resolutions. The Image Classification tools use only the first three bands of a raster for segmentation but use all bands for later steps. We used the segment attributes of active chromaticity color, mean digital number, standard deviation, and compactness for training. We used 80% of the ground reference data for training.
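For readers working outside ArcGIS Pro, the sketch below fits a random forest with the same tree count and maximum depth using scikit-learn. The per-segment feature table is synthetic; in a real workflow each row would hold the segment attributes listed above, and ArcGIS Pro's "maximum samples per class" setting has no direct scikit-learn equivalent, so it is omitted.

```python
# Minimal sketch: a random forest mirroring the tree count and depth above
# (50 trees, maximum depth 30), fit on a synthetic per-segment feature table.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X_train = rng.random((340, 6))           # 340 training segments x 6 attributes
y_train = rng.integers(0, 6, size=340)   # 6 vegetation classes (coded 0-5)

rf = RandomForestClassifier(n_estimators=50, max_depth=30, random_state=0)
rf.fit(X_train, y_train)
print(rf.predict(rng.random((5, 6))))    # predicted labels for five new segments
```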

2.3.3. Accuracy Assessment

We produced confusion matrices for accuracy assessment of each of the 14 classified maps using 500 GIS-generated stratified random points drawn from the 20% of ground reference data reserved for validation. We calculated overall accuracy, producer's accuracy, and user's accuracy. Overall accuracy is the proportion of correct classifications out of all classifications. Producer's accuracy is the probability that a ground-surveyed point was classified correctly (i.e., 100% minus the omission error). User's accuracy is the probability that a point on the classified map is correctly classified (i.e., 100% minus the commission error) and is of particular importance when the resulting classified map is used in additional models (e.g., the habitat suitability or resource selection models planned for the ongoing research at the site).
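These three metrics fall directly out of the confusion matrix: overall accuracy is the matrix trace divided by the total, producer's accuracy divides each diagonal element by its row (reference) total, and user's accuracy divides it by its column (predicted) total. The sketch below computes them with scikit-learn and numpy on a few synthetic labels.

```python
# Minimal sketch: overall, producer's, and user's accuracy from a confusion
# matrix, following the definitions above. Labels are synthetic placeholders.
import numpy as np
from sklearn.metrics import confusion_matrix

reference = ["grasses", "forbs", "forbs", "tree_shrub", "grasses", "common_reed"]
predicted = ["grasses", "forbs", "grasses", "tree_shrub", "grasses", "forbs"]
classes = sorted(set(reference))

cm = confusion_matrix(reference, predicted, labels=classes)  # rows = reference
overall = np.trace(cm) / cm.sum()
producers = np.diag(cm) / cm.sum(axis=1)                 # 1 - omission error
users = np.diag(cm) / np.maximum(cm.sum(axis=0), 1)      # 1 - commission error

print(f"overall accuracy: {overall:.2%}")
for cls, pa, ua in zip(classes, producers, users):
    print(f"{cls:12s} producer's: {pa:.2%}  user's: {ua:.2%}")
```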

3. Results

Manual classifications were more accurate than random forest classifications (Table 1, Figure 2). The manual classification of sUAS imagery produced the most accurate map (overall accuracy = 89.22%) and the manual classification of NAIP imagery produced the second-most accurate map (76.25%). Random forest classifications produced maps with accuracies ranging from 47.31 to 69.26%. The accuracy was higher for sUAS maps than for NAIP maps, not just for the manual classifications above, but also for all random forest classifications (56.29–69.26% vs. 30.74–48.50%).
The impacts of modifying imagery resolution or including a CHM were mixed for the random forest classifications of sUAS imagery. The 1.58 cm imagery with the CHM had the highest classification accuracy of all random forest classifications (overall accuracy = 69.26%), but the addition of the CHM did not universally improve accuracy within each resolution. Moreover, a finer resolution did not universally improve accuracy. When the CHM was not included, the 5 cm (overall accuracy = 66.27%) and 20 cm (61.48%) classifications performed better than the 1.58 cm classification (60.28%). When the CHM was included, the accuracy at the 20 cm resolution (65.47%) was higher than at the 5 cm resolution (62.87%).
The sources of classification error varied among the different classification types and resolutions, but several notable errors were evident. We misclassified forbs and common reed more than other classes in our manual classifications of both sUAS and NAIP imagery, often misclassifying the forbs as trees/shrubs or grasses, and common reed as forbs and grasses (Supplementary Tables S1 and S2). Our misclassification of forbs had a large impact on the user’s accuracy for the tree/shrub class in the sUAS classification and on grasses in the NAIP classification due to the large areal coverage of forbs in the fields and correspondingly large number of validation points for that class in the accuracy assessment. The producer’s accuracy was lowest for autumn olive in all but one of the random forest classifications (Supplementary Tables S3–S14). The user’s accuracy was lowest for the tree/shrub class in all random forest classifications, indicating that the resulting maps had a substantial uncertainty for any points indicated as tree/shrub. Adding the CHM improved the user’s accuracy for tree/shrub in all classifications.

4. Discussion

Our research team comprises several wildlife researchers, with only limited remote sensing expertise. We required vegetation maps of our grassland sites for use in our wildlife habitat studies but were overwhelmed by the extensive, complex, and often contradictory remote sensing literature. We did not need to know the absolute best methodology, but rather the best methodology given our logistical, financial, and knowledge-based constraints. Many land managers and wildlife researchers might find themselves in a similar situation and could benefit from our results and the decision-making workflow that we have created based on those results.
We used consumer-grade sUAS and GPS technology with semiautomated flight software and graphical user interface mapping software to accurately map grassland vegetation in our study area. Without specialized survey equipment or procedures such as ground-control points, real-time kinematic (RTK) GPS, thermal sUAS sensors, or radiometric calibration, we achieved ~89% classification accuracy with our best-performing classification. This level of accuracy is similar to other studies classifying vegetation types from remote sensing imagery, including studies relying on supervised classifiers such as random forests or convolutional neural networks [11,21,34,35] and those employing manual digitizing [20,36,37]. We had a general knowledge of the vegetation at our study site from ongoing research there, but this would be a reasonable expectation for most land managers or researchers already working in a location. Higher-end equipment may have further improved our classification accuracy but would have required additional expense, training, and processing. Our method is feasible and repeatable for those lacking extensive survey or coding experience and offers an actionable approach to vegetation mapping and monitoring by ecologists and land managers.

4.1. sUAS vs. NAIP Imagery

Using sUAS imagery clearly improved the classification accuracy of our selected classes at our site when compared to NAIP imagery (manual: 89.22% vs. 76.25%; random forest: 69.26% vs. 48.50%). The best sUAS imagery classification outperformed the best NAIP imagery classification even when we resampled the 1.58 cm sUAS imagery to a 60 cm resolution for random forest classifications (58.88% vs. 48.50%).
Several factors must be considered when selecting between sUAS and NAIP imagery for a project. Because we planned to use the resulting classifications as parameters in subsequent ecological models, we wanted to minimize classification error within our logistical and budget constraints so as not to substantially increase uncertainty in those later models. According to our results, NAIP imagery classification may be satisfactory for some other purposes, such as landscape-scale vegetation analyses or studies covering areas too large to feasibly map via sUAS, and it would have the added benefits of being free and requiring no extra time afield.
There are additional benefits to the sUAS approach beyond accuracy. NAIP imagery is collected at times outside of investigators' control and typically at intervals of one to several years. Moreover, NAIP imagery is often not available until months after collection. Using sUAS imagery can allow investigators or managers to obtain information about grasslands on schedules that suit their needs. For example, differences in the phenology of various plants could be leveraged to improve discrimination [35], or mapping could be conducted during critical periods for local wildlife or after extreme events. Routine monitoring can also be conducted at shorter intervals using sUAS than would be possible using NAIP imagery [18,20].

4.2. Random Forest Classification vs. Manual Classification

We achieved a substantially higher classification accuracy by manually classifying imagery than by using random forest classification. This was true for both sUAS (89.22% vs. 69.26%) and NAIP imagery (76.25% vs. 48.50%). Other studies that have directly compared supervised and manual classification methods also found manual classification to be more accurate, though more time-consuming (Husson et al. [37]: 93% manual vs. 80% random forest; Hamylton et al. [38]: 91% manual vs. 85% convolutional neural network and 82% pixel-based algorithm).
We did not record the exact time spent on each method but, anecdotally, random forest classifications took several days of computational processing for the highest resolution imagery and minutes-to-hours for lower resolution imagery, compared to ~30 person-hours for manual classification. The computational processing represented the total processing time to conduct the classifications, not necessarily the time actively spent at a computer by an investigator. Random forest classification is a relatively passive process, requiring only minutes of user input even for the highest resolution imagery. By comparison, manual classification relies entirely on user input.
Although we obtained a high accuracy with our manual classification and a moderate accuracy with our top random forest classification, errors may have occurred. With submeter vegetation patches, it is possible that misclassifications arose from errors in alignment. Our orthomosaics had ~1.1 m RMSE, but we did not test the manufacturer-reported 3 m accuracy of the GPS units used for ground reference data. More advanced protocols (e.g., ground-control points) and equipment (e.g., RTK GPS for ground reference points and the sUAS) would likely further improve classification accuracies but would require additional training, time, and expense. Similarly, radiometric calibration could improve accuracy by reducing the impact of radiometric differences throughout the imagery collection period but would require additional processing and time afield. Uneven brightness values could lead to substantial spectral overlap among vegetation classes, and we advise those planning to rely solely on supervised classification techniques, or those evaluating changes in imagery collected over time, to consider radiometric calibration in their workflows [35,39,40].
We constrained our flights to mid-day, but shadows were evident adjacent to trees and shrubs at the extreme ends of our flight window. Further constraining flights to a narrower window or to only overcast days may help reduce the impact of shadows. In a different approach, Ishida et al. [41] trained a support vector machine classifier simultaneously with sunlit and shaded MODIS satellite data, which improved classification accuracy by 13.5%. Field operations are ultimately an exercise in balance; further limiting the flight window or collecting imagery twice during differently lit periods means more time afield. Investigators will need to balance accuracy and efficiency in their own operations.
We mostly held parameters at default settings when conducting the random forest classifications in ArcGIS Pro. It is likely that fine-tuning those settings could improve classification accuracies. Further investigation into the impact of each of those parameters on this dataset may be worthwhile but was beyond the scope of this project.

4.3. Impacts of Resolution and the Use of a Canopy Height Model (CHM)

Although the 1.58 cm imagery with the CHM produced the most accurate random forest classification, the relationship between resolution and classification accuracy was not straightforward. The lowest overall accuracy resulted from the 60 cm imagery, but some of the 5 cm and 20 cm classifications performed nearly as well as the 1.58 cm classification with the CHM and better than the 1.58 cm classification without it (Table 1). This pattern is consistent with other studies that have investigated the effects of resolution on classification accuracy. Lu and He [12] and Liu et al. [42] compared the classification accuracy resulting from sUAS-derived imagery of varying resolutions. Both studies found little difference in accuracy among resolutions finer than 15 cm and found that the highest resolution was not necessarily the most accurate.
The similarity in classification accuracy that we observed at different resolutions can have important implications for planning both flights and image classification. If investigators wish to prioritize efficiency in their supervised classifications, they may be able to use lower-resolution imagery without substantially decreasing accuracy. They can increase efficiency using lower resolutions in two ways. First, they could fly the sUAS missions at higher altitudes, within legal constraints, to maximize aerial coverage during each flight. This could also help improve image mosaicking and reduce geometric distortions caused by relief displacement, which may help offset the decreased resolution. Second, they could resample acquired imagery to a coarser resolution, as we did, to reduce computational demands during preprocessing and classification by orders of magnitude. Doing so may also present a viable option for those without powerful computers to perform random forest classifications on sUAS imagery. An advantage of this approach over flying at a greater altitude is that investigators would still have access to the higher-resolution imagery if a need arose later. We used a fairly robust computer for the random forest classification (i.e., Intel i9 processor, 64 GB RAM, m.2 solid-state drive), so investigators using slower computers should expect longer processing times. Investigators debating the trade-offs between the efficiency and accuracy of these methods should also consider the scale of their project and the resulting dataset size.
We added a CHM from the ILHMP to imagery for random forest classifications in an attempt to better discriminate different vegetation types and to ameliorate potential tree-shadow issues. Adding a CHM did not universally improve random forest classification accuracy; it reduced accuracy in some cases. The original CHM had a 1 m resolution and possibly did not contain enough detail to resolve the shadow problem even after being resampled to finer resolutions. It is also possible that we incorrectly assumed the five-year-old vertical dataset would be suitable, and that vertical vegetation structure had changed enough over that period to impact the results. Any changes in the distribution of the vegetation classes in our study during that period would have negatively biased our classification accuracies. Despite this, we obtained classification accuracies adequate for most land-management applications.

4.4. Recommendations for Similar Uses

Mapping vegetation in grasslands can be a valuable tool for land managers and wildlife researchers, but deciding whether to use sUAS or freely available imagery can be difficult. Similarly, deciding on flight parameters, classification methods, and additional data sources can be intimidating for novice GIS users. By comparing classification accuracies among different combinations of remote sensing platforms, data sources, classification methods, and flight parameters, we can inform potential users about the costs and benefits of each. We present this guidance in the form of a workflow chart (Figure 3), in which potential users can consider their logistical constraints and desired outcomes when deciding on a methodology. No two study sites are identical, nor are any two teams of investigators. This workflow allows potential users to select the best course of action for their specific use case.

4.5. Conclusions

We demonstrated that a consumer-grade sUAS and GPS could be used with relatively simple semiautomated flight planning and graphical user interface GIS software packages to produce accurate grassland vegetation classifications in our study area (up to 89.22% overall accuracy). We also showed how several imagery and methodological parameters could impact classification accuracy and presented a guide to assist potential users in their decision-making regarding data sources and classification methods. Implementing methods similar to ours and considering those parameters can make sUAS classification more approachable for land managers and researchers seeking to obtain useful and accurate maps of their grasslands in an efficient manner.

Supplementary Materials

The following supporting information can be downloaded at: https://www.mdpi.com/article/10.3390/drones6110318/s1, Table S1: Accuracy assessment for manual classification of sUAS imagery. Overall accuracy was 89%, Table S2: Accuracy assessment for manual classification of National Agriculture Imagery Program imagery. Overall accuracy was 76%, Table S3: Accuracy assessment for random forest classification of 1.58 cm sUAS imagery with a canopy height model. Overall accuracy was 69%, Table S4: Accuracy assessment for random forest classification of 1.58 cm sUAS imagery without a canopy height model. Overall accuracy was 60%, Table S5: Accuracy assessment for random forest classification of 5 cm sUAS imagery with a canopy height model. Overall accuracy was 63%, Table S6: Accuracy assessment for random forest classification of 5 cm sUAS imagery without a canopy height model. Overall accuracy was 66%, Table S7: Accuracy assessment for random forest classification of 20 cm sUAS imagery with a canopy height model. Overall accuracy was 65%, Table S8: Accuracy assessment for random forest classification of 20 cm sUAS imagery without a canopy height model. Overall accuracy was 61%, Table S9: Accuracy assessment for random forest classification of 60 cm sUAS imagery with a canopy height model. Overall accuracy was 56%, Table S10: Accuracy assessment for random forest classification of 60 cm sUAS imagery without a canopy height model. Overall accuracy was 59%, Table S11: Accuracy assessment for random forest classification of National Agriculture Imagery Program imagery with a canopy height model and segmented using near-infrared, red, and green bands. Overall accuracy was 43%, Table S12: Accuracy assessment for random forest classification of National Agriculture Imagery Program imagery without a canopy height model and segmented using near-infrared, red, and green bands. Overall accuracy was 47%, Table S13: Accuracy assessment for random forest classification of National Agriculture Imagery Program imagery with a canopy height model and segmented using red, green, and blue bands. Overall accuracy was 49%, Table S14: Accuracy assessment for random forest classification of National Agriculture Imagery Program imagery without a canopy height model and segmented using red, green, and blue bands. Overall accuracy was 31%.

Author Contributions

J.R.O. helped develop the research topic and methodology, helped collect field data, conducted statistical analysis, and wrote the first draft of the manuscript; A.G. helped develop the research topic and methodology, collected field data, analyzed imagery, and edited the first draft of the manuscript; C.S.C. helped develop the research topic and methodology, collected field data, analyzed imagery, and edited the first draft of the manuscript; M.W.E. secured research funding, helped develop the research topic and methodology, and edited the first draft of the manuscript. All authors have read and agreed to the published version of the manuscript.

Funding

Funding was provided by U.S. Forest Service grant 19-CS-11090800-002 and Federal Aid in Wildlife Restoration grant W-106-R-31.

Data Availability Statement

All data are reported in the supplemental material. Data will be deposited in the Dryad Digital Repository.

Acknowledgments

We thank DroneDeploy for the trial use of their software package for flight planning, flight management, and image processing. We also thank M. Miller, A. Minor, S. Plesh, and L. O’Connell for editorial comments.

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Duro, D.C.; Coops, N.C.; Wulder, M.A.; Han, T. Development of a large area biodiversity monitoring system driven by remote sensing. Prog. Phys. Geogr. 2007, 31, 235–260.
2. Gillespie, T.W.; Foody, G.M.; Rocchini, D.; Giorgi, A.P.; Saatchi, S. Measuring and modelling biodiversity from space. Prog. Phys. Geogr. 2008, 32, 203–221.
3. Horning, N.; Robinson, J.A.; Sterling, E.J.; Turner, W.; Spector, S. Remote Sensing for Ecology and Conservation; Oxford University Press: New York, NY, USA, 2010; 467p.
4. Roughgarden, J.; Running, S.W.; Matson, P.A. What does remote sensing do for ecology? Ecology 1991, 72, 1918–1922.
5. Chabot, D.; Bird, D.M. Wildlife research and management methods in the 21st century: Where do unmanned aircraft fit in? J. Unmanned Veh. Syst. 2015, 3, 137–155.
6. Pettorelli, N.; Laurance, W.F.; O’Brien, T.G.; Wegmann, M.; Nagendra, H.; Turner, W. Satellite remote sensing for applied ecologists: Opportunities and challenges. J. Appl. Ecol. 2014, 51, 839–848.
7. Dierssen, H.M.; Zimmerman, R.C.; Drake, L.A.; Burdige, D. Benthic ecology from space: Optics and net primary production in seagrass and benthic algae across the Great Bahama Bank. Mar. Ecol. Prog. Ser. 2010, 411, 1–15.
8. Kachelriess, D.; Wegmann, M.; Gollock, M.; Pettorelli, N. The application of remote sensing for marine protected area management. Ecol. Indic. 2013, 36, 169–177.
9. Duarte, A.; Jensen, J.L.R.; Hatfield, J.S.; Weckerly, F.W. Spatiotemporal variation in range-wide Golden-cheeked Warbler breeding habitat. Ecosphere 2013, 4, 152.
10. Nebiker, S.; Annen, A.; Scherrer, M.; Oesch, D. A light-weight multispectral sensor for micro UAV—Opportunities for very high resolution airborne remote sensing. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2008, 37, 1193–1200.
11. Horning, N.; Fleishman, E.; Ersts, P.J.; Fogarty, F.A.; Zillig, M.W. Mapping of land cover with open-source software and ultra-high resolution imagery acquired with unmanned aerial vehicles. Remote Sens. Ecol. Conserv. 2020, 6, 487–497.
12. Lu, B.; He, Y. Optimal spatial resolution of unmanned aerial vehicle (UAV)-acquired imagery for species classification in a heterogeneous grassland ecosystem. GIScience Remote Sens. 2018, 55, 205–220.
13. Lu, B.; He, Y.; Liu, H. Investigating species composition in a temperate grassland using Unmanned Aerial Vehicle-acquired imagery. In Proceedings of the 4th International Workshop on Earth Observation and Remote Sensing Applications (EORSA 2016), Piscataway, NJ, USA, 4–6 July 2016; pp. 107–111.
14. United States Geological Survey (USGS). FSA-APFO Aerial Photography Field Office. NAIP Georectified Images m_3708907_se_16_060_20190912_20191220, m_3708907_sw_16_060_20190912, m_3708915_ne_16_060_20190912, m_3708915_nw_16_060_20190802 TIFFs from Earth Explorer & EROS Data Center. 2020.
15. Colomina, I.; Molina, P. Unmanned aerial systems for photogrammetry and remote sensing: A review. ISPRS J. Photogramm. Remote Sens. 2014, 92, 79–97.
16. Federal Aviation Administration (FAA). Advisory Circular 107-2 Small Unmanned Aircraft Systems (sUAS); US DOT: Washington, DC, USA, 2016.
17. Banu, T.P.; Borlea, G.F.; Banu, C. The use of drones in forestry. J. Environ. Sci. Eng. B 2016, 5, 557–562.
18. Sun, Y.; Yi, S.; Hou, F. Unmanned aerial vehicle methods makes species composition monitoring easier in grasslands. Ecol. Indic. 2018, 95, 825–830.
19. Rango, A.; Laliberte, A.; Herrick, J.E.; Winters, C.; Havstad, K.; Steele, C.; Browning, D. Unmanned aerial vehicle-based remote sensing for rangeland assessment, monitoring, and management. J. Appl. Remote Sens. 2009, 3, 033542.
20. Hunt, E.R., Jr.; Hively, W.D.; Fujikawa, S.J.; Linden, D.S.; Daughtry, C.S.T.; McCarty, G.W. Acquisition of NIR-Green-Blue Digital Photographs from Unmanned Aircraft for Crop Monitoring. Remote Sens. 2010, 2, 290–305.
21. Laliberte, A.S.; Goforth, M.A.; Steele, C.M.; Rango, A. Multispectral remote sensing from unmanned aircraft: Image processing workflows and applications for rangeland environments. Remote Sens. 2011, 3, 2529–2551.
22. Cruzan, M.B.; Weinstein, B.G.; Grasty, M.R.; Kohrn, B.F.; Hendrickson, E.C.; Arredondo, T.M.; Thompson, P.G. Small unmanned aerial vehicles (micro-UAVs, drones) in plant ecology. Appl. Plant Sci. 2016, 4, 1600041.
23. Richardson, A.D.; Braswell, B.H.; Hollinger, D.Y.; Jenkins, J.P.; Ollinger, S.V. Near-surface remote sensing of spatial and temporal variation in canopy phenology. Ecol. Appl. 2009, 19, 1417–1428.
24. Breiman, L. Random forests. Mach. Learn. 2001, 45, 5–32.
25. Feng, Q.; Liu, J.; Gong, J. UAV remote sensing for urban vegetation mapping using random forest and texture analysis. Remote Sens. 2015, 7, 1074–1094.
26. Gislason, P.O.; Benediktsson, J.A.; Sveinsson, J.R. Random forests for land cover classification. Pattern Recognit. Lett. 2006, 27, 294–300.
27. Rodriguez-Galiano, V.F.; Ghimire, B.; Rogan, J.; Chica-Olmo, M.; Rigol-Sanchez, J.P. An assessment of the effectiveness of a random forest classifier for land-cover classification. ISPRS J. Photogramm. Remote Sens. 2012, 67, 93–104.
28. Hayes, M.M.; Miller, S.N.; Murphy, M.A. High-resolution landcover classification using Random Forest. Remote Sens. Lett. 2014, 5, 112–121.
29. Kattenborn, T.; Eichel, J.; Wiser, S.; Burrows, L.; Fassnacht, F.E.; Schmidtlein, S. Convolutional Neural Networks accurately predict cover fractions of plant species and communities in Unmanned Aerial Vehicle imagery. Remote Sens. Ecol. Conserv. 2020, 6, 472–486.
30. Zhang, H.; Yi, S.; Chang, L.; Qin, Y.; Chen, J.; Qin, Y.; Du, J.; Yi, S.; Wang, Y. Estimation of grassland canopy height and aboveground biomass at the quadrat scale using unmanned aerial vehicle. Remote Sens. 2018, 10, 851.
31. Marcinkowska-Ochtyra, A.; Jarocinska, A.; Bzdega, K.; Tokarska-Guzik, B. Classification of expansive grassland species in different growth stages based on hyperspectral and LiDAR data. Remote Sens. 2018, 10, 570.
32. Gitelson, A.A.; Kaufman, Y.; Stark, R.; Rundquist, D. Novel algorithms for remote estimation of vegetation fraction. Remote Sens. Environ. 2002, 80, 76–87.
33. Illinois Geospatial Data Clearinghouse (IGDC). Illinois Height Modernization: Digital Elevation Data. 2015. Available online: https://clearinghouse.isgs.illinois.edu/data/elevation/illinois-height-modernization-ilhmp (accessed on 4 March 2015).
34. Kattenborn, T.; Eichel, J.; Fassnacht, F.E. Convolutional Neural Networks enable efficient, accurate and fine-grained segmentation of plant species and communities from high-resolution UAV imagery. Sci. Rep. 2019, 9, 17656.
35. Lu, B.; He, Y. Species classification using Unmanned Aerial Vehicle (UAV)-acquired high spatial resolution imagery in a heterogeneous grassland. ISPRS J. Photogramm. Remote Sens. 2017, 128, 73–85.
36. Bork, E.W.; Su, J.G. Integrating LIDAR data and multispectral imagery for enhanced classification of rangeland vegetation: A meta analysis. Remote Sens. Environ. 2007, 111, 11–24.
37. Husson, E.; Ecke, F.; Reese, H. Comparison of manual mapping and automated object-based image analysis of non-submerged aquatic vegetation from very-high-resolution UAS images. Remote Sens. 2016, 8, 724.
38. Hamylton, S.M.; Morris, R.H.; Carvalho, R.C.; Order, N.; Barlow, P.; Mills, K.; Wang, L. Evaluating techniques for mapping island vegetation from unmanned aerial vehicle (UAV) images: Pixel classification, visual interpretation and machine learning approaches. Int. J. Appl. Earth Obs. Geoinf. 2020, 89, 102085.
39. Guo, Y.; Senthilnath, J.; Wu, W.; Zhang, X.; Zeng, Z.; Huang, H. Radiometric calibration for multispectral camera of different imaging conditions mounted on a UAV platform. Sustainability 2019, 11, 978.
40. Holman, F.; Riche, A.B.; Castle, M.; Wooster, M.J.; Hawkesford, M.J. Radiometric calibration of ‘commercial off the shelf’ cameras for UAV-based high-resolution temporal crop phenotyping of reflectance and NDVI. Remote Sens. 2019, 11, 1657.
41. Ishida, H.; Oishi, Y.; Morita, K.; Moriwaki, K.; Nakajima, T.Y. Development of a support vector machine based cloud detection method for MODIS with the adjustability to various conditions. Remote Sens. Environ. 2018, 205, 390–407.
42. Liu, M.; Yu, T.; Gu, X.; Sun, Z.; Yang, J.; Zhang, Z.; Mi, X.; Cao, W.; Li, J. The impact of spatial resolution on the classification of vegetation types in highly fragmented planting areas based on unmanned aerial vehicle hyperspectral images. Remote Sens. 2020, 12, 146.
Figure 1. Borders of study fields (yellow) at Burning Star State Fish and Wildlife Area (white) in Illinois (inset). Basemap is 2019 NAIP imagery from U.S.G.S.
Figure 2. Examples comparing NAIP and sUAS methods: (A) 60 cm National Agriculture Imagery Program (NAIP) imagery, (B) 1.58 cm resolution sUAS imagery, (C,D) manual classifications for each, and (E,F) most accurate random forest classifications for each. The most accurate random forest classification of NAIP imagery (E) included a canopy height model and used red, green, and blue layers for segmentation. The most accurate random forest classification of sUAS imagery (F) included a canopy height model and used 1.58 cm resolution imagery.
Figure 3. Flowchart to guide decision-making by land managers and wildlife biologists considering mapping grassland vegetation with remote sensing imagery. Multiple factors must be considered to find the path that is most suitable for each unique application.
Table 1. Summary of classification types, imagery sources, and parameters used for classification along with overall accuracy for each classification. Two classification types were used, manual digitizing and random forest machine learning. Imagery was collected from two sources: small unmanned aerial system (sUAS) flights and the National Agriculture Imagery Program (NAIP). Segmentation was performed on both red, green, and blue (RGB) and on near-infrared, red, and green band (CIR) combinations. Canopy height models were derived from DroneDeploy or Illinois Height Modernization Program (ILHMP) data. Changes in minimum segment size in pixels corresponded to similar minimum segment footprints (~2.5 m2).
Classification Type | Imagery Source | Resolution (cm) | Canopy Height Data | Minimum Segment Size (Pixels) | Overall Accuracy
Manual | sUAS | 1.58 | DroneDeploy | — | 89.22%
Manual | NAIP | 60 | None | — | 76.25%
Random forest | sUAS | 1.58 | ILHMP | 9999 | 69.26%
Random forest | sUAS | 1.58 | None | 9999 | 60.28%
Random forest | sUAS | 5 | ILHMP | 992 | 62.87%
Random forest | sUAS | 5 | None | 992 | 66.27%
Random forest | sUAS | 20 | ILHMP | 64 | 65.47%
Random forest | sUAS | 20 | None | 64 | 61.48%
Random forest | sUAS | 60 | ILHMP | 7 | 56.29%
Random forest | sUAS | 60 | None | 7 | 58.88%
Random forest | NAIP RGB | 60 | ILHMP | 7 | 48.50%
Random forest | NAIP RGB | 60 | None | 7 | 30.74%
Random forest | NAIP CIR | 60 | ILHMP | 7 | 43.11%
Random forest | NAIP CIR | 60 | None | 7 | 47.31%