Article

Exploring Effective Detection and Spatial Pattern of Prickly Pear Cactus (Opuntia Genus) from Airborne Imagery before and after Prescribed Fires in the Edwards Plateau

1 Department of Ecology and Conservation Biology, Texas A&M University, College Station, TX 77843, USA
2 USDA Agricultural Research Service, Fort Keogh Livestock and Range Research Laboratory, Miles City, MT 59301, USA
3 USDA Agricultural Research Service, Aerial Application Technology Research Unit, College Station, TX 77845, USA
4 Texas A&M AgriLife Research, San Angelo, TX 76901, USA
5 Texas A&M AgriLife Research, Sonora, TX 76950, USA
* Author to whom correspondence should be addressed.
Remote Sens. 2023, 15(16), 4033; https://doi.org/10.3390/rs15164033
Submission received: 16 June 2023 / Revised: 10 August 2023 / Accepted: 12 August 2023 / Published: 15 August 2023
(This article belongs to the Section Remote Sensing in Agriculture and Vegetation)

Abstract

Over the past century, prickly pear (PP) cactus (e.g., genus Opuntia; subgenus Platyopuntia) has increased on semi-arid rangelands. Effective detection of cacti abundance and spatial pattern is challenging due to the inherent heterogeneity of rangeland landscapes. In this study, high-resolution multispectral imagery (0.21 m) was used to test object-based (OB) feature extraction, random forest (RF) machine learning, and spectral endmember (n-D) classification methods to map PP and evaluate its spatial pattern. We trained and tested classification methods using field-collected GPS locations, plant cover, and spectrometry from 288 polygons (2 m radius) before a prescribed burn and 480 samples after the burn within a 69.2-ha burn unit. The most accurate classification method was then used to map PP distribution and quantify abundance before and after fire. As a case study, we assessed the spatial pattern of mapped PP cover, considering topoedaphic setting and burn conditions. The results showed that the endmember classification method, spectral angle mapper (SAM), outperformed the RF and OB classifications with higher kappa coefficients (KC) (0.93 vs. 0.82 and 0.23, respectively) and overall accuracies (OA) (0.96 vs. 0.91 and 0.49) from pre-fire imagery. KC and OA metrics of post-fire imagery were lower, but rankings among classification methods were similar. SAM classifications revealed that fire reduced PP abundance by 46.5%, but reductions varied by soil type, with deeper soils having greater decreases (61%). Kolmogorov-Smirnov tests indicated significant changes before and after fire in the frequency distribution of PP cover within deeper soils (D = 0.64, p = 0.02). A two-way ANOVA revealed that the interaction of season (pre- vs. post-fire) and soils significantly (p < 0.00001) influenced the spatial pattern of PP patches. Fire also reduced the size and shape of PP patches depending on the topoedaphic settings.
This study provides an innovative and effective approach for integrating field data collection, remote sensing, and endmember classification methods to map prickly pear and assess the effects of prescribed fire on prickly pear spatial patterns. Accurate mapping of PP can aid in the design and implementation of spatially explicit rangeland management strategies, such as fire, that can help reduce and mitigate the ecological and economic impacts of prickly pear expansion.

1. Introduction

In Texas, some of the most common species of prickly pear cactus are the Texas prickly pear (O. lindheimeri Engelm.), Engelmann prickly pear (O. phaecantha var. discata [Griffiths] L. Benson and Walkington), Brownspine prickly pear (O. phaeacantha var. phaeacantha Engelm.), and Edwards prickly pear (O. edwardsii V. Grant and K. Grant) [1]. Due to hybridization from cross-pollination among species, accurate identification of prickly pear at a species level has proven difficult [2]. Nonetheless, prickly pear cactus (genus Opuntia) encroachment has significantly increased on rangelands in Texas and other semi-arid areas over the past several decades [3,4,5,6]. Both woody plant and prickly pear encroachment in these areas have negatively impacted forage availability for livestock. Prickly pear cacti can become a noxious species that increases soil erosion and decreases grazable grass abundance, thus affecting sustainable livestock production on these landscapes [4]. Although prickly pear encroachment can negatively impact livestock production, several studies have shown potential benefits of prickly pear, including harvesting spineless prickly pear cactus as livestock fodder [7,8] especially during drought [9]. However, to achieve the optimal benefits of utilizing prickly pear cactus as livestock feed, mechanical burning of cactus spines is needed to reduce health hazards (damage to the lips, upper gastrointestinal tract, and mouth) [1,10,11]. Around the world, prickly pear cactus is used throughout the year as forage and during drought periods when herbaceous grazeable biomass is in short supply [9,12].
In addition to the effect on livestock nutritional value and digestibility from consuming species of prickly pear cactus [11,13], there is an increasing focus on the effects of fire on prickly pear cactus growth and livestock utilization [13,14]. Historically, the benefits of implementing prescribed fires for woody and cacti control (e.g., Prosopis glandulosa, Juniperus ashei, Opuntia sp., and J. virginiana) during the winter and summer seasons have been evaluated [14]. However, summer “growing-season” prescribed fires significantly suppressed prickly pear cacti on rangelands in Texas [4,15]. In contrast, high-intensity prescribed burns during winter applications only reduced about 30% of medium–small-sized clusters of cacti [4]. Implementing prescribed fires to suppress the motte (i.e., a cluster or clump of cacti) size and distribution of prickly pear cactus has been argued to successfully manage prickly pear populations in the southern rangelands of Texas [3,4,14]. Depending on motte size and their distribution across the landscape, prickly pear resistance to fire will increase as underlying herbaceous cover and litter fuel loads decline within the perimeters of larger mottes [4,16]. The ability to map motte sizes and the distribution of prickly pear across the landscape would aid in better managing prickly pear with fire and grazing.
In the 21st century, substantial research efforts have shifted from field-based sampling to developing remote sensing methods for vegetation surveys [17,18,19]. Uncrewed aerial vehicles (UAVs) and airborne multi- and hyperspectral imagery have been used in previous studies to detect Prosopis glandulosa with high accuracy through supervised classification with object-based segmentation on semi-arid grasslands [20,21,22]. Moreover, studies on mapping woody plant encroachment in the Edwards Plateau of Texas have successfully classified woody species using pixel-based, object-based, and deep-learning methods applied to drone imagery [22]. As demonstrated in previous studies, it is possible to delineate accurate maps of specific plant species using their spectral characteristics from spectrally rich, high-resolution imagery analyzed with machine learning algorithms. Currently, several remote sensing methods exist for high-resolution image classification. These include (1) object-based classification [23,24,25]; (2) machine learning regression-based predictions through Random Forest [26,27] and Support Vector Machines [28]; and (3) convolutional neural networks [22]. These methods allow researchers to analyze spatial information based on spectral profile, textural, and geometrical properties from image features and pixel information within a region of interest.
Regarding the spectral properties of vegetation structure, adding indices that amplify biochemical and biophysical characteristics provides an important surrogate for vegetation identity [28,29,30,31]. For example, the Excess Green Index (ExGI) and Green Chlorophyll Index (GCI) are known to outperform other indices in predicting crude protein and leaf chlorophyll content of vegetation [29,30]. The Normalized Difference Vegetation Index (NDVI) [31] and Enhanced Vegetation Index (EVI) [32] are well established for estimating vegetation health and productivity and are widely used in estimating crop health, leaf-area index (LAI), and aboveground biomass [33,34]. Two variants of NDVI, the Green NDVI (GNDVI; [35]) and Kernel NDVI (KNDVI; [36]), were developed to improve the estimation of photosynthetic activity from chlorophyll, leaf nitrogen content, and water uptake [35], and to serve as a proxy for gross primary production (GPP) [36]. To suppress negative NDVI values, the Corrected Transformed Vegetation Index (CTVI; [37]) was developed to address the condition and richness of green biomass [38]. The Modified Triangular Vegetation Index (MTVI2; [39,40]) predicts LAI and leaf chlorophyll content while resisting saturation at high LAI values. Beyond indices that estimate leaf area index [41,42], the Normalized Difference Water Index (NDWI; [43]) serves as a proxy for plant water stress. For landscapes with sparse vegetation cover, slope-based indices [38,44] such as the Soil-Adjusted Vegetation Index (SAVI) [45] and the Optimized Soil-Adjusted Vegetation Index (OSAVI) [46,47], as well as distance-based indices [38,44] such as the Modified Soil-Adjusted Vegetation Index (MSAVI and MSAVI2; [48]), have been successful for estimating biomass from alpine grasslands to semi-arid savannas [34,49].
Lastly, light use efficiency indices such as the Structure Insensitive Pigment Index (SIPI; [50]) are known to identify increases in carotenoid pigments from vegetation stress (e.g., mortality from drought or fires). The use of these indices in the classification of prickly pear on the landscape has yet to be explored.
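Several of the indices above reduce to simple per-pixel band arithmetic. The following is an illustrative sketch of three of them using their standard published formulations (the exact band definitions and coefficients used in this study are given in its Table 2, not here); reflectance values are hypothetical:

```python
import numpy as np

def ndvi(nir, red):
    """Normalized Difference Vegetation Index."""
    return (nir - red) / (nir + red)

def exgi(red, green, blue):
    """Excess Green Index from the visible bands."""
    return 2.0 * green - red - blue

def savi(nir, red, L=0.5):
    """Soil-Adjusted Vegetation Index; L = 0.5 suits intermediate cover."""
    return (1.0 + L) * (nir - red) / (nir + red + L)

# Toy reflectance values for a single vegetated pixel
nir, red, green, blue = 0.45, 0.08, 0.12, 0.05
print(round(ndvi(nir, red), 3))   # high NDVI indicates green vegetation
```

The same functions operate unchanged on whole NumPy band arrays, which is how index layers would be generated for an image composite.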
Concerning the spectral profile, methods such as the Spectral Angle Mapper (SAM) [51,52,53] can be used to physically (geometrically) match pixels to a reference spectral library, resulting in a spectral classification based on spectral similarities in a space with equal dimensionality. Spectral analyses, such as the Spectral Hourglass, can successfully separate reflectance properties that define phenological differences from images taken during seasonal biomass (leaf-on and leaf-off imagery) stages [54,55]. The Endmember Collection tool is another advanced spectral analysis technique that uses Spectral Information Divergence (SID) algorithms [56,57] to find the slightest divergence between the image’s spectral profile and the reference spectral library [53,56]. The SID method is ideal for discriminating between a plant species’ spectral characterization among random pixels in the image following a specified maximum divergence threshold for probabilistic spectral behavior [53,56]. The use of SAM and SID algorithms for analyzing prickly pear spectra before and after prescribed fires has not been conducted to date.
As noted, remote sensing applications are useful in showing the spatial pattern and temporal changes in plant species with unique characteristics that affect ecosystem structure and services [17,21]. Remote detection of targeted plant species from airborne imagery during phenological stages can significantly improve ground-based vegetation surveys for addressing the distribution and management of woody encroachment [17,22] and prickly pear cactus [1,4]. To date, there has been a lack of studies establishing effective approaches to mapping prickly pear based on remote sensing data, which is essential for assessing ecological and economic impacts and management outcomes at meaningful spatial and temporal scales. Small-scale heterogeneity in dynamic savannas and semiarid rangelands can create challenges for accurate vegetation mapping [19,22], and prickly pear classification is no exception. However, evaluating the acceptable ranges for accurately mapping prickly pear from high-resolution multispectral imagery will provide critical insights into its feasibility, affordability, and application for use in mapping the size and distribution of prickly pear, thus assisting land managers in grazing, fire, and vegetation management decisions.
When considering prickly pear motte sizes [4] and distribution, research is lacking on using the spectral characteristics of this plant to identify and map the spatial pattern of prickly pear cactus in rangeland landscapes. Thus, information on the effect of fire on the spatial pattern (e.g., shape and size of the prickly pear mottes) is also currently lacking. So, not only would the spectral mapping and spatial pattern analysis greatly expand our understanding of the species’ spatial arrangement and spatial distribution, but they will also provide another dimension of knowledge for ranch management and operations (e.g., targeted herbicide applications or mechanical treatments) as prickly pear cacti become more widespread on the landscape.
In this study, we aimed to innovate by developing an effective and efficient approach to mapping prickly pear based on high-resolution remote sensing data while also assessing the influence of prescribed fire on its spatial pattern as a case study. To identify the best methods for classification of prickly pear, we examined the utility and performance of feature extraction, machine learning, and spectral (n-D) classification techniques for mapping the distribution of prickly pear cacti using multispectral (4-band) airborne imagery. The specific objectives of this study are: (1) Classify prickly pear cactus using object-based feature extraction, pixel-based machine learning spectral extraction, and endmember-based spectral classification; (2) Evaluate each method’s performance for classifying multispectral imagery before and after prescribed fires; and (3) For the best-performing classification method, evaluate the spatial pattern of prickly pear (motte) patches identified within the burned areas and compare the changes in prickly pear spatial distribution before and after patch burns using classified imagery that has been verified with field surveys.
The following research questions were evaluated: (1) How well does each classification method perform for classifying prickly pear in the landscape, and which method performs best after fire? (2) Do classification methods using spectral endmember collection improve classification accuracies over other methods? (3) How does fire affect the cover and spatial pattern of classified prickly pear after patch burning, and does cover vary between deep and shallow soil series? Evaluation of these questions will aid in identifying a novel approach for mapping prickly pear on the landscape that can also be used to identify changes in the cover and spatial distribution of prickly pear after management actions such as prescribed fire.

2. Materials and Methods

2.1. Study Area and Field Sampling

The research was conducted on Texas A&M AgriLife Research’s Martin Ranch (MR) (Figure 1), located approximately 11 km southwest of Menard, TX (30.809670° N, 99.865701° W) within the Edwards Plateau ecoregion. The ranch covers approximately 2010.2 ha with an elevation gradient ranging from 612.8 m to 677.8 m. The topography at the site varies from 1–15% slope with very cobbly clay and silty clay soils. Average temperatures range from 34.4 °C in August to 7.9 °C in January. The 30-yr mean annual rainfall (PRISM Climate Group, 2018) is 663 mm, with peak rainfall occurring during June (87.73 mm/month) and August (70.35 mm/month). Within MR, one experimental burn unit (93.71 ha) was selected as the study area. Prescribed fires were implemented in the spring of 2019. The dominant soil series within the burn unit were the Tarrant (TA) series (37.59 ha), very shallow clay over indurated limestone; the Valera silty clay (VaB) series (41.49 ha), moderately deep silty clay over a petrocalcic horizon; and the Dev series (14.63 ha), very gravelly, deep loamy alluvium [58].
During the pre-fire and post-fire periods, 288 GPS points with 2 m radius polygons served as training sample locations. The training sample locations were stratified by soil type, with about 120 training samples randomly placed within each dominant soil series (i.e., TA and VaB) and 50 per secondary soil series (Dev).
We sampled prickly pear presence and physical measurements for three size classes of prickly pear mottes [4]: (1) small (0–20 cladodes per motte), (2) medium (21–100 cladodes per motte), and (3) large (101–500 cladodes per motte). We then measured the width (m), height (m), and length (m) of each surveyed prickly pear motte to estimate its total area (m2). Ansley and Castellano’s (2007) canopy area formula was used to calculate each motte’s 2-dimensional footprint as CA = πab, where a = motte length and b = motte width.
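A minimal sketch of these field allometries, assuming the canopy area formula exactly as stated (a = motte length, b = motte width) and the cladode-count size classes above; the function names are illustrative, not from the study:

```python
import math

def canopy_area(length_m, width_m):
    # Elliptical canopy area as given in the text: CA = pi * a * b
    return math.pi * length_m * width_m

def size_class(n_cladodes):
    # Size classes from the field protocol
    if n_cladodes <= 20:
        return "small"
    elif n_cladodes <= 100:
        return "medium"
    elif n_cladodes <= 500:
        return "large"
    raise ValueError("motte exceeds the largest surveyed size class")

# A 1.2 m x 0.8 m motte with 85 mature cladodes
print(round(canopy_area(1.2, 0.8), 2), size_class(85))
```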
Using aerial and UAV imagery collected prior to the burn, we identified areas within the landscape that had a diverse distribution of prickly pear mottes. Once in the field, we used a GeoTX Trimble sub meter GPS unit to collect the coordinates from the center of randomly sampled prickly pear mottes based on their perimeter size. Prickly pear mottes were categorized by CA and size class distribution (based on the number of mature cladodes per motte). For classification, the mottes were split randomly (70:30) into training and accuracy assessment groups, respectively.
An ASD FieldSpec Hand-Held 2 Pro spectroradiometer from Malvern Panalytical was used for building the spectral library of prickly pear and proximate plant species (e.g., grasses, forbs, shrubs, and tree species if intermixed within the prickly pear mottes). The FieldSpec allowed rapid collection of spectral reflectance signals from the cactus cladodes at wavelengths ranging from 325–1075 nm (±1 nm resolution) while maintaining a field of view of 25 degrees. The FieldSpec was calibrated at every sampled patch using a square white reference panel (20 cm × 20 cm). Each measurement was taken under clear skies and full sunlight. After sampling, the FieldSpec raw data were transferred to a computer using the ASD HH-2 Sync software and then converted from raw data to reflectance using ViewSpecPro 6.2 software (ASD Inc., Falls Church, VA, USA, www.asdi.com, accessed on 12 August 2023).

2.1.1. Post-Fire Testing Data

Field-based test and validation data points were collected during the peak biomass (leaf-on) field season in August 2019, approximately 6 months after the prescribed fire was conducted. To test and validate image classifications, field-based test pixels were established from a new set of 480 randomly selected prickly pear mottes of different size classes (e.g., small, medium, and large) within the burn unit after the fire. GPS location information for each motte was collected, along with spectral profiles of prickly pear cladodes. The 480 motte perimeters served as a validation dataset for the image classifications.

2.2. Remote Sensing Data Collection and Processing

During biannual vegetation biomass seasons (lowest: February-March and peak: August-September) from 2018 to 2020, a Cessna 206 aircraft equipped with a multispectral imaging system (two Nikon D810 digital cameras with a 7360 × 4912-pixel array) was used for image acquisition across the entire Martin Ranch (Table 1; Appendix A.1 for detailed information on flights, cameras, and image preparation and processing).
For both the pre-fire and post-fire multispectral images (Table 1), vegetation indices (Table 2; Figure A1) were calculated. The selected vegetation indices were based on their performance in previous studies [59,60,61] and their potential utility in classifying prickly pear. The spectral profiles of the four original bands in the multispectral images (Table 1) and the fourteen spectral indices (Table 2) were stacked into an image composite for use in the prickly pear classification. A low-pass filter (3 × 3 kernel) was used to preserve the low-frequency brightness values of each of the original spectral bands. To enhance high frequencies and sharpen edges, a high-pass filter (3 × 3 kernel) was applied to the original spectral bands using the Convolution and Morphology Filters tool in ENVI. The final stacked images for the pre-fire and post-fire seasons had 26 bands.
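The filtering and stacking step can be sketched with NumPy. The study used ENVI's Convolution and Morphology Filters tool; the 3 × 3 mean and center-weighted high-pass kernels below are common defaults, assumed here rather than taken from ENVI, and the band is a random stand-in:

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

rng = np.random.default_rng(0)
band = rng.random((100, 100))                  # stand-in for one spectral band

windows = sliding_window_view(band, (3, 3))    # (98, 98, 3, 3) valid-mode windows

# 3x3 low-pass (mean) filter: preserves low-frequency brightness
low = windows.mean(axis=(2, 3))

# 3x3 high-pass filter: center-weighted kernel enhances edges
high_kernel = np.array([[-1., -1., -1.],
                        [-1.,  8., -1.],
                        [-1., -1., -1.]])
high = (windows * high_kernel).sum(axis=(2, 3))

# Stack the original band with its filtered layers into a composite;
# indices and further bands would be stacked the same way
composite = np.stack([band[1:-1, 1:-1], low, high])
print(composite.shape)  # (3, 98, 98)
```

Valid-mode windowing trims a 1-pixel border; ENVI instead pads edges so the output keeps the input dimensions.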

Spectral Profile Library

A field-based spectral library for prickly pear mottes was built to train and test sub-pixel classification for spectral profile analysis using the spectral hourglass wizard. From the 288 training (Section 2.1, Study Area and Field Sampling) and 480 testing (Section 2.1.1, Post-Fire Testing Data) prickly pear patch samples collected at the MR study site, 300 spectral profiles were collected from a variety of prickly pear patches (prickly pear mixed with herbaceous plants and pure patches) with the FieldSpec by soil series (pre- and post-fire). The wavelength range of the FieldSpec was compatible with the ranges of the four spectral bands (389–1000 nm) in the multispectral images.

2.3. Classification Methods and Accuracy Assessment

Before classification, a training dataset consisting of a thousand pixels per class (prickly pear, ground, shadow, woody, and herbaceous cover) was manually extracted as regions of interest (ROIs) from multispectral (22 cm resolution) and UAV (3 cm resolution) images [22]. The Jeffries-Matusita distance separability analysis was performed to find the best distance score (0.0 worst–2.0 best) between classes to evaluate the uniqueness of each training class [22,62].
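Assuming Gaussian class statistics, the Jeffries-Matusita distance used for this separability analysis can be computed from each class's mean vector and covariance matrix via the Bhattacharyya distance. A sketch with hypothetical two-band statistics (JM ranges from 0, inseparable, to its asymptote of 2, fully separable):

```python
import numpy as np

def jeffries_matusita(m1, s1, m2, s2):
    """J-M distance between two classes assumed Gaussian, with mean
    vectors m1, m2 and covariance matrices s1, s2."""
    m1, m2 = np.asarray(m1, float), np.asarray(m2, float)
    s = (s1 + s2) / 2.0
    diff = m1 - m2
    # Bhattacharyya distance: Mahalanobis-like term plus covariance term
    b = (diff @ np.linalg.solve(s, diff)) / 8.0 + 0.5 * np.log(
        np.linalg.det(s) / np.sqrt(np.linalg.det(s1) * np.linalg.det(s2)))
    return 2.0 * (1.0 - np.exp(-b))

# Hypothetical 2-band class statistics (e.g., prickly pear vs. herbaceous)
m_pp, s_pp = [0.30, 0.55], np.diag([0.01, 0.02])
m_hb, s_hb = [0.20, 0.40], np.diag([0.01, 0.02])
print(round(jeffries_matusita(m_pp, s_pp, m_hb, s_hb), 3))
```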
Based on extensive literature and their popularity as land cover classification methods, three classification methods were evaluated (Figure 2) as follows:
(1)
Classification using object-oriented methods [23,24,25] (see Appendix A.2). We used the Feature Extraction-Example Based Workflow from ENVI 5.5 (NV5 Geospatial Solutions, Inc., Broomfield, CO, USA.), which includes K Nearest Neighbor (KNN), Principal Components Analysis (PCA), and Support Vector Machines [28,63] as classification methods based on the user-chosen training samples (e.g., regions of interest).
(2)
Classification using pixel extraction and regression models. In RStudio [64] (see Appendix A.2), we used the RandomForest Package (RF) [26,65,66]. RF is a robust ensemble learning classification algorithm resistant to overfitting. Moreover, RF is effective in classifying large datasets, rating variables of importance, generating unbiased estimates of the Out of Bag (OOB) error (generalization error), and maintaining its robustness against outliers and data noise [26,67].
(3)
Classification of spectral profiles. The Spectral Angle Mapper (SAM) [51,52,53] from the Spectral Hourglass Wizard in ENVI 5.5 (see Appendix A.2) was used to geometrically match pixels to a reference spectral library, resulting in a spectral classification based on spectral and physical similarities [53,54,68]. To classify spectral profiles based on the Spectral Information Divergence (SID) algorithm [56,57], we used the ENVI 5.5 Endmember Collection tool (see Appendix A.2). Compared to SAM, the SID method discriminates between spectral profiles among random pixels in the image (extracted using stochastic measures) following a maximum divergence threshold for probabilistic spectral behavior [53,56].
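The two spectral measures in method (3) can be sketched as follows: SAM compares the geometric angle between spectra and is therefore insensitive to overall brightness scaling, while SID compares spectra normalized to probability distributions via a symmetric Kullback-Leibler divergence. The 4-band spectra below are hypothetical, not from the study's spectral library:

```python
import numpy as np

def sam_angle(pixel, ref):
    """Spectral Angle Mapper: angle (radians) between pixel and reference
    spectra; smaller angles mean a closer spectral match."""
    pixel, ref = np.asarray(pixel, float), np.asarray(ref, float)
    cos = pixel @ ref / (np.linalg.norm(pixel) * np.linalg.norm(ref))
    return float(np.arccos(np.clip(cos, -1.0, 1.0)))

def sid(pixel, ref):
    """Spectral Information Divergence: symmetric KL divergence between
    spectra treated as probability distributions."""
    p = np.asarray(pixel, float); p = p / p.sum()
    q = np.asarray(ref, float);   q = q / q.sum()
    return float(np.sum(p * np.log(p / q)) + np.sum(q * np.log(q / p)))

# Hypothetical 4-band reflectance spectra
pp_ref  = np.array([0.10, 0.15, 0.12, 0.55])   # prickly pear endmember
pixel_a = np.array([0.11, 0.16, 0.13, 0.52])   # candidate pixel, similar shape
pixel_b = np.array([0.30, 0.28, 0.25, 0.20])   # dissimilar pixel

assert sam_angle(pp_ref, pixel_a) < sam_angle(pp_ref, pixel_b)
assert sid(pp_ref, pixel_a) < sid(pp_ref, pixel_b)
```

Classification then assigns each pixel to the endmember with the smallest angle (SAM) or divergence (SID), subject to a maximum-threshold rule such as the divergence threshold described above.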

Accuracy Assessment to Determine the Best Classification Method

Before conducting a performance analysis through accuracy assessment, a majority filtering based on four neighbors (4 × 4 kernel) was used to reduce the presence of single pixels while keeping contiguous neighboring groups of pixels intact [22]. A confusion matrix based on overall accuracies (OA) and kappa coefficients (KC) was used to assess the classification accuracy for each method using the test datasets for prickly pear mottes. An independent set of 1000 “equally-stratified random” points per soil series was created and manually verified using the UAV 3 cm image using the “Create Accuracy Assessment Points” tool within ArcMap 10.8 (ESRI, 2022). After manually verifying each random point, 520 points were selected to overlay onto prickly pear and non-prickly pear pixels from the UAV image. Then, the 520 verified points were extracted as updated accuracy assessment points and combined with the 480 field-validated points described in Section 2.1.1, which accounted for a total ground-truth dataset of 1000 points. Then, the accuracy assessment was computed using the Confusion Matrix tool in ArcMap 10.8. Using the formulas for calculating overall accuracy (OA), user’s accuracy (UA), and producer’s accuracy (PA) in Olariu et al. (2022), the performance of each prickly pear classification product was evaluated. The best classification method was assessed by comparing each method’s accuracy before and after prescribed fires. The method with the highest overall accuracy and kappa coefficient was used for the subsequent spatial pattern analyses and comparison to field-based prickly pear allometries.
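The accuracy metrics above can be computed directly from a confusion matrix. This is a sketch with a hypothetical 2-class matrix (the study used ArcMap's Confusion Matrix tool and the formulas in Olariu et al. (2022); the counts below are invented for illustration):

```python
import numpy as np

def accuracy_metrics(cm):
    """Overall accuracy, user's and producer's accuracy, and kappa from a
    confusion matrix with rows = classified and columns = reference."""
    cm = np.asarray(cm, float)
    n = cm.sum()
    oa = np.trace(cm) / n                      # overall accuracy
    ua = np.diag(cm) / cm.sum(axis=1)          # user's accuracy (per row)
    pa = np.diag(cm) / cm.sum(axis=0)          # producer's accuracy (per column)
    pe = (cm.sum(axis=1) * cm.sum(axis=0)).sum() / n**2  # chance agreement
    kappa = (oa - pe) / (1.0 - pe)
    return oa, ua, pa, kappa

# Hypothetical 2-class matrix: prickly pear vs. background
cm = [[420, 30],    # classified as prickly pear
      [ 50, 500]]   # classified as background
oa, ua, pa, kappa = accuracy_metrics(cm)
print(round(oa, 3), round(kappa, 3))
```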

2.4. Effect of Fire on the Amount and Spatial Pattern of Prickly Pear Cover—A Case Assessment

The amount (% cover) and spatial pattern of prickly pear cover in the pre-fire (2018) and post-fire (2019–2020) seasons were quantified based on the spectral classification for areas burned and unburned in the burn unit and in each of the major soil series (i.e., TA and VaB) within the burn unit, respectively. A binary classification of burned vs. unburned areas was developed using an RF model classification with 93.1% accuracy. The classification of burned areas was post-processed using majority filtering (8 × 8 cells) and shrink and expand (4 cells) to make a more generalized burned area layer within the burn unit.
The size (area of patch, m2) and shape index (SI = 0.25 P_ij / √a_ij, where P_ij = perimeter (m) of patch ij and a_ij = area (m2) of patch ij) of the prickly pear patches were used to characterize the spatial pattern of the prickly pear cover and its change [69,70,71]. The R package “landscapemetrics” [72] was used in conjunction with the “raster” package [73,74] to calculate landscape metrics. A two-way ANOVA was used to assess the effect of fire (pre- and post-fire seasons) and soil type (TA and VaB) on patch size and shape index of prickly pear patches. Simple linear regressions were also used to evaluate the relationship between the size and shape of prickly pear patches before and after the fire. ANOVA and regression analyses were conducted using the Vegan package in R [75]. Additionally, frequency distributions of the size and shape index of the prickly pear patches were generated for areas burned in the TA and VaB soils, respectively. A two-sample Kolmogorov-Smirnov (KS) test [76] was used to assess the influence of fire on the frequency distribution of prickly pear patches between the pre-fire and post-fire seasons. The KS test was evaluated using the “dgof” and “Matching” packages in R [77].
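The shape index and the two-sample KS statistic can be sketched in Python (the study used the R packages “landscapemetrics” and “dgof”; the patch-size samples below are hypothetical, and p-values are omitted here):

```python
import numpy as np

def shape_index(perimeter_m, area_m2):
    """Raster shape index 0.25*P/sqrt(a): equals 1 for a square patch and
    increases with shape complexity."""
    return 0.25 * perimeter_m / np.sqrt(area_m2)

def ks_statistic(x, y):
    """Two-sample Kolmogorov-Smirnov D: maximum vertical distance between
    the two empirical CDFs."""
    x, y = np.sort(x), np.sort(y)
    grid = np.concatenate([x, y])
    cdf_x = np.searchsorted(x, grid, side="right") / len(x)
    cdf_y = np.searchsorted(y, grid, side="right") / len(y)
    return float(np.abs(cdf_x - cdf_y).max())

# A square patch (perimeter 40 m, area 100 m2) has shape index 1.0
print(shape_index(40.0, 100.0))

# Hypothetical pre- vs. post-fire patch-size samples (m2)
pre  = np.array([12.0, 30.0, 45.0, 80.0, 150.0])
post = np.array([5.0, 8.0, 14.0, 20.0, 33.0])
print(round(ks_statistic(pre, post), 2))
```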

3. Results

3.1. Pre- and Post-Fire Classification Performance

The OA for the pre-fire and post-fire images classified using object-based SVM and kNN, pixel-based RF, and endmember-based SAM and SID varied from 43.5% to 96.6% for the 26-band multispectral images (Table 3). The endmember collection method using SAM classification achieved the highest OA (96.6%) and KC (93.2%) for pre-fire imagery and the highest OA (91.2%) and KC (86.9%) for post-fire imagery (Table 3). The object-based (image feature extraction) method using SVM and kNN classification resulted in the lowest accuracy for both pre- and post-fire imagery. For the object-based methods, the UA and PA indicated that these methods performed poorly by overestimating classified features as prickly pear when compared to the other methods (Table 3). However, the UA for the object-based classification methods (78.7–93.0%) was higher than the PA (32.0–70.9%) (Table 3).
For the pixel extraction RF model (no. of variables = 10; mtry = 3:10; ntree = 1000), 98.44% of the variation was explained (%IncMSE), and RMSE was lowest (2.26) using the combination of spectral reflectance and highest-ranking vegetation indices identified by variable importance analysis (VarImpPlot). The spectral reflectance bands explained 52.5% of the variation, while the VIs accounted for 47.5% of the variance in the prickly pear spectral profile. The RF method was intermediate in accuracy among the methods evaluated, with the average OA and KC being 91.2% and 82.4% for pre-fire imagery and 85.8% and 71.6% for post-fire imagery, respectively. The endmember spectral (n-D) analysis methods (SAM and SID) performed best, with OA > 94% and KC > 89% for both methods on pre-fire imagery (Table 3). The performance of the endmember methods on post-fire imagery was slightly lower but still higher than object-based and pixel extraction methods (Table 3).
Across all classification methods tested, the endmember-based SAM model, which used the geometric features of prickly pear spectra, consistently produced the best results in classifying both pre- and post-fire images (Figure 3; Table 3). The SID algorithm showed a lower OA and KC for post-fire classification than its pre-fire classification performance. UA and PA were higher in pre-fire images when compared to post-fire images for both SAM and SID spectral methods (Table 3). SAM and SID classification of post-fire images identified substantial fragmentation of the classified prickly pear patches within the burned landscape. The SAM algorithm classified prickly pear similarly to that of the RF model but expanded the estimates to adjacent areas with similar spectral geometry. During pre-fire conditions, classified prickly pear showed more homogeneous and larger patches in the SAM-classified images (Figure 3).

3.2. Effect of Fire on the Amount and Spatial Pattern of Prickly Pear Cover—A Case Assessment

The percent cover of prickly pear in the burn unit decreased by 46.53%, from 18.92% before the fire to 10.12% after the fire. Within the shallow (TA) and deeper (VaB) soils, prickly pear cover decreased by 29.21% and 61.33%, respectively. The density of large patches in the burned areas decreased in both the TA and VaB soils (Figure 4). The density of small patches in the burned areas decreased within the burn unit as a whole and within the VaB soil. However, small patches increased within the TA soil (Figure 4). The Kolmogorov-Smirnov two-sample test results showed a significant difference between the frequency distributions of the pre- and post-fire seasons in the VaB soil (D = 0.64, p = 0.02), but only marginally significant differences in the TA soil (D = 0.56, p = 0.07) and whole burn unit (D = 0.55, p = 0.07). The slopes of the regression lines of shape index against patch size were lower in the post-fire season than in the pre-fire season, suggesting that the shape of the larger prickly pear patches became simpler after the fire.
ANOVA results indicated that seasons (pre- and post-fire) significantly affected the size and shape index of prickly pear patches. For both patch size and shape index, the effect of two-way interactions (Season × Soil) was statistically significant (p < 0.00001).

4. Discussion

Assessing the classification performance of classical machine learning algorithms (object-based SVM [78] and pixel-based RF [28,79]) alongside the spectral (n-D) endmember methods allowed a comparison of traditional hyperspectral classification techniques applied to multispectral imagery. The spatial resolution of the multispectral sensor used in this study, the spectral calibration, and the collection of spectral signatures created a unique opportunity to test the classification accuracies for estimating prickly pear cover in the Edwards Plateau of Texas. The methodology for accurately classifying prickly pear resulting from this study can also enable the design and implementation of spatially explicit management strategies. These could include targeted prescribed burns, herbicide applications, and mechanical treatments applied over large and heterogeneous landscapes with precision and efficiency. In addition, these methods can aid in monitoring and assessing the effectiveness of these and other management interventions.

4.1. Best-Performing Classification Methods

Generally, previous studies have recommended object-based classifications over pixel-based classifications for woody species [24,25,43]. However, results from the present study indicated that pixel- and endmember-based methods had higher overall accuracy and kappa values (Table 3). Object-based classification remains relevant for species with pronounced 2-dimensional and 3-dimensional canopy structure, which allows better feature extraction when canopies are outlined appropriately. However, object-based classification accuracy can vary [80,81,82] and may perform worse than pixel-based classification for less conspicuous features on the landscape and in lower-resolution imagery. For species with a low 3-dimensional profile (such as prickly pear cacti), classification performance depends greatly on the spatial resolution of the imagery [19], with higher resolutions needed to achieve better target detection.
In the present study, the spectral-based classification methods derived from SAM and SID performed better in classifying prickly pear in pre-fire and post-fire imagery than the object-oriented and pixel-based methods (Table 3). Spectral-based classification methods achieved higher accuracy when evaluated against the reference field data and spectral training data. Moreover, consistent with previous studies, the SAM and SID algorithms appeared robust to changes in illumination and brightness between the pre-fire and post-fire imagery [83,84]. In context, the prescribed fire likely created charred prickly pear portions within the mottes, thus changing the pixel spectra. The fire possibly decreased the spectral variation and contrast within the bandwidth of the multispectral data, reducing the classification accuracies for pixel-based and endmember-based classifications. However, in evaluating the error matrix and mapping results (Figure 3), the SAM method performed better at estimating changes in the prickly pear area, which were consistent with the changes in the total prickly pear area calculated from the landscape analysis within the burn unit. The SID model's performance was consistent with the method's effectiveness in preserving spectral properties and capturing the spectral similarities among prickly pear pixels [83]. However, SAM appears to have performed better at detecting the spectral geometry of prickly pear before and after the fire. The SAM angle is known to be suitable for separating targeted spectra based on spectral shape rather than the absorption curve [83,84]. Thus, comparing SAM and SID performance when classifying vegetation in semiarid landscapes provides a novel approach to increasing our understanding of cases where SAM is a better classification approach than SID.
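The geometric distinction between the two matchers can be made concrete with a short sketch; the six-band spectra below are invented for illustration and are not the study's endmembers:

```python
import numpy as np

def spectral_angle(a, b):
    """Spectral Angle Mapper: the angle (radians) between two spectra,
    insensitive to overall brightness (vector magnitude)."""
    cos = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
    return float(np.arccos(np.clip(cos, -1.0, 1.0)))

def spectral_info_divergence(a, b, eps=1e-12):
    """Spectral Information Divergence: symmetric KL divergence between
    spectra normalized to probability distributions."""
    p = (a + eps) / (a + eps).sum()
    q = (b + eps) / (b + eps).sum()
    return float(np.sum(p * np.log(p / q)) + np.sum(q * np.log(q / p)))

# Invented 6-band reflectances (R, G, B, NIR-1..3) for illustration.
endmember = np.array([0.08, 0.12, 0.06, 0.45, 0.44, 0.46])  # green prickly pear
brighter = endmember * 1.5                                   # illumination shift
charred = np.array([0.10, 0.10, 0.09, 0.12, 0.12, 0.13])     # flat burned spectrum

# SAM is ~0 for the brightness-shifted pixel but large for the charred one,
# illustrating why angle-based matching tolerates illumination changes.
print(spectral_angle(endmember, brighter), spectral_angle(endmember, charred))
```

The angle to the scaled spectrum is essentially zero because scaling changes only vector length, not direction, whereas the flat charred spectrum points in a different direction in band space.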
The accuracy and performance of the pixel-extraction method using the RF regression model were acceptable, as expected, but surprisingly it was not the best-performing method. In this study, RF runtimes were fast. The model handled unbalanced and missing data from a combination of the top 10 most important bands (VarImpPlot) out of the image stack that contained the spectral reflectance bands, VIs, and convolution bands (low pass and high pass; 3 × 3 kernel). The RF prediction model increased the overall accuracy and model performance of prickly pear classification. However, the performance of the RF classification models in our study echoes a general understanding of the weaknesses of RF regression, where model predictions are limited to the range of the training data and sensitive to both the size of the training dataset and a skewed distribution of the training evidence [85]. Because of these extrapolation limitations, RF models can overfit data sets that are noticeably noisy, such as heterogeneous or disturbed landscapes. In our study, RF models fell short of accurately predicting prickly pear cover outside the range of mean pixel values in the training data space, indicating variation or noise in the image pixel array that was not captured by the mean pixel values of the prickly pear training data. Thus, RF model predictions have a high probability of omitting prickly pear mottes from the classification (false negative errors) [86].
Compared to endmember analysis in post-fire conditions, the RF model outperformed object-oriented classification while underperforming relative to the SAM and SID spectral analyses. The lower accuracy of the RF model in classifying prickly pear pixels after the fire may be partly due to known disadvantages in prediction accuracy on complex problems [79,86,87]. For example, by using a voting process, RF reduces the degree of overfitting [79]. Still, RF predictions can overfit more than those of linear models if the RF training dataset is not robust.
In that regard, recent studies evaluated the advantages of adopting the Bayesian hyperparameter optimization process for Sentinel-2 satellite imagery to improve the RF classification performance by correcting the deficiencies from false negative errors [88]. The Bayesian optimizer, however, has not been applied to spectral data for land cover classification at a finer resolution than 10 m. Additional research is needed to examine whether the optimizer can potentially improve the RF model classification of prickly pear by reducing the number of false negatives in the classification.
The stacking of spectral bands and calculation of spectral indices from those bands [89] to target ecological variables improved the classification of prickly pear cover by correlating the spectral information with the biophysical characteristics of the cactus. Among the VIs (Table 1) used in this study, the light use efficiency index (SIPI) and plant water stress index (NDWI) consistently scored higher variable importance than the other indices in RF. These indices likely capture motte pigment changes and cladode dehydration, two common adverse effects of fire exposure [4,15]. For SAM and SID, however, combining all the VIs covering crucial biophysical and biochemical characteristics of prickly pear yielded higher overall accuracy and performance. Further studies should include an assessment of VI importance rankings in the SAM and SID classifiers, something that is currently lacking in SAM and SID classification packages. The choice of VIs matters because relying solely on the selected VIs to predict prickly pear cover can hinder model generalization when the models are evaluated for use in similar ecosystems. Exploring a larger array of VIs also depends on image resolution and vegetation structure [49,90].
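The indices named above follow standard formulations; the sketch below uses common published definitions (SIPI after Penuelas et al.; a Green/NIR NDWI variant), which may differ in band pairing from those actually used in Table 1:

```python
import numpy as np

def ndvi(nir, red):
    """Normalized Difference Vegetation Index."""
    return (nir - red) / (nir + red)

def sipi(nir, red, blue):
    """Structure Insensitive Pigment Index: sensitive to the
    carotenoid/chlorophyll ratio, i.e., pigment stress."""
    return (nir - blue) / (nir - red)

def ndwi_green_nir(green, nir):
    """Green/NIR NDWI variant. The NDWI in the study may use a different
    band pairing; this is one common VNIR-only formulation."""
    return (green - nir) / (green + nir)

# Hypothetical per-pixel reflectances, for illustration only.
red, green, blue, nir = 0.08, 0.12, 0.06, 0.45
print(round(ndvi(nir, red), 3))  # high NDVI -> photosynthetically active canopy
```

Each function operates element-wise, so the same code applies to whole band arrays when computing index rasters for an image stack.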

4.2. Fire Effects on the Cover and Spatial Pattern of Prickly Pear—A Case Assessment

Fire significantly reduced prickly pear cover in the savannah landscape by 46.5%, which is consistent with the findings of similar studies. For example, Heirman and Wright (1973) [91] reported a 68% mortality rate in prickly pear from late-winter fires in west Texas, and Bunting et al. (1980) [3] reported a combined average of 60% mortality from late-winter and summer fires in another area of west Texas. However, Ansley and Castellano (2007) [4] found lower prickly pear mortality from high-intensity winter fires in north-central Texas, 29% and 19% in small and medium mottes, respectively, with no large mottes killed.
Fire also significantly affected the spatial pattern of the prickly pear patches, as reflected in the changes in the frequency distributions of patch size and shape index. The density of large prickly pear patches was reduced substantially after the fire (Figure 4), which appears inconsistent with the findings of Ansley and Castellano (2007) [4], who reported no mortality of large mottes. The large patches in our study, however, were identified based on the classification of aerial imagery, and many of the large patches might consist of multiple, coalescing prickly pear mottes (individual plants). The fire at our site likely killed some of the cladodes of the large mottes and some of the smaller mottes within individual large patches, which resulted in the fragmentation of these large patches into smaller patches. Based on the findings of Ansley and Castellano (2007) [4], high-intensity winter fires are likely to reduce the canopy size of medium and large mottes in both the short- and long-term (3 years) and reduce the canopy size of small mottes in the short- but not the long-term. Our results also show that the shape of the larger prickly pear patches tended to be simpler (lower shape index) after the fire, as evidenced by the lowered slope of the regression line of shape index against patch size (Figure 4). This likely resulted from both the fragmentation of the large prickly pear patches, which reduced the density of the large patches, and the smoothing of the edges of the patches. Large patches with convoluted shapes (vs. those with more compact and smooth shapes) are more likely to lose cladodes in narrow, intruding lobes and to fragment at narrow segments, both of which reduce the shape complexity of the patches.
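The shape-index-vs-size regression used here can be illustrated as follows, assuming a FRAGSTATS-style raster shape index (0.25 × perimeter / √area) and invented patch data in which the pre-fire slope is steeper:

```python
import numpy as np

def shape_index(perimeter, area):
    """Raster shape index (FRAGSTATS-style): 1.0 for a square patch,
    increasing as the patch boundary becomes more convoluted."""
    return 0.25 * perimeter / np.sqrt(area)

rng = np.random.default_rng(1)
# Hypothetical patch data: larger patches tend to have more complex edges,
# and the post-fire slope is assumed lower (simpler large patches).
area = rng.uniform(1, 200, 80)                         # patch areas (m^2)
si_pre = 1.0 + 0.015 * area + rng.normal(0, 0.3, 80)   # invented pre-fire SI
si_post = 1.0 + 0.006 * area + rng.normal(0, 0.3, 80)  # invented post-fire SI

slope_pre = np.polyfit(area, si_pre, 1)[0]
slope_post = np.polyfit(area, si_post, 1)[0]
print(f"pre-fire slope {slope_pre:.4f} vs post-fire slope {slope_post:.4f}")
```

A lower post-fire slope means large patches gain less shape complexity per unit area, i.e., their outlines have become simpler, which is the interpretation given above.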
The effects of fire on the amount and spatial pattern of prickly pear appeared strongly modulated by soil types, and we speculate that this was likely due to associated vegetation patterns such as vegetation productivity resulting from differences in the water holding capacity of the soils. The deeper clay soils (VaB) at the study location would have higher volumetric water storage capacity, while the cobbly clay soils (TA) are shallower and have a higher proportion of rock fragments, which would result in lower volumetric water storage capacity per unit area. Prickly pear cover decreased by 29.21% in the shallow TA soils, but in the deeper (VaB) soils, cover decreased by 61.33%. The herbaceous vegetation cover (%) estimated from 288 sampled plots was 20% less in TA soil (62%) than in VaB soil (82%), from which we can deduce that a difference in productivity existed between the two soils. The proportion of fine fuels was considerably higher in the deeper VaB soils (90.7% fine fuel cover) than in the shallow TA soils (73% fine fuel cover). The higher fuel volume in VaB soils could have resulted in higher fire intensity and duration, leading to a greater reduction of prickly pear in the deeper VaB soils. As described in the National Fire-Danger Rating System [92], fine fuels represent ≤ 0.64 cm diameter dead fine litter. Given the heterogeneous pattern of the vegetation in the shallow and rocky TA soil, the fine fuel distribution likely had lower connectivity, which might further reduce the burn effect on prickly pear by limiting fire spread.
The density of large (≥10 m2) prickly pear patches decreased in both the TA and VaB soils. However, the magnitude of the decrease was greater in the VaB soils (Figure 4). The patterns of change in the density of smaller prickly pear patches were opposite in the two soils—a significant decrease in density in the VaB soils but a substantial increase in density in the TA soils (Figure 4). The higher fuel load and presumable higher intensity of the fire in the VaB soils likely resulted in the significant reduction of small prickly pear patches due to mortality. The lower fuel load in the TA soils likely resulted in a low-intensity fire and hence low mortality of prickly pear in the small patches, which was consistent with the findings of Ansley and Castellano (2007). The limited reduction of existing small patches and the addition of small new patches from the fragmentation of large patches could have increased the density of small prickly pear patches in the TA soils.
Interestingly, the regressions of shape index vs. patch size were similar for the TA and VaB soils after the fire. However, the slopes of the pre-fire regression lines were higher for the TA soils than for the VaB soils, suggesting greater shape complexity of large prickly pear patches in the TA soils before the fire. A possible reason for this difference is that the higher herbaceous biomass in the VaB soils during the pre-fire season, which had higher rainfall (853.78 mm), concealed some of the intruding small cladodes, which resulted in simpler shapes with smoother edges of large prickly pear patches in the image classification, as well as slight underestimations of their sizes, in the VaB soils. The post-fire season had substantially lower rainfall (439.59 mm); meanwhile, the burn unit was continuously grazed, which likely reduced the herbaceous biomass in the VaB soils to a degree that minimized the concealment of intruding small cladodes.

4.3. Limitations and Recommendations

There are several limitations to the present study. First, significant classification improvements over those presented here could likely be achieved by including high-resolution images from UAVs with a LiDAR payload, offering a robust 3-dimensional classification of prickly pear mottes. Secondly, timing discrepancies between remote sensing data collection and field surveys could, to some extent, limit the classification performance of any of the methods [93]. Thirdly, field sampling logistics were limited; broadening the spatial distribution of the sampling points used as training data could improve RF model performance with the pixel extraction approach. Given the challenges and opportunities of further expanding remote sensing applications, deep learning methods such as convolutional neural networks (VGG-19 CNN; [22]) and machine learning with data fusion [94] may offer an effective strategy for cacti species classification in heterogeneous semiarid landscapes. Lastly, exploring the utility of readily available commercial (e.g., DigitalGlobe's WorldView-3 from Maxar and Dove satellites from Planet.com) and public (e.g., USGS EROS NAIP-National Agriculture Imagery Program) Earth-imaging services at high resolution (0.3–3.7 m) provides upscaling possibilities for classifying species distribution.

5. Conclusions

The expansion of the prickly pear distribution has critical implications for the ecology and management of rangelands, with a significant economic impact in the US Southern Great Plains. By demonstrating the utility of high-resolution (0.21 m) imagery, this study offers a meaningful contribution to multispectral analysis of plant species, achieving an accurate spectral classification of prickly pear cactus. This study provided detailed testing of different classification models, from object-based to endmember-based, and compared their performance through classification accuracy and field validation. Spectral (n-D) classification techniques were found to be the most accurate for mapping the distribution of cacti species within heterogeneous landscapes in the Edwards Plateau of Texas. Comparing the performance of the SAM and SID spectral analyses and mapping outputs in this study provided new insights into the accuracy of these classification methods when mapping prickly pear cacti in a diverse landscape.
As such, these findings provide a critical step forward in bridging the gap between traditional field sampling and remote sensing for classifying prickly pear with high-resolution aerial imagery. The methods developed in this study for remote sensing-based classification of prickly pear allow for a large-scale evaluation of the factors influencing the spatial distribution of prickly pear over heterogeneous landscapes. They enable ecological explorations of the spatial pattern and temporal dynamics of prickly pear cacti, the biophysical factors influencing these patterns and dynamics, and the potential ecological mechanisms involved. The enhanced ecological understanding and the derived management strategies can help reduce and mitigate the ecological and economic impacts of prickly pear expansion and sustain the livestock production, biodiversity, and other ecosystem services of the rangelands in the Southern Great Plains.

Author Contributions

Conceptualization, X.A.J., J.P.A., J.W., D.R.T. and X.B.W.; Methodology, X.A.J., J.P.A., X.B.W., D.R.T. and C.Y.; Formal analysis, X.A.J. and J.P.A.; Investigation, X.A.J., J.P.A., J.M., D.R.T., J.W. and X.B.W.; Data curation, X.A.J., J.M. and C.Y.; Writing—original draft preparation, X.A.J.; Visualization, X.A.J.; Writing—review and editing, X.A.J., J.P.A., X.B.W. and C.Y.; Supervision, J.P.A. and X.B.W.; Project administration, X.B.W. and J.P.A.; Funding acquisition, X.B.W. and J.P.A. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded in part by the U.S. Department of Agriculture’s Agriculture Research Service and the National Institute of Food and Agriculture (2019-68012-29819 and Hatch Project 1003961). The U.S. Department of Agriculture is an equal opportunity lender, provider, and employer. Support was also provided to Xavier Jaime through a Graduate Diversity Fellowship from Texas A&M University. Partial support for this research was also provided by the Savanna Long-term Research and Education Initiative (SLTREI), Department of Ecosystem Science and Management, Texas A&M University.

Data Availability Statement

The data presented in this study are available from the corresponding author on reasonable request.

Acknowledgments

We thank Fred Gomez, Jesse Goplin, Deann Burson, Zheng Li, Weiqian Gao, Justin P. Wied, Matthew Rector, and the Grazingland Animal Nutrition Lab from TAMU AgriLife Research for their assistance with the field surveys and remote sensing logistics. We also thank TAMU AgriLife San Angelo and the Sonora Research Station for providing lodging accommodations and aiding in field logistics, and Jesus Lopez for his unconditional willingness to assist us during our field surveys at Martin Ranch.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A

Appendix A.1. Description of the USDA Agricultural Research Service’s Multispectral Imaging System and Image Processing

  • Description of flight logistics, multispectral cameras, UAV equipment, and data processing.
The USDA Agricultural Research Service (ARS) (College Station, TX, USA) coordinated the flight missions and collected the imagery using the standard USDA protocol for aircraft, acquiring imagery at 1981.2 m above mean sea level (MSL) [1]. The average ground elevation in the study area was about 685.8 m MSL, which resulted in a flight path at 1295.4 m above ground level.
Two Nikon D810 digital cameras with a 7360 × 4912-pixel array were used during flight missions. One camera captured standard RGB color images, while the other was equipped with an 830-nm long band-pass filter to obtain NIR images (both at 21 cm pixel resolution). The images had a side overlap of 82% and a forward overlap of 80%. After the prescribed fire, high-resolution multispectral images (3 cm pixel resolution) were collected for the burn unit using a rotary AG-V6A hexacopter unmanned aerial vehicle (UAV) (Homeland Surveillance and Electronics, LLC, Casselberry, FL, USA) equipped with the same two Nikon D810 cameras. In all image acquisitions, we strategically placed white tiles (0.61 m2) at 25 locations across the ranch as ground control points (GCPs) to geometrically correct the airborne images using Pix4Dmapper (Pix4D Inc., Lausanne, Switzerland) [95,96,97,98].
Table A1. Multispectral data specifications.
Band Name | Band No. | Spectral Range (nm) | Spatial Resolution (m) | Radiometric Resolution (bits)
Red | 1 | 604.7–698.0 | 0.21 | 8
Green | 2 | 501.1–599.6 | 0.21 | 8
Blue | 3 | 418.2–495.9 | 0.21 | 8
NIR-1 * | 4 | 703.2–900.2 | 0.21 | 8
NIR-2 | 5 | 703.2–900.2 | 0.21 | 8
NIR-3 | 6 | 703.2–900.2 | 0.21 | 8
* The NIR camera provided three NIR bands with the same wavelengths but different digital numbers. The first NIR band (NIR-1) was used to calculate the vegetation indices.
Data processing of the multispectral imagery began by applying the “Empirical Line Compute Factors and Correct” method in ENVI 5.5 software (L3Harris Geospatial, Jersey City, NJ, USA, 2022) to radiometrically calibrate the images (21 cm resolution) from digital numbers (DN values) to surface reflectance percentage. Image DN values were paired with reflectance values measured from 8 m by 8 m tarpaulins with an ASD FieldSpec [98]. The tarpaulins had nominal reflectance values of 3%, 4%, 16%, 32%, and 48%, respectively. The spectra from the tarpaulins were averaged over the band ranges of the two Nikon D810 cameras to convert DN to reflectance values following Yang et al. (2014) [95,98].
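The empirical line method amounts to fitting a per-band linear DN-to-reflectance mapping through the tarpaulin targets. A minimal sketch for one band, using the tarp reflectances stated above but invented DN values:

```python
import numpy as np

# Empirical line calibration: fit a linear DN-to-reflectance mapping from
# calibration tarpaulins of known reflectance. The tarp reflectances follow
# the paper; the mean DN values per tarp are invented for illustration.
tarp_reflectance = np.array([0.03, 0.04, 0.16, 0.32, 0.48])
tarp_dn = np.array([310.0, 390.0, 1350.0, 2630.0, 3910.0])

gain, offset = np.polyfit(tarp_dn, tarp_reflectance, 1)

def dn_to_reflectance(dn):
    """Apply the fitted empirical line to raw digital numbers."""
    return gain * dn + offset

# A raw band (or a full image array) can now be calibrated element-wise.
band_dn = np.array([500.0, 1500.0, 3000.0])
print(dn_to_reflectance(band_dn))
```

In practice one line is fitted per band, since each band has its own gain and offset.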
Figure A1. Example of the fourteen spectral indices calculated from the multispectral image composite for a selected site in the study area. The black buffer zones (2 m) represent field-sampled prickly pear patches as reference points.

Appendix A.2. Description of the Land Cover Classification Methods Evaluated in Detecting Prickly Pear Cactus

  1. Object-based Image Feature Extraction
The Feature Extraction-Example Based Workflow in ENVI 5.5 (Feature Extraction license; Harris Geospatial Solutions, Inc.) was used to identify segments and classify their features from the multispectral images. The Feature Extraction algorithm groups adjacent pixels with similar spectral (e.g., NDVI, hue, saturation, and intensity) and spatial (e.g., convolution and morphology filters) characteristics. For the multispectral image, the 26-band stack of three visible (RGB) bands, one NIR band, 14 spectral indices (Figure A1, Table 1), four high-pass filter bands (3 × 3 kernel), and four low-pass filter bands (3 × 3 kernel) served as the input data. As ancillary data, the Intensity-Hue-Saturation transformation from the ENVI data fusion technique was included to increase differences among features in the image. Next, for the segmentation (edge algorithm) and merging (Full Lambda Schedule) settings in ENVI Feature Extraction, the scale level was set to 65.0 and the merge level to 95.0. From the available Feature Extraction algorithms, we chose the Support Vector Machine (SVM) [63] with a radial basis function kernel. The gamma parameter was set to 0.091, and the penalty was set to 100 to minimize overfitting [22]. We also tested the K Nearest Neighbor (KNN) classifier with five (5) neighbors for majority voting to compare the performance of the Feature Extraction algorithms.
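A comparable SVM configuration can be sketched with the gamma and penalty values reported above; the per-segment features and the two-class structure below are invented for illustration:

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(2)
# Hypothetical per-segment features (e.g., mean NDVI, intensity, texture)
# for two well-separated classes; real inputs would be segment attributes
# exported from the ENVI Feature Extraction workflow.
prickly = rng.normal([0.6, 0.3, 0.5], 0.05, size=(60, 3))
other = rng.normal([0.3, 0.6, 0.2], 0.05, size=(60, 3))
X = np.vstack([prickly, other])
y = np.array([1] * 60 + [0] * 60)

# RBF-kernel SVM with the gamma and penalty values reported in the paper.
clf = SVC(kernel="rbf", gamma=0.091, C=100).fit(X, y)
print(clf.score(X, y))
```

A large penalty (C = 100) tolerates few training errors, while a small gamma keeps the RBF decision surface smooth, which is one way to balance fit against overfitting.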
  2. Pixel Extraction and Random Forest Modeling
Field-based geotagged prickly pear buffers (288 points with a 2 m buffer) were combined with an additional 275 prickly pear buffers that were manually drawn from the multispectral images and validated with the 3 cm UAV image (Figure 2). For the pixel extraction process, we used the terra and raster packages in R to extract the pixel values and build a data frame that included the sample ID, spatial coordinates, cell statistics (spectral mean, standard deviation, and standardized mean), and pixel values for each of the 26 bands.
Random Forest (RF) [26] was used to classify the prickly pear pixels by extracting pixel spectral data from the 2 m buffer surrounding each prickly pear training point. From the extracted data, cell statistics and a standardized mean spectral reflectance were calculated to account for the proportion of reflectance within each prickly pear training buffer. By design, RF fits an ensemble of decision trees to bootstrap samples of the pixel mean spectral values (from the 26-band images) to classify the landscape while limiting model bias and overfitting. The tuneRF [65] function from the R package "randomForest" was used to optimize the RF classifier performance based on the number of prediction variables (m) and the desired number of decision trees (k). Root Mean Square Error (RMSE), R2, out-of-bag error rate (OOB), Mean Square Error (%IncMSE), and the variance explained (VarImpPlot or VIP) from the RF-related packages (e.g., vip, tuneRanger, and randomForest) served as performance metrics to evaluate model accuracy and goodness of fit [34].
The first RF model used the prickly pear training data frame with the original spectral bands, vegetation indices, and convolution filter bands as input variables. A second RF model was then developed using the top 10 variables identified by the variable importance analysis (VarImpPlot). The raster map derived from the top 10 variables was used for subsequent analyses. Cover maps of prickly pear cactus patches (mottes) and non-prickly pear classes for each burn unit were generated from images acquired from 2018 to 2020.
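The two-stage workflow (fit on all bands, then refit on the top 10 by importance) can be sketched as follows; the 26 simulated bands, and which of them carry signal, are invented for illustration (the study's R workflow used randomForest/tuneRF):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(3)
# Hypothetical 26-band mean spectral values per training buffer; only bands
# 3, 7, and 12 are made informative, mimicking the band-selection step.
n = 300
X = rng.normal(0.0, 1.0, size=(n, 26))
y = (X[:, 3] + 0.8 * X[:, 7] + 0.6 * X[:, 12] + rng.normal(0, 0.5, n) > 0).astype(int)

rf_full = RandomForestClassifier(n_estimators=200, oob_score=True,
                                 random_state=0).fit(X, y)
top10 = np.argsort(rf_full.feature_importances_)[::-1][:10]

# Refit using only the ten most important bands, as in the second model.
rf_top = RandomForestClassifier(n_estimators=200, oob_score=True,
                                random_state=0).fit(X[:, top10], y)
print(sorted(top10.tolist()), round(rf_top.oob_score_, 3))
```

The out-of-bag score plays the role of the OOB error rate reported above: an internal accuracy estimate that needs no separate validation split.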
  3. Endmember-based Classification
The Spectral Hourglass Wizard (SHW) in ENVI 5.5 was used to post-process the spectral information from the images by matching it with the field-based spectral profiles for prickly pear mottes from the pre- and post-fire images. The first step in the SHW is to apply the Minimum Noise Fraction (MNF) transform to reduce the number of bands with spatial and spectral noise. The image bands ranked with the lowest noise fraction are expected to carry the most information and have high spatial structure and variability within the data. The SHW then estimates data dimensionality using a spatial coherence measure, although it is advisable to overestimate the dimensionality and include extra MNF bands (e.g., 26 bands in this study). From the calculated data dimensionality and coherence threshold, the next SHW step calculates the Pixel Purity Index (PPI) to find the best qualifying pixels with the most extreme spectral information, thus identifying pixel endmembers from a cluster of mixed spectral information in the image. In this step, the parameters were set to 10,000 PPI iterations (since the PPI plot approached a horizontal line after the most extreme pixels were accounted for), the default PPI threshold value (2500), and the default PPI maximum memory use of 10.0 Mb.
The n-D Visualizer is the third step in the SHW. The visualizer uses the MNF bands as n dimensions (n-D) and provides the location and identification of associated endmembers by clustering the purest pixels by reflectance values. From the automated clustering method, this step selects the potential or most extreme spectral endmembers in n-dimensional space as defined endmember classes (e.g., 19), to which we also supplied endmembers from the spectral library (e.g., 61); both sets were considered for further analysis in ENVI's Spectral Analyst. In the Spectral Analyst, the (automated) n-D Visualizer classes and (user) library spectrum endmembers matching at 0.9 or higher are likely candidates for identification. Finally, the 49 matching n-D classes between the spectral library and the Visualizer served as the training n-D data for the Spectral Angle Mapper (SAM) binary classification (i.e., prickly pear vs. non-prickly pear spectral profiles). Spectral Information Divergence (SID) [99] from the Endmember Collection tool in ENVI 5.5 was used as another endmember-based classification method to compare with the SHW's SAM classification output. When establishing parameters for SID, the maximum divergence threshold was set to 0.05 for all 50 n-D representatives of the prickly pear spectral profile.
After running SAM and SID, the spectral profiles of the classified 50 n-D spectral classes (Figure 2) were tested against the spectral library for prickly pear, with subsequent validation through visual evaluation of classified pixels within the trained and tested prickly pear buffers using images from both the manned aircraft and the UAV. The best matching spectral curves were combined and reclassified as prickly pear cactus mottes, and the remaining non-matching spectral classes were classified as non-prickly pear (Figure 3). Classification accuracy assessments consistent with the feature- and pixel-extraction methods were conducted to evaluate algorithm performance.

References

  1. Hanselka, C.W.; Falconer, L.L. Prickly pear management in south Texas. Rangel. Arch. 1994, 16, 102–106. [Google Scholar]
  2. Hart, C.R.; Lyons, R.K. Prickly Pear Biology and Management; Texas AgriLife Extension Service: College Station, TX, USA, 2010; pp. 1–8. [Google Scholar]
  3. Bunting, S.C.; Wright, H.A.; Neuenschwander, L.F. Long-term effects of fire on cactus in the southern mixed prairie of Texas. Rangel. Ecol. Manag. Range Manag. Arch. 1980, 33, 85–88. [Google Scholar] [CrossRef]
  4. Ansley, R.J.; Castellano, M.J. Prickly pear cactus responses to summer and winter fires. Rangel. Ecol. Manag. 2007, 60, 244–252. [Google Scholar] [CrossRef]
  5. Archer, S.R.; Andersen, E.M.; Predick, K.I.; Schwinning, S.; Steidl, R.J.; Woods, S.R. Woody plant encroachment: Causes and consequences. In Rangeland Systems: Processes, Management and Challenges; Briske, D.D., Ed.; Springer International Publishing: Cham, Switzerland, 2017; pp. 25–84. ISBN 978-3-319-46709-2. [Google Scholar]
  6. Wilcox, B.P.; Birt, A.; Fuhlendorf, S.D.; Archer, S.R. Emerging frameworks for understanding and mitigating woody plant encroachment in grassy biomes. Curr. Opin. Environ. Sustain. 2018, 32, 46–52. [Google Scholar] [CrossRef]
  7. Melgar, B.; Dias, M.I.; Ciric, A.; Sokovic, M.; Garcia-Castello, E.M.; Rodriguez-Lopez, A.D.; Barros, L.; Ferreira, I. By-product recovery of Opuntia spp. peels: Betalainic and phenolic profiles and bioactive properties. Ind. Crops Prod. 2017, 107, 353–359. [Google Scholar] [CrossRef] [Green Version]
  8. Cruz-Martins, N.; Roriz, C.; Morales, P.; Barros, L.; Ferreira, I. Food colorants: Challenges, opportunities and current desires of agro-industries to ensure consumer expectations and regulatory practices. Trends Food Sci. Technol. 2016, 52, 1–15. [Google Scholar] [CrossRef] [Green Version]
  9. Guevara, J.C.; Suassuna, P.; Felker, P. Opuntia forage production systems: Status and prospects for rangeland application. Rangel. Ecol. Manag. 2009, 62, 428–434. [Google Scholar] [CrossRef]
  10. Ueckert, D.N.; Livingston, C.W., Jr.; Huston, J.E.; Menzies, C.S.; Dusek, R.K.; Petersen, J.L.; Lawrence, B.K. Range and Sheep Management for Reducing Pearmouth and Other Pricklypear-related Health Problems in Sheep Flocks. TTS. Available online: https://sanangelo.tamu.edu/files/2011/11/1990_7.pdf#page=46 (accessed on 15 June 2023).
  11. McMillan, Z.; Scott, C.; Taylor, C.; Huston, J. Nutritional value and intake of prickly pear by goats. J. Range Manag. 2002, 55, 139. [Google Scholar] [CrossRef]
  12. Felker, P.; Paterson, A.; Jenderek, M. Forage potential of Opuntia clones maintained by the USDA, National Plant Germplasm System (NPGS) collection. Crop Sci. 2006, 46, 2161–2168. [Google Scholar] [CrossRef]
  13. McGinty, A.; Smeins, F.E.; Merrill, L.B. Influence of spring burning on cattle diets and performance on the Edwards Plateau. J. Range Manag. 1983, 36, 175. [Google Scholar] [CrossRef]
  14. Ansley, R.J.; Taylor, C. The future of fire as a tool for managing brush. In Brush Management—Past, Present, Future; Hamilton, W.T., McGunty, A., Ueckert, D.N., Hanselka, C.W., Lee, M.R., Eds.; Texas A&M University Press: College Station, TX, USA, 2004; pp. 200–210. ISBN 978-1-58544-355-0. [Google Scholar]
  15. Vermeire, L.T.; Roth, A.D. Plains Prickly pear response to fire: Effects of fuel load, heat, fire weather, and donor site soil. Rangel. Ecol. Manag. 2011, 64, 404–413. [Google Scholar] [CrossRef]
  16. Scifres, C.J.; Hamilton, W.T. Prescribed burning for brushland management: The south Texas example. Texas A&M University Press: College Station, TX, USA, 1993; ISBN 0890965390. [Google Scholar]
  17. Sankey, J.B.; Munson, S.M.; Webb, R.H.; Wallace, C.S.A.; Duran, C.M. Remote sensing of Sonoran desert vegetation structure and phenology with ground-based LiDAR. Remote Sens. 2015, 7, 342–359. [Google Scholar] [CrossRef] [Green Version]
  18. Bazzichetto, M.; Malavasi, M.; Barták, V.; Acosta, A.T.R.; Moudrý, V.; Carranza, M.L. Modeling plant invasion on mediterranean coastal landscapes: An integrative approach using remotely sensed data. Landsc. Urban Plan. 2018, 171, 98–106. [Google Scholar] [CrossRef]
  19. Oddi, L.; Cremonese, E.; Ascari, L.; Filippa, G.; Galvagno, M.; Serafino, D.; Cella, U.M. Using UAV imagery to detect and map woody species encroachment in a subalpine grassland: Advantages and limits. Remote Sens. 2021, 13, 1239. [Google Scholar] [CrossRef]
  20. Mirik, M.S.A.A.; Ansley, R.J. Comparison of ground-measured and image-classified mesquite (Prosopis glandulosa) canopy cover. Rangel. Ecol. Manag. 2012, 65, 85–95. [Google Scholar] [CrossRef]
  21. Mirik, M.; Ansley, R.J.; Steddom, K.; Jones, D.C.; Rush, C.M.; Michels, G.J.; Elliott, N.C. Remote distinction of a noxious weed (Musk Thistle: Carduus Nutans) using airborne hyperspectral imagery and the Support Vector Machine classifier. Remote Sens. 2013, 5, 612–630. [Google Scholar] [CrossRef] [Green Version]
  22. Olariu, H.G.; Malambo, L.; Popescu, S.C.; Virgil, C.; Wilcox, B.P. Woody plant encroachment: Evaluating methodologies for semiarid woody species classification from drone images. Remote Sens. 2022, 14, 1665. [Google Scholar] [CrossRef]
  23. Hay, G.; Castilla, G. Object-based image analysis: Strengths, Weaknesses, Opportunities and Threats (SWOT). In The International Archives of the Photogrammetry, Remote Sensing and Spatial Information Science (ISPRS Archives); Salzburg University: Salzburg, Austria, 2006; Volume XXXVI-4/C42, Available online: https://www.isprs.org/proceedings/XXXVI/4-C42/Papers/OBIA2006_Hay_Castilla.pdf (accessed on 10 May 2022).
  24. Weih, R.; Riggan, N. Object-based classification vs. Pixel-based classification: Comparative importance of multi-resolution imagery. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci.-ISPRS Arch. 2010, 38, C7. [Google Scholar]
  25. Laliberte, A.; Rango, A.; Havstad, K.; Paris, J.; Beck, R.; Mcneely, R.; Gonzalez, A. Object-oriented image analysis for mapping shrub encroachment from 1937 to 2003 in southern New Mexico. Remote Sens. Environ. 2004, 93, 198–210. [Google Scholar] [CrossRef]
  26. Breiman, L. Random Forests. Mach. Learn. 2001, 45, 5–32. [Google Scholar] [CrossRef] [Green Version]
  27. Amini, S.; Homayouni, S.; Safari, A.; Darvishsefat, A. Object-based classification of hyperspectral data using Random Forest algorithm. Geo-Spat. Inf. Sci. 2018, 21, 127–138. [Google Scholar] [CrossRef] [Green Version]
  28. Sheykhmousa, M.; Mahdianpari, M.; Ghanbari, H.; Mohammadimanesh, F.; Ghamisi, P.; Homayouni, S. Support Vector Machine versus Random Forest for remote sensing image classification: A Meta-analysis and systematic review. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2020, 13, 6308–6325. [Google Scholar] [CrossRef]
  29. Gitelson, A.A.; Gritz, Y.; Merzlyak, M.N. Relationships between leaf chlorophyll content and spectral reflectance and algorithms for non-destructive chlorophyll assessment in higher plant leaves. J. Plant Physiol. 2003, 160, 271–282. [Google Scholar] [CrossRef]
  30. Larrinaga, A.R.; Brotons, L. Greenness indices from a low-cost UAV imagery as tools for monitoring post-fire forest recovery. Drones 2019, 3, 6. [Google Scholar] [CrossRef] [Green Version]
  31. Tucker, C.J. Red and photographic infrared linear combinations for monitoring vegetation. Remote Sens. Environ. 1979, 8, 127–150. [Google Scholar] [CrossRef]
  32. Huete, A.; Justice, C.; Liu, H. Development of vegetation and soil indices for MODIS-EOS. Remote Sens. Environ. 1994, 49, 224–234. [Google Scholar] [CrossRef]
  33. Zhao, F.; Xu, B.; Yang, X.; Jin, Y.; Li, J.; Xia, L.; Chen, S.; Ma, H. Remote sensing estimates of grassland aboveground biomass based on MODIS net primary productivity (NPP): A Case study in the Xilingol grassland of northern China. Remote Sens. 2014, 6, 5386. [Google Scholar] [CrossRef] [Green Version]
  34. Li, Z.; Angerer, J.P.; Jaime, X.; Yang, C.; Wu, X.B. Estimating rangeland fine fuel biomass in western Texas using high-resolution aerial imagery and machine learning. Remote Sens. 2022, 14, 4360. [Google Scholar] [CrossRef]
  35. Gitelson, A.A.; Buschmann, C.; Lichtenthaler, H.K. Leaf chlorophyll fluorescence corrected for re-absorption by means of absorption and reflectance measurements. J. Plant Physiol. 1998, 152, 283–296. [Google Scholar] [CrossRef]
  36. Camps-Valls, G.; Campos-Taberner, M.; Moreno, A.; Walther, S.; Duveiller, G.; Cescatti, A.; Mahecha, M.; Muñoz, J.; García-Haro, F.; Guanter, L.; et al. A unified vegetation index for quantifying the terrestrial biosphere. Sci. Adv. 2021, 7, eabc7447. [Google Scholar] [CrossRef] [PubMed]
  37. Perry, C.R.; Lautenschlager, L.F. Functional equivalence of spectral vegetation indices. Remote Sens. Environ. 1984, 14, 169–182. [Google Scholar] [CrossRef]
  38. Silleos, N.G.; Alexandridis, T.K.; Gitas, I.Z.; Perakis, K. Vegetation Indices: Advances Made in Biomass Estimation and Vegetation Monitoring in the Last 30 Years. Geocarto Int. 2006, 21, 21–28. [Google Scholar] [CrossRef]
  39. Eitel, J.U.H.; Long, D.S.; Gessler, P.E.; Smith, A.M.S. Using in-situ measurements to evaluate the new RapidEyeTM satellite series for prediction of wheat nitrogen status. Int. J. Remote Sens. 2007, 28, 4183–4190. [Google Scholar] [CrossRef]
  40. Haboudane, D.; Miller, J.R.; Pattey, E.; Zarco-Tejada, P.J.; Strachan, I.B. Hyperspectral vegetation indices and novel algorithms for predicting green LAI of crop canopies: Modeling and validation in the context of precision agriculture. Remote Sens. Environ. 2004, 90, 337–352. [Google Scholar] [CrossRef]
  41. Richardson, A.J.; Wiegand, C.L. Distinguishing vegetation from soil background information. Photogramm. Eng. Remote Sens. 1977, 43, 1541–1552. [Google Scholar]
  42. Clevers, J.G.P.W. Application of the WDVI in estimating LAI at the generative stage of barley. ISPRS J. Photogramm. Remote Sens. 1991, 46, 37–47. [Google Scholar] [CrossRef]
  43. Gao, B. NDWI—A normalized difference water index for remote sensing of vegetation liquid water from space. Remote Sens. Environ. 1996, 58, 257–266. [Google Scholar] [CrossRef]
  44. Jackson, R.; Huete, A. Interpreting vegetation indices. Prev. Vet. Med. 1991, 11, 185–200. [Google Scholar] [CrossRef]
  45. Huete, A.R. A soil-adjusted vegetation index (SAVI). Remote Sens. Environ. 1988, 25, 295–309. [Google Scholar] [CrossRef]
  46. Munkhdulam, O.; Atzberger, C.; Chambers, J.; Amarsaikhan, D. Mapping pasture biomass in Mongolia using partial least squares, Random Forest regression and Landsat 8 imagery. Int. J. Remote Sens. 2018, 40, 3204–3226. [Google Scholar] [CrossRef]
  47. Fern, R.; Foxley, E.; Montalvo, A.; Morrison, M. Suitability of NDVI and OSAVI as estimators of green biomass and coverage in a semi-arid rangeland. Ecol. Indic. 2018, 94, 16–21. [Google Scholar] [CrossRef]
  48. Qi, J.; Chehbouni, A.; Huete, A.R.; Kerr, Y.H.; Sorooshian, S. A modified soil adjusted vegetation index. Remote Sens. Environ. 1994, 48, 119–126. [Google Scholar] [CrossRef]
  49. Liang, T.; Yang, S.; Feng, Q.; Liu, B.; Zhang, R.; Huang, X.; Xie, H. Multi-factor modeling of above-ground biomass in alpine grassland: A case study in the Three-River Headwaters region, China. Remote Sens. Environ. 2016, 186, 164–172. [Google Scholar] [CrossRef]
  50. Fusaro, L.; Salvatori, E.; Mereu, S.; Manes, F. Photosynthetic traits as indicators for phenotyping urban and peri-urban forests: A case study in the metropolitan city of Rome. Ecol. Indic. 2019, 103, 301–311. [Google Scholar] [CrossRef]
  51. Kruse, F.A.; Lefkoff, A.B.; Boardman, J.W.; Heidebrecht, K.B.; Shapiro, A.T.; Barloon, P.J.; Goetz, A.F.H. The spectral image processing system (SIPS)—Interactive visualization and analysis of imaging spectrometer data. Remote Sens. Environ. 1993, 44, 145–163. [Google Scholar] [CrossRef]
  52. Odden, B.; Kneubuehler, M.; Itten, K. Comparison of a hyperspectral classification method implemented in two remote sensing software packages. In Proceedings of the 6th Workshop on Imaging Spectroscopy, Tel Aviv, Israel, 19 March 2009; European Association of Remote Sensing Laboratories. pp. 1–8. [Google Scholar] [CrossRef]
  53. Durgasi, A.; Guha, A. Potential utility of Spectral Angle Mapper and Spectral Information Divergence methods for mapping lower Vindhyan rocks and their accuracy assessment with respect to conventional lithological map in Jharkhand, India. J. Indian Soc. Remote Sens. 2018, 46, 737–747. [Google Scholar] [CrossRef]
  54. Klinken, R.V.; Shepherd, D.; Parr, R.; Robinson, T.; Anderson, L. Mapping mesquite (Prosopis) distribution and density using visual aerial surveys. Rangel. Ecol. Manag. 2007, 60, 408–416. [Google Scholar] [CrossRef]
  55. Chignell, S.M.; Luizza, M.W.; Skach, S.; Young, N.E.; Evangelista, P.H. An integrative modeling approach to mapping wetlands and riparian areas in a heterogeneous Rocky Mountain watershed. Remote Sens. Ecol. Conserv. 2018, 4, 150–165. [Google Scholar] [CrossRef] [Green Version]
  56. Chang, C.-I. Spectral Information Divergence for Hyperspectral Image Analysis. In Proceedings of the IEEE 1999 International Geoscience and Remote Sensing Symposium (IGARSS’99), Hamburg, Germany, 28 June–2 July 1999; Volume 1, pp. 509–511. [Google Scholar] [CrossRef]
  57. Zhang, E.; Zhang, X.; Yang, S.; Wang, S. Improving hyperspectral image classification using Spectral Information Divergence. IEEE Geosci. Remote Sens. Lett. 2014, 11, 249–253. [Google Scholar] [CrossRef]
  58. Soil Survey Staff. Soil Survey Geographic (SSURGO) Database. Available online: https://sdmdataaccess.sc.egov.usda.gov (accessed on 21 October 2022).
  59. Jin, Y.; Yang, X.; Qiu, J.; Li, J.; Gao, T.; Wu, Q.; Zhao, F.; Ma, H.; Yu, H.; Xu, B. Remote Sensing-based biomass estimation and its spatio-temporal variations in temperate grassland, northern China. Remote Sens. 2014, 6, 1496–1513. [Google Scholar] [CrossRef] [Green Version]
  60. Zhou, Y.; Zhang, L.; Xiao, J.; Chen, S.; Kato, T.; Zhou, G. A Comparison of satellite-derived vegetation indices for approximating gross primary productivity of grasslands. Rangel. Ecol. Manag. 2014, 67, 9–18. [Google Scholar] [CrossRef]
  61. Ren, H.; Zhou, G.; Zhang, X. Estimation of green aboveground biomass of desert steppe in Inner Mongolia based on red-edge reflectance curve area method. Biosyst. Eng. 2011, 109, 385–395. [Google Scholar] [CrossRef]
  62. Richards, J.A. Remote Sensing Digital Image Analysis; Springer: Berlin/Heidelberg, Germany, 2022; Volume 5. [Google Scholar] [CrossRef]
  63. Mountrakis, G.; Im, J.; Ogole, C. Support vector machines in remote sensing: A review. ISPRS J. Photogramm. Remote Sens. 2011, 66, 247–259. [Google Scholar] [CrossRef]
  64. R Development Core Team. R: A Language and Environment for Statistical Computing; R Foundation for Statistical Computing: Vienna, Austria, 2022. Available online: http://www.R-project.org (accessed on 10 May 2022).
  65. Liaw, A.; Wiener, M. Classification and regression by randomForest. R News 2002, 2, 18–22. [Google Scholar]
  66. Pelletier, J.D.; Broxton, P.D.; Hazenberg, P.; Zeng, X.; Troch, P.A.; Niu, G.-Y.; Williams, Z.; Brunke, M.A.; Gochis, D. A gridded global data set of soil, intact regolith, and sedimentary deposit thicknesses for regional and global land surface modeling. J. Adv. Model. Earth Syst. 2016, 8, 41–65. [Google Scholar] [CrossRef]
  67. Rodriguez-Galiano, V.F.; Ghimire, B.; Rogan, J.; Chica-Olmo, M.; Rigol-Sanchez, J.P. An assessment of the effectiveness of a random forest classifier for land-cover classification. ISPRS J. Photogramm. Remote Sens. 2012, 67, 93–104. [Google Scholar] [CrossRef]
  68. Oldeland, J.; Dorigo, W.; Lieckfeld, L.; Lucieer, A.; Jürgens, N. Combining vegetation indices, constrained ordination and fuzzy classification for mapping semi-natural vegetation units from hyperspectral imagery. Remote Sens. Environ. 2010, 114, 1155–1166. [Google Scholar] [CrossRef]
  69. McGarigal, K.; Cushman, S.A.; Ene, E. FRAGSTATS v4: Spatial Pattern Analysis Program for Categorical Maps. Computer Software Program Produced by the Authors. 2023. Available online: https://www.fragstats.org (accessed on 5 February 2022).
  70. McGarigal, K.; Tagil, S.; Cushman, S.A. Surface metrics: An alternative to patch metrics for the quantification of landscape structure. Landsc. Ecol. 2009, 24, 433–450. [Google Scholar] [CrossRef]
  71. Perotto-Baldivieso, H.L.; Wu, X.B.; Peterson, M.J.; Smeins, F.E.; Silvy, N.J.; Schwertner, T.W. Flooding-induced landscape changes along dendritic stream networks and implications for wildlife habitat. Landsc. Urban Plan. 2011, 99, 115–122. [Google Scholar] [CrossRef]
  72. Hesselbarth, M.; Sciaini, M.; With, K.; Wiegand, K.; Nowosad, J. Landscapemetrics: An open-source R tool to calculate landscape metrics. Ecography 2019, 42, 1648–1657. [Google Scholar] [CrossRef] [Green Version]
  73. Hijmans, R.J. raster: Geographic Data Analysis and Modeling. 2021. Cranr-Proj. Available online: https://CRAN.R-project.org/package=raster (accessed on 10 May 2022).
  74. Hijmans, R.J. Terra: Spatial data analysis. 2021. Cranr-Proj. Available online: https://CRAN.R-project.org/package=terra (accessed on 10 May 2022).
  75. Oksanen, J.; Simpson, G.; Blanchet, F.; Kindt, R.; Legendre, P.; Minchin, P.; O’hara, R.; Solymos, P.; Stevens, M.; Szoecs, E.; et al. _Vegan: Community Ecology Package_. R Package Version 2.6-4, Cranr-Proj. 2022. Available online: https://CRAN.R-project.org/package=vegan (accessed on 10 May 2022).
  76. Marsaglia, G.; Tsang, W.W.; Wang, J. Evaluating Kolmogorov’s distribution. J. Stat. Softw. 2003, 8, 1. [Google Scholar] [CrossRef]
  77. Sekhon, J.S. Multivariate and propensity score matching software with automated balance optimization: The Matching package for R. J. Stat. Softw. 2011, 42, 1. [Google Scholar] [CrossRef] [Green Version]
  78. Huang, C.; Davis, L.S.; Townshend, J.R.G. An assessment of support vector machines for land cover classification. Int. J. Remote Sens. 2002, 23, 725–749. [Google Scholar] [CrossRef]
  79. Cutler, D.R.; Edwards, T.C., Jr.; Beard, K.H.; Cutler, A.; Hess, K.T.; Gibson, J.; Lawler, J.J. Random Forests for classification in ecology. Ecology 2007, 88, 2783–2792. [Google Scholar] [CrossRef] [PubMed]
  80. Poznanovic, A.; Falkowski, M.; Maclean, A.; Smith, A.; Evans, J. An accuracy assessment of tree detection algorithms in Juniper woodlands. Photogramm. Eng. Remote Sens. 2014, 80, 627–637. [Google Scholar] [CrossRef]
  81. Kopeć, D.; Sławik, Ł. How to effectively use long-term remotely sensed data to analyze the process of tree and shrub encroachment into open protected wetlands. Appl. Geogr. 2020, 125, 102345. [Google Scholar] [CrossRef]
  82. Cao, X.; Liu, Y.; Liu, Q.; Cui, X.; Chen, X.; Chen, J. Estimating the age and population structure of encroaching shrubs in arid/semiarid grasslands using high spatial resolution remote sensing imagery. Remote Sens. Environ. 2018, 216, 572–585. [Google Scholar] [CrossRef]
  83. Chang, C.-I. An information-theoretic approach to spectral variability, similarity, and discrimination for hyperspectral image analysis. IEEE Trans. Inf. Theory 2000, 46, 1927–1932. [Google Scholar] [CrossRef] [Green Version]
  84. Qin, J.; Burks, T.; Ritenour, M.; Bonn, W. Detection of citrus canker using hyperspectral reflectance imaging with spectral information divergence. J. Food Eng. 2009, 93, 183–191. [Google Scholar] [CrossRef]
  85. Lindgren, A.; Lu, Z.; Zhang, Q.; Hugelius, G. Reconstructing past global vegetation with Random Forest machine learning, sacrificing the dynamic response for robust results. J. Adv. Model. Earth Syst. 2021, 13, e2020MS002200. [Google Scholar] [CrossRef]
  86. Wang, C.; Shu, Q.; Wang, X.; Guo, B.; Liu, P.; Li, Q. A random forest classifier based on pixel comparison features for urban LiDAR data. ISPRS J. Photogramm. Remote Sens. 2018, 148, 75–86. [Google Scholar] [CrossRef]
  87. Schröer, G.; Trenkler, D. Exact and randomization distributions of Kolmogorov–Smirnov tests for two or three samples. Comput. Stat. Data Anal. 1995, 20, 185–202. [Google Scholar] [CrossRef]
  88. Zhang, T.; Su, J.; Xu, Z.; Luo, Y.; Li, J. Sentinel-2 satellite imagery for urban land cover classification by optimized Random Forest classifier. Appl. Sci. 2021, 11, 543. [Google Scholar] [CrossRef]
  89. Lawrence, R.L.; Ripple, W.J. Comparisons among vegetation indices and bandwise regression in a highly disturbed, heterogeneous landscape: Mount St. Helens, Washington. Remote Sens. Environ. 1998, 64, 91–102. [Google Scholar] [CrossRef]
  90. Ullah, S.; Si, Y.; Schlerf, M.; Skidmore, A.; Shafique, M.; Iqbal, I.A. Estimation of grassland biomass and nitrogen using MERIS data. Int. J. Appl. Earth Obs. Geoinf. 2012, 19, 196–204. [Google Scholar] [CrossRef]
  91. Heirman, A.L.; Wright, H.A. Fire in medium fuels of west Texas. J. Range Manag. 1973, 26, 331–335. [Google Scholar] [CrossRef]
  92. Bradshaw, L.S.; Deeming, J.E.; Burgan, R.E.; Cohen, J.D. The 1978 National Fire-Danger Rating System: Technical documentation. General Technical Report INT-169. U.S. Department of Agriculture, Forest Service, Intermountain Forest and Range Experiment Station: Ogden, UT, USA, 1984; 44p. [Google Scholar]
  93. Baena, S.; Moat, J.; Whaley, O.; Boyd, D.S. Identifying species from the air: UAVs and the very high resolution challenge for plant conservation. PLoS ONE 2017, 12, e0188714. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  94. Meng, T.; Jing, X.; Yan, Z.; Pedrycz, W. A survey on machine learning for data fusion. Inf. Fusion 2020, 57, 115–129. [Google Scholar] [CrossRef]
  95. Yang, C. Remote Sensing Technologies for Crop Disease and Pest Detection. In Soil and Crop Sensing for Precision Crop Production; Li, M., Yang, C., Zhang, Q., Eds.; Agriculture Automation and Control; Springer International Publishing: Cham, Switzerland, 2022; pp. 159–184. ISBN 978-3-030-70432-2. [Google Scholar]
  96. Song, X.; Yang, C.; Wu, M.; Zhao, C.; Yang, G.; Hoffmann, W.C.; Huang, W. Evaluation of Sentinel-2A satellite imagery for mapping cotton root rot. Remote Sens. 2017, 9, 906. [Google Scholar] [CrossRef] [Green Version]
  97. Li, X.; Yang, C.; Huang, W.; Tang, J.; Tian, Y.; Zhang, Q. Identification of cotton root rot by multifeature selection from Sentinel-2 images using random forest. Remote Sens. 2020, 12, 3504. [Google Scholar] [CrossRef]
  98. Yang, C.; Westbrook, J.K.; Suh, C.P.-C.; Martin, D.E.; Hoffmann, W.C.; Lan, Y.; Fritz, B.K.; Goolsby, J.A. An airborne multispectral imaging system based on two consumer-grade cameras for agricultural remote sensing. Remote Sens. 2014, 6, 5257–5278. [Google Scholar] [CrossRef] [Green Version]
  99. Du, Y.; Chang, C.-I.; Ren, H.; Chang, C.-C.; Jensen, J.O.; D’Amico, F.M. New hyperspectral discrimination measure for spectral characterization. Opt. Eng. 2004, 43, 1777–1786. [Google Scholar]
Figure 1. The burn unit within the Martin Ranch, located in Menard County, TX.
Figure 2. A consolidated workflow showing the steps involved in the coupled pre- and post-burn field surveys, remote sensing data collection, evaluation of prickly pear classification methods, classification accuracy assessment, and landscape analyses employed for mapping prickly pear cover.
Figure 3. Comparison of pre-fire and post-fire prickly pear cactus classifications from the best-performing endmember classification method (Spectral Angle Mapper—SAM) applied to the 22 cm multispectral imagery of the Martin Ranch burn unit.
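Since SAM underlies the classifications shown in Figure 3, a minimal numpy sketch of the spectral angle mapper may be useful. This is an illustrative reimplementation, not the authors' ENVI workflow; the function names, the four-band spectra, and the angle threshold are assumptions.

```python
import numpy as np

def spectral_angle(pixels, endmember):
    """Spectral angle (radians) between each pixel spectrum and an endmember.

    pixels: (N, B) array of N pixel spectra with B bands.
    endmember: (B,) reference spectrum.
    """
    dot = pixels @ endmember
    norms = np.linalg.norm(pixels, axis=1) * np.linalg.norm(endmember)
    # Clip to [-1, 1] to guard against floating-point overshoot before arccos.
    return np.arccos(np.clip(dot / norms, -1.0, 1.0))

def sam_classify(pixels, endmembers, max_angle=0.1):
    """Assign each pixel to the endmember with the smallest spectral angle.

    endmembers: (K, B) array. Returns class index 0..K-1, or -1 when no
    endmember falls within max_angle (radians) of the pixel spectrum.
    """
    angles = np.stack([spectral_angle(pixels, e) for e in endmembers], axis=1)
    best = np.argmin(angles, axis=1)
    best[angles[np.arange(len(best)), best] > max_angle] = -1
    return best
```

Because the angle depends only on spectral shape, not magnitude, a pixel that is a brighter multiple of an endmember still maps to it at angle zero, which is why SAM is relatively robust to illumination differences across a scene.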
Figure 4. Frequency distributions of patch size (left column; (A,C,E)) and scatterplots with regression trends of shape index against patch size (right column; (B,D,F)) for prickly pear within the burned areas: (A,B) the entire burn unit (top row); (C,D) the shallow Tarrant (Ta) soil (middle row); and (E,F) the deeper Valeria (VaB) soil (bottom row).
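The patch size and shape index plotted in Figure 4 are standard landscape metrics that can be derived from a classified binary map. The study used the landscapemetrics R package [72]; the sketch below is only an illustrative numpy/scipy reimplementation of a FRAGSTATS-style shape index under a 4-neighbour patch rule, with perimeter counted as exposed cell edges.

```python
import numpy as np
from scipy import ndimage

def patch_metrics(binary):
    """Patch area (cells) and FRAGSTATS-style shape index for a binary map.

    Shape index = 0.25 * perimeter / sqrt(area): equals 1.0 for a square
    patch and grows as the patch becomes more elongated or irregular.
    Patches are delineated with 4-connectivity (scipy.ndimage default).
    """
    labels, n = ndimage.label(binary)
    metrics = []
    for i in range(1, n + 1):
        patch = labels == i
        area = patch.sum()
        # Perimeter: count cell edges exposed in the 4 cardinal directions.
        # Padding keeps np.roll's wrap-around from touching patch cells.
        padded = np.pad(patch, 1)
        perim = sum(
            (padded & ~np.roll(padded, shift, axis)).sum()
            for shift, axis in [(1, 0), (-1, 0), (1, 1), (-1, 1)]
        )
        metrics.append((int(area), 0.25 * perim / np.sqrt(area)))
    return metrics
```

For example, a 2 x 2 square patch has area 4, perimeter 8, and shape index 1.0, while a 1 x 4 line of the same area has perimeter 10 and shape index 1.25, illustrating how the metric separates compact from elongated patches of equal size.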
Table 1. Multispectral data specifications.
Band Name | Band No. | Spectral Range (nm) | Spatial Resolution (m) | Radiometric Resolution (bits)
Red       | 1        | 604.7–698.0         | 0.21                   | 8
Green     | 2        | 501.1–599.6         | 0.21                   | 8
Blue      | 3        | 418.2–495.9         | 0.21                   | 8
NIR       | 4        | 703.2–900.2         | 0.21                   | 8
Table 2. Vegetation indices selected for discriminating prickly pear cactus biophysical characteristics.
Abbrev. | Formula | Index Name | Reference
ExGI    | 2·Green − (Blue + Red) | Excess Green (I) | [30]
GCI     | (NIR / Green) − 1 | Green Chlorophyll (I) | [29]
NDVI    | (NIR − Red) / (NIR + Red) | Normalized Difference (VI) | [31]
EVI     | 2.5 (NIR − Red) / (NIR + 6·Red − 7.5·Blue + 1) | Enhanced (VI) | [32]
GNDVI   | (NIR − Green) / (NIR + Green) | Green-Normalized Difference (VI) | [35]
kNDVI   | tanh(((NIR − Red) / (2σ))²), σ = 0.5 (NIR + Red) | Kernel-Normalized Difference (VI) | [36]
CTVI    | ((NDVI + 0.5) / |NDVI + 0.5|) · √|NDVI + 0.5| | Corrected Transformed (VI) | [37]
MTVI2   | 1.5 (1.2 (NIR − Green) − 2.5 (Red − Green)) / √((2·NIR + 1)² − (6·NIR − 5·√Red) − 0.5) | Modified-Transformed (VI) 2 | [40]
NDWI    | (Green − NIR) / (Green + NIR) | Normalized Difference Water (I) | [43]
SAVI    | 1.5 (NIR − Red) / (NIR + Red + 0.5) | Soil-Adjusted (VI) | [45]
OSAVI   | (1 + 0.16) (NIR − Red) / (NIR + Red + 0.16) | Optimized Soil-Adjusted (VI) | [47]
MSAVI   | (NIR + 0.5) − 0.5·√((2·NIR + 1)² − 8 (NIR − 2·Red)) | Modified Soil-Adjusted (VI) | [48]
MSAVI2  | (2·NIR + 1 − √((2·NIR + 1)² − 8 (NIR − Red))) / 2 | Modified Soil-Adjusted (VI) 2 | [48]
SIPI    | (NIR − Blue) / (NIR − Red) | Structure-Insensitive Pigment (I) | [50]
Vegetation Index (VI) and Index (I).
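A subset of the Table 2 indices can be computed directly from the four reflectance bands in Table 1. The sketch below is a minimal numpy illustration, not the authors' processing chain; the function name and the small `eps` divide-by-zero guard are assumptions added here.

```python
import numpy as np

def vegetation_indices(blue, green, red, nir, eps=1e-9):
    """A few of the Table 2 indices from 4-band reflectance arrays.

    Band names follow Table 1; eps guards against division by zero on
    dark pixels. Returns a dict mapping index abbreviation to array.
    """
    sigma = 0.5 * (nir + red)  # kernel length scale used by kNDVI
    return {
        "ExGI": 2 * green - (blue + red),
        "GCI": nir / (green + eps) - 1,
        "NDVI": (nir - red) / (nir + red + eps),
        "GNDVI": (nir - green) / (nir + green + eps),
        "kNDVI": np.tanh(((nir - red) / (2 * sigma + eps)) ** 2),
        "NDWI": (green - nir) / (green + nir + eps),
        "SAVI": 1.5 * (nir - red) / (nir + red + 0.5),
        "OSAVI": 1.16 * (nir - red) / (nir + red + 0.16),
    }
```

For a typical green-vegetation pixel (high NIR, low red reflectance) NDVI and SAVI come out strongly positive while NDWI is negative, which matches the complementary roles of these indices in separating vegetation from soil and water.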
Table 3. Summary of error matrix accuracies for the feature extraction (object-based), pixel-based machine learning, and spectral endmember (n-D) classification methods for classifying prickly pear using multispectral imagery at the Martin Ranch (MR) site. Acronyms: overall accuracy—OA; kappa coefficient—KC; user’s accuracy—UA; producer’s accuracy—PA.
Method          | Classifier                             | Timing    | OA (%) | KC (%) | UA (%) | PA (%)
Object-based    | Support Vector Machine (SVM)           | Pre-fire  | 48.9   | 23.0   | 91.3   | 49.0
Object-based    | Support Vector Machine (SVM)           | Post-fire | 59.3   | 49.9   | 83.8   | 60.1
Object-based    | K Nearest Neighbor (KNN)               | Pre-fire  | 43.5   | 15.6   | 93.0   | 32.1
Object-based    | K Nearest Neighbor (KNN)               | Post-fire | 67.1   | 63.4   | 78.7   | 70.9
Pixel-based     | Random Forest (RF)                     | Pre-fire  | 91.2   | 82.4   | 88.0   | 94.0
Pixel-based     | Random Forest (RF)                     | Post-fire | 85.8   | 71.6   | 80.9   | 89.6
Endmember-based | Spectral Angle Mapper (SAM)            | Pre-fire  | 96.6   | 93.2   | 95.4   | 97.7
Endmember-based | Spectral Angle Mapper (SAM)            | Post-fire | 91.2   | 86.9   | 86.4   | 91.6
Endmember-based | Spectral Information Divergence (SID)  | Pre-fire  | 94.7   | 89.4   | 92.8   | 96.5
Endmember-based | Spectral Information Divergence (SID)  | Post-fire | 84.6   | 81.8   | 88.93  | 82.5
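The OA, KC, UA, and PA values in Table 3 all derive from a per-class error (confusion) matrix. A minimal numpy sketch is shown below; the rows-as-predicted, columns-as-reference orientation is an assumption, since the paper's error matrices are not reproduced here.

```python
import numpy as np

def accuracy_metrics(cm):
    """Overall accuracy, Cohen's kappa, and per-class UA/PA.

    cm[i, j] = count of samples of reference class j mapped as class i
    (rows = map/predicted classes, columns = reference classes).
    """
    cm = np.asarray(cm, dtype=float)
    total = cm.sum()
    oa = np.trace(cm) / total
    # Chance agreement from row/column marginals, as in Cohen's kappa.
    pe = (cm.sum(axis=0) * cm.sum(axis=1)).sum() / total**2
    kappa = (oa - pe) / (1 - pe)
    ua = np.diag(cm) / cm.sum(axis=1)  # user's accuracy (row totals)
    pa = np.diag(cm) / cm.sum(axis=0)  # producer's accuracy (column totals)
    return oa, kappa, ua, pa
```

Because kappa discounts chance agreement, it falls faster than OA when class marginals are imbalanced, which is consistent with the larger spread between KC and OA for the weaker classifiers in Table 3.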

Share and Cite

Jaime, X.A.; Angerer, J.P.; Yang, C.; Walker, J.; Mata, J.; Tolleson, D.R.; Wu, X.B. Exploring Effective Detection and Spatial Pattern of Prickly Pear Cactus (Opuntia Genus) from Airborne Imagery before and after Prescribed Fires in the Edwards Plateau. Remote Sens. 2023, 15, 4033. https://doi.org/10.3390/rs15164033


