Article

Evaluating Burn Severity and Post-Fire Woody Vegetation Regrowth in the Kalahari Using UAV Imagery and Random Forest Algorithms

1  Department of Geography, University of California Los Angeles, 1255 Bunche Hall, P.O. Box 951524, Los Angeles, CA 90095, USA
2  Department of Geography and the Environment, University of Texas at Austin, 305 E. 23rd Street - A3100 - RLP 3.306, Austin, TX 78712, USA
*  Author to whom correspondence should be addressed.
Remote Sens. 2024, 16(21), 3943; https://doi.org/10.3390/rs16213943
Submission received: 7 August 2024 / Revised: 18 October 2024 / Accepted: 21 October 2024 / Published: 23 October 2024

Abstract

Accurate burn severity mapping is essential for understanding the impacts of wildfires on vegetation dynamics in arid savannas. The frequent wildfires in these biomes often cause topkill, where the vegetation experiences above-ground combustion but the below-ground root structures survive, allowing for subsequent regrowth post-burn. Investigating post-fire regrowth is crucial for maintaining ecological balance, elucidating fire regimes, and enhancing the knowledge base of land managers regarding vegetation response. This study examined the relationship between bush burn severity and woody vegetation post-burn coppicing/regeneration events in the Kalahari Desert of Botswana. Utilizing UAV-derived RGB imagery combined with a Random Forest (RF) classification algorithm, we aimed to enhance the precision of burn severity mapping at a fine spatial resolution. Our research focused on a 1 km2 plot within the Modisa Wildlife Reserve, extensively burnt by the Kgalagadi Transfrontier Fire of 2021. The UAV imagery, captured at various intervals post-burn, provided detailed orthomosaics and canopy height models, facilitating precise land cover classification and burn severity assessment. The RF model achieved an overall accuracy of 79.71% and effectively identified key burn severity indicators, including green vegetation, charred grass, and ash deposits. Our analysis revealed a >50% probability of woody vegetation regrowth in high-severity burn areas six months post-burn, highlighting the resilience of these ecosystems. This study demonstrates the efficacy of low-cost UAV photogrammetry for fine-scale burn severity assessment and provides valuable insights into post-fire vegetation recovery, thereby aiding land management and conservation efforts in savannas.

1. Introduction

Africa is often referred to as the “Fire Continent”, as African savanna fires account for over 60% of the area burned globally each year [1,2]. Fire is an integral part of African savanna ecosystems and includes ignitions from both natural (e.g., lightning) and anthropogenic sources [3]. The effect of fire on savannas depends upon the type and intensity of the fire, and the season and frequency of burning (i.e., fire regime) [4]. Compared to other ecoregions, savanna fires typically have lower intensity yet occur more frequently [1,3]; this can be attributed to the rapid regrowth rates of grasses, which serve as the primary fuel for these fires, leading to short fire return intervals [4,5] and low-intensity surface fires [3,6,7,8]. Fire intensity and frequency may alter the long-term woody/grass cover ratio, woody height profile, and surface albedo [3,6,7]. Surface fires predominantly consume the grass layer, leaving the woody canopy largely unscathed as the flames seldom extend to the canopy [1,9]. Higher-intensity fires predominantly impact woody vegetation in savannas by damaging the internal stem structures of vegetation rather than damaging the canopy [10,11]. The result is topkill (death of aerial biomass), which causes coppicing (or regeneration/resprouting) from the collar region of the stem [12]. Fire damage in savannas is seldom enough to cause whole-plant mortality [8,11]. Woody vegetation in African savannas, including in the Kalahari, therefore exhibits a high level of resilience to individual fires due to its low mortality rates and rapid resprouting post-burn [12,13].
It is difficult to discern total tree mortality versus survival post-burn because the surviving below-ground biomass is not easily detectable with standard surveying techniques. Woody vegetation that has lost significant amounts of total above-ground biomass is often fully intact below the ground surface [10,12]. Land managers frequently observe a phenomenon referred to as ‘ghost logs’, where trees have fully combusted, leaving only a white ash outline of the tree. These ghost logs have been seen to demonstrate significant rates of resprouting from surviving root structures post-burn [14]. Coppicing of woody vegetation therefore poses a challenge for quantifying the post-burn vegetative response.
In the Kalahari, understanding burn severity (i.e., the extent to which fire causes mortality in aboveground vegetation and alters soil properties and below-ground processes [15,16,17]) is particularly important due to its direct impact on the survival rates of woody vegetation [10,13,18]. The region has experienced significant shrub encroachment, characterized by the expansion of shrubs at the expense of grasses, which represents an ecological regime shift [19,20] that threatens the livelihoods of local communities that depend on grazing [5,21]. Shrub encroachment is driven by factors such as overgrazing, fire suppression, changes in precipitation, and increased CO2 [5,13]. Natural fire regimes mitigate woody encroachment by preventing shrub establishment, with prescribed burning aiming to maintain open savannas [1,13,20,22]. Understanding post-fire vegetation response and recovery patterns is essential for informed management decisions, particularly in assessing the efficacy of fire in combating shrub encroachment.
Many studies have focused on the effects of low-intensity prescribed fires [9,13,20,21]. Here, we investigate the impacts of higher-intensity wildfires on post-burn woody vegetation responses [9,13,23] using unmanned aerial vehicle (UAV)-derived imagery to characterize burn severity. Previous studies have classified burn severity utilizing spectral parameters derived from multi- and hyperspectral satellite and aerial imagery [24,25,26]. A common method involves computing the difference between the pre-fire and post-fire normalized burn ratio (NBR), which employs the near-infrared (NIR) and shortwave infrared (SWIR) bands [25,27]. In the Kalahari, the immediate post-burn response in high-severity fires is predominantly characterized by white ash deposits, which increase surface albedo [6,7,9]. This challenges the effectiveness of NBR in this region, as highlighted by Roy et al. [28], who critiqued its sensitivity under these conditions. Smith et al. [29] detailed difficulties in using NBR indices to accurately classify burn severity in Northern Botswana, particularly the difficulty of capturing the fine scale of ash deposits and the unique spectral signatures of white ash within the NBR model. We address these challenges by investigating alternative methods to enhance burn severity classification in regions characterized by white ash deposits.
Driven by the need for finer spatial resolution in burn severity classification, UAVs present a cost-effective, on-demand remote sensing solution capable of detecting fine-scale white ash deposits. Globally, studies have successfully utilized UAVs to classify burn severity with high accuracy, showcasing their effectiveness across diverse environments [24,26,30,31]. However, within the Kalahari and Southern Africa more broadly, UAVs have yet to be employed for this purpose; previous research has primarily relied on satellite imagery for classification [29,32].
This research aims to determine the relationship between bush burn severity and post-burn coppicing/regeneration events in the Kalahari Desert in Southern Botswana, providing insight for researchers and land managers into the likelihood of post-burn regeneration by establishing a framework for estimating mortality rates and future succession patterns in the landscape [33]. Quantification of post-burn woody vegetation regeneration rates is crucial for furthering the understanding of the effect of fire on vegetation composition and community shifts in the Kalahari [33]. Traditional satellite imagery’s limited resolution and temporal constraints often fail to capture the fine-scale effects of fire events and their immediate aftermath [27,30]. To address these challenges, we utilized UAV-derived RGB imagery, a novel approach in this region, combined with machine learning techniques to achieve more precise and timely burn severity classifications.

2. Materials and Methods

2.1. Study Area

This study was conducted at the Modisa Wildlife Reserve in the Kalahari Desert in the Kgalagadi District of Southern Botswana, an arid savanna ecosystem [5,7,10]. Rainfall is highly seasonal, with an annual mean of less than 300 mm, and falls almost exclusively between October and April [5,34]. These conditions favor the growth of savanna open grassland vegetation, which comprises a mix of perennial and annual grasses, with occasional shrubs and few trees [34]. The soil composition is predominantly Kalahari sand [34,35]. In this region, land is principally used for wildlife conservation/tourism and pastoral ranching, with roughly a third of the district’s total land area incorporated into the Kgalagadi Transfrontier Park [32,35]. Fire is common and tends to occur late in the dry season when grasses have low moisture content and ignite readily [9,21].
Modisa Wildlife Reserve (Figure 1) consists of a 7000-hectare private wildlife area in the Kalahari Desert [36]. Modisa serves as an ideal study site given its effective wildlife management and controlled grazing, making it a model of a healthy savanna ecosystem. The property has no history of cattle grazing and has only experienced native ungulate grazing and browsing. During the 2021 dry season, the Kgalagadi Transfrontier Fire (KTF) burnt a large portion of the reserve. The fire (see Figure 1) was ignited by anthropogenic sources in August 2021 and lasted through late September of 2021, burning over 4 million hectares of land [32]. This fire presents an opportunity to scrutinize the ecological impacts of high-intensity wildfires. Unlike prescribed fires, which are controlled and less intense, the unanticipated nature and scale of this wildfire afforded a rare perspective on the dynamics of fire behavior and its consequent effects on the ecosystem. Furthermore, higher burn intensity levels were expected within our study site due to high rainfall years from 2019 to 2021, which had yielded an increase in fuel load and fuel connectivity across the landscape. A recent study by Kaduyu [32] used the Landsat Fire Mapping Tool to estimate total burn severity for the fire extent, reporting overall low severity estimates for the fire. However, Kaduyu primarily focused on large-scale severity classifications rather than the fine-scale methodologies used in previous studies, which better fit the resolution needs of post-burn mapping in the Kalahari [32].
This research concentrates on a 1 km2 plot of land (center point: 26.1817348°S, 21.8504333°E) within the Modisa Wildlife Reserve, entirely burnt by the Kgalagadi Transfrontier Fire (Figure 2). The plot was randomly selected after the fire and consists of a mixture of shrubs and trees, grasses, and sparse herbaceous cover displaying a range of burn severities (Figure 2).

2.2. Approach

The approach for classifying burn severity involved four main components outlined in the flow diagram in Figure 3: (1) UAV image collection and processing; (2) input supervised classification, texture, canopy height model, and RGB spectral variable calculations; (3) Random Forest modeling and accuracy assessment; (4) final burn severity mapping.

2.3. UAV Image Collection and Image Processing

UAV imagery has demonstrated the ability to capture spatial variability in heterogeneous burned areas compared to high-resolution satellite imagery [24,27,30]. Unlike satellite imagery, UAV image acquisition is not subject to timing constraints imposed by orbital dynamics and can thus provide images at any time. It also offers the potential for high spatial resolution sampling, which is critical for assessing spatial variations in burn severity [1,26]. This is particularly important in the Kalahari, where the temporal detection window for white ash is narrow.
We acquired post-fire imagery across the 1 km2 study area in Modisa in both dry and wet seasons to facilitate a comparative analysis of woody vegetation regrowth (September 2021–December 2023) (Table 1). Examining a sample of woody vegetation captured in the initial imagery 12 h post-burn and tracking its development through the subsequent growing seasons enabled a detailed assessment of vegetative recovery. The images collected 12 h post-burn served as the primary dataset for the burn severity classification model. The imagery produced subsequently was utilized in the analysis of vegetation regrowth, allowing for a comprehensive assessment of recovery patterns over time.
Images were captured using a DJI Mavic 2 Dual Enterprise (Shenzhen, Guangdong, China) with a 48 MP RGB camera. DroneDeploy software (v 2.422.0; DroneDeploy Inc., San Francisco, CA, USA) was used to plan autonomous flights at 75 m above ground level in a grid pattern to capture nadir imagery with ~5 cm resolution, 85% forward overlap, and 70% side overlap. Flights over the entire study area took roughly 2.5 h under ideal conditions. Permanently placed steel ground control points (GCPs) were established within the study area for accurate georeferencing of drone imagery and were surveyed using a Global Navigation Satellite System (GNSS) receiver with a 1-h point averaging technique. The drone images were processed into dense 3D point clouds, digital surface models (DSMs), digital elevation models (DEMs), and color orthomosaics, all at 5 cm resolution using DroneDeploy’s structure from motion (SfM) algorithms [25]. Orthomosaics were accurately georeferenced by leveraging the GPS data tagged by the drone’s onboard GPS system during image capture and georeferencing the images to the GCPs. After generating the DEM and the DSM from the point cloud data using DroneDeploy, a canopy height model (CHM) was created by subtracting the DEM (representing the ground elevation) from the DSM (indicating the elevation of the surface, including vegetation). The CHM was instrumental in assessing the vertical structure of the vegetation, allowing the distribution of vegetation height post-burn to be quantified. To ensure precise alignment of drone images taken at different times, a co-registration process was performed using manually identified ground control points in ArcPro (v3.1.3, Redlands, CA, USA), aligning all images to a common coordinate system.
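To make the CHM step concrete, the sketch below shows the DSM-minus-DEM calculation on the exported rasters. It assumes the DroneDeploy products are co-registered 5 cm GeoTIFFs and uses hypothetical file names; it is an illustration rather than the exact processing chain used in this study.

```python
# Minimal CHM sketch (CHM = DSM - DEM), assuming co-registered 5 cm GeoTIFF
# exports; file names are hypothetical.
import rasterio

with rasterio.open("dsm_5cm.tif") as dsm_src, rasterio.open("dem_5cm.tif") as dem_src:
    dsm = dsm_src.read(1).astype("float32")   # surface elevation (includes vegetation)
    dem = dem_src.read(1).astype("float32")   # bare-ground elevation
    profile = dsm_src.profile

chm = dsm - dem          # vegetation height above ground
chm[chm < 0] = 0         # clamp small negative SfM artifacts

profile.update(dtype="float32", count=1)
with rasterio.open("chm_5cm.tif", "w", **profile) as dst:
    dst.write(chm, 1)
```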

2.4. Burn Severity Classification

Burn severity classifications often assume that surface albedo decreases following a fire [6,7,29], and previous research using coarse imagery has centered on finding the minimum albedo after a fire [7]. However, this assumption may not hold when two types of ash can be observed: white ‘mineral’ ash from complete combustion and darker ‘black’ ash or char with unburned fuel components [14,29,37].
Given the fine resolution of UAV imagery and evidence of the limited success of spectral-based methodologies such as NBR in the Kalahari [28,29], biophysical rather than spectral parameters were used as the primary indicators of burn severity during the manual classification dataset creation. The high-resolution UAV-derived orthomosaics allow for the effects of fires to be classified by the human eye. Thus, we manually classified land cover into categories aligned with specific burn severity classifications to produce a training dataset that could be used to train a machine learning model. This method, diverging from methods used in previous studies, which predominantly involve spectral indices when classifying burn severity from drone imagery, takes advantage of drone imagery’s finer resolution to evaluate the effectiveness of biophysical indicators [24,25,26,30].
We used six land cover classes (including a null class for shadow) with corresponding burn severity rankings (Table 2), including four indicators of burn severity (green vegetation, burnt woody vegetation, charred grass, and ash deposits) that could be easily distinguished within the RGB imagery (Figure 4) [14,25]. Initial model runs were executed using additional burn severity classes, including gray ash, white ash, burnt woody vegetation (minimal charring on trunk), and charred woody vegetation (high levels of charring throughout trunk and branches). These classes were combined into ‘Ash’ and ‘Burnt Woody Vegetation’ for final model runs, given that their similar characteristics resulted in low class separability and low classification accuracy in the initial model outputs. Burnt woody vegetation was defined as standing vegetation that experienced partial scorching on the trunk and canopy, leading to an overall loss of green vegetation, but received no structural damage. Ash was defined as white ash, which represented fully combusted vegetation with minimal residual organic material, and gray ash, which comprised a mixture of fully combusted vegetation and partially combusted plant material. Charred grass was defined as grass that had been burned, resulting in a blackened appearance due to partial combustion, while retaining some structural integrity, unlike ash. Green vegetation was classified as standing, green woody vegetation that received little to no fire damage. Bare soil was classified as uncharred parts of the bare ground.
The 12 h post-burn UAV images were acquired early in the morning (approximately 9 a.m.) in order to minimize the loss of ash deposits from the imagery. This led to a considerable number of shadows in the images, necessitating a tailored approach for accurate analysis. An initial attempt to mask shadows by thresholding the orthomosaics’ red band, as documented by Fraser [25], proved overly broad, inadvertently masking charred grass and burnt woody vegetation due to their similar spectral signatures. This resulted in significant data loss within the model. To mitigate this, shadows were classified as a distinct category in the classification process, aiming to isolate their effects and ensure a more accurate representation of the post-burn landscape. All subsequent drone imagery was acquired at high noon, making mitigation of shadows in the images unnecessary.
Focusing on woody vegetation, we applied a ternary assessment model to evaluate burn severity, categorizing it as either unburnt, low severity, or high severity. This model aligns with the findings and recommendations of Edwards and Russell-Smith [38] and McKenna [26], who argue for the applicability of a simplified binary or ternary classification in savanna ecosystems. Such environments are characterized by sparsely populated canopy covers, making a ternary classification both relevant and efficient [38]. While we also acknowledged the presence and significance of charred woody grass by assigning it a medium severity classification, woody vegetation was the core focus of our research.

2.5. Supervised Classification—Training Dataset

A manually digitized training dataset (Figure 5), corresponding to our land cover classes (Table 2), was created using Environmental Systems Research Institute’s (ESRI) ArcPro (v3.1.3, Redlands, CA, USA). The training dataset for land cover classifications was developed using a manual visual analysis of UAV-derived orthomosaics. Four randomly located and non-overlapping 100 m × 100 m areas were extracted from the 12-h post-burn RGB orthomosaics and were digitized by a single observer (M. Gillespie).

2.6. Random Forest Classification and Input Variables

This study implemented the Machine Learning (ML) Random Forest (RF) algorithm for burn severity classification. ML techniques have proven to be superior to simple classifiers, particularly in navigating the complexities of scene scale and interaction, and in distinguishing classes within heterogeneous landscapes [39,40]. These landscapes, typical in remote sensing, often feature low separability between different classes and high variability within the same class [27,40]. RF, in particular, has the following advantages in burn severity mapping: it handles categorical predictors naturally, it is computationally simple to fit, it can consider multiple environmental variables simultaneously, and it can handle outliers and noise with relative ease [40,41,42]. Despite the commonly cited advantages of ML classifiers like RF, they have rarely been utilized for burn severity classification within the literature [27,43].
RF is known for its stable and robust accuracy in classifying land cover due to its proficiency in managing large, high-dimensional datasets by constructing decision trees through bootstrap aggregated sampling (bagging) of training data [40,44,45]. It enhances prediction accuracy via ensemble voting, where each tree evaluates a random subset of predictor variables at each split to maximize data homogeneity [27,28,41]. We utilized the Scikit-Learn package in Python (version 3.1.0). Based on the recommendations of previous studies and hyperparameter tuning in initial model runs, the number of decision trees in the ensemble (the ‘ntree’ parameter) and the number of features considered for splitting at each node (the ‘mtry’ parameter) were set to 100 and ‘default’, respectively [27,42,45,46]. Pretests on the dataset indicated that these values yielded the best classification accuracy while maintaining efficient processing times. Additionally, a bootstrap parameter was used to increase variability and diversity during the construction of individual trees in the model, helping prevent overfitting and improving its robustness and generalization to different land cover classifications [40,46]. We used a 70/30 training/testing split, where 70% of the data points were selected for training while the remaining 30% were reserved for testing. Feature importance in the Random Forest model was quantified using the Scikit-Learn library based on the average decrease in Gini impurity attributed to each feature across all decision trees in the ensemble.
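A minimal sketch of this configuration in scikit-learn is shown below. The predictor table, labels, and feature names are synthetic placeholders standing in for the pixel samples drawn from the digitized training polygons; the hyperparameters follow the text (100 trees, default ‘mtry’, bootstrapping, 70/30 split, Gini-based importances).

```python
# Sketch of the RF setup described above; data are synthetic placeholders.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, f1_score

feature_names = ["red", "green", "blue", "EGI", "GCC", "CI", "CHM",
                 "contrast", "energy", "homogeneity", "correlation"]
X = np.random.rand(10000, len(feature_names))   # placeholder pixel predictors
y = np.random.randint(0, 6, 10000)              # placeholder labels (six classes)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.30, random_state=42, stratify=y)   # 70/30 split

rf = RandomForestClassifier(
    n_estimators=100,      # 'ntree' = 100
    max_features="sqrt",   # scikit-learn default for classification ('mtry')
    bootstrap=True,        # bagging of training samples
    n_jobs=-1,
    random_state=42)
rf.fit(X_train, y_train)

y_pred = rf.predict(X_test)
print("Overall accuracy:", accuracy_score(y_test, y_pred))
print("Macro F1:", f1_score(y_test, y_pred, average="macro"))
print("Gini importances:", dict(zip(feature_names, rf.feature_importances_)))
```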
Predictor data included an array of spectral indices, elevation data, texture features, and individual RGB band information. Training variables were strategically selected for their proven efficacy in capturing the nuanced dynamics of land cover, with a particular emphasis on differentiating burn severity (Table 3). The EGI and GCC indices were chosen for their ability to discriminate green vegetation from all other land cover types. The EGI leverages the RGB segments of the electromagnetic spectrum (EMS) to enhance the contrast between the peak of green reflectance and the absorption troughs of chlorophyll in the red and blue wavelengths, whereas the GCC focuses on the proportion of green light reflectance relative to the combined reflectance in the red and blue wavelengths, aiming to quantify vegetation greenness [26]. Both indices have been documented in previous burn severity classification studies to be effective at delineating green vegetation from burned areas [25,26]. Introduced by Fraser [25], the Char Index (CI) was specifically developed to detect charred organic surfaces from RGB imagery. This composite index employs both the Brightness Index (BI) and the Maximum RGB Difference Index (MaxDiff) for the purpose of distinguishing charred areas within an image. The foundation of the CI is its recognition that surfaces impacted by charring exhibit very low reflectance in the visible spectrum, as measured by the BI, along with a uniform visible reflectance spectrum, leading to a notable absence of color, which is assessed using MaxDiff [25]. Subsequent studies, including those by Beltrán-Marcos [24] and Von Nonn [31], have successfully used the CI to classify burn severity in RGB–UAS-derived imagery, demonstrating high levels of accuracy.
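The RGB indices can be computed directly from the orthomosaic bands. The sketch below uses the standard EGI and GCC formulations; the Brightness Index and MaxDiff terms are shown as components of the Char Index, but the final CI combination here is only a placeholder, as the exact formulation follows Fraser [25]. The file name is hypothetical.

```python
# Sketch of the RGB-based predictor indices; the CI combination is a placeholder.
import numpy as np
import rasterio

with rasterio.open("orthomosaic_5cm.tif") as src:   # hypothetical file name
    r, g, b = (src.read(i).astype("float32") for i in (1, 2, 3))

egi = 2.0 * g - r - b                        # Excess Green Index
gcc = g / np.clip(r + g + b, 1e-6, None)     # Green Chromatic Coordinate
bi = (r + g + b) / 3.0                       # Brightness Index
maxdiff = (np.max(np.stack([r, g, b]), axis=0)
           - np.min(np.stack([r, g, b]), axis=0))   # Maximum RGB Difference

# Placeholder char score: dark (low BI), colorless (low MaxDiff) pixels score high.
ci = (1.0 - bi / bi.max()) * (1.0 - maxdiff / maxdiff.max())
```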
Numerous studies and practical applications have documented the effectiveness of combining spectral and textural information for land cover classification [47,48]. Several textural features derived from a Gray-Level Co-occurrence Matrix (GLCM) were calculated and included in the model: Contrast, Energy, Homogeneity, and Correlation (Table 3). GLCM features capture textural information by examining the spatial relationships between pixel intensities in an image, which can provide valuable information about surface characteristics [49,50]. Combining GLCM textural features with spectral indices served to enhance the RF’s effectiveness by adding spatial details to the spectral information. Spectral indices capture the condition of the surface, such as vegetation health and burn extent, while GLCM texture features provide a detailed understanding of the area’s spatial patterns [49,50]. The graycomatrix function in Python’s scikit-image library was used to compute the textural features. We used a GLCM with a five-pixel distance, calculated at a 0-degree angle, within a 25 × 25 pixel neighborhood window. The GLCM was computed with 256 gray levels to capture fine-scale textures in the high-resolution imagery. These parameters were well-suited to our fine spatial resolution, effectively capturing co-occurrence patterns and detailed textural information. A CHM was constructed from 3D point clouds in the drone imagery processing steps of this study. This integration of CHM data into the model allowed for a more nuanced differentiation between burned and unburned trees, leveraging variations in canopy height to discern standing trees with greater precision.
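A minimal sketch of the GLCM computation with scikit-image, using the parameters reported above (five-pixel offset, 0-degree angle, 256 gray levels, 25 × 25 pixel windows), is given below. The naive window loop and the synthetic grayscale image are illustrative placeholders rather than the production implementation.

```python
# GLCM texture sketch with scikit-image; the gray image and window loop are placeholders.
import numpy as np
from skimage.feature import graycomatrix, graycoprops

def glcm_features(window_u8):
    """Contrast, energy, homogeneity, and correlation for one 25 x 25 window."""
    glcm = graycomatrix(window_u8, distances=[5], angles=[0],
                        levels=256, symmetric=True, normed=True)
    return [graycoprops(glcm, prop)[0, 0]
            for prop in ("contrast", "energy", "homogeneity", "correlation")]

gray = (np.random.rand(500, 500) * 255).astype(np.uint8)   # placeholder grayscale image
win = 25
rows, cols = gray.shape[0] // win, gray.shape[1] // win
texture = np.zeros((rows, cols, 4), dtype="float32")
for i in range(rows):
    for j in range(cols):
        texture[i, j] = glcm_features(gray[i * win:(i + 1) * win,
                                           j * win:(j + 1) * win])
```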
To quantify the RF model’s uncertainty, a Monte Carlo simulation was conducted to generate different model severity outputs [51,52]. This approach aligns with established methodologies. Li [52] used Monte Carlo simulations to enhance the robustness of their RF classification for above-ground biomass estimations in forests, and Coulston [51] and Wang [53] highlighted the value of Monte Carlo simulations in quantifying uncertainty and improving the accuracy and reliability of predictive models and environmental data interpretations in remote sensing applications. This approach involved introducing a controlled level of variability into the training data to simulate potential inaccuracies and real-world data variations. Specifically, a custom Python function was developed to randomly alter 5% of the classified burn severity data in each iteration, changing them to represent another possible burn severity class. By creating 1000 different training datasets with these slight variations and training a separate RF model for each dataset, we were able to generate 1000 different burn severity classification outputs. This process allowed us to assess the model’s sensitivity to changes in the training dataset and to estimate the variability in its predictions.
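The label-perturbation loop can be sketched as below, assuming the training arrays from the RF sketch above and a hypothetical full-image pixel stack (X_map) to classify; in each of the 1000 iterations, 5% of the labels are reassigned to a different class before retraining.

```python
# Monte Carlo sketch: perturb 5% of training labels per run, retrain, and store
# the resulting classified output. X_train and y_train are assumed from the RF
# sketch; X_map is a hypothetical stand-in for the full-image predictor stack.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

X_map = X_test                      # placeholder for the full-image pixel stack
rng = np.random.default_rng(0)
classes = np.unique(y_train)
outputs = []

for run in range(1000):
    y_perturbed = y_train.copy()
    flip = rng.choice(len(y_perturbed), size=int(0.05 * len(y_perturbed)),
                      replace=False)                       # 5% of labels altered
    for idx in flip:
        alternatives = classes[classes != y_perturbed[idx]]
        y_perturbed[idx] = rng.choice(alternatives)        # swap to another class
    rf_run = RandomForestClassifier(n_estimators=100, n_jobs=-1, random_state=run)
    rf_run.fit(X_train, y_perturbed)
    outputs.append(rf_run.predict(X_map))                  # one classified output per run
```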
The initial accuracy assessment was conducted using the unaltered training dataset to establish the baseline performance of the RF model. This baseline provided a reference point for evaluating the impact of introducing variability in the training data. The accuracy metrics obtained from this initial assessment were used for model accuracy analysis and the creation of the final severity map.

2.7. Woody Vegetation Survival/Regrowth Analysis

To assess the survival and regrowth of woody vegetation post-burn, drone imagery from two subsequent growing seasons was analyzed to manually detect green vegetation. Imagery from 6 months and 2.5 years post-burn was visually examined to determine whether a random sample of trees within the study area exhibited regrowth in the growing seasons following the fire. The 1 km2 plot was divided into 100 subplots of 100 m × 100 m each, with 500 randomly generated points assigned across these subplots (five points per subplot). A random sample of 500 woody vegetation patches was selected for visual analysis. Due to the overlapping canopy and close establishment patterns of woody vegetation, a patch-based analysis was employed to assess survival, as individual trees were difficult to identify without ground-truth data. Within this analysis, a patch was not defined using a size specification but was rather defined based on the visual presence of woody vegetation either as green vegetation, burnt vegetation, or combusted vegetation. Figure 6 depicts how patches were visualized and analyzed in the regrowth/survival analysis.
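The stratified random sampling can be sketched as follows, assuming coordinates in metres relative to the plot’s southwest corner; the grid and point counts follow the text (a 10 × 10 grid of 100 m cells with five points per cell), but the implementation details are illustrative.

```python
# Sketch of the stratified random point sampling: 10 x 10 grid of 100 m cells,
# five random points per cell (500 points total); coordinates are illustrative.
import numpy as np

rng = np.random.default_rng(1)
points = []
for row in range(10):
    for col in range(10):
        x = rng.uniform(col * 100, (col + 1) * 100, size=5)
        y = rng.uniform(row * 100, (row + 1) * 100, size=5)
        points.extend(zip(x, y))
points = np.array(points)    # shape (500, 2): five points per 100 m x 100 m cell
```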
The model’s predicted classification was recorded for each woody patch. The co-registered drone imagery from 6 months and 2.5 years post-burn was then analyzed to observe the survival status of each patch, specifically looking for green regrowth as an indicator of survival. Given the limited species diversity and sparse canopy cover in the study area, distinguishing woody vegetation regrowth from other vegetation forms was straightforward within the high-resolution drone imagery (Figure 6). Herbaceous cover, as compared to woody vegetation, displays distinct hues of green within the imagery, allowing for easy categorization of woody vegetation regrowth versus other vegetation forms.
The probability of woody vegetation survival and subsequent regrowth at 6 months and 2.5 years post-burn for each burn severity classification was calculated based on the regrowth observations derived from the 500-patch sample. Utilizing the Monte Carlo simulation outputs described above, this probability was calculated for each of the 1000 burn severity classification outputs to provide a robust understanding of the likelihood of woody vegetation survival and regrowth across a multitude of different model outputs.
These probabilities were collected and analyzed to derive the mean and standard deviation of regrowth outcomes for each burn severity classification across all simulations. This comprehensive approach allowed for an assessment of how variations in burn severity classification accuracy might influence the understanding of regrowth patterns, thereby enhancing the reliability of the model’s outputs. Violin plots were constructed in Python to visualize the probability distribution of woody vegetation regrowth within the Monte Carlo simulation.
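A sketch of the probability aggregation and violin plot is given below. The per-run patch severity labels and the observed regrowth flags are synthetic placeholders for the Monte Carlo outputs and the visual regrowth survey; class codes and variable names are hypothetical.

```python
# Sketch of per-class survival/regrowth probabilities across Monte Carlo runs;
# the inputs are synthetic placeholders.
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(2)
patch_labels_runs = rng.integers(0, 3, size=(1000, 500))   # severity class per patch per run
regrew = rng.random(500) > 0.3                             # True where regrowth was observed

severity_names = {0: "Green vegetation", 1: "Burnt woody vegetation", 2: "Ash"}
probs = {name: [] for name in severity_names.values()}

for labels in patch_labels_runs:                 # one severity assignment per MC run
    for code, name in severity_names.items():
        in_class = labels == code
        if in_class.any():
            probs[name].append(regrew[in_class].mean())    # P(regrowth | severity class)

for name, p in probs.items():
    print(f"{name}: mean = {np.mean(p):.3f}, sd = {np.std(p):.3f}")

plt.violinplot([probs[n] for n in probs], showmeans=True)
plt.xticks(range(1, len(probs) + 1), list(probs))
plt.ylabel("Probability of survival/regrowth")
plt.show()
```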

3. Results

3.1. Burn Severity Classification

A visual analysis of the final classified image indicates a strong correspondence between the original RGB image and the classified image, particularly with respect to the accurate identification of woody vegetation (Figure 7 and Figure 8). This agreement underscores the effectiveness of our RF model in distinguishing between different land cover types within the study area. However, some limitations were observed. Notably, there is evidence of salt-and-pepper noise, predominantly within the bare soil and charred grass classifications, indicating pixel-level misclassifications. The shadow class in the final image also exhibited salt-and-pepper noise, often being misclassified as charred grass or burnt woody vegetation.
The final reclassified burn severity map (Figure 8) illustrates the overall burn severity of the 2021 Kgalagadi Transfrontier Fire, predominantly characterized by low- to medium-severity burns. The landscape is primarily dominated by bare soil and charred grass regions, with only a few areas exhibiting high-severity burns, marked by the presence of combusted woody vegetation (ash). Table 4 includes percentages of total cover across the 1 km2 study area for each of the classified burn severity classes. Notably, the northwestern corner of the study area, as highlighted in Figure 7 and Figure 8, shows a significant concentration of high severity burn areas compared to the rest of the study plot.
The final RF model was trained using over 20 million pixels, achieving an overall F score of 0.74 and an overall accuracy (OA) of ~79.71%. Green vegetation yielded the highest F1-score of 0.90, indicating a strong concordance between the training dataset and RF-predicted classifications (Table 4). Notably, despite having the smallest sample size (210,213 pixels), green vegetation’s high F1 score underscores the model’s robustness in this class. In contrast, burnt woody vegetation exhibited the lowest performance, with an F1 score of 0.51. The other land cover classes—shadow, bare soil, ash, and charred grass—demonstrated intermediate accuracy, with F1 scores between 0.70 and 0.78 (Table 4). The confusion matrix (Figure 9) reveals the model’s difficulty in accurately predicting burnt woody vegetation, frequently misclassifying it as charred grass. Similar misclassification patterns were observed for shadow, bare soil, and ash, which were often incorrectly classified as charred grass.

3.2. Relative Importance of Model Predictors

Elevation data proved to be the most influential feature within the model, followed by the individual RGB bands as the second most significant predictors, with the green band showing the highest importance (Figure 10). Among the RGB spectral indices, the GCC was the most significant for land cover classification. In an attempt to better distinguish burnt woody vegetation from charred grass, textural features were added to the predictor dataset with the expectation that standing woody vegetation would be better separated from the charred soil and grass. Despite previous studies’ success in employing GLCM textural features in land cover classifications [48,54], the addition of texture variables had little to no positive effect in our study, with correlation, homogeneity, and energy displaying minimal feature importance in the model. Contrast had some importance but, overall, was not a significant or influential feature in the model’s classification. However, we chose to keep these features in the analysis to ensure consistency and to maintain a comprehensive approach, as removing them for a small accuracy gain did not seem justified. Including them also helped avoid the risk of overfitting the model to this specific dataset, ensuring it remained more generalizable.

3.3. Woody Vegetation Survival/Regrowth Probabilities

Woody vegetation patches that experienced high-severity burns, as indicated by ash cover, had the lowest likelihood of survival/regrowth post-burn compared to burnt woody vegetation and un-impacted green vegetation (Figure 11). At 6 months post-burn, high-severity patches saw a ~54% probability of survival/regrowth, low-severity patches saw an 85% probability of survival/regrowth, and unimpacted areas of green vegetation saw a 97% probability of survival/regrowth. At 2.5 years post-burn, these probabilities decreased for all three burn severity classes. Although the probability of survival and regrowth for areas dominated by ash was lower than for other severity categories, it is noteworthy that a probability greater than 50% was observed for high-severity burns. This finding suggests that woody vegetation experiencing total above-ground combustion has a significant likelihood of survival, with more than half of such patches showing regrowth at 6 months post-burn.
The standard deviations associated with each category provide a measure of the model’s uncertainty. The higher standard deviations observed for the ash category (3.36% at 6 months and 3.41% at 2.5 years) indicate greater variability and uncertainty in the classification and regrowth outcomes for areas subjected to high-severity burns. Conversely, the relatively lower standard deviations for green vegetation (1.64% at 6 months and 2.65% at 2.5 years) suggest more consistent and predictable regrowth and survival probabilities in areas with little to no burn impact.
The Monte Carlo simulation supports the model’s robustness in classifying low-severity regions, as evidenced by consistently low standard deviation metrics across all 1000 model runs. While high-severity regions displayed increased uncertainty during the simulation runs, they still maintained acceptable accuracy levels, supporting the model’s overall robustness. The broader probability distributions for ash indicate a wider range of potential outcomes, reflecting the complex and variable nature of post-burn recovery in these regions. In contrast, the narrower distributions for green vegetation suggest more predictable outcomes, underscoring the resilience of unimpacted areas.

4. Discussion

4.1. Severity Accuracy

Our model achieved higher accuracy metrics for burn severity classification (~80% classification accuracy) than previous studies that leveraged UAV-RGB imagery without an RF classifier [24,26]. McKenna [26] utilized UAV-RGB imagery acquired before and after fire occurrence to calculate delta greenness indices, which were then thresholded to classify burn severity, yielding a 68% classification accuracy. In a similar manner, Beltrán-Marcos [24] evaluated the accuracy of UAV-derived RGB spectral indices, including EGI, GCC, and CI, in burn severity classification, achieving an overall accuracy of ~50%. Our study, therefore, shows the advantage of using RF for more accurate burnt area severity mapping when utilizing UAV–RGB imagery.
Previous studies have highlighted the effectiveness of RF in burn severity classification when employing multi-spectral imagery, attaining classification accuracy metrics of >90% [27,43,44]. Multi-spectral imagery has been well documented to outperform RGB in machine learning land cover classifications due to its ability to capture a broader range of the electromagnetic spectrum, including the near-infrared (NIR) and short-wave infrared (SWIR), which are sensitive to vegetation health, soil moisture, and other land cover features [25,55]. The Normalized Difference Vegetation Index (NDVI), for instance, is a well-established spectral index derived from multi-spectral data that is specifically designed to monitor and detect green vegetation and has been demonstrated to yield higher vegetation detection accuracy in burn severity studies than RGB indices such as EGI and GCC [24,37,55]. Despite the spectral limitations associated with RGB-based imagery, employing low-cost RGB drones presented a more feasible option for the Kalahari, especially in terms of potential accessibility for local land managers. In Botswana, access to UAV technologies for land management is notably scarce, and this scarcity extends even more to advanced and expensive technologies such as UAV-derived multispectral imagery. Consequently, we aimed to develop an accessible model for land managers by focusing on the use of RGB technologies that were realistically attainable. By leveraging RGB data, we demonstrated that meaningful ecological insights on burn severity could be derived from more readily available and less expensive imaging options, aligning with the practical needs of local land managers.
The model performed best in classifying unburnt areas of green vegetation in the study site (F score = 0.90) despite this class comprising the smallest training sample size within the predictor dataset. Our Monte Carlo simulation further supported the model’s robustness in classifying green vegetation regions by consistently yielding low standard deviation error metrics in green vegetation classification across the 1000 model iterations. Unburned areas also demonstrated high overall classification accuracy, with bare soil achieving the second-highest F score (0.78), underscoring the model’s robust capabilities in accurately mapping unburned regions. These results contrast with the findings of McKenna [26] and Hillman [30], who reported higher classification accuracy for high-severity classes compared to unburnt classes. This discrepancy may be indicative of an underlying bias within our dataset, where the majority of the landscape was minimally impacted by the fire, thereby making the unburnt regions easier for the model to classify accurately [26,28]. A low-severity classification for the KTF matches both the visual analysis of the final severity map and the findings of Kaduyu, who reported an overall low severity for the fire’s extent [32]. Conversely, the studies by McKenna [26] and Hillman [30] were biased towards high-severity fires due to the predominant severity of the fires they examined. Such biases are a known limitation of RF classification, where imbalanced datasets can lead to skewed predictions if certain land cover classes are underrepresented [39].
The model exhibited the greatest confusion in classifying low-severity burnt areas of woody vegetation (F score = 0.51), which were often misclassified as charred grass. This misclassification is primarily due to the similar dark spectral signatures of charred surfaces and the model’s difficulty in detecting standing burnt woody vegetation at smaller height intervals. Vegetation in the Kalahari is dominated by grass and smaller shrubs with sparse cover of larger trees. Upon visual comparison between the CHM and the RGB image, the model struggled to detect smaller shrubs up to two meters tall (Figure 12). Consequently, smaller shrubs that were burnt at a low severity and were still standing may have been incorrectly classified as charred grass due to their similar dark coloration.
The robustness of the model is likely to improve with an increased sample size of training data. Utilizing a single individual for manual classification aimed to minimize bias; however, the intricate details of the post-burn landscape rendered this process laborious, thereby limiting the sample size of our training dataset. Future studies investigating the use of RF for burn severity classification should consider employing a more extensive and diverse training dataset, including data from various study sites and different time periods, to enhance the accuracy and generalizability of model predictions. One of the significant advantages of the RF model is its ability to continuously refine predictions with the incorporation of new training data, leading to sustained improvements in model performance over time [46]. This addition of a diverse training dataset would provide insights into the model’s transferability and sensitivity to different study sites in the Kalahari with different wildfire events.
A limitation in both the model classifications and the analysis of post-burn woody vegetation regrowth is the lack of comprehensive pre- and post-burn field data. Ground-truth data increase model reliability as they provide a reliable reference for validating classified training maps. Additionally, the unpredictable nature of wildfire ignition presents a considerable challenge in acquiring pre-burn imagery from UAVs, which is crucial for comparative analysis. It is acknowledged that visual interpretation limits the assessment of burn severity to what is visible in the imagery and excludes variables such as stem scorch and understory loss in areas of closed canopies [30]. However, the visual interpretation of high-resolution orthomosaics for determining severity has been shown to strongly correlate with field-based measures of severity [56]. Nonetheless, our approach presents a useful framework for improving the precision and accuracy of fire effects detection in unique ecosystems such as the Kalahari and could be further expanded on in future studies by integrating ground-truthed data into the model.
Other observed limitations with respect to severity classification include the complexity added to the model due to the presence of shadows in the 12-h post-burn UAV image. It can be stated with confidence, based on previous studies’ results, that our model would have yielded higher classification accuracy without the addition of shadows to the image that necessitated a shadow classification in the model [25,31]. As detailed in our methodologies, a shadow mask was attempted but failed to properly mask shadows, given the similar spectral signatures of shadows and the charred landscape. In response to this, shadows were classified within their own land cover classification in the model, resulting in model misclassification of shadows as charred grass or woody vegetation, thereby limiting model accuracy.

4.2. Post-Fire Woody Vegetation Dynamics

Our study demonstrated a >50% likelihood of survival and subsequent regrowth at 6 months post-burn in woody vegetation that had experienced total above-ground combustion in a high-severity fire. These results support the notion, found in the literature and in observations from land managers, that topkill is the dominant response of woody vegetation to fire in the Kalahari [8,9,10]. Previous studies have reported low mortality rates for woody vegetation following prescribed fires aimed at reducing shrub encroachment and discuss the inability of fire alone to significantly reduce this encroachment [9,13,23,57]. Our study is the first in the Kalahari region to quantify a similarly high survival rate of woody vegetation following high-severity wildfire, in contrast to the prescribed fires used in previous research. These findings suggest that even fires of higher intensity, such as wildfires, are not alone sufficient to drastically reduce woody vegetation cover and therefore combat shrub encroachment, necessitating alternative or combined management approaches. Additionally, our analysis underscores the importance of considering topkill in post-burn assessments, as accurate mortality and survival rates may only become evident several months after the burn.
The observed decrease in survival across all classes at the 2.5 years post-burn period indicates that some of the regrowth observed at 6 months did not persist. Environmental pressures such as drought, browsing, or frost may have impacted survival probabilities at the 2.5-year mark. The reduction of woody vegetation to an earlier structural stage after experiencing topkill increases individual plants’ vulnerability to these pressures and should be considered when analyzing survival likelihoods [9]. These findings are supported by Hoffmann and Solbrig, who found that once woody vegetation had experienced topkill and subsequent regrowth, the individual plant became very susceptible to topkill in subsequent fires due to their reduction in size [12]. In fact, for the plots that burned twice during their study period, all individuals that experienced topkill in the first fire experienced topkill within the second fire, regardless of whether 1 or 2 years had elapsed between burns [12]. This fact warrants further research on the impacts of multiple high-severity fires on woody vegetation survival/regrowth in regard to shrub encroachment management.
While fire intensity has been well documented as an important factor dictating the likelihood of topkill in the Kalahari, other important variables such as species, demographic age, and vegetation height also need to be considered to gain a fully comprehensive understanding of severity impacts [8,12]. We focused our study on investigating the capability of new technologies in the form of UAV-RGB imagery to assess changes in vegetation post-fire, but we acknowledge that the addition of a pre- and post-burn field-based dataset that includes data on vegetation species, age, and height would have improved the understanding of the relationship between burn severity and vegetation survival/regrowth post-burn. Higgins [8], reporting on a set of experimental prescribed burns in Kruger National Park, South Africa, stated that while fire intensity was an important factor determining the likelihood of topkill, the effects of tree size overwhelmed the effects of fire intensity when assessing the likelihood of topkill. Specifically, Higgins reported a higher likelihood of topkill from smaller shrubs (height < two meters) when compared to larger trees [8]. Previous studies have also found that there is a varying savanna species response to fire intensity in regard to topkill likelihoods that is primarily based on differences in bark thickness and moisture, which influences a species’ susceptibility to fire impacts [8,58,59].

5. Conclusions

Accurate burn severity mapping in savannas is critical for understanding fire regimes over time, evaluating the effectiveness of management decisions, and improving knowledge of the impacts of fire on vegetation dynamics in the landscape. This research demonstrated the effectiveness of employing a combination of fine-resolution UAV–RGB imagery and an RF classifier to accurately map burn severity at local scales in arid savannas. Our burn severity classification model yielded an overall accuracy of ~80%, outperforming other RGB-based severity classification models. Furthermore, our methodologies not only performed effectively in overall burn severity classification but also showed considerable capability in identifying high-severity regions of ash cover, a task in which traditional severity classification methods such as NBR have previously encountered significant challenges. Our findings suggest that the utilization of fine-scale UAV imagery with the inclusion of a machine learning model presents an appropriate methodological choice for burn severity classification in the Kalahari. Based on our severity classifications, our study demonstrated a greater than 50% probability of survival and regrowth of woody vegetation in areas of high-severity fire 6 months post-burn and a 45% probability of survival at 2.5 years post-burn, supporting the well-established notion that fire is unlikely to cause high levels of mortality in woody vegetation. These findings are significant for the continued research and application of shrub encroachment management within the Kalahari, demonstrating that fire, even at higher severities akin to that of wildfire, is insufficient to combat shrub encroachment alone.

Author Contributions

Conceptualization, M.G. and G.S.O.; data acquisition and preprocessing, M.G. and T.M.; data analysis, M.G. and F.O.; M.G. prepared the manuscript, and all co-authors assisted in the review and discussion of results and directed parts of the data analyses. All authors have read and agreed to the published version of the manuscript.

Funding

Funding was provided by NASA Interdisciplinary Sciences Grant 80NSSC24K0298 as well as the UCLA Department of Geography Helin Travel Fund.

Data Availability Statement

The original contributions presented in the study are included in the article; further inquiries can be directed to the corresponding author.

Acknowledgments

We would like to thank the MODISA Wildlife Project Team, specifically Val Gruener, along with members of Gazelle Ecosolutions including Nico Esteva, Grant Kitlowski, Douglas Pham, Siddharth Thakur, and Amod Daherkar who aided in the UAV image acquisition at our study site. Special thanks are also given to the UCLA Geography Department for their financial support of fieldwork and travel expenses through their departmental travel grant.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Archibald, S.; Scholes, R.J.; Roy, D.P.; Roberts, G.; Boschetti, L. Southern African Fire Regimes as Revealed by Remote Sensing. Int. J. Wildland Fire 2010, 19, 861. [Google Scholar] [CrossRef]
  2. Komarek, E.V., Sr. The natural history of lightning. In Proceedings of the Annual Tall Timbers Fire Ecology Conference Number 3, Tallahassee, FL, USA, 9–10 April 1964; pp. 139–183. [Google Scholar]
  3. Mathieu, R.; Main, R.; Roy, D.P.; Naidoo, L.; Yang, H. The Effect of Surface Fire in Savannah Systems in the Kruger National Park (KNP), South Africa, on the Backscatter of C-Band Sentinel-1 Images. Fire 2019, 2, 37. [Google Scholar] [CrossRef]
  4. Trollope, W.S.W. Effects and Use of Fire in Southern African Savannas; Department of Livestock and Pasture Science, Faculty of Agriculture University Fort Hare: Alice, South Africa, 1999. [Google Scholar]
  5. Meyer, T.; Holloway, P.; Christiansen, T.B.; Miller, J.A.; D’Odorico, P.; Okin, G.S. An Assessment of Multiple Drivers Determining Woody Species Composition and Structure: A Case Study from the Kalahari, Botswana. Land 2019, 8, 122. [Google Scholar] [CrossRef]
  6. Dintwe, K.; Okin, G.S.; Xue, Y. Fire-induced Albedo Change and Surface Radiative Forcing in sub-Saharan Africa Savanna Ecosystems: Implications for the Energy Balance. JGR Atmos. 2017, 122, 6186–6201. [Google Scholar] [CrossRef]
  7. Saha, M.V.; D’Odorico, P.; Scanlon, T.M. Kalahari Wildfires Drive Continental Post-Fire Brightening in Sub-Saharan Africa. Remote Sens. 2019, 11, 1090. [Google Scholar] [CrossRef]
  8. Higgins, S.I.; Bond, W.J.; Combrink, H.; Craine, J.M.; February, E.C.; Govender, N.; Lannas, K.; Moncreiff, G.; Trollope, W.S.W. Which Traits Determine Shifts in the Abundance of Tree Species in a Fire-prone Savanna? J. Ecol. 2012, 100, 1400–1410. [Google Scholar] [CrossRef]
  9. Joubert, D.F.; Smit, G.N.; Hoffman, M.T. The Role of Fire in Preventing Transitions from a Grass Dominated State to a Bush Thickened State in Arid Savannas. J. Arid. Environ. 2012, 87, 1–7. [Google Scholar] [CrossRef]
  10. Holdo, R.M. Stem Mortality Following Fire in Kalahari Sand Vegetation: Effects of Frost, Prior Damage, and Tree Neighbourhoods. Plant Ecol. 2005, 180, 77–86. [Google Scholar] [CrossRef]
  11. Govender, N.; Trollope, W.S.W.; Van Wilgen, B.W. The Effect of Fire Season, Fire Frequency, Rainfall and Management on Fire Intensity in Savanna Vegetation in South Africa. J. Appl. Ecol. 2006, 43, 748–758. [Google Scholar] [CrossRef]
  12. Hoffmann, W.A.; Solbrig, O.T. The Role of Topkill in the Differential Response of Savanna Woody Species to Fire. For. Ecol. Manag. 2003, 180, 273–286. [Google Scholar] [CrossRef]
  13. Lohmann, D.; Tietjen, B.; Blaum, N.; Joubert, D.F.; Jeltsch, F. Prescribed Fire as a Tool for Managing Shrub Encroachment in Semi-Arid Savanna Rangelands. J. Arid. Environ. 2014, 107, 49–56. [Google Scholar] [CrossRef]
  14. Hudak, A.T.; Ottmar, R.D.; Vihnanek, R.E.; Brewer, N.W.; Smith, A.M.S.; Morgan, P. The Relationship of Post-Fire White Ash Cover to Surface Fuel Consumption. Int. J. Wildland Fire 2013, 22, 780. [Google Scholar] [CrossRef]
  15. Keeley, J.E. Fire Intensity, Fire Severity and Burn Severity: A Brief Review and Suggested Usage. Int. J. Wildland Fire 2009, 18, 116. [Google Scholar] [CrossRef]
  16. Lentile, L.B.; Morgan, P.; Hudak, A.T.; Bobbitt, M.J.; Lewis, S.A.; Smith, A.M.S.; Robichaud, P.R. Post-Fire Burn Severity and Vegetation Response Following Eight Large Wildfires across the Western United States. Fire Ecol. 2007, 3, 91–108. [Google Scholar] [CrossRef]
  17. Bennett, L.T.; Bruce, M.J.; MacHunter, J.; Kohout, M.; Tanase, M.A.; Aponte, C. Mortality and Recruitment of Fire-Tolerant Eucalypts as Influenced by Wildfire Severity and Recent Prescribed Fire. For. Ecol. Manag. 2016, 380, 107–117. [Google Scholar] [CrossRef]
  18. Retallack, A.; Finlayson, G.; Ostendorf, B.; Lewis, M. Using Deep Learning to Detect an Indicator Arid Shrub in Ultra-High-Resolution UAV Imagery. Ecol. Indic. 2022, 145, 109698. [Google Scholar] [CrossRef]
  19. Devine, A.P.; McDonald, R.A.; Quaife, T.; Maclean, I.M.D. Determinants of Woody Encroachment and Cover in African Savannas. Oecologia 2017, 183, 939–951. [Google Scholar] [CrossRef]
  20. Kraaij, T.; Ward, D. Effects of Rain, Nitrogen, Fire and Grazing on Tree Recruitment and Early Survival in Bush-Encroached Savanna, South Africa. Plant Ecol. 2006, 186, 235–246. [Google Scholar] [CrossRef]
  21. Roques, K.G.; O’Connor, T.G.; Watkinson, A.R. Dynamics of Shrub Encroachment in an African Savanna: Relative Influences of Fire, Herbivory, Rainfall and Density Dependence. J. Appl. Ecol. 2001, 38, 268–280. [Google Scholar] [CrossRef]
  22. Sankaran, M.; Hanan, N.P.; Scholes, R.J.; Ratnam, J.; Augustine, D.J.; Cade, B.S.; Gignoux, J.; Higgins, S.I.; Le Roux, X.; Ludwig, F.; et al. Determinants of Woody Cover in African Savannas. Nature 2005, 438, 846–849. [Google Scholar] [CrossRef]
  23. Case, M.F.; Staver, A.C. Fire Prevents Woody Encroachment Only at Higher-than-historical Frequencies in a South African Savanna. J. Appl. Ecol. 2017, 54, 955–962. [Google Scholar] [CrossRef]
  24. Beltrán-Marcos, D.; Suárez-Seoane, S.; Fernández-Guisuraga, J.M.; Fernández-García, V.; Pinto, R.; García-Llamas, P.; Calvo, L. Mapping Soil Burn Severity at Very High Spatial Resolution from Unmanned Aerial Vehicles. Forests 2021, 12, 179. [Google Scholar] [CrossRef]
  25. Fraser, R.; Van Der Sluijs, J.; Hall, R. Calibrating Satellite-Based Indices of Burn Severity from UAV-Derived Metrics of a Burned Boreal Forest in NWT, Canada. Remote Sens. 2017, 9, 279. [Google Scholar] [CrossRef]
  26. McKenna, P.; Erskine, P.D.; Lechner, A.M.; Phinn, S. Measuring Fire Severity Using UAV Imagery in Semi-Arid Central Queensland, Australia. Int. J. Remote Sens. 2017, 38, 4244–4264. [Google Scholar] [CrossRef]
  27. Collins, L.; Griffioen, P.; Newell, G.; Mellor, A. The Utility of Random Forests for Wildfire Severity Mapping. Remote Sens. Environ. 2018, 216, 374–384. [Google Scholar] [CrossRef]
  28. Roy, D.P.; Boschetti, L.; Trigg, S.N. Remote Sensing of Fire Severity: Assessing the Performance of the Normalized Burn Ratio. IEEE Geosci. Remote Sens. Lett. 2006, 3, 112–116. [Google Scholar] [CrossRef]
  29. Smith, A.M.S.; Wooster, M.J.; Drake, N.A.; Dipotso, F.M.; Falkowski, M.J.; Hudak, A.T. Testing the Potential of Multi-Spectral Remote Sensing for Retrospectively Estimating Fire Severity in African Savannahs. Remote Sens. Environ. 2005, 97, 92–115. [Google Scholar] [CrossRef]
  30. Hillman, S.; Hally, B.; Wallace, L.; Turner, D.; Lucieer, A.; Reinke, K.; Jones, S. High-Resolution Estimates of Fire Severity—An Evaluation of UAS Image and LiDAR Mapping Approaches on a Sedgeland Forest Boundary in Tasmania, Australia. Fire 2021, 4, 14. [Google Scholar] [CrossRef]
  31. Von Nonn, J.; Villarreal, M.L.; Blesius, L.; Davis, J.; Corbett, S. An Open-Source Workflow for Scaling Burn Severity Metrics from Drone to Satellite to Support Post-Fire Watershed Management. Environ. Model. Softw. 2024, 172, 105903. [Google Scholar] [CrossRef]
  32. Kaduyu, I.; Tsheko, R.; Chepete, J.H.; Kgosiesele, E. Burned Area Estimation and Severity Classification Using the Fire Mapping Tool (FMT) in Arid Savannas of Botswana, a Case Study—Kgalagadi District; Elsevier BV: Amsterdam, The Netherlands, 2023. [Google Scholar]
  33. Pérez-Rodríguez, L.A.; Quintano, C.; Marcos, E.; Suarez-Seoane, S.; Calvo, L.; Fernández-Manso, A. Evaluation of Prescribed Fires from Unmanned Aerial Vehicles (UAVs) Imagery and Machine Learning Algorithms. Remote Sens. 2020, 12, 1295. [Google Scholar] [CrossRef]
  34. Kgosikoma, O.E.; Batisani, N. Livestock Population Dynamics and Pastoral Communities’ Adaptation to Rainfall Variability in Communal Lands of Kgalagadi South, Botswana. Pastoralism 2014, 4, 19. [Google Scholar] [CrossRef]
  35. Porporato, A.; Laio, F.; Ridolfi, L.; Caylor, K.K.; Rodriguez-Iturbe, I. Soil Moisture and Plant Stress Dynamics along the Kalahari Precipitation Gradient. J. Geophys. Res. 2003, 108, 4127. [Google Scholar] [CrossRef]
  36. Modisa Wildlife Project—Mission. Available online: https://www.modisawildlifeproject.com/mission (accessed on 25 June 2024).
  37. Lewis, S.A.; Robichaud, P.R.; Hudak, A.T.; Strand, E.K.; Eitel, J.U.H.; Brown, R.E. Evaluating the Persistence of Post-Wildfire Ash: A Multi-Platform Spatiotemporal Analysis. Fire 2021, 4, 68. [Google Scholar] [CrossRef]
  38. Edwards, A.; Russell-Smith, J.; Maier, S.W. Measuring and Mapping Fire Severity in the Tropical Savannas. In Carbon Accounting and Savanna Fire Management; Murphy, B., Edwards, A., Meyer, M., Russell-Smith, J., Eds.; CSIRO Publishing: Melbourne, Australia, 2015; Chapter 8; pp. 169–181. [Google Scholar]
  39. Maxwell, A.E.; Warner, T.A.; Fang, F. Implementation of Machine-Learning Classification in Remote Sensing: An Applied Review. Int. J. Remote Sens. 2018, 39, 2784–2817. [Google Scholar] [CrossRef]
  40. Rodriguez-Galiano, V.F.; Ghimire, B.; Rogan, J.; Chica-Olmo, M.; Rigol-Sanchez, J.P. An Assessment of the Effectiveness of a Random Forest Classifier for Land-Cover Classification. ISPRS J. Photogramm. Remote Sens. 2012, 67, 93–104. [Google Scholar] [CrossRef]
  41. Breiman, L. Random Forests. Mach. Learn. 2001, 45, 5–32. [Google Scholar] [CrossRef]
  42. Belgiu, M.; Drăguţ, L. Random Forest in Remote Sensing: A Review of Applications and Future Directions. ISPRS J. Photogramm. Remote Sens. 2016, 114, 24–31. [Google Scholar] [CrossRef]
  43. Meddens, A.J.H.; Kolden, C.A.; Lutz, J.A. Detecting Unburned Areas within Wildfire Perimeters Using Landsat and Ancillary Data across the Northwestern United States. Remote Sens. Environ. 2016, 186, 275–285. [Google Scholar] [CrossRef]
  44. Mohammadpour, P.; Viegas, D.X.; Viegas, C. Vegetation Mapping with Random Forest Using Sentinel 2 and GLCM Texture Feature—A Case Study for Lousã Region, Portugal. Remote Sens. 2022, 14, 4585. [Google Scholar] [CrossRef]
  45. Phan, T.N.; Kuch, V.; Lehnert, L.W. Land Cover Classification Using Google Earth Engine and Random Forest Classifier—The Role of Image Composition. Remote Sens. 2020, 12, 2411. [Google Scholar] [CrossRef]
  46. Ghimire, B.; Rogan, J.; Galiano, V.R.; Panday, P.; Neeti, N. An Evaluation of Bagging, Boosting, and Random Forests for Land-Cover Classification in Cape Cod, Massachusetts, USA. GISci. Remote Sens. 2012, 49, 623–643. [Google Scholar] [CrossRef]
  47. Kupidura, P. The Comparison of Different Methods of Texture Analysis for Their Efficacy for Land Use Classification in Satellite Imagery. Remote Sens. 2019, 11, 1233. [Google Scholar] [CrossRef]
  48. Zhou, H.; Fu, L.; Sharma, R.P.; Lei, Y.; Guo, J. A Hybrid Approach of Combining Random Forest with Texture Analysis and VDVI for Desert Vegetation Mapping Based on UAV RGB Data. Remote Sens. 2021, 13, 1891. [Google Scholar] [CrossRef]
  49. Hall-Beyer, M. Practical Guidelines for Choosing GLCM Textures to Use in Landscape Classification Tasks over a Range of Moderate Spatial Scales. Int. J. Remote Sens. 2017, 38, 1312–1338. [Google Scholar] [CrossRef]
  50. Haralick, R.M. Statistical and Structural Approaches to Texture. Proc. IEEE 1979, 67, 786–804. [Google Scholar] [CrossRef]
  51. Coulston, J.W.; Blinn, C.E.; Thomas, V.A.; Wynne, R.H. Approximating Prediction Uncertainty for Random Forest Regression Models. Photogramm. Eng. Remote Sens. 2016, 82, 189–197. [Google Scholar] [CrossRef]
  52. Li, Z.; Bi, S.; Hao, S.; Cui, Y. Aboveground Biomass Estimation in Forests with Random Forest and Monte Carlo-Based Uncertainty Analysis. Ecol. Indic. 2022, 142, 109246. [Google Scholar] [CrossRef]
  53. Wang, G.; Gertner, G.Z.; Fang, S.; Anderson, A.B. A Methodology for Spatial Uncertainty Analysis of Remote Sensing and GIS Products. Photogramm. Eng. Remote Sens. 2005, 71, 1423–1432. [Google Scholar] [CrossRef]
  54. Zhang, X.; Cui, J.; Wang, W.; Lin, C. A Study for Texture Feature Extraction of High-Resolution Satellite Images Based on a Direction Measure and Gray Level Co-Occurrence Matrix Fusion Algorithm. Sensors 2017, 17, 1474. [Google Scholar] [CrossRef]
  55. Shin, J.; Seo, W.; Kim, T.; Park, J.; Woo, C. Using UAV Multispectral Images for Classification of Forest Burn Severity—A Case Study of the 2019 Gangneung Forest Fire. Forests 2019, 10, 1025. [Google Scholar] [CrossRef]
  56. Costa, H.; Benevides, P.; Moreira, F.D.; Moraes, D.; Caetano, M. Spatially Stratified and Multi-Stage Approach for National Land Cover Mapping Based on Sentinel-2 Data and Expert Knowledge. Remote Sens. 2022, 14, 1865. [Google Scholar] [CrossRef]
  57. Thomsen, A.M.; Ooi, M.K.J. Shifting Season of Fire and Its Interaction with Fire Severity: Impacts on Reproductive Effort in Resprouting Plants. Ecol. Evol. 2022, 12, e8717. [Google Scholar] [CrossRef]
  58. Meyer, K.M.; Ward, D.; Moustakas, A.; Wiegand, K. Big Is Not Better: Small Acacia Mellifera Shrubs Are More Vital after Fire. Afr. J. Ecol. 2005, 43, 131–136. [Google Scholar] [CrossRef]
  59. Brando, P.M.; Nepstad, D.C.; Balch, J.K.; Bolker, B.; Christman, M.C.; Coe, M.; Putz, F.E. Fire-induced Tree Mortality in a Neotropical Forest: The Roles of Bark Traits, Tree Size, Wood Density and Fire Behavior. Glob. Chang. Biol. 2012, 18, 630–641. [Google Scholar] [CrossRef]
Figure 1. Kgalagadi Transfrontier Fire (KTF) extent and location in Botswana. Modisa is indicated by a red star in the left panel. The natural-color satellite imagery of the Kgalagadi Transfrontier Fire in the left panel was acquired by the National Aeronautics and Space Administration's (NASA, Washington, DC, USA) Moderate Resolution Imaging Spectroradiometer (MODIS, Washington, DC, USA) aboard the Aqua satellite on 8 September 2021 at a 250-m resolution.
Figure 2. The top left panel depicts the 1 km2 post-burn plot that this study primarily focused on. The bottom panel offers a closer look at the burn impacts within the plot. The top right panel shows the location of the study site, Modisa, in Botswana, indicated by a red star.
Figure 3. Flow chart showing the steps of the burn severity classification model along with the datasets and software used. R: red; G: green; B: blue; GLCM: gray-level co-occurrence matrix; UAS: unmanned aerial system.
Figure 4. Visualizations of land cover classification schema and their corresponding burn severity rankings.
Figure 5. Original RGB drone images (left) and the corresponding manual land cover classifications (right).
Figure 6. Visual comparison of imagery acquired 12 h post-burn and 6 months post-burn. Woody vegetation regrowth is delineated and compared with herbaceous cover, as indicated by the red box outlines. Regrowth was assessed at the patch level rather than at the pixel level.
Figure 7. Top panel: original drone image 12 h post-burn (left) and the Random Forest model-predicted land cover classification map (right), with three outlined regions (A, B, C) indicated. Bottom panels: zoomed-in views of these regions from the model-predicted map and the original RGB imagery for better visualization.
Figure 8. Random Forest classification results reclassified to represent burn severity rankings.
Figure 9. Confusion matrix for Random Forest classification of burn severity. Each cell shows the proportion of predicted observations relative to the actual observed categories, highlighting the model's precision and misclassification rates. Numerical values and the color gradient of the cells represent the normalized proportion of pixel predictions.
Figure 10. Feature importance within the Random Forest classification model. CHM = Canopy Height Model; RGB = Red, green, and blue bands; GCC = Green Chromatic Coordinate; CI = Char Index; max_diff = Maximum RGB Difference; EGI = Excess Green Index; BI = Brightness Index.
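For readers reproducing this type of analysis, feature importances of the kind shown in Figure 10 can be obtained directly from a fitted Random Forest. The sketch below uses scikit-learn with synthetic placeholder data; the variable names, number of trees, and feature stack are illustrative assumptions, not the values used in this study.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Feature names follow Table 3 / Figure 10; the data below are synthetic placeholders.
feature_names = ["CHM", "R", "G", "B", "GCC", "CI", "max_diff", "EGI", "BI",
                 "GLCM_contrast", "GLCM_energy", "GLCM_correlation", "GLCM_homogeneity"]

rng = np.random.default_rng(1)
X = rng.random((5000, len(feature_names)))   # stand-in for the per-pixel feature stack
y = rng.integers(0, 6, size=5000)            # six land cover classes (Table 2)

rf = RandomForestClassifier(n_estimators=100, random_state=1).fit(X, y)

# Rank predictors by mean decrease in Gini impurity, as reported by scikit-learn.
for name, importance in sorted(zip(feature_names, rf.feature_importances_),
                               key=lambda t: -t[1]):
    print(f"{name:18s} {importance:.3f}")
```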
Figure 11. Woody vegetation survival and regrowth. This figure presents the probability of survival and regrowth of woody vegetation at 6 months and 2.5 years post-burn across the 1000 Monte Carlo simulation outputs. Mean probabilities and standard deviations are calculated for each category. Wider sections of the violin plots indicate a higher likelihood of regrowth, while narrower sections suggest a lower likelihood.
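As a rough illustration of how per-pixel uncertainty can be propagated into regrowth probabilities of the kind summarized in Figure 11, the sketch below repeatedly samples class labels from Random Forest per-pixel class probabilities. This is a generic Monte Carlo sampling scheme under assumed inputs (class_probs, woody_idx), not the authors' exact procedure.

```python
import numpy as np

def woody_probability(class_probs: np.ndarray, woody_idx: int,
                      n_draws: int = 1000, seed: int = 0) -> np.ndarray:
    """Monte Carlo sketch: sample a class label for each pixel from its
    class-probability vector on each draw and return the fraction of draws
    in which the pixel is labeled as woody vegetation.

    class_probs: (n_pixels, n_classes) array of per-pixel class probabilities
    woody_idx  : column index of the woody vegetation class
    """
    rng = np.random.default_rng(seed)
    cdf = class_probs.cumsum(axis=1)            # per-pixel cumulative probabilities
    hits = np.zeros(class_probs.shape[0])
    for _ in range(n_draws):
        u = rng.random((class_probs.shape[0], 1))
        labels = (cdf < u).sum(axis=1)          # inverse-CDF sampling of a class label
        hits += labels == woody_idx
    return hits / n_draws
```

Comparing such per-pixel probabilities between acquisition dates is one way to derive survival/regrowth distributions of the kind plotted in Figure 11.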
Figure 12. Sample of RGB images used within the manual classification dataset and their corresponding CHM in meters. White spots within the CHM are indicative of taller vegetation.
Table 1. Dates of drone imagery acquisition and their respective seasons.
Date of Image Acquisition | Time Since Fire
26 September 2021—Dry Season | 12 h post-burn
29 December 2021—Wet Season | 6 months post-burn
21 July 2022—Dry Season | 1 year post-burn
9 August 2023—Dry Season | 2 years post-burn
23 November 2023—Wet Season | 2.5 years post-burn
Table 2. Land cover classifications and their corresponding burn severity rankings.
Classification Schema | Burn Severity Ranking
Green Vegetation | 0–No Burn Impact
Bare Soil | 0–No Burn Impact
Burnt Woody Vegetation | 1–Low Severity
Charred Grass | 2–Medium Severity
Ash | 3–High Severity
Shadow | Null
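The mapping in Table 2 is a simple lookup from predicted land cover class to burn severity rank. A minimal sketch of such a reclassification is given below; the integer class codes and array names are illustrative assumptions, not taken from the study's implementation.

```python
import numpy as np

# Class codes are assumed to follow the order below; Table 2 supplies the severity ranks.
class_names = ["Green Vegetation", "Bare Soil", "Burnt Woody Vegetation",
               "Charred Grass", "Ash", "Shadow"]
severity = {"Green Vegetation": 0, "Bare Soil": 0, "Burnt Woody Vegetation": 1,
            "Charred Grass": 2, "Ash": 3, "Shadow": np.nan}  # NaN = null / masked

lookup = np.array([severity[name] for name in class_names], dtype=float)

def reclassify(class_map: np.ndarray) -> np.ndarray:
    """Convert an integer land cover raster (codes 0-5) to burn severity ranks."""
    return lookup[class_map]

# Example: a 2 x 3 classified raster becomes a severity raster.
print(reclassify(np.array([[0, 3, 4], [2, 1, 5]])))
```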
Table 3. Variables used to train the model.
Training Indices and Variables | Equations and Descriptions
Excess Green Index (EGI) | 2 × G − R − B
Green Chromatic Coordinate Index (GCC) | G/(G + R + B)
Char Index (CI) | BI + (MaxDiff × 15)
Brightness Index (BI) | R + G + B
Maximum RGB Difference (MaxDiff) | Max(|B − G|, |B − R|, |R − G|)
Red Band | R
Green Band | G
Blue Band | B
CHM | The height of vegetation above the ground surface, derived by subtracting the DTM from the DSM.
GLCM—Contrast | Measures the local variations in the GLCM.
GLCM—Energy | Provides the sum of squared elements in the GLCM.
GLCM—Correlation | Measures the joint probability occurrence of the specified pixel pairs.
GLCM—Homogeneity | Measures the closeness of the distribution of elements in the GLCM to the GLCM diagonal.
Where: G = Green, R = Red, B = Blue, DTM = Digital Terrain Model, DSM = Digital Surface Model, GLCM = Gray Level Co-occurrence Matrix, CHM = Canopy Height Model.
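The RGB indices in Table 3 are straightforward band arithmetic. The sketch below computes them with NumPy, assuming the bands are supplied as floating-point arrays; the small epsilon in the GCC denominator is added here only to avoid division by zero and is not part of the published formula.

```python
import numpy as np

def training_features(r, g, b, dsm=None, dtm=None):
    """Compute the RGB-based predictors listed in Table 3.

    r, g, b  : arrays of red, green, and blue band values
    dsm, dtm : optional digital surface/terrain models for the CHM
    """
    r, g, b = (np.asarray(x, dtype=float) for x in (r, g, b))
    egi = 2 * g - r - b                                   # Excess Green Index
    bi = r + g + b                                        # Brightness Index
    gcc = g / (bi + 1e-9)                                 # Green Chromatic Coordinate
    max_diff = np.maximum.reduce([np.abs(b - g), np.abs(b - r), np.abs(r - g)])
    ci = bi + max_diff * 15                               # Char Index
    feats = {"EGI": egi, "GCC": gcc, "BI": bi, "MaxDiff": max_diff, "CI": ci}
    if dsm is not None and dtm is not None:
        feats["CHM"] = np.asarray(dsm, float) - np.asarray(dtm, float)  # canopy height
    return feats
```

The GLCM texture measures (contrast, energy, correlation, homogeneity) would typically be computed separately over a moving window, for example with scikit-image's graycomatrix/graycoprops functions.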
Table 4. Model accuracy assessment. The percentage of severity classification coverage over the study area is included.
Land Cover Classification | Precision | Recall | F1-Score | Support | Percent of Cover
Shadow | 0.73 | 0.66 | 0.70 | 335,703 | 4.90%
Green Vegetation | 0.91 | 0.90 | 0.90 | 210,213 | 3.07%
Charred Grass | 0.74 | 0.77 | 0.75 | 3,019,852 | 44.11%
Burnt Woody Vegetation | 0.56 | 0.47 | 0.51 | 771,185 | 11.26%
Bare Soil | 0.77 | 0.78 | 0.78 | 2,285,950 | 33.39%
Ash | 0.78 | 0.74 | 0.75 | 223,648 | 3.27%
Weighted Average | 0.75 | 0.74 | 0.75 | |
Overall Accuracy (OA): 0.79717
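Per-class precision, recall, F1-score, and overall accuracy of the kind reported in Table 4, along with the normalized confusion matrix in Figure 9, can be computed from validation labels with scikit-learn. The snippet below uses randomly generated placeholder labels purely so it runs end to end; it is not the study's validation data.

```python
import numpy as np
from sklearn.metrics import accuracy_score, classification_report, confusion_matrix

classes = ["Shadow", "Green Vegetation", "Charred Grass",
           "Burnt Woody Vegetation", "Bare Soil", "Ash"]

# Placeholder reference (y_true) and predicted (y_pred) labels for validation pixels.
rng = np.random.default_rng(0)
y_true = rng.choice(classes, size=10000)
y_pred = np.where(rng.random(10000) < 0.8, y_true, rng.choice(classes, size=10000))

print(classification_report(y_true, y_pred, digits=2))       # per-class precision/recall/F1
print("Overall accuracy:", round(accuracy_score(y_true, y_pred), 4))
cm = confusion_matrix(y_true, y_pred, labels=classes, normalize="true")  # rows sum to 1, cf. Figure 9
```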