
Mapping Burn Extent of Large Wildland Fires from Satellite Imagery Using Machine Learning Trained from Localized Hyperspatial Imagery

Department of Mathematics and Computer Science, Northwest Nazarene University, Nampa, ID 83686, USA
* Author to whom correspondence should be addressed.
Remote Sens. 2020, 12(24), 4097; https://doi.org/10.3390/rs12244097
Submission received: 28 October 2020 / Revised: 17 November 2020 / Accepted: 9 December 2020 / Published: 15 December 2020

Abstract

Wildfires burn 4–10 million acres annually across the United States, and wildland fire related damages and suppression costs have exceeded $13 billion in a single year. High-intensity wildfires contribute to post-fire erosion, degraded wildlife habitat, and loss of timber resources. Accurate and timely assessment of the effects of wildland fire on the environment is critical to improving the use of wildland fire as a tool for restoring ecosystem resilience. Sensor miniaturization and small unmanned aircraft systems (sUAS) provide affordable, on-demand monitoring of wildland fire effects at a much finer spatial resolution than is possible with satellite imagery, allowing researchers to obtain more detailed data at a much lower initial cost. Unfortunately, current regulatory and technical constraints prohibit the acquisition of imagery using sUAS for the entire extent of large fires. This research examined the use of sUAS imagery to train and validate burn severity and extent mapping of large wildland fires from various satellite images. Despite the lower resolution of the satellite imagery, the research utilized its advantages, such as global coverage, low cost, temporal stability, and spectral extent, while leveraging the higher resolution of hyperspatial sUAS imagery for training and validating the mapping analytics.

1. Introduction

This study examines the use of sub-decimeter hyperspatial imagery acquired with a small unmanned aircraft system (sUAS) to train machine learning algorithms to map wildland fire severity and extent from Landsat imagery. This effort extends the accuracy gains of hyperspatial imagery beyond the study area constraints imposed by current technical and regulatory limits on the spatial extent of imagery that can be acquired with an sUAS. Extending the usefulness of hyperspatial sUAS imagery beyond current flight extent restrictions will provide managers with increased actionable knowledge, leading to improved management decisions and increased ecosystem resilience.
A century of fire suppression and prevention has led to the current departure of wildlands from the fire return intervals typically experienced under pre-European settlement conditions. Wildlands in the western United States (US) are experiencing a much higher incidence of catastrophic fires [1]. Millions of hectares of western US wildlands are impacted by wildland fire annually, with the area burned in some fire seasons exceeding four million hectares [2] and suppression costs exceeding three billion dollars annually [3]. High-intensity wildland fires contribute to post-fire erosion, degraded wildlife habitat, and loss of timber resources. This loss results in negative impacts on ecosystem resilience as well as on communities in the wildland-urban interface, where 25,790 structures were burned in 2018 [2]. Additionally, wildland fires across the US claim more lives than any other type of natural disaster, resulting in the average loss of twenty wildland firefighters per year [4,5]. The 2018 Camp Fire in northern California alone resulted in 85 fatalities, with estimates of insured losses running approximately ten billion dollars [6]. Effective management of wildland fire is a critical dimension of maintaining healthy and sustainable wildlands. Actionable knowledge of the relationships between fuel, fire behavior, and the effects on the ecosystem and human development can help land managers develop elegant solutions to wildfire problems. Remotely sensed imagery is commonly relied on in assessing the impact of fire on the ecosystem [7]. The knowledge gained from remotely sensed data enables land managers to better understand the effects fire has had on the landscape and develop a more effective management response facilitating ecosystem recovery and resiliency.

1.1. Background

Although researchers face many challenges in understanding fire behavior and its corresponding effects, we investigate the improvement that a change in data resolution creates when analyzing wildland fires. Improving how well wildland fire severity and extent can be mapped from medium resolution satellite imagery using machine learning trained from hyperspatial sUAS imagery relied on previously published efforts, including identification of the ecological factors of interest, identification of current methods of mapping wildland fire extent, and utilization of sUAS as a remote sensing vehicle.

1.1.1. Wildland Fire Severity and Extent

The term “wildland fire severity” can refer to many different effects observed through a fire cycle, from the intensity with which an active fire burned to the ecosystem’s response to the fire over the subsequent years. This study investigates the direct or immediate effects of a fire, such as biomass consumption, as observed in the days and weeks after the fire is contained [8]. Therefore, this study defines burn severity as the measurement of biomass (or fuel) consumption [9].
“Wildland fire extent” refers to the area in which a wildland fire has consumed organic material. Identification of burned area extent within an image can be achieved by exploiting the spectral separability between burned organic material and unburned vegetation [10,11]. Patchiness describes the fire’s spatial completeness, examining how much biomass remains unburned within the fire perimeter [12]. This study defines burn extent as the area within which a fire has consumed organic materials, omitting the unburned islands of vegetation within the burned area perimeter.
Vector-based digitization of fire perimeters: Burn extent is often recorded as a polygon geospatial feature, captured by a global navigation satellite system (GNSS) receiver either from a helicopter flying along the perimeter of a fire or by ground-based mapping of the fire edge with a handheld GNSS receiver. For the pilot, the burned area edge can be difficult to discern, especially when high vegetation canopy closure and shadows reduce visibility of surface vegetation from above. For ground-based personnel mapping the burned area either on foot or from a motorized vehicle, following the perimeter can be complicated by both rough terrain and the non-uniform manner in which wildland fire burns across the landscape. Most wildland fires contain islands of unburned vegetation dispersed throughout the burned area, ranging in size up to hundreds of hectares. These unburned islands are typically not mapped due to safety concerns as well as the impracticality of traversing the edge of each of these islands either aerially or on the ground. Even an extinguished fire can be a dangerous environment for humans to work in; structurally unsound trees are known to fall upon and injure or kill unsuspecting people. Additionally, the patchy, convoluted edge of the fire can often be hard to delineate from the ground, let alone from a manned aircraft [13].
Raster-based mapping of burn extent and severity: The most common metric used for mapping wildland burn severity from medium resolution satellite imagery is the normalized burn ratio (NBR), which is the normalized difference between the near-infrared (NIR) and shortwave infrared (SWIR) bands [9], calculated as:

$$\mathrm{NBR} = \frac{\mathrm{NIR} - \mathrm{SWIR}}{\mathrm{NIR} + \mathrm{SWIR}} \tag{1}$$
While NBR is effectively used for burn severity mapping, differenced NBR (dNBR), calculated as pre-fire NBR (NBRpre) minus post-fire NBR (NBRpost) [7], has been found to be a stronger indicator of burn severity than NBRpost alone [14]. Satellite imagery is a useful dataset from which to calculate dNBR due to the ability to obtain pre-burn imagery corresponding to a study area containing an unplanned wildland fire.
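As a concrete illustration, the following minimal sketch computes NBR and dNBR from NIR and SWIR reflectance bands held as NumPy arrays; the array inputs and the zero fill for empty pixels are assumptions for illustration, not part of the published workflow.

```python
import numpy as np

def nbr(nir: np.ndarray, swir: np.ndarray) -> np.ndarray:
    """Normalized Burn Ratio (Equation (1)) from NIR and SWIR reflectance."""
    nir = nir.astype(np.float64)
    swir = swir.astype(np.float64)
    denom = nir + swir
    # Guard against division by zero on nodata pixels (the fill value is arbitrary).
    with np.errstate(divide="ignore", invalid="ignore"):
        return np.where(denom != 0, (nir - swir) / denom, 0.0)

def dnbr(nbr_pre: np.ndarray, nbr_post: np.ndarray) -> np.ndarray:
    """Differenced NBR: pre-fire NBR minus post-fire NBR [7]."""
    return nbr_pre - nbr_post
```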
The Landsat program has provided continuous satellite coverage of the earth at 30 m resolution since 1982, providing 38 years of imagery from which to extract fire history data. The US Departments of Agriculture and Interior maintain the Monitoring Trends in Burn Severity (MTBS) program to map burn perimeters and severity across the US starting in 1984. The MTBS program maps wildland fires across the US using dNBR [7]. Due to the large number of fires across the US, the MTBS program only maps fires exceeding 400 hectares in the western US and 200 hectares in the eastern US [15]. With the offset in the orbits of Landsat 7 and 8, the resulting 8-day flyover interval easily allows analysts to access bi-temporal imagery preceding the fire for NBRpre and following the fire for NBRpost. In the event the fire study area was obscured by clouds or smoke during a pass-over, another scene can be used from either a preceding pass-over for NBRpre or a succeeding pass-over for NBRpost as necessary. While MTBS is extensively used for mapping large fires across the US, it has been shown to overestimate burn extent by four to sixteen percent due to oversimplification of burned area polygons and the omission of large unburned islands [16].

1.1.2. Origins of the Fires Used in This Study

This study utilized a set of fire data from previous similar research efforts. The wildfire burn areas used in this study are a collection of medium to large fires in southwestern Idaho, ranging from xeric sagebrush steppe fires within the United States Department of the Interior Bureau of Land Management Boise District to the mesic upper Payette and Boise River watersheds in the United States Department of Agriculture Forest Service Boise National Forest. The names of these fires, as displayed in Table 1, were usually determined by a nearby notable landmark such as a creek or a hill. For example, the Hoodoo fire burned near and across Hoodoo Creek near Idaho City, Idaho. Typically, these names were the official names assigned by the local jurisdiction and used by dispatchers. The locations of the burned areas over which acquisition flights were conducted are shown in Figure 1.

1.1.3. Utilization of sUAS for Mapping Wildland Fire

The proliferation of small unmanned aircraft system technology has made the procurement and use of remotely sensed imagery a viable possibility for many organizations that could not afford to obtain such data in the past. A small unmanned aircraft system (sUAS) is a designation given by the US Federal Aviation Administration to UAS that weigh between 0.25 kg and 25 kg [17]. Most commercially available sUAS come with an onboard digital camera: a multi-spectral sensor with three bands capturing visible light in the blue, green, and red spectra, ranging from 400 nm to 700 nm [18]. Spectral responses in the visible spectra can be used to differentiate between image features such as white and black ash [10,11,19] and other features of interest to fire managers such as vegetation type [10,20].
New advances in sUAS capabilities enable imagery acquisition with a spatial resolution of centimeters and a temporal resolution of minutes [21]. The temporal responsiveness of acquiring imagery with an sUAS is significantly greater because the sUAS can be flown at any desired time, as opposed to Landsat imagery, which can only be acquired when the satellite flies over the scene every 16 days, assuming the scene is not obscured by smoke or clouds during the flyover. Manned aircraft can be a viable alternative, but US governmental agencies update their manned aerial photography through programs such as the Department of Agriculture National Agricultural Imagery Program only every few years, nowhere near the update frequency this experiment requires. On-demand manned aerial photography is also far more expensive, with rental costs for manned aircraft running thousands of dollars per hour. High-resolution imagery was only needed on demand, and therefore sUAS imagery was the most temporally responsive and cost-effective solution to the US land management agency regulatory need to acquire post-fire data, including mapping burn extent and fire effects, within 14 days after fire containment.
Aerial imagery for this project was acquired with a DJI Phantom 4 with a 12-megapixel color camera. The imagery acquired with the sUAS was taken while flying at an altitude of 120 m above ground level (AGL), giving the photos a spatial resolution of 5 cm per pixel [9]. Objects that are wider than that pixel resolution will be discernible in the acquired hyperspatial imagery as shown in Figure 2a. The black rectangles in the image are burned areas. Small lines and patches of white within the burned area are white ash from sagebrush which was fully combusted by the fire. The unburned vegetation consists primarily of annual and perennial grasses and forbs, Wyoming big sagebrush (Artemisia tridentata spp. Wyomingensis), and yellow rabbitbrush (Chrysothamnus viscidiflorus). The scene shown in Figure 2 contains two western juniper trees (Juniperus occidentalis spp. occidentalis).
Features easily identified in hyperspatial (5 cm) imagery are lost in medium resolution (30 m) Landsat satellite imagery, being aggregated into more dominant neighboring features. Figure 2b shows the same scene as the preceding image but resampled to 30 m spatial resolution, having 48 pixels aligned in six rows by eight columns. Mapping burn severity and extent was found to have significantly lower accuracy when using imagery with 30 m spatial resolution such as Landsat than with hyperspatial imagery acquired by flying the sUAS over the burned area [19]. While burn severity and extent can be mapped with much higher accuracy using hyperspatial imagery acquired with an sUAS, this research team found that current technical and regulatory constraints on drone usage realistically only allow for the acquisition of up to 600 hectares per day [23]. That same flight extent would be a small part of a single Landsat scene. To effectively map large class F (>400 ha) and G (>5000 ha) fires [24], satellite imagery such as Landsat, which can be acquired far more efficiently over large areas, still needs to be utilized to obtain an analysis extent large enough to map the whole fire.

1.1.4. Thirty Meter Burn Severity Mapping Analytics Trained with Hyperspatial Classification

In evaluating the effects of spatial resolution on burn severity and extent mapping accuracy, Hamilton [19] established the following methodology, which enabled the use of hyperspatial burn severity classes of unburned vegetation, black ash, and white ash, as classified using a Support Vector Machine (SVM) [22], for training machine learning algorithms to map burn severity from medium resolution (30 m) imagery. This previous effort evaluated only the effect of spatial resolution on mapping accuracy, using 30 m imagery resampled from the hyperspatial imagery acquired with the sUAS, thereby removing other variables from consideration which could affect accuracy, such as sensor radiometric resolution, atmospheric influence, and temporal resolution. Machine learning classifiers mapped burn severity and extent from 30 m imagery separately, and were trained using hyperspatial burn severity and extent classifications using the following methods:
  • Hyperspatial orthomosaics (5 cm) were resampled to a medium resolution of 30 m, equivalent to the spatial resolution of Landsat imagery, with the spatial reference for the 30 m imagery being calculated from the spatial reference of the 5 cm orthomosaic.
  • Labeling 30 m training pixels using fuzzy logic by:
    • Calculating 5 cm burn severity class pixel density within each 30 m pixel, where density is the percentage of 5 cm pixels for a specific class that are found within the containing 30 m pixel (see the density sketch following this list).
    • Applying fuzzification, where the post-fire class density is used to establish where fuzzy set membership transitions from 0 to 1 over a range of values. Burn extent transitioned from unburned to burned between 35 and 65 percent of 5 cm pixels being classified as burned, as shown in Figure 3. Burn severity transitioned between low and high biomass consumption between 33% and 50% [25] of burned pixels being classified as white ash, as shown in Figure 4.
    • Activating fuzzy rules by applying fuzzy logic. When evaluating a series of fuzzy IF statements, as shown in Figure 5, the data is defuzzified by selecting for activation the action associated with the IF expression that has the highest value, where a fuzzy AND is evaluated as taking the minimum value of either of the associated operands.
  • Each 30 m pixel was labeled with a training class, with the SVM training on 70 percent of the 30 m training pixels. The remaining labeled 30 m pixels were withheld for validation of the SVM.
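The density calculation in the first labeling step can be sketched as a block aggregation from the 5 cm classification to the 30 m grid. The class codes and the use of NumPy are assumptions for illustration; the published tooling may differ.

```python
import numpy as np

UNBURNED, BLACK_ASH, WHITE_ASH = 0, 1, 2  # hypothetical class codes
BLOCK = 600  # 30 m / 5 cm = 600 hyperspatial pixels per 30 m pixel side

def class_density(classified_5cm: np.ndarray, cls: int) -> np.ndarray:
    """Fraction of 5 cm pixels of class `cls` within each 30 m pixel.

    Assumes the 5 cm raster is aligned and padded so its dimensions are
    exact multiples of BLOCK (see Section 2.1.1).
    """
    h, w = classified_5cm.shape
    blocks = classified_5cm.reshape(h // BLOCK, BLOCK, w // BLOCK, BLOCK)
    return (blocks == cls).mean(axis=(1, 3))
```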

2. Materials and Methods

In order to accomplish the goal of this effort, a method was developed which mapped and analyzed wildland fire extent using satellite imagery. This was done by converting high-resolution hyperspatial training data to lower resolution training data using fuzzy logic. The resulting satellite resolution training data enabled the burn extent to be determined and analyzed from satellite imagery using an SVM. Although this experiment used Landsat 8 imagery in particular, the method is not bound to any specific earth-observing satellite and can be applied to any satellite imagery provided it is supplied in the correct format. The fires used in this research were from the same set of fires used in previous studies [19,22], as mentioned in Section 1.1.2.
Hamilton [19] mapped wildland fire burn extent from hyperspatial data with an accuracy of up to 98% on 5 cm pixels. For this experiment, hyperspatial training data were assumed to be an adequate reference when determining the accuracy of the SVM on the 30 m imagery. This assumption enabled the experiment to resample hyperspatial training data to satellite resolution training data using fuzzy logic. This experiment also investigated the effect different spectral bands have on burn extent and severity mapping accuracy.

2.1. Assembling Spatial Extent Burn Indicator

One objective of this experiment was to provide an efficient and consistent method of running the analysis so that the results could be reliably compared. Thus, an application was assembled to evaluate the data in a consistent manner. The application is divided into four parts. The first part adjusts the geoposition alignment of the hyperspatial and satellite imagery. The second part utilizes an SVM to create hyperspatial training data from hyperspatial imagery. The third part converts the hyperspatial training data into satellite resolution training data using fuzzy logic. The final part runs the SVM on the satellite scene using the newly created training data.

2.1.1. Coregistering Hyperspatial and Medium Resolution Images

Before resampling the training image acquired with an sUAS from hyperspatial resolution to a medium satellite resolution, such as that acquired by the Landsat program, preprocessing is required to align the smaller hyperspatial pixels with the larger medium resolution pixels. The first alignment step reprojects the imagery so the hyperspatial image and the medium resolution image have the same spatial reference. In Figure 6, the projections for the medium resolution image (orange) and the hyperspatial sUAS image (blue) have different spatial references, making the pixel boundaries misaligned between the two images.
Reprojection to the same spatial reference results in pixel boundaries that are aligned between the images as shown in Figure 7.
Once the pixel boundaries are aligned between the hyperspatial and medium resolution images, the medium resolution image is cropped so it only contains pixels that overlap the hyperspatial image. At that point, the extent of the hyperspatial image does not correspond to pixel boundaries, as shown by the white space between the blue pixels and the boundary of the overlapping orange pixel in Figure 6. The extent of the hyperspatial image needs to be expanded out to the edge of the clipped medium resolution image. Additional rows and columns of null pixels are added around the hyperspatial image using OpenCV, expanding the extent of the hyperspatial image until it corresponds to the outer pixel boundaries of the overlapping medium resolution pixels, which are shown in green in Figure 8.
Once the additional rows and columns of null pixels are added to the hyperspatial image, the spatial reference of the hyperspatial image is adjusted to account for the addition of the new null pixels around the perimeter of the orthomosaic, as shown in Figure 9.
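The padding step can be sketched with OpenCV as follows. The offsets from the image edges to the enclosing 30 m grid are assumed to have already been computed from the two images' geotransforms; the function name and nodata value are illustrative, not the published implementation.

```python
import cv2
import numpy as np

def pad_to_medium_grid(img: np.ndarray, offsets_m, pixel_size_m: float = 0.05,
                       nodata: int = 0) -> np.ndarray:
    """Pad a reprojected hyperspatial image with null pixels so its extent
    snaps outward to the enclosing 30 m pixel boundaries.

    offsets_m: (top, bottom, left, right) distances, in metres, from the
    image edges to the corresponding 30 m grid lines.
    """
    top, bottom, left, right = (int(round(o / pixel_size_m)) for o in offsets_m)
    # copyMakeBorder adds the requested rows/columns of constant (null) pixels.
    return cv2.copyMakeBorder(img, top, bottom, left, right,
                              borderType=cv2.BORDER_CONSTANT, value=nodata)
```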

2.1.2. Creating Hyperspatial Training Data

Once all the spatial imagery was aligned, the hyperspatial burn extent and severity were mapped separately using a support vector machine. The training data used for the hyperspatial imagery came from a data set that had previously been processed while mapping burned areas over multiple years, with accuracy ranging up to 98% [19,21]. After burn severity and extent were mapped, the data were denoised using image morphology, reducing sub-object sized noise [26]. An example of this noise reduction on the burn extent is shown in Figure 10. The denoised output was then resampled to satellite resolution burn extent data and used as training data, as described in the next section. A similar process was applied to the severity classification, and the images were combined using the logic described in Section 1.1.4.
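A minimal sketch of the morphological denoising, assuming a binary burned/unburned mask and an illustrative kernel size (the published tool's parameters are not specified here):

```python
import cv2
import numpy as np

def denoise_extent(burned_mask: np.ndarray, kernel_px: int = 5) -> np.ndarray:
    """Remove sub-object sized noise from a binary burn extent map [26].

    Opening deletes isolated burned speckles; closing fills small unburned
    holes (e.g., individual tree canopies) inside the burned area.
    """
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (kernel_px, kernel_px))
    opened = cv2.morphologyEx(burned_mask.astype(np.uint8), cv2.MORPH_OPEN, kernel)
    return cv2.morphologyEx(opened, cv2.MORPH_CLOSE, kernel)
```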

2.1.3. Resampling Training Data to 30 m Using Fuzzy Logic

After generating the hyperspatial burn severity output, fuzzy control was used to label overlapping medium resolution pixels to be used for training an SVM to map burns from satellite imagery. Fuzzy logic allows decisions to be based on imprecise boundaries rather than relying on precise boundaries that are used by Boolean logic. This use of vagueness allows the expression of how much the data fits given criteria, transitioning from one class to another over a range of values. Fuzzy logic is often more applicable to ecological data than the crisp delineations resulting from Boolean logic, where data will transition from one class to another at a single threshold value.
Fuzzy set theory allows the specification of how well an object satisfies a vague criterion [27], with fuzzy logic providing a means for specifying that the transition from one class to another is not demarked at a single value but occurs over a range of values [28]. For example, Hamilton [19] defined the transition from low to high biomass consumption as occurring between 33 and 50 percent of burned pixels being classified as white ash, as shown in Figure 4. Rather than set membership being expressed as either zero or one, as is the case with Boolean logic, fuzzy logic allows set membership to be specified as a range from 0.0 to 1.0. For example, using these thresholds, a plot with 40 percent white ash cover from the biomass consumption fuzzy sets shown in Figure 4 would have 0.41 membership in the high biomass consumption set and 0.59 membership in the low biomass consumption set, as shown in Figure 11.
Assignment of the post-fire burn extent and severity classes to a 30 m pixel is based on a combination of whether the pixel burned and the white ash cover within the pixel. In neither case is Boolean logic appropriate for determining set membership. Increasing the density of a class by a handful of 5 cm pixels and thereby flipping a Boolean expression from false to true does not adequately describe the state of the pixel. As mentioned in the background, the first set membership to be evaluated was between the burned and unburned sets, with the burned set consisting of the combination of pixels classified as either black ash or white ash. The fuzzy logic transition for this criterion from 5 cm pixels to 30 m ran from 35% to 65% [19], as previously shown in Figure 3.
An additional set membership was used that measured biomass consumption, which evaluated the relationship between the density of black ash and white ash. The white ash cover was expressed as a percentage of white ash to burned (white and black ash) pixels, transitioning between low and high biomass consumption from 33% to 50% [19,23], as previously shown in Figure 4.
Fire biomass consumption only needed to be evaluated within the burn extent, and thus any white ash detection outside the burned boundaries was disregarded. This filter was applied through fuzzy set membership: a fuzzy AND is expressed as taking the minimum value of the expressions on either side of the AND operator, and a fuzzy OR as taking the maximum value of the expressions on either side of the OR operator. When evaluating a series of fuzzy IF statements, as previously shown in Figure 5, the data are defuzzified by selecting for activation the action associated with the IF expression with the highest value.
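The membership and rule-activation logic can be sketched as follows; the linear ramp shape and the function names are assumptions consistent with Figures 3–5, not the published code.

```python
def ramp(x: float, lo: float, hi: float) -> float:
    """Linear fuzzy membership rising from 0 at `lo` to 1 at `hi`."""
    if x <= lo:
        return 0.0
    if x >= hi:
        return 1.0
    return (x - lo) / (hi - lo)

def label_30m_pixel(burned_density: float, white_ash_fraction: float) -> str:
    """Defuzzify one 30 m pixel from its 5 cm class densities.

    burned_density: fraction of 5 cm pixels classified as burned.
    white_ash_fraction: white ash as a fraction of burned (ash) pixels.
    """
    burned = ramp(burned_density, 0.35, 0.65)    # Figure 3 thresholds
    high = ramp(white_ash_fraction, 0.33, 0.50)  # Figure 4 thresholds
    # Fuzzy AND = min; activate the rule with the highest membership.
    rules = {
        "unburned": 1.0 - burned,
        "black ash": min(burned, 1.0 - high),  # burned, low biomass consumption
        "white ash": min(burned, high),        # burned, high biomass consumption
    }
    return max(rules, key=rules.get)

# Worked example matching Figure 11: a fully burned pixel with 40 percent white
# ash cover has 0.41 high / 0.59 low membership, so it is labeled black ash.
assert label_30m_pixel(1.0, 0.40) == "black ash"
```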
After the fuzzy logic was applied, the data were converted into 30 m training data by taking the most dominant set as determined by the fuzzy logic control, and each pixel was labeled white ash, black ash, or unburned accordingly. Since the snapping software had aligned all the images, the newly created training data came out aligned with the associated satellite scene, ready to train an SVM on Landsat imagery using labels derived from the hyperspatial burn extent and severity classifications as described above. After the training data were created, one third were set aside as validation data to test the classifier's accuracy, as sketched below.
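A minimal sketch of the training/validation split and SVM fit using scikit-learn; the feature arrays, kernel choice, and random seed are illustrative stand-ins rather than the published configuration.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# Hypothetical stand-ins: X holds per-band values for each labeled 30 m pixel,
# y holds the fuzzy-logic labels (0 = unburned, 1 = black ash, 2 = white ash).
rng = np.random.default_rng(0)
X = rng.random((300, 3))           # e.g., three spectral bands per pixel
y = rng.integers(0, 3, size=300)

# Withhold one third of the labeled pixels for validation, as in the text.
X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=1/3, random_state=0)
svm = SVC().fit(X_train, y_train)  # kernel choice (default RBF) is an assumption
print("validation accuracy:", svm.score(X_val, y_val))
```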

2.1.4. Running the SVM on the Satellite Scene

After the satellite resolution training data were created, the satellite scene and the newly created training data were used as the input to the SVM. Before running the SVM, however, the training data were clipped so that the ratio between burned and unburned pixels was normalized to between 45% and 55%, for consistency across fires. This balancing was necessary because most of the selected fires had an imbalanced ratio in which unburned pixels numerically dominated burned pixels. A simple algorithm repeatedly stripped rows and columns of pixels from each side of the image until the target ratio was met. This was sufficient to address the imbalance because the pixels near the image edges were predominantly unburned in all the fires used. After preprocessing was complete, the SVM produced a TIF image of the fire's burn extent and severity at 30 m resolution.
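A sketch of this balancing step, under the assumption that the label raster is held in memory as a NumPy array; the stopping band and the one-pixel strip per iteration follow the description above.

```python
import numpy as np

def balance_training_extent(labels: np.ndarray, burned_value: int = 1,
                            lo: float = 0.45, hi: float = 0.55) -> np.ndarray:
    """Strip rows/columns from the image edges until burned pixels make up
    45-55% of the labeled pixels."""
    view = labels
    while view.size > 0:
        ratio = (view == burned_value).mean()
        if lo <= ratio <= hi:
            break
        # Edges were dominated by unburned pixels in these fires, so trimming
        # one row/column from every side raises the burned fraction.
        view = view[1:-1, 1:-1]
    return view
```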

2.1.5. Collecting Results

Once the SVM finished classifying, an output image of the entire satellite scene was produced. A simple testing program was developed to analyze the SVM's accuracy when mapping burn extent and severity on a satellite scene. Once the data were normalized, all the pixels were compared with the training data, and Equation (2) was applied to determine the accuracy of the SVM's burn extent.
$$\mathrm{ExtentAccuracy} = \frac{|\mathrm{TruePixels}|}{|\mathrm{TotalPixels}|} \tag{2}$$
However, to determine burn severity accuracy, only the white ash and black ash pixels were evaluated. Thus, all unburned pixels were omitted, and the accuracy calculation considered only white and black ash pixels, as shown in Equation (3).
$$\mathrm{SeverityAccuracy} = \frac{|\mathrm{TrueBlackAsh}| + |\mathrm{TrueWhiteAsh}|}{|\mathrm{TotalBurnedPixels}|} \tag{3}$$
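Both metrics reduce to simple array comparisons; a minimal sketch, assuming integer class rasters with the hypothetical codes used earlier:

```python
import numpy as np

UNBURNED, BLACK_ASH, WHITE_ASH = 0, 1, 2  # hypothetical class codes

def extent_accuracy(pred: np.ndarray, truth: np.ndarray) -> float:
    """Equation (2): fraction of pixels whose burned/unburned state is correct."""
    return float(((pred != UNBURNED) == (truth != UNBURNED)).mean())

def severity_accuracy(pred: np.ndarray, truth: np.ndarray) -> float:
    """Equation (3): agreement on white vs. black ash over burned pixels only."""
    burned = truth != UNBURNED
    return float((pred[burned] == truth[burned]).mean())
```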

3. Results

A set of fires previously used by Hamilton [19,22] was selected for analysis. The estimated accuracy of mapping burn extent and severity from color satellite imagery (comprising red, green, and blue spectral bands), compared with the high-resolution color imagery results reported by Hamilton [19], is shown in Table 1.
This experiment also investigated how the accuracy would change if the color satellite imagery were replaced with a set of infrared (IR) bands. The IR imagery comprised the following Landsat bands: shortwave infrared 1 (1.560–1.660 µm), shortwave infrared 2 (2.100–2.300 µm), and near infrared (0.845–0.885 µm) [29]. The results of this experiment are shown in Table 2.

4. Discussion

This experiment’s results did not converge to a single accuracy; instead, the accuracy varied across fires. This was expected, as the fires used in the experiment varied in severity, included both forest and grass fires, experienced different atmospheric conditions, and varied in canopy cover. For example, the Elephant fire had a thin layer of cloud haze above it, which skewed the color balance higher and decreased the SVM's accuracy in detecting darker burned pixels. Although each image presented one or more of these challenges, the problems were addressed through various methods during development, though not all could be resolved completely. The canopy cover challenge was addressed by applying the denoise tool, as previously shown in Figure 10, which allows an unburned tree canopy or other small unburned object surrounded by burned pixels to be marked as noise and replaced with burned pixels. Another challenge, relating to atmospheric conditions, arose because each satellite image covering the same spatial area had slightly different solar lighting, atmospheric density, and moisture. This color imbalance was handled by utilizing different hyperspatial training imagery for every fire in the experiment, with each set of training data created in a consistent way following Hamilton [22]. Ideally, the experiment should not need separate training data for each fire; without it, however, the experiment's overall accuracy plummeted due to color imbalance from atmospheric interference. Creating separate training data was therefore necessary because each satellite image has its own unique state. Future work could create a single set of training data that is cross-compatible with all fires, accounting for the ecological, atmospheric, and temporal setting of each fire.
As shown in Table 1, the experiment results for the 30 m burn extent show that mapping burn extent from a satellite image using hyperspatial training data achieved a mean accuracy of approximately 71%. The accuracy of the 30 m burn extent was expected to be lower than the accuracy of the 5 cm burn extent [19]. Several factors could impair the accuracy. One is that the satellite imagery contained considerable atmospheric interference, which, as previously discussed, caused mismatches with the comparatively unblemished sUAS training data. Another is that most of these fires were considered small fires and included many edge cases, as shown in Figure 12. These edge cases severely decrease the accuracy for smaller fires; accuracy is therefore expected to increase when mapping larger wildland fires. To increase the overall accuracy of this experiment, future edge case analysis could significantly improve the accuracy of mapping wildland fires with satellite imagery.
Comparing the IR experiment to the RGB experiment, a decrease in accuracy is observed, which is contrary to what was expected [9]. The decrease is, for the most part, slight, and further analysis showed that most of the error falls on the SVM edge cases similar to those shown in Figure 12. One possible remedy is to use IR imagery for both the hyperspatial training data and the satellite imagery, which should theoretically decrease the error; this is left for future work. Since the error was concentrated in the parts of the burned area where edge cases predominate, the SVM would theoretically achieve better accuracy on a larger fire, where the ratio of edge pixels to non-edge pixels is decreased.

5. Conclusions

This experiment successfully applied sUAS hyperspatial data as training data to map burn extent from satellite imagery. As shown in Table 2, the experiment went further than using color imagery alone. It also applied IR imagery, which has previously been shown to improve fire mapping, so results were expected to improve [9]. However, as previously discussed, the IR results proved contrary to expectations and previous work [9]. It was therefore assumed that the issue could be addressed by using IR hyperspatial training data with the IR satellite imagery, as previously discussed.
One of the research goals was to determine the change in accuracy of burn extent mapping between hyperspatial imagery and satellite imagery. The tool created and tested in this experiment quantified that change from 5 cm to 30 m resolution. Future work could take this research further by characterizing how decreasing resolution decreases the accuracy of fire extent and severity mapping. This knowledge could enable future researchers to quickly and accurately determine which resolution best fits their area of research, given their resources and budget constraints.
Source code (Supplementary Materials) developed as well as data used in this effort are available for download at https://firemap.nnu.edu/satellite-burn-mapping.

Implications for Local Management

The development of methods, analytic tools, and metrics that increase fire effects mapping accuracy will improve fire management. These improvements affect post-fire recovery planning and other management operations, which are driven by the extent of the fire and how much biomass was consumed by the fire. Development of post-fire recovery plans includes outlining the restoration activities that determine management response to the fire, facilitating ecosystem recovery and resiliency [7]. In order to reduce risks to affected resources, managers will also need increased capacity to determine potential effects on neighboring resources such as hydrologic features, infrastructure, and wildlife habitat.
Existing data about vegetation within the perimeter of a fire are rendered obsolete by the disturbance. Increasingly accurate burn extent and biomass consumption mapping will improve efforts to update geospatial vegetation layers, such as existing vegetation type, cover, and height, to accurately reflect fuel data following a wildland fire [7].
Improved mapping will also result in more complete fire history data, facilitating the inclusion of the spatial extent and biomass consumption of small fires which are commonly omitted from fire history data [19]. The inclusion of knowledge about small fires will reduce the omission of the most ecologically diverse areas burned within a landscape [30]. Additionally, calculations will be improved for the departure of current fire frequency from historical fire frequency, a key metric for determining ecosystem resilience [10].

6. Future Work

This research was unique in that its initial intent was to pave a path for future research. As discussed previously, each satellite scene has its own unique solar lighting, atmospheric density, and moisture effects. This variance across fires causes a shift in the pixel values for each of the classes used in this experiment. For example, a burned pixel in a satellite image from early June may have a SWIR value of 1000, while in late August the SWIR value of a burned pixel may be 700. This could be caused by the solar position during the time of year, the lack of moisture towards the end of the season, and variable atmospheric composition. To solve this problem, one could calibrate each satellite image using ground objects known to remain consistent throughout the year, such as pavement or lakes. Taking those base values to calibrate each satellite image could fix the problem and avoid the need for different training data for each fire. However, due to complexity, a consistent set of hyperspatial data was used to create the satellite training data instead, as described in Section 2.1.2.
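One hedged sketch of this calibration idea: fit a per-band linear correction from pixels over pseudo-invariant targets (pavement, deep lakes) that appear in both scenes, then normalize the target scene to the reference scene. The function and mask are illustrative; this approach is a proposal, not part of the published workflow.

```python
import numpy as np

def invariant_calibration(ref_band: np.ndarray, tgt_band: np.ndarray,
                          invariant_mask: np.ndarray) -> np.ndarray:
    """Fit ref ≈ a * tgt + b over invariant pixels, then apply to the whole band."""
    x = tgt_band[invariant_mask].astype(np.float64)
    y = ref_band[invariant_mask].astype(np.float64)
    a, b = np.polyfit(x, y, 1)  # least-squares linear fit over invariant pixels
    return a * tgt_band + b
```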
Another challenge was that the color images showed better accuracy than the IR images, which, as mentioned previously, is contrary to previous work [9]. Most of the inaccuracy occurred at the edge cases shown in Figure 12, so further edge case analysis could significantly improve the experiment's accuracy. Due to data and time constraints, this was left for future work.

Supplementary Materials

The following are available online from the authors at https://firemap.nnu.edu/satellite-burn-mapping: source code and data.

Author Contributions

Conceptualization, D.H.; methodology, D.H. and E.L.; software, D.H., E.L., and N.H.; validation, E.L.; formal analysis, D.H. and E.L.; investigation, D.H. and E.L.; data curation, D.H., N.H., and E.L.; writing, D.H., E.L., and N.H.; visualization, D.H., E.L., and N.H.; supervision, D.H.; project administration, D.H.; funding acquisition, D.H. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the USDA Forest Service Boise National Forest, Forest Service Agreement No. 18-CS-11040200-0025.

Acknowledgments

We would like to acknowledge the students in the Northwest Nazarene University Department of Math and Computer Science who have helped with different aspects of this effort, including current and former students Nicholas Hamilton, Jonathan Hamilton, Gabriel Johnson, Aleesha Chavez and Andrew Welk. Additionally, we would like to acknowledge Northwest Nazarene University who funded the research efforts of many of the students mentioned.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Wildland Fire Leadership Council. Available online: https://www.forestsandrangelands.gov/documents/strategy/strategy/CSPhaseIIINationalStrategyApr2014.pdf (accessed on 30 July 2020).
  2. Hoover, K.; Hanson, L.A. Wildfire Statistics. Congressional Research Service. October 2019. Available online: https://crsreports.congress.gov/product/pdf/IF/IF10244 (accessed on 14 May 2020).
  3. National Interagency Fire Center (NIFC). Suppression Costs. 2020. Available online: https://www.nifc.gov/fireInfo/fireInfo_documents/SuppCosts.pdf (accessed on 18 May 2020).
  4. National Interagency Fire Center (NIFC). Wildland Fire Fatalities by Year. 2020. Available online: https://www.nifc.gov/safety/safety_documents/Fatalities-by-Year.pdf (accessed on 18 May 2020).
  5. Zhou, G.; Li, C.; Cheng, P. Unmanned aerial vehicle (UAV) real-time video registration for forest fire monitoring. In Proceedings of the 2005 IEEE International Geoscience and Remote Sensing Symposium, Seoul, Korea, 29 July 2005.
  6. Insurance Information Institute. Facts + Statistics: Wildfires. Available online: https://www.iii.org/fact-statistic/facts-statistics-wildfires (accessed on 18 May 2020).
  7. Eidenshink, J.C.; Schwind, B.; Brewer, K.; Zhu, Z.-L.; Quayle, B.; Howard, S.M. A project for monitoring trends in burn severity. Fire Ecol. 2007, 3, 3–21.
  8. Keeley, J.E. Fire intensity, fire severity and burn severity: A brief review and suggested usage. Int. J. Wildland Fire 2009, 18, 116–126.
  9. Key, C.H.; Benson, N.C. Landscape Assessment (LA). In FIREMON: Fire Effects Monitoring and Inventory System; Gen. Tech. Rep. RMRS-GTR-164-CD; U.S. Department of Agriculture, Forest Service, Rocky Mountain Research Station: Fort Collins, CO, USA, 2006; Volume 164, p. 155.
  10. Hamilton, D.; Bowerman, M.; Colwell, J.; Donahoe, G.; Myers, B. A Spectroscopic Analysis for Mapping Wildland Fire Effects from Remotely Sensed Imagery. J. Unmanned Veh. Syst. 2017.
  11. Lentile, L.B.; Holden, Z.A.; Smith, A.M.; Falkowski, M.J.; Hudak, A.T.; Morgan, P.; Benson, N.C. Remote sensing techniques to assess active fire characteristics and post-fire effects. Int. J. Wildland Fire 2006, 15.
  12. Morgan, P.; Hardy, C.; Swetnam, T.; Rollins, M.; Long, D. Mapping fire regimes across time and space: Understanding coarse and fine-scale fire patterns. Int. J. Wildland Fire 2001, 10, 329–342.
  13. Kolden, C.A.; Weisberg, P.J. Assessing Accuracy of Manually-mapped Wildfire Perimeters in Topographically Dissected Areas. Fire Ecol. 2007, 3, 22–31.
  14. Escuin, S.; Navarro, R.; Fernandez, P. Fire severity assessment by using NBR (Normalized Burn Ratio) and NDVI (Normalized Difference Vegetation Index) derived from LANDSAT TM/ETM images. Int. J. Remote Sens. 2008, 29, 1053–1073.
  15. USDA Forest Service Geospatial Technology and Applications Center (GTAC). Monitoring Trends in Burn Severity. 2020. Available online: https://www.mtbs.gov/ (accessed on 20 May 2020).
  16. Sparks, A.M.; Boschetti, L.; Smith, A.M.; Tinkham, W.T.; Lannom, K.O.; Newingham, B.A. An accuracy assessment of the MTBS burned area product for shrub–steppe fires in the northern Great Basin, United States. Int. J. Wildland Fire 2015, 24, 70–78.
  17. Federal Aviation Administration (FAA). Unmanned Aircraft Systems Frequently Asked Questions. 2020. Available online: https://www.faa.gov/uas/resources/faqs/ (accessed on 20 May 2020).
  18. Lebourgeois, V.; Bégué, A.; Labbé, S.; Mallavan, B.; Prévot, L.; Roux, B. Can commercial digital cameras be used as multispectral sensors? A crop monitoring test. Sensors 2008, 8, 7300–7322.
  19. Hamilton, D.; Hamilton, N.; Myers, B. Evaluation of Image Spatial Resolution for Machine Learning Mapping of Wildland Fire Effects. In Proceedings of the SAI Intelligent Systems Conference, London, UK, 5–6 September 2019; pp. 400–415.
  20. Rango, A.; Laliberte, A.; Herrick, J.E.; Winters, C.; Havstad, K.; Steele, C.; Browning, D. Unmanned aerial vehicle-based remote sensing for rangeland assessment, monitoring, and management. J. Appl. Remote Sens. 2009, 3, 033542.
  21. Laliberte, A.S.; Herrick, J.E.; Rango, A.; Winters, C. Acquisition, orthorectification, and object-based classification of unmanned aerial vehicle (UAV) imagery for rangeland monitoring. Photogramm. Eng. Remote Sens. 2010, 76, 661–672.
  22. Hamilton, D. Improving Mapping Accuracy of Wildland Fire Effects from Hyperspatial Imagery Using Machine Learning; The University of Idaho: Moscow, ID, USA, 2018.
  23. Goodwin, J.; Hamilton, D. Archaeological Imagery Acquisition and Mapping Analytics Development; Boise National Forest: Boise, ID, USA, 2019.
  24. National Wildfire Coordinating Group (NWCG). Size Class of Fire. 2020. Available online: www.nwcg.gov/term/glossary/size-class-of-fire (accessed on 20 May 2020).
  25. Lewis, S.; Robichaud, P.; Frazier, B.; Wu, J.; Laes, D. Using hyperspectral imagery to predict post-wildfire soil water repellency. Geomorphology 2008, 95, 192–205.
  26. Gonzalez, R.; Woods, R. Digital Image Processing; Pearson Prentice Hall: Upper Saddle River, NJ, USA, 2008.
  27. Russell, S.; Norvig, P. Artificial Intelligence: A Modern Approach, 3rd ed.; Prentice Hall: Upper Saddle River, NJ, USA, 2010.
  28. Han, J.; Kamber, M.; Pei, J. Data Mining: Concepts and Techniques, 3rd ed.; Morgan Kaufmann: Boston, MA, USA, 2012.
  29. National Aeronautics and Space Administration (NASA). Landsat 8 Bands. 21 July 2020. Available online: https://landsat.gsfc.nasa.gov/landsat-8/landsat-8-bands/ (accessed on 22 July 2020).
  30. Hamilton, D.; Hann, W. Mapping landscape fire frequency for fire regime condition class. In Proceedings of the Large Fire Conference, Missoula, MT, USA, 19–23 May 2014.
Figure 1. Geographic distribution of wildland fires over which post-suppression image acquisition flights were conducted in southwestern Idaho.
Figure 2. (a) Image of a rangeland study area acquired with a Phantom 4 sUAS flying at 120 m AGL with a spatial resolution of 6 cm per pixel. (b) Same scene resampled to 30 m resolution with six rows and eight columns of pixels [22].
Figure 3. Burn extent fuzzy set showing fuzzy set membership of 30 m pixel transition between Unburned and Burned from 35 to 65 percent of 5 cm pixels being classified as burned.
Figure 4. Biomass Consumption fuzzy set showing fuzzy set membership of 30 m pixel transition between Low and High Biomass Consumption from 33 to 50 percent of 5 cm burned pixels being classified as white ash.
Figure 5. Fuzzy logic algorithm for labeling 30 m pixels from 5 cm burn severity classes.
Figure 6. Medium resolution satellite image pixels (orange) and hyperspatial sUAS image pixels (blue) prior to reprojection.
Figure 7. Example of projection before the border is added.
Figure 8. Example of projection after the border is added.
Figure 9. (a) sUAS orthomosaic rendered on top of clipped Landsat image; (b) sUAS orthomosaic with a buffer of null pixels giving it the same extent as the clipped Landsat image. Red polylines represent roads and trails. Blue polylines represent creeks. Black hatched polyline represents a historic railgrade.
Figure 10. Comparison of the hyperspatial classification with its denoised output. White pixels are burned and black pixels are unburned. (a) Image before the denoise tool is applied; (b) image after the denoise tool is applied. These images are a post-processing view of the bitmap output data, with each pixel containing one class value, Burned or Unburned.
Figure 11. Biomass Consumption Fuzzy Set Membership. With 40 percent White Ash Cover denoted by the dotted line, there is 0.41 membership in High Biomass Consumption and 0.59 membership in Low Biomass Consumption.
Figure 12. Error analysis showed that most false positives and false negatives, indicated in red, are predominately located at the edge of the burned area.
Table 1. Experiment Results Compared with the 5 cm Results from [19,22].

Fire      | 5 cm Extent | 30 m Extent Results | 5 cm Severity | 30 m Severity Results
Elephant  | 98.5        | 58.52               | 98.67         | 95.30
MM106     | 86.58       | 71.5                | 98.68         | 100
Hoodoo    | 99.29       | 73.92               | 95.22         | 41.50
Immigrant | 98.34       | 77.67               | 97.97         | 100
Jack      | 94.74       | 57.09               | 96.97         | 19.06
Owyhee    | 87.65       | 85.40               | 87.42         | 100
Mean      | 93.32       | 73.116              | 95.252        | 72.112
Table 2. Color Experiment vs. IR Experiment.

Fire      | 30 m Color Extent | 30 m IR Extent | 30 m Color Severity | 30 m IR Severity
Elephant  | 58.52             | 55.95          | 95.30               | 97.83
MM106     | 71.5              | 69.1           | 100                 | 100
Hoodoo    | 73.92             | 71.75          | 41.50               | 36.48
Immigrant | 77.67             | 78.19          | 100                 | 100
Jack      | 57.09             | 56.68          | 19.06               | 17.40
Owyhee    | 85.40             | 56.61          | 100                 | N/A 1
Mean      | 70.68             | 64.71          | 75.98               | 70.34
1 The fire was too small for the experiment to detect any white ash.

Citation: Hamilton, D.; Levandovsky, E.; Hamilton, N. Mapping Burn Extent of Large Wildland Fires from Satellite Imagery Using Machine Learning Trained from Localized Hyperspatial Imagery. Remote Sens. 2020, 12, 4097. https://doi.org/10.3390/rs12244097

