Article
Peer-Review Record

FIRED (Fire Events Delineation): An Open, Flexible Algorithm and Database of US Fire Events Derived from the MODIS Burned Area Product (2001–2019)

Remote Sens. 2020, 12(21), 3498; https://doi.org/10.3390/rs12213498
by Jennifer K. Balch 1,2,*, Lise A. St. Denis 1, Adam L. Mahood 1,2, Nathan P. Mietkiewicz 3, Travis M. Williams 1,4, Joe McGlinchy 1 and Maxwell C. Cook 1,2
Reviewer 1: Anonymous
Reviewer 2: Anonymous
Reviewer 3: Anonymous
Reviewer 4: Anonymous
Reviewer 5: Anonymous
Submission received: 27 August 2020 / Revised: 10 October 2020 / Accepted: 14 October 2020 / Published: 24 October 2020
(This article belongs to the Special Issue Earth Observations for Ecosystem Resilience)

Round 1

Reviewer 1 Report

The paper presents the FIRED algorithm and database, which identifies fire events using the MODIS MCD64 burned area product. There are several advantages to the FIRED approach, which is open source and flexible enough to adapt to various needs. Importantly for a proposed global database, it does not rely on assumptions about fire behavior that are known to fall apart in the tropics. The revised paper has greatly clarified the limitations on using the MTBS dataset for comparison to the FIRED results. However, this means that there is no useful validation of the FIRED algorithm, and the evidence actually presented does not adequately support the paper conclusions. In particular, many differences between the two datasets are attributed to the ability of FIRED to detect much smaller fires; but there is no attempt to limit the analysis to large fires, where more agreement might reasonably be expected, or even to determine whether small fires are especially common in areas where there is large disagreement. More analysis would be necessary to make the paper’s final results persuasive.

Author Response

The paper presents the FIRED algorithm and database, which identifies fire events using the MODIS MCD64 burned area product. There are several advantages to the FIRED approach, which is open source and flexible enough to adapt to various needs. Importantly for a proposed global database, it does not rely on assumptions about fire behavior that are known to fall apart in the tropics. The revised paper has greatly clarified the limitations on using the MTBS dataset for comparison to the FIRED results.  However, this means that there is no useful validation of the FIRED algorithm, and the evidence actually presented does not adequately support the paper conclusions. In particular, many differences between the two datasets are attributed to the ability of FIRED to detect much smaller fires; but there is no attempt to limit the analysis to large fires, where more agreement might reasonably be expected, or even to determine whether small fires are especially common in areas where there is large disagreement. More analysis would be necessary to make the paper’s final results persuasive.    

 

Thank you for the additional review, but there is an incorrect statement here. Reviewer #1 states that “there is no attempt to limit the analysis to large fires, where more agreement might reasonably be expected.” We do, in fact, limit the comparison with MTBS to just compare the large fires. We report in the paper repeatedly how we accounted for the MTBS size threshold throughout the optimization workflow and in the comparative analysis. In section d of the methods it is explained how the optimization only compared FIRED events that overlapped with MTBS events. Small fires that are not accounted for by MTBS did not influence our optimization, please see these locations in the text: 

 

LINES 210-212: “For each combination we matched the FIRED events that were >404 ha in the west and >202 ha in the eastern US to the associated MTBS wildfire perimeter.”

 

LINES 214-217: “For each unique fire polygon in the MTBS database, we extracted the ID numbers for each FIRED event overlapping the MTBS polygon. Then, for each unique FIRED event, we extracted each MTBS ID that overlapped.”

 

In addition, in section f of the methods, we explain that we limit our comparison of the two products to only those events that were captured by both products (which, by definition, limits the analysis to large fires only):

 

LINES 252-257: “Thus, comparing the area burned by the two products represents a trade-off between imperfect satellite detection from MODIS and imperfect burned area reporting in the perimeters that drive selection by the MTBS product. With those caveats in mind, we co-located those events captured by both products (i.e. they overlapped in space and time), and compared estimated area burned at the event level using two approaches.”

 

The confusion matrix also accounted for the MTBS size thresholds:

 

On LINES 285-288 and in Table 5 we show that we excluded events below the MTBS thresholds from our calculation of commission and omission errors. We have added further clarification in the table caption: “Table 5. Confusion matrix for comparison between FIRED events and MTBS events, where both products were above the MTBS-determined size threshold of 404 ha in the western US and 202 ha in the eastern US. The last column provides the number of small events that were delineated by FIRED below these size thresholds.”

 

In addition, this workflow is openly available and documents the process of optimizing against MTBS large fires only: https://www.github.com/admahood/fired_optimization (this link is also given in the paper).
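To make the matching logic described above concrete, here is a minimal sketch (not the authors' actual workflow, which operates on real fire polygons): it applies the regional MTBS size thresholds from the paper and then collects, for each MTBS perimeter, the IDs of overlapping FIRED events. Bounding boxes stand in for polygon geometries, and all event records and field names are hypothetical.

```python
# Minimal sketch of the FIRED-to-MTBS matching described above.
# Real events carry polygons; simple bounding boxes (xmin, ymin, xmax, ymax)
# stand in for them here. All records and field names are hypothetical.

WEST_THRESH_HA = 404  # MTBS size threshold, western US
EAST_THRESH_HA = 202  # MTBS size threshold, eastern US

def above_mtbs_threshold(event):
    """Apply the regional MTBS size threshold to a FIRED event."""
    thresh = WEST_THRESH_HA if event["region"] == "west" else EAST_THRESH_HA
    return event["area_ha"] > thresh

def boxes_overlap(a, b):
    """Axis-aligned bounding-box overlap, standing in for polygon intersection."""
    return not (a[2] <= b[0] or b[2] <= a[0] or a[3] <= b[1] or b[3] <= a[1])

def match_fired_to_mtbs(fired_events, mtbs_events):
    """For each MTBS perimeter, list IDs of overlapping, large-enough FIRED events."""
    return {
        m["id"]: [f["id"] for f in fired_events
                  if above_mtbs_threshold(f) and boxes_overlap(f["bbox"], m["bbox"])]
        for m in mtbs_events
    }
```

Under this sketch, a 100 ha western FIRED event would be dropped before matching even if it overlapped an MTBS perimeter, which is the behavior the response describes.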

Reviewer 2 Report

 The authors have addressed all comments and have made significant revisions. Congrats to the authors on a fine manuscript. 

Author Response

The authors have addressed all comments and have made significant revisions. Congrats to the authors on a fine manuscript.

 

Thank you!

Reviewer 3 Report

I am satisfied my previous comments and concerns have been adequately addressed in this revised version.  I have also reviewed other changes and additions to the ms and feel these have improved the ms.

Author Response

I am satisfied my previous comments and concerns have been adequately addressed in this revised version.  I have also reviewed other changes and additions to the ms and feel these have improved the ms.

 

Thank you!

Reviewer 4 Report

After reviewing the revised version of the paper, I can confirm that the manuscript has been significantly improved. The authors considered all my comments and put a lot of work into the manuscript again. I think the manuscript can be recommended for publication in Remote Sensing in its present form.

But I would like to check and clarify a couple of questions in the text one more time.

L22: The authors state a 2001–2019 time interval for the data. Could this be checked once more across the text? Different ranges are noted in different places, such as 2001–2018 (L159, and 174) and 2001–2016 (L293, and 299).

Table 2: What does “Total” mean, given the percent values in the Mean Reburn and Std Reburn columns? Is it an averaged value? Please clarify.

L243, Table 3, Table 4: I still haven't been able to figure out the differences for “level 1–3 ecoregions”. There is a reference to the data now, but I would like to see brief explanations for level 1–3 ecoregions here in the text, moreover, the list of only level 1 ecoregions is given in Table 7.

Figure 1: What is shown in the figure titled Fire Frequency? Is this an indicator per area or per time? Or is it only the total number of registered fires? Please, clarify.

L390 and L462: It seems to me that explanations of the ICS-209s and ICS-209-PLUS datasets are required.

L491: “BAECV product or VIIRS” – I believe the reference is required.

Author Response

After reviewing the revised version of the paper, I can confirm that the manuscript has been significantly improved. The authors considered all my comments and put a lot of work into the manuscript again. I think the manuscript can be recommended for publication in Remote Sensing in its present form.

But I would like to check and clarify a couple of questions in the text one more time.

L22: The authors state a 2001–2019 time interval for the data. Could this be checked once more across the text? Different ranges are noted in different places, such as 2001–2018 (L159, and 174) and 2001–2016 (L293, and 299).

Thank you for your careful attention to detail. When the timeframe goes until 2016, this indicates that we were comparing to MTBS which, at the time of the analysis, was only available up to 2016. For the instances where the end date is 2018, we were calculating reburn percentage in the midst of developing the algorithm, and were using the data as it was available at the time. We ran the firedpy algorithm for the publication of the data at the very end of the process of developing and optimizing the algorithm, when more data was available (and we will continue to update it!). Adding another year to the reburn analysis is unnecessary, computationally costly and time-consuming. Accordingly, we understand the confusion, but the dates are all correct. 

Table 2: What does “Total” mean, given the percent values in the Mean Reburn and Std Reburn columns? Is it an averaged value? Please clarify.

We changed “Total” to “Average per tile”

L243, Table 3, Table 4: I still haven't been able to figure out the differences for “level 1–3 ecoregions”. There is a reference to the data now, but I would like to see brief explanations for level 1–3 ecoregions here in the text, moreover, the list of only level 1 ecoregions is given in Table 7. 

We added the following text to lines 235-237: “Ecoregions are areas where soil, climate, vegetation and other properties of ecosystems are generally similar. The Commission for Environmental Cooperation has a nested product, with 3 levels of progressively finer-grained ecoregion delineations.” For Table 7 we only have level 1 ecoregions in this table because we aggregated the data for only the level 1 ecoregions for that particular analysis.

Figure 1: What is shown in the figure titled Fire Frequency? Is this an indicator per area or per time? Or is it only the total number of registered fires? Please, clarify. 

We updated the figure legend to indicate that fire frequency is the number of fire events. 

L390 and L462: It seems to me that explanations of the ICS-209s and ICS-209-PLUS datasets are required.

We decided to delete the reference to the ICS-209s dataset at this location in the text (i.e., in the vicinity of line 390) as we felt it would interrupt the narrative flow of the paragraph. However, we still mention the potential to integrate this dataset with FIRED later in the text (lines 462-464): “we anticipate that a newly published ICS-209-PLUS dataset that is an integrated database of over 120,000 incident command reports for the U.S. from 1999-2014 could be connected...”. 

L491: “BAECV product or VIIRS” – I believe the reference is required. 

References have been added.

Reviewer 5 Report

General comments:

 

The manuscript entitled “FIRED (Fire Events Delineation): An Open, 2 Flexible Algorithm & Database of US Fire Events Dderived from the MODIS Burned Area Product 4 (2001-19)”, by Balch et al., develops an algorithm that automatically delineates MODIS C6 burned area products into individual fire events across the CONUS and assesses the FIRED product by comparing it with Landsat-based MTBS burned area data. The study is interesting and the products could be useful in regions around the world where fire data similar to MTBS are not available. The manuscript is well structured and the results look reasonable, but several points need to be clarified and some details in the methods and discussion can be added to make the paper stronger. Moreover, I am not sure if this version is the original submitted version, as it is full of track changes. I recommend publication of this manuscript after a moderate revision addressing my specific comments below.

 

Specific comments:

L3: In Title, remove “D” in “Dderived”

L55-56: The term “fire behavior” is used throughout the manuscript in the context of the scientific question and discussions, but fire-behavior-related terms like “fire spread rate” here, in the context of remote sensing, seem to differ from the same concepts in landscape fire behavior. It would be good to clearly define the terms to avoid possible confusion.

L70: “fire spread” is not a property of a fire regime, which is usually characterized over a long time period using fire frequency, intensity, size, pattern, season, and severity. Please consider re-phrasing it.

L71-72: As above, please clearly define fire spread rate here to avoid confusion with the same concept in landscape fire behavior.

L130: A flow chart would help to better understand the FIRED algorithm.

L178: What is “CUS”?

L191: I do not understand the phrase “at least one fire detection occurred”.

L202: Please define the term “FIRED” in main context although a full name is given in title.

L207-208: I am not sure this is a problematic feature of the MTBS. A fire complex in MTBS could consist of several spatially adjacent but not connected patches, which seems to match the naming pattern of the inciweb database (https://inciweb.nwcg.gov/).

L250: Table 3 - what is the difference between “fire spread rate” and “growth rate”?

L287: what are these two equations? Please describe it and clarify the abbreviations in the equations.

L292: What is the spatial resolutions of the maps in Fig1?

L315: A map of ecoregions could help to better understand spatial distributions of BA in different ecoregions.

L315: Great Plains ecoregion in table 6: the total BA estimates of MTBS and FIRED are very close but the number of fire events differ substantially. This suggests that the way of defining a fire event matters in the total number of the delineated fire events. During Spring in Kansas and Oklahoma States, lots of agricultural fires occur every year and those fires are usually spatially adjacent. Could add some discussions on it.

L353: There is a lack of discussion of the omission and commission errors and their possible reasons. Please consider adding it.

L375: This is not true. The MODIS BA product is not able to detect an area as small as 4 square meters; read Giglio et al., 2018 in RSE. The authors may have confused it with the MODIS active fire detection product, as indicated by the citation [32].

L377: Again, the citation [32] Giglio et al., 2016 talks about MODIS active fire product rather than BA product (see Giglio et al., 2018 in RSE, as the citation [27]).

L382: Because of the uncertainties in burn date of the MODIS BA product (Giglio et al., 2018), please add discussions of the uncertainties in date related variables listed in tables 3-4 in FIRED product.

L489-481: The application potential of the FIRED algorithm is over-emphasized, especially for the active fire product and others, as no such analysis has been conducted in this manuscript.

Author Response

The manuscript entitled “FIRED (Fire Events Delineation): An Open, 2 Flexible Algorithm & Database of US Fire Events Dderived from the MODIS Burned Area Product 4 (2001-19)”, by Balch et al., develops an algorithm that automatically delineates MODIS C6 burned area products into individual fire events across the CONUS and assesses the FIRED product by comparing it with Landsat-based MTBS burned area data. The study is interesting and the products could be useful in regions around the world where fire data similar to MTBS are not available. The manuscript is well structured and the results look reasonable, but several points need to be clarified and some details in the methods and discussion can be added to make the paper stronger. Moreover, I am not sure if this version is the original submitted version, as it is full of track changes. I recommend publication of this manuscript after a moderate revision addressing my specific comments below.

 Specific comments:

L3: In Title, remove “D” in “Dderived”  

Done. Thank you.

L55-56: The term “fire behavior” is used throughout the manuscript in the context of the scientific question and discussions, but fire-behavior-related terms like “fire spread rate” here, in the context of remote sensing, seem to differ from the same concepts in landscape fire behavior. It would be good to clearly define the terms to avoid possible confusion.

We changed the two occurrences of “fire behavior” to “fire activity” to avoid that confusion.

L70: “fire spread” is not a property of a fire regime, which is usually characterized over a long time period using fire frequency, intensity, size, pattern, season, and severity. Please consider re-phrasing it.

We would argue that the temporal characteristics of a fire regime, including the spread rate, are key properties to describe. White & Jentsch (2001), in their exhaustive review, include abruptness and duration as key components of disturbance regimes. One of the reasons that spread rate is not usually included in large-scale studies of fire regimes is that, until now, these data have been very difficult to obtain across thousands of events. This is a central advance of the FIRED product.

White, P. S., & Jentsch, A. (2001). The Search for Generality in Studies of Disturbance and Ecosystem Dynamics. Progress in Botany, 62, 399–450. https://doi.org/10.1007/978-3-642-56849-7_17

L71-72: As above, please clearly define fire spread rate here to avoid confusion with the same concept in landscape fire behavior. 

Line 64 now reads “daily fire spread rate” instead of just “fire spread.”

L130: A flow chart would help to better understand the FIRED algorithm. 

We appreciate the suggestion but feel that the processing and code are sufficiently well-documented.

L178: What is “CUS”? 

This referred to the Conterminous United States. We changed it to “conterminous United States.” 

L191: I do not understand the phrase “at least one fire detection occurred”.

Now reads as “For each cell of the three-dimensional array where at least one fire detection occurred.”

L202: Please define the term “FIRED” in main context although a full name is given in title. 

We defined the acronym in the abstract as well as at first mention in the text. (lines 20, 195)

L207-208: I am not sure this is a problematic feature of the MTBS. A fire complex in MTBS could consist of several spatially adjacent but not connected patches, which seems to match the naming pattern of the inciweb database (https://inciweb.nwcg.gov/).

We agree that it is not objectively problematic, but it was for this particular analysis. That’s why the wording is: “One problematic feature of the MTBS data for this comparison…”

L250: Table 3 - what is the difference between “fire spread rate” and “growth rate”?

There is no difference in units. To be consistent, we changed “growth” to “spread.”

L287: What are these two equations? Please describe them and clarify the abbreviations in the equations.

We replaced OE and CE with omission error and commission error, respectively. 
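For readers unfamiliar with these terms, the standard confusion-matrix definitions can be sketched as below. The exact form of the paper's equations is not reproduced in this record, so treat this as the conventional formulation rather than the authors' own.

```python
# Conventional omission/commission error definitions for a two-product
# event comparison (here: MTBS as reference, FIRED as the product under test).

def omission_error(true_pos, false_neg):
    """Fraction of reference (MTBS) events missed by the product under test."""
    return false_neg / (true_pos + false_neg)

def commission_error(true_pos, false_pos):
    """Fraction of detected (FIRED) events with no matching reference event."""
    return false_pos / (true_pos + false_pos)
```

For example, 90 matched events with 10 reference events missed gives an omission error of 0.1, and 60 matched events with 40 unmatched detections gives a commission error of 0.4.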

L292: What is the spatial resolutions of the maps in Fig1? 

Thanks for that catch! Figure 1 caption now reads: “A comparison of the spatial distribution of fire events from the FIRED and MODIS products from 2001-2016 aggregated to 50 km2 pixels shows a similar distribution of fire events and burned area in general, but the FIRED algorithm picks up many more events and burned area in the midwest, southeastern US and eastern Washington.”

L315: A map of ecoregions could help to better understand spatial distributions of BA in different ecoregions. 

We added a map of ecoregions as a supplemental figure.

L315: Great Plains ecoregion in table 6: the total BA estimates of MTBS and FIRED are very close but the number of fire events differ substantially. This suggests that the way of defining a fire event matters in the total number of the delineated fire events. During Spring in Kansas and Oklahoma States, lots of agricultural fires occur every year and those fires are usually spatially adjacent. Could add some discussions on it. 

We agree. We do already have a fair amount of discussion in section b of the results (lines 300-310) about how the lack of inclusion of small fires in MTBS leads to a large increase in events with a much smaller increase in total burned area for FIRED. 

L353: Lack of discussions on the omission and commission errors and possible reasons. Please consider adding it.

Please see sections b. and c of the results where we describe the differences between FIRED and MTBS and we offer some reasons now for the differences. 

L375: This is not true. MODIS BA product is not able to detect an area as small as 4 square meters, read Giglio et al., 2018 in RSE. Authors may have confused it with MODIS active fire detection product, as indicated by the citation [32]. 

Thank you for this catch. We changed the wording on lines 370-375 to specifically refer to the ability of the MODIS active fire product to detect very small fires. 

L377: Again, the citation [32] Giglio et al., 2016 talks about MODIS active fire product rather than BA product (see Giglio et al., 2018 in RSE, as the citation [27]). 

Please see above. We have added clarification.

L382: Because of the uncertainties in burn date of the MODIS BA product (Giglio et al., 2018), please add discussions of the uncertainties in date related variables listed in tables 3-4 in FIRED product. 

On line 385, we added “Daily polygons should be used carefully, as there is uncertainty associated with the burn dates estimated by the MCD64 collection 6 product (Giglio 2018). They (Giglio 2018) found that 44% burned grid cells were detected on the same day of an active fire, and 68% within 2 days.”

L489-481: The application potential of the FIRED algorithm is over-emphasized, especially for the active fire product and others, as no such analysis has been conducted in this manuscript.

We have altered the language slightly to emphasize that the FIRED algorithm could be adapted for other types of fire or hazard data with spatio-temporal characteristics. With the open python package, firedpy, there is an opportunity for community development and we hope that others will contribute to this effort.

 

Round 2

Reviewer 1 Report

My previous concerns have been addressed.

This manuscript is a resubmission of an earlier submission. The following is a list of the peer review reports and author responses from that submission.


Round 1

Reviewer 1 Report

In the manuscript the authors describe an algorithm that has been used to derive fire events from the MODIS MCD64 burned area product for the coterminous US from January 2001 to May 2019. This topic is well known, since MODIS data and MODIS products have been used for about 20 years, and in some publications you can find various techniques for processing these data and using them for fire monitoring and geospatial analysis of fire regimes. Thus, the topic is not new. However, the authors proposed another method of data processing, which was validated for practical use.

As it became clear to me from the manuscript, there are several gaps in different parts of the article and in the presentation of the results.

They are listed below and I believe that the article requires revision taking into account these comments before it can be recommended for publication in the Remote Sensing Journal.

 

In the Abstract

L23–26: “Events, fundamentally a geographic concept with delineated boundaries around a specific phenomena that is homogenous in some property, are key to understanding fire regimes and more importantly how they are changing.” These issues are not discussed in the article. Why is this given in an abstract?

L29–30: “thresholds to cluster burned area pixels into events were an 11-day window and a 5-pixel distance”. Further, the text gives no description of the criterion by which these thresholds were selected.

In the Introduction

L76–78: “we fundamentally need landscape-scale event delineation to integrate across products and build greater understanding of how fire regimes vary at regional and global scales [30].” Again, see my comment above. The authors do not consider fire regimes, and the relationship of fires with landscapes is insufficiently justified. See notes below.

L81–84: “Some studies have clustered the MODIS active fire hotspots (MODIS MOD12) to derive events in Europe and northern Africa to understand what drives large fires [31] and Indonesian tropical rainforests [32].” This sentence is not clear.

Line 82: Probably MOD14? MOD12 is land cover product.

L118: “MTBS” Is this the first mention in the text? Consider giving an explanation of the abbreviation.

In the Materials and Method section

L152: Should 2001–2019 be here?

L141, L161, Table 2: Please explain in more detail why the analysis of reburned pixels is important. As shown in the table, they make up less than 1%. What is the percentage of the area of these fires in the general statistics? What is the criterion for "relatively high reburn occurrence"?

L173–175: “The algorithm takes as input a spatial variable, representing the number of pixels, and a temporal variable, representing the number of days, within which to group burn detections.” This question is one of the main ones in this article. Please describe in more detail the procedure for selecting these input parameters and the algorithm for their selection.

Line 202: maybe 97th meridian?

L215–216: “We ran the fire event classifier for all spatiotemporal combinations between 1-15 days and 1-15 pixels”. L231–232: “we estimated an optimal combination for the US of 5 pixels and 11 days.” Clarification of the results is required. Based on which comparison are thresholds selected?

L242–244: “We also extracted the mode of the International Geosphere-Biosphere Programme land cover classification.” Please provide more clarification on the land cover data and ecoregions. I think you also need to give a reference here and in table 3.

Tables 3 and 4: How important are the tables in this form in the text?

Table 3: What are Level 1–3 for the ecoregion? Please, clarify.

Table 3: “Maximum, minimum, and mean growth rate” Per what time? Are the units of measurement correct?

Table 4: Is it the same date for “Ignition Date” and “Last Burn Date” in the table of the attributes of daily-level product? Please, clarify.

L260: “Data is available at CU Scholar [at time of publication will have a DOI].” Please, clarify.

In the Results section

Lines 271–272: Two formulas have the same denominators but their values differ. Please, clarify.

L286–287, Table 6: “The FIRED burned area represents 97% of the NIFC reported totals from 2001-2016 (Table 6)”. The NIFC data were not discussed in the Methods section. Please provide details and the reference.

Table 6, Table 7: What was the source for the ecoregion data? What is the scale of the data? Please provide the reference.

Figure 2 is not cited in the text.

L297–298: “we binned the data into 50 equal size classes”. This procedure has not been described in the Methods section. Please, clarify.

In the Discussion section

L341–344: The same was written in the Introduction?

L359: “intra-year reburns”. I did not see statistics on such cases. What is the percentage of these fires? How important is it to take them into account?

L360–364: As far as I understand, the proposed method does not solve the problem of small fires? Please, explain in more detail.

L401: “11-day window and a 5-pixel distance”. No comparisons of results with other variants of these input parameters are given in the text.

L420: “This is a unique moment in the history of fire science” Please explain why this is so.

In the Conclusions

L453–454: “without events we cannot explore how the spatio-temporal properties of fire regimes are changing”. The authors could cite papers devoted to this topic, although fire regimes were not the subject of the manuscript and were not discussed in it.

L464–467: “Moreover, this algorithm can be used with any spatiotemporal data and is not constrained to fire data. As other efforts are built to understand natural hazards, these efforts may help to better delineate the spatial and temporal dimensions of floods, hurricanes, disease outbreaks, and other events.” This is not stated in the article. Probably, you need to clarify what is meant.

Author Response

Thank you to the editor and reviewers for their valuable comments and suggestions. Our major revisions include: i) connecting the FIRED product to helping better understand changing fire regimes and changing resilience of ecosystems; ii) clarifying potential future directions in the application of this flexible algorithm, including other fire parameters such as FRP, other fire remote sensing products such as active fire products, and potentially even remote sensing observations of other hazards that carry space/time information; and iii) expanding our discussion of the space-time optimization, which could be improved by including smaller fire perimeters to optimize against, and extended beyond the U.S. if other known fire perimeters can be utilized. We have responded to all editor and reviewer comments below (in bold).

 

Reviewer 1:

 

In the manuscript the authors describe an algorithm that has been used to derive fire events from the MODIS MCD64 burned area product for the coterminous US from January 2001 to May 2019. This topic is well known, since MODIS data and MODIS products have been used for about 20 years and in some publications you can find various techniques for processing data and using them for fire monitoring and geospatial analysis of fire regimes. Thus, the topic is not new. However, the authors proposed another method of data processing, which was validated for practical use.

 

As it became clear to me from the manuscript, there are several gaps in different parts of the article and in the presentation of the results.

 

They are listed below and I believe that the article requires revision taking into account these comments before it can be recommended for publication in the Remote Sensing Journal.

 

In the Abstract

 

L23–26: “Events, fundamentally a geographic concept with delineated boundaries around a specific phenomena that is homogenous in some property, are key to understanding fire regimes and more importantly how they are changing.” These issues are not discussed in the article. Why is this given in an abstract? Fire regimes are defined in the first paragraph of the introduction: “...fire regimes, or the spatial and temporal characteristics of fires in a strict sense”. 

 

We added a section to the introduction (lines 74-81) discussing fire regimes and how this product specifically provides information on fire spread rate which is one of the fundamental elements of a disturbance regime that is unavailable in other products. 

 

L29–30: “thresholds to cluster burned area pixels into events were an 11-day window and a 5-pixel distance”. Further, the text gives no description of the criterion by which these thresholds were selected.

 

This is now better described in the methods section, part d (line 218-252). Sensitivity analysis was conducted to identify the optimal spatiotemporal thresholds for delineating fire events.
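To illustrate the kind of clustering the thresholds control (this is a simplified sketch, not the firedpy package's actual implementation), burn detections given as (row, col, day) triples can be grouped by a flood fill that joins any two detections within the spatial and temporal thresholds. The use of Chebyshev pixel distance here is an assumption made for the sketch.

```python
from collections import deque

def cluster_events(detections, space=5, time=11):
    """Group burn detections into events.

    detections: list of (row, col, day) triples. Two detections join the
    same event if they are within `space` pixels (Chebyshev distance, an
    assumption of this sketch) and `time` days of each other; membership
    then propagates transitively, as in a flood fill.
    Returns one event label per detection, in input order.
    """
    detections = list(detections)
    labels = [None] * len(detections)
    next_label = 0
    for i in range(len(detections)):
        if labels[i] is not None:
            continue
        labels[i] = next_label
        queue = deque([i])
        while queue:
            j = queue.popleft()
            rj, cj, dj = detections[j]
            for k in range(len(detections)):
                if labels[k] is None:
                    rk, ck, dk = detections[k]
                    if (max(abs(rj - rk), abs(cj - ck)) <= space
                            and abs(dj - dk) <= time):
                        labels[k] = next_label
                        queue.append(k)
        next_label += 1
    return labels
```

Running the optimization the paper describes would then amount to sweeping `space` over 1-15 pixels and `time` over 1-15 days and scoring each combination against the MTBS perimeters.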

 

In the Introduction

 

L76–78: “we fundamentally need landscape-scale event delineation to integrate across products and build greater understanding of how fire regimes vary at regional and global scales [30].” Again to my comment above. The authors do not consider fire regimes, and the relationship of fires with landscapes is insufficiently justified. See notes below.

 

We added a section (lines 74-81) talking about fire regimes and how this product specifically provides information on fire spread rate, which is unavailable in most other products and one of the fundamental elements of a disturbance regime. 

 

L81–84: “Some studies have clustered the MODIS active fire hotspots (MODIS MOD12) to derive events in Europe and northern Africa to understand what drives large fires [31] and Indonesian tropical rainforests [32].” This sentence is not clear. 

 

To be more clear, we changed the wording to (lines 84-86): “Some studies have clustered the MODIS active fire hotspots (MODIS MOD14) to delineate events in Europe and northern Africa [29] and Indonesian tropical rainforests [26,27] to understand what drives large fires.”

 

Line 82: Probably MOD14? MOD12 is land cover product. 

 

Good catch! Thanks! We changed it to MOD14.

 

L118: “MTBS” Is this the first mention in the text? Consider giving an explanation of the abbreviation. 

 

Thank you. This is a good suggestion, and has been done (lines 130). 

 

In the Materials and Method section

 

L152: Should 2001–2019 be here?

Accepted suggestion. Thanks!

 

L141, L161, Table 2: Please explain in more detail why the analysis of reburned pixels is important. As shown in the table, they make up less than 1%. What is the percentage of the area of these fires in the general statistics? What is the criterion for "relatively high reburn occurrence"? 

 

On lines 164-175 we improved our explanation: “Prior efforts have justified ignoring intra-year or intra-season reburns based on an occurrence of around 1% [34,35]. However, we found that when we examined the study area tile by tile, some areas experienced rates of intra-year reburns much greater than 1%. To investigate whether reburned pixels would have a confounding effect on our data, we examined the occurrence of pixels that burned multiple times per year for each of the tiles overlapping CONUS for each year. We converted each monthly tile in CONUS to binary (1 for burned, 0 for unburned), summed each monthly pixel per year and calculated the percentage of pixels that burned more than once per year, per tile. For 2001 - 2018 for all of CONUS except the tile that contains Florida, there were a total of 12,676 pixels that burned more than once in a given year, or about 0.48% of pixels. The tile that includes Florida (h10v06) had a rate of 5% (sd 2.3%) of pixels that burned multiple times per year (Table 2). We suspect that this high reburn occurrence is due to the year-round growing season combined with year-round occurrence of lightning strikes and human ignition pressure. Intra-year reburns would present a problem if this algorithm were expanded globally, because there are many ecosystems, especially in the tropics, with year-round growing seasons combined with year-round anthropogenic ignition sources.”
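The per-tile reburn check described in this response can be sketched in a few lines (an illustration only, not the firedpy code; the denominator is assumed to be pixels that burned at least once in the year):

```python
import numpy as np

def intra_year_reburn_pct(monthly_burn_dates):
    """Percent of burned pixels in a tile that burned more than once in a year.

    monthly_burn_dates: (12, ny, nx) array holding the MCD64 burn-date layer
    for each month of one year of one tile (0 or negative = unburned/unmapped).
    """
    burned = monthly_burn_dates > 0          # binarize each monthly layer
    times_burned = burned.sum(axis=0)        # number of months each pixel burned
    ever = int((times_burned >= 1).sum())    # pixels that burned at least once
    reburned = int((times_burned > 1).sum()) # pixels that burned more than once
    return 100.0 * reburned / ever if ever else 0.0
```

Applying this to each tile-year and averaging would give the kind of per-tile percentages reported in Table 2.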

 

The intention of Table 2 was to show that all the tiles except h10v06 (the tile containing Florida) had less than 1%, while h10v06 had 5%. Previous studies have justified ignoring intra-year reburns based on an occurrence of around 1%. We also put the row of the table for the Florida tile in boldface so that readers can see that this tile has a much higher occurrence of intra-year reburns.

 

L173–175: “The algorithm takes as input a spatial variable, representing the number of pixels, and a temporal variable, representing the number of days, within which to group burn detections.” This question is one of the main ones in this article. Please describe in more detail the procedure for selecting these input parameters and the algorithm for their selection.

 

We describe in Methods section d how we conducted a sensitivity analysis to determine the optimal parameters for the coterminous US: 5 pixels (2315 m) and 11 days. These parameters are set for this version of the dataset, but they can be changed if a user decides to adapt our approach via the python package, “firedpy.”
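The space-time grouping idea can be illustrated with a minimal sketch (not the firedpy implementation; the use of Chebyshev pixel distance and pairwise union-find here is an assumption for illustration):

```python
def cluster_events(detections, sp=5, tp=11):
    """Group burn detections into events.

    detections: list of (row, col, day) burned-pixel records. Two detections
    belong to the same event if they are within `sp` pixels and `tp` days of
    each other, directly or through a chain of other detections (union-find).
    """
    parent = list(range(len(detections)))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path compression
            i = parent[i]
        return i

    def union(i, j):
        parent[find(i)] = find(j)

    for i, (r1, c1, d1) in enumerate(detections):
        for j in range(i + 1, len(detections)):
            r2, c2, d2 = detections[j]
            # Chebyshev distance in pixels, absolute difference in days
            if max(abs(r1 - r2), abs(c1 - c2)) <= sp and abs(d1 - d2) <= tp:
                union(i, j)

    events = {}
    for i, det in enumerate(detections):
        events.setdefault(find(i), []).append(det)
    return list(events.values())
```

With the default thresholds of 5 pixels and 11 days, detections separated by more than either threshold (and not bridged by intermediate detections) end up in distinct events.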

 

Line 202: maybe 97th meridian?

Accepted suggestion. Thanks!

 

L215–216: “We ran the fire event classifier for all spatiotemporal combinations between 1-15 days and 1-15 pixels”. L231–232: “we estimated an optimal combination for the US of 5 pixels and 11 days.” Clarification of the results is required. Based on which comparison are thresholds selected? 

 

We now detail this sensitivity and selection process, see lines 240-248: “An accuracy assessment was conducted for each spatiotemporal combination of the MODIS-based events, based on how well the MODIS events matched the MTBS events. For each unique fire polygon in the MTBS database, we extracted the ID numbers for each MODIS event overlapping the MTBS polygon. Then, for each unique MODIS event, we extracted each MTBS ID that overlapped. We then calculated the ratio of the number of unique MTBS events that contained a MODIS event divided by the number of unique MODIS events that contained at least one MTBS event, with the optimum value being one. We used this ratio to approximate the spatio-temporal combination that minimized both over- and under-segmentation of the MODIS fire events based on known MTBS fire perimeters. Based on this ratio, we estimated an optimal combination for the US of 5 pixels (2315 m) and 11 days. ”
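The over/under-segmentation ratio described in this response can be sketched as follows (an illustration; `overlaps`, a precomputed list of co-located event ID pairs, is a hypothetical input, not part of firedpy):

```python
def segmentation_ratio(overlaps):
    """overlaps: list of (mtbs_id, modis_id) pairs for events that
    overlap in space and time.

    Returns (# unique MTBS events containing a MODIS event) /
            (# unique MODIS events containing at least one MTBS event).
    A value of 1 suggests neither product splits the other's events.
    """
    mtbs_with_modis = {m for m, _ in overlaps}
    modis_with_mtbs = {f for _, f in overlaps}
    return len(mtbs_with_modis) / len(modis_with_mtbs)
```

Evaluating this ratio over all 225 spatiotemporal combinations and picking the one closest to 1 is the selection logic the response describes.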

 

L242–244: “We also extracted the mode of the International Geosphere-Biosphere Programme land cover classification.” Please provide more clarification on the land cover data and ecoregions. I think you also need to give a reference here and in table 3.

Accepted suggestion. Thank you.

 

Tables 3 and 4: How important are the tables in this form in the text? 

 

We think they are worth including. One of the points we are trying to make is that this product contains information that other fire products do not have. Providing a table of the attributes of the data reinforces that point and provides potential users of the product with a handy glimpse of the metadata. 

 

Table 3: What are Level 1–3 for the ecoregion? Please, clarify. 

 

We added a citation for the Commission for Environmental Cooperation’s ecoregions that we used.

 

Table 3: “Maximum, minimum, and mean growth rate” Per what time? Are the units of measurement correct? 

 

We added clarification to the table on the time unit (i.e., per day). And yes, the units are correct. 

 

Table 4: Is it the same date for “Ignition Date” and “Last Burn Date” in the table of the attributes of daily-level product? Please, clarify. 

 

We added clarification to the table.

 

L260: “Data is available at CU Scholar [at time of publication will have a DOI].” Please, clarify. 

We added the DOI.

 

In the Results section

 

Lines 271–272: Two formulas have the same denominators but their values differ. Please, clarify.

Fixed it! Thanks.

 

L286–287, Table 6: “The FIRED burned area represents 97% of the NIFC reported totals from 2001-2016 (Table 6)”. The data of the NIFC was not discussed in the Methods section. Please provide details and the reference.

 

Source added to table caption, additional text discussing the comparison to NIFC in Results section b, and methods section f.

 

Table 6, Table 7: What was the source for the Ecoregion data? What is the scale of the data? Please provide the reference.

 

We added a reference for the ecoregions we used in the text. They are provided as polygons.

 

Figure 2 is not cited in the text.

 

We added some discussion of Figure 2 in the Results and Methods sections.

 

L297–298: “we binned the data into 50 equal size classes”. This procedure has not been described in the Methods section. Please, clarify.

 

We added some discussion of Figure 2 in the Results and Methods sections.

 

In the Discussion section

 

L341–344: The same was written in the Introduction? 

 

Thank you for pointing that out. We left the sentence in the discussion as is, and changed the sentence that was in the introduction (lines 109-110).

 

L359: “intra-year reburns”. I did not see statistics on such cases. What is the percentage of these fires? How important is it to take them into account?

 

Table 1 has intra-year reburns. We added a reference to table 1 in that sentence.

 

L360–364: As far as I understand, the proposed method does not solve the problem of small fires? Please, explain in more detail. 

 

We added some clarification (lines XX-XX): “Second, the FIRED database delineates small fire events from MODIS MCD64, which are not included in MTBS. This expands our ability to understand how fire size and burned area are changing, beyond just the large events.”

 

L401: “11-day window and a 5-pixel distance”.  Any comparisons of results with other variants of these input parameters are not given in the text. 

 

We now describe in methods section d how we created 225 different combinations of spatial and temporal parameters and did a sensitivity analysis to estimate the optimal combination of those two parameters.

 

L420: “This is a unique moment in the history of fire science” Please explain why this is so.  

 

We added: “given the abundance of fire data across spatial scales, that requires the fire science community to better coordinate efforts on fire data harmonization challenges and opportunities.”

 

In the Conclusions

 

L453–454: “without events we cannot explore how the spatio-temporal properties of fire regimes are changing”. The authors could cite papers devoted to this topic. Although fire regimes were not the subject of the manuscript and were not discussed in it.

 

Thank you for this great suggestion. We added some text and citations on fire regimes, see lines 50-54: “Answering this question is fundamental to defining fire regimes, or the spatial and temporal characteristics of fire events in a strict sense [4–6], i.e., size, frequency, intensity, severity, seasonality, duration, and rate of spread. Remote sensing has increased our capacity to quantify some of these characteristics at large spatial scales, such as frequency, intensity, size, and severity [7–9]. However, there is even greater potential to inform our understanding of changing fire and resilience of ecosystems and society if we are able to delineate events in remote sensing fire products that preserve the temporal characteristics.”

 

L464–467: “Moreover, this algorithm can be used with any spatiotemporal data and is not constrained to fire data. As other efforts are built to understand natural hazards, these efforts may help to better delineate the spatial and temporal dimensions of floods, hurricanes, disease outbreaks, and other events.” This is not stated in the article. Probably, you need to clarify what is meant. 

 

We added text to the end of the introduction stating that this algorithm could be adapted to other types of phenomena (lines 134-138): “The algorithm is designed in a way that makes it adaptable to data source, regional context, and even event type: the spatiotemporal criteria can be altered, and it could be used with newer burned area products (e.g., Fire_cci based on MODIS images at 250 m resolution [36] or VIIRS [19]), or even different types of phenomena (e.g. bark beetle outbreaks, floods).” 



Author Response File: Author Response.pdf

Reviewer 2 Report

This paper outlines a new algorithm to delineate discrete (event-based) fires in the USA, called ‘FIRED’. The paper is clearly and succinctly written with a convincing case made for the need to accurately and efficiently delineate individual fire events from satellite imagery. A thorough review of previous algorithms to group fire pixels has been provided, together with a broad critique of the pros and cons of these approaches.

The new algorithm put forward by the authors has a number of features which means it is likely to be an improvement over previous ones, including: 1) accounting for multiple fires at same locality in one year (albeit a relatively rare event, although more likely in the tropics); 2) derivation and use of an optimal threshold combining time and space; and 3) creation of two fire products – fire events and dates within fire events (by pixel).  The fire attributes which can be derived from these two products are impressive and will be of considerable value to fire managers and researchers.  Testing of the new algorithm is done by comparisons with other satellite-derived fire datasets, namely the Landsat-derived MTBS and, at a finer level, the Global Fire Atlas.  The results demonstrate an improvement over these previous fire products. Another advantage is the flexibility of their approach; for instance, the spatial and temporal thresholds can be modified to suit the biome or region.  They are also advocating data and code sharing, and are leading by example on this front.

So, I am satisfied that this is an improved approach with better outputs than many other fire products (and hence the paper is worth publishing in my opinion). However, the degree of testing of their product provided in the ms is relatively slim as they compare it to two other products with known limitations.  Further and more rigorous testing is probably beyond the scope of the paper, but could be considered in future papers (focusing on specific fire events across a range of biomes and circumstances).

The authors flag (in the introduction) that fire complexes (large wildfires which develop over many days, and amalgamate into larger fires or split into smaller fires) are very difficult to delineate as discrete fire events. This issue is not really revisited in the discussion, yet this could be valuable, as perhaps FIRED may be superior in dealing with such fire complexes (e.g., by identifying these as discrete fire events but also providing pixel-level information on date of burn, which can help researchers describe the spatial pattern of burning, burning rates, etc. within the complex).

The authors claim in the discussion that FIRED successfully delineates small fire events. This is true, but there are still limitations imposed by the pixel size of MODIS (i.e., 250–500 m). Perhaps this could be clarified.

Some minor corrections:

Line 101:  ‘And further’ could just be ‘Further’

Fig. 3: It is not clear why the Fig3B legend is abridged (too many colors to show?)

Table 7 title doesn’t fully explain what is in the table.

Author Response

Thank you to the editor and reviewers for their valuable comments and suggestions. Our major revisions include: i) connecting the FIRED product to helping better understand changing fire regimes and changing resilience of ecosystems; ii) clarifying potential future directions in the application of this flexible algorithm, to include other fire parameters such as FRP, to apply to other fire remote sensing products, such as active fire products, and potentially even to other remote sensing observations of other hazards that have space/time information; iii) we have also expanded our discussion of space-time optimization that could be improved by including smaller fire perimeters to optimize against, and could be expanded if other known fire perimeters can be utilized beyond the U.S. We have responded to all editor and reviewer comments below (in bold).

 

Reviewer 2:

This paper outlines a new algorithm to delineate discrete (event-based) fires in the USA, called ‘FIRED’.  The paper is clearly and succinctly written with a convincing case made for the need to accurately and efficiently delineate individual fire events from satellite imagery. A thorough review of previous algorithms to group fire pixels has been provided, together with a broad critique of the pros and cons of these approaches.

 

The new algorithm put forward by the authors has a number of features which means it is likely to be an improvement over previous ones, including: 1) accounting for multiple fires at same locality in one year (albeit a relatively rare event, although more likely in the tropics); 2) derivation and use of an optimal threshold combining time and space; and 3) creation of two fire products – fire events and dates within fire events (by pixel).  The fire attributes which can be derived from these two products are impressive and will be of considerable value to fire managers and researchers.  Testing of the new algorithm is done by comparisons with other satellite-derived fire datasets, namely the Landsat-derived MTBS and, at a finer level, the Global Fire Atlas.  The results demonstrate an improvement over these previous fire products. Another advantage is the flexibility of their approach; for instance, the spatial and temporal thresholds can be modified to suit the biome or region.  They are also advocating data and code sharing, and are leading by example on this front.

 

So, I am satisfied that this is an improved approach with better outputs than many other fire products (and hence the paper is worth publishing in my opinion). However, the degree of testing of their product provided in the ms is relatively slim as they compare it to two other products with known limitations.  Further and more rigorous testing is probably beyond the scope of the paper, but could be considered in future papers (focusing on specific fire events across a range of biomes and circumstances). 

 

We added text in the discussion (lines 481-492) on the need for a more universally applicable optimization approach: “Future improvements could include: i) validation with smaller events, such as those contained in the US-based National Incident Feature Service dataset, formerly Geomac [50] or others; ii) estimates of uncertainty around start and end dates of the fire events; iii) regionally-varying thresholds based on fire regime characteristics; and iv) development of an optimization process that does not rely on already existing fire perimeter polygons.”

 

The authors flag (in the introduction) that fire complexes (large wildfires which develop over many days, and amalgamate into larger fires or split into smaller fires) are very difficult to delineate as discrete fire events. This issue is not really revisited in the discussion, yet this could be valuable as perhaps FIRED may be superior in dealing with such fire complexes (e.g., by identifying these as discrete fire events but also providing pixel-level information on date of burn which can help researchers describe the spatial pattern of burning, burning rates, etc. within the complex).

 

We added the following text to the discussion to address this great point (lines 428-433): “the daily-level product preserves the fine-scale heterogeneity of the larger events. This allows the user to see, for example, when large fire events are actually complexes of smaller independently ignited fire patches, or if the large event is truly the product of a single ignition (e.g. the Rim Fire in figure 4). This also allows for users to link daily-level burned area data within a defined event to daily or even sub-daily covariates (e.g. climate).”

 

The authors claim in the discussion that FIRED successfully delineates small fire events. This is true, but there are still limitations imposed by pixel size of MODIS (ie 250-500 m). Perhaps this could be clarified. 

 

We clarified what size fires MODIS theoretically captures and how that translates to incorporation of smaller events in the FIRED database (see lines 419-424): “Second, because the FIRED database is based on the MODIS MCD64 product, it includes fire events theoretically as small as 4m2, albeit these are rare detections (~90% omission error) [32]. Small fire events greater than 12.6 hectares are more likely the events that are captured in the MODIS MCD64 product (10% omission error) at the size of a MODIS pixel (~500 m) [32], and therefore in FIRED.”

 

Some minor corrections:

 

Line 101: ‘And further’ could just be ‘Further’.

Accepted suggestion. Thank you.

 

Fig. 3: It is not clear why the Fig3B legend is abridged (too many colors to show?)

Added clarification. Thank you.

 

Table 7 title doesn’t fully explain what is in the table.

Thanks! Added clarification to the title of Table 7.

 

Author Response File: Author Response.pdf

Reviewer 3 Report

In the paper titled “FIRED (Fire Events Delineation): An open, flexible algorithm & database of US fire events derived from the MODIS burned area product (2001-19)” the authors present an algorithm to cluster fire data in space and time. They compare their US events to established MTBS burned area perimeter delineations.

Major:

A clustering algorithm, or such an algorithm applied to fire data, is not new. If clustering adds value by providing new insights, such insights are not made apparent.

The article ignores active fire detections and fire radiative power. I understand that the title clearly says they limit to the MODIS burned area product, but including the active fire detections and FRP would perhaps be something novel for publication.

Limitations on the use of MTBS data for error estimation need to be discussed. It surely is not validation, as no ground truth is being used.

Minor

Listed below are some minor comments

Line 30: “5 pixel”? Suggest specifying a linear dimension.

Line 32: you mean FIRED events and MTBS size? (Does MTBS call individual fires events?)

Line 34: this statement, “or even the underlying algorithm as they see fit,” is redundant.

Line 38/39  This perhaps can be in  discussion “The open, flexible FIRED algorithm could be utilized to derive events in any  satellite product.”

Table 1: What is a spatio-temporal flooding? The table is obviously incomplete and perhaps cannot be exhaustive. The studies mentioned use spatially resampled, regridded data; departing from the native satellite resolution can bias the spread rates due to the PSF and bow-tie effect of the MODIS sensor. The following papers use the native resolutions (to the best of my knowledge) to ensure that such effects are minimal:

Loboda, T. V., & Csiszar, I. A. (2007). Reconstruction of fire spread within wildland fire events in Northern Eurasia from the MODIS active fire product. Global and Planetary Change, 56(3-4), 258-273.

Morton, D. C., Defries, R. S., Randerson, J. T., Giglio, L., Schroeder, W., & Van Der Werf, G. R. (2008). Agricultural intensification increases deforestation fire activity in Amazonia. Global Change Biology, 14(10), 2262-2275.

Boschetti, L., & Roy, D. P. (2009). Strategies for the fusion of satellite fire radiative power with burned area data for fire radiative energy derivation. Journal of Geophysical Research: Atmospheres, 114(D20).

Line 171: Suggest renaming the section title. The text in this section does not suggest why this is a fast algorithm, and all algorithms are flexible if the thresholds are changed.

Line 178: “netCDF file”: does this need to be mentioned? As it is only an intermediate file, it could simply be any raster stack?

Line 181: “To avoid unnecessary computation, we did not check cells in which there was no burned area assignment throughout the study period.” Line 175 clearly states it applies to burned pixels, and this sentence seems unnecessary.

Line 216: 1-15 pixels. Easier to read if the spatial dimension is stated, and maybe # pixels in ().

Line 217: Why are these numbers selected?

Line 232 Pixel dimensions instead of a pixel is easier to read

Line 232: How did you calculate “We calculated commission and omission errors for both the MODIS-based events and the MTBS events”?

Line 240: How were spread rate, date of maximum growth, and daily growth calculated?

Line 287: What is NIFC?

Line 267: Unnecessary to mention “Commission error was calculated as: 11,412/(11,412+7,054). Omission error was calculated as 8,721/(8,721+7,054).”

Table 5 confusion matrix needs more explanation.

Figure 2 lacks a good discussion. Why is there a dashed line at 0.8? Why is the R^2 decreasing at all for large fires, when one would expect them to concur more?

How will Figure 2 look when different spatio-temporal thresholds are used? Additional frames in Figure 2 with varying spatio-temporal thresholds would be useful.

Line 307: 202 and 404, why “~”?

Table 7 and in the discussion:

A discussion of how these spread rates compare to spread rates derived from active fire detections, as in the following papers, would be useful.

Loboda, T. V., & Csiszar, I. A. (2007). Reconstruction of fire spread within wildland fire events in Northern Eurasia from the MODIS active fire product. Global and Planetary Change, 56(3-4), 258-273.

Kumar, S. S., Picotte, J. J., & Peterson, B. (2019). Prototype downscaling algorithm for MODIS satellite 1 km daytime active fire detections. Fire, 2(2), 29.

 

Author Response

Thank you to the editor and reviewers for their valuable comments and suggestions. Our major revisions include: i) connecting the FIRED product to helping better understand changing fire regimes and changing resilience of ecosystems; ii) clarifying potential future directions in the application of this flexible algorithm, to include other fire parameters such as FRP, to apply to other fire remote sensing products, such as active fire products, and potentially even to other remote sensing observations of other hazards that have space/time information; iii) we have also expanded our discussion of space-time optimization that could be improved by including smaller fire perimeters to optimize against, and could be expanded if other known fire perimeters can be utilized beyond the U.S. We have responded to all editor and reviewer comments below (in bold).

 

Reviewer 3:

 

In the paper titled “FIRED (Fire Events Delineation): An open, flexible algorithm & database of US fire events derived from the MODIS burned area product (2001-19)” the authors present an algorithm to cluster fire data in space and time. They compare their US events to established MTBS burned area perimeter delineations.  

 

Major:

 

A clustering algorithm, or such an algorithm applied to fire data, is not new. If clustering adds value by providing new insights, such insights are not made apparent.

 

Thank you for this suggestion to clarify what is novel and beneficial about our approach, i.e., our algorithm captures and/or more appropriately delineates small events, multi-year events, fire complexes, and intra-annual reburns. Moreover, our approach and algorithm are open, making the effort reproducible, but more importantly, adaptable to community use and input. We expanded the text on the benefits of our approach at the end of the introduction (lines 127-138) and in the discussion (lines 412-456). 

 

The article has ignored active fire detections and fire radiative power. I understand that the title clearly says they limit to MODIS burned area product but including the active fire detections and FRP would perhaps be something novel for publication. 

 

We agree that incorporating additional fire products as inputs in the event-building process would enrich the output data, not only by filling in gaps of undetected burned area, but adding valuable variables like FRP. We have pointed to these great suggestions for future work (lines 518-520).

 

Limitations on the use of MTBS data for error estimation need to be discussed. It surely is not validation, as no ground truth is being used.

 

We added discussion about the limitations of the MTBS data in the results section b, and methods section f.

 

Minor

 

Listed below are some minor comments

 

Line 30: “5 pixel”? Suggest specifying a linear dimension.

Added (2315 m) after “5 pixel”. Thanks for this suggestion.

 

Line 32: you mean FIRED events and MTBS size? (Does MTBS call individual fires events?)

 

Here we are referring to each MTBS polygon as an event, and each FIRED polygon as an event. We changed the wording to lessen the confusion: “The linear relationship between the size of individual FIRED and MTBS events for the CONUS was strong (R2 = 0.92 for all events).”

 

Line 34: this statement, “or even the underlying algorithm as they see fit,” is redundant.

 

We have added “underlying algorithm approach,” as there are several elements that could be changed or adapted in the algorithm (beyond the spatio-temporal parameters). We hope this clarifies.

 

Line 38/39  This perhaps can be in discussion “The open, flexible FIRED algorithm could be utilized to derive events in any  satellite product.” 

 

We have added a few sentences to this effect in the discussion on lines 451-453: “Further, we anticipate that this algorithm has wide applicability to other fire products and other efforts to build events based on any geospatial data that has both spatial and temporal information.” 

 

Table 1: What is a spatio-temporal flooding? The table is obviously incomplete and perhaps cannot be exhaustive. The studies mentioned use spatially resampled, regridded data; departing from the native satellite resolution can bias the spread rates due to the PSF and bow-tie effect of the MODIS sensor. The following papers use the native resolutions (to the best of my knowledge) to ensure that such effects are minimal.

We did all data processing in the native resolution. We replaced the word “flooding” with “moving window” to match the wording used in the methods. Additionally, we speak to future potential to use active fire products.

 

Loboda, T. V., & Csiszar, I. A. (2007). Reconstruction of fire spread within wildland fire events in Northern Eurasia from the MODIS active fire product. Global and Planetary Change, 56(3-4), 258-273.

 

Morton, D. C., Defries, R. S., Randerson, J. T., Giglio, L., Schroeder, W., & Van Der Werf, G. R. (2008). Agricultural intensification increases deforestation fire activity in Amazonia. Global Change Biology, 14(10), 2262-2275.

 

Boschetti, L., & Roy, D. P. (2009). Strategies for the fusion of satellite fire radiative power with burned area data for fire radiative energy derivation. Journal of Geophysical Research: Atmospheres, 114(D20).

 

Line 171 Suggest rename the section title.  The text in this section does not suggest why this is a fast algorithm and all algorithms are flexible if the thresholds are changed. 

 

We have added a sentence that clarifies why this algorithm is fast, as it relates to its efficiency of processing and implementation: “We created an algorithm that automatically downloads, processes, defines events and calculates summary statistics for the entire CONUS in 30 minutes on a normal laptop.”

 

Line 178 “netCDF file” is this needed to be mentioned ? As its only an intermediate file? I mean it could be simply be any raster stack? 

 

This data structure enables faster processing, compared with other raster types. This is part of the explanation of how it works so fast. We added this text to lines XXX-XXX: “The data processing script downloads the entire time series of HDF files from the ftp server, extracts the burn date layer from each monthly tile, and adds them to a 3-dimensional netCDF data cube. We used this data structure to maximize efficiency and speed.”

 

Line 181: “To avoid unnecessary computation, we did not check cells in which there was no burned area assignment throughout the study period.” Line 175 clearly states it applies to burned pixels, and this sentence seems unnecessary.

 

This is another detail we included to help the reader understand why the algorithm runs efficiently.

 

Line 216: 1-15 pixels. Easier to read if the spatial dimension is stated, and maybe # pixels in ().

 

We added the distances in m along with the number of pixels.

 

Line 217: Why are these numbers selected?

 

These are the MTBS minimum size thresholds which we introduced in the previous paragraph.

 

Line 232 Pixel dimensions instead of a pixel is easier to read 

 

Accepted suggestion. Thank you.

 

Line 232 How did you calculate “We calculated commission and omission errors for both the MODIS-based events and the MTBS events”?

 

There is an equation given on lines 315 and 316.

 

Line 240 How were spread rate, date of maximum growth, and daily growth calculated?

 

Spread rate was calculated as area burned/duration, which is now indicated in the text. Date of maximum growth is just that: the date within the full event that had the greatest burned area. We added “date with the highest burned area per event” for clarity. Daily growth was the burned area for that day, which has been changed to “daily burned area” for clarity.
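These event-level statistics can be sketched in a few lines. This is a minimal illustration with a hypothetical mapping of burn date to daily burned area, not the published code:

```python
# Hypothetical per-day burned area (ha) for one event.
daily_area = {"2019-07-01": 120.0, "2019-07-02": 480.0, "2019-07-03": 90.0}

total_area = sum(daily_area.values())                  # total burned area (ha)
duration = len(daily_area)                             # event duration (days)
spread_rate = total_area / duration                    # area burned / duration
max_growth_date = max(daily_area, key=daily_area.get)  # date of maximum growth
```

“Daily growth” is simply the values of `daily_area` themselves, i.e. the burned area detected on each day of the event.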

 

Line 287 “What is NIFC” 

 

We added more info on NIFC (the National Interagency Fire Center), spelled out the acronym, and discussed more in depth how our results compared to the NIFC data. NIFC tallies national incidents, in terms of total wildfires and their size, and provides a government-based record to compare with.

 

Line 267 Unnecessary to mention “Commission error was calculated as: 11,412/(11,412+7,054). Omission error was calculated as 8,721/(8,721+7,054).”

We deleted this sentence.

 

Table 5 confusion matrix needs more explanation.

 

We have clarified what the confusion matrix is comparing, with the addition of the latter half of this sentence: “The MODIS-derived events had a 55% omission and 62% commission error, compared to the MTBS reference dataset, based on a confusion matrix that compares when FIRED and MTBS identify the same events (Table 5).”

 

Figure 2 lacks a good discussion. Why is there a dashed line at 0.8? Why is the R^2 decreasing at all for large fires, when one would expect them to agree more?

 

We added this paragraph to the methods (lines 279-297): “f. Comparison to MTBS

 

In order to understand how well the FIRED algorithm delineated event size, we compared the estimates of burned area from FIRED events to the estimates of burned area for MTBS events for the subset of events that were captured by both products. Because MTBS does not account for unburned patches within a fire perimeter when they calculate burned area, many burned area estimates reported by MTBS are likely overestimations. Thus, comparing the area burned by the two products represents a trade-off between imperfect satellite detection from MODIS and imperfect burned area reporting in the perimeters that drive selection by the MTBS product. With those caveats in mind, we co-located those events captured by both products (i.e. they overlapped in space and time), and compared estimated area burned at the event level using two approaches. First, to compare all fire events, we created a linear regression model where the FIRED-determined area burned predicted MTBS-determined area burned. Second, to understand how that relationship varied with size class, we binned the fire events into 50 equal size classes, and created a linear model on each subset. The expectation was that FIRED-based burned areas would be consistently less than the MTBS-based burned areas. In addition, due to lower burn detection by MODIS for smaller fires [32], we expected the models at smaller size classes to explain less of the variation than for large sizes. We also acquired the total yearly burned area and fire counts from the National Interagency Fire Center (NIFC) for CONUS to understand how FIRED and MTBS products compared to the aggregation of all reported wildfires (note, NIFC does not include intentional land use fires or prescribed burns).”
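The binned-regression comparison described in that paragraph can be sketched as follows. Synthetic data stands in for the real co-located events, and 5 bins are used instead of the paper's 50; this is an illustrative sketch, not the analysis code.

```python
import numpy as np

# Synthetic stand-ins for co-located event areas (ha); MTBS tends to report
# more area because unburned inclusions are not removed from its perimeters.
rng = np.random.default_rng(0)
fired_area = rng.uniform(100, 100_000, 500)
mtbs_area = fired_area * 1.2 + rng.normal(0, 500, 500)

# Split events into equal-width size classes by FIRED area and fit a
# separate linear model of MTBS area on FIRED area within each class.
n_bins = 5  # 50 in the paper
edges = np.linspace(fired_area.min(), fired_area.max(), n_bins + 1)
slopes = []
for lo, hi in zip(edges[:-1], edges[1:]):
    in_bin = (fired_area >= lo) & (fired_area < hi)
    if in_bin.sum() >= 2:
        slope, _ = np.polyfit(fired_area[in_bin], mtbs_area[in_bin], 1)
        slopes.append(slope)
```

Per-bin R^2 (computed the same way from each bin's fit) is what Figure 2B plots against size class.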

 

And this text to the results (lines 333-345):

 

The relationship between area burned for the FIRED events and the MTBS events was strong (R2 = 0.92, Figure 2A), and the area reported by MTBS was always higher than that of the FIRED events (the points are all above the 1:1 line in Figure 2A) at the event level. As event size increased, the R2 improved from below 0.6 for fires below 50,000 acres to above 0.8 for fires over 70,000 acres (Figure 2B). The MODIS MCD64A1 burned area product consistently underestimated the burned area reported by MTBS for fires below 100,000 hectares. This consistent underestimation is not necessarily a flaw with the FIRED product; rather, it is partly because MTBS does not account for unburned patches within a fire perimeter when calculating burned area, so burned area is consistently overestimated by MTBS. The burned area captured by MODIS MCD64A1, and thus FIRED, was much closer to the NIFC totals (Table 6). This is likely because the MCD64A1 product captures many more small fires than MTBS. However, the MCD64A1 product does not generally capture the smallest fires, below 12.6 ha [32]. There is a dramatically larger count of individual events reported by NIFC, which includes many fires as small as 0.4 ha.

 

How will Figure 2 look when different spatiotemporal thresholds are used? Additional frames in Figure 2 with varying spatiotemporal thresholds would be useful.

We feel as though that is beyond the scope of this paper and not exactly apropos to the purpose of that figure. We did not use these metrics to estimate the optimal spatiotemporal parameters. Hopefully the text we added to the methods and results adds some clarity for this.

 

Line 307 “202 and 404”: why the ~?

Explained on line 222, in Table 7, and in the discussion.

 

A discussion of how spread rates are estimated using active fire data, as in these papers, would be useful.

 

Loboda, T. V., & Csiszar, I. A. (2007). Reconstruction of fire spread within wildland fire events in Northern Eurasia from the MODIS active fire product. Global and Planetary Change, 56(3-4), 258-273.

 

Kumar, S. S., Picotte, J. J., & Peterson, B. (2019). Prototype downscaling algorithm for MODIS satellite 1 km daytime active fire detections. Fire, 2(2), 29.

 

We added this line in the introduction (lines 78-79): “There have been some attempts to characterize fire spread using active fire products, but the code and resulting data products are not publicly available (Loboda 2007).” In the discussion we elaborate on our hope that this product will be further developed to integrate multiple products containing complementary information, including FRP from the active fire products. We also note that the MCD64 burned area product already has the MOD14 active fire product as an input to boost the burned area estimates. We also updated this sentence in the discussion (lines 518-520): “Additional satellite sensors and their derived products, e.g., active fire, could be leveraged to expand the detections per event and add other key properties like fire radiative power.” And we note that the Kumar et al. 2019 paper is a solid contribution on downscaling, but it does not estimate fire spread, and we feel it is less relevant for this manuscript.

 

Author Response File: Author Response.pdf

Reviewer 4 Report

see attached file for comments

Comments for author File: Comments.pdf

Author Response

Thank you to the editor and reviewers for their valuable comments and suggestions. Our major revisions include: i) connecting the FIRED product to helping better understand changing fire regimes and the changing resilience of ecosystems; ii) clarifying potential future directions in the application of this flexible algorithm, including other fire parameters such as FRP, other fire remote sensing products such as active fire products, and potentially even other remote sensing observations of hazards that have space/time information; iii) expanding our discussion of space-time optimization, which could be improved by including smaller fire perimeters to optimize against, and could be extended beyond the U.S. if other known fire perimeters can be utilized. We have responded to all editor and reviewer comments below (in bold).

 

Reviewer 4: 

 

General comments:

The paper presents the FIRED algorithm and database, which identifies fire events using the MODIS MCD64 burned area product. Event delineation is an important research tool to make fire or burned area detection in satellite data useful to applications including ecology and public health implications, and there have been several recent studies proposing different global fire event databases. FIRED is open source and flexible enough to adapt to other burned area products or different research needs. The paper is logically organized and clearly written. However, the large discrepancy in the number of fire events detected by FIRED in comparison to the MTBS validation dataset undermines the analysis, and more work is needed to explain why so many “false positives” occur. I can recommend it for publication after major revisions.

 

The large discrepancy in the number of fire events between FIRED, MTBS and NIFC is mostly due to an arbitrary size threshold for inclusion of fire events by MTBS (i.e., all fires <202 ha in the east and <404 ha in the west are not included). In addition, it is worth noting that the MODIS MCD64 product has a declining ability to detect small fires as size decreases below 12.6 ha. NIFC government records do log even very small fires (e.g., there are many fires <1 acre in size that are reported). These small fires do not contribute substantially to burned area, but they do dramatically increase fire event counts (Table 3). We added more discussion in the results (lines 329-345) to clear up that confusion. In addition, the sensitivity analysis for which we used MTBS was more of an optimization than a validation, in that we only used events detected by both products, and then optimized the spatiotemporal thresholds to minimize over- and under-segmentation. True validation data for fire data does not currently exist at the scale that we need and is a ripe area for future research. We added more text (lines 484-492) discussing the lack of true validation in the discussion.
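The effect of the MTBS size thresholds on event counts can be illustrated with a simple filter. This is a hypothetical sketch using the thresholds quoted above (~202 ha east, ~404 ha west) on made-up events, not the actual comparison code:

```python
# Hypothetical FIRED-style event list; MTBS would only include events at or
# above its regional minimum-size thresholds, so small fires inflate the
# FIRED/NIFC counts relative to MTBS without adding much burned area.
events = [
    {"area_ha": 15.0, "region": "east"},
    {"area_ha": 250.0, "region": "east"},
    {"area_ha": 300.0, "region": "west"},
    {"area_ha": 5000.0, "region": "west"},
]
threshold = {"east": 202.0, "west": 404.0}
mtbs_sized = [e for e in events if e["area_ha"] >= threshold[e["region"]]]
```

Here half of the events fall below the threshold, so the MTBS-comparable count is far smaller even though the dropped fires contribute little total area.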



Specific comments:

Lines 161-165. Would this also be capable of detecting whether a regional fire season has changed, e.g., due to climate change?

 

Yes. That is an excellent future research question, although it is worth noting the MODIS record is much shorter than the MTBS record.

 

Lines 199-202. How does MTBS delineate events from Landsat data? 

 

Manually. We changed the wording on line 130 to clarify this: “... the Monitoring Trends in Burn Severity (MTBS) product, which is manually derived from Landsat imagery [41]”

 

Table 5. Doesn’t this mean that an event flagged by FIRED is more likely a false positive than not, even excluding those that would have been below the MTBS size threshold? 

 

This is only comparing the sizes of events captured by both products, rather than the probability of detection. It is tricky to make inferences on false positives and negatives in this context since MTBS is not really a true ground truth. MTBS is manually derived, and so if FIRED detects an event where there is no MTBS data, it is possible that the MTBS team simply did not delineate that event for any number of reasons (e.g., budget restrictions). 

 

So the data analyzed in Figure 2 is a minority of the results? 

 

We say in the Figure 2 caption that it is indeed a subset of the results. We changed the wording to make that a bit clearer.

 

Lines 306-313. A figure mapping the frequency of only the fires below the MTBS cutoff would show whether the difference in, e.g., the Mississippi valley in Figure 1 is due to small fires.

 

Thanks for the suggestion. This is a good alternative way to present this information, but we have chosen to keep the figure as originally presented.

 

Figure 3. This is convincing as a case study, but it’s not clear how much the CONUS dataset is affected by MTBS marking unburned area.

 

We think that is a really great idea! One could analyze this by taking the burn severity mosaics that MTBS produces and calculating how many pixels are marked as unburned, but that is currently beyond the scope of this paper.

 

Also, this figure uses the Global Fire Atlas before it is introduced during the discussion section.

The Global Fire Atlas is mentioned immediately after the figure and on the same page.

 

Technical comments:

Line 131. Does this product have a DOI? We added the DOI.





Author Response File: Author Response.pdf
