Article
Peer-Review Record

A Comparison of Fire Weather Indices with MODIS Fire Days for the Natural Regions of Alaska

Forests 2020, 11(5), 516; https://doi.org/10.3390/f11050516
by Robert H. Ziel 1, Peter A. Bieniek 1,*, Uma S. Bhatt 2, Heidi Strader 3, T. Scott Rupp 1 and Alison York 1
Reviewer 1: Anonymous
Reviewer 2: Anonymous
Reviewer 3: Anonymous
Submission received: 1 April 2020 / Revised: 25 April 2020 / Accepted: 2 May 2020 / Published: 3 May 2020
(This article belongs to the Section Forest Ecology and Management)

Round 1

Reviewer 1 Report

In this study, 13 different indicators of atmospheric conditions, fuel moisture and
flammability are compared to determine how effective each is at identifying thresholds and trends for significant wildfire activity. Flammability indices are compared with MODIS
remote sensing characterizations. The Authors did remarkable work, if only for the sheer number of indices used in the comparison. The results are clearly presented, and they show the predominance of a series of indices over others. The collection of data seemed thorough and precise.

I have several remarks to point out, which are listed below.

General Remarks:

G1- I am aware that MODIS data constitute a well-known tool for fire-related work. However, the data collected by MODIS are prone to false positives, since several human activities can be mistaken for wildfires. While I assume that this would not affect the less inhabited areas of Alaska, it would be nice to add a few lines describing not only the strengths but also the possible weaknesses of the MODIS input data.
G2- Could you please add a line where you discuss why you did not directly use burned area [ha] as a proxy for forest fire activity instead of MODIS data (because of a lack of updated databases, etc.)?


G3- As I understand it, you use localized data to compute the indices (Fig. 4). Every MODIS detection has, in principle, a distance from the nearest sensor/data source. Do you have a rough estimate of such distances?


G4- Part of the validation is based on the concept of Cumulative Departure. Since validating a risk-susceptibility index is never an easy task, could you add some literature about related works? I am also interested in knowing why you did not use a ROC/AUC analysis for validation.

G5- A very interesting result of your work was not pointed out as much as it should be: the fact that different PSAs may have different best-to-use indices (Figs. 8-9). Maybe the Authors could spend a few more words on that. They could also produce a figure analogous to Figure 1 but with the best-performing index as a colorbar.

Minor remarks:

M1 - Figure 2: define the concept of burnable area more clearly
M2 - Line 424: I would add a formula to describe what a "Cumulative Departure" is, given the distributions of fire activity and fire index.



Author Response

In this study, 13 different indicators of atmospheric conditions, fuel moisture and
flammability are compared to determine how effective each is at identifying thresholds and trends for significant wildfire activity. Flammability indices are compared with MODIS remote sensing characterizations. The Authors did remarkable work, if only for the sheer number of indices used in the comparison. The results are clearly presented, and they show the predominance of a series of indices over others. The collection of data seemed thorough and precise.

Thank you for your review.

I have several remarks to point out, which are listed below.

General Remarks:

G1- I am aware that MODIS data constitute a well-known tool for fire-related work. However, the data collected by MODIS are prone to false positives, since several human activities can be mistaken for wildfires. While I assume that this would not affect the less inhabited areas of Alaska, it would be nice to add a few lines describing not only the strengths but also the possible weaknesses of the MODIS input data.

Prior analysis for Alaska has shown these data to actually be quite robust (e.g. Ziel et al 2015) with 97% of all MODIS hotspots in the Collection 6 dataset occurring within the final fire perimeters buffered out to ½ of the MODIS resolution (500m). Hot spot detections were flagged to identify those from stationary sources, such as volcanoes and incinerators. We have now mentioned this where we introduce the MODIS hotspot data at L191-194. 


G2- Could you please add a line where you discuss why you did not directly use burned area [ha] as a proxy for forest fire activity instead of MODIS data (because of a lack of updated databases, etc.)?

The historical fire perimeter database does not include the progression of the fires at the daily scale, only the final perimeter that each fire reached at the end of the season. Therefore, it was not suitable for the subseasonal-scale analysis we conducted. We outline our rationale at L207-209.

G3- As I understand it, you use localized data to compute the indices (Fig. 4). Every MODIS detection has, in principle, a distance from the nearest sensor/data source. Do you have a rough estimate of such distances?

All of the analysis was conducted at the PSA scale, and the fire index values computed for each (either based on stations or the gridded reanalysis) were considered generally representative of the geographic area of each PSA. Given the sizes/locations of these PSAs, that is a good assumption based on prior work looking at climate divisions (e.g., Bieniek et al. 2012). This therefore means that the fire index values of each PSA cover the active fires within it. In preparing this paper, each MODIS detection was also associated (based on distance) with the nearest surface observing station location in our dataset, though we did not use that association in this analysis. We stratified MODIS detections and surface observation records within each PSA and associated them on a PSA basis. We did not attempt to determine a distance to the grid cell centroid because we similarly cumulated and processed those data by PSA.

Here are the average weather station distances for each PSA and regional subgroup, based on individual MODIS detection distances.

Grouping      Average Distance From RAWS (km)
AK00          132.12
AK01E          28.90
AK01W          32.64
AK02           35.68
AK03N          42.55
AK03S          39.90
AK04           50.16
AK05           53.70
AK06           72.85
AK07           72.61
AK08           88.16
AK09           66.85
AK10          148.23
AK11           33.88
AK12           32.92
AK13           26.56
AK14           20.26
AK15           72.02
AK16          102.05
AK17          139.66
AK18           23.25
Statewide      59.14
E&C Int        35.93
W Int          64.39
Tundra         70.39
S. Central     26.90
Coastal        97.04
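As an aside on how such values can be derived: detection-to-station distances like those above can be computed with a nearest-neighbor pass over great-circle distances. The sketch below is illustrative only; the function names are ours, and any coordinates used with it would be placeholders, not the study's RAWS locations.

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in km between two (lat, lon) points."""
    r = 6371.0  # mean Earth radius, km
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlam = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(phi1) * math.cos(phi2) * math.sin(dlam / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def mean_nearest_station_distance(detections, stations):
    """Average, over detection points, of the distance to the closest
    station; both arguments are lists of (lat, lon) tuples."""
    nearest = [
        min(haversine_km(dlat, dlon, slat, slon) for slat, slon in stations)
        for dlat, dlon in detections
    ]
    return sum(nearest) / len(nearest)
```

For per-PSA averages, the detections would simply be grouped by PSA before calling the function.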

G4- Part of the validation is based on the concept of Cumulative Departure. Since validating a risk-susceptibility index is never an easy task, could you add some literature about related works? I am also interested in knowing why you did not use a ROC/AUC analysis for validation.

Multiple methods, including ROC/AUC, were initially considered to compare the indices with the MODIS data. We felt that these types of statistics would not give the indices fair treatment. False positives, with indices indicating high fire potential with no fire manifestation, are generally related to the absence of a source fire (ignition or active fire) rather than an erroneous indication of fire potential with a high index value. False negatives, with indices indicating low fire potential where fires are occurring, are generally associated with isolated residual heat signatures/little fire growth and/or associated with local differences between individual MODIS detections and individual source weather observations (whether they are from a weather station many km away or from a coarse reanalysis grid cell representation). The ROC/AUC test assumes that these "errors" are both in fact misrepresentations and would rate the indices more poorly than would be fair.

G5- A very interesting result of your work was not pointed out as much as it should be: the fact that different PSAs may have different best-to-use indices (Figs. 8-9). Maybe the Authors could spend a few more words on that. They could also produce a figure analogous to Figure 1 but with the best-performing index as a colorbar.

There were quite similar results among the different subregions that we evaluated (DMC and BUI were clear winners with FFMC or VPD sometimes in third place). Therefore, the suggested figures would be mostly of one color so we didn’t feel the need to replace the “table figures” with maps. There would be more diversity further down the rankings but we mainly focused on highlighting the top performers. 

Minor remarks:

M1 - Figure 2: define the concept of burnable area more clearly

A definition of “burnable area” has been added to the caption for Figure 2: “lands sufficiently vegetated to accumulate dead fuel loads over time using historic review of vegetative greenness remote sensing”

M2 - Line 424: I would add a formula to describe what a "Cumulative Departure" is, given the distributions of fire activity and fire index.

Formula has been added (L324-325):

\[
\text{Cumulative Departure} = \sum_{k=0}^{n=20} \left( \text{fire index frequency}_k - \text{MODIS history (day or count) frequency}_k \right)
\]

where the n = 20 bins span from zero to the index maximum.
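For concreteness, here is a minimal sketch of one reading of this calculation. The binning scheme and the use of absolute per-bin differences are our assumptions, not taken from the manuscript:

```python
import numpy as np

def cumulative_departure(index_all_days, index_fire_days, n_bins=20):
    """Departure between the frequency distribution of a fire index over
    all days and its distribution over MODIS fire days.

    Both samples are binned on common edges (n_bins classes spanning the
    full index range), converted to relative frequencies, and the per-bin
    departures accumulated: 0 means a perfect match, 2 means completely
    disjoint distributions (absolute differences are our assumption).
    """
    all_days = np.asarray(index_all_days, dtype=float)
    fire_days = np.asarray(index_fire_days, dtype=float)
    edges = np.linspace(all_days.min(), all_days.max(), n_bins + 1)
    all_freq, _ = np.histogram(all_days, bins=edges)
    fire_freq, _ = np.histogram(fire_days, bins=edges)
    departures = all_freq / all_freq.sum() - fire_freq / fire_freq.sum()
    return float(np.abs(departures).sum())
```

An index that concentrates its fire days at the high end of its range would show large departures in the upper bins relative to the all-days climatology.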

Reviewer 2 Report

Title: Assessing Alaska’s Changing Wildfire Potential

 

      With global warming and important increases in temperature predicted for Alaska, the authors have reason to seek the best fire weather indices that will enhance fire prediction modeling. With better fire prediction capabilities, wildfire threat potential can be better addressed through adaptive fire management operations. I see results from this study as applicable to other boreal ecosystems, notably those that already use the Canadian Forest Fire Danger Rating System. While the level of novelty of this study is limited, it is important to reinforce that current fire indices are still adequate to predict fire behavior and ensuing wildfire threat in boreal forest ecosystems.

I see value in publishing this article, but important points must be addressed. These are described in the General Comments. The detailed comments mainly focus on grammatical and syntax issues.

General Comments

Based on the last sentence of your abstract, I was expecting some data limitations in terms of sample size. However, I was a bit baffled that you waited until Line 483 to reveal that only 2 years out of 15 had substantial fire activity due to late-season droughts. This is a sample size of 2 for addressing the performance of long-lag fire indices. Therefore, the 52- and 90-day indices must be removed, as you do not have the number of observations to produce any meaningful results. I would encourage you to say so at the beginning in your data presentation and explain why you can only focus on the short- and mid-lag drying periods.

The paper needs some restructuring around the data set presentation, methods and results. The paper is heavy on data presentation, and I must admit it is difficult to wrap one's head around it all, as there are interpretation and results with figures in the data and method section. Figures 2, 3 and 5 can remain in the data presentation for clarity, but any other figures where you present or discuss results must go in the Results, and any interpretation goes in that section. To keep things organized, you may need sub-sections in the Results and Interpretation Sections.

You are missing a Study Area Section. Please describe each natural region (not the PSAs) in terms of their topography, fuel type, burning regime, and mean summer temperatures and precipitation, and perhaps length of fire season. This will go a long way for the reader to understand your data interpretation and your concluding message that fire indices of choice can differ based on ecosystems.

Figure 1 needs to reflect the natural regions you mention: Coastal, Boreal, Interior, Tundra, etc. I found the color palette to be inadequate, as the 4 greens look almost the same. Perhaps codes should be marked inside each PSA, saving the color scheme for the natural regions.

I had difficulty with the technical lingo around downscaling of data. This is not commonly used, and I had to look it up. It appears to be more specific to the field of climate data and GIS analysis. When you first introduce the "20km dynamically downscaled data" (L80), simply say you used observations at a 20 km grid resolution. Explanations on this particular dataset can be presented in its dedicated sub-section. I would encourage you to summarize what this exactly means in a couple of sentences so that your readers don't have to go look it up in the paper of reference.

Interpretation: flesh out a bit more the differences in your results by natural region. You make an interesting point at the end that VPD may be a better index for Tundra ecosystems.

Recommendations for your concluding remarks: you have 3 salient points from your research: 1. findings from L292-295; 2. L499-502; 3. L508-510.

Title: After having read the entire paper and circling back to your concluding remarks, I find the title a bit vague and not that representative. And if I were to answer your question, Can new tools help?, I would say not really. How about something like: "A comparison of fire weather indices with MODIS fire days for the natural regions of Alaska"? Putting the words fire weather indices in the title will ensure that your study readily comes up in on-line searches.

Figures should always be presented after they have been introduced in the text.

Be mindful when you present graphs to use the same vocabulary and same way of spelling (use of small or large caps) between graphs, as well as in your caption. This is a recurrent issue throughout the paper.

Figures 8 and 9 are Tables. Please change and renumber accordingly.

In the future, be mindful of the time anonymous reviewers spend revising. Please ask your other co-authors, or people in your entourage, to give it a good read for grammatical errors. Even better if you can ask somebody who is not familiar with your work, so that the text is well-structured and laid out before submission. It is time-consuming for us Reviewers; it means we don't always understand what you are saying, and it detracts from the quality of your work.

 

Specific comments

L36: spell out acronyms in Keywords

L38-39: move reference [1] right after “ecosystem

L53-56: Confusing. Please re-write as follows. Note that you need a better reference for the CFFDRS.

Different “fire management” systems are used across the Boreal forest (e.g.,[9,10]) and in the United States [11]. The Forest Fire Weather Index (FWI) system [12], which is a component of the Canadian Forest Fire Danger Rating System (CFFDRS)[new reference], rates day-to-day changes …

New reference: Stocks, B.J.; Lawson, B.D.; Alexander, M.E.; Van Wagner, C.E.; McAlpine, R.S.; Lynham, T.J.; Dubé, D.E. 1989. The Canadian Forest Fire Danger Rating System: an overview [reprinted from August 1989 issue, 65:258-265, with corrections and new pagination]. Forestry Chronicle 65(6): 450-457. https://cfs.nrcan.gc.ca/publications?id=11347

L57,58,65  You have 2 F's in the acronym for the National Fire-Danger Rating System. Please verify throughout the document. It has 1 F on line 85.

L67 perform with an “s”

L68 replace “so can” with “and”

L69 End of sentence after “Finland”. Start of new sentence: They concluded that …

L70 put comma before “but”

L72 … specific locations “and were the motivating factor for this study to compare 13 indices of flammability over multiple years in Alaska.”

L74: replace “is” with “are”

L75 replace “which” with “that”

L75 remove “the”

L75 replace “are” with “can be”

L80 Start sentence with: “Observation-based data from a 20km resolution grid [15] was used” to evaluate two  drought assessment …

L82-83 the acronyms need to be in brackets, not flanked by commas.

L92-93 Remove sentence “Section 2 describes”

L118-120 Sentence poorly constructed. Please re-phrase. Likely missing a comma after (MCD14ML) but then it still doesn’t make sense.

L131 replace “has maintained” with “has been maintaining”

L132 insert comma after season

L135 “distribution” is plural as you have “both” temporal and spatial distributions

L136 no “s” at “show”

L151 “configured for fire weather monitoring” needs to be in between comma

L150-154. Please rephrase and break the sentence in two.

L159 Start sentence with “For each PSA, the daily indices were averaged, resulting in…

L160 remove  “then” and “for each PSA” .  For the other “for each PSA” replace with “from each station”

L163-165 explain what the dynamically downscaled product is exactly. And what is boundary forcing?

L170 your study period says 1979-2017 but the graph in Figure 4 says 1996-2017. Which is correct?

L175 “vary” not “varies” (you have a start and end dates …plural), replace “was” with “were”

L178 put in between commas “such as this downscaled product”

L179 comma after “observations” , replace “in” by “for”

L180 remove comma after “due”, add comma before “due” and add comma after “part”

L184 explain when you have a wet, dry, warm, or cold bias what color and positive/negative values those represent

L216 there is a red period (rather black) after [28]

L226 spell out ASCE (I don’t think this acronym has been presented)

L245 Table 1: be consistent in how you present the data. Either use all acronyms or spell out all indices. You already write what the lag time is for the short and mid lags in the header, so it doesn't need to be repeated for EDDI and SPEI.

L305 Be mindful when you present graphs to use the same vocabulary and same way of spelling (use of small or large caps) between graphs, as well as in your caption.

L310 Please find a better reference than Countryman (1972), which is the reference to the fire environment involving not only climate, but topography and fuels. Find a reference that is relevant to your focus on weather.

L312 add “ignitions” after “fire”

L323 not sure I understand the “sorted and scaled”  To obtain the percentile ranking, you would sort your data, but I don’t quite get the “scaled”. Did you transform your data? From what I see in Figure 7 it simply looks to me that you evaluated the frequency by classes (bins) in increments of 10. If I am correct in my assumption, then the term “scaled” is wrongly used. Statistical scaling is a different process.

L329 same issue with the term scaled.  Perhaps simply say: across a range of BUI values “that” represents the overall …

L337 be consistent: MODIS Day with capital D

L338 MODIS Detects with capital D

L343 capitalize “Days/Counts”

L383 Replace “Figure 8” with “Table 2”

L386-397 these subregions and different parts of Alaska you refer to should be in Figure 1

L386-417 The comment in the captions of your figures about the color coding is not sufficient for understanding. It is not obvious at first glance how the color gradient was applied, as it is not related to a particular range of values. I think that using white as the transition between blue and red is misleading. It makes it look like blues could be negative departures, white is neutral, and reds are positive departures. Keep it a palette gradient of one color. Also, state that the order of the indices is from the Statewide results.

L407 Replace “Figure 9” with “Table 3”

L409-410 Replace “at the top of each column in Figure 9” with “in the first row”

L412 “slightly” not “slight”

L419 In Figure 10 why are there two different scales for the X axis?  Why not take the percentile ranks of EDDI and SPEI to be on the same scale as VPD, DMC and BUI?

L448-453 The text is very similar to what you presented in Lines 309-315. This type of discussion definitely belongs in the Interpretation Section and not in your presentation of data.

L468 I don’t understand why the ranked percentile is almost the same in each BUI class.  How can this be possible?

L478 This sentence is already in your Introduction, please remove.

L479-484 With so few fire seasons that experienced a prolonged drought, you are in no position to perform a meaningful analysis. Regardless, it is not the long-lag fire indices that will inform you on the probabilities of burning, but rather the BUI (mid-lag). The DC informs on how large fires could potentially get, as well as the potential for negative fire severity effects. Under a changing climate with the expectation that the number and length of droughts will increase, having a better understanding of the drought code and its correlation with active fire days would indeed be valuable information to have.

L493-499 The first paragraph of the Conclusion belongs in the Introduction

L515 replace “offing” with “distant future”

L515 replace “devises” with “designed”

L527 replace “influenced” with “affected” 

L540 remove comma after "snow-free"

Author Response

Title: Assessing Alaska’s Changing Wildfire Potential

      With global warming and important increases in temperature predicted for Alaska, the authors have reason to seek the best fire weather indices that will enhance fire prediction modeling. With better fire prediction capabilities, wildfire threat potential can be better addressed through adaptive fire management operations. I see results from this study as applicable to other boreal ecosystems, notably those that already use the Canadian Forest Fire Danger Rating System. While the level of novelty of this study is limited, it is important to reinforce that current fire indices are still adequate to predict fire behavior and ensuing wildfire threat in boreal forest ecosystems.

I see value in publishing this article, but important points must be addressed. These are described in the General Comments. The detailed comments mainly focus on grammatical and syntax issues.

Thank you for your review.

General Comments

Based on the last sentence of your abstract, I was expecting some data limitations in terms of sample size. However, I was a bit baffled that you waited until Line 483 to reveal that only 2 years out of 15 had substantial fire activity due to late-season droughts. This is a sample size of 2 for addressing the performance of long-lag fire indices. Therefore, the 52- and 90-day indices must be removed, as you do not have the number of observations to produce any meaningful results. I would encourage you to say so at the beginning in your data presentation and explain why you can only focus on the short- and mid-lag drying periods.

Fires occurred, were detected by MODIS, and included in our analysis in every one of the 15 years in our study period (see Figure 2). It is true that only a few years, 2004, 2005 and 2015, were extreme seasons in terms of area burned, but there are always active fires each summer in Alaska. Therefore, we had a sample size that spanned 2745 days in all 15 years of the MODIS record. This is actually very important because it helps us to assess the performance of these fire indices in both extreme and “normal” years. 

The paper needs some restructuring around the data set presentation, methods and results. The paper is heavy on data presentation, and I must admit it is difficult to wrap one's head around it all, as there are interpretation and results with figures in the data and method section. Figures 2, 3 and 5 can remain in the data presentation for clarity, but any other figures where you present or discuss results must go in the Results, and any interpretation goes in that section. To keep things organized, you may need sub-sections in the Results and Interpretation Sections.

We have done a major restructuring of the paper, moving some sections of the data and methods section into the results section. The figures were also renumbered to reflect these changes. Figures 5-7, along with their accompanying descriptions in the text, are now part of the results section rather than the data and methods section. As a result, substantial portions of the text were reworked to facilitate these changes. The results section is now divided into multiple categories: one on the frequency of MODIS activity, another that goes through the fire indices, and then their comparison with MODIS active fires.

You are missing a Study Area Section. Please describe each natural region (not the PSAs) in terms of their topography, fuel type, burning regime, and mean summer temperatures and precipitation, and perhaps length of fire season. This will go a long way for the reader to understand your data interpretation and your concluding message that fire indices of choice can differ based on ecosystems.

The climatological description of the key subregions has been added and the general description of each has been expanded (see L222-229).

Figure 1 needs to reflect the natural regions you mention: Coastal, Boreal, Interior, Tundra, etc. I found the color palette to be inadequate, as the 4 greens look almost the same. Perhaps codes should be marked inside each PSA, saving the color scheme for the natural regions.

We have changed Figure 1 to shade the key regions and include labels on the map for the PSAs as you suggested.

I had difficulty with the technical lingo around downscaling of data. This is not commonly used, and I had to look it up. It appears to be more specific to the field of climate data and GIS analysis. When you first introduce the "20km dynamically downscaled data" (L80), simply say you used observations at a 20 km grid resolution. Explanations on this particular dataset can be presented in its dedicated sub-section. I would encourage you to summarize what this exactly means in a couple of sentences so that your readers don't have to go look it up in the paper of reference.

We have reworked the description of the dynamically downscaled data to describe it and its purpose in our study in more general terms with less jargon.

Interpretation: flesh out a bit more the differences in your results by natural region. You make an interesting point at the end that VPD may be a better index for Tundra ecosystems.

There is broad similarity in the top-performing indices across the five subregions. We have, however, reworked much of the text now in the results section due to the restructuring.

Recommendations for your concluding remarks: you have 3 salient points from your research: 1. findings from L292-295; 2. L499-502; 3. L508-510.

We have reworked the beginning of the conclusion section to clarify our main points.

Title: After having read the entire paper and circling back to your concluding remarks, I find the title a bit vague and not that representative. And if I were to answer your question, Can new tools help?, I would say not really. How about something like: "A comparison of fire weather indices with MODIS fire days for the natural regions of Alaska"? Putting the words fire weather indices in the title will ensure that your study readily comes up in on-line searches.

Thanks for the excellent title suggestion; it has been adopted in the revised manuscript.

Figures should always be presented after they have been introduced in the text.

We have tried to follow this when possible, but because of layout constraints to avoid large white spaces, there are cases where having the figure first is unavoidable.

Be mindful when you present graphs to use the same vocabulary and same way of spelling (use of small or large caps) between graphs, as well as in your caption. This is a recurrent issue throughout the paper.

We have gone through the text and remade most of the figures to have consistent acronyms and caps.

Figures 8 and 9 are Tables. Please change and renumber accordingly.

These are tables; however, they are displayed as figures since we want to include colored shading, and tables in this journal can only be plain text.

In the future, be mindful of the time anonymous reviewers spend revising. Please ask your other co-authors, or people in your entourage, to give it a good read for grammatical errors. Even better if you can ask somebody who is not familiar with your work, so that the text is well-structured and laid out before submission. It is time-consuming for us Reviewers; it means we don't always understand what you are saying, and it detracts from the quality of your work.

Thank you for taking the time to thoroughly review our paper and offer corrections for our grammar. We have implemented many of your suggestions and have done additional proofreading. The restructuring of the paper required rewriting/reworking some sections; therefore, the text related to some of your specific comments may no longer be in the paper.

Specific comments

We addressed the minor text changes you provided below prior to the restructuring to be sure we got everything; however, the paper has been substantially changed (especially in the data/methods and results sections), so some of these sentences/sections were deleted while reworking the paper.

L36: spell out acronyms in Keywords

Done

L38-39: move reference [1] right after “ecosystem

Done

L53-56: Confusing. Please re-write as follows. Note that you need a better reference for the CFFDRS.

Different “fire management” systems are used across the Boreal forest (e.g.,[9,10]) and in the United States [11]. The Forest Fire Weather Index (FWI) system [12], which is a component of the Canadian Forest Fire Danger Rating System (CFFDRS)[new reference], rates day-to-day changes …

New reference: Stocks, B.J.; Lawson, B.D.; Alexander, M.E.; Van Wagner, C.E.; McAlpine, R.S.; Lynham, T.J.; Dubé, D.E. 1989. The Canadian Forest Fire Danger Rating System: an overview [reprinted from August 1989 issue, 65:258-265, with corrections and new pagination]. Forestry Chronicle 65(6): 450-457.  https://cfs.nrcan.gc.ca/publications?id=11347

Thank you for this suggestion and added reference. We have rewritten this sentence as suggested (L56-58).

L57,58,65  You have 2 F's in the acronym for the National Fire-Danger Rating System. Please verify throughout the document. It has 1 F on line 85.

We have corrected the typo and have changed “NFFDRS” to “NFDRS” throughout the paper.

L67 perform with an “s”

Done

L68 replace “so can” with “and”

Done

L69 End of sentence after “Finland”. Start of new sentence: They concluded that …

Done

L70 put comma before “but”

Done

L72 … specific locations “and were the motivating factor for this study to compare 13 indices of flammability over multiple years in Alaska.”

Done

L74: replace “is” with “are”

Done

L75 replace “which” with “that”

Done

L75 remove “the”

Done

L75 replace “are” with “can be”

Done

L80 Start sentence with: “Observation-based data from a 20km resolution grid [15] was used” to evaluate two  drought assessment …

We have reworked this sentence as suggested. We have retained mention of the data as being downscaled since that is an important characteristic of the dataset.

L82-83 the acronyms need to be in brackets, not flanked by commas.

Done

L92-93 Remove sentence “Section 2 describes”

Done

L118-120 Sentence poorly constructed. Please re-phrase. Likely missing a comma after (MCD14ML) but then it still doesn’t make sense.

We have reworked this sentence for clarity

L131 replace “has maintained” with “has been maintaining”

We have reworked this sentence for clarity

L132 insert comma after season

Done

L135 “distribution” is plural as you have “both” temporal and spatial distributions

Done

L136 no “s” at “show”

Done

L151 “configured for fire weather monitoring” needs to be in between comma

Done

L150-154. Please rephrase and break the sentence in two.

Done

L159 Start sentence with “For each PSA, the daily indices were averaged, resulting in…

Done

L160 remove  “then” and “for each PSA” .  For the other “for each PSA” replace with “from each station”

Done

L163-165 explain what the dynamically downscaled product is exactly. And what is boundary forcing?

We have reworked the explanation of these data and have clarified what dynamical downscaling means (i.e. use a regional model to downscale the coarse reanalysis data on a finer grid). We have also more generally described the process to avoid getting into the details of different types of “forcing” since that is not critical for this study. See section 2.5.

L170 your study period says 1979-2017 but the graph in Figure 4 says 1996-2017. Which is correct?

We have clarified this sentence. The indices were computed for the study period of 2003-2017, but EDDI and SPEI are relative to a 1979-2017 reference climatology (the study period is not long enough on its own to provide a meaningful climatology, per standard practices). We have better explained this issue on lines 264-266.

L175 “vary” not “varies” (you have a start and end dates …plural), replace “was” with “were”

Done

L178 put in between commas “such as this downscaled product”

Done

L179 comma after “observations” , replace “in” by “for”

Done

L180 remove comma after “due”, add comma before “due” and add comma after “part”

We didn’t fully understand this suggestion to add a comma before “due”, but the comma was added after “part” as suggested.

L184 explain when you have a wet, dry, warm, or cold bias what color and positive/negative values those represent

The colorbars included in Figure 4 show where values have positive/negative departures in each panel, and the units are provided in the figure caption.

L216 there is a red period (rather than black) after [28]

Fixed

L226 spell out ASCE (I don’t think this acronym has been presented)

We have spelled out the American Society of Civil Engineers as suggested.

L245 Table 1 be consistent in how you present the data. Either use all acronyms or spell out all indices. You already write what the lag time is for the short and mid in the header, so it doesn’t need to be repeated for EDDI and SPEI

We have changed this table to include only acronyms

L305 Be mindful when you present graphs to use the same vocabulary and same way of spelling (use of small or large caps) between graphs, as well as in your caption.

We have edited the paper and figures to establish consistent acronyms and spelling.

L310 Please find a better reference than Countryman (1972), which is the reference to the fire environment involving not only climate, but topography and fuels. Find a reference that is relevant to your focus on weather.

An alternate reference could be an issue of Fire Management Today from 2017, a special edition entitled Weather Effects on Smoke and Wildland Fire. However, this paragraph was deleted when we reworked the paper for the restructuring, so the reference was removed entirely.

https://www.fs.usda.gov/sites/default/files/fire-management-today/1952_fmt_75_15.pdf

L312 add “ignitions” after “fire”

There are two instances of “fire” on this line, so the intent is unclear. Therefore, we have attempted to change the sentence to mention “ignitions” in the spirit of the suggestion.

L323 not sure I understand the “sorted and scaled”  To obtain the percentile ranking, you would sort your data, but I don’t quite get the “scaled”. Did you transform your data? From what I see in Figure 7 it simply looks to me that you evaluated the frequency by classes (bins) in increments of 10. If I am correct in my assumption, then the term “scaled” is wrongly used. Statistical scaling is a different process.

We have rewritten the description of our methods to better explain our analysis approach (see section 2.6). The sorting just alluded to computing cumulative distributions and percentiles. For the cumulative difference calculation, the cumulative distributions of the fire indices were rescaled/transformed to match the range of MODIS (but only for that calculation, not for what is shown in Figure 10, etc.). As you said, the sums were then taken over those frequency bins (5% increments).
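To make the “sorted and scaled” terminology concrete, here is a minimal sketch (Python, synthetic data, hypothetical helper names; not the manuscript's actual code) of the two steps: percentile ranking of an index's distribution, and linear rescaling onto the MODIS range used only for the cumulative-difference calculation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins for 15 seasons of daily data (2745 days):
# a fire index (e.g., BUI) and daily MODIS detection counts for one PSA
bui = rng.gamma(shape=2.0, scale=30.0, size=2745)
modis_counts = rng.poisson(lam=5.0, size=2745).astype(float)

def percentile_rank(x):
    # "Sorting": empirical percentile rank of each value, in (0, 1]
    ranks = x.argsort().argsort() + 1
    return ranks / len(x)

def rescale_to_range(x, target):
    # "Scaling": linear transform of x onto the min/max range of target,
    # used here only for the cumulative-difference comparison
    x01 = (x - x.min()) / (x.max() - x.min())
    return x01 * (target.max() - target.min()) + target.min()

bui_pct = percentile_rank(bui)
bui_scaled = rescale_to_range(bui, modis_counts)

# Binning percentile ranks in 5% increments yields near-uniform bin counts
bins = np.arange(0.0, 1.0001, 0.05)
per_bin, _ = np.histogram(bui_pct, bins=bins)
print(per_bin)  # each bin holds roughly 2745 / 20 ~ 137 days
```

Because the percentile transform spreads days evenly across the distribution, each 5% bin ends up with roughly the same number of days.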

L329 same issue with the term scaled.  Perhaps simply say: across a range of BUI values “that” represents the overall …

Done

L337 be consistent: MODIS Day with capital D

Done

L338 MODIS Detects with capital D

Done

L343 capitalize “Days/Counts”

Done

L383 Replace “Figure 8” with “Table 2”

These are tables; however, they are displayed as figures because we want to include colored shading, and tables in this journal can only be plain text.

L386-397 these subregions and different parts of Alaska you refer to should be in Figure 1

We have redone Figure 1 and now shade the different subregions of Alaska instead of the PSAs.

L386-417 The comment in the captions of your figures about the color coding is not sufficient for understanding. It is not obvious at first glance how the color gradient was applied as it is not related to a particular range of values. I think that using white as the transition between blue and red is misleading. It makes it look like blues could be negative departures, white is neutral and reds are positive departures. Keep it a palette gradient of one color. Also, state that the order of indices are from the Statewide results.

Thanks for this suggestion. We have remade Figures 8-9 using single color shading.

L407 Replace “Figure 9” with “Table 3”

These are tables; however, they are displayed as figures because we want to include colored shading, and tables in this journal can only be plain text.

L409-410 Replace “at the top of each column in Figure 9” with “in the first row”

Done

L412 “slightly” not “slight”

Done

L419 In Figure 10 why are there two different scales for the X axis?  Why not take the percentile ranks of EDDI and SPEI to be on the same scale as VPD, DMC and BUI? 

While conventional fire indices are commonly represented and considered as percentiles to “normalize” them operationally, EDDI and SPEI have been formally normalized within the index calculation itself, using a 40-year climate history. These normalized representations are the only way these indices are represented in application. Further transformation of these already normalized indices was deemed inappropriate.

L448-453 the text is very similar to what you presented in Lines 309-315  This type of discussion definitely belongs in the Interpretation Section and not in your presentation of data.

This was a combined results and discussion section, and this was where we began to focus more on the discussion. In restructuring the paper, we have returned to a separate discussion section.

L468 I don’t understand why the ranked percentile is almost the same in each BUI class.  How can this be possible?

When an index, such as BUI, is transformed into its percentile rank based on the climate period (2003-2017 in this case), the set of observations is distributed more equally. Ranking days by percentile places a relatively equal number of days in each range of the distribution (e.g., 0.95-1.0 will include approximately 5% of values, just as 0.80-0.85 does).

L478 This sentence is already in your Introduction, please remove.

Done

L479-484 With so few fire seasons that experienced a prolonged drought, you are in no position to perform a meaningful analysis. Regardless, it is not the long-lag fire indices that will inform you on the probabilities of burning, but rather the BUI (mid-lag). The DC informs on how large fires could potentially get, as well as the potential for negative fire severity effects. Under a changing climate with the expectation that the number and length of droughts will increase, having a better understanding of the drought code and its correlation with active fire days would indeed be valuable information to have.

It may have been phrased improperly, but 2004 and 2005 were the only years with widespread late-season drought. However, the presence or absence of widespread drought over an area as large as Alaska should not limit our ability to assess the value of long-lag drought indices. The dataset comprises 2745 days of history in each PSA, of which 146 active fire days (based on MODIS detection) occurred in the average PSA (see table below). Clearly, the average numbers of active fire days in 2004 and 2005 were the highest (24 and 19 days, respectively), and those two years produced a disproportionate share of the total fire impact (area burned) over the 15-year analysis period.

L493-499 The first paragraph of the Conclusion belongs in the Introduction

This paragraph has been moved to the closing paragraph in the Introduction as suggested.

L515 replace “offing” with “distant future”

Done

L515 replace “devises” with “designed”

Done

L527 replace “influenced” with “affected” 

Done

L540 remove coma after “snow-free”

Done



Reviewer 3 Report

This is in general a well-written piece of work. However, I have a few issues as outlined below.

 

1. First, I want just to clarify, VPD and FWI indices were calculated based on the AKIFWID dataset, i.e., basically on station observations, while EDDI and SPEI were calculated from the downscaled ERA-Interim reanalysis, right?

 

2. I would consider moving the last paragraphs from Section 2 to Section 3. I think that figures 6 and 7 would be better suited for the Results than Materials and Methods section.

 

3. I am struggling with the x axes of figures 6 and 10 depicting the percentile ranks. Fig 6b seems at first very understandable for me. It depicts the MODIS Counts ranked based on percentiles. So the class 0.95-1 should include the top 5% of days with the highest number of detected fires. So I would understand that by definition, the average daily MODIS count should decrease towards lower percentile classes. However, there seems to be a higher number of detected fires in class 0.65-0.70 than in class 0.75-0.80, for example. So the figure is clearly not depicting what I am thinking?

Figure 6a is even more confusing for me. I understand that this figure shows the MODIS Days, right? Based on lines 269-270, MODIS days can get only values 0 and 1. If there is no detected fire in a certain PSA on a certain day, the value for that day is 0, and if there is at least one fire, then the value is 1. However, on the y axis the values go up to over 200. It does not seem to be a cumulative number of days in each percentile class either, as in E&C Int there are some MODIS days already in the class 0.80-0.85 and then more days in each higher percentile class. For example, if there were a 100-day period of which 16 had detected fires, there should be 5 days in classes 0.95-1.00, 0.90-0.95 and 0.85-0.90 and 1 day in class 0.80-0.85. I don't get how the number of days increases like in the figure, so it is obviously like in Fig. 6b that the figure depicts something else than I assume.

After this, I am similarly struggling with figure 10. There the scale of the x axis is also apparently different for EDDI and the other indices? The other indices have the percentile scale, but EDDI has absolute values? Also, SPEI is mentioned on the axis but SPEI is still not shown in the figure?

In Figure 11, I do not understand what is the meaning of showing BUI Percentiles as shown in the figure because each class has the same number of days by definition?

As I am struggling with these most relevant figures of the paper, it is hard to say my final judgement for this manuscript.

 

Lastly, a few very minor things:

 

Line 190: It is hard to say whether positive bias in precipitation is more likely due to wet bias in dew point or vice versa.

 

Figures 8 and 9 are rather tables than figures.

 

Author Response

This is in general a well-written piece of work. However, I have a few issues as outlined below.

Thank you for your review.

  1. First, I want just to clarify, VPD and FWI indices were calculated based on the AKIFWID dataset, i.e., basically on station observations, while EDDI and SPEI were calculated from the downscaled ERA-Interim reanalysis, right?

That is correct. EDDI and SPEI had to be calculated using the reanalysis because they required a longer time-series than was available in AKIFWID to produce the reference climatology. We have worked to clarify these points in section 2.

  2. I would consider moving the last paragraphs from Section 2 to Section 3. I think that figures 6 and 7 would be better suited for the Results than Materials and Methods section.

This was also suggested by reviewer 2 and a significant restructuring of sections 2 and 3 was undertaken. Figures 5-7 were previously part of section 2 and have now been moved into section 3. This required reworking much of the text in both sections.

  3. a) I am struggling with the x axes of figures 6 and 10 depicting the percentile ranks. Fig 6b seems at first very understandable for me. It depicts the MODIS Counts ranked based on percentiles. So the class 0.95-1 should include the top 5% of days with the highest number of detected fires. So I would understand that by definition, the average daily MODIS count should decrease towards lower percentile classes. However, there seems to be a higher number of detected fires in class 0.65-0.70 than in class 0.75-0.80, for example. So the figure is clearly not depicting what I am thinking?

Thank you for spotting this. Your mention of 200 days in the top percentile band was the cue that tipped us off to an error we made. Of course, 5% of 2745 days in the 15-year history is only 137.25, so where the top bin is filled with instances, that is the maximum. We found an error in the formula used to calculate percentiles and, when the correction was implemented, it answered your concerns here: no more wobbles at lower percentiles in the MODIS history distribution displays. We had not spotted it earlier because, in the end, it did not result in any downstream changes to the results.

3b) Figure 6a is even more confusing for me. I understand that this figure shows the MODIS Days, right? Based on lines 269-270, MODIS days can get only values 0 and 1. If there is no detected fire in a certain PSA on a certain day, the value for that day is 0, and if there is at least one fire, then the value is 1. However, on the y axis the values go up to over 200. It does not seem to be a cumulative number of days in each percentile class either, as in E&C Int there are some MODIS days already in the class 0.80-0.85 and then more days in each higher percentile class. For example, if there were a 100-day period of which 16 had detected fires, there should be 5 days in classes 0.95-1.00, 0.90-0.95 and 0.85-0.90 and 1 day in class 0.80-0.85. I don't get how the number of days increases like in the figure, so it is obviously like in Fig. 6b that the figure depicts something else than I assume.

The counts in the lower MODIS Day percentile bins do decrease and fall short of full because there is variation among the member PSAs in their overall numbers of MODIS Days. In addition, the conditional frequencies based on the indices do not reference the overall total number of days in the 15-year history (2745); instead, each conditional frequency is referenced to the total number of days falling in a given index or index-percentile range over the 15-year history. Those per-bin totals, the MODIS Day and MODIS Count frequencies, and the associated conditional frequencies in percentile bins will therefore “wobble a bit”.
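As a minimal illustration (synthetic data, hypothetical variable names; not the actual analysis code), the key point is that each conditional frequency uses the per-bin day count as its denominator rather than the full 2745-day history, which is why sparsely populated bins can wobble:

```python
import numpy as np

rng = np.random.default_rng(1)
n_days = 2745  # 15-year daily history for one PSA

# Synthetic stand-ins: index percentile ranks and MODIS Day flags (0/1),
# with fire days made more likely at high index percentiles
index_pct = rng.random(n_days)
modis_day = rng.random(n_days) < index_pct ** 3

bins = np.arange(0.0, 1.0001, 0.05)
bin_idx = np.digitize(index_pct, bins) - 1

# Conditional frequency: P(MODIS Day | index in bin), i.e. the number of
# MODIS Days in the bin divided by the TOTAL days in that bin --
# a per-bin denominator, not the overall 2745
cond_freq = np.array([
    modis_day[bin_idx == k].mean() if np.any(bin_idx == k) else np.nan
    for k in range(len(bins) - 1)
])
print(np.round(cond_freq, 2))  # rises toward the highest percentile bins
```

With few days in a bin, a handful of MODIS Days more or less noticeably shifts that bin's frequency, producing the wobble described above.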

3c) After this, I am similarly struggling with figure 10. There the scale of the x axis is also apparently different for EDDI and the other indices? The other indices have the percentile scale, but EDDI has absolute values? Also, SPEI is mentioned on the axis but SPEI is still not shown in the figure?

Your question about the use of percentiles on the x-axis goes back to the explanation of conditional frequencies, as well as the need to provide a consistent and comparable scale for indices (FWI indices and VPD) that are not normalized. Percentile ranking of index values is a commonly used technique and, fortunately, it resolves the small-sample bias highlighted in the conditional-frequency explanation. However, there was no way to perform a normalization similar to the one integrated into the EDDI and SPEI derivations, and we decided that it did not make sense to further recast those already normalized indices.

3d)In Figure 11, I do not understand what is the meaning of showing BUI Percentiles as shown in the figure because each class has the same number of days by definition?

In Figure 11, we attempt to explain the impact of imposing “normalization” techniques on the non-normal distributions we encounter when evaluating efficacy for risk sensitivity. Normalized indices, like the Palmer Drought Index, have never been effectively implemented in fire models. Climate referencing can be done in a variety of ways, all of which distort relationships in some cases while informing them in others. Nationalizing and globalizing representations is important, but doing so masks important relationships. It seems that unpacking and evaluating the underlying physics directly could be informative.

3e) As I am struggling with these most relevant figures of the paper, it is hard to say my final judgement for this manuscript.

Thanks for the insightful comments on our explanation. We feel that they have helped us explain the method more clearly.

Lastly, a few very minor things:

 Line 190: It is hard to say whether positive bias in precipitation is more likely due to wet bias in dew point or vice versa.

We only speculate that the dew point bias is driving the wet bias in WRF since more analysis would be needed to verify that. We didn’t want to get into more detail in the paper since the downscaled data are not the focus, but WRF is initialized from the atmospheric water vapor (among other variables) and WRF then produces the precipitation driven by those forcing data. So, it is more likely that the dew points are driving the precipitation than vice-versa.  However, there may be other processes, atmospheric forcing variables, or model physics/parameterizations at play that result in the wet bias.

The sentence has been modified to communicate the dew point bias is consistent with precipitation bias (see reworked section 2.5). 

 Figures 8 and 9 are rather tables than figures.

These are tables; however, they are displayed as figures because we want to include colored shading, and tables in this journal can only be plain text.


