Peer-Review Record

Quantifying Uncertainty in the Modelling Process; Future Extreme Flood Event Projections Across the UK

Geosciences 2021, 11(1), 33; https://doi.org/10.3390/geosciences11010033
by Cameron Ellis *, Annie Visser-Quinn, Gordon Aitken and Lindsay Beevers
Reviewer 1: Anonymous
Reviewer 2: Anonymous
Submission received: 30 October 2020 / Revised: 19 December 2020 / Accepted: 21 December 2020 / Published: 8 January 2021
(This article belongs to the Special Issue Applications of Mathematical/Statistical Techniques to Extreme Events)

Round 1

Reviewer 1 Report

This paper presents a hydrological study of annual extremes in flows produced from a multi-model ensemble of general circulation models coupled with hydrological models. The statistical parameters for annual extremes are estimated for contemporary/historical simulations as well as future projections, with the former compared to observations to provide a means for model bias correction. Overall, the methodology seems reasonable and is presented coherently along with a succinct and informative collection of results. The authors further provide a meaningful discussion of the roles of model structural uncertainty and extreme value statistical parameter uncertainty.

Additionally, I think readers will appreciate the authors’ careful documentation of the R software packages used at various stages of the analysis. This manuscript would be a welcome addition to the literature after addressing the comments here.

Major Comments

  • The authors present an understandable and welcome discussion of the relatively large uncertainty due to EV parameter estimation in Section 4.2 and possible implications for future work in 4.5. As noted above, the parametric bootstrap approach is an entirely reasonable method for characterizing this uncertainty under the assumptions implemented. While the duration of the underlying records is a key limitation, I think some discussion of possible methodological innovations would be a key addition to the paper. Modeling of environmental extremes has received some recent attention in spatio-temporal statistics, and these approaches offer options for “borrowing strength” across locations for improved precision (Cooley & Sain, 2010; Fix et al., 2018). The Bayesian approaches underpinning these investigations provide some potential for other “pooling” mechanisms.
  • The baseline periods used in the analysis are different for the model simulations (1971-2000) and the observed peak flows (1976-2005). Fundamentally, this does not seem to be problematic because the study objective is to compare climatological parameters of the observations to those from the simulations, rather than pairing individual events. With that said, this objective needs to be more explicitly articulated, as readers may be familiar with the latter strategy, and the temporal disconnect is a bit jarring.
  • I would suggest some additional discussion in the Introduction surrounding Figure 1. In particular, it is worth noting that GCMs are run at coarse resolution and that their outputs (particularly precipitation) are used as input for hydrological models. The water cycle linkage between the atmosphere and the surface is an important component of this entire system, and I think that could be emphasized more strongly in the Introduction.
  • The authors defer the evaluation of GCM vs HM uncertainty to future work, but it would be worthwhile to comment further or provide a graphical summary of the variation of the estimated parameters across GCM/HM combinations, even without a more in-depth ANOVA approach.
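As an illustration of the parametric bootstrap discussed in the first major comment, a minimal sketch follows. It uses a hypothetical 30-year annual-maximum record and scipy's GEV implementation; the record, parameter values, and return period are illustrative assumptions, not taken from the manuscript, and this is not necessarily the authors' implementation.

```python
import numpy as np
from scipy.stats import genextreme

rng = np.random.default_rng(42)

# Hypothetical 30-year annual-maximum flow record (m^3/s); records of
# comparable length are what drive the large EV parameter uncertainty.
amax = genextreme.rvs(c=-0.1, loc=100, scale=25, size=30, random_state=rng)

# Fit a GEV to the record (scipy's shape c is the negative of the usual xi).
c_hat, loc_hat, scale_hat = genextreme.fit(amax)

# Parametric bootstrap: resample from the fitted GEV, refit, and collect
# the 100-year return level from each replicate.
n_boot = 200
rl100 = np.empty(n_boot)
for i in range(n_boot):
    resample = genextreme.rvs(c_hat, loc=loc_hat, scale=scale_hat,
                              size=amax.size, random_state=rng)
    c_b, loc_b, scale_b = genextreme.fit(resample)
    rl100[i] = genextreme.ppf(1 - 1 / 100, c_b, loc=loc_b, scale=scale_b)

# Percentile-based 95% interval on the 100-year return level.
lo, hi = np.percentile(rl100, [2.5, 97.5])
print(f"100-year return level 95% interval: [{lo:.1f}, {hi:.1f}] m^3/s")
```

The width of this interval for a 30-year record is precisely the limitation noted above; spatial pooling approaches (e.g., Cooley & Sain, 2010) aim to narrow it by borrowing strength across catchments.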

Minor Comments/Edits

  • I think the journal publisher encourages a listing of abbreviations at the end of the article. This addition would be welcome for the current article.
  • Two related remarks regarding figures. For the multi-panel figures (4, 5, 6), I would suggest more prominent labels (darker color and/or larger font) for the panel headings. For example, in Figure 2, the scenario and return period labels/headings appear somewhat faint. In addition, the figure captions could generally use a bit of additional description so that they can stand alone from the main text to a certain extent. This would include defining the abbreviations in Figure 1.
  • The discussion of “validation” at the end of section 2.1.3 sounds like “verification” to me (National Research Council, 2012). That is, verification is the process of ensuring that software reproduces a mathematical formulation or algorithm. The manual process described in the current manuscript seems to meet this objective. On the other hand, validation is the evaluation of a modeled quantity against actual observations.
  • In the third paragraph of the Introduction, is the Met Office record reference correct? 17 record-breaking months since 1910 does not seem right.
  • Are there some relevant references for the behaviors of MMEs versus PPEs for either GCMs or HMs or both that can be mentioned in the introduction? Also, at that point it is worth noting whether these ensemble types apply to GCMs or HMs (or both) in the current study.
  • Line 134 (end of 2.1.1): I think this is meant to be 41 total observations (year, catchment combinations) in the entire record.
  • Section 2.1.2: Is the bias correction applied to each model “chain” separately?
  • It might be noted in 2.1.3 that the distributions are described in terms of their quantile functions with the forms provided. That is, the expressions provide the flow value corresponding to a specified exceedance probability F.
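To make the quantile-function point concrete, a minimal sketch of the GEV case follows. It is written here for non-exceedance probability F, one common convention in flood frequency analysis; the function name, parameter names, and sign convention for the shape are assumptions for illustration, not forms taken from the manuscript.

```python
import numpy as np

def gev_quantile(F, loc, scale, shape):
    """GEV quantile function: flow value with non-exceedance probability F.

    Illustrative form only (Coles-style parameterisation, shape = xi);
    the manuscript's own expressions may use a different convention.
    """
    if shape == 0:  # Gumbel limit
        return loc - scale * np.log(-np.log(F))
    return loc + (scale / shape) * ((-np.log(F)) ** (-shape) - 1)

# 100-year return level: non-exceedance probability F = 1 - 1/100,
# with hypothetical location/scale/shape values.
q100 = gev_quantile(1 - 1 / 100, loc=100.0, scale=25.0, shape=0.1)
```

Inverting the fitted distribution in this way is what yields the return-level estimates summarised in the results.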

 

Cooley, D., & Sain, S. R. (2010). Spatial Hierarchical Modeling of Precipitation Extremes From a Regional Climate Model. Journal of Agricultural, Biological, and Environmental Statistics, 15(3), 381–402. https://doi.org/10.1007/s13253-010-0023-9

Fix, M. J., Cooley, D., Sain, S. R., & Tebaldi, C. (2018). A comparison of U.S. precipitation extremes under RCP8.5 and RCP4.5 with an application of pattern scaling. Climatic Change, 146(3–4), 335–347. https://doi.org/10.1007/s10584-016-1656-7

National Research Council. (2012). Assessing the Reliability of Complex Models: Mathematical and Statistical Foundations of Verification, Validation, and Uncertainty Quantification. The National Academies Press.

Author Response

Please see the attachment.

Author Response File: Author Response.docx

Reviewer 2 Report

The manuscript investigates the effects of climate change on discharge in the UK by evaluating how the probability distribution of extreme discharge events is going to change under different greenhouse gas emission scenarios, according to IPCC reports.

The Authors make use of a modelling chain including climate model predictions, bias correction, rainfall-runoff transformation based on conceptual hydrological modelling, and extreme value analysis. The innovative contribution of this work is that of providing results in terms of the uncertainty that characterizes predictions, including model structure and parametrization uncertainty. In fact, at each step of the modelling chain several competing/alternative models are considered. The procedure is applied to about 200 UK catchments, and results are discussed at the regional scale, showing the spatial and temporal patterns of expected change.

The Authors also compare results with those obtained by making use of older versions of the climate models. Indeed, at the very beginning of the manuscript the Authors state that climate modelling has improved. In my opinion, this latter point presents some weaknesses. It is not clear in what sense the climate models differ from, and improve upon, their predecessors; furthermore, the large uncertainty characterizing the results does not allow clear and general conclusions to be drawn on this issue. I would expect, in general, a deeper discussion of this point, since it is clear that the reliability of climate models is a topic still requiring strong efforts from the scientific community.

Based on this, I recommend considering this manuscript for publication after minor revisions, improving the discussion of the climate modelling issue.

Author Response

Please see the attachment.

Author Response File: Author Response.docx
