Article
Peer-Review Record

Evaluation and Bias Correction of CHIRP Rainfall Estimate for Rainfall-Runoff Simulation over Lake Ziway Watershed, Ethiopia

by Demelash Wondimagegnehu Goshime 1,3,*, Rafik Absi 2 and Béatrice Ledésert 1
Reviewer 1: Anonymous
Reviewer 2: Anonymous
Submission received: 16 June 2019 / Revised: 28 July 2019 / Accepted: 6 August 2019 / Published: 9 August 2019

Round 1

Reviewer 1 Report

The paper presented by Goshime et al. shows an application of the CHIRP rainfall product in the simulation of runoff with the HBV model. The authors tested several cases in the calibration of the hydrological model by using gauged data and uncorrected and bias-corrected CHIRP data, and their results show an advantage in using bias-corrected precipitation. The manuscript is well written, and the results are interesting for the community. Therefore, I recommend accepting the work for publication. However, I suggest some changes to the manuscript in order to highlight the findings:

1) The bias correction method aims to match the CHIRP probability density function (PDF) with the observations' PDF based on some statistics (e.g., the mean and the standard deviation). The results show good agreement between the PDFs, as shown in Figure 4. However, for the Meki basin, the match is not achieved for low rainfall (precipitation lower than 50 mm month-1). I wonder whether the bias correction was applied to the whole daily record or whether the factors were applied on a monthly basis. This choice could influence the results, so please comment on it in the manuscript.
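For illustration, a minimal sketch of the kind of month-of-year multiplicative bias factor this comment refers to, applied to daily satellite values. It assumes both series share a daily DatetimeIndex; the variable names are placeholders and this is not necessarily the authors' exact procedure.

```python
import pandas as pd

def monthly_bias_correct(sat_daily: pd.Series, gauge_daily: pd.Series) -> pd.Series:
    """Scale daily satellite rainfall with a month-of-year bias factor.

    The factor for each calendar month is the ratio of the long-term gauge
    monthly total to the long-term satellite monthly total for that month.
    """
    sat_monthly = sat_daily.resample("MS").sum()
    gauge_monthly = gauge_daily.resample("MS").sum()
    factor = (gauge_monthly.groupby(gauge_monthly.index.month).mean()
              / sat_monthly.groupby(sat_monthly.index.month).mean())
    # Apply each calendar month's factor to every daily value in that month
    return sat_daily * sat_daily.index.month.map(factor).to_numpy()
```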

2) I wonder whether the statistics improved after bias correction. I suggest adding the same statistical measures (CC, PBIAS, etc.) as in Tables 3 and 4, but for the daily bias-corrected precipitation. I think the daily data would improve the CC although, of course, the match will not be perfect. This would support your conclusion (iii), which states that the bias correction effectively reduced the rainfall bias; that reduction is shown in a monthly comparison, but not for the daily data that was used to feed the hydrological model.
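For reference, the two measures named here could be computed on the daily series roughly as follows; a sketch with placeholder array names, not the authors' code.

```python
import numpy as np

def cc_and_pbias(sim: np.ndarray, obs: np.ndarray) -> tuple[float, float]:
    """Pearson correlation coefficient (CC) and percent bias (PBIAS, %).

    With this convention, PBIAS > 0 indicates overestimation by the
    simulated/satellite series relative to the observations.
    """
    cc = np.corrcoef(sim, obs)[0, 1]
    pbias = 100.0 * (sim.sum() - obs.sum()) / obs.sum()
    return float(cc), float(pbias)

# e.g. compare cc_and_pbias(chirp_raw_daily, gauge_daily)
#      with    cc_and_pbias(chirp_corrected_daily, gauge_daily)
```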

3) The hydrological model calibration was performed for three cases: gauged precipitation and the uncorrected and bias-corrected CHIRP precipitation. However, on page 12, line 347, it is written: “Not that, in our calibration we did not corrected either precipitation or evapotranspiration as we did not find any reason to do such correction”. What do you mean by stating that you did not correct precipitation?

Author Response

Please see the attachment.

Author Response File: Author Response.docx

Reviewer 2 Report

Review-Comments- hydrology-539309-peer-review-v1

General comment

This paper focuses on the evaluation and bias correction of the Climate Hazards Group InfraRed Precipitation (CHIRP) satellite estimate for rainfall-runoff simulation in two case studies in Ethiopia. The evaluation of satellite-based precipitation (SBP) products is important and needed, and I think the study topic is good, but the paper is still below the level of the Hydrology journal, as additional discussion of the data evaluation and model simulation is still missing. Also, some statistical measures are needed in parts of the analysis, and the language of the paper should be improved. My comments are listed below.

Comments and questions:

1. In the Introduction section, lines 96-97: “In addition, previous inter-comparison studies of satellite rainfall products indicated that CHIRP satellite rainfall product performed better than others [13,25,26].” In fact, satellite performance is highly variable in time and space, which means a product may perform well in some regions and poorly in others, and the same holds for the other satellite products. Given the many SBP products now available, this statement is not applicable in many cases; the sentence needs to be modified to be more realistic.

 

2. In the Methods section (2.4 Statistical Indices), I would recommend using categorical verification statistics, such as POD or FAR, to evaluate the consistency between the rain gauge data and the tested SBP data.
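As a pointer, these categorical scores are built from a rain/no-rain contingency table; a minimal sketch, assuming a 1 mm/day detection threshold (an assumption, not a value from the paper).

```python
import numpy as np

def pod_far(sat: np.ndarray, gauge: np.ndarray, threshold: float = 1.0) -> tuple[float, float]:
    """Probability of detection (POD) and false alarm ratio (FAR).

    A rain event is a day with rainfall >= threshold (mm/day).
    POD = hits / (hits + misses); FAR = false alarms / (hits + false alarms).
    """
    sat_rain = sat >= threshold
    gauge_rain = gauge >= threshold
    hits = np.sum(sat_rain & gauge_rain)
    misses = np.sum(~sat_rain & gauge_rain)
    false_alarms = np.sum(sat_rain & ~gauge_rain)
    return hits / (hits + misses), false_alarms / (hits + false_alarms)
```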

 

3. In Section “3.3.1 Model Calibration and Evaluation”, the authors are recommended to add a monthly scatter plot, since they state that they considered two time scales, daily and monthly.

 

4. In Section “4.1.2. Catchment-scale Rainfall Comparison”, an additional time series comparison figure is needed in order to show the daily variation over the two basins between the SBP and rain gauge data.

 

5. I believe that climatic seasonality is pronounced in Ethiopia; therefore, the comparison should also be made for the different climatic seasons, for instance, what are the differences between the dry and wet seasons for both datasets?
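As an illustration of such a seasonal split, a short sketch follows; the wet-season months (June-September) are an assumption and should be replaced by the climatology actually used in the paper.

```python
import pandas as pd

# Hypothetical month definitions; the actual wet/dry season months for the
# Lake Ziway watershed should be taken from the paper's climatology.
WET_MONTHS = {6, 7, 8, 9}

def seasonal_means(daily: pd.Series) -> pd.Series:
    """Mean daily rainfall for the wet and dry seasons of a daily series."""
    season = daily.index.month.map(lambda m: "wet" if m in WET_MONTHS else "dry")
    return daily.groupby(season).mean()

# e.g. compare seasonal_means(gauge_daily) with seasonal_means(chirp_daily)
```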

 

6. In Section “4.2 CHIRP Satellite Bias Correction”, an additional figure showing the cumulative distribution function (CDF) of the daily rainfall time series is needed. The authors are recommended to quantify how much the bias-corrected data improve statistically, using statistical indicators reported before and after correction.
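To make this request concrete, an empirical CDF and a before/after distribution statistic could be computed along the following lines; the array names are placeholders and the two-sample Kolmogorov-Smirnov statistic is only one possible indicator.

```python
import numpy as np
from scipy import stats

def empirical_cdf(values: np.ndarray) -> tuple[np.ndarray, np.ndarray]:
    """Sorted values and their empirical non-exceedance probabilities."""
    x = np.sort(values)
    p = np.arange(1, x.size + 1) / x.size
    return x, p

# Distance between the gauge and satellite daily distributions,
# before and after bias correction (placeholder arrays):
# ks_raw = stats.ks_2samp(gauge_daily, chirp_raw_daily).statistic
# ks_corrected = stats.ks_2samp(gauge_daily, chirp_corrected_daily).statistic
```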

 

7. In “Figure 5. Model calibration result of Meki catchment (1986-1991) from gauge, uncorrected and bias-corrected CHIRP satellite rainfall input”, the hyetograph of the input rainfall data should be added to the figure.

 

8. Regarding model calibration and validation, in line 233 the authors state that “The model calibration and validation were performed at a daily time step from 1986-1991 and 1996-2000 periods, respectively”. However, in the results section “4.3 Model Calibration and Evaluation”, the calibration and validation were done for different catchments over the same period, 1986-1991. It would be better to add calibration and validation simulations for different periods, for instance, a calibration for the 1996-2000 time series.

 

9. From the results, it seems that extreme events, i.e., the high discharge peaks, are not well represented in this study; the authors are recommended to discuss why, because this gives an indication of the effect of extreme events on the bias of SBP products. Additional discussion is needed. This also underlines the importance of discussing the seasonality (dry and wet seasons) in this study.

Minor Comments:

10. The word “our” is used in many sentences throughout the paper, for instance, “our results show”. In some cases this is not meaningful and should be made specific, e.g., “the results of …… show ….”. Please improve the writing.

 


Author Response

Please see the attachment.

Author Response File: Author Response.docx

Round 2

Reviewer 2 Report

The authors kindly addressed my comments in the revised manuscript.
