Peer-Review Record

Can Satellite and Atmospheric Reanalysis Products Capture Compound Moist Heat Stress-Floods?

Remote Sens. 2022, 14(18), 4611; https://doi.org/10.3390/rs14184611
by Lei Gu 1,*, Ziye Gu 1, Qiang Guo 2, Wei Fang 1, Qianyi Zhang 1, Huaiwei Sun 1, Jiabo Yin 3 and Jianzhong Zhou 1
Reviewer 1: Anonymous
Reviewer 2: Anonymous
Reviewer 3:
Submission received: 19 July 2022 / Revised: 31 August 2022 / Accepted: 12 September 2022 / Published: 15 September 2022

Round 1

Reviewer 1 Report

The manuscript by Gu et al. addresses a very important research topic: capturing moist heat-stress floods using satellite and reanalysis data. After reading the manuscript up to Section 3.3, I decided to request that the Authors submit the manuscript to either the 'WATER' or 'HYDROLOGY' MDPI journal. There is no reason why this manuscript should be reviewed in the 'REMOTE SENSING' journal. The Authors may justify the submission based on the use of satellite data, but the entire analysis consists of comparing data in 8 regions, setting up two lumped hydrological models, and analyzing the output of the hydrological models. I don't find anything related to 'Remote Sensing.'

Apart from my main comment, the results need to be presented for the sub-basins (the 120 catchments shown in Figure 1) within the 8 regions of the study area. The area within each of those 8 boxes is too large. By presenting only average error statistics (as shown in Figure 2 onwards), the Authors are limiting the scope of their work. I will restrict my comments to this, because I strongly encourage the Authors to submit this manuscript to another, more relevant journal.

Author Response

Please see the response in the attached File.

Author Response File: Author Response.docx

Reviewer 2 Report

This study analyzed the ability of satellite remote sensing and reanalysis data to capture floods and CMHFs from 2001 to 2020 over 120 catchments in China. The overall writing is good. However, there are still some issues to address.


1.     Section 2.1.2: please explain why the 120 catchments were selected, and provide more information about the streamflow data. What is the frequency of observation? How many stations are in each catchment, and where are they located? Are the data available for every catchment throughout 2001-2020? Are there any missing values? Also, the number of catchments per sub-region differs greatly: some sub-regions contain dozens of catchments while others contain only one or two, which lack representativeness. The authors should consider redividing the sub-regions or redistributing the catchments.

2.     Section 2.2.1: most readers may not be familiar with POD, FAR, CSI and HSS. Please show more details and give a direct sense of each index by indicating whether larger or smaller values are better. Also, the equation for the false alarm ratio (FAR) is not shown.
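For reference, the four scores the reviewer names have standard textbook definitions from a 2x2 contingency table of hits, misses, false alarms and correct negatives. The sketch below (plain Python, illustrative only, not taken from the manuscript) shows those standard formulas and the orientation of each score:

```python
def categorical_scores(hits, misses, false_alarms, correct_negatives):
    """Standard 2x2 contingency-table verification scores.

    POD, CSI and HSS: larger is better (1 is a perfect score).
    FAR: smaller is better (0 is a perfect score).
    """
    h, m, f, c = hits, misses, false_alarms, correct_negatives
    pod = h / (h + m)            # probability of detection
    far = f / (h + f)            # false alarm ratio
    csi = h / (h + m + f)        # critical success index
    # Heidke skill score: skill relative to random chance
    hss = 2 * (h * c - m * f) / ((h + m) * (m + c) + (h + f) * (f + c))
    return pod, far, csi, hss

# Example counts (hypothetical): 80 hits, 20 misses, 10 false alarms,
# 890 correct negatives.
pod, far, csi, hss = categorical_scores(80, 20, 10, 890)
```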

3.     Section 2.2.2 is poorly written. It is very unclear how the authors ran the two models. Is it because some catchments are too small to contain a meteorological grid cell that the authors applied Thiessen polygons as a first step? For large catchments containing multiple grid cells, how were the meteorological data treated over the catchment? Why were the Xinanjiang and GR4J models selected? Do they show good performance in China in past studies? What are the parameters to be calibrated in the two models? Are they the same or different? How were the models calibrated and validated? Was the aim to maximize KGE for all catchments as a whole or for each individual catchment? At what temporal scale: daily, monthly, annual, or interannual mean? Please explain the Shuffled Complex Evolution (SCE-UA) optimization algorithm. Please provide more information on the two models. Please show the equation for KGE.
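The KGE the reviewer asks to see defined is the Kling-Gupta efficiency in its standard 2009 form, KGE = 1 - sqrt((r-1)^2 + (alpha-1)^2 + (beta-1)^2). A minimal sketch (assuming this standard form; the manuscript may use a variant such as the 2012 non-parametric one):

```python
import numpy as np

def kge(sim, obs):
    """Kling-Gupta efficiency; 1 is a perfect score.

    r:     linear correlation between simulation and observation
    alpha: variability ratio, std(sim) / std(obs)
    beta:  bias ratio, mean(sim) / mean(obs)
    """
    sim, obs = np.asarray(sim, float), np.asarray(obs, float)
    r = np.corrcoef(sim, obs)[0, 1]
    alpha = sim.std() / obs.std()
    beta = sim.mean() / obs.mean()
    return 1.0 - np.sqrt((r - 1) ** 2 + (alpha - 1) ** 2 + (beta - 1) ** 2)
```

A calibration routine such as SCE-UA would maximize this value (per catchment or pooled, which is exactly the ambiguity the reviewer flags).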

4.     Section 3.1: are IMERG and ERA compared against CN05.1? In Figure 2, ERA generally outperforms IMERG, but in Figure 3, IMERG is better than ERA. What causes this discrepancy? Please use the same color scale for all indices in Figure 2. In Figure 3, why do S3, S4 and S5 have much larger MAE than the other Ss?

5.     The definition of CMHF is not very clear. Please elaborate using equations.

Author Response

Please see the attachment.

Author Response File: Author Response.docx

Reviewer 3 Report

This manuscript (MS) tries to evaluate the capability of IMERG and ERA5-LAND in driving conceptual hydrological models to reproduce CMHFs.


There are already many publications focusing on evaluating various precipitation datasets from a hydrological perspective, and this MS selects one specific category, i.e., compound moist heat stress-floods. I think the idea is interesting.


I have several concerns regarding the methodology and results, mainly from a hydrological perspective.


(1)    The authors used three precipitation datasets to drive the hydrological models, but how did they deal with (potential) evaporation data, which is another crucial input when calibrating and validating models against long streamflow series?

(2)    I found they also selected catchments in the source region of the Yangtze River, which experiences snowfall and frozen soil; how do they tackle these in XAJ and GR4J (according to Fig. 4, it has a very good KGE!)?

(3)    The authors used KGE to quantify the modeling performance, which is fine, but I also want to see the relative error of runoff volume. I am worried about water balance issues arising from the precipitation bias.
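The relative error of runoff volume the reviewer requests is commonly defined as the percentage bias of the simulated total volume against the observed total (an assumption here; the authors may adopt a different convention):

```python
def relative_volume_error(sim, obs):
    """Relative error of total runoff volume, in percent.

    0 means the simulated and observed volumes balance exactly;
    positive values indicate overestimation of total runoff.
    """
    return 100.0 * (sum(sim) - sum(obs)) / sum(obs)
```

A precipitation product with a systematic wet or dry bias would show up directly in this metric even when KGE looks acceptable, which is the water-balance concern being raised.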

(4)    Many figures in the MS (e.g., Figures 7, 10, ...) show the results of the different precipitation datasets; they are beautiful but not clear enough. Are there any metrics to quantify the differences among IMERG, ERA5 and OBS?

(5)    The authors use streamflow simulations driven by CN05.1 as the "observation". That is fine if they focus on future projections (e.g., ref 43); however, in this study they should have in-situ streamflow observations, which can be used directly for evaluation. This issue should be clarified.

(6)    The authors seem to have forgotten to upload the appendix materials, so I cannot see the details of the catchments and simulations. I will double-check them during the next round of review.

Author Response

Please see the attachment.

Author Response File: Author Response.docx

Round 2

Reviewer 2 Report

This paper is in good shape now.

Author Response

Please see the attachment. 

Author Response File: Author Response.docx

Reviewer 3 Report

The authors have revised the MS based on my comments; however, after reading the revised MS I found that considerable uncertainty remains in the modeling and analysis. I understand it cannot be fully addressed in this study, but it is suggested that the authors discuss the uncertainty/limitations in the discussion section. After that, the MS can be accepted for publication.

Author Response

Please see the attachment.

Author Response File: Author Response.docx
