Article
Peer-Review Record

Study on Radar Echo-Filling in an Occlusion Area by a Deep Learning Algorithm

Remote Sens. 2021, 13(9), 1779; https://doi.org/10.3390/rs13091779
by Xiaoyan Yin 1, Zhiqun Hu 1,2,*, Jiafeng Zheng 1, Boyong Li 1 and Yuanyuan Zuo 1
Reviewer 1: Anonymous
Reviewer 2: Anonymous
Submission received: 18 March 2021 / Revised: 28 April 2021 / Accepted: 29 April 2021 / Published: 2 May 2021

Round 1

Reviewer 1 Report

See attachment.

Comments for author File: Comments.docx

Author Response

Dear reviewer,

Thank you very much for your careful reading and constructive comments on our manuscript; they are all valuable and very helpful for revising and improving it. We have made every effort to revise the manuscript according to your comments and suggestions, and have marked the revised parts in red and blue font. We have also responded point by point to each comment, in which the "line number" refers to the line order in the revised manuscript. Please see the revised manuscript in the attachment.

We appreciate your kind work.

Best regards,

Xiaoyan Yin and the co-authors

----------------------------------------------------------------------------------------

Point 1: Line 77: Change to “The VPR correction method has its shortage.” 


Response 1: Thank you for your suggestion. We have corrected the sentence in line 78.

Point 2: Line 79: Change to “has been developing rapidly”.

Response 2: Thank you for pointing out this mistake. We have corrected it in line 81.

Point 3: Line 82: Change “cloud-age” to “cloudage”.

Response 3: Thank you for your suggestion. We have corrected the word in line 84.

Point 4: Line 108-109: Change to “spatial resolution of 1 km × 1°”.

Response 4: Thank you for this suggestion. We have corrected the word in line 110.

Point 5: Line 119: Change to “Figure 1. Transmittance of SA radar in Nanjing at the elevation angle of 0.5°.”

Response 5: Thank you for your suggestion. We have corrected the figure captions in line 118.

Point 6: Line 121-122: The terms of “tag”, label’, and “feature data” should be explained here.

Response 6: Thank you for this significant suggestion, which we have adopted. We supplement the meaning of “tag”, “label”, and “feature data” in machine learning, and have rephrased the sentence as “The labels are the conclusions to be obtained, namely the echo intensities at 0.5° elevation, and the feature data are the evidence for correction, namely the intensities at the upper elevations corresponding to the locations of the labels.” (Lines 125-128)

Point 7: Line 133-136: Rewrite this part. What is “the number of radial gates”? Based on Eq. (2), the unit of variable G is km.

Response 7: Thank you for pointing out this mistake. We have corrected Equation (2) in line 89. After dividing by the radial resolution of the radar (1 km), G stands for “the number of radial gates”, namely the gate number. (Lines 142-143)

Point 8: Line 160: What kind of precipitation are selected in the “114 volume scan data”? How many events and in which season?

Response 8: Thank you for this comment; it greatly improves the rigor and clarity of our manuscript. We have added the months of the 114 volume scan data in the revised manuscript (Lines 167-168): they span April to July, all within the rainy season in Nanjing. Because cloud structures are large in the rainy season, these data provide enough samples.

Point 9: Line 177: It is adequate to briefly explain why 3 hidden layers model is selected for the EFnet here.

Response 9: Thank you for this significant suggestion. After many tests, a model with 3 hidden layers was selected for the EFnet to balance fitting speed and accuracy. (Line 181)
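As a minimal illustration of such a three-hidden-layer structure, a NumPy forward-pass sketch follows; the 63-value input size and the hidden widths of 64 are illustrative assumptions, not the paper's exact configuration.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(0.0, x)

# Hypothetical layer sizes: 63 inputs (a 3x3x7 feature block), three
# hidden layers, one output (the 0.5-deg reflectivity). The paper's
# exact widths are not stated in this response.
sizes = [63, 64, 64, 64, 1]
weights = [rng.standard_normal((m, n)) * 0.1
           for m, n in zip(sizes[:-1], sizes[1:])]
biases = [np.zeros(n) for n in sizes[1:]]

def forward(x):
    # Three hidden layers with ReLU, then a linear output layer.
    for w, b in zip(weights[:-1], biases[:-1]):
        x = relu(x @ w + b)
    return x @ weights[-1] + biases[-1]

batch = rng.standard_normal((4, 63))
print(forward(batch).shape)  # (4, 1)
```

Adding hidden layers beyond this point mainly slows fitting, which matches the trade-off the response describes.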

Point 10: Line 178: In Fig.3 “Hidden layer” number should be 1, 2, and 3.

Response 10: Thank you for pointing out this mistake. It has been corrected in line 187.

Point 11: Line 193: Change “weather processes” to ‘weather events”.

Response 11: Thank you for pointing out this mistake. We have corrected it in line 202.

Point 12: Line 198-199: Why is the weighting function w given as [10, 5, 2, 5, 8, 10]? Same weights for weak and strong echoes?

Response 12: Thank you for your suggestion. Since the number of weak-echo points far exceeds that of strong-echo points, the accumulated error of weak echoes may degrade the whole model during training. Therefore, we increase the weight of weak echoes, that is, we focus the training on weak echoes to ensure their training effect. (Lines 206-208)

Point 13: Line 222: why is the performance of the model for the section from 125 to 150 km not well comparing to the others?

Response 13: Thank you for this significant comment; it greatly improves the clarity of our manuscript. Because the radar beam broadens with distance and altitude, the error is larger in the long-range section than in the short-range sections. Please see the details in lines 234-235.
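For illustration, the beam-broadening effect can be checked with a back-of-the-envelope arc-length estimate; the ~1° half-power beamwidth used here is a typical S-band value assumed for illustration, not a figure quoted from the paper.

```python
import math

# Assumed half-power beamwidth of ~1 deg, typical for an S-band radar.
beamwidth_rad = math.radians(1.0)

for r_km in (25, 75, 150):
    # Arc-length approximation: cross-beam width grows linearly with range.
    width_km = r_km * beamwidth_rad
    print(f"{r_km:3d} km -> beam width ~{width_km:.1f} km")
```

At 150 km the beam is roughly 2.6 km across versus about 0.4 km at 25 km, so a single gate averages over a much larger (and higher) volume in the far sections.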

Point 14: Line 240: What do you mean “a very slight loss” here? Give an example to elaborate it.

Response 14: Thank you for this significant suggestion. We have adopted it and have given an example to elaborate the meaning of “a very slight loss”. Please see the details in lines 255-257.

Point 15: Line 244: Change to “Case study”.

Response 15: Thank you for your suggestion. We have corrected it in line 270.

Point 16: Line 269: Suggest to plot radial profiles of raw observed and predicted reflectivity using models compiled with the MSE and the self-defined loss function in the azimuth where beam blockage occurs in order to clearly illustrate the performance of the models in different sections.

Response 16: Thank you for this significant suggestion. Regarding plotting radial profiles to compare the performance of the MSE and self-defined loss functions: the difference between the predicted echoes of the two models appears only at 0.5° elevation, not at the upper elevations. That is, the difference between the MSE and self-defined loss functions is reflected only at 0.5° elevation, which can be seen clearly in the PPI images.

Point 17: Fig.9 and 10: Add the indication of red and blue ellipses in the figure captions.

Response 17: Thank you for this suggestion. The indication of red and blue ellipses has been added in the figure captions in lines 289 and 328.

Point 18: Line 279-283: Plot radial profiles of observed and predicted reflectivity using models compiled with the MSE and the self-defined loss function in this region to show your statement “more consistent”.

Response 18: Thank you for this significant suggestion. As with your suggestion 16, this “more consistent” behavior can be seen in the PPI images.

Point 19: Line 297-300: It is really hard to find that “the self-defined loss function performs better on strong echo fitting” from Fig.10. Please find another way to illustrate it.

Response 19: Thank you for this significant comment. We have adopted your suggestion and counted the strong-echo points (>40 dBZ) in the raw observation and in the predictions of the models compiled with the MSE and the self-defined loss functions. There are 1807 strong-echo points (3.37%) in the actual echo, 1740 (3.24%) predicted by the MSE loss function model, and 1841 (3.43%) predicted by the self-defined loss function model, which shows that the self-defined loss function's prediction of strong echo is closer to the raw observation. Please find the details in the revised manuscript. (Lines 333-338)
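The counting described above amounts to thresholding the reflectivity field; a sketch follows, in which the `strong_echo_stats` helper and the synthetic field are illustrative, not the paper's data.

```python
import numpy as np

def strong_echo_stats(refl_dbz, threshold=40.0):
    """Count gates above a reflectivity threshold and their share (%)."""
    n_strong = int((refl_dbz > threshold).sum())
    return n_strong, 100.0 * n_strong / refl_dbz.size

# Synthetic reflectivity field for illustration only (azimuth x gate).
rng = np.random.default_rng(1)
field = rng.normal(20.0, 12.0, size=(360, 150))
n, pct = strong_echo_stats(field)
print(n, round(pct, 2))
```

Comparing these percentages between the raw observation and each model's prediction gives the 3.37% / 3.24% / 3.43% figures cited in the response.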

Point 20: Line 302-312: By comparing with Bengbu radar’s measurement in this area surrounded by red rectangle to demonstrate the deep learn algorithm show “better correction effect”, it is not right, at least is not rigorous. Firstly, you even didn’t show the echoes observed by both Nanjing and Bengbu radars at the same height. Secondly, the sizes of resolution volume in this area of the both radars are different which leads the difference in observed reflectivity as well. Thirdly, you haven’t shown the algorithm has capability to predict what radar did not observe.

Response 20: Thank for your significant comment. As you said, it is not rigorous to demonstrate the deep learn algorithm show “better correction effect” by comparing with Bengbu radar’s measurement. It is confused that the echoes are slightly stronger in the far distance. Therefore, we have revised the relevant expressions in the manuscript. In order to compare the filling effect, we interpolate the data of Bengbu radar into the same area for comparison, and find that the data are consistent with the Bengbu radar. Therefore, we guess that the echo in this region is stronger. In addition, the reason why the model predicts that the long-distance echo is stronger may be: in this case, it is July 3, while the average height of the melting layer in Nanjing is about 4.8 km in July, and the height in range 125-150 km is about 4.5-5 km. Therefore, it is possible that there is a melting layer in the region, and the melting layer is thicker in this weather event, which leads to the stronger output of the model. In the future work, we need to add more stable precipitation echo data to the training dataset to fit the long-range echo and further improve the algorithm. (Lines 341-349)

Point 21: Line 318: Rewrite the caption of Fig.11.

Response 21: Thank you for pointing out this mistake. It has been rewritten in line 358-359.

Point 22: Line 336: How do you define a dataset?

Response 22: Thank you for your suggestion. The definition of the dataset is given in Section 2.2 (line number) of the manuscript. Combined with your suggestion 6, we define the dataset as follows: the labels are the conclusions to be obtained, namely the echo intensities at 0.5° elevation, and the feature data are the evidence for correction, namely the intensities at the upper elevations corresponding to the locations of the labels. (Lines 126-129)

Point 23: Line 340: Remove “detected”.

Response 23: Thank you for your suggestion. We have removed these words in line 381.

Point 24: Line 341-345: If you would add dual-pol parameters as input features in the deep learning algorithm to echo-filling, you could compare yours to the results by using the simpler method proposed by Zhang et al. (2013).

Zhang, P., D. Zrnic, and A. Ryzhkov, 2013: Partial Beam Blockage Correction Using Polarimetric Radar Measurements. Journal of Atmospheric and Oceanic Technology, 30, 861–872, doi:10.1175/JTECH-D-12-00075.1

Response 24: Thank you for your kind suggestion. In the next step, we will perform the polarimetric radar correction, and compare with the method mentioned in this paper.

Point 25: Line 349: Change “during radar blockage” to “for radar beam blockage”.

Response 25: Thank you for your suggestion. We have changed these words in line 390.

 

Author Response File: Author Response.docx

Reviewer 2 Report

Review of Study on Radar Echo-Filling in an Occlusion Area by a Deep Learning Algorithm

 

This study examines the use of a deep learning algorithm to fill in areas of echo occlusion in the lowest radar scan of an S-band radar in China. The radar has two main regions of occlusion to the south-east and south-west. Therefore, the northern parts of the radar volume were able to be used to train the algorithm. The training involved using the echo intensities in the elevations above the lowest one to predict the echo intensity of the bottom scan. The number of upper elevations used to predict this was a function of distance from the radar.

Overall the paper is quite well written and laid out, with a straight-forward presentation of methodology and results. The recommendations for further work are reasonable. There could be some improvements made to the robustness of the analysis, however, with expansion of the analysis section.

Specific comments:

  1. "the range of 0.5° is divided into six sections" – The meaning of this sentence was not initially clear to me. Maybe reword to something like: the 0.5° elevation scan is divided into six range bands, every 25 km, to train a different model for each band.
  2. L77-78. It would be interesting to clarify why the VPR in convective storms is difficult to use, e.g. the advection of storms and the lag between different elevations being scanned results in a poor relationship between upper and lower reflectivities, and if this could also impact your methodology. How is your methodology affected by inherent characteristics of reflectivity in convective storms?
  3. Why is clean data important? Does or can the EFnet attempt to fill in areas with total loss due to permanent echoes? Do permanent echoes affect the upper scan elevations? How does the model handle areas with removed (cluttered) values in its feature data (if any)? Are they also set to -20 dBZ?
  4. In Figure 1, you should also show transmittance for the 1.5° scan to demonstrate that this elevation does not suffer from blockage. If it did, this could invalidate the model use of the 1.5° elevation scan as a predictor. Additionally, the figure label should indicate the scan elevation.
  5. Given that the referenced literature is all in Chinese, a very brief summary of the methods and types of echoes removed, or the degree of accuracy of the methods, would be helpful to the wider community to understand how comprehensive the quality control is.
  6. Figure 2. Is the bent grid in the lower right meant to represent a 3x3 grid on arcs of a circle? The dashed lines do not appear to be circular arcs. Can this be improved?
  7. Figure 3. All Hidden layer1. Should there be a layer2 and layer3?
  8. …points of strong echo are generally fewer than those of weak echo.
  9. It's not really clear why the weight vector seems to equally weight the lowest and highest reflectivities, when you state your intention is to increase the weight of strong echoes only. Also, are there no normalized reflectivity values above 0.6?
  10. What is a cut? I'm not entirely clear on the use of "cut" in this manuscript. It appears to be used as a synonym for scan or sweep, but cut to me implies compositing across multiple scans, which doesn’t seem to be what you are doing here. Perhaps use the term sweep or scan instead, which are more common in the radar literature.
  11. In Figure 5, the predicted strong echo is slightly larger than the measured value? It looks more like the other way, with greater point density to the upper left of the 1-to-1 line at the higher dBZ values. I.e. the prediction is underestimating.
  12. L239-240."The strong echoes have been obviously improved"? It doesn't seem very obvious. "The values in Table 3 have a very slight loss compared with Tables 2 and 3"? This sentence doesn't make sense. You need to clarify this paragraph, and maybe find a better way to represent the differences between Figures 5 and 6, which superficially look similar enough that it's not clear one is improved in any way. Eyeball comparison of two scatter plots is not a good way to demonstrate such small differences.
  13. Figs 7 and 8 are using stratiform input data? This should be stated in the figure captions for clarity.
  14. It's not clear why Figure 9 is included after Figure 7 and 8, rather than when it is first mentioned. Consider moving it, or rework this section.
  15. L282-283. The area in the blue ellipse is not previously demonstrated to be convective precipitation embedded within this stratiform case. Do you mean heavy stratiform? How did you determine if the precipitation was convective or stratiform? Also, the radar observes precipitation not cloud, so you aren't simulating convective clouds but precipitation. Check the usage of this terminology also in the rest of the paper.
  16. L284-288. You don't really make much use of the values in the table. Can you highlight some meaningful differences in the table that demonstrate the relative skill of the model?
  17. Why does the self-defined loss function perform better in Table 4, but not in Table 3 compared to Table 2? Make this clearer in the text.
  18. I don't think you need to mention Figure 9 here, just describe what Figure 10 is by itself. Likewise for the caption of figure 10. The reader shouldn't need to rely on finding the caption to a different figure to understand the contents.
  19. It is not really convincing that it is a better correction effect. It depends on whether you are interpreting the reflectivity at that range as at a particular height, or as a representation of what is happening at the ground. Maybe it gives a better value if you are eventually producing a QPE product from the reflectivity, but of course this would need to be demonstrated.
  20. L313 and Table 5. A more complete description of the contents of Table 5. i.e. which data it applies to, should be included in the text and table caption.
  21. Figure 11 caption seems to be missing content.
  22. L328-329 "the echo intensity at far distances is obviously improved". It is obviously increased, it is not demonstrated that it is improved in all circumstances and for all applications.
  23. It is a bit strange to focus on stratiform precipitation in the convective event, and vice versa. You should identify the precipitation types more clearly at the start of the sections.
  24. To show the benefits of using a machine learning method, is it possible to compare with a simplistic echo-filling solution, such as just down-filling the values using the echo intensities from the scan immediately above? What benchmark are you trying to improve upon?

Author Response

Dear reviewer,

Thank you very much for your careful reading and constructive comments on our manuscript; they are all valuable and very helpful for revising and improving it. We have made every effort to revise the manuscript according to your comments and suggestions, and have marked the revised parts in red and blue font. We have also responded point by point to each comment, in which the "line number" refers to the line order in the revised manuscript. Please see the revised manuscript in the attachment.

We appreciate your kind work.

Best regards,

Xiaoyan Yin and the co-authors

----------------------------------------------------------------------------------------

Point 1: "the range of 0.5° is divided into six sections" – The meaning of this sentence was not initially clear to me. Maybe reword to something like: the 0.5° elevation scan is divided into six range bands, every 25 km, to train a different model for each band.


Response 1: Thank you for your suggestion. We have reworded the sentence in lines 18-19.

Point 2: L77-78. It would be interested to clarify why the VPR in convective storms is difficult to use, e.g. the advection of storms and the lag between different elevations being scanned results in poor relationship between upper and lower reflectivities, and if this could also impact on your methodology. How is your methodology affected by inherit characteristics of reflectivity in convective storms?

Response 2: Thank you for this significant suggestion. We have added this to the introduction; please see the details in lines 78-80. As you said, the inherent characteristics of convective storms, such as baroclinicity, can result in a poor relationship between upper and lower reflectivities, so the VPR method is difficult to apply in convective storms. A VPR is the information of a single line, using limited volume scan data, while deep learning uses the information of multiple parameters in a volume and can learn the influence of the internal characteristics of convective storms to some extent. Because the upper feature values of the deep learning method used in this study form a 3-dimensional block of 3×3×N, we could even use 5×5×N volume scan data in subsequent studies. Deep learning obtains a fitting relationship with the 0.5° elevation reflectivity factor by adjusting the weights of the upper feature values. For example, for the 1-25 km distance band, we used a 3×3×7 block of reflectivity factors, i.e., 63 feature values, from the seven elevation layers above, and continuously adjusted the weights of these 63 values, whereas VPR has only seven feature values. In addition, deep learning uses a large amount of scan data to find a fitting relationship, rather than a limited and simple mean value.
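For illustration, extracting such a 3×3×N feature block from a polar volume might look like the following sketch; the array layout and the `feature_block` helper are assumptions for the example, not the paper's code.

```python
import numpy as np

def feature_block(volume, iaz, igate, n_elev):
    """Take the 3x3 neighborhood at the n_elev elevations above the
    lowest scan, centered on the label's (azimuth, gate) location.
    volume: shape (n_elevations, n_azimuths, n_gates), with elevation
    index 0 as the 0.5-deg scan (this layout is an assumption)."""
    az = np.arange(iaz - 1, iaz + 2) % volume.shape[1]  # wrap azimuth
    g = slice(igate - 1, igate + 2)
    # Elevations 1..n_elev are the layers above the lowest scan.
    return volume[1:1 + n_elev, az, :][:, :, g]  # shape (n_elev, 3, 3)

# Synthetic 8-elevation volume, 360 azimuths x 150 one-km gates.
vol = np.arange(8 * 360 * 150, dtype=float).reshape(8, 360, 150)
feats = feature_block(vol, iaz=100, igate=50, n_elev=7).ravel()
print(feats.size)  # 63 feature values for the 1-25 km band
```

Flattening the block yields the 63 input values per label mentioned above; farther range bands would use fewer elevation layers (smaller N).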

Point 3: Why is clean data important? Does or can the EFnet attempt to fill in areas with total loss due to permanent echoes? Do permanent echoes affect the upper scan elevations? How does the model handle areas with removed (cluttered) values in its feature data (if any)? Are they also set to -20 dBZ?

Response 3: Thank you for your suggestion.

  • Data cleaning ensures the accuracy and effectiveness of training: we must build the training dataset from data without any pollution. (Line 121)
  • EFnet is a fitting relationship, learned by deep learning, between the reflectivity factors of the upper and lower layers of the unobstructed zone, which is then used to fill in the reflectivity factor of the 0.5° obstructed zone. Therefore, the model cannot fill multilayer occlusion caused by a fixed target. In future studies, we can try two methods: first, if there is obvious occlusion at the 1.5° elevation layer, fill its reflectivity factor from the layers above it and then build the filling model for the 0.5° elevation layer; second, if there is occlusion in multiple layers, try to use the echoes of the neighboring left and right radials to fill the occluded area.
  • The selected Nanjing radar has transmittance greater than 0.95 in the 132-137° and 220-233° directions of the 1.5° elevation layer and almost no occlusion elsewhere. In combination with your fourth suggestion, we have added a transmittance map for the 1.5° elevation layer. (Lines 117-118)
  • First, quality control of the radar data is performed to remove the influence of clutter, and the values of clutter points are set to -20 dBZ, but these data are not added to the training dataset. In conjunction with your fifth suggestion, this is described in more detail in lines 122-125 of the revised version. After quality control, if the value of a tag is -20 dBZ, that gate is not output into the training dataset; likewise, a sample is not appended to the dataset when all of its feature values are -20 dBZ. Please see lines 150-153 of the revised manuscript for details.
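A minimal sketch of this filtering rule, assuming -20 dBZ marks removed gates as described above; `keep_sample` is a hypothetical helper name.

```python
import numpy as np

MISSING = -20.0  # dBZ value assigned to removed/clutter gates

def keep_sample(label, features):
    """Drop a sample if its label gate was removed by quality control,
    or if every upper-level feature value is missing."""
    if label == MISSING:
        return False          # tag itself was clutter-flagged
    if np.all(features == MISSING):
        return False          # no usable upper-level evidence at all
    return True

print(keep_sample(-20.0, np.full(63, 10.0)))   # label removed by QC
print(keep_sample(25.0, np.full(63, MISSING))) # all features missing
print(keep_sample(25.0, np.full(63, 5.0)))     # valid sample
```

Note that a sample with only some features at -20 dBZ is still kept, consistent with the response: only all-missing feature blocks are excluded.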

Point 4: In Figure 1, you should also show transmittance for the 1.5° scan to demonstrate that this elevation does not suffer from blockage. If it did, this could invalidate the model use of the 1.5° elevation scan as a predictor. Additionally, the figure label should indicate the scan elevation.

Response 4: Thank you for your suggestion. The transmittance at 1.5° elevation has been added to Figure 1. It can be seen that the transmittance of only a few radials at 1.5° elevation is slightly less than 1; it still exceeds 0.95 and can be ignored. Moreover, we use the completely unobstructed reflectivity factor values in the azimuth ranges 0-90° and 300-360° (north) as the datasets to build the model, so the correctness of the model is not affected.

Point 5: Given that the referenced literature is all in Chinese, a very brief summary of the methods and types of echoes removed, or the degree of accuracy of the methods, would be helpful to the wider community to understand how comprehensive the quality control is.

Response 5: Thank you for your suggestion; it greatly improves the clarity of the article. The quality control process of this study is now briefly described in Section 2.2 of the revised manuscript (lines 122-125), summarized as follows:

The quality control of the radar base data is implemented to eliminate the pollution of ground clutter and other non-meteorological echoes. Thresholds on echo intensity, radial velocity, and the vertical gradient of echo intensity are used to reduce the misjudgment of meteorological echoes.

Point 6: Figure 2. Is the bent grid in the lower right meant to represent a 3x3 grid on arcs of a circle? The dashed lines do not appear to be circular arcs. Can this be improved?

Response 6: Thank you for pointing this out. The bent grid in the lower right is meant to represent a 3×3 grid on arcs of a circle, and the black point at the bottom is the corresponding location of the label. We have redrawn the circular arcs in the lower right. (Line 147)

Point 7: Figure 3. All Hidden layer1. Should there be a layer2 and layer3?

Response 7: Thank you for your suggestion. We have corrected the figure in line 187.

Point 8: …points of strong echo are generally fewer than those of weak echo.

Response 8: Thank you for pointing this out. Generally, the number of strong-echo points is indeed less than that of weak-echo points. To support this, we computed statistics on the numbers of strong and weak echoes in the dataset and added them to the manuscript. Please see the details in lines 203-204.

Point 9: It's not really clear why the weight vector seems to equally weight the lowest and highest reflectivities, when you state your intention is to increase the weigh of strong echoes only. Also, are there no normalized reflectivity values above 0.6.

Response 9: Thank you for your suggestion.

  • Combined with your last suggestion, we have computed statistics on the numbers of strong- and weak-echo points. Obviously, weak-echo points far outnumber strong-echo points. During training, the accumulated error of weak echoes may degrade the whole model, so we increase the weight of weak echoes to ensure the training effect of the model. (Lines 206-208)
  • In the self-defined loss function, 0.1~0.2 corresponds to weight 10, 0.2~0.3 corresponds to weight 5, 0.3~0.4 corresponds to weight 2, 0.4~0.5 corresponds to weight 5, 0.5~0.6 corresponds to weight 8, > 0.6 corresponds to weight 10.
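The band-to-weight mapping above could be implemented as a weighted MSE along the following lines. This is a sketch: the treatment of normalized values below 0.1, which the response does not specify, is assumed here to share the first band's weight of 10.

```python
import numpy as np

# Band edges and weights as listed in the response; values below 0.1
# are ASSUMED to fall into the first band (weight 10).
EDGES = np.array([0.2, 0.3, 0.4, 0.5, 0.6])
WEIGHTS = np.array([10.0, 5.0, 2.0, 5.0, 8.0, 10.0])

def weighted_mse(y_true, y_pred):
    """MSE weighted per point by the band of the normalized true value."""
    w = WEIGHTS[np.digitize(y_true, EDGES)]
    return float(np.mean(w * (y_true - y_pred) ** 2))

# Normalized reflectivities in the first, middle, and last bands.
y_true = np.array([0.15, 0.35, 0.65])
y_pred = np.array([0.10, 0.30, 0.60])
print(weighted_mse(y_true, y_pred))
```

Equal errors thus cost 5x more in the 0.1-0.2 and >0.6 bands than in the 0.3-0.4 band, which is how both the numerous weak echoes and the rare strong echoes keep influence on the fit.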

Point 10: What is a cut? I'm not entirely clear on the use of "cut" in this manuscript. It appears to be used as a synonym for scan or sweep, but cut to me implies compositing across multiple scans, which doesn’t seem to be what you are doing here. Perhaps use the term sweep or scan instead, which are more common in the radar literature.

Response 10: Thank you for pointing this out. "Cut" refers to a "scan layer", that is, each layer in a volume scan. We have changed "cut" to "scan layer" or "elevation layer" throughout the revised manuscript; please see the marked revisions for details.

Point 11: In Figure 5, the predicted strong echo is slightly larger than the measured value? It looks more like the other way, with greater point density to the upper left of the 1-to-1 line at the higher dBZ values. I.e. the prediction is underestimating.

Response 11: Thank you for pointing out this mistake. It has been corrected in the revised manuscript. (Line 250)

Point 12: L239-240."The strong echoes have been obviously improved"? It doesn't seem very obvious. "The values in Table 3 have a very slight loss compared with Tables 2 and 3"? This sentence doesn't make sense. You need to clarify this paragraph, and maybe find a better way to represent the differences between Figures 5 and 6, which superficially look similar enough that it's not clear one is improved in any way. Eyeball comparison of two scatter plots is not a good way to demonstrate such small differences.

Response 12: Thank you for these important comments; they greatly improve the clarity of our manuscript. On the basis of Figures 5 and 6, we have added trend lines for the scatter (red solid lines) to show the difference between the two figures. It can be seen that the MSE loss function underestimates strong echo, while the self-defined loss function's prediction of strong echo is obviously improved. For the differences between Table 2 and Table 3, combined with the opinions of Reviewer 1, we give examples to illustrate the specific differences between the two tables and more clearly show the performance of the two models. (Lines 255-257)

Point 13: Figs 7 and 8 are using stratiform input data? This should be stated in the figure captions for clarity.

Response 13: Thank you for this suggestion. We have corrected the figure caption in line 316.

Point 14: It's not clear why Figure 9 is included after Figure 7 and 8, rather than when it is first mentioned. Consider moving it, or rework this section.

Response 14: Thank you for this comment; it greatly improves the logic and clarity of our manuscript. We have rewritten this part in the revised manuscript; please see the details in Section 4.1.

Point 15: L282-283. The area in the blue ellipse is not previously demonstrated to be convective precipitation embedded within this stratiform case. Do you mean heavy stratiform? How did you determine if the precipitation was convective or stratiform? Also, the radar observes precipitation not cloud, so you aren't simulating convective clouds but precipitation. Check the usage of this terminology also in the rest of the paper.

Response 15: Thank you for this significant comment. As you said, it is hard to demonstrate whether the precipitation was convective or stratiform. We have therefore changed "convective" to "strong echoes" and "stratiform" to "weak echoes", and have checked the usage of this terminology in the rest of the manuscript.

Point 16: L284-288. You don't really make much use of the values in the table. Can you highlight some meaningful differences in the table that demonstrate the relative skill of the model?

Response 16: Thank you for this significant comment; we have adopted your suggestion. Combined with your suggestion 17, we have rewritten this part to highlight some meaningful differences in the table. Please see the details in lines 305-309.

Point 17: Why does the self-defined loss function perform better in Table 4, but not in Table 3 compared to Table 2? Make this clearer in the text.

Response 17: Thank you for this significant comment. Tables 2 and 3 are the evaluation results on the test dataset. Because the amount of data in the test dataset is large and the proportion of strong echo is very small, the evaluation results of the self-defined loss function model on the test dataset are not significantly improved. Tables 4 and 5 present the evaluation results of the volume scans (cases), in which the proportion of strong echo is relatively higher. Since the self-defined loss function increases the weight of strong echo, its evaluation results are better for the case filling. Please find the details in the revised manuscript in lines 308-309.

Point 18: I don't think you need to mention Figure 9 here, just describe what Figure 10 is by itself. Likewise for the caption of figure 10. The reader shouldn't need to rely on finding the caption to a different figure to understand the contents.

Response 18: Thank you for your suggestion; it significantly improves the clarity of our manuscript. The caption of Figure 10 has been corrected. (Lines 326-328)

Point 19: It is not really convincing that it is a better correction effect. It depends on whether you are interpreting the reflectivity at that range as at a particular height, or as a representation of what is happening at the ground. Maybe it gives a better value if you are eventually producing a QPE product from the reflectivity, but of course this would need to be demonstrated.

Response 19: Thank you for this significant comment. As you said, “better correction effect” is not rigorous, so we have revised the relevant expressions in the manuscript. To compare the filling effect, we interpolated the Bengbu radar data onto the same area and found that the filled data are consistent with the Bengbu radar, which suggests that the echo in this region is indeed stronger. In addition, a possible reason why the model predicts stronger echoes at long range is that this case occurred on July 3, when the average melting-layer height in Nanjing is about 4.8 km, and the beam height in the 125-150 km range in this case is 4.5-5 km. It is therefore possible that a melting layer exists in this region, that the melting layer is thicker in this weather event, and that the stronger upper-level echo leads to the stronger model output. In future work, we will add more stable precipitation echo data to the training dataset to better fit far-range echoes and further improve the algorithm. (Lines 341-349)

Point 20: L313 and Table 5. A more complete description of the contents of Table 5. i.e. which data it applies to, should be included in the text and table caption.

Response 20: Thank you for your suggestion. A more complete description of the contents of Table 5 has been added to the text and the table caption. Please see the details in lines 353-356.

Point 21: Figure 11 caption seems to be missing content.

Response 21: Thank you for pointing out this mistake. It has been rewritten in lines 358-359.

Point 22: L328-329 "the echo intensity at far distances is obviously improved". It is obviously increased, it is not demonstrated that it is improved in all circumstances and for all applications.

Response 22: Thank you for your comment. As you said, a single example does not give us sufficient evidence to claim that the model corrects the long-distance echo. Therefore, in combination with your suggestion 19, we have modified this expression. Please see the details in lines 366-368.

Point 23: It is a bit strange to focus on stratiform precipitation in the convective event, and vice versa. You should identify the precipitation types more clearly at the start of the sections.

Response 23: Thank you for this significant comment. As you said in Point 15, it is hard to demonstrate whether the precipitation was convective or stratiform. So, we have corrected “convective” to “strong echoes” and “stratiform” to “weak echoes” in the manuscript.

Point 24: To show the benefits of using a machine learning method, is it possible to compare with a simplistic echo-filing solution, such as just down-filling the values using the echo intensities from the scan immediately above? What benchmark are you trying to improve upon?

Response 24: Thank you for this significant suggestion. As you suggested, to better demonstrate the advantage of machine learning, we compared the results of EFnet with those of a simple linear regression fit. That is, we used the same dataset and considered the effect of beam broadening to establish multiple linear regression models in different distance sections, but used only one upper point instead of 9 (3×3) points. The prediction results for the six distance segments show that the multiple linear regression model performs worse than EFnet. Please see the details in lines 261-269.
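The single-upper-point regression baseline described above can be sketched as follows. This is a toy illustration on synthetic data: the coefficients, noise level, sample size, and variable names are assumptions, not values from the study, and in practice one regression would be fitted per distance segment.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in data: reflectivity at the blocked (lower) elevation
# modelled as a noisy linear function of the single point directly above it.
upper = rng.uniform(0.0, 60.0, size=500)               # dBZ at the scan above
lower = 0.9 * upper + 2.0 + rng.normal(0.0, 1.0, 500)  # "truth" at the blocked scan

# Baseline: fit y = a*x + b using only the one upper point per sample
# (a stand-in for the per-distance-segment regressions in the response).
a, b = np.polyfit(upper, lower, deg=1)
pred = a * upper + b
rmse = float(np.sqrt(np.mean((pred - lower) ** 2)))
```

Because such a baseline sees only one point above the blocked gate, it cannot exploit the horizontal context that a 3×3 input (or a convolutional network like EFnet) uses, which is one plausible reason its filling error is larger.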

 

Author Response File: Author Response.docx

Round 2

Reviewer 1 Report

See attached file.

Comments for author File: Comments.docx

Author Response

Dear reviewer,

Thank you very much for your careful reading and constructive comments on our manuscript, which are all valuable and very helpful for revising and improving our manuscript. We have made every effort to revise the manuscript according to your comments and suggestions, and marked the revised parts in red and blue font. And responded point-to-point following each comments, in which the “line number” is the line order in the revised manuscript. Please see the revised manuscript in the attachment.

We appreciate your kind work.

Best regards,

Xiaoyan Yin and the co-authors

----------------------------------------------------------------------------------------

I am not satisfied with response 16 and 18. The performance of the model can NOT be clearly seen in PPI images. I still think the radial plots is the best way to show the performance.

Point 16: Line 269: Suggest to plot radial profiles of raw observed and predicted reflectivity using models compiled with the MSE and the self-defined loss function in the azimuth where beam blockage occurs in order to clearly illustrate the performance of the models in different sections. 


Response 1: Thank you for your suggestion. We agree that this is definitely a good way to show a more complete vertical structure of the cloud. We have plotted radial profiles of the raw observed reflectivity and the reflectivity predicted by the two types of models to clearly illustrate their performance. Please find the details in the revised manuscript in lines 306-315.

Point 18: Line 279-283: Plot radial profiles of observed and predicted reflectivity using models compiled with the MSE and the self-defined loss function in this region to show your statement “more consistent”.

Response 2: Thank you for your suggestion. To keep the contents of case 1 concise, we chose to plot a radial profile of the blue ellipse area in case 2 to demonstrate that the self-defined loss function is “more consistent” in strong-echo filling. Please find the details in the revised manuscript in lines 355-363.

Author Response File: Author Response.pdf

Reviewer 2 Report

Second Review of "Study on Radar Echo-Filling in an Occlusion Area by a Deep Learning Algorithm"

Overall the authors have made a very good effort to address the review comments, and their responses were detailed and clear. The study's results are now clearer to discern, and better supported.

Minor comments.

L202: Please excuse my lack of clarity in my previous point about "the points of strong echo are generally less than those of weak echo." I suggest using the word 'fewer' rather than the word 'less' for greater clarity and grammatical accuracy because you are talking about the number of points, not the value of points.

It is still difficult to get a clearer overall picture of the comparison between Table 2 and Table 3. One type of comparison that might help is a scorecard plot, like a table where the colour of each cell represents the difference between the scores in Table 2 and Table 3. This makes it easy to see the wins and losses for each value. It is still hard to tell that the self-defined loss function is better overall.

L305: comparing Table 4 with 5… It looks like this comparison is just between values in Table 4. Also L353 comparing Table 5 with 4. The tables represent different case studies and it's not clear why you would compare values between these tables.

Author Response

Dear reviewer,

Thank you very much for your careful reading and constructive comments on our manuscript, which are all valuable and very helpful for revising and improving our manuscript. We have made every effort to revise the manuscript according to your comments and suggestions, and marked the revised parts in red and blue font. And responded point-to-point following each comments, in which the “line number” is the line order in the revised manuscript. Please see the revised manuscript in the attachment.

We appreciate your kind work.

Best regards,

Xiaoyan Yin and the co-authors

----------------------------------------------------------------------------------------

Point 1: L202: Please excuse my lack of clarity in my previous point about "the points of strong echo are generally less than those of weak echo." I suggest using the word 'fewer' rather than the word 'less' for greater clarity and grammatical accuracy because you are talking about the number of points, not the value of points.

Response 1: Thank you for your suggestion. We have corrected the word in line 202.

Point 2: It is still difficult to get a clearer overall picture of the comparison between Table 2 and Table 3. One type of comparison that might help is a scorecard plot, like a table where the colour of each cell represents the difference between the scores in Table 2 and Table 3. This makes it easy to see the wins and losses for each value. It is still hard to tell that the self-defined loss function is better overall.

Response 2: Thank you for this significant suggestion. To show a clearer comparison between Table 2 and Table 3, we made a column chart (Figure 7) and added a corresponding text description. Please see the details in lines 254-260.

In general, on the test dataset the self-defined loss function models perform slightly worse than the MSE loss function models, but in Section 4 (the case study) the self-defined loss function performs better. This was also mentioned in your first-round comments (Point 17). Tables 2 and 3 are the evaluation results for the test dataset; because the test dataset is large and the proportion of strong echoes in it is very small, the self-defined loss function model shows no significant improvement there. Tables 4 and 5 present the evaluation results for the volume-scan cases, in which the proportion of strong echoes is relatively higher. Since the self-defined loss function increases the weight of strong echoes, the self-defined loss function model performs better for case filling.
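The effect of the strong-echo proportion on the overall score can be illustrated with assumed numbers (hypothetical RMSE values chosen for the sketch, not results from the manuscript):

```python
# Overall RMSE of a model whose error differs on strong and weak echoes,
# as a function of the strong-echo proportion p in the evaluation set.
def overall_rmse(p, rmse_strong, rmse_weak):
    return (p * rmse_strong ** 2 + (1.0 - p) * rmse_weak ** 2) ** 0.5

# Hypothetical scores: the weighted-loss model is much better on strong
# echoes but marginally worse on weak ones.
mse_model  = lambda p: overall_rmse(p, rmse_strong=6.0, rmse_weak=3.0)
self_model = lambda p: overall_rmse(p, rmse_strong=4.0, rmse_weak=3.2)
```

Under these assumed numbers, when strong echoes are rare (say p = 0.02, as on a large test dataset) the plain-MSE model scores slightly better overall, whereas at a higher strong-echo proportion (say p = 0.30, as in the volume-scan cases) the weighted-loss model wins, which mirrors the behaviour described above.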

Point 3: L305: comparing Table 4 with 5… It looks like this comparison is just between values in Table 4. Also L353 comparing Table 5 with 4. The tables represent different case studies and it's not clear why you would compare values between these tables.

Response 3: Thank you for pointing this out. As you said, it is indeed a comparison between values within Table 4; this was our oversight. We have corrected the description in the manuscript in lines 320 and 379-382.

Author Response File: Author Response.pdf
