Article
Peer-Review Record

Numerical Simulation of Wind Wave Using Ensemble Forecast Wave Model: A Case Study of Typhoon Lingling

J. Mar. Sci. Eng. 2021, 9(5), 475; https://doi.org/10.3390/jmse9050475
by Min Roh 1, Hyung-Suk Kim 2,*, Pil-Hun Chang 1 and Sang-Myeong Oh 1
Reviewer 1: Anonymous
Reviewer 2: Anonymous
Reviewer 3: Anonymous
Reviewer 4: Anonymous
Submission received: 19 March 2021 / Revised: 16 April 2021 / Accepted: 26 April 2021 / Published: 28 April 2021
(This article belongs to the Section Ocean Engineering)

Round 1

Reviewer 1 Report

This paper describes an ensemble wave model used to forecast wave conditions for a historical typhoon. The results of this model were compared to those of a deterministic model developed for the same event. This comparison showed that the ensemble model outperformed the deterministic model in terms of its accuracy (as compared to buoy data for the event) and its ability to quantify forecast uncertainty, particularly at longer lead-times. The authors argue that their study shows that ensemble forecasts are useful (and better than deterministic models) for predicting hazardous wave conditions and quantifying the uncertainty associated with these predictions.

Overall, this study appears to be based on sound science and the arguments made are convincing. I found most of the figures to be clear and helpful (although the captions could be improved). However, the awkward phrasing and other grammatical inconsistencies throughout the manuscript make it difficult to fully evaluate its scientific merit. In addition, the introduction and conclusion do not adequately (or clearly) explain how this study is novel and/or in what ways it advances knowledge in the field and/or solves an open problem. I have left a few additional comments on the individual sections in the attached PDF. In general, I believe this paper is not currently acceptable for publication; however, if the writing is improved and the merit of the study is more clearly addressed, it may constitute a valuable contribution to JMSE after major revisions.

Comments for author File: Comments.pdf

Author Response

The authors would like to thank the editor and the reviewer for their precious time and invaluable comments.

We have carefully addressed all the comments. The responses are attached in MS Word.

Thank you very much for your time and effort.

Author Response File: Author Response.pdf

Reviewer 2 Report

This is an interesting paper studying the performance of a wave ensemble forecast for an extreme event. The assessment has been carefully conducted using proper statistics. The results are clear and the discussion is good. Below, I have added some corrections and suggestions:

General: The clarity of the text improves with short sentences. If possible, please review the English quality.

Abstract:

- Typhoon Lingling (1913). What is 1913?

- “by the ensemble atmospheric model”. Which one?

- “and the each ensemble” – “and each ensemble”

1 Introduction:

First paragraph: An additional study comparing the performance of ensemble wave forecasts with a deterministic wave forecast (control run) is “Global assessments of the NCEP Ensemble Forecast System using altimeter data” https://doi.org/10.1007/s10236-019-01329-4

- The authors wrote “Recently, studies to improve the prediction accuracy of ensemble model using machine learning, etc. have been conducted actively [6].” Another study in this area is: “Improving NCEP’s global-scale wave ensemble averages using neural networks” https://doi.org/10.1016/j.ocemod.2020.101617

- If possible, it would be valuable to mention which atmospheric ensemble and which wave ensemble are considered.

 

2 Methodology

79: “20–50°” should be “20–50°N”

79: “115–150°” should be “115–150°E”

81: “sea level was established as shown in Figure 1 based on ETOPO1”. ETOPO1 does not give sea level but bathymetry.

97: I don’t understand whether WAVEWATCH III was run with the ST2 source terms (Tolman and Chalikov) or ST4 (Ardhuin et al.). Please clarify this part in the text.

Table 1: “+120 hr (3 hourly cycle)”: is this a 3-hour cycle or a 3-hour time interval (output resolution)? The word “cycle” gives the idea of model integration, i.e., when the model is run and a new forecast is provided. As it is written that “The ensemble wave model was performed twice a day”, I guess the cycles are 00h and 12h.

2.2 The validation of hurricane winds and waves with buoy data must be done with caution. Such strong and turbulent conditions can affect the quality of buoy measurements.

 

 

3 Results

181: please add a citation for this sentence to the study https://doi.org/10.1175/1520-0493(2001)129<0550:IORHFV>2.0.CO;2

188: “The rank histogram up to a lead time of 24 hr showed a U-shape with bias on the left and right, indicating that the ensemble spread was too wide”. My understanding is the opposite: a U-shape appears when the spread is too small. Please check these studies:

https://doi.org/10.1175/1520-0493(2001)129<0550:IORHFV>2.0.CO;2

https://psl.noaa.gov/people/tom.hamill/MSMM_vizverif_hamill.pdf
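
For reference, the rank (Talagrand) histogram for an M-member ensemble is usually built by counting where each observation falls within the sorted member forecasts; a flat histogram indicates a reliable spread, while a U-shape indicates an under-dispersive ensemble (spread too small). A minimal sketch, assuming the histograms are based on significant wave height (Hm0) at the buoy locations and using hypothetical function and variable names:

    import numpy as np

    def rank_histogram(members, observations):
        # members:      (n_samples, n_members) ensemble Hm0 forecasts,
        #               one row per buoy/valid-time pair
        # observations: (n_samples,) matching buoy Hm0 values
        # Count how many members fall below each observation; this rank
        # (0 .. n_members) indexes the histogram bins.
        n_samples, n_members = members.shape
        ranks = np.sum(members < observations[:, None], axis=1)
        # Flat counts across ranks = reliable spread; peaks at the two
        # extreme ranks (U-shape) = observations often fall outside the
        # ensemble envelope, i.e. the spread is too small.
        return np.bincount(ranks, minlength=n_members + 1)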

Fig 8: The error drops after approximately 100 hr, which is counter-intuitive; this is probably due to the decay of the storm intensity (as only one forecast cycle is shown). Perhaps this could be clarified in the paper.

4 Conclusions

Ok

Author Response

The authors would like to thank the editor and the reviewer for their precious time and invaluable comments.

We have carefully addressed all the comments. The responses are attached in MS Word.

Thank you very much for your time and effort.

Author Response File: Author Response.pdf

Reviewer 3 Report

This paper develops, and assesses the performance of, an ensemble wave model applied to Typhoon Lingling near the Korean Peninsula. The performance, using 24 ensemble members, is compared to a deterministic modelling approach with the aid of 17 wave buoys in the region, whereby significant wave height is used for statistical comparisons and analysis. A variety of statistical tools are used to assess the forecast performance. The paper appears to be quite thorough, with a significant amount of modelling work and field data used for analysis. I believe the work is of value to the wave modelling community; however, I recommend several changes before publication, mostly related to clarity and the inclusion of some additional results to further assess performance.

Main comments:

  1. The application of the metrics could be a lot clearer. For example, it is not particularly clear how the rank histograms are generated. It is not clear what sea state parameter this is based on (although I assume this is Hm0). It is also not clear what the bins correspond to or how the processing is achieved. Is this computed at the buoy locations only, or for all grid points? It takes some time to figure out as a reader. This process needs clarification in order to interpret the rank and frequency information shown in Fig 6 properly. A similar level of clarity is required throughout Sections 3.2 and 3.3. Please revise, trying to make the process as clear as possible for the reader.
  2. Interpretation of results. More explanation of the reasons (physical processes, uncertainties) for the results should be included. For example, a physical explanation as to why the bias increases significantly after 72 hours, and why it is positive. A similar level of (possible) explanation is required for all metrics presented.
  3. Additional results. All of the results presented are for the significant wave height. It would be nice to see the performance in relation to other (important) wave parameters, e.g. mean/peak/energy period, mean direction, etc. Additionally, prior to the statistical analysis, it would be nice to compare the time histories of, e.g., Hm0 and Te at some of the buoy locations with those of the model(s) to give a simple example of the models' performance.
  4. Focus on extremes. The analysis mostly focuses on Hm0 thresholds of 2 m and 4 m, but I think it would be nice to focus a bit on the extreme (temporal-spatial) Hm0 values observed, as this is critical for capturing dangerous conditions. Perhaps a side-by-side comparison of spatial Hm0 plots for the deterministic vs ensemble prediction at the time the maximum Hm0 is observed in the domain? The suggested time-domain comparison (point 3) at key buoy locations may also elucidate this if the buoy with the largest Hm0 value were chosen as an example. This would further demonstrate the importance of the ensemble approach in determining the localised extremes which will cause the most potential damage.
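
For context on the threshold-based verification discussed above (and the ROC mentioned in the minor comments below), an ensemble yields an exceedance probability for a 2 m or 4 m Hm0 threshold as the fraction of members above that threshold, and each probability level gives one (false-alarm rate, hit rate) point of the ROC curve when verified against the buoys. A minimal sketch with hypothetical names, assuming buoy Hm0 observations as ground truth:

    import numpy as np

    def exceedance_roc(members, observations, threshold=2.0):
        # members:      (n_samples, n_members) ensemble Hm0 forecasts
        # observations: (n_samples,) matching buoy Hm0 values
        # threshold:    exceedance threshold in metres (e.g. 2.0 or 4.0)
        prob = np.mean(members > threshold, axis=1)   # forecast probability
        event = observations > threshold              # observed occurrence
        points = []
        for p in np.unique(prob):
            warned = prob >= p                        # warn at this probability level
            hits = np.sum(warned & event)
            misses = np.sum(~warned & event)
            false_alarms = np.sum(warned & ~event)
            correct_negatives = np.sum(~warned & ~event)
            hit_rate = hits / max(hits + misses, 1)
            false_alarm_rate = false_alarms / max(false_alarms + correct_negatives, 1)
            points.append((false_alarm_rate, hit_rate))
        return points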

Minor comments:

  • “Lingling (1913)”. To me this seemed like the Typhoon was in 1913 rather than 2019. Perhaps this is standard for Typhoon referencing but I found it confusing. Consider removing “(1913)” or replacing with “(2019)”.
  • Line 21: Please rephrase “and the each ensemble member has been performed stably”. E.g. “the ensemble model runs were all stable”
  • Line 24: Define “ROC” in the abstract.
  • Line 44: Please add some detail on why it may be “difficult to analyze” to obtain probability data.
  • There are quite a few long sentences which are a bit difficult to read. Please shorten to aid readability. E.g. lines 62-65, lines 79-84. There are potentially others which I would encourage you to find and revise for the benefit of the reader.
  • Fig 2: a clearer statement on what is being represented by Fig 2 would aid understanding of the ensemble modelling approach.
  • Line 181: As mentioned in “Main comments 1” a clarification on what is considered “observed values” is needed. I am assuming this is simulated model outputs and not “observed” by the buoy.
  • Fig 8 and eq (2) and (3). As above, more clarity is needed. Is this computed for all ensemble runs, for all buoys, for all time-frames? Additionally, please clarify in the caption that this is Hm0, as it is only showing RMSE and bias of one parameter (and clarify this in the other captions for the other metrics).
  • Some more discussion would be beneficial on the trade-offs between the computational time of the ensemble modelling approach and the increase in accuracy. There is some good discussion of the potential benefits in the introduction etc., but it would be nice to summarise the findings in relation to the observed improvements and effort required.

Author Response

The authors would like to thank the editor and the reviewer for their precious time and invaluable comments.

We have carefully addressed all the comments. The responses are attached in MS Word.

Thank you very much for your time and effort.

Author Response File: Author Response.pdf

Reviewer 4 Report

jmse-1169609

Numerical Simulation of Wind Wave Using Ensemble Forecast Wave Model: A Case Study of Typhoon Lingling (1913), by Min Roh, Hyung-Suk Kim

 

Submitted for publication in JMSE

This paper investigates the performance of an ensemble atmospheric model for wave forecasting under the influence of Typhoon Lingling (1913).

The model is verified against field data and also compared with a deterministic model.

Overall, the paper is well written and the arguments clear enough.

After the few minor comments below have been clearly addressed, I am happy to propose publication.

Minor comments:

  1. A general comment: The authors need to highlight the importance of their findings with clarity in the abstract, introduction, and conclusions. The reader is not able to get clear advice as to whether it is worth using an ensemble model versus a deterministic one, as it is not explicitly explained what the downsides of using an ensemble model are. For example, is it more time consuming, more complex? Are the very good results of the ensemble model expected from theory? Do we always expect a good and stable result compared to a deterministic model? etc.
  2. Figure 5(d): The ensemble average is not shown in the area north of Japan.
  3. Figure 5 and elsewhere: the Regional model is the deterministic model, but both definitions are used in the text without explanation.
  4. Figure 7: The y-axis should read RMSE.
  5. Figure 8: Why do we need both (a) and (b)? What is the additional value gained with both presentations?
  6. Figure 8(a): In any case, please limit the y-axis to between -0.5 and 0.5 for clarity.
  7. Lines 235-236: Please delete "is a verification index".
  8. Figures 10-11: To show the trend, consider presenting both the 2 m and 4 m significant wave height thresholds in one graph.

Author Response

The authors would like to thank the editor and the reviewer for their precious time and invaluable comments.

We have carefully addressed all the comments. The responses are attached in MS Word.

Thank you very much for your time and effort.

Author Response File: Author Response.pdf

Round 2

Reviewer 1 Report

The figure captions and clarity have been much improved. However, there are still many instances of awkward phrasing and ungrammatical sentences throughout the manuscript (I have noted some, but not all, in the attached file), which still distract from the quality of the science presented in the paper. Please have someone with strong grammar skills read through the manuscript and rephrase awkward sentences. In addition, I still find that context for this study is lacking in the introduction and conclusion: I am still left wondering what this study adds to the current state of knowledge in the field. Is it the first study to show that ensemble modeling is more accurate for this particular scenario, or is it simply illustrating a result that others have also addressed?

If the grammar of this paper is improved, and a few sentences of contextual information are added to flesh out the intro and conclusion sections, I would consider this paper acceptable for publication.

Comments for author File: Comments.pdf

Author Response

The authors would like to thank the editor and the reviewer for their precious time and invaluable comments.

We have carefully addressed all the comments. The responses are attached in MS Word. 

Thank you very much for your time and effort.

Author Response File: Author Response.pdf

Reviewer 3 Report

Some effort has been made to address the comments raised, and many of them have now been addressed.

However, I still feel more work should be done to:

1) Increase the clarity on how the metrics are applied.

2) Add additional explanation to the physical reasons behind the results observed.

 

On 1), these are certainly improved, but I feel there could still be a bit more information detailing how the metrics are calculated. It seems like each of the metrics is computed for each (lead) time, where the values are averaged over all buoy observation points for each ensemble member. Perhaps this could be clarified further?
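
If that reading is correct, the Figure 8 statistics would be computed along the lines of the following minimal sketch (hypothetical names; assuming Hm0 errors are pooled over all buoy/valid-time pairs at each forecast lead time):

    import numpy as np

    def rmse_and_bias_by_lead_time(forecast, observed):
        # forecast, observed: (n_lead_times, n_points) arrays of Hm0, where
        # n_points collects all buoy/valid-time pairs available at each lead
        # time (NaN where a buoy has no valid record).
        error = forecast - observed
        bias = np.nanmean(error, axis=1)                # mean(model - observation)
        rmse = np.sqrt(np.nanmean(error ** 2, axis=1))
        return rmse, bias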

On 2), I still feel there is not a lot of physical explanation of the observed trends and results. The only addition seems to be that "it was confirmed that the typhoon intensity declined after 2 hours of the forecast lead time, and the positive bias had decreased at the same forecast lead time." To me, this does not really address the physical reasons. Why does the decline in intensity mean there is a positive bias in the ensemble model? There are many other results which would benefit from a bit more detail on why they are observed (or at least a speculation). I feel this would improve the paper significantly.

 

Author Response

The authors would like to thank the editor and the reviewer for their precious time and invaluable comments. 

We have carefully addressed all the comments. The responses are attached in MS Word. 

Thank you very much for your time and effort.

Author Response File: Author Response.pdf
