Peer-Review Record

An Improved Gray Wolf Optimization Algorithm to Solve Engineering Problems

Sustainability 2021, 13(6), 3208; https://doi.org/10.3390/su13063208
by Yu Li 1, Xiaoxiao Lin 2 and Jingsen Liu 3,*
Reviewer 1: Anonymous
Reviewer 2: Anonymous
Submission received: 27 December 2020 / Revised: 4 March 2021 / Accepted: 11 March 2021 / Published: 15 March 2021

Round 1

Reviewer 1 Report

The paper is interesting and presents a novel algorithm that could contribute to the existing research. However, I have several major concerns regarding the framing, methodology, and discussion of results. There are a few key points to be addressed:

  1. Problem framing and related work: The first section – in its current form - gives a broad overview of existing studies, which is appreciated. However, the studies are, in most cases, listed one by one without critical comparison. Did these studies tackle the same problem you are tackling? How did they perform? What do these studies lack? What do they miss? How does your work fit with the existing research? Not providing a straightforward answer to these questions might leave your readers wondering why your work is needed. I would advise a revision of the first section to critically discuss the existing Grey Wolf Optimization extensions and their limitations. Such restructuring would, in turn, make it easier for you to highlight your work's contributions. Please consider explicitly answering these questions to emphasize the need for (and, consequently, the merit of) your work: 
    1. What are the limitations of the existing methods when tackling the problem you are studying? 
    2. Why is GWO a good way to go (and not some other algorithm, such as PSO)? What are GWO's advantages regarding the studied problem?
    3. What are the limitations of GWO and existing GWO extensions regarding the studied problem?
    4. What benefits does your work bring to the field (that have not been addressed by previous studies)? What is the novelty of your work, and why are such novelties valuable?

Addressing some of the listed questions might require as little as merely reformulating the existing sentences. Others, on the other hand, may require more substantial changes. 

  2. Methodology: Knowing GWO's limitations, the goal of the modifications is clear. However, the reasoning behind selecting Tent chaos for initialization and Gaussian perturbation should be discussed in more detail. Have you tried implementing other chaotic maps and compared the results? Was your work inspired by existing research utilizing Tent chaotic initialization in GWO? Have you verified that this modification indeed improves the algorithm's performance (by comparing GWO with and without Tent initialization)? Similarly, have you considered any alternatives to Gaussian perturbation, or did you decide to use it based on some existing study's results? What is the effect of introducing Gaussian perturbation on GWO's performance (i.e., have you compared GWO with and without Gaussian perturbation to demonstrate its effects)? Similar questions can be asked about the cosine control parameter. While the Results section demonstrates the extended algorithm's performance, there should also be a section discussing (and depicting) the extent to which each extension contributes to the performance improvement. Or, in case the modifications' combined effect results in performance improvement but the individual changes do not, demonstrate the general effect of each modification on the original GWO's performance.
  3. Results: The Results section offers many interesting comparisons. However, it is not clear whether the experiments reported in Section 5 comprised a single simulation run or multiple runs for each algorithm. If a single run was performed for each algorithm – why? Do you think different results would be obtained if, for example, mean results (i.e., means of multiple runs) were compared? If each algorithm was run multiple times, please state the mean, worst, and best results obtained by each algorithm and the standard deviation among runs. The proposed algorithm is compared to many methods, but the methods are not chosen consistently across the engineering problems, and the reasoning for their selection is not provided. Numerous existing studies utilized the same benchmark engineering design problems (e.g., 10.1007/s00366-020-00996-y, 10.1016/j.jcde.2017.02.005, 10.1016/j.eswa.2020.113917) – how do your results relate to the ones obtained in those studies? Finally, please check the whole Results section, in particular the results for the welding beam problem and the truss design problem. Although the bolded text indicates IGWO achieves the best results, in both cases other algorithms seem to perform better (MVO in the welding beam case and Tsa's algorithm in the three truss design case).
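To make the ablation request in the Methodology point concrete, the three modifications under discussion (Tent chaotic initialization, Gaussian perturbation of the leader, and a cosine-shaped control parameter) could each be placed behind a flag in a minimal GWO implementation and toggled independently. The sketch below is purely illustrative: all function names, parameter values, and the exact form of the Gaussian mutation are assumptions, not taken from the manuscript.

```python
import numpy as np

def tent_init(pop_size, dim, lb, ub, alpha=0.7, seed=0):
    """Population initialization via the tent chaotic map (illustrative)."""
    rng = np.random.default_rng(seed)
    x = rng.random((pop_size, dim))
    for _ in range(10):  # iterate the map a few times to spread points chaotically
        x = np.where(x < alpha, x / alpha, (1.0 - x) / (1.0 - alpha))
    return lb + x * (ub - lb)

def cosine_factor(t, max_iter):
    """Nonlinearly decreasing control parameter: 2 -> 0 along a cosine curve."""
    return 2.0 * np.cos(np.pi / 2.0 * t / max_iter)

def gaussian_perturb(best, rng, sigma=1.0):
    """Multiplicative Gaussian mutation of the best position (one common variant)."""
    return best * (1.0 + sigma * rng.standard_normal(best.shape))

def gwo(f, dim, lb, ub, pop=30, iters=200,
        tent=False, gauss=False, cos_a=False, seed=1):
    """Basic GWO with each modification behind a flag, to support an ablation study."""
    rng = np.random.default_rng(seed)
    if tent:
        X = tent_init(pop, dim, lb, ub, seed=seed)
    else:
        X = lb + rng.random((pop, dim)) * (ub - lb)
    for t in range(iters):
        fit = np.apply_along_axis(f, 1, X)
        idx = np.argsort(fit)
        leaders = [X[idx[k]].copy() for k in range(3)]  # alpha, beta, delta wolves
        a = cosine_factor(t, iters) if cos_a else 2.0 * (1.0 - t / iters)
        for i in range(pop):
            new = np.zeros(dim)
            for leader in leaders:
                r1, r2 = rng.random(dim), rng.random(dim)
                A, C = 2.0 * a * r1 - a, 2.0 * r2
                new += leader - A * np.abs(C * leader - X[i])
            X[i] = np.clip(new / 3.0, lb, ub)
        if gauss:  # inject a Gaussian-perturbed copy of the alpha wolf if it improves
            cand = np.clip(gaussian_perturb(leaders[0], rng), lb, ub)
            if f(cand) < f(leaders[0]):
                X[idx[0]] = cand
    return float(np.apply_along_axis(f, 1, X).min())
```

An ablation table for one benchmark function would then reduce to four calls (all flags off, then each flag on in turn), averaged over multiple seeds, which is the kind of evidence the comment above asks for.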

Finally, language and style should be revised. There are many grammatical mistakes, overly long and redundant sentences, and paragraphs similar to published research (e.g., lines 201-206 are very similar to text in 10.1109/ICICTA49267.2019.00130). 

In addition to the general comments listed above, here is a list of specific comments to help in revising your manuscript:

  • Abstract: Please consider adding a more explicit description of the obtained results than "the IGWO algorithm is able to provide very competitive results compared to other algorithms".
  • Line 32: I would argue that this statement applies in general, not just to China's energy resources supply.
  • Line 33: do you mean "circular economy" rather than "recycling economy"?
  • Lines 45-46: Please note, the references [3] and [6] should be switched.
  • Line 46: Flower Pollination Algorithm
  • Lines 47-48: Are all of the referenced algorithms based on GWO? It does not seem to be the case – please verify the reference usage.
  • Lines 100-102: Although one may argue that it is a well-proven fact, please consider adding references supporting the claim "it do not make full use of the search information of the population to produce more effective search directions, and it also has the disadvantages of low computational accuracy and premature convergence".
  • Lines 137-138: Please note, it is not clear which BA algorithm you are referring to – the butterfly optimization algorithm or the bat algorithm. You denoted both as BA (in lines 45 and 91, respectively). Also, SSA has not been introduced up until this point, so please state its name explicitly in this instance.
  • Line 156-157: Please note, using the word "proposed" implies you are the first to suggest the equations (1-4). Consider changing to "used" or "utilized". 
  • Line 158: Should not the last X in Eq. (1) be time-dependent as well, i.e., X(t)?
  • Line 163-164: Please update the interval [0,1] to [0,1]^n, where n is the space dimension.
  • Lines 166-191: Please change the sigma symbol to delta to remain consistent and in accordance with the terminology.
  • Figure 3: Do not forget to spell-check the figures (here, there is a typo in the word "solution", as well as a mistake in the phrase "best performed wolves").
  • Lines 210-216: Please add the references supporting the claims made.
  • Line 218-219: The statement "It is also proved that the tent map has a prerequisite for optimizing chaotic sequence of algorithms by strict mathematical reasoning" should be reformulated, or a reference should be added.
  • Lines 219-223 and lines 228-233: Sentences "Tent map, in mathematics …" and "Indu Batra et al. …" are very difficult to read and require restructuring.
  • Line 243: it is unclear what "The most common" is referring to, the most common chaotic map?
  • Lines 246-248: it is unclear why the Bernoulli map is mentioned. The Tent map is not analogous to the Bernoulli map (the most obvious difference being that one is continuous and the other is not). They are somewhat similar when alpha equals ½ in that they have the same Lyapunov exponent, but I am not sure why the Bernoulli map should be mentioned here at all – especially since alpha was set to 0.7. Also note, the symbol "j" is introduced, but it is not used in any of the equations.
  • Line 249: Why was alpha set to equal 0.7? Was it based on some existing works (e.g., 10.1016/j.jcde.2017.02.005), or is it a recommended value based on experimental findings? Adding such clarifications strengthens your arguments. A similar comment applies to parameter values used in PSO, BA, and FPA algorithms (lines 356-357) where standard (i.e., recommended in the original works) parameters were taken.
  • Lines 279-282: Please note, you have a redundant sentence ("The leader wolf…")
  • Lines 291-293: Can you please elaborate on the sentence "which can show the random walk behaviour of grey wolfs more accurately".
  • Line 294: There are two typos in Eq. 11: in the word "Gaussion" and "X" should be changed to X(t).
  • Figures 7 and 8: I do not find these figures very informative. In fact, they present either a common knowledge (N(0,1) distribution) or results that could be straightforwardly deduced (the difference between 2*random([0,1]) and the N(0,1)'s image). In case of space limitations, these figures could be removed.
  • Lines 300-303: Please revise this sentence.
  • Lines 310-312: The statement "but the linearly decreasing convergence factor…" requires a reference or further elaboration.
  • Figure 9: Please change the y-axis label to state clearly which parameter is depicted. Also, why was B(t) introduced separately, i.e., why was a' not defined as 2*cos(pi/2 *t/maxIter)*cos(pi/2*t/maxIter)? An analysis accounting for B(t)'s influence should be included to enable comparisons with existing works (e.g., doi.org/10.1155/2016/7950348).
  • Figure 10: Please remove the word "when" from the figure label.
  • Lines 326-330: Please consider introducing the name "inertia weight factor" upfront (i.e., where the parameter is first mentioned) to increase readability.
  • Tables 1-3: Please check these tables for typos. E.g., there is a typo in F20 – parameter "j" should range from 1 to 6. Also, F8's name is Schwefel's problem, rather than Schwel's problem. F19 and F20 should state "Hartmann" instead of "Hartman", etc. (there are other typos as well, so please check all of the functions, their names, ranges, and optimal values).
  • Tables 4-7: Please use consistent number formatting (i.e., either scientific or decimal notation). Otherwise, it is difficult to compare the listed values. For example, three different notations are used for the same value in Table 6 (3.000, 3, and 3.00e+00). Similarly, use the same number of digits for every entry in Table 9. Keep in mind that the selected notation should enable results' comparison. For example, check the results for F16: it seems that IGWO and SSA perform equally well but, due to the selected notation, the magnitude of differences from other algorithms cannot be established. 
  • Tables 4-7: In addition to the previous comment, please check the results reported in these tables. Several instances seem to be wrongly marked as "the best" (e.g., std for F5), and in some cases, none of the results are marked in bold text (e.g., results for F6 and F13). In addition, the instances marked as "the best" do not accord with the ranks reported in Table 8. In particular:
    • Should not PSO's std be marked as best for F5?
    • Please mark the best results for F6.
    • Should not MFO be marked as the best performing for F8?
    • Please mark the best results for F13.
    • Why is SSA's std marked in bold for F14? 
    • Why is MFO's std bolded for F15 (IGWO seems to have a smaller std)? 
    • Why is SSA not bolded for F16 (especially since it has a lower std than IGWO)? 
    • Should not SSA also be bolded for F18? 
    • Should MFO and FPA be bolded for F19 (it is impossible to deduce due to the formatting differences)?
    • Why is IGWO marked for F20 when GWO seems to be performing better?
    • Should not FPA be marked for F21?
    • Should not GWO and FPA be marked as best for F22 (instead of IGWO)?
    • Should not FPA be marked as best for F23 (instead of IGWO)?
  • Line 379: F7 should be listed instead of F6.
  • Lines 385-387: I would be cautious about claiming the algorithm's superiority on fixed dimensional multimodal functions, given that the results in Table 6 seem to prove the algorithm's advantages only for F15. Please check the data in Table 6 and, if needed, edit the text in accordance with the results.
  • Line 390: Are the results in Figure 11 averaged over all the dimensions, as well?
  • Line 391: "Fig. 8" should be changed to "Fig.11".
  • Lines 389-400: This paragraph implies IGWO's superiority in all of the cases, but, as evidenced by the tables as well as the figures, in multiple instances, other algorithms outperform IGWO. Please change the text accordingly. Also, please note, the data seems to be missing from Figure 11(p).
  • Lines 443-446: Please note, the text following the statement "significance level is set to 0.05" is redundant and could be omitted.
  • Line 447: Was a two-sided or one-sided test variant used?
  • Table 8: Please note, the text should not be in bold here. Also, consistent text alignment should be used.
  • Table 8: How were the ranks derived? Based on the best value, an average value, or something else? I assume based on the average value (as reported in Tables 4-7). However, the formatting in Tables 4-7 does not permit verifying this assumption.
  • Section 5: Please provide details on the number of iterations and runs conducted in experiments reported in this section. Also, clarify if the reported optima correspond to the best runs, mean performance, or other criteria.
  • Lines 484-491: Please note, there are several typos in the problem statement. Please check the formulae.
  • Table 9: "Optimal weight" should be changed to "Optimal cost".
  • Line 500: The listed value (5935.7161) does not correspond to the value obtained by IGWO (as reported in Table 9).
  • Lines 512-516: Please note, there are several typos in the problem statement. Please check the formulae.
  • Lines 521-524: Is there a reason why these algorithms were selected (rather than the ones considered in Section 5.1)? If so, please specify the selection criteria.
  • Lines 538-556: Please note, there are several typos in the problem statement. Please check the formulae. Also, please add the measurement units to Eq. (19) and use a consistent notation (e.g., capitalized P instead of p in line 544).
  • Tables 11 and 12: Please check the results reported in these tables. It seems MVO achieves the optimal result for the welding beam problem (1.7250 vs. 1.7254 obtained by IGWO). Similarly, Tsa's algorithm seems to outperform IGWO for the three truss design problem (263.68 vs. 263.8959).
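On the question about the statistical test variant (the comment on lines 443-447 above), the practical difference between the two options can be illustrated with SciPy's `wilcoxon`; the data, effect size, and variable names below are synthetic and purely illustrative, not taken from the manuscript.

```python
import numpy as np
from scipy.stats import wilcoxon

rng = np.random.default_rng(42)
# Synthetic per-run objective values for two algorithms over 30 paired runs
err_a = rng.lognormal(mean=-2.0, sigma=0.5, size=30)          # hypothetical "improved" method
err_b = err_a * rng.lognormal(mean=0.5, sigma=0.2, size=30)   # baseline, systematically worse

# Two-sided: H1 = the paired differences are not centered at zero (either direction)
_, p_two = wilcoxon(err_a, err_b, alternative="two-sided")

# One-sided: H1 = err_a tends to be smaller than err_b (a directional claim)
_, p_one = wilcoxon(err_a, err_b, alternative="less")
```

When the observed effect matches the hypothesized direction, the one-sided p-value is roughly half the two-sided one, so which results pass the 0.05 threshold can depend on the variant used – hence the request to state it explicitly.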

 

Author Response

Please see the attachment.

Author Response File: Author Response.pdf

Reviewer 2 Report

Comments to the authors:

In the article, the authors present “an improved grey wolf optimization algorithm (IGWO)” applied “to optimize engineering design problems and reduce the waste of resources”. The algorithm combines “tent mapping, gaussian mutation and cosine control factor with grey wolf optimization”. Finally, the improved algorithm is tested on both four engineering problems and 23 functional optimization problems. The authors claim that “the IGWO algorithm is able to provide very competitive results compared to other algorithms”.

After careful review of the article, I have doubts about the scientific soundness of the work in the field of sustainability. The article, with a structure very close to other similar ones, does not add clear value to the aims and scope of the Sustainability journal. For this reason, I propose that the article be rejected, undergo an in-depth revision, and then be resubmitted for a new review.

Main aspects:

I understand the approach of the authors. An improvement to the grey wolf optimization algorithm may be an interesting idea. But the objective of the Sustainability journal, as can be seen on its own website, is not the development of an optimization algorithm but, I quote: "Our aim is to encourage scientists to publish their experimental and theoretical research relating to natural sciences, social sciences and humanities in as much detail as possible in order to promote scientific predictions and impact assessments of global change and development".

Likewise, the scope of the journal is focused on subject areas such as “Challenges relating to sustainability”, “Socio-economic, scientific and integrated approaches to sustainable development” and “Other topics related to sustainability” (such as “Defining and quantifying sustainability” or “Applications of sustainability”).

The authors only write the words "waste of resources" in the abstract and in the Introduction, and state, also in the Introduction: "[...] IGWO algorithm can effectively deal with engineering optimization problems and the waste of resources in engineering design problems are effectively alleviated". However, despite stating this theoretical reduction in resources, the authors do not provide a single proof of this claim. How exactly can your algorithm help reduce the waste of resources? The examples they use are neither significant nor representative, but rather common examples in the testing of optimization algorithms. For example, in the recent work by Nadimi-Shahraki et al. (2020) the same set of engineering tests and applications is used. Furthermore, such engineering applications are in common use and can be found in the aforementioned work.

Within the many methodological approaches of engineering design, the exploration of sustainability is undoubtedly a fundamental concern. In fact, as can be seen in the book “Sustainability in engineering design”, among the three main objectives of a design, in addition to “To create a product” and “To create profit”, is “To achieve sustainable development of the whole product life cycle”. Thus, I quote: “The incorporation into the design process of the whole-life model and its consideration at each stage of the design process incorporates the third bottom line: environmental impact and sustainability.” In this way, terms such as “Design for sustainable use” are already being used in engineering design to accommodate the principles of sustainability in this field of study.

The authors need to carry out a deep and extensive review of the concept of sustainability applied to engineering design. They should point out the innovative approaches and current methodologies in this field and, once these are established, design an experiment that tests whether their algorithm, first, allows optimizing a real design process and, second, links to at least some idea of sustainability, such as that associated with the concepts of recycle, reuse, repair and reduce. The Introduction and Conclusions sections should be redone including appropriate references. A new Discussion section should be added where the authors can expose the advantages and disadvantages of their algorithm in the field of sustainability applied to engineering design. They must make an effective comparison with other approaches to control and sustainability management in the process of designing and developing new products. The authors must clearly demonstrate what the differential contribution of their algorithm is in the field of sustainability. Without this fundamental premise, the article cannot be considered for publication.

Another issue is the integration of your algorithm into an appropriate system for engineering design. I understand that the algorithm allows you to optimize the variables of a design but, in this sense, what do these variables refer to? Are they just physical aspects? Are they restrictions? Are they selection criteria for design alternatives? Are they ponderable?

On the other hand, in what phase of the design process according to, for example, the classical Pahl & Beitz approach, is your algorithm integrated? Generation of alternatives? Selection of alternatives?

As the authors will know, the process of designing and developing a product is, in essence, a decision-making process. So is your algorithm part of a decision support system? Is it part of an information system? How are the variables decided? Should they be decisions related to sustainability principles? Authors need to identify a theory to support their development.

At present, the article is just one approach to improving the grey wolf optimization algorithm. The application examples are generic and do not allow validating the use and usefulness of the algorithm in the field of sustainability within engineering design. The article needs a comprehensive revision. Only after solving these issues can a new revision be conducted.

 

Other points:

- Could the authors be so kind as to clarify the differences of their work with the articles mentioned below?

-- Long, Wen & Wu, T.-B. Improved grey wolf optimization algorithm coordinating the ability of exploration and exploitation. Kongzhi yu Juece/Control and Decision. 32. (2017): 1749-1757. 10.13195/j.kzyjc.2016.1545.

-- Long, Wen & Cai, S.-H & Jiao, Jianjun & Zhang, W.-Z & Tang, M.-Z. Hybrid grey wolf optimization algorithm for high-dimensional optimization. 31. (2016): 1991-1997. 10.13195/j.kzyjc.2015.1183.

-- Teng, Z. & Lü, J. & Guo, L. & Xu, Y. An improved hybrid grey wolf optimization algorithm based on Tent mapping. Harbin Gongye Daxue Xuebao/Journal of Harbin Institute of Technology. 50. (2018): 40-49. 10.11918/j.issn.0367-6234.201806096.

 

References:

Nadimi-Shahraki, Mohammad H., Shokooh Taghian, and Seyedali Mirjalili. "An improved grey wolf optimizer for solving engineering problems." Expert Systems with Applications, 166 (2021): 113917.

Johnson, Anthony, and Andy Gibson. Sustainability in engineering design. Academic Press, 2014.

 

Comments for author File: Comments.pdf

Author Response

Please see the attachment.

Author Response File: Author Response.pdf

Round 2

Reviewer 1 Report

The authors revised the paper, addressing several of the major concerns. However, a few issues remain. Content-wise, the paper has been improved, but several additional changes are suggested:

  1. I appreciate the detailed analysis of each modification’s influence on the algorithm’s performance (i.e., Table 1 and Section 4.3). It is very nice to see the comparison. However, I have several remarks:
    1. I am confused why the results in Table 1 do not correspond to those in Table 8 (for F9). Why do they differ? Also, it is unclear why F9 was selected and not some other function. Nevertheless, there might be no need to address this point if the next point (1.b) is considered.
    2. Since Table 8 gives a detailed analysis of the performances, Table 1 may be seen as redundant and can be omitted (along with the corresponding figure and text). 
    3. Another suggestion is to switch the order of subsections in Section 4. It seems that switching Sections 4.2 and 4.3 might give a more coherent outline. The suggested outline is as follows: Section 4.1 presents the test functions (as in the current version). Then, Section 4.2 discusses the algorithm’s design by studying the performance of each of the algorithm’s modifications on each function (what is now Section 4.3). Once the algorithm’s performance has been examined “in isolation” (i.e., the algorithm’s modifications tested against each other), Section 4.3 could compare the algorithm to existing ones (current Sections 4.2.1, 4.2.2, and 4.4). 
  2. Section 5: The edits made to the tables in Section 5 now highlight that IGWO does not outperform other algorithms on 3 out of 4 engineering problems. However, the text has not been changed accordingly. In particular, while Tables 13 and 14 show that MVO and Tsa’s algorithm perform the best, the accompanying paragraphs state that IGWO is optimal. Please update the text to reflect the findings presented in the tables. Details on the experimental setup in Section 5 now explicitly state the number of iterations and runs. Still, it is unclear which results are presented, i.e., when you state “to obtain the optimal design results”, it is unclear how these optimal results are derived for each algorithm: as the best of all runs? As the average of all runs?
  3. Conclusion: currently, future work is structured as a list. Please restructure the paragraph to contain full sentences. Further, the proposed algorithm’s limitations should be briefly discussed.
  4. Unfortunately, language is still unsatisfactory, and as a result, there are several redundant sentences and unclear descriptions. Also, several spelling mistakes and grammatical errors are present. I strongly suggest a thorough paper editing and cleaning. Please revise the manuscript carefully to ensure that the font use is consistent, bold text is appropriately used in the results section, and that sentences are of appropriate length and form. For example, sentences in lines 137 (“The first level: ..”), 595 (“Eq. (20)...”), and 619-621 (“In order to save..”) are incomplete. Several sentences should be split into two (e.g., lines 303-306), and several verbs are incorrectly used (e.g., “concerned by” in line 43). It may be useful to ask a native colleague for help or employ editing services to help in polishing the paper. 

Author Response

Please see the attachment

Author Response File: Author Response.pdf

Reviewer 2 Report

In my opinion, the article has been significantly improved. The authors have carried out an in-depth review and, now, the article could be suitable for the Sustainability journal. I would like to congratulate the authors for their work.

Almost all of my previous comments have been adequately answered by the authors. However, I believe that, before recommending publication, authors should consider adding a Discussion section to their work as I suggested in my first review. Although the contribution of their algorithm to the field of sustainability is clear, I insist that a Discussion section would be of great help. In this Discussion section the authors could highlight the relevance, advantages and disadvantages of their approach and resume the findings of sections 4 and 5. At the moment all this information is divided into several sections. For this reason, my recommendation is that the article would be accepted after a minor revision.

Author Response

Please see the attachment

Author Response File: Author Response.pdf

Round 3

Reviewer 1 Report

The manuscript has been greatly improved from its original version and, thus, I would like to congratulate the authors on their work. 

However, I still have several small comments related to the results' presentation.

1. Figure 10: In one of the previous review rounds, I asked how the subfigures in Fig. 10 were derived. In particular, from Tables 1 and 2, it seems that for F1-F5 and F9-F11, several dimensions (30, 100, 500, and 1000) were considered. Which of these cases are depicted in Figure 10? For example, what was the dimension of function F1 utilized to derive Fig. 10 a)? In their response, the authors stated: "The algorithm test result will change with dimension, so the function convergence curve will also change accordingly" – which I understand but feel does not answer the question. Also, are the convergence curves plotted by extracting (and depicting) the best solution in each iteration? Please explicitly state for which functions' dimensions the performance depicted in Figure 10 was observed, and specify how the depicted convergence curves were derived. 
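For reference, one common convention for such curves (which the authors may or may not have followed – this is only a guess at what "extracting the best solution in each iteration" would mean in code) is to plot the running minimum of the best fitness found per iteration:

```python
import numpy as np

def best_so_far_curve(per_iter_best):
    """Monotone convergence curve: running minimum of the per-iteration best fitness."""
    return np.minimum.accumulate(np.asarray(per_iter_best, dtype=float))
```

Under this convention the curve is non-increasing by construction, which makes convergence plots directly comparable across algorithms; stating whichever convention was actually used would resolve the question.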

2. Section 5. The authors clarified that the results reported in Tables 10-13 present the best values obtained by each algorithm in 30 iterations. If I understand correctly, that means that for each design problem (reported in 5.1-5.4.), every algorithm was run 30 times, and each algorithm's best performance among 30 runs is extracted and reported in the tables. However, such a clarification is not added to the text. Specifically, the text does state that 30 iterations were run, but it is not clarified whether the reported data (i.e., the data in Tables 10-13) is the mean, median, or the best algorithms' performance among 30 runs. Please add clarification to the text. 

3. Language and text formatting: The effort to polish the paper is visible, and I appreciate it. However, there are still several points to address. For example, the bold text in Tables 6 and 7 is still misleading:

  • MFO achieves the best average performance for F8 (not SCA)
  • IGWO has the smallest std for F12 (not BA)
  • IGWO has the smallest std for F15 (not MFO)
  • SSA has the smallest std for F20 (not IGWO)
  • FPA has the best average performance for F21 (not GWO)
  • FPA has the best average performance for F23 (not IGWO). 

These data can also be verified using Table 9. Still, I recommend updating Tables 6 and 7.

Also, several entries in Table 9 are written in bold. Since there is no reason for bold text in this table, I recommend removing the table's bold formatting.

There are still several language and style issues. The main ones are: The sentence in lines 122-123 is redundant (i.e., stated in line 71). The sentence in lines 242-245 is grammatically incorrect and difficult to read. Perhaps you meant "...it enriches the diversity..." rather than "to enrich the diversity..."? Similarly, the sentence in lines 262-264 requires restructuring. Should not "ORGWO" in line 348 state "IGWO"? 

The minor issues include, for example, the redundant hyphen in the words "cur-rent" and "at-tacking" (lines 14 and 74, respectively). The phrase "easy to fall" in line 90 should be replaced with "ease of falling". The phrase "best performed" in Figure 3 should be replaced with "best performing". There is a redundant word "can" in line 309 (and so on). Please note, these issues are minor and are not crucial for the paper's publication. However, please check the manuscript to present your work in the best light possible. 

 

Overall, the authors did a good job improving the manuscript. I believe the remaining comments can be addressed quickly, and then the manuscript will be ready for publication.

 

Author Response

Please see the attachment.

Author Response File: Author Response.pdf

Reviewer 2 Report

In the latest version of the paper, the authors have inserted some paragraphs in the Conclusions section. In this way, they attempt to address my previous recommendation to add a new Discussion section. Although I do not share their opinion, considering the recommendations of the journal, it is possible to accept the proposed revision. The new paragraphs answer the main questions I suggested the authors include in a new Discussion section. Therefore, my recommendation is that the paper be accepted for publication.

Round 4

Reviewer 1 Report

The paper is now clear, and the results are adequately presented. I congratulate the authors on their work and have no further comments regarding the research results and presentation. 

The minor comment related to the use of English still applies. If the authors have a native colleague willing to help in polishing the manuscript, redundant sentences/phrases, stylistic errors, and grammatical mistakes could be avoided. 

Nevertheless, aside from final proofreading, I am happy to state that the manuscript is ready for publication. 
