Article
Peer-Review Record

Role Knowledge Prompting for Document-Level Event Argument Extraction

Appl. Sci. 2023, 13(5), 3041; https://doi.org/10.3390/app13053041
by Ruijuan Hu *, Haiyan Liu and Huijuan Zhou
Submission received: 26 December 2022 / Revised: 21 February 2023 / Accepted: 25 February 2023 / Published: 27 February 2023

Round 1

Reviewer 1 Report

Dear authors,

The manuscript is good. Please make the following changes for acceptance:

1. Abstract: it is good, but mention the quantitative results (best results).

2. Introduction: it is good. Mention the research gaps, and describe the structure of the article at the end (Section 2: related work, and so on).

3. Methodology and results: good. However, add a discussion section with a few paragraphs, and compare your work with existing work.

4. Add challenges and future directions.

Overall, I recommend the manuscript for publication.

Good luck!


Author Response

Please see the attachment.

Author Response File: Author Response.docx

Reviewer 2 Report

The paper deals with an interesting topic, the use of argument mining based on role knowledge. As a scientific paper, it makes its own contribution to the scientific community.

The manuscript is very well written and the content is understandable. I could not find any spelling or grammatical errors.

The wording used fits the text mining community.

The structure follows the requirements of a scientific paper.

Abstract and Introduction summarize the problem very well.

Related work is described too briefly and has to be extended. Please consider the content of existing review papers in the argument mining field. Some approaches are not explicitly named "role-based", but they are.

The method is described in an understandable way and the evaluation is comparable to other approaches by using standard F1 statistical measure.

However, detailed information about the evaluation is not given, yet it is needed to assess the comparison with other methods. Open questions include, for example: Is n-fold cross-validation used, and if so, which n? What is the division between training and test data? Is there a validation set?

The conclusion is fine. It is recommended to expand the future work part.

Overall, the paper is worthy of publication after the points mentioned above are addressed.

Author Response

Please see the attachment.

Author Response File: Author Response.docx

Reviewer 3 Report

The paper discusses in detail issues in the field of text analysis. The contributions are clearly defined and then presented with mathematical formulas and experiments. The overall presentation is thus correct, but there are a few problems in the paper that need to be addressed:

  1. The methodology of the approach to the presented problem is explained thoroughly in the introduction; however, there needs to be a clear justification for why this problem is worth solving via the proposed methods. From the text of the article alone, it is impossible to assess whether the issue is innovative or even used in practice, and if so, in which cases and to what extent.
  2. There needs to be a discussion on the languages where this research might be applicable. Probably English is the default language, but a discussion on this matter should be a part of this paper.
  3. Comparative studies with other solutions have been presented, which is correct and desirable. However, there is no description of how the presented solution and those used in the comparison were implemented. Consequently, it is impossible to objectively assess whether the obtained values of the test metrics result only from differences in the solutions themselves or whether they are partly related to the method of implementation. A precise specification of the implementation is even more critical when the differences in test metric values are as minor as in the case of this paper. Did you implement all of the methods used in the comparison? Did you train all the models, or did you obtain pre-trained models?
  4. Only one type of test metric was used, i.e., the F1 score. Why is that? Comparing with other solutions based on only one metric might not show the different aspects of the proposed solution. I expect a discussion with different metrics for all the solutions. Also, why were so many different F1-score metrics used?
  5. Regarding the datasets used, I would expect a more detailed description of them. How were training and testing datasets created? Are those train and test datasets the same for each method? Were there any differences depending on the method?
  6. The language and style require moderate editing due to grammatical and punctuation errors, e.g. 

Line 13: "it provides effective..." - a capital letter is needed at the start of a new sentence

Lines 187-188 "Our proposed method will be evaluated on two widely DEAE datasets, RAMS[47 ] and WIKIEVENTS[ 16 ]." - widely used datasets?

7. The paper should also undergo extensive editing:

  • Too many abbreviations are used in the introduction. This first chapter should be clear and engaging, but introducing and using several abbreviations in one sentence reduces readability.
  • Spaces need to be included when citing other works or referring to tables/drawings, e.g. in the 17th line: "Figure1a)"
  • In the 17th line, there is an example of trigger words and roles found in some text: trigger words are in quotes, but roles are not. Unification in that matter is a must. In addition, using italics or a different font when mentioning parts of sentences would increase the readability and clarity of the text.
  • The content of the 131st line of text is wider than the specified text width.
  • Figure 6 appears to have been cut from another document, but has no reference to the bibliography. The lack of a reference for clearly copied content raises suspicions about the figure's origin: is it an original part of the submission or a previously published one?
  • Figures 4, 5 and 6 should be redrawn in LaTeX.
Author Response

Please see the attachment.

Author Response File: Author Response.docx
