Article
Peer-Review Record

Multi-Task Learning Model Based on BERT and Knowledge Graph for Aspect-Based Sentiment Analysis

Electronics 2023, 12(3), 737; https://doi.org/10.3390/electronics12030737
by Zhu He 1,†, Honglei Wang 1,2,*,† and Xiaoping Zhang 3
Reviewer 1: Anonymous
Reviewer 2:
Reviewer 3:
Submission received: 29 December 2022 / Revised: 25 January 2023 / Accepted: 27 January 2023 / Published: 1 February 2023
(This article belongs to the Section Artificial Intelligence)

Round 1

Reviewer 1 Report

The present paper deals with a multi-task Sentiment Analysis model based on BERT and Knowledge Graph.

 

From the beginning I want to emphasize that the Turnitin plagiarism detector reported a 27% similarity score, which is rather high for a journal paper. The authors should rephrase the paper in order to reduce this factor.

 

There are also several minor comments, as follows.

 

- the abbreviation BERT is used in the title, in the abstract and in the text. Even though the abbreviation comes from a known machine learning framework for natural language processing, it should still be explained;

 

- The state of the art is comprehensive, based on 43 reference papers. However, of those 43, only 6 were published after 2020. Taking into account the dynamics of the domain, I’d suggest that the authors make a thorough review of recently published papers and anchor their research on more recent grounds.

 

- Please explain Figure 1 (Overall structure of SABKG) in the text: what do the blocks represent (rel_1…N, ReLU, Trans, hi coefficients, wk coefficients, etc.)?

 

- Please explain Figure 2 (Heterogeneous graph) in the text, because the relation between this figure and the rest of the text is not obvious.

 

- in the text, “f(a, r, s)” should look the same as in the equation;

 

- with respect to the section “4. Experiments and Results Discussion” please describe the simulation test-bed.

 

- Also, please describe the main differences between the datasets used. Did you choose those datasets randomly, or is there a reason for choosing them over the many others available?

 

- the Accuracy in the table (Acc) and in the text (ACC) should have the same typeface.

 

- the F1 parameter in Figures 3 and 5 should be presented as histograms since there is no continuous variation of the parameter on the x axis.

 

- When comparing the performances achieved by the algorithm presented in this paper, the authors should use newer models.


Comments for author File: Comments.pdf

Author Response

We would like to thank you for your valuable comments. We have revised the paper according to your comments and suggestions. Changes in the revised manuscript are marked in red. The responses to your comments are detailed below:

Point 1: the abbreviation BERT is used in the title, in the abstract and in the text. Even though the abbreviation comes from a known machine learning framework for natural language processing, it should still be explained;

Response 1: We appreciate your helpful suggestion and we have revised the manuscript accordingly. We have indicated the full name of BERT in the revised manuscript (pp. 5-6, in red) where it first appears.

Point 2: The state of the art is comprehensive, based on 43 reference papers. However, of those 43, only 6 were published after 2020. Taking into account the dynamics of the domain, I’d suggest that the authors make a thorough review of recently published papers and anchor their research on more recent grounds.

Response 2: We thank you for your useful suggestions. We have now thoroughly reviewed the references in the article, added some references from the past two years, and deleted some references that are not novel enough. At present, the article has 46 references, including 20 published after 2020 and 32 published after 2019. The added references are marked in red in the revision.

Point 3: Please explain Figure 1 (Overall structure of SABKG) in the text: what do the blocks represent (rel_1…N, ReLU, Trans, hi coefficients, wk coefficients, etc.)?

Response 3: Your comment is quite valuable and we have revised the manuscript accordingly. In the revised version (pp. 141-150, in red), we added a description of Figure 1 and explained the meaning of the blocks in the figure.

Point 4: Please explain Figure 2 (Heterogeneous graph) in the text, because the relation between this figure and the rest of the text is not obvious.

Response 4: Following the reviewer’s suggestion, we have revised the manuscript accordingly. In the revised version (pp. 210-211, in red), we added a description of Figure 2 to make the article more logical. The nodes in the figure are the aspect terms and their corresponding adjectives, and the lines indicate whether the relationship between nodes is positive or negative. The heterogeneous graph is used to identify sentiment relationships in sentences and serves as input to the RGCN.
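The idea in this response (a heterogeneous graph of aspect terms and adjectives, with relation-typed edges, fed to a relational graph convolutional network) can be illustrated with a toy sketch. This is not the authors' code: the example nodes, the scalar relation weights, and the single aggregation step are illustrative assumptions; a real R-GCN uses learned weight matrices per relation and BERT-derived node features.

```python
# Illustrative sketch only (not the paper's implementation): one simplified
# relational graph convolution step over a toy heterogeneous graph whose
# nodes are aspect terms and adjectives, and whose edges carry
# "positive"/"negative" sentiment relations.

nodes = ["service", "fast", "battery", "short"]   # aspect terms + adjectives
edges = [("service", "positive", "fast"),         # (aspect, relation, sentiment word)
         ("battery", "negative", "short")]

dim = 4
# Toy one-hot node features; a real model would use BERT embeddings here.
h = {n: [1.0 if i == k else 0.0 for i in range(dim)] for k, n in enumerate(nodes)}
# One weight per relation type (scalars for illustration; matrices in R-GCN).
W = {"positive": 1.0, "negative": -1.0}

def rgcn_step(h, edges, W):
    """One simplified R-GCN layer: self-loop plus relation-specific messages."""
    out = {n: list(v) for n, v in h.items()}      # self-loop term
    for src, rel, dst in edges:
        for i in range(dim):                      # message from dst to src
            out[src][i] += W[rel] * h[dst][i]
    return out

h1 = rgcn_step(h, edges, W)
```

After one step, each aspect node's representation mixes in its sentiment word's features with a sign set by the relation type, which is the intuition behind using the heterogeneous graph as RGCN input.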

Point 5: in the text, “f(a, r, s)” should look the same as in the equation;

Response 5: According to your comment, we have modified the format of f(a, r, s) so that it looks the same as in the equation. (pp. 213, in red)

Point 6: with respect to the section “4. Experiments and Results Discussion” please describe the simulation test-bed.

Response 6: According to your comment, we have revised it in the revised version (pp. 249-250, in red). The experiments were run in PyCharm with Python 3.6.

Point 7: Also, please describe the main differences between the datasets used. Did you choose those datasets randomly, or is there a reason for choosing them over the many others available?

Response 7: Your comment is quite valuable and we have revised the manuscript accordingly. We have added a description and comparison of the datasets in the revised version (pp. 239-243, in red). We did not select the datasets randomly: Lap14, Rest14, and Rest15 are the most widely used datasets in the ABSA task. Many researchers have evaluated their models on these datasets, which facilitates comparative experiments with our model.

Point 8: the Accuracy in the table (Acc) and in the text (ACC) should have the same typeface.

Response 8: According to your comment, we have revised it in the revised version (pp. 337, 351, 353, in red). We uniformly use Acc in the text.

Point 9: the F1 parameter in Figures 3 and 5 should be presented as histograms since there is no continuous variation of the parameter on the x axis.

Response 9: Following the reviewer’s suggestion, we have revised the manuscript accordingly. The F1 value of the model does not vary continuously; we had originally drawn line charts to show the model's performance intuitively. We have now redrawn Figures 3 and 5 as histograms to avoid ambiguity. (pp. 288, 345, in red)

Point 10: When comparing the performances achieved by the algorithm presented in this paper, the authors should use newer models.

Response 10: Your comment is quite valuable and we have revised the manuscript accordingly. In Sections 4.2 and 4.3, we added some new models, all proposed after 2020, for comparative testing, and we added the corresponding references.

Author Response File: Author Response.pdf

Reviewer 2 Report

The authors proposed a sentiment analysis model based on BERT and knowledge graph for improving the aspect-based sentiment analysis (ABSA) task, and two essential conclusions can be drawn from the analysis. The authors combined the aspect extraction task and the aspect sentiment classification task to complete the ABSA task through the interaction between the two tasks. The method in this paper integrates the part-of-speech information into the output representation of BERT and obtains the semantic feature information of the input text through linguistic knowledge. Besides, this paper learns the embeddings in the "aspect word, sentiment polarity, sentiment word" triplet through a relational graph convolutional network, which enriches the contextual relationship between the aspect word and the sentiment word in the text to better predict the aspect sentiment polarity.
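The triplet-embedding idea summarized above, scoring an (aspect word, sentiment polarity, sentiment word) triple in embedding space, can be illustrated with a toy scoring function. The TransE-style distance below is an assumption made for illustration, not the paper's actual f(a, r, s) definition, and the example vectors are made up.

```python
# Hypothetical illustration (the paper's exact f(a, r, s) is not reproduced
# here): a TransE-style score for an (aspect, relation, sentiment word)
# triple of embeddings, where a plausible triple has a small ||a + r - s||.

import math

def f(a, r, s):
    """Score a triple of embedding vectors: lower means more plausible."""
    return math.sqrt(sum((ai + ri - si) ** 2 for ai, ri, si in zip(a, r, s)))

aspect    = [0.2, 0.1]
relation  = [0.3, 0.4]   # e.g. embedding of the "positive" polarity relation
sentiment = [0.5, 0.5]   # embedding of the paired sentiment word

good = f(aspect, relation, sentiment)    # a + r ≈ s, so the score is near 0
bad  = f(aspect, relation, [2.0, -1.0])  # mismatched triple scores worse
```

Training such embeddings pushes observed triples toward low scores and corrupted ones toward high scores, which is one common way relational knowledge is injected into the sentiment classifier.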

The experimental results show that the proposed model can achieve the most advanced performance compared with previous models, and the learning method that fuses semantic information and uses knowledge graphs has better ABSA performance than the standard end-to-end multi-task learning method.

The article can be published without changes.

Author Response

Thank you for your review of my article. I'm glad to get your approval. Best wishes to you.

Reviewer 3 Report

Summary, and overall contribution:

This paper proposes a multi-task Sentiment Analysis model based on BERT and Knowledge Graph (SABKG) for improving the aspect-based sentiment analysis (ABSA) task.

Minor comments:

- 13/43 (30%) of the references are preprints, and only 7/43 articles are from the past two years. I suggest updating the references and including recent ones if possible.

- Because the BERT model was used to encode the input sequences, it would be better if you compared your ABSA model to BERT/Transformer-based models like R-GAT+BERT.

- Also, I suggest including references to research papers in the tables where you provide a comparison of results to other methods.

- In the R-GAT paper the authors claim that the model was evaluated on the SemEval2014 task (Restaurant, Laptop), while in your paper you provided results of R-GAT on Rest15. Can you explain how you obtained these results?

Author Response

We would like to thank you for your valuable comments. We have revised the paper according to your comments and suggestions. Changes in the revised manuscript are marked in red. The responses to your comments are detailed below:

Point 1: 13/43 (30%) of the references are preprints, and only 7/43 articles are from the past two years. I suggest updating the references and including recent ones if possible.

Response 1: We thank you for your helpful suggestions. We have now thoroughly reviewed the references following the reviewers' suggestions. We added some references from the past two years and deleted some references that are not novel enough. At present, the article has 46 references, including 20 published after 2020 and 32 published after 2019. The added references are marked in red in the revision.

Point 2: Because the BERT model was used to encode the input sequences, it would be better if you compared your ABSA model to BERT/Transformer-based models like R-GAT+BERT.

Response 2: Your comment is quite valuable and we have revised the manuscript accordingly. In Section 4.3, we added the R-GAT-BERT, GBM-BERT, and RACL-BERT models, which are based on BERT encoding, to better compare with the model in this paper. Section 4.3 has also been rewritten. (pp. 303-322, in red)

Point 3: Also, I suggest including references to research papers in the tables where you provide a comparison of results to other methods.

Response 3: Following the reviewer’s suggestion, we have revised the manuscript accordingly. We have added references for the comparison models to Tables 3 and 4 for the convenience of readers. (pp. 289, 302, in red)

Point 4: In the R-GAT paper the authors claim that the model was evaluated on the SemEval2014 task (Restaurant, Laptop), while in your paper you provided results of R-GAT on Rest15. Can you explain how you obtained these results?

Response 4: We thank you for your useful suggestion. We carefully reviewed the original R-GAT paper and found that its authors did not experiment on Rest15. However, the R-GAT code was released publicly, and other researchers have tested it on the Rest15 dataset. We obtained the R-GAT results on Rest15 from the paper by Liang et al.:

Liang B, Su H, Gui L, et al. Aspect-based sentiment analysis via affective knowledge enhanced graph convolutional networks[J]. Knowledge-Based Systems, 2022, 235: 107643.

Author Response File: Author Response.pdf

Round 2

Reviewer 1 Report

With respect to the issues raised in the comments, the paper has been significantly improved.

 

I'd suggest that the authors read the paper again more carefully, because the English is sometimes difficult to understand.

Author Response

Thank you for your valuable comments. We have revised the paper according to your comments and suggestions.

We have revised the manuscript according to these comments and suggestions. We mainly checked the English writing of the article carefully, rewrote some sentences that were not expressed clearly, and corrected some non-standard writing. The revised parts of the article are marked in blue.
