Article
Peer-Review Record

Knowledge Graph Double Interaction Graph Neural Network for Recommendation Algorithm

Appl. Sci. 2022, 12(24), 12701; https://doi.org/10.3390/app122412701
by Shuang Kang, Lin Shi * and Zhenyou Zhang
Reviewer 1:
Reviewer 2: Anonymous
Submission received: 27 October 2022 / Revised: 26 November 2022 / Accepted: 9 December 2022 / Published: 11 December 2022
(This article belongs to the Section Computing and Artificial Intelligence)

Round 1

Reviewer 1 Report

This manuscript develops a double-interaction graph neural network recommendation algorithm based on a knowledge graph. The algorithm first integrates dataset items into user features and uses a graph neural network to aggregate the features of nodes in the knowledge graph to obtain neighborhood information; it then lets the user features interact and aggregate with the entity's own information and its neighborhood information separately; finally, it uses a label propagation algorithm to train the edge weights to assist the learning of entity features. The research is potentially interesting and has practical application value, but I think that this manuscript still has many weaknesses.
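For context, the following is a minimal, purely illustrative sketch of the kind of pipeline summarized above: a weighted neighborhood aggregation followed by a separate interaction of the user feature with the entity's own embedding and with its neighborhood embedding. All names, the element-wise interaction, and the additive combination are assumptions made for illustration, not the authors' actual implementation.

```python
import numpy as np

def neighborhood_aggregate(entity_emb, neighbor_ids, edge_weights):
    """Aggregate the neighbor embeddings of one entity with (learned) edge weights.
    Illustrative sketch only; the paper's exact aggregation may differ."""
    neighbors = entity_emb[neighbor_ids]               # (n_neighbors, d)
    weights = edge_weights / (edge_weights.sum() + 1e-8)
    return weights @ neighbors                          # (d,)

def double_interaction(user_emb, self_emb, neigh_emb):
    """Let the user feature interact with the entity's own embedding and with its
    aggregated neighborhood embedding separately, then combine the two parts."""
    self_part  = user_emb * self_emb                    # interaction with the entity itself
    neigh_part = user_emb * neigh_emb                   # interaction with the neighborhood
    return self_part + neigh_part                       # simple additive combination (assumption)

# toy example: 5 entities with 4-dimensional embeddings
rng = np.random.default_rng(0)
entity_emb = rng.normal(size=(5, 4))
user_emb = rng.normal(size=4)

neigh_emb = neighborhood_aggregate(entity_emb,
                                   neighbor_ids=np.array([1, 2, 4]),
                                   edge_weights=np.array([0.5, 0.3, 0.2]))
entity_repr = double_interaction(user_emb, entity_emb[0], neigh_emb)
print(entity_repr.shape)  # (4,)
```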

 

1. Literature citation error, for example, "However, CF technology usually has data sparse problems [Error! Reference source not found.]" in the Introduction.

2. "and cold start problems [4-66]", but there are only 25 references in the manuscript.

3. Some sentences are not fluent, for example, "Collaborative filtering (CF) [1,2] it is a typical recommendation technology in recommendation system"; "In order to solve this problem, additional information can be introduced to 34 solve the problems existing in CF"; "By discovering the multi-type link relationship between users 51 and items on the knowledge graph."; "Message passing [18] it is divided into two steps:"; etc.

4. In "audio, video, etc.),", the brackets do not match.

5. Equation (1) contains Chinese characters.

6. The caption of Table 3, "Table 3. This is a table. Tables should be placed in the main text near to the first time they are cited.", is inappropriate.

7. The research motivation is unclear, and the contribution and novelty of this paper are not obvious.

 

Author Response

Please see the attachment.

Author Response File: Author Response.docx

Reviewer 2 Report

I think the paper is very interesting in our modern world, where everything goes online: movies, books, music, papers, social media, etc. However, I think the paper cannot be accepted in its present form. I am not an English expert, but the paper is hard to read in this form; the sentences are unclear and they make the paper harder to understand.

Major:

1. I do not understand Fig. 2.
2. The paper is missing a thorough literature overview: the methods run over 5 pages, while the literature overview covers only half a page. Of course, quality matters more than quantity, but there are many papers/works that could be cited regarding GNNs (GAT, Chebyshev, etc.), graph embeddings (node2vec, GraphSAGE, etc.), and so on.
3. Why do the Authors use Precision@K and Recall@K?
4. I think the Experiment section could be greatly improved. The Authors write about GNNs, but I am missing the training accuracy and loss curves, which are crucial for neural networks.
Another interesting figure would be a t-SNE visualization of the embeddings of the proposed structure.
5. The Authors write: "The average accuracy of KGDI in MoviesLens-1M movie dataset, reached 8.7%, 382 which proved the validity of KGDI model." Is this sentence correct? We would rather expect accuracy to approach 100%; 8.7% seems to be wrong. I get the feeling that the Authors use the words accuracy, precision, and recall interchangeably, although these are three different measures (a minimal sketch of Precision@K and Recall@K follows this list).
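As a reference point for items 3 and 5, the following is a minimal sketch (not taken from the paper) of how Precision@K and Recall@K are commonly computed for a ranked recommendation list; the variable names are illustrative only.

```python
def precision_recall_at_k(ranked_items, relevant_items, k):
    """Compute Precision@K and Recall@K for a single user.

    ranked_items   : list of recommended item ids, ordered by predicted score
    relevant_items : set of item ids the user actually interacted with
    """
    top_k = ranked_items[:k]
    hits = sum(1 for item in top_k if item in relevant_items)
    precision = hits / k
    recall = hits / len(relevant_items) if relevant_items else 0.0
    return precision, recall

# toy example
ranked = [10, 42, 7, 3, 99]
relevant = {42, 3, 55}
print(precision_recall_at_k(ranked, relevant, k=5))  # (0.4, 0.666...)
```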

Minor:

1. In the Introduction there is the sentence fragment "[Error! Reference source not found]".
2. Equation (1) - shouldn't it be in English?
3. I can clearly see that the paper has been written in Word, not in LaTeX, and the formatting is really bad; in-sentence equations and punctuation are "floating".
4. Table 1 does not present statistics but rather the sizes of the datasets.
5. In the text the Authors write that Fig. 5 presents accuracy, while in fact it presents Precision@K and Recall@K; are these the same as accuracy?

Author Response

Please see the attachment.

Author Response File: Author Response.docx

Round 2

Reviewer 1 Report

Some problems have been solved, but issues remain in the revised version. The motivation of the study is still not clear, and only one dataset is used, so the results are insufficient.

Author Response

Please see the attachment.

Author Response File: Author Response.docx

Reviewer 2 Report

Dear Authors,

Thank you for addressing all of my concerns. I still have some minor points:

1. Table 3 has the wrong caption.

2. By visualization of the embeddings I meant a t-SNE visualization. I know that the embeddings are multidimensional; that is why I suggested t-SNE (see the sketch after this list). Maybe this explanation would help:

https://towardsdatascience.com/visualizing-feature-vectors-embeddings-using-pca-and-t-sne-ef157cea3a42

Anyway, I still cannot find in the text any information about the sizes of the embeddings.

3. Was the dataset split into three sets, i.e., training, validation, and test? That is the usual procedure for artificial neural networks, and from Fig. 6 I think the Authors used only two sets: one for training and one for testing. A plot of loss and accuracy for the training set alone, without validation, does not tell much about the training itself. Is it possible to also draw these plots for the validation dataset on the same figure?
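To illustrate the t-SNE suggestion in point 2, the following is a minimal sketch using scikit-learn and matplotlib; the embedding matrix and labels are random placeholders, since the model's actual embeddings are not available here.

```python
import numpy as np
import matplotlib.pyplot as plt
from sklearn.manifold import TSNE

# placeholder data: 500 entity embeddings of dimension 64 with 5 dummy classes
rng = np.random.default_rng(42)
embeddings = rng.normal(size=(500, 64))
labels = rng.integers(0, 5, size=500)

# project the high-dimensional embeddings to 2-D for visualization
tsne = TSNE(n_components=2, perplexity=30, random_state=42)
points = tsne.fit_transform(embeddings)

plt.figure(figsize=(6, 5))
plt.scatter(points[:, 0], points[:, 1], c=labels, cmap="tab10", s=8)
plt.title("t-SNE projection of entity embeddings (placeholder data)")
plt.tight_layout()
plt.show()
```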

Author Response

Please see the attachment.

Author Response File: Author Response.docx

Round 3

Reviewer 1 Report

The paper has been revised according to the comments, so I recommend accepting it.
