Peer-Review Record

Neighborhood Aggregation Collaborative Filtering Based on Knowledge Graph

Appl. Sci. 2020, 10(11), 3818; https://doi.org/10.3390/app10113818
by Dehai Zhang 1, Linan Liu 1, Qi Wei 1, Yun Yang 1,*, Po Yang 2 and Qing Liu 1
Reviewer 1: Anonymous
Reviewer 2: Anonymous
Submission received: 7 May 2020 / Revised: 23 May 2020 / Accepted: 27 May 2020 / Published: 30 May 2020
(This article belongs to the Special Issue Recommender Systems and Collaborative Filtering)

Round 1

Reviewer 1 Report

The paper describes interesting work, but the presentation and level of detail have to be improved to make it a good contribution.
Specifically:

- The English prose is very poor and has to be improved; there are many typos and grammatical errors that make the paper hard to read.
- The related work might be improved by considering the following papers: "Multirelational Recommendation in Heterogeneous Networks"; "TrustWalker: A Random Walk Model for Combining Trust-based and Item-based Recommendation"; and "A Matrix Factorization Technique with Trust Propagation for Recommendation in Social Networks".
- The description of the proposed model should be enhanced by adding intuition and detail to the aggregation and sub-aggregation parts (Agg).
- On the other hand, the authors devote a fairly large amount of space to describing the construction of the neighborhood, which is trivial and can be shortened.

I also have some concerns regarding the experiments (Section 4):
- The decision to treat movie ratings of 4 or higher as positive clicks and to treat all selected music as positive interactions is arbitrary and should be justified, e.g., by an analysis of the distribution of ratings in the datasets (see the first sketch after this list).
- The selection of baselines has the following problems: the authors pick SVD as an example of a recommender that exploits only ratings, but SVD++ would probably be a stronger choice, as it outperforms SVD. Moreover, why did the authors select CKE, which needs textual information? Used in the context of this experiment, it is a very weak baseline.
- The setting of the hyperparameters is unclear. The authors do not explain how they picked them, whether they did so by optimizing the algorithms, and, if so, according to which performance metric.
- The discussion of the results should be strengthened; it is currently rather superficial and does not explain the findings clearly. Moreover, the selected metrics are acceptable, but top-N recommendation should also be measured with ranking metrics such as MAP and MRR, and RMSE and MAE should be added to measure rating prediction (see the second sketch after this list). Also, what are ACC and F1? Please explain.
- Finally, I wonder whether the recommender really has to be evaluated @100; usually much smaller values of K are considered.
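
As an illustration of the threshold point above, here is a minimal sketch of the kind of distribution analysis being requested. It assumes a MovieLens-style ratings file on a 1–5 scale; the file name, separator, and column names are placeholders and are not taken from the reviewed paper.

```python
# Minimal sketch: inspect the rating distribution to justify a positive-feedback
# threshold. Assumes a MovieLens-style file "ratings.dat" with a 1-5 rating scale;
# the path, separator, and column names are illustrative assumptions.
import pandas as pd

ratings = pd.read_csv(
    "ratings.dat", sep="::", engine="python",
    names=["user_id", "item_id", "rating", "timestamp"],
)

threshold = 4  # candidate cut-off for treating a rating as an implicit "click"
distribution = ratings["rating"].value_counts(normalize=True).sort_index()
positive_share = (ratings["rating"] >= threshold).mean()

print("Rating distribution (share per level):")
print(distribution.round(3))
print(f"Share of ratings >= {threshold}: {positive_share:.1%}")
```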
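For the metrics point, the following is a minimal, self-contained sketch of how the suggested ranking metrics (AP@K/MAP, MRR) and rating-prediction metrics (RMSE, MAE) are commonly computed. It is not the authors' evaluation code; the function names and toy data are illustrative.

```python
# Minimal sketch of the ranking and rating-prediction metrics mentioned above.
# Not the authors' evaluation code; names and the toy data are illustrative.
import math

def average_precision_at_k(ranked_items, relevant, k):
    """AP@K for one user: mean of precision@i over ranks i that hit a relevant item."""
    hits, score = 0, 0.0
    for i, item in enumerate(ranked_items[:k], start=1):
        if item in relevant:
            hits += 1
            score += hits / i
    return score / min(len(relevant), k) if relevant else 0.0

def reciprocal_rank(ranked_items, relevant):
    """RR for one user: 1 / rank of the first relevant item (0 if none appears)."""
    for i, item in enumerate(ranked_items, start=1):
        if item in relevant:
            return 1.0 / i
    return 0.0

def rmse(y_true, y_pred):
    return math.sqrt(sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / len(y_true))

def mae(y_true, y_pred):
    return sum(abs(t - p) for t, p in zip(y_true, y_pred)) / len(y_true)

# Toy example: one user's top-5 ranking and two held-out relevant items.
# MAP and MRR are the means of AP@K and RR over all test users.
ranking, relevant = ["a", "b", "c", "d", "e"], {"b", "e"}
print("AP@5:", average_precision_at_k(ranking, relevant, 5))    # (1/2 + 2/5) / 2 = 0.45
print("RR  :", reciprocal_rank(ranking, relevant))               # 1/2 = 0.5
print("RMSE:", rmse([4, 3, 5], [3.5, 3, 4]))
print("MAE :", mae([4, 3, 5], [3.5, 3, 4]))
```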

Author Response

Please see the attachment.

Author Response File: Author Response.pdf

Reviewer 2 Report

The authors propose a neighborhood aggregation collaborative filtering method for recommender systems based on a knowledge graph and graph convolutional neural networks (GCNs).

The paper is very well-written, and the authors focused on a very interesting topic in this area.

The state-of-the-art review is accurate and detailed.

The NACF framework proposes to embed users directly into the knowledge graph using a combination of neighborhood aggregation and attention methods. I found the methodology very interesting and convincing.

However, the authors should explain some parts of the pipeline more clearly, giving more details and discussing the reasons behind particular design choices: in particular, the attention mechanism and how the training phase combines the aggregation steps with the back-propagation of the GCN. Aggregation seems to be a preliminary, parametric step performed for each user node before training.

It is also unclear how the method computes the initial node embeddings before aggregation (for example, e_u[1]). I suggest improving these parts in Section 2; a generic sketch of the kind of step in question follows below.
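
To make the kind of detail being asked for concrete, here is a minimal sketch of one attention-weighted neighborhood aggregation step over knowledge-graph entity embeddings. It is a generic illustration under assumed shapes and names, not the authors' NACF implementation, and the initial embeddings here are simply trainable lookup tables.

```python
# Generic sketch of one attention-weighted neighborhood aggregation step.
# This is NOT the authors' NACF code; shapes, names, and the scoring function
# are illustrative assumptions. Initial embeddings are plain trainable lookups.
import torch
import torch.nn as nn
import torch.nn.functional as F

class NeighborAggregation(nn.Module):
    def __init__(self, num_entities, num_relations, dim):
        super().__init__()
        self.entity_emb = nn.Embedding(num_entities, dim)    # source of initial e_u[1]-style vectors
        self.relation_emb = nn.Embedding(num_relations, dim)
        self.linear = nn.Linear(2 * dim, dim)

    def forward(self, user_vec, center, neighbors, relations):
        """One aggregation hop.
        user_vec:  (dim,)  user embedding
        center:    ()      entity id of the node being updated
        neighbors: (n,)    ids of its knowledge-graph neighbors
        relations: (n,)    ids of the connecting relations
        """
        h_center = self.entity_emb(center)      # (dim,)
        h_neigh = self.entity_emb(neighbors)    # (n, dim)
        h_rel = self.relation_emb(relations)    # (n, dim)

        # Attention scores: how much each connecting relation matters to this user.
        scores = (h_rel * user_vec).sum(dim=-1)  # (n,)
        alpha = F.softmax(scores, dim=-1)        # (n,)

        # Weighted sum of neighbor embeddings, then combine with the center node.
        neigh_agg = (alpha.unsqueeze(-1) * h_neigh).sum(dim=0)  # (dim,)
        return torch.relu(self.linear(torch.cat([h_center, neigh_agg])))

# Toy usage with random ids; gradients flow back into the embedding tables,
# so the "initial" embeddings are learned jointly with the aggregation weights.
model = NeighborAggregation(num_entities=100, num_relations=10, dim=16)
user_vec = torch.randn(16)
out = model(user_vec, torch.tensor(5), torch.tensor([1, 2, 3]), torch.tensor([0, 4, 7]))
print(out.shape)  # torch.Size([16])
```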


I do appreciate that the authors tested their solution on well-known datasets to demonstrate that NACF outperforms other techniques, and I found the results very interesting in terms of precision and recall. However, it is not clear whether the authors performed several training sessions and predictions with different top-K values or whether the results were obtained in a single run. It is essential to clarify this point, in particular to guarantee the robustness of the results on the music dataset. The same concern holds for the CTR results, where performance is only marginally better than that of the other approaches for both AUC and accuracy.


The results concerning the embedding dimensions are not very meaningful unless they are contextualized with more details about the initial node embeddings and the number of runs performed. I suggest that the authors also clarify this point.


Author Response

Please see the attachment.

Author Response File: Author Response.pdf

Round 2

Reviewer 1 Report

The authors greatly improved the paper. It is a pity that they did not extend the experiments with SVD++ etc., as I suggested. Nevertheless, the current version of the paper is fine for publication, as it now presents the applied methods clearly and uses better evaluation metrics.

I suggest a final proof-read.

Reviewer 2 Report

The authors satisfied all the requests made in the previous review by adding several new passages that introduce the previously missing details. I suggest accepting the paper in this form.
