Article
Peer-Review Record

TransRFT: A Knowledge Representation Learning Model Based on a Relational Neighborhood and Flexible Translation

Appl. Sci. 2023, 13(19), 10864; https://doi.org/10.3390/app131910864
by Boyu Wan 1,2, Yingtao Niu 2,*, Changxing Chen 1 and Zhanyang Zhou 2
Reviewer 1: Anonymous
Reviewer 2:
Reviewer 3:
Submission received: 30 August 2023 / Revised: 29 September 2023 / Accepted: 29 September 2023 / Published: 29 September 2023
(This article belongs to the Section Computing and Artificial Intelligence)

Round 1

Reviewer 1 Report

All is as in the report!

Comments for author File: Comments.pdf

Author Response

Thank you very much for your thoughtful comments! Please refer to the attached document for the reply to your comment.

Author Response File: Author Response.docx

Reviewer 2 Report

This paper presents a new knowledge representation learning model called TransRFT, which addresses the limitations of existing models in dealing with complex relations. The authors thoroughly explain the foundation of the TransRFT knowledge representation paradigm, which incorporates neighborhood information and flexible translation to address TransH's drawbacks. They also use a probability-based technique to improve the quality of negative triples by replacing the head and tail entities. The paper includes experimental results and a performance analysis of the model, and concludes that TransRFT outperforms existing models in terms of accuracy and efficiency. I found the research interesting and suggest revisions as listed below:

1.      Can TransRFT be extended to handle temporal and spatial relations in addition to the existing relational neighborhood and flexible translation?

2.      What are some potential applications of TransRFT in real-world scenarios, such as natural language processing, recommendation systems, and question answering?

3.      How can we evaluate the robustness and generalization ability of TransRFT on noisy and incomplete knowledge graphs?

4.      Recent representation learning relies on multi-modal pretraining; consider the following references when building the related-work section of the paper:

https://arxiv.org/abs/2109.14910

https://arxiv.org/abs/2107.02575

https://arxiv.org/abs/2307.08347

 

https://arxiv.org/abs/2305.19894
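The probability-based negative-triple construction mentioned in the summary above can be sketched as a Bernoulli-style corruption scheme: for each relation, estimate the average number of tails per head (tph) and heads per tail (hpt), then replace the head with probability tph / (tph + hpt) and the tail otherwise. This is a minimal illustrative sketch, not the authors' actual code; all names are hypothetical.

```python
import random
from collections import defaultdict

def bernoulli_probs(triples):
    """For each relation r, estimate tph (tails per head) and hpt
    (heads per tail), then return P(replace head) = tph / (tph + hpt)."""
    heads = defaultdict(set)   # (r, h) -> set of observed tails
    tails = defaultdict(set)   # (r, t) -> set of observed heads
    rels = set()
    for h, r, t in triples:
        heads[(r, h)].add(t)
        tails[(r, t)].add(h)
        rels.add(r)
    probs = {}
    for r in rels:
        tph_counts = [len(ts) for (rr, _), ts in heads.items() if rr == r]
        hpt_counts = [len(hs) for (rr, _), hs in tails.items() if rr == r]
        tph = sum(tph_counts) / len(tph_counts)
        hpt = sum(hpt_counts) / len(hpt_counts)
        probs[r] = tph / (tph + hpt)
    return probs

def corrupt(triple, entities, probs, rng=random):
    """Produce a negative triple by replacing the head or the tail,
    choosing which end to corrupt with the relation's Bernoulli probability."""
    h, r, t = triple
    if rng.random() < probs[r]:          # replace head
        return (rng.choice(entities), r, t)
    return (h, r, rng.choice(entities))  # replace tail
```

Corrupting the end that has more alternatives lowers the chance of accidentally generating a false negative (a corrupted triple that is actually true).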


Author Response

Thank you very much for your thoughtful comments! Please refer to the attached document for the reply to your comment.

Author Response File: Author Response.docx

Reviewer 3 Report

The authors tackle the problem of knowledge representation, based on a learning model able to capture similarities between concepts. The idea is inspired by, although not derived from, the TransE model. The novel scheme, TransRFT, is based on projections onto relational hyperplanes. Each term is defined as a vector, but what is interesting in the proposed model is the selection of the hyperplanes those vectors are mapped onto.
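The hyperplane projection described above follows the TransH pattern: each relation r carries a unit normal vector w_r defining its hyperplane and a translation vector d_r, entities are projected onto the hyperplane, and the translation proj(h) + d_r ≈ proj(t) is scored. A minimal sketch under those assumptions (all names are illustrative, not the paper's notation or code):

```python
import numpy as np

def project(e, w):
    """Project entity embedding e onto the hyperplane with unit normal w."""
    return e - np.dot(w, e) * w

def transh_score(h, t, w_r, d_r):
    """Dissimilarity score (lower is better): || proj(h) + d_r - proj(t) ||_2."""
    return np.linalg.norm(project(h, w_r) + d_r - project(t, w_r))
```

Because the projection discards the component of each entity along w_r, distinct entities can coincide on one relation's hyperplane while remaining separate on another's, which is what lets this family of models handle one-to-many and many-to-one relations better than plain TransE.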

Although the ideas are relevant, and the results are extremely good (Tables 2-4), there are some points that, if clarified, will help the authors better communicate their ideas:

 

- TransH is not fully explained.

- The equations are not formatted in a way that is easy to understand them.

- The equation in line 72 is not fully described. What are h, t, r? Their meaning is briefly hinted at a few lines below, but not in a clear manner.

- It is not clear how the training was conducted. For instance, how many triplets were considered in this stage? Table 1 refers to it, but it is not completely clear. For example, the text states that "The details of the entity number, relationship number, training set, verification set and test set of these data sets are shown in Table 1". However, Table 1 contains no reference to "verification set", only to "valid".

 

- The conclusions are scarce and do not do justice to all the work the authors have presented. In its current state, the section reads more like a summary than a proper conclusion.

Line 52. "The researchers presented knowledge representation learning as a way to get around these ..." It is not clear to whom "the researchers" refers.

 

Line 78: "For example: poor performance in complex relationships, inability to accurately infer one-to-many, many-to-one, many-to-many, and reflexive relationships." The relevance and meaning of the example are unclear.

Author Response

Thank you very much for your thoughtful comments! Please refer to the attached document for the reply to your comment.

Author Response File: Author Response.docx
