CARAG: A Context-Aware Retrieval Framework for Fact Verification, Integrating Local and Global Perspectives of Explainable AI
Round 1
Reviewer 1 Report
Comments and Suggestions for Authors
Please see attached report.
Comments for author File: Comments.pdf
Author Response
Please see the attachment.
Author Response File: Author Response.pdf
Reviewer 2 Report
Comments and Suggestions for Authors
This is an interesting paper that claims two contributions, each one with some clear limitations.
The first contribution is a new dataset for fact verification. The limitation here is the way the dataset was developed. With each entry of the dataset manually proposed by a single team, there is no cross-checking between humans. Therefore, personal bias is not removed. Moreover, alignment between the contributions of the different teams is not assured.
The second contribution is a new method/algorithm for fact-checking. The limitation here is more pronounced: with the method designed to work only with the newly presented dataset, there cannot be any direct comparison to any of the well-known and already established methodologies.
This is not to say that there is no value in the contribution of the work. Quite the contrary, the article is promising, and the qualitative discussion of the benefits compared to the earlier works is convincing. Also, the step-by-step presentation of the way the method works makes for an excellent presentation and facilitates others to apply the approach in their own works.
Overall, despite two major limitations, I lean towards a positive opinion for this article, particularly due to the excellent step-by-step presentation. An effort to 1) validate the introduced dataset and 2) construct a setting in which the introduced method can be compared to the already published research would make the article even better.
Author Response
Comment 1: “This is an interesting paper that claims two contributions, each one with some clear limitations.”
Response 1: Thank you for your positive assessment. We appreciate your recognition of the contributions made in this work.
Comment 2: “The first contribution is a new dataset for fact verification. The limitation here is the way the dataset was developed. With each entry of the dataset manually proposed by a single team, there is no cross-checking between humans. Therefore, personal bias is not removed. Moreover, alignment between the contributions of the different teams is not assured.”
Response 2: We acknowledge the reviewer’s concern regarding the dataset’s annotation process, and we recognise that cross-team validation and additional alignment measures would further enhance the dataset’s reliability. The current release is FactVer v1.3; to address this concern, we plan to introduce cross-team validation and additional bias-reduction strategies in FactVer v2.0. These refinements will improve alignment across the different thematic categories while maintaining annotation consistency.
Comment 3: “The second contribution is a new method/algorithm for fact-checking. The limitation here is more pronounced: With the method designed to work only with the newly presented dataset, there cannot be any direct comparison to any of the well-known and already established methodologies.”
Response 3: We understand the reviewer’s concern regarding the evaluation scope of our proposed methodology. Since CARAG is a novel retrieval mechanism tailored to integrate thematic context into AFV, existing benchmark datasets lack the necessary thematic structure to facilitate a direct comparison. However, recognising the importance of broader applicability, we have recently refined the methodology toward greater generalisability. As an initial step, we developed CARAG-u, an unsupervised extension that eliminates dependence on predefined thematic labels and manual annotations. This extension enables comparative evaluation without reliance on pre-annotated datasets. A detailed study on this approach has been conducted, and the manuscript is currently under review.
Comment 4: “This is not to say that there is no value in the contribution of the work. Quite the contrary, the article is promising, and the qualitative discussion of the benefits compared to the earlier works is convincing. Also, the step-by-step presentation of the way the method works makes for an excellent presentation and facilitates others to apply the approach in their own works.”
Response 4: Thank you for your encouraging feedback. We are pleased to hear that the step-by-step methodology was well received and that the qualitative discussion effectively conveyed the benefits of our approach.
Comment 5: “Overall, despite two major limitations, I lean towards a positive opinion for this article, particularly due to the excellent step-by-step presentation. An effort to (1) validate the introduced dataset and (2) construct a setting in which the introduced method can be compared to the already published research would make the article even better.”
Response 5: We appreciate the reviewer’s constructive feedback and recognition of the paper’s contributions. We acknowledge the importance of validating the dataset through additional measures and are actively working towards these improvements in FactVer v2.0. Furthermore, our ongoing research on CARAG-u aims to extend our method’s applicability beyond the FactVer dataset, allowing for broader comparisons with existing AFV methodologies.