Article
Peer-Review Record

HFGNN-Proto: Hesitant Fuzzy Graph Neural Network-Based Prototypical Network for Few-Shot Text Classification

Electronics 2022, 11(15), 2423; https://doi.org/10.3390/electronics11152423
by Xinyu Guo 1, Bingjie Tian 2 and Xuedong Tian 1,3,*
Reviewer 1:
Reviewer 2: Anonymous
Reviewer 3: Anonymous
Submission received: 24 June 2022 / Revised: 30 July 2022 / Accepted: 1 August 2022 / Published: 3 August 2022

Round 1

Reviewer 1 Report

 

The authors propose a novel hesitant fuzzy graph neural network (HFGNN) model that explores the multi-attribute relations between samples and combines it with a Prototypical Network to achieve few-shot text classification. This study is interesting; however, it needs the following improvements:

1.     Is "HFGNN" a novel approach? In its absence, would the results be significantly different?

2.     Is the proposed method generalizable for other MCDM problems?

3.     I advise the authors to clearly explain why they preferred to improve the HFGNN-Proto method. What was their motivation for implementing this model, and what benefits have they observed compared with other MCDM methods?

4.     In Section 2, the authors provide a literature review. I advise the authors to present the current literature, and their contribution to it, in a summary table.

5.     Some MCDM papers should be discussed in the manuscript as follows: (i) An analytics approach to decision alternative prioritization for zero-emission zone logistics (ii) Recovery Center Selection for End-of-life Automotive Lithium-ion Batteries Using an Integrated Fuzzy WASPAS Approach

6.     The authors need to discuss the limitations of the proposed method, their recommendations for future work, and how the proposed method solved the case study problem.

7.     How can practitioners use the proposed method in real-life problems, and how is the proposed method useful for future studies?

Author Response

Please see the attachment.

Author Response File: Author Response.pdf

Reviewer 2 Report

Dear author, it is my honor to review your paper. This paper is substantial in content and complete in framework; the proposed algorithm has no obvious defects, and its performance is supported by sufficient experimental verification. Overall, this is an excellent paper. However, it has significant deficiencies in language expression and grammatical style, which will make it difficult to read. I strongly recommend that you polish the paper's language and grammar.

Author Response

Please see the attachment.

Author Response File: Author Response.pdf

Reviewer 3 Report

A brief summary

This article proposes an HFGNN-Proto model for few-shot text classification that considers the multiple sub-attribute relationships among texts, and develops a new hesitant fuzzy strategy incorporating all of these relations.

In addition, a linear function is designed to provide inductive biases for the transfer of the relations.

 

Specific comments:

This paper has a clear motivation and decent experimental results (though I have some concerns about the baseline models; see below). The weaknesses of previous research have been taken into account and corrected, with some innovation.

I am leaning toward "major revisions" based on my current knowledge and understanding of the paper, but I am willing to revisit this decision after getting feedback from the authors.

In particular, I would be glad if the author could clarify the questions below.

 

*Using this link (https://paperswithcode.com/sota/few-shot-text-classification-on-raft) as an example, the SOTA method is one based on a GPT-3 variant, but your paper makes no mention of GPT-3. The reference/baseline models used in the experiments may also not be strong enough. Comparing your model with some of the latest algorithms proposed in the few-shot-learning community would be more convincing.

I think you need to cite this paper and understand the current benchmark: Alex, N., Lifland, E., Tunstall, L., Thakur, A., Maham, P., Riedel, C.J., Hine, E., Ashurst, C., Sedille, P., Carlier, A., Noetel, M., & Stuhlmüller, A. (2021). RAFT: A Real-World Few-Shot Text Classification Benchmark. arXiv, abs/2109.14076.

 

*In Section 4.2, for all the usages of the pre-trained embeddings (BERT): why did you fine-tune the embedding parameters during training? In my opinion, fine-tuning should be avoided so as not to disrupt the inherent geometry of the word embeddings. Fine-tuning will cause the embeddings to lose the relationship between the meta-train vocabulary and the meta-test vocabulary, since the vocabularies of the meta-train and meta-test classes may be very different. This makes your method look as if it drops some of the "relation" information from the word embeddings and then adds it back via the proposed method.

*BERT is contextual, so the embedding of a word represents not only the word itself but also its surroundings. Your method considers multi-attribute relations; do these "relations" duplicate information already contained in the BERT word embeddings?

This makes your method a bit like weighting the word embeddings.

 

*Why are the threshold values α and β in the linear function F(·) set to 0.7 and 0.3, respectively? If these values were not otherwise justified, small-scale experiments should first be conducted to determine a suitable range for the thresholds.
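A minimal sketch of such a small-scale threshold sweep. The piecewise-linear gate `make_F` and the grid-search helper are illustrative assumptions of mine, not the authors' actual implementation; in practice, `score` would run the model on a held-out validation split and return accuracy.

```python
import itertools

def make_F(alpha, beta):
    """Assumed piecewise-linear gate F(.): outputs 1 above alpha,
    0 below beta, and interpolates linearly in between."""
    def F(x):
        if x >= alpha:
            return 1.0
        if x <= beta:
            return 0.0
        return (x - beta) / (alpha - beta)
    return F

def sweep(score, alphas, betas):
    """Grid-search (alpha, beta) pairs with beta < alpha,
    returning the pair that maximizes score(F)."""
    best_pair, best_score = None, float("-inf")
    for alpha, beta in itertools.product(alphas, betas):
        if beta >= alpha:  # keep the thresholds ordered
            continue
        s = score(make_F(alpha, beta))
        if s > best_score:
            best_pair, best_score = (alpha, beta), s
    return best_pair, best_score
```

With a real validation-accuracy callback in place of `score`, even a coarse grid (e.g., alphas = [0.6, 0.7, 0.8], betas = [0.2, 0.3, 0.4]) would show whether (0.7, 0.3) is a robust choice or merely an unexamined default.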

 

*From Tables 1 and 2, HFGNN-Proto seems to be a competitive model. My question is: do you have any experiments using the multiple sub-attribute relationships as a common feature in standard text classification problems? In other words, is this method only (significantly) beneficial to few-shot learning? If it is also useful in general text classification tasks, that would be a good "plus" here.

 

*In Section 5.4, you added experiments to verify the validity of the proposed method, which is good. However, in all three tables the accuracy of the proposed method is 88.36%, so why compare these aspects across three separate tables? If you divide your method into three aspects and want to verify each of them, then each comparison should simply be "without this aspect" versus "with this aspect", presented in a single table.

Author Response

Please see the attachment.

Author Response File: Author Response.pdf

Round 2

Reviewer 1 Report

All issues have been successfully addressed by the authors.

Reviewer 3 Report

The authors have answered my questions very carefully. I have no further questions.
