Peer-Review Record

Learning Data-Driven Propagation Mechanism for Graph Neural Network

Electronics 2023, 12(1), 46; https://doi.org/10.3390/electronics12010046
by Yue Wu 1, Xidao Hu 1, Xiaolong Fan 2, Wenping Ma 3,* and Qiuyue Gao 1
Reviewer 1:
Reviewer 2: Anonymous
Reviewer 3: Anonymous
Submission received: 21 November 2022 / Revised: 12 December 2022 / Accepted: 20 December 2022 / Published: 22 December 2022
(This article belongs to the Special Issue Applications of Computational Intelligence)

Round 1

Reviewer 1 Report

The authors propose a data-driven propagation mechanism to propagate information between layers.

The article is well written and correctly presented, the experiments and setups are well designed, and the conclusions are sound.

However, please compare your results with those in the mainstream research in the conclusion section: either move some lines from [285-301] into the conclusion or rephrase them there.

Author Response

Please see the attachment.

Author Response File: Author Response.pdf

Reviewer 2 Report

The paper is well written and structured. The authors developed a data-driven propagation mechanism to adaptively learn the connections between different layers, enabling nodes to selectively fuse low-order local structural information while acquiring high-order neighborhood information. The introduction is relevant and theory-based. The methods are generally appropriate. Clarification of a few details should be provided.
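For concreteness, here is a minimal sketch (in PyTorch, with illustrative names) of what a layer-propagation mechanism with data-driven connection strengths could look like; the softmax-weighted fusion and all identifiers are assumptions for illustration, not the authors' implementation:

```python
import torch
import torch.nn as nn

class LearnablePropagation(nn.Module):
    """Illustrative sketch: fuse layer outputs with learnable connection weights."""

    def __init__(self, num_layers: int, dim: int):
        super().__init__()
        self.layers = nn.ModuleList(nn.Linear(dim, dim) for _ in range(num_layers))
        # One learnable weight per layer output; the softmax below turns these
        # into connection strengths that are learned from the data.
        self.conn = nn.Parameter(torch.ones(num_layers))

    def forward(self, x, adj):
        outs, h = [], x
        for layer in self.layers:
            h = torch.relu(layer(adj @ h))  # one propagation (message-passing) step
            outs.append(h)
        w = torch.softmax(self.conn, dim=0)
        # Weighted fusion of low-order (early) and high-order (late) information.
        return sum(wi * oi for wi, oi in zip(w, outs))

# Toy usage: 5 nodes, 8 features; the identity matrix stands in for a
# normalized adjacency matrix.
model = LearnablePropagation(num_layers=4, dim=8)
out = model(torch.randn(5, 8), torch.eye(5))
print(out.shape)  # torch.Size([5, 8])
```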

1. Line 173: How should these two hyperparameters be set? How do they affect performance?

2. The method relies on the validation set to obtain the network structure with the best performance. How does the validation set affect the optimization process? In other words, how robust is the method to different validation sets? In all the experiments, the ratio of the validation set is kept the same. How does the amount of data in the validation set affect performance?

3. Equations 4, 9, and 12 all have a parameter α. Is it the same parameter in each?

4. What is the computational efficiency of the proposed method compared with the other methods?

5. The performance is evaluated using accuracy. Are the classes equally distributed in all the datasets, or is there a class imbalance problem? If so, what is the performance of the network in terms of metrics other than accuracy, such as precision, recall, and F1 score?
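Regarding point 5, one quick, illustrative way to inspect the class distribution and report per-class metrics beyond accuracy (the labels below are hypothetical, not the paper's data):

```python
from collections import Counter
from sklearn.metrics import classification_report

# Hypothetical labels and predictions, deliberately imbalanced.
y_true = [0, 0, 0, 0, 1, 1, 2]
y_pred = [0, 0, 0, 1, 1, 1, 2]

print(Counter(y_true))                                   # class distribution: exposes imbalance
print(classification_report(y_true, y_pred, digits=3))   # precision, recall, F1 per class
```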

Author Response

Please see the attachment.

Author Response File: Author Response.pdf

Reviewer 3 Report

In this paper, the authors propose a data-driven propagation mechanism to adaptively propagate information between layers in a Graph Neural Network. They perform experiments on seven benchmark datasets and claim that their proposed method is effective.

I find the approach in the paper interesting, but in my opinion, too many of its advantages rely on its performance on datasets created by the authors. GCNII outperforms GraphSAP on Cora and Citeseer and virtually ties on PubMed, while losing clearly on the four datasets created by the authors (61.4 vs. 92.3 on Amazon Computers!). I would also note that the four new datasets are in fact closer to two new datasets, given that they are subsets drawn from two sources. GraphSAP has a best accuracy of 81.5 on the public datasets, but a much better one of 92.3 on the non-public datasets, where it beats all the other approaches. This could imply that it is exceptionally well suited to the two data sources from which you have built your non-public datasets.

The paper would benefit from a careful check of English usage and grammar. For example, in lines 17-19, "As a consequence, some important operations (e.g., convolutions[1]) that can be easily applied to the image domain[2] but are difficult to apply to the graph domain.", the first and second parts of the sentence do not fit together. Sometimes the Cora dataset is written as "cora", etc.

In line 52, which dataset is the Citation Network dataset? Is it one of those appearing in lines 223-224?

In line 98: do you mean "the excellent abilities of CNN"?

Lines 98-100 and 104-105 are the same.

Line 108 is confusing.

Please elaborate on the assertion in lines 123-124, "For example, if you have a 4-layer network, then mathematically there are 15 combinations of layer-to-layer connections in total.". If this sentence is intended as an example, a diagram with the 15 possible combinations would help.
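For what it is worth, two plausible readings of that count both give 15 (the quoted sentence does not say which is intended), as this quick check shows:

```python
from itertools import combinations

layers = 4
# Reading (a): non-empty subsets of the 4 layer outputs that could be fused.
print(2 ** layers - 1)                                # 15
# Reading (b): pairwise connections among 6 nodes (input + 4 layers + output).
print(len(list(combinations(range(layers + 2), 2))))  # 15
```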

In lines 132-134, N has two meanings: the number of nodes in the graph and the function that generates sets of neighbors.
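One conventional way to resolve the clash (a suggestion only, not the authors' notation) is to reserve distinct symbols for the two roles:

```latex
% n (or |V|) for the node count, \mathcal{N}(.) for the neighborhood function
\[
  n = |V|, \qquad \mathcal{N}(v) = \{\, u \in V : (u, v) \in E \,\}
\]
```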

Line 137: "everyv" is missing a space.

The citation of the Cora dataset should appear beside its first reference, in line 148.

Why are experiments with the Cora dataset described in lines 148-155, when this is the Background section?

In line 196, it would be clearer to write "L2P and our proposed GraphSAP".

In the caption of Figure 3, there is a reference to the Amazon Photo dataset, which is only described later in the paper.

In Figure 4, isn't Coauthor Physics missing?

In Table 2, why are there no results for Dense-GCN and Res-GCN on the Citeseer dataset, and for GraphSage on Amazon Computers?

Does Table 2 report the results with 2 layers?

Line 424: Please capitalize the citation correctly: "DeepGCNs: Can GCNs Go as Deep as CNNs?".

Author Response

Please see the attachment.

Author Response File: Author Response.pdf

Round 2

Reviewer 3 Report

I believe my main concerns have been addressed.
