Article
Peer-Review Record

Shared Graph Neural Network for Channel Decoding

Appl. Sci. 2023, 13(23), 12657; https://doi.org/10.3390/app132312657
by Qingle Wu 1, Benjamin K. Ng 1,*, Chan-Tong Lam 1, Xiangyu Cen 1, Yuanhui Liang 1 and Yan Ma 1,2
Reviewer 1: Anonymous
Reviewer 2: Anonymous
Reviewer 3: Anonymous
Submission received: 7 October 2023 / Revised: 9 November 2023 / Accepted: 17 November 2023 / Published: 24 November 2023

Round 1

Reviewer 1 Report

Comments and Suggestions for Authors

1. Although the analysis in this paper is solid, it would be interesting to discuss the practical insights that can be drawn from the analysis and the obtained results.

2. The discussion section is basic. Please add relevant information from a scientific point of view.

3. Drawbacks, limitations, and future work must be added.

Comments on the Quality of English Language

Minor editing of English language required.

Author Response

Original Manuscript ID: applsci-2676596  

Original Article Title: Shared Graph Neural Network for Channel Decoding

To: Applied Sciences Editor

Re: Response to reviewers

Dear Editor and Reviewers,

 

Thank you for allowing a resubmission of our manuscript, with an opportunity to address the reviewers' comments. We thank all reviewers and the Editor for their comments and suggestions for further improving the manuscript.

We are uploading (a) our point-by-point response to all the comments (below), and (b) an updated manuscript with texts in RED indicating changes.

Best regards,

Qingle Wu
Overall Comments for Reviewer#1

 

Reviewer#1, Major Issue # 1: Although the analysis in this paper is solid, it would be interesting to discuss the practical insights that can be drawn from the analysis and the obtained results.

 

Author response: Thank you for pointing this out. We have added this introduction and discussion to the manuscript.

Author action: We have updated the discussion related to Figures 3-7 on pp. 10-13 in Section 4 of the updated manuscript.

 

Reviewer#1, Major Issue # 2: The discussion section is basic. Please add relevant information from a scientific point of view.

Author response: We have added to the discussion in the manuscript.

Author action: We have updated the discussion related to Figures 3-7 on pp. 10-13 in Section 4 of the updated manuscript.

Reviewer#1, Major Issue # 3: Drawbacks, limitations, and future work must be added.

Author response: We have added a section on drawbacks and open issues.

Author action: Section 5 is added in the updated manuscript on p. 13. Moreover, we added descriptions of the drawbacks, limitations, and future work, listed as follows:

• A possible way to implement a universal GNN decoder is to train the GNN decoder weights on multiple FEC codes in parallel.
• The decoding complexity of the proposed SGNN is higher than that of BP; other complexity-reduction methods, such as pruning and quantization, can be combined to further reduce it.
• Extensions to non-AWGN channels and other modulation methods are possible.

 

Author Response File: Author Response.pdf

Reviewer 2 Report

Comments and Suggestions for Authors

The paper considers the problem of neural channel decoding based on graph neural networks (GNNs). In particular, the authors propose GNN-based channel decoding algorithms with shared parameters. As a result, it was possible to reduce the data-storage complexity of GNN-based channel decoding, while the BER performance was only slightly worse.

The paper is correctly structured.

The paper contains significant theoretical results and new and significant experimental results.

However, it could be improved:

1. It was suggested that the authors should further improve the section of the abstract.

2. The paper analyzes the related literature, but it is also necessary to include works for 2023, for example, https://doi.org/10.3390/electronics12132973, https://doi.org/10.1016/j.phycom.2023.102194.

3. It is necessary to improve the quality and readability of Figures 3-6.

4. In the paper, it would be good to provide quantitative indicators of reducing the complexity of data storage when decoding channels based on the proposed algorithms.

5. In the paper, the authors do not indicate the range of specific practical tasks that would be effective in using the proposed algorithms. In particular, it is necessary to investigate the impact of BER deterioration on the overall solvability of such problems.

6. Please add a subsection and one or more future works to enhance your presentation.

Comments for author File: Comments.pdf

Author Response

Original Manuscript ID: applsci-2676596  

Original Article Title: Shared Graph Neural Network for Channel Decoding

To: Applied Sciences Editor

Re: Response to reviewers

Dear Editor and Reviewers,

 

Thank you for allowing a resubmission of our manuscript, with an opportunity to address the reviewers' comments. We thank all reviewers and the Editor for their comments and suggestions for further improving the manuscript.

We are uploading (a) our point-by-point response to all the comments (below), and (b) an updated manuscript with texts in RED indicating changes.

Best regards,

Qingle Wu
Overall Comments for Reviewer#2

 

Reviewer#2, Major Issue # 1: It was suggested that the authors should further improve the section of the abstract.

Author response:  Thanks for pointing this out. We have updated the abstract in the manuscript.

 

Author action: We have updated the abstract in the updated manuscript.

 

Reviewer#2, Major Issue # 2:   The paper analyzes the related literature, but it is also necessary to include works for 2023, for example, https://doi.org/10.3390/electronics12132973, https://doi.org/10.1016/j.phycom.2023.102194.

Author response: We have added the work for 2023 in Section 1.

Author action:  We have updated the introduction in the updated manuscript.

Reviewer#2, Major Issue # 3:  It is necessary to improve the quality and readability of Figures 3-6.

Author response: We have improved the quality of the figures and, because there were too many lines, removed some BER curve details to make the figures clearer.

Author action: Figures 3-7 are modified in the updated manuscript on pp. 10-13.

Reviewer#2, Major Issue # 4:  In the paper, it would be good to provide quantitative indicators of reducing the complexity of data storage when decoding channels based on the proposed algorithms.

Author response: We discuss the complexity of the algorithm in Section 4.4, where the number of weights determines the size of the storage space. Table 2 provides quantitative indicators of the reduction in data-storage complexity. Because the shared GNN does not increase the multiplicative complexity, we simply list the number of multiplications. Reducing the computational complexity is also a direction we will study in the future.

Author action: We have listed the quantitative indicators of the reduction in data-storage complexity, and we emphasize that reducing the computational complexity is one of our future research directions.
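To make the storage argument concrete, here is a minimal back-of-the-envelope sketch (our own illustration with hypothetical numbers, not the figures reported in Table 2): a conventional GNN decoder stores a separate weight set per message-passing iteration, while the shared GNN reuses a single set across all iterations.

% Hypothetical numbers for illustration only (not taken from Table 2)
W_per_iter = 1e4;              % assumed weights per decoding iteration
T = 10;                        % assumed number of decoding iterations
gnn_storage = T * W_per_iter;  % unshared GNN: storage grows linearly with T
sgnn_storage = W_per_iter;     % shared GNN (SGNN): independent of T
fprintf('Storage reduction: %.0fx\n', gnn_storage / sgnn_storage);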

Reviewer#2, Major Issue # 5:  In the paper, the authors do not indicate the range of specific practical tasks that would be effective in using the proposed algorithms. In particular, it is necessary to investigate the impact of BER deterioration on the overall solvability of such problems.

Author response: Thank you very much for your comment. We have added the benefits of reduced memory storage for hardware implementations and IoT applications in the revised manuscript. In addition, we consider the possible impact of the slight performance degradation.

Author action: The benefits of reduced memory storage are added in Section 4.4 of the updated manuscript on p. 13. Moreover, we added the recommendation in the conclusion.

Reviewer#2, Major Issue # 6:  Please add a subsection and one or more future works to enhance your presentation.

Author response: We have added a section on open issues.

Author action: Section 5 is added in the updated manuscript on p. 13. Moreover, we added descriptions of the drawbacks, limitations, and future work, listed below:

• A possible way to implement a universal GNN decoder is to train the GNN decoder weights on multiple FEC codes in parallel.
• The decoding complexity of the proposed SGNN is higher than that of BP; other complexity-reduction methods, such as pruning and quantization, can be combined to further reduce it.
• Extensions to non-AWGN channels and other modulation methods are possible.

 

 

Reviewer 3 Report

Comments and Suggestions for Authors

In section 1, motivation must be extended and improved.

Add references for equations 1 and 2.

Figs. 3-5 are for BPSK modulation format. Expand results for M-PSK constellations for generalization purposes.

Given the FEC limit, add analysis for BER curves.

Results for more realistic channels (time-variant and/or frequency-selective) must be presented. At present, only an AWGN channel is considered.

Computational complexity must be added and discussed in terms of time execution and memory employed.

Justify and explain the BER results using other detailed metrics, such as constellations, spectra, etc.

Add future works with details.

Add a section of abbreviations.

Comments on the Quality of English Language

None

Author Response

Original Manuscript ID: applsci-2676596  

Original Article Title: Shared Graph Neural Network for Channel Decoding

To: Applied Sciences Editor

Re: Response to reviewers

Dear Editor and Reviewers,

 

Thank you for allowing a resubmission of our manuscript, with an opportunity to address the reviewers' comments. We thank all reviewers and the Editor for their comments and suggestions for further improving the manuscript.

We are uploading (a) our point-by-point response to all the comments (below), and (b) an updated manuscript with texts in RED indicating changes.

Best regards,

Qingle Wu
Overall Comments for Reviewer#3

 

Reviewer#3, Major Issue # 1: In section 1, motivation must be extended and improved.

Author response: Thank you for pointing this out. Complexity is an implementation issue we care about in neural-network decoding algorithms, and graph neural networks are no exception. Therefore, our motivation is to reduce complexity as much as possible, including both storage complexity and computational complexity. We have added this motivation to the manuscript.

Author action: We have updated the motivation on p. 2 in Section 1 of the updated manuscript.

 

Reviewer#3, Major Issue # 2: Add references for equations 1 and 2.

Author response: We have added references for Equations 1 and 2.

Author action: Reference [19] is added in the updated manuscript on p. 2 in Section 2.

Reviewer#3, Major Issue # 3:  Figs. 3-5 are for BPSK modulation format. Expand results for M-PSK constellations for generalization purposes.

 

Author response: We have simulated the SGNN algorithm with QPSK modulation.

Author action: Figure 7 is added in the updated manuscript on p. 13. Due to time constraints, we only simulated one codeword.

Reviewer#3, Major Issue # 4:  Given the FEC limit, add analysis for BER curves.

Author response:

Shannon limit calculations:

E_b is defined as the energy per information bit, and N_0 is the power of the complex Gaussian noise, so for real noise, \sigma^2 = N_0/2. E_c is the energy per code bit, with E_c = r E_b.

The Shannon limit is obtained from the binary-input AWGN capacity (reconstructed here from the Matlab integrand below):

C = \int_{-\infty}^{\infty} \frac{e^{-z^2/2}}{\sqrt{2\pi}} \left[ 1 - \log_2\!\left( 1 + e^{-2\rho - 2\sqrt{\rho}\, z} \right) \right] \mathrm{d}z, \qquad \rho = E_c/\sigma^2,

and the limit is the E_b/N_0 at which C equals the code rate r.

Computing the capacity takes a few lines of Matlab code:

rho = Ec / sigma^2;   % SNR per code bit; Ec and sigma must be set beforehand
f = @(z) exp(-z.^2/2)/sqrt(2*pi) .* (log2(2) - log2(1+exp(-2*rho-2*sqrt(rho)*z)));
Zmin = -9; Zmax = 9;  % the Gaussian weight makes tails beyond |z| = 9 negligible
C = quadl(f, Zmin, Zmax);

where rho = Ec/sigma^2. With sigma^2 = 1, Eb/N0 = Ec/(2r); Eb/N0 is then converted to dB for presentation purposes.
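As a usage illustration (our own sketch, not taken from the manuscript), the Shannon-limit Eb/N0 for a given code rate r can be located by solving C(rho) = r, assuming sigma^2 = 1 and the same vectorized integrand as above:

r = 0.5;                                   % example code rate (assumed)
Cap = @(rho) integral(@(z) exp(-z.^2/2)/sqrt(2*pi) .* ...
    (1 - log2(1+exp(-2*rho-2*sqrt(rho)*z))), -9, 9);
rho_star = fzero(@(rho) Cap(rho) - r, 1);  % rho = Ec/sigma^2 at the limit
EbN0_dB = 10*log10(rho_star/(2*r));        % approx. 0.19 dB for r = 1/2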

We have added the FEC limit to each figure.

Author action: The Shannon limit is added to each of Figures 3-7 in the updated manuscript on pp. 10-12.

Reviewer#3, Major Issue # 5: Results for more realistic channels (time-variant and/or frequency-selective) must be presented. Actually, only an AWGN channel is considered.

Author response: We have simulated the SGNN algorithm with different time-variant channels and a flat-fading channel, but there are still some bugs that need to be resolved, so this will be our next step.

Author action: We will continue this work in the future.

Reviewer#3, Major Issue # 6: Computational complexity must be added and discussed in terms of time execution and memory employed.

Author response: We have added the computational complexity in Section 4.4.

Author action: The computational complexity is added in Section 4.4 of the updated manuscript.

Reviewer#3, Major Issue # 7:  Justify and explain BER results, by using other deep metrics, such as constellations, spectrums, etc.

Author response: Thank you very much for your comment. We have added performance results under QPSK modulation in the revised manuscript and analyzed the performance for different constellation mappings, namely BPSK and QPSK.

Author action: QPSK modulation results are added in Figure 5 of the updated manuscript.

Reviewer#3, Major Issue # 8:  Add future works with details.

Author response: We have added a section on discussion and open issues.

Author action: Section 5 is added in the updated manuscript on p. 13. Moreover, we added descriptions of the drawbacks, limitations, and future work, listed below:

• A possible way to implement a universal GNN decoder is to train the GNN decoder weights on multiple FEC codes in parallel.
• The decoding complexity of the proposed SGNN is higher than that of BP; other complexity-reduction methods, such as pruning and quantization, can be combined to further reduce it.
• Extensions to non-AWGN channels and other modulation methods are possible.

Reviewer#3, Major Issue # 9:  Add a section of abbreviations.

Author response: We have used abbreviations in the manuscript. Papers in this format do not include a separate abbreviations section.

Author action: We have used abbreviations throughout the updated manuscript.

 

Author Response File: Author Response.pdf

Round 2

Reviewer 2 Report

Comments and Suggestions for Authors

Accept in present form

Reviewer 3 Report

Comments and Suggestions for Authors

The paper can be accepted
