Article
Peer-Review Record

Diffusion-Based Radio Signal Augmentation for Automatic Modulation Classification

Electronics 2024, 13(11), 2063; https://doi.org/10.3390/electronics13112063
by Yichen Xu 1, Liang Huang 1,*, Linghong Zhang 1, Liping Qian 2 and Xiaoniu Yang 3,4
Reviewer 1: Anonymous
Reviewer 2: Anonymous
Reviewer 3: Anonymous
Submission received: 24 April 2024 / Revised: 17 May 2024 / Accepted: 21 May 2024 / Published: 25 May 2024
(This article belongs to the Special Issue Recent Advances of Cloud, Edge, and Parallel Computing)

Round 1

Reviewer 1 Report

Comments and Suggestions for Authors

This paper is well-written, and I only have a few minor comments.

1. Figure 6 needs to be redesigned with enlarged fonts.

2. English should be further improved.

3. Although this paper only considers CPFSK, AM-DSB, GFSK, MPSK, MQAM, and WBFM modulation schemes, it should provide an overview of various modulation techniques currently available, including chaotic modulation [1] and OTFS [2] modulation.

[1] DOI: 10.1109/TWC.2022.3192347

[2] DOI: 10.1109/JIOT.2021.3132606

4. Please evaluate the complexity of the proposed solution.

5. Are the data sets used in this work designed by the authors?

6. What is the SNR range applicable to the proposed algorithm?

Comments on the Quality of English Language

English should be further improved.

Author Response

Please see the attachment.

Author Response File: Author Response.pdf

Reviewer 2 Report

Comments and Suggestions for Authors

1) The paper claims to propose a new algorithm, DiRSA, but the conceptual differences between your method and existing methods such as DDPM are insufficiently explained. Please describe more clearly how DiRSA goes beyond a direct application of diffusion models to signal processing, especially in addressing the unique challenges of automatic modulation classification.

 

2) The paper lacks a comprehensive comparison with state-of-the-art methods. It should be compared not only with traditional augmentation techniques such as rotation and flipping, but also with other advanced machine learning methods applied to similar problems in the recent literature.

 

3) The description of the DiRSA algorithm in the paper, especially the denoising process and the prompt-based signal generation, is too simplistic. It is crucial to provide detailed pseudocode or step-by-step algorithm details for reproducibility, including the noise-level configuration and the precise mechanism by which prompt words control the diffusion process; the sketch below illustrates the level of detail expected.
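Purely as an illustration of that level of detail, a generic prompt-conditioned DDPM reverse (denoising) loop could be summarized as follows. This is not the authors' DiRSA implementation; the denoising network, prompt embedding, and noise schedule are hypothetical placeholders.

import numpy as np

def ddpm_reverse_sample(denoise_net, prompt_embedding, signal_shape, betas, rng=None):
    # Generate one I/Q signal by iterating the DDPM reverse (denoising) process,
    # conditioning each step on a prompt embedding (e.g., the target modulation class).
    if rng is None:
        rng = np.random.default_rng()
    alphas = 1.0 - betas
    alpha_bars = np.cumprod(alphas)

    x = rng.standard_normal(signal_shape)  # start from pure Gaussian noise x_T
    for t in reversed(range(len(betas))):
        eps_hat = denoise_net(x, t, prompt_embedding)     # predicted noise at step t
        coef = betas[t] / np.sqrt(1.0 - alpha_bars[t])
        mean = (x - coef * eps_hat) / np.sqrt(alphas[t])  # posterior mean of x_{t-1}
        noise = rng.standard_normal(signal_shape) if t > 0 else 0.0
        x = mean + np.sqrt(betas[t]) * noise              # sample x_{t-1}
    return x

A comparable description in the manuscript should state how the prompt embedding is obtained from the prompt words and how the betas (noise levels) are configured.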


Overall, the paper is relatively rough and rudimentary, amounting to a direct application of existing methods. It reads more like an experimental report than a specialized research contribution, and it requires more in-depth investigation of new mechanisms and methods for diffusion models rather than merely substituting a different dataset to produce a publication of little value. My opinion is that this paper cannot be accepted.

Comments on the Quality of English Language

Technical terms such as "signal augmentation," "data augmentation," and "dataset extension" are used interchangeably in the paper without clear definitions or distinctions.

Author Response

Please see the attachment.

Author Response File: Author Response.pdf

Reviewer 3 Report

Comments and Suggestions for Authors

- There is one instance of high similarity (> 6%) from a single source (https://doi.org/10.1109/TCCN.2018.2835460) which needs to be reduced to 2 or 3%. Please request the assistant editor to send the similarity report if you are unable to access it, and make the necessary revisions to lower it. Could you elaborate on the exact differences between your study and theirs?

 

- The rationale behind using prompt-words for denoising and data augmentation is somewhat unclear. Could you provide a step-by-step explanation of how the algorithm works? Providing a single example with the exact prompt-word before and after would enhance the clarity of the model.

 

- The parameter settings, as well as implementation details of each deep learning model used, such as LSTM and CNN, should be presented to increase the reproducibility of the study.

 

- Please clarify the following sentence: “We use 80% and 5% of the RadioML2016.10A data as training datasets to simulate situations of full sets and small sets, and split the remaining 20% equally into validation and test datasets.” Why is there 5% and the remaining 20%? Does your dataset total 105%?

 

- Where does the prompt-words dataset come from? Is it publicly available?

 

- Figure 5 is too small. It is recommended to show the performance in the form of a table and provide the metrics being used, such as RMSE, MSE, etc.

 

- An additional comparison with general augmentation methods widely used and implemented in research and industry, such as rotation and flipping (sketched below for illustration), should be presented to highlight the performance of your proposed method against them.
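For concreteness, the general augmentations referred to above act directly on the raw I/Q samples. The following is only an illustrative sketch, not code from the manuscript under review; the example signal is synthetic.

import numpy as np

def rotate_iq(iq, k):
    # Rotate a complex I/Q sequence by k * 90 degrees in the constellation plane.
    return iq * (1j ** k)

def flip_iq(iq):
    # Conjugate (flip) the I/Q sequence, mirroring the constellation about the I axis.
    return np.conj(iq)

# Example: produce four augmented copies of one 128-sample signal.
rng = np.random.default_rng(0)
signal = rng.standard_normal(128) + 1j * rng.standard_normal(128)
augmented = [rotate_iq(signal, k) for k in (1, 2, 3)] + [flip_iq(signal)]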

Comments on the Quality of English Language

N/A

Author Response

Please see the attachment.

Author Response File: Author Response.pdf

Round 2

Reviewer 2 Report

Comments and Suggestions for Authors

The authors have made a good revision; the paper can be accepted after minor improvements to the English writing.

Comments on the Quality of English Language

A minor revision of the English with assistance from native-speaking researchers is recommended.

Reviewer 3 Report

Comments and Suggestions for Authors

The authors have addressed all previous concerns, and I have no further comments on the current version.

Comments on the Quality of English Language

NA
