Article
Peer-Review Record

An Efficient Data Augmentation Method for Automatic Modulation Recognition from Low-Data Imbalanced-Class Regime

Appl. Sci. 2023, 13(5), 3177; https://doi.org/10.3390/app13053177
by Shengyun Wei 1, Zhaolong Sun 2, Zhenyi Wang 1, Feifan Liao 1, Zhen Li 1 and Haibo Mi 1,*
Reviewer 1: Anonymous
Reviewer 2: Anonymous
Reviewer 3:
Submission received: 4 February 2023 / Revised: 24 February 2023 / Accepted: 26 February 2023 / Published: 1 March 2023
(This article belongs to the Section Computing and Artificial Intelligence)

Round 1

Reviewer 1 Report

The paper presents a novel method for the automatic data augmentation of radio signals, called SigAugment, to enhance the learning process of automatic modulation recognition (AMR). Additionally, five new label-preserving transformations for modulated signals are proposed. However, the transformation "flip and channel shuffle" is a combination of flip and the newly proposed channel shuffle, which I would not count as a new transformation but rather as a combination.
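For illustration, the combination in question can be sketched as follows, assuming "flip" reverses the time axis and "channel shuffle" permutes the I/Q channels (hypothetical forms; the paper's exact definitions may differ):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal((2, 128))  # an I/Q signal: 2 channels x 128 samples

# Assumed forms of the two base transformations (not necessarily the
# paper's exact definitions): "flip" reverses the time axis, and
# "channel shuffle" permutes the channels (with 2 channels, a swap).
def flip(sig):
    return sig[:, ::-1]

def channel_shuffle(sig):
    return sig[::-1, :]

# "Flip and channel shuffle" is then just the composition of the two:
combined = channel_shuffle(flip(x))
assert np.array_equal(combined, x[::-1, ::-1])
```

Under these assumptions the combined transform is fully determined by its two constituents, which is the reviewer's point.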

My biggest concern, however, is that the selection strategy of the transformations applied by SigAugment is not thoroughly explained. While it is explained how many different transformations can be applied, I do not understand how the algorithm chooses which transformation to apply.


l.135 What do you mean by "poorer"? 

l.174 Data augmentation methods [are] popular...

l.175 Please explain "mel-spectrum"

l.180 Please explain "wearable sensor data". How can one wear data?

l.200, l.201 Both sentences contain the phrase "... in the absence of expert knowledge...". Please rephrase.

Fig. 1 Are there really differences from the original image for Jit, Rod, and Flip? Then again, Inv and SP look the same. Please check.

l.269 According to which strategy? Please explain.

l.273 Do you mean SigAugment-C? 

l. 336-344 Same sentence as l.327-336. Remove!

l.342 What do you mean by a "two-layer" LSTM? Do you mean 2 LSTMs? As far as I know, LSTMs have a fixed structure in which you can't simply choose the number of layers.

Fig. 2 Why such large batch sizes? I have never read a paper before where the batch size was higher than 128. Are these large batch sizes also used in the original papers, and are they common for the presented use-case?

Author Response

Please see the attachment.

Author Response File: Author Response.pdf

Reviewer 2 Report

1. The description of the datasets in Section 4.1 is a little confusing. For example, in Lines 300-302, “When β=50, the number of samples in the training set for modulation categories 8PSK, AM-DSB, BPSK, CPFSK, GFSK, 4PAM, 16QAM, 64QAM, QPSK, and WBFM is 140, 280, 420, 560, 700, 840, 980, 1120, 1260, and 1400”, it seems that the most frequent and least frequent classes are WBFM and 8PSK, respectively, so the value of the imbalance factor β should be 10 (1400/140 = 10) instead of 50.

Besides, in order to distinguish the set used during the training session from the “training set” in Table 1, the description of “training set” in Lines 300-302 should be changed.


2. The dataset “2016A-1” is clear: the numbers of training, validation, and testing samples are 5250, 1750, and 7000 per class, respectively.

The imbalance factor β of dataset “2016A-10” is 10. It seems that Nmax is 5250 and Nmin is 525. Therefore, the number of samples in the training set for modulation categories 8PSK, AM-DSB, BPSK, CPFSK, GFSK, 4PAM, 16QAM, 64QAM, QPSK, and WBFM is 525, 1050, 1575, 2100, ..., 4725, 5250, respectively, and the total number of samples in the training set is 28875 (9625 for the validation set).

However, when the imbalance factor β of dataset “2016A-20” is 20, the values of Nmax and Nmin in this dataset both change. If Nmax is still 5250, then Nmin should be 262.5 and the number of samples in the training set is about 27562.5. The same situation also occurs in “2016A-50”. In order to make Section 4.1 more readable, please illustrate these situations more clearly; the authors are encouraged to provide a table showing the number of samples per class in the training, validation, and testing sets for the three different datasets.
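The arithmetic in the two points above can be checked with a short sketch, assuming (as one reading of the quoted numbers) per-class training counts linearly spaced from Nmin = Nmax/β up to Nmax over the ten modulation classes:

```python
def class_counts(n_max, beta, n_classes=10):
    """Per-class training counts, linearly spaced from n_max/beta to n_max.

    This spacing is an assumption that reproduces the counts quoted in
    the review (525, 1050, ..., 5250 for beta = 10); the paper may use
    a different rule.
    """
    n_min = n_max / beta
    step = (n_max - n_min) / (n_classes - 1)
    return [n_min + i * step for i in range(n_classes)]

print(class_counts(5250, 10))        # 525.0, 1050.0, ..., 5250.0
print(sum(class_counts(5250, 10)))   # 28875.0
# For beta = 20 the minimum count becomes fractional, as noted above:
print(class_counts(5250, 20)[0])     # 262.5
```

Under this assumption the β = 20 and β = 50 cases cannot yield integer per-class counts without some rounding rule, which is why a per-dataset table would help.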


3. In Table 4, the number “81.04” should be marked in red instead of “80.18”.


4. In terms of the proposed data augmentation method, the authors provide experiments to show its effectiveness. However, the authors should conduct more experiments in which the low-data modulation categories are not constant. For example, if Categories A, B, and C are low-data and Categories D and E are high-data, then in the next experiment Categories A, C, and E could be low-data and Categories B and D high-data.

I know the number of these experiments is huge. If possible, two or three extra experiments would be welcome and useful for researchers and the community. The results could be provided briefly in the Appendix.

Author Response

Please see the attachment.

Author Response File: Author Response.pdf

Reviewer 3 Report

The authors present a practical automatic data augmentation method for radio signals, called SigAugment, which incorporates eight individual transformations and effectively improves the performance of AMR tasks without additional search. The reported research is of interest to the community. Some suggestions are listed below to improve the manuscript's quality (major revision):

1. The manuscript's motivations should be further highlighted, e.g., what problems exist in the previous works, and how are these problems solved? The authors may consider analyzing the problems of the previous works and how the proposed method addresses them. Please explain this.

2. The research gaps in the abstract and introduction should be clearly expressed. Please rewrite this part.

3. The authors must clearly explain the difference(s) between the proposed method and similar works in the introduction. The authors should further highlight the manuscript's innovations and contributions.

4. In Section 3 (Methods), the idea is unclear; please rewrite it and provide a flowchart of the proposed method.

5. In Line 375, α_t = 0.25 and γ = 2.0: how were these values determined? According to what?
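These values match the widely used defaults of the focal loss (α_t = 0.25, γ = 2.0); assuming that is the loss in question, a minimal sketch of its binary form:

```python
import math

def focal_loss(p, y, alpha_t=0.25, gamma=2.0):
    """Binary focal loss for a predicted probability p and label y in {0, 1}.

    alpha_t weights the positive class and gamma down-weights easy
    examples; the defaults are the values the review asks about, on the
    assumption that the manuscript uses the standard focal loss.
    """
    p_t = p if y == 1 else 1.0 - p
    alpha = alpha_t if y == 1 else 1.0 - alpha_t
    return -alpha * (1.0 - p_t) ** gamma * math.log(p_t)

# A confident correct prediction contributes far less loss than a
# confident wrong one, which is the point of the (1 - p_t)^gamma factor:
print(focal_loss(0.9, 1))  # small
print(focal_loss(0.1, 1))  # much larger
```

With γ = 0 this reduces to α-weighted cross-entropy, so γ is what controls the focus on hard examples.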

6. One key background of this manuscript is advanced techniques. Thus, the Introduction and/or related-work section could be extended to incorporate additional discussion of advanced techniques, e.g., https://doi.org/10.3389/fendo.2022.1057089; https://doi.org/10.1016/j.ins.2022.12.068; https://doi.org/10.1007/s12652-021-03516-y; https://doi.org/10.1016/j.ymssp.2022.109422, and so on. This could set the scene and background for the subsequent discussions in this manuscript.

7. There are some grammatical errors in the paper. Please check carefully for clerical errors and formatting issues.


Author Response

Please see the attachment.

Author Response File: Author Response.pdf

Reviewer 4 Report

The paper's organization is nice. I just have one comment on the training/validation loss: the authors must check it carefully, as the loss value is high.

Author Response

Please see the attachment.

Author Response File: Author Response.pdf

Round 2

Reviewer 2 Report

I thank the authors for revising the manuscript. The authors have properly addressed all my questions and, therefore, my recommendation is to accept the article.

Reviewer 3 Report

All my previous concerns have been accurately addressed. I think that this paper can be accepted.
