Article
Peer-Review Record

Radar Intra–Pulse Signal Modulation Classification with Contrastive Learning

Remote Sens. 2022, 14(22), 5728; https://doi.org/10.3390/rs14225728
by Jingjing Cai 1, Fengming Gan 1, Xianghai Cao 2,*, Wei Liu 3 and Peng Li 1
Reviewer 1:
Reviewer 2: Anonymous
Reviewer 3:
Reviewer 4:
Submission received: 15 September 2022 / Revised: 28 October 2022 / Accepted: 8 November 2022 / Published: 12 November 2022

Round 1

Reviewer 1 Report

The article offers an interesting approach; it may benefit from one more round of review and refinement. Of particular interest to me is the influence of different types of noise on the proposed model. I propose publishing it after final minor corrections by the authors.

Author Response

Thanks for your supportive comments.

Reviewer 2 Report

- Summary
In this paper, the authors propose a radar signal modulation classification method based on contrastive learning.
Its significance lies in presenting a new CNN-based method for radar signal classification.

1. [line 264] A de-noising algorithm, of the kind frequently used in conventional radar processing, should be applied at the pre-processing step (see the sketch after this list).

2. [line 264] Since there is a huge gap between simulated data and real-world data, it seems necessary to acquire some real-world radar signal data in order to compare the algorithms objectively.

3. [line 277] The test dataset is too small for a meaningful performance comparison; over 1k samples per class are required for testing.

4. [line 209] For a radar signal, the signal strength is the only important quantity, and the RGB values carry no information at all. Using a single-channel grayscale image as the input is therefore more convincing (see the sketch after this list).
5. [line 263] Since classification accuracy depends strongly on the size and number of layers of the CNN model, the model size and inference speed should also be compared.


6. The proposed method and the compared methods should be tested not only on the simulated dataset but also on a real-world radar dataset.

7. There are many conventional radar signal modulation classification methods in the literature. The authors should compare against them in the experiments.
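
To make points 1 and 4 concrete, here is a minimal sketch that treats the time-frequency image (TFI) as a single-channel grayscale array and applies one conventional de-noising step. The median filter and the 224x224 shape are illustrative assumptions, not a prescription of the exact algorithm the authors should adopt.

```python
# Illustrative sketch for points 1 and 4 above: single-channel (grayscale)
# input plus a conventional de-noising step. The median filter and the
# 224x224 shape are assumptions, not the authors' settings.
import numpy as np
from scipy.ndimage import median_filter

tfi = np.random.rand(224, 224)         # placeholder grayscale TFI in [0, 1]
denoised = median_filter(tfi, size=3)  # suppresses isolated noise pixels
x = denoised[np.newaxis, ...]          # shape (1, H, W): one channel, no RGB
```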


Author Response

Please see the attachment.

Author Response File: Author Response.pdf

Reviewer 3 Report

This manuscript addresses classification among nine classes of radar intra-pulse signal modulation types, using a CNN with contrastive learning. In general, the manuscript reads well; however, I have a number of concerns.

1. The input data and data augmentation: the input data must be well understood. Radar images are not exactly like video images and should be treated with care. In Figure 3, the caption is incomplete; it should explain the different subfigures. The axes should be labelled with units and scales. The frequency axis probably covers too large a range, and the modulation signal is limited to only a few pixels, which is probably not optimal. Figures 3 and 4 in your Reference 2 are better examples.
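
For reference, a minimal sketch of how a time-frequency image of an LFM pulse can be generated, and why cropping to the occupied band matters; the sampling rate, pulse parameters, and STFT settings below are assumed for illustration and are not the authors' configuration.

```python
# Illustrative only: generate a time-frequency image (TFI) of an LFM pulse.
# All parameters (fs, f0, chirp rate, STFT window) are assumed values,
# not taken from the manuscript.
import numpy as np
from scipy.signal import spectrogram

fs = 100e6                       # sampling rate (assumed)
t = np.arange(0, 10e-6, 1 / fs)  # 10 us pulse
f0, k = 10e6, 2e12               # start frequency and chirp rate (assumed)
x = np.exp(1j * 2 * np.pi * (f0 * t + 0.5 * k * t**2))

f, tt, Sxx = spectrogram(x, fs=fs, nperseg=128, noverlap=120,
                         return_onesided=False)
tfi = 10 * np.log10(np.abs(Sxx) + 1e-12)  # log-magnitude TFI in dB
# Cropping f to the band actually occupied by the signal keeps the
# modulation from collapsing into only a few pixels, per the comment above.
```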

Most importantly, the authors should explain the data augmentation techniques in greater detail, for example the random resized crop. I assume this stretches one or both axes. For an LFM signal, stretching the frequency axis produces a radar signal with a different chirp rate, i.e. a totally different radar; it is not equivalent to a cat becoming a fat cat. Given that the classification is performed between classes of radar signals, this operation may be acceptable in this case. However, the data augmentation should be clearly explained, its impact on the radar signal modulations clearly outlined, and its meaning understood.
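
As an illustration of the operation in question, here is a minimal torchvision sketch of a random-resized-crop augmentation pipeline; the crop size, scale, and ratio parameters are assumptions, not the settings used in the manuscript.

```python
# Illustrative sketch of the augmentation being questioned here; the
# crop/scale/ratio parameters are assumptions, not the authors' settings.
import torch
from torchvision import transforms

augment = transforms.Compose([
    # Crops a random region and resizes it back to 224x224. Rescaling the
    # frequency axis of an LFM TFI changes its apparent chirp rate, so the
    # augmented image corresponds to a physically different radar signal.
    transforms.RandomResizedCrop(224, scale=(0.8, 1.0), ratio=(0.75, 1.33)),
    transforms.RandomHorizontalFlip(),
])

tfi = torch.rand(1, 224, 224)  # placeholder single-channel TFI
view = augment(tfi)            # one stochastic "view" for contrastive learning
```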

2. Comparisons between the different methods. The different methods are compared for sample sizes of 15 and 55. For CL-CNN, the sample size is defined as the number of samples used for fine-tuning. What does the sample size mean for the models that do not use fine-tuning (e.g., CNN-Xia, ResNet, SCRNN, SVM, KNN)? Is it the total number of samples used for training? If so, that is a very small number; how do you explain the good results obtained?

3. The analysis of the first simulation (lines 359-364). The authors conclude that the focal loss function solves the sample imbalance problem. But according to the description of the datasets, there is no sample imbalance, so this cannot be the conclusion. Perhaps CL-CNN is simply better than CL-CNN-CE.
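
For context, a minimal PyTorch sketch of the focal loss under discussion; gamma = 2 is an assumed value, and this is not claimed to be the authors' exact implementation.

```python
# Minimal multi-class focal-loss sketch, assuming gamma=2; not the
# authors' exact implementation.
import torch
import torch.nn.functional as F

def focal_loss(logits, targets, gamma=2.0):
    """FL(p_t) = -(1 - p_t)^gamma * log(p_t), averaged over the batch."""
    log_p = F.log_softmax(logits, dim=-1)
    log_pt = log_p.gather(1, targets.unsqueeze(1)).squeeze(1)
    pt = log_pt.exp()
    return (-(1 - pt) ** gamma * log_pt).mean()

# With gamma=0 this reduces to ordinary cross-entropy, i.e. the loss
# used by the CL-CNN-CE baseline.
```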

Also, CL-CNN-WP should be better explained. It is said that there is no pre-training. Does that mean the model is only trained, or "fine-tuned", with the 15 or 55 samples? For 85 samples it does a great job, almost as good as CL-CNN and CL-CNN-CE. If that model was trained with only 85 samples and did almost as well as the CL models, does that mean that all the pre-training complexity of CL-CNN is not truly required?

4. The text of Figure 5 says that the columns are the predicted class while the rows are the real class. This should be vice versa. It would help if the axes were labelled in the figure. In the text, the confusion should be between Frank and P3, not P4.
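
For reference, the convention followed by scikit-learn is that rows index the true class and columns the predicted class, shown here with hypothetical Frank/P3 labels:

```python
# Standard convention (e.g., scikit-learn): rows = true class,
# columns = predicted class. Labels here are hypothetical examples.
from sklearn.metrics import confusion_matrix

y_true = ["Frank", "P3", "P3"]
y_pred = ["P3", "P3", "Frank"]
cm = confusion_matrix(y_true, y_pred, labels=["Frank", "P3"])
# cm[i, j] = number of samples with true label i predicted as label j
print(cm)  # [[0 1]
           #  [1 1]]
```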

5. Please define the batch size. The symbol N is used both for the batch size and for the number of samples of a waveform. Also, "sample size" and "sample number" are both used; for clarity, pick only one of the two.

6. There are already many references, but I would like to see a few more basic references on a) radar intra-pulse signal modulation and electronic warfare, b) impulsive noise (and why it is relevant), and c) contrastive learning.

7. In the conclusion, the expression "few-shot learning" is used, although this concept has not been introduced or discussed in the paper.


Author Response

Please see the attachment.

Author Response File: Author Response.pdf

Reviewer 4 Report

Radar pulse classification using semi-supervised learning is presented. The paper is well presented; I recommend minor revision.

1. Can this technique also classify FMCW vs. SFCW signals, or UWB signals?
2. The related-work section presents only deep-learning references; the section title should be changed.

3. The statistical significance of the proposed model's accuracy should be added: report accuracy over several trials, and add true-positive and false-negative rates, precision, etc. as well (see the sketch below).
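
As a sketch of the reporting requested in point 3, the following computes per-class precision/recall and the mean and standard deviation of accuracy over repeated trials; the random labels, nine classes, and sample counts are illustrative stand-ins only, not results from the manuscript.

```python
# Sketch of the requested reporting: per-class precision/recall and
# accuracy mean +/- std over repeated trials. Data is random and purely
# illustrative.
import numpy as np
from sklearn.metrics import accuracy_score, classification_report

rng = np.random.default_rng(0)
accs = []
for trial in range(10):                     # several independent trials
    y_true = rng.integers(0, 9, size=1000)  # 9 modulation classes (assumed)
    y_pred = rng.integers(0, 9, size=1000)  # stand-in for model output
    accs.append(accuracy_score(y_true, y_pred))

print(f"accuracy: {np.mean(accs):.3f} +/- {np.std(accs):.3f}")
print(classification_report(y_true, y_pred))  # precision, recall, F1 per class
```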

Author Response

Please see the attachment.

Author Response File: Author Response.docx

Round 2

Reviewer 2 Report

No additional comments. (All comments in the previous review have been addressed.)

Author Response

Thanks for your supportive comments.

Reviewer 3 Report

Thanks for the revised manuscript.

1. About the range of frequencies in Figure 4. My previous comment was not so much concerned with the figure itself, but rather with the input images to your CNN models. If the simulation results were not obtained with images using the revised frequency range, then it would be preferable to show the original time-frequency images.

2. About the number of samples. Lines 296-300 should be updated with the revised number of test samples, 1500. Also, it is still confusing to compare the different models. I suggest adding a table, similar to your Table 3; see the illustrative layout below. For example, the columns could be: Model, Pre-training (entries could be None, ImageNet, xx samples per class, etc.), Fine-tuning (entries could be None, 15-85 samples per class, etc.), and Testing (1500 samples per class).
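
For concreteness, the suggested table could look like the following; every row entry is a placeholder to be filled in by the authors, not a claim about their actual training setup.

```
Model    | Pre-training          | Fine-tuning              | Testing
---------|-----------------------|--------------------------|------------------------
CL-CNN   | xx samples per class  | 15-85 samples per class  | 1500 samples per class
ResNet   | None / ImageNet       | None / xx samples        | 1500 samples per class
...      | ...                   | ...                      | ...
```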

Some minor notes:

- Eq. 9 (top) seems to be missing an L

- Number of layers: line 368 says 32, Table 5 says 34.

- Line 433: third should be fourth


Author Response

Please see the attachment.

Author Response File: Author Response.docx
