Article
Peer-Review Record

Radar Spectrum Image Classification Based on Deep Learning

Electronics 2023, 12(9), 2110; https://doi.org/10.3390/electronics12092110
by Zhongsen Sun 1, Kaizhuang Li 1, Yu Zheng 1, Xi Li 2 and Yunlong Mao 2,*
Reviewer 1: Anonymous
Reviewer 2:
Reviewer 3:
Submission received: 3 March 2023 / Revised: 21 April 2023 / Accepted: 28 April 2023 / Published: 5 May 2023
(This article belongs to the Section Microwave and Wireless Communications)

Round 1

Reviewer 1 Report

The paper proposes a method of emitter classification and recognition based on deep learning: an EfficientNetV2-S convolutional model built on one-dimensional convolution. The input image suitable for the neural network is obtained through scale compression and grayscale processing of the frequency-domain images in the dataset, and features are extracted by one-dimensional convolution and an attention mechanism to achieve the classification and recognition. The paper is of great importance. However, there are a few comments to consider.

·       The paper is poorly written, and the introduction and motivation need improvement. 

·       Some abbreviations have been used without a first-time definition, such as MBConv. The authors are encouraged to clarify what type of convolution this module is.

·       The introduction needs to motivate the problem sufficiently. 

·       Almost all figures are of very low resolution and need to be modified.

·       The novelty needs to be clarified: how does the proposed method differ from EfficientNetV2? The authors are encouraged to explain the contributions in bullet points in the introduction.

·       In the introduction, the authors use “literature” as the first word of the sentence, followed by the citation, to refer to other work. This is not the correct way to cite other work. We usually use the first author’s last name followed by “et al.” if there are more than two authors, or the last names of both authors if there are only two.

·       There are many grammar mistakes and poorly structured phrases, such as in lines 54, 55, 88–102, 131, 135, 142, and 215.

·       The authors need to expand the experimental results to validate the proposed framework clearly.

Author Response

Please see the attachment.

Author Response File: Author Response.pdf

Reviewer 2 Report

This paper proposes an improved deep learning method based on EfficientNetV2-S, which achieves better classification accuracy on the test set. 1) The innovation of the paper is relatively insufficient; the improved method is only a minor change to an existing network structure. 2) Some of the content in the paper is well known, and there is no need to elaborate on it at such length (for example, Section II).

Author Response

Please see the attachment.

Author Response File: Author Response.pdf

Reviewer 3 Report

This paper aims to utilize EfficientNetV2 for radar spectrum image classification. It seems that the authors used a 1D convolutional operator to reduce the number of parameters and achieved better classification results. However, I have many major concerns, listed as follows:
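As background to the parameter reduction the reviewer refers to, the saving from replacing a k × k 2D kernel with a length-k 1D kernel can be counted directly; a single convolution layer shrinks roughly by a factor of k. The sketch below uses arbitrary illustrative channel sizes, not values from the paper under review.

```python
# Parameter counts for one convolution layer: weights plus one bias per
# output channel. Swapping a k x k 2D kernel for a length-k 1D kernel
# cuts the weight count by a factor of k.
# Channel sizes below are illustrative assumptions, not from the paper.

def conv2d_params(c_in: int, c_out: int, k: int) -> int:
    """c_out * c_in * k * k weights + c_out biases."""
    return c_out * c_in * k * k + c_out

def conv1d_params(c_in: int, c_out: int, k: int) -> int:
    """c_out * c_in * k weights + c_out biases."""
    return c_out * c_in * k + c_out

c_in, c_out, k = 64, 128, 3
p2d = conv2d_params(c_in, c_out, k)   # 64*128*9 + 128 = 73856
p1d = conv1d_params(c_in, c_out, k)   # 64*128*3 + 128 = 24704
print(p2d, p1d, round(p2d / p1d, 2))  # ratio is close to k = 3
```

For a 3 × 3 kernel the 1D variant needs about one third of the weights, which is the kind of reduction the reviewer's reading of the paper implies.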

1.  First of all, the introduction does not make clear why this method was proposed and what the contributions are. Based on the abstract, the authors used a 2D convolutional operator to replace the 1D one; however, in the paper, they used a 1D convolutional operator to replace the 2D one. This makes it very confusing to evaluate their contributions.

2.  All the figures are too vague to read, especially Fig. 11; I cannot recognize what they show. Please also note that the legends cover the accuracy curves in Fig. 13, which makes it hard to determine the final accuracy.

3.  Please specify each loss in your Fig. 12. Does it present loss curves, or do you use a different loss function for each deep learning method?

4. It seems that the models were trained from scratch on the 576 radar training images, but there is no specific description of this in the paper. If not, please specify the pretrained models. If so, the training data are too small to support learning large models such as ResNet-50, ResNet-101, ResNeXt, DenseNet, ShuffleNet, etc., so such comparisons in the paper are not fair; please either increase your training data or reduce the parameters of the competing methods for comparison (for example, reduce the depth of ResNet to 18).

 

Author Response

Please see the attachment.

Author Response File: Author Response.pdf

Round 2

Reviewer 1 Report

The paper is good. It only needs a slight improvement in the language.

Author Response

Please see the attachment.

Author Response File: Author Response.pdf

Reviewer 2 Report

I have no more comments.

Author Response

Please see the attachment.

Author Response File: Author Response.pdf

Reviewer 3 Report

After revision, the authors provide more experiments and explanations, which improve the paper and resolve some of my concerns.

However, I still have some major concerns:

1. Fig. 10 (c) and (d) show that ResNet-50 and ResNet-18 have not converged within 100 epochs; can you add more epochs so that they reach their best performance?

2. Fig. 11 demonstrates that the average results of DenseNet 12, ResNet-18, and MobileNetV3 are very similar. Could you please compare their average curves? If MobileNetV3, with lower FLOPs and fewer parameters than the proposed model, achieves similar performance to the proposed one, then the paper’s contribution would be incremental.

3. Also, please specify the following settings of your experiments.

- When training the other comparison models, are you using the 2D convolutional operator?

- How many repeated experiments did you run for the average Top-1 and Top-5 accuracy in Table 2?

Author Response

Please see the attachment.

Author Response File: Author Response.pdf

Round 3

Reviewer 3 Report

All my concerns have been resolved.
