Article
Peer-Review Record

Multi-Source HRRP Target Fusion Recognition Based on Time-Step Correlation

Appl. Sci. 2023, 13(9), 5286; https://doi.org/10.3390/app13095286
by Jianbin Lu, Zhibin Yue * and Lu Wan
Reviewer 1:
Reviewer 2:
Submission received: 17 March 2023 / Revised: 10 April 2023 / Accepted: 20 April 2023 / Published: 23 April 2023

Round 1

Reviewer 1 Report

Overall comments:
This work aims to propose a feature fusion module based on HRRP temporal features, more precisely on timestep correlation. The idea is interesting and justified by the reference methods cited. However, some points in the work need to be improved, for example the justification of some techniques, the definition of some acronyms, and the quality of the figures.

Major reviews:

1 - RATA is not defined in the text, yet it appears in the first paragraph of the Introduction. Soon after, the acronym RATR is presented, which is also not defined. Are RATR and RATA the same thing? Are they related to target recognition?

2 - The authors adopted a pre-processing step in the data based on the l-2 norm. Why use this norm and not other normalization strategies? Is there any reference method that adopted the same idea? If so, it would be interesting to cite the works.

3 - There is a lack of justification for using the sliding window after normalizing the data. Did you use it to compare the results with the reference? (See the preprocessing sketch after these comments.)

4 - The work's figures could be of better quality (higher resolution). Nowadays, several software tools allow the creation of figures with excellent quality.

As an example, we can cite Figure 2. The text in the figure (size and font) differs from the text in the rest of the document. Why is softmax in bold in the figure? Since the document is read from top to bottom, ideally the input should be at the top and the output at the bottom, which is not the case in the figure. If you zoom in on the figure, the image loses all resolution; it looks like a screenshot taken from some software.

In Figure 4, for example, there are other details that need to be adjusted. There are many "d's": some are clearly visible, while others sit very close to the arrows. What are the input and output of the figure? Nothing is labeled.

5 - How does the size of the sliding window (size 3) relate to Figure 3? Wouldn't this figure be the same for any window size?

6 - Where are ?1 and ?2 in Figure 6?

7 - You mention that ?1 and ?2 are feature vectors, yet they are written as elements of R?×?. Please explain what you mean here.
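
Regarding points 2 and 3 above, the following is a minimal sketch, not the authors' code, of what an L2-norm normalization followed by a sliding window over the range cells could look like. The window length of 3, the step size, and all variable names are illustrative assumptions.

import numpy as np

# Minimal sketch (assumed preprocessing, not taken from the manuscript):
# scale each HRRP sample to unit L2 norm, then segment it with a sliding window.

def l2_normalize(hrrp):
    """Scale one HRRP sample to unit L2 norm (removes amplitude sensitivity)."""
    return hrrp / (np.linalg.norm(hrrp) + 1e-12)

def sliding_windows(hrrp, window=3, step=1):
    """Cut the normalized profile into overlapping windows of `window` range cells."""
    n = (len(hrrp) - window) // step + 1
    return np.stack([hrrp[i * step : i * step + window] for i in range(n)])

sample = np.abs(np.random.randn(256))                  # one simulated 256-cell HRRP
segments = sliding_windows(l2_normalize(sample), window=3)
print(segments.shape)                                  # (254, 3)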


Minor reviews:

1 - What do you mean by "golden town correlation" in the abstract?

2 - You say, "Liao et al. [2] and Du et al. [11] started from the spectrum of HRRP to extract features that are easy to identify". What does "easy" mean here? Easy in terms of processing?

3 - In the Introduction, you use "In the literature [XX-XX]" to describe some works. You can just keep the reference numbers and remove the "In the literature" text.

4 - Remove the spacing between the equations and the word "where". Also, it is necessary to put punctuation (a period or comma) after all equations, as they are part of the text.

5 - There are some acronyms in the text that have not been defined, for example RNN and SRU.

Author Response

Dear Reviewers,

Thank you very much for your valuable suggestions on the article. For details of the modifications, see the uploaded document.

Author Response File: Author Response.docx

Reviewer 2 Report

The proposed Timestep Correlation-based Feature Fusion (TCFF) method aims to address the limitations of a single High Resolution Range Profile (HRRP) in recognition, and it seems to be a promising approach based on the experimental results. The method involves calculating the covariance matrix over the time steps of the two features extracted by the two channels and assigning different weights to the time steps according to the strength of the covariance (the "golden town correlation") for feature fusion.
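
As an illustration of the fusion step described above, here is a minimal sketch; it is an assumption-laden reconstruction, not the authors' TCFF implementation. The (T, d) feature shapes, the per-time-step covariance, and the softmax weighting are illustrative choices meant only to show how stronger co-variation at a time step could translate into a larger fusion weight.

import numpy as np

# Minimal sketch (assumptions, not the authors' TCFF code): measure how strongly the
# two channels' features co-vary at each time step, turn that strength into per-step
# weights, and fuse the weighted features.

def timestep_correlation_fusion(f1, f2):
    """f1, f2: (T, d) feature sequences from the two channels."""
    c1 = f1 - f1.mean(axis=1, keepdims=True)
    c2 = f2 - f2.mean(axis=1, keepdims=True)
    cov = (c1 * c2).sum(axis=1) / f1.shape[1]          # per-time-step covariance, shape (T,)
    weights = np.exp(cov - cov.max())
    weights = weights / weights.sum()                   # softmax over time steps
    return weights[:, None] * (f1 + f2)                 # re-weighted fused features, (T, d)

f1 = np.random.randn(32, 64)   # channel 1: 32 time steps, 64-dim features
f2 = np.random.randn(32, 64)   # channel 2
print(timestep_correlation_fusion(f1, f2).shape)        # (32, 64)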

According to the paper, the experimental results on the simulated ship-target HRRP data set demonstrate that the TCFF method can achieve better recognition performance than the single-channel model. Additionally, the results suggest that the TCFF method achieves better recognition results than simple feature fusion approaches such as element-wise addition and element concatenation.

Overall, this paper presents a potentially useful method for addressing the limitations of single HRRP in recognition. However, it is important to note that the proposed approach has only been tested on a simulated data set, and its effectiveness in real-world scenarios remains to be investigated. Further research and experimentation are needed to validate the effectiveness and robustness of the TCFF method in practical applications.

Author Response

Dear Reviewers,

Thank you very much for your valuable suggestions on the article. For details of the modifications, see the uploaded document.

Author Response File: Author Response.docx

Round 2

Reviewer 1 Report

I recommend careful proofreading for the final version of the manuscript. 
