Article
Peer-Review Record

Contrastive Learning Network Based on Causal Attention for Fine-Grained Ship Classification in Remote Sensing Scenarios

Remote Sens. 2023, 15(13), 3393; https://doi.org/10.3390/rs15133393
by Chaofan Pan, Runsheng Li *, Qing Hu, Chaoyang Niu, Wei Liu and Wanjie Lu
Submission received: 10 May 2023 / Revised: 26 June 2023 / Accepted: 27 June 2023 / Published: 3 July 2023

Round 1

Reviewer 1 Report

1. Please double-check formula (17); is Ar correct?

2. Why are the overall accuracy (OA) results not given in Tables 2 and 7?

3. The labels in Figure 11 are not clear.

4. The experimental results in Table 6 show that the recognition rate of the proposed method for the AR11 category is much lower than that of the comparison methods. Please provide possible reasons in the article.

5. Why were references [45–50] chosen for comparison on the FGSCR-42 dataset?

Minor editing of English language required.

Author Response

Please see the attachment.

Author Response File: Author Response.docx

Reviewer 2 Report

The manuscript proposes a contrastive learning network based on causal attention for fine-grained classification of ship targets in remote sensing scenarios. Two designed modules, the FD-CAM and the FAM, are presented. Experiments show superior performance of the proposed method against other methods. I am inclined to reject this paper, as I think some important explanations are missing:

1. The authors use the strip pooling module (SPM) to pool the input feature map F because of the extreme aspect ratio of ship targets. I noticed that the ships in the samples shown in Figure 8 are horizontal. Will classification accuracy be adversely affected for ships that are neither vertical nor horizontal in the images? (See the strip pooling sketch after this list.)

2. Would SPM offer more advantages than deformable convolution in ship target classification? (See the deformable convolution sketch after this list.)

3. Among the fine-grained ship categories, are the FD-CAM and FAM more helpful for easy categories or for difficult ones?
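For context on comments 1 and 2, minimal sketches of the two operations under discussion. First, strip pooling in its commonly cited form (1-D average pooling along each spatial axis, fused back into the feature map as an attention weight); this is illustrative PyTorch, not the authors' implementation, and the module name is hypothetical:

```python
import torch
import torch.nn as nn


class StripPool(nn.Module):
    """Illustrative strip pooling: average-pool along each spatial axis,
    mix each strip with a 1-D conv, and fuse the two strip contexts back
    into the feature map as a sigmoid attention weight."""

    def __init__(self, channels: int):
        super().__init__()
        self.pool_h = nn.AdaptiveAvgPool2d((None, 1))  # -> (N, C, H, 1)
        self.pool_w = nn.AdaptiveAvgPool2d((1, None))  # -> (N, C, 1, W)
        self.conv_h = nn.Conv1d(channels, channels, kernel_size=3, padding=1)
        self.conv_w = nn.Conv1d(channels, channels, kernel_size=3, padding=1)
        self.fuse = nn.Conv2d(channels, channels, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Axis-aligned strips capture long, thin structures along H or W,
        # which is the aspect-ratio motivation; a diagonal ship spreads
        # across many strips, which is exactly the concern in comment 1.
        sh = self.conv_h(self.pool_h(x).squeeze(-1)).unsqueeze(-1)  # (N, C, H, 1)
        sw = self.conv_w(self.pool_w(x).squeeze(2)).unsqueeze(2)    # (N, C, 1, W)
        attn = torch.sigmoid(self.fuse(sh + sw))  # broadcasts to (N, C, H, W)
        return x * attn
```

Second, the alternative raised in comment 2: deformable convolution learns per-position sampling offsets instead of fixing the sampling geometry, at the cost of an extra offset-prediction conv. A sketch using torchvision.ops.DeformConv2d; the block name is again illustrative:

```python
import torch
import torch.nn as nn
from torchvision.ops import DeformConv2d


class DeformBlock(nn.Module):
    """Illustrative deformable convolution: a plain conv predicts a 2-D
    offset (dy, dx) for every tap of the k x k kernel at every position,
    so the sampling grid can bend to follow an arbitrarily oriented ship."""

    def __init__(self, channels: int, k: int = 3):
        super().__init__()
        self.offsets = nn.Conv2d(channels, 2 * k * k, kernel_size=k, padding=k // 2)
        self.dconv = DeformConv2d(channels, channels, kernel_size=k, padding=k // 2)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.dconv(x, self.offsets(x))
```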

Minor Comments:

1. On page 4, "Figure 3. SPM 结构" should be "Figure 3. SPM structure".

2. Symbols are missing from some formulas, for example, formulas (1) and (4). Please check the paper carefully to avoid such careless mistakes.


3. The English logic and expression should be much improved.

Author Response

Please see the attachment.

Author Response File: Author Response.docx

Reviewer 3 Report

Please refer to the attached file.

Comments for author File: Comments.pdf

Only a few expressions require modification.

Author Response

Please see the attachment.

Author Response File: Author Response.docx

Round 2

Reviewer 1 Report

The manuscript has been sufficiently improved to warrant publication in Remote Sensing.

Author Response

Please see the attachment.

Author Response File: Author Response.docx

Reviewer 2 Report

The manuscript proposes a contrastive learning network based on causal attention for fine-grained classification of ship targets in remote sensing scenarios. Two designed modules, the FD-CAM and the FAM, are presented. Experiments show superior performance of the proposed method against other methods. However, I think some important explanations are missing:

1. The authors use the strip pooling module (SPM) to pool the input feature map F because of the extreme aspect ratio of ship targets. I noticed that the ships in the samples shown in Figure 8 are horizontal. Will classification accuracy be adversely affected for ships that are neither vertical nor horizontal in the images?

2. Would SPM offer more advantages than deformable convolution in ship target classification?

3. Among the fine-grained ship categories, are the FD-CAM and FAM more helpful for easy categories or for difficult ones?

4. The computational efficiency of the proposed method should be analyzed and fairly compared with existing SOTA methods. (See the profiling sketch after this list.)

5. Please analyze the drawbacks and disadvantages of the proposed C2NET approach. Compared with existing ship classification methods, in which practical applications is C2NET applicable, and in which cases is it not?
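For context on comment 4, one simple way to make efficiency numbers comparable across methods is to report parameter count and mean forward latency under a fixed input shape. A minimal sketch, assuming a PyTorch model; the function name and settings are illustrative, not how the authors measured efficiency:

```python
import time

import torch


def profile(model: torch.nn.Module, input_shape=(1, 3, 224, 224), runs: int = 50):
    """Return (parameter count, mean forward latency in seconds)."""
    params = sum(p.numel() for p in model.parameters())
    x = torch.randn(*input_shape)
    model.eval()
    with torch.no_grad():
        for _ in range(5):            # warm-up before timing
            model(x)
        start = time.perf_counter()
        for _ in range(runs):
            model(x)
        latency = (time.perf_counter() - start) / runs
    return params, latency
```

Reporting FLOPs as well (e.g., via an off-the-shelf profiler) would make the comparison with SOTA methods fairer still.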


Author Response

Please see the attachment.

Author Response File: Author Response.docx
