Peer-Review Record

Azimuth-Sensitive Object Detection of High-Resolution SAR Images in Complex Scenes by Using a Spatial Orientation Attention Enhancement Network

Remote Sens. 2022, 14(9), 2198; https://doi.org/10.3390/rs14092198
by Ji Ge 1,2,3, Chao Wang 1,2,3, Bo Zhang 1,2,3,*, Changgui Xu 1,2,3 and Xiaoyang Wen 1,2,3
Reviewer 1: Anonymous
Reviewer 2: Anonymous
Reviewer 3: Anonymous
Submission received: 17 April 2022 / Revised: 28 April 2022 / Accepted: 1 May 2022 / Published: 4 May 2022

Round 1

Reviewer 1 Report

The authors have properly addressed all the comments in the revised manuscript; I have no further questions.

Author Response

Thank you for your comments, which have helped us improve the quality of the manuscript.

Reviewer 2 Report

The authors have accepted my remarks and made the necessary edits.

Author Response

Thank you for your comments, which have helped us improve the quality of the manuscript.

Reviewer 3 Report

This paper has greatly improved over its previous version, but it still has significant flaws. As previously stated, the contribution, approach, and novelties must be clearly described! What have you achieved that others have not? What are the novelties of the IPCN that you used? Even if you did something novel, how can you justify it through formulation and evaluation scenarios? This paper suffers greatly from the lack of a verification scenario, which makes the manuscript resemble an engineering report! The only simulation result is Figure 5, which cannot be used as a verification scenario.

This paper really needs a comprehensive quality assessment based on both the formulation and the image processing. Hence, unfortunately, I would reject it again. Please take a look at the following papers as samples and use their data-presentation style as well as their verification methods.

1. https://doi.org/10.1080/01431161.2021.1953719

2. https://doi.org/10.1109/TGRS.2017.2776357

3. https://doi.org/10.1109/JSTARS.2020.3015909

4. https://doi.org/10.1080/17455030.2016.1198062

5. https://doi.org/10.1109/TGRS.2019.2947634

Author Response

Dear reviewers,

Re: Manuscript ID remotesensing-1708680, entitled “Azimuth-Sensitive Object Detection of High-Resolution SAR Images in Complex Scenes by Using a Spatial Orientation Attention Enhancement Network”.

We appreciate the detailed and constructive comments provided by the reviewers, and we thank you very much for your advice. Our response to Reviewer 3’s comments follows. In the manuscript and in this file, the red text marks revisions suggested by the Reviewer, and the underlined text marks content added to improve the expression.

Regards,

Ji Ge

[email protected]


Correspondence: Mr. Bo Zhang

Email: [email protected]

Author Response File: Author Response.pdf

Round 2

Reviewer 3 Report

Unfortunately, the same old problems still exist! Besides, the paper is still devoid of a verification scenario, which is a necessary element for such a submission. Hence, I would reject it and will not recommend resubmission. In my opinion, the theory, formulation, and simulation results must all be in line with each other, which is not the case here.


This manuscript is a resubmission of an earlier submission. The following is a list of the peer review reports and author responses from that submission.


Round 1

Reviewer 1 Report

The authors propose a deep learning framework, named ‘YOLO-IPCN’, for SAR azimuth-sensitive object detection. In this paper, the authors describe how this end-to-end, lightweight, anchor-free SAR azimuth-sensitive object detection framework improves on YOLOX. Owing to these improvements, the results show that, by using a lightweight feature extraction network, YOLO-IPCN overcomes overfitting and balances accuracy against computational cost. Compared with YOLOX-L and YOLOv5-L, this lightweight network achieves more reliable detection performance.

However, there are still some issues that need to be considered, as follows:

  1. More work is required in Section 2, such as more information about the ‘Spatial Orientation Attention Module’ and ‘PAFPN’.
  2. The authors mention that ‘ConvMixer is applied in object detection for the first time.’ Why did the article choose ConvMixer? (For background, see the sketch after this list.)
  3. The relationship between the proposed framework and YOLOX is unclear.
  4. The authors have improved the network structure and trained for 300 epochs, but it is not clear how the loss of the trained network evolves on the training set and the test set. I hope the authors will show the loss curves of the network training.
  5. On page 8, lines 276-278, two common evaluation metrics are shown for SAR target detection. We know that there are many evaluation metrics for target detection. Are these two standard evaluation metrics for SAR target detection? I hope the authors can cite the corresponding literature to support this. (Candidate definitions are sketched after this list.)
  6. Although the subject of the study is azimuth-sensitive objects in SAR images, there is almost nothing in the paper related to this. It is necessary to highlight the imaging principle of synthetic aperture radar and the particularity of azimuth sensitivity. This may yield a more explainable framework.
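On point 2: this record does not describe ConvMixer itself. As background only, below is a minimal PyTorch sketch of a standard ConvMixer block following Trockman and Kolter’s original design (a depthwise convolution with a residual connection for spatial mixing, then a pointwise convolution for channel mixing, each followed by GELU and BatchNorm). How the manuscript’s IPCN backbone adapts this design is not specified in the record, so this is an illustrative assumption, not the authors’ implementation.

```python
import torch
import torch.nn as nn


class ConvMixerBlock(nn.Module):
    """One standard ConvMixer block: depthwise conv with a residual
    connection (spatial mixing), then a pointwise conv (channel mixing),
    each followed by GELU and BatchNorm."""

    def __init__(self, dim: int, kernel_size: int = 9):
        super().__init__()
        self.depthwise = nn.Sequential(
            nn.Conv2d(dim, dim, kernel_size, groups=dim, padding="same"),
            nn.GELU(),
            nn.BatchNorm2d(dim),
        )
        self.pointwise = nn.Sequential(
            nn.Conv2d(dim, dim, kernel_size=1),
            nn.GELU(),
            nn.BatchNorm2d(dim),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = x + self.depthwise(x)  # residual over the spatial mixing
        return self.pointwise(x)   # channel mixing


# Example: pass a hypothetical 256-channel backbone feature map through one block.
block = ConvMixerBlock(dim=256)
feats = torch.randn(1, 256, 64, 64)
print(block(feats).shape)  # torch.Size([1, 256, 64, 64])
```

Because all convolutions are depthwise or 1x1, the block keeps the parameter count low, which is consistent with the lightweight backbone the reviewers describe.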
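On point 5: the record does not name the two metrics, so the following is only an assumption. Precision and recall, with average precision (AP) derived from them, are the metrics most commonly reported in SAR target-detection papers; whether these match the manuscript’s two metrics cannot be confirmed from this record.

```latex
\mathrm{Precision} = \frac{TP}{TP + FP}, \qquad
\mathrm{Recall} = \frac{TP}{TP + FN}, \qquad
\mathrm{AP} = \int_{0}^{1} P(R)\,\mathrm{d}R
```

Here TP, FP, and FN denote the counts of true positives, false positives, and false negatives, respectively, and P(R) is precision as a function of recall.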

Reviewer 2 Report

Satellite images play an important role in many aspects of modern life. A significant share of these images is obtained by satellites carrying Synthetic Aperture Radar (SAR). The proper detection of relatively small surface objects is affected by systematic errors arising from azimuth influences. The method of object extraction from SAR images proposed here may improve the quality of image processing.

Recommendations for improving the text:

  1. The authors use many abbreviations. Their full forms should be given when each abbreviation first appears.
  2. Insert a figure explaining why the azimuth angle may lead to false detections.
  3. The figures with SAR images and object detection need more explanation.
  4. It would be good to place a full description of all the algorithms used in an appendix to the paper.

Reviewer 3 Report

Please find the attachment.

Comments for author File: Comments.pdf

Reviewer 4 Report

In this paper, the authors propose YOLO-IPCN for azimuth-sensitive object detection in high-resolution (HR) SAR images.

  1. The main challenges and difficulties of azimuth-sensitive object detection are not well described.
  2. The IPCN is used as the backbone in the proposed method. If it was not proposed by the authors, please cite the related paper.
  3. The authors propose the SOAM, but it is not clear how it works for azimuth-sensitive object detection (see the generic sketch after this list).
  4. Please give a link to access the data used in the paper, if it can be published.
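On point 3: the review record never explains the internals of the Spatial Orientation Attention Module. For orientation only, here is a generic CBAM-style spatial attention block (Woo et al., 2018) in PyTorch; whether the manuscript’s SOAM resembles this design is an assumption, and this is not the authors’ implementation.

```python
import torch
import torch.nn as nn


class SpatialAttention(nn.Module):
    """Generic CBAM-style spatial attention: channel-wise average and max
    statistics are convolved into a single-channel sigmoid gate that
    re-weights every spatial location of the feature map."""

    def __init__(self, kernel_size: int = 7):
        super().__init__()
        self.conv = nn.Conv2d(2, 1, kernel_size, padding=kernel_size // 2)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        avg_pool = x.mean(dim=1, keepdim=True)   # (B, 1, H, W)
        max_pool = x.amax(dim=1, keepdim=True)   # (B, 1, H, W)
        gate = torch.sigmoid(self.conv(torch.cat([avg_pool, max_pool], dim=1)))
        return x * gate  # emphasize salient spatial positions
```

A module of this family learns where in the scene to attend; an azimuth-sensitive variant would presumably condition this gate on orientation cues, which is exactly the detail the reviewer asks the authors to spell out.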