Article
Peer-Review Record

Context-Aware DGCN-Based Ship Formation Recognition in Remote Sensing Images

Remote Sens. 2024, 16(18), 3435; https://doi.org/10.3390/rs16183435
by Tao Zhang, Xiaogang Yang *, Ruitao Lu, Xueli Xie, Siyu Wang and Shuang Su
Reviewer 1: Anonymous
Reviewer 2:
Submission received: 1 July 2024 / Revised: 1 September 2024 / Accepted: 13 September 2024 / Published: 16 September 2024

Round 1

Reviewer 1 Report

Comments and Suggestions for Authors

This article uses deep learning to detect ships. It shows an incremental improvement in mAP. Is that good enough? For ship clustering, it compares only with k-means. Is that good enough? The Conclusion does not include an actual conclusion; it merely reiterates the extensiveness of the experiments.

Comments on the Quality of English Language

Presentation is good.

Author Response

Please see the attachment.

Author Response File: Author Response.pdf

Reviewer 2 Report

Comments and Suggestions for Authors

Overall

The article presents a context-aware Dense Graph Convolution Network (DGCN) aimed at detecting ship formations. For this purpose, the authors have designed a center point-based ship detection method. Additionally, they have applied Delaunay triangulation to represent a ship formation as a graph structure, and the DGCN network for formation classification. They have conducted a series of extensive experiments on two satellite image databases.

The text is very interesting and its strong points outnumber the weak ones; hence, after minor corrections, it should be published.
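For readers unfamiliar with the approach, the general idea of building a formation graph from detected ship center points via Delaunay triangulation can be sketched as follows. This is a minimal illustration using SciPy, not the authors' implementation; the point coordinates are hypothetical.

```python
import numpy as np
from scipy.spatial import Delaunay

# Hypothetical ship center points (x, y) detected in an image;
# the coordinates are illustrative, not taken from the paper's dataset.
points = np.array([
    [0.0, 0.0], [2.0, 0.5], [4.0, 0.0],
    [1.0, 2.0], [3.0, 2.0],
])

tri = Delaunay(points)

# Convert triangles to an undirected edge set: each triangle
# contributes its three sides as graph edges.
edges = set()
for a, b, c in tri.simplices:
    for u, v in ((a, b), (b, c), (a, c)):
        edges.add((min(u, v), max(u, v)))

print(sorted(edges))
```

The resulting edge set then serves as the adjacency structure fed to a graph network; Delaunay triangulation is a natural choice here because it connects spatially neighboring ships without a hand-tuned distance threshold.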

 

Strong points:

There are many strong points:

  1. the literature survey,

  2. the idea of graph model-based formation recognition,

  3. the results obtained,

  4. the numerous insightful experiments supporting the results,

  5. most of the necessary references.

 

Weak points:

Some fragments of the text mention ship detection, but there is no explicit description of the particular ship detection method used in this project, nor of the identification of ship classes. L. 431 lists ship symbols without explaining them; furthermore, in l. 433 the reader learns that the formation types presented in Fig. 8 are ‘publicly available’, but without a reference. Throughout the presentation the graph nodes are treated as identical, yet graphically they are shown in different colours. This raises the question of whether two topologically identical formations are indeed identical, given that they consist of different ship classes.

 

Detailed remarks and comments:

I suggest adding a figure illustrating the angles from Eq. (8). It would make the similarity measure between ships easier to understand.

I noticed some minor mistakes:

  1. Eq. (2): why is there Yxyz? I suppose it should be Yxyc, as in Eq. (3).

  2. There is no reference for the SGF database.

  3. L. 291: why λcwh = 0.1?

  4. L. 318: does ‘fuzzy function’ mean a membership function in the fuzzy-logic sense?

  5. Eq. (7): what does the parameter k mean?

  6. Eq. (10): there is no explanation of the matrices D̃ and Ã.

  7. L. 391: how does a node feature correspond to a ship class?

  8. L. 457: we can read ‘all 20 classes’. Classes of what: ships or formations?

  9. L. 449: ‘increase convergence speed, which is reduced to one-tenth at the 180th and 210th iterations.’ Why did the authors select these particular iterations?

  10. Tab. 3 and l. 511 mention ‘several attention mechanisms’ without any references, at least for those listed in Tab. 3.

  11. Fig. 10 is hard to make out. To make it legible, I suggest changing its orientation from horizontal to vertical and then enlarging the second column (which would then become the second row).

  12. What programming language and hardware configuration did the authors use in this project?
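Regarding remark 6: in the standard GCN formulation, which Eq. (10) appears to follow, Ã = A + I is the adjacency matrix with added self-loops and D̃ is its diagonal degree matrix. A minimal numeric sketch of the propagation rule H' = σ(D̃^(−1/2) Ã D̃^(−1/2) H W), with toy values that are not taken from the paper:

```python
import numpy as np

# Adjacency matrix of a 3-node path graph (illustrative only).
A = np.array([[0, 1, 0],
              [1, 0, 1],
              [0, 1, 0]], dtype=float)

A_tilde = A + np.eye(3)                  # Ã = A + I (self-loops)
d = A_tilde.sum(axis=1)                  # degrees of Ã
D_inv_sqrt = np.diag(1.0 / np.sqrt(d))   # D̃^(-1/2)

H = np.eye(3)                            # one-hot node features
W = np.ones((3, 2))                      # toy weight matrix

# One GCN layer with ReLU activation.
H_next = np.maximum(0, D_inv_sqrt @ A_tilde @ D_inv_sqrt @ H @ W)
print(H_next.shape)  # (3, 2)
```

The symmetric normalization keeps the propagation matrix well conditioned regardless of node degree, which is why D̃ and Ã deserve an explicit definition in the text.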

 

Comments on the Quality of English Language

The English has to be proofread (e.g., l. 19: ‘Convolutional Network’ → ‘Convolution Network’; ‘adopt’ → ‘adapt’ in some places).

Author Response

Please see the attachment.

Author Response File: Author Response.pdf
