Article
Peer-Review Record

Adaptive Feature Attention Module for Robust Visual–LiDAR Fusion-Based Object Detection in Adverse Weather Conditions

Remote Sens. 2023, 15(16), 3992; https://doi.org/10.3390/rs15163992
by Taek-Lim Kim 1,†, Saba Arshad 2,† and Tae-Hyoung Park 3,*
Reviewer 1: Anonymous
Reviewer 2:
Reviewer 3:
Reviewer 4:
Submission received: 21 June 2023 / Revised: 1 August 2023 / Accepted: 8 August 2023 / Published: 11 August 2023

Round 1

Reviewer 1 Report

Comments to the Author:

Considering that the diverse distribution of training data makes effective learning of networks a challenging problem, the authors systematically study existing object detection methods based on vision and LiDAR features, and propose an adaptive feature attention module (AFAM) for robust object detection based on multi-sensor data fusion in outdoor dynamic environments. I have some questions that the authors need to address carefully, as detailed below.

1. The variables in the article should correspond to the formulas; recheck Formulas (16) and (17). The variables in Formulas (18) and (19) need to be explained.

2. Table 1 lists different weather datasets. It is recommended to indicate the source network and target network corresponding to each dataset, for greater clarity.

3. Check the correspondence between references and their citations in the paper, e.g., references [6] and [7].

4. What is the difference between the FSL-EfficientDet and EfficientDet networks? Shouldn't all training be done with labels? Please state the specific differences.

5. The proposed method needs to be compared with other state-of-the-art object detection networks. Is the AFAM module portable? Can it be ported to other networks, and how does it perform on them? Since the main innovation of this paper is the proposed AFAM module, its effectiveness needs to be analyzed carefully, with additional analysis added where appropriate.

  • Language can be further improved.

Author Response

We are thankful to the reviewer for the useful comments and for helping us improve our manuscript. We have tried our best to address the comments, and the response is attached below.

Author Response File: Author Response.docx

Reviewer 2 Report

Dear Authors

Thank you for letting me read the manuscript "Adaptive Feature Attention Module for Robust Visual LiDAR Fusion-based Object Detection in Adverse Weather Conditions". In summary, the paper addresses the problem of adverse weather conditions in the operating environment, such as sun, fog, and snow, as well as extreme illumination changes from day to night. As a solution, the paper proposes an adaptive feature attention module (AFAM) that utilizes camera and LiDAR features and adaptively refines uncertainties along the channel and spatial axes. Even though there are some typos, the paper is well written. However, I have some major and minor recommendations, which I address below.
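For readers unfamiliar with refinement along the channel and spatial axes as described in the summary above, a generic CBAM-style attention sketch is given below. This is a hedged illustration of the general technique, not the paper's actual AFAM: the fusion by concatenation, the omission of the learned MLP and convolution inside the gates, and the function name `afam_like_refine` are all simplifying assumptions.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def channel_attention(feat):
    # feat: (C, H, W). Squeeze the spatial dims by average- and max-pooling,
    # then gate each channel with a sigmoid weight in (0, 1).
    avg = feat.mean(axis=(1, 2))           # (C,)
    mx = feat.max(axis=(1, 2))             # (C,)
    w = sigmoid(avg + mx)                  # shared MLP omitted for brevity
    return feat * w[:, None, None]

def spatial_attention(feat):
    # feat: (C, H, W). Squeeze the channel dim, gate each spatial location.
    avg = feat.mean(axis=0)                # (H, W)
    mx = feat.max(axis=0)                  # (H, W)
    w = sigmoid(avg + mx)                  # conv layer omitted for brevity
    return feat * w[None, :, :]

def afam_like_refine(lidar_feat, camera_feat):
    # Hypothetical fusion: concatenate along channels, then refine
    # sequentially along the channel and spatial axes (CBAM ordering).
    fused = np.concatenate([lidar_feat, camera_feat], axis=0)
    return spatial_attention(channel_attention(fused))

np.random.seed(0)
lidar = np.random.rand(8, 16, 16)
cam = np.random.rand(8, 16, 16)
out = afam_like_refine(lidar, cam)
print(out.shape)  # (16, 16, 16)
```

Because both gates lie in (0, 1), the refinement only suppresses features; it never amplifies them, which is the usual behaviour of this family of attention modules.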

 

Major Issues

The paper's contribution is explained, but it is hard to see how significant the results are. More comparisons are needed.

The mAP results in Table 2 need to be discussed in detail.

Minor Issues

- The mAP formula is not explained.

- The Table 2 title includes SOTA algorithms that are not explained in the text.

- The Table 3 and Table 4 parameters are not really explained in the text.

- Line 88: "con-vergence" needs to be "convergence" (check for similar hyphenation typos).

- Does "Rhard" in line 404 mean "hard result"?

- The Conclusion is very short; please extend it.
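For reference, the mAP definition the first minor point asks for is, in its standard form (assuming the paper follows the common per-class average-precision convention):

```latex
\mathrm{AP}_c = \int_0^1 p_c(r)\,dr, \qquad
\mathrm{mAP} = \frac{1}{|\mathcal{C}|} \sum_{c \in \mathcal{C}} \mathrm{AP}_c
```

where $p_c(r)$ is the precision of class $c$ at recall $r$ and $\mathcal{C}$ is the set of object classes; in practice the integral is approximated over a fixed set of recall points or IoU thresholds.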

Thank you

Author Response

We are thankful to the reviewer for the useful comments and for helping us improve our manuscript. We have tried our best to address the comments, and the response is attached below.

Author Response File: Author Response.docx

Reviewer 3 Report

The paper presents an adaptive module for object detection based on visual-lidar fusion in adverse weather conditions. 

 

Please illustrate the proposed method with example images and plots.

The spacing/commas/dashes between different citations are inconsistent. Please follow the journal's indications for document formatting.

 

line 52: describe the cited object detection systems

line 53: correct "Ex-traction"

line 77: clarify "Annotations"

line 119: correct "cam-era"

line 125: correct to "rich context"

line 317: the "Training with AFAM" section must go in the experiments and results, after the citation of the source in line 340. Consequently, remove the description in line 188.

line 347: width and height are missing their units.

 

Figure 4: move the comments out of the caption and into the "Experiments and results" chapter text.

Author Response

We are thankful to the reviewer for the useful comments and for helping us improve our manuscript. We have tried our best to address the comments, and the response is attached below.

Author Response File: Author Response.docx

Reviewer 4 Report

What is the appropriate network, mentioned in the introduction section, to process the data of each sensor and ultimately obtain more robust features to improve object detection performance?

One of the methods mentioned in the introduction for determining the increase in sensor noise is to use annotations. Please check whether this description is clear.

Sections 2.1 to 2.3 of the related work introduce three object detection methods from a comprehensive perspective; I suggest summarizing the advantages and disadvantages of each method.

The Proposed Method section proposes using a cross-attention mechanism to effectively fuse LiDAR features with their related camera features, but the cross-attention mechanism is not explained.
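The cross-attention the reviewer asks to have explained is, in its generic scaled dot-product form, sketched below. This is the standard formulation, not necessarily the paper's exact module; treating LiDAR tokens as queries and camera tokens as keys/values, and the function name `cross_attention`, are assumptions for illustration.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention(lidar_tokens, camera_tokens):
    # Generic scaled dot-product cross attention: LiDAR tokens act as
    # queries, camera tokens provide keys and values, so each LiDAR
    # location gathers the camera features most related to it.
    # lidar_tokens: (N, d), camera_tokens: (M, d)
    d = lidar_tokens.shape[1]
    q, k, v = lidar_tokens, camera_tokens, camera_tokens
    scores = q @ k.T / np.sqrt(d)          # (N, M) affinity matrix
    weights = softmax(scores, axis=-1)     # each row sums to 1
    return weights @ v                     # (N, d) fused features

np.random.seed(0)
lidar = np.random.rand(4, 8)
cam = np.random.rand(6, 8)
fused = cross_attention(lidar, cam)
print(fused.shape)  # (4, 8)
```

Since each output row is a convex combination of camera feature rows, the fused features stay within the range of the camera features while being re-weighted by their affinity to each LiDAR token.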

The composition and principles of the adaptive feature attention module (AFAM) are not clearly illustrated.

Besides the formats of the figures and tables, the article layout should be further optimized according to the journal's template.

Author Response

We are thankful to the reviewer for the useful comments and for helping us improve our manuscript. We have tried our best to address the comments, and the response is attached below.

Author Response File: Author Response.docx
