Article
Peer-Review Record

Robust Radar Inertial Odometry in Dynamic 3D Environments

by Yang Lyu *, Lin Hua, Jiaming Wu, Xinkai Liang and Chunhui Zhao
Reviewer 1: Anonymous
Reviewer 2:
Submission received: 16 March 2024 / Revised: 3 May 2024 / Accepted: 7 May 2024 / Published: 13 May 2024
(This article belongs to the Special Issue UAV Positioning: From Ground to Sky)

Round 1

Reviewer 1 Report

Comments and Suggestions for Authors

The topic of the paper is timely, as radars have prospective applications in controlling the motion of various types of vehicles. The authors propose a Radar-Inertial-Odometry (RIO) pipeline using long-range millimetre-wave radar for autonomous vehicle navigation, and they test the derived algorithms in challenging on-road and in-the-air environments. The results indicate that the proposed RIO can provide localization for mobile platforms, such as automotive vehicles and unmanned aerial vehicles, under various operating conditions. The algorithms for processing the radar measurements are sound and can be used in vehicle control, and the simulation results confirm the correctness of the proposed method. The paper contains some formal errors, e.g. in the descriptions of figures and formulas. I consider this work a contribution to improving transport safety, and I recommend publishing the paper after minor editing.

Comments on the Quality of English Language

No.

Author Response

We would like to thank the reviewer for the valuable suggestions. We have re-emphasized the contribution in the revised manuscript, and we have carefully checked the paper to remove formal errors and typos.

Reviewer 2 Report

Comments and Suggestions for Authors

The paper uses a mm-wave FMCW radar for localization purposes. An odometry algorithm is proposed that combines IMU data with poses from point cloud registration and point cloud velocities to estimate the local pose of a robot.

Fig. 1 is never referenced anywhere in the paper.

Line 126: "[...] v_imu denote the velocity of radar measured by the IMU." It is quite difficult to understand the difference between the ego-motion velocity v_radar of the radar itself and the velocity of the radar derived from the IMU. Aren't they nearly the same when a fixed object is detected by the radar system? Could the authors provide a sketch?
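
For context, in the generic Doppler ego-motion model the two quantities would indeed coincide up to a lever-arm term; a sketch of the commonly assumed relations (not necessarily the authors' exact formulation) is:

```latex
% Doppler constraint for a static scatterer i with unit direction d_i
% in the radar frame (generic model, assumed for illustration):
v_{r,i} = -\mathbf{d}_i^{\top}\,\mathbf{v}_{\mathrm{radar}}
% With a rigid IMU-radar mount (lever arm t, angular rate omega),
% the IMU-propagated velocity expressed at the radar differs from the
% radar ego-velocity only by the lever-arm term:
\mathbf{v}_{\mathrm{radar}} = \mathbf{v}_{\mathrm{imu}} + \boldsymbol{\omega}\times\mathbf{t}
```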

Line 137: Are these 'radar frames' the consecutive point clouds acquired from the sensor at the same time, as mentioned in line 154? If so, this should be made clearer.

Line 157: "The general flow of the algorithm is as follows [...]". What follows is another paragraph about 'Ego-Motion Estimation'. Is Algorithm 1 meant? Please clarify.

Equation 7: What is v_pc? It is not defined before this point.

Line 169: It is not sufficient to simply cite the basic RANSAC method for filtering the velocities from Eq. 8. How would the authors implement RANSAC here? Are there readily available RANSAC-based filtering methods that the authors could recommend?
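
For reference, RANSAC-based filtering of Doppler velocities is well established in the radar odometry literature; a minimal sketch of a typical implementation follows (the function name, threshold, and minimal-sample logic are illustrative, not the authors' implementation):

```python
import numpy as np

def ransac_ego_velocity(directions, radial_speeds, iters=100, thresh=0.1):
    """Estimate the radar ego-velocity v from Doppler returns via RANSAC.

    directions:    (N, 3) unit vectors toward each detection (radar frame)
    radial_speeds: (N,)   measured radial velocities; static points are
                          assumed to satisfy radial_speeds[i] ~= -directions[i] @ v
    """
    best_inliers = np.zeros(len(radial_speeds), dtype=bool)
    rng = np.random.default_rng(0)
    for _ in range(iters):
        idx = rng.choice(len(radial_speeds), size=3, replace=False)
        try:
            # Minimal sample: solve the 3x3 system  -D v = v_r  for v.
            v = np.linalg.solve(-directions[idx], radial_speeds[idx])
        except np.linalg.LinAlgError:
            continue  # degenerate (nearly coplanar) sample, try again
        residuals = np.abs(radial_speeds + directions @ v)
        inliers = residuals < thresh
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    # Refine on all inliers by least squares; detections on moving
    # objects end up as outliers and are excluded.
    v, *_ = np.linalg.lstsq(-directions[best_inliers],
                            radial_speeds[best_inliers], rcond=None)
    return v, best_inliers
```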

Line 172: Where can 'Scene 2' be found? No scene is clearly identified in Section 4. Fig. 9 shows several 'scenes', but which one is being referenced?

Fig. 5: What does the +1.670492e9 mean? Why does the time start at ~200 s?

Lines 172 to 175: Please quantify what is meant by 'reasonable range' and 'relatively long detection range'. Why is the velocity of the UAV called v_b? The z-component v_b_z is estimated to be ~1 m/s at t = 202 s; is this still reasonable? Was the UAV hovering when this data was acquired? Please provide more details.

Figs. 6 and 7: It is quite difficult to understand the sliding-window sensor-fusion procedure from the depicted data sampling. The sensor data appear to be sampled at different time steps, e.g. the IMU at every fourth sample and the mmWave point cloud at every tenth. In the factor graph, 13 IMU samples are used for what is labelled the 'IMU preintegration factor', whereas in Fig. 6 the IMU preintegration is indicated with only ten samples. Please clarify how the data are processed and sampled; the velocity v_j, for example, appears to be generated only after four uses of the IMU preintegration data (see the sketch below).
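
To make the sampling question concrete: in a typical sliding-window RIO design, every IMU sample arriving between two consecutive radar keyframes is folded into a single preintegration factor, so the number of IMU samples per factor is simply the rate ratio. A simplified sketch under that assumption (illustrative rates and a generic, bias-free preintegration, not the authors' code):

```python
import numpy as np

IMU_RATE_HZ = 200    # illustrative rates, not taken from the paper
RADAR_RATE_HZ = 20   # => ~10 IMU samples preintegrated per radar frame

def so3_exp(phi):
    """Rodrigues' formula: map a rotation vector to a rotation matrix."""
    theta = np.linalg.norm(phi)
    if theta < 1e-9:
        return np.eye(3)
    k = phi / theta
    K = np.array([[0, -k[2], k[1]], [k[2], 0, -k[0]], [-k[1], k[0], 0]])
    return np.eye(3) + np.sin(theta) * K + (1 - np.cos(theta)) * (K @ K)

def preintegrate(imu_samples, dt):
    """Accumulate all IMU samples between two radar keyframes into one
    relative-motion factor (simplified: no noise propagation, no biases)."""
    dR, dv, dp = np.eye(3), np.zeros(3), np.zeros(3)
    for gyro, accel in imu_samples:
        dp += dv * dt + 0.5 * (dR @ accel) * dt**2
        dv += (dR @ accel) * dt
        dR = dR @ so3_exp(gyro * dt)
    return dR, dv, dp  # one 'IMU preintegration factor' per keyframe pair
```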

Line 231: What is the difference between the 'DJI UAV' and the 'DJI attitude data'? Please specify where the positioning data comes from.

Fig. 9: The colours look quite strange in the pictures. Is there a reason for it?

Line 256: Please quantify the 'acceptable level of errors'. What is acceptable?

The conclusion section reads more like a summary. Some interesting topics should be covered in the paper, e.g. how many points are needed to provide proper localisation? ORB-SLAM uses visual keypoints and does not perform well if these points are missing. Please also compare the different approaches with respect to these aspects.

Fig. 10 is not easily readable and cannot easily be linked to the contents of the paper. Why are the contents of Fig. 10 not placed at the appropriate point in the results section?

Comments on the Quality of English Language

The language is fine. More careful punctuation would help the reader follow the content, e.g. in line 4: a comma after 'front-end' clarifies the sentence and enhances readability.

Line 160: 'Estamation' Please correct.

Line 169: Please check formatting issues, e.g. missing spaces.

Line 192: Please split this sentence into two for clarity.

Author Response

Thank you for your valuable suggestions. We have provided the point-by-point response in the revised manuscript as well as in the attached file.

Author Response File: Author Response.pdf

Round 2

Reviewer 2 Report

Comments and Suggestions for Authors

The revised version of the paper helps a lot in comprehending the content. The equations are much clearer now, as are the links between the different parameters. The additional figures, e.g. Fig. 4, are beneficial for the reader.

Only the images in Section 4, i.e. Fig. 11, still need to be reformatted in order to be readable. The subsequent tables (1 and 2) cover a whole page and should be combined with the results from Fig. 11. Please use larger images so that the content is readable, and check whether the first four images of Fig. 11 (the RPE errors) could be combined with Table 1; the other four images could then be split accordingly.

Author Response

We reorganized the figures and tables in this round of revision to make them more readable. We would like to thank you for your insightful suggestions, which greatly improved the presentation and clarity of the paper.


Yang Lyu
