Article
Peer-Review Record

TerrAInav Sim: An Open-Source Simulation of UAV Aerial Imaging from Map-Based Data

Remote Sens. 2025, 17(8), 1454; https://doi.org/10.3390/rs17081454
by Seyedeh Parisa Dajkhosh, Peter M. Le, Orges Furxhi and Eddie L. Jacobs *
Reviewer 1: Anonymous
Reviewer 2:
Reviewer 3: Anonymous
Reviewer 4: Anonymous
Submission received: 6 February 2025 / Revised: 27 March 2025 / Accepted: 7 April 2025 / Published: 18 April 2025

Round 1

Reviewer 1 Report (Previous Reviewer 1)

Comments and Suggestions for Authors

The current version of the paper demonstrates some improvements. However, there remain several issues that need to be addressed. The specific details are as follows:

1) In Section 1.2 Related Work, Table 1 lists Visual Question Answering (VQA), but the "Tasks" column in the table does not actually include VQA-related content. This inconsistency should be corrected.

2) In Section 1.2.2 Simulators, the comparison of related research is based solely on textual descriptions, which feels insufficient. The authors should enrich this section with more visual comparisons, such as simulated image results or detailed comparative tables, to better highlight the distinctive features of this research.

3) In Section 2 Materials and Methods, it is recommended to add a simplified flowchart that more directly illustrates the overall workflow of the proposed methodology. The existing diagram appears overly complex.

4) In Section 3 Results, while the paper presents dataset outcomes through images, it lacks practical validation. Since the dataset has been completed, the authors should test whether models trained on this dataset can effectively perform drone-related tasks. For instance, they should verify if models trained with synthetic data can properly process real-world drone-collected images, thereby demonstrating whether the dataset can partially substitute authentic drone image acquisition.

Overall, the manuscript requires enhanced readability and clearer presentation of the research content. Certain sections currently resemble a course project report, necessitating improvements in academic rigor.

Comments on the Quality of English Language

The manuscript requires enhanced readability and clearer presentation of the research content.

Author Response

1. Summary

Thank you for your review.

 

Comments 1: [In Section 1.2 Related Work, Table 1 lists Visual Question Answering (VQA), but the "Tasks" column in the table does not actually include VQA-related content. This inconsistency should be corrected.]

Response 1: Thank you for pointing this out. It has been removed from the table.

 

Comments 2: [In Section 1.2.2 Simulators, the comparison of related research is based solely on textual descriptions, which feels insufficient. The authors should enrich this section with more visual comparisons, such as simulated image results or detailed comparative tables, to better highlight the distinctive features of this research.]

Response 2:  We appreciate the reviewer’s suggestion and recognize the need for greater clarity on this point. To address this, we have added a clearer explanation about the simulators in the introduction: [Among available flight simulators, FGFS is the only candidate that might be capable of providing comparable data, but not in its default state. It lacks built-in support for precise, automated imaging missions, requiring extensive customization and the development of additional tools to extract usable data. Other simulators, however, are entirely unsuitable, as they are designed purely for piloting experiences and lack any framework for aerial image data generation. Their architectures do not accommodate the level of control, automation, and fidelity necessary for machine learning dataset development, making them completely impractical for this purpose, lines 113-120, page 4]. Visual comparison is also provided [updated Figure 10, page 11]. We hope this addition clarifies why these simulators are not directly comparable to TerrAInav Sim in terms of functionality or output.

 

Comments 3: [In Section 2 Materials and Methods, it is recommended to add a simplified flowchart that more directly illustrates the overall workflow of the proposed methodology. The existing diagram appears overly complex.]

Response 3: Thank you for the thoughtful suggestion regarding the flowchart in Section 2. We designed the diagram to provide a detailed representation of the workflow, ensuring clarity and reproducibility for readers familiar with remote sensing and UAV-based image processing. Given the technical nature of the methodology and the expected expertise of the intended audience, we believe the current level of detail is appropriate. Additionally, other reviewers have highlighted the existing flowcharts as a strength of the paper, reinforcing their effectiveness in conveying the methodology. However, we appreciate the reviewer’s perspective and have already added clarifying descriptions in the accompanying text to guide interpretation ["Methodology Section"].

 

Comments 4: [In Section 3 Results, while the paper presents dataset outcomes through images, it lacks practical validation. Since the dataset has been completed, the authors should test whether models trained on this dataset can effectively perform drone-related tasks. For instance, they should verify if models trained with synthetic data can properly process real-world drone-collected images, thereby demonstrating whether the dataset can partially substitute authentic drone image acquisition.]

Response 4: The AI models trained on this dataset are cited in the paper and are available for review online [citation 29, page 19]. Additionally, we provide quantitative measurements and visual validation comparing real drone flight data with simulator results [Figure 10, page 11, and Section 3.1, page 11]. Further AI models based on this dataset are currently in progress and will be presented in a separate publication.

We acknowledge the importance of practical validation; however, mathematical frameworks and theoretical analysis remain fundamental in demonstrating a model’s reliability. The provided validations offer substantial evidence of the dataset’s applicability, and we believe they effectively address this concern.

 

Comments 5: [Overall, the manuscript requires enhanced readability and clearer presentation of research content. Certain sections currently resemble course project report, necessitating improvements in academic rigor.]

Response 5: We have made careful revisions to ensure a clear and cohesive presentation of the research content.

Reviewer 2 Report (Previous Reviewer 4)

Comments and Suggestions for Authors

Thank you very much for the response to my review and, first of all, for incorporating the feedback in the second version, which is definitely much better.

The corrected version is clear, well organized and thanks to good English, it is nice to read.

At present, the authors’ motivation is evident to me, however, the advantages of this simulator, mentioned in ‘Discussion’ and ‘Conclusion’, would be more effective in the ‘Introduction’, in the contribution paragraph.

Detailed remarks:

  1. l. 138 A spelling mistake: ‘collectin‘;

  2. The colours in the two flowcharts from the first version should stay in the final version. They distinguished particular parts.

Author Response

I sincerely appreciate your time and thoughtful observations. The issues raised have been carefully addressed.

Reviewer 3 Report (New Reviewer)

Comments and Suggestions for Authors

1. In Section 3.1, add comparisons of results with other flight simulation tools.

2. In Section 3.4, give the performance and time cost of other simulation tools in the same situations to show the improvements.

Author Response

We appreciate the reviewer’s suggestion and recognize the need for greater clarity on this point. To address this, we have added a clearer explanation about the simulators in the introduction: [Among available flight simulators, FGFS is the only candidate that might be capable of providing comparable data, but not in its default state. It lacks built-in support for precise, automated imaging missions, requiring extensive customization and the development of additional tools to extract usable data. Other simulators, however, are entirely unsuitable, as they are designed purely for piloting experiences and lack any framework for aerial image data generation. Their architectures do not accommodate the level of control, automation, and fidelity necessary for machine learning dataset development, making them completely impractical for this purpose, lines 113-120, page 4]. Thanks to your suggestion, visual comparison is also provided [updated Figure 10, page 11] along with benchmark comparison [Table 2, page 14].  We hope this addition clarifies why these simulators are not directly comparable to TerrAInav Sim in terms of functionality or output.

Reviewer 4 Report (New Reviewer)

Comments and Suggestions for Authors

The paper presents a promising open-source tool, TerraInav Sim, for simulating UAV-based aerial imaging using map data. The tool has significant potential for applications in vision-based navigation, environmental monitoring, and urban planning. However, while the paper is well-structured and provides detailed technical descriptions, there are areas where improvements can be made to enhance clarity, depth, and practical relevance. Below are specific points for improvement:

 

  1. The paper lacks quantitative metrics to compare the simulated images with real-world UAV-captured images.

  2. This paper does not thoroughly discuss how these limitations might impact specific applications or how they could be mitigated.

  3. Include a brief "Getting Started" section in the paper or the GitHub repository, providing step-by-step instructions for setting up and running the tool.

  4. Test and discuss the tool's performance in diverse environments to demonstrate its versatility.

  5. Discuss potential solutions for integrating temporal data, such as leveraging APIs from services like Google Earth Engine or Sentinel Hub, which provide time-stamped satellite imagery.

  6. Conduct a performance benchmark comparing TerraInav Sim with other simulators (e.g., Microsoft Flight Simulator, FlightGear) in terms of image generation speed, resource usage, and scalability.

Author Response

1. Summary

We appreciate your feedback on this work.

 

Comments 1: [The paper lacks quantitative metrics to compare the simulated images with real-world UAV-captured images.]

Response 1: Quantitative comparisons between real and simulated images have been provided. [Figure 10, page 11, and Section 3.1, page 11]

 

Comments 2: [This paper does not thoroughly discuss how these limitations might impact specific applications or how they could be mitigated.]

Response 2: Discussion section has been revised.

 

Comment 3: [Include a brief "Getting Started" section in the paper or the GitHub repository, providing step-by-step instructions for setting up and running the tool.]

Response 3: Very helpful comment. This section is already available in the GitHub repository.

 

Comment 4: [Test and discuss the tool's performance in diverse environments to demonstrate its versatility.]

Response 4: Thank you. The following text has been added. [The tool's performance has been evaluated across a diverse range of environments, including low-end laptops and high-performance GPU servers, running on Linux and Windows operating systems. The reported results reflect an averaged performance across these varied settings to demonstrate its versatility. Page 14, lines 358-360]

 

Comment 5: [It might be useful to include a section on how SkyAI Sim compares to other simulation tools in terms of computational efficiency.]

Response 5: We appreciate the reviewer’s suggestion and recognize the need for greater clarity on this point. To address this, we have added a clearer explanation about the simulators in the introduction: [Among available flight simulators, FGFS is the only candidate that might be capable of providing comparable data, but not in its default state. It lacks built-in support for precise, automated imaging missions, requiring extensive customization and the development of additional tools to extract usable data. Other simulators, however, are entirely unsuitable, as they are designed purely for piloting experiences and lack any framework for aerial image data generation. Their architectures do not accommodate the level of control, automation, and fidelity necessary for machine learning dataset development, making them completely impractical for this purpose, lines 113-120, page 4]. Thanks to your suggestion, visual comparison is also provided [updated Figure 10, page 11] along with benchmark comparison [Table 2, page 14].  We hope this addition clarifies why these simulators are not directly comparable to TerrAInav Sim in terms of functionality or output.

 

Comment 6: [Discuss potential solutions for integrating temporal data, such as leveraging APIs from services like Google Earth Engine or Sentinel Hub, which provide time-stamped satellite imagery.]

Response 6: Discussion section has been revised.

 

Comment 7: [Conduct a performance benchmark comparing TerraInav Sim with other simulators (e.g., Microsoft Flight Simulator, FlightGear) in terms of image generation speed, resource usage, and scalability.]

Response 7: We appreciate the reviewer’s suggestion and recognize the need for greater clarity on this point. To address this, we have added a clearer explanation about the simulators in the introduction: [Among available flight simulators, FGFS is the only candidate that might be capable of providing comparable data, but not in its default state. It lacks built-in support for precise, automated imaging missions, requiring extensive customization and the development of additional tools to extract usable data. Other simulators, however, are entirely unsuitable, as they are designed purely for piloting experiences and lack any framework for aerial image data generation. Their architectures do not accommodate the level of control, automation, and fidelity necessary for machine learning dataset development, making them completely impractical for this purpose, lines 113-120, page 4]. Thanks to your suggestion, visual comparison is also provided [updated Figure 10, page 11] along with benchmark comparison [Table 2, page 14].  We hope this addition clarifies why these simulators are not directly comparable to TerrAInav Sim in terms of functionality or output.

Round 2

Reviewer 4 Report (New Reviewer)

Comments and Suggestions for Authors

Thanks for the revision.

Good luck!

This manuscript is a resubmission of an earlier submission. The following is a list of the peer review reports and author responses from that submission.


Round 1

Reviewer 1 Report

Comments and Suggestions for Authors

Upon reviewing the article, I believe that it does not fully meet the criteria of a research paper and is more aligned with a lab report format. The innovation and the research's objectives are not sufficiently highlighted. Specifically, the main innovation seems to involve the integration of satellite imagery and drone perspectives, but there is little discussion on how this approach differs from simply cropping images from Google Earth.

 

At the technical level, there are two main areas that may need further consideration. Firstly, the strength of drone imagery lies in its high resolution, which provides detailed observations that satellite images cannot. In this context, the use of satellite-based simulations may appear inconsistent with the advantages offered by drone imagery. Secondly, if the study's primary focus is on constructing the dataset, then the quality and generalizability of the dataset should also be addressed, which is not fully explored in the current study. For example, there are concerns about the image timestamps and the potential discrepancies when stitching images from different times, which may affect the reliability of time-sensitive analyses such as change detection.

 

Given these points, there are aspects of the dataset that might limit its use for training models, and more experimental evidence may be needed to demonstrate the dataset's capabilities and boundaries. In its current form, I believe the article requires significant revisions and may not yet be suitable for publication.

 

Author Response

Please see the attachment.

Author Response File: Author Response.pdf

Reviewer 2 Report

Comments and Suggestions for Authors

The manuscript titled "SkyAI Sim: An Open-Source Simulation of UAV Aerial Imaging from Satellite Data" presents a novel open-source tool that simulates UAV aerial imaging using satellite data. The tool, SkyAI Sim, addresses a significant gap in the current methodologies for obtaining aerial images for various applications such as vision-based navigation, environmental monitoring, and urban planning. The authors have provided a comprehensive description of the tool's capabilities, its application, and the methodology behind its development. The manuscript is well-structured, and the language is clear and concise. The research appears to be methodologically sound, and the authors have effectively demonstrated the utility of SkyAI Sim through the provided datasets and use cases.

Specific Recommendations:

1. The title and abstract are clear and accurately reflect the content of the manuscript. However, consider whether the term "satellite data" might be misleading since the tool uses Google Maps Static API, which is not exclusively satellite-based. It might be more appropriate to refer to it as "aerial imaging data."

2. The introduction provides a good background and rationale for the tool's development. It might be beneficial to include a brief discussion on the current limitations of existing tools to better highlight the novelty of SkyAI Sim.

3. This section could be expanded to include a more detailed comparison with existing tools and datasets. Providing a table that contrasts SkyAI Sim with other tools could help readers quickly understand its unique features.

4. The methodology is well-described. However, the paper could benefit from a more detailed discussion on the validation process. How was the accuracy of the simulated images compared to real-world data assessed?

5. The discussion is thoughtful and addresses the limitations and potential future developments. It might be useful to include a section on how SkyAI Sim compares to other simulation tools in terms of computational efficiency.

6. The conclusion effectively summarizes the key points of the paper. It could be strengthened by adding a statement on the potential impact of SkyAI Sim on future research and applications in the field.

7. The figures and tables are informative. However, ensure that all figures and tables are properly cited in the text and have clear captions.

8. Fot the section Introduciton, some example using UAV remote sensing application should be added. For example, in agricultural, ecological, geographic aspects. I would recommend some highly cited papers:

1) Comparison of different machine learning algorithms for predicting maize grain yield using UAV-based hyperspectral images

Comments on the Quality of English Language

minor

Author Response

Please see the attachment.

Author Response File: Author Response.pdf

Reviewer 3 Report

Comments and Suggestions for Authors

The introduction contains academic literature, but there are a number of flight simulators available that incorporate Google Maps:

https://www.reddit.com/r/Flightsimulator2020/comments/qpdhgq/ms_flight_simulator_2020_can_now_use_google_maps/?rdt=63242

https://www.geo-fs.com/

I strongly recommend that authors add industry approaches and compare features. By doing so, authors will be able to clarify their contribution and potentially amplify SkyAI Sim's impact in AR/VR applications.

The Table 1 caption does not match the table, which does not have a column named "# Tiles".

Flowcharts in Fig 3 and Fig 4 need slightly higher resolution considering the font size. Also, conditionals (diamond symbols) sometimes lack 'Yes' or 'No' or have more than two branches, which should be labeled appropriately.

Line 27: In artifitial intelligence (AI) and machine learning (ML), -> artificial

Line 46: designed for a specific applications such as segmentation, -> remove 'a'

Line 64: A bird’s-eye view is a 90-degree tilt, means -> which means

Lines 98-99: In satellite simulation, -> simulations

Line 184: drone’s AGL. By default -> By default,

Line 331: To cleanup the data -> To clean up

Please check the journal name (drone?) and replace the Wikipedia references with more academic sources.

 

Comments on the Quality of English Language

The journal name seems to be incorrect. (Version September 13, 2024 submitted to Drones) Is the use of notes (not references) allowed?

Author Response

Please see the attachment.

Author Response File: Author Response.pdf

Reviewer 4 Report

Comments and Suggestions for Authors

Overall

The presented system, SkyAI Sim, is a vision-based navigation (VBN) tool for drones based on the Google Maps Static API. The presentation of the problem is closer to a research report than a scientific article. The system was sponsored by the Army Research Laboratory and was accomplished under Cooperative Agreement Number W911NF-21-2-0192. The text is supposed to be confidential rather than open-access, as the title suggests. The program on GitHub is not available, nor is the Jacobs Sensor Lab web page, which makes it impossible for the reviewer to familiarize themselves with the program.

Generally, the scientific level is very poor and is not worth publishing in ‘Remote Sensing’.

 

Strong points:

A clear presentation of the parameters and flowcharts which is rare in such publications.

 

Weak points:

The introduction is incomplete because the motivation has not been presented.

In Subsection 2.2 we can find the phrase ‘useful features for deep learning’, but no feature examples have been mentioned, nor a deep learning method, if any has been implemented.

Also, there is no explanation of why the road map mode has been selected as an entropy illustration.

 

Detailed remarks:

  1. The editorial side is poor. There are word repetitions (e.g., l. 235, l. 403), and the caption under Fig. 7 is unfinished.

  2. Fig. A1 – the code is hardly legible. It should be changed to a negative.

  3. Levels 20, 21, and 22 of Google Maps need to be explained because in a web browser the most detailed level is 19. Does the API download those levels, or do they have to be paid for extra?

 

Comments on the Quality of English Language

English should be proofread.

 

Author Response

Please see the attachment.

Author Response File: Author Response.pdf
