Article
Peer-Review Record

Burrow-Nesting Seabird Survey Using UAV-Mounted Thermal Sensor and Count Automation

by Jacob Virtue 1,*, Darren Turner 1, Guy Williams 2, Stephanie Zeliadt 3, Henry Walshaw 4 and Arko Lucieer 1
Reviewer 2: Anonymous
Drones 2023, 7(11), 674; https://doi.org/10.3390/drones7110674
Submission received: 14 October 2023 / Revised: 6 November 2023 / Accepted: 10 November 2023 / Published: 13 November 2023
(This article belongs to the Special Issue Drone Advances in Wildlife Research)

Round 1

Reviewer 1 Report

Comments and Suggestions for Authors


Comments for author File: Comments.pdf

Author Response

Please see the attachment

Author Response File: Author Response.docx

Reviewer 2 Report

Comments and Suggestions for Authors

This paper is quite straightforward and presents some promising results for surveying burrow-nesting seabirds. I found the description of the methods a bit confusing, and I think the order of the methods needs to be rearranged so that the thermal drift issue (and the fact that automated counting has to proceed one image at a time rather than on the full mosaic) is introduced before the discussion of burrow clustering, which I found very confusing in 2.9.2. I also found 2.9.3 confusing: when you say “image analysis”, do you mean automated image analysis? This is also confusing because the automated image analysis was done on individual images and not on the mosaic, so it is not clear what “vignetting” is being discussed in this section. Overall, 2.9.2 and 2.9.3 need to be moved and heavily edited so the reader can follow exactly what is going on.

My second overarching concern is that manual image interpretation is treated as the baseline against which the automated counts are assessed, but since manual annotation was not compared against the fieldwork validation, we do not know whether automated counting is doing better or worse than manual counting. It is not clear that manual annotation is so reliable that it should be considered the “truth” against which automated counting should be compared. I suspect both have limitations and simply make different kinds of errors; I would not assume the manual counts are “correct”.

Other comments:

Maybe this is a difference between countries, but in the US the acronym UAS stands for Unmanned Aerial Systems (the US Federal Aviation Administration defines it this way as well).

Table 1 seems to be a low-res screenshot of a table rather than an actual table.

Extra period at the end of the Figure 4 caption.

I am confused by the clustering described in 2.9.2, because I would have thought that AgiSoft would have already taken care of the overlap, so it is not clear whether this step is addressing artifacts created by the AgiSoft stitching. Is this clustering approach only needed because thermal drift required automated counting on each individual image? This is not at all clear when the clustering is introduced, because the thermal drift issues have not been introduced yet.
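
For reference, my reading of this step is that it merges duplicate detections of the same burrow that appear in several overlapping images. A minimal sketch of that interpretation is below; DBSCAN, the 1 m search radius, and the use of projected coordinates are my assumptions, not the authors' stated method or parameters.

```python
# Hypothetical sketch of merging duplicate burrow detections pooled from
# overlapping thermal images using density-based clustering. DBSCAN, the
# 1 m search radius, and min_samples=1 are my assumptions, not the
# authors' reported settings.
import numpy as np
from sklearn.cluster import DBSCAN

def merge_detections(coords_m, eps_m=1.0):
    """Collapse detections within eps_m metres of each other into single burrows.

    coords_m: (n, 2) array of projected easting/northing coordinates in metres,
    pooled across all individual images.
    """
    labels = DBSCAN(eps=eps_m, min_samples=1).fit_predict(coords_m)
    # Return one representative location (the cluster centroid) per burrow.
    return np.array([coords_m[labels == k].mean(axis=0) for k in np.unique(labels)])
```

If something like this is indeed the motivation, stating it explicitly (together with the distance threshold used) in 2.9.2 would resolve the confusion.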

Why fly at 40 m and not lower? What is the height at which the birds are disturbed by the UAS?

Figure 6 is low quality in terms of its overall design, but it is also low resolution; can you make the axis and legend labels bigger?

Figure 7 also needs bigger axis labels.

Figure 9 does not have to be a figure; it could be a table, which would be more space-efficient.

Line 441: “We utilised the IOU method and a best-fit approach to evaluate the correlation between the automated and manual counts.” What do you mean by “best-fit approach”?
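
For clarity on my question: by IOU I understand intersection-over-union between the automated detections and the manual annotations, along the lines of the sketch below. The bounding-box format and the 0.5 matching threshold are my assumptions, not values taken from the manuscript; the “best-fit approach”, by contrast, should be defined explicitly.

```python
# Hypothetical sketch of matching automated detections to manual annotations
# by intersection-over-union (IoU). Box format (x_min, y_min, x_max, y_max)
# and the 0.5 threshold are my assumptions, not the authors' reported values.
def iou(box_a, box_b):
    """Intersection-over-union of two axis-aligned bounding boxes."""
    x1, y1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    x2, y2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

def matched_count(auto_boxes, manual_boxes, threshold=0.5):
    """Number of automated detections overlapping a manual annotation at IoU >= threshold."""
    return sum(any(iou(a, m) >= threshold for m in manual_boxes) for a in auto_boxes)
```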

Line 478: Here you note that the automated counting method detected 95% of the burrows validated as occupied in the field, but you do not actually report how many of these were found by the human annotators. If the metal target were removed from the image (using, for example, Photoshop’s image repair tool) and a manual annotator were given the image, what percentage of the validated burrows would the human annotator find?

Line 614:  “Additionally, a density-based clustering algorithm was used to group proximate burrow locations.“ The motivation behind this clustering is unclear.

Author Response

Please see the attachment

Author Response File: Author Response.docx
