Article
Peer-Review Record

Adaptive Multi-Scale Fusion Blind Deblurred Generative Adversarial Network Method for Sharpening Image Data

by Baoyu Zhu 1,2,3,†, Qunbo Lv 1,2,3,† and Zheng Tan 1,3,*
Reviewer 1: Anonymous
Reviewer 2:
Submission received: 28 December 2022 / Revised: 24 January 2023 / Accepted: 26 January 2023 / Published: 30 January 2023
(This article belongs to the Topic Artificial Intelligence in Sensors)

Round 1

Reviewer 1 Report

Dear authors, thank you for your work and your interest in publishing in the MDPI journal Drones. I found reading your article interesting, particularly the way you combine diverse tools/techniques to make the deblurring process more robust. In my understanding, this work is a consequence of your reference no. 55.

Because you are aiming at publishing in Drones, I strongly suggest you prepare a small dataset using such platforms. The datasets you prepared and analyzed, which are great, are more focused on remote-sensing scales. Essentially, the spatial scales you present correspond to military applications of drones (altitudes on the order of kilometers). Civilian drones, which I would say make up the majority of your audience, fly at around 300 m, yielding a higher spatial resolution (GSD).

I have been working with UAVs for more than 10 years, and you cannot imagine how many blurring effects appear in the images. For example, in vegetation monitoring, the UAV platform moves, but so do the leaves and plants because of the wind; with water, it is even worse. If you use these blurry images for photogrammetry, it is a very difficult task to filter, pre-process, and finally obtain a good result. I therefore believe your proposed method is really relevant.

If you use UGVs, you will face similar problems, possibly worse, because of the changing light conditions (rotations and the angle of the vehicle relative to the sun).

Some general comments/suggestions

-          Please check your introduction, the very first section (line 39). You cited [3] and [6] (line 1) as applications affected by image quality, but these two references actually use dedicated instrumentation (not imagers).

-          Please include units in Figure 9; also, in Figure 9b, the sharp/blur labels are not visible. Add units in some tables (the ones that present PSNR, in dB, and %). In Figure 11, the first block is not aligned.

-          I guess Section 3 should be named "Network Structure of AMD-GAN"? Or are you proposing exactly the same title for this section as in your previous work [Ref 55]? Have you considered naming it "Methods"? A subsection could then be "Materials" (the current Section 3.4).

-          I suggest creating a UAV dataset. You can also look at data journals, repositories, and photogrammetry companies: Zenodo, Pangaea, and Pix4D, respectively, offer UAV datasets. For terrestrial applications, the YOLO community offers many datasets. You can then apply your kernels to blur them.

-          I found your Conclusions very similar to the Abstract. I recommend you emphasize your novelties, results, possible impact, and future work there.

 

Author Response

Please see the attachment.

Author Response File: Author Response.docx

Reviewer 2 Report

The authors have done a lot of work, and the article is of high scientific value. However, it seems to me to be too comprehensive and presented in a very complicated way.

Some remarks:

- I suppose the title should be: "Adaptive multi-scale fusion blind deblurred generative adversarial network method for sharpening image data".

- Fig. 9: what are the SMD2 result numbers on the left?

- Fig. 9, right curves: what is on the X axis? The curves do not correspond to the bars on the left, or do they?

- Row 478: how was the accuracy computed?

Author Response

Please see the attachment.

Author Response File: Author Response.docx
