Article
Peer-Review Record

Depth-Guided Dehazing Network for Long-Range Aerial Scenes

Remote Sens. 2024, 16(12), 2081; https://doi.org/10.3390/rs16122081
by Yihu Wang, Jilin Zhao, Liangliang Yao and Changhong Fu *
Reviewer 1: Anonymous
Reviewer 2: Anonymous
Reviewer 3:
Submission received: 19 May 2024 / Revised: 4 June 2024 / Accepted: 4 June 2024 / Published: 8 June 2024
(This article belongs to the Topic Computer Vision and Image Processing, 2nd Edition)

Round 1

Reviewer 1 Report (Previous Reviewer 2)

Comments and Suggestions for Authors

Dear authors, 

I am satisfied with the improvements to the manuscript, and I would like to congratulate you on them.

Best regards, 

Author Response

Dear reviewer,

Thank you very much for your previous comments and your recognition of our manuscript. Your suggestions have been of great help in improving it. We wish you all the best.

Best regards, 

Reviewer 2 Report (Previous Reviewer 3)

Comments and Suggestions for Authors

The authors responded well to my concerns. The manuscript proposes a dehazing network specifically designed for long-range scenes. By combining depth prediction and attention mechanisms, it effectively addresses the issue of haze density varying with depth in long-range images. Through the construction of the UAV-HAZE dataset, it provides rich resources for training and evaluating dehazing methods in long-range scenes. The article also compares various image dehazing methods, demonstrating the superior dehazing effect of the proposed depth-guided dehazing network (DGDN) in long-range scenes. I have the following suggestions and concerns, which, if carefully addressed, could enhance the readability of the manuscript.

1: The UAV-HAZE dataset mentioned in the article, specifically designed for training and evaluating dehazing methods in such scenarios, is a valuable contribution. It is recommended to classify the test dataset based on different haze concentrations and scenes, use the classified test set to evaluate the model's performance, and analyze whether the evaluation results are consistent under different conditions.

2: Improve the reproducibility of the method by providing complete code, pre-trained weights, hyperparameters, and the dataset. The links provided in the article for code and data are currently invalid.

3: The article only uses PSNR and SSIM as evaluation metrics, which, while common, may not be sufficient for a comprehensive assessment of dehazing effects. It is recommended to include additional quantitative evaluation metrics, such as VIF and NIQE, to provide a more comprehensive performance analysis.

4: The computational complexity of the model is also an important consideration, especially for model selection in practical application scenarios. It is recommended to include a discussion of the model's computational complexity, including metrics such as the number of model parameters and runtime.

5: In Section 2.2, it would be beneficial to further discuss the limitations of different methods when dealing with long-range aerial scenes, either through quantification or by providing examples. This would better highlight the necessity of the proposed method. Simultaneously, briefly explain why DGDN is able to achieve more uniform and accurate dehazing effects in long-range scenes, rather than merely mentioning the mechanisms or method names.
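As context for the metric discussion in point 3, full-reference scores such as PSNR are straightforward to compute alongside additional metrics. The following minimal NumPy sketch (illustrative only, not the authors' evaluation code) computes PSNR for 8-bit images:

```python
import numpy as np

def psnr(reference: np.ndarray, restored: np.ndarray, peak: float = 255.0) -> float:
    """Peak signal-to-noise ratio in dB between two images of equal shape."""
    mse = np.mean((reference.astype(np.float64) - restored.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(peak ** 2 / mse)

# Toy example: a constant offset of 5 on an 8-bit image gives MSE = 25,
# so PSNR = 10 * log10(255^2 / 25) ≈ 34.15 dB.
ref = np.full((16, 16), 100, dtype=np.uint8)
out = np.full((16, 16), 105, dtype=np.uint8)
print(round(psnr(ref, out), 2))  # → 34.15
```

SSIM, VIF, and NIQE involve more elaborate local statistics and are typically taken from an image-quality library rather than reimplemented.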

Author Response

Please see the attachment.

Author Response File: Author Response.docx

Reviewer 3 Report (Previous Reviewer 4)

Comments and Suggestions for Authors

Although the authors state that Figure 11 shows the dehazing results on real images, Figure 11 cannot be found. Additionally, Figure 10 cannot be found.

Except for the above points, the reviewer thinks the manuscript has been well supplemented.

Author Response

Dear reviewer,

Thank you very much for your previous comments and your recognition of our manuscript. Your suggestions have been of great help in improving it.
You can find Figure 11 (original Figure 10) on Page 15 and Figure 12 (original Figure 11) on Page 16 of the revised version.
We wish you all the best.

Best regards, 

This manuscript is a resubmission of an earlier submission. The following is a list of the peer review reports and author responses from that submission.


Round 1

Reviewer 1 Report

Comments and Suggestions for Authors

The methodology used is described in detail in the paper, and the course of the study and the results obtained are clearly presented. However, the conclusions should describe the research results in more detail.

Author Response

Please see the attachment.

Author Response File: Author Response.docx

Reviewer 2 Report

Comments and Suggestions for Authors

Dear authors, 

After the review of your manuscript, I would like to share some findings with you.

I understand your method focuses on dehazing long-range areas, but what impact do you think it has on close-range areas? In Figure 11, a darker area appears on the water that does not exist in the ground-truth image.

Where did you get the SOTA methods' code? Are they open source, or did you implement them yourselves? I would like to know whether you ran them under conditions similar to those of your method.

I think that 200 real haze images are not enough for the experiment, especially compared with more than 30 thousand synthetic haze images. The synthetic images are useful for setting the method's parameters, but I believe the real images should be the main focus, not the other way around as you did. Reflecting this imbalance, you have a section with results on synthetic images but no equivalent section for real images, which is strange.

Did you use the same depth estimation method (Marigold) for all methods, or just for your own? This is important because its impact could favor your method over the others.

This is all; I hope you find these observations valuable.

Best regards, 

Author Response

Please see the attachment.

Reviewer 3 Report

Comments and Suggestions for Authors

1.    The duplication rate of this manuscript is too high. I carefully read the iThenticate report: the overall duplication rate is 26%, and a single source has a duplication rate exceeding 5%. Additionally, there is partial content duplication throughout the entire paper, which casts doubt on the originality of the work.
2.    I am confused about the authors' dataset, as UAV-HAZE is not mentioned anywhere in the paper. Although the address is listed on line 535, the link returns a 404 error.
3.    I am also confused about the data in Figure 6; it does not appear to be a depth image, and its grayscale values seem to violate basic visual intuition.
4.    The novelty of the manuscript is also limited. Image dehazing is a long-studied research problem. Although the authors raise the issue of haze removal for aerial perspectives and distant scenes, this phenomenon is not unique to drones; aerial photography from planes has the same problem. Therefore, I do not consider it a particularly worthwhile research problem.

Author Response

Please see the attachment.

Author Response File: Author Response.docx

Reviewer 4 Report

Comments and Suggestions for Authors

An attention methodology was applied to extract depth information and haze density, and a dehazing technique was developed for long-range images taken from UAVs such as drones. This methodology uses images with various perspectives, long ranges, and haze captured from UAVs.

The authors implemented a depth prediction sub-network based on multiple residual dense modules, and a dehazing process was constructed by combining depth information and haze features.

About 200 haze images taken from an airborne UAV and 35,000 generated haze images were used.

The haze is corrected using information generated through a network created by stacking DDRBs (Depth-wise Dilated Residual Blocks) and an internal depth prediction model.

 

<originality>

 

The authors proposed a methodology that enables dehazing of long-range images by combining a depth prediction network and a dehazing network.

In particular, haze data were generated through an atmospheric scattering model, and UAV images with various perspectives and heights were used.
 

<issue>

 

In comparison with other methodologies, only test results on generated haze images are provided; there is no comparison of results derived from actual haze images.

Additionally, the depth prediction network was used as a sub-network, but the analysis of the correlation between the haze density in the image and the depth map seems to be lacking.

Haze images were simulated and used as training and validation data. However, only human evaluation was used to assess the simulated dataset, and the reviewer believes a more detailed analysis is needed to determine what characteristics the simulated images actually have.

The dataset can be an important variable in evaluating the proposed methodology.

 

Author Response

Please see the attachment.

Author Response File: Author Response.docx
