Peer-Review Record

A Novel Shadow Removal Method Based upon Color Transfer and Color Tuning in UAV Imaging

Appl. Sci. 2021, 11(23), 11494; https://doi.org/10.3390/app112311494
by Gilberto Alvarado-Robles, Francisco J. Solís-Muñoz, Marco A. Garduño-Ramón, Roque A. Osornio-Ríos and Luis A. Morales-Hernández *
Reviewer 1: Anonymous
Reviewer 2: Anonymous
Submission received: 11 October 2021 / Revised: 18 November 2021 / Accepted: 23 November 2021 / Published: 4 December 2021
(This article belongs to the Section Computing and Artificial Intelligence)

Round 1

Reviewer 1 Report

This study proposes a shadow removal method based on color transfer for color correction in shadowed regions of urban aerial scenes. The method is innovative and of reference value. However, I have two concerns that the authors must resolve before the manuscript can be accepted for publication.
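For context, the color-transfer idea the manuscript builds on can be sketched as a per-channel statistics match between a shadowed region and a lit reference region (a generic Reinhard-style transfer; this is an illustrative sketch only, not the authors' exact algorithm, and the function name and NumPy-based form are assumptions):

```python
import numpy as np

def color_transfer(source: np.ndarray, target: np.ndarray) -> np.ndarray:
    """Match the per-channel mean and standard deviation of `source`
    (e.g. a shadowed region) to those of `target` (e.g. a lit region).

    Both inputs are float arrays of shape (H, W, 3); the same idea
    applies in Lab or any other decorrelated color space.
    """
    src_mean = source.mean(axis=(0, 1))
    src_std = source.std(axis=(0, 1)) + 1e-8  # guard against division by zero
    tgt_mean = target.mean(axis=(0, 1))
    tgt_std = target.std(axis=(0, 1))
    # Shift and scale each channel so its statistics agree with the target's.
    return (source - src_mean) / src_std * tgt_std + tgt_mean
```

A darkened copy of an image region is recovered almost exactly by this transfer, since uniform shadowing only rescales each channel's mean and standard deviation.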

 

  1. The shadow-mask image requires manual input and depends on manual correction, which greatly reduces the automation and practicality of the proposed method.

 

  2. As is well known, deep learning methods have achieved great success in image processing, including shadow detection and removal. However, the comparative experiments in this paper use two traditional methods and one deep learning approach, and the deep learning approach performs very poorly, which is hard to believe. The authors must therefore compare against some of the latest methods (including deep learning methods) to demonstrate that the proposed method is indeed effective.

 

References include but are not limited to:

[1] Ding, B., Long, C., Zhang, L., & Xiao, C. (2019). ARGAN: Attentive recurrent generative adversarial network for shadow detection and removal. In Proceedings of the IEEE/CVF International Conference on Computer Vision (pp. 10213-10222).

[2] Tang, J., Luo, Q., Guo, F., Wu, Z., Xiao, X., & Gao, Y. (2020). SDRNet: An end-to-end shadow detection and removal network. Signal Processing: Image Communication, 84, 115832.

[3] Fan, X., Wu, W., Zhang, L., Yan, Q., Fu, G., Chen, Z., ... & Xiao, C. (2020). Shading-aware shadow detection and removal from a single image. The Visual Computer, 36(10), 2175-2188.

[4] Amin, B., Riaz, M. M., & Ghafoor, A. (2020). Automatic shadow detection and removal using image matting. Signal Processing, 170, 107415.

 

In addition, the methods above are end-to-end, covering both shadow detection and removal, whereas this manuscript addresses only shadow removal and depends on manually supplied shadow masks. The authors must therefore explain the advantages of the proposed method over these methods.

 

Minor comments:

 

  1. The literature cited in the Related Work section is not comprehensive, especially regarding deep learning methods.
  2. Clear accuracy evaluation results should be given in the abstract.
  3. Equation (1): How was 25 determined?
  4. Equation (2): What does MC mean?
  5. Discussion: Drawbacks of the proposed method need to be discussed.

Author Response

Dear Reviewer 1,

 

We appreciate the valuable comments. The authors consider that the points kindly raised were relevant and have taken them into account; we hope the updated version of the manuscript improves upon the initial submission.

 

Best regards.

Author Response File: Author Response.pdf

Reviewer 2 Report

The manuscript presents a shadow removal method that uses color spaces other than the traditional RGB; UAV-based imagery was used to demonstrate the usefulness of the method. Several concerns regarding the content and structure of the manuscript are raised in this review, which must be addressed before it can be considered for acceptance.

The Introduction section is rather short; please merge the content of Section 2 into the Introduction. Moreover, a detailed explanation of what is presented in the study is provided, which must be reduced. At the end of the Introduction, provide a short paragraph describing what is addressed in each section.

In Section 3.1, convert the bold text into sub-sections: "Input data" would become "3.1.1. Input data", and the same applies to "Color transfer algorithm" and "Color tuning process".

The Results section does not present any results; since it only describes the tested scenarios, its content must be moved to the Materials and Methods section. A more detailed description of the comparative methods should be provided.

The Discussion section presents the obtained results, which must be placed in the Results section. Move all figures and tables to the Results section and focus only on addressing the differences among the tested methods. No comparison with previously published studies is carried out in this section; please include one.

It would be valuable to assess whether photogrammetric processing of the corrected images improves the quality of orthophoto mosaics and digital surface models compared to using the original images; consider adding this as future work.

The authors should note that the PDF did not allow text selection, which complicated the review process. Such file types should be avoided in future submissions.

Author Response

Dear Reviewer 2,

 

We appreciate the valuable comments. The authors consider that the points kindly raised were relevant and have taken them into account; we hope the updated version of the manuscript improves upon the initial submission.

 

Best regards.

Author Response File: Author Response.pdf

Round 2

Reviewer 1 Report

Thank you very much for accepting my suggestions. The quality of the manuscript has been greatly improved. I think this version of the manuscript can be accepted.

Reviewer 2 Report

The authors improved the manuscript by following the reviewers' comments and suggestions. It is now suitable for acceptance after some minor corrections in text editing and after correcting some missing elements (e.g., line 324: missing reference number for Cun et al.).

 
