Deep Learning on Synthetic Data Enables the Automatic Identification of Deficient Forested Windbreaks in the Paraguayan Chaco
Round 1
Reviewer 1 Report
The manuscript entitled "Deep Learning on Synthetic Data Enables the Automatic Identification of Deficient Forested Windbreaks in the Paraguayan Chaco" presents a study that proposes a methodology to automatically classify gaps in windbreaks and central forest patches using a convolutional neural network (CNN) trained entirely on synthetic imagery. The paper takes on a worthy topic, the structure is strong, and it is technically correct. I believe the manuscript would make a nice contribution to RS.
In general:
The introduction provides valuable information and justifies the need for the research.
The Methods sections are a little longer than usual, but I still appreciate the level of detail provided. The only revision needed here is to improve the quality (resolution) of the figures. Figure 3, showing the entire workflow, is beneficial for readers.
Results and Discussion sections meet the criteria required for scientific journals and provide the main findings of the study in a comprehensive way.
Minor comments in the conclusion section:
Lines 602-609 need to be revised. They appear to be the authors' assumptions rather than conclusions drawn from the study. These lines could easily be moved to the Discussion section.
Author Response
Please see the attachment.
Author Response File: Author Response.pdf
Reviewer 2 Report
This paper examines U-Net modeling for the segmentation of fields and forests using S1 and S2 images. The work is very interesting, but it requires a more detailed description of the U-Net model used and a comparison with another model such as DeepLabV3+.
Line 337. How did you divide the data into the 8,000 training and 2,000 validation samples?
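For instance, it should be clarified whether the split was random, stratified, or spatially blocked. A seeded random permutation is one common choice; the sketch below is purely illustrative and is not the authors' procedure:

```python
import numpy as np

# Hypothetical split of 10,000 synthetic samples into 8,000 training and
# 2,000 validation indices; a fixed seed makes the split reproducible.
rng = np.random.default_rng(42)
indices = rng.permutation(10_000)
train_idx, val_idx = indices[:8_000], indices[8_000:]
```

Whatever the actual mechanism, reporting it (including any seed) would let readers reproduce the experiment.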
Line 338. Usually, a chart showing how accuracy and loss change with epochs is presented. More than three epochs should be tried (e.g., five or ten) to confirm that the accuracy has converged.
Line 343. Please provide detailed information about the U-Net model, including the number of filters for each convolution step along the encoding path. Primary hyperparameters should also be described.
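As a point of reference, the canonical U-Net of Ronneberger et al. (2015) doubles the number of filters at each encoder level; the values below are that reference configuration, not the authors' model, and are given only to indicate the kind of detail requested:

```python
# Reference U-Net encoder configuration (Ronneberger et al., 2015);
# the model in the manuscript may well use different filter counts.
encoder_filters = [64, 128, 256, 512]    # two 3x3 convs per level, 2x2 max-pool between
bottleneck_filters = 1024                # deepest level before the expanding path
decoder_filters = encoder_filters[::-1]  # mirrored along the decoder
```

Stating the equivalent values for the trained model, together with the primary hyperparameters (learning rate, batch size, optimizer, loss), would make the architecture reproducible.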
Line 367. Mean IoU (intersection over union) is a standard measure in computer vision. Please provide the mean IoU for the experiment.
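For clarity, the metric I have in mind averages the per-class IoU over the segmentation classes; a minimal sketch (illustrative only, not the authors' code) is:

```python
import numpy as np

def mean_iou(y_true, y_pred, num_classes):
    """Mean intersection-over-union across classes for integer label masks."""
    ious = []
    for c in range(num_classes):
        inter = np.logical_and(y_true == c, y_pred == c).sum()
        union = np.logical_or(y_true == c, y_pred == c).sum()
        if union > 0:  # skip classes absent from both masks
            ious.append(inter / union)
    return float(np.mean(ious))
```

Reporting this alongside the per-class accuracies in Table 1 would make the results comparable with other segmentation studies.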
Table 1. Can you compare the results of U-Net and DeepLabV3+?
Author Response
Please see the attachment.
Author Response File: Author Response.pdf
Round 2
Reviewer 2 Report
Well revised. Thank you very much for your efforts.