Article
Peer-Review Record

LaeNet: A Novel Lightweight Multitask CNN for Automatically Extracting Lake Area and Shoreline from Remote Sensing Images

Remote Sens. 2021, 13(1), 56; https://doi.org/10.3390/rs13010056
by Wei Liu 1,2, Xingyu Chen 1, Jiangjun Ran 1,3,*, Lin Liu 4, Qiang Wang 5, Linyang Xin 1 and Gang Li 6,7
Reviewer 1: Anonymous
Reviewer 2: Anonymous
Reviewer 3:
Submission received: 16 November 2020 / Revised: 17 December 2020 / Accepted: 22 December 2020 / Published: 25 December 2020
(This article belongs to the Special Issue Remote Sensing of Coastal and Inland Waters)

Round 1

Reviewer 1 Report

The manuscript proposes a new lightweight CNN for lake-area segmentation and shoreline identification. This problem is fairly classic in the field of remote sensing image analysis, so the paper may be of interest to the readership of this journal.


On the other hand, it needs to be improved: the positioning with respect to the state of the art should be more precise, the evaluation of the results more objective, and the paper shortened, because in its current state its verbosity does not serve the scientific content.


Here are my recommendations:


Abstract: very long, half a page, and contains detailed methodological considerations that do not belong in an abstract.
Introduction: it mixes, without any differentiation, the introduction, the state of the art, and the paper's proposal. The state of the art is not structured and no clear organizational classification emerges.


Fig. 1: The figure and the role of the blocks are not described, and the links between the model section and the figure are not established. Also, parts of the network are repeated but not explained.
Study Area and Data: could be significantly reduced. The authors should discuss the data from a dataset point of view: exhaustiveness, specificity compared to other available datasets, particular features, etc.


LaeNet Model:
There is no detailed illustration of the overall network organization, yet the authors spend considerable space in Fig. 5 explaining padding, which is a basic technique and does not warrant this much attention.
How do the different layers contribute to the results, and which layer is crucial and distinct from the state of the art (SOA), allowing the network to achieve better results than larger, more general networks?
What does "SAME" mean?
mIOU -> mIoU
Table 1: "0 Layer"? It is not clear how results can be obtained without any layers.


Comparison with other models:
I find this comparison somewhat unfair. The models appear to be trained on different datasets, the capacity for generalization is not reported, and results on public or common datasets are not presented.


What do the authors want to illustrate with Fig. 7? The claimed Precision and Recall scores are almost 100%, but in Fig. 7 we can see a lot of irregularities. This raises the question of how the annotations used for the comparisons were produced. Again, evaluating LaeNet's performance on a public database would be helpful.


Attention modules: an interesting attempt, but I do not see how exactly it was integrated with LaeNet.


Table 4: move it closer to the data presentation section.

Author Response

Dear reviewer,

We would like to thank you for taking the time to improve our manuscript, and for your constructive criticisms and valuable suggestions. We have carefully studied your comments and revised the paper accordingly. The point-by-point responses to your comments and an updated manuscript with changes highlighted in yellow are attached to this letter. Please see the attachment.

Best regards,
On behalf of all authors,
Wei Liu

Author Response File: Author Response.pdf

Reviewer 2 Report

In this paper, the authors recast lake area and shoreline extraction as a multi-task learning (MTL) problem. They then develop LaeNet, a CNN-based architecture that exploits task relationships to extract features helpful for semantic segmentation. Experimental results demonstrate that the method reduces computational cost significantly while achieving accuracy comparable to state-of-the-art methods.


The paper is well written and easy to follow, and it is valuable in that it addresses a practical problem using a transfer learning approach. The experiments are convincing, and I have only minor comments to be addressed before publication:


1. To demonstrate statistical significance, you need to report the mean and standard deviation in Tables 1 and 2 to show that the results are robust. Please add standard deviations to your results.

2. I suggest the authors include the following recently published Remote Sensing papers, which benefit from CNNs, in their introduction section to highlight that deep learning has been quite effective in addressing practical learning challenges in remote sensing:
A. Feng, Ziyi, Guanhua Huang, and Daocai Chi. "Classification of the Complex Agricultural Planting Structure with a Semi-Supervised Extreme Learning Machine Framework." Remote Sensing 12, no. 22 (2020): 3708.
B. Wang, Jiaxin, Chris HQ Ding, Sibao Chen, Chenggang He, and Bin Luo. "Semi-Supervised Remote Sensing Image Semantic Segmentation via Consistency Regularization and Average Update of Pseudo-Label." Remote Sensing 12, no. 21 (2020): 3603.
C. Ren, Yuanyuan, Xianfeng Zhang, Yongjian Ma, Qiyuan Yang, Chuanjian Wang, Hailong Liu, and Quan Qi. "Full Convolutional Neural Network Based on Multi-Scale Feature Fusion for the Class Imbalance Remote Sensing Image Classification." Remote Sensing 12, no. 21 (2020): 3547.
D. Rostami, Mohammad, Soheil Kolouri, Eric Eaton, and Kyungnam Kim. "Deep transfer learning for few-shot SAR image classification." Remote Sensing 11, no. 11 (2019): 1374.
E. Li, L. "Deep Residual Autoencoder with Multiscaling for Semantic Segmentation of Land-Use Images." Remote Sensing 11, no. 18 (2019): 2142.

3. Can you release your code and data for future exploration by the scientific community?

Author Response

Dear reviewer,

We would like to thank you for taking the time to improve our manuscript, and for your constructive criticisms and valuable suggestions. We have carefully studied your comments and revised the paper accordingly. The point-by-point responses to your comments and an updated manuscript with changes highlighted in yellow are attached to this letter. Please see the attachment.

Best regards,
On behalf of all authors,
Wei Liu

Author Response File: Author Response.pdf

Reviewer 3 Report

Interesting.

Title:

How do you define "LaeNet" in full? The definition provided on Lines 85-87 does not match the abbreviation used.

Abstract:

14: "f1-score" to "F1-score"


16: "the cost time" to "running time", and please give the time in seconds or minutes. What do you mean by 0.047M?

18: What do you mean by "in-situ shoreline"? Please specify whether it is the "in-situ shoreline position" or the "in-situ shoreline coordinates".

Introduction:

26: Do you mean "lake shoreline variation" or "lake level fluctuations"? Please rephrase "the lake can be taken as an indicator".

80-82: "Additionally, some deep learning algorithms (such as DeepLabV3+) are not applicable to multispectral remote sensing image since the input is three image channels, band information loss occurs when reducing the multiple bands to 3 bands". The sentence is too long and confusing; please rephrase, especially since an image with three channels is itself a multispectral image.

88: It would be better if Figure 1 were cited before it appears. Please relocate Figure 1 so that it appears in the manuscript after it is first cited.

Study Area and Data:

91: Which areas do you mean by "to predict areas and non-areas"? Please rephrase, and check the whole manuscript.

125, 127: Please change "figure 2" to "Figure 2", and check the whole manuscript.

138: Please change "figure 3" to "Figure 3".

144: For the "Field-measured lakeshore from GPS", what was the distance between successive measurements used for the shoreline presented in Figure 4?

248: Please provide the version of ArcGIS software used.

Results:

265: Which software did you use to convert raster to vector?

266-270: How confident can you be when comparing the coordinates of edge pixels extracted from a 30 m resolution image with the shoreline measured by GPS?

418: References: Please use abbreviated journal names.

Author Response

Dear reviewer,

We would like to thank you for taking the time to improve our manuscript, and for your constructive criticisms and valuable suggestions. We have carefully studied your comments and revised the paper accordingly. The point-by-point responses to your comments and an updated manuscript with changes highlighted in yellow are attached to this letter. Please see the attachment.

Best regards,
On behalf of all authors,
Wei Liu

Author Response File: Author Response.pdf

Round 2

Reviewer 1 Report

The authors answered my questions.


Before final publication, I recommend completing the discussion of the results in "Table 3. Performance of different semantic segmentation models." I suggest adding information about the "pixel tolerance" used when evaluating the metrics: is the evaluation strictly pixel-wise or patch-wise? If the latter, what is the patch size?

Author Response

Dear reviewer,

Thank you again for taking the time to improve our manuscript, and for your valuable suggestions. We have carefully studied your comments and revised the paper accordingly. The point-by-point responses to your comments and an updated manuscript with changes highlighted in yellow are attached to this letter.

Best regards,
On behalf of all authors,
Wei Liu

Author Response File: Author Response.pdf
