Peer-Review Record

Is Your Training Data Really Ground Truth? A Quality Assessment of Manual Annotation for Individual Tree Crown Delineation

Remote Sens. 2024, 16(15), 2786; https://doi.org/10.3390/rs16152786
by Janik Steier *, Mona Goebel and Dorota Iwaszczuk
Reviewer 1:
Reviewer 2: Anonymous
Reviewer 3: Anonymous
Submission received: 7 June 2024 / Revised: 22 July 2024 / Accepted: 25 July 2024 / Published: 30 July 2024
(This article belongs to the Section AI Remote Sensing)

Round 1

Reviewer 1 Report

Comments and Suggestions for Authors

The paper evaluates the accuracy of manually generated annotations of individual tree crowns against single-tree reference data. The study results reveal that the manual annotations correctly detect only 37% of the tree crowns in the forest-like plantation area and 10% of the tree crowns in the natural forest.

- The manuscript is well-written and well-organized.

- The methodology is sufficient.

The authors are invited to address the following comments:

- The authors should explain the novelty of the study and contribution of the conducted study in the introduction section.

- The determination of true negatives (TN) is not applicable in this case because it is not possible to correctly capture the absence of tree reference data. Authors should explain why it is not possible to correctly capture the absence of tree reference data.

- The validation result is influenced by many factors. Please list and explain these factors.

- Authors should propose some perspectives for future research.

Author Response

Thank you very much for your review and your comments. We have revised the article and highlighted the changes in the manuscript in red and labelled them with the corresponding comment.


Comment 1: "The authors should explain the novelty of the study and contribution of the conducted study in the introduction section."

Response 1: We agree with this comment. We have revised an existing paragraph in the introduction to clarify the novelty and the contribution of this study; see Chapter 1, Page 4, Lines 132-145 and Chapter 2, Page 6, Lines 186-189.

Comment 2: "The determination of true negatives (TN) is not applicable in this case because it is not possible to correctly capture the absence of tree reference data. Authors should explain why it is not possible to correctly capture the absence of tree reference data."


Response 2: We rephrased the description to clarify this case in Chapter 2.3, Page 10, Lines 301-305.
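For readers unfamiliar with why TN drops out of such a validation: in object delineation there is no enumerable set of "absent trees" in the background that could be counted as correctly rejected, so only TP, FP and FN are defined, and accuracy-style metrics requiring TN cannot be formed. The following is a minimal illustrative sketch of the resulting detection-style metrics, not the authors' code; the example numbers are made up.

```python
def detection_metrics(tp: int, fp: int, fn: int) -> dict:
    """Detection-style validation metrics for object delineation.

    True negatives (TN) are undefined here: the background contains no
    enumerable set of 'absent trees' that could be counted as correctly
    rejected, so only TP, FP and FN enter the metrics.
    """
    precision = tp / (tp + fp) if tp + fp else 0.0  # correctness of the annotations
    recall = tp / (tp + fn) if tp + fn else 0.0     # completeness w.r.t. the reference
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return {"precision": precision, "recall": recall, "f1": f1}


# Illustrative numbers only (e.g., 37 of 100 reference crowns matched):
print(detection_metrics(tp=37, fp=12, fn=63))
```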

 

Comment 3: "The validation result is influenced by many factors. Please list and explain these factors."

Response 3: "We present and explore the five most important factors, providing a foundation for the subsequent analysis of the validation results in Chapter 4.1. We rephrased the introduction to the chapter, so that it becomes more transparent, that we dicuss and explain the influencing factors on the validation result. Chapter 4, Page 16.

 

Comment 4: "Authors should propose some perspectives for future research."


Response 4: We agree with this comment. We propose perspectives for future research and added an approach for enhancing the quality of our training data, i.e., the manual annotations, in Chapter 5, Page 19, Lines 503-517.

Reviewer 2 Report

Comments and Suggestions for Authors

1. The problem is clearly described but the aim of the paper is generic. It is suggested that the point-to-point contributions and novelty of the paper be emphasized.

2. It is suggested to add mathematical description in the methodology section to better understand the outcomes and their impacts.

3. Did you perform computational analysis for the annotation generation? There is a confusion in understanding the process. It is required to understand the outcomes in every aspect, i.e. qualitative, quantitative and computationally.

4. What is the specific reason behind the high standard deviation on the study site-2?

5. Did you plan any further improvements in the training data generation? It would be more understandable if you test the annotated samples on the specific task like classification/segmentation.

Comments on the Quality of English Language

Minor editing is required. It is suggested to go through a detailed reading and correction to improve the sentences.

Author Response

Thank you very much for your review and your comments. We have revised the article and highlighted the changes in the manuscript in red and labelled them with the corresponding comment.

Comment 1: "The problem is clearly described but the aim of the paper is generic. It is suggested that the point-to-point contributions and novelty of the paper be emphasized."

Response 1: We agree with this comment. We have revised an existing paragraph in the introduction to clarify the novelty and the contribution of this study; see Chapter 1, Page 4, Lines 132-145 and Chapter 2, Page 6, Lines 186-189.

Comment 2: "It is suggested to add mathematical description in the methodology section to better understand the outcomes and their impacts."


Response 2: The mathematical descriptions for calculating the validation metrics are given in Tables 3 and 4 in Chapter 2.3. We added no revision regarding this comment to the text.

 

Comment 3: "Did you perform computational analysis for the annotation generation? There is a confusion in understanding the process. It is required to understand the outcomes in every aspect, i.e. qualitative, quantitative and computationally."

Response 3: We are not sure what exactly is meant by "computational analysis for the annotation generation". For the validation of the manual annotations against the reference data, in terms of how accurately the annotations capture this data, a quantitative analysis method is introduced in this paper. The quantitative analysis method is implemented in the Python programming language. We hope that clarifies the matter. We added no revision regarding this comment to the text.
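The record does not show the authors' implementation. As a rough illustration of what such a quantitative validation could look like, here is a minimal Python sketch that matches annotated crown polygons to reference crowns by intersection-over-union (IoU) using shapely; the library choice, the greedy one-to-one matching and the 0.5 threshold are assumptions for illustration, not details taken from the paper.

```python
from shapely.geometry import Polygon

def match_annotations(annotations: list[Polygon],
                      references: list[Polygon],
                      iou_threshold: float = 0.5) -> tuple[int, int, int]:
    """Greedy IoU matching of annotated crowns to reference crowns.

    Returns (TP, FP, FN). The threshold and the greedy one-to-one
    matching strategy are illustrative choices, not the paper's method.
    """
    unmatched_refs = set(range(len(references)))
    tp = 0
    for ann in annotations:
        best_iou, best_ref = 0.0, None
        for i in unmatched_refs:
            ref = references[i]
            inter = ann.intersection(ref).area
            union = ann.union(ref).area
            iou = inter / union if union else 0.0
            if iou > best_iou:
                best_iou, best_ref = iou, i
        if best_ref is not None and best_iou >= iou_threshold:
            tp += 1                      # annotation captures a reference crown
            unmatched_refs.remove(best_ref)
    fp = len(annotations) - tp           # annotations with no reference match
    fn = len(unmatched_refs)             # reference crowns never annotated
    return tp, fp, fn


# Illustrative usage with two unit squares overlapping by half (IoU = 1/3):
a = Polygon([(0, 0), (1, 0), (1, 1), (0, 1)])
r = Polygon([(0.5, 0), (1.5, 0), (1.5, 1), (0.5, 1)])
print(match_annotations([a], [r], iou_threshold=0.3))  # (1, 0, 0)
```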

 

Comment 4: "What is the specific reason behind the high standard deviation on the study site-2?"

Response 4: We agree with this comment. We explained the reason for the higher standard deviation in Chapter 4.2, Page 18, Lines 448-451.

 

Comment 5: "Did you plan any further improvements in the training data generation? It would be more understandable if you test the annotated samples on the specific task like classification/segmentation."

Response 5: We agree with this comment. We propose perspectives for future research and added an approach for enhancing the quality of our training data, i.e., the manual annotations, in Chapter 5, Page 19, Lines 503-517.

Reviewer 3 Report

Comments and Suggestions for Authors

The paper describes the results of the process of manually annotating tree crowns for the purpose of preparing data for machine learning. In my opinion, the paper needs a significant major revision.

 

1. Foremost, the conclusions presented in the paper are not supported by the results. The paper describes the results of a few annotators and draws the conclusion that manual annotating is not correct. The paper should provide more details on the annotating process. These details should include information on the time the annotators spent on processing images, their criteria for annotating, and how closely they looked at the images. Authors should also verify whether the average time spent on annotating an individual tree affects the quality of the results.

 

2. Although it is admirable that the Authors describe the problem of the low quality of manual annotation, the Authors should provide some solution to this problem. There are circumstances in which it is not possible to automatically annotate images. Authors should make an attempt to draw some conclusions on how to improve the manual annotating process and perhaps propose a useful method for manual annotating.

 

3. The paper also lacks a discussion of the findings of other researchers regarding manual annotations. There are successful systems based on manual annotations. Authors should refer to their process of annotations and try to find differences between their methods and the methods used by other researchers.

 

4. Authors do not include any references when they mention some software used in their research, such as MATLAB's Lidar Toolbox, CloudCompare or SuperAnnotate. Authors should include some references describing these applications.

 

The paper also has advantages. The greatest one is that the paper is concerned with processing data from a real environment. The paper also addresses a valid research problem of preparing training data.

Author Response

Thank you very much for your review and your comments. We have revised the article and highlighted the changes in the manuscript in red and labelled them with the corresponding comment.

Comment 1: "Foremost, the conclusions presented in the paper are not supported by the results. The paper describes results of few annotators and it draws conclusion that manual annotating is not correct. The paper should provide more details on the annotating process. These details should include information on the time which annotators spend on processing images, their criteria for annotating and how closely they looked on images. Authors should also verify if average time spent on annotating individual tree affects the quality of results. "

Response 1: In the conclusion, we rephrased the paragraphs regarding our findings on the quality of the manual annotations (Chapter 5, Page 19, Lines 492-502). It should now be clearer that our results refer to the error-proneness of the manual labelling of tree crowns, very likely do not reflect an absolute 'truth', and need to be treated with caution; we do not generalise and conclude that all manual annotations are highly error-prone.

The criteria for the annotators are now described more specifically in Chapter 2.2.2, Page 10, Lines 287-293. The time the annotators spent on processing the images was not tracked, as no time limit was set for the annotation process.

 

Comment 2: "Although it is admirable that Authors describe the problem with low quality of manual annotation Authors should provide some solution to this problem. There are circumstances in which is it not possible to automatically annotate images. Authors should make an attempt to draw some conclusions on how to improve the manual annotating process and perhaps propose a useful method for manual annotating."

Response 2: We agree with this comment. We propose perspectives for future research and added an approach for enhancing the quality of our training data, i.e., the manual annotations, in Chapter 5, Page 19, Lines 503-517.

 

Comment 3: "The paper also lacks discussion on finding of other researchers regarding manual annotations. There are successful systems based on manual annotations. Authors should refer to their process of annotations and try to find differences between their methods and methods used by other researchers. "

Response 3: We agree with this comment. We added the chapter "Related work: Training Data error using visual interpretation" to describe the issues of manual annotation based on visual interpretation, drawing on the findings of other researchers (Chapter 1, Pages 4-5, Lines 146-183).

Comment 4: "Authors do not include any references when they mention some software used in their research such as MatLab's LiDAR toolbox, CloudCompare or SuperAnnotate. Authors should include some references describing these applications. "

Response 4: Thank you, we agree. We added the references to the text in Chapter 2.1.2, Page 8, Lines 238, 239, 246 and 248, and in Chapter 2.2, Page 10, Line 280, and we also list them in the references chapter.

 

Round 2

Reviewer 1 Report

Comments and Suggestions for Authors

The authors have satisfactorily addressed the comments.
