Article
Peer-Review Record

Machine Learning Vegetation Filtering of Coastal Cliff and Bluff Point Clouds

Remote Sens. 2024, 16(12), 2169; https://doi.org/10.3390/rs16122169
by Phillipe Alan Wernette
Reviewer 1: Anonymous
Reviewer 2: Anonymous
Reviewer 3: Anonymous
Submission received: 17 April 2024 / Revised: 4 June 2024 / Accepted: 12 June 2024 / Published: 15 June 2024

Round 1

Reviewer 1 Report

Comments and Suggestions for Authors

This paper presents an interesting approach to classifying vegetation on cliffs and vertical surfaces by applying machine learning techniques to point clouds. The paper discusses advantages and limitations in comparison with state-of-the-art methods. The main point for improvement is that the abstract refers to benefits of the approach for specific growing seasons, implying multi-temporal datasets, yet this viewpoint is not developed in the paper; even the metadata of the point clouds used are not presented or introduced. For this reason, and due to other minor concerns, I recommend major revisions for this work.

The abstract rightly summarizes the content of the work, but I do not understand why you refer to "different growing seasons", because later in the manuscript I did not see any multi-temporal or time-lapse analysis.

L17 I would just replace the word "sediments" with "rocks".

L36 I would say "photogrammetry", because there are many previous examples of classical photogrammetry applied to coastal cliffs, so the term photogrammetry includes both SfM and classical approaches.

L55 I suggest "SfM-derived point cloud" instead of "SfM". In fact, it would probably be good to clarify in the introduction that you use the term SfM but really mean SfM-MVS or automatic photogrammetry, because SfM is just one part of the technique. Of course, I know that many researchers today refer to the technique simply as SfM, but the proper description is SfM-MVS.

L70 Clarify that LIDAR and TLS instruments record only intensity; RGB is added later by a post-processing procedure.

L107 Clarify which other point classifiers you mean.

MATERIAL AND METHODS

In this section I miss the explanation of the multi-temporal analysis that may be inferred from your abstract and introduction.

L114 “other hyperspectral bands”: other bands.

L151 You will need a reference here to support your reasoning

L165-166 this should be specified also at the end of the introduction where you present the objectives.

L174 Why 1 m? Justify your decision.

L212-224 Could we see a table of dates, numbers of points, etc. for the point clouds used?

L234-241 Why 70-30? Why not 80-20? I think an analysis varying the sizes of the training and test sets would be necessary to support your decision here.
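To illustrate the kind of split-ratio sensitivity analysis suggested here, the following is a minimal sketch using a toy nearest-centroid classifier on synthetic RGB samples. The data, classifier, and values are hypothetical stand-ins for illustration only, not the paper's MLP models or point clouds:

```python
import random

random.seed(0)

# Hypothetical labelled RGB samples: "veg" points lean green/dark,
# "cliff" points lean brighter grey-brown.
def make_point(label):
    if label == "veg":
        return (random.gauss(80, 20), random.gauss(140, 20), random.gauss(70, 20), label)
    return (random.gauss(150, 20), random.gauss(140, 20), random.gauss(120, 20), label)

points = [make_point(random.choice(["veg", "cliff"])) for _ in range(2000)]

def nearest_centroid_accuracy(points, train_frac):
    """Split at train_frac, fit per-class RGB centroids, score on the rest."""
    random.shuffle(points)
    n_train = int(len(points) * train_frac)
    train, test = points[:n_train], points[n_train:]
    centroids = {}
    for lab in ("veg", "cliff"):
        members = [p for p in train if p[3] == lab]
        centroids[lab] = tuple(sum(p[i] for p in members) / len(members) for i in range(3))
    correct = 0
    for p in test:
        pred = min(centroids, key=lambda lab: sum((p[i] - centroids[lab][i]) ** 2 for i in range(3)))
        correct += pred == p[3]
    return correct / len(test)

# Report how accuracy varies with the training fraction.
for frac in (0.6, 0.7, 0.8, 0.9):
    print(f"train fraction {frac:.1f}: accuracy {nearest_centroid_accuracy(points, frac):.3f}")
```

If accuracy is stable across fractions (as it typically is for large, well-separated samples), that stability itself is the evidence supporting a 70-30 choice.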

L242 Throughout the text you write "the paper explores", "the paper analyzed", etc. I recommend different phrasing: you carried out the work, not the paper; alternatively, use the passive voice. Take this only as a suggestion, since I am not a native English speaker and you are, but I have read many papers and this construction sounds a little odd to me.

L273 Just "oblique"; delete "very".

L287 I do not see how vegetation growth may result in erosion; I suggest rewriting the sentence. I mean, you might expect false sedimentation or deposition as a result of vegetation growth.

L294-300 This paragraph is not in the proper place, in my opinion; the subject is the creation of the training dataset, so it should be moved out of the study area section.

L303 You start the results by reporting the output of the CANUPO algorithm, but I do not see in your objectives or methods any statement that you will test this classifier. Please specify that you will use CANUPO and briefly explain, for the reader, the basis of this algorithm.

RESULTS

L342 What is a "visible relationship"? Please stick to demonstrable, objective facts, or include your reasoning in the discussion.

L439-447 I do not see any difference between this paragraph and your statement in the introduction, so is this actually a piece of reasoning or discussion?

 

Author Response

Please see the attachment

 

Author Response File: Author Response.pdf

Reviewer 2 Report

Comments and Suggestions for Authors

General comments

I read with interest the work entitled "Machine Learning Vegetation Filtering of Coastal Cliff and Bluff Point Clouds" by Phillipe Wernette. In general, the paper can add important new information regarding the segmentation of structure-from-motion (SfM)-based 3D point clouds using only RGB imagery. Considering that the author reports that machine learning (ML) models with RGB and three input layers are more efficient, parsimonious, and robust than other existing techniques using vegetation indices, this approach provides a more streamlined workflow for processing complex coastal cliff environments, where vegetation presence can bias erosion measurements and limit our ability to detect geomorphic erosion. Even if I cannot fully evaluate the language, not being a native English speaker, I deem the paper well written. Still, in some parts (introduction and discussion), I think more information should be added to improve the general importance of the work and its readability. The introduction would benefit from a more exhaustive overview of current methods and limitations, whilst in the materials and methods, the raw data acquisition is not sufficiently clear to ensure replicability. Therefore, before being suitable for publication, the paper requires a general revision of form and style to address several typos, in addition to specific improvements to the sections mentioned above. Finally, consider adding an appendix with a brief description of the technical terms used in the paper to ensure comprehension by a broader range of readers.

Specific comments on the text:

Line 28: I suggest adding a reference at the end of the sentence.

Line 31: I suggest adding references at the end of each example regarding historic imagery, LIDAR, and SfM. Also, please consider whether "historical" would be a better term than "historic".

 

Line 69-73: I think a more detailed explanation of why RGB point clouds cannot be used to derive vegetation indices such as NDVI would improve the rationale behind this work. This sentence could be misleading because, beyond the problem linked to the LAS format, the primary issue is that the NIR band is missing from RGB imagery and the derived point cloud. Please rephrase.

Line 75: I suggest changing "we" with "I" since only one author exists.

Line 79: The third software package is missing from the list. It could be helpful to specify which of these packages are open source and which are licensed. Moreover, since this software is used for many ecological and geological applications, I suggest adding some examples with references to better illustrate such tools' usage.

Line 103: I suggest changing "we" with "I" since only one author exists.

Lines 100-108: The main goals of this work are rather hard to discern in this part. I suggest stating the main aims of the research more clearly, perhaps using bullet points or a list.

Line 112: Maybe it could be of interest to cite other applications for rural and coastal monitoring, such as

https://doi.org/10.1016/j.jenvman.2022.115723

https://doi.org/10.1016/j.biosystemseng.2020.11.010

https://doi.org/10.3390/rs15133254

 

Line 139: The figure caption is missing each panel's letter. Moreover, panel "e" has a rather odd title.

 

Line 179: Maybe Table 2 could be moved to the supplementary material.

Line 202: The band normalization procedure should be described, at least in the supplementary material.
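For illustration, one common band normalization such a description could cover is dividing each band by the brightness sum, which removes overall illumination differences. This is only an assumed example of the procedure, not necessarily the one used in the paper:

```python
def normalize_rgb(r, g, b):
    """Normalize raw 8-bit RGB values to brightness-independent band ratios.

    Assumed example procedure: each band is divided by the sum of all
    three bands, so the ratios sum to 1 and encode color, not brightness.
    """
    total = r + g + b
    if total == 0:
        return 0.0, 0.0, 0.0
    return r / total, g / total, b / total

# A greenish pixel: the green ratio dominates.
print(normalize_rgb(120, 200, 80))  # -> (0.3, 0.5, 0.2)
```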

Line 266: Please clarify how the imagery was collected and the main steps used for georeferencing. Did you use a UAV? Were GCPs surveyed using a total station? More details could improve the replicability of the study.

 

Line 304: How the accuracy of the point cloud classification was assessed is unclear. Did you use a confusion matrix? How many reference points were collected for ground truthing?
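A sketch of the kind of accuracy assessment this comment asks about: building a confusion matrix from reference (ground-truth) labels and predicted classes, then reporting overall accuracy. The labels and data here are hypothetical:

```python
from collections import Counter

def confusion_matrix(y_true, y_pred, labels=("veg", "cliff")):
    """Rows are reference classes, columns are predicted classes."""
    counts = Counter(zip(y_true, y_pred))
    return [[counts[(t, p)] for p in labels] for t in labels]

def overall_accuracy(cm):
    """Fraction of points on the matrix diagonal (correctly classified)."""
    total = sum(sum(row) for row in cm)
    return sum(cm[i][i] for i in range(len(cm))) / total

# Hypothetical reference points vs. classifier output.
y_true = ["veg", "veg", "cliff", "cliff", "veg", "cliff"]
y_pred = ["veg", "cliff", "cliff", "cliff", "veg", "veg"]
cm = confusion_matrix(y_true, y_pred)
print(cm)                               # -> [[2, 1], [1, 2]]
print(round(overall_accuracy(cm), 3))   # -> 0.667
```

Reporting the full matrix, not just overall accuracy, would also reveal whether errors are biased toward omitting vegetation or omitting bare cliff.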

Comments on the Quality of English Language

Minor typos to be addressed 

Author Response

Please see the attachment

 

Author Response File: Author Response.pdf

Reviewer 3 Report

Comments and Suggestions for Authors

This manuscript presents an evaluation of using multi-layer perceptron (MLP) machine learning (ML) models to segment and filter vegetation from bare cliff in red-green-blue (RGB) point clouds. This is an important topic because of the challenges involved with monitoring change along near-vertical surfaces such as coastal cliffs. Using structure-from-motion (SfM) derived point clouds, the author demonstrates that relatively simple (2-3 layers and 8 or 16 nodes per layer) MLP models using only the RGB data were nearly as accurate (within 1-2%) as more complex models that included more layers, nodes, xyz statistics, and/or vegetation indices – at significantly reduced computational cost. Overall, the manuscript is well written, provides a robust assessment of how the various model architectures performed, and adds valuable insight into application of ML for segmenting point cloud imagery. I have just a few specific comments:

 

In several instances the paper uses the plural “we” (see lines 75, 103, 417); however, only a single author is listed. Were there others involved in this research who should be included as authors or in the acknowledgements?

 

Figs. 2, 3, 6, and 8 – Add scale.

 

Line 99 – Reference is repeated [16,17,17].

 

Line 120 – Acronym for Kawashima index is given as IKAW but is listed as KI in Table 1.

 

Line 193 – “…the most complex models had five densely connected layers…” Three models listed in Table 2 have 6 layers.

 

Fig. 5 – Labels and geographic boundaries are difficult to read.

 

Fig. 7a – Gray-scale markers are difficult to differentiate; consider using colors as in Figs. 7b and 10.

 

Line 485 – Data link is missing.

 

Line 595 – Reference [47] is incomplete.

Author Response

Please see the attachment

 

Author Response File: Author Response.pdf
