Article
Peer-Review Record

Factors Influencing Movement of the Manila Dunes and Its Impact on Establishing Non-Native Species

Remote Sens. 2020, 12(10), 1536; https://doi.org/10.3390/rs12101536
by Buddhika Madurapperuma 1,2,*, James Lamping 1, Michael McDermott 3, Brian Murphy 2, Jeremy McFarland 4, Kristy Deyoung 1, Colleen Smith 1, Sam MacAdam 1, Sierra Monroe 1, Lucila Corro 2, Shayne Magstadt 2, John Dellysse 5 and Solveig Mitchell 3
Reviewer 1: Anonymous
Reviewer 2: Anonymous
Reviewer 3: Anonymous
Submission received: 22 March 2020 / Revised: 25 April 2020 / Accepted: 6 May 2020 / Published: 12 May 2020
(This article belongs to the Section Remote Sensing Communications)

Round 1

Reviewer 1 Report

General comments:

Dear Authors,

The manuscript is in principle interesting for the Journal, because it deals with the integration of remote sensing approaches to study the evolution of coastal sand dunes.

Nevertheless, it suffers from the following weaknesses.

The chosen study area is very small, and only two epochs are considered for the multitemporal comparison. These two conditions do not allow a robust demonstration of the reliability and stability of the results.

Moreover, the methods are described only roughly in the manuscript. Inadequate attention is given to the accuracy of the results obtained. This is very important when independent datasets must be compared, as in this case. Hence there is no way to know whether the highlighted changes are signal or artefacts arising from data inconsistencies.

More specific comments are given in the following.

 

Best regards.

Specific comments:

43-44: Authors should support this sentence with relevant literature. Which sources support the idea that sea level rise and coastal erosion have accelerated dune dynamics?

57: what is the meaning of “develop commercially available survey…”?

80: Which kind of resolution? If spatial resolution, this does not depend on the KAP but on the camera used. Please explain.

Fig. 1: It seems the study area is very small, about a 1 km stretch of coast. In order to support the significance and scientific soundness of this paper, the Authors should discuss the following: 1) Are the contents and conclusions gained within this small study area representative of wider areas? 2) Can they be applied successfully to other areas in the world?

122-144: The data acquisition procedure is described in a very general way without providing any quantitative information. No information is given about GCP locations, GPS positioning accuracy, orthocorrection accuracy for both the KAP and UAV imagery, co-registration quality of the KAP and UAV data, or the accuracy and reliability of the DEM. The Authors should provide this information and discuss it in the Results section. As an example, they later perform a multitemporal analysis of the DEMs. Without knowing the quality of every DEM, it is not possible to demonstrate that changes really represent ground modifications rather than false alarms resulting from DEM errors. Hence, in this reviewer's opinion, this part of the manuscript should be substantially rewritten.

146-150: In order to assume that the results of the procedure really describe ground changes, the Authors should demonstrate that the two independent DEMs are well co-registered in (X,Y) and that no elevation artefacts occur, especially in the UAV DEM. The Authors should develop these considerations and discuss the results; instead, the manuscript completely lacks this information.
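
Illustrative note for the revision: a common way to address this point is to propagate the vertical uncertainty of each DEM into a minimum level of detection and mask out differences smaller than that threshold. The sketch below is purely illustrative (Python, assuming a vertical RMSE is known for each survey); it is not the authors' actual procedure.

    import numpy as np

    def min_level_of_detection(rmse_a, rmse_b, t=1.96):
        # Propagated uncertainty of the difference of two independent DEMs,
        # scaled by a critical value (t = 1.96 for ~95% confidence).
        return t * np.sqrt(rmse_a**2 + rmse_b**2)

    def significant_change(dem_old, dem_new, rmse_old, rmse_new):
        # DEM of difference; cells whose change is below the minimum level
        # of detection are masked as indistinguishable from DEM error.
        dod = dem_new.astype(float) - dem_old.astype(float)
        lod = min_level_of_detection(rmse_old, rmse_new)
        return np.where(np.abs(dod) >= lod, dod, np.nan)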

154-158: Which variable is used to calculate the standard deviation? The procedure implemented is very hard to understand. This section should be reorganized.

158-159: Why use the FOD maps for the trails instead of extracting them from the data collected in this work? What about the co-registration quality between the new datasets and the existing ones?

161-162: which radiometric-spectral-spatial criteria were used to obtain the training dataset? What about its reliability?

168: Authors used classification to represent vegetation density. Why didn’t they use classification directly to classify their imagery?

177: "…the AOI was divided into areas containing vegetation and areas lacking …". How did the Authors perform this hard classification? How does it relate to the vegetation density layer, which is instead a soft representation of vegetation occurrence? Please explain.

180: The angle of repose of sand is not constant; it depends mainly on grain size, grain shape, and the relative density of the deposit. The Authors should check whether their assumption is correct.

186-191: This method and these data should not be described here, but in an earlier section.

214-215: Which quantity did the Authors use to calculate NDVI: DN or reflectance? The results may differ considerably because NDVI depends strongly on both atmospheric effects and irradiance. Moreover, the Authors do not discuss or take into account the effects of surface azimuth and slope; correcting for these morphological parameters is also a relevant task. When no pre-processing is applied, it is more reliable to analyse the simple ratio VI = NIR/Red instead of NDVI.
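
For reference, the two indices mentioned in this comment are computed as follows; the sketch is illustrative only (Python), and the nir/red inputs may be DN or reflectance, which is exactly the ambiguity being queried.

    import numpy as np

    def ndvi(nir, red, eps=1e-9):
        # Normalized Difference Vegetation Index.
        return (nir - red) / (nir + red + eps)

    def simple_ratio(nir, red, eps=1e-9):
        # Simple ratio vegetation index (VI = NIR / Red), sometimes preferred
        # when no atmospheric or topographic correction has been applied.
        return nir / (red + eps)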

229-230: How did the Authors assess "…which method classified invasive vegetation best"?

259-261: This description is unclear to the reader. The Authors should improve the content and description of this paragraph.

275: Table 2. Is the "Mean elevation change" given in meters? If so, do the Authors think they can report elevation statistics at mm accuracy?

278-279: This description is unclear to the reader. The Authors should improve the content and description of this paragraph.

290-291: The Authors should introduce these concepts in the Methods section. Instead, the results of an accuracy assessment are given without any information about how these values were obtained. Please improve the manuscript in this respect. Moreover, the accuracy is very low. How could this low accuracy influence the quality of the results?
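
For context, accuracy figures of this kind are normally derived from a confusion matrix built against independent reference samples. A minimal sketch of the usual quantities (Python, illustrative only; the manuscript does not say how its values were obtained):

    import numpy as np

    def confusion_matrix(reference, predicted, n_classes):
        # Rows are reference (ground-truth) classes, columns are mapped classes.
        cm = np.zeros((n_classes, n_classes), dtype=int)
        for r, p in zip(reference, predicted):
            cm[r, p] += 1
        return cm

    def overall_accuracy(cm):
        # Fraction of reference samples mapped to the correct class.
        return np.trace(cm) / cm.sum()

    def cohens_kappa(cm):
        # Agreement beyond chance, computed from the same confusion matrix.
        n = cm.sum()
        po = np.trace(cm) / n
        pe = (cm.sum(axis=0) * cm.sum(axis=1)).sum() / n**2
        return (po - pe) / (1 - pe)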

298: what about the reliability/accuracy of Fig. 7?

326-330: This discussion of the effect of wind does not seem to be based on analyses developed and implemented in the paper. The Authors should better explain the relevance of these factors.

336: What about rain erosion? Where did the Authors analyse this process and its effects in the study area?

366-369: Where in the paper did the Authors perform this comparison between the two methods? Which approach did they implement to reach this concluding remark?

Author Response

Please see the attachment.

Author Response File: Author Response.pdf

Reviewer 2 Report

This is a case study of how to map sand dune change using kite and drone imagery. The problem is that the study site and methods are not clearly explained and need a general revision so that they can be followed to ensure that the results are valid. I point out some of the more problematic areas below. Assuming that the methods are sound, the results are interesting and the Discussion and Conclusions make sense.

Punctuation and grammar need to be corrected in many places. For example, Lines 55-56, here is a better way to word the sentence: The social trails have disturbed dune mat habitats, trampled rare plant communities, spread invasive species, and disturbed ground-nesting bee colonies [2, 26-27].

Title: It is not clear how the Manila Dunes in the title relate to the Ma-le’l Dunes in the text.

Line 115. Why switch from UAVs to sUAV? Please be consistent with acronyms throughout the manuscript.

Lines 118-119. The last part of the sentence is not clear: tidal was ranged from 0-6 feet with 4 feet low tidal water level in this coastal habitat. Are you trying to say that tides have a six-foot range but were at the four-foot level above low tide when the images were taken? Also, please use metric units rather than feet.

Lines 120-121. This statement about the plant community belongs above after line 110.

Lines 154-155. Not clear. I have no clue as to how standard deviation leads to quantity of movement per pixel or elevation change. Some explanation is needed.

Lines 178-197. The decision tree described in the text does not match Figure 3. If "vegetation presence 100%" is not true, then why are those pixels assigned to vegetation? If slope is not >30%, then why are those pixels assigned as steep? The remaining levels do make sense. I suspect the problem lies with how the boxes in the tree are labeled.
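
To make the mismatch concrete, here is one reading of the rule ordering the text seems to describe (Python, hypothetical; the thresholds and class names are taken from this comment and Figure 3, not from any code the authors provide):

    def classify_pixel(vegetation_presence_pct, slope_pct):
        # Hypothetical reading of the decision tree as described in the text.
        if vegetation_presence_pct >= 100:
            return "vegetation"   # only fully vegetated pixels end up here
        if slope_pct > 30:
            return "steep"        # only slopes above the threshold end up here
        return "other"            # remaining levels of the tree (not in question)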

Line 243. How does toyon appear here without having been mentioned earlier in the study area description?

Author Response

Please see the attachment.

Author Response File: Author Response.pdf

Reviewer 3 Report

Sound method but not novel. 

Author Response

Please see the attachment.

Author Response File: Author Response.pdf

This manuscript is a resubmission of an earlier submission. The following is a list of the peer review reports and author responses from that submission.


Round 1

Reviewer 1 Report

line 54: double space

line 56: this reads as a brochure for Friends of the Dunes and you have already once mentioned it is a nonprofit. These lines are not needed. line 60: remove surprisingly line 61: define social trails line 65:  is there only one such study that shows this?  line 67: "easier than ever" is too colloquial line 70: Since then,  line 72-73: They will be used in this study? Or all studies? This line is vague. If it is this study, this belongs in methods line 79: you have a dash and a comma to separate the phrase - pick one or the other line 81: are you referring to invasive plant species only?  line 95: "uniquely dynamic" is subjective, unless you have metrics Figure 1: these maps need to be redone. It is too blurry, the inset map area is unclear (use an inset box and zoom out to a more recognizable geographic area - such as the state of CA), add a scale bar to both and a grid to the inset map, and the inset map is unclear. Zoom in on the management area itself. Is the star the only point of study, or does it cover the entire management area? line 98: when did the data collection occur? How did it relate to the tides? Growth phase of the vegetation present? Was the timing of the collection at the same time between the two methods?  line 110: a second flight line 114: define sfm line 122: you are defining sfm now, when it should be done earlier line 124: where is this DEM coming from? From your own data collection? what specifications did you use to create it? What was the accuracy? Did you follow the lidar base spec from ASPRS? Was the slope measured in degrees or percent? What is the date of the DEM? This section needs to be expanded, as it is very vague. Also note that it is "a UAV" and "lidar-derived" and "The DEM was used"  line 130: this long sentence is awkward and needs to be rewritten line 136: how did you differentiate between the classes? What was the criteria?  Figure 2: this figure is too blurry to be used. It also needs a grid and another map to show where in the study area this region is. And how were the trails defined and mapped? line 145: double space line 152: this sentence is a fragment line 154: how are you defining the different classes of vegetation? line 156: how are you determining wind direction? From which direction is the wind blowing, and is it consistent across the entire area? (Yes, I know this may seem obvious, but it needs to be written out) line 157: double space line 157: For the MERRA-2 wind dataset, which dates did you use? How many images did you observe?  Figure 3: there is no need for colors in this tree, as it makes it overly busy. Also expand on the caption, with each step explained line 167: Subtracting the two DEMs is not useful unless they were taken at the same time and created using the same standards and have the same accuracy.  line 170:  this sentence is awkward line 179: why are you multiplying the NDVI equation by 1?  line 188: how did you choose samples? Were they field verified?  line 198: double space and I think you are missing a word line 216: did you create DSMs? Previously, you called them DEMs. If they are DSMs, then you need to elaborate how they were created. Figure 4:  this map is the clearest. A grid is still needed. Also are all the significant figures needed in the legend? And in general, you do not need to write the word "Legend" in the legend box. Table 1: Again, without stating the accuracy of the DEMs, it's impossible to know if these values are significant or not. Figure 5: This map is blurry. 
Same comments as previous figures line 246: these technical difficulties should be presented in the methods line 253: there is no need to quote maximum likelihood classification  line 255: Just to reiterate an earlier point, there needs to be an in-depth discussion about how the classes were identified in the methods line 256: mapped how? and when?  line 262: you need a grid, and another map to show where these are. Also, remove the "HSU Group Project" etc. Is the scale changing throughout the image? Why are there two scale bars?  Figure 7: another blurry figure, a different scale bar type than the others, spell out vegetation in your "Dead Veg" class Related to Figure 7, for the methods: Can you collapse these vegetation types any further? Species-specific classification is much more difficult than more generic classes. You are possibly going too fine.  line 274: such as:  line 277: RED doesn't need to be capitalized line 284: this MERRA information belongs in the methods line 299: why are you only defining social trails now? line 315: is emerging line 325: FOD and not Friends of the Dunes? Either choose one acronym/initialism and be consistent or don't use it at all line 335: enter mixed?     This paper reads too much like an school project than a scientific paper, and my suspicions were confirmed when I saw Figure 6. There is a good idea in this, but there is much work to be done to make it publishable. The outline of every ArcGIS step is not needed with the tool named and how files were exported, for example. The figures need to be redone. The paper feels like there were several different authors that didn't coordinate with one another, and it needed one project lead to ensure the paper was consistent. It is difficult to appropriately gauge whether or not the conclusions are valid without the methods being complete. I do apologize if this review seems overly harsh, especially as I suspect some of the authors are new researchers. Take heart, we all receive harsher reviews in the beginning of our career. However, each of the scientists that review your paper take unpaid time out of their schedule to review the paper, and it is important to submit as near-perfect of a submission as you can, before any peer reviewer sees it. 

Reviewer 2 Report

Review of: Factors Influencing Movement of the Manila Dunes and Its Impact on Establishing Non-native Species

Buddhika Madurapperuma, James Lamping, Michael McDermott, Brian Murphy, Jeremy  McFarland, Kirsty Deyoung, Colleen Smith, Sam MacAdam, Sierra Monroe, Lucila Corro, Shayne Magstadt and John Dellysse

I quote from the authors:

Purpose: The purpose of this paper is to understand the dune movements in relation to social/established trails, vegetation density and topography and mapping invasive/native species in the Mal-le’l Dunes area of Humboldt Bay National Wildlife Refuge.

Results: From our analysis, the installation of trails had an overall impact of lessening the amount of dune movement. Social trails digitized within the study site were found to have more local movement than the established when comparing to movement across the entire site. We compared two methods of classification viz., object based feature extraction method and a pixel-based supervised maximum likelihood classification method, in order to identify the best classification of dune vegetation.

Introduction:

The Introduction has important material but needs to be rewritten. I note that the paper opens with a focus on the FOD organization and some site information (Lines 32-39), then shifts to real background material, then, in the same paragraph where the authors discuss dune stabilization and invasive species, suddenly returns to a discussion of FOD. After this they turn to methodology and then, finally and appropriately, in lines 75 to 86 turn to the purpose of the study. Lines 87-90, which conclude the introduction, are more appropriate for the final discussion.

Needless to say, the introduction material (summarized above) is both poorly organized and in places includes material inappropriate for an Introduction. Rethink and rewrite this section.

Section 2.1:

The study area description is minimal and needs to be expanded.

Figure 1 needs an inset so the reader can determine where it is actually located. Connect the right-hand figure with the left by a study area box and create a small inset map of California for the right-hand figure. Latitude and longitude graticules are necessary if you do not specifically list the location in the text.

I have to ask: where is the discussion of the actual vegetation of the study area? It should be in this section, yet it is missing. Some is found, inappropriately, in the introduction. It should be here and it should be expanded.

The authors, in their purpose and elsewhere in the Introduction, discuss dune movement. Yet I was unable to find any discussion of the actual timing of the data acquisition until I came to the Results section, where I found a somewhat uninformative statement: “Comparing the absolute value of the differences in the two DSMs emphasized the shifts in dune movements over the last two years”. I searched the entire manuscript and there is absolutely no information on the timing of data collection other than references to 2018 and 2016, with no actual dates. The reader should not have to get all the way to Figure 4 before finding out this minimal, and insufficient, bit of important information.

So these questions need to be answered:

Was it actually two years exactly?

And, as an additional important point:

Are there any dates for the creation of the “trails” and the “social trails”? In addition, how might the time of establishment of these trails affect the results?

Data acquisition is fairly well described; however, the authors should read and proofread their manuscript before submission. Lines 99-104 read:

“A mission plan was created within the region of interest (ROI) using DJI GS Pro software prior to flying sUAV (DJI Mavic Pro) at the FOD. Aerial photographs were collected at a height of 30 meters over a 22.5 acre plot using a DJI Mavic Pro multicopter (DJI, Shenzhen, China) equipped with  a 12-megapixel RGB camera. Aerial photographs were collected, at a height of 30 meters, over a 22.5 acre plot by a DJI Mavic Pro multicopter (DJI, Shenzhen, China) equipped with a 12-megapixel RGB camera.”

In other words, unless it is an error, is it really necessary for the authors to repeat the same thing? The sentence:

Aerial photographs were collected at a height of 30 meters over a 22.5 acre plot using a DJI Mavic Pro multicopter (DJI, Shenzhen, China) equipped with a 12-megapixel RGB camera.

is the same as:

Aerial photographs were collected, at a height of 30 meters, over a 22.5 acre plot by a DJI Mavic Pro multicopter (DJI, Shenzhen, China) equipped with a 12-megapixel RGB camera.

In Section 2.5.1 the authors state: “To determine the amount of movement seen in the dunes we calculated the standard deviation across the entire study area to get a ‘quantity of movement’ per pixel.” What is the source of this approach? No reference is given, and even after 40 years in this field I am unfamiliar with this particular approach. How do you go from a standard deviation (in the most general sense, a measure of variation) to a quantity? Unless the authors can explain this clearly and either tie their approach to established techniques or mathematically show how it works (perhaps some equations?), I find the analysis that follows at best flawed and at worst wrong and unrepeatable.
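
One observation that may help the authors respond: if the per-pixel standard deviation is taken over only the two DEM epochs, it is mathematically just a scaled absolute elevation difference, as the sketch below shows (Python, illustrative only; variable names are hypothetical):

    import numpy as np

    def per_pixel_movement(dem_2016, dem_2018):
        # The sample standard deviation of two values a and b is |a - b| / sqrt(2),
        # so a per-pixel std over two epochs is just a rescaled DEM difference.
        stack = np.stack([dem_2016, dem_2018])
        return stack.std(axis=0, ddof=1)  # == np.abs(dem_2018 - dem_2016) / np.sqrt(2)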

Lines 135-140: What “interpolation function” was used in ARC? From the manuscript it is impossible to determine the approach, whether it is the appropriate one to use, and more importantly what is meant by a “density interpolation function”.

In your Figure 2, vegetation density is shown from less dense to more dense. “Less” and “more” are hardly quantifiable and could mean anything. This needs to be clarified; if it cannot be, then, as I noted above, the results are flawed. In the following paragraph, there is mention of a value of 11 meaning “moderately dense vegetation”, with no mention of what this actually means in terms of a scaled, quantifiable metric.

Section 2.6.1: Please explain how NDVI and texture analysis are related. Consider your statement: “Using both the RGB and NIR bands, Normalized Difference Vegetation Index (NDVI) was calculated to incorporate texture analysis, to improve the vegetation classification.” NDVI is a vegetation quantity/health metric; texture is a measure of variability over different scales. Having developed many texture metrics over the years, I am not certain why you could not calculate local texture directly from your imagery. There are a number of options within the “Focal Stats” tool that are more appropriate.
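
For illustration, a local texture layer of the kind the “Focal Stats” tool produces can be computed directly from a single band; a minimal sketch (Python with NumPy/SciPy, illustrative only):

    import numpy as np
    from scipy.ndimage import uniform_filter

    def focal_std(band, size=5):
        # Local (focal) standard deviation in a size x size moving window,
        # a simple texture measure computed from one image band.
        band = band.astype(float)
        mean = uniform_filter(band, size)
        mean_of_sq = uniform_filter(band**2, size)
        return np.sqrt(np.maximum(mean_of_sq - mean**2, 0.0))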

Lines 194-195: “The two output classification images that were created were then color coded and compared to determine which method classified invasive vegetation best.” What criteria were used to determine which was best?

Line 197: “…the study area into three equal sections, which were then further broken down into smaller and…” Smaller what?

Section 2.6.2: What “features” were extracted? I read that you ran a tool, but to pull out what?

Section 3.3: In line 246 the reader finds that there were some difficulties. Why is this addressed here and not mentioned 150 lines earlier?

Lines 262-265, Figure 6: What is the green area in Figure 6B?

Figure 7: Only now do I find out what was extracted in Section 2.6.2. I note that there is no discussion, either earlier or in this section, of how the results were validated.

Discussion: I cannot evaluate how well the study supports the conclusions reached, due to all of the issues noted above. The study description is too flawed to make such an assessment.

Line 304: “…Because a NIR band is necessary to calculate NDVI and texture…”. NIR is not required to calculate texture.

 

Lines 334-338: “The variety of small, enter mixed vegetation types throughout the dunes causes difficulties for classifying individual plant types when using object based methods. Object-based image classification works best when the features to be identified have distinct shapes throughout the landscape.”

“small, enter mixed” what?

What distinct shapes are you talking about here?

Recommendation: Reject. Resubmit if it is possible to address all of the issues noted above, but as I see this manuscript, there are too many holes in the approach to allow this.
