Article
Peer-Review Record

Lidar-Derived Rockfall Inventory—An Analysis of the Geomorphic Evolution of Rock Slopes and Modifying the Rockfall Activity Index (RAI)

Remote Sens. 2023, 15(17), 4223; https://doi.org/10.3390/rs15174223
by Shane J. Markus 1,*, Joseph Wartman 1, Michael Olsen 2 and Margaret M. Darrow 3
Reviewer 1: Anonymous
Reviewer 2:
Reviewer 3:
Reviewer 4: Anonymous
Submission received: 30 June 2023 / Revised: 12 August 2023 / Accepted: 25 August 2023 / Published: 28 August 2023
(This article belongs to the Special Issue Remote Sensing for Rock Slope and Rockfall Analysis)

Round 1

Reviewer 1 Report

General comments

This is a very interesting paper based on a large rockfall dataset from four sites in Alaska. The focus of the paper is predominantly on magnitude-frequency distributions and the use of Dunham's Rockfall Activity Index. The reviewer feels the presented analysis of this large dataset is worthy of publication in the Remote Sensing journal and offers potentially useful observations for future use of the RAI. The analysis would, however, benefit from additional geotechnical data, including joint sets, stereonets, kinematic analysis and information such as block shape (not just rockfall block size).

The discussion/conclusion sections summarise the important results of the paper well but suffer from an overuse of vague terminology such as "suggesting that" when explaining the reasons for the findings. This is unfortunate, as important progress could have been made in understanding the presented results through better use of the scanning data (including measurement of rock slope geometry, joint set data/characteristics and stereonets), more engineering geology mapping field work (rock mass characteristics, structural geology, water, joint surface characteristics) and some rock slope kinematic analysis at the four sites.

The paper also does not discuss in sufficient detail the limitations, uncertainties and potential errors in the measured/calculated data. The data presented are important, but would also benefit from a table highlighting assumptions, uncertainties and errors, with some recommendations for future research. More close-range photographs of the different sites would be useful in addition to Figure 3; these should show the rock mass, joint sets and examples of failures.

Page 1: Abstract – Line 16 – unusual to cite references in an abstract

Page 2: Lines 45–57.

The reference to Eberhardt and DEM methods seems inappropriate. DEM methods such as UDEC/3DEC would rarely be applied to rockfall studies in practice, being more suited to rock slope failure mechanisms such as toppling, wedge and planar block slides. It would arguably be overkill to use a DEM method on most rockfall projects. Limit equilibrium codes would also probably be used before DEM.

It is surprising to see no reference to the Slope Mass Rating of Romana or the Q-slope rating method of Bar and Barton. Discrete fracture network codes such as FracMan have also been used in several rockfall research papers.

Page 3 – Figure 2 caption – include CloudCompare in the reference list and cite using the normal [*] method.

Figure 2 – ensure all fonts clearly legible and not too small.

Page 5

1.5 – Geologic setting – this is a very brief section with only general information; there is no discussion of the engineering geology of the sites, including joint set and rock mass characterisation. The kinematics of rockfall are not really discussed and there is no real account of the structure at each site. A section on the geotechnical setting, with stereonets of joint sets for each site and a kinematic analysis, would be extremely useful to the paper. The rock slope geometries (slope dip/dip direction, sections, slope height, etc.), although generally available from the lidar, are not discussed. Sections through the lidar at the different sites would make much better use of the geometrical information available. Much more use could be made of the lidar and the attributes in Figures 1, 3 and 4 in characterising rock slope instability and rockfall failure kinematics.

Lines 139-147 – rephrase to include the information from Table 2 in the paper text. Table 2 as presented is repetitive and not necessary; this information could easily be expressed in one additional sentence. Delete Table 2. Table 2 also contains multiple misspellings of Riegl.

Figure 3 caption – include CloudCompare in reference list and cite as [*]

Line 156 – include reference to Maptek I-studio in reference list and cite as [*]

Lines 177-78 – "Cells indicating change adjacent to other changed cells are assumed to be involved in the same failure event" – is this always true? This seems quite a big assumption given the roughly one-year interval between scans. Williams' and Rosser's work at Durham using near-continuous scanning (i.e., 30-minute to hourly intervals) of East Cliff, Whitby clearly shows many small events which, as you note later, can appear to be one event. Failures occurring in clusters may be multiple events due to progressively changing kinematics, with one event providing kinematic freedom for an adjacent event. The total failure volume assigned to a failure cluster may thus not only ignore kinematics but also be over-estimated. The inter-relationship between temporal scanning interval and climate forcing can also be very important; thermal changes over the course of a day can lead to many small events on an hourly basis. Scans at long (1-2 year) intervals may indicate changes in climate affecting rockfall activity between sites, but present a homogenised view of a complex process.
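For clarity on this point, the following minimal sketch (not the authors' implementation; the 0.05 m significance threshold is taken from the paper, while the regular grid and 0.1 m cell size are assumed here for illustration) shows how adjacent changed cells are typically merged into single failure events by connected-component labelling, and why several temporally distinct small failures can be reported as one large event when the interval between scans is long.

import numpy as np
from scipy import ndimage

def cluster_failures(change_grid, threshold=0.05, cell_area=0.1 * 0.1):
    """Group adjacent significantly changed cells into failure events.

    change_grid : 2-D array of surface change per cell (m).
    threshold   : minimum |change| treated as significant (0.05 m per the paper).
    cell_area   : plan area of one grid cell (assumed 0.1 m x 0.1 m).
    Returns a list of per-event volumes (m^3).
    """
    changed = np.abs(change_grid) > threshold       # significant-change mask
    labels, n_events = ndimage.label(changed)       # adjacent cells -> one event
    volumes = []
    for event_id in range(1, n_events + 1):
        depths = np.abs(change_grid[labels == event_id])
        volumes.append(depths.sum() * cell_area)    # sum of depth x cell area
    return volumes

# Two rockfalls occurring months apart that happen to occupy adjacent cells in a
# single epoch-to-epoch comparison will receive the same label, and the cluster
# volume will aggregate both - the merging behaviour questioned above.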

Page 8

Fig 4 caption – refer to Maptek using reference [*]. These figures are nice, but the scales vary and are visually a little misleading – are the red failure areas in 4a less pronounced than in 4d purely as a scale effect? The information contained in these figures could be better leveraged by the authors through more description and photographs of failure locations, and by showing the relation to the geology/structure and slope geometry variations at each site. Some mention of failure modes/kinematics and rock mass quality at the different sites would be useful rather than total reliance on the RAI.

There appear to be specific areas of failures on each plot in Fig 4 – some seem to follow linear trends (structural? topographic?). More discussion and photographs are needed.

Figs 5 – 8 – fonts used for legends too small – barely legible.

Page 11-12

Define annual failure rate and average event depth earlier, i.e., when they are first introduced in the paper. The limitations of both of these measurements in relation to the temporal scanning interval should be clearly discussed in the paper, not just via mention of Williams' work in the background literature.
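For orientation, one plausible generic reading of these two quantities is (an illustrative assumption only; the paper's own definitions are what should be stated where the terms are first introduced):

\[
\text{annual failure rate} = \frac{N_{\mathrm{failures}}}{T_{\mathrm{obs}}}\;[\text{events yr}^{-1}],
\qquad
\bar{d} = \frac{1}{N_{\mathrm{failures}}}\sum_{i=1}^{N_{\mathrm{failures}}} d_i\;[\text{m}],
\]

where T_obs is the elapsed time between compared scans in years and d_i is the mean depth of the i-th detected failure. Both quantities depend directly on the temporal scanning interval, which is precisely the limitation the reviewer asks to be discussed.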

Page 12 – Line 297 – Table 6 should read Table 7?

Table 7 – the labelling of the columns for Sites A–D needs aligning correctly.

Page 14.

Figure 11. Relabel x-axis with realistic precision for slope angle.

Section 4.1

Site A : B : C : D area ratio = 30 : 3.9 : 2.9 : 1

Maximum failure volume ratio (A : B : C : D) = 35.1 : 5.4 : 1.5 : 1

Although scales are shown in the lidar plots, please state the rock slope height, length and slope angle for each site.

The larger sites are liable to contain more variation in structural/geotechnical characteristics. Has the possibility of geotechnical domains been considered in the use of the RAI? At one site, specific geotechnical domains may experience more failures due to kinematics. Although Dunham's RAI provides convenient classes for assessing rockfall activity, just like all classification systems (GSI, SMR, Q-slope, etc.), the user should always, where the opportunity exists, collect the input data needed to undertake a rock slope kinematic analysis.

Page 15 – Lines 390-396 – “Differences between magnitude-frequency distributions and associated power law parameters between each site are likely attributed to these differences in morphology, hydrology and structural geology (which may include site size, site-specific geology and rock characteristics; fracture spacing/orientation/character (? vague delete), slope inclination, groundwater systems, vegetation, roadway maintenance/construction history or various other factors (vague delete))”

This is a rather vague statement for engineered rock cuts ("are likely"/"may", "various other factors"), given the number of lidar scans done at the sites over a five-year period (2012-2017). Much of this data could have been collected on site and/or obtained from the lidar scans and photographs. The magnitude-frequency distributions and power-law parameters are valuable information worth publishing, but it is unfortunate that the opportunity to constrain them with measurements (kinematic data, etc.) has been missed or not shown in the current paper.
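For context, rockfall magnitude-frequency distributions are commonly characterised by a cumulative power law of the form below (one common parameterisation; the paper's exact form may differ):

\[
N(\geq V) = a\,V^{-b}
\quad\Longleftrightarrow\quad
\log N(\geq V) = \log a - b \log V,
\]

where N(≥V) is the (annualised) number of rockfalls with volume at least V, a is the activity (intercept) parameter and b is the power-law exponent (the slope on log-log axes). The reviewer's point is that inter-site differences in a and b could be tied to measured structural and kinematic data rather than attributed loosely to unspecified factors.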

Section 4.3 – a similar comment to the above applies – more constraints could have been derived to avoid statements such as that on Line 407: "possibly resulting in similar site-specific morphology, hydrology and structural geology".

Section 4.4 RAI

I think the authors are correct in lines 422-426 in suggesting that site-specific morphology, hydrology and structural geology may continue to be influential even when areas of the rock slope are discretised into RAI classes. This emphasises the comments above on the need for more geotechnical-geological data to constrain the RAI distributions: while the RAI is potentially a very useful tool for rockfall hazard identification, understanding the slope kinematics/structure is essential in any engineering risk assessment.

The scope of the paper does not cover hazard and risk; however, the results may have important implications. Temporal scanning frequency could affect rockfall modelling, e.g., block sizes being overestimated. Consideration of kinematics is also important, i.e., what were the reasons for the largest-volume failures?

Page 16

Line 433 – should be Table 7?

Lines 434-449. The terminology in these paragraphs seems a little loose: what does "sufficiently unique" mean? Please rephrase; similarly "notable" variations, "reasonably" useful and "relatively" site-specific.

Line 445 – should this be Table 10?

Line 450 – Table 10, not Table 7; i.e., the paper currently has two tables labelled Table 7.

Line 463 – a little detail on the RAMBO code and its use in this paper could be provided in the text.

Page 18.

Ref 5 – provide full reference.

Reference 10 – should be Journal.

References 13, 17, 19, 20, 28, 35 and 36 – be consistent in the use of lower case or title case.

Ref 18 – should be Yosemite.

Ref 21 – should be Brain.

Ref 29 – should be Brain.

Tables – please check that Tables 6, 8 and 9 are referenced, and referenced correctly, in the paper – note that where Table 6 is referenced it should have been Table 7.

The paper is well written although there are a few typos. The legend in the figures often uses too small a font and is barely legible.

Author Response

Please see attachment.

Author Response File: Author Response.docx

Reviewer 2 Report

 

Well written, good data, good analysis, very few comments or changes suggested.

Comments for author File: Comments.pdf

Author Response

Please see the attachment.

Author Response File: Author Response.docx

Reviewer 3 Report

This paper focuses on the Rockfall Activity Index (RAI) classes using TLS-derived DEMs and takes four rock slopes in Alaska, USA, as the study area. The RAI parameters of each RAI class are modified to enhance the accuracy and effectiveness of rockfall hazard assessments. The manuscript is well organized, and the methods and results are thoroughly illustrated.

My comments and questions are given below.

1. References are seldom cited in an abstract.

2. A flow chart is highly recommended for general illustration, in which the main steps proposed by this research can be highlighted.

3. The manual steps, which depend mostly on the experience of experts, should be explained.

4. "Surface models generated from point clouds of the same slope face, taken at different points in time, were differenced at each cell." Here, "differenced" is not the correct term.

 

5. "First, scan data was cropped to only include the exposed rock slope face areas." Do you crop the scan data into regular or irregular shapes? Is this step carried out automatically or manually?

6. Regarding the best-fit plane, the related equation is needed to show the data involved and the error to be considered (a standard least-squares formulation is sketched after this list, for reference).

7. As for the change detection, the significant-change threshold is set at 0.05 m on the change grid in order to exclude noise; please clarify the relationship between the point cloud, cells and grid used for change detection. In addition, the preliminary average failure depths for the first two categories of the original RAI classes were less than 0.05 m, so please explain this in detail.

8. The range of preliminary mean failure depth values in Table 1 is between 0.025 m and 0.75 m, while the range of mean damage depth values in Table 7 is from -0.153 m to -0.229 m. What are your comments on such a big difference? How can you verify your results?
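As a point of reference for items 5-7 above, the sketch below shows one standard sequence (an assumption for illustration, not necessarily the authors' exact procedure; the 0.1 m cell size is illustrative and the points are assumed to be expressed in a slope-aligned local frame with x, y roughly in-plane) for fitting a least-squares plane, gridding point offsets from that plane, and differencing two epochs with the 0.05 m significant-change threshold.

import numpy as np

def fit_plane(points):
    """Least-squares plane z = a*x + b*y + c through an N x 3 point cloud."""
    A = np.c_[points[:, 0], points[:, 1], np.ones(len(points))]
    coeffs, residuals, _, _ = np.linalg.lstsq(A, points[:, 2], rcond=None)
    return coeffs, residuals  # residuals quantify the plane-fit error

def grid_offsets(points, coeffs, cell=0.1):
    """Average the signed point offsets from the best-fit plane onto a regular grid."""
    a, b, c = coeffs
    offsets = points[:, 2] - (a * points[:, 0] + b * points[:, 1] + c)
    ix = np.floor((points[:, 0] - points[:, 0].min()) / cell).astype(int)
    iy = np.floor((points[:, 1] - points[:, 1].min()) / cell).astype(int)
    sums = np.zeros((ix.max() + 1, iy.max() + 1))
    counts = np.zeros_like(sums)
    np.add.at(sums, (ix, iy), offsets)
    np.add.at(counts, (ix, iy), 1)
    grid = np.full(sums.shape, np.nan)   # cells with no points stay NaN
    occupied = counts > 0
    grid[occupied] = sums[occupied] / counts[occupied]
    return grid

# Change detection between two epochs gridded against the same reference plane:
#   change = grid_epoch2 - grid_epoch1
#   significant = np.abs(change) > 0.05   # the 0.05 m threshold discussed above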

Author Response

Please see the attachment.

Author Response File: Author Response.docx

Reviewer 4 Report

The authors used LiDAR data to expand the existing rockfall inventory and modified the rockfall activity index (RAI) using estimated parameters such as the annual failure rate and average failure depth. They analyzed four commonly used parameters to characterize the spatial and temporal characteristics of rockfall within the study area. The method description in this paper is insufficient, and the analysis and discussion are not enough to support the conclusions. Some issues must be corrected.

 

General issues:

1. In the Introduction, the paper only states that the expanded inventory is used to calculate the related parameters. The weaknesses of the existing assessment system (RAI) are not described, and the contribution of this paper is not described well.

2. In the Methods section, the accuracy of the detected rockfalls is not given. In my opinion, the accuracy of the detected rockfalls can directly affect the calculation of the parameters.

3. The conclusions of this paper are based on two critical parameters, the annual failure rate and the average failure depth. However, the calculation of these two parameters is not given.

Some other issues:

1. L174 – is the threshold of 0.05 m an empirical value?

2. In Section 3.2, please add a description of the results.

3. L302 – the corresponding data for 2014 and 2015 are available; however, the failure rate and average failure depth for Site D from 2014 to 2015 are missing.

4. L332 – why did you choose the data from 2015 to calculate the slope angle? Would the results be different if you had chosen data with different acquisition times?

5. L461 – please add a description of the criterion for the variable talus slope. Do you mean that the variable talus slope proposed by the authors has been incorporated into the RAMBO software?

6. L504-505 – this conclusion is not well supported in the text.

Author Response

Please see the attachment.

Author Response File: Author Response.docx

Round 2

Reviewer 4 Report

My comments have been addressed properly in the modified version and I have no more questions.
