Article
Peer-Review Record

The Accuracy and Consistency of 3D Elevation Program Data: A Systematic Analysis

Remote Sens. 2022, 14(4), 940; https://doi.org/10.3390/rs14040940
by Jason Stoker 1,* and Barry Miller 2
Reviewer 1: Anonymous
Reviewer 2: Anonymous
Submission received: 22 January 2022 / Revised: 10 February 2022 / Accepted: 13 February 2022 / Published: 15 February 2022
(This article belongs to the Special Issue Environmental Monitoring and Mapping Using 3D Elevation Program Data)

Round 1

Reviewer 1 Report

This is an impressive evaluation of one of today’s most important GIS datasets.  There is much here that would likely be useful to a wide range of audiences.  I have mostly praise, two general concerns, and then a few minor suggestions.

There is a wealth of information, both statistical and conceptual.  I found the historical undercurrents as interesting as the results of specific tests, especially insight into the changing priorities that motivated different phases of data collection. For a methodologically quite dense manuscript, it reads well and is logically organized and illustrated.  The only section that I would recommend revisiting is the Discussion, which seemed to lack an organizing thread by comparison.  In terms of more general concerns:

Concern #1 is that while your stated goal is to “see if a user can use all the 3DEP data, both the LPCs as  well as the derived DEMs, as a single entity” (line 559), sometimes the discussion descends into (albeit interesting) detail about specific findings concerning the dataset(s) without clarifying how you would argue they relate back to your central question.  At points I wasn’t sure whether the main interest was to evaluate consistency, at least across the contiguous US, or to critique accuracy of representation in general.  They are entwined, but still raise somewhat different sets of questions. 

Concern #2 is about the sampling strategy.  Not being familiar with the ways in which underlying “projects” are carved out and defined, I struggled to understand the groupings upon which your stratification was based.  If there was an attempt to strategically sample by land cover group (which seemed implied at times), it was not clear to me how.  And if you had a completely different purpose in sampling the projects, that could be explained more directly.  Also, I should probably mention I’m not an expert in statistics by any means, so I don’t feel qualified to critique use of specific statistical tests (e.g. whether the Tukey-Kramer Honestly Significant Difference test was the right one to compare Z range and HAG calculations), and will defer to other reviewers on those matters.

Finally, a few minor things:

Line 125: update tense from “goal of study was” to “goal of study is”

Lines 188-191: around your discussion of coordinate systems, it might be preferable (or it might be splitting hairs?) to refer to EPSG6350 as CONUS Albers NAD83(2011) rather than the more generic “Albers Equal Area.”

Maps #1 and #2 – should the two legends be consistent (“count” vs “point count”) across both maps?  The legends also didn’t clearly convey what the maps were showing.  The choropleth symbology I assume was total number of files available, with the points being the ones you selected per sampling strategy?  If so, could both be indicated in the legend?  (I’ll resist the urge to also suggest changing the map layout’s coordinate system, though tempted to inquire if you have tried out your layout using the Albers you mention above)

Author Response

We thank the reviewer for their thoughtful review and excellent suggestions for improvement.

Concern #1 is that while your stated goal is to “see if a user can use all the 3DEP data, both the LPCs as  well as the derived DEMs, as a single entity” (line 559), sometimes the discussion descends into (albeit interesting) detail about specific findings concerning the dataset(s) without clarifying how you would argue they relate back to your central question.  At points I wasn’t sure whether the main interest was to evaluate consistency, at least across the contiguous US, or to critique accuracy of representation in general.  They are entwined, but still raise somewhat different sets of questions.

This is a great point. We added this language in the introduction to help clarify: “Currently 3DEP data is only reviewed at the individual project level. Having some a priori understanding of the variability, reliability and accuracy of key attributes in these data will help anyone in the future who attempts to use these data at scale, versus by individual projects.”

Concern #2 is about the sampling strategy.  Not being familiar with the ways in which underlying “projects” are carved out and defined, I struggled to understand the groupings upon which your stratification was based.  If there was an attempt to strategically sample by land cover group (which seemed implied at times), it was not clear to me how.  And if you had a completely different purpose in sampling the projects, that could be explained more directly.  Also, I should probably mention I’m not an expert in statistics by any means, so I don’t feel qualified to critique use of specific statistical tests (e.g. whether the Tukey-Kramer Honestly Significant Difference test was the right one to compare Z range and HAG calculations), and will defer to other reviewers on those matters.

Stratification was based on the name of the individual projects, which is part of the file name. Without this knowledge of how the files are named I can see how this would be difficult to interpret. We added this text to clarify: “As project names are inherent in the file nomenclature, we were able to stratify by project by extracting the project name from the master list of files, and then randomly selecting up to 4 files per project.”
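The selection procedure described in the response can be sketched as below. This is a minimal illustration, not the authors' actual code: the assumption that the project name is the leading underscore-delimited token of each file name, and the helper name `stratified_sample`, are both hypothetical.

```python
import random
from collections import defaultdict

def stratified_sample(file_names, files_per_project=4, seed=42):
    """Group files by the project name embedded in each file name, then
    randomly pick up to `files_per_project` files from every project."""
    rng = random.Random(seed)
    by_project = defaultdict(list)
    for name in file_names:
        # Assumed naming convention: the project identifier is the first
        # underscore-delimited token of the file name.
        project = name.split("_")[0]
        by_project[project].append(name)
    sample = []
    for project, files in sorted(by_project.items()):
        k = min(files_per_project, len(files))
        sample.extend(rng.sample(files, k))
    return sample
```

Projects with fewer than four files simply contribute all of their files, which matches the "up to 4 files per project" wording.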

We are fairly confident that the Tukey-Kramer HSD test is acceptable, as it was used in a previous peer-reviewed article for the exact same type of analysis in 2014.
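For readers unfamiliar with the test, the Tukey-Kramer procedure compares every pair of group means using a studentized-range statistic, with the Kramer adjustment handling unequal group sizes. A minimal stdlib-only sketch of the statistic (the critical value must still be looked up in a studentized-range table, which is not computed here; the function name is illustrative):

```python
from itertools import combinations
from statistics import mean

def tukey_kramer_q(groups):
    """Pairwise Tukey-Kramer studentized-range statistics for groups of
    possibly unequal size. Returns {(i, j): q}; each q is then compared
    against a studentized-range critical value q(alpha, k, N - k)."""
    k = len(groups)
    n_total = sum(len(g) for g in groups)
    means = [mean(g) for g in groups]
    # Pooled within-group variance (mean square within), df = N - k.
    ss_within = sum(sum((x - m) ** 2 for x in g)
                    for g, m in zip(groups, means))
    ms_within = ss_within / (n_total - k)
    q = {}
    for i, j in combinations(range(k), 2):
        # Kramer's standard error for unequal sample sizes n_i, n_j.
        se = (ms_within / 2 * (1 / len(groups[i]) + 1 / len(groups[j]))) ** 0.5
        q[(i, j)] = abs(means[i] - means[j]) / se
    return q
```

In practice a library routine (e.g. `scipy.stats.tukey_hsd` or statsmodels' `pairwise_tukeyhsd`) would be used rather than a hand-rolled version.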

Line 125: update tense from “goal of study was” to “goal of study is”

We have made this and several other grammatical changes throughout the paper to improve tense and plural agreements.

Lines 188-191: around your discussion of coordinate systems, it might be preferable (or it might be splitting hairs?) to refer to EPSG6350 as CONUS Albers NAD83(2011) rather than the more generic “Albers Equal Area.”

We agree, this can be an important distinction to some, and have made the recommended change.

Maps #1 and #2 – should the two legends be consistent (“count” vs “point count”) across both maps? 

The legends also didn’t clearly convey what the maps were showing.  The choropleth symbology I assume was total number of files available, with the points being the ones you selected per sampling strategy?  If so, could both be indicated in the legend?  (I’ll resist the urge to also suggest changing the map layout’s coordinate system, though tempted to inquire if you have tried out your layout using the Albers you mention above)

Yes, great catch. We have made the legends consistent and have augmented the figure caption to better explain that the black points are the locations of individual files.

Reviewer 2 Report

The paper is a nice analysis of the accuracy of the 3DEP data and will obviously be an important reference for most of its users. The various aspects related to data quality and validation are well organized and cover all the needed topics/points. The availability of these data sets opens the door to many new applications in numerous fields. Obviously improvements are always needed, and they will surely come as user demand increases. Congratulations to the authors for this useful work.

Author Response

We thank the reviewer for their timely and positive review.
