Article
Peer-Review Record

Multi-Temporal LiDAR and Hyperspectral Data Fusion for Classification of Semi-Arid Woody Cover Species

by Cynthia L. Norton 1,*, Kyle Hartfield 1, Chandra D. Holifield Collins 2, Willem J. D. van Leeuwen 3 and Loretta J. Metz 4
Reviewer 1:
Reviewer 2: Anonymous
Reviewer 3: Anonymous
Remote Sens. 2022, 14(12), 2896; https://doi.org/10.3390/rs14122896
Submission received: 11 April 2022 / Revised: 11 June 2022 / Accepted: 16 June 2022 / Published: 17 June 2022
(This article belongs to the Special Issue Remote Sensing Applications in Vegetation Classification)

Round 1

Reviewer 1 Report

In the manuscript titled "Multi-Temporal LiDAR and Hyperspectral Data Fusion for Classification of Semi-Arid Woody Cover Species", the authors investigated the use of multi-temporal, airborne hyperspectral imagery and light detection and ranging (LiDAR) derived data for tree species classification in a semi-arid desert region. They present a novel approach that creates a framework for producing a species-specific woody vegetation map by fusing simultaneously acquired airborne LiDAR and high spatial resolution hyperspectral data to improve classification accuracies. I believe that the methods used in the work are appropriate, well chosen for the topic, and clearly described. The work has an appropriate structure. This topic fits the scope of the journal.

I would recommend this work for publication after some minor corrections.

  • Several citations used in this work are quite old. It is recommended to update them with the latest literature, if possible.
  • Lines 51-54. The reference cited here is from 2008; the literature has been updated substantially since then. Consider the latest literature. It would also be better to cite all of the studies that used those models instead of a single citation.
  • Recheck the "Dynamic Text" in Figure 1 ("ESRI, GOE EYE").
  • Figure 1 does not fit on the page. Resize it or, better, use vertical labels for the grid on the right and left sides.
  • Consider adding proper references for Section 2.2.2.
  • Line 341. Recheck the caption error there.
  • Correct the caption of Table 2. It should be below the table; it might be a system error.
  • Journal references must cite the full title of the paper, page range or article number, and digital object identifier (DOI). DOIs are missing in the references.

Author Response

Cynthia L. Norton

Responses to Reviewers

The responses by the authors to the reviewers are in bold

 

Reviewer 1

In the manuscript titled "Multi-Temporal LiDAR and Hyperspectral Data Fusion for Classification of Semi-Arid Woody Cover Species", the authors investigated the use of multi-temporal, airborne hyperspectral imagery and light detection and ranging (LiDAR) derived data for tree species classification in a semi-arid desert region. They present a novel approach that creates a framework for producing a species-specific woody vegetation map by fusing simultaneously acquired airborne LiDAR and high spatial resolution hyperspectral data to improve classification accuracies. I believe that the methods used in the work are appropriate, well chosen for the topic, and clearly described. The work has an appropriate structure. This topic fits the scope of the journal.

I would recommend this work for publication after some minor corrections.

  1. Several citations used in this work are quite old. It is recommended to update them with the latest literature, if possible.

I added 51 more references to include recent literature. The references with the initial idea were kept, and more recent, highly cited studies were added.

New references: 2, 4, 6, 8, 10, 13, 15, 17, 20, 22, 24, 28, 30, 32, 34, 38, 46, 48, 50, 53, 54, 47, 58, 65, 57, 69, 71, 74, 80, 83, 85, 87, 91, 93, 95, 98, 100, 102, 104, 108, 111, 113, 115, 117, 121, 123, 125, 133, 136, 139, 141.

In-text citations have been changed as well.

 

  2. Lines 51-54. The reference cited here is from 2008; the literature has been updated substantially since then. Consider the latest literature. Also, it is better to cite all of the studies that used those models instead of a single citation.

Three new citations were added (references 13, 15, and 17) while keeping the older papers; see lines 544-557.

  3. Recheck the "Dynamic Text" in Figure 1 ("ESRI, GOE EYE").

The dynamic text in Figure 1 was resized; see line 131.

  4. Figure 1 does not fit on the page. Resize it or, better, use vertical labels for the grid on the right and left sides.

Figure 1 was resized; see line 131.

  5. Consider adding proper references for Section 2.2.2.

Reference 59 (lines 655-657) was added in support of Section 2.2.2:

NEON (National Ecological Observatory Network). Discrete return LiDAR point cloud (DP1.30003.001), RELEASE-2022. https://doi.org/10.48443/ca22-1n03. Dataset accessed from https://data.neonscience.org  on December 3, 2021

 

  6. Line 341. Recheck the caption error there.

The Table 2 caption at lines 341-342 was rewritten.

  7. Correct the caption of Table 2. It should be below the table; it might be a system error.

Reformatted line 342

  8. Journal references must cite the full title of the paper, page range or article number, and digital object identifier (DOI). DOIs are missing in the references.

DOIs were added; see lines 519-853.

Author Response File: Author Response.docx

Reviewer 2 Report

1. Please clarify why the drone imagery was used for this classification. The description in the paper is "to incorporate the different abundant vegetation species", which cannot be inferred from the flowchart and text. Furthermore, the drone imagery was acquired in 2021, while the hyperspectral images were acquired from 2017 to 2019. How is this time difference handled?
2. Please clarify why these 38 bands were selected for calculating the 12 vegetation indices. Has a sensitivity analysis been done?
3. How were the LiDAR points filtered for DTM generation?
4. Figure 4B reveals that there is serious confusion between the five kinds of woody vegetation in NDVI-CACTI space.

Author Response

Cynthia L. Norton

Responses to Reviewers

The responses by the authors to the reviewers are in bold

Reviewer 2

  1. Please clarify why the drone imagery was used for this classification. The description in the paper is "to incorporate the different abundant vegetation species", which cannot be inferred from the flowchart and text.

A sentence and a supplementary figure were added to visualize the drone images, training data, and SRER boundary: "Using the DJI Mavic2 Enterprise Dual, we collected high resolution RGB images over six additional sites in the northwest section of the study area in 2021 to incorporate the different abundant vegetation species within the region for training data detection and selection (Figure S1)." See lines 197-200.

  2. Furthermore, the drone imagery was acquired in 2021, while the hyperspectral images were acquired from 2017 to 2019. How is this time difference handled?

Woody vegetation growth is a slow process in arid environments, and the drone images were only used to detect the locations of tree species; larger, mature individuals should not have changed much within five years. See lines 205-207.

  3. Please clarify why these 38 bands were selected for calculating the 12 vegetation indices. Has a sensitivity analysis been done?

Vegetation indices were developed based on biophysical and biochemical signals of various absorption features that aid in detecting and mapping vegetation species. The bands selected were derived from previous research based on differences in reflectance and absorption features that represent various plant species [62-63]. Addressed in lines 222-226, citations 62-63.
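For illustration, the sketch below computes one such index (NDVI) from a hyperspectral raster with the terra package in R. This is a minimal example, not the script used in the paper; the file name and band indices are placeholders standing in for the actual red (~660 nm) and near-infrared (~860 nm) bands.

    # Minimal sketch: NDVI from a hyperspectral raster (band indices are placeholders).
    library(terra)

    hsi <- rast("hyperspectral_2019.tif")   # hypothetical file name
    red <- hsi[[54]]                        # assumed band near 660 nm
    nir <- hsi[[90]]                        # assumed band near 860 nm

    # Normalized Difference Vegetation Index
    ndvi <- (nir - red) / (nir + red)
    writeRaster(ndvi, "ndvi_2019.tif", overwrite = TRUE)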

 

  4. How were the LiDAR points filtered for DTM generation?

This was already described in lines 258-260, which also include references to the method: "A digital terrain model (DTM) was created using the lidR [88] package for spatial interpolation by k-nearest neighbor (KNN) of 12 with an inverse-distance weighting (IDW) of 2 in RStudio [61]. The normalized DTM point cloud uses an algorithm that implements a point-to-raster (p2r) method for creating a denser collection of points, resulting in a smoother CHM raster [61, 88]."
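For illustration, a minimal lidR sketch of the DTM and height-normalization steps described above; the parameter values (k = 12, IDW power = 2) come from the text, while the file name is hypothetical and ground returns are assumed to be pre-classified, as in the NEON product.

    # Minimal sketch of the DTM / normalization workflow (lidR); not the exact paper script.
    library(lidR)

    las <- readLAS("srer_tile.laz")   # hypothetical NEON LiDAR tile

    # Interpolate classified ground returns into a DTM using
    # k-nearest-neighbour IDW (k = 12, inverse-distance power = 2).
    dtm <- rasterize_terrain(las, res = 1, algorithm = knnidw(k = 12, p = 2))

    # Normalize point heights to height above ground for later CHM generation.
    nlas <- normalize_height(las, knnidw(k = 12, p = 2))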

  5. Figure 4B reveals that there is serious confusion between the five kinds of woody vegetation in NDVI-CACTI space.

Besides the NDVI-CACTI space, other variables (spectral indices and CHM values) allow us to discriminate among woody species. The main source of misclassification is the confusion of lotebush and mesquite due to their similar VI and CHM signals. Additional indices, or data acquired when lotebush is blooming, could help delineate it from mesquite trees. Lines 486-487 added: "To apply this method in other areas, an in-situ inventory of tree and species locations is needed to aid in remote identification of tree species using UAS images. Additionally, auxiliary information or multi-temporal data of equal quality may be needed to further delineate tree species more accurately."

Author Response File: Author Response.docx

Reviewer 3 Report

This study aims to investigate the use of multi-temporal, airborne hyperspectral imagery and light detection and ranging (LiDAR) derived data for tree species classification in a semi-arid desert region. The study produces highly accurate classifications by combining multi-temporal, fine spatial resolution hyperspectral and LiDAR data (~1 m) through a reproducible scripting and machine learning approach that can be applied to larger areas and similar datasets. Generally, I have two questions:

1. How can the applicability of the proposed method to larger areas be assessed?

2. How were the canopy height models chosen?

 

 

Author Response

Cynthia L. Norton

Responses to Reviewers

The responses by the authors to the reviewers are in bold

Reviewer 3

This study aims to investigate the use of multi-temporal, airborne hyperspectral imagery and light detection and ranging (LiDAR) derived data for tree species classification in a semi-arid desert region. The study produces highly accurate classifications by combining multi-temporal, fine spatial resolution hyperspectral and LiDAR data (~1 m) through a reproducible scripting and machine learning approach that can be applied to larger areas and similar datasets. Generally, I have two questions:

  1. How can the applicability of the proposed method to larger areas be assessed?

Our study utilized an open-source R programming approach, which makes it easily transferable to other, larger areas with available data. Depending on data availability, Google Earth Engine (GEE) could be used to apply the same methodology to larger areas with faster processing. Accuracy assessments would need to be performed to evaluate performance in other and larger areas. Lines 383-387.
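For illustration, one way to run such an accuracy assessment in R is to compare predicted species labels against an independent validation set with a confusion matrix; the sketch below uses the caret package, and the file and column names are hypothetical.

    # Minimal sketch of an accuracy assessment for a new area (caret); file/column names are hypothetical.
    library(caret)

    val <- read.csv("validation_points_new_area.csv")   # columns: observed, predicted
    lev <- union(val$observed, val$predicted)

    # Overall accuracy, kappa, and per-class statistics.
    confusionMatrix(factor(val$predicted, levels = lev),
                    factor(val$observed,  levels = lev))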

 

  2. How were the canopy height models chosen?

The canopy height model algorithm was chosen based on prior research and iterations to obtain the best results. The normalized DTM point cloud uses an algorithm that implements a point-to-raster (p2r) method for creating a denser collection of points, resulting in a smoother CHM raster [61, 88]. The CHM was created with a p2r of 0.25 and a cell resolution of 0.5 m for a model that best captured vegetation height. Lines 260-264.
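For illustration, a minimal lidR sketch of the CHM step with the reported parameters (p2r subcircle of 0.25 and 0.5 m cells); nlas stands for a height-normalized point cloud as in the earlier DTM sketch, and the file name is hypothetical.

    # Minimal sketch of the CHM step (lidR); not the exact paper script.
    library(lidR)

    nlas <- normalize_height(readLAS("srer_tile.laz"), knnidw(k = 12, p = 2))

    # Point-to-raster CHM: 0.25 m subcircle, 0.5 m cell size.
    chm <- rasterize_canopy(nlas, res = 0.5, algorithm = p2r(subcircle = 0.25))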

Author Response File: Author Response.docx

Round 2

Reviewer 2 Report

Thank you for all the replies.
