Technical Note
Peer-Review Record

A Study on the Effect of Multispectral LiDAR Data on Automated Semantic Segmentation of 3D-Point Clouds

Remote Sens. 2022, 14(24), 6349; https://doi.org/10.3390/rs14246349
by Valentin Vierhub-Lorenz 1,*, Maximilian Kellner 1, Oliver Zipfel 1 and Alexander Reiterer 1,2
Reviewer 1: Anonymous
Reviewer 2: Anonymous
Submission received: 2 November 2022 / Revised: 12 December 2022 / Accepted: 13 December 2022 / Published: 15 December 2022
(This article belongs to the Section Urban Remote Sensing)

Round 1

Reviewer 1 Report

The paper is clear and concise; I had no trouble understanding it. The results may be useful for a practitioner, but no fundamentally new methodology or theory is presented. There are several studies on multispectral LiDAR and point cloud classification that are not included in the related work. To name only a few from the Remote Sensing journal:
- Jing, Z., et al. Multispectral LiDAR Point Cloud Classification Using SE-PointNet++. Remote Sensing, 2021.
- Zhang, Z., et al. Introducing Improved Transformer to Land Cover Classification Using Multispectral LiDAR Point Clouds. Remote Sensing, 2022.
- Chen, B., et al. Multispectral LiDAR Point Cloud Classification: A Two-Step Approach. Remote Sensing, 2017.
The differences and similarities with these and other related work should be at least briefly explained.

Some minor issues:
- Figure 5 is referenced before Figures 3 and 4.
- In Figure 3c, the colors for the different types of connections should be changed to be more easily distinguishable. The layer output sizes should be added to the image.
- In line 137, how is the point cloud mapped to the voxel grid - is the bounding box used? I expect that this can change object sizes in voxels considerably (e.g., we may have a very wide scan and a very narrow one, both discretized at the same voxel grid resolution; in one the cars will be only a few voxels wide, in the other many more).
- Section 2 is a bit hastily written, with some grammatical errors, e.g. the capital "Where" in line 164 and "liner" in line 166.
- In line 174, what is the unit of the random point displacement - is it cm?
- In line 175, when the whole point cloud is rotated, how is it afterwards aligned with the voxel grid (e.g., if an axis-aligned bounding box is used, there can be a lot of extra empty space in the corners)?
- In line 209, the reference should be to Figure 5, not 4.

The description in lines 238-243 should be moved to Section 2. In addition, the following should be explained:
- How is the class of a voxel determined during discretization - is it the majority class of the points in the voxel?
- Since a considerable part of the point cloud is unclassified, why is the "other" label not included in the classification? Every such voxel is classified into one of the classes - is this considered an error in the evaluation, or are such voxels omitted from evaluation?

There is no Conclusion section.

Author Response

Thank you for the very constructive feedback.

Please see the attachment for a detailed point-by-point response.

Author Response File: Author Response.docx

Reviewer 2 Report

The presented paper evaluates the effect of a novel multispectral LiDAR system on the automated semantic segmentation of 3D point clouds. The multispectral LiDAR system and its implementation on a mobile mapping vehicle are presented, and the point cloud processing and the training of a convolutional neural network are described in the paper.

The manuscript is clearly arranged. Quite a lot of the cited references are older than five years. It does not include an excessive number of self-citations. The presented experiment is appropriately designed, and the conclusions are clear and consistent with the results presented. The tables and figures are adequate. I only have reservations about Figure 3; see my specific comments below.

My specific comments:

I recommend arranging the numbering of the literature according to the order of citation in the paper.

The paragraph on lines 107-110 - better describe the measured residential areas and the scan parameters (e.g., how far from the scanner the measurements were taken, the density of points on the surface, etc.).

Lines 122-123 - explain in more detail how the distance dependency of the intensity was calibrated.

Figure 3 - I do not see a significant difference between parts a) and b). Part c) is incomprehensible. I recommend a more detailed description in the text.

Author Response

Thank you for the very constructive feedback.

Please see the attachment for a detailed point-by-point response.

Author Response File: Author Response.docx
