Article
Peer-Review Record

Learning Contours for Point Cloud Completion

Remote Sens. 2023, 15(17), 4338; https://doi.org/10.3390/rs15174338
by Jiabo Xu 1, Zeyun Wan 2 and Jingbo Wei 2,*
Reviewer 1:
Reviewer 2: Anonymous
Reviewer 3: Anonymous
Submission received: 28 July 2023 / Revised: 28 August 2023 / Accepted: 30 August 2023 / Published: 3 September 2023

Round 1

Reviewer 1 Report

The reported good results raise a number of questions, and some of them puzzled me a lot. I hope the authors can clarify the following issues first.


1: For the four weighting coefficients in the L_confidence loss, are they kept at 2, 15, 0.3, and 2 for all three experiments? Or are different settings used for different datasets? If so, what are the settings for the other two datasets?

2: For the last experiment, although ground truth is not available, what is the approximate degree of missing data in the airborne laser scanning (ALS) dataset? In addition, for the point cloud completion task, a metric of the missing-data degree is meaningful only when the missing-data distribution is also specified. Clearly, if the missing data follow a uniform distribution, completion is much easier. Hence the authors need to provide a more meaningful description of the ALS dataset; otherwise the last experiment is not informative.

3: More fundamentally, what is the real contribution of the so-called "contour learning"? This is the key claimed contribution of this manuscript, but unfortunately its description is sketchy; only in the loss-function part do the authors give an outline of it. I hope the authors can give more details on the contour-learning part, including:

         3.1): Grid size clearly affects the labeling of valid and invalid grids, and in turn the "contours"; a discussion of this issue is needed.

         3.2): The classifier on the valid grids needs more clarification. If the threshold of 0.5 is slightly changed, how will the diffusion result be affected? The authors should report some experimental results on this issue.

         3.3): Among the weighting coefficients, "2" is set for I_contours, which is much less than "16" for V_missing. Why can such a minor term in the L_confidence loss play such a magic role in the performance improvement? The authors need a careful analysis.

         3.4): The results in Table 1 puzzle me a lot. Ours (no contour) is much worse than Ours, and also much worse than the other listed competitors. Why? Without contour learning, I would expect the method (Ours (no contour)) to work SIMILARLY to the competitors rather than much worse; what is the cause? From this result, I suspect that contour learning is not the key contributor to the good result of Ours in Table 1. If so, the authors should explain it.

         3.5): In the current DL era, DNNs are very complex and involve many complicated operations. With a limited dataset, it is quite possible that the reported good results are due to careful parameter tuning rather than the claimed principled contour learning. Hence it is not sufficient to report good results; it is absolutely necessary to demonstrate that the "good" results are truly due to the proposed novelty.


All in all, I hope the authors can simplify the commonly used modules, such as the attention unit, and instead concentrate on contour learning and its role. More specifically: specify the contour-learning steps, analyze the sensitivity to the involved parameter settings (grid size, valid-grid threshold), and localize and demonstrate its proper contribution.
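The sensitivity analysis requested above can be illustrated with a minimal sketch. All names and data here are hypothetical stand-ins (synthetic points, simulated occupancy probabilities), not the authors' implementation: the sketch only shows how one would sweep the valid-grid threshold around 0.5 and the voxel grid size, then observe how the valid/invalid labeling changes.

```python
import numpy as np

def voxelize(points, grid_size):
    # Map each 3D point to an integer voxel index at the given grid size.
    return np.floor(points / grid_size).astype(int)

def valid_grid_fraction(probs, threshold):
    # Fraction of grids classified as "valid" at a given probability threshold.
    return float(np.mean(probs >= threshold))

rng = np.random.default_rng(0)
points = rng.uniform(0.0, 1.0, size=(10_000, 3))  # synthetic point cloud in a unit cube
probs = rng.beta(2, 2, size=4096)                 # stand-in for predicted grid probabilities

# Sweep the valid-grid threshold around 0.5 and report sensitivity.
for t in (0.3, 0.4, 0.5, 0.6, 0.7):
    print(f"threshold={t:.1f}  valid fraction={valid_grid_fraction(probs, t):.3f}")

# Sweep the grid size and report how many voxels end up occupied.
for g in (0.05, 0.1, 0.2):
    occupied = len({tuple(v) for v in voxelize(points, g)})
    print(f"grid_size={g:.2f}  occupied voxels={occupied}")
```

A real study would replace the synthetic `probs` and `points` with the network's predicted confidences and the evaluation point clouds, and report the downstream completion metric (e.g., Chamfer distance) at each setting rather than raw counts.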


Some minor notes:

"identify the performance" -> "demonstrate or validate the performance"?

"any a valid voxel" -> "each valid voxel";

"I_contour denotes the invalid grids that are adjacent to valid grids": a definition of adjacency is needed.

"compared with that of state-of-the-art models digitally and visually": "digitally" -> "quantitatively";

"manifestation" -> "visualization".

Author Response

Please see the attached file.

Author Response File: Author Response.pdf

Reviewer 2 Report

This paper presents a new deep learning method for point cloud completion. The topic is interesting, since point cloud incompleteness is a challenge for data fusion. The document is well organized and structured. However, some points need to be addressed before I can recommend this paper for publication as it stands.

1. Figure 2: What is the meaning of W/2? Is it a x2 hierarchical Patchify? But in Figure 4, the 2-stage hierarchical Patchify divides into 4 patches, so I don't understand the meaning of W/2.

2. Lines 260-261: The number of points is used as the description of the dataset, but readers do not know how many ShapeNet samples are used for training and testing. I think the number of samples also needs to be stated.

3. Lines 300-304: "The priori car models are learned from the ShapeNet dataset" - does this mean the training dataset consisted of the car samples from ShapeNet? Did you use all categories of ShapeNet as the training set for completing cars, or only the car samples as the training set to complete cars in KITTI?

4. As described in this paper, voxelization is needed before the encoder, which would bring considerable computing cost. The paper does not mention the training time, the parameter settings, or the running conditions; I think these are needed for readers to understand the proposed method well.

Author Response

Please see the attached file.

Author Response File: Author Response.pdf

Reviewer 3 Report

This manuscript presents a novel end-to-end neural network for point cloud completion. The network includes newly designed transformers for the encoder and decoder. Moreover, the experiments also present the novelty of the proposed network well by comparing it with many previous networks on indoor and outdoor datasets.

The proposed approach has merit and the standard is high; therefore, I recommend its publication after the references are supplemented.

 - There is a comprehensive review paper in CVPR referring to many related previous papers: "Comprehensive review of deep learning-based 3d point cloud completion processing and analysis". It would be much better if the Related Work section were supplemented based on this paper.

The manuscript is well written, so I suggest a final check to correct minor typos.

Author Response

Thank you for your advice.

A new reference [17] is cited in Line 33:

Fei et al. [17] investigated the latest and advanced algorithms for point cloud completion together with their methods and contributions.

  1. Fei, B.; Yang, W.; Chen, W.M.; Li, Z.; Li, Y.; Ma, T.; Hu, X.; Ma, L. Comprehensive Review of Deep Learning-Based 3D Point Cloud Completion Processing and Analysis. IEEE Transactions on Intelligent Transportation Systems 2022, 23, 22862–22883.

A final check has been made to avoid mistakes as much as possible.

Round 2

Reviewer 1 Report

no further comments
