Technical Note
Peer-Review Record

Group-in-Group Relation-Based Transformer for 3D Point Cloud Learning

Remote Sens. 2022, 14(7), 1563; https://doi.org/10.3390/rs14071563
by Shaolei Liu 1,2,†, Kexue Fu 1,2,†, Manning Wang 1,2 and Zhijian Song 1,2,*
Reviewer 1: Anonymous
Reviewer 2: Anonymous
Reviewer 3: Anonymous
Submission received: 18 February 2022 / Revised: 15 March 2022 / Accepted: 21 March 2022 / Published: 24 March 2022

Round 1

Reviewer 1 Report

I thank the authors for their efforts and the nice work presented in this paper. Overall, the paper looks good to me; I have just two remarks that I hope the authors will take into consideration:
1. At the start of the Introduction, the authors should mention more applications of "point cloud processing tasks" to follow the context of the opening paragraph.
2. The results part needs more details about the experiments and the interpretation of the results, especially in the segmentation part, where other methods provide better results in some categories; it would be nice to give some interpretation of this.

Author Response

Please see the attachment.

Author Response File: Author Response.docx

Reviewer 2 Report

Overview

The paper is well-written and well-structured. The key ideas are easy to follow and to understand.

 

Specific Comments (C)

C1 – At the end of section 1, please add a paragraph stating the organization of the remainder of the paper (the remaining sections).

C2 – Figure 1 is placed on page 3. The only reference to this figure is on page 4, in section 3.2. In my opinion, there should be a reference to the figure before it appears in the text.

C3 – At the beginning of section 4, please add some text describing the organization of the experimental results.

C4 – At the end of section 5 – Conclusions, please add some directions for future work.

 

Writing (W)

 

Medical Image Computing and Computer Assited Intervention

->

Medical Image Computing and Computer Assisted Intervention

 

Line 7. Please, define the RFA acronym.

Line 38. Please, define the PCT acronym.

Line 46. Please, define the TNT acronym.

Line 66.

These methods [8,24,25] mainly

->

The multi-view based methods [8,24,25] mainly

 

Line 74.

These methods [7,27,28] voxelize

->

The voxel-based methods [7,27,28] voxelize

 

Line 97

This kind of methods

->

The hybrid-data methods

 

Line 98

and kd-tree).integrate

->

and kd-tree) integrate

 

Lines 126 and 146

MLPs

->

MLP

 

Line 128. Please, define the GELU acronym.

 

Line 131

of the input respectively.

->

of the input, respectively.

 

Line 156

Please, define the DRBL acronym.

 

Line 177

to determine the group is sparse or dense.

->

to determine if the group is sparse or dense.

Author Response

Please see the attachment.

Author Response File: Author Response.docx

Reviewer 3 Report

Very interesting! The method is very promising according to the results.

I would like to know the time performance during training and inference of the proposed model compared to existing models. It would probably be better if this time comparison were presented in the manuscript.

Lastly, will the Authors release the source codes?

Author Response

Please see the attachment.

Author Response File: Author Response.docx

Round 2

Reviewer 1 Report

I thank the authors for their efforts. The paper now looks fine for publication.
