Procedural Point Cloud and Mesh Editing for Urban Planning Using Blender
Round 1
Reviewer 1 Report
Comments and Suggestions for Authors
The paper focuses on the processing, manipulation, and visualization of point cloud data using Blender, specifically in the context of urban point cloud operations and rendering techniques. It serves as a comprehensive user manual or technical report, with well-executed experiments. However, the following issues need to be addressed:
1. The paper outlines three contributions, primarily centered around Blender's point cloud processing and rendering methods. However, the core topic of urban point cloud construction and software improvement is not adequately covered. It is recommended to include this in the paper.
2. The literature review in the "Related Work" section is incomplete. It does not cover relevant research related to the paper's topic, such as Blensor, an improved point cloud generation tool based on Blender, which can simulate various LiDARs for scene point cloud rendering. Additionally, some studies utilizing Blensor for complex scene construction and point cloud generation should be cited, such as:
Zhao, L., Hu, Y., Han, F., Dou, Z., Li, S., Zhang, Y., & Wu, Q. (2020). Multi-sensor missile-borne LiDAR point cloud data augmentation based on Monte Carlo distortion simulation. CAAI Transactions on Intelligence Technology.
Zhao, L., Hu, Y., Yang, X., Dou, Z., & Kang, L. (2024). Robust multi-task learning network for complex LiDAR point cloud data preprocessing. Expert Systems with Applications, 237, 121552.
Gschwandtner, M., Kwitt, R., Uhl, A., & Pree, W. (2011). BlenSor: Blender sensor simulation toolbox. In Advances in Visual Computing: 7th International Symposium, ISVC 2011, Las Vegas, NV, USA, September 26-28, 2011. Proceedings, Part II 7 (pp. 199-208). Springer Berlin Heidelberg.
Griffiths, D., & Boehm, J. (2019). SynthCity: A large-scale synthetic point cloud. arXiv preprint arXiv:1907.04758.
3. The Background section could be moved to an appendix, as it is overly lengthy for the current placement in the paper.
4. The experimental and discussion sections are well-written, but some of the excessive parameter descriptions could also be placed in an appendix.
Author Response
Comment #1: The paper outlines three contributions, primarily centered around Blender's point cloud processing and rendering methods. However, the core topic of urban point cloud construction and software improvement is not adequately covered. It is recommended to include this in the paper.
Response: Thank you for this valuable input. We added references [1, 2, 5, 6] to papers discussing point cloud construction using LiDAR technology in the Related Work section. We also note our software-related contributions in the Introduction section.
Comment #2: The literature review in the "Related Work" section is incomplete. It does not cover relevant research related to the paper's topic, such as Blensor, an improved point cloud generation tool based on Blender, which can simulate various LiDARs for scene point cloud rendering. Additionally, some studies utilizing Blensor for complex scene construction and point cloud generation should be cited, such as:
Zhao, L., Hu, Y., Han, F., Dou, Z., Li, S., Zhang, Y., & Wu, Q. (2020). Multi-sensor missile-borne LiDAR point cloud data augmentation based on Monte Carlo distortion simulation. CAAI Transactions on Intelligence Technology.
Zhao, L., Hu, Y., Yang, X., Dou, Z., & Kang, L. (2024). Robust multi-task learning network for complex LiDAR point cloud data preprocessing. Expert Systems with Applications, 237, 121552.
Gschwandtner, M., Kwitt, R., Uhl, A., & Pree, W. (2011). BlenSor: Blender sensor simulation toolbox. In Advances in Visual Computing: 7th International Symposium, ISVC 2011, Las Vegas, NV, USA, September 26-28, 2011. Proceedings, Part II 7 (pp. 199-208). Springer Berlin Heidelberg.
Griffiths, D., & Boehm, J. (2019). SynthCity: A large-scale synthetic point cloud. arXiv preprint arXiv:1907.04758.
Response: Thank you for pointing out the missing work. We agree that it situates our paper better within the research field. We added the works [7, 8, 4, 3] to meaningful subsections of the Related Work section.
Comment #3: The Background section could be moved to an appendix, as it is overly lengthy for the current placement in the paper.
Response: Thank you for the suggestion. We moved the section in question to the appendix, under the title Overview of related solutions.
Comment #4: The experimental and discussion sections are well-written, but some of the excessive parameter descriptions could also be placed in an appendix.
Response: As with the previous point, we agree with the suggestion. We identified the excessive parameter descriptions and moved them to an Appendix section titled Geometry Nodes Setups.
References
[1] Dong Chen, Liqiang Zhang, P. Takis Mathiopoulos, and Xianfeng Huang. "A Methodology for Automated Segmentation and Reconstruction of Urban 3-D Buildings from ALS Point Clouds". In: IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing 7.10 (2014), pp. 4199–4217. doi: 10.1109/jstars.2014.2349003.
[2] Meida Chen, Qingyong Hu, Zifan Yu, Hugues Thomas, Andrew Feng, Yu Hou, Kyle McCullough, Fengbo Ren, and Lucio Soibelman. STPLS3D: A Large-Scale Synthetic and Real Aerial Photogrammetry 3D Point Cloud Dataset. 2022. url: https://arxiv.org/abs/2203.09065.
[3] David Griffiths and Jan Boehm. SynthCity: A large scale synthetic point cloud. 2019. url: https://arxiv.org/abs/1907.04758.
[4] Michael Gschwandtner, Roland Kwitt, Andreas Uhl, and Wolfgang Pree. "BlenSor: Blender Sensor Simulation Toolbox". In: Advances in Visual Computing. Ed. by George Bebis, Richard Boyle, Bahram Parvin, Darko Koracin, Song Wang, Kim Kyungnam, Bedrich Benes, Kenneth Moreland, Christoph Borst, Stephen DiVerdi, Yi-Jen Chiang, and Ming Jiang. Berlin, Heidelberg: Springer Berlin Heidelberg, 2011, pp. 199–208. isbn: 978-3-642-24031-7.
[5] Qingyong Hu, Bo Yang, Sheikh Khalid, Wen Xiao, Niki Trigoni, and Andrew Markham. "SensatUrban: Learning Semantics from Urban-Scale Photogrammetric Point Clouds". In: International Journal of Computer Vision 130.2 (2022), pp. 316–343. issn: 1573-1405. doi: 10.1007/s11263-021-01554-9.
[6] Thinal Raj, Fazida Hanim Hashim, Aqilah Baseri Huddin, Mohd Faisal Ibrahim, and Aini Hussain. "A Survey on LiDAR Scanning Mechanisms". In: Electronics 9.5 (2020). issn: 2079-9292. doi: 10.3390/electronics9050741.
[7] Luda Zhao, Yihua Hu, Fei Han, Zhenglei Dou, Shanshan Li, Yan Zhang, and Qilong Wu. "Multi-sensor missile-borne LiDAR point cloud data augmentation based on Monte Carlo distortion simulation". In: CAAI Transactions on Intelligence Technology 10.1 (2024), pp. 300–316. doi: 10.1049/cit2.12389.
[8] Luda Zhao, Yihua Hu, Xing Yang, Zhenglei Dou, and Linshuang Kang. "Robust multi-task learning network for complex LiDAR point cloud data preprocessing". In: Expert Systems with Applications 237 (2024), p. 121552. issn: 0957-4174. doi: 10.1016/j.eswa.2023.121552.
Reviewer 2 Report
Comments and Suggestions for Authors
This study represents a significant step forward in making Blender a viable alternative for urban planning applications, with innovations in non-destructive point cloud editing, procedural modeling, performance benchmarking, and cost-effective solutions.
The study explores Blender’s potential for procedural point cloud and mesh editing in urban planning, proposing it as an open-source alternative to proprietary software. It introduces a non-destructive workflow for editing point clouds directly within Blender using Geometry Nodes, preserving data integrity and allowing advanced transformations.
The research benchmarks Blender against Rhinoceros Grasshopper, highlighting Blender’s greater stability and scalability for large datasets and identifying its limitations in real-time performance.
Key innovations include procedural modeling, Grease Pencil-based point manipulation, and level-of-detail (LoD) management, which enhance efficiency and flexibility in handling urban datasets.
Despite Blender’s advantages in attribute-based editing and visualization, challenges remain in real-time interaction speed and built-in point cloud functionalities.
The study contributes to advancing the state of the art by demonstrating Blender’s viability for urban planning workflows.
However, it also underscores the need for further optimizations to fully realize Blender's potential for urban planning.
The Discussion section may interpret findings, compare them with existing literature, highlight theoretical and practical implications, acknowledge limitations, and suggest directions for future research. The authors in this section simply reported their findings without interpreting them, relating them to the cited literature, or highlighting their theoretical implications. This is the weakest part of the text and should be improved.
As for the figures, Figure 11 (d) should be in the same orientation as the others (a, b, and c), as this would facilitate the comparison the authors seek in this image.
Author Response
Comment #1: The Discussion section may interpret findings, compare them with existing literature, highlight theoretical and practical implications, acknowledge limitations, and suggest directions for future research. The authors in this section simply reported their findings without interpreting them, relating them to the cited literature, or highlighting their theoretical implications. This is the weakest part of the text and should be improved.
Response: Thank you for your comment. We extended the Discussion section to include our interpretation of the results. We point out different aspects of performance in both programs, discuss techniques, and present our conclusions on Geometry Nodes editing, its strengths and weaknesses. We pointed out some limitations already in the original submission but built on top of that in the revised version of the paper. We could not find comparable related work against which to compare our findings. We also added pointers to possible future work.
Comment #2: As for the figures, figure 11 (d) should be in the same orientation as the others (a, b, and c), as it would facilitate the comparison sought by the authors in this image.
Response: Thank you for pointing out the problems with Figure 11. We have addressed the issue and adapted the figure.
Reviewer 3 Report
Comments and Suggestions for Authors
Whilst the article is important in that it shows the feasibility of conducting point cloud studies with Blender (which is free software), it reads much more as a technical article than a scientific one. In order to improve it for publication, I suggest that the authors:
1) Emphasize the results more, not only in terms of time comparisons, but also by highlighting different performance in different computer settings.
2) Provide a more descriptive comparison between output renderings in Rhinoceros and Blender. Figures 11 a-d basically show that, but the authors fail to properly compare them. Perhaps zoomed-in areas and similar renderings could show the similarities and the differences between them. Please elaborate more on this topic.
3) Another item regarding usability. Rhinoceros allows the immediate manipulation of LiDAR point clouds, whilst Blender requires a lot of pre-processing steps (as of now... Blender is very powerful and might eventually incorporate that). You need to provide a paragraph or two commenting on the learning curve, the difficulties, and the limitations for users with less computational ability.
Likewise, the conclusion section needs to be further enhanced. As of now, it is too short. I kindly ask the authors to provide the revisions above so, when this article returns to me for reevaluation, I can provide better input on the conclusions. Since they rely so much on the results, I believe it would be better to improve the results, then improve the conclusions. This could be done all at once and then sent to reviewers.
Author Response
Comment #1: Emphasize more the results, not only in terms of time comparisons, but also highlighting different performances in different computer settings.
Response: Thank you for pointing out the weaknesses of the Results section. We have added the benchmarks measured on a consumer-grade laptop.
Comment #2: A more descriptive comparison between output renderings in Rhinoceros and Blender. Figures 11 a-d basically show that but the authors fail to properly compare them. Perhaps zoomed in areas and similar renderings could show the similarities and the differences between them. Please elaborate more on this topic.
Response: Thank you for the suggestion. We provided additional explanations in the figure captions and in the Results section. We also enhanced our renders with close-ups of areas to highlight the features we deemed interesting and which we discuss in the text.
Comment #3: Another item regarding usability. Rhinoceros allows the immediate manipulation of LiDAR point clouds, whilst Blender requires a lot of pre-processing steps (as of now... Blender is very powerful and might eventually incorporate that). You need to provide a paragraph or two commenting about the learning curve, the difficulties and limitations for users with less computational abilities.
Response: Thank you for your comment. We added an appropriate comment to the Discussion section, pointing out the issues Rhinoceros users might encounter in Blender, and highlighting how Blender's generality might result in more complex workflows even for simpler tasks.
Comment #4: Likewise, the conclusion section needs to be further enhanced. As of now, it is too short. I kindly ask the authors to provide the revisions above so, when this article returns to me for reevaluation, I can provide better input on the conclusions. Since they rely so much on the results, I believe it would be better to improve the results, then improve the conclusions. This could be done all at once and then sent to reviewers.
Response: We believe that the reviewer was addressing the Discussion section, which highlights and interprets our results. We extended that section, providing more insights and interpretations.
Round 2
Reviewer 1 Report
Comments and Suggestions for Authors
Accepted.
Reviewer 3 Report
Comments and Suggestions for Authors
I believe the authors have addressed my major comments and the article is fit for publication. However, if the authors want to improve the discussion section a bit more, they may, mostly to make the article sound more scientific and less like a technical report.