RE-PU: A Self-Supervised Arbitrary-Scale Point Cloud Upsampling Method Based on Reconstruction
Abstract
1. Introduction
- We propose a novel reconstruction-based point cloud upsampling framework (a minimal sketch of the assumed pipeline is given after this list).
- We introduce a prior-based point cloud processing network, which can be utilized for both reconstruction and upsampling.
- We demonstrate that the proposed method achieves results comparable to state-of-the-art methods in both visual quality and quantitative metrics.
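The contributions above describe a single prior-based network that is trained to reconstruct a point cloud and then reused to upsample it at an arbitrary rate. The PyTorch sketch below illustrates how such a reconstruction-based, arbitrary-scale upsampler could be wired together; the encoder/decoder stand-ins, feature sizes, and the 2D prior used here are placeholder assumptions for illustration, not the authors' exact RE-PU architecture (Section 3.3 covers the actual dynamic-graph encoder and offset-attention decoder).

```python
# Hypothetical sketch of a reconstruction-based, arbitrary-scale upsampler:
# the encoder maps the sparse input cloud to a shape code, the decoder maps
# prior samples plus the shape code back to 3D points. All names, sizes, and
# the 2D prior are illustrative placeholders, not the RE-PU implementation.
import torch
import torch.nn as nn

class ReconstructionUpsampler(nn.Module):
    def __init__(self, feat_dim=256, prior_dim=2):
        super().__init__()
        # stand-ins for the dynamic-graph encoder and offset-attention decoder
        self.encoder = nn.Sequential(nn.Linear(3, feat_dim), nn.ReLU(),
                                     nn.Linear(feat_dim, feat_dim))
        self.decoder = nn.Sequential(nn.Linear(feat_dim + prior_dim, feat_dim),
                                     nn.ReLU(), nn.Linear(feat_dim, 3))

    def encode(self, pts):                       # pts: (B, N, 3)
        feat = self.encoder(pts)                 # per-point features (B, N, F)
        return feat.max(dim=1).values            # global shape code (B, F)

    def decode(self, code, prior):               # prior: (B, M, prior_dim)
        code = code.unsqueeze(1).expand(-1, prior.size(1), -1)
        return self.decoder(torch.cat([code, prior], dim=-1))   # (B, M, 3)

    def forward(self, pts, rate=1.0):
        code = self.encode(pts)
        m = int(rate * pts.size(1))              # arbitrary upsampling rate r
        prior = torch.rand(pts.size(0), m, 2, device=pts.device)
        return self.decode(code, prior)

net = ReconstructionUpsampler()
sparse = torch.rand(1, 2048, 3)
recon = net(sparse, rate=1.0)    # training target: reconstruct the input itself
dense = net(sparse, rate=4.0)    # inference: draw 4x as many prior samples
print(recon.shape, dense.shape)  # (1, 2048, 3) (1, 8192, 3)
```

Training would minimize a reconstruction loss (e.g., Chamfer Distance) between the decoded points and the input cloud itself, which is what makes the scheme self-supervised; upsampling only changes how many prior samples are drawn.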
2. Related Work
2.1. Deep Learning-Based Point Cloud Upsampling
- A. Classical point cloud upsampling
- B. Modern point cloud upsampling
2.2. Point Cloud AutoEncoder
2.3. Implicit Neural 3D Representation
3. Method
3.1. Problem Formulation
3.2. Overview
3.3. Network
- A. Encoder Based on Dynamic Graph
- B. Decoder Based on Offset Attention
- C. Prior Distribution
- D. Loss Function
3.4. Point Cloud Reconstruction and Upsampling
4. Experiments
4.1. Implementation Details
4.2. Datasets and Metrics
4.3. Quantitative and Qualitative Results
4.4. Other Experiments
- A. Results of Different Sizes of Point Clouds
- B. Results of Noisy Point Clouds
- C. Results of Varying Upsampling Rates
4.5. Analysis
- A. Reconstruction and Upsampling
- B. Prior Distribution
- C. Encoder and Decoder
5. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Conflicts of Interest
References
- Luo, L.; Tang, L.; Zhou, W.; Wang, S.; Yang, Z.X. Pu-eva: An edge-vector based approximation solution for flexible-scale point cloud upsampling. In Proceedings of the IEEE/CVF International Conference on Computer Vision, Montreal, QC, Canada, 10–17 October 2021; pp. 16208–16217. [Google Scholar]
- Liu, Y.; Wang, Y.; Liu, Y. Refine-PU: A Graph Convolutional Point Cloud Upsampling Network using Spatial Refinement. In Proceedings of the 2022 IEEE International Conference on Visual Communications and Image Processing (VCIP), Suzhou, China, 13–16 December 2022; IEEE: Piscataway, NJ, USA, 2022; pp. 1–5. [Google Scholar]
- Li, T.; Lin, Y.; Cheng, B.; Ai, G.; Yang, J.; Fang, L. PU-CTG: A Point Cloud Upsampling Network Using Transformer Fusion and GRU Correction. Remote Sens. 2024, 16, 450. [Google Scholar] [CrossRef]
- Akhtar, A.; Li, Z.; Van der Auwera, G.; Li, L.; Chen, J. Pu-dense: Sparse tensor-based point cloud geometry upsampling. IEEE Trans. Image Process. 2022, 31, 4133–4148. [Google Scholar] [CrossRef] [PubMed]
- Huang, H.; Wu, S.; Gong, M.; Cohen-Or, D.; Ascher, U.; Zhang, H. Edge-aware point set resampling. ACM Trans. Graph. (TOG) 2013, 32, 1–12. [Google Scholar] [CrossRef]
- Qi, C.R.; Su, H.; Mo, K.; Guibas, L.J. Pointnet: Deep learning on point sets for 3d classification and segmentation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2017; pp. 652–660. [Google Scholar]
- Ledig, C.; Theis, L.; Huszár, F.; Caballero, J.; Cunningham, A.; Acosta, A.; Aitken, A.; Tejani, A.; Totz, J.; Wang, Z.; et al. Photo-realistic single image super-resolution using a generative adversarial network. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2017; pp. 4681–4690. [Google Scholar]
- Du, H.; Yan, X.; Wang, J.; Xie, D.; Pu, S. Point cloud upsampling via cascaded refinement network. In Proceedings of the Asian Conference on Computer Vision, Macao, China, 4–8 December 2022; pp. 586–601. [Google Scholar]
- Qiu, S.; Anwar, S.; Barnes, N. Pu-transformer: Point cloud upsampling transformer. In Proceedings of the Asian Conference on Computer Vision, Macao, China, 4–8 December 2022; pp. 2475–2493. [Google Scholar]
- Lim, S.; El-Basyouny, K.; Yang, Y.H. PU-Ray: Domain-Independent Point Cloud Upsampling via Ray Marching on Neural Implicit Surface. IEEE Trans. Intell. Transp. Syst. 2024, 1, 1–11. [Google Scholar] [CrossRef]
- Wang, Y.; Sun, Y.; Liu, Z.; Sarma, S.E.; Bronstein, M.M.; Solomon, J.M. Dynamic graph cnn for learning on point clouds. ACM Trans. Graph. 2019, 38, 1–12. [Google Scholar] [CrossRef]
- Guo, M.H.; Cai, J.X.; Liu, Z.N.; Mu, T.J.; Martin, R.R.; Hu, S.M. Pct: Point cloud transformer. Comput. Vis. Media 2021, 7, 187–199. [Google Scholar] [CrossRef]
- Zhang, Y.; Zhao, W.; Sun, B.; Zhang, Y.; Wen, W. Point cloud upsampling algorithm: A systematic review. Algorithms 2022, 15, 124. [Google Scholar] [CrossRef]
- Yu, L.; Li, X.; Fu, C.W.; Cohen-Or, D.; Heng, P.A. Pu-net: Point cloud upsampling network. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–23 June 2018; pp. 2790–2799. [Google Scholar]
- Qi, C.R.; Yi, L.; Su, H.; Guibas, L.J. Pointnet++: Deep hierarchical feature learning on point sets in a metric space. Adv. Neural Inf. Process. Syst. 2017, 30, 5105–5114. [Google Scholar]
- Yifan, W.; Wu, S.; Huang, H.; Cohen-Or, D.; Sorkine-Hornung, O. Patch-based progressive 3D point set upsampling. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Long Beach, CA, USA, 15–20 June 2019; pp. 5958–5967. [Google Scholar]
- Qian, G.; Abualshour, A.; Li, G.; Thabet, A.; Ghanem, B. Pu-gcn: Point cloud upsampling using graph convolutional networks. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Nashville, TN, USA, 20–25 June 2021; pp. 11683–11692. [Google Scholar]
- Li, R.; Li, X.; Heng, P.A.; Fu, C.W. Point cloud upsampling via disentangled refinement. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Nashville, TN, USA, 20–25 June 2021; pp. 344–353. [Google Scholar]
- Long, C.; Zhang, W.; Li, R.; Wang, H.; Dong, Z.; Yang, B. Pc2-pu: Patch correlation and point correlation for effective point cloud upsampling. In Proceedings of the 30th ACM International Conference on Multimedia, Lisboa, Portugal, 10–14 October 2022; pp. 2191–2201. [Google Scholar]
- Wang, J.; Chen, J.; Shi, Y.; Ling, N.; Yin, B. SSPU-Net: A Structure Sensitive Point Cloud Upsampling Network with Multi-Scale Spatial Refinement. In Proceedings of the 31st ACM International Conference on Multimedia, Ottawa, ON, Canada, 29 October–3 November 2023; pp. 1546–1555. [Google Scholar]
- Zhao, W.; Zhang, H.; Zheng, C.; Yan, X.; Cui, S.; Li, Z. CPU: Codebook Lookup Transformer with Knowledge Distillation for Point Cloud Upsampling. In Proceedings of the 31st ACM International Conference on Multimedia, Ottawa, ON, Canada, 29 October–3 November 2023; pp. 3917–3925. [Google Scholar]
- Cai, P.; Wu, Z.; Wu, X.; Wang, S. Parametric Surface Constrained Upsampler Network for Point Cloud. arXiv 2023, arXiv:2303.08240. [Google Scholar] [CrossRef]
- Qian, Y.; Hou, J.; Kwong, S.; He, Y. PUGeo-Net: A geometry-centric network for 3D point cloud upsampling. In Proceedings of the European Conference on Computer Vision, Glasgow, UK, 23–28 August 2020; Springer: Cham, Switzerland, 2020; pp. 752–769. [Google Scholar]
- Qian, Y.; Hou, J.; Kwong, S.; He, Y. Deep magnification-flexible upsampling over 3d point clouds. IEEE Trans. Image Process. 2021, 30, 8354–8367. [Google Scholar] [CrossRef] [PubMed]
- Li, R.; Li, X.; Fu, C.W.; Cohen-Or, D.; Heng, P.A. Pu-gan: A point cloud upsampling adversarial network. In Proceedings of the IEEE/CVF International Conference on Computer Vision, Seoul, Republic of Korea, 27 October–2 November 2019; pp. 7203–7212. [Google Scholar]
- Liu, H.; Yuan, H.; Hou, J.; Hamzaoui, R.; Gao, W. Pufa-gan: A frequency-aware generative adversarial network for 3d point cloud upsampling. IEEE Trans. Image Process. 2022, 31, 7389–7402. [Google Scholar] [CrossRef] [PubMed]
- Zhou, K.; Dong, M.; Arslanturk, S. “Zero-Shot” Point Cloud Upsampling. In Proceedings of the 2022 IEEE International Conference on Multimedia and Expo (ICME), Taipei, Taiwan, 18–22 July 2022; IEEE: Piscataway, NJ, USA, 2022; pp. 1–6. [Google Scholar]
- Kumbar, A.; Anvekar, T.; Tabib, R.A.; Mudenagudi, U. ASUR3D: Arbitrary Scale Upsampling and Refinement of 3D Point Clouds using Local Occupancy Fields. In Proceedings of the 2023 IEEE/CVF International Conference on Computer Vision Workshops (ICCVW), Paris, France, 2–6 October 2023; IEEE: Piscataway, NJ, USA, 2023; pp. 1644–1653. [Google Scholar]
- Kumbar, A.; Anvekar, T.; Vikrama, T.A.; Tabib, R.A.; Mudenagudi, U. TP-NoDe: Topology-aware Progressive Noising and Denoising of Point Clouds towards Upsampling. In Proceedings of the IEEE/CVF International Conference on Computer Vision, Paris, France, 2–6 October 2023; pp. 2272–2282. [Google Scholar]
- Zhao, Y.; Hui, L.; Xie, J. Sspu-net: Self-supervised point cloud upsampling via differentiable rendering. In Proceedings of the 29th ACM International Conference on Multimedia, Virtual, 20–24 October 2021; pp. 2214–2223. [Google Scholar]
- Zhao, W.; Liu, X.; Zhong, Z.; Jiang, J.; Gao, W.; Li, G.; Ji, X. Self-supervised arbitrary-scale point clouds upsampling via implicit neural representation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, New Orleans, LA, USA, 18–24 June 2022; pp. 1999–2007. [Google Scholar]
- Liu, X.; Liu, X.; Liu, Y.S.; Han, Z. Spu-net: Self-supervised point cloud upsampling by coarse-to-fine reconstruction with self-projection optimization. IEEE Trans. Image Process. 2022, 31, 4213–4226. [Google Scholar] [CrossRef] [PubMed]
- Hu, X.; Mu, H.; Zhang, X.; Wang, Z.; Tan, T.; Sun, J. Meta-SR: A magnification-arbitrary network for super-resolution. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Long Beach, CA, USA, 15–20 June 2019; pp. 1575–1584. [Google Scholar]
- Ye, S.; Chen, D.; Han, S.; Wan, Z.; Liao, J. Meta-PU: An arbitrary-scale upsampling network for point cloud. IEEE Trans. Vis. Comput. Graph. 2021, 28, 3206–3218. [Google Scholar] [CrossRef] [PubMed]
- Feng, W.; Li, J.; Cai, H.; Luo, X.; Zhang, J. Neural points: Point cloud representation with neural fields for arbitrary upsampling. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, New Orleans, LA, USA, 18–24 June 2022; pp. 18633–18642. [Google Scholar]
- Mao, A.; Du, Z.; Hou, J.; Duan, Y.; Liu, Y.J.; He, Y. PU-Flow: A point cloud upsampling network with normalizing flows. IEEE Trans. Vis. Comput. Graph. 2022, 29, 4964–4977. [Google Scholar] [CrossRef] [PubMed]
- Mao, A.; Duan, Y.; Wen, Y.H.; Du, Z.; Cai, H.; Liu, Y.J. Invertible residual neural networks with conditional injector and interpolator for point cloud upsampling. In Proceedings of the Thirty-Second International Joint Conference on Artificial Intelligence, Macao, China, 19–25 August 2023; pp. 1267–1275. [Google Scholar]
- He, Y.; Tang, D.; Zhang, Y.; Xue, X.; Fu, Y. Grad-PU: Arbitrary-Scale Point Cloud Upsampling via Gradient Descent with Learned Distance Functions. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Vancouver, BC, Canada, 17–24 June 2023; pp. 5354–5363. [Google Scholar]
- Xiao, A.; Huang, J.; Guan, D.; Zhang, X.; Lu, S.; Shao, L. Unsupervised point cloud representation learning with deep neural networks: A survey. IEEE Trans. Pattern Anal. Mach. Intell. 2023, 45, 11321–11339. [Google Scholar] [CrossRef] [PubMed]
- Girdhar, R.; Fouhey, D.F.; Rodriguez, M.; Gupta, A. Learning a predictable and generative vector representation for objects. In Proceedings of the Computer Vision–ECCV 2016: 14th European Conference, Amsterdam, The Netherlands, 11–14 October 2016; Proceedings, Part VI 14. Springer: Cham, Switzerland, 2016; pp. 484–499. [Google Scholar]
- Yang, Y.; Feng, C.; Shen, Y.; Tian, D. Foldingnet: Point cloud auto-encoder via deep grid deformation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–23 June 2018; pp. 206–215. [Google Scholar]
- Groueix, T.; Fisher, M.; Kim, V.G.; Russell, B.C.; Aubry, M. A papier-mâché approach to learning 3D surface generation. In Proceedings of the IEEE Conference On Computer Vision And Pattern Recognition, Salt Lake City, UT, USA, 18–23 June 2018; pp. 216–224. [Google Scholar]
- Liu, X.; Han, Z.; Wen, X.; Liu, Y.S.; Zwicker, M. L2g auto-encoder: Understanding point clouds by local-to-global reconstruction with hierarchical self-attention. In Proceedings of the 27th ACM International Conference on Multimedia, Nice, France, 21–25 October 2019; pp. 989–997. [Google Scholar]
- Zhao, Y.; Birdal, T.; Deng, H.; Tombari, F. 3D point capsule networks. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Long Beach, CA, USA, 15–20 June 2019; pp. 1009–1018. [Google Scholar]
- Chen, S.; Duan, C.; Yang, Y.; Li, D.; Feng, C.; Tian, D. Deep unsupervised learning of 3D point clouds via graph topology inference and filtering. IEEE Trans. Image Process. 2019, 29, 3183–3198. [Google Scholar] [CrossRef] [PubMed]
- Gao, X.; Hu, W.; Qi, G.J. Graphter: Unsupervised learning of graph transformation equivariant representations via auto-encoding node-wise transformations. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA, 13–19 June 2020; pp. 7163–7172. [Google Scholar]
- Eckart, B.; Yuan, W.; Liu, C.; Kautz, J. Self-supervised learning on 3D point clouds by learning discrete generative models. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Nashville, TN, USA, 20–25 June 2021; pp. 8248–8257. [Google Scholar]
- Pang, Y.; Wang, W.; Tay, F.E.; Liu, W.; Tian, Y.; Yuan, L. Masked autoencoders for point cloud self-supervised learning. In Proceedings of the European Conference on Computer Vision, Tel Aviv, Israel, 23–27 October 2022; Springer: Cham, Switzerland, 2022; pp. 604–621. [Google Scholar]
- Zhang, R.; Guo, Z.; Gao, P.; Fang, R.; Zhao, B.; Wang, D.; Qiao, Y.; Li, H. Point-m2ae: Multi-scale masked autoencoders for hierarchical point cloud pre-training. Adv. Neural Inf. Process. Syst. 2022, 35, 27061–27074. [Google Scholar]
- Park, J.J.; Florence, P.; Straub, J.; Newcombe, R.; Lovegrove, S. Deepsdf: Learning continuous signed distance functions for shape representation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Long Beach, CA, USA, 15–20 June 2019; pp. 165–174. [Google Scholar]
- Ma, B.; Han, Z.; Liu, Y.S.; Zwicker, M. Neural-Pull: Learning Signed Distance Function from Point clouds by Learning to Pull Space onto Surface. In Proceedings of the International Conference on Machine Learning, PMLR, Virtual, 18–24 July 2021; pp. 7246–7257. [Google Scholar]
- Chen, Z.; Zhang, H. Learning implicit fields for generative shape modeling. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Long Beach, CA, USA, 15–20 June 2019; pp. 5939–5948. [Google Scholar]
- Mescheder, L.; Oechsle, M.; Niemeyer, M.; Nowozin, S.; Geiger, A. Occupancy networks: Learning 3d reconstruction in function space. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Long Beach, CA, USA, 15–20 June 2019; pp. 4460–4470. [Google Scholar]
- Ronneberger, O.; Fischer, P.; Brox, T. U-net: Convolutional networks for biomedical image segmentation. In Proceedings of the Medical Image Computing and Computer-Assisted Intervention–MICCAI 2015: 18th International Conference, Munich, Germany, 5–9 October 2015; Proceedings, Part III 18. Springer: Cham, Switzerland, 2015; pp. 234–241. [Google Scholar]
- Zhao, H.; Jiang, L.; Jia, J.; Torr, P.H.; Koltun, V. Point transformer. In Proceedings of the IEEE/CVF International Conference on Computer Vision, Montreal, QC, Canada, 10–17 October 2021; pp. 16259–16268. [Google Scholar]
- Chang, A.X.; Funkhouser, T.; Guibas, L.; Hanrahan, P.; Huang, Q.; Li, Z.; Savarese, S.; Savva, M.; Song, S.; Su, H.; et al. Shapenet: An information-rich 3D model repository. arXiv 2015, arXiv:1512.03012. [Google Scholar]
Method | CD ↓ | HD ↓ | EMD ↓ | P2F (avg) ↓ | P2F (std) ↓ |
---|---|---|---|---|---|
PU-Net [14] | 0.556 | 4.750 | 40.146 | 4.678 | 5.946 |
MPU [16] | 0.298 | 4.700 | 30.534 | 2.855 | 5.180 |
PU-GAN [25] | 0.280 | 4.640 | 26.243 | 2.330 | 4.431 |
PU-GCN [17] | 0.258 | 1.885 | 24.460 | 2.721 | 3.542 |
Dis-PU [18] | 0.260 | 2.104 | 25.312 | 2.480 | 3.521 |
SSAS [31] | 0.264 | 2.320 | 25.027 | 2.625 | 3.462 |
Grad-PU [38] | 0.245 | 2.369 | 23.348 | 1.893 | 2.875 |
Ours | 0.238 | 2.012 | 22.353 | 2.463 | 2.965 |
Method | CD ↓ | HD ↓ | EMD ↓ | P2F (avg) ↓ | P2F (std) ↓ |
---|---|---|---|---|---|
PU-Net [14] | 1.155 | 15.170 | 91.487 | 4.834 | 6.799 |
MPU [16] | 0.935 | 13.327 | 77.401 | 3.551 | 5.970 |
PU-GAN [25] | 0.873 | 12.146 | 68.534 | 3.189 | 5.682 |
PU-GCN [17] | 0.585 | 7.577 | 55.570 | 2.499 | 4.004 |
Dis-PU [18] | 0.541 | 8.348 | 53.687 | 2.964 | 5.209 |
SSAS [31] | 0.613 | 7.451 | 68.970 | 2.474 | 6.088 |
Grad-PU [38] | 0.403 | 3.743 | 55.487 | 1.480 | 2.468 |
Ours | 0.421 | 3.236 | 46.476 | 2.257 | 2.375 |
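The two comparison tables above report Chamfer Distance (CD), Hausdorff Distance (HD), Earth Mover's Distance (EMD), and point-to-surface error (P2F, mean and standard deviation); lower is better for every column. For reference, the sketch below shows one common way the headline CD metric is computed between a predicted and a ground-truth point set; the exact squaring and averaging convention used by the paper's evaluation script may differ by a constant factor.

```python
# Chamfer Distance (CD) between two point sets: average nearest-neighbor
# distance in both directions. The squared/unsquared and averaging conventions
# vary across papers; this is one common symmetric form.
import numpy as np

def chamfer_distance(pred, gt):
    """pred: (N, 3) predicted points, gt: (M, 3) reference points."""
    d2 = np.sum((pred[:, None, :] - gt[None, :, :]) ** 2, axis=-1)  # (N, M)
    return d2.min(axis=1).mean() + d2.min(axis=0).mean()

pred = np.random.rand(2048, 3)
gt = np.random.rand(8192, 3)
print("CD:", chamfer_distance(pred, gt))
```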
Prior Distribution | Lattice Points | Fibonacci Lattice | Hammersley Points | Sphere Uniform | Lattice Points + Noise |
---|---|---|---|---|---|
CD ↓ | 0.510 | 0.496 | 0.507 | 0.613 | 0.421 |
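The priors compared in this ablation are standard uniform or low-discrepancy samplings, with the jittered lattice ("lattice points + noise") giving the lowest CD. The NumPy sketch below generates two of them, a Fibonacci lattice on the unit sphere and 2D lattice points with Gaussian noise; the grid layout and noise scale are illustrative assumptions rather than the paper's exact construction.

```python
# Two of the prior samplings compared in the ablation: a Fibonacci lattice on
# the unit sphere and a regular 2D lattice with added noise. Parameters such
# as the noise standard deviation are illustrative only.
import numpy as np

def fibonacci_lattice_sphere(n):
    """n roughly uniform points on the unit sphere via the golden-angle spiral."""
    i = np.arange(n)
    phi = np.pi * (3.0 - np.sqrt(5.0)) * i           # golden-angle increments
    z = 1.0 - 2.0 * (i + 0.5) / n                    # evenly spaced in z
    r = np.sqrt(1.0 - z * z)
    return np.stack([r * np.cos(phi), r * np.sin(phi), z], axis=1)

def lattice_points_with_noise(n, sigma=0.01):
    """Regular 2D grid in [0,1]^2 with Gaussian jitter ('lattice points + noise')."""
    side = int(np.ceil(np.sqrt(n)))
    u, v = np.meshgrid(np.linspace(0, 1, side), np.linspace(0, 1, side))
    grid = np.stack([u.ravel(), v.ravel()], axis=1)[:n]
    return grid + np.random.normal(0.0, sigma, grid.shape)

print(fibonacci_lattice_sphere(1024).shape, lattice_points_with_noise(1024).shape)
```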
Model | Model 1 | Model 2 | Model 3 | Ours |
---|---|---|---|---|
CD ↓ | 0.874 | 0.697 | 0.512 | 0.421
KNN (k) | 10 | 15 | 20 | 25 | 30 |
---|---|---|---|---|---|
CD ↓ | 0.598 | 0.523 | 0.421 | 0.498 | 0.592 |
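The k value here sets the neighborhood size of the k-nearest-neighbor graph used by a dynamic-graph encoder (DGCNN-style, Wang et al.), and the ablation indicates k = 20 works best, with smaller neighborhoods capturing too little local context and larger ones over-smoothing it. The sketch below shows the kNN edge-feature construction such encoders typically use; it follows the generic EdgeConv formulation rather than the paper's exact layer.

```python
# kNN graph and EdgeConv-style edge features, as used by dynamic-graph
# encoders such as DGCNN. The [x_i, x_j - x_i] feature pairing follows the
# EdgeConv formulation; everything else here is an illustrative stand-in.
import torch

def knn_indices(x, k):
    """x: (B, N, C) point features -> (B, N, k) indices of nearest neighbors."""
    dist = torch.cdist(x, x)                                  # (B, N, N)
    return dist.topk(k + 1, largest=False).indices[..., 1:]   # drop self-match

def edge_features(x, k=20):
    """Build EdgeConv inputs [x_i, x_j - x_i] of shape (B, N, k, 2C)."""
    b, n, c = x.shape
    idx = knn_indices(x, k)                                   # (B, N, k)
    neighbors = torch.gather(
        x.unsqueeze(1).expand(b, n, n, c), 2,
        idx.unsqueeze(-1).expand(b, n, k, c))                 # (B, N, k, C)
    center = x.unsqueeze(2).expand(b, n, k, c)
    return torch.cat([center, neighbors - center], dim=-1)

feats = edge_features(torch.rand(2, 1024, 3), k=20)
print(feats.shape)   # torch.Size([2, 1024, 20, 6])
```

In a dynamic-graph encoder this graph is rebuilt in feature space after each layer, which is what makes it "dynamic".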