Sym3DNet: Symmetric 3D Prior Network for Single-View 3D Reconstruction
Abstract
1. Introduction
2. Proposed Method
2.1. Symmetry Prior Fusion
2.2. Reconstruction Loss
2.3. Perceptual Loss
2.4. Network Architecture
3. Experimental Evaluation
3.1. Data Sets
3.2. Evaluation Metric
3.3. Training Protocol
3.4. Ablation Studies
3.4.1. Evaluation on ShapeNet Data Set
3.4.2. Evaluation on Unseen Data
3.4.3. Evaluation on Real-World Images
3.4.4. Space and Time Complexity
4. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Conflicts of Interest
References
- Ozyesil, O.; Voroninski, V.; Basri, R.; Singer, A. A survey of structure from motion. arXiv 2017, arXiv:1701.08493.
- Lowe, D.G. Distinctive image features from scale-invariant keypoints. Int. J. Comput. Vis. 2004, 60, 91–110.
- Hartley, R.; Zisserman, A. Multiple View Geometry in Computer Vision; Cambridge University Press: Cambridge, UK, 2003.
- Durrant-Whyte, H.; Bailey, T. Simultaneous localization and mapping: Part I. IEEE Robot. Autom. Mag. 2006, 13, 99–110.
- Cadena, C.; Carlone, L.; Carrillo, H.; Latif, Y.; Scaramuzza, D.; Neira, J.; Reid, I.; Leonard, J.J. Past, present, and future of simultaneous localization and mapping: Toward the robust-perception age. IEEE Trans. Robot. 2016, 32, 1309–1332.
- Kar, A.; Häne, C.; Malik, J. Learning a multi-view stereo machine. In Advances in Neural Information Processing Systems; Curran Associates, Inc.: Long Beach, CA, USA, 2017; pp. 365–376.
- Choy, C.B.; Xu, D.; Gwak, J.; Chen, K.; Savarese, S. 3D-R2N2: A unified approach for single and multi-view 3D object reconstruction. In European Conference on Computer Vision; Springer: Berlin/Heidelberg, Germany, 2016; pp. 628–644.
- Huang, P.H.; Matzen, K.; Kopf, J.; Ahuja, N.; Huang, J.B. DeepMVS: Learning multi-view stereopsis. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 19–23 June 2018; pp. 2821–2830.
- Paschalidou, D.; Ulusoy, O.; Schmitt, C.; Van Gool, L.; Geiger, A. RayNet: Learning volumetric 3D reconstruction with ray potentials. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 19–23 June 2018; pp. 3897–3906.
- Yang, B.; Wang, S.; Markham, A.; Trigoni, N. Robust attentional aggregation of deep feature sets for multi-view 3D reconstruction. Int. J. Comput. Vis. 2020, 128, 53–73.
- Xie, H.; Yao, H.; Zhang, S.; Zhou, S.; Sun, W. Pix2Vox++: Multi-scale context-aware 3D object reconstruction from single and multiple images. Int. J. Comput. Vis. 2020, 128, 2919–2935.
- Lin, C.H.; Kong, C.; Lucey, S. Learning efficient point cloud generation for dense 3D object reconstruction. In Proceedings of the AAAI Conference on Artificial Intelligence, New Orleans, LA, USA, 2–7 February 2018; AAAI Press: New Orleans, LA, USA, 2018; Volume 32.
- Wen, C.; Zhang, Y.; Li, Z.; Fu, Y. Pixel2Mesh++: Multi-view 3D mesh generation via deformation. In Proceedings of the IEEE International Conference on Computer Vision, Seoul, Korea, 27 October–2 November 2019; pp. 1042–1051.
- Dibra, E.; Jain, H.; Oztireli, C.; Ziegler, R.; Gross, M. Human shape from silhouettes using generative HKS descriptors and cross-modal neural networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2017; pp. 4826–4836.
- Barron, J.T.; Malik, J. Shape, illumination, and reflectance from shading. IEEE Trans. Pattern Anal. Mach. Intell. 2014, 37, 1670–1687.
- Richter, S.R.; Roth, S. Discriminative shape from shading in uncalibrated illumination. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Boston, MA, USA, 7–12 June 2015; pp. 1128–1136.
- Witkin, A.P. Recovering surface shape and orientation from texture. Artif. Intell. 1981, 17, 17–45.
- Xie, H.; Yao, H.; Sun, X.; Zhou, S.; Zhang, S. Pix2Vox: Context-aware 3D reconstruction from single and multi-view images. In Proceedings of the IEEE/CVF International Conference on Computer Vision, Seoul, Korea, 27 October–2 November 2019; pp. 2690–2698.
- Girdhar, R.; Fouhey, D.F.; Rodriguez, M.; Gupta, A. Learning a predictable and generative vector representation for objects. In European Conference on Computer Vision; Springer: Berlin/Heidelberg, Germany, 2016; pp. 484–499.
- Goodfellow, I.; Pouget-Abadie, J.; Mirza, M.; Xu, B.; Warde-Farley, D.; Ozair, S.; Courville, A.; Bengio, Y. Generative adversarial nets. Adv. Neural Inf. Process. Syst. 2014, 27, 2672–2680.
- Kingma, D.P.; Welling, M. Auto-encoding variational Bayes. arXiv 2013, arXiv:1312.6114.
- Wu, J.; Zhang, C.; Xue, T.; Freeman, B.; Tenenbaum, J. Learning a probabilistic latent space of object shapes via 3D generative-adversarial modeling. Adv. Neural Inf. Process. Syst. 2016, 29, 82–90.
- Wu, J.; Wang, Y.; Xue, T.; Sun, X.; Freeman, B.; Tenenbaum, J. MarrNet: 3D shape reconstruction via 2.5D sketches. Adv. Neural Inf. Process. Syst. 2017, 30, 540–550.
- Wu, J.; Zhang, C.; Zhang, X.; Zhang, Z.; Freeman, W.T.; Tenenbaum, J.B. Learning shape priors for single-view 3D completion and reconstruction. In Proceedings of the European Conference on Computer Vision (ECCV), Munich, Germany, 8–14 September 2018; pp. 646–662.
- Tatarchenko, M.; Dosovitskiy, A.; Brox, T. Octree generating networks: Efficient convolutional architectures for high-resolution 3D outputs. In Proceedings of the IEEE International Conference on Computer Vision, Venice, Italy, 22–29 October 2017; pp. 2088–2096.
- Richter, S.R.; Roth, S. Matryoshka networks: Predicting 3D geometry via nested shape layers. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–23 June 2018; pp. 1936–1944.
- Fan, H.; Su, H.; Guibas, L.J. A point set generation network for 3D object reconstruction from a single image. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 June 2017; pp. 605–613.
- Wang, N.; Zhang, Y.; Li, Z.; Fu, Y.; Liu, W.; Jiang, Y.G. Pixel2Mesh: Generating 3D mesh models from single RGB images. In Proceedings of the European Conference on Computer Vision (ECCV), Munich, Germany, 8–14 September 2018; pp. 52–67.
- Xu, Q.; Wang, W.; Ceylan, D.; Mech, R.; Neumann, U. DISN: Deep implicit surface network for high-quality single-view 3D reconstruction. Adv. Neural Inf. Process. Syst. 2019, 32, 492–502.
- Mo, K.; Zhu, S.; Chang, A.X.; Yi, L.; Tripathi, S.; Guibas, L.J.; Su, H. PartNet: A large-scale benchmark for fine-grained and hierarchical part-level 3D object understanding. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Long Beach, CA, USA, 15–20 June 2019; pp. 909–918.
- Paschalidou, D.; Gool, L.V.; Geiger, A. Learning unsupervised hierarchical part decomposition of 3D objects from a single RGB image. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA, 13–19 June 2020; pp. 1060–1070.
- Zhu, C.; Xu, K.; Chaudhuri, S.; Yi, R.; Zhang, H. SCORES: Shape composition with recursive substructure priors. ACM Trans. Graph. (TOG) 2018, 37, 1–14.
- Vetter, T.; Poggio, T.; Bülthoff, H. The importance of symmetry and virtual views in three-dimensional object recognition. Curr. Biol. 1994, 4, 18–23.
- Troje, N.F.; Bülthoff, H.H. How is bilateral symmetry of human faces used for recognition of novel views? Vis. Res. 1998, 38, 79–89.
- Korah, T.; Rasmussen, C. Analysis of building textures for reconstructing partially occluded facades. In European Conference on Computer Vision; Springer: Berlin/Heidelberg, Germany, 2008; pp. 359–372.
- Wu, S.; Rupprecht, C.; Vedaldi, A. Unsupervised learning of probably symmetric deformable 3D objects from images in the wild. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA, 13–19 June 2020; pp. 1–10.
- Hong, W.; Ma, Y.; Yu, Y. Reconstruction of 3-D symmetric curves from perspective images without discrete features. In European Conference on Computer Vision; Springer: Berlin/Heidelberg, Germany, 2004; pp. 533–545.
- Hong, W.; Yang, Y.; Ma, Y. On group symmetry in multiple view geometry: Structure, pose, and calibration from a single image. In Coordinated Science Laboratory Report no. UILU-ENG-02-2208, DC-206; Coordinated Science Laboratory, University of Illinois at Urbana-Champaign: Champaign, IL, USA, 2002.
- Xu, Y.; Fan, T.; Yuan, Y.; Singh, G. Ladybird: Quasi-Monte Carlo sampling for deep implicit field based 3D reconstruction with symmetry. In European Conference on Computer Vision; Springer: Berlin/Heidelberg, Germany, 2020; pp. 248–263.
- Zhou, Y.; Liu, S.; Ma, Y. NeRD: Neural 3D reflection symmetry detector. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Nashville, TN, USA, 19–25 June 2021; pp. 15940–15949.
- Yao, Y.; Schertler, N.; Rosales, E.; Rhodin, H.; Sigal, L.; Sheffer, A. Front2Back: Single view 3D shape reconstruction via front to back prediction. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA, 13–19 June 2020; pp. 531–540.
- Speciale, P.; Oswald, M.R.; Cohen, A.; Pollefeys, M. A symmetry prior for convex variational 3D reconstruction. In European Conference on Computer Vision; Springer: Berlin/Heidelberg, Germany, 2016; pp. 313–328.
- Thrun, S.; Wegbreit, B. Shape from symmetry. In Proceedings of the Tenth IEEE International Conference on Computer Vision (ICCV’05), Beijing, China, 17–20 October 2005; Volume 2, pp. 1824–1831.
- Mukherjee, D.P.; Zisserman, A.P.; Brady, M.; Smith, F. Shape from symmetry: Detecting and exploiting symmetry in affine images. Philos. Trans. R. Soc. Lond. Ser. A Phys. Eng. Sci. 1995, 351, 77–106.
- Li, Y.; Pizlo, Z. Reconstruction of shapes of 3D symmetric objects by using planarity and compactness constraints. In Vision Geometry XV; SPIE: San Jose, CA, USA, 2007; Volume 6499, p. 64990B.
- Gao, Y.; Yuille, A.L. Exploiting symmetry and/or Manhattan properties for 3D object structure estimation from single and multiple images. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2017; pp. 7408–7417.
- Wu, Z.; Song, S.; Khosla, A.; Yu, F.; Zhang, L.; Tang, X.; Xiao, J. 3D ShapeNets: A deep representation for volumetric shapes. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Boston, MA, USA, 7–12 June 2015; pp. 1912–1920.
- Sun, X.; Wu, J.; Zhang, X.; Zhang, Z.; Zhang, C.; Xue, T.; Tenenbaum, J.B.; Freeman, W.T. Pix3D: Dataset and methods for single-image 3D shape modeling. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–23 June 2018; pp. 2974–2983.
- He, K.; Zhang, X.; Ren, S.; Sun, J. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016; pp. 770–778.
- Tatarchenko, M.; Richter, S.R.; Ranftl, R.; Li, Z.; Koltun, V.; Brox, T. What do single-view 3D reconstruction networks learn? In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Long Beach, CA, USA, 15–20 June 2019; pp. 3405–3414.
- Lorensen, W.E.; Cline, H.E. Marching cubes: A high resolution 3D surface construction algorithm. ACM Siggraph Comput. Graph. 1987, 21, 163–169.
- Kingma, D.P.; Ba, J. Adam: A method for stochastic optimization. arXiv 2014, arXiv:1412.6980.
- Groueix, T.; Fisher, M.; Kim, V.G.; Russell, B.C.; Aubry, M. A papier-mâché approach to learning 3D surface generation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–23 June 2018; pp. 216–224.
- Chen, Z.; Zhang, H. Learning implicit fields for generative shape modeling. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Long Beach, CA, USA, 15–20 June 2019; pp. 5939–5948.
- Mescheder, L.; Oechsle, M.; Niemeyer, M.; Nowozin, S.; Geiger, A. Occupancy networks: Learning 3D reconstruction in function space. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Long Beach, CA, USA, 15–20 June 2019; pp. 4460–4470.
- Su, H.; Qi, C.R.; Li, Y.; Guibas, L.J. Render for CNN: Viewpoint estimation in images using CNNs trained with rendered 3D model views. In Proceedings of the IEEE International Conference on Computer Vision, Santiago, Chile, 7–13 December 2015; pp. 2686–2694.
- Xiao, J.; Hays, J.; Ehinger, K.A.; Oliva, A.; Torralba, A. SUN database: Large-scale scene recognition from abbey to zoo. In Proceedings of the 2010 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, San Francisco, CA, USA, 13–18 June 2010; pp. 3485–3492.
- Runz, M.; Li, K.; Tang, M.; Ma, L.; Kong, C.; Schmidt, T.; Reid, I.; Agapito, L.; Straub, J.; Lovegrove, S.; et al. FroDo: From detections to 3D objects. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA, 13–19 June 2020; pp. 14720–14729.
Method | 3D Embedding | Symmetry Fusion | Perceptual Loss | IoU | F-Scores
---|---|---|---|---|---
Baseline | | | | 0.657 | 0.396
Baseline + Loss | | | √ | 0.677 | 0.427
Sym3DNet w/o Loss | √ | √ | | 0.686 | 0.437
Sym3DNet | √ | √ | √ | 0.689 | 0.440
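The "Symmetry Fusion" component ablated above combines a predicted volume with its reflection about the object's symmetry plane. A minimal sketch of the idea, assuming reflection about one grid axis and a fixed blend weight (the function name and the simple averaging are illustrative stand-ins, not the paper's learned fusion):

```python
import numpy as np

def symmetry_fuse(volume, axis=0, alpha=0.5):
    """Blend an occupancy volume with its mirror image about one grid axis.

    A hand-written stand-in for a learned symmetry-fusion layer: for a
    perfectly symmetric shape the output equals the input, while for an
    asymmetric prediction the reflected half fills in missing occupancy.
    """
    mirrored = np.flip(volume, axis=axis)  # reflect about the chosen axis
    return alpha * volume + (1.0 - alpha) * mirrored
```

With `alpha = 0.5` the fusion is a plain average, so a symmetric input passes through unchanged and an occupied voxel on one side contributes half its weight to the mirrored position.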
Length | 3D-3D IoU | 3D-3D F-Scores | 2D-3D IoU | 2D-3D F-Scores
---|---|---|---|---
64 | 0.847 | 0.601 | 0.670 | 0.417
128 | 0.873 | 0.640 | 0.673 | 0.429
256 | 0.891 | 0.658 | 0.666 | 0.422
512 | 0.900 | 0.673 | 0.662 | 0.421
Backbone | IoU | F-Scores |
---|---|---|
VGG | 0.593 | 0.362 |
ResNet18 | 0.673 | 0.429 |
ResNet50 | 0.683 | 0.437 |
ResNet101 | 0.689 | 0.440 |
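The IoU and F-Scores columns in these tables can be computed from a predicted occupancy grid and a binary ground-truth grid. A hedged sketch of the metrics, assuming a 0.5 binarization threshold; note the F-score here is a simplified voxel-overlap variant (the paper's benchmark F-Score is typically computed from surface point distances, as in Tatarchenko et al.):

```python
import numpy as np

def voxel_iou(pred, gt, threshold=0.5):
    """Intersection-over-Union between a real-valued predicted occupancy
    grid and a binary ground-truth grid, after thresholding."""
    p = pred >= threshold
    g = gt.astype(bool)
    union = np.logical_or(p, g).sum()
    if union == 0:
        return 1.0  # both grids empty: define as perfect agreement
    return np.logical_and(p, g).sum() / union

def voxel_fscore(pred, gt, threshold=0.5):
    """Harmonic mean of voxel-wise precision and recall (a simplified
    stand-in for the surface-distance F-Score used in benchmarks)."""
    p = pred >= threshold
    g = gt.astype(bool)
    tp = np.logical_and(p, g).sum()
    precision = tp / p.sum() if p.sum() else 0.0
    recall = tp / g.sum() if g.sum() else 0.0
    if precision + recall == 0:
        return 0.0
    return 2.0 * precision * recall / (precision + recall)
```

For example, a prediction that covers one of two ground-truth voxels while adding one spurious voxel scores IoU = 1/3 and F-score = 0.5.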
Category | 3D-R2N2 [7] | Matryoshka [26] | OGN [25] | Pixel2Mesh [28] | AtlasNet [53] | IM-Net [54] | OccNet [55] | AttSets [10] | Pix2Vox++ [11] | Sym3DNet
---|---|---|---|---|---|---|---|---|---|---
airplane | 0.513 | 0.647 | 0.587 | 0.508 | 0.493 | 0.702 | 0.532 | 0.594 | 0.674 | 0.710 |
bench | 0.421 | 0.577 | 0.481 | 0.379 | 0.431 | 0.564 | 0.597 | 0.552 | 0.608 | 0.656 |
cabinet | 0.716 | 0.776 | 0.729 | 0.732 | 0.257 | 0.680 | 0.674 | 0.783 | 0.799 | 0.811 |
car | 0.798 | 0.850 | 0.828 | 0.670 | 0.282 | 0.756 | 0.671 | 0.844 | 0.858 | 0.872 |
chair | 0.466 | 0.547 | 0.483 | 0.484 | 0.328 | 0.644 | 0.583 | 0.559 | 0.581 | 0.600 |
display | 0.468 | 0.532 | 0.502 | 0.582 | 0.457 | 0.585 | 0.651 | 0.565 | 0.548 | 0.580 |
lamp | 0.381 | 0.408 | 0.398 | 0.399 | 0.261 | 0.433 | 0.474 | 0.445 | 0.457 | 0.473 |
speaker | 0.662 | 0.701 | 0.637 | 0.672 | 0.296 | 0.683 | 0.655 | 0.721 | 0.721 | 0.723 |
rifle | 0.544 | 0.616 | 0.593 | 0.468 | 0.573 | 0.723 | 0.656 | 0.601 | 0.617 | 0.652 |
sofa | 0.628 | 0.681 | 0.646 | 0.622 | 0.354 | 0.694 | 0.669 | 0.703 | 0.725 | 0.740 |
table | 0.513 | 0.573 | 0.536 | 0.536 | 0.301 | 0.621 | 0.659 | 0.590 | 0.620 | 0.629 |
telephone | 0.661 | 0.756 | 0.702 | 0.762 | 0.543 | 0.762 | 0.794 | 0.743 | 0.809 | 0.814 |
watercraft | 0.513 | 0.591 | 0.632 | 0.471 | 0.355 | 0.607 | 0.579 | 0.601 | 0.603 | 0.626 |
Overall | 0.560 | 0.635 | 0.596 | 0.552 | 0.352 | 0.659 | 0.626 | 0.642 | 0.670 | 0.689 |
Category | 3D-R2N2 [7] | Matryoshka [26] | OGN [25] | Pixel2Mesh [28] | AtlasNet [53] | IM-Net [54] | OccNet [55] | AttSets [10] | Pix2Vox++ [11] | Sym3DNet
---|---|---|---|---|---|---|---|---|---|---
airplane | 0.412 | 0.446 | 0.487 | 0.376 | 0.415 | 0.598 | 0.494 | 0.489 | 0.583 | 0.596 |
bench | 0.345 | 0.424 | 0.364 | 0.313 | 0.439 | 0.361 | 0.318 | 0.406 | 0.478 | 0.492 |
cabinet | 0.327 | 0.381 | 0.316 | 0.450 | 0.350 | 0.345 | 0.449 | 0.367 | 0.408 | 0.425 |
car | 0.481 | 0.481 | 0.514 | 0.486 | 0.319 | 0.304 | 0.315 | 0.497 | 0.564 | 0.574 |
chair | 0.238 | 0.302 | 0.226 | 0.386 | 0.406 | 0.442 | 0.365 | 0.334 | 0.309 | 0.302 |
display | 0.227 | 0.400 | 0.215 | 0.319 | 0.451 | 0.466 | 0.468 | 0.310 | 0.296 | 0.313 |
lamp | 0.267 | 0.276 | 0.249 | 0.219 | 0.217 | 0.371 | 0.361 | 0.315 | 0.315 | 0.324 |
speaker | 0.231 | 0.279 | 0.225 | 0.190 | 0.199 | 0.200 | 0.249 | 0.211 | 0.152 | 0.290 |
rifle | 0.521 | 0.514 | 0.541 | 0.340 | 0.405 | 0.407 | 0.219 | 0.524 | 0.574 | 0.583 |
sofa | 0.274 | 0.326 | 0.290 | 0.343 | 0.337 | 0.354 | 0.324 | 0.334 | 0.377 | 0.399 |
table | 0.340 | 0.374 | 0.352 | 0.502 | 0.371 | 0.461 | 0.549 | 0.419 | 0.406 | 0.385 |
telephone | 0.504 | 0.598 | 0.528 | 0.485 | 0.545 | 0.423 | 0.273 | 0.469 | 0.633 | 0.613 |
watercraft | 0.305 | 0.360 | 0.328 | 0.266 | 0.296 | 0.369 | 0.347 | 0.315 | 0.390 | 0.410 |
Overall | 0.351 | 0.391 | 0.368 | 0.398 | 0.362 | 0.405 | 0.393 | 0.395 | 0.436 | 0.440 |
Network | IoU | F-Scores |
---|---|---|
ShapeHD [24] | 0.284 | - |
Pix3D [48] | 0.282 | 0.041 |
Pix2Vox++ [13] | 0.292 | 0.068 |
FroDo [58] | 0.325 | - |
Sym3DNet w/o Loss | 0.325 | 0.147 |
Sym3DNet | 0.346 | 0.150 |
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
© 2022 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Share and Cite
Siddique, A.; Lee, S. Sym3DNet: Symmetric 3D Prior Network for Single-View 3D Reconstruction. Sensors 2022, 22, 518. https://doi.org/10.3390/s22020518