GaussianEnhancer++: A General GS-Agnostic Rendering Enhancer
Abstract
1. Introduction
- We present GaussianEnhancer++, a GS-agnostic post-processor designed to eliminate the artifacts produced by GS models and to enhance the resolution of rendered images (a minimal sketch of this post-processing flow follows this list).
- We propose a two-stage degradation simulator that mimics real GS rendering artifacts. Using samples from this simulator, we demonstrate that GaussianEnhancer and GaussianEnhancer++ improve rendering quality.
- We develop an effective method for spatial information fusion, enhancing the quality of GS-rendered images.
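To ground these contributions, here is a minimal PyTorch sketch of the overall GS-agnostic post-processing flow: an arbitrary GS backend renders a frame, and a learned enhancer restores it. The `Enhancer` architecture, the `render_fn` interface, and all names below are illustrative assumptions, not the paper's implementation.

```python
import torch
import torch.nn as nn

class Enhancer(nn.Module):
    """Stand-in restoration network; the real GaussianEnhancer++
    architecture is not reproduced here."""
    def __init__(self, channels: int = 32):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(3, channels, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(channels, 3, 3, padding=1),
        )

    def forward(self, rendered: torch.Tensor) -> torch.Tensor:
        # Residual restoration: predict a correction to the GS render.
        return (rendered + self.body(rendered)).clamp(0.0, 1.0)

def enhance(render_fn, camera, enhancer: Enhancer) -> torch.Tensor:
    """GS-agnostic: `render_fn` may wrap 3DGS, 2DGS, Mip-Splatting, etc."""
    with torch.no_grad():
        frame = render_fn(camera)  # (1, 3, H, W) image in [0, 1]
        return enhancer(frame)

dummy_render = lambda cam: torch.rand(1, 3, 64, 64)   # stand-in GS backend
print(enhance(dummy_render, None, Enhancer()).shape)  # torch.Size([1, 3, 64, 64])
```

Because the enhancer consumes only rendered images, it can be attached to any GS backend without touching the underlying representation.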
2. Related Work
2.1. Novel View Synthesis
2.2. Degradation Model and High-Resolution Synthesis
3. GaussianEnhancer
3.1. Preliminaries
3.2. GS-Style Degradation Simulator
3.3. Spatial Information Fusion (SIF)
4. GaussianEnhancer++
4.1. Two-Stage Degradation Simulator (TSDS)
Algorithm 1 Two-Stage Degradation Simulator (TSDS)
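The full pseudocode of TSDS is given as Algorithm 1 in the paper. As a hedged illustration of the two-stage idea only, the sketch below composes an assumed GS-artifact stage (a crude splat-like blur blend) with an assumed classical degradation stage (bicubic downsampling plus noise, in the spirit of Real-ESRGAN-style pipelines [46]); the paper's actual operations and parameters may differ.

```python
import torch
import torch.nn.functional as F

def stage1_gs_artifacts(img: torch.Tensor) -> torch.Tensor:
    """Assumed stage 1: GS-style rendering artifacts, approximated here by
    blending the clean image with a blurred copy (a crude splat proxy)."""
    kernel = torch.ones(3, 1, 5, 5) / 25.0           # depthwise 5x5 box blur
    blurred = F.conv2d(img, kernel, padding=2, groups=3)
    alpha = torch.rand(1) * 0.5 + 0.25               # random blend strength
    return alpha * blurred + (1 - alpha) * img

def stage2_classical(img: torch.Tensor, scale: int = 4) -> torch.Tensor:
    """Assumed stage 2: classical SR degradations (downsampling + noise)."""
    lr = F.interpolate(img, scale_factor=1 / scale, mode="bicubic", antialias=True)
    return (lr + 0.01 * torch.randn_like(lr)).clamp(0.0, 1.0)

def tsds(gt: torch.Tensor, scale: int = 4) -> torch.Tensor:
    """Compose both stages to get a degraded input paired with `gt` for training."""
    return stage2_classical(stage1_gs_artifacts(gt), scale)

print(tsds(torch.rand(1, 3, 256, 256)).shape)  # torch.Size([1, 3, 64, 64])
```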
4.2. Super Spatial Information Fusion (SSIF)
Algorithm 2 Super Spatial Information Fusion for SR (SSIF-SR)
Algorithm 3 Super Spatial Information Fusion for LR (SSIF-LR)
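Algorithms 2 and 3 share one fusion idea at two output resolutions. The sketch below is an assumed, minimal rendition: features from the GS render and auxiliary reference views are concatenated and fused, with a pixel-shuffle head producing the super-resolved output (SSIF-SR) and a same-resolution head producing the restored output (SSIF-LR). The `SSIF` class, the view-stacking scheme, and the layer choices are illustrative assumptions rather than the paper's architecture.

```python
from typing import List, Optional

import torch
import torch.nn as nn

class SSIF(nn.Module):
    """Hypothetical fusion head: concatenate the GS render with auxiliary
    reference views, fuse, then either upsample (SR) or restore in place (LR)."""
    def __init__(self, n_views: int = 2, channels: int = 32,
                 sr_scale: Optional[int] = 4):
        super().__init__()
        self.encode = nn.Conv2d(3 * (1 + n_views), channels, 3, padding=1)
        self.fuse = nn.Sequential(
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.ReLU(inplace=True),
        )
        if sr_scale:  # SSIF-SR: pixel-shuffle head to the target resolution
            self.head = nn.Sequential(
                nn.Conv2d(channels, 3 * sr_scale**2, 3, padding=1),
                nn.PixelShuffle(sr_scale),
            )
        else:  # SSIF-LR: same-resolution restoration head
            self.head = nn.Conv2d(channels, 3, 3, padding=1)

    def forward(self, render: torch.Tensor, refs: List[torch.Tensor]) -> torch.Tensor:
        x = torch.cat([render, *refs], dim=1)  # stack views along channels
        return self.head(self.fuse(self.encode(x))).clamp(0.0, 1.0)

render = torch.rand(1, 3, 64, 64)
refs = [torch.rand(1, 3, 64, 64) for _ in range(2)]
print(SSIF(sr_scale=4)(render, refs).shape)     # torch.Size([1, 3, 256, 256])
print(SSIF(sr_scale=None)(render, refs).shape)  # torch.Size([1, 3, 64, 64])
```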
5. Experiments
5.1. Datasets and Evaluation Metric
5.2. Implementation Details
5.3. Main Results
5.3.1. Quantitative Comparison
5.3.2. Qualitative Comparison
5.4. Ablation Study
6. Discussion
6.1. Super-Resolution on Mip-NeRF360 Dataset
6.2. Same-Resolution Enhancement
6.3. Computational Resources
6.4. Statistical Reliability
6.5. Artifacts Remediation
6.6. Ablation Study
6.7. Limitations and Future Work
7. Conclusions
Author Contributions
Funding
Data Availability Statement
Conflicts of Interest
References
- Zhou, T.; Tulsiani, S.; Sun, W.; Malik, J.; Efros, A.A. View synthesis by appearance flow. In Proceedings of the Computer Vision–ECCV 2016: 14th European Conference, Amsterdam, The Netherlands, 11–14 October 2016; Proceedings, Part IV 14. Springer: Berlin/Heidelberg, Germany, 2016; pp. 286–301. [Google Scholar]
- Zhou, T.; Tucker, R.; Flynn, J.; Fyffe, G.; Snavely, N. Stereo magnification: Learning view synthesis using multiplane images. arXiv 2018, arXiv:1805.09817. [Google Scholar] [CrossRef]
- Yao, Y.; Luo, Z.; Li, S.; Fang, T.; Quan, L. Mvsnet: Depth inference for unstructured multi-view stereo. In Proceedings of the European Conference on Computer Vision (ECCV), Munich, Germany, 8–14 September 2018; pp. 767–783. [Google Scholar]
- Riegler, G.; Koltun, V. Free view synthesis. In Proceedings of the Computer Vision–ECCV 2020: 16th European Conference, Glasgow, UK, 23–28 August 2020; Proceedings, Part XIX 16. Springer: Berlin/Heidelberg, Germany, 2020; pp. 623–640. [Google Scholar]
- Mildenhall, B.; Srinivasan, P.P.; Tancik, M.; Barron, J.T.; Ramamoorthi, R.; Ng, R. Nerf: Representing scenes as neural radiance fields for view synthesis. Commun. ACM 2021, 65, 99–106. [Google Scholar] [CrossRef]
- Wei, X.; Zhang, R.; Wu, J.; Liu, J.; Lu, M.; Guo, Y.; Zhang, S. Noc: High-quality neural object cloning with 3d lifting of segment anything. arXiv 2023, arXiv:2309.12790. [Google Scholar]
- Reiser, C.; Peng, S.; Liao, Y.; Geiger, A. Kilonerf: Speeding up neural radiance fields with thousands of tiny mlps. In Proceedings of the IEEE/CVF International Conference on Computer Vision, Montreal, QC, Canada, 11–17 October 2021; pp. 14335–14345. [Google Scholar]
- Müller, T.; Evans, A.; Schied, C.; Keller, A. Instant neural graphics primitives with a multiresolution hash encoding. ACM Trans. Graph. (TOG) 2022, 41, 1–15. [Google Scholar] [CrossRef]
- Sun, C.; Sun, M.; Chen, H. Direct voxel grid optimization: Super-fast convergence for radiance fields reconstruction. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, New Orleans, LA, USA, 19–24 June 2022; pp. 5459–5469. [Google Scholar]
- Qin, Y.; Li, X.; Zu, L.; Jin, M.L. Novel view synthesis with depth priors using neural radiance fields and cyclegan with attention transformer. Symmetry 2025, 17, 59. [Google Scholar] [CrossRef]
- Kerbl, B.; Kopanas, G.; Leimkühler, T.; Drettakis, G. 3d gaussian splatting for real-time radiance field rendering. ACM Trans. Graph. 2023, 42, 139:1–139:14. [Google Scholar] [CrossRef]
- Wu, G.; Yi, T.; Fang, J.; Xie, L.; Zhang, X.; Wei, W.; Liu, W.; Tian, Q.; Wang, X. 4d gaussian splatting for real-time dynamic scene rendering. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA, 17–21 June 2024; pp. 20310–20320. [Google Scholar]
- Chen, Z.; Wang, F.; Wang, Y.; Liu, H. Text-to-3d using gaussian splatting. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA, 17–21 June 2024; pp. 21401–21412. [Google Scholar]
- Yang, Z.; Gao, X.; Zhou, W.; Jiao, S.; Zhang, Y.; Jin, X. Deformable 3d gaussians for high-fidelity monocular dynamic scene reconstruction. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA, 17–21 June 2024; pp. 20331–20341. [Google Scholar]
- Szymanowicz, S.; Rupprecht, C.; Vedaldi, A. Splatter image: Ultra-fast single-view 3d reconstruction. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA, 17–21 June 2024; pp. 10208–10217. [Google Scholar]
- Hu, L.; Zhang, H.; Zhang, Y.; Zhou, B.; Liu, B.; Zhang, S.; Nie, L. Gaussianavatar: Towards realistic human avatar modeling from a single video via animatable 3d gaussians. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA, 17–21 June 2024; pp. 634–644. [Google Scholar]
- Yi, T.; Fang, J.; Wang, J.; Wu, G.; Xie, L.; Zhang, X.; Liu, W.; Tian, Q.; Wang, X. Gaussiandreamer: Fast generation from text to 3d gaussians by bridging 2d and 3d diffusion models. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA, 17–21 June 2024; pp. 6796–6807. [Google Scholar]
- Tang, J.; Ren, J.; Zhou, H.; Liu, Z.; Zeng, G. Dreamgaussian: Generative gaussian splatting for efficient 3d content creation. arXiv 2023, arXiv:2309.16653. [Google Scholar]
- Chen, Y.; Chen, Z.; Zhang, C.; Wang, F.; Yang, X.; Wang, Y.; Cai, Z.; Yang, L.; Liu, H.; Lin, G. Gaussianeditor: Swift and controllable 3d editing with gaussian splatting. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA, 17–21 June 2024; pp. 21476–21485. [Google Scholar]
- Charatan, D.; Li, S.L.; Tagliasacchi, A.; Sitzmann, V. Pixelsplat: 3d gaussian splats from image pairs for scalable generalizable 3d reconstruction. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA, 17–21 June 2024; pp. 19457–19467. [Google Scholar]
- Radl, L.; Steiner, M.; Parger, M.; Weinrauch, A.; Kerbl, B.; Steinberger, M. Stopthepop: Sorted gaussian splatting for view-consistent real-time rendering. ACM Trans. Graph. (TOG) 2024, 43, 1–17. [Google Scholar] [CrossRef]
- Xiong, H.; Muttukuru, S.; Upadhyay, R.; Chari, P.; Kadambi, A. Sparsegs: Real-time 360 sparse view synthesis using gaussian splatting. arXiv 2023, arXiv:2312.00206. [Google Scholar]
- Zhang, D.; Wang, C.; Wang, W.; Li, P.; Qin, M.; Wang, H. Gaussian in the wild: 3d gaussian splatting for unconstrained image collections. arXiv 2024, arXiv:2403.15704. [Google Scholar]
- Du, Y.; Zhang, Z.; Zhang, P.; Sun, F.; Lv, X. Udr-gs: Enhancing underwater dynamic scene reconstruction with depth regularization. Symmetry 2024, 16, 1010. [Google Scholar] [CrossRef]
- Zou, C.; Ma, Q.; Wang, J.; Lu, M.; Zhang, S.; He, Z. GaussianEnhancer: A General Rendering Enhancer for Gaussian Splatting. In Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Hyderabad, India, 6–11 April 2025. [Google Scholar]
- Zhou, K.; Li, W.; Wang, Y.; Hu, T.; Jiang, N.; Han, X.; Lu, J. Nerflix: High-quality neural view synthesis by learning a degradation-driven inter-viewpoint mixer. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Vancouver, BC, Canada, 18–22 June 2023; pp. 12363–12374. [Google Scholar]
- Zhou, K.; Li, W.; Jiang, N.; Han, X.; Lu, J. From nerflix to nerflix++: A general nerf-agnostic restorer paradigm. IEEE Trans. Pattern Anal. Mach. Intell. 2023, 46, 3422–3437. [Google Scholar] [CrossRef] [PubMed]
- Xia, B.; Zhang, Y.; Wang, S.; Wang, Y.; Wu, X.; Tian, Y.; Yang, W.; Gool, L.V. Diffir: Efficient diffusion model for image restoration. In Proceedings of the IEEE/CVF International Conference on Computer Vision, Paris, France, 2–6 October 2023; pp. 13095–13105. [Google Scholar]
- Huang, X.; Li, W.; Hu, J.; Chen, H.; Wang, Y. Refsr-nerf: Towards high fidelity and super resolution view synthesis. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Vancouver, BC, Canada, 18–22 June 2023; pp. 8244–8253. [Google Scholar]
- Jiang, Y.; Hedman, P.; Mildenhall, B.; Xu, D.; Barron, J.T.; Wang, Z.; Xue, T. Alignerf: High-fidelity neural radiance fields via alignment-aware training. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Vancouver, BC, Canada, 18–22 June 2023; pp. 46–55. [Google Scholar]
- Yu, H.; Julin, J.; Milacski, Z.A.; Niinuma, K.; Jeni, L.A. Dylin: Making light field networks dynamic. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Vancouver, BC, Canada, 18–22 June 2023; pp. 12397–12406. [Google Scholar]
- Yen-Chen, L.; Florence, P.; Barron, J.T.; Rodriguez, A.; Isola, P.; Lin, T. Inerf: Inverting neural radiance fields for pose estimation. In Proceedings of the 2021 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Prague, Czech Republic, 27 September–1 October 2021; pp. 1323–1330. [Google Scholar]
- Lin, C.; Ma, W.; Torralba, A.; Lucey, S. Barf: Bundle-adjusting neural radiance fields. In Proceedings of the IEEE/CVF International Conference on Computer Vision, Montreal, QC, Canada, 11–17 October 2021; pp. 5741–5751. [Google Scholar]
- Wang, P.; Liu, L.; Liu, Y.; Theobalt, C.; Komura, T.; Wang, W. Neus: Learning neural implicit surfaces by volume rendering for multi-view reconstruction. arXiv 2021, arXiv:2106.10689. [Google Scholar]
- Yu, Z.; Chen, A.; Huang, B.; Sattler, T.; Geiger, A. Mip-splatting: Alias-free 3d gaussian splatting. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA, 17–21 June 2024; pp. 19447–19456. [Google Scholar]
- Lu, T.; Yu, M.; Xu, L.; Xiangli, Y.; Wang, L.; Lin, D.; Dai, B. Scaffold-gs: Structured 3d gaussians for view-adaptive rendering. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA, 17–21 June 2024; pp. 20654–20664. [Google Scholar]
- Matsuki, H.; Murai, R.; Kelly, P.H.; Davison, A.J. Gaussian splatting slam. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA, 17–21 June 2024; pp. 18039–18048. [Google Scholar]
- Yan, Y.; Lin, H.; Zhou, C.; Wang, W.; Sun, H.; Zhan, K.; Lang, X.; Zhou, X.; Peng, S. Street gaussians for modeling dynamic urban scenes. arXiv 2023, arXiv:2401.01339. [Google Scholar]
- Guédon, A.; Lepetit, V. Sugar: Surface-aligned gaussian splatting for efficient 3d mesh reconstruction and high-quality mesh rendering. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA, 17–21 June 2024; pp. 5354–5363. [Google Scholar]
- Huang, B.; Yu, Z.; Chen, A.; Geiger, A.; Gao, S. 2d gaussian splatting for geometrically accurate radiance fields. In SIGGRAPH 2024 Conference Papers; Association for Computing Machinery: New York, NY, USA, 2024. [Google Scholar]
- Gao, J.; Gu, C.; Lin, Y.; Li, Z.; Zhu, H.; Cao, X.; Zhang, L.; Yao, Y. Relightable 3d gaussians: Realistic point cloud relighting with brdf decomposition and ray tracing. In Proceedings of the European Conference on Computer Vision, Milan, Italy, 29 September–4 October 2024; Springer: Berlin/Heidelberg, Germany, 2025; pp. 73–89. [Google Scholar]
- Dong, C.; Loy, C.C.; He, K.; Tang, X. Learning a deep convolutional network for image super-resolution. In Proceedings of the European Conference on Computer Vision, Zurich, Switzerland, 6–12 September 2014; Springer: Berlin/Heidelberg, Germany, 2014; pp. 184–199. [Google Scholar]
- Ji, X.; Cao, Y.; Tai, Y.; Wang, C.; Li, J.; Huang, F. Real-world super-resolution via kernel estimation and noise injection. In Proceedings of the CVPR Workshops, Seattle, WA, USA, 14–19 June 2020; pp. 466–467. [Google Scholar]
- Zhou, R.; Susstrunk, S. Kernel modeling super-resolution on real low-resolution images. In Proceedings of the IEEE/CVF International Conference on Computer Vision, Seoul, Republic of Korea, 27 October–2 November 2019; pp. 2433–2443. [Google Scholar]
- Zhang, K.; Liang, J.; Gool, L.V.; Timofte, R. Designing a practical degradation model for deep blind image super-resolution. In Proceedings of the IEEE/CVF International Conference on Computer Vision, Montreal, QC, Canada, 11–17 October 2021; pp. 4791–4800. [Google Scholar]
- Wang, X.; Xie, L.; Dong, C.; Shan, Y. Real-esrgan: Training real-world blind super-resolution with pure synthetic data. In Proceedings of the IEEE/CVF International Conference on Computer Vision, Montreal, QC, Canada, 11–17 October 2021; pp. 1905–1914. [Google Scholar]
- Gu, J.; Lu, H.; Zuo, W.; Dong, C. Blind super-resolution with iterative kernel correction. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Long Beach, CA, USA, 16–20 June 2019; pp. 1604–1613. [Google Scholar]
- Shocher, A.; Cohen, N.; Irani, M. “Zero-shot” super-resolution using deep internal learning. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–22 June 2018; pp. 3118–3126. [Google Scholar]
- Park, S.; Yoo, J.; Cho, D.; Kim, J.; Kim, T.H. Fast adaptation to super-resolution networks via meta-learning. In Proceedings of the Computer Vision–ECCV 2020: 16th European Conference, Glasgow, UK, 23–28 August 2020; Proceedings, Part XXVII 16. Springer: Berlin/Heidelberg, Germany, 2020; pp. 754–769. [Google Scholar]
- Cai, J.; Zeng, H.; Yong, H.; Cao, Z.; Zhang, L. Toward real-world single image super-resolution: A new benchmark and a new model. In Proceedings of the IEEE/CVF International Conference on Computer Vision, Seoul, Republic of Korea, 27 October–2 November 2019; pp. 3086–3095. [Google Scholar]
- Lugmayr, A.; Danelljan, M.; Timofte, R. Unsupervised learning for real-world super-resolution. In Proceedings of the 2019 IEEE/CVF International Conference on Computer Vision Workshops, Seoul, Republic of Korea, 27–28 October 2019. [Google Scholar]
- Zhang, W.; Shi, G.; Liu, Y.; Dong, C.; Wu, X. A closer look at blind super-resolution: Degradation models, baselines, and performance upper bounds. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, New Orleans, LA, USA, 19–24 June 2022; pp. 527–536. [Google Scholar]
- Liang, Z.; Zhang, Q.; Hu, W.; Feng, Y.; Zhu, L.; Jia, K. Analytic-splatting: Anti-aliased 3d gaussian splatting via analytic integration. arXiv 2024, arXiv:2403.11056. [Google Scholar]
- Zhang, Z.; Hu, W.; Lao, Y.; He, T.; Zhao, H. Pixel-gs: Density control with pixel-aware gradient for 3d gaussian splatting. arXiv 2024, arXiv:2403.15530. [Google Scholar]
- Cheng, K.; Long, X.; Yang, K.; Yao, Y.; Yin, W.; Ma, Y.; Wang, W.; Chen, X. Gaussianpro: 3d gaussian splatting with progressive propagation. In Proceedings of the Forty-first International Conference on Machine Learning, Vienna, Austria, 21–27 July 2024. [Google Scholar]
- Franke, L.; Rückert, D.; Fink, L.; Stamminger, M. Trips: Trilinear point splatting for real-time radiance field rendering. In Computer Graphics Forum; Wiley Online Library: Hoboken, NJ, USA, 2024; p. e15012. [Google Scholar]
- Zhang, J.; Zhan, F.; Xu, M.; Lu, S.; Xing, E. Fregs: 3d gaussian splatting with progressive frequency regularization. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA, 17–21 June 2024; pp. 21424–21433. [Google Scholar]
- Knapitsch, A.; Park, J.; Zhou, Q.; Koltun, V. Tanks and temples: Benchmarking large-scale scene reconstruction. ACM Trans. Graph. (TOG) 2017, 36, 1–13. [Google Scholar] [CrossRef]
- Hedman, P.; Philip, J.; Price, T.; Frahm, J.; Drettakis, G.; Brostow, G. Deep blending for free-viewpoint image-based rendering. ACM Trans. Graph. (TOG) 2018, 37, 1–15. [Google Scholar] [CrossRef]
- Barron, J.T.; Mildenhall, B.; Verbin, D.; Srinivasan, P.P.; Hedman, P. Mip-NeRF 360: Unbounded Anti-Aliased Neural Radiance Fields. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, New Orleans, LA, USA, 19–24 June 2022; pp. 5470–5479. [Google Scholar]
- Wang, Z.; Bovik, A.C.; Sheikh, H.R.; Simoncelli, E.P. Image quality assessment: From error visibility to structural similarity. IEEE Trans. Image Process. 2004, 13, 600–612. [Google Scholar] [CrossRef] [PubMed]
- Zhang, R.; Isola, P.; Efros, A.A.; Shechtman, E.; Wang, O. The unreasonable effectiveness of deep features as a perceptual metric. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–22 June 2018; pp. 586–595. [Google Scholar]
Method | PSNR↑ | SSIM↑ | LPIPS↓ |
---|---|---|---|
2DGS [40] + Bicubic | 26.41 | 0.786 | 0.388 |
2DGS + GaussianEnhancer + Bicubic | 26.71 | 0.835 | 0.375 |
2DGS + GaussianEnhancer++(LR) + Bicubic | 27.36 | 0.844 | 0.345 |
2DGS + GaussianEnhancer++(SR) | 28.54 | 0.848 | 0.222 |
3DGS [11] + Bicubic | 26.58 | 0.818 | 0.381 |
3DGS + GaussianEnhancer + Bicubic | 27.03 | 0.862 | 0.368 |
3DGS + GaussianEnhancer++(LR) + Bicubic | 27.46 | 0.875 | 0.329 |
3DGS + GaussianEnhancer++(SR) | 28.14 | 0.904 | 0.170 |
Mip-Splatting [35] + Bicubic | 28.86 | 0.829 | 0.571 |
Mip-Splatting + GaussianEnhancer + Bicubic | 29.75 | 0.883 | 0.498 |
Mip-Splatting + GaussianEnhancer++(LR) + Bicubic | 30.30 | 0.919 | 0.412 |
Mip-Splatting + GaussianEnhancer++(SR) | 30.79 | 0.957 | 0.206 |
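The PSNR, SSIM [61], and LPIPS [62] values reported throughout these tables are standard full-reference metrics. A minimal sketch of how such numbers are commonly computed is given below, using scikit-image and the `lpips` package; this is a generic recipe, not the paper's evaluation code.

```python
import numpy as np
import torch
import lpips  # pip install lpips
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

loss_fn = lpips.LPIPS(net="alex")  # AlexNet-backed LPIPS, a common choice

def evaluate(pred: np.ndarray, gt: np.ndarray) -> dict:
    """pred/gt: float images in [0, 1], shape (H, W, 3)."""
    psnr = peak_signal_noise_ratio(gt, pred, data_range=1.0)
    ssim = structural_similarity(gt, pred, channel_axis=-1, data_range=1.0)
    # LPIPS expects NCHW tensors scaled to [-1, 1].
    to_t = lambda a: torch.from_numpy(a).permute(2, 0, 1)[None].float() * 2 - 1
    lp = loss_fn(to_t(pred), to_t(gt)).item()
    return {"PSNR": psnr, "SSIM": ssim, "LPIPS": lp}

pred = np.random.rand(64, 64, 3).astype(np.float32)
gt = np.random.rand(64, 64, 3).astype(np.float32)
print(evaluate(pred, gt))
```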
Method | PSNR↑ | SSIM↑ | LPIPS↓ |
---|---|---|---|
2DGS [40] | 27.73 | 0.840 | 0.233 |
+ GaussianEnhancer | 28.06 | 0.844 | 0.222 |
+ GaussianEnhancer++ | 28.42 | 0.879 | 0.202 |
3DGS [11] | 28.17 | 0.854 | 0.212 |
+ GaussianEnhancer | 28.61 | 0.856 | 0.200 |
+ GaussianEnhancer++ | 28.96 | 0.871 | 0.186 |
Mip-Splatting [35] | 30.73 | 0.904 | 0.150 |
+ GaussianEnhancer | 30.93 | 0.917 | 0.142 |
+ GaussianEnhancer++ | 31.16 | 0.932 | 0.129 |
Method | 1K (Time/Peak Memory) | 2K (Time/Peak Memory) | 4K (Time/Peak Memory) |
---|---|---|---|
GaussianEnhancer | 377 ms/8.5 GB | 1105 ms/19.3 GB | - |
GaussianEnhancer++(LR) | 459 ms/8.6 GB | 1339 ms/19.5 GB | - |
GaussianEnhancer++(SR) | - | - | 412 ms/16.1 GB |
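Per-frame latency and peak GPU memory, as tabulated above, are typically measured with CUDA events and PyTorch's peak-memory counters. The sketch below shows one common measurement recipe; the warm-up count, iteration count, and input sizes are assumptions rather than the paper's protocol.

```python
import torch
import torch.nn as nn

def profile(model: nn.Module, x: torch.Tensor, warmup: int = 5, iters: int = 20):
    """Mean forward latency (ms) and peak GPU memory (GB); requires a CUDA device."""
    model = model.cuda().eval()
    x = x.cuda()
    torch.cuda.reset_peak_memory_stats()
    start = torch.cuda.Event(enable_timing=True)
    end = torch.cuda.Event(enable_timing=True)
    with torch.no_grad():
        for _ in range(warmup):  # warm-up passes stabilize kernels/caches
            model(x)
        torch.cuda.synchronize()
        start.record()
        for _ in range(iters):
            model(x)
        end.record()
        torch.cuda.synchronize()
    ms = start.elapsed_time(end) / iters
    gb = torch.cuda.max_memory_allocated() / 1024**3
    return ms, gb

# e.g. profile(enhancer, torch.rand(1, 3, 1080, 1920))  # a 1K-resolution frame
```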
Method | PSNR↑ | SSIM↑ | LPIPS↓ |
---|---|---|---|
2DGS [40] | 24.72 | 0.860 | 0.201 |
+ GaussianEnhancer | 25.39 | 0.870 | 0.194 |
+ GaussianEnhancer++ | 25.67 | 0.894 | 0.186 |
3DGS [11] | 26.07 | 0.887 | 0.158 |
+ GaussianEnhancer | 26.65 | 0.892 | 0.156 |
+ GaussianEnhancer++ | 26.86 | 0.902 | 0.153 |
Mip-Splatting [35] | 26.92 | 0.906 | 0.134 |
+ GaussianEnhancer | 27.48 | 0.905 | 0.135 |
+ GaussianEnhancer++ | 27.73 | 0.908 | 0.132 |
Method | PSNR↑ | SSIM↑ | LPIPS↓ |
---|---|---|---|
2DGS [40] | 29.88 | 0.943 | 0.221 |
+ GaussianEnhancer | 30.42 | 0.950 | 0.220 |
+ GaussianEnhancer++ | 30.98 | 0.955 | 0.193 |
3DGS [11] | 31.47 | 0.953 | 0.193 |
+ GaussianEnhancer | 31.87 | 0.961 | 0.192 |
+ GaussianEnhancer++ | 32.35 | 0.964 | 0.174 |
Mip-Splatting [35] | 32.14 | 0.960 | 0.179 |
+ GaussianEnhancer | 32.92 | 0.968 | 0.178 |
+ GaussianEnhancer++ | 33.30 | 0.970 | 0.165 |
Method | PSNR↑ | SSIM↑ | LPIPS↓ |
---|---|---|---|
2DGS [40] | 33.26 | 0.972 | 0.024 |
+ GaussianEnhancer | 34.68 | 0.978 | 0.021 |
+ GaussianEnhancer++ | 34.53 | 0.978 | 0.019 |
3DGS [11] | 34.96 | 0.976 | 0.016 |
+ GaussianEnhancer | 36.55 | 0.983 | 0.014 |
+ GaussianEnhancer++ | 36.40 | 0.983 | 0.013 |
Mip-Splatting [35] | 35.37 | 0.991 | 0.020 |
+ GaussianEnhancer | 36.69 | 0.996 | 0.018 |
+ GaussianEnhancer++ | 36.80 | 0.995 | 0.019 |
Datasets | Statistical Metrics | PSNR↑ | SSIM↑ | LPIPS↓ |
---|---|---|---|---|
Mip-NeRF360 [60] | Max Value | 0.81 | 0.046 | −0.026 |
 | Min Value | 0.56 | 0.031 | −0.036 |
 | Mean Value | 0.67 | 0.039 | −0.031 |
 | Std Dev | 0.09 | 0.005 | 0.004 |
Tanks & Temples [58] | Max Value | 1.14 | 0.041 | −0.012 |
 | Min Value | 0.86 | 0.030 | −0.018 |
 | Mean Value | 0.98 | 0.035 | −0.015 |
 | Std Dev | 0.12 | 0.004 | 0.002 |
Deep Blending [59] | Max Value | 1.18 | 0.014 | −0.022 |
 | Min Value | 0.97 | 0.010 | −0.033 |
 | Mean Value | 1.08 | 0.012 | −0.027 |
 | Std Dev | 0.07 | 0.002 | 0.004 |
NeRF-Synthetic [5] | Max Value | 1.46 | 0.007 | −0.004 |
 | Min Value | 1.03 | 0.005 | −0.006 |
 | Mean Value | 1.31 | 0.006 | −0.005 |
 | Std Dev | 0.15 | 0.001 | 0.001 |
Datasets | Statistical Metrics | PSNR↑ | SSIM↑ | LPIPS↓ |
---|---|---|---|---|
Mip-NeRF360 [60] | Max Value | 0.92 | 0.020 | −0.021 |
 | Min Value | 0.69 | 0.014 | −0.031 |
 | Mean Value | 0.81 | 0.016 | −0.024 |
 | Std Dev | 0.09 | 0.002 | 0.003 |
Tanks & Temples [58] | Max Value | 0.94 | 0.017 | −0.005 |
 | Min Value | 0.73 | 0.012 | −0.006 |
 | Mean Value | 0.83 | 0.015 | −0.005 |
 | Std Dev | 0.08 | 0.002 | 0.001 |
Deep Blending [59] | Max Value | 1.02 | 0.013 | −0.017 |
 | Min Value | 0.75 | 0.001 | −0.022 |
 | Mean Value | 0.87 | 0.011 | −0.019 |
 | Std Dev | 0.12 | 0.001 | 0.002 |
NeRF-Synthetic [5] | Max Value | 1.64 | 0.008 | −0.003 |
 | Min Value | 1.20 | 0.006 | −0.003 |
 | Mean Value | 1.38 | 0.007 | −0.003 |
 | Std Dev | 0.15 | 0.001 | 0.002 |
Datasets | Statistical Metrics | PSNR↑ | SSIM↑ | LPIPS↓ |
---|---|---|---|---|
Mip-NeRF360 [60] | Max Value | 0.51 | 0.032 | −0.017 |
 | Min Value | 0.35 | 0.025 | −0.025 |
 | Mean Value | 0.44 | 0.028 | −0.021 |
 | Std Dev | 0.06 | 0.002 | 0.003 |
Tanks & Temples [58] | Max Value | 0.89 | 0.003 | −0.001 |
 | Min Value | 0.74 | 0.001 | −0.003 |
 | Mean Value | 0.79 | 0.002 | −0.002 |
 | Std Dev | 0.05 | 0.0001 | 0.0001 |
Deep Blending [59] | Max Value | 1.38 | 0.012 | −0.011 |
 | Min Value | 0.93 | 0.008 | −0.016 |
 | Mean Value | 1.09 | 0.010 | −0.013 |
 | Std Dev | 0.15 | 0.001 | 0.002 |
NeRF-Synthetic [5] | Max Value | 1.66 | 0.005 | −0.0004 |
 | Min Value | 1.34 | 0.002 | −0.002 |
 | Mean Value | 1.49 | 0.004 | −0.001 |
 | Std Dev | 0.11 | 0.001 | 0.001 |
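The Max/Min/Mean/Std rows in these reliability tables summarize per-scene metric gains (enhanced minus baseline). A small NumPy sketch of that summary is below; the per-scene PSNR values in the example are hypothetical, and whether the sample or population standard deviation is used is an assumption.

```python
import numpy as np

def gain_stats(baseline: np.ndarray, enhanced: np.ndarray) -> dict:
    """Summarize per-scene gains (enhanced - baseline) as Max/Min/Mean/Std."""
    d = enhanced - baseline
    # ddof=1 gives the sample standard deviation; ddof=0 would be population.
    return {"Max": d.max(), "Min": d.min(), "Mean": d.mean(), "Std": d.std(ddof=1)}

# Hypothetical per-scene PSNR values for one dataset:
base = np.array([27.1, 28.4, 26.9, 30.2])
enh = np.array([27.9, 29.1, 27.5, 31.1])
print(gain_stats(base, enh))
```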
Variant | PSNR↑ | SSIM↑ | LPIPS↓ |
---|---|---|---|
2DGS [40] | 27.73 | 0.840 | 0.233 |
+ GaussianEnhancer | 28.06 | 0.844 | 0.222 |
+ GaussianEnhancer & GaussianEnhancer++ (TSDS) | 28.20 | 0.862 | 0.210 |
+ GaussianEnhancer & GaussianEnhancer++ (SSIF) | 28.38 | 0.863 | 0.211 |
+ GaussianEnhancer++ (All) | 28.42 | 0.879 | 0.202 |