Article

Depth-Based Dynamic Sampling of Neural Radiation Fields

1 Faculty of Electrical Engineering and Computer Science, Ningbo University, Ningbo 315201, China
2 Ningbo Institute of Industrial Technology, Chinese Academy of Sciences, Ningbo 315201, China
* Author to whom correspondence should be addressed.
Electronics 2023, 12(4), 1053; https://doi.org/10.3390/electronics12041053
Submission received: 11 January 2023 / Revised: 6 February 2023 / Accepted: 16 February 2023 / Published: 20 February 2023

Abstract

Although NeRF can achieve outstanding view synthesis, its practical use is limited because it requires many (hundreds of) views for training. With only a few input views, the Depth-DYN NeRF that we propose can accurately recover scene geometry. First, we adopt the ip_basic depth-completion method, which recovers a complete depth map from sparse radar depth data. We then design the Depth-DYN MLP network architecture, which uses a dense depth prior to constrain the NeRF optimization and adds a depth loss to supervise the Depth-DYN MLP network. Compared with color-only supervised NeRF, the Depth-DYN MLP network better recovers the geometric structure of the model and reduces the appearance of shadows. To further ensure that the depth rendered along rays intersecting these 3D points stays close to the measured depth, we dynamically adjust the sample space based on the depth of each pixel. With only a few input views, Depth-DYN NeRF considerably outperforms depth-supervised NeRF and other sparse-view variants. Using only 10–20 photos, our method renders high-quality images from novel views, as validated on a variety of benchmark datasets. Compared with NeRF, we obtain better image quality (average 22.47 dB for NeRF vs. 27.296 dB for ours).
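The two ideas that drive the method above — concentrating ray samples in a window around each pixel's depth prior, and penalizing the gap between the volume-rendered depth and the measured depth — can be sketched as follows. This is an illustrative sketch only: the function names, the fixed ±20% sampling window, and the masked L2 form of the depth loss are our assumptions, not the paper's exact formulation.

```python
import numpy as np

def dynamic_sample_depths(depth, near, far, n_samples, window=0.2):
    """Stratified sampling along a ray, concentrated near a depth prior.

    depth: completed depth prior for this pixel (<= 0 means no prior).
    Falls back to uniform stratified sampling over [near, far].
    """
    if depth <= 0:
        # No depth prior: stratify over the full near-far range.
        edges = np.linspace(near, far, n_samples + 1)
    else:
        # Shrink the sample space to a window around the prior depth.
        lo = max(near, depth * (1.0 - window))
        hi = min(far, depth * (1.0 + window))
        edges = np.linspace(lo, hi, n_samples + 1)
    # One uniform random sample per bin (stratified sampling).
    u = np.random.rand(n_samples)
    return edges[:-1] + u * (edges[1:] - edges[:-1])

def depth_loss(rendered_depth, prior_depth):
    """Masked L2 penalty between rendered depth and the dense depth prior."""
    mask = prior_depth > 0  # supervise only pixels that have a prior
    if not mask.any():
        return 0.0
    return float(np.mean((rendered_depth[mask] - prior_depth[mask]) ** 2))
```

In a training loop, `depth_loss` would be added to the usual photometric loss with a weighting coefficient, and the sampled depths would feed the standard NeRF volume-rendering quadrature.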
Keywords: NeRF; scene representation; view synthesis; image-based rendering; volume rendering

Share and Cite

MDPI and ACS Style

Wang, J.; Xiao, J.; Zhang, X.; Xu, X.; Jin, T.; Jin, Z. Depth-Based Dynamic Sampling of Neural Radiation Fields. Electronics 2023, 12, 1053. https://doi.org/10.3390/electronics12041053


Note that from the first issue of 2016, this journal uses article numbers instead of page numbers.

