Uncertainty Quantification of Neural Reflectance Fields for Underwater Scenes
Abstract
1. Introduction
- For the first time, uncertainty quantification is introduced to neural reflectance fields of underwater scenes, enabling us to analyse the reliability and enhance the robustness of the model.
- The regularization proposed in Ref. [14] is incorporated into BNU.
- Our uncertainty quantification framework strictly follows the volume rendering procedure and requires no changes to the underlying network architecture or code.
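The ensemble-based scheme that these contributions describe can be sketched as follows. This is a minimal illustration, assuming N independently trained reflectance-field models whose rendered images are already available; the function name is ours, not from the paper. Because the statistics are computed on the rendered outputs, the underlying renderer is left untouched, matching the third contribution above.

```python
import numpy as np

def ensemble_rgb_uncertainty(renderings):
    """Given a list of N rendered images (each H x W x 3), produced by
    independently trained models via the usual volume rendering, return
    the ensemble mean as the RGB prediction and the per-pixel variance
    as the predictive uncertainty."""
    stack = np.stack(renderings, axis=0)  # shape (N, H, W, 3)
    mean = stack.mean(axis=0)             # predicted RGB
    var = stack.var(axis=0)               # predictive uncertainty
    return mean, var
```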
2. Related Works
2.1. Underwater Neural Scene Representation
2.2. Uncertainty Estimation in Deep Learning
2.3. Uncertainty Estimation in Neural Radiance Fields
3. Scientific Background
3.1. Neural Reflectance Fields
3.2. Beyond NeRFs Underwater (BNU)
4. Uncertainty Estimation
4.1. Ensembles for Predictive RGB Uncertainty
4.2. Ensembles for Epistemic Uncertainty in Unseen Areas
5. Numerical Experiment
5.1. Experimental Setup
5.1.1. Dataset
5.1.2. Framework
5.1.3. Metrics
- The MAE directly measures the average absolute error between the model's predictions and the ground truth; the smaller the MAE, the more accurate the prediction.
- The MSE is obtained by squaring the differences between the ground truth and the predicted values and averaging them.
- The RMSE, the square root of the MSE, measures the deviation between the predicted values and the ground truth, and it is sensitive to outliers in the data.
- The PSNR is a metric used to measure image quality, defined as PSNR = 20·log10(max(x)/RMSE), where max(x) represents the maximum pixel value of image x.
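The four accuracy metrics above can be computed in a few lines; the sketch below assumes images normalised to [0, 1], so the PSNR peak value max(x) is 1.0 (the function name is illustrative).

```python
import numpy as np

def image_metrics(pred, gt):
    """Compute MAE, MSE, RMSE, and PSNR between a predicted image and
    the ground truth, both with pixel values in [0, 1]."""
    err = np.asarray(pred, dtype=np.float64) - np.asarray(gt, dtype=np.float64)
    mae = np.mean(np.abs(err))       # mean absolute error
    mse = np.mean(err ** 2)          # mean squared error
    rmse = np.sqrt(mse)              # root mean squared error
    peak = 1.0                       # maximum pixel value of image x
    psnr = 10.0 * np.log10(peak ** 2 / mse)
    return mae, mse, rmse, psnr
```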
- The negative log likelihood (NLL) was computed per pixel under a Gaussian predictive distribution with ensemble mean μ and variance σ²; for a ground-truth value y, NLL = ½·log(2πσ²) + (y − μ)²/(2σ²), averaged over pixels.
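The per-pixel Gaussian negative log likelihood can be sketched as follows, assuming the ensemble's mean and variance maps are already available (the variance floor `eps` is a numerical safeguard we add, not part of the paper's definition):

```python
import numpy as np

def gaussian_nll(gt, mu, var, eps=1e-6):
    """Average per-pixel negative log likelihood of the ground truth gt
    under a Gaussian predictive distribution N(mu, var)."""
    var = np.maximum(var, eps)  # guard against zero variance
    nll = 0.5 * (np.log(2.0 * np.pi * var) + (gt - mu) ** 2 / var)
    return float(np.mean(nll))
```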
- For the AUSE, the prediction error (e.g., the RMSE) of each pixel is first computed from the predicted values and the ground truth. The uncertainty values obtained from training and the prediction errors of all pixels are merged and sorted. The top 1% of pixels are removed from the sorted list, and the average error and average uncertainty of the remaining pixels give the point at 1%; removing the top 2% gives the point at 2%, and so on up to 100%. This process yields two curves, the prediction-error curve and the uncertainty curve, and the area enclosed between them is the AUSE value. A lower AUSE indicates a stronger correlation between the estimated uncertainty and the true error, and thus a more reliable uncertainty estimation.
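The sparsification procedure above can be sketched as follows. This is one widely used formulation, in which pixels are progressively removed by predicted uncertainty (sparsification curve) and, for comparison, by the true error (oracle curve); the function name and step count are illustrative.

```python
import numpy as np

def ause(per_pixel_error, per_pixel_uncertainty, steps=100):
    """Area Under the Sparsification Error curve. At each fraction k,
    the k% most uncertain pixels are removed and the mean error of the
    rest is recorded; removing by the true error instead gives the
    oracle curve. AUSE is the area between the two curves: lower means
    uncertainty ranks pixels more like the true error does."""
    err = np.asarray(per_pixel_error, dtype=np.float64).ravel()
    unc = np.asarray(per_pixel_uncertainty, dtype=np.float64).ravel()
    n = err.size
    by_unc = err[np.argsort(unc)]   # ascending: most uncertain at the end
    by_err = np.sort(err)           # oracle ordering
    gap = 0.0
    for k in range(steps):
        keep = n - int(n * k / steps)       # pixels left after removing top k%
        gap += (by_unc[:keep].mean() - by_err[:keep].mean()) / steps
    return gap
```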
5.2. Results
5.3. Ablation Study
5.3.1. Influence of the Ensemble
5.3.2. Influence of Uncertainty Terms
5.4. Discussion
6. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Conflicts of Interest
References
- Mildenhall, B.; Srinivasan, P.P.; Tancik, M.; Barron, J.T.; Ramamoorthi, R.; Ng, R. NeRF: Representing scenes as neural radiance fields for view synthesis. Commun. ACM 2021, 65, 99–106. [Google Scholar] [CrossRef]
- Zhang, T.; Johnson-Roberson, M. Beyond NeRF Underwater: Learning neural reflectance fields for true color correction of marine imagery. IEEE Robot. Autom. Lett. 2023, 8, 6467–6474. [Google Scholar] [CrossRef]
- Bi, S.; Xu, Z.; Srinivasan, P.; Mildenhall, B.; Sunkavalli, K.; Hašan, M.; Hold-Geoffroy, Y.; Kriegman, D.; Ramamoorthi, R. Neural reflectance fields for appearance acquisition. arXiv 2020, arXiv:2008.03824. [Google Scholar]
- Pairet, È.; Hernández, J.D.; Carreras, M.; Petillot, Y.; Lahijanian, M. Online mapping and motion planning under uncertainty for safe navigation in unknown environments. IEEE Trans. Autom. Sci. Eng. 2021, 19, 3356–3378. [Google Scholar] [CrossRef]
- Melo, J. AUV position uncertainty and target reacquisition. In Proceedings of the Global Oceans 2020: Singapore–US Gulf Coast, Biloxi, MS, USA, 5–30 October 2020; IEEE: Piscataway, NJ, USA, 2020; pp. 1–6. [Google Scholar]
- Pairet, È.; Hernández, J.D.; Lahijanian, M.; Carreras, M. Uncertainty-based online mapping and motion planning for marine robotics guidance. In Proceedings of the 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Madrid, Spain, 1–5 October 2018; IEEE: Piscataway, NJ, USA, 2018; pp. 2367–2374. [Google Scholar]
- Shen, J.; Ren, R.; Ruiz, A.; Moreno-Noguer, F. Estimating 3D uncertainty field: Quantifying uncertainty for neural radiance fields. arXiv 2023, arXiv:2311.01815. [Google Scholar]
- Abdar, M.; Pourpanah, F.; Hussain, S.; Rezazadegan, D.; Liu, L.; Ghavamzadeh, M.; Fieguth, P.; Cao, X.; Khosravi, A.; Acharya, U.R.; et al. A review of uncertainty quantification in deep learning: Techniques, applications and challenges. Inf. Fusion 2021, 76, 243–297. [Google Scholar] [CrossRef]
- MacKay, D.J. A practical Bayesian framework for backpropagation networks. Neural Comput. 1992, 4, 448–472. [Google Scholar] [CrossRef]
- Neal, R.M. Bayesian Learning for Neural Networks; Springer Science & Business Media: Berlin/Heidelberg, Germany, 2012; Volume 118. [Google Scholar]
- Gal, Y.; Ghahramani, Z. Dropout as a Bayesian approximation: Representing model uncertainty in deep learning. In Proceedings of the International Conference on Machine Learning, New York, NY, USA, 19–24 June 2016; pp. 1050–1059. [Google Scholar]
- Lakshminarayanan, B.; Pritzel, A.; Blundell, C. Simple and scalable predictive uncertainty estimation using deep ensembles. Adv. Neural Inf. Process. Syst. 2017, 30, 6402–6413. [Google Scholar]
- Sünderhauf, N.; Abou-Chakra, J.; Miller, D. Density-aware NeRF ensembles: Quantifying predictive uncertainty in neural radiance fields. In Proceedings of the 2023 IEEE International Conference on Robotics and Automation (ICRA), London, UK, 29 May–2 June 2023; IEEE: Piscataway, NJ, USA, 2023; pp. 9370–9376. [Google Scholar]
- Yang, J.; Pavone, M.; Wang, Y. FreeNeRF: Improving few-shot neural rendering with free frequency regularization. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Vancouver, BC, Canada, 17–24 June 2023; pp. 8254–8263. [Google Scholar]
- Sethuraman, A.V.; Ramanagopal, M.S.; Skinner, K.A. WaterNeRF: Neural radiance fields for underwater scenes. In Proceedings of the OCEANS 2023-MTS/IEEE US Gulf Coast, Biloxi, MS, USA, 25–28 September 2023; IEEE: Piscataway, NJ, USA, 2023; pp. 1–7. [Google Scholar]
- Barron, J.T.; Mildenhall, B.; Verbin, D.; Srinivasan, P.P.; Hedman, P. Mip-NeRF 360: Unbounded anti-aliased neural radiance fields. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, New Orleans, LA, USA, 18–24 June 2022; pp. 5470–5479. [Google Scholar]
- Levy, D.; Peleg, A.; Pearl, N.; Rosenbaum, D.; Akkaynak, D.; Korman, S.; Treibitz, T. SeaThru-NeRF: Neural radiance fields in scattering media. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Vancouver, BC, Canada, 17–24 June 2023; pp. 56–65. [Google Scholar]
- Orinaitė, U.; Palevičius, P.; Pal, M.; Ragulskis, M. A deep learning-based approach for automatic detection of concrete cracks below the waterline. Vibroeng. Procedia 2022, 44, 142–148. [Google Scholar] [CrossRef]
- Orinaitė, U.; Karaliūtė, V.; Pal, M.; Ragulskis, M. Detecting underwater concrete cracks with machine learning: A clear vision of a murky problem. Appl. Sci. 2023, 13, 7335. [Google Scholar] [CrossRef]
- Guo, C.; Pleiss, G.; Sun, Y.; Weinberger, K.Q. On calibration of modern neural networks. In Proceedings of the International Conference on Machine Learning, Sydney, Australia, 6–11 August 2017; pp. 1321–1330. [Google Scholar]
- Hernández-Lobato, J.M.; Adams, R. Probabilistic backpropagation for scalable learning of Bayesian neural networks. In Proceedings of the International Conference on Machine Learning, Lille, France, 7–9 July 2015; pp. 1861–1869. [Google Scholar]
- Neapolitan, R.E. Learning Bayesian networks. In Proceedings of the 13th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, San Jose, CA, USA, 12–15 August 2007. [Google Scholar]
- Aralikatti, R.; Margam, D.; Sharma, T.; Abhinav, T.; Venkatesan, S.M. Global SNR estimation of speech signals using entropy and uncertainty estimates from dropout networks. arXiv 2018, arXiv:1804.04353. [Google Scholar]
- Hernández, S.; Vergara, D.; Valdenegro-Toro, M.; Jorquera, F. Improving predictive uncertainty estimation using dropout–Hamiltonian Monte Carlo. Soft Comput. 2020, 24, 4307–4322. [Google Scholar] [CrossRef]
- Kingma, D.P.; Salimans, T.; Welling, M. Variational dropout and the local reparameterization trick. In Proceedings of the Advances in Neural Information Processing Systems, Montreal, QC, Canada, 7–10 December 2015; pp. 2575–2583. [Google Scholar]
- Jain, S.; Liu, G.; Mueller, J.; Gifford, D. Maximizing overall diversity for improved uncertainty estimates in deep ensembles. In Proceedings of the AAAI Conference on Artificial Intelligence, New York, NY, USA, 7–12 February 2020; pp. 4264–4271. [Google Scholar]
- Chen, L.; Cheng, R.; Li, S.; Lian, H.; Zheng, C.; Bordas, S.P.A. A sample-efficient deep learning method for multivariate uncertainty qualification of acoustic–vibration interaction problems. Comput. Methods Appl. Mech. Eng. 2022, 393, 114784. [Google Scholar] [CrossRef]
- Chen, L.; Lian, H.; Xu, Y.; Li, S.; Liu, Z.; Atroshchenko, E.; Kerfriden, P. Generalized isogeometric boundary element method for uncertainty analysis of time-harmonic wave propagation in infinite domains. Appl. Math. Model. 2023, 114, 360–378. [Google Scholar] [CrossRef]
- Chen, L.; Wang, Z.; Lian, H.; Ma, Y.; Meng, Z.; Li, P.; Ding, C.; Bordas, S.P.A. Reduced order isogeometric boundary element methods for CAD-integrated shape optimization in electromagnetic scattering. Comput. Methods Appl. Mech. Eng. 2024, 419, 116654. [Google Scholar] [CrossRef]
- Martin-Brualla, R.; Radwan, N.; Sajjadi, M.S.; Barron, J.T.; Dosovitskiy, A.; Duckworth, D. NeRF in the wild: Neural radiance fields for unconstrained photo collections. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Nashville, TN, USA, 20–25 June 2021; pp. 7210–7219. [Google Scholar]
- Shen, J.; Ruiz, A.; Agudo, A.; Moreno-Noguer, F. Stochastic neural radiance fields: Quantifying uncertainty in implicit 3D representations. In Proceedings of the 2021 International Conference on 3D Vision (3DV), London, UK, 1–3 December 2021; IEEE: Piscataway, NJ, USA, 2021; pp. 972–981. [Google Scholar]
- Shen, J.; Agudo, A.; Moreno-Noguer, F.; Ruiz, A. Conditional-flow NeRF: Accurate 3D modelling with reliable uncertainty quantification. In Proceedings of the European Conference on Computer Vision, Tel Aviv, Israel, 23–27 October 2022; Springer: Cham, Switzerland, 2022; pp. 540–557. [Google Scholar]
- Pan, X.; Lai, Z.; Song, S.; Huang, G. ActiveNeRF: Learning where to see with uncertainty estimation. In Proceedings of the European Conference on Computer Vision, Tel Aviv, Israel, 23–27 October 2022; Springer: Cham, Switzerland, 2022; pp. 230–246. [Google Scholar]
- Max, N. Optical models for direct volume rendering. IEEE Trans. Vis. Comput. Graph. 1995, 1, 99–108. [Google Scholar] [CrossRef]
- Song, Y.; Nakath, D.; She, M.; Elibol, F.; Köser, K. Deep sea robotic imaging simulator. In Pattern Recognition, Proceedings of the ICPR International Workshops and Challenges, Virtual Event, 10–15 January 2021; Springer: Cham, Switzerland, 2021; pp. 375–389. [Google Scholar]
- Schonberger, J.L.; Frahm, J.M. Structure-from-motion revisited. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016; pp. 4104–4113. [Google Scholar]
- Loquercio, A.; Segu, M.; Scaramuzza, D. A general framework for uncertainty estimation in deep learning. IEEE Robot. Autom. Lett. 2020, 5, 3153–3160. [Google Scholar] [CrossRef]
- Qu, C.; Liu, W.; Taylor, C.J. Bayesian deep basis fitting for depth completion with uncertainty. In Proceedings of the IEEE/CVF International Conference on Computer Vision, Montreal, QC, Canada, 11–17 October 2021; pp. 16147–16157. [Google Scholar]
- Bae, G.; Budvytis, I.; Cipolla, R. Estimating and exploiting the aleatoric uncertainty in surface normal estimation. In Proceedings of the IEEE/CVF International Conference on Computer Vision, Montreal, QC, Canada, 11–17 October 2021; pp. 13137–13146. [Google Scholar]
- Poggi, M.; Aleotti, F.; Tosi, F.; Mattoccia, S. On the uncertainty of self-supervised monocular depth estimation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA, 13–19 June 2020; pp. 3227–3237. [Google Scholar]
| Method | Dataset | Stage | MSE | RMSE | MAE | PSNR | AUSE RMSE | AUSE MAE | NLL |
|---|---|---|---|---|---|---|---|---|---|
| Ours | Synthetic | Coarse | 0.0004 | 0.019 | 0.0127 | 35.1 | 0.0015 | 0.0015 | −0.0116 ± 0.0628 |
| Ours | Synthetic | Refined | 0.0004 | 0.019 | 0.0127 | 35.1 | 0.0014 | 0.0013 | 0.1125 ± 0.0923 |
| Ours | Real | Coarse | 0.0006 | 0.021 | 0.0161 | 34.1 | 0.0051 | 0.0043 | 0.5255 ± 0.103 |
| Ours | Real | Refined | 0.0006 | 0.021 | 0.0162 | 34.1 | 0.0028 | 0.0026 | 0.5615 ± 0.0995 |
| Method | Dataset | Stage | MSE | RMSE | MAE | PSNR |
|---|---|---|---|---|---|---|
| BNU | Synthetic | Coarse | 0.0005 | 0.02 | 0.0133 | 34.6 |
| BNU | Synthetic | Refined | 0.0005 | 0.02 | 0.0133 | 34.6 |
| BNU | Real | Coarse | 0.0006 | 0.022 | 0.0167 | 33.8 |
| BNU | Real | Refined | 0.0006 | 0.022 | 0.0167 | 33.8 |
| Ours | Synthetic | Coarse | 0.0004 | 0.019 | 0.0127 | 35.1 |
| Ours | Synthetic | Refined | 0.0004 | 0.019 | 0.0127 | 35.1 |
| Ours | Real | Coarse | 0.0006 | 0.021 | 0.0161 | 34.1 |
| Ours | Real | Refined | 0.0006 | 0.021 | 0.0162 | 34.1 |
| Variant | Dataset | Stage | MSE | RMSE | MAE | PSNR |
|---|---|---|---|---|---|---|
| BNU | Synthetic | Coarse | 0.000491529 | 0.0200459 | 0.013337525 | 34.63339233 |
| BNU | Synthetic | Refined | 0.000492371 | 0.020073349 | 0.013330285 | 34.61220551 |
| BNU | Real | Coarse | 0.000599609 | 0.022026947 | 0.01668788 | 33.81985092 |
| BNU | Real | Refined | 0.000602939 | 0.022051068 | 0.016739015 | 33.80893326 |
| Albedo | Synthetic | Coarse | 0.000502874 | 0.020200016 | 0.013464036 | 34.58008575 |
| Albedo | Synthetic | Refined | 0.000503786 | 0.020222723 | 0.013453723 | 34.56002045 |
| Albedo | Real | Coarse | 0.000589828 | 0.02165741 | 0.016421406 | 33.97740173 |
| Albedo | Real | Refined | 0.000590084 | 0.02164993 | 0.01644345 | 33.97847366 |
| Norm | Synthetic | Coarse | 0.000459542 | 0.019584062 | 0.012990904 | 34.78876877 |
| Norm | Synthetic | Refined | 0.000460262 | 0.019598676 | 0.012975736 | 34.77446747 |
| Norm | Real | Coarse | 0.000566563 | 0.021291357 | 0.016084474 | 34.09726334 |
| Norm | Real | Refined | 0.000567068 | 0.021277852 | 0.016084943 | 34.100914 |
| Density | Synthetic | Coarse | 0.000518398 | 0.020231744 | 0.01353504 | 34.62197495 |
| Density | Synthetic | Refined | 0.000519 | 0.020276753 | 0.013544992 | 34.58899689 |
| Density | Real | Coarse | 0.000555657 | 0.02125203 | 0.016046988 | 34.08520126 |
| Density | Real | Refined | 0.000556773 | 0.021272138 | 0.016073255 | 34.07310486 |
| Dataset | Stage | NLL | AUSE RMSE | AUSE MAE | NLL | AUSE RMSE | AUSE MAE | NLL | AUSE RMSE | AUSE MAE |
|---|---|---|---|---|---|---|---|---|---|---|
| Synthetic | Coarse | 475.5226 ± 15,068.98 | 0.002254584 | 0.001743699 | −0.34395203 ± 0.17283306 | 0.001247811 | 0.00123583 | −0.011556505 ± 0.062817425 | 0.001504149 | 0.001509211 |
| Synthetic | Refined | 474.00775 ± 15,063.435 | 0.002261687 | 0.001751348 | −0.21837942 ± 0.217421 | 0.002945993 | 0.002841848 | 0.11246947 ± 0.09231803 | 0.001368612 | 0.001328661 |
| Real | Coarse | 1207.923 ± 8092 | 0.002890238 | 0.002550809 | 0.32225832 ± 0.12907358 | 0.002272962 | 0.001563942 | 0.52552766 ± 0.10300194 | 0.005134132 | 0.00426048 |
| Real | Refined | 482.2294 ± 4909.587 | 0.002837191 | 0.002513715 | 0.35704166 ± 0.14970568 | 0.000311964 | 0.000618934 | 0.5615406 ± 0.099535696 | 0.002829015 | 0.0026005 |
| Variant | Dataset | Stage | NLL | AUSE RMSE | AUSE MAE | NLL | AUSE RMSE | AUSE MAE | NLL | AUSE RMSE | AUSE MAE |
|---|---|---|---|---|---|---|---|---|---|---|---|
| Normal | Synthetic | Coarse | 450.82037 ± 4871.639 | 0.000249919 | 0.000233828 | −0.6259125 ± 0.20818105 | 0.000698834 | 0.000657816 | −0.39603126 ± 0.11323436 | 0.00241437 | 0.002293217 |
| Normal | Synthetic | Refined | 289.39508 ± 3605.0972 | 0.00025749 | 0.000241996 | −0.6097659 ± 0.20786819 | 0.000721777 | 0.000680518 | −0.3866958 ± 0.11498752 | 0.002479377 | 0.002356168 |
| Normal | Real | Coarse | 1610.0344 ± 8977.543 | 0.000125807 | 0.000121349 | −0.5150174 ± 0.25648573 | 0.000450089 | 0.000362964 | −0.38426322 ± 0.14627357 | 0.000507562 | 0.000392292 |
| Normal | Real | Refined | 968.5724 ± 7319.2705 | 0.000117436 | 0.000112676 | −0.4824918 ± 0.25163838 | 0.000414992 | 0.00030181 | −0.35871175 ± 0.1483912 | 0.000457031 | 0.000314412 |
| Albedo | Synthetic | Coarse | 1070.254 ± 6610.523 | 0.000510919 | 0.000523254 | −0.6515078 ± 0.18188678 | 0.000651663 | 0.000609364 | −0.27608374 ± 0.062299836 | 0.002779543 | 0.002610347 |
| Albedo | Synthetic | Refined | 1282.3138 ± 7401.9 | 0.000609278 | 0.000620129 | −0.6313605 ± 0.18285266 | 0.000746449 | 0.000704222 | −0.2656904 ± 0.06084825 | 0.002868676 | 0.002695229 |
| Albedo | Real | Coarse | 1639.0918 ± 9054.044 | 0.000348316 | 0.000300185 | −0.37690923 ± 0.24971825 | 0.000946304 | 0.00065761 | 0.01580802 ± 0.09670887 | 0.000669593 | 0.000235055 |
| Albedo | Real | Refined | 693.6819 ± 5697.16 | 0.000376567 | 0.000328893 | −0.3352108 ± 0.24189745 | 0.001014562 | 0.000690136 | 0.03544719 ± 0.09218908 | 0.000787288 | 0.000305379 |
| Density | Synthetic | Coarse | 2052.7996 ± 9721.339 | 0.002614084 | 0.002074688 | −0.5209125 ± 0.12687543 | 0.000790859 | 0.000755849 | −0.18053763 ± 0.076944746 | 0.002355352 | 0.00182316 |
| Density | Synthetic | Refined | 1352.4849 ± 7712.8203 | 0.002602652 | 0.002061267 | −0.47604796 ± 0.13382044 | 0.001606671 | 0.001540226 | −0.13651294 ± 0.076657325 | 0.001342092 | 0.001227694 |
| Density | Real | Coarse | 1652.699 ± 8296.807 | 1.37 | 9.67 | −0.09061173 ± 0.21988438 | 0.000879356 | 0.00048239 | 0.052660644 ± 0.10419571 | 0.000483531 | 0.000182983 |
| Density | Real | Refined | 1821.3926 ± 8670.74 | 2.85 | 5.22 | −0.04039674 ± 0.2547284 | 0.002096198 | 0.001341577 | 0.106006674 ± 0.1293263 | 0.000180225 | 0.000390491 |
| Dataset | Stage | NLL (N = 5) | AUSE RMSE (N = 5) | AUSE MAE (N = 5) | NLL (N = 10) | AUSE RMSE (N = 10) | AUSE MAE (N = 10) | NLL (N = 15) | AUSE RMSE (N = 15) | AUSE MAE (N = 15) |
|---|---|---|---|---|---|---|---|---|---|---|
| Synthetic | Coarse | −0.06423895 ± 0.077842504 | 0.012891437 | 0.011193172 | −0.06423895 ± 0.077842504 | 0.012891437 | 0.011193172 | −0.06081396 ± 0.07654448 | 0.01272295 | 0.011058994 |
| Synthetic | Refined | −0.06856838 ± 0.07776483 | 0.011792897 | 0.010343174 | −0.06856838 ± 0.07776483 | 0.011792897 | 0.010343174 | −0.0646455 ± 0.0769008 | 0.011601113 | 0.010205722 |
| Real | Coarse | 0.674563 ± 0.1071525 | 0.00481758 | 0.004217093 | 0.6660141 ± 0.11203171 | 0.005699564 | 0.004884685 | 0.674563 ± 0.1071525 | 0.00481758 | 0.004217093 |
| Real | Refined | 0.6742599 ± 0.10600442 | 0.004616567 | 0.003919621 | 0.49425298 ± 0.10282664 | 0.004663244 | 0.00389921 | 0.6742599 ± 0.10600442 | 0.004616567 | 0.003919621 |
Lian, H.; Li, X.; Chen, L.; Wen, X.; Zhang, M.; Zhang, J.; Qu, Y. Uncertainty Quantification of Neural Reflectance Fields for Underwater Scenes. J. Mar. Sci. Eng. 2024, 12, 349. https://doi.org/10.3390/jmse12020349