Uncertainty-Aware Multimodal Trajectory Prediction via a Single Inference from a Single Model
Abstract
1. Introduction
- Demonstrating the feasibility of a single-inference, single-model approach, based on deterministic single forward pass methods, for uncertainty quantification in trajectory prediction on the resource- and compute-constrained edge platforms of autonomous vehicles;
- Proposing uncertainty-aware multimodal trajectory prediction (UAMTP), which leverages disentangled quantification of aleatoric and epistemic uncertainties in the longitudinal and lateral directions of the vehicle (a kinematic sketch of this decomposition follows this list);
- Demonstrating that UAMTP can improve the safety of autonomous vehicles in driving scenarios with inherent uncertainties.
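The decomposition referenced above (and detailed in Section 3.1) predicts per-step longitudinal velocity and lateral yaw instead of raw positions, from which a future trajectory can be rolled out kinematically. The following is a minimal sketch of that idea, assuming a simple discrete-time kinematic model with a fixed timestep; the function name, timestep, and example values are illustrative and not the paper's implementation.

```python
import numpy as np

def rollout_trajectory(x0, y0, yaw0_deg, speeds, yaw_rates_deg, dt=0.1):
    """Roll out future (x, y) positions from predicted per-step speed and
    yaw rate, starting from the current pose. Purely kinematic illustration."""
    x, y, yaw = x0, y0, np.deg2rad(yaw0_deg)
    traj = []
    for v, r in zip(speeds, np.deg2rad(yaw_rates_deg)):
        yaw += r * dt              # lateral motion: update heading from yaw rate
        x += v * np.cos(yaw) * dt  # longitudinal motion: advance along heading
        y += v * np.sin(yaw) * dt
        traj.append((x, y))
    return np.array(traj)

# Example: 3 s horizon at 10 Hz, constant 10 m/s with a gentle left turn.
future = rollout_trajectory(0.0, 0.0, 0.0,
                            speeds=np.full(30, 10.0),
                            yaw_rates_deg=np.full(30, 5.0))
```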
2. Background and Related Works
2.1. Types of Uncertainty
2.1.1. Aleatoric Uncertainty
2.1.2. Epistemic Uncertainty
2.2. Methods for Uncertainty Quantification
2.2.1. Bayesian Model
2.2.2. Monte Carlo Dropout
2.2.3. Deep Ensembles
2.2.4. Deep Evidential Regression
2.2.5. Deterministic Single Forward Pass
2.3. Trajectory Prediction in Autonomous Driving
2.3.1. Uncertainty Quantification in Trajectory Prediction
2.3.2. Multimodal Trajectory Prediction
3. Methods
3.1. Decomposition of Trajectory Prediction Task
3.1.1. Decomposition into Velocity and Yaw Prediction Task
3.1.2. Trajectory Prediction Task
3.1.3. Dataset
3.1.4. Model Architecture
3.2. Uncertainty Quantification
3.2.1. Uncertainty Quantification Methods
3.2.2. Gaussian Mixture Model
3.2.3. Uncertainty Quantification Equations
3.3. Uncertainty-Aware Multimodal Trajectory Prediction (UAMTP)
3.3.1. Analysis of Uncertainty in Trajectory Prediction
3.3.2. Uncertainty-Aware Multimodal Trajectory Prediction (UAMTP) Methods
4. Experiments
4.1. Evaluation of Decomposed Trajectory Prediction Task
4.2. Evaluation of Uncertainty Quantification
4.3. Evaluation of Uncertainty-Aware Multimodal Trajectory Prediction
5. Discussion and Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Conflicts of Interest
References
| Attribute | Unit |
|---|---|
| Position x, y | meter |
| Yaw | degree |
| Velocity x, y | m/s |
| Yaw rate | degree/s |
| Acceleration x, y | m/s² |
| Is the vehicle at a stop line | true or false |
| Traffic light affecting the vehicle | red, yellow, or green |
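Each vehicle's input at a given timestep can be carried as a simple record of the attributes above. A minimal sketch, assuming the units listed in the table; the class and field names are hypothetical, not the paper's data schema.

```python
from dataclasses import dataclass

@dataclass
class VehicleState:
    """One timestep of the per-vehicle input record (units as in the table above)."""
    x: float            # position x [m]
    y: float            # position y [m]
    yaw: float          # heading [deg]
    vx: float           # velocity x [m/s]
    vy: float           # velocity y [m/s]
    yaw_rate: float     # [deg/s]
    ax: float           # acceleration x [m/s^2]
    ay: float           # acceleration y [m/s^2]
    at_stop_line: bool  # is the vehicle at a stop line
    traffic_light: str  # "red" | "yellow" | "green"
```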
| Method | # of Models (↓) | # of Inferences (↓) | Computational Burden (↓) |
|---|---|---|---|
| Monte Carlo Dropout | 1 | 16 | High (Monte Carlo sampling) |
| Deep Ensembles | 8 | 1 per model | High (bootstrap aggregating) |
| Deterministic Single Forward Pass | 1 | 1 | Moderate (distance/density calculation) |
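The inference counts above translate directly into code structure: MC dropout requires many stochastic passes through one model, deep ensembles require one pass through each of several models, while a deterministic single forward pass method produces its prediction and uncertainty in a single pass. The following is a minimal PyTorch sketch contrasting the first and last patterns, assuming a small classification head; the module, dropout rate, and T = 16 are illustrative and not the paper's architecture.

```python
import torch
import torch.nn as nn

class SmallHead(nn.Module):
    """Illustrative classification head with dropout (not the paper's model)."""
    def __init__(self, d_in=64, n_classes=9, p=0.2):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(d_in, 128), nn.ReLU(),
                                 nn.Dropout(p), nn.Linear(128, n_classes))

    def forward(self, x):
        return self.net(x)

@torch.no_grad()
def mc_dropout_predict(model, x, T=16):
    """Monte Carlo dropout: T stochastic passes through one model."""
    model.train()                                   # keep dropout active at test time
    probs = torch.stack([model(x).softmax(-1) for _ in range(T)])
    return probs.mean(0), probs.var(0)              # predictive mean and spread

@torch.no_grad()
def single_pass_predict(model, x):
    """Deterministic single forward pass: one model, one inference."""
    model.eval()
    return model(x).softmax(-1)

head = SmallHead()
x = torch.randn(4, 64)
mean_p, var_p = mc_dropout_predict(head, x)   # 16 inferences
p = single_pass_predict(head, x)              # 1 inference
```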
| Method | Velocity Accuracy (↑) | Yaw Accuracy (↑) | # of Trainable Params (↓) | # of FLOPs (↓) | Runtime per Inference (↓) |
|---|---|---|---|---|---|
| MCD | 79.8890 ± 0.1442 | 94.3683 ± 0.0799 | 4,384,808 | 991,974,024 | 0.735 s (21.62×) |
| DE | 81.7350 ± 0.1890 | 95.0141 ± 0.0786 | 35,078,464 | 7,935,792,192 | 0.111 s (3.26×) |
| DSFP | 80.8642 ± 0.2279 | 94.3762 ± 0.0949 | 4,384,808 | 991,974,024 | 0.034 s (1×) |
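The runtime multipliers in the last column are the per-inference runtimes normalized by the fastest method (DSFP). A quick check reproducing the reported ratios:

```python
runtimes_s = {"MCD": 0.735, "DE": 0.111, "DSFP": 0.034}   # seconds, from the table above
fastest = min(runtimes_s.values())
print({k: round(v / fastest, 2) for k, v in runtimes_s.items()})
# -> {'MCD': 21.62, 'DE': 3.26, 'DSFP': 1.0}
```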
Accuracy (↑)

| Method | Velocity, After-10 | Velocity, After-20 | Velocity, After-30 | Velocity, After-40 | Yaw, After-10 | Yaw, After-20 | Yaw, After-30 | Yaw, After-40 |
|---|---|---|---|---|---|---|---|---|
| MCD | 81.7733 ± 0.2923 | 80.6975 ± 0.2921 | 79.6937 ± 0.2912 | 77.3916 ± 0.2781 | 94.3683 ± 0.0799 | 96.7710 ± 0.0843 | 95.5265 ± 0.1328 | 91.5189 ± 0.2259 |
| DE | 83.2975 ± 0.3482 | 82.4998 ± 0.3817 | 81.3326 ± 0.3678 | 79.8101 ± 0.4114 | 97.2709 ± 0.0811 | 96.0232 ± 0.1208 | 94.3470 ± 0.1576 | 92.4154 ± 0.2301 |
| DSFP | 82.2107 ± 0.4408 | 81.5333 ± 0.4512 | 80.6762 ± 0.4672 | 79.0366 ± 0.4636 | 96.8556 ± 0.1114 | 95.3846 ± 0.1558 | 93.8225 ± 0.1929 | 91.4419 ± 0.2650 |
| Method | Aleatoric Uncertainty of Velocity, AUROC-AU (↑) | Aleatoric Uncertainty of Yaw, AUROC-AU (↑) |
|---|---|---|
| MCD | 84.1455 ± 0.1097 | 96.7504 ± 0.0392 |
| DE | 89.3743 ± 0.1300 | 97.0878 ± 0.0433 |
| DSFP | 89.9730 ± 0.1464 | 97.1380 ± 0.0440 |
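AUROC-AU measures how well the quantified aleatoric uncertainty ranks erroneous predictions above correct ones (higher is better). The following is a minimal sketch of such an evaluation, assuming per-sample uncertainty scores and error indicators are available; the paper's exact labeling protocol is not reproduced here, and the toy values are illustrative.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

def uncertainty_auroc(uncertainty, is_error):
    """AUROC of using an uncertainty score to detect erroneous predictions.
    `is_error` is 1 where the model's prediction was wrong, else 0."""
    return roc_auc_score(is_error, uncertainty)

# Toy check: errors carry higher uncertainty, so the ranking is perfect.
u = np.array([0.1, 0.2, 0.8, 0.9, 0.3, 0.7])
e = np.array([0, 0, 1, 1, 0, 1])
print(uncertainty_auroc(u, e))  # 1.0
```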
AUROC-AU (↑) for aleatoric uncertainty

| Method | Velocity, After-10 | Velocity, After-20 | Velocity, After-30 | Velocity, After-40 | Yaw, After-10 | Yaw, After-20 | Yaw, After-30 | Yaw, After-40 |
|---|---|---|---|---|---|---|---|---|
| MCD | 90.0805 ± 0.1379 | 87.5751 ± 0.1549 | 82.3324 ± 0.2579 | 76.5939 ± 0.2884 | 97.2907 ± 0.0685 | 97.1138 ± 0.0813 | 96.6549 ± 0.0690 | 95.9420 ± 0.0922 |
| DE | 91.1284 ± 0.2332 | 90.3828 ± 0.2417 | 89.0760 ± 0.2653 | 86.9101 ± 0.2955 | 97.5207 ± 0.0877 | 97.3861 ± 0.0908 | 97.0043 ± 0.0794 | 96.4400 ± 0.0882 |
| DSFP | 91.8791 ± 0.2961 | 91.3033 ± 0.2763 | 89.3830 ± 0.2911 | 87.1825 ± 0.3071 | 97.5296 ± 0.0871 | 97.3894 ± 0.0862 | 97.0558 ± 0.0873 | 96.5722 ± 0.0913 |
| Method | Epistemic Uncertainty of Velocity, AUROC-EU (↑) | Epistemic Uncertainty of Yaw, AUROC-EU (↑) |
|---|---|---|
| MCD | 49.1900 ± 0.2354 | 54.6307 ± 0.2145 |
| DE | 98.3863 ± 0.0328 | 97.0878 ± 0.0756 |
| DSFP | 99.8110 ± 0.0074 | 99.8643 ± 0.0051 |
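Deterministic single forward pass methods typically read epistemic uncertainty off a distance or density estimate in the network's feature space, so that inputs far from the training distribution score high in a single pass. The following is a minimal sketch of one common variant (class-conditional Gaussians with a shared covariance fitted to training features, in the spirit of deep deterministic uncertainty baselines); the class name, helpers, and regularization constant are illustrative, not the paper's exact formulation.

```python
import numpy as np

class FeatureSpaceEpistemic:
    """Fit class-conditional Gaussians (shared covariance) on training features;
    score epistemic uncertainty as the squared Mahalanobis distance to the
    nearest class mean (low near the training data, high for OOD inputs)."""

    def fit(self, feats, labels):
        # feats: (N, D) penultimate-layer features; labels: integers in 0..K-1.
        classes = np.arange(labels.max() + 1)
        self.means = np.stack([feats[labels == c].mean(0) for c in classes])
        centered = feats - self.means[labels]
        cov = centered.T @ centered / len(feats)
        self.prec = np.linalg.inv(cov + 1e-6 * np.eye(cov.shape[0]))
        return self

    def epistemic(self, feats):
        d = feats[:, None, :] - self.means[None, :, :]        # (N, K, D)
        maha = np.einsum("nkd,de,nke->nk", d, self.prec, d)   # squared Mahalanobis
        return maha.min(axis=1)                               # distance to nearest class
```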
AUROC-EU (↑) for epistemic uncertainty

| Method | Velocity, After-10 | Velocity, After-20 | Velocity, After-30 | Velocity, After-40 | Yaw, After-10 | Yaw, After-20 | Yaw, After-30 | Yaw, After-40 |
|---|---|---|---|---|---|---|---|---|
| MCD | 46.9658 ± 0.4285 | 48.2709 ± 0.4711 | 50.1777 ± 0.5457 | 51.3455 ± 0.4280 | 54.3445 ± 0.3983 | 52.6054 ± 0.4246 | 55.5397 ± 0.4356 | 56.0333 ± 0.4558 |
| DE | 99.0935 ± 0.0276 | 99.2409 ± 0.0185 | 95.5325 ± 0.1262 | 99.6784 ± 0.0122 | 99.8101 ± 0.0132 | 99.8145 ± 0.0107 | 94.4022 ± 0.2082 | 94.2220 ± 0.2188 |
| DSFP | 99.7687 ± 0.0172 | 99.7652 ± 0.0163 | 99.8488 ± 0.0110 | 99.8613 ± 0.0141 | 99.9485 ± 0.0061 | 99.8839 ± 0.0078 | 99.8055 ± 0.0100 | 99.8192 ± 0.0146 |
| Method | Epistemic Uncertainty of Velocity, AUROC-EU (↑) | Epistemic Uncertainty of Yaw, AUROC-EU (↑) |
|---|---|---|
| DE-4 | 75.9640 ± 0.2449 | 93.2563 ± 0.0863 |
| DE-8 | 98.3863 ± 0.0328 | 97.0622 ± 0.0756 |
| Method | minFDE (↓) | Miss Rate (>1.0 m) (↓) | Miss Rate (>1.5 m) (↓) |
|---|---|---|---|
| Unimodal Prediction (Baseline) | 2.138 ± 0.009 m | 0.751 | 0.437 |
| DE-based Unimodal Prediction | 2.030 ± 0.008 m | 0.718 | 0.418 |
| DE-based Multimodal Prediction | 1.281 ± 0.005 m | 0.423 | 0.222 |
| UAMTP (Ours) | 1.112 ± 0.005 m | 0.383 | 0.168 |
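minFDE is the displacement between the ground-truth endpoint and the closest of the K predicted endpoints, and the miss rate is the fraction of samples whose minFDE exceeds a threshold (1.0 m or 1.5 m in the table above). A minimal sketch of these metrics, assuming predictions are arrays of shape (K, T, 2); the function names are illustrative.

```python
import numpy as np

def min_fde(pred_trajs, gt_traj):
    """pred_trajs: (K, T, 2) candidate trajectories; gt_traj: (T, 2) ground truth.
    Returns the final-displacement error of the best mode."""
    fde = np.linalg.norm(pred_trajs[:, -1, :] - gt_traj[-1, :], axis=-1)  # (K,)
    return fde.min()

def miss_rate(all_pred, all_gt, threshold=1.5):
    """Fraction of samples whose minFDE exceeds `threshold` meters."""
    errs = np.array([min_fde(p, g) for p, g in zip(all_pred, all_gt)])
    return float((errs > threshold).mean())
```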
| Method | minFDE (↓) | Miss Rate (>1.0 m) (↓) | Miss Rate (>1.5 m) (↓) |
|---|---|---|---|
| Unimodal (Baseline) | 2.138 ± 0.009 m | 0.751 | 0.437 |
| Longitudinal-only UAMTP (Ablation) | 1.615 ± 0.006 m | 0.648 | 0.337 |
| Lateral-only UAMTP (Ablation) | 1.682 ± 0.008 m | 0.515 | 0.281 |
| UAMTP (Ours) | 1.112 ± 0.005 m | 0.383 | 0.168 |
| Method | minFDE (↓) | Miss Rate (>1.0 m) (↓) | Miss Rate (>1.5 m) (↓) |
|---|---|---|---|
| Unimodal Prediction (Baseline) | 1.085 ± 0.003 m | 0.330 | 0.201 |
| DE-based Unimodal Prediction | 1.006 ± 0.003 m | 0.310 | 0.182 |
| DE-based Multimodal Prediction | 0.637 ± 0.002 m | 0.170 | 0.094 |
| UAMTP (Ours) | 0.618 ± 0.002 m | 0.177 | 0.085 |
| Method | Runtime per Prediction (↓) |
|---|---|
| DE-based Multimodal Prediction | 0.091 ± 0.001 s (1.11×) |
| UAMTP (Ours) | 0.082 ± 0.002 s (1×) |
| Method | Time to React (↑) |
|---|---|
| Unimodal Prediction (Baseline) | 1.49 s (1×) |
| UAMTP (Ours) | 1.91 s (1.28×) |