Uncertainty Estimation of Dense Optical Flow for Robust Visual Navigation
Abstract
1. Introduction
- Estimation of uncertainty from dense optical flow. The epipolar constraint is first incorporated into the matching cost to improve matching accuracy; the uncertainty is then recovered by fitting a bivariate Gaussian to the matching cost.
- Development of a new Mahalanobis eight-point algorithm for computing visual odometry. The estimated uncertainty enables efficient RANSAC (random sample consensus) sampling and accurate pose estimation using weights based on the Mahalanobis distance.
- Demonstration of the proposed methods for ground and aerial navigation.
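The uncertainty-recovery idea in the first contribution can be sketched numerically: treat a local patch of the matching-cost surface around the winning displacement as an unnormalised likelihood, and take the weighted mean and covariance of the displacement coordinates as the fitted bivariate Gaussian. The sketch below is illustrative only — the softmax conversion, the temperature `beta`, and the function names are assumptions, not the authors' implementation.

```python
import numpy as np

def flow_covariance_from_cost(cost_patch, beta=1.0):
    """Fit a bivariate Gaussian to a local matching-cost patch.

    cost_patch: (H, W) array of matching costs around the winning
    displacement (lower cost = better match). Costs are turned into a
    probability surface with a softmax; the weighted mean and covariance
    of the displacement coordinates are returned. beta is an assumed
    temperature controlling how peaked the surface is.
    """
    h, w = cost_patch.shape
    # Convert costs to unnormalised likelihoods; subtract the minimum
    # before exponentiating for numerical stability.
    p = np.exp(-beta * (cost_patch - cost_patch.min()))
    p /= p.sum()
    # Displacement coordinates (dx, dy) relative to the patch centre.
    ys, xs = np.mgrid[0:h, 0:w]
    coords = np.stack([xs - w // 2, ys - h // 2], axis=-1)
    coords = coords.reshape(-1, 2).astype(float)
    pw = p.reshape(-1)
    mean = pw @ coords                      # probability-weighted mean flow offset
    d = coords - mean
    cov = (pw[:, None] * d).T @ d           # 2x2 flow covariance
    return mean, cov

def mahalanobis_sq(residual, cov):
    """Squared Mahalanobis distance of a flow residual under the covariance."""
    return float(residual @ np.linalg.solve(cov, residual))
```

A residual along a well-constrained direction (small variance) then receives a larger Mahalanobis distance, and hence a smaller weight, than the same-length residual along a poorly constrained direction — which is what makes the covariance useful for weighting correspondences in the eight-point estimation.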
2. Related Work
3. Dense Flow with Uncertainty
3.1. Dense Flow with Epipolar Constraint
3.2. Uncertainty Estimation
4. New Mahalanobis 8-Point Algorithm
5. Visual Processing Pipeline
6. Experimental Results
6.1. Ground Vehicle
6.2. Aerial Vehicle
7. Conclusions
Author Contributions
Funding
Conflicts of Interest
References
- Morgenthal, G.; Hallermann, N. Quality assessment of unmanned aerial vehicle (UAV) based visual inspection of structures. Adv. Struct. Eng. 2014, 17, 289–302.
- Nikolic, J.; Burri, M.; Rehder, J.; Leutenegger, S.; Huerzeler, C.; Siegwart, R. A UAV system for inspection of industrial facilities. In Proceedings of the IEEE Aerospace Conference, Big Sky, MT, USA, 2–9 March 2013; pp. 1–8.
- Herwitz, S.; Johnson, L.; Arvesen, J.; Higgins, R.; Leung, J.; Dunagan, S. Precision agriculture as a commercial application for solar-powered unmanned aerial vehicles. In Proceedings of the 1st UAV Conference, Portsmouth, VA, USA, 20–23 May 2002; p. 3404.
- Gago, J.; Douthe, C.; Coopman, R.; Gallego, P.; Ribas-Carbo, M.; Flexas, J.; Escalona, J.; Medrano, H. UAVs challenge to assess water stress for sustainable agriculture. Agric. Water Manag. 2015, 153, 9–19.
- Bernard, M.; Kondak, K.; Maza, I.; Ollero, A. Autonomous transportation and deployment with aerial robots for search and rescue missions. J. Field Robot. 2011, 28, 914–931.
- Waharte, S.; Trigoni, N. Supporting search and rescue operations with UAVs. In Proceedings of the 2010 International Conference on Emerging Security Technologies (EST), Canterbury, UK, 6–7 September 2010; pp. 142–147.
- Kümmerle, R.; Grisetti, G.; Strasdat, H.; Konolige, K.; Burgard, W. g2o: A general framework for graph optimization. In Proceedings of the 2011 IEEE International Conference on Robotics and Automation (ICRA), Shanghai, China, 9–13 May 2011; pp. 3607–3613.
- Stachniss, C.; Kretzschmar, H. Pose graph compression for laser-based SLAM. In 18th International Symposium on Robotics Research; Springer: Puerto Varas, Chile, 2017; pp. 271–287.
- Klein, G.; Murray, D. Parallel tracking and mapping for small AR workspaces. In Proceedings of the 2007 6th IEEE and ACM International Symposium on Mixed and Augmented Reality (ISMAR), Nara, Japan, 13–16 November 2007; pp. 225–234.
- Song, S.; Chandraker, M.; Guest, C.C. High Accuracy Monocular SFM and Scale Correction for Autonomous Driving. IEEE Trans. Pattern Anal. Mach. Intell. 2016, 38, 730–743.
- Fanani, N.; Stürck, A.; Ochs, M.; Bradler, H.; Mester, R. Predictive monocular odometry (PMO): What is possible without RANSAC and multiframe bundle adjustment? Image Vis. Comput. 2017, 68, 3–13.
- Bradler, H.; Anne Wiegand, B.; Mester, R. The Statistics of Driving Sequences—And What We Can Learn from Them. In Proceedings of the IEEE International Conference on Computer Vision (ICCV) Workshops, Santiago, Chile, 7–13 December 2015.
- Xu, J.; Ranftl, R.; Koltun, V. Accurate Optical Flow via Direct Cost Volume Processing. In Proceedings of the 2017 IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2017.
- Hui, T.W.; Tang, X.; Loy, C.C. LiteFlowNet: A Lightweight Convolutional Neural Network for Optical Flow Estimation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–23 June 2018; pp. 8981–8989.
- Sun, D.; Yang, X.; Liu, M.Y.; Kautz, J. PWC-Net: CNNs for Optical Flow Using Pyramid, Warping, and Cost Volume. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–23 June 2018.
- Hartley, R.; Zisserman, A. Multiple View Geometry in Computer Vision; Cambridge University Press: Cambridge, UK, 2003.
- Cheng, J.; Kim, J.; Shao, J.; Zhang, W. Robust linear pose graph-based SLAM. Robot. Auton. Syst. 2015, 72, 71–82.
- Mur-Artal, R.; Tardós, J.D. ORB-SLAM2: An Open-Source SLAM System for Monocular, Stereo and RGB-D Cameras. arXiv 2016, arXiv:1610.06475.
- Engel, J.; Stückler, J.; Cremers, D. Large-scale direct SLAM with stereo cameras. In Proceedings of the 2015 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Hamburg, Germany, 28 September–3 October 2015; pp. 1935–1942.
- Cheviron, T.; Hamel, T.; Mahony, R.; Baldwin, G. Robust nonlinear fusion of inertial and visual data for position, velocity and attitude estimation of UAV. In Proceedings of the IEEE International Conference on Robotics and Automation, Roma, Italy, 10–14 April 2007; pp. 2010–2016.
- Artieda, J.; Sebastian, J.M.; Campoy, P.; Correa, J.F.; Mondragón, I.F.; Martínez, C.; Olivares, M. Visual 3-D SLAM from UAVs. J. Intell. Robot. Syst. 2009, 55, 299.
- Geiger, A.; Ziegler, J.; Stiller, C. StereoScan: Dense 3D Reconstruction in Real-time. In Proceedings of the Intelligent Vehicles Symposium (IV), Baden-Baden, Germany, 2011.
- Chen, Q.; Koltun, V. Full flow: Optical flow estimation by global optimization over regular grids. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016; pp. 4706–4714.
- Revaud, J.; Weinzaepfel, P.; Harchaoui, Z.; Schmid, C. EpicFlow: Edge-preserving interpolation of correspondences for optical flow. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Boston, MA, USA, 7–12 June 2015; pp. 1164–1172.
- Hartley, R.I. In defense of the eight-point algorithm. IEEE Trans. Pattern Anal. Mach. Intell. 1997, 19, 580–593.
- Torr, P.; Zisserman, A. Robust computation and parametrization of multiple view relations. In Proceedings of the Sixth International Conference on Computer Vision, Bombay, India, January 1998; pp. 727–732.
- Zhang, Z. Determining the Epipolar Geometry and its Uncertainty: A Review. Int. J. Comput. Vis. 1998, 27, 161–195.
- Armangué, X.; Salvi, J. Overall view regarding fundamental matrix estimation. Image Vis. Comput. 2003, 21, 205–220.
- Shi, J.; Tomasi, C. Good features to track. In Proceedings of the 1994 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR'94), 21–23 June 1994; pp. 593–600.
- Mahalanobis, P.C. On the Generalised Distance in Statistics; National Institute of Sciences of India: Kolkata, India, 1936; pp. 49–55.
- Douc, R.; Cappe, O. Comparison of resampling schemes for particle filtering. In Proceedings of the 4th International Symposium on Image and Signal Processing and Analysis, Zagreb, Croatia, 15–17 September 2005; pp. 64–69.
- Gao, X.S.; Hou, X.R.; Tang, J.; Cheng, H.F. Complete solution classification for the perspective-three-point problem. IEEE Trans. Pattern Anal. Mach. Intell. 2003, 25, 930–943.
- Wang, Z.; Bovik, A.C.; Sheikh, H.R.; Simoncelli, E.P. Image quality assessment: From error visibility to structural similarity. IEEE Trans. Image Process. 2004, 13, 600–612.
- Geiger, A.; Lenz, P.; Urtasun, R. Are we ready for Autonomous Driving? The KITTI Vision Benchmark Suite. In Proceedings of the Conference on Computer Vision and Pattern Recognition (CVPR), Providence, RI, USA, 16–21 June 2012.
- Menze, M.; Geiger, A. Object Scene Flow for Autonomous Vehicles. In Proceedings of the Conference on Computer Vision and Pattern Recognition (CVPR), Boston, MA, USA, 7–12 June 2015.
- Ng, Y.; Kim, J.; Li, H. Robust Dense Optical Flow with Uncertainty for Monocular Pose-Graph SLAM. In Proceedings of the Australasian Conference on Robotics and Automation, Sydney, Australia, 11–13 December 2017.
- Adarve, J.D.; Mahony, R. A filter formulation for computing real time optical flow. IEEE Robot. Autom. Lett. 2016, 1, 1192–1199.
| Seq | DOF-2DU rot (deg/m) | DOF-2DU trans (%) | DOF-2DU + PnP rot (deg/m) | DOF-2DU + PnP trans (%) | DOF-2DU + PnP + LC rot (deg/m) | DOF-2DU + PnP + LC trans (%) |
|---|---|---|---|---|---|---|
| 00 | 0.0076 | 1.80 | 0.0067 | 1.57 | 0.0045 | 1.07 |
| 01 | 0.0082 | 0.97 | 0.0050 | 1.03 | 0.0050 | 1.03 |
| 06 | 0.0047 | 0.96 | 0.0039 | 1.11 | 0.0039 | 1.17 |
| Seq | VISO2-M rot (deg/m) | VISO2-M trans (%) | MLM-SFM rot (deg/m) | MLM-SFM trans (%) | PMO rot (deg/m) | PMO trans (%) | DOF-1DU + LC rot (deg/m) | DOF-1DU + LC trans (%) | DOF-2DU + PnP + LC rot (deg/m) | DOF-2DU + PnP + LC trans (%) |
|---|---|---|---|---|---|---|---|---|---|---|
| 00 | 0.0209 | 11.91 | 0.0048 | 2.04 | 0.0042 | 1.09 | 0.0117 | 2.03 | 0.0045 | 1.07 |
| 01 | n/a | n/a | n/a | n/a | 0.0038 | 1.32 | 0.0107 | 1.149 | 0.0050 | 1.03 |
| 06 | 0.0157 | 4.74 | 0.0081 | 2.09 | 0.0044 | 1.31 | 0.0054 | 1.05 | 0.0039 | 1.17 |
| Method | Distance of Farthest Point to Origin (m) |
|---|---|
| GPS | |
| Ours | |
| VISO2-M | |
© 2021 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Share and Cite
Ng, Y.; Li, H.; Kim, J. Uncertainty Estimation of Dense Optical Flow for Robust Visual Navigation. Sensors 2021, 21, 7603. https://doi.org/10.3390/s21227603