Three-View Relative Pose Estimation Under Planar Motion Constraints
Abstract
1. Introduction
- Theoretical innovation: By integrating planar motion constraints with trifocal tensor theory, we derive a linear solver that requires only three point correspondences, removing the dependence of conventional methods on dense feature points. This framework reduces the 6-DoF relative pose problem to a 3-DoF one, significantly lowering computational complexity.
- Computational efficiency: Experiments show that our method computes a single solution in only 0.142 ms, roughly a 6× speedup over traditional three-view pose estimation methods. The advantage comes from the linearized solver design, which avoids iterative optimization and thus supports reliable real-time performance.
- Robustness verification: The method remains stable under challenging conditions, including image noise (≤2 pixels), angular deviation (≤1°), and minor camera vibration. Notably, on the motion sequences of the KITTI dataset, the median rotational error stays below 0.0545° and the median translational error below 2.1319°, validating its applicability in real-world scenarios.
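The 6-DoF-to-3-DoF reduction above can be sketched in a few lines. This is an illustrative parameterization, not the paper's code: under planar motion the camera rotates only about the vertical (y) axis and translates in the ground (x–z) plane, so the relative pose is fully described by three parameters (θ, t_x, t_z). The function names and the identity-intrinsics camera are assumptions for the sketch.

```python
import numpy as np

def planar_pose(theta, tx, tz):
    """Relative pose under planar motion: rotation about the vertical (y) axis
    plus translation in the ground (x-z) plane -- 3 DoF instead of 6."""
    c, s = np.cos(theta), np.sin(theta)
    R = np.array([[  c, 0.0,   s],
                  [0.0, 1.0, 0.0],
                  [ -s, 0.0,   c]])
    t = np.array([tx, 0.0, tz])
    return R, t

def projection_matrix(K, R, t):
    """3x4 projection matrix P = K [R | t] for a calibrated camera."""
    return K @ np.hstack([R, t.reshape(3, 1)])

# Example: 10 degrees of yaw plus in-plane translation, normalized intrinsics.
K = np.eye(3)
R, t = planar_pose(np.deg2rad(10.0), 0.5, 1.0)
P = projection_matrix(K, R, t)
```

Because only three unknowns remain, three point correspondences across the three views suffice to set up a solvable linear system, which is the intuition behind the 3-point solver.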
2. Related Work
2.1. Three-View Pose Estimation
2.2. Planar Motion Pose Estimation
3. Model Building
3.1. The Projection Matrix Under Planar Motion
3.2. Linear 3-Point Method
3.3. The Corrective Methods in Practical Applications
4. Experimental Results and Analysis
4.1. Simulated Data Experiment
4.1.1. Computational Efficiency and Numerical Stability
4.1.2. Accuracy Analysis Under the Influence of Noise
4.1.3. Accuracy Analysis Under the Influence of Vibration
4.2. Real Image Sequences Experiment
4.2.1. Comparison of Pose Estimation Accuracy in Real-World Scenarios
4.2.2. Verification of Computational Efficiency and Feature Robustness in Real-World Scenarios
5. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Acknowledgments
Conflicts of Interest
References
- Chen, C.; Guan, B.; Shang, Y.; Li, Z.; Yu, Q. A Ground Positioning Method with Rigidly Mounted IMU and Camera. Chin. J. Lasers 2023, 50, 118–125.
- Gu, M.; Li, H.; Zhang, J.; Bai, X.; Zheng, J. A Survey on Vision-Based UAV Positioning and Navigation Methods. Acta Electron. Sin. 2024, 53, 651–685.
- Gyagenda, N.; Hatilima, J.V.; Roth, H.; Zhmud, V. A review of GNSS-independent UAV navigation techniques. Robot. Auton. Syst. 2022, 152, 104069.
- Ma, N.; Cao, Y.-F. A survey of vision-based perception and pose estimation methods for autonomous UAV landing. Acta Autom. Sin. 2024, 50, 1284–1304.
- Ding, Y.; Yang, J.; Ponce, J.; Kong, H. Minimal solutions to relative pose estimation from two views sharing a common direction with unknown focal length. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA, 14–19 June 2020; pp. 7045–7053.
- Heyden, A. Reconstruction from image sequences by means of relative depths. Int. J. Comput. Vis. 1997, 24, 155–161.
- Torr, P.H.S.; Zisserman, A. Robust parameterization and computation of the trifocal tensor. Image Vis. Comput. 1997, 15, 591–605.
- Indelman, V.; Gurfil, P.; Rivlin, E.; Rotstein, H. Real-time vision-aided localization and navigation based on three-view geometry. IEEE Trans. Aerosp. Electron. Syst. 2012, 48, 2239–2259.
- Ponce, J.; Hebert, M. Trinocular geometry revisited. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Columbus, OH, USA, 23–28 June 2014; pp. 17–24.
- Li, T.; Yu, Z.; Guan, B.; Han, J.; Lv, W.; Fraundorfer, F. Trifocal Tensor and Relative Pose Estimation With Known Vertical Direction. IEEE Robot. Autom. Lett. 2024, 10, 1305–1312.
- Ortin, D.; Montiel, J.M.M. Indoor robot motion based on monocular images. Robotica 2001, 19, 331–342.
- Chou, C.C.; Wang, C.-C. 2-point RANSAC for scene image matching under large viewpoint changes. In Proceedings of the 2015 IEEE International Conference on Robotics and Automation (ICRA), Seattle, WA, USA, 26–30 May 2015; pp. 3646–3651.
- Choi, S.; Kim, J.-H. Fast and reliable minimal relative pose estimation under planar motion. Image Vis. Comput. 2018, 69, 103–112.
- Guan, B.-L.; Zhao, J.; Shang, Y.; Yu, Q.-F. Minimal solutions for relative pose estimation under planar motion constraints. Sci. Sin. Technol. 2024, 54, 2122–2130.
- Hartley, R.; Zisserman, A. Multiple View Geometry in Computer Vision; Cambridge University Press: Cambridge, UK, 2003.
- Lu, L.; Tsui, H.T.; Hu, Z.Y. A novel method for camera planar motion detection and robust estimation of the 1D trifocal tensor. In Proceedings of the 15th International Conference on Pattern Recognition (ICPR 2000), Barcelona, Spain, 3–8 September 2000; Volume 3, pp. 807–810.
- Chen, J.; Jia, B.; Zhang, K. Trifocal tensor-based adaptive visual trajectory tracking control of mobile robots. IEEE Trans. Cybern. 2016, 47, 3784–3798.
- Guan, B.; Vasseur, P.; Demonceaux, C. Trifocal tensor and relative pose estimation from 8 lines and known vertical direction. In Proceedings of the 2022 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Kyoto, Japan, 23–27 October 2022; pp. 6001–6008.
- Julià, L.F.; Monasse, P. A critical review of the trifocal tensor estimation. In Proceedings of the Pacific-Rim Symposium on Image and Video Technology, Wuhan, China, 20–24 November 2017; pp. 337–349.
- Nistér, D. An efficient solution to the five-point relative pose problem. IEEE Trans. Pattern Anal. Mach. Intell. 2004, 26, 756–770.
- Guan, B.; Zhao, J.; Barath, D.; Fraundorfer, F. Minimal solvers for relative pose estimation of multi-camera systems using affine correspondences. Int. J. Comput. Vis. 2023, 131, 324–345.
- Geiger, A.; Lenz, P.; Stiller, C.; Urtasun, R. Vision meets robotics: The KITTI dataset. Int. J. Robot. Res. 2013, 32, 1231–1237.
- Lowe, D.G. Distinctive Image Features from Scale-Invariant Keypoints. Int. J. Comput. Vis. 2004, 60, 91–110.
- Triggs, B.; McLauchlan, P.F.; Hartley, R.I.; Fitzgibbon, A.W. Bundle adjustment—A modern synthesis. In International Workshop on Vision Algorithms; Springer: Berlin, Germany, 1999; pp. 298–372.
- Scholkmann, F.; Boss, J.; Wolf, M. An efficient algorithm for automatic peak detection in noisy periodic and quasi-periodic signals. Algorithms 2012, 5, 588–603.
Method | Our-3pt | Hartley-7pt | Choi-2pt | Nister-5pt | Hartley-8pt |
---|---|---|---|---|---|
Time (ms) | 0.142 | 0.860 | 0.097 | 0.424 | 0.183 |
Median rotational error (°) per KITTI sequence:

Seq. | Our-3pt | Hartley-7pt | Choi-2pt | Nister-5pt | Hartley-8pt |
---|---|---|---|---|---|
00 | 0.0509 | 0.1190 | 0.0380 | 0.0866 | 0.1591 |
01 | 0.0537 | 0.4269 | 0.0429 | 0.0755 | 0.4803 |
02 | 0.0540 | 0.1331 | 0.0383 | 0.0887 | 0.1640 |
03 | 0.0545 | 0.1230 | 0.0478 | 0.0741 | 0.1674 |
04 | 0.0285 | 0.1469 | 0.0330 | 0.0762 | 0.1736 |
05 | 0.0347 | 0.1114 | 0.0327 | 0.0676 | 0.1414 |
06 | 0.0330 | 0.1154 | 0.0351 | 0.0672 | 0.1540 |
07 | 0.0362 | 0.1075 | 0.0340 | 0.0734 | 0.1390 |
08 | 0.0413 | 0.1204 | 0.0333 | 0.0798 | 0.1515 |
09 | 0.0406 | 0.1354 | 0.0367 | 0.0807 | 0.1704 |
10 | 0.0455 | 0.1226 | 0.0349 | 0.0794 | 0.1498 |
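The rotational errors tabulated above are angles in degrees. As a sketch of how such a metric is commonly computed (an assumption for illustration; the paper's exact evaluation code is not shown here), the error between an estimated and a ground-truth rotation is the angle of the residual rotation R_est^T R_gt:

```python
import numpy as np

def rotation_error_deg(R_est, R_gt):
    """Angle (degrees) of the residual rotation R_est^T @ R_gt.
    For rotation matrices, trace(R) = 1 + 2*cos(angle)."""
    cos_angle = (np.trace(R_est.T @ R_gt) - 1.0) / 2.0
    # Clip to guard against floating-point values slightly outside [-1, 1].
    return np.degrees(np.arccos(np.clip(cos_angle, -1.0, 1.0)))
```

A per-sequence median of these per-frame errors yields entries like those in the table.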
Median translational error (°) per KITTI sequence:

Seq. | Our-3pt | Hartley-7pt | Choi-2pt | Nister-5pt | Hartley-8pt |
---|---|---|---|---|---|
00 | 1.5463 | 1.2917 | 1.8002 | 1.2103 | 1.4457 |
01 | 2.1319 | 2.8976 | 2.1547 | 1.8397 | 2.3018 |
02 | 1.3496 | 1.2865 | 1.6883 | 1.1022 | 1.3273 |
03 | 1.6843 | 2.0605 | 1.5764 | 1.2532 | 1.7406 |
04 | 0.7774 | 1.0534 | 1.4140 | 1.0244 | 1.2120 |
05 | 1.1093 | 1.5729 | 1.1420 | 0.9484 | 1.2034 |
06 | 0.8023 | 0.8500 | 1.4336 | 0.8147 | 1.0001 |
07 | 1.3239 | 1.4630 | 1.8569 | 1.3784 | 1.6365 |
08 | 1.6226 | 1.7043 | 1.4997 | 1.3753 | 1.6101 |
09 | 1.0632 | 1.6247 | 1.1977 | 1.0043 | 1.3199 |
10 | 1.5088 | 1.6111 | 1.3993 | 1.1301 | 1.4817 |
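Translational errors are likewise given in degrees: since monocular relative pose recovers translation only up to scale, accuracy is measured as the angle between the estimated and ground-truth translation directions. A minimal sketch of this metric (an assumption for illustration, not the paper's evaluation code):

```python
import numpy as np

def translation_error_deg(t_est, t_gt):
    """Angle (degrees) between estimated and ground-truth translation
    directions. Translation is recovered only up to scale and sign, so
    unit vectors are compared and the sign ambiguity is folded with abs()."""
    a = t_est / np.linalg.norm(t_est)
    b = t_gt / np.linalg.norm(t_gt)
    return np.degrees(np.arccos(np.clip(abs(a @ b), -1.0, 1.0)))
```

Note that whether the sign ambiguity should be folded depends on the evaluation protocol; with cheirality checks the sign can be resolved and the plain angle used instead.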
Method | Our-3pt | Hartley-7pt | Choi-2pt | Nister-5pt | Hartley-8pt |
---|---|---|---|---|---|
Time (ms) | 1.26 | 20.6 | 1.66 | 2.70 | 6.99 |
© 2025 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Dai, Z.; Lv, W.; Liu, L. Three-View Relative Pose Estimation Under Planar Motion Constraints. Vision 2025, 9, 72. https://doi.org/10.3390/vision9030072