Proceeding Paper

Dynamic Catadioptric Sensory Data Fusion for Visual Localization in Mobile Robotics †

1 Department of Systems Engineering and Automation, Miguel Hernández University, Av. de la Universidad s/n. Ed. Innova., 03202 Elche (Alicante), Spain
2 Centre for Automation and Robotics (CAR), UPM-CSIC, Technical University of Madrid, C/ José Gutiérrez Abascal, 2, 28006 Madrid, Spain
* Author to whom correspondence should be addressed.
Presented at the 7th International Symposium on Sensor Science, Napoli, Italy, 9–11 May 2019.
Proceedings 2019, 15(1), 2; https://doi.org/10.3390/proceedings2019015002
Published: 5 July 2019
(This article belongs to the Proceedings of 7th International Symposium on Sensor Science)

Abstract

This paper presents a localization technique for mobile robotics based on visual sensory data fusion. A regression inference framework is designed with the aid of informative data models of the system, supported by probabilistic techniques such as Gaussian processes. The visual data acquired with a catadioptric sensor are fused between robot poses to produce a probability distribution of visual information in the 3D global reference frame of the robot. In addition, a prediction technique based on the filter gain is defined to improve the matching of visual information extracted from this probability distribution. The result is an enhanced matching technique for visual information in both the image reference frame and the 3D global reference frame. Results with real data are presented to confirm the validity of the approach in a mobile robotics application for visual localization, together with a comparison against standard visual matching techniques. The suitability and robustness of the contributions are tested in the reported experiments.

1. Introduction

Standard visual localization methods in mobile robotics have been widely acknowledged thanks to the unequivocal matching of categorical and physical information extracted from visual sensors [1] and transferred into images. One well-accepted method is feature point matching [2]. However, realistic applications in mobile robotics raise issues that may jeopardize the final localization estimate [3]. In particular, external noise sources with non-systematic and non-linear effects are very likely to affect the visual sensor input. This work relies on a catadioptric sensor, namely an omnidirectional camera. Its main strength is the capability to encode wide scenes over 360 degrees around the camera axis. Nonetheless, due to the nonlinearities associated with the geometry of its hyperbolic mirror, it is a visual system liable to suffer from such harmful noise effects.
Under this context, we present a dynamic approach for visual localization based on the sensory data provided by the catadioptric sensor, an omnidirectional camera. The core of the system fuses the data extracted from the visual sensor at each motion step of the robot. Information metrics [4] are computed over the images to feed a regression module, which is also supported by Gaussian processes (GP) [5]. The output data are accumulated and fused along the trajectory of the robot. This procedure permits inferring a probability distribution for the visual feature matching. Moreover, exploiting the Kalman filter (KF) gain, we define a set of predicted feature matching candidates over the next pose of the robot at t+1.
Therefore, the main contribution of this work is a visual localization technique that dynamically adapts to the non-linear noise effects generated by the environment. Such adaptive behavior is achieved through the proposed data fusion and inference at each time step. Furthermore, this approach emerges as a promising alternative to general outlier rejection techniques, which tend to increase the computational load of the system and compromise real-time operation. The validity and suitability of the approach are evaluated in an experimental setup.

2. Catadioptric Sensor

The real system used in this work is represented in Figure 1. Figure 1a shows the Pioneer P3-AT robot and the details of the catadioptric sensor, consisting of a CCD (Charge-Coupled Device) camera jointly coupled with a hyperbolic mirror. Figure 1b details the projection model, by which a 3D point, Q, is projected onto the mirror surface at P and then directed towards the focus of the hyperboloid, F. Along its path, it intersects the pixel frame of the camera at the 2D point p(u,v).
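As an illustration, the central catadioptric projection of Figure 1b can be sketched with the unified projection model, in which a mirror parameter ξ relates the 3D point, the mirror focus and the image plane. The intrinsic values below (focal lengths, principal point, ξ) are illustrative placeholders, not the calibration of the actual sensor.

```python
import numpy as np

def project_omni(Q, xi=0.8, f=(420.0, 420.0), c=(512.0, 384.0)):
    """Project a 3D point Q onto the pixel frame p(u, v) with the unified
    catadioptric model: Q is first mapped through the mirror (parameter xi)
    and then perspectively projected with focal f and principal point c.
    All intrinsic values here are illustrative, not a real calibration."""
    X, Y, Z = Q
    rho = np.sqrt(X**2 + Y**2 + Z**2)   # distance from the mirror focus F
    denom = Z + xi * rho                # depth shifted by the mirror parameter
    u = f[0] * X / denom + c[0]
    v = f[1] * Y / denom + c[1]
    return u, v
```

For instance, a point on the optical axis, Q = (0, 0, 1), projects exactly onto the principal point, as expected from the symmetry of the mirror.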

3. Dynamic Matching

The essential basis for the localization estimation is the epipolar constraint, as described in previous works [6]. The image data fusion between poses of the robot is defined as:
XG = T + RXt
where T and R are the predicted translation and rotation that relate the current image data, Xt, processed from the pose of the robot at time t, to the 3D global reference system, into which it is finally fused and denoted as XG. Additionally, the informative inference is stated by the use of the Kullback-Leibler (KL) divergence, whose output feeds a Gaussian process, expressed as a function f(x) with mean m(x) and covariance k(x,x'). This function is applied to the 3D global data accumulated in XG:
f(x) = GP[m(x), k(x,x’)]
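As a minimal sketch of this regression step, the GP posterior below assumes a zero mean function, a squared-exponential kernel and a small observation noise; these choices, and all numeric values, are illustrative assumptions rather than the configuration of the actual system:

```python
import numpy as np

def rbf(A, B, ell=1.0, sf=1.0):
    """Squared-exponential covariance k(x, x'), an assumed kernel choice."""
    d2 = np.sum(A**2, 1)[:, None] + np.sum(B**2, 1)[None, :] - 2 * A @ B.T
    return sf**2 * np.exp(-0.5 * d2 / ell**2)

def gp_predict(X, y, Xs, noise=1e-2):
    """Posterior mean and variance of f(x) ~ GP[m(x), k(x, x')] at test
    points Xs, given training data (X, y); the predictive distribution is
    the quantity from which a matching probability can be derived."""
    K = rbf(X, X) + noise * np.eye(len(X))   # noisy training covariance
    Ks = rbf(X, Xs)                          # train/test cross-covariance
    L = np.linalg.cholesky(K)                # stable inversion via Cholesky
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    mu = Ks.T @ alpha                        # posterior mean
    v = np.linalg.solve(L, Ks)
    var = np.diag(rbf(Xs, Xs)) - np.sum(v**2, axis=0)  # posterior variance
    return mu, var
```

Evaluating the posterior at the accumulated 3D data plays the role of f(XG) in the text: the mean ranks candidate regions and the variance carries the uncertainty that the system propagates.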
Thus f(XG) returns a probability distribution of inferred feature point matching, namely p(x,y,z); that is, the 3D regions where feature matches are most probable according to the history of the system. It is worth noting that dynamic updates of the current uncertainty of the system are also considered and accordingly propagated through f(x). The last step uses the KF gain to predict the modulation of this probability distribution when the robot moves to its next pose at t+1. Figure 2 presents an example of such a probability distribution, p(x,y,z), for probable feature matching by means of this dynamic data fusion approach. Note that the probability data have been projected onto the 2D image frame of the camera, p(u,v), to proceed to the final matching. Finally, Figure 3 presents a performance example of the dynamic matching proposed in this work. A standard matching [2] (blue) is compared with the proposal (green). Red points indicate the probability distribution for feature point existence, p(x,y,z), projected onto the image pixels, p(u,v).
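The image data fusion step defined earlier, XG = T + RXt, can be sketched as follows; the rotation is built here under a planar (yaw-only) motion assumption, and the pose values in the example are illustrative:

```python
import numpy as np

def fuse_to_global(X_t, theta, T):
    """Fuse image data X_t (3xN points in the robot frame at time t) into
    the 3D global reference: X_G = T + R X_t. R is built from the predicted
    robot heading theta, assuming planar (yaw-only) motion."""
    c, s = np.cos(theta), np.sin(theta)
    R = np.array([[c, -s, 0.0],
                  [s,  c, 0.0],
                  [0.0, 0.0, 1.0]])
    return T.reshape(3, 1) + R @ X_t

# Example: a point 1 m ahead of the robot, after a 90-degree turn and a
# translation of (2, 0, 0), lands at (2, 1, 0) in the global frame.
X_t = np.array([[1.0], [0.0], [0.0]])
X_G = fuse_to_global(X_t, np.pi / 2, np.array([2.0, 0.0, 0.0]))
```

Accumulating X_G over successive poses yields the 3D global data over which the inference of the previous paragraphs operates.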

4. Experiments

An experimental setup has been conducted over a publicly available dataset [7] to confirm the validity, suitability and robustness of the presented approach. Figure 4 presents comparison results for the accuracy of the dynamic matching. In addition, Figure 5 provides localization results generated in a large indoor scenario.

Author Contributions

Conceptualization, D.V. and J.M.S.; methodology, D.V., L.P.; software, D.V. and L.M.J.; validation, J.M.S., L.P. and L.M.J.; formal analysis, L.P. and J.M.S.; investigation, D.V., J.M.S.; resources, L.P., J.M.S. and L.M.J.; data curation, L.P., J.M.S. and O.R.; writing–original draft preparation, D.V.; writing–review and editing, D.V., L.P. and J.M.S.; visualization, D.V., L.P. and L.M.J.; supervision, J.M.S. and O.R.; project administration, J.M.S. and O.R.; funding acquisition, J.M.S., L.M.J., L.P. and O.R. All authors have read and agreed to the published version of the manuscript.

Funding

This work has been partially supported by the Spanish Government through the project DPI2016-78361-R (AEI/FEDER, UE), the Valencian Education Council and the European Social Fund through the post-doctoral grant APOSTD/2017/028, held by D. Valiente.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Valiente, D.; Gil, A.; Payá, L.; Sebastián, J.M.; Reinoso, O. Robust visual localization with dynamic uncertainty management in omnidirectional SLAM. Appl. Sci. 2017, 7, 1294. [Google Scholar] [CrossRef]
  2. Bay, H.; Tuytelaars, T.; Van Gool, L. Speeded-Up Robust Features (SURF). Comput. Vis. Image Underst. 2008, 110, 346–359. [Google Scholar] [CrossRef]
  3. Huang, S.; Dissanayake, G. Convergence and Consistency Analysis for Extended Kalman Filter Based SLAM. IEEE Trans. Rob. 2007, 23, 1036–1049. [Google Scholar] [CrossRef]
  4. Kullback, S.; Leibler, R.A. On Information and Sufficiency. Ann. Math. Stat. 1951, 22, 79–86. [Google Scholar] [CrossRef]
  5. Rasmussen, C.; Hager, G.D. Probabilistic data association methods for tracking complex visual objects. IEEE Trans. Pattern Anal. Mach. Intell. 2001, 23, 560–576. [Google Scholar] [CrossRef]
  6. Valiente, D.; Gil, A.; Fernández, L.; Reinoso, Ó. Visual SLAM Based on Single Omnidirectional Views. In Informatics in Control, Automation and Robotics: 9th International Conference, ICINCO 2012, Rome, Italy, 28–31 July 2012 Revised Selected Papers; Springer: Berlin/Heidelberg, Germany, 2014; pp. 131–146. [Google Scholar]
  7. ARVC: Automation, Robotics and Computer Vision Research Group. Miguel Hernandez University. Omnidirectional Image Dataset at Innova Building. Available online: http://arvc.umh.es/db/images/innova_trajectory/ (accessed on 11 January 2019).
Figure 1. Real system: (a) Robot P3-AT with the catadioptric sensor (CCD camera and hyperbolic mirror); (b) Camera projection model (3D-2D).
Figure 2. Probability distribution of feature matching existence, p(x,y,z), projected onto the image pixels, p(u,v).
Figure 3. Example of dynamic matching. Standard matching [2] (blue) is compared with the proposal (green). Red points indicate the probability distribution for feature point existence, p(x,y,z), projected onto the image pixels.
Figure 4. Matching accuracy: (a) % of false matches with distance between images d1 = 0.4 m versus absolute probability pmin. (b) Localization error in terms of angular relation between poses.
Figure 5. Visual localization results: ground truth (black); standard matching (grey); proposal (green).
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Valiente, D.; Payá, L.; Sebastián, J.M.; Jiménez, L.M.; Reinoso, O. Dynamic Catadioptric Sensory Data Fusion for Visual Localization in Mobile Robotics. Proceedings 2019, 15, 2. https://doi.org/10.3390/proceedings2019015002
