Review

Sensors and Sensor Fusion Methodologies for Indoor Odometry: A Review

1 Department of Mechanical, Materials and Manufacturing Engineering, The Faculty of Science and Engineering, University of Nottingham Ningbo China, Ningbo 315100, China
2 Ningbo Institute of Materials Technology and Engineering, Chinese Academy of Sciences, Ningbo 315201, China
3 Zhejiang Key Laboratory of Robotics and Intelligent Manufacturing Equipment Technology, Ningbo 315201, China
4 Nottingham Ningbo China Beacons of Excellence Research and Innovation Institute, University of Nottingham Ningbo China, Ningbo 315100, China
5 Department of Mechanical, Materials and Manufacturing Engineering, University of Nottingham, Nottingham NG7 2RD, UK
6 Ningbo Research Institute, Zhejiang University, Ningbo 315100, China
* Authors to whom correspondence should be addressed.
Polymers 2022, 14(10), 2019; https://doi.org/10.3390/polym14102019
Submission received: 30 March 2022 / Revised: 5 May 2022 / Accepted: 11 May 2022 / Published: 15 May 2022
(This article belongs to the Special Issue Advanced Polymer-Based Sensor)

Abstract
Although Global Navigation Satellite Systems (GNSSs) generally provide adequate accuracy for outdoor localization, this is not the case for indoor environments, due to signal obstruction. Therefore, a self-contained localization scheme is beneficial under such circumstances. Modern sensors and algorithms endow moving robots with the capability to perceive their environment, and enable the deployment of novel localization schemes, such as odometry, or Simultaneous Localization and Mapping (SLAM). The former focuses on incremental localization, while the latter stores an interpretable map of the environment concurrently. In this context, this paper conducts a comprehensive review of sensor modalities, including Inertial Measurement Units (IMUs), Light Detection and Ranging (LiDAR), radio detection and ranging (radar), and cameras, as well as applications of polymers in these sensors, for indoor odometry. Furthermore, analysis and discussion of the algorithms and the fusion frameworks for pose estimation and odometry with these sensors are performed. In this way, the paper traces the pathway of indoor odometry from principle to application. Finally, some future prospects are discussed.

Graphical Abstract

1. Introduction

Knowing the position of a robot is a requisite condition for conducting tasks such as autonomous navigation, obstacle avoidance, and mobile manipulation. This technology has notable economic value, and the global market of indoor Positioning, Localization, and Navigation (PLAN) is expected to reach USD 28.2 billion by 2024 [1]. The social benefit of indoor PLAN is also profound, as it may serve as a wayfinder for humans in metro stations, markets, and airports. In particular, vulnerable people—for instance, the elderly and the visually impaired—may also benefit from this technology. Although Global Navigation Satellite Systems (GNSSs) are already a mature solution for precise outdoor localization, they may quickly degrade due to satellite coverage fluctuation, multipath reflection, and variation in atmospheric conditions; such degradation is profound in indoor environments. To alleviate this effect, several approaches have been proposed, including magnetic guidance, laser guidance, Wi-Fi, Ultra-Wide Band (UWB), 5G, etc. However, these methods require the pre-installation of infrastructure such as beacons, and changing the arrangement of such a system is a tedious task. Thus, a self-contained localization system is more favorable for agents operating in such indoor environments.
During the past two decades, the self-contained odometry methodologies and Simultaneous Localization and Mapping (SLAM) technology have developed rapidly and enabled bypassing of the aforementioned problems. Odometry entails deducing the trajectory of the moving agent based on readings from observations and, with the initial position and the path of travel recorded, estimating its current position [2]. Odometry can be regarded as a thread of SLAM, where the aim is to track the position of the robot and to maintain a local map, while SLAM pursues a globally consistent trajectory and map [3]. Odometry usually serves as the front end of a SLAM system. In this review, onboard sensor systems as well as the algorithms used for mobile robots’ indoor odometry are our focus.
Materials with novel properties promote the development of odometry technologies. Magnetic materials have been utilized to build compasses for centuries, since the Earth’s geomagnetic field is able to provide an accurate and reliable reference for orientation. In the modern era, inertial navigation systems built using piezoelectric materials and electro-optic materials have emerged. In addition, photoelectric materials and electro-optic materials are applied in state-of-the-art navigation sensors, including LiDAR and cameras.
Apart from the materials mentioned above, recent advancements in materials and manufacturing technologies enable polymer-based sensors to be deployed. The first use of polymeric materials in navigation sensors can be traced back to the early 2000s [4]. Polymeric materials introduce flexibility to the sensors in terms of both mechanical structure and functionality, and reduce the cost of mass production. Such materials have been implemented in odometry sensors such as IMUs and LiDAR. Soft polymeric sensors are also ideal candidates to be embedded in soft robotics [5]. In this review, the applications of polymeric sensors for odometry are also included.
This review paper surveys the literature published in English; the search was mainly conducted in IEEE Xplore, ScienceDirect, ACM Digital Library, and arXiv. Taking IEEE Xplore as an example, the Boolean operators “AND” and “OR” were used for combining keywords as well as their synonyms. For polymeric sensors, (“polymeric” OR “polymer”) AND (“accelerometer” OR “gyroscope” OR “LiDAR” OR “Radar”) was used for searching for articles; the initial search yielded 407 papers; the papers that explicitly conceptualized or fabricated a sensor in which polymer materials played a key role—such as an actuator—were retained, while the papers that did not have a major novelty, or concerned the topic of radar-absorbing materials, were screened out. For odometry, (“inertial” OR “IMU” OR “LiDAR” OR “Laser” OR “Radar” OR “visual” OR “vision”) AND (“odometry” OR “localization” OR “SLAM”) was used for searching for publications between 2015 and 2022 within journals or highly renowned conference proceedings, yielding 7821 papers. Due to the large number of results, only papers with more than 50 citations, or papers published after 2020 with more than 10 citations, were retained at this stage. Among the retained papers, those using multiple sensors, demonstrating no major novelty in terms of algorithms, focusing mainly on mapping, focusing on outdoor environments, or primarily targeting mobile phones or pedestrian gait analysis were excluded. For sensor fusion, (“sensor fusion” OR “filter” OR “optimization”) AND (“odometry” OR “localization”) was used for searching for publications between 2010 and 2022 within renowned journals or conference proceedings, yielding 388 papers; those using similar sensor suites and similar fusion frameworks, using sensors inapplicable to indoor mobile robots, or focusing on outdoor environments were eliminated. After the preliminary screening, a snowballing search was conducted with the references from or to the selected papers, to check for additional papers. As a result, a total of 252 articles were selected for review.
Several reviews have been published on odometry and SLAM. Table 1 summarizes and remarks on some of the representative works from the past five years:
To the best of our knowledge, existing reviews rarely cover sensing mechanisms, polymer-based sensors, odometry algorithms, and sensor fusion frameworks systematically in one paper. A summary table and a highlight of easy-to-use solutions are appended to the end of each section. This review paper extends previous works from three perspectives: firstly, the operating principles and advancements of sensors for odometry, including polymer-based sensors; secondly, a briefing on odometry algorithms and the taxonomy of methods based on their working principles; thirdly, a briefing and taxonomy of sensor fusion techniques for odometry. The paper is organized as follows: Section 2 reviews the operating principles of sensors for odometry, including IMU, LiDAR, radar, and camera, as well as the corresponding odometry algorithms. A summative table of representative methods is provided for each sensor. The utilization of polymeric sensors is also presented accordingly. Section 3 reviews the sensor fusion methods and their implementation in odometry. Lastly, Section 4 and Section 5 present future prospects and conclusions, respectively.

2. Sensors and Sensor-Based Odometry Methods

Sensors are equipped in robotic systems to mimic human sensory systems (e.g., vision, equilibrium, kinesthesia), which provide signals for perception, utilization, and decision. Onboard sensors for mobile robots can be categorized into proprioceptive sensors and exteroceptive sensors, which are sensors for monitoring internal states and the external environment, respectively. Examples of proprioceptive sensors include wheel odometers and IMUs. Examples of exteroceptive sensors include LiDAR, radar, and cameras [14].
Sensitivity and selectivity are two prominent properties of sensors. Sensitivity refers to the lowest signal power level from which the sensor can still decode information correctly, while selectivity refers to the ability to detect and decode the desired signal in the presence of interfering signals. Advances in materials for device fabrication enable sensors with better sensitivity and selectivity to be manufactured. For example, the InGaAs/InP Single-Photon Avalanche Diode (SPAD) can detect the injection of a single photoexcited carrier. SiGe-based radar technology allows high-frequency operation, thus enabling better RF performance, and even high-resolution imaging is achievable with such sensors [15].

2.1. Wheel Odometer

The word “odometry” is derived from the odometer, a device mounted to a wheel to measure the travel distance; it may also be referred to as a wheel encoder. There are three types of wheel encoder: the pulsed encoder, the single-turn absolute encoder, and the multi-turn absolute encoder. The pulsed encoder is the simplest implementation; it either consists of a Hall sensor paired with a magnet, or uses an equally spaced grating to pass or reflect light. It is widely installed in drive motors for closed-loop control of the motor speed; for example, in the Wheeltec robot, a 500-pulse Hall encoder is used for motor speed feedback. The single-turn absolute encoder has two implementations: the first uses multiple Hall sensors to determine the absolute rotation angle, and has been used in the Stanford Doggo, which employs an AS5047P as the encoder; the second utilizes an unequally spaced grating, such as the Gray code system, in conjunction with a light source and detection system, to determine the absolute rotation angle. The multi-turn absolute encoder usually consists of several single-turn encoders with a gear set, where the angle and the number of revolutions are recorded separately.
Several robots, such as TurtleBot and Segway RMP, use encoders for odometry. The OPS-9 uses two orthogonal wheel encoders, providing planar position information with good accuracy. Combining learning-based methods with wheel odometers can yield reasonably good odometry results. The authors of [16] tested the combination of Gaussian processes with neural networks and random variable inference on a publicly available encoder dataset. A CNN structure was employed for the neural network part, demonstrating better trajectories than the physics-based filters.
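To make the encoder-based dead reckoning concrete, the following Python sketch propagates a planar pose from incremental encoder ticks under a standard differential-drive model. The tick resolution, wheel radius, and track width are illustrative assumptions, not values taken from any robot mentioned above.

```python
import math

def update_pose(x, y, theta, d_ticks_l, d_ticks_r,
                ticks_per_rev=500, wheel_radius=0.05, track_width=0.30):
    """Dead-reckon a differential-drive pose from incremental encoder ticks.

    All geometric parameters are illustrative placeholders.
    """
    # Convert tick increments to wheel travel distances (m).
    d_l = 2 * math.pi * wheel_radius * d_ticks_l / ticks_per_rev
    d_r = 2 * math.pi * wheel_radius * d_ticks_r / ticks_per_rev
    d_c = 0.5 * (d_l + d_r)              # travel of the robot center
    d_theta = (d_r - d_l) / track_width  # heading change (rad)
    # Integrate with the midpoint heading for better accuracy.
    x += d_c * math.cos(theta + 0.5 * d_theta)
    y += d_c * math.sin(theta + 0.5 * d_theta)
    theta += d_theta
    return x, y, theta
```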

2.2. Inertial Measurement Units (IMUs)

The Inertial Measurement Unit (IMU) is a device that is used to estimate the position, velocity, and acceleration of a robot. It is generally composed of a gyroscope, an accelerometer, and a magnetometer (optional). Since these devices output the acceleration and the angular rate, other state variables—such as the velocity and the position of the robot—are obtained by integration of the measured data; thus, any drift or bias in the measurement of acceleration and angular rate will cause accumulation of errors in the estimation of velocity and position. The IMU systems discussed below are of the strapdown type rather than the stable platform (gimbal) type, due to their mechanical simplicity and compactness [17].
The operating principle of an IMU is shown in Figure 1. The gyroscope measures the three-axis angular rate, and estimates the relative orientation of the robot to the global frame; the accelerometer measures the acceleration, and then projects it to the global frame with the gravity vector subtracted, and the velocity and position are obtained via integration and double integration, respectively [18,19]. The IMU measurements in the local (inertial) frame are given as follows:
$$\omega_m = \omega + b_\omega + \eta_\omega$$
$$a_m = a + b_a + \eta_a,$$
where ω and a are the true angular rate and acceleration, respectively, b ω and b a are biases, and η ω and η a are zero-mean white Gaussian noises. In the fixed global frame, the motion equations of the IMU are as follows:
$$\dot{R} = R\,[\omega_m - b_\omega - \eta_\omega]_\times$$
$$\dot{v} = R\,(a_m - b_a - \eta_a) + g$$
$$\dot{p} = v,$$
where $R$ encodes the rotation from the local frame to the global frame, $[\,\cdot\,]_\times$ stands for the skew-symmetric matrix operator, and $g = [0 \;\; 0 \;\; {-9.81}]^T$ is the gravity vector.
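As an illustration of how this continuous-time model is used in practice, the sketch below performs one discrete strapdown integration step (Euler integration of rotation, velocity, and position) from bias-corrected gyroscope and accelerometer readings. It is a minimal example under simplifying assumptions, not the implementation of any particular system cited here.

```python
import numpy as np
from scipy.spatial.transform import Rotation

def propagate(R, v, p, omega_m, a_m, b_omega, b_a, dt,
              g=np.array([0.0, 0.0, -9.81])):
    """One Euler step of strapdown integration following the model above.

    R is a 3x3 rotation (local -> global), v and p are 3-vectors; the noise
    terms are dropped, and only the bias estimates are removed.
    """
    omega = omega_m - b_omega                 # bias-corrected angular rate
    a = a_m - b_a                             # bias-corrected acceleration
    R_next = R @ Rotation.from_rotvec(omega * dt).as_matrix()
    v_next = v + (R @ a + g) * dt             # gravity added in global frame
    p_next = p + v * dt
    return R_next, v_next, p_next
```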
Different types of gyroscopes and accelerometers have been constructed based on different working principles. For accelerometers, there are three main types: piezoelectric accelerometers, capacitive Microelectromechanical System (MEMS) accelerometers, and piezoresistive MEMS accelerometers. The piezoelectric accelerometer works based on the piezoelectric effect—when a mechanical stress is applied to the crystal, it will generate a voltage. A piezoelectric accelerometer consists of one or more piezoelectric crystals and a proof mass (or seismic mass); the proof mass transduces the inertial force to the crystal when being accelerated, and the acceleration can be measured in the form of voltage. Common piezoelectric materials include PZT, LiNbO3, AlN, and ZnO. AlN with a wide bandgap (6.2 eV) has been regarded as the preferred material due to its high breakdown field, high working temperature, and low dielectric loss. Doping AlN with elemental Sc can significantly increase the piezoelectric coefficients. The capacitive MEMS accelerometer and the piezoresistive MEMS accelerometer are based on the mass-spring-damper system model, which utilizes capacitive change or resistive change to sense the deflection of the moving proof mass under acceleration. Compared to piezoelectric materials, piezoresistive materials have high sensitivity and better low-frequency response. SiC is regarded as a promising material; it has higher bandgap than Si, and has a good piezoresistive effect at high temperatures. Referring to the application of polymers in accelerometers, in [20], an SU-8-polymer-based, single-mass, three-axis, piezoresistive accelerometer was built (Figure 2a); it demonstrated better sensitivity due to the low Young’s modulus of SU-8 compared with Si, and piezoresistive materials such as ZnO nanorods were employed as sensing materials applied on the surface of U-beams to detect deformation. In [21], a polymeric Fano-resonator-based accelerometer was fabricated (Figure 2b); when being accelerated, a force was exerted on the ring, which experienced a strain, causing a phase change of the light proportional to the acceleration. This device demonstrated very high sensitivity. In [22], a PVDF-based localization and wheel–ground contact-sensing scheme was presented.
Gyroscopes also come in different types, including mechanical gyroscopes, optical gyroscopes, and MEMS gyroscopes [23]. Mechanical gyroscopes are constructed based on the principle of the conservation of angular momentum, which is the tendency of a moving object to maintain the same rotational axis at a constant rotation speed. Optical gyroscopes rely on the Sagnac effect. If a light path is rotating at a certain angular rate, by measuring the time delay between two light pulses travelling along the same light path in opposite directions, the angular rate can be calculated. Generally, there are two forms of optical gyroscope: Ring-Laser Gyroscopes (RLGs), and Fiber-Optic Gyroscopes (FOGs) [24]. MEMS gyroscopes are mostly based on the effect of Coriolis acceleration, which is the acceleration applied to a moving object at a certain velocity in a rotating frame. Such an MEMS gyroscope usually contains a vibrating part, and by detecting the Coriolis acceleration, the angular rate is obtained [24]. While most implementations today utilize MEMS-based gyroscopes for wheeled mobile robot applications due to their low cost, [25] used a calibrated FOG together with measurements from wheel encoders for wheeled robot dead-reckoning. Referring to the application of polymers in gyroscopes, in [26], a polymeric ring resonator was fabricated and applied to an optical gyroscope (Figure 2d). The coupler split the input light into two beams, and the refractive index difference between polymers 1 and 2 was 0.01, aiming at achieving an optimal coupling ratio and low propagation loss. In [27], a PDMS polymeric ring structure was fabricated to build an MEMS gyroscope (Figure 2c), where the bottom of the ring was fixed while the upper part could move freely. The eight coils serve as driving and sensing parts. This was achieved by exerting Lorentz force on the ring for harmonic vibration. When being rotated, a Coriolis force was introduced to the ring, and could be sensed by the coils in the form of electromotive force.
Figure 2. (a) SU-8 3-axis piezoresistive accelerometer, reprinted with permission from [20]. Copyright IEEE 2019. (b) Polymeric Fano-resonator-based accelerometer, reprinted with permission from [21]. Copyright The Optical Society 2016. (c) Polymeric vibratory ring-type MEMS gyroscope, reprinted with permission from [27]. Copyright IEEE 2008. (d) Polymeric ring resonator for optical gyroscope, reprinted from [26], Hindawi, 2014.
In more general cases, accelerometers and gyroscopes are combined to form an IMU package for robot localization and navigation. Combining multiple sensors requires appropriate fusion strategies; a more detailed discussion of sensor fusion techniques is presented in Section 3. The authors of [28] proposed a dynamic-model-based slip detector for a wheeled robot based on an MEMS IMU and an encoder, demonstrating successful vehicle immobilization detection (a 0.35% false detection rate) and accurate estimates of the robot’s velocity. The authors of [29] utilized a low-cost IMU and measurements from wheel encoders to form an extended Kalman filter (EKF) scheme for a skid-steered wheeled robot, with the “virtual” velocity measurement update based on the proposed kinematic model, demonstrating relatively accurate state estimation.
Combining multiple IMUs may improve localization accuracy. The author of [30] mounted multiple MEMS IMUs on a vehicle and fused them using a Kalman filter, achieving a maximum of 55% improvement in positioning accuracy compared to a single IMU. The authors of [31] demonstrated a solution using multiple MEMS IMUs, with at least one mounted on a wheel so that the output of the wheel rate gyro replaces the wheel encoder, fused together with an EKF, reporting a mean positional drift rate of 0.69% and a heading error of 2.53 degrees.
IMU pre-integration plays a key role in state-of-the-art robot navigation systems since, in general, IMUs have a higher sampling rate (100 Hz–1 kHz) than other navigation sensors; thus, combining serial IMU readings into a single measurement becomes desirable. This method was discussed in [32,33,34], and is transcribed here as follows:
$$\Delta R_{ij} = R_i^T R_j$$
$$\Delta v_{ij} = R_i^T \left(v_j - v_i - g\,\Delta t_{ij}\right)$$
$$\Delta p_{ij} = R_i^T \left(p_j - p_i - v_i\,\Delta t_{ij} - \tfrac{1}{2}\, g\,\Delta t_{ij}^2\right).$$
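A minimal numerical interpretation of these pre-integrated quantities is sketched below: the relative rotation, velocity, and position increments between keyframes i and j are accumulated directly from raw samples, independent of the global state. The full pre-integration theory of [32,33,34] also propagates noise covariances and bias Jacobians, which are omitted in this sketch.

```python
import numpy as np
from scipy.spatial.transform import Rotation

def preintegrate(measurements, b_omega, b_a):
    """Accumulate relative motion increments from raw IMU samples
    (omega_m, a_m, dt), expressed in the frame of keyframe i.

    A minimal sketch: noise propagation and bias Jacobians are omitted.
    """
    dR = np.eye(3)
    dv = np.zeros(3)
    dp = np.zeros(3)
    for omega_m, a_m, dt in measurements:
        a = a_m - b_a
        dp += dv * dt + 0.5 * (dR @ a) * dt**2
        dv += (dR @ a) * dt
        dR = dR @ Rotation.from_rotvec((omega_m - b_omega) * dt).as_matrix()
    return dR, dv, dp
```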
In terms of currently available solutions, Xsens [35] and Microstrain [36] offer IMUs and development kits for visualization and data logging, and can stream odometry data via an ROS API. This is also achievable with a low-cost IMU using Arduino [37]. For an in-depth review of modern inertial navigation systems and commercially available products, please refer to [24,38].

2.3. LiDAR

LiDAR is named after its working principle and function, which is Light Detection and Ranging. The authors of [39,40,41,42] conducted comprehensive reviews of the principles and applications of modern LiDAR systems. Based on its working principles, LiDAR can be categorized into Time-of-Flight (ToF) LiDAR and Frequency-Modulated Continuous-Wave (FMCW) LiDAR. ToF LiDAR measures range by comparing the elapsed time between the transmitted and received signal. It dominates the market due to its simple structure, but it suffers issues such as interference from sunlight or other LiDAR devices. FMCW LiDAR adopts the same principle as FMCW radar—the transmitted laser frequency is modulated linearly against the time, and then both the range and the velocity of the observed object can be translated from the frequency difference between the transmitted and received laser wave. One notable advantage of FMCW LiDAR is its ability to directly retrieve velocity from the measurements.
Based on the laser beam steering mechanism, LiDAR can be further categorized into mechanical LiDAR and solid-state LiDAR. Mechanical steering of the laser beam actuated by a motor is the most popular solution at present due to its large Field of View (FOV), but it usually results in a very bulky implementation, and is susceptible to distortion caused by motion. Solid-state LiDAR comes in multiple forms, including MEMS LiDAR, FLASH LiDAR, and Optical Phased Array (OPA) LiDAR. Here solid-state refers to a steering system without bulky mechanical moving parts. MEMS LiDAR consists of a micromirror embedded in a chip, the tilting angle of which is controlled by the electromagnetic force and the elastic force, resulting in a relatively small FOV (typically 20–50 degrees horizontally). Due to its compact size and low weight, MEMS LiDAR can be used for robotic applications, where size and weight requirements are stringent [43]. Ref. [44] developed and fabricated an MEMS LiDAR for robotic and autonomous driving applications. The authors of [45] presented an algorithm for such small-FOV solid-state LiDAR odometry, mitigating the problems of small FOV, irregular scanning pattern, and non-repetitive scanning by linear interpolation, and demonstrating a trajectory drift of 0.41% and an average rotational error of 1.1 degrees. FLASH LiDAR operates in a similar way to a camera using a flashlight—a single laser is spread to illuminate the area at once, and a 2D photodiode detection array is used to capture the laser’s return. Since the imaging of the scene is performed simultaneously, movement compensation of the platform is unnecessary. This method has been used for pose estimation of space robots, as demonstrated by [46,47]. The main drawbacks of FLASH LiDAR are its limited detection range (limited laser power for eye protection) and relatively narrow FOV. OPA LiDAR controls the optical wavefront by modulating the speed of light passing through each antenna; ref. [48] presents a realization and application of such a device.
Polymers have been utilized for the fabrication of novel LiDAR devices. In [49,50], a polymeric thermo-optic phase modulator was fabricated and utilized for OPA LiDAR (Figure 3a), achieving good energy efficiency. In [51], piezoelectric polymer P(VDF-TrFE) copolymers were employed as actuators, introducing rotation to the micromirror of the MEMS LiDAR due to the asymmetric position of the mirror (Figure 3b). In [52], a UV-cured polymer was adopted for the fabrication of microlenses of Single-Photon Avalanche Diodes (SPADs) applied in FLASH LiDAR.
LiDAR odometry (LO) is the process of finding the transformation between consecutive LiDAR scans, by aligning the point cloud from the current scan to the reference scan. Reviews of the point cloud alignment methods for general purposes can be found in [53,54,55,56,57], and for autonomous driving in [58,59]. Based on the source of the LiDAR point cloud, both 2D [60,61] and 3D implementations have been shown. Those methods can be categorized into scan-matching methods and feature-based methods [59].
Scan-matching methods are also called fine-registration methods. Among the registration methods, the family of Iterative Closest Point (ICP) registration methods is widely adopted [53], not only for mobile robots’ SLAM problems [62], but also for object reconstruction, non-contact inspections, and surgery support. A comprehensive review of the ICP algorithms can be found in [63]. The general idea behind ICP is to iteratively find the transformation that best aligns the incoming point cloud with the reference point cloud. A complete ICP algorithm should include functional blocks such as a data filter, initial transformation, association solver, outlier filter, and error minimization. The association solver (also called the match function) is utilized for pairing points from the incoming data and the reference point cloud; this process may also be referred to as data association, point matching, or correspondence finding, depending on the context. This matching process can be of three types: feature matching (point coordinate, surface normal [64], or curvature [65]), descriptor matching (laser intensity [66]), and mixed. The finding process is often accelerated by data structures such as k-D trees [67] to find the correspondences with the shortest distance and/or similar properties.
Error minimization is the area in which most ICP variants differ. The goal is to minimize Equation (9), where $p$ and $q$ are corresponding points in the reading and reference point clouds, respectively, while $R$ and $t$ are the rotation and translation to be resolved, respectively. The outcome depends on the error metric: for example, the point-to-point error metric, which takes the Euclidean distance between points as the metric, as shown in [68]; the point-to-plane error metric [69], which originates from the idea that the points are sampled from a smooth surface, and searches for the distance between a point and a plane containing the matched point; and the generalized ICP [70], which introduces a probabilistic representation of the points, and can be viewed as a plane-to-plane error metric. LiTAMIN2 [71] introduced an additional K–L divergence term that evaluates the difference in distribution shape, which performs well even when the points for registration are relatively sparse.
$$\left(q - R p - t\right)^T \left(q - R p - t\right) \tag{9}$$
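For illustration, the following sketch implements the basic point-to-point ICP loop described above: nearest-neighbor association with a k-D tree followed by a closed-form SVD (Kabsch/Umeyama) solution for R and t. It omits the data filtering, outlier rejection, and initial-transformation stages of a complete ICP pipeline.

```python
import numpy as np
from scipy.spatial import cKDTree

def icp_point_to_point(src, ref, iters=30):
    """Minimal point-to-point ICP (Eq. (9)): iteratively match nearest
    neighbours and solve R, t in closed form via SVD.

    src, ref are (N,3)/(M,3) arrays; no outlier or data filtering stages
    are included in this sketch.
    """
    R, t = np.eye(3), np.zeros(3)
    tree = cKDTree(ref)
    for _ in range(iters):
        moved = src @ R.T + t
        _, idx = tree.query(moved)          # nearest-neighbour association
        q = ref[idx]
        p_mean, q_mean = moved.mean(0), q.mean(0)
        H = (moved - p_mean).T @ (q - q_mean)
        U, _, Vt = np.linalg.svd(H)
        D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
        dR = Vt.T @ D @ U.T                 # incremental rotation
        dt = q_mean - dR @ p_mean
        R, t = dR @ R, dR @ t + dt          # compose with running estimate
    return R, t
```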
A major drawback of ICP is that it relies heavily on the initial guess; therefore, ICP is susceptible to local minima [72]. To address this issue, a global method based on unique features, without the need for an initial guess, was developed. The authors of [68] proposed a globally optimized ICP that integrates the Branch-and-Bound (BnB) algorithm; the ICP searches the local minima, while the BnB algorithm helps it to jump out of them, and together they converge to the global minimum.
Another disadvantage of ICP is that it operates on a discrete sampling of the environment. To address the effect of unevenly distributed LiDAR points, the Normal Distribution Transform (NDT) was introduced for both 2D registration [73] and 3D registration [74]. Instead of using individual LiDAR points, the normal distributions give a piecewise smooth Gaussian probability distribution of the point cloud, thus avoiding the time-consuming nearest-neighbor search and memory-inefficient storage of the complete point cloud. The NDT first divides the space occupied by the scan into equally sized cells, and then calculates the mean vector $q$ and the covariance matrix $C$ for each cell of the fixed reference scan. The goal is to find the transformation $R$ and $t$ that minimizes the score function (Equation (10)) using Newton’s optimization method. Since this process iterates over all points in the incoming scan, it is called a Point-to-Distribution (P2D) NDT registration algorithm.
$$\left(q - R p - t\right)^T C^{-1} \left(q - R p - t\right) \tag{10}$$
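The sketch below evaluates this P2D-NDT objective for a candidate transformation, assuming the reference scan has already been voxelized into cells that store a mean vector and an inverse covariance. The 1 m voxel size and the dictionary-based cell lookup are illustrative choices, and the Newton optimization loop is not shown.

```python
import numpy as np

def ndt_score(scan, cells, R, t, voxel=1.0):
    """Evaluate the P2D-NDT objective of Eq. (10) for a candidate (R, t).

    `cells` maps a voxel index to (mean q, inverse covariance C_inv) built
    from the reference scan with the same voxel size.
    """
    score = 0.0
    for p in scan:
        x = R @ p + t
        key = tuple(np.floor(x / voxel).astype(int))  # containing cell
        if key in cells:
            q, C_inv = cells[key]
            d = q - x
            score += d @ C_inv @ d                    # Mahalanobis-type term
    return score
```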
If the registration is directly performed on the distribution models of both scans, it becomes a Distribution-to-Distribution (D2D) NDT registration algorithm [75]. This method shares many similarities with the generalized ICP in its distance error metric, but performs more accurately and faster than both the generalized ICP and the standard P2D [76]. An NDT histogram of plane orientations is also computed in this method for better initial transformation estimation. However, in some cases it is still susceptible to local minima. The Gaussian Mixture Map (GMM) method [77,78] is similar to the NDT methods in that both maximize the probability of drawing the transformed scan from the reference scan; it constructs a Gaussian mixture model over the z-height of the 3D LiDAR scan, and then uses a multiresolution Branch-and-Bound search to reach the global optimum. The Coherent Point Drift (CPD) algorithm is also based on the GMM method [79]. The Closest Probability and Feature Grid (CPFG)-SLAM [80] is inspired by both ICP and NDT, searching for the nearest-neighbor grid instead of the nearest-neighbor point, and can achieve a more efficient registration of the point cloud in off-road scenarios.
Other registration methods include the Random Sample Consensus (RANSAC) algorithm-based registration method [81], which randomly chooses minimal points from each scan and then calculates the transformation, and the transformation with the largest number of inliers is selected and returned. Its time complexity depends on the subset size, the inlier ratio, and the number of data points; thus, its runtime can be prohibitively high in some cases. The Implicit Moving Least-Squares (IMLS) method leverages a novel sampling strategy and an implicit surface matching method, and demonstrates excellent matching accuracy, but is hard to operate in real time [82]. The Surface Element (Surfel) method [83,84,85] can represent a large-scale environment while maintaining dense geometric information at the same time. MULLS-ICP categorizes points into ground, facade, pillar, and beam, which are then registered by the multimetric ICP [86].
Feature-based methods extract relevant features from the point clouds, and then use them for successive pose estimation. Since these methods only use a selected part of the point cloud, they can be treated as “sparse” methods. Features are points with distinct geometry within a locality [87]. Feature-based methods generally consist of three main phases: key point detection, feature description, and matching [88]. Summaries and evaluations of 3D feature descriptors can be found in [87,88,89]. Several feature descriptors—including spin images (SI) [90], the Fast Point Feature Histograms (FPFHs) [91], the Shape Context (SC) [92,93], and the Signature of Histograms of Orientations (SHOT) [94,95]—are applied for point cloud registration and loop-closure detection [60,93,95,96]. According to [97], SHOT is the descriptor that gives the fastest and most accurate results in the test. Feature descriptor methods are often employed in initial transformation calculation or loop-closure detection problems.
The state-of-the-art feature-based method LiDAR Odometry And Mapping (LOAM) has held first place in the KITTI odometry benchmark since it was first introduced in [98]. LOAM achieves both low drift and low computational complexity by running two algorithms in parallel. Feature points are selected as edge points with low smoothness and planar points with high smoothness, and then the transformation between consecutive scans is found by minimizing the point-to-edge distance for edge points and the point-to-plane distance for planar points, using the Levenberg–Marquardt (L–M) method. Inspired by LOAM, several methods have been proposed, including LeGO-LOAM [99], which first segments the raw point cloud using a range image, and then extracts features via a similar process to that used in LOAM, and performs a two-step L–M optimization; that is, $[t_x, t_y, \theta_{yaw}]$ are estimated using the edge features, while $[t_z, \theta_{roll}, \theta_{pitch}]$ are estimated using the planar features. A summative table of representative LiDAR odometry methods is shown in Table 2.
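As a rough illustration of this LOAM-style feature selection, the sketch below computes a smoothness value for each point of a single scan line from the difference vectors to its neighbors; thresholding this value (large for edge candidates, small for planar candidates) is left to the caller, and the exact formulation in [98] differs in details.

```python
import numpy as np

def smoothness(scan_line, k=5):
    """LOAM-style smoothness of each point in one scan line: the norm of the
    sum of difference vectors to its k neighbours on either side, normalised
    by the point range and neighbourhood size.
    """
    n = len(scan_line)
    c = np.full(n, np.nan)
    for i in range(k, n - k):
        diff = np.sum(scan_line[i - k:i + k + 1] - scan_line[i], axis=0)
        c[i] = np.linalg.norm(diff) / (2 * k * np.linalg.norm(scan_line[i]))
    return c
```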
In terms of currently available solutions, SLAMTEC offers user-friendly software [100] for robotic odometry, mapping, and control, with low-cost LiDAR. Livox offers products and packages for odometry and mapping. Cartographer is a widely used package for odometry and mapping with LiDAR.
Table 2. Summary of representative LiDAR odometry (LO).

Category | Method | Loop-Closure Detection | Accuracy 1 | Runtime 1
Scan-matching | ICP [70] | No | Medium | High
Scan-matching | NDT [76] | No | Medium | High
Scan-matching | GMM [77] | No | Medium | -
Scan-matching | IMLS [82] | No | High | High
Scan-matching | MULLS [86] | Yes | High | Medium
Scan-matching | Surfel-based [83] | Yes | Medium | Medium
Scan-matching | DLO [101] | No | High | Low
Scan-matching | ELO [102] | No | High | Low
Feature-based | Feature descriptor [97] | No | Low | High
Feature-based | LOAM [98] | No | High | Medium
Feature-based | LeGO-LOAM [99] | No | High | Low
Feature-based | SA-LOAM [103] | Yes | High | Medium

1 Adopted from [58,104].

2.4. Millimeter Wave (MMW) Radar

Radar stands for radio detection and ranging, which is another type of rangefinder. It is based on the emission and detection of electromagnetic waves in the radio frequency range from 3 MHz to 300 GHz (with wavelengths from 100 m to 1 mm). The radar equation (Equation (11)) depicts how the expected received power $p_r$ is a function of the transmitted power $p_t$, the antenna gain $G$, and the wavelength $\lambda$, as well as the Radar Cross-Section (RCS) $\sigma$ and the range $r$ to the target. Compared with its counterpart LiDAR, radar has superior detection performance under extreme weather conditions, since waves within this spectrum interact only weakly with dust, fog, rain, and snow. The Millimeter Wave (MMW) spectrum ranges from 30 GHz to 300 GHz (with wavelengths from 10 mm to 1 mm), which provides wide bandwidth and narrow beams for sensing, thus allowing finer resolution [105,106].
$$p_r = \frac{p_t\, G^2\, \lambda^2\, \sigma}{(4\pi)^3\, r^4} \tag{11}$$
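As a small worked example of Equation (11), the snippet below evaluates the expected received power; the numerical values are purely illustrative.

```python
import numpy as np

def received_power(p_t, gain, wavelength, rcs, r):
    """Expected received power from the radar equation (Eq. (11))."""
    return p_t * gain**2 * wavelength**2 * rcs / ((4 * np.pi)**3 * r**4)

# Illustrative values only: 10 mW transmit power, gain 100, 77 GHz (~3.9 mm
# wavelength), 1 m^2 RCS target at 10 m range.
print(received_power(10e-3, 100, 3.9e-3, 1.0, 10.0))
```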
In terms of polymer utilization in radar, Liquid Crystal Polymer (LCP) is regarded as a promising candidate as a substrate for MMW applications due to its flexibility, low dielectric loss, lower moisture absorption, and ability to withstand temperatures up to 300 °C [107]. The use of HDPE as a dielectric waveguide for distributed flexible antennas for proximity measurement in robotics applications is presented in [108] (Figure 4a). The use of conducting polymers such as polyaniline (PANI), doped with multiwalled carbon nanotubes, in the fabrication of antennas has demonstrated excellent flexibility and conformality in RF device manufacture [109] (Figure 4b).
Based on the form of the emitted wave, radar can be divided into pulsed and continuous-wave radar [110]; pulsed radar determines range based on the round-trip time of the electromagnetic wave, with its maximal detectable range and range resolution depending on its pulse cycle interval and pulse width, respectively. In contrast to pulsed radar, continuous-wave radar emits a continuous signal, and the most widely used waveform for robotics and automotive applications is Frequency-Modulated Continuous-Wave (FMCW), which can determine the range and velocity of an object simultaneously. The frequency of the emitted signal is modulated linearly against time, which is also referred to as a chirp. The range and velocity information are obtained by performing a 2D Fast Fourier Transform (FFT) on the radar’s beat frequency signal [111,112,113], as demonstrated in Figure 5. Other waveforms for MMW radar are summarized in [111,114].
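The following sketch illustrates this 2D FFT processing chain for an FMCW beat-signal matrix (chirps × samples) and the standard axis scaling used to convert FFT bins to range and radial velocity; the parameter names and sampling setup are assumptions and are not tied to any specific radar cited here.

```python
import numpy as np

def range_doppler_map(beat, fs, slope, wavelength, chirp_period):
    """Turn a (num_chirps, num_samples) matrix of FMCW beat-signal samples
    into a range-Doppler map via a 2D FFT.

    Axis scaling follows the standard FMCW relations: beat frequency
    f_b = 2*slope*r/c, and chirp-to-chirp Doppler frequency f_d = 2*v/wavelength.
    """
    num_chirps, num_samples = beat.shape
    rd = np.fft.fftshift(np.fft.fft2(beat), axes=0)   # FFT over fast & slow time
    rd = np.abs(rd[:, :num_samples // 2])             # keep positive range bins
    c = 3e8
    range_axis = (np.fft.rfftfreq(num_samples, 1 / fs)[:num_samples // 2]
                  * c / (2 * slope))
    vel_axis = (np.fft.fftshift(np.fft.fftfreq(num_chirps, chirp_period))
                * wavelength / 2)
    return rd, range_axis, vel_axis
```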
The angular location of each detection must also be determined so that the positions of objects can be resolved. Thanks to the short wavelength of MMW, the aperture size of radar antennas can be made small; hence, many antennas can be densely packed to form an array. With at least two receiver antennas, the Angle of Arrival (AoA) can be calculated from the phase difference measured at the different receivers, which can be performed via a 3D FFT [116]. A commonly used AoA measurement principle is Multiple-Input–Multiple-Output (MIMO), which utilizes multiple transmitters and receivers to form an antenna array. Spaced real and virtual receivers can thus calculate the elevation and azimuth angles based on the phase shifts in the corresponding directions. The virtue of a fixed antenna array is that the examined region is captured instantaneously; hence, no distortion appears due to sensor movement, and thus most automotive radars adopt this configuration. In addition to MIMO, the AoA can also be measured with a spinning directive antenna: at each moment, the radar outputs a 1D power–range spectrum for the inspected line of sight, where the azimuth angle is the radar’s own azimuth angle relative to a fixed coordinate frame [115]. In [117,118], a designated spinning radar device called PELICAN was constructed and evaluated for mobile robotics applications.
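A minimal sketch of the phase-difference AoA principle for a two-receiver array is shown below, assuming an incident plane wave and an antenna spacing that is typically half a wavelength.

```python
import numpy as np

def angle_of_arrival(phase_diff, wavelength, spacing):
    """Estimate the azimuth angle (rad) from the phase difference measured
    between two receiver antennas separated by `spacing`.

    Uses the plane-wave relation: phase_diff = 2*pi*spacing*sin(theta)/lambda.
    """
    s = phase_diff * wavelength / (2 * np.pi * spacing)
    return np.arcsin(np.clip(s, -1.0, 1.0))
```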
An overview of MMW radar applications in robotics and autonomous vehicles can be found in [119,120]. Radar-based odometry methods can be classified into direct and indirect methods [121,122]; similar to LO, indirect methods involve feature extraction and association, whereas direct methods forego these procedures. Among the direct methods, in [123], the Fourier–Mellin Transform (FMT) is used for registering radar images in sequence, which relies on the translational and rotational properties of the Fourier transformation [124]. Similarly, FMT is also leveraged in [125], where the rotation and translation are calculated in a decoupled fashion, and a local graph optimization process is included. Since Doppler radar can measure radial velocity directly, the relative velocity of a stationary target is equal to the negative of the sensor’s own velocity. This principle is exploited in [126,127,128,129], where a RANSAC algorithm is invoked for non-stationary outlier removal. Meanwhile, in [127], the 2D point-to-point ICP is used to obtain finer odometry results. Since Doppler measurement cannot capture rotational ego-motion directly, a hybrid method is used in [130] for angular rate measurement.
Among the indirect methods, some classical work in SLAM was carried out with the aid of MMW radar, including [131], which incorporated radar measurements into a Kalman filter framework operating in an environment with well-separated reflective beacons. The authors of [132] conducted a series of thorough works on mobile robot localization with MMW radar. In their work, new feature-extraction algorithms—Target Presence Probability (TPP) and a confidential model—showed superior performance compared with constant thresholding and the Constant False Alarm Rate (CFAR) [133]. Since radar detection can be impaired by false alarms and clutter, feature association may become problematic; thus, the feature measurements are better modeled as Random Finite Sets (RFSs) with arbitrary numbers of measurements and orders, and incorporated with the RB (Rao–Blackwellized)-PHD (Probability Hypothesis Density) filter [134,135] for map building and odometry.
Some more recent works include [136], which extracts the Binary Annular Statistics Descriptor (BASD) for feature matching, and then performs graph optimization; and [137], where SURF and M2DP [96] descriptors are computed from radar point clouds for feature association and loop-closure detection, respectively; as well as the use of SIFT in [138]. Radar measurements are noisy and, thus, may worsen the performance of scan-matching algorithms used for LiDAR, such as ICP and NDT; nevertheless, G-ICP [70] showed good validity in [139], where the covariance of each measurement was assigned according to its range; the same can be said of NDT in [140] and GMM in [141], which incorporated detection clustering algorithms including k-means, DBSCAN, and OPTICS. In [142], Radar Cross-Section (RCS) was used as a cue for assisting with feature extraction and Correlative Scan Matching (CSM). The authors of [143,144] devised a new feature-extraction algorithm for MMW radar power–range spectra, plus a multipath reflection removal process, and then performed data association via graph matching.
Ref. [145] analyzed radar panoramic image distortion caused by vehicle movement and the Doppler effect in detail. CFAR detection was used for feature extraction, and then feature association and optimization were applied to retrieve both the linear velocity and the angular rate of the mobile platform. The method in [146] was based on a similar assumption, but was applied in a more elegant way, utilizing the feature-extraction algorithm proposed in [143], and calculating ORB descriptors for feature association. These methods demonstrated better results compared with the FMT-based methods, but their performance may deteriorate when the robot is accelerating or decelerating [122]. A summative table of representative radar odometry methods is shown in Table 3.
In terms of currently available solutions, NAVTECH offers radar devices that have been widely used in radar odometry research. Yeti [146] is a package that works with NAVTECH radar, removing motion distortion and providing odometry data.

2.5. Vision

Cameras can acquire visual images for various applications. Various cameras—including monocular cameras, stereo cameras, RGB-D cameras, event cameras, and omnidirectional cameras—are employed for robotic tasks [148]. Solid-state image sensors, including Charge-Coupled Devices (CCDs) and Complementary Metal–Oxide–Semiconductor (CMOS) sensors, are the basis of the cameras used for imaging [149]. Photodetectors convert electromagnetic radiation into an electronic signal that is ideally proportional to the incident light, and are mostly fabricated from semiconductors such as Si, GaAs, InSb, InAs, and PbSe. When a photon is absorbed, it creates a charge-carrier pair, and the movement of the charge carriers produces a photocurrent to be detected. Compared with inorganic photodetectors, organic photodetectors exhibit attractive properties such as flexibility, light weight, and semi-transparency. Heterojunction diodes with polymers such as P3HT:PC71BM, PCDTBT:PC71BM, and P3HT:PC61BM as the photoactive layer have been fabricated [150]. Organic photodetectors can be classified into organic photoconductors, organic phototransistors, organic photomultiplication devices, and organic photodiodes. These organic photodetectors have demonstrated utility in imaging sensors [151]. As a recent topic of interest, narrowband photodetectors can be fabricated from materials with narrow bandgaps, exhibiting high wavelength selectivity [152].
Several camera models describe the projection of 3D points onto the 2D image plane. The most common is the pinhole model (Equation (12)), which is widely applied for monocular and stereo cameras, projecting a world point $P$ to its image coordinate $P'$, where $\alpha$ and $\beta$ are coefficients related to the focal length and pixel size, respectively, while $c_x$ and $c_y$ describe the coordinates of the image center [153]. Other models—including the polynomial distortion model, the two-plane model, and the catadioptric camera model—are summarized in [154,155].
$$P' = \begin{bmatrix} x' \\ y' \\ z' \end{bmatrix} = \begin{bmatrix} \alpha & 0 & c_x & 0 \\ 0 & \beta & c_y & 0 \\ 0 & 0 & 1 & 0 \end{bmatrix} \begin{bmatrix} x \\ y \\ z \\ 1 \end{bmatrix} \tag{12}$$
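The projection of Equation (12) can be written compactly in code. The sketch below maps 3D points expressed in the camera frame to pixel coordinates; the extrinsic world-to-camera transformation and lens distortion are omitted for brevity.

```python
import numpy as np

def project(points_cam, alpha, beta, c_x, c_y):
    """Project 3D points (N,3), given in the camera frame, to pixel
    coordinates with the pinhole model of Eq. (12)."""
    K = np.array([[alpha, 0.0, c_x],
                  [0.0, beta, c_y],
                  [0.0, 0.0, 1.0]])
    p = (K @ points_cam.T).T              # homogeneous image coordinates
    return p[:, :2] / p[:, 2:3]           # normalise by depth
```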
The term Visual Odometry (VO) was first coined by [156]. As suggested by its name, motion estimation of the mobile platform is performed solely from visual input. With recent advancements in the VO systems with loop-closure, the boundary between VO and Visual-SLAM (V-SLAM) has become blurred; nevertheless, VO systems devote more attention to ego-motion estimation than to map building [157]. There exist several review papers on the VO systems—[13,158,159,160] provided a comprehensive overview of VO and V-SLAM; the two-part survey [161,162] highlighted feature-based VO; while [163,164,165,166] conducted reviews of recent advancements in VO and V-SLAM using state-of-the-art data-driven methods. VO can be classified into geometry-based methods and learning-based methods; geometry-based methods can be further categorized into feature-based approaches, appearance-based approaches, and hybrid approaches.
Geometry-based methods explicitly model the camera pose based on multi-view geometry; among them, feature-based approaches are currently the most prominent solution for VO. Features denote salient image structures that differ from their neighbors; they are first located by feature detectors, such as Harris, Shi–Tomasi, FAST, or MSER. To match features between different images, they need to be described based on their adjacent support region with feature descriptors, such as ORB [167], BRIEF [168], BRISK, or FREAK. Some algorithms, such as SIFT [169] and SURF [170], involve both a detector and a descriptor. For an in-depth survey of feature detectors and descriptors, please refer to [171,172]. Based on the pose estimation solver, feature-based approaches can be further decomposed into 2D-to-2D methods, 3D-to-2D methods, and 3D-to-3D methods [161].
2D-to-2D methods are formulated from the so-called epipolar geometry; the motion of the camera is resolved by calculating the essential matrix $E$, which encapsulates the translational parameters $t$ and the rotational parameters $R$ of the camera motion; $p$ and $p'$ are corresponding image points.
$$p'^T E\, p = 0$$
$$E = [t]_\times R.$$
In terms of the minimal sets of point correspondences required to generate a motion hypothesis, several n-point algorithms have been proposed, with a tutorial to be found in [173], including the eight-point algorithm [174]—a linear solver with a unique solution, which is implemented in the monocular version of [175] and LibVISO2 [176]. The seven-point algorithm applies the rank constraint of the essential matrix [177], and is more efficient in the presence of outliers, but there may exist three solutions, and all three must be tested. The six-point algorithm further imposes the trace constraint of the essential matrix, and may return either a single solution [178] or up to six solutions [179]; the former fails in the presence of a planar scene [180]. The six-point algorithm may also serve as a minimal solver for a partially calibrated camera with unknown focal length [181,182]. If the depth of an object is unknown, only five parameters describe the camera motion (two for translation and three for rotation); hence, a minimal set of five correspondences is adequate. The five-point algorithm [182,183,184,185,186,187] solves a multivariate polynomial system and returns up to 10 solutions; this system can be solved via a Groebner basis [185], the hidden variable approach [182], PolyEig [187], or QuEst [186]. It is able to work with planar scenes, and also quadric surfaces [188]. If the rotational angle can be read from other sensors, such as IMUs, then the four-point algorithm [189] and the three-point algorithm [190,191] can be used. By imposing the non-holonomic constraint of a ground vehicle, the one-point algorithm was proposed in [192]; however, the camera needs to be mounted above the rear axle of the vehicle. For cameras moving in a plane, the two-point algorithm was proposed [193,194]. Note that the above algorithms are usually embedded in a RANSAC framework, where fewer required points lead to faster convergence.
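For illustration, the sketch below uses OpenCV's RANSAC-based essential matrix estimation (which internally applies a five-point minimal solver) and the subsequent cheirality-based pose recovery; it is a generic example of the 2D-to-2D pipeline rather than the method of any specific paper above, and the function name is an assumption.

```python
import cv2
import numpy as np

def relative_pose(pts1, pts2, K):
    """Recover relative camera motion from 2D-2D correspondences via the
    essential matrix; pts1, pts2 are (N,2) arrays of matched pixel
    coordinates and K is the 3x3 intrinsic matrix.
    """
    E, inliers = cv2.findEssentialMat(pts1, pts2, K,
                                      method=cv2.RANSAC, threshold=1.0)
    # The cheirality check disambiguates the four (R, t) decompositions of E;
    # for a monocular camera, translation is recovered only up to scale.
    _, R, t, _ = cv2.recoverPose(E, pts1, pts2, K, mask=inliers)
    return R, t
```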
The 3D-to-2D methods aim at recovering camera position and orientation from a set of correspondences between 3D points and their 2D projections, which is also known as the Perspective-n-Point (PnP) problem. Several PnP algorithms were summarized in a recent review paper [195]. Three points constitute the minimal set to solve the problem; [196] covered the early solutions, which suffered from instability stemming from the unstable geometric structure [197]. The first complete analytical solution to the P3P problem was given in [198]. An improved triangulation-based method, Lambda Twist P3P, was proposed in [199]. Ref. [200] directly computed the absolute position and orientation of the camera as a function of the image coordinates and their world-frame coordinates instead of employing triangulation, and demonstrated better computational efficiency; this was further improved by [201]. Nonetheless, it is desirable to incorporate larger point sets to bring redundancy and immunity to noise. For more general cases where n > 3, PnP solutions based on iterative and non-iterative methods have been proposed. Iterative PnP solutions—including LHM [202] and PPnP [203]—are sensitive to initialization, and may get stuck in local minima. SQPnP [204] casts PnP as a Quadratically Constrained Quadratic Program (QCQP) problem, and attains the global optimum. Among the non-iterative methods, the first efficient algorithm, EPnP, was presented in [205], which allocates four weighted virtual control points for the whole set of points to improve efficiency. To improve accuracy when the point set is small (n ≤ 5), RPnP [206] was proposed, which partitions n points into (n−2) sets and forms a polynomial system to determine the intermediate rotational axis; the rotational angle and translational parameters are retrieved by performing Singular Value Decomposition (SVD) of a linear equation system. This was further refined by the SRPnP algorithm [207]. The Direct-Least-Squares (DLS) method [208] employs a Macaulay matrix to find all roots of the polynomials parameterized from the Cayley parameterization, which can guarantee the global optimum, but suffers from degeneration for any 180° rotation. This issue was circumvented by OPnP [209], which also guarantees the global optimum using a non-unit quaternion parameterization.
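Similarly, a 3D-to-2D pose estimate can be obtained with OpenCV's PnP solvers; the sketch below uses the EPnP flag inside a RANSAC loop as one possible configuration, with the function name chosen for illustration.

```python
import cv2
import numpy as np

def pnp_pose(points_3d, points_2d, K):
    """Estimate camera pose from 3D-2D correspondences with OpenCV's EPnP
    solver inside a RANSAC loop; points_3d is (N,3), points_2d is (N,2)."""
    ok, rvec, tvec, inliers = cv2.solvePnPRansac(
        points_3d.astype(np.float32), points_2d.astype(np.float32),
        K, None, flags=cv2.SOLVEPNP_EPNP)
    R, _ = cv2.Rodrigues(rvec)            # rotation vector -> rotation matrix
    return ok, R, tvec, inliers
```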
The 3D-to-3D methods recover transformation based on sets of points with 3D information, which is similar to the case for LiDAR point cloud registration. Generally, 3D data acquired with a stereo camera or RGB-D camera are noisier than those acquired by LiDAR; thus, the performance of 3D-to-3D methods is usually inferior to that of the 3D-to-2D methods. Similar approaches to those adopted in LiDAR systems—such as ICP [210], NDT [211], and feature registration [91,212]—have been applied for visual odometry, with surveys and evaluations of their performance to be found in [213,214].
Appearance-based approaches forego the feature-matching step and use the pixel intensities from consecutive images instead; consequently, they are more robust in textureless environments. They can be further partitioned into regional-matching-based and optical-flow-based methods. The regional-matching-based methods recover the transformation by minimizing a photometric error function, and have been implemented for stereo cameras [215,216,217,218], RGB-D cameras [219], and monocular cameras in dense [220], semi-dense [221], and sparse [222,223] fashion. Optical-flow-based methods retrieve camera motion from the point velocities measured on the image plane, as the apparent velocity of a point $X \in \mathbb{R}^3$ results from the camera linear velocity $v$ and angular velocity $\omega$, where $[\omega]_\times$ stands for the skew-symmetric matrix formed from the vector $\omega \in \mathbb{R}^3$:
$$\dot{X} = [\omega]_\times X + v$$
Commonly used algorithms for optical flow field computation include the Lucas–Kanade algorithm and the Horn–Schunck algorithm. Most optical-flow-based methods are derived from the Longuet–Higgins model and Prazdny’s motion field model [224]; for an overview, please refer to [225].
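A minimal feature-tracking front end based on the pyramidal Lucas–Kanade algorithm is sketched below; the detector settings are illustrative, and the resulting point pairs could be fed to the 2D-to-2D solvers discussed earlier.

```python
import cv2

def track_features(prev_gray, curr_gray):
    """Track Shi-Tomasi corners between two grayscale frames with the
    pyramidal Lucas-Kanade algorithm and return matched point pairs."""
    pts = cv2.goodFeaturesToTrack(prev_gray, maxCorners=500,
                                  qualityLevel=0.01, minDistance=8)
    nxt, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, curr_gray, pts, None)
    good = status.ravel() == 1
    return pts[good].reshape(-1, 2), nxt[good].reshape(-1, 2)
```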
Hybrid approaches combine the virtues of robustness from feature-based approaches and the abundance of information of appearance-based approaches. SVO [226] utilizes direct photometric error minimization for incremental motion estimation, followed by feature matching for pose refinement. Ref. [227] leveraged direct tracking adopted from LSD-SLAM [216] for inter-keyframe pose tracking and feature-based tracking for incremental motion estimation, which also served as a motion prior for keyframe refinement. A similar notion was adopted and improved upon in [228]. Conversely, in [229] the direct module from DSO [222] was used for real-time camera tracking, and the feature-based module from ORB-SLAM [157] was used for globally consistent pose refinement. Meanwhile, in [230], the geometric residual and the photometric residual were optimized jointly for each frame. In terms of currently available solutions, the RealSense T265 tracking camera offers a standalone solution that directly outputs odometry [231]. A summative table of representative visual odometry methods is shown in Table 4.

2.6. Discussion

Polymers have been employed in various sensors that are applicable for odometry, as summarized in Table 5. The pronounced virtues of using polymers are flexibility, light weight, and low cost.
Sensor modalities are the most dominant drivers of the evolution of odometry methods; generally, a new wave of odometry methods emerges every time new applicable sensors appear. Event-based cameras are bio-inspired sensors; unlike conventional cameras, which image the whole scene at a fixed rate, event cameras respond to changes in brightness at the individual pixel level, and have the merits of high temporal resolution (i.e., microseconds), high dynamic range, and low latency [236,237]. Odometry methods based on event cameras were proposed in [238]. Doppler LiDAR is capable of long-range detection and radial velocity measurement; hence, it endows mobile platforms with better sensing capacity [239,240]. An MEMS Focal Plane Switch Array (FPSA) with a wide FOV was recently proposed in [241], achieving better performance than current OPA LiDAR and being suitable for mass production in CMOS foundries.
Learning-based methods have recently attracted much attention and come to the fore, as they do not rely on handcrafted algorithms based on physical or geometric theories [9], and demonstrate comparable or even better performance than traditional methods. To name a few: for visual odometry, DeepVO [242] leverages deep Recurrent Convolutional Neural Networks (RCNNs) to estimate pose in an end-to-end fashion, and D3VO [243] formulates the deep prediction of depth, pose, and uncertainty into direct visual odometry. For inertial navigation, ref. [244] introduced a learning method for gyro denoising, and estimated attitudes in real time. For radar odometry, ref. [245] presented an unsupervised deep learning feature network for subsequent pose estimation. For LiDAR odometry, LO-NET [246] performs sequential learning of normal estimation, mask prediction, and pose regression. However, learning-based methods may deteriorate in previously unseen scenes.
Each sensor has its strengths and weaknesses, as summarized in Table 6, which reveals that there is no single sensor that can handle all conditions, while one sensor may complement another in at least one aspect. Thus, a multisensor fusion strategy is favored.

3. Sensor Fusion

There is no single sensor that can perform all measurements; thus, combining data from various sensors for complementarity and verification is desirable. Generally, sensor fusion serves two purposes: redundancy and complementarity. Redundancy is provided by sensors with the same measurement capability (e.g., range measurements from LiDAR and radar), and its aim is to improve the accuracy of the measurements. Complementarity is provided by sensors with diverse measurement capabilities (e.g., range measurement from LiDAR and speed measurement from radar), and its aim is to enrich the collected information [248]. As a rule of thumb, measurements fused from two low-end sensors can attain similar or better results than those from a single high-end sensor, since it can be shown mathematically that the covariance of the fused estimate is lower than that of either individual measurement [249]. One commonly used categorization of sensor fusion is based on the input to the fusion framework: in low-level fusion, raw sensor data are directly fed into the fusion network; in medium-level fusion, features are first extracted from the raw data and then fused together; high-level fusion, also called decision fusion, combines decisions from individual systems [250]. For mobile robot odometry—especially for Visual–Inertial Odometry (VIO)—two main approaches are used for sensor fusion, namely, the tightly coupled approach and the loosely coupled approach. In the loosely coupled approach, each sensor has its own estimator (e.g., VO and IMU), and the final result is a combination of the individual estimates, while in the tightly coupled approach the sensor measurements are directly fused in a single processor [166].
The taxonomy of sensor fusion methods for mobile robot odometry used here is based on their working principles and is adopted from recent surveys on sensor fusion [10,248,251,252]; filter-based methods [253] and optimization-based methods [254,255] are summarized below.

3.1. Filter-Based

3.1.1. Probability-Theory-Based

Probability-based methods represent sensor data uncertainty as a Probability Density Function (PDF), and data fusion is built upon the Bayesian estimator. Given a measurement set $Z_k = \{z_1, \ldots, z_k\}$ and the prior distribution $p(x_k \mid Z_{k-1})$, the posterior distribution of the estimated state $x_k$ at time $k$ is given by:
$$p(x_k \mid Z_k) = \frac{p(z_k \mid x_k)\, p(x_k \mid Z_{k-1})}{p(z_k \mid Z_{k-1})}$$
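As a concrete illustration of this recursion, the following minimal sketch implements a histogram (grid) Bayes filter for a robot moving along a one-dimensional corridor with known door positions; the motion and measurement models are hypothetical and serve only to show the predict and update structure.

```python
import numpy as np

n_cells = 20                                 # discretized 1D corridor
belief = np.full(n_cells, 1.0 / n_cells)     # uniform prior p(x_0)

def predict(belief, motion_noise=0.1):
    """Prediction: the robot intends to move one cell to the right."""
    moved = np.roll(belief, 1)
    # Mix in some probability of not moving (hypothetical motion model).
    return (1.0 - motion_noise) * moved + motion_noise * belief

def update(belief, z, doors, p_hit=0.8, p_miss=0.2):
    """Correction: z=1 means a door was detected (hypothetical sensor model)."""
    likelihood = np.where(doors == 1,
                          p_hit if z == 1 else 1 - p_hit,
                          p_miss if z == 1 else 1 - p_miss)
    posterior = likelihood * belief
    return posterior / posterior.sum()       # normalization = p(z_k | Z_{k-1})

doors = np.zeros(n_cells); doors[[3, 7, 12]] = 1   # known door locations
for z in [1, 0, 0, 1]:                             # a short measurement sequence
    belief = update(predict(belief), z, doors)
print("most likely cell:", int(np.argmax(belief)))
```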
The well-known Kalman Filter (KF) is an analytical solution to the Bayes filter, and is probably the most popular method for sensor fusion. The standard KF has two steps: the prediction step and the correction step. In the prediction step, the predicted state mean $\bar{\mu}_t$ and covariance $\bar{\Sigma}_t$ are calculated as follows:
$$\bar{\mu}_t = A_t \mu_{t-1} + B_t u_t$$
$$\bar{\Sigma}_t = A_t \Sigma_{t-1} A_t^{T} + R_t,$$
where $A_t$ and $B_t$ are the state and control transition matrices, respectively, $R_t$ is the covariance matrix of the motion noise, $u_t$ is the control vector, and the indices $t$ and $t-1$ denote the current and previous timestamps, respectively.
In the correction step, the Kalman gain $K_t$, as well as the updated state mean $\mu_t$ and covariance $\Sigma_t$, are calculated as follows:
$$K_t = \bar{\Sigma}_t C_t^{T} \left( C_t \bar{\Sigma}_t C_t^{T} + Q_t \right)^{-1}$$
$$\mu_t = \bar{\mu}_t + K_t \left( z_t - C_t \bar{\mu}_t \right)$$
$$\Sigma_t = \left( I - K_t C_t \right) \bar{\Sigma}_t,$$
where $C_t$ is the measurement matrix and $Q_t$ is the covariance matrix of the measurement noise. For a full derivation of the KF, please refer to [256].
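For illustration, the prediction and correction equations above can be written in a few lines of NumPy. The sketch below tracks a hypothetical position and velocity state in 1D from noisy position measurements; the matrices and noise levels are assumptions chosen only to exercise the equations.

```python
import numpy as np

dt = 0.1
A = np.array([[1.0, dt], [0.0, 1.0]])   # state transition (position, velocity)
B = np.array([[0.0], [0.0]])            # no control input in this sketch
C = np.array([[1.0, 0.0]])              # only position is measured
R = np.diag([1e-4, 1e-3])               # motion (process) noise covariance
Q = np.array([[0.05**2]])               # measurement noise covariance

def kf_step(mu, Sigma, u, z):
    # Prediction step
    mu_bar = A @ mu + B @ u
    Sigma_bar = A @ Sigma @ A.T + R
    # Correction step
    K = Sigma_bar @ C.T @ np.linalg.inv(C @ Sigma_bar @ C.T + Q)
    mu_new = mu_bar + K @ (z - C @ mu_bar)
    Sigma_new = (np.eye(len(mu)) - K @ C) @ Sigma_bar
    return mu_new, Sigma_new

mu, Sigma = np.zeros((2, 1)), np.eye(2)
for z in [0.11, 0.19, 0.32, 0.41]:      # noisy position readings (hypothetical)
    mu, Sigma = kf_step(mu, Sigma, np.zeros((1, 1)), np.array([[z]]))
print("estimated position/velocity:", mu.ravel())
```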
The standard KF requires both the motion and measurement models to be linear. For nonlinear systems, the Extended Kalman Filter (EKF) and the Unscented Kalman Filter (UKF) can be adopted: the EKF linearizes the models through a first-order Taylor expansion about the current estimate, while the UKF propagates a set of sigma points through the nonlinear models (the unscented transform), capturing the posterior mean and covariance accurately to at least second order. The EKF has been implemented for Visual–Inertial Odometry (VIO) [257], LiDAR–Inertial Odometry (LIO) [258], and Radar–Inertial Odometry (RIO) [128]; an error-state propagation strategy is usually adopted due to its superior properties [259]. Inertial measurements serve to propagate the filter state, while measurements from the camera or LiDAR are incorporated in the filter update. Many variants of the EKF have been proposed in the literature, such as the UKF [260]; the Multi-State Constraint Kalman Filter (MSCKF) [261,262], which incorporates the poses of past frames to marginalize features out of the state space; the iterated EKF [263]; the Cubature Kalman Filter (CKF) [264]; the fuzzy-logic KF [265]; the covariance-intersection KF [266]; and the invariant EKF based on Lie group theory [267].
Although widely implemented, the KF and its variants can handle nonlinearity only to a limited degree. Monte Carlo (MC)-simulation-based methods represent the underlying state space by weighted samples and make no assumptions about the underlying probability distribution. Markov Chain Monte Carlo (MCMC) and Sequential Monte Carlo (SMC) are two families of MC methods [268]; SMC (also known as the Particle Filter) is the one more frequently seen in robot odometry. The Particle Filter (PF) uses a set of particles $x_t^{[m]}$ with index $m$ at time $t$ to approximate the true state distribution; the particles are propagated according to the state transition function, and the weight of each sample is then calculated as:
$$w_t^{[m]} \propto p\left(z_t \mid x_t^{[m]}\right)$$
The real trick of the PF is the importance sampling step, where the samples are resampled according to their weights; this step approximates the posterior distribution of the Bayes filter [269]. The Rao–Blackwellized Particle Filter (RBPF) is one of the most important implementations of the PF for odometry, introduced in [270] and refined in [271]. It is built upon the Rao–Blackwell theorem, which states that sampling $x_1$ from the distribution $p(x_1)$ and then sampling $x_2$ conditioned on $x_1$ from $p(x_2 \mid x_1)$ performs no worse than sampling from their joint distribution $p(x_1, x_2)$. Some notable recent advancements of the PF include the Particle Swarm Optimization (PSO)-aided PF [272], the entropy-weighted PF [273], Inter-Particle Map Sharing (IPMS)-RBPF [274], and a 6D translation–rotation decoupled RBPF with an autoencoder network that outputs rotation hypotheses [275]. The general trend is either to reduce the number of samples via better sampling strategies or to simplify the map representation.
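To illustrate the propagate, weight, and resample cycle described above, the following sketch implements a bootstrap particle filter for a hypothetical 1D localization problem; the motion and measurement models are assumptions made purely for demonstration.

```python
import numpy as np

rng = np.random.default_rng(1)
n_particles = 500
particles = rng.uniform(0.0, 10.0, n_particles)   # initial belief over 1D position
weights = np.full(n_particles, 1.0 / n_particles)

def step(particles, weights, u, z, motion_std=0.2, meas_std=0.5):
    # 1) Propagate each particle through the (hypothetical) motion model.
    particles = particles + u + rng.normal(0.0, motion_std, particles.size)
    # 2) Weight each particle by the measurement likelihood p(z | x^[m]).
    weights = np.exp(-0.5 * ((z - particles) / meas_std) ** 2)
    weights /= weights.sum()
    # 3) Importance resampling: draw particles in proportion to their weights.
    idx = rng.choice(particles.size, size=particles.size, p=weights)
    return particles[idx], np.full(particles.size, 1.0 / particles.size)

true_pos = 2.0
for _ in range(10):
    true_pos += 1.0                            # robot moves 1 m per step
    z = true_pos + rng.normal(0.0, 0.5)        # noisy position reading
    particles, weights = step(particles, weights, u=1.0, z=z)
print("estimate:", particles.mean(), "truth:", true_pos)
```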

3.1.2. Evidential-Reasoning-Based

Evidential-reasoning-based methods rely on the Dempster–Shafer (D–S) theory [276], which combines evidence from several sources to support a given hypothesis, and can be seen as a generalization of Bayesian theory that reduces to it when the ignorance about the hypotheses is zero. For a given problem, let $X$ be the finite set of all possible states of the system (also called the Frame of Discernment (FOD)), and let the power set $2^X$ represent all possible subsets of $X$. D–S assigns a mass $m(E)$ to each element $E$ of $2^X$, representing the proportion of the available evidence supporting the claim that the system state $x$ belongs to $E$; the mass function has the following properties ($\emptyset$ stands for the empty set):
$$m(\emptyset) = 0, \qquad \sum_{E \in 2^{X}} m(E) = 1$$
Using $m$, the probability interval $bel(E) \le P(E) \le pl(E)$ can be obtained, where the lower bound (belief) is defined as $bel(E) = \sum_{B \subseteq E} m(B)$ and the upper bound (plausibility) is defined as $pl(E) = \sum_{B \cap E \neq \emptyset} m(B)$. Evidence from two sources is combined via the D–S combination rule as follows:
$$(m_1 \oplus m_2)(E) = \frac{\sum_{A \cap B = E} m_1(A)\, m_2(B)}{1 - \sum_{A \cap B = \emptyset} m_1(A)\, m_2(B)}$$
The notion of D–S has been deployed for sensor fusion in odometry—as demonstrated in [277,278]—and map building in [279]. Its main advantage is that it requires no prior knowledge of the probability distribution, and has proven utility for systems in which one sensor reading is likely to be unreliable or unavailable during operation.
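As a small illustration of the combination rule, the sketch below combines two hypothetical mass functions defined over the frame of discernment {Free, Occupied} (e.g., two range sensors reporting on the same map cell); the mass values are invented for demonstration only.

```python
from itertools import product

# Frame of discernment: a map cell is Free ('F') or Occupied ('O').
# Masses are assigned to subsets of {F, O}; frozenset({'F','O'}) encodes ignorance.
m1 = {frozenset({'F'}): 0.6, frozenset({'O'}): 0.1, frozenset({'F', 'O'}): 0.3}
m2 = {frozenset({'F'}): 0.5, frozenset({'O'}): 0.2, frozenset({'F', 'O'}): 0.3}

def ds_combine(m1, m2):
    combined, conflict = {}, 0.0
    for (A, wa), (B, wb) in product(m1.items(), m2.items()):
        inter = A & B
        if inter:
            combined[inter] = combined.get(inter, 0.0) + wa * wb
        else:
            conflict += wa * wb              # mass falling on the empty set
    # Dempster's rule: renormalize by the non-conflicting mass 1 - K.
    return {E: w / (1.0 - conflict) for E, w in combined.items()}

m12 = ds_combine(m1, m2)
bel_free = sum(w for E, w in m12.items() if E <= frozenset({'F'}))   # belief
pl_free = sum(w for E, w in m12.items() if E & frozenset({'F'}))     # plausibility
print(m12, bel_free, pl_free)
```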

3.1.3. Random-Finite-Set-Based

Random Finite Set (RFS)-based methods treat the system state as a finite-set-valued random variable instead of a random vector, where the size (cardinality) of the set is itself a random variable. The RFS formulation is deemed an ideal extension of the Bayes filter and has been widely implemented for Multi-Target Tracking (MTT). Propagating the whole RFS Probability Density Function (PDF) is computationally intractable; thus, the first statistical moment of the RFS, the Probability Hypothesis Density (PHD), is propagated instead, giving rise to the PHD filter, much as the Kalman filter propagates only the mean and covariance [134,280]. In the PHD filter, the target state is modelled as the union ($\cup$) of different RFSs:
$$X_k = \left[ \bigcup_{\zeta \in X_{k-1}} S_{k|k-1}(\zeta) \right] \cup \Gamma_k$$
where $S_{k|k-1}(\zeta)$ represents the targets surviving from time $k-1$ and is modelled as a Bernoulli RFS: each target either survives, with probability $P_{S,k}(X_{k-1})$, to take the new value $X_k$, or dies, with probability $1 - P_{S,k}(X_{k-1})$, into the empty set $\emptyset$. $\Gamma_k$ represents targets spontaneously born at time $k$ and is modelled as a Poisson RFS with intensity (PHD) $\gamma_k$. The observation set is likewise modelled as the union of different RFSs:
$$Z_k = \left[ \bigcup_{x \in X_k} \Theta_k(x) \right] \cup K_k$$
where $\Theta_k(x)$ represents detected targets and is modelled as a Bernoulli RFS: each target is either detected, with probability $P_{D,k}(X_k)$, yielding a measurement in $Z_k$ through the observation likelihood $g_k(Z_k \mid X_k)$, or missed, with probability $1 - P_{D,k}(X_k)$, into the empty set $\emptyset$. $K_k$ represents false alarms (clutter) and is modelled as a Poisson RFS with intensity (PHD) $\kappa_k$. The prediction and update equations of the PHD filter are as follows, where $D_k$ denotes the first moment (i.e., the PHD) of the state RFS:
$$D_{k|k-1}(X_k) = \int P_{S,k}(X_{k-1})\, f_{k|k-1}(X_k \mid X_{k-1})\, D_{k-1}(X_{k-1})\, dX_{k-1} + \gamma_k(X_k)$$
$$D_k(X_k) = \left[ 1 - P_{D,k}(X_k) \right] D_{k|k-1}(X_k) + \sum_{z_i \in Z_k} \frac{P_{D,k}(X_k)\, g_k(z_i \mid X_k)\, D_{k|k-1}(X_k)}{\kappa_k(z_i) + \int P_{D,k}(\zeta)\, g_k(z_i \mid \zeta)\, D_{k|k-1}(\zeta)\, d\zeta}$$
The PHD is not a PDF and does not necessarily integrate to one; instead, integrating the PHD over a region of the state space yields the expected number of targets in that region [281,282]. The PHD filter is handy for modelling phenomena such as occlusion, missed detections, and false alarms; thus, it has been used for Radar Odometry (RO), where both missed detections and false feature associations are commonly encountered [132]. A comparison between PHD filters and state-of-the-art EKFs [267] was conducted in [283], demonstrating the robustness of the PHD filter. The Cardinalized PHD (CPHD) filter additionally propagates the distribution of the number of targets (the cardinality); this auxiliary information enables higher modelling accuracy [284]. Alternatively, multi-Bernoulli filters were implemented in [285,286].
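As an illustration of the update equation, the following sketch performs a single correction step for a PHD represented as a Gaussian mixture over a 1D state; the detection probability, clutter intensity, and noise values are hypothetical, and pruning and merging of components are omitted for brevity. Summing the component weights, the discrete analogue of integrating the PHD, gives the expected number of targets.

```python
import numpy as np

# Gaussian-mixture PHD over a 1D state: each component has a weight, mean, variance.
# The weights sum to the expected number of targets (here roughly 2).
weights = np.array([0.9, 1.1])
means = np.array([2.0, 8.0])
variances = np.array([0.5, 0.5])

p_d = 0.9          # detection probability (assumed constant)
meas_var = 0.4     # measurement noise variance (hypothetical)
clutter = 0.05     # clutter intensity kappa(z), assumed uniform

def gaussian(z, mean, var):
    return np.exp(-0.5 * (z - mean) ** 2 / var) / np.sqrt(2 * np.pi * var)

def phd_update(weights, means, variances, measurements):
    # Missed-detection terms keep the predicted components, scaled by (1 - p_d).
    new_w, new_m, new_v = [(1.0 - p_d) * weights], [means], [variances]
    for z in measurements:
        q = gaussian(z, means, variances + meas_var)     # predictive likelihoods
        w = p_d * weights * q
        w = w / (clutter + w.sum())                      # PHD update denominator
        gain = variances / (variances + meas_var)        # Kalman-style component update
        new_w.append(w)
        new_m.append(means + gain * (z - means))
        new_v.append((1.0 - gain) * variances)
    return np.concatenate(new_w), np.concatenate(new_m), np.concatenate(new_v)

w, m, v = phd_update(weights, means, variances, measurements=[2.2, 7.7])
print("expected number of targets:", w.sum())            # integral of the PHD
```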

3.2. Optimization-Based

Optimization-based (smoothing) methods estimate the full state trajectory given all observations up to the current moment; this is the so-called full-SLAM solution. An intuitive way to achieve this is the factor graph method, which builds a graph whose vertices encode robot poses and feature locations, with edges encoding the constraints between vertices arising from measurements [255,287]. This is cast as an optimization problem that minimizes $F(x)$:
$$x^{*} = \operatorname*{argmin}_{x} F(x)$$
$$F(x) = \sum_{\langle i,j \rangle \in C} e_{ij}^{T}\, \Omega_{ij}\, e_{ij}$$
where $C$ is the set of index pairs of connected nodes, $\Omega_{ij}$ is the information matrix between nodes $i$ and $j$, and $e_{ij}$ is the error function modelling the discrepancy between the expected and the measured spatial constraint. Several generic nonlinear optimization frameworks, including GTSAM, g2o, Ceres, SLAM++, and SE-Sync, provide off-the-shelf solutions for such SLAM problems [288]. g2o is a library for general graph optimization; it contains predefined node and edge types that simplify the structure of SLAM back-ends and allow them to be constructed quickly. GTSAM performs maximum a posteriori inference over factor graphs and Bayes networks, and is commonly used as a C++ library for smoothing and mapping in SLAM and computer vision. Ceres Solver is a nonlinear optimization library developed by Google and heavily used in Google's open-source LiDAR SLAM project, Cartographer; its workflow is straightforward, it offers a variety of built-in solvers for a wide range of application scenarios, and it handles common data-fitting, least-squares, and nonlinear least-squares problems. A summary of representative fusion methods is shown in Table 7.
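As a toy illustration of minimizing $F(x)$, the sketch below solves a small 1D pose graph with a hand-rolled Gauss–Newton loop in NumPy rather than with any of the libraries mentioned above; the odometry and loop-closure constraints are hypothetical.

```python
import numpy as np

# Tiny 1D pose graph: 4 poses, odometry edges between consecutive poses,
# and one loop-closure edge between pose 0 and pose 3.
# Each edge: (i, j, measured relative displacement, information weight).
edges = [(0, 1, 1.0, 100.0), (1, 2, 1.0, 100.0), (2, 3, 1.0, 100.0),
         (0, 3, 2.7, 50.0)]            # loop closure contradicting pure odometry

x = np.array([0.0, 1.1, 2.3, 3.4])     # initial guess from (drifting) odometry

for _ in range(10):                    # Gauss-Newton iterations
    H = np.zeros((4, 4))               # approximate Hessian (information matrix)
    b = np.zeros(4)                    # right-hand side (negative gradient)
    for i, j, meas, w in edges:
        e = (x[j] - x[i]) - meas       # error e_ij; Jacobian is [-1, +1]
        H[i, i] += w;  H[j, j] += w
        H[i, j] -= w;  H[j, i] -= w
        b[i] += w * e
        b[j] -= w * e
    H[0, 0] += 1e6                     # gauge fix: anchor the first pose
    x += np.linalg.solve(H, b)

print("optimized poses:", x)
```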

3.3. Discussion

High-level sensor fusion offers better scalability and is easier to modify when incorporating additional sensors, whereas fusion at the low (raw-data) level generally yields better accuracy; thus, tightly coupled fusion has become a trend in recently proposed methods. However, traditional methods require accurate models and careful calibration, so employing machine learning for sensor fusion in odometry has also become an open research topic. In [299], sequences of CNNs were used to extract features and estimate pose from a camera and a 2D LiDAR. Some learning-based methods, such as VINet and DeepVIO [300], demonstrate performance comparable to, or even better than, that of traditional methods.

4. Future Prospects

4.1. Embedded Sensors for Soft Robots’ State Estimation and Perception

Soft embedded sensors have been employed for soft robots in strain, stress, and tactile sensing [301]. However, soft sensors generally exhibit nonlinearity, hysteresis, and slow response. To overcome these issues, multisensor fusion strategies for soft sensors—such as [302,303]—have been proposed. Recent achievements have also brought soft sensors and machine learning techniques together for robot kinematic estimation [304].
The need for soft robots to explore unstructured environments is also growing [305]. For example, a Visual Odometry (VO) method was used in a soft endoscopic capsule robot for location tracking [306]. However, current implementations still mostly rely on solid-state sensors, and employing soft sensors could greatly improve the flexibility and compatibility of these systems.

4.2. Swarm Odometry

Multiple robots can perform tasks more quickly and remain robust where a single agent may fail [307]. This has been implemented in homogeneous systems, either in a centralized [308] or a decentralized [309] fashion, where multiple drones self-estimate their state from onboard sensors and communicate with the base station or with one another. It may also be deployed in heterogeneous systems, where UAVs and UGVs work together. However, the scalability of the system, the relative pose estimation between agents, the uncertainty of that relative pose estimation, and the limited communication range remain open challenges for such research.

4.3. Accelerating Processing Speed at the Hardware Level

While most current solutions rely on complex and time-consuming operations at the software level, this can be alleviated by using FPGAs (Field-Programmable Gate Arrays) or other dedicated processors [310] to integrate sensors and odometry algorithms at the hardware level, which enables odometry estimation at very high rates. This is expected to save computational resources and improve the real-time performance of the odometry systems.

5. Conclusions

In this paper, a comprehensive review of odometry methods for indoor navigation is presented, with a detailed analysis of the state estimation algorithms for various sensors, including inertial measurement sensors, visual cameras, LiDAR, and radar, as well as an investigation of the applications of polymeric materials in those sensors. The principles and implementations of sensor fusion algorithms that have been successfully deployed in indoor odometry are also reviewed. Generally, polymers introduce flexibility and compatibility to the sensors and reduce the cost of their mass production. They may also serve as embedded solutions that enable novel applications of odometry technology, such as in soft endoscopic capsules.
Although mature solutions exist, improving indoor odometry/localization accuracy remains an active research topic, and achieving sub-centimeter accuracy is vital for safe navigation. Prospective research areas within this topic include advanced sensor technology, algorithm enhancement, machine intelligence, human–robot interaction, information fusion, and overall performance improvement.

Author Contributions

Writing—original draft preparation, M.Y., X.S., A.R. and F.J.; writing—review and editing, X.D., S.Z., G.Y. and B.L.; supervision, X.S., A.R. and G.Y.; funding acquisition, X.S., Z.F. and G.Y. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the National Natural Science Foundation of China, grant number 92048201; the Major Program of the Nature Science Foundation of Zhejiang Province, grant number LD22E050007; and the Major Special Projects of the Plan “Science and Technology Innovation 2025” in Ningbo, grant number 2021Z020.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. El-Sheimy, N.; Li, Y. Indoor navigation: State of the art and future trends. Satell. Navig. 2021, 2, 7. [Google Scholar] [CrossRef]
  2. Everett, H. Sensors for Mobile Robots; AK Peters/CRC Press: New York, NY, USA, 1995. [Google Scholar]
  3. Civera, J.; Lee, S.H. RGB-D Odometry and SLAM. In RGB-D Image Analysis and Processing; Rosin, P.L., Lai, Y.-K., Shao, L., Liu, Y., Eds.; Springer International Publishing: Cham, Switzerland, 2019; pp. 117–144. [Google Scholar]
  4. Ashley, P.; Temmen, M.; Diffey, W.; Sanghadasa, M.; Bramson, M.; Lindsay, G.; Guenthner, A. Components for IFOG Based Inertial Measurement Units Using Active and Passive Polymer Materials; SPIE: San Diego, CA, USA, 2006; Volume 6314. [Google Scholar]
  5. Hellebrekers, T.; Ozutemiz, K.B.; Yin, J.; Majidi, C. Liquid Metal-Microelectronics Integration for a Sensorized Soft Robot Skin. In Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Madrid, Spain, 1–5 October 2018; pp. 5924–5929. [Google Scholar]
  6. Bresson, G.; Alsayed, Z.; Yu, L.; Glaser, S. Simultaneous Localization and Mapping: A Survey of Current Trends in Autonomous Driving. IEEE Trans. Intell. Veh. 2017, 2, 194–220. [Google Scholar] [CrossRef] [Green Version]
  7. Mohamed, S.A.S.; Haghbayan, M.H.; Westerlund, T.; Heikkonen, J.; Tenhunen, H.; Plosila, J. A Survey on Odometry for Autonomous Navigation Systems. IEEE Access 2019, 7, 97466–97486. [Google Scholar] [CrossRef]
  8. Huang, B.; Zhao, J.; Liu, J. A survey of simultaneous localization and mapping with an envision in 6g wireless networks. arXiv 2019, arXiv:1909.05214. [Google Scholar]
  9. Chen, C.; Wang, B.; Lu, C.X.; Trigoni, N.; Markham, A. A survey on deep learning for localization and mapping: Towards the age of spatial machine intelligence. arXiv 2020, arXiv:2006.12567. [Google Scholar]
  10. Fayyad, J.; Jaradat, M.A.; Gruyer, D.; Najjaran, H. Deep Learning Sensor Fusion for Autonomous Vehicle Perception and Localization: A Review. Sensors 2020, 20, 4220. [Google Scholar] [CrossRef]
  11. Yeong, D.J.; Velasco-Hernandez, G.; Barry, J.; Walsh, J. Sensor and Sensor Fusion Technology in Autonomous Vehicles: A Review. Sensors 2021, 21, 2140. [Google Scholar] [CrossRef]
  12. Taheri, H.; Xia, Z.C. SLAM; definition and evolution. Eng. Appl. Artif. Intell. 2021, 97, 104032. [Google Scholar] [CrossRef]
  13. Servières, M.; Renaudin, V.; Dupuis, A.; Antigny, N. Visual and Visual-Inertial SLAM: State of the Art, Classification, and Experimental Benchmarking. J. Sens. 2021, 2021, 2054828. [Google Scholar] [CrossRef]
  14. Tzafestas, S.G. 4—Mobile Robot Sensors. In Introduction to Mobile Robot Control; Tzafestas, S.G., Ed.; Elsevier: Amsterdam, The Netherlands, 2014; pp. 101–135. [Google Scholar]
  15. Thomas, S.; Froehly, A.; Bredendiek, C.; Herschel, R.; Pohl, N. High Resolution SAR Imaging Using a 240 GHz FMCW Radar System with Integrated On-Chip Antennas. In Proceedings of the 15th European Conference on Antennas and Propagation (EuCAP), Virtual, 22–26 March 2021; pp. 1–5. [Google Scholar]
  16. Brossard, M.; Bonnabel, S. Learning Wheel Odometry and IMU Errors for Localization. In Proceedings of the International Conference on Robotics and Automation (ICRA), Montreal, QC, Canada, 20–24 May 2019; pp. 291–297. [Google Scholar]
  17. Woodman, O.J. An Introduction to Inertial Navigation; University of Cambridge: Cambridge, UK, 2007. [Google Scholar]
  18. Siciliano, B.; Khatib, O. Springer Handbook of Robotics; Springer: Berlin/Heidelberg, Germany, 2016. [Google Scholar]
  19. Siegwart, R.; Nourbakhsh, I.R.; Scaramuzza, D. Introduction to Autonomous Mobile Robots; MIT Press: Cambridge, MA, USA, 2011. [Google Scholar]
  20. Ahmed, A.; Pandit, S.; Patkar, R.; Dixit, P.; Baghini, M.S.; Khlifi, A.; Tounsi, F.; Mezghani, B. Induced-Stress Analysis of SU-8 Polymer Based Single Mass 3-Axis Piezoresistive MEMS Accelerometer. In Proceedings of the 16th International Multi-Conference on Systems, Signals & Devices (SSD), Istanbul, Turkey, 21–24 March 2019; pp. 131–136. [Google Scholar]
  21. Wan, F.; Qian, G.; Li, R.; Tang, J.; Zhang, T. High sensitivity optical waveguide accelerometer based on Fano resonance. Appl. Opt. 2016, 55, 6644–6648. [Google Scholar] [CrossRef]
  22. Yi, J. A Piezo-Sensor-Based “Smart Tire” System for Mobile Robots and Vehicles. IEEE/ASME Trans. Mechatron. 2008, 13, 95–103. [Google Scholar] [CrossRef]
  23. Passaro, V.M.N.; Cuccovillo, A.; Vaiani, L.; De Carlo, M.; Campanella, C.E. Gyroscope Technology and Applications: A Review in the Industrial Perspective. Sensors 2017, 17, 2284. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  24. El-Sheimy, N.; Youssef, A. Inertial sensors technologies for navigation applications: State of the art and future trends. Satell. Navig. 2020, 1, 2. [Google Scholar] [CrossRef] [Green Version]
  25. Hakyoung, C.; Ojeda, L.; Borenstein, J. Accurate mobile robot dead-reckoning with a precision-calibrated fiber-optic gyroscope. IEEE Trans. Robot. Autom. 2001, 17, 80–84. [Google Scholar] [CrossRef]
  26. Qian, G.; Tang, J.; Zhang, X.-Y.; Li, R.-Z.; Lu, Y.; Zhang, T. Low-Loss Polymer-Based Ring Resonator for Resonant Integrated Optical Gyroscopes. J. Nanomater. 2014, 2014, 146510. [Google Scholar] [CrossRef]
  27. Yeh, C.N.; Tsai, J.J.; Shieh, R.J.; Tseng, F.G.; Li, C.J.; Su, Y.C. A vertically supported ring-type mems gyroscope utilizing electromagnetic actuation and sensing. In Proceedings of the IEEE International Conference on Electron Devices and Solid-State Circuits, Hong Kong, China, 8–10 December 2008; pp. 1–4. [Google Scholar]
  28. Ward, C.C.; Iagnemma, K. A Dynamic-Model-Based Wheel Slip Detector for Mobile Robots on Outdoor Terrain. IEEE Trans. Robot. 2008, 24, 821–831. [Google Scholar] [CrossRef]
  29. Yi, J.; Wang, H.; Zhang, J.; Song, D.; Jayasuriya, S.; Liu, J. Kinematic Modeling and Analysis of Skid-Steered Mobile Robots With Applications to Low-Cost Inertial-Measurement-Unit-Based Motion Estimation. IEEE Trans. Robot. 2009, 25, 1087–1097. [Google Scholar] [CrossRef] [Green Version]
  30. Bancroft, J.B. Multiple IMU integration for vehicular navigation. In Proceedings of the 22nd International Technical Meeting of The Satellite Division of the Institute of Navigation (ION GNSS 2009), Savannah, GA, USA, 22–25 September 2009; pp. 1828–1840. [Google Scholar]
  31. Wu, Y.; Niu, X.; Kuang, J. A Comparison of Three Measurement Models for the Wheel-mounted MEMS IMU-based Dead Reckoning System. arXiv 2020, arXiv:2012.10589. [Google Scholar] [CrossRef]
  32. Lupton, T.; Sukkarieh, S. Visual-Inertial-Aided Navigation for High-Dynamic Motion in Built Environments Without Initial Conditions. IEEE Trans. Robot. 2012, 28, 61–76. [Google Scholar] [CrossRef]
  33. Forster, C.; Carlone, L.; Dellaert, F.; Scaramuzza, D. On-Manifold Preintegration for Real-Time Visual—Inertial Odometry. IEEE Trans. Robot. 2017, 33, 1–21. [Google Scholar] [CrossRef] [Green Version]
  34. Brossard, M.; Barrau, A.; Chauchat, P.; Bonnabel, S. Associating Uncertainty to Extended Poses for on Lie Group IMU Preintegration With Rotating Earth. IEEE Trans. Robot. 2021, 3, 998–1015. [Google Scholar] [CrossRef]
  35. Quick Start for MTi Development Kit. Available online: https://xsenstechnologies.force.com/knowledgebase/s/article/Quick-start-for-MTi-Development-Kit-1605870241724?language=en_US (accessed on 1 March 2022).
  36. ROS (Robot Operating System) Drivers | MicroStrain. Available online: https://www.microstrain.com/software/ros (accessed on 1 March 2022).
  37. Mukherjee, A. Visualising 3D Motion of IMU Sensor. Available online: https://create.arduino.cc/projecthub/Aritro/visualising-3d-motion-of-imu-sensor-3933b0?f=1 (accessed on 1 March 2022).
  38. Borodacz, K.; Szczepański, C.; Popowski, S. Review and selection of commercially available IMU for a short time inertial navigation. Aircr. Eng. Aerosp. Technol. 2022, 94, 45–59. [Google Scholar] [CrossRef]
  39. Behroozpour, B.; Sandborn, P.A.M.; Wu, M.C.; Boser, B.E. Lidar System Architectures and Circuits. IEEE Commun. Mag. 2017, 55, 135–142. [Google Scholar] [CrossRef]
  40. Khader, M.; Cherian, S. An Introduction to Automotive LIDAR, Texas Instruments; Technical Report; Texas Instruments Incorporated: Dallas, TX, USA, 2018. [Google Scholar]
  41. Li, Y.; Ibanez-Guzman, J. Lidar for Autonomous Driving: The Principles, Challenges, and Trends for Automotive Lidar and Perception Systems. IEEE Signal Process Mag. 2020, 37, 50–61. [Google Scholar] [CrossRef]
  42. Royo, S.; Ballesta-Garcia, M. An Overview of Lidar Imaging Systems for Autonomous Vehicles. Appl. Sci. 2019, 9, 4093. [Google Scholar] [CrossRef] [Green Version]
  43. Wang, D.; Watkins, C.; Xie, H. MEMS Mirrors for LiDAR: A Review. Micromachines 2020, 11, 456. [Google Scholar] [CrossRef]
  44. Yoo, H.W.; Druml, N.; Brunner, D.; Schwarzl, C.; Thurner, T.; Hennecke, M.; Schitter, G. MEMS-based lidar for autonomous driving. Elektrotechnik Inf. 2018, 135, 408–415. [Google Scholar] [CrossRef] [Green Version]
  45. Lin, J.; Zhang, F. Loam livox: A fast, robust, high-precision LiDAR odometry and mapping package for LiDARs of small FoV. In Proceedings of the IEEE International Conference on Robotics and Automation (ICRA), Paris, France, 31 May–31 August 2020; pp. 3126–3131. [Google Scholar]
  46. Amzajerdian, F.; Pierrottet, D.; Petway, L.B.; Hines, G.D.; Roback, V.E.; Reisse, R.A. Lidar sensors for autonomous landing and hazard avoidance. In Proceedings of the AIAA Space 2013 Conference and Exposition, San Diego, CA, USA, 10–12 September 2013; p. 5312. [Google Scholar]
  47. Dietrich, A.B.; McMahon, J.W. Robust Orbit Determination with Flash Lidar Around Small Bodies. J. Guid. Control. Dyn. 2018, 41, 2163–2184. [Google Scholar] [CrossRef]
  48. Poulton, C.V.; Byrd, M.J.; Russo, P.; Timurdogan, E.; Khandaker, M.; Vermeulen, D.; Watts, M.R. Long-Range LiDAR and Free-Space Data Communication With High-Performance Optical Phased Arrays. IEEE J. Sel. Top. Quantum Electron. 2019, 25, 1–8. [Google Scholar] [CrossRef]
  49. Kim, S.-M.; Park, T.-H.; Im, C.-S.; Lee, S.-S.; Kim, T.; Oh, M.-C. Temporal response of polymer waveguide beam scanner with thermo-optic phase-modulator array. Opt. Express 2020, 28, 3768–3778. [Google Scholar] [CrossRef]
  50. Im, C.S.; Kim, S.M.; Lee, K.P.; Ju, S.H.; Hong, J.H.; Yoon, S.W.; Kim, T.; Lee, E.S.; Bhandari, B.; Zhou, C.; et al. Hybrid Integrated Silicon Nitride–Polymer Optical Phased Array For Efficient Light Detection and Ranging. J. Lightw. Technol. 2021, 39, 4402–4409. [Google Scholar] [CrossRef]
  51. Casset, F.; Poncet, P.; Desloges, B.; Santos, F.D.D.; Danel, J.S.; Fanget, S. Resonant Asymmetric Micro-Mirror Using Electro Active Polymer Actuators. In Proceedings of the IEEE SENSORS, New Delhi, India, 28–31 October 2018; pp. 1–4. [Google Scholar]
  52. Pavia, J.M.; Wolf, M.; Charbon, E. Measurement and modeling of microlenses fabricated on single-photon avalanche diode arrays for fill factor recovery. Opt. Express 2014, 22, 4202–4213. [Google Scholar] [CrossRef] [PubMed]
  53. Cheng, L.; Chen, S.; Liu, X.; Xu, H.; Wu, Y.; Li, M.; Chen, Y. Registration of Laser Scanning Point Clouds: A Review. Sensors 2018, 18, 1641. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  54. Saiti, E.; Theoharis, T. An application independent review of multimodal 3D registration methods. Comput. Graph. 2020, 91, 153–178. [Google Scholar] [CrossRef]
  55. Huang, X.; Mei, G.; Zhang, J.; Abbas, R. A comprehensive survey on point cloud registration. arXiv 2021, arXiv:2103.02690. [Google Scholar]
  56. Zhu, H.; Guo, B.; Zou, K.; Li, Y.; Yuen, K.-V.; Mihaylova, L.; Leung, H. A Review of Point Set Registration: From Pairwise Registration to Groupwise Registration. Sensors 2019, 19, 1191. [Google Scholar] [CrossRef] [Green Version]
  57. Holz, D.; Ichim, A.E.; Tombari, F.; Rusu, R.B.; Behnke, S. Registration with the Point Cloud Library: A Modular Framework for Aligning in 3-D. IEEE Robot. Autom. Mag. 2015, 22, 110–124. [Google Scholar] [CrossRef]
  58. Jonnavithula, N.; Lyu, Y.; Zhang, Z. LiDAR Odometry Methodologies for Autonomous Driving: A Survey. arXiv 2021, arXiv:2109.06120. [Google Scholar]
  59. Elhousni, M.; Huang, X. A Survey on 3D LiDAR Localization for Autonomous Vehicles. In Proceedings of the IEEE Intelligent Vehicles Symposium (IV), Las Vegas, NV, USA, 19 October–13 November 2020; pp. 1879–1884. [Google Scholar]
  60. Bosse, M.; Zlot, R. Keypoint design and evaluation for place recognition in 2D lidar maps. Rob. Auton. Syst. 2009, 57, 1211–1224. [Google Scholar] [CrossRef]
  61. Wang, D.Z.; Posner, I.; Newman, P. Model-free detection and tracking of dynamic objects with 2D lidar. Int. J. Robot. Res. 2015, 34, 1039–1063. [Google Scholar] [CrossRef]
  62. Zou, Q.; Sun, Q.; Chen, L.; Nie, B.; Li, Q. A Comparative Analysis of LiDAR SLAM-Based Indoor Navigation for Autonomous Vehicles. IEEE Trans. Intell. Transp. Syst. 2021, 1–15. [Google Scholar] [CrossRef]
  63. François, P.; Francis, C.; Roland, S. A Review of Point Cloud Registration Algorithms for Mobile Robotics. Found. Trends Robot. 2015, 4, 1–104. [Google Scholar]
  64. Münch, D.; Combès, B.; Prima, S. A Modified ICP Algorithm for Normal-Guided Surface Registration; SPIE: Bellingham, WA, USA, 2010; Volume 7623. [Google Scholar]
  65. Serafin, J.; Grisetti, G. NICP: Dense normal based point cloud registration. In Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Hamburg, Germany, 28 September–2 October 2015; pp. 742–749. [Google Scholar]
  66. Godin, G.; Rioux, M.; Baribeau, R. Three-Dimensional Registration Using Range and Intensity Information; SPIE: Boston, MA, USA, 1994; Volume 2350. [Google Scholar]
  67. Greenspan, M.; Yurick, M. Approximate k-d tree search for efficient ICP. In Proceedings of the Fourth International Conference on 3-D Digital Imaging and Modeling, Banff, AB, Canada, 6–10 October 2003; pp. 442–448. [Google Scholar]
  68. Yang, J.; Li, H.; Campbell, D.; Jia, Y. Go-ICP: A Globally Optimal Solution to 3D ICP Point-Set Registration. IEEE Trans. Pattern Anal. Mach. Intell. 2016, 38, 2241–2254. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  69. Park, S.-Y.; Subbarao, M. An accurate and fast point-to-plane registration technique. Pattern Recognit. Lett. 2003, 24, 2967–2976. [Google Scholar] [CrossRef]
  70. Segal, A.; Haehnel, D.; Thrun, S. Generalized-icp. In Proceedings of the Robotics: Science and Systems, Seattle, WA, USA, 28 June–1 July 2009; p. 435. [Google Scholar]
  71. Yokozuka, M.; Koide, K.; Oishi, S.; Banno, A. LiTAMIN2: Ultra Light LiDAR-based SLAM using Geometric Approximation applied with KL-Divergence. arXiv 2021, arXiv:2103.00784. [Google Scholar]
  72. Lu, D.L. Vision-Enhanced Lidar Odometry and Mapping; Carnegie Mellon University: Pittsburgh, PA, USA, 2016. [Google Scholar]
  73. Biber, P.; Strasser, W. The normal distributions transform: A new approach to laser scan matching. In Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2003) (Cat. No.03CH37453), Las Vegas, NV, USA, 27–31 October 2003; Volume 2743, pp. 2743–2748. [Google Scholar]
  74. Magnusson, M.; Lilienthal, A.; Duckett, T. Scan registration for autonomous mining vehicles using 3D-NDT. J. Field Rob. 2007, 24, 803–827. [Google Scholar] [CrossRef] [Green Version]
  75. Stoyanov, T.; Magnusson, M.; Andreasson, H.; Lilienthal, A.J. Fast and accurate scan registration through minimization of the distance between compact 3D NDT representations. Int. J. Robot. Res. 2012, 31, 1377–1393. [Google Scholar] [CrossRef]
  76. Magnusson, M.; Vaskevicius, N.; Stoyanov, T.; Pathak, K.; Birk, A. Beyond points: Evaluating recent 3D scan-matching algorithms. In Proceedings of the IEEE International Conference on Robotics and Automation (ICRA), Seattle, WA, USA, 26–30 May 2015; pp. 3631–3637. [Google Scholar]
  77. Wolcott, R.W.; Eustice, R.M. Robust LIDAR localization using multiresolution Gaussian mixture maps for autonomous driving. Int. J. Robot. Res. 2017, 36, 292–319. [Google Scholar] [CrossRef]
  78. Eckart, B.; Kim, K.; Kautz, J. Hgmr: Hierarchical gaussian mixtures for adaptive 3d registration. In Proceedings of the European Conference on Computer Vision (ECCV), Munich, Germany, 8–14 September 2018; pp. 705–721. [Google Scholar]
  79. Myronenko, A.; Song, X. Point Set Registration: Coherent Point Drift. IEEE Trans. Pattern Anal. Mach. Intell. 2010, 32, 2262–2275. [Google Scholar] [CrossRef] [Green Version]
  80. Ji, K.; Chen, H.; Di, H.; Gong, J.; Xiong, G.; Qi, J.; Yi, T. CPFG-SLAM:a Robust Simultaneous Localization and Mapping based on LIDAR in Off-Road Environment. In Proceedings of the IEEE Intelligent Vehicles Symposium (IV), Changshu, China, 26–30 June 2018; pp. 650–655. [Google Scholar]
  81. Fontanelli, D.; Ricciato, L.; Soatto, S. A Fast RANSAC-Based Registration Algorithm for Accurate Localization in Unknown Environments using LIDAR Measurements. In Proceedings of the IEEE International Conference on Automation Science and Engineering, Scottsdale, AZ, USA, 22–25 September 2007; pp. 597–602. [Google Scholar]
  82. Deschaud, J.-E. IMLS-SLAM: Scan-to-model matching based on 3D data. In Proceedings of the IEEE International Conference on Robotics and Automation (ICRA), Brisbane, Australia, 21–16 May 2018; pp. 2480–2485. [Google Scholar]
  83. Behley, J.; Stachniss, C. Efficient Surfel-Based SLAM using 3D Laser Range Data in Urban Environments. In Proceedings of the Robotics: Science and Systems, Pittsburgh, PA, USA, 26–30 June 2018. [Google Scholar]
  84. Quenzel, J.; Behnke, S. Real-time multi-adaptive-resolution-surfel 6D LiDAR odometry using continuous-time trajectory optimization. arXiv 2021, arXiv:2105.02010. [Google Scholar]
  85. Droeschel, D.; Schwarz, M.; Behnke, S. Continuous mapping and localization for autonomous navigation in rough terrain using a 3D laser scanner. Rob. Auton. Syst. 2017, 88, 104–115. [Google Scholar] [CrossRef]
  86. Pan, Y.; Xiao, P.; He, Y.; Shao, Z.; Li, Z. MULLS: Versatile LiDAR SLAM via Multi-metric Linear Least Square. In Proceedings of the IEEE International Conference on Robotics and Automation (ICRA), Hybrid Event, 30 May–5 June 2021; pp. 11633–11640. [Google Scholar]
  87. Kim, H.; Hilton, A. Evaluation of 3D Feature Descriptors for Multi-modal Data Registration. In Proceedings of the International Conference on 3D Vision—3DV 2013, Seattle, WA, USA, 29 June–1 July 2013; pp. 119–126. [Google Scholar]
  88. Guo, Y.; Bennamoun, M.; Sohel, F.; Lu, M.; Wan, J. 3D Object Recognition in Cluttered Scenes with Local Surface Features: A Survey. IEEE Trans. Pattern Anal. Mach. Intell. 2014, 36, 2270–2287. [Google Scholar] [CrossRef] [PubMed]
  89. Alexandre, L.A. 3D descriptors for object and category recognition: A comparative evaluation. In Proceedings of the Workshop on Color-Depth Camera Fusion in Robotics at the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Vilamoura, Portugal, 7–12 October 2012; p. 7. [Google Scholar]
  90. He, Y.; Mei, Y. An efficient registration algorithm based on spin image for LiDAR 3D point cloud models. Neurocomputing 2015, 151, 354–363. [Google Scholar] [CrossRef]
  91. Rusu, R.B.; Blodow, N.; Beetz, M. Fast Point Feature Histograms (FPFH) for 3D registration. In Proceedings of the IEEE International Conference on Robotics and Automation, Kobe, Japan, 12–17 May 2009; pp. 3212–3217. [Google Scholar]
  92. Tombari, F.; Salti, S.; Stefano, L.D. Unique shape context for 3d data description. In Proceedings of the ACM Workshop on 3D Object Retrieval, Firenze, Italy, 25 October 2010; pp. 57–62. [Google Scholar]
  93. Kim, G.; Kim, A. Scan Context: Egocentric Spatial Descriptor for Place Recognition Within 3D Point Cloud Map. In Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Madrid, Spain, 1–5 October 2018; pp. 4802–4809. [Google Scholar]
  94. Tombari, F.; Salti, S.; Di Stefano, L. Unique Signatures of Histograms for Local Surface Description; Springer: Berlin/Heidelberg, Germany, 2010; pp. 356–369. [Google Scholar]
  95. Guo, J.; Borges, P.V.K.; Park, C.; Gawel, A. Local Descriptor for Robust Place Recognition Using LiDAR Intensity. IEEE Rob. Autom. Lett. 2019, 4, 1470–1477. [Google Scholar] [CrossRef] [Green Version]
  96. He, L.; Wang, X.; Zhang, H. M2DP: A novel 3D point cloud descriptor and its application in loop closure detection. In Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Daejeon, Korea, 9–14 October 2016; pp. 231–237. [Google Scholar]
  97. Skjellaug, E.; Brekke, E.F.; Stahl, A. Feature-Based Laser Odometry for Autonomous Surface Vehicles utilizing the Point Cloud Library. In Proceedings of the IEEE 23rd International Conference on Information Fusion (FUSION), Virtual, 6–9 July 2020; pp. 1–8. [Google Scholar]
  98. Zhang, J.; Singh, S. LOAM: Lidar Odometry and Mapping in Real-time. In Proceedings of the Robotics: Science and Systems, Berkeley, CA, USA, 12–16 July 2014. [Google Scholar]
  99. Shan, T.; Englot, B. LeGO-LOAM: Lightweight and Ground-Optimized Lidar Odometry and Mapping on Variable Terrain. In Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Madrid, Spain, 1–5 October 2018; pp. 4758–4765. [Google Scholar]
  100. RoboStudio. Available online: https://www.slamtec.com/en/RoboStudio (accessed on 1 March 2022).
  101. Chen, K.; Lopez, B.T.; Agha-mohammadi, A.a.; Mehta, A. Direct LiDAR Odometry: Fast Localization With Dense Point Clouds. IEEE Rob. Autom. Lett. 2022, 7, 2000–2007. [Google Scholar] [CrossRef]
  102. Zheng, X.; Zhu, J. Efficient LiDAR Odometry for Autonomous Driving. IEEE Rob. Autom. Lett. 2021, 6, 8458–8465. [Google Scholar] [CrossRef]
  103. Li, L.; Kong, X.; Zhao, X.; Li, W.; Wen, F.; Zhang, H.; Liu, Y. SA-LOAM: Semantic-aided LiDAR SLAM with Loop Closure. In Proceedings of the IEEE International Conference on Robotics and Automation (ICRA), Virtual, 30 May–5 June 2021; pp. 7627–7634. [Google Scholar]
  104. Lu, W.; Wan, G.; Zhou, Y.; Fu, X.; Yuan, P.; Song, S. DeepVCP: An End-to-End Deep Neural Network for Point Cloud Registration. In Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), Seoul, Korea, 27 October–2 November 2019; pp. 12–21. [Google Scholar]
  105. Berlo, B.v.; Elkelany, A.; Ozcelebi, T.; Meratnia, N. Millimeter Wave Sensing: A Review of Application Pipelines and Building Blocks. IEEE Sens. J. 2021, 21, 10332–10368. [Google Scholar] [CrossRef]
  106. Waldschmidt, C.; Hasch, J.; Menzel, W. Automotive Radar—From First Efforts to Future Systems. IEEE J. Microw. 2021, 1, 135–148. [Google Scholar] [CrossRef]
  107. Jilani, S.F.; Munoz, M.O.; Abbasi, Q.H.; Alomainy, A. Millimeter-Wave Liquid Crystal Polymer Based Conformal Antenna Array for 5G Applications. IEEE Antennas Wirel. Propag. Lett. 2019, 18, 84–88. [Google Scholar] [CrossRef] [Green Version]
  108. Geiger, M.; Waldschmidt, C. 160-GHz Radar Proximity Sensor With Distributed and Flexible Antennas for Collaborative Robots. IEEE Access 2019, 7, 14977–14984. [Google Scholar] [CrossRef]
  109. Hamouda, Z.; Wojkiewicz, J.-L.; Pud, A.A.; Kone, L.; Bergheul, S.; Lasri, T. Flexible UWB organic antenna for wearable technologies application. IET Microw. Antennas Propag. 2018, 12, 160–166. [Google Scholar] [CrossRef]
  110. Händel, C.; Konttaniemi, H.; Autioniemi, M. State-of-the-Art Review on Automotive Radars and Passive Radar Reflectors: Arctic Challenge Research Project; Lapland UAS: Rovaniemi, Finland, 2018. [Google Scholar]
  111. Patole, S.M.; Torlak, M.; Wang, D.; Ali, M. Automotive radars: A review of signal processing techniques. IEEE Signal Process Mag. 2017, 34, 22–35. [Google Scholar] [CrossRef]
  112. Ramasubramanian, K.; Ginsburg, B. AWR1243 Sensor: Highly Integrated 76–81-GHz Radar Front-End for Emerging ADAS Applications. Tex. Instrum. White Pap. 2017. Available online: https://www.ti.com/lit/wp/spyy003/spyy003.pdf?ts=1652435333581&ref_url=https%253A%252F%252Fwww.google.com%252F (accessed on 1 March 2022).
  113. Rao, S. Introduction to mmWave sensing: FMCW radars. In Texas Instruments (TI) mmWave Training Series; Texas Instruments: Dallas, TX, USA, 2017; Volume SPYY003, pp. 1–11. [Google Scholar]
  114. Hakobyan, G.; Yang, B. High-Performance Automotive Radar: A Review of Signal Processing Algorithms and Modulation Schemes. IEEE Signal Process Mag. 2019, 36, 32–44. [Google Scholar] [CrossRef]
  115. Meinl, F.; Schubert, E.; Kunert, M.; Blume, H. Real-Time Data Preprocessing for High-Resolution MIMO Radar Sensors. Comput. Sci. 2017, 133, 54916304. [Google Scholar]
  116. Gao, X.; Roy, S.; Xing, G. MIMO-SAR: A Hierarchical High-Resolution Imaging Algorithm for mmWave FMCW Radar in Autonomous Driving. IEEE Trans. Veh. Technol. 2021, 70, 7322–7334. [Google Scholar] [CrossRef]
  117. Rouveure, R.; Faure, P.; Monod, M.-O. PELICAN: Panoramic millimeter-wave radar for perception in mobile robotics applications, Part 1: Principles of FMCW radar and of 2D image construction. Rob. Auton. Syst. 2016, 81, 1–16. [Google Scholar] [CrossRef] [Green Version]
  118. Rouveure, R.; Faure, P.; Monod, M.-O. Description and experimental results of a panoramic K-band radar dedicated to perception in mobile robotics applications. J. Field Rob. 2018, 35, 678–704. [Google Scholar] [CrossRef]
  119. Dickmann, J.; Klappstein, J.; Hahn, M.; Appenrodt, N.; Bloecher, H.; Werber, K.; Sailer, A. Automotive radar the key technology for autonomous driving: From detection and ranging to environmental understanding. In Proceedings of the IEEE Radar Conference (RadarConf), Philadelphia, PA, USA, 2–6 May 2016; pp. 1–6. [Google Scholar]
  120. Zhou, T.; Yang, M.; Jiang, K.; Wong, H.; Yang, D. MMW Radar-Based Technologies in Autonomous Driving: A Review. Sensors 2020, 20, 7283. [Google Scholar] [CrossRef]
  121. Cen, S. Ego-Motion Estimation and Localization with Millimeter-Wave Scanning Radar; University of Oxford: Oxford, UK, 2019. [Google Scholar]
  122. Vivet, D.; Gérossier, F.; Checchin, P.; Trassoudaine, L.; Chapuis, R. Mobile Ground-Based Radar Sensor for Localization and Mapping: An Evaluation of two Approaches. Int. J. Adv. Rob. Syst. 2013, 10, 307. [Google Scholar] [CrossRef]
  123. Checchin, P.; Gérossier, F.; Blanc, C.; Chapuis, R.; Trassoudaine, L. Radar Scan Matching SLAM Using the Fourier-Mellin Transform; Springer: Berlin/Heidelberg, Germany, 2010; pp. 151–161. [Google Scholar]
  124. Reddy, B.S.; Chatterji, B.N. An FFT-based technique for translation, rotation, and scale-invariant image registration. IEEE Trans. Image Process. 1996, 5, 1266–1271. [Google Scholar] [CrossRef] [Green Version]
  125. Park, Y.S.; Shin, Y.S.; Kim, A. PhaRaO: Direct Radar Odometry using Phase Correlation. In Proceedings of the IEEE International Conference on Robotics and Automation (ICRA), Virtual, 31 May–31 August 2020; pp. 2617–2623. [Google Scholar]
  126. Kellner, D.; Barjenbruch, M.; Klappstein, J.; Dickmann, J.; Dietmayer, K. Instantaneous ego-motion estimation using multiple Doppler radars. In Proceedings of the IEEE International Conference on Robotics and Automation (ICRA), Hong Kong, China, 31 May–7 June 2014; pp. 1592–1597. [Google Scholar]
  127. Holder, M.; Hellwig, S.; Winner, H. Real-Time Pose Graph SLAM based on Radar. In Proceedings of the IEEE Intelligent Vehicles Symposium (IV), Paris, France, 9–12 June 2019; pp. 1145–1151. [Google Scholar]
  128. Doer, C.; Trommer, G.F. An EKF Based Approach to Radar Inertial Odometry. In Proceedings of the IEEE International Conference on Multisensor Fusion and Integration for Intelligent Systems (MFI), Virtual Conference, 14–16 September 2020; pp. 152–159. [Google Scholar]
  129. Kramer, A.; Stahoviak, C.; Santamaria-Navarro, A.; Agha-mohammadi, A.a.; Heckman, C. Radar-Inertial Ego-Velocity Estimation for Visually Degraded Environments. In Proceedings of the IEEE International Conference on Robotics and Automation (ICRA), Virtual Conference, 31 May–31 August 2020; pp. 5739–5746. [Google Scholar]
  130. Monaco, C.D.; Brennan, S.N. RADARODO: Ego-Motion Estimation From Doppler and Spatial Data in RADAR Images. IEEE Trans. Intell. Veh. 2020, 5, 475–484. [Google Scholar] [CrossRef]
  131. Dissanayake, M.W.M.G.; Newman, P.; Clark, S.; Durrant-Whyte, H.F.; Csorba, M. A solution to the simultaneous localization and map building (SLAM) problem. IEEE Trans. Robot. Autom. 2001, 17, 229–241. [Google Scholar] [CrossRef] [Green Version]
  132. Adams, M.; Adams, M.D.; Mullane, J.; Jose, E. Robotic Navigation and Mapping with Radar; Artech House: Boston, MA, USA, 2012. [Google Scholar]
  133. Mullane, J.; Jose, E.; Adams, M.D.; Wijesoma, W.S. Including probabilistic target detection attributes into map representations. Rob. Auton. Syst. 2007, 55, 72–85. [Google Scholar] [CrossRef]
  134. Adams, M.; Vo, B.; Mahler, R.; Mullane, J. SLAM Gets a PHD: New Concepts in Map Estimation. IEEE Robot. Autom. Mag. 2014, 21, 26–37. [Google Scholar] [CrossRef]
  135. Mullane, J.; Vo, B.; Adams, M.D.; Vo, B. A Random-Finite-Set Approach to Bayesian SLAM. IEEE Trans. Robot. 2011, 27, 268–282. [Google Scholar] [CrossRef]
  136. Schuster, F.; Keller, C.G.; Rapp, M.; Haueis, M.; Curio, C. Landmark based radar SLAM using graph optimization. In Proceedings of the IEEE 19th International Conference on Intelligent Transportation Systems (ITSC), Rio de Janeiro, Brazil, 1–4 November 2016; pp. 2559–2564. [Google Scholar]
  137. Hong, Z.; Petillot, Y.; Wang, S. RadarSLAM: Radar based Large-Scale SLAM in All Weathers. In Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Las Vegas, NV, USA, 24 October–24 January 2021; pp. 5164–5170. [Google Scholar]
  138. Callmer, J.; Törnqvist, D.; Gustafsson, F.; Svensson, H.; Carlbom, P. Radar SLAM using visual features. EURASIP J. Adv. Signal Process. 2011, 2011, 71. [Google Scholar] [CrossRef] [Green Version]
  139. Ward, E.; Folkesson, J. Vehicle localization with low cost radar sensors. In Proceedings of the IEEE Intelligent Vehicles Symposium (IV), Gothenburg, Sweden, 19–22 June 2016; pp. 864–870. [Google Scholar]
  140. Kung, P.-C.; Wang, C.-C.; Lin, W.-C. A Normal Distribution Transform-Based Radar Odometry Designed For Scanning and Automotive Radars. arXiv 2021, arXiv:2103.07908. [Google Scholar]
  141. Rapp, M.; Barjenbruch, M.; Hahn, M.; Dickmann, J.; Dietmayer, K. Probabilistic ego-motion estimation using multiple automotive radar sensors. Rob. Auton. Syst. 2017, 89, 136–146. [Google Scholar] [CrossRef]
  142. Li, Y.; Liu, Y.; Wang, Y.; Lin, Y.; Shen, W. The Millimeter-Wave Radar SLAM Assisted by the RCS Feature of the Target and IMU. Sensors 2020, 20, 5421. [Google Scholar] [CrossRef]
  143. Cen, S.H.; Newman, P. Precise Ego-Motion Estimation with Millimeter-Wave Radar Under Diverse and Challenging Conditions. In Proceedings of the IEEE International Conference on Robotics and Automation (ICRA), Brisbane, Australia, 21–25 May 2018; pp. 6045–6052. [Google Scholar]
  144. Cen, S.H.; Newman, P. Radar-only ego-motion estimation in difficult settings via graph matching. In Proceedings of the International Conference on Robotics and Automation (ICRA), Montreal, QC, Canada, 20–24 May 2019; pp. 298–304. [Google Scholar]
  145. Vivet, D.; Checchin, P.; Chapuis, R. Localization and Mapping Using Only a Rotating FMCW Radar Sensor. Sensors 2013, 13, 4527–4552. [Google Scholar] [CrossRef]
  146. Burnett, K.; Schoellig, A.P.; Barfoot, T.D. Do We Need to Compensate for Motion Distortion and Doppler Effects in Spinning Radar Navigation? IEEE Rob. Autom. Lett. 2021, 6, 771–778. [Google Scholar] [CrossRef]
  147. Retan, K.; Loshaj, F.; Heizmann, M. Radar Odometry on SE(3) With Constant Velocity Motion Prior. IEEE Rob. Autom. Lett. 2021, 6, 6386–6393. [Google Scholar] [CrossRef]
  148. Zaffar, M.; Ehsan, S.; Stolkin, R.; Maier, K.M. Sensors, SLAM and Long-term Autonomy: A Review. In Proceedings of the NASA/ESA Conference on Adaptive Hardware and Systems (AHS), Edinburgh, UK, 6–9 August 2018; pp. 285–290. [Google Scholar]
  149. Fossum, E.R. The Invention of CMOS Image Sensors: A Camera in Every Pocket. In Proceedings of the Pan Pacific Microelectronics Symposium (Pan Pacific), Kapaau, Hawaii, 10–13 February 2020; pp. 1–6. [Google Scholar]
  150. Jansen-van Vuuren, R.D.; Armin, A.; Pandey, A.K.; Burn, P.L.; Meredith, P. Organic Photodiodes: The Future of Full Color Detection and Image Sensing. Adv. Mater. 2016, 28, 4766–4802. [Google Scholar] [CrossRef]
  151. Zhao, Z.; Xu, C.; Niu, L.; Zhang, X.; Zhang, F. Recent Progress on Broadband Organic Photodetectors and their Applications. Laser Photonics Rev. 2020, 14, 2000262. [Google Scholar] [CrossRef]
  152. Wang, Y.; Kublitski, J.; Xing, S.; Dollinger, F.; Spoltore, D.; Benduhn, J.; Leo, K. Narrowband organic photodetectors—Towards miniaturized, spectroscopic sensing. Mater. Horiz. 2022, 9, 220–251. [Google Scholar] [CrossRef]
  153. Hata, K.; Savarese, S. CS231A Course Notes 1: Camera Models. 2017. Available online: https://web.stanford.edu/class/cs231a/course_notes/01-camera-models.pdf (accessed on 1 March 2022).
  154. Peter, S.; Srikumar, R.; Simone, G.; João, B. Camera Models and Fundamental Concepts Used in Geometric Computer Vision. Found. Trends Comput. Graph. Vis. 2011, 6, 1–183. [Google Scholar]
  155. Corke, P.I.; Khatib, O. Robotics, Vision and Control: Fundamental Algorithms in MATLAB; Springer: Berlin/Heidelberg, Germany, 2011; Volume 73. [Google Scholar]
  156. Nister, D.; Naroditsky, O.; Bergen, J. Visual odometry. In Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, Washington, DC, USA, 27 June–2 July 2004; p. I. [Google Scholar]
  157. Campos, C.; Elvira, R.; Rodríguez, J.J.G.; Montiel, J.M.M.; Tardós, J.D. ORB-SLAM3: An Accurate Open-Source Library for Visual, Visual–Inertial, and Multimap SLAM. IEEE Trans. Robot. 2021, 37, 1874–1890. [Google Scholar] [CrossRef]
  158. Aqel, M.O.A.; Marhaban, M.H.; Saripan, M.I.; Ismail, N.B. Review of visual odometry: Types, approaches, challenges, and applications. SpringerPlus 2016, 5, 1897. [Google Scholar] [CrossRef] [Green Version]
  159. Poddar, S.; Kottath, R.; Karar, V. Motion Estimation Made Easy: Evolution and Trends in Visual Odometry. In Recent Advances in Computer Vision: Theories and Applications; Hassaballah, M., Hosny, K.M., Eds.; Springer International Publishing: Cham, Switzerland, 2019; pp. 305–331. [Google Scholar]
  160. Woo, A.; Fidan, B.; Melek, W.W. Localization for Autonomous Driving. In Handbook of Position Location; Wiley-IEEE Press: Hoboken, NJ, USA, 2018; pp. 1051–1087. [Google Scholar]
  161. Scaramuzza, D.; Fraundorfer, F. Visual Odometry [Tutorial]. IEEE Robot. Autom. Mag. 2011, 18, 80–92. [Google Scholar] [CrossRef]
  162. Fraundorfer, F.; Scaramuzza, D. Visual Odometry: Part II: Matching, Robustness, Optimization, and Applications. IEEE Robot. Autom. Mag. 2012, 19, 78–90. [Google Scholar] [CrossRef] [Green Version]
  163. Lim, K.L.; Bräunl, T. A Review of Visual Odometry Methods and Its Applications for Autonomous Driving. arXiv 2020, arXiv:2009.09193. [Google Scholar]
  164. Li, R.; Wang, S.; Gu, D. Ongoing Evolution of Visual SLAM from Geometry to Deep Learning: Challenges and Opportunities. Cogn. Comput. 2018, 10, 875–889. [Google Scholar] [CrossRef]
  165. Wang, K.; Ma, S.; Chen, J.; Ren, F.; Lu, J. Approaches Challenges and Applications for Deep Visual Odometry Toward to Complicated and Emerging Areas. IEEE Trans. Cogn. Dev. Syst. 2020, 14, 35–49. [Google Scholar] [CrossRef]
  166. Alkendi, Y.; Seneviratne, L.; Zweiri, Y. State of the Art in Vision-Based Localization Techniques for Autonomous Navigation Systems. IEEE Access 2021, 9, 76847–76874. [Google Scholar] [CrossRef]
  167. Rublee, E.; Rabaud, V.; Konolige, K.; Bradski, G. ORB: An efficient alternative to SIFT or SURF. In Proceedings of the International Conference on Computer Vision, Barcelona, Spain, 6–13 November 2011; pp. 2564–2571. [Google Scholar]
  168. Calonder, M.; Lepetit, V.; Strecha, C.; Fua, P. BRIEF: Binary Robust Independent Elementary Features; Springer: Berlin/Heidelberg, Germany, 2010; pp. 778–792. [Google Scholar]
  169. Lowe, D.G. Distinctive Image Features from Scale-Invariant Keypoints. Int. J. Comput. Vision 2004, 60, 91–110. [Google Scholar] [CrossRef]
  170. Bay, H.; Ess, A.; Tuytelaars, T.; Van Gool, L. Speeded-Up Robust Features (SURF). Comput. Vision Image Underst. 2008, 110, 346–359. [Google Scholar] [CrossRef]
  171. Hassaballah, M.; Abdelmgeid, A.A.; Alshazly, H.A. Image Features Detection, Description and Matching. In Image Feature Detectors and Descriptors: Foundations and Applications; Awad, A.I., Hassaballah, M., Eds.; Springer International Publishing: Cham, Switzerland, 2016; pp. 11–45. [Google Scholar]
  172. Li, Y.; Wang, S.; Tian, Q.; Ding, X. A survey of recent advances in visual feature detection. Neurocomputing 2015, 149, 736–751. [Google Scholar] [CrossRef]
  173. Pajdla, T. The Art of Solving Minimal Problems. Available online: http://cmp.felk.cvut.cz/minimal-iccv-2015/program.html (accessed on 1 March 2022).
  174. Hartley, R.I. In defense of the eight-point algorithm. IEEE Trans. Pattern Anal. Mach. Intell. 1997, 19, 580–593. [Google Scholar] [CrossRef] [Green Version]
  175. Hu, G.; Huang, S.; Zhao, L.; Alempijevic, A.; Dissanayake, G. A robust RGB-D SLAM algorithm. In Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems, Algarve, Portugal, 7–12 October 2012; pp. 1714–1719. [Google Scholar]
  176. Geiger, A.; Ziegler, J.; Stiller, C. StereoScan: Dense 3d reconstruction in real-time. In Proceedings of the IEEE Intelligent Vehicles Symposium (IV), Baden-Baden, Germany, 5–9 June 2011; pp. 963–968. [Google Scholar]
  177. Hartley, R.; Zisserman, A. Multiple View Geometry in Computer Vision, 2nd ed.; Cambridge University Press: Cambridge, UK, 2004. [Google Scholar]
  178. Philip, J. A Non-Iterative Algorithm for Determining All Essential Matrices Corresponding to Five Point Pairs. Photogramm. Rec. 1996, 15, 589–599. [Google Scholar] [CrossRef]
  179. Pizarro, O.; Eustice, R.; Singh, H. Relative Pose Estimation for Instrumented, Calibrated Imaging Platforms; DICTA: Calgary, AB, Canada, 2003; pp. 601–612. [Google Scholar]
  180. Philip, J. Critical Point Configurations of the 5-, 6-, 7-, and 8-Point Algorithms for Relative Orientation; Department of Mathematics, Royal Institute of Technology: Stockholm, Sweden, 1998. [Google Scholar]
  181. Li, H. A Simple Solution to the Six-Point Two-View Focal-Length Problem; Springer: Berlin/Heidelberg, Germany, 2006; pp. 200–213. [Google Scholar]
  182. Hartley, R.; Li, H. An Efficient Hidden Variable Approach to Minimal-Case Camera Motion Estimation. IEEE Trans. Pattern Anal. Mach. Intell. 2012, 34, 2303–2314. [Google Scholar] [CrossRef]
  183. Nister, D. An efficient solution to the five-point relative pose problem. IEEE Trans. Pattern Anal. Mach. Intell. 2004, 26, 756–770. [Google Scholar] [CrossRef] [PubMed]
  184. Hongdong, L.; Hartley, R. Five-Point Motion Estimation Made Easy. In Proceedings of the 18th International Conference on Pattern Recognition (ICPR’06), Washington, DC, USA, 20–24 August 2006; pp. 630–633. [Google Scholar]
  185. Stewénius, H.; Engels, C.; Nistér, D. Recent developments on direct relative orientation. ISPRS J. Photogramm. Remote Sens. 2006, 60, 284–294. [Google Scholar] [CrossRef]
  186. Fathian, K.; Ramirez-Paredes, J.P.; Doucette, E.A.; Curtis, J.W.; Gans, N.R. QuEst: A Quaternion-Based Approach for Camera Motion Estimation From Minimal Feature Points. IEEE Rob. Autom. Lett. 2018, 3, 857–864. [Google Scholar] [CrossRef] [Green Version]
  187. Kukelova, Z.; Bujnak, M.; Pajdla, T. Polynomial Eigenvalue Solutions to the 5-pt and 6-pt Relative Pose Problems; BMVC: Columbus, OH, USA, 2008. [Google Scholar]
  188. Rodehorst, V.; Heinrichs, M.; Hellwich, O. Evaluation of relative pose estimation methods for multi-camera setups. Int. Arch. Photogramm. Remote Sens. 2008, XXXVII, 135–140. [Google Scholar]
  189. Li, B.; Heng, L.; Lee, G.H.; Pollefeys, M. A 4-point algorithm for relative pose estimation of a calibrated camera with a known relative rotation angle. In Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems, Tokyo, Japan, 3–7 November 2013; pp. 1595–1601. [Google Scholar]
  190. Fraundorfer, F.; Tanskanen, P.; Pollefeys, M. A Minimal Case Solution to the Calibrated Relative Pose Problem for the Case of Two Known Orientation Angles; Springer: Berlin/Heidelberg, Germany, 2010; pp. 269–282. [Google Scholar]
  191. Ding, Y.; Yang, J.; Kong, H. An efficient solution to the relative pose estimation with a common direction. In Proceedings of the IEEE International Conference on Robotics and Automation (ICRA), Virtual Conference, 31 May–31 August 2020; pp. 11053–11059. [Google Scholar]
  192. Scaramuzza, D. Performance evaluation of 1-point-RANSAC visual odometry. J. Field Rob. 2011, 28, 792–811. [Google Scholar] [CrossRef]
  193. Saurer, O.; Vasseur, P.; Boutteau, R.; Demonceaux, C.; Pollefeys, M.; Fraundorfer, F. Homography Based Egomotion Estimation with a Common Direction. IEEE Trans. Pattern Anal. Mach. Intell. 2017, 39, 327–341. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  194. Choi, S.; Kim, J.-H. Fast and reliable minimal relative pose estimation under planar motion. Image Vision Comput. 2018, 69, 103–112. [Google Scholar] [CrossRef]
  195. Pan, S.; Wang, X. A Survey on Perspective-n-Point Problem. In Proceedings of the 40th Chinese Control Conference (CCC), Shanghai, China, 26–28 July 2021; pp. 2396–2401. [Google Scholar]
  196. Haralick, B.M.; Lee, C.-N.; Ottenberg, K.; Nölle, M. Review and analysis of solutions of the three point perspective pose estimation problem. Int. J. Comput. Vision 1994, 13, 331–356. [Google Scholar] [CrossRef]
  197. Zhang, C.; Hu, Z. Why is the Danger Cylinder Dangerous in the P3P Problem? Acta Autom. Sin. 2006, 32, 504. [Google Scholar]
  198. Gao, X.-S.; Hou, X.-R.; Tang, J.; Cheng, H.-F. Complete solution classification for the perspective-three-point problem. IEEE Trans. Pattern Anal. Mach. Intell. 2003, 25, 930–943. [Google Scholar] [CrossRef]
  199. Persson, M.; Nordberg, K. Lambda Twist: An Accurate Fast Robust Perspective Three Point (P3P) Solver; Springer: Cham, Switzerland, 2018; pp. 334–349. [Google Scholar]
  200. Kneip, L.; Scaramuzza, D.; Siegwart, R. A novel parametrization of the perspective-three-point problem for a direct computation of absolute camera position and orientation. In Proceedings of the CVPR, Colorado Springs, CO, USA, 20–25 June 2011; pp. 2969–2976. [Google Scholar]
  201. Ke, T.; Roumeliotis, S.I. An Efficient Algebraic Solution to the Perspective-Three-Point Problem. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Honolulu, HI, USA, 21–26 July 2017; pp. 4618–4626. [Google Scholar]
  202. Lu, C.; Hager, G.D.; Mjolsness, E. Fast and globally convergent pose estimation from video images. IEEE Trans. Pattern Anal. Mach. Intell. 2000, 22, 610–622. [Google Scholar] [CrossRef] [Green Version]
  203. Garro, V.; Crosilla, F.; Fusiello, A. Solving the PnP Problem with Anisotropic Orthogonal Procrustes Analysis. In Proceedings of the Second International Conference on 3D Imaging Modeling, Processing, Visualization & Transmission, Zurich, Switzerland, 13–15 October 2012; pp. 262–269. [Google Scholar]
  204. Terzakis, G.; Lourakis, M. A Consistently Fast and Globally Optimal Solution to the Perspective-n-Point Problem; Springer: Cham, Switzerland, 2020; pp. 478–494. [Google Scholar]
  205. Lepetit, V.; Moreno-Noguer, F.; Fua, P. EPnP: An Accurate O(n) Solution to the PnP Problem. Int. J. Comput. Vision 2008, 81, 155. [Google Scholar] [CrossRef] [Green Version]
  206. Li, S.; Xu, C.; Xie, M. A Robust O(n) Solution to the Perspective-n-Point Problem. IEEE Trans. Pattern Anal. Mach. Intell. 2012, 34, 1444–1450. [Google Scholar] [CrossRef] [PubMed]
  207. Wang, P.; Xu, G.; Cheng, Y.; Yu, Q. A simple, robust and fast method for the perspective-n-point Problem. Pattern Recognit. Lett. 2018, 108, 31–37. [Google Scholar] [CrossRef]
  208. Hesch, J.A.; Roumeliotis, S.I. A Direct Least-Squares (DLS) method for PnP. In Proceedings of the International Conference on Computer Vision, Barcelona, Spain, 6–13 November 2011; pp. 383–390. [Google Scholar]
  209. Zheng, Y.; Kuang, Y.; Sugimoto, S.; Åström, K.; Okutomi, M. Revisiting the PnP Problem: A Fast, General and Optimal Solution. In Proceedings of the IEEE International Conference on Computer Vision, Sydney, Australia, 1–8 December 2013; pp. 2344–2351. [Google Scholar]
  210. Whelan, T.; Johannsson, H.; Kaess, M.; Leonard, J.J.; McDonald, J. Robust real-time visual odometry for dense RGB-D mapping. In Proceedings of the IEEE International Conference on Robotics and Automation, Karlsruhe, Germany, 6–10 May 2013; pp. 5724–5731. [Google Scholar]
  211. Andreasson, H.; Stoyanov, T. Real time registration of RGB-D data using local visual features and 3D-NDT registration. In Proceedings of the SPME Workshop at Int. Conf. on Robotics and Automation (ICRA), St Paul, MN, USA, 14–19 May 2012. [Google Scholar]
  212. Zeisl, B.; Köser, K.; Pollefeys, M. Automatic Registration of RGB-D Scans via Salient Directions. In Proceedings of the IEEE International Conference on Computer Vision, Sydney, Australia, 1–8 December 2013; pp. 2808–2815. [Google Scholar]
  213. Fang, Z.; Scherer, S. Experimental study of odometry estimation methods using RGB-D cameras. In Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems, Chicago, IL, USA, 14–18 September 2014; pp. 680–687. [Google Scholar]
  214. Vicente, M.-G.; Marcelo, S.-C.; Victor, V.-M.; Jorge, A.-L.; Jose, G.-R.; Miguel, C.; Sergio, O.-E.; Andres, F.-G. A Survey of 3D Rigid Registration Methods for RGB-D Cameras. In Advancements in Computer Vision and Image Processing; Jose, G.-R., Ed.; IGI Global: Hershey, PA, USA, 2018; pp. 74–98. [Google Scholar]
  215. Comport, A.I.; Malis, E.; Rives, P. Real-time Quadrifocal Visual Odometry. Int. J. Robot. Res. 2010, 29, 245–266. [Google Scholar] [CrossRef] [Green Version]
  216. Engel, J.; Stückler, J.; Cremers, D. Large-scale direct SLAM with stereo cameras. In Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Hamburg, Germany, 28 September–2 October 2015; pp. 1935–1942. [Google Scholar]
  217. Wang, Z.; Li, M.; Zhou, D.; Zheng, Z. Direct Sparse Stereo Visual-Inertial Global Odometry. In Proceedings of the IEEE International Conference on Robotics and Automation (ICRA), Hybrid Event, 30 May–5 June 2021; pp. 14403–14409. [Google Scholar]
  218. Liu, W.; Mohta, K.; Loianno, G.; Daniilidis, K.; Kumar, V. Semi-dense visual-inertial odometry and mapping for computationally constrained platforms. Auton. Robot. 2021, 45, 773–787. [Google Scholar] [CrossRef]
  219. Whelan, T.; Salas-Moreno, R.F.; Glocker, B.; Davison, A.J.; Leutenegger, S. ElasticFusion: Real-time dense SLAM and light source estimation. Int. J. Robot. Res. 2016, 35, 1697–1716. [Google Scholar] [CrossRef] [Green Version]
  220. Newcombe, R.A.; Lovegrove, S.J.; Davison, A.J. DTAM: Dense tracking and mapping in real-time. In Proceedings of the International Conference on Computer Vision, Washington, DC, USA, 6–13 November 2011; pp. 2320–2327. [Google Scholar]
  221. Engel, J.; Schöps, T.; Cremers, D. LSD-SLAM: Large-Scale Direct Monocular SLAM; Springer: Cham, Switzerland, 2014; pp. 834–849. [Google Scholar]
  222. Engel, J.; Koltun, V.; Cremers, D. Direct Sparse Odometry. IEEE Trans. Pattern Anal. Mach. Intell. 2018, 40, 611–625. [Google Scholar] [CrossRef]
  223. Zubizarreta, J.; Aguinaga, I.; Montiel, J.M.M. Direct Sparse Mapping. IEEE Trans. Robot. 2020, 36, 1363–1370. [Google Scholar] [CrossRef]
  224. Jaegle, A.; Phillips, S.; Daniilidis, K. Fast, robust, continuous monocular egomotion computation. In Proceedings of the IEEE International Conference on Robotics and Automation (ICRA), Stockholm, Sweden, 16–21 May 2016; pp. 773–780. [Google Scholar]
  225. Raudies, F.; Neumann, H. A review and evaluation of methods estimating ego-motion. Comput. Vision Image Underst. 2012, 116, 606–633. [Google Scholar] [CrossRef]
  226. Forster, C.; Zhang, Z.; Gassner, M.; Werlberger, M.; Scaramuzza, D. SVO: Semidirect Visual Odometry for Monocular and Multicamera Systems. IEEE Trans. Robot. 2017, 33, 249–265. [Google Scholar] [CrossRef] [Green Version]
  227. Krombach, N.; Droeschel, D.; Behnke, S. Combining Feature-Based and Direct Methods for Semi-Dense Real-Time Stereo Visual Odometry; Springer: Cham, Switzerland, 2017; pp. 855–868. [Google Scholar]
  228. Luo, H.; Pape, C.; Reithmeier, E. Hybrid Monocular SLAM Using Double Window Optimization. IEEE Rob. Autom. Lett. 2021, 6, 4899–4906. [Google Scholar] [CrossRef]
  229. Lee, S.H.; Civera, J. Loosely-Coupled Semi-Direct Monocular SLAM. IEEE Rob. Autom. Lett. 2019, 4, 399–406. [Google Scholar] [CrossRef] [Green Version]
  230. Younes, G.; Asmar, D.; Zelek, J. A Unified Formulation for Visual Odometry. In Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), The Venetian Macao, Macau, 3–8 November 2019; pp. 6237–6244. [Google Scholar]
  231. Introduction to Intel® RealSense™ Visual SLAM and the T265 Tracking Camera. Available online: https://dev.intelrealsense.com/docs/intel-realsensetm-visual-slam-and-the-t265-tracking-camera (accessed on 1 March 2022).
  232. Davison, A.J.; Reid, I.D.; Molton, N.D.; Stasse, O. MonoSLAM: Real-Time Single Camera SLAM. IEEE Trans. Pattern Anal. Mach. Intell. 2007, 29, 1052–1067. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  233. Klein, G.; Murray, D. Parallel Tracking and Mapping for Small AR Workspaces. In Proceedings of the 6th IEEE and ACM International Symposium on Mixed and Augmented Reality, Nara, Japan, 13–16 November 2007; pp. 225–234. [Google Scholar]
  234. Pire, T.; Fischer, T.; Castro, G.; De Cristóforis, P.; Civera, J.; Jacobo Berlles, J. S-PTAM: Stereo Parallel Tracking and Mapping. Rob. Auton. Syst. 2017, 93, 27–42. [Google Scholar] [CrossRef] [Green Version]
  235. Ng, T.N.; Wong, W.S.; Chabinyc, M.L.; Sambandan, S.; Street, R.A. Flexible image sensor array with bulk heterojunction organic photodiode. Appl. Phys. Lett. 2008, 92, 213303. [Google Scholar] [CrossRef]
  236. Gallego, G.; Delbrück, T.; Orchard, G.; Bartolozzi, C.; Taba, B.; Censi, A.; Leutenegger, S.; Davison, A.J.; Conradt, J.; Daniilidis, K.; et al. Event-Based Vision: A Survey. IEEE Trans. Pattern Anal. Mach. Intell. 2022, 44, 154–180. [Google Scholar] [CrossRef]
  237. Posch, C.; Serrano-Gotarredona, T.; Linares-Barranco, B.; Delbruck, T. Retinomorphic Event-Based Vision Sensors: Bioinspired Cameras With Spiking Output. Proc. IEEE 2014, 102, 1470–1484. [Google Scholar] [CrossRef] [Green Version]
  238. Zhou, Y.; Gallego, G.; Shen, S. Event-Based Stereo Visual Odometry. IEEE Trans. Robot. 2021, 37, 1433–1450. [Google Scholar] [CrossRef]
  239. Martin, A.; Dodane, D.; Leviandier, L.; Dolfi, D.; Naughton, A.; O’Brien, P.; Spuessens, T.; Baets, R.; Lepage, G.; Verheyen, P.; et al. Photonic Integrated Circuit-Based FMCW Coherent LiDAR. J. Lightw. Technol. 2018, 36, 4640–4645. [Google Scholar] [CrossRef]
  240. Crouch, S. Advantages of 3D imaging coherent lidar for autonomous driving applications. In Proceedings of the 19th Coherent Laser Radar Conference, Okinawa, Japan, 18–21 June 2018; pp. 18–21. [Google Scholar]
  241. Zhang, X.; Kwon, K.; Henriksson, J.; Luo, J.; Wu, M.C. A large-scale microelectromechanical-systems-based silicon photonics LiDAR. Nature 2022, 603, 253–258. [Google Scholar] [CrossRef] [PubMed]
  242. Wang, S.; Clark, R.; Wen, H.; Trigoni, N. DeepVO: Towards end-to-end visual odometry with deep Recurrent Convolutional Neural Networks. In Proceedings of the IEEE International Conference on Robotics and Automation (ICRA), Singapore, 29 May–3 June 2017; pp. 2043–2050. [Google Scholar]
  243. Yang, N.; Stumberg, L.v.; Wang, R.; Cremers, D. D3VO: Deep Depth, Deep Pose and Deep Uncertainty for Monocular Visual Odometry. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Seattle, WA, USA, 13–19 June 2020; pp. 1278–1289. [Google Scholar]
  244. Brossard, M.; Bonnabel, S.; Barrau, A. Denoising IMU Gyroscopes With Deep Learning for Open-Loop Attitude Estimation. IEEE Rob. Autom. Lett. 2020, 5, 4796–4803. [Google Scholar] [CrossRef]
  245. Burnett, K.; Yoon, D.J.; Schoellig, A.P.; Barfoot, T.D. Radar odometry combining probabilistic estimation and unsupervised feature learning. arXiv 2021, arXiv:2105.14152. [Google Scholar]
  246. Li, Q.; Chen, S.; Wang, C.; Li, X.; Wen, C.; Cheng, M.; Li, J. LO-Net: Deep Real-Time Lidar Odometry. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Long Beach, CA, USA, 15–20 June 2019; pp. 8465–8474. [Google Scholar]
  247. Brossard, M.; Barrau, A.; Bonnabel, S. AI-IMU Dead-Reckoning. IEEE Trans. Intell. Veh. 2020, 5, 585–595. [Google Scholar] [CrossRef]
  248. Hu, J.-W.; Zheng, B.-Y.; Wang, C.; Zhao, C.-H.; Hou, X.-L.; Pan, Q.; Xu, Z. A survey on multi-sensor fusion based obstacle detection for intelligent ground vehicles in off-road environments. Front. Inf. Technol. Electron. Eng. 2020, 21, 675–692. [Google Scholar] [CrossRef]
  249. Van Brummelen, J.; O’Brien, M.; Gruyer, D.; Najjaran, H. Autonomous vehicle perception: The technology of today and tomorrow. Transp. Res. Part C Emerg. Technol. 2018, 89, 384–406. [Google Scholar] [CrossRef]
  250. Galar, D.; Kumar, U. Chapter 1—Sensors and Data Acquisition. In eMaintenance; Galar, D., Kumar, U., Eds.; Academic Press: Cambridge, MA, USA, 2017; pp. 1–72. [Google Scholar]
  251. Khaleghi, B.; Khamis, A.; Karray, F.O.; Razavi, S.N. Multisensor data fusion: A review of the state-of-the-art. Inf. Fusion 2013, 14, 28–44. [Google Scholar] [CrossRef]
  252. Qu, Y.; Yang, M.; Zhang, J.; Xie, W.; Qiang, B.; Chen, J. An Outline of Multi-Sensor Fusion Methods for Mobile Agents Indoor Navigation. Sensors 2021, 21, 1605. [Google Scholar] [CrossRef]
  253. Ho, T.S.; Fai, Y.C.; Ming, E.S.L. Simultaneous localization and mapping survey based on filtering techniques. In Proceedings of the 10th Asian Control Conference (ASCC), Sabah, Malaysia, 31 May–3 June 2015; pp. 1–6. [Google Scholar]
  254. Grisetti, G.; Kümmerle, R.; Stachniss, C.; Burgard, W. A Tutorial on Graph-Based SLAM. IEEE Intell. Transp. Syst. Mag. 2010, 2, 31–43. [Google Scholar] [CrossRef]
  255. Wu, X.; Xiao, B.; Wu, C.; Guo, Y.; Li, L. Factor graph based navigation and positioning for control system design: A review. Chin. J. Aeronaut. 2021, 35, 25–39. [Google Scholar] [CrossRef]
  256. Thrun, S.; Burgard, W.; Fox, D. Probabilistic Robotics (Intelligent Robotics and Autonomous Agents); MIT Press: Cambridge, MA, USA, 2005. [Google Scholar]
  257. Bloesch, M.; Omari, S.; Hutter, M.; Siegwart, R. Robust visual inertial odometry using a direct EKF-based approach. In Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Hamburg, Germany, 28 September–2 October 2015; pp. 298–304. [Google Scholar]
  258. Xu, W.; Zhang, F. FAST-LIO: A Fast, Robust LiDAR-Inertial Odometry Package by Tightly-Coupled Iterated Kalman Filter. IEEE Rob. Autom. Lett. 2021, 6, 3317–3324. [Google Scholar] [CrossRef]
  259. Huang, G. Visual-Inertial Navigation: A Concise Review. In Proceedings of the International Conference on Robotics and Automation (ICRA), Montreal, QC, Canada, 20–24 May 2019; pp. 9572–9582. [Google Scholar]
  260. Brossard, M.; Bonnabel, S.; Barrau, A. Unscented Kalman Filter on Lie Groups for Visual Inertial Odometry. In Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Madrid, Spain, 1–5 October 2018; pp. 649–655. [Google Scholar]
  261. Sun, K.; Mohta, K.; Pfrommer, B.; Watterson, M.; Liu, S.; Mulgaonkar, Y.; Taylor, C.J.; Kumar, V. Robust Stereo Visual Inertial Odometry for Fast Autonomous Flight. IEEE Rob. Autom. Lett. 2018, 3, 965–972. [Google Scholar] [CrossRef] [Green Version]
  262. Li, M.; Mourikis, A.I. High-precision, consistent EKF-based visual-inertial odometry. Int. J. Robot. Res. 2013, 32, 690–711. [Google Scholar] [CrossRef]
  263. Bloesch, M.; Burri, M.; Omari, S.; Hutter, M.; Siegwart, R. Iterated extended Kalman filter based visual-inertial odometry using direct photometric feedback. Int. J. Robot. Res. 2017, 36, 1053–1072. [Google Scholar] [CrossRef] [Green Version]
  264. Nguyen, T.; Mann, G.K.I.; Vardy, A.; Gosine, R.G. Developing Computationally Efficient Nonlinear Cubature Kalman Filtering for Visual Inertial Odometry. J. Dyn. Syst. Meas. Contr. 2019, 141, 081012. [Google Scholar] [CrossRef]
  265. Zhu, J.; Tang, Y.; Shao, X.; Xie, Y. Multisensor Fusion Using Fuzzy Inference System for a Visual-IMU-Wheel Odometry. IEEE Trans. Instrum. Meas. 2021, 70, 3051999. [Google Scholar] [CrossRef]
  266. Li, L.; Yang, M. Joint Localization Based on Split Covariance Intersection on the Lie Group. IEEE Trans. Robot. 2021, 37, 1508–1524. [Google Scholar] [CrossRef]
  267. Barrau, A.; Bonnabel, S. Invariant Kalman Filtering. Annu. Rev. Control Rob. Auton. Syst. 2018, 1, 237–257. [Google Scholar] [CrossRef]
  268. Hartig, F.; Calabrese, J.M.; Reineking, B.; Wiegand, T.; Huth, A. Statistical inference for stochastic simulation models—Theory and application. Ecol. Lett. 2011, 14, 816–827. [Google Scholar] [CrossRef]
  269. Elfring, J.; Torta, E.; van de Molengraft, R. Particle Filters: A Hands-On Tutorial. Sensors 2021, 21, 438. [Google Scholar] [CrossRef]
  270. Doucet, A.; Freitas, N.D.; Murphy, K.P.; Russell, S.J. Rao-Blackwellised Particle Filtering for Dynamic Bayesian Networks. In Proceedings of the 16th Conference on Uncertainty in Artificial Intelligence, San Francisco, CA, USA, 30 June–3 July 2000; pp. 176–183. [Google Scholar]
  271. Grisetti, G.; Stachniss, C.; Burgard, W. Improved Techniques for Grid Mapping With Rao-Blackwellized Particle Filters. IEEE Trans. Robot. 2007, 23, 34–46. [Google Scholar] [CrossRef] [Green Version]
  272. Zhang, Q.-B.; Wang, P.; Chen, Z.-H. An improved particle filter for mobile robot localization based on particle swarm optimization. Expert Syst. Appl. 2019, 135, 181–193. [Google Scholar] [CrossRef]
  273. Kim, H.; Liu, B.; Goh, C.Y.; Lee, S.; Myung, H. Robust Vehicle Localization Using Entropy-Weighted Particle Filter-based Data Fusion of Vertical and Road Intensity Information for a Large Scale Urban Area. IEEE Rob. Autom. Lett. 2017, 2, 1518–1524. [Google Scholar] [CrossRef]
  274. Jo, H.; Cho, H.M.; Jo, S.; Kim, E. Efficient Grid-Based Rao–Blackwellized Particle Filter SLAM With Interparticle Map Sharing. IEEE/ASME Trans. Mechatron. 2018, 23, 714–724. [Google Scholar] [CrossRef]
  275. Deng, X.; Mousavian, A.; Xiang, Y.; Xia, F.; Bretl, T.; Fox, D. PoseRBPF: A Rao–Blackwellized Particle Filter for 6-D Object Pose Tracking. IEEE Trans. Robot. 2021, 37, 1328–1342. [Google Scholar] [CrossRef]
  276. Zadeh, L.A. A Simple View of the Dempster-Shafer Theory of Evidence and Its Implication for the Rule of Combination. AI Mag. 1986, 7, 85. [Google Scholar] [CrossRef]
  277. Aggarwal, P.; Bhatt, D.; Devabhaktuni, V.; Bhattacharya, P. Dempster Shafer neural network algorithm for land vehicle navigation application. Inf. Sci. 2013, 253, 26–33. [Google Scholar] [CrossRef]
  278. Wang, D.; Xu, X.; Yao, Y.; Zhang, T. Virtual DVL Reconstruction Method for an Integrated Navigation System Based on DS-LSSVM Algorithm. IEEE Trans. Instrum. Meas. 2021, 70, 3063771. [Google Scholar] [CrossRef]
  279. Steyer, S.; Tanzmeister, G.; Wollherr, D. Grid-Based Environment Estimation Using Evidential Mapping and Particle Tracking. IEEE Trans. Intell. Veh. 2018, 3, 384–396. [Google Scholar] [CrossRef] [Green Version]
  280. Leung, K.Y.K.; Inostroza, F.; Adams, M. Relating Random Vector and Random Finite Set Estimation in Navigation, Mapping, and Tracking. IEEE Trans. Signal Process. 2017, 65, 4609–4623. [Google Scholar] [CrossRef]
  281. Mahler, R.P.S. Multitarget Bayes filtering via first-order multitarget moments. IEEE Trans. Aerosp. Electron. Syst. 2003, 39, 1152–1178. [Google Scholar] [CrossRef]
  282. Zhang, F.; Stähle, H.; Gaschler, A.; Buckl, C.; Knoll, A. Single camera visual odometry based on Random Finite Set Statistics. In Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems, Algarve, Portugal, 7–12 October 2012; pp. 559–566. [Google Scholar]
  283. Gao, L.; Battistelli, G.; Chisci, L. PHD-SLAM 2.0: Efficient SLAM in the Presence of Missdetections and Clutter. IEEE Trans. Robot. 2021, 37, 1834–1843. [Google Scholar] [CrossRef]
  284. Nannuru, S.; Blouin, S.; Coates, M.; Rabbat, M. Multisensor CPHD filter. IEEE Trans. Aerosp. Electron. Syst. 2016, 52, 1834–1854. [Google Scholar] [CrossRef] [Green Version]
  285. Fröhle, M.; Lindberg, C.; Granström, K.; Wymeersch, H. Multisensor Poisson Multi-Bernoulli Filter for Joint Target–Sensor State Tracking. IEEE Trans. Intell. Veh. 2019, 4, 609–621. [Google Scholar] [CrossRef] [Green Version]
  286. Li, D.; Hou, C.; Yi, D. Multi-Bernoulli smoother for multi-target tracking. Aerosp. Sci. Technol. 2016, 48, 234–245. [Google Scholar] [CrossRef]
  287. Jurić, A.; Kendeš, F.; Marković, I.; Petrović, I. A Comparison of Graph Optimization Approaches for Pose Estimation in SLAM. In Proceedings of the 44th International Convention on Information, Communication and Electronic Technology (MIPRO), Opatija, Croatia, 27 September–1 October 2021; pp. 1113–1118. [Google Scholar]
  288. Grisetti, G.; Guadagnino, T.; Aloise, I.; Colosi, M.; Della Corte, B.; Schlegel, D. Least Squares Optimization: From Theory to Practice. Robotics 2020, 9, 51. [Google Scholar] [CrossRef]
  289. Qin, C.; Ye, H.; Pranata, C.E.; Han, J.; Zhang, S.; Liu, M. LINS: A Lidar-Inertial State Estimator for Robust and Efficient Navigation. In Proceedings of the IEEE International Conference on Robotics and Automation (ICRA), Virtual, 31 May–31 August 2020; pp. 8899–8906. [Google Scholar]
  290. Zhang, J.; Singh, S. Laser–visual–inertial odometry and mapping with high robustness and low drift. J. Field Rob. 2018, 35, 1242–1264. [Google Scholar] [CrossRef]
  291. Zuo, X.; Geneva, P.; Lee, W.; Liu, Y.; Huang, G. LIC-Fusion: LiDAR-Inertial-Camera Odometry. In Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Venetian Macao, Macau, 3–8 November 2019; pp. 5848–5854. [Google Scholar]
  292. Montemerlo, M.; Thrun, S.; Koller, D.; Wegbreit, B. FastSLAM 2.0: An improved particle filtering algorithm for simultaneous localization and mapping that provably converges. In Proceedings of the 18th International Joint Conference on Artificial Intelligence, Acapulco, Mexico, 9–15 August 2003; pp. 1151–1156. [Google Scholar]
  293. Leutenegger, S.; Lynen, S.; Bosse, M.; Siegwart, R.; Furgale, P. Keyframe-based visual–inertial odometry using nonlinear optimization. Int. J. Robot. Res. 2015, 34, 314–334. [Google Scholar] [CrossRef] [Green Version]
  294. Qin, T.; Li, P.; Shen, S. VINS-Mono: A Robust and Versatile Monocular Visual-Inertial State Estimator. IEEE Trans. Robot. 2018, 34, 1004–1020. [Google Scholar] [CrossRef] [Green Version]
  295. Rosinol, A.; Abate, M.; Chang, Y.; Carlone, L. Kimera: An Open-Source Library for Real-Time Metric-Semantic Localization and Mapping. In Proceedings of the IEEE International Conference on Robotics and Automation (ICRA), Virtual Conference, 31 May–31 August 2020; pp. 1689–1696. [Google Scholar]
  296. Ye, H.; Chen, Y.; Liu, M. Tightly Coupled 3D Lidar Inertial Odometry and Mapping. In Proceedings of the International Conference on Robotics and Automation (ICRA), Montreal, QC, Canada, 20–24 May 2019; pp. 3144–3150. [Google Scholar]
  297. Shan, T.; Englot, B.; Meyers, D.; Wang, W.; Ratti, C.; Rus, D. LIO-SAM: Tightly-coupled Lidar Inertial Odometry via Smoothing and Mapping. In Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Las Vegas, NV, USA, 24 October 2020–24 January 2021; pp. 5135–5142. [Google Scholar]
  298. Shan, T.; Englot, B.; Ratti, C.; Rus, D. LVI-SAM: Tightly-coupled Lidar-Visual-Inertial Odometry via Smoothing and Mapping. In Proceedings of the IEEE International Conference on Robotics and Automation (ICRA), Hybrid Event, 30 May–5 June 2021; pp. 5692–5698. [Google Scholar]
  299. Valente, M.; Joly, C.; Fortelle, A.D.L. Deep Sensor Fusion for Real-Time Odometry Estimation. In Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Venetian Macao, Macau, 3–8 November 2019; pp. 6679–6685. [Google Scholar]
  300. Han, L.; Lin, Y.; Du, G.; Lian, S. DeepVIO: Self-supervised Deep Learning of Monocular Visual Inertial Odometry using 3D Geometric Constraints. In Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Venetian Macao, Macau, 3–8 November 2019; pp. 6906–6913. [Google Scholar]
  301. Wang, H.; Totaro, M.; Beccai, L. Toward Perceptive Soft Robots: Progress and Challenges. Adv. Sci. 2018, 5, 1800541. [Google Scholar] [CrossRef]
  302. Thuruthel, T.G.; Hughes, J.; Georgopoulou, A.; Clemens, F.; Iida, F. Using Redundant and Disjoint Time-Variant Soft Robotic Sensors for Accurate Static State Estimation. IEEE Rob. Autom. Lett. 2021, 6, 2099–2105. [Google Scholar] [CrossRef]
  303. Kim, D.; Park, M.; Park, Y.L. Probabilistic Modeling and Bayesian Filtering for Improved State Estimation for Soft Robots. IEEE Trans. Robot. 2021, 37, 1728–1741. [Google Scholar] [CrossRef]
  304. Thuruthel, T.G.; Shih, B.; Laschi, C.; Tolley, M.T. Soft robot perception using embedded soft sensors and recurrent neural networks. Sci. Rob. 2019, 4, eaav1488. [Google Scholar] [CrossRef] [PubMed]
  305. Drotman, D.; Jadhav, S.; Karimi, M.; Zonia, P.D.; Tolley, M.T. 3D printed soft actuators for a legged robot capable of navigating unstructured terrain. In Proceedings of the IEEE International Conference on Robotics and Automation (ICRA), Singapore, 29 May–3 June 2017; pp. 5532–5538. [Google Scholar]
  306. Turan, M.; Almalioglu, Y.; Araujo, H.; Konukoglu, E.; Sitti, M. Deep EndoVO: A recurrent convolutional neural network (RCNN) based visual odometry approach for endoscopic capsule robots. Neurocomputing 2018, 275, 1861–1870. [Google Scholar] [CrossRef]
  307. Saeedi, S.; Trentini, M.; Seto, M.; Li, H. Multiple-Robot Simultaneous Localization and Mapping: A Review. J. Field Rob. 2016, 33, 3–46. [Google Scholar] [CrossRef]
  308. Weinstein, A.; Cho, A.; Loianno, G.; Kumar, V. Visual Inertial Odometry Swarm: An Autonomous Swarm of Vision-Based Quadrotors. IEEE Rob. Autom. Lett. 2018, 3, 1801–1807. [Google Scholar] [CrossRef]
  309. Xu, H.; Zhang, Y.; Zhou, B.; Wang, L.; Yao, X.; Meng, G.; Shen, S. Omni-swarm: A Decentralized Omnidirectional Visual-Inertial-UWB State Estimation System for Aerial Swarm. arXiv 2021, arXiv:2103.04131. [Google Scholar]
  310. Bose, L.; Chen, J.; Carey, S.J.; Dudek, P.; Mayol-Cuevas, W. Visual Odometry for Pixel Processor Arrays. In Proceedings of the IEEE International Conference on Computer Vision (ICCV), Venice, Italy, 22–29 October 2017; pp. 4614–4622. [Google Scholar]
Figure 1. Schematic of the working principle of an IMU, redrawn from [18].
Figure 3. (a) Polymeric thermo-optic phase modulator for OPA LiDAR, reprinted from [49], Optica Publishing Group, 2020; (b) P(VDF-TrFE) copolymer piezoelectric actuator for MEMS LiDAR, reprinted with permission from [51]. Copyright IEEE 2018.
Figure 4. (a) HDPE as a dielectric waveguide for distributed radar antennas, reprinted with permission from [108]. Copyright IEEE 2019. (b) PANI/MWCNT fabricated antenna on a Kapton substrate, demonstrating good flexibility; reprinted with permission from [109]. Copyright John Wiley and Sons 2018.
Figure 5. 2D FFT of the beat frequency signal, redrawn from [115]. Copyright River Publishers 2017.
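Figure 5 corresponds to standard range–Doppler processing in an FMCW radar: the sampled beat signal is arranged as fast-time samples per chirp against chirp index, and a 2D FFT maps each target to a range bin (fast-time axis) and a Doppler bin (slow-time axis). The NumPy sketch below reproduces this idea for one simulated target; all radar parameters and the target's range and velocity are illustrative values, not taken from [115] or any other cited system.

```python
import numpy as np

# Illustrative FMCW parameters (placeholders, not from any cited radar).
c = 3e8              # speed of light, m/s
fc = 77e9            # carrier frequency, Hz
B = 300e6            # sweep bandwidth, Hz
Tc = 50e-6           # chirp duration, s
n_samples = 256      # fast-time samples per chirp
n_chirps = 128       # chirps per frame (slow time)
slope = B / Tc

# Hypothetical single target.
r0, v0 = 12.0, 2.5   # range (m), radial velocity (m/s)

t_fast = np.arange(n_samples) * (Tc / n_samples)   # time within one chirp
t_slow = np.arange(n_chirps) * Tc                  # chirp start times

# Beat signal model: range term along fast time, Doppler term along slow time.
f_beat = 2 * slope * r0 / c
f_dopp = 2 * v0 * fc / c
beat = np.exp(1j * 2 * np.pi * (f_beat * t_fast[None, :] +
                                f_dopp * t_slow[:, None]))

# 2D FFT: fast-time axis -> range bins, slow-time axis -> Doppler bins.
rd_map = np.fft.fftshift(np.fft.fft2(beat), axes=0)
dopp_bin, range_bin = np.unravel_index(np.argmax(np.abs(rd_map)), rd_map.shape)

range_res = c / (2 * B)
vel_res = c / (2 * fc * n_chirps * Tc)
print("estimated range ~", range_bin * range_res, "m")
print("estimated speed ~", (dopp_bin - n_chirps // 2) * vel_res, "m/s")
```

With the parameters above, the peak of the range–Doppler map lands at roughly 12 m and 2.4 m/s, illustrating how the two FFT axes separate range and radial velocity.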
Table 1. Summary of recent reviews on sensors and sensor fusion for SLAM and odometry.

Reference | Remarks
Bresson et al. [6], 2017 | SLAM in autonomous driving
Mohamed et al. [7], 2019 | Odometry systems
Huang et al. [8], 2020 | Representative LiDAR and visual SLAM systems dictionary
Chen et al. [9], 2020 | Deep learning for localization and mapping
Fayyad et al. [10], 2020 | Deep learning for localization and mapping
Yeong et al. [11], 2021 | Sensor and sensor calibration methods for robot perception
El-Sheimy et al. [1], 2021 | Overview of indoor navigation
Taheri et al. [12], 2021 | Chronicles of SLAM from 1986–2019
Servières et al. [13], 2021 | Visual and visual-inertial odometry and SLAM
Table 3. Summary of representative radar odometry (RO).

Category | Method | Automotive Radar (A)/Spinning Radar (S) | Radar Signal Representation | Loop-Closure
Direct methods | Fourier–Mellin transform [123] | S | Radar image | Yes
Direct methods | Doppler-effect-based [126] | A | Point cloud | No
Indirect methods | Descriptor [137] | S | Radar image | Yes
Indirect methods | ICP [139] | A | Point cloud | No
Indirect methods | NDT [140] | Both | Point cloud | No
Indirect methods | GMM [141] | A | Point cloud | No
Indirect methods | Graph-matching [144] | S | Radar image | No
Indirect methods | Distortion resolver [145] | S | Radar image | No
Hybrid methods | RADARODO [130] | A | Radar image | No
Hybrid methods | SE(3) RO [147] | A | Point cloud | No
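Several indirect methods in Table 3 recover ego-motion by registering consecutive radar point clouds, with ICP [139] as the canonical scan-matching step. The sketch below is a minimal 2D point-to-point ICP (nearest-neighbour association plus closed-form SVD alignment) run on synthetic scans; it illustrates the registration principle only and is not the cited implementation.

```python
import numpy as np

def best_rigid_transform(src, dst):
    """Closed-form 2D rigid alignment (Kabsch/SVD) of matched point pairs."""
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    H = (src - mu_s).T @ (dst - mu_d)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:          # guard against reflections
        Vt[-1, :] *= -1
        R = Vt.T @ U.T
    t = mu_d - R @ mu_s
    return R, t

def icp(source, target, iters=30):
    """Minimal point-to-point ICP: nearest neighbours + SVD alignment."""
    R_total, t_total = np.eye(2), np.zeros(2)
    src = source.copy()
    for _ in range(iters):
        # brute-force nearest-neighbour association
        d2 = ((src[:, None, :] - target[None, :, :]) ** 2).sum(-1)
        matched = target[d2.argmin(axis=1)]
        R, t = best_rigid_transform(src, matched)
        src = src @ R.T + t
        R_total, t_total = R @ R_total, R @ t_total + t
    return R_total, t_total

# Synthetic "radar scan" and a rotated/translated copy (illustrative only).
rng = np.random.default_rng(0)
scan_prev = rng.uniform(-10, 10, size=(200, 2))
theta = np.deg2rad(5.0)
R_true = np.array([[np.cos(theta), -np.sin(theta)],
                   [np.sin(theta),  np.cos(theta)]])
scan_curr = scan_prev @ R_true.T + np.array([0.4, -0.2])

R_est, t_est = icp(scan_prev, scan_curr)
print("estimated yaw (deg):", np.rad2deg(np.arctan2(R_est[1, 0], R_est[0, 0])))
print("estimated translation:", t_est)
```

Real radar scans are far noisier and sparser than this toy example, which is why the indirect methods in Table 3 add descriptors, distribution models (NDT, GMM), or distortion compensation on top of the basic registration.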
Table 4. Summary of representative visual odometry (VO).

Category | Implementation | Camera Type | Loop-Closure | Remark
Feature-based | MonoSLAM [232] | Mono | No | —
Feature-based | PTAM [233] | Mono | No | 5-point initiation
Feature-based | S-PTAM [234] | Stereo | Yes | —
Feature-based | ORB-SLAM3 [157] | Mono/Stereo | Yes | PnP re-localization
Appearance-based | DTAM [220] | Mono | No | Dense
Appearance-based | LSD-SLAM [221] | Mono | Yes | Semi-dense
Appearance-based | DSO [222] | Mono | No | Sparse
Hybrid | SVO [226] | Mono | No | —
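The feature-based systems in Table 4 share a common two-view front end: detect and match features, estimate the essential matrix with a RANSAC-wrapped five-point solver (as in PTAM's initialization), and decompose it into a relative rotation and a scale-ambiguous translation. The OpenCV sketch below illustrates this step under stated assumptions: the frame file names and camera intrinsics are placeholders, and the routine handles only the minimal, well-conditioned case.

```python
import cv2
import numpy as np

def two_view_pose(img1, img2, K):
    """Relative pose of frame 2 w.r.t. frame 1 from ORB feature matches.
    Translation is recovered only up to scale (monocular ambiguity)."""
    orb = cv2.ORB_create(2000)
    kp1, des1 = orb.detectAndCompute(img1, None)
    kp2, des2 = orb.detectAndCompute(img2, None)

    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)

    pts1 = np.float32([kp1[m.queryIdx].pt for m in matches])
    pts2 = np.float32([kp2[m.trainIdx].pt for m in matches])

    # Five-point algorithm inside a RANSAC loop.
    E, mask = cv2.findEssentialMat(pts1, pts2, K,
                                   method=cv2.RANSAC,
                                   prob=0.999, threshold=1.0)
    # Cheirality check selects the physically valid (R, t) decomposition.
    _, R, t, mask = cv2.recoverPose(E, pts1, pts2, K, mask=mask)
    return R, t

# Example usage (paths and intrinsics are placeholders):
# img1 = cv2.imread("frame_000.png", cv2.IMREAD_GRAYSCALE)
# img2 = cv2.imread("frame_001.png", cv2.IMREAD_GRAYSCALE)
# K = np.array([[700.0, 0, 320.0], [0, 700.0, 240.0], [0, 0, 1]])
# R, t = two_view_pose(img1, img2, K)
```

Once a local map exists, the "PnP re-localization" noted for ORB-SLAM3 replaces this two-view step with a Perspective-n-Point solve against known 3D landmarks, which also removes the scale ambiguity.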
Table 5. Summary of representative applications of polymers in sensors for odometry.

Sensor | Material | Major Role(s) | Merit(s) Compared with Non-Polymeric Counterparts
Accelerometer | SU-8 [20] | Proof mass and flexure | Low Young's modulus and high sensitivity
Accelerometer | Not reported [21] | Optical waveguide | High sensitivity
Gyroscope | PDMS [27] | Proof mass | Reduced driving force
Gyroscope | Not reported [26] | Optical waveguide | Low cost
LiDAR | Acrylate polymer [49] | Phase modulator and waveguide | High thermo-optic coefficients and low thermal conductivity
LiDAR | P(VDF-TrFE) [51] | Actuator | Low cost
Radar | LCP [109] | Substrate | Low dielectric loss
Radar | PANI [108] | Antenna | Flexibility and conformality
Radar | HDPE [107] | Waveguide | Flexibility
Camera | MEHPPV:PCBM [235] | Photodetector | Wavelength tunability
Table 6. Comparison of onboard sensors for indoor odometry.

Sensor | Best Reported Translation Error | Best Reported Rotation Error (deg/m) | Cost | Advantages | Disadvantages
IMU | 0.97% ¹ | 0.0023 ¹ | Low–high | Self-contained | Drift
LiDAR | 0.55% ² | 0.0013 ² | Medium–high | High accuracy; dense point cloud | Large volume; sensitive to weather
Radar | 1.76% ³ | 0.005 ³ | Medium–high | Weatherproof; radial velocity measurement | Sparse point cloud; high noise level
Camera | 0.53% ² | 0.0009 ² | Low–medium | Rich color information; compact | Sensitive to illumination; ambiguity in scale (monocular camera)
¹ Adopted from [247]. ² Adopted from the KITTI odometry benchmark. ³ Adopted from [245].
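The translation (%) and rotation (deg/m) figures in Table 6 appear to follow KITTI-style relative error metrics: the pose error accumulated over fixed-length subsequences of the trajectory, normalized by the subsequence length. The sketch below approximates that computation for a single 100 m subsequence length; the official benchmark averages over several lengths, so the routine is indicative only and the function name and interface are illustrative.

```python
import numpy as np

def relative_errors(gt_poses, est_poses, seg_len_m=100.0):
    """Approximate KITTI-style drift metrics.
    gt_poses / est_poses: lists of 4x4 homogeneous poses in a common frame."""
    # Cumulative distance travelled along the ground-truth trajectory.
    dist = [0.0]
    for i in range(1, len(gt_poses)):
        step = gt_poses[i][:3, 3] - gt_poses[i - 1][:3, 3]
        dist.append(dist[-1] + np.linalg.norm(step))

    t_errs, r_errs = [], []
    for i in range(len(gt_poses)):
        # First frame that lies at least seg_len_m further along the path.
        j = next((k for k in range(i, len(gt_poses))
                  if dist[k] - dist[i] >= seg_len_m), None)
        if j is None:
            break
        gt_rel = np.linalg.inv(gt_poses[i]) @ gt_poses[j]
        est_rel = np.linalg.inv(est_poses[i]) @ est_poses[j]
        err = np.linalg.inv(gt_rel) @ est_rel

        t_errs.append(np.linalg.norm(err[:3, 3]) / seg_len_m)       # fraction of distance
        angle = np.arccos(np.clip((np.trace(err[:3, :3]) - 1) / 2, -1, 1))
        r_errs.append(np.degrees(angle) / seg_len_m)                # deg per metre

    return 100.0 * np.mean(t_errs), np.mean(r_errs)  # (%, deg/m)
```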
Table 7. Summary of representative multisensor fusion odometry methods.

Method | Implementation and Year | Loosely (L)/Tightly (T) Coupled | Sensor Suite ¹ | Loop-Closure
Filter-based, probability theory (Kalman filter) | MSCKF [262], 2013 | T | V-I | No
Filter-based, probability theory (Kalman filter) | ROVIO [257], 2015 | T | V-I | No
Filter-based, probability theory (Kalman filter) | LINS [289], 2020 | T | L-I | No
Filter-based, probability theory (Kalman filter) | FAST-LIO [258], 2020 | T | L-I | No
Filter-based, probability theory (Kalman filter) | EKF RIO [128], 2020 | T | R-I | No
Filter-based, probability theory (Kalman filter) | LVI-Odometry [290], 2018 | L | V-L-I | No
Filter-based, probability theory (Kalman filter) | LIC-Fusion [291], 2019 | T | V-L-I | No
Filter-based, probability theory (particle filter) | FastSLAM [292], 2002 | — | L-O | No
Filter-based, evidential reasoning (D–S combination) | [277], 2013 | L | GPS-I | No
Filter-based, random finite set (PHD filter) | PHD-SLAM 2.0 [283], 2021 | T | L-O | No
Optimization-based | OKVIS [293], 2014 | T | V-I | No
Optimization-based | VINS-Mono [294], 2018 | T | V-I | Yes
Optimization-based | Kimera [295], 2020 | T | V-I | Yes
Optimization-based | LIO-mapping [296], 2019 | T | L-I | No
Optimization-based | LIO-SAM [297], 2020 | T | L-I | Yes
Optimization-based | LVI-SAM [298], 2021 | T | V-L-I | Yes
¹ V: vision, L: LiDAR, R: radar, I: IMU, O: wheel odometer.
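As a concrete illustration of the loosely coupled, filter-based entries in Table 7, the sketch below fuses high-rate IMU accelerations (prediction) with low-rate position fixes from another odometry source (update) in a textbook linear Kalman filter. It is a generic toy example, not a reimplementation of any system cited in the table; the constant-velocity state model, the noise covariances, and the sample inputs are all illustrative.

```python
import numpy as np

class LooselyCoupledKF:
    """Textbook Kalman filter fusing IMU accelerations (prediction) with
    position fixes from another odometry source (update).
    State: [x, y, vx, vy]. All noise parameters are illustrative."""

    def __init__(self, dt=0.01):
        self.dt = dt
        self.x = np.zeros(4)
        self.P = np.eye(4)
        self.F = np.eye(4)
        self.F[0, 2] = self.F[1, 3] = dt                  # constant-velocity model
        self.B = np.array([[0.5 * dt**2, 0], [0, 0.5 * dt**2],
                           [dt, 0], [0, dt]])             # acceleration input
        self.Q = 1e-3 * np.eye(4)                         # process noise
        self.H = np.array([[1, 0, 0, 0], [0, 1, 0, 0]])   # position measurement
        self.R = 1e-2 * np.eye(2)                         # measurement noise

    def predict(self, accel_xy):
        self.x = self.F @ self.x + self.B @ np.asarray(accel_xy)
        self.P = self.F @ self.P @ self.F.T + self.Q

    def update(self, pos_xy):
        y = np.asarray(pos_xy) - self.H @ self.x          # innovation
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)
        self.x = self.x + K @ y
        self.P = (np.eye(4) - K @ self.H) @ self.P

# Usage: high-rate IMU prediction, low-rate position update from, e.g., VO.
kf = LooselyCoupledKF(dt=0.01)
for step in range(100):
    kf.predict(accel_xy=[0.1, 0.0])           # IMU sample (placeholder values)
    if step % 10 == 0:
        kf.update(pos_xy=[0.01 * step, 0.0])  # odometry fix (placeholder values)
print("fused state [x, y, vx, vy]:", kf.x)
```

Tightly coupled systems such as MSCKF or FAST-LIO instead feed raw feature or point measurements into the filter state, which improves accuracy at the cost of a considerably more involved state and measurement model.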