Article

Innovation-Superposed Simultaneous Localization and Mapping of Mobile Robots Based on Limited Augmentation

1 College of Mechanical and Electrical Engineering, Northeast Forestry University, Harbin 150040, China
2 School of Automation, Harbin University of Science and Technology, Harbin 150040, China
3 The Research Institute of Intelligent Control and Systems, Harbin Institute of Technology, Harbin 150001, China
* Author to whom correspondence should be addressed.
Electronics 2023, 12(3), 587; https://doi.org/10.3390/electronics12030587
Submission received: 27 December 2022 / Revised: 17 January 2023 / Accepted: 20 January 2023 / Published: 24 January 2023
(This article belongs to the Special Issue Design, Dynamics and Control of Robots)

Abstract

Aiming at the problem of simultaneous localization and mapping (SLAM) for mobile robots, this paper proposes a limited-augmentation innovation-superposition (LAIS) algorithm. By extending single-time innovation superposition to multi-time innovation, error accumulation during the robot's movement is reduced and the accuracy of the algorithm is improved. At the same time, when the number of feature points observed by the sensor exceeds a threshold, the sensor range is restricted, so only qualifying feature points are added to the system state vector; this reduces the computational load of the algorithm and improves its running speed. Simulation results show that, compared with other algorithms, LAIS achieves higher accuracy and higher running speed on environment maps with different numbers of landmark points.

1. Introduction

With the continuous development of mobile robot technology, research on mobile robot autonomy has become increasingly important, and the operation of mobile robots in unknown environments has attracted growing attention [1,2]. Localization and mapping are two important, closely related research directions for mobile robots in an unknown environment. Localization means that the robot can determine its position in the environment, which requires a map, while building an accurate map of the environment in turn requires precise positioning of the robot [3,4,5]. Therefore, the idea of simultaneous localization and mapping, known as SLAM technology, has received wide attention, and many methods now exist to solve the robot SLAM problem [6].
Many algorithms are used to solve SLAM problems, such as the extended Kalman filter [7,8,9,10], the particle filter [11], and RBPF SLAM [12]. The extended Kalman filter (EKF) is a popular SLAM method in robot navigation because of its mathematical rigor and clear algorithm structure [13]. The particle filter, also known as the sequential Monte Carlo method, bootstrap filter, or condensation algorithm, is an implementation of Markov localization. It does not need to solve the probability density function directly; instead, it determines the reliability of the pose through a set of random sampling points [14]. Layered RBPF SLAM is a robust SLAM framework recently proposed for indoor environments with sparse proximity sensors. To overcome the limitations of the sensors, the whole area is divided into several independent local maps [15].
The SLAM of mobile robots in an unknown environment is studied in this paper. The earliest study on mobile robot SLAM is generally attributed to Smith et al. [16], who used an extended Kalman filter. By taking into account the joint distribution of the measurements made at different times, EKF SLAM generates a more accurate estimate of the unknown variables than algorithms based on a single measurement alone [17]. However, the linearization error of the traditional EKF SLAM algorithm grows with the number of iterations, which can cause the SLAM system to diverge and lead to incorrect mapping [18]. To mitigate this problem, the AEKF SLAM algorithm was proposed based on the adaptive filtering of Sage and Husa [19]. By introducing a forgetting factor into the traditional EKF SLAM algorithm, the process noise and observation noise of the SLAM system can be corrected in real time during the robot's movement, ensuring the convergence of the algorithm. However, the forgetting factor reduces the process noise estimate and thus the accuracy of the algorithm [20].
To improve the accuracy of the algorithm, this paper introduces multi-innovation superposition into the AEKF SLAM algorithm, so that information from adjacent times can be used effectively and the filter becomes more accurate [21,22,23]. During the movement of the mobile robot, as the number of observed landmark points increases, the dimension of the system state vector keeps growing, and with it the cost of computing the covariance and Jacobian matrices. Extending single-innovation superposition to multi-time innovation further increases the computational burden and reduces the running speed of the algorithm [24]. To improve the speed of the algorithm, this paper limits the augmentation of the system state vector: when the number of feature points observed by the sensor at a given moment is large, the observation range of the sensor is constrained, and feature points outside the constrained range are not added to the system state vector. This limits the growth of the state-vector dimension at each iteration, reducing the computational load and improving the running speed of the algorithm.
The rest of this paper is organized as follows. Section 2 analyzes the motion model and the observation model of the mobile robot. Section 3 designs the limited-augmentation innovation-superposed SLAM algorithm: the single-time innovation of the robot motion process is superimposed into multi-time innovation to improve the accuracy of the algorithm, and the augmentation of the system state vector is limited to improve its running speed. Section 4 compares and analyzes the performance of the algorithms through simulation experiments. Section 5 concludes the paper.

2. Problem Formulation

The system model of the mobile robot is discussed here. When the mobile robot is in an unknown environment, it knows neither the environment map nor its own position in the map. The robot must therefore observe its surroundings as it moves, estimate its position and posture in the environment, and construct the environment map from the observed information; this is the principle of the SLAM algorithm. To realize the SLAM algorithm, it is necessary to know the control input $u_k$ and record the observation information $Z_k$ of the sensors during the robot's motion, from which the motion model and observation model of the system are obtained. The motion model describes the motion of the mobile robot from time $k-1$ to time $k$ under the action of the control input, and the observation model yields the robot's observations of the environmental landmarks.

2.1. System Motion Model

In this paper, the robot motion is controlled by giving a translation speed $v_k$ and a rotation speed $\omega_k$. Assuming that the control input of the robot is constant in unit time, that is, the given translation and rotation speeds do not change with time, under the constant input $u_k = (v_k, \omega_k)^T$ the robot completes a circular-arc motion from time $k-1$ to time $k$, as shown in Figure 1, where $A_{k-1}$ is the pose at time $k-1$, represented by $(x_{k-1}, y_{k-1}, \theta_{k-1})^T$, and $A_k$ is the pose at time $k$, represented by $(x_k, y_k, \theta_k)^T$.
The kinematic equation of the robot is Equation (1), where $w_k$ is the noise of the mobile robot system:
$$\begin{bmatrix} x_k \\ y_k \\ \theta_k \end{bmatrix} = \begin{bmatrix} x_{k-1} \\ y_{k-1} \\ \theta_{k-1} \end{bmatrix} + \begin{bmatrix} -\dfrac{v_k}{\omega_k}\sin\theta_{k-1} + \dfrac{v_k}{\omega_k}\sin(\theta_{k-1} + \omega_k \Delta t) \\ \dfrac{v_k}{\omega_k}\cos\theta_{k-1} - \dfrac{v_k}{\omega_k}\cos(\theta_{k-1} + \omega_k \Delta t) \\ \omega_k \Delta t \end{bmatrix} + w_k \tag{1}$$
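As a quick illustration, the arc-motion update of Equation (1) can be sketched in a few lines of Python (a noise-free sketch with our own naming; it assumes $\omega_k \neq 0$, since the arc model divides by the rotation speed):

```python
import numpy as np

def motion_model(pose, v, omega, dt):
    """Noise-free circular-arc motion update of Equation (1).

    pose: (x, y, theta); v: translation speed; omega: rotation speed.
    Assumes omega != 0, since the arc model divides by omega.
    """
    x, y, theta = pose
    r = v / omega  # turning radius v_k / omega_k
    x_new = x - r * np.sin(theta) + r * np.sin(theta + omega * dt)
    y_new = y + r * np.cos(theta) - r * np.cos(theta + omega * dt)
    theta_new = theta + omega * dt
    return np.array([x_new, y_new, theta_new])
```

Driving a quarter turn with $v = \omega = 1$ from the origin, for example, lands the robot at $(1, 1)$ facing $\pi/2$. For $\omega_k \to 0$ a straight-line fallback would be needed; the model above assumes a turning input.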
In constructing a SLAM point map, we need to estimate not only the robot's motion but also the landmarks in the map, so the landmark information must be added to the state equation of the system. This gives the state equation of the robot for the whole SLAM system, Equation (2), where $X_{i,k}^m$ is the coordinate of the $i$-th landmark point observed by the robot at time $k$, which does not change with the motion of the mobile robot:
$$\begin{bmatrix} x_k \\ y_k \\ \theta_k \\ X_{1,k}^m \\ \vdots \\ X_{i,k}^m \end{bmatrix} = \begin{bmatrix} x_{k-1} - \dfrac{v_k}{\omega_k}\sin\theta_{k-1} + \dfrac{v_k}{\omega_k}\sin(\theta_{k-1} + \omega_k \Delta t) \\ y_{k-1} + \dfrac{v_k}{\omega_k}\cos\theta_{k-1} - \dfrac{v_k}{\omega_k}\cos(\theta_{k-1} + \omega_k \Delta t) \\ \theta_{k-1} + \omega_k \Delta t \\ X_{1,k-1}^m \\ \vdots \\ X_{i,k-1}^m \end{bmatrix} \tag{2}$$

2.2. Systematic Observation Model

As shown in Figure 2, during the movement of the mobile robot, the position of the $i$-th landmark observed by the vision sensor at time $k$ can be expressed as $M_i = (x_{i,k}^m, y_{i,k}^m)^T$, $i = 1, 2, \dots, n$, where $n$ is the total number of landmarks.
A mobile robot uses external sensors to perceive features of the real-world environment. In this work, sonars are adopted as the external sensors of the robot, providing range and bearing measurements of the observed feature $(x_{i,k}^m, y_{i,k}^m)$ [25]. Assuming that the positions of all landmarks do not change, the observation equation of the SLAM system can be obtained as follows:
$$z_{i,k}^m = \begin{bmatrix} \sqrt{(x_{i,k}^m - x_k)^2 + (y_{i,k}^m - y_k)^2} \\ \arctan\dfrac{y_{i,k}^m - y_k}{x_{i,k}^m - x_k} - \theta_k \end{bmatrix} \tag{3}$$
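The range-bearing observation above can likewise be sketched in code (noise-free, with hypothetical function and variable names of our own):

```python
import numpy as np

def observe(pose, landmark):
    """Noise-free range-bearing observation of a single landmark,
    following the observation equation above.

    pose: robot (x, y, theta); landmark: (x_m, y_m).
    Returns [range, bearing relative to the robot heading].
    """
    x, y, theta = pose
    x_m, y_m = landmark
    rng = np.hypot(x_m - x, y_m - y)
    bearing = np.arctan2(y_m - y, x_m - x) - theta
    return np.array([rng, bearing])
```

Using `arctan2` rather than a plain arctangent keeps the bearing correct in all four quadrants, which the scalar $\arctan$ form of the equation leaves implicit.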

3. LAIS SLAM

3.1. AEKF SLAM

Based on the standard EKF SLAM algorithm, the AEKF SLAM algorithm introduces a forgetting factor to correct the process noise and observation noise in real time, which ensures the convergence of the algorithm. The specific implementation process is as follows:
Suppose that the state of the system at time $k$ can be expressed by a nonlinear state transition function $f$ and the system observation by a nonlinear observation function $h$; then the system state equation and observation equation can be expressed as:
$$X_k = f(X_{k-1}, u_k) + w_{k-1} \tag{4}$$
$$Z_k = h(X_k) + v_k \tag{5}$$
where $X_k$ is the SLAM system state vector at time $k$, $u_k$ is the system control input, and $w_{k-1}$ is the process noise from time $k-1$ to $k$, with mean $q_{k-1}$ and covariance $Q_{k-1}$. $Z_k$ is the system observation vector at time $k$, and $v_k$ is the observation noise at time $k$, with mean $r_k$ and covariance $R_k$.
(1)
The algorithm prediction step
The prior estimation equation of system state is:
$$\hat{X}_k^- = f(\hat{X}_{k-1}) + \hat{q}_k \tag{6}$$
The state prior estimated covariance equation is:
$$P_k^- = F_k P_{k-1} F_k^T + W \hat{Q}_k W^T \tag{7}$$
where $F_k$ is the Jacobian matrix of $f$ with respect to $X_k$, $W$ is the Jacobian matrix of $f(\cdot)$ with respect to $w_k$, $\hat{q}_k$ is the estimated mean of the system process noise, and $\hat{Q}_k$ is the covariance matrix of the estimated process noise at time $k$.
(2)
The algorithm update step
The innovation vector of the system at time k is:
$$\tilde{Z}_k = Z_k - H_k \hat{X}_k^- - \hat{r}_k \tag{8}$$
where $H_k$ is the Jacobian matrix of $h(\cdot)$ with respect to $X_k$, and $\hat{r}_k$ is the estimated mean of the system observation noise.
The Kalman gain matrix is:
$$K_k = P_k^- H_k^T \left( H_k P_k^- H_k^T + \hat{R}_k \right)^{-1} \tag{9}$$
The posterior estimation equation of system state is:
$$\hat{X}_k = \hat{X}_k^- + K_k \tilde{Z}_k \tag{10}$$
The posterior estimation covariance equation is:
$$P_k = (I - K_k H_k) P_k^- \tag{11}$$
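The prediction and update steps above form the standard EKF cycle. As a compact sketch, here is the cycle for a linear toy system with our own names (the paper's $f$ and $h$ are the nonlinear SLAM models, linearized through $F_k$ and $H_k$):

```python
import numpy as np

def ekf_step(x, P, z, F, H, Q, R):
    """One EKF predict/update cycle following the equations above,
    using a linear motion model f(x) = F @ x and zero noise means
    for brevity."""
    # prediction step: prior state and prior covariance
    x_prior = F @ x
    P_prior = F @ P @ F.T + Q
    # update step: innovation, gain, posterior state and covariance
    innovation = z - H @ x_prior
    S = H @ P_prior @ H.T + R
    K = P_prior @ H.T @ np.linalg.inv(S)
    x_post = x_prior + K @ innovation
    P_post = (np.eye(len(x)) - K @ H) @ P_prior
    return x_post, P_post
```

In the scalar case with unit prior covariance and unit measurement noise, a single step halves the uncertainty, illustrating how each measurement tightens the estimate.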
In the AEKF SLAM algorithm, a forgetting factor is introduced into the standard EKF SLAM algorithm to correct system errors in real time. The correction process is as follows:
$$\hat{r}_k = (1 - d_k)\hat{r}_{k-1} + d_k \left( Z_k - H_k \hat{X}_k^- \right) \tag{12}$$
where $\hat{r}_k$ is the estimated observation noise mean of the system at time $k$.
$$\hat{R}_k = (1 - d_k)\hat{R}_{k-1} + d_k \left( \tilde{Z}_k \tilde{Z}_k^T - H_k P_k^- H_k^T \right) \tag{13}$$
where $\hat{R}_k$ is the estimated covariance matrix of the system observation noise at time $k$.
$$\hat{q}_k = (1 - d_k)\hat{q}_{k-1} + d_k \left( \hat{X}_k - F_k \hat{X}_{k-1} \right) \tag{14}$$
where $\hat{q}_k$ is the estimated process noise mean of the system at time $k$.
$$\hat{Q}_k = (1 - d_k)\hat{Q}_{k-1} + d_k \left( K_k \tilde{Z}_k \tilde{Z}_k^T K_k^T + P_k - F_k P_{k-1} F_k^T \right) \tag{15}$$
where $\hat{Q}_k$ is the estimated covariance matrix of the system process noise at time $k$.
$$d_k = \frac{1 - b}{1 - b^{k+1}}, \quad 0 < b < 1 \tag{16}$$
where $b$ is the forgetting factor, taken as 0.996 here, used to correct the process noise and observation noise of the system in real time.
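The forgetting-factor weight and the exponentially weighted recursion shared by the noise updates above can be sketched as follows (scalar illustration; the function names are ours):

```python
def forgetting_weight(k, b=0.996):
    """Time-varying weight d_k = (1 - b) / (1 - b**(k + 1)),
    with forgetting factor 0 < b < 1 (0.996 is the value used here)."""
    return (1.0 - b) / (1.0 - b ** (k + 1))

def ewma_update(old, correction, d_k):
    """Exponentially weighted recursion shared by the noise updates:
    new = (1 - d_k) * old + d_k * correction."""
    return (1.0 - d_k) * old + d_k * correction
```

At $k = 0$ the weight is exactly 1, so the first correction fully replaces the initial guess; as $k$ grows, $d_k$ decays toward $1 - b$, so old estimates are forgotten only slowly.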

3.2. IS AEKF SLAM

After the forgetting factor is introduced into the standard extended Kalman filter, the adaptive extended Kalman filter is obtained, which effectively ensures the convergence of the algorithm. However, the forgetting factor makes the estimated process noise $\hat{q}_k$ small, which lowers the accuracy of the algorithm.
To improve the accuracy of the algorithm, this paper combines multi-innovation theory with the adaptive extended Kalman filter and designs an adaptive extended Kalman filter based on innovation superposition. Multi-innovation theory is a more accurate identification method developed from single-innovation identification theory; it describes the motion law of the system by establishing a mathematical model. Its main idea is to correct the current model using the parameter estimates from previous system outputs combined with the product of the gain vector and the innovation. To improve the prediction accuracy, the innovation matrices of multiple moments during the robot's movement are superimposed, so that the effective information of each moment is fully utilized. The specific implementation is as follows:
$$\tilde{Z}_k^{(n)} = \begin{bmatrix} \tilde{Z}_k \\ \tilde{Z}_{k-1} \\ \vdots \\ \tilde{Z}_{k-n+1} \end{bmatrix} = \begin{bmatrix} Z_k - H_k \hat{X}_k^- - \hat{r}_k \\ Z_{k-1} - H_{k-1} \hat{X}_{k-1}^- - \hat{r}_{k-1} \\ \vdots \\ Z_{k-n+1} - H_{k-n+1} \hat{X}_{k-n+1}^- - \hat{r}_{k-n+1} \end{bmatrix} \tag{17}$$
According to this, the state update equation of the system can be obtained as follows:
$$\hat{X}_k = \hat{X}_k^- + K_k \tilde{Z}_k^{(n)} \tag{18}$$
The covariance matrix of the system state vector is:
$$P_k = (I - K_k H_k) P_k^- \tag{19}$$
The system observation noise is estimated as:
$$\hat{r}_k = (1 - d_k)\hat{r}_{k-1} + d_k \left( Z_k - H_k \hat{X}_k^- \right) \tag{20}$$
The covariance updating equation of the estimated observation noise of the system is as follows:
$$\hat{R}_k = (1 - d_k)\hat{R}_{k-1} + d_k \left( \tilde{Z}_k^{(n)} \tilde{Z}_k^{(n)T} - H_k P_k^- H_k^T \right) \tag{21}$$
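The stacking of the most recent innovations into a single multi-innovation vector can be sketched as follows (hypothetical names; each stored innovation is assumed here to have the same dimension, a simplification since in SLAM the observation dimension can vary between steps):

```python
import numpy as np

def stack_innovations(history, n):
    """Stack the n most recent innovation vectors, newest first,
    into a single multi-innovation vector."""
    recent = history[-n:][::-1]  # order: k, k-1, ..., k-n+1
    return np.concatenate(recent)
```

Keeping the newest innovation first matches the ordering of the stacked vector above, so the corresponding gain rows weight recent information most directly.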

3.3. LAIS SLAM

During the movement of the mobile robot, the system state vector is a $(3 + 2n)$-dimensional vector, comprising the robot's three pose components and the position information of $n$ landmark points. As the number of landmarks observed by the robot increases, so does the dimension of the system state vector, which increases the cost of the covariance computations and slows the algorithm.
To address this, we can restrict the observation range of the sensor by setting a maximum observation range $\beta_{\max}$. Let $M_k = (x_k^m, y_k^m)^T$ be a feature point observed by the sensor at time $k$, at distance $\beta$ from the robot. If $\beta < \beta_{\max}$, the coordinates of $M_k$ are added to the augmented system state and updated according to the IS-AEKF-SLAM algorithm; if $\beta > \beta_{\max}$, the point is ignored. This reduces the dimension of the system state vector at a given time, lowers the amount of computation, and speeds up the algorithm. However, when many of the feature points observed at a given moment are distant, the system adds only the points closer than $\beta_{\max}$ to the state vector and ignores the others. This makes map building inefficient, and unobserved landmarks may lead to incorrect maps. In this paper, a variable $s$ is therefore introduced to limit the augmentation of the system state vector:
$$s = n - \tilde{n} \tag{22}$$
In Equation (22), $n$ is the dimension of the observation vector at time $k$, with $n = 3 + 2i$, where $i$ is the number of feature points observed by the sensors at time $k$; $\tilde{n}$ is the dimension of the system state vector at time $k-1$.
At time $k$, the sensor observations fall into one of two cases:
(1)
$s \le \tilde{n}/2$. In this case, the dimension of the system state vector at time $k-1$ differs little from that of the sensor observation vector at time $k$, and the state vector will not grow much after expansion, so the sensor observation range is not constrained.
(2)
$s > \tilde{n}/2$. In this case, the dimension of the system state vector at time $k-1$ differs considerably from that of the sensor observation vector at time $k$. If all observed feature points were added to the system state vector, its dimension would grow greatly and the running speed of the algorithm would drop. The sensor observation range is therefore constrained: feature points farther than $\beta_{\max}$ from the robot are ignored and not included in the system state vector, improving the running speed of the algorithm.
Suppose the sensor observes $i$ feature points at time $k$; the observation vector can be expressed as:
$$Z_k = \left( x_k, y_k, \theta_k, x_{1,k}^m, y_{1,k}^m, \dots, x_{i,k}^m, y_{i,k}^m \right) \tag{23}$$
Assuming that $\tilde{i}$ of the $i$ feature points observed by the sensor lie outside the constrained observation range, these points are deleted to obtain a new observation vector:
$$Z_{new,k} = \left( x_k, y_k, \theta_k, x_{1,k}^m, y_{1,k}^m, \dots, x_{i-\tilde{i},k}^m, y_{i-\tilde{i},k}^m \right) \tag{24}$$
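The limiting rule above (compute $s$ from Equation (22), and gate distant features by $\beta_{\max}$ only when $s$ exceeds half the current state dimension) can be sketched as follows (hypothetical names, a sketch under the stated assumptions rather than the full algorithm):

```python
import numpy as np

def limit_features(robot_xy, features, state_dim, beta_max):
    """Limited augmentation: keep all observed features when the
    would-be growth s = n - n_tilde is small, otherwise drop the
    features farther than beta_max from the robot.

    features: list of (x, y) landmark observations at this step;
    state_dim: current state-vector dimension n_tilde."""
    n = 3 + 2 * len(features)   # dimension of the observation vector
    s = n - state_dim           # growth relative to the current state
    if s <= state_dim / 2:
        return list(features)   # case (1): no range constraint
    # case (2): constrain the observation range to beta_max
    return [f for f in features
            if np.hypot(f[0] - robot_xy[0], f[1] - robot_xy[1]) <= beta_max]
```

Only the surviving features are then used to build the trimmed observation vector and the subsequent update, so the state dimension grows at a bounded rate.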
(1)
The prediction part of the algorithm
According to the system motion model, the robot moves from time $k-1$ to time $k$ under the control input $u_k$, while the sensor continuously observes the positions of the surrounding landmark points. The prior state estimate of the mobile robot system can thus be written as:
$$\hat{X}_k^- = \begin{bmatrix} x_{k-1} - \dfrac{v_k}{\omega_k}\sin\theta_{k-1} + \dfrac{v_k}{\omega_k}\sin(\theta_{k-1} + \omega_k \Delta t) \\ y_{k-1} + \dfrac{v_k}{\omega_k}\cos\theta_{k-1} - \dfrac{v_k}{\omega_k}\cos(\theta_{k-1} + \omega_k \Delta t) \\ \theta_{k-1} + \omega_k \Delta t \\ X_{1,k-1}^m \\ \vdots \\ X_{i,k-1}^m \end{bmatrix} + \hat{q}_k \tag{25}$$
The prior estimation covariance equation of system state at time k is:
$$P_k^- = F_k P_{k-1} F_k^T + W Q W^T \tag{26}$$
Since the position of each landmark point does not change with time during the robot's movement, only the robot's own state is updated over time, so the Jacobian matrix is:
$$F_k = \begin{bmatrix} 1 & 0 & -\dfrac{v_k}{\omega_k}\cos\theta_{k-1} + \dfrac{v_k}{\omega_k}\cos(\theta_{k-1} + \omega_k \Delta t) & \\ 0 & 1 & -\dfrac{v_k}{\omega_k}\sin\theta_{k-1} + \dfrac{v_k}{\omega_k}\sin(\theta_{k-1} + \omega_k \Delta t) & \\ 0 & 0 & 1 & \\ & & & F_j \end{bmatrix} \tag{27}$$
$$F_j = \begin{bmatrix} 1 & 0 & \cdots & 0 \\ 0 & 1 & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & 1 \end{bmatrix} \tag{28}$$
where $F_j$ is the identity block corresponding to the static landmark states.
(2)
The algorithm update part
After limiting the augmentation of the system state vector, the feature-point information outside the sensor constraint range is deleted from the observation vector, giving the new observation vector $Z_{new,k}$. From $Z_{new,k}$, the limited-augmentation innovation vector and the extended multi-innovation matrix can be written as:
$$\tilde{Z}_{new,k} = Z_{new,k} - H_k \hat{X}_k^- - \hat{r}_{new,k} \tag{29}$$
$$\tilde{Z}_{new,k}^{(n)} = \begin{bmatrix} \tilde{Z}_{new,k} \\ \tilde{Z}_{new,k-1} \\ \vdots \\ \tilde{Z}_{new,k-n+1} \end{bmatrix} = \begin{bmatrix} Z_{new,k} - H_k \hat{X}_k^- - \hat{r}_k \\ Z_{new,k-1} - H_{k-1} \hat{X}_{k-1}^- - \hat{r}_{k-1} \\ \vdots \\ Z_{new,k-n+1} - H_{k-n+1} \hat{X}_{k-n+1}^- - \hat{r}_{k-n+1} \end{bmatrix} \tag{30}$$
According to the system observation model, the Jacobian matrix of $h$ with respect to $X_k$ can be written as:
$$H_k = \begin{bmatrix} \dfrac{x_k - x_{i,k}^m}{\rho} & \dfrac{y_k - y_{i,k}^m}{\rho} & 0 & \dfrac{x_{i,k}^m - x_k}{\rho} & \dfrac{y_{i,k}^m - y_k}{\rho} & \\ \dfrac{y_{i,k}^m - y_k}{\rho^2} & \dfrac{x_k - x_{i,k}^m}{\rho^2} & -1 & \dfrac{y_k - y_{i,k}^m}{\rho^2} & \dfrac{x_{i,k}^m - x_k}{\rho^2} & \\ & & & & & H_j \end{bmatrix} \tag{31}$$
where $\rho = \sqrt{(x_{i,k}^m - x_k)^2 + (y_{i,k}^m - y_k)^2}$ and $H_j$ is the sparse selection block whose unit entries pick the components of the observed landmarks out of the state vector, with zeros elsewhere.
The Kalman gain after limited augmentation is:
$$K_{new,k} = P_k^- H_k^T \left( H_k P_k^- H_k^T + \hat{R}_{new,k} \right)^{-1} \tag{32}$$
The posterior estimation equation of system state is:
$$\hat{X}_{new,k} = \hat{X}_k^- + K_{new,k} \tilde{Z}_{new,k}^{(n)} \tag{33}$$
The posterior estimation covariance equation is:
$$P_{new,k} = (I - K_{new,k} H_k) P_k^- \tag{34}$$
Here $\hat{r}_{new,k}$ and $\hat{q}_{new,k}$ are the estimated means of the observation noise and process noise of the limited-augmentation system, and $\hat{R}_{new,k}$ and $\hat{Q}_{new,k}$ are their respective covariance matrices:
$$\hat{r}_{new,k} = (1 - d_k)\hat{r}_{k-1} + d_k \left( Z_{new,k} - H_k \hat{X}_k^- \right) \tag{35}$$
$$\hat{R}_{new,k} = (1 - d_k)\hat{R}_{k-1} + d_k \left( \tilde{Z}_{new,k}^{(n)} \tilde{Z}_{new,k}^{(n)T} - H_k P_{new,k} H_k^T \right) \tag{36}$$
$$\hat{q}_{new,k} = (1 - d_k)\hat{q}_{k-1} + d_k \left( \hat{X}_{new,k} - F_k \hat{X}_{k-1} \right) \tag{37}$$
$$\hat{Q}_{new,k} = (1 - d_k)\hat{Q}_{k-1} + d_k \left( K_{new,k} \tilde{Z}_{new,k}^{(n)} \tilde{Z}_{new,k}^{(n)T} K_{new,k}^T + P_k - F_k P_{k-1} F_k^T \right) \tag{38}$$

4. LAIS SLAM Simulation Experiment and Result Analysis

4.1. Establishment and Simulation of Environmental Map

In this paper, MATLAB is used to build the environment map. The map covers an area of 120 m × 100 m; eighty blue star landmarks were set, and eight key points were selected to define the actual trajectory of the robot. Assuming the sensor is located at the robot's center of mass, the robot can be treated as a particle during motion. The mobile robot starts from the first key point with linear velocity $v = 4\ \mathrm{m/s}$ and angular velocity $\omega = 1.5\ \mathrm{rad/s}$. The observation range of the sensor is 35 m over 180 degrees, and the constrained observation range is $\beta_{\max} = 20\ \mathrm{m}$. In this environment, the mobile robot locates and builds maps using the IS-AEKF-SLAM algorithm and the limited-augmentation IS-AEKF-SLAM algorithm, and the simulation results are shown in Figure 3.
Figure 3 shows the real trajectory of the mobile robot and its trajectories under the various algorithms. In the initial stage of positioning and mapping, the trajectories under the different algorithms differ little from the real trajectory, but over time, under the influence of system noise, each algorithm begins to accumulate error, causing trajectory deviation. As can be seen from Figure 3, the trajectory under the LAIS SLAM algorithm differs least from the actual trajectory of the mobile robot.

4.2. Analysis of Simulation Results of LAIS SLAM Algorithm

Figure 4 shows the errors between each algorithm and the true value in the x direction, the y direction, and the rotation angle when there are 80 landmark points on the map. In the first 200 iterations, the errors of the four algorithms differ little, but as the number of iterations increases, the errors of the algorithms grow apart. Compared with the other three algorithms, the error curve of the LAIS SLAM algorithm is more stable and its error is the smallest; that is, its accuracy is the highest.
Because the process noise and observation noise of the mobile robot are random, the simulation results differ from run to run, so the performance of the algorithms cannot be judged from a single simulation. To avoid chance results and ensure rigor, the number of landmarks in the map was changed to 60 for a further simulation experiment. The results are shown in Figure 5. After changing the landmarks of the environment map, the LAIS SLAM algorithm still has the highest accuracy of the four algorithms.
In the above experiments, the system noise was achromatic (white) noise. To increase the reliability of the experiment, the noise was changed to colored noise and the simulation was repeated. The results, shown in Figure 6, indicate that the LAIS SLAM algorithm still achieves higher accuracy under colored noise.
For the case of achromatic noise with 60 landmark points on the map, the root-mean-square error (RMSE), mean relative error (MRE), and maximum absolute error (MAE) of each algorithm in the three directions are shown in Table 1, Table 2 and Table 3, respectively. From these tables it can be seen that the LAIS SLAM algorithm achieves higher accuracy than the other algorithms.
To verify the running speed of the algorithm, this paper compares the complete positioning and mapping time of the four algorithms when the number of landmarks in the environment map is 60 and 80. The results are shown in Table 4. The times required by the first three algorithms differ little, while LAIS SLAM requires about 11% less time, showing that limiting the augmentation of the system state vector clearly improves the running speed of the algorithm.

5. Conclusions

In this paper, a limited-augmentation innovation-superposed SLAM algorithm (LAIS SLAM) for mobile robots is proposed. The algorithm extends single-time innovation superposition to multi-time innovation during the robot's motion, which reduces error accumulation over the iterations, and limits the augmentation of the system state vector. Simulation results show that the algorithm achieves higher accuracy than EKF SLAM, AEKF SLAM, and IS AEKF SLAM in the same map environment, and that LAIS SLAM takes about 11% less time than the other three algorithms, i.e., it runs faster.
However, this method has not been applied to complex maps in practice, and whether it can run stably in a complex environment remains to be studied. In our future work, we will focus on practical experiments to ensure that the algorithm can run stably and accurately in the real environment.

Author Contributions

Conceptualization, L.Y. and C.L.; methodology, L.Y.; software, C.L. and W.S.; validation, L.Y., C.L. and Z.L.; formal analysis, L.Y.; resources, L.Y. and Z.L.; writing—original draft preparation, C.L.; writing—review and editing, L.Y., C.L. and W.S.; supervision, L.Y. and W.S. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported in part by the China Postdoctoral Science Foundation under Grant 2018M631896, and in part by the National Natural Science Foundation of China under Grant 62203146.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The data are not publicly available due to privacy.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Tin, L. A Review on Visual-SLAM: Advancements from Geometric Modelling to Learning-Based Semantic Scene Understanding Using Multi-Modal Sensor Fusion. Sensors 2022, 22, 1–3. [Google Scholar]
  2. Di Marco, M.; Garulli, A.; Giannitrapani, A.; Vicino, A. A Set Theoretic Approach to Dynamic Robot Localization and Mapping. Auton. Robots 2004, 16, 2–5. [Google Scholar] [CrossRef]
  3. Chen, X.; Sun, H.; Zhang, H. A New Method of Simultaneous Localization and Mapping for Mobile Robots Using Acoustic Landmarks. Appl. Sci. 2019, 9, 1352. [Google Scholar] [CrossRef] [Green Version]
  4. Park, S.-H.; Yi, S.-Y. Least-square Matching for Mobile Robot SLAM Based on Line-segment Model. Int. J. Control. Autom. Syst. 2019, 17, 2961–2968. [Google Scholar] [CrossRef]
  5. Min, Z.; Yiqing, Y.; Xiaosu, X.; Hongyu, W. A Robust Parallel Initialization Method for Monocular Visual-Inertial SLAM. Sensors 2022, 22, 5–8. [Google Scholar]
  6. Benfa, K.; Jie, Y.; Qiang, L. A robust RGB-D SLAM based on multiple geometric features and semantic segmentation in dynamic environments. Meas. Sci. Technol. 2023, 34, 3–10. [Google Scholar]
  7. Wen, S.; Zhen yuan, W.; Jun nian, W.; Xiang yu, W.; Lili, L. Research on a Real-Time Estimation Method of Vehicle Sideslip Angle Based on EKF. Sensors 2022, 22, 7–9. [Google Scholar]
  8. Wen, S.; Sheng, M.; Ma, C.; Li, Z.; Zhao, Y.; Ma, J. Camera Recognition and Laser Detection based on EKF-SLAM in the Autonomous Navigation of Humanoid Robot. J. Intell. Robot. Syst. 2018, 92, 9–12. [Google Scholar] [CrossRef] [Green Version]
  9. Young wan, C.; Jae young, H. A Study on EKF-SLAM Simulation of Autonomous Flight Control of Quadcopter. Int. J. Softw. Eng. Its Appl. 2015, 9, 4–10. [Google Scholar]
  10. Barrau, A.; Bonnabel, S. An EKF-SLAM algorithm with consistency properties. arXiv 2015, arXiv:1510.06263. [Google Scholar]
  11. Zhang, W.; Du, Y.; Bai, Q. An Optimized Coverage Robot SLAM Algorithm Based on Improved Particle Filter for WSN Nodes. Int. J. Grid High Perform. Comput. 2020, 12, 5–7. [Google Scholar] [CrossRef]
  12. Yubao, S.; Zhipeng, J. A Novel Self-Positioning Based on Feature Map Creation and Laser Location Method for RBPF-SLAM. J. Robot. 2021, 2021, 9988916. [Google Scholar]
  13. Choi, J.; Choi, M.; Chung, W.K.; Choi, H.-t. Data association using relative compatibility of multiple observations for EKF-SLAM. Intell. Serv. Robot. 2016, 9, 2–6. [Google Scholar] [CrossRef]
  14. Howard, A. Multi-robot Simultaneous Localization and Mapping using Particle Filters. Int. J. Robot. Res. 2006, 25, 5–7. [Google Scholar] [CrossRef] [Green Version]
  15. Yiyi, C.; Tuanfa, Q. Design of Multisensor Mobile Robot Vision Based on the RBPF-SLAM Algorithm. Math. Probl. Eng. 2022, 2022, 1518968. [Google Scholar]
  16. Smith, R.C.; Cheeseman, P. On the representation and estimation of spatial uncertainty. Int. J. Robot. Res. 1986, 5, 56–68. [Google Scholar] [CrossRef]
  17. Rosa, L.S.; Bonato, V. A method to convert floating to fixed-point EKF-SLAM for embedded robotics. J. Braz. Comp. Soc. 2013, 19, 5–6. [Google Scholar]
  18. Huang, G.P. Observability-based Rules for Designing Consistent EKF SLAM Estimators. Int. J. Robot. Res. 2010, 29, 3–20. [Google Scholar] [CrossRef] [Green Version]
  19. Sage, A.P.; Husa, G.W. Adaptive filtering with unknown prior statistics. Jt. Autom. Control. Conf. 1969, 7, 760–769. [Google Scholar]
  20. Tian, Y.; Suwoyo, H.; Wang, W.; Mbemba, D.; Li, L. An AEKF-SLAM Algorithm with Recursive Noise Statistic Based on MLE and EM. J. Intell. Robot. Syst. 2020, 97, 3–7. [Google Scholar] [CrossRef]
  21. Zhou, Z.; Wang, D.; Xu, B. A multi-innovation with forgetting factor based EKF-SLAM method for mobile robots. Assem. Autom. 2020, 41, 6–10. [Google Scholar] [CrossRef]
  22. Ya, G.; Wei, D.; Quan min, Z.; Hassan, N. Hierarchical multi-innovation stochastic gradient identification algorithm for estimating a bilinear state-space model with moving average noise. J. Comput. Appl. Math. 2023, 420, 4–8. [Google Scholar]
  23. Li, L.; Ren, X. A Modified Multi-innovation Algorithm to Turntable Servo System Identification. Circuits Syst. Signal Process. 2020, 39, 2–8. [Google Scholar] [CrossRef]
  24. Ding, F. Several multi-innovation identification methods. Digit. Signal Process. 2009, 20, 4–7. [Google Scholar] [CrossRef]
  25. Do, C.H. Incorporating neuro-fuzzy with extended Kalman filter for simultaneous localization and mapping. Int. J. Adv. Robot. Syst. 2019, 16, 3–4. [Google Scholar] [CrossRef]
Figure 1. Motion model of mobile robot.
Figure 2. Observation model of mobile robot.
Figure 3. Trajectory diagram of each algorithm.
Figure 4. Error of each algorithm when the signpost point is 80 under achromatic noise. (a) x axis direction error. (b) y axis direction error. (c) Rotation angle error.
Figure 5. Error of each algorithm when the signpost point is 60 under achromatic noise. (a) x axis direction error. (b) y axis direction error. (c) Rotation angle error.
Figure 6. Error of each algorithm when the signpost point is 60 under colored noise. (a) x axis direction error. (b) y axis direction error. (c) Rotation angle error.
Table 1. The RMSE of each algorithm in three directions.

Direction   EKF-SLAM   AEKF-SLAM   ISAEKF-SLAM   LAIS-SLAM
X (m)       0.4413     0.4187      0.4059        0.3984
Y (m)       0.4282     0.4191      0.4299        0.3872
θ (rad)     0.0451     0.0442      0.0403        0.0301
Table 2. The MRE of each algorithm in three directions.

Direction   EKF-SLAM   AEKF-SLAM   ISAEKF-SLAM   LAIS-SLAM
X (m)       5.27       3.15        2.26          0.55
Y (m)       4.86       4.01        2.72          0.57
θ (rad)     0.18       0.082       0.15          0.027
Table 3. The MAE of each algorithm in three directions.

Direction   EKF-SLAM   AEKF-SLAM   ISAEKF-SLAM   LAIS-SLAM
X (m)       9.002      6.146       4.792         1.056
Y (m)       10.93      7.457       4.584         1.168
θ (rad)     0.24       0.19        0.22          0.08
Table 4. Time for complete map construction of each algorithm.

Number of Road Signs   EKF-SLAM   AEKF-SLAM   ISAEKF-SLAM   LAIS-SLAM
60                     90.8 s     89.6 s      90.4 s        80.7 s
80                     122.4 s    120.5 s     122.7 s       118.6 s

Share and Cite

Yang, L.; Li, C.; Song, W.; Li, Z. Innovation-Superposed Simultaneous Localization and Mapping of Mobile Robots Based on Limited Augmentation. Electronics 2023, 12, 587. https://doi.org/10.3390/electronics12030587

