Article

3D Indoor Position Estimation Based on a UDU Factorization Extended Kalman Filter Structure Using Beacon Distance and Inertial Measurement Unit Data †

1 Mechatronics Engineering Department, Istanbul Technical University, Istanbul 34025, Turkey
2 Control and Automation Engineering Department, Istanbul Technical University, Istanbul 34025, Turkey
* Author to whom correspondence should be addressed.
† This work is an extended version of the paper "Enhancing Indoor Position Estimation Accuracy: Integration of IMU, Raw Distance Data, and Extended Kalman Filter with Comparison to Vicon Indoor Positioning System Data", presented at the 10th International Electronic Conference on Sensors and Applications, Online, 15–30 November 2023.
Sensors 2024, 24(10), 3048; https://doi.org/10.3390/s24103048
Submission received: 11 December 2023 / Revised: 24 December 2023 / Accepted: 4 January 2024 / Published: 11 May 2024

Abstract

The development of the Global Positioning System (GPS) and related advances have made highly accurate outdoor positioning possible; for indoor positioning, however, more efficient, reliable, and cost-effective technology is required. A variety of techniques are used for indoor positioning, including Wi-Fi-, Bluetooth-, infrared-, ultrasound-, magnetic-, and visual-marker-based approaches. This work aims to design an accurate position estimation algorithm by combining raw distance data from ultrasonic sensors (Marvelmind beacons) and acceleration data from an inertial measurement unit (IMU), utilizing an extended Kalman filter (EKF) with a UDU factorization approach (the covariance expressed as the product of a unit triangular matrix, a diagonal matrix, and the transpose of the triangular matrix). Initially, a position estimate is calculated through a recursive least squares (RLS) method with a trilateration algorithm, using the raw distance data. This solution is then fused with acceleration data collected from the Marvelmind sensor, resulting in a position solution akin to that of GPS. The data were initially collected via the ROS (Robot Operating System) platform and then via the Pixhawk development board, with tests conducted using four fixed and one mobile Marvelmind sensor, as well as three fixed and one mobile sensor. The designed algorithm produces accurate position estimates and was subsequently implemented on an embedded development board (Pixhawk). The tests showed that the designed algorithm gives accurate results with centimeter-level precision. Furthermore, the test results showed that the UDU-EKF structure integrated into the embedded system is faster than the classical EKF.

1. Introduction

With the advancement of technology in recent years, positioning systems have improved significantly; however, these systems typically operate outdoors, where GPS signals are readily available. In indoor environments, GPS signals are too weak to provide the desired positioning accuracy. This situation creates problems for indoor unmanned aerial vehicles (UAVs) and unmanned ground vehicles (UGVs). Therefore, studies on position estimation for indoor environments have begun. There are many different approaches to indoor positioning, such as motion-capture-based, lidar-based, Wi-Fi-based, Bluetooth-based, ultra-wideband (UWB)-based, and IMU-based methods [1,2,3]. Motion-capture systems like VICON and OptiTrack employ multiple high-speed cameras to determine the position of an object; however, these systems have the drawback of requiring a complex setup and challenging calibration. Lidar-based positioning approaches, such as Gmapping, Hector, and Cartographer [4,5,6], are also an option, but weight limitations, especially on UAVs, typically restrict them to two-dimensional positioning.
Wi-Fi- and Bluetooth-based positioning techniques are widely used, with an accuracy of a few meters [7]; however, they have constraints regarding the required number of access points, cost, and energy consumption. Bluetooth Low Energy (BLE) and Wi-Fi operate on the same frequency band. BLE is intended for short-range, energy-efficient communication using brief messages [8]. BLE-based localization is usually achieved by placing proximity beacons at fixed locations; receivers then determine their position by measuring the RSSI (received signal strength indicator, a proxy for the receiver's distance from the sender) of the closest beacons.
UWB positioning is lightweight, has a straightforward design, provides stable positioning results, and can achieve centimeter-level accuracy; however, relying solely on UWB does not fulfill the demands of high-precision indoor operations. Although the IMU is a popular sensor for determining orientation, its position estimates accumulate errors over time due to drift [9,10].
Therefore, an algorithm is designed to perform more accurate indoor position estimation. This paper presents an algorithm that uses an EKF with UDU factorization to fuse IMU data with raw distance data and calculate an accurate position estimate. The algorithm begins by using the RLS method with a trilateration algorithm to obtain an initial position estimate, which is then fused with the IMU's accelerometer data.
The main contributions of this work are listed below:
  • The use of an EKF-based sensor fusion algorithm that combines acceleration data from the IMU with the position information obtained, via the RLS algorithm, from the distance data provided by the beacon sensors, in order to enhance positioning accuracy.
  • The incorporation of a UDU factorization structure to reduce computation costs in embedded systems, in addition to the utilization of the EKF structure for sensor fusion.
  • The ability of the designed algorithm to produce a solution even under suboptimal conditions, such as the use of only three beacon sensors instead of the ideal four.
In this article, detailed information is first provided on the position estimation algorithm, followed by a description of the data collection process from the real sensors; finally, the results of the real-time tests of the designed algorithm are presented.

2. Position Estimation Algorithm

In this section, the structure of the algorithms designed for position estimation is discussed. Figure 1 illustrates the general framework of the algorithm. First, the geometric approach is described; subsequently, the sensor fusion part is detailed. The designed algorithm is intended for real-time systems; for this reason, the EKF covariance is decomposed with UDU factorization to reduce the computational cost, and the UDU-EKF structure is employed instead of the classical EKF structure in this study.

2.1. Geometric Approach

2.1.1. Solution Based on Three Reference Points

This study is based on a geometric method, as depicted in Figure 2, which involves three reference points, B1, B2, and B3, represented by their coordinates: (x1, y1, z1), (x2, y2, z2), and (x3, y3, z3), respectively. Additionally, the study also includes distance measurements, d1, d2, and d3, up to point A. The determination of the coordinates (x, y, z) of point A is equivalent to finding the solutions to the following system of quadratic equations [11,12,13].
$$
\begin{aligned}
(x - x_1)^2 + (y - y_1)^2 + (z - z_1)^2 &= d_1^2 \\
(x - x_2)^2 + (y - y_2)^2 + (z - z_2)^2 &= d_2^2 \\
(x - x_3)^2 + (y - y_3)^2 + (z - z_3)^2 &= d_3^2
\end{aligned}
$$
The equations in this system can be expressed as follows [11,12,13]:
$$
\begin{aligned}
(x^2 + y^2 + z^2) - 2x_1 x - 2y_1 y - 2z_1 z &= d_1^2 - x_1^2 - y_1^2 - z_1^2 \\
(x^2 + y^2 + z^2) - 2x_2 x - 2y_2 y - 2z_2 z &= d_2^2 - x_2^2 - y_2^2 - z_2^2 \\
(x^2 + y^2 + z^2) - 2x_3 x - 2y_3 y - 2z_3 z &= d_3^2 - x_3^2 - y_3^2 - z_3^2
\end{aligned}
$$
The system of equations can be represented in matrix form as follows [11,12,13]:
$$
A_0 u = b_0
$$
where
$$
A_0 = \begin{bmatrix} 1 & -2x_1 & -2y_1 & -2z_1 \\ 1 & -2x_2 & -2y_2 & -2z_2 \\ 1 & -2x_3 & -2y_3 & -2z_3 \end{bmatrix}, \quad
u = \begin{bmatrix} x^2 + y^2 + z^2 \\ x \\ y \\ z \end{bmatrix}, \quad
b_0 = \begin{bmatrix} d_1^2 - x_1^2 - y_1^2 - z_1^2 \\ d_2^2 - x_2^2 - y_2^2 - z_2^2 \\ d_3^2 - x_3^2 - y_3^2 - z_3^2 \end{bmatrix}
$$
The solution set of this system falls into one of two cases: either B1, B2, and B3 do not lie on a straight line, or they do [11,12,13].
  • Case 1. B1, B2, and B3 are not in a straight line.
In this situation, the following propositions hold true:
  • The rank of matrix A0 is 3.
  • The dimension of the null space of A0 is 1.
The general solution to Equation (3) can be represented as follows:
$$
u = u_p + t \cdot u_h
$$
where t is a real coefficient, $u_p$ is a particular solution of (3), and $u_h$ is a solution of the homogeneous system $A_0 u = 0$. Both $u_p$ and $u_h$ can be calculated using the Gaussian elimination method:
$$
u_p = (u_{p0}, u_{p1}, u_{p2}, u_{p3})^T, \quad u_h = (u_{h0}, u_{h1}, u_{h2}, u_{h3})^T, \quad u = (u_0, u_1, u_2, u_3)^T
$$
If we substitute the expressions for $u_p$, $u_h$, and u into Equation (4), the following expressions are obtained:
$$
u_0 = u_{p0} + t u_{h0}, \quad u_1 = u_{p1} + t u_{h1}, \quad u_2 = u_{p2} + t u_{h2}, \quad u_3 = u_{p3} + t u_{h3}
$$
By using the constraint $u \in E$, i.e., $u_0 = u_1^2 + u_2^2 + u_3^2$:
$$
u_{p0} + t u_{h0} = (u_{p1} + t u_{h1})^2 + (u_{p2} + t u_{h2})^2 + (u_{p3} + t u_{h3})^2
$$
and then
$$
t^2 \left( u_{h1}^2 + u_{h2}^2 + u_{h3}^2 \right) + t \left( 2 u_{p1} u_{h1} + 2 u_{p2} u_{h2} + 2 u_{p3} u_{h3} - u_{h0} \right) + \left( u_{p1}^2 + u_{p2}^2 + u_{p3}^2 - u_{p0} \right) = 0
$$
This equation is a quadratic of the form $a t^2 + b t + c = 0$, and has two solutions:
$$
t_{1,2} = \frac{-b \pm \sqrt{b^2 - 4ac}}{2a}
$$
The solutions of the equation system can be presented as follows:
$$
u_1 = u_p + t_1 u_h, \quad u_2 = u_p + t_2 u_h
$$
  • Case 2. B1, B2, and B3 are in a straight line.
In this situation, the following propositions hold true:
  • The rank of matrix A0 is 2.
  • The dimension of the null space of A0 is 2.
The general solution to Equation (3) can be expressed as follows:
$$
u = u_p + t \cdot u_{h1} + k \cdot u_{h2}
$$
where up is a particular solution of Equation (3), and uh1 as well as uh2 are two linearly independent solutions of the homogeneous system described by A0.u = 0.
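As a rough illustration of Case 1 (not code from the paper), the following Python/NumPy sketch solves $A_0 u = b_0$ by combining a particular solution with the one-dimensional null-space direction and then applies the quadratic constraint to obtain the two candidate positions; the function name and interface are hypothetical:

```python
import numpy as np

def trilaterate_three(beacons, d):
    """Candidate positions from three non-collinear beacons (Case 1).
    beacons: (3, 3) array with the coordinates of B1, B2, B3; d: (3,) measured distances."""
    x, y, z = beacons[:, 0], beacons[:, 1], beacons[:, 2]
    A0 = np.column_stack([np.ones(3), -2 * x, -2 * y, -2 * z])   # system A0 u = b0 of Eq. (3)
    b0 = d**2 - x**2 - y**2 - z**2
    u_p = np.linalg.pinv(A0) @ b0           # a particular solution of A0 u = b0
    u_h = np.linalg.svd(A0)[2][-1]          # null-space direction (rank 3, 1D null space)
    # constraint u0 = u1^2 + u2^2 + u3^2 turns u = u_p + t*u_h into a*t^2 + b*t + c = 0
    a = u_h[1:] @ u_h[1:]
    b = 2 * (u_p[1:] @ u_h[1:]) - u_h[0]
    c = u_p[1:] @ u_p[1:] - u_p[0]
    t1, t2 = np.roots([a, b, c])            # complex roots indicate inconsistent distances
    return (u_p + t1 * u_h)[1:], (u_p + t2 * u_h)[1:]   # two candidate (x, y, z) points
```

In practice, the candidate lying inside the working volume (or closer to the previous estimate) would be retained.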

2.1.2. Solution Based on More Than Three Reference Points

If additional distances $d_4, d_5, \ldots, d_n$ to further reference points $B_4, B_5, \ldots, B_n$ are available, the system can be extended as follows [11,12,13]:
$$
\begin{bmatrix} 1 & -2x_1 & -2y_1 & -2z_1 \\ 1 & -2x_2 & -2y_2 & -2z_2 \\ 1 & -2x_3 & -2y_3 & -2z_3 \\ \vdots & \vdots & \vdots & \vdots \\ 1 & -2x_n & -2y_n & -2z_n \end{bmatrix}
\begin{bmatrix} x^2 + y^2 + z^2 \\ x \\ y \\ z \end{bmatrix}
=
\begin{bmatrix} d_1^2 - x_1^2 - y_1^2 - z_1^2 \\ d_2^2 - x_2^2 - y_2^2 - z_2^2 \\ d_3^2 - x_3^2 - y_3^2 - z_3^2 \\ \vdots \\ d_n^2 - x_n^2 - y_n^2 - z_n^2 \end{bmatrix}
$$
Expressed in a known form, this system can be written as follows:
$$
A u = b, \quad
A = \begin{bmatrix} 1 & -2x_1 & -2y_1 & -2z_1 \\ 1 & -2x_2 & -2y_2 & -2z_2 \\ 1 & -2x_3 & -2y_3 & -2z_3 \\ \vdots & \vdots & \vdots & \vdots \\ 1 & -2x_n & -2y_n & -2z_n \end{bmatrix}, \quad
u = \begin{bmatrix} x^2 + y^2 + z^2 \\ x \\ y \\ z \end{bmatrix}, \quad
b = \begin{bmatrix} d_1^2 - x_1^2 - y_1^2 - z_1^2 \\ d_2^2 - x_2^2 - y_2^2 - z_2^2 \\ d_3^2 - x_3^2 - y_3^2 - z_3^2 \\ \vdots \\ d_n^2 - x_n^2 - y_n^2 - z_n^2 \end{bmatrix}
$$
The general solution, denoted as $\hat{u}$, can be obtained using the least squares method as follows:
$$
\hat{u} = (A^T A)^{-1} A^T b
$$
The projection p of b onto the column space of matrix A is as follows:
$$
p = A (A^T A)^{-1} A^T b
$$
In this case, the coordinates of p with respect to the column space Col(A) correspond to the solution $\hat{u}$. Moreover, the vector b is built from the measured distances between the unknown point A and all the reference points; however, if the measurements are uncorrelated but have varying levels of uncertainty, the weighted least squares (WLS) method is utilized. The solution $\hat{u}$ can then be determined using the following equation:
$$
\hat{u} = (A^T V^{-1} A)^{-1} A^T V^{-1} b
$$
V is the covariance matrix of the random errors. These errors represent the variability or uncertainty in the measured distances between the unknown point and the reference points; they may stem from measurement noise (e.g., signal interference) or propagation effects (e.g., signal reflection or refraction). The RLS method uses u0 as the initial solution, which is continually updated with each new distance measurement to produce a new solution, u1. This approach enables the simultaneous calculation of both distance measurements and positioning, allowing position computation to start even when not all distances are known, thus reducing waiting time and improving the speed of the positioning calculation. Generally, for indoor positioning, the limited number of participating reference stations does not pose a significant computational challenge for multilateration. The presented algorithm utilizes a linear algebraic approach with low computational complexity, making it suitable for real-time applications.
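As a minimal sketch of the least-squares and weighted-least-squares solutions above (NumPy only, with a hypothetical interface; the weights are assumed to be the inverse variances of the distance errors):

```python
import numpy as np

def multilaterate(beacons, d, weights=None):
    """Least-squares position from n >= 4 beacon distances.
    beacons: (n, 3) fixed coordinates; d: (n,) measured distances;
    weights: optional (n,) inverse variances 1/sigma_i^2 of the distance errors."""
    x, y, z = beacons[:, 0], beacons[:, 1], beacons[:, 2]
    A = np.column_stack([np.ones(len(d)), -2 * x, -2 * y, -2 * z])
    b = d**2 - x**2 - y**2 - z**2
    if weights is None:
        u_hat, *_ = np.linalg.lstsq(A, b, rcond=None)            # ordinary least squares
    else:
        Vinv = np.diag(weights)                                   # V^-1 of the WLS solution
        u_hat = np.linalg.solve(A.T @ Vinv @ A, A.T @ Vinv @ b)   # weighted least squares
    return u_hat[1:]   # estimated (x, y, z); u_hat[0] approximates x^2 + y^2 + z^2
```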
The combination of distance data and the RLS algorithm is used to calculate the position. The following section will go into further detail on how to improve position estimation using the EKF.

2.2. Sensor Fusion Algorithm

Several algorithms are used for sensor fusion, including Bayesian networks, convolutional neural networks, Dempster–Shafer theory, and the Kalman filter [14]. This system employs the Kalman filter to achieve more precise position estimation; however, the Kalman filter is optimal only for linear systems, while real-world systems often exhibit non-linearity. Thus, the EKF is utilized to enhance performance [15]. The implementation of the EKF used in this study is discussed in further detail below. The state vector is defined as follows [13]:
$$
X[k] = \begin{bmatrix} p^{(w)}[k] \\ v^{(w)}[k] \\ a^{(w)}[k] \end{bmatrix}
$$
The vector $p^{(w)} = [p_x[k], p_y[k], p_z[k]]$ represents the coordinates of the object along the x, y, and z axes in the world coordinate system. The velocity vector of the object along the x, y, and z axes in the world coordinate system is represented by $v^{(w)} = [v_x[k], v_y[k], v_z[k]]$. Additionally, the acceleration vector of the object in the world coordinate system is represented by $a^{(w)} = [a_x[k], a_y[k], a_z[k]]$.
The world coordinate system (w) can also be taken as the coordinate system of the UWB sensor (Marvelmind). At this point, $\Delta t$ is defined as the sampling time interval, and $\Delta t \, w[k]$ is the process noise of the acceleration, representing uncertainties in the acceleration process; $\frac{\Delta t^2}{2} w[k]$ and $\frac{\Delta t^3}{6} w[k]$ represent the process noise of the velocity and of the position, respectively. The equations of motion at the (k + 1) time step can be expressed as follows [13]:
$$
p[k+1] = p[k] + v[k] \, \Delta t + a[k] \, \frac{\Delta t^2}{2} + \frac{\Delta t^3}{6} w[k]
$$
The equation states that the new position at time step k + 1 is determined by the current position, the velocity multiplied by the time step, half of the acceleration multiplied by the square of the time step, and a noise term. The noise term is added to account for uncertainties or external disturbances that may affect the motion of the system:
$$
v[k+1] = v[k] + a[k] \, \Delta t + \frac{\Delta t^2}{2} w[k]
$$
The equation states that the new velocity at time step k + 1 is determined by the current velocity, the acceleration multiplied by the time step, and a noise term:
$$
a[k+1] = a[k] + \Delta t \, w[k]
$$
The equation states that the new acceleration at time step k + 1 is determined by the current acceleration and a noise term.
The choice of the noise term $w[k]$ depends on the characteristics of the system and the sources of uncertainty or disturbance. As mentioned earlier, it is common to assume $w[k]$ to be white noise. White noise has equal power at all frequencies, making it a simple and convenient choice in many applications.
The state equation can be represented in matrix form, as follows:
$$
X[k+1] = A X[k] + G w[k]
$$
The matrices A and G represent the transition matrix and noise process matrix, respectively. Process noise vector and Q covariance can be defined as follows [16,17,18]:
$$
w[k] = \begin{bmatrix} w_x[k] \\ w_y[k] \\ w_z[k] \end{bmatrix}, \quad
Q = \mathrm{diag}\left( \sigma_{a_x}^2, \ \sigma_{a_y}^2, \ \sigma_{a_z}^2 \right)
$$
$$
A = \begin{bmatrix}
1 & 0 & 0 & \Delta t & 0 & 0 & \Delta t^2/2 & 0 & 0 \\
0 & 1 & 0 & 0 & \Delta t & 0 & 0 & \Delta t^2/2 & 0 \\
0 & 0 & 1 & 0 & 0 & \Delta t & 0 & 0 & \Delta t^2/2 \\
0 & 0 & 0 & 1 & 0 & 0 & \Delta t & 0 & 0 \\
0 & 0 & 0 & 0 & 1 & 0 & 0 & \Delta t & 0 \\
0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & \Delta t \\
0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1
\end{bmatrix}, \quad
G = \begin{bmatrix}
\Delta t^3/6 & 0 & 0 \\
0 & \Delta t^3/6 & 0 \\
0 & 0 & \Delta t^3/6 \\
\Delta t^2/2 & 0 & 0 \\
0 & \Delta t^2/2 & 0 \\
0 & 0 & \Delta t^2/2 \\
\Delta t & 0 & 0 \\
0 & \Delta t & 0 \\
0 & 0 & \Delta t
\end{bmatrix}
$$
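For illustration, the transition, noise-mapping, and process-noise matrices of this constant-acceleration model can be assembled directly from the sampling interval; the numerical values below are placeholders, not values taken from the paper:

```python
import numpy as np

dt = 0.05                             # sampling interval in seconds (illustrative assumption)
sigma_a = np.array([0.4, 0.4, 0.5])   # accel. process-noise std. dev. per axis (assumption)

I3, Z3 = np.eye(3), np.zeros((3, 3))
# State X = [px, py, pz, vx, vy, vz, ax, ay, az]^T
A = np.block([[I3, dt * I3, 0.5 * dt**2 * I3],
              [Z3, I3,      dt * I3],
              [Z3, Z3,      I3]])
G = np.vstack([dt**3 / 6 * I3,        # process noise mapped into position
               dt**2 / 2 * I3,        # ... into velocity
               dt * I3])              # ... into acceleration
Q = np.diag(sigma_a**2)               # covariance of w[k]
```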
The observation vector, $o[k]$, contains the distance values used in the geometric approach together with measurement noise. Since the observation vector contains noisy or imperfect measurements of the true state of the system, the product $H[k] x[k]$ represents the measurements predicted from the current estimate of the system's state. In other words, the observation vector contains the real-world measurements obtained from the sensors, while $H[k] x[k]$ contains the measurements predicted from the current state estimate:
$$
o[k] = \begin{bmatrix} d_1[k] + n_1[k] \\ d_2[k] + n_2[k] \\ d_3[k] + n_3[k] \\ d_4[k] + n_4[k] \end{bmatrix} = H[k] x[k] + n[k]
$$
$d_1, \ldots, d_4$ represent the true distance measurements, and $n_1$, $n_2$, $n_3$, and $n_4$ represent the noise associated with each measurement.
The measurement matrix is represented by $H[k]$, and the noise vector, with zero mean and covariance matrix R, is represented by $n[k]$. As seen in Figure 3, the general structure uses five Marvelmind ultrasonic sensors, one mobile and four stationary. The distances from the four stationary sensors to the mobile sensor are denoted $r_1$, $r_2$, $r_3$, and $r_4$. The covariance matrix R is defined from the covariances of these distance data as follows:
$$
R = \mathrm{diag}\left( \sigma_{r_1}^2, \ \sigma_{r_2}^2, \ \sigma_{r_3}^2, \ \sigma_{r_4}^2 \right)
$$
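A generic range-measurement model consistent with this setup, giving the predicted distances h(x) and the Jacobian H[k] used by the EKF steps below, might be sketched as follows (an illustrative implementation, not the authors' code):

```python
import numpy as np

def h_and_jacobian(x, beacons):
    """Predicted beacon ranges h(x) and Jacobian H for the state x = [p; v; a].
    beacons: (m, 3) coordinates of the stationary beacons (m = 4 in Figure 3)."""
    p = x[:3]
    diff = p - beacons                       # (m, 3) vectors from each beacon to the mobile unit
    d_pred = np.linalg.norm(diff, axis=1)    # predicted distances r_1 ... r_m
    H = np.zeros((len(beacons), len(x)))
    H[:, :3] = diff / d_pred[:, None]        # partial derivatives of r_i w.r.t. position;
    return d_pred, H                         # zero w.r.t. velocity and acceleration
```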
The equations described in the structure of the applied EKF are utilized. The initialization stage of this EKF is one of the crucial factors, where the initial values of the covariance matrices are assigned. Subsequently, the prediction and update steps are executed in sequence [15,19].
(1) Prediction of state:
$$
\bar{x}[k] = f(\hat{x}[k-1], u[k])
$$
where f is the nonlinear transition function and $u[k]$ represents the control inputs, if applicable.
(2) Prediction of state covariance:
$$
\bar{P}[k] = A \hat{P}[k-1] A^T + Q
$$
where A is the Jacobian matrix of the nonlinear transition function, f, with respect to the state, X, evaluated at step k − 1.
(3) Gain calculation of the EKF:
$$
K[k] = \bar{P}[k] H[k]^T \left[ H[k] \bar{P}[k] H[k]^T + R[k] \right]^{-1}
$$
(4) State correction:
$$
\hat{x}[k] = \bar{x}[k] + K[k] \left( o[k] - h(\bar{x}[k]) \right)
$$
where h is the nonlinear measurement function and $H[k]$ is its Jacobian with respect to the state.
(5) State covariance correction:
$$
\hat{P}[k] = \left( I - K[k] H[k] \right) \bar{P}[k]
$$
In the EKF, the computational complexity is increased by the calculation of the Jacobian matrices. To enhance the efficiency and numerical accuracy of the EKF computations, and to decrease their computational cost, UDU factorization is applied. The details of this algorithm are given in the next section.
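Putting steps (1)-(5) together, one classical EKF cycle can be sketched as below; it reuses h_and_jacobian from the previous sketch and maps the process noise through G, which is one reading of the prediction step (an illustration, not the authors' implementation):

```python
import numpy as np

def ekf_step(x, P, o, beacons, A, G, Q, R):
    """One predict/update cycle of the classical EKF described in steps (1)-(5)."""
    # (1)-(2) prediction with the linear transition model
    x_pred = A @ x
    P_pred = A @ P @ A.T + G @ Q @ G.T
    # (3) gain calculation using the range Jacobian
    d_pred, H = h_and_jacobian(x_pred, beacons)
    K = P_pred @ H.T @ np.linalg.inv(H @ P_pred @ H.T + R)
    # (4)-(5) state and covariance correction with the measured distances o
    x_new = x_pred + K @ (o - d_pred)
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new
```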

2.3. UDU Factorization

UDU factorization employs the above equations. It substitutes the covariance matrix, P[k], with an upper triangular matrix with ones on the diagonal (U[k]) and a diagonal matrix, D[k], and can be expressed as follows [20]:
$$
P[k] = U[k] D[k] U[k]^T
$$
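One standard recursive routine for obtaining the U and D factors of a symmetric positive-definite covariance is sketched below; this is a generic construction, not necessarily the routine used in the paper:

```python
import numpy as np

def udu_factorize(P):
    """Return unit upper-triangular U and diagonal D (as a vector) with P = U diag(D) U^T."""
    P = P.astype(float).copy()
    n = P.shape[0]
    U, D = np.eye(n), np.zeros(n)
    for j in range(n - 1, -1, -1):
        D[j] = P[j, j]
        if j > 0:
            U[:j, j] = P[:j, j] / D[j]
            P[:j, :j] -= D[j] * np.outer(U[:j, j], U[:j, j])   # remove column j's contribution
    return U, D

# quick self-check on a random covariance
M = np.random.default_rng(1).standard_normal((5, 5))
P0 = M @ M.T + 5 * np.eye(5)
U, D = udu_factorize(P0)
assert np.allclose(U @ np.diag(D) @ U.T, P0)
```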

2.4. UDU Measurement Update

Alternatively, the covariance, P, can be factorized into an LDL form, in which a lower triangular matrix is used instead of an upper triangular one [21]:
$$
P[k] = L[k] \Lambda[k] L[k]^T \quad \text{and} \quad \bar{P}[k] = \bar{L}[k] \bar{\Lambda}[k] \bar{L}[k]^T
$$
where $\Lambda$ is the diagonal matrix, and U and D are the inverses of $L^T$ and $\Lambda$, respectively [21]:
$$
P[k]^{-1} = L[k]^{-T} \Lambda[k]^{-1} L[k]^{-1} = U[k] D[k] U[k]^T, \quad
\bar{P}[k]^{-1} = \bar{L}[k]^{-T} \bar{\Lambda}[k]^{-1} \bar{L}[k]^{-1} = \bar{U}[k] \bar{D}[k] \bar{U}[k]^T
$$
Therefore, the measurement update becomes as follows:
$$
U[k] D[k] U[k]^T = \bar{U}[k] \bar{D}[k] \bar{U}[k]^T + H[k]^T R[k]^{-1} H[k]
$$
Then, the m × m matrix $R[k]$ is factorized into the LDL form as described in [21]:
$$
R[k] = L_R[k] \Lambda_R[k] L_R[k]^T, \quad
R[k]^{-1} = L_R[k]^{-T} \Lambda_R[k]^{-1} L_R[k]^{-1} = U_R[k] D_R[k] U_R[k]^T
$$
Therefore, (34) becomes as follows:
$$
U[k] D[k] U[k]^T = \bar{U}[k] \bar{D}[k] \bar{U}[k]^T + H[k]^T U_R[k] D_R[k] U_R[k]^T H[k]
$$
and the term $U_R[k]^T H[k]$ can be expressed as follows [21]:
$$
U_R[k]^T H[k] = \begin{bmatrix} v_1^T \\ \vdots \\ v_m^T \end{bmatrix}
$$
Each $v_i$ is an n × 1 vector. The factor $H[k]^T R[k]^{-1} H[k]$ can be expressed as follows [21]:
$$
H[k]^T R[k]^{-1} H[k] = H[k]^T U_R[k] D_R[k] U_R[k]^T H[k]
= \begin{bmatrix} v_1^T \\ \vdots \\ v_m^T \end{bmatrix}^T
\begin{bmatrix} \frac{1}{d_1^R} & & 0 \\ & \ddots & \\ 0 & & \frac{1}{d_m^R} \end{bmatrix}
\begin{bmatrix} v_1^T \\ \vdots \\ v_m^T \end{bmatrix}
= \sum_{i=1}^{m} \frac{1}{d_i^R} v_i v_i^T
$$
Therefore, the measurement update is expressed as follows:
$$
U[k] D[k] U[k]^T = \bar{U}[k] \bar{D}[k] \bar{U}[k]^T + \sum_{i=1}^{m} \frac{1}{d_i^R} v_i v_i^T
$$
In addition to this, $\hat{z}[k]$ and $\bar{z}[k]$, which are related to $\hat{x}[k]$ and $\bar{x}[k]$, are defined below [20,21]:
$$
\hat{z}[k] \equiv P[k]^{-1} \hat{x}[k], \quad \bar{z}[k] \equiv \bar{P}[k]^{-1} \bar{x}[k]
$$
$$
\hat{z}[k] = P[k]^{-1} \left( I - K[k] H[k] \right) \bar{x}[k] + P[k]^{-1} K[k] m[k]
$$
$$
\hat{z}[k] = \bar{z}[k] + H[k]^T R[k]^{-1} m[k]
$$
where m [ k ] is the measurement vector.
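The rank-one decomposition of $H[k]^T R[k]^{-1} H[k]$ derived above can be checked numerically; the sketch below builds the unit-lower-triangular factor of R from a Cholesky decomposition, which is an implementation choice made here for illustration only:

```python
import numpy as np

rng = np.random.default_rng(0)
H = rng.standard_normal((4, 9))           # m = 4 range measurements, n = 9 states
B = rng.standard_normal((4, 4))
R = B @ B.T + 4 * np.eye(4)               # a generic SPD measurement covariance

C = np.linalg.cholesky(R)                 # R = C C^T
s = np.diag(C)
L_R = C / s                               # unit lower triangular, so R = L_R diag(s^2) L_R^T
lam = s**2                                # diagonal entries d_i^R of Lambda_R
U_R = np.linalg.inv(L_R).T                # U_R = L_R^{-T}; D_R = Lambda_R^{-1}
V = U_R.T @ H                             # row i of V is v_i^T

lhs = H.T @ np.linalg.inv(R) @ H
rhs = sum(np.outer(V[i], V[i]) / lam[i] for i in range(len(lam)))
print(np.allclose(lhs, rhs))              # True: the rank-one sum reproduces H^T R^-1 H
```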

2.5. UDU Time Update

The Kalman filter uses a standard method for the time update: the information matrix is inverted to obtain the covariance, the covariance is propagated, and the propagated covariance matrix is then inverted again to prepare for the measurement update step. The proposed algorithm uses this method for covariance propagation while factorizing $Q[k]$ with the UDU parameterization [20,21]:
$$
Q[k] = U_Q[k] \Lambda_Q[k] U_Q[k]^T
$$
where $\Lambda_Q[k]$ is a diagonal p × p matrix, and $G_Q[k]$ is defined as follows:
$$
G_Q[k] \equiv G[k] U_Q[k]
$$
so that $\bar{P}[k+1]$ becomes the following:
$$
\bar{P}[k+1] = \phi[k] P[k] \phi[k]^T + G_Q[k] \Lambda_Q[k] G_Q[k]^T
$$
Using the matrix inversion lemma, we obtain the following:
$$
(Q + W E F)^{-1} = Q^{-1} - Q^{-1} W \left( E^{-1} + F Q^{-1} W \right)^{-1} F Q^{-1}
$$
$$
Q = \phi[k] P[k] \phi[k]^T; \quad E = \Lambda_Q[k]; \quad W = G_Q[k]; \quad F = G_Q[k]^T; \quad
Q^{-1} = M[k] \equiv \phi[k]^{-T} P[k]^{-1} \phi[k]^{-1}
$$
The inversion of the propagated covariance matrix is as follows [20,21]:
$$
\bar{P}[k+1]^{-1} = M[k] - M[k] G_Q[k] \left[ G_Q[k]^T M[k] G_Q[k] + \Lambda_Q[k]^{-1} \right]^{-1} G_Q[k]^T M[k]
$$
$$
\bar{G}[k] = \phi[k]^{-1} G_Q[k], \quad D_Q[k] = \Lambda_Q[k]^{-1}
$$
Then, the expression for $\bar{P}[k+1]^{-1}$ is given by the following:
$$
\bar{P}[k+1]^{-1} = \phi[k]^{-T} \left\{ P[k]^{-1} - P[k]^{-1} \bar{G}[k] \left[ \bar{G}[k]^T P[k]^{-1} \bar{G}[k] + D_Q[k] \right]^{-1} \bar{G}[k]^T P[k]^{-1} \right\} \phi[k]^{-1}
$$
and the UDU Kalman gain, $K_{UDU}$, is defined as follows:
$$
K_{UDU}[k] \equiv P[k]^{-1} \bar{G}[k] \left[ \bar{G}[k]^T P[k]^{-1} \bar{G}[k] + D_Q[k] \right]^{-1}
$$
Then, if we define the braced term in (47) as $P'[k]^{-1}$, we obtain the following:
$$
P'[k]^{-1} \equiv P[k]^{-1} - P[k]^{-1} \bar{G}[k] \left[ \bar{G}[k]^T P[k]^{-1} \bar{G}[k] + D_Q[k] \right]^{-1} \bar{G}[k]^T P[k]^{-1} = \left( I - K_{UDU}[k] \bar{G}[k]^T \right) P[k]^{-1}
$$
Equation (48) is analogous to Equations (28)–(30), with the following correspondences:
$$
P'[k]^{-1} \leftrightarrow \hat{P}[k], \quad P[k]^{-1} \leftrightarrow \bar{P}[k], \quad \bar{G}[k] \leftrightarrow H[k]^T, \quad K_{UDU}[k] \leftrightarrow K[k], \quad D_Q[k] \leftrightarrow R[k]
$$
and, since $D_Q[k]$ is a diagonal matrix, the UDU factorization of $P'[k]^{-1}$ is computed directly by using Carlson's rank-one update [22]:
$$
U'[k] D'[k] U'[k]^T = U[k] D[k] U[k]^T - U[k] D[k] U[k]^T \bar{G}[k] \left[ \bar{G}[k]^T U[k] D[k] U[k]^T \bar{G}[k] + \Lambda_Q[k]^{-1} \right]^{-1} \bar{G}[k]^T U[k] D[k] U[k]^T
$$
and the time-propagated UDU factors of $\bar{P}[k+1]^{-1}$ are defined as below:
$$
\bar{P}[k+1]^{-1} = \bar{U}[k+1] \bar{D}[k+1] \bar{U}[k+1]^T = \phi[k]^{-T} U'[k] D'[k] U'[k]^T \phi[k]^{-1}
$$
The time update for $\bar{z}[k+1]$ is obtained as follows:
$$
\bar{P}[k+1]^{-1} \bar{x}[k+1] = \bar{P}[k+1]^{-1} \phi[k] P[k] P[k]^{-1} \hat{x}[k]
$$
then it becomes
$$
\bar{z}[k+1] = \bar{P}[k+1]^{-1} \phi[k] P[k] \hat{z}[k]
$$
and it can be written as:
$$
\bar{z}[k+1] = \phi[k]^{-T} \left( I - K_{UDU}[k] \bar{G}[k]^T \right) \hat{z}[k]
$$
The equation systems specified in this section have been utilized to enhance the performance of the EKF structure in real-time applications. The outcomes of this structure will be comprehensively analyzed in Section 4 of the article. The next section will explain how data are collected from the Marvelmind sensor.

3. Data Acquisition

This section provides details on how data are retrieved from the Marvelmind sensors. The mobile Marvelmind sensor is connected to a laptop computer via the serial port. To ensure that all of the necessary components were working correctly, the four Marvelmind sensors attached to the wall were activated, and their functionality was confirmed through Marvelmind's own interface [23]. After verifying that all of the sensors were operating properly, a software tool on the ROS platform was run on the laptop to gather the data shown in Table 1.
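For illustration, a minimal ROS node that logs the two messages in Table 1 might look as follows. The topic, message, and field names are assumed to follow the Marvelmind ROS package (marvelmind_nav) and Table 1; they should be checked against the installed interface:

```python
#!/usr/bin/env python
# Minimal logger for the Marvelmind data in Table 1 (illustrative sketch; names are assumptions).
import rospy
from marvelmind_nav.msg import beacon_distance, hedge_imu_raw  # assumed message definitions

def on_distance(msg):
    # raw distance (m) from stationary beacon `add_beacon` to mobile beacon `add_hedge`
    rospy.loginfo("beacon %d -> hedge %d: %.3f m", msg.add_beacon, msg.add_hedge, msg.dist_m)

def on_imu(msg):
    # raw accelerometer readings with the IMU timestamp (field names as in Table 1)
    rospy.loginfo("t=%d ax=%d ay=%d az=%d", msg.time_var, msg.accel_x, msg.accel_y, msg.accel_z)

if __name__ == "__main__":
    rospy.init_node("marvelmind_logger")
    rospy.Subscriber("/beacon_distance", beacon_distance, on_distance)
    rospy.Subscriber("/hedge_imu_raw", hedge_imu_raw, on_imu)
    rospy.spin()
```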
After the data were collected in the ROS environment, the packet-parsing functions that interpret the data were added to Pixhawk so that the same data could also be collected directly on Pixhawk. In addition to the written code, a physical connection must be made between Pixhawk and the Marvelmind sensor; this connection schema can be seen in Figure 4 [24].
Following the schematic connection depicted in Figure 5a,b, a physical connection between Pixhawk and the Marvelmind sensor was established. Data collection was then conducted on Pixhawk, while the other Marvelmind sensors remained fixed to the wall.
The explanation of the algorithm tests and their outcomes will be provided in detail in the next section.

4. Testing Algorithm and Results

The UDU-factorized EKF structure and the geometric approach based on the RLS method were implemented in the C++ environment using MATLAB code generation support. The integration was carried out on the Pixhawk platform. The trajectories obtained from the Marvelmind sensor are presented in Table 2. The robotic platform used in this study is a quadcopter. Due to reliability concerns associated with indoor flight, the quadcopter was constrained to a fixed table, and data were collected using Marvelmind sensors. In the initial six trajectories, no motion was applied to the robot along the z-axis; however, intentional movement in the z-axis was introduced during the data collection for the last two trajectories to assess the algorithm’s performance.
The position estimations calculated solely through the trilateration algorithm, and the positions derived by integrating the trilateration with the EKF and UDU factorization algorithms incorporating IMU data, are presented in Figure 6, Figure 7, Figure 8, Figure 9, Figure 10, Figure 11, Figure 12 and Figure 13. Additionally, the locations of the fixed sensors used in these figures are indicated as black dots. Comprehensive information on the minimum, mean, and maximum error values for the calculated positions, computed from the RMS of the position error, is provided in Table 3.
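The summary statistics in Table 3 can be reproduced from logged estimated and reference positions along the following lines; interpreting the mean column as the RMS of the per-sample error norms is an assumption based on the text above:

```python
import numpy as np

def error_stats(estimate, reference):
    """Min, RMS, and max of the 3D position error norm over a trajectory.
    estimate, reference: (N, 3) arrays of estimated and reference positions."""
    err = np.linalg.norm(estimate - reference, axis=1)   # per-sample error (m)
    return err.min(), np.sqrt(np.mean(err**2)), err.max()
```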
It is evident from the figures that there are differences between the position estimations obtained using UDU factorization applied to the EKF structure and those obtained solely through the geometric approach. The position estimation derived from UDU factorization is closer to the reference position compared to the geometric approach. This is expected because the geometric approach calculates position estimations solely based on distance data from the Marvelmind sensor. On the other hand, in the approach utilizing the EKF, a sensor fusion algorithm is employed, incorporating both distance and accelerometer data to enhance the accuracy of position estimations.
In Figure 6, Figure 7 and Figure 8, four stationary and one mobile Marvelmind sensors were utilized. In Figure 9, Figure 10 and Figure 11, three stationary and one mobile Marvelmind sensors were used. In Figure 12 and Figure 13, the four-stationary-plus-one-mobile and the three-stationary-plus-one-mobile configurations were employed, respectively. In the first six scenarios, there was no movement along the z-axis; conversely, in the last two scenarios, motion along the z-axis was introduced. The obtained RMS values differ depending on the trajectory, the number of sensors used, and the algorithmic structure.
When examining Table 3, it is apparent that the results obtained solely through the geometric approach for all trajectories fall short of the desired accuracy, with a notably high error amount; however, it is observed that position estimations reached the desired levels when UDU-EKF was employed in conjunction with the geometric approach. The primary reason for this is, as detailed in the article, that the sensor fusion algorithm is capable of providing a more accurate estimation compared to the geometric-based algorithm. Incorporating accelerometer data into the position estimation process contributes to achieving a more accurate outcome.
For each trajectory, the geometric approach, relying solely on trilateration, exhibits notable variability in accuracy, with maximum errors reaching up to 5.4721 m in Trajectory 6. In contrast, the integration of UDU factorization into the EKF consistently enhances accuracy, as demonstrated by reduced minimum, mean, and maximum errors. Trajectory 6 stands out with a remarkable improvement, showing a substantial drop in the maximum error from 5.4721 to 0.4796 m. This underscores the pivotal role of the UDU-EKF in refining position estimates, particularly in the more challenging configuration with only three stationary beacons. The overall pattern highlights the algorithm's adaptability and robustness in dynamic indoor environments, reaffirming its efficacy for precise positioning in real-world applications.
In addition, the code was executed multiple times to compare the UDU-factorized EKF with the standard EKF in terms of processing speed, and the processing times of both structures were recorded. The comparison of processing speeds between the EKF with UDU factorization and the conventional EKF, detailed in Table 4, offers valuable insight into their computational efficiency. The experimental setup involved multiple runs to ensure a thorough assessment, and the recorded processing times for UDU-EKF and the EKF, presented in seconds for each trajectory, quantify the time required for algorithm execution. The observed results confirm that the EKF structure with UDU factorization consistently outperforms the classical EKF, with a processing-time difference of at least 18%. The percentage-difference analysis underscores the improvement in processing speed achieved by incorporating UDU factorization. Trajectory-specific evaluations reveal differences of 21%, 25%, and 18% for Trajectories 1, 2, and 3, respectively, emphasizing the UDU-EKF's faster execution across diverse scenarios. The implications for real-world applications are substantial, especially in time-sensitive scenarios where quick and accurate positioning is paramount. The observed reduction in computational cost further positions UDU factorization as an efficient choice, particularly for resource-constrained systems. These findings suggest that implementing UDU factorization in the EKF structure is a strategic optimization for improving overall algorithm efficiency. In summary, the analysis highlights the UDU-EKF's superiority in processing speed, showcasing its potential to significantly enhance the efficiency of positioning algorithms in critical real-world applications.
As a result, the integration of UDU factorization into the EKF structure has been observed to improve the accuracy of position estimation, as evidenced by both the graphs and Table 3. UDU-EKF achieves reliable results even in scenarios involving motion along the three axes. Additionally, upon reviewing Table 4, it is evident that the UDU-EKF structure operates faster than the traditional EKF. Considering both algorithms, it is observed that UDU-EKF plays a significant role in position estimation with lower computational cost.

5. Conclusions

This paper presents a comprehensive study on the integration of UDU factorization into the structure of the EKF for indoor position estimation algorithms. The proposed algorithm demonstrates its effectiveness in real-time systems, showcasing improved accuracy in position estimation compared to traditional EKF approaches. The decomposition of the EKF structure with UDU factorization aims to reduce computational costs, making it particularly suitable for real-time applications.
The study focuses on a geometric method based on reference points, introducing an approach that utilizes distance measurements and quadratic equations to determine the position coordinates of a target point.
Experimental results, conducted using Marvelmind sensors and a quadcopter, provide a thorough analysis of the algorithm’s performance. Trajectory-based comparisons emphasize the superiority of UDU-EKF over the geometric approach, highlighting its ability to achieve more accurate position predictions. The integration of UDU factorization into the EKF not only enhances accuracy but also demonstrates adaptability to dynamic indoor environments.
Furthermore, the paper investigates the computational efficiency of the proposed algorithm, comparing processing speeds between UDU-EKF and traditional EKF structures. The results consistently show that UDU-EKF outperforms traditional EKF in terms of processing speed. This finding is crucial for real-world applications, especially those requiring quick and accurate positioning.
In summary, the integration of UDU factorization into the EKF structure emerges as a strategic optimization for achieving higher accuracy with lower computational costs. The algorithm’s adaptability to all three axes and its superior processing speed position it as a promising solution for real-time positioning systems, emphasizing its potential impact on various applications, including robotics and indoor navigation.

Author Contributions

Conceptualization, T.B. and F.C.; methodology, T.B.; software, T.B.; validation, T.B.; data acquisition, T.B.; formal analysis, T.B.; investigation, T.B.; writing—original draft preparation, T.B.; writing—review and editing, T.B. and F.C.; visualization, T.B.; supervision, F.C. All authors have read and agreed to the published version of the manuscript.

Funding

This research was financially partly supported by Istanbul Technical University, grant number 42754.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Data are contained within the work and article.

Conflicts of Interest

The authors declare that there is no competing financial interest or personal relationship that could have appeared to influence the work reported in this paper.

References

  1. Nirjon, S.; Liu, J.; DeJean, G.; Priyantha, B.; Jin, Y.; Hart, T. COIN-GPS: Indoor localization from direct GPS receiving. In Proceedings of the 12th Annual International Conference on Mobile Systems, Applications, and Services—MobiSys 2014, Bretton Woods, NH, USA, 16–19 June 2014; pp. 301–314. [Google Scholar]
  2. Vasisht, D.; Kumar, S.; Katabi, D. Decimeter-Level Localization with a Single WiFi Access Point. In Proceedings of the USENIX Symposium on Networked Systems Design and Implementation, Santa Clara, CA, USA, 16–18 March 2016; pp. 165–178. [Google Scholar]
  3. Zafari, F.; Papapanagiotou, I.; Christidis, K. Microlocation for internet-of-things-equipped smart buildings. IEEE Internet Things J. 2016, 3, 96–112. [Google Scholar] [CrossRef]
  4. Grisetti, G.; Stachniss, C.; Burgard, W. Improved techniques for grid mapping with rao-blackwellized particle filters. IEEE Trans. Robot. 2007, 23, 34–46. [Google Scholar] [CrossRef]
  5. Kohlbrecher, S.; von Stryk, O.; Meyer, J.; Klingauf, U. A flexible and scalable SLAM system with full 3D motion estimation. In Proceedings of the IEEE International Symposium on Safety, Security, and Rescue Robotics, Kyoto, Japan, 1–5 November 2011; pp. 155–160. [Google Scholar]
  6. Ren, R.; Fu, H.; Wu, M. Large-scale outdoor SLAM based on 2D lidar. Electronics 2019, 8, 613. [Google Scholar] [CrossRef]
  7. Chintalapudi, K.; Padmanabha Iyer, A.; Padmanabhan, V.N. Indoor localization without the pain. In Proceedings of the Annual International Conference on Mobile Computing and Networking, MOBICOM, Chicago, IL, USA, 20–24 September 2010; pp. 173–184. [Google Scholar]
  8. Gomez, C.; Oller, J.; Paradells, J. Overview and evaluation of bluetooth low energy: An emerging low-power wireless technology. Sensors 2012, 12, 11734–11753. [Google Scholar] [CrossRef]
  9. Aiello, G.R.; Rogerson, G.D. Ultra-wideband wireless systems. IEEE Microw. Mag. 2003, 4, 36–47. [Google Scholar] [CrossRef]
  10. Chehri, A.; Fortier, P.; Tardif, P.M. UWB-based sensor networks for localization in mining environments. Ad Hoc Netw. 2009, 7, 987–1000. [Google Scholar] [CrossRef]
  11. Norrdine, A. An Algebraic Solution to the Multilateration Problem. In Proceedings of the 2012 International Conference on Indoor Positioning and Indoor Navigation, Sydney, Australia, 13–15 November 2012; pp. 1–4. [Google Scholar]
  12. Bodrumlu, T.; Caliskan, F. Indoor Position Estimation Using Ultrasonic Beacon Sensors and Extended Kalman Filter. Eng. Proc. 2022, 27, 16. [Google Scholar]
  13. Bodrumlu, T.; Çalışkan, F. Enhancing Indoor Position Estimation Accuracy: Integration of IMU, Raw Distance Data, and Extended Kalman Filter with Comparison to Vicon Indoor Positioning System Data. In Proceedings of the 10th International Electronic Conference on Sensors and Applications, Online, 15–30 November 2023; MDPI: Basel, Switzerland, 2023. [Google Scholar] [CrossRef]
  14. Fung, M.L.; Chen, M.Z.Q.; Chen, Y.H. Sensor fusion: A review of methods and applications. In Proceedings of the 2017 29th Chinese Control and Decision Conference (CCDC), Chongqing, China, 28–30 May 2017; pp. 3853–3860. [Google Scholar]
  15. Krishnaveni, B.V.; Reddy, K.S.; Reddy, P.R. Indoor Tracking by Adding IMU and UWB Using Unscented Kalman Filter. Wirel. Pers. Commun. 2022, 123, 3575–3596. [Google Scholar] [CrossRef]
  16. You, W.; Li, F.; Liao, L.; Huang, M. Data Fusion of UWB and IMU Based on Unscented Kalman Filter for Indoor Localization of Quadrotor UAV. IEEE Access 2020, 8, 64971–64981. [Google Scholar] [CrossRef]
  17. Benini, A.; Mancini, A.; Marinelli, A.; Longhi, S. A Biased Extended Kalman Filter for Indoor Localization of a Mobile Agent using Low-Cost IMU and UWB Wireless Sensor Network. In Proceedings of the 10th IFAC Symposium on Robot Control International Federation of Automatic Control, Dubrovnik, Croatia, 5–7 September 2012. [Google Scholar]
  18. Kwon, S.-G.; Kwon, O.-J.; Kwon, K.-R.; Lee, S.-H. UWB and MEMS IMU Integrated Positioning Algorithm for a Work-Tool Tracking System. Appl. Sci. 2021, 11, 8826. [Google Scholar] [CrossRef]
  19. Oh, K.H.; Ahn, H.S. Extended Kalman Filter with Multi-frequency Reference Data for Quadrotor Navigation. In Proceedings of the 2015 15th International Conference on Control, Automation and Systems (ICCAS 2015), Busan, Republic of Korea, 13–16 October 2015. [Google Scholar]
  20. Asl, H.G.; Pourtakdoust, S.H. UD Covariance Factorization for Unscented Kalman Filter using Sequential Measurements Update. Int. J. Aerosp. Mech. Eng. 2007, 1, 564–572. [Google Scholar]
  21. D’souza, C.; Zanetti, R. Information Formulation of the UDU Kalman Filter. IEEE Trans. Aerosp. Electron. Syst. 2019, 55, 493–498. [Google Scholar] [CrossRef] [PubMed]
  22. Carlson, N. Fast triangular factorization of the square root filter. AIAA J. 1973, 11, 1259–1265. [Google Scholar] [CrossRef]
  23. Marvelmind Indoor Navigation System Operating Manual. Available online: https://marvelmind.com/wp-content/uploads/2017/08/marvelmind_navigation_system_manual.pdf (accessed on 20 July 2017).
  24. PixHawk and Marvelmind Integration Manual. Available online: https://marvelmind.com/pics/PixHawk_Marvelmind_Integration_Manual.pdf (accessed on 1 December 2017).
Figure 1. General structure of position estimation.
Figure 2. Reference points and interval measurements.
Figure 3. Displaying used sensors on the interface.
Figure 4. Pixhawk physical wiring diagram with the Marvelmind sensor [24].
Figure 5. (a) Pixhawk physical connection with the Marvelmind sensor; (b) Marvelmind sensor fixed on the wall.
Figure 6. Position-estimation-only trilateration algorithm and trilateration algorithm with the EKF in Trajectory 1.
Figure 7. Position-estimation-only trilateration algorithm and trilateration algorithm with the EKF in Trajectory 2.
Figure 8. Position-estimation-only trilateration algorithm and trilateration algorithm with the EKF in Trajectory 3.
Figure 9. Position-estimation-only trilateration algorithm and trilateration algorithm with the EKF in Trajectory 4.
Figure 10. Position-estimation-only trilateration algorithm and trilateration algorithm with the EKF in Trajectory 5.
Figure 11. Position-estimation-only trilateration algorithm and trilateration algorithm with the EKF in Trajectory 6.
Figure 12. Position-estimation-only trilateration algorithm and trilateration algorithm with the EKF in Trajectory 7.
Figure 13. Position-estimation-only trilateration algorithm and trilateration algorithm with the EKF in Trajectory 8.
Table 1. Marvelmind sensor data structure [23].

Message | Message Variable | Data Type | Description of Data
beacon_distance | dist_m | float64 | Raw distance data of beacon, in meters
beacon_distance | add_hedge | uint8 | Address no. of mobile beacon
beacon_distance | add_beacon | uint8 | Address no. of stationary beacon
hedge_imu_raw | time_var | int64 | Timestamp of IMU data
hedge_imu_raw | accel_x | int16 | Accelerometer x data
hedge_imu_raw | accel_y | int16 | Accelerometer y data
hedge_imu_raw | accel_z | int16 | Accelerometer z data
Table 2. Trajectories of the tests.

Trajectory ID | Beacon Number | Trajectories
1 | 5 (4 stationary + 1 mobile) | Trajectory image i001
2 | 5 (4 stationary + 1 mobile) | Trajectory image i002
3 | 5 (4 stationary + 1 mobile) | Trajectory image i003
4 | 4 (3 stationary + 1 mobile) | Trajectory image i001
5 | 4 (3 stationary + 1 mobile) | Trajectory image i002
6 | 4 (3 stationary + 1 mobile) | Trajectory image i003
7 | 5 (4 stationary + 1 mobile) | Trajectory image i001, with a different z value
8 | 4 (3 stationary + 1 mobile) | Trajectory image i001, with a different z value
Table 3. Error tables.

Trajectory ID | Algorithm | Min Error (m) | Mean Error (m) | Max Error (m)
1 | Only geometric approach | 0.0734 | 0.3356 | 0.9940
1 | Geometric approach with the EKF (UDU) | 0.0272 | 0.0617 | 0.1439
2 | Only geometric approach | 0.0776 | 0.4533 | 1.7680
2 | Geometric approach with the EKF (UDU) | 0.0177 | 0.0827 | 0.1686
3 | Only geometric approach | 0.3536 | 1.2583 | 5.0031
3 | Geometric approach with the EKF (UDU) | 0.0191 | 0.1034 | 0.2148
4 | Only geometric approach | 0.1142 | 0.3628 | 1.1472
4 | Geometric approach with the EKF (UDU) | 0.0430 | 0.2070 | 0.4387
5 | Only geometric approach | 0.1348 | 0.5312 | 2.1322
5 | Geometric approach with the EKF (UDU) | 0.0752 | 0.1660 | 0.3361
6 | Only geometric approach | 0.4612 | 1.436 | 5.4721
6 | Geometric approach with the EKF (UDU) | 0.1141 | 0.2378 | 0.4796
7 | Only geometric approach | 0.0682 | 0.3781 | 1.2343
7 | Geometric approach with the EKF (UDU) | 0.0305 | 0.1312 | 0.1980
8 | Only geometric approach | 0.0836 | 0.4116 | 1.3841
8 | Geometric approach with the EKF (UDU) | 0.0504 | 0.2162 | 0.3226
Table 4. Process time difference of the EKF vs. UDU-EKF.

Trajectory | UDU-EKF (s) | EKF (s) | Percentage of Difference
1 | 0.0094 | 0.0119 | 21%
2 | 0.0076 | 0.0101 | 25%
3 | 0.0099 | 0.0121 | 18%
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

