Article

A Human Gait Tracking System Using Dual Foot-Mounted IMU and Multiple 2D LiDARs

Department of Electrical, Electronic and Computer Engineering, University of Ulsan, Ulsan 44610, Korea
* Author to whom correspondence should be addressed.
Sensors 2022, 22(17), 6368; https://doi.org/10.3390/s22176368
Submission received: 20 July 2022 / Revised: 22 August 2022 / Accepted: 22 August 2022 / Published: 24 August 2022
(This article belongs to the Collection Inertial Sensors and Applications)

Abstract

This paper proposes a human gait tracking system using a dual foot-mounted IMU and multiple 2D LiDARs. The combined system aims to overcome the disadvantages of each single-sensor system (the short tracking range of a single 2D LiDAR and the drift errors of the IMU system). The LiDARs act as anchors to mitigate the errors of an inertial navigation algorithm. In our system, two 2D LiDARs are used. LiDAR 1 is placed around the starting point, and LiDAR 2 is placed at the ending point (in straight walking) or at the turning point (in rectangular path walking). Using LiDAR 1, we can estimate the initial heading and position of each IMU without any calibration process. We also propose a method to calibrate two LiDARs that are placed far apart. The measurements from the two LiDARs can then be combined in a Kalman filter and a smoother algorithm to correct the two estimated feet trajectories. If straight walking is detected, we update the current stride heading and the foot position using the previous stride headings; this is then used as a measurement update in the Kalman filter. In the smoother algorithm, a step width constraint is used as a measurement update. We evaluate the stride length estimation through a straight walking experiment along a corridor. The root mean square errors compared with an optical tracking system are less than 3 cm. The performance of the proposed method is also verified with a rectangular path walking experiment.

1. Introduction

Human gait monitoring and analysis are important in clinical rehabilitation [1,2], functional mobility diagnostics [3,4], and sports training [5,6,7]. With appropriate technological tools, the gait phase and human gait parameters can be determined through gait analysis. The Vicon optical tracking system [8] and the GAITRite system [9] are the gold standards of gait analysis since they provide accurate measurements. The optical system captures the positions of passive reflective markers attached to the human body using multiple cameras, while the GAITRite system measures gait parameters via an electronic force platform. However, these standard systems are bulky, expensive, and require a long time for setup and post-processing of the data. It may be difficult to deploy these evaluation systems in rural areas or remote locations.
Wearable sensors, such as inertial sensors or inertial measurement units (IMUs), are used as low-cost alternative instruments in gait analysis [10]. They are small and lightweight, have low power consumption, and are integrated in most smart wearable devices. They can be attached to various parts of the human body, including the foot, waist, chest, and head. The gait parameters are then estimated from the recorded movement signals. For example, the walking step length can be estimated with a waist-mounted IMU using a linear regression model [11]; this method requires a calibration process to construct the step length model. A common method for a foot-mounted IMU is the numerical integration of inertial data to estimate foot velocity and position. However, the drawback of a low-cost IMU is sensor noise, which results in accumulated integration errors. The most widely used technique to reduce these errors is the zero-velocity update (ZVU) [12,13], which assumes that the foot velocity is zero when it contacts the ground. Once the stance phase is detected, a ZVU is used as a pseudo measurement in the Kalman filter to reset the errors. However, as the position and heading are still unobservable, the heading drift grows over time, and the positioning accuracy decreases.
Some researchers use a dual foot-mounted IMU system to reduce the symmetrical heading errors [14,15]. There are two problems in a dual foot-mounted IMU system. The first is obtaining the initial heading and position of each IMU, which is usually solved with a calibration process. The second is the heading drift over time. Using mapping information, a heuristic drift elimination (HDE) algorithm has been proposed to update the heading with the dominant direction of the building [16,17]. This method cannot be applied to large spaces, where the building direction is not available. Based on the relative position of the two feet, some distance constraints have been proposed as measurements in the Kalman filter framework [18,19,20,21,22]. These methods update the stride heading and foot position if straight walking is detected, or use the maximum range of the stride length or the two-foot step length to constrain the divergence of the moving foot. However, the heading cannot be effectively corrected, especially in the turning phase.
In human gait analysis, a 2D LiDAR is also used to track the lower limbs. The LiDAR does not require external markers and is simple to install. It can be mounted on a moving platform such as a mobile robot [23,24] or a smart walker [25,26], or it can be fixed on the ground at the height of the human shin [27,28,29,30]. Using a LiDAR on a moving platform requires additional sensors as well as a control system; such a system is complicated and bulky. A system with the LiDAR fixed on the ground is simpler: it scans the human legs and estimates the leg center points during walking. The LiDAR-based human leg estimation is quite accurate and contains the walking direction with respect to the local environment. However, the effective human leg tracking range of a single 2D LiDAR is limited to a few meters. In [31], a fusion system of multiple 2D LiDARs was proposed to extend the human leg tracking range. However, the LiDARs cannot be placed far apart, since the continuity of the leg scan data must be guaranteed; therefore, the tracking range is still limited. For example, for 20 m of straight walking, three LiDARs need to be used.
The aim of this paper is to combine multiple 2D LiDARs with a dual foot-mounted IMU system to overcome the disadvantage of each single system. The proposed system can increase the human gait tracking range and reduce the heading drift of the inertial sensor system. Two 2D LiDARs are used in this paper. The first one is placed around the starting point. The other one is placed based on the walking route: at the ending point in straight walking or at the turning point in rectangular path walking. The LiDAR-based human leg center point estimation is then used as a measurement update of the inertial sensor system. The main contributions of this paper are as follows:
  • Multiple 2D LiDARs are combined in the dual foot-mounted IMU system for heading and position correction;
  • The initial heading and position of each IMU are automatically estimated using the first LiDAR without any calibration process;
  • A calibration algorithm of two 2D LiDARs (placed far apart) is used, and the calibration parameters are then used to transform the human leg center point into the same coordinate system;
  • In contrast with the HDE algorithm, which uses the building heading information, we propose a simple walking stride heading update algorithm when straight walking is detected. The foot position estimation from the updated heading is then used as a measurement update of the Kalman filter;
  • The estimated errors of the Kalman filter are then compensated using a quadratic optimization-based smoother algorithm [32]. A constraint of the step width is also proposed in the smoother algorithm if the straight walking is detected.

2. System Overview

The proposed system is configured as shown in Figure 1. The two foot-mounted IMU systems are synchronized, and each includes a microSD card to store the inertial data. Two 2D LiDARs are placed horizontally on the ground to scan the human legs. The data from the IMUs and LiDARs are then post-processed using Matlab.
As shown in Figure 1, there are four sensor coordinate systems: two IMUs and two LiDARs. To track the human gait, we define the global coordinate system and two foot body coordinate systems. The left/right foot body coordinate systems are set to coincide with the corresponding left/right IMU coordinate systems. Subscripts L and R denote the left and right side, respectively. The three axes of the global coordinate system are chosen as follows: the x and y axes coincide with those of the LiDAR 1 coordinate system, the z axis points upwards, and the origin is the projection of the LiDAR 1 origin onto the horizontal plane of the two IMUs. The height of the global coordinate system is shifted so that the initial z positions of the two feet are zero. For a given vector, we sometimes use subscripts b (body) and w (global) to emphasize that the vector is expressed in that particular coordinate system.
The inertial sensor outputs are the acceleration $y_a \in \mathbb{R}^3$ and the angular velocity $y_g \in \mathbb{R}^3$, modeled as follows:

$$y_a = a_b(t) + C_w^b(t)\,\tilde{g} + v_a(t), \qquad y_g = \omega_b(t) + v_g(t), \tag{1}$$

where $a_b \in \mathbb{R}^3$ is the acceleration produced by forces other than the gravitational field; $\tilde{g} = \begin{bmatrix} 0 & 0 & 9.8 \end{bmatrix}^T$ is the gravitational vector in the global coordinate system; $C_w^b \in \mathbb{R}^{3\times 3}$ is the rotation matrix from the global coordinate system to the body coordinate system; $\omega_b \in \mathbb{R}^3$ is the body angular velocity; and $v_a \in \mathbb{R}^3$ and $v_g \in \mathbb{R}^3$ are the accelerometer and gyroscope measurement noises, respectively. We assume that $v_a$ and $v_g$ are white Gaussian noises, whose covariances are given by:

$$R_a = E\{v_a v_a^T\}, \qquad R_g = E\{v_g v_g^T\}. \tag{2}$$
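As a quick illustration of this measurement model, the Python sketch below generates one noisy accelerometer/gyroscope sample following (1) and (2); the motion inputs are illustrative (a foot at rest with IMU axes aligned to the global frame), and the noise covariances are those of Table 2.

```python
import numpy as np

g_tilde = np.array([0.0, 0.0, 9.8])              # gravity in the global frame
R_a, R_g = 0.01 * np.eye(3), 0.001 * np.eye(3)   # noise covariances of Eq. (2)

def imu_sample(a_b, C_wb, omega_b, rng):
    """One accelerometer/gyroscope measurement following Eq. (1)."""
    v_a = rng.multivariate_normal(np.zeros(3), R_a)
    v_g = rng.multivariate_normal(np.zeros(3), R_g)
    y_a = a_b + C_wb @ g_tilde + v_a             # acceleration + rotated gravity + noise
    y_g = omega_b + v_g                          # angular rate + noise
    return y_a, y_g

rng = np.random.default_rng(0)
# A foot at rest with the IMU axes aligned to the global frame:
y_a, y_g = imu_sample(np.zeros(3), np.eye(3), np.zeros(3), rng)
print(y_a)  # approximately [0, 0, 9.8]
print(y_g)  # approximately [0, 0, 0]
```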
The block diagram of our proposed system is shown in Figure 2. First, the sensor data are preprocessed. The single-LiDAR processing module estimates the human leg trajectories using the LiDAR scan data; the leg positions are expressed in each LiDAR coordinate system. The first calibration algorithm calibrates the two LiDAR coordinate systems so that the leg positions from LiDAR 2 can be transformed into the LiDAR 1 coordinate system. The second calibration algorithm uses the IMU and LiDAR 1 data to output the initial rotation angles and position of each IMU in the LiDAR 1 coordinate system. Then, the Kalman filter and the smoother algorithm are used to estimate the two feet trajectories. In the Kalman filter, the zero-velocity update (ZVU) and the LiDAR-based stance foot positions are used as measurement updates. A simple straight walking detection algorithm is also proposed to correct the heading estimation and the foot position. A smoother algorithm is then used to reduce the estimation error and smooth the trajectories of the two feet. In this smoother algorithm, the straight-walking-based heading and position update is replaced by the step width constraint measurement update.

3. Data Preprocessing

The aim of this section is to estimate the human leg center points from the LiDARs and to estimate the initial heading and position of each IMU in the global system. In this paper, the symbols $|\cdot|$ and $\|\cdot\|$ represent the absolute value and the Euclidean norm, respectively. We also denote the estimated value, the true value, and the error of $x$ by $\hat{x}$, $\tilde{x}$, and $x_e$, respectively, where $\tilde{x} = \hat{x} + x_e$.

3.1. Single LiDAR-Based Human Leg Center Point Estimation

The LiDAR scan data are used to estimate the human leg center point trajectories using the algorithm in [27,31]. The outputs of this module are the human leg positions in the stance phases, which are represented by diamond shapes in Figure 3.
Let $p_{i,j,k_j}^{L_i}$ denote the left and right leg positions in the stance phase expressed in the LiDAR $i$ coordinate system, where $L_i$ represents LiDAR $i$, $j = L, R$ denotes the left or right leg, and $k_j = 0, 1, 2, \ldots$ denotes the sequence of stance phases of leg $j$. For example, $k_L = 0$ means the left foot is in the initial standing still period. Since these leg positions are expressed in different LiDAR coordinate systems, the pose between the two LiDARs is computed in the next section.

3.2. Calibration 1: Two LiDARs Calibration

The calibration parameters of the two LiDARs are the rotation angle and the translation vector. In our previous research on human gait tracking using multiple LiDARs [31], the working space was shared by the LiDARs; therefore, a cylinder could be placed arbitrarily in the common scanning space to calibrate the system. In this research, the two LiDARs are placed far apart and there is no common scan data. Therefore, instead of an arbitrary placement, we place the cylinder at fixed points whose geometric relationship is already known.
The calibration configuration is shown in Figure 4. A cylinder with measured radius $\hat{r}_c$ is placed at four points: $A_1$, $B_1$ (in the LiDAR 1 scanning range) and $A_2$, $B_2$ (in the LiDAR 2 scanning range). These four points are manually calibrated to identify the following parameters: the intersection $O$ of $A_1B_1$ and $A_2B_2$, the angle $\hat{\alpha}$ between $A_1B_1$ and $A_2B_2$, and the corresponding distance $\hat{d}_{m_i}$ from $O$ to each point, where $m = A, B$ and $i = 1, 2$. We also denote the measurement errors of $\tilde{r}_c$, $\tilde{\alpha}$, and $\tilde{d}_{m_i}$ by $r_{c,e}$, $\alpha_e$, and $d_e$, respectively.
The coordinate systems of the two LiDARs are represented by $O_1x_1y_1$ and $O_2x_2y_2$. The calibration parameters are the translation vector $T_{12}$ from $O_1$ to $O_2$ and the rotation angle $\theta_{12}$ from the LiDAR 2 coordinate system to the LiDAR 1 coordinate system. To account for the uncertainties of the LiDAR data measurement and the measured configuration parameters, we propose an iterative algorithm to compensate the calibration parameter errors. The state vector $x_c$ consists of the calibration parameters and the cylinder's estimated center points at points $A_1$ and $B_1$:

$$x_c = \begin{bmatrix} \theta_{12} & T_{12}^T & (c_{A_1}^{L_1})^T & (c_{B_1}^{L_1})^T \end{bmatrix}^T \in \mathbb{R}^7. \tag{3}$$
Let $s_{m_i,k}^{L_i}$ represent the $k$-th cylinder scan data point of LiDAR $i$ at position $m_i$. At each position, the total number of scan data points is $N_{m_i}$.

3.2.1. Initialization

We first initialize the cylinder center points using a least-squares fitting algorithm. The initial cylinder center points $\hat{c}_{m_i,0}^{L_i}$ are then used to estimate the initial calibration parameters as follows:
  • The unit vector of each line is estimated by: $\hat{v}^{L_i} = \dfrac{\hat{c}_{A_i,0}^{L_i} - \hat{c}_{B_i,0}^{L_i}}{\|\hat{c}_{A_i,0}^{L_i} - \hat{c}_{B_i,0}^{L_i}\|}$, $i = 1, 2$;
  • The initial rotation angle is estimated by: $\hat{\theta}_{12,0} = \angle\big(\hat{v}^{L_1}, R(\hat{\alpha})\hat{v}^{L_2}\big)$;
  • The initial translation vector is estimated by:
    $$\hat{T}_{12,0} = \hat{c}_{A_1,0}^{L_1} + \hat{d}_{A_1}\hat{v}^{L_1} + \hat{d}_{A_2}R(\hat{\alpha})\hat{v}^{L_1} - R(\hat{\theta}_{12,0})\,\hat{c}_{A_2,0}^{L_2}.$$
In the above equations, $R(\varphi)$ is the rotation matrix of an angle $\varphi$ with respect to the z-axis: $R(\varphi) = \begin{bmatrix} \cos\varphi & -\sin\varphi \\ \sin\varphi & \cos\varphi \end{bmatrix}$.
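A minimal Python sketch of this initialization step follows, assuming the reconstruction of the formulas above; in particular, the subtraction order in the unit vectors and the signed-angle convention are assumptions, and the fitted centers and hand-measured configuration values in the usage example are purely illustrative.

```python
import numpy as np

def fit_circle_center(points):
    """Algebraic (Kasa) least-squares circle fit; returns the 2D center."""
    A = np.column_stack([2.0 * points, np.ones(len(points))])
    b = (points ** 2).sum(axis=1)
    sol, *_ = np.linalg.lstsq(A, b, rcond=None)
    return sol[:2]

def rot(phi):
    """Rotation matrix R(phi) about the z-axis."""
    c, s = np.cos(phi), np.sin(phi)
    return np.array([[c, -s], [s, c]])

def signed_angle(a, b):
    """Signed angle from vector a to vector b."""
    return np.arctan2(a[0] * b[1] - a[1] * b[0], a @ b)

def init_two_lidar_calibration(cA1, cB1, cA2, cB2, alpha, dA1, dA2):
    """Initial rotation angle and translation between the two LiDAR frames."""
    v1 = (cA1 - cB1) / np.linalg.norm(cA1 - cB1)   # direction of A1B1 in LiDAR 1
    v2 = (cA2 - cB2) / np.linalg.norm(cA2 - cB2)   # direction of A2B2 in LiDAR 2
    theta12 = signed_angle(rot(alpha) @ v2, v1)    # initial rotation angle
    T12 = cA1 + dA1 * v1 + dA2 * (rot(alpha) @ v1) - rot(theta12) @ cA2
    return theta12, T12

# Illustrative fitted centers and hand-measured configuration values:
theta12, T12 = init_two_lidar_calibration(
    cA1=np.array([1.0, 0.5]), cB1=np.array([2.0, 0.5]),
    cA2=np.array([-1.5, 0.8]), cB2=np.array([-0.5, 0.8]),
    alpha=np.deg2rad(30.0), dA1=3.0, dA2=4.0)
```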

3.2.2. Iterative Algorithm

The error-state vector is defined as follows:

$$x_{c,e} = \begin{bmatrix} \theta_{12,e} & T_{12,e}^T & (c_{A_1,e}^{L_1})^T & (c_{B_1,e}^{L_1})^T \end{bmatrix}^T \in \mathbb{R}^7.$$
The measurement equations are then identified using the LiDAR scan data.
For LiDAR 1, the circle fitting equation is as follows:

$$\tilde{r}_c = \|\tilde{s}_{m_1,k}^{L_1} - \tilde{c}_{m_1}^{L_1}\|, \quad m = A, B. \tag{4}$$

By inserting the measurement errors, we can identify the measurement equation for each scan data point of LiDAR 1 as follows:

$$z_{m_1,k} = \hat{r}_c - \|s_{m_1,k}^{L_1} - \hat{c}_{m_1}^{L_1}\| = -h_{m_1,k}\,c_{m_1,e}^{L_1} - h_{m_1,k}\,v_{s_{m_1,k}} - r_{c,e}, \tag{5}$$

with $h_{m_1,k} = \dfrac{(s_{m_1,k}^{L_1} - \hat{c}_{m_1}^{L_1})^T}{\|s_{m_1,k}^{L_1} - \hat{c}_{m_1}^{L_1}\|}$, where $v_{s_{m_1,k}}$ is the measurement error of the sample data point $s_{m_1,k}^{L_1}$.
For LiDAR 2, we fit the circle after transforming the scan data into the LiDAR 1 coordinate system:

$$\tilde{r}_c = \|R(\tilde{\theta}_{12})\tilde{s}_{m_2,k}^{L_2} + \tilde{T}_{12} - \tilde{c}_{m_2}^{L_1}\|, \quad m = A, B, \tag{6}$$

where $\tilde{c}_{m_2}^{L_1} = \tilde{c}_{A_1}^{L_1} + \tilde{d}_{A_1}\tilde{v}^{L_1} + \tilde{d}_{m_2}R(\tilde{\alpha})\tilde{v}^{L_1}$. The small error of the unit vector $\tilde{v}^{L_1}$ can be represented by a small angle $\beta_e$ as follows: $\tilde{v}^{L_1} = R(\beta_e)\hat{v}^{L_1} \approx \hat{v}^{L_1} + \hat{u}^{L_1}\beta_e$, with $\hat{u}^{L_1} = \begin{bmatrix} 0 & -1 \\ 1 & 0 \end{bmatrix}\hat{v}^{L_1}$. The rotation matrix is approximated as follows: $R(\tilde{\theta}) = R(\hat{\theta} + \theta_e) \approx R(\hat{\theta}) + G(\hat{\theta})\theta_e$, where $G(\hat{\theta}) = R(\hat{\theta})\begin{bmatrix} 0 & -1 \\ 1 & 0 \end{bmatrix}$. By inserting the measurement errors, (6) becomes:

$$\hat{r}_c + r_{c,e} = \Big\| \big(R(\hat{\theta}_{12}) + G(\hat{\theta}_{12})\theta_e\big)\big(s_{m_2,k}^{L_2} - v_{s_{m_2,k}}\big) + \hat{T}_{12} + T_{12,e} - \Big(\hat{c}_{A_1}^{L_1} + c_{A_1,e}^{L_1} + (\hat{d}_{A_1} + d_e)(\hat{v}^{L_1} + \hat{u}^{L_1}\beta_e) + (\hat{d}_{m_2} + d_e)\big(R(\hat{\alpha}) + G(\hat{\alpha})\alpha_e\big)(\hat{v}^{L_1} + \hat{u}^{L_1}\beta_e)\Big) \Big\|.$$
The measurement equation at point $m_2$, $m = A, B$, is as follows:

$$z_{m_2,k} = \hat{r}_c - \|f_{m_2,k}\| = h_{m_2,k}\Big(G(\hat{\theta}_{12})s_{m_2,k}^{L_2}\theta_{12,e} + T_{12,e} - c_{A_1,e}^{L_1} - R(\hat{\theta}_{12})v_{s_{m_2,k}} - \big(\hat{v}^{L_1} + R(\hat{\alpha})\hat{v}^{L_1}\big)d_e - \big(\hat{d}_{A_1}\hat{u}^{L_1} + \hat{d}_{m_2}R(\hat{\alpha})\hat{u}^{L_1}\big)\beta_e - \hat{d}_{m_2}G(\hat{\alpha})\hat{v}^{L_1}\alpha_e\Big) - r_{c,e}, \tag{7}$$

where $f_{m_2,k} = R(\hat{\theta}_{12})s_{m_2,k}^{L_2} + \hat{T}_{12} - \big(\hat{c}_{A_1}^{L_1} + \hat{d}_{A_1}\hat{v}^{L_1} + \hat{d}_{m_2}R(\hat{\alpha})\hat{v}^{L_1}\big)$ and $h_{m_2,k} = \dfrac{f_{m_2,k}^T}{\|f_{m_2,k}\|}$.
By combining (5) and (7), the measurement equation for all four positions of the cylinder is as follows:

$$z_c = H_c x_{c,e} + B_c w_c, \tag{8}$$

where $z_c = \begin{bmatrix} \hat{r}_c - \|s_{m_1,k}^{L_1} - \hat{c}_{m_1}^{L_1}\| \\ \hat{r}_c - \|f_{m_2,k}\| \end{bmatrix} \in \mathbb{R}^{N_a \times 1}$, $w_c = \begin{bmatrix} v_{s_{m_1,k}} \\ v_{s_{m_2,k}} \\ r_{c,e} \\ d_e \\ \beta_e \\ \alpha_e \end{bmatrix} \in \mathbb{R}^{2N_a + 4}$, and $H_c \in \mathbb{R}^{N_a \times 7}$ and $B_c \in \mathbb{R}^{N_a \times (2N_a + 4)}$ can be identified using (5) and (7). Here, $N_a = \sum_{m = A, B;\, i = 1, 2} N_{m_i}$ is the total number of sample scan data points at the four cylinder positions. We assume that the measurement noises are zero-mean white Gaussian noises, whose covariances are given by:

$$R_{r_c} = E\{r_{c,e}^T r_{c,e}\} \in \mathbb{R}, \quad R_d = E\{d_e^T d_e\} \in \mathbb{R}, \quad R_\beta = E\{\beta_e^T \beta_e\} \in \mathbb{R}, \quad R_\alpha = E\{\alpha_e^T \alpha_e\} \in \mathbb{R}. \tag{9}$$

The covariance of the LiDAR measurements can be found in [31]. Then, we can compute the covariance matrix of the measurement noises of (8) as follows:

$$Q_c = B_c E\{w_c w_c^T\} B_c^T. \tag{10}$$

The error-state vector is estimated as follows:

$$x_{c,e} = (H_c^T Q_c^{-1} H_c)^{-1} H_c^T Q_c^{-1} z_c. \tag{11}$$

The state vector (3) is then updated using $x_{c,e}$. Using the updated cylinder center points at $A_1$ and $B_1$, the unit vector $\hat{v}^{L_1}$ is recalculated at each iteration. This process is iterated until the stop condition is satisfied:

$$\|x_{c,e}\| < \gamma. \tag{12}$$
Using the calibration parameters, we can transform the stance leg positions from the LiDAR 2 coordinate system into the LiDAR 1 coordinate system as follows:

$$p_{2,j,k_j}^{L_1} = R(\hat{\theta}_{12})\,p_{2,j,k_j}^{L_2} + \hat{T}_{12}, \quad j = L, R. \tag{13}$$
Figure 5 shows an example of the transformation result for 20 m of straight walking. The stance leg positions are transformed from LiDAR 2 into the LiDAR 1 coordinate system.
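The iteration in (8)-(12) is a generalized least-squares loop. The following Python sketch shows its skeleton; `build_residuals`, which linearizes (5)-(7) at the current estimate, is left as a user-supplied placeholder, so this is a structural sketch rather than the full calibration.

```python
import numpy as np

def gls_step(z, H, B, W):
    """One generalized least-squares error estimate, Eqs. (8)-(11)."""
    Q = B @ W @ B.T                      # Eq. (10): measurement covariance
    Qi = np.linalg.inv(Q)
    return np.linalg.solve(H.T @ Qi @ H, H.T @ Qi @ z)   # Eq. (11)

def iterative_calibration(build_residuals, x0, gamma=1e-3, max_iter=50):
    """Iterate the error-state estimate until ||x_e|| < gamma, Eq. (12).

    build_residuals(x) must return (z, H, B, W) linearized at x,
    following Eqs. (5)-(7); it is not implemented here.
    """
    x = np.array(x0, dtype=float)
    for _ in range(max_iter):
        z, H, B, W = build_residuals(x)
        x_e = gls_step(z, H, B, W)
        x += x_e                         # update the Eq. (3) state vector
        if np.linalg.norm(x_e) < gamma:
            break
    return x
```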

3.3. Calibration 2: Initial Heading and Position of Each IMU Estimation

The initial rotation angle and position of each IMU in the LiDAR 1 coordinate system are automatically estimated using the first walking stride data from the IMUs and LiDAR 1. To estimate the initial rotation angle, we use the TRIAD algorithm [33]. This algorithm requires two pairs of reference and corresponding observation unit vectors. The two reference vectors in the LiDAR 1 coordinate system are the gravitational vector $\tilde{g}$ and the first walking stride direction $\begin{bmatrix} (S_{1,j,1}^{L_1})^T & 0 \end{bmatrix}^T$ estimated from LiDAR 1, where $S_{1,j,1}^{L_1} = p_{1,j,1} - p_{1,j,0}$, $j = L, R$. The two corresponding observation vectors are the average accelerometer output during one second of standing still, $\bar{y}_{a,j,0}$, and the first walking stride direction estimated from each IMU, $\begin{bmatrix} (S_{j,1}^{I_j})^T & 0 \end{bmatrix}^T$. To estimate the IMU-based first walking stride direction, a simple inertial navigation algorithm with ZVU is applied to the first walking stride data of each IMU. The initial rotation matrix can be estimated using the following equations:

$$\bar{y}_{a,j,0} = C_{w,0}^b\,\tilde{g}, \qquad \begin{bmatrix} S_{j,1}^{I_j} \\ 0 \end{bmatrix} = C_{w,0}^b \begin{bmatrix} S_{1,j,1}^{L_1} \\ 0 \end{bmatrix}, \quad j = L, R. \tag{14}$$

After that, we can estimate the initial rotation angles using $C_{w,0}^b$.
The initial position of each IMU ($r_{j,0} = \begin{bmatrix} x_{j,0} & y_{j,0} & 0 \end{bmatrix}^T$, $j = L, R$) is estimated using the LiDAR-based human leg center point estimates (Figure 6). We assume that the human leg is vertically straight during the initial standing still phase, as well as during the mid-stance phase of walking. Therefore, the ankle joint $K_j$ can be approximated as the projection of the leg center point onto the $xy$ plane. Assuming that the coordinate of $K_j$ in IMU $j$, $r_{K_j}^{I_j}$, is pre-calibrated, the initial position of each IMU in the LiDAR 1 coordinate system can be computed by:

$$r_{j,0} = \begin{bmatrix} p_{1,j,0} \\ 0 \end{bmatrix} + (C_{w,0}^b)^T r_{K_j}^{I_j}, \quad j = L, R. \tag{15}$$
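The TRIAD step in (14) admits a compact implementation. The sketch below is a generic TRIAD routine plus the initial position computation of (15); the stride vectors, accelerometer mean, and ankle lever arm $r_{K_j}^{I_j}$ used in the example are hypothetical values, not the paper's data.

```python
import numpy as np

def triad(r1, r2, b1, b2):
    """TRIAD attitude: returns C with b ≈ C r for the two vector pairs.

    (r1, b1) is the dominant pair (gravity here); (r2, b2) is the first
    walking stride direction pair.
    """
    def frame(u, v):
        t1 = u / np.linalg.norm(u)
        t2 = np.cross(u, v)
        t2 /= np.linalg.norm(t2)
        return np.column_stack([t1, t2, np.cross(t1, t2)])
    return frame(b1, b2) @ frame(r1, r2).T

g_ref = np.array([0.0, 0.0, 9.8])          # reference 1: gravity, LiDAR 1 frame
stride_ref = np.array([1.0, 0.1, 0.0])     # reference 2: first stride from LiDAR 1
ybar_a0 = np.array([0.3, 0.2, 9.79])       # observation 1: mean accel, standing still
stride_imu = np.array([0.9, -0.4, 0.05])   # observation 2: first stride from the INS

C_wb0 = triad(g_ref, stride_ref, ybar_a0, stride_imu)   # Eq. (14)

p0 = np.array([0.5, 0.2])                  # LiDAR 1 leg center at k = 0
r_K = np.array([0.0, 0.0, -0.08])          # ankle joint in the IMU frame (pre-calibrated)
r0 = np.append(p0, 0.0) + C_wb0.T @ r_K    # Eq. (15): initial IMU position
```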

4. Proposed Algorithm

The proposed human gait tracking algorithm is built on an inertial navigation system (INS).

4.1. Basic Inertial Navigation System Mechanization

The basic equations for inertial navigation [34] are given as follows:

$$\dot{q} = \tfrac{1}{2}\Omega(\omega_b)q, \qquad \dot{v} = C^T(q)a_b, \qquad \dot{r} = v, \tag{16}$$

where $q \in \mathbb{R}^4$ is the quaternion representing the attitude of the foot, and $v \in \mathbb{R}^3$ and $r \in \mathbb{R}^3$ are the velocity and position of the foot in the global coordinate system, respectively. The symbol $\Omega(\omega_b)$ is defined by:

$$\Omega(\omega_b) \triangleq \begin{bmatrix} 0 & -\omega_x & -\omega_y & -\omega_z \\ \omega_x & 0 & \omega_z & -\omega_y \\ \omega_y & -\omega_z & 0 & \omega_x \\ \omega_z & \omega_y & -\omega_x & 0 \end{bmatrix}.$$
The inertial navigation algorithm is used to estimate $\hat{q}$, $\hat{v}$, and $\hat{r}$ by integrating (16). Due to the sensor noises of the low-cost inertial sensor, the errors increase quickly if there is no reference measurement. To overcome this problem, a ZVU method is used to reset the accumulated errors. The method assumes that the velocity of the foot is zero when it touches the ground. Thus, once the stance phase is detected, the ZVU can be fused with the INS algorithm by an error-state Kalman filter.
To detect the stance phase during walking, a simple zero-velocity detection algorithm is used [35,36]. A discrete time k is determined to belong to the zero-velocity interval if the following conditions are satisfied:
$$\|y_{g,i}\| \le B_g, \quad k - \tfrac{N_g}{2} \le i \le k + \tfrac{N_g}{2}, \qquad \|y_{a,i} - y_{a,i-1}\| \le B_a, \quad k - \tfrac{N_a}{2} \le i \le k + \tfrac{N_a}{2}, \tag{17}$$

where $B_g$ and $B_a$ are threshold values, and $N_g$ and $N_a$ are even integers.
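A direct implementation of the detector in (17) is sketched below; the thresholds and window sizes are those of Table 2, and the sliding windows are truncated at the sequence boundaries (an assumption, since the paper does not discuss boundary handling).

```python
import numpy as np

def zero_velocity_intervals(y_g, y_a, B_g=1.2, B_a=1.5, N_g=16, N_a=16):
    """Flag the samples belonging to zero-velocity intervals, Eq. (17).

    y_g, y_a: (N, 3) arrays of gyroscope and accelerometer samples.
    Returns a boolean array, True where both conditions hold.
    """
    N = len(y_g)
    gyro_norm = np.linalg.norm(y_g, axis=1)
    # ||y_a,i - y_a,i-1||, with the first difference set to zero
    jerk = np.linalg.norm(np.diff(y_a, axis=0, prepend=y_a[:1]), axis=1)
    zv = np.zeros(N, dtype=bool)
    for k in range(N):
        wg = gyro_norm[max(0, k - N_g // 2):min(N, k + N_g // 2 + 1)]
        wa = jerk[max(0, k - N_a // 2):min(N, k + N_a // 2 + 1)]
        zv[k] = (wg <= B_g).all() and (wa <= B_a).all()
    return zv
```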
The errors in the numerically integrated values of (16) are estimated using a Kalman filter. Let $q_e \in \mathbb{R}^4$, $v_e \in \mathbb{R}^3$, and $r_e \in \mathbb{R}^3$ be the estimation errors in $\hat{q}$, $\hat{v}$, and $\hat{r}$, which are defined as follows:

$$q_e = \hat{q}^* \otimes q, \qquad v_e = v - \hat{v}, \qquad r_e = r - \hat{r}, \tag{18}$$

where $\otimes$ is the quaternion multiplication and $q^*$ denotes the quaternion conjugate of $q$. It is assumed that the quaternion errors are small. Thus, $q_e$ can be approximated by:

$$q_e \approx \begin{bmatrix} 1 \\ \bar{q}_e \end{bmatrix}, \qquad \bar{q}_e \in \mathbb{R}^3. \tag{19}$$

With this assumption, the attitude error is represented by the three-dimensional vector $\bar{q}_e$. We define the error-state vector as follows:

$$x_e = \begin{bmatrix} \bar{q}_e \\ r_e \\ v_e \end{bmatrix} \in \mathbb{R}^9. \tag{20}$$

The state transition equation is given by:

$$\dot{x}_e = A x_e + w, \tag{21}$$

where $A = \begin{bmatrix} -[y_g \times] & 0_3 & 0_3 \\ 0_3 & 0_3 & I_3 \\ -2C^T(\hat{q})[y_a \times] & 0_3 & 0_3 \end{bmatrix}$ and $w = \begin{bmatrix} -0.5\,v_g \\ 0 \\ -C^T(\hat{q})\,v_a \end{bmatrix}$. For a vector $p \in \mathbb{R}^3$, $[p \times]$ is defined by:

$$[p \times] \triangleq \begin{bmatrix} 0 & -p_3 & p_2 \\ p_3 & 0 & -p_1 \\ -p_2 & p_1 & 0 \end{bmatrix}.$$

4.2. The Proposed Dual Foot-Mounted IMU Algorithm

The dual foot-mounted IMU error-state vector is defined by combining the left and right foot error states:

$$X_e = \begin{bmatrix} x_{e,L}^T & x_{e,R}^T \end{bmatrix}^T \in \mathbb{R}^{18}. \tag{22}$$

The dynamic equation is expressed as follows:

$$\dot{X}_e = F X_e + W,$$

where $F = \begin{bmatrix} A_L & 0_9 \\ 0_9 & A_R \end{bmatrix}$ and $W = \begin{bmatrix} w_L \\ w_R \end{bmatrix}$.
Let $\tau$ be the sampling period of the inertial sensor output. The system equation is discretized as follows:

$$X_{e,k+1} = \Phi_k X_{e,k} + W_k, \tag{23}$$

where $\Phi_k = e^{F\tau} \approx I_{18} + F\tau + 0.5(F\tau)^2$ and $Q_k = E\{W_k W_k^T\}$.
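The sketch below assembles the continuous error dynamics of (21) for both feet and discretizes it with the second-order approximation of $\Phi_k$; the signs in the matrix A follow our reconstruction above, so treat them as assumptions.

```python
import numpy as np

def skew(p):
    """Cross-product matrix [p x]."""
    return np.array([[0.0, -p[2], p[1]],
                     [p[2], 0.0, -p[0]],
                     [-p[1], p[0], 0.0]])

def single_foot_A(y_g, y_a, C_hat):
    """Continuous error dynamics A of Eq. (21) for one foot.

    C_hat: estimated rotation matrix C(q_hat) (global to body).
    """
    A = np.zeros((9, 9))
    A[0:3, 0:3] = -skew(y_g)                  # attitude error propagation
    A[3:6, 6:9] = np.eye(3)                   # position error driven by velocity error
    A[6:9, 0:3] = -2.0 * C_hat.T @ skew(y_a)  # velocity error driven by attitude error
    return A

def dual_foot_Phi(A_L, A_R, tau):
    """Discrete transition matrix Phi_k = I + F*tau + 0.5*(F*tau)^2, Eq. (23)."""
    F = np.zeros((18, 18))
    F[0:9, 0:9], F[9:18, 9:18] = A_L, A_R     # block-diagonal F
    Ft = F * tau
    return np.eye(18) + Ft + 0.5 * Ft @ Ft
```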
In the proposed system, the model and observation equations are linear, except for the quaternion model, which is approximately linearized; modeling errors are therefore a possible source of divergence. Although the initial heading of the INS is unknown and has large uncertainty, it is estimated by the procedure of Section 3.3 and thus does not affect our proposed system. Three measurement updates are used in this Kalman filter: the zero-velocity update, the LiDAR-based stance foot position update, and the straight-walking-based heading and foot position update.

4.2.1. Zero-Velocity Update

If the left/right foot is in the stance phase, the corresponding measurement equations are given by:

$$z_{v,L} = 0_{3\times 1} - \hat{v}_{L,k} = H_{v,L} X_{e,k} + n_{v,L}, \qquad z_{v,R} = 0_{3\times 1} - \hat{v}_{R,k} = H_{v,R} X_{e,k} + n_{v,R}, \tag{24}$$

where $H_{v,L} = \begin{bmatrix} 0_3 & 0_3 & I_3 & 0_3 & 0_3 & 0_3 \end{bmatrix}$, $H_{v,R} = \begin{bmatrix} 0_3 & 0_3 & 0_3 & 0_3 & 0_3 & I_3 \end{bmatrix}$, $n_{v,L}$ and $n_{v,R}$ are the white Gaussian measurement noises, and $R_v = E\{n_{v,j} n_{v,j}^T\}$.
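The ZVU is a standard linear Kalman measurement update. A generic update step and the left-foot ZVU of (24) are sketched below; the state and covariance values in the example are illustrative.

```python
import numpy as np

def kalman_update(X, P, z, H, R):
    """Generic Kalman filter measurement update."""
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)            # Kalman gain
    X = X + K @ (z - H @ X)
    P = (np.eye(len(X)) - K @ H) @ P
    return X, P

# Left-foot ZVU, Eq. (24): H selects the left velocity error block.
H_v_L = np.zeros((3, 18))
H_v_L[:, 6:9] = np.eye(3)
R_v = 0.01 * np.eye(3)                        # Table 2

X = np.zeros(18)                              # error state (illustrative)
P = 0.1 * np.eye(18)                          # error covariance (illustrative)
z_v_L = -np.array([0.02, -0.01, 0.005])       # 0 - v_hat at a stance sample
X, P = kalman_update(X, P, z_v_L, H_v_L, R_v)
```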

4.2.2. Lidar-Based Stance Foot Position Update

This update is used in the stance phase of each foot if the LiDAR-based stance leg position estimate $p_{i,j}$ ($i = 1, 2$ and $j = L, R$) is available. Here, the LiDAR-based stance leg positions are already transformed into the LiDAR 1 coordinate system, so we omit the superscript $L_1$ in $p_{i,j}$.
As mentioned in Section 3.3, we assume that the human leg is vertically straight during the mid-stance phase; therefore, we can transform the LiDAR-based stance leg center point position to the $xy$ coordinates of the stance foot position $f_{i,j}$ using the following equation:

$$f_{i,j}^* = \begin{bmatrix} p_{i,j} \\ 0 \end{bmatrix} + (C(\hat{q}_j))^T r_{K_j}^{I_j}, \tag{25}$$

where $\hat{q}_j$ is the currently estimated quaternion of IMU $j$, and $r_{K_j}^{I_j}$ is the pre-calibrated coordinate of the ankle joint in the IMU $j$ coordinate system, as described in Section 3.3; $f_{i,j}$ is taken as the $x$ and $y$ components of $f_{i,j}^*$. Now, we can update the $xy$ coordinates of the stance foot using the following measurement equations:
$$z_{lidar,L} = f_{i,L} - \begin{bmatrix} \hat{r}_{L,x} \\ \hat{r}_{L,y} \end{bmatrix} = H_{r,L} X_{e,k} + n_{lidar,L}, \qquad z_{lidar,R} = f_{i,R} - \begin{bmatrix} \hat{r}_{R,x} \\ \hat{r}_{R,y} \end{bmatrix} = H_{r,R} X_{e,k} + n_{lidar,R}, \tag{26}$$

where $H_{r,L} = \begin{bmatrix} 0_{2\times 3} & \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \end{bmatrix} & 0_{2\times 3} & 0_{2\times 3} & 0_{2\times 3} & 0_{2\times 3} \end{bmatrix}$, $H_{r,R} = \begin{bmatrix} 0_{2\times 3} & 0_{2\times 3} & 0_{2\times 3} & 0_{2\times 3} & \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \end{bmatrix} & 0_{2\times 3} \end{bmatrix}$, and $n_{lidar,L}$ and $n_{lidar,R}$ are the white Gaussian measurement noises with covariance $R_{lidar}$.

4.2.3. Heading and Position Update during Straight Walking

Instead of using a pre-defined building heading, we update the current walking stride heading using the previous stride headings if straight walking is detected. The new position of the current stance foot is then calculated based on this heading and used as a measurement update in the Kalman filter. Since we use the headings of previous strides, the heading errors still accumulate until the measurement data of LiDAR 2 become available; these errors are compensated later in the smoother algorithm.
Figure 7 shows an example of the proposed algorithm for the current stance left foot. Let $\hat{r}_{L,s}$ denote the current estimated left foot position, where $s$ is the discrete index of the current stance phase. The current walking stride on the right side and the adjacent walking stride on the left side are used; their stance foot positions are represented by ($r_{R,s}$, $r_{R,s-1}$) and ($r_{L,s-1}$, $r_{L,s-2}$), respectively. The straight walking detection and heading correction procedures are as follows (a code sketch follows the list):
  • Condition 1: The current step length is calculated:
    $$\hat{S}_{LR} = \|\hat{r}_{L,s} - r_{R,s}\|.$$
    Only a walking step with $\hat{S}_{LR} > \delta_s$ is considered. Here, the threshold is empirically chosen to check whether a normal walking step has indeed been taken.
  • Condition 2: If the current walking step is normal, we compute the angles between the current stride vector $\hat{S}_{L,c} = \hat{r}_{L,s} - r_{L,s-1}$ and the previous left stride vector $S_L = r_{L,s-1} - r_{L,s-2}$ and the right stride vector $S_R = r_{R,s} - r_{R,s-1}$:
    $$\alpha_1 = \angle(\hat{S}_{L,c}, S_L), \qquad \alpha_2 = \angle(\hat{S}_{L,c}, S_R).$$
    If both are smaller than a threshold $\delta_\alpha$, straight walking is detected. Here, we use the same threshold values used in [22], i.e., $\delta_s = 0.5$ m and $\delta_\alpha = 8^\circ$.
  • The current stride vector is set to the average of the previous left and right stride vectors, $\hat{S}_{L,c,new} = \mathrm{mean}\{S_L, S_R\}$. Then, we normalize $\hat{S}_{L,c,new}$ and update the new position of the current left foot as follows:
    $$\hat{r}_{L,s,new} = r_{L,s-1} + \|\hat{S}_{L,c}\| \frac{\hat{S}_{L,c,new}}{\|\hat{S}_{L,c,new}\|}.$$
  • Similarly, if the right straight walking step is detected, the new position of the current right foot r ^ R , s , n e w can be estimated.
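A minimal Python sketch of the detection and correction above, assuming 2D stance foot positions and the thresholds of [22]:

```python
import numpy as np

def straight_walk_update(rL_s, rL_s1, rL_s2, rR_s, rR_s1,
                         delta_s=0.5, delta_alpha=np.deg2rad(8.0)):
    """Straight-walking detection and position correction, left stance.

    Inputs are 2D stance foot positions: the current left estimate rL_s,
    the previous left stances rL_s1, rL_s2, and the right stances
    rR_s, rR_s1. Returns the corrected left foot position, or None when
    no straight-walking step is detected.
    """
    def angle_between(a, b):
        c = a @ b / (np.linalg.norm(a) * np.linalg.norm(b))
        return np.arccos(np.clip(c, -1.0, 1.0))

    # Condition 1: a normal step length has been taken
    if np.linalg.norm(rL_s - rR_s) <= delta_s:
        return None
    S_Lc = rL_s - rL_s1                  # current left stride vector
    S_L = rL_s1 - rL_s2                  # previous left stride vector
    S_R = rR_s - rR_s1                   # current right stride vector
    # Condition 2: stride direction agrees with both previous strides
    if (angle_between(S_Lc, S_L) > delta_alpha or
            angle_between(S_Lc, S_R) > delta_alpha):
        return None
    # Average the previous stride vectors, keep the current stride length
    S_new = 0.5 * (S_L + S_R)
    S_new /= np.linalg.norm(S_new)
    return rL_s1 + np.linalg.norm(S_Lc) * S_new
```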
Using the new current left/right foot positions, the measurement equations are given by:

$$z_{p,L} = \hat{r}_{L,s,new} - \begin{bmatrix} \hat{r}_{L,x} \\ \hat{r}_{L,y} \end{bmatrix} = H_{r,L} X_{e,k} + n_{p,L}, \qquad z_{p,R} = \hat{r}_{R,s,new} - \begin{bmatrix} \hat{r}_{R,x} \\ \hat{r}_{R,y} \end{bmatrix} = H_{r,R} X_{e,k} + n_{p,R}, \tag{27}$$

where $H_{r,L}$ and $H_{r,R}$ are given in (26), and $n_{p,L}$ and $n_{p,R}$ are the white Gaussian measurement noises with covariance $R_p$. Note that we only update the $xy$ positions in this part.
In the Kalman filter, $X_{e,k}$ is estimated and used to compensate the errors in $\hat{q}$, $\hat{r}$, and $\hat{v}$ using (18). After updating, the errors are set to zero.

4.3. Smoother Algorithm

Since we use the estimated previous headings to update the stride heading in straight walking, the heading errors still accumulate. To achieve the proposed correction performance, the errors of the Kalman filter estimates (denoted by $\hat{q}_{KF,k}$, $\hat{r}_{KF,k}$, and $\hat{v}_{KF,k}$) are compensated using the smoother algorithm [32]. For that, we define the estimation error-state vector as follows:

$$X_{SM,k} = \begin{bmatrix} \bar{q}_{L,k} \\ \bar{r}_{L,k} \\ \bar{v}_{L,k} \\ \bar{q}_{R,k} \\ \bar{r}_{R,k} \\ \bar{v}_{R,k} \end{bmatrix} = \begin{bmatrix} \begin{bmatrix} 0_{3\times 1} & I_3 \end{bmatrix} (\hat{q}_{KF,L,k}^* \otimes q_{L,k}) \\ r_{L,k} - \hat{r}_{KF,L,k} \\ v_{L,k} - \hat{v}_{KF,L,k} \\ \begin{bmatrix} 0_{3\times 1} & I_3 \end{bmatrix} (\hat{q}_{KF,R,k}^* \otimes q_{R,k}) \\ r_{R,k} - \hat{r}_{KF,R,k} \\ v_{R,k} - \hat{v}_{KF,R,k} \end{bmatrix}. \tag{28}$$
The system equation for the smoother estimation values is given by:

$$\zeta_k + X_{SM,k+1} = \Phi_k X_{SM,k} + W_k, \tag{29}$$

where $\Phi_k$ is defined in (23) and

$$\zeta_k = \begin{bmatrix} \tilde{q}_{SM,L,k} \\ \hat{r}_{KF,L,k+1} - f_{2,k} \\ \hat{v}_{KF,L,k+1} - f_{3,k} \\ \tilde{q}_{SM,R,k} \\ \hat{r}_{KF,R,k+1} - f_{5,k} \\ \hat{v}_{KF,R,k+1} - f_{6,k} \end{bmatrix}, \tag{30}$$

$$\begin{bmatrix} f_{1,k} \\ f_{2,k} \\ f_{3,k} \\ f_{4,k} \\ f_{5,k} \\ f_{6,k} \end{bmatrix} = f_k\left( \begin{bmatrix} \hat{q}_{KF,L,k} \\ \hat{r}_{KF,L,k} \\ \hat{v}_{KF,L,k} \\ \hat{q}_{KF,R,k} \\ \hat{r}_{KF,R,k} \\ \hat{v}_{KF,R,k} \end{bmatrix}, \begin{bmatrix} v_{g,L,k} \\ v_{a,L,k} \\ v_{g,R,k} \\ v_{a,R,k} \end{bmatrix} \right), \tag{31}$$

$$\tilde{q}_{SM,L,k} = \begin{bmatrix} 0_{3\times 1} & I_3 \end{bmatrix} (f_{1,k}^* \otimes \hat{q}_{KF,L,k+1}), \qquad \tilde{q}_{SM,R,k} = \begin{bmatrix} 0_{3\times 1} & I_3 \end{bmatrix} (f_{4,k}^* \otimes \hat{q}_{KF,R,k+1}). \tag{32}$$

The function $f_k$ in (31) is the numerical integration of the quaternion, position, and velocity from $k\tau$ to $(k+1)\tau$. Using (28), (30) and (31), we can expand the left side of (29) as the errors of the numerical integration of the Kalman filter estimates. Thus, (29) represents how the estimation errors evolve after the integration of (31).
The smoothing problem can be formulated as a quadratic optimization problem using the method in [32]. Let the optimization variable $\tilde{X}$ be defined by:

$$\tilde{X} = \begin{bmatrix} X_{SM,1} \\ X_{SM,2} \\ \vdots \\ X_{SM,N} \end{bmatrix} \in \mathbb{R}^{18N \times 1}.$$

We can estimate $\tilde{X}$ by minimizing the cost function:

$$J(\tilde{X}) = \frac{1}{2}\sum_{k=1}^{N-1} (\zeta_k + X_{SM,k+1} - \Phi_k X_{SM,k})^T Q_k^{-1} (\zeta_k + X_{SM,k+1} - \Phi_k X_{SM,k}) + \frac{1}{2}(X_{SM,1} - X_{init})^T P_{init}^{-1} (X_{SM,1} - X_{init}) + \text{constraint terms in (36)}, \tag{33}$$

where $P_{init} = \begin{bmatrix} P_{L,init} & 0_9 \\ 0_9 & P_{R,init} \end{bmatrix}$, in which $P_{j,init} = \begin{bmatrix} P_{q_j,0} & 0_3 & 0_3 \\ 0_3 & P_{r_j,0} & 0_3 \\ 0_3 & 0_3 & P_{v_j,0} \end{bmatrix}$, $j = L, R$. The initial attitude error covariance $P_{q_j,0}$ is given by the algorithm in [33]. The initial position and velocity error covariances $P_{r_j,0}$ and $P_{v_j,0}$ can be considered as design parameters.
In this smoother algorithm, we use three measurements: the zero-velocity update, the LiDAR-based stance foot positions update, and the step width constraint update. The first two measurements are explained in the previous section of the Kalman filter. The third measurement is proposed using an assumption that the width of walking step is almost constant during straight walking.
As an example, during a current stance phase of the left foot, the step width is estimated as follows (a code sketch follows the list):
  • The walking step vector is computed: $S_{LR} = \hat{r}_{KF,L,k} - \hat{r}_{KF,R,k}$;
  • The unit vector $n_L$ orthogonal to the current walking direction vector $\hat{S}_{L,c,new}$ and pointing toward the opposite foot side is computed, so that $n_L^T S_{LR} > 0$;
  • The step width $d$ is then calculated as follows:
    $$d = n_L^T S_{LR} = n_L^T (\hat{r}_{KF,L,k} + \bar{r}_{L,k} - \hat{r}_{KF,R,k} - \bar{r}_{R,k}).$$
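A minimal sketch of this step width computation (left stance case, 2D positions):

```python
import numpy as np

def step_width(r_L, r_R, walk_dir):
    """Step width d and unit normal n_L for a left stance phase.

    r_L, r_R: 2D Kalman filter foot position estimates; walk_dir: the
    current walking direction vector (e.g., the updated stride vector).
    """
    S_LR = r_L - r_R                             # walking step vector
    n_L = np.array([-walk_dir[1], walk_dir[0]])  # orthogonal to walking direction
    n_L /= np.linalg.norm(n_L)
    if n_L @ S_LR < 0:                           # enforce n_L^T S_LR > 0
        n_L = -n_L
    return n_L @ S_LR, n_L

d, n_L = step_width(np.array([2.0, 0.3]), np.array([2.1, 0.1]),
                    walk_dir=np.array([1.0, 0.05]))
```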
The step width measurement equation for the left stance foot is given as follows:

$$z_{d,L,k} = H_{d,L} X_{SM,k} + n_{d,L}, \tag{34}$$

where $z_{d,L,k} = d - n_L^T (\hat{r}_{KF,L,k} - \hat{r}_{KF,R,k})$ and

$$H_{d,L} = \begin{bmatrix} 0_{1\times 3} & \begin{bmatrix} n_L^T & 0 \end{bmatrix} & 0_{1\times 3} & 0_{1\times 3} & -\begin{bmatrix} n_L^T & 0 \end{bmatrix} & 0_{1\times 3} \end{bmatrix}.$$

Similarly, the step width measurement equation for the right stance foot is given as follows:

$$z_{d,R,k} = H_{d,R} X_{SM,k} + n_{d,R}, \tag{35}$$

where $z_{d,R,k} = d - n_R^T (\hat{r}_{KF,R,k} - \hat{r}_{KF,L,k})$ and

$$H_{d,R} = \begin{bmatrix} 0_{1\times 3} & -\begin{bmatrix} n_R^T & 0 \end{bmatrix} & 0_{1\times 3} & 0_{1\times 3} & \begin{bmatrix} n_R^T & 0 \end{bmatrix} & 0_{1\times 3} \end{bmatrix}.$$

In (34) and (35), $n_{d,L}$ and $n_{d,R}$ are the white Gaussian measurement noises with covariance $R_d$.
Consequently, the "constraint terms" in (33) are defined as follows:

$$\text{constraint terms} = \frac{1}{2}\sum_{i=L,R}\Bigg( \sum_{k \in Z_{v,i}} (z_{v,i,k} - H_{v,i,k}X_{SM,k})^T R_v^{-1} (z_{v,i,k} - H_{v,i,k}X_{SM,k}) + \sum_{k \in Z_{r,i}} (z_{r,i,k} - H_{r,i,k}X_{SM,k})^T R_{lidar}^{-1} (z_{r,i,k} - H_{r,i,k}X_{SM,k}) + \sum_{k \in Z_{d,i}} (z_{d,i,k} - H_{d,i,k}X_{SM,k})^T R_d^{-1} (z_{d,i,k} - H_{d,i,k}X_{SM,k}) \Bigg), \tag{36}$$

where $Z_{v,i}$, $Z_{r,i}$, and $Z_{d,i}$ denote the sets of all discrete time indices belonging to zero-velocity intervals, LiDAR-based stance foot position measurement intervals, and step width measurement intervals, respectively. The cost function (33) can be rewritten as a quadratic function of $\tilde{X}$ as follows:
$$J(\tilde{X}) = \frac{1}{2}\tilde{X}^T M_1 \tilde{X} + M_2^T \tilde{X} + M_3, \tag{37}$$

where $M_3$ is irrelevant to the optimization, and $M_1$ and $M_2$ can be easily computed using (33) and (36); the details can be found in [32]. This is the main advantage of the quadratic-optimization-based smoother algorithm: the measurement constraints can be easily included in the optimization problem. The minimizing solution of (37) can be computed by solving the following equation:

$$M_1 \tilde{X}^* + M_2 = 0, \tag{38}$$

where $\tilde{X}^*$ is the minimizing solution. Once $\tilde{X}^*$ is computed, the smoother estimates $\hat{q}_{SM,L,k}$, $\hat{r}_{SM,L,k}$, $\hat{v}_{SM,L,k}$, $\hat{q}_{SM,R,k}$, $\hat{r}_{SM,R,k}$, and $\hat{v}_{SM,R,k}$ can be found using (28).
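Structurally, (33)-(38) assemble a large quadratic program and solve a single linear system. The dense Python sketch below shows that assembly for generic inputs; it is a structural sketch under our own interface assumptions (a sparse solver would be preferable for long walks), not the paper's implementation.

```python
import numpy as np

def smooth(zeta, Phi, Q_inv, X_init, P_init_inv, constraints, n=18):
    """Assemble M1, M2 of Eq. (37) and solve M1 X + M2 = 0, Eq. (38).

    zeta[k], Phi[k], Q_inv[k]: dynamics residuals, transition matrices,
    and inverse process noise covariances for k = 0..N-2.
    constraints: list of (k, z, H, R_inv) measurement tuples covering
    the ZVU, LiDAR position, and step width terms of Eq. (36).
    """
    N = len(Phi) + 1
    M1 = np.zeros((n * N, n * N))
    M2 = np.zeros(n * N)
    blk = lambda k: slice(n * k, n * (k + 1))

    # Initial-state term of Eq. (33)
    M1[blk(0), blk(0)] += P_init_inv
    M2[blk(0)] -= P_init_inv @ X_init

    # Dynamics terms: residual zeta_k + X_{k+1} - Phi_k X_k
    for k in range(N - 1):
        J = np.zeros((n, n * N))
        J[:, blk(k)] = -Phi[k]
        J[:, blk(k + 1)] = np.eye(n)
        M1 += J.T @ Q_inv[k] @ J
        M2 += J.T @ Q_inv[k] @ zeta[k]

    # Measurement constraint terms of Eq. (36)
    for k, z, H, R_inv in constraints:
        M1[blk(k), blk(k)] += H.T @ R_inv @ H
        M2[blk(k)] -= H.T @ R_inv @ z
    return np.linalg.solve(M1, -M2)
```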

5. Experimental Results

5.1. Hardware Description

An Xsens MTi-1 IMU module with a microSD card is installed on each foot. The sampling rate of the two IMUs is 100 Hz. The two LiDARs (RPLIDAR A3) have a scan rate of 10 Hz. The total cost of the system is around USD 1500 (USD 150 for an IMU and USD 600 for a LiDAR). The sensor specifications are shown in Table 1. An OptiTrack optical motion capture system, composed of six Flex 13 cameras with a resolution of 1280 × 1024 at 120 Hz, is used to provide the ground truth of the walking step length estimation.
The parameters used in this paper are given in Table 2.

5.2. Stride Length Estimation Evaluation

To evaluate the stride length estimation, we perform a straight walking experiment with the optical camera system. Two healthy male volunteers, who have no problems with gait, balance, or coordination, are recruited; their information is given in Table 3. They are asked to walk along a corridor with travel distances of 20 m and 50 m. Each person repeats the walk 10 times in the 20 m experiment and 5 times in the 50 m experiment.
The configuration of the experiment is given in Figure 8. The optical system cannot cover the whole walking path; thus, it is placed so that a part of the walking path outside the range of the two LiDARs can be captured. The walking path outside the LiDARs' range has the fewest measurement updates; therefore, the walking stride length errors may be largest there. With this setup, our OptiTrack system can capture a walking range of 8–13 m measured from the LiDAR 2 origin. By comparing the walking stride lengths captured in this segment, the performance of our proposed system is evaluated. The LiDAR 2 and OptiTrack system positions are fixed during both experiments; the LiDAR 1 position is changed with the walking distance.
The subject may have difficulty following a perfectly straight path during walking. We model a simple case of abnormal gait by removing the straight-walking measurement availability for 5 s (20 m walking) and 10 s (50 m walking). The straight-walking-based foot position update is therefore not applied in this interval; the step width constraint is still used, but with larger uncertainty. The modeled measurement update is then fed to the proposed method. An example of the measurement availability of the two feet is given in Figure 9. The top plot is the normal availability, while the bottom one is the modeled availability of the measurement update; the non-zero values represent the presence of measurements. We can see the periodicity of the zero-velocity measurements during walking. The LiDAR 1-based stance foot position measurements are available for the first three and four stance phases of the left and right foot, respectively. LiDAR 2 provides the stance foot position update for the last four and three stance phases of the left and right foot, respectively. In the stance phases where the LiDAR data are not available, the straight-walking-based heading and position update (in the Kalman filter) or the step width constraint update (in the smoother algorithm) is integrated. As shown in the bottom plot, the straight-walking-based measurement update is not available during the interval from 15 to 25 s.
The estimated trajectories of the two feet for all 20 m and 50 m walking experiments using the normal and modeled updates are given in Figure 10 and Figure 11. Figure 12 shows an example of a single 50 m walking estimation result for clarity; the LiDAR-based stance foot position estimates are denoted by circles. We can see that the Kalman filter, with the initial heading estimated as in Section 3.3 and the foot position update during straight walking, reduces the heading errors significantly. However, heading errors can still be observed, as the estimated trajectories do not match the foot position update from LiDAR 2. This error is compensated using the smoother algorithm, where the step width constraint is used instead of the heading and foot position update. If the heading and foot position update were used, the Kalman-filter-based heading estimation would be maintained in the smoother algorithm, and the heading errors would not be compensated. Based on the heading estimation from the Kalman filter, the advantage of the dual foot-mounted IMU is expressed in the step width constraint. In all estimated walking trajectories, we can see a small bulge in the middle of the walk; the longer the walking distance, the bigger the bulge. This is because there is no actual heading observation outside the LiDARs' range. However, with the measurement update from LiDAR 2, the heading estimation errors can be corrected. In the 20 m and 50 m walking estimations, the results using the normal and modeled updates are almost identical, which shows the robustness of the proposed method.
The walking stride length estimation is then evaluated. For each walk, the OptiTrack optical system can capture three to four walking strides for each foot; the total counts of captured walking strides are 98 and 101 for the left and right foot, respectively. We compare the estimated walking stride length from the proposed smoother algorithm ($SL_{IMU}$) with the optical estimate ($SL_{optical}$) by computing the errors:

$$e = SL_{optical} - SL_{IMU}.$$
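For reference, the error statistics reported below reduce to a few lines of code; the stride lengths in this snippet are made-up placeholders, not the experimental data.

```python
import numpy as np

# Placeholder stride lengths (m); the real data come from the smoother
# output and the OptiTrack reconstruction.
sl_imu = np.array([1.49, 1.52, 1.48, 1.51])
sl_optical = np.array([1.50, 1.50, 1.50, 1.50])

e = sl_optical - sl_imu
print(f"mean = {100 * e.mean():.1f} cm")
print(f"RMSE = {100 * np.sqrt((e ** 2).mean()):.1f} cm")
print(f"max  = {100 * np.abs(e).max():.1f} cm")
```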
Figure 13 shows the histogram of the stride length estimation errors. Table 4 shows the mean stride length (with its deviation) of each subject using the normal measurement update, together with the estimation errors relative to the optical system. For all subjects, the mean and root mean square errors (RMSE) of the stride length estimation are less than 3 cm, while the maximum errors are less than 8 cm. The performances of the stride length estimation with the normal measurement update and the modeled measurement update are almost identical.

5.3. Evaluation of Rectangular Path Walking Estimation

To verify the effectiveness of the proposed method in heading correction, a rectangular path walking experiment is carried out. In this experiment, a person starts from the starting point, walks counter-clockwise twice along a rectangular path, and stops at the ending point, as in Figure 14. The total walking distance is around 124 m, with six turning phases. The LiDARs are placed so that they can capture the human leg data during the turning phases.
Figure 15 shows the estimated trajectories of the left and right feet obtained with the proposed algorithm. We plot the corridor wall scan data as a reference for the walking direction as well as for the result of calibrating the two LiDARs. The shaded circular areas represent the human leg tracking range of each LiDAR. As we can see, the proposed method can correct the heading during turning phases using the LiDAR measurements, and there is no intersection between the two feet trajectories. For comparison, we also plot the result of the dual foot-mounted IMU algorithm proposed in [22] in Figure 16. That algorithm uses only the dual foot-mounted IMUs, with a foot position update during straight walking based on the maximum range of the one-side stride length and the two-foot step length. Since there is no measurement of the heading, the errors can be seen clearly after the second round.

6. Conclusions

This paper proposes a human gait tracking system using a dual foot-mounted IMU and multiple 2D LiDARs. Two LiDARs are placed far apart: LiDAR 1 is at the starting point, and LiDAR 2 is at the destination in straight walking or at the turning point in rectangular path walking. A calibration algorithm is proposed to estimate the rotation angle and translation vector between the two LiDARs. The calibration uses a cylinder placed at four positions whose geometric relationship is known. An iterative algorithm is then used to estimate the calibration parameters while accounting for the LiDAR measurement errors and the configuration errors. The LiDARs can then be used as anchors to correct the inertial sensor system estimation. The LiDAR-based stance leg position estimates are transformed to stance foot positions and then combined as measurement updates in the Kalman filter and the smoother algorithm.
Since the LiDAR can provide environment information, the LiDAR 1 coordinate system is chosen as the global coordinate system. The initial heading and position of each IMU can be estimated using the IMU data and the leg scan data from LiDAR 1. For further improvement, we update the heading and foot position if straight walking is detected; this is used as a measurement update in the Kalman filter. In the smoother algorithm, the straight-walking-based foot position update is replaced by a relative step width constraint update, based on the assumption that the step width is almost constant in normal walking. We verify the stride length estimation of the proposed system with straight walking of 20 m and 50 m. We also model abnormal walking by assuming that the subject cannot follow the straight path during a time interval, so that the straight-walking measurement update is not available during this interval. The root mean square errors of the walking stride length estimation for both the normal and modeled updates are less than 3 cm compared with an optical tracking system. The rectangular path walking experimental results show that the LiDAR measurements can be used to correct the heading effectively. The proposed system has potential for use in many fields, such as human gait monitoring and human gait navigation systems.

Author Contributions

Conceptualization and methodology, H.T.D. and Y.S.S.; data curation, H.T.D.; original draft, H.T.D.; review and editing, Y.S.S. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the 2022 Research Fund of University of Ulsan.

Acknowledgments

The authors would like to thank the University of Ulsan.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Baker, R. Gait analysis methods in rehabilitation. J. Neuroeng. Rehabil. 2006, 3, 1–10.
  2. Qiu, S.; Liu, L.; Zhao, H.; Wang, Z.; Jiang, Y. MEMS inertial sensors based gait analysis for rehabilitation assessment via multi-sensor fusion. Micromachines 2018, 9, 442.
  3. Moore, S.T.; MacDougall, H.G.; Ondo, W.G. Ambulatory monitoring of freezing of gait in Parkinson’s disease. J. Neurosci. Methods 2008, 167, 340–348.
  4. Barth, J.; Klucken, J.; Kugler, P.; Kammerer, T.; Steidl, R.; Winkler, J.; Hornegger, J.; Eskofier, B. Biometric and mobile gait analysis for early diagnosis and therapy monitoring in Parkinson’s disease. In Proceedings of the 2011 Annual International Conference of the IEEE Engineering in Medicine and Biology Society, Boston, MA, USA, 30 August–3 September 2011; pp. 868–871.
  5. Ghasemzadeh, H.; Loseu, V.; Guenterberg, E.; Jafari, R. Sport training using body sensor networks: A statistical approach to measure wrist rotation for golf swing. In Proceedings of the Fourth International Conference on Body Area Networks, Los Angeles, CA, USA, 1–3 April 2009; pp. 1–8.
  6. Wahab, Y.; Bakar, N.A. Gait analysis measurement for sport application based on ultrasonic system. In Proceedings of the 2011 IEEE 15th International Symposium on Consumer Electronics (ISCE), Singapore, 14–17 June 2011; pp. 20–24.
  7. Norris, M.; Anderson, R.; Kenny, I.C. Method analysis of accelerometers and gyroscopes in running gait: A systematic review. Proc. Inst. Mech. Eng. Part P J. Sports Eng. Technol. 2014, 228, 3–15.
  8. Merriaux, P.; Dupuis, Y.; Boutteau, R.; Vasseur, P.; Savatier, X. A study of vicon system positioning performance. Sensors 2017, 17, 1591.
  9. Menz, H.B.; Latt, M.D.; Tiedemann, A.; San Kwan, M.M.; Lord, S.R. Reliability of the GAITRite® walkway system for the quantification of temporo-spatial parameters of gait in young and older people. Gait Posture 2004, 20, 20–25.
  10. Tao, W.; Liu, T.; Zheng, R.; Feng, H. Gait analysis using wearable sensors. Sensors 2012, 12, 2255–2283.
  11. Shin, S.; Park, C.; Kim, J.; Hong, H.; Lee, J. Adaptive step length estimation algorithm using low-cost MEMS inertial sensors. In Proceedings of the 2007 IEEE Sensors Applications Symposium, San Diego, CA, USA, 6–8 February 2007; pp. 1–5.
  12. Skog, I.; Handel, P.; Nilsson, J.O.; Rantakokko, J. Zero-velocity detection—An algorithm evaluation. IEEE Trans. Biomed. Eng. 2010, 57, 2657–2666.
  13. Park, S.K.; Suh, Y.S. A zero velocity detection algorithm using inertial sensors for pedestrian navigation systems. Sensors 2010, 10, 9163–9178.
  14. Prateek, G.; Girisha, R.; Hari, K.; Händel, P. Data fusion of dual foot-mounted INS to reduce the systematic heading drift. In Proceedings of the 2013 4th International Conference on Intelligent Systems, Modelling and Simulation, Bangkok, Thailand, 29–31 January 2013; pp. 208–213.
  15. Skog, I.; Nilsson, J.O.; Zachariah, D.; Händel, P. Fusing the information from two navigation systems using an upper bound on their maximum spatial separation. In Proceedings of the 2012 International Conference on Indoor Positioning and Indoor Navigation (IPIN), Sydney, NSW, Australia, 13–15 November 2012; pp. 1–5.
  16. Abdulrahim, K.; Hide, C.; Moore, T.; Hill, C. Aiding MEMS IMU with building heading for indoor pedestrian navigation. In Proceedings of the 2010 Ubiquitous Positioning Indoor Navigation and Location Based Service, Kirkkonummi, Finland, 14–15 October 2010; pp. 1–6.
  17. Jiménez, A.R.; Seco, F.; Zampella, F.; Prieto, J.C.; Guevara, J. Improved Heuristic Drift Elimination (iHDE) for pedestrian navigation in complex buildings. In Proceedings of the 2011 International Conference on Indoor Positioning and Indoor Navigation, Guimaraes, Portugal, 21–23 September 2011; pp. 1–8.
  18. Shi, W.; Wang, Y.; Wu, Y. Dual MIMU pedestrian navigation by inequality constraint Kalman filtering. Sensors 2017, 17, 427.
  19. Zhao, H.; Wang, Z.; Qiu, S.; Shen, Y.; Zhang, L.; Tang, K.; Fortino, G. Heading drift reduction for foot-mounted inertial navigation system via multi-sensor fusion and dual-gait analysis. IEEE Sens. J. 2018, 19, 8514–8521.
  20. Niu, X.; Li, Y.; Kuang, J.; Zhang, P. Data fusion of dual foot-mounted IMU for pedestrian navigation. IEEE Sens. J. 2019, 19, 4577–4584.
  21. Wang, Q.; Cheng, M.; Noureldin, A.; Guo, Z. Research on the improved method for dual foot-mounted Inertial/Magnetometer pedestrian positioning based on adaptive inequality constraints Kalman Filter algorithm. Measurement 2019, 135, 189–198.
  22. Zhang, W.; Wei, D.; Yuan, H.; Yang, G. Cooperative Positioning Method of Dual Foot-Mounted Inertial Pedestrian Dead Reckoning Systems. IEEE Trans. Instrum. Meas. 2021, 70, 1–14.
  23. Cifuentes, C.A.; Frizera, A.; Carelli, R.; Bastos, T. Human—Robot interaction based on wearable IMU sensor and laser range finder. Robot. Auton. Syst. 2014, 62, 1425–1439.
  24. Piezzo, C.; Leme, B.; Hirokawa, M.; Suzuki, K. Gait measurement by a mobile humanoid robot as a walking trainer. In Proceedings of the 2017 26th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN), Lisbon, Portugal, 28 August–1 September 2017; pp. 1084–1089.
  25. Bayon, C.; Ramírez, O.; Serrano, J.I.; Del Castillo, M.; Pérez-Somarriba, A.; Belda-Lois, J.M.; Martínez-Caballero, I.; Lerma-Lara, S.; Cifuentes, C.; Frizera, A.; et al. Development and evaluation of a novel robotic platform for gait rehabilitation in patients with Cerebral Palsy: CPWalker. Robot. Auton. Syst. 2017, 91, 101–114.
  26. Cifuentes, C.A.; Rodriguez, C.; Frizera-Neto, A.; Bastos-Filho, T.F.; Carelli, R. Multimodal human–robot interaction for walker-assisted gait. IEEE Syst. J. 2014, 10, 933–943.
  27. Duong, H.T.; Suh, Y.S. Human gait tracking for normal people and walker users using a 2D LiDAR. IEEE Sens. J. 2020, 20, 6191–6199.
  28. Pallejà, T.; Teixidó, M.; Tresanchez, M.; Palacín, J. Measuring gait using a ground laser range sensor. Sensors 2009, 9, 9133–9146.
  29. Yorozu, A.; Moriguchi, T.; Takahashi, M. Improved leg tracking considering gait phase and spline-based interpolation during turning motion in walk tests. Sensors 2015, 15, 22451–22472.
  30. Li, D.; Li, L.; Li, Y.; Yang, F.; Zuo, X. A multi-type features method for leg detection in 2-D laser range data. IEEE Sens. J. 2017, 18, 1675–1684.
  31. Duong, H.T.; Suh, Y.S. Human Gait Estimation Using Multiple 2D LiDARs. IEEE Access 2021, 9, 56881–56892.
  32. Suh, Y.S. Inertial sensor-based smoother for gait analysis. Sensors 2014, 14, 24338–24357.
  33. Shuster, M.D.; Oh, S.D. Three-axis attitude determination from vector observations. J. Guid. Control 1981, 4, 70–77.
  34. Titterton, D.; Weston, J.L.; Weston, J. Strapdown Inertial Navigation Technology; IET: London, UK, 2004; Volume 17.
  35. Godha, S.; Lachapelle, G. Foot mounted inertial system for pedestrian navigation. Meas. Sci. Technol. 2008, 19, 075202.
  36. Alonso, R.F.; Casanova, E.Z.; García-Bermejo, J.G. Pedestrian tracking using inertial sensors. J. Phys. Agents 2009, 3, 35–43.
Figure 1. System overview: a dual foot-mounted IMU and multiple 2D LiDARs.
Figure 2. Block diagram of proposed method.
Figure 3. Estimation of the human leg trajectories from each LiDAR.
Figure 4. The configuration of LiDARs calibration.
Figure 5. Two LiDARs calibration results in 20 m straight walking. The stance leg positions from LiDAR 2 are transformed into the LiDAR 1 coordinate system.
Figure 6. Initial position of IMU in LiDAR 1 coordinate estimation.
Figure 7. Straight detection for current left foot.
Figure 8. The configuration of the 20 m and 50 m walking distance experiment.
Figure 9. An example of measurement availability of two feet in 50 m walking. The straight-walking-based measurement updates are available for all walking steps outside the LiDAR range in the normal update (top plot) and are removed for 10 s in the modeled update (bottom plot).
Figure 10. The total estimated 20 m walking trajectories of two subjects from the smoother algorithm in the normal and modeled update.
Figure 11. The total estimated 50 m walking trajectories of two subjects from the smoother algorithm in the normal and modeled update.
Figure 12. An example of an estimation of 50 m walking trajectories from the Kalman filter and the smoother algorithm. The circles represent the estimated LiDAR-based foot positions in the stance phase.
Figure 13. The histogram of all walking stride length estimation errors of the proposed method.
Figure 14. The configuration of rectangular path walk experiment.
Figure 15. The estimated trajectory of the proposed method in the rectangular path walking experiment. The shaded circular area indicates the human leg tracking range of each LiDAR.
Figure 16. The estimated trajectory in the rectangular path walking experiment, in which the LiDAR data are not used.
Table 1. Sensor specifications.

Xsens MTi-1 IMU:
  Accelerometer: full range ±16 g; noise density 120 µg/√Hz; bandwidth 324 Hz (z-axis: 262 Hz); sampling rate 100 Hz.
  Gyroscope: full range ±2000 °/s; noise density 0.007 °/s/√Hz; bandwidth 255 Hz; sampling rate 100 Hz.

RPLIDAR A3:
  Distance range: 10–25 m; minimum operating range: 0.2 m; angular resolution: 0.225°; accuracy: 1% of range (≤3 m), 2% of range (3–5 m), 2.5% of range (5–25 m); scan rate: 10 Hz.
Table 2. Parameters used in this paper (related equations in parentheses).

  $R_a = 0.01 I_3$, $R_g = 0.001 I_3$ (2)
  $R_{r_c} = 0.01$, $R_\beta = 0.01$, $R_\alpha = 0.01$ (9)
  $\gamma = 0.001$ (12)
  $B_g = 1.2$, $B_a = 1.5$, $N_g = 16$, $N_a = 16$ (17)
  $P_{r_j,0} = 10^{-6} I_3$, $P_{v_j,0} = 10^{-6} I_3$ (33)
  $R_d = 0.01$ (9) and (36)
  $R_v = 0.01 I_3$ (24)
  $R_{lidar} = 0.01 I_2$ (26)
  $R_p = 0.01 I_2$ (27)
Table 3. Subjects' information.

Subject | Age | Height (cm) | Weight (kg)
1       | 33  | 168         | 63
2       | 30  | 160         | 52
Table 4. The stride length estimation results compared with the optical OptiTrack system.

Subject              | Leg (Strides) | Mean Stride Length (m) | Max (cm) | Mean (cm) | RMSE (cm)
1                    | Left (45)     | 1.503 ± 0.054          | 7.5      | 2.3       | 2.8
1                    | Right (56)    | 1.489 ± 0.047          | 6.2      | 2.1       | 2.5
2                    | Left (53)     | 1.416 ± 0.032          | 7.6      | 2.7       | 3.2
2                    | Right (45)    | 1.377 ± 0.056          | 6.4      | 2.5       | 3.1
All (normal update)  | Left (98)     |                        | 7.6      | 2.5       | 3.0
All (normal update)  | Right (101)   |                        | 6.4      | 2.3       | 2.8
All (modeled update) | Left (98)     |                        | 7.6      | 2.5       | 3.1
All (modeled update) | Right (101)   |                        | 6.4      | 2.3       | 2.8