Article

3D Tracking via Shoe Sensing

1 Department of Mathematics and Computer Science, Changsha University, Changsha 410022, China
2 Key Laboratory of Fiber Optical Sensing Technology and Information Processing, Ministry of Education, School of Information Engineering, Wuhan University of Technology, Wuhan 430070, China
3 Department of Electrical and Computer Engineering, Stevens Institute of Technology, Hoboken, NJ 07030, USA
* Author to whom correspondence should be addressed.
Sensors 2016, 16(11), 1809; https://doi.org/10.3390/s16111809
Submission received: 6 September 2016 / Revised: 15 October 2016 / Accepted: 20 October 2016 / Published: 28 October 2016
(This article belongs to the Special Issue New Paradigms in Cyber-Physical Social Sensing)

Abstract

Most location-based services are based on the global positioning system (GPS), which only works well in outdoor environments. Indoor localization has drawn increasing attention in recent years because people spend most of their time indoors, working in offices, shopping in malls, etc. Existing solutions mainly rely on inertial sensors (i.e., accelerometer and gyroscope) embedded in mobile devices, which are usually not accurate enough to be useful due to the devices' random movements while people are walking. In this paper, we propose the use of shoe sensing (i.e., sensors attached to shoes) to achieve 3D indoor positioning. Specifically, a short-time energy-based approach is used to extract the gait pattern. Moreover, in order to improve the accuracy of vertical distance estimation while the person is climbing stairs, a state classifier is designed to distinguish the walking status, including plane motion (i.e., normal walking and jogging horizontally), walking upstairs, and walking downstairs. Furthermore, we also provide a mechanism to reduce the vertical distance accumulation error. Experimental results show that we can achieve nearly 100% accuracy when extracting gait patterns from walking/jogging with a low-cost shoe sensor, and can also achieve 3D indoor real-time positioning with high accuracy.

1. Introduction

Nowadays, location-based services (LBS) have drawn considerable attention as they can be used in various domains, such as entertainment, work, and personal life. Most LBS applications, such as the conventional Garmin and Google maps, rely on GPS (Global Positioning System) to obtain accurate location information [1]. Although GPS location services have become very mature and can achieve high positioning accuracy, their performance becomes poor in indoor environments due to a wide variety of physical signal blockages and potential sources of interference [2]. Thus, it is highly desirable to find an alternative that can provide stable positioning for indoor environments.
Recently, infrared-based and WiFi-based indoor positioning solutions have been explored. In general, these existing solutions mainly focus on 2D indoor positioning. However, given the complexity of multi-floor building structures, 2D indoor positioning cannot always meet the demands of indoor LBS. For example, firefighters in a burning building cannot effectively determine their own indoor positions or which floor they are on, making it very hard for them to perform effective rescues. It is crucial for them to have a 3D indoor positioning service to obtain their real-time locations. Another example is tracking hospital patients. A patient may fall into critical condition at any time and anywhere, and nurses and doctors need to know all patients' fine-grained indoor locations to provide immediate and effective treatment. Meanwhile, human activities are mainly concentrated in indoor environments, such as offices and malls, making 3D indoor positioning/navigation a huge business opportunity. Therefore, how to achieve 3D indoor positioning at low cost and without invasive requirements has become a hot topic in recent years.
Indoor positioning should consider not only the positioning accuracy but also the feasibility, resource consumption, and cost [3,4]. Although there are a number of solutions in this area, the indoor positioning problem has not been addressed satisfactorily due to a series of practical issues (e.g., device cost and deployment limits). Under such circumstances, it is important to develop a low-cost, low-power solution that achieves accurate 3D indoor positioning and tracking. Different from existing solutions, we propose the use of shoe sensors, i.e., low-cost inertial sensors attached to one of the user's shoes, to accurately localize the user in a 3D indoor environment. In this paper, we mainly focus on addressing the following three problems: (1) accurately extracting features from the shoe sensors according to the characteristics of human walking; (2) establishing a human walking state classification model that can distinguish the user's walking status, including normal walking, going upstairs, and going downstairs; and (3) relying on the walking model to reduce the accumulation of positioning errors while walking.
Specifically, the following contributions are made in this work:
  • We propose a 3D indoor positioning solution using shoe sensors, i.e., inertial sensors attached to the user's shoes, that can accurately localize the user in 3D indoor environments.
  • A short-time energy-based mechanism has been proposed to extract gait information while the user is walking.
  • We design a walking state classification model that can distinguish the user’s walking status including normal walking, going upstairs, and going downstairs. The classified walking status can be further used to reduce 3D positioning errors.
  • Extensive experiments demonstrate that the proposed low-cost shoe sensing-based 3D indoor positioning solution can perform real-time localization with high accuracy.
The remainder of this paper is organized as follows. We describe the related work in Section 2. In Section 3, we present the methodology of the proposed shoe sensing-based 3D tracking. We evaluate the performance of our system in Section 4. Finally, conclusions are given in Section 5.

2. Related Work

Recently, with the development of MEMS inertial sensor devices, inertial sensor-based navigation solutions are becoming more and more popular in indoor localization scenarios [5]. Specifically, inertial sensor-based navigation can be categorized into stepping-based and strap-down-based navigation systems [6].
Stepping-based navigation systems use step information (e.g., the number of walking steps and step length) to detect pedestrians’ positions [6,7,8]. For example, in [6] the users carry an inertial sensor in their pockets, and the system can calculate the inertial sensor’s pitch angle and further estimate the step length. However, this method assumes that different people have the same step length, making it hard to achieve accurate localization in practice.
Strap-down-based navigation systems integrate acceleration readings twice to get the walking distance. With the aid of a compass and gyroscope, the system can also capture the walking direction. Existing solutions can be divided into two categories: one is carrying the sensor at the user’s waist, and the other is attaching sensors to the shoes.
Inertial sensors fixed at the waist can be used to detect the vertical displacement of the user's pelvis and estimate the length of each step [9,10]. However, walking characteristics differ due to people's various heights, weights, ages, etc. In order to improve the accuracy of the estimated step length and step frequency, Shin et al. [11] use each user's historical training data to learn their personal walking characteristics. Due to the sensor's internal bias, there is an accumulation drift problem over time [1]. To address this problem, Shih et al. [1] use a mathematical model of simple harmonic motion to calculate each step's length. However, this method still does not work well due to the different heights of individuals.
There are also several studies using shoe sensors to estimate a pedestrian's position. Exploiting the characteristics of human walking (e.g., the sensor velocity should be zero when the foot is on the ground), the zero-velocity update (ZUPT) [12,13,14,15,16,17,18,19] has been proposed to correct the accumulation drift. However, these solutions mainly focus on 2D positioning. In addition, Li et al. [15,19,20] use a Kalman filter or complementary filter to calibrate the inertial sensor data, which has high algorithmic complexity. Moreover, Placer et al. [17] use a camera sensor attached to the shoes to eliminate measurement error and improve the localization accuracy. Although a camera-based solution can effectively improve the accuracy of step estimation, it leads to complicated data acquisition and consumes more power. Madgwick et al. [21,22] use a gradient descent algorithm to eliminate gyroscope errors. Additionally, Nilsson et al. [14] use an expensive inertial sensor unit (i.e., over $700 per unit). Nilsson et al. [23,24,25,26] also use an expensive sensor placed under the heel, and such solutions mainly focus on 2D indoor localization. Ojeda et al. [27] proposed ZUPT-based 3D positioning, but it does not apply any correction for sensor measurement errors.
Additionally, some studies (e.g., [27,28,29,30]) use the inertial sensors of smartphones to perform indoor 2D positioning. However, these approaches rely on the aid of other methods, such as WiFi, to achieve better positioning performance; this increases the complexity of the positioning system, and WiFi signals are not always available in indoor environments.

3. Methodology

3.1. Gait Information

The estimated distance error will keep accumulating due to the drift of the inertial sensor readings [31]. According to the characteristics of human walking, we can eliminate the accumulated distance error by using zero reference points derived from the gait information. The zero reference points are the moments when the user's foot rests on the floor while walking. The gait information can be derived by applying the short-time energy, which is usually applied to audio signals, to the acceleration data. We also look for a feasible mounting position for the sensors by comparing the energy of the signal.

3.1.1. Fixed Position Selection

The walking patterns and the gait information of every step should be clear from the sensor reading [32]. We observe that the walking pattern and the gait information are more stable with the sensors fixed on the foot compared to other places.
Figure 1 shows an example of the energy of acceleration (i.e., the amplitude $x = \sqrt{acc_X^2 + acc_Y^2 + acc_Z^2} - g$) on all three axes while a person walks seven steps with the inertial sensors fixed at different positions (i.e., thigh, lower leg, and foot). We can see that the energy varies over the duration of every step. Moreover, the energy of the acceleration decreases to zero when the foot steps on the floor. Comparing Figure 1a,b with Figure 1c, we can see that the walking pattern of every step is much more stable with the sensor fixed on the foot. Meanwhile, the duration of the zero-velocity points is longer than at the other two positions. Therefore, we choose the foot as the mounting position of the sensors. While conducting experiments, we fixed the sensors as shown in Figure 2.

3.1.2. Gait Information Extraction

The gait information can be derived by comparing the energy fluctuation on all three axes of the accelerometer. We observe that the energy of acceleration varies with the motion of the foot: while the foot is in the air, the energy of acceleration is high; while the foot is on the floor, the energy of acceleration decreases to zero. Therefore, we can extract the gait information by applying a threshold to the short-time energy of the acceleration.
The acceleration of human walking varies on three axes. In order to take the energy fluctuation on all three axes into consideration, we use the amplitude of acceleration to extract the gait information. The amplitude can be calculated as follows:
$$x = \sqrt{acc_X^2 + acc_Y^2 + acc_Z^2} - g,$$
where $acc_X$, $acc_Y$, and $acc_Z$ are the accelerations on the X, Y, and Z axes, respectively, and $g$ is the gravitational acceleration. Therefore, the short-time energy signal of human walking can be derived as:
$$E_n = \sum_{m=n-N+1}^{n} \left[ x(m)\, w(n-m) \right]^2,$$
where n is the position of the center of the sliding window and N is the length of the window. N is critical for controlling the fluctuation of the energy. Figure 3 shows an example of the short-time energy with different sliding window lengths. We can see that the short-time energy wave is smoother with a longer window length.
According to the foregoing conclusions, the short-time energy of human walking will decrease to zero while the foot steps on the floor. Thus we can extract the gait information by setting an energy threshold $E_T = 0.05$. The reference points can be determined as:
$$stationary(t) = \begin{cases} 1, & E(t) < E_T \\ 0, & E(t) \ge E_T. \end{cases}$$
Figure 4 shows an example of gait information (i.e., reference points) extraction using our gait extraction method.
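To make the extraction pipeline concrete, the following Python sketch implements the amplitude, short-time energy, and threshold steps above. It assumes accelerometer readings in units of g sampled at 100 Hz and uses a Hamming window; the function names and the data file are illustrative, not part of the original system.

```python
import numpy as np

def amplitude(acc_xyz):
    """Amplitude x = sqrt(accX^2 + accY^2 + accZ^2) - g, with readings in units of g."""
    return np.linalg.norm(acc_xyz, axis=1) - 1.0

def short_time_energy(x, window_len=41):
    """Short-time energy E_n = sum_m [x(m) w(n - m)]^2 over a sliding window."""
    w = np.hamming(window_len)
    # [x(m) w(n - m)]^2 = x(m)^2 w(n - m)^2, so E is the convolution of x^2 with w^2.
    return np.convolve(x ** 2, w ** 2, mode="same")

def stationary_mask(energy, threshold=0.05):
    """Zero-velocity reference points: 1 while the foot rests on the floor."""
    return (energy < threshold).astype(int)

# Usage sketch: acc is an (N, 3) array of shoe-mounted accelerometer samples at 100 Hz.
# acc = np.loadtxt("shoe_acc.csv", delimiter=",")   # hypothetical data file
# gait = stationary_mask(short_time_energy(amplitude(acc)))
```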

3.2. Posture Correction Based on Gait Information

A quaternion is a hyper-complex number, which can be expressed as:
$$Q(q_0, q_1, q_2, q_3) = q_0 + q_1 i + q_2 j + q_3 k.$$
Figure 5 shows an example of coordinate system alignment with a quaternion. O is the reference coordinate system and O’ is the coordinate system of the inertial sensor. In this paper, the system needs to derive the quaternion that describes the attitude of the inertial sensor relative to the reference coordinate system.
While the foot is stepping on the floor, the acceleration data is stable. Thus we can estimate the initial posture of the sensors by using a method proposed in previous research [22].
Figure 6 shows the process of our posture initialization method. Supposing the initial quaternion of the reference coordinate system is $q_0 = [1, 0, 0, 0]$, we can obtain the gravitational acceleration vector $a^R = [0, 0, 1]^T$ of the reference coordinate system.
Since the quaternion rotation matrix can be derived as:
$$C_b^R = \begin{bmatrix} q_0^2+q_1^2-q_2^2-q_3^2 & 2(q_1 q_2 - q_3 q_0) & 2(q_1 q_3 + q_0 q_2) \\ 2(q_1 q_2 + q_0 q_3) & q_0^2-q_1^2+q_2^2-q_3^2 & 2(q_2 q_3 - q_0 q_1) \\ 2(q_1 q_3 - q_0 q_2) & 2(q_2 q_3 + q_0 q_1) & q_0^2-q_1^2-q_2^2+q_3^2 \end{bmatrix},$$
the gravity in the sensor coordinate system can be calculated by rotating the gravity in the reference coordinate system:
$$g^b = C_b^R \times a^R,$$
$$g^b = \begin{pmatrix} g^b_x \\ g^b_y \\ g^b_z \end{pmatrix} = \begin{pmatrix} 2(q_1 q_3 - q_0 q_2) \\ 2(q_2 q_3 + q_0 q_1) \\ q_0^2 - q_1^2 - q_2^2 + q_3^2 \end{pmatrix}.$$
The raw acceleration from sensor readings can be described as:
$$g^R = [\, a_x \;\; a_y \;\; a_z \,]^T,$$
which can be normalized as:
$$g^R = \frac{g^R}{\sqrt{a_x^2 + a_y^2 + a_z^2}}.$$
Therefore, we can derive the quaternion as:
$$Q_n(q_{n0}, q_{n1}, q_{n2}, q_{n3}) = v_n P,$$
where $P$ denotes the quaternion solution process, $v_n = a^R \times g^R$ is the vector product of $a^R$ and $g^R$, and $n = 1, 2, 3, \ldots$ As the quaternion is updated, the gravitational acceleration vector $g^b$ calculated from it approaches the measured acceleration of gravity. The posture can be initialized by repeating the above steps.
The above calculation process is convergent: the estimated coordinate system of the inertial sensor converges to the real attitude. Figure 7 shows an example of the convergence of the derived quaternion in our experiments. We find that our method converges in about 4 s (i.e., 400 samples).
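The iterative alignment can be sketched as follows. This is a simplified cross-product-based correction in the spirit of the initialization described above, not the paper's exact solution process $P$; the gain `alpha`, the iteration count, and the function names are assumptions.

```python
import numpy as np

def gravity_in_sensor_frame(q):
    """Estimated gravity direction in the sensor frame computed from quaternion q."""
    q0, q1, q2, q3 = q
    return np.array([2 * (q1 * q3 - q0 * q2),
                     2 * (q2 * q3 + q0 * q1),
                     q0 ** 2 - q1 ** 2 - q2 ** 2 + q3 ** 2])

def quat_mult(p, q):
    """Hamilton product of two quaternions given as [q0, q1, q2, q3]."""
    w1, x1, y1, z1 = p
    w2, x2, y2, z2 = q
    return np.array([w1 * w2 - x1 * x2 - y1 * y2 - z1 * z2,
                     w1 * x2 + x1 * w2 + y1 * z2 - z1 * y2,
                     w1 * y2 - x1 * z2 + y1 * w2 + z1 * x2,
                     w1 * z2 + x1 * y2 - y1 * x2 + z1 * w2])

def initialize_posture(acc_static, alpha=0.05, iters=400):
    """Rotate q = [1, 0, 0, 0] step by step until the predicted gravity matches the
    measured (normalized) acceleration recorded while the foot is on the floor."""
    g_meas = np.asarray(acc_static, dtype=float)
    g_meas = g_meas / np.linalg.norm(g_meas)
    q = np.array([1.0, 0.0, 0.0, 0.0])
    for _ in range(iters):
        e = np.cross(g_meas, gravity_in_sensor_frame(q))   # misalignment (measured x estimated)
        dq = np.concatenate(([1.0], 0.5 * alpha * e))       # small corrective rotation
        q = quat_mult(q, dq)
        q /= np.linalg.norm(q)
    return q

# Example: initial attitude from a static reading of roughly [0, 0.7, 0.7] g
# q0 = initialize_posture([0.0, 0.7, 0.7])
```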
Madgwick et al. [22] use a gradient descent approach to eliminate the orientation error. Following this method, the acceleration measured while the foot is on the floor, which is more stable, can be used as a reference to estimate the error between the sensor coordinate system and the reference coordinate system [22]. Thus, we can correct the drift error of the gyroscope by using the estimated error.
Figure 8 shows the workflow of gait-information-based posture estimation, which is similar to the attitude initialization. The vector product of $g^b$ and $g^R$ is the angular velocity error $e$; the larger $e$ is, the greater the angular velocity error. Combined with the gait information, the corrected gyroscope angular velocity can be expressed as:
$$gyro = gyro - K_p \cdot e \cdot stationary,$$
where $gyro$ is the angular velocity vector and $K_p$ is the error gain coefficient of the system. We use the fourth-order Runge–Kutta method to update the quaternion. The differential of the quaternion is defined as:
$$\dot{Q}(t) = \frac{1}{2}\, w(t) * Q(t).$$
At time $t_0$ it is:
$$Q(t_0) = Q_0.$$
The corresponding differential equation is:
$$\begin{bmatrix} \dot q_0(t) \\ \dot q_1(t) \\ \dot q_2(t) \\ \dot q_3(t) \end{bmatrix} = \frac{1}{2} \begin{bmatrix} 0 & -w_x & -w_y & -w_z \\ w_x & 0 & w_z & -w_y \\ w_y & -w_z & 0 & w_x \\ w_z & w_y & -w_x & 0 \end{bmatrix} \begin{bmatrix} q_0(t) \\ q_1(t) \\ q_2(t) \\ q_3(t) \end{bmatrix}.$$
The quaternion can be derived by using the fourth-order Runge–Kutta method as follows:
$$Q_{n+1} = Q_n + \frac{h}{6}\left( k_1 + 2k_2 + 2k_3 + k_4 \right),$$
$$k_1 = f(w_n, Q_n),$$
$$k_2 = f\!\left(w_n + \tfrac{h}{2},\; Q_n + \tfrac{h}{2} k_1\right),$$
$$k_3 = f\!\left(w_n + \tfrac{h}{2},\; Q_n + \tfrac{h}{2} k_2\right),$$
$$k_4 = f\!\left(w_n + h,\; Q_n + h k_3\right),$$
where $w_x$, $w_y$, $w_z$ are the raw angular velocities of the inertial sensor and $h$ is the actual sampling interval. Updating the quaternion in real time gradually causes it to lose its unit-norm property, so we must normalize it as follows:
$$q_i = \frac{\hat q_i}{\sqrt{\hat q_0^2 + \hat q_1^2 + \hat q_2^2 + \hat q_3^2}} \quad (i = 0, 1, 2, 3),$$
where $\hat q_0$, $\hat q_1$, $\hat q_2$, $\hat q_3$ are the updated quaternion components.
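A compact sketch of this posture update is shown below, combining the ZUPT-gated gyroscope correction with a fourth-order Runge–Kutta integration step. It assumes gyroscope readings in rad/s, accelerometer readings in g, and a 100 Hz sampling rate; `Kp` and the helper names are illustrative, and `gravity_in_sensor_frame` is reused from the initialization sketch above.

```python
import numpy as np

def q_dot(w, q):
    """Quaternion kinematics dQ/dt = (1/2) * Omega(w) * Q for angular velocity w."""
    wx, wy, wz = w
    omega = 0.5 * np.array([[0.0, -wx, -wy, -wz],
                            [wx,  0.0,  wz, -wy],
                            [wy, -wz,  0.0,  wx],
                            [wz,  wy, -wx,  0.0]])
    return omega @ q

def update_posture(q, gyro, acc, stationary, h=0.01, Kp=0.5):
    """One update step: correct the gyro with the gravity error while the foot is on
    the floor, then integrate the quaternion with a fourth-order Runge-Kutta step."""
    g_est = gravity_in_sensor_frame(q)            # estimated gravity from the quaternion
    g_meas = np.asarray(acc, dtype=float)
    g_meas = g_meas / np.linalg.norm(g_meas)      # normalized measured gravity
    e = np.cross(g_est, g_meas)                   # angular velocity error, g_b x g_R
    w = np.asarray(gyro, dtype=float) - Kp * e * stationary

    # Classic RK4 step; the angular velocity is held constant over the interval h.
    k1 = q_dot(w, q)
    k2 = q_dot(w, q + 0.5 * h * k1)
    k3 = q_dot(w, q + 0.5 * h * k2)
    k4 = q_dot(w, q + h * k3)
    q = q + (h / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
    return q / np.linalg.norm(q)                  # keep the quaternion unit-norm
```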

3.3. Eliminate Cumulative Error Based on Gait Information

The gait information can be used not only as the basis for eliminating gyroscope error, but also as the reference point for eliminating cumulative error. As shown in Figure 9, following the idea of Yun et al. and Han et al. [20,33], the accumulated error in acceleration can be eliminated based on the zero-velocity reference points and the linear drift characteristic of inertial sensors.
In our experiments, we fix the sensor on the foot of the user with its x axis along the user’s toe direction, which is also the moving direction of the user. Thus, the moving speed of the user can be expressed as:
$$v(T) = v(0) + \sum_{i=0}^{Tk-1} \frac{1}{k}\, acc_x(i),$$
where $k$ is the sampling rate of the sensors. In our experiments, we found that the sampling rate can range from 40 to 100 Hz without significantly affecting system performance. This range covers the maximum accelerometer sampling rates of most current smartphones. Therefore, we focus on other factors that are relevant to the performance of the system and set the sampling rate to 100 Hz in our experiments.
As shown in Figure 10, the accumulated velocity error from $t_a$ to $t_b$ can be calculated as follows:
$$v_e = \Delta v_2 - \Delta v_1,$$
then the gradient of accumulated velocity error during this period is:
$$\tilde e = \frac{\Delta v_2 - \Delta v_1}{t_b - t_a}.$$
According to previous work [33], if the gradient of the accumulated velocity error during this period is constant, then the velocity at time T can be derived as:
$$V(T) = V(ak - 1) + \sum_{i=ak}^{bk} \frac{1}{k}\, acc_x(i) - \frac{\Delta v_2 - \Delta v_1}{t_b - t_a} \times \frac{i - ak}{k}.$$
When the foot is landing, the velocity should be zero:
$$V(ak - 1) = 0,$$
then the velocity can be expressed as:
$$V(T) = \begin{cases} \displaystyle\sum_{i=a_n k}^{b_n k} \frac{1}{k}\, acc_x(i) - \tilde e \cdot \frac{i - a_n k}{k}, & stationary = 1 \\ 0, & stationary = 0, \end{cases}$$
where $\tilde e$ is the accumulated error gradient of each step, $a_n$ is the start time of every step, and $b_n$ is the ending time of every step. The human walking distance can be calculated by integrating the corrected velocity $V(T)$. Assuming the initial distance is zero, the distance along a given axis can be expressed as:
$$S(T) = \sum_{i=0}^{Tk-1} \frac{1}{k}\, V(i).$$
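The per-step velocity correction can be sketched as below. This is a simplified linear de-drift that forces the forward velocity back to zero at each landing, in the spirit of the correction above; the 100 Hz sampling rate and all names are illustrative, and `acc_x` is assumed to be the forward acceleration already expressed in the navigation frame.

```python
import numpy as np

def zupt_velocity(acc_x, stationary, fs=100):
    """Integrate forward acceleration into velocity, pin it to zero while the foot is
    on the floor, and remove the linear drift accumulated over each airborne phase."""
    n = len(acc_x)
    v = np.zeros(n)
    for i in range(1, n):
        v[i] = 0.0 if stationary[i] else v[i - 1] + acc_x[i] / fs

    # Within each airborne segment, subtract a linear drift so that the velocity
    # returns to zero at the next landing (the residual is the accumulated error).
    i = 0
    while i < n:
        if stationary[i]:
            i += 1
            continue
        j = i
        while j < n and not stationary[j]:
            j += 1
        residual = v[j - 1]                       # velocity error observed at landing
        length = max(j - 1 - i, 1)
        v[i:j] -= residual * np.arange(j - i) / length
        i = j
    return v

def walking_distance(v, fs=100):
    """Integrate the corrected velocity to obtain the cumulative walking distance."""
    return np.cumsum(v) / fs
```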

3.4. Eliminate Vertical Distance Error Based on Gait Information

3.4.1. Build and Design a Model of State

In this section, we focus on how to distinguish walking upstairs, walking downstairs, and plane motion (normal walking and jogging). We found an effective way to distinguish between the different walking patterns based on the characteristics of human walking. Figure 11 shows the walking model while the user is walking upstairs and downstairs, respectively.
Each step can be abstracted as a walking vector $\vec S$, which is the sum of its horizontal component $\vec H$ and vertical component $\vec Z$. For each step, $\theta$ can be expressed as:
$$\theta = \arccos \frac{Z}{|\vec S|},$$
where $\theta$ stands for the degree of change on the z axis as the user takes a step. However, $\theta$ differs between users, since walking patterns vary greatly among individuals. In order to distinguish between walking along a plane, upstairs, and downstairs, we design a novel mathematical model:
$$\theta' = \frac{\theta}{|\vec H|},$$
where $\theta'$ is the angle change per unit of horizontal distance. The main purpose of this model is to eliminate the error caused by individual walking characteristics by normalizing the change on the z axis by the horizontal distance.
Figure 12 shows an example of $\theta'$ when a user walks in different environments. We can observe that $\theta'$ differs when the user walks along a plane, upstairs, or downstairs. It is easy to distinguish the state of human walking by thresholding the average $\theta'$, which is shown in Table 1.
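A per-step classification following this model might look like the sketch below. The step vector is built from the estimated horizontal and vertical displacements of one step; the angle is computed here as a signed arcsine of the vertical ratio, which is consistent with the signed averages in Table 1, and the two thresholds are illustrative values placed between those averages rather than values taken from the paper.

```python
import numpy as np

def classify_step(dz, dh, up_thresh=50.0, down_thresh=-40.0):
    """Classify one step as plane motion, upstairs, or downstairs from its vertical
    change dz and horizontal distance dh (both in meters)."""
    s = np.hypot(dh, dz)                      # |S|, length of the step vector
    theta = np.degrees(np.arcsin(dz / s))     # signed inclination of the step
    theta_prime = theta / dh                  # angle change per unit horizontal distance
    if theta_prime > up_thresh:
        return "upstairs"
    if theta_prime < down_thresh:
        return "downstairs"
    return "plane"

# Example: a stair step of 0.30 m horizontal and 0.16 m vertical displacement
# classify_step(0.16, 0.30)   # -> "upstairs" (theta' is roughly 94 degrees per meter)
```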

3.4.2. Eliminate Vertical Distance Error

Although we can eliminate the accumulated error at the velocity level, the error in the vertical direction cannot be completely eliminated, which leads to height calculation errors. Figure 13 and Figure 14 show the height error from 100 sets of walking and running data, respectively. While the user is walking, the maximum vertical distance error is 0.291 m, the average absolute error is 0.1186 m, and the variance of the error is 0.0582 m.
While the user is running, the maximum vertical distance error is 0.66 m, the average absolute error is 0.17 m, and the mean square deviation is 0.22 m.
The error of indoor 3D localization is partly introduced by the accumulated error on the Z axis. Figure 15 shows an example of the displacement on the Z axis while the user walks on a flat floor. We can observe that the displacement on the Z axis accumulates after every step.
In order to eliminate the foregoing accumulated error, we propose a strategy based on the average human walking pattern and the gait information.
Figure 16 shows the workflow of our error elimination strategy. The start and end point of each step can be derived from the gait information. In addition, we can determine whether the user is walking on a flat floor by using the walking pattern model. Then we can eliminate the accumulated error on the Z axis through our strategy as shown in Figure 17.
The accumulated error from $t_a$ to $t_b$ is:
$$s_e = \Delta s_2 - \Delta s_1.$$
Then we can calculate the distance error gradient as:
$$\tilde e_s = \frac{\Delta s_2 - \Delta s_1}{t_b - t_a}.$$
In this step, we assume the distance error gradient is constant, thus the moving distance on the Z axis while taking a step can be derived as:
$$S_z(T) = S_z(ak - 1) + \sum_{i=ak}^{bk} \left( \frac{1}{k}\, V_z(i) - \tilde e_s \cdot \frac{i - ak}{k} \right),$$
and the moving distance on the Z axis can be estimated using:
$$S_z(T) = \begin{cases} S_z(a_n k - 1) + \displaystyle\sum_{i=a_n k}^{b_n k} \left( \frac{1}{k}\, V_z(i) - \tilde e_s \cdot \frac{i - a_n k}{k} \right), & stationary = 1 \\ S_z(a_n k - 1), & stationary = 0. \end{cases}$$
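The vertical correction can be sketched in the same per-step fashion as the velocity correction: the Z displacement of a step is de-drifted only when the state model reports plane motion, so real height changes on stairs are preserved. The names (`vz`, `is_plane`) and the 100 Hz rate are assumptions, and this is a simplified version of the correction above.

```python
import numpy as np

def correct_vertical_distance(vz, stationary, is_plane, fs=100):
    """Accumulate the Z displacement per step; when the step is classified as plane
    motion, subtract a linear drift so the height returns to its value at landing."""
    n = len(vz)
    sz = np.zeros(n)
    i = 1
    while i < n:
        if stationary[i]:
            sz[i] = sz[i - 1]                 # foot on the floor: hold the height
            i += 1
            continue
        j = i
        while j < n and not stationary[j]:
            j += 1
        seg = np.cumsum(vz[i:j]) / fs         # raw Z displacement over this step
        if is_plane[i]:                       # plane motion: remove the linear drift
            seg = seg - seg[-1] * np.arange(1, j - i + 1) / (j - i)
        sz[i:j] = sz[i - 1] + seg
        i = j
    return sz
```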

4. Evaluation

4.1. Building a System Platform and Experimental Settings

4.1.1. Building a System Platform

The collection nodes and the network node are the main hardware modules in our system. The collection nodes pack and send data from a gyroscope and an accelerometer to the network node. The data are then delivered to the PC monitoring client via the serial port, after being parsed by the network node, for further processing and display. Figure 18 shows the collection node and network node of our system. The size of the nodes is designed as 5 cm × 5 cm for convenience.

4.1.2. Experimental Environment Settings

Since the MEMS inertial sensor is hardly affected by the external environment, we do not need to consider the effect of other external factors. Considering that the difference between 2D and 3D localization is the height information, the main experimental scenes can be divided into two classes: moving in the horizontal plane (normal walking and jogging) and climbing stairs. Figure 19 shows our experimental scene.

4.2. Experimental Results and Analysis

4.2.1. Gait Information Extraction Experiments

The accuracy of the gait information extraction is critical for the accuracy of 3D indoor localization. We focus on extracting gait information for four statuses: normal walking, running, going upstairs, and going downstairs. We then verify the accuracy of the short-time energy-based gait information extraction for these statuses.
In order to accurately extract the gait information and reduce the decision delay, the threshold should be set as small as possible while using both methods. Figure 20 shows an example of extracting gait information. We can observe that the extracted gait information while walking on a flat floor is fine with our short-time energy method, yet abnormal with the acceleration magnitude-based method. The problem arises when extracting the gait information of the other three statuses.
For the four different states, we perform experiments under different scenarios. The normal walking and jogging experiments are conducted in the corridor of a school building. For the benchmark experiments, normal walking and jogging are performed along a straight line with a distance of 19.2 m, in 20 rounds each (i.e., back and forth). The tests of walking upstairs and downstairs were conducted on the stairs of the school building; to facilitate testing and statistical analysis of the experimental data, walking upstairs and downstairs were also conducted in 20 rounds.
Figure 21 shows the accuracy of gait information extraction for all four statuses with our short-time energy method and the acceleration amplitude-based method. We can see that our method provides sufficient accuracy to extract those four common moving activities.

4.2.2. Walking State Classification Model Experiment

The walking state judgement model presented in this paper can distinguish three kinds of states: movement in a horizontal plane (normal walking, jogging), going upstairs, and going downstairs. We verify the accuracy of this judgement with collected data covering normal walking, jogging, going upstairs, and going downstairs, regarding jogging and normal walking as the same moving state. Specifically, we extract the experimental data of 100 steps for each state to analyze the statistics of the walking state classification.
It can be seen from Figure 22 that, using the mathematical model designed in this paper, the judgement for the three states achieves above 95% accuracy, so the model can judge the walking state effectively. Thus, it provides a reliable precondition for eliminating the vertical distance error.

4.2.3. Error Elimination in the Vertical Direction

The method designed in this paper eliminates errors in the vertical direction when the user moves in the horizontal plane, based on the walking state judgement model. We assessed the two states of normal walking and jogging using statistics over 100 steps. As shown in Figure 23, for the walking activity, the maximum distance error is 0.26 m, the mean absolute error is 0.02 m, and the mean square error of the absolute value is around 0.06 m.
As shown in Figure 24, the largest vertical distance error of jogging in the plane is 0.61 m, the average error of absolute value is 0.01 m, and the variance of absolute value is 0.06 m. It can be seen from the statistical results of the two states that we can efficiently reduce the vertical distance error by leveraging gait information and creating a judgement model of the walking state.

4.2.4. Step Estimation Experiments

For inertial sensor-based 3D localization, one of the important metrics is the accuracy of step length estimation. In this paper, we continuously collect data from four common walking states: normal walking, jogging, going upstairs, and going downstairs.
Figure 25a shows an example of walking in a straight line. The real and estimated distances of the path are 19.2 m and 17.3 m, respectively, which means that our system achieves an accuracy of 90.1%. Figure 25b shows an example of walking along a rectangle with a length of 6 m and a width of 7 m; the estimated trajectory substantially coincides with the actual length and width. The following figures give detailed statistics on the walking accuracy in the plane.
Figure 26 shows the statistics of step error while normal walking. In the condition of normal walking, the average length of each step is 1.20 m. We pick 100 steps randomly from our experimental data for statistical analysis. The result shows that the maximum error is 0.34 m, the average error of absolute value is 0.11 m, the mean square error of step length error is 0.08 m, and the accuracy of step length estimation is 90.83% while walking normally along a horizontal plane.
Figure 27 shows the statistics of step errors while jogging. In the condition of jogging, the average length of each step is 1.60 m. We pick 100 steps randomly from our experimental data for statistical analysis. The result shows that the maximum error is 0.49 m, the average error of absolute value is 0.13 m, and the mean square error is 0.11 m; the accuracy of step length estimation is 91.87% while jogging along a horizontal plane.
Figure 28 shows an example of the trajectory of going upstairs or downstairs. The width of each step is 0.3 m and the height of each step is 0.16 m. When going up and down normally, we can regard alternating feet once as one step; the walking distance of each step is 0.60 m and the vertical height is 0.32 m. It can be seen from the trajectory in the figure that the statistics above substantially coincide with the trajectory of going upstairs or downstairs in reality.
Every step of walking upstairs can be regarded as both horizontal and vertical movement. Figure 29 shows an example of the statistical analysis of horizontal steps while going upstairs. It can be seen from the figure that the maximum error is 0.32 m, the average error of absolute value is 0.09 m, and the mean square error is 0.06 m. The accuracy of the horizontal step length when going upstairs is 90.83%.
Figure 30 shows the statistical analysis of vertical movements per step while going upstairs. The maximum error is 0.14 m, the average error of absolute value is 0.04 m, and the mean square error is 0.03 m. The accuracy of vertical step length when going upstairs is 87.5%.
Similarly, Figure 31 gives a statistical analysis of horizontal steps while going downstairs. We can find that the maximum error is 0.38 m, the average error of absolute value is 0.12 m, and the mean square error is 0.08 m; the accuracy of each horizontal step length is 80.0%.
Figure 32 shows the statistical analysis of vertical steps while going downstairs. The maximum error is 0.30 m, the average error of absolute value is 0.07 m, and the mean square error is 0.06 m; the accuracy of each vertical step length is 79.1%.
Here we compare the performance of our tracking system with the method in [8]. First of all, we note that [8] uses a high-precision fiber optic gyroscope (DSP-1750), whose white noise is less than 0.8°/h/√Hz at normal temperature, whereas we use a low-cost inertial sensor (MPU-6050), whose white noise is 18°/h/√Hz at the same temperature; the noise levels of the two sensors thus differ by a factor of 22.5. In order to ensure a fair comparison between the two experimental platforms, we normalize the measurement errors of both. The contrasting performance of the two systems is shown in Table 2.
The step estimation accuracy of our system is close to that of [8] most of the time, and our system achieves better results in the case of jogging.

4.2.5. Heading Verification Experiment

Heading accuracy is one of the significant indicators of inertial sensor-based indoor 3D positioning. We evaluated this indicator by asking three participants to walk and jog straight along a 19.2-m path (back and forth 10 times). The statistical results are shown in Figure 33. We find that the mean course error per step is close to 0° in normal conditions, the maximum yaw angle error is 15.46°, the mean absolute course angle error is 5.65°, and the mean square error is 3.88°.
Figure 34 shows the jogging heading angle error. We find that the maximum error is 39.13°, the average of the absolute value heading angle error is 7.09°, and the mean square error of the heading angle is 6.94°.

4.2.6. Overall Effect of Indoor 3D Positioning

This section shows the overall effect of 3D positioning. Figure 35a shows the structure of every floor of the building. As shown in Figure 35b, the trajectory of 3D positioning for 5 min is basically in conformity with the actual trajectory, which means that our system can provide accurate 3D positioning.

5. Conclusions

In this paper, we propose a shoe sensing-based 3D tracking solution that achieves real-time indoor 3D positioning by leveraging low-cost inertial sensors that can be easily attached to the user's shoes. The proposed system reduces the cumulative errors caused by the sensors' internal bias by exploiting the linear drift property of the cumulative acceleration error. In addition, we propose a walking state classification model that is able to distinguish different moving statuses, including normal walking/jogging, going upstairs, and going downstairs. A real-time 3D trajectory map is also built with Unity3D from the system's tracking results. Extensive experiments have been conducted using the low-cost sensor module MPU-6050, demonstrating that the proposed system can accurately track users' 3D indoor positions and moving trajectories.
Large-scale deployment of shoe sensing-based real-time localization requires careful consideration of a set of key networking metrics (e.g., throughput, delay, and energy efficiency). Several studies (e.g., [34,35]) redesigned the networking scheme around these critical metrics for large-scale WSN and IoT networks. We leave the study of large-scale shoe sensing deployment to future work.

Acknowledgments

This work is supported by the National Natural Science Foundation of China and the Changsha Science and Technology Foundation, under grant nos. 61373042 and 61502361. We are extremely grateful to the editors and anonymous reviewers for comments that helped us improve the quality of the paper.

Author Contributions

Fangmin Li conceived the project and obtained the required funding. Xiaolin Ma and Xiaochuang Chen did the project management, obtained all the materials, and helped with analysis and dissemination of the project results. Jian Liu and Guo Liu provided technical feedback and suggestions for each of the experiments, as well as editing of the manuscript. The literature review, design of experiments, script development, accuracy assessment, and write-up of the original version of the manuscript were done by Fangmin Li and Xiaochuang Chen.

Conflicts of Interest

The authors declare no conflict of interest. The funding sponsors had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript; or in the decision to publish the results.

References

  1. Shih, W.Y.; Chen, L.Y.; Lan, K.C. Estimating walking distance with a smart phone. In Proceedings of the IEEE 2012 Fifth International Symposium on Parallel Architectures, Algorithms and Programming (PAAP), Taipei, Taiwan, 17–20 December 2012; pp. 166–171.
  2. Borriello, G.; Chalmers, M.; LaMarca, A.; Nixon, P. Delivering real-world ubiquitous location systems. Commun. ACM 2005, 48, 36–41. [Google Scholar] [CrossRef] [Green Version]
  3. Gu, Y.; Lo, A.; Niemegeers, I. A survey of indoor positioning systems for wireless personal networks. IEEE Commun. Surv. Tutor. 2009, 11, 13–32. [Google Scholar] [CrossRef]
  4. Wang, Y.; Liu, J.; Chen, Y.; Gruteser, M.; Yang, J.; Liu, H. E-eyes: Device-free location-oriented activity identification using fine-grained WiFi signatures. In Proceedings of the 20th Annual International Conference on Mobile Computing and Networking, Maui, HI, USA, 7–11 September 2014; pp. 617–628.
  5. Diaz, E.M. Inertial pocket navigation system: Unaided 3D positioning. Sensors 2015, 15, 9156–9178. [Google Scholar] [CrossRef] [PubMed]
  6. Diaz, E.M.; Gonzalez, A.L.M. Step detector and step length estimator for an inertial pocket navigation system. In Proceedings of the IEEE 2014 International Conference on Indoor Positioning and Indoor Navigation (IPIN), Busan, Korea, 27–30 October 2014; pp. 105–110.
  7. Jahn, J.; Batzer, U.; Seitz, J.; Patino-Studencka, L.; Boronat, J.G. Comparison and evaluation of acceleration based step length estimators for handheld devices. In Proceedings of the IEEE 2010 International Conference on Indoor Positioning and Indoor Navigation (IPIN), Zurich, Switzerland, 15–17 September 2010; pp. 1–6.
  8. Harle, R. A survey of indoor inertial positioning systems for pedestrians. IEEE Commun. Surv. Tutor. 2013, 15, 1281–1293. [Google Scholar] [CrossRef]
  9. Goyal, P.; Ribeiro, V.J.; Saran, H.; Kumar, A. Strap-down pedestrian dead-reckoning system. Measurements 2011, 2, 3. [Google Scholar]
  10. Li, J.; Wang, Q.; Liu, X.; Zhang, M. An autonomous waist-mounted pedestrian dead reckoning system by coupling low-cost MEMS inertial sensors and FPG receiver for 3D urban navigation. J. Eng. Sci. Technol. 2014, 7, 9–14. [Google Scholar]
  11. Shin, S.H.; Park, C.G.; Hong, H.S.; Lee, J.M. MEMS-based personal navigator equipped on the user’s body. In Proceedings of the 18th International Technical Meeting of the Satellite Division of the Institute of Navigation (ION GNSS 2005), Long Beach, CA, USA, 13–16 September 2005.
  12. Pratama, A.; Hidayat, R. Smartphone-based pedestrian dead reckoning as an indoor positioning system. In Proceedings of the 2012 International Conference on System Engineering and Technology (ICSET), Bandung, Indonesia, 11–12 September 2012; pp. 1–6.
  13. Foxlin, E. Pedestrian tracking with shoe-mounted inertial sensors. IEEE Comput. Graph. Appl. 2005, 25, 38–46. [Google Scholar] [CrossRef] [PubMed]
  14. Nilsson, J.O.; Skog, I.; Handel, P.; Hari, K.V.S. Foot-mounted INS for everybody—An open-source embedded implementation. In Proceedings of the 2012 IEEE/ION Position Location and Navigation Symposium (PLANS), Myrtle Beach, SC, USA, 23–26 April 2012; pp. 140–145.
  15. Li, Y.; Wang, J.J. A robust pedestrian navigation algorithm with low cost IMU. In Proceedings of the IEEE 2012 International Conference on Indoor Positioning and Indoor Navigation (IPIN), Sydney, Australia, 13–15 November 2012; pp. 1–7.
  16. Feliz Alonso, R.; Zalama Casanova, E.; Gómez García-Bermejo, J. Pedestrian tracking using inertial sensors. J. Phys. Agents 2009, 3, 35–43. [Google Scholar] [CrossRef]
  17. Placer, M.; Kovačič, S. Enhancing indoor inertial pedestrian navigation using a shoe-worn marker. Sensors 2013, 13, 9836–9859. [Google Scholar] [CrossRef] [PubMed]
  18. Jian, L.; Wang, Y.; Chen, Y.; Yang, J.; Chen, X.; Cheng, J. Tracking vital signs during sleep leveraging off-the-shelf wifi. In Proceedings of the 16th ACM International Symposium on Mobile Ad Hoc Networking and Computing, Hangzhou, China, 22–25 June 2015; pp. 267–276.
  19. Jiménez, A.R.; Seco, F.; Prieto, J.C.; Guevara, J. Indoor pedestrian navigation using an INS/EKF framework for yaw drift reduction and a foot-mounted IMU. In Proceedings of the IEEE 2010 7th Workshop on Positioning Navigation and Communication (WPNC), Dresden, Germany, 11–12 March 2010; pp. 135–143.
  20. Yun, X.; Calusdian, J.; Bachmann, E.R.; McGhee, R.B. Estimation of human foot motion during normal walking using inertial and magnetic sensor measurements. IEEE Trans. Instrum. Meas. 2012, 61, 2059–2072. [Google Scholar] [CrossRef]
  21. Madgwick, S.O.H.; Harrison, A.J.L.; Vaidyanathan, R. Estimation of IMU and MARG orientation using a gradient descent algorithm. In Proceedings of the IEEE 2011 IEEE International Conference on Rehabilitation Robotics (ICORR), Zurich, Switzerland, 29 June–1 July 2011; pp. 1–7.
  22. Madgwick, S.O.H. An Efficient Orientation Filter for Inertial and Inertial/Magnetic Sensor Arrays. Report x-io and University of Bristol (UK), 2010. Available online: https://www.samba.org/tridge/UAV/madgwick_internal_report.pdf (accessed on 27 October 2016).
  23. OpenShoe. Available online: http://www.openshoe.org/ (accessed on 27 October 2016).
  24. Nilsson, J.O.; Zachariah, D.; Skog, I.; Händel, P. Cooperative localization by dual foot-mounted inertial sensors and inter-agent ranging. EURASIP J. Adv. Signal Process. 2013, 2013, 1–17. [Google Scholar] [CrossRef]
  25. Rantakokko, J.; Rydell, J.; Stromback, P.; Händel, P.; Callmer, J.; Törnqvist, D.; Gustafsson, F.; Jobs, M.; Grudén, M. Accurate and reliable soldier and first responder indoor positioning: multisensor systems and cooperative localization. IEEE Wirel. Commun. 2011, 18, 10–18. [Google Scholar] [CrossRef]
  26. House, S.; Connell, S.; Milligan, I.; Austin, D.; Hayes, T.L.; Chiang, P. Indoor localization using pedestrian dead reckoning updated with RFID-based fiducials. In Proceedings of the 2011 Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), Boston, MA, USA, 30 August–3 September 2011; pp. 7598–7601.
  27. Ojeda, L.; Borenstein, J. Personal dead-reckoning system for GPS-denied environments. In Proceedings of the IEEE International Workshop on Safety, Security and Rescue Robotics (SSRR 2007), Rome, Italy, 27–29 September 2007; pp. 1–6.
  28. Zampella, F.; De Angelis, A.; Skog, I.; Zachariah, D.; Jimenez, A. A constraint approach for UWB and PDR fusion. In Proceedings of the International Conference on Indoor Positioning and Indoor Navigation, Sydney, Australia, 13–15 November 2012.
  29. Khan, M.I.; Syrjarinne, J. Investigating effective methods for integration of building’s map with low cost inertial sensors and wifi-based positioning. In Proceedings of the International IEEE 2013 International Conference on Indoor Positioning and Indoor Navigation (IPIN), Montbeliard-Belfort, France, 28–31 October 2013; pp. 1–8.
  30. Chen, Z.; Zou, H.; Jiang, H.; Zhu, Q.; Soh, Y.; Xie, L. Fusion of WiFi, smartphone sensors and landmarks using the Kalman filter for indoor localization. Sensors 2015, 15, 715–732. [Google Scholar] [CrossRef] [PubMed]
  31. Woodman, O.J. An introduction to inertial navigation. Tech. Rep. 2007, 14, 15. [Google Scholar]
  32. Jalil, M.; Butt, F.A.; Malik, A. Short-time energy, magnitude, zero crossing rate and autocorrelation measurement for discriminating voiced and unvoiced segments of speech signals. In Proceedings of the IEEE 2013 International Conference on Technological Advances in Electrical, Electronics and Computer Engineering (TAEECE), Konya, Turkey, 9–11 May 2013; pp. 208–212.
  33. Han, H.; Yu, J.; Zhu, H.; Chen, Y.; Yang, J.; Zhu, Y.; Xue, G.; Li, M. SenSpeed: Sensing driving conditions to estimate vehicle speed in urban environments. In Proceedings of the IEEE Conference on Computer Communications (INFOCOM 2014), Toronto, ON, Canada, 27 April–2 May 2014; pp. 727–735.
  34. Dong, M.; Ota, K.; Liu, A. RMER: Reliable and energy efficient data collection for large-scale wireless sensor networks. IEEE Internet Things J. 2016, 3, 511–519. [Google Scholar] [CrossRef]
  35. Xie, R.; Liu, A.; Gao, J. A residual energy aware schedule scheme for WSNs employing adjustable awake/sleep duty cycle. Wirel. Pers. Commun. 2016, 90, 1859–1887. [Google Scholar] [CrossRef]
Figure 1. Amplitude of raw acceleration with sensor fixed on different position. (a) Fixed on thigh; (b) fixed on legs; (c) fixed on foot.
Figure 2. Inertial sensor fixed on foot.
Figure 3. Energy waveforms with different window sizes: (a) window size of 21; (b) window size of 31; (c) window size of 41; (d) window size of 51.
Figure 4. Waveform of gait signal: (a) waveform of energy and gait; (b) acceleration and gait.
Figure 5. Reference coordinate system and sensor coordinate system.
Figure 6. Posture initialization process.
Figure 7. Convergent process. (a–d) describes the convergent process of the four elements in quaternion.
Figure 8. Posture estimate based on gait information.
Figure 9. Accumulated error elimination.
Figure 10. Based on cumulative error gait elimination.
Figure 11. Walking state classification model: (a) downstairs; (b) upstairs.
Figure 12. θ′ angle contrast.
Figure 13. The vertical distance error while walking: (a) 100 sets of walking data; (b) vertical distance error.
Figure 14. The vertical distance error while running: (a) 100 sets of running data; (b) vertical distance error.
Figure 15. Vertical error accumulation.
Figure 16. Plane vertical distance error elimination process.
Figure 17. The vertical distance error elimination.
Figure 18. Collection node and network node: (a) collection node; (b) network node.
Figure 19. Experimental scene.
Figure 20. Comparative information extraction gaits.
Figure 21. Gait information extraction accuracy.
Figure 22. Traveling state determining statistical accuracy.
Figure 23. Walking normally: vertical distance error elimination: (a) distance error; (b) error percentage.
Figure 24. Jogging: vertical distance error elimination: (a) distance error; (b) error percentage.
Figure 25. Track of walking on a horizontal plane: (a) walking straight; (b) walking along square.
Figure 26. Normal walking step statistics: (a) distance error; (b) error percentage.
Figure 27. Jogging step statistics: (a) distance error; (b) error percentage.
Figure 28. Upstairs and downstairs tracks: (a) upstairs; (b) downstairs.
Figure 29. Horizontal step statistics (upstairs): (a) distance error; (b) error percentage.
Figure 30. Vertical step statistics (upstairs): (a) distance error; (b) error percentage.
Figure 31. Horizontal step statistics (downstairs): (a) distance error; (b) error percentage.
Figure 32. Vertical step statistics (downstairs): (a) distance error; (b) error percentage.
Figure 33. Walking heading angle error: (a) course error; (b) error percentage.
Figure 34. Jogging heading angle error: (a) course error; (b) error percentage.
Figure 35. 3D positioning: (a) corridor structure; (b) 3D positioning trajectory in 3D modeling.
Table 1. The average and standard deviation of θ′.

Types         Average Value (°)    Standard Deviation (°)
plane         3.935                6.435
upstairs      107.904              36.465
downstairs    −89.464              34.907
Table 2. Step estimation normalized error comparison.

              Error (DSP-1750 [8])    Error (MPU-6050)
walking       0.19%                   0.40%
jogging       6.25%                   0.36%
upstairs      0.30%                   0.56%
downstairs    0.90%                   0.88%
