Article

Gait Analysis Using Floor Markers and Inertial Sensors

School of Electrical Engineering, University of Ulsan, Mugeo 2-Dong, Nam gu, Ulsan City 680-749, Korea
* Author to whom correspondence should be addressed.
Sensors 2012, 12(2), 1594-1611; https://doi.org/10.3390/s120201594
Submission received: 1 December 2011 / Revised: 31 January 2012 / Accepted: 31 January 2012 / Published: 7 February 2012
(This article belongs to the Section Physical Sensors)

Abstract

In this paper, a gait analysis system which estimates step length and foot angles is proposed. A measurement unit consisting of a camera and inertial sensors is installed on a shoe. When the foot touches the floor, markers are recognized by the camera to obtain the current position and attitude. A simple planar marker with 4096 different codes is used. These markers, printed on paper, are placed on the floor. When the foot is moving off the floor, the position and attitude are estimated using an inertial navigation algorithm. For accurate estimation, a smoother is proposed in which vision information and inertial sensor data are combined. Through experiments, it is shown that the proposed system can both track foot motion and estimate step length.

1. Introduction

Gait analysis [1] involves the measurement of temporal/spatial characteristics (step length and walking speed), kinematics and kinetics. Gait analysis is used for medical purposes, sport analysis, and also for entertainment purposes. An instrumented walkway, where pressure sensors are located on the floor, can be used to measure step length and foot pressure [1]. A vision-based motion tracking system can measure step length and foot angles accurately [2]. However, both systems have rather limited walking ranges (usually less than 10 m).

Recently, inertial sensor-based systems have become popular. In [3], inertial sensors are installed on a leg and gait phases are identified by computing the angles of leg segments. In [4], inertial sensors are used to estimate upper-limb orientation. More relevant results are given in [5,6], where inertial sensors are installed on a shoe or a leg and the foot movement is estimated using inertial navigation algorithms. Step length is estimated in [5] and foot movement is estimated in [6]. These techniques can estimate step length or foot movement without range limitations. However, due to double integration errors, the accuracy tends to degrade over time.

In this paper, a new gait analysis system is proposed. A measurement system consisting of a camera and inertial sensors is installed on a shoe. Fiducial markers are printed on paper and placed on the floor. Fiducial markers are used mainly in virtual reality systems and there are many types of such markers [7–9]. In this paper, we use simple markers since there is no concern that a complicated background might mistakenly be recognized as a marker.

When a foot is on the floor, the position and attitude are estimated using a camera by recognizing the markers. When a foot is moving, its movement is estimated using inertial sensors. In an inertial-based foot estimation system, Kalman filters are usually used to estimate the position and attitude of the foot. For better accuracy, a smoother is used to combine vision information and inertial sensor data. It will be shown that the accuracy of the proposed system is better than that of inertial sensor-only estimation as described in [5].

One possible application of the proposed system is the clinical gait assessment of patients, where long walking ranges are desirable. Another application is parameter estimation for pedestrian navigation algorithms [10]. In those algorithms, the relationship between step length and one-step walking time needs to be identified; thus both quantities should be accurately measured. We note that step length measurement using vision only is reported in [11] and a partial combination of vision and inertial sensors is reported in [12]. In this paper, vision and inertial sensor data are tightly coupled in the proposed algorithm. The proposed system can be applied only on flat ground; for example, it cannot be used on stairs.

The paper is organized as follows. In Section 2, an overview of the proposed system is given. In Section 3, how to estimate the position and attitude is explained. In Section 4, an inertial navigation algorithm combining inertial sensor data and vision data is described. A smoothing algorithm [13] consisting of a forward filter and a backward filter is used. In Section 5, experiment results are given. Conclusions are presented in Section 6.

2. Gait Analysis System

The gait analysis system consists of a sensor unit placed on a shoe and fiducial markers located on the floor (Figure 1). The sensor unit consists of a camera (Point Grey Firefly MV FFMV-03MTM) and inertial sensors (Xsens MTi inertial measurement unit).

The fiducial marker used in this paper is similar to ARTag markers [7–9]. Each marker is a nine-by-nine grid of quadrilateral cells with 1.5 mm edges; thus the marker size is 13.5 mm × 13.5 mm. The intra-marker space is 4.5 mm. The marker size is chosen so that there are at least two markers in the camera image. We note that the height of the camera above the markers is about 85 cm and the camera field of view is about 54.5 × 45 mm.

Every cell is either blank or colored black, making the cells bitonal so that either gray-scale or color cameras can be used. Black cells represent 1s, while white cells represent 0s (Figure 2). The border cells are colored black and are one cell wide, and are followed by a closed chain of blank cells. The digital coding system used to identify the marker consists of a five-by-five grid of bitonal cells in the interior of the marker, including the blank central-cross cells, a black origin cell and three blank corner cells. The marker structure and its twelve-bit digital code are illustrated in Figure 2.

The code bits are distributed along the quadrilateral central cross from left to right and from top to bottom (the bit at position 0 in Figure 2(a) is the least significant bit and the bit at position 11 is the most significant bit). The black origin cell and the three blank corner cells are used to locate the least significant bit regardless of the relative orientation between the camera and the marker. For example, the marker ID in Figure 2(b) is 000110101000 in binary, i.e., 424 in decimal.
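As a small illustration (not part of the original paper), the sketch below converts the 12 code bits into a marker ID, assuming the bits have already been read from the code cells and the orientation has already been normalized using the origin cell; the bit indexing follows the position numbering of Figure 2(a).

```python
def marker_id(bits):
    """bits: sequence of 12 ints (0 or 1); index 0 is the least significant bit."""
    assert len(bits) == 12
    return sum(b << i for i, b in enumerate(bits))

# Example from Figure 2(b): binary 000110101000 (MSB first) -> bits 3, 5, 7, 8 set
bits_lsb_first = [0, 0, 0, 1, 0, 1, 0, 1, 1, 0, 0, 0]
print(marker_id(bits_lsb_first))  # 424
```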

Usually, the digital code embedded in a marker contains a marker ID, a checksum and error-correcting codes. However, in our application, markers are seen by the camera in predetermined sequences, not in random sequences as in other virtual reality applications. Thus, in order to increase the recognition speed and the number of markers, a simple coding system is used: neither a checksum nor error-correcting codes are employed. In total, there are 2^12 = 4096 unique markers (marker IDs 0∼4095). With 4096 unique markers, the total length is 16.9 m. If a longer walking range is needed, markers can be repeated so that marker ID 0 reappears after marker ID 4095.

A planar landmark system is generated by N fiducial markers arranged in a four-by-M grid, as in Figure 3. The marker IDs are encoded from zero to N−1.

There are four coordinate frames in this paper. The world coordinate frame is based on the marker plane: x and y axes of the world coordinate frame constitute the plane where markers are placed. The origin of the world coordinate frame coincides with the center of the first marker. Foot positions are expressed in the world coordinate frame.

The navigation coordinate frame is used in the inertial navigation algorithm. The origin of the navigation coordinate frame is the same as that of the world coordinate frame, and its z axis coincides with the local vertical. Unlike a usual navigation coordinate frame (where the x axis points toward magnetic north), the x axis lies in the x–z plane of the world coordinate frame. If the markers are placed on a perfectly level plane, the inclination angle of the floor is zero and the two coordinate frames coincide.

The camera coordinate frame is placed at the pinhole of the camera, where the z axis is perpendicular to the image plane. Three axes of the body coordinate frame coincide with those of inertial sensors. In this paper, it is assumed that the three axes of the camera coordinate frame and the body coordinate frame are the same.

One walking step is illustrated in Figure 4, where normal walking is considered. When a person is walking, a foot touches the ground almost periodically for a short time (usually about 0.1∼0.3 s). Even though a shoe appears to be moving all the time during walking, it is in fact on the ground and stationary for this short time, which is called the zero velocity interval. We also define moving intervals, which refer to intervals when a foot is moving. Thus one normal walking step consists of a zero velocity interval and a moving interval; how a walking step is divided into the two intervals is shown in Figure 4. In medical gait analysis [1], one walking step consists of seven gait phases: loading response, mid-stance, terminal stance, pre-swing, initial swing, mid-swing and terminal swing. The zero velocity interval lies between the loading response phase and the terminal stance phase.

During zero velocity intervals, the position and attitude of a foot are estimated using markers on the floor (see Section 3). During moving intervals, the position and attitude are estimated using inertial sensors (see Section 4).

3. Position and Attitude Estimation Using Markers

When a foot is stationary on the floor, markers in the camera image are recognized through an image processing pipeline. The Canny algorithm, the most popular edge detection algorithm, is applied to detect all contours in the captured image (Figure 5(a,b)). Quadrilateral contours are then detected based on their geometric properties (Figure 5(c)); other contours are eliminated. Two quadrilateral contours whose central coordinates are close to each other are grouped. A line equation is computed for each side of a quadrilateral using the least squares method, so that eight lines are computed for each marker (four for the outer quadrilateral and four for the inner quadrilateral). Based on these lines, the inner quadrilateral is partitioned into 7 × 7 cells to check the interior cells (see Figure 2). The interior cells of grouped contours are converted into a binary code using the following adaptive threshold:

$$\text{threshold} = \frac{1}{640 \times 480}\left(\sum_{i=1}^{480}\left(\sum_{j=1}^{640} I_{ij}\right)\right)$$
where Iij is the gray-level intensity of the (i, j) pixel and the image size is 640 × 480. If the binary code satisfies the constraints in Figure 2, the group is identified as a marker and its marker ID is computed; otherwise the group is discarded. From the origin cell (see Figure 2), the orientation of a marker can be identified. Figure 5(d) shows the identified markers with their IDs and their four corners, where the intersections of the four lines of the outer quadrilateral are defined as the four corners of a marker.
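The following sketch illustrates this front end with OpenCV. It is a hedged approximation of the steps described above, not the authors' implementation; the Canny thresholds, the polygon approximation tolerance and the OpenCV (≥4) return signature are assumptions.

```python
import cv2
import numpy as np

def detect_quadrilaterals(gray):
    """Return contours that approximate to exactly four convex vertices."""
    edges = cv2.Canny(gray, 50, 150)                      # edge map (thresholds are assumptions)
    contours, _ = cv2.findContours(edges, cv2.RETR_LIST,
                                   cv2.CHAIN_APPROX_SIMPLE)
    quads = []
    for c in contours:
        approx = cv2.approxPolyDP(c, 0.02 * cv2.arcLength(c, True), True)
        if len(approx) == 4 and cv2.isContourConvex(approx):
            quads.append(approx.reshape(4, 2))
    return quads

def adaptive_threshold(gray):
    """Mean intensity of the 640 x 480 gray image, as in the equation above."""
    return float(np.mean(gray))

def cell_is_black(cell_region, thr):
    """Classify one interior cell region (2-D array of gray values): 1 (black) or 0 (white)."""
    return float(np.mean(cell_region)) < thr
```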

The position (r̂vision) and attitude (Ĉvision) of the camera with respect to the world coordinate frame are determined using the four corners of a marker (outer corners in Figure 5(d)). Let r^w be a point expressed in the world coordinate frame and r^b be the same point in the body coordinate frame. Then the relationship between r^b and r^w is given by:

$$r^{w} = \hat{C}_{\text{vision}}\, r^{b} + \hat{r}_{\text{vision}}$$

The vector r̂vision represents the position of the camera in the world coordinate frame, and the rotation matrix Ĉvision transforms the body coordinate frame (= camera coordinate frame) into the world coordinate frame.

It is known that the position and attitude can be computed if there are at least four points [14,15]. Thus the position and attitude can be computed if at least one marker, with its four corners, appears in the image. If more than one marker is visible, more points can be used in the computation and the estimation error becomes smaller. We use the algorithm in [14] to estimate the position r̂vision and the attitude Ĉvision. The position information is used directly in Section 4, while only the yaw information in Ĉvision is used there, since pitch and roll can be computed from the inertial sensors.
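As an illustration of this step, the sketch below recovers the camera pose from one marker's four corners. The paper uses the iterative algorithm of [14]; here OpenCV's solvePnP is used as a stand-in, which is an assumption rather than the authors' implementation.

```python
import cv2
import numpy as np

def camera_pose_from_marker(corners_world, corners_image, K, dist):
    """
    corners_world : (4, 3) marker corner coordinates in the world frame
    corners_image : (4, 2) detected corner pixel coordinates
    K, dist       : camera intrinsics and distortion from calibration [22]
    Returns (C_vision, r_vision): rotation body(=camera) -> world and the
    camera position in the world frame, as in r^w = C r^b + r_vision.
    """
    ok, rvec, tvec = cv2.solvePnP(corners_world.astype(np.float64),
                                  corners_image.astype(np.float64), K, dist)
    if not ok:
        return None
    R_wc, _ = cv2.Rodrigues(rvec)          # rotation taking world points into the camera frame
    C_vision = R_wc.T                      # camera -> world rotation
    r_vision = (-R_wc.T @ tvec).ravel()    # camera position in the world frame
    return C_vision, r_vision
```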

4. Position and Attitude Estimation Using Inertial Sensor Data

When a foot is moving, the position and attitude are estimated using an inertial navigation algorithm. To simplify the inertial navigation algorithm, it is assumed that the navigation coordinate frame and the world coordinate frame are the same. This assumption is satisfied if markers are placed on a completely flat floor.

A quaternion q = [q0 q1 q2 q3]^T ∈ R^4 is used to represent the rotation between the navigation coordinate frame and the body coordinate frame [16,17]. The rotation matrix C(q) corresponding to the quaternion q is defined by:

$$C(q) = \begin{bmatrix}
2q_0^2 + 2q_1^2 - 1 & 2q_1 q_2 + 2q_0 q_3 & 2q_1 q_3 - 2q_0 q_2 \\
2q_1 q_2 - 2q_0 q_3 & 2q_0^2 + 2q_2^2 - 1 & 2q_2 q_3 + 2q_0 q_1 \\
2q_1 q_3 + 2q_0 q_2 & 2q_2 q_3 - 2q_0 q_1 & 2q_0^2 + 2q_3^2 - 1
\end{bmatrix}$$
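For reference, a direct transcription of C(q) above (scalar-first quaternion q = [q0, q1, q2, q3]); only the notation is new here.

```python
import numpy as np

def C(q):
    """Rotation matrix C(q) as defined above."""
    q0, q1, q2, q3 = q
    return np.array([
        [2*q0**2 + 2*q1**2 - 1, 2*q1*q2 + 2*q0*q3,     2*q1*q3 - 2*q0*q2],
        [2*q1*q2 - 2*q0*q3,     2*q0**2 + 2*q2**2 - 1, 2*q2*q3 + 2*q0*q1],
        [2*q1*q3 + 2*q0*q2,     2*q2*q3 - 2*q0*q1,     2*q0**2 + 2*q3**2 - 1],
    ])

print(C([1.0, 0.0, 0.0, 0.0]))  # identity quaternion -> identity matrix
```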

Gyroscope output (yg) and accelerometer output (ya) are given by:

$$y_g = \omega + v_g, \qquad y_a = C(q)\,\tilde{g} + a_b + v_a$$
where ω is the body angular rate, g̃ is the gravitational acceleration vector, a_b is the acceleration due to movement, v_g is the gyroscope measurement noise, and v_a is the accelerometer measurement noise. The sensor noises v_g and v_a are assumed to be zero-mean white Gaussian noises satisfying:
$$E[v_g(t)\,v_g(s)^T] = \lambda_g I\,\delta(t-s), \qquad E[v_a(t)\,v_a(s)^T] = \lambda_a I\,\delta(t-s), \qquad E[v_a(s)\,v_g(t)^T] = 0.$$

The sampling frequency of the inertial sensors is 100 Hz and the sampling frequency of the vision data is 30 Hz. The discrete time system is based on the sampling period of the inertial sensors; that is, the sampling period T is T = 0.01 s. For example, yg,k in discrete time means yg(kT) in continuous time. Vision data are not synchronized with the inertial data. When vision data are available at continuous time t, the corresponding discrete time index is computed by k = floor(t/T), where floor(U) is the largest integer not larger than U.
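A minimal sketch (not from the paper) of assigning discrete-time indices to asynchronous vision measurements using the floor rule just described:

```python
import math

T = 0.01  # inertial sampling period [s] (100 Hz)

def vision_sample_index(t):
    """Discrete-time index k assigned to a vision measurement at time t [s]."""
    return math.floor(t / T)

print(vision_sample_index(0.437))  # 43: a vision frame at t = 0.437 s maps to k = 43
```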

4.1. Zero Velocity Interval Detection

Since the velocity of a foot cannot be measured directly, the zero velocity intervals are determined from inertial sensor data and vision data. To determine whether a discrete time k belongs to a zero velocity interval, the following conditions are first tested:

$$\|y_{a,i} - y_{a,i-1}\| < S_a, \quad k - \tfrac{N_a}{2} \le i \le k + \tfrac{N_a}{2}; \qquad
\|y_{g,i}\| < S_g, \quad k - \tfrac{N_g}{2} \le i \le k + \tfrac{N_g}{2} \tag{1}$$
where Sa, Sg, Na and Ng are constant parameters. The first condition requires that differences between consecutive accelerometer values be small; the second condition requires that gyroscope values be small. In addition to Equation (1), there is a further condition for zero velocity determination, which requires that the difference between two consecutive images be small. Recall that vision data are not available at every discrete time k, since the sampling frequency of the camera is lower than that of the inertial sensors. Here, vision data means the image coordinates of marker corners, and the difference between two images is measured by the average movement of these image coordinates.

Given a discrete time k, let k1 ≤ k and k2 > k be discrete time indices at which vision data are available; that is, vision data are available at k1 and k2 and there are no vision data in (k1, k2). Let p_{i,k1} ∈ R^2 and p_{i,k2} ∈ R^2 (1 ≤ i ≤ Nk) be the image coordinates of marker corners that appear in both the k1 and k2 images, where the same index i represents the same corner and Nk is the number of such corners. The small-image-movement condition is defined as follows:

$$\frac{1}{N_k}\sum_{i=1}^{N_k} \left\| p_{i,k_1} - p_{i,k_2} \right\| < S_{\text{image}} \tag{2}$$

In summary, discrete time k is determined to belong to a zero velocity interval if Equations (1) and (2) are satisfied.
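A sketch of this zero velocity test is given below. It transcribes conditions (1) and (2); interpreting Na and Ng as window lengths in samples and applying the per-sample thresholds over those windows are assumptions about details the text leaves open.

```python
import numpy as np

def condition_inertial(ya, yg, k, Na, Ng, Sa, Sg):
    """Condition (1): small accelerometer differences and small gyro norms in
    windows centered on k.  ya, yg are (N, 3) arrays; Na, Ng are taken here as
    window sizes in samples (an assumption about their units)."""
    for i in range(k - Na // 2, k + Na // 2 + 1):
        if np.linalg.norm(ya[i] - ya[i - 1]) >= Sa:
            return False
    for i in range(k - Ng // 2, k + Ng // 2 + 1):
        if np.linalg.norm(yg[i]) >= Sg:
            return False
    return True

def condition_vision(p_k1, p_k2, S_image):
    """Condition (2): average movement of the Nk marker corners visible in both
    images (p_k1, p_k2 are (Nk, 2) pixel coordinates)."""
    return np.mean(np.linalg.norm(p_k1 - p_k2, axis=1)) < S_image

def is_zero_velocity(ya, yg, k, p_k1, p_k2, params):
    Na, Ng, Sa, Sg, S_image = params
    return (condition_inertial(ya, yg, k, Na, Ng, Sa, Sg)
            and condition_vision(p_k1, p_k2, S_image))
```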

4.2. Initialization of an Inertial Navigation Algorithm

We use the position estimate from Section 3 as the initial position estimate. For the initial attitude estimate, we could simply use the attitude estimate Ĉvision from the vision data. Instead, we combine Ĉvision and the accelerometer output ya: pitch and roll are estimated from ya and yaw is estimated from Ĉvision. In this way, a correct attitude estimate is obtained even if the floor is not completely flat, in which case the navigation and world coordinate frames are not exactly the same.

An initial attitude estimate is computed by finding C(q) satisfying the following:

$$y_a = C(q)\,\tilde{g} \tag{3}$$
$$\hat{C}_{\text{vision}}\begin{bmatrix}1\\0\\0\end{bmatrix} = C(q)\begin{bmatrix}1\\0\\0\end{bmatrix} \tag{4}$$

The TRIAD algorithm [18] is used to compute C(q). We note that in the TRIAD algorithm, pitch and roll are estimated using Equation (3) and yaw is estimated using Equation (4); thus Ĉvision only affects the yaw in C(q). We have not used magnetic sensors since large magnetic disturbances may exist indoors, which would degrade the accuracy of the yaw estimation.
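A generic TRIAD sketch matching this construction is shown below; the vector pairs are (g̃, ya) for pitch/roll and (Ĉvision [1 0 0]^T, [1 0 0]^T) for yaw. The exact frame convention of C(q) in the paper is not reproduced here; the function simply returns the rotation that aligns the body-frame observations with the reference vectors.

```python
import numpy as np

def _unit(v):
    return v / np.linalg.norm(v)

def triad(r1, r2, b1, b2):
    """TRIAD [18]: r1, r2 are reference (navigation-frame) vectors, b1, b2 the
    same physical vectors observed in the body frame.  The first pair dominates."""
    t1 = _unit(r1)
    t2 = _unit(np.cross(r1, r2))
    t3 = np.cross(t1, t2)
    s1 = _unit(b1)
    s2 = _unit(np.cross(b1, b2))
    s3 = np.cross(s1, s2)
    # Rotation R such that R @ b ~= r for both vector pairs (body -> navigation).
    return np.column_stack([t1, t2, t3]) @ np.column_stack([s1, s2, s3]).T

# Usage with the paper's pairs (C_vision_x = first column of C_vision):
# R_nb = triad(g_tilde, C_vision_x, y_a, np.array([1.0, 0.0, 0.0]))
```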

4.3. Basic Equations for an Inertial Navigation Algorithm

Let v^n and r^n be the velocity and position of a foot in the navigation coordinate frame. The basic equations for q, v^n and r^n are given by:

$$\dot{q} = \tfrac{1}{2}\,\Omega(\omega)\,q, \qquad \dot{v}^n = C(q)^T a_b, \qquad \dot{r}^n = v^n \tag{5}$$
where Ω(ω) is defined by:
$$\Omega(\omega) \triangleq \begin{bmatrix}
0 & -\omega_1 & -\omega_2 & -\omega_3 \\
\omega_1 & 0 & \omega_3 & -\omega_2 \\
\omega_2 & -\omega_3 & 0 & \omega_1 \\
\omega_3 & \omega_2 & -\omega_1 & 0
\end{bmatrix}$$

In this paper, we estimate q, v^n and r^n by combining a forward filter and a backward filter [19]; an indirect filter is used for both. First, q, v^n and r^n are estimated using Equation (5); let q̂, v̂^n and r̂^n be the corresponding estimated values. Let q̃e, ve and re be the errors of these estimates, defined by:

$$\tilde{q}_e = \hat{q}^* \otimes q, \qquad v_e = v^n - \hat{v}^n, \qquad r_e = r^n - \hat{r}^n$$
where ⊗ represents quaternion multiplication. Assuming that q̃e is small, we can approximate q̃e as follows:
$$\tilde{q}_e \approx \begin{bmatrix} 1 \\ q_e \end{bmatrix}$$
where qe ∈ R^3. In the indirect filter, qe, ve and re are estimated. Let the state x be defined by:
$$x = \begin{bmatrix} q_e \\ v_e \\ r_e \end{bmatrix} \in \mathbb{R}^9$$

The differential equation is given by:

$$\dot{x}(t) = A(t)\,x(t) + w(t) \tag{6}$$
where:
$$A = \begin{bmatrix}
-[y_g \times] & 0 & 0 \\
2\,C(\hat{q})^T [y_a \times] & 0 & 0 \\
0 & I & 0
\end{bmatrix}, \qquad
w = \begin{bmatrix} 0.5\, v_g \\ C(\hat{q})^T v_a \\ 0 \end{bmatrix}.$$

For a vector p ∈ R^3, the skew-symmetric matrix [p×] is defined by:

$$[p\times] = \begin{bmatrix} 0 & -p_3 & p_2 \\ p_3 & 0 & -p_1 \\ -p_2 & p_1 & 0 \end{bmatrix}.$$

Note that E[w(t)w(s)^T] = Q δ(t − s), where:

$$Q = \begin{bmatrix} 0.25\,\lambda_g I & 0 & 0 \\ 0 & \lambda_a I & 0 \\ 0 & 0 & 0 \end{bmatrix}.$$

4.4. Forward Filter

A forward filter processes the data from the beginning to the end. In the forward filter, q̂f,k, v̂f,k and r̂f,k are first computed using a discretized approximation of Equation (5), where the subscript f denotes the forward filter and k the discrete time index. The following equations are used to compute q̂f,k, v̂f,k and r̂f,k:

$$\hat{q}_{f,k+1} = \left(I + \tfrac{3}{4}\Omega_k T - \tfrac{1}{4}\Omega_{k-1} T - \tfrac{1}{6}\|y_{g,k}\|^2 T^2 - \tfrac{1}{24}\Omega_k \Omega_{k-1} T^2 - \tfrac{1}{48}\|y_{g,k}\|^2 \Omega_k T^3\right)\hat{q}_{f,k}$$
$$\hat{v}_{f,k+1} = \hat{v}_{f,k} + 0.5\,T\left(C(\hat{q}_{f,k+1})^T y_{a,k+1} + C(\hat{q}_{f,k})^T y_{a,k}\right) - T\tilde{g}$$
$$\hat{r}_{f,k+1} = \hat{r}_{f,k} + 0.5\,T\left(\hat{v}_{f,k+1} + \hat{v}_{f,k}\right)$$
where Ωk is defined by Ωk = Ω(yg,k). In the computation, q̂f,k is normalized at each step.
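A sketch of one forward propagation step, transcribing the equations above, is given below. The sign pattern of Ω follows the standard scalar-first convention (an assumption, since the signs are not legible in this copy of the paper), and C(q) refers to the rotation-matrix helper sketched earlier in this section.

```python
import numpy as np

def Omega(w):
    """Omega(w) for the quaternion kinematics (scalar-first convention assumed)."""
    w1, w2, w3 = w
    return np.array([[0.0, -w1, -w2, -w3],
                     [w1,  0.0,  w3, -w2],
                     [w2, -w3,  0.0,  w1],
                     [w3,  w2, -w1,  0.0]])

def forward_step(q, v, r, yg_k, yg_km1, ya_k, ya_kp1, T, g_tilde):
    """Propagate (q, v, r) from k to k+1 with gyro samples y_{g,k-1}, y_{g,k}
    and accelerometer samples y_{a,k}, y_{a,k+1}.  Uses C(q) defined above."""
    Ok, Okm1 = Omega(yg_k), Omega(yg_km1)
    w2 = float(np.dot(yg_k, yg_k))                     # ||y_{g,k}||^2
    Phi = (np.eye(4) + 0.75*T*Ok - 0.25*T*Okm1
           - (w2*T**2)/6.0*np.eye(4)
           - (T**2)/24.0*(Ok @ Okm1)
           - (w2*T**3)/48.0*Ok)
    q_next = Phi @ q
    q_next = q_next / np.linalg.norm(q_next)           # normalize each step
    v_next = v + 0.5*T*(C(q_next).T @ ya_kp1 + C(q).T @ ya_k) - T*g_tilde
    r_next = r + 0.5*T*(v_next + v)
    return q_next, v_next, r_next
```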

The estimation errors in q̂f,k, v̂f,k and r̂f,k are estimated using a forward Kalman filter, where the process model is given in Equation (6). During zero velocity intervals, the fact that vn,k = 0 is used in the measurement equation. If vision data are also available, r̂vision,k and Ĉvision are used in the measurement update. For example, the measurement equation using vn,k = 0 and r̂vision,k is given by:

$$z_k = \begin{bmatrix} 0 - \hat{v}_{f,k} \\ \hat{r}_{\text{vision},k} - \hat{r}_{f,k} \end{bmatrix}
= \begin{bmatrix} 0 & I & 0 \\ 0 & 0 & I \end{bmatrix} x_k + v_k.$$

The measurement noise vk is assumed to satisfy:

$$E[v_k v_k^T] = \begin{bmatrix} r_v I & 0 \\ 0 & r_r I \end{bmatrix}$$
where rv and rr are scalar constants. As in the initialization process, only the yaw information in Ĉvision is used. We use the same technique as in [20] for the yaw update: in [20], magnetic sensors are used for yaw updating, and replacing the magnetic sensor outputs with Ĉvision [1 0 0]^T yields a yaw updating algorithm.
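A minimal sketch of this measurement update for the 9-state error vector is given below; the yaw update from Ĉvision is omitted, since it follows the separate technique of [20].

```python
import numpy as np

def zupt_vision_update(x, P, v_f, r_f, r_vision, r_v, r_r):
    """One Kalman measurement update for x = [q_e, v_e, r_e] with the
    measurement z = [0 - v_f, r_vision - r_f] and H = [0 I 0; 0 0 I]."""
    I3, Z3 = np.eye(3), np.zeros((3, 3))
    H = np.block([[Z3, I3, Z3],
                  [Z3, Z3, I3]])
    R = np.block([[r_v*I3, Z3],
                  [Z3, r_r*I3]])
    z = np.concatenate([-v_f, r_vision - r_f])
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)                     # Kalman gain
    x_new = x + K @ (z - H @ x)
    P_new = (np.eye(9) - K @ H) @ P
    return x_new, P_new
```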

The error covariance of the forward Kalman filter is denoted by Pf,k :

$$P_{f,k} = E\!\left[(x_{f,k} - \hat{x}_{f,k})(x_{f,k} - \hat{x}_{f,k})^T\right] \in \mathbb{R}^{9\times 9}.$$

4.5. Backward Filter

A backward filter processes the data from the end to the beginning. Similarly to the forward filter, q̂b,k, v̂b,k and r̂b,k are first computed (the subscript b denotes the backward filter):

$$\hat{q}_{b,k} = \left(I - \tfrac{3}{4}\Omega_k T + \tfrac{1}{4}\Omega_{k+1} T - \tfrac{1}{6}\|y_{g,k}\|^2 T^2 - \tfrac{1}{24}\Omega_k \Omega_{k+1} T^2 + \tfrac{1}{48}\|y_{g,k}\|^2 \Omega_k T^3\right)\hat{q}_{b,k+1}$$
$$\hat{v}_{b,k} = \hat{v}_{b,k+1} + 0.5\,T\left(C(\hat{q}_{b,k+1})^T y_{a,k+1} + C(\hat{q}_{b,k})^T y_{a,k}\right) - T\tilde{g}$$
$$\hat{r}_{b,k} = \hat{r}_{b,k+1} + 0.5\,T\left(\hat{v}_{b,k+1} + \hat{v}_{b,k}\right)$$

Their errors are estimated in the backward filter, where the state space model is given by:

$$\dot{x}_b = -A\,x_b - w$$
where the state definition of xb is the same as that of xf.

The measurement equations are the same as those in the forward filter. The error covariance of the backward Kalman filter is denoted by Pb,k:

$$P_{b,k} = E\!\left[(x_{b,k} - \hat{x}_{b,k})(x_{b,k} - \hat{x}_{b,k})^T\right] \in \mathbb{R}^{9\times 9}$$

4.6. Smoother

The smoother combines the forward filter output and the backward filter output. Due to the indirect filter structure, it is difficult to derive an optimal smoother. In this paper, we use a suboptimal smoother, which uses only the block-diagonal submatrices of Pf,k and Pb,k.

The block-diagonal submatrices of Pf,k and Pb,k are given by:

$$P_{f,k} = \begin{bmatrix} P_{qf,k} & * & * \\ * & P_{vf,k} & * \\ * & * & P_{rf,k} \end{bmatrix}, \qquad
P_{b,k} = \begin{bmatrix} P_{qb,k} & * & * \\ * & P_{vb,k} & * \\ * & * & P_{rb,k} \end{bmatrix}$$
where the * blocks denote off-diagonal submatrices that are not used in the smoother. The smoothed quaternion q̂s,k is computed using the quaternion averaging algorithm in [21], by solving the following optimization problem:
$$\hat{q}_{s,k} = \arg\min_{\|q\|=1}\; q^T\!\left(\Xi(q_{f,k})^T P_{qf,k}^{-1}\,\Xi(q_{f,k}) + \Xi(q_{b,k})^T P_{qb,k}^{-1}\,\Xi(q_{b,k})\right) q$$
where:
$$\Xi(q) = \begin{bmatrix} -q_1 & q_0 & q_3 & -q_2 \\ -q_2 & -q_3 & q_0 & q_1 \\ -q_3 & q_2 & -q_1 & q_0 \end{bmatrix}$$

The smoothed velocity and position estimates (v̂s,k and r̂s,k) are given by:

$$\hat{v}_{s,k} = \left(P_{vf,k}^{-1} + P_{vb,k}^{-1}\right)^{-1}\left(P_{vf,k}^{-1}\hat{v}_{f,k} + P_{vb,k}^{-1}\hat{v}_{b,k}\right), \qquad
\hat{r}_{s,k} = \left(P_{rf,k}^{-1} + P_{rb,k}^{-1}\right)^{-1}\left(P_{rf,k}^{-1}\hat{r}_{f,k} + P_{rb,k}^{-1}\hat{r}_{b,k}\right).$$
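A sketch of the smoother combination step is given below: the smoothed quaternion is the unit vector minimizing the quadratic form above (the eigenvector of the 4 × 4 matrix associated with its smallest eigenvalue), and the velocity and position are information-weighted averages of the forward and backward estimates. The sign convention of Ξ is an assumption.

```python
import numpy as np

def Xi(q):
    q0, q1, q2, q3 = q
    return np.array([[-q1,  q0,  q3, -q2],
                     [-q2, -q3,  q0,  q1],
                     [-q3,  q2, -q1,  q0]])

def smooth_quaternion(q_f, P_qf, q_b, P_qb):
    """Minimize q^T M q over unit quaternions: smallest-eigenvalue eigenvector of M."""
    M = (Xi(q_f).T @ np.linalg.inv(P_qf) @ Xi(q_f)
         + Xi(q_b).T @ np.linalg.inv(P_qb) @ Xi(q_b))
    eigvals, eigvecs = np.linalg.eigh(M)    # eigenvalues in ascending order
    return eigvecs[:, 0]

def smooth_linear(x_f, P_f, x_b, P_b):
    """Information-weighted average used for both velocity and position."""
    Wf, Wb = np.linalg.inv(P_f), np.linalg.inv(P_b)
    return np.linalg.solve(Wf + Wb, Wf @ x_f + Wb @ x_b)
```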

5. Experimental Results

In this section, experimental results verifying the proposed method are given. The camera is calibrated to obtain its intrinsic parameters [22], and the inertial sensor unit is calibrated using the algorithm in [23]. The sampling rates of the camera and the inertial sensors are 30 fps and 100 Hz, respectively. The computation is done off-line using Matlab. The thresholds used for zero velocity detection are as follows:

$$N_a = 0.1, \quad S_a = 20, \quad N_g = 0.05, \quad S_g = 20, \quad S_{\text{image}} = 30$$
These thresholds were determined by trial and error: various combinations were tried, and the above values were chosen because they identify zero velocity intervals well. In this process, true zero velocity intervals were identified manually by inspecting the inertial sensor data.

The initial values used for the indirect Kalman filter in the experiment are given as follows:

$$P_{qf,0} = P_{qb,0} = 0.001\,I, \quad P_{vf,0} = P_{vb,0} = 0.001\,I, \quad P_{rf,0} = P_{rb,0} = 0.008\,I, \quad r_v = 0.004, \quad r_r = 0.008, \quad R_{\text{yaw}} = 0.01\,I$$
where Pqf,0, Pvf,0 and Prf,0 are the initial error covariance matrices of the forward Kalman filter, Pqb,0, Pvb,0 and Prb,0 are those of the backward Kalman filter, and Ryaw is the measurement noise covariance used in the yaw update.

5.1. Table Experiment: Comparison with the Digitizer Output

A shoe is moved back and forth several times between two positions A and B along the Yw axis direction, as illustrated in Figure 6. The movement is tracked using a MicroScribe G2X digitizer, whose output is taken as the true value. The stated accuracy of the G2X is up to 0.23 mm within a 1.27 m spherical workspace.

Since the movement is mainly along the Yw axis, only the Yw axis velocity and position are plotted in Figure 7. The first graph of Figure 7 shows the velocity estimated by the forward filter, the backward filter and the smoother, while the second graph shows the position estimates. The third graph shows the zero velocity detection results, where the value 1 indicates that the corresponding discrete time belongs to a zero velocity interval. In the second graph, assuming the smoother estimates are accurate, we can see that the error of the forward filter increases with the moving time, and that the error of the backward filter increases as time runs backward from the end of the data. In the final graph of Figure 7, the Yw axis positions estimated by the smoother, by vision, and by the digitizer are given. The difference between the digitizer data and the smoothed estimates is 4.8 ± 9.1 mm (mean ± standard deviation).

5.2. Walking Experimental Results

A person wearing the shoe walked along a planar marker system path (Figure 1). The length of the path is 33.8 meters. We note that this length can be easily extended by using more markers. Euler angles of a shoe are given in Figure 8.

Note that quaternions are used to represent attitude; the Euler angles shown are computed from the quaternion estimates for visualization. In the attitude estimation, there are almost no differences between the forward and backward filter estimates. This is because attitude errors are reset almost periodically during zero velocity intervals. Velocity and position estimation results are given in Figures 9 and 10.

Note that there are large differences between the forward and backward filters, probably due to inertial sensor errors (bias and scale factor). The velocity and position estimation errors are reduced by using the smoother.

The position estimates from vision are compared with the smoothed position estimates in Figure 11. Note that the position estimates from vision are mostly available only during zero velocity intervals; during moving intervals, marker recognition becomes difficult due to motion blur.

From Figure 10, the step length can be computed; one walking step consists of a zero velocity interval and a moving interval (see Figure 4). The accuracy of the step length estimation is evaluated by one-step experiments using a ruler as a reference (see Figure 12). A marker pen is attached to the shoe so that its tip touches the ruler when the shoe is on the ground. The step length measured on the ruler (the distance between two marked dots) is considered the true value.

The results of 20 one-step experiments are listed in Table 1. The error between the ruler measurement and the smoothed estimation is in the range 0.5∼4.1 mm. The mean, standard deviation and maximum errors are given in Table 2.

The error between the ruler measurement and the smoothed estimation is 1.99 ± 1.25 mm (mean ± standard deviation). This is smaller than the value in Section 5.1 (4.8 ± 9.1 mm) because the step length is computed from estimates during zero velocity intervals, when positions are corrected using vision and are therefore more accurate than those during moving intervals. The step length RMS error in [5] is 34.1 ± 2.1 mm, where an optical tracker output is used as the true value. The proposed system is more accurate since markers are used to compensate position and attitude errors.

6. Conclusions

A gait analysis system combining a reliable fiducial marker system on the floor and an inertial sensor unit was proposed. The system can track foot motion and measure step length on flat ground. The position errors tend to grow during the moving intervals and shrink during the zero velocity intervals, since vision data are used to reduce position and attitude errors. The step length error is 1.99 ± 1.25 mm (mean ± standard deviation), which is smaller than that of an existing inertial-sensor-only system. Commercial optical trackers such as the Vicon system are more accurate than the proposed system (usually at the sub-millimeter level), but they have rather limited walking ranges; in contrast, the walking range of the proposed system can be easily extended.

Acknowledgments

This work was supported by National Research Foundation of Korea Grant funded by the Korean Government (No. 2011-0021978).

References

1. Whittle, M.W. Gait Analysis: An Introduction, 4th ed.; Elsevier: Amsterdam, The Netherlands, 2007; p. 255.
2. Wang, F.; Stone, E.; Dai, W.; Skubic, M.; Keller, J. Gait Analysis and Validation Using Voxel Data. Proceedings of the 31st Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC 2009), Minneapolis, MN, USA, 2–6 September 2009; pp. 6127–6130.
3. Liu, T.; Inoue, Y.; Shibata, K. Development of a wearable sensor system for quantitative gait analysis. Measurement 2009, 42, 978–988.
4. Hyde, R.A.; Ketteringham, L.P.; Neild, S.A.; Jones, R.J.S. Estimation of upper-limb orientation based on accelerometer and gyroscope measurements. IEEE Trans. Biomed. Eng. 2008, 55, 746–754.
5. Martin Schepers, H.; van Asseldonk, E.H.; Baten, C.T.; Veltink, P.H. Ambulatory estimation of foot placement during walking using inertial sensors. J. Biomech. 2010, 43, 3138–3143.
6. Sabatini, A.M. Quaternion-based strapdown integration method for applications of inertial sensing to gait analysis. Med. Biol. Eng. Comput. 2005, 43, 94–101.
7. Fiala, M. ARTag Revision 1, a Fiducial Marker System Using Digital Techniques; National Research Council Publication 47419/ERB-1117: Ottawa, ON, Canada, 2004.
8. Fiala, M. ARTag, a Fiducial Marker System Using Digital Techniques. Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR 2005), San Diego, CA, USA, 20–25 June 2005; Volume 2, pp. 590–596.
9. Fiala, M. Comparing ARTag and ARToolKit Plus Fiducial Marker Systems. Proceedings of the IEEE International Workshop on Haptic Audio Visual Environments and their Applications (HAVE 2005), Ottawa, ON, Canada, 1–2 October 2005; pp. 148–153.
10. Shin, S.H.; Park, C.G.; Kim, J.W.; Hong, H.S.; Lee, J.M. Adaptive Step Length Estimation Algorithm Using Low-Cost MEMS Inertial Sensors. Proceedings of the IEEE Sensors Applications Symposium (SAS 2007), San Diego, CA, USA, 6–8 February 2007.
11. Do, T.-N.; Suh, Y.-S. Foot Motion Tracking Using Vision. Proceedings of the 2011 IEEE 54th International Midwest Symposium on Circuits and Systems (MWSCAS), Seoul, Korea, 7–10 August 2011; pp. 1–4.
12. Park, S.K.; Suh, Y.S.; Do, T.N. The Pedestrian Navigation System Using Vision and Inertial Sensors. Proceedings of the ICROS-SICE International Joint Conference 2009, Fukuoka, Japan, 18–21 August 2009; pp. 3970–3974.
13. Suh, Y.S.; Do, T.N.; Ro, Y.S.; Kang, H.J. A Smoother for Attitude Estimation Using Inertial and Magnetic Sensors. Proceedings of the 2010 IEEE Sensors Conference, Kona, HI, USA, 1–4 November 2010; pp. 743–746.
14. Lu, C.-P.; Hager, G.D.; Mjolsness, E. Fast and globally convergent pose estimation from video images. IEEE Trans. Pattern Anal. Mach. Intell. 2000, 22, 610–622.
15. Fischler, M.A.; Bolles, R.C. Random sample consensus: A paradigm for model fitting with applications to image analysis and automated cartography. Commun. ACM 1981, 24, 381–395.
16. Phillips, W.F.; Hailey, C.E.; Gebert, G.A. Review of attitude representations used for aircraft kinematics. J. Aircr. 2001, 38, 718–737.
17. Kuipers, J.B. Quaternions and Rotation Sequences: A Primer with Applications to Orbits, Aerospace, and Virtual Reality; Princeton University Press: Princeton, NJ, USA, 1999.
18. Shuster, M.D.; Oh, S.D. Three-axis attitude determination from vector observations. J. Guid. Control 1981, 4, 70–77.
19. Brown, R.G.; Hwang, P.Y.C. Introduction to Random Signals and Applied Kalman Filtering, 3rd ed.; John Wiley & Sons: New York, NY, USA, 1997.
20. Suh, Y.S. Orientation estimation using a quaternion-based indirect Kalman filter with adaptive estimation of external acceleration. IEEE Trans. Instrum. Meas. 2010, 59, 3296–3305.
21. Gramkow, C. On averaging rotations. Int. J. Comput. Vis. 2001, 42, 7–16.
22. Zhang, Z. A flexible new technique for camera calibration. IEEE Trans. Pattern Anal. Mach. Intell. 2000, 22, 1330–1334.
23. Won, S.P.; Golnaraghi, F. A triaxial accelerometer calibration method using a mathematical model. IEEE Trans. Instrum. Meas. 2010, 59, 2144–2153.
Figure 1. Gait Analysis System.
Figure 2. Fiducial Marker Structure (a) and a marker ID sample (b).
Figure 3. Planar Landmark System.
Figure 4. A Zero velocity interval and a moving interval of a walking step.
Figure 5. Image processing steps: (a) Original grayscale image; (b) Detected contours using the Canny edge detector; (c) Quadrilateral contours; (d) Marker IDs and four corners.
Figure 6. Table experiment (Yw axis moving experiment).
Figure 7. Results of a table experiment (Yw axis moving experiment).
Figure 8. Floor experiment (Euler angles estimation).
Figure 9. Floor experiment (velocity estimation).
Figure 10. Floor experiment (position estimation).
Figure 11. Floor experiment (Comparison of smoother estimated and vision estimated positions).
Figure 12. One walking step length measurement.
Table 1. Step length estimation (20 steps).

                                 Step #1   Step #2   Step #3   ...   Step #18   Step #19   Step #20
Measurement by a ruler [mm]      792       804       841       ...   779        800        814
Estimation by a smoother [mm]    793.6     802.2     839.6     ...   778.5      797.8      810.0
Error [mm]                       1.6       1.8       1.4       ...   0.5        2.2        4.0
Table 2. Step length estimation error.

Mean of step length error [mm]                  1.99
Standard deviation of step length error [mm]    1.25
Maximum value of step length error [mm]         4.10
