Article

The Diverse Gait Dataset: Gait Segmentation Using Inertial Sensors for Pedestrian Localization with Different Genders, Heights and Walking Speeds

1 Shanghai Advanced Research Institute, Chinese Academy of Sciences, Shanghai 201210, China
2 School of Electronic, Electrical and Communication Engineering, University of Chinese Academy of Sciences, Beijing 100049, China
* Author to whom correspondence should be addressed.
Sensors 2022, 22(4), 1678; https://doi.org/10.3390/s22041678
Submission received: 8 January 2022 / Revised: 6 February 2022 / Accepted: 17 February 2022 / Published: 21 February 2022
(This article belongs to the Special Issue Instrument and Measurement Based on Sensing Technology in China)

Abstract:
Stride length estimation is one of the most crucial aspects of Pedestrian Dead Reckoning (PDR). Due to the measurement noise of inertial sensors, the individual variance of pedestrians, and the uncertainty of pedestrian walking, stride length estimates carry substantial error, which causes accumulated deviation in PDR. With the help of multi-gait analysis, which decomposes strides in time and space with greater detail and accuracy, a novel stride estimation scheme could improve the performance of PDR across different users. This paper presents a diverse stride gait dataset collected with inertial sensors, containing foot movement data from people of different genders, heights, and walking speeds. The dataset contains data for 4690 walking strides and 19,083 gait labels. Based on the dataset, we propose a threshold-independent stride segmentation algorithm called SDATW and achieve an F-measure of 0.835. We also provide detailed results of recognizing four gaits under different walking speeds, demonstrating the utility of our dataset for training stride segmentation and gait detection algorithms.

1. Introduction

Pedestrian localization is commonly used in maneuvers, fire drills, and mine rescues. Unlike GPS, optical, audio, and other sensor data, inertial measurements are infrastructure-independent, allowing them to be used for terminal localization in complex contexts [1,2]. As a result of the development of Micro-Electro-Mechanical Systems (MEMS), Inertial Measurement Units (IMUs) have become lightweight, low-power, low-cost, and non-intrusive to users, which are suitable characteristics for clinical and residential applications. Thus, IMU-based Pedestrian Dead Reckoning (PDR) has become popular and received considerable attention [3,4,5,6,7].
Stride length estimation, direction estimation, and position update are three key processes of PDR [8,9,10]. One of the most basic components is estimating stride length [11,12]. There are mainly two classes of approaches: the first kind of method is based on the integration of the accelerations, and the other techniques utilize various models to predict the stride length. The models can be further divided according to whether they are based on physical or statistical models. The double integration of acceleration in the forward direction is the most direct method for estimating stride length because it needs no assumption or user customization. However, it is not easy to obtain the forward acceleration from IMU measurements since each part of the body moves in different directions during walking. Biomechanical models for step length estimating, such as inverted pendulum models, are defined mainly by simplifying and approximating the mechanical movements of the human body. Nevertheless, due to the significant variability of pedestrians, these models need to be calibrated for each user. Mechanical models are also impacted by the non-negligible bias and noise of the IMU, which makes the distance error grow cubically over time or distance [13]. In order to reduce the cumulative error, Zero-Velocity-Update (ZUPT) was introduced to reset the integral computations for distance when the foot was recognized as remaining stationary on the ground [14,15,16]. Thanks to the periodicity of human gait, various statistical variables show a clear correlation with step length and can therefore work as features or predictors in statistical models. This type of method needs to create an empirical regression model based on the movement features of the pedestrian’s pelvis, feet, or legs and then fit the model parameters by utilizing the existing dataset to estimate the step length for walking. Li’s model demonstrates a linear relationship between step length and walking frequency [17]. 
Weinberg’s model utilizes the difference between the maximum and the minimum in vertical acceleration data within a step [18]. Kim’s model is based only on the mean acceleration within a step [19]. Scarlett’s model uses the minimum, maximum and average acceleration to estimate step length [20]. Since the variability of individuals’ walking habits stems from gender, height, age, and walking speed, these empirical models require parameter customization for individual pedestrians. If the predicted data differ much from the training data distribution, then the accuracy of the statistical models will be low. In recent years, neural networks have been developed as a promising trend in the pedestrian localization area and have also been used for step length prediction [21,22]. They achieve better prediction accuracy than empirical models at the cost of larger-scale datasets and much greater computational consumption. With limited training data, neural networks are prone to overfitting, and their requirement for large amounts of computational resources prevents them from being used in wearable devices and embedded systems.
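As a concrete illustration of such empirical models, Weinberg's relation predicts step length from the range of the vertical acceleration within a step. A minimal sketch is shown below; the calibration constant `k` is a hypothetical default that, as the text notes, would need per-user fitting:

```python
import numpy as np

def weinberg_step_length(acc_vertical, k=0.5):
    """Weinberg's empirical model: step length is proportional to the
    fourth root of the vertical-acceleration range within one step.
    k is a per-user calibration constant (hypothetical default here)."""
    a_max = float(np.max(acc_vertical))
    a_min = float(np.min(acc_vertical))
    return k * (a_max - a_min) ** 0.25
```

Because only the acceleration range enters the formula, the model is cheap to compute but, as discussed above, sensitive to individual differences absorbed into `k`.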
To summarize, the preceding stride length estimation approaches all treat a single stride as a whole processing item rather than dealing with more detailed decomposition and analysis. Firstly, a segment of signal corresponding to a stride must be detected, and then selected features need to be calculated and input into a pretrained model to derive a prediction of movement distance. However, stride length estimation is a case of a black box problem, and researchers are currently unable to investigate the effect of individual differences on stride length estimation. In the field of kinesiology, IMU-based mobile gait analysis enables a continuous and detailed insight into the motor performance of foot movements in multiple gait patterns, under more natural and realistic conditions compared to laboratory settings [23]. With the basis of estimating temporal and spatial parameters within a stride, gait analysis is used not only for the detection of stroke and Parkinson’s symptoms, but also for posture stability in the rehabilitation phase of treatment after injury [24,25,26,27,28]. Inspired by this, we think that it is possible to improve the accuracy of stride length estimation on the basis of gait analysis. By accurately dividing a stride into several gait segments, stride length estimation can be transformed into a fusion of several predictions from different gait analyses. However, there is still a lack of localization studies based on the analysis of gait signals, or on combining gait information with step length estimation, for PDR applications.
IMU-based gait recognition can be investigated on the basis of precise labels characterizing the semantic information corresponding to a specific segment of IMU data, and various datasets have been published [29]. These datasets provide cyclic data collected on different parts of the body by various kinds of sensors and are briefly described in Table 1. The Digital Biobank [8,10] and Sensor-based Gait Analysis Validation Data [30] mainly collected data from healthy elderly controls, PD patients, and geriatric patients by using an optical motion capture system to provide reference data, but the system must be built in a laboratory and has a limited scope. MAREA [31] was used to detect key movements in the gait, such as the heel touching the bottom and the toe off the ground. It provided gait data from 20 subjects, containing data of pressure sensors located on the soles of the feet and data of IMU sensors recording ankle and wrist movement. Each stride was divided into two phases, where the foot swings in the air and makes contact with the ground.
MAREA also provided foot movement data on a treadmill with changing speeds and inclination angles to help evaluate gait event detection, since the lower body movement kinetics of walking on a treadmill are similar to those of walking on the ground. However, the work in [34] found that the shear forces caused by the belts sliding over the treadmill significantly reduced propulsive force during late stance, so there exist considerable differences in gait variables between overground and treadmill walking. The Smart Annotation Cyclic Activities Dataset [32] provides foot movement recordings with camera frames as reference information. It provides stride borders in the foot movement sequence, but gait phases are not mentioned. To summarize, thanks to the existing stride databases and gait datasets mentioned above, a large number of investigations into the biomechanics of foot movements have been made possible. However, there is still a lack of stride gait datasets providing both stride borders and gait phases from healthy subjects walking overground with diverse parameters, such as gender, height, and walking speed, which significantly affect gait variables in cyclic movement. Among the datasets that collect data from healthy pedestrians, the number of healthy subjects in our dataset is much greater than that of the Sensor-based Gait Analysis Validation Data [30] and is comparable to the other two datasets [31,32]. Compared with MAREA [31], we offer more gait labels and foot movement data that are closer to real application scenarios. Our dataset shares the same type of reference information as the Smart Annotation Cyclic Activities Dataset [32], but we provide reference data with a higher sampling frequency and more detailed gait labels.
One of the initial steps in most wearable gait analysis systems is to segment strides from continuous sensor data, which is a crucial component of the underlying signal processing pipeline. Various methods have been successfully applied to stride segmentation. Weinberg suggested recognizing stride borders by identifying peaks or valleys in the data sequence as significant and typical gait events, and dynamic threshold schemes were later added to improve accuracy [18,35,36]. Among the existing IMU-based adaptive stride segmentation methods, [37] proposed a technique that utilized an autocorrelation procedure to fine-tune the threshold for stride boundaries. However, unbiased autocorrelation estimates rely heavily on extracting meaningful information from the signal’s autocorrelation coefficients, necessitating a larger number of observed samples and thus lagging parameter updates. The number of samples needed for analysis was said to be at least 400, while the sampling rate of current IMU modules is 50 to 200 Hz, implying a lag of at least 2 s. As a result, parameter adjustments would be significantly delayed with respect to the recognized stride.
In the study of [38], an adaptive procedure was added to the finite state machine algorithm in which six transition rules are empirically set for separating a full stride into six stages. The assumption behind these fixed state transfer criteria is that all strides are made up of a combination of six complete phases in sequence. The signal’s amplitude threshold, derivative threshold, and variance threshold were adjusted in the state machine based on sixty percent of the average of the local maxima and minima detected in the last three strides. If fewer than three strides had been recognized, the preset thresholds were used. This method of parameter updating was based on the authors’ assumption that the signal distribution pattern of pedestrian walking follows a similar conditional pattern that transforms between distinct states within a certain range of speed fluctuations. However, the assumption of the cycle of heel-strike, foot flat, mid-stance, heel-off, toe-off, and mid-swing in the gait pattern limits the analysis to that particular forward walking gait. Although it is the most frequent phase sequence, it must be considered that some strides may omit one or more phases due to the uncertainty of pedestrian foot movements.
In the research of [39], a parameter adjustment mechanism was proposed as an addition to the Hidden Markov Model to improve the adaptability of stride segmentation. Rather than analyzing the statistical characteristics of acceleration or gyroscope data, the system begins by calculating the Impulse Response Function (IRF) for the gyroscope signals at different stages to reflect the periodic impedance of the force at the ankle joints of humans and robots. It then compares the inverse of the Euclidean distance with the IRF of each stage at each moment. Similar to [38], the system assumed that the gait stages are fixed as mid-stance, terminal stance, swing, and loading response. The IRF of the current signal should be close to one of the four gait templates, which is taken as the current estimated stride phase. The authors did not provide test results for this method on different subjects, so the robustness of the threshold adaptation for stride segmentation or gait recognition is unknown.
The research in [13] offered a positive correlation between the amplitude of the peak acceleration point within the heel strike phase and the threshold required to identify the zero-velocity phase and developed a regression model. The model was used to estimate the threshold value for the forthcoming zero-velocity phase based on the magnitude of the newly formed acceleration peak point. The utility of adaptive thresholds for zero-velocity phase detection, i.e., improving stride recognition accuracy, was only verified on a 75 m test-path with different walking and running speeds, while the cross-individual adaptation was not mentioned in the validation experiments of this approach. We infer that the empirical model parameters might need customization to each individual due to the influence of gender, height, and age on pedestrian gait patterns.
In summary, current adaptive stride segmentation methods are unable to recognize strides robustly in cross-individual and wide-speed-domain scenarios, due to the lag of the regulation mechanism, the fixed number of gait phases and complex transition conditions, and the customization of threshold estimation models for different individuals.
Alternatively, template matching algorithms such as Dynamic Time Warping (DTW) were introduced for stride segmentation. DTW methods calculate the similarity between the input sequence and the template, which makes it well adapted to varying segment lengths and trivial distortion [8,40,41]. The template matching method, like standard DTW, can still distinguish the strides from signal series, referencing a standard stride template. The SDATW introduced in this research is trained on such a diverse gait dataset to maintain accuracy and consistency in cross-individual and broad speed domain scenarios.
From the perspective of frequency signal decomposition, wavelet analysis provided insight into determining stride borders, and it is suggested that better performance could be achieved in the frequency domain than in the time domain [42,43,44,45]. Another kind of method employs Hidden Markov Models (HMM) [46,47,48], residual neural networks [29], etc. These methods could achieve better detection accuracy in stride segmentation or gait recognition assignments with the assurance that large-scale training data should be offered and massive computing resources supplied. However, this requirement runs counter to the low-power-consumption nature of wearable device platforms.
The contribution of this paper consists of the following four parts. Firstly, we present a diverse gait dataset including IMU data and camera recordings of foot movement, featuring wide coverage of gender, height and walking speed. Secondly, we propose to divide a normal walking stride into four gait phases on the basis of biomechanics and annotate the whole dataset with four different gait labels. Thirdly, we offer a threshold-independent stride segmentation algorithm that requires no customization and performs with adequate accuracy and robustness at different speeds. Finally, we evaluate gait recognition on the dataset to provide a baseline for accurate gait analysis.

2. Materials and Methods

2.1. Subjects and Measurement Protocols

A total of 22 healthy volunteers (13 males, 9 females, age 32.5 ± 7.5 years) participated in the study and were divided into different groups according to gender and height information, as shown in Table 2. Each subject walked at three self-selected speeds along an indoor corridor of 46 m. The data collection area was free of obstacles and long enough, so subjects were asked to walk as usual while approximately keeping straight along the brick line between the floor tiles. We added this restriction with the consideration that using the brick line as an auxiliary beacon can help subjects maintain a straight walking direction. This is important not only so that subjects can focus their attention on controlling the walking speed evenly and steadily, but also so that the videographer can capture the full process of foot movement on camera. Although this limitation probably prevented us from fully simulating the walking state of a pedestrian in real application scenarios, we extended the diversity of the acquisition samples with three different walking speed levels. We let the subjects choose their own walking speed for three different daily scenarios: normal walking in the street; slower than normal, such as thinking while walking; and faster than normal, such as weaving through a crowd at a brisk pace to catch an upcoming bus. Furthermore, we chose a corridor that was 46 m long. This distance ensures that we collect a sufficient amount of data on pedestrians walking on a daily basis, since subjects are able to quickly adapt to this experimental environment built on a realistic scenario. By collecting walking data from three different speed levels, with two trials for each speed level, we believe such an acquisition scheme is able to capture the walking characteristics of the subject and essentially encompasses the regular and extreme states of that individual’s foot movement in daily life.
By pooling pedestrian data across gender and height attributes, we can obtain a dataset that incorporates individual pedestrian differences and a wide range of stride gait states; hence, we call it the Diverse Gait Dataset.

2.2. Sensor System and Setup

Each subject wore an Xsens MTw IMU, which consists of a 3-axis accelerometer (±160 m/s²), a 3-axis gyroscope (±1200 deg/s) and a 3-axis magnetometer. The measurement module was attached to the instep on the right side of the shoe with elastic bands and was connected via Bluetooth to the data acquisition software “MT Manager”. Figure 1 displays the position and orientation of the IMU in each trial of data collection. The accelerometer X and Y axes pointed in the forward and upward directions, respectively, and the Z axis pointed in the left direction. We used three different cellphones in different collection batches: an iPhone X (60 fps, 720p), a Huawei Mate 30 Pro (120 fps, 1080p) and a OnePlus 7 Pro (240 fps, 1080p). Thanks to the smaller size and weight of a phone compared to traditional cameras, we were able to attach the phone to a tennis racket, needing only to make sure that the fixation was secure. Cellphone video recording is not only able to overcome the impact of unstable lighting in the field, but also, with its powerful anti-shake function and strong focus-tracking ability, it can lock onto and track the foot as a target once we set the focus point on the camera. With the above guarantees, we could use the rear camera on the phone to record the foot movements through the grid of the tennis racket strings with no obstruction. While the pedestrian was walking, the videographer needed to adjust the distance to the subject’s foot and the angle of the racket so that all the foot movements were recorded in the view of the camera.

2.3. Sensor Signals and Time Synchronization

Each set of raw data consists of two parts: inertial motion data recorded by the IMU and foot motion images recorded by the phone camera. Since the IMU does not have a triggering mechanism for data acquisition, we offer a scheme to synchronize the IMU data and image frames after data collection. At the beginning of each trial of data acquisition, we asked the subjects to follow a pre-designed action: before starting to walk, and again after walking to the end point and standing firmly, the foot wearing the IMU should stamp the ground vertically and swiftly [49]. This action is distinctly different from the foot movement during walking and can be reliably and accurately identified in both the IMU data and the video, as shown in Figure 2. Based on the timestamps of the stamp actions, we can calculate the time offset by which the IMU data lag behind the video stream. Then, we manually delayed the timestamps of the image frames by the offset value. It can be verified in the experiments that the time differences between the last three stamps and the corresponding image frames are almost zero. With this guarantee and consistent frequency in the video stream and IMU sequence, we could align the IMU data with the gait labels obtained from the images and maintain very high time accuracy in the annotation process.
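The alignment step described above can be sketched as follows. The function name and the use of a mean over the stamp-action timestamps are our own illustrative assumptions; the paper simply shifts the image-frame timeline by the measured lag:

```python
import statistics

def align_video_to_imu(imu_stamp_t, video_stamp_t, frame_times):
    """imu_stamp_t / video_stamp_t: timestamps (s) of the same stamp
    actions as identified in each stream; frame_times: timestamps of
    all video frames. Returns the frame timestamps shifted onto the
    IMU timeline by the average per-stamp offset."""
    offset = statistics.mean(i - v for i, v in zip(imu_stamp_t, video_stamp_t))
    return [t + offset for t in frame_times]
```

After shifting, the residual differences at the remaining stamp actions serve as a sanity check that the two streams are synchronized, mirroring the verification described in the text.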

2.4. Manual Data Labeling

The following section describes the concepts of manual gait division and the process of data annotation.

2.4.1. Gait Modes

When humans are walking, the body relies on the support of the feet to maintain the balance of the torso during travel. The torso generates forward or backward momentum with the ground’s reaction force on the feet to achieve startup and cushioning. Research in the field of human motor health defines a pedestrian stride as the process from a foot leaving the ground until it leaves the ground again. Based on biomechanics, this process can be divided into four different gait patterns, shown in Figure 3: (1) pushoff phase: the ankle exerts force to make the heel and palm leave the ground sequentially; (2) swing phase: after the toes leave the ground, the foot begins swinging in the direction of travel in a pendulum-like motion; (3) heel-strike phase: the heel first comes into contact with the ground and takes the ground impact, and then the foot gradually makes contact with the ground to transfer the ground force to the arch to alleviate the impact; (4) stance phase: the foot remains relatively stationary on the ground and supports the center of gravity of the torso, maintaining the body’s balance and preparing for the next gait phase, i.e., the pushoff phase. These four gait patterns constitute a continuous cycle corresponding to the periodic motion of a single foot during walking. Based on the above discussion, the video frames can be utilized to generate four labels in our dataset: “pushoff, swing, heelstrike, and stance”, corresponding to the four gait patterns, respectively.

2.4.2. Data Annotation

Since the transitions between gaits follow a fixed, tightly ordered chronological sequence, we divide the gaits in the video stream by manually capturing the key movements of the feet within them.
The process of producing labels is based on ELAN software, which was released in the year 2000 by the Max Planck Institute for Psycholinguistics in the Netherlands to label audio signals. It supports labeling of audio, video and audio-video multi-streaming data and is recognized as professional labeling software in psychology, medicine, psychiatry, education and behavioral research [50]. ELAN defaults to a semantic layer associated with the index position of the data to be labeled, then generates a label file of the data by placing the words entered by the user in the semantic layer.
We can obtain each image frame of foot motion and its corresponding timestamp in ELAN. By manually identifying and picking the image frames corresponding to the different significant foot movements, we annotated all the foot motion data involved in this dataset and generated label files. It should be noted that manual annotation is a very time-consuming task. We could have taken the alternative of using existing gait recognition methods, such as Hidden Markov Models [46], but the parameters of such models or detection algorithms still require fine-tuning before real application. Additionally, given the unknown generalizability of such methods, the detection results would inevitably require manual verification. Therefore, we stuck to this conventional approach to generate reliable and accurate labeling information. The flow of data annotation is shown in Figure 4.

2.4.3. Data File Description

The dataset consists of the following types of files: IMU data files, ELAN-exported label files, and subject information (height, gender, walking speed type, etc.). The IMU data include the following columns: time_stamp_0, time_stamp_1, acc_x, acc_y, acc_z, turnrate_x, turnrate_y, turnrate_z, magnetometer_x, magnetometer_y, magnetometer_z. In that order, they represent the two timestamp readings, tri-axis acceleration data, tri-axis gyroscope data and tri-axis magnetometer data. The timestamp readings are recorded in sample order, where one sample corresponds to 0.01 s; the acceleration data are in units of m/s², the angular velocity data are in units of rad/s, and the magnetometer readings are normalized to the Earth’s magnetic field strength. The label file has the suffix “.eaf”, where all labeled words are encoded and each code corresponds to its respective timestamp.
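A minimal loader for the IMU files might look as follows. We assume comma-separated values in the column order listed above; the exact delimiter and any header row in the published files may differ:

```python
import csv, io

# Column order as described in the dataset documentation above.
COLUMNS = ["time_stamp_0", "time_stamp_1",
           "acc_x", "acc_y", "acc_z",
           "turnrate_x", "turnrate_y", "turnrate_z",
           "magnetometer_x", "magnetometer_y", "magnetometer_z"]

def read_imu_rows(text):
    """Parse IMU file contents (assumed comma-separated, no header)
    into a list of dicts mapping column names to float values."""
    reader = csv.reader(io.StringIO(text))
    return [dict(zip(COLUMNS, map(float, row))) for row in reader if row]
```

With samples recorded at 0.01 s intervals, the row index alone recovers elapsed time, which is what the alignment with the “.eaf” labels relies on.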

2.5. Stride Segmentation Method

In this part of the work, we chose the DTW method for stride segmentation because it shows satisfactory adaptability to different time lengths and varying signal amplitudes. We reproduced the multi-dimensional subsequence Dynamic Time Warping (msDTW) method and validated its performance on the diverse gait dataset. We refer interested readers to [51] for the computational details of msDTW. However, we found that msDTW works well only on the basis of a grid search scheme that selects a specific threshold value for use when searching for stride borders. If a valley point in the accumulated distance between the template signal and the input sequence is less than the threshold, then the segment between the last valley point that met the condition and the newly detected valley point is regarded as a new stride. If the threshold is too large, pseudo-strides will be detected; if the threshold is too small, normal strides are prone to being missed. In order to make the stride segmentation process independent of the threshold, we propose a novel method, SDATW, based on IMU-subsequence shape descriptors and an augmented time warping process. The following sections describe our algorithm.
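The threshold-dependent valley picking that makes msDTW fragile can be sketched as follows. This is a simplified illustration of the decision rule described above, not the authors' exact implementation; `min_gap` stands in for the temporal condition between accepted valleys:

```python
def threshold_valleys(dist_curve, threshold, min_gap):
    """Threshold-based stride-border picking as in msDTW-style methods:
    a local valley of the accumulated distance curve that lies below
    `threshold`, at least `min_gap` samples after the previously
    accepted valley, closes a stride."""
    borders, last = [], -min_gap
    for i in range(1, len(dist_curve) - 1):
        is_valley = (dist_curve[i] <= dist_curve[i - 1]
                     and dist_curve[i] < dist_curve[i + 1])
        if is_valley and dist_curve[i] < threshold and i - last >= min_gap:
            borders.append(i)
            last = i
    return borders
```

The sketch makes the failure modes visible: raising `threshold` admits pseudo-valleys, while lowering it drops genuine stride borders, which is exactly the dependence SDATW is designed to remove.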

2.5.1. Template Generation

The stride template is defined as $B = \{\beta_0, \beta_1, \ldots, \beta_{n-1}\}^T$, $B \in \mathbb{R}^{n}$, $\beta_i \in \mathbb{R}^{6}$, where each sample in the template consists of items from the 3-axis accelerometer and the 3-axis gyroscope. We evenly select 30 percent of the manually labeled stride segments from each height group; each axis in every segment is then interpolated or down-sampled to a length of 200 samples [8]. Consequently, we obtain a stride database with full-speed-range coverage for each height group. Finally, all the strides in the database are averaged sample by sample to generate a template representing a compromise over a wide speed range and individual differences in strides.
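The template generation step can be sketched as below: resample each labeled stride to 200 samples per axis, then average across strides. The use of linear interpolation for resampling is our assumption; the paper does not specify the interpolation method:

```python
import numpy as np

def make_template(strides, target_len=200):
    """strides: list of arrays of shape (len_i, 6), each holding the
    3-axis accelerometer and 3-axis gyroscope samples of one labeled
    stride. Each axis is resampled to target_len samples (linear
    interpolation assumed), then all strides are averaged sample by
    sample to form the template of shape (target_len, 6)."""
    resampled = []
    for s in strides:
        s = np.asarray(s, dtype=float)
        x_old = np.linspace(0.0, 1.0, len(s))
        x_new = np.linspace(0.0, 1.0, target_len)
        resampled.append(np.column_stack(
            [np.interp(x_new, x_old, s[:, k]) for k in range(s.shape[1])]))
    return np.mean(resampled, axis=0)
```

Averaging after length normalization is what lets one template stand in for a wide range of speeds and individuals, at the cost of smoothing away subject-specific detail.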

2.5.2. Data Normalization

In this work, we use z-normalization, which is a necessity for the accuracy and generalization of DTW methods [52]. Z-normalization refers to normalizing every sample in a series of data such that the mean of all values is 0 and the standard deviation is 1; it helps to reduce the impact of outlier points on recognition results. The calculation is shown in the following formula:
$$acc_{norm,i} = \frac{acc_i - \overline{acc}}{S}, \quad i = 1, 2, \ldots, n$$
where $\overline{acc}$ is the mean value of the accelerometer series, and $S$ is the unbiased estimate of the standard deviation:
$$S = \sqrt{\frac{1}{n-1} \sum_{i=1}^{n} \left(acc_i - \overline{acc}\right)^2}$$
Since only normalized data are used in further calculations, the subscript $norm$ is omitted for simplicity.
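The normalization above amounts to a few lines of NumPy; note the unbiased (`ddof=1`) standard deviation matching the formula for $S$:

```python
import numpy as np

def z_normalize(x):
    """Z-normalize a 1-D signal: subtract the mean and divide by the
    unbiased standard deviation (ddof=1), as in the equations above."""
    x = np.asarray(x, dtype=float)
    s = x.std(ddof=1)  # unbiased estimate S
    return (x - x.mean()) / s
```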

2.5.3. Calculation of Distance Matrix of Shape Descriptor Sequence

Intuitively, the DTW algorithm searches for the minimum cumulative distance when matching sample points by measuring the similarity of two time series of different lengths. We formally define the query sequence $A = \{\alpha_0, \alpha_1, \ldots, \alpha_{m-1}\}^T$, $A \in \mathbb{R}^{m}$ and the template $B = \{\beta_0, \beta_1, \ldots, \beta_{n-1}\}^T$, $B \in \mathbb{R}^{n}$. We create and initialize the distance accumulation matrix $D \in \mathbb{R}^{(m+1) \times (n+1)}$:
$$D(i, j) = \begin{cases} 0, & i \in \{0, 1, \ldots, m-1\},\ j = 0 \\ 0, & i = 0,\ j \in \{0, 1, \ldots, n-1\} \\ \infty, & \text{else} \end{cases}$$
To calculate the time warping distance matrix between the sample points of the sequences, the elements of $D$ are conventionally calculated as:
$$D(i, j) = dist(i, j) + \min\{D(i-1, j),\ D(i, j-1),\ D(i-1, j-1)\}$$
Here $dist(i, j)$ is the spatial distance between the $i$-th point in the query sequence and the $j$-th element in the template. It is usually calculated using the Euclidean distance, Manhattan distance, Hamming distance or another metric. The msDTW algorithm is a recent case of employing template matching for stride segmentation. In conventional DTW algorithms, the accumulated distance is compared with a threshold to assess whether a data segment fits the signal distribution of the template, and this is also the case in the msDTW method. Thus, msDTW can represent standard DTW algorithms that make decisions based on a threshold. We found that the distribution of accumulated distance is not monotonically increasing or decreasing when applying msDTW to data from the same subject with consistent walking cadence. A visible spike appears at the time adjacent to the valley point, as shown in box 2 in Figure 5. However, the threshold and temporal conditions fail to preclude the appearance of pseudo-valleys.
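The recurrence above can be sketched for the one-dimensional case as follows. This is a generic subsequence-DTW illustration, not the paper's msDTW implementation: zeroing the first column lets the template match starting anywhere in the query, and the last row of $D$ is the accumulated-distance curve whose valleys mark candidate stride ends:

```python
import numpy as np

def subsequence_dtw_distances(query, template):
    """Fill the accumulated-distance matrix D for a 1-D query and
    template using the recurrence D(i,j) = dist(i,j) + min of the three
    predecessors, with the first column initialized to zero so the
    template may begin at any query position. Returns D[1:, n], the
    matching cost ending at each query sample."""
    m, n = len(query), len(template)
    D = np.full((m + 1, n + 1), np.inf)
    D[:, 0] = 0.0  # template may start anywhere in the query
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = abs(query[i - 1] - template[j - 1])  # pointwise distance
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[1:, n]
```

Plotting the returned curve for real gait data reproduces the valley/spike behavior discussed around Figure 5.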
In this work, the stride boundary is set at the end of the stance phase, which accounts for nearly 40% of a stride cycle. Because the magnitude of the sensor signal in the stance phase is lower than in other phases, the stride boundaries appear in the flat section of the accumulated distance curve. The presence of multiple valleys within this flat area blurs the basis for identifying the boundaries, as indicated in box 1 in Figure 5. Therefore, if the accumulated distance curve could be kept smooth and monotonic, it would be promising for improving the accuracy of the stride segmentation method.
We use shape descriptors to improve the smoothness and monotonicity of the accumulated distance curve. Shape descriptors measure the similarity between two points by comparing their local neighborhoods rather than their values alone. As a result, the accumulated distance between a sample point and a template element reflects the difference between the data distributions of their neighborhoods, and this difference is not confined to the Euclidean distance. Measuring differences between neighborhoods makes the cumulative distance curve smoother and the valley points more distinct and discriminative than in conventional DTW approaches, which forms the basis of SDATW's threshold independence. As shown in Figure 5, SDATW employs six shape descriptors, which give the accumulated distance curve more desirable smoothness and monotonicity.
In the process of finding the minimum cumulative distance, traditional DTW methods recreate a sliding window for each sample point and compute the distance between the data segment inside the window and the template, which makes DTW computationally expensive. SDATW instead searches for the minimum accumulated distance using only a single matrix to locate the data segment whose starting and ending points best match the template. The selected segment is guaranteed to start at the point with the best similarity to the template among all preceding sample points, and the data before its ending point contain as much information about the current cycle as possible, which frees SDATW from any dependence on a threshold. This ensures that the minimum of the accumulated distance is not overlooked and supports online detection of streaming data for practical applications. In this work, we replace the spatial distance with a distance between shape descriptors, designed to express subsequence structural information under the assumption that optimally matched subsequences exhibit the greatest similarity between their structural features.
Each shape descriptor encodes local structural information around the temporal point $\alpha_i$ or $\beta_j$. Given a mapping function $M(\cdot)$, we convert the query sequence $A$ to its descriptor sequence $A_{des} = \{\alpha^{des}_0, \alpha^{des}_1, \ldots, \alpha^{des}_{m-1}\}^T$ with $\alpha^{des}_i \in \mathbb{R}^l$, i.e., $A_{des} = \{M(\alpha_0), M(\alpha_1), \ldots, M(\alpha_{m-1})\}^T$. The dimension $l$ of $\alpha^{des}_i$ may differ from that of a sample point and depends on the mapping function $M(\cdot)$. The shape descriptors can be classified into the magnitude-aware-descriptors and the fluctuation-capturing-descriptors. The magnitude-aware-descriptors include Raw Subsequence (RAW), Piecewise Aggregate Approximation (PAA), and Discrete Wavelet Transform (DWT). RAW consists of the raw samples around the point where features are extracted. PAA consists of the mean values of several equal-length intervals of the original subsequence. DWT consists of concatenated wavelet coefficients obtained by decomposing a subsequence into three levels with a Haar wavelet basis. These three descriptors are generated from the amplitude of the signal, namely the y-axis value, so they memorize the signal magnitude distribution in the neighborhood of the sample point. The fluctuation-capturing-descriptors involve SLOPE, DERIVATIVE, and HOG1D. SLOPE is a series of gradients of intervals whose length depends on the subsequence size and the number of equal-length intervals; DERIVATIVE is similar to SLOPE but is calculated from the first-order derivative of a subsequence; HOG1D inherits from the Histogram of Oriented Gradients (HOG) descriptor [50] and was used in [51] to describe 1D time series. As SLOPE, DERIVATIVE, and HOG1D mainly record the direction and amplitude of signal fluctuations, they are invariant to the magnitude of the raw subsequence.
We refer interested readers to [52] for computation details of shape descriptors.
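To make the two descriptor families concrete, here is a minimal sketch of one magnitude-aware descriptor (PAA) and one fluctuation-capturing descriptor (SLOPE); the interval count and the plain least-squares fit are illustrative simplifications of the definitions in [52]:

```python
def _chunks(seq, k):
    """Split seq into k near-equal contiguous intervals."""
    step, rem = divmod(len(seq), k)
    out, pos = [], 0
    for i in range(k):
        size = step + (1 if i < rem else 0)
        out.append(seq[pos:pos + size])
        pos += size
    return out

def paa_descriptor(subseq, n_intervals=4):
    """PAA (magnitude-aware): mean value of each equal-length interval."""
    return [sum(c) / len(c) for c in _chunks(subseq, n_intervals)]

def slope_descriptor(subseq, n_intervals=4):
    """SLOPE (fluctuation-capturing): least-squares gradient of each interval.
    Adding a constant offset to the signal leaves this descriptor unchanged."""
    desc = []
    for c in _chunks(subseq, n_intervals):
        m = len(c)
        xbar, ybar = (m - 1) / 2.0, sum(c) / m
        num = sum((x - xbar) * (y - ybar) for x, y in zip(range(m), c))
        den = sum((x - xbar) ** 2 for x in range(m)) or 1.0
        desc.append(num / den)
    return desc
```

Shifting the signal vertically changes the PAA descriptor but not the SLOPE descriptor, which is exactly the magnitude-invariance property described above.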
Figure 6 shows an example of calculating RAW descriptors for gyroscope coronal-axis data. Each column represents the distance between one sample of the query sequence $A_{des}$ and the complete template $B_{des}$: the top row is the distance to the beginning of the template, and the bottom row is the distance to its end. Figure 6 clearly shows four dark blue paths running from top to bottom through the distance matrix with a periodic pattern, which is the same as the number of strides in the IMU data. We aim to match each subsequence of the query sequence with the stride template, minimizing the cumulative distance and finally finding the optimal segment.
By computing the weighted sum of one magnitude-aware-descriptor and one fluctuation-capturing-descriptor, we can get a compound shape descriptor, which may carry over the strengths of both descriptors and thus be more promising to correctly distinguish stride segment from IMU data streams.
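A compound descriptor can be sketched as a weighted concatenation, so that distances computed on it blend the magnitude and the fluctuation information; the specific weighting scheme below is an illustrative assumption:

```python
def compound_descriptor(mag_desc, fluct_desc, w=0.5):
    """Concatenate one magnitude-aware and one fluctuation-capturing
    descriptor with weights (w, 1 - w); distances computed on the result
    blend both kinds of structural information."""
    return [w * v for v in mag_desc] + [(1.0 - w) * v for v in fluct_desc]
```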

2.5.4. Augmented Time Warping Scheme

Given query sequence $A = \{\alpha_0, \alpha_1, \ldots, \alpha_{m-1}\}^T \in \mathbb{R}^m$ and template $B = \{\beta_0, \beta_1, \ldots, \beta_{n-1}\}^T \in \mathbb{R}^n$, let $A_{des} = \{\alpha^{des}_0, \ldots, \alpha^{des}_{m-1}\}^T$ and $B_{des} = \{\beta^{des}_0, \ldots, \beta^{des}_{n-1}\}^T$, with $\alpha^{des}_i, \beta^{des}_j \in \mathbb{R}^l$, be their shape descriptor sequences, respectively. We then use a disjoint query DTW method to find the subsequence $A_{des}[s:e] = \{\alpha^{des}_s, \alpha^{des}_{s+1}, \ldots, \alpha^{des}_e\}^T$ whose distance from the template descriptor sequence is the smallest among all possible subsequences of $A_{des}$; that is, $D(A_{des}[s:e], B_{des}) \le D(A_{des}[p:q], B_{des})$ for any pair $p = 0, 1, \ldots, m-1$ and $q = p, \ldots, m-1$. In this part of the work, we combined the shape descriptors with the SPRING method proposed in [53], and a threshold-independent DTW method was evaluated on the MATLAB platform for the stride segmentation problem. By augmenting the time warping process from these two aspects, our method not only finds the best-match part of the query sequence but also dramatically reduces the computational complexity, since it is invariant to the length of the streaming data.
The accumulated distance of the warping path is calculated as in the naïve solution, but a subsequence time warping matrix named $s_{query}$, whose height equals the length of template $B$, simultaneously keeps the beginning point of the current candidate query in memory. The memorization proceeds as follows:
s_{query}(t, i) = \begin{cases} s_{query}(t, i-1), & D(t, i-1) = d_{best} \\ s_{query}(t-1, i), & D(t-1, i) = d_{best} \\ s_{query}(t-1, i-1), & D(t-1, i-1) = d_{best} \end{cases}
d_{best} = \min\{ D(t, i-1),\ D(t-1, i),\ D(t-1, i-1) \}
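These two formulas amount to a single-cell update in which $s_{query}$ inherits the head position along the same predecessor cell that supplied $d_{best}$. A minimal sketch, with $D$ and $S$ ($s_{query}$) as nested lists indexed [template row][query column]:

```python
def dtw_cell_update(D, S, t, i, cost):
    """One cell update implementing the two formulas above: D accumulates
    the warping-path distance, while S (s_query) inherits the head position
    of the predecessor that achieved d_best."""
    cand = (D[t][i - 1], D[t - 1][i], D[t - 1][i - 1])
    heads = (S[t][i - 1], S[t - 1][i], S[t - 1][i - 1])
    k = cand.index(min(cand))   # locate d_best among the three predecessors
    D[t][i] = cost + cand[k]
    S[t][i] = heads[k]          # propagate the candidate query's head
```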
In this direct way, the head of the current candidate query is saved greedily. Moreover, if the candidate query is confirmed to be the best match at index position $t$, the range of the best match in the query sequence is readily available: the beginning position of $A_{des}[s:e]$ is the value saved in $s_{query}(end, e)$ and the end position is $e$. It should be noted that the position at which the best match is confirmed is later than its end position; the reason is explained below.
Guaranteeing that no false best match occurs is a necessity for this step. Although pseudocode is given in [53], we provide another perspective based on our reproduction of the method.
When a new point $\alpha^{des}_t$ of the query sequence $A_{des}$ arrives, we first compute the accumulated distances. This generates a new column in the accumulated distance matrix $D$ and a new column of beginning-position records in the time warping matrix $s_{query}$. We denote the new columns by $D(:, t)$ and $s_{query}(:, t)$, and their last elements by $D(end, t)$ and $s_{query}(end, t)$. The accumulated distance of the current candidate query is called the optimal distance and denoted $d_{min}$.
We then check whether the current candidate query needs to be replaced. If $D(end, t) < d_{min}$, there is another warping path that matches the template with a smaller difference, so the current candidate query is replaced by the new query: the optimal distance becomes $d_{min} = D(end, t)$, the beginning position $s = s_{query}(end, t)$, the end position $e = t$, and the index range of the candidate query is $[s:e]$. It should be noted that each point located within the index range of a candidate query may still act as a competitor to head another candidate query, unless it is excluded from the competition. This step ensures that the best match is not missed.
To exclude competitors that no longer have a chance to replace the current candidate query, we find the index positions in $s_{query}(:, t)$ of elements competing to be the head, denoting such an index position by $cmp$. If $D(cmp, t) > d_{min}$ with $cmp < end$, then even though the competitor's query has matched only part of the template, its accumulated distance is already larger than that of the current candidate query, let alone the accumulated distance of matching the whole template. In this way, the candidate query proves its superiority over its competitors. This step prepares for confirming the candidate query as the best-match query.
To confirm the candidate query as the best-match query, we only need to make sure that no competitor's index position remains in $s_{query}(:, t)$, or that no competitor's accumulated distance is smaller than $d_{min}$. Both circumstances indicate that the candidate query stands out, since a new stride may have begun. The confirmed query is saved in $opt_{query}$. This also explains why the confirmed position lags behind the end point of the best match.
The method for starting the search for a new best match is called "star-padding" [53]: we create a vector of positive infinities with the same height as $D(:, t)$ and place it to the left of $D(:, t)$; for $s_{query}(:, t)$, we do the same with a vector of zeros. The detected strides can be accessed through $opt_{query}$, which memorizes the time range of each detected stride. The results are shown in Figure 7.
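Putting the pieces together, the streaming search can be sketched as follows. This is a simplified, single-best-match reimplementation of the SPRING idea from [53]: absolute difference stands in for the shape-descriptor distance, one column of $D$ and $s_{query}$ is kept at a time, and the online confirmation logic is omitted in favor of returning the global best match:

```python
from math import inf

def best_match_stream(query, template):
    """Stream over `query`, keeping only the current column of D and
    s_query, and return (start, end, distance) of the subsequence that
    best matches `template` -- no threshold involved."""
    n = len(template)
    D = [0.0] + [inf] * n          # previous column ("star-padding" start)
    S = [0] * (n + 1)
    d_min, s_best, e_best = inf, 0, -1
    for t, x in enumerate(query):
        Dn = [0.0] * (n + 1)
        Sn = [t] + [0] * n         # a new candidate query may start at t
        for i in range(1, n + 1):
            cand = (Dn[i - 1], D[i], D[i - 1])
            heads = (Sn[i - 1], S[i], S[i - 1])
            k = cand.index(min(cand))
            Dn[i] = abs(x - template[i - 1]) + cand[k]
            Sn[i] = heads[k]       # carry the head of the warping path forward
        if Dn[n] < d_min:          # a better full-template alignment ends here
            d_min, s_best, e_best = Dn[n], Sn[n], t
        D, S = Dn, Sn
    return s_best, e_best, d_min
```

Note that memory and per-point work depend only on the template length $n$, not on the length of the stream, which is the invariance to streaming-data length claimed above.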

2.5.5. Complexity Analysis

Let $A$ be an evolving sequence of length $m$, $B$ a template of fixed length $n$, and $l$ the neighborhood size used for extracting each shape descriptor.
According to [54], the calculation of shape descriptors takes linear time $O(lm)$. At the time warping step, our algorithm keeps updating a single matrix and calculates $O(n)$ values per sample point, so the time warping complexity is $O(mn)$. Since $l \le m$ and $n \le m$ in general, the total complexity of the shape-descriptor calculation and the time warping procedure is bounded by $O(m^2)$; hence, the time complexity of the whole algorithm is $O(m^2)$, the same as conventional DTW.
In summary, we prepare for the warping path procedure with more in-depth information than the conventional approach, including the magnitude and fluctuation of the IMU signal. Furthermore, by recording the head of the candidate query while calculating the accumulated distance matrix, the augmented time warping matrix requires computing only one column of accumulated distances per input point. Finally, the technique of searching for best-match queries avoids any reliance on thresholds, and the procedure auto-initializes when a new stride segment is detected. Because the algorithm is based on shape descriptors [54] and augmented time warping [53], it is named SDATW.

2.5.6. Time Constraints

According to [8], the duration of a stride lies in the range of 250 to 2000 ms. This helps us to exclude pseudo-strides whose durations fall outside this range.
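This time constraint can be applied as a simple post-filter on detected stride boundaries; the sampling rate `fs` is an assumed parameter:

```python
def filter_strides(strides, fs=100, min_ms=250, max_ms=2000):
    """Discard pseudo-strides whose duration falls outside 250-2000 ms [8].
    `strides` holds (start, end) sample indices; fs is the sampling rate
    in Hz (assumed here to be 100)."""
    return [(s, e) for s, e in strides
            if min_ms <= (e - s) / fs * 1000.0 <= max_ms]
```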

2.6. Gait Division

When a pedestrian walks, successive gait patterns are sequentially articulated to form a complete stride, and the boundaries between gait patterns are determined by the foot movement. When the pushoff phase ends, the toe leaves the ground and the swing phase begins; when the swing phase ends, the heel makes contact with the ground and the heel-strike phase begins; when the heel-strike phase ends, the sole of the foot (shoe) makes full contact with the ground and the stationary phase begins; when the stationary phase ends, the heel begins to leave the ground and the pushoff phase begins.
Therefore, we define Toe Off (TO), Heel Touch (HT), and Heel Off (HO) as three stride events for gait recognition. Based on the time synchronization between the IMU data and the gait labels, we can apply the time range of the gait labels to the IMU sequence; experience-based knowledge of gait borders was verified as a gait recognition method in [10]. When a TO event occurs, the foot moves from plantar flexion to dorsiflexion, so the gyroscope detects a zero crossing in the coronal-axis data. When an HT event occurs, the violent interaction between the foot and the ground produces a sharp unidirectional change in the coronal-axis gyroscope data, and the sudden change of force produces a spike in the sagittal-axis accelerometer signal. When an HO event occurs, breaking the static phase of the foot, detection is performed by setting a threshold on the variance of the signal. From these events, the division of strides and gaits is obtained, as shown in Figure 8.
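A rough sketch of the three detectors described above: a zero crossing for TO, the sharpest one-sample change for HT, and a variance test around the stationary phase for HO. The window size and threshold are illustrative assumptions, not the values used in our pipeline:

```python
def zero_crossings(sig):
    """Indices i where sig changes sign between i and i+1
    (candidate TO events on the coronal-axis gyroscope)."""
    return [i for i in range(len(sig) - 1) if sig[i] * sig[i + 1] < 0]

def spike_index(sig):
    """Index of the largest one-sample jump (candidate HT event)."""
    diffs = [abs(sig[i + 1] - sig[i]) for i in range(len(sig) - 1)]
    return diffs.index(max(diffs))

def first_low_variance(sig, win=10, thresh=0.05):
    """First window whose variance falls below `thresh`, marking the start
    of the stationary phase; HO is then detected where the sliding-window
    variance rises above the threshold again. win/thresh are illustrative."""
    for k in range(len(sig) - win + 1):
        w = sig[k:k + win]
        mean = sum(w) / win
        if sum((v - mean) ** 2 for v in w) / win < thresh:
            return k
    return None
```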

2.7. Error Measurement

There are two types of errors in stride segmentation and gait recognition: (1) the segmentation fails to recognize labeled stride boundaries; (2) the segmentation is overly sensitive and detects pseudo-stride boundaries. The same types of errors occur in gait division. Therefore, a preliminary evaluation of stride segmentation and gait recognition based on precision and recall is performed.

2.7.1. Precision

\mathrm{precision} = \frac{\mathrm{true\ positives}}{\mathrm{detected\ positives}}
Here, true positives is the number of detected strides that correspond to labeled strides, and detected positives is the number of all strides detected by the algorithm. Precision measures how correctly the algorithm detects strides and gaits.

2.7.2. Recall

\mathrm{recall} = \frac{\mathrm{true\ positives}}{\mathrm{true\ positives} + \mathrm{false\ negatives}}
Recall is the proportion of detected true strides/gaits among all labeled strides/gaits. Since the labeled strides/gaits include samples the algorithm may miss, recall measures how badly the algorithm fails to detect them.

2.7.3. F-Measure

F\text{-}\mathrm{measure} = \frac{2 \cdot \mathrm{precision} \cdot \mathrm{recall}}{\mathrm{precision} + \mathrm{recall}}
The F-measure is the harmonic mean of precision and recall [22]; it accounts both for strides/gaits that were falsely detected and for those that were not detected.
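The three metrics above reduce to a few lines; counting true positives against the labels is assumed to have been done already:

```python
def f_measure(true_pos, detected_pos, labeled_pos):
    """Precision, recall, and F-measure as defined above. labeled_pos is
    the number of annotated strides/gaits, i.e. true_pos + false negatives."""
    precision = true_pos / detected_pos
    recall = true_pos / labeled_pos
    f = 2 * precision * recall / (precision + recall)
    return precision, recall, f
```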

3. Experiments and Results

To evaluate the effectiveness of SDATW, we carried out experiments on the diverse gait dataset in four parts. First, we tested the stride segmentation performance of six single-axis schemes based on the magnitude-aware-descriptors and picked the best one; we then selected the best-performing fluctuation-capturing-descriptor in the same way. Second, based on the experience of the first part, we tried axis combinations for the selected shape descriptors and performed stride segmentation. Third, we fused the selected shape descriptors into a compound shape descriptor and tried all single-axis and multi-axis combinations. Fourth, we present the results of gait recognition based on the optimal stride segmentation scheme. Our final goal is to find the best descriptor-data-axis scheme and use this result as a benchmark for the diverse gait dataset. An IMU module worn loosely during walking makes the acceleration and gyroscope data noisier and alters the distribution pattern of a stride cycle. To mitigate the detrimental impact on stride segmentation, we adopted a simple strategy: both the original signal and its flipped version were used as inputs, and the version yielding the greater number of identified strides was chosen as the formal output of SDATW.

3.1. Separate Performance Evaluation of Two Types of Shape Descriptors

The goal of this portion of the experiment is to use each single kind of shape descriptor to implement and validate pedestrian stride segmentation under cross-individual and broad speed domain circumstances. Therefore, when preparing the experimental data, we presume that the algorithm should not have access to certain a priori information, such as the physiological features of the pedestrians. According to the gait analysis in [55], walking speed dominates over gender, age, body height, and weight in influencing gait parameters. Therefore, we treat walking speed as the key component of the test scenario in order to verify the stability of our stride segmentation method. For each speed-type test group, we mixed foot movement data from all subjects, which brought diversity in gender and height.
SDATW was implemented for each distinct axis of the accelerometer and gyroscope. Thanks to the contribution of [54], we could try various shape descriptors and test their capability in extracting signal characteristics. Table 3 and Table 4 present an overview of the performance using the magnitude-aware-descriptors and the fluctuation-capturing-descriptors, respectively.
First, by comparing the errors of the different speed groups, we can determine whether the performance of the stride segmentation algorithm based on a given shape descriptor is stable under different walking speeds. Second, by comparing the results across data axes, we can determine the IMU axes most significant for stride segmentation. Finally, the data of all speed groups are combined to obtain the average accuracy of stride segmentation across individuals and wide-speed-domain scenarios, and we settle on the sensor axes and shape descriptors with the highest average F-measure. In the experiments using the magnitude-aware-descriptors, the best performance was obtained by using coronal-axis gyroscope data with DWT or PAA as the shape descriptor. Among the fluctuation-capturing-descriptors, HOG1D based on vertical-axis gyroscope data was the best.

3.2. Stride Segmentation with Selected Shape Descriptors and Combined Sensor Types

Based on DWT, PAA, and HOG1D, we used combinations of accelerometer and gyroscope data to see whether the results improved. Table 5 presents an overview of the performance of the different sensor-axis combination schemes. The best choice for DWT and PAA is using acceleration along the sagittal and vertical axes together, and the best choice for HOG1D is using three-axis gyroscope data together, but we must note a significant lag compared to the best result in the first part of the experiments.

3.3. Stride Segmentation with Compound Shape Descriptors

We concatenated HOG1D, as a fluctuation-capturing-descriptor, with each of the three magnitude-aware-descriptors using equal weights, resulting in three compound descriptors: $HOG1D{+}RAW = (HOG1D, RAW)$, $HOG1D{+}DWT = (HOG1D, DWT)$, and $HOG1D{+}PAA = (HOG1D, PAA)$.
Then, we evaluated stride segmentation with all sensor-axis plans. The results are shown in Table 6. We discovered that the best combination of a magnitude-aware-descriptor (RAW, PAA, or DWT) with HOG1D is RAW + HOG1D rather than DWT + HOG1D, even though DWT had the best performance among the magnitude-aware-descriptors. This result supports the idea that combining magnitude-aware-descriptors and fluctuation-capturing-descriptors can boost algorithm performance.
We compare SDATW with two stride segmentation methods commonly used in current wearable devices. msDTW uses a grid search to optimize its thresholds while fusing data from multiple sensor axes, ensuring both accurate stride detection and good cross-individual and wide-speed-domain adaptability. The wavelet-based algorithm extracts signal features via multi-scale analysis, so its thresholds also adapt well to different pedestrians and walking speeds [45]. The comparison results for stride segmentation on the diverse gait dataset are shown in Table 7. According to [8], a grid search must be applied to find the optimal threshold for each speed group, so it is reasonable that msDTW performs slightly better than the best result of our algorithm in the fast and slow speed tests. However, in real scenarios, our method maintains accuracy and robustness with no preparation, while msDTW might not. Compared with the wavelet-based method, SDATW generally performed slightly better, except in the slow speed test.

3.4. Gait Recognition with Optimal Shape Descriptor and Sensor Type

In this part of the work, we evaluated gait division based on the optimal stride segmentation scheme. The results in Table 8 show the effect of walking speed on gait recognition. The boundaries between gait patterns are blurred by the relatively weak amplitude of foot movements during slow walking, so the F-measures in the slow group are smaller than those in the other groups.

4. Discussion

In a PDR system, the pedestrian’s position could be updated by
P_t = \begin{bmatrix} P_t^N \\ P_t^E \end{bmatrix} = \begin{bmatrix} P_{t-1}^N \\ P_{t-1}^E \end{bmatrix} + SL_t \begin{bmatrix} \cos(\varphi_t) \\ \sin(\varphi_t) \end{bmatrix}
where $P_t$ and $P_{t-1}$ are the current and previous positions, respectively; $P_t^N$ and $P_t^E$ are the displacements in the north and east directions of the PDR coordinate system; $SL_t$ is the stride length; and $\varphi_t$ is the heading angle at time $t$. Based on the known stride boundaries, we may divide a stride into four gait phases: the stance, pushoff, swing, and heel-strike phases. From the perspective of gait analysis, the sum of the foot-moving distances of the pushoff, swing, and heel-strike phases, also known as the dynamic gait phases, determines the walking distance of the foot within one stride. It is also necessary to estimate the direction of foot motion during the stride from IMU and magnetometer data; this problem is outside the scope of this paper, but direction estimation based on IMU data has been studied extensively in the localization literature [6,15,56,57,58,59]. The SDATW proposed in this work is expected to produce accurate stride segmentation and strong robustness in cross-individual and wide-speed-domain walking scenarios due to its threshold independence and adaptability.
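For reference, one position update of the equation above is simply the following; the degree-based heading convention is an illustrative choice, and $SL_t$ and $\varphi_t$ are assumed to come from the stride length and heading estimators:

```python
from math import cos, sin, radians

def pdr_update(p_north, p_east, stride_len, heading_deg):
    """One PDR position update per the equation above: advance the position
    by one stride length SL_t along the estimated heading phi_t."""
    phi = radians(heading_deg)
    return p_north + stride_len * cos(phi), p_east + stride_len * sin(phi)
```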
For the choice of shape descriptors, we employed two kinds of shape descriptors for stride segmentation; however, the fluctuation-capturing-descriptors did not work as well as the magnitude-aware-descriptors. We infer that this result stems from their properties. Our template is generated by randomly selecting 30% of the strides from each speed group; this process itself smooths the stride signal, yielding a smoother segment to represent a typical stride for each axis of the IMU data. However, the fluctuation-capturing-descriptors (SLOPE, DERIVATIVE, HOG1D) essentially record the fluctuations of the input signal, and since the fluctuations in the template have already been weakened, these descriptors are weaker than the magnitude-aware-descriptors at matching the template on both acceleration and gyroscope data.
In SLOPE and DERIVATIVE, each item is a gradient obtained by linear regression on a small set of sample points and their neighbors. However, linear regression is easily affected by outliers, whereas HOG1D builds a histogram of the slope distribution in the manner of the Histogram of Oriented Gradients (HOG) descriptor [60], which means that HOG1D retains more knowledge about the slope than linear regression and can be more tolerant to outliers.
In the comparison of all axes of the IMU data, the coronal-axis gyroscope data exhibit the periodicity and smoothness closest to the template, which we explain from two perspectives. First, when a pedestrian is walking, the coronal axis of the gyroscope records the angular velocity of the ankle as it rotates back and forth between plantar flexion and dorsiflexion. The motion pattern of the ankle is quite homogeneous during walking, so the signal amplitude of the coronal-axis gyroscope is greater than that of the sagittal and vertical axes. This phenomenon is most obvious when a pedestrian walks in a straight line, but even when the pedestrian turns or makes other movements, plantar flexion and dorsiflexion of the ankle remain the keys to generating driving force in the lower limb. Thus, relative to the other two axes, the coronal-axis gyroscope signal should be considered the most essential motion information for stride segmentation and gait recognition. Second, the gyroscope records the effect of the rotational driving force accumulated per unit time, while the accelerometer records the immediate effect of the force acting on the object; consequently, the acceleration signal during walking has more burrs than the gyroscope signal. Additionally, variation in walking habits between individuals further increases the difference between the actual acceleration pattern and the pattern in the template. We therefore consider it harder to detect strides from acceleration structural features than from gyroscope features. Although we use mean filtering to smooth the acceleration signal, we still cannot make the acceleration stream as ideal as the template; if the sliding window is made large enough for the burrs to disappear, it destroys valuable information in the original signal that no subsequent step, such as z-normalization or shape descriptor extraction, can recover.
We tried all combinations of acceleration axes to improve the performance of the acceleration-based shape descriptors for stride segmentation, but they still performed similarly to single-axis data. We think the acceleration data fusion method needs to be modified. The axis combination in our algorithm computes the modulus of two-axis or three-axis data, which is essentially a 2-norm fusion of the axes. However, the signals along different axes record three mutually orthogonal motion components of the foot in space, and their noise distributions are likely uncorrelated. With this axis fusion, the noise along each axis is retained rather than eliminated, which may feed more noise into the shape descriptors instead of highlighting the periodic patterns of the signal.
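The 2-norm axis fusion discussed here is simply the pointwise modulus, which indeed preserves the noise contribution of every axis:

```python
def fuse_axes(*axes):
    """Pointwise 2-norm fusion of per-axis signals (the modulus
    combination used in our axis-combination scheme)."""
    return [sum(v * v for v in pt) ** 0.5 for pt in zip(*axes)]
```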
We believe that if the tri-axis acceleration data are properly utilized and fused with the gyroscope data, the robustness of the stride segmentation algorithm across individuals and complex scenarios will further improve [8,35]. Therefore, in future work, we will investigate IMU data fusion methods to improve the accuracy and generalizability of stride segmentation. In addition, for pedestrian step length estimation based on gait analysis, we have so far implemented and validated it for only one subject in a free-walking scenario. Using reference datasets for pedestrian navigation as a testbed, we plan to study an adaptive step length estimation model based on gait analysis that provides distance prediction with high accuracy and good adaptability across individuals.

5. Conclusions

This paper provides a diverse gait dataset with comprehensive coverage of healthy subjects by gender, height, and walking speed. There are 4690 strides of walking data collected and 19,083 items annotated as gait labels. Furthermore, based on this dataset, a novel algorithm called SDATW was proposed in this paper. With no dependence on a threshold, SDATW can perform stride segmentation without customization for individual pedestrians and maintains its accuracy under different walking speeds. The best F-measures for fast, normal, and slow walking are 0.813, 0.818, and 0.829, respectively. The performance of SDATW is slightly better than that of the conventional DTW algorithm and the wavelet-based method, with no parameter optimization required. Finally, a gait recognition method was evaluated on the basis of SDATW's output, and the detailed results can serve as a baseline for gait recognition on the diverse gait dataset.

Author Contributions

Conceptualization, C.H.; Data curation, C.H.; Formal analysis, F.Z.; Funding acquisition, Z.X. and J.W.; Investigation, C.H.; Methodology, C.H.; Project administration, F.Z. and J.W.; Resources, Z.X. and J.W.; Software, C.H.; Supervision, Z.X.; Visualization, C.H.; Writing—original draft, C.H.; Writing—review & editing, F.Z. and J.W. All authors have read and agreed to the published version of the manuscript.

Funding

This work was funded by Youth Innovation Promotion Association CAS (2021289).

Institutional Review Board Statement

The study was conducted in accordance with the Declaration of Helsinki, and approved by the Ethics Committee of Shanghai Advanced Research Institute, Chinese Academy of Sciences (001).

Informed Consent Statement

Informed consent was obtained from all subjects involved in the study.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Xu, L.; Xiong, Z.; Zhao, R. An Indoor Pedestrian Navigation Algorithm Based on Smartphone Mode Recognition. In Proceedings of the 2019 IEEE 3rd Information Technology, Networking, Electronic and Automation Control Conference (ITNEC), Chengdu, China, 15–17 March 2019; pp. 831–835.
  2. Gang, H.-S.; Pyun, J.-Y. A Smartphone Indoor Positioning System Using Hybrid Localization Technology. Energies 2019, 12, 3702.
  3. Abdelbar, M.; Buehrer, R.M. Pedestrian GraphSLAM Using Smartphone-Based PDR in Indoor Environments. In Proceedings of the 2018 IEEE International Conference on Communications Workshops (ICC Workshops), Kansas City, MO, USA, 20–24 May 2018; pp. 1–6.
  4. Angrisano, A.; Vultaggio, M.; Gaglione, S.; Crocetto, N. Pedestrian Localization with PDR Supplemented by GNSS. In Proceedings of the 2019 European Navigation Conference (ENC), Warsaw, Poland, 9–12 April 2019; pp. 1–6.
  5. Ashraf, I.; Hur, S.; Shafiq, M.; Kumari, S.; Park, Y. GUIDE: Smartphone Sensors-Based Pedestrian Indoor Localization with Heterogeneous Devices. Int. J. Commun. Syst. 2019, 32, e4062.
  6. Ciabattoni, L.; Foresi, G.; Monteriù, A.; Pepa, L.; Pagnotta, D.P.; Spalazzi, L.; Verdini, F. Real Time Indoor Localization Integrating a Model Based Pedestrian Dead Reckoning on Smartphone and BLE Beacons. J. Ambient. Intell. Humaniz. Comput. 2019, 10, 1–12.
  7. Huang, H.-Y.; Hsieh, C.-Y.; Liu, K.-C.; Cheng, H.-C.; Hsu, S.J.; Chan, C.-T. Multimodal Sensors Data Fusion for Improving Indoor Pedestrian Localization. In Proceedings of the 2018 IEEE International Conference on Applied System Invention (ICASI), Tokyo, Japan, 13–17 April 2018; pp. 283–286.
  8. Barth, J.; Oberndorfer, C.; Pasluosta, C.; Schülein, S.; Gassner, H.; Reinfelder, S.; Kugler, P.; Schuldhaus, D.; Winkler, J.; Klucken, J.; et al. Stride Segmentation during Free Walk Movements Using Multi-Dimensional Subsequence Dynamic Time Warping on Inertial Sensor Data. Sensors 2015, 15, 6419–6440. Available online: https://www.mad.tf.fau.de/research/activitynet/digital-biobank/ (accessed on 2 January 2022).
  9. Mao, Y.; Ogata, T.; Ora, H.; Tanaka, N.; Miyake, Y. Estimation of Stride-by-Stride Spatial Gait Parameters Using Inertial Measurement Unit Attached to the Shank with Inverted Pendulum Model. Sci. Rep. 2021, 11, 1391.
  10. Rampp, A.; Barth, J.; Schülein, S.; Gaßmann, K.-G.; Klucken, J.; Eskofier, B.M. Inertial Sensor-Based Stride Parameter Calculation from Gait Sequences in Geriatric Patients. IEEE Trans. Biomed. Eng. 2015, 62, 1089–1097.
  11. Kang, X.; Huang, B.; Qi, G. A Novel Walking Detection and Step Counting Algorithm Using Unconstrained Smartphones. Sensors 2018, 18, 297.
  12. Kang, X.; Huang, B.; Yang, R.; Qi, G. Accurately Counting Steps of the Pedestrian with Varying Walking Speeds. In Proceedings of the 2018 IEEE SmartWorld, Ubiquitous Intelligence Computing, Advanced Trusted Computing, Scalable Computing Communications, Cloud Big Data Computing, Internet of People and Smart City Innovation (SmartWorld/SCALCOM/UIC/ATC/CBDCom/IOP/SCI), Guangzhou, China, 8–12 October 2018; pp. 679–686.
  13. Wang, Y.; Shkel, A.M. Adaptive Threshold for Zero-Velocity Detector in ZUPT-Aided Pedestrian Inertial Navigation. IEEE Sens. Lett. 2019, 3, 7002304.
  14. Zhou, Z.; Yang, S.; Ni, Z.; Qian, W.; Gu, C.; Cao, Z. Pedestrian Navigation Method Based on Machine Learning and Gait Feature Assistance. Sensors 2020, 20, 1530.
  15. Foxlin, E. Pedestrian Tracking with Shoe-Mounted Inertial Sensors. IEEE Comput. Graph. Appl. 2005, 25, 38–46.
  16. Nilsson, J.-O.; Skog, I.; Händel, P.; Hari, K.V.S. Foot-Mounted INS for Everybody—An Open-Source Embedded Implementation. In Proceedings of the 2012 IEEE/ION Position, Location and Navigation Symposium, Myrtle Beach, SC, USA, 23–26 April 2012; pp. 140–145.
  17. A Reliable and Accurate Indoor Localization Method Using Phone Inertial Sensors. In Proceedings of the 2012 ACM Conference on Ubiquitous Computing. Available online: https://dl.acm.org/doi/abs/10.1145/2370216.2370280 (accessed on 26 December 2021).
  18. Weinberg, H. Using the ADXL202 in Pedometer and Personal Navigation Applications; Analog Devices AN-602 Application Note; Analog Devices: Norwood, MA, USA, 2002.
  19. Kim, J.W.; Jang, H.J.; Hwang, D.-H.; Park, C. A Step, Stride and Heading Determination for the Pedestrian Navigation System. J. GPS 2004, 3, 273–279.
  20. Enhancing the Performance of Pedometers Using a Single Accelerometer; Analog Devices. Available online: https://www.analog.com/en/analog-dialogue/articles/enhancing-pedometers-using-single-accelerometer.html (accessed on 7 January 2022).
  21. Xing, H.; Li, J.; Hou, B.; Zhang, Y.; Guo, M. Pedestrian Stride Length Estimation from IMU Measurements and ANN Based Algorithm. J. Sens. 2017, 2017, e6091261.
  22. Hannink, J.; Kautz, T.; Pasluosta, C.F.; Barth, J.; Schülein, S.; Gaßmann, K.-G.; Klucken, J.; Eskofier, B.M. Mobile Stride Length Estimation with Deep Convolutional Neural Networks. IEEE J. Biomed. Health Inform. 2018, 22, 354–362.
  23. Continuous Home Monitoring of Parkinson’s Disease Using Inertial Sensors: A Systematic Review. Available online: https://journals.plos.org/plosone/article?id=10.1371/journal.pone.0246528 (accessed on 7 December 2021).
  24. Mirelman, A.; Bonato, P.; Camicioli, R.; Ellis, T.D.; Giladi, N.; Hamilton, J.L.; Hass, C.J.; Hausdorff, J.M.; Pelosin, E.; Almeida, Q.J. Gait Impairments in Parkinson’s Disease. Lancet Neurol. 2019, 18, 697–708.
  25. Chatzaki, C.; Skaramagkas, V.; Tachos, N.; Christodoulakis, G.; Maniadi, E.; Kefalopoulou, Z.; Fotiadis, D.I.; Tsiknakis, M. The Smart-Insole Dataset: Gait Analysis Using Wearable Sensors with a Focus on Elderly and Parkinson’s Patients. Sensors 2021, 21, 2821.
  26. Evers, L.J.; Raykov, Y.P.; Krijthe, J.H.; Silva de Lima, A.L.; Badawy, R.; Claes, K.; Heskes, T.M.; Little, M.A.; Meinders, M.J.; Bloem, B.R. Real-Life Gait Performance as a Digital Biomarker for Motor Fluctuations: The Parkinson@Home Validation Study. J. Med. Internet Res. 2020, 22, e19068.
  27. Haji Ghassemi, N.; Hannink, J.; Martindale, C.F.; Gaßner, H.; Müller, M.; Klucken, J.; Eskofier, B.M. Segmentation of Gait Sequences in Sensor-Based Movement Analysis: A Comparison of Methods in Parkinson’s Disease. Sensors 2018, 18, 145.
  28. Pasluosta, C.F.; Gassner, H.; Winkler, J.; Klucken, J.; Eskofier, B.M. An Emerging Era in the Management of Parkinson’s Disease: Wearable Technologies and the Internet of Things. IEEE J. Biomed. Health Inform. 2015, 19, 1873–1881.
  29. Martindale, C.F.; Christlein, V.; Klumpp, P.; Eskofier, B.M. Wearables-Based Multi-Task Gait and Activity Segmentation Using Recurrent Neural Networks. Neurocomputing 2020, 432, 250–261.
  30. Kluge, F.; Gaßner, H.; Hannink, J.; Pasluosta, C.; Klucken, J.; Eskofier, B.M. Towards Mobile Gait Analysis: Concurrent Validity and Test-Retest Reliability of an Inertial Measurement System for the Assessment of Spatio-Temporal Gait Parameters. Sensors 2017, 17, 1522. Available online: https://www.mad.tf.fau.de/research/activitynet/sensor-based-gait-analysis-validation-data-kluge-et-al-2017/ (accessed on 2 January 2022).
  31. Khandelwal, S.; Wickström, N. Evaluation of the Performance of Accelerometer-Based Gait Event Detection Algorithms in Different Real-World Scenarios Using the MAREA Gait Database. Gait Posture 2017, 51, 84–90. Available online: https://wiki.hh.se/caisr/index.php/Gait_database (accessed on 2 January 2022).
  32. Martindale, C.F.; Roth, N.; Hannink, J.; Sprager, S.; Eskofier, B.M. Smart Annotation Tool for Multi-Sensor Gait-Based Daily Activity Data. In Proceedings of the 2018 IEEE International Conference on Pervasive Computing and Communications Workshops (PerCom Workshops), March 2018; pp. 549–554. Available online: https://www.mad.tf.fau.de/research/activitynet/benchmark-cyclic-activity-recognition-database-using-wearables/ (accessed on 2 January 2022).
  33. Givon, U.; Zeilig, G.; Achiron, A. Gait Analysis in Multiple Sclerosis: Characterization of Temporal-Spatial Parameters Using GAITRite Functional Ambulation System. Gait Posture 2009, 29, 138–142.
  34. Hutchinson, L.A.; De Asha, A.R.; Rainbow, M.J.; Dickinson, A.W.L.; Deluzio, K.J. A Comparison of Centre of Pressure Behaviour and Ground Reaction Force Magnitudes When Individuals Walk Overground and on an Instrumented Treadmill. Gait Posture 2021, 83, 174–176.
  35. Lueken, M.; ten Kate, W.; Batista, J.P.; Ngo, C.; Bollheimer, C.; Leonhardt, S. Peak Detection Algorithm for Gait Segmentation in Long-Term Monitoring for Stride Time Estimation Using Inertial Measurement Sensors. In Proceedings of the 2019 IEEE EMBS International Conference on Biomedical Health Informatics (BHI), Chicago, IL, USA, 19–22 May 2019; pp. 1–4.
  36. Zhao, N. Full-Featured Pedometer Design Realized with 3-Axis Digital Accelerometer; Analog Devices: Norwood, MA, USA, 2010; Volume 5.
  37. Jain, R.; Semwal, V.; Kaushik, P. Stride Segmentation of Inertial Sensor Data Using Statistical Methods for Different Walking Activities. Robotica 2021, 40, 1–14.
  38. Figueiredo, J.; Felix, P.; Costa, L.; Moreno, J.C.; Santos, C.P. Gait Event Detection in Controlled and Real-Life Situations: Repeated Measures from Healthy Subjects. IEEE Trans. Neural Syst. Rehabil. Eng. 2018, 26, 1945–1956.
  39. Pérez-Ibarra, J.C.; Siqueira, A.A.G.; Krebs, H.I. Adaptive Gait Phase Segmentation Based on the Time-Varying Identification of the Ankle Dynamics: Technique and Simulation Results. In Proceedings of the 2020 8th IEEE RAS/EMBS International Conference for Biomedical Robotics and Biomechatronics (BioRob), New York, NY, USA, 29 November–1 December 2020; pp. 734–739.
  40. Oudre, L.; Barrois-Müller, R.; Moreau, T.; Truong, C.; Vienne-Jumeau, A.; Ricard, D.; Vayatis, N.; Vidal, P.-P. Template-Based Step Detection with Inertial Measurement Units. Sensors 2018, 18, 4033.
  41. Vienne-Jumeau, A.; Oudre, L.; Moreau, A.; Quijoux, F.; Vidal, P.-P.; Ricard, D. Comparing Gait Trials with Greedy Template Matching. Sensors 2019, 19, 3089.
  42. Ji, N.; Zhou, H.; Guo, K.; Samuel, O.W.; Huang, Z.; Xu, L.; Li, G. Appropriate Mother Wavelets for Continuous Gait Event Detection Based on Time-Frequency Analysis for Hemiplegic and Healthy Individuals. Sensors 2019, 19, 3462.
  43. Khandelwal, S.; Wickström, N. Identification of Gait Events Using Expert Knowledge and Continuous Wavelet Transform Analysis. In Proceedings of the International Joint Conference on Biomedical Engineering Systems and Technologies, Setubal, Portugal, 3–6 March 2014; SCITEPRESS—Science and Technology Publications: Setubal, Portugal, 2014; Volume 4, pp. 197–204.
  44. Chakraborty, J.; Nandy, A. Discrete Wavelet Transform Based Data Representation in Deep Neural Network for Gait Abnormality Detection. Biomed. Signal Process. Control 2020, 62, 102076.
  45. Prateek, G.V.; Mazzoni, P.; Earhart, G.M.; Nehorai, A. Gait Cycle Validation and Segmentation Using Inertial Sensors. IEEE Trans. Biomed. Eng. 2020, 67, 2132–2144.
  46. Martindale, C.F.; Sprager, S.; Eskofier, B.M. Hidden Markov Model-Based Smart Annotation for Benchmark Cyclic Activity Recognition Database Using Wearables. Sensors 2019, 19, 1820.
  47. Martindale, C.F.; Hoenig, F.; Strohrmann, C.; Eskofier, B.M. Smart Annotation of Cyclic Data Using Hierarchical Hidden Markov Models. Sensors 2017, 17, 2328.
  48. Roth, N.; Küderle, A.; Ullrich, M.; Gladow, T.; Marxreiter, F.; Klucken, J.; Eskofier, B.M.; Kluge, F. Hidden Markov Model Based Stride Segmentation on Unsupervised Free-Living Gait Data in Parkinson’s Disease Patients. J. NeuroEng. Rehabil. 2021, 18, 93.
  49. Angermann, M.; Robertson, P.; Kemptner, T.; Khider, M. A High Precision Reference Data Set for Pedestrian Navigation Using Foot-Mounted Inertial Sensors. In Proceedings of the 2010 International Conference on Indoor Positioning and Indoor Navigation, Zurich, Switzerland, 15–17 September 2010; pp. 1–6.
  50. ELAN: A Professional Framework for Multimodality Research. Available online: https://aclanthology.org/L06-1082/ (accessed on 25 November 2021).
  51. Jiang, Y.; Qi, Y.; Wang, W.K.; Bent, B.; Avram, R.; Olgin, J.; Dunn, J. EventDTW: An Improved Dynamic Time Warping Algorithm for Aligning Biomedical Signals of Nonuniform Sampling Frequencies. Sensors 2020, 20, 2700.
  52. Mueen, A.; Keogh, E. Extracting Optimal Performance from Dynamic Time Warping. In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, San Francisco, CA, USA, 13–17 August 2016; ACM: New York, NY, USA, 2016; pp. 2129–2130.
  53. Stream Monitoring under the Time Warping Distance. Available online: https://ieeexplore.ieee.org/document/4221753 (accessed on 12 December 2021).
  54. Zhao, J.; Itti, L. ShapeDTW: Shape Dynamic Time Warping. arXiv 2016, arXiv:1606.01601.
  55. Røislien, J.; Skare, Ø.; Gustavsen, M.; Broch, N.L.; Rennie, L.; Opheim, A. Simultaneous Estimation of Effects of Gender, Age and Walking Speed on Kinematic Gait Data. Gait Posture 2009, 30, 441–445.
  56. Zhao, H.; Zhang, L.; Qiu, S.; Wang, Z.; Yang, N.; Xu, J. Pedestrian Dead Reckoning Using Pocket-Worn Smartphone. IEEE Access 2019, 7, 91063–91073.
  57. Gu, F. Indoor Localization Supported by Landmark Graph and Locomotion Activity Recognition. Ph.D. Thesis, University of Melbourne, Parkville, VIC, Australia, 2018.
  58. Yu, T.; Jin, H.; Nahrstedt, K. ShoesLoc: In-Shoe Force Sensor-Based Indoor Walking Path Tracking. Proc. ACM Interact. Mob. Wearable Ubiquitous Technol. 2019, 3, 23.
  59. Chen, C.; Lu, X.; Wahlstrom, J.; Markham, A.; Trigoni, N. Deep Neural Network Based Inertial Odometry Using Low-Cost Inertial Measurement Units. IEEE Trans. Mob. Comput. 2019, 20, 1351–1364.
  60. Deng, H.; Runger, G.; Tuv, E.; Vladimir, M. A Time Series Forest for Classification and Feature Extraction. Inf. Sci. 2013, 239, 142–153.
Figure 1. The IMU position and the direction of each axis.
Figure 2. The acceleration magnitude of stomps is distinctly larger than that of walking.
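The stomp-versus-walking contrast shown in Figure 2 reduces to a simple magnitude test on the accelerometer signal. The sketch below is illustrative only: the array layout and the 30 m/s² threshold are assumptions for demonstration, not values taken from the paper.

```python
import numpy as np

def detect_stomps(acc, threshold=30.0):
    """Flag samples whose acceleration magnitude exceeds a threshold.

    acc: (N, 3) array of accelerometer samples in m/s^2.
    threshold: hypothetical magnitude cutoff separating stomps from walking.
    Returns a boolean mask of candidate stomp samples.
    """
    magnitude = np.linalg.norm(acc, axis=1)  # sqrt(ax^2 + ay^2 + az^2)
    return magnitude > threshold

# Foot acceleration during normal walking stays well below a stomp's spike.
acc = np.array([[0.1, 0.2, 9.8],     # standing (gravity only)
                [3.0, 1.0, 12.0],    # walking
                [15.0, 10.0, 35.0]]) # stomp
print(detect_stomps(acc))  # only the last sample is flagged
```

In practice such a mask would be post-processed (e.g., merging consecutive flagged samples into one stomp event) before being used as a synchronization marker.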
Figure 3. The movement of a whole stride can be divided into four gait phases. (A) shows the heel just leaving the ground at the end of the stance phase; (B) shows the toe about to leave the ground at the end of the pushoff phase; (C) shows the heel touching the ground at the end of the swing phase.
Figure 4. In the ELAN interface, the Current Image Frame shows the target’s movement state at the current moment; the Current Label Information shows the time range corresponding to the series of images selected by the user. When a specific movement state is captured (box 1), the user stops advancing frames and edits the label for the corresponding gait in the Area for Editing Labels (box 2); Annotation Information records the detailed information labeled so far, with the latest label item at the bottom of the label history (box 3).
Figure 5. The accumulated distance curve used in conventional DTW methods may contain spikes leading to pseudo-minimums, and multi-valleys blurring the stride boundaries. Shape descriptors are able to improve the smoothness and monotonicity of the accumulated distance curve. (a) accumulated distance in conventional DTW; (b1) accumulated distance from RAW descriptor; (b2) accumulated distance from PAA descriptor; (b3) accumulated distance from DWT descriptor; (b4) accumulated distance from SLOPE descriptor; (b5) accumulated distance from DERIVATIVE descriptor; (b6) accumulated distance from HOG1D descriptor.
Figure 6. An example distance matrix built from RAW descriptors of gyroscope coronal-axis data. Deep blue elements of the distance matrix indicate a small spatial distance between the shape descriptor of a query sample and that of a template point, while red elements indicate a large spatial distance. (A) query sequence A_des; (B) stride template B_des; (C) distance matrix dist.
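The distance matrix of Figure 6 is obtained by comparing shape descriptors pointwise rather than raw sample values. Below is a minimal sketch of the RAW descriptor in the sense of ShapeDTW [54] (each sample is represented by a window of its neighbors) and the resulting query-template distance matrix; the window width and edge padding are illustrative assumptions, not the paper's exact settings.

```python
import numpy as np

def raw_descriptor(x, width=5):
    """ShapeDTW-style RAW descriptor: each sample is represented by the
    subsequence of `width` neighboring samples (edge-padded), so pointwise
    comparisons reflect local shape rather than a single value."""
    half = width // 2
    padded = np.pad(x, half, mode="edge")
    return np.stack([padded[i:i + width] for i in range(len(x))])

def distance_matrix(query, template, width=5):
    """Pairwise Euclidean distances between descriptors of a query
    sequence and a stride template (cf. Figure 6)."""
    a = raw_descriptor(query, width)      # (len(query), width)
    b = raw_descriptor(template, width)   # (len(template), width)
    diff = a[:, None, :] - b[None, :, :]  # broadcast all pairs
    return np.linalg.norm(diff, axis=2)   # (len(query), len(template))
```

Swapping `raw_descriptor` for a PAA, DWT, slope, derivative, or HOG1D descriptor changes only the per-sample representation; the matrix construction stays the same.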
Figure 7. (A) The white lines represent the warping paths corresponding to the best-matched subsequences in the query sequence. Using the augmented time warping scheme, the best-matched subsequences can be detected and the borders of strides recognized. Dark red ribbons between two warping paths indicate the borders of detected stride segments; they display accumulated distances that are positive infinite as a result of star-padding. (B) The warping paths in the distance matrix run through the deep blue area from top to bottom, which is consistent with the hypothesis in Section 2.5.3. (C) After mapping the time range of the warping paths onto the time axis of the IMU data, the stride borders become available, represented as red vertical lines. The resulting stride segments closely resemble the template.
Figure 8. With time synchronization assured, the time range of gait labels in the video recordings can be converted to the IMU sequence, which serves as the basis of gait analysis and gait phase recognition.
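Under a shared clock, converting a label's video time range into IMU sample indices (as in Figure 8) is a rate conversion. A minimal sketch, with the 100 Hz IMU rate taken from the dataset description and the offset handling assumed rather than taken from the paper's pipeline:

```python
def label_to_imu_indices(t_start, t_end, imu_rate=100.0, imu_t0=0.0):
    """Map a gait label's time range (seconds, on the synchronized clock)
    to IMU sample indices.

    imu_rate: IMU sampling rate in Hz (100 Hz for the Diverse Gait Dataset).
    imu_t0: timestamp of the first IMU sample; assumed known from the
            synchronization step (e.g., the stomp marker).
    """
    i_start = round((t_start - imu_t0) * imu_rate)
    i_end = round((t_end - imu_t0) * imu_rate)
    return i_start, i_end

# A label spanning 12.40 s to 12.83 s maps to IMU samples 1240..1283,
# i.e., 43 samples for a ~0.43 s gait phase at 100 Hz.
print(label_to_imu_indices(12.40, 12.83))  # (1240, 1283)
```

Because the video runs at 30 Hz while the IMU runs at 100 Hz, each one-frame labeling uncertainty in the video translates into roughly three IMU samples of border uncertainty.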
Table 1. Summary of published gait datasets.
eGaIT-Validation Stride Segmentation [8] (Digital Biobank): sampling frequency 102.4 Hz; reference data: manual annotation; subjects: 70 (39 males and 41 females); health: elderly controls (45), PD patients (15), geriatric patients (15); scenarios: indoor obstacle-free environment and outdoor overground walking; sensor position: shoe; labels: start and end point of each stride; protocol: 40 m straight walk and 2 min free walk; number of strides: not reported.
eGaIT-Validation Gait Parameters [10] (Digital Biobank): sampling frequency 102.4 Hz; reference data: GAITRite (pressure sensors); subjects: 101 (55 females and 46 males); health: geriatric patients; scenarios: laboratory settings; sensor position: shoe; labels: gait velocity, cadence, step length, heel-to-heel base of support width, length of gait phases [33]; protocol: 10 m normal walk and 1–2 min four-wheeled walk; number of strides: not reported.
Sensor-Based Gait Analysis Validation Data [30]: sampling frequency 102.4 Hz; reference data: motion capture system; subjects: 15 (8 males and 7 females); health: healthy (11), PD patients (4); scenarios: laboratory settings; sensor position: shoe; labels: heel-strike, toe-off, heel-off; protocol: 4 × 10 m walk; 1116 strides (1037 from healthy subjects, 129 from patients).
MAREA [31]: sampling frequency 128 Hz; reference data: piezo-electric force sensitive resistors; subjects: 20 (12 males and 8 females); health: all healthy; scenarios: indoor laboratory settings and outdoor overground streets; sensor positions: waist, left wrist, left and right ankles; labels: heel-strike, toe-off; protocol: treadmill walk plus outdoor walk/run/jog; number of strides: not reported.
Smart Annotation Cyclic Activities Dataset [32]: sampling frequency 200 Hz; reference data: camera recordings (30 Hz); subjects: 18 (14 males and 4 females); health: all healthy; scenarios: a prescribed outdoor circuit with varying surfaces; sensor position: shoe; labels: start and end point of each stride; 2263 walking strides and 1391 running strides.
The Diverse Gait Dataset (this work): sampling frequency 100 Hz; reference data: camera recordings; subjects: 22 (13 males and 9 females); health: all healthy; scenarios: indoor corridors; sensor position: shoe; labels: stance, toe-off, heel-strike; protocol: 46 m straight walk; 4690 walking strides.
Table 2. A total of 22 healthy volunteers (13 males, 9 females, age 32.5 ± 7.5 years) participated in the study and were divided into different groups according to gender and height information.
Height Range (cm) | Males | Females | Strides (fast / middle / slow / all) | Gait Phases (stance / pushoff / swing / heel-strike)
155~160 | 0 | 2 | 142 / 159 / 171 / 472 | 487 / 478 / 474 / 474
160~165 | 2 | 3 | 261 / 337 / 475 / 1073 | 1121 / 1100 / 1113 / 1110
165~170 | 2 | 2 | 298 / 324 / 367 / 989 | 1022 / 1013 / 1113 / 998
170~176 | 4 | 1 | 406 / 440 / 459 / 1305 | 1358 / 1353 / 1006 / 1378
176~180 | 2 | 1 | 123 / 121 / 146 / 390 | 408 / 399 / 400 / 393
180~185 | 3 | 0 | 150 / 114 / 197 / 461 | 480 / 470 / 470 / 465
Table 3. Stride segmentation results for magnitude-aware-descriptors in F-measure values. Best results for each speed group are highlighted in bold numbers.
Shape Descriptor | Speed | AccX | AccY | AccZ | GyroX | GyroY | GyroZ
RAW | fast | 0.623 ± 0.022 | 0.499 ± 0.048 | 0.537 ± 0.056 | 0.328 ± 0.049 | 0.605 ± 0.018 | 0.737 ± 0.069
RAW | mid | 0.505 ± 0.066 | 0.681 ± 0.042 | 0.559 ± 0.088 | 0.362 ± 0.051 | 0.727 ± 0.049 | 0.832 ± 0.019
RAW | slow | 0.534 ± 0.077 | 0.807 ± 0.012 | 0.607 ± 0.067 | 0.449 ± 0.069 | 0.774 ± 0.029 | 0.831 ± 0.008
RAW | all | 0.63 ± 0.052 | 0.654 ± 0.043 | 0.577 ± 0.072 | 0.426 ± 0.07 | 0.675 ± 0.063 | 0.796 ± 0.034
PAA | fast | 0.758 ± 0.015 | 0.65 ± 0.071 | 0.306 ± 0.055 | 0.256 ± 0.042 | 0.642 ± 0.024 | 0.819 ± 0.029
PAA | mid | 0.783 ± 0.025 | 0.748 ± 0.057 | 0.35 ± 0.066 | 0.232 ± 0.061 | 0.73 ± 0.038 | 0.852 ± 0.006
PAA | slow | 0.691 ± 0.033 | 0.76 ± 0.024 | 0.377 ± 0.04 | 0.477 ± 0.071 | 0.769 ± 0.033 | 0.816 ± 0.011
PAA | all | 0.784 ± 0.019 | 0.666 ± 0.06 | 0.265 ± 0.043 | 0.278 ± 0.052 | 0.705 ± 0.057 | 0.833 ± 0.013
DWT | fast | 0.765 ± 0.012 | 0.451 ± 0.081 | 0.304 ± 0.051 | 0.294 ± 0.052 | 0.652 ± 0.031 | 0.811 ± 0.031
DWT | mid | 0.782 ± 0.027 | 0.638 ± 0.051 | 0.333 ± 0.06 | 0.206 ± 0.038 | 0.67 ± 0.025 | 0.847 ± 0.007
DWT | slow | 0.739 ± 0.029 | 0.687 ± 0.05 | 0.375 ± 0.056 | 0.38 ± 0.069 | 0.722 ± 0.029 | 0.806 ± 0.011
DWT | all | 0.761 ± 0.026 | 0.497 ± 0.07 | 0.243 ± 0.045 | 0.22 ± 0.051 | 0.687 ± 0.042 | 0.835 ± 0.008
Table 4. Stride segmentation results for fluctuation-capturing-descriptors in F-measure values. Best results for each speed group are highlighted in bold numbers.
Shape Descriptor | Speed | AccX | AccY | AccZ | GyroX | GyroY | GyroZ
SLOPE | fast | 0.016 ± 0.001 | 0.014 ± 0.001 | 0.015 ± 0.001 | 0.025 ± 0.001 | 0.109 ± 0.043 | 0.059 ± 0.023
SLOPE | mid | 0.032 ± 0.007 | 0.045 ± 0.008 | 0.014 ± 0.001 | 0.046 ± 0.005 | 0.165 ± 0.043 | 0.207 ± 0.072
SLOPE | slow | 0.148 ± 0.042 | 0.203 ± 0.037 | 0.115 ± 0.025 | 0.296 ± 0.066 | 0.44 ± 0.059 | 0.414 ± 0.115
SLOPE | all | 0.071 ± 0.02 | 0.069 ± 0.015 | 0.029 ± 0.004 | 0.146 ± 0.052 | 0.157 ± 0.039 | 0.245 ± 0.093
DERIVATIVE | fast | 0.014 ± 0 | 0.019 ± 0.001 | 0.013 ± 0.001 | 0.023 ± 0.001 | 0.112 ± 0.049 | 0.067 ± 0.025
DERIVATIVE | mid | 0.026 ± 0.003 | 0.041 ± 0.004 | 0.016 ± 0.001 | 0.041 ± 0.003 | 0.161 ± 0.044 | 0.249 ± 0.083
DERIVATIVE | slow | 0.198 ± 0.063 | 0.293 ± 0.079 | 0.157 ± 0.04 | 0.304 ± 0.067 | 0.447 ± 0.075 | 0.418 ± 0.124
DERIVATIVE | all | 0.109 ± 0.04 | 0.111 ± 0.039 | 0.052 ± 0.015 | 0.145 ± 0.05 | 0.167 ± 0.049 | 0.261 ± 0.1
HOG1D | fast | 0.13 ± 0.031 | 0.228 ± 0.073 | 0.237 ± 0.071 | 0.179 ± 0.026 | 0.578 ± 0.052 | 0.094 ± 0.025
HOG1D | mid | 0.154 ± 0.021 | 0.455 ± 0.091 | 0.246 ± 0.058 | 0.166 ± 0.03 | 0.639 ± 0.036 | 0.218 ± 0.048
HOG1D | slow | 0.238 ± 0.047 | 0.232 ± 0.059 | 0.273 ± 0.061 | 0.308 ± 0.039 | 0.65 ± 0.027 | 0.504 ± 0.054
HOG1D | all | 0.186 ± 0.04 | 0.257 ± 0.065 | 0.285 ± 0.056 | 0.267 ± 0.054 | 0.454 ± 0.07 | 0.334 ± 0.078
Table 5. Stride segmentation results of different sensor axis combination schemes in F-measure values. Best results for each speed group are highlighted in bold numbers.
Shape Descriptor | Speed | AccXY | AccXZ | AccYZ | AccXYZ | GyroXY | GyroXZ | GyroYZ | GyroXYZ
DWT | fast | 0.585 ± 0.015 | 0.607 ± 0.03 | 0.649 ± 0.043 | 0.596 ± 0.009 | 0.504 ± 0.042 | 0.251 ± 0.083 | 0.273 ± 0.083 | 0.304 ± 0.082
DWT | mid | 0.593 ± 0.017 | 0.523 ± 0.07 | 0.682 ± 0.023 | 0.589 ± 0.018 | 0.305 ± 0.054 | 0.375 ± 0.067 | 0.381 ± 0.081 | 0.386 ± 0.058
DWT | slow | 0.669 ± 0.03 | 0.543 ± 0.063 | 0.767 ± 0.015 | 0.674 ± 0.032 | 0.487 ± 0.046 | 0.742 ± 0.053 | 0.718 ± 0.055 | 0.699 ± 0.061
DWT | all | 0.628 ± 0.019 | 0.581 ± 0.072 | 0.666 ± 0.033 | 0.624 ± 0.018 | 0.388 ± 0.065 | 0.347 ± 0.108 | 0.336 ± 0.098 | 0.337 ± 0.102
PAA | fast | 0.512 ± 0.035 | 0.592 ± 0.045 | 0.652 ± 0.058 | 0.516 ± 0.044 | 0.364 ± 0.066 | 0.144 ± 0.04 | 0.122 ± 0.022 | 0.076 ± 0.013
PAA | mid | 0.552 ± 0.043 | 0.432 ± 0.051 | 0.723 ± 0.038 | 0.561 ± 0.038 | 0.219 ± 0.047 | 0.421 ± 0.066 | 0.385 ± 0.075 | 0.393 ± 0.074
PAA | slow | 0.659 ± 0.031 | 0.494 ± 0.063 | 0.778 ± 0.017 | 0.654 ± 0.048 | 0.4 ± 0.053 | 0.794 ± 0.051 | 0.813 ± 0.047 | 0.805 ± 0.041
PAA | all | 0.602 ± 0.032 | 0.557 ± 0.068 | 0.697 ± 0.034 | 0.602 ± 0.034 | 0.348 ± 0.061 | 0.441 ± 0.128 | 0.41 ± 0.121 | 0.405 ± 0.128
HOG1D | fast | 0.628 ± 0.023 | 0.274 ± 0.041 | 0.639 ± 0.029 | 0.609 ± 0.017 | 0.654 ± 0.009 | 0.567 ± 0.076 | 0.632 ± 0.066 | 0.571 ± 0.055
HOG1D | mid | 0.603 ± 0.033 | 0.319 ± 0.037 | 0.59 ± 0.027 | 0.598 ± 0.024 | 0.74 ± 0.022 | 0.649 ± 0.012 | 0.677 ± 0.019 | 0.662 ± 0.021
HOG1D | slow | 0.638 ± 0.045 | 0.544 ± 0.037 | 0.674 ± 0.042 | 0.63 ± 0.042 | 0.735 ± 0.024 | 0.724 ± 0.03 | 0.745 ± 0.014 | 0.724 ± 0.026
HOG1D | all | 0.538 ± 0.042 | 0.313 ± 0.054 | 0.576 ± 0.042 | 0.552 ± 0.043 | 0.642 ± 0.028 | 0.759 ± 0.039 | 0.776 ± 0.034 | 0.782 ± 0.033
Table 6. Stride segmentation results of compound descriptor of different sensor axis combination schemes in F-measure values. Best results for each speed group are highlighted in bold numbers.
Single axis:
Compound Descriptor | Speed | AccX | AccY | AccZ | GyroX | GyroY | GyroZ
(HOG1D, RAW) | fast | 0.515 ± 0.028 | 0.584 ± 0.031 | 0.325 ± 0.064 | 0.339 ± 0.046 | 0.566 ± 0.022 | 0.815 ± 0.021
(HOG1D, RAW) | mid | 0.409 ± 0.067 | 0.721 ± 0.008 | 0.28 ± 0.071 | 0.305 ± 0.041 | 0.676 ± 0.043 | 0.83 ± 0.021
(HOG1D, RAW) | slow | 0.438 ± 0.068 | 0.835 ± 0.008 | 0.253 ± 0.063 | 0.438 ± 0.068 | 0.732 ± 0.029 | 0.824 ± 0.009
(HOG1D, RAW) | all | 0.427 ± 0.061 | 0.641 ± 0.034 | 0.276 ± 0.057 | 0.311 ± 0.052 | 0.636 ± 0.072 | 0.823 ± 0.015
(HOG1D, DWT) | fast | 0.443 ± 0.05 | 0.557 ± 0.039 | 0.301 ± 0.069 | 0.361 ± 0.043 | 0.599 ± 0.011 | 0.771 ± 0.038
(HOG1D, DWT) | mid | 0.317 ± 0.051 | 0.703 ± 0.027 | 0.259 ± 0.056 | 0.36 ± 0.037 | 0.629 ± 0.038 | 0.798 ± 0.021
(HOG1D, DWT) | slow | 0.399 ± 0.051 | 0.747 ± 0.034 | 0.267 ± 0.043 | 0.423 ± 0.04 | 0.702 ± 0.032 | 0.788 ± 0.012
(HOG1D, DWT) | all | 0.372 ± 0.061 | 0.598 ± 0.038 | 0.3 ± 0.056 | 0.312 ± 0.054 | 0.625 ± 0.059 | 0.795 ± 0.017
(HOG1D, PAA) | fast | 0.295 ± 0.048 | 0.647 ± 0.04 | 0.355 ± 0.072 | 0.352 ± 0.061 | 0.598 ± 0.014 | 0.758 ± 0.059
(HOG1D, PAA) | mid | 0.307 ± 0.059 | 0.765 ± 0.015 | 0.334 ± 0.075 | 0.404 ± 0.048 | 0.68 ± 0.038 | 0.817 ± 0.022
(HOG1D, PAA) | slow | 0.413 ± 0.062 | 0.758 ± 0.026 | 0.206 ± 0.039 | 0.398 ± 0.041 | 0.744 ± 0.019 | 0.741 ± 0.03
(HOG1D, PAA) | all | 0.291 ± 0.056 | 0.623 ± 0.041 | 0.299 ± 0.067 | 0.311 ± 0.056 | 0.652 ± 0.058 | 0.776 ± 0.033
Fused axes:
Compound Descriptor | Speed | AccXY | AccXZ | AccYZ | AccXYZ | GyroXY | GyroXZ | GyroYZ | GyroXYZ
(HOG1D, RAW) | fast | 0.583 ± 0.031 | 0.454 ± 0.05 | 0.612 ± 0.019 | 0.572 ± 0.023 | 0.533 ± 0.03 | 0.175 ± 0.039 | 0.352 ± 0.076 | 0.188 ± 0.049
(HOG1D, RAW) | mid | 0.546 ± 0.037 | 0.389 ± 0.058 | 0.67 ± 0.034 | 0.542 ± 0.041 | 0.616 ± 0.033 | 0.528 ± 0.092 | 0.609 ± 0.083 | 0.584 ± 0.097
(HOG1D, RAW) | slow | 0.631 ± 0.03 | 0.424 ± 0.065 | 0.796 ± 0.021 | 0.653 ± 0.017 | 0.626 ± 0.038 | 0.799 ± 0.032 | 0.837 ± 0.015 | 0.803 ± 0.032
(HOG1D, RAW) | all | 0.59 ± 0.048 | 0.38 ± 0.068 | 0.619 ± 0.035 | 0.571 ± 0.052 | 0.521 ± 0.054 | 0.399 ± 0.113 | 0.409 ± 0.114 | 0.385 ± 0.116
(HOG1D, DWT) | fast | 0.574 ± 0.033 | 0.386 ± 0.062 | 0.572 ± 0.027 | 0.562 ± 0.033 | 0.538 ± 0.035 | 0.206 ± 0.038 | 0.341 ± 0.061 | 0.259 ± 0.056
(HOG1D, DWT) | mid | 0.518 ± 0.036 | 0.359 ± 0.058 | 0.516 ± 0.039 | 0.529 ± 0.042 | 0.626 ± 0.023 | 0.49 ± 0.043 | 0.494 ± 0.061 | 0.511 ± 0.04
(HOG1D, DWT) | slow | 0.598 ± 0.038 | 0.366 ± 0.066 | 0.72 ± 0.031 | 0.596 ± 0.036 | 0.626 ± 0.042 | 0.72 ± 0.04 | 0.776 ± 0.026 | 0.766 ± 0.027
(HOG1D, DWT) | all | 0.518 ± 0.061 | 0.319 ± 0.057 | 0.565 ± 0.043 | 0.512 ± 0.058 | 0.55 ± 0.052 | 0.465 ± 0.087 | 0.434 ± 0.098 | 0.416 ± 0.092
(HOG1D, PAA) | fast | 0.564 ± 0.023 | 0.315 ± 0.054 | 0.63 ± 0.017 | 0.529 ± 0.024 | 0.536 ± 0.029 | 0.262 ± 0.059 | 0.34 ± 0.061 | 0.301 ± 0.058
(HOG1D, PAA) | mid | 0.496 ± 0.046 | 0.349 ± 0.067 | 0.567 ± 0.051 | 0.512 ± 0.04 | 0.707 ± 0.02 | 0.447 ± 0.051 | 0.536 ± 0.034 | 0.45 ± 0.055
(HOG1D, PAA) | slow | 0.515 ± 0.051 | 0.379 ± 0.055 | 0.764 ± 0.016 | 0.492 ± 0.046 | 0.714 ± 0.022 | 0.721 ± 0.036 | 0.781 ± 0.02 | 0.756 ± 0.029
(HOG1D, PAA) | all | 0.46 ± 0.059 | 0.287 ± 0.054 | 0.616 ± 0.03 | 0.468 ± 0.054 | 0.585 ± 0.034 | 0.436 ± 0.093 | 0.503 ± 0.085 | 0.508 ± 0.094
Table 7. Stride segmentation results of msDTW, wavelet-based method and SDATW in F-measure.
Speed | msDTW | Wavelet-Based Method | SDATW
fast | 0.813 | 0.714 | 0.811
mid | 0.818 | 0.781 | 0.847
slow | 0.829 | 0.815 | 0.806
all | 0.822 | 0.773 | 0.835
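The F-measures in Table 7 combine precision and recall over matched stride segments. Below is a sketch of one plausible evaluation scheme; the border tolerance and the greedy matching rule are chosen for illustration and are not the paper's exact protocol.

```python
def segmentation_f_measure(detected, truth, tol=10):
    """F-measure for stride segmentation.

    A detected stride counts as a true positive when both of its borders
    lie within `tol` samples of a not-yet-matched ground-truth stride.
    detected, truth: lists of (start, end) sample-index pairs.
    """
    unmatched = list(truth)
    tp = 0
    for d_start, d_end in detected:
        for g in unmatched:
            if abs(d_start - g[0]) <= tol and abs(d_end - g[1]) <= tol:
                unmatched.remove(g)  # each truth stride is matched once
                tp += 1
                break
    fp = len(detected) - tp  # detections with no matching truth stride
    fn = len(truth) - tp     # truth strides that were missed
    precision = tp / (tp + fp) if detected else 0.0
    recall = tp / (tp + fn) if truth else 0.0
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

truth = [(0, 100), (100, 205), (205, 300)]
detected = [(2, 98), (103, 210), (400, 480)]
print(segmentation_f_measure(detected, truth))  # 2 TP, 1 FP, 1 FN -> F = 2/3
```

With precision = recall = 2/3 in the example, the F-measure is also 2/3, which mirrors how a single score summarizes both missed and spurious strides in Table 7.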
Table 8. Detailed results of gait phase recognition with different walking speed types given in F-measure values.
Walking Speed | Stance | Pushoff | Swing | Heel-Strike
Fast | 0.8046 | 0.8522 | 0.8596 | 0.7884
Middle | 0.7784 | 0.8461 | 0.8835 | 0.8056
Slow | 0.701 | 0.6958 | 0.8399 | 0.7180
Full Range | 0.7548 | 0.7925 | 0.8597 | 0.7674
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Share and Cite

Huang, C.; Zhang, F.; Xu, Z.; Wei, J. The Diverse Gait Dataset: Gait Segmentation Using Inertial Sensors for Pedestrian Localization with Different Genders, Heights and Walking Speeds. Sensors 2022, 22, 1678. https://doi.org/10.3390/s22041678
