Article

What Lies Beneath One’s Feet? Terrain Classification Using Inertial Data of Human Walk

by Muhammad Zeeshan Ul Hasnain Hashmi, Qaiser Riaz *, Mehdi Hussain and Muhammad Shahzad
Department of Computing, School of Electrical Engineering and Computer Science, National University of Sciences and Technology (NUST), Islamabad 44000, Pakistan
* Author to whom correspondence should be addressed.
Appl. Sci. 2019, 9(15), 3099; https://doi.org/10.3390/app9153099
Submission received: 18 June 2019 / Revised: 22 July 2019 / Accepted: 23 July 2019 / Published: 31 July 2019

Abstract: The objective of this study was to investigate whether the inertial data collected from normal human walk can be used to reveal the underlying terrain types. For this purpose, we recorded the gait patterns of normal human walk on six different terrain types, with variation in hardness and friction, using body-mounted inertial sensors. We collected the accelerations and angular velocities of 40 healthy subjects with two smartphone-embedded inertial measurement units (MPU-6500) attached at two different body locations (chest and lower back). The recorded data were segmented with a stride-based segmentation approach, and 194 tempo-spectral features were computed for each stride. We trained two machine learning classifiers, namely random forest and support vector machine, and cross-validated the results with a 10-fold cross-validation strategy. The classification tasks were performed on indoor–outdoor terrains, hard–soft terrains, and combinations of binary, ternary, quaternary, quinary and senary terrains. Classification accuracies of 97% and 92% were achieved for indoor–outdoor and hard–soft terrains, respectively. The classification results for binary, ternary, quaternary, quinary and senary classification were 96%, 94%, 92%, 90%, and 89%, respectively. These results demonstrate that the stride data collected with the low-level signals of a single IMU can be used to train classifiers and predict terrain types with high accuracy. Moreover, the problem at hand can be solved invariant of sensor type and sensor location.


1. Introduction

Terrain classification is an active area of research with a wide range of applications, e.g., outdoor terrain navigation, the recommendation of floor types for health care environments, sports flooring, consumer suggestion systems and autonomous driving [1,2]. In the literature, several terrain classification approaches have been proposed based on different types of data (e.g., visual data, acoustics, physical touch, etc.), typically acquired using optical cameras, 3D laser scanners and on-board sensors mounted on humanoid robots [3,4,5], autonomous off-road driving vehicles [6,7,8] and aerial platforms [9].
The traditional camera-based approaches employ visual features to distinguish different terrain types. One of the earliest approaches was proposed by Weszka et al. [10], who performed terrain classification using automatic texture measures. Anantrasirichai et al. [11] also presented an algorithm that used visual features captured during the human walk. Similarly, Ma et al. [9] employed aerial image data for terrain classification to support off-road navigation of ground vehicles using low-rank sparse representation. Peterson et al. [12] also used imagery data acquired with an aerial vehicle for ground robot navigation. With the aim of terrain classification, Dornik et al. [6] classified different soil types using geographic object-based analysis on images. Laible et al. [7] fused color information with 3D scans obtained from LiDAR to perform terrain classification. Similarly, Ojeda et al. [13] employed fused data acquired from a suite of sensors consisting of a microphone, accelerometer, gyroscope, infrared sensor and motor current sensor to train a feedforward neural network for terrain classification.
From a robotics perspective, Wu et al. [3] proposed a small legged robot that used an array of miniature capacitive tactile sensors to directly measure ground reaction forces (GRF) and used them to classify terrains. Zhang et al. [14] also sensed ground forces using a force/torque sensor for biomimetic hexapod robots walking on unstructured terrains. Similarly, Giguere et al. [4] described a tactile probe with a single-axis accelerometer for surface classification with mobile robots. Belter et al. [5] addressed the issue of terrain perception and classification using noisy range data acquired via a laser scanning and terrain mapping module in walking robots. Valada et al. [15] performed robotic acoustics-based terrain classification that exploited the sound waves originating from the terrain–vehicle interaction to build a spectrogram image, which was fed to a convolutional neural network to learn deep features for subsequent terrain classification. Rothrock et al. [16] framed terrain classification as a semantic segmentation problem in which they employed DeepLab to visually classify terrain types and then registered the output with slope angles and wheel slip data to generate a prediction model for the Mars Rover mission. Similarly, for planetary missions, Brooks et al. [17] analyzed the vibration patterns obtained via terrain–vehicle interaction to distinguish different terrain types. Zhu et al. [18] proposed a method for terrain classification based on a combination of speeded-up robust features for an autonomous multi-legged walking robot.
In the context of autonomous off-road driving, the terrain classification problem has been actively studied in the last decade. For instance, Manduchi et al. [1] presented a technique for terrain classification and obstacle detection for autonomous navigation using a single-axis ladar and a stereo pair of sensors. DuPont et al. [19] proposed terrain classification based on the vibrations generated by autonomous ground vehicles, while Lu et al. [20] used a 2D laser stripe-based structured light sensor for the same task. Further, Delmerico et al. [21] performed terrain classification using rapid training of a classifier for autonomous air and ground robots. In robotic terrain classification, the performance of the proposed methods depends on the navigation strategy, machine vibration and obstacles in the path.
Although these approaches work well, their accuracy is constrained by several challenges. For example, in the case of visual sensors, motion, occlusion and changes of appearance due to varying illumination conditions degrade performance. Similarly, the robotics- and autonomous off-road driving-based approaches involve trade-offs in terms of accuracy, cost, and restrictive ambient conditions [3]. Owing to these issues, reliably classifying terrain types while maintaining accuracy with a low-cost solution is a highly challenging task, and alternatives to the traditional terrain classification approaches are worth exploring. For instance, it is well known that human beings capture information about terrain during walking by sensing it with their feet and by the sound of their footsteps [22]. The kinematic properties of the human motion pattern allow capturing motion data for gait analysis, which in turn has been used as a reliable source for activity recognition [23] and for estimating soft biometrics, including gait-based age estimation [24,25], gender classification [24,26], emotion recognition [27] and human authentication/identification [28,29]. Moreover, modern devices such as smartphones and wearables, which are now ubiquitously available, are typically equipped with many sensors. For instance, on-board/embedded inertial measurement units (IMUs) consisting of tri-axial accelerometers, tri-axial gyroscopes and tri-axial magnetometers provide inertial data (i.e., 3D accelerations and angular velocities with an acceptably low noise rate) at no additional cost. Furthermore, these smart devices are usually equipped with powerful processors capable of performing computationally intensive tasks and can thus capture and analyze inertial data without compromising the normal use of the device. This has created many opportunities to solve real-life problems [30], including soft biometrics classification [31], reconstruction of human motion [32], and measuring physical health and basic activities.
The use of smartphone inertial data for terrain classification has not yet been extensively explored. To the best of our knowledge, only a few methods utilize the inertial data of human gait for terrain classification [33,34]. Table 1 highlights the main approaches in terrain classification. In this context, this paper proposes a novel terrain classification framework that uses tempo-spectral features extracted from inertial data collected with a smartphone. The extracted features are used to train machine learning classifiers (support vector machine and random forest) and predict terrain types. More precisely, the gait patterns of normal human walk over six different terrain types with variations in hardness and friction were recorded with inertial sensors. The main contributions of the proposed approach are the following:
  • We collected gait data of 40 healthy participants using body-mounted inertial sensors (embedded in smartphones) attached at two body locations, i.e., the chest and the lower back. The data were collected on six different types of terrains: carpet, concrete floor, grass, asphalt, soil, and tiles (as explained in Section 2.4). The data can be freely accessed by sending an email to the corresponding author.
  • We propose a set of 194 tempo-spectral hand-crafted features per stride, which can be used to train different supervised learning classifiers (random forest and support vector machine) and predict terrains. The prediction accuracy remained at approximately 90% or above for terrains under different classes such as indoor–outdoor, hard–soft, and combinations of binary, ternary, quaternary, quinary and senary terrain classes (details in Section 2.4 and Section 3).
  • From the experimental results, we found that the lower back is more suitable than the chest for sensor placement for the task of terrain classification, as it produced the highest classification accuracies (details in Section 4.1).

2. Materials and Methods

2.1. Selection of Terrains

Several types of terrains, both natural and man-made, exist around us. The goal of this study was to classify the terrains that humans encounter on a daily basis. In this context, we chose six different types of terrains with variations in hardness and friction. The hard terrains include concrete floor, asphalt, and tiles, whereas the soft terrains include carpet, grass, and soil (Figure 1). An indoor environment was used to record data for concrete floor, carpet, and tiles, whereas an outdoor environment was used for the remaining terrains (Table 2).

2.2. Characteristics of the Population

The population consisted of 40 healthy South Asian subjects who voluntarily participated in the study. The subjects were briefed about the nature of the experiments, the type of data to be collected, and how it would be used in the research. All willing participants were asked to sign a consent form. The characteristics of the population including age, height, weight, and male-to-female ratio are given in Table 3.

2.3. Placement of Sensors

Recent studies have shown that body locations such as the chest [35], arm [36], waist and lower back [36,37,38], and ankle [24,38] are appropriate for human activity recognition and soft biometrics estimation. We chose two different body locations, i.e., the chest and the lower back, for the placement of sensors. Two smartphones were attached, one on the chest and one on the lower back. The data collected from each sensor were processed independently to find the most suitable body location for the task of terrain classification, which was one of the main objectives of this study.

2.4. Data Collection

Most modern digital devices such as smartphones are equipped with a range of sensors, including cameras and inertial measurement units (IMUs). For the purpose of data collection, we used two Android-based smartphones. The smartphones have on-board 6D IMUs (MPU-6500, InvenSense, San Jose, CA, USA), which can measure tri-axial accelerations and tri-axial angular velocities. The technical specifications of the MPU-6500 IMUs are given in Table 4. The smartphones were tightly attached at the chest and lower back of the subject using elastic belts, as shown in Figure 2.
A data recording application was developed to record 6D inertial data with the smart phones. The data were recorded at a sampling rate of 75 Hz.
The standardized gait task consisted of a straight walk for a distance of 10 m from a starting point, turning around, and returning to the starting point. Subjects were asked to walk with their natural gait while wearing shoes and to repeat the standardized gait task twice on all six surfaces. Each repetition thus covered 20 m, giving 40 m per terrain and 40 × 6 = 240 m of walking by each subject across all terrains.

2.5. Segmentation of Signals into Strides

Human gait is characterized as “the succession of phases separated by foot-strikes (the foot is in contact with the ground) and takeoffs (the foot leaves the ground)” [39]. A stride is defined as a complete cycle from the heel strike of one foot to the next heel strike of the same foot [40]. The low-level signals of IMUs are known to be noisy, and noise suppression is necessary to minimize the effect of the noise and correctly segment the raw signals into strides. To reduce the noise, we employed a moving average filter [41,42], a well-known method of noise suppression, with a window size of nine samples to smooth the raw signal. The smoothed signal can be decomposed into single steps using local maxima (peaks) or local minima (valleys) [43,44,45]. We detected the local minima in the y-axis of the acceleration signal only, as it is the gravitational axis. The same valleys were used to segment all of the 3D accelerations and 3D angular velocities. This technique ensured that the length of the segmented step remained the same for all 6D components. For strides, we segmented the signal only when the same foot struck the ground twice consecutively, i.e., over two consecutive steps (Figure 3).
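For illustration, the valley-based stride segmentation described above can be sketched in a few lines of Python. This is a minimal sketch assuming the 6D signals are available as NumPy arrays sampled at 75 Hz; the function names and the 0.3 s minimum valley spacing are our own illustrative choices, not details taken from the original implementation.

```python
import numpy as np
from scipy.signal import find_peaks

def moving_average(signal, window=9):
    """Smooth a 1D signal with a simple moving average filter."""
    kernel = np.ones(window) / window
    return np.convolve(signal, kernel, mode="same")

def segment_strides(acc, gyr, fs=75, window=9):
    """Segment 6D inertial data (acc, gyr: N x 3 arrays) into strides.

    Valleys are detected on the smoothed y-axis (gravitational axis) of
    the acceleration; every second valley closes one stride (two steps).
    """
    acc_y = moving_average(acc[:, 1], window)
    # Valleys are peaks of the negated signal; a minimum spacing of
    # ~0.3 s between steps suppresses spurious minima (assumption).
    valleys, _ = find_peaks(-acc_y, distance=int(0.3 * fs))
    strides = []
    # Pair valley i with valley i+2 so each segment spans two steps,
    # i.e., two consecutive ground contacts of the same foot.
    for start, end in zip(valleys[:-2:2], valleys[2::2]):
        strides.append((acc[start:end], gyr[start:end]))
    return strides
```

Using the same valley indices for all six components keeps the segment lengths identical across the 3D accelerations and 3D angular velocities, as required above.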

2.6. Features Extraction

Choosing a set of good features is crucial for the success of a classifier. In this regard, we computed 33 unique features per stride. When computed from the 6D low-level signals, this resulted in 194 features per stride (32 × 6 = 192 plus 1 × 2 = 2; the signal magnitude area was computed for the x-axis of acceleration and angular velocities only, while the rest of the features were computed for all 6D components). The 194 tempo-spectral features from each stride are shown in Table 5. The temporal features include mean, standard deviation, median, maximum, minimum, root mean square, signal magnitude area, index of maximum, index of minimum, power, entropy, energy, skewness, kurtosis, interquartile range, mean absolute deviation, jerk, zero crossing rate and max–min difference.
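To make the feature definitions concrete, the sketch below computes a representative subset of the temporal features of Table 5 for a single-axis stride segment. It is illustrative only; the exact definitions used in the paper (e.g., of power, energy, or jerk) may differ.

```python
import numpy as np
from scipy.stats import iqr, kurtosis, skew

def temporal_features(stride):
    """Compute a subset of the temporal features of Table 5 for one
    1D stride signal (e.g., one axis of acceleration)."""
    return {
        "mean": np.mean(stride),
        "std": np.std(stride),
        "median": np.median(stride),
        "rms": np.sqrt(np.mean(stride ** 2)),
        "max_min_diff": np.max(stride) - np.min(stride),
        "iqr": iqr(stride),
        # Mean absolute deviation around the mean.
        "mad": np.mean(np.abs(stride - np.mean(stride))),
        "skewness": skew(stride),
        "kurtosis": kurtosis(stride),
        # Fraction of samples at which the sign of the signal changes.
        "zero_crossing_rate": np.mean(np.abs(np.diff(np.sign(stride))) > 0),
    }
```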
We computed the Discrete Fourier Transform (DFT) of the raw signal. For an input vector x of length N, the DFT is a vector X of length N given by the formula:
X_k = \sum_{n=1}^{N} x_n \, e^{-2\pi j (k-1)(n-1)/N}, \qquad 1 \le k \le N \qquad (1)
Using Equation (1), we computed the spectral features, which include the mean, maximum value, magnitude, band power of the signal, energy and nine FFT coefficients.
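The spectral features can be obtained analogously from the DFT of Equation (1). The following sketch uses NumPy's FFT; the normalization of band power and energy, and taking the first nine magnitude coefficients, are our own assumptions.

```python
import numpy as np

def spectral_features(stride, n_coeffs=9):
    """Compute a subset of the spectral features of Table 5 for one
    1D stride signal via the DFT of Equation (1)."""
    spectrum = np.fft.fft(stride)
    magnitude = np.abs(spectrum)
    features = {
        "spectral_mean": np.mean(magnitude),
        "spectral_max": np.max(magnitude),
        # By Parseval's theorem, sum(|X|^2)/N^2 equals the mean squared
        # signal value, i.e., the band power over the full band.
        "band_power": np.sum(magnitude ** 2) / len(stride) ** 2,
        # Signal energy: sum(|x|^2) = sum(|X|^2)/N.
        "energy": np.sum(magnitude ** 2) / len(stride),
    }
    # The first n_coeffs spectral magnitude coefficients as features.
    for i in range(n_coeffs):
        features[f"fft_coeff_{i + 1}"] = magnitude[i]
    return features
```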

2.7. Classification

The primary goal of the study was the prediction of the ground surface, or terrain, using hand-crafted features. For this purpose, we used the Scikit-learn library (version 0.19.2), an open-source machine learning library for Python [46]. We used two supervised learning algorithms, i.e., Random Forest (RF) and Support Vector Machine (SVM), and trained them using the feature set discussed in the previous section. We used grid search to find the best parameters for the classification of terrains. For RF, the model was trained using the following parameters: number_of_trees = 500, criterion = “gini”, max_features = “auto”, and max_depth = “none”. For SVM, the following parameters were used for training the model: kernel = “RBF”, C = 10, and gamma = 0.01.
Since the sensors were placed at two different locations on the subjects’ bodies, each model was trained and validated with a feature set computed from the data collected through the respective sensor. Furthermore, five different feature sets for each stride were computed against each dataset to test the prediction accuracy of each model under variable feature sets. The numbers of features computed for each stride under the five feature sets are as follows: (1) all features (tempo-spectral): 194 features; (2) temporal features: 110 features; (3) spectral features: 84 features; (4) 3D accelerations (tempo-spectral): 97 features; and (5) 3D angular velocities (tempo-spectral): 97 features. The k-fold cross-validation model [47], which is a well-known cross-validation method, was used to measure the performance of each predictor; the chosen value of k was 10.
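Putting the pieces together, a minimal sketch of this training and validation setup with scikit-learn is given below. The feature matrix X and labels y are stand-ins (random data for illustration), while the classifier hyperparameters and the 10-fold cross-validation follow the description above; note that max_features = “auto” from version 0.19.2 corresponds to “sqrt” for classifiers in current scikit-learn releases.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

# X: (n_strides, 194) feature matrix, y: terrain labels.
# Random placeholders here; real features come from the extraction step.
rng = np.random.default_rng(0)
X = rng.normal(size=(600, 194))
y = rng.integers(0, 6, size=600)

# Classifiers with the hyperparameters reported in Section 2.7
# ("auto" in scikit-learn 0.19.2 equals "sqrt" for classification).
rf = RandomForestClassifier(n_estimators=500, criterion="gini",
                            max_features="sqrt", max_depth=None)
svm = SVC(kernel="rbf", C=10, gamma=0.01)

# 10-fold cross-validation, as used for all reported results.
for name, clf in [("RF", rf), ("SVM", svm)]:
    scores = cross_val_score(clf, X, y, cv=10)
    print(f"{name}: {scores.mean():.3f} (+/- {scores.std():.3f})")
```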

3. Results

The results were computed and are discussed under five different categories: (1) binary classification; (2) ternary classification; (3) quaternary classification; (4) quinary classification; and (5) senary classification. The results of the ternary, quaternary and quinary classifications are presented in Appendix A.1, Appendix A.2 and Appendix A.3, respectively.

3.1. Binary Classification

In the binary classification, the terrains were grouped into pairs or binary classes: indoor (carpet, concrete, and tiles) and outdoor (grass, asphalt, and soil); hard (concrete, tiles, and asphalt) and soft (carpet, grass, and soil); and pair-wise (one-to-one). The classification results of all three cases are presented in the following subsections.

3.1.1. Indoor–Outdoor Classification

In the case of indoor–outdoor classification, the best classification accuracy of 97.48% (±0.29) was achieved with SVM for the lower back sensor, followed by the chest sensor at 97.11% (±0.32), when trained with all features (194 features per stride). A comparable classification rate on all features was observed with RF, where the best accuracy was achieved with the lower back sensor at 96.52% (±0.34), followed by the chest sensor at 95.72% (±0.41). The precision, recall, and f1-score remained above 95% in all cases. Figure 4 shows the confusion matrices of indoor–outdoor classification computed with SVM and RF using the different feature sets computed from the lower back sensor data. Detailed classification results for the different sensor placements against the different feature sets are presented in Table 6. In general, for both sensor placements, models trained with all features produced the highest classification accuracies.

3.1.2. Hard–Soft Classification

The hard–soft terrain classification is a binary classification case in which the surfaces were categorized into hard surfaces (concrete floor, asphalt, and tiles) and soft surfaces (carpet, grass, and soil). The best classification accuracy of 92.08% (±0.25) was observed with RF for the lower back sensor when trained on all features. On the same feature set and the same lower back sensor, the SVM produced an average classification accuracy of 90.64% (±0.33). For the chest, the RF achieved a classification accuracy of 89.08% (±0.28), whereas the SVM achieved a classification accuracy of 89.57% (±0.31). Figure 5 shows the confusion matrices computed by RF and SVM on all features using the lower back sensor. Detailed classification results for the different sensor placements against the different feature sets are presented in Table 6. A trend similar to the indoor–outdoor classification was observed, where the models trained with all features produced the highest classification accuracies for both sensor placements.

3.1.3. Pair-Wise Classification

The goal of pair-wise classification was to compute and compare classification accuracies of different terrains against each other as pairs.
The best classification accuracies were achieved with all features computed from the lower back sensor data and trained with SVM, as shown in Figure 6. The best pair-wise classification accuracies of above 98% were observed between carpet and asphalt, carpet and soil, asphalt and tiles, and soil and tiles. The lowest pair-wise classification accuracy of 91% was seen between grass and soil. The precision, recall, and f1-score remained above 93% in all cases. Detailed pair-wise classification results are presented as bar graphs in Figure 7 and in Table 6.

3.2. Senary Classification

The senary classification is the most significant case as all surfaces were compared with each other. The results are presented in Table 7. The trend remained similar to all of the previous cases where the best average classification accuracies were achieved with the features set computed from the lower back mounted sensor. The RF produced an average classification accuracy of 88.7% (±0.4), whereas SVM produced an average classification accuracy of 87.5% (±0.43). For the chest sensor, the average classification accuracies remained at 86% for both SVM and RF. The precision, recall, and f1-score remained above 84% in all of the cases.
Figure 8 shows the confusion matrices computed with SVM and RF for the lower back sensor for the different feature sets. The best classification accuracies of 89% and 87.5% were achieved with temporal features for RF and with all features for SVM, respectively. The feature set computed from the 3D angular velocities produced the lowest classification accuracies for both RF and SVM. In the confusion matrices computed on all features, the highest confusion was mostly observed between soft and hard surfaces, e.g., carpet and concrete (RF: 8.3%, SVM: 6.9%), carpet and tiles (RF: 7.8%, SVM: 8.4%), and asphalt and soil (RF: 3.9%, SVM: 4.0%). Interestingly, high confusion was also observed between grass and soil (RF: 7.9%, SVM: 9.2%), which was mainly due to the natural similarities between soil and grass.
The radar graph in Figure 8b shows the classification accuracies of senary classification computed with all features using RF and SVM. It is observable that the highest classification accuracy was achieved for carpet, whereas the lowest classification accuracy was achieved for soil (because of confusion between soil and grass).

4. Discussion

4.1. Summary of Findings

The goal of this work was to classify the type of terrain from the inertial data of strides collected with a single body-mounted IMU. 6D accelerations and angular velocities were recorded with two Android-based smartphones (on-board MPU-6500 IMUs). The sensors were mounted at two different body locations, i.e., the chest and the lower back. A total of 40 volunteers participated in the data collection sessions, and their gait data were recorded on six different terrains, namely carpet, concrete floor, grass, asphalt, soil, and tiles. A valley detection method was used to segment the low-level 6D gait signals into strides. A total of 194 tempo-spectral features were computed for each stride, comprising 110 time domain features and 84 spectral domain features. Random Forest and Support Vector Machine were chosen as predictors, and 10-fold cross-validation was used as the validation model.
Figure 9 shows the comparison of classification accuracies achieved with SVM and RF for each sensor location. The highest classification accuracy was achieved with the lower back sensor data, followed by the chest data. Furthermore, SVM performed better with fewer classes, i.e., indoor–outdoor, hard–soft, binary and ternary classification, whereas RF performed better with more classes, i.e., quaternary, quinary and senary classification. This holds for the feature sets computed from the data collected with the sensors attached at both the lower back and chest positions. The gradual drop in classification accuracy as the number of classes increases was due to the natural similarities between different types of surfaces, which caused a significant number of samples to be misclassified.

4.2. Comparison with Existing Approaches

Terrain classification has been extensively studied in the domain of humanoid robots and autonomous mobile vehicles; however, we have found only a few studies that focus on terrain classification using the inertial data of human gait. In the experimental setup of Hu et al. [33], an IMU was mounted on the L5 vertebra of the subjects and the gait data of 35 subjects were recorded on two different types of surfaces: flat surfaces and uneven bricks. A deep learning model with long short-term memory units was used for training and prediction, achieving a surface classification accuracy of 96.3%. Anantrasirichai et al. [11] used visual features captured from body-mounted cameras during human locomotion. They considered three different classes of terrains, i.e., hard surfaces, soft surfaces, and unwalkable surfaces, and reported a classification accuracy of 82%. Diaz et al. [34] proposed a terrain identification and surface inclination estimation system for a prosthetic leg using visual and inertial sensors. They recorded data on six different surfaces and achieved an average classification accuracy of 86%. Libby et al. [48] performed acoustics-based terrain classification for robots using the sound of the vehicle–terrain interaction and reported a classification rate of 92%; their sliding-window feature extraction is, however, less time-efficient than the proposed method. In comparison to these approaches, the proposed approach was tested on six different terrains, namely carpet, concrete floor, grass, asphalt, soil, and tiles, and its classification accuracies outperformed the others in all cases, as shown in Table 8.

4.3. Limitations

In our experiments, we collected data on only six terrains, namely carpet, concrete floor, grass, asphalt, soil, and tiles. However, there exist many other practical terrains, such as pebbles, sand, gravel, exercise mats, and footpaths, which should be considered in data collection; this would help in analyzing the behavior of the proposed approach under a broader spectrum of conditions. Similarly, our database consists of only 40 subjects with a male-to-female ratio of 30:10, which is unbalanced, as only 25% of the population is female. Extending the database, including more female participants to balance the population, and testing the proposed approach on the extended dataset is another important direction for future work. Another limitation of the proposed approach is the placement of the sensors, i.e., the chest and the lower back. There exist many other practical sensor placement locations, such as wrists, side and back pockets, and chest pockets, that should also be considered; this would enable terrain classification in a ubiquitous manner. Another important aspect is that, although the data from both sensors were collected simultaneously, there was no stringent requirement for synchronization, as the two time series were independent of each other. Human intervention was only needed in the pre-processing step for training data preparation, to retain the portions of the time series in which the subject was walking. Since we were interested in terrain classification using single strides, these strides were extracted automatically from the time series data using local minima. For practical applications, automatic segmentation of strides and decision making (for terrain classification) using sequential analysis is indeed a direction of future work.

5. Conclusions

The novelty of this work lies in finding a set of hand-crafted features from the inertial data of human gait that can be used to train classifiers and predict different types of terrains. We showed that a single stride has enough information encoded in it to predict terrains. We also showed that the set of hand-crafted features can be computed from sensors at either of two body locations, i.e., the lower back or the chest. The highest classification accuracy of 97% was achieved for indoor–outdoor classification, whereas the lowest classification accuracy of 89% was achieved for senary classification. The results also show that, for the lower back and chest sensors, SVM performed better than RF in the binary classification case only. We also presented a comparison of different feature sets computed from the tempo-spectral domain, where the set of all features performed better than any other feature set. Moreover, our results show that smartphones can be used for data collection and terrain classification. Possible applications of the proposed approach include monitoring the condition of sidewalks to identify areas that need renovation, terrain awareness systems to guide visually impaired persons [50], and the production of digital terrain models using body-mounted IMUs [51].

Author Contributions

Conceptualization, M.Z.U.H.H. and Q.R.; methodology, M.Z.U.H.H., Q.R. and M.H.; software, M.Z.U.H.H., Q.R., M.H. and M.S.; validation, M.Z.U.H.H. and M.S.; writing—original draft preparation, M.Z.U.H.H., Q.R., M.H.; and writing—review and editing, M.Z.U.H.H., Q.R., M.H. and M.S.

Funding

This research received no external funding.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A

The results of ternary, quaternary and quinary classifications are discussed in the following subsections.

Appendix A.1. Ternary Classification

For ternary classification, any three of the six surfaces were compared with each other and the classification accuracies were computed. This resulted in a total of 20 ternary combinations, as shown in the parallel coordinate chart (Figure A1). In most cases, the best average classification accuracies of around 96% were achieved with the feature set computed from the lower back mounted sensor, and both SVM and RF produced comparable results. The lowest average classification accuracies, of around 91%, were observed with the feature set computed from the chest sensor.
Figure A2 shows the confusion matrices of all 20 ternary terrain combinations using SVM on all features computed from the lower back sensor data. The best ternary classification accuracy of above 96% was achieved among carpet, asphalt, and soil, whereas the lowest accuracy of 89% was observed among carpet, concrete, and tiles, as well as among grass, asphalt, and soil. In the case of RF, the average classification accuracy for the lower back sensor remained above 93% when trained with all features. For the chest sensor, both SVM and RF produced an average classification accuracy of 92%. The precision, recall, and f1-score remained above 89% in all cases. Detailed results are presented in Table A1.
Figure A1. Parallel coordinate chart of ternary classification. The results were obtained with all features set (194 features) for each classifier and for each sensor location.
Figure A2. Confusion matrices of ternary classification with lower back sensor. All results were obtained with all features set (194 features) using SVM classifier. Classes: CT_Ca = carpet, CT_Co = concrete, CT_Gr = grass, CT_As = asphalt, CT_So = soil, and CT_Ti = tiles.
Table A1. Classification results of ternary and quaternary classification for each sensor location. The classification results are presented for each feature set (values are SVM/RF), and statistical scores are presented for the set of all features only. Quinary results are given in Table 7.

| Category | Sensor | All Features | Temporal | Spectral | 3D Accelerations | 3D Angular Velocities | Precision | Recall | f1-Score |
|---|---|---|---|---|---|---|---|---|---|
| Ternary | Lower Back | 93.77/93.55 | 93.31/93.87 | 89.48/89.23 | 90.49/90.11 | 85.48/88.74 | 93.8/94.0 | 93.7/93.9 | 93.7/93.9 |
| Ternary | Chest | 92.78/92.29 | 89.56/92.99 | 89.73/88.40 | 91.91/90.78 | 84.69/82.63 | 93.0/92.5 | 93.0/92.4 | 93.0/92.4 |
| Quaternary | Lower Back | 91.52/91.93 | 87.97/91.97 | 85.71/86.19 | 87.03/88.20 | 80.98/86.00 | 92.0/91.8 | 91.9/91.6 | 91.9/91.6 |
| Quaternary | Chest | 90.08/89.93 | 85.70/90.82 | 84.40/82.43 | 87.35/89.02 | 77.21/76.77 | 90.3/89.6 | 90.2/89.6 | 90.2/89.6 |

Appendix A.2. Quaternary Classification

The objective of quaternary classification was to compute and compare the classification accuracies of any four of the six surfaces against each other. In this regard, a total of 15 quaternary combinations were created, and the results are presented in a parallel coordinate chart (Figure A3). In general, a trend similar to ternary terrain classification was observed, where the best average classification accuracies were achieved with the feature set computed from the lower back mounted sensor for both SVM and RF classifiers. The best classification accuracy, however, dropped from 96% to 94%. The lowest average classification accuracies were observed with the feature set computed from the chest sensor.
Figure A4 presents the confusion matrices for all 15 quaternary terrain combinations using SVM on all features computed from the lower back sensor data. The best quaternary classification accuracy of around 94% was achieved among concrete, grass, soil, and tiles, whereas the lowest accuracy of around 90.15% was observed among carpet, grass, soil, and tiles. The average classification accuracy for the lower back sensor remained above 91% for both SVM and RF when trained with all features, as shown in Table A1. For the chest sensor, both SVM and RF produced an average classification accuracy of 90%. The precision, recall, and f1-score remained above 86% in all cases.
Figure A3. Parallel coordinate chart of quaternary classification. The results were obtained with all features set (194 features) for each classifier and for each sensor location.
Figure A4. Confusion matrices of quaternary classification with lower back sensor. All results were obtained with all features set (194 features) using SVM classifier. Classes: CT_Ca = carpet, CT_Co = concrete, CT_Gr = grass, CT_As = asphalt, CT_So = soil, and CT_Ti = tiles.

Appendix A.3. Quinary Classification

The quinary classification was the case where any five of the six terrains were compared against each other. Since six different terrains were considered for data collection, five quinary combinations were created. The classification results are presented in Figure A5 as a parallel coordinate chart. Similar to the ternary and quaternary classification cases, the best average classification accuracies were achieved with all features computed from the lower back mounted sensor. The classification accuracies remained above 91% for both SVM and RF. The lowest average classification accuracies were observed with the feature set computed from the chest sensor.
Figure A6 presents the confusion matrices of all five quinary terrain combinations using SVM on all features computed from the lower back sensor data. The best classification accuracy of around 90.5% was achieved among concrete, grass, asphalt, soil, and tiles, whereas the lowest accuracy of around 88% was observed among carpet, concrete, grass, soil, and tiles. The average classification accuracy for the lower back sensor remained at 90.24% and 89.67% for RF and SVM, respectively, when trained with all features, as shown in Table 7. For the chest sensor, both SVM and RF produced an average classification accuracy of 87%. The precision, recall, and f1-score remained above 83% in all cases.
Figure A5. Parallel coordinate chart of quinary classification. The results were obtained with all features set (194 features) for each classifier and for each sensor location.
Figure A6. Confusion matrices of quinary classification with lower back sensor. All results were obtained with all features set (194 features) using SVM classifier. Classes: CT_Ca = carpet, CT_Co = concrete, CT_Gr = grass, CT_As = asphalt, CT_So = soil, and CT_Ti = tiles.

References

  1. Manduchi, R.; Castano, A.; Talukder, A.; Matthies, L. Obstacle Detection and Terrain Classification for Autonomous Off-Road Navigation. Auton. Robot. 2005, 18, 81–102. [Google Scholar] [CrossRef] [Green Version]
  2. Bedi, P.; Sinha, A.; Agarwal, S.; Awasthi, A.; Prasad, G.; Saini, D. Influence of terrain on modern tactical combat: Trust-based recommender system. Def. Sci. J. 2010, 60, 405–411. [Google Scholar] [CrossRef]
  3. Wu, X.A.; Huh, T.M.; Mukherjee, R.; Cutkosky, M. Integrated ground reaction force sensing and terrain classification for small legged robots. IEEE Robot. Autom. Lett. 2016, 1, 1125–1132. [Google Scholar] [CrossRef]
  4. Giguere, P.; Dudek, G. A simple tactile probe for surface identification by mobile robots. IEEE Trans. Robot. 2011, 27, 534–544. [Google Scholar] [CrossRef]
  5. Belter, D.; Skrzypczyński, P. Rough terrain mapping and classification for foothold selection in a walking robot. J. Field Robot. 2011, 28, 497–528. [Google Scholar] [CrossRef]
  6. Dornik, A.; Drăguţ, L.; Urdea, P. Classification of soil types using geographic object-based image analysis and Random Forest. Pedosphere 2017. [Google Scholar] [CrossRef]
  7. Laible, S.; Khan, Y.N.; Bohlmann, K.; Zell, A. 3D lidar-and camera-based terrain classification under different lighting conditions. In Autonomous Mobile Systems; Springer: Berlin/Heidelberg, Germany, 2012; pp. 21–29. [Google Scholar]
  8. Schilling, F.; Chen, X.; Folkesson, J.; Jensfelt, P. Geometric and visual terrain classification for autonomous mobile navigation. In Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems, Vancouver, BC, Canada, 24–28 September 2017; pp. 2678–2684. [Google Scholar]
  9. Ma, X.; Hao, S.; Cheng, Y. Terrain classification of aerial image based on low-rank recovery and sparse representation. In Proceedings of the IEEE 20th International Conference on Information Fusion, Xi’an, China, 10–13 July 2017; pp. 1–6. [Google Scholar]
  10. Weszka, J.S.; Dyer, C.R.; Rosenfeld, A. A comparative study of texture measures for terrain classification. IEEE Trans. Syst. Man Cybern. 1976. [Google Scholar] [CrossRef]
  11. Anantrasirichai, N.; Burn, J.; Bull, D. Terrain classification from body-mounted cameras during human locomotion. IEEE Trans. Cybern. 2015, 45, 2249–2260. [Google Scholar] [CrossRef]
  12. Peterson, J.; Chaudhry, H.; Abdelatty, K.; Bird, J.; Kochersberger, K. Online Aerial Terrain Mapping for Ground Robot Navigation. Sensors 2018, 18, 630. [Google Scholar] [CrossRef]
  13. Ojeda, L.; Borenstein, J.; Witus, G.; Karlsen, R. Terrain characterization and classification with a mobile robot. J. Field Robot. 2006, 23, 103–122. [Google Scholar] [CrossRef] [Green Version]
  14. Zhang, H.; Wu, R.; Li, C.; Zang, X.; Zhang, X.; Jin, H.; Zhao, J. A force-sensing system on legs for biomimetic hexapod robots interacting with unstructured terrain. Sensors 2017, 17, 1514. [Google Scholar] [CrossRef] [PubMed]
  15. Valada, A.; Spinello, L.; Burgard, W. Deep feature learning for acoustics-based terrain classification. In Robotics Research; Springer: Cham, Switzerland, 2018; pp. 21–37. [Google Scholar]
  16. Rothrock, B.; Kennedy, R.; Cunningham, C.; Papon, J.; Heverly, M.; Ono, M. Spoc: Deep learning-based terrain classification for mars rover missions. AIAA SPACE 2016. [Google Scholar] [CrossRef]
  17. Brooks, C.A.; Iagnemma, K. Vibration-based terrain classification for planetary exploration rovers. IEEE Trans. Robot. 2005, 21, 1185–1191. [Google Scholar] [CrossRef]
  18. Zhu, Y.; Jia, C.; Ma, C.; Liu, Q. SURF-BRISK–Based Image Infilling Method for Terrain Classification of a Legged Robot. Appl. Sci. 2019, 9, 1779. [Google Scholar] [CrossRef]
  19. DuPont, E.M.; Moore, C.A.; Collins, E.G.; Coyle, E. Frequency response method for terrain classification in autonomous ground vehicles. Auton. Robot. 2008, 24, 337–347. [Google Scholar] [CrossRef]
  20. Lu, L.; Ordonez, C.; Collins, E.G.; DuPont, E.M. Terrain surface classification for autonomous ground vehicles using a 2D laser stripe-based structured light sensor. In Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems, Vancouver, BC, Canada, 24–28 September 2017; pp. 2174–2181. [Google Scholar] [CrossRef]
  21. Delmerico, J.; Giusti, A.; Mueggler, E.; Gambardella, L.M.; Scaramuzza, D. “On-the-spot training” for terrain classification in autonomous air-ground collaborative teams. In International Symposium on Experimental Robotics; Springer: Cham, Switzerland, 2016; pp. 574–585. [Google Scholar]
  22. Christie, J.; Kottege, N. Acoustics based terrain classification for legged robots. In Proceedings of the IEEE International Conference on Robotics and Automation (ICRA), Stockholm, Sweden, 16–21 May 2016; pp. 3596–3603. [Google Scholar]
  23. Bao, L.; Intille, S.S. Activity recognition from user-annotated acceleration data. In International Conference on Pervasive Computing; Springer: Berlin/Heidelberg, Germany, 2004; pp. 1–17. [Google Scholar]
  24. Riaz, Q.; Vögele, A.; Krüger, B.; Weber, A. One small step for a man: Estimation of gender, age and height from recordings of one step by a single inertial sensor. Sensors 2015, 15, 31999–32019. [Google Scholar] [CrossRef] [PubMed]
  25. Zhang, K.; Gao, C.; Guo, L.; Sun, M.; Yuan, X.; Han, T.X.; Zhao, Z.; Li, B. Age Group and Gender Estimation in the Wild with Deep RoR Architecture. IEEE Access 2017, 5, 22492–22503. [Google Scholar] [CrossRef]
  26. Flora, J.; Lochtefeld, D.; Bruening, D.; Iftekharuddin, K. Improved gender classification using non pathological gait kinematics in full-motion video. IEEE Trans. Hum.-Mach. Syst. 2015, 45, 304–314. [Google Scholar] [CrossRef]
  27. Janssen, D.; Schöllhorn, W.I.; Lubienetzki, J.; Fölling, K.; Kokenge, H.; Davids, K. Recognition of emotions in gait patterns by means of artificial neural nets. J. Nonverbal Behav. 2008, 32, 79–92. [Google Scholar] [CrossRef]
  28. Khamsemanan, N.; Nattee, C.; Jianwattanapaisarn, N. Human Identification From Freestyle Walks Using Posture-Based Gait Feature. IEEE Trans. Inf. Forensics Secur. 2018, 13, 119–128. [Google Scholar] [CrossRef]
  29. Wu, Z.; Huang, Y.; Wang, L.; Wang, X.; Tan, T. A comprehensive study on cross-view gait based human identification with deep cnns. IEEE Trans. Pattern Anal. Mach. Intell. 2017, 39, 209–226. [Google Scholar] [CrossRef] [PubMed]
  30. Liew, C.S.; Wah, T.Y.; Shuja, J.; Daghighi, B. Mining personal data using smartphones and wearable devices: A survey. Sensors 2015, 15, 4430–4469. [Google Scholar]
  31. Son, D.; Lee, J.; Qiao, S.; Ghaffari, R.; Kim, J.; Lee, J.E.; Song, C.; Kim, S.J.; Lee, D.J. Multifunctional wearable devices for diagnosis and therapy of movement disorders. Nat. Nanotechnol. 2014, 9, 397. [Google Scholar] [CrossRef] [PubMed]
  32. Riaz, Q.; Tao, G.; Krüger, B.; Weber, A. Motion reconstruction using very few accelerometers and ground contacts. Graph. Model. 2015, 79, 23–38. [Google Scholar] [CrossRef]
  33. Hu, B.; Dixon, P.; Jacobs, J.; Dennerlein, J.; Schiffman, J. Machine learning algorithms based on signals from a single wearable inertial sensor can detect surface-and age-related differences in walking. J. Biomech. 2018, 71, 37–42. [Google Scholar] [CrossRef] [PubMed]
  34. Diaz, J.P.; da Silva, R.L.; Zhong, B.; Huang, H.H.; Lobaton, E. Visual Terrain Identification and Surface Inclination Estimation for Improving Human Locomotion with a Lower-Limb Prosthetic. In Proceedings of the 40th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), Honolulu, HI, USA, 18–21 July 2018; pp. 1817–1820. [Google Scholar] [CrossRef]
  35. Riaz, Q.; Hashmi, M.Z.U.H.; Hashmi, M.A.; Shahzad, M.; Errami, H.; Weber, A. Move Your Body: Age Estimation Based on Chest Movement During Normal Walk. IEEE Access 2019, 7, 28510–28524. [Google Scholar] [CrossRef]
  36. Steven Eyobu, O.; Han, D. Feature representation and data augmentation for human activity classification based on wearable IMU sensor data using a deep LSTM neural network. Sensors 2018, 18, 2892. [Google Scholar] [CrossRef] [PubMed]
  37. Sztyler, T.; Stuckenschmidt, H. On-body localization of wearable devices: An investigation of position-aware activity recognition. In Proceedings of the 2016 IEEE International Conference on Pervasive Computing and Communications (PerCom), Sydney, Australia, 14–19 March 2016; pp. 1–9. [Google Scholar]
  38. Chung, S.; Lim, J.; Noh, K.J.; Kim, G.; Jeong, H. Sensor Data Acquisition and Multimodal Sensor Fusion for Human Activity Recognition Using Deep Learning. Sensors 2019, 19, 1716. [Google Scholar] [CrossRef]
  39. Multon, F.; France, L.; Cani-Gascuel, M.P.; Debunne, G. Computer animation of human walking: A survey. J. Vis. Comput. Animat. 1999, 10, 39–54. [Google Scholar] [CrossRef]
  40. Boenig, D.D. Evaluation of a clinical method of gait analysis. Phys. Ther. 1977, 57, 795–798. [Google Scholar] [CrossRef]
  41. Azami, H.; Mohammadi, K.; Bozorgtabar, B. An improved signal segmentation using moving average and Savitzky-Golay filter. J. Signal Inf. Process. 2012, 3, 39. [Google Scholar] [CrossRef]
  42. Guiñón, J.L.; Ortega, E.; García-Antón, J.; Pérez-Herranz, V. Moving average and Savitzki-Golay smoothing filters using Mathcad. In Proceedings of the International Conference on Engineering and Education 2007, Coimbra, Portugal, 3–7 September 2007; pp. 30–39. [Google Scholar]
  43. Li, F.; Zhao, C.; Ding, G.; Gong, J.; Liu, C.; Zhao, F. A Reliable and Accurate Indoor Localization Method Using Phone Inertial Sensors. In Proceedings of the ACM Conference on Ubiquitous Computing, Pittsburgh, PA, USA, 5–8 September 2012; ACM: New York, NY, USA, 2012; pp. 421–430. [Google Scholar] [CrossRef]
  44. Derawi, M.O.; Nickel, C.; Bours, P.; Busch, C. Unobtrusive User-Authentication on Mobile Phones Using Biometric Gait Recognition. In Proceedings of the Sixth International Conference on Intelligent Information Hiding and Multimedia Signal Processing, Darmstadt, Germany, 15–17 October 2010; pp. 306–311. [Google Scholar] [CrossRef]
  45. Zijlstra, W. Assessment of spatio-temporal parameters during unconstrained walking. Eur. J. Appl. Physiol. 2004, 92, 39–44. [Google Scholar] [CrossRef] [PubMed]
  46. Pedregosa, F.; Varoquaux, G.; Gramfort, A.; Michel, V.; Thirion, B.; Grisel, O.; Blondel, M.; Prettenhofer, P. Scikit-learn: Machine learning in Python. J. Mach. Learn. Res. 2011, 12, 2825–2830. [Google Scholar]
  47. Bengio, Y.; Grandvalet, Y. No unbiased estimator of the variance of k-fold cross-validation. J. Mach. Learn. Res. 2004, 5, 1089–1105. [Google Scholar]
  48. Libby, J.; Stentz, A.J. Using sound to classify vehicle-terrain interactions in outdoor environments. In Proceedings of the IEEE International Conference on Robotics and Automation (ICRA), Saint Paul, MN, USA, 14–18 May 2012; pp. 3559–3566. [Google Scholar]
  49. Anthony, D.; Basha, E.; Ostdiek, J.; Ore, J.P.; Detweiler, C. Surface classification for sensor deployment from UAV landings. In Proceedings of the IEEE International Conference on Robotics and Automation (ICRA), Seattle, WA, USA, 26–30 May 2015; pp. 3464–3470. [Google Scholar]
  50. Yang, K.; Wang, K.; Bergasa, L.; Romera, E.; Hu, W.; Sun, D.; Sun, J.; Cheng, R.; Chen, T.; López, E. Unifying terrain awareness for the visually impaired through real-time semantic segmentation. Sensors 2018, 18, 1506. [Google Scholar] [CrossRef] [PubMed]
  51. Massad, I.; Dalyot, S. Towards the Crowdsourcing of Massive Smartphone Assisted-GPS Sensor Ground Observations for the Production of Digital Terrain Models. Sensors 2018, 18, 898. [Google Scholar] [CrossRef]
Figure 1. (a) Different types of terrains used for data collection; (b) strides of accelerometer gravitational axis (y-axis); and (c) spectrogram of the signals.
Figure 2. Placement of sensors at different body parts: (a) chest (smart phone); and (b) lower back (smart phone).
Figure 3. General work flow of terrain classification system. The input signals were segmented into strides and tempo-spectral features were extracted. Two different types of classifiers, i.e., Support Vector Machine (SVM) and Random Forest (RF), were trained to predict terrains.
Figure 4. Confusion matrices of indoor–outdoor terrain classification computed with lower back sensor. Each column represents a feature set: (a) all features (194 features); (b) time (110 features); (c) frequency (84 features); (d) accelerations (97 features); and (e) angular velocities (97 features). Each row represents a classifier (from top to bottom): RF and SVM. Classes: CT_I = indoor (carpet, concrete, tiles) and CT_O = outdoor (grass, asphalt, soil).
Figure 5. Confusion matrices of hard–soft terrain classification computed with lower back sensor. Each column represents a feature set: (a) all features (194 features); (b) time (110 features); (c) frequency (84 features); (d) accelerations (97 features); and (e) angular velocities (97 features). Each row represents a classifier (from top to bottom): RF and SVM. Classes: CT_S = soft (carpet, grass, and soil) and CT_H = hard (concrete, asphalt, and tiles).
Figure 6. Confusion matrices of pair-wise classification with lower back sensor. All results were obtained with all features set (194 features) using SVM classifier. Classes: CT_Ca = carpet, CT_Co = concrete, CT_Gr = grass, CT_As = asphalt, CT_So = soil, and CT_Ti = tiles.
Figure 7. Bar chart of pair-wise classification. The results were obtained with all features set (194 features) for each classifier and for each sensor location.
Figure 8. (a) Confusion matrices of senary classification with lower back sensor. All results were obtained with all features set (194 features) using SVM classifier. Classes: CT_Ca = carpet, CT_Co = concrete, CT_Gr = grass, CT_As = asphalt, CT_So = soil, and CT_Ti = tiles. (b) Radar plot of senary classification.
Figure 9. Bar chart showing the comparison of each classifier at each sensor location on different classification tasks.
Table 1. A summary of the main approaches in terrain classification. A variety of sensors are used for data collection, including camera, microphone, inertial measurement unit (IMU), laser, tactile and ladar sensors. For classification, different types of classifiers are proposed, including Random Forest, Support Vector Machine (SVM), Sparse Representation Based Classification (SRC), Artificial Neural Network (ANN), Probabilistic Neural Network (PNN) and Long Short-Term Memory units (LSTM).

| Approach | Category | Year | Sensor | Classifier |
|---|---|---|---|---|
| Anantrasirichai [11] | Vision | 2015 | Camera | SVM |
| Dornik [6] | Vision | 2017 | Camera | Random Forest |
| Ma et al. [9] | Vision | 2017 | Camera | SRC |
| Christie et al. [22] | Acoustic | 2016 | Microphone | SVM |
| Valada et al. [15] | Acoustic | 2018 | Microphone | Deep Learning |
| Ojeda [13] | Robotics | 2006 | IMU, Motor | ANN |
| Giguere et al. [4] | Robotics | 2011 | Tactile | ANN |
| Wu et al. [3] | Robotics | 2016 | Tactile | SVM |
| Manduchi et al. [1] | Autonomous off-road driving | 2005 | Ladar & Camera | Gaussian Process |
| Lu et al. [20] | Autonomous off-road driving | 2009 | Laser | PNN |
| Hu et al. [33] | Human Gait | 2018 | IMU | LSTM |
| Diaz et al. [34] | Human Gait | 2018 | Camera, IMU | BoW model |
Table 2. Types of terrains used for data collection. The terrains were categorized into hard or soft surfaces, and the data were collected in both indoor and outdoor environments.

| Terrain | Type | Environment |
|---|---|---|
| Tiles | Hard | Indoor |
| Carpet | Soft | Indoor |
| Concrete Floor | Hard | Indoor |
| Grass | Soft | Outdoor |
| Asphalt | Hard | Outdoor |
| Soil | Soft | Outdoor |
Table 3. Characteristics of the population.

| Variable | Characteristics |
|---|---|
| Participants | 40 |
| Male:Female | 30:10 |
| Age (years, μ ± σ) | 29.2 ± 11.4 |
| Height (cm, μ ± σ) | 171.2 ± 8.2 |
| Weight (kg, μ ± σ) | 67.1 ± 13.3 |
Table 4. Technical specifications of MPU-6500 IMUs.

| | Accelerometer | Gyroscope |
|---|---|---|
| Axes | 3 | 3 |
| Noise | 0.0029 m/s²/√Hz | 0.01 deg/s/√Hz |
| Output Rate | 5 to 100 Hz | 5 to 100 Hz |
| Range | ±2 g, ±4 g, ±8 g, ±16 g | ±2000 deg/s |
| Resolution | 16 bits | 16 bits |
Table 5. List of hand-crafted features extracted from 6D accelerations and angular velocities for each stride. The features set consists of a total of 194 tempo-spectral features, of which 110 features were computed from the time domain (T) and 84 features were computed from the frequency domain (F).
Table 6. Classification results of binary classification: indoor–outdoor, hard–soft and pair-wise classification for each sensor location. The classification results are presented for each feature set (values are SVM/RF), and statistical scores are presented for the set of all features only.

| Category | Sensor | All Features | Temporal | Spectral | 3D Accelerations | 3D Angular Velocities | Precision (%) | Recall (%) | f1-Score (%) |
|---|---|---|---|---|---|---|---|---|---|
| Indoor–Outdoor | Lower Back | 97.48/96.52 | 96.73/96.93 | 94.47/93.58 | 95.80/93.83 | 92.25/93.33 | 97.2/96.6 | 97.2/96.6 | 97.1/96.6 |
| Indoor–Outdoor | Chest | 97.11/95.72 | 95.88/95.89 | 93.83/91.32 | 96.11/96.66 | 89.90/88.61 | 97.5/95.5 | 97.5/95.5 | 97.5/95.5 |
| Hard–Soft | Lower Back | 90.64/92.08 | 86.69/91.81 | 85.19/87.16 | 85.19/88.46 | 79.87/86.39 | 89.1/91.8 | 89.0/91.7 | 89.1/91.7 |
| Hard–Soft | Chest | 89.57/89.08 | 84.41/89.38 | 83.76/82.34 | 85.98/88.50 | 78.07/77.45 | 89.4/87.7 | 89.4/87.6 | 89.3/87.6 |
| Pair-wise | Lower Back | 96.53/96.00 | 95.05/96.10 | 94.04/93.13 | 94.70/94.33 | 91.56/93.62 | 96.3/95.9 | 96.3/95.9 | 96.3/95.9 |
| Pair-wise | Chest | 96.00/95.28 | 94.28/95.69 | 93.41/91.51 | 94.97/94.96 | 89.78/88.42 | 96.3/95.7 | 96.3/95.6 | 96.3/95.6 |
Table 7. Classification results of quinary and senary classification for each sensor location. The classification results are presented for each feature set (values are SVM/RF), and statistical scores are presented for the set of all features only.

| Category | Sensor | All Features | Temporal | Spectral | 3D Accelerations | 3D Angular Velocities | Precision | Recall | f1-Score |
|---|---|---|---|---|---|---|---|---|---|
| Quinary | Lower Back | 89.67/90.24 | 85.11/90.42 | 82.69/83.70 | 84.15/85.77 | 77.21/83.39 | 89.6/90.5 | 89.5/90.2 | 89.5/90.2 |
| Quinary | Chest | 87.69/87.88 | 82.25/88.96 | 80.94/79.21 | 84.28/86.75 | 72.91/72.86 | 87.3/87.0 | 87.3/87.0 | 87.3/87.0 |
| Senary | Lower Back | 87.55/88.70 | 82.51/89.10 | 80.10/81.67 | 81.46/84.08 | 74.78/81.74 | 86.9/87.2 | 86.7/86.5 | 86.8/86.6 |
| Senary | Chest | 85.78/85.99 | 79.54/87.56 | 77.91/76.76 | 81.45/84.76 | 69.13/69.67 | 85.7/86.1 | 85.7/86.2 | 85.6/86.1 |
Table 8. Comparison of classification rate with existing approaches in the literature. The first three accuracy columns are the binary cases (indoor–outdoor, hard–soft, and pair-wise).

| Approach | Category | Indoor–Outdoor | Hard–Soft | Pair-Wise | Ternary | Quaternary | Quinary | Senary |
|---|---|---|---|---|---|---|---|---|
| Anthony et al. [49] | Robot | – | 90.00 (Indoor) | – | – | 63.75 (Indoor), 62.34 (Outdoor) | – | – |
| Hu et al. (2018) [33] | IMU | 96.3 (Indoor only) | – | – | – | – | – | – |
| Anantrasirichai et al. (2015) [11] | Vision | – | – | – | 82.0 | – | – | – |
| Diaz et al. (2018) [34] | Vision, IMU | – | – | – | – | – | – | 86.0 |
| Libby et al. [48] | Acoustic | – | – | – | – | – | – | 92.00 |
| Proposed Approach | IMU | 97.48 | 92.08 | 96.53 | 83.87 | 91.97 | 90.42 | 89.10 |
