Article

Leg-Joint Angle Estimation from a Single Inertial Sensor Attached to Various Lower-Body Links during Walking Motion †

Department of Mechanical Engineering, Graduate School of Science and Engineering, Ehime University, Bunkyo-cho 3, Matsuyama 790-8577, Japan
* Author to whom correspondence should be addressed.
This paper is an extended version of our previously published paper: Alemayoh, T.T.; Lee, J.H.; Okamoto, S. LocoESIS: Deep-Learning-Based Leg-Joint Angle Estimation from a Single Pelvis Inertial Sensor. In Proceedings of the 2022 9th IEEE RAS/EMBS International Conference for Biomedical Robotics and Biomechatronics (BioRob), Seoul, Republic of Korea, 21–24 August 2022, pp. 1–7. https://doi.org/10.1109/BioRob52689.2022.9925420.
Appl. Sci. 2023, 13(8), 4794; https://doi.org/10.3390/app13084794
Submission received: 3 March 2023 / Revised: 5 April 2023 / Accepted: 7 April 2023 / Published: 11 April 2023
(This article belongs to the Special Issue Human Activity Recognition (HAR) in Healthcare)

Abstract

Gait analysis is important in a variety of applications such as animation, healthcare, and virtual reality. So far, high-cost experimental setups employing special cameras, markers, and multiple wearable sensors have been used for indoor human pose-tracking and gait analysis. Since locomotive activities such as walking are rhythmic and kinematically constrained, fewer wearable sensors can suffice for gait and pose analysis. One of the core parts of gait analysis and pose-tracking is lower-limb-joint angle estimation. Therefore, this study proposes a neural-network-based method for estimating lower-limb-joint angles from a single inertial sensor unit. As proof of concept, four neural-network models were investigated: a bidirectional long short-term memory network (BLSTM), a convolutional neural network (CNN), a wavelet neural network (WNN), and a unidirectional LSTM. Estimation accuracy depends not only on the chosen network but also on the sensor placement; hence, the waist, thigh, shank, and foot were selected as candidate inertial sensor positions. From these inertial sensors, two sets of lower-limb-joint angles were estimated. One set contains only four sagittal-plane leg-joint angles, while the second comprises six sagittal-plane and two coronal-plane leg-joint angles. After assessing different combinations of networks and datasets, the BLSTM network with either the shank or thigh inertial dataset performed well for both joint-angle sets, making the shank and thigh the better candidates for single-inertial-sensor leg-joint estimation. Mean absolute errors (MAEs) of 3.65° and 5.32° were obtained for the four-joint-angle set and the eight-joint-angle set, respectively. Additionally, the actual leg motion was compared to a computer-generated simulation of the predicted leg joints, demonstrating that leg-joint angles can be estimated during walking with a single inertial sensor unit.

1. Introduction

Locomotion is a universal behavior that animals and humans use to efficiently translocate and navigate between places. Particularly in humans, the central pattern generator, a complex network located in the spinal cord, is responsible for the generation of rhythmic motor behaviors such as walking. The brain stem and motor cortex supply this network with inputs and motor commands, while the various joints, muscles, and skin provide it with sensory feedback. This network then produces different patterns of bipedal gait [1]. Furthermore, musculoskeletal/neurological disorders and the overall health status of a person can affect their gait, hence producing a unique walking pattern (gait) [2].
Gait analysis is in high demand in the medical field, where it is mainly adopted for precise patient monitoring, pathological gait treatment assessment, movement-abnormality identification, and surgical outcome evaluation [3]. Its importance in healthcare has been discussed in various studies covering knee and hip osteoarthritis [4], falling risk [5], spinal damage-level determination [6], Parkinson's disease diagnosis [7], and interactive rehabilitation and predictive diagnostics [8,9]. It can also be crucial in sports and robotics applications [10], virtual reality, and character animation [11]. Gaits are interpreted by first quantifying them using representative parameters that are easier to understand. These parameters mainly fall into three categories: spatiotemporal (e.g., speed and stride/step length), kinematic (e.g., hip extension/flexion), and kinetic (e.g., moments and ground reaction forces) parameters [3]. This paper focuses on estimating the kinematic parameters of the lower half of the body. Leg joints are the key source of degrees of freedom for walking locomotion; hence, accurately computing the joint angles is vital to understanding human gaits during walking. The type of sensor employed for data acquisition plays a key role in the accuracy of joint-angle computation, and several data-collection techniques have been investigated over the past few years. Generally, they can be categorized as wearable and nonwearable sensor systems.
Nonwearable sensor methods mostly employ 3D motion capture with special markers attached to the bodies of subjects. The 3D human pose is captured in a specialized indoor setting, such as a laboratory or studio, using highly accurate optical motion capture systems [12]. These methods have long been considered the industry standard. Another nonwearable system, a pressure-sensing carpet, was proposed by the Massachusetts Institute of Technology: the 3D human pose is estimated from pressure data acquired by a tactile carpet covering 36 ft² with 9216 sensors, readout circuits, and two cameras [13]. Moreover, vision-based methods [14,15,16] reconstruct 3D human poses from 2D still images and movies, while [17] computed walking speed and stride length from Kinect camera depth data. Despite their excellent performance, nonwearable systems only operate inside controlled laboratory settings, which makes them impractical for physiotherapists and sports scientists looking to bridge the lab-to-field gap. On top of that, such systems are expensive and demand long setup times and substantial skill.
These limitations are currently being eased owing to the miniaturization of wearable sensors. Inertial measurement units (IMUs), electromyography, and other wearable sensors have opened the way for practical indoor/outdoor motion capture systems for long-term use. Continuous digitization and the high demand for motion analysis in fields such as rehabilitation have made inertial sensors a central topic over the last few years. Even though they enable movement assessment in real-world settings with easy portability, wearable sensors are not yet standard practice in motion analysis because of limited examination of their accuracy and reliability. However, recent works [18,19] investigated the reliability and validity of commercially available Xsens inertial sensors for different activities including walking, squatting, and jumping. They concluded that reliability and validity were fair to excellent in the sagittal plane for hip, knee, and ankle joint angles and that the system can be used by clinicians to quantify leg-joint angles. Owing to their convenient accompanying software, these inertial sensors were used in this study as well. However, existing inertial capture systems vary in sensor quantity, sensor positioning, and estimation method [20,21,22]. The study in [20] adopted an extended Kalman filter for lower-limb segment position and orientation estimation from two (fixed only to the feet) and three (attached to the pelvis and the feet) sensor sets. For the three-sensor set, they achieved overall root mean square errors (RMSEs) of 5.0 ± 1.0°, 8.2 ± 2.2°, and 5.9 ± 1.6° for the hip, knee, and ankle, respectively. A study in [21] developed a microcontroller system with two inertial sensors mounted on the thigh and the shank to compute the knee joint angle; it was claimed to achieve an RMSE of 0.04° with a mean average percentage error of 2.95% compared to a Vicon motion capture system. Similarly, [22] used one inertial sensor fixed to the thigh to target the knee joint angle and two inertial sensors fixed to the shank and thigh to target the ankle joint angles during walking, achieving MAEs of 1.69 ± 1.43°, 1.29 ± 1.0°, and 0.82 ± 0.69° for the knee, talocrural joint, and subtalar joint, respectively. In the existing systems, there is a lack of information on how many inertial sensors are enough to correctly estimate the lower-limb-joint angles during walking. Certainly, multiple inertial sensors make the subject uncomfortable and the system complex and thus expensive to run; any method that reduces the sensor count without sacrificing performance is therefore favorable. Additionally, each person's unique gait makes it challenging to implement gait-analysis systems that generalize to arbitrary subjects. However, walking comprises cyclic leg motions in which the bone segments move in a correlated way, so the walking motion can be mapped or reconstructed from the motion of a single bone segment. The nonlinear relation among the bone segments can potentially be approximated by neural networks.
Various algorithms have been used to estimate human poses. However, owing to their ability to reconstruct human poses from fewer sensors and to generalize across subjects, neural networks have been the center of attention in recent years. This was demonstrated in our previous study, where we investigated estimating leg-joint angles from a single IMU fixed to the pelvis using a neural network [23]. Another data-driven technique [24] gathered data from five people with one IMU unit fixed on the right shank to train a recurrent neural network (RNN) that approximates the gaits of construction workers; a special rectangular wooden frame was built for the data measurement, on which subjects walked while carrying all the computing equipment. Similarly, [25] used a shank-mounted single IMU to estimate the sagittal-plane lower-limb-joint angles, with data collected by instructing subjects to walk a straight 5 m line inside a laboratory.
The existing methods explained above prove that one or two sensors can be enough to estimate the leg-joint angles with good accuracy. This is possible owing to the periodicity and kinematic constraints of human walking. A reduced sensor count not only lowers system complexity but also lets subjects walk with a more natural gait. Despite increased research in this field, there is a paucity of information on the most suitable single-IMU placement for leg-joint estimation. As the need for portable and simple wearable sensors for motion analysis grows, identifying the best sensor-fixing body locations becomes critical, since the position of the single inertial sensor strongly affects the estimation result of the neural networks. There is no consensus regarding sensor position on the body: previous studies fixed inertial sensors on the pelvis [20,23], thigh [21,22], shank [21,22,24,25], and foot [20]. Hence, in this study, the placement of a single sensor on different parts of the body for joint-angle estimation of both legs is investigated using various neural-network algorithms. This is essential for understanding the optimal inertial sensor placement on the lower half of the body when a reduced sensor count is needed for lower-body motion analysis, and it should benefit healthcare physiotherapists and motion analysts in the sports field. The most dominant sensor positions in existing studies, namely the pelvis, thigh, shank, and foot, were taken as the candidate placements for estimating the two lower-limb-joint-angle sets. According to [26], CNNs are better candidates for pure prediction tasks, while LSTMs are preferred over multilayer perceptron networks for sagittal-plane joint-angle prediction and real-time joint-angle estimation. Hence, four neural networks were selected, comprising convolution-based and LSTM networks: a unidirectional LSTM, a bidirectional long short-term memory network (BLSTM), a convolutional neural network (CNN), and a wavelet neural network (WNN). For the neural-network training, walking data were collected from 16 subjects in an outdoor setting, where subjects were told to walk freely and naturally. This study was accomplished with little mounting labor and a significantly lower sensor setup cost.
Therefore, the main contributions of this research are: (i) the use of a single IMU sensor to estimate lower-limb-joint rotation angles from data collected outdoors; (ii) the investigation of the optimal body position for a single inertial sensor to estimate the lower-limb-joint angles; (iii) a demonstration of the promise of reduced wearable sensor sets for gait-analysis and pose-estimation problems; and (iv) insight for physiotherapists and sports scientists into how well a single inertial sensor can estimate lower-limb-joint angles in an outdoor setting. This work could be further extended to daily-activity pose-tracking, which could be crucial in rehabilitation and assistive-robot applications.

2. Data Acquisition

2.1. IMU Sensor

The sensors used in this study are MTw Awinda sensors (hereafter referred to as Awinda sensors), manufactured by Movella Inc., headquartered in Henderson, NV, USA. They are wireless, easy-to-integrate, small microelectromechanical system (MEMS) inertial sensors convenient for real-time human motion tracking. Awinda sensors ensure accurate and well-synchronized data among all connected sensors, which is vital in human pose estimation. The sensors are accompanied by free software named MT Manager, which can record and export the raw inertial data and orientation data of each sensor.
Since IMU sensors suffer from drift errors and environmental magnetism, validating their performance is a necessary step before use. A study [27] compared the Awinda sensor system against an 8-camera Qualisys optical motion capture system for walking and static poses; the minimum and maximum average root mean square errors (RMSEs) over 18 lower-limb joints were 3.2° and 10.1° for walking and 3.7° and 8.0° for the static pose, respectively. Additionally, the Awinda sensor system was evaluated against the Optotrak motion capture system in [28] using three activities, namely walking, descending stairs, and ascending stairs; the mean estimation error of the joint angles ranged from 1.38° to 6.69°. However, since the experimental environment affects the performance of the Awinda inertial sensors, the sensors were also tested in our own indoor optical motion capture experiment. In particular, verifying the Awinda sensors' orientation output was the main goal, as their orientation is used to compute the joint angles. To do so, five minutes of data were collected using a rectangular rigid frame with optical markers and an Awinda sensor mounted on it. As a result, the orientation deviation of the Awinda sensor from the Optotrak motion capture system was 1.45°, 1.66°, and 0.67° for the x, y, and z axes, respectively. Beyond these low deviations, our data-collection sessions were kept short, about 10 min each, to avoid possible long-term drift errors. More importantly, our actual data collection was carried out in an outdoor space with negligible magnetic disturbance; the magnetization of the site was verified through the sensors' magnetic norm, as recommended by the manufacturer, which hardly varied because there are no large man-made structures at the outdoor experimental site. Therefore, the Awinda sensor data are sufficiently reliable for this study's experimental and analytical needs.

2.2. Data Measurement

To compute the ground-truth joint-angle values of the lower limb, seven individual Awinda sensors were mounted on the lower half of each subject's body. As depicted in Figure 1, one sensor unit was fixed to each lower-body bone segment: the pelvis, the thighs, the shanks, and the upper parts of the feet. To reduce the effect of skin-motion artifacts, the sensors were mounted in places with little skin movement: the pelvis bone at the height of the anterior superior iliac spine, the middle of the lateral thighs, the upper parts of the tibiae, and the front upper parts of the feet.
The objective here is to estimate the leg kinematics (the joint angles in particular) from any one of the sensors fixed to the body, as summarized in Figure 2. As the right leg is dominant for most people, the three sensors on the right leg, in addition to the waist sensor, were investigated and compared in this study. A study [29] suggested that human locomotor muscle synergies can be decoded from slow cortical waves of the brain, formulating a relationship between brain signals and leg kinematics. In this study, by contrast, a noninvasive method with only a single sensor is used to mimic the function of spinal cord signals during locomotion. This is possible because leg movement is manifested in pelvis motion, presuming the subject always maintains contact with the ground. The pelvis moves forward/backward and sideways during normal walking, and because of the continuous ground contact, the leg motion directly drives the trunk depending on speed and direction, creating a repetitive rhythmic motion. This makes it possible to estimate the repetitive poses of the lower half of the body from the inertial data of various bone segments. As an example, Figure 3 shows the inertial data of the pelvis for a single gait cycle.
After sensor synchronization, sensor calibration was performed before every experiment by orienting the sensors in one direction on a level surface. Next, the sensors were carefully attached to the subjects with Velcro straps, all facing the same direction as recommended by the manufacturer. The subjects were then instructed to walk naturally, in any direction, switching their pace between slow, normal, and fast at their convenience; hence, diverse data were collected from the 16 subjects. The Awinda station, connected directly to an LG Gram 11th Gen Intel® Core™ i7 computer, receives the synchronized data from the seven sensors via wireless transmission; its antenna supports wireless communication up to a 50 m range outdoors, which made the data-collection process much easier. The data were collected at a sampling rate of 100 Hz for approximately 10 min per subject. The sixteen subjects comprised 13 males and 3 females, with an age of 28 ± 7.2 years, a weight of 63.3 ± 12.2 kg, and a height of 169.3 ± 8.1 cm. In this study, data were collected from walking activity only. The experiment was carried out on a level, open field without any structures that could cause magnetic interference with the sensors. A Google Maps view of the experimental site is shown in Figure 4.

2.3. Data Preparation

The first step of dataset preparation is the ground-truth joint-angle computation. The MT Manager software exports the collected raw motion data from the seven sensors as a text file; however, only three quantities were extracted: the 3-axis accelerometer data, the 3-axis gyroscope data, and the quaternion orientation. MT Manager calculates each sensor's orientation in both Euler angles and unit quaternions and outputs it with reference to a global coordinate system. After the raw data are exported and saved as a text file, the next step is to compute the leg-joint angles, which are used as target values during the supervised neural-network training. The joint-angle calculation, dataset preparation, training, and inference steps were programmed in Python 3.7 using the PyCharm IDE.
Since each sensor is firmly attached to a bone segment of the body, it is assumed that the sensor's orientation corresponds to the orientation of the associated body segment. The joint rotation angle is then defined by the orientation difference between the distal and proximal segments the joint connects, as expressed in Equation (1). All attached sensors are aligned to face the same direction.
In other words, when a subject stands upright with the shank and thigh perpendicular to the flat ground, the extension/flexion angles of the knee and hip are 0°.
q_dis_prox = q*_dis ⊗ q_prox,  (1)
where q_dis_prox denotes the orientation difference between the distal and proximal bone segments, q_dis is the distal bone segment orientation, and q_prox is the proximal bone segment orientation; both of the latter quantities are measured with reference to the global frame. The '⊗' symbol denotes quaternion multiplication, while '*' indicates the quaternion complex conjugate. For instance, the rotation angle of the knee joint is computed from the orientations of the proximal (thigh) and the distal (shank) bone segments, as illustrated in Figure 5. Subsequently, the quaternion result of Equation (1) was transformed into Euler angles, from which the relevant components corresponding to the extension/flexion of the hip and knee joints were taken as the ground-truth values. The computed joint-angle series has the same length as the original raw data.
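As a concrete illustration, the following sketch computes a joint angle from two segment orientations using SciPy. This is not the authors' published code; the Euler axis order and the quaternion component ordering are assumptions.

```python
import numpy as np
from scipy.spatial.transform import Rotation as R

def joint_angles_deg(q_dis, q_prox):
    """Relative joint rotation per Equation (1): q_dis_prox = q*_dis (x) q_prox.

    q_dis, q_prox: unit quaternions of the distal and proximal segments in
    the global frame, in SciPy's [x, y, z, w] ordering (assumed).
    Returns Euler angles in degrees; the sagittal-plane component would be
    taken as the extension/flexion ground truth.
    """
    r_dis = R.from_quat(q_dis)
    r_prox = R.from_quat(q_prox)
    # For unit quaternions the inverse equals the complex conjugate.
    r_rel = r_dis.inv() * r_prox
    return r_rel.as_euler("xyz", degrees=True)  # axis order is an assumption

# Example: proximal segment tilted 30 degrees about x relative to the distal one.
q_prox = R.from_euler("x", 30, degrees=True).as_quat()
q_dis = R.identity().as_quat()
print(joint_angles_deg(q_dis, q_prox))  # -> [30.  0.  0.]
```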
Two sets of target leg-joint angles were investigated. The first set comprises four joint angles, namely the extension/flexion angles of the hip and knee of both legs. The second set adds the ankle dorsiflexion/plantarflexion and hip abduction/adduction angles of both legs to the first set. In the collected data, the rotations of the hip, knee, and ankle joints range from −40° (flexion) to 20° (extension), from 0° (extension) to 80° (flexion), and from −18° (dorsiflexion) to 40° (plantarflexion), respectively.
Dataset preparation is the second step of the data preparation stage. Datasets are the input arrays fed to the neural networks during deep learning; they are created by cutting the raw time-series data into smaller pieces. To prepare the datasets, a sampling window 100 samples wide (equivalent to 1 s) with an 80% overlap was slid over the raw time-series data, as shown in Figure 6. Each resultant sample is a 100 × 6 array of inertial data. This method was applied to all the inertial data of the pelvis, thigh, shank, and foot. The target labels for the neural networks are the joint angles corresponding to the last frame of the shifting window; these are indicated by the vertical lines in Figure 6. The target (label) joint-angle data were then organized into 4 × 1 and 8 × 1 arrays for the two sets. A minimal windowing sketch is given below.
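The windowing just described can be sketched as follows (a minimal illustration under the stated window size and overlap; the array layouts are assumptions, not the authors' code):

```python
import numpy as np

def make_windows(inertial, angles, window=100, overlap=0.8):
    """Cut synchronized time series into overlapping training samples.

    inertial: (T, 6) array of accelerometer + gyroscope data
    angles:   (T, J) array of ground-truth joint angles (J = 4 or 8)
    Returns X with shape (N, window, 6) and y with shape (N, J), where
    each label is the joint-angle vector at the window's last frame.
    """
    step = int(window * (1.0 - overlap))  # 20 samples for an 80% overlap
    X, y = [], []
    for start in range(0, len(inertial) - window + 1, step):
        X.append(inertial[start:start + window])
        y.append(angles[start + window - 1])  # label = last frame of window
    return np.asarray(X), np.asarray(y)
```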
For deeper analysis, three varieties of input datasets were created. The first contains the inertial data of only one of the four sensor positions on either leg, shaped into a 100 × 6 array. The second combines the pelvis inertial data (PID) with the inertial data of both feet (bFID), structured as a 100 × 18 array whose 18 columns are the 6-axis inertial data of the pelvis and both feet; this set was created to examine the estimation improvement gained by combining the pelvis and feet data. The third adds the subjects' biometric information to the PID. Each person has a distinctive gait, step size, walking speed, and range of motion, and age, gender, weight, and height are among the factors behind these variations; hence, adding this information to the training process could improve the estimation accuracy. Except for gender, these quantities are already numerical, so gender was encoded as a binary value: 1 for male participants and 0 for female participants. As a result, the third dataset has two separate inputs: a 100 × 6 PID array and a 4 × 1 biometric information data (BID) array. A total of 50,973 samples were prepared for deep learning and divided into three parts: 84.5% for training, 14% for validation, and the remaining 1.5% for testing. The testing data, less than 10 min of data collected from the 16th subject, were excluded from training and are thus new and unencountered by the trained models.

3. Neural Networks

This section explains the architectures of the neural-network models chosen for the leg-joint angle estimation. As mentioned previously, four neural-network methods were investigated for the estimation problem, and two variants of each model were developed, one per joint-angle set. The structure of each model is described below.

3.1. Long Short-Term Memory Networks

Unidirectional LSTMs (hereafter simply LSTMs) are a type of RNN that excels at learning long-term dependencies by retaining information over long periods [30]. RNNs, and LSTMs specifically, are preferred for recognition and prediction tasks involving language translation, time-series data, and speech recognition [31]. LSTMs are therefore a good option for training on our time-series data.
In this study, a single LSTM layer followed by four fully connected layers was created as the estimator. The LSTM layer has 512 hidden units and a time step of 100, equal to the row size of the input dataset. Since the target angle values can be positive or negative, a linear activation was employed on the last fully connected layer; this last layer is the same across all the other networks. The bidirectional LSTM (BLSTM) is a variation of the LSTM in which the input data flow both forward and backward across the timesteps of the network; in other words, a BLSTM can be viewed as adding one more LSTM layer that processes the input from the last timestep back to the first [32], so that, unlike the unidirectional LSTM, it also preserves information from the future. The full BLSTM network for the eight-joint-angle set is depicted in Figure 7; the diagram shows the case of inertial data from a single bone segment. For the datasets that include BID, the BID data were fed to a separate dense layer whose output is merged with the LSTM output at the 64-unit fully connected layer. For datasets that include the bFID, the bFID is concatenated with the PID and given to the networks as a 100 × 18 array. In this way, the inertial and biometric quantities are fed to the network separately so that features can be learned from them independently; the same applies to the other neural-network models as well.
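A Keras sketch of this architecture is given below. The 512-unit bidirectional LSTM, the linear output, and the dropout ratio follow the text; the widths of the intermediate dense layers are assumptions.

```python
from tensorflow import keras
from tensorflow.keras import layers

def build_blstm(timesteps=100, channels=6, n_joints=8):
    """BLSTM estimator: one bidirectional LSTM layer (512 units per
    direction) followed by fully connected layers and a linear output."""
    inputs = keras.Input(shape=(timesteps, channels))
    x = layers.Bidirectional(layers.LSTM(512))(inputs)
    x = layers.BatchNormalization()(x)
    x = layers.Dense(256, activation="relu")(x)  # width assumed
    x = layers.Dense(128, activation="relu")(x)  # width assumed
    x = layers.Dense(64, activation="relu")(x)   # BID branch merges here
    x = layers.Dropout(0.3)(x)
    # Linear activation because joint angles can be negative or positive.
    outputs = layers.Dense(n_joints, activation="linear")(x)
    return keras.Model(inputs, outputs)
```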

3.2. Convolutional Neural Network

CNNs have recently emerged as favorable networks not only for image classification problems but also for human motion analysis [33,34]. Here, the input dataset was treated as a virtual image of 100 × 6 dimensions. The CNN model consists of two convolutional layers, each with a rectified linear unit (ReLU) activation function followed by an average pooling layer. A two-layer fully connected network then receives the 1D vectorized output of the second convolutional layer.
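A sketch of such a model under the same Keras conventions follows; only the two-convolution, average-pooling, two-dense structure comes from the text, while the filter counts and kernel sizes are assumptions.

```python
from tensorflow import keras
from tensorflow.keras import layers

def build_cnn(timesteps=100, channels=6, n_joints=8):
    """CNN estimator treating each 100 x 6 window as a virtual image."""
    inputs = keras.Input(shape=(timesteps, channels, 1))
    x = layers.Conv2D(32, (5, 3), padding="same", activation="relu")(inputs)
    x = layers.AveragePooling2D(pool_size=(2, 1))(x)
    x = layers.Conv2D(64, (5, 3), padding="same", activation="relu")(x)
    x = layers.AveragePooling2D(pool_size=(2, 1))(x)
    x = layers.Flatten()(x)                      # 1D vectorized output
    x = layers.Dense(128, activation="relu")(x)  # width assumed
    outputs = layers.Dense(n_joints, activation="linear")(x)
    return keras.Model(inputs, outputs)
```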

3.3. Wavelet Neural Network

A WNN can be treated as a 2D convolutional network, except that low-pass and high-pass discrete wavelet filters are used in place of the CNN's activation functions. A single wavelet layer is equivalent to a two-level wavelet packet decomposition, producing four output coefficients that are concatenated to create the final output. The filters were selected from the Haar wavelet family, and a two-wavelet-layer WNN followed by two dense layers was designed.
The four networks were built with Keras, the open-source Python-based artificial neural-network interface library. To mitigate variance shift and overfitting, all the networks employ batch normalization, an exponentially decaying learning rate starting at 0.0001 with a decay rate of 0.9 every 1000 steps, a dropout layer with a 0.3 ratio, and L2 weight regularization. The epoch count and batch size were set to 100 and 32, respectively, after several training trials. Furthermore, the Huber regression cost function and the Adam optimizer were adopted during training.
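Under those stated hyperparameters, the training setup can be sketched as follows (X_train and the other arrays are placeholders produced by the windowing step above, not named by the authors):

```python
import tensorflow as tf

# Exponentially decaying learning rate: initial 1e-4, x0.9 every 1000 steps.
lr_schedule = tf.keras.optimizers.schedules.ExponentialDecay(
    initial_learning_rate=1e-4, decay_steps=1000, decay_rate=0.9)

model = build_blstm()  # from the sketch in Section 3.1
model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=lr_schedule),
              loss=tf.keras.losses.Huber())

# 100 epochs with batch size 32, as stated in the text.
history = model.fit(X_train, y_train,
                    validation_data=(X_val, y_val),
                    epochs=100, batch_size=32)
```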

4. Results and Discussions

In this section, the performance of the neural networks with the different inertial datasets will be explained.

4.1. Network Performance with the Different Datasets

The number of possible combinations of datasets and neural networks is large. Therefore, to reduce the computational time, the best-performing network was first selected by training all four networks on the PID alone. The selected network was then trained on the four single-position datasets, namely the PID, the thigh inertial dataset (TID), the shank inertial dataset (SID), and the foot inertial dataset (FID).
The training performance of the BLSTM network on the PID is depicted in Figure 8; the loss is computed after every training step. As can be seen from the graph, the network learned the features well within the first 20 epochs without underfitting, and although there is a gap between the two curves, it is too small for the model to be deemed overfitted. All the networks employed the Huber loss function and the Adam optimizer. Table 1 shows that the BLSTM and LSTM models exceeded the other two, because both recurrent networks are excellent at learning the temporal features contained in time-series data. Their similar mean absolute errors (MAEs) on the PID indicate that temporal information matters more than spatial information for the human lower-limb pose-estimation problem: CNN and WNN are better at extracting and learning spatial features than recurrent networks, yet their results are inferior to those of LSTM and BLSTM. The results in Table 1 were acquired by testing the trained models of the four networks with the unseen testing datasets.
In terms of the total average, BLSTM outperformed the other models in predicting the joint angles in all cases, followed by LSTM and then CNN, as can be seen from the average columns. When the input dimension increased, especially with PID + bFID, spatial features could be extracted from the combined data, which favors the WNN and CNN networks. One observation from Table 1 is that the accuracy for the knee joints increased significantly when the bFID data were included in the input, because new knee-joint information is obtained from the bFID. With the PID alone, WNN struggled to perform well: WNNs employ classical sigmoid activations along with randomly initialized weights, which leads the network to converge to a local minimum during training, and this is why the WNN did not perform well.
Adding the biometric information and the feet inertial data to the pelvis inertial data improved the total average prediction accuracy of the BLSTM by 4.12% and 33%, respectively. This supports the idea that the way we walk is influenced by our biometric characteristics, while the bFID supplements the PID by adding kinematic information about the feet, which are far from the pelvis sensor.

4.2. Effect of Sensor Placement

Even though the WNN and BLSTM have smaller network sizes, as seen in Table 1, the BLSTM performed well regardless. Hence, to reduce the training time, only the BLSTM network was further trained with the TID, SID, and FID to determine the best sensor position for the first set of joint angles. The results are shown in Table 2. Except for the right hip angle, the inertial data of the sensor attached to the tibia/shank gave better accuracy than the other three positions, with minimum and maximum MAEs of 3.02° and 4.33°, respectively. This is because the shank motion is directly associated with the rotations of the knee and hip joints during walking: we cannot move the lower body without moving the shank, but we can move the lower body with little movement of the pelvis. Hence, the shank captures most of the lower-body kinematics during walking. The estimation performance for the second, eight-joint-angle set is lower than that for the four-joint-angle set; as the ankle and hip abduction/adduction columns of Table 3 show, the BLSTM network did not perform well for those two joints.
Hence, the overall estimation accuracy was affected. However, the result for the TID is slightly better than that of the SID for the second set of joint angles, as shown in Table 3. Therefore, for lower-extremity joint-angle estimation, attaching an IMU sensor to the tibia right below the knee or to the side of the thigh works well for the four joint angles. It can be concluded that, using a single inertial sensor, the general walking pose of a person can be estimated with good accuracy, and not only from the pelvis inertial data: the pose of the lower half of the body can also be estimated from the inertial data of other bone segments of either leg. In other words, general walking parameters such as forward walking speed, sagittal joint angles, and step/stride sizes could be computed with a single-inertial-sensor estimation method.

4.3. Generalized vs. Personalized Inertial Data

In some applications, such as rehabilitation, only a subject-specific estimation process may be of interest, to boost the performance of the system. Hence, we also evaluated the performance of the BLSTM network when trained on subject-specific datasets versus the whole dataset. The subject-specific datasets are labeled "personal." The results are shown in Table 4, where the MAE values for the personal dataset are obtained by averaging the MAEs over all subjects. The dataset division for this training was 74% for training, 16% for validation, and 10% for testing. As Table 4 indicates, the errors for the individual datasets are noticeably lower than those for the general (whole) dataset. This setup might be adopted in applications that call for more accurate, personalized joint-angle prediction.

4.4. Evaluation with Unseen Dataset

Lastly, the trained BLSTM model was evaluated on the unencountered testing dataset. The graphs in Figure 9 show the predicted and actual joint angles for the testing dataset. The testing was performed offline: the test data were restructured into datasets and fed to the trained BLSTM model for prediction. The figures show some bias errors, which is to be expected given that each subject has a unique gait; nevertheless, this is a highly promising result for input data from a single IMU alone. Furthermore, a MATLAB Simulink® skeleton model was created to visualize the estimated joint angles from the testing dataset. Figure 10 compares the actual and simulated versions of the predicted four-joint-angle set, with corresponding frames captured from a camera video and a simulated video. The video of the subject was taken with a camera during the data collection, while the video of the skeleton was generated with a MATLAB built-in video-recording function. The subject was instructed to make momentary stops during walking, which were later used to synchronize the two videos. As the figure shows, an excellent pose estimation was achieved from only a single inertial sensor on the shank.
Furthermore, the model was evaluated using cross-validation. Table 5 shows the results of a 10-fold cross-validation over the testing dataset. The rows give, per joint angle, the MAE of the best model and the average MAE of the 10 models produced by the cross-validator. The best model was selected as the one with the smallest overall mean of the joint-angle MAEs, where the overall mean of a model is the average of its per-joint MAE values. The overall mean of the joint-angle MAEs ranges from 4.46° to 5.13°, indicating that similar results were obtained across the different dataset arrangements. Moreover, the results show the same trend as Table 3, with larger errors for the ankle joints and hip adduction/abduction.
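The evaluation scheme can be sketched roughly as follows. This is a hypothetical illustration of 10-fold cross-validation with a fixed held-out test subject; the authors' exact fold handling is not specified in the text.

```python
import numpy as np
import tensorflow as tf
from sklearn.model_selection import KFold

# X, y: windowed training data/labels; X_test, y_test: the held-out
# 16th subject. build_blstm() is the sketch from Section 3.1.
kf = KFold(n_splits=10, shuffle=True, random_state=0)
per_fold_mae = []
for train_idx, val_idx in kf.split(X):
    model = build_blstm()
    model.compile(optimizer="adam", loss=tf.keras.losses.Huber())
    model.fit(X[train_idx], y[train_idx], epochs=100, batch_size=32,
              verbose=0)
    pred = model.predict(X_test)
    per_fold_mae.append(np.mean(np.abs(pred - y_test), axis=0))  # per joint

print("Average per-joint MAE of the 10 models:", np.mean(per_fold_mae, axis=0))
```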
To gauge how well our method handles other data, an assessment with open-source datasets would be desirable; however, no relevant open-source datasets collected similarly to ours exist. For reference, the results of [24,25] are mentioned here, even though both studies employed different data-collection techniques and estimation methodologies. The authors of [24] gathered their data by instructing subjects to walk on a wooden frame along a predetermined path and reported a mean joint-angle error range of 5.35° to 12.3° for their 5 subjects. The data collection in [25] was carried out in a 5 m-long indoor area; they achieved root mean square error ranges of 7.49° to 8.14° (using all features) and 6.19° to 7.0° (using selected features).

4.5. Discussion

The neural-network models produced promising results, particularly in estimating the sagittal-plane lower-limb-joint angles. The results could have been better had the data been collected only for straight walking on perfectly level ground; in reality, the outdoor ground had a gentle slope and subjects took turns at different angles. Hence, for more controlled straight walking in medical environments, the system could potentially give insight into the kinematics of the lower half of the body. This is particularly relevant for lower-limb exoskeleton robots, which need to identify the patient's walking intention in order to apply a mechanical force that assists the walking movement. In addition, the walking status of patients before and after leg-joint surgery, and their progress, can be tracked with this system. The main advantages of the system are its portability and reduced sensor complexity, which allow it to be used in different scenarios without much effort.
Even though promising results were obtained for the sagittal-plane joint angles with the SID, the model has difficulty estimating the ankle joint angles and the coronal-plane lower-body joint angles. One reason for the difficulty with coronal-plane joint angles is that our legs barely move in the coronal plane during walking, resulting in small angle values that are prone to noise. The ground-truth values of the ankle joints, on the other hand, are affected by the foot–ground impact at heel strike, which resulted in higher errors; this could be improved by introducing other low-level sensors on the foot. Another general limitation of deep-learning methods is their lack of explainability and interpretability, which can make their results unreliable. To be used in real-world applications, machine-learning methods must be easy to understand for unskilled personnel as well. Some studies have started to address this issue: the authors of [35] employed various methods, including Local Interpretable Model-agnostic Explanations, to explain and interpret the decision-making of machine-learning methods, which could make such methods less challenging for less-experienced clinicians. Hence, in the future, the interpretability of the system will be addressed by adopting techniques such as black-box explainers, followed by an evaluation of system reliability and validity in which inexperienced physiotherapists and skilled operators are hired to use the system. Eventually, the study will be extended to rehabilitation and eldercare applications; for more in-depth analysis in these fields, additional sensors such as insole sensors, full gait parameterization, and visualization will be implemented, giving physiotherapists a clear understanding of a patient's walking condition.

5. Conclusions

The gait-analysis research area is expanding quickly because of its fast-growing demand in areas such as health services and robotics. Thanks to rapid advancements in sensing technology and artificial intelligence, gait analysis has become possible using only a few wearable sensors; however, there is little consensus on the sensor quantity and placement for better lower-leg pose estimation. Therefore, this study investigated the placement of a single inertial sensor on the lower half of the body for neural-network-based leg-joint angle estimation. Four neural-network models were compared using walking-motion data collected from 16 multiracial subjects. Among the networks, the BLSTM performed best, with MAEs ranging from 3.02° to 4.33° for the four dominant sagittal-plane leg-joint angles. The results improved with additional sensors and the introduction of biometric information. From the investigation of single-sensor placement, the shank and thigh were found to be the optimal positions for leg-joint angle estimation, achieving similar results with overall average errors of 3.84° and 3.65° for the thigh and shank, respectively. Other positions, such as the pelvis, are not close enough to capture the whole-leg kinematics from the hip to the toe. Furthermore, the estimation results confirmed that a single inertial sensor can be enough to estimate the extension/flexion angles of the hip and knee joints. However, accurately estimating the coronal-plane joint angles of the lower limb and the ankle joints was challenging, owing to the inherently small lateral movement during walking and the foot–ground impact at heel strike.
Hence, adding low-dimensional sensors, such as pressure sensors, could potentially improve the obtained results. Nevertheless, this study achieved promising results that could serve as a springboard for extending the work to other human activities. If a robust estimation mechanism for various human activities is developed, it can be applied to real-world problems, particularly in healthcare services, assistive robotics, and collaborative robotics.

Author Contributions

Conceptualization, J.H.L.; Data curation, T.T.A.; Formal analysis, T.T.A.; Investigation, T.T.A. and J.H.L.; Methodology, T.T.A. and J.H.L.; Project administration, J.H.L.; Software, T.T.A.; Supervision, J.H.L. and S.O.; Validation, J.H.L. and S.O.; Writing—original draft, T.T.A.; Writing—review and editing, J.H.L. and S.O. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by JSPS KAKENHI, Grant Number JP22K04012.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Informed consent was obtained from all subjects involved in the study.

Data Availability Statement

Partial resources of this research can be found at https://github.com/tsgtdss583/JointAngleEstimation (accessed on 23 February 2023).

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Nielsen, J.B. How We Walk: Central Control of Muscle Activity during Human Walking. Neuroscientist 2003, 9, 195–204.
  2. Horst, F.; Mildner, M.; Schöllhorn, W. One-year persistence of individual gait patterns identified in a follow-up study—A call for individualised diagnose and therapy. Gait Posture 2017, 58, 476–480.
  3. Shull, P.B.; Jirattigalachote, W.; Hunt, M.A.; Cutkosky, M.R.; Delp, S.L. Quantified self and human movement: A review on the clinical impact of wearable sensing and feedback for gait analysis and intervention. Gait Posture 2014, 40, 11–19.
  4. Ornetti, P.; Maillefert, J.-F.; Laroche, D.; Morisset, C.; Dougados, M.; Gossec, L. Gait analysis as a quantifiable outcome measure in hip or knee osteoarthritis: A systematic review. Jt. Bone Spine 2010, 77, 421–425.
  5. Hausdorff, J.M.; Rios, D.A.; Edelberg, H.K. Gait variability and fall risk in community-living older adults: A 1-year prospective study. Arch. Phys. Med. Rehabil. 2001, 82, 1050–1056.
  6. Glowinski, S.; Łosiński, K.; Kowiański, P.; Waśkow, M.; Bryndal, A.; Grochulska, A. Inertial sensors as a tool for diagnosing discopathy lumbosacral pathologic gait: A preliminary research. Diagnostics 2020, 10, 342.
  7. Rovini, E.; Maremmani, C.; Cavallo, F. A Wearable System to Objectify Assessment of Motor Tasks for Supporting Parkinson's Disease Diagnosis. Sensors 2020, 20, 2630.
  8. Lloréns, R.; Gil-Gómez, J.A.; Alcañiz, M.; Colomer, C.; Noé, E. Improvement in balance using a virtual reality-based stepping exercise: A randomized controlled trial involving individuals with chronic stroke. Clin. Rehabil. 2015, 29, 261–268.
  9. Shull, P.; Lurie, K.; Shin, M.; Besier, T.; Cutkosky, M. Haptic gait retraining for knee osteoarthritis treatment. In Proceedings of the 2010 IEEE Haptics Symposium, Waltham, MA, USA, 25–26 March 2010; pp. 409–416.
  10. Maurice, P.; Malaisé, A.; Amiot, C.; Paris, N.; Richard, G.-J.; Rochel, O.; Ivaldi, S. Human movement and ergonomics: An industry-oriented dataset for collaborative robotics. Int. J. Robot. Res. 2019, 38, 1529–1537.
  11. Ke, S.-R.; Zhu, L.; Hwang, J.-N.; Pai, H.-I.; Lan, K.-M.; Liao, C.-P. Real-Time 3D Human Pose Estimation from Monocular View with Applications to Event Detection and Video Gaming. In Proceedings of the 2010 7th IEEE International Conference on Advanced Video and Signal Based Surveillance, Boston, MA, USA, 29 August–1 September 2010; pp. 489–496.
  12. Merriaux, P.; Dupuis, Y.; Boutteau, R.; Vasseur, P.; Savatier, X. A Study of Vicon System Positioning Performance. Sensors 2017, 17, 1591.
  13. Luo, Y.; Li, Y.; Foshey, M.; Shou, W.; Sharma, P.; Palacios, T.; Torralba, A.; Matusik, W. Intelligent Carpet: Inferring 3D Human Pose from Tactile Signals. In Proceedings of the 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Nashville, TN, USA, 20–25 June 2021; pp. 11250–11260.
  14. Yang, W.; Ouyang, W.; Li, H.; Wang, X. End-to-End Learning of Deformable Mixture of Parts and Deep Convolutional Neural Networks for Human Pose Estimation. In Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA, 27–30 June 2016; pp. 3073–3082.
  15. Newell, A.; Yang, K.; Deng, J. Stacked hourglass networks for human pose estimation. In European Conference on Computer Vision; Springer: Cham, Switzerland, 2016; pp. 483–499.
  16. Sun, X.; Xiao, B.; Wei, F.; Liang, S.; Wei, Y. Integral human pose regression. In Proceedings of the European Conference on Computer Vision (ECCV), Munich, Germany, 8–14 September 2018; pp. 529–545.
  17. Stone, E.E.; Skubic, M. Unobtrusive, Continuous, In-Home Gait Measurement Using the Microsoft Kinect. IEEE Trans. Biomed. Eng. 2013, 60, 2925–2932.
  18. Al-Amri, M.; Nicholas, K.; Button, K.; Sparkes, V.; Sheeran, L.; Davies, J.L. Inertial Measurement Units for Clinical Movement Analysis: Reliability and Concurrent Validity. Sensors 2018, 18, 719.
  19. Cudejko, T.; Button, K.; Al-Amri, M. Validity and reliability of accelerations and orientations measured using wearable sensors during functional activities. Sci. Rep. 2022, 12, 14619.
  20. Sy, L.; Lovell, N.; Redmond, S. Estimating Lower Limb Kinematics Using a Lie Group Constrained Extended Kalman Filter with a Reduced Wearable IMU Count and Distance Measurements. Sensors 2020, 20, 6829.
  21. de Almeida, T.F.; Morya, E.; Rodrigues, A.C.; de Azevedo Dantas, A.F.O. Development of a Low-Cost Open-Source Measurement System for Joint Angle Estimation. Sensors 2021, 21, 6477.
  22. Lee, T.; Kim, I.; Lee, S.-H. Estimation of the Continuous Walking Angle of Knee and Ankle (Talocrural Joint, Subtalar Joint) of a Lower-Limb Exoskeleton Robot Using a Neural Network. Sensors 2021, 21, 2807.
  23. Alemayoh, T.T.; Lee, J.H.; Okamoto, S. LocoESIS: Deep-Learning-Based Leg-Joint Angle Estimation from a Single Pelvis Inertial Sensor. In Proceedings of the 2022 9th IEEE RAS/EMBS International Conference for Biomedical Robotics and Biomechatronics (BioRob), Seoul, Republic of Korea, 21–24 August 2022; pp. 1–7.
  24. Chen, S.; Bangaru, S.S.; Yigit, T.; Trkov, M.; Wang, C.; Yi, J. Real-Time Walking Gait Estimation for Construction Workers Using a Single Wearable Inertial Measurement Unit (IMU). In Proceedings of the 2021 IEEE/ASME International Conference on Advanced Intelligent Mechatronics (AIM), Delft, The Netherlands, 12–16 July 2021; pp. 753–758.
  25. Sung, J.; Han, S.; Park, H.; Cho, H.-M.; Hwang, S.; Park, J.W.; Youn, I. Prediction of Lower Extremity Multi-Joint Angles during Overground Walking by Using a Single IMU with a Low Frequency Based on an LSTM Recurrent Neural Network. Sensors 2022, 22, 53.
  26. Mundt, M.; Johnson, W.; Potthast, W.; Markert, B.; Mian, A.; Alderson, J. A Comparison of Three Neural Network Approaches for Estimating Joint Angles and Moments from Inertial Measurement Units. Sensors 2021, 21, 4535.
  27. Schepers, M.; Giuberti, M.; Bellusci, G. Xsens MVN: Consistent Tracking of Human Motion Using Inertial Sensing. Xsens Technol. 2018, 1, 1–8.
  28. Zhang, J.-T.; Novak, A.; Brouwer, B.; Li, Q. Concurrent validation of Xsens MVN measurement of lower limb joint angular kinematics. Physiol. Meas. 2013, 34, N63–N69.
  29. Yokoyama, H.; Kaneko, N.; Ogawa, T.; Kawashima, N.; Watanabe, K.; Nakazawa, K. Cortical Correlates of Locomotor Muscle Synergy Activation in Humans: An Electroencephalographic Decoding Study. iScience 2019, 15, 623–639.
  30. Hochreiter, S.; Schmidhuber, J. Long short-term memory. Neural Comput. 1997, 9, 1735–1780.
  31. Elsworth, S.; Güttel, S. Time Series Forecasting Using LSTM Networks: A Symbolic Approach. arXiv 2020, arXiv:2003.05672.
  32. Graves, A.; Schmidhuber, J. Framewise phoneme classification with bidirectional LSTM and other neural network architectures. Neural Netw. 2005, 18, 602–610.
  33. Liu, J. Convolutional Neural Network-Based Human Movement Recognition Algorithm in Sports Analysis. Front. Psychol. 2021, 12, 663359.
  34. Alemayoh, T.; Lee, J.; Okamoto, S. New Sensor Data Structuring for Deeper Feature Extraction in Human Activity Recognition. Sensors 2021, 21, 2814.
  35. Khare, S.K.; Acharya, U.R. An explainable and interpretable model for attention deficit hyperactivity disorder in children using EEG signals. Comput. Biol. Med. 2023, 155, 106676.
Figure 1. A subject wearing seven Awinda sensors during data acquisition.
Figure 2. Summarized diagram of the developed system. Adapted with permission from Ref. [23]. 2022, IEEE.
Figure 3. Pelvis inertial data of a single gait cycle. Red (x-axis), green (y-axis), and blue (z-axis).
Figure 4. Experimental area (Google Maps).
Figure 5. The computation of knee joint angles from proximal and distal inertial sensors.
Figure 6. Dataset preparation using a sampling window.
Figure 7. BLSTM network for the PID dataset.
Figure 8. Training and validation losses of the BLSTM network for PID.
Figure 9. Ground-truth vs. estimated joint angles from a shank IMU using the trained BLSTM model. (a) right hip ext/flex angle; (b) right hip abd/add; (c) right knee ext/flex angle; (d) right ankle dorsi/plant; (e) left hip ext/flex angle; (f) left hip abd/add; (g) left knee ext/flex angle; (h) left ankle dorsi/plant.
Figure 10. Actual and predicted leg-joint comparison (for the four joint angles) through a graphical simulation.
Table 1. Performance of the networks using mean absolute error (in °) metrics.

| Network | Parameters (PID) | Dataset    | hR a | kR b | hL c | kL d | av e |
|---------|------------------|------------|------|------|------|------|------|
| WNN     | 510,692          | PID        | 6.19 | 8.72 | 6.30 | 8.97 | 7.55 |
|         |                  | PID + BID  | 6.71 | 9.49 | 6.09 | 8.33 | 7.66 |
|         |                  | PID + bFID | 5.15 | 5.20 | 5.04 | 4.75 | 5.04 |
| LSTM    | 1,237,572        | PID        | 6.59 | 5.73 | 6.25 | 6.82 | 6.35 |
|         |                  | PID + BID  | 6.51 | 6.18 | 5.42 | 7.36 | 6.37 |
|         |                  | PID + bFID | 6.25 | 4.26 | 5.05 | 4.55 | 5.03 |
| BLSTM   | 713,544          | PID        | 5.76 | 6.01 | 6.81 | 6.16 | 6.19 |
|         |                  | PID + BID  | 5.51 | 6.05 | 5.81 | 6.35 | 5.93 |
|         |                  | PID + bFID | 5.06 | 4.24 | 4.20 | 3.07 | 4.14 |
| CNN     | 1,318,916        | PID        | 6.16 | 8.71 | 6.38 | 7.93 | 7.30 |
|         |                  | PID + BID  | 5.27 | 6.53 | 6.33 | 6.41 | 6.14 |
|         |                  | PID + bFID | 5.38 | 5.80 | 6.70 | 5.66 | 5.89 |

a Right-leg hip extension/flexion, b right-leg knee extension/flexion, c left-leg hip extension/flexion, d left-leg knee extension/flexion, e total average.
Table 2. MAE (in °) of the trained BLSTM network for the first four-joint-angle set using various inertial datasets.

| Dataset | hpR_x a | knR_x b | hpL_x c | knL_x d | Average |
|---------|---------|---------|---------|---------|---------|
| PID     | 5.76    | 6.01    | 6.81    | 6.16    | 6.19    |
| TID     | 2.74    | 4.92    | 3.40    | 4.28    | 3.84    |
| SID     | 3.51    | 3.73    | 3.02    | 4.33    | 3.65    |
| FID     | 4.41    | 4.21    | 3.23    | 5.55    | 4.35    |

a right hip extension/flexion, b right knee extension/flexion, c left hip extension/flexion, d left knee extension/flexion.
Table 3. MAE (in °) of the trained BLSTM network for the second eight-joint-angle set using various inertial datasets.

| Dataset | hpR_x | hpR_d a | knR_x | ankR_p b | hpL_x | hpL_d c | knL_x | ankL_p d | Average |
|---------|-------|---------|-------|----------|-------|---------|-------|----------|---------|
| PID     | 5.59  | 7.39    | 6.71  | 7.76     | 5.41  | 4.68    | 7.01  | 9.97     | 6.82    |
| TID     | 2.28  | 7.91    | 4.66  | 9.33     | 3.34  | 4.76    | 3.99  | 6.23     | 5.31    |
| SID     | 3.93  | 7.19    | 3.39  | 10.41    | 3.19  | 5.07    | 5.33  | 4.67     | 5.40    |
| FID     | 4.41  | 4.71    | 4.91  | 9.44     | 3.36  | 3.55    | 6.13  | 9.05     | 5.70    |

a right hip adduction/abduction, b right ankle dorsiflexion/plantarflexion, c left hip adduction/abduction, d left ankle dorsiflexion/plantarflexion.
Table 4. MAE (in °) of the trained BLSTM network for personal and general pelvis datasets. Adapted with permission from Ref. [23]. 2022, IEEE.

| Dataset  | hpR_x | knR_x | hpL_x | knL_x |
|----------|-------|-------|-------|-------|
| General  | 5.76  | 6.01  | 6.81  | 6.16  |
| Personal | 2.65  | 3.79  | 2.73  | 3.30  |
Table 5. Average MAE (in °) of the 10-fold cross-validation result over the testing dataset.

| Quantity              | hpR_x | hpR_d a | knR_x | ankR_p b | hpL_x | hpL_d c | knL_x | ankL_p d |
|-----------------------|-------|---------|-------|----------|-------|---------|-------|----------|
| MAE of the best model | 2.69  | 4.54    | 3.82  | 7.02     | 2.62  | 5.23    | 4.24  | 5.52     |
| Average of all models | 3.31  | 4.19    | 4.44  | 7.77     | 2.55  | 5.13    | 4.86  | 5.94     |

a right hip adduction/abduction, b right ankle dorsiflexion/plantarflexion, c left hip adduction/abduction, d left ankle dorsiflexion/plantarflexion.