Article

Prediction of Vertical Ground Reaction Forces Under Different Running Speeds: Integration of Wearable IMU with CNN-xLSTM

1 Faculty of Sports Science, Ningbo University, Ningbo 315211, China
2 Faculty of Engineering, University of Pannonia, 8200 Veszprem, Hungary
3 Faculty of Engineering, University of Szeged, 6720 Szeged, Hungary
* Author to whom correspondence should be addressed.
Sensors 2025, 25(4), 1249; https://doi.org/10.3390/s25041249
Submission received: 6 January 2025 / Revised: 16 February 2025 / Accepted: 17 February 2025 / Published: 18 February 2025

Abstract
Traditional methods for collecting ground reaction forces (GRFs) rely mainly on laboratory force plates. Previous research has moved beyond this pattern by predicting GRFs with deep learning models fed with IMU data such as joint acceleration. Joint angle, as a geometric quantity, is easier to collect outdoors with cameras than acceleration. LSTM is a deep learning model that has shown good performance in biomechanical studies. xLSTM, an optimized version of LSTM, has not yet been applied in biomechanics, and no study has predicted GRFs during running using lower-limb joint angles alone. This study collected lower-limb joint angle and vertical ground reaction force data at five speeds from 12 healthy male runners with Xsens sensors. Datasets covering three joints and three planes were used as the inputs of four deep learning models for vertical-GRF prediction. CNN-xLSTM consistently performed best among the four models across the different input datasets (R2 = 0.909 ± 0.064, MAPE = 2.18 ± 0.09%, rMSE = 0.061 ± 0.008), and its performance remained at a relatively high level at all five speeds. The current findings may contribute to a new approach to GRF measurement and provide a reference for future real-time motion detection and sport injury prediction.

1. Introduction

Running, as a widely popular form of exercise, is favored for its simplicity, accessibility, and clear health benefits, and a substantial amount of research in sport training and biomechanics has been conducted around this fundamental movement form [1,2]. In biomechanics, the running gait cycle is generally divided into the stance phase and the swing phase [3,4]. The ground reaction force (GRF), as a crucial factor that drives runners forward, has attracted extensive attention from biomechanics researchers, and its accurate measurement during running is of great significance. Measuring the vertical ground reaction force helps to intuitively understand the force distribution during running, thereby optimizing running technique, such as adjusting the running posture and foot strike pattern, reducing impact forces, and improving running efficiency [5]. This measurement also aids in preventing sport injuries, assessing injury risks, and formulating personalized preventive measures. In addition, the vertical ground reaction force is a key parameter in scientific research, providing data support for the development of sport science and for the training guidance of professional athletes and coaches. Ultimately, optimized technique improves running efficiency, enhancing the runner’s experience and stimulating continued participation [6,7]. Compared with GRFs in other directions, the magnitude of the vertical ground reaction force (vertical GRF) directly influences the propulsive efficiency and energy conversion of running [8]. Previous studies have shown that, under the rear-foot strike running pattern, the vertical-GRF curve typically exhibits a double-peak shape [9,10,11].
Traditionally, the measurement of GRFs has relied primarily on specialized force plates in laboratories. Although force plate data are highly valuable for in-depth analysis of running posture, assessment of running efficiency, and prediction of sport injury risks, high equipment costs, spatial constraints, and inflexible data collection limit their widespread application [12]. Therefore, exploring a method that can measure GRFs in non-laboratory environments through convenient and low-cost means is of great significance for advancing sport science research and training practice [13,14,15]. Such a method can not only reduce cost and improve the flexibility of data collection but also enable runners and coaches to conduct real-time monitoring and analysis in daily training or home environments, thereby better guiding training and preventing sport injuries [16,17,18].
With the development of wearable inertial measurement units (IMUs) and deep learning, more and more biomechanical studies have used wearable IMUs to collect kinematic and kinetic data and then input these data into machine learning and deep learning models for the classification, recognition, and prediction of athletic performance and sport injuries [19,20,21,22]. Previous studies have directly or indirectly predicted ground reaction forces in running using machine learning and deep learning algorithms, which has helped to break the traditional mode of measuring GRFs with force plates in laboratories [23,24,25,26].
Long Short-Term Memory (LSTM), a special recurrent neural network structure, has been applied to analysis and prediction tasks in biomechanics in previous research. Its gating mechanism can capture complex long-term dependencies, which is particularly important when modeling the relationship between the ground reaction force and joint angle changes during the running stance phase. LSTM can handle long-sequence data while avoiding vanishing or exploding gradients, making it suitable for processing longer sequences. Moreover, LSTM generalizes well and can be combined with other network structures, such as CNNs, to improve prediction accuracy and robustness, making it applicable to different runners and environments and thereby broadening the application scope of GRF prediction [27,28]. Alcantara et al. [29] and Donahue et al. [30] predicted GRFs accurately with LSTM-based models. The Extended LSTM (xLSTM) network was proposed by Beck et al., from the group of Sepp Hochreiter, one of the original developers of LSTM [31]. As an extension of LSTM, it encompasses two variants: scalar LSTM (sLSTM) and matrix memory LSTM (mLSTM). The sLSTM block retains LSTM’s sequential processing and refines its gating through fine-grained control, making it suitable for subtle temporal variations. The mLSTM block processes all token sequences simultaneously, enhancing memory and parallelism by extending LSTM’s vector operations to matrix operations. The two modules can be combined flexibly within the xLSTM architecture to balance parallelism and sequential modeling [31,32]. Although xLSTM has been used in previous studies for time-series prediction and has demonstrated good performance, no study in biomechanics has yet used xLSTM to analyze kinematics and kinetics [33,34,35].
Previous studies have mainly used kinematic data from IMUs, such as acceleration, as inputs for deep learning models. Although these studies have accurately estimated the ground reaction force during running, none has used lower-limb joint angles as the sole input to predict the GRF. With the advancement of machine learning and deep learning algorithms, complex models are increasingly used for visual data analysis and real-time recognition [36]. Joint angle is not only a type of geometric data that can be captured in real time by imaging equipment and algorithms but also a basic kinematic quantity in sport biomechanics that has attracted much research attention [37,38]. Based on this background, the aim of this study was to develop an xLSTM-based deep learning model to predict the vertical ground reaction force during the stance phase of running from the angles of the lower-limb joints (ankle, hip, and knee) on three planes (sagittal, frontal, and transversal) and to explore the influence of the angles of different joints and motion planes on prediction accuracy. This study may provide alternatives to the traditional pattern of collecting ground reaction forces with laboratory force plates. We hypothesized that prediction would work best when all three joint angles on all three planes were input. The main contributions of this study are as follows.
1. We develop a deep learning model that can accurately predict the vertical ground reaction force during the stance phase of running by inputting the joint angles of the lower limbs.
2. We explore the impact of different joint angles on different planes on the prediction results of vertical ground reaction forces.
3. We test the predictive performance of the developed model at five different running speeds.

2. Procedure

The study was divided into 3 main parts. First, lower-limb joint angle and ground reaction force data were collected from 12 healthy male runners through a Vicon three-dimensional motion capture system, Kistler force plates, and Xsens sensors during the running stance phase. Second, the collected data were preprocessed and categorized using different joints and different planes. Third, angles from different joints and planes were used to train 4 deep learning models (CNN-xLSTM, CNN-sLSTM, CNN-mLSTM, and CNN-LSTM) to predict the vertical ground reaction forces. The workflow of the study is shown in Figure 1.

2.1. Data Collection and Preprocessing

The vertical-GRF data and the lower-limb joint angle data were collected in the Biomechanics Laboratory of Ningbo University. Twelve healthy male runners (age: 22.5 ± 0.86 years; body mass: 72.5 ± 9.55 kg; height: 1.78 ± 0.77 m) were recruited. The screening criteria were as follows: (1) no history of serious lower-limb surgery or any injury in the past six months that would interfere with the study; and (2) no other factors that would affect athletic performance. All participants were informed of the purpose, requirements, and procedures of the experiment and signed a written informed consent form. The study complied with the principles laid down in the Declaration of Helsinki, and the protocol was approved by Ningbo University’s Ethics Committee (Approval Number: TY2024037).
Vertical GRFs during running were collected through a Vicon three-dimensional motion capture system (Vicon Metrics Ltd., version 2.14.0, Oxford, UK) and Kistler force plates, with sampling frequencies of 200 Hz and 1000 Hz, respectively. Two photoelectric gates were placed on both sides of the Kistler force plates, and the time each runner took to pass between the gates was recorded and converted to running speed (8 km/h, 10 km/h, 12 km/h, 14 km/h, and 16 km/h) [39]. Each runner ran through the Vicon–Kistler–photoelectric gate system with Xsens sensors 10 times at each speed. All runners wore Xsens motion capture sensors, designated clothes, and running shoes while running, in order to collect the angles of 3 joints (hip, knee, and ankle) on 3 planes (sagittal, frontal, and transversal). The Xsens sensors (Xsens, Henderson, NV, USA) were placed on the hip, thigh, shank, and foot of each runner (Figure 1). All runners performed the running task with the rear-foot strike pattern and were allowed adequate rest after each trial.
The stance phase of running was defined as the period from the initial contact of the right heel with the ground (when the GRF collected by the force platform exceeded 10 N) to the complete liftoff of the right forefoot from the ground [40]. The phase was normalized to 0–100% in this study. A fourth-order Butterworth low-pass filter was applied to the collected ground reaction force and joint angle data, with cutoff frequencies of 10 Hz and 20 Hz, respectively. The filtered data were imported into MATLAB (R2022a, MathWorks, Natick, MA, USA), and a MATLAB script interpolated each trial to 101 points corresponding to 0–100% of the running stance phase (a sketch of this resampling step follows the dataset list below). Missing instances and outliers were checked and removed with MATLAB scripts to ensure the accuracy of the dataset. After preprocessing, 530 sets of one-to-one corresponding vertical-GRF and joint angle data made up the final dataset. To investigate the impact of different input data on the prediction results, the calculated joint angles were classified as follows and used as different inputs:
1. M1 (3Joints, 3Planes) = 530 × 909 (3 joints × 3 planes × 101 angles);
2. M2 (Ankle, 3Planes) = 530 × 303 (1 ankle joint × 3 planes × 101 angles);
3. M3 (Hip, 3Planes) = 530 × 303 (1 hip joint × 3 planes × 101 angles);
4. M4 (Knee, 3Planes) = 530 × 303 (1 knee joint × 3 planes × 101 angles);
5. M5 (3Joints, Sagittal) = 530 × 303 (3 joints × 1 sagittal plane × 101 angles);
6. M6 (3Joints, Frontal) = 530 × 303 (3 joints × 1 frontal plane × 101 angles);
7. M7 (3Joints, Transversal) = 530 × 303 (3 joints × 1 transversal plane × 101 angles).
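For illustration, the following minimal Python sketch shows the time-normalization step described above. The study used a MATLAB script; this is an equivalent NumPy version under stated assumptions, and the function name and trial length are hypothetical.

```python
import numpy as np

def normalize_stance_phase(signal: np.ndarray, n_points: int = 101) -> np.ndarray:
    """Resample a variable-length stance-phase signal to n_points samples
    (0-100% of stance), mirroring the MATLAB interpolation step."""
    original = np.linspace(0.0, 100.0, num=len(signal))
    target = np.linspace(0.0, 100.0, num=n_points)
    return np.interp(target, original, signal)

# Example: a stance phase recorded as 87 force samples becomes a 101-point curve.
raw_vgrf = np.random.rand(87)          # placeholder for one filtered vGRF trial
vgrf_101 = normalize_stance_phase(raw_vgrf)
assert vgrf_101.shape == (101,)
```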

2.2. Deep Learning Models

A CNN-xLSTM network, a CNN-sLSTM network, a CNN-mLSTM network, and a CNN-LSTM network were developed in this study for vertical-GRF prediction. The development, training, and validation of the 4 deep learning models were conducted in PyCharm (V2024.2.3, JetBrains, Prague, Czech Republic). The structure of the deep learning models is shown in Figure 2.

2.2.1. Convolutional Neural Networks (CNNs)

The basic structure of a Convolutional Neural Network (CNN) comprises an input layer, convolutional layers, pooling layers, fully connected layers, and an output layer. The input layer receives the raw data; the convolutional layers extract features using multiple convolution kernels to generate feature maps; the pooling layers reduce the dimensionality of the feature maps, decreasing the computational load and helping to prevent overfitting; the fully connected layers transform the pooled features into the final output (e.g., a probability distribution for classification); and the output layer produces the final result [41,42].
A CNN block with a convolutional kernel size of 3 and a pooling size of 2 was combined with each of the 4 recurrent blocks (xLSTM, sLSTM, mLSTM, and LSTM) in this study to extract temporal features of the joint angle data over the stance phase for vertical-GRF prediction. The output of the CNN was used as the input to the subsequent recurrent model.
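The following minimal PyTorch sketch illustrates such a 1-D CNN front end; the kernel size of 3 and pooling size of 2 follow the text, while the channel width and input layout are illustrative assumptions, not the authors' exact configuration.

```python
import torch
import torch.nn as nn

class CNNBlock(nn.Module):
    """1-D CNN feature extractor: kernel size 3 and pooling size 2 follow the
    text; the channel width (32) is an assumption for illustration."""
    def __init__(self, in_channels: int, out_channels: int = 32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(in_channels, out_channels, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool1d(kernel_size=2),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, channels, time) -> (batch, out_channels, time // 2)
        return self.net(x)

# e.g. M1 input: 9 angle channels (3 joints x 3 planes) over 101 time steps
features = CNNBlock(in_channels=9)(torch.randn(16, 9, 101))
print(features.shape)  # torch.Size([16, 32, 50])
```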

2.2.2. Long Short-Term Memory (LSTM)

The LSTM model achieves the effective capture and memorization of long-sequence messages by introducing a cell status and three logic gates (a forget gate, an input gate, and an output gate) that control message transmission [28]. The main characteristic of this model lies in its unique “gating mechanism”, with the primary algorithmic formula as follows:
$$C_t = f_t \cdot C_{t-1} + i_t \cdot z_t$$
where $C_t$ represents the cell state at time $t$; $f_t$ represents the output of the forget gate at time $t$; $C_{t-1}$ represents the cell state at the previous time step; $i_t$ represents the output of the input gate at time $t$; and $z_t$ represents the candidate cell state at time $t$.
The `nn.LSTM` class from the PyTorch deep learning framework was used to implement this LSTM layer; it provides all the necessary functionality for constructing the layer, including parameter initialization and forward propagation.
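For concreteness, a minimal sketch of a CNN-LSTM regressor built around `nn.LSTM` follows. The hidden size of 64 matches Table 1, while the head design and channel counts are illustrative assumptions rather than the authors' exact code.

```python
import torch
import torch.nn as nn

class CNNLSTMRegressor(nn.Module):
    """CNN front end followed by nn.LSTM and a linear head that maps the
    final hidden state to a 101-point vGRF curve. A sketch, not the
    authors' exact architecture."""
    def __init__(self, in_channels: int = 9, hidden_size: int = 64):
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv1d(in_channels, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool1d(2),
        )
        self.lstm = nn.LSTM(input_size=32, hidden_size=hidden_size,
                            num_layers=1, batch_first=True)
        self.head = nn.Linear(hidden_size, 101)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        z = self.cnn(x)                 # (batch, 32, 50)
        z = z.transpose(1, 2)           # (batch, 50, 32) for batch_first LSTM
        _, (h_n, _) = self.lstm(z)      # h_n: (1, batch, hidden)
        return self.head(h_n[-1])       # (batch, 101) predicted vGRF curve

pred = CNNLSTMRegressor()(torch.randn(16, 9, 101))
print(pred.shape)  # torch.Size([16, 101])
```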

2.2.3. Extended Long Short-Term Memory (xLSTM)

xLSTM is a hybrid of two variants: scalar LSTM (sLSTM) and matrix LSTM (mLSTM). sLSTM retains the memory mixing of traditional LSTM and supports state tracking, making it suitable for tasks that require capturing subtle changes in time-series data, while mLSTM introduces a normalizer state that tracks the product of the input gate and all future forget gates. Additionally, mLSTM is fully parallelizable, enabling efficient processing of large-scale data and making it suitable for tasks requiring fast response and high-performance computing [31]. The two block types can be switched and selected by modifying the ‘s’ or ‘m’ entries in the model definition code in PyCharm.
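The following sketch illustrates this block-selection idea. It is a hypothetical helper, not the configuration API of the official xLSTM library, and the factories stand in for real sLSTM/mLSTM block constructors.

```python
import torch.nn as nn

def build_xlstm_stack(block_types: str, slstm_factory, mlstm_factory) -> nn.ModuleList:
    """Assemble a block stack from a type string such as 'ms' (one mLSTM
    block followed by one sLSTM block, matching the CNN-xLSTM in Table 1)."""
    factories = {"s": slstm_factory, "m": mlstm_factory}
    return nn.ModuleList(factories[t]() for t in block_types)

# Placeholder factories keep the sketch runnable; a real model would pass
# actual sLSTM/mLSTM block classes here.
stack = build_xlstm_stack("ms", nn.Identity, nn.Identity)
print(len(stack))  # 2
```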
sLSTM improves upon the standard LSTM algorithm through its unique exponential gating mechanism. This variant introduces an exponential function as the activation function of the model to control information flow, making the activation of the input gate and forget gate more efficient and stable. Based on this mechanism, the forward propagation algorithm of sLSTM is as follows:
$$n_t = f_t \cdot n_{t-1} + i_t$$
where $n_t$ represents a normalizer state, which sums the product of the input gate and all future forget gates; $f_t$ represents the activation value of the forget gate, which regulates the amount of information inherited from the previous time step $t-1$; $n_{t-1}$ represents the normalizer state at the previous time step; and $i_t$ represents the new information added at the current time step.
$$h_t = o_t \cdot \tilde{h}_t, \qquad \tilde{h}_t = c_t / n_t$$
where $h_t$ represents the output state at time step $t$; $o_t$ represents the activation value of the output gate, which regulates the amount of output from the cell; $\tilde{h}_t$ represents the candidate output state used to adjust the activation level; and $c_t$ represents the internal state of the cell (the “memory cell” state), which typically stores long-term information.
$$i_t = \exp(\tilde{i}_t), \qquad f_t = \exp(\tilde{f}_t)$$
where $i_t$ and $f_t$ represent the activation values of the input gate and forget gate, respectively, after transformation by the exponential function $\exp(\cdot)$, and $\tilde{i}_t$ and $\tilde{f}_t$ represent the pre-activation values. This ensures that the outputs of the input gate and forget gate are positive and uses these outputs for the subsequent nonlinear activation and regulation of information flow [31].
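To make the sLSTM recurrence concrete, the following pure-Python sketch performs one scalar update step combining the exponential gating and normalizer equations above with the cell update from Section 2.2.2. The stabilizer state used in the published xLSTM to keep the exponentials bounded is omitted here for clarity.

```python
import math

def slstm_step(c_prev, n_prev, i_pre, f_pre, o_gate, z_cand):
    """One scalar sLSTM update following the equations above; i_pre and
    f_pre are pre-activations, with exponential gating per Beck et al. [31].
    (The published xLSTM adds a stabilizer state against overflow of the
    exponentials; omitted here for clarity.)"""
    i_t = math.exp(i_pre)              # input gate, exponential activation
    f_t = math.exp(f_pre)              # forget gate, exponential activation
    c_t = f_t * c_prev + i_t * z_cand  # cell state update (Section 2.2.2)
    n_t = f_t * n_prev + i_t           # normalizer state update
    h_t = o_gate * (c_t / n_t)         # normalized hidden output
    return c_t, n_t, h_t

c, n, h = slstm_step(c_prev=0.5, n_prev=1.0, i_pre=-1.0,
                     f_pre=0.2, o_gate=0.8, z_cand=0.3)
```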
mLSTM extends the vector operations in the original LSTM algorithm to matrix operations, significantly enhancing the model’s memory capacity and parallel processing capability. The algorithm for updating the memory cell through matrix operations in mLSTM is as follows:
$$C_t = f_t \odot C_{t-1} + i_t \odot \tanh(W_c [h_{t-1}, x_t] + b_c)$$
where $C_t$ represents the memory cell matrix at the current time step, $\odot$ denotes element-wise multiplication, $i_t$ represents the input gate matrix, $[h_{t-1}, x_t]$ is the matrix formed by concatenating the hidden state $h_{t-1}$ from the previous time step with the input $x_t$ at the current time step, $W_c$ is a weight matrix, and $b_c$ is a bias term. The hidden state in mLSTM is computed as:
$$\tilde{h}_t = C_t q_t / \max\{|n_t^{\top} q_t|, 1\}$$
where $C_t$ represents the memory cell matrix at the current time step, $\tilde{h}_t$ represents the hidden state at the current time step, $n_t$ is the normalizer state, and $q_t$ denotes the query input. The parallelization capability of mLSTM significantly improves computational efficiency when processing long sequences: by eliminating memory mixing, it enables the parallel capture and processing of high-dimensional information in tokens and, thus, accelerates training and inference [31].

2.3. Model Training and Validation

2.3.1. Model Training

The 7 datasets mentioned in Section 2.1 were used as the input of 4 deep learning models. The Min–Max normalization technique was employed to normalize the data, and the algorithm formula for this process is as follows:
$$x' = \frac{x - \min}{\max - \min}$$
where $x$ represents the value of a single data point, and $\min$ and $\max$ are the minimum and maximum values in the corresponding column of data. This technique scales the original data to the range [0, 1]; it preserves the original distribution of the data while unifying the scale, making different features or variables comparable.
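A direct column-wise implementation of this scaling might look like the following sketch (NumPy assumed; the function name and example values are illustrative):

```python
import numpy as np

def min_max_normalize(column: np.ndarray) -> np.ndarray:
    """Scale one feature column to [0, 1]: x' = (x - min) / (max - min)."""
    col_min, col_max = column.min(), column.max()
    return (column - col_min) / (col_max - col_min)

angles = np.array([12.5, 30.0, 47.5, 65.0])
print(min_max_normalize(angles))  # [0. 0.333 0.667 1.] (approximately)
```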

2.3.2. Model Validation

K-fold cross-validation was chosen in this study for model validation and to mitigate overfitting. With 10-fold cross-validation (K = 10), the shuffled dataset was evenly divided into 10 subsets of equal size, and 10 iterations were performed; in each iteration, 9 subsets were used as the training set and the remaining subset as the test set. The squared correlation coefficient (R2), the Mean Absolute Percentage Error (MAPE), and the root Mean Squared Error (rMSE) between the predicted and actual values were computed in Python (and visualized with the matplotlib library) to evaluate the performance of the 4 models in the vertical-GRF prediction task. The formula for R2 is as follows:
$$R^2 = 1 - \frac{\sum_i \left(\hat{y}^{(i)} - y^{(i)}\right)^2}{\sum_i \left(\bar{y} - y^{(i)}\right)^2}$$
where $y^{(i)}$ represents the true value, $\hat{y}^{(i)}$ represents the predicted value, $\bar{y}$ represents the sample mean, $\sum_i (\hat{y}^{(i)} - y^{(i)})^2$ represents the error produced by the predictions, and $\sum_i (\bar{y} - y^{(i)})^2$ represents the error produced by the mean. A larger R2 is better; when the prediction model makes no errors, R2 reaches its maximum value of 1.
The formula for MAPE is as follows:
$$\mathrm{MAPE} = \frac{100\%}{n} \sum_{i=1}^{n} \left| \frac{\hat{y}_i - y_i}{y_i} \right|$$
where $y_i$ is the true value and $\hat{y}_i$ represents the predicted value. The range is $[0, +\infty)$; a MAPE of 0% indicates a perfect model, while a MAPE greater than 100% indicates a poor model.
The formula for the rMSE is as follows:
$$\mathrm{rMSE} = \sqrt{\frac{1}{n} \sum_{i=1}^{n} \left(y_i - \hat{y}_i\right)^2}$$
where $n$ represents the number of samples, $y_i$ represents the actual value, and $\hat{y}_i$ represents the predicted value. A smaller rMSE indicates a smaller difference between the predicted and actual values, implying more accurate predictions.
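Putting the validation scheme and the three metrics together, a minimal sketch might look as follows. scikit-learn's KFold is an assumption (the text does not name the splitting library), and the data, seed, and shapes are placeholders.

```python
import numpy as np
from sklearn.model_selection import KFold

def r2(y_true: np.ndarray, y_pred: np.ndarray) -> float:
    ss_res = np.sum((y_pred - y_true) ** 2)          # prediction error
    ss_tot = np.sum((y_true.mean() - y_true) ** 2)   # error of the mean
    return float(1.0 - ss_res / ss_tot)

def mape(y_true: np.ndarray, y_pred: np.ndarray) -> float:
    return float(100.0 * np.mean(np.abs((y_pred - y_true) / y_true)))

def rmse(y_true: np.ndarray, y_pred: np.ndarray) -> float:
    return float(np.sqrt(np.mean((y_true - y_pred) ** 2)))

# 10-fold split of the 530 trials, as described above
kf = KFold(n_splits=10, shuffle=True, random_state=0)
X = np.random.rand(530, 909)   # placeholder for the M1 input matrix
for train_idx, test_idx in kf.split(X):
    pass  # train on X[train_idx]; evaluate r2/mape/rmse on X[test_idx]
```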

3. Results

3.1. Parameters of Deep Learning Models

The seven datasets mentioned in the data processing section were used as the inputs of the four deep learning models (CNN-xLSTM, CNN-sLSTM, CNN-mLSTM, and CNN-LSTM). After training and testing multiple combinations of variant structures, a CNN-xLSTM model that integrates an mLSTM module with an sLSTM module was ultimately constructed in this study. Table 1 shows the optimal parameter configuration of each model. The training and testing losses under these configurations are shown in Figure 3.
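To illustrate how the Table 1 configuration would be applied, the following sketch wires the batch size (128), hidden size (64), and epoch count (25) into a standard PyTorch training loop. The optimizer, learning rate, loss function, and stand-in model are assumptions, as they are not reported in the text.

```python
import torch
from torch.utils.data import DataLoader, TensorDataset

# Table 1 hyperparameters for CNN-xLSTM; optimizer and learning rate assumed.
BATCH_SIZE, HIDDEN_SIZE, EPOCHS = 128, 64, 25

# Stand-in model so the sketch runs on its own; the study's CNN-xLSTM would
# replace this (see the CNN-LSTM sketch in Section 2.2.2 for the shape).
model = torch.nn.Sequential(
    torch.nn.Flatten(),                    # (batch, 9, 101) -> (batch, 909)
    torch.nn.Linear(9 * 101, HIDDEN_SIZE),
    torch.nn.ReLU(),
    torch.nn.Linear(HIDDEN_SIZE, 101),     # 101-point vGRF curve
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = torch.nn.MSELoss()

# Placeholder tensors shaped like the M1 dataset (530 trials, 9 channels, 101 steps).
loader = DataLoader(TensorDataset(torch.randn(530, 9, 101), torch.randn(530, 101)),
                    batch_size=BATCH_SIZE, shuffle=True)

for epoch in range(EPOCHS):
    for xb, yb in loader:
        optimizer.zero_grad()
        loss_fn(model(xb), yb).backward()
        optimizer.step()
```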

3.2. Prediction Results and Model Performance

The vertical-GRF prediction results obtained by training the four deep learning models with data from different joints and planes (M1 (3Joints, 3Planes), M2 (Ankle, 3Planes), M3 (Hip, 3Planes), M4 (Knee, 3Planes), M5 (3Joints, Sagittal), M6 (3Joints, Frontal), and M7 (3Joints, Transversal)) are shown in Figure 4. Table 2 shows the R2, MAPE, and rMSE of the four models in each prediction task. As shown in Figure 4 and Table 2, when M1 (3Joints, 3Planes) was used as the input dataset, the fit of the predictions of the four models was best (R2xLSTM = 0.909 ± 0.064, R2sLSTM = 0.748 ± 0.056, R2mLSTM = 0.791 ± 0.077, and R2LSTM = 0.742 ± 0.040), which means that the angles of the ankle, hip, and knee joints on all three planes during the running stance phase contributed the most to accurate vertical-GRF predictions.
In addition, when the input datasets were M2 (Ankle, 3Planes), M5 (3Joints, Sagittal), and M6 (3Joints, Frontal), the four models also fitted the vertical-GRF curve relatively well, whereas with M3 (Hip, 3Planes), M4 (Knee, 3Planes), and M7 (3Joints, Transversal) the fit was not ideal. As shown in Figure 4, Figure 5, and Table 2, across the seven input datasets, the CNN-xLSTM model consistently showed the best fit for the vertical-GRF curve among the four models, i.e., the highest R2 (0.909 ± 0.064), the lowest MAPE (2.18 ± 0.09), and the lowest rMSE (0.061 ± 0.008).

3.3. Result Validation

Joint angle and vertical ground reaction force data at five different running speeds (8 km/h, 10 km/h, 12 km/h, 14 km/h, and 16 km/h) were collected in this study. The CNN-xLSTM model with an sLSTM block and an mLSTM block showed the best vertical-GRF prediction performance among the four models (as described in Section 2.2). Datasets at the five speeds were input into CNN-xLSTM to validate the accuracy of the predicted vertical-GRF values at different running speeds, and the four cases with better prediction performance (M1 (3Joints, 3Planes), M2 (Ankle, 3Planes), M5 (3Joints, Sagittal), and M6 (3Joints, Frontal)) are discussed in this section. The R2, MAPE, and rMSE of CNN-xLSTM at the five running speeds are shown in Table 3 and Figure 6. As shown in Table 3, the performance of CNN-xLSTM in the vertical-GRF prediction tasks remained relatively stable at all five running speeds. Each case performed best at 12 km/h, and the case of three joints and three planes contributed most to the prediction results (R2 = 0.879 ± 0.068, MAPE = 2.19 ± 0.12, rMSE = 0.063 ± 0.010). The performance of the CNN-xLSTM model developed in this study is compared with that of the models from the studies cited in this article in Table 4.

4. Discussion

The purpose of this study was to develop an xLSTM-based deep learning model to predict the vertical ground reaction force during the stance phase of running by inputting the angles of the lower-limb joints (ankle, hip, and knee) on three planes (sagittal, frontal, and transversal) and to explore the influence of the angles of different joints and different motion planes on the accuracy of prediction results. We first collected lower-limb joint angles and vertical ground reaction forces at five speeds from 12 healthy male runners during the running stance phase with Xsens sensors. The collected data were divided into seven datasets, including three lower-limb joints (ankle, hip, and knee) and three planes (sagittal, frontal, and transversal), which were set as the input datasets for each of the four deep learning models (CNN-xLSTM, CNN-sLSTM, CNN-mLSTM, and CNN-LSTM). The CNN-xLSTM model showed the best performance in the vertical-GRF prediction tasks among the four models (R2 = 0.909 ± 0.064, MAPE = 2.18 ± 0.09, rMSE = 0.061 ± 0.008).

4.1. Contribution of Different Joint Angles

Most previous studies used joint acceleration or angular velocity from wearable IMUs as the input of machine learning or deep learning models to predict the ground reaction force [25,26,29]; few have predicted ground reaction forces from joint angles alone. The Xsens wearable sensors used in this study allowed us to collect joint angle data directly, skipping conversion through other variables. As a type of intuitive geometric data, joint angles can be captured directly by cameras, identified by a specific algorithm, and then set as the input of subsequent algorithms, which may help break away from the traditional mode of measuring ground reaction forces with laboratory force plates [12]. Lower-limb joint angle data from three joints and three planes were used as inputs to the deep learning models in this study to predict vertical GRFs during the running stance phase. Among the seven inputs described in Section 2.1, all models achieved their best prediction performance when M1 (3Joints, 3Planes) was input, indicating that the data from the three joints on all three planes contributed the most to accurate vertical ground reaction force prediction. Among the three joints, the four models performed better when the ankle angles on the three planes were input than with the angles of either of the other two joints; among the three planes, the prediction performance was also relatively good when the joint angles on the sagittal and frontal planes were input (Figure 4). These findings not only reveal the influence of different lower-limb joint angles on different planes on the accurate prediction of vertical ground reaction forces but also provide a reference for the configuration of sport analysis equipment and for the rehabilitation and training of athletes or sport injury patients [18,43]. The joint angles on the sagittal and frontal planes have a significant impact on predicting the ground reaction force during running. Changes in joint angles on the sagittal plane are directly related to lower-limb propulsion movements and force output, affecting the direction and magnitude of the ground reaction force, whereas the joint angles on the frontal plane, although small, are crucial for maintaining running stability and also have an indirect influence on the ground reaction force. When joint angles on all three planes are considered together, deep learning models can capture the lower-limb movement state more fully and predict the ground reaction force more accurately [44]. At the same time, the joint angles on the sagittal and frontal planes have independent predictive value, reflecting information on lower-limb propulsion and stability, respectively; understanding these joint angles is of great significance for optimizing running posture and improving running efficiency.

4.2. Performance of CNN-xLSTM

As an optimization of LSTM, xLSTM was proposed by Sepp Hochreiter and his team. xLSTM combines the advantages of the sLSTM and mLSTM variants for different tasks, and the modules can be freely selected and combined in Python to suit a given task [31,32]. Since its proposal, xLSTM has demonstrated powerful performance in prediction tasks in many fields [34,45,46]. In this study, we developed a CNN-xLSTM model to predict vertical GRFs during running. CNN-xLSTM was constructed by connecting a CNN module to an xLSTM containing an sLSTM module and an mLSTM module. The CNN block extracted features of the lower-limb joint angle time series over the running stance phase and fed them into the xLSTM for vertical-GRF prediction. Across the seven input datasets, CNN-xLSTM consistently showed the best performance among the four models used in this study (R2 = 0.909 ± 0.064, MAPE = 2.18 ± 0.09, rMSE = 0.061 ± 0.008). We also found that CNN-xLSTM fitted the double-peak characteristic of the vertical GRF in the rear-foot strike pattern particularly well [3,9] (Figure 4). To validate robustness, we tested the four datasets with a relatively large contribution to the prediction results at five speeds (8 km/h, 10 km/h, 12 km/h, 14 km/h, and 16 km/h); the prediction performance of CNN-xLSTM for the vertical ground reaction force remained relatively stable at all five speeds, indicating that the developed model is robust (Figure 6, Table 3). In addition, CNN-xLSTM was compared with the previous models mentioned in this paper that attempted to predict the vertical GRF, and its prediction performance was better than that of most models proposed by other studies that used acceleration as the input (Table 4). Although CNN-xLSTM demonstrated good accuracy, it should be acknowledged that the training and test datasets used by the compared models differ from those in this study, and the data collection methods may also differ. In previous studies, it may have been difficult to achieve good results in GRF prediction based on a single type of data or feature. This study used only the lower-limb joint angles on three planes and still accurately predicted the vertical GRF during running; this result may depend on how the ‘m’ and ‘s’ modules in xLSTM analyze the complex relationship between the joint angle and ground reaction force time series.

4.3. Prospects and Limitations

In this study, the vertical ground reaction force during running was predicted from the angles of the lower-limb joints on different motion planes using deep learning methods. The results provide a reference for future real-time motion detection and sport injury prediction. The joint angle is a type of geometric data that can be collected and calculated by image capture devices in real time. At the same time, athletes or patients with sport injuries can adjust their stride length, cadence, and other movement patterns in time according to the predicted results to improve sport performance and support rehabilitation.
There are also limitations to this study. Data on lower-limb joint angles and vertical ground reaction forces during running were collected from only 12 healthy adult male runners, without considering female runners; this lack of sample diversity may affect the generalizability of the results. Additionally, when collecting data at five different speeds, equipment limitations prevented strict control of running speed, and there may be errors in the speeds converted from the photoelectric gate timings. Furthermore, the deep learning model developed in this study was not trained and tested on a public dataset, and results obtained on a public dataset may deviate from those reported here.

5. Conclusions

This study developed a CNN-xLSTM model that accurately predicted the vertical ground reaction force during the stance phase of running from the joint angles of the lower limbs. The study also explored the impact of different joint angles on different planes on the prediction results. Additionally, we tested the predictive performance of the developed model across five distinct running speeds. The current findings may not only offer an alternative to the traditional mode of measuring the GRF with laboratory force plates but also provide a reference for the configuration of sport analysis equipment and for the rehabilitation and training of athletes or sport injury patients.

Author Contributions

T.C. and D.X. collaborated on the design and performance of the experiments; T.C., S.S. and Z.Z. performed the research; S.S. and H.Z. provided help and advice on the research study; T.C., S.S. and Z.Z. analyzed the data; and T.C., D.X., Z.Z. and Y.G. wrote the manuscript. All authors contributed to editorial changes in the manuscript. All authors have read and agreed to the published version of the manuscript.

Funding

This study was sponsored by the Zhejiang Provincial Natural Science Foundation of China for Distinguished Young Scholars (LR22A020002), the Zhejiang Provincial Key Research and Development Program of China (2021C03130), the Zhejiang Provincial Natural Science Foundation (LTGY23H040003), the Ningbo Key R&D Program (2022Z196), the Ningbo Natural Science Foundation (20221JCGY010532, 20221JCGY010607), the Public Welfare Science & Technology Project of Ningbo, China (2021S134), and the Zhejiang Rehabilitation Medical Association Scientific Research Special Fund (ZKKY2023001).

Institutional Review Board Statement

This study complied with the principles laid down in the Declaration of Helsinki. Ningbo University’s Ethics Committee approved the study protocol (Approval Number: TY2024037), and all subjects provided written informed consent.

Informed Consent Statement

Informed consent was obtained from all subjects involved in the study.

Data Availability Statement

Data are contained within the article.

Conflicts of Interest

The authors declare no conflicts of interest.

Abbreviations

The following abbreviations are used in this manuscript:
GRF: Ground Reaction Force
IMU: Inertial Measurement Unit
CNN: Convolutional Neural Network
LSTM: Long Short-Term Memory
xLSTM: Extended Long Short-Term Memory
sLSTM: Scalar Long Short-Term Memory
mLSTM: Matrix Long Short-Term Memory
R2: Squared Correlation Coefficient
MAPE: Mean Absolute Percentage Error
rMSE: root Mean Squared Error

References

  1. Van Oeveren, B.T.; de Ruiter, C.J.; Beek, P.J.; van Dieën, J.H. The biomechanics of running and running styles: A synthesis. Sports Biomech. 2024, 23, 516–554. [Google Scholar] [CrossRef] [PubMed]
  2. Souza, R.B. An evidence-based videotaped running biomechanics analysis. Phys. Med. Rehabil. Clin. 2016, 27, 217–236. [Google Scholar] [CrossRef] [PubMed]
  3. Leardini, A.; Benedetti, M.G.; Berti, L.; Bettinelli, D.; Nativo, R.; Giannini, S. Rear-foot, mid-foot and fore-foot motion during the stance phase of gait. Gait Posture 2007, 25, 453–462. [Google Scholar] [CrossRef] [PubMed]
  4. Van Hooren, B.; Bosch, F. Is there really an eccentric action of the hamstrings during the swing phase of high-speed running? Part I: A critical review of the literature. J. Sports Sci. 2017, 35, 2313–2321. [Google Scholar] [CrossRef]
  5. Zhou, H.; Ugbolue, U.C. Is there a relationship between strike pattern and injury during running: A review. Phys. Act. Health 2019, 3, 127–134. [Google Scholar] [CrossRef]
  6. Moore, I.S. Is there an economical running technique? A review of modifiable biomechanical factors affecting running economy. Sports Med. 2016, 46, 793–807. [Google Scholar] [CrossRef]
  7. Clark, K.P.; Ryan, L.J.; Weyand, P.G. Foot speed, foot-strike and footwear: Linking gait mechanics and running ground reaction forces. J. Exp. Biol. 2014, 217, 2037–2040. [Google Scholar]
  8. Yu, P.; Cen, X.; Mei, Q.; Wang, A.; Gu, Y.; Fernandez, J. Differences in intra-foot movement strategies during locomotive tasks among chronic ankle instability, copers and healthy individuals. J. Biomech. 2024, 162, 111865. [Google Scholar] [CrossRef]
  9. Gruber, A.H.; Edwards, W.B.; Hamill, J.; Derrick, T.R.; Boyer, K.A. A comparison of the ground reaction force frequency content during rearfoot and non-rearfoot running patterns. Gait Posture 2017, 56, 54–59. [Google Scholar] [CrossRef]
  10. Hall, J.P.; Barton, C.; Jones, P.R.; Morrissey, D. The biomechanical differences between barefoot and shod distance running: A systematic review and preliminary meta-analysis. Sports Med. 2013, 43, 1335–1353. [Google Scholar] [CrossRef]
  11. Tongen, A.; Wunderlich, R.E. Biomechanics of running and walking. Math. Sports 2010, 43, 315–327. [Google Scholar]
  12. Mizoguchi, M.; Calame, C. Possibilities and limitation of today’s force plate technology. Gait Posture 1995, 4, 268. [Google Scholar] [CrossRef]
  13. Oh, S.E.; Choi, A.; Mun, J.H. Prediction of ground reaction forces during gait based on kinematics and a neural network model. J. Biomech. 2013, 46, 2372–2380. [Google Scholar] [CrossRef] [PubMed]
  14. Fluit, R.; Andersen, M.S.; Kolk, S.; Verdonschot, N.; Koopman, H.F. Prediction of ground reaction forces and moments during various activities of daily living. J. Biomech. 2014, 47, 2321–2329. [Google Scholar] [CrossRef] [PubMed]
  15. Xu, D.; Zhou, H.; Quan, W.; Gusztav, F.; Baker, J.S.; Gu, Y. Adaptive neuro-fuzzy inference system model driven by the non-negative matrix factorization-extracted muscle synergy patterns to estimate lower limb joint movements. Comput. Methods Programs Biomed. 2023, 242, 107848. [Google Scholar] [CrossRef]
  16. Chaaban, C.R.; Berry, N.T.; Armitano-Lago, C.; Kiefer, A.W.; Mazzoleni, M.J.; Padua, D.A. Combining inertial sensors and machine learning to predict vGRF and knee biomechanics during a double limb jump landing task. Sensors 2021, 21, 4383. [Google Scholar] [CrossRef]
  17. Greenhalgh, A.; Sinclair, J.; Protheroe, L.; Chockalingam, N. Predicting impact shock magnitude: Which ground reaction force variable should we use. Int. J. Sports Sci. Eng. 2012, 6, 225–231. [Google Scholar]
  18. Johnson, C.D.; Tenforde, A.S.; Outerleys, J.; Reilly, J.; Davis, I.S. Impact-related ground reaction forces are more strongly associated with some running injuries than others. Am. J. Sports Med. 2020, 48, 3072–3080. [Google Scholar] [CrossRef]
  19. Xu, D.; Quan, W.; Zhou, H.; Sun, D.; Baker, J.S.; Gu, Y. Explaining the differences of gait patterns between high and low-mileage runners with machine learning. Sci. Rep. 2022, 12, 2981. [Google Scholar] [CrossRef]
  20. Xu, D.; Zhou, H.; Quan, W.; Gusztav, F.; Wang, M.; Baker, J.S.; Gu, Y. Accurately and effectively predict the ACL force: Utilizing biomechanical landing pattern before and after-fatigue. Comput. Methods Programs Biomed. 2023, 241, 107761. [Google Scholar] [CrossRef]
  21. Xu, D.; Zhou, H.; Quan, W.; Jiang, X.; Liang, M.; Li, S.; Ugbolue, U.C.; Baker, J.S.; Gusztav, F.; Ma, X. A new method proposed for realizing human gait pattern recognition: Inspirations for the application of sports and clinical gait analysis. Gait Posture 2024, 107, 293–305. [Google Scholar] [CrossRef] [PubMed]
  22. Xu, D.; Zhou, H.; Quan, W.; Ma, X.; Chon, T.-E.; Fernandez, J.; Gusztav, F.; Kovács, A.; Baker, J.S.; Gu, Y. New insights optimize landing strategies to reduce lower limb injury risk. Cyborg Bionic Syst. 2024, 5, 0126. [Google Scholar] [CrossRef] [PubMed]
  23. Halilaj, E.; Rajagopal, A.; Fiterau, M.; Hicks, J.L.; Hastie, T.J.; Delp, S.L. Machine learning in human movement biomechanics: Best practices, common pitfalls, and new opportunities. J. Biomech. 2018, 81, 1–11. [Google Scholar] [CrossRef] [PubMed]
  24. Scheltinga, B.L.; Kok, J.N.; Buurke, J.H.; Reenalda, J. Estimating 3D ground reaction forces in running using three inertial measurement units. Front. Sports Act. Living 2023, 5, 1176466. [Google Scholar] [CrossRef]
  25. Pogson, M.; Verheul, J.; Robinson, M.A.; Vanrenterghem, J.; Lisboa, P. A neural network method to predict task-and step-specific ground reaction force magnitudes from trunk accelerations during running activities. Med. Eng. Phys. 2020, 78, 82–89. [Google Scholar] [CrossRef]
  26. Bogaert, S.; Davis, J.; Vanwanseele, B. Predicting vertical ground reaction force characteristics during running with machine learning. Front. Bioeng. Biotechnol. 2024, 12, 1440033. [Google Scholar] [CrossRef]
  27. Du, X.; Vasudevan, R.; Johnson-Roberson, M. Bio-lstm: A biomechanically inspired recurrent neural network for 3-d pedestrian pose and gait prediction. IEEE Robot. Autom. Lett. 2019, 4, 1501–1508. [Google Scholar] [CrossRef]
  28. Gers, F.A.; Schmidhuber, J.; Cummins, F. Learning to forget: Continual prediction with LSTM. Neural Comput. 2000, 12, 2451–2471. [Google Scholar] [CrossRef]
  29. Alcantara, R.S.; Edwards, W.B.; Millet, G.Y.; Grabowski, A.M. Predicting continuous ground reaction forces from accelerometers during uphill and downhill running: A recurrent neural network solution. PeerJ 2022, 10, e12752. [Google Scholar] [CrossRef]
  30. Donahue, S.R.; Hahn, M.E. Estimation of ground reaction force waveforms during fixed pace running outside the laboratory. Front. Sports Act. Living 2023, 5, 974186. [Google Scholar] [CrossRef]
  31. Beck, M.; Pöppel, K.; Spanring, M.; Auer, A.; Prudnikova, O.; Kopp, M.; Klambauer, G.; Brandstetter, J.; Hochreiter, S. xLSTM: Extended Long Short-Term Memory. arXiv 2024, arXiv:2405.04517. [Google Scholar]
  32. Alkin, B.; Beck, M.; Pöppel, K.; Hochreiter, S.; Brandstetter, J. Vision-LSTM: xLSTM as Generic Vision Backbone. arXiv 2024, arXiv:2406.04303. [Google Scholar]
  33. Alharthi, M.; Mahmood, A. xlstmtime: Long-term time series forecasting with xlstm. AI 2024, 5, 1482–1495. [Google Scholar] [CrossRef]
  34. Chen, T.; Ding, C.; Zhu, L.; Xu, T.; Yan, W.; Ji, D.; Li, Z.; Zang, Y. xLSTM-UNet can be an Effective Backbone for 2D & 3D Biomedical Image Segmentation Better than its Mamba Counterparts. In Proceedings of the IEEE-EMBS International Conference on Biomedical and Health Informatics, Houston, TX, USA, 10–13 November 2024. [Google Scholar]
  35. Fan, X.; Tao, C.; Zhao, J. Advanced stock price prediction with xlstm-based models: Improving long-term forecasting. In Proceedings of the 2024 11th International Conference on Soft Computing & Machine Intelligence (ISCMI), Melbourne, Australia, 22–23 November 2024. [Google Scholar]
  36. Wang, W.; Lian, C.; Zhao, Y.; Zhan, Z. Sensor-Based Gymnastics Action Recognition Using Time-Series Images and a Lightweight Feature Fusion Network. IEEE Sens. J. 2024, 24, 42573–42583. [Google Scholar] [CrossRef]
  37. Giarmatzis, G.; Zacharaki, E.I.; Moustakas, K. Real-time prediction of joint forces by motion capture and machine learning. Sensors 2020, 20, 6933. [Google Scholar] [CrossRef]
  38. Stetter, B.J.; Ringhof, S.; Krafft, F.C.; Sell, S.; Stein, T. Estimation of knee joint forces in sport movements using wearable sensors and machine learning. Sensors 2019, 19, 3690. [Google Scholar] [CrossRef]
  39. Wang, Z.; Qiu, Q.; Chen, S.; Chen, B.; Lv, X. Effects of unstable shoes on lower limbs with different speeds. Phys. Act. Health 2019, 3, 82–88. [Google Scholar] [CrossRef]
  40. De Wit, B.; De Clercq, D.; Aerts, P. Biomechanical analysis of the stance phase during barefoot and shod running. J. Biomech. 2000, 33, 269–278. [Google Scholar] [CrossRef]
  41. Alzubaidi, L.; Zhang, J.; Humaidi, A.J.; Al-Dujaili, A.; Duan, Y.; Al-Shamma, O.; Santamaría, J.; Fadhel, M.A.; Al-Amidie, M.; Farhan, L. Review of deep learning: Concepts, CNN architectures, challenges, applications, future directions. J. Big Data 2021, 8, 53. [Google Scholar] [CrossRef]
  42. Liu, Y.; Fernandez, J. Randomized Controlled Trial of Gastrocnemius Muscle Analysis Using Surface Electromyography and Ultrasound in Different Striking Patterns of Young Women’s Barefoot Running. Phys. Act. Health 2024, 8, 223–233. [Google Scholar] [CrossRef]
  43. Chang, H.; Cen, X. Can running technique modification benefit patellofemoral pain improvement in runners? A systematic review and meta-analysis. Int. J. Biomed. Eng. Technol. 2024, 45, 83–101. [Google Scholar] [CrossRef]
  44. Xu, D.; Lu, J.; Baker, J.S.; Fekete, G.; Gu, Y. Temporal kinematic and kinetics differences throughout different landing ways following volleyball spike shots. Proc. Inst. Mech. Eng. Part P 2022, 236, 200–208. [Google Scholar] [CrossRef]
  45. Fang, Z.; Shi, K.; Han, Q. When Mamba Meets xLSTM: An Efficient and Precise Method with the XLSTM-VMUNet Model for Skin lesion Segmentation. arXiv 2024, arXiv:2411.09363. [Google Scholar]
  46. Zhu, Q.; Cai, Y.; Fan, L. Seg-LSTM: Performance of xLSTM for Semantic Segmentation of Remotely Sensed Images. arXiv 2024, arXiv:2406.14086. [Google Scholar]
Figure 1. An illustration of the study’s structure. The study was divided into 3 parts. (A) The data collection procedure. Xsens sensors and a Vicon 3D motion capture system were used to collect the joint angle and vertical ground reaction force data. (B) The data preprocessing procedure. (C) The GRF prediction procedure. The datasets were set as the input of 4 deep learning models (CNN-xLSTM, CNN-sLSTM, CNN-mLSTM, and CNN-LSTM) in order to compare and analyze the performance of models in the vertical-GRF prediction tasks. (D) The process for each deep learning model. (E) The workflow of the entire study.
Figure 2. The structure of the 4 deep learning models. (A) The CNN block was combined with xLSTM, sLSTM, mLSTM, and LSTM for feature extraction and vertical-GRF prediction. (B) The basic unit of xLSTM, which is a hybrid of sLSTM and mLSTM. (C) The basic unit of sLSTM. (D) The basic unit of mLSTM. (E) The basic unit of LSTM.
Figure 3. The visualization of the training loss and testing loss curve for each deep learning model in each epoch.
Figure 4. Visualization of the vertical-GRF prediction results obtained from different inputs (M1 (3Joints, 3Planes), M2 (Ankle, 3Planes), M3 (Hip, 3Planes), M4 (Knee, 3Planes), M5 (3Joints, Sagittal), M6 (3Joints, Frontal), and M7 (3Joints, Transversal)). Each diagram shows the prediction results and the R2 of the 4 different deep learning models.
Figure 5. Visualization of the MAPE and rMSE obtained from 4 deep learning models (CNN-xLSTM, CNN-sLSTM, CNN-mLSTM, and CNN-LSTM). The lower the MAPE and rMSE, the better the performance of models in fitting the vertical-GRF curve. (A) MAPE of each model with different inputs. (B) rMSE of each model with different inputs.
Figure 6. The visualization of the R2, MAPE, and rMSE of CNN-xLSTM at 5 running speeds in the 4 cases that fitted the vertical-GRF curve better.
Table 1. Optimal parameter configuration of the four deep learning models used in this study.
Model | Batch Size | Hidden Size | Epochs | Stacked Layers | Module
CNN-xLSTM | 128 | 64 | 25 | 1 | ‘m’, ‘s’
CNN-sLSTM | 128 | 64 | 25 | 1 | ‘s’
CNN-mLSTM | 128 | 64 | 25 | 1 | ‘m’
CNN-LSTM | 256 | 128 | 55 | 2 | /
Table 2. The mean value and standard deviation of the R2, rMSE, and MAPE of 4 deep learning models trained by 7 datasets.
Models | Index | M1 | M2 | M3 | M4 | M5 | M6 | M7
CNN-xLSTM | R2 | 0.909 ± 0.064 | 0.814 ± 0.053 | 0.757 ± 0.092 | 0.709 ± 0.074 | 0.836 ± 0.042 | 0.861 ± 0.057 | 0.671 ± 0.051
 | MAPE | 2.18 ± 0.09 | 4.38 ± 0.21 | 9.95 ± 0.34 | 10.01 ± 0.41 | 3.17 ± 0.12 | 3.09 ± 0.18 | 12.82 ± 0.33
 | rMSE | 0.061 ± 0.008 | 0.089 ± 0.010 | 0.119 ± 0.014 | 0.132 ± 0.021 | 0.074 ± 0.007 | 0.070 ± 0.006 | 0.138 ± 0.017
CNN-sLSTM | R2 | 0.748 ± 0.056 | 0.702 ± 0.044 | 0.622 ± 0.069 | 0.656 ± 0.046 | 0.711 ± 0.055 | 0.729 ± 0.073 | 0.491 ± 0.067
 | MAPE | 3.71 ± 0.32 | 6.74 ± 0.59 | 17.06 ± 0.60 | 13.88 ± 0.77 | 6.27 ± 0.47 | 5.78 ± 0.58 | 15.06 ± 0.76
 | rMSE | 0.097 ± 0.011 | 0.114 ± 0.009 | 0.143 ± 0.014 | 0.161 ± 0.023 | 0.092 ± 0.007 | 0.094 ± 0.005 | 0.168 ± 0.028
CNN-mLSTM | R2 | 0.791 ± 0.027 | 0.713 ± 0.031 | 0.608 ± 0.035 | 0.627 ± 0.025 | 0.709 ± 0.031 | 0.721 ± 0.027 | 0.626 ± 0.031
 | MAPE | 5.56 ± 0.38 | 8.58 ± 1.04 | 19.49 ± 1.13 | 17.39 ± 0.93 | 9.19 ± 0.72 | 10.20 ± 0.97 | 16.38 ± 1.45
 | rMSE | 0.092 ± 0.012 | 0.112 ± 0.013 | 0.178 ± 0.032 | 0.209 ± 0.014 | 0.093 ± 0.013 | 0.096 ± 0.008 | 0.158 ± 0.024
CNN-LSTM | R2 | 0.742 ± 0.040 | 0.683 ± 0.032 | 0.619 ± 0.034 | 0.628 ± 0.041 | 0.728 ± 0.039 | 0.717 ± 0.041 | 0.653 ± 0.042
 | MAPE | 7.17 ± 0.45 | 8.12 ± 0.53 | 14.44 ± 1.05 | 15.07 ± 1.50 | 8.50 ± 0.80 | 8.89 ± 0.49 | 13.54 ± 0.98
 | rMSE | 0.104 ± 0.018 | 0.108 ± 0.006 | 0.135 ± 0.029 | 0.173 ± 0.037 | 0.112 ± 0.010 | 0.105 ± 0.008 | 0.172 ± 0.015
Table 3. The mean value and standard deviation of the R2, MAPE, and rMSE of CNN-xLSTM at 5 running speeds in the 4 cases that fitted the vertical-GRF curve better.
Inputs | Index | 8 km/h | 10 km/h | 12 km/h | 14 km/h | 16 km/h
3 Joints and 3 Planes | R2 | 0.842 ± 0.047 | 0.854 ± 0.043 | 0.879 ± 0.068 | 0.861 ± 0.054 | 0.836 ± 0.032
 | MAPE | 2.45 ± 0.18 | 2.33 ± 0.09 | 2.19 ± 0.12 | 2.36 ± 0.11 | 2.71 ± 0.08
 | rMSE | 0.069 ± 0.012 | 0.067 ± 0.007 | 0.063 ± 0.010 | 0.065 ± 0.009 | 0.070 ± 0.013
Ankle and 3 Planes | R2 | 0.801 ± 0.037 | 0.804 ± 0.039 | 0.811 ± 0.036 | 0.807 ± 0.044 | 0.793 ± 0.043
 | MAPE | 4.69 ± 0.22 | 4.63 ± 0.26 | 4.45 ± 0.19 | 4.59 ± 0.18 | 4.78 ± 0.20
 | rMSE | 0.116 ± 0.018 | 0.109 ± 0.013 | 0.101 ± 0.011 | 0.105 ± 0.009 | 0.127 ± 0.021
3 Joints and Sagittal | R2 | 0.819 ± 0.062 | 0.821 ± 0.058 | 0.831 ± 0.055 | 0.829 ± 0.060 | 0.813 ± 0.052
 | MAPE | 3.73 ± 0.13 | 3.39 ± 0.15 | 3.24 ± 0.11 | 3.29 ± 0.12 | 3.87 ± 0.17
 | rMSE | 0.093 ± 0.019 | 0.089 ± 0.011 | 0.082 ± 0.012 | 0.085 ± 0.010 | 0.104 ± 0.014
3 Joints and Frontal | R2 | 0.811 ± 0.038 | 0.832 ± 0.050 | 0.857 ± 0.043 | 0.841 ± 0.037 | 0.803 ± 0.046
 | MAPE | 3.65 ± 0.21 | 3.27 ± 0.19 | 3.06 ± 0.13 | 3.12 ± 0.11 | 3.71 ± 0.16
 | rMSE | 0.091 ± 0.013 | 0.085 ± 0.017 | 0.079 ± 0.012 | 0.082 ± 0.012 | 0.093 ± 0.011
Table 4. The comparison of performance between CNN-xLSTM and the models mentioned in other studies that aim to predict the vertical GRF.
Studies | Models | Inputs | R2 | MAPE (%) | rMSE
Oh et al. (2013) [13] | ANN | Centre of Mass and Acceleration of Segments and Joints | 0.982 | / | 0.058 ± 0.010
Pogson et al. (2020) [25] | PCA-MLP-ANN | Trunk Acceleration | 0.900 | / | /
Alcantara et al. (2022) [29] | LSTM | Height, Mass, Speed, Slope, and Running Pattern | / | / | 0.064 ± 0.015
Scheltinga et al. (2023) [24] | ensANN | Acceleration of Pelvis and Tibias | 0.960 | / | 0.066 ± 0.001
Bogaert et al. (2024) [26] | Lasso | 3-Dimensional Sacral Acceleration | 0.870 | 3.29 | 0.106
Donahue et al. (2023) [30] | LSTM | 3D Accelerations and Angular Velocities | / | / | 0.189
This Study | CNN-xLSTM | Joint Angles of Ankle, Hip, and Knee | 0.973 | 2.18 ± 0.09 | 0.061 ± 0.008

