Article

Computer Vision and Machine Learning-Based Gait Pattern Recognition for Flat Fall Prediction

1 State Key Laboratory of Mechanical System and Vibration, Shanghai Jiao Tong University, Shanghai 200240, China
2 Orthopaedic Surgery and Sports Medicine, Detroit Medical Center, Detroit, MI 48201, USA
3 South Texas Health System—McAllen Department of Trauma, McAllen, TX 78503, USA
* Authors to whom correspondence should be addressed.
Sensors 2022, 22(20), 7960; https://doi.org/10.3390/s22207960
Submission received: 21 September 2022 / Revised: 12 October 2022 / Accepted: 14 October 2022 / Published: 19 October 2022
(This article belongs to the Section Biomedical Sensors)

Abstract

Background: Gait recognition has been applied in predicting the probability of elderly flat-ground falls, in functional evaluation during rehabilitation, and in the training of patients with lower extremity motor dysfunction. Distinguishing between seemingly similar gait kinematic patterns associated with different pathological entities is a challenge for the clinician, and automatic identification and judgment of abnormal gait remains a significant challenge in clinical practice. The long-term goal of our study is to develop a gait recognition computer vision system using artificial intelligence (AI) and machine learning (ML) computing. This study aims to find an optimal ML algorithm that uses computer vision techniques and variables measured from the lower limbs to classify gait patterns in healthy people, and to determine the feasibility of computer vision and ML computing in discriminating different gait patterns associated with flat-ground falls. Methods: We used the Kinect® Motion system to capture spatiotemporal gait data from seven healthy subjects in three walking trials: normal gait, pelvic obliquity gait, and knee hyperextension gait. Four classification methods, the convolutional neural network (CNN), support vector machine (SVM), K-nearest neighbors (KNN), and long short-term memory (LSTM) neural network, were used to automatically classify the three gait patterns. Overall, 750 sets of data were collected, and the dataset was divided into 80% for algorithm training and 20% for evaluation. Results: The SVM and KNN had higher accuracy than the CNN and LSTM. The SVM (94.9 ± 3.36%) had the highest accuracy in the classification of gait patterns, followed by the KNN (94.0 ± 4.22%); the accuracy of the CNN was 87.6 ± 7.50% and that of the LSTM was 83.6 ± 5.35%. Conclusions: This study shows that the proposed AI/ML techniques can be used to design gait biometric systems and machine vision for gait pattern recognition. Potentially, this method can be used to remotely evaluate elderly patients and help clinicians make decisions regarding disposition, follow-up, and treatment.

1. Introduction

As the population ages, lower extremity diseases and motor nerve dysfunction have become more common, leading to an increased rate of elderly adults falling on flat ground. Approximately 32% of community-dwelling elderly adults over 75 years of age will fall at least once during a one-year interval, and 24% of these individuals will sustain serious injuries [1,2]. In the United Kingdom (UK), the medical costs related to falls are substantial; fall-related injuries in adults over 60 years have been reported to cost more than £981 million per year [3]. In the United States, total healthcare spending for elderly falls ranged from $48 million in Alaska to $4.4 billion in California; Medicare spending attributable to older adult falls ranged from $22 million in Alaska to $3 billion in Florida, as reported in 2014; and the lifetime medical costs of fall-related injuries ranged from $68 million in Vermont to $2.8 billion in Florida [3]. As such, falling has become a costly problem in the growing elderly population [1,2,4].
For this reason, interest in the detection and recognition of fall risk has been growing, driving the implementation of safety measures [5] in high-risk work environments, hospitals, and nursing homes [6].
A person’s pattern of walking can be characterized by gait analysis. Gait and balance functions decline through the course of disorders including stroke, dementia, Parkinson’s disease, arthritis, and others [7,8,9], and gait can serve as a marker of changes in physical status and fall risk [10]. Human gait refers to the behavioral characteristics of the lower limbs during upright walking. A normal human gait cycle is usually natural, coordinated between the two legs, energy-efficient, and periodic. Abnormal gait can develop before the human body falls, and numerous conditions may cause an abnormal gait. In the field of medical rehabilitation, the identification and evaluation of abnormal gait patterns can significantly guide lower limb training regimens and flat-ground fall prevention strategies.
By monitoring the gait patterns of elderly patients, proper preventive measures can be recommended to reduce the risk of flat ground falls. Human vision may not accurately recognize or quantify the changes in the gait pattern. Therefore, automatic gait recognition using computer vision has become a hot research topic in the biomechanics and healthcare literature in recent years [10,11,12].
Computer-vision technology is used to acquire gait kinematics information, including angles, velocity, and acceleration of the joints, based on Kinect skeletal tracking sequences [12,13]. Gait analysis involves many interdependent measures that can be difficult to interpret due to the vast amount of data and their inter-relations, and a significant amount of labor is required for off-line data analysis [14]. This has led to the development of machine learning (ML) for data analysis, and ML has been used for gait analysis [13,14]. Dolatabadi et al. [13] used two ML approaches, an instance-based discriminative classifier (k-nearest neighbor) and a dynamical generative classifier (Gaussian process latent variable model), to distinguish between healthy and pathological gaits with an accuracy of up to 96% when walking at a fast pace. Ortells et al. [15] developed vision-based gait-impairment analysis to distinguish a normal gait from seven impaired gaits for aided diagnosis. Zakaria et al. [16] classified the gait of children with Autism Spectrum Disorder (ASD) against normal gait using a depth camera and found that the support vector machine (SVM) classifier reached an accuracy of 98.67% and the Naive Bayes classifier 99.66%. Chen et al. [17] and Xu et al. [18] performed Parkinsonian gait classification using monocular video imaging techniques and kernel-based principal component analysis (PCA) [17,18]. These studies demonstrate the advancement of sensor technology and its capacity to collect kinematic and electrophysiological information during walking, which has greatly promoted the development of automated gait recognition technology. However, there remains a lack of real-time computer vision monitoring systems for geriatric gait monitoring for fall-risk warning or home-based remote monitoring, and computer vision using ML computing techniques for flat falls has not been fully investigated.
ML techniques have been studied for gait pattern recognition among different disorders. ML can be used for real-time signal processing and instant output of command signals [19,20,21]: data are first collected for ML model training, and the trained model is then used for real-time signal monitoring and command signal output for control. Our long-term goal is to develop a real-time ML-based computer vision system for geriatric gait analysis that is capable of predicting flat-ground fall risk. We plan to collect gait data among elderly patients with a flat-ground fall history to build ML models. This study aimed to design a cost-effective gait-recognition system for the elderly population using different ML algorithms and to identify the optimal ML method for building a computer camera monitoring system.
This paper proposes the design of a human gait recognition system using the Azure Kinect camera (Microsoft Inc., Redmond, WA, USA). Compared with traditional motion capture equipment, the Kinect camera is easy to integrate, requires no large-scale data acquisition equipment, and offers a user-friendly experience. We hypothesized that different gait patterns can be classified and recognized using computer vision and ML techniques. Using the Kinect skeleton tracking technology, the spatial information of the key points of the human skeleton can be accurately obtained, processed through the algorithm, and converted into joint angles during walking. Unlike other studies that collect only ankle joint information [12], footstep information [18], trunk tilt angle [22], or data from a public database [23], we collected all available joint kinematic data for ML processing. Moreover, previous studies detected the whole fall event [12,18,22,23,24], while our study focused on detecting abnormal gaits (pelvic obliquity gait and knee hyperextension gait) with the aim of preventing falls. Four automatic classification algorithms were used to classify human gait patterns: the Convolutional Neural Network (CNN), Support Vector Machine (SVM), Long Short-Term Memory (LSTM), and K-Nearest Neighbors (KNN) [24,25,26,27]. The purpose was to determine the optimal classifier for flat-fall gait recognition.

2. Materials and Methods

2.1. Participants and Experimental Devices

Seven healthy subjects (aged 23 to 29 years), including 3 males and 4 females, were recruited in this study. None of the subjects had any history of neurological or musculoskeletal disorders. This study was approved by the Institutional Review Board (IRB) of Shanghai Jiao Tong University (I2022210I). All procedures were performed according to the Declaration of Helsinki. Each subject signed an informed consent form before the start of the experiment.
The equipment used in this study included an Azure Kinect DK integrated camera (Microsoft Inc., Redmond, WA, USA), an h/p/cosmos treadmill, and a laptop. The Azure Kinect DK camera was used to record the key-point information of the subject’s joints during the experiment. The camera was placed on the lateral side of a participant, aimed at the central axis of the frame, so that the participant’s whole body was captured in the recording. Participants were video recorded during the gait tasks, which allowed us to visually detect features of 3 gait patterns: normal gait (NG), pelvic obliquity (PO) gait (pelvic-hiking gait) [28,29], and knee hyperextension (KH) gait (trunk-forward-tilt gait). The h/p/cosmos treadmill was used to control the walking speed, and the laptop was used to run the software programs and ML algorithms. The software used for walking motion information acquisition included Visual Studio 2019 (Microsoft Inc., Redmond, WA, USA), OpenCV (Version 3.4.10, OpenCV Team, Palo Alto, CA, USA), and MATLAB (version 2018, MathWorks, Natick, MA, USA). The ML algorithms were implemented using PyCharm (with Python 3.7) and PyTorch (version 1.11).

2.2. Experimental Procedures

Before the start of the experiment, participants were instructed to perform 3 patterns of walking gait and allowed to practice on the treadmill for 3 min. Before the formal experiment, each subject conducted a test run, including practicing the 3 designated gaits (Figure 1), to verify equipment connectivity and system setup. During the experiment, the subjects were asked to look forward as much as possible and maintain a steady gait on the treadmill.
In this experiment, the subjects walked on the h/p/cosmos treadmill with the 3 different gait patterns shown in Figure 1: normal gait (NG) during normal walking and two abnormal gait patterns, the pelvic obliquity (PO) gait and the knee hyperextension (KH) gait of the right lower limb. The abnormal gaits were generated with the right lower limb only. In the pelvic obliquity gait, the torso is first shifted to the left while walking, then the pelvis is hiked and the right leg steps out. In the knee hyperextension gait, the trunk is tilted forward slightly while the right knee is lifted to step out during walking. These gait patterns occur at the critical moment of a fall and therefore can be used for fall detection [30].
During the experiment, the treadmill ran at a constant speed of 0.1 m/s, and each subject walked on it at that speed for 1 min. For each subject, each gait pattern was recorded 5 times, yielding 5 valid experimental datasets per pattern and a total of 15 valid experimental datasets per subject for offline data analysis.

2.3. Kinect Skeleton Tracking Technique

A Microsoft Azure Kinect SDK (software development kit) camera and a human skeleton recognition and tracking software toolkit (Azure Kinect Body Tracking SDK) were used in this study. The subject’s motion was tracked in the camera’s depth image field of view, with the positions of 32 skeletal joint points recorded and saved on the computer while tracking the target. The spatial coordinates and skeleton joint nodes of the upper limb, lower limb, spine, shoulder, and hip are shown in Figure 2. In this study, the joint points of the lower limbs were extracted for gait pattern recognition. From these joint points, the angles of joint flexion and extension were calculated and then used as the feature values for gait recognition.
The skeleton nodes used in the calculation of the lower limb joint angle included right clavicle point A (Clavicle_Right), left clavicle point B (Clavicle_Left), pelvis point C (Pelvis), right hip joint point D (Hip_Right), left hip joint point E (Hip_Left), right knee point F (Knee_Right), left knee point G (Knee_Left), right ankle point H (Ankle_Right), left ankle point I (Ankle_Left) (Figure 2).
The joint angles used in this experiment include the hip joint flexion-extension angle, the hip joint abduction-adduction angle, and the knee joint angle. They are defined as follows:
Hip joint flexion-extension angle: the angle between the thigh vector line (the line connecting points D and F) and the trunk line (the line connecting points A and D) in the plane defined by these two lines for the right hip joint, and the angle between the E–G line and the B–E line for the left hip joint.
Hip joint abduction-adduction angle: the angle between the thigh vector line (the line connecting points D and F) and the pelvic line (the line connecting points C and D) in the plane defined by these two lines for the right hip joint, and the angle between the E–G line and the C–E line for the left hip joint.
Knee joint angle: the angle between the thigh vector line (the line connecting points D and F) and the leg vector line (the line connecting points F and H) in the plane defined by these two lines for the right knee joint, and the angle between the E–G line and the G–I line for the left knee joint.
Taking the calculation of the right knee joint angle as an example: let the coordinates of joint point D be $(x_1, y_1, z_1)$, joint point F be $(x_2, y_2, z_2)$, and joint point H be $(x_3, y_3, z_3)$. Then the right thigh vector is $\vec{DF} = (X_1, Y_1, Z_1)$ and the right leg vector is $\vec{FH} = (X_2, Y_2, Z_2)$, where $X_1 = x_1 - x_2$, $Y_1 = y_1 - y_2$, $Z_1 = z_1 - z_2$, $X_2 = x_2 - x_3$, $Y_2 = y_2 - y_3$, and $Z_2 = z_2 - z_3$. The right knee angle $\alpha(\vec{DF}, \vec{FH})$ was then calculated using the following equation:

$$\alpha(\vec{DF}, \vec{FH}) = \cos^{-1}\left(\frac{X_1 X_2 + Y_1 Y_2 + Z_1 Z_2}{\sqrt{X_1^2 + Y_1^2 + Z_1^2}\,\sqrt{X_2^2 + Y_2^2 + Z_2^2}}\right) \tag{1}$$
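For illustration, a minimal Python sketch of this angle computation (the function name and sample coordinates are ours; the joint coordinates are assumed to be 3D positions in the camera frame):

```python
import numpy as np

def joint_angle(p_prox, p_mid, p_dist):
    """Angle (degrees) between the segment vectors p_prox->p_mid and
    p_mid->p_dist, e.g., DF and FH for the right knee in Equation (1)."""
    v1 = np.asarray(p_prox) - np.asarray(p_mid)   # thigh vector DF = D - F
    v2 = np.asarray(p_mid) - np.asarray(p_dist)   # leg vector FH = F - H
    cos_a = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))
    return np.degrees(np.arccos(np.clip(cos_a, -1.0, 1.0)))  # clip guards rounding

# Example: right knee angle from hip (D), knee (F), and ankle (H) coordinates (mm)
D, F, H = [100.0, 900.0, 2000.0], [110.0, 500.0, 2050.0], [180.0, 120.0, 2100.0]
print(joint_angle(D, F, H))
```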
The definition and calculation of six joint angles used in this gait pattern recognition study are shown in Table 1.
The joint angle calculation algorithms were implemented in C++ using Visual Studio (Version 2019, Microsoft Inc., Redmond, WA, USA). The frame rate of the Kinect camera was set to 5 fps, so 5 frames of skeleton node positions were captured and processed per second. The feature vector Fθ was calculated from the six lower limb joint angles of each skeleton node frame and is expressed as:
$$F_\theta = \{\theta_1, \theta_2, \theta_3, \theta_4, \theta_5, \theta_6\} \tag{2}$$
Participant walking speed was set at 0.1 m/s, following the treadmill speed. Because a participant was able to complete at least one gait cycle within 3 s, a time window of 3 s was used to segment the recorded data into datasets. A total of 2100 valid datasets were collected: (60 s / 3 s processing window) × 5 walking trials per gait pattern × 3 gait patterns × 7 participants. Each dataset contained 90 feature values (6 joint angle readings × 5 frames per second × 3 s per window) and the corresponding gait label of normal, pelvic obliquity, or knee hyperextension gait. The data segmentation was performed using MATLAB software.
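A sketch of this segmentation step in Python, assuming the per-frame angles of one trial are stacked in a (frames × 6) array (the array layout and names are our assumptions; the authors performed this step in MATLAB):

```python
import numpy as np

FPS, WINDOW_S, N_ANGLES = 5, 3, 6      # camera frame rate, window length, joint angles
FRAMES_PER_WINDOW = FPS * WINDOW_S     # 15 frames per 3 s window

def segment(angle_stream, label):
    """Split a (frames, 6) stream of joint angles into non-overlapping
    3 s windows, each flattened to a 90-value feature vector."""
    n_windows = angle_stream.shape[0] // FRAMES_PER_WINDOW
    X = angle_stream[:n_windows * FRAMES_PER_WINDOW].reshape(n_windows, -1)  # (n, 90)
    y = np.full(n_windows, label)
    return X, y

# A 60 s trial at 5 fps yields 300 frames -> 20 windows of 90 features each;
# 20 windows x 5 trials x 3 gaits x 7 subjects gives the 2100 datasets above.
trial = np.random.rand(300, N_ANGLES)
X, y = segment(trial, label=0)   # 0 = NG (1 and 2 would label PO and KH)
print(X.shape, y.shape)          # (20, 90) (20,)
```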

2.4. Machine Learning-Based Classifiers

In this study, four classifiers were used to classify the gait data: the convolutional neural network (CNN), long short-term memory (LSTM) neural network, support vector machine (SVM), and K-nearest neighbors (KNN) classifier. The CNN and LSTM algorithms were implemented using PyCharm and PyTorch; the SVM and KNN algorithms were implemented using MATLAB and its toolboxes.

2.4.1. Convolutional Neural Network (CNN) Classifier

A convolutional neural network with two 2-dimensional convolutional layers was used in this experiment. The CNN is a type of deep neural network classifier that has been used in image classification and recognition [31,32]. It can recognize features directly from the data instead of relying on manually extracted features. Within the CNN structure in this study, the data were put through several layers with different tasks [33,34]. The input was imported into the convolution layer, where a spatial filter was applied to the inputs in the form of a window of weights. This spatial filter moved vertically and horizontally throughout the inputs to generate the output of the convolution layer, which was rectified by a rectified linear unit (ReLU) and then exported into the pooling layer. The results of the pooling layer were put through a fully connected layer yielding the classification [35,36,37,38,39,40]. This approach has been used in medical applications where datasets are of a small size [40], demonstrating success in medical image recognition [41]. During the training phase, an epoch was defined as a full training cycle over the whole training set, and a mini-batch was a subset of the training set used to update the weights and calculate the gradient descent loss function.
The structure of the proposed CNN is shown in Figure 3. The input of the CNN was the gait angles derived from Kinect; two 2D-convolutional layers were used in the model, and the output of the model was the classification result of the gait pattern. In the first convolutional layer, the number of filters was 16, kernel size 5, padding 2, and stride 1. For the max pooling in the first layer, the kernel size was 2, padding 0, and stride 2. In the second layer, the number of filters was 32, kernel size 5, padding 2, and stride 1. For the max-pooling in the second layer, the kernel size was 2, padding 0, and stride 2.
The loss function of the model was the cross-entropy loss function, which can be calculated by the following formula:
$$\mathrm{loss}(x, \mathrm{class}) = -\log\left(\frac{\exp(x[\mathrm{class}])}{\sum_j \exp(x[j])}\right) = -x[\mathrm{class}] + \log\left(\sum_j \exp(x[j])\right) \tag{3}$$
where x is the input, class is the index of the target, and j indexes over the classes.
The optimizer of the model was Adam. Training was completed after 500 epochs with a mini-batch size of 16, with the learning rate fixed at 0.001.
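A minimal PyTorch sketch of the network and training setup described above, assuming each 90-value window is arranged as a 1 × 6 × 15 input (6 joint angles × 15 frames; this input layout is our assumption, as the text does not state it):

```python
import torch
import torch.nn as nn

class GaitCNN(nn.Module):
    """Two 2D convolutional layers as described above: 16 and 32 filters,
    kernel 5, padding 2, stride 1, each followed by 2x2 max pooling."""
    def __init__(self, n_classes=3):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=5, padding=2, stride=1),
            nn.ReLU(),
            nn.MaxPool2d(kernel_size=2, stride=2, padding=0),   # 6x15 -> 3x7
            nn.Conv2d(16, 32, kernel_size=5, padding=2, stride=1),
            nn.ReLU(),
            nn.MaxPool2d(kernel_size=2, stride=2, padding=0),   # 3x7 -> 1x3
        )
        self.classifier = nn.Linear(32 * 1 * 3, n_classes)

    def forward(self, x):                       # x: (batch, 1, 6, 15)
        return self.classifier(self.features(x).flatten(1))

model = GaitCNN()
criterion = nn.CrossEntropyLoss()               # the loss in Equation (3)
optimizer = torch.optim.Adam(model.parameters(), lr=0.001)

# One training step (the full loop would run 500 epochs, mini-batch size 16)
optimizer.zero_grad()
loss = criterion(model(torch.randn(16, 1, 6, 15)), torch.randint(0, 3, (16,)))
loss.backward()
optimizer.step()
```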

2.4.2. Support Vector Machine (SVM) Classifier

In this study, the support vector machine (SVM) model was applied as one of the classifiers, and a Bayesian optimizer was used to optimize the model. SVM [42] uses different and high-dimensional features of the datasets to assign a label to a data point based on its location in the data space. SVM, as a supervised learning algorithm, has many applications [18], such as face detection [43] and bioinformatics [44]. It finds an optimal hyperplane in an N-dimensional space as a decision boundary to separate datasets into two or more classes [45]. In this study, we used SVM and the Bayesian optimizer to identify an optimal classifier for gait pattern recognition.
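For illustration, a scikit-learn sketch of an RBF-kernel SVM with hyperparameter search; the authors used MATLAB with a Bayesian optimizer, so the grid search and parameter ranges below are stand-in assumptions, not the study's actual settings:

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

# Placeholder data with the study's shape: 2100 windows of 90 joint-angle features
X, y = np.random.rand(2100, 90), np.random.randint(0, 3, 2100)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

# RBF-kernel SVM; C and gamma searched over a small grid
# (an exhaustive grid stands in for Bayesian optimization here)
pipe = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
grid = GridSearchCV(pipe, {"svc__C": [0.1, 1, 10, 100],
                           "svc__gamma": ["scale", 0.01, 0.1]}, cv=5)
grid.fit(X_tr, y_tr)
print(grid.best_params_, grid.score(X_te, y_te))
```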

2.4.3. K-Nearest Neighbors (KNN) Classifier

In this paper, we used an automatically optimized KNN model: a Bayesian optimizer was selected for hyperparameter optimization, and distance was measured using the Euclidean metric.
KNN (K-nearest neighbors) is another supervised classification algorithm and has no explicit training phase [46]. KNN predicts the category of a new value according to the categories of the K points closest to it. When using KNN, several parameters need to be optimized, including the number of neighbors, the distance metric, and the distance weights. KNN classification is based on a distance metric between the data point to be classified and the already known data points of the training set [47,48]: KNN assigns to a data point the label shared by the majority of its K nearest neighbors in the training set. Because the K value affects the accuracy of the model, it was varied between 1 and 150 during the optimization process.
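A corresponding scikit-learn sketch of the KNN classifier, searching K from 1 to 150 with the Euclidean metric (grid search again stands in for the Bayesian optimizer used in MATLAB, and the weighting options are our assumption):

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import GridSearchCV

# Placeholder data with the study's shape
X, y = np.random.rand(2100, 90), np.random.randint(0, 3, 2100)

# Euclidean distance; K varied from 1 to 150 as described above
knn = GridSearchCV(KNeighborsClassifier(metric="euclidean"),
                   {"n_neighbors": range(1, 151),
                    "weights": ["uniform", "distance"]}, cv=5)
knn.fit(X, y)
print(knn.best_params_)   # best K and distance weighting found by the search
```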

2.4.4. Long Short-Term Memory (LSTM) Neural Network

LSTM is a special recurrent neural network (RNN) structure that can memorize both long- and short-term information. RNNs are a class of neural networks naturally suited to processing time-series and other sequential data. A traditional RNN usually struggles to capture long-term dependencies after many recurrent computations of the network nodes, whereas the LSTM network avoids this problem through the design of its gated structure [49,50].
Because gait actions are often closely related to the moments before and after a given instant, we used a bidirectional LSTM (Bi-LSTM) network for classification. The architecture of the proposed model is shown in Figure 4. The model was composed of two Bi-LSTM layers and a fully connected (FC) layer. The Bi-LSTM layers process the input information in both the forward and backward directions, so the output at the current moment is affected not only by the previous states but potentially also by the future states.
The loss function of the model was the cross-entropy loss function, which can be calculated by Formula (3). The optimizer of the model was Adam. Training was completed after 500 epochs with a mini-batch size of 20, with the learning rate fixed at 0.0005.
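A minimal PyTorch sketch of the described Bi-LSTM model, treating each window as a sequence of 15 time steps with 6 joint angles per step (the hidden size is our assumption; the text does not specify it):

```python
import torch
import torch.nn as nn

class GaitBiLSTM(nn.Module):
    """Two stacked bidirectional LSTM layers followed by a fully connected layer."""
    def __init__(self, n_angles=6, hidden=64, n_classes=3):
        super().__init__()
        self.lstm = nn.LSTM(input_size=n_angles, hidden_size=hidden,
                            num_layers=2, bidirectional=True, batch_first=True)
        self.fc = nn.Linear(2 * hidden, n_classes)  # forward + backward states

    def forward(self, x):             # x: (batch, 15 frames, 6 angles)
        out, _ = self.lstm(x)
        return self.fc(out[:, -1])    # classify from the last time step

model = GaitBiLSTM()
criterion = nn.CrossEntropyLoss()                            # Formula (3)
optimizer = torch.optim.Adam(model.parameters(), lr=0.0005)  # 500 epochs, batch 20
logits = model(torch.randn(20, 15, 6))
print(logits.shape)                   # (20, 3): one score per gait class
```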

2.5. Statistical Analysis

One-way ANOVA with post hoc LSD was used to determine the statistical difference in the average range of motion (ROM) of a joint between different gait patterns, as well as in the average accuracy of gait pattern recognition between ML algorithms. A Pearson's chi-square test with Bonferroni correction was performed to determine the difference in the accuracy of each gait pattern recognition among the four individual machine learning algorithms. SPSS software (Version 28, IBM, Armonk, NY, USA) was used for statistical analysis. A p-value smaller than 0.05 was considered statistically significant.
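As an illustration of these tests in SciPy, with placeholder data standing in for the per-trial ROM values and per-algorithm classification counts (post hoc LSD comparisons would follow the ANOVA; the counts below are illustrative only, not the study's data):

```python
import numpy as np
from scipy import stats

# One-way ANOVA across per-trial ROM values of the three gait patterns
# (7 subjects x 5 trials = 35 placeholder values per pattern)
rom_ng, rom_po, rom_kh = (np.random.rand(35) * 50 for _ in range(3))
f_stat, p_anova = stats.f_oneway(rom_ng, rom_po, rom_kh)

# Chi-square test on correct/incorrect classification counts per algorithm
counts = np.array([[95, 5],     # SVM: correct, incorrect (illustrative)
                   [94, 6],     # KNN
                   [88, 12],    # CNN
                   [84, 16]])   # LSTM
chi2, p_chi, dof, _ = stats.chi2_contingency(counts)
print(p_anova, p_chi)
```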

3. Results

3.1. Characteristics of Lower Limb Joint Angles in Three Gait Patterns

The Microsoft Azure Kinect SDK camera and the Human Skeleton Recognition and Tracking software toolkits (Azure Kinect Body Tracking SDK) captured the joint angles during each subject's walking on the treadmill. Figure 3 shows the joint angles of a subject's right hip and knee within 3 to 9 s among the different gait pattern groups.
The average ROM of hip joint flexion-extension was 32.6 ± 3.8 degrees in the normal gait group, 14.1 ± 3.0 degrees in the PO group, and 16.0 ± 3.5 degrees in the KH group (Figure 5a). The flexion-extension ROM of the right hip joint in normal gait (NG) was larger than that in the PO and KH gait patterns (ANOVA, post hoc LSD, p < 0.001), and the ROM of the right hip joint in the KH gait was larger than that in the PO gait (ANOVA, post hoc LSD, p < 0.001).
The average ROM of hip joint abduction was 18.9 ± 4.9 degrees in the normal gait group, 17.1 ± 4.6 degrees in the PO group, and 9.5 ± 2.6 degrees in the KH group (Figure 5b). The abduction ROM of the right hip joint in normal gait (NG) was larger than that in the PO and KH gait patterns (ANOVA, post hoc LSD, p < 0.001), and the ROM of the right hip joint in the PO gait was larger than that in the KH gait (ANOVA, post hoc LSD, p < 0.001).
The average ROM of knee joint flexion-extension was 48.7 ± 5.4 degrees in the normal gait group, 18.8 ± 5.5 degrees in the PO group, and 27.6 ± 3.8 degrees in the KH group (Figure 5c). The ROM of the right knee joint in normal gait (NG) was larger than that in the PO and KH gait patterns (ANOVA, post hoc LSD, p < 0.001), and the ROM of the right knee joint in the KH gait was larger than that in the PO gait (ANOVA, post hoc LSD, p < 0.001).

3.2. Comparison of the Model Training Process

Eighty percent of the data was used for training, and 20 percent was used for validation testing to verify the accuracy of the trained models. Figure 6, Figure 7, Figure 8 and Figure 9 plot the loss values over the training iterations of the 4 machine learning algorithms. CNN model training demonstrated a typical curve of gradual convergence, approaching a minimum loss after 100 training iterations, with loss values of 0.032 and 0.006 at the 200th iteration (Figure 6). SVM model training demonstrated a rapid convergence after 200 training iterations, with a minimum loss value of 0.0125 (Figure 7). KNN model training converged after 200 training iterations, with a minimum classification loss value close to 0.06 (Figure 8). LSTM model training converged at the 200th training iteration with a loss value of 0.25, but the loss value increased with further iterations, demonstrating overfitting [51] (Figure 9). SVM needed the fewest training iterations to achieve its minimum error, followed by CNN, KNN, and LSTM; CNN had the smallest minimum error.
To compare the computational speed of the models, we also tested the runtime of the four proposed models; the results are shown in Table 2. The training epochs for CNN and LSTM were 500, while for SVM and KNN they were 200. All four proposed methods were run on the same computer. The average runtime of CNN was 20.17 ± 0.99 s, while the average runtime of LSTM was 32.77 ± 1.32 s. Although the training epochs for SVM and KNN were fewer than for CNN and LSTM, the runtimes of SVM and KNN were longer: the average runtime for SVM was 714.71 ± 58.14 s, and the average runtime for KNN was 316.57 ± 11.41 s. Table 2 also shows the best K value for the KNN model.

3.3. Comparison of Classification Accuracy of the Four ML Algorithms

In this experiment, we used a total of four different classification models to recognize the gait movements. The recognition accuracy of the four classification algorithms is shown in Figure 10. Overall, the algorithm with the highest recognition accuracy was the SVM, with an average recognition accuracy of 94.9 ± 3.36%. The accuracy of the KNN algorithm was 94.0 ± 4.22%, that of the CNN 87.6 ± 7.50%, and that of the LSTM 83.6 ± 5.35% among participants. The LSTM had a significantly lower accuracy (Pearson's chi-square test, p = 0.031) (Table 3).
The classification accuracy of the two traditional machine learning algorithms was significantly higher than that of the deep learning architectures, which is closely related to our data processing. We identified the key points of the skeleton in the human gait images and calculated the joint angles of the lower limbs, which greatly reduced the feature dimension of the input data, so network architectures such as CNN and LSTM lost their advantage. Traditional machine learning algorithms such as SVM and KNN have relatively simple structures and clear advantages in dealing with such low-dimensional features. In addition, a 3 s time window was used to segment the data, ensuring that each window covered a complete gait action and thereby achieving a high recognition accuracy.
Figure 11 shows the confusion matrix results of four classification methods in gait recognition, and Table 4, Table 5, Table 6 and Table 7 provide detailed information on the confusion matrix and corresponding precision and recall of four machine learning algorithms.

3.4. Outcomes of the Chi-Square Test

Statistical analysis demonstrated that there was no statistical difference between the 4 ML methods in predicting the normal gait pattern (chi-square, Bonferroni correction, p > 0.7). CNN had a higher accuracy in predicting the PO gait pattern (p = 0.007), while KNN had a lower accuracy (p = 0.001). CNN had a higher accuracy in predicting the KH gait (p = 0.04), while LSTM had a lower accuracy (p < 0.001) (Table 8).

4. Discussion

4.1. Application of Computer Vision Camera in Gait Analysis

Azure Kinect is a cutting-edge spatial computing developer kit with sophisticated computer vision, advanced AI sensors, and a range of powerful SDKs that can pair real-time depth sensor data with AI-driven insights. Current development efforts target the prevention and mitigation of potential patient accidents and injuries in healthcare environments through predictive alerts. The Azure Kinect consists of an RGB camera and an IR camera. The color camera offers a resolution of 3840 × 2160 px at 30 Hz, and the IR camera has a resolution of 1 MP (1024 × 1024 pixels). It has been successfully used and validated in human gait analysis [52,53,54]. The RGB-D sensor system can estimate features of gait patterns in pathological individuals and differences between disorders, and the advances and popularity of low-cost RGB-D sensors have enabled the acquisition of depth information of objects [55,56,57]. The Kinect sensor utilizes the time-of-flight (ToF) principle: time-of-flight is a method for measuring the distance between a sensor and an object based on the time difference between the emission of a signal and its return to the sensor after being reflected by the object [58]. Both the RGB and IR cameras support different fields of view. The Azure Kinect also has an IMU sensor, consisting of a triaxial accelerometer and gyroscope, which can estimate its position in space. Microsoft offers a Body Tracking SDK for human skeletal tracking using the C and C++ programming languages; this SDK tracks a total of 32 main joints of a participant, providing joint orientations in quaternions representing local coordinate systems together with 3D position data [52]. Our study demonstrated that the Microsoft Azure Kinect SDK camera and the Human Skeleton Recognition and Tracking software toolkits (Azure Kinect Body Tracking SDK) captured the joint angles, as the SDK defines them, during subject gait analysis on a treadmill, and the ML algorithms recognized the three different gait patterns described in this study.
In this study, the hip angles were measured by a computer camera and calculated using computer-vision-marked joint lines (clavicle point to hip point to knee point) defined by default in the SDK software based on its 3D coordinate axes. This joint angle measurement differs from conventional calculation methods that use different joint markers; most traditional methods for hip joint angle measurement are based on the angle between the thigh and the body trunk. Thus, the gait curves of our study differ from the typical gait curves [59,60] measured by traditional methods [61]. The knee joint calculation method is similar to traditional measuring methods (hip point to knee point to ankle point). Although there is a disparity between joint measurement methods, our results demonstrated that computer vision and ML computing can distinguish normal gait from the PO and KH gaits.
Most previous studies demonstrated that the Azure Kinect and the Body Tracking SDK can recognize gait patterns and differences in individuals with disorders [52,54,55,59]. The Azure Kinect's tracking accuracy for foot marker trajectories is significantly higher than that of the previous Kinect v2 model across all treadmill velocities, and the Azure Kinect can efficiently capture human poses and gestures [52,62]; the newly added joints (clavicles, spine chest) of the Azure Kinect camera achieved a reasonable tracking error [52]. There is, however, an occlusion problem that randomly degrades the tracking quality of invisible joints [62]. It has been suggested that tracking quality can be enhanced by improving the hardware and by deep-learning (DL)-based skeleton tracking algorithms, and Microsoft has started to develop a new body-tracking SDK for the Azure Kinect using DL and convolutional neural network (CNN) machine learning algorithms [52]. However, there is a paucity of research reports on gait analysis using the Azure Kinect or other computer-vision-based ML and DL techniques [63]. Our study suggests that the Azure camera and the Human Skeleton Recognition and Tracking software toolkits can effectively discriminate gait patterns using machine learning algorithms.

4.2. Machine Learning for Gait Analysis

Compared with computer vision-based systems, most ML-based gait analyses adopt IMU-based sensor systems [63,64,65,66,67]. The ML techniques used in IMU-based gait analysis include the decision tree (DT) [68], linear discriminant analysis (LDA) [69], k-nearest neighbors (k-NN) [70], support vector machine (SVM) [67], CNN [71,72], random forest (RF) [61,66,73], and LSTM [49]. The efficiency of ML algorithms in Azure Kinect computer-vision-based gait analysis is still not clear. LSTM and CNN models have a generalization ability across different scenes, and the regularization and generalization techniques of CNN [51] and LSTM [74] have been used in image processing [33,34,48,56]. In this study, four different classification methods, the convolutional neural network (CNN), support vector machine (SVM), K-nearest neighbors (KNN), and long short-term memory (LSTM) neural network, were used to automatically classify three gait patterns, and their efficiency at gait pattern classification was investigated and compared. The SVM (94.9 ± 3.36%) had the highest accuracy in the classification of gait patterns, followed by the KNN (94.0 ± 4.22%); the accuracy of the CNN was 87.6 ± 7.50% and that of the LSTM was 83.6 ± 5.35%. The accuracy of SVM and KNN was higher than that of CNN and LSTM. During model training, CNN had the smallest training loss value, followed by SVM (0.0125), KNN (0.03), and LSTM (0.25, which then increased with overfitting). SVM needed the fewest training iterations to achieve its minimum error, followed by CNN, KNN, and LSTM, while CNN had the smallest minimum error. These outcomes demonstrated that LSTM was less efficient for gait pattern classification in this study. The reason for the lower LSTM accuracy could be that LSTM algorithms are generally designed for larger amounts of data with complex structures. The estimation errors increased over the iterations when using LSTM in this study, which could be due to overfitting [51]. Overfitting has been reported in the literature: the estimation errors of a proposed LSTM method gradually grew over the steps when it was used for constant-acceleration motion tracking [74]. SVM and KNN are designed for small-dataset applications, and CNN fits medium-sized datasets better; the dataset we collected in this study was small, so SVM and KNN showed better performance.
Fricke et al. [75] compared three machine learning algorithms (KNN, SVM, and CNN) for the automatic classification of electromyography (EMG) patterns in gait disorders and found that CNN had the highest accuracy (91.9%), while SVM (67.6%) and KNN (48.7%) had much lower accuracy. The ML workflow of our computer-vision-based gait pattern recognition was similar to Fricke's EMG-based machine learning method; however, our results demonstrated that computer vision yields relatively higher accuracy than EMG-based ML for gait pattern recognition.

4.3. Epidemiology of Flat Fall

Falling is a common problem in the growing elderly population [4]. Approximately 32% of community-dwelling elderly adults aged ≥75 years fall at least once during a one-year interval and 24% of these individuals sustain serious injuries [1,2]. The medical costs related to falls are substantial [3]. In 2012, there were 24,190 fatal and 3.2 million medically treated non-fatal fall-related injuries. Direct medical costs totaled $616.5 million for fatal and $30.3 billion for non-fatal injuries in 2012 and rose to $637.5 million and $31.3 billion, respectively, in 2015 [76].
The risk factors associated with elderly falls include age-related gait pattern changes, and deficits in the musculoskeletal system, proprioception, and vestibular system [4,77]. Predicting falls remains an elusive task for healthcare providers. For this reason, many researchers and clinicians have used screening tools based on known risk factors in an attempt to identify future “fallers” [78]. Although a history of falls remains the most accurate predictor of future falls [79], it is not useful for the early detection and prevention of first-time falls [80].
Elderly persons presenting with mild cognitive impairments (dementia) or using a walker/cane demonstrate a higher fall rate [81]. Similarly, comparing individuals with gait deficiencies to those with a history of falls suggests that executive function, gait control, and fall risk may be linked [81]. Because most falls occur during locomotion, research has focused on identifying age-related gait pattern changes [82].

4.4. The Rationale of Our Gait Pattern Selection for the Study

Hip abduction can compensate for a reduced swing-phase knee flexion angle after a stroke, and pelvic obliquity (hip hiking) also facilitates foot clearance with greater energy efficiency [28,29]. This gait most likely reflects an altered motor template occurring after a stroke, which contributes to a decline in gait ability [83,84]. Home-based stroke hemiplegia patients tend to fall easily due to poor toe clearance, which is reported to be one of the causes of falling, although there are many other related factors [85]. The hyperextension gait reflects a hyperextension moment of the knee joint during walking, which is commonly seen among post-stroke patients with a stiff knee; the stiff-knee gait is a common abnormal gait pattern in patients after stroke, characterized by insufficient knee flexion during swing [86,87]. Older adults can develop abnormal gaits similar to pathologic gaits such as the frontal gait, hemiparetic gait, Trendelenburg gait, and knee hyperextension gait [88]. Abnormal gaits are usually caused by musculoskeletal and/or neurologic abnormalities, which can be identified in the clinical setting. Prompt diagnosis and appropriate treatment may save an elderly patient from immobility, fall-related injury, loneliness, and depression [89]. Early detection of abnormal gait, with related interventions, will be critical in general fall prevention. Because both the PO and KH gaits are common abnormal gaits among older adults and patients with neurological disorders, they were selected for this study to develop a computer vision and machine learning-based sensor system aimed at abnormal gait detection and fall prevention in older adults.

4.5. Limitations of This Study

The participants of this study were healthy volunteers who mimicked the pathologic gaits by walking on a treadmill. The efficiency of this computer vision and machine learning-based sensor system in classifying patients' gaits in a clinical setting is therefore unclear. Future work should include clinical studies using this sensor system to recognize the differences between the gaits of flat-fallers and non-fallers.
In summary, this paper proposed a gait recognition system using computer vision Kinect skeleton tracking technology based on machine learning algorithms. Compared with traditional gait recognition technology, this method is easy to set up, computes rapidly, and offers a good user experience. Further study may demonstrate the utility of computer vision and ML processes to detect aberrant gait patterns in clinical settings. Additionally, future studies should focus on the utilization of the Azure Kinect in assessing pathological gait patterns of postoperative patients undergoing rehabilitation.

5. Conclusions

This paper proposes a gait recognition system using computer vision and machine learning-based human motion tracking technology. The recognition accuracy of the system can reach 94.5%, which meets the needs of users and supports the use of the proposed AI machine learning techniques to design gait biometric systems and machine vision applications for gait pattern recognition. Potentially, this method can be used to evaluate geriatric gait patterns, predict flat falls, and aid clinicians in decision-making regarding both disposition and rehabilitation regimens.

Author Contributions

Conceptualization, B.C., C.C., J.H. and C.P.-L.; methodology, B.C., C.C., J.H., Z.S., C.F. and C.P.-L.; software, B.C. and C.C.; validation, J.Q., M.D. and S.L.; formal analysis, B.C., C.C., Z.S., S.L. and M.D.; investigation, H.F.D., B.E.L. and C.P.-L.; resources, C.C., J.H., H.F.D. and C.P.-L.; data curation, B.C. and C.C.; writing—original draft preparation, B.C., J.Q. and M.D.; writing—review and editing, C.C., Z.S., H.F.D. and C.P.-L.; visualization, B.C. and C.C.; supervision, C.C. and J.H.; project administration, C.C. and J.H.; funding acquisition, J.H. and C.C. All authors have read and agreed to the published version of the manuscript.

Funding

This study was supported partially by the National Natural Science Foundation of China (51975360, 52035007), the National Social Science Foundation of China (17ZDA020), the Cross Fund for Medical and Engineering of Shanghai Jiao Tong University (YG2021QN118), and Rehabilitation Institute of Michigan Foundation (Grant# 22-2-003).

Institutional Review Board Statement

The study was conducted in accordance with the Declaration of Helsinki, and approved by the Institutional Review Board of Shanghai Jiao Tong University (I2022210I, 9 August 2022).

Informed Consent Statement

Informed consent was obtained from all subjects involved in the study.

Data Availability Statement

Not applicable.

Acknowledgments

This study was performed in the State Key Laboratory of Mechanical System and Vibration, Shanghai Jiao Tong University, Shanghai, China.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Moreland, B.; Kakara, R.; Henry, A. Trends in Nonfatal Falls and Fall-Related Injuries Among Adults Aged ≥65 Years—The United States, 2012–2018. Morb. Mortal. Wkly. Rep. 2020, 69, 875–881.
  2. Florence, C.S.; Bergen, G.; Atherly, A.; Burns, E.; Stevens, J.; Drake, C. Medical Costs of Fatal and Nonfatal Falls in Older Adults. J. Am. Geriatr. Soc. 2018, 66, 693–698.
  3. Haddad, Y.K.; Bergen, G.; Florence, C.S. Estimating the Economic Burden Related to Older Adult Falls by State. J. Public Health Manag. Pract. 2019, 25, e17–e24.
  4. Yamada, M.; Ichihashi, N. Predicting the probability of falls in community-dwelling elderly individuals using the trail-walking test. Environ. Health Prev. Med. 2010, 15, 386–391.
  5. Usmani, S.; Saboor, A.; Haris, M.; Khan, M.A.; Park, H. Latest Research Trends in Fall Detection and Prevention Using Machine Learning: A Systematic Review. Sensors 2021, 21, 5134.
  6. Palestra, G.; Rebiai, M.; Courtial, E.; Koutsouris, D. Evaluation of a Rehabilitation System for the Elderly in a Day Care Center. Information 2018, 10, 3.
  7. Ko, S.U.; Jerome, G.J.; Simonsick, E.M.; Studenski, S.; Ferrucci, L. Differential Gait Patterns by History of Falls and Knee Pain Status in Healthy Older Adults: Results from the Baltimore Longitudinal Study of Aging. J. Aging Phys. Act. 2018, 26, 577–582.
  8. Ardalan, A.; Yamane, N.; Rao, A.K.; Montes, J.; Goldman, S. Analysis of gait synchrony and balance in neurodevelopmental disorders using computer vision techniques. Health Inform. J. 2021, 27, 14604582211055650.
  9. Rupprechter, S.; Morinan, G.; Peng, Y.; Foltynie, T.; Sibley, K.; Weil, R.S.; Leyland, L.A.; Baig, F.; Morgante, F.; Gilron, R.; et al. A Clinically Interpretable Computer-Vision Based Method for Quantifying Gait in Parkinson’s Disease. Sensors 2021, 21, 5437.
  10. Lonini, L.; Moon, Y.; Embry, K.; Cotton, R.J.; McKenzie, K.; Jenz, S.; Jayaraman, A. Video-Based Pose Estimation for Gait Analysis in Stroke Survivors during Clinical Assessments: A Proof-of-Concept Study. Digit. Biomark. 2022, 6, 9–18.
  11. Tipton, P.W. Dissecting parkinsonism: Cognitive and gait disturbances. Neurol. Neurochir. Pol. 2021, 55, 513–524.
  12. Ng, K.D.; Mehdizadeh, S.; Iaboni, A.; Mansfield, A.; Flint, A.; Taati, B. Measuring Gait Variables Using Computer Vision to Assess Mobility and Fall Risk in Older Adults with Dementia. IEEE J. Transl. Eng. Health Med. 2020, 8, 2100609.
  13. Dolatabadi, E.; Taati, B.; Mihailidis, A. An Automated Classification of Pathological Gait Using Unobtrusive Sensing Technology. IEEE Trans. Neural Syst. Rehabil. Eng. 2017, 25, 2336–2346.
  14. Khera, P.; Kumar, N. Role of machine learning in gait analysis: A review. J. Med. Eng. Technol. 2020, 44, 441–467.
  15. Ortells, J.; Herrero-Ezquerro, M.T.; Mollineda, R.A. Vision-based gait impairment analysis for aided diagnosis. Med. Biol. Eng. Comput. 2018, 56, 1553–1564.
  16. Zakaria, N.K. ASD Children Gait Classification Based on Principal Component Analysis and Linear Discriminant Analysis. Int. J. Emerg. Trends Eng. Res. 2020, 8, 2438–2445.
  17. Chen, S.-W.; Lin, S.-H.; Liao, L.-D.; Lai, H.-Y.; Pei, Y.-C.; Kuo, T.-S.; Lin, C.-T.; Chang, J.-Y.; Chen, Y.-Y.; Lo, Y.-C.; et al. Quantification and recognition of parkinsonian gait from monocular video imaging using kernel-based principal component analysis. BioMedical Eng. OnLine 2011, 10, 99.
  18. Xu, L.; Chen, J.; Wang, F.; Chen, Y.; Yang, W.; Yang, C. Machine-learning-based children’s pathological gait classification with low-cost gait-recognition system. BioMedical Eng. OnLine 2021, 20, 62.
  19. Zhou, Y.; Chen, C.; Cheng, M.; Alshahrani, Y.; Franovic, S.; Lau, E.; Xu, G.; Ni, G.; Cavanaugh, J.M.; Muh, S.; et al. Comparison of machine learning methods in sEMG signal processing for shoulder motion recognition. Biomed. Signal Process. Control 2021, 68, 102577.
  20. Zhou, Y.; Chen, C.; Cheng, M.; Franovic, S.; Muh, S.; Lemos, S. Real-Time Surface EMG Pattern Recognition for Shoulder Motions Based on Support Vector Machine. In Proceedings of the 2020 9th International Conference on Computing and Pattern Recognition, Xiamen, China, 30 October–1 November 2020; pp. 63–66.
  21. Jiang, Y.; Chen, C.; Zhang, X.; Chen, C.; Zhou, Y.; Ni, G.; Muh, S.; Lemos, S. Shoulder muscle activation pattern recognition based on sEMG and machine learning algorithms. Comput. Methods Programs Biomed. 2020, 197, 105721.
  22. Zhao, Z.; Zhang, L.; Shang, H. A Lightweight Subgraph-Based Deep Learning Approach for Fall Recognition. Sensors 2022, 22, 5482.
  23. Ramirez, H.; Velastin, S.A.; Aguayo, P.; Fabregas, E.; Farias, G. Human Activity Recognition by Sequences of Skeleton Features. Sensors 2022, 22, 3991.
  24. Prentice, S.D.; Patla, A.E.; Stacey, D.A. Artificial neural network model for the generation of muscle activation patterns for human locomotion. J. Electromyogr. Kinesiol. 2001, 11, 19–30.
  25. Lau, H.-Y.; Tong, K.-Y.; Zhu, H. Support vector machine for classification of walking conditions using miniature kinematic sensors. Med. Biol. Eng. Comput. 2008, 46, 563–573.
  26. Lai, D.T.; Begg, R.K.; Palaniswami, M. Computational intelligence in gait research: A perspective on current applications and future challenges. IEEE Trans. Inf. Technol. Biomed. 2009, 13, 687–702.
  27. Wu, J.; Wu, B. The Novel Quantitative Technique for Assessment of Gait Symmetry Using Advanced Statistical Learning Algorithm. BioMed Res. Int. 2015, 2015, 528971.
  28. Stanhope, V.A.; Knarr, B.A.; Reisman, D.S.; Higginson, J.S. Frontal plane compensatory strategies associated with self-selected walking speed in individuals post-stroke. Clin. Biomech. 2014, 29, 518–552.
  29. Akbas, T.; Prajapati, S.; Ziemnicki, D.; Tamma, P.; Gross, S.; Sulzer, J. Hip circumduction is not a compensation for reduced knee flexion angle during gait. J. Biomech. 2019, 87, 150–156.
  30. Kwolek, B.; Kepski, M. Human fall detection on embedded platform using depth maps and wireless accelerometer. Comput. Methods Programs Biomed. 2014, 117, 489–501.
  31. Krizhevsky, A.; Sutskever, I.; Hinton, G.E. ImageNet classification with deep convolutional neural networks. Commun. ACM 2017, 60, 84–90.
  32. Santos, G.L.; Endo, P.T.; Monteiro, K.H.d.C.; Rocha, E.d.S.; Silva, I.; Lynn, T. Accelerometer-based human fall detection using convolutional neural networks. Sensors 2019, 19, 1644.
  33. Zheng, Q.; Yang, M.; Yang, J.; Zhang, Q.; Zhang, X. Improvement of Generalization Ability of Deep CNN via Implicit Regularization in Two-Stage Training Process. IEEE Access 2018, 6, 15844–15869.
  34. Zhao, M.; Chang, C.H.; Xie, W.; Xie, Z.; Hu, J. Cloud Shape Classification System Based on Multi-Channel CNN and Improved FDM. IEEE Access 2020, 8, 44111–44124.
  35. Sainath, T.N.; Kingsbury, B.; Mohamed, A.-r.; Dahl, G.E.; Saon, G.; Soltau, H.; Beran, T.; Aravkin, A.Y.; Ramabhadran, B. Improvements to deep convolutional neural networks for LVCSR. In Proceedings of the 2013 IEEE Workshop on Automatic Speech Recognition and Understanding, Olomouc, Czech Republic, 8–12 December 2013; pp. 315–320.
  36. Alaskar, H. Deep learning-based model architecture for time-frequency images analysis. Int. J. Adv. Comput. Sci. Appl. 2018, 9, 12.
  37. Ravì, D.; Wong, C.; Deligianni, F.; Berthelot, M.; Andreu-Perez, J.; Lo, B.; Yang, G.-Z. Deep learning for health informatics. IEEE J. Biomed. Health Inform. 2016, 21, 4–21.
  38. Yhdego, H.; Li, J.; Morrison, S.; Audette, M.; Paolini, C.; Sarkar, M.; Okhravi, H. Towards musculoskeletal simulation-aware fall injury mitigation: Transfer learning with deep CNN for fall detection. In Proceedings of the 2019 Spring Simulation Conference (SpringSim), Tucson, AZ, USA, 29 April–2 May 2019; pp. 1–12.
  39. Yamashita, R.; Nishio, M.; Do, R.K.G.; Togashi, K. Convolutional neural networks: An overview and application in radiology. Insights Imaging 2018, 9, 611–629.
  40. Pan, S.J.; Yang, Q. A survey on transfer learning. IEEE Trans. Knowl. Data Eng. 2009, 22, 1345–1359.
  41. Shin, H.-C.; Roth, H.R.; Gao, M.; Lu, L.; Xu, Z.; Nogues, I.; Yao, J.; Mollura, D.; Summers, R.M. Deep convolutional neural networks for computer-aided detection: CNN architectures, dataset characteristics and transfer learning. IEEE Trans. Med. Imaging 2016, 35, 1285–1298.
  42. Smola, A.J.; Schölkopf, B. On a kernel-based method for pattern recognition, regression, approximation, and operator inversion. Algorithmica 1998, 22, 211–231.
  43. Jee, H.; Lee, K.; Pan, S. Eye and face detection using SVM. In Proceedings of the 2004 Intelligent Sensors, Sensor Networks and Information Processing Conference, Melbourne, VIC, Australia, 14–17 December 2004; pp. 577–580.
  44. Melvin, I.; Ie, E.; Kuang, R.; Weston, J.; Noble, W.S.; Leslie, C. SVM-Fold: A tool for discriminative multi-class protein fold and superfamily recognition. BMC Bioinform. 2007, 8 (Suppl. 4), S2.
  45. Boser, B.E.; Guyon, I.M.; Vapnik, V.N. A training algorithm for optimal margin classifiers. In Proceedings of the Fifth Annual Workshop on Computational Learning Theory, Pittsburgh, PA, USA, 27–29 July 1992; pp. 144–152.
  46. Altman, N.S. An introduction to kernel and nearest-neighbor nonparametric regression. Am. Stat. 1992, 46, 175–185.
  47. Saçlı, B.; Aydınalp, C.; Cansız, G.; Joof, S.; Yilmaz, T.; Çayören, M.; Önal, B.; Akduman, I. Microwave dielectric property based classification of renal calculi: Application of a kNN algorithm. Comput. Biol. Med. 2019, 112, 103366.
  48. You, L.; Jiang, H.; Hu, J.; Chang, C.H.; Chen, L.; Cui, X.; Zhao, M. RGB-face recognition Faster Mean Shift with euclidean distance metrics. In Proceedings of the 2022 IEEE 46th Annual Computers, Software, and Applications Conference (COMPSAC), Torino, Italy, 27 June–1 July 2022; pp. 211–216.
  49. Sarshar, M.; Polturi, S.; Schega, L. Gait Phase Estimation by Using LSTM in IMU-Based Gait Analysis-Proof of Concept. Sensors 2021, 21, 5749.
  50. Khokhlova, M.; Migniot, C.; Morozov, A.; Sushkova, O.; Dipanda, A. Normal and pathological gait classification LSTM model. Artif. Intell. Med. 2019, 94, 54–66.
  51. Alzubaidi, L.; Zhang, J.; Humaidi, A.J.; Al-Dujaili, A.; Duan, Y.; Al-Shamma, O.; Santamaría, J.; Fadhel, M.A.; Al-Amidie, M.; Farhan, L. Review of deep learning: Concepts, CNN architectures, challenges, applications, future directions. J. Big Data 2021, 8, 53.
  52. Albert, J.A.; Owolabi, V.; Gebel, A.; Brahms, C.M.; Granacher, U.; Arnrich, B. Evaluation of the Pose Tracking Performance of the Azure Kinect and Kinect v2 for Gait Analysis in Comparison with a Gold Standard: A Pilot Study. Sensors 2020, 20, 5104.
  53. Ferraris, C.; Amprimo, G.; Masi, G.; Vismara, L.; Cremascoli, R.; Sinagra, S.; Pettiti, G.; Mauro, A.; Priano, L. Evaluation of Arm Swing Features and Asymmetry during Gait in Parkinson’s Disease Using the Azure Kinect Sensor. Sensors 2022, 22, 6282.
  54. Cerfoglio, S.; Ferraris, C.; Vismara, L.; Amprimo, G.; Priano, L.; Pettiti, G.; Galli, M.; Mauro, A.; Cimolin, V. Kinect-Based Assessment of Lower Limbs during Gait in Post-Stroke Hemiplegic Patients: A Narrative Review. Sensors 2022, 22, 4910.
  55. Cimolin, V.; Vismara, L.; Ferraris, C.; Amprimo, G.; Pettiti, G.; Lopez, R.; Galli, M.; Cremascoli, R.; Sinagra, S.; Mauro, A.; et al. Computation of Gait Parameters in Post Stroke and Parkinson’s Disease: A Comparative Study Using RGB-D Sensors and Optoelectronic Systems. Sensors 2022, 22, 824.
  56. Jin, B.; Cruz, L.; Goncalves, N. Pseudo RGB-D Face Recognition. IEEE Sens. J. 2022.
  57. Kayhan, N.; Fekri-Ershad, S. Content based image retrieval based on weighted fusion of texture and color features derived from modified local binary patterns and local neighborhood difference patterns. Multimed. Tools Appl. 2021, 80, 32763–32790.
  58. Rossignol, J.; Turtos, R.M.; Gundacker, S.; Gaudreault, D.; Auffray, E.; Lecoq, P.; Bérubé-Lauzière, Y.; Fontaine, R. Time-of-flight computed tomography-proof of principle. Phys. Med. Biol. 2020, 65, 085013.
  59. Cicirelli, G.; Impedovo, D.; Dentamaro, V.; Marani, R.; Pirlo, G.; D’Orazio, T.R. Human Gait Analysis in Neurodegenerative Diseases: A Review. IEEE J. Biomed. Health Inform. 2022, 26, 229–242.
  60. Vargas-Valencia, L.S.; Elias, A.; Rocon, E.; Bastos-Filho, T.; Frizera, A. An IMU-to-Body Alignment Method Applied to Human Gait Analysis. Sensors 2016, 16, 2090.
  61. Taborri, J.; Palermo, E.; Rossi, S.; Cappa, P. Gait Partitioning Methods: A Systematic Review. Sensors 2016, 16, 66.
  62. Lee, S.H.; Lee, D.W.; Jun, K.; Lee, W.; Kim, M.S. Markerless 3D Skeleton Tracking Algorithm by Merging Multiple Inaccurate Skeleton Data from Multiple RGB-D Sensors. Sensors 2022, 22, 3155.
  63. Harris, E.J.; Khoo, I.H.; Demircan, E. A Survey of Human Gait-Based Artificial Intelligence Applications. Front. Robot. AI 2021, 8, 749274.
  64. Trabassi, D.; Serrao, M.; Varrecchia, T.; Ranavolo, A.; Coppola, G.; De Icco, R.; Tassorelli, C.; Castiglia, S.F. Machine Learning Approach to Support the Detection of Parkinson’s Disease in IMU-Based Gait Analysis. Sensors 2022, 22, 3700.
  65. Zhang, Y.; Ma, Y. Application of supervised machine learning algorithms in the classification of sagittal gait patterns of cerebral palsy children with spastic diplegia. Comput. Biol. Med. 2019, 106, 33–39.
  66. Imura, T.; Toda, H.; Iwamoto, Y.; Inagawa, T.; Imada, N.; Tanaka, R.; Inoue, Y.; Araki, H.; Araki, O. Comparison of Supervised Machine Learning Algorithms for Classifying of Home Discharge Possibility in Convalescent Stroke Patients: A Secondary Analysis. J. Stroke Cerebrovasc. Dis. 2021, 30, 106011.
  67. Zhou, Y.; Romijnders, R.; Hansen, C.; Campen, J.V.; Maetzler, W.; Hortobágyi, T.; Lamoth, C.J.C. The detection of age groups by dynamic gait outcomes using machine learning approaches. Sci. Rep. 2020, 10, 4426.
  68. Farah, J.D.; Baddour, N.; Lemaire, E.D. Design, development, and evaluation of a local sensor-based gait phase recognition system using a logistic model decision tree for orthosis-control. J. Neuroeng. Rehabil. 2019, 16, 22.
  69. Portillo-Portillo, J.; Leyva, R.; Sanchez, V.; Sanchez-Perez, G.; Perez-Meana, H.; Olivares-Mercado, J.; Toscano-Medina, K.; Nakano-Miyatake, M. Cross View Gait Recognition Using Joint-Direct Linear Discriminant Analysis. Sensors 2016, 17, 6.
  70. Rattanasak, A.; Uthansakul, P.; Uthansakul, M.; Jumphoo, T.; Phapatanaburi, K.; Sindhupakorn, B.; Rooppakhun, S. Real-Time Gait Phase Detection Using Wearable Sensors for Transtibial Prosthesis Based on a kNN Algorithm. Sensors 2022, 22, 4242.
  71. Zhao, Y.; Zhou, S. Wearable Device-Based Gait Recognition Using Angle Embedded Gait Dynamic Images and a Convolutional Neural Network. Sensors 2017, 17, 478. [Google Scholar] [CrossRef] [Green Version]
  72. Huang, H.; Zhou, P.; Li, Y.; Sun, F. A Lightweight Attention-Based CNN Model for Efficient Gait Recognition with Wearable IMU Sensors. Sensors 2021, 21, 2866. [Google Scholar] [CrossRef]
  73. Pulido-Valdeolivas, I.; Gómez-Andrés, D.; Martín-Gonzalo, J.A.; Rodríguez-Andonaegui, I.; López-López, J.; Pascual-Pascual, S.I.; Rausell, E. Gait phenotypes in paediatric hereditary spastic paraplegia revealed by dynamic time warping analysis and random forests. PLoS ONE 2018, 13, e0192345. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  74. Gao, C.; Yan, J.; Zhou, S.; Varshney, P.K.; Liu, H. Long short-term memory-based deep recurrent neural networks for target tracking. Inf. Sci. 2019, 502, 279–296. [Google Scholar] [CrossRef]
  75. Fricke, C.; Alizadeh, J.; Zakhary, N.; Woost, T.B.; Bogdan, M.; Classen, J. Evaluation of Three Machine Learning Algorithms for the Automatic Classification of EMG Patterns in Gait Disorders. Front. Neurol. 2021, 12, 666458. [Google Scholar] [CrossRef] [PubMed]
  76. Burns, E.R.; Stevens, J.A.; Lee, R. The direct costs of fatal and non-fatal falls among older adults—United States. J. Saf. Res. 2016, 58, 99–103. [Google Scholar] [CrossRef] [PubMed]
  77. Tinetti, M.E.; Speechley, M.; Ginter, S.F. Risk factors for falls among elderly persons living in the community. N. Engl. J. Med. 1988, 319, 1701–1707. [Google Scholar] [CrossRef] [PubMed]
  78. Sacks, D.; Baxter, B.; Campbell, B.C.V.; Carpenter, J.S.; Cognard, C.; Dippel, D.; Eesa, M.; Fischer, U.; Hausegger, K.; Hirsch, J.A.; et al. Multisociety Consensus Quality Improvement Revised Consensus Statement for Endovascular Therapy of Acute Ischemic Stroke. Int. J. Stroke 2018, 13, 612–632. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  79. van Schooten, K.S.; Pijnappels, M.; Rispens, S.M.; Elders, P.J.; Lips, P.; van Dieën, J.H. Ambulatory fall-risk assessment: Amount and quality of daily-life gait predict falls in older adults. J. Gerontol. A Biol. Sci. Med. Sci. 2015, 70, 608–615. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  80. Bargiotas, I.; Wang, D.; Mantilla, J.; Quijoux, F.; Moreau, A.; Vidal, C.; Barrois, R.; Nicolai, A.; Audiffren, J.; Labourdette, C.; et al. Preventing falls: The use of machine learning for the prediction of future falls in individuals without history of fall. J. Neurol. 2022. [Google Scholar] [CrossRef] [PubMed]
  81. VanSwearingen, J.M.; Perera, S.; Brach, J.S.; Wert, D.; Studenski, S.A. Impact of Exercise to Improve Gait Efficiency on Activity and Participation in Older Adults with Mobility Limitations: A Randomized Controlled Trial. Phys. Ther. 2011, 91, 1740–1751. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  82. Lockhart, T.E.; Soangra, R.; Yoon, H.; Wu, T.; Frames, C.W.; Weaver, R.; Roberto, K.A. Prediction of fall risk among community-dwelling older adults using a wearable system. Sci. Rep. 2021, 11, 20976. [Google Scholar] [CrossRef] [PubMed]
  83. Sulzer, J.S.; Gordon, K.E.; Dhaher, Y.Y.; Peshkin, M.A.; Patton, J.L. Preswing knee flexion assistance is coupled with hip abduction in people with stiff-knee gait after stroke. Stroke 2010, 41, 1709–1714. [Google Scholar] [CrossRef] [PubMed]
  84. Fujita, K.; Kobayashi, Y.; Miaki, H.; Hori, H.; Tsushima, Y.; Sakai, R.; Nomura, T.; Ogawa, T.; Kinoshita, H.; Nishida, T.; et al. Pedaling improves gait ability of hemiparetic patients with stiff-knee gait: Fall prevention during gait. J. Stroke Cereb. Dis. 2020, 29, 105035. [Google Scholar] [CrossRef] [PubMed]
  85. Naito, Y.; Kimura, Y.; Hashimoto, T.; Mori, M.; Takemoto, Y. Quantification of gait using insole type foot pressure monitor: Clinical application for chronic hemiplegia. J. Uoeh. 2014, 36, 41–48. [Google Scholar] [CrossRef] [Green Version]
  86. Campanini, I.; Merlo, A.; Damiano, B. A method to differentiate the causes of stiff-knee gait in stroke patients. Gait Posture 2013, 38, 165–169. [Google Scholar] [CrossRef]
  87. Geerars, M.; Minnaar-van der Feen, N.; Huisstede, B.M.A. Treatment of knee hyperextension in post-stroke gait. A systematic review. Gait Posture 2022, 91, 137–148. [Google Scholar] [CrossRef] [PubMed]
  88. Lim, M.R.; Huang, R.C.; Wu, A.; Girardi, F.P.; Cammisa, F.P., Jr. Evaluation of the elderly patient with an abnormal gait. J. Am. Acad. Orthop. Surg. 2007, 15, 107–117. [Google Scholar] [CrossRef] [PubMed]
  89. Rubino, F.A. Gait disorders in the elderly. Distinguishing between normal and dysfunctional gaits. Postgrad. Med. 1993, 93, 185–190. [Google Scholar] [CrossRef] [PubMed]
Figure 1. Illustration of gait kinematics acquisition, including joint angles and velocities, using the Kinect skeletal tracking SDK, OpenCV, and Visual Studio. Kinematic information for the three gait patterns was obtained for offline data analysis. (a) Tracking sequence of normal gait (NG); (b) tracking sequence of pelvic obliquity (PO) gait; (c) tracking sequence of knee hyperextension (KH) gait.
Figure 2. The human joints obtained with the Kinect camera and the Human Skeleton Recognition and Tracking software toolkits. These 32 joint points are generated automatically by the Kinect SDK from the detected motion. Points A-I represent the joints used in this study for the joint angle calculations.
Figure 3. Proposed CNN structure for gait recognition. There are two 2D convolutional layers with the following parameters: f = number of filters, k = kernel size, s = stride, p = padding, and n = number of nodes.
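The paper does not list the exact layer parameters, so the following PyTorch sketch only illustrates the Figure 3 topology: two 2D convolutional layers feeding fully connected layers that output the three gait classes. The filter counts, kernel size, number of hidden nodes, and the 6 × 100 input (six joint angles over 100 frames) are illustrative assumptions, not the authors' values.

```python
import torch
import torch.nn as nn

class GaitCNN(nn.Module):
    """Two 2D convolutional layers followed by fully connected layers,
    mirroring the Figure 3 topology. All parameter values are assumed."""
    def __init__(self, n_classes: int = 3):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, stride=1, padding=1),   # f=16, k=3, s=1, p=1 (assumed)
            nn.ReLU(),
            nn.MaxPool2d(2),                                        # 6x100 -> 3x50
            nn.Conv2d(16, 32, kernel_size=3, stride=1, padding=1),  # f=32 (assumed)
            nn.ReLU(),
            nn.MaxPool2d(2),                                        # 3x50 -> 1x25
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(32 * 1 * 25, 64),  # n=64 hidden nodes (assumed)
            nn.ReLU(),
            nn.Linear(64, n_classes),    # NG / PO / KH
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x))

model = GaitCNN()
logits = model(torch.randn(8, 1, 6, 100))  # batch of 8 gait samples
print(logits.shape)                        # torch.Size([8, 3])
```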
Figure 4. The architecture of the proposed LSTM model.
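In the same spirit, a minimal sketch of an LSTM classifier over joint-angle sequences, matching the intent of Figure 4; the hidden size, layer count, and six-feature input are assumptions for illustration only.

```python
import torch
import torch.nn as nn

class GaitLSTM(nn.Module):
    """LSTM over joint-angle time series, classifying the final hidden state
    into the three gait patterns. Hidden size and input width are assumed."""
    def __init__(self, n_features: int = 6, hidden: int = 64, n_classes: int = 3):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, num_layers=1, batch_first=True)
        self.fc = nn.Linear(hidden, n_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, time, features); classify from the last time step
        out, _ = self.lstm(x)
        return self.fc(out[:, -1, :])

model = GaitLSTM()
logits = model(torch.randn(8, 100, 6))  # 8 sequences of 100 frames x 6 angles
print(logits.shape)                     # torch.Size([8, 3])
```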
Figure 5. Right lower limb joint angles over time during walking, as measured by the Human Skeleton Recognition and Tracking software. (a) Hip joint flexion-extension angle; (b) hip joint abduction-adduction angle; (c) knee joint angle. NG denotes the angles during normal gait, PO the angles during pelvic obliquity gait, and KH the angles during knee hyperextension gait. The upper row shows the joint angles over time; the lower row illustrates the measured joint angles (blue lines).
Figure 6. CNN training loss value over iterations. Classic loss curves were observed during CNN training and validation: the loss value converges rapidly in the early stage of training, and the curve plateaus after about 200 iterations, indicating that the model has converged.
Figure 7. SVM training loss value over iterations. During automatic optimization of the SVM model, the observed error decreases as the iterations increase. The loss value is smallest at the 30th iteration, indicating that the SVM model has reached its optimal parameters.
Figure 8. KNN training loss value over iterations. The KNN model reaches its smallest classification error around the 60th iteration, indicating that it has found its optimal parameters.
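Figures 7 and 8 show the loss decreasing as the SVM and KNN hyperparameters are searched automatically, and Table 2 below likewise reports a different optimal k per subject. The exact optimizer is not specified in this section; a cross-validated grid search in scikit-learn is one plausible stand-in, sketched here with placeholder data and assumed parameter grids.

```python
import numpy as np
from sklearn.model_selection import GridSearchCV
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC

# Placeholder data: 750 samples x 6 joint-angle features, 3 gait classes
# (NG=0, PO=1, KH=2). Real features would come from the Kinect recordings.
rng = np.random.default_rng(0)
X = rng.normal(size=(750, 6))
y = rng.integers(0, 3, size=750)

# Cross-validated search over SVM hyperparameters (grids are assumptions).
svm_search = GridSearchCV(SVC(), {"C": [0.1, 1, 10], "gamma": ["scale", 0.1, 1]}, cv=5)
svm_search.fit(X, y)

# Search over k for KNN; Table 2 reports a different optimal k per subject.
knn_search = GridSearchCV(KNeighborsClassifier(), {"n_neighbors": range(1, 21)}, cv=5)
knn_search.fit(X, y)

print(svm_search.best_params_, knn_search.best_params_)
```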
Figure 9. LSTM training loss value over iterations. Overfitting occurred during training: the loss value decreases over the iterations but does not fully converge.
Figure 10. Comparison of the average classification accuracy of different models.
Figure 11. Confusion matrix results of the four classification methods: (a) CNN; (b) LSTM; (c) SVM; (d) KNN.
Table 1. The definition of joint angles used in this study.

| No | Symbol | Description |
|----|--------------------|---------------------------|
| 1 | θ1 = α(AD, DF) | Left thigh flexion angle |
| 2 | θ2 = α(BE, EG) | Right thigh flexion angle |
| 3 | θ3 = α(DF, CD) | Left hip joint angle |
| 4 | θ4 = α(EG, CE) | Right hip joint angle |
| 5 | θ5 = α(EG, GI) | Right knee angle |
| 6 | θ6 = α(DF, FH) | Left knee angle |
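Each angle in Table 1 is the angle α between two vectors spanned by the Figure 2 joint points (note that D, F, H are left-side joints and E, G, I right-side ones, following rows 1-4). A minimal sketch of that computation from 3D joint coordinates; the coordinates below are hypothetical:

```python
import numpy as np

def joint_angle(p1, p2, p3, p4):
    """Angle in degrees between vectors p1->p2 and p3->p4, i.e. alpha(P1P2, P3P4)
    in the Table 1 notation, computed from 3D Kinect joint coordinates."""
    v1 = np.asarray(p2) - np.asarray(p1)
    v2 = np.asarray(p4) - np.asarray(p3)
    cos_a = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))
    return np.degrees(np.arccos(np.clip(cos_a, -1.0, 1.0)))

# Hypothetical 3D coordinates (metres) for joints A, D, F of Figure 2.
A, D, F = (0.0, 1.0, 2.5), (0.05, 0.55, 2.5), (0.1, 0.1, 2.6)
theta_1 = joint_angle(A, D, D, F)  # theta_1 = alpha(AD, DF): left thigh flexion
print(f"theta_1 = {theta_1:.1f} deg")
```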
Table 2. The runtime of four proposed models.

| Subject | CNN | LSTM | SVM | KNN |
|-----------|----------------|----------------|------------------|------------------|
| Subject 1 | 20.9 s | 32.8 s | 658 s | 300 s (k = 1) |
| Subject 2 | 21.5 s | 32.7 s | 757 s | 317 s (k = 10) |
| Subject 3 | 19.2 s | 33.5 s | 663 s | 315 s (k = 6) |
| Subject 4 | 19.7 s | 33.4 s | 646 s | 334 s (k = 6) |
| Subject 5 | 19.4 s | 33.5 s | 763 s | 316 s (k = 2) |
| Subject 6 | 19.3 s | 29.9 s | 728 s | 327 s (k = 2) |
| Subject 7 | 21.2 s | 33.6 s | 788 s | 307 s (k = 20) |
| Average | 20.17 ± 0.99 s | 32.77 ± 1.32 s | 714.71 ± 58.14 s | 316.57 ± 11.41 s |
Table 3. Accuracy of seven subjects in the gait recognition experiment.

| Subject | CNN | LSTM | SVM | KNN |
|-----------|--------------|--------------|--------------|--------------|
| Subject 1 | 85.0% | 86.6% | 92.3% | 89.3% |
| Subject 2 | 75.0% | 71.7% | 89.7% | 91.3% |
| Subject 3 | 98.3% | 88.3% | 99.3% | 99.7% |
| Subject 4 | 90.0% | 85.0% | 95.3% | 93.3% |
| Subject 5 | 83.3% | 81.7% | 92.0% | 88.7% |
| Subject 6 | 85.0% | 83.3% | 98.0% | 97.0% |
| Subject 7 | 96.7% | 88.3% | 97.7% | 99.0% |
| Average | 87.6 ± 7.5% | 83.6 ± 5.4% | 94.9 ± 3.4% | 94.0 ± 4.2% |
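The Average row can be reproduced directly from the per-subject values; the reported spreads match the population standard deviation (NumPy's default, ddof = 0):

```python
import numpy as np

# Per-subject accuracies from Table 3 (percent), columns CNN/LSTM/SVM/KNN.
acc = {
    "CNN":  [85.0, 75.0, 98.3, 90.0, 83.3, 85.0, 96.7],
    "LSTM": [86.6, 71.7, 88.3, 85.0, 81.7, 83.3, 88.3],
    "SVM":  [92.3, 89.7, 99.3, 95.3, 92.0, 98.0, 97.7],
    "KNN":  [89.3, 91.3, 99.7, 93.3, 88.7, 97.0, 99.0],
}
for name, a in acc.items():
    # np.std uses ddof=0 by default, which matches the paper's +/- values.
    print(f"{name}: {np.mean(a):.1f} ± {np.std(a):.1f}%")
# CNN: 87.6 ± 7.5%, LSTM: 83.6 ± 5.4%, SVM: 94.9 ± 3.4%, KNN: 94.0 ± 4.2%
```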
Table 4. Confusion matrix result of CNN (rows: actual gait pattern; columns: predicted gait pattern).

| CNN | Predicted NG | Predicted PO | Predicted KH | Recall |
|-----------|--------------|--------------|--------------|--------|
| Actual NG | 695 | 1 | 4 | 99.3% |
| Actual PO | 2 | 661 | 37 | 94.4% |
| Actual KH | 8 | 29 | 663 | 94.7% |
| Precision | 98.6% | 95.7% | 94.2% | |
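In these confusion matrices, the percentage at the end of each row is the recall of that actual class (row-wise correct fraction), while the bottom row gives the precision of each predicted class (column-wise correct fraction). A short NumPy sketch verifying the Table 4 values:

```python
import numpy as np

# CNN confusion matrix from Table 4: rows = actual (NG, PO, KH),
# columns = predicted (NG, PO, KH).
cm = np.array([[695,   1,   4],
               [  2, 661,  37],
               [  8,  29, 663]])

recall = cm.diagonal() / cm.sum(axis=1)     # per actual class (row-wise)
precision = cm.diagonal() / cm.sum(axis=0)  # per predicted class (column-wise)
for cls, r, p in zip(["NG", "PO", "KH"], recall, precision):
    print(f"{cls}: recall {r:.1%}, precision {p:.1%}")
# NG: recall 99.3%, precision 98.6%; PO: 94.4%, 95.7%; KH: 94.7%, 94.2%
```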
Table 5. Confusion matrix result of LSTM (rows: actual gait pattern; columns: predicted gait pattern).

| LSTM | Predicted NG | Predicted PO | Predicted KH | Recall |
|-----------|--------------|--------------|--------------|--------|
| Actual NG | 690 | 2 | 8 | 98.6% |
| Actual PO | 4 | 614 | 82 | 87.7% |
| Actual KH | 5 | 49 | 646 | 92.3% |
| Precision | 98.7% | 92.3% | 87.8% | |
Table 6. Confusion matrix result of SVM (rows: actual gait pattern; columns: predicted gait pattern).

| SVM | Predicted NG | Predicted PO | Predicted KH | Recall |
|-----------|--------------|--------------|--------------|--------|
| Actual NG | 691 | 4 | 5 | 98.7% |
| Actual PO | 4 | 659 | 37 | 94.1% |
| Actual KH | 5 | 32 | 663 | 94.7% |
| Precision | 98.7% | 94.8% | 94.0% | |
Table 7. Confusion matrix result of KNN (rows: actual gait pattern; columns: predicted gait pattern).

| KNN | Predicted NG | Predicted PO | Predicted KH | Recall |
|-----------|--------------|--------------|--------------|--------|
| Actual NG | 681 | 11 | 8 | 97.3% |
| Actual PO | 3 | 661 | 36 | 94.4% |
| Actual KH | 5 | 55 | 640 | 91.4% |
| Precision | 98.8% | 90.9% | 93.6% | |
Table 8. Accuracy of individual ML algorithms for different gait recognition patterns.

| ML Method | Correct Prediction | PO Gait No | PO Gait Yes | KH Gait No | KH Gait Yes | Normal Gait No | Normal Gait Yes |
|-----------|--------------------|------------|-------------|------------|-------------|----------------|-----------------|
| SVM | Count | 36 (5.2%) | 659 (94.8%) | 42 (6.0%) | 663 (94.0%) | 9 (1.3%) | 691 (98.7%) |
| SVM | p value | 0.089 | 0.089 | 0.057 | 0.057 | 0.993 | 0.993 |
| KNN | Count | 66 (9.1%) | 661 (90.9%) | 42 (6.2%) | 640 (93.8%) | 8 (1.2%) | 681 (98.8%) |
| KNN | p value | 0.001 | 0.001 | 0.102 | 0.102 | 0.732 | 0.732 |
| CNN | Count | 30 (4.3%) | 661 (95.7%) | 41 (5.8%) | 663 (94.2%) | 10 (1.4%) | 695 (98.6%) |
| CNN | p value | 0.007 | 0.007 | 0.040 | 0.040 | 0.724 | 0.724 |
| LSTM | Count | 51 (7.7%) | 614 (92.3%) | 90 (12.2%) | 646 (87.8%) | 9 (1.3%) | 690 (98.7%) |
| LSTM | p value | 0.194 | 0.194 | 0.000 | 0.000 | 0.997 | 0.997 |

Chi-square test with Bonferroni correction.
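The exact contingency design behind these p values is not spelled out here; one plausible reading tests each method's correct/incorrect split for a pattern against the other methods pooled, at a Bonferroni-adjusted significance level. A SciPy sketch of that reading, using the PO-gait counts from Table 8:

```python
from scipy.stats import chi2_contingency

# (correct, incorrect) PO-gait prediction counts per method, from Table 8.
po_counts = {"SVM": (659, 36), "KNN": (661, 66), "CNN": (661, 30), "LSTM": (614, 51)}

# Assumed design: each method's 2x2 split vs. the other methods pooled,
# with a Bonferroni-adjusted alpha for the four tests.
alpha = 0.05 / len(po_counts)
for method, (yes, no) in po_counts.items():
    others = [c for m, c in po_counts.items() if m != method]
    pooled = [sum(y for y, _ in others), sum(n for _, n in others)]
    chi2, p, _, _ = chi2_contingency([[yes, no], pooled])
    print(f"{method}: chi2={chi2:.2f}, p={p:.3f}, significant={p < alpha}")
```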