Article

Computer Vision System Based on the Analysis of Gait Features for Fall Risk Assessment in Elderly People

by Rogelio Cedeno-Moreno 1, Diana L. Malagon-Barillas 2, Luis A. Morales-Hernandez 1, Mayra P. Gonzalez-Hernandez 2 and Irving A. Cruz-Albarran 1,3,*
1 Laboratory of Artificial Vision and Thermography/Mechatronics, Faculty of Engineering, Autonomous University of Queretaro, Campus San Juan del Rio, San Juan del Rio 76807, Mexico
2 University Physiotherapy Care System, Faculty of Nursing, Autonomous University of Queretaro, Campus Corregidora, Santiago de Queretaro 76912, Mexico
3 Artificial Intelligence Systems Applied to Biomedical and Mechanical Models, Faculty of Engineering, Autonomous University of Queretaro, Campus San Juan del Rio, San Juan del Rio 76807, Mexico
* Author to whom correspondence should be addressed.
Appl. Sci. 2024, 14(9), 3867; https://doi.org/10.3390/app14093867
Submission received: 5 April 2024 / Revised: 25 April 2024 / Accepted: 25 April 2024 / Published: 30 April 2024
(This article belongs to the Special Issue Advanced Sensors for Postural or Gait Stability Assessment)

Abstract: Up to 30% of people over the age of 60 are at high risk of falling, which can cause injury, aggravation of pre-existing conditions, or even death, with up to 684,000 fatal falls reported annually. This is compounded by the difficulty of establishing a preventive care system for the elderly, both in the hospital environment and at home. Therefore, this work proposes the development of an intelligent vision system that uses a novel methodology to infer fall risk from the analysis of kinetic and spatiotemporal gait parameters. First, each patient is assessed using the Tinetti scale. Then, the computer vision system estimates the biomechanics of walking and obtains gait features such as stride length, cadence, period, and range of motion. Subsequently, this information serves as input to an artificial neural network that diagnoses the risk of falling. Ninety-six participants took part in the study. The system achieved 99.1% accuracy, 94.4% precision, 96.9% recall, 99.4% specificity, and 95.5% F1-Score. Thus, the proposed system can perform fall risk assessment, which could benefit clinics, hospitals, and even homes by allowing them to determine in real time whether a person is at high risk of falling and to provide timely assistance.

1. Introduction

A fall is defined as an event in which a person unintentionally comes to rest on the ground or another lower level. It is the second leading cause of unintentional injury death, with 684,000 fatal falls per year, primarily in adults over the age of 60 [1]. In addition, it has been observed that more than 30% of older adults suffer a fall each year, with 14% of these falls being recurrent [2], resulting in disability, loss of independence, limitation of activities of daily living, and functional impairment [3]. This is often associated with the natural degeneration of gait due to the physiological changes of aging: gait disorders occur in 10% of people between the ages of 60 and 69 and in more than 60% of people over the age of 80 [4]. Gait disorders can have a significant impact on the quality of life of older adults and their participation in social activities, and a strong association has been observed with an increased risk of falls [5]. Therefore, worldwide guidelines for the prevention of falls in the elderly have recommended the use of devices that can measure the risk of falling, with gait speed as one of the most important indicators [6].
To meet these needs, machine learning (ML) algorithms have been implemented in recent years. These are based on structures inspired by the human brain for analyzing complex and unstructured information [7]. This has facilitated the processing of large amounts of information from images, text, and sound, whose analysis is not feasible with traditional methods due to ambiguities in the information [8]. Building on this, intelligent systems have been developed to provide fall risk information, classified according to the type of sensor used and the function of the system [9]. Commonly used sensors can be divided into three categories: wearable sensors, computer vision systems, and environmental or fusion sensors [10]. Wearable sensor systems typically consist of accelerometers [11,12,13], gyroscopes, or a combination of both [14]. These are referred to as inertial measurement units (IMUs) and are placed directly on the user to provide continuous individual monitoring, which, in many cases, allows free movement [10]. Chen et al. [15] and Mehmood et al. [16] achieved promising results by developing fall detection systems with an accuracy of 99% that alert hospital or clinic staff. However, these have the disadvantage that patients may forget to wear the device or recharge its batteries [17]. Alternatively, there are computer vision systems that operate in a predetermined area, such as a hallway, room, or garden, to monitor one or more people within the field of view (FOV) continuously and non-invasively [9]. This is carried out using deep learning APIs for image processing, such as MediaPipe [18] or YOLO [8], which provide functions for human detection, face recognition, and skeleton reconstruction, to name a few. Using these tools, fall risk assessment systems have been developed, such as the work of Blasco-Garcia et al. [19] and Eichler et al. [20], where vision systems evaluate established tests for calculating fall risk, such as the Tinetti tests and the Berg Balance Scale (BBS), allowing patients to be classified into low, medium, and high risk categories with an accuracy of up to 97%. Similarly, Anitha and Priya [17] used a vision system to recognize people and their poses using MobileNet to perform a binary classification between fall and non-fall states, with a maximum accuracy of 100%. Finally, environmental or fusion systems tend to use vibration sensors [21], microphones [22], and pressure plates [10]. These characterize the signals generated during a fall event; however, they lack feedback that would allow them to distinguish between the fall of a person and that of an object, which makes them very practical to implement but prone to false alarms [10].
These intelligent systems can also be classified according to their function. One category is fall detection, such as the works of Aziz et al. [13], Chen et al. [15], Mehmood et al. [16], Ranakoti et al. [23], and Anitha and Priya [17], where a binary classification is used to determine whether the patient or elderly person has fallen, and some even notify nurses, physiotherapists, caretakers, or family members. However, while such systems typically have accuracies of around 96% and avoid long waits for assistance [24,25], they have the disadvantage of post-fall alarms [26,27]. This means that the patient or elderly person has already been exposed to trauma caused by the fall, which may aggravate their already fragile state. To avoid this, other systems, such as those by Blasco-Garcia et al. [19], Eichler et al. [20], Khandoker et al. [28], Silva et al. [29], and Drover et al. [30], have opted for a preventive approach that assesses fall risk and determines which patients require further assistance and continuous monitoring. However, these are designed to assist in the assessment of tests such as the BBS, Tinetti, and Morse Fall Scale, to name a few. The problem is that such assessments require tests that are not commonly performed in a clinic or hospital due to their time-consuming nature. Instead, an approach such as that of Khandoker et al. [28] may be more useful, as it analyzes fall risk based on parameters extracted from gait. In their case, a Vicon-based system and wavelet transform post-processing are used to classify patients as low or high fall risk without the need to perform specific tests, using the minimum foot clearance (MFC) as a gait feature. This type of system offers the advantage of continuous, non-invasive, and time-efficient assessment, but there is still room for improvement considering that only the MFC is assessed. This is a measurement that can be difficult to obtain in an uncontrolled environment, as the view of the feet can easily be obstructed. On the other hand, gait has other indicators that could perform better and can be obtained more easily, such as stride length, cadence, speed, period between steps, and knee ranges of motion, among others.
This paper proposes a new methodology to establish a relationship between fall risk and parameters extracted from gait, using Tinetti test scores as ground truth, in order to develop a monitoring system that can continuously assess fall risk without interfering with the activities of patients or professionals. Such a system could be used in a clinic or hospital to keep staff aware of the status of their patients. Therefore, the main contributions of this work are an automated methodology to estimate the biomechanics of walking, extract gait characteristics, and assess fall risk; and a low-cost computer vision system that implements this methodology with performance indicators of 99.1% accuracy, 94.4% precision, 96.9% recall, 99.4% specificity, and 95.5% F1-Score.

2. Materials and Methods

2.1. General Diagram

The general methodology shown in Figure 1 was followed in the development of this work. It starts with the selection of the population, which focuses on the evaluation of people over 60 years of age who meet the inclusion and exclusion criteria. For each participant, an identification form is generated with basic information such as age, height, and weight. This information is complemented by the application of the Tinetti tests to assess fall risk, the result of which is taken as ground truth. Next, a gait assessment is performed using a computer vision system, which consists of a digital camera that monitors walking on a treadmill at a comfortable speed. The video is analyzed to estimate the biomechanics of the lower extremities, from which the main gait features are extracted, such as stride length, cadence, period, and range of motion (ROM). Then, the extracted features are used to train and validate an artificial neural network (ANN) that seeks a correlation between the features and the Tinetti score; as a result, the final system can predict the Tinetti score of new cases via gait analysis.

2.2. Population

The population included 96 participants with a mean age of 68.6 years and a standard deviation of 10.1, of whom 70 were women and 26 were men. Subjects were selected only if they were over 60 years of age and did not have any of the following exclusion criteria: vertigo, lower limb prosthesis, acute lower limb injury, and use of assistive devices such as canes, walkers, crutches, and wheelchairs. Participants were recruited via informational flyers.

2.3. Preliminary Evaluation

2.3.1. Identification Record

Physiotherapists from the University Physiotherapy Care System Clinic registered an ID card for each participant, recording basic information such as age, sex, weight, and height. It was also verified that no participant met any exclusion criterion.

2.3.2. Tinetti Test

The Tinetti Balance Scale, a commonly used tool to assess balance and gait and to identify fall risk in older adults [31], was applied to the participants. It is scored out of 28 points, with a gait subscale of 12 points and a balance subscale of 16 points. These two scores are added together to determine the fall risk in older adults, with higher scores indicating better performance. The totals are then grouped to determine the level of fall risk: ≥25 no risk, 19–24 fall risk, and <19 high fall risk. The Tinetti Balance Scale is a valid and reliable tool for assessing mobility (r = 0.74–0.93), with high interobserver reliability (0.95) [32]. Some of the items evaluated include an 8 m walk, standing up from and sitting down in a chair, standing balance and responses to imbalance while standing, and 360° turns.
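For illustration, the score-to-category mapping described above can be expressed directly in code. This is a minimal sketch; the function name is a hypothetical helper, not part of the authors' software:

```python
def tinetti_risk_category(score: int) -> str:
    """Map a combined Tinetti score (0-28) to the risk levels used here:
    >= 25 no risk, 19-24 fall risk, < 19 high fall risk."""
    if score >= 25:
        return "no risk"
    if score >= 19:
        return "fall risk"
    return "high fall risk"

print(tinetti_risk_category(22))  # -> fall risk
```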

2.4. Gait Assessment

2.4.1. Computer Vision System

The vision system consists of an IDS UI-3130CP-M-GL R2 monochrome camera (IDS Imaging, Obersulm, Germany), with industrial-grade robustness, and is configured to record at 20 frames per second (FPS). It incorporates a 3.5 mm lens with radial distortion of less than 1 px. The camera is mounted on a tripod at 1.7 m to obtain a vertical FOV of 1.4 m, visualizing both lower extremities [33]. The system is complemented by artificial illumination from infrared reflectors with a wavelength of 850 nm to provide uniform illumination of the workspace without causing discomfort to the participants.

2.4.2. Workspace

To perform gait evaluation with the vision system, a controlled work environment is set up, as shown in the diagram in Figure 2: an enclosed room with an electric treadmill used by the subjects. The area is isolated from the rest of the clinic by dark curtains that prevent the entry of ambient light; instead, infrared reflectors are placed to maintain controlled and constant illumination [33]. Controlling the working environment simplifies the preprocessing required for image analysis, as there is no need to isolate objects or people unrelated to the tests, and any movement of the lower extremities remains visible. The reflectors, the vision system camera, and the PC used to coordinate video acquisition are placed 1.7 m from the treadmill. The final layout of the workspace is designed to ensure that the subjects are not at risk of falling and that it is easy to enter and exit the test area.

2.4.3. Gait Assessment Protocol

A biomechanical gait study was performed using computer vision to identify kinematic parameters. To acquire this information, participants stepped onto a treadmill whose speed was increased incrementally until it matched their usual gait, referred to as the comfort speed; this speed was maintained for up to 3 min when possible. Participants were informed of this process in advance so that they could feel confident that it posed no risk to their health or safety. During preparation for the gait tests, reflective markers were placed at various locations on the right limb; however, these were used only for a derivative project with which the video database was shared.
Each test was supervised by a physiotherapist who could interrupt it at any time if they felt that the volunteer needed assistance due to fatigue, discomfort, or dizziness, among other reasons. After the gait analysis, the participants were assisted in getting off the treadmill. This procedure was applied to each participant and took approximately 10 min, including the time needed to explain the procedure and prepare the participant.
To carry out the tests, the project was submitted to the Ethics Council of the Autonomous University of Queretaro, which granted an approval status under the folio number FOPER-2021-FEN02370. The procedure complies with NOM-012-SSA3 2012, which establishes the guidelines regarding the ethical aspect and the physical well-being and integrity that must be guaranteed in all research involving human participants. In addition, the guidelines established in the General Health Law Regulations on Health Research, the Declaration of Helsinki, and the Good Clinical Practices issued by the National Bioethics Commission are followed.

2.5. Biomechanics Estimation

For video analysis, YOLOv8 is used, which is a high-speed, high-precision deep learning (DL) model used in computer vision [8,34]. It supports tasks such as detection, segmentation, pose estimation, tracking, and classification, and it can be trained on custom databases to recognize specific objects, such as people, and to track key points, i.e., points of interest in an object that need to be detected and, in the case of video, tracked. To develop the system, a set of 100 images extracted from videos of volunteers walking in the test area is used. The images are loaded into CVAT (https://www.cvat.ai, accessed on 11 December 2023), an online tool for labeling images used in AI training [35]. Its toolbox allows the placement of key points, which, in this case, indicate the different segments of the left and right lower extremities. An example of this process is shown in Figure 3, where all the key points are visible. It is important to note that it was later decided not to use the reflective markers shown in the figure, as they were limited to monitoring only one limb. In addition, there was a high risk of not detecting all the markers, since accurately segmenting them in each frame requires very robust pre-processing and proper light conditioning.
Once all the images have been labeled, CVAT generates a database containing information about the boxes and key points placed, such as their positions and dimensions. These annotations are then exported to Python, where they are used to train a DL model. For the development of the algorithm, the open-source Keras library in Python was chosen, which is designed for the training and implementation of DL models [36,37]. Such models can be used both for neural networks that analyze statistical databases and for image analysis models that perform detection of objects, people, faces, and body orientation, to name a few. Therefore, using Keras and the database obtained from CVAT, personalized models are trained, with the information and images divided into training and validation sets. The resulting model is then used to analyze the gait test videos, identifying each lower limb section via key points in every frame. This results in an estimate of the subject's pose within the frame, as shown in Figure 4, which, when monitored over time, generates signals representing the biomechanics of walking.
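As a minimal sketch, the per-frame key point extraction step could look like the following, written against the Ultralytics YOLOv8 pose API; the pretrained checkpoint, video path, and single-person assumption are illustrative stand-ins for the authors' custom-trained model:

```python
import cv2
from ultralytics import YOLO

# A custom model trained on the CVAT lower-limb labels would be loaded here;
# the generic pose checkpoint is only an illustrative placeholder.
model = YOLO("yolov8n-pose.pt")

cap = cv2.VideoCapture("gait_test.mp4")  # hypothetical test video
trajectories = []  # per-frame key point coordinates

while True:
    ok, frame = cap.read()
    if not ok:
        break
    results = model(frame, verbose=False)
    kpts = results[0].keypoints  # key points of the detected person(s)
    if kpts is not None and len(kpts.xy) > 0:
        # (num_keypoints, 2) array of (x, y) pixel coordinates, first person only
        trajectories.append(kpts.xy[0].tolist())

cap.release()
# Tracking each joint over frames yields the signals (e.g., ankle displacement)
# analyzed in Section 2.6.
```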

2.6. Gait Feature Extraction

2.6.1. List of Features

  • Stride: Corresponds to the distance between two consecutive supports of the same foot, measured in centimeters [38].
  • Velocity: It is a simple, objective, and global measure of neuromuscular function and physical performance of the lower extremities, corresponding to the distance covered in a unit of time (m/s) [39].
  • Period: The time between the moment of toe-off and the first contact of the same foot, measured in seconds (s) [40].
  • Cadence: Number of steps taken in a given time by a person walking at spontaneous speed (steps per minute) [41].
  • ROM: The maximum angle described between two body segments with respect to a reference plane measured at the joints, i.e., the number of degrees through which a joint can move [41].
  • Knee angle during heel strike: The angle of knee extension during heel strike, when the heel contacts the ground.
  • Knee angle during toe-off: The angle of knee flexion during the toe-off phase, when the foot loses contact with the ground.

2.6.2. Feature Calculation

To extract the indicators, the signals obtained from the vision system are used, in which the movements of the hip, knee, and ankle of both legs are monitored over time. First, the displacement of the ankle is examined to determine the exact moments at which toe-off and heel strike occur, since these coincide with changes in the direction of foot displacement, generating peaks and valleys that give the signal an approximately sinusoidal shape.
To calculate the peaks and valleys, the peakutils library is used to determine the magnitudes of the peaks ($P_i$) and valleys ($V_i$), as well as the exact times at which they occurred, $t_{P_i}$ and $t_{V_i}$, respectively. From these, the average stride length and period ($T$) are calculated using the following equations:

$$\mathrm{stride} = \frac{1}{n}\sum_{i=1}^{n} \left| P_i - V_i \right|$$

$$T = \frac{1}{n}\sum_{i=1}^{n} \left| t_{P_i} - t_{V_i} \right|$$
From this, the average cadence and speed ($v$) can be calculated using the following equations:

$$\mathrm{cadence} = \frac{60}{T}$$

$$v = \frac{\mathrm{stride}}{T}$$
To determine the range of motion (ROM), it is necessary to calculate the angular displacement of the knee. For this, the hip, knee, and ankle are considered as reference points, each referred to by the indexes shown in Figure 3:

$$\theta_{\mathrm{right}} = \tan^{-1}\!\left(\frac{y_6 - y_4}{x_6 - x_4}\right) - \tan^{-1}\!\left(\frac{y_2 - y_4}{x_2 - x_4}\right)$$

$$\theta_{\mathrm{left}} = \tan^{-1}\!\left(\frac{y_5 - y_3}{x_5 - x_3}\right) - \tan^{-1}\!\left(\frac{y_1 - y_3}{x_1 - x_3}\right)$$
Using the angular displacement of the knee, the ROM is obtained by averaging the amplitudes of the peaks $P$ (maxima) and valleys $V$ (minima), resulting in the following equations:

$$\theta_{\mathrm{max}} = \frac{1}{n}\sum_{i=1}^{n} \left| P_i \right|$$

$$\theta_{\mathrm{min}} = \frac{1}{n}\sum_{i=1}^{n} \left| V_i \right|$$
Finally, the knee angles during toe-off and heel strike are determined by superimposing the knee angular displacement and the ankle x-axis displacement signals in time, as shown in Figure 5. The peaks and valleys of the ankle signal correspond to the beginning and end of the stride, so the times $t_P$ correspond to heel strike and $t_V$ to toe-off. Therefore, the amplitude of the angular displacement can be read at those same times:

$$\theta_{hs} = \frac{1}{n}\sum_{i=1}^{n} \left| \theta_{t_P,i} \right|$$

$$\theta_{to} = \frac{1}{n}\sum_{i=1}^{n} \left| \theta_{t_V,i} \right|$$
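The following minimal sketch gathers these calculations, assuming `ankle_x` (ankle x-displacement, in pixels unless spatially calibrated) and `knee_angle` (knee angular displacement, in degrees) have already been extracted from the pose estimates at the camera's 20 FPS; the peak thresholds and minimum peak distances are illustrative assumptions:

```python
import numpy as np
import peakutils

FPS = 20.0  # camera frame rate (Section 2.4.1)

def gait_features(ankle_x: np.ndarray, knee_angle: np.ndarray):
    t = np.arange(len(ankle_x)) / FPS
    # Peaks ~ heel strikes, valleys ~ toe-offs (valleys via the inverted signal).
    p_idx = peakutils.indexes(ankle_x, thres=0.5, min_dist=int(FPS / 2))
    v_idx = peakutils.indexes(-ankle_x, thres=0.5, min_dist=int(FPS / 2))
    n = min(len(p_idx), len(v_idx))          # naive pairing for the sketch
    p_idx, v_idx = p_idx[:n], v_idx[:n]

    stride = np.mean(np.abs(ankle_x[p_idx] - ankle_x[v_idx]))
    period = np.mean(np.abs(t[p_idx] - t[v_idx]))
    cadence = 60.0 / period                  # steps per minute
    velocity = stride / period
    rom_max = np.mean(np.abs(knee_angle[peakutils.indexes(knee_angle, thres=0.5)]))
    rom_min = np.mean(np.abs(knee_angle[peakutils.indexes(-knee_angle, thres=0.5)]))
    theta_hs = np.mean(np.abs(knee_angle[p_idx]))  # knee angle at heel strike
    theta_to = np.mean(np.abs(knee_angle[v_idx]))  # knee angle at toe-off
    return stride, period, cadence, velocity, rom_max, rom_min, theta_hs, theta_to
```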

2.7. Network Training and Validation

Artificial neural networks (ANNs) are based on a design inspired by the connections that naturally occur between neurons in the human brain [42]. The idea is to create machines that can perform tasks that cannot be carried out with traditional algorithms because they depend on an undefined and uncertain number of factors. By simulating human brain processing, complex information such as images, sounds, and large databases can be analyzed for patterns and irregularities [43]. This can be used for classification and regression tasks, where the patterns found are correlated with a desired output; in this way, the network “learns” what output to provide according to the input it receives [13]. ANNs consist of simple elements called perceptrons that simulate the processing activity of a neuron. Like neurons, perceptrons form a layered network, with an input layer, an output layer, and the layers between them called hidden layers. There can be several hidden layers, and the number of perceptrons in each can vary. In contrast, the input layer must have a number of perceptrons equal to the number of input values, while the size of the output layer depends on the task to be performed [44].
In all layers, the perceptrons have one or more inputs $x$, which can be external or come from other perceptrons. These inputs are weighted with a weight $w$ to determine which inputs should contribute more to the output $y$, which is given by the following equation:

$$y = \sum_{j=1}^{d} w_j x_j + w_0$$

where $d$ is the number of perceptron inputs, and $w_0$ is the weight given to the bias input $x_0$, which always has a value of 1. This establishes a relationship between the input and the output of each perceptron, which is adjusted so that the network provides a desired output for a given input. Therefore, an iterative training process is applied, starting from random values of $w$, determining the error between the obtained output and the desired one, and adjusting the parameters to minimize the error in each iteration. The result is a trained network capable of inferring an output corresponding to the desired result for any new input.
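As a concrete instance of the perceptron equation above, the following sketch evaluates the weighted sum for $d = 3$ inputs with illustrative weight values:

```python
import numpy as np

x = np.array([0.5, -1.2, 0.8])   # perceptron inputs x_1 .. x_d (d = 3)
w = np.array([0.4, 0.1, -0.3])   # weights w_1 .. w_d (illustrative values)
w0 = 0.2                         # bias weight; the bias input x_0 is fixed at 1

# y = sum_j w_j * x_j + w_0, the perceptron output before any activation
y = np.dot(w, x) + w0
print(y)  # -> 0.04
```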

3. Results

3.1. Tinetti Scale

A total of 96 participants were evaluated. Of these, 71 older adults scored 25 points or more on the Tinetti scale and were considered not at risk of falling, 15 participants scored between 19 and 24 points, i.e., at risk of falling, and 10 were excluded because they could not maintain a stable gait on the treadmill. Table 1 shows the frequencies of the scores obtained on the Tinetti scale.
On the other hand, Table 2 describes the mean results of the gait analysis. Cadence is reported in steps per minute, period in seconds, stride in centimeters, speed in meters per second, and maximum ROM, minimum ROM, knee angle at heel strike, and knee angle at toe-off in degrees.

3.2. Fall Risk Assessment System

Based on the results of Section 3.1, information was collected from the 86 subjects who met the inclusion criteria and had Tinetti test results. To enlarge the database, the videos are divided into 5 s windows, segmenting the signals as shown in Figure 6. In this way, samples with 5 to 9 steps are available, which is sufficient for calculating the indicators proposed in Section 2.6.
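A minimal sketch of this windowing step, assuming the gait signals are sampled at the camera's 20 FPS (so each 5 s window spans 100 frames):

```python
import numpy as np

FPS = 20                     # camera frame rate (Section 2.4.1)
WINDOW_S = 5                 # window length in seconds
WINDOW_LEN = FPS * WINDOW_S  # 100 frames per window

def split_windows(signal: np.ndarray) -> np.ndarray:
    """Split a 1-D gait signal into non-overlapping 5 s windows,
    discarding any incomplete trailing segment."""
    n_windows = len(signal) // WINDOW_LEN
    return signal[: n_windows * WINDOW_LEN].reshape(n_windows, WINDOW_LEN)

# e.g., a full 3 min recording (3600 frames) yields 36 windows of 100 frames
```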
The resulting database consists of 1129 samples, discarding cases where acquisition errors were present. The information collected from each subject consists of age, gender, and the calculation of the eight gait indicators for both legs, as defined in Section 2.6.1. These are complemented by the Tinetti score, which is used as a reference for the desired output of the system.
To classify the cases, an ANN is implemented using ReLU as the activation function for the hidden layers, softmax for the output layer, and categorical cross-entropy as the loss function. For training and validation, a 5-fold cross-validation method is used to divide the data between training and validation at 80% (904 samples) and 20% (225 samples), respectively. The performance of the resulting algorithm is evaluated in terms of accuracy, precision, recall, specificity, and F1-Score using the following equations:
$$\mathrm{Accuracy} = \frac{TP + TN}{TP + FP + TN + FN}$$

$$\mathrm{Precision} = \frac{TP}{TP + FP}$$

$$\mathrm{Recall} = \frac{TP}{TP + FN}$$

$$\mathrm{Specificity} = \frac{TN}{FP + TN}$$

$$F1\text{-}\mathrm{Score} = \frac{2 \cdot \mathrm{Precision} \cdot \mathrm{Recall}}{\mathrm{Precision} + \mathrm{Recall}}$$
where TP (true positives) are cases whose output corresponds to the correct class, FP (false positives) are cases mistakenly assigned to a class to which they do not belong, TN (true negatives) are cases correctly identified as not belonging to a class, and FN (false negatives) are cases that belong to a class but whose output does not correspond to it.
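These metrics can be computed from a multi-class confusion matrix and macro-averaged across classes. A minimal scikit-learn sketch with illustrative labels (the actual evaluation uses the 5-fold cross-validation described above):

```python
import numpy as np
from sklearn.metrics import (accuracy_score, confusion_matrix, f1_score,
                             precision_score, recall_score)

y_true = [25, 26, 26, 19, 28]  # illustrative Tinetti score labels
y_pred = [25, 26, 27, 19, 28]  # illustrative network outputs

acc = accuracy_score(y_true, y_pred)
prec = precision_score(y_true, y_pred, average="macro", zero_division=0)
rec = recall_score(y_true, y_pred, average="macro", zero_division=0)
f1 = f1_score(y_true, y_pred, average="macro", zero_division=0)

# Specificity is not built in; derive it per class from the confusion matrix.
cm = confusion_matrix(y_true, y_pred)
tn = cm.sum() - cm.sum(axis=0) - cm.sum(axis=1) + np.diag(cm)
fp = cm.sum(axis=0) - np.diag(cm)
specificity = np.mean(tn / (tn + fp))
```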
To define the structure of the algorithm, a trial-and-error process was used to determine the number of hidden layers and the number of perceptrons in each, using ReLU as the activation function. Table 3 shows a synthesized representation of the proposed structures, along with their performance during validation.
The structure of Test 4 was chosen because it offers the best performance with a less complex structure than Test 5. From this, an ablation test was applied to determine which activation function performs best; the results are presented in Table 4.
For the final model, ReLU was chosen, as it outperformed the other activation functions and performed consistently under cross-validation. Figure 7 shows a diagram of the algorithm structure, with ReLU as the hidden layer activation function and softmax in the output layer.
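A minimal Keras sketch of this final structure (18 inputs; hidden layers of 100, 70, and 40 ReLU perceptrons, per Table 3, Test 4; an 11-class softmax output covering the Tinetti scores observed in Table 1); the optimizer choice is an assumption not stated in the text:

```python
from tensorflow import keras
from tensorflow.keras import layers

model = keras.Sequential([
    keras.Input(shape=(18,)),                # age, sex, and gait features of both legs
    layers.Dense(100, activation="relu"),    # hidden layer 1
    layers.Dense(70, activation="relu"),     # hidden layer 2
    layers.Dense(40, activation="relu"),     # hidden layer 3
    layers.Dense(11, activation="softmax"),  # one output per Tinetti score class
])
model.compile(optimizer="adam",              # optimizer assumed for illustration
              loss="categorical_crossentropy",
              metrics=["accuracy"])
# Training then runs for 500 iterations within each cross-validation fold, e.g.,
# model.fit(X_train, y_train, epochs=500, validation_data=(X_val, y_val))
```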
The training and validation of the ANN were carried out over 500 iterations, producing the accuracy and loss graphs shown in Figure 8, where both curves settle around iteration 300 with a small steady-state error.
The final ANN model was cross-validated, resulting in the confusion matrix shown in Figure 9. From this, the performance is determined: a precision of 94.4%, a recall of 96.9%, an accuracy of 99.1%, a specificity of 99.4%, and an F1-Score of 95.5%, all averaged across the different classes.
Given the promising results of the trained network, it was decided to use the same structure to train a network that classifies each case as at risk or not at risk. The same input information is used, but the desired output is labeled 0 for fall risk cases and 1 for no risk cases, following the thresholds established in Section 2.3: a score from 19 to 24 indicates fall risk, while a score of 25 or above indicates no risk. Figure 10 shows the result of the network trained with cross-validation, obtaining an accuracy of 99.1%, a precision of 95.6%, a recall of 99.5%, an F1-Score of 97.4%, and a specificity of 99.5%.

4. Discussion

In the present work, a system based on ML and computer vision was developed that can estimate the risk of falling in elderly people by estimating a Tinetti-based score to determine quantitatively whether a person is at risk of falling. Thanks to the new gait-analysis-based methodology, the results compare very favorably with other studies that have attempted to assess fall risk. The purpose of the system is to help people over the age of 60 receive help promptly, in addition to speeding up evaluation times and providing quantifiable data. It is not limited by observational analysis, which depends on the skills and experience of the observer [45], nor by the application time of some fall risk assessment methods, such as the Tinetti scale, which can take up to 15 min [46] and may therefore be skipped in a routine consultation.
It should also be highlighted that most AI studies in this area focus on fall detection [47]. Few works address fall prediction or fall risk assessment, and those that do are typically carried out in hospitals using medical record databases [48]. In contrast, this research predicts fall risk in older adults from an evaluation of their gait, which favors early assistance to the patient and helps prevent falls. As shown in Table 5, the performance obtained is equal to, and in some cases better than, that reported in previously published studies.
Fall detection systems are those that identify the moment a fall occurs, while fall risk assessment systems assign a score to patients as a means of prevention. These require different methodologies, which has a clear impact on performance: fall detection systems report accuracies close to 100%, whereas fall risk assessment systems have reported accuracies between 70 and 85%. The proposed system belongs to the fall risk assessment group but stands out with a precision of 94.4% and an accuracy of 99.1%. This is because its methodology is closer to that of fall detection systems: it uses indicators extracted from walking and does not require the patient to perform any exercise or unusual task, which makes its application faster, more accurate, and unobtrusive.
In addition, only 13% of the studies conducted to date involve a population over 50 years of age [47], so applying their findings to older adults could introduce significant differences. This factor was considered of great importance when recruiting participants: the group was made up exclusively of adults over 60 years old, which allows the system to focus on this population.
The promising results of the proposed system allow for future plans, such as adapting the system for continuous use in the clinic, where patients can be assessed outside the test environment, i.e., while moving between areas of the clinic. This would allow staff to monitor the fall risk of each patient passing through the assessment area and to anticipate intervention for those at high risk of falling. However, some limitations must first be addressed. The study population had Tinetti scores between 19 and 28 points, so there are no cases with a high risk of falling; it is therefore necessary to expand the current database by seeking the participation of people over 60 years of age with scores below 19, allowing the software to cover the full range of the Tinetti scale. It is also suggested that the video recording method be modified to avoid the treadmill, so that direct assessment on a firm floor is possible, since few older adults are accustomed to using treadmills. In addition, the software was designed to work in a dedicated test area, so several considerations must be made when placing the system in a common area, where unforeseen events such as obstructions, inconsistent lighting, clothing unsuitable for limb detection, and the presence of multiple people cannot always be anticipated or avoided.

5. Conclusions

An intelligent system was designed and developed using a novel methodology to assess the risk of falling in people over 60 years of age based on gait analysis. For this purpose, a database was created consisting of kinetic and spatiotemporal indicators extracted from pose estimation using YOLOv8 and signal analysis, complemented by the application of Tinetti tests to quantify fall risk with a score established as ground truth. The result is one network trained to estimate the Tinetti score and another to classify subjects as at risk or not, quantifying the fall risk of a person over 60 on a functional scale from 19 to 28 points. The network was optimized through extensive tests with different configurations, increasing and decreasing the number of layers and neurons and varying the activation function. Cross-validation was used to determine the performance of the network, which achieved an accuracy of 99.1%, a recall of 96.9%, a precision of 94.4%, a specificity of 99.4%, and an F1-Score of 95.5%. These results are superior to those of other fall risk assessment systems that rely solely on gait monitoring. This opens the door for future research to test the system's flexibility in uncontrolled environments, such as hallways, living rooms, and gardens, among others. To date, the methodology is presented as novel, being one of the first to use gait analysis to predict fall risk in the elderly.

Author Contributions

Conceptualization, R.C.-M., L.A.M.-H., M.P.G.-H. and I.A.C.-A.; methodology, all authors; software, R.C.-M.; validation, L.A.M.-H., M.P.G.-H. and I.A.C.-A.; formal analysis, R.C.-M., D.L.M.-B. and M.P.G.-H.; research, R.C.-M., D.L.M.-B., M.P.G.-H. and I.A.C.-A.; resources, L.A.M.-H., M.P.G.-H. and I.A.C.-A.; data curation, R.C.-M. and D.L.M.-B.; writing—preparation of the original draft, all authors; drafting—revising and editing, all authors; visualization, R.C.-M., M.P.G.-H. and I.A.C.-A.; supervision, L.A.M.-H., M.P.G.-H. and I.A.C.-A.; project management, L.A.M.-H., M.P.G.-H. and I.A.C.-A. All authors have read and agreed to the published version of the manuscript.

Funding

No external funding was received for this study.

Institutional Review Board Statement

The study was conducted according to the guidelines of the Declaration of Helsinki and approved by the Ethics Committee of the Autonomous University of Queretaro (FOPER-2021-FEN02370 approved 27 August 2021).

Informed Consent Statement

Informed consent was obtained from all subjects included in the study.

Data Availability Statement

The raw data used to support the conclusions presented in this article will be made available upon request.

Acknowledgments

The first author expresses his gratitude to CONAHCYT for the scholarship for postgraduate studies at the doctoral level.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. World Health Organization. Falls. Available online: https://www.who.int/en/news-room/fact-sheets/detail/falls (accessed on 18 March 2024).
  2. Leitón-Espinoza, Z.E.; Silva-Fhon, J.R.; de Lima, F.M.; Fuentes-Neira, W.L.; Villanueva-Benites, M.E.; Partezani-Rodrigues, R.A. Predicción de caídas y caídas recurrentes en adultos mayores que viven en el domicilio. Gerokomos 2022, 33, 212–218. [Google Scholar]
  3. Gale, C.R.; Cooper, C.; Aihie Sayer, A. Prevalence and Risk Factors for Falls in Older Men and Women: The English Longitudinal Study of Ageing. Age Ageing 2016, 45, 789–794. [Google Scholar] [CrossRef] [PubMed]
  4. Ronthal, M. Gait Disorders and Falls in the Elderly. Med. Clin. N. Am. 2019, 103, 203–213. [Google Scholar] [CrossRef]
  5. Sakano, Y.; Murata, S.; Goda, A.; Nakano, H. Factors Influencing the Use of Walking Aids by Frail Elderly People in Senior Day Care Centers. Healthcare 2023, 11, 858. [Google Scholar] [CrossRef]
  6. Montero-Odasso, M.; van der Velde, N.; Martin, F.C.; Petrovic, M.; Tan, M.P.; Ryg, J.; Aguilar-Navarro, S.; Alexander, N.B.; Becker, C.; Blain, H.; et al. World Guidelines for Falls Prevention and Management for Older Adults: A Global Initiative. Age Ageing 2022, 51, afac205. [Google Scholar] [CrossRef] [PubMed]
  7. Balas, V.E.; Roy, S.S.; Sharma, D.; Samui, P. (Eds.) Handbook of Deep Learning Applications; Springer International Publishing: Cham, Switzerland, 2019; ISBN 9783030114787. [Google Scholar]
  8. Manssor, S.A.F.; Sun, S.; Elhassan, M.A.M. Real-Time Human Recognition at Night via Integrated Face and Gait Recognition Technologies. Sensors 2021, 21, 4323. [Google Scholar] [CrossRef]
  9. Nooruddin, S.; Islam, M.M.; Sharna, F.A.; Alhetari, H.; Kabir, M.N. Sensor-Based Fall Detection Systems: A Review. J. Ambient Intell. Humaniz. Comput. 2022, 13, 2735–2751. [Google Scholar] [CrossRef]
  10. Mubashir, M.; Shao, L.; Seed, L. A Survey on Fall Detection: Principles and Approaches. Neurocomputing 2013, 100, 144–152. [Google Scholar] [CrossRef]
  11. Lindemann, U.; Hock, A.; Stuber, M.; Keck, W.; Becker, C. Evaluation of a Fall Detector Based on Accelerometers: A Pilot Study. Med. Biol. Eng. Comput. 2005, 43, 548–551. [Google Scholar] [CrossRef]
  12. Kangas, M.; Konttila, A.; Winblad, I.; Jamsa, T. Determination of simple thresholds for accelerometry-based parameters for fall detection. In Proceedings of the 2007 29th Annual International Conference of the IEEE Engineering in Medicine and Biology Society, Lyon, France, 22–26 August 2007. [Google Scholar]
  13. Aziz, O.; Musngi, M.; Park, E.J.; Mori, G.; Robinovitch, S.N. A Comparison of Accuracy of Fall Detection Algorithms (Threshold-Based vs. Machine Learning) Using Waist-Mounted Tri-Axial Accelerometer Signals from a Comprehensive Set of Falls and Non-Fall Trials. Med. Biol. Eng. Comput. 2017, 55, 45–55. [Google Scholar] [CrossRef]
  14. Wu, Y.; Su, Y.; Hu, Y.; Yu, N.; Feng, R. A Multi-Sensor Fall Detection System Based on Multivariate Statistical Process Analysis. J. Med. Biol. Eng. 2019, 39, 336–351. [Google Scholar] [CrossRef]
  15. Chen, L.; Li, R.; Zhang, H.; Tian, L.; Chen, N. Intelligent Fall Detection Method Based on Accelerometer Data from a Wrist-Worn Smart Watch. Measurement 2019, 140, 215–226. [Google Scholar] [CrossRef]
  16. Mehmood, A.; Nadeem, A.; Ashraf, M.; Alghamdi, T.; Siddiqui, M.S. A Novel Fall Detection Algorithm for Elderly Using SHIMMER Wearable Sensors. Health Technol. 2019, 9, 631–646. [Google Scholar] [CrossRef]
  17. Anitha, G.; Baghavathi Priya, S. Vision Based Real Time Monitoring System for Elderly Fall Event Detection Using Deep Learning. Comput. Syst. Sci. Eng. 2022, 42, 87–103. [Google Scholar] [CrossRef]
  18. Kim, J.-W.; Choi, J.-Y.; Ha, E.-J.; Choi, J.-H. Human Pose Estimation Using MediaPipe Pose and Optimization Method Based on a Humanoid Model. Appl. Sci. 2023, 13, 2700. [Google Scholar] [CrossRef]
  19. Blasco-Garcia, D.J.; Pavon-Pulido, N.; Feliu-Batlle, J.J. Sistema de Evaluación Del Riesgo de Caídas en Mayores Usando Inteligencia Artificial y Cloud Computing; Universidad Politécnica de Cartagena: Cartagena, Spain, 2021. [Google Scholar]
  20. Eichler, N.; Raz, S.; Toledano-Shubi, A.; Livne, D.; Shimshoni, I.; Hel-Or, H. Automatic and Efficient Fall Risk Assessment Based on Machine Learning. Sensors 2022, 22, 1557. [Google Scholar] [CrossRef] [PubMed]
  21. Alwan, M.; Rajendran, P.J.; Kell, S.; Mack, D.; Dalal, S.; Wolfe, M.; Felder, R. A smart and passive floor-vibration based fall detector for elderly. In Proceedings of the 2006 2nd International Conference on Information & Communication Technologies, Damascus, Syria, 24–28 April 2006. [Google Scholar]
  22. Zhuang, X.; Huang, J.; Potamianos, G.; Hasegawa-Johnson, M. Acoustic fall detection using Gaussian mixture models and GMM supervectors. In Proceedings of the 2009 IEEE International Conference on Acoustics, Speech and Signal Processing, Taipei, Taiwan, 19–24 April 2009. [Google Scholar]
  23. Ranakoti, S.; Arora, S.; Chaudhary, S.; Beetan, S.; Sandhu, A.S.; Khandnor, P.; Saini, P. Human Fall Detection System over IMU Sensors Using Triaxial Accelerometer. In Computational Intelligence: Theories, Applications and Future Directions–Volume I; Springer Singapore: Singapore, 2019; pp. 495–507. ISBN 9789811311314. [Google Scholar]
  24. Allali, G.; Ayers, E.I.; Holtzer, R.; Verghese, J. The Role of Postural Instability/Gait Difficulty and Fear of Falling in Predicting Falls in Non-Demented Older Adults. Arch. Gerontol. Geriatr. 2017, 69, 15–20. [Google Scholar] [CrossRef] [PubMed]
  25. Kistler, B.M.; Khubchandani, J.; Jakubowicz, G.; Wilund, K.; Sosnoff, J. Falls and fall-related injuries among US adults aged 65 or older with chronic kidney disease. Prev. Chronic Dis. 2018, 15, E82. [Google Scholar] [CrossRef] [PubMed]
  26. Callis, N. Falls Prevention: Identification of Predictive Fall Risk Factors. Appl. Nurs. Res. 2016, 29, 53–58. [Google Scholar] [CrossRef]
  27. Florence, C.S.; Bergen, G.; Atherly, A.; Burns, E.; Stevens, J.; Drake, C. Medical Costs of Fatal and Nonfatal Falls in Older Adults. J. Am. Geriatr. Soc. 2018, 66, 693–698. [Google Scholar] [CrossRef]
  28. Khandoker, A.H.; Lai, D.T.H.; Begg, R.K.; Palaniswami, M. Wavelet-based feature extraction for support vector machines for screening balance impairments in the elderly. IEEE Trans. Neural Syst. Rehabil. Eng. 2007, 15, 587–597. [Google Scholar] [CrossRef] [PubMed]
  29. Silva, J.; Madureira, J.; Tonelo, C.; Baltazar, D.; Silva, C.; Martins, A.; Alcobia, C.; Sousa, I. Comparing machine learning approaches for fall risk assessment. In Proceedings of the 10th International Joint Conference on Biomedical Engineering Systems and Technologies, Porto, Portugal, 21–23 February 2017; SCITEPRESS–Science and Technology Publications: Lisbon, Portugal, 2017. [Google Scholar]
  30. Drover, D.; Howcroft, J.; Kofman, J.; Lemaire, E. Faller Classification in Older Adults Using Wearable Sensors Based on Turn and Straight-Walking Accelerometer-Based Features. Sensors 2017, 17, 1321. [Google Scholar] [CrossRef]
  31. Gonzalez-Roman, L. Protocolo Adaptado del Test de Tinetti Para Residentes con Deterioro Cognitivo: Técnica Delphi. Scientific Big Data 2018. [Google Scholar]
  32. Perez-Hernandez, M.G.; Velasco-Rodriguez, R.; Maturano-Melgoza, J.A.; Hilerio-Lopez, A.G.; Garcia-Hernandez, M.L.; Garcia-Jimenez, M.A. Deterioro cognitivo y riesgo de caída en adultos mayores institucionalizados en el estado de Colima, México. Rev. Enferm. Inst. Mex. Seguro Soc. 2018, 26, 171–178. [Google Scholar]
  33. Pajares, G.; de la Cruz, J.M. Visión por Computador, Imágenes Digitales y Aplicaciones; Alfaomega: Jackson, MI, USA, 2002. [Google Scholar]
  34. Ultralytics. YOLO Vision. Available online: https://docs.ultralytics.com/es/ (accessed on 11 December 2023).
  35. CVAT. Open Data Annotation Platform. Available online: https://www.cvat.ai (accessed on 11 December 2023).
  36. François Chollet. Keras. Available online: https://keras.io (accessed on 11 December 2023).
  37. Yunas, S.U.; Ozanyan, K.B. Gait activity classification using multi-modality sensor fusion: A deep learning approach. IEEE Sens. J. 2021, 21, 16870–16879. [Google Scholar] [CrossRef]
  38. Saavedra Lozano, D.F.; Castillo Garcia, J.F. System for Analysis of Human Gait Using Inertial Sensors. In Lecture Notes in Electrical Engineering; Springer International Publishing: Cham, Switzerland, 2021; pp. 283–292. ISBN 9783030530204. [Google Scholar]
  39. Alfaro-Salas, K.I.; Espinoza-Sequeira, W.; Alfaro-Vindas, C.; Calvo-Ureña, A. Patrón de marcha normal en adultos mayores costarricenses. Acta Méd. Costarric. 2019, 61, 104–110. [Google Scholar]
  40. Agudelo-Mendoza, A.I.; Briñez-Santamaría, T.J.; Guarín-Urrego, V.; Ruiz-Restrepo, J.P.; Zapata-García, M.C.; Duque-Ramirez, J.R. Descripción de Los Parámetros de Referencia de la Marcha en Adultos de la Población Antioqueña Entre 20 y 54 años de Edad; Universidad CES Facultad Fisioterapia: Medellín, Colombia, 2012. [Google Scholar]
  41. Peña Ayala, L.E.; Gómez Bull, K.G.; Vargas Salgado, M.M.; Ibarra Mejía, G.; Máynez Guaderrama, A.I. Determinación de rangos de movimiento del miembro superior en una muestra de estudiantes universitarios mexicanos. Rev. Cienc. Salud 2018, 16, 64–74. Available online: https://doi.org/10.12804/revistas.urosario.edu.co/revsalud/a.6845 (accessed on 11 December 2023).
  42. Jakhar, D.; Kaur, I. Artificial Intelligence, Machine Learning and Deep Learning: Definitions and Differences. Clin. Exp. Dermatol. 2020, 45, 131–132. [Google Scholar] [CrossRef]
  43. Bari, A.S.M.H.; Gavrilova, M.L. Artificial neural network based gait recognition using Kinect sensor. IEEE Access 2019, 7, 162708–162722. [Google Scholar] [CrossRef]
  44. Alpaydin, E. Introduction to Machine Learning, 3rd ed.; The MIT Press: Cambridge, MA, USA, 2014. [Google Scholar]
  45. Morimoto, T.; Hirata, H.; Kobayashi, T.; Tsukamoto, M.; Yoshihara, T.; Toda, Y.; Mawatari, M. Gait analysis using digital biomarkers including smart shoes in lumbar spinal canal stenosis: A scoping review. Front. Med. 2023, 10, 1302136. [Google Scholar] [CrossRef]
  46. Parveen, H.; Noohu, M.M. Evaluation of Psychometric Properties of Tinetti Performance-Oriented Mobility Assessment Scale in Subjects with Knee Osteoarthritis. Hong Kong Physiother. J. 2017, 36, 25–32. [Google Scholar] [CrossRef]
  47. Usmani, S.; Saboor, A.; Haris, M.; Khan, M.A.; Park, H. Latest Research Trends in Fall Detection and Prevention Using Machine Learning: A Systematic Review. Sensors 2021, 21, 5134. [Google Scholar] [CrossRef] [PubMed]
  48. Mauldin, T.; Canby, M.; Metsis, V.; Ngu, A.; Rivera, C. SmartFall: A Smartwatch-Based Fall Detection System Using Deep Learning. Sensors 2018, 18, 3363. [Google Scholar] [CrossRef] [PubMed]
Figure 1. Methodology general diagram for the development of the fall risk assessment system.
Figure 2. Workspace and vision system setup.
Figure 3. Gait image labeling.
Figure 4. Pose estimation via key point detection derived from frame analysis.
Figure 5. Detection of knee angle during heel strike (black dots) and toe-off (red dots). (A) Right ankle displacement along the x-axis. (B) Right knee angle.
Figure 6. Division of the signals into 5 s windows, visually represented by the signal shown in blue and the divisions in red.
Figure 7. The final structure of the fall risk assessment ANN.
Figure 8. Graphs obtained from the training and validation of the ANN. (A) Loss. (B) Accuracy.
Figure 9. Confusion matrix of the fall risk assessment algorithm.
Figure 10. Validation of fall risk classification using a confusion matrix.
Table 1. Frequencies of the Tinetti Scale scores.

| Score | Frequency | Percentage |
|-------|-----------|------------|
| 19    | 2         | 2.3%       |
| 20    | 2         | 2.3%       |
| 21    | 2         | 2.3%       |
| 23    | 3         | 3.5%       |
| 24    | 6         | 7.0%       |
| 25    | 10        | 11.6%      |
| 26    | 18        | 20.9%      |
| 27    | 22        | 25.6%      |
| 28    | 20        | 23.3%      |
| 29    | 1         | 1.2%       |
Table 2. Results of gait analysis according to fall risk.

| Main Feature                       | Group   | Mean  | Standard Deviation |
|------------------------------------|---------|-------|--------------------|
| Cadence (steps per minute)         | No risk | 97.01 | 24.00              |
|                                    | At risk | 99.81 | 41.20              |
| Period (s)                         | No risk | 1.29  | 0.30               |
|                                    | At risk | 1.44  | 0.70               |
| Stride (cm)                        | No risk | 71.29 | 24.30              |
|                                    | At risk | 64.53 | 43.10              |
| Velocity (m/s)                     | No risk | 58.54 | 20.60              |
|                                    | At risk | 57.09 | 55.50              |
| Max ROM (°)                        | No risk | 48.86 | 9.80               |
|                                    | At risk | 45.99 | 12.70              |
| Min ROM (°)                        | No risk | 3.95  | 3.50               |
|                                    | At risk | 4.47  | 3.10               |
| Knee angle during heel strike (°)  | No risk | 14.38 | 10.70              |
|                                    | At risk | 13.90 | 8.10               |
| Knee angle during toe-off (°)      | No risk | 24.52 | 8.40               |
|                                    | At risk | 22.85 | 10.90              |
Table 3. Performance test for ANN structures. Hidden layer columns give the number of perceptrons per layer.

| Test   | No. of Inputs | Hidden 1 | Hidden 2 | Hidden 3 | Hidden 4 | No. of Outputs | Accuracy (%) | Precision (%) | Recall (%) | F1-Score (%) | Specificity (%) |
|--------|---------------|----------|----------|----------|----------|----------------|--------------|---------------|------------|--------------|-----------------|
| Test 1 | 18            | 50       | ---      | ---      | ---      | 11             | 97.3         | 79.8          | 90.6       | 83.8         | 98.4            |
| Test 2 | 18            | 50       | 32       | ---      | ---      | 11             | 98.5         | 90.2          | 92.9       | 90.3         | 99.0            |
| Test 3 | 18            | 50       | 32       | 21       | ---      | 11             | 98.5         | 94.8          | 92.1       | 93.0         | 99.1            |
| Test 4 | 18            | 100      | 70       | 40       | ---      | 11             | 99.1         | 94.4          | 96.9       | 95.5         | 99.4            |
| Test 5 | 18            | 100      | 75       | 50       | 25       | 11             | 98.8         | 95.1          | 97.8       | 96.2         | 99.3            |
Table 4. Ablation test for the activation function selection.

| Test   | No. of Inputs | Hidden Layers Activation Function | No. of Outputs | Accuracy (%) | Precision (%) | Recall (%) | F1-Score (%) | Specificity (%) |
|--------|---------------|-----------------------------------|----------------|--------------|---------------|------------|--------------|-----------------|
| Test 1 | 18            | Sigmoid                           | 11             | 85.1         | 42.7          | 58.1       | 45.6         | 90.6            |
| Test 2 | 18            | ReLU                              | 11             | 99.1         | 94.4          | 96.9       | 96.5         | 99.4            |
| Test 3 | 18            | Tanh                              | 11             | 98.4         | 93.3          | 93.9       | 93.4         | 99.0            |
| Test 4 | 18            | SELU                              | 11             | 98.6         | 94.8          | 96.1       | 95.5         | 99.2            |
| Test 5 | 18            | Linear                            | 11             | 89.2         | 57.1          | 65.5       | 55.2         | 93.5            |
Table 5. Performance comparison between the proposed system and existing research.

| Work Source | Used Sensor | System Type | Performed Test | Implemented Algorithm | Precision (%) | Recall (%) | Specificity (%) | Accuracy (%) | F1-Score (%) |
|---|---|---|---|---|---|---|---|---|---|
| Drover et al. (2017) [30] | Accelerometer in the lower leg | Fall Risk Assessment | Periodic Fall-occurrence Survey | Random Forest Classifier | --- | 82.00 | 82.00 | 73.40 | --- |
| Silva et al. (2017) [29] | 3-axial accelerometer and 3-axis gyroscope | Fall Risk Assessment | Sit-to-stand Test and "modified" Stage Balance Test | Naïve Bayes Classifier | 74.58 | 71.19 | --- | 84.82 | --- |
| Ranakoti et al. (2019) [23] | 3-axial Accelerometer | Fall Detection | Fall Simulation | Support Vector Machine | 78.20 | 77.70 | 78.30 | 78.04 | 77.9 |
| Chen et al. (2019) [15] | Wrist Accelerometer | Fall Detection | Fall Simulation | Ensemble Stacked AutoEncoders | --- | 96.09 | 98.92 | --- | --- |
| Mehmood et al. (2019) [16] | Waist Accelerometer | Fall Detection | Fall Simulation | Mahalanobis Distance-Based Threshold | --- | --- | --- | 96.0 | --- |
| Anitha and Priya (2022) [17] | Camera | Fall Detection | Fall Simulation | Stack AutoEncoder | 99.97 | 100.00 | 99.88 | 99.92 | 99.97 |
| Eichler et al. (2022) [20] | Depth Camera | Fall Risk Assessment | Berg Balance Scale | Support Vector Machine | 75.16 | 72.45 | 86.77 | 75.16 | 73.59 |
| Proposed system | Camera | Fall Risk Assessment | Tinetti Gait and Balance Test | Artificial Neural Network | 94.40 | 96.90 | 99.40 | 99.10 | 95.50 |