Article

A Novel Activity Recognition System for Alternative Control Strategies of a Lower Limb Rehabilitation Robot

1 School of Mechatronical Engineering, Beijing Institute of Technology, Beijing 100081, China
2 School of Design, Fujian University of Technology, Fuzhou 350118, China
3 State Key Laboratory of Robotics and System, Harbin Institute of Technology, Harbin 150001, China
4 School of Mechanical & Automotive Engineering, Fujian University of Technology, Fuzhou 350118, China
* Author to whom correspondence should be addressed.
Appl. Sci. 2019, 9(19), 3986; https://doi.org/10.3390/app9193986
Submission received: 31 August 2019 / Revised: 18 September 2019 / Accepted: 18 September 2019 / Published: 23 September 2019
(This article belongs to the Special Issue Artificial Intelligence for Smart Systems)

Abstract:
Robot-aided training strategies that allow functional, assist-as-needed, or challenging training have been widely explored. Accurate activity recognition is the basis for implementing such alternative training strategies. However, several obstacles to accurate recognition exist. First, some rehabilitation activities, such as abnormal gaits and falls, are not yet fully understood; thus, there are no standardized features for identifying them. Second, during activity identification, it is difficult to reasonably balance sensitivity and specificity when setting thresholds. Therefore, we propose a multisensor fusion system and a two-stage activity recognition classifier. The multisensor system integrates explicit information, such as kinematics and spatial distribution, with implicit information, such as kinetics and pulse data. Both the explicit and implicit information are analyzed in one discriminant function to obtain a detailed and accurate recognition result, on which alternative training strategies can then be implemented. Finally, we conducted experiments to verify the feasibility and efficiency of the multisensor fusion system. The experimental results show that the proposed fusion system achieves an accuracy of 99.37% and prejudges falls approximately 205 milliseconds earlier than the response time of single-sensor systems. Moreover, the proposed system also identifies fall directions and abnormal gait types.

1. Introduction

Rehabilitation robots have received considerable attention in recent years due to the growing demand for rehabilitation and the shortage of rehabilitation physicians. According to statistics, one out of every six people in the world may suffer a stroke [1], making stroke a global problem that seriously endangers human health. In China, approximately 2.5 million new brain injuries and spinal cord injuries occur each year [2]. These patients need rehabilitation treatment after surgery to restore normal function, but rehabilitation medical resources are scarce: approximately 350,000 rehabilitation technicians are urgently needed, yet fewer than 20,000 professionals currently work in rehabilitation. In addition, traditional physical therapy is labor intensive and repetitive [3]; rehabilitation robots have therefore emerged.
A rehabilitation robot is a device that assists rehabilitation therapists in repetitive rehabilitation training tasks based on neurological rehabilitation mechanisms. In addition to undertaking intensive, repetitive rehabilitation training, modern rehabilitation robots should also be able to recognize the patient’s motor intent and physical condition. Given such recognition abilities, a high-level rehabilitation strategy could be achieved. In recent years, robot-aided training strategies that support functional, assist-as-needed, or challenging training regimens have been widely explored. These strategies better promote motor plasticity. The primary goal of alternative robot-aided training strategies is to accurately recognize the various patient states in real time, providing an auxiliary judgment basis for the top-level control system and the necessary control input for the bottom-level controller. These states include the patient’s motor intent and tolerance degree, the physical interaction (i.e., asynchronous error) between the patient and the robot, the patient’s abnormal motor pattern (e.g., abnormal gait), the tasks that must be completed for rehabilitation assessment [4], and patient falls. Among these, fall detection is critical because there is a high probability of falls during rehabilitation, and these falls can cause severe injuries. Statistically, the probability of a stroke survivor falling within six months after surgery is 25%–37%, and subsequently, the fall probability is 23%–50% [5,6,7,8]. Accidental falls and fall-related injuries, such as hip fractures, often lead to severe disabilities and affect the patient’s overall health [9]. Therefore, timely and effective fall prediction and protection during rehabilitation training are beneficial for stroke patients.
The abnormal gait (patient-specific gait) [10] and fall detection methods currently used in rehabilitation training can be roughly divided into three categories: image sensing, environmental sensors, and wearable sensors. Image-based detection works mainly through cameras installed in the environment and uses image-processing algorithms to detect whether a patient has fallen [11]. This approach does not require patients to wear testing equipment; however, the image-processing algorithms are sophisticated and easily affected by light and noise, which degrades accuracy, and the approach applies only to fixed locations, offering little applicability to a wider range of activities. Environmental methods generally install infrared or acoustic sensors [12,13] to detect whether a fall occurs in a specific environment. This approach is inexpensive, but it is limited to the environments where the sensors are installed. Wearable sensor methods [14,15,16,17,18,19] mainly analyze acceleration data from the limbs and then use an algorithm to identify falls. Wearable sensors have been widely studied and applied because of their small volume and low cost.
In terms of detection algorithms, the methods used in fall and abnormal behavior detection are either threshold based or based on machine learning (pattern matching) classifiers, such as the support vector machine (SVM) [20,21,22], k-nearest neighbor (KNN) [14], backpropagation (BP) neural network, and single-class minimax probability machine (SCMPM) [23]. These methods usually require a threshold value to be set manually or through a semisupervised process to distinguish abnormal from standard samples. Threshold-based systems are simple to implement and computationally inexpensive, but they do not provide a good trade-off between false positives (if the threshold is too low, the system generates false alarms; specificity <100%) and false negatives (if the threshold is too high, the system misses real alarms; sensitivity <100%). The reason is that soft falls, caused by paralysis or fatigue of the gluteus medius, exhibit kinematic peaks similar to those of normal gaits, while small perturbations or a “wiggly duck gait” exhibit kinematic peaks similar to those of falls. In other words, an extrinsic fall [9,24] can be identified by a high threshold, because extrinsic falls caused by trips and slips are often accompanied by sharp changes in acceleration. In contrast, intrinsic falls caused by dizziness or impaired balance or gait [9,24] usually do not exhibit a sharp change in acceleration and can easily go undetected. Moreover, these methods cannot perform detailed classifications of falls or abnormal gaits. Existing abnormal gait and fall detection research also faces other problems, such as missed and false alarms [25,26,27] and an inadequate understanding of falls and abnormal gait [28,29]. Currently, no standardized methodology exists in terms of sensor types, feature engineering [24], or standard classification samples for addressing these problems.
Although many studies on activity recognition and fall detection exist, they tend to use a single sensor (usually a kinematic sensor) for pattern recognition, largely for cost reasons. The potential problem of this approach is that a single-dimension input can hardly reflect the gait characteristics fully; thus, there is some risk in using a single type of sensor for pattern recognition. In addition, sensors are easily disturbed by noise or unexpected situations, which inevitably introduces uncertainty, and under such interference, it is difficult to achieve accurate determination and identification. Kinematic features such as falling speed, stride time, acceleration coordinates, posture information, inactivity periods, and angular velocity vary by individual, especially across age groups. Moreover, most of these studies are intended to raise alarms only after a fall occurs. Some studies [19] determined that a fall had occurred when the acceleration threshold was triggered and the wearer subsequently remained in a quiet position for a short duration. Such systems improve the safety of elderly people, but they are not applicable during rehabilitation training: when a patient is at risk of falling, the robot cannot make a timely judgment, and if the robot continues to operate, it is likely to cause serious harm to the patient. Therefore, from the perspective of fall injury prevention, timeliness is as crucial as accuracy [30]. During identification, a faster yet accurate judgment allows better subsequent measures to be implemented. However, kinematic-sensor-based approaches often require high thresholds to achieve high accuracy, and high thresholds correspond to later predictions. To make accurate judgments in the minimum time, it is worth studying sensor selection and complementary features to compensate for the defects of kinematic sensors in recognizing complex activities.
Given these problems, we investigated sensor systems that can be applied to bodyweight-supported rehabilitation training and conducted basic research on criteria for activity recognition in patients with hemiplegia. To accurately recognize various patient and robot states in real time, we propose a multisensor fusion system that fuses three dimensions: kinematics, kinetics, and spatial distribution. Our system distinguishes between periods of training and rest, detects events such as normal and abnormal gaits, and achieves a reasonable degree of accuracy. In a previous study, we proposed a two-stage activity recognition classifier in which a probabilistic neural network (PNN) is first trained to identify falls in real time, after which the data are transferred to a second stage for detailed classification.
Based on this recognition system, our previous paper also proposed an adaptive neural sliding-mode controller (ANSMC) that provides a continuum of either assistive or challenging rehabilitation training. This controller was applied to a mobile chaperone lower limb rehabilitation-training robot (MCLLRTR). The main contributions of this paper are summarized as follows:
  • We propose a multisensor fusion system that provides accurate activity and motor capability recognition, fall prediction, and physical fitness assessment during the rehabilitation training process. It uses the multidimensional features provided by multiple sensors with differing properties to achieve a reasonable degree of identification accuracy. To overcome the shortcomings of the kinematic sensor, we propose multisensor activity identification based on kinematics, kinetics, and spatial distribution.
  • We introduce a two-stage activity recognition classifier in which a probabilistic neural network (PNN) is first trained on regular activities to filter out falls. Then, the fall data are input to an SVM-RBF-KNN algorithm, which classifies specific falls or an abnormal gait in a supervised manner. The first stage predicts falls as early as possible, and the second stage determines fall details and abnormal gait types. These detailed classifications are provided to the therapist for functional assessment and the subsequent formulation of the therapeutic schedule.
  • We address the trade-off between sensitivity and specificity. By selecting appropriate sensors whose features compensate for each other’s shortcomings, we achieve both high sensitivity and reasonable specificity.
Additionally, to provide proper safety protection for rehabilitation patients, the motor locking commands are triggered when the algorithm predicts falls, and the bodyweight support sling provides protection.
The rest of this article is organized as follows. Section 2 introduces the methods and system setup. Section 3 presents the experimental results. Finally, conclusions are summarized in Section 4.

2. Methods and System Setup

We cooperated with the China Rehabilitation Research Center (CRRC) to develop a mobile chaperone lower-limb rehabilitation-training robot (MCLLRTR). The goal was to integrate active assistance and challenge-based training control strategies into the robot’s control system. The robot is intended to assist patients recovering from poststroke hemiplegia in regaining the ability to move freely beyond the synergy pattern. Additionally, the robot can provide patients with real-time protection and provide doctors with a physical fitness assessment, which supports subsequent diagnosis. Therapists can select from training modes and parameters to adjust the challenge level, providing scalable therapy catered to each patient’s needs. For example, therapists can steer the MCLLRTR at any time and support gait initiation or nudge patients in a specific direction.

2.1. System Introduction

The MCLLRTR activity recognition process is shown in Figure 1. First, the height and weight of the patient are measured, and the therapist makes a functional assessment of the patient. The essential data are then input into MATLAB and compared with the standard model, and a detection threshold is specified for each patient offline. In step two, zero-point correction of the tension sensors is performed, the abnormal-gait trigger positions of the photoelectric sensors are set, and preparations are made to begin training. In the third step, the training data are transmitted to the STM32 main control board after Kalman filtering and analog/digital (A/D) conversion. Based on the sample features and the offline thresholds from the first step, the proposed first-stage classifier (PNN) determines whether the data indicate a ‘fall’ or a ‘normal gait’. If a fall is predicted, the robot initiates safety protection measures; otherwise, rehabilitation training continues. The rehabilitation control strategies adaptively adjust based on the patient’s performance, and all data are transmitted to the host computer through the wireless serial port in real time. As the final step, the data are uploaded for detailed analysis and classification during training, training reports are generated, new patient data are sent to the database, the neural network training parameters are updated, and the fall determination thresholds are updated.

2.2. Proposed Methods

A block diagram of the MCLLRTR’s control system is shown in Figure 2. First, the control architecture includes a local sensing and recognition system in which local variables are adopted for state detection and activity pattern recognition. Second, the asynchronous force and pulse are employed as inputs to identify the tolerance degree of the patients and training task completion differences in the mid-level control; we designed a transition law suitable for active assistance, challenge-based, or regular training. Third, we designed an adaptive RBF neural network sliding mode (ARNNSM) controller that executes the selected control strategy. Furthermore, the patient’s training state, obtained from gait, force, and kinematics data, is monitored by the photoelectric, tension, and acceleration sensors, providing security protection as accurately and immediately as possible. We developed a two-stage activity recognition neural network algorithm to perform activity recognition and fall detection using the collected data. We hypothesized that the new algorithm would reduce the time delay between a detected event and an actual fall such that it could guarantee enough time to institute fall protection. In the low-level motion controller, the MCLLRTR controls its velocity and trajectory based on either the potentiometer signal of the control handle or the specific value set by the rehabilitation therapist. When the detection system predicts a fall, the MCLLRTR brakes immediately and deploys safety protection for the patient through a bodyweight support sling.

2.3. Sensor Locations

It is challenging to distinguish diverse falls and abnormal motor patterns from routine activities because falls share many features with the activities of daily life (ADLs). Thus, it is difficult to achieve a good trade-off between false positives (when the threshold value is too low, the system may generate false alarms; specificity <100%) and false negatives (when the threshold value is too high, the system may miss real alarms; sensitivity <100%) when setting thresholds based on a single sensor. To solve this problem, we propose a multisensor system (shown in Figure 3) that obtains multiple features for classifying activities along different dimensions. The system board is built around an STM32 microprocessor powered by a 5-V battery. The photoelectric sensors collect the spatial distribution of the gait data, the tension sensors collect the interactive forces between the patient and the rehabilitation robot, and the accelerometer collects the patient’s kinematic features.
As shown in Figure 3, the multisensor system includes an accelerometer, a pulse sensor, four tension sensors, and seven photoelectric sensors (one of which detects speed). The accelerometer monitors the patient’s spatial motion in the kinematic dimension, the pulse sensor detects the physical load, the tension sensors monitor the motion homogeneity between the robot and the patient in the kinetics dimension, and the photoelectric sensors monitor the gait through its spatial distribution characteristics. At the same time, to meet the needs of rehabilitation assessment, we introduce asynchronous deviation and pulse monitoring based on the multisensor system. The tension sensors mainly capture the asynchronous deviation, and the tolerance degree of the patient is obtained mainly by synthesizing the pulse sensor data and the idle duration. The activity recognition system also provides data support for dynamic control.
The spatial distributions of normal and abnormal gaits (such as the wiggly duck gait) differ. Based on this characteristic, OMRON (E3JK-RAMA, Japan) photoelectric sensors are applied to this type of detection. We located the photoelectric sensors below the ankle and above the knee in the vertical direction to detect whether the lower leg moves beyond the normal gait range or the healthy gait boundary in the horizontal direction (as shown in Figure 1 and Figure 3). These sensors detect whether the foot’s landing point is beyond the range of a normal gait. Each photoelectric sensor includes a transmitter, a receiver, and a detection circuit. The transmitter continuously emits light, which is either received by the receiver or blocked by the patient. The receiver converts the received optical signal into an electric signal, and the detection circuit filters the electrical signals received by the receiver; signal impurities are removed, and the valid electric signals are transmitted to the main control board.
The spatial distribution of a normal gait is shown in Figure 4 (the blue area). Six photoelectric sensors were installed on the left and front side of the rehabilitation robot. The second set of photoelectric sensors was installed in the middle position on the left side of the robot, and is used to detect the number of training steps. The first and third sets of photoelectric sensors were installed on the front and back sides of the second set of photoelectric sensors, and are used to determine whether the patient’s gait exhibits a normal travel direction. The fourth, fifth, and sixth sets of photoelectric sensors were installed on the front side of the robot, and are used to detect whether the patient’s gait is normal in the lateral direction. The second set of photoelectric sensors is triggered when the patient walks and counts the normal gait steps. The other sets of photoelectric sensors are triggered when an abnormal gait occurs. The locations of the photoelectric sensors are flexible and can be adjusted according to the rehabilitation training needs.
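The trigger-to-meaning mapping described above can be condensed into a small dispatch function. This is a sketch under our own naming conventions, not the robot's actual firmware interface; debouncing and multi-sensor fusion are omitted.

```python
def interpret_photoelectric(triggered_set, step_count):
    """Interpret one photoelectric sensor trigger under the Figure 4 layout.

    Illustrative mapping (set numbering follows the text): set 2 counts normal
    steps, sets 1 and 3 flag travel-direction deviations, and sets 4-6 flag
    lateral deviations. Returns (event label, updated step count).
    """
    if triggered_set == 2:
        return "normal step", step_count + 1       # step counter
    if triggered_set in (1, 3):
        return "abnormal gait: travel direction", step_count
    if triggered_set in (4, 5, 6):
        return "abnormal gait: lateral", step_count
    return "unknown trigger", step_count
```

Because each sensor set encodes one geometric boundary of the normal gait area, a single trigger suffices to label the deviation direction without any numeric threshold.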
The tension sensor (micro tension sensor, ZNLBS-100KG, China) detects the interactive force and the consistency of movement between the robot and the patient. The sensor converts force into a microvoltage, which an amplifier then scales to between 0 and 5 volts. Four tension sensors were installed on the joints between the bodyweight support sling and the robot in four diagonal directions (as shown in Figure 1 and Figure 3). An abnormal gait or fall causes tension changes between the robot and the patient; thus, classifications can be made based on these changes. The tension sensors installed between the bodyweight support sling and the main robot frame provide vertical and horizontal interaction force data in real time.
The kinematic sensor chosen is a micro-inertial measurement unit (μ-IMU, a three-axis acceleration sensor that is shown in Figure 3). To acquire real-time motion data, the accelerometer is installed on the patient’s upper trunk, i.e., below the neck and above the waist, as this region is the most suitable for distinguishing falls from other movements during rehabilitation training [18].
A pulse sensor (FengQi, DC 3.3 V, China) was attached to the patient’s ring finger to detect the tolerance degree of the patients (as shown in Figure 3).

2.4. Feature Extraction and Two-Stage Activity Recognition Method

To predict falls as early as possible to ensure that subsequent protection steps can be taken and for detailed functional assessment needs, we proposed a two-stage activity recognition method in which a probabilistic neural network (PNN) is first trained on routine activities. This network identifies potential falls and passes their data to a supervised SVM-RBF-KNN algorithm that can classify falls and abnormal gaits.
The feature extraction procedure extracts one element (x_i) from each of three sets of sensing data (photoelectric sensor data P_i = [P1, P2, P3, P4, P5], tension sensor data T_i = [T1, T2, T3, T4], and accelerometer data A_i = [A1, A2, A3]) during a sampling period t_s. The three elements then make up the input data X_i that describe the features of patient activity during rehabilitation training. The sensor sampling period in our experiment is t_s = 20 ms.
Since accelerometer values change dramatically during falls or abnormal gaits, they are often used to predict abnormal conditions. Since the angle of acceleration is related to the sensor’s orientation when worn, it is generally not used for prediction. To eliminate the accelerometer gravity effect and avoid complex threshold comparisons along every direction (A_x, A_y, and A_z), we employed the signal magnitude vector (SMV) algorithm [31], the most frequently employed algorithm for distinguishing motion, to discriminate abnormal gaits and falls from a normal gait:
A_svm (signal magnitude vector) = √(A_x² + A_y² + A_z²)
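The SMV computation is a one-liner; the sketch below assumes samples expressed in units of g and says nothing about the paper's firmware implementation.

```python
import math

def smv(ax, ay, az):
    """Signal magnitude vector of one accelerometer sample (units: g)."""
    return math.sqrt(ax * ax + ay * ay + az * az)

# At rest, gravity contributes roughly 1 g along some direction, so the SMV
# stays near 1 regardless of how the sensor is oriented on the trunk; sharp
# deviations from this baseline mark candidate falls or abnormal gaits.
```

Because the SMV is rotation invariant, a single threshold on it replaces three orientation-dependent per-axis thresholds.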
To predict falls, the value of the tension sensor is significant; in particular, it enables a sensitive determination of a soft fall. The combination of the photoelectric sensor, the acceleration, and the pull force is also valid for abnormal gait determination.
The tension sensors, which are located between the robot and the patient’s waist in different directions, reflect the force characteristics between the robot and the patient, such as the force direction and the uniformity and continuity of the target movement. We adopt d(T1, T4) and d(T3, T2) to represent the forces in the left and right directions, respectively, and d(T3, T1) and d(T4, T2) to represent the frontward and backward force directions, respectively. We adopt T_svm to predict falls.
d(T1, T4) = T1 − T4
d(T3, T2) = T3 − T2
d(T3, T1) = T3 − T1
d(T4, T2) = T4 − T2
T_svm = T1 + T2 + T3 + T4
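As a sketch, the five tension features above can be computed from one sample of the four sensors; the index-to-direction mapping follows the equations, while the dictionary layout is our own illustration.

```python
def tension_features(t1, t2, t3, t4):
    """Directional force differences and total sling tension computed from
    one sample of the four diagonally mounted tension sensors (Figure 3)."""
    return {
        "d(T1,T4)": t1 - t4,        # left direction
        "d(T3,T2)": t3 - t2,        # right direction
        "d(T3,T1)": t3 - t1,        # frontward
        "d(T4,T2)": t4 - t2,        # backward
        "Tsvm": t1 + t2 + t3 + t4,  # total tension, used for fall prediction
    }
```

During a soft fall the kinematic peak is weak, but the patient's weight shifts onto the sling, so Tsvm rises sharply even when the SMV does not.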

2.5. Asynchronous Deviation

To determine the completion of rehabilitation training and the tolerance degree of the patient in real time, we need to detect the consistency and continuity between actual and target movements. Therefore, we introduce a state variable to represent the asynchronous difference. Ideally, the patient and the robot move synchronously without mutual interference, but in most cases, it is difficult for the patients to synchronize with the predefined training target set on the robot because of their gait disorders. We defined this type of interference as an external disturbance in the dynamic model and employed asynchronous deviation to indicate the interference degree. The asynchronous deviation in kinematics can be obtained through the four tension sensors. These tension sensors were installed on the joints between the bodyweight support sling device and the robot in four diagonal positions (as shown in Figure 3).
Asynchronous deviation is disadvantageous for dynamic control, but it can be used to enhance the training effect.
The asynchronous deviation algorithm is detailed in our previous paper; the present paper mainly describes the activity recognition process.

2.6. Activity Recognition Method

Activity recognition is performed in two stages (as shown in Figure 5). In Stage 1, the probabilistic neural network (PNN) is first trained on routine activities to identify potential falls. This stage of the classifier is mainly intended to perform rapid prejudgments of possible falls to allow sufficient time to protect patients; therefore, timeliness is the most important factor. As mentioned above, we selected the acceleration magnitude A_svm and the total tension T_svm to predict falls.
The PNN is a feed-forward neural network based on the Bayesian criterion that uses a Parzen window to estimate the probability distribution function, and it exhibits good pattern recognition accuracy. Its most important advantage is that training is easy and nearly instantaneous. Another advantage is that the user needs to set only one parameter (the so-called smoothing parameter). The model can achieve reasonable accuracy even with small numbers of training samples. Moreover, the network is tolerant of erroneous data and operates entirely in parallel without requiring feedback from the individual neurons to the inputs [32].
The PNN consists of nodes allocated in three layers after the inputs:
Pattern layer: In this layer, one pattern node exists for each training example. Each pattern node forms a product of the weight vector and the given example for classification, where the weights entering a node stem from a particular example. Next, the vector is passed to the summation layer.
exp[(xᵀ ω_ki − 1) / σ²]
Summation layer: This layer sums the outputs of the pattern nodes belonging to the same class:
Σ_{i=1}^{N_k} exp[(xᵀ ω_ki − 1) / σ²]
Output layer: The output nodes are binary neurons that produce the classification decision:
Σ_{i=1}^{N_k} exp[(xᵀ ω_ki − 1) / σ²] > Σ_{i=1}^{N_j} exp[(xᵀ ω_ji − 1) / σ²], for all j ≠ k
where x is the input vector, N_k is the number of training samples in class k, ω_ki denotes the i-th training sample of class k, and σ is a smoothing factor (the standard deviation of the Gaussian functions). An appropriate value of σ is chosen through experimentation.
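The three layers can be condensed into a few lines of code. The sketch below assumes unit-normalized feature vectors (required for the exp[(xᵀω − 1)/σ²] form of the pattern node) and a hand-picked σ; the paper tunes σ experimentally and per patient, so all values here are illustrative.

```python
import numpy as np

def pnn_classify(x, patterns, labels, sigma=0.1):
    """Minimal PNN: pattern layer -> summation layer -> output layer.

    patterns: (N, d) array of training examples; labels: length-N class ids.
    Each pattern node computes exp[(x^T w - 1) / sigma^2]; the summation
    layer adds activations per class; the output layer picks the maximum.
    """
    x = x / np.linalg.norm(x)                              # normalize input
    w = patterns / np.linalg.norm(patterns, axis=1, keepdims=True)
    act = np.exp((w @ x - 1.0) / sigma ** 2)               # pattern layer
    classes = sorted(set(labels))
    sums = [act[[i for i, l in enumerate(labels) if l == c]].sum()
            for c in classes]                              # summation layer
    return classes[int(np.argmax(sums))]                   # output layer
```

With a small σ, the network approaches nearest-neighbor behavior; a larger σ smooths the estimated class densities, which is the only knob the therapist-side calibration has to set.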
In Stage 2, the SVM-RBF-KNN classifier is trained to classify specific falls or an abnormal gait in a supervised manner. The goal of this classifier is to provide the rehabilitation physician with a detailed gait status for functional assessment. Supplying a classification that includes as much detail as possible is the most important factor.
During the rehabilitation process, activity recognition is critical because, in addition to fall prediction, we need to identify and record gait situations so that therapists fully understand the rehabilitation process and so that essential data are available for designing reasonable, scientific rehabilitation therapy schedules. Traditional machine learning techniques, such as the perceptron, the backpropagation (BP) neural network, and the one-class support vector machine (OC-SVM), have limitations in modeling human behavior because they lack the reference points of human decision-making [33]. They discriminate a target class from other data by calculating the optimal hyperplane with a bias term such that all training data patterns are classified to the target class [34]. However, for many real-life datasets, such a hyperplane may not exist; thus, the input space must be mapped to a new feature space with a specific structure (as in the OC-SVM). Therefore, we propose a multiple classifier system that combines the SVM, radial basis function (RBF), and KNN classifiers into an ensemble, which significantly improves fall detection and activity recognition.
The k-nearest neighbor (KNN) algorithm is a simple and effective technique for object classification based on finding the closest training examples in the feature space [35]. The radial basis function (RBF) network has been widely applied in approximation and classification tasks because of its structural simplicity; its learning process converges quickly, and no local minimum problem occurs. Therefore, we propose a fusion algorithm (SVM-RBF-KNN) that extends the SVM, KNN, and RBF to recognize and classify complex rehabilitation activities. We set up seven cluster centers based on evaluation criteria for abnormal gaits, normal gaits, and falls provided by a rehabilitation therapist, combined with real data. Let x_1, x_2, …, x_n ∈ R^d be the n training samples and C_i be the i-th cluster center; the representation y_i of the hidden layer, which maps the input x_k into y_i, is calculated using the following nonlinear function:
y_i = Φ(X_k, X_i) = ω G(X_k, X_i) = ω G(‖X_k − X_i‖) = ω exp(−‖X_k − X_i‖² / (2σ²))
σ = d_max / √(2n)
where σ is a parameter scaling the Euclidean distance between Φ(X_k) and Φ(X_i), and d_max is the maximum distance between the selected centers, X_i ∈ C_i, where C_i denotes the neighborhood of x_i in the feature space F. To obtain reliable cluster centers, we adopted a supervised learning algorithm: the RBF center C_i, expansion constant σ_j, and output weight ω_j are trained through error-corrected learning to obtain their optimal values. The gradient descent method is used with the objective function defined as follows:
e_k = d_k − Σ_{j=1}^{I} ω_j G(‖X_k − t_j‖_{C_j})
E = (1/2) Σ_{k=1}^{n} e_k²
Δt_j = −η ∂E/∂t_j
Δω_j = −η ∂E/∂ω_j
Δσ_j = −η ∂E/∂σ_j
The algorithm iterates until the cluster centers are obtained.
The weight ω can be obtained as follows:
∂E(n)/∂ω_i(n) = −Σ_{k=1}^{N} e_k(n) G(‖X_k − t_i‖_{C_i})
ω_i(n+1) = ω_i(n) − η₁ ∂E(n)/∂ω_i(n),  i = 1, 2, …, I
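As a concrete illustration (not the authors' code), the Gaussian hidden-layer activation, the σ heuristic, and one gradient-descent step on the output weights can be sketched in Python; the function names and the learning rate are our own assumptions:

```python
import numpy as np

def rbf_activation(x_k, centers, sigma):
    """Gaussian hidden-layer outputs G(||x_k - c_i||) for one input sample.
    x_k: (d,) input vector; centers: (I, d) cluster centers; sigma: width."""
    dists_sq = np.sum((centers - x_k) ** 2, axis=1)
    return np.exp(-dists_sq / (2.0 * sigma ** 2))

def heuristic_sigma(centers):
    """sigma = d_max / sqrt(2 n), with d_max the largest inter-center distance."""
    n = len(centers)
    d_max = max(np.linalg.norm(a - b) for a in centers for b in centers)
    return d_max / np.sqrt(2 * n)

def weight_step(w, errors, activations, eta):
    """One step of w(n+1) = w(n) - eta * dE/dw for E = 1/2 * sum(e_k^2).
    Since e_k = d_k - sum_i w_i G_ki, dE/dw_i = -sum_k e_k G_ki.
    errors: (N,) residuals e_k; activations: (N, I) hidden outputs G_ki."""
    grad = -activations.T @ errors
    return w - eta * grad
```

With two centers, a sample sitting exactly on a center produces activation 1 for that center and a near-zero activation for the other, which is what makes the nearest-center interpretation of the hidden layer work.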
Algorithm 1, the SVM-RBF-KNN procedure for acquiring the cluster centers, is shown below:
Algorithm 1: SVM-RBF-KNN algorithm
  Input samples: {photoelectric sensor signals: P1, P3, P4, P6}; {tension sensor signals: Tsvm, d(T1, T4), d(T2, T3), d(T1, T3), d(T4, T2)}; {IMU sensor signal: Asvm}.
  Transform the IMU sensor signal into its signal magnitude vector:
    Asvm = √(Ax² + Ay² + Az²)
  Initialization: randomly select eight cluster centers [X1, X2, …, Xi];
  for i = 1 to maximum iterations do
    Forward propagation: for each sample k = 1 to N, compute the output and the error e_k = d_k − y_i;
    Compute the sum error E = (1/2) Σ_{k=1}^{n} e_k²;
    if E ≤ 0.00001 then break;
    Backward propagation: update the parameters
      Δt_j = −η ∂E/∂t_j;  Δω_j = −η ∂E/∂ω_j;  Δσ_j = −η ∂E/∂σ_j;
  end for
  Generate the clustering centers [C1, C2, …, Ci] and the corresponding weights.
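Under the same illustrative assumptions, the iterative loop of Algorithm 1 (forward pass, sum error, early stop at E ≤ 10⁻⁵, gradient update) might look like the following sketch. For brevity it adapts only the output weights; the paper also adapts the centers tⱼ and widths σⱼ by the same gradient rule:

```python
import numpy as np

def train_rbf(X, d, centers, sigma, eta=0.01, max_iter=1000, tol=1e-5):
    """Hedged sketch of the supervised RBF training loop in Algorithm 1.
    X: (N, dim) samples; d: (N,) desired outputs; centers: (I, dim) initial
    cluster centers; sigma: common RBF width. Returns weights and final error."""
    w = np.zeros(len(centers))
    for _ in range(max_iter):
        # forward propagation: hidden activations G_ki and output errors e_k
        G = np.exp(-np.sum((X[:, None, :] - centers[None, :, :]) ** 2, axis=2)
                   / (2 * sigma ** 2))               # shape (N, I)
        e = d - G @ w                                 # e_k = d_k - y_k
        E = 0.5 * np.sum(e ** 2)
        if E <= tol:                                  # early stop, as in Algorithm 1
            break
        w += eta * (G.T @ e)                          # gradient-descent weight update
    return w, E
```

Placing each training sample on its own center makes the activation matrix nearly the identity, so the weights converge to the desired outputs within a handful of iterations.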

3. Experiments

To verify the feasibility and efficiency of our proposed system, we designed fall prediction experiments that were conducted using the MCLLRTR. A total of 54 young, healthy student volunteers (including 36 males and 18 females; 22 ± 4 years old; weight: 64 ± 20 kg) performed simulated falls and abnormal gaits as well as some other training activities (the numbers of falls, abnormal gaits, and normal gait events are presented in Table 1). All volunteers provided informed consent before the simulated experiments, and were paid for their participation. Exclusion criteria included severe hypertension (BP >180/100 mmHg) and severe cardiopulmonary disease. Each participant was informed about how to operate the rehabilitation robot, and each participant completed the entire experiment. They performed various fall-type actions and rehabilitation tasks using the protective robot.
Figure 6 shows the simulation of different activities in the training process, such as (a) normal gait, (b1–b2) abnormal gait, and (c1–c5) falls. Accelerations along the x, y, and z axes are denoted as A_x, A_y, and A_z, respectively, and their norm (the resultant acceleration) is A_svm = √(A_x² + A_y² + A_z²). The tension signal is stored on the control board using an A/D converter with a sampling interval of Δt = 20 ms. The photodiode outputs are digital signals, either “0” or “1”, which reflect the foot landing-point distribution.
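The resultant acceleration used throughout the experiments is simply the Euclidean norm of the three axis readings; a minimal helper (the function name is ours) is:

```python
import math

def signal_magnitude_vector(ax, ay, az):
    """A_svm = sqrt(Ax^2 + Ay^2 + Az^2), the norm of one triaxial reading."""
    return math.sqrt(ax * ax + ay * ay + az * az)
```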
After the experiment, the data must be processed. First, the two-stage activity recognition classifier mentioned above needs to be trained; then, the trained classifier can be used to classify the rehabilitation training activities. During neural network classifier training, the output needs to be labeled. After training, the classification threshold is determined.
When setting the threshold, the inertial sensor and tension sensor values were transformed by SVM kernel functions into Asvm and Tsvm and plotted in MATLAB (as shown in Figure 6). When setting the thresholds of the first-level PNN network, to reduce the number of calculations, we used only the accelerometer and tension sensor data and simplified the rules for fall prediction. These rules can be expressed as follows: if |Asvm| exceeds a threshold b1 and Tsvm exceeds a threshold b2, the event is classified as 1 (a fall event); otherwise, it is classified as 0 (a nonfall event). The trigger thresholds for a fall event are shown in Figure 6. To achieve high accuracy, the threshold value determined by a single sensor must be 28.6% higher than that determined by multisensor fusion. Moreover, prediction by multisensor fusion is approximately 205 milliseconds faster than by a single sensor. As Figure 6 shows, setting a single threshold value over the acceleration and tension features fails. The multisensor system incorporates multiple thresholds to avoid the effects of single-sensor noise. By integrating multiple weak classifiers into a robust classifier, the multisensor fusion approach also achieves a high detection rate while reducing the false alarm rate. Figure 6 also shows that a single sensor’s data capture only some activity features and are sensitive in only some respects.
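The two-threshold rule described above can be written directly; b1 and b2 stand for the thresholds set offline during PNN training and are hypothetical placeholders here:

```python
def is_fall(a_svm, t_svm, b1, b2):
    """First-stage rule: report 1 (fall) only when BOTH the acceleration
    feature |Asvm| and the tension feature Tsvm exceed their thresholds;
    otherwise report 0 (nonfall)."""
    return 1 if (abs(a_svm) > b1 and t_svm > b2) else 0
```

Requiring both conditions is what lets the fused thresholds sit lower than a single-sensor threshold without raising the false-alarm rate.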
Table 2 compares our proposed method with state-of-the-art fall prediction algorithms. The proposed algorithm uses an accelerometer attached to the chest and waist belt force sensors in conjunction with the rehabilitation robot for data acquisition. The prediction algorithm identifies a fall or other gait abnormalities based on the off-line threshold value set by the PNN neural network. The prediction time when using the multisensor fusion approach is approximately 205 milliseconds ahead of approaches using a single sensor. The sensitivity is 99.37%, and the threshold is lower than that of single-sensor approaches.
The accelerometer data curves show that each type of activity is clearly distinguishable. The tension data curves exhibit abrupt changes, and the peak sequence follows an implicit pattern: after a long period of experimental study, we found that the largest peak value of the tension sensor in one direction indicates that the fall is occurring in the opposite direction.
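Assuming the four belt tension sensors are labeled by the direction in which they pull (labels are illustrative, not the paper's notation), the reported heuristic — the largest peak in one direction implies a fall toward the opposite side — can be sketched as:

```python
def fall_direction_from_tension(peaks):
    """Infer the fall direction from per-direction tension peak values.
    peaks: dict mapping a direction label to its peak tension; the fall is
    toward the direction OPPOSITE the strongest peak."""
    opposite = {"left": "right", "right": "left",
                "front": "back", "back": "front"}
    strongest = max(peaks, key=peaks.get)
    return opposite[strongest]
```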
Detailed and correct functional assessment is the basis of formulating a therapeutic schedule [4]; therefore, further detailed identification of falls and abnormal gaits is required. The second-stage SVM-RBF-KNN classifier is intended to classify specific types of falls or abnormal gaits in a supervised manner. The data extracted by the first-stage classifier are transmitted to the second-stage classifier. Let Xi = [Asvm, Tsvm, d(T1, T4), d(T2, T3), d(T3, T1), d(T4, T2), P1, P3, P4, P6] denote the input data, and let Ci denote the corresponding cluster center. We set seven types for the final output: “c1” denotes a normal gait, “c2” an abnormal gait, “c3” a vertical fall, “c4” a fall to the left, “c5” a fall to the right, “c6” a backward fall, and “c7” a frontward fall. First, we applied a gradient descent algorithm to obtain the data cluster centers, starting from a small group of data. Then, the other samples are treated as fall or abnormal patterns, and the RBF-BP network is trained on each of them. This step is followed by an iterative procedure that merges similar clusters (reclassifying) and retrains the remaining RBF-BP networks until no more merging occurs. Finally, the RBF is applied to train the neural network, and some randomly selected samples are used to test its performance. This study was conducted with MATLAB 2016a (64-bit) on Windows 7 Ultimate 64-bit with an Intel Core i7 (8 M cache, up to 2.7 GHz) and 8 GB of RAM. The final clustering centers are as follows:
C1 = [948.47, 366.98, 19.91, 1.19, 5.25, 23.98, 0.25, 0.99, 0.99, 0.77]
C2 = [952.35, 381.16, 24.20, 2.49, 6.65, 28.35, 0.21, 0.99, 0.99, 0.78]
C3 = [952.14, 467.32, 35.67, 8.24, 7.39, 34.82, 0.24, 0.96, 0.92, 0.63]
C4 = [953.27, 428.85, 27.19, 13.84, 18.74, 32.10, 0.23, 0.93, 0.92, 0.75]
C5 = [951.72, 447.61, 32.50, 0.27, 3.56, 28.65, 0.24, 0.98, 0.95, 0.67]
C6 = [953.45, 484.96, 43.75, 8.26, 5.05, 40.53, 0.27, 0.98, 0.87, 0.74]
C7 = [950.32, 413.71, 29.59, 0.44, 10.45, 40.49, 0.22, 0.99, 0.99, 0.71]
ω = [1.06e+38, 7.68e+45, 3.35e+48, 2.30e+120, 5.75e+122, 7.02e+122, 3.11e+108, 2.70]
Through data training, we obtained the weight vector ω, which can also be calculated by substituting the data into Equations (17) and (18).
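For illustration, assigning a new feature vector to one of the classes c1–c7 can be sketched as nearest-center matching against the trained cluster centers; the Euclidean metric and the label ordering are our assumptions, not the paper's exact decision rule:

```python
import numpy as np

def classify_activity(x, cluster_centers, labels):
    """Return the label of the cluster center closest (Euclidean) to the
    feature vector x; cluster_centers has shape (I, d), labels has length I."""
    dists = np.linalg.norm(cluster_centers - x, axis=1)
    return labels[int(np.argmin(dists))]
```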
The primary purpose of the second-level classifier is detailed category identification. We selected several classifiers commonly used in MATLAB and compared them with the classifier we built on all the data collected by our homemade rehabilitation robot platform, using accuracy and time consumption as evaluation indicators.
Table 3 compares the proposed fall detection method with state-of-the-art methods. The proposed method is the most accurate, and it can identify other gait features to enhance functional assessments. However, the proposed activity recognition method took approximately 41.79 seconds. In our study, after quickly predicting falls and instituting protection, the robot needs to perform more detailed activity recognition; therefore, the accuracy of its performance is more critical than its time consumption.
The data from the three sensors feed three weak classifiers, which are merged into a robust classifier. The experiments show that the tension signal is most sensitive in the gait direction and receives a high weight; the photoelectric sensors are most sensitive to abnormal gaits; and the acceleration data are most sensitive to sudden abnormalities. Since different people have different kinematic features and different body weights, the cluster weights ω, and consequently the classifier weights, differ between people; therefore, cluster weight assessment is critically important to ensure correct weight values for each person. In general, for bodyweight-supported gait training, the higher the weight-loss ratio, the greater the weight of the tension sensor (as shown in ω).
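Merging the three sensor-specific weak classifiers with per-person weights, as discussed above, could look like the following weighted-vote sketch; the sensor names, class labels, and weight values are illustrative assumptions:

```python
def fuse_weak_classifiers(votes, weights):
    """Weighted vote across sensor-specific weak classifiers.
    votes: {sensor: predicted class}; weights: {sensor: confidence weight}.
    Returns the class whose supporting sensors carry the most total weight."""
    totals = {}
    for sensor, cls in votes.items():
        totals[cls] = totals.get(cls, 0.0) + weights[sensor]
    return max(totals, key=totals.get)
```

A patient with a high bodyweight-support ratio would, per the text, get a larger tension weight, shifting the fused decision toward the tension classifier's vote.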

4. Discussion

In rehabilitation training, activity recognition is essential for the implementation and control of a rehabilitation robot and in the functional assessments of patients. However, there is currently no standard and detailed classification identification method for rehabilitation activity recognition. In addition, to prevent injuries due to falls, timely and accurate fall prediction is of the utmost importance. Unfortunately, very few studies have been conducted in this area [42,43]. Achieving timely prediction and making good trade-offs between sensitivity and specificity are also crucial for activity recognition, and thus are worth studying. To improve the accuracy of activity recognition and fall prediction during rehabilitation, we proposed a multisensor fusion system by combining heterogeneous sensor data for reasoning and analysis to solve complex activity recognition problems. We employed an asynchronous deviation obtained via tension sensors and individual conversion to estimate the motor capabilities of patients and adopted the deviation as a modulating factor to compute a corresponding assistance torque with ultimate adjustable torque bounds. Then, an assistance strategy can be generated. We also introduced a two-stage fall detection method in which a probabilistic neural network (PNN) is trained on routine activities. Subsequently, it identifies potential falls and passes them to an SVM-RBF-KNN algorithm, which predicts specific fall types or abnormal gaits in a supervised manner. The first stage allows fall prediction to occur as early as possible, while the second stage determines the fall details or the type of abnormal gait. These detailed classifications are stored and later provided to a therapist for functional assessment and the subsequent formulation of the therapeutic schedule.
During rehabilitation training, falls are most commonly caused by intrinsic factors such as muscle weakness, impaired balance, and declining cognition. These intrinsic falls usually do not show dramatic changes in acceleration values, and are therefore difficult to identify with general kinematic sensors. To solve this problem, we implemented a unique design in the multisensor system, which utilizes force dimensional information to avoid missed alarms caused by the lack of dramatic changes in kinematics values. This dimensional force information comes from the bodyweight support sling, which can reflect the potential falls caused by intrinsic factors in a timely manner. Previous studies [19,21,22,26,27] have also employed multicriteria algorithms to detect falls, but these algorithms perform fall identification in a parallel manner, and involve only explicit kinematics and posture information. In contrast, our multisensor system processes not only explicit information such as kinematics and spatial distribution information, but also implicit information such as kinetics and pulse information. Both the explicit and implicit information are analyzed in one discriminant function to obtain a detailed and accurate recognition result.
The results of the experiments indicate that fusing information from the two signal sources makes it possible to set lower thresholds that still classify accurately. The proposed first-stage classifier provides timely and accurate fall detection. The second-stage SVM-RBF-KNN algorithm performed favorably in activity recognition, achieving higher accuracy and fewer false warnings than the other classifiers. The results also revealed that separate classification problems can be solved by supplying different sets of features from the multisensor data. We also found that the value obtained from an accelerometer on a single axis is highly random and needs to be transformed into a suitable feature space by SVM algorithms. The multisensor system incorporates multiple thresholds to avoid the effects of noise in a single sensor, and it integrates multiple weak classifiers into a robust classifier. This multisensor fusion method also achieves a high detection rate while reducing the false alarm rate, and the proposed activity recognition system can effectively meet a physician’s needs for rehabilitation training. The sampling rate, the number and locations of the sensors, and the employed algorithm with its corresponding variable ranges all affected the detection results. The tension sensor data are more sensitive than the accelerometer data in the second stage of classification. The experiments demonstrate that 205 milliseconds can be saved compared to models that use a single sensor, the accuracy of the fusion method is 99.37%, and no false warnings occurred. These results verify the feasibility and efficiency of the multisensor fuzzy data fusion algorithm and its calculation process.
We also found that the tension signal is most reliable for identification, and the tension signal peak sequence reflects the fall direction (the largest peak value of the tension sensor in one direction indicates that the fall is occurring in the opposite direction).
The proposed recognition system coupled with a new type of mobile chaperone lower limb rehabilitation training robot exhibits excellent control performance, good interaction performance, and accurate monitoring, which promotes the enthusiasm and confidence of patients during rehabilitation training.
However, this new type of mobile chaperone lower limb rehabilitation training robot cannot improve recovery efficiency, because that efficiency is related to the pathological physical condition and medication of the patients; instead, our rehabilitation robots improve the efficiency of physicians by providing them with reliable data and allowing them to adjust the training strategies and training difficulty levels. Currently, neural network classifiers require more data for further training. Additionally, the experiments were performed by young students who were not hemiplegic patients. In future work, we plan to conduct experiments using hemiplegic patients from the China Rehabilitation Research Center (CRRC).

5. Patents

An Intelligent Safety Protection Robot for Rehabilitation Training of Human Lower Limbs (201611102656.X).

Author Contributions

Conceptualization, T.Y. and X.G.; methodology, T.Y.; software, T.Y.; validation, T.Y. and X.G.; formal analysis, T.Y. and X.G.; investigation, T.Y. and X.G.; data curation, T.Y. and X.G.; writing—original draft preparation, T.Y.; writing—review and editing, T.Y., R.G., F.D. and J.P.; funding acquisition, X.G.

Funding

This research was funded by the State Key Laboratory of Robotics and Systems (SKLRS-2017-KF-04), the Beijing Municipal Science and Technology Commission (Grant No. Z161100002616018), the Science and Technology Platform Construction Project of Fujian Science and Technology Department (Grant No. 2015Y2001-34), and the Science Foundation for Young Scholars of Fujian Province (Grant No. 2018J05099).

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Kaste, M.; Norrving, B. From the World Stroke Day to the World Stroke Campaign: One in six: Act now! Int. J. Stroke 2010, 5, 342–343. [Google Scholar] [CrossRef]
  2. Li, J.; Li, B.; Zhang, F.; Sun, Y. Urban and rural stroke mortality rates in china between 1988 and 2013: An age-period-cohort analysis. J. Int. Med. Res. 2017, 45, 680–690. [Google Scholar] [CrossRef]
  3. Wolbrecht, E.T.; Chan, V.; Reinkensmeyer, D.J.; Bobrow, J.E. Optimizing Compliant, Model-Based Robotic Assistance to Promote Neurorehabilitation. IEEE Trans. Neural Syst. Rehabil. Eng. 2008, 16, 286–297. [Google Scholar] [CrossRef]
  4. Zhao, W. Rehabilitation Therapy of Neurological Training of Hemiplegia: Daoyin Technique in Chinese Medicine; Springer Nature Singapore Pte Ltd. and People’s Medical Publishing House: Singapore, 2019; pp. 301–326. [Google Scholar]
  5. Jorgensen, L.; Engstad, T.; Jacobsen, B.K. Higher Incidence of Falls in Long-Term Stroke Survivors Than in Population Controls Depressive Symptoms Predict Falls After Stroke. Stroke 2002, 33, 542–547. [Google Scholar] [CrossRef]
  6. Hyndman, D.; Ashburn, A. People with stroke living in the community: Attention deficits, balance, ADL ability and falls. Disabil. Rehabil. 2003, 25, 817–822. [Google Scholar] [CrossRef]
  7. Lamb, S.E.; Ferrucci, L.; Volapto, S.; Fried, L.P.; Guralnik, J.M. Risk Factors for Falling in Home-Dwelling Older Women With Stroke The Women’s Health and Aging Study. Stroke 2003, 34, 494–501. [Google Scholar] [CrossRef]
  8. Kerse, N.; Parag, V.; Feigin, V.L.; McNaughton, H.; Hackett, M.L.; Bennett, D.A.; Anderson, C.S.; Auckland Regional Community Stroke (ARCOS) Study Group. Falls After Stroke Results From the Auckland Regional Community Stroke (ARCOS) Study, 2002 to 2003. Stroke 2008, 39, 1890–1893. [Google Scholar] [CrossRef]
  9. Wei, T.S.; Liu, P.T.; Chang, L.W.; Liu, S.Y. Gait asymmetry, ankle spasticity, and depression as independent predictors of falls in ambulatory stroke patients. PLoS ONE 2017, 12, e0177136. [Google Scholar] [CrossRef]
  10. Fregly, B.J.; Reinbolt, J.A.; Rooney, K.L.; Mitchell, K.H.; Chmielewski, T.L. Design of patient-specific gait modifications for knee osteoarthritis rehabilitation. IEEE Trans. Biomed. Eng. 2007, 54, 1687–1695. [Google Scholar] [CrossRef] [Green Version]
  11. Mirmahboub, B.; Samavi, S.; Karimi, N.; Shirani, S. Automatic Monocular System for Human Fall Detection Based on Variations in Silhouette Area. IEEE Trans. Biomed. Eng. 2013, 60, 427–436. [Google Scholar] [CrossRef]
  12. Litvak, D.; Zigel, Y.; Gannot, I. Fall detection of elderly through floor vibrations and sound. In Proceedings of the 30th Annual International Conference of the IEEE Engineering in Medicine and Biology Society, Vancouver, BC, Canada, 20–25 August 2008. [Google Scholar]
  13. Rashidi, P.; Mihailidis, A. A survey on ambient-assisted living tools for older adults. IEEE J. Biomed. Health Inform. 2013, 17, 579–590. [Google Scholar] [CrossRef]
  14. Cheng, W.-C.; Jhan, D.-M. Triaxial Accelerometer-Based Fall Detection Method Using a Self-Constructing Cascade-AdaBoost-SVM Classifier. IEEE J. Biomed. Health Inform. 2013, 17, 411–419. [Google Scholar] [CrossRef]
  15. Degen, T.; Jaeckel, H.; Rufer, M.; Wyss, S. A fall detector in a wristwatch. In Proceedings of the Seventh IEEE International Symposium on Wearable Computers, White Plains, NY, USA, 21–23 October 2003. [Google Scholar]
  16. Huang, J.; Xu, W.; Mohammed, S.; Shu, Z. Posture estimation and human support using wearable sensors and walking-aid robot. Robot. Auton. Syst. 2015, 73, 24–43. [Google Scholar] [CrossRef]
  17. Saadeh, W.; Altaf, M.A.B.; Altaf, M.S.B. A high accuracy and low latency patient-specific wearable fall detection system. In Proceedings of the IEEE EMBS International Conference on Biomedical & Health Informatics (BHI), Orlando, FL, USA, 16–19 February 2017; pp. 441–444. [Google Scholar]
  18. Tong, L.; Song, Q.; Ge, Y.; Liu, M. HMM-based human fall detection and prediction method using tri-axial accelerometer. IEEE Sens. J. 2013, 13, 1849–1856. [Google Scholar] [CrossRef]
  19. Saadeh, W.; Altaf, M.A.B.; Butt, S.A. A wearable neuro-degenerative diseases detection system based on gait dynamics. In Proceedings of the IFIP/IEEE International Conference on Very Large Scale Integration (VLSI-SoC), Abu Dhabi, UAE, 23–25 October 2017; pp. 1–6. [Google Scholar]
  20. Debard, G.; Karsmakers, P.; Deschodt, M.; Vlaeyen, E.; Dejaeger, E.; Milisen, K.; Goedemé, T.; Vanrumste, B.; Tuytelaars, T. Camera-based fall detection on real world data. In Outdoor and Large-Scale Real-World Scene Analysis; Lecture notes in computer science; Dellaert, F., Frahm, J.-M., Pollefeys, M., Leal-Taix, L., Rosenhahn, B., Eds.; Springer: Berlin/Heidelberg, Germany, 2012; p. 7474. [Google Scholar]
  21. Zhang, T.; Wang, J.; Xu, L.; Liu, P. Fall detection by wearable sensor and one-class SVM algorithm. In Proceedings of the International Conference on Intelligent Computing, ICIC 2006, Kunming, China, 16–19 August 2006; Volume 345, pp. 858–863. [Google Scholar]
  22. Lin, S.-H.; Cheng, W.-C. Fall detection with support vector machine during scripted and continuous unscripted activities. Sensors 2012, 12, 12301–12316. [Google Scholar]
  23. Yu, M.; Naqvi, S.; Rhuma, A.; Chambers, J. One class boundary method classifiers for application in a video-based fall detection system. IET Comput. Vis. 2012, 6, 90–100. [Google Scholar] [CrossRef] [Green Version]
  24. Khan, S.S.; Jesse, H. Review of fall detection techniques: A data availability perspective. Med. Eng. Phys. 2017, 39, 12–22. [Google Scholar] [CrossRef]
  25. Brunnstrom, S. Movement Therapy in Hemiplegia: A Neurophysiological Approach; Harper & Row: New York, NY, USA, 1972. [Google Scholar]
  26. Abbate, S.; Avvenuti, M.; Cola, G.; Corsini, P.; Light, J.; Vecchio, A. Recognition of false alarms in fall detection systems. In Proceedings of the IEEE Consumer Communications and Networking Conference (CCNC), Las Vegas, NV, USA, 9–12 January 2011; pp. 23–28. [Google Scholar]
  27. Luque, R.; Casilari, E.; Morón, M.-J.; Redondo, G. Comparison and characterization of android-based fall detection systems. Sensors 2014, 14, 18543–18574. [Google Scholar] [CrossRef]
  28. Igual, R.; Medrano, C.; Plaza, I. Challenges, issues and trends in fall detection systems. Biomed. Eng. Online 2013, 12, 66. [Google Scholar] [CrossRef]
  29. Shumway-Cook, A.; Baldwin, M.; Polissar, N.L.; Gruber, W. Predicting the probability for falls in community-dwelling older adults. Phys. Ther. 1997, 77, 812–819. [Google Scholar] [CrossRef]
  30. Liu, J.; Lockhart, T.E. Development and evaluation of a prior-to-impact fall event detection algorithm. IEEE Trans. Biomed. Eng. 2014, 61, 2135–2140. [Google Scholar]
  31. Karantonis, D.M.; Narayanan, M.R.; Mathie, M.; Lovell, N.H.; Celler, B.G. Implementation of a real-time human movement classifier using a triaxial accelerometer for ambulatory monitoring. IEEE Trans. Inf. Technol. Biomed. 2006, 10, 156–167. [Google Scholar] [CrossRef]
  32. Specht, D.F. Probabilistic neural networks. Neural Netw. 1990, 3, 109–118. [Google Scholar] [CrossRef]
  33. Kwolek, B.; Kepski, M. Fuzzy inference-based fall detection using kinect and body-worn accelerometer. Appl. Soft Comput. 2016, 40, 305–318. [Google Scholar] [CrossRef]
  34. Aburomman, A.A.; Reaz, M.B.I. A novel SVM-kNN-PSO ensemble method for intrusion detection system. Appl. Soft Comput. 2016, 38, 360–372. [Google Scholar] [CrossRef]
  35. Wang, W.; Zhang, X.; Gombaul, S. Constructing attribute weights from computer audit data for effective intrusion detection. J. Syst. Softw. 2009, 82, 1974–1981. [Google Scholar] [CrossRef]
  36. Caporusso, N.; Lasorsa, I.; Rinaldi, O.; la Pietra, L. A pervasive solution for risk awareness in the context of fall prevention. In Proceedings of the 3rd International ICST Conference on Pervasive Computing Technologies for Healthcare, London, UK, 1–3 April 2009; pp. 1–8. [Google Scholar]
  37. Di, P.; Huang, J.; Sekiyama, K.; Fukuda, T. A novel fall prevention scheme for intelligent cane robot by using a motor driven universal joint. In Proceedings of the International Symposium on Micro-NanoMechatronics and Human Science, Nagoya, Japan, 6–9 November 2011; pp. 391–396. [Google Scholar]
  38. Majumder, A.J.A.; Zerin, I.; Uddin, M.; Ahamed, S.I.; Smith, R.O. SmartPrediction: A real-time smartphone-based fall risk prediction and prevention system. In Proceedings of the Research in Adaptive and Convergent Systems, Montreal, QC, Canada, 1–4 October 2013; pp. 434–439. [Google Scholar]
  39. Bilgin, T.; Erdogan, S. A data mining approach for fall detection by using k-nearest neighbour algorithm on wireless sensor network data. IET Commun. 2012, 6, 3281–3287. [Google Scholar]
  40. Ojetola, O.; Gaura, E.I.; Brusey, J. Fall detection with wearable sensors–SAFE (Smart Fall DEtection). In Proceedings of the 7th International Conference on Intelligent Environments, Nottingham, UK, 25–28 July 2011; pp. 318–321. [Google Scholar]
  41. Baek, W.; Kim, D.; Bashir, F.; Pyun, J. Real life applicable fall detection system based on wireless body area network. In Proceedings of the IEEE 10th Consumer Communications and Networking Conference (CCNC), Las Vegas, NV, USA, 11–14 January 2013; pp. 62–67. [Google Scholar]
  42. Nyan, M.N.; Tay, F.E.H.; Murugasu, E. A wearable system for pre-impact fall detection. J. Biomech. 2008, 41, 3475–3481. [Google Scholar] [CrossRef]
  43. Wu, G.; Xue, S.W. Portable preimpact fall detector with inertial sensors. IEEE Trans. Neural Syst. Rehabil. Eng. 2008, 16, 178–183. [Google Scholar]
Figure 1. Mobile chaperonage lower limb rehabilitation-training robot (MCLLRTR) prototype.
Figure 2. The control system of MCLLRTR.
Figure 3. Multisensor fusion system.
Figure 4. The spatial distribution of normal gait.
Figure 5. The two-stage activity recognition classifier.
Figure 6. Threshold comparison between multisensor fusion and single-sensor models.
Table 1. Statistical information on simulated behavior of rehabilitation training performed by each participant.

| Type | Activity | Number of Activities |
| --- | --- | --- |
| Fall | (c1) Vertical fall (soft fall) | 5 |
| | (c2) Fall to left | 5 |
| | (c3) Fall to right | 5 |
| | (c4) Fall frontward | 5 |
| | (c5) Fall backward | 5 |
| (b) Abnormal gait | Rond de jambe à terre at left | 5 |
| | Rond de jambe à terre at right | 5 |
| | Dragging gait | 5 |
| | Forward gait | 5 |
| (a) Normal gait | Normal gait | 5 |
Table 2. Comparison between the proposed algorithm and state-of-the-art fall prevention systems. PNN: probabilistic neural network.

| System | Sensor | Sensor Location | Prediction Method | Computational Complexity | Sensitivity |
| --- | --- | --- | --- | --- | --- |
| Fallarm [36] | Inertial sensor | Wrist | Traffic-light-like alert system | Low | NA |
| iCane [37] | Force sensor + laser rangefinders | Robot | User’s center of gravity in conjunction with sensors | Medium | NA |
| SmartPrediction [38] | Accelerometer and pressure sensor | Pocket and shoe | Decision tree triggering an alarm | Medium | 97.2% |
| Our work | Accelerometer and tension sensor | Chest and robot | PNN | Low | 99.37% |
Table 3. Comparison between the proposed algorithm and state-of-the-art fall detection systems. KNN: k-nearest neighbors, RBF: radial basis function, SVM: support vector machine.

| | Bilgin [39] | SAFE [40] | Baek [41] | Our Work |
| --- | --- | --- | --- | --- |
| Sensor type | Tri-accelerometer | Tri-accelerometer and gyro | Tri-accelerometer and gyro | Kinematic, tension, and photoelectric |
| Kinematic sensor location | Waist | Chest and thigh | Neck | Chest and robot |
| Fall type | × | √ | × | √ |
| Recognition except fall | × | × | × | √ |
| Learning | KNN | Decision tree | Threshold-based | SVM-RBF-KNN |
| Specificity | NA | NA | 100% | 100% |
| Sensitivity | 89.4% | 99.45% | 80% | 99.37% |
