Article

Improving Fall Detection Using an On-Wrist Wearable Accelerometer

by
Samad Barri Khojasteh
1,†,
José R. Villar
2,*,†,‡,
Camelia Chira
3,†,
Víctor M. González
2,† and
Enrique De la Cal
2,†
1
Sakarya University, 54050 Sakarya, Turkey
2
Electric, Electronic, Computers and Systems Engineering Department, University of Oviedo, 33003 Oviedo, Spain
3
Computer Science Department, Babes-Bolyai University, 400084 Cluj-Napoca, Romania
*
Author to whom correspondence should be addressed.
These authors contributed equally to this work.
Current address: University of Oviedo, Computer Science Department, EIMEM, c/Independencia 13, 33004 Oviedo, Spain.
Sensors 2018, 18(5), 1350; https://doi.org/10.3390/s18051350
Submission received: 12 March 2018 / Revised: 18 April 2018 / Accepted: 23 April 2018 / Published: 26 April 2018
(This article belongs to the Special Issue Sensors for Gait, Posture, and Health Monitoring)

Abstract

:
Fall detection is a very important challenge that affects both elderly people and their carers. Improvements in fall detection would reduce the aid response time. This research focuses on a method for fall detection with a sensor placed on the wrist. Falls are detected using a published threshold-based solution, and a study on threshold tuning has been carried out. The feature extraction is extended in order to balance the dataset for the minority class. Alternative models have been analyzed to reduce the computational constraints so that the solution can be embedded in smartphones or smart wristbands. Several published datasets have been used in the Materials and Methods section. Although these datasets do not include data from real falls of elderly people, a complete comparison study of fall-related datasets shows statistical differences between simulated falls and real falls from participants suffering from impairment diseases. Given the obtained results, rule-based systems represent a promising research line, as they perform similarly to neural networks but with a reduced computational cost. Support vector machines also performed with high specificity. However, further research is needed to validate the proposal in real on-line scenarios, and a slight improvement should be made to reduce the number of false alarms.

1. Introduction

Fall Detection (FD) is a very active research area, with many applications in health care, work safety, etc. [1]. Even though there are plenty of commercial products, the best rated products only reach an 80% success rate [2,3]. There are basically two types of FD systems: context-aware systems and wearable devices [4,5]. FD has been widely studied using context-aware systems, i.e., video systems [6]; nevertheless, the use of wearable devices is crucial because of the high percentage of elderly people and their desire to live autonomously in their own house [7].
Wearable-based solutions may combine different sensors, such as a barometer and inertial sensors [8], or a tri-axial accelerometer (3DACC) combined with other devices, like a gyroscope [9], intelligent tiles [10] or a barometer in a necklace [11]. By far, 3DACC is the most used option in the literature [12,13,14,15,16]. Different solutions have been proposed to perform the FD; for instance, a feature extraction stage and SVM have been applied directly in [12,14], while some transformations and thresholds with very simple rules have been used for classifying an event as a fall [15,16,17]. A comparison of classifiers has been presented in [13].
The common characteristic of all these solutions is that the wearable devices are placed on the waist or on the chest; the reason for this location is that it is by far easier to detect a fall with the sensory system in this placement [18]. Each location is the best option for some cases, but not for others. For instance, placing the sensor on the waist is valid for patients with severe impairment; however, the requirement to wear a belt over the clothing might not be acceptable in the case of healthy participants. Furthermore, this type of device lacks usability, and people might easily forget it on the bedside table [4,19]. Thus, this research limits itself to a single sensor, a commercial smart wristband, placed on the wrist.
Furthermore, these previous studies do not focus on the specific dynamics of a falling event: although some of the proposals report good performances, they are just machine learning applied to the FD problem. There are studies concerned with the dynamics of a fall event with sensors located on the waist [20,21,22,23], establishing the taxonomy and the time periods for each sequence. Interestingly, it has been found that the vast majority of the solutions have been obtained using data gathered from simulated falls [3,24]; these studies have found that evaluating the solutions with data gathered from real falls produces high error rates and rather poor performance. As far as we know, FARSEEING (FAll Repository for the design of Smart and sElf-adaptive Environments prolonging Independent livinG) is the first dataset including data from real falls of elderly people [3]. The data have been gathered from patients suffering from an impairment illness while using a 3DACC placed either on the thigh or on the lower back. It seems the data must come from the same population in order to capture the inherent behavior of the subjects when falling, which may vary with age [25].
Focusing on FD using a wrist-worn bracelet, there are several works published in the literature. Ngu et al. proposed using a BLE link with a smartphone that has access to intelligent web services [26]. Basically, the agent running on the smartphone gathers the 3DACC data stream, analyzing each sliding window with a one-sample step. For each sliding window, up to four features are computed and fed to an SVM, which classifies the window as FALL or NOT_FALL. For training the SVM, the authors proposed a per-user training stage, sending the data stream to an intelligent web service to learn the model. Furthermore, the authors proposed a set of ADL to be performed by the user to gather data, including fall simulation.
In [27], a smartwatch sends the data to a smartphone, where the detection takes place using Kalman filters and the CUMSUM algorithm [28] with adaptive thresholds. Similarly, a commercial wrist-worn 3DACC wearable was used together with a smartphone to detect falls and inactivity in [29]. In this case, the wearable delivers the processed data to the BLE-linked smartphone, which includes an implementation of WEKA. With the gathered data, an NN model was trained to discriminate between falls and several ADL; additionally, a threshold-based inactivity detection is continuously updated. As the authors stated, one of the problems is the energy efficiency of the wearable: the data stream, even with BLE, penalizes the battery life. Another important problem refers to computing the models on the wearable, which also reduces the autonomy. Moreover, the authors noted the problem of the “heavy dependence on the availability of a smartphone which should therefore always be within a few meters of the user” [29].
A threshold-based solution was analyzed in [30], where up to four different features were computed for each sliding window with a one-sample shift. Up to 11 thresholds were defined; their values were found experimentally. The authors reported very good results when the alternative ADLs were walking, sitting and others. When the threshold algorithms ran both on the smartphone and on the wrist-worn wearable device, the performance was enhanced by between 5 and 15%. Although thresholds have been widely used in the literature, relying only on this type of discrimination might not generalize to the whole population. Additionally, the whole FD is computed on the smartphone, so its availability represents a big challenge. A similar solution is proposed in [31], using threshold-based algorithms both in the smartwatch and in the smartphone.
An alternative solution was proposed in [32], where a smartwatch works autonomously to detect falls and send notifications. Threshold-based solutions were proposed under the assumption that only those falls in which the user faints need to be detected: in the remaining cases, the user is in charge of calling the e-health service. Similarly, [27] made use of an Android smartphone to run threshold-based FD. In these latter studies, the authors proposed a continuous analysis of the acceleration magnitude in order to classify the current motion as FALL or NOT_FALL. As the authors stated in these papers, running more complex models continuously would drain the battery, severely reducing the autonomy of the solution.
In one of the published solutions, Abbate et al. [33] proposed the use of the inherent dynamics of a fall as the basis of the FD algorithm, with the sensor placed on the waist. A fall event detection is run continuously based on peak detection; once a peak is detected, a feature extraction is performed, and a feed-forward NN classifies the event as FALL or NOT_FALL. A very interesting point of this approach is that the computational constraints of the first two stages are kept moderate so they can be deployed in a wearable device, although this solution includes a high number of thresholds to tune.
The aim of this study is to develop a wrist-wearable solution for FD focused on the elderly. A wrist-worn 3DACC on a smart wristband is proposed to enhance the ergonomics of the solution. Based on [33], the solution has been implemented and enhanced with (i) an intelligent optimization stage to improve the peak detection, (ii) a dataset-balancing stage to avoid biasing the models towards the majority class, and (iii) alternative machine learning methods, compared to the one originally proposed, in order to reduce the computational complexity and promote a longer battery life. Finally, this study makes use of several published datasets, including real falls [3], simulated falls and ADL [34], ADL only [35], and ADL and simulated epileptic seizures [36]; all of them use 3DACC, the first with the sensor on the lower back or on the thigh, the latter three with the sensor placed on a wrist. A comparison between real falls suffered by elderly people and falls from young participants in ideal conditions is included to analyze the validity of the results using the simulated falls. Moreover, a complete cross-validation stage, including training, testing and validation, is performed. To our knowledge, this is the first study considering so many different published datasets and such a complete comparison scheme to analyze different FD solutions.
The remainder of this study is organized as follows. Next, the description of the solution proposed in this research is outlined. Section 3 details the experimentation that has been carried out, while Section 4 shows the experiment results and the discussion on them. The study ends with the conclusions drawn from this research.

2. Fall Detection with a Wrist-Worn Sensor

The block diagram depicted in Figure 1 is defined in this research, which basically is the proposal in [33]. The data gathered from a 3DACC located on the wrist are processed using a sliding window. A peak detection is performed, and if a peak is found, the data within the sliding window are analyzed to extract several features, which are ultimately classified as FALL or NOT_FALL. The FD block is performed with an AI classifier.
The next subsection describes the method for detecting a peak, as well as the feature extraction, while the method for training the FD block is detailed in Section 2.2. For each case, the proposed modifications are included. A discussion on the most suitable models to be used in this approach is held in Section 2.3. Finally, a new stage is included in the process devoted to the tuning of the peak detection threshold; this stage is explained in Section 2.4.

2.1. Feature Extraction Based on the Dynamics of a Fall

Abbate et al. [33] proposed the following scheme to represent the dynamics within a fall, so that a possible fall event can be detected (refer to Figure 2). Let us assume that gravity is g = 9.8 m/s². Given the current timestamp t, we find a peak at peak time pt = t − 2500 ms (Point 1) if at time pt the magnitude of the acceleration a is higher than th1 = 3 × g and there is no other peak in the period (t − 2500 ms, t) (no other a value higher than th1). If this condition holds, then it is stated that a peak occurred at pt.
When a peak is detected, the feature extraction is performed, computing several parameters and features for this peak time. The impact end (ie) (Point 2) denotes the end of the fall event; it is the last time for which the a value is higher than th2 = 1.5 × g. Finally, the impact start (is) (Point 3) denotes the starting time of the fall event, computed as the time of the first sequence of a <= th3 (th3 = 0.8 × g) followed by a value of a >= th2. The impact start must belong to the interval [ie − 1200 ms, pt]. If no impact end is found, then it is fixed to pt + 1000 ms. If no impact start is found, it is fixed to pt.
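As a minimal illustration, the peak and impact detection described above can be sketched as follows (the thresholds and window lengths follow the text; the function and variable names are ours, not from the paper):

```python
import numpy as np

G = 9.8  # gravity (m/s^2)

def detect_fall_event(a, t, fs):
    """Sketch of the peak / impact-start / impact-end detection.

    a  : 1-D array of acceleration magnitudes (m/s^2)
    t  : index of the current (most recent) sample
    fs : sampling frequency (Hz)
    Returns (pt, impact_start, impact_end) as sample indices, or None."""
    th1, th2, th3 = 3.0 * G, 1.5 * G, 0.8 * G
    ms = lambda x: int(x * fs / 1000)          # milliseconds -> samples
    pt = t - ms(2500)                          # candidate peak time
    if pt < 0 or a[pt] <= th1:
        return None
    if np.any(a[pt + 1:t + 1] > th1):          # another peak in (pt, t]
        return None
    # impact end: last sample above th2 within 1000 ms after the peak
    win = a[pt:min(pt + ms(1000), t) + 1]
    above = np.nonzero(win > th2)[0]
    ie = pt + above[-1] if above.size else pt + ms(1000)
    # impact start: first a <= th3 followed by a >= th2, in [ie - 1200 ms, pt]
    iss = pt
    for k in range(max(ie - ms(1200), 0), pt + 1):
        if a[k] <= th3 and np.any(a[k + 1:ie + 1] >= th2):
            iss = k
            break
    return pt, iss, ie
```

In an on-line setting this check would be evaluated once per incoming sample, so only comparisons and short window scans are needed until a candidate peak is found.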
With these three times—is, pt and ie—calculated, the following transformations should be computed:
  • Average Absolute Acceleration Magnitude Variation, AAMV = (Σ_{t=is}^{ie} |a_{t+1} − a_t|) / N, with N the number of samples in the interval.
  • Impact Duration Index, IDI = ie − is.
  • Maximum Peak Index, MPI = max_{t∈[is,ie]}(a_t).
  • Minimum Valley Index, MVI = min_{t∈[is−500,ie]}(a_t).
  • Peak Duration Index, PDI = pe − ps, with ps the peak start, defined as the time of the last magnitude sample below th_PDI = 1.8 × g occurring before pt, and pe the peak end, defined as the time of the first magnitude sample below th_PDI = 1.8 × g occurring after pt.
  • Activity Ratio Index, ARI, calculated as the ratio between the number of samples outside [th_ARIlow = 0.85 × g, th_ARIhigh = 1.3 × g] and the total number of samples in the 700-ms interval centered at (is + ie)/2.
  • Free Fall Index, FFI, the average magnitude in the interval [t_FFI, pt]. The value of t_FFI is the time of the first acceleration magnitude below th_FFI = 0.8 × g occurring up to 200 ms before pt; if not found, it is set to pt − 200 ms.
  • Step Count Index, SCI, measured as the number of peaks in the interval [pt − 2200 ms, pt].
According to the block diagram, each sample of these eight features is classified as a fall event or not using the predefined model. Therefore, this model has to be trained; this topic is covered in the next subsection.
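For illustration, four of the eight features can be sketched as follows (a minimal sketch: the dictionary layout and names are ours, and the remaining features, PDI, ARI, FFI and SCI, follow the same pattern):

```python
import numpy as np

def extract_features(a, pt, iss, ie, fs):
    """Compute AAMV, IDI, MPI and MVI from the magnitude series `a`.

    pt, iss, ie are the peak, impact-start and impact-end sample
    indices; fs is the sampling frequency in Hz."""
    ms = lambda x: int(x * fs / 1000)
    return {
        # mean absolute sample-to-sample variation over [is, ie]
        "AAMV": float(np.mean(np.abs(np.diff(a[iss:ie + 2])))),
        # impact duration, in seconds
        "IDI": (ie - iss) / fs,
        # maximum magnitude in [is, ie]
        "MPI": float(np.max(a[iss:ie + 1])),
        # minimum magnitude in [is - 500 ms, ie]
        "MVI": float(np.min(a[max(iss - ms(500), 0):ie + 1])),
    }
```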

2.2. Training the FD Model

Provided there exists a collection of TS with data gathered from real falls or from ADL, a training phase can be proposed to train the FD model. Let us consider a dataset containing { T S i L }, with i = 1…N, N the number of TS and L the assigned label; that is, a sample of this dataset is a T S i L with the data gathered from a participant using a 3DACC on the chosen location, i.e., on a wrist. Let us assume we know a priori whether or not this T S i L includes the signal gathered when a fall occurred; therefore, each TS is labeled either L = FALL or L = NOT_FALL.
Now, let us evaluate the peak detection and the feature extraction blocks for each TS. Whenever a T S i L has no peak, the T S i L is discarded. When a peak is detected for T S i L , the eight features are computed, and label L is assigned to this new sample. Therefore, a new dataset is created with M labeled eight-feature samples, with M not necessarily equal to N. This dataset was used in [33] to train the feed-forward NN.
Nevertheless, it has been found that this solution (i) might generate more than one sample for a single T S i L , which is not a problem, and (ii) will certainly generate a very biased dataset, with the majority of the samples belonging to the class FALL. From their study [33], it can easily be seen that the main reason for a 100% detection rate is this biased dataset.
Consequently, in this research, we propose to include a dataset balancing stage using SMOTE [37], so at least a 40 / 60 ratio is obtained for the minority class.
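As an illustration of the balancing step, a minimal SMOTE-style oversampler can be sketched as follows (in practice a library implementation would be used; this hand-rolled version only shows the interpolation idea, and all names are ours):

```python
import numpy as np

def smote_oversample(X_min, n_new, k=5, rng=None):
    """Generate n_new synthetic minority samples by interpolating
    between a random minority sample and one of its k nearest minority
    neighbours, as in the SMOTE family of methods."""
    rng = np.random.default_rng(rng)
    X_min = np.asarray(X_min, dtype=float)
    out = []
    for _ in range(n_new):
        i = rng.integers(len(X_min))
        d = np.linalg.norm(X_min - X_min[i], axis=1)
        nn = np.argsort(d)[1:k + 1]        # k nearest neighbours (not self)
        j = rng.choice(nn)
        lam = rng.random()                 # interpolation factor in [0, 1)
        out.append(X_min[i] + lam * (X_min[j] - X_min[i]))
    return np.vstack(out)
```

Since every synthetic point is a convex combination of two existing minority samples, the oversampled set stays inside the minority class region of the feature space.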

2.3. Model Complexity and the Battery Life

In [33], Abbate et al. made use of a feed-forward NN. Although the number of hidden neurons was originally set to seven, using a balanced training dataset as stated in the previous section raises this NN parameter up to 20. Basically, the use of any type of NN is a well-known solution that works quite well in computerized environments [12,14]. Nevertheless, it is known that the higher the number of operations with real numbers, the higher the effort a computer has to make; in the context of wearable and mobile devices, this extra cost matters [38].
In previous research, a comparison between models and their suitability for each possible scenario was presented [36,39]. As has been shown, those models that include heavy computation seem to perform better. Actually, K-nearest neighbor outperformed many other solutions; however, its implementation in battery-powered devices could drain the battery in a relatively short period of time [40].
Therefore, this research proposes to constrain the models to those that include a low computational impact, reducing complex calculations as much as possible. Actually, in this research, only decision trees and rule-based systems are proposed. These models are based on comparison operations, which are much simpler; the hypothesis is that the obtained results are not going to significantly differ from those of an NN. Finally, to obtain a comparison with state-of-the-art modeling [12,14], we also include the SVM as an alternative.

2.4. Tuning the Peak Threshold

As stated in the Introduction of this study, several solutions in the literature are based on thresholds (for instance, [15,20,21,22], among others). In all of these studies, the thresholds were set up based on the data analysis, either by experts or by data engineers through data visualization.
The solution proposed in [33] is not different. Furthermore, several thresholds are used in that study, not only to detect a peak, but also to compute the extracted features. All of them have been fixed by analyzing the gathered data, establishing some typical values for the features for the class FALL.
However, this can be improved by means of computational intelligence and optimization. In this research, we propose to use well-known techniques (genetic algorithms and simulated annealing) to find the most suitable values for these thresholds. This study, in any case, requires not only optimization, but also some design decisions to modify the features. Therefore, for the purpose of this study, we constrain ourselves to focus on the optimization of the peak threshold, which is the most important threshold as it is the one responsible for finding fall event candidates.

3. Materials and Methods

3.1. Public Datasets

A common way of studying FD is by developing a dataset of simulated falls plus extra sessions of different ADL; all of these TS are labeled and become the test bed for the corresponding study. In this context, a simulated fall is performed by a set of healthy young participants wearing the sensory system, each of them letting him/herself fall towards a mattress from a standing still position.
The vast majority of these datasets were gathered with the sensor attached to the main body, either on the chest, waist, lumbar area or thigh. Interestingly, the UMAFall [34] dataset includes data gathered from 3DACC sensors placed on different parts of the body—ankle, waist, wrist and head—while performing simulated falls; this is the type of data needed in this research, since the main hypothesis of this study is to perform FD with a sensor worn on a wrist. However, there is no pattern in the number of repetitions of each activity or fall simulation: some participants did not simulate any fall; some performed 6 or 9; and one participant simulated 60 falls.
In addition, this research includes more publicly available datasets. On the one hand, the ADL and simulated epileptic seizure datasets published in [36] are considered because they include a high-movement activity, the simulated partial tonic-clonic seizures, followed by a relatively calm period, plus some other ADL, all of them measured using a 3DACC placed on the dominant wrist. Although this dataset includes neither simulated nor real falls, it includes activities that share similar dynamics with those proposed for a fall.
Additionally, the DaLiac [35] dataset is also considered in this study. This dataset includes several sensors, one on the wrist and one on the waist, among others. Up to 19 young healthy participants and up to 13 different ADLs are considered, from sitting to cycling.
On the other hand, the FARSEEING dataset [3] is also used for studying the validity of the simulated falls compared with real falls. As stated on the web page, “the FARSEEING real-world fall repository: a large-scale collaborative database to collect and share sensor signals from real-world falls”. Data from 15 participants have been gathered for a total of 22 TS; each TS corresponds to a fall: 7 participants (producing 7 TS) have the 3DACC placed on a thigh, while 8 participants (producing 15 TS) have the 3DACC sensor placed on the lower back. Therefore, this dataset is used to validate the simulated falls, so the extent of the conclusions using the available datasets can be determined. Table 1 summarizes the datasets used in this study.

3.2. Dataset Comparison

As mentioned before, published studies on FD tend to base their experimentation on simulated falls performed by healthy participants, the UMA Fall among them, whose ages fall outside the range of the population on which we focus in this research. In this context, it can be argued that the extrapolation of the conclusions might not be straightforward.
Therefore, a comparison of the signals recorded from the waist in UMA Fall and from the lower back in FARSEEING is performed, so that a conclusion about the similarity of the simulated and the real falls can be drawn. The comparison consists of an exhaustive visual inspection of the signals. To do so, fall signals from the UMA Fall dataset with the same direction—forward, backward or lateral—are compared with each of the fall signals coming from a sensor placed on the lower back. The idea is to evaluate whether the dynamics of those TS are similar to each other and to those described in [20,33].

3.3. A Complete Cross-Validation Scheme

In this research, a complete cross-validation (cv) scheme is performed; that is, it includes training, testing and validation. Each of these stages includes all the TS from the same individual. In other words, once a participant is chosen to become part of a dataset partition, either validation or training and testing, all of his/her TS are included in that partition (refer to Figure 3). Therefore, none of the TS from a participant included in the validation dataset are used in training and testing: these two partitions—on one side, the validation, and on the other side, the training and testing—are completely unrelated.
The first thing that has been done is choosing the participants from the UMA Fall and the simulated epileptic seizures datasets that are preserved for the validation. Fifteen percent of the participants from each dataset have been chosen to be included in the validation dataset. The remaining participants are assigned to the training and testing dataset.
On this training and testing dataset, cross-validation is performed. Both 10-fold cv and 5×2 cv are performed on a per-participant basis. This means that, for each fold, the participants are grouped for training or for testing. Once a participant is assigned to either training or testing, all of his/her TS are used in the corresponding process. Again, for each fold, the training and testing partitions do not share any participant's TS; they are completely unrelated.
This scheme is outlined in Figure 3. The advantage of this cv scheme is that it allows one to evaluate the performance of the solution with unseen participants, those preserved for validation, as would be the case in real life. Furthermore, this scheme allows one to perform training and testing on independent participants: a model is trained with data from a set of participants and then tested with data from a different and independent set of participants. Therefore, the trained models are tested against data from participants that are totally unseen by them. Certainly, this will reduce the performance of the methods, but it allows one to evaluate the robustness of the solutions.
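The participant-wise grouping described above can be sketched as follows (a minimal illustration; the mapping structure and all names are ours):

```python
import random

def participant_folds(ts_by_participant, n_folds=10, seed=42):
    """Group whole participants into folds so that no participant's
    time series are split across training and testing.

    ts_by_participant : dict mapping participant id -> list of TS
    Yields (train_ts, test_ts) pairs, one per fold."""
    pids = sorted(ts_by_participant)
    random.Random(seed).shuffle(pids)
    folds = [pids[i::n_folds] for i in range(n_folds)]
    for i in range(n_folds):
        test_ts = [ts for p in folds[i] for ts in ts_by_participant[p]]
        train_ts = [ts for j in range(n_folds) if j != i
                    for p in folds[j] for ts in ts_by_participant[p]]
        yield train_ts, test_ts
```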
The general process is depicted in Figure 4. The training and testing dataset is used for tuning the threshold for the peak detection; this optimization process is detailed in Section 3.4. Once the threshold is obtained, the peak detection takes place. Each TS included in the training and testing dataset is analyzed to find out whether there exists a peak or not. The detected peaks are analyzed in depth, extracting the eight features and assigning a label: FALL or NOT_FALL.
This new intermediate dataset, called the model training dataset, might be highly imbalanced; therefore, SMOTE is applied to obtain a more suitable dataset for the learning process. The learning process is detailed in Section 3.5 and includes up to four types of models: feed-forward NN, SVM, DT and RBS. The SMOTE configuration is set so that the minority class reaches at least a 40/60 ratio. This balanced dataset and the best model configuration found using a grid scheme are used in the training of the model.
Finally, the validation dataset is considered. It goes through the peak detection block, using the optimized threshold, and whenever a peak is found, the feature extraction stage is executed. Finally, the eight features are classified using the best model found in the previous stage. A TS from the validation dataset will be classified as FALL if a peak is detected and the subsequent classifier outputs the FALL label; otherwise, the TS will be assigned the label NO_FALL.
To evaluate this validation stage, and every classification result in this study, the standard measurements accuracy, Kappa factor, precision, sensitivity, specificity and the geometric mean of the latter two are computed. In order to compute the TP, TN, FP and FN, each TS is labeled FALL if it includes a fall event; otherwise, it is labeled NOT_FALL. Each TS is evaluated using each of the classifiers; the label FALL is assigned to the TS whenever a peak is detected and the corresponding output of the classifier is FALL; otherwise, the TS is labeled NOT_FALL. Then, the following formulas hold.
Acc = (TP + TN) / (TP + TN + FP + FN)        (1)
P0 = (TP + TN) / (TP + TN + FP + FN)        (2)
Pe = ((TP + FN) × (TP + FP) + (TN + FP) × (TN + FN)) / (TP + TN + FP + FN)²        (3)
K = (P0 − Pe) / (1 − Pe)        (4)
Se = TP / (TP + FN)        (5)
Sp = TN / (TN + FP)        (6)
Pr = TP / (TP + FP)        (7)
G = √((TP / (TP + FN)) × (TN / (TN + FP)))        (8)
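These standard measurements can be computed directly from the confusion counts; a minimal sketch (names are ours):

```python
from math import sqrt

def classification_metrics(tp, tn, fp, fn):
    """Accuracy, Cohen's kappa, sensitivity, specificity, precision
    and the geometric mean G of sensitivity and specificity."""
    n = tp + tn + fp + fn
    p0 = (tp + tn) / n                       # observed agreement
    pe = ((tp + fn) * (tp + fp)
          + (tn + fp) * (tn + fn)) / n ** 2  # chance agreement
    se = tp / (tp + fn)                      # sensitivity (recall)
    sp = tn / (tn + fp)                      # specificity
    return {
        "Acc": (tp + tn) / n,
        "K": (p0 - pe) / (1 - pe),
        "Se": se,
        "Sp": sp,
        "Pr": tp / (tp + fp),
        "G": sqrt(se * sp),
    }
```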

3.4. Tuning the Peak Detection Threshold

A peak is detected whenever the acceleration magnitude is higher than 3 × g as defined in [33] when the sensor is located on the waist. However, is this a valid value when the sensor is located on a wrist? This question will be answered using two metaheuristics: Genetic Algorithms (GA) and Simulated Annealing (SA).
The peak threshold is encoded as a real value ranging from 2.0 to 3.5. As explained in the dataset comparison experiments, these bounds were obtained from the analysis of the TS from the UMA Fall gathered with the sensor on the wrist; for the sake of brevity, these TS are not plotted. The encoded real value represents a possible solution for both the GA and SA approaches. The quality of a solution is evaluated using a fitness function based on the sensitivity and specificity of the classification measurements generated using the current peak threshold. The fitness function used to guide the search process of the metaheuristics is the geometric mean of the specificity and the sensitivity, that is, f(x) = G(x); see Equation (8).
The GA starts with a population of randomly-generated individuals. Each generation, convex crossover is applied with a certain probability between each individual and a mate selected using a binary tournament. The resulting offspring replaces the first parent if it has a better fitness value. Gaussian mutation is then applied to the current individual with a fixed probability. Mutation perturbs the peak threshold using a zero-mean Gaussian distribution, and the new obtained value is allowed to replace the current individual. This unconditioned replacement enhances the diversity of the population and benefits the search process. The parameter setting is performed with the aim to keep the number of fitness evaluations as low as possible in order to avoid high computational cost. To this end, the peak threshold optimization using GA is based on a population size and generation number of 10, crossover probability of 0.8 and mutation probability of 0.2 .
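The scalar GA described above can be sketched as follows (a minimal illustration: the mutation width sigma and all names are our choices, and `fitness` stands for the G-based evaluation of a candidate threshold):

```python
import random

def ga_threshold(fitness, lo=2.0, hi=3.5, pop_size=10, gens=10,
                 p_cx=0.8, p_mut=0.2, sigma=0.1, seed=1):
    """Scalar GA: convex crossover with a binary-tournament mate,
    offspring replaces the parent only if fitter, then Gaussian
    mutation replaces the individual unconditionally."""
    rng = random.Random(seed)
    clip = lambda v: min(max(v, lo), hi)
    pop = [rng.uniform(lo, hi) for _ in range(pop_size)]
    fit = [fitness(x) for x in pop]
    for _ in range(gens):
        for i in range(pop_size):
            if rng.random() < p_cx:
                a, b = rng.randrange(pop_size), rng.randrange(pop_size)
                mate = pop[a] if fit[a] >= fit[b] else pop[b]  # tournament
                lam = rng.random()
                child = clip(lam * pop[i] + (1 - lam) * mate)  # convex cx
                fc = fitness(child)
                if fc > fit[i]:                # replace if fitter
                    pop[i], fit[i] = child, fc
            if rng.random() < p_mut:
                mut = clip(pop[i] + rng.gauss(0.0, sigma))
                pop[i], fit[i] = mut, fitness(mut)  # unconditional
    best = max(range(pop_size), key=lambda i: fit[i])
    return pop[best], fit[best]
```

With a population and generation count of 10 each, the search stays within roughly one to two hundred fitness evaluations, matching the stated goal of a low computational cost.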
The SA algorithm is based on a single solution initialized with a random value in the considered range [ 2.0 , 3.5 ] . The neighborhood of a solution is defined based on the Gaussian mutation. A new solution y selected from the neighborhood of a current solution x is accepted as the new current solution if it has better fitness or with a probability defined according to the SA approach (as given below).
P_accept = e^((f(y) − f(x)) / T)        (9)
The probability of accepting a new solution from the neighborhood that does not improve the current fitness value depends on the difference between the fitnesses of the two solutions and on an SA parameter called temperature (denoted by T above). The cooling scheme for the temperature is based on a simple iterative function that returns the current T multiplied by a constant value α. For each value of T, starting from the initial temperature down to the minimum temperature, several iterations are allowed to select a new neighboring solution. In the current parameter setting, the value of T starts at 1.0, the minimum temperature is 0.1, the value of α is 0.9 and the number of iterations is set to 5. Parameter values for both GA and SA have been selected based on some preliminary experiments, according to the results obtained and the computational cost.
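Similarly, the SA search can be sketched as follows (same caveats: the neighbourhood width sigma and all names are our choices, and `fitness` stands for the G-based evaluation):

```python
import math
import random

def sa_threshold(fitness, lo=2.0, hi=3.5, t0=1.0, t_min=0.1,
                 alpha=0.9, iters=5, sigma=0.1, seed=1):
    """Simulated annealing over a scalar threshold: Gaussian
    neighbourhood, acceptance probability exp((f(y) - f(x)) / T) for
    worse solutions, geometric cooling T <- alpha * T."""
    rng = random.Random(seed)
    clip = lambda v: min(max(v, lo), hi)
    x = rng.uniform(lo, hi)
    fx = fitness(x)
    best, fbest = x, fx
    T = t0
    while T >= t_min:
        for _ in range(iters):
            y = clip(x + rng.gauss(0.0, sigma))  # neighbour of x
            fy = fitness(y)
            if fy >= fx or rng.random() < math.exp((fy - fx) / T):
                x, fx = y, fy                    # accept move
            if fx > fbest:
                best, fbest = x, fx
        T *= alpha                               # cooling step
    return best, fbest
```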

3.5. Model Learning

The original solution proposed in [33] made use of a feed-forward NN with 7 hidden neurons. However, in that original paper, the authors did not balance the model training dataset. In our experience, the dataset obtained from the feature extraction was clearly unbalanced toward the FALL label, so obtaining good results for the FALL label does not guarantee a good performance, as the specificity might be really poor. Furthermore, if this approach were to be deployed on a smart wristband or similar device, it would be advisable to use models with a low computational cost.
Therefore, in this study, several different models are proposed: the feed-forward NN, support vector machines (SVM), C5.0 decision trees (DT) and C5.0 rule-based systems (RBS). The NN is the one proposed in the original work, while the DT and RBS are simpler models based on C4.5. Alternatively, SVM is proposed as a state-of-the-art modeling technique that has been applied in FD [12,14]. All of them are implementations included in the caret package for R [41,42].
For each modeling technique, a grid search over the most relevant parameters is performed after the balancing stage, even for the NN, since the model training dataset has changed from that originally published.
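As a rough illustration of this step, an exhaustive grid search can be written as below. The parameter names (`size`, `decay`) and the toy scoring function are hypothetical; the actual experiments used the caret package for R.

```python
from itertools import product

def grid_search(train_eval, param_grid):
    """Exhaustive grid search sketch.

    train_eval trains a model with the given parameters and returns a
    validation score (higher is better). Parameter names are illustrative.
    """
    best_params, best_score = None, float("-inf")
    keys = sorted(param_grid)
    for values in product(*(param_grid[k] for k in keys)):
        params = dict(zip(keys, values))
        score = train_eval(params)
        if score > best_score:
            best_params, best_score = params, score
    return best_params, best_score

# Toy example: a score that peaks at size=7 hidden neurons, decay=0.1
grid = {"size": [3, 5, 7, 9], "decay": [0.0, 0.1, 0.5]}
toy = lambda p: -abs(p["size"] - 7) - abs(p["decay"] - 0.1)
```

Calling `grid_search(toy, grid)` returns the best parameter combination of the toy score, `{"size": 7, "decay": 0.1}`.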

4. Results and Discussion

4.1. Dataset Comparison

The FARSEEING dataset includes up to 15 falls from elderly people using a 3DACC placed on the lower back; for each of them, there might be a brief description of the circumstances of the fall event. This context information is included in Table 2 with the corresponding ID within this research and within the FARSEEING dataset. Furthermore, in Figure 5, the evolution of the acceleration magnitude is plotted for F1 to F8. Although for the majority of the subjects the 3 × g threshold remains valid, some subjects show lower peak values, e.g., F3 in the figure. Furthermore, F9 has a peak value below the Abbate et al. threshold, though it has not been included for the sake of space.
In addition, Figures 6 and 7 depict several fall events from the participants in the UMA Fall dataset. In these figures, Px refers to the corresponding participant in that dataset, and the plots include the 3DACC magnitude (see Equation (10)) data from the sensor on the waist. Most of the participants performed fairly similarly to the hypothesized fall dynamics and thresholds in [33]. Nevertheless, there were also several exceptions; see Figure 6. For instance, Participants 1, 2 and 15 seem to have fallen with apprehension: their movements were clearly slower. For these participants, some tests were consistent with the hypothesis, even with a magnitude value remarkably higher than the 3 × g threshold, while in some other tests they fell gently. In some tests, the participant behaved quite differently, with the evolution of the magnitude of the acceleration having a totally different shape: see the backward fall of Participant 12 included in the figure.
$\hat{a} = \sqrt{a_x^2 + a_y^2 + a_z^2}$
However, the majority of the simulations behaved as expected (refer to Figure 7). As seen in these plots, independently of the fluctuations of the signal due to the different sampling frequencies, the dynamics can be considered similar to those shown in FARSEEING, complying to some degree with the dynamics proposed in [33]. Still, some differences can be observed.
On the one hand, the peak threshold is valid for the majority of the cases, but some of the TS stayed under that limit. This produces false negatives, that is, undetected falls. This is the reason why, in this research, an optimization stage is included in order to tune the peak threshold. The range of candidate thresholds is defined from the smallest peak value found among all the TS from the UMA Fall dataset for the sensor on the wrist: this value was found to be 2.5 × g; therefore, the lower limit was set to 2.0 × g. The upper limit of the range is a relatively large threshold, estimated as 3.5 × g.
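The threshold-driven peak detection can be sketched as follows; the conversion from raw m/s² readings into multiples of g is an assumption about the sensor units, and the default threshold is the SA-tuned value discussed later.

```python
import math

G = 9.81  # 1 g in m/s^2 (assumes raw readings in m/s^2)

def magnitude(ax, ay, az):
    """Acceleration magnitude in multiples of g."""
    return math.sqrt(ax**2 + ay**2 + az**2) / G

def find_peaks(samples, threshold=3.09290):
    """Return the indices of samples whose magnitude exceeds the peak
    threshold (threshold expressed in multiples of g)."""
    return [i for i, (ax, ay, az) in enumerate(samples)
            if magnitude(ax, ay, az) > threshold]
```

For example, a sequence at rest except for one 4 × g impact yields a single peak index.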
On the other hand, the FARSEEING dataset includes some TS that cover walking followed by a sudden fall; the TS obtained for these cases may not fit the time periods mentioned in [33]. Moreover, each subject and participant has a different reaction speed. These two issues must be reconsidered in future work to revisit the definition of the extracted features.
Given the visible differences in behavior between the datasets, and to allow a better understanding of the similarities between simulated and real falls, a comparison between the TS from the FARSEEING and the UMA Fall datasets was performed using the algorithm and thresholds proposed in [33]. Table 3 shows the mean and standard deviation of the extracted feature values for the TS that include a fall event. Using the Shapiro–Wilk normality test, it was found that not all features follow a normal distribution; thus, a Mann–Whitney–Wilcoxon test was used to evaluate whether the features from each dataset belong to the same distribution. These results are included in Table 3 as well.
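The test procedure can be reproduced in outline as below. This is a sketch only: the experiments were run in R, and the per-feature samples passed to the function are placeholders for the real and simulated feature values.

```python
from scipy import stats

def compare_feature(real, simulated, alpha=0.05):
    """Sketch of the comparison step: Shapiro-Wilk to check normality of
    each sample, then Mann-Whitney-Wilcoxon to test whether the two
    samples come from the same distribution."""
    both_normal = (stats.shapiro(real).pvalue > alpha
                   and stats.shapiro(simulated).pvalue > alpha)
    mww = stats.mannwhitneyu(real, simulated, alternative="two-sided")
    return {"both_normal": bool(both_normal),
            "same_distribution": bool(mww.pvalue > alpha)}
```

Clearly shifted samples, mimicking a feature that differs between real and simulated falls, are flagged as coming from different distributions.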
As can be seen, the results clearly show the differences between the simulated and the real falls. This is a very relevant finding, as it is common in the literature to use simulated falls in the evaluation of FD algorithms: there is now evidence against accepting simulated falls as valid. Although these differences might be explained by the fact that the participants in the FARSEEING dataset suffer from impairment illnesses, it is clear from the obtained results that the findings in the next subsections need to be validated in real scenarios, with participants from the target population living independently but keeping a log of any fall that might happen, so that real data can be gathered.
Notwithstanding the differences found so far between the simulated and the real fall datasets, we have no option but to use the simulated fall dataset because, to the best of our knowledge, there are no publicly available real fall datasets using a 3DACC placed on a wrist. Nevertheless, further research will be needed, as explained before.
Moreover, there are some issues in the UMA Fall dataset that need further attention. When people fall, they use their arms to protect themselves and try to grab something to avoid falling. Therefore, there will be much more movement variability, from those who fall without moving their arms to those who frantically try not to fall. Research with sensors worn on the wrist and in real scenarios will be needed.

4.2. Threshold Optimization

The GA and SA algorithms have been run 10 times based on the parameter setting given in Section 3.4. The results obtained have been analyzed according to the fitness function defined to guide the search process. The dataset used in this threshold optimization, following the experiment scheme shown in Figure 4, was the training and testing dataset.
The best fitness value generated by the GA is 0.870 for the peak threshold values 3.09629, 3.09632 and 3.0971. The average fitness over 10 runs is 0.8695, which only slightly deviates from the best run. The best thresholds detected by the GA mostly lie in the range from 3.093 to 3.109, with a median value of 3.09590.
The SA algorithm obtains results similar to the GA. The best fitness value is 0.869, obtained for the peak threshold values 3.0936, 3.0921, 3.0940 and 3.0984. The average fitness over the 10 SA runs is 0.868, which is, as in the case of the GA, near the best value obtained. This indicates a stable performance for both algorithms over the independent runs. Most peak values detected by SA range from 3.078 to 3.093, with a median value of 3.09290. As already emphasized, these fitness values are obtained on the training and testing data. As can be noticed, the GA and SA yield similar results both in terms of the best and average fitness values, as well as the median peak threshold values.
After these optimization stages, and also by the visualization stage performed in the previous subsection, the following thresholds will be compared:
  • th25 = 2.5 × g: the minimum value to detect any peak in the datasets.
  • th3 = 3.0 × g: the original proposal from [33].
  • th309 = 3.09290 × g: the median of the values obtained from the SA optimization. The median value obtained from the GA runs was 3.09590 × g, which is quite similar; for the sake of brevity, only the SA-optimized value is analyzed.

4.3. Model Training and Cross-Validation Results

Recall that the experimental design includes several published datasets, split into training, testing and validation. When splitting, each participant (and all of the TS gathered from them) was assigned either to the training and testing dataset or to the validation dataset. Furthermore, the majority of the available datasets gathered with a wrist-worn 3DACC do not include fall events but ADL, including jumping, simulated seizures or running, among others; this results in a more balanced feature extraction dataset than if only a single dataset were used. Nevertheless, a SMOTE stage was performed to guarantee 40% minority samples in the training and testing dataset.
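A minimal sketch of the SMOTE oversampling step [37] is given below: synthetic minority samples are created by interpolating between a minority sample and one of its k nearest neighbors. The neighbor count `k` and the random seed are illustrative choices, not values reported in the paper.

```python
import numpy as np

def smote(minority, target_count, k=5, seed=0):
    """Grow the minority class to target_count samples by interpolating
    between an existing sample and one of its k nearest neighbors."""
    rng = np.random.default_rng(seed)
    X = np.asarray(minority, dtype=float)
    synth = []
    while len(X) + len(synth) < target_count:
        i = rng.integers(len(X))
        # k nearest neighbors of X[i] (excluding X[i] itself)
        d = np.linalg.norm(X - X[i], axis=1)
        nn = np.argsort(d)[1:k + 1]
        j = rng.choice(nn)
        lam = rng.random()                 # interpolation factor in [0, 1)
        synth.append(X[i] + lam * (X[j] - X[i]))
    return np.vstack([X, synth]) if synth else X
```

To reach 40% minority samples against a majority of size n, `target_count` would be 2n/3; the synthetic samples always stay within the convex hull of the interpolated pairs.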
The best parameter subset was obtained for each pair of threshold and model type using a grid search. The obtained parameter subsets are shown in Table 4 for the feed-forward NN, in Table 5 for SVM and in Table 6 for both the decision tree and the rule-based system based on C5.0.
Both 10-fold cv and 5 × 2 cv were performed; the results for threshold th25 are shown in Figure 8 and in Tables 7 and 8. For thresholds th3 and th309, only the 5 × 2 cv results are included for the sake of readability and space: Table 9 shows the 5 × 2 cv results for th3, and Table 10 shows those for th309. Finally, Figure 9 depicts the 5 × 2 cv boxplots for both th3 and th309.
Recall that these results concern the feature extraction dataset obtained for the corresponding threshold. This means that, at this stage of the experiment, we only assess whether, once a peak is found, it can be correctly labeled as belonging to the FALL or NOT_FALL class. This allows us to choose the most suitable model, if enough evidence is found.
In general, the results for 10-fold cv are better due to the differences in the number of samples contained in the training and testing folds; however, the same behavior of the statistics can be observed. This is the reason why only the results for 5 × 2 cv are shown in the remainder of this subsection.
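For reference, the 5 × 2 cv scheme can be sketched as follows. Note that in the actual experiments the split is made by participant (all TS of a participant stay together), whereas this simplified sketch splits the samples directly.

```python
import random

def five_by_two_cv(samples, train_eval, seed=0):
    """5x2 cross-validation sketch: five random halvings of the data;
    each half serves once as training set and once as testing set,
    giving ten performance estimates.

    train_eval(train, test) trains a model and returns a score."""
    rng = random.Random(seed)
    data = list(samples)
    scores = []
    for _ in range(5):
        rng.shuffle(data)
        half = len(data) // 2
        a, b = data[:half], data[half:]
        scores.append(train_eval(a, b))   # train on a, test on b
        scores.append(train_eval(b, a))   # train on b, test on a
    return scores
```

The ten scores returned are what the boxplots and tables summarize per model and threshold.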
We have statistically compared the different methods for each of the thresholds, using the analysis of variance and Tukey's honest significant differences test, both available in R. With a confidence level of 95%, we found that:
  • For th25, SVM outperforms the other methods in sensitivity. However, for the remaining classifier performance measurements, all the methods are comparable.
  • For th3, all of the methods are comparable except for sensitivity, where NN is outperformed by SVM, DT and RBS.
  • For th309, all the methods are comparable for all the measurements.
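The omnibus part of this comparison can be sketched with a one-way ANOVA as below. The experiments used R; this is an illustrative Python equivalent, and the Tukey HSD post-hoc step that locates the differing pairs is omitted.

```python
from scipy.stats import f_oneway

def methods_comparable(alpha=0.05, **method_scores):
    """One-way ANOVA sketch: given per-method metric samples (e.g.
    sensitivity over the cv folds), test whether any method differs at
    the (1 - alpha) confidence level. Returns True when no significant
    difference is found, i.e. the methods are comparable."""
    stat, p = f_oneway(*method_scores.values())
    return bool(p > alpha)
```

For instance, two methods with near-identical fold sensitivities are reported as comparable, while a clearly worse method is not.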
As a conclusion of this stage, we can state that:
  • The different models are fully comparable, and there is no evidence that one of the combinations outperforms the others.
  • In this scenario of similar behavior, the DT or the RBS might be advisable due to their simplicity and tuning capability. However, SVM is at least as interesting as these two models.
  • No threshold has been observed to be better than the others. The original threshold proposed in [33] lies in between the performances obtained for th25 and th309.
  • The decrease in performance from the 10-fold cv to the 5 × 2 cv suggests that the validation results might be significantly worse; as the participants chosen for validation were never presented to the models, those results better represent the performance in real scenarios if the solution were deployed.
Hence, there is no clear winner from this comparison, neither among the threshold values nor among the models. Thus, the next stage of the experimentation, which evaluates the overall performance of each <threshold, model> pair, will be the decisive phase in this research.

4.4. Final Validation

In this stage, the performance of the whole solution will be evaluated. To do so, for each threshold, a model will be learned using the corresponding best parameter subset and the full training and testing datasets. With these models, the following algorithm is performed:
For each participant included in validation
 For each TS from the current participant
   If a peak is found using the currently chosen threshold
     Extract the features
     Predict the class using the corresponding model
     Update the classifying statistics according to the TS label
   Otherwise
     Update the classifying statistics according to the TS label
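The validation loop above can be sketched in Python as follows; `find_peak`, `extract_features` and `model` are hypothetical stand-ins for the paper's peak detector, feature extractor and trained classifier.

```python
def validate(participants, threshold, model, extract_features, find_peak):
    """Sketch of the final validation loop.

    participants maps each validation participant to a list of
    (ts, label) pairs, with label in {"FALL", "NOT_FALL"}."""
    counts = {"TP": 0, "TN": 0, "FP": 0, "FN": 0}
    for ts_list in participants.values():
        for ts, label in ts_list:
            if find_peak(ts, threshold):
                pred = model(extract_features(ts))
            else:
                pred = "NOT_FALL"          # no peak: no alarm is raised
            if label == "FALL":
                counts["TP" if pred == "FALL" else "FN"] += 1
            else:
                counts["FP" if pred == "FALL" else "TN"] += 1
    return counts
```

The returned counts are exactly what the confusion matrices in Table 11 aggregate per threshold and model.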
The obtained results are included in Table 11 and Table 12. The former shows the confusion matrices for each combination of threshold and model. The latter shows the classifying performance of the whole solution.
In our opinion, the confusion matrices obtained for the th25 threshold, independently of the model, show that (i) the number of false alarms is higher for the NN and RBS and (ii) there are several undetected fall events. The seriousness of an undetected fall event makes th25 the worst candidate.
Increasing the threshold has a clear impact on the number of undetected fall events. However, the number of false alarms varies from one case to another: the false alarm counts and the corresponding specificities suggest that further research is needed to tackle this problem. More importantly, for th309, two of the models detected all the fall events, which suggests that the th309 threshold learned in the optimization stage may be considered the best solution.
Furthermore, the comparison of the two main models (NN and RBS) shows the latter to be more robust and reliable, as its number of false alarms is about one third of that reported for the NN.
In addition, the SVM performance is remarkable considering the small number of false alarms (showing very good specificity indeed), although it was not able to detect all the falls.
However, there is no real evidence about how this solution would perform with elderly people because (i) the intensity level of the ADL is expected to be lower for this population, which favors the solution, (ii) there are no real fall datasets with the sensor placed on a wrist, which works against this solution, and (iii) adapting the thresholds to each individual is not addressed in the current design. Furthermore, there would be differences between the 3DACC TS obtained from healthy elderly people and those obtained from, say, elderly people with impairments. These reflections lead us to conclude that a solution should be independent of the user's intensity level and easier to tune and adapt to the current user. Moreover, gathering data from the elderly population would help in obtaining a more representative dataset. In those cases, such as faints, where there is not enough data, mimicking the falls with human-like flexible mannequins could also help.

5. Conclusions

This research focuses on fall detection for elderly people. Several solutions were studied, and one of them was chosen for deployment and improvement, with the premise of a reduced computational cost because it has to run on wearable devices. A threshold-based peak detection plus an NN stage to label the features extracted from the data has been extended with (i) an optimization stage to find the best threshold candidate, (ii) a SMOTE stage to balance the classes in the feature extraction domain and (iii) alternative classifiers with reduced computation and higher adaptability.
The experimentation includes several published datasets: the FARSEEING dataset, with real falls gathered from a 3DACC placed on the lower back of patients suffering from impairment illnesses; the UMA Fall dataset, with simulated falls and ADL from several sensors and locations; the DaLiAc dataset, with ADL; and the simulated epilepsy dataset, with ADL and simulated seizures; these latter two datasets gathered the data from a 3DACC placed on a wrist. After a comparison of the falls included in FARSEEING and in UMA Fall, it was found that simulated falls might not represent the real movements. Therefore, using simulated data might help in evaluating a solution, but extra research with data from real falls will be needed in order to validate it.
The threshold optimization introduced did not show a clear advantage over either the original threshold proposed in [33] or the manually chosen one. However, SVM, RBS and DT were found comparable in almost all the cases. Moreover, SVM was the modeling technique with the best specificity, producing the smallest number of false alarms.
More research is needed to find a solution that performs independently of the intensity level of the user. Furthermore, the relevance of the wrist orientation in FD must be evaluated. Moreover, a dataset gathered from elderly people using a sensor on the wrist and including real falls is needed. Additionally, using mannequins would enrich the fall detection dataset. Finally, the use of different oracles for different types of falls, such as faints, might be needed to cope with all the possible sources of fall events. Introducing ensembles might also enhance the final results, always keeping in mind the battery life of wearable smartwatches.

Author Contributions

José R. Villar, Samad Barri Khojasteh and Camelia Chira conceived of and designed the experiments. All the authors participated in performing the experiments and in collecting, processing, organizing and analyzing the data. José R. Villar and Camelia Chira wrote the paper. All the authors participated in reading, improving and amending the paper.

Funding

This research has been funded by the Spanish Ministry of Science and Innovation, under Projects MINECO-TIN2014-56967-R and MINECO-TIN2017-84804-R. Furthermore, this research has been funded by the research contract No. 1996/12.07.2017, Internal Competition CICDI-2017 of Technical University of Cluj-Napoca, Romania.

Acknowledgments

The authors thank all participating men and women in the FARSEEING project, as well as all FARSEEING research scientists, study and data managers and clinical and administrative staff who made this study possible.

Conflicts of Interest

The authors declare no conflict of interest. The funding sponsors had no role in the design of the study; in the collection, analyses or interpretation of data; in the writing of the manuscript; nor in the decision to publish the results.

Abbreviations

The following abbreviations are used in this manuscript:
3DACC   Tri-Axial Accelerometer
Acc     Accuracy
ADL     Activity of Daily Living
AI      Artificial Intelligence
BLE     Bluetooth Low Energy
cv      Cross Validation
DT      Decision Tree
FD      Fall Detection
FN      False Negative
FP      False Positive
K       Cohen's Kappa measurement
ML      Machine Learning
NN      Neural Network
RBS     Rule-Based System
Sen     Sensitivity
Spec    Specificity
SVM     Support Vector Machine
TS      Time Series
TP      True Positive
TN      True Negative
Std     Standard Deviation
AAMV    Absolute Acceleration Magnitude Module
ARI     Activity Ratio Index
FFI     Free Fall Index
IDI     Impact Duration Index
MPI     Maximum Peak Index
MVI     Minimum Valley Index
PDI     Peak Duration Index
SCI     Step Count Index

References

  1. Rubenstein, L. Falls in older people: Epidemiology, risk factors and strategies for prevention. Age Ageing 2006, 35, 37–41. [Google Scholar] [CrossRef] [PubMed]
  2. Purch.com. Top Ten Reviews for Fall Detection of Seniors. Available online: http://www.toptenreviews.com/health/senior-care/best-fall-detection-sensors/ (accessed on 25 April 2018).
  3. Bagala, F.; Becker, C.; Cappello, A.; Chiari, L.; Aminian, K.; Hausdorff, J.M.; Zijlstra, W.; Klenk, J. Evaluation of accelerometer-based fall detection algorithms on real-world falls. PLoS ONE 2012, 7, e37062. [Google Scholar] [CrossRef] [PubMed]
  4. Igual, R.; Medrano, C.; Plaza, I. Challenges, issues and trends in fall detection systems. BioMed. Eng. OnLine 2013, 12, 24. [Google Scholar] [CrossRef] [PubMed]
  5. Khan, S.S.; Hoey, J. Review of fall detection techniques: A data availability perspective. Med. Eng. Phys. 2017, 39, 12–22. [Google Scholar] [CrossRef] [PubMed]
  6. Zhang, S.; Wei, Z.; Nie, J.; Huang, L.; Wang, S.; Li, Z. A review on human activity recognition using vision-based method. J. Healthc. Eng. 2017, 2017, 31. [Google Scholar] [CrossRef] [PubMed]
  7. Kumari, P.; Mathew, L.; Syal, P. Increasing trend of wearables and multimodal interface for human activity monitoring: A review. Biosens. Bioelectron. 2017, 90, 298–307. [Google Scholar] [CrossRef] [PubMed]
  8. Sabatini, A.M.; Ligorio, G.; Mannini, A.; Genovese, V.; Pinna, L. Prior-to- and post-impact fall detection using inertial and barometric altimeter measurements. IEEE Trans. Neural Syst. Rehabil. Eng. 2016, 24, 774–783. [Google Scholar] [CrossRef] [PubMed]
  9. Sorvala, A.; Alasaarela, E.; Sorvoja, H.; Myllyla, R. A two-threshold fall detection algorithm for reducing false alarms. In Proceedings of the 6th International Symposium on Medical Information and Communication Technology (ISMICT), La Jolla, CA, USA, 25–29 March 2012. [Google Scholar]
  10. Daher, M.; Diab, A.; Najjar, M.E.B.E.; Khalil, M.A.; Charpillet, F. Elder tracking and fall detection system using smart tiles. IEEE Sens. J. 2017, 17, 469–479. [Google Scholar] [CrossRef]
  11. Bianchi, F.; Redmond, S.J.; Narayanan, M.R.; Cerutti, S.; Lovell, N.H. Barometric pressure and triaxial accelerometry-based falls event detection. IEEE Trans. Neural Syst. Rehabil. Eng. 2010, 18, 619–627. [Google Scholar] [CrossRef] [PubMed]
  12. Zhang, T.; Wang, J.; Xu, L.; Liu, P. Fall detection by wearable sensor and one-class SVM algorithm. In Intelligent Computing in Signal Processing and Pattern Recognition; Lecture notes in control and information systems; Huang, D.S., Li, K., Irwin, G.W., Eds.; Springer: Berlin/Heidelberg, Germany, 2006; Volume 345, pp. 858–863. [Google Scholar]
  13. Hakim, A.; Huq, M.S.; Shanta, S.; Ibrahim, B. Smartphone based data mining for fall detection: Analysis and design. Procedia Comput. Sci. 2017, 105, 46–51. [Google Scholar] [CrossRef]
  14. Wu, F.; Zhao, H.; Zhao, Y.; Zhong, H. Development of a wearable-sensor-based fall detection system. Int. J. Telemed. Appl. 2015, 2015, 11. [Google Scholar] [CrossRef] [PubMed]
  15. Bourke, A.; O’Brien, J.; Lyons, G. Evaluation of a threshold-based triaxial accelerometer fall detection algorithm. Gait Posture 2007, 26, 194–199. [Google Scholar] [CrossRef] [PubMed]
  16. Huynh, Q.T.; Nguyen, U.D.; Irazabal, L.B.; Ghassemian, N.; Tran, B.Q. Optimization of an accelerometer and gyroscope-based fall detection algorithm. J. Sens. 2015, 2015, 8. [Google Scholar] [CrossRef]
  17. Kangas, M.; Konttila, A.; Lindgren, P.; Winblad, I.; Jämsaä, T. Comparison of low-complexity fall detection algorithms for body attached accelerometers. Gait Posture 2008, 28, 285–291. [Google Scholar] [CrossRef] [PubMed]
  18. Chaudhuri, S.; Thompson, H.; Demiris, G. Fall detection devices and their use with older adults: A systematic review. J. Geriatr. Phys. Ther. 2014, 37, 178–196. [Google Scholar] [CrossRef] [PubMed]
  19. Jatesiktat, P.; Ang, W.T. An elderly fall detection using a wrist-worn accelerometer and barometer. In Proceedings of the 39th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), Seogwipo, South Korea, 11–15 July 2017; pp. 125–130. [Google Scholar]
  20. Abbate, S.; Avvenuti, M.; Corsini, P.; Light, J.; Vecchio, A. Monitoring of human movements for fall detection and activities recognition in elderly care using wireless sensor network: A survey. In Wireless Sensor Networks: Application—Centric Design; Intech: London, UK, 2010; p. 22. [Google Scholar]
  21. Bourke, A.; van de Ven, P.; Gamble, M.; O’Connor, R.; Murphy, K.; Bogan, E.; McQuade, E.; Finucane, P.; Olaighin, G.; Nelson, J. Evaluation of waist-mounted tri-axial accelerometer based fall-detection algorithms during scripted and continuous unscripted activities. J. Biomech. 2010, 43, 3051–3057. [Google Scholar] [CrossRef] [PubMed]
  22. Bourke, A.K.; Klenk, J.; Schwickert, L.; Aminian, K.; Ihlen, E.A.F.; Mellone, S.; Helbostad, J.L.; Chiari, L.; Becker, C. Fall detection algorithms for real-world falls harvested from lumbar sensors in the elderly population: A machine learning approach. In Proceedings of the 38th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), Orlando, FL, USA, 16–20 August 2016; pp. 3712–3715. [Google Scholar]
  23. Delahoz, Y.S.; Labrador, M.A. Survey on fall detection and fall prevention using wearable and external sensors. Sensors 2014, 14, 19806–19842. [Google Scholar] [CrossRef] [PubMed]
  24. Medrano, C.; Plaza, I.; Igual, R.; Sánchez, Á.; Castro, M. The effect of personalization on smartphone-based fall detectors. Sensors 2016, 16, 117. [Google Scholar] [CrossRef] [PubMed]
  25. Igual, R.; Medrano, C.; Plaza, I. A comparison of public datasets for acceleration-based fall detection. Med. Eng. Phys. 2015, 37, 870–878. [Google Scholar] [CrossRef] [PubMed]
  26. Ngu, A.; Wu, Y.; Zare, H.; Polican, A.; Yarbrough, B.; Yao, L. Fall detection using smartwatch sensor data with accessor architecture. In Lecture Notes in Computer Science, Proceedings of the International Conference on Smart Health ICSH, Hong Kong, China, 26–27 June 2017; Chen, H., Zeng, D.D., Karahanna, E., Bardhan, I., Eds.; Springer: Berlin/Heidelberg, Germany, 2017; Volume 10347, pp. 81–93. [Google Scholar]
  27. Kostopoulos, P.; Nunes, T.; Salvi, K.; Deriaz, M.; Torrent, J. F2D: A fall detection system tested with real data from daily life of elderly people. In Proceedings of the 17th International Conference on E-health Networking, Application Services (HealthCom), Boston, MA, USA, 14–17 October 2015; pp. 397–403. [Google Scholar]
  28. Tasoulis, S.K.; Doukas, C.N.; Maglogiannis, I.; Plagianakos, V.P. Statistical data mining of streaming motion data for fall detection in assistive environments. In Proceedings of the 2011 Annual International Conference of the IEEE Engineering in Medicine and Biology Society, Boston, MA, USA, 30 August–3 September 2011; pp. 3720–3723. [Google Scholar]
  29. Deutsch, M.; Burgsteiner, H. A smartwatch-based assistance system for the elderly performing fall detection, unusual inactivity recognition and medication reminding. In Studies in Health Technology and Informatics; Health Informatics Meets eHealth; IOS Press: Amsterdam, The Netherlands, 2016; Volume 223, pp. 259–266. [Google Scholar]
  30. Casilari, E.; Oviedo-Jiménez, M.A. Automatic fall detection system based on the combined use of a smartphone and a smartwatch. PLoS ONE 2015, 10, e0140929. [Google Scholar] [CrossRef] [PubMed]
  31. Vilarinho, T.; Farshchian, B.; Bajer, D.G.; Dahl, O.H.; Egge, I.; Hegdal, S.S.; Lønes, A.; Slettevold, J.N.; Weggersen, S.M. A Combined Smartphone and Smartwatch Fall Detection System. In Proceedings of the IEEE International Conference on Computer and Information Technology; Ubiquitous Computing and Communications; Dependable, Autonomic and Secure Computing; Pervasive Intelligence and Computing, Liverpool, UK, 26–28 October 2015; pp. 1443–1448. [Google Scholar]
  32. Gjoreski, H.; Bizjak, J.; Gams, M. Using Smartwatch as Telecare and Fall Detection Device. In Proceedings of the 12th International Conference on Intelligent Environments (IE), London, UK, 14–16 September 2016; pp. 242–245. [Google Scholar]
  33. Abbate, S.; Avvenuti, M.; Bonatesta, F.; Cola, G.; Corsini, P.; Vecchio, A. A smartphone-based fall detection system. Pervasive Mob. Comput. 2012, 8, 883–899. [Google Scholar] [CrossRef]
  34. Casilari, E.; Santoyo-Ramón, J.A.; Cano-García, J.M. UMAFall: A multisensor dataset for the research on automatic fall detection. Procedia Comput. Sci. 2017, 110, 32–39. [Google Scholar] [CrossRef]
  35. Leutheuser, H.; Schuldhaus, D.; Eskofier, B.M. Hierarchical, multi-sensor based classification of daily life activities: Comparison with state-of-the-art algorithms using a benchmark dataset. PLoS ONE 2013, 8, e75196. [Google Scholar] [CrossRef] [PubMed]
  36. Villar, J.R.; Vergara, P.; Menéndez, M.; de la Cal, E.; González, V.M.; Sedano, J. Generalized models for the classification of abnormal movements in daily life and its applicability to epilepsy convulsion recognition. Int. J. Neural Syst. 2016, 26, 21. [Google Scholar] [CrossRef] [PubMed]
  37. Chawla, N.V.; Bowyer, K.W.; Hall, L.O.; Kegelmeyer, W.P. SMOTE: Synthetic minority over-sampling technique. J. Artific. Intell. Res. 2002, 16, 321–357. [Google Scholar]
  38. Vergara, P.M.; de la Cal, E.; Villar, J.R.; González, V.M.; Sedano, J. An IoT platform for epilepsy monitoring and supervising. J. Sens. 2017, 2017, 18. [Google Scholar] [CrossRef]
  39. Villar, J.R.; González, S.; Sedano, J.; Chira, C.; Trejo-Gabriel-Galán, J.M. Improving human activity recognition and its application in early stroke diagnosis. Int. J. Neural Syst. 2015, 25, 1450036–1450055. [Google Scholar] [CrossRef] [PubMed]
  40. Han, Y.; Park, K.; Hong, J.; Ulamin, N.; Lee, Y.K. Distance-constraint k-nearest neighbor searching in mobile sensor networks. Sensors 2015, 15, 18209–18228. [Google Scholar] [CrossRef] [PubMed]
  41. R Development Core Team. R: A Language and Environment for Statistical Computing; R Foundation for Statistical Computing: Vienna, Austria, 2008; ISBN 3-900051-07-0. [Google Scholar]
  42. Kuhn, M. The Caret Package. 2017. Available online: http://topepo.github.io/caret/index.html (accessed on 15 January 2018).
Figure 1. Block diagram of the solution.
Figure 2. Graph elaborated from [33], showing the evolution of the magnitude of the acceleration in multiples of g. Analyzing the signal at time stamp t, the peak condition described in the text must be found in order to detect a fall. The X-axis represents the time, and each mark corresponds to 500 ms.
Figure 3. Cross-validation scheme. From the available datasets ( D x ), some participants ( P y ) (and all of their TS) are preserved for validation purposes. The remaining participants and their TS are all conforming to the training and validation dataset.
Figure 4. The machine learning process within the cross-validation scheme. The training and testing dataset is used for (i) threshold optimization and (ii) peak detection and feature extraction. The labeled dataset is then used for the machine learning process to find the best modeling option. The best option is then evaluated with the validation dataset once processed so the real performance of the system can be obtained.
Figure 5. FARSEEING plots of some falls depicting the magnitude of the acceleration during a fall; 400 samples in 4 s. From left to right and top to bottom: F1, F2, F3, F4, F5, F6, F7 and F8. F3 shows a peak below 3 × g. In all the cases, the sensor is located on the lower back. (a) F1: backward fall, 3DACC [ 0.0 , 3.5 ] × g. (b) F2: forward fall, 3DACC [ 0.0 , 3.5 ] × g. (c) F3: forward fall, 3DACC [ 0.0 , 3.0 ] × g. (d) F4: forward fall, 3DACC [ 0.0 , 5.2 ] × g. (e) F5: forward fall, 3DACC [ 0.0 , 5.2 ] × g. (f) F6: backward fall, 3DACC [ 0.0 , 3.6 ] × g. (g) F7: backward fall, 3DACC [ 0.0 , 3.6 ] × g. (h) F8: unknown direction, 3DACC [ 0.0 , 3.3 ] × g.
Figure 5. FARSEEING plots of some falls depicting the magnitude of the acceleration during a fall; 400 samples in 4 s. From left to right and top to bottom: F1, F2, F3, F4, F5, F6, F7 and F8. F3 shows a peak below 3 × g. In all the cases, the sensor is located on the lower back. (a) F1: backward fall, 3DACC [ 0.0 , 3.5 ] × g. (b) F2: forward fall, 3DACC [ 0.0 , 3.5 ] × g. (c) F3: forward fall, 3DACC [ 0.0 , 3.0 ] × g. (d) F4: forward fall, 3DACC [ 0.0 , 5.2 ] × g. (e) F5: forward fall, 3DACC [ 0.0 , 5.2 ] × g. (f) F6: backward fall, 3DACC [ 0.0 , 3.6 ] × g. (g) F7: backward fall, 3DACC [ 0.0 , 3.6 ] × g. (h) F8: unknown direction, 3DACC [ 0.0 , 3.3 ] × g.
Figure 6. UMA Fall plots for some falls that behave differently from what was expected. The plots depict the magnitude of the acceleration during a fall in a period of 4 s (80 samples). The data come from the 3DACC sensor on the waist. (a) P1: forward fall, 3DACC [0.0, 3.2] × g. (b) P1: backward fall, 3DACC [0.0, 5.0] × g. (c) P12: lateral fall, 3DACC [0.0, 2.8] × g. (d) P12: backward fall, 3DACC [0.0, 2.0] × g. (e) P9: lateral fall, 3DACC [0.0, 6.0] × g. (f) P9: forward fall, 3DACC [0.0, 3.2] × g. (g) P15: forward fall, 3DACC [0.0, 2.8] × g. (h) P15: backward fall, 3DACC [0.0, 4.5] × g.
Figure 7. UMA Fall plots for some fall events from participants behaving as expected. The plots depict the magnitude of the acceleration during a fall in a period of 4 s (80 samples). The data come from the 3DACC sensor on the waist. (a) P1: backward fall, 3DACC [0.0, 4.0] × g. (b) P1: backward fall, 3DACC [0.0, 8.0] × g. (c) P2: lateral fall, 3DACC [0.0, 5.5] × g. (d) P2: forward fall, 3DACC [0.0, 4.1] × g. (e) P5: backward fall, 3DACC [0.0, 5.0] × g. (f) P5: backward fall, 3DACC [0.0, 3.5] × g. (g) P12: forward fall, 3DACC [0.0, 3.8] × g. (h) P15: lateral fall, 3DACC [0.0, 3.5] × g.
Figure 8. Box plots of the different statistics for the three models when the threshold is set to 2.5 × g. The prefixes N_, SVM_, DT_ and RBS_ stand for the NN, SVM, Decision Trees C5.0 and Rule-Based System C5.0 models. The statistics are: Accuracy (Acc), Kappa factor (Kp), Sensitivity (Se), Specificity (Sp), Precision (Pr) and the Geometric mean (G), all of them computed using Equations (1) to (8). (a) Ten-fold cv; (b) 5 × 2 cv.
Figure 9. Box plots of the different statistics for the three models when the threshold is set to 3.0 × g (upper part) and 3.09290 × g (lower part) with the 5 × 2 cv. The prefixes N_, SVM_, DT_ and RBS_ stand for the NN, SVM, Decision Trees C5.0 and Rule-Based System C5.0 models. The statistics are: Accuracy (Acc), Kappa factor (Kp), Sensitivity (Se), Specificity (Sp), Precision (Pr) and the geometric mean (G), all of them computed using Equations (1) to (8). (a) th = 3.0 × g. (b) th = 3.09290 × g.
Table 1. Descriptions of the different datasets used in this research. Columns NP, NF and NR stand for the Number of Participants, the Number of Falls in the dataset and the Number of goes for each ADL, respectively. A question mark (?) means that it is not a regular value. The sampling frequency used in gathering the dataset is stated in Hz in the frequency column (Fqcy).

| Dataset | NP | NR | NF | Fqcy | Description |
|---|---|---|---|---|---|
| UMAFall [34] | 17 | ? | 208 | 20 | Includes forward, backward and lateral falls, running, hopping, walking and sitting. Not all the participants performed every type of activity, nor the same number of goes. Sensors on the wrist, waist, ankle, chest and trouser pocket. |
| DaLiac [35] | 19 | 1 | 0 | 204.8 | Sitting, standing, lying, vacuuming, washing dishes, sweeping, walking, up and down stairs, using a treadmill, cycling and rope jumping. Sensors on the wrist, waist, ankle and chest. |
| Epilepsy [36] | 6 | 10 | 0 | 16 | Walking at different paces, running, sawing and simulating epileptic partial tonic-clonic seizures. Sensor on the dominant wrist. |
| FARSEEING [3] | 15 | ? | 22 | 100 | 22 real falls recorded from 15 elderly people; each of the files might include a description: context of the fall event, where the sensor was located, etc. Sensor located either on the lower back or on the thigh. |
Table 2. FARSEEING dataset: Context information of each fall event for subjects wearing a 3DACC placed on the lower back. ID and FRSID are the Identification within this study and that given in the dataset, respectively. The context is extracted from the FARSEEING documentation [3].

| ID | FRSID | Context |
|---|---|---|
| F1 | 17744725-01 | Female. After walking with a wheeled walker, stood behind a chair, then fell backwards on the floor. Standing, backward fall. |
| F2 | 38243026-05 | Female. Fell forward while bending down to fix a shoelace. Bending down, forward fall. |
| F3 | 42990421-01 | Male. Wanted to pick up an object from the ground. Bending down, forward fall. |
| F4 | 42990421-02 | Male. Got up from the chair and wanted to walk. Transferring from sitting to standing, forward fall. |
| F5 | 42990421-03a | Male. When trying to move to the side, the wheeled walker fell forward. Freezing of movement. Walking, side-forward fall. |
| F6 | 72858619-01 | Female. Went to the table in the dining room. Fell down backwards on the buttocks. Standing, backward fall. |
| F7 | 72858619-02 | Female. Person held on the wall, then fell down backwards on the buttocks. Standing, backward fall. |
| F8 | 72858619-02 | Male. Fell during walking. Walking, unknown fall direction. |
| F9 | 74827807-07 | Male. Walking, then fell down in front of the entrance of the house. Walking, unknown fall direction. |
| F10 | 79761947-03 | Female. Changing the hip protector, fell backwards on the ground and hit the toilet. Standing, backward fall. |
| F11 | 91943076-01 | Female. Changing the hip protector, fell backwards on the ground. Walking, unknown fall direction. |
| F12 | 91943076-02 | Female. Walking, then fell down opening the door in the entrance hall. Walking, unknown fall direction. |
| F13 | 96201346-01 | Female. Walking to the bathroom, stopped with freezing and fell from standing position. Walking, left-backward fall. |
| F14 | 96201346-02 | Female. Standing at the wardrobe, wanted to walk backwards and then fell on buttocks. Standing, backward fall. |
| F15 | 96201346-05 | Female. Walking backwards from the wash basin, then fell backwards. Standing, backward fall. |
Table 3. Comparison between FARSEEING (sensor on the lower back) and UMA Fall (sensor on the waist) datasets: the mean and standard deviation (Std) of the features computed for all the TS that correspond to a fall. The last row shows the p-values from the Mann-Whitney-Wilcoxon test (MWW test), showing that the features from FARSEEING and UMA Fall do not follow the same probability distribution.

| Mean | AAMV | IDI | MPI | MVI | PDI | ARI | FFI | SCI |
|---|---|---|---|---|---|---|---|---|
| FARSEEING | 0.1592 | 1.2225 | 4.2345 | 0.6363 | 1.2160 | 0.5933 | 1.5660 | 4.7170 |
| UMA Mob | 0.5155 | 1.4619 | 5.9534 | 0.4943 | 1.5911 | 0.3475 | 1.3607 | 2.3730 |

| Std | AAMV | IDI | MPI | MVI | PDI | ARI | FFI | SCI |
|---|---|---|---|---|---|---|---|---|
| FARSEEING | 0.1701 | 2.5817 | 2.6419 | 0.3002 | 2.5817 | 0.3376 | 0.8590 | 3.8098 |
| UMA Mob | 0.3889 | 2.002 | 2.928 | 0.2358 | 1.4173 | 0.3142 | 0.8768 | 1.5238 |

| MWW test | AAMV | IDI | MPI | MVI | PDI | ARI | FFI | SCI |
|---|---|---|---|---|---|---|---|---|
| p-value | 2.20 × 10^-16 | 1.68 × 10^-6 | 1.54 × 10^-7 | 0.0009 | 1.04 × 10^-10 | 4.53 × 10^-6 | 0.0022 | 4.70 × 10^-5 |
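The p-values in the last row of Table 3 come from a Mann-Whitney-Wilcoxon test comparing each feature across the two datasets. The usual library route is `scipy.stats.mannwhitneyu`; as a self-contained illustration, a pure-Python version using the normal approximation (reasonable for the sample sizes involved, and ignoring the tie correction for the variance) might look like this:

```python
import math

def mann_whitney_u(x, y):
    """Two-sided Mann-Whitney-Wilcoxon test (normal approximation).

    Returns (U, p) for samples x and y. Ties get average ranks; the
    tie correction for the variance is omitted for brevity.
    """
    n1, n2 = len(x), len(y)
    pooled = [(v, "x") for v in x] + [(v, "y") for v in y]
    pooled.sort(key=lambda t: t[0])
    # Assign average ranks to tied values (ranks are 1-based).
    ranks = [0.0] * len(pooled)
    i = 0
    while i < len(pooled):
        j = i
        while j + 1 < len(pooled) and pooled[j + 1][0] == pooled[i][0]:
            j += 1
        avg = (i + j) / 2 + 1
        for k in range(i, j + 1):
            ranks[k] = avg
        i = j + 1
    r1 = sum(r for r, (_, g) in zip(ranks, pooled) if g == "x")
    u1 = r1 - n1 * (n1 + 1) / 2
    mu = n1 * n2 / 2
    sigma = math.sqrt(n1 * n2 * (n1 + n2 + 1) / 12)
    z = (u1 - mu) / sigma
    p = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return u1, p
```

Two clearly separated samples yield a p-value far below any usual significance level, which is the situation Table 3 reports for all eight features.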
Table 4. Best parameter set found for the feed forward NN and for each threshold.

| Threshold | Size | Decay | Max. Iter. | Abs. Tol. | Rel. Tol. |
|---|---|---|---|---|---|
| 2.5 | 20 | 1 × 10^-6 | 500 | 4 × 10^-6 | 1 × 10^-10 |
| 3.0 | 20 | 1 × 10^-4 | 1000 | 4 × 10^-6 | 1 × 10^-9 |
| 3.09290 | 20 | 1 × 10^-3 | 1000 | 4 × 10^-6 | 1 × 10^-9 |
Table 5. Best parameter set found for the SVM and for each threshold.

| Threshold | Sigma | C |
|---|---|---|
| 2.5 | 0.2 | 4.5 |
| 3.0 | 0.2 | 0.7 |
| 3.09290 | 0.20 | 1.0 |
Table 6. Best parameter set found for the Decision Tree (DT) and Rule-Based System (RBS) based on C5.0, for each threshold.

| Threshold | Model | Winnow | Trials | CF | Bands | Fuzzy Threshold |
|---|---|---|---|---|---|---|
| 2.5 | DT | FALSE | 15 | 0.5 | 0 | FALSE |
| 2.5 | RBS | FALSE | 20 | 0.5 | 0 | FALSE |
| 3.0 | DT | FALSE | 20 | 0.05 | 0 | FALSE |
| 3.0 | RBS | FALSE | 15 | 0.20 | 0 | FALSE |
| 3.09290 | DT | TRUE | 10 | 0.5 | 0 | FALSE |
| 3.09290 | RBS | TRUE | 5 | 0.05 | 0 | TRUE |
Table 7. Results obtained from the 10-fold cv when the threshold is set to 2.5 × g: the different statistics are the Accuracy (Acc), Kappa factor (Kp), Sensitivity (Se), Specificity (Sp), Precision (Pr) and the Geometric mean (G), all of them computed using Equations (1) to (8). The models are feed forward NN, Support Vector Machine (SVM), decision trees learned with C5.0 (DT) and Rule-Based Systems learned with C5.0 (RBS).

| Fold | NN Acc | NN Kp | NN Se | NN Sp | NN Pr | NN G | DT Acc | DT Kp | DT Se | DT Sp | DT Pr | DT G |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 1 | 0.9091 | 0.81564 | 0.92857 | 0.89474 | 0.86667 | 0.89709 | 0.78788 | 0.5777 | 0.85714 | 0.7368 | 0.70588 | 0.77784 |
| 2 | 0.8611 | 0.71154 | 0.86364 | 0.85714 | 0.90476 | 0.88396 | 0.88889 | 0.7534 | 1.00000 | 0.7143 | 0.84615 | 0.91987 |
| 3 | 0.9211 | 0.84034 | 0.83333 | 1.00000 | 1.00000 | 0.91287 | 0.94737 | 0.8939 | 0.88889 | 1.0000 | 1.00000 | 0.94281 |
| 4 | 0.9032 | 0.80503 | 1.00000 | 0.80000 | 0.84211 | 0.91766 | 0.87097 | 0.7406 | 0.93750 | 0.8000 | 0.83333 | 0.88388 |
| 5 | 0.9375 | 0.85520 | 0.91304 | 1.00000 | 1.00000 | 0.95553 | 0.81250 | 0.5915 | 0.78261 | 0.8889 | 0.94737 | 0.86106 |
| 6 | 0.9118 | 0.82353 | 0.88235 | 0.94118 | 0.93750 | 0.90951 | 0.94118 | 0.8824 | 0.94118 | 0.9412 | 0.94118 | 0.94118 |
| 7 | 0.8387 | 0.68041 | 0.75000 | 1.00000 | 1.00000 | 0.86603 | 0.80645 | 0.6092 | 0.75000 | 0.9091 | 0.93750 | 0.83853 |
| 8 | 0.8438 | 0.68750 | 0.78947 | 0.92308 | 0.93750 | 0.86031 | 0.87500 | 0.7470 | 0.84211 | 0.9231 | 0.94118 | 0.89026 |
| 9 | 0.9714 | 0.93783 | 0.92308 | 1.00000 | 1.00000 | 0.96077 | 0.91429 | 0.8073 | 0.76923 | 1.0000 | 1.00000 | 0.87706 |
| 10 | 0.8529 | 0.68401 | 0.86364 | 0.83333 | 0.90476 | 0.88396 | 0.88235 | 0.7323 | 0.95455 | 0.7500 | 0.87500 | 0.91391 |
| mean | 0.8951 | 0.78410 | 0.87471 | 0.92495 | 0.93933 | 0.90477 | 0.87269 | 0.7335 | 0.87232 | 0.8663 | 0.90276 | 0.88464 |
| median | 0.9062 | 0.81034 | 0.87299 | 0.93213 | 0.93750 | 0.90330 | 0.87868 | 0.7438 | 0.87302 | 0.8990 | 0.93934 | 0.88707 |
| std | 0.0442 | 0.08832 | 0.07245 | 0.07624 | 0.05949 | 0.03394 | 0.05534 | 0.1125 | 0.08635 | 0.1079 | 0.08982 | 0.05039 |

| Fold | RBS Acc | RBS Kp | RBS Se | RBS Sp | RBS Pr | RBS G | SVM Acc | SVM Kp | SVM Se | SVM Sp | SVM Pr | SVM G |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 1 | 0.9091 | 0.8121 | 0.85714 | 0.9474 | 0.92308 | 0.88950 | 0.96970 | 0.93738 | 0.92857 | 1.0000 | 1.00000 | 0.96362 |
| 2 | 0.9444 | 0.8831 | 0.95455 | 0.9286 | 0.95455 | 0.95455 | 0.88889 | 0.77215 | 0.86364 | 0.9286 | 0.95000 | 0.90579 |
| 3 | 0.9474 | 0.8939 | 0.88889 | 1.00000 | 1.00000 | 0.94281 | 0.97368 | 0.94708 | 0.94444 | 1.0000 | 1.00000 | 0.97183 |
| 4 | 0.9032 | 0.8058 | 0.93750 | 0.8667 | 0.88235 | 0.90951 | 0.90323 | 0.80585 | 0.93750 | 0.8667 | 0.88235 | 0.90951 |
| 5 | 0.8750 | 0.7277 | 0.82609 | 1.00000 | 1.00000 | 0.90889 | 0.96875 | 0.92523 | 0.95652 | 1.0000 | 1.00000 | 0.97802 |
| 6 | 0.9412 | 0.8824 | 0.94118 | 0.9412 | 0.94118 | 0.94118 | 0.94118 | 0.88235 | 0.94118 | 0.9412 | 0.94118 | 0.94118 |
| 7 | 0.8710 | 0.7293 | 0.85000 | 0.9091 | 0.94444 | 0.89598 | 0.87097 | 0.73950 | 0.80000 | 1.0000 | 1.00000 | 0.89443 |
| 8 | 0.8125 | 0.6206 | 0.78947 | 0.8462 | 0.88235 | 0.83462 | 0.93750 | 0.87352 | 0.89474 | 1.0000 | 1.00000 | 0.94591 |
| 9 | 0.9143 | 0.8135 | 0.84615 | 0.9545 | 0.91667 | 0.88070 | 0.91429 | 0.80734 | 0.76923 | 1.0000 | 1.00000 | 0.87706 |
| 10 | 0.8824 | 0.7323 | 0.95455 | 0.7500 | 0.87500 | 0.91391 | 0.85294 | 0.68401 | 0.86364 | 0.8333 | 0.90476 | 0.88396 |
| mean | 0.9000 | 0.7901 | 0.88455 | 0.9144 | 0.93196 | 0.90716 | 0.92211 | 0.83744 | 0.88995 | 0.9570 | 0.96783 | 0.92713 |
| median | 0.9062 | 0.8090 | 0.87302 | 0.9349 | 0.93213 | 0.90920 | 0.92589 | 0.84043 | 0.91165 | 1.0000 | 1.00000 | 0.92534 |
| std | 0.0417 | 0.0875 | 0.05935 | 0.0762 | 0.04532 | 0.03516 | 0.04297 | 0.08965 | 0.06485 | 0.0629 | 0.04536 | 0.03753 |
Table 8. Results obtained from the 5 × 2 cv when the threshold is set to 2.5 × g: the different statistics are the Accuracy (Acc), Kappa factor (Kp), Sensitivity (Se), Specificity (Sp), Precision (Pr) and the geometric mean (G), all of them computed using Equations (1) to (8). The models are feed forward NN, Support Vector Machine (SVM), decision trees learned with C5.0 (DT) and Rule-Based Systems learned with C5.0 (RBS).

| Fold | NN Acc | NN Kp | NN Se | NN Sp | NN Pr | NN G | DT Acc | DT Kp | DT Se | DT Sp | DT Pr | DT G |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 1 | 0.8966 | 0.7930 | 0.8989 | 0.8941 | 0.8989 | 0.8989 | 0.9310 | 0.8621 | 0.9101 | 0.9529 | 0.9529 | 0.9313 |
| 2 | 0.8395 | 0.6763 | 0.8105 | 0.8806 | 0.9059 | 0.8569 | 0.8580 | 0.7105 | 0.8526 | 0.8657 | 0.9000 | 0.8760 |
| 3 | 0.8757 | 0.7508 | 0.8587 | 0.8961 | 0.9081 | 0.8830 | 0.8876 | 0.7736 | 0.8913 | 0.8831 | 0.9011 | 0.8962 |
| 4 | 0.8503 | 0.7022 | 0.7935 | 0.9200 | 0.9241 | 0.8563 | 0.8862 | 0.7704 | 0.8913 | 0.8800 | 0.9011 | 0.8962 |
| 5 | 0.8970 | 0.7935 | 0.8602 | 0.9444 | 0.9524 | 0.9051 | 0.9091 | 0.8189 | 0.8495 | 0.9861 | 0.9875 | 0.9159 |
| 6 | 0.8421 | 0.6855 | 0.7912 | 0.9000 | 0.9000 | 0.8439 | 0.8421 | 0.6846 | 0.8132 | 0.8750 | 0.8810 | 0.8464 |
| 7 | 0.8084 | 0.6211 | 0.6667 | 0.9625 | 0.9508 | 0.7962 | 0.8383 | 0.6785 | 0.7586 | 0.9250 | 0.9167 | 0.8339 |
| 8 | 0.9290 | 0.8548 | 0.9381 | 0.9167 | 0.9381 | 0.9381 | 0.9112 | 0.8188 | 0.9175 | 0.9028 | 0.9271 | 0.9223 |
| 9 | 0.8862 | 0.7724 | 0.8652 | 0.9103 | 0.9167 | 0.8906 | 0.9162 | 0.8324 | 0.8876 | 0.9487 | 0.9518 | 0.9192 |
| 10 | 0.8935 | 0.7881 | 0.8316 | 0.9730 | 0.9753 | 0.9006 | 0.9053 | 0.8088 | 0.8947 | 0.9189 | 0.9341 | 0.9142 |
| mean | 0.8718 | 0.7438 | 0.8315 | 0.9198 | 0.9270 | 0.8770 | 0.8885 | 0.7759 | 0.8667 | 0.9138 | 0.9253 | 0.8952 |
| median | 0.8810 | 0.7616 | 0.8451 | 0.9135 | 0.9204 | 0.8868 | 0.8965 | 0.7912 | 0.8895 | 0.9109 | 0.9219 | 0.9052 |
| std | 0.0359 | 0.0706 | 0.0740 | 0.0308 | 0.0261 | 0.0398 | 0.0323 | 0.0646 | 0.0494 | 0.0397 | 0.0321 | 0.0332 |

| Fold | RBS Acc | RBS Kp | RBS Se | RBS Sp | RBS Pr | RBS G | SVM Acc | SVM Kp | SVM Se | SVM Sp | SVM Pr | SVM G |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 1 | 0.9195 | 0.8392 | 0.8989 | 0.9412 | 0.9412 | 0.9198 | 0.9253 | 0.8509 | 0.8764 | 0.9765 | 0.9750 | 0.9244 |
| 2 | 0.8210 | 0.6381 | 0.8000 | 0.8508 | 0.8837 | 0.8408 | 0.8827 | 0.7619 | 0.8632 | 0.9105 | 0.9318 | 0.8967 |
| 3 | 0.8698 | 0.7381 | 0.8696 | 0.8701 | 0.8889 | 0.8792 | 0.9172 | 0.8341 | 0.8913 | 0.9481 | 0.9535 | 0.9219 |
| 4 | 0.8982 | 0.7945 | 0.9022 | 0.8933 | 0.9121 | 0.9071 | 0.8982 | 0.7970 | 0.8478 | 0.9600 | 0.9630 | 0.9036 |
| 5 | 0.8485 | 0.7000 | 0.7742 | 0.9444 | 0.9474 | 0.8564 | 0.9212 | 0.8416 | 0.8925 | 0.9583 | 0.9651 | 0.9281 |
| 6 | 0.8304 | 0.6622 | 0.7802 | 0.8875 | 0.8875 | 0.8321 | 0.8421 | 0.6836 | 0.8352 | 0.8500 | 0.8636 | 0.8493 |
| 7 | 0.8204 | 0.6430 | 0.7356 | 0.9125 | 0.9014 | 0.8143 | 0.8503 | 0.7026 | 0.7586 | 0.9500 | 0.9429 | 0.8458 |
| 8 | 0.9112 | 0.8214 | 0.8763 | 0.9583 | 0.9659 | 0.9200 | 0.9468 | 0.8909 | 0.9588 | 0.9306 | 0.9490 | 0.9539 |
| 9 | 0.9042 | 0.8082 | 0.8876 | 0.9231 | 0.9294 | 0.9083 | 0.8982 | 0.7963 | 0.8764 | 0.9231 | 0.9286 | 0.9021 |
| 10 | 0.8994 | 0.7984 | 0.8632 | 0.9460 | 0.9535 | 0.9072 | 0.8994 | 0.7996 | 0.8421 | 0.9730 | 0.9756 | 0.9064 |
| mean | 0.8723 | 0.7443 | 0.8388 | 0.9127 | 0.9211 | 0.8785 | 0.8981 | 0.7958 | 0.8642 | 0.9380 | 0.9448 | 0.9032 |
| median | 0.8840 | 0.7663 | 0.8664 | 0.9178 | 0.9208 | 0.8932 | 0.8988 | 0.7983 | 0.8698 | 0.9490 | 0.9512 | 0.9050 |
| std | 0.0392 | 0.0780 | 0.0603 | 0.0362 | 0.0303 | 0.0396 | 0.0328 | 0.0650 | 0.0511 | 0.0375 | 0.0328 | 0.0337 |
Table 9. Results obtained from the 5 × 2 cv when the threshold is set to 3.0 × g: the different statistics are the Accuracy (Acc), Kappa factor (Kp), Sensitivity (Se), Specificity (Sp), Precision (Pr) and the geometric mean (G), all of them computed using Equations (1) to (8). The models are feed forward NN, Support Vector Machines (SVM), Decision Trees learned with C5.0 (DT) and Rule-Based Systems learned with C5.0 (RBS).

| Fold | NN Acc | NN Kp | NN Se | NN Sp | NN Pr | NN G | DT Acc | DT Kp | DT Se | DT Sp | DT Pr | DT G |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 1 | 0.8781 | 0.7564 | 0.8352 | 0.9315 | 0.9383 | 0.8852 | 0.9268 | 0.8535 | 0.8901 | 0.9726 | 0.9759 | 0.9320 |
| 2 | 0.9302 | 0.8603 | 0.9032 | 0.9620 | 0.9655 | 0.9339 | 0.9070 | 0.8131 | 0.9032 | 0.9114 | 0.9231 | 0.9131 |
| 3 | 0.9203 | 0.8387 | 0.9032 | 0.9429 | 0.9546 | 0.9285 | 0.9141 | 0.8278 | 0.8710 | 0.9714 | 0.9759 | 0.9219 |
| 4 | 0.9075 | 0.8156 | 0.8571 | 0.9634 | 0.9630 | 0.9085 | 0.9249 | 0.8490 | 0.9451 | 0.9024 | 0.9149 | 0.9299 |
| 5 | 0.9042 | 0.8064 | 0.9130 | 0.8933 | 0.9130 | 0.9130 | 0.8922 | 0.7832 | 0.8804 | 0.9067 | 0.9205 | 0.9002 |
| 6 | 0.9053 | 0.8100 | 0.8913 | 0.9221 | 0.9318 | 0.9113 | 0.9053 | 0.8108 | 0.8696 | 0.9481 | 0.9524 | 0.9100 |
| 7 | 0.8795 | 0.7568 | 0.8901 | 0.8667 | 0.8901 | 0.8901 | 0.8675 | 0.7362 | 0.8132 | 0.9333 | 0.9367 | 0.8728 |
| 8 | 0.9000 | 0.8002 | 0.8602 | 0.9481 | 0.9524 | 0.9051 | 0.8941 | 0.7882 | 0.8602 | 0.9351 | 0.9412 | 0.8998 |
| 9 | 0.9102 | 0.8178 | 0.9348 | 0.8800 | 0.9053 | 0.9199 | 0.9042 | 0.8078 | 0.8804 | 0.9333 | 0.9419 | 0.9106 |
| 10 | 0.9349 | 0.8695 | 0.9130 | 0.9610 | 0.9655 | 0.9389 | 0.8935 | 0.7875 | 0.8478 | 0.9481 | 0.9512 | 0.8980 |
| mean | 0.9070 | 0.8132 | 0.8901 | 0.9271 | 0.9379 | 0.9135 | 0.9023 | 0.8057 | 0.8761 | 0.9362 | 0.9434 | 0.9088 |
| median | 0.9064 | 0.8128 | 0.8973 | 0.9372 | 0.9453 | 0.9122 | 0.9048 | 0.8093 | 0.8757 | 0.9342 | 0.9415 | 0.9103 |
| std | 0.0187 | 0.0376 | 0.0305 | 0.0356 | 0.0272 | 0.0176 | 0.0175 | 0.0345 | 0.0347 | 0.0247 | 0.0212 | 0.0174 |

| Fold | RBS Acc | RBS Kp | RBS Se | RBS Sp | RBS Pr | RBS G | SVM Acc | SVM Kp | SVM Se | SVM Sp | SVM Pr | SVM G |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 1 | 0.9024 | 0.8041 | 0.8791 | 0.9315 | 0.9412 | 0.9096 | 0.9451 | 0.8888 | 0.9560 | 0.9315 | 0.9457 | 0.9508 |
| 2 | 0.9128 | 0.8246 | 0.9140 | 0.9114 | 0.9239 | 0.9189 | 0.8954 | 0.7905 | 0.8710 | 0.9241 | 0.9310 | 0.9005 |
| 3 | 0.9203 | 0.8398 | 0.8817 | 0.9714 | 0.9762 | 0.9278 | 0.9080 | 0.8125 | 0.9140 | 0.9000 | 0.9239 | 0.9189 |
| 4 | 0.9364 | 0.8724 | 0.9451 | 0.9268 | 0.9348 | 0.9399 | 0.9017 | 0.8035 | 0.8791 | 0.9268 | 0.9302 | 0.9043 |
| 5 | 0.8802 | 0.7586 | 0.8804 | 0.8800 | 0.9000 | 0.8902 | 0.9222 | 0.8421 | 0.9457 | 0.8933 | 0.9158 | 0.9306 |
| 6 | 0.8994 | 0.7987 | 0.8696 | 0.9351 | 0.9412 | 0.9047 | 0.9112 | 0.8205 | 0.9348 | 0.8831 | 0.9053 | 0.9199 |
| 7 | 0.8735 | 0.7473 | 0.8352 | 0.9200 | 0.9268 | 0.8798 | 0.9217 | 0.8406 | 0.9670 | 0.8667 | 0.8980 | 0.9319 |
| 8 | 0.9000 | 0.7989 | 0.8925 | 0.9091 | 0.9222 | 0.9072 | 0.8941 | 0.7887 | 0.8495 | 0.9481 | 0.9518 | 0.8992 |
| 9 | 0.8922 | 0.7838 | 0.8696 | 0.9200 | 0.9302 | 0.8994 | 0.9281 | 0.8544 | 0.9457 | 0.9067 | 0.9255 | 0.9355 |
| 10 | 0.8994 | 0.7987 | 0.8696 | 0.9351 | 0.9412 | 0.9047 | 0.9349 | 0.8692 | 0.9239 | 0.9481 | 0.9551 | 0.9394 |
| mean | 0.9017 | 0.8027 | 0.8837 | 0.9240 | 0.9338 | 0.9082 | 0.9162 | 0.8311 | 0.9187 | 0.9128 | 0.9282 | 0.9231 |
| median | 0.8997 | 0.7988 | 0.8798 | 0.9234 | 0.9325 | 0.9059 | 0.9165 | 0.8306 | 0.9294 | 0.9154 | 0.9279 | 0.9253 |
| std | 0.0184 | 0.0367 | 0.0293 | 0.0234 | 0.0194 | 0.0175 | 0.0171 | 0.0337 | 0.0396 | 0.0274 | 0.0189 | 0.0176 |
Table 10. Results obtained from the 5 × 2 cv when the threshold is set to 3.09290 × g: the different statistics are the Accuracy (Acc), Kappa factor (Kp), Sensitivity (Se), Specificity (Sp), Precision (Pr) and the Geometric mean (G), all of them computed using Equations (1) to (8). The models are feed forward NN, Support Vector Machines (SVM), Decision Trees learned with C5.0 (DT) and Rule-Based Systems learned with C5.0 (RBS).

| Fold | NN Acc | NN Kp | NN Se | NN Sp | NN Pr | NN G | DT Acc | DT Kp | DT Se | DT Sp | DT Pr | DT G |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 1 | 0.9226 | 0.8440 | 0.9239 | 0.9211 | 0.9341 | 0.9290 | 0.8988 | 0.7936 | 0.9565 | 0.8290 | 0.8713 | 0.9129 |
| 2 | 0.8810 | 0.7586 | 0.9130 | 0.8421 | 0.8750 | 0.8938 | 0.9286 | 0.8548 | 0.9674 | 0.8816 | 0.9082 | 0.9373 |
| 3 | 0.8810 | 0.7608 | 0.8696 | 0.8947 | 0.9091 | 0.8891 | 0.8333 | 0.6549 | 0.9674 | 0.6711 | 0.7807 | 0.8691 |
| 4 | 0.8750 | 0.7468 | 0.9022 | 0.8421 | 0.8737 | 0.8878 | 0.9048 | 0.8065 | 0.9457 | 0.8553 | 0.8878 | 0.9163 |
| 5 | 0.8988 | 0.7969 | 0.8804 | 0.9211 | 0.9310 | 0.9054 | 0.7798 | 0.5540 | 0.8152 | 0.7368 | 0.7895 | 0.8022 |
| 6 | 0.8869 | 0.7725 | 0.8804 | 0.8947 | 0.9101 | 0.8952 | 0.8810 | 0.7603 | 0.8804 | 0.8816 | 0.9000 | 0.8902 |
| 7 | 0.8988 | 0.7965 | 0.8913 | 0.9079 | 0.9214 | 0.9062 | 0.8691 | 0.7405 | 0.7935 | 0.9605 | 0.9605 | 0.8730 |
| 8 | 0.8810 | 0.7624 | 0.8370 | 0.9342 | 0.9390 | 0.8865 | 0.9226 | 0.8426 | 0.9674 | 0.8684 | 0.8990 | 0.9326 |
| 9 | 0.8691 | 0.7351 | 0.8913 | 0.8421 | 0.8723 | 0.8818 | 0.8929 | 0.7838 | 0.9022 | 0.8816 | 0.9022 | 0.9022 |
| 10 | 0.8750 | 0.7509 | 0.8261 | 0.9342 | 0.9383 | 0.8804 | 0.9167 | 0.8323 | 0.9130 | 0.9211 | 0.9333 | 0.9231 |
| mean | 0.8869 | 0.7725 | 0.8815 | 0.8934 | 0.9104 | 0.8955 | 0.8827 | 0.7623 | 0.9109 | 0.8487 | 0.8832 | 0.8959 |
| median | 0.8810 | 0.7616 | 0.8859 | 0.9013 | 0.9157 | 0.8915 | 0.8958 | 0.7887 | 0.9294 | 0.8750 | 0.8995 | 0.9075 |
| std | 0.0159 | 0.0320 | 0.0309 | 0.0380 | 0.0274 | 0.0147 | 0.0459 | 0.0935 | 0.0640 | 0.0856 | 0.0572 | 0.0403 |

| Fold | RBS Acc | RBS Kp | RBS Se | RBS Sp | RBS Pr | RBS G | SVM Acc | SVM Kp | SVM Se | SVM Sp | SVM Pr | SVM G |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 1 | 0.9107 | 0.8200 | 0.9130 | 0.9079 | 0.9231 | 0.9181 | 0.9345 | 0.8671 | 0.9674 | 0.8947 | 0.9175 | 0.9421 |
| 2 | 0.8929 | 0.7847 | 0.8804 | 0.9079 | 0.9205 | 0.9002 | 0.8988 | 0.7955 | 0.9130 | 0.8816 | 0.9032 | 0.9081 |
| 3 | 0.8869 | 0.7694 | 0.9457 | 0.8158 | 0.8614 | 0.9025 | 0.9107 | 0.8200 | 0.9130 | 0.9079 | 0.9231 | 0.9181 |
| 4 | 0.8988 | 0.7936 | 0.9565 | 0.8290 | 0.8713 | 0.9129 | 0.9048 | 0.8069 | 0.9348 | 0.8684 | 0.8958 | 0.9151 |
| 5 | 0.8691 | 0.7387 | 0.8261 | 0.9211 | 0.9268 | 0.8750 | 0.9107 | 0.8192 | 0.9348 | 0.8816 | 0.9053 | 0.9199 |
| 6 | 0.9107 | 0.8196 | 0.9239 | 0.8947 | 0.9141 | 0.9189 | 0.8869 | 0.7730 | 0.8696 | 0.9079 | 0.9195 | 0.8942 |
| 7 | 0.8631 | 0.7290 | 0.7826 | 0.9605 | 0.9600 | 0.8668 | 0.9405 | 0.8790 | 0.9787 | 0.8947 | 0.9184 | 0.9478 |
| 8 | 0.9286 | 0.8545 | 0.9783 | 0.8684 | 0.9000 | 0.9383 | 0.8988 | 0.7951 | 0.9239 | 0.8684 | 0.8947 | 0.9092 |
| 9 | 0.9107 | 0.8196 | 0.9239 | 0.8947 | 0.9140 | 0.9189 | 0.9107 | 0.8183 | 0.9565 | 0.8553 | 0.8889 | 0.9221 |
| 10 | 0.9286 | 0.8562 | 0.9239 | 0.9342 | 0.9444 | 0.9341 | 0.9167 | 0.8326 | 0.9022 | 0.9342 | 0.9432 | 0.9225 |
| mean | 0.9000 | 0.7985 | 0.9054 | 0.8934 | 0.9135 | 0.9086 | 0.9113 | 0.8207 | 0.9294 | 0.8895 | 0.9110 | 0.9199 |
| median | 0.9048 | 0.8066 | 0.9239 | 0.9013 | 0.9172 | 0.9155 | 0.9107 | 0.8188 | 0.9294 | 0.8882 | 0.9114 | 0.9190 |
| std | 0.0224 | 0.0438 | 0.0602 | 0.0449 | 0.0300 | 0.0232 | 0.0162 | 0.0324 | 0.0325 | 0.0234 | 0.0164 | 0.0157 |
Table 11. Confusion matrices for the three analyzed thresholds and for each model type: feed forward NN, Decision Trees learned with C5.0 (DT), Rule-Based Systems learned with C5.0 (RBS) and Support Vector Machines (SVM). Rows are the predicted class; columns are the reference class.

| Threshold | Model | Predicted | Reference: Fall | Reference: Not Fall |
|---|---|---|---|---|
| 2.5 | NN | Fall | 10 | 47 |
| 2.5 | NN | Not Fall | 2 | 250 |
| 2.5 | DT | Fall | 10 | 20 |
| 2.5 | DT | Not Fall | 2 | 277 |
| 2.5 | RBS | Fall | 10 | 42 |
| 2.5 | RBS | Not Fall | 2 | 245 |
| 2.5 | SVM | Fall | 8 | 18 |
| 2.5 | SVM | Not Fall | 4 | 279 |
| 3.0 | NN | Fall | 12 | 52 |
| 3.0 | NN | Not Fall | 0 | 245 |
| 3.0 | DT | Fall | 11 | 18 |
| 3.0 | DT | Not Fall | 1 | 279 |
| 3.0 | RBS | Fall | 11 | 29 |
| 3.0 | RBS | Not Fall | 1 | 268 |
| 3.0 | SVM | Fall | 10 | 12 |
| 3.0 | SVM | Not Fall | 2 | 285 |
| 3.09290 | NN | Fall | 12 | 59 |
| 3.09290 | NN | Not Fall | 0 | 238 |
| 3.09290 | DT | Fall | 11 | 26 |
| 3.09290 | DT | Not Fall | 1 | 271 |
| 3.09290 | RBS | Fall | 12 | 35 |
| 3.09290 | RBS | Not Fall | 0 | 262 |
| 3.09290 | SVM | Fall | 10 | 13 |
| 3.09290 | SVM | Not Fall | 2 | 284 |
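The statistics in Table 12 follow directly from the confusion matrices in Table 11. The sketch below uses the standard definitions, which are assumed to correspond to Equations (1) to (8) of the paper; in particular, G is taken as the geometric mean of sensitivity and specificity, which reproduces the reported values:

```python
import math

def fall_metrics(tp, fn, fp, tn):
    """Compute the statistics of Tables 7-12 from a binary confusion matrix.

    tp/fn: falls correctly detected / missed; fp/tn: non-falls flagged
    as falls / correctly passed. Standard definitions are assumed.
    """
    n = tp + fn + fp + tn
    acc = (tp + tn) / n              # accuracy
    se = tp / (tp + fn)              # sensitivity (recall on falls)
    sp = tn / (tn + fp)              # specificity
    pr = tp / (tp + fp)              # precision
    g = math.sqrt(se * sp)           # geometric mean of Se and Sp
    # Cohen's kappa: observed vs chance agreement
    pe = ((tp + fp) * (tp + fn) + (fn + tn) * (fp + tn)) / n ** 2
    kp = (acc - pe) / (1 - pe)
    return {"Acc": acc, "Kp": kp, "Se": se, "Sp": sp, "Pr": pr, "G": g}

# SVM at threshold 3.0 x g (Table 11): TP=10, FN=2, FP=12, TN=285
m = fall_metrics(10, 2, 12, 285)
# -> Acc ~0.9547, Kp ~0.5664, Se ~0.8333, Sp ~0.9596, Pr ~0.4545, G ~0.8942,
#    matching the corresponding row of Table 12.
```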
Table 12. Results obtained for the best model for each threshold. Different statistics are shown: the Accuracy (Acc), Kappa factor (Kp), Sensitivity (Se), Specificity (Sp), Precision (Pr) and the Geometric mean (G), all of them computed using Equations (1) to (8). The models are feed forward NN, Decision Trees learned with C5.0 (DT), Rule-Based Systems learned with C5.0 (RBS) and Support Vector Machines (SVM).

| Threshold | Model | Acc | Kp | Se | Sp | Pr | G |
|---|---|---|---|---|---|---|---|
| 2.5 | NN | 0.8414 | 0.2412 | 0.8333 | 0.8418 | 0.1754 | 0.8375 |
| 2.5 | DT | 0.9288 | 0.4454 | 0.8333 | 0.9327 | 0.3333 | 0.8816 |
| 2.5 | RBS | 0.8576 | 0.2662 | 0.8333 | 0.8586 | 0.1923 | 0.8459 |
| 2.5 | SVM | 0.9288 | 0.3886 | 0.6667 | 0.9394 | 0.3077 | 0.7914 |
| 3.0 | NN | 0.8317 | 0.2679 | 1.0000 | 0.8249 | 0.1875 | 0.9082 |
| 3.0 | DT | 0.9385 | 0.5096 | 0.9167 | 0.9394 | 0.3793 | 0.9280 |
| 3.0 | RBS | 0.9029 | 0.3864 | 0.9167 | 0.9024 | 0.2750 | 0.9095 |
| 3.0 | SVM | 0.9547 | 0.5664 | 0.8333 | 0.9596 | 0.4545 | 0.8942 |
| 3.09290 | NN | 0.8091 | 0.2386 | 1.0000 | 0.8013 | 0.1690 | 0.8952 |
| 3.09290 | DT | 0.9126 | 0.4146 | 0.9167 | 0.9125 | 0.2973 | 0.9146 |
| 3.09290 | RBS | 0.8867 | 0.3677 | 1.0000 | 0.8822 | 0.2553 | 0.9392 |
| 3.09290 | SVM | 0.9515 | 0.5484 | 0.8333 | 0.9562 | 0.4348 | 0.8927 |

Share and Cite

Khojasteh, S.B.; Villar, J.R.; Chira, C.; González, V.M.; De la Cal, E. Improving Fall Detection Using an On-Wrist Wearable Accelerometer. Sensors 2018, 18, 1350. https://doi.org/10.3390/s18051350