Article

Elderly Fall Detection with an Accelerometer Using Lightweight Neural Networks

1 State Key Laboratory of Information Engineering in Surveying, Mapping and Remote Sensing, Wuhan University, Wuhan 430079, China
2 Guangdong Key Laboratory of Urban Informatics, Shenzhen University, Shenzhen 518060, China
3 Shenzhen Key Laboratory of Spatial Smart Sensing and Service, Shenzhen University, Shenzhen 518060, China
4 School of Engineering, University of British Columbia, Kelowna, BC V1V 1V7, Canada
* Author to whom correspondence should be addressed.
Electronics 2019, 8(11), 1354; https://doi.org/10.3390/electronics8111354
Submission received: 28 September 2019 / Revised: 25 October 2019 / Accepted: 6 November 2019 / Published: 15 November 2019
(This article belongs to the Section Computer Science & Engineering)

Abstract:
Falls are one of the main threats to people's health, especially for the elderly. Detecting falls in time can prevent long lie times, which can be fatal. This paper demonstrates the efficacy of detecting falls using a wearable accelerometer. Although the fall detection problem has been studied extensively over the past decade, the hardware resources of wearable devices are limited, so designing highly accurate embeddable models with a feasible computational cost remains an open research problem. In this paper, different types of shallow and lightweight neural networks, including supervised and unsupervised models, are explored to improve fall detection results. Experimental results on a large open dataset show that the proposed lightweight neural networks obtain much better results than the machine learning methods used in previous work, while their storage and computation requirements are only a few hundredths of those of the deep neural networks in the literature. Among the tested lightweight neural networks, the best is a supervised convolutional neural network (CNN) that achieves an accuracy beyond 99.9% with only 411 parameters. Its storage and computation requirements are only 1.2 KB and 0.008 MFLOPs, making it well suited to wearable devices with restricted memory size and computation power.

1. Introduction

The world is currently experiencing an unprecedented aging of the population [1]. It has been estimated that the population of people aged 60 and over will keep increasing rapidly and exceed three billion by 2100. Such a huge elderly population will stimulate the development of the healthcare industry. Hence, healthcare services that reduce the living risks associated with older people's daily lives are increasingly in demand.
Meanwhile, falls are one of the main threats in elderly people's lives [2]. Almost 80% of reported accidents among elderly patients are due to falls [3]. This situation is even worse in high-latitude areas that are covered with snow and ice for much of the year, such as Canada, the northern United States, and China. For instance, a living environment with a high risk of falling in Kelowna (Canada) is shown in Figure 1.
Early detection of falls can minimize the time between a fall and the arrival of medical caretakers, and hence prevent long lie times, which are potentially fatal. Therefore, fall detection has become a hot research topic during the past decade, and a large number of fall detection systems have been proposed [4,5,6,7]. Based on the sensors used in detection, these systems can be categorized into vision-based [8,9] and wearable sensor-based [10] systems. Vision-based fall detection has been an active research topic for a long time [11]. Recently, interest in wearable sensor-based systems has increased rapidly due to the emergence of low-cost physical sensors [12,13,14,15].
In the literature, different methods have been proposed to detect falls using wearable sensors. Some are threshold-based and others are machine-learning-based [16]. Among them, machine learning methods have shown superior performance over threshold methods, so they have been widely explored in previous work. Methods including k-nearest neighbors (KNN), kernel Fisher discriminant (KFD), and support vector machine (SVM) were used in [17] to detect falls based on an integrated device attached to the waist. Five methods, including logistic regression (LR), naïve Bayes (NB), decision tree (DT), SVM, and KNN, were evaluated together by Aziz et al. [18] for fall detection based on seven accelerometers distributed over the human body, and SVM proved the best.
Moreover, neural networks have become increasingly popular in the machine learning field due to the growth of computing power and theoretical breakthroughs. Their advanced modeling capability has also attracted considerable attention in the fall detection field [19]. Different types of neural networks, including recurrent and convolutional neural networks, have been used in the literature.
In [20], a long short-term memory (LSTM) neural network named LSTM-Acc and a variant, LSTM-Acc Rot, were proposed to detect falls. The models comprise two LSTM layers and two fully-connected layers, each with 200 neurons, and achieved an accuracy of 98.57%. Furthermore, a gated recurrent unit (GRU) neural network was used in [21] to detect falls with a smartwatch. The GRU model consists of three nodes at the input layer, a GRU layer, a fully-connected layer, and a two-node softmax output layer.
Some other researchers used convolutional neural networks in their work. A convolutional neural network (CNN) composed of four convolutional layers and four pooling layers was used to recognize human falls in [22]; the experimental results showed it could achieve an accuracy of 99.1%. Another CNN, composed of two convolutional and two max-pooling layers, was used in [23] to detect falls and achieved an accuracy of 98.61%. Furthermore, a CNN named CNN-3B3Conv was proposed in [24] to detect falls from acceleration measurements; the experimental results showed that it obtained much better results than recurrent neural networks, with an accuracy near 99%.
Indeed, good results have been obtained by machine learning methods, especially deep learning techniques, in the context of fall detection. However, most of the neural networks used are deep, complex, and computationally intensive, and implementing them in wearable devices with limited hardware resources is a challenge. One solution is to embed these deep neural networks not on the wearable device itself but on a base station instead, as in [23]: raw data (or preprocessed data) are sent via a wireless link from the wearable device to the base station, where the data are processed to detect falls. However, this solution is not appropriate for outdoor environments, as the distance between the wearable device and the base station is limited in the considered technologies, e.g., ZigBee in [23]. Therefore, developing highly accurate embeddable models with lightweight architectures and a feasible computational cost is mandatory to achieve an accurate wearable fall detector that works in both indoor and outdoor environments.
In this work, different types of lightweight neural networks, including supervised and unsupervised models, are explored for fall detection based on an accelerometer worn on the waist. The performance of these lightweight neural networks is evaluated against both the conventional machine learning methods and the deep neural networks used in the literature.
As shown in Figure 2, the standard process of machine-learning-based fall detection consists of three main steps: acquired sensor signals are first segmented into small data blocks, then features that reflect the characteristics of human falls are extracted and fed into classifiers for recognition. Following this process, the rest of this paper is organized as follows. The dataset, signal pre-processing methods, and classification protocol used in this work are explained in Section 2. Section 3 provides a brief introduction to the classifiers used. Experimental results are presented in Section 4. Finally, Section 5 draws conclusions.

2. Dataset and Pre-Processing

2.1. Dataset Description

To guarantee a reliable evaluation, a large public dataset known as the SisFall dataset is used in this work [25]. This dataset has been used in previous work for its diversity and integrity [26]. The dataset was recorded with a self-developed embedded device composed of a Kinetis MKL25Z128VLK4 microcontroller (NXP, Austin, TX, USA), an Analog Devices (Norwood, MA, USA) ADXL345 accelerometer, a Freescale MMA8451Q accelerometer, an ITG3200 gyroscope, and a generic 1000 mAh battery. During data collection, the device was attached to the waist of the subjects, as shown in Figure 3a, with a sampling rate of 200 Hz, and the different activities listed in Table 1 were performed in classrooms and open spaces of a coliseum at the Universidad de Antioquia (Medellín, Colombia). Some of the data collection scenarios are shown in Figure 4. To guarantee safe conditions, falls were simulated on safety landing mats [25]. In total, 38 volunteers, including 15 elderly and 23 young subjects, participated in the collection; their characteristics (sex, age, height, and weight) are summarized in Table 2.
In this work, only the acceleration data acquired from the tri-axial ADXL345 accelerometer are used, as in [25]. As shown in Figure 3b, the ADXL345 is an energy-efficient accelerometer that has been widely embedded in handsets, medical instrumentation, gaming and pointing devices, industrial instrumentation, and personal navigation devices. The ADXL345 used is configured with a measuring range of ±16 g and a resolution of 13 bits with a sensitivity of 3.9 mg/LSB. Its supply voltage range is 2.0 V to 3.6 V, its operating temperature range is −40 °C to +85 °C, and it measures only 3 mm × 5 mm × 1 mm [27].
Since it has been found that there is no significant gain from sampling frequencies higher than 25 Hz in fall detection [26], the original acceleration measurements are first downsampled to 25 Hz. In downsampling, the original measurements are decimated by an integer factor instead of being resampled, which avoids the artifacts and distortion that resampling may introduce. When the original sensing data $S = \{s_1, s_2, \ldots, s_l\}$ are downsampled by an integer factor $n$, the first sample of every $n$ samples is kept, starting from an integer offset $m$, as follows:

$$DS_m^n = \{ s_k \mid k = 1 + m + \alpha \times n \},$$

where $0 \le m < n$, $DS_m^n$ is the downsampled data, and $\alpha$ is an integer with $0 \le \alpha \le \frac{l}{n}$. If the original sampling rate is $R$ Hz, the rate after downsampling is $\frac{R}{n}$ Hz. In this work, a factor of $n = 8$ is used to downsample the 200 Hz sensor signals to 25 Hz.
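A minimal sketch of this decimation in Python with NumPy, assuming the measurements are stored as a (samples × axes) array; the function name and the placeholder data are illustrative, not from the paper:

```python
import numpy as np

def downsample(signal: np.ndarray, n: int, m: int = 0) -> np.ndarray:
    """Keep the first sample out of every n, starting at offset m
    (0 <= m < n). Plain decimation introduces no interpolated values,
    so no artificial samples can appear in the output."""
    assert 0 <= m < n
    return signal[m::n]

acc_200hz = np.zeros((1000, 3))          # placeholder 200 Hz measurements
acc_25hz = downsample(acc_200hz, n=8)    # 200 Hz / 8 = 25 Hz, shape (125, 3)
```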

2.2. Data Pre-Processing

In this section, the segmentation, feature extraction and data oversampling methods used to pre-process the acquired acceleration measurements are explained in detail.

2.2.1. Data Segmentation with Impact Point

To segment sensor signals for classification, most researchers in the literature have used the sliding-window method shown in Figure 5a, where sensor data are continuously segmented by a moving window with an overlap. This method is simple but energy-intensive, since the classifier must operate continuously at short intervals. It is also inaccurate for extracting the data blocks of falls, because an overlapping sliding window may not align with the whole data block of a fall: the window may cover only part of the fall together with part of the activities that preceded it, such as walking or running, which can bias recognition.
To deal with this, an impact point-based data segmentation method is used in this work. It relies on the fact that a fall is always associated with a strong impact between the human body and the ground. By detecting the impact, the sensor signals of falls can be accurately located. Moreover, a large amount of irrelevant sensor data (e.g., data of activities without an evident impact, such as sitting, standing, or lying) can be excluded to avoid unnecessary recognition and save energy.
To detect the impact point, the acceleration magnitude (AM), which reflects the energy contained in the sensor signals, is used with a threshold of 1.6 g according to [28,29]. The AM is obtained as follows:

$$AM[n] = \sqrt{a_x[n]^2 + a_y[n]^2 + a_z[n]^2},$$

where $a_x$, $a_y$, and $a_z$ are the acceleration measurements on the three axes of the accelerometer.
Figure 5b shows the process of data segmentation with an impact point in fall detection. Once an impact is identified with the pre-defined threshold of AM, a window is centered on the impact point to extract the complete fall process. In the experiment, a window of 3 s is used according to previous work [18].
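A possible implementation of this impact-centered segmentation is sketched below; the peak-centering and event-skipping details are assumptions, since the text only specifies the 1.6 g threshold and the 3 s window:

```python
import numpy as np

def segment_impacts(acc, fs=25, threshold=1.6, window_s=3.0):
    """Extract fixed-length windows centered on impact points.

    acc: (N, 3) tri-axial acceleration in units of g. An impact is a
    sample whose acceleration magnitude exceeds the threshold; a
    window of roughly window_s seconds is centered on the AM peak.
    """
    am = np.sqrt((acc ** 2).sum(axis=1))   # acceleration magnitude
    half = int(window_s * fs / 2)
    segments = []
    i = half
    while i < len(acc) - half:
        if am[i] >= threshold:
            peak = i + int(np.argmax(am[i:i + half]))  # center on the peak
            if peak + half <= len(acc):
                segments.append(acc[peak - half:peak + half])
            i = peak + half                # skip past this event
        else:
            i += 1
    return segments
```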

2.2.2. Feature Extraction

Once the sensor signals are segmented, meaningful features must be extracted for classification. Neural networks can extract features automatically; however, hand-designed features that reflect the shape, energy, and dispersion of the sensor signals are needed for conventional machine learning classifiers such as SVM and KNN. In this work, 13 types of statistical features that have been used in the literature [26] are extracted from the acceleration measurements on each axis (a code sketch follows the list):
(1) Minimum values of acceleration measurements;
(2) Maximum values of acceleration measurements;
(3) Mean values of acceleration measurements;
(4) Median values of acceleration measurements;
(5) Interquartile range of acceleration measurements;
(6) Variance of acceleration measurements;
(7) Standard deviation of acceleration measurements;
(8) Mean absolute deviation of acceleration measurements;
(9) Root mean square of acceleration measurements;
(10) Entropy of acceleration measurements;
(11) Energy of acceleration measurements;
(12) Skewness of acceleration measurements;
(13) Kurtosis of acceleration measurements.
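As referenced above, the following sketch computes these 13 features per axis with NumPy/SciPy. The text does not fix the exact definitions of entropy and energy, so the histogram-based entropy and sum-of-squares energy below are assumptions following common practice:

```python
import numpy as np
from scipy import stats

def axis_features(x: np.ndarray) -> list:
    """The 13 statistical features for one acceleration axis."""
    hist, _ = np.histogram(x, bins=16)
    p = hist / hist.sum()
    p = p[p > 0]
    return [
        x.min(), x.max(), x.mean(), np.median(x),          # (1)-(4)
        stats.iqr(x), x.var(), x.std(),                    # (5)-(7)
        np.mean(np.abs(x - x.mean())),                     # (8) mean abs. dev.
        np.sqrt(np.mean(x ** 2)),                          # (9) RMS
        -(p * np.log2(p)).sum(),                           # (10) entropy
        np.sum(x ** 2),                                    # (11) energy
        stats.skew(x), stats.kurtosis(x),                  # (12)-(13)
    ]

def extract_features(segment: np.ndarray) -> np.ndarray:
    """(N, 3) window -> 39-dimensional feature vector (13 per axis)."""
    return np.concatenate([axis_features(segment[:, k]) for k in range(3)])
```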

2.2.3. Mitigating Effects of Class Imbalance

One issue with dataset generation that is frequently overlooked in previous work is class imbalance. Because of the difficulty of collecting fall trials and the practical constraints on collecting data from multiple subjects, it is quite common in fall detection datasets that the numbers of samples per class are unequal. Such imbalance can bias algorithms toward the classes with more data. In the SisFall dataset, the imbalance is larger than 50:1 (ADLs to falls).
To deal with this, the synthetic minority oversampling technique (SMOTE) is applied to the training dataset to prevent imbalanced learning and avoid overfitting. SMOTE addresses the imbalance by oversampling the minority class: new instances are interpolated between minority-class samples and their nearest neighbors in the feature space. A new synthetic instance $X_{new}$ is generated as follows:

$$X_{new} = X_i + rand(0, 1) \times (X_j - X_i),$$

where $X_i$ is a sample of the minority class and $X_j$ is one of the nearest neighbors of $X_i$ in the same class. This interpolation is then repeated for the other nearest neighbors of $X_i$. As a result, SMOTE generates more general regions of the minority class, from which many machine learning classifiers can learn better generalizations. Figure 6 shows some fall trials generated by SMOTE during data oversampling.
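A compact sketch of this interpolation, using scikit-learn only for the neighbor search; the parameter k = 5 and the helper name `smote` are illustrative (in practice a library such as imbalanced-learn provides an equivalent, tuned implementation):

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

def smote(X_min, n_new, k=5, rng=np.random.default_rng(0)):
    """Generate n_new synthetic minority samples by interpolating
    between each sample and one of its k nearest same-class neighbors."""
    nn = NearestNeighbors(n_neighbors=k + 1).fit(X_min)
    _, idx = nn.kneighbors(X_min)           # column 0 is the sample itself
    new = []
    for _ in range(n_new):
        i = rng.integers(len(X_min))
        j = idx[i, rng.integers(1, k + 1)]  # a random nearest neighbor
        new.append(X_min[i] + rng.random() * (X_min[j] - X_min[i]))
    return np.asarray(new)
```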

2.2.4. Evaluation Metrics

In this work, the performance of the different classifiers is presented with the confusion matrix, accuracy (ACC), sensitivity (SEN), and specificity (SPE). Table 3 shows the confusion matrix for fall detection. In the matrix, true positives (TP) are the observations that are falls and were predicted to be falls, false negatives (FN) are the observations that are falls but were predicted to be ADLs, true negatives (TN) are the observations that are ADLs and were predicted to be ADLs, and false positives (FP) are the observations that are ADLs but were predicted to be falls (false alarms). P is the number of fall observations and N the number of ADL observations.
Based on the confusion matrix, ACC, SEN, and SPE are defined as follows:
$$ACC = \frac{TP + TN}{TP + TN + FP + FN}$$
$$SEN = \frac{TP}{TP + FN}$$
$$SPE = \frac{TN}{TN + FP}$$
Among these metrics, ACC measures the overall performance of a classifier, SEN measures its ability to recognize falls, and SPE measures its capability to avoid false alarms. Since an accurate classifier that raises a large number of false alarms is still unacceptable in daily use, both the ability to recognize falls and the ability to exclude false alarms are important. Generally, a classifier is deemed to perform at a higher level only when its accuracy, specificity, and sensitivity are all higher than those of the others.
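The three metrics follow directly from the confusion-matrix counts; the example below reproduces the averaged CNN results reported later in Table 6:

```python
def metrics(tp, fn, tn, fp):
    """Accuracy, sensitivity and specificity from confusion-matrix counts."""
    acc = (tp + tn) / (tp + tn + fp + fn)
    sen = tp / (tp + fn)
    spe = tn / (tn + fp)
    return acc, sen, spe

# Averaged CNN counts from Table 6:
print(metrics(tp=1767.1, fn=22.9, tn=76898.1, fp=26.9))
# -> approximately (0.9994, 0.9872, 0.9997)
```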

2.2.5. Classification Protocol

In order to present the performance of the different machine learning methods in a realistic way, the SisFall dataset is divided into two parts: the first contains the activities performed by young adults Y1, …, Y12 and elderly subjects E1, …, E8, while the second contains the activities performed by the remaining young adults Y13, …, Y23 and elderly subjects E9, …, E15. A two-fold cross-validation strategy is then conducted on these two datasets. In this way, activities performed by a given subject are always tested with classifiers trained on different persons, which guarantees a realistic evaluation. Finally, the total numbers of TP, TN, FP, and FN are counted from the validation results and used to assess the performance.
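The protocol can be expressed as a subject-wise two-fold loop; the training and evaluation routines are placeholders, and the subject ID strings simply follow the notation used above:

```python
# Subject-wise two-fold cross-validation: classifiers are always
# evaluated on subjects they never saw during training.
fold_a = [f"Y{i}" for i in range(1, 13)] + [f"E{i}" for i in range(1, 9)]
fold_b = [f"Y{i}" for i in range(13, 24)] + [f"E{i}" for i in range(9, 16)]

totals = {"TP": 0, "TN": 0, "FP": 0, "FN": 0}
for train_subjects, test_subjects in [(fold_a, fold_b), (fold_b, fold_a)]:
    # Placeholder: train on one subject group, test on the other,
    # and accumulate the fold's TP/TN/FP/FN counts into `totals`.
    ...
```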

3. Machine Learning Methods

In this section, the background of the machine learning classifiers used in this paper is introduced to facilitate understanding. Overall, eight machine learning approaches are used: four conventional methods and four types of neural networks.

3.1. Conventional Machine Learning Methods

Conventional machine learning methods used in this work include SVM, decision tree (DT), KNN, and extreme gradient boosting method (XGB).

3.1.1. SVM

The SVM was proposed by Cortes and Vapnik [30] and has proven very effective in addressing problems such as handwritten digit recognition and face detection in images. The principle of SVM is to find a boundary between two hyperplanes that separates samples of different classes.
Given the training data $X = \{X_1, X_2, \ldots, X_N\}$ and corresponding labels $Y = \{y_1, y_2, \ldots, y_N\}$ with $y_i \in \{-1, +1\}$, two hyperplanes can be found:

$$w^T X_i + b \ge +1, \quad y_i = +1$$
$$w^T X_i + b \le -1, \quad y_i = -1,$$

where $w$ and $b$ are the parameters that represent the hyperplanes. SVM finds a boundary between these two hyperplanes while maximizing the distance $d = \frac{2}{\|w\|}$ between them.

3.1.2. KNN

KNN classifies an unseen feature vector based on the votes of its most similar samples in the training dataset. Generally, a Euclidean distance function is first used to measure the similarity between the target feature vector and the training samples:

$$d(X_i, X_j) = \sqrt{(x_{i1} - x_{j1})^2 + \cdots + (x_{in} - x_{jn})^2}$$
$$R_k(X) = \{X_i \in \mathbb{R}^n \mid d(X, X_i) \le d(X, X_{(k)})\},$$

where $d(X_i, X_j)$ is the distance between samples $X_i$ and $X_j$, $X_{(k)}$ is the $k$-th nearest training sample, and $R_k(X)$ is the group of the $k$ nearest neighbors of the new feature vector $X$. The new feature vector is then assigned to the class to which the majority of its $k$ nearest neighbors belong.

3.1.3. DT

DT solves a classification problem through a series of cascading decision questions: a feature vector that satisfies a specific set of questions is assigned to a specific class. The method is represented graphically by a tree structure, where each internal node tests a feature against a threshold and the leaf nodes give the decided classes. Its implementation amounts to a cascade of if/else conditions. Many types of DTs have been generated by different algorithms; in our research, the C4.5 algorithm is used.

3.1.4. XGB

XGB is a meta-algorithm: a method that can be combined with other machine-learning methods to improve recognition accuracy. It combines the outputs of many "weak" classifiers into a weighted sum that represents the final output. The individual learners can be weak, but as long as each performs slightly better than random guessing, the final model can be proven to converge to a strong learner. In this paper, XGB with decision trees as base learners is used.
In the experiments, the performance of SVM was compared for two kernels, linear and radial basis function (RBF); the linear kernel yielded better results and was selected. The parameter search for k in KNN covered the range from 1 to 10, and a value of 1 was selected. The parameters of XGB were optimized with a grid search over the number of trees and the maximum tree depth; the best results were achieved with 50 trees and a maximum depth of 3.
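For reference, the selected configurations map directly onto common library implementations; note that scikit-learn's decision tree is CART rather than C4.5, so it is only a stand-in, and any parameter not stated in the text is left at library defaults:

```python
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier
from xgboost import XGBClassifier

classifiers = {
    "SVM": SVC(kernel="linear"),                     # linear beat RBF here
    "KNN": KNeighborsClassifier(n_neighbors=1),      # k = 1 from the search
    "DT": DecisionTreeClassifier(),                  # CART stand-in for C4.5
    "XGB": XGBClassifier(n_estimators=50, max_depth=3),
}
```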

3.2. Neural Networks

Neural networks are a family of statistical learning models inspired by the working principles of neurons in the human brain. Overall, four types of neural networks are used in this work: two supervised models, the multi-layer perceptron (MLP) and the convolutional neural network (CNN), and two unsupervised autoencoders.

3.2.1. MLP

An MLP, also known as a feed-forward neural network, is shown in Figure 7a. It processes information through a series of interconnected computational neurons: the inputs are fed toward the outputs via hidden neurons, which are grouped into layers and connected to the previous layer by weighted connections. Formally, the neurons are defined by the following function:

$$a^{l+1} = \sigma(W^l a^l + b^l),$$

where $a^l$ is the vector of neuron values in layer $l$ ($a_i^l$ denotes the value of neuron $i$ in layer $l$), $W^l$ is the weight matrix between layers $l$ and $l+1$, $b^l$ is the bias associated with the neurons in layer $l$, and $\sigma$ is the activation function. For the first layer, $a^1 = x$, the input to the neural network (the flattened sensor signal in this work). MLPs use a fully-connected topology, where each neuron in a layer is connected to every neuron in the previous one.

3.2.2. CNN

The architecture of the CNN is shown in Figure 7b. Different from the MLP, there are additional convolutional layers between the input and the fully-connected layers. These convolutional layers extract more meaningful feature maps for recognition by convolving the input signals with different kernels; in the convolution operation, the kernels act as different filters or feature detectors. Formally, a feature map is generated by a kernel as follows:

$$a_j^{l+1} = \sigma\left(b_j^l + \sum_{f=1}^{n} k_{jf}^l * a_f^l\right),$$

where $a_j^{l+1}$ is the value of feature map $j$ in layer $l+1$, $\sigma$ is the activation function, $n$ is the number of feature maps in layer $l$, $k_{jf}^l$ denotes the kernel that convolves ($*$) over the feature maps in layer $l$ to create feature map $j$ in layer $l+1$, $a_f^l$ is the value of feature map $f$ in layer $l$, and $b_j^l$ is the bias. Once the feature maps are generated by the convolutional layers, they are flattened and fed into the subsequent fully-connected layers for classification.
The MLP and CNN are trained by optimizing their parameters (weights and biases), which is realized by minimizing the following cross-entropy error function:

$$J(w, b) = -\frac{1}{N} \sum_{n=1}^{N} \left[ y_n \log \bar{y}_n + (1 - y_n) \log(1 - \bar{y}_n) \right],$$

where $w$ and $b$ denote the weight and bias parameters, $N$ is the number of samples, $y_n$ is the true label of sample $n$, and $\bar{y}_n$ is the prediction of the neural network. Given the training dataset $X = \{X_1, X_2, \ldots, X_N\}$ and corresponding labels $Y = \{y_1, y_2, \ldots, y_N\}$ with $y_i \in \{0, 1\}$, the optimal parameter values of the MLP and CNN can be found with a gradient-descent approach.

3.2.3. Autoencoders

Autoencoders are neural networks trained in an unsupervised way: they learn a representation (encoding) of the sensor signals with the purpose of reconstructing the signals themselves. Since only sensor signals of different activities, without labels, are needed during training, autoencoders are known as unsupervised models. Figure 7c shows a dense autoencoder (DAE) built from an MLP: the MLP serves as the encoder $\delta$, and another MLP with a symmetrical structure serves as the decoder $\psi$. Similarly, a convolutional autoencoder (CAE) can be built based on a CNN, as shown in Figure 7d.
The encoder and decoder of an autoencoder learn to condense the input signal into representative features and then to reconstruct the signal from them:

$$\delta: x \mapsto h_{w_\delta}(x) = c$$
$$\psi: c \mapsto h_{w_\psi}(c) = x',$$

where $x$ is the input signal, $c$ is the condensed code, and $x'$ is the reconstructed signal.
Different from the MLP and CNN, the training of the DAE and CAE minimizes the reconstruction error between the original and reconstructed signals, using a mean-square error function:

$$\delta, \psi = \underset{\delta, \psi}{\arg\min} \; \|x - x'\|^2 = \underset{\delta, \psi}{\arg\min} \; \|x - h_{w_\psi}(h_{w_\delta}(x))\|^2.$$
In this work, the DAE and CAE are built from the MLP and CNN described above. After unsupervised training, the encoders of the DAE and CAE are extracted and connected to a fine-tuned fully-connected layer for recognition.
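A sketch of the DAE pipeline in Keras (the framework choice is an assumption; the text does not name one), assuming flattened 3 s windows at 25 Hz (75 × 3 = 225 inputs) and the 64-neuron encoder selected in Section 3.2.4. The autoencoder is first trained to reconstruct its input, then the encoder is reused under a sigmoid classification head:

```python
from tensorflow.keras import layers, models

INPUT_DIM = 75 * 3  # flattened 3 s window at 25 Hz, 3 axes (assumed)

# Unsupervised stage: 64-neuron encoder + symmetric decoder trained
# to reconstruct the input under the mean-square error objective.
inputs = layers.Input(shape=(INPUT_DIM,))
code = layers.Dense(64, activation="relu")(inputs)   # encoder (delta)
recon = layers.Dense(INPUT_DIM)(code)                # decoder (psi)
dae = models.Model(inputs, recon)
dae.compile(optimizer="adam", loss="mse")
# dae.fit(X_unlabeled, X_unlabeled, batch_size=128, ...)

# Supervised stage: reuse the trained encoder and fine-tune a
# single sigmoid neuron for fall/ADL classification.
clf = models.Model(inputs, layers.Dense(1, activation="sigmoid")(code))
clf.compile(optimizer="adam", loss="binary_crossentropy")
```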

3.2.4. Neural Network Architectures

Overall, seven neural networks are evaluated in this paper. Three of them are the models that have achieved superior performance in literature. They are used as the baselines to compare with the lightweight neural networks proposed in this paper:
  • CNN-HE [23]: CNN-HE consists of two convolutional layers (each followed by a max-pooling layer) and two fully-connected layers. The first convolutional layer consists of 32 kernels and the second of 64 kernels; the kernel size is 1 × 5 with a stride of 1. The first fully-connected layer consists of 512 neurons and the second of 8 neurons (changed to one in this work) for classification.
  • CNN-3B3Conv [24]: CNN-3B3Conv consists of three layer blocks. The first block consists of three convolutional layers and one max-pooling layer; each convolutional layer consists of 64 kernels with a size of 1 × 4. The second block also consists of three convolutional layers and one max-pooling layer, but the kernel size is set to 1 × 3 empirically. The third block consists of three fully-connected layers with 64, 32, and two neurons (changed to one in this work), respectively.
  • CNN-EDU [22]: CNN-EDU consists of four convolutional layers composed of 16, 32, 64, and 128 kernels (1 × 5), respectively, each followed by a pooling layer. Two fully-connected layers are appended at the end.
The other four neural networks are the lightweight models used in this work. They are designed based on the evaluation results in Table 4 and Table 5. In Table 4, we compare the effect of the filter size, as well as the depth (number of layers) and width (number of kernels) of the CNN, on the resulting accuracy. Notably, the max-pooling layers and additional fully-connected layers that were often appended after convolutional layers in previous work are abandoned here due to information loss [31] and parameter redundancy.
Based on Table 4, a simple CNN consisting of a single convolutional layer with ten 1 × 5 kernels and a stride of 3, followed by one fully-connected layer, is chosen (marked as selected in Table 4). Similarly, a simple MLP consisting of a single hidden layer with 64 neurons is selected according to the results in Table 5. A DAE and a CAE are then built from the selected lightweight MLP and CNN.
In all of these neural networks, rectified linear units (ReLU) are used as the activation function, except in the last fully-connected layer, where a sigmoid function is used for classification. Moreover, a learning rate of 0.001 and a batch size of 128 proved best and are used with the ADAM algorithm [32] for parameter optimization.
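For concreteness, the two selected lightweight architectures can be written in Keras as follows (framework choice assumed). With 3 s windows at 25 Hz the input shape is (75, 3); 'same' padding is an assumption, chosen because it reproduces the reported 411 parameters (160 convolutional + 251 fully-connected):

```python
from tensorflow.keras import layers, models, optimizers

# Lightweight CNN: one convolutional layer (ten 1 x 5 kernels, stride 3)
# followed directly by a single sigmoid output neuron.
cnn = models.Sequential([
    layers.Input(shape=(75, 3)),                       # 3 s at 25 Hz, 3 axes
    layers.Conv1D(10, 5, strides=3, padding="same", activation="relu"),
    layers.Flatten(),                                  # 25 positions x 10 maps
    layers.Dense(1, activation="sigmoid"),
])
cnn.compile(optimizer=optimizers.Adam(learning_rate=0.001),
            loss="binary_crossentropy", metrics=["accuracy"])
cnn.summary()  # 160 + 251 = 411 parameters

# Lightweight MLP: one 64-neuron hidden layer on the flattened window.
mlp = models.Sequential([
    layers.Input(shape=(75 * 3,)),
    layers.Dense(64, activation="relu"),
    layers.Dense(1, activation="sigmoid"),
])
```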

4. Experiment Results and Discussion

4.1. Lightweight Neural Networks against Conventional Methods

To guarantee reliable experimental results, each of the classifiers was run for 10 rounds (see Appendix A for detailed results), and the averaged results are used for evaluation. First, the performance of the lightweight neural networks is compared with that of the conventional machine learning methods in Table 6.
As the results show, XGB performs best among the conventional methods with an accuracy of 99.35%. DT and KNN come next with accuracies of 98.93% and 98.52%, respectively, and SVM performs worst with an accuracy of 98.30%. The improvement of the boosting method over the other conventional classifiers is evident, especially on false positives (the average count drops from 1309.4 for SVM, 1000.9 for KNN, and 799.4 for DT to 496.7 for XGB).
The lightweight neural networks obtain much better results: the accuracy of each is higher than 99.5%, exceeding even the best conventional method (99.35% for XGB). The best results among the neural networks come from the CNN, with an accuracy of 99.94%, a sensitivity of 98.71%, and a specificity of 99.96%. These metrics show a significant improvement over the conventional methods, especially in reducing false alarms. Consider, for example, the specificities of XGB (99.36%) and the CNN (99.96%): the CNN improves specificity by only 0.6%, yet this difference is substantial, as it corresponds to reducing the average number of false alarms from 496.7 to 26.9.
In our analysis, the better results of the CNN are partly due to its advanced modeling ability, but mainly due to its ability to extract local features. The convolutional kernels of the CNN are visualized in Figure 8, where X, Y, and Z denote the kernels on each axis of the acceleration measurements. These kernels have different patterns and shapes and also differ across the axes: some are line segments with a steep slope, while others are line segments fluctuating uniformly. The kernels act as various pattern detectors that move along the input signals to identify particular signal patterns at different locations. Compared with methods that depend on features extracted from whole data segments, these automatically learned kernels help the CNN extract local features that reveal the differences between the signals of falls and ADLs on a much smaller scale: as small as 0.2 s (1 × 5 samples at 25 Hz) at each step.
On the other hand, although autoencoders have proven effective in learning the intrinsic characteristics of data, their slightly worse performance relative to the supervised neural networks indicates that their benefit is not evident in fall detection. This may be because the sensor signals used in fall detection are usually simple and last only a few seconds; hence, supervised models are sufficient to learn effective features for recognition.

4.2. Lightweight Neural Networks against Baseline Models

The performance of the lightweight neural networks is compared with the baseline models from previous work in Table 7. Notably, to further compare the complexity of the different neural networks, the number of parameters (PARA) and the number of floating-point operations (FLOPs [33]; see Appendix B for the calculation) of each network are also listed in Table 7.
As the accuracy metrics show, even though the baseline models are much deeper and more complex, they only achieve accuracies around 99.93%, similar to the lightweight models. However, the parameter counts of the baseline models are generally hundreds of times those of the lightweight models, which also means hundreds of times the storage requirement. The simplest models are the lightweight CNN and CAE with only 411 parameters, and the most complex is CNN-HE with 60.1 × 10^4 parameters.
Furthermore, the complex structure of the baseline models also leads to a higher computational cost during classification: even the least demanding baseline (CNN-EDU) still requires 1.4 MFLOPs to make one decision (fall/no fall), over a hundred times the cost of the lightweight CNN and CAE. Such large FLOP counts mean higher power requirements and more frequent battery recharging, which make a wearable fall detector more obtrusive to use in daily life.
Even though deep neural networks consisting of more than three layers with thousands of neurons have been the focus of previous work, the experimental results show that lightweight neural networks consisting of only one hidden layer with fewer than 100 neurons are enough to achieve satisfying accuracy in fall detection. These lightweight networks have fewer parameters and smaller FLOP counts, making them more suitable for wearable devices, whose real-time requirements restrict memory size and computation power. In this work, the simplest and most accurate neural network is the lightweight CNN, which has only 411 parameters (160 from the convolutional layer and 251 from the final fully-connected layer). The total storage space needed is only 1.2 KB (using 4-byte floating-point numbers), and one classification requires only 0.008 MFLOPs, a few hundredths of the cost of the deep models used previously.

5. Conclusions

As the population of elderly people increases rapidly, healthcare services that reduce the living risks associated with their daily lives are increasingly in demand. Falls are one of the main threats to the lives of elderly people and have caused a large number of accidents, and the treatment of falls has been a huge financial burden on society. Since early detection of falls can prevent long lie times, which can be fatal, detecting the falls of elderly people with the highest possible accuracy using wearable sensors has been a hot research topic over the past decade.
Even though a large amount of work has been done, developing highly accurate embeddable models with lightweight architectures and a feasible computational cost is still an obstacle to realizing a pervasive fall detector on wearable devices. In this paper, different types of lightweight neural networks are proposed, including supervised and unsupervised models. Experimental results demonstrate the superior performance of the proposed lightweight neural networks. The best results are obtained from a lightweight CNN that provides an accuracy beyond 99.9% with a size of only 1.2 KB and a low computational cost of 0.008 MFLOPs, making it well suited to implementation on wearable devices.
In the future, we plan to design different types of neural networks to detect human falls using other wearable devices, such as smartphones, to provide a fall detection service to the general public. We also plan to extend our model to detect other human activities, such as walking, running, and jumping, to realize a cognitive wearable module for use in the healthcare industry.

Author Contributions

Q.L. conceived and designed the experiments; G.W. performed the experiments; G.W. and L.W. analyzed the data; G.W., L.W., Y.Z. and Z.L. wrote the paper; and all authors proof-read the paper.

Funding

This work was supported in part by the National Key Research and Development Program of China under Grant 2016YFB0502203, in part by the National Natural Science Foundation of China under Grant 41704002, Grant 41701519, in part by the National Engineering Laboratory for Big Data System Computing Technology, Shenzhen University, China, and in part by the University of British Columbia, BC V1V 1V7, Canada.

Conflicts of Interest

The authors declare no conflict of interest.

Abbreviations

The following abbreviations are used in this manuscript:
CNN  Convolutional neural network
SVM  Support vector machine
DT  Decision tree
XGB  Extreme gradient boosting
KNN  K-nearest neighbor
MLP  Multi-layer perceptron
DAE  Dense autoencoder
CAE  Convolutional autoencoder
HAR  Human activity recognition
ADLs  Human activities in daily life
AM  Acceleration magnitude
SMOTE  Synthetic minority oversampling technique
ReLU  Rectified linear unit
ACC  Accuracy
SEN  Sensitivity
SPE  Specificity

Appendix A

To guarantee the reliability of the experimental results, every classifier used in this work was run for 10 rounds. The detailed results are presented in this appendix.
Table A1. Classification results over 10 rounds of support vector machine (SVM).

Run | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 | AVG | STD
SEN. (%) | 98.16 | 98.27 | 98.21 | 98.16 | 98.21 | 98.32 | 98.38 | 98.21 | 98.38 | 98.38 | 98.27 | 0.86
SPE. (%) | 98.23 | 98.33 | 98.20 | 98.30 | 98.37 | 98.37 | 98.47 | 98.25 | 98.19 | 98.27 | 98.30 | 0.08
ACC. (%) | 98.23 | 98.33 | 98.20 | 98.30 | 98.37 | 98.37 | 98.47 | 98.24 | 98.19 | 98.28 | 98.30 | 0.08
TP | 1757 | 1759 | 1758 | 1757 | 1758 | 1760 | 1761 | 1758 | 1761 | 1761 | 1759 | 1.55
FN | 33 | 31 | 32 | 33 | 32 | 30 | 29 | 32 | 29 | 29 | 31 | 1.55
FP | 1361 | 1281 | 1387 | 1305 | 1253 | 1255 | 1179 | 1350 | 1396 | 1327 | 1309.4 | 64.89
TN | 75,564 | 75,644 | 75,538 | 75,620 | 75,672 | 75,670 | 75,746 | 75,575 | 75,529 | 75,598 | 75,615.6 | 64.89
Table A2. Classification results over 10 rounds of k-nearest neighbor (KNN).

Run | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 | AVG | STD
SEN. (%) | 90.89 | 90.95 | 90.89 | 90.89 | 90.95 | 90.89 | 90.95 | 90.95 | 90.89 | 90.89 | 90.91 | 0.03
SPE. (%) | 98.70 | 98.70 | 98.70 | 98.70 | 98.70 | 98.70 | 98.70 | 98.70 | 98.70 | 98.70 | 98.70 | 0
ACC. (%) | 98.52 | 98.53 | 98.52 | 98.52 | 98.52 | 98.52 | 98.51 | 98.52 | 98.52 | 98.52 | 98.52 | 0
TP | 1627 | 1628 | 1627 | 1627 | 1628 | 1627 | 1628 | 1628 | 1627 | 1627 | 1627.4 | 0.49
FN | 163 | 162 | 163 | 163 | 162 | 163 | 162 | 162 | 163 | 163 | 162.6 | 0.49
FP | 1001 | 998 | 1002 | 1000 | 1001 | 1001 | 1002 | 1001 | 1003 | 1000 | 1000.9 | 1.3
TN | 75,924 | 75,927 | 75,923 | 75,925 | 75,924 | 75,924 | 75,923 | 75,924 | 75,922 | 75,925 | 75,924.1 | 1.3
Table A3. Classification results over 10 rounds of decision tree (DT).

Run | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 | AVG | STD
SEN. (%) | 97.82 | 97.77 | 97.82 | 97.88 | 97.65 | 97.82 | 97.82 | 97.71 | 97.71 | 97.71 | 97.77 | 0.07
SPE. (%) | 98.93 | 98.93 | 98.93 | 98.93 | 98.94 | 99.04 | 98.93 | 99.02 | 99.01 | 98.94 | 98.96 | 0.04
ACC. (%) | 98.91 | 98.90 | 98.91 | 98.91 | 98.91 | 99.01 | 98.90 | 98.99 | 98.98 | 98.92 | 98.93 | 0.04
TP | 1751 | 1750 | 1751 | 1752 | 1748 | 1751 | 1751 | 1749 | 1749 | 1749 | 1750.1 | 1.22
FN | 39 | 40 | 39 | 38 | 42 | 39 | 39 | 41 | 41 | 41 | 39.9 | 1.22
FP | 822 | 823 | 822 | 822 | 817 | 741 | 823 | 751 | 760 | 813 | 799.4 | 32.32
TN | 76,103 | 76,102 | 76,103 | 76,103 | 76,108 | 76,184 | 76,102 | 76,174 | 76,165 | 76,112 | 76,125.6 | 32.32
Table A4. Classification results over 10 rounds of extreme gradient boosting method (XGB).

Run | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 | AVG | STD
SEN. (%) | 99.32 | 99.33 | 99.33 | 99.39 | 99.39 | 99.22 | 99.33 | 99.27 | 99.33 | 99.27 | 99.32 | 0.05
SPE. (%) | 99.34 | 99.43 | 99.39 | 99.19 | 99.29 | 99.41 | 99.34 | 99.43 | 99.40 | 99.33 | 99.36 | 0.07
ACC. (%) | 99.34 | 99.43 | 99.39 | 99.20 | 99.29 | 99.40 | 99.34 | 99.42 | 99.40 | 99.33 | 99.35 | 0.07
TP | 1778 | 1778 | 1778 | 1779 | 1779 | 1776 | 1778 | 1777 | 1778 | 1777 | 1777.8 | 0.87
FN | 12 | 12 | 12 | 11 | 11 | 14 | 12 | 13 | 12 | 13 | 12.2 | 0.87
FP | 509 | 435 | 471 | 622 | 548 | 457 | 506 | 442 | 460 | 517 | 496.7 | 54.19
TN | 76,416 | 76,490 | 76,454 | 76,303 | 76,377 | 76,468 | 76,419 | 76,483 | 76,465 | 76,408 | 76,428.3 | 54.19
Table A5. Classification results over 10 rounds of MLP.

Run | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 | AVG | STD
SEN. (%) | 98.66 | 98.72 | 97.99 | 98.27 | 98.04 | 98.04 | 98.71 | 97.82 | 98.49 | 98.32 | 98.31 | 0.31
SPE. (%) | 99.94 | 99.95 | 99.96 | 99.96 | 99.96 | 99.96 | 99.96 | 99.96 | 99.95 | 99.96 | 99.96 | 0.01
ACC. (%) | 99.91 | 99.92 | 99.92 | 99.92 | 99.91 | 99.92 | 99.93 | 99.91 | 99.92 | 99.92 | 99.92 | 0.01
TP | 1766 | 1767 | 1754 | 1759 | 1755 | 1755 | 1767 | 1751 | 1763 | 1760 | 1759.7 | 5.57
FN | 24 | 23 | 36 | 31 | 35 | 35 | 23 | 39 | 27 | 30 | 30.3 | 5.57
FP | 46 | 39 | 30 | 31 | 32 | 30 | 29 | 28 | 35 | 33 | 33.3 | 5.22
TN | 76,879 | 76,886 | 76,895 | 76,894 | 76,893 | 76,895 | 76,896 | 76,897 | 76,890 | 76,892 | 76,891.7 | 5.22
Table A6. Classification results over 10 rounds of CNN.

Run | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 | AVG | STD
SEN. (%) | 99.05 | 98.44 | 98.60 | 98.82 | 98.99 | 98.60 | 98.04 | 98.83 | 99.11 | 98.66 | 98.71 | 0.30
SPE. (%) | 99.95 | 99.97 | 99.96 | 99.96 | 99.96 | 99.97 | 99.98 | 99.96 | 99.96 | 99.97 | 99.96 | 0.01
ACC. (%) | 99.93 | 99.94 | 99.93 | 99.94 | 99.94 | 99.94 | 99.93 | 99.94 | 99.94 | 99.94 | 99.94 | 0.01
TP | 1773 | 1762 | 1765 | 1769 | 1772 | 1765 | 1755 | 1770 | 1774 | 1766 | 1767.1 | 5.49
FN | 17 | 28 | 25 | 21 | 18 | 25 | 35 | 20 | 16 | 24 | 22.9 | 5.49
FP | 35 | 21 | 30 | 27 | 30 | 26 | 18 | 31 | 28 | 23 | 26.9 | 4.83
TN | 76,890 | 76,904 | 76,895 | 76,898 | 76,895 | 76,899 | 76,907 | 76,894 | 76,897 | 76,902 | 76,898.1 | 4.83
Table A7. Classification results over 10 rounds of DAE.

Run | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 | AVG | STD
SEN. (%) | 98.99 | 99.11 | 98.88 | 98.99 | 99.22 | 99.05 | 99.11 | 99.16 | 99.27 | 98.94 | 99.07 | 0.12
SPE. (%) | 99.81 | 99.83 | 99.84 | 99.76 | 99.86 | 99.87 | 99.81 | 99.86 | 99.82 | 99.84 | 99.83 | 0.03
ACC. (%) | 99.79 | 99.81 | 99.82 | 99.74 | 99.84 | 99.85 | 99.80 | 99.85 | 99.81 | 99.82 | 99.81 | 0.03
TP | 1772 | 1774 | 1770 | 1772 | 1776 | 1773 | 1774 | 1775 | 1777 | 1771 | 1773.4 | 2.11
FN | 18 | 16 | 20 | 18 | 14 | 17 | 16 | 15 | 13 | 19 | 16.6 | 2.11
FP | 148 | 132 | 121 | 184 | 109 | 102 | 145 | 107 | 139 | 124 | 131.1 | 23.26
TN | 76,777 | 76,793 | 76,804 | 76,741 | 76,816 | 76,823 | 76,780 | 76,818 | 76,786 | 76,801 | 76,793.9 | 23.26
Table A8. Classification results over 10 rounds of CAE.

Run | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 | AVG | STD
SEN. (%) | 99.39 | 99.05 | 99.22 | 99.22 | 99.27 | 99.16 | 99.27 | 98.83 | 99.11 | 99.44 | 99.20 | 0.17
SPE. (%) | 99.94 | 99.90 | 99.94 | 99.93 | 99.91 | 99.93 | 99.91 | 99.93 | 99.94 | 99.93 | 99.93 | 0.01
ACC. (%) | 99.92 | 99.88 | 99.92 | 99.92 | 99.89 | 99.91 | 99.89 | 99.90 | 99.92 | 99.92 | 99.91 | 0.01
TP | 1779 | 1773 | 1776 | 1776 | 1777 | 1775 | 1777 | 1769 | 1774 | 1780 | 1775.6 | 2.97
FN | 11 | 17 | 14 | 14 | 13 | 15 | 13 | 21 | 16 | 10 | 14.4 | 2.97
FP | 49 | 77 | 48 | 52 | 71 | 57 | 70 | 57 | 45 | 53 | 57.9 | 10.43
TN | 76,876 | 76,848 | 76,877 | 76,873 | 76,854 | 76,868 | 76,855 | 76,868 | 76,880 | 76,872 | 76,867.1 | 10.43
Table A9. Classification results over 10 rounds of CNN-HE.

Run | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 | AVG | STD
SEN. (%) | 99.22 | 99.44 | 98.49 | 99.27 | 99.16 | 99.55 | 98.88 | 99.44 | 99.44 | 99.38 | 99.23 | 0.3
SPE. (%) | 99.94 | 99.95 | 99.98 | 99.95 | 99.93 | 99.97 | 99.96 | 99.82 | 99.95 | 99.95 | 99.94 | 0.04
ACC. (%) | 99.93 | 99.94 | 99.94 | 99.94 | 99.91 | 99.96 | 99.94 | 99.81 | 99.94 | 99.94 | 99.93 | 0.04
TP | 1776 | 1780 | 1763 | 1777 | 1775 | 1782 | 1770 | 1780 | 1780 | 1779 | 1776.2 | 5.47
FN | 14 | 10 | 27 | 13 | 15 | 8 | 20 | 10 | 10 | 11 | 13.8 | 5.47
FP | 43 | 39 | 19 | 37 | 55 | 22 | 30 | 138 | 39 | 36 | 45.8 | 32.24
TN | 76,882 | 76,886 | 76,906 | 76,888 | 76,870 | 76,903 | 76,895 | 76,787 | 76,886 | 76,889 | 76,879.2 | 32.24
Table A10. Classification results over 10 rounds of CNN-3B3Conv.

Run | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 | AVG | STD
SEN. (%) | 99.33 | 99.55 | 99.55 | 98.83 | 99.33 | 99.72 | 99.50 | 99.50 | 99.66 | 99.55 | 99.45 | 0.24
SPE. (%) | 99.93 | 99.97 | 99.94 | 99.97 | 99.96 | 99.88 | 99.94 | 99.90 | 99.87 | 99.96 | 99.93 | 0.03
ACC. (%) | 99.92 | 99.96 | 99.93 | 99.95 | 99.95 | 99.88 | 99.93 | 99.89 | 99.87 | 99.95 | 99.92 | 0.03
TP | 1778 | 1782 | 1782 | 1769 | 1778 | 1785 | 1781 | 1781 | 1784 | 1782 | 1780.2 | 4.28
FN | 12 | 8 | 8 | 21 | 12 | 5 | 9 | 9 | 6 | 8 | 9.8 | 4.28
FP | 52 | 25 | 46 | 21 | 30 | 90 | 48 | 75 | 101 | 34 | 52.2 | 26.31
TN | 76,873 | 76,900 | 76,879 | 76,904 | 76,895 | 76,835 | 76,877 | 76,850 | 76,824 | 76,891 | 76,872.8 | 26.31
Table A11. Classification results over 10 rounds of CNN-EDU.

Run | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 | AVG | STD
SEN. (%) | 99.72 | 99.66 | 99.83 | 99.44 | 99.11 | 99.05 | 99.50 | 99.61 | 99.66 | 99.55 | 99.51 | 0.24
SPE. (%) | 99.94 | 99.96 | 99.85 | 99.96 | 99.93 | 99.95 | 99.95 | 99.90 | 99.95 | 99.93 | 99.93 | 0.03
ACC. (%) | 99.94 | 99.95 | 99.85 | 99.95 | 99.91 | 99.93 | 99.94 | 99.89 | 99.95 | 99.93 | 99.93 | 0.03
TP | 1785 | 1784 | 1787 | 1780 | 1774 | 1773 | 1781 | 1783 | 1784 | 1782 | 1781.3 | 4.34
FN | 5 | 6 | 3 | 10 | 16 | 17 | 9 | 7 | 6 | 8 | 8.7 | 4.34
FP | 46 | 34 | 112 | 31 | 52 | 40 | 41 | 76 | 35 | 51 | 51.8 | 23.52
TN | 76,879 | 76,891 | 76,813 | 76,894 | 76,873 | 76,885 | 76,884 | 76,849 | 76,890 | 76,874 | 76,873.2 | 23.52

Appendix B

To compute the number of floating-point operations (FLOPs), we assume convolution is implemented as a sliding window and that the nonlinearity function is computed for free. For convolutional layers and fully-connected layers we compute FLOPs respectively as:

$$\Gamma_{CONV} = \frac{(2 \times C_{in} \times K) \times I \times C_{out}}{s}$$
$$\Gamma_{FC} = 2 \times I \times O,$$

where $I$ is the dimension of the input feature vector, $C_{in}$ is the number of channels of the input feature vector, $K$ is the kernel width, $C_{out}$ is the number of channels of the output feature vector, $s$ is the stride of the kernels, and $O$ is the output dimensionality [33].
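Applying these formulas to the lightweight CNN of Section 3.2.4 (75-sample input, three channels, ten 1 × 5 kernels with stride 3, and a 250-to-1 fully-connected layer) reproduces the 0.008 MFLOPs reported in Table 7:

```python
def conv_flops(c_in: int, k: int, i: int, c_out: int, s: int) -> float:
    """FLOPs of a convolutional layer: (2 * C_in * K) * I * C_out / s."""
    return (2 * c_in * k) * i * c_out / s

def fc_flops(i: int, o: int) -> int:
    """FLOPs of a fully-connected layer: 2 * I * O."""
    return 2 * i * o

total = conv_flops(c_in=3, k=5, i=75, c_out=10, s=3) + fc_flops(250, 1)
print(f"{total / 1e6:.3f} MFLOPs")  # 0.008 MFLOPs
```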

References

  1. United Nations, Department of Economic and Social Affairs. World Population Prospects: The 2017 Revision, Key Findings and Advance Tables. ESA/P/WP/248. 2017. Available online: https://population.un.org/wpp/Publications/ (accessed on 6 November 2019).
  2. Murray, C.J.; Lopez, A.D. The global burden of disease: A comprehensive assessment of mortality and disability from diseases, injuries, and risk factors in 1990 and projected to 2020: summary. Glob. Burd. Dis. Inj. Ser. 1996, 1, 201–246. [Google Scholar]
  3. Schwendimann, R. Patient falls: A Key Issue in Patient Safety in Hospitals. Ph.D. Thesis, University of Basel, Basel, Switzerland, 2006. [Google Scholar]
  4. Schwickert, L.; Becker, C.; Lindemann, U.; Maréchal, C.; Bourke, A.; Chiari, L.; Helbostad, J.; Zijlstra, W.; Aminian, K.; Todd, C.; et al. Fall detection with body-worn sensors. Z. Für Gerontol. Geriatr. 2013, 46, 706–719. [Google Scholar] [CrossRef] [PubMed]
  5. Büsching, F.; Post, H.; Gietzelt, M.; Wolf, L. Fall detection on the road. In Proceedings of the 2013 IEEE 15th International Conference on e-Health Networking, Applications and Services (Healthcom 2013), Lisbon, Portugal, 9–12 October 2013; pp. 439–443. [Google Scholar]
  6. Aguiar, B.; Rocha, T.; Silva, J.; Sousa, I. Accelerometer-based fall detection for smartphones. In Proceedings of the 2014 IEEE International Symposium on Medical Measurements and Applications (MeMeA), Lisboa, Portugal, 11–12 June 2014; pp. 1–6. [Google Scholar]
  7. Hakim, A.; Huq, M.S.; Shanta, S.; Ibrahim, B. Smartphone based data mining for fall detection: Analysis and design. Procedia Comput. Sci. 2017, 105, 46–51. [Google Scholar] [CrossRef]
  8. Rougier, C.; Meunier, J.; St-Arnaud, A.; Rousseau, J. Robust Video Surveillance for Fall Detection Based on Human Shape Deformation. IEEE Trans. Circuits Syst. Video Technol. 2011, 21, 611–622. [Google Scholar] [CrossRef]
  9. Cucchiara, R.; Prati, A.; Vezzani, R. A multi-camera vision system for fall detection and alarm generation. Expert Syst. 2007, 24, 334–345. [Google Scholar] [CrossRef]
  10. Mohamed, O.; Choi, H.; Iraqi, Y. Fall Detection Systems for Elderly Care: A Survey. In Proceedings of the 2014 6th International Conference on New Technologies, Mobility and Security (NTMS), Dubai, United Arab Emirates, 30 March–2 April 2014; pp. 1–4. [Google Scholar]
  11. Zhang, Z.; Conly, C.; Athitsos, V. A Survey on Vision-based Fall Detection. In Proceedings of the 8th ACM International Conference on PErvasive Technologies Related to Assistive Environments, Corfu, Greece, 1–3 July 2015; pp. 46:1–46:7. [Google Scholar]
  12. Kamilaris, A.; Pitsillides, A. Mobile Phone Computing and the Internet of Things: A Survey. IEEE Internet Things J. 2016, 3, 885–898. [Google Scholar] [CrossRef]
  13. Zhang, Y.; Sun, L.; Song, H.; Cao, X. Ubiquitous WSN for Healthcare: Recent Advances and Future Prospects. IEEE Internet Things J. 2014, 1, 311–318. [Google Scholar] [CrossRef]
  14. Sezer, O.B.; Dogdu, E.; Ozbayoglu, A.M. Context-Aware Computing, Learning, and Big Data in Internet of Things: A Survey. IEEE Internet Things J. 2018, 5, 1–27. [Google Scholar] [CrossRef]
  15. Chen, M.; Li, Y.; Luo, X.; Wang, W.; Wang, L.; Zhao, W. A Novel Human Activity Recognition Scheme for Smart Health Using Multilayer Extreme Learning Machine. IEEE Internet Things J. 2019, 6, 1410–1418. [Google Scholar] [CrossRef]
  16. De Quadros, T.; Lazzaretti, A.E.; Schneider, F.K. A Movement Decomposition and Machine Learning-Based Fall Detection System Using Wrist Wearable Device. IEEE Sens. J. 2018, 18, 5082–5089. [Google Scholar] [CrossRef]
  17. Liu, Z.; Cao, Y.; Cui, L.; Song, J.; Zhao, G. A benchmark database and baseline evaluation for fall detection based on wearable sensors for the internet of medical things platform. IEEE Access 2018, 6, 51286–51296. [Google Scholar] [CrossRef]
  18. Aziz, O.; Musngi, M.; Park, E.J.; Mori, G.; Robinovitch, S.N. A comparison of accuracy of fall detection algorithms (threshold-based vs. machine learning) using waist-mounted tri-axial accelerometer signals from a comprehensive set of falls and non-fall trials. Med. Biol. Eng. Comput. 2017, 55, 45–55. [Google Scholar] [CrossRef] [PubMed]
  19. Wang, J.; Chen, Y.; Hao, S.; Peng, X.; Hu, L. Deep learning for sensor-based activity recognition: A survey. Pattern Recognit. Lett. 2019, 119, 3–11. [Google Scholar] [CrossRef]
  20. Theodoridis, T.; Solachidis, V.; Vretos, N.; Daras, P. Human fall detection from acceleration measurements using a Recurrent Neural Network. In Precision Medicine Powered by pHealth and Connected Health; Springer: Heidelberg, Germany, 2018; pp. 145–149. [Google Scholar]
  21. Mauldin, T.; Canby, M.; Metsis, V.; Ngu, A.; Rivera, C. SmartFall: A smartwatch-based fall detection system using deep learning. Sensors 2018, 18, 3363. [Google Scholar] [CrossRef] [PubMed]
  22. Casilari, E.; Lora-Rivera, R.; García-Lagos, F. A Wearable Fall Detection System Using Deep Learning. In International Conference on Industrial, Engineering and Other Applications of Applied Intelligent Systems; Springer: Heidelberg, Germany, 2019; pp. 445–456. [Google Scholar]
  23. He, J.; Zhang, Z.; Wang, X.; Yang, S. A Low Power Fall Sensing Technology Based on FD-CNN. IEEE Sens. J. 2019, 19, 5110–5118. [Google Scholar] [CrossRef]
  24. Santos, G.L.; Endo, P.T.; Monteiro, K.H.d.C.; Rocha, E.d.S.; Silva, I.; Lynn, T. Accelerometer-Based Human Fall Detection Using Convolutional Neural Networks. Sensors 2019, 19, 1644. [Google Scholar] [CrossRef] [PubMed]
  25. Sucerquia, A.; López, J.; Vargas-Bonilla, J. SisFall: A fall and movement dataset. Sensors 2017, 17, 198. [Google Scholar] [CrossRef]
  26. Liu, K.; Hsieh, C.; Hsu, S.J.; Chan, C. Impact of Sampling Rate on Wearable-Based Fall Detection Systems Based on Machine Learning Models. IEEE Sens. J. 2018, 18, 9882–9890. [Google Scholar] [CrossRef]
  27. Analog Devices. ADXL345 Datasheet; Analog Devices: Norwood, MA, USA, 2010. [Google Scholar]
  28. Karantonis, D.M.; Narayanan, M.R.; Mathie, M.; Lovell, N.H.; Celler, B.G. Implementation of a real-time human movement classifier using a triaxial accelerometer for ambulatory monitoring. IEEE Trans. Inf. Technol. Biomed. 2006, 10, 156–167. [Google Scholar] [CrossRef]
  29. Kau, L.; Chen, C. A Smart Phone-Based Pocket Fall Accident Detection, Positioning, and Rescue System. IEEE J. Biomed. Health Inform. 2015, 19, 44–56. [Google Scholar] [CrossRef]
  30. Cortes, C.; Vapnik, V. Support-vector networks. Mach. Learn. 1995, 20, 273–297. [Google Scholar] [CrossRef]
  31. Ronao, C.A.; Cho, S.B. Human activity recognition with smartphone sensors using deep learning neural networks. Expert Syst. Appl. 2016, 59, 235–244. [Google Scholar] [CrossRef]
  32. Kingma, D.P.; Ba, J. Adam: A method for stochastic optimization. arXiv 2014, arXiv:1412.6980. [Google Scholar]
  33. Molchanov, P.; Tyree, S.; Karras, T.; Aila, T.; Kautz, J. Pruning convolutional neural networks for resource efficient inference. arXiv 2016, arXiv:1611.06440. [Google Scholar]
Figure 1. The slippery living places in Kelowna (Canada), which are covered with snow and ice in winter.
Figure 2. Recognition process of machine learning methods.
Figure 3. Description of the device used in SisFall dataset including (a) the device setting and (b) the accelerometer used in this work.
Figure 4. Data collection scenarios in the SisFall dataset.
Figure 5. Different segmentation methods.
Figure 6. Fall trials generated using the synthetic minority oversampling technique (SMOTE).
Figure 7. Different types of neural networks. (a) multi-layer perceptron (MLP); (b) convolutional neural network (CNN); (c) dense autoencoder (DAE); (d) convolutional autoencoder (CAE).
Figure 8. Visualization of convolutional kernels in the CNN.
Table 1. Activities covered in the SisFall dataset.

Code | ADLs | Duration
D01 | Walking slowly | 100 s
D02 | Walking quickly | 100 s
D03 | Jogging slowly | 100 s
D04 | Jogging quickly | 100 s
D05 | Walking upstairs and downstairs slowly | 25 s
D06 | Walking upstairs and downstairs quickly | 25 s
D07 | Slowly sit in a half height chair, wait a moment, and up slowly | 12 s
D08 | Quickly sit in a half height chair, wait a moment, and up quickly | 12 s
D09 | Slowly sit in a low height chair, wait a moment, and up slowly | 12 s
D10 | Quickly sit in a low height chair, wait a moment, and up quickly | 12 s
D11 | Sitting a moment, trying to get up, and collapse into a chair | 12 s
D12 | Sitting a moment, lying slowly, wait a moment, and sit again | 12 s
D13 | Sitting a moment, lying quickly, wait a moment, and sit again | 12 s
D14 | Being on one’s back change to lateral position, wait a moment, and change to one’s back | 12 s
D15 | Standing, slowly bending at knees, and getting up | 12 s
D16 | Standing, slowly bending without bending knees, and getting up | 12 s
D17 | Standing, get into a car, remain seated and get out of the car | 25 s
D18 | Stumble while walking | 12 s
D19 | Gently jump without falling (trying to reach a high object) | 12 s

Code | Falls | Duration
F01 | Fall-forward while walking caused by a slip | 15 s
F02 | Fall-backward while walking caused by a slip | 15 s
F03 | Lateral fall while walking caused by a slip | 15 s
F04 | Fall-forward while walking caused by a trip | 15 s
F05 | Fall-forward while jogging caused by a trip | 15 s
F06 | Vertical fall while walking caused by fainting | 15 s
F07 | Fall while walking, with use of hands in a table to dampen fall, caused by fainting | 15 s
F08 | Fall-forward when trying to get up | 15 s
F09 | Lateral fall when trying to get up | 15 s
F10 | Fall-forward when trying to sit down | 15 s
F11 | Fall-backward when trying to sit down | 15 s
F12 | Lateral fall when trying to sit down | 15 s
F13 | Fall-forward while sitting, caused by fainting or falling asleep | 15 s
F14 | Fall-backward while sitting, caused by fainting or falling asleep | 15 s
F15 | Lateral fall while sitting, caused by fainting or falling asleep | 15 s
Table 2. Age, height and weight of the subjects.

Group | Sex | Age | Height (m) | Weight (kg)
Elderly | Female | 62–75 | 1.50–1.69 | 50–72
Elderly | Male | 60–71 | 1.63–1.71 | 56–102
Adult | Female | 19–30 | 1.49–1.69 | 42–63
Adult | Male | 19–30 | 1.65–1.83 | 58–81
Table 3. Overview of a confusion matrix.

Actual Class \ Predicted Class | Falls | ADLs
Falls (P) | TP | FN
ADLs (N) | FP | TN
Table 4. Evaluation of different CNN architectures.

Filter Size | Depth | Width | Acc. (%)
1 × 2 | 1 | 5 | 99.85
1 × 2 | 1 | 10 | 99.90
1 × 2 | 1 | 30 | 99.91
1 × 2 | 2 | 5 | 99.77
1 × 2 | 2 | 10 | 99.91
1 × 2 | 2 | 30 | 99.93
1 × 2 | 3 | 5 | 99.88
1 × 2 | 3 | 10 | 99.89
1 × 2 | 3 | 30 | 99.89
1 × 5 | 1 | 5 | 99.89
1 × 5 | 1 | 10 | 99.94 (selected)
1 × 5 | 1 | 30 | 99.94
1 × 5 | 2 | 5 | 99.83
1 × 5 | 2 | 10 | 99.93
1 × 5 | 2 | 30 | 99.94
1 × 5 | 3 | 5 | 99.91
1 × 5 | 3 | 10 | 99.93
1 × 5 | 3 | 30 | 99.94
1 × 10 | 1 | 5 | 99.89
1 × 10 | 1 | 10 | 99.91
1 × 10 | 1 | 30 | 99.94
1 × 10 | 2 | 5 | 99.91
1 × 10 | 2 | 10 | 99.92
1 × 10 | 2 | 30 | 99.94
1 × 10 | 3 | 5 | 99.91
1 × 10 | 3 | 10 | 99.92
1 × 10 | 3 | 30 | 99.94
Table 5. Evaluation of different MLP architectures.

Depth | Width | Acc. (%)
1 | 16 | 99.91
1 | 32 | 99.91
1 | 64 | 99.92 (selected)
2 | 16 | 99.90
2 | 32 | 99.91
2 | 64 | 99.91
3 | 16 | 99.90
3 | 32 | 99.91
3 | 64 | 99.90
4 | 16 | 99.88
4 | 32 | 99.92
4 | 64 | 99.92
Table 6. Average detection results of lightweight neural networks against conventional machine learning methods. (XGB, KNN, SVM, and DT are conventional methods; MLP, DAE, CAE, and CNN are the lightweight neural networks.)

Metrics | XGB | KNN | SVM | DT | MLP | DAE | CAE | CNN
SEN. (%) | 99.32 | 90.91 | 98.27 | 97.77 | 98.31 | 99.07 | 99.20 | 98.71
SPE. (%) | 99.36 | 98.70 | 98.30 | 98.96 | 99.96 | 99.83 | 99.93 | 99.96
ACC. (%) | 99.35 | 98.52 | 98.30 | 98.93 | 99.92 | 99.81 | 99.91 | 99.94
TP | 1777.8 | 1627.4 | 1759 | 1750.1 | 1759.7 | 1773.4 | 1775.6 | 1767.1
FN | 12.2 | 162.6 | 31 | 39.9 | 30.3 | 16.6 | 14.4 | 22.9
FP | 496.7 | 1000.9 | 1309.4 | 799.4 | 33.3 | 131.1 | 57.9 | 26.9
TN | 76,428.3 | 75,924.1 | 75,615.6 | 76,125.6 | 76,891.7 | 76,793.9 | 76,867.1 | 76,898.1
Table 7. Average detection results of lightweight neural networks against baseline models. (CNN-HE, CNN-3B3Conv, and CNN-EDU are baseline models; MLP, DAE, CAE, and CNN are the lightweight neural networks.)

Metrics | CNN-HE | CNN-3B3Conv | CNN-EDU | MLP | DAE | CAE | CNN
SEN. (%) | 99.23 | 99.45 | 99.51 | 98.31 | 99.07 | 99.20 | 98.71
SPE. (%) | 99.94 | 99.93 | 99.93 | 99.96 | 99.83 | 99.93 | 99.96
ACC. (%) | 99.93 | 99.92 | 99.93 | 99.92 | 99.81 | 99.91 | 99.94
TP | 1776.2 | 1780.2 | 1781.3 | 1759.7 | 1773.4 | 1775.6 | 1767.1
FN | 13.8 | 9.8 | 8.7 | 30.3 | 16.6 | 14.4 | 22.9
FP | 45.8 | 52.2 | 51.8 | 33.3 | 131.1 | 57.9 | 26.9
TN | 76,879.2 | 76,872.8 | 76,873.2 | 76,891.7 | 76,793.9 | 76,867.1 | 76,898.1
PARA | 60.1 × 10^4 | 10.6 × 10^4 | 8.7 × 10^4 | 1.5 × 10^4 | 1.5 × 10^4 | 411 | 411
FLOPs | 2 M | 6.9 M | 1.4 M | 0.03 M | 0.03 M | 0.008 M | 0.008 M
