Article

Human Activity Detection Using Smart Wearable Sensing Devices with Feed Forward Neural Networks and PSO

by Raghad Tariq Al_Hassani 1,2,* and Dogu Cagdas Atilla 1

1 Faculty of Engineering, Altinbas University, Istanbul 34676, Turkey
2 Ministry of Higher Education and Scientific Research, Baghdad 10065, Iraq
* Author to whom correspondence should be addressed.
Appl. Sci. 2023, 13(6), 3716; https://doi.org/10.3390/app13063716
Submission received: 3 February 2023 / Revised: 28 February 2023 / Accepted: 6 March 2023 / Published: 14 March 2023

Abstract
Hospitals must continually monitor their patients’ actions to lower the chance of accidents, such as patient falls and slides. Human behavior is difficult to track because human activities are complex and often unpredictable, so it is hard to define a static rule that anticipates how individuals will think or act in response to a given event. Mobility tracking depends on intelligent monitoring systems that apply artificial intelligence (AI) applications referred to as “categories”. Because motion sensors, such as gyroscopes and accelerometers, output unconnected data streams that lack labels, event detection is a vital task. The fall feature parameters of tridimensional accelerometer and gyroscope sensors are presented and used, and the classification technique is based on distinguishing characteristics. This study focuses on the long-standing problem of tracking turbulence in motion to improve detection precision. We trained the model on an experimental dataset, considering that detection accuracy is limited by factors such as the subject’s mass, velocity, and gait style. When particle swarm optimization (PSO) was combined with a four-stage feed-forward neural network (4SFNN) to forecast four different types of turbulent motion, the total prediction accuracy reached 98.615%.

1. Introduction

New data technologies have evolved in recent years due to the expansion of internet networks and the introduction of digital communication techniques. The internet of things (IoT) stack has been used for a wide range of innovative purposes in a wide range of sectors. We were able to increase human–machine communication and achieve our aim of improved collaboration by experimenting with novel approaches for human supervisors and robots to interact with one another [1]. More adaptable monitoring of complicated operations was made possible by the use of wirelessly connected sensors and technology [2]. Micro- and nanosensors are just two examples of the many types and styles of sensors developed for a wide range of detection tasks. These sensors might be used in places that are inaccessible to humans. Small sensors can resist harsh environments and capture crucial data, which may then be sent over a wireless sensor network (WSN) [3]. The data gathered by sensor arrays and networks are kept on supercomputers and then retrieved using mining techniques.
Wearable sensors may be actively explored and used to detect people’s health, activities, and habits. This can be performed with higher precision. As a result, the widespread use of these sensors has the potential to improve our daily lives in ways analogous to the benefits we obtain from the popular use of computers, cell phones, and so on. The primary monitoring device is in charge of extracting anxiety aspects and creating evaluations based on data collected from sensors put in or worn on a person’s body, clothes, home, and other settings. These sensors can detect the degree of anxiety a person is experiencing. As soon as the data reaches the targeted servers, it is mined using machine learning and deep learning-inspired algorithms.
This study aims to establish techniques for classifying human activities, with the ultimate objective of recognizing mobility issues and preventing their consequences. Detecting abnormal movement is crucial for avoiding deadly falls or slips, because older adults are more likely to have unexpected mobility limits due to health conditions. Human behavior is difficult to observe, since it is complex and occasionally surprising. As a result, it is difficult to establish a stable connection that can be used to influence the conduct of others. Due to the tremendous number and diversity of data available for human tracking and monitoring, predicting how people will perceive and respond to any given set of circumstances is exceedingly challenging. The quest to uncover data-driven alternatives to present employment activities drives advancement in this sector’s artificial intelligence (AI). This article defines the category as an intelligent monitoring system that uses AI-powered apps to monitor the movement of elderly or senior citizens, pregnant women, severely injured patients, and so on. Previous research has shown that using these strategies to treat several conditions is effective.
Wearable sensor systems, such as smartwatches, smartphones, and other mobile computing devices, are now being used in several settings, including hospitals, smart homes, libraries, and other public and private areas. Interference with communication, such as shadowing, noise, interactions with other communication systems, and so on, as well as noise inherited from human organs, are some examples of elements that may delay information transfer (e.g., when heartbeats are monitored, muscular motions or other organs can be interfaced with the electrocardiogram signal). As a result, it is possible that systems that recognize actions or behaviors will not function properly [4].
The second difficulty is the presence of data that have not been labeled. Data collectors simply acquire the information they want without first classifying it, which raises processing costs [5].
Wearing smart devices designed to avoid falls presents a distinct set of challenges. A variety of falls can occur, and some are more likely than others depending on the presence or absence of specific physical imbalance factors (captured, e.g., by an accelerometer sensor). Because the data generated represent the entirety of a single class, they cannot be addressed by any of the currently known supervised machine learning or deep learning tools or methods [6].

2. The Literature Review

Human habits and routines are used in many different industries, including economics, internet commerce, health applications, and even security systems. Many approaches have been presented in the research when it comes to gathering behavioural data, as well as analysing and classifying the information that has been acquired.
Perez-Vega et al. [7] have created a deep neural network (DNN). They intend to develop a theoretical model that defines how organizations and customers might employ AI-enabled data processing technology to improve the results of both solicited and unsolicited forms of online customer interactions. They do this by building on the analogy of AI systems as biological creatures and adopting a stimulus–organism–response theory viewpoint. This allows them to distinguish between firm-solicited and firm-unsolicited online consumer involvement practices. These practices act as stimuli for AI organisms to analyse customer-related information, which motivates reactions from AI organisms and humans, changing the settings for future online consumer involvement.
For commercial artificial intelligence applications in the banking industry, Konigstorfer et al. [8] used the support vector machine (SVM), logistic regression (LR), convolutional neural network (CNN), and artificial neural network (ANN) models. The study suggests that commercial banks may employ AI to enhance automation, reduce loan losses, secure payment processing, and improve consumer targeting.
It has been proposed that recurrent CNNs in traffic management systems could help reduce the number of collisions that take place on roadways [9]. That study also gives insight into how crowd sensing and artificial intelligence may be utilized to increase emergency situational awareness, as well as response times.
Chunhui Li et al. [10] proposed that an ANN be built on a field programmable gate array (FPGA) board and used to assess biodiversity. Data analysis and model evaluation were both used to establish a distinct pattern. AI and neural network algorithms were consistent with model standards that promote openness and reproducibility, and they indicated that incorporating biodiversity assessments into practice will result in high-quality models and reviews.
The authors of [11,12] proposed conducting a pilot study to determine whether or not AI can be used in an academic setting. The authors proposed a method for combining artificial autonomy based on humanlike perceptions of non-human animals’ competence and warmth, relying on theory-of-mind perceptions to back up this claim. Because these studies used AI, they enhance our theoretical knowledge of artificial autonomy in information systems research.
In [13], the performance of multi-layer perceptron (MLP), k-nearest neighbors (KNN), growing self-organizing map (GSOM), and random forest (RF) in identifying emotional states was investigated to construct profiles based on emotional states. Improving AI’s capabilities has the potential to improve disease modeling, protein structure prediction, therapeutic repurposing, and vaccine development.
Chimamiwa et al. [14] devised a system for recognizing home tasks using a variety of sensors. House sensors capture residents’ behaviors. Between 26 February and 26 August 2020, millions of sensor data samples were collected at 1 Hz. This dataset may be used to test several methods, including data-driven algorithms for routine recognition. Long-term usage of such data by AI systems can reveal the user’s actions and discover changes in their habits.
In [15], the development of the Earthquake Emergency Micro Reaction Device was observed, and a new device was proposed to improve existing emergency response methods in the aftermath of a devastating earthquake. This device can monitor post-earthquake conditions by combining data from smart watches worn by the general population with a geographic information system (GIS). A system was created to discover trapped individuals and important rescue areas by utilizing data generated by smartwatches belonging to probable victims and exposed elements. This new technology can rapidly determine the most critical rescue zone by monitoring patients’ heart rates and calculating their location. It increases the likelihood of successful search and rescue missions.
In [4], SVM, KNN, RF, and the hidden Markov model (HMM) were all used in the detection of sleep disorders. Categorization results were obtained by applying various permutations of data, training, and scoring procedures to five distinct machine learning algorithms. The system’s efficiency was tested in two ways: first, the success rate in identifying participants based on their respiratory issues had one misclassification for every seventeen individuals, and second, the accuracy rate in recognizing abnormal respiratory episodes was 85.95%.
A study [16] demonstrated a one-dimensional CNN-based technique for detecting human activity using triaxial accelerometer data acquired from users’ smartphones. The technology was capable of detecting many forms of human activity. The data collected by a smartphone’s accelerometer sensor relate to three main types of human movement and inactivity. The accuracy of the one-dimensional CNN-based system was 92.71%, which was greater than the standard random forest approach (which reached 89.10% accuracy).
In [17], a model for distinguishing between sedentary and active behaviour in public datasets using a one-dimensional CNN was proposed and experimentally validated. The CNN model was made up of four convolutional layers, with the rectified linear unit (ReLU) used as the activation function in each layer. The model achieved an accuracy of 95.9%.
The authors of [18] provide the results of an activity identification experiment performed using Kinect RGB and a depth sensor camera. The investigators had to identify seven distinct human activities (seven classes). The feature vectors for the eight limbs used in the experiment were the joint angles obtained from the Kinect depth sensor, and each of these vectors has three axes. They employed three separate cutting-edge recurrent neural network (RNN) models for training and testing. The comparison of the three RNN models revealed that the long short-term memory (LSTM) model, with a 96% success rate, had the highest accuracy in identifying human activities.

3. Methodology

The first step is to gather data from a wearable sensing device, which may include readings from the accelerometer and gyroscope. Once the data are obtained, they are preprocessed for analysis by normalizing and labeling them to show whether or not a person is falling, and concatenating them to prepare them for analysis. The labeled data are then partitioned into training and testing sets, where the training set is used to train the model. Thereafter, the classification model through which the preprocessed data are fed is constructed and optimized to produce a binary classification (fall or not fall). The training data are used to train the model and adjust its weights to decrease the classification error. Once the model is trained, the testing data are used to assess its performance. Finally, the model’s ability to appropriately detect falls and non-falls is verified by computing metrics such as accuracy, precision, recall, and F1-score.
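The pipeline above can be sketched in a few lines; the normalization scheme, toy sensor values, and split ratio below are illustrative assumptions, not the authors' actual code.

```python
# Minimal sketch of the fall-detection pipeline: normalize sensor
# readings, then partition labelled samples into train/test sets.
import random

def normalize(rows):
    """Min-max normalize each sensor column to [0, 1]."""
    cols = list(zip(*rows))
    scaled = []
    for col in cols:
        lo, hi = min(col), max(col)
        span = (hi - lo) or 1.0
        scaled.append([(v - lo) / span for v in col])
    return [list(r) for r in zip(*scaled)]

def train_test_split(samples, labels, test_ratio=0.2, seed=0):
    """Shuffle and partition labelled samples into train/test sets."""
    idx = list(range(len(samples)))
    random.Random(seed).shuffle(idx)
    cut = int(len(idx) * (1 - test_ratio))
    train = [(samples[i], labels[i]) for i in idx[:cut]]
    test = [(samples[i], labels[i]) for i in idx[cut:]]
    return train, test

# Toy accelerometer readings (x, y, z) with fall (1) / no-fall (0) labels.
raw = [[0.1, 9.8, 0.2], [5.3, 2.1, 7.7], [0.2, 9.7, 0.1], [6.0, 1.5, 8.2]]
labels = [0, 1, 0, 1]
train, test = train_test_split(normalize(raw), labels, test_ratio=0.25)
```

The trained classifier would then be fit on `train` and scored on `test` with the metrics listed above.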
In this section, the methodology will be explained in detail.

3.1. Dataset

The dataset includes four falling tasks completed by 11 people across three separate tries each [11]. We obtained this dataset from 11 subjects who performed four different forms of falling (falling forward using hands, falling forward using knees, falling backward, and falling sideways). Each subject wore a wearable sensing device with accelerometer, gyroscope, and orientation sensors to track every movement, with a sampling rate of 100 Hz. The collection contains 637,127 data samples in total.
The data set is structured as follows:
  • 11 subject-specific folders;
  • four task folders per subject, one for each fall type;
  • three trial subfolders per task;
  • one sensor data CSV file per trial, with the information from that attempt.
Every record has:
  • Information gathered = total acquired data (synchronized and organized with tags).
Features:
  • X and Y: characteristics determined by analyzing the data. Each feature’s window length (in seconds) is represented by the Y value, while the X value indicates the time interval (in seconds) during which the features were collected.
  • Camera X: refers to a zipped folder containing the photographs for each experiment that were captured with Camera X.
  • Camera X OF: this is a compressed folder containing all of Camera X’s settings (optical flow).
  • Downsampled camera OF: a CSV file containing the OF from both cameras, scaled down to a 20 × 20 matrix.
  • Camera OF features X and Y: this file contains the OF from both cameras, scaled down to a 20 × 20 matrix. The mean was extracted as the lone feature from these files, with X denoting the time gap in seconds over which the feature was collected, and Y denoting the window length in seconds.

3.2. Prediction Model

3.2.1. Preprocessing: Missing Data Elimination

Because motion sensors generate a large amount of data, it is vital to store them on servers. Some values went missing during data storage and retrieval, and the resulting gaps were filled with hashes or question marks. These symbols would decrease categorization accuracy, so we combed through all of the data looking for such symbols (see Figure 1).
When such an error is discovered, the line with the missing value (represented by symbols) is erased from the file. Because of the massive amount of data, with new lines of Cartesian coordinates generated continuously, there is no need to be concerned about missing out on any information. As a consequence, deleting a single line results in no information loss.
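The cleaning step can be sketched as follows: scan every CSV row and drop any line whose fields contain the placeholder symbols. The file layout and column names are illustrative assumptions.

```python
# Drop any CSV row containing the '#' or '?' placeholders left by
# storage/retrieval gaps, keeping the header intact.
import csv, io

RAW = """time,ax,ay,az
0.01,0.12,9.81,0.05
0.02,#,9.79,0.04
0.03,0.11,?,0.06
0.04,0.13,9.80,0.05
"""

def clean_rows(text):
    reader = csv.reader(io.StringIO(text))
    header = next(reader)
    kept = [row for row in reader
            if not any(cell in ("#", "?") for cell in row)]
    return header, kept

header, rows = clean_rows(RAW)
# Two corrupted lines are discarded; at 100 Hz this loses only ~20 ms
# of signal, so no meaningful information is lost.
```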
The detection of falls should always serve as the basis for motion classification. This endeavor is driven by the rising need for accurate fall prevention technology, which is especially crucial considering the limitations of the solutions that are now available. It is necessary to improve the quality of the classifier to make fall detection more reliable.
To commence, the data are transformed, labeled, and concatenated to fit the format required by the model and to convert them to coded form for analysis. Ultimately, all data are divided into test data and training data (see Table 1).

3.2.2. Auto Encoder (AEC)

An autoencoder (AEC) is a form of ANN used to learn effective encodings of unlabeled input (unsupervised learning). The encoding is evaluated and refined by working towards the aim of recreating the initial input. Autoencoders are neural networks that may be trained to learn a representation (encoding) of a collection of inputs by filtering out irrelevant information (noise). One-class categorization and the handling of unlabeled data provided by wearable sensing devices remain challenging for AECs. With a mean absolute error (MAE) of 37.3 and an accuracy of 52.8%, the AEC method allows us to predict when someone will fall; these results are illustrated in Figure 2a,b.
As shown in Figure 2a, the ten-fold accuracy ranges between 49% and 51.6%, with a floor of 49%. The root mean square error (RMSE) was 86.488, and the mean square error (MSE) was 7.4803 × 10³. The findings were produced by a single-stage classifier with 80% training and 20% testing data, with the AEC classifying the unlabeled data through unsupervised deep learning.
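The AEC idea can be illustrated with a minimal linear autoencoder trained by gradient descent on toy 2-D "motion" vectors; the architecture, data, and deterministic initialization are assumptions for illustration, not the paper's configuration. A vector that does not fit the learned structure reconstructs poorly, which is what flags a candidate fall.

```python
# A linear autoencoder with a 1-unit code, trained to reconstruct its
# input; high reconstruction error marks an out-of-pattern sample.
def train_autoencoder(data, lr=0.05, epochs=300):
    n = len(data[0])
    W1 = [[0.3] * n]                 # encoder weights: n inputs -> 1 code
    W2 = [[0.3] for _ in range(n)]   # decoder weights: 1 code -> n outputs
    for _ in range(epochs):
        for x in data:
            h = sum(W1[0][i] * x[i] for i in range(n))
            xhat = [W2[i][0] * h for i in range(n)]
            err = [xhat[i] - x[i] for i in range(n)]
            # Gradient of the squared reconstruction error w.r.t. the code.
            dh = sum(W2[i][0] * 2 * err[i] for i in range(n))
            for i in range(n):
                W2[i][0] -= lr * 2 * err[i] * h
            for i in range(n):
                W1[0][i] -= lr * dh * x[i]
    return W1, W2

def recon_error(x, W1, W2):
    h = sum(W1[0][i] * x[i] for i in range(len(x)))
    return sum((W2[i][0] * h - x[i]) ** 2 for i in range(len(x)))

# "Normal" motion lies on the line y = x; the AE learns that 1-D structure.
normal = [[v, v] for v in (0.2, 0.4, 0.6, 0.8)]
W1, W2 = train_autoencoder(normal)
normal_err = recon_error([0.5, 0.5], W1, W2)    # fits the pattern -> small
anomaly_err = recon_error([0.9, -0.9], W1, W2)  # off-pattern -> large
```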

3.2.3. Classifiers

To categorize the labeled fall data, a neural network with four stages of feed-forward classification is utilized. To increase the accuracy of data predictions, a supervised learning approach known as a “4-stage forward neural network” (4SFNN) was implemented. Each categorization stage is fine-tuned using a technique known as particle swarm optimization (PSO). Table 1 shows the list of configuration parameters of the proposed model. PSO is used to improve training quality by roughly modeling mistakes and, hence, maximizing the performance of each stage of the classification process. Weight/bias estimation is used to achieve this purpose. This approach assigns weight/bias values depending on the changing value of the training MSE that occurs during the investigation.
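A compact PSO loop illustrates how particle positions (candidate weight/bias vectors) are pulled toward personal and global bests to minimize a training error. The objective below is a stand-in quadratic bowl, not the paper's network loss; all parameter values are illustrative assumptions.

```python
# Particle swarm optimization: each particle's velocity blends inertia,
# attraction to its personal best, and attraction to the global best.
import random

def pso(objective, dim, n_particles=20, iters=100,
        w=0.7, c1=1.5, c2=1.5, seed=42):
    rng = random.Random(seed)
    pos = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_val = [objective(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                vel[i][d] = (w * vel[i][d]
                             + c1 * rng.random() * (pbest[i][d] - pos[i][d])
                             + c2 * rng.random() * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            val = objective(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

# Stand-in "training MSE" with its minimum at weights (1, -2).
mse = lambda p: (p[0] - 1) ** 2 + (p[1] + 2) ** 2
best, best_val = pso(mse, dim=2)
```

In the proposed system the objective would be the training MSE of each classification stage, with each particle encoding that stage's weights and biases.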
Using this approach, the information is divided into four groups based on the characteristics of the fall: MSC, FKL, FOL, and SDL (each one represents a type of falling mentioned in Table 1). As a result, every incoming class is subjected to a four-stage categorization procedure (Figure 3 demonstrates the underlying structure of the proposed classifier). As a result of the labeling challenge, it is difficult to use appropriate classifiers; extensive performance evaluation may be performed to strengthen the reliability of the proposed method. To ensure that the findings are valid, each suggested classifier must be evaluated using metrics such as the proportion of correct classifications and the MAE.
When the classifier is being trained, an epoch is a full pass through the training data. Low epoch counts can lead to under-fitting, which occurs when the model has not learned enough from the training set to make reliable predictions on incoming data. On the other hand, too many epochs may lead to overfitting, which prevents the model from generalizing well to new data, since it has learned the training set too closely. Therefore, it is vital to watch the model’s performance on a validation dataset throughout the training process and to stop training when the validation loss starts to rise, which signals overfitting.
A common process for choosing the ideal number of epochs is early stopping, which automatically determines the optimal number of epochs based on the model’s performance on a validation dataset. The dataset is divided into three sets: training, validation, and test. The model is trained on the training set for a number of epochs significantly greater than anticipated, and after each epoch its performance on the validation set is evaluated. When the validation set’s performance stops improving, training is halted, and the epoch with the best validation performance is used. Accuracy in this system refers to the percentage of correctly classified fall events out of all the fall events in the dataset. During the training process, the model predicts the type of fall from the accelerometer and gyroscope sensor data; the predicted fall is then compared to the actual fall type in the dataset to compute the accuracy. The quality of the classifier must be increased to improve the accuracy of fall detection. The accuracy is calculated by comparing the model’s predictions to the true labels for the data in the training or validation set. During training, these accuracy values can be used to monitor the model’s performance and to adjust the hyperparameters or architecture, if needed, to improve the model’s accuracy on the validation set.
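The early-stopping rule described above can be sketched as a small loop over per-epoch validation losses; the loss values and patience setting are illustrative assumptions.

```python
# Early stopping: track the best validation loss, and stop after
# `patience` consecutive epochs without improvement, keeping the
# best epoch's result.
def early_stop(val_losses, patience=3):
    best_epoch, best_loss, waited = 0, float("inf"), 0
    for epoch, loss in enumerate(val_losses):
        if loss < best_loss:
            best_epoch, best_loss, waited = epoch, loss, 0
        else:
            waited += 1
            if waited >= patience:
                break
    return best_epoch, best_loss

# Validation loss falls, then rises again as overfitting sets in.
losses = [0.9, 0.6, 0.45, 0.40, 0.42, 0.47, 0.55, 0.61]
best_epoch, best_loss = early_stop(losses)
# Training halts after three epochs without improvement; the epoch-3
# weights (loss 0.40) are kept.
```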

4. Results and Discussion

4.1. Nonoptimized

After preprocessing the data and identifying the missing values in the three-coordinate sensor data to compensate for them, the relevant features are obtained by pairing the coordinate values with their corresponding times to make them accessible for classification. We then partition the data: the model is trained on the training set (80%), its performance is evaluated on the validation set (10%), and it is tested on the testing set (10%) (see Table 2). The outcomes of identifying the labeled falling data using a four-stage feed-forward neural network are depicted below. The first stage of the non-optimized classification system achieved an accuracy of 97.78%, a mean absolute error of 0.0275, a MSE of 0.031, and a RMSE of 0.176. The third-stage classifier achieved an accuracy of 98.16%, a MAE of 0.0256, a MSE of 0.0319, and a RMSE of 0.177. The fourth-stage classifier achieved 98% accuracy with a MAE of 0.0287, a MSE of 0.0365, and a RMSE of 0.191 (Figure 4).

4.2. Optimized

PSO improves the weights of an FFNN by repeatedly altering particle locations to classify features. This technique improves the FFNN convergence rate and learning process, creating a more accurate and efficient neural network model. The FFNN-PSO has four layers with 40, 30, 10, and 10 neurons, with 70%, 20%, and 10% training, validation, and testing splits. In stage 1, the proposed classifier predicts the MSC class with 98.95% accuracy, a MAE of 0.016, a MSE of 0.0168, and a RMSE of 0.129. In stage 3, the proposed classifier predicts the SDL class with 97.94% accuracy, a MAE of 0.028, a MSE of 0.0266, and a RMSE of 0.137. Figure 5 shows that the proposed classifier predicts FKL with 98.45% accuracy, a MAE of 0.018, a MSE of 0.0216, and a RMSE of 0.147. For the FOL class, the proposed classifier achieves a prediction accuracy of 98.76%, a MAE of 0.0176, a MSE of 0.0363, and a RMSE of 0.190.
Six male and five female participants, aged 22 to 36, with a mean height of 176.09 cm and a mean weight of 77.63 kg, were asked to perform different falling activities (falling backward, falling forward, sitting in an empty chair, and falling sideways) to collect raw data from wearable (x, y, z) axis accelerometer sensors and (roll, pitch, yaw) gyroscope sensors at 100 Hz.
Features with consistent missing values are selected. The dataset has four subgroups; the frequency of each characteristic is found in each chosen subgroup, and the classification techniques and accuracy measures are employed to gradually examine the features’ predictive ability.
This study uses the dataset to compare three fall-prediction models. The PSO technique gave the FFNN classifier a final prediction accuracy of 98.615%, compared to 52.8% for the AEC in the first model (see Table 3). The mean absolute error rate also contrasts the AEC with the suggested methods. PSO in the FFNN algorithm stages decreases the MAE rate from 0.0262 to 0.0193, improving performance and reducing errors.
Similarly, there is an obvious increase in training and validation accuracy as the number of epochs rises, which has a significant impact on improving the system and reducing loss. As shown in Figure 6, increasing the number of epochs to 30 leads to a significant improvement, reaching 92.3% training accuracy and 88.8% validation accuracy; the range of 60 to 80 epochs brings a further large increase in both, arriving at 98.1% and 95.3%, respectively, before the curves stabilize between 80 and 100 epochs. Thus, it is clear how an appropriate choice of the number of epochs achieves the best classification performance.
With exact performance percentages (see Table 4), powerful human fall detection systems increase safety and wellbeing, especially for the elderly and those with medical issues. Caregivers and family members may rest easy with fall detection systems. These technologies can reduce falls that need emergency medical services and hospitalizations, lowering healthcare costs.
Finally, we compare this study with the studies in the literature in terms of performance. Comparing our best method with other studies, its accuracy outperforms every other method (see Table 5).

4.3. Confusion Matrix

A confusion matrix table is frequently used to assess whether a classification model is working well. It displays the proportion of precise and imprecise predictions generated by the model in contrast to the actual results. It is commonly depicted as a square matrix with rows and columns corresponding to the predicted and actual classes, respectively. The confusion matrix is used to evaluate a model’s performance, notably its accuracy, precision, recall, and F1 score (Figure 7).
The accuracy of the model is determined by dividing the sum of true positives and true negatives by the total number of predictions. Precision is determined by dividing the number of true positives by the total number of positive predictions, whereas recall is determined by dividing the number of true positives by the total number of actual positives. The F1 score is the harmonic mean of recall and precision.
The four outcomes of a binary classification model are:
  • True positive (TP): The model accurately predicted the positive class.
  • False positive (FP): The model predicts that the class would be positive, but the actual class was negative.
  • True negative (TN): The model predicted the negative class in the appropriate order.
  • False negative (FN): The model indicated that the class would be negative, but it was actually positive.
Accuracy = (TP + TN) / (TP + TN + FP + FN)
Sensitivity = Recall = TP / (TP + FN)
Positive Predictive Value = Precision = TP / (TP + FP)
F1 score = 2 × (Precision × Recall) / (Precision + Recall)
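The four formulas can be computed directly from raw confusion-matrix counts; the counts below are toy values for illustration, not results from this study.

```python
# Accuracy, precision, recall, and F1 from confusion-matrix counts.
def metrics(tp, tn, fp, fn):
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    precision = tp / (tp + fp)   # TP over all positive predictions
    recall = tp / (tp + fn)      # TP over all actual positives
    f1 = 2 * precision * recall / (precision + recall)
    return accuracy, precision, recall, f1

acc, prec, rec, f1 = metrics(tp=90, tn=85, fp=5, fn=10)
```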
The confusion matrix for human fall detection on the dataset was evaluated with four labels on the actual and predicted axes; processed with the FFNN-PSO algorithm, it reaches an accuracy of 98.615%. This shows that falling forward using the knees has the highest number of predictions (16,008), while falling forward using the hands receives the lowest (eight).
Precision and recall are important for evaluating system performance. Precision measures the system’s ability to correctly identify positive results, while recall measures the system’s ability to correctly identify all positive results. The values of precision and recall depend on the number of true positives, false positives, and false negatives. The F1 score depends on the values of precision and recall and is used as a measure of a system’s accuracy.
Regarding the matrix results, the FFNN-PSO model achieved precise performance across activities, with a precision of 98.9%, a recall of 95.2%, and an F1 score of 97.1%; the prediction error is about 0.0224.
For BSC falling, 15,925 actual samples were correctly classified as true positives (TP), while only eight samples were incorrectly classified as false positives (FP). For FKL falling, 16,008 samples were correctly classified as TP, and 388 samples were incorrectly classified as FKL or SDL (FP). For FOL, 15,628 actual samples were correctly classified as TP, while 632 samples were incorrectly classified as BSC or FKL (FN), and 459 samples were incorrectly classified as SDL (FP). For SDL, 15,457 actual samples were correctly classified (TP), 247 samples were incorrectly classified as FKL or FOL (FN), and 347 samples were incorrectly classified as BSC (FP).
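The tallies above come from a four-class confusion matrix, which can be built by counting (actual, predicted) label pairs; the label lists below are toy values, not the study's data.

```python
# Build a multi-class confusion matrix: rows are actual classes,
# columns are predicted classes, the diagonal holds correct predictions.
from collections import Counter

LABELS = ["BSC", "FKL", "FOL", "SDL"]

def confusion_matrix(actual, predicted, labels=LABELS):
    counts = Counter(zip(actual, predicted))
    return [[counts[(a, p)] for p in labels] for a in labels]

actual    = ["BSC", "BSC", "FKL", "FOL", "SDL", "SDL"]
predicted = ["BSC", "FKL", "FKL", "FOL", "SDL", "BSC"]
cm = confusion_matrix(actual, predicted)
```

Per-class TP is the diagonal entry; FP is the rest of that class's column, and FN is the rest of its row.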

5. Conclusions

Assistive technology improves healthcare, especially by enabling older people to live independently for longer. It requires continuous human monitoring systems that alert healthcare workers to problems and ensure rapid intervention, for example, a healthcare alert system that detects when a person collapses. To utilize these techniques to identify human falls, behavioral observation methods were developed. Falls and slides are more likely to hurt elderly and sick people with irregular movements. Deep learning and data mining allowed a computational model to predict turbulence, advancing the field significantly. This study predicted four types of motion turbulence using auto encoding and two other methods. The AEC categorizes data without labels in the first model, obtaining 52.8% accuracy and a 37.3 mean absolute error. The 4SFNN method divides the data into four portions and assigns each classifier to a segment, and its predictions reached 98.615% accuracy. We found that PSO optimization improved the system.

Author Contributions

Conceptualization, R.T.A. and D.C.A.; methodology, R.T.A. and D.C.A.; software, R.T.A.; validation, R.T.A. and D.C.A.; formal analysis, R.T.A.; investigation, R.T.A.; resources, R.T.A. and D.C.A.; data curation, R.T.A.; writing—original draft preparation, R.T.A.; writing—review and editing, R.T.A.; visualization, R.T.A. and D.C.A.; supervision, D.C.A.; project administration, R.T.A.; funding acquisition, R.T.A. and D.C.A. All authors have read and agreed to the published version of the manuscript.

Funding

The APC was funded by Raghad Tariq Al_Hassani on 9 March in support of open access publishing, and the journal's confirmation of the APC was received.

Data Availability Statement

The dataset used in this study has already been published in reference [11].

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Pu, X.; An, S.; Tang, Q.; Guo, H.; Hu, C. Wearable triboelectric sensors for biomedical monitoring and human-machine interface. iScience 2021, 24, 102027. [Google Scholar] [CrossRef] [PubMed]
  2. Yu, D.; Kang, J.; Dong, J. Service Attack Improvement in Wireless Sensor Network Based on Machine Learning. Microprocess. Microsyst. 2021, 80, 103637. [Google Scholar] [CrossRef]
  3. Hamami, L.; Nassereddine, B. Application of wireless sensor networks in the field of irrigation: A review. Comput. Electron. Agric. 2020, 179, 105782. [Google Scholar] [CrossRef]
  4. Camcı, B.; Ersoy, C.; Kaynak, H. Abnormal respiratory event detection in sleep: A prescreening system with smart wearables. J. Biomed. Inform. 2019, 95, 103218. [Google Scholar] [CrossRef]
  5. Chen, L.; Li, R.; Zhang, H.; Tian, L.; Chen, N. Intelligent fall detection method based on accelerometer data from a wrist-worn smart watch. Measurement 2019, 140, 215–226. [Google Scholar] [CrossRef]
  6. Tomar, D.; Prasad, Y.; Thakur, M.K.; Biswas, K.K. Feature Selection Using Autoencoders. In Proceedings of the 2017 International Conference on Machine Learning and Data Science, MLDS 2017, Noida, India, 14–15 December 2017; IEEE: Piscataway, NJ, USA, 2018; pp. 56–60. [Google Scholar] [CrossRef]
  7. Perez-Vega, R.; Kaartemo, V.; Lages, C.R.; Razavi, N.B.; Männistö, J. Reshaping the contexts of online customer engagement behavior via artificial intelligence: A conceptual framework. J. Bus. Res. 2021, 129, 902–910. [Google Scholar] [CrossRef]
  8. Königstorfer, F.; Thalmann, S. Applications of Artificial Intelligence in commercial banks—A research agenda for behavioral finance. J. Behav. Exp. Financ. 2020, 27, 100352. [Google Scholar] [CrossRef]
  9. El Barachi, M.; Kamoun, F.; Ferdaos, J.; Makni, M.; Amri, I. An artificial intelligence based crowdsensing solution for on-demand accident scene monitoring. Procedia Comput. Sci. 2020, 170, 303–310. [Google Scholar] [CrossRef]
  10. Li, C. Biodiversity assessment based on artificial intelligence and neural network algorithms. Microprocess. Microsyst. 2020, 79, 103321. [Google Scholar] [CrossRef]
  11. Hervieux, S.; Wheatley, A. Perceptions of artificial intelligence: A survey of academic librarians in Canada and the United States. J. Acad. Librariansh. 2021, 47, 102270. [Google Scholar] [CrossRef]
  12. Hu, Q.; Lu, Y.; Pan, Z.; Gong, Y.; Yang, Z. Can AI artifacts influence human cognition? The effects of artificial autonomy in intelligent personal assistants. Int. J. Inf. Manag. 2021, 56, 102250. [Google Scholar] [CrossRef]
  13. Chang, A.C. Artificial intelligence and COVID-19: Present state and future vision. Intell. Based Med. 2020, 3–4, 100012. [Google Scholar] [CrossRef] [PubMed]
  14. Chimamiwa, G.; Alirezaie, M.; Pecora, F.; Loutfi, A. Multi-sensor dataset of human activities in a smart home environment. Data Brief 2021, 34, 106632. [Google Scholar] [CrossRef] [PubMed]
  15. Hossain, M.S.; Gadagamma, C.K.; Bhattacharya, Y.; Numada, M.; Morimura, N.; Meguro, K. Integration of smart watch and Geographic Information System (GIS) to identify post-earthquake critical rescue area part. I. Development of the system. Prog. Disaster Sci. 2020, 7, 100116. [Google Scholar] [CrossRef]
  16. Lee, S.M.; Yoon, S.M.; Cho, H. Human Activity Recognition from Accelerometer Data Using Convolutional Neural Network. In Proceedings of the IEEE International Conference on Big Data and Smart Computing, BigComp 2017, Jeju, Republic of Korea, 13–16 February 2017; pp. 131–134. [Google Scholar] [CrossRef]
  17. Kusuma, W.A.; Minarno, A.E.; Wibowo, M.S. Triaxial Accelerometer-Based Human Activity Recognition Using 1D Convolution neural network. In Proceedings of the 2020 International Workshop on Big Data and Information Security, IWBIS 2020, Depok, Indonesia, 17–18 October 2022; pp. 53–57. [Google Scholar] [CrossRef]
  18. Wesonga, S.; Tahira, N.J.; Park, J.S. Performance Comparison of Human Activity Recognition for Unmanned Retails. In Proceedings of the International Conference on Control, Automation and Systems, Jeju, Republic of Korea, 27 November–1 December 2022; pp. 333–336. [Google Scholar] [CrossRef]
Figure 1. Missing values elimination program.
Figure 2. Falling precision with 10-fold cross-validation: (a) AEC accuracy; (b) AEC MAE.
Figure 3. Architecture with optimization.
Figure 4. 2SFNN falling precision with 10-fold cross-validation (no optimization): (a) accuracy; (b) MAE; (c) MSE; (d) RMSE.
Figure 5. 3SFNN falling precision with 10-fold cross-validation (no optimization): (a) accuracy; (b) MAE; (c) MSE; (d) RMSE.
Figure 6. The effect of epoch.
Figure 7. Confusion matrix.
Table 1. The distribution of activity data.

Activity ID | Description | Data | AEC Training (80%) | AEC Testing (20%) | FFNN Training (80%) | FFNN Validation (20%) | FFNN Testing (20%) | FFNN-PSO Training (70%) | FFNN-PSO Validation (20%) | FFNN-PSO Testing (10%)
1 | Falling forward using hands | 159,333 | 127,466.4 | 31,866.6 | 127,466.4 | 15,933.3 | 111,533.1 | 111,533.1 | 31,866.6 | 15,933.3
2 | Falling forward using knees | 161,176 | 128,940.8 | 32,235.2 | 128,940.8 | 16,117.6 | 112,822.2 | 112,823.2 | 32,235.2 | 16,117.6
3 | Falling backward | 158,528 | 126,822.4 | 31,705.6 | 126,822.4 | 15,852.8 | 110,969.6 | 110,969.6 | 31,705.6 | 15,852.8
4 | Falling sideward | 158,090 | 126,472 | 31,618 | 126,472 | 15,809 | 110,663 | 110,663 | 31,618 | 15,809
Table 2. FFNN model configurations and parameters.

Parameter | Amount
Number of layers | 4
Neurons | 40, 30, 10, 10
Train metric | MSE
Epochs | 100
MSE goal | 1 × 10⁻⁵⁰
Training set | 80%
Validation set | 10%
Test set | 10%
Minimum gradient | 1
Maximum training time | 60
Table 3. FFNN-PSO model configurations and parameters.

Parameter | Amount
Number of layers | 4
Neurons | 40, 30, 10, 10
Train metric | MSE
Training optimizer | PSO
Epochs | 100
MSE goal | 1 × 10⁻¹⁰⁰
Training set | 70%
Validation set | 20%
Test set | 10%
Minimum gradient | 1 × 10⁻²⁰⁰
Maximum training time | 0.5 × 60
Table 4. The final average results of the performance of the proposed models.

Algorithm | Stage | Accuracy | Avg. Accuracy | MAE | Avg. MAE | MSE | Avg. MSE | RMSE | Avg. RMSE
FFNN | 1 | 97.78 | 98.0275 | 0.0275 | 0.0262 | 0.031 | 0.0318 | 0.176 | 0.177
FFNN | 2 | 98.15 | | 0.0170 | | 0.028 | | 0.167 |
FFNN | 3 | 98.16 | | 0.0256 | | 0.0319 | | 0.177 |
FFNN | 4 | 98 | | 0.0287 | | 0.0365 | | 0.191 |
FFNN-PSO | 1 | 98.95 | 98.615 | 0.016 | 0.0193 | 0.0168 | 0.0253 | 0.129 | 0.150
FFNN-PSO | 2 | 97.94 | | 0.028 | | 0.0266 | | 0.137 |
FFNN-PSO | 3 | 98.45 | | 0.018 | | 0.0216 | | 0.147 |
FFNN-PSO | 4 | 97.76 | | 0.0176 | | 0.0363 | | 0.190 |
AEC | – | 52.8 | 52.8% | 37.108 | 37.3 | 7.48 × 10³ | 7.48 × 10³ | 86.488 | 86.488
Table 5. The final average results of the performance of the proposed models compared with similar work.

Algorithm | Accuracy | MAE | MSE
FFNN | 98.0275% | 0.0262 | 0.0318
FFNN-PSO | 98.615% | 0.0193 | 0.0253
AEC | 52.8% | 37.3 | 7.48 × 10³
(1D) Convolutional Neural Network (CNN) [16] | 92.71% | – | –
Random Forest [16] | 89.10% | – | –
ReLU CNN [17] | 95.9% | – | –
GRU [18] | 96% | – | –
LSTM [18] | 90% | – | –
Bi-LSTM [18] | 91% | – | –