Biosensor-Driven IoT Wearables for Accurate Body Motion Tracking and Localization
Abstract
1. Introduction
- Implemented separate denoising filters for inertial and GPS sensors, significantly enhancing data cleanliness and accuracy.
- Developed a robust methodology for concurrent feature extraction from human locomotion and localization data, improving processing efficiency and reliability.
- Established dedicated processing streams for localization and locomotion activities, allowing for more precise activity recognition by reducing computational interference.
- Applied a novel data augmentation technique to substantially increase the dataset size of activity samples, enhancing the robustness and generalizability of the recognition algorithms.
- Utilized an advanced feature optimization algorithm to adjust the feature vector distribution towards normality, significantly improving the accuracy of activity recognition.
2. Related Work
2.1. Activity Recognition Using Inertial Sensors
2.2. Activity Recognition Using Computer Vision and Image Processing Techniques
3. Proposed System Methodology
3.1. Signal Denoising
3.2. Signal Windowing and Segmentation
3.3. Feature Extraction
3.3.1. Feature Extraction for Physical Activity
Shannon Entropy
Linear Prediction Cepstral Coefficients (LPCCs)
Skewness
Kurtosis
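A minimal sketch of how three of these per-window statistics can be computed with NumPy/SciPy (the LPCC step, which requires a separate linear-prediction analysis, is omitted for brevity; the histogram bin count and window length are illustrative choices, not the paper's tuned values):

```python
import numpy as np
from scipy.stats import skew, kurtosis

def shannon_entropy(window, bins=16):
    """Shannon entropy of a 1-D signal window, estimated from a histogram."""
    hist, _ = np.histogram(window, bins=bins, density=False)
    p = hist / hist.sum()
    p = p[p > 0]                      # drop empty bins (0 * log 0 := 0)
    return -np.sum(p * np.log2(p))

def window_features(window):
    """Entropy, skewness, and kurtosis for one sensor-signal window."""
    return np.array([shannon_entropy(window), skew(window), kurtosis(window)])

rng = np.random.default_rng(0)
w = rng.normal(size=256)              # synthetic accelerometer window
feats = window_features(w)
```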
3.3.2. Feature Extraction for Localization Activity
Mel-Frequency Cepstral Coefficients (MFCCs)
Step Detection
Heading Angle
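The step-detection and heading-angle features can be sketched as below; the peak-detection thresholds and the gravity-offset handling are illustrative assumptions rather than the paper's tuned values, and the MFCC pipeline is omitted for brevity:

```python
import numpy as np
from scipy.signal import find_peaks

def count_steps(acc, fs=50.0, min_height=1.5, min_gap_s=0.3):
    """Count steps as peaks in the accelerometer magnitude (m/s^2 above gravity).
    Thresholds here are illustrative, not the paper's tuned values."""
    mag = np.linalg.norm(acc, axis=1) - 9.81          # remove gravity offset
    peaks, _ = find_peaks(mag, height=min_height, distance=int(min_gap_s * fs))
    return len(peaks)

def heading_angle(mx, my):
    """Heading (radians, from magnetic north) from horizontal magnetometer axes."""
    return np.arctan2(my, mx)

# synthetic walk: 2 Hz step rhythm for 5 s, sampled at 50 Hz
t = np.arange(0, 5, 1 / 50.0)
acc = np.zeros((t.size, 3))
acc[:, 2] = 9.81 + 3.0 * np.maximum(np.sin(2 * np.pi * 2.0 * t), 0.0)
steps = count_steps(acc)
```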
3.4. Feature Selection Using Variance Threshold
Algorithm 1: Variance Threshold Feature Selection
1: Input: Dataset D with m features f1, f2, …, fm; variance threshold value τ.
2: Output: A subset of features whose variance is above τ.
3: Initialization: Create an empty list R to store the retained features.
4: Feature Selection: for each feature fi in D: compute the variance vi of fi; if vi > τ, add fi to R; end for.
5: Return: Return the list R as the subset of features with variance above τ.
6: End
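Algorithm 1 amounts to a few lines of NumPy; `variance_threshold` is our name for this sketch, similar in spirit to scikit-learn's `VarianceThreshold` selector:

```python
import numpy as np

def variance_threshold(X, tau):
    """Return the indices of the feature columns of X whose variance
    exceeds tau, together with the reduced matrix (mirrors Algorithm 1)."""
    variances = X.var(axis=0)            # per-feature variance v_i
    keep = np.where(variances > tau)[0]  # retain f_i only if v_i > tau
    return keep, X[:, keep]

X = np.array([[1.0, 0.0, 10.0],
              [2.0, 0.0, 20.0],
              [3.0, 0.0, 30.0]])         # middle feature is constant
idx, X_sel = variance_threshold(X, tau=0.1)
```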
3.5. Feature Optimization via Yeo–Johnson Power Transformation
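A hedged sketch of this step using SciPy's `yeojohnson`, which fits the power parameter λ by maximum likelihood; applying it column-wise is our assumption about how the per-feature transform is organized:

```python
import numpy as np
from scipy.stats import yeojohnson, skew

def optimize_features(X):
    """Apply a Yeo-Johnson power transform to each feature column.
    SciPy fits lambda by maximum likelihood; unlike Box-Cox, the
    transform is defined for negative values as well."""
    Xt = np.empty_like(X, dtype=float)
    lambdas = []
    for j in range(X.shape[1]):
        Xt[:, j], lam = yeojohnson(X[:, j])
        lambdas.append(lam)
    return Xt, lambdas

rng = np.random.default_rng(1)
X = rng.exponential(scale=2.0, size=(500, 3))   # strongly right-skewed features
Xt, lambdas = optimize_features(X)
```

After the transform, each column's distribution is pulled toward normality, which is the stated goal of this optimization step.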
3.6. Data Augmentation
3.7. Proposed Multi-Layer Perceptron Architecture
3.7.1. Architecture Overview
- Input Layer: The size of the input layer directly corresponds to the number of features extracted and optimized from the sensor data. In our study, the dimensionality of the input layer was adjusted based on the dataset being processed, aligning it with the feature vector size derived after optimization.
- Hidden Layers: We include three hidden layers. The first and second hidden layers are each composed of 64 neurons, while the third hidden layer contains 32 neurons. We utilized the ReLU (rectified linear unit) activation function across these layers to introduce necessary nonlinearity into the model, which is crucial for learning the complex patterns present in the activity data.
- Output Layer: The size of the output layer varies with the dataset; it comprises nine neurons for the Extrasensory dataset and ten for the Huawei dataset, matching the number of activity classes in each. The softmax activation function is employed in the output layer to provide a probability distribution over the predicted activity classes, facilitating accurate activity classification.
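The architecture described above can be sketched as a plain NumPy forward pass. The weights below are randomly initialized for illustration only (He-style initialization is our choice; the paper does not state one), and training is omitted here:

```python
import numpy as np

rng = np.random.default_rng(42)

def relu(x):
    return np.maximum(x, 0.0)

def softmax(x):
    e = np.exp(x - x.max(axis=1, keepdims=True))   # numerically stabilized
    return e / e.sum(axis=1, keepdims=True)

def init_mlp(n_features, n_classes):
    """Hidden layers of 64, 64, and 32 neurons, as described in the text."""
    sizes = [n_features, 64, 64, 32, n_classes]
    return [(rng.normal(0, np.sqrt(2.0 / m), size=(m, n)), np.zeros(n))
            for m, n in zip(sizes[:-1], sizes[1:])]

def forward(params, X):
    """ReLU on the hidden layers, softmax on the output layer."""
    h = X
    for W, b in params[:-1]:
        h = relu(h @ W + b)
    W, b = params[-1]
    return softmax(h @ W + b)

params = init_mlp(n_features=40, n_classes=9)   # e.g. Extrasensory: 9 classes
probs = forward(params, rng.normal(size=(5, 40)))
```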
3.7.2. Training Process
- Batch Size: We processed 32 samples per batch, optimizing the computational efficiency without sacrificing the ability to learn complex patterns.
- Epochs: The network was trained for up to 100 epochs. To combat overfitting, we implemented early stopping, which halted training if the validation loss did not improve for 10 consecutive epochs.
- Validation Split: To ensure robust model evaluation and tuning, 20% of our training data were set aside as a validation set. This allowed us to monitor the model’s performance and make necessary adjustments to the hyperparameters in real time.
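The early-stopping schedule above (patience of 10 epochs within at most 100) can be sketched framework-agnostically; `val_losses` stands in for the per-epoch validation losses a real training loop over 32-sample batches would produce:

```python
import numpy as np

def train_with_early_stopping(val_losses, max_epochs=100, patience=10):
    """Replay a sequence of validation losses and report when training
    would halt: after `patience` consecutive epochs with no improvement."""
    best, best_epoch, waited = np.inf, -1, 0
    for epoch, loss in enumerate(val_losses[:max_epochs]):
        if loss < best:                 # improvement: reset the counter
            best, best_epoch, waited = loss, epoch, 0
        else:
            waited += 1
            if waited >= patience:      # 10 epochs with no improvement
                break
    return best_epoch, epoch + 1        # best epoch, epochs actually run

# loss improves for 20 epochs, then plateaus slightly above the best value
losses = [1.0 / (e + 1) for e in range(20)] + [0.06] * 50
best_epoch, ran = train_with_early_stopping(losses)
```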
3.7.3. Model Application and Evaluation
- Confusion matrix: For each dataset, a confusion matrix was generated to visually represent the performance of the model across all activity classes. The confusion matrix [106,107] helps in identifying not only the instances of correct predictions but also the types of errors made by the model, such as false positives and false negatives. This detailed view allows us to pinpoint the specific activities where the model may require further tuning.
- ROC Curves: We also plotted receiver operating characteristic (ROC) curves for each class within the datasets. The ROC curves provide a graphical representation of the trade-off between the true positive rate and the false positive rate at various threshold settings. The area under the ROC curve (AUC) was calculated to quantify the model’s ability to discriminate between the classes under study.
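A minimal sketch of the confusion-matrix bookkeeping and the per-class precision and recall that can be read directly off it (the ROC/AUC computation is omitted here):

```python
import numpy as np

def confusion_matrix(y_true, y_pred, n_classes):
    """Rows index the true class, columns the predicted class."""
    cm = np.zeros((n_classes, n_classes), dtype=int)
    for t, p in zip(y_true, y_pred):
        cm[t, p] += 1
    return cm

def per_class_metrics(cm):
    """Per-class precision and recall derived from the confusion matrix."""
    tp = np.diag(cm).astype(float)
    precision = tp / np.maximum(cm.sum(axis=0), 1)   # column sums = predicted
    recall = tp / np.maximum(cm.sum(axis=1), 1)      # row sums = actual
    return precision, recall

y_true = [0, 0, 1, 1, 2, 2]
y_pred = [0, 1, 1, 1, 2, 0]
cm = confusion_matrix(y_true, y_pred, n_classes=3)
precision, recall = per_class_metrics(cm)
```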
4. Experimental Setup
4.1. Datasets Descriptions
4.1.1. The Extrasensory Dataset
4.1.2. The Sussex Huawei Dataset (SHL)
4.2. First Experiment: Confusion Matrix
4.3. Second Experiment: Precision, Recall, and F1-Score
4.4. Third Experiment: Receiver Operating Characteristics (ROC Curve)
4.5. Fourth Experiment: Comparison with Other Techniques
5. Computational Analysis
6. Discussion and Limitations
- Detailed Analysis of Key Findings
- GPS limitations: The GPS technology we utilize, while generally effective, can suffer from significant inaccuracies in environments such as urban canyons or indoors due to signal blockage and multipath interference. These environmental constraints can affect the system’s ability to precisely track and localize activities, particularly in complex urban settings.
- Data diversity and completeness: The dataset employed for training our system, though extensive, does not encompass the entire spectrum of human activities, particularly those that are irregular or occur less frequently. This limitation could reduce the model’s ability to generalize to activities not represented in the training phase, potentially impacting its applicability in varied real-world scenarios.
- Performance across different hardware: Our system was primarily tested and optimized on a specific computational setup. When considering deployment across diverse real-world devices such as smartphones, smartwatches, or other IoT wearables, variations in processing power, storage capacity, and sensor accuracy must be addressed. The heterogeneity of these devices could result in inconsistent performance, with higher-end devices potentially delivering more accurate results than lower-end counterparts.
- Scalability and real-time processing: Scaling our system to handle real-time data processing across multiple devices simultaneously presents another significant challenge. The computational demands of processing large volumes of sensor data in real time necessitate not only robust algorithms but also hardware capable of efficiently supporting these operations.
- Privacy and security concerns: As with any system handling sensitive personal data, ensuring privacy and security is paramount. Our current model must incorporate more advanced encryption methods and privacy-preserving techniques to safeguard user data against potential breaches or unauthorized access.
7. Conclusions and Future Work
Author Contributions
Funding
Acknowledgments
Conflicts of Interest
References
- Qi, M.; Cui, S.; Chang, X.; Xu, Y.; Meng, H.; Wang, Y.; Yin, T. Multi-region Nonuniform Brightness Correction Algorithm Based on L-Channel Gamma Transform. Secur. Commun. Netw. 2022, 2022, 2675950. [Google Scholar] [CrossRef]
- Li, R.; Peng, B. Implementing Monocular Visual-Tactile Sensors for Robust Manipulation. Think. Ski. Creat. 2022, 2022, 9797562. [Google Scholar] [CrossRef] [PubMed]
- Babaei, N.; Hannani, N.; Dabanloo, N.J.; Bahadori, S. A Systematic Review of the Use of Commercial Wearable Activity Trackers for Monitoring Recovery in Individuals Undergoing Total Hip Replacement Surgery. Think. Ski. Creat. 2022, 2022, 9794641. [Google Scholar] [CrossRef] [PubMed]
- Zhao, Q.; Yan, S.; Zhang, B.; Fan, K.; Zhang, J.; Li, W. An On-Chip Viscoelasticity Sensor for Biological Fluids. Think. Ski. Creat. 2023, 4, 6. [Google Scholar] [CrossRef] [PubMed]
- Qu, J.; Mao, B.; Li, Z.; Xu, Y.; Zhou, K.; Cao, X.; Fan, Q.; Xu, M.; Liang, B.; Liu, H.; et al. Recent Progress in Advanced Tactile Sensing Technologies for Soft Grippers. Adv. Funct. Mater. 2023, 33, 2306249. [Google Scholar] [CrossRef]
- Khan, D.; Alonazi, M.; Abdelhaq, M.; Al Mudawi, N.; Algarni, A.; Jalal, A.; Liu, H. Robust human locomotion and localization activity recognition over multisensory. Front. Physiol. 2024, 15, 1344887. [Google Scholar] [CrossRef] [PubMed]
- Jalal, A.; Nadeem, A.; Bobasu, S. Human Body Parts Estimation and Detection for Physical Sports Movements. In Proceedings of the 2019 2nd International Conference on Communication, Computing and Digital systems (C-CODE), Islamabad, Pakistan, 6–7 March 2019; pp. 104–109. [Google Scholar]
- Arshad, M.H.; Bilal, M.; Gani, A. Human Activity Recognition: Review, Taxonomy and Open Challenges. Sensors 2022, 22, 6463. [Google Scholar] [CrossRef] [PubMed]
- Elbayoudi, A.; Lotfi, A.; Langensiepen, C.; Appiah, K. Modelling and Simulation of Activities of Daily Living Representing an Older Adult’s Behaviour. In Proceedings of the 8th ACM International Conference on Pervasive Technologies Related to Assistive Environments (PETRA ’15), Corfu, Greece, 1–3 July 2015; Article 67. Association for Computing Machinery: New York, NY, USA, 2015; pp. 1–8. [Google Scholar]
- Azmat, U.; Jalal, A. Smartphone Inertial Sensors for Human Locomotion Activity Recognition based on Template Matching and Codebook Generation. In Proceedings of the 2021 International Conference on Communication Technologies (ComTech), Rawalpindi, Pakistan, 21 September 2021; pp. 109–114. [Google Scholar]
- Lara, O.D.; Labrador, M.A. A Survey on Human Activity Recognition using Wearable Sensors. IEEE Commun. Surv. Tutor. 2013, 15, 1192–1209. [Google Scholar] [CrossRef]
- Serpush, F.; Menhaj, M.B.; Masoumi, B.; Karasfi, B. Wearable Sensor-Based Human Activity Recognition in the Smart Healthcare System. Comput. Intell. Neurosci. 2022, 2022, 1–31. [Google Scholar] [CrossRef]
- Yan, L.; Shi, Y.; Wei, M.; Wu, Y. Multi-feature fusing local directional ternary pattern for facial expressions signal recognition based on video communication system. Alex. Eng. J. 2023, 63, 307–320. [Google Scholar] [CrossRef]
- Cai, L.; Yan, S.; Ouyang, C.; Zhang, T.; Zhu, J.; Chen, L.; Ma, X.; Liu, H. Muscle synergies in joystick manipulation. Front. Physiol. 2023, 14, 1282295. [Google Scholar] [CrossRef] [PubMed]
- Li, J.; Li, J.; Wang, C.; Verbeek, F.J.; Schultz, T.; Liu, H. Outlier detection using iterative adaptive mini-minimum spanning tree generation with applications on medical data. Front. Physiol. 2023, 14, 1233341. [Google Scholar] [CrossRef] [PubMed]
- Wang, F.; Ma, M.; Zhang, X. Study on a Portable Electrode Used to Detect the Fatigue of Tower Crane Drivers in Real Construction Environment. IEEE Trans. Instrum. Meas. 2024, 73, 1–14. [Google Scholar] [CrossRef]
- Yu, J.; Dong, X.; Li, Q.; Lu, J.; Ren, Z. Adaptive Practical Optimal Time-Varying Formation Tracking Control for Disturbed High-Order Multi-Agent Systems. IEEE Trans. Circuits Syst. I Regul. Pap. 2022, 69, 2567–2578. [Google Scholar] [CrossRef]
- He, H.; Chen, Z.; Liu, H.; Liu, X.; Guo, Y.; Li, J. Practical Tracking Method based on Best Buddies Similarity. Think. Ski. Creat. 2023, 4, 50. [Google Scholar] [CrossRef] [PubMed]
- Hou, X.; Zhang, L.; Su, Y.; Gao, G.; Liu, Y.; Na, Z.; Xu, Q.; Ding, T.; Xiao, L.; Li, L.; et al. A space crawling robotic bio-paw (SCRBP) enabled by triboelectric sensors for surface identification. Nano Energy 2023, 105, 108013. [Google Scholar] [CrossRef]
- Hou, X.; Xin, L.; Fu, Y.; Na, Z.; Gao, G.; Liu, Y.; Xu, Q.; Zhao, P.; Yan, G.; Su, Y.; et al. A self-powered biomimetic mouse whisker sensor (BMWS) aiming at terrestrial and space objects perception. Nano Energy 2023, 118, 109034. [Google Scholar] [CrossRef]
- Ma, S.; Chen, Y.; Yang, S.; Liu, S.; Tang, L.; Li, B.; Li, Y. The Autonomous Pipeline Navigation of a Cockroach Bio-robot with Enhanced Walking Stimuli. Think. Ski. Creat. 2023, 4, 0067. [Google Scholar] [CrossRef] [PubMed]
- Bahadori, S.; Williams, J.M.; Collard, S.; Swain, I. Can a Purposeful Walk Intervention with a Distance Goal Using an Activity Monitor Improve Individuals’ Daily Activity and Function Post Total Hip Replacement Surgery? A Randomized Pilot Trial. Think. Ski. Creat. 2023, 4, 0069. [Google Scholar] [CrossRef]
- Hsu, Y.-L.; Yang, S.-C.; Chang, H.-C.; Lai, H.-C. Human Daily and Sport Activity Recognition Using a Wearable Inertial Sensor Network. IEEE Access 2018, 6, 31715–31728. [Google Scholar] [CrossRef]
- Abdel-Basset, M.; Hawash, H.; Chang, V.; Chakrabortty, R.K.; Ryan, M. Deep Learning for Heterogeneous Human Activity Recognition in Complex IoT Applications. IEEE Internet Things J. 2022, 9, 5653–5665. [Google Scholar] [CrossRef]
- Konak, S.; Turan, F.; Shoaib, M.; Incel, Ö.D. Feature Engineering for Activity Recognition from Wrist-worn Motion Sensors. In Proceedings of the International Conference on Pervasive and Embedded Computing and Communication Systems, Lisbon, Portugal, 25–27 July 2016. [Google Scholar]
- Chetty, G.; White, M.; Akther, F. Smart Phone Based Data Mining for Human Activity Recognition. Procedia Comput. Sci. 2016, 46, 1181–1187. [Google Scholar] [CrossRef]
- Ehatisham-ul-Haq, M.; Azam, M.A. Opportunistic sensing for inferring in-the-wild human contexts based on activity pattern recognition using smart computing. Future Gener. Comput. Syst. 2020, 106, 374–392. [Google Scholar] [CrossRef]
- Zhang, X.; Huang, D.; Li, H.; Zhang, Y.; Xia, Y.; Liu, J. Self-training maximum classifier discrepancy for EEG emotion recognition. CAAI Trans. Intell. Technol. 2023, 8, 1480–1491. [Google Scholar] [CrossRef]
- Wen, C.; Huang, Y.; Zheng, L.; Liu, W.; Davidson, T.N. Transmit Waveform Design for Dual-Function Radar-Communication Systems via Hybrid Linear-Nonlinear Precoding. IEEE Trans. Signal Process. 2023, 71, 2130–2145. [Google Scholar] [CrossRef]
- Wen, C.; Huang, Y.; Davidson, T.N. Efficient Transceiver Design for MIMO Dual-Function Radar-Communication Systems. IEEE Trans. Signal Process. 2023, 71, 1786–1801. [Google Scholar] [CrossRef]
- Yao, Y.; Shu, F.; Li, Z.; Cheng, X.; Wu, L. Secure Transmission Scheme Based on Joint Radar and Communication in Mobile Vehicular Networks. IEEE Trans. Intell. Transp. Syst. 2023, 24, 10027–10037. [Google Scholar] [CrossRef]
- Jalal, A.; Quaid, M.A.K.; Kim, K. A Wrist Worn Acceleration Based Human Motion Analysis and Classification for Ambient Smart Home System. J. Electr. Eng. Technol. 2019, 14, 1733–1739. [Google Scholar] [CrossRef]
- Hu, Z.; Ren, L.; Wei, G.; Qian, Z.; Liang, W.; Chen, W.; Lu, X.; Ren, L.; Wang, K. Energy Flow and Functional Behavior of Individual Muscles at Different Speeds During Human Walking. IEEE Trans. Neural Syst. Rehabil. Eng. 2023, 31, 294–303. [Google Scholar] [CrossRef]
- Wang, K.; Boonpratatong, A.; Chen, W.; Ren, L.; Wei, G.; Qian, Z.; Lu, X.; Zhao, D. The Fundamental Property of Human Leg During Walking: Linearity and Nonlinearity. IEEE Trans. Neural Syst. Rehabil. Eng. 2023, 31, 4871–4881. [Google Scholar] [CrossRef]
- Jalal, A.; Quaid, M.A.K.; Hasan, A.S. Wearable Sensor-Based Human Behavior Understanding and Recognition in Daily Life for Smart Environments. In Proceedings of the 2018 International Conference on Frontiers of Information Technology (FIT), Islamabad, Pakistan, 17–19 December 2018; pp. 105–110. [Google Scholar]
- Zhao, Z.; Xu, G.; Zhang, N.; Zhang, Q. Performance analysis of the hybrid satellite-terrestrial relay network with opportunistic scheduling over generalized fading channels. IEEE Trans. Veh. Technol. 2022, 71, 2914–2924. [Google Scholar] [CrossRef]
- Zhu, T.; Ding, H.; Wang, C.; Liu, Y.; Xiao, S.; Yang, G.; Yang, B. Parameters Calibration of the GISSMO Failure Model for SUS301L-MT. Chin. J. Mech. Eng. 2023, 36, 1–12. [Google Scholar] [CrossRef]
- Qu, J.; Yuan, Q.; Li, Z.; Wang, Z.; Xu, F.; Fan, Q.; Zhang, M.; Qian, X.; Wang, X.; Wang, X.; et al. All-in-one strain-triboelectric sensors based on environment-friendly ionic hydrogel for wearable sensing and underwater soft robotic grasping. Nano Energy 2023, 111, 108387. [Google Scholar] [CrossRef]
- Zhao, S.; Liang, W.; Wang, K.; Ren, L.; Qian, Z.; Chen, G.; Lu, X.; Zhao, D.; Wang, X.; Ren, L. A Multiaxial Bionic Ankle Based on Series Elastic Actuation with a Parallel Spring. IEEE Trans. Ind. Electron. 2023, 71, 7498–7510. [Google Scholar] [CrossRef]
- Liang, X.; Huang, Z.; Yang, S.; Qiu, L. Device-Free Motion & Trajectory Detection via RFID. ACM Trans. Embed. Comput. Syst. 2018, 17, 1–27. [Google Scholar] [CrossRef]
- Liu, C.; Wu, T.; Li, Z.; Ma, T.; Huang, J. Robust Online Tensor Completion for IoT Streaming Data Recovery. IEEE Trans. Neural Netw. Learn. Syst. 2022, 34, 10178–10192. [Google Scholar] [CrossRef]
- Nadeem, A.; Jalal, A.; Kim, K. Automatic human posture estimation for sport activity recognition with robust body parts detection and entropy markov model. Multimed. Tools Appl. 2021, 80, 21465–21498. [Google Scholar] [CrossRef]
- Yu, J.; Lu, L.; Chen, Y.; Zhu, Y.; Kong, L. An Indirect Eavesdropping Attack of Keystrokes on Touch Screen through Acoustic Sensing. IEEE Trans. Mob. Comput. 2021, 20, 337–351. [Google Scholar] [CrossRef]
- Bashar, S.K.; Al Fahim, A.; Chon, K.H. Smartphone-Based Human Activity Recognition with Feature Selection and Dense Neural Network. In Proceedings of the 2020 42nd Annual International Conference of the IEEE Engineering in Medicine & Biology Society (EMBC), Montreal, QC, Canada, 20–24 July 2020; pp. 5888–5891. [Google Scholar]
- Xie, L.; Tian, J.; Ding, G.; Zhao, Q. Human activity recognition method based on inertial sensor and barometer. In Proceedings of the 2018 IEEE International Symposium on Inertial Sensors and Systems (INERTIAL), Lake Como, Italy, 26–29 March 2018; pp. 1–4. [Google Scholar]
- Lee, S.-M.; Yoon, S.M.; Cho, H. Human activity recognition from accelerometer data using Convolutional Neural Network. In Proceedings of the 2017 IEEE International Conference on Big Data and Smart Computing (BigComp), Jeju, Republic of Korea, 13–16 February 2017; pp. 131–134. [Google Scholar]
- Mekruksavanich, S.; Jitpattanakul, A. Recognition of Real-life Activities with Smartphone Sensors using Deep Learning Approaches. In Proceedings of the 2021 IEEE 12th International Conference on Software Engineering and Service Science (ICSESS), Beijing, China, 20–22 August 2021; pp. 243–246. [Google Scholar]
- Cong, R.; Sheng, H.; Yang, D.; Cui, Z.; Chen, R. Exploiting Spatial and Angular Correlations with Deep Efficient Transformers for Light Field Image Super-Resolution. IEEE Trans. Multimed. 2024, 26, 1421–1435. [Google Scholar] [CrossRef]
- Liu, H.; Yuan, H.; Liu, Q.; Hou, J.; Zeng, H.; Kwong, S. A Hybrid Compression Framework for Color Attributes of Static 3D Point Clouds. IEEE Trans. Circuits Syst. Video Technol. 2022, 32, 1564–1577. [Google Scholar] [CrossRef]
- Liu, Q.; Yuan, H.; Hamzaoui, R.; Su, H.; Hou, J.; Yang, H. Reduced Reference Perceptual Quality Model with Application to Rate Control for Video-Based Point Cloud Compression. IEEE Trans. Image Process. 2021, 30, 6623–6636. [Google Scholar] [CrossRef]
- Mutegeki, R.; Han, D.S. A CNN-LSTM Approach to Human Activity Recognition. In Proceedings of the International Conference on Artificial Intelligence and Information Communications (ICAIIC), Fukuoka, Japan, 19–21 February 2020; pp. 362–366. [Google Scholar]
- Liu, A.-A.; Zhai, Y.; Xu, N.; Nie, W.; Li, W.; Zhang, Y. Region-Aware Image Captioning via Interaction Learning. IEEE Trans. Circuits Syst. Video Technol. 2022, 32, 3685–3696. [Google Scholar] [CrossRef]
- Jaramillo, I.E.; Jeong, J.G.; Lopez, P.R.; Lee, C.-H.; Kang, D.-Y.; Ha, T.-J.; Oh, J.-H.; Jung, H.; Lee, J.H.; Lee, W.H.; et al. Real-Time Human Activity Recognition with IMU and Encoder Sensors in Wearable Exoskeleton Robot via Deep Learning Networks. Sensors 2022, 22, 9690. [Google Scholar] [CrossRef]
- Hussain, I.; Jany, R.; Boyer, R.; Azad, A.; Alyami, S.A.; Park, S.J.; Hasan, M.; Hossain, A. An Explainable EEG-Based Human Activity Recognition Model Using Machine-Learning Approach and LIME. Sensors 2023, 23, 7452. [Google Scholar] [CrossRef]
- Garcia-Gonzalez, D.; Rivero, D.; Fernandez-Blanco, E.; Luaces, M.R. New machine learning approaches for real-life human activity recognition using smartphone sensor-based data. Knowl. Based Syst. 2023, 262, 110260. [Google Scholar] [CrossRef]
- Zhang, J.; Zhu, C.; Zheng, L.; Xu, K. ROSEFusion: Random optimization for online dense reconstruction under fast camera motion. ACM Trans. Graph. 2021, 40, 1–17. [Google Scholar] [CrossRef]
- Zhang, J.; Tang, Y.; Wang, H.; Xu, K. ASRO-DIO: Active Subspace Random Optimization Based Depth Inertial Odometry. IEEE Trans. Robot. 2022, 39, 1496–1508. [Google Scholar] [CrossRef]
- She, Q.; Hu, R.; Xu, J.; Liu, M.; Xu, K.; Huang, H. Learning High-DOF Reaching-and-Grasping via Dynamic Representation of Gripper-Object Interaction. ACM Trans. Graph. 2022, 41, 1–14. [Google Scholar] [CrossRef]
- Xu, J.; Zhang, X.; Park, S.H.; Guo, K. The Alleviation of Perceptual Blindness During Driving in Urban Areas Guided by Saccades Recommendation. IEEE Trans. Intell. Transp. Syst. 2022, 23, 16386–16396. [Google Scholar] [CrossRef]
- Xu, J.; Park, S.H.; Zhang, X.; Hu, J. The Improvement of Road Driving Safety Guided by Visual Inattentional Blindness. IEEE Trans. Intell. Transp. Syst. 2022, 23, 4972–4981. [Google Scholar] [CrossRef]
- Mao, Y.; Sun, R.; Wang, J.; Cheng, Q.; Kiong, L.C.; Ochieng, W.Y. New time-differenced carrier phase approach to GNSS/INS integration. GPS Solutions 2022, 26, 122. [Google Scholar] [CrossRef]
- Jalal, A.; Kim, Y. Dense depth maps-based human pose tracking and recognition in dynamic scenes using ridge data. In Proceedings of the 2014 11th IEEE International Conference on Advanced Video and Signal Based Surveillance (AVSS), Seoul, Republic of Korea, 26–29 August 2014; pp. 119–124. [Google Scholar]
- Mahmood, M.; Jalal, A.; Kim, K. WHITE STAG model: Wise human interaction tracking and estimation (WHITE) using spatio-temporal and angular-geometric (STAG) descriptors. Multimed. Tools Appl. 2020, 79, 6919–6950. [Google Scholar] [CrossRef]
- Chen, Z.; Cai, C.; Zheng, T.; Luo, J.; Xiong, J.; Wang, X. RF-Based Human Activity Recognition Using Signal Adapted Convolutional Neural Network. IEEE Trans. Mob. Comput. 2023, 22, 487–499. [Google Scholar] [CrossRef]
- Batool, M.; Alotaibi, S.S.; Alatiyyah, M.H.; Alnowaiser, K.; Aljuaid, H.; Jalal, A.; Park, J. Depth Sensors-Based Action Recognition using a Modified K-Ary Entropy Classifier. IEEE Access 2023, 11, 58578–58595. [Google Scholar] [CrossRef]
- Xu, J.; Pan, S.; Sun, P.Z.H.; Park, S.H.; Guo, K. Human-Factors-in-Driving-Loop: Driver Identification and Verification via a Deep Learning Approach using Psychological Behavioral Data. IEEE Trans. Intell. Transp. Syst. 2022, 24, 3383–3394. [Google Scholar] [CrossRef]
- Xu, J.; Guo, K.; Sun, P.Z. Driving Performance under Violations of Traffic Rules: Novice vs. Experienced Drivers. IEEE Trans. Intell. Veh. 2022, 7, 908–917. [Google Scholar] [CrossRef]
- Liu, H.; Xu, Y.; Chen, F. Sketch2Photo: Synthesizing photo-realistic images from sketches via global contexts. Eng. Appl. Artif. Intell. 2023, 117, 105608. [Google Scholar] [CrossRef]
- Pazhanirajan, S.; Dhanalakshmi, P. EEG Signal Classification using Linear Predictive Cepstral Coefficient Features. Int. J. Comput. Appl. 2013, 73, 28–31. [Google Scholar] [CrossRef]
- Fausto, F.; Cuevas, E.; Gonzales, A. A New Descriptor for Image Matching Based on Bionic Principles. Pattern Anal. Appl. 2017, 20, 1245–1259. [Google Scholar] [CrossRef]
- Alonazi, M.; Ansar, H.; Al Mudawi, N.; Alotaibi, S.S.; Almujally, N.A.; Alazeb, A.; Jalal, A.; Kim, J.; Min, M. Smart healthcare hand gesture recognition using CNN-based detector and deep belief network. IEEE Access 2023, 11, 84922–84933. [Google Scholar] [CrossRef]
- Jalal, A.; Mahmood, M. Students’ behavior mining in e-learning environment using cognitive processes with information technologies. Educ. Inf. Technol. 2019, 24, 2797–2821. [Google Scholar] [CrossRef]
- Quaid, M.A.K.; Jalal, A. Wearable sensors based human behavioral pattern recognition using statistical features and reweighted genetic algorithm. Multimed. Tools Appl. 2020, 79, 6061–6083. [Google Scholar] [CrossRef]
- Pervaiz, M.; Jalal, A. Artificial Neural Network for Human Object Interaction System Over Aerial Images. In Proceedings of the 2023 4th International Conference on Advancements in Computational Sciences (ICACS), Lahore, Pakistan, 20–22 February 2023; pp. 1–6. [Google Scholar]
- Jalal, A.; Kim, J.T.; Kim, T.-S. Development of a life logging system via depth imaging-based human activity recognition for smart homes. In Proceedings of the International Symposium on Sustainable Healthy Buildings, Seoul, Republic of Korea, 19 September 2012; pp. 91–95. [Google Scholar]
- Jalal, A.; Rasheed, Y. Collaboration achievement along with performance maintenance in video streaming. In Proceedings of the IEEE Conference on Interactive Computer Aided Learning, Villach, Austria, 23 December 2007; pp. 1–8. [Google Scholar]
- Muneeb, M.; Rustam, H.; Jalal, A. Automate Appliances via Gestures Recognition for Elderly Living Assistance. In Proceedings of the 2023 4th International Conference on Advancements in Computational Sciences (ICACS), Lahore, Pakistan, 20–22 February 2023; pp. 1–6. [Google Scholar]
- Szegedy, C.; Ioffe, S.; Vanhoucke, V.; Alemi, A. Inception-v4, inception–ResNet and the impact of residual connections on learning. In Proceedings of the Thirty-First AAAI Conference on Artificial Intelligence, San Francisco, CA, USA, 4–9 February 2017; Volume 3. [Google Scholar]
- Azmat, U.; Ghadi, Y.Y.; al Shloul, T.; Alsuhibany, S.A.; Jalal, A.; Park, J. Smartphone Sensor-Based Human Locomotion Surveillance System Using Multilayer Perceptron. Appl. Sci. 2022, 12, 2550. [Google Scholar] [CrossRef]
- Jalal, A.; Batool, M.; Kim, K. Stochastic Recognition of Physical Activity and Healthcare Using Tri-Axial Inertial Wearable Sensors. Appl. Sci. 2020, 10, 7122. [Google Scholar] [CrossRef]
- Tan, T.-H.; Wu, J.-Y.; Liu, S.-H.; Gochoo, M. Human Activity Recognition Using an Ensemble Learning Algorithm with Smartphone Sensor Data. Electronics 2022, 11, 322. [Google Scholar] [CrossRef]
- Hartmann, Y.; Liu, H.; Schultz, T. High-Level Features for Human Activity Recognition and Modeling. In Biomedical Engineering Systems and Technologies, Proceedings of the BIOSTEC 2022, Virtual Event, 9–11 February 2022; Roque, A.C.A., Gracanin, D., Lorenz, R., Tsanas, A., Bier, N., Fred, A., Gamboa, H., Eds.; Communications in Computer and Information Science; Springer: Cham, Switzerland, 2023; Volume 1814. [Google Scholar] [CrossRef]
- Khalid, N.; Gochoo, M.; Jalal, A.; Kim, K. Modeling Two-Person Segmentation and Locomotion for Stereoscopic Action Identification: A Sustainable Video Surveillance System. Sustainability 2021, 13, 970. [Google Scholar] [CrossRef]
- Liu, H.; Yuan, H.; Hou, J.; Hamzaoui, R.; Gao, W. PUFA-GAN: A Frequency-Aware Generative Adversarial Network for 3D Point Cloud Upsampling. IEEE Trans. Image Process. 2022, 31, 7389–7402. [Google Scholar] [CrossRef] [PubMed]
- Jalal, A.; Sharif, N.; Kim, J.T.; Kim, T.-S. Human activity recognition via recognized body parts of human depth silhouettes for residents monitoring services at smart homes. Indoor Built Environ. 2013, 22, 271–279. [Google Scholar] [CrossRef]
- Manos, A.; Klein, I.; Hazan, T. Gravity-based methods for heading computation in pedestrian dead reckoning. Sensors 2019, 19, 1170. [Google Scholar] [CrossRef]
- Jalal, A.; Batool, M.; Kim, K. Sustainable Wearable System: Human Behavior Modeling for Life-logging Activities Using K-Ary Tree Hashing Classifier. Sustainability 2020, 12, 10324. [Google Scholar] [CrossRef]
- Cruciani, F.; Vafeiadis, A.; Nugent, C.; Cleland, I.; McCullagh, P.; Votis, K.; Giakoumis, D.; Tzovaras, D.; Chen, L.; Hamzaoui, R. Feature learning for human activity recognition using convolutional neural networks: A case study for inertial measurement unit and audio data. CCF Trans. Pervasive Comput. Interact. 2020, 2, 18–32. [Google Scholar] [CrossRef]
- Jalal, A.; Ahmed, A.; Rafique, A.A.; Kim, K. Scene Semantic Recognition Based on Modified Fuzzy C-Mean and Maximum Entropy Using Object-to-Object Relations. IEEE Access 2021, 9, 27758–27772. [Google Scholar] [CrossRef]
- Won, Y.-S.; Jap, D.; Bhasin, S. Push for More: On Comparison of Data Augmentation and SMOTE with Optimised Deep Learning Architecture for Side-Channel Information Security Applications. In Proceedings of the Information Security Applications: 21st International Conference, WISA 2020, Jeju Island, Republic of Korea, 26–28 August 2020; Volume 12583, ISBN 978-3-030-65298-2. [Google Scholar]
- Hartmann, Y.; Liu, H.; Schultz, T. Interactive and Interpretable Online Human Activity Recognition. In Proceedings of the 2022 IEEE International Conference on Pervasive Computing and Communications Workshops and other Affiliated Events (PerCom Workshops), Pisa, Italy, 20–25 March 2022; pp. 109–111. [Google Scholar]
- Jalal, A.; Khalid, N.; Kim, K. Automatic Recognition of Human Interaction via Hybrid Descriptors and Maximum Entropy Markov Model Using Depth Sensors. Entropy 2020, 22, 817. [Google Scholar] [CrossRef] [PubMed]
- Vaizman, Y.; Ellis, K.; Lanckriet, G. Recognizing Detailed Human Context in the Wild from Smartphones and Smartwatches. IEEE Pervasive Comput. 2017, 16, 62–74. [Google Scholar] [CrossRef]
- Sztyler, T.; Stuckenschmidt, H. Online personalization of cross-subjects based activity recognition models on wearable devices. In Proceedings of the 2017 IEEE International Conference on Pervasive Computing and Communications (PerCom), Kona, HI, USA, 13–17 March 2017; pp. 180–189. [Google Scholar]
- Garcia-Gonzalez, D.; Rivero, D.; Fernandez-Blanco, E.; Luaces, M.R. A public domain dataset for real-life human activity recognition using smartphone sensors. Sensors 2020, 20, 2200. [Google Scholar] [CrossRef]
- Jalal, A.; Kim, Y.-H.; Kim, Y.-J.; Kamal, S.; Kim, D. Robust human activity recognition from depth video using spatiotemporal multi-fused features. Pattern Recognit. 2017, 61, 295–308. [Google Scholar] [CrossRef]
- Sheng, H.; Wang, S.; Yang, D.; Cong, R.; Cui, Z.; Chen, R. Cross-View Recurrence-Based Self-Supervised Super-Resolution of Light Field. IEEE Trans. Circuits Syst. Video Technol. 2023, 33, 7252–7266. [Google Scholar] [CrossRef]
- Wang, L.; Ciliberto, M.; Gjoreski, H.; Lago, P.; Murao, K.; Okita, T.; Roggen, D. Locomotion and Transportation Mode Recognition from GPS and Radio Signals: Summary of SHL Challenge 2021. In Adjunct Proceedings of the 2021 ACM International Joint Conference on Pervasive and Ubiquitous Computing and Proceedings of the 2021 ACM International Symposium on Wearable Computers (UbiComp/ISWC ‘21 Adjunct), Virtual, 21–26 September 2021; Association for Computing Machinery: New York, NY, USA, 2021. [Google Scholar]
- Fu, C.; Yuan, H.; Xu, H.; Zhang, H.; Shen, L. TMSO-Net: Texture adaptive multi-scale observation for light field image depth estimation. J. Vis. Commun. Image Represent. 2023, 90, 103731. [Google Scholar] [CrossRef]
- Luo, G.; Xie, J.; Liu, J.; Luo, Y.; Li, M.; Li, Z.; Yang, P.; Zhao, L.; Wang, K.; Maeda, R.; et al. Highly Stretchable, Knittable, Wearable Fiberform Hydrovoltaic Generators Driven by Water Transpiration for Portable Self-Power Supply and Self-Powered Strain Sensor. Small 2023, 20, 2306318. [Google Scholar] [CrossRef]
- Feng, Y.; Pan, R.; Zhou, T.; Dong, Z.; Yan, Z.; Wang, Y.; Chen, P.; Chen, S. Direct joining of quartz glass and copper by nanosecond laser. Ceram. Int. 2023, 49, 36056–36070. [Google Scholar] [CrossRef]
- Miao, Y.; Wang, X.; Wang, S.; Li, R. Adaptive Switching Control Based on Dynamic Zero-Moment Point for Versatile Hip Exoskeleton Under Hybrid Locomotion. IEEE Trans. Ind. Electron. 2022, 70, 11443–11452. [Google Scholar] [CrossRef]
- Xu, C.; Jiang, Z.; Wang, B.; Chen, J.; Sun, T.; Fu, F.; Wang, C.; Wang, H. Biospinning of hierarchical fibers for a self-sensing actuator. Chem. Eng. J. 2024, 485, 150014. [Google Scholar] [CrossRef]
- Liu, Y.; Fang, Z.; Cheung, M.H.; Cai, W.; Huang, J. Mechanism Design for Blockchain Storage Sustainability. IEEE Commun. Mag. 2023, 61, 102–107. [Google Scholar] [CrossRef]
- Fu, X.; Pace, P.; Aloi, G.; Guerrieri, A.; Li, W.; Fortino, G. Tolerance Analysis of Cyber-Manufacturing Systems to Cascading Failures. ACM Trans. Internet Technol. 2023, 23, 1–23. [Google Scholar] [CrossRef]
- Wang, S.; Sheng, H.; Yang, D.; Zhang, Y.; Wu, Y.; Wang, S. Extendable Multiple Nodes Recurrent Tracking Framework with RTU++. IEEE Trans. Image Process. 2022, 31, 5257–5271. [Google Scholar] [CrossRef]
- Yang, D.; Zhu, T.; Wang, S.; Wang, S.; Xiong, Z. LFRSNet: A Robust Light Field Semantic Segmentation Network Combining Contextual and Geometric Features. Front. Environ. Sci. 2022, 10, 1443. [Google Scholar] [CrossRef]
- Asim, Y.; Azam, M.A.; Ehatisham-Ul-Haq, M.; Naeem, U.; Khalid, A. Context-Aware Human Activity Recognition (CAHAR) in-the-Wild Using Smartphone Accelerometer. IEEE Sens. J. 2020, 20, 4361–4371. [Google Scholar] [CrossRef]
- Vaizman, Y.; Weibel, N.; Lanckriet, G. Context Recognition In-the-Wild: Unified Model for Multi-Modal Sensors and Mul-ti-Label Classification. Proc. ACM Interact. Mob. Wearable Ubiquitous Technol. 2017, 1, 168. [Google Scholar] [CrossRef]
- Sharma, A.; Singh, S.K.; Udmale, S.S.; Singh, A.K.; Singh, R. Early Transportation Mode Detection Using Smartphone Sensing Data. IEEE Sens. J. 2021, 21, 15651–15659. [Google Scholar] [CrossRef]
- Akbari, A.; Jafari, R. Transition-Aware Detection of Modes of Locomotion and Transportation through Hierarchical Segmentation. IEEE Sens. J. 2020, 21, 3301–3313. [Google Scholar] [CrossRef]
- Brimacombe, O.; Gonzalez, L.C.; Wahlstrom, J. Smartphone-Based CO2e Emission Estimation Using Transportation Mode Clas-sification. IEEE Access 2023, 11, 54782–54794. [Google Scholar] [CrossRef]
Sensors | Signal Type | Sampling Rate (Hz) | Duration (s) | Number of Recordings
---|---|---|---|---
Accelerometer | Acceleration | 32 | 2 | 308,306
Gyroscope | Angular velocity | 32 | 2 | 291,883
Magnetometer | Magnetic field | 32 | 2 | 282,527
Location | Latitude, longitude | 1 | 2 | 273,737
Obj. Classes | Sitting | Eating | Cooking | Bicycle
---|---|---|---|---
Sitting | 0.95 | 0.01 | 0.03 | 0.00
Eating | 0.00 | 1.00 | 0.00 | 0.00
Cooking | 0.00 | 0.00 | 1.00 | 0.00
Bicycle | 0.03 | 0.00 | 0.00 | 0.97

Mean accuracy = 96.61%
Obj. Classes | Indoors | Outdoors | Home | School | Car
---|---|---|---|---|---
Indoors | 1.00 | 0.00 | 0.00 | 0.00 | 0.00
Outdoors | 0.00 | 1.00 | 0.00 | 0.00 | 0.00
Home | 0.05 | 0.06 | 0.80 | 0.02 | 0.07
School | 0.02 | 0.02 | 0.03 | 0.90 | 0.03
Car | 0.00 | 0.00 | 0.00 | 0.00 | 1.00

Mean accuracy = 94.28%
Obj. Classes | Sit | Walk | Stand | Run
---|---|---|---|---
Sit | 0.96 | 0.00 | 0.04 | 0.00
Walk | 0.03 | 0.97 | 0.00 | 0.00
Stand | 0.03 | 0.03 | 0.92 | 0.02
Run | 0.02 | 0.01 | 0.03 | 0.94

Mean accuracy = 94.75%
Obj. Classes | Indoor | Outdoor | In Train | In Car | In Bus | In Subway
---|---|---|---|---|---|---
Indoor | 0.93 | 0.00 | 0.05 | 0.02 | 0.00 | 0.00
Outdoor | 0.00 | 0.95 | 0.04 | 0.00 | 0.00 | 0.01
In Train | 0.01 | 0.03 | 0.89 | 0.02 | 0.05 | 0.00
In Car | 0.00 | 0.01 | 0.01 | 0.94 | 0.00 | 0.04
In Bus | 0.03 | 0.02 | 0.07 | 0.00 | 0.88 | 0.00
In Subway | 0.03 | 0.00 | 0.03 | 0.00 | 0.02 | 0.92

Mean accuracy = 91.83%
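The mean accuracy reported under each confusion matrix can be reproduced as the average of the matrix diagonal (the per-class recognition rates). A minimal sketch, using the diagonal of the SHL localization matrix above:

```python
# Per-class recognition rates from the SHL localization confusion matrix:
# Indoor, Outdoor, In Train, In Car, In Bus, In Subway.
diagonal = [0.93, 0.95, 0.89, 0.94, 0.88, 0.92]

# Mean accuracy = average of the diagonal entries.
mean_accuracy = sum(diagonal) / len(diagonal)
print(f"{mean_accuracy * 100:.2f}%")  # 91.83%
```

The same computation reproduces the 94.75% figure for the SHL physical-activity matrix; note this simple average weights all classes equally rather than by sample count.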
Extrasensory dataset:

Activities | Precision | Recall | F1-Score
---|---|---|---
Sitting | 0.95 | 1.00 | 0.92
Eating | 1.00 | 0.80 | 0.90
Cooking | 1.00 | 0.89 | 0.95
Bicycle | 0.97 | 0.95 | 0.96

SHL dataset:

Activities | Precision | Recall | F1-Score
---|---|---|---
Sit | 0.92 | 0.96 | 0.94
Stand | 0.94 | 0.92 | 0.93
Walking | 0.96 | 0.97 | 0.97
Run | 0.95 | 0.94 | 0.92
Extrasensory dataset:

Activities | Precision | Recall | F1-Score
---|---|---|---
Indoors | 1.00 | 0.94 | 0.91
Outdoors | 1.00 | 1.00 | 0.95
School | 0.84 | 1.00 | 0.92
Home | 0.90 | 0.85 | 0.88
Car | 1.00 | 1.00 | 1.00

SHL dataset:

Activities | Precision | Recall | F1-Score
---|---|---|---
Indoor | 0.93 | 0.93 | 0.93
Outdoor | 0.94 | 0.95 | 0.94
In train | 0.82 | 0.89 | 0.85
In car | 0.96 | 0.94 | 0.95
In subway | 0.95 | 0.92 | 0.93
In bus | 0.93 | 0.88 | 0.90
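The F1-score is conventionally the harmonic mean of precision and recall. A minimal sketch of that computation, checked against the SHL "In car" row above (precision 0.96, recall 0.94):

```python
def f1_score(precision: float, recall: float) -> float:
    """Harmonic mean of precision and recall."""
    return 2 * precision * recall / (precision + recall)

# SHL localization, "In car" row: precision 0.96, recall 0.94.
print(round(f1_score(0.96, 0.94), 2))  # 0.95
```

Because the harmonic mean is dominated by the smaller operand, F1 penalizes a model that trades one metric sharply against the other.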
© 2024 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Almujally, N.A.; Khan, D.; Al Mudawi, N.; Alonazi, M.; Alazeb, A.; Algarni, A.; Jalal, A.; Liu, H. Biosensor-Driven IoT Wearables for Accurate Body Motion Tracking and Localization. Sensors 2024, 24, 3032. https://doi.org/10.3390/s24103032