Wearable Inertial Sensor-Based Hand-Guiding Gestures Recognition Method Robust to Significant Changes in the Body-Alignment of Subject
Abstract
1. Introduction
2. Problem Definition
2.1. Concepts of Creating Body-Fixed Frame and Aligning All Sensor-Fixed Frames Equally
2.2. Concept of the Floating Body-Fixed Frame Method
3. Method
3.1. Sensor Calibration
The body-fixed frame is constructed so that:
- (1) one axis coincides with the subject’s transverse axis;
- (2) a second axis coincides with the subject’s longitudinal axis;
- (3) the third axis is aligned with the subject’s anteroposterior axis.
Algorithm 1 Create body-fixed frame
1: procedure (input: sensor data)
2:   while about 5 s do
3:     Maintain standing posture
4:     Save orientation data
5:   end while
6:   Update the standing-posture orientation
7:   while about 5 s do
8:     Maintain stooping posture
9:     Save orientation data
10:  end while
11:  Update the stooping-posture orientation
12:  Calculate vector k
13:  Calculate vector x
14:  return body-fixed frame
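Algorithm 1 amounts to building an orthonormal frame from the gravity directions observed in the two calibration postures. Below is a minimal sketch of that construction, assuming vector k is the longitudinal (anti-gravity) axis from the standing posture and vector x is obtained by Gram–Schmidt orthogonalization from the stooping posture; the function name, sign conventions, and axis ordering are assumptions, not the paper’s exact equations.

```python
import numpy as np

def create_body_fixed_frame(g_standing: np.ndarray, g_stooping: np.ndarray) -> np.ndarray:
    """Sketch of Algorithm 1: build a body-fixed frame from two postures.

    g_standing, g_stooping: mean accelerometer (gravity) directions saved
    while the subject holds each posture for ~5 s, expressed in a common
    measurement frame. The axis construction below is an assumption.
    """
    # Longitudinal axis k: opposite to gravity in the standing posture.
    k = -g_standing / np.linalg.norm(g_standing)
    # Stooping tilts the trunk about the transverse axis, so the two
    # gravity directions span the sagittal plane. Anteroposterior axis x:
    # component of the stooping direction orthogonal to k (Gram-Schmidt);
    # the forward sign convention here is an assumption.
    v = -g_stooping / np.linalg.norm(g_stooping)
    x = v - np.dot(v, k) * k
    x /= np.linalg.norm(x)
    # Transverse axis y completes the right-handed frame.
    y = np.cross(k, x)
    # Columns are the body-frame axes in the measurement frame: R in SO(3).
    return np.column_stack((x, y, k))
```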
3.1.1. Orientation w.r.t. Body-Fixed Frame
3.1.2. Angular Velocity and Acceleration w.r.t. Body-Fixed Frame
Algorithm 2 Align all sensor-fixed frames equally
1: procedure (input: sensor data, body-fixed frame, orientation of the initial posture)
2:   Calculate rotated frames mapping
3:   Calculate sensor orientation w.r.t. the body-fixed frame
4:   Calculate sensor acceleration w.r.t. the body-fixed frame
5:   Calculate sensor rate of turn w.r.t. the body-fixed frame
6:   return sensor data w.r.t. the body-fixed frame
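As a rough illustration of Algorithm 2, the sketch below re-expresses one IMU’s orientation, acceleration, and rate of turn in the body-fixed frame; the variable names, the mounting-offset removal step, and the composition order are assumptions under standard rotation-matrix conventions.

```python
import numpy as np

def to_body_frame(R_gs, acc_s, gyr_s, R_gb, R_gs0):
    """Sketch of Algorithm 2 (assumed API): express one IMU's signals
    in the common body-fixed frame.

    R_gs  : current sensor orientation w.r.t. the global frame (3x3).
    R_gs0 : sensor orientation at the initial posture (mounting offset).
    R_gb  : body-fixed frame w.r.t. the global frame (Algorithm 1).
    """
    # Rotation taking sensor-frame coordinates to body-frame coordinates.
    R_bs = R_gb.T @ R_gs
    # Remove each sensor's mounting offset so every sensor reads the
    # identity at the initial posture ("aligning all frames equally").
    R_bs_aligned = R_bs @ (R_gb.T @ R_gs0).T
    # Rotate measured vectors (given in the sensor frame) into the body frame.
    acc_b = R_bs @ acc_s
    gyr_b = R_bs @ gyr_s
    return R_bs_aligned, acc_b, gyr_b
```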
3.2. Floating Body-Fixed Frame
Algorithm 3 Update the body-fixed frame according to the subject’s time-varying body alignment
1: procedure (input: sensor data, orientation of the initial posture)
2:   Calculate the floating body-fixed frame
3:   Calculate sensor orientation w.r.t. the floating body-fixed frame
4:   Calculate sensor acceleration w.r.t. the floating body-fixed frame
5:   Calculate sensor rate of turn w.r.t. the floating body-fixed frame
6:   return sensor data w.r.t. the floating body-fixed frame
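Algorithm 3 differs from Algorithm 2 only in that the reference frame itself is updated as the subject’s alignment changes. The sketch below shows one plausible construction, assuming the body frame rotates about the global vertical with the subject’s current heading; how the paper estimates that heading is not reproduced here.

```python
import numpy as np

def floating_body_frame(R_gb0: np.ndarray, yaw: float) -> np.ndarray:
    """Sketch of Algorithm 3 (assumed construction): rotate the initial
    body-fixed frame about the global vertical by the subject's heading.

    R_gb0 : initial body-fixed frame w.r.t. the global frame (Algorithm 1).
    yaw   : heading change of the subject in radians (estimation method
            is an assumption, e.g., from a trunk-mounted IMU).
    """
    c, s = np.cos(yaw), np.sin(yaw)
    # Rotation about the global z-axis (assumed to be vertical).
    R_z = np.array([[c, -s, 0.0],
                    [s,  c, 0.0],
                    [0.0, 0.0, 1.0]])
    # Floating body-fixed frame w.r.t. the global frame.
    return R_z @ R_gb0
```

The per-sensor re-expression then proceeds exactly as in the Algorithm 2 sketch, with R_gb replaced by the returned floating frame at every time step.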
3.3. Practical Application to Multi-Class Classification of Hand-Guiding Gestures
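Section 4.3 compares a vanilla RNN, a vanilla LSTM, and a bi-directional LSTM on windows of body-frame signals. The following is a minimal PyTorch sketch of the best-performing variant; the hidden size, depth, and input layout are assumptions, and only the five gesture labels (vm, si, so, ti, to) come from the result tables.

```python
import torch
import torch.nn as nn

class GestureBiLSTM(nn.Module):
    """Sketch of the multi-class gesture classifier (assumed architecture)."""

    def __init__(self, n_features: int, n_classes: int = 5, hidden: int = 64):
        super().__init__()
        # Bi-directional LSTM over a fixed-length window of IMU channels.
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True,
                            bidirectional=True)
        self.head = nn.Linear(2 * hidden, n_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, time, features) window of body-frame signals.
        out, _ = self.lstm(x)              # (batch, time, 2 * hidden)
        return self.head(out[:, -1, :])    # class logits from last step

# Usage sketch (feature count per sensor is an assumption):
# model = GestureBiLSTM(n_features=9 * num_sensors)
```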
4. Experiment and Discussion
4.1. Experimental Setup
4.2. Training and Test Dataset Acquisition
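The result tables compare classification over 300–600 ms windows of the recorded streams. A hypothetical windowing helper is sketched below; the sampling rate, hop length, and array layout are all assumptions.

```python
import numpy as np

def sliding_windows(signal: np.ndarray, fs: float,
                    win_ms: float, hop_ms: float) -> np.ndarray:
    """Segment a (time, channels) stream into fixed-length windows.

    fs     : sampling rate in Hz (an assumption; not stated here).
    win_ms : window length in milliseconds, e.g., 300-600 ms.
    hop_ms : hop between consecutive windows in milliseconds.
    """
    win = int(fs * win_ms / 1000.0)
    hop = int(fs * hop_ms / 1000.0)
    # Raises if the stream is shorter than one window; acceptable for a sketch.
    return np.stack([signal[i:i + win]
                     for i in range(0, len(signal) - win + 1, hop)])

# Usage sketch: windows = sliding_windows(imu_body_frame, 100.0, 400, 100)
```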
4.3. Training, Test, and Result Discussion
5. Conclusions
Author Contributions
Funding
Acknowledgments
Conflicts of Interest
Nomenclature
- Rotation matrix
- Global reference frame
- Body-fixed frame
- Floating body-fixed frame
- Sensor-fixed frame of the jth IMU sensor
- Calibrated sensor-fixed frame
- Sensor-fixed frame at the initial standing posture
- Sensor-fixed frame at the initial stooping posture
- Acceleration
- Angular rate
- SO(3): three-dimensional special orthogonal group
| Requirement | A | B | C |
|---|---|---|---|
| Creating and referencing the subject’s body-fixed frame | ○ | ○ | ○ |
| Aligning all sensor-fixed frames equally | × | ○ | ○ |
| Floating body-fixed frame | × | × | ○ |
Training/Test Accuracy by Label (%)

| Window size | vm | si | so | ti | to | Total |
|---|---|---|---|---|---|---|
| 300 ms | 99.3/71.9 | 99.7/83.5 | 99.8/99.9 | 99.3/91.6 | 99.6/82.2 | 99.5/84.0 |
| 400 ms | 97.7/89.6 | 97.0/92.6 | 96.4/89.0 | 99.3/98.7 | 98.9/99.4 | 97.9/91.7 |
| 500 ms | 99.9/57.6 | 99.6/96.1 | 99.8/99.8 | 99.6/75.6 | 99.8/90.4 | 99.7/82.6 |
| 600 ms | 99.9/70.0 | 100/97.4 | 100/63.1 | 99.9/81.4 | 100/70.3 | 99.9/72.9 |
Training/Test Accuracy by Label (%)

| Model | vm | si | so | ti | to | Total |
|---|---|---|---|---|---|---|
| Vanilla RNN | 91.8/88.6 | 81.8/59.5 | 78.8/62.1 | 90.7/83.9 | 78.4/61.7 | 83.6/66.4 |
| Vanilla LSTM | 98.0/89.1 | 98.2/90.2 | 94.4/71.2 | 98.1/85.3 | 98.7/98.9 | 97.5/85.1 |
| Bi-directional LSTM | 97.7/89.6 | 97.0/92.6 | 96.4/89.0 | 99.3/98.7 | 98.9/99.4 | 97.9/91.7 |
Training/Test Accuracy by Label (%)

| Method | vm | si | so | ti | to | Total |
|---|---|---|---|---|---|---|
| Method A | 98.4/49.6 | 92.8/37.9 | 93.6/43.4 | 98.4/95.0 | 98.5/99.6 | 96.3/57.7 |
| Method B | 99.4/77.5 | 97.1/64.7 | 99.2/75.8 | 99.7/82.6 | 99.8/75.8 | 99.0/74.6 |
| Method C | 97.7/89.6 | 97.0/92.6 | 96.4/89.0 | 99.3/98.7 | 98.9/99.4 | 97.9/91.7 |
Training/Test Accuracy by Label (%)

| Method | SLR | FR | DKB | SP | VM | Total |
|---|---|---|---|---|---|---|
| Method A | 95.6/74.0 | 94.5/72.8 | 97.7/60.9 | 97.9/76.1 | 96.5/37.6 | 96.4/58.0 |
| Method B | 98.6/74.2 | 98.2/73.8 | 99.5/77.4 | 99.9/97.4 | 99.5/54.7 | 99.0/73.6 |
| Method C | 99.7/77.9 | 99.6/92.9 | 99.9/85.9 | 99.9/96.2 | 99.5/71.9 | 99.7/84.2 |