Upper Extremity Motion-Based Telemanipulation with Component-Wise Rescaling of Spatial Twist and Parameter-Invariant Skeletal Kinematics
Abstract
1. Introduction
- The CWR method proposed in this study uses three wearable IMU sensors and a parameter-invariant 5-DOF skeletal kinematic model of the upper extremity that is shared by all human operators. Parameter-invariant means that every kinematic parameter in the skeletal model is fixed to unit length and is therefore identical for all operators;
- The CWR of the spatial twist, which is computed from the skeletal kinematics and the three IMU sensor measurements, improves telemanipulation performance in tasks requiring diverse speed and accuracy by scaling the linear and angular components of the operator’s motion independently (a minimal sketch follows this list);
- The CWR thus allows the user to directly adjust the scale difference between the actual and the estimated hand motion that inevitably arises when human motion is mimicked with inaccurate human-body parameters;
- This study therefore pursues a framework in which the operator selects the desired motion scale through visual feedback, aided by intuitive scaling guidance on the trade-off between accuracy and responsiveness;
- In addition to the CWR framework, the heading directions of the manipulator and the operator can be kept identical by means of the floating body-fixed frame [29], even while the operator’s heading direction varies over time.
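The following is a minimal sketch of the CWR step, assuming twists are stacked as (ω, v) ∈ ℝ⁶ (the ordering used in the screw-axis tables below) and that the two rescaling parameters act independently on the angular and linear parts. The names `rescale_twist`, `lam_v`, and `lam_w`, and the mapping of the paper’s a:b ratio notation onto them, are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def rescale_twist(twist, lam_v, lam_w):
    """Component-wise rescaling (CWR) of a spatial twist.

    twist : (6,) array ordered (wx, wy, wz, vx, vy, vz).
    lam_v : rescaling parameter for the linear-velocity part.
    lam_w : rescaling parameter for the angular-velocity part.
    """
    twist = np.asarray(twist, dtype=float)
    w, v = twist[:3], twist[3:]
    return np.concatenate([lam_w * w, lam_v * v])

# Hypothetical usage: attenuate linear motion while keeping rotation 1:1,
# e.g. reading a "5:6"-style ratio as lam_v : lam_w (an assumption).
V_hand = np.array([0.0, 0.0, 0.2, 0.10, -0.05, 0.0])
V_cmd = rescale_twist(V_hand, lam_v=5 / 6, lam_w=1.0)
```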
2. Method
- Sensor calibration for creating and updating a floating body-fixed frame;
- Forward kinematics of the upper right extremity part;
- Velocity kinematics of the upper extremity part (a PoE-based sketch follows this list);
- CWR of spatial twist for the scaling adjustment of the upper extremity motion.
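The forward and velocity kinematics above follow the product-of-exponentials (PoE) formulation of Lynch and Park [30]. Below is a self-contained sketch of the space Jacobian, from which the operator’s hand twist is obtained as V = J(θ)θ̇; this is the textbook construction under the (ω, v) twist ordering, not code released with the paper.

```python
import numpy as np

def hat(w):
    """3x3 skew-symmetric matrix of a 3-vector w."""
    return np.array([[0.0, -w[2], w[1]],
                     [w[2], 0.0, -w[0]],
                     [-w[1], w[0], 0.0]])

def exp_se3(S, theta):
    """Matrix exponential of a unit screw axis S = (w, v) over angle theta."""
    w, v = S[:3], S[3:]
    W = hat(w)
    if np.allclose(w, 0.0):  # pure translation
        R, p = np.eye(3), v * theta
    else:                    # Rodrigues' formula for the rotation part
        R = np.eye(3) + np.sin(theta) * W + (1 - np.cos(theta)) * W @ W
        G = (np.eye(3) * theta + (1 - np.cos(theta)) * W
             + (theta - np.sin(theta)) * W @ W)
        p = G @ v
    T = np.eye(4)
    T[:3, :3], T[:3, 3] = R, p
    return T

def adjoint(T):
    """6x6 adjoint of T in SE(3), acting on (w, v)-ordered twists."""
    R, p = T[:3, :3], T[:3, 3]
    Ad = np.zeros((6, 6))
    Ad[:3, :3] = R
    Ad[3:, 3:] = R
    Ad[3:, :3] = hat(p) @ R
    return Ad

def space_jacobian(S_list, theta):
    """Column i is the adjoint of the partial product of exponentials
    e^[S1]th1 ... e^[S_{i-1}]th_{i-1} applied to S_i."""
    J = np.zeros((6, len(S_list)))
    T = np.eye(4)
    for i, S in enumerate(S_list):
        J[:, i] = adjoint(T) @ S
        T = T @ exp_se3(S, theta[i])
    return J
```

The hand twist then follows as `V = space_jacobian(S_list, theta) @ theta_dot`, which is the quantity rescaled by the CWR step.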
2.1. Method for Creating and Updating a Floating Body-Fixed Frame
2.2. Forward Kinematics of the Upper Right Extremity Part
2.3. Velocity Kinematics of the Upper Extremity Part
2.4. Component-Wise Rescaling of Spatial Twist
3. Experiment and Discussion
3.1. Experimental Configurations in the Testbench
- Testbench: the testbench comprises six OptiTrack Prime 13 cameras; three retro-reflective marker sets (on the manipulator’s EEF, the subject’s hand, and the target object); a UR5e-based mobile manipulator equipped with a Robotiq 2F-85 two-finger gripper; three Xsens MTw wearable IMU sensors; a target object on the table for the teleoperated pick-and-place experiments; a laptop running OptiTrack Motive (Windows 10); and a workstation running ROS 1 (Ubuntu 20.04). The 3D motion-capture stage is 4 m wide, 4 m long, and 3 m high. {Of}, the reference frame of the OptiTrack Prime 13 motion-capture system, is defined at the exact center of the 2 m × 2 m floor area, and camera calibration was performed using a calibration square (CS-200);
- Hand trajectory: the wireless IMU sensors are attached with straps to the back of the subject’s pelvis, the arm, and the forearm just above the wrist. The reflective marker sets are attached to the wrist and the top of the head to measure the subject’s positions with respect to the frame {Of};
- Motion state: all outputs of the wireless IMU sensors are converted to the frame {FBf}, as sketched below. Note that the transformation between {FBf} and {Of} defined by the calibration square cannot be identified exactly; however, the z-axes of both frames are [0, 0, 1], and the body-heading direction was aligned with the heading direction of the L-shaped calibration square as closely as possible during the calibration gesture.
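As a sketch of that conversion, assuming the calibration gesture yields a rotation from the sensor-fixed frame {Sf} to {FBf} (the function and variable names below are hypothetical):

```python
import numpy as np

def to_floating_body_frame(R_fbf_sf, acc_sf, gyr_sf):
    """Express raw IMU outputs in the floating body-fixed frame {FBf}.

    R_fbf_sf : 3x3 rotation from the sensor-fixed frame {Sf} to {FBf},
               estimated from the calibration gesture (assumed given).
    acc_sf   : accelerometer output in {Sf}, in m/s^2.
    gyr_sf   : gyroscope (angular-rate) output in {Sf}, in rad/s.
    """
    return R_fbf_sf @ acc_sf, R_fbf_sf @ gyr_sf

# Hypothetical usage: an identity calibration leaves the readings unchanged.
acc_f, gyr_f = to_floating_body_frame(np.eye(3),
                                      np.array([0.0, 0.0, 9.81]),
                                      np.array([0.0, 0.0, 0.1]))
```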
3.2. Effects of the CWR of Spatial Twist in Telemanipulation-Based Pick-and-Place
3.3. Validation of the Proposed Dynamic Upper Extremity Motion-Based Telemanipulation through a Pick-and-Place Task
3.4. Discussion of Experiment Results
4. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Conflicts of Interest
Nomenclature
Abbreviation | Definition
---|---
CWR | Component-wise rescaling
EEF | End effector
FOV | Field of view
CNN | Convolutional neural network
EMG | Electromyography
sEMG | Surface electromyography
IMU | Inertial measurement unit
DOF | Degrees of freedom
ML | Machine learning
PoE | Product of exponentials
ROS | Robot operating system
CPS | Cyber–physical system

Symbol | Definition
---|---
R | Rotation matrix
 | Global reference frame
 | Sensor-fixed frame at initial standing posture
 | Sensor-fixed frame at initial stooping posture
 | Body-fixed frame
{FBf} | Floating body-fixed frame
 | Sensor-fixed frame
 | Sensor-calibrated frame
 | End-effector frame of upper extremity
 | Manipulator base frame
{Of} | Optical camera reference frame
 | Acceleration
 | Angular rate
SO(3) | Three-dimensional special orthogonal group
SE(3) | Three-dimensional special Euclidean group
 | Rescaling parameter (linear and angular velocity)
References
- Kumar, N.; Lee, S.C. Human-machine interface in smart factory: A systematic literature review. Technol. Forecast. Soc. Change 2022, 174, 121284. [Google Scholar] [CrossRef]
- Pajor, M.; Miądlicki, K.; Saków, M. Kinect sensor implementation in FANUC robot manipulation. Arch. Mech. Technol. Autom. 2014, 34, 35–44. [Google Scholar]
- Mazhar, O.; Navarro, B.; Ramdani, S.; Passama, R.; Cherubini, A. A real-time human-robot interaction framework with robust background invariant hand gesture detection. Robot. Comput. Integr. Manuf. 2019, 60, 34–48. [Google Scholar] [CrossRef]
- Zhou, D.; Shi, M.; Chao, F.; Lin, C.M.; Yang, L.; Shang, C.; Zhou, C. Use of human gestures for controlling a mobile robot via adaptive cmac network and fuzzy logic controller. Neurocomputing 2018, 282, 218–231. [Google Scholar] [CrossRef]
- Moe, S.; Schjolberg, I. Real-Time Hand Guiding of Industrial Manipulator in 5 DOF Using Microsoft Kinect and Accelerometer. In Proceedings of the 2013 IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN), Gyeongju, Republic of Korea, 26–29 August 2013; pp. 644–649. [Google Scholar]
- Lopes, M.; Melo, A.S.C.; Cunha, B.; Sousa, A.S.P. Smartphone-Based Video Analysis for Guiding Shoulder Therapeutic Exercises: Concurrent Validity for Movement Quality Control. Appl. Sci. 2023, 13, 12282. [Google Scholar] [CrossRef]
- Latreche, A.; Kelaiaia, R.; Chemori, A.; Kerboua, A. A New Home-Based Upper- and Lower-Limb Telerehabilitation Platform with Experimental Validation. Arab. J. Sci. Eng. 2023, 48, 1–16. [Google Scholar] [CrossRef] [PubMed]
- Vogel, J.; Castellini, C.; van der Smagt, P. EMG-Based Teleoperation and Manipulation with the DLR LWR-III. In Proceedings of the 2011 IEEE/RSJ International Conference on Intelligent Robots and Systems, San Francisco, CA, USA, 25–30 September 2011. [Google Scholar]
- Wolf, M.T.; Assad, C.; Stoica, A.; You, K.; Jethani, H.; Vernacchia, M.T.; Fromm, J.; Iwashita, Y. Decoding Static and Dynamic Arm and Hand Gestures from the JPL BioSleeve. In Proceedings of the 2013 IEEE Aerospace Conference, Big Sky, MT, USA, 2–9 March 2013; pp. 1–9. [Google Scholar]
- Gomi, H.; Osu, R. Task-dependent viscoelasticity of human multijoint arm and its spatial characteristics for interaction with environments. J. Neurosci. 1998, 18, 8965–8978. [Google Scholar]
- Zhang, Y.; Huang, Y.; Sun, X.; Zhao, Y.; Guo, X.; Liu, P.; Zhang, Y. Static and dynamic human arm/hand gesture capturing and recognition via multi-information fusion of flexible strain sensors. IEEE Sens. J. 2020, 20, 6450–6459. [Google Scholar] [CrossRef]
- Kulkarni, P.V.; Illing, B.; Gaspers, B.; Brüggemann, B.; Schulz, D. Mobile manipulator control through gesture recognition using IMUs and Online Lazy Neighborhood Graph search. Acta IMEKO 2019, 8, 3–8. [Google Scholar] [CrossRef]
- Choi, H.; Jeon, H.; Noh, D.; Kim, T.; Lee, D. Hand-guiding gesture-based telemanipulation with the gesture mode classification and state estimation using wearable IMU sensors. Mathematics 2023, 11, 3514. [Google Scholar] [CrossRef]
- Škulj, G.; Vrabič, R.; Podržaj, P. A Wearable IMU System for Flexible Teleoperation of a Collaborative Industrial Robot. Sensors 2021, 21, 5871. [Google Scholar] [CrossRef]
- Vargas-Valencia, L.S.; Elias, A.; Rocon, E.; Bastos-Filho, T.; Frizera, A. An IMU-to-Body Alignment Method Applied to Human Gait Analysis. Sensors 2016, 16, 2090. [Google Scholar] [CrossRef]
- Bertomeu-Motos, A.; Lledó, L.D.; Díez, J.A.; Catalan, J.M.; Ezquerro, S.; Badesa, F.J.; Garcia-Aracil, N. Estimation of Human Arm Joints Using Two Wireless Sensors in Robotic Rehabilitation Tasks. Sensors 2015, 15, 30571–30583. [Google Scholar] [CrossRef]
- Tian, Y.; Meng, X.; Tao, D.; Liu, D.; Feng, C. Upper limb motion tracking with the integration of IMU and Kinect. Neurocomputing 2015, 159, 207–218. [Google Scholar] [CrossRef]
- Lin, C.-J.; Peng, H.-Y. A Study of the Human-Robot Synchronous Control Based on IMU and EMG Sensing of an Upper Limb. In Proceedings of the 2022 13th Asian Control Conference (ASCC), Jeju, Republic of Korea, 4–7 May 2022; pp. 1474–1479. [Google Scholar]
- Carrino, S.; Mugellini, E.; Khaled, O.A.; Ingold, R. Gesture-Based Hybrid Approach for HCI in Ambient Intelligent Environments. In Proceedings of the 2011 IEEE International Conference on Fuzzy Systems (FUZZ-IEEE 2011), Taipei, Taiwan, 27–30 June 2011; pp. 86–93. [Google Scholar]
- Chen, P.-J.; Du, Y.-C.; Shih, C.-B.; Yang, L.-C.; Lin, H.-T.; Fan, S.-C. Development of an Upper Limb Rehabilitation System Using Inertial Movement Units and Kinect Device. In Proceedings of the 2016 International Conference on Advanced Materials for Science and Engineering (ICAMSE), Tainan, Taiwan, 12–13 November 2016; Institute of Electrical and Electronics Engineers (IEEE): New York, NY, USA, 2016; pp. 275–278. [Google Scholar]
- Zhou, S.; Fei, F.; Zhang, G.; Mai, J.D.; Liu, Y.; Liou, J.Y.J.; Li, W.J. 2D Human Gesture Tracking and Recognition by the Fusion of MEMS Inertial and Vision Sensors. IEEE Sensors J. 2013, 14, 1160–1170. [Google Scholar] [CrossRef]
- Yoo, M.; Na, Y.; Song, H.; Kim, G.; Yun, J.; Kim, S.; Moon, C.; Jo, K. Motion Estimation and Hand Gesture Recognition-Based Human–UAV Interaction Approach in Real Time. Sensors 2022, 22, 2513. [Google Scholar] [CrossRef]
- Moradi, M.; Dang, S.; Alsalem, Z.; Desai, J.; Palacios, A. Integrating Human Hand Gestures with Vision Based Feedback Controller to Navigate a Virtual Robotic Arm. In Proceedings of the 2020 23rd International Symposium on Measurement and Control in Robotics (ISMCR), Budapest, Hungary, 15–17 October 2020; pp. 1–6. [Google Scholar]
- Zhou, H.; Alici, G. Non-Invasive Human-Machine Interface (HMI) Systems with Hybrid On-Body Sensors for Controlling Upper-Limb Prosthesis: A Review. IEEE Sens. J. 2022, 22, 10292–10307. [Google Scholar] [CrossRef]
- Alfaro, J.G.C.; Trejos, A.L. User-Independent Hand Gesture Recognition Classification Models Using Sensor Fusion. Sensors 2022, 22, 1321. [Google Scholar] [CrossRef]
- Shahzad, W.; Ayaz, Y.; Khan, M.J.; Naseer, N.; Khan, M. Enhanced Performance for Multi-Forearm Movement Decoding Using Hybrid IMU–sEMG Interface. Front. Neurorobotics 2019, 13, 43. [Google Scholar] [CrossRef] [PubMed]
- Amini, S.; Dehkordi, S.F.; Fahraji, S.H. Motion Equation Derivation and Tip-Over Evaluations for K Mobile Manipulators with the Consideration of Motors Mass by the Use of Gibbs–Appell Formulation. In Proceedings of the 5th RSI International Conference on Robotics and Mechatronics (IcRoM), Tehran, Iran, 25–27 October 2017; pp. 502–507. [Google Scholar] [CrossRef]
- Tanha, S.D.N.; Dehkordi, S.F.; Korayem, A.H. Control a Mobile Robot in Social Environments by Considering Human as a Moving Obstacle. In Proceedings of the 2018 6th RSI International Conference on Robotics and Mechatronics (IcRoM), Tehran, Iran, 23–25 October 2018; pp. 256–260. [Google Scholar]
- Jeon, H.; Kim, S.L.; Kim, S.; Lee, D. Fast wearable sensor–based foot–ground contact phase classification using a convolutional neural network with sliding-window label overlapping. Sensors 2020, 20, 4996. [Google Scholar] [CrossRef]
- Lynch, K.M.; Park, F.C. Modern Robotics: Mechanics, Planning, and Control; Cambridge University Press: Cambridge, UK, 2017. [Google Scholar]
Article | Sensors | Gesture Type | Gesture Recognition | Robot Control | Advantages and Disadvantages
---|---|---|---|---|---
Pajor et al. [2] | vision | dynamic | motion tracking | ○ | A: gesture recognition with precise intent; D: high computational cost
Mazhar et al. [3] | vision | static | data learning | ○ | A: high applicability; D: limited range of depth data
Zhou et al. [4] | vision | both | both | ○ | A: speed control possibility; D: limited FOV and occlusion
Vogel et al. [8] | wearable | dynamic | motion tracking | ○ | A: continuous omnidirectional control possible; D: low control speed, occlusion
Kulkarni et al. [12] | wearable | dynamic | data learning | ○ | A: robustness to lighting conditions; D: low control speed and response
Choi et al. [13] | wearable | dynamic | data learning | ○ | A: no kinematic model required; D: continuous control is impossible
Yoo et al. [22] | hybrid | both | data learning | ○ | A: intent estimation and omnidirectional control possible; D: continuous control is impossible
Moradi et al. [23] | hybrid | static | data learning | ○ | A: -; D: high computational cost and poor real-time performance due to processing large amounts of data
Colli Alfaro et al. [25] | hybrid | static | data learning | ⨉ | A: generality, high accuracy; D: continuous control is impossible
Proposed method | wearable | dynamic | motion tracking | ○ | A: continuous and intuitive control possible; D: sensor drift due to bias error
Frame i | ωᵢ | qᵢ | vᵢ
---|---|---|---
1 | (0, 0, 1) | (0, 0, 0) | (0, 0, 0)
2 | (0, 1, 0) | (0, 0, 0) | (0, 0, 0)
3 | (1, 0, 0) | (0, 0, 0) | (0, 0, 0)
4 | (0, 1, 0) | (0, 0, ) | ( , 0, 0)
5 | (0, 0, −1) | (0, 0, ) | (0, 0, 0)
Frame i | Sᵢ = (ωᵢ, vᵢ)
---|---
1 | (0, 0, 1, 0, 0, 0)
2 | (0, 1, 0, , 0, 0)
3 | (1, 0, 0, 0, , 0)
4 | (0, 1, 0, , 0, 0)
5 | (0, 0, −1, 0, 0, 0)
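The two tables above list the screw axes of the parameter-invariant 5-DOF model, with several numeric entries lost in extraction (left blank). The sketch below runs PoE forward kinematics in the style of Lynch and Park [30], filling every blank with a placeholder c = 1 as suggested by the paper’s unit-length assumption; c, the home configuration M_home, and the filled signs are assumptions, not the paper’s values.

```python
import numpy as np
from scipy.linalg import expm

def se3_mat(S):
    """4x4 matrix form [S] of a twist S = (w, v)."""
    w, v = S[:3], S[3:]
    M = np.zeros((4, 4))
    M[:3, :3] = [[0.0, -w[2], w[1]],
                 [w[2], 0.0, -w[0]],
                 [-w[1], w[0], 0.0]]
    M[:3, 3] = v
    return M

c = 1.0  # placeholder for the blank table entries (unit length assumed)
S_list = [
    np.array([0.0, 0.0, 1.0, 0.0, 0.0, 0.0]),
    np.array([0.0, 1.0, 0.0, c, 0.0, 0.0]),   # blank v_x filled with c
    np.array([1.0, 0.0, 0.0, 0.0, c, 0.0]),   # blank v_y filled with c
    np.array([0.0, 1.0, 0.0, c, 0.0, 0.0]),   # blank v_x filled with c
    np.array([0.0, 0.0, -1.0, 0.0, 0.0, 0.0]),
]

def forward_kinematics(M_home, thetas):
    """PoE forward kinematics: T = e^[S1]th1 ... e^[S5]th5 M_home."""
    T = np.eye(4)
    for S, th in zip(S_list, thetas):
        T = T @ expm(se3_mat(S) * th)
    return T @ M_home

# Hypothetical usage with an identity home configuration.
T_hand = forward_kinematics(np.eye(4), np.deg2rad([10, 20, 0, 30, 0]))
```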
Operator | | 1 | 3 | 5 | 6 | 7 | 8
---|---|---|---|---|---|---|---
Size ratio | 1 | 0.05 | 0.43 | 0.99 | 1.17 (17%↑) | 1.33 | 1.26
Shape ratio | 1 | 0.99 | 1.13 | 1.07 | 1.05 | 0.98 | 0.93
Operator | | 3:5 | 4:5 | 5:5 | 5:4 | 5:3
---|---|---|---|---|---|---
Size ratio | 1 | 0.32 | 0.57 | 1.00 | 1.16 | 1.21
Shape ratio | 1 | 1.30 | 1.23 | 1.19 | 1.14 | 1.11

Operator | | 4:6 | 5:6 | 6:6 | 6:5 | 6:4
---|---|---|---|---|---|---
Size ratio | 1 | 0.47 | 0.79 | 1.21 | 1.35 | 1.56
Shape ratio | 1 | 1.29 | 1.26 | 1.25 | 1.17 | 1.11
Scaling Ratio | Trial No. | Picking Touch Violation | Pick-and-Place Boundary Violation | Total Time (min:s)
---|---|---|---|---
6:6 | 1 | Touch | - | 04:19
 | 2 | Touch | - | 02:56
 | 3 | Touch | Out | 02:49
 | 4 | Touch | Out | 02:19
6:5 | 1 | Touch | - | 03:45
 | 2 | Touch | Out | 04:12
 | 3 | - | - | * 03:01
 | 4 | - | - | 03:04
6:4 | 1 | - | Out | 05:11
 | 2 | - | - | 04:33
 | 3 | - | - | * 03:09
 | 4 | - | - | 03:18
5:5 | 1 | - | - | 04:52
 | 2 | - | - | 04:03
 | 3 | - | - | * 04:29
 | 4 | - | Out | 04:01
5:4 | 1 | - | Out | 03:43
 | 2 | - | - | 05:20
 | 3 | - | - | * 03:18
 | 4 | Touch | - | 03:46
5:3 | 1 | - | - | * 05:07
 | 2 | Touch | - | 05:10
 | 3 | Touch | - | 05:05
 | 4 | Touch | - | 05:15