Article

Augmented Reality Assisted Assembly Training Oriented Dynamic Gesture Recognition and Prediction

1 Department of Automation, Shanghai Jiao Tong University, Shanghai 200240, China
2 Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, China
* Author to whom correspondence should be addressed.
Appl. Sci. 2021, 11(21), 9789; https://doi.org/10.3390/app11219789
Submission received: 21 September 2021 / Revised: 12 October 2021 / Accepted: 14 October 2021 / Published: 20 October 2021

Abstract

Augmented reality assisted assembly training (ARAAT) is an effective and affordable technique for labor training in the automobile and electronics industries. In general, most ARAAT tasks are performed through real-time hand operations. In this paper, we propose a dynamic gesture recognition and prediction algorithm that evaluates how well the hand operations for a given ARAAT task conform to the standard and how far they have been completed. We consider that a given task can be decomposed into a series of hand operations, and each hand operation further into several continuous actions. Each action is then associated with a standard gesture derived from the practical assembly task, so that the conformity and completion of the actions within an operation can be identified and predicted from the sequence of gestures rather than from the performance of the whole task. Based on practical industrial assembly, we specified five typical tasks, three typical operations, and six standard actions. We used Zernike moments combined with the histogram of oriented gradients (HOG) to represent the 2D static features of standard gestures and linearly interpolated motion trajectories to represent their 3D dynamic features, and we chose a directional pulse-coupled neural network as the classifier to recognize the gestures. In addition, we defined an action unit to reduce the feature dimensionality and the computational cost. During gesture recognition, we iteratively optimized the gesture boundaries by computing the score probability density distribution, which reduces the interference of invalid gestures and improves precision. The proposed algorithm was evaluated on four datasets, and the experimental results show that it increases recognition accuracy and reduces computational cost.
Keywords: augmented reality assisted assembly training; human-machine interaction; gesture recognition and prediction
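As a rough illustration of the feature representation described in the abstract (not the authors' implementation), the sketch below computes a 2D static descriptor from a binarized hand silhouette by concatenating Zernike moments with a HOG vector, and resamples a variable-length 3D palm-center trajectory to a fixed length by linear interpolation. The libraries used (mahotas, scikit-image, NumPy) and all parameter values (Zernike radius and degree, HOG cell size, number of trajectory samples) are illustrative assumptions, not values taken from the paper.

# Minimal sketch, assuming a binarized hand silhouette (2D array, e.g., 128x128)
# and a palm-center trajectory of shape (T, 3). Not the authors' code.
import numpy as np
import mahotas
from skimage.feature import hog

def static_features(silhouette, radius=64, degree=8):
    """Zernike moments (rotation-invariant shape descriptor) concatenated
    with a HOG vector (local gradient orientations) for one hand frame."""
    zern = mahotas.features.zernike_moments(silhouette, radius, degree=degree)
    grad = hog(silhouette.astype(float), orientations=9,
               pixels_per_cell=(16, 16), cells_per_block=(2, 2))
    return np.concatenate([zern, grad])

def dynamic_features(trajectory, n_samples=32):
    """Resample a variable-length (T, 3) trajectory to a fixed number of
    points by linear interpolation, then flatten into one feature vector."""
    trajectory = np.asarray(trajectory, dtype=float)
    t_old = np.linspace(0.0, 1.0, len(trajectory))
    t_new = np.linspace(0.0, 1.0, n_samples)
    resampled = np.stack([np.interp(t_new, t_old, trajectory[:, k])
                          for k in range(trajectory.shape[1])], axis=1)
    return resampled.ravel()

In this reading, the static and dynamic vectors would simply be concatenated per action and fed to the classifier; the paper's action-unit grouping and boundary optimization are not reproduced here.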

Share and Cite

MDPI and ACS Style

Dong, J.; Xia, Z.; Zhao, Q. Augmented Reality Assisted Assembly Training Oriented Dynamic Gesture Recognition and Prediction. Appl. Sci. 2021, 11, 9789. https://doi.org/10.3390/app11219789

AMA Style

Dong J, Xia Z, Zhao Q. Augmented Reality Assisted Assembly Training Oriented Dynamic Gesture Recognition and Prediction. Applied Sciences. 2021; 11(21):9789. https://doi.org/10.3390/app11219789

Chicago/Turabian Style

Dong, Jiaqi, Zeyang Xia, and Qunfei Zhao. 2021. "Augmented Reality Assisted Assembly Training Oriented Dynamic Gesture Recognition and Prediction" Applied Sciences 11, no. 21: 9789. https://doi.org/10.3390/app11219789

APA Style

Dong, J., Xia, Z., & Zhao, Q. (2021). Augmented Reality Assisted Assembly Training Oriented Dynamic Gesture Recognition and Prediction. Applied Sciences, 11(21), 9789. https://doi.org/10.3390/app11219789

Note that from the first issue of 2016, this journal uses article numbers instead of page numbers.
