Article

Motion Similarity Evaluation between Human and a Tri-Co Robot during Real-Time Imitation with a Trajectory Dynamic Time Warping Model

School of Mechanical Engineering, Shanghai Jiao Tong University, Shanghai 200240, China
* Author to whom correspondence should be addressed.
These authors contributed equally to this work.
Sensors 2022, 22(5), 1968; https://doi.org/10.3390/s22051968
Submission received: 4 January 2022 / Revised: 19 February 2022 / Accepted: 28 February 2022 / Published: 2 March 2022
(This article belongs to the Collection Smart Robotics for Automation)

Abstract

Precisely imitating human motions in real time poses a challenge for robots due to the differences in their physical structures. This paper proposes a human–computer interaction method for remotely manipulating life-size humanoid robots, together with a new metric for evaluating motion similarity. First, we establish a motion capture system to acquire the operator’s motion data and retarget it to a standard bone model. Second, we develop a fast mapping algorithm that maps the BVH (BioVision Hierarchy) data collected by the motion capture system to the joint angles of the robot, realizing imitation-based motion control of the humanoid robot. Third, a DTW (Dynamic Time Warping)-based trajectory evaluation method is proposed to quantitatively evaluate the difference between the robot trajectory and the human motion; meanwhile, visualization terminals make it convenient to compare the two different but simultaneous motion systems. We design a complex gesture imitation experiment to verify the feasibility and real-time performance of the control method. The proposed human-in-the-loop imitation control method addresses the prominent non-isostructural retargeting problem between human and robot, enhances robot interaction capability in a more natural way, and improves robot adaptability to uncertain and dynamic environments.

1. Introduction

In 2017, the National Natural Science Foundation of China (NSFC) launched a major research project, the Tri-Co Robot (Coexisting-Cooperative-Cognitive Robot). Tri-Co robots are robots that can naturally interact and collaborate with the operating environment, humans, and other robots, and that are adaptive to complex dynamic environments [1]. Over the years, Tri-Co robots have developed into many different types to suit different scenarios, functions, and tasks, and human–robot interaction (HRI) has become an important research field receiving extensive attention in academia and industry.
Robots mostly work in scenes of human life, and most of these scenes are constructed according to human scales, needs, and capabilities. Whether for industrial robots, agricultural robots, or various service robots, humanoid robots have relative advantages when replacing or assisting humans in their work. Compared to general-purpose HRI platforms, humanoid robots have several advantages. First, humanoid robots have the same structure and scale as humans, which means they can imitate most of the actions that humans can perform. Second, humanoid robots provide a platform for the subsequent development of HRI: owing to the similar structures of humanoid robots and humans, human experience can guide robots from a first-person perspective in the form of teaching and can even lead to humanoid autonomous decision-making methods. Third, humanoid robots can use existing human knowledge and skills to improve performance and greatly reduce the cost of HRI. Our team has previously proposed a human-in-the-loop control method [2] for controlling humanoid robots so that they can imitate human motion.
Teaching robots behaviors that are not pre-programmed in a natural way is valuable, and it promotes interactivity between humans and humanoid robots. Humans, for example, always learn new knowledge and skills through imitation [3]. For humanoid robots, it is usually easier to imitate human behavior than to program the controller directly [4]. Therefore, imitating humans is particularly important for humanoid robots, and humanoid behavior is the basis of humanoid robot motion [5].
There have been several related works over the past few years. Marcia Riley et al. used an external camera and a camera mounted on the operator’s head to obtain body posture and calculated the joint angles with a fast full-body inverse kinematics (IK) method, realizing real-time imitation on a Sarcos humanoid robot with 30 degrees of freedom (DOF). In addition, several works ([1,6,7,8]) use Kinect to collect images, perform gesture recognition, and reproduce similar actions on humanoid robots through various algorithms. With the advancement of machine vision algorithms, new methods have emerged. Emily-Jane Rolley-Parnell et al. use an RGB-D camera to capture human movements, extract image and depth information, apply the OpenPose algorithm to obtain the two-dimensional skeleton posture, and control a humanoid robot by solving for its joint commands [9]. Beyond vision, researchers have also tried other sensing methods. Abhay Bindal et al. fix an accelerometer motion sensor and an infrared sensor on the human leg to control the gait of a biped robot in real time [10]. Akif Durdu et al. connect potentiometers to human joints and, after classification by a neural network, control the robot to perform movements [4]. Shingo Kitagawa et al. use a newer approach: they developed a miniature tangible cube that captures the controller’s arm information and thereby controls the robot’s arms [11]. However, existing works have some drawbacks. First, using vision for motion recognition and control is not reliable, because these methods are sensitive to lighting conditions and complex backgrounds; with wearable sensors, the accuracy of motion control can be significantly improved. Second, solving the whole-body IK problem inevitably affects real-time performance. Third, motion imitation has rarely been applied to humanoid robots with human-level dexterity due to the open issue of how to deal with the incongruent geometries between humans and robots.
In this article, we propose a human-in-the-loop system that enables the life-size open-source 3D-printed humanoid robot InMoov to imitate the real-time motion of the human upper limbs. The InMoov robot has a structure similar to that of a human, with a total of 29 degrees of freedom, 22 of which are used and controlled in our system. The motion imitation flow is as follows. First, we use wearable sensors to capture the movement of the operator’s upper limbs. These data are saved in the BVH (BioVision Hierarchy) format and transmitted to the industrial computer that serves as the robot controller. Next, the BVH data are analyzed mathematically and converted into the corresponding joint angles of the robot. Finally, the industrial computer sends the joint angles to the lower-level controller in real time to control the robot. This human-in-the-loop system provides a novel, real-time, and accurate method for imitating human actions on humanoid robots. In addition, this paper proposes a trajectory evaluation method based on DTW (Dynamic Time Warping) to evaluate the similarity between human behavior and robot motion.
This paper is organized as follows. In Section 2, the motion capture system is introduced. Section 3 discusses the setup of the humanoid robot. Section 4 presents the realization of real-time motion imitation on the humanoid robot and provides a quantitative model for describing the incongruence between human and robot. Section 5 conducts several experiments on complicated gesture imitation with the proposed method. Finally, Section 6 and Section 7 give the discussion and conclusions.

2. Motion Capture System

This section introduces the motion capture system. The system mainly includes a motion sensor for capturing human motion and a motion retargeting method that links the captured human motion to a simplified skeleton model.

2.1. Motion Sensor

We use a wearable sensor system designed by Noitom Technology Ltd. (Beijing, China), composed of 32 nine-axis inertial sensors. The system is small, easy to wear, and widely applicable. By connecting to Axis Neuron Pro (ANP) on the Windows operating system, it can perform calibration and manage data transmission. At the same time, the collected data can visually reflect the operator’s movement in ANP.
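The paper does not detail how the captured frames leave ANP, but the data can be streamed to other programs over a network socket. The following minimal sketch, written under the assumptions that ANP streams one whitespace-separated motion frame per text line over TCP and that the host and port shown are placeholders, illustrates how such a stream could be read on the receiving side.

```python
import socket

# Placeholder address; ANP must be configured to stream the captured data over TCP.
ANP_HOST, ANP_PORT = '127.0.0.1', 7001

def bvh_frames():
    """Yield one motion frame per received line, assuming a newline-delimited ASCII
    stream of whitespace-separated per-joint rotation values (format assumed here)."""
    with socket.create_connection((ANP_HOST, ANP_PORT)) as sock:
        buffer = b''
        while True:
            chunk = sock.recv(4096)
            if not chunk:          # stream closed by ANP
                return
            buffer += chunk
            while b'\n' in buffer:
                line, buffer = buffer.split(b'\n', 1)
                if line.strip():
                    yield [float(v) for v in line.split()]
```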

2.2. Human Motion Retargeting

Motion retargeting is a classic problem that aims to transfer the motion of one subject to another while keeping the two motion styles consistent [12]. Using motion retargeting, the BVH data can reproduce the human motion collected by the sensors on the ANP bone model. BVH data store the hierarchical movement of the skeleton; that is, the movement of a child node depends on the movement of its parent node [13].
The BVH data we use do not include position channels; each joint carries only three rotation channels, while the lengths of the bones connecting the joints remain unchanged. Then, since the wearable sensors can be regarded as fixed on the operator, the posture of the operator can be calculated from the three rotation angles of each joint.
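To illustrate the hierarchical nature of the BVH data described above, the following sketch composes per-joint ZYX rotations down a two-joint chain with fixed bone lengths. The chain, example angles, and bone offsets are hypothetical illustration values, not data from the paper.

```python
import numpy as np
from scipy.spatial.transform import Rotation as R

# Hypothetical two-joint chain (shoulder -> elbow) with ZYX Euler angles in degrees,
# illustrating how a BVH child's motion composes on top of its parent's motion.
shoulder_local = R.from_euler('ZYX', [30.0, 10.0, 0.0], degrees=True)
elbow_local    = R.from_euler('ZYX', [0.0, 45.0, 0.0], degrees=True)

# The child's world rotation is the parent's rotation followed by the child's local rotation.
elbow_world = shoulder_local * elbow_local

# Bone lengths are fixed in our BVH data, so joint positions follow from the offsets alone.
upper_arm_offset = np.array([0.30, 0.0, 0.0])   # shoulder -> elbow bone (assumed, metres)
forearm_offset   = np.array([0.25, 0.0, 0.0])   # elbow -> wrist bone (assumed, metres)

elbow_pos = shoulder_local.apply(upper_arm_offset)
wrist_pos = elbow_pos + elbow_world.apply(forearm_offset)
```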

3. Setup of Humanoid Robot

The humanoid robot has a human-like design, with a structure and scale similar to the human body, and can imitate human movement [14]. However, due to the complexity of the human body and the limitations of traditional manufacturing methods, detailed humanoid robot designs were rare for a long time. Today, with the rapid development of 3D printing technology, 3D-printed humanoid robots such as InMoov, Flobi, and iCub are used as experimental platforms for HRI research.
The research in this article is based on the InMoov 3D printed life-size humanoid robot initiated by French sculptor Gael Langevin in 2012 [15]. The InMoov robot contains a total of 29 degrees of freedom, 22 of which are controlled in the motion simulation of this article, including 5 DOF for each hand, 4 for each arm, 3 for each shoulder, and 2 for the neck, as shown in Figure 1. In terms of control, the upper controller uses the Arduino Mega 2560. On the one hand, the upper controller needs to communicate with the industrial computer to retrieve the joint angle control information. On the other hand, it needs to communicate with the lower controllers, which include four Arduino Nano boards, through the Modbus RTU protocol. Each Arduino Nano controls the movement of six servos through PWM.

4. Real-Time Imitation of Human Motion

The overall structure of the proposed method is shown in Figure 2. First, ROS (Robot Operating System) is adopted, using message subscription and publication for data communication to ensure reliable data transmission. Second, a fast mapping algorithm converts the Euler angles of each joint in the BVH data into the joint angles of the corresponding robot joints in a greatly simplified way. Third, a trajectory evaluation function quantifies the degree of similarity between the robot trajectory and the human trajectory. Fourth, the collected human movement and the robot movement can be observed and compared on different visualization terminals.

4.1. Data Transmission

Nodes, which are the message processing units in the ROS system, are used to subscribe or publish messages to ROS topics [16]. The data flow of the system in this article is visualized in Figure 3, where ellipses represent nodes and squares represent topics.
  • rosserial_server_socket_node connects with the win32 console through TCP/IP and then advertises the topic, perception_neuron/data_1;
  • perception_neuron_one_topic_talk_node subscribes to the previous topic and then converts the Euler angles in the BVH data to joint angles, which are then published to another topic called Controller_joint_states (a minimal sketch of this node is given after the list);
  • joint_state_publisher subscribes to the previous topic and realizes the real-time simulation of robot model;
  • perception_serial sends the joint angles to the low-level slave controller through a serial port after obtaining them from Controller_joint_states.
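As a concrete illustration of the converter node’s role, the sketch below subscribes to perception_neuron/data_1 and republishes joint angles on Controller_joint_states using ROS 1 (rospy). The message types, joint names, and the placeholder mapping function are assumptions for illustration; the actual node may use different interfaces.

```python
#!/usr/bin/env python
# Minimal sketch of the converter node, assuming rospy (ROS 1), a
# std_msgs/Float32MultiArray input carrying per-joint ZYX Euler angles,
# and a sensor_msgs/JointState output. Names below are hypothetical.
import rospy
from std_msgs.msg import Float32MultiArray
from sensor_msgs.msg import JointState

JOINT_NAMES = ['r_shoulder_1', 'r_shoulder_2', 'r_shoulder_3', 'r_elbow', 'r_wrist']

def bvh_to_joint_angles(euler_flat):
    """Placeholder for the fast mapping algorithm described in Section 4.2."""
    return euler_flat[:len(JOINT_NAMES)]

def callback(msg, pub):
    js = JointState()
    js.header.stamp = rospy.Time.now()
    js.name = JOINT_NAMES
    js.position = bvh_to_joint_angles(list(msg.data))
    pub.publish(js)

if __name__ == '__main__':
    rospy.init_node('perception_neuron_one_topic_talk_node')
    pub = rospy.Publisher('Controller_joint_states', JointState, queue_size=10)
    rospy.Subscriber('perception_neuron/data_1', Float32MultiArray, callback, callback_args=pub)
    rospy.spin()
```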
The above shows that the data transmission on the industrial computer is mainly carried out through ROS. After the data leave the industrial computer, the packaged joint angles are transmitted to the upper and lower controllers through the serial port. To prevent packet loss or data misalignment during transmission, we designed a specific communication protocol, as shown in Figure 4. The timestamp data and joint angle data are converted into integers through a specific encoding scheme. The communication protocol includes 2 bytes of timestamp data, 22 bytes of position data corresponding to the joints, and 2 bytes of CRC16 check code generated from the preceding data to ensure the integrity of the transmission.
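The following sketch shows how such a frame could be packed, assuming one byte per joint position, a big-endian 2-byte timestamp, and the standard Modbus CRC-16 computed over the payload. The paper specifies only the field sizes, so the exact encoding and any header bytes are assumptions here.

```python
import struct

def crc16_modbus(data: bytes) -> int:
    """Standard Modbus CRC-16 (polynomial 0xA001, initial value 0xFFFF)."""
    crc = 0xFFFF
    for byte in data:
        crc ^= byte
        for _ in range(8):
            crc = (crc >> 1) ^ 0xA001 if crc & 1 else crc >> 1
    return crc

def pack_frame(timestamp: int, joint_angles_deg) -> bytes:
    """Pack one frame: 2-byte timestamp, 22 one-byte joint positions, 2-byte CRC16.

    The byte layout and degree-to-integer encoding are illustrative assumptions.
    """
    assert len(joint_angles_deg) == 22
    payload = struct.pack('>H', timestamp & 0xFFFF)
    payload += bytes(int(a) & 0xFF for a in joint_angles_deg)  # one byte per joint
    return payload + struct.pack('>H', crc16_modbus(payload))

# Example: a frame of 22 mid-range joint angles at timestamp 42.
frame = pack_frame(42, [90] * 22)
```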

4.2. Mapping Algorithm

For the InMoov robot to imitate the motion of the human body, an algorithm is needed to calculate the corresponding joint angles from the Euler angles in the BVH data. From the three Euler angles of each joint in the BVH data, we can calculate the rotation matrix between the child link and the parent link. Assuming the ZYX Euler angles are $\varphi$, $\theta$, and $\psi$, the rotation matrix of the child frame relative to the parent frame is:
$$
R^{\,parent}_{\,child}=
\begin{bmatrix} \cos\varphi & -\sin\varphi & 0 \\ \sin\varphi & \cos\varphi & 0 \\ 0 & 0 & 1 \end{bmatrix}
\begin{bmatrix} \cos\theta & 0 & \sin\theta \\ 0 & 1 & 0 \\ -\sin\theta & 0 & \cos\theta \end{bmatrix}
\begin{bmatrix} 1 & 0 & 0 \\ 0 & \cos\psi & -\sin\psi \\ 0 & \sin\psi & \cos\psi \end{bmatrix}
$$
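As a numerical sanity check of this ZYX composition, the sketch below builds the matrix with NumPy and compares it against SciPy’s intrinsic ZYX Euler convention; it is an illustration rather than the authors’ implementation.

```python
import numpy as np
from scipy.spatial.transform import Rotation

def rotation_zyx(phi, theta, psi):
    """R_child^parent = Rz(phi) @ Ry(theta) @ Rx(psi) for ZYX Euler angles in radians."""
    Rz = np.array([[np.cos(phi), -np.sin(phi), 0.0],
                   [np.sin(phi),  np.cos(phi), 0.0],
                   [0.0,          0.0,         1.0]])
    Ry = np.array([[ np.cos(theta), 0.0, np.sin(theta)],
                   [ 0.0,           1.0, 0.0          ],
                   [-np.sin(theta), 0.0, np.cos(theta)]])
    Rx = np.array([[1.0, 0.0,          0.0         ],
                   [0.0, np.cos(psi), -np.sin(psi)],
                   [0.0, np.sin(psi),  np.cos(psi)]])
    return Rz @ Ry @ Rx

# Cross-check against SciPy's intrinsic z-y'-x'' ("ZYX") Euler convention.
phi, theta, psi = 0.3, -0.2, 0.5
assert np.allclose(rotation_zyx(phi, theta, psi),
                   Rotation.from_euler('ZYX', [phi, theta, psi]).as_matrix())
```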
Figure 5 shows the mapping problem. The joints of humans and humanoid robots are not exactly the same. Limited by mechanical constraints, some joints of humanoid robots cannot achieve rotation in three independent directions. For each joint, the situation is different, so we need to formulate algorithms for different situations.
The first case is when the degrees of freedom of a human joint match those of the robot joint. Take the shoulder as an example. The shoulder of the InMoov robot is similar to the human shoulder: both have three degrees of freedom, and their rotation axes can be regarded as approximately perpendicular to each other. Assuming the three shoulder joint angles are $\alpha$, $\beta$, and $\gamma$, respectively, the rotation matrix of the arm coordinate system relative to the shoulder coordinate system is:
$$
R^{\,shoulder}_{\,arm}=
\begin{bmatrix} \cos\alpha & -\sin\alpha & 0 \\ \sin\alpha & \cos\alpha & 0 \\ 0 & 0 & 1 \end{bmatrix}
\begin{bmatrix} \cos\beta & 0 & \sin\beta \\ 0 & 1 & 0 \\ -\sin\beta & 0 & \cos\beta \end{bmatrix}
\begin{bmatrix} 1 & 0 & 0 \\ 0 & \cos\gamma & -\sin\gamma \\ 0 & \sin\gamma & \cos\gamma \end{bmatrix}
$$
Comparing the two formulas, we only need to set the angles $\varphi$, $\theta$, $\psi$ obtained from the motion sensor equal to the robot joint angles $\alpha$, $\beta$, $\gamma$; the only requirement is that the two rotation sequences must be the same.
The second case is the conversion from two human DOF to one robot DOF. In the upper limbs, this type of conversion mainly involves the elbow and the wrist. Take the elbow as an example. The human elbow can both bend and rotate, while the robot elbow can only bend. To calculate the bending angle $\Omega$ of the robot elbow, as shown in Figure 6, and assuming that the sensors are fixed with respect to the human body and that the x-direction is along the links, we can derive the following equations from the rotation matrix defined above. Here $R^1_2$ denotes the rotation matrix of frame $x_2y_2z_2$ with respect to $x_1y_1z_1$, and $\hat{x}_1^{\,1}$ denotes the unit vector of $x_1$ expressed in frame $x_1y_1z_1$.
$$\hat{x}_2^{\,2} = (1,\ 0,\ 0)^T$$
$$\hat{x}_2^{\,1} = R^1_2\,\hat{x}_2^{\,2} = (\cos\theta\cos\varphi,\ \cos\theta\sin\varphi,\ -\sin\theta)^T$$
$$\langle \hat{x}_2^{\,1}, \hat{x}_1^{\,1} \rangle = \arccos(\cos\theta\cos\varphi)$$
$$\Omega = \pi - \langle \hat{x}_2^{\,1}, \hat{x}_1^{\,1} \rangle = \pi - \arccos(\cos\theta\cos\varphi)$$
The wrist is similar to the elbow, except that the robot wrist can only rotate rather than bend. We therefore need to compute the rotation joint angle $\omega$, as shown in Figure 7. Applying the same notation as above, we obtain:
$$\hat{z}_2^{\,2} = (0,\ 0,\ 1)^T$$
$$\hat{z}_2^{\,1} = R^1_2\,\hat{z}_2^{\,2} = (\sin\varphi\sin\psi + \cos\varphi\sin\theta\cos\psi,\ -\cos\varphi\sin\psi + \sin\varphi\sin\theta\cos\psi,\ \cos\theta\cos\psi)^T$$
$$\langle \hat{z}_2^{\,1}, \hat{z}_1^{\,1} \rangle = \arccos(\cos\theta\cos\psi)$$
$$\omega = \langle \hat{z}_2^{\,1}, \hat{z}_1^{\,1} \rangle = \arccos(\cos\theta\cos\psi)$$
According to the expressions for $\Omega$ and $\omega$ above, we plot the mapping between the sensor data and the robot’s elbow and wrist joint angles, as shown in Figure 8 and Figure 9.
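The two mappings reduce to a few lines of code. The following sketch evaluates $\Omega = \pi - \arccos(\cos\theta\cos\varphi)$ and $\omega = \arccos(\cos\theta\cos\psi)$ directly; the clipping and the example angles are illustrative additions rather than details from the paper.

```python
import numpy as np

def elbow_angle(phi, theta):
    """Robot elbow bending angle Omega from the BVH elbow Euler angles (radians).
    Maps the human elbow's two DOF onto the robot's single bending DOF."""
    return np.pi - np.arccos(np.clip(np.cos(theta) * np.cos(phi), -1.0, 1.0))

def wrist_angle(theta, psi):
    """Robot wrist rotation angle omega from the BVH wrist Euler angles (radians)."""
    return np.arccos(np.clip(np.cos(theta) * np.cos(psi), -1.0, 1.0))

# Example: evaluate the elbow mapping at theta = 30 deg, phi = 20 deg (arbitrary values).
omega_elbow_deg = np.degrees(elbow_angle(np.radians(20.0), np.radians(30.0)))
```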
The last case is conversion from three human DOF to two robot DOF, such as in the neck joint. The solution to this case resembles that for the shoulder joint, and we only need to take two of three Euler angles in the corresponding order.

4.3. Trajectory Imitation Evaluation

The elbow and wrist joints of the robot differ from those of the human upper limb and lack some degrees of freedom. As a result, the robot cannot establish a one-to-one correspondence of joints when imitating human actions and must map human movements onto itself through a mapping algorithm. In this case, multiple motion trajectories of the human operator may be mapped to the same robot trajectory when our proposed mapping algorithm is applied, as shown in Figure 10. We therefore need a method to quantitatively assess the degree of similarity between human motion trajectories and robot trajectories, and we propose a DTW-based trajectory evaluation method.
Taking the wrist mapping as an example, Figure 10 shows three motion trajectories of human operators, marked A, B, and C, which all map to the same robot trajectory. Each trace has several marker points, which are the sample points of the simulation. Trajectory A is a special case, generated by keeping the wrist bending angle $\theta_w = 0$. As the data show, the robot trajectories mapped from these three trajectories are identical, going from 0 degrees to 180 degrees over the same overall time length, but the time series are warped relative to one another: to reach the same robot angle, different human trajectories differ by up to 20 frames. If a point-to-point error is computed directly along the time series, there is a time deviation between the two points used in each calculation, so this approach fails to capture how similar the overall trajectories are. Introducing the DTW distance solves this timing drift. Obviously, as the trajectory progresses, the accumulated point-to-point distance can only keep increasing, whereas the DTW distance varies according to the overall similarity of the trajectories.
For trajectories with $n$ discrete moments, we build a DTW square matrix $D_{n\times n}$ to describe the DTW distance between the human motion trajectory and the robot trajectory. In this matrix, $d_{i,j}$ is the DTW distance between the human trajectory data at time $i$ and the robot trajectory data at time $j$. The elements of $D$ obey the following recurrence, where $s_i$ is the human trajectory data at time $i$ and $r_j$ is the robot trajectory data at time $j$. For a trajectory used for evaluation, any cut at a certain moment can be regarded as an independent trajectory; therefore, the diagonal elements of the DTW matrix, $\mathrm{diag}(D)$, can be plotted to show the similarity of the trajectories more intuitively and dynamically as the motion proceeds.
$$d_{i,j} = \mathrm{distance}(s_i, r_j) + \min(d_{i-1,j-1},\ d_{i-1,j},\ d_{i,j-1})$$
where $\mathrm{distance}(s_i, r_j)$ is the Euclidean distance between $s_i$ and $r_j$.
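A direct implementation of this recurrence for two one-dimensional joint-angle trajectories might look like the following sketch; the sample sequences are placeholders, whereas in the paper they are the recorded human and robot joint angles.

```python
import numpy as np

def dtw_matrix(s, r):
    """Accumulated DTW cost matrix D between a human trajectory s and a robot trajectory r.
    Implements d[i,j] = distance(s_i, r_j) + min(d[i-1,j-1], d[i-1,j], d[i,j-1])."""
    n, m = len(s), len(r)
    d = np.full((n + 1, m + 1), np.inf)
    d[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(s[i - 1] - r[j - 1])   # Euclidean distance for scalar joint angles
            d[i, j] = cost + min(d[i - 1, j - 1], d[i - 1, j], d[i, j - 1])
    return d[1:, 1:]

# Placeholder joint-angle sequences; diag(D) gives the running similarity curve described above.
human = [0.0, 10.0, 25.0, 40.0, 60.0]
robot = [0.0, 12.0, 22.0, 43.0, 58.0]
D = dtw_matrix(human, robot)
running_similarity = np.diag(D)
```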
Our method uses DTW to calculate the temporal similarity of various body parts, and we then assign a weight to every joint to compute a weighted average of the trajectory differences of the whole system. Take the imitation of the arm motion of a humanoid robot as an example. Suppose the robot has three degrees of freedom at the shoulder and one degree of freedom each at the elbow and wrist, so that each degree of freedom produces a DTW distance. We take the range of motion of each degree of freedom as its weight. $W_*$ is the weight of each joint angle, $D_*$ is the DTW distance of each joint angle, and $A_w$ is the DTW distance of the robot system:
$$W_x = \max(x) - \min(x)$$
$$A_w = \frac{W_{s1}D_{s1} + W_{s2}D_{s2} + W_{s3}D_{s3} + W_{e}D_{e} + W_{w}D_{w}}{W_{s1} + W_{s2} + W_{s3} + W_{e} + W_{w}}$$
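Combining the two formulas, a small sketch of the weighted aggregation is given below. The joint ranges follow those listed in Section 5, while the per-joint DTW values are placeholder numbers for illustration only.

```python
def system_dtw_distance(dtw_distances, joint_ranges):
    """Weighted average A_w of per-joint DTW distances, with each joint weighted by its
    range of motion W = max(x) - min(x), in degrees."""
    weights = {joint: hi - lo for joint, (lo, hi) in joint_ranges.items()}
    return sum(weights[j] * dtw_distances[j] for j in dtw_distances) / sum(weights.values())

# Joint ranges as listed in Section 5; per-joint DTW values below are placeholders.
joint_ranges = {'s1': (-30, 120), 's2': (-120, 90), 's3': (-20, 20), 'e': (0, 135), 'w': (-180, 0)}
per_joint_dtw = {'s1': 0.0, 's2': 0.0, 's3': 0.0, 'e': 12.5, 'w': 30.0}
A_w = system_dtw_distance(per_joint_dtw, joint_ranges)
```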

4.4. Different Visualization Terminals

On one hand, as shown in Figure 11 on the left, ANP provides a skeletal model to visualize the human movement collected by the sensor. The skeleton model analyzes the BVH data and uses Euler angles to show the rotation of each joint of the operator.
On the other hand, as shown in Figure 11 on the right, ROS provides a simulation environment that can visualize the robot model. This requires converting the robot’s 3D model into URDF (unified robot description format). URDF is an XML-based language that mainly describes the general robot simulation model in the ROS system, including the shape, size, color, kinematics, and dynamic characteristics of the model [16]. We import the open-source robot STL files into URDF after adjusting the scale. Then, we use Xacro (XML Macros) to reuse one structure for two different parts, namely the left arm and the right arm, and automatically generate a URDF file. Table A1 shows some basic syntax. Finally, we call RVIZ (a visualization tool in ROS) to visualize the robot model and make it run in real time according to the calculated joint angles.

5. Results

This section presents the experimental results obtained with the proposed method on the humanoid robot. The results can be seen in Figure 12 and Figure 13. To verify the feasibility of the system, we take various photos of the human motion imitation system, including different arm positions, face orientations, and finger movements. These gestures are complicated because imitating them entails rotating most of the revolute joints at the same time rather than only one or two. In addition, the consistency between the wearer’s action and the humanoid robot’s action demonstrates that the robot has successfully followed the motion of the wearer’s upper limbs, proving the feasibility of our proposed method. Furthermore, a synchronization latency of less than 0.5 s validates the real-time performance.
We use a single-degree-of-freedom rotation experiment on the right shoulder to illustrate the accuracy of our system. Figure 14 shows the comparison between the trajectory of the operator and that of the humanoid robot. The operator makes an arc trajectory in which his arm rotates 49°, while the humanoid robot rotates 52° under real-time control; the absolute error is about 3°, roughly 6.1% of the rotation angle of the human arm. The relative error is small, which shows that our method has high accuracy.
To evaluate the accuracy of the robot’s imitation of human motion trajectories, we randomly generated two human motion trajectories and recorded the robot’s joint angles while it imitated them. It is worth mentioning that the three shoulder joints follow the three-human-DOF-to-three-robot-DOF conversion, so their DTW distances are zero. According to the range of each degree of freedom, we set $W_{s1} = 150$ with $s_1 \in (-30^{\circ}, 120^{\circ})$; $W_{s2} = 210$ with $s_2 \in (-120^{\circ}, 90^{\circ})$; $W_{s3} = 40$ with $s_3 \in (-20^{\circ}, 20^{\circ})$; $W_e = 135$ with $e \in (0^{\circ}, 135^{\circ})$; and $W_w = 180$ with $w \in (-180^{\circ}, 0^{\circ})$. We calculate the DTW distances of the elbow and the wrist and finally the DTW distance of the robot system, as shown in Figure 15.
As time elapses, the DTW distance of each joint in each experiment keeps increasing, which results from the accumulation of mapping errors during the experiment. In the curve of the wrist joint, several sharp peaks appear. The reason is that when the trajectory reaches a cusp, the rotational DOF of the operator’s wrist quickly reaches its limit position and the bending DOF of the wrist changes greatly, making the action difficult for the robot to imitate. In the subsequent process, the DTW distance falls back to a normal level because the robot’s motion at that moment is similar to the motion later in the trajectory; by the nature of the DTW algorithm, a better correspondence is then selected when determining the DTW distance.
During the experiment, the second human motion trajectory we randomly generated had smaller elbow rotation and wrist bending motions than the first one. Therefore, as shown in the figure, the DTW distance of the elbow, wrist, and the whole system in the first experiment is larger than that in the second experiment. What is more, since the bending of the elbow has little effect on the imitation of the robot’s actions, while the rotation of the wrist has a greater influence, in the two experiments, the DTW distance of the wrist is significantly larger than that of the elbow. Since the DTW distance of the shoulder is zero, the DTW distance of the system is generally smaller than the former two.
In general, the DTW-based metric reveals the following characteristics of robot imitation. First, human poses that are difficult for the robot to imitate can be identified by a large DTW value and can thereby be avoided at the choreography stage. Second, it is feasible to judge which trajectories in the robot’s attainable workspace can be imitated more accurately and which are more difficult to imitate.

6. Discussion

In this article, we demonstrate a novel teleoperation method that uses lightweight wearable inertial sensors to collect human motion data and map them to the robot. Compared with some existing teleoperation methods, this method adopts a first-person-view mapping, which makes the operation more immersive for the operator and the robot’s imitation more accurate. In addition, we propose a DTW trajectory evaluation method, which describes the similarity between human and robot motion trajectories more accurately.
However, our method still has some limitations. Regarding the teleoperation method, first, there are structural differences between humans and robots. We use a special mapping algorithm, which means that part of the data collected by the sensors is discarded, leading to deviations between the robot’s actions and the human’s actions. In addition, the workspace of the robot’s joints is limited by the mechanical design; for example, the robot’s arm cannot reach over the shoulder. Second, since the rotation of human joints is achieved through the rotation of the bones while the wearable sensors are worn on the surface of the body, there is a certain angular displacement deviation from the bones, so obvious errors appear when the operator performs some special actions. Other factors include accumulated drift error. Regarding the DTW trajectory evaluation, although this method describes the similarity between trajectories more accurately and quantitatively, we can only compare it against the method that directly computes the Euclidean distance. Moreover, while the method quantitatively expresses the degree of similarity, the index can only be used for relative comparisons between trajectories; its value has no absolute numerical interpretation.
In future work, we expect to design a more reasonable mapping algorithm, which can reduce the influence of non-isomorphic mapping on the accuracy of robot trajectory simulation through the linkage between joints. Meanwhile, the DTW trajectory evaluation method can be used as an indicator to evaluate whether the mapping algorithm makes the imitation trajectory of the robot more accurate in future research.

7. Conclusions

In this article, a human-in-the-loop system for humanoid robots to imitate human motion is proposed, and a metric for evaluating to what extent the robot motion is similar to that of the human is highlighted. The system realizes real-time imitation and evaluation on a humanoid robot through a motion capture system, a fast mapping algorithm, a time-series trajectory evaluation method, and multiple visualization terminals. In experiments with a variety of human motion postures, the system demonstrated good real-time performance and accuracy, and the motion similarity was quantitatively analyzed with the proposed evaluation metric. This work lays a foundation for improving the robot’s interactive capabilities, especially for human motion imitation.

Author Contributions

Conceptualization, L.G., B.C. and W.X.; data curation, L.G. and W.X.; formal analysis, C.L.; funding acquisition, L.G.; investigation, L.G.; methodology, B.C., X.L. and L.Z.; project administration, L.G.; software, B.C., W.X. and Z.Z.; supervision, C.L.; validation, X.L.; visualization, B.C. and X.L.; writing—original draft, L.G., B.C. and W.X.; writing—review and editing, L.G. and B.C. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the National Key R&D Program of China (Grant Nos. 2018YFB1306703 and 2019YFE0125200) from the Ministry of Science and Technology of China.

Conflicts of Interest

The authors declare no conflict of interest.

Abbreviations

The following abbreviations are used in this manuscript:
HRI   Human–robot interaction
NSFC  National Natural Science Foundation of China
IK    Inverse kinematics
DOF   Degrees of freedom
BVH   BioVision Hierarchy
IMU   Inertial measurement unit
ANP   Axis Neuron Pro
ROS   Robot Operating System
DTW   Dynamic Time Warping
URDF  Unified robot description format

Appendix A

Table A1. Fundamental grammars of Xacro.

Command   | Definition                                    | Usage
Property  | <xacro:property name="pi" value="3.14" />     | <... value="${2*pi}" .../>
Argument  | <xacro:arg name="use_gui" default="false"/>   | <... use_gui:=true .../>
Macro     | <xacro:macro name="arm" params="side"/>       | <xacro:arm side="left"/>
Including | <xacro:include filename="other_file.xacro" /> |

References

  1. Yavşan, E.; Uçar, A. Gesture imitation and recognition using Kinect sensor and extreme learning machines. Measurement 2016, 94, 852–861. [Google Scholar] [CrossRef]
  2. Xu, W.; Li, X.; Xu, W.; Gong, L.; Huang, Y.; Zhao, Z.; Zhao, L.; Chen, B.; Yang, H.; Cao, L.; et al. Human-robot Interaction Oriented Human-in-the-loop Real-time Motion Imitation on a Humanoid Tri-Co Robot. In Proceedings of the 2018 3rd International Conference on Advanced Robotics and Mechatronics (ICARM), Singapore, 18–20 July 2018; pp. 781–786. [Google Scholar] [CrossRef]
  3. Riley, M.; Ude, A.; Wade, K.; Atkeson, C.G. Enabling real-time full-body imitation: A natural way of transferring human movement to humanoids. In Proceedings of the IEEE International Conference on Robotics and Automation, ICRA, Taipei, Taiwan, 14–19 September 2003; Volume 2, pp. 2368–2374. [Google Scholar]
  4. Durdu, A.; Cetin, H.; Komur, H. Robot imitation of human arm via Artificial Neural Network. In Proceedings of the International Conference on Mechatronics-Mechatronika, Brno, Czech Republic, 5–7 December 2015; pp. 370–374. [Google Scholar]
  5. Hyon, S.H.; Hale, J.G.; Cheng, G. Full-Body Compliant Human–Humanoid Interaction: Balancing in the Presence of Unknown External Forces. IEEE Trans. Robot. 2007, 23, 884–898. [Google Scholar] [CrossRef]
  6. Ding, I.J.; Chang, C.W.; He, C.J. A kinect-based gesture command control method for human action imitations of humanoid robots. In Proceedings of the International Conference on Fuzzy Theory and ITS Applications, Yilan, Taiwan, 26–28 November 2014; pp. 208–211. [Google Scholar]
  7. Bindal, A.; Kumar, A.; Sharma, H.; Kumar, W.K. Design and implementation of a shadow bot for mimicking the basic motion of a human leg. In Proceedings of the International Conference on Recent Developments in Control, Automation and Power Engineering, Noida, India, 12–13 March 2015; pp. 361–366. [Google Scholar]
  8. Koenig, A.; Rodriguez Y Baena, F.; Secoli, R. Gesture-based teleoperated grasping for educational robotics. In Proceedings of the 2021 30th IEEE International Conference on Robot & Human Interactive Communication (RO-MAN), Vancouver, BC, Canada, 8–12 August 2021; pp. 222–228. [Google Scholar]
  9. Rolley-Parnell, E.J.; Kanoulas, D.; Laurenzi, A.; Delhaisse, B.; Rozo, L.; Caldwell, D.; Tsagarakis, N. Bi-Manual Articulated Robot Teleoperation using an External RGB-D Range Sensor. In Proceedings of the 2018 15th International Conference on Control, Automation, Robotics and Vision (ICARCV), Singapore, 18–21 November 2018; pp. 298–304. [Google Scholar]
  10. Gobee, S.; Muller, M.; Durairajah, V.; Kassoo, R. Humanoid robot upper limb control using microsoft kinect. In Proceedings of the 2017 International Conference on Robotics, Automation and Sciences (ICORAS), Melaka, Malaysia, 27–29 November 2017; pp. 1–5. [Google Scholar] [CrossRef]
  11. Kitagawa, S.; Hasegawa, S.; Yamaguchi, N.; Okada, K.; Inaba, M. Miniature Tangible Cube: Concept and Design of Target-Object-Oriented User Interface for Dual-Arm Telemanipulation. IEEE Robot. Autom. Lett. 2021, 6, 6977–6984. [Google Scholar] [CrossRef]
  12. Meng, X.; Pan, J.; Qin, H. Motion Capture and Retargeting of Fish by Monocular Camera. In Proceedings of the International Conference on Cyberworlds, Chester, UK, 20–22 September 2017; pp. 80–87. [Google Scholar]
  13. Dai, H.; Cai, B.; Song, J.; Zhang, D. Skeletal Animation Based on BVH Motion Data. In Proceedings of the 2010 2nd International Conference on Information Engineering and Computer Science, Wuhan, China, 25–26 December 2010; pp. 1–4. [Google Scholar]
  14. Rodriguez, N.E.N.; Carbone, G.; Ceccarelli, M. Antropomorphic Design and Operation of a New Low-Cost Humanoid Robot. In Proceedings of the IEEE/RAS-Embs International Conference on Biomedical Robotics and Biomechatronics, Pisa, Italy, 20–22 February 2006; pp. 933–938. [Google Scholar]
  15. Gong, L.; Gong, C.; Ma, Z.; Zhao, L.; Wang, Z.; Li, X.; Jing, X.; Yang, H.; Liu, C. Real-time human-in-the-loop remote control for a life-size traffic police robot with multiple augmented reality aided display terminals. In Proceedings of the 2017 2nd International Conference on Advanced Robotics and Mechatronics (ICARM), Tai’an, China, 27–31 August 2017; pp. 420–425. [Google Scholar] [CrossRef]
  16. Wang, Z.; Gong, L.; Chen, Q.; Li, Y.; Liu, C.; Huang, Y. Rapid Developing the Simulation and Control Systems for a Multifunctional Autonomous Agricultural Robot with ROS; Springer International Publishing: Berlin/Heidelberg, Germany, 2016. [Google Scholar]
Figure 1. DOF of the humanoid robot (DOF of fingers are not shown).
Figure 2. Whole structure of the proposed method.
Figure 3. Visualized data stream through ROS publish_subscribe messaging.
Figure 4. Designed communication protocol.
Figure 5. Three motion systems with different constraints. (a) The human motion system with biological constraints. (b) The BVH motion system with no constraints. (c) The robot motion system with mechanical constraints.
Figure 6. Elbow conversion from 2 DOF to 1 DOF. $x_1y_1z_1$ and $x_2y_2z_2$ are the DH coordinate systems attached to the two links of the elbow joint. $\Omega$ is the bending angle of the elbow joint.
Figure 7. Wrist conversion from 2 DOF to 1 DOF. $x_1y_1z_1$ and $x_2y_2z_2$ are the DH coordinate systems attached to the two links of the wrist joint. $\omega$ is the rotating angle of the wrist joint.
Figure 8. Elbow joint angle map.
Figure 9. Wrist joint angle map.
Figure 10. Schematic diagram of three human trajectories mapped to the same robot trajectory.
Figure 11. Different visualization terminals for different motion systems.
Figure 12. Experiments of different gestures with arms and head.
Figure 13. Comparison between fingers.
Figure 14. Snapshots for motion trajectory.
Figure 15. DTW distance in the experiment.
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
