Article

Assessment of Handover Prediction Models in Estimation of Cycle Times for Manual Assembly Tasks in a Human–Robot Collaborative Environment

1 Department of Industrial Engineering and System Management, Feng Chia University, Taichung 407, Taiwan
2 International School of Technology and Management, Feng Chia University, Taichung 407, Taiwan
* Author to whom correspondence should be addressed.
Appl. Sci. 2020, 10(2), 556; https://doi.org/10.3390/app10020556
Submission received: 30 September 2019 / Revised: 2 January 2020 / Accepted: 10 January 2020 / Published: 12 January 2020
(This article belongs to the Special Issue Industrial Engineering and Management: Current Issues and Trends)

Abstract

The accuracy and fluency of a handover task affects the work efficiency of human–robot collaboration. A precise and proactive estimation of handover time points by robots when handing over assembly parts to humans can minimize waiting times and maximize efficiency. This study investigated and compared the cycle time, waiting time, and operators’ subjective preference of a human–robot collaborative assembly task when three handover prediction models were applied: traditional method-time measurement (MTM), Kalman filter, and trigger sensor approaches. The scenarios of a general repetitive assembly task and repetitive assembly under a learning curve were investigated. The results revealed that both the Kalman filter prediction model and the trigger sensor method were superior to the MTM fixed-time model in both scenarios in terms of cycle time and subjective preference. The Kalman filter prediction model could adjust the handover timing according to the operator’s current speed and reduce the waiting time of the robot and operator, thereby improving the subjective preference of the operator. Moreover, the trigger sensor method’s inherent flexibility concerning random single interruptions on the operator’s side earned it the highest scores in the satisfaction assessment.

1. Introduction

Human collaboration with robots in a coworking space, or working with robots in a partnership manner, is described as close-proximity human–robot interaction (HRI) [1,2,3,4]. This collaboration between human operators and robots has become possible in production systems mainly because of improvements in the safety design and autonomy of robots [5,6,7,8,9,14]. The combination of the performative accuracy of collaborative robots with the planning flexibility of human actors has resulted in novel production applications that allow more natural and effective interaction between humans and robots with the objective of smart production [3,10,11,12,13].
In smart production, the handover is one of the basic interactive elements in the context of close-proximity HRI. A significant amount of research has been conducted in recent years to explore the different facets of handovers [15], including grasp path planning [16,17], grasp power control during a handover process [18], fluency [19], and social interactivity [20,21,22].
This study investigated the context in which a robot must hand over a part to a human partner to complete an assembly task in a close-proximity human–robot collaborative environment. For the handover process to be completed smoothly, the robot must estimate the human worker’s task completion time and be able to self-adjust the cycle time between different skill levels of assembly workers. Prediction and timeliness on the side of the robot can affect performance and preference of handover tasks between humans and robots [23]. When humans are performing an assembly task, the causes of the variation in the assembly time may be divided into two categories. The first category is causable and trending variation due to the learning curve or undesirable internal or external factors (e.g., fatigue, noise, ambient temperature, and vibration). For instance, progressing tiredness of the human operator leads to lower processing speed and considerably varying processing time. The second category is noncausable randomness. In this case, assembly time variation is simply caused by nonattributable random errors (e.g., accidentally dropping an assembly part, sudden itch as a trigger to scratch a body part, etc.). The present study used the common building blocks assembly task [24,25] as a collaborative task between robots and operators to explore the following two work scenarios: (1) a general repetitive assembly task during which mainly only nonattributable random errors occur in the cycle time; and (2) repetitive assembly in which the human operator exhibits a learning curve, resulting in causable and trending cycle time variation. The method-time measurement (MTM), Kalman filter [26], and trigger sensor predictive models were employed to compare the cycle time of the assembly task, waiting times of robot and operator, and the participants’ subjective preferences. The objective was to obtain a predictive handover model with lower waiting times, shorter cycle time, and higher efficiency.

1.1. Handover Tasks

Strabala et al. explored the handover process from sociocognitive and physical perspectives [23]. The sociocognitive aspect consists of establishing an agreement on handover object (what), handover time (when), and handover location (where) between the two entities involved. The physical aspect involves the object handover actions, such as approach, reach, and transfer.
Regarding the sociocognitive aspect, Endsley suggested that efficient collaboration requires common understanding of the task context between humans and robots and common expectations regarding the next step [27]. When two human working partners collaborate on an assembly task, they must be able to estimate the time required by their partner in the assembly process, including the effects of potential external and intrinsic factors (e.g., skill level, fatigue, and stress) that can affect the assembly rate [24]. Aleotti et al. suggested that the precise and proactive estimation of handover time points by robots when handing over assembly parts to humans can minimize waiting times and maximize efficiency [28]. Robots with predictive capabilities can interact more effectively, intuitively, and naturally, and their operational efficiency is considerably greater than that of reactive robots. Additionally, experiment participants have reflected that interactions with robots are smoother when the robots are predictive rather than reactive [29,30]. Therefore, to ensure efficient and smooth collaboration, the handover time and position control during handovers are critical.
Regarding the physical aspect, numerous scholars have explored the effect of robot handover posture, power, and path on handover efficiency. Edsinger and Kemp conducted an experiment in which an object was handed by a humanoid robot with 29 degrees of freedom to a human [31]. The end effector of the humanoid robot used in the experiment had a complete sensing function that could detect an object’s approach and the power supplied. The results demonstrated that the participants could respond to the robot’s gestures and complete the handover process without any additional prompts. Cakmak et al. used a Home Exploring Robotic Butler to investigate the effects of spatial contrast and temporal contrast of the robot before and after the handover process on handover intention [15]. The results revealed that temporal contrast is more beneficial to task fluency, reducing the time that a human worker must wait to receive the task. Strabala et al. investigated how trajectory and final handover posture can clearly indicate handover intention [23]. Aleotti et al. suggested a handover method with which humans can most easily grasp an object; in this method, the robot should present the most appropriate part of the object (e.g., the handle) to the human [28]. St. Clair and Mataric proposed that robotic verbal feedback improves team performance [32]. Other studies have focused on anthropomorphic social behavior within collaborative processes, such as visual fixation, which can improve HRI and thus ensure a smooth handover process [20,33].

1.2. Handover Timing Prediction and Fluency of Robots

In the context of assembly tasks during which robots hand over parts to human partners, the handover path should be as close as possible to the expectations of the human partners, whereas the time point must be estimated as accurately as possible to enhance the interactive confidence and minimize the waiting time, thus improving the efficiency.
To calculate paths and time points, a path-planning algorithm that accounts for human factors must first be designed [34,35]. Sisbot et al. constructed robot movement paths on the basis of the distance between operators and robots and the operators’ safety and subjective perceptions [17]; subsequently, human movement and operational constraints were incorporated [36]. Regarding the tools used for establishing a path-planning model, some scholars have employed statistical learning methods [37] and the hidden Markov model [38]. Numerous methods can be used to predict human activity trajectories. Nikolaidis et al. conducted task-level analysis by using the Markov decision process to predict the next steps of an operator [39]. Some scholars have also used knowledge bases and decision-making procedures to predict human operations [40]. Kwon and Suh simultaneously inferred time and causality in a Bayesian network to assist robots in determining which tasks should be implemented and when [41].
Pellegrinelli et al. used the partially observable Markov decision process to reduce the interference caused by the nature of human–robot coworking spaces and to improve the fluency and efficiency of HRI during tasks [42]. In the context of human–robot collaboration within assembly tasks, Chao and Thomaz employed timed Petri nets to model and control robots through language, fixation, gestures, and manipulation [43]. The results demonstrated shorter reaction delays for the robots, which were subjectively rated by participants as satisfactory working partners. Shah et al. used a task-level robot plan execution system, Chaski, that selects and schedules the robot’s actions and adapts to the robot’s human partner [30]. The system reduced the idle time of the human partner by 85% compared with the robot being verbally commanded by a human teammate. Similar low-order robot motion planning has been used to improve team fluency and human security [1,44,45].
The method-time measurement (MTM) approach is often used in the industrial engineering field as a tool for predicting assembly times. Huber et al. compared four methods (including the MTM) and discovered that the Kalman filter, which uses Bayesian probability, was more adaptable to assembly operations with complex changes than the MTM, which was more suitable for stable assembly [24]. Stable predictions could still be made when the operator changed the assembly time for various reasons. The Kalman filter is capable of predicting complex systems and filtering measurement noise; it is thus widely used in applications such as robot navigation that require correction after prediction. Whereas the MTM can be characterized as stable but inflexible, with no dynamic response to changing operation parameters (e.g., the operator’s speed and precision), the other end of the scale is the trigger sensor method, in which a collaborative robot responds to a specific movement of the operator, for example crossing a plane or passing a sensor. Thus, in contrast to the other two predictive handover models, this is a reactive handover model. Its application in actual production systems is limited. First, the trigger sensor must be relatively complicated if it is to instantly detect an operator’s hand movements and position, which is difficult to implement on production lines due to complexity and cost considerations. Compared with a trigger sensor, the MTM uses a standard cycle time determined beforehand. Moreover, when the Kalman filter is executed on an actual production line, the only system input required is the interval between the removal times of the previous and current objects. For a robotic gripper with general force feedback, these data are relatively easy to obtain. Therefore, the MTM and Kalman filter are easier to implement on actual production lines than a trigger sensor. Second, in the trigger sensor method, the robot has to be kept in a handover-ready state during the entire assembly cycle to achieve an efficient robot reaction time. However, in actual collaborative assembly tasks, robots generally have their own tasks in addition to handovers and thus cannot be kept ready for a handover task. Yet, because the trigger sensor method responds readily to random and sudden interruptions, it gives operators greater flexibility and freedom to deviate from the strict time constraints of the MTM.

2. Materials and Methods

2.1. Assembly Task and Measurement System

The human–robot collaborative assembly task used in this study was the desktop version of the common building blocks task. The robot used was the UR3, a 6-degrees-of-freedom collaborative robot produced by Universal Robots (Odense, Denmark). The UR3 can work as the second member of a two-person team, which conforms with the situations explored in this study. In addition, the robot is equipped with 15 adjustable safety functions, including force sensing, which enable the UR3 to maintain a high degree of safety during operation or when working in close proximity with the operator. The workbench layout is displayed in Figure 1. The operator sat in front of the workbench. The point 12 cm from the leading edge of the workbench was the assembly point. Moreover, a parts bin containing scattered orange blocks was placed 38 cm to the right. The appearance of the blocks and their manner of assembly were similar to those of Lego blocks; thus, the assembly task involved only alignment and downward force once the blocks had been removed from the parts bin. Another parts bin for assembled blocks was placed 38 cm to the left of the operator, and the point 34 cm in front of the assembly point was the handover point of the UR3 robot.
The experimental operation is illustrated in Figure 2. Before the experiment was started, the operator sat in front of the workbench and placed both hands at the assembly point of the workbench. When the operation was started, the operator used the right hand to grab an orange block from the parts bin and passed the block to the left hand, which fixed the block on the assembly point. Subsequently, the operator’s right hand reached to the parts bin again to pick up a second orange block and return to the assembly point. The second block was assembled onto the first block. A similar procedure was applied for the third block. After assembling three orange blocks, the operator’s right hand extended from the assembly point to the handover point, grabbed a green block being handed over by the robotic gripper, and returned to the assembly point, where the green block was assembled onto the semi-finished product of orange blocks. Upon completion of the assembly task, the finished product was placed in the parts bin on the left side. A completely assembled product is displayed on the right side in Figure 1.
This study used PhaseSpace motion capture to capture motion data from the experiment. To measure the task operation of the operator, one light-emitting diode (LED) spotlight of the PhaseSpace was fixed to the joint of the operator’s middle finger on the right hand, and another spotlight was fixed to the tool center point of the robot’s outer gripper; the sampling frequency was set to 240 Hz. During the experiment, the spatial position and time information of the two LEDs were collected using Vizard Virtual Reality Software’s built-in real-time Python application programming interface for capturing PhaseSpace data, and the handover time point was calculated according to the handover prediction model being tested in that experiment. Subsequently, the gripper of the UR3 robot was moved to the handover point according to the calculation result transmitted by the network; this completed the handover action. The robot returned to its initial state after completing the handover task and was manually loaded with a new assembly block by an assistant for the next assembly cycle. Vizard also recorded the position and speed of the LEDs on the UR3 gripper and the back of the operator’s right hand. The data were stored in an Excel file at a recording frequency of 60 Hz.

2.2. Handover Prediction Models

2.2.1. Method-Time Measurement

The MTM calculates the overall execution time from the predetermined durations of basic motion sequences, body motions, and various physical task parameters [46]. The MTM breaks down human motions into extremely small steps to accurately describe the tasks performed by humans and to calculate the time required in a standardized manner [47]. The unit of time in the MTM is the time measurement unit (TMU), with 1 TMU = 0.036 s.
In this study, the MTM was employed to calculate the standard assembly time, which was then used to set the cycle time over which the robot would perform fixed tasks. Table 1 presents a decomposition of the actions for assembling one orange block and one green block, which take 56.02 TMU and 73.32 TMU, respectively. To complete an assembly task (assembly of three orange blocks and one green block), the orange block action must be performed three times before a green block is added, giving a total of 241.38 TMU (~8.7 s). This value was verified in a pretest involving 10 participants, in which the mean assembly time (± two standard deviations) for one assembly task was 8.31 ± 0.82 s; this is consistent with the standard assembly time of 8.7 s calculated using the MTM.
In the MTM handover prediction model, the UR3 robot performed a handover every 8.7 s according to the calculated cycle time, with the completion time point of the previous handover used as the calculation benchmark. Because the UR3 robot requires 0.45 s to move from the start point to the handover point and the network transfer latency is approximately 0.15 s, the trigger was released 0.6 s before the predicted handover time.
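The fixed-time scheduling described above can be summarized in a short sketch. The constants come from Table 1 and the timings stated in this section, whereas the function and variable names are illustrative rather than taken from the study’s software.

```python
# Sketch of the fixed-time (MTM) handover schedule; names are illustrative.
TMU_TO_SECONDS = 0.036            # 1 TMU = 0.036 s

ORANGE_BLOCK_TMU = 56.02          # assembling one orange block (Table 1)
GREEN_BLOCK_TMU = 73.32           # assembling the handed-over green block (Table 1)

# One assembly task = three orange blocks + one green block.
TASK_TMU = 3 * ORANGE_BLOCK_TMU + GREEN_BLOCK_TMU             # 241.38 TMU
STANDARD_CYCLE_S = round(TASK_TMU * TMU_TO_SECONDS, 1)        # ~8.69 s, rounded to 8.7 s

ROBOT_TRAVEL_S = 0.45             # UR3 travel from start point to handover point
NETWORK_LATENCY_S = 0.15          # network transfer latency

def next_trigger_time(previous_handover_completion_s: float) -> float:
    """Return the time at which the robot should be triggered: 0.6 s
    (travel plus latency) before the next predicted handover instant."""
    predicted_handover = previous_handover_completion_s + STANDARD_CYCLE_S
    return predicted_handover - (ROBOT_TRAVEL_S + NETWORK_LATENCY_S)
```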

2.2.2. Kalman Filter

The Kalman filter was proposed by Kalman in the 1960s and is an optimal recursive data processing algorithm. It can efficiently generate optimal estimates for most problems and has been widely applied for more than 30 years, particularly in autonomous systems and navigation assistance, including robot navigation, control, military radar systems, and missile tracking. The Kalman filter uses a set of efficient mathematical recursive equations to estimate a system process state by minimizing the squared error under the assumption of Gaussian white noise. The process can be used to estimate past, present, and future states and can operate even when the system state is uncertain. For a linear system, the Kalman filter can use a system process model to predict the next state of the system.
Assume that the current system state is k. According to the system model, the state at step k can be predicted from the previous state, denoted k − 1:
$\hat{x}_k^- = A\hat{x}_{k-1} + Bu_{k-1}$ (1)
$P_k^- = AP_{k-1}A^T + Q$ (2)
where $\hat{x}_k^-$ is the predicted system value for state k, which in this study is the estimated occurrence time of the kth handover. The prediction is computed from the estimate obtained in the previous state, $\hat{x}_{k-1}$. Additionally, $u_{k-1}$ is the control input in state k − 1, and A and B are system parameter matrices. In this study, $u_{k-1}$ is set to 0 and A is set to the identity matrix because there is no additional input for adjusting the system. In Equation (2), $P_k^-$ is the predicted error covariance matrix corresponding to $\hat{x}_k^-$, $A^T$ is the transpose of matrix A, and Q is the process noise covariance matrix, which represents the noise of the process and is assumed to be Gaussian white noise that does not change with the system state. In this study, Q was set to $1 \times 10^{-5}$. Equations (1) and (2) are the first two of the five formulas of the Kalman filter and correspond to the prediction step of the system.
$K_k = P_k^- H^T (HP_k^- H^T + R)^{-1}$ (3)
$\hat{x}_k = \hat{x}_k^- + K_k (z_k - H\hat{x}_k^-)$ (4)
$P_k = (I - K_k H) P_k^-$ (5)
After the prediction of the current state is obtained, the Kalman gain $K_k$ is calculated using Equation (3). H is the parameter matrix of the measurement system and is assumed to be constant; it is set to the identity matrix in this study. R is the measurement noise covariance matrix of the system; the measurement noise is also assumed to be Gaussian white noise that does not change with the system state. In this study, R was set to 0.0042, and this value affects the sensitivity of the system predictions. In Equation (4), the Kalman gain $K_k$, the prediction $\hat{x}_k^-$ obtained from the previous state k − 1, and the measured value $z_k$ of the current state are combined to estimate the assembly cycle time. The assembly cycle time is defined as the actual measured interval between two successive block grabs from the UR3. By combining $\hat{x}_k^-$ and $z_k$, the optimal estimate $\hat{x}_k$ of the current state k is obtained. Equation (5) updates the covariance $P_k$ of $\hat{x}_k$ in state k, allowing the Kalman filter to proceed to the next recursive cycle. Equations (3)–(5) are the final three equations of the Kalman filter and correspond to the update step of the system. Therefore, in the Kalman filter handover prediction model used in this study, in addition to the adjustable system parameters R and Q, the only external input that affects the system is the actual assembly cycle time measured in each cycle. Notably, the Kalman filter requires an initial estimate $\hat{x}_0$ to start the algorithm; the MTM standard time of 8.7 s was used as the initial value in this study.
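As a concrete illustration, the scalar form of Equations (1)–(5) used in this study (A = H = identity, Bu = 0, Q = 1 × 10⁻⁵, R = 0.0042, initial estimate 8.7 s) reduces to the following minimal Python sketch. The class name, the initial covariance value, and the example measurements are assumptions for illustration only.

```python
# Minimal scalar Kalman-filter sketch for predicting the next assembly cycle
# time, using the parameter values reported in the text. Variable names follow
# Equations (1)-(5); the class and the initial covariance p0 are illustrative.
class CycleTimeKalmanFilter:
    def __init__(self, x0=8.7, p0=1.0, q=1e-5, r=0.0042):
        self.x = x0      # current estimate of the cycle time (s)
        self.p = p0      # estimate error covariance (assumed starting value)
        self.q = q       # Q: process noise covariance
        self.r = r       # R: measurement noise covariance

    def predict(self):
        # Equations (1) and (2): with A = 1 and Bu = 0 the prediction simply
        # carries the previous estimate forward and inflates its covariance.
        return self.x, self.p + self.q

    def update(self, z):
        """Fold in the measured cycle time z (interval between two successive
        block grabs) and return the updated estimate."""
        x_prior, p_prior = self.predict()
        k = p_prior / (p_prior + self.r)          # Equation (3): Kalman gain
        self.x = x_prior + k * (z - x_prior)      # Equation (4): state update
        self.p = (1.0 - k) * p_prior              # Equation (5): covariance update
        return self.x

# Usage: after each completed assembly, feed the measured cycle time and use
# the returned estimate (minus the 0.6 s travel-plus-latency budget) to
# schedule the next handover. The measurements below are hypothetical.
kf = CycleTimeKalmanFilter()
for measured_cycle in [8.9, 8.6, 8.4, 8.5]:
    predicted_next = kf.update(measured_cycle)
```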

2.2.3. Trigger Sensor

Unlike the two aforementioned models, the trigger sensor method is a reactive handover prediction model. Because the positions of the LEDs fixed to the back of the operator’s hand and to the UR3 robotic gripper were detected by the PhaseSpace system during the assembly task and read using Vizard, this position information could be used as the basis for specifying when the robot initiated a handover. When the operator completed the assembly of three orange blocks and reached beyond a specific plane toward the handover point, the UR3 robot was triggered to initiate the handover action. A total of 0.6 s was required for one complete handover movement (0.45 s for the UR3 robot to move from the starting point to the handover point and 0.15 s for network transfer latency). Therefore, the trigger plane was set to the location that the operator’s hand passes 0.6 s before reaching the handover point; given the average hand speed of the operators, this plane was 12 cm in front of the assembly point. When the PhaseSpace LED on the back of the operator’s hand was detected passing this plane, the program immediately sent a command to trigger the robot. Theoretically, in this reactive handover prediction model, the robot and operator should arrive at the handover point simultaneously, minimizing the waiting time and cycle time of the robot and operator.
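A minimal sketch of this trigger-plane logic is given below. The position-reading and robot-command functions are hypothetical placeholders, since the actual implementation read PhaseSpace data through Vizard and commanded the UR3 over the network.

```python
# Minimal sketch of the reactive trigger-plane logic. read_hand_led_y() and
# send_handover_command() are hypothetical placeholders standing in for the
# PhaseSpace/Vizard position stream and the network command to the UR3.
import time

TRIGGER_PLANE_Y_CM = 12.0   # trigger plane 12 cm in front of the assembly point
POLL_HZ = 240               # matches the PhaseSpace sampling frequency

def monitor_trigger_plane(read_hand_led_y, send_handover_command):
    """Trigger the robot as soon as the operator's hand crosses the plane on
    its way toward the handover point; re-arm once the hand has returned."""
    triggered = False
    while True:
        y = read_hand_led_y()                     # hand distance from the assembly point (cm)
        if not triggered and y >= TRIGGER_PLANE_Y_CM:
            send_handover_command()               # robot needs ~0.45 s travel + 0.15 s latency
            triggered = True
        elif triggered and y < TRIGGER_PLANE_Y_CM:
            triggered = False                     # hand is back at the assembly point
        time.sleep(1.0 / POLL_HZ)
```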
The trigger sensor model was used to benchmark the other two models but is unsuitable for predicting robot handover times in actual production systems. First, the trigger sensor must be relatively complicated if it is to instantly detect an operator’s hand movements and position. Taking this study as an example, the PhaseSpace system must detect the LED position, which is difficult to implement on production lines due to complexity and cost considerations. Compared with a trigger sensor, the MTM uses the standard cycle time determined beforehand. Moreover, when the Kalman filter is executed on an actual production line, the only system input required is the interval between the removal time of previous and current objects. For a robotic gripper with force feedback, these data are relatively easy to obtain. Therefore, the MTM and Kalman filter are easier to implement on actual production lines than a trigger sensor. Second, in the trigger sensor method, once the network transfer latency time is deducted, the robot has only 0.45 s to respond once the operator’s hand has passed the trigger plane. The present study focused on the effect of time variability on handover fluency; no other tasks were given to the robot except for this task. Therefore, this robot had to be kept in a handover-ready state during the experiment to achieve the robot reaction time of 0.45 s. However, in actual collaborative assembly tasks, robots generally have their own tasks in addition to handovers and thus cannot be kept ready for a handover task. The trigger sensor method is thus a suitable point of comparison with the MTM and Kalman filter.

2.3. Task Cycle Time and Waiting Time

The objective measurements made in this study included the cycle time and waiting time for completing an assembly task. The waiting time was further divided into the operator waiting time (OWT) and robot waiting time (RWT).
Figure 3 presents a graph showing the speed and distance changes during an assembly task, including the distance between the two spotlights and the speed change of the LED spotlights on the robotic gripper and back of the operator’s hand. A complete assembly task consisted of the assembly of three orange blocks, handover of a green block from the robot, assembly of the green block, and finally placement of the assembled blocks in the placement area.
According to the hand speed variation curve in Figure 3 (V_Hand), the movement process for the first block (G1) that was taken from the building block area produced a waveform showing clear acceleration and then deceleration. A similar undulating waveform was generated when the hand returned from the building block area to the assembly point (R1), during the subsequent movements for the orange block assembly, and when the hand was extended to the handover point (RtR). At this time, the UR3 speed variation curve (V_UR3) shows acceleration and then deceleration as the robot moved to the handover point to prepare to perform the handover task (SD); however, the speed changed less substantially than that of the human operator’s hand. When the robot reached the handover point and stopped, the operator took the block from the robot and returned to the assembly point (RtA). The robot returned to the starting point (FD) once the spotlight on the operator’s hand had passed the virtual trigger plane. The cycle time is the time required to complete an assembly task and is defined as the interval between the two G1 events.
During the handover process, both the RWT and OWT affected the task fluency. As illustrated in Figure 3, RtR occurred earlier than SD. When the hand speed was close to 0 during the RtR event, the operator’s hand had arrived at the handover point. When the robot’s speed was close to 0 during the SD event, the robot had arrived at the handover point. The OWT is the duration between the speed of the operator’s hand reaching 0 during the handover event (RtR) and the robot arriving at the handover point. By contrast, the RWT is the duration between the robot arriving at the handover point and the operator completing the orange block assembly tasks and moving to the handover point (RtR). This case is illustrated in Figure 4: SD occurred earlier than RtR, and the interval between them is the RWT.
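The waiting-time definitions above can be expressed compactly as follows. The extraction of the arrival instants (the RtR and SD events) from the recorded speed curves is assumed to be done elsewhere, and the function names are illustrative.

```python
# Sketch of deriving the waiting times for one handover, given the instants at
# which the hand (RtR event) and the gripper (SD event) come to rest at the
# handover point. Event extraction from the speed curves happens elsewhere.
def waiting_times(hand_arrival_s: float, robot_arrival_s: float):
    """Return (operator_waiting_time, robot_waiting_time) in seconds."""
    if robot_arrival_s > hand_arrival_s:
        return robot_arrival_s - hand_arrival_s, 0.0   # operator waits (OWT)
    return 0.0, hand_arrival_s - robot_arrival_s       # robot waits (RWT)

# The cycle time is simply the interval between two successive G1 events
# (first grabs from the parts bin).
def cycle_time(g1_current_s: float, g1_previous_s: float) -> float:
    return g1_current_s - g1_previous_s
```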

2.4. Subjective Measurement of Collaboration Fluency

In addition to the objectively assessed cycle time and waiting time, this study subjectively assessed the handover task collaboration fluency of the operator and robot when different handover prediction models were employed. The subjective questionnaire was adapted from the fluency section of Unhelkar et al. [48], which defined fluency through the items “the robot and I work well together”, “the deliveries made by the robot are smooth”, and “I work fluently with the robot”. The experimental participants responded to these items using a 5-point Likert scale.

2.5. Experimental Procedures

2.5.1. Experiment 1: Repetitive Assembly

The first experiment employed a repetitive assembly task to investigate the scenario in which the operator’s assembly time was affected only by nonattributable random errors. There were two factors in the first experiment. The first was a between-subject factor, skill level: the participants were divided into three skill levels, namely beginner, intermediate, and advanced. The second was a within-subject factor, handover prediction model, with three levels, namely the MTM, trigger sensor method, and Kalman filter.
This study was reviewed and approved by the Research Ethics Committee (REC) at National Tsing Hua University in Taiwan under the approval reference number 10507EE038. A total of 36 volunteers recruited from the College of Engineering at Feng Chia University participated in the experiment. After the participants signed a consent form, the experiment was conducted, and appropriate compensation was given to the participants according to the number of hours they participated. At the beginning of the experiment, the participants familiarized themselves with the experimental task and procedure during a training session, after which they were instructed to assemble the blocks in a situation similar to that at an actual production site before conducting the experiment.
In the experiment, one session comprised the assembly of 20 products, with each product consisting of four building blocks (three orange blocks and one green block handed over by the robot). Thus, each participant in each session had to complete 20 handover tasks with the robot. Before the formal experiment, a training session was conducted. The handover prediction model used in the training session was the benchmarking trigger sensor method. The purpose of the training session was to familiarize participants with the assembly task and the handover procedure, in which a participant obtained the last building block from the robot at a specific delivery location. The training session was not intended to reveal the differences among the three handover models. The trigger sensor method was applied in the training session because it was expected to move the robot to the delivery location simultaneously with the human operator.
Of the 36 participants in the training session, the 12 participants with the longest average assembly cycle time were assigned to the beginner skill level group, the 12 fastest participants were assigned to the advanced skill level group, and the remaining 12 participants were assigned to intermediate skill level group.
Assembly tasks on a real production assembly line require specific conditions to achieve high efficiency. To reflect these real-world situations, the following instructions were provided to the participants. Because assembly errors result in considerable penalty costs, the participants were told that the assembly should be performed as carefully as possible and without errors. Additionally, because production requires line balance, the participants were instructed to maintain a stable assembly rate to reduce cycle time variation. Finally, this study assumed that production line workers rest after 1.5 h of continuous operation; thus, the participants were told that they should be able to maintain a constant speed for 1.5 h.
The handover prediction model was a within-subject factor, which means that each participant performed the task with all three handover methods. It was therefore important to mitigate potential confounding caused by experience gained from the previously performed method. To avoid such confounding, within each skill level, all participants were randomly divided into three groups, and the sequences in which the three groups performed the three methods were counterbalanced. That is, the sequence for the first group was MTM, trigger sensor, and Kalman filter; for the second group, trigger sensor, Kalman filter, and MTM; and for the last group, Kalman filter, MTM, and trigger sensor. Because each method was tested equally often in each of the three positions, any learning effects would balance out across the three methods, removing the confounding. Participants were aware that they would interact with the robot under three different methods of handover timing control, but they were not aware of the specific differences.
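A small sketch of this counterbalanced assignment within one skill-level group is given below; the participant identifiers and the helper function are hypothetical.

```python
# Sketch of the counterbalanced ordering of the three handover methods within
# one skill-level group (12 participants); participant IDs are hypothetical.
import random

METHOD_ORDERS = [
    ("MTM", "trigger sensor", "Kalman filter"),
    ("trigger sensor", "Kalman filter", "MTM"),
    ("Kalman filter", "MTM", "trigger sensor"),
]

def assign_orders(participant_ids):
    """Randomly split a skill-level group into three equal subgroups, one per
    method order, so each method appears equally often in each position."""
    ids = list(participant_ids)
    random.shuffle(ids)
    group_size = len(ids) // len(METHOD_ORDERS)
    return {pid: METHOD_ORDERS[i // group_size] for i, pid in enumerate(ids)}

assignment = assign_orders(range(1, 13))   # 12 participants in one skill level
```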
The assembly cycle time, OWT, and RWT were recorded during each session for 20 sessions. The intermediate group was given two additional training sessions and the advanced group was given four additional training sessions in order to increase the difference to the beginner level. After completing a handover experiment, each participant was required to complete the subjective preference questionnaire for the handover prediction model employed.

2.5.2. Experiment 2: Assembly Learning Effect

Performance usually increases when a person repeats a particular activity. The learning curve indicates that the more repetitions are performed, the shorter the completion time or the higher the performance becomes. A learning curve exists for repetitive assembly tasks. As the number of repetitions increases, the cycle time decreases. However, because random errors can occur during each assembly task, the learning curve for a single operator is a decreasing and meandering curve.
The second experiment, on the assembly learning effect, explored the effects of the different handover prediction models on the interaction fluency between the robot and operator in a repetitive assembly process in which experience accumulation reduces the task completion cycle time. A total of 36 volunteers participated in this experiment, none of whom had participated in the first experiment. The consent and remuneration procedures were similar to those in the first experiment, and the study was covered by the same REC approval as the first experiment. In this experiment, the handover prediction model was a between-subject factor. The participants were randomly assigned to one of the handover prediction models, with 12 participants per model. The experiment also started with a training session to familiarize participants with the assembly task and the handover procedure. However, unlike in the first experiment, the handover prediction model assigned to the participant was used in the training session. Another observed variable was the number of repetitions. To determine learning performance, 20 continuous sessions of assembly tasks were performed, and the participants had a rest period of 1 min between sessions. The task content in each session was the same as in the previous experiment: the assembly of 20 products comprising four blocks each. The dependent variables in this experiment were the average cycle time, RWT, and OWT. After finishing the experiment, each participant completed the subjective preference questionnaire for the handover prediction model performed.

3. Results

3.1. Experiment 1: Repetitive Assembly

Experiment 1 investigated the effects of the handover prediction model and assembly speed on cycle time and waiting time in mass production mode. Table 2 shows the average cycle time for the three skill level groups under the three handover prediction models. According to the analysis of variance (ANOVA) of cycle time, the two main effects of handover prediction model and skill level were significant (F(2,66) = 17.51, p < 0.001 and F(2,33) = 233.09, p < 0.001, respectively). Post hoc tests were used to compare the levels of the two main effects; the differences among the three skill levels were all highly significant (p < 0.001). Among the three handover prediction models, the cycle time was significantly higher when the MTM was used rather than the trigger sensor method or Kalman filter (p < 0.001); the difference between the cycle times for the trigger sensor method and Kalman filter was nonsignificant (p > 0.5).
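For reference, a mixed-design ANOVA of this kind (handover model as a within-subject factor, skill level as a between-subject factor) could be run as sketched below. The use of the pingouin package, the file name, and the column names are assumptions, since the paper does not state which statistics software was used.

```python
# Hedged sketch of the mixed-design ANOVA reported above. The pingouin package,
# the CSV file name, and the column names are assumptions.
import pandas as pd
import pingouin as pg

# Expected layout: one row per participant x handover model, with columns
# participant, skill_level, model, and cycle_time (seconds).
df = pd.read_csv("experiment1_cycle_times.csv")   # hypothetical data file

aov = pg.mixed_anova(data=df, dv="cycle_time",
                     within="model", subject="participant",
                     between="skill_level")
print(aov)   # main effects of model and skill level plus their interaction
```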
According to Figure 5, the participants with higher skill level had a shorter average cycle time. For the trigger sensor method and Kalman filter, the participants of all skill levels had almost identical performance. However, the cycle time when the MTM was used was not positively affected by the higher efficiency of the more skilled participants because the robot’s handover cycle time was predetermined at 8.7 s, causing the cycle time to remain close to that of the intermediate group. According to ANOVA, the interaction between prediction model and skill level was highly significant (F(4,66) = 14.66, p < 0.001). Furthermore, the post hoc test verification demonstrated that the cycle time did not differ between the three handover prediction models for the beginner and intermediate groups (p > 0.5). Nonetheless, for the advanced group, the cycle time was significantly higher for the MTM than for the trigger sensor method and Kalman filter (p < 0.001), whereas the difference in cycle time between the trigger sensor method and Kalman filter was nonsignificant (p > 0.5).
The second part of the experiment concerned the waiting time, which was divided into the OWT and RWT. According to Figure 6, the three skill level groups had similar OWTs, approximately 0.1 s, for the trigger sensor method and Kalman filter. However, when the MTM was employed, the OWT of the beginner group was 0 s, indicating that the assembly time for all tasks was no shorter than 8.7 s and therefore produced no OWT. By contrast, for the advanced group, almost every task resulted in the operator having to wait for the robot; thus, the average OWT was 1.2 s. The results of ANOVA indicated that the effect of handover prediction model on OWT was highly significant (F(2,66) = 69.11, p < 0.001), as was that of skill level on OWT (F(2,33) = 77.23, p < 0.001). The interaction between model and skill level was also highly significant (F(4,66) = 69.84, p < 0.001).
Figure 7 plots the RWT and shows that when the trigger sensor method was employed, the three groups had fairly similar RWTs of approximately 0.1 s. When the Kalman filter was used, the RWT decreased slightly from 0.65 to 0.51 s as the skill level increased, but the RWT difference between the three skill level groups was not large. When the MTM was employed, the RWT dropped significantly from an average of 1.17 to 0.01 s as the skill level increased; the RWT was high for the beginner group but almost 0 for the advanced group. The results of ANOVA revealed that the RWT was significantly affected by the handover prediction model (F(2,66) = 57.20, p < 0.001) and skill level (F(2,33) = 26.27, p < 0.001). In addition, the interaction term was highly significant (F(4,66) = 28.95, p < 0.001).
Figure 8 displays the total waiting time (i.e., the sum of the OWT and RWT). The lowest total waiting time occurred when the trigger sensor method was employed, followed by the Kalman filter. The MTM resulted in the highest total waiting time in the beginner and advanced groups.
Although the total waiting time was higher for the Kalman filter than the trigger sensor method, the average cycle time (Figure 5) showed that these two models resulted in almost identical performance. The main factor was that in the investigated collaborative process, the robot only participated in the handover of assembly task; the final assembly task had to be completed by the operator. Hence, the cycle time of the entire operation was limited by operator-induced interference, with the waiting time prolonging the overall cycle time. Conversely, robot-induced interference did not affect the total task completion time. According to Figure 6, the OWT was almost the same, approximately 0.1 s, for all skill groups when the trigger sensor method and Kalman filter were used. This explains why the cycle time was almost identical even though the waiting time was slightly longer for the Kalman filter method than the trigger sensor method.
In the subjective preference analysis of Experiment 1, the median scores and 95% confidence intervals of the participants’ subjective preference for the three handover prediction models are displayed in Figure 9. The advanced group had significantly lower preference for the MTM than the other two groups. The preference scores of all participants for the trigger sensor method and Kalman filter were relatively close, but the preference score for the Kalman filter increased as the skill level increased.
Two-way repeated ordinal regression was used to analyze the ordinal subjective preference data, in which the prediction model and skill level were the independent variables and participant was entered as a blocking (random) variable. The results demonstrated that both prediction model (χ2(2) = 42.12, p < 0.001) and skill level (χ2(2) = 8.05, p < 0.05) had significant effects, and the interaction term of these two factors was also highly significant (χ2(4) = 69.09, p < 0.001). Because the interaction term was highly significant, further Tukey post hoc tests were performed on the nine experimental combinations of three skill levels and three handover prediction models. The results revealed that at the 95% confidence level, the preference of the advanced group for the MTM was significantly lower than that for the other eight experimental combinations.

3.2. Experiment 2: Assembly Learning Effect

In Experiment 2, the cycle time was discovered to decrease due to the learning effect in a longer period of assembly. In addition, the effects of the three handover prediction models on the operator’s cycle time and waiting time were compared.
According to Figure 10, the overall average cycle time for 20 assembly sessions when the MTM, trigger sensor method, and Kalman filter were used was 9.11, 8.60, and 8.58 s, respectively. The cycle time for the MTM was significantly larger than that for the other two models. Between-participants ANOVA revealed that the main effect of the handover prediction model was highly significant (F(2,33) = 5.58, p < 0.01). In addition, the Tukey post hoc test showed that the average cycle time was significantly larger when the MTM was used compared with the trigger sensor method and Kalman filter (p < 0.05), but no difference was discovered between the trigger sensor method and Kalman filter (p > 0.1).
Figure 11 illustrates the change in the average assembly time over the 20 sessions. The assembly time clearly decreased over time when the trigger sensor method and Kalman filter were employed, which is in agreement with the learning effect, according to which the cycle time should show a downward trend. When the MTM was employed, the cycle time also exhibited a downward trend in the early phase, while the participants’ assembly cycle time was still greater than the predetermined cycle time (8.7 s). However, even when the assembly speed of the participants was higher in the later phase, the cycle time could not drop below the fixed handover cycle time (8.7 s; shown by the dotted line in Figure 11). ANOVA revealed a highly significant effect of session (F(19,627) = 79.75, p < 0.001), and the interaction term between session and prediction model was also highly significant (F(38,627) = 6.09, p < 0.001).
From the perspective of waiting time, neither the RWT (Figure 12) nor the OWT (Figure 13) changed significantly over time when the trigger sensor method and Kalman filter were employed; instead, the waiting times varied randomly from session to session. However, for the MTM, the RWT clearly decreased, whereas the OWT gradually increased in the later stage, which corresponds to the trend change displayed in Figure 11, thereby affecting the average cycle time.
The participants’ subjective preferences were compared for the learning effect experiment (Experiment 2). The median score and 95% confidence interval for the three handover models are displayed in Figure 14. The participants’ preference score for the MTM was significantly lower than that for the other two models. The Kruskal–Wallis rank-sum test was used to analyze the ordinal data of subjective preference, and the results revealed that handover prediction model (χ2 (2) = 25.93, p < 0.001) had a highly significant effect. Furthermore, post hoc verification was performed through the Wilcoxon rank-sum test, and the findings showed no significant difference between the trigger sensor method and Kalman filter (p > 0.1), but both were preferred by the participants over the MTM (p < 0.001).

4. Discussion

The results of Experiment 1 revealed that the subjective preference and objective average cycle time of participants of all skill levels for the three handover prediction models were highly consistent. No significant differences were discovered in either objective measurements or subjective preference between the beginner and intermediate groups for the three prediction models. However, for the advanced participants, the average cycle time was significantly lower and subjective preference was significantly higher when the Kalman filter was used compared with the MTM. In addition, no statistical differences were discovered between the performance of the Kalman filter and the trigger sensor method. This absence of a difference in performance is explained by the waiting time analysis. Because an OWT constitutes a negative interference with the fluency of an operation, a greater OWT not only results in a longer average cycle time but also in lower preference. According to a comparison between Figure 6 and Figure 9, a shorter OWT resulted in higher preference. By contrast, a comparison between Figure 7 and Figure 9 reveals that the RWT was uncorrelated with subjective preference. The results suggest that no interference occurred in the HRI because the RWT did not affect the cycle time. Therefore, a longer RWT does not affect the task completion time or subjective preference. This finding is consistent with [19], which also concluded that human–robot team fluency was not correlated with robot idle time.
Experiment 2 explored the effects of the three handover prediction models on the learning curve of repetitive assembly. Similar to the results of Experiment 1, the average cycle time and subjective preference revealed that the Kalman filter was superior to the MTM, and no statistical difference was discovered between the Kalman filter and trigger sensor method. According to a comparison between Figure 11 and Figure 13, the OWT was again discovered to be the key factor affecting the cycle time. Moreover, the result illustrated in Figure 14 shows that the subjective satisfaction with the MTM was lower probably because of the higher OWT in the later phase of assembly.
Combining the two experiments revealed that in the repetitive assembly task, in which either a stable average assembly speed was accompanied by nonattributable random errors or there was a gradual change in the causable assembly speed, the Kalman filter was superior to the MTM and resulted in the same performance as the trigger sensor method. Using the five formulas of the Kalman filter and parameter settings, the system can be employed to recursively predict the next value using the preceding measured value. The R in this study was set to 0.0042 after several pretests. If the set value is too large, the correction of the system is slow, and the system lags behind the gradually changing assembly speed. By contrast, if the set value is too small, the system is overly sensitive. If the previous assembly was too slow due to a nonattributable random error, this information is quickly reflected in the next assembly time prediction, resulting in a longer prediction time for the next cycle, which is obviously not the behavior of an accurate system.
The Kalman filter effectively predicted handovers in the repetitive assembly tasks used in this study under appropriate parameter settings. All predictions had errors that led to an RWT or OWT, and those made using the Kalman filter were no exception. As aforementioned, the RWT had no effect on the cycle time and subjective preference in the investigated assembly task; rather, the key factor was the OWT. Figure 6 shows that the OWT for the Kalman filter was somewhat larger than that for the trigger sensor method. However, the participants did not appear to dislike the Kalman filter, as evidenced by the subjective preference ratings they reported. One reason may be that the random errors in the OWT were generally small, less than 1 s. Interestingly, some participants stated that completing the task with the robot using the Kalman filter prediction method was more “like interacting with another human operator”. Interacting with a robot that applies the trigger sensor approach results in the smallest and most ‘predictable’ random error in OWT and RWT, “like pushing a button and getting a response from the robot”. Participants usually related this experience to being ‘reactive’ and ‘not autonomous’. On the other hand, when interacting with the robot under the Kalman filter prediction method, the variations in OWT and RWT were generally less predictable, and this small variation, compared with the almost fixed random error in the trigger sensor context, may have enhanced the participants’ acceptance of the Kalman filter prediction model.
The subjective questionnaire assessed the participants’ acceptance in terms of “fluency” by using the items “The robot and I work well together”, “Deliveries made by the robot are smooth”, and “I work fluently with the robot”. The two experiments conducted in the present study verified that when the OWT was lower, the participants were more likely to regard the robot as a team member and subjectively felt greater fluency in the HRI. Therefore, in a collaborative environment, reducing the OWT through effective prediction can help operators accept robots as normal working partners in the work environment.
This study assumed that HRI was limited to a simple handover and investigated the cycle time, waiting time, and subjective preference of operators. This simple handover context revealed that the RWT did not affect performance and subjective preference. However, in actual manufacturing, HRI is not limited to simple handovers, because robots usually have to perform other tasks in addition to handover tasks; this simplification is therefore a limitation of the present study. The effects of the OWT and RWT on performance in such settings, and whether the Kalman filter prediction model remains superior, are research topics worthy of further investigation.
In this study, the Kalman filter prediction model was implemented at the task level; that is, the time measurement value z k of each completed assembly was the input of the current state. The real-time assembly movement speed could feasibly be used as an input with the PhaseSpace measurement method, which should result in improved stability of predictions. Although measurements are not easy to perform using conventional motion capture equipment due to the high cost of their implementation on production lines, if the assembly motion speed can be confirmed as an input that improves prediction stability, image and speed information could be captured using a relatively low-cost RGB 3D camera and Leap motion gesture sensing device. In addition, if the model were integrated with an artificial intelligence and deep learning algorithm (e.g., a convolutional neural network or long short-term memory), the assembly movement speed could be predicted, and the Kalman filter could be used to predict the cycle time of human-robot collaboration in an assembly task.
It may be questioned whether the choice of a fixed standard time (here 8.7 s) for all experiments employing the MTM approach distorts the results of the user experience assessment. For example, one may argue that participants with different skill levels should be assigned different cycle times that reflect their level, or that in general a shorter cycle time would be appropriate for this simple handover task. A shorter cycle time and, consequently, a shorter OWT would have resulted in a better evaluation of working with the MTM approach. Two defenses of the strategy chosen in this study are offered. First, it would be inappropriate to assume an exclusive and direct positive correlation between a shorter cycle time and higher user satisfaction. Rather, what contributes to lower satisfaction is the lack of flexibility and nonreactivity to changing conditions. Second, the choice of a standard time is at the core of the MTM, a procedure well established in industrial processes since the 1940s. This standard time is used extensively to calculate operators’ wages and productivity and as a basis for quotation prices, to name a few uses. In many companies, these data are readily available and applied in cost minimization strategies. Because the MTM is predetermined and based on analysis and calculation, obtaining this standard time does not involve human operators and does not react immediately to individual deviations and differences; yet reacting to such deviations is exactly the objective of and motivation for this study. Varying the standard time would undermine this objective by moving the MTM baseline further from real production processes.

5. Conclusions

This study used the common building block task as a human–robot collaboration setup and discussed two scenarios of repetitive assembly tasks and repetitive assembly under a learning curve. The Kalman filter, MTM, and trigger sensor method were employed, and the cycle time, waiting time, and participants’ subjective preference were compared to identify the handover prediction model resulting in the shortest cycle time and highest subjective preference.
The results revealed that the Kalman filter was superior to the MTM in terms of average cycle time and subjective preference and was comparable to the trigger sensor method. The system input used by the Kalman filter in this study was the time measurement for each completed task. If the model were implemented on an actual production line, this system input would be equivalent to the time interval between the removal of the previous object from the robotic gripper and the removal of the current object. These data are relatively easy to obtain from grippers, which generally provide force feedback; thus, this approach has potential for implementation on production lines, whereas the similarly preferred and efficient trigger sensor method has practical limitations in terms of cost and feasibility.

Author Contributions

Conceptualization, K.-H.T.; data curation, C.-F.H.; formal analysis, C.-F.H.; funding acquisition, K.-H.T.; investigation, C.-F.H.; methodology, K.-H.T.; project administration, K.-H.T.; resources, K.-H.T.; supervision, K.-H.T.; validation, C.-F.H., J.M., and S.-T.C.; visualization, S.-T.C.; writing—original draft, C.-F.H.; writing—review and editing, K.-H.T. and J.M. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the Ministry of Science and Technology (Taiwan), grant number 105-2221-E-035-037-MY3.

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Dragan, A.D.; Bauman, S.; Forlizzi, J.; Srinivasa, S.S. Effects of robot motion on human-robot collaboration. In Proceedings of the Tenth Annual ACM/IEEE International Conference on Human-Robot Interaction, Portland, OR, USA, 2–5 March 2015; ACM: New York, NY, USA, 2015; pp. 51–58.
2. Lasota, P.A.; Shah, J.A. Analyzing the effects of human-aware motion planning on close-proximity human–robot collaboration. Hum. Factors 2015, 57, 21–33.
3. Tsarouchi, P.; Makris, S.; Chryssolouris, G. Human–robot interaction review and challenges on task planning and programming. Int. J. Comput. Integr. Manuf. 2016, 29, 916–931.
4. Hoang Dinh, K.; Oguz, O.S.; Elsayed, M.; Wollherr, D. Adaptation and Transfer of Robot Motion Policies for Close Proximity Human-Robot Interaction. Front. Robot. AI 2019, 6, 69.
5. Duchaine, V.; Gosselin, C. Safe, stable and intuitive control for physical human-robot interaction. In Proceedings of the 2009 IEEE International Conference on Robotics and Automation, Kobe, Japan, 12–17 May 2009; pp. 3383–3388.
6. Krüger, J.; Bernhardt, R.; Surdilovic, D.; Spur, G. Intelligent assist systems for flexible assembly. CIRP Ann. 2006, 55, 29–32.
7. Krüger, J.; Lien, T.K.; Verl, A. Cooperation of human and machines in assembly lines. CIRP Ann. 2009, 58, 628–646.
8. Maurtua, I.; Ibarguren, A.; Kildal, J.; Susperregi, L.; Sierra, B. Human–robot collaboration in industrial applications: Safety, interaction and trust. Int. J. Adv. Robot. Syst. 2017, 14.
9. Ajoudani, A.; Zanchettin, A.M.; Ivaldi, S.; Albu-Schäffer, A.; Kosuge, K.; Khatib, O. Progress and prospects of the human–robot collaboration. Auton. Robot. 2018, 42, 957–975.
10. Mayer, M.P.; Odenthal, B.; Faber, M.; Winkelholz, C.; Schlick, C.M. Cognitive engineering of automated assembly processes. Hum. Factors Ergon. Manuf. 2014, 24, 348–368.
11. Moniz, A. Robots and Humans as Co-Workers? The Human-Centred Perspective of Work with Autonomous Systems; IET Working Papers Series; IET/CESNOVA: Monte de Caparica, Portugal, 2013; pp. 1–21.
12. Ruskowski, M.; Legler, T.; Beetz, M.; Bartels, G. Special Issue on Smart Production. KI Künstl. Intell. 2019, 33, 111–116.
13. Michalos, G.; Makris, S.; Tsarouchi, P.; Guasch, T.; Kontovrakis, D.; Chryssolouris, G. Design considerations for safe human-robot collaborative workplaces. Procedia CIRP 2015, 37, 248–253.
14. Roy, S.; Edan, Y. Investigating joint-action in short-cycle repetitive handover tasks: The role of giver versus receiver and its implications for human–robot collaborative system design. Int. J. Soc. Robot. 2018, 1–16.
15. Cakmak, M.; Srinivasa, S.S.; Lee, M.K.; Forlizzi, J.; Kiesler, S. Human preferences for robot-human hand-over configurations. In Proceedings of the 2011 IEEE/RSJ International Conference on Intelligent Robots and Systems, San Francisco, CA, USA, 25–30 September 2011; pp. 1986–1993.
16. Lopez-Damian, E.; Sidobre, D.; DeLaTour, S.; Alami, R. Grasp planning for interactive object manipulation. In Proceedings of the 5th International Symposium on Robotics and Automation 2006, San Miguel Regla Hidalgo, Mexico, 25–28 August 2006.
17. Sisbot, E.A.; Clodic, A.; Alami, R.; Ransan, M. Supervision and motion planning for a mobile manipulator interacting with humans. In Proceedings of the 3rd ACM/IEEE International Conference on Human Robot Interaction, Amsterdam, The Netherlands, 12–15 March 2008; ACM: New York, NY, USA, 2008; pp. 327–334.
18. Nagata, K.; Oosaki, Y.; Kakikura, M.; Tsukune, H. Delivery by hand between human and robot based on fingertip force-torque information. In Proceedings of the 1998 IEEE/RSJ International Conference on Intelligent Robots and Systems. Innovations in Theory, Practice and Applications, Victoria, BC, Canada, 17 October 1998; Volume 2, pp. 750–757.
19. Hoffman, G. Evaluating Fluency in Human-Robot Collaboration. IEEE Trans. Hum. Mach. Syst. 2019, 49, 209–218.
20. Fischer, K.; Jensen, L.C.; Kirstein, F.; Stabinger, S.; Erkent, Ö.; Shukla, D.; Piater, J. The effects of social gaze in human-robot collaborative assembly. In Proceedings of the 7th International Conference on Social Robotics, Paris, France, 26–30 October 2015; Tapus, A., André, E., Martin, J.-C., Ferland, F., Ammi, M., Eds.; Springer: Cham, Switzerland, 2015; pp. 204–213.
21. Koay, K.L.; Sisbot, E.A.; Syrdal, D.S.; Walters, M.L.; Dautenhahn, K.; Alami, R. Exploratory study of a robot approaching a person in the context of handing over an object. In Proceedings of the AAAI Spring Symposium: Multidisciplinary Collaboration for Socially Assistive Robotics, Stanford, CA, USA, 26–28 March 2007; AAAI: Palo Alto, CA, USA, 2007; pp. 18–24.
22. Rahman, S.M.; Wang, Y. Dynamic affection-based motion control of a humanoid robot to collaborate with human in flexible assembly in manufacturing. In Proceedings of the ASME 2015 Dynamic Systems and Control Conference, Columbus, OH, USA, 28–30 October 2015; American Society of Mechanical Engineers: New York, NY, USA, 2015; p. V003T40A005.
23. Strabala, K.; Lee, M.K.; Dragan, A.; Forlizzi, J.; Srinivasa, S.S.; Cakmak, M.; Micelli, V. Toward seamless human-robot handovers. J. Hum. Robot Interact. 2013, 2, 112–132.
24. Huber, M.; Lenz, C.; Wendt, C.; Färber, B.; Knoll, A.; Glasauer, S. Predictive mechanisms increase efficiency in robot-supported assemblies: An experimental evaluation. In Proceedings of the IEEE International Symposium on Robot and Human Interactive Communication, Gyeongju, South Korea, 26–29 August 2013; IEEE: Piscataway, NJ, USA, 2013.
25. Schulz, R.; Kratzer, P.; Toussaint, M. Building a Bridge with a Robot: A System for Collaborative On-table Task Execution. In Proceedings of the 5th International Conference on Human Agent Interaction, Bielefeld, Germany, 17–20 October 2017; ACM: New York, NY, USA, 2017; pp. 399–403.
26. Kalman, R.E. A new approach to linear filtering and prediction problems. J. Basic Eng. 1960, 82, 35–45.
27. Endsley, M.R. Designing for Situation Awareness: An Approach to User-Centered Design, 2nd ed.; CRC Press: Boca Raton, FL, USA, 2011.
28. Aleotti, J.; Micelli, V.; Caselli, S. An Affordance Sensitive System for Robot to Human Object Handover. Int. J. Soc. Robot. 2014, 6, 653–666.
29. Hoffman, G.; Breazeal, C. Effects of anticipatory action on human-robot teamwork efficiency, fluency, and perception of team. In Proceedings of the ACM/IEEE International Conference on Human-Robot Interaction, Arlington, VA, USA, 9–11 March 2007; ACM: New York, NY, USA, 2007; pp. 1–8.
30. Shah, J.; Wiken, J.; Williams, B.; Breazeal, C. Improved human-robot team performance using chaski, a human-inspired plan execution system. In Proceedings of the 6th International Conference on Human-Robot Interaction, Lausanne, Switzerland, 6–9 March 2011; ACM: New York, NY, USA, 2011; pp. 29–36.
31. Edsinger, A.; Kemp, C.C. Human-robot interaction for cooperative manipulation: Handing objects to one another. In Proceedings of the RO-MAN 2007—The 16th IEEE International Symposium on Robot and Human Interactive Communication, Jeju, Korea, 26–29 August 2007; pp. 1167–1172.
32. St Clair, A.; Mataric, M. How robot verbal feedback can improve team performance in human-robot task collaborations. In Proceedings of the 10th Annual ACM/IEEE International Conference on Human-Robot Interaction, Portland, OR, USA, 2–5 March 2015; ACM: New York, NY, USA, 2015; pp. 213–220.
33. Moon, A.; Troniak, D.M.; Gleeson, B. Meet me where I'm gazing: How shared attention gaze affects human-robot handover timing. In Proceedings of the 2014 ACM/IEEE International Conference on Human-Robot Interaction, Bielefeld, Germany, 3–6 March 2014; ACM: New York, NY, USA, 2014; pp. 334–341.
34. Bellotto, N. Robot control based on qualitative representation of human trajectories. In 2012 AAAI Spring Symposium Series; AAAI: Palo Alto, CA, USA, 2012.
35. Kitade, T.; Satake, S.; Kanda, T.; Imai, M. Understanding suitable locations for waiting. In Proceedings of the 8th ACM/IEEE International Conference on Human-Robot Interaction, Tokyo, Japan, 3–6 March 2013; pp. 57–64.
36. Sisbot, E.A.; Marin-Urias, L.F.; Broquère, X.; Sidobre, D.; Alami, R. Synthesizing Robot Motions Adapted to Human Presence. Int. J. Soc. Robot. 2010, 2, 329–343.
37. Chung, S.Y.; Huang, H.P. Predictive navigation by understanding human motion patterns. Int. J. Adv. Robot. Syst. 2011, 8, 3.
38. Bennewitz, M.; Burgard, W.; Cielniak, G.; Thrun, S. Learning motion patterns of people for compliant robot motion. Int. J. Robot. Res. 2005, 24, 31–48.
39. Nikolaidis, S.; Lasota, P.; Rossano, G.; Martinez, C.; Fuhlbrigge, T.; Shah, J. Human-robot collaboration in manufacturing: Quantitative evaluation of predictable, convergent joint action. In Proceedings of the IEEE ISR, Seoul, Korea, 24–26 October 2013; pp. 1–6.
40. Lenz, C.; Nair, S.; Rickert, M.; Knoll, A.; Rösel, W.; Gast, J.; Bannat, A.; Wallhoff, F. Joint-Action for Humans and Industrial Robots for Assembly Tasks. In Proceedings of the 17th IEEE International Symposium on Robot and Human Interactive Communication, Munich, Germany, 1–3 August 2008; pp. 130–135.
41. Kwon, W.Y.; Suh, I.H. Planning of proactive behaviors for human–robot cooperative tasks under uncertainty. Knowl. Based Syst. 2014, 72, 81–95.
42. Pellegrinelli, S.; Admoni, H.; Javdani, S.; Srinivasa, S. Human-robot shared workspace collaboration via hindsight optimization. In Proceedings of the 2016 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Daejeon, Korea, 9–14 October 2016; pp. 831–838.
43. Chao, C.; Thomaz, A. Timed Petri nets for fluent turn-taking over multimodal interaction resources in human-robot collaboration. Int. J. Robot. Res. 2016, 35, 1330–1353.
44. Mainprice, J.; Berenson, D. Human-robot collaborative manipulation planning using early prediction of human motion. In Proceedings of the 2013 IEEE/RSJ International Conference on Intelligent Robots and Systems, Tokyo, Japan, 3–7 November 2013; pp. 299–306.
45. Mainprice, J.; Sisbot, E.A.; Jaillet, L.; Cortés, J.; Alami, R.; Siméon, T. Planning human-aware motions using a sampling-based costmap planner. In Proceedings of the 2011 IEEE International Conference on Robotics and Automation, Shanghai, China, 9–13 May 2011; pp. 5012–5017.
46. Maynard, H.B.; Stegemerten, G.J.; Schwab, J.L. Methods-Time Measurement; McGraw-Hill: New York, NY, USA, 1948.
47. Syska, A. Produktionsmanagement: Das A—Z Wichtiger Methoden und Konzepte für die Produktion von Heute; GWV Fachverlage GmbH: Wiesbaden, Germany, 2006; p. 99.
48. Unhelkar, V.V.; Siu, H.C.; Shah, J.A. Comparative performance of human and mobile robotic assistants in collaborative fetch-and-deliver tasks. In Proceedings of the 2014 ACM/IEEE International Conference on Human–Robot Interaction, Bielefeld, Germany, 3–6 March 2014; ACM: New York, NY, USA, 2014; pp. 82–89.
Figure 1. Workbench layout (left) and assembled blocks (right).
Figure 2. Experimental workflow diagram with an inset figure showing a close-up actual human–robot physical interaction.
Figure 3. Speed variation of the operator's hand and robot during the handover process, with Gx = reach out to grab the xth building block, x = 1, 2, 3; Rx = return to the assembly area with the xth building block; RtR = reach to the robot; RtA = return to the assembly area with the handed over building block; SD = start handover by robot; FD = finish handover by robot.
Figure 4. Robot waiting time (RWT) in the assembly cycle. Abbreviations: see caption of Figure 3.
Figure 5. Average cycle time for three skill levels and handover prediction models.
Figure 6. Average operator waiting time (OWT) for the three skill level groups and handover prediction models.
Figure 7. Average RWT for the three skill levels and handover prediction models.
Figure 8. Average total waiting time for the three skill levels and handover prediction models.
Figure 9. Median preference scores of participants in three skill level groups and for three handover prediction models.
Figure 10. Cycle time for three handover prediction models.
Figure 11. Cycle time over 20 sessions for different handover prediction models under the assembly learning effect.
Figure 12. Time variation of the RWT for different handover prediction models under the assembly learning effect.
Figure 13. Time variation of the OWT for different handover prediction models under the assembly learning effect.
Figure 14. Median preference score for the three handover prediction models.
Table 1. Description of block assembly actions and their durations, as calculated using method-time measurement (MTM).

| Action | Symbol | TMU | Action Symbol Description | Experimental Action |
| --- | --- | --- | --- | --- |
| Assembly of each of the three orange blocks | R38C | 16.28 | R: extend the hand; C: reach out to the messily placed target object | Extend the hand 38 cm from the assembly point to the blocks |
| | G4B | 9.10 | G: grab the targeted object (need to find or select), which is smaller than 26 × 26 × 26 mm³ but larger than 6 × 6 × 3 mm³ | Grab an orange block from the building block area |
| | R38A | 10.94 | R: extend the hand; A: extend the hand to a specified position | Return to the assembly point from the building block area |
| | P2SS | 19.70 | P2: a slightly tightened degree of fit; the object can be placed into semisymmetry (SS) with a slight force, such as the fit of two square objects | Assemble the orange block |
| Assembly of the fourth block (a green block) | R34B | 13.92 | R: extend the hand; B: extend the hand to the target object | Extend the hand 34 cm from the assembly point to the handover point |
| | G2 | 5.60 | G: use the correct grabbing method | Grab the green block held by the robotic gripper |
| | R34A | 10.22 | R: extend the hand; A: extend the hand to a specified position | Return to the assembly point from the handover point |
| | P2SS | 19.70 | P2: a slightly tightened degree of fit; the object can be placed into semisymmetry (SS) with a slight force, such as the fit of two square objects | Assemble the green block onto the orange blocks |
| | R38A | 10.94 | R: extend the hand; A: extend the hand to a specified position | Extend the hand 28 cm from the assembly point to the placement area |
| | RL1 | 2.00 | Release the fingers to release the object | Place the assembled blocks |
| | R38A | 10.94 | R: extend the hand; A: extend the hand to a specified position | Return to the assembly point from the placement area |
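As a consistency check, the TMU values in Table 1 can be converted to seconds using the standard MTM conversion of 1 TMU = 0.036 s (the conversion factor is not stated in the table itself); the short computation below reproduces the 8.7 s standard time used in the MTM condition:

```python
# Sanity check: one assembly cycle from Table 1, converted from TMU to seconds.
TMU_TO_SECONDS = 0.036  # standard MTM conversion: 1 TMU = 0.00001 h = 0.036 s

orange_block = [16.28, 9.10, 10.94, 19.70]  # R38C, G4B, R38A, P2SS (performed three times)
green_block = [13.92, 5.60, 10.22, 19.70]   # R34B, G2, R34A, P2SS (handover block)
place_blocks = [10.94, 2.00, 10.94]         # R38A, RL1, R38A (place the assembled blocks)

total_tmu = 3 * sum(orange_block) + sum(green_block) + sum(place_blocks)
print(f"Total: {total_tmu:.2f} TMU = {total_tmu * TMU_TO_SECONDS:.2f} s")  # 241.38 TMU, about 8.69 s
```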
Table 2. Average cycle time (s) for three skill level groups under three handover prediction models.

| Skill Level | MTM | Kalman Filter | Trigger Sensor | Average |
| --- | --- | --- | --- | --- |
| Novice | 9.86 | 9.63 | 9.80 | 9.76 |
| Intermediate | 8.71 | 8.78 | 8.76 | 8.75 |
| Advanced | 8.71 | 7.80 | 7.73 | 8.08 |
| Average | 9.09 | 8.74 | 8.77 | |
