Article

Haptic Guidance System for Teleoperation Based on Trajectory Similarity

1 Haptics Laboratory, Faculty of Fiber Science and Engineering, Kyoto Institute of Technology, Kyoto 606-8585, Japan
2 Department of Mechanical Engineering, Kobe University, Kobe 657-8501, Japan
* Author to whom correspondence should be addressed.
† These authors contributed equally to this work.
Robotics 2025, 14(2), 15; https://doi.org/10.3390/robotics14020015
Submission received: 10 December 2024 / Revised: 15 January 2025 / Accepted: 28 January 2025 / Published: 30 January 2025
(This article belongs to the Special Issue Robot Teleoperation Integrating with Augmented Reality)

Abstract:
Teleoperation technology enables remote control of machines, but often requires complex manoeuvres that pose significant challenges for operators. To mitigate these challenges, assistive systems have been developed to support teleoperation. This study presents a teleoperation guidance system that provides assistive force feedback to help operators align more accurately with desired trajectories. Two key issues remain: (1) the lack of a flexible, real-time approach to defining desired trajectories and calculating assistive forces, and (2) uncertainty about the effects of forward motion assistance within the assistive forces. To address these issues, we propose a novel approach that captures the posture trajectory of the local control interface, statistically generates a reference trajectory, and incorporates forward motion as an adjustable parameter. In Experiment 1, which involved simulating an object transfer task, the proposed method significantly reduced the operator’s workload compared to conventional techniques, especially in dynamic target scenarios. Experiment 2, which involved more complex paths, showed that assistive forces with forward assistance significantly improved manoeuvring performance.

1. Introduction

Teleoperation technology enables the remote control of various robots, such as multi-joint robotic manipulators, construction machinery, and humanoid avatar robots [1]. Teleoperation is expected to be applied to construction work [2] and to disaster response and recovery work [3,4]. While it offers significant benefits, including the ability to work remotely, teleoperation often poses challenges due to the complexity of the required maneuvers, leading to reduced operational efficiency and controllability compared with direct control [5]. To support robot teleoperation, a variety of technologies have been developed.
Numerous studies have focused on reducing the operator’s load by using visual information [6,7,8,9,10] and auditory information [11] commonly used in direct operation. However, these modalities are already heavily relied upon by operators for environmental awareness, and their effectiveness can be compromised by external disturbances such as noise.
Haptic information has emerged as a promising approach for supporting teleoperation tasks [12,13]. There are two main approaches to utilizing haptic information: (1) the “haptic feedback approach”, where forces and accelerations encountered by the robot are transferred to the operator [14,15,16,17,18], and (2) the “haptic guidance approach”, which provides force and acceleration cues to guide the operator toward target positions and along target trajectories [19,20,21,22,23,24,25]. Both approaches can be activated together.
This study focuses on the “haptic guidance approach”, which is particularly suitable for scenarios such as object transport that involve little contact interaction with the environment. However, existing haptic guidance studies often rely on desired trajectories predefined through empirical methods, which limits flexibility in dynamic or unknown environments.
To address this limitation, statistical methods based on teleoperation data have been proposed, often referred to as data-driven approaches or Learning from Demonstration (LfD) [26]. These methods, particularly in robotic arm motion studies, allow the recording of multiple teleoperation trajectories and the subsequent generation of reference trajectories [27,28,29,30,31]. Such learning-based trajectory planning is an important method in human–robot interaction [32] broadly, covering not only teleoperation but also robot automation [33] and physical human–robot collaboration [34]. While these methods provide flexibility for adapting to diverse tasks, they face two primary challenges: (1) predefining the target destination position restricts real-time trajectory adjustments, limiting adaptability during operation, and (2) it is unclear whether assistive forces should include a forward-driving component toward the target position.
This study introduces a novel teleoperation haptic guidance method that utilizes pre-recorded robot arm motion data to statistically determine desired trajectories in real-time, providing assistive force feedback to the operator. In contrast to traditional approaches, this method eliminates the need to predefine target positions, dynamically adapting guidance based on the similarity between current and previously recorded trajectories. Furthermore, the study investigates the effects of integrating forward-driving forces into the trajectory-following feedback on teleoperation performance.

2. Method

2.1. Desired Trajectory Calculation Based on Trajectory Similarity

The proposed system consists of two parts: the calculation of the desired trajectory based on the similarity between the pre-recorded and current trajectories, and the calculation of the assistive force based on the desired trajectory and trajectory error (confidence). In this section, we explain the former desired trajectory calculation process as shown in Figure 1, while the latter process will be described in the next section.
The proposed system is expected to be particularly useful for tasks involving repeated debris collection and sorting across multiple locations, where the final destination of the trajectory can vary. Additionally, we account for scenarios where the environment changes during the task, such as collapses obstructing the original path. Our goal is to develop a system that adapts to differences in recorded trajectories, even when the final destination remains the same but the intermediate paths differ significantly. Currently, the end point of each trial must be defined by the operator, typically using a switch or similar device. A potential improvement for future iterations of the system includes the automatic detection of these end points.
First, the trajectory measurement and pre-processing steps are introduced. $T_c$ is the time-series current trajectory, consisting of positions $P_c(t) \in \mathbb{R}^3$, and $T_r^i$ is the $i$-th pre-recorded trajectory, consisting of positions $P_r^i(t) \in \mathbb{R}^3$. Similarity is rated between the current resampled trajectory $\tau_c \in \mathbb{R}^{3 \times T}$ and the $i$-th pre-recorded resampled trajectory $\tau_r^i \in \mathbb{R}^{3 \times T}$. $\tau_c$, consisting of positions $p_c(n) \in \mathbb{R}^3$, is the trajectory with $T\,(=100)$ equally spaced resampled points ending at the current position, and is calculated from $T_c$ in real time. The total length of $\tau_c$ is capped at a fixed maximum so that only the part close to the current position is referenced. $\tau_r^i$, consisting of positions $p_r^i(n) \in \mathbb{R}^3$, is obtained from $T_r^i$ by finding the point on the $i$-th pre-recorded trajectory closest to the current position and resampling $T$ points at equal intervals from that point.
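The resampling step described above can be sketched as follows. This is a minimal illustration rather than the authors' implementation; the function name, the use of linear interpolation between measured samples, and the `max_length` cap are our assumptions.

```python
import numpy as np

def resample_trajectory(points, n_samples=100, max_length=None):
    """Resample a 3D trajectory to n_samples points equally spaced in arc length.

    points: (M, 3) measured positions. max_length optionally caps the total
    arc length so only the part near the start point is used, mirroring the
    fixed maximum total distance of tau_c in the text.
    """
    points = np.asarray(points, dtype=float)
    seg = np.linalg.norm(np.diff(points, axis=0), axis=1)
    s = np.concatenate([[0.0], np.cumsum(seg)])        # cumulative arc length
    total = s[-1] if max_length is None else min(s[-1], max_length)
    stations = np.linspace(0.0, total, n_samples)
    # Linearly interpolate each coordinate at the equally spaced stations
    return np.column_stack([np.interp(stations, s, points[:, k]) for k in range(3)])
```

Applying this to both the current trajectory and each recorded trajectory yields the fixed-size $3 \times T$ arrays that the similarity metric below compares point by point.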
The similarity metric S i is defined as follows:
$$ S_i = e^{-\beta \lVert \tau_c - \tau_r^i \rVert^2} \tag{1} $$
Here, $\beta$ is a constant that determines the sensitivity, which the authors determined empirically. Differences in this value may affect the performance of the system, but there is as yet no quantitative way to determine it; this is a subject for future research.
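The similarity metric can be written compactly as follows (a sketch under our naming; the squared norm is taken over the full $3 \times T$ resampled arrays):

```python
import numpy as np

def similarity(tau_c, tau_r, beta=1.0):
    """S_i = exp(-beta * ||tau_c - tau_r||^2) for (3, T) resampled
    trajectories; beta sets how quickly similarity decays with distance."""
    d2 = np.sum((np.asarray(tau_c, float) - np.asarray(tau_r, float)) ** 2)
    return float(np.exp(-beta * d2))
```

Identical trajectories score 1, and the score decays toward 0 as the trajectories diverge, with $\beta$ controlling how sharply nearby recordings dominate the weighting.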
This section outlines the calculation of the desired trajectory and the trajectory error (confidence) in the proposed system. Locally Weighted Regression (LWR) [35] is used to calculate the weighted mean and variance of the recorded trajectories, which are defined as the desired trajectory and the trajectory error (confidence), respectively. The weight $w_i$ for $T_r^i$ over $N$ trials of recorded trajectories is defined by the following equation.
$$ w_i = \frac{S_i}{\sum_{j=1}^{N} S_j} \tag{2} $$
Next, LWR is performed to compute the mean and variance of the weighted recorded trajectories. We obtain $a_t$ and $A_t$ according to the following equation.
$$ \begin{bmatrix} a_t^T \\ A_t^T \end{bmatrix} = \left( \Xi^T W \Xi + \lambda I \right)^{-1} \Xi^T W X_t \tag{3} $$
Here, $\Xi = [\,[1; \tilde{\tau}_r^1], \ldots, [1; \tilde{\tau}_r^N]\,]$ is a matrix composed of $\tilde{\tau}_r^i = [\,p_r^i(1); \ldots; p_r^i(T)\,]$, and $W = \mathrm{diag}(w_1, \ldots, w_N)$ and $X_t = [\,P_r^1(t), \ldots, P_r^N(t)\,]^T$ are the weight matrix and the matrix of states of the recorded trajectories at time $t$, respectively. The position $P_d(t) \in \mathbb{R}^3$ at time $t$ on the desired trajectory is given by the following equation.
$$ P_d(t) = A_t \tilde{\tau}_c + a_t \tag{4} $$
Here, $\tilde{\tau}_c = [\,p_c(1); \ldots; p_c(T)\,]$. Moreover, the covariance matrix $\Sigma_d(t) \in \mathbb{R}^{3 \times 3}$ at time $t$ of the desired trajectory is obtained as the confidence of the desired trajectory, and it is calculated using the following equation.
$$ \Sigma_d(t) = \frac{\sum_{i=1}^{N} w_i \left( P_d(t) - \mu_i \right) \left( P_d(t) - \mu_i \right)^T}{\sum_{i=1}^{N} w_i} \tag{5} $$
Here, $\mu_i = A_t \tilde{\tau}_r^i + a_t$.
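The LWR step above can be sketched in a few lines. This is an illustrative implementation under our assumptions (function name, array layouts, and a small ridge term $\lambda$), not the authors' code; since the weights are normalized, the denominator of the covariance reduces to 1.

```python
import numpy as np

def lwr_desired_position(tau_c, taus_r, X_t, beta=1.0, lam=1e-6):
    """Desired position P_d(t) and its confidence Sigma_d(t) via LWR.

    tau_c:  (3, T) current resampled trajectory.
    taus_r: (N, 3, T) pre-recorded resampled trajectories.
    X_t:    (N, 3) recorded positions at time t.
    """
    # Similarity-based weights, normalized so they sum to 1
    S = np.array([np.exp(-beta * np.sum((tau_c - tr) ** 2)) for tr in taus_r])
    w = S / S.sum()
    # Regularized weighted least squares for intercept a_t and slope A_t
    Xi = np.stack([np.concatenate([[1.0], tr.ravel()]) for tr in taus_r])
    W = np.diag(w)
    A_aug = np.linalg.solve(Xi.T @ W @ Xi + lam * np.eye(Xi.shape[1]),
                            Xi.T @ W @ X_t)
    a_t, A_t = A_aug[0], A_aug[1:].T            # A_t maps R^{3T} -> R^3
    P_d = A_t @ tau_c.ravel() + a_t             # desired position
    # Weighted covariance around the per-trajectory predictions mu_i
    mus = np.stack([A_t @ tr.ravel() + a_t for tr in taus_r])
    diff = P_d - mus
    Sigma_d = np.einsum('i,ij,ik->jk', w, diff, diff)
    return P_d, Sigma_d
```

With a single recorded trajectory that matches the current one, the regression reproduces the recorded position at time $t$ and the covariance collapses toward zero, which the stiffness rule in the next section rewards with a strong assistive force.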

2.2. Guidance Force Calculation Based on Desired Trajectory

In this section, we explain the calculation of the guidance force, as shown in Figure 2. The guidance force $f(t)$ at the current time is proportional to the difference between the target position and the current position, and is calculated as follows.
$$ f(t) = K \left( P_d(t^*) - P_c(t) \right) \tag{6} $$
Here, $K$ and $t^*$ are the stiffness coefficient and the time index of the target position, respectively.
Figure 2. Calculation of guidance force. (a) Without forward component. (b) With forward component.
By increasing or decreasing the force feedback depending on the confidence of the desired trajectory, the present system can apply a strong force when the confidence is high and a weaker force when it is low. Therefore, the stiffness coefficient $K$ is defined using the covariance matrix $\Sigma_d(t)$, whose diagonal components are $\sigma_{ii}(t)$, as follows:
$$ K = k_0 \, e^{-\alpha \left( \sigma_{11}(t) + \sigma_{22}(t) + \sigma_{33}(t) \right)} \tag{7} $$
Here, $k_0$ and $\alpha$ are constant parameters, which the authors determined empirically. Although the present system does not support following a target attitude, it may be necessary to provide such support depending on the task, which will be considered in future studies.
Additionally, by preparing two types of target-position time index $t^*$, the present system can control whether the supportive force includes a forward component. If the supportive force does not include a forward component, as shown in Figure 2a, $t^*$ is defined as the time index of the point $p_{M,d}(t)$ on the desired trajectory closest to the current state $p_{M,c}(t)$:
$$ t^* = \operatorname*{arg\,min}_{\tau} \lVert p_{M,d}(\tau) - p_{M,c}(t) \rVert \tag{8} $$
If the supportive force includes a forward component, as shown in Figure 2b, $f(t)$ is defined as follows using the time index $t^*$ and an additional time offset $\Delta t$:
$$ f(t) = K \left( P_d(t^* + \Delta t) - P_c(t) \right) \tag{9} $$
$\Delta t$ is a constant value that the authors determined empirically. Investigating the impact of this value and optimizing it are subjects for future research.
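The force computation in this section can be sketched as a single routine. The function name and array layouts are ours; setting the forward offset to zero recovers the pure trajectory-following case of Figure 2a, while a positive offset gives the forward-driving case of Figure 2b.

```python
import numpy as np

def guidance_force(P_d, P_c, Sigma_d, k0=1.0, alpha=1.0, dt_fwd=0):
    """Assistive force toward the desired trajectory.

    P_d: (T, 3) desired trajectory, P_c: (3,) current position,
    Sigma_d: (3, 3) confidence covariance of the desired trajectory.
    dt_fwd = 0 gives a pure trajectory-following force; dt_fwd > 0
    advances the target along the trajectory (forward-driving component).
    """
    # Time index of the closest point on the desired trajectory
    t_star = int(np.argmin(np.linalg.norm(P_d - P_c, axis=1)))
    # Stiffness shrinks when the trajectory confidence is low (large variance)
    K = k0 * np.exp(-alpha * np.trace(Sigma_d))
    target = P_d[min(t_star + dt_fwd, len(P_d) - 1)]
    return K * (target - P_c)
```

On a straight desired trajectory, the zero-offset force points perpendicularly back onto the path, while the forward offset adds a component along the direction of travel, which is exactly the distinction tested in Experiment 2.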

3. Experiment 1

The objective of Experiment 1 is to demonstrate that haptic guidance assistance can be provided using the current trajectory, even in situations where the target position changes during task execution, which cannot be achieved with the conventional method [28]. By comparing the proposed method with the conventional method, we investigate whether the assistance force provided by the proposed method can further alleviate task load.

3.1. Conventional Method

The conventional method [28] calculates the desired trajectory based on the similarity between the final reached position of the pre-measured trajectory and the final reached position of the task currently being performed.
We define the index S i that represents similarity as follows:
$$ S_i = e^{-\beta \lVert P_{\mathrm{goal}} - P_r^i(T) \rVert^2} \tag{10} $$
where $\beta$, $P_{\mathrm{goal}}$, and $P_r^i(T)$ denote the sensitivity constant, the current desired goal manually specified by the user, and the final arrival position of the recorded trajectory, respectively.
The weight $w_i$ is defined as in (2). Next, we obtain $a_t$ and $A_t$, which are necessary for calculating the desired trajectory and the trajectory error, from the following equation:
$$ \begin{bmatrix} a_t^T \\ A_t^T \end{bmatrix} = \left( S^T W S + \lambda I \right)^{-1} S^T W X_t \tag{11} $$
where $S = [\,\tilde{s}_1, \ldots, \tilde{s}_N\,]$ is a matrix composed of $\tilde{s}_i = [\,1; P_r^i(T)\,]$, $W = \mathrm{diag}(w_1, \ldots, w_N)$ is the weight matrix, and $X_t = [\,P_r^1(t), \ldots, P_r^N(t)\,]^T$ is the matrix consisting of the states at time $t$ of the recorded trajectories.
The state P d ( t ) of the desired trajectory at time t is given by the following equation:
$$ P_d(t) = A_t P_{\mathrm{goal}} + a_t \tag{12} $$
Furthermore, the variance–covariance matrix Σ d ( t ) of the desired trajectory at time t is calculated as follows:
$$ \Sigma_d(t) = \frac{\sum_{i=1}^{N} w_i \left( P_d(t) - \mu_i \right) \left( P_d(t) - \mu_i \right)^T}{\sum_{i=1}^{N} w_i} \tag{13} $$
where $\mu_i = A_t P_r^i(T) + a_t$.
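The key difference from the proposed method is the weighting: here similarity depends only on final positions, not on the evolving trajectory shape. A minimal sketch (our naming, not the authors' code):

```python
import numpy as np

def goal_similarity_weights(p_goal, recorded_finals, beta=1.0):
    """Conventional weighting: similarity depends only on the distance
    between the user-specified goal and each recorded trajectory's
    final position, normalized into weights w_i."""
    finals = np.asarray(recorded_finals, dtype=float)
    d2 = np.sum((finals - np.asarray(p_goal, dtype=float)) ** 2, axis=1)
    S = np.exp(-beta * d2)
    return S / S.sum()
```

Because the goal must be fixed before the task, these weights cannot shift toward a different end area mid-task, which is the limitation Experiment 1 probes.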

3.2. Participants

The experiment involved 12 participants (11 males and 1 female) aged between 22 and 25 years, all of whom were right-handed. All participants provided informed consent prior to the experiment.

3.3. Experimental Environment

The experiment used a remote control system consisting of a local interface (Virtuose6D, Haption) and a remote robot arm. The remote robot arm was simulated in a virtual experimental environment, as shown in Figure 3.

3.4. Experimental Task

The experimental task involved transporting an object to two different goal areas. Area A is the regular goal area and Area B is the irregular area, as shown in Figure 4a. The assumed scenario is that the system normally instructs the operator to transport the object to Area A, but in an unforeseen situation it may instruct the operator to transport the object to Area B. The task was conducted in the following manner, as shown in Figure 4:
  • The participant operated the local interface to move the end effector of the remote robot arm to the location where the object appeared.
  • While holding the object, the participant moved the end effector toward the waypoints indicated on the screen, bringing the object into contact with each waypoint in turn.
  • When the end effector made contact with a waypoint, the waypoint disappeared, and the next waypoint appeared.
  • After the last waypoint was passed, as shown in Figure 4c, the target end area was indicated. The participant then moved the object onto the target end area and released it from the end effector (Figure 4d).

3.5. Experimental Condition

The experimental comparison involved three conditions: the proposed method with trajectory-following and forward-driving forces (TF+FD), the conventional force assistance method (CNV), and no force assistance (NF).

3.5.1. Proposed Method with Trajectory-Following and Forward-Driving Forces (TF+FD)

This condition used the proposed method described in Section 2. In this method, the desired trajectory is generated based on the current trajectory of the end effector. The target position is set on the desired trajectory based on Equation (9), and a force is applied to move the end effector toward the target position.

3.5.2. Conventional Method (CNV)

This condition used the conventional method described in Section 3.1. In this method, the desired trajectory is generated based on the target position provided at the beginning of the task, as shown in Equation (12). The target position is set on the desired trajectory based on Equation (9), and a force is applied to move the end effector toward the target position. In this experiment, the target position was set on end area A, and it can therefore be difficult to generate an assistive force toward end area B.

3.5.3. No Force Guidance (NF)

This condition served as a comparison condition and provided no force assistance.

3.6. Experimental Procedure

The experimental procedure is as follows: Each participant performs the pre-recording phase and the task phase.
In the pre-recording phase, the participant performs 10 trials for each of the end areas while passing through the waypoints without any force assistance. A total of 20 recorded trajectories are collected for each participant.
In the task phase, the participant performs 10 trials for each condition. The participants are instructed to complete the task as quickly as possible. The order of the three conditions is randomized for each participant. After 10 trials for each condition are completed, a questionnaire is conducted.

3.7. Evaluation Criteria

3.7.1. Objective Evaluation

For each trial, we measured the task completion time. The task completion time was defined as the time elapsed from when the subject grasped the object until the object was placed at the final destination.

3.7.2. Subjective Evaluation

In the questionnaire, participants were asked to rate their subjective experience on a seven-point scale for six evaluation criteria. These criteria were mental workload, physical workload, effort, own performance, operability, and difficulty, as shown in Table 1. These evaluation criteria were designed based on the NASA-TLX [36] to measure subjective workload during the task.

3.8. Results

3.8.1. Comparison of Trajectories Between Methods

Figure 5 shows the trajectories going to end area B using the conventional method. In this case, the desired trajectory is calculated based on area A, so the trajectory toward B cannot be taken into account.
Figure 6 and Figure 7 show the trajectories going to end areas A and B using the proposed method, respectively. For both the end area A and B tasks, while the choice between end area A and end area B has not yet been made (Figure 6a and Figure 7a), the desired trajectory is positioned midway between areas A and B. However, once the midpoint is passed and the decision is made to take either A or B (Figure 6c and Figure 7c), the desired trajectory to the selected goal is generated.

3.8.2. Task Completion Time

Figure 8 shows the task completion times in the case of goal areas A and B, respectively. The results of the Shapiro–Wilk test showed that not all conditions satisfied the assumption of normality. Therefore, the Kruskal–Wallis test was performed. The results showed no significant differences (A: p > 0.05 , B: p > 0.05 ) among the three conditions (NF, CNV, and TF+FD).
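The statistical procedure used here and in the following subsections can be sketched as a single pipeline (assuming SciPy; the helper name and return structure are ours, not the authors'):

```python
import numpy as np
from scipy import stats

def compare_conditions(nf, cnv, tffd, alpha=0.05):
    """Shapiro-Wilk per condition, Kruskal-Wallis across conditions, then
    pairwise Mann-Whitney U tests with Bonferroni correction (3 comparisons)
    when the omnibus test is significant."""
    groups = {"NF": nf, "CNV": cnv, "TF+FD": tffd}
    all_normal = all(stats.shapiro(g).pvalue > alpha for g in groups.values())
    _, p_kw = stats.kruskal(*groups.values())
    posthoc = {}
    if p_kw < alpha:
        names = list(groups)
        for i in range(len(names)):
            for j in range(i + 1, len(names)):
                _, p = stats.mannwhitneyu(groups[names[i]], groups[names[j]])
                posthoc[(names[i], names[j])] = min(p * 3, 1.0)  # Bonferroni
    return {"all_normal": all_normal, "p_kruskal": p_kw, "posthoc": posthoc}
```

The nonparametric route is taken because, as reported throughout, the Shapiro–Wilk test rejected normality for at least one condition in each measure.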

3.8.3. Mental Workload

Figure 9 shows the ratings of mental workload for each of the three conditions. The results of the Shapiro–Wilk test showed that not all conditions satisfied the assumption of normality. The Kruskal–Wallis test showed significant differences (A: p < 0.05 , B: p < 0.001 ) among the three conditions (NF, CNV, and TF+FD) for each of the goals A and B. The post-hoc analysis using the Mann–Whitney U test with Bonferroni correction showed a significant difference between NF and TF+FD ( p < 0.05 ) for goal A, and significant differences between NF and CNV ( p < 0.01 ) and between CNV and TF+FD ( p < 0.01 ) for goal area B.

3.8.4. Physical Workload

Figure 10 shows the ratings of physical workload for each of the three conditions. The results of the Shapiro–Wilk test showed that not all conditions satisfied the assumption of normality. The Kruskal–Wallis test showed a significant difference for goal B among the three conditions (A: p > 0.05 , B: p < 0.001 ). The post-hoc analysis using the Mann–Whitney U test with Bonferroni correction showed significant differences between NF and TF+FD ( p < 0.05 ) and between CNV and TF+FD ( p < 0.001 ) for goal area B.

3.8.5. Effort

Figure 11 shows the ratings of effort for each of the three conditions. The results of the Shapiro–Wilk test showed that not all conditions satisfied the assumption of normality. The Kruskal–Wallis test showed a significant difference for goal B among the three conditions (A: p > 0.05 , B: p < 0.01 ). The post-hoc analysis using the Mann–Whitney U test with Bonferroni correction showed a significant difference between CNV and TF+FD ( p < 0.01 ) for goal area B.

3.8.6. Achievement

Figure 12 presents the evaluations of achievement for each of the three conditions. The results of the Shapiro–Wilk test showed that not all conditions satisfied the assumption of normality. The Kruskal–Wallis test showed no significant differences (A: p > 0.05 , B: p > 0.05 ) among the three conditions. Therefore, it can be concluded that neither method had a significant impact on task achievement ratings in this experiment.

3.8.7. Operation Difficulty

Figure 13 shows the evaluations of operation difficulty for each of the three conditions. The results of the Shapiro–Wilk test showed that not all conditions satisfied the assumption of normality. A Kruskal–Wallis test was conducted, and the results showed a significant difference for goal B among the three conditions (A: p > 0.05 , B: p < 0.001 ). The post-hoc analysis using the Mann–Whitney U test with Bonferroni correction showed significant differences between NF and CNV ( p < 0.001 ) and between CNV and TF+FD ( p < 0.001 ). Therefore, the conventional method was shown to worsen operation difficulty for goal B.

3.8.8. Task Difficulty

Figure 14 presents the ratings of task difficulty for each of the three conditions. The Shapiro–Wilk test showed that not all conditions satisfied the assumption of normality. The Kruskal–Wallis test showed a significant difference for goal A among the three conditions (A: p < 0.05 , B: p > 0.05 ). However, the post-hoc analysis using the Mann–Whitney U test with Bonferroni correction showed no significant differences among the three conditions. Therefore, it can be concluded that neither method had a significant impact on task difficulty ratings in this experiment.

4. Experiment 2: Verification of Effect of Forward-Driving and Trajectory-Following Forces

The objective of Experiment 2 is to investigate the impact of two factors on remote control performance: the presence of forward-driving force and trajectory-following force.

4.1. Participants

The experiment involved 12 participants (10 males and 2 females) aged between 22 and 25 years, all of whom were right-handed. All participants provided informed consent prior to the experiment.

4.2. Experimental Task

The experimental task involves obstacle avoidance in a narrow environment and object transportation. The task is conducted as follows, as shown in Figure 15:
  • The participant operated the local interface to grasp the target object at the starting position.
  • While holding the object, the participant moved the target object to the waypoints while avoiding contact with obstacles.
  • The participant then moved the target object to the final target position and released it.

4.3. Experimental Conditions

The experimental conditions consist of three comparison conditions: force guidance with trajectory-following and forward-driving components (TF+FD), force guidance with only the trajectory-following component (TF), and no force guidance (NF).

4.3.1. Trajectory-Following and Forward-Driving Forces (TF+FD)

Force is provided to move the current input position slightly ahead of the nearest position on the desired trajectory, which is calculated based on the input trajectory. This assistance provides both trajectory following and forward movement toward task completion, as represented by Equation (9).

4.3.2. Trajectory-Following Force (TF)

Force is provided to move the current input position towards the nearest position on the desired trajectory. The force assists in following the desired trajectory, as represented by Equation (8).

4.3.3. No Force Guidance (NF)

This condition serves as the comparison condition without force feedback assistance.

4.4. Experimental Procedure

The experimental procedure is as follows. Each participant undergoes a preliminary recording phase followed by a task execution phase.
In the preliminary recording phase, the task is performed without any force feedback assistance for 10 trials. This allows the collection of 10 trial recordings for each participant.
In the task execution phase, the task is performed for 10 trials in each condition. Participants are instructed to complete the task as quickly as possible. The order of the three conditions is randomized for each participant. After completing 10 trials in each condition, a questionnaire is administered.

4.5. Evaluation Criteria

4.5.1. Objective Evaluation

For each trial, the task completion time is measured. The task completion time is the duration from grasping the target object to placing it at the final target position.

4.5.2. Subjective Evaluation

In the questionnaire, participants provide ratings on a seven-point scale for evaluation criteria. The evaluation criteria include mental workload, physical workload, effort, own performance, operability, and difficulty, as shown in Table 1.

4.6. Result

4.6.1. Task Completion Time

Figure 16 shows the task completion times for each of the three conditions. The Shapiro–Wilk test showed that not all conditions satisfied the assumption of normality. Therefore, the Kruskal–Wallis test was conducted, revealing a significant difference ( p < 0.001 ) among the three conditions. The post-hoc analysis using the Mann–Whitney U test with Bonferroni correction showed significant differences between NF and TF+FD ( p < 0.001 ) and between TF and TF+FD ( p < 0.001 ). These results indicate that the inclusion of forward-driving force shortened task completion time.

4.6.2. Mental Workload

Figure 17 shows the ratings of mental workload for each of the three conditions. The Shapiro–Wilk test showed that not all conditions satisfied the assumption of normality. The Kruskal–Wallis test showed a significant difference ( p < 0.01 ) among the three conditions. The post-hoc analysis using the Mann–Whitney U test with Bonferroni correction showed significant differences between NF and TF ( p < 0.05 ) and between NF and TF+FD ( p < 0.05 ). Therefore, both force guidance conditions (TF and TF+FD) reduced mental workload. However, the presence or absence of forward-driving force did not have a significant impact on mental workload.

4.6.3. Physical Workload

Figure 18 shows the ratings of physical workload for each of the three conditions. The Shapiro–Wilk test showed that not all conditions satisfied the assumption of normality. The Kruskal–Wallis test showed a significant difference ( p < 0.001 ) among the three conditions. The post-hoc analysis using the Mann–Whitney U test with Bonferroni correction showed significant differences between NF and TF ( p < 0.05 ), between NF and TF+FD ( p < 0.001 ), and between TF and TF+FD ( p < 0.05 ). Therefore, TF+FD, TF, and NF tended to have the lowest physical workload ratings, in that order; the condition involving both forward-driving and trajectory-following forces was best.

4.6.4. Effort

Figure 19 shows the ratings of effort for each of the three conditions. The Shapiro–Wilk test showed that not all conditions satisfied the assumption of normality. The Kruskal–Wallis test showed a significant difference ( p < 0.001 ) among the three conditions. The post-hoc analysis using the Mann–Whitney U test with Bonferroni correction showed significant differences between NF and TF ( p < 0.01 ), between NF and TF+FD ( p < 0.001 ), and between TF and TF+FD ( p < 0.01 ). Therefore, TF+FD, TF, and NF tended to have the lowest effort ratings, in that order; the condition involving both forward-driving and trajectory-following forces was best.

4.6.5. Achievement

Figure 20 shows the ratings of achievement for each of the three conditions. The Shapiro–Wilk test showed that not all conditions satisfied the assumption of normality. The Kruskal–Wallis test showed a significant difference ( p < 0.05 ) among the three conditions. However, the post-hoc analysis using the Mann–Whitney U test with Bonferroni correction showed no significant differences.

4.6.6. Operation Difficulty

Figure 21 shows the ratings of operation difficulty for each of the three conditions. The Shapiro–Wilk test showed that not all conditions satisfied the assumption of normality. The Kruskal–Wallis test showed a significant difference ( p < 0.01 ) among the three conditions. The post-hoc analysis using the Mann–Whitney U test with Bonferroni correction showed significant differences between NF and TF ( p < 0.01 ) and between NF and TF+FD ( p < 0.05 ).

4.6.7. Task Difficulty

Figure 22 shows the ratings of task difficulty for each of the three conditions. The Shapiro–Wilk test showed that not all conditions satisfied the assumption of normality. The Kruskal–Wallis test showed a significant difference ( p < 0.05 ) among the three conditions. However, the post-hoc analysis using the Mann–Whitney U test with Bonferroni correction showed no significant differences.

5. Discussion

5.1. Effectiveness of Method Based on Trajectory Similarity

Mainly on the basis of the results of Section 3.8, we discuss the effects of the method based on trajectory similarity.
The conventional method successfully determined a desired trajectory along the main trajectory when target positions were pre-specified. However, for secondary trajectories, where the provided target position did not correspond to the actual task target, the system erroneously treated the given target as the desired trajectory. This indicates that, despite pre-learning various trajectories, the system cannot select an appropriate desired trajectory without explicitly specified target positions for each task.
In contrast, the proposed method based on trajectory similarity effectively determined desired trajectories for both main and secondary trajectories without requiring pre-specified target positions. By leveraging pre-learned trajectory data, the proposed system autonomously identified suitable trajectories during task execution, eliminating the dependency on predefined target positions.
Regarding subjective evaluation, the conventional method worsened “mental workload”, “physical workload”, “effort”, and “operation difficulty” when navigating to the irregular target goal compared with the no force guidance condition. On the other hand, the proposed method based on trajectory similarity showed no such negative results. In addition, the proposed method decreased “mental workload” when navigating to the regular goal and “physical workload” when navigating to the irregular goal, compared with the no force guidance condition. Although other questionnaire items did not reveal statistically significant differences, many favorable trends were observed with the proposed method compared with the no force guidance condition and the conventional method.

5.2. Effectiveness of Forward-Driving Force Guidance

Mainly on the basis of the results of Section 4, we discuss the effects of forward-driving force guidance.
As shown in Section 4.6.1, the proposed method effectively reduced task completion time. Furthermore, the condition with forward-driving force yielded a further reduction compared to the condition without it, indicating that forward force assistance is beneficial for shortening task completion time.
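As an illustration of how trajectory-following and forward-driving components might be combined, the sketch below pulls the interface toward the nearest point of the desired trajectory and adds a push along the local tangent. The gains and the finite-difference tangent estimate are assumptions for illustration; they do not reproduce the exact controller of the proposed system.

```python
import numpy as np

def guidance_force(pos, desired, k_follow=50.0, k_forward=5.0):
    """Compute a 3-DoF guidance force: a trajectory-following component toward
    the nearest point of the desired trajectory plus a forward-driving
    component along the local tangent. Gains are illustrative assumptions."""
    d = np.linalg.norm(desired - pos, axis=1)
    i = int(np.argmin(d))
    f_follow = k_follow * (desired[i] - pos)
    # tangent at the nearest point, estimated by central differences
    tangent = desired[min(i + 1, len(desired) - 1)] - desired[max(i - 1, 0)]
    norm = np.linalg.norm(tangent)
    f_forward = k_forward * tangent / norm if norm > 0 else np.zeros(3)
    return f_follow + f_forward
```

Setting `k_forward = 0` recovers a pure trajectory-following force, which corresponds to the condition without forward-driving assistance.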
In terms of subjective evaluation through the questionnaire, overall, the presence of forward-driving force assistance resulted in a reduction in “physical workload” and “effort” compared to no forward-driving force.
Based on these findings, it can be concluded that force feedback with forward-driving force guidance can be effective in complex teleoperation tasks.

5.3. Segmentation of Recorded Trajectories

In this study, for the preparation of recorded trajectories, the unit of a recorded trajectory was defined as the motion trajectory from the start to the end of object grasping in a pick-and-place task. During recording, the button operation of the operator’s local interface was detected, and the recorded trajectory was determined based on the timing of object grasping initiation and completion. This recording method enables accurate recording of trajectories for performing pick-and-place tasks.
However, in realistic teleoperation scenarios, the operator’s actions would involve not only pick-and-place operations but also various trajectories, such as returning to the initial position after placing the grasped object. Therefore, recorded trajectories should ideally be generated not only from the limited interval between grasping initiation and completion but from all of the operator’s control actions, so that the desired trajectory for the current task can be defined.
Regarding the generation of recorded trajectories from all the operator’s control actions, trajectory segmentation is considered to be effective. Bashir et al. [37] proposed a novel classification algorithm using hidden Markov models to recognize object activities based on the motion trajectories of objects. By utilizing a similar classification algorithm, it would be possible to classify each trajectory of the operator’s actions into different tasks. Then, from the classified trajectories, it would be possible to generate recorded trajectories that can be used to determine the desired trajectory, enabling the generation of recorded trajectories without the need for detecting the timing of task initiation and completion. This approach would allow simultaneous assistance for various tasks included in the operator’s control actions.
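As a much simpler stand-in for such HMM-based segmentation, the heuristic sketch below splits a continuous motion stream into candidate recorded trajectories at low-velocity dwell points. The thresholds and the dwell criterion are illustrative assumptions only, not a substitute for the classification algorithm of Bashir et al.

```python
import numpy as np

def segment_by_dwell(traj, dt=0.01, v_thresh=0.05, min_len=10):
    """Split a continuous (T, 3) motion stream into candidate recorded
    trajectories wherever the speed drops below v_thresh (a heuristic
    stand-in for HMM-based activity segmentation)."""
    v = np.linalg.norm(np.diff(traj, axis=0), axis=1) / dt
    moving = v > v_thresh
    segments, start = [], None
    for i, m in enumerate(moving):
        if m and start is None:
            start = i                      # motion begins
        elif not m and start is not None:
            if i - start >= min_len:       # discard very short bursts
                segments.append(traj[start:i + 1])
            start = None
    if start is not None and len(moving) - start >= min_len:
        segments.append(traj[start:])
    return segments
```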

5.4. Limitation of Proposed Method

In this study, the similarity of trajectories was calculated on purely geometric trajectories, without considering the time component. In other words, fast and slow portions of a trajectory are not distinguished. An advantage of this choice is that it mitigates timing differences between individuals and between trials. The disadvantage is that velocity differences within a trajectory cannot be used for guidance. If it is necessary to distinguish between slowly and quickly guided regions, it may be better to include a time element in the similarity evaluation.
The local interface used in this study is capable of providing force feedback in six directions: three translational axes and three rotational axes. In this study, the assistance forces were limited to translational directions, which proved effective for tasks where position accuracy is crucial, such as pick-and-place tasks. However, for tasks such as peg-in-hole assembly or excavation with construction machinery, where orientation is important, rotational assistance forces may be beneficial. In the future, by expanding the method to include force feedback in the rotational directions and conducting experiments that provide assistance in all six directions (three translational and three rotational), it would be possible to verify whether the rotational components of force feedback are effective in reducing task load. However, it should be noted that in the calculation of trajectory similarity, the distance uses the vector norm of position P . If we instead use X = [ P , ϕ ] in place of P , where X including orientation ϕ is a 6-dimensional vector, we need to be careful with the definition of the norm since X is a vector with different units.
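One common way to handle this unit mismatch is a weighted norm with a characteristic length ρ (in m/rad) that converts orientation error into an equivalent position error. The sketch below assumes this approach; the value of ρ is task-dependent and purely illustrative.

```python
import numpy as np

def pose_distance(x1, x2, rho=0.1):
    """Distance between 6-D poses X = [P, phi] (position in metres,
    orientation in radians), using a characteristic length rho (m/rad)
    to make the two units commensurable. rho = 0.1 is an illustrative
    assumption, not a value from the proposed system."""
    dp = np.asarray(x1[:3], dtype=float) - np.asarray(x2[:3], dtype=float)
    dphi = np.asarray(x1[3:], dtype=float) - np.asarray(x2[3:], dtype=float)
    return float(np.sqrt(np.dot(dp, dp) + rho**2 * np.dot(dphi, dphi)))
```

With ρ → 0 the metric reduces to the purely translational norm of P used in the current system.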
An additional issue is the difference between velocity control and position control. When operating in a large workspace, velocity and position control may be used together [38]. The current system is designed for position control, and adapting it to velocity control remains a challenge.
The current system relies on participant-specific data for training, requiring data collection for each individual. This approach ensures tailored assistance but poses a limitation in terms of scalability and efficiency. Whether the system can provide similar support using data collected from other participants remains unexplored. If such cross-user data utilization proves feasible, the data collection phase could be significantly streamlined, enabling broader applicability of the system with reduced setup time. Future research should investigate the generalizability of the system across multiple participants and evaluate the trade-offs between individual-specific and generalized data utilization.
In this study, subjective evaluations of effort and physical workload were collected through participant questionnaires. While this approach provides valuable insights into individual perceptions, it is inherently qualitative and may be influenced by participant bias or variability in interpretation. To complement these subjective assessments, future work could explore quantitative evaluation methods, such as physiological measurements (e.g., heart rate variability, muscle activity via electromyography). Incorporating objective metrics would not only enhance the reliability of workload assessment but also provide deeper insights into the underlying physical and cognitive demands of the tasks.

6. Conclusions

In this study, we proposed a teleoperation guidance method that calculates the desired trajectory based on the similarity between recorded trajectories and the current trajectory and provides 3-DoF trajectory-following and forward-driving forces. We conducted Experiments 1 and 2 in a virtual environment to validate the proposed method. The proposed method does not require pre-specification of the target reaching position, which reduces the complexity of the teleoperation process and allows flexible adaptation to changing environmental conditions and target positions. Experiment 1 showed that the proposed method was superior in terms of “mental workload”, “physical workload”, “effort”, and “operation difficulty” compared to a conventional method that does not consider trajectory similarity. In addition, the proposed method was superior in terms of “mental workload” and “physical workload” compared with the no force guidance condition. Experiment 2 showed that the presence of forward-driving force effectively reduced task completion time and the “physical workload” and “effort” subjective ratings compared to no forward-driving force. Based on these findings, it can be concluded that the proposed method with trajectory-following and forward-driving forces can be effective in complex teleoperation tasks.

Author Contributions

Conceptualization, H.N. and T.N.; methodology, H.N. and T.N.; software, H.N. and T.N.; validation, H.N. and T.N.; formal analysis, H.N. and T.N.; investigation, H.N. and T.N.; resources, H.N., Y.T. and Y.Y.; data curation, H.N. and T.N.; writing—original draft preparation, H.N. and T.N.; writing—review and editing, H.N., Y.T. and Y.Y.; visualization, H.N. and T.N.; supervision, H.N. and Y.Y.; project administration, H.N.; funding acquisition, Y.Y. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by Komatsu Ltd.

Institutional Review Board Statement

All subjects gave their informed consent for inclusion before they participated in the study. The study was conducted according to the guidelines of the Declaration of Helsinki, and approved by the ethical review board of the faculty of engineering of Kobe University (protocol code 04-51 and 13 January 2023).

Informed Consent Statement

Informed consent was obtained from all subjects involved in the study.

Data Availability Statement

The original contributions presented in this study are included in the article. Further inquiries can be directed to the corresponding author.

Acknowledgments

This research was conducted in collaboration with Osaka University and Komatsu Ltd.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Sheridan, T.B. Teleoperation, telerobotics and telepresence: A progress report. Control Eng. Pract. 1995, 3, 205–214. [Google Scholar] [CrossRef]
  2. Lee, J.S.; Ham, Y.; Park, H.; Kim, J. Challenges, tasks, and opportunities in teleoperation of excavator toward human-in-the-loop construction automation. Autom. Constr. 2022, 135, 104119. [Google Scholar] [CrossRef]
  3. Kawatsuma, S.; Fukushima, M.; Okada, T. Emergency response by robots to Fukushima-Daiichi accident: Summary and lessons learned. Ind. Robot. Int. J. 2012, 39, 428–435. [Google Scholar] [CrossRef]
  4. Katyal, K.D.; Brown, C.Y.; Hechtman, S.A.; Para, M.P.; McGee, T.G.; Wolfe, K.C.; Murphy, R.J.; Kutzer, M.D.; Tunstel, E.W.; McLoughlin, M.P.; et al. Approaches to robotic teleoperation in a disaster scenario: From supervised autonomy to direct control. In Proceedings of the 2014 IEEE/RSJ International Conference on Intelligent Robots and Systems, Chicago, IL, USA, 14–18 September 2014; pp. 1874–1881. [Google Scholar]
  5. Hiramatsu, Y.; Aono, T.; Nishio, M. Disaster restoration work for the eruption of Mt Usuzan using an unmanned construction system. Adv. Robot. 2002, 16, 505–508. [Google Scholar] [CrossRef]
  6. Belousov, I.R.; Tan, J.; Clapworthy, G.J. Teleoperation and Java3D visualization of a robot manipulator over the World Wide Web. In Proceedings of the 1999 IEEE International Conference on Information Visualization (Cat. No. PR00210), London, UK, 14–16 July 1999; pp. 543–548. [Google Scholar]
  7. Monferrer, A.; Bonyuet, D. Cooperative robot teleoperation through virtual reality interfaces. In Proceedings of the Sixth International Conference on Information Visualisation, London, UK, 10–12 July 2002; pp. 243–248. [Google Scholar]
  8. Iwataki, S.; Fujii, H.; Moro, A.; Yamashita, A.; Asama, H.; Yoshinada, H. Visualization of the surrounding environment and operational part in a 3DCG model for the teleoperation of construction machines. In Proceedings of the 2015 IEEE/SICE International Symposium on System Integration (SII), Nagoya, Japan, 11–13 December 2015; pp. 81–87. [Google Scholar]
  9. Kamezaki, M.; Yang, J.; Iwata, H.; Sugano, S. Visibility enhancement using autonomous multicamera controls with situational role assignment for teleoperated work machines. J. Field Robot. 2016, 33, 802–824. [Google Scholar] [CrossRef]
  10. Kamezaki, M.; Miyata, M.; Sugano, S. Video presentation based on multiple-flying camera to provide continuous and complementary images for teleoperation. Autom. Constr. 2024, 159, 105285. [Google Scholar] [CrossRef]
  11. Durlach, N. Auditory localization in teleoperator and virtual environment systems: Ideas, issues, and problems. Perception 1991, 20, 543–554. [Google Scholar] [CrossRef] [PubMed]
  12. Hannaford, B.; Okamura, A.M. Haptics. In Springer Handbook of Robotics; Springer: Cham, Switzerland, 2016; pp. 1063–1084. [Google Scholar]
  13. Choi, S.; Kuchenbecker, K.J. Vibrotactile display: Perception, technology, and applications. Proc. IEEE 2012, 101, 2093–2104. [Google Scholar] [CrossRef]
  14. Yamada, H. Master-slave control for construction robot teleoperation: Application of a velocity control with a force feedback model. J. Robot. Mechatron. 2007, 19, 60–67. [Google Scholar] [CrossRef]
  15. McMahan, W.; Gewirtz, J.; Standish, D.; Martin, P.; Kunkel, J.A.; Lilavois, M.; Wedmid, A.; Lee, D.I.; Kuchenbecker, K.J. Tool contact acceleration feedback for telerobotic surgery. IEEE Trans. Haptics 2011, 4, 210–220. [Google Scholar] [CrossRef]
  16. Nagano, H.; Takenouchi, H.; Cao, N.; Konyo, M.; Tadokoro, S. Tactile feedback system of high-frequency vibration signals for supporting delicate teleoperation of construction robots. Adv. Robot. 2020, 34, 730–743. [Google Scholar] [CrossRef]
  17. Takahashi, M.; Nagano, H.; Tazaki, Y.; Yokokohji, Y. Effective haptic feedback type for robot-mediated material discrimination depending on target properties. Front. Virtual Real. 2023, 4, 1070739. [Google Scholar] [CrossRef]
  18. Gong, Y.; Mat Husin, H.; Erol, E.; Ortenzi, V.; Kuchenbecker, K.J. AiroTouch: Enhancing telerobotic assembly through naturalistic haptic feedback of tool vibrations. Front. Robot. AI 2024, 11, 1355205. [Google Scholar] [CrossRef] [PubMed]
  19. Crandall, J.W.; Goodrich, M.A. Characterizing efficiency of human robot interaction: A case study of shared-control teleoperation. In Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems, Lausanne, Switzerland, 30 September–4 October 2002; Volume 2, pp. 1290–1295. [Google Scholar]
  20. Marayong, P.; Okamura, A.M. Speed-accuracy characteristics of human-machine cooperative manipulation using virtual fixtures with variable admittance. Hum. Factors 2004, 46, 518–532. [Google Scholar] [CrossRef] [PubMed]
  21. Abbott, J.J.; Marayong, P.; Okamura, A.M. Haptic virtual fixtures for robot-assisted manipulation. In Robotics Research: Results of the 12th International Symposium ISRR; Springer: Berlin/Heidelberg, Germany, 2005; pp. 49–64. [Google Scholar]
  22. Lam, T.M.; Boschloo, H.W.; Mulder, M.; Van Paassen, M.M. Artificial force field for haptic feedback in UAV teleoperation. IEEE Trans. Syst. Man Cybern.-Part A Syst. Hum. 2009, 39, 1316–1330. [Google Scholar] [CrossRef]
  23. Boessenkool, H.; Abbink, D.A.; Heemskerk, C.J.; van der Helm, F.C.; Wildenbeest, J.G. A task-specific analysis of the benefit of haptic shared control during telemanipulation. IEEE Trans. Haptics 2012, 6, 2–12. [Google Scholar] [CrossRef]
  24. Salazar, J.; Okabe, K.; Hirata, Y. Path-following guidance using phantom sensation based vibrotactile cues around the wrist. IEEE Robot. Autom. Lett. 2018, 3, 2485–2492. [Google Scholar] [CrossRef]
  25. Tawa, S.; Nagano, H.; Tazaki, Y.; Yokokohji, Y. Three-Dimensional Position Presentation Via Head and Waist Vibrotactile Arrays. IEEE Trans. Haptics 2023, 17, 319–333. [Google Scholar] [CrossRef] [PubMed]
  26. Argall, B.D.; Chernova, S.; Veloso, M.; Browning, B. A survey of robot learning from demonstration. Robot. Auton. Syst. 2009, 57, 469–483. [Google Scholar] [CrossRef]
  27. Tanwani, A.K.; Calinon, S. A generative model for intention recognition and manipulation assistance in teleoperation. In Proceedings of the 2017 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Vancouver, BC, Canada, 24–28 September 2017; pp. 43–50. [Google Scholar]
  28. Abi-Farraj, F.; Osa, T.; Peters, N.P.J.; Neumann, G.; Giordano, P.R. A learning-based shared control architecture for interactive task execution. In Proceedings of the 2017 IEEE International Conference on Robotics and Automation (ICRA), Singapore, 29 May–3 June 2017; pp. 329–335. [Google Scholar]
  29. Zeestraten, M.J.; Havoutis, I.; Calinon, S. Programming by demonstration for shared control with an application in teleoperation. IEEE Robot. Autom. Lett. 2018, 3, 1848–1855. [Google Scholar] [CrossRef]
  30. Lu, Z.; Si, W.; Wang, N.; Yang, C. Dynamic Movement Primitives-Based Human Action Prediction and Shared Control for Bilateral Robot Teleoperation. IEEE Trans. Ind. Electron. 2024, 71, 16654–16663. [Google Scholar] [CrossRef]
  31. Turco, E.; Castellani, C.; Bo, V.; Pacchierotti, C.; Prattichizzo, D.; Baldi, T.L. Reducing cognitive load in teleoperating swarms of robots through a data-driven shared control approach. In Proceedings of the 2024 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Abu Dhabi, United Arab Emirates, 14–18 October 2024; pp. 4731–4738. [Google Scholar]
  32. Selvaggio, M.; Cognetti, M.; Nikolaidis, S.; Ivaldi, S.; Siciliano, B. Autonomy in physical human-robot interaction: A brief survey. IEEE Robot. Autom. Lett. 2021, 6, 7989–7996. [Google Scholar] [CrossRef]
  33. Osa, T.; Sugita, N.; Mitsuishi, M. Online trajectory planning and force control for automation of surgical tasks. IEEE Trans. Autom. Sci. Eng. 2017, 15, 675–691. [Google Scholar] [CrossRef]
  34. Rozo, L.; Calinon, S.; Caldwell, D.G.; Jimenez, P.; Torras, C. Learning physical collaborative robot behaviors from human demonstrations. IEEE Trans. Robot. 2016, 32, 513–527. [Google Scholar] [CrossRef]
  35. Atkeson, C.G.; Moore, A.W.; Schaal, S. Locally weighted learning for control. Lazy Learn. 1997, 11, 75–113. [Google Scholar]
  36. Hart, S.G.; Staveland, L.E. Development of NASA-TLX (Task Load Index): Results of empirical and theoretical research. In Advances in Psychology; Elsevier: Amsterdam, The Netherlands, 1988; Volume 52, pp. 139–183. [Google Scholar]
  37. Bashir, F.I.; Khokhar, A.A.; Schonfeld, D. Object trajectory-based activity classification and recognition using hidden Markov models. IEEE Trans. Image Process. 2007, 16, 1912–1919. [Google Scholar] [CrossRef]
  38. Nagate, T.; Nagano, H.; Tazaki, Y.; Yokokohji, Y. Subtask-Based Usability Evaluation of Control Interfaces for Teleoperated Excavation Tasks. Robotics 2024, 13, 163. [Google Scholar] [CrossRef]
Figure 1. Calculation of desired trajectory. (a-1)–(a-3) and (b-1)–(b-3) are different examples. (a-1),(b-1) Pre-recorded trajectories and current trajectory. (a-2),(b-2) Resampled trajectories for calculating similarity. (a-3),(b-3) Target trajectory including positions and variance.
Figure 3. Experiment environment.
Figure 4. Time lapse (a→b→c→d) of an example pick-and-place task in the virtual environment of Experiment 1. (a) Approaching the start position for grasping the target object. (b,c) Passing through waypoints. (d) Bringing the object to the target area as directed by the system during the experiment.
Figure 5. Trajectories in conventional method (CNV) while heading towards area B.
Figure 6. Trajectories in proposed method (TF+FD) while heading towards area A.
Figure 7. Trajectories in proposed method (TF+FD) while heading towards area B.
Figure 8. Task completion time of Experiment 1.
Figure 9. Mental workload of Experiment 1. *: p < 0.05, **: p < 0.01.
Figure 10. Physical workload of Experiment 1. *: p < 0.05, ***: p < 0.001.
Figure 11. Effort of Experiment 1. **: p < 0.01.
Figure 12. Achievement of Experiment 1.
Figure 13. Operation difficulty of Experiment 1. ***: p < 0.001.
Figure 14. Task difficulty of Experiment 1.
Figure 15. Time lapse (a→b→c→d) of an example pick-and-place task in the virtual environment of Experiment 2. (a) Approaching the start position for grasping the target object. (b,c) Passing through waypoints while avoiding contact with obstacles. (d) Placing the target object at the end position.
Figure 16. Task completion time of Experiment 2. ***: p < 0.001.
Figure 17. Mental workload of Experiment 2. *: p < 0.05.
Figure 18. Physical workload of Experiment 2. *: p < 0.05, ***: p < 0.001.
Figure 19. Effort of Experiment 2. **: p < 0.01, ***: p < 0.001.
Figure 20. Achievement of Experiment 2.
Figure 21. Operation difficulty of Experiment 2. *: p < 0.05, **: p < 0.01.
Figure 22. Task difficulty of Experiment 2.
Table 1. Experiment 1 questionnaire. Each item was rated on a 7-point scale with the anchors shown in parentheses.
Mental workload (Low/High): How much mental workload was involved?
Physical workload (Low/High): How much physical workload was involved?
Effort (Low/High): How much effort was required?
Achievement (Low/High): How well did you accomplish the task?
Operation difficulty (Easy/Difficult): How difficult was the operation?
Task difficulty (Easy/Difficult): How difficult was the task?
