Article

Using a Guidance Virtual Fixture on a Soft Robot to Improve Ureteroscopy Procedures in a Phantom

1 Department of Biomechanical Engineering, Delft University of Technology, Mekelweg 5, 2628 CD Delft, The Netherlands
2 Department of Electronics, Information and Bioengineering, Politecnico di Milano, Leonardo da Vinci 32, 20133 Milano, Italy
* Author to whom correspondence should be addressed.
Robotics 2024, 13(9), 140; https://doi.org/10.3390/robotics13090140
Submission received: 16 August 2024 / Revised: 13 September 2024 / Accepted: 16 September 2024 / Published: 18 September 2024
(This article belongs to the Section Soft Robotics)

Abstract

Manipulating a flexible ureteroscope is difficult because of its bendable body and the hand–eye coordination problems it creates, especially when exploring the lower pole of the kidney. Although robotic interventions have been adopted in various clinical scenarios, they are rarely used in ureteroscopy. This study proposes a teleoperation system consisting of a soft robotic endoscope together with a Guidance Virtual Fixture (GVF) to help users explore the kidney’s lower pole. The soft robotic arm is a cable-driven, 3D-printed design with a helicoid structure. The GVF is constructed dynamically from the video stream of an endoscopic camera and, through a haptic controller, provides force feedback that guides the user along a trajectory. In the user study, participants were asked to follow trajectories while the soft robotic arm was in a retroflex posture. The results suggest that the GVF can reduce errors in trajectory tracking tasks once users have received proper training and gained experience. Based on the NASA Task Load Index questionnaires, most participants preferred having the GVF enabled when manipulating the robotic arm. In conclusion, the results demonstrate the benefits and potential of using a robotic arm with a GVF. More research is needed to investigate the effectiveness of GVFs and the robotic endoscope in ureteroscopic procedures.

1. Introduction

Ureteroscopy is an endoscopic intervention in the urinary system. Ureteroscopists perform it to diagnose carcinoma in the urinary tract and to remove kidney stones. Ureteroscopy is challenging and requires long, intensive training for two main reasons. The first lies in the instrument itself: to reach the upper urinary tract, the ureteroscopist uses a flexible ureteroscope with a long, compliant body, which makes it difficult to steer inside the human body. The second is the hand–eye coordination problem, which arises because, when steering through a small orifice, the pose of the ureteroscope’s handle is not aligned with the pose of its front tip. Furthermore, ureteroscopists can only see the view from the endoscopic camera attached to the ureteroscope, losing direct sight of the endoscope’s tip. One of the most challenging tasks in ureteroscopy is exploring the kidney’s lower pole calyx, which can only be reached with the tip of the ureteroscope in a retroflex position. To improve urologists’ performance, this study aims to develop a teleoperation system with a soft endoscopic robot that uses a Virtual Fixture (VF) to guide urologists in navigation tasks inside the calyxes.

1.1. Robotic Solution in Ureteroscopy

Robotic solutions have been developed and have shown great potential in various clinical scenarios. One of the best-known surgical robots, the da Vinci Surgical System, has been applied in urology for almost 20 years. Compared to this achievement, however, continuum robotic solutions for ureteroscopy are still in their early stages [1,2]. The development of continuum robots for endoscopic procedures and minimally invasive surgery has accelerated in recent years due to the need to navigate deep inside the human body safely [3]. These robots are inspired by the manual, flexible, steerable instruments used in current endoscopic procedures, and various robotic mechanisms and computer-assisted functionalities have been studied [2,3,4]. Robotic ureteroscopy started with modified master–follower devices, such as the Hansen device, which was initially used in cardiology interventions. The Sensei–Magellan system was introduced and evaluated in a clinical trial with 18 patients in 2008 [5]. However, this project was discontinued due to the difficulty of designing the scope. Roboflex Avicenna (ELMED, Ankara, Turkey) was then introduced in 2011 and obtained a CE certificate in 2013 [6]. At the time of writing, it is the only robotic solution for ureteroscopic procedures on the market, and reports show that it improves ergonomics during long-lasting ureteroscopies [7].
A newer system, easyUretero (ROEN Surgical Inc.), was presented in 2022 [8]; however, it has only been tested in a test bed mimicking kidney calyxes. Talari et al. used a “snap-in” mechanism to build a robotic ureteroscope [9] and proposed a framework that localizes the tip of the scope by combining an electromagnetic (EM) sensor with pre-operative Magnetic Resonance (MR) images, reducing the need for additional fluoroscopy.
Various studies have addressed navigation issues at different levels when robotized instruments operate in narrow, tube-like structures. For example, researchers have proposed different robot arm mechanisms for transoral robotic surgery [4]. Duan et al. [10] implemented a path-planning system with an electromagnetic sensor. Furthermore, Mo et al. [11] raised the Level of Autonomy of their robotic endoscopic system to task autonomy, allowing the robot to follow trajectories by itself.
On the other hand, force feedback is a desired function in robotic endoscopic procedures because, when an instrument is robotized, users lose the direct feedback from the instrument [12,13]. In this light, Shu et al. developed a robotic system with force sensors installed at the joints of the follower robots [14]. Combined with a neural-network-based method, the system can estimate the interaction forces and exert them on the user through two haptic devices. To the best of the authors’ knowledge, this is the first and only example of interactive force feedback in a robotic system for ureteroscopy.

1.2. Virtual Fixture in Medical Robots

A Virtual Fixture (VF) is widely used in teleoperation systems to provide extra sensory information, such as force feedback, about the remote operation site. This force helps the human operator achieve better-than-human performance when manipulating robots [15]. VFs have been applied in a variety of fields, including medical applications such as robot-assisted catheterization and minimally invasive robotic surgery [15,16,17,18,19,20,21,22,23,24]. To create a VF at the remote operation site, the system must have a perception of the environment, and there are two main ways to obtain the required geometry in clinical scenarios. The first is to use pre-operative images of the operating site. Park et al. [25] registered the patient’s anatomy to the image data and used a VF to constrain the motions of the surgical robot, showing faster and more precise dissection than with conventional techniques. He et al. [19,22] used a point cloud model built from a CT image sequence of the nasal cavity to create a Forbidden Region VF (FRVF) boundary. They also analyzed the motion constraints at different stages to create a spatial-curve-based Guidance VF (GVF) and a hyperbolic-plane-based FRVF. Their results show that the GVF can reduce the path-tracking error and that the FRVF can avoid collisions during nasal examinations. The second way is to use intra-operative imaging modalities. Park et al. used a camera to simulate intra-operative X-ray fluoroscopy in [18]; they segmented the vessel wall from the images, identified the centerline of the vessel phantom, and used it to create a forbidden-region border. Endoscopic camera images can also be used to construct VFs dynamically. Moccia et al. used intra-operative endoscopic videos to create virtual fixtures. In [20], they used stereo endoscopic images from the da Vinci Research Kit (dVRK) to generate a desired 3D path for polyp dissection; a VF generated from this path was used to guide the follower manipulator, and the results showed that the absolute error was significantly reduced. In [23], they also constructed a dynamic FRVF around two surgical tools by fusing vision data and robot kinematics to avoid tool collisions. Other works use information beyond image data; for example, Marinho et al. defined their dynamic constraints and virtual fixture regions using the kinematics of the robotic arm and joint information [21,24]. However, research on using virtual fixtures with flexible instruments is still lacking, and there is no VF application focusing on ureteroscopic procedures.
In summary, force feedback is highly requested by endoscopists performing robotic endoscopic procedures, as it provides sensory information that helps users avoid damaging the surrounding soft tissue. A VF, in turn, can provide extra sensory information to guide movement or keep the robot out of dangerous regions using an endoscopic camera. In this light, this research proposes using a GVF to help users navigate a robot-assisted flexible endoscope when it is in a retroflex posture inside a partly confined area, e.g., the kidney’s lower pole. The remainder of the article is structured as follows: Section 2 describes the soft robotic endoscope used in this research to simulate a robotized ureteroscope, the teleoperation system with the dynamically constructed GVF, the workflow of the user study, the experimental phantom set-up, and the performance metrics for the experiments. Section 3 presents the results of the user study, Section 4 discusses the findings, and Section 5 draws the conclusions.

2. Materials and Methods

This section first describes the robotic endoscope used in this study, consisting of a 3D-printed endoscopic arm, its actuation system, and the GVF in the teleoperation system. Then, the protocols of the validation experiments with a user study are detailed.

2.1. Robotic Endoscope System

A robotic endoscope system, the ATLAScope, consisting of a soft robotic arm, was built. The structure of the robotic arm is inspired by the HelicoFlex [26] and is manufactured using 3D printing. The arm’s body has an outer diameter of 5 mm and a length of 90 mm, of which a 70 mm long steerable segment has a helicoid structure. Along the body, four cable tunnels allow the driving cables to traverse the body and connect the tip of the arm antagonistically to the driving pulleys. Two closed-loop stepper motors with gearboxes (11HS12-0674D-PG27-E22-300, STEPPERONLINE) are connected to these pulleys to actuate the driving cables. The motors and pulleys are mounted on a linear stage that provides an extra linear motion. The robotic endoscope therefore has three degrees of freedom (DOFs): the arm can bend in two perpendicular directions and move forward and backward. The endoscope is equipped with a miniature RGB camera, a NanEye 2D (NE2DRGBV160F242M, OSRAM), with a resolution of 250 by 250 pixels and a focal length of 10 mm. The whole set-up of the ATLAScope is depicted in Figure 1.

2.2. Teleoperation System

The teleoperation system consists of three major modules: a leader controller, a follower robot, and a communication channel. A schematic of the proposed teleoperation system is shown in Figure 2.
The ATLAScope, serving as the follower robot, is connected to a commercially available Omni Phantom haptic controller (Sensable Technologies) via a PC running the Robot Operating System (ROS). The endoscopic camera and the image-processing algorithm run on a separate PC, connected through ROS via an Ethernet bus. The teleoperation system has two goals. The first is to calculate and exert the guidance force in the user’s controller space $\Omega_C$. The second is to steer the robotic arm in its task space according to the user’s commands in $\Omega_C$. Because the user steers the robotic arm based on the endoscopic video stream, which lies in the image space $\Omega_I$, $\Omega_I$ is regarded as the task space of the teleoperation system. It is therefore essential to define the transformation between $\Omega_C$ and $\Omega_I$. Suppose a desired target movement $y_i \in \mathbb{R}^2$ in $\Omega_I$ is produced when the user moves the leader device by $u_c \in \mathbb{R}^3$ in $\Omega_C$. Their relationship is given by:
$$y_i = K u_m, \tag{1}$$
where $u_m \in \mathbb{R}^2$ is the projection of $u_c$ onto the $xy$ plane of $\Omega_C$ and $K$ is a two-by-two scaling transformation matrix. If the coefficients $K_{ii}$, $i = 1, 2$, are too large, the camera view changes rapidly in response to small movements of the haptic controller; conversely, if they are too small, the user has to make larger arm movements. The matrix $K$ was determined heuristically so that the user can reach the edge of the camera space using only wrist movements, easing the physical burden. With Equation (1), the user commands in the controller space of the haptic controller with the same orientation as in the image space. Furthermore, a button on the haptic controller serves as a clutch, setting $K$ to a zero matrix while the button is pressed. With this clutch, the controller can be repositioned freely without steering the tip of the robotic arm, effectively increasing the workspace of the robotic arm. The relationship between $\Omega_C$ and $\Omega_I$ is illustrated in Figure 3.
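As a concrete illustration of this mapping, the following minimal Python sketch (with illustrative, untuned values for $K$; the actual matrix was tuned heuristically as described above) shows how a controller displacement is projected and scaled into an image-space target and how the clutch suppresses steering:

```python
import numpy as np

# Illustrative scaling matrix; the real K was tuned heuristically for the set-up.
K = np.diag([2.0, 2.0])

def image_target_motion(u_c, clutch_pressed):
    """Map a 3D controller displacement u_c to a 2D image-space target y_i (Eq. (1))."""
    u_m = np.asarray(u_c, dtype=float)[:2]   # projection onto the x-y plane of the controller space
    if clutch_pressed:                       # clutch: K becomes a zero matrix, so no steering occurs
        return np.zeros(2)
    return K @ u_m                           # y_i = K u_m

# A small wrist movement steers the target; with the clutch pressed it does not.
print(image_target_motion([0.5, -0.2, 0.1], clutch_pressed=False))  # -> [ 1.  -0.4]
print(image_target_motion([0.5, -0.2, 0.1], clutch_pressed=True))   # -> [0. 0.]
```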
Next, to steer the robotic arm based on $y_i$, the corresponding actuation movement $\theta_a$ in the actuation space $\Omega_A$ has to be determined so that the robotic arm reaches a new position $t_e$ in the end-effector space $\Omega_E$. An image Jacobian matrix $J$ maps the information in $\Omega_I$ from the endoscopic video to the desired movement in $\Omega_A$. It has the form
$$J = \frac{\mathrm{d}f(x)}{\mathrm{d}x} = \left[ \frac{\partial f(x)}{\partial x_1} \; \cdots \; \frac{\partial f(x)}{\partial x_n} \right], \tag{2}$$
where $f(\cdot)$ contains the image features and $x$ is the actuators’ movement.
Based on previous successful studies [27,28], in which soft robots were controlled without prior knowledge of their kinematic models, this research uses a model-free image Jacobian matrix, $J_{free}$. In this teleoperation system, the dimensions of $\Omega_I$ and $\Omega_A$ are both two; therefore, $J_{free} \in \mathbb{R}^{2 \times 2}$ and can be estimated as
$$J_{free} = \begin{bmatrix} \dfrac{\Delta y_1(\theta)}{\Delta \theta_1} & \dfrac{\Delta y_1(\theta)}{\Delta \theta_2} \\[2ex] \dfrac{\Delta y_2(\theta)}{\Delta \theta_1} & \dfrac{\Delta y_2(\theta)}{\Delta \theta_2} \end{bmatrix} \tag{3}$$
Here, $\Delta y_i(\theta)$ and $\Delta \theta_j$, with $i, j = 1, 2$, are obtained by commanding each actuator individually and recording the resulting changes of the features in the endoscopic image. With $J_{free}$, the actuator velocity $\dot{\theta}_a$ can be derived as
$$\dot{\theta}_a = J_{free}^{\dagger}\, \dot{y}_i, \tag{4}$$
where the symbol † denotes the Moore–Penrose inverse of a Jacobian matrix.
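A minimal sketch of this model-free approach is shown below; `actuate_and_measure` is a hypothetical helper (not part of the described system) that commands a single actuator by a small step and returns the resulting change of the tracked image feature:

```python
import numpy as np

def estimate_jfree(actuate_and_measure, delta_theta=0.05):
    """Estimate the 2x2 model-free image Jacobian of Equation (3) by perturbing each actuator."""
    J = np.zeros((2, 2))
    for j in range(2):
        delta_y = np.asarray(actuate_and_measure(j, delta_theta))  # 2D image-feature change
        J[:, j] = delta_y / delta_theta
    return J

def actuator_velocity(J_free, y_dot_i):
    """Equation (4): actuator velocity from the Moore-Penrose inverse of J_free."""
    return np.linalg.pinv(J_free) @ np.asarray(y_dot_i)
```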
Lastly, the actuator velocity is fed into the low-level controllers of the ATLAScope, which actuate its motors using Proportional–Integral (PI) controllers:
$$v(t) = K_p e(t) + K_i \int_{t_0}^{t} e(\tau)\, \mathrm{d}\tau, \tag{5}$$
where $v(t)$ is the control effort determined by the PI controller to drive the motor, and $e(t)$ is the error between the desired motor displacement and the actual displacement read by the encoder. The coefficients $K_p$ and $K_i$ were first determined by the autotune function of the ROS PID package and then fine-tuned to prevent overshoot while keeping a low settling time; for both motors, they are set to 5.0 and 0.1, respectively.
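In discrete time, such a PI controller can be sketched as follows (the gains match the reported values; the sampling behaviour and anti-windup details of the actual ROS PID node are omitted):

```python
class PIController:
    """Minimal discrete-time PI controller following Equation (5)."""

    def __init__(self, kp=5.0, ki=0.1):
        self.kp, self.ki = kp, ki
        self.integral = 0.0

    def update(self, desired, measured, dt):
        error = desired - measured            # e(t): desired vs. encoder-measured displacement
        self.integral += error * dt           # rectangular approximation of the integral term
        return self.kp * error + self.ki * self.integral  # control effort v(t)
```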

2.3. Dead Zone Compensation

The cable-driven mechanism of the soft robotic arm introduces hysteresis, in particular a dead zone: while a driving cable is within the dead zone, it cannot actuate the soft robotic arm, which degrades the user’s performance when teleoperating the ATLAScope. To address this issue, a dead zone compensation algorithm is implemented as follows.
First, the initial size of the dead zone is estimated by rotating one motor in a single direction. While the cable is inside the dead zone, the image from the endoscopic camera does not change; when the image begins to change, the soft robotic arm has reached the boundary of the dead zone. The motor angle at which the first change occurs is noted as $\delta_i^+$. Next, the motor is rotated in the opposite direction until the image changes again, and this motor angle is noted as $\delta_i^-$. The size of the dead zone of one driving cable, $\delta_i$, is estimated as
$$\delta_i = l\,\|\delta_i^+ - \delta_i^-\|, \quad 0 \le l \le 1, \tag{6}$$
where $l$ is a constant that prevents the dead-zone compensation from overshooting.
When the ATLAScope is in its initial state with motor angles $\theta_i = 0$, the motors are assumed to be positioned in the middle of the dead zone. Therefore, the initial dead-zone boundary at time $t_0$ is set to $S_{DZ}(t_0) \leftarrow [-\delta_i/2, \delta_i/2]$. Whenever $\theta_i$ falls within this range, the commanded rotation speed is $m$ times the original motor velocity $\dot{\theta}$ determined by Equation (4), to help the user rapidly overcome the dead zone. When $\theta_i$ falls outside the dead-zone range, the motor velocity remains unchanged. Since the dead zone occurs when a motor reverses its direction of motion, the range has to be updated according to the current motor direction and angle whenever $\theta_i$ leaves it. If the ATLAScope reads a new angle $\theta_i(t)$ that is larger than the upper boundary of the current dead-zone range at time $t$, the upper boundary becomes the current angle and the lower boundary becomes the current angle minus the full size of the dead zone, $\delta_i$; that is, $S_{DZ}(t) \leftarrow [\theta_i(t) - \delta_i, \theta_i(t)]$. Similarly, when $\theta_i(t)$ is smaller than the lower boundary, the dead zone is updated as $S_{DZ}(t) \leftarrow [\theta_i(t), \theta_i(t) + \delta_i]$. Algorithm 1 details the dead-zone compensation algorithm.
Algorithm 1 Dead zone compensation and update
    θ_i(t_0) ← 0                                      ▹ Initialization of variables
    S_DZ(t_0) ← [−δ_i/2, δ_i/2]
    while true do
        θ_i ← θ_i(t+1)                                ▹ When the motor movements are updated
        if θ_i ∉ S_DZ(t) then
            θ̇_i(t+1) ← J_free† ẏ_d(t+1)
            if θ_i > Max(S_DZ(t)) then
                S_DZ(t+1) ← [θ_i − δ_i, θ_i]          ▹ Update the dead zone boundary
            else if θ_i < Min(S_DZ(t)) then
                S_DZ(t+1) ← [θ_i, θ_i + δ_i]          ▹ Update the dead zone boundary
            end if
        else if θ_i ∈ S_DZ(t) then
            θ̇_i(t+1) ← m × J_free† ẏ_d(t+1)           ▹ Motors move m times faster within the dead zone
            S_DZ(t+1) ← S_DZ(t)
        end if
    end while
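A runnable Python sketch of Algorithm 1 for a single driving cable is given below; the speed-up factor m and the dead-zone size are illustrative, and the nominal velocity (from the Jacobian pseudo-inverse) is passed in as a plain number:

```python
class DeadZoneCompensator:
    """Single-cable sketch of Algorithm 1: speed up inside the dead zone, shift its boundaries outside."""

    def __init__(self, dead_zone_size, m=3.0):
        self.size = dead_zone_size            # estimated dead-zone size delta_i (Eq. (6))
        self.m = m                            # illustrative speed-up factor
        self.lower = -dead_zone_size / 2.0    # motor assumed to start in the middle of the dead zone
        self.upper = dead_zone_size / 2.0

    def compensate(self, theta, nominal_velocity):
        if self.lower <= theta <= self.upper:     # inside the dead zone: move m times faster
            return self.m * nominal_velocity
        if theta > self.upper:                    # outside: shift the boundary to the current angle
            self.lower, self.upper = theta - self.size, theta
        else:
            self.lower, self.upper = theta, theta + self.size
        return nominal_velocity
```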

2.4. Guidance Virtual Fixture

The GVF in the teleoperation system takes the guidance vector $p_i$ in $\Omega_I$ and translates it into force feedback, which is exerted on the user by the haptic controller in $\Omega_C$. When a desired target $r_s$ is detected in the endoscopic image in $\Omega_I$, the guidance vector is defined as
$$p_i = r_s - c, \tag{7}$$
where $c$ is the center coordinate of the image space. How $r_s$ is found in the endoscopic image is discussed in Section 2.5.1. To determine the force exerted on the user by the haptic controller, $p_i$ has to be transformed into $\Omega_C$. Recalling the scaling transformation matrix $K$ in Equation (1), which transforms coordinates from $\Omega_C$ into $\Omega_I$, the vector $p_c$ in $\Omega_C$ can be determined by
$$p_c = K^{-1} p_i. \tag{8}$$
Finally, the guidance force $f_c$ in $\Omega_C$ created by the GVF is defined as
$$f_c = k_G\, p_c, \tag{9}$$
where $k_G$ is the spring constant of the guidance force. $f_c$ is exerted on the user by the haptic controller and guides the user toward the desired target position. To guarantee both the effectiveness of the guidance force and the freedom for the user to move along the route, the spring constant was determined using the soft guidance approach [15]. It was set heuristically based on the feedback of three participants in a simulator pilot study; this amount of force feedback allows users to move freely with little interference while the error is small, and to be guided as the error grows. Throughout this study, $k_G$ is equal to 0.006 N/pixel. The detailed coordinate transformation between $\Omega_I$ and $\Omega_C$ in the GVF is illustrated in Figure 3.
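Putting Equations (7)–(9) together, the guidance force can be sketched as follows (the image centre of the 250 × 250 pixel frame and the illustrative scaling matrix K are assumptions; the spring constant is the value reported above):

```python
import numpy as np

K = np.diag([2.0, 2.0])          # illustrative scaling matrix from Equation (1)
k_G = 0.006                      # spring constant of the guidance force, N/pixel
c = np.array([125.0, 125.0])     # assumed centre of the 250 x 250 pixel endoscopic image

def guidance_force(r_s):
    """Return the guidance force f_c for a detected target point r_s (in pixels)."""
    p_i = np.asarray(r_s, dtype=float) - c   # guidance vector in the image space, Eq. (7)
    p_c = np.linalg.inv(K) @ p_i             # transformed into the controller space, Eq. (8)
    return k_G * p_c                         # force exerted by the haptic controller, Eq. (9)
```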

2.5. System Validation

This subsection describes the experimental set-up to test the effect of using VF, the user study, and the selected performance metrics.

2.5.1. Experiment Design and Set-Up

The experimental set-up is shown in Figure 4. To simulate a ureteroscope exploring the kidney’s lower pole, the robotic arm of the ATLAScope is fixed in a retroflex configuration, i.e., bent by 180 degrees. The proximal part of the robotic arm is held by a 3D-printed fixture (Figure 4(2)), while the distal part is free to move.
Two route patterns are designed to evaluate how a user navigates rounded and sharp corners: an oval and a triangle pattern (Figure 4(5)). The patterns are positioned 10 mm in front of the miniature camera. During the experiments, participants are asked to follow the edge of the two patterns through the video stream captured by the miniature camera.
According to Equations (7)–(9), determining the guidance force $f_c$ first requires finding $r_s$. In this experiment, $r_s$ is defined as the point of a route nearest to $c$, where the route is the boundary of the triangle or oval pattern. In $\Omega_I$, the pattern is first segmented using a binarization algorithm to find the route $R(t) = \{r_1, r_2, \ldots, r_n\}$, expressed as a set of two-dimensional vectors, where $r_i$, for $i$ from 1 to $n$, is a pixel of the image belonging to the route and $n$ is the number of route pixels at time $t$. Finally, $r_s$ is the $r_i$ in $R(t)$ that satisfies
$$\min_{r_i \in R(t)} \|r_i - c\|. \tag{10}$$
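A simple sketch of this target-detection step, using OpenCV with an assumed fixed threshold and a dark route on a bright background, is shown below; the actual binarization settings of the system are not specified here:

```python
import cv2
import numpy as np

def nearest_route_point(frame_gray, c=(125.0, 125.0), thresh=100):
    """Segment the route and return r_s, the route pixel closest to the image centre c (Eq. (10))."""
    _, mask = cv2.threshold(frame_gray, thresh, 255, cv2.THRESH_BINARY_INV)  # dark route -> white mask
    ys, xs = np.nonzero(mask)                     # pixel coordinates of the route R(t)
    if xs.size == 0:
        return None                               # no route visible in this frame
    pts = np.stack([xs, ys], axis=1).astype(float)
    dists = np.linalg.norm(pts - np.asarray(c), axis=1)
    return pts[np.argmin(dists)]                  # r_s
```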

2.5.2. User Study Protocol

A user study was conducted to evaluate the proposed GVF teleoperation system using the ATLAScope. Participants were asked to complete two sets of trajectory-tracking tasks: one with the GVF enabled and one without, denoted GVF-on and Control, respectively. During these tasks, participants manipulated the haptic controller of the proposed system to make the ATLAScope follow the trajectories displayed in the endoscopic video stream. Both the GVF-on and Control sets consisted of two subsets of tasks, Oval Route and Triangle Route, in which the trajectories were an oval and a triangle pattern, respectively. For each route, participants were asked to follow the displayed pattern five times. Participants were randomly assigned to two groups, each starting with a different set of tasks and then crossing over: the group that started with GVF-on followed by Control is denoted Group A, and the group that started with Control followed by GVF-on is denoted Group B (Figure 5).
After completing one set of tasks (GVF-on or Control), participants were asked to self-evaluate their workload using NASA’s Task Load Index (NASA TLX) [29] questionnaire. After completing both the GVF-on and Control sets, participants filled in a final comparison questionnaire to evaluate their preferences between the GVF-on and Control tasks.
Prior to the experiments, the participants had a 5 min training session on how to manipulate the ATLAScope using the haptic controller, followed by a 10 min trial session in which they operated the ATLAScope, both with and without the GVF enabled, to become familiar with the haptic controller and the system. During the trial session, a cross pattern, different from the patterns used in the actual tasks, was placed in front of the camera. The detailed procedure of the user study is illustrated in Figure 5.

2.5.3. System Performance Metrics

Two different types of metrics are used. The first type comprises the objective data recorded and interpreted from the ATLAScope. This includes:
  • Completion Time (CT): the interval between the starting time $t_0$, when participants are asked to begin following the route, and the ending time $t_e$, when they complete the route.
  • Mean Absolute Error (MAE) along the route. In each task, the error at any time point $t_n$ is defined as the smallest distance between the center of the image view and the route, expressed as $p(t_n)$. The MAE is defined as
    $$MAE = \frac{1}{N}\sum_{n=0}^{N} |p(t_n)|, \tag{11}$$
    where $N$ represents the total number of time points recorded in one task.
  • Maximum Error (ME): the maximum $|p(t_n)|$ found between $t_0$ and $t_e$ in each task.
It is worth noting that even though $|p(t_n)|$ is measured in the image space, with the pixel as its unit, we approximate the error in mm in the Results section. This approximation assumes that the displacement of the camera during the tasks is small; hence, the change in depth between the camera sensor and the plane of the route is negligible, and the transformation factor between the real world and the camera sensor is simply a constant, 0.067 mm/pixel, according to the camera calibration performed before the experiments. In addition, the distance between the target route and the endoscopic camera is fixed throughout all the tasks; therefore, the MAE and ME extracted from each run can be compared directly without further adjustment.
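For reference, the three machine metrics can be computed from one recorded run as in the sketch below, where errors_px is the recorded series |p(t_n)| in pixels and timestamps holds the corresponding time points:

```python
import numpy as np

MM_PER_PIXEL = 0.067  # calibration factor between image pixels and millimetres

def run_metrics(errors_px, timestamps):
    """Return Completion Time, MAE (Eq. (11)), and Maximum Error for one run."""
    errors_mm = np.asarray(errors_px, dtype=float) * MM_PER_PIXEL
    ct = timestamps[-1] - timestamps[0]   # Completion Time, from t_0 to t_e
    mae = errors_mm.mean()                # Mean Absolute Error in mm
    me = errors_mm.max()                  # Maximum Error in mm
    return ct, mae, me
```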
The second type of metric comprises the subjective indicators from the questionnaires. Each participant filled in three questionnaires during the experiment. The first two are NASA TLX questionnaires, one for each set of tasks, in which participants self-evaluated their workload for the GVF-on or Control tasks. The NASA TLX has six subscales, with scores ranging from −10 to 10, evaluating the mental demand, physical demand, temporal demand, performance, effort, and frustration of the user; the lower the score, the lower the participant considers the task load to be, and vice versa. In the final comparison questionnaire, which is a variation of the NASA TLX, participants rated their preference toward each task on a scale from 0 (no preference) to 10 (strong preference). Significance tests were performed with the Mann–Whitney U test on the ordinal NASA TLX scales.

3. Results

This user study was approved by the Human Research Ethics Committee at Delft University of Technology (ID 3272). Before the experiments started, all participants signed informed consent forms and were informed about the purpose of the experiments and how their data would be processed. In total, 14 participants (six women and eight men) were included in the user study. None of the participants had prior experience with medical surgery, with manipulating an endoscope, or with the ATLAScope. They were equally and randomly allocated to Group A (n = 7) and Group B (n = 7). The results are presented using the metrics from the objective machine data and from the subjective questionnaires. All statistical significance tests in this section were performed with the Mann–Whitney U test, as none of the samples were normally distributed (p < 0.05 with the Shapiro–Wilk test), and p < 0.05 is considered significantly different.
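The statistical procedure described above can be sketched as follows, with control and gvf_on standing for two hypothetical samples of a metric (e.g., per-run MAE values):

```python
from scipy import stats

def compare_conditions(control, gvf_on, alpha=0.05):
    """Shapiro-Wilk normality check followed by a two-sided Mann-Whitney U test."""
    normal = (stats.shapiro(control)[1] > alpha) and (stats.shapiro(gvf_on)[1] > alpha)
    u_stat, p_value = stats.mannwhitneyu(control, gvf_on, alternative="two-sided")
    return {"normally_distributed": normal, "U": u_stat, "p": p_value,
            "significant": p_value < alpha}
```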

3.1. Machine Metrics

3.1.1. Overall Results

The overall performance metrics for Groups A and B on the two sets of tasks (Control and GVF-on) are presented in Figure 6 and Table 1. None of these comparisons showed a significant difference (p > 0.05).

3.1.2. Crossover Groups Results

All metrics per run for the different routes and systems are presented in Figure 7. The first observation from these runs was that weak learning curves were visible: the majority of the task sets showed a negative correlation (R) as the runs progressed, but the trend was weak ($R^2 > 0.8$ was not reached). In Figure 8, these results are further grouped into the crossover groups.
For CT, a notable finding lay in the Oval Route: GVF-on showed much smaller variances than Control in each run of the First Crossover Group. In the Second Crossover Group, however, this trend did not continue, and the IQR in each run was larger in GVF-on than in Control (Figure 7a,b). No significant difference was found between Control and GVF-on in any run.
Regarding MAE, four significant differences were found across the two crossover groups. One was in the First Crossover Group, in the 5th run of the Oval Route, where the MAE was significantly lower (p = 0.040) in Control than in GVF-on. The other three showed MAEs significantly lower in GVF-on than in Control: the 3rd and 4th runs of the Oval Route and the 3rd run of the Triangle Route, with p = 0.040, 0.017, and 0.026, respectively.
Similar observations were made for ME. All of the significant differences were found in the Second Crossover Group, with MEs lower in GVF-on than in Control: the median MEs in the 3rd (p = 0.011) and 5th (p = 0.017) runs of the Oval Route, and in the 2nd (p = 0.026) and 3rd (p = 0.017) runs of the Triangle Route.

3.2. Workload and Comparison Questionnaires

The results of the NASA TLX Questionnaire and the Comparison Questionnaire are presented in Figure 9 and Figure 10, respectively. Note that both ordinal scales in the questionnaires were converted into percentages.
The results for all the participants are presented in Table 2. In summary, the Total Task Load indexes were 58% and 52% for Control and GVF-on, respectively. No significant difference was found in the subjective scales. In the comparison questionnaire, 10 out of 14 participants chose GVF-on; of the remaining participants, three preferred Control and one showed no preference. In all task load categories except Temporal Load, participants preferred having the GVF enabled while performing the tasks.

4. Discussion

From the user study results, the participants tended to perform better when the GVF was enabled, especially in reducing errors. This also holds when the error results are compared per group rather than per run, as shown in Figure 8. With the progression of the tasks, a weak learning curve was visible in a few cases. It was expected that the participants might experience a stronger learning effect, as reported for educational training in endourological procedures [30], and that their performance would then be easily distinguishable, as shown in [31]. A potential explanation is that, because the tasks were much simpler than the surgical tasks presented in [30,31], the participants quickly passed the steep part of the learning curve and reached a plateau; therefore, after the Training Session, often no learning curve was visible within the five runs. In the Second Crossover Group, MAEs decreased significantly with GVF-on, by 45.5% (p < 0.001) for the Oval Route and 43.8% (p < 0.001) for the Triangle Route. MEs were also significantly reduced (p < 0.001) in GVF-on, by 36.5% and 41.7% in the Oval Route and Triangle Route, respectively. However, the results in the First Crossover Group were the opposite, with significant increases in MAE of 58.3% (p = 0.008) and 24.0% (p = 0.022) in the Oval Route and Triangle Route, respectively; regarding ME, there was no significant difference in the First Crossover Group. Our explanation of this finding is that learning to manipulate the ATLAScope, especially with haptic feedback enabled at the same time, is a considerable challenge. In the First Crossover Group, all of the participants had their first experience with the designed tasks; after gaining prior experience with the ATLAScope, like the participants in the Second Crossover Group, they could benefit from the GVF feedback force. Therefore, the best approach to learning a system such as the ATLAScope might be to begin with the GVF disabled and introduce haptic feedback later; once users are familiar with the system, they can benefit from the support of the GVF. This explanation is also supported by the per-run comparisons in the Second Crossover Group, which showed a significant reduction in MAE and ME in the later runs of the tasks. Comparing the results of the Oval Route and the Triangle Route reveals a generally similar trend, suggesting that when users are familiar with the ATLAScope, having the GVF enabled is beneficial for rounded as well as sharp corners.
Analyzing the Workload and Comparison Questionnaires, nine out of fourteen participants found the workload to be lower in the GVF-on tasks. Of the remaining five participants who reported a higher workload in the GVF-on tasks, four were in Group B, where participants started the experiment with GVF-on first. This finding supports the explanation in the previous paragraph, suggesting that learning to use the ATLAScope with the GVF enabled from the start places an extra burden on users.
There are still several challenges and limitations with the current ATLAScope and its teleoperation system. Among them, the significant dead zone results in a poorer user performance when the robotic arm undergoes substantial bending. While a dead zone compensation algorithm was implemented in the teleoperation system to reduce this issue, users still experience difficulties when the motors begin to rotate in opposite directions. Several studies have proposed different modeling techniques and mechanisms for tendon-driven systems [32,33], but further improvements are necessary to enhance the performance of ureteroscopists in the current robotic endoscope system.
Regarding the experimental set-up, the phantom is still far from realistic. According to [34,35], which provide guidelines for surgical robots and surgical tasks, this system and the set-up with a rigid phantom only reach Level of Clinical Realism (LoCR) 1. As the GVF system provides a guidance force while allowing the user to control the robot continuously, the system is at Level of Autonomy (LoA) 1. In clinical scenarios, the operating space inside a kidney is more dynamic and unstructured. Additionally, limited by the system design and experimental set-up, this study only assessed temporal and outcome metrics (the ME and MAE in the camera space); future validation experiments should consider other performance metrics, such as those presented in [34]. Since the magnitude of the force feedback from the haptic controller may affect user performance, further validation studies should focus on analyzing the force feedback along the route and on optimizing user performance by finding the optimal spring constant $k_G$. From the current experiments, using Equation (9) and the errors that occurred during the experiments, it can be estimated that the force the participants experienced along the route was approximately 0.1 N, and the maximum force did not exceed 1 N.
It is also important to note that the current experimental set-up did not validate the computer vision system, which plays a critical role in the GVF system by detecting the targets of interest. With the advancement of technology, several data-driven computer vision algorithms have demonstrated the ability to detect carcinoma and kidney stones in the urinary tract [36,37]. In this context, the proposed GVF system could be integrated with existing computer vision algorithms to address challenges in ureteroscopy in the future. Such an integration must be able to detect targets in real time despite disturbances from the environment. The currently proposed GVF extracts the target and calculates the guidance vector $p_i$ much faster than the video stream rate (approximately 35 fps); therefore, there is no perceptible delay between the visual and force feedback for the user.

5. Conclusions

This study aimed to develop an assistive system for improving current ureteroscopy procedures, particularly in the kidney’s lower pole. To test the virtual fixture technology, a teleoperation system combining the ATLAScope with a haptic controller was developed, and a user study with the GVF function was conducted in a phantom environment. In the user study, the error measures, i.e., MAE and ME, improved when the participants in the Second Crossover Group performed the designated tasks with the GVF enabled. These findings suggest that the GVF can have a positive and significant influence once participants are familiar with manipulating flexible robotic arms. Based on the subjective self-evaluation questionnaires, most participants preferred using the GVF when performing these tasks. In contrast, the majority of the participants in the group that started using the robotic endoscope with the GVF enabled experienced a higher workload when completing the tasks with the GVF. Together, these findings indicate that the GVF can have a positive influence on users performing such tasks, provided they receive proper training and enough practice. In conclusion, the results demonstrate the potential of the GVF, when a robotic system is available, to assist in following a predefined path. More research and more realistic experiments are needed to investigate whether the support of the GVF improves performance in ureteroscopic procedures, especially in challenging locations such as the lower poles.

Author Contributions

Conceptualization, C.-F.L. and J.D.; Data curation, C.-F.L.; Formal analysis, C.-F.L.; Funding acquisition, E.D.M., G.F. and J.D.; Investigation, C.-F.L.; Methodology, C.-F.L.; Project administration, C.-F.L., E.D.M. and J.D.; Resources, E.D.M., G.F. and J.D.; Software, C.-F.L.; Supervision, E.D.M., G.F. and J.D.; Validation, C.-F.L.; Writing—original draft, C.-F.L.; Writing—review and editing, C.-F.L., E.D.M., G.F. and J.D. All authors have read and agreed to the published version of the manuscript.

Funding

This project has received funding from the European Union’s Horizon 2020 research and innovation program under the Marie Sklodowska-Curie grant agreement No 813782.

Institutional Review Board Statement

The user study protocol was approved by the Human Research Ethics Committee of Delft University of Technology with ID 3272 (approved on 7 July 2023).

Informed Consent Statement

Informed consent was obtained from all subjects involved in the study.

Data Availability Statement

The original contributions presented in this study are included in the article; further inquiries can be directed to the corresponding author(s).

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Tokatli, N.Z.; Sarica, K. Robotic Flexible Ureteroscopy (Robotic fURS). In Flexible Ureteroscopy; Springer Nature: Singapore, 2022; pp. 215–222. [Google Scholar] [CrossRef]
  2. Dupont, P.E.; Simaan, N.; Choset, H.; Rucker, C. Continuum robots for medical interventions. Proc. IEEE 2022, 110, 847–870. [Google Scholar] [CrossRef] [PubMed]
  3. Taylor, R.H.; Simaan, N.; Menciassi, A.; Yang, G.Z. Surgical robotics and computer-integrated interventional medicine [scanning the issue]. Proc. IEEE 2022, 110, 823–834. [Google Scholar]
  4. Gu, X.; Ren, H. A survey of transoral robotic mechanisms: Distal dexterity, variable stiffness, and triangulation. Cyborg Bionic Syst. 2023, 4, 0007. [Google Scholar] [CrossRef] [PubMed]
  5. Desai, M.M.; Aron, M.; Gill, I.S.; Pascal-Haber, G.; Ukimura, O.; Kaouk, J.H.; Stahler, G.; Barbagli, F.; Carlson, C.; Moll, F. Flexible robotic retrograde renoscopy: Description of novel robotic device and preliminary laboratory experience. Urology 2008, 72, 42–46. [Google Scholar] [CrossRef]
  6. Desai, M.M.; Grover, R.; Aron, M.; Ganpule, A.; Joshi, S.S.; Desai, M.R.; Gill, I.S. Robotic flexible ureteroscopy for renal calculi: Initial clinical experience. J. Urol. 2011, 186, 563–568. [Google Scholar] [CrossRef] [PubMed]
  7. Saglam, R.; Muslumanoglu, A.Y.; Tokatlı, Z.; Çaşkurlu, T.; Sarica, K.; Taşçi, A.İ.; Erkurt, B.; Süer, E.; Kabakci, A.S.; Preminger, G.; et al. A new robot for flexible ureteroscopy: Development and early clinical results (IDEAL stage 1–2b). Eur. Urol. 2014, 66, 1092–1100. [Google Scholar] [CrossRef]
  8. Cheon, B.; Kim, C.K.; Kwon, D.S. Intuitive endoscopic robot master device with image orientation correction. Int. J. Med. Robot. Comput. Assist. Surg. 2022, 18, e2415. [Google Scholar] [CrossRef]
  9. Talari, H.F.; Monfaredi, R.; Wilson, E.; Blum, E.; Bayne, C.; Peters, C.; Zhang, A.; Cleary, K. Robotically assisted ureteroscopy for kidney exploration. Proc. SPIE Int. Soc. Opt. Eng. 2017, 10135, 279–284. [Google Scholar]
  10. Duan, X.; Xie, D.; Zhang, R.; Li, X.; Sun, J.; Qian, C.; Song, X.; Li, C. A novel robotic bronchoscope system for navigation and biopsy of pulmonary lesions. Cyborg Bionic Syst. 2023, 4, 0013. [Google Scholar] [CrossRef]
  11. Mo, H.; Li, X.; Ouyang, B.; Fang, G.; Jia, Y. Task autonomy of a flexible endoscopic system for laser-assisted surgery. Cyborg Bionic Syst. 2022, 2022, 9759504. [Google Scholar] [CrossRef]
  12. Rassweiler, J.; Fiedler, M.; Charalampogiannis, N.; Kabakci, A.S.; Saglam, R.; Klein, J.T. Robot-assisted flexible ureteroscopy: An update. Urolithiasis 2018, 46, 69–77. [Google Scholar] [CrossRef] [PubMed]
  13. Rassweiler-Seyfried, M.C.; Herrmann, J.; Klein, J.; Michel, M.S.; Rassweiler, J.; Grüne, B. Robot-assisted flexible ureterorenoscopy: State of the art in 2022. Mini-Invasive Surg. 2022, 6, 41. [Google Scholar] [CrossRef]
  14. Shu, X.; Hua, P.; Wang, S.; Zhang, L.; Xie, L. Safety enhanced surgical robot for flexible ureteroscopy based on force feedback. Int. J. Med. Robot. Comput. Assist. Surg. 2022, 18, e2410. [Google Scholar] [CrossRef] [PubMed]
  15. Abbott, J.J.; Marayong, P.; Okamura, A.M. Haptic virtual fixtures for robot-assisted manipulation. In Robotics Research: Results of the 12th International Symposium ISRR, San Francisco, CA, USA, 12–15 October 2005; Springer: Berlin/Heidelberg, Germany, 2007; pp. 49–64. [Google Scholar]
  16. Li, M.; Taylor, R.H. Optimum Robot Control for 3D Virtual Fixture in Constrained ENT Surgery. In Medical Image Computing and Computer-Assisted Intervention—MICCAI 2003, Proceedings of the 6th International Conference, Montréal, QC, Canada, 15–18 November 2003; Ellis, R.E., Peters, T.M., Eds.; Springer: Berlin/Heidelberg, Germany, 2003; pp. 165–172. [Google Scholar]
  17. Li, M.; Ishii, M.; Taylor, R.H. Spatial motion constraints using virtual fixtures generated by anatomy. IEEE Trans. Robot. 2007, 23, 4–19. [Google Scholar] [CrossRef]
  18. Park, J.W.; Choi, J.; Park, Y.; Sun, K. Haptic virtual fixture for robotic cardiac catheter navigation. Artif. Organs 2011, 35, 1127–1131. [Google Scholar] [CrossRef]
  19. He, Y.; Hu, Y.; Zhang, P.; Zhao, B.; Qi, X.; Zhang, J. Human–Robot Cooperative Control Based on Virtual Fixture in Robot-Assisted Endoscopic Sinus Surgery. Appl. Sci. 2019, 9, 1659. [Google Scholar] [CrossRef]
  20. Moccia, R.; Selvaggio, M.; Villani, L.; Siciliano, B.; Ficuciello, F. Vision-based Virtual Fixtures Generation for Robotic-Assisted Polyp Dissection Procedures. In Proceedings of the 2019 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Macau, China, 4–8 November 2019; pp. 7934–7939. [Google Scholar] [CrossRef]
  21. Marinho, M.M.; Adorno, B.V.; Harada, K.; Mitsuishi, M. Dynamic active constraints for surgical robots using vector-field inequalities. IEEE Trans. Robot. 2019, 35, 1166–1185. [Google Scholar] [CrossRef]
  22. He, Y.; Zhao, B.; Qi, X.; Li, S.; Yang, Y.; Hu, Y. Automatic surgical field of view control in robot-assisted nasal surgery. IEEE Robot. Autom. Lett. 2020, 6, 247–254. [Google Scholar] [CrossRef]
  23. Moccia, R.; Iacono, C.; Siciliano, B.; Ficuciello, F. Vision-Based Dynamic Virtual Fixtures for Tools Collision Avoidance in Robotic Surgery. IEEE Robot. Autom. Lett. 2020, 5, 1650–1655. [Google Scholar] [CrossRef]
  24. Marinho, M.M.; Ishida, H.; Harada, K.; Deie, K.; Mitsuishi, M. Virtual fixture assistance for suturing in robot-aided pediatric endoscopic surgery. IEEE Robot. Autom. Lett. 2020, 5, 524–531. [Google Scholar] [CrossRef]
  25. Park, S.; Howe, R.D.; Torchiana, D.F. Virtual Fixtures for Robotic Cardiac Surgery. In Medical Image Computing and Computer-Assisted Intervention—MICCAI 2001, Proceedings of the 4th International Conference, Utrecht, The Netherlands, 14–17 October 2001; Niessen, W.J., Viergever, M.A., Eds.; Springer: Berlin/Heidelberg, Germany, 2001; pp. 1419–1420. [Google Scholar]
  26. Culmone, C.; Henselmans, P.W.J.; van Starkenburg, R.I.B.; Breedveld, P. Exploring non-assembly 3D printing for novel compliant surgical devices. PLoS ONE 2020, 15, e0232952. [Google Scholar] [CrossRef] [PubMed]
  27. Wu, K.; Zhu, G.; Wu, L.; Gao, W.; Song, S.; Lim, C.M.; Ren, H. Safety-Enhanced Model-Free Visual Servoing for Continuum Tubular Robots Through Singularity Avoidance in Confined Environments. IEEE Access 2019, 7, 21539–21558. [Google Scholar] [CrossRef]
  28. Wang, X.; Fang, G.; Wang, K.; Xie, X.; Lee, K.H.; Ho, J.D.L.; Tang, W.L.; Lam, J.; Kwok, K.W. Eye-in-Hand Visual Servoing Enhanced with Sparse Strain Measurement for Soft Continuum Robots. IEEE Robot. Autom. Lett. 2020, 5, 2161–2168. [Google Scholar] [CrossRef]
  29. Hart, S.G.; Staveland, L.E. Development of NASA-TLX (Task Load Index): Results of empirical and theoretical research. In Advances in Psychology; Elsevier: Amsterdam, The Netherlands, 1988; Volume 52, pp. 139–183. [Google Scholar]
  30. Aditya, I.; Kwong, J.C.; Canil, T.; Lee, J.Y.; Goldenberg, M.G. Current educational interventions for improving technical skills of urology trainees in endourological procedures: A systematic review. J. Endourol. 2020, 34, 723–731. [Google Scholar] [CrossRef] [PubMed]
  31. Lukács, E.; Levendovics, R.; Haidegger, T. Enhancing autonomous skill assessment of robot-assisted minimally invasive surgery: A comprehensive analysis of global and gesture-level techniques applied on the JIGSAWS dataset. Acta Polytech. Hung 2023, 20, 133–153. [Google Scholar] [CrossRef]
  32. Sun, Z.; Wang, Z.; Phee, S.J. Elongation modeling and compensation for the flexible tendon–sheath system. IEEE/ASME Trans. Mechatronics 2013, 19, 1243–1250. [Google Scholar] [CrossRef]
  33. Sun, Z.; Wang, Z.; Phee, S.J. Modeling and motion compensation of a bidirectional tendon-sheath actuated system for robotic endoscopic surgery. Comput. Methods Programs Biomed. 2015, 119, 77–87. [Google Scholar] [CrossRef]
  34. Nagy, T.D.; Haidegger, T. Performance and capability assessment in surgical subtask automation. Sensors 2022, 22, 2501. [Google Scholar] [CrossRef] [PubMed]
  35. Yang, G.Z.; Cambias, J.; Cleary, K.; Daimler, E.; Drake, J.; Dupont, P.E.; Hata, N.; Kazanzides, P.; Martel, S.; Patel, R.V.; et al. Medical robotics—Regulatory, ethical, and legal considerations for increasing levels of autonomy. Sci. Robot. 2017, 15, eaam8638. [Google Scholar] [CrossRef]
  36. Lazo, J.F.; Moccia, S.; Marzullo, A.; Catellani, M.; De Cobelli, O.; Rosa, B.; de Mathelin, M.; De Momi, E. A transfer-learning approach for lesion detection in endoscopic images from the urinary tract. arXiv 2021, arXiv:2104.03927. [Google Scholar]
  37. Setia, S.A.; Stoebner, Z.A.; Floyd, C.; Lu, D.; Oguz, I.; Kavoussi, N.L. Computer vision enabled segmentation of kidney stones during ureteroscopy and laser lithotripsy. J. Endourol. 2023, 37, 495–501. [Google Scholar] [CrossRef] [PubMed]
Figure 1. The robotic endoscope system, ATLAScope, to simulate a robotized ureteroscope. (1) Two stepper motors; (2) two pulleys; (3) cable tunnels to guide driving cables; (4) soft robotic arm with HelicoFlex design, with a total length of 90 mm and a steerable segment of 70 mm; (5) miniaturized endoscopic camera and the two bending directions of the robotic arm.
Figure 2. The teleoperation system with the GVF consists of the ATLAScope, a haptic controller, and a communication channel. The user commands the haptic controller with $\dot{u}_c$, the velocity of the tip of the haptic controller. This movement is translated into $\dot{y}_i$, the desired velocity of the target in the image space. Then, the velocity of the motors $\dot{\theta}$ in the actuation space is determined by the Moore–Penrose inverse of the model-free Jacobian matrix $J_{free}$. After the motors move the tip of the endoscopic camera to a new position $t$, the camera captures a new image. The Segmentation and Target Detection module processes this new image and returns a new target vector $p_i$, the shortest vector from the center of the image to the route. Within the Virtual Fixture module, this target vector is translated into a force $f_c$ by the spring–damper model $F(\cdot)$ and exerted on the user. $K$, $k$, and $\xi$ are the working-space transformation matrix, spring constant, and damping constant, respectively. $\Omega$ stands for a coordinate space, and its superscripts C, I, A, and E stand for controller, image, actuation, and end-effector, respectively.
Figure 3. GVF coordinate transformation between the image space $\Omega_I$ (left) and the controller space $\Omega_C$ (right) by the scaling transformation matrix $K$. Both the teleoperation and the GVF rely on information in $\Omega_I$. To link $\Omega_I$ with $\Omega_C$, $u_c$ in $\Omega_C$ is projected onto the $xy$ plane to form $u_m$. Using the space transformation matrix $K$, $u_m$ is transformed into the desired target movement $y_i$ in $\Omega_I$. Conversely, the guidance vector $p_i$ created by the GVF in $\Omega_I$ can be transformed into $\Omega_C$ as $p_c$ using $K^{-1}$. In the top figure, $R$ is the set of two-dimensional vectors of the segmented route; $c$, $r_s$, and $p_i$ are the center of the image, the closest point in $R$ to $c$, and the guidance vector, respectively.
Figure 4. Experimental set-up with the flexible arm bent in a retroflex posture. (1) Soft robotic arm; (2) 3D printed fixture mold to restrict the movement of the soft robotic arm; (3) tip of the robotic arm equipped with a miniaturized endoscopic camera in a retroflex posture; (4) target plane with a triangle or oval route; (5) two designed routes and their dimensions.
Figure 5. A flow diagram showing the user study protocol. After the first training session, the participants are divided into two groups (Group A and Group B). Each group has two sets of runs: one set of Control tasks (Control) and one set of Guided Virtual Fixture tasks (GVF-on). Within each set, there are two different routes (Oval Route and Triangle Route) that participants had to repeat five times, and they had to fill in one NASA TLX Questionnaire. Finally, all the participants had to fill in a Comparison Questionnaire. It is worth noting that a crossover group is being highlighted within the dashed line, and the colors and dashed lines represent the group of data shown in the next figures.
Figure 6. Box and whisker plots comparing overall results for the three performance metrics, Completion Time (CT), Mean Absolute Error (MAE), and Max Error (ME). The color blue represents the Control set, while the color orange represents the GVF-on set. Those boxes with slashes are the results of Triangle Route. The hollow circles are the outliers, and the black horizontal lines in the box indicate where the median values are.
Figure 7. Results of Crossover Groups in each run. The box and whisker plots show the three performance metrics (Completion Time, Mean Absolute Error, Max Error) per run with respect to the Oval Route (in the upper row) and the Triangle Route (in the lower row). Blue: Control; orange: GVF-on, respectively. Darker color tones: Group A; lighter tones: Group B. (*) p < 0.05 . (a) First Crossover Group. (b) Second Crossover Group. (c) First Crossover Group. (d) Second Crossover Group.
Figure 8. Results of each Crossover Group. Compared into two different dimensions, within its Crossover Groups and in Control and GVF-on. Blue: Control; orange: GVF-on. Darker tones: Group A; lighter tones: Group B. Boxes without slashes: Oval Route; boxes with slashes: Triangle Route. (*) p < 0.05 and (**) p < 0.01 .
Figure 9. Bar plots representing the results of NASA TLX Questionnaires. Left: all Participants, middle: Group A, right: Group B. The bars and error bars show each TLX index’s mean and standard error, respectively. Note: Scales are transferred into percentages.
Figure 10. Bar plot showing the results of Comparison Questionnaires. The bar shows the preferences of participants toward the two tasks with respect to the six task load indexes and their general preferences toward the two sets of tasks.
Table 1. Overall performance metrics by all the participants.

Route           Mode      CT Median (IQR)       MAE Median (IQR)      ME Median (IQR)
Oval Route      GVF-On    50.2 (36.9, 66.0)     0.89 (0.66, 1.29)     2.8 (2.2, 4.3)
Oval Route      Control   54.0 (42.7, 76.5)     1.10 (0.63, 1.51)     3.7 (2.4, 4.7)
Triangle Route  GVF-On    63.1 (43.6, 76.7)     0.87 (0.63, 1.32)     3.6 (2.4, 4.7)
Triangle Route  Control   57.4 (47.0, 73.0)     0.96 (0.72, 1.40)     4.0 (2.8, 4.9)
Completion Time (CT), Mean Absolute Error (MAE), Maximum Error (ME), InterQuartile Range (IQR).
Table 2. NASA TLX Questionnaire of all the participants.

                 Group A             Group B             All
                 Average    STD      Average    STD      Average    STD
Mental           57.1       18.2     64.3       20.7     60.7       19.1
Physical         60.0       20.0     57.1       21.6     58.6       20.0
Temporal         47.1       9.5      45.0       28.1     46.1       20.2
Performance      52.9       21.2     60.7       19.9     56.8       20.2
Effort           70.7       21.1     68.6       17.7     69.6       18.8
Frustration      55.7       21.1     51.4       28.5     53.6       24.2
Total Load       57.3       7.7      57.9       15.9     57.6       12.0

