#### 1.2.1. Bimanual Robotics

*Dual-arm robotic systems* are used in a wide range of domestic, industrial, and healthcare tasks, mainly because of their flexibility and manipulability. In addition, their behavior closely resembles that of a human, which makes it possible for humans to relate to their movements more intuitively [9–11].

More specifically, *bimanual robotics* consists of the coordination of two robotic arms that interact physically in order to achieve a common goal [9]. Many applications of bimanual robotics can be found in the literature: for instance, the handling of deformable objects [12–14], objects of unknown shape [15,16], or objects whose geometry requires two grasping points [17–19]; the emulation of human bimanual tasks [20–22]; assistive robotics [23,24]; assembly operations [25–27]; surgery tasks [28]; and simultaneous manipulation and cutting [29], manipulation and fastening [10], or manipulation and surface treatment [30,31], which is the case considered in this paper.

**Citation:** García, A.; Solanes, J.E.; Muñoz, A.; Gracia, L.; Tornero, J. Augmented Reality-Based Interface for Bimanual Robot Teleoperation. *Appl. Sci.* **2022**, *12*, 4379. https://doi.org/10.3390/app12094379

Academic Editors: DaeEun Kim and Alessandro Gasparetto

Received: 21 March 2022; Accepted: 24 April 2022; Published: 26 April 2022

**Copyright:** © 2022 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).

The vast majority of contributions in bimanual robotics present fully automated tasks based, for instance, on artificial intelligence techniques [21,23], motion planning techniques [14,15,26,32,33], or other low-level control approaches [19,24,25,27,28].

However, the presence of a human interacting with the bimanual robotic system is very attractive, since it makes it possible to exploit the human's natural knowledge of bimanual configurations and motions in order to improve task performance [31,34,35]. For this reason, human–robot interaction (HRI), which is the main focus of this work, is nowadays a trending research topic in bimanual robotics.

Some interesting approaches related to HRI in bimanual robotics can be found in the literature. For instance, the authors in [34] proposed to improve the transportation of a large workpiece, typically performed by two users, by using a bimanual robot attached to a mobile platform. In this approach, the mobile platform moved along a pre-defined trajectory, while the user was able to arbitrarily adapt this trajectory by means of impedance control. The authors in [35] proposed a multi-layered prioritized shared controller to maintain the orientation of the robot hands and their contact with the manipulated surface, while the user teleoperated the bimanual robot hands on a plane. The authors in [31] presented a similar approach based on task-priority and sliding mode control techniques to perform surface treatment tasks using a bimanual robotic system. In this case, the user was able to teleoperate all six Degrees of Freedom (DoF) of one robotic arm, which held the workpiece and whose movement was limited in the 3D workspace, and to teleoperate two DoF of the other robotic arm, which held the surface treatment tool, while the appropriate tool orientation and pressure were maintained automatically. Due to the relevance of this application for this work, more details can be found in Section 2.

#### 1.2.2. Assisted Robot Teleoperation

The remote control or *teleoperation* of robots by users has been studied for many years [36] and still represents a relevant research field in robotics. Robot teleoperation is required for a wide variety of reasons: when the working environment is dangerous to humans (e.g., in space [37], radioactive zones [29,38], aerial zones [18,39], or underwater areas [35,40]); when performing rescue operations [41]; or when precision surgeries need to be performed [42–45], among others.

Nowadays, sophisticated artificial intelligence (AI) techniques allow the automation of complex tasks that, not so long ago, had to be performed by means of human teleoperation. Nevertheless, despite current advances in AI, there are still many tasks that cannot be fully automated due to their complexity or subjectivity. Such tasks can, however, be partially automated through shared-control architectures that allow cooperation between human and robot [46]. Hence, many recent contributions have focused on human–robot interaction and, more specifically, on advanced robot teleoperation [16,30,31,47–51], which is also the case of this paper.

Telepresence [36] allows the user to perform the robot teleoperation task by means of an interface, achieving a result that is less dependent on the user's skills. Telepresence is currently a trending research topic thanks to the introduction of new technologies, such as augmented and virtual reality [51], visual interfaces [42], haptic devices [52], or a combination of them [16,30,43], to perform direct-control teleoperation. For instance, the authors in [53] proposed a low-cost telerobotic system based on virtual reality technology and the homunculus model of mind. In this case, the user was able to move both robotic arms according to the developed dynamic mapping between the user and the robot. In addition, the user was able to see the real workspace in the virtual environment using feedback from a camera. Similarly, the authors in [54] proposed a virtual reality interface based on the three-dimensional coordinates of the shoulder, elbow, wrist, and hand captured by a Kinect camera to model the geometry of the human arms and perform the mapping with the robot arms. As in [53], the user received visual feedback from a camera placed on the robot. In both cases, robot manipulation tasks were performed. However, for more complex tasks (e.g., surface treatment tasks), interfaces developed with virtual reality techniques can increase the task completion time and worsen the quality of the surface finishing compared to that obtained by a human operator using direct teleoperation. This is due to two facts: on the one hand, when using virtual reality, it is difficult to incorporate all the necessary task information into the virtual world in real time; on the other hand, the user already has a real notion of the robotic system and is hence able to guide it naturally and intuitively using direct teleoperation. For this reason, in order to obtain the best of both worlds (i.e., direct teleoperation and teleoperation based on virtual reality), the present work proposes the use of interfaces based on augmented reality to provide a solution for a greater number of industrial tasks carried out with bimanual robots.

Other approaches try to ease the teleoperation of bimanual robotic systems, such as in [48], where the authors developed a bimanual robot application in which a robot arm is teleoperated to grasp the workpiece, whilst the other robotic arm is automatically controlled using visual servoing in order to keep the workpiece visible for the camera.

Since the performance of robot teleoperation may rely on the user's skills, some approaches focus on incorporating restrictions that prevent the user from commanding the robot into failure situations. For example, the authors in [44] incorporated Virtual Fixtures (i.e., virtual barriers) so that the references provided by the user are automatically adapted to the allowed region. The authors in [38,52] proposed the use of haptic devices to prevent the user from commanding references beyond the allowed region.
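The idea behind such virtual fixtures can be illustrated with a minimal sketch: the user's reference is projected onto the allowed region before being passed to the robot controller. The box-shaped region and the function name below are illustrative assumptions, not the formulation used in [44]:

```python
import numpy as np

def apply_virtual_fixture(ref, lower, upper):
    """Clamp a user reference into an axis-aligned allowed region.

    A minimal 'virtual fixture': any component of the reference outside
    the box [lower, upper] is projected onto its boundary before being
    sent to the robot controller.
    """
    return np.clip(ref, lower, upper)

# Example: a 3D position reference commanded beyond the allowed box.
ref = np.array([0.9, -0.2, 1.5])
lower = np.array([0.0, -0.5, 0.0])
upper = np.array([0.8, 0.5, 1.2])
safe_ref = apply_virtual_fixture(ref, lower, upper)
# safe_ref is [0.8, -0.2, 1.2]: the x and z components were clamped.
```

Haptic-device approaches such as [38,52] act on the same principle, but additionally render a repulsive force to the user when the reference approaches the region boundary.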

Despite all the above, robot teleoperation by means of interfaces and virtual barriers is still a subject of study due to the drawbacks it presents, mainly those arising from the direct control performed by the user [47]. In this sense, this work presents a new methodology based on augmented reality devices to improve current assisted teleoperation interfaces for bimanual robotics.

#### 1.2.3. Augmented Reality-Based Interfaces

Human–machine interfaces are devices that allow the interaction between a human and a machine [55,56]. If the interface is placed inside the brain or body of the human, it is known as an invasive or implanted interface [57]. On the contrary, if the interface is external to the human body, it is known as a non-invasive or wearable interface [58–60]. This work is focused on non-invasive interfaces and on how to develop this kind of interface for complex robotic applications.

Technological advances in the creation of holograms have nowadays made it possible to have devices and software tools that allow augmented reality (AR) applications in industrial sectors [61–64]. In short, augmented reality projects holograms into physical space, allowing for a more intuitive and natural interaction between human and machine [65].

Some previous works used AR interfaces to improve robot teleoperation for industrial tasks. For example, the authors in [66] proposed a new AR interface to control a robot manipulator in order to facilitate the interaction between the user and the robot, as well as a mixed reality system to move the end-effector of the robot system. The authors in [67] proposed a mixed reality system to allow the user to visualize the intended teleoperation command prior to the real robot motion. A similar approach was developed in [68], where a mixed reality head-mounted display enabled the user to create and edit robot motions using waypoints. The authors in [69] proposed a multimodal AR interface coined *Sixth Sense* that allowed the user to interact with information projected onto physical objects through hand gestures, arm movements, and, in some cases, blinking. The authors in [70] proposed a method for using hand gestures and speech inputs for AR multimodal interaction with industrial manipulators.

Note that most of the AR approaches mentioned above developed solutions for robot–object manipulation tasks. Thus, to the best of the authors' knowledge, this is the first work that proposes a new AR interface for complex industrial tasks, such as surface treatment tasks, involving a bimanual robot system.

In addition, the interaction with the robotic system needs to be natural and intuitive, not only from the point of view of the visual feedback produced by the AR, but also from the point of view of the means of sending the robot commands. All AR headsets provide interaction elements based on hand tracking. However, as demonstrated in [5–8], such interaction can become tiring and insufficiently ergonomic when prolonged over time. This is why, similarly to [8], this work proposes the use of gamepads, which are devices ergonomically designed to be used for long periods of time.

To the best of the authors' knowledge, this is the first work proposing an AR interface together with a gamepad for bimanual robot teleoperation.

#### *1.3. Proposed Approach*

This paper develops an original augmented reality-based interface for teleoperating bimanual robots. The proposed interface is more natural to the user, which shortens the interface learning process. A full description of the proposed interface is given in the paper, and its effectiveness is shown experimentally using two industrial robot manipulators. Moreover, the drawbacks and limitations of the classic teleoperation interface using joysticks are analyzed in order to illustrate the benefits of the proposed augmented reality-based approach.

The content of the article is as follows. Section 2 presents a brief description of the advanced bimanual robot teleoperation application considered in this work. Then, Section 3 provides a methodology to develop AR interfaces for bimanual robot teleoperation tasks and, subsequently, develops the specific AR-based interface proposed for the application at hand. Moreover, the interface functionalities are illustrated through several experiments. Furthermore, Section 4 shows the performance and effectiveness of the proposed AR-based interface by means of real experimentation. Finally, Section 5 presents the conclusions.

#### **2. Previous Work**

Without loss of generality, this work uses the robotic application developed by the authors in [31] to demonstrate the benefits of the proposed AR-based interface with respect to conventional PC-based interfaces. It consists of a surface treatment application carried out through the cooperation of a bimanual robotic system and a user, who is able to partially command both robots at a distance, i.e., by means of robot teleoperation. Moreover, both robots are partially automatically controlled to fulfill some 2D and 3D constraints, as well as to keep the force exerted on the workpiece by the tool, together with the tool orientation, constant at all times during the task.

Next, a description of this application, as well as the problems of using conventional PC-based interfaces, is detailed.

#### *2.1. Description of the Advanced Bimanual Robot Teleoperation Application*

The advanced bimanual robot teleoperation is based on the *task-priority strategy* [71,72] and on *conventional* and *non-conventional Sliding Mode Controllers* (SMCs) [73,74]. As commented before, the goal of this bimanual robotic application is to establish a human–robot cooperative control loop in which the user partially teleoperates two robotic arms to perform a surface treatment operation, whilst the robots automatically ensure the appropriate tool force and orientation; see Figure 1. Thus, the so-called *workpiece robot* (WR), which consists of a 7R collaborative robot with a flat methacrylate workpiece fixed to the end-effector using a self-made piece (see Figure 1a), is in charge of holding the workpiece. Meanwhile, the so-called *surface treatment robot* (STR), which consists of a 6R robotic arm with a Force/Torque (F/T) sensor and a cylinder-shaped tool with a piece of cloth (see Figure 1a), operates the surface treatment tool on the workpiece. Thus, the user controls the workpiece position and orientation and, simultaneously, the 2D tool motion on the workpiece surface using an interface, which consists of a gamepad to command the robots and a visual feedback screen that shows the user the states of the robots and the user references; see Figure 1a.
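As an illustration of how gamepad input can be turned into the references described above, the following sketch integrates joystick deflections into the WR pose reference and the STR 2D tool reference. The gains, control period, and axis assignment are assumptions for illustration only, not the values used in [31]:

```python
import numpy as np

def update_references(p_w_ref, p_s_ref, wr_axes, str_axes, dt=0.02):
    """Integrate joystick deflections (each in [-1, 1]) into new references.

    p_w_ref : 6-vector, WR pose reference (x, y, z, alpha, beta, gamma).
    p_s_ref : 2-vector, STR tool position reference on the workpiece surface.
    wr_axes : 6 gamepad deflections assigned to the WR pose.
    str_axes: 2 gamepad deflections assigned to the STR tool.
    """
    # Assumed maximum reference speeds: 0.05 m/s linear, 0.2 rad/s angular
    # for the WR, and 0.03 m/s for the STR tool on the surface.
    wr_gains = np.array([0.05, 0.05, 0.05, 0.2, 0.2, 0.2])
    str_gain = 0.03
    p_w_ref = np.asarray(p_w_ref) + wr_gains * dt * np.asarray(wr_axes)
    p_s_ref = np.asarray(p_s_ref) + str_gain * dt * np.asarray(str_axes)
    return p_w_ref, p_s_ref

# Full forward deflection on the WR x-axis and the STR y-axis for one cycle:
p_w, p_s = update_references(np.zeros(6), np.zeros(2),
                             [1, 0, 0, 0, 0, 0], [0, 1])
# p_w[0] -> 0.001 m, p_s[1] -> 0.0006 m
```

Integrating deflections into a reference, rather than mapping them directly to position, lets the user release the sticks at any time and have both references hold their current values.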

**Figure 1.** Bimanual application setup and block diagram (for further details, refer to [31]). (**a**) Previous setup used for the real experimentation. (**b**) Block control diagram for both robots (WR and STR).

Figure 1b shows the block control diagram for both robots, where subscript *s* stands for the STR; subscript *w* stands for the WR; subscript *ref* stands for the user reference; subscript *c* stands for the commanded control action; **p** = [*x* *y* *z* *α* *β* *γ*]<sup>T</sup> is the robot pose, i.e., the linear positions {*x*, *y*, *z*} plus the orientation angles {*α*, *β*, *γ*}; **p**<sub>*s*</sub> = [*x* *y*]<sup>T</sup> is the 2D position of the STR tool on the workpiece surface, i.e., the linear positions {*x*, *y*} relative to this surface; **q** = [*q*<sub>1</sub> ··· *q*<sub>*n*</sub>]<sup>T</sup> is the robot configuration, with *n* the number of robot joints; and **F** is the vector containing the measured forces and torques. Thus, using the gamepad joysticks, the user is able to send the reference for the WR pose **p**<sub>*w*</sub> and, simultaneously, the reference for the 2D position **p**<sub>*s*</sub> of the STR tool on the workpiece surface. The high-level controllers of both robots then compute the corresponding joint commands **q**<sub>*c*</sub> from the user references, the state {**q**, **q̇**, **p**} of both robots, and the force sensor data **F**. These joint commands are sent to the low-level controllers of both robots, as shown in Figure 1b, in order to complete the teleoperation task. See [31] for further details on the high-level controllers of both robots and the related signals.
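The role of the high-level controllers can be illustrated with a strongly simplified resolved-rate sketch. The actual controllers in [31] use task-priority and sliding mode techniques and also process the force measurements **F**; all of that is omitted here, and the Jacobian-based update below is only a stand-in:

```python
import numpy as np

def high_level_step(p_ref, p, q, J, dt=0.02, k=1.0):
    """One simplified high-level control cycle.

    Drives the robot pose p toward the user reference p_ref by mapping
    the task-space error through the pseudoinverse of the Jacobian J:
        q_c = q + dt * k * J^+ (p_ref - p)
    This is a plain resolved-rate sketch, not the task-priority SMC of [31].
    """
    q_dot = k * np.linalg.pinv(J) @ (np.asarray(p_ref) - np.asarray(p))
    return np.asarray(q) + dt * q_dot

# Toy 2-DoF example with an identity Jacobian:
q_c = high_level_step(p_ref=[1.0, 0.0], p=[0.0, 0.0], q=[0.0, 0.0], J=np.eye(2))
# q_c -> [0.02, 0.0]
```

In the real application, one such cycle runs per robot: the WR tracks **p**<sub>*w*,*ref*</sub> in 6 DoF, while the STR tracks **p**<sub>*s*,*ref*</sub> on the surface with the force and orientation subtasks handled at higher priority.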

In addition, some constraints are considered for both robots in order to increase the safety of the task: (1) the WR is automatically controlled to maintain the workpiece center inside the allowed region that is modeled as a superellipsoid, which is similar to a rectangular prism with smooth corners; (2) the STR is automatically controlled to keep the center of the treatment tool within the allowed region on the workpiece, which is modeled as a superellipse, i.e., a rectangle with smooth edges.
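The containment check behind constraint (2) can be sketched as follows: the allowed region is the superellipse |*x*/*a*|<sup>n</sup> + |*y*/*b*|<sup>n</sup> ≤ 1, where a large even exponent *n* yields a rectangle with smooth corners. The dimensions and exponent below are illustrative, not those of [31]:

```python
def inside_superellipse(x, y, a=0.2, b=0.15, n=8):
    """Return True if (x, y) lies inside the superellipse
    |x/a|**n + |y/b|**n <= 1; for n >> 2 this approximates a
    2a x 2b rectangle with rounded corners."""
    return abs(x / a) ** n + abs(y / b) ** n <= 1.0

inside_superellipse(0.10, 0.05)   # True: well inside the region
inside_superellipse(0.19, 0.14)   # False: the rounded corner is cut off
```

The 3D superellipsoid of constraint (1) is analogous, with a third |*z*/*c*|<sup>n</sup> term; in both cases, the robots are automatically controlled so that the monitored center point never leaves the region.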
