Article

Piton: Investigating the Controllability of a Wearable Telexistence Robot

1 Department of Computer Science and Engineering, Waseda University, Tokyo 169-8555, Japan
2 Department of Computer Science and Engineering, Qatar University, Doha 2713, Qatar
3 Future Robotics Organization, Waseda University, Tokyo 169-8555, Japan
4 Avatarin Inc., Tokyo 103-0022, Japan
* Author to whom correspondence should be addressed.
Sensors 2022, 22(21), 8574; https://doi.org/10.3390/s22218574
Submission received: 10 September 2022 / Revised: 19 October 2022 / Accepted: 27 October 2022 / Published: 7 November 2022
(This article belongs to the Special Issue Challenges and Future Trends of Wearable Robotics)

Abstract: The COVID-19 pandemic impacted collaborative activities, travel, and physical contact, increasing the demand for real-time interactions with remote environments. However, existing remote communication solutions provide limited interactions and do not convey a high sense of presence within a remote environment. Therefore, we propose a snake-shaped wearable telexistence robot, called Piton, that can be remotely used for a variety of collaborative applications. To the best of our knowledge, Piton is the first snake-shaped wearable telexistence robot. We explain the implementation of Piton, its control architecture, and discuss how Piton can be deployed in a variety of contexts. We implemented three control methods to control Piton: HM—using a head-mounted display (HMD), HH—using an HMD and hand-held tracker, and FM—using an HMD and a foot-mounted tracker. We conducted a user study to investigate the applicability of the proposed control methods for telexistence, focusing on body ownership (Alpha IVBO), mental and physical load (NASA-TLX), motion sickness (VRSQ), and a questionnaire to measure user impressions. The results show that both the HM and HH provided relatively high levels of body ownership, had high perceived accuracy, and were highly favored, whereas the FM control method yielded the lowest body ownership effect and was least favored. We discuss the results and highlight the advantages and shortcomings of the control methods with respect to various potential application contexts. Based on our design and evaluation of Piton, we extracted a number of insights and future research directions to deepen our investigation and realization of wearable telexistence robots.

1. Introduction

1.1. Background

With the emergence of the COVID-19 pandemic in 2020, global restrictions were imposed on various sectors and activities, including overseas travel, close physical contact, and training and educational activities at physical facilities. The pandemic highlighted the importance of video conferencing and telecommunication within industrial, educational, and academic contexts. Various video conferencing solutions have advanced communication and collaborative work innovations that attempt to replace close physical contact [1].
Telepresence is an emerging medium for communication, where instruments such as video cameras and microphones are intermediaries for the participant’s senses [2]. Telepresence has been used in a variety of novel application domains, such as office settings [3,4], education [5], and medical applications [6]. However, telepresence systems have drawbacks: most systems do not enable a high sense of immersion or body ownership, nor do they enable physical interaction. In contrast, telexistence refers to a group of technologies aiming at conveying a high sense of presence within remote environments through various modalities, including visual, auditory, and real-time physical interactions [7,8]. Despite the potential of telexistence systems, the amount of research that focuses on telexistence remains limited.
Most telexistence systems have demonstrated robots that focus on anthropomorphic (human-like) implementations, which are limited as they only provide positional and rotational movements similar to those of the human head. Generally, a large robot workspace and flexible movement are essential for object inspection and interaction within daily life and industrial contexts [9,10]. Accordingly, this research presents a novel wearable telexistence robot, “Piton”, which is designed based on a snake-like structure with high degrees of freedom (DoFs) and redundancy. Piton is worn by a user (surrogate) and operated by another user at a different location. Piton provides auditory feedback, stereoscopic visual feedback, and rotational and positional movements of the camera system in a snake-shaped form-factor for the remote user. In addition, Piton’s snake form-factor enables parallax motion, inspection of objects from multiple angles, and head gestures which can be used within social contexts. Our implementation consists of a local system and a remote system. The local system includes a head-mounted display (HMD), a motion-tracking system, and an interface for initiating various system functions. The remote system consists of a wearable snake-shaped robot with a stereo camera as an end-effector, a robot control system, and an audio-visual communication system.
Although various works have proposed different control methods for wearable robots [11,12,13], these control methods focus on exoskeletons and rehabilitation tasks, which have fundamentally different design objectives than telexistence robots. Previous works in telexistence have proposed various control methods, such as using HMDs [14,15,16] and foot-mounted trackers [17,18] to remotely control robots. Accordingly, we extend these previous works to implement and explore the controllability of Piton through the following control methods: (1) using the HMD to control the position and orientation of Piton (HM); (2) using the HMD to control the rotation and a hand-held tracker to control the position of Piton (HH); and (3) using the HMD to control the rotation and a foot-mounted tracker to control the position of Piton (FM).
Telexistence systems have several critical requirements for enabling a highly immersive telexistence experience. These requirements include low mental workload [17,19], low motion sickness effects [20], and high body ownership over the remote system while interacting with the remote environment [21]. Accordingly, we focused our evaluation on gauging and comparing the suitability of Piton’s three control methods for use in telexistence experiences. Therefore, we used the NASA Task Load Index (NASA-TLX) [22] to measure mental and physical workloads, the Virtual Reality Sickness Questionnaire (VRSQ) [23] to measure motion sickness effects, and the Illusion of Virtual Body Ownership questionnaire (Alpha IVBO) [24] to measure body ownership effects.
We recruited 17 participants to evaluate the three control methods across three tasks: mirroring on a screen (Task 1), finding letters and numbers on Uno Stacko blocks (Task 2), and reading text wrapped around a cardboard box (Task 3). Each task was performed with each of the three control methods. Overall, the results indicate that Piton can be used for telexistence, as all three control methods demonstrated overall good scores on the mentioned questionnaires. Such findings are critical, since this is the first study to provide evidence that snake-shaped robotic systems can be used for telexistence, which paves the way to investigate novel robotic form-factors and control methods for use in telexistence.
The remainder of this paper is organized as follows. Section 2 reviews the related work. Section 3 introduces the design concept. Section 4 describes Piton’s novel interactions and contexts of use. Section 5 explains the system implementation. Section 6 describes Piton’s control methods and their calibration process. Section 7 describes the evaluation and results. Section 8 provides a discussion of the quantitative and qualitative results. Section 9 describes the future work and conclusions of the paper.

1.2. Contribution

We summarize the contributions of our paper as follows:
1.
To the best of our knowledge, our work is the first to design, fabricate, and explore using a wearable snake-shaped robot for telexistence. Although previous works have explored the snake form-factor for various domains, such as for a wearable multipurpose appendage or for multi-haptic feedback in VR [9,25], the snake form-factor was not evaluated in any previous work within the telexistence research domain. Our literature survey shows that telexistence robots, whether wearable or not, are mainly implemented as anthropomorphic robots; the shape of the robot’s head and structure replicates that of a human. This contribution is significant, as nonhuman telexistence robots are unexplored within the domain of telexistence. Moreover, we show that a snake-like robotic system can be advantageous in various scenarios when compared to human-like telexistence systems. For example, we demonstrate how Piton can use its malleable body to inspect objects from multiple points of view, and its large workspace can be used to gain better awareness of the surrounding environment.
2.
We present three control methods for using Piton while complying with the requirements of telexistence. Previous efforts in telexistence mainly focused on standard control methods that mimicked simple human head movements, relying mainly on linear and direct mapping between the user’s head rotations and those of the robot [15,16,26]. Unlike previous works, our developed control methods utilize the user’s head, hands, and feet while using linear and scalar mappings to control a snake-shaped robot. This contribution advances the state-of-the-art design and development as it proposes and evaluates novel control methods for use in telexistence robots and proceeds to provide evidence of their validity for telexistence (contribution 3).
3.
We provide user study results which indicate that Piton can be used for telexistence. We evaluated the three developed control methods against the main requirements of telexistence, which are a high sense of body ownership, low motion sickness, and low mental and physical load. The results show that the three control methods generally meet telexistence requirements. This contribution is significant for telexistence, as the results and statistical analysis show that a snake-like robot can be used for telexistence.
Overall, the contributions of this paper are critical to advance the state-of-the-art in telexistence robots. To the best of our knowledge, this work is the first to propose a snake-like telexistence robot and to propose three control methods for controlling the robot. Most significantly, the user study results of these three control methods show that Piton can be used for telexistence, as it meets the main requirements of telexistence systems. Therefore, such contributions are essential for paving the way for future generations of nonanthropomorphic telexistence robots that span beyond human-like robots and typical control methods.

2. Related Work

Our research expands on three main research areas: (1) nonrobotic telepresence systems, (2) robotic telepresence systems, and (3) telexistence systems. We explain each of these below.
A variety of researchers have explored nonrobotic telepresence systems. These systems mainly communicate primary modalities such as visual and auditory information. “Livemask” [27] is a facial screen that stands on a table like a monitor and mimics a remote-side user’s face. The screen acts as a surrogate telepresence system that shows the face on a local screen based on a remote user’s 3D face data. Similarly, “ChameleonMask” [28] is a wearable screen worn by a surrogate user, where the screen shows a remote user’s face. The surrogate wearing the screen responds to requests from the remote user displayed on the wearable screen. “Livesphere” [29] is a head-mounted device, worn by a remote user, with six cameras on top of it to capture the surrounding area. The local user uses an HMD to see spherical images and controls the view via the HMD’s head rotation data.
Robotic telepresence systems extend the nonrobotic telepresence systems by providing higher movement flexibility and interactivity within the remote environment. TEROOS [30] is a wearable, small telepresence humanoid robot mounted on the user’s shoulder and controlled remotely by another user. This wearable robot can rotate 92 degrees horizontally and 110 degrees vertically, thereby enabling the controlling user to inspect remote environments. “WithYou” [31] is a wearable device with a pan-and-tilt camera and various sensors mounted on the remote user’s chest, allowing local users to use an HMD to control the pan and tilt motions of the camera, which is worn by a remote user.
Telexistence is a similar concept to telepresence; however, telexistence focuses on a highly realistic sensation of existence in a remote location by engaging multiple sensory modalities [32]. TELESAR [7] is a telexistence robot, consisting of a humanoid robot and a cockpit system for controlling the robot. The cockpit consists of various control and feedback devices, including an HMD, speakers, a microphone, a haptic glove, and a motion tracking system. The remote location includes a humanoid robot with a stereoscopic camera, microphone, and speakers. The user controls the humanoid robot using a motion tracking system, with markers on their head, shoulders, arms, hands, and feet. Fusion [16] is a wearable telexistence system, where the local system consists of an HMD (the Oculus CV1) for viewing and head motion control, as well as hand controllers for controlling the hands of the wearable robot. The remote system consists of a wearable backpack-like robot, comprising a robotic head with stereoscopic vision, speakers, and microphones, as well as two robotic arms (six DoFs each) with five-fingered robotic hands. Al-Remaihi et al. [33] investigated a telexistence robotic system for remote physical object manipulation. The system consists of a robot arm with a gripper end-effector that can be remotely operated using an HMD and a hand exoskeleton. The user at the local site controls the robot using a Vive Tracker [34] mounted on the hand, while the HMD is used to monitor the remote environment. A 3D-printed exoskeleton is used both for controlling the robotic gripper and for receiving haptic feedback.
Unlike the majority of telepresence systems which do not exhibit a high sense of immersion, body ownership, or physical interactions, Piton advances the state-of-the-art development of telexistence systems. Piton is the first to utilize a wearable robot with a snake form-factor, which provides higher flexibility in movement to explore remote environments beyond previous systems. Piton’s flexibility allows for novel interactions that have not been explored before, such as inspecting and assisting in industrial or daily application contexts.

3. Design Concept

Our main objective is to design a snake-shaped, wearable telexistence robot for sharing travel and work-related experiences with remote users. In order to develop a telexistence system for these purposes, we set several main design considerations and show how we satisfy them in Piton, in a similar fashion to previously developed robots [9,15,25,35,36]. Accordingly, our concept design has three primary design considerations: (1) telexistence, (2) flexible head movement, and (3) wearability. Below, we discuss how we aim to achieve each design consideration:
  • Telexistence: A telexistence system requires low-latency stereoscopic visual feedback [37], low-latency auditory communication [38], and physical interactions with the remote environment [39].
  • Flexible Head Movement: The robot should have flexible head movements with high DoFs and redundancy, similar to snake-shaped robots [25]. Such high flexibility enables users to have large operational workspaces, which in turn enable efficient inspection of remote environments and objects.
  • Wearability: Our system should be designed as a wearable system so remote users can easily wear it and go to various locations. Wearable systems can also provide numerous interaction potentials to assist remote users in various contexts that are otherwise difficult to access through mobile robots (e.g., going up stairs or in narrow corridors).
We implemented the design considerations of Piton in a wearable robot system, comprising a local site (Figure 1a), where a user controls and communicates through the remote robot, and a remote site (Figure 1b), where a surrogate user wears the robot. The supplementary materials (Video S1) show how the local and remote systems are implemented. We explain how we implemented the design considerations in our system as follows:
  • Telexistence: Our system uses a stereo camera system [40] embedded on the end-effector of the robot; auditory communication, including a mic and speaker (both explained in Section 5.2.2 and Section 5.3.2) on the local and remote system; and an HMD on the local system to enable an immersive experience (detailed in Section 5.2.1).
  • Flexible Head Movement: The telexistence robot is snake-shaped with eight interlinked servomotors (explained in Section 5.3.1). Snake-type systems enable a high degree of redundancy, which in turn, allows the robot to be situated in a wide variety of postures using various movement trajectories.
  • Wearability: We developed a lightweight robot system capable of being worn on various locations around the body, such as the shoulder or waist. The robot is mounted on a rack that is worn as a backpack, which enables users to conveniently wear or take-off the robot (explained in Section 5.3.1).
The term “wearable robot” is mostly associated with traditional robotic systems, such as exoskeletons or rehabilitation robots [41]. However, novel forms of wearable robots have emerged in recent years. Such emerging wearable robots are designed to be continuously worn and used while worn, meeting the same presumptions of wearability as wearable devices and robots [42]. Some examples include supernumerary robotic limbs (SRLs) [9,35,43], wearable companion robots [44], haptic feedback robots [25], and telexistence wearable robots [16]. Accordingly, Piton is the first wearable snake-shaped telexistence robot that is designed to be ergonomically worn like a backpack and used while being worn.

4. Piton’s Novel Interactions and Contexts of Use

Previous works show that the malleability of the snake form-factor enables robots to be flexible enough to be used in a large variety of usage contexts [9,10,25,45]. Similarly, Piton can be used in different unexplored interaction contexts, whether for daily use or industrial contexts. Piton can be used in daily usage contexts to enable companionship with a remote user [10]. For example, Piton can be used for interacting with the surrogate user (as shown in Figure 2a) or the remote environment, such as checking merchandise or enjoying activities with the surrogate (as shown in Figure 2b).
Piton can also be used within social contexts (e.g., social gatherings or parties), where it can engage in interactions with various users at the remote site (Figure 2c). Since Piton is a wearable robot, it can easily be taken to various locations, such as outdoors for hiking, shopping, or going to museums, which paves the way for various potential deployment contexts [46]. Moreover, Piton can also be used for practical use cases, such as guiding users at home to fix equipment or as a replacement for existing teleconferencing solutions [9].
Within industrial contexts, Piton can have a variety of advantages over existing systems since it can move and inspect objects from various directions and distances and can be used for skill-transfer applications (as shown in Figure 3a). For example, Piton can be deployed at a remote industrial location to inspect equipment or instruct workers on how to operate machinery correctly at distant locations (e.g., power plants, offshore oil rigs, different cities, etc.). In such scenarios, an expert controls Piton and carries out inspection tasks for environments or objects with a surrogate user (as shown in Figure 3b). The combination of telexistence, multimodal communication capabilities, and Piton’s large workspace enables it to offer a versatile user experience within remote environments, thereby potentially contributing to saving effort and time through remote work within various application contexts.
Overall, Piton can be used as a test-bed to explore the mentioned application contexts, which can pave the way for future implementations of Piton that focus on specific application domains. For example, within industrial tasks, Piton can be integrated with thermal imaging cameras or additional sensors. For daily usage contexts, Piton can be designed with smaller, slimmer, or unobtrusive form-factors to satisfy the requirements of daily use [9]. The supplementary materials (Video S1) demonstrate a variety of novel potential application contexts of Piton.

5. System Implementation

Our implementation of Piton consists of two parts, a local system and a remote system. In the subsections below, we start by highlighting the system integration architecture, followed by a detailed implementation of each of Piton’s systems at the remote and local sites.

5.1. System Integration

Our implementation consists of a system at the local site (local system) and a remote site (remote system), shown in Figure 4. This architecture extends common telepresence and telexistence systems architectures [28,46] by focusing on various control methods, an inverse kinematic model, and a novel robotic form-factor. The next subsections explain the local and remote system implementations in detail.

5.2. Local System

The local system consists of two main components: (1) the head-mounted display and tracking system and (2) the control system. Both components are explained in the following subsections.

5.2.1. HMD and Tracking System

We used the HTC Vive HMD, trackers, and HTC Vive tracking system to track the user’s head orientation and to carry out positional control of the robot [34]. We evaluated three control methods (HM, shown in Figure 5a; HH, shown in Figure 5b; and FM, shown in Figure 5c). While HM is implemented only using the HMD, enabling both rotational and positional control of Piton, two HTC Vive trackers were used to implement the foot and hand controls of the robot (further explained in Section 6). Moreover, the HMD is equipped with a built-in microphone and headphones to establish auditory communication with the remote user.

5.2.2. Control System

We used the Unity3D engine [47] to develop the robot control system, integrate the stereoscopic video stream, and establish auditory communication. We chose Unity3D as it enabled us to easily integrate various control and feedback components to interface with the robot, the HMD, its tracking system, and other hardware.
Robot Control: We implemented an inverse kinematic (IK) solver based on BioIK [48] so that the solver can provide a solution for each of the robot’s servomotors. We used two IK solvers: one to set the rotations of Piton’s head (using the HMD in all the control methods), while the other was used to set the position of Piton (dependent on the chosen control method). For positional movements, we created a positional movement objective that can be moved using the three control methods (shown in Figure 6a,b). When the objective is moved, the IK solver attempts to find a solution that satisfies the positional objective while also satisfying other constraints (keeping the robot’s head straight, angular limits, etc.). The IK solver was optimized to fulfill the movement objectives based on the three control methods needed for the user study (further discussed in Section 6.1). For example, if the objective is moved to the top or right position (Figure 6a,b), the robot’s model moves to the top or right position following the objective. The positional movement objective of the IK [48] is set for the fourth servomotor, so that we can obtain a solution for the first four servomotors of the robot. The IK solver is only used for positional movements, whereas rotational movements are read directly from the HMD (further detailed in Section 6).
Accordingly, users only need to move the HMD (in the HM control method) or the HMD with trackers (in the HH and FM control methods) to control Piton. The HMD’s orientation data (based on its IMU) and the trackers’ positional data are read as inputs, where they are mapped to control the robot head’s rotations and the IK target objectives’ positions, respectively (further explained in Section 6). When the user moves the HMD or trackers, the IK objectives associated with the HMD and trackers also move, causing the IK system to recalculate a new solution to direct the robot movements (more details in Section 6.1). The angles from the IK system are read in real time and transmitted to the robot through WebSocket [9,49], after which they are executed. We also developed a simple interface that allows users to initiate and set up various system functions for the robot, camera, and calibration system (Figure 7).
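To make this data flow concrete, the following minimal sketch (in Python rather than the Unity3D/C# implementation used in our system) illustrates how one frame of servomotor angles obtained from an IK solution could be serialized and published to the robot over WebSocket. The address, message schema, and use of the websocket-client package are illustrative assumptions, not the system’s actual protocol.

```python
# Minimal sketch: publishing one frame of IK-derived servomotor angles over WebSocket.
# The URL and message format below are hypothetical.
import json
import websocket  # pip install websocket-client

ROBOT_WS_URL = "ws://192.168.0.10:9000"  # hypothetical address of the remote control PC


def send_servo_angles(ws, angles_deg):
    """Send one frame of joint angles (degrees) for Piton's eight servomotors."""
    msg = {"type": "servo_angles", "angles": [round(float(a), 2) for a in angles_deg]}
    ws.send(json.dumps(msg))


if __name__ == "__main__":
    ws = websocket.create_connection(ROBOT_WS_URL)
    # In the real system these angles come from the IK solution every frame;
    # here a neutral pose is sent as a placeholder.
    send_servo_angles(ws, [0.0] * 8)
    ws.close()
```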
Our initial evaluation of latency, based on the network latency method [50,51] indicates an average of 11.64 ms (SD = 3.40), which was measured from the moment positional and rotational data of the IK model were read in the Unity3D environment to the moment the servomotor angles are executed on Piton. Previous works in telexistence and virtual reality indicate that latency should be less than 200 ms in order to provide a smooth experience and not induce simulator sickness [52,53]. Accordingly, we believe the latency of our system is within the acceptable range.
Stereoscopic Video Stream: Our system used the GStreamer plugin for Unity3D, which was used and evaluated in previous work [39,54,55]. Once the camera video feed is received at the local system, it is divided into two images to be viewed by the left and right eyes in the HMD. Since our system replicates the same hardware and software implementation as a previous system [54], the estimated latency is 80 ms, which was measured from the moment the ZED Mini stereo camera captures the stereo images at the remote site to the moment the images are shown on the HMD at the local site.
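As a small illustration of the per-eye split, the sketch below divides a stereo frame into left- and right-eye images; the assumption that the decoded frame packs both views side by side, and the example resolution, are ours and do not describe the Unity3D plugin’s internals.

```python
import numpy as np


def split_side_by_side(frame: np.ndarray):
    """Split a side-by-side stereo frame (H x 2W x 3) into left- and right-eye images.
    Assumes the decoded stereo frame packs both views horizontally."""
    height, width, _ = frame.shape
    return frame[:, : width // 2], frame[:, width // 2:]


# Example with a dummy 720p-per-eye frame (2 x 1280 pixels wide).
left, right = split_side_by_side(np.zeros((720, 2560, 3), dtype=np.uint8))
```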
Audio Streaming: We used the WebRTC Unity3D plug-in [56] to provide an easy and reliable audio communication method across the internet.

5.3. Remote System

The remote system comprises two main components: (1) the implementation of the Piton robot and (2) the robot controller software and audio-visual streaming. These components are explained in the following subsections.

5.3.1. Piton Implementation

The structure of the robot consists of eight servomotors (eight DoFs) interlinked in a way that is similar to snake-shaped robots. The robot was implemented using two Dynamixel MX64-AT servomotors, one MX106 servomotor, one 2XL430 servomotor, and three AX12A servomotors [57]. We slightly optimized the PID parameters for the experiment to reduce shaking and overshooting. The values for the first servomotor are (800, 100, 10,652); the second servomotor, (500, 100, 6000); and the third servomotor, (500, 100, 6000), whereas the rest of the motors use the manufacturer’s default PID parameters. The links between the servomotors were made of aluminum, whereas the last four servomotors were linked using PLA frames (Figure 8). The end-effector was custom-designed and 3D printed with PLA to house the ZED Mini camera [40]. The supplementary materials (Video S1) further show the robot from various angles and in use.
Piton is mounted on a lightweight backpack rack and mainly constructed with aluminum (front view as shown in Figure 9a and side view as shown in Figure 9b). We designed the base motor’s enclosure, which was 3D printed using ABS, to be fitted on the user’s shoulder. The high DoF of the robot provides a flexible workspace for moving and situating the end-effector in different postures to inspect remote objects.
The total weight of the robot is 0.98 kg, and its length is 53 cm. The weight of the single-board computer (LattePanda Alpha) is 0.35 kg, the backpack rack is 1.05 kg, the microphone is 0.07 kg, the speaker is 0.03 kg, and the ZED Mini camera is 0.06 kg. Therefore, the total weight of Piton is approximately 2.54 kg.

5.3.2. Robot Control Software and Audio-Visual Streaming

Robot Control: The robot was connected to LattePanda Alpha [58], which is a low-powered PC. Our robot control software extends previous works [9,10,25], by implementing a network-based robot control in a publisher–subscriber model using WebSocket [10]. The user interface of the software is shown in Figure 10.
Audio-Visual Streaming: The remote system uses GStreamer for video streaming over the network [59]. Our system can transmit a real-time video feed captured at 60 frames per second with x264 encoding, which is sent over UDP to the local system. Data transmission can be initiated via a Windows command prompt. We set various attributes in order to establish the connections, including the local user’s IP address, port number, and the source of the camera device (i.e., the ZED Mini camera). Audio communication was implemented using WebRTC on Unity3D, similar to the local system.
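For illustration, a sender pipeline of this kind could be assembled as sketched below using GStreamer’s Python bindings. The source element, encoder settings, host, and port are assumptions and do not reproduce the exact pipeline used in our system.

```python
# Illustrative GStreamer sender sketch (Python bindings); element choices and
# parameters are assumptions, not the pipeline used in the paper.
import gi

gi.require_version("Gst", "1.0")
from gi.repository import Gst

Gst.init(None)

PIPELINE = (
    "autovideosrc ! videoconvert ! "
    "x264enc tune=zerolatency speed-preset=ultrafast bitrate=4000 ! "
    "rtph264pay ! udpsink host=192.168.0.20 port=5000"  # local system's address (hypothetical)
)

pipeline = Gst.parse_launch(PIPELINE)
pipeline.set_state(Gst.State.PLAYING)
try:
    # Block until an error or end-of-stream message arrives on the bus.
    bus = pipeline.get_bus()
    bus.timed_pop_filtered(Gst.CLOCK_TIME_NONE,
                           Gst.MessageType.ERROR | Gst.MessageType.EOS)
finally:
    pipeline.set_state(Gst.State.NULL)
```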

6. Piton Control Methods

We implemented three control methods, HM—using a head-mounted display (HMD), HH—using an HMD and hand-held tracker, and FM—using an HMD and a foot-mounted tracker. The supplementary materials (Video S1) show how each of the control methods are used to control Piton.
Previous research evaluated foot-controlled wearable robots, which utilized a linear control method to map foot movement to the robot’s movement [17,18]. In linear control, the robot’s workspace is measured and directly converted to a movement range for the user’s leg. Therefore, each point in the foot movement workspace is directly and linearly linked to a corresponding point in the robot’s physical workspace [17,18]. Despite the simplicity of linear control, an inevitable drawback of such a method is that it does not compensate for individual physiological differences among users’ bodies. For example, short users may not be able to extend their leg to completely cover the entire workspace of the robot while seated.
In contrast to linear control, scalar control maps a user-defined workspace to the robot’s physical workspace, where the user-defined workspace does not necessarily match the dimensions of the robot’s physical workspace. Linear calibration and control are widely used in telexistence systems [7,20], as they can easily be applied to map a user’s head rotations to those of the telexistence robot [7,17,20]. In scalar calibration, each point in the user-defined workspace is mapped to a corresponding point in the robot’s workspace to control the robot. The maximum and minimum movement can be scaled to a user’s defined body movement ranges. More generally, we can scale the movement ranges of the user’s hand or foot to compensate for users’ physiological differences. Accordingly, we utilized linear mapping for controlling the robot’s head rotations in all control methods, whereas we used scalar mapping for controlling positional movements (determined by the head, hand, or foot movements in each of the control methods). To ensure proper execution of the controls, each user conducted an individual calibration of the intended control method before usage.
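The difference between the two mappings can be summarized in the following sketch; the robot workspace limits and calibration corners are illustrative values only, not the dimensions of Piton’s actual workspace.

```python
import numpy as np

# Hypothetical robot workspace limits (metres), for illustration only.
ROBOT_MIN = np.array([-0.3, -0.3, 0.0])
ROBOT_MAX = np.array([0.3, 0.3, 0.5])


def linear_map(user_pos):
    """Linear control: the user's workspace is taken as identical to the robot's
    physical workspace, so positions pass through unchanged apart from clamping."""
    return np.clip(user_pos, ROBOT_MIN, ROBOT_MAX)


def scalar_map(user_pos, cal_min, cal_max):
    """Scalar control: positions are normalised within the user's individually
    calibrated range (cal_min/cal_max) and rescaled into the robot's workspace,
    compensating for physiological differences between users."""
    t = (np.asarray(user_pos, dtype=float) - cal_min) / (cal_max - cal_min)
    t = np.clip(t, 0.0, 1.0)
    return ROBOT_MIN + t * (ROBOT_MAX - ROBOT_MIN)


# Example: a tracker position inside a workspace calibrated from two corner poses.
cal_min = np.array([-0.20, -0.15, 0.0])
cal_max = np.array([0.25, 0.20, 0.4])
print(scalar_map([0.0, 0.0, 0.2], cal_min, cal_max))
```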

6.1. Calibration Procedures

Our calibration process is conducted once for each user prior to using each of the control methods. The calibration is required as it ensures that the produced robot movements match each user’s unique physiological movement ranges of their head, hand, and foot. Overall, we used the HMD for rotational movement in the HM, HH, and FM control methods. The calibration process for rotational movement starts by instructing users to sit straight while wearing the HMD. Linear calibration of the rotational movement is initiated based on the initial idle pose of the user’s head, which is set to zero on all rotational axes. Each rotational axis enables a rotation of 140 degrees (70 degrees in each direction), corresponding to the user’s movement. Such movements directly control Piton’s head rotations within the same movement ranges, thereby directly controlling the three servomotor angles holding the ZED Mini camera (pitch, yaw, and tilt).
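A minimal sketch of this rotational calibration and linear mapping is shown below; the idle-pose values and clamping implementation are illustrative, not the system’s actual code.

```python
import numpy as np


def calibrate_zero_pose(idle_pitch_yaw_tilt):
    """Record the user's idle head orientation (degrees); it becomes the zero reference."""
    return np.asarray(idle_pitch_yaw_tilt, dtype=float)


def head_to_camera_servos(current_pitch_yaw_tilt, zero_ref, limit_deg=70.0):
    """Linear mapping of head rotation to the three camera servos: rotation relative
    to the zero reference, clamped to +/-70 degrees per axis (a 140-degree range)."""
    relative = np.asarray(current_pitch_yaw_tilt, dtype=float) - zero_ref
    return np.clip(relative, -limit_deg, limit_deg)


zero = calibrate_zero_pose([2.0, -1.5, 0.3])             # idle pose captured at start-up
print(head_to_camera_servos([30.0, -95.0, 5.0], zero))   # yaw clamped to -70 degrees
```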
While rotational movements are calibrated in the same way across all control methods, the calibration procedures for positional movements differ in each control method. The positional-movement calibration procedures are explained in the next subsections.

6.1.1. HM

Positional movement in the HM is calibrated by instructing users to move their head to the lowest point near their left knee, then to move to the highest point on the top-back-right side (as shown in Figure 11a,b). Users were instructed to stretch their bodies as much as they comfortably can at each of the calibration points. Such calibration movements form a cube with coordinates at the bottom-front-left and top-back-right corners, where each point within such a workspace is scaled to a movement point in the robot’s IK [48] workspace to control the robot’s location.

6.1.2. HH

The HH control method is calibrated by instructing users to hold the tracker with their dominant hand and move it to the left-bottom-front position, as high as their knee and as far as their hand extends (as shown in Figure 12a). Then, they are instructed to move their hand to the right-top-back position, as high as their shoulder and as close as possible to their shoulder (Figure 12b).

6.1.3. FM

The calibration process starts by instructing users to attach the tracker on top of their shoe using velcro and to sit facing the calibration area, which is designated by a white square on the floor (as shown in Figure 13a,b). The white square is approximately 550 mm × 400 mm; these dimensions were selected to guide users during the calibration, as the calibration is scalar and adaptive to the user’s calibration procedure. Next, users are instructed to move their foot to the bottom-right and top-left edges of the white box as far as they comfortably can (as shown in Figure 13a). Vertical positional calibration is conducted by asking users to dorsiflex their foot so that it faces upwards (bending the toes upwards while keeping the heel on the floor), as the difference between the natural foot posture on the floor and the dorsiflexed foot position (as shown in Figure 13b) determines the vertical workspace of the robot. Therefore, the vertical workspace was adaptive to each user’s maximum foot dorsiflexion angle. We chose this calibration procedure as we believe raising the foot to high positions may cause users to become tired. The dorsiflexed pose enables users to make vertical movements while resting their foot on the floor (keeping the heel on the floor).

6.2. Using the Control Methods

To control Piton using each of the control methods (HM, HH, and FM), users can directly rotate Piton’s head by rotating their own heads (while wearing the HMD). Positional movements are accomplished differently depending on the selected control method (as in Figure 14). The positional movement information, captured in real time through the HMD (HM) or the trackers (HH, FM), is fed to the coordinate system and set as an IK objective for the IK solver [48]. The received positional movement information is scaled to match the workspace of the robot’s movements. Next, the IK solver finds a solution that satisfies the set positional movement objective in real time. Upon finding an IK solution that satisfies the set objectives, our system extracts the servomotor angle values of the provided solution and sends them to the robot’s control system over the network. Finally, the robot control system directly executes the servomotor angles on Piton through the robot control software.

7. Evaluation

7.1. User Study Objectives

There are fundamental differences between human head movement and Piton’s movement. Compared to human head movement, Piton has a larger movement workspace due to its long snake-like form-factor. It can rotate on each axis at angles surpassing natural human movements. Such differences present a critical control issue, given that direct mapping between a user’s head and Piton cannot be easily achieved, both in terms of movements and orientations. These differences present challenges in achieving telexistence, especially as all surveyed telexistence systems use anthropomorphic robotic structures to directly match the user’s head movements with the robot’s movements.
Fundamental requirements of telexistence include low mental and physical loads [17], low motion sickness [20], and high body ownership [21] over the robot. Therefore, an essential objective of any control method for Piton is to have acceptable overall scores across these requirements. Accordingly, we set the main objective of our evaluation to study the suitability of the three implemented control methods for use in telexistence, by focusing on the measurements of their mental and physical demands, motion sickness effects, and body ownership effects during various tasks. We used the NASA Task Load Index (NASA-TLX) [22] to measure users’ mental and physical demands, the Virtual Reality Sickness Questionnaire (VRSQ) [23] to measure motion sickness effects, and the Alpha IVBO questionnaire [24] to measure body ownership effects.
In addition to the mentioned requirements, we also explored various user impressions and opinions upon using these control methods using post-study questionnaires. The findings are significant in enabling us to understand the suitability and usability of the three implemented control methods across various contexts of use.
This study was conducted in Japan from September 2021 to May 2022 and was performed in accordance with the guidelines and procedures of the Office of Research Ethics of Waseda University.

7.1.1. Participants

We recruited seventeen participants, fourteen males and three females, aged 20–40 years (M: 26.56, SD: 5.92), from within and outside the university. Participants came from various disciplines and backgrounds. All participants indicated that they had used VR at least once before and stated that they did not have significant eyesight problems (six participants used eyeglasses with the HMD).

7.1.2. Experimental Set-Up and Tasks

As shown in Figure 8, we mounted Piton on an aluminum frame, set the servomotor speeds to 17 rpm, and used the PID parameters explained in Section 5.3.1 for our user study. We chose this set-up and servomotor configuration for the following reasons: First, it is similar to wearing and using Piton while standing still in a remote environment. Second, it maintains safety while using Piton, especially since participants might rapidly move Piton as they try the three control methods. Third, this set-up enabled us to easily place a variety of equipment and objects needed for the different conditions of our experiment. Lastly, we chose these specific servomotor speeds because higher speeds would cause strong overshooting and camera shaking, which would negatively affect the studied factors. Therefore, we instructed users to move their head and trackers at moderate speeds to match the robot’s speeds.
This experiment consists of three main tasks, which are explained as follows:
In Task 1 (mirroring), users look at a monitor placed in front of the robot to observe the robot’s body for one and a half minutes (as shown in Figure 15a,b). The monitor is equipped with a camera and acts as a mirror (similar to previous work [24,60]). Mirroring is used to enhance virtual body ownership so that users can adapt to the three methods of controlling the robot’s movement. This task is inspired by previous work on animal body ownership in VR [61].
Task 2 requires users to find and look at five randomly mentioned numbers and alphabetical letters from Uno Stacko [62]. The task is timed for a length of three minutes and involves approximately five trials (similar to previous work [63]). Uno Stacko is a block game where the goal is to match the color or number of the last block pulled and restack it on the top. In our study, we set Uno Stacko to show random numbers and letters with various colors on the front, left, right, and top sides (Figure 16a,b). This task was chosen to make users inspect Uno Stacko from multiple heights, directions, and orientations, resembling object inspection tasks [9].
Task 3 requires users to read a sentence that is printed and wrapped around a cardboard box, as shown in Figure 17a,b. The task is timed to finish in three minutes and involves approximately two trials. In each trial, users have to read a text from the left side to the right side of the box. We prepared six texts in total, and two texts are randomly used for each control method. This task was chosen to evaluate flexible head movement capabilities when scanning objects horizontally.

7.1.3. Flow

The user study used a within-subject design and began with the participant completing a demographic questionnaire. Next, a researcher explained the experiment’s objectives, introduced Piton, and demonstrated its movement workspace. After that, the participant put on the HMD, and the researcher selected a random control method for the user. Each participant was given about one and a half minutes (similar to previous work [24,60]) to familiarize themselves and observe how the IK robot model moves in response to the control method.
Next, Task 1 was conducted (one and a half minutes), followed by Task 2 (three minutes) and Task 3 (three minutes), where the order of performing task 2 and task 3 was randomized.
After performing Task 1, users completed the body ownership survey (based on the Alpha IVBO [24]). Upon finishing Tasks 2 and 3, users completed the Virtual Reality Sickness Questionnaire (VRSQ) [23] and the NASA-TLX [22] for mental and physical workload. After that, the users were given a three-minute break. The same flow was repeated upon completing the surveys, albeit with another randomly selected control method. After completing all the control method conditions, participants were given a survey to measure their overall preferences and impressions of the control methods.

7.2. Results and Analysis

7.2.1. Quantitative Results and Analysis

Alpha IVBO

This questionnaire consists of three parts: acceptance, control, and change [24]. Acceptance corresponds to accepting the virtual body parts as one’s body parts, control corresponds to controlling the virtual body as one’s own body, and change corresponds to physiological or aesthetic changes felt during or after using a system. The questions were on a 7-point scale (1: strongly disagree to 7: strongly agree). The question results were grouped based on the three parts of Alpha IVBO (acceptance, control, and change), and then their mean values were calculated, as shown in Figure 18. A higher score indicates that the control methods were better in terms of users perceiving a virtual body as their own. The results of each of the Alpha IVBO parts are as follows.
Acceptance: the FM (M = 4.09, STD = 1.19) was rated lower than the HH (M = 4.44, STD = 1.00) and HM (M = 4.85, STD = 1.16).
Control: the FM (M = 4.90, STD = 0.77) was rated lower than the HH (M = 5.00, STD = 0.83) and HM (M = 5.16, STD = 0.87).
Change: the FM (M = 2.54, STD = 1.22) was rated lower than the HH (M = 3.08, STD = 0.96) and HM (M = 3.22, STD = 1.01).
A repeated-measures ANOVA was performed to confirm whether there was a difference between the robot control conditions in each subjective scale: the acceptance, control, and change. The repeated-measures ANOVA was selected since the experiment is a within-group design with more than two conditions and two independent variables.
The results of the repeated-measures ANOVA (as shown in Table 1) indicate that the acceptance and change are significantly different across the control methods, whereas control does not yield any significant effect.
As shown in Table 2, pairwise comparisons with a Bonferroni adjustment revealed that there was a statistically significant difference in the change between the HM and FM (p = 0.027) and the HH and FM (p = 0.021). The HM and HH did not yield significant results (p = 1.000).
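For reference, an equivalent analysis can be reproduced with standard statistical tooling, as in the following sketch. The data frame contains random placeholder scores rather than the study data, and the statsmodels/scipy routines shown here are one way to run such an analysis, not necessarily the software we used.

```python
import numpy as np
import pandas as pd
from scipy.stats import ttest_rel
from statsmodels.stats.anova import AnovaRM

# Long-format table: one row per participant x control method; 'score' stands in
# for an Alpha IVBO subscale rating. Values are random placeholders.
rng = np.random.default_rng(0)
df = pd.DataFrame({
    "participant": np.tile(np.arange(17), 3),
    "method": np.repeat(["HM", "HH", "FM"], 17),
    "score": rng.normal(4.5, 1.0, 51),
})

# Repeated-measures ANOVA with control method as the within-subject factor.
res = AnovaRM(df, depvar="score", subject="participant", within=["method"]).fit()
print(res.anova_table)

# Post hoc pairwise t-tests with a Bonferroni adjustment (three comparisons).
pairs = [("HM", "HH"), ("HM", "FM"), ("HH", "FM")]
for a, b in pairs:
    _, p = ttest_rel(df.loc[df.method == a, "score"].values,
                     df.loc[df.method == b, "score"].values)
    print(f"{a} vs {b}: adjusted p = {min(p * len(pairs), 1.0):.3f}")
```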

NASA-TLX

The NASA-TLX consists of six subjective scales: mental demand, physical demand, temporal demand, performance, effort, and frustration. Each of these is divided into seven levels, from positive to negative evaluation with identical intervals [22], as shown in Figure 19.
Mental demand measures mental and perceptual activity requirements; physical demand assesses physical effort by the user during the task; temporal demand measures how pressured the users felt to complete the task; performance measures how successful users thought they were when completing the task; effort assesses how difficult it was to perform the task; and frustration measures the feeling of irritation, stress, or annoyance while completing the task. A lower score in each of the TLX subjective scale scores is considered better. Each of the subjective scale scores is as follows:
Mental demand: the HM (M = 2.06, STD = 1.25) was rated lower than the HH (M = 2.47, STD = 1.59) and FM (M = 2.88, STD = 1.50).
Physical demand: the HH (M = 2.29, STD = 1.26) was rated lower than the HM (M = 2.59, STD = 1.54) and FM (M = 2.71, STD = 1.65).
Temporal demand: the HH (M = 2.59, STD = 1.42) was rated lower than the FM (M = 2.82, STD = 1.33) and HM (M = 2.88, STD = 1.54).
Performance: the HM (M = 1.88, STD = 0.86) was rated lower than the FM (M = 2.47, STD = 1.74) and HH (M = 2.53, STD = 1.46).
Effort: the HM (M = 2.53, STD = 1.50) was rated lower than the HH (M = 3.00, STD = 1.73) and FM (M = 3.29, STD = 1.79).
Frustration: the HM (M = 2.18, STD = 1.42) and HH (M = 2.18, STD = 1.47) were rated lower than the FM (M = 2.47, STD = 1.55).
A repeated-measures ANOVA was performed to confirm whether there was a difference between the control conditions in each subjective scale across the control methods: mental demand, physical demand, temporal demand, performance, effort, and frustration. The repeated-measures ANOVA was selected since the experiment is a within-group design with more than two conditions and independent variables.
The repeated-measures ANOVA indicates that there are no significant differences in any of the NASA-TLX terms across the control methods, as shown in Table 3. The results show that the control methods produce a similar workload effect. These findings are further discussed in Section 8.

VRSQ

The VRSQ is a motion sickness measurement index, specialized for virtual reality environments, that is widely used in various studies [23]. It consists of an oculomotor component and a disorientation component, as shown in Table 4. The oculomotor component covers general discomfort, fatigue, eye strain, and focus difficulty. The disorientation component accounts for headache, fullness of the head, blurred vision, dizziness with eyes open, and vertigo.
The oculomotor score, disorientation score, and VRSQ total score are calculated based on the method from previous work [23], as shown in Figure 20. The oculomotor score is computed as (sum of oculomotor item scores / 12) × 100, the disorientation score as (sum of disorientation item scores / 15) × 100, and the VRSQ total score as (oculomotor score + disorientation score) / 2. A lower VRSQ score is better, as it indicates that the control method triggers fewer motion sickness effects.
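The score computation can be expressed compactly as follows; the assumption that the four oculomotor and five disorientation items are each rated 0–3 (giving maximum raw sums of 12 and 15) follows from the divisors above and is stated here for illustration.

```python
def vrsq_scores(oculomotor_items, disorientation_items):
    """Compute VRSQ component and total scores from the raw item ratings.
    Assumes four oculomotor items and five disorientation items rated 0-3 each,
    so the maximum raw sums are 12 and 15, respectively."""
    oculomotor = sum(oculomotor_items) / 12 * 100
    disorientation = sum(disorientation_items) / 15 * 100
    return oculomotor, disorientation, (oculomotor + disorientation) / 2


# Example with illustrative ratings (not study data); approx. (16.7, 6.7, 11.7).
print(vrsq_scores([1, 0, 1, 0], [0, 0, 1, 0, 0]))
```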
The VRSQ’s lowest oculomotor score was for the FM (M = 12.25, STD = 9.37), followed by the HH (M = 15.20, STD = 18.69) and the HM (M = 20.59, STD = 20.86). Meanwhile, the lowest disorientation score was for the FM and HH (M = 3.14, STD = 4.78), followed by the HM (M = 5.49, STD = 5.89). Overall, the FM obtained the lowest VRSQ total score (M = 7.69, STD = 5.62), followed by the HH (M = 9.17, STD = 10.46) and HM (M = 13.04, STD = 11.64).
The Friedman test was performed to determine whether there was a significant difference in mean between the oculomotor component, disorientation component, and VRSQ total score of the three control methods. The Friedman test was selected because the experiment is a within-group design, and the data are not normally distributed. As shown in Table 5, the Friedman test results show there is a significant effect in the disorientation component across the control methods. However, the oculomotor component and VRSQ total do not yield any significant results.
Since there was a statistical significance in the VRSQ disorientation term, we conducted a post hoc analysis using Wilcoxon signed-rank tests with a Bonferroni correction to find where the differences are. As shown in Table 6, there is no significant difference in disorientation between the conditions (Bonferroni-corrected significance threshold of p < 0.0167). However, there are strong indications of a difference between the HM-HH and HM-FM pairs (as shown in Table 6). We discuss the possible implications of this difference in Section 8.3.
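The Friedman test and Bonferroni-corrected Wilcoxon post hoc procedure can be reproduced as sketched below; the scores are random placeholders rather than the participants’ actual VRSQ responses.

```python
import numpy as np
from scipy.stats import friedmanchisquare, wilcoxon

# Disorientation scores per participant for each control method (placeholder data).
rng = np.random.default_rng(1)
hm, hh, fm = (rng.integers(0, 16, size=17).astype(float) for _ in range(3))

# Omnibus test across the three related samples.
stat, p = friedmanchisquare(hm, hh, fm)
print(f"Friedman: chi2 = {stat:.3f}, p = {p:.4f}")

# Post hoc Wilcoxon signed-rank tests; Bonferroni-corrected alpha = 0.05 / 3.
for name, (a, b) in {"HM-HH": (hm, hh), "HM-FM": (hm, fm), "HH-FM": (hh, fm)}.items():
    w, pw = wilcoxon(a, b)
    print(f"{name}: W = {w:.1f}, p = {pw:.4f}")
```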

7.2.2. Qualitative Results

We asked users several questions to gauge the usability and impressions of the control methods (Q1: easiness of moving to the desired location, Q2: difficulty of looking at objects, Q3: easiness of horizontal movement, Q4: easiness of vertical movement, Q5: rank the control methods from least to most liked, and Q6: rank the control methods in terms of subjective accuracy from least accurate to most accurate). Q1–Q4 were asked individually after finishing each of the conditions (HM, HH, and FM) and used a 6-point Likert scale (6 means best). Q5 and Q6 were asked after finishing all the conditions and consisted of ranking the control methods based on different subjective factors, where each control method had to receive a unique rank.
In Q1, participants thought the easiest control method to move to the desired location (1: strongly disagree to 6: strongly agree) was the HM (M = 4.65, STD = 1.11), followed by the HH (M = 4.41, STD = 1.46) and FM (M = 3.94, STD = 1.43). In Q2, participants thought the easiest control method to look at specific objects (1: difficult to 6: easy) was the HM (M = 4.59, STD = 1.12), followed by the HH (M = 4.47, STD = 1.42) and FM (M = 4.00, STD = 1.06).
In Q3, participants thought the easiest control method for the horizontal movement was the HM (M = 4.65, STD = 1.46), followed by the HH (M = 4.65, STD = 1.17) and FM (M = 4.18, STD = 1.29). In Q4, participants thought the easiest control method for the vertical movement was the HH (M = 5.18, STD = 1.07), followed by the HM (M = 4.24, STD = 1.60), and FM (M = 3.71, STD = 1.26).
In Q5, participants ranked the HM as the most liked control method (M = 2.47, STD = 0.72), followed by the HH (M = 2.06, STD = 0.66) and FM (M = 1.47, STD = 0.80). Similarly, in Q6, participants ranked the HM as the most accurate control method (M = 2.59, STD = 0.62), followed by the HH (M = 2.00, STD = 0.71) and FM (M = 1.41, STD = 0.71).
Users showed different responses to each question regarding usability (as shown in Figure 21). Overall, the users tended to select the HM for high scores over other control methods.
Each qualitative subjective result was further analyzed to reveal significant effects among the conditions. Therefore, we used the repeated-measures ANOVA to confirm whether there was a difference between the control conditions in each of the qualitative scales of Q1–Q6. If a significant effect was found, we ran pairwise comparisons among the conditions to discover exactly where the differences were.
The results of the repeated-measures ANOVA (as shown in Table 7) indicate that the terms of Q4, Q5, and Q6 are significantly different across the control methods, whereas the terms of Q1–Q3 do not yield any significant effects.
Pairwise comparisons with the Bonferroni adjustment (as shown in Table 8) revealed that there was a statistically significant difference in the Q4 results between the HM and HH (p = 0.045), and the HH and FM (p < 0.001). The HM-FM (p = 1.000) did not yield significant results.
Pairwise comparisons with the Bonferroni adjustment (as shown in Table 9) revealed that there was a statistically significant difference in the Q5 results in the HM-FM (p < 0.025). Other conditions did not yield significant results.
As shown in Table 10, pairwise comparisons with the Bonferroni adjustment revealed that there was a statistically significant difference in terms of Q6 between the HM and FM (p < 0.001). Other conditions did not yield significant results.
To conclude, the HM control method was the most favored in terms of easiness of moving to the desired location, for looking at a specific object, moving horizontally, and was the most accurate in situating the robot at different postures during the task. The HM was also the second-most favorable in moving vertically during the tasks.
Meanwhile, the HH control method was the second-most preferred regarding movement to the desired location, easiness of looking at a specific object, and accuracy of movement when locating the robot in various positions during the task; in addition, the HH control method was the most favored in terms of vertical movement and highly favored for horizontal movements.
Lastly, the FM control method was the least preferred in terms of easiness of moving to the desired location, looking at a specific object, and moving horizontally and vertically. The FM was also the least favored in terms of its overall accuracy in controlling Piton.

8. Discussion

In this section, we discuss the qualitative and quantitative results in light of our user study objectives. We conclude by discussing the suitability of each of Piton’s control methods within daily interaction contexts. The results are discussed in the following subsections.

8.1. Alpha IVBO Results

The IVBO-change results indicate that the HM and HH control methods scored significantly higher than the FM control method. This finding indicates that users felt a higher self-perception of the robot body through the HM and HH control methods than the FM. Having a higher self-perception (IVBO-change) contributes to better visual awareness of the surrounding environment and movement [64]. Although there are no significant effects between the conditions in IVBO-acceptance, the reported scores are high in all the control methods. Such results indicate that users felt self-attribution and body ownership of the robot’s body. Lastly, the reported IVBO-control results are high across all control methods, without significant effects between the conditions. The high IVBO-control indicates that the users felt a high agency while using Piton in all the control conditions and across the various tasks.
To conclude, all controls had high ratings for the body ownership in terms of IVBO. The only exception to this is the IVBO-change for the FM, which had a significantly lower rating than other control methods. We believe that this finding indicates a lower body ownership effect for the FM control method when controlling Piton.

8.2. TLX

The TLX results indicated that all control methods have low scores (below or equal to 3 on a 7-point scale) on each of the corresponding six subjective scales. The low mental and physical demand scores indicate that users did not feel mentally or physically exhausted in any of the conditions of our study. The low scores on effort and performance indicate that the users could use each control method easily and successfully to accomplish the various tasks. The temporal term shows that users did not feel rushed to complete the tasks and generally had good pacing. Lastly, the low score in frustration indicates that users did not feel irritated or annoyed with the control methods. Our statistical significance testing did not reveal significant effects among the conditions in any of the TLX terms. Therefore, we conclude that the various control methods had similar scores despite minor differences.
Overall, the low scores indicate that the control methods were not exhausting or demanding across the various TLX terms. Nevertheless, it is important to evaluate the control methods in lengthier user studies, which may reveal different effects on users’ mental and physical effort after extended usage. The results show that the HM, HH, and FM can be used to control Piton in various tasks, provided that they are used for short periods.

8.3. VRSQ

The low VRSQ oculomotor scores across the control methods indicate that users felt minimal motion sickness effects while using Piton (general discomfort, fatigue, eye strain, and focus difficulty). Similarly, the VRSQ disorientation scores were low, indicating that the related symptoms (headache, fullness of head, blurred vision, dizziness, and vertigo) were not pronounced in any of the control methods. Lastly, the VRSQ total scores indicate that the control methods had minimal overall motion sickness effects.
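For readers who wish to trace how the component scores in Table 4 aggregate into the reported oculomotor, disorientation, and total scores, the short sketch below illustrates one way VRSQ scoring can be computed. It is not Piton’s analysis code: it assumes the common 0–3 severity rating per symptom and the normalization described by Kim et al. [23], with item names simply taken from Table 4.

```python
# Minimal VRSQ scoring sketch (assumed 0-3 symptom ratings, normalization per Kim et al. [23]).
OCULOMOTOR = ["general discomfort", "fatigue", "eye strain", "focus difficulty"]
DISORIENTATION = ["headache", "fullness of head", "blurred vision", "dizziness", "vertigo"]

def vrsq_scores(ratings):
    """ratings: dict mapping each symptom name to a 0-3 severity rating."""
    oculomotor = sum(ratings[s] for s in OCULOMOTOR) / (3 * len(OCULOMOTOR)) * 100
    disorientation = sum(ratings[s] for s in DISORIENTATION) / (3 * len(DISORIENTATION)) * 100
    return {"oculomotor": oculomotor,
            "disorientation": disorientation,
            "total": (oculomotor + disorientation) / 2}

# Example: a participant reporting only slight eye strain.
ratings = {s: 0 for s in OCULOMOTOR + DISORIENTATION}
ratings["eye strain"] = 1
print(vrsq_scores(ratings))  # oculomotor ~8.3, disorientation 0.0, total ~4.2
```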
Although our statistical analysis did not reveal significant effects between the conditions in the VRSQ disorientation score, there are strong indications of an effect that was not strong enough to reach significance, especially between the HM-HH and HM-FM pairs (Table 6). We believe the HM produces stronger motion sickness symptoms than the HH and FM for two main reasons. First, Piton is set to execute rotational and positional movements at a fixed speed during the evaluations. Although rotational movements are usually short in duration, positional movements require the entire robot to take a new pose, which is more time-consuming. Accordingly, rapid movements by the user, or movements with varied acceleration/deceleration, may introduce an effect similar to latency in VR systems, which is a common cause of motion sickness [65]. Second, due to anatomical differences between the robot’s structure and the human head, the IK system occasionally produces solutions to positional movement objectives that are correct yet are executed with a slight mismatch to the user’s head location. For example, if the user quickly leans forward to inspect an object closely, the IK solver produces a correct solution for the final pose, but the resulting trajectory is executed on the robot with minor changes that fit the robot structure, the set objectives, and the IK model, thereby producing a slight trajectory mismatch. Eventually, this mismatch causes users to experience visually induced motion sickness [66,67].
In comparison, the HH and FM do not require the user’s head to perform positional controls, which we believe contributed to their lower overall disorientation scores. Both of the discussed challenges can be addressed through PID adjustments of the servomotors and adaptive control of Piton [68], which would significantly reduce the delay between the user’s movements and the robot’s positional movements. Moreover, the IK system should be enhanced to take the produced trajectory into consideration so that it follows the user’s head positional movements more closely; for example, by assigning IK objectives to the middle joints, they can be controlled precisely according to the user’s head position. Furthermore, increasing the speed of the Piton control loop would also contribute to more responsive overall control; a simple sketch of this idea is given below.
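To illustrate the last two points, the sketch below shows one way a faster control loop could stream rate-limited intermediate IK objectives so that the executed trajectory follows the user’s head path instead of jumping to the final pose. It is a sketch under stated assumptions, not Piton’s implementation: solve_ik() and send_servo_angles() are hypothetical placeholders for the IK solver and the WebSocket command channel, and the step size and loop rate are illustrative values.

```python
import time
import numpy as np

# Sketch of a high-rate control loop that streams rate-limited intermediate targets
# to the IK solver. The callables are placeholders supplied by the surrounding system.
MAX_STEP_M = 0.01   # largest target displacement per cycle (assumed)
LOOP_HZ = 100       # assumed control-loop rate

def rate_limited_target(current, goal, max_step=MAX_STEP_M):
    current, goal = np.asarray(current, float), np.asarray(goal, float)
    delta = goal - current
    dist = np.linalg.norm(delta)
    return goal if dist <= max_step else current + delta / dist * max_step

def control_loop(read_head_position, solve_ik, send_servo_angles):
    target = np.asarray(read_head_position(), float)
    while True:
        target = rate_limited_target(target, read_head_position())
        send_servo_angles(solve_ik(target))   # stream the intermediate objective
        time.sleep(1.0 / LOOP_HZ)
```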
Overall, the results show that the control methods do not produce strong motion sickness effects. Although there are indications that the HM produces slightly higher motion sickness effects than the other conditions, enhancements to the robot control loop and IK model can help mitigate such issues. The HH and FM also have minimal motion sickness effects, which encourages us to pursue extended deployments of these control methods.

8.4. Qualitative Data

Overall, users found the HM easiest to use for moving Piton to a desired location and for looking at objects with relatively high accuracy. We believe this was mainly due to the control method’s simplicity and its similarity to natural head movements.
Although binding Piton’s rotational and positional controls to the user’s head movements is intuitive, the limitations of human head movement cause a number of challenges. Participants indicated that some poses were difficult to accomplish, mainly due to the limits of the user’s natural head movement ranges. For example, it is difficult and tiring for users to rotate their heads up while extending their bodies forward. Similarly, it is difficult to access some locations in the calibrated space for positional controls, such as those directly near the user’s waist or thighs. Results from the questionnaire also show that users did not consider the HM best for vertical movements. Our current calibration space is cubic in shape, and this challenge could be addressed by creating a nonuniform calibration space that complies with the limits of human head movement, as sketched below. Therefore, despite its intuitiveness and ease of use, we believe that the limits of natural head and neck movement affect the accessibility and usability of the workspace.
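The following sketch illustrates one possible form of such a nonuniform calibration space: an ellipsoidal comfort region whose per-axis reach is recorded during calibration, with the clamped, normalized head offset then mapped linearly into the robot workspace. The ellipsoidal shape, function names, and mapping are illustrative assumptions, not the current Piton implementation.

```python
import numpy as np

# Sketch of an ellipsoidal (nonuniform) calibration region replacing the cubic workspace.
def fit_reach(samples, center):
    """Half-axis lengths: the comfortable reach recorded along each axis during calibration."""
    return np.max(np.abs(np.asarray(samples, float) - center), axis=0) + 1e-6

def clamp_to_ellipsoid(head_pos, center, half_axes):
    """Normalize the head offset and clamp it to the unit ball (the comfort region)."""
    offset = (np.asarray(head_pos, float) - center) / half_axes
    radius = np.linalg.norm(offset)
    if radius > 1.0:
        offset /= radius
    return offset

def to_robot_workspace(normalized_offset, robot_center, robot_half_range):
    """Map the normalized offset linearly into the robot's positional workspace."""
    return np.asarray(robot_center, float) + normalized_offset * np.asarray(robot_half_range, float)
```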
Users praised the HH control method for providing broad vertical and horizontal movement ranges that were easily accessible for rotational controls as well as vertical and horizontal positioning. In addition, the HH control method was highly rated by participants for the ease of movement to the desired location or for looking at an object. Our results also show that participants thought the HH control method was significantly more accurate in vertical movements than the HM and FM.
Although rotational control was intuitive in the HH, positional control required coordinating the user’s hand movements with their head movements to situate the robot at different postures. Such coordination requires some practice, as users occasionally confused head and hand movements, which led to small control errors during the experiment. Similarly, visualizing the boundaries of the workspace, especially for horizontal movement, is an essential improvement for alerting users when they reach the extent of the possible movement range; a simple proximity check is sketched below. Accordingly, although the HH requires additional head-hand coordination to control Piton, it has the highest subjective accuracy, especially for vertical movement, where these advantages outweigh the movement limitations found in the HM.
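The sketch below shows one simple form such a boundary alert could take: the opacity of a warning overlay grows as the hand-held tracker nears the edge of the calibrated control space. The 10% margin and the overlay hook are assumptions for illustration only.

```python
import numpy as np

# Sketch of a boundary proximity check for the calibrated control space.
def boundary_alpha(pos, lower, upper, margin=0.10):
    """Returns 0 well inside the workspace and ramps to 1 at (or beyond) the boundary."""
    pos, lower, upper = (np.asarray(v, float) for v in (pos, lower, upper))
    span = upper - lower
    dist = np.minimum(pos - lower, upper - pos) / span   # normalized distance to nearest face
    closest = np.clip(dist.min(), 0.0, margin)
    return 1.0 - closest / margin

# e.g., set_overlay_opacity(boundary_alpha(tracker_pos, ws_min, ws_max))  # hypothetical hook
```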
The FM was least preferred by users in terms of movement to a desired position, looking at an object, vertical and horizontal movement, and subjective accuracy, indicating that users generally disliked the FM.
Further analysis of the qualitative results indicates that participants mainly attributed their dislike of the FM to the narrow workspace for positional movements using the foot. Accessibility to various points in the control workspace is an issue, especially when the foot is very close to or far away from the user’s body (as shown in Figure 13b). Such a limitation prevents users from performing certain controls of Piton, such as raising Piton high at its forward-most or backward-most horizontal positions.
Participants also highlighted the difficulty of coordinating head and foot movements to control Piton, especially compared to the HH. Despite these difficulties, users reported that the FM was not as tiring as the HM or HH: they moved their foot within the tracking space while resting on their heels and dorsiflexed their toes up and down to control Piton (a sketch of this mapping is given below). In contrast, the HM and HH require users to move their body or hand to control Piton, which could be more tiring during extensive positional movements or prolonged sessions. However, the NASA-TLX results did not show significant differences among the conditions.
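For clarity, the sketch below illustrates the kind of mapping the FM relies on, where the foot tracker’s planar position drives horizontal placement and dorsiflexion (foot pitch) drives the vertical component. The pitch range, normalization, and workspace extents are illustrative assumptions rather than the exact calibration used in Piton.

```python
import numpy as np

# Sketch of an FM-style mapping: planar foot position -> horizontal target,
# dorsiflexion angle -> vertical target. All constants are assumed values.
PITCH_RANGE_DEG = (0.0, 30.0)   # assumed comfortable dorsiflexion range

def fm_target(foot_xy_norm, foot_pitch_deg, ws_center, ws_half_range):
    """foot_xy_norm is the tracker's planar position normalized to [-1, 1] per axis."""
    lo, hi = PITCH_RANGE_DEG
    pitch_norm = np.clip((foot_pitch_deg - lo) / (hi - lo), 0.0, 1.0)
    offset = np.array([foot_xy_norm[0], foot_xy_norm[1], 2.0 * pitch_norm - 1.0])
    return np.asarray(ws_center, float) + offset * np.asarray(ws_half_range, float)

# e.g., fm_target((0.2, -0.5), 12.0, ws_center=(0, 0, 0.4), ws_half_range=(0.3, 0.3, 0.25))
```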
Overall, the qualitative results show that users favored the HM most, followed by the HH and then the FM. We believe the main factor behind the users’ subjective preference for the HM over the other methods is its familiarity, as it resembles natural head movements. The subjective accuracy scores show that users felt the HH was the most accurate in controlling Piton, followed by the HM and, lastly, the FM. The HH control method’s main advantage is its accessibility and flexibility in horizontal and vertical movements, which surpass the HM and FM, as these are mainly limited by the user’s natural head and foot movement ranges.

8.5. The Suitability of the Control Methods in Various Contexts of Use

The HM control method had a low TLX score, indicating low mental and physical demand, while its low overall VRSQ score indicated only a slight motion sickness effect. The HM also yielded higher body ownership, showing a high self-perception of the robot as the user’s own body. The qualitative analysis indicates that users favored the HM over the other methods due to its intuitiveness and praised its relatively high accuracy, mainly because the HM resembles natural head movements, making it familiar and easy to use. Therefore, we believe the HM control method is suitable for tasks that do not require high accuracy but require an easy and comfortable control method that any person can use. Typical examples are daily usage tasks, such as companionship during travel, hiking, or shopping, as well as remote assistance and guidance at home, such as teaching users how to cook or how to set up and operate a device.
The HH control method had low mental and physical demand, low motion sickness, and higher body ownership, similar to the HM. Users praised the HH for its relatively high accuracy, especially for vertical movement. Therefore, we believe the HH control method is suitable for tasks that require high accuracy in positional movements, for example, industrial tasks that require inspecting tools or objects from multiple angles and distances.
Lastly, the FM control method had low scores for both mental-physical demand and motion sickness. It had a lower IVBO-change score but higher IVBO-acceptance and IVBO-control scores, meaning it had fair body ownership scores overall. The qualitative results show that users liked the FM least because the other control methods were more accurate. However, the FM can potentially be effective for longer usage sessions, as users can rest their foot on the floor while still controlling Piton. Therefore, the FM control method can be utilized for daily life or industrial tasks that do not require high accuracy but involve prolonged usage sessions.
Overall, we believe each control method has different advantages and disadvantages in different contexts, especially since a control method should be designed based on the task’s interaction requirements [69]. Our evaluation provides various insights into the usability of the control methods, and we believe the sample size is suitable for studying Piton’s control methods within the evaluated contexts. However, a larger and more varied sample may yield extended results, especially regarding the suitability of these control methods for different daily or industrial tasks.

9. Conclusions and Future Work

This paper presents Piton, a novel, wearable, snake-like telexistence robot. Piton can be used in various contexts, whether for leisure applications or industrial and professional contexts. We discuss Piton’s implementation specifications and explain three control methods that we used for controlling Piton.
Although all the control methods generally yielded low NASA-TLX (workload) scores, high body ownership, and low motion sickness effects, a number of differences distinguish them, especially in the qualitative results. The HM control method has the highest body ownership results and was most favored by participants due to its intuitiveness, as it resembles natural human head motion. Therefore, the HM is best deployed for tasks that require basic movements, such as those within daily usage contexts. The HH control method has the highest perceived accuracy, since users could easily position Piton in various locations using the hand-held tracker; therefore, we believe it is best suited for tasks requiring high accuracy, such as industrial inspection tasks. The FM was least liked by the participants, who also found it imprecise for the evaluated tasks. However, this control method lets users rest their foot on the floor while controlling the robot, which could be suitable for longer usage sessions.
Although our user study yielded various insights into the usability of the implemented control methods for telexistence, future work should explore additional control methods using other control modalities. Moreover, expanding the user study with a larger and more diverse sample would enable us to gather deeper insights, especially with users who are not familiar with VR or who come from industrial backgrounds. Piton itself should also be improved, especially by increasing its DoFs and further stabilizing its camera, which would improve its movement flexibility, speed, and overall user experience.
An important finding of our work is that a control method that does not resemble natural human head movements, such as the HH, can be superior for robot control without jeopardizing essential telexistence requirements (e.g., relatively high body ownership and low motion sickness and TLX scores). This finding encourages further exploration of future control methods that balance the telexistence experience with task efficiency and that may include other control modalities for accurate robot control.
Most importantly, Piton shows that robots with anatomical designs that differ from the human body can be used for telexistence. Although Piton has a different anatomical structure and workspace than a human neck, our control methods, built on the kinematic model and IK solver, could yield a suitable telexistence experience. Such findings pave the way for further research into other nonhuman form-factors for telexistence, which can provide benefits beyond human-mimetic telexistence robots, such as higher accuracy, a larger movement workspace, or more interaction capabilities.
The design and evaluation of Piton revealed several research opportunities and challenges. Below, we discuss the design improvements and future research directions that are essential to build on our presented efforts:
Motion Sickness: In addition to the enhancements discussed in Section 8.3, we believe camera vibrations and shaking, caused by the servomotors or rapid movements, contributed to motion sickness. Such challenges can be addressed through a variety of improvements. Motor control optimizations (e.g., PID control) can stabilize and smooth the robot’s movements and decrease overshooting during faster movements (a sketch follows below). In addition, adding elastic and soft materials to the camera holder can absorb vibrations caused by the servomotors, which in turn can significantly reduce shaking during movements. Similar improvements are widely used in drones to reduce the shaking caused by atmospheric factors and high-speed motors [70,71].
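A minimal sketch of the kind of PID smoothing meant here is shown below. The gains, loop period, and servo read/write hooks are illustrative assumptions and would need to be tuned and validated on Piton’s actual servomotors.

```python
# Sketch of PID-based position smoothing for one joint: the filtered command reduces
# overshoot and high-frequency jitter reaching the servomotor. Gains are assumed values.
class PIDSmoother:
    def __init__(self, kp=2.0, ki=0.0, kd=0.15, dt=0.01):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def step(self, commanded_angle, measured_angle):
        error = commanded_angle - measured_angle
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        # Output is an angle increment applied on top of the current measured angle,
        # so large commanded steps are approached gradually instead of all at once.
        return measured_angle + (self.kp * error +
                                 self.ki * self.integral +
                                 self.kd * derivative) * self.dt

# e.g., write_servo(joint_id, smoother.step(target_deg, read_servo(joint_id)))  # hypothetical hooks
```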
Our VRSQ results showed that most users had minimal motion sickness while using Piton. However, the participants of our user study used Piton in short bursts (3–5 min), and we allocated sufficient resting time between the tasks. Therefore, it is important to evaluate the usability of Piton for prolonged sessions, especially as our initial tests revealed that using Piton for long periods induces strong motion sickness effects.
Robot Structure: Inspecting an object from various distances and angles is an important capability of Piton. However, the current implementation limits the robot’s horizontal movement, as such movement is produced by only two servomotors (the first and third). Therefore, in order to extend Piton’s movement range, more DoFs are needed; adding servomotors next to the third servomotor for backward and forward positional movements can extend the range of horizontal motion.
Safety and Hazards: Although our robot is generally underpowered, we used a position-based control method to control it. The robot’s sudden and quick movements may pose hazards to users, especially near its base (where we use stronger servomotors). Therefore, we intend to utilize torque-based position control or impedance control, which would enable the wearer to push the robot away without much effort in case of emergencies (a sketch of such compliant behavior follows below).
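The sketch below illustrates the compliant behavior we have in mind for a single joint, in an admittance-style formulation: an estimated external torque (e.g., derived from the servomotor’s load reading) deflects the commanded angle away from its nominal target through simple spring-damper dynamics. The gains, units, and torque estimate are assumptions for illustration, not a validated safety controller.

```python
# Sketch of admittance-style compliance for one joint: external torque grows a
# compliant offset, while stiffness and damping pull the joint back to its target.
class AdmittanceJoint:
    def __init__(self, stiffness=0.8, damping=0.2, dt=0.01):
        self.k, self.d, self.dt = stiffness, damping, dt
        self.offset = 0.0        # compliant deviation from the nominal target (deg)
        self.offset_vel = 0.0

    def step(self, nominal_target_deg, external_torque_est):
        accel = external_torque_est - self.k * self.offset - self.d * self.offset_vel
        self.offset_vel += accel * self.dt
        self.offset += self.offset_vel * self.dt
        return nominal_target_deg + self.offset

# e.g., command = joint.step(ik_angle_deg, estimate_external_torque(servo_load))  # hypothetical hook
```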
Supplementary Interaction Methods: Further research should explore additional interaction methods to supplement HMD-based controls, such as eye-gaze [72,73], electromyography [74,75], or a hand-mounted exoskeleton [76]. We believe such interaction methods are needed to extend the controllability of Piton, particularly for positional movements and postures that are uncomfortable for users to execute with our implemented control methods (e.g., positions too close or too low relative to the user). Integrating these methods may also contribute to higher accuracy and more comfortable usage during extended sessions, especially since physical controls that rely on the user’s head, hands, or legs are tiring after prolonged use. However, the effects of supplementary control methods should also be evaluated within the context of telexistence.
Wearability Evaluation: Although our work focuses on Piton control methods, its wearability within daily and industrial contexts presents numerous challenges. First, since the wearable robot is kinematically dependent on the user wearing it (at the remote site), the robot may be moved involuntarily by the user wearing Piton, which may induce high motion sickness. One method for addressing this challenge is using inertia measurement units (IMUs) to moderate the effects of the surrogate’s movements on robot movements.
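A minimal sketch of this idea follows: an IMU on the backpack rack measures the surrogate’s trunk rotation, and its inverse is composed with the operator’s commanded orientation so that the camera viewpoint remains approximately world-stable when the surrogate turns. Sensor fusion, mounting offsets, and axis conventions are glossed over here and would need careful handling in a real implementation.

```python
from scipy.spatial.transform import Rotation as R

# Sketch of IMU-based compensation: remove the surrogate's trunk rotation (backpack IMU)
# from the operator's commanded head orientation before sending it to the IK solver.
def compensated_orientation(operator_head_quat, base_imu_quat):
    """Quaternions are (x, y, z, w); returns the orientation target for the IK solver."""
    head = R.from_quat(operator_head_quat)
    base = R.from_quat(base_imu_quat)
    return (base.inv() * head).as_quat()

# e.g., target_quat = compensated_orientation(hmd.orientation, backpack_imu.orientation)  # hypothetical sources
```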
Second, sudden movements of the robot at high speeds cause the backpack rack to rapidly shake. Therefore, the backpack rack and the robot base should be further strengthened and stabilized. Lastly, similar to other innovative robot form-factors [76], Piton presents interaction potentials that are not explored in prior works. Therefore, we believe that workshops should be held with both professional and casual users to explore future potential application domains of Piton. The outputs of such workshops would deepen our understanding of the requirements and expectations of using robots such as Piton.
Extended Evaluations and Task Domain Investigation: Our evaluation results showed the advantages and disadvantages of each of Piton’s control methods within the context of telexistence. Such results pave the way for extended evaluations with larger and more varied user groups, as well as deeper usability studies of daily life or industrial tasks. Moreover, the interactions that occur between the controller and surrogate users, and between these users and the remote environment, should be studied in the context of using Piton as the sole communication medium to fulfill different tasks. Another important direction is a deeper exploration of Piton’s deployment domains within daily usage and industrial contexts. To this end, focus groups and workshops should be conducted [9,44,45] to explicitly study the requirements, expectations, and deployment tasks and contexts of such robotic systems. Overall, such findings are critical for the adoption and deployment of Piton within real-world daily use and industrial contexts.
In summary, our design and evaluation of Piton revealed several opportunities and challenges. Various mechanical and technical enhancements are required, such as camera stabilization and robot structure enhancements, that may directly contribute to a better user experience. Utilizing adaptive control strategies to select the appropriate control method based on the needed task is an important direction when deploying robots such as Piton. Moreover, other control modalities should also be explored, especially as physical controls are often tiring for users during extended usage sessions. Further research directions should also focus on exploring Piton as a robotic appendage for use within both daily and industrial contexts, which will pave the way for its effective deployment within such contexts.

Supplementary Materials

The following are available online at https://iframe.videodelivery.net/72d3fadfa965407ea66b2aedc6365732 (accessed on 24 January 2022), Video S1: Video Overview of Piton Controllability-Telexistence Wearable Robot.

Author Contributions

Conceptualization, A.I. and M.A.-S.; methodology, A.I. and M.A.-S.; software, A.I., M.A.-S., Y.S.; validation, A.I., M.A.-S.; formal analysis, A.I., and M.A.-S.; investigation, A.I., M.A.-S.; resources, A.I., M.A.-S., Y.S.; data curation, A.I., M.A.-S.; writing—original draft preparation, A.I. and M.A.-S.; writing—review and editing, A.I., M.A.-S., T.M., Y.S., O.H. and T.N.; visualization, A.I. and M.A.-S.; supervision, Y.S., O.H. and T.N.; project administration, A.I. and M.A.-S.; funding acquisition, A.I., M.A.-S., O.H. and T.N. All authors have read and agreed to the published version of the manuscript.

Funding

The research was supported by Qatar University grant M-QJRC-2020-7. This work was also supported by the Indonesia Endowment Fund for Education (Lembaga Pengelola Dana Pendidikan, LPDP) in the form of a master’s degree scholarship to A.I. with Grant Agreement Number S-410/LPDP.4/2020 (29 January 2020).

Institutional Review Board Statement

Not Applicable.

Informed Consent Statement

Informed consent was obtained from all subjects involved in the study.

Data Availability Statement

All data collected during this research is presented in full in this manuscript.

Acknowledgments

This paper was supported by Qatar University M-QJRC-2020-7. The findings achieved herein are solely the responsibility of the authors. The presented work is also supported, in part, through the Program for Leading Graduate Schools, “Graduate Program for Embodiment Informatics” by Japan’s Ministry of Education, Culture, Sports, Science, and Technology. We would like to thank the Indonesia Endowment Fund for Education (LPDP) from the Ministry of Finance Republic Indonesia for granting the scholarship and funding this research.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Ballano, V.O. COVID-19 pandemic, telepresence, and online masses: Redefining catholic sacramental theology. Int. J. Interdiscip. Glob. Stud. 2021, 16, 41–53. [Google Scholar] [CrossRef]
  2. Sherman, W.R.; Craig, A.B. Introduction to Virtual Reality; Springer: London, UK, 2019. [Google Scholar]
  3. Tang, J.C.; Xiao, R.; Hoff, A.; Venolia, G.; Therien, P.; Roseway, A. HomeProxy:Exploring a Physical Proxy for Video Communication in the Home. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, Paris, France, 27 April–2 May 2013; pp. 1339–1342. [Google Scholar] [CrossRef]
  4. Venolia, G.; Tang, J.; Cervantes, R.; Bly, S.; Robertson, G.; Lee, B.; Inkpen, K. Embodied social proxy: Mediating interpersonal connection in hub-and-satellite teams. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, Atlanta, GA, USA, 10–15 April 2010; pp. 1049–1058. [Google Scholar] [CrossRef]
  5. Bauer, J.M.; Durakbasa, N.M.; Bas, G.; Guclu, E.; Kopacek, P. Telepresence in Education. IFAC-PapersOnLine 2015, 24, 178–182. [Google Scholar] [CrossRef]
  6. Latifi, R.; Hadeed, G.J.; Rhee, P.; O’Keeffe, T.; Friese, R.S.; Wynne, J.L.; Ziemba, M.L.; Judkins, D. Initial experiences and outcomes of telepresence in the management of trauma and emergency surgical patients. Am. J. Surg. 2009, 198, 905–910. [Google Scholar] [CrossRef]
  7. Tachi, S.; Inoue, Y.; Kato, F. TELESAR VI: Telexistence Surrogate Anthropomorphic Robot VI. Int. J. Hum. Robot. 2020, 17, 2050019. [Google Scholar] [CrossRef]
  8. Tachi, S. Telexistence: Enabling Humans to Be Virtually Ubiquitous. IEEE Comput. Graph. Appl. 2016, 36, 8–14. [Google Scholar] [CrossRef]
  9. Al-Sada, M.; Höglund, T.; Khamis, M.; Urbani, J.; Nakajima, T. Orochi: Investigating requirements and expectations for multipurpose daily used supernumerary robotic limbs. In Proceedings of the Augmented Human International Conference, Reims, France, 11–12 March 2019. [Google Scholar]
  10. Urbani, J.; Al-Sada, M.; Nakajima, T.; Höglund, T. Exploring augmented reality interaction for everyday multipurpose wearable robots. In Proceedings of the 2018 IEEE 24th International Conference on Embedded and Real-Time Computing Systems and Applications (RTCSA), Hakodate, Japan, 28–31 August 2019; pp. 209–216. [Google Scholar] [CrossRef]
  11. Fazli, E.; Rakhtala, S.M.; Mirrashid, N.; Karimi, H.R. Real-time implementation of a super twisting control algorithm for an upper limb wearable robot. Mechatronics 2022, 84, 102808. [Google Scholar] [CrossRef]
  12. Gilson, O.; Xie, S.; O’Connor, R.J. Design of a Wearable Bilateral Exoskeleton for Arm Stroke Treatment in a Home Environment. In Proceedings of the 2021 27th International Conference on Mechatronics and Machine Vision in Practice (M2VIP), Shanghai, China, 26–28 November 2021; pp. 635–640. [Google Scholar] [CrossRef]
  13. Sheng, B.; Xie, S.; Tang, L.; Deng, C.; Zhang, Y. An Industrial Robot-Based Rehabilitation System for Bilateral Exercises. IEEE Access 2019, 7, 151282–151294. [Google Scholar] [CrossRef]
  14. Tanase, M.; Yanagida, Y. Video stabilization for HMD-based telexistence—Concept and prototype configuration. In Proceedings of the 2015 IEEE/SICE International Symposium on System Integration (SII), Nagoya, Japan, 11–13 December 2015; pp. 106–111. [Google Scholar] [CrossRef]
  15. Fernando, C.L.; Furukawa, M.; Kurogi, T.; Kamuro, S.; Sato, K.; Minamizawa, K.; Tachi, S. Design of TELESAR V for transferring bodily consciousness in telexistence. In Proceedings of the 2012 IEEE/RSJ International Conference on Intelligent Robots and Systems, Vilamoura-Algarve, Portugal, 7–12 October 2012; pp. 5112–5118. [Google Scholar]
  16. Saraiji, M.Y.; Sasaki, T.; Matsumura, R.; Minamizawa, K.; Inami, M. Fusion: Full body surrogacy for collaborative communication. In Proceedings of the ACM SIGGRAPH 2018 Emerging Technologies, Vancouver, BC, Canada, 12–16 August 2018. [Google Scholar]
  17. Yamen Saraiji, M.H.D.; Sasaki, T.; Kunze, K.; Minamizawa, K.; Inami, M. MetaArmS: Body remapping using feet-controlled artificial arms. In Proceedings of the 31st Annual ACM Symposium on User Interface Software and Technology, Berlin, Germany, 14 October 2018; pp. 65–74. [Google Scholar] [CrossRef]
  18. Sasaki, T.; Saraiji, M.Y.; Fernando, C.L.; Minamizawa, K.; Inami, M. MetaLimbs: Multiple arms interaction metamorphism. In Proceedings of the ACM SIGGRAPH 2017 Emerging Technologies, Los Angeles, CA, USA, 30 July–3 August 2017. [Google Scholar] [CrossRef]
  19. Zhang, D.; Pun, C.M.; Yang, Y.; Gao, H.; Xu, F. A rate-based drone control with adaptive origin update in telexistence. In Proceedings of the 2021 IEEE Virtual Reality and 3D User Interfaces (VR), Lisboa, Portugal, 27 March 2021–1 April 2021; pp. 807–816. [Google Scholar] [CrossRef]
  20. Watanabe, K.; Kawabuchi, I.; Kawakami, N.; Maeda, T.; Tachi, S. TORSO: Development of a telexistence visual system using a 6-d.o.f. robot head. Adv. Robot. 2008, 22, 1053–1073. [Google Scholar] [CrossRef] [Green Version]
  21. Inoue, Y.; Kato, F.; Saraiji, M.Y.; Fernando, C.L.; Tachi, S. Observation of mirror reflection and voluntary self-touch enhance self-recognition for a telexistence robot. In Proceedings of the 2017 IEEE Virtual Reality (VR), Los Angeles, CA, USA, 18–22 March 2017; pp. 345–346. [Google Scholar] [CrossRef]
  22. Hart, S.G. Nasa-Task Load Index (NASA-TLX); 20 Years Later. Proc. Hum. Factors Ergon. Soc. Annu. Meet. 2006, 50, 904–908. [Google Scholar] [CrossRef] [Green Version]
  23. Kim, H.K.; Park, J.; Choi, Y.; Choe, M. Virtual reality sickness questionnaire (VRSQ): Motion sickness measurement index in a virtual reality environment. Appl. Ergon. 2018, 69, 66–73. [Google Scholar] [CrossRef]
  24. Roth, D.; Lugrin, J.L.; Latoschik, M.E.; Huber, S. Alpha IVBO-Construction of a scale to measure the illusion of virtual body ownership. In Proceedings of the 2017 CHI Conference Extended Abstracts on Human Factors in Computing Systems, Denver, CO, USA, 6–11 May 2017; pp. 2875–2883. [Google Scholar] [CrossRef]
  25. Al-Sada, M.; Jiang, K.; Ranade, S.; Kalkattawi, M.; Nakajima, T. HapticSnakes: Multi-haptic feedback wearable robots for immersive virtual reality. Virtual Real. 2020, 24, 191–209. [Google Scholar] [CrossRef] [Green Version]
  26. Fu, J.; Kato, F.; Inoue, Y.; Tachi, S. Development of a Telediagnosis System using Telexistence. Trans. Virtual Real. Soc. Jpn. 2020, 25, 277–283. [Google Scholar]
  27. Misawa, K.; Ishiguro, Y.; Rekimoto, J. Livemask: A telepresence surrogate system with a face-shaped screen for supporting nonverbal communication. J. Inf. Process. 2013, 21, 295–303. [Google Scholar] [CrossRef] [Green Version]
  28. Misawa, K.; Rekimoto, J. Wearing another’s personality: A human-surrogate system with a telepresence face. In Proceedings of the 2015 ACM International Symposium on Wearable Computers, Osaka, Japan, 7–11 September 2015; pp. 125–132. [Google Scholar]
  29. Nagai, S.; Kasahara, S.; Rekimoto, J. LiveSphere: Sharing the surrounding visual environment for immersive experience in remote collaboration. In Proceedings of the Ninth International Conference on Tangible, Embedded, and Embodied Interaction, Stanford, CA, USA, 15–19 January 2015; pp. 113–116. [Google Scholar] [CrossRef]
  30. Kashiwabara, T.; Osawa, H.; Shinozawa, K.; Imai, M. TEROOS: A Wearable Avatar to Enhance Joint Activities. In Proceedings of the CHI Conference on Human Factors in Computing Systems, Austin, TX, USA, 5–10 May 2012; pp. 1433–1434. [Google Scholar]
  31. Chang, C.T.; Takahashi, S.; Tanaka, J. A remote communication system to provide “out together feeling”. J. Inf. Process. 2014, 22, 76–87. [Google Scholar] [CrossRef] [Green Version]
  32. Tachi, S. Forty Years of Telexistence—From Concept to TELESAR VI; Eurographics Association: Eindhoven, The Netherlands, 2019. [Google Scholar] [CrossRef]
  33. Al-Remaihi, R.; Al-Raeesi, A.; Al-Kubaisi, R.; Al-Sada, M.; Nakajima, T.; Halabi, O. A Cost-Effective Immersive Telexistence Platform for Generic Telemanipulation Tasks. In HCI International 2021—Late Breaking Papers: Multimodality, eXtended Reality, and Artificial Intelligence; Stephanidis, C., Kurosu, M., Chen, J.Y.C., Fragomeni, G., Streitz, N., Konomi, S., Degen, H., Ntoa, S., Eds.; Springer International Publishing: Cham, Switzerland, 2021; pp. 197–208. [Google Scholar]
  34. Vive Pro 2 Full Kit. Available online: https://www.vive.com/us/product/vive-pro2-full-kit/overview/ (accessed on 24 January 2022).
  35. Iwata, H.; Sugano, S. Design of human symbiotic robot TWENDY-ONE. In Proceedings of the 2009 IEEE International Conference on Robotics and Automation, Kobe, Japan, 12–17 May 2009; pp. 580–586. [Google Scholar]
  36. Horie, A.; Saraiji, M.H.D.Y.; Kashino, Z.; Inami, M. EncounteredLimbs: A Room-scale Encountered-type Haptic Presentation using Wearable Robotic Arms. In Proceedings of the 2021 IEEE Virtual Reality and 3D User Interfaces (VR), Lisboa, Portugal, 27 March–1 April 2021; pp. 260–269. [Google Scholar]
  37. Kunita, Y.; Ogawa, N.; Sakuma, A.; Inami, M.; Maeda, T.; Tachi, S. Immersive autostereoscopic display, TWISTER I (Telexistence Wide-angle Immersive STEReoscope Model I). Kyokai Joho Imeji Zasshi/J. Inst. Image Inf. Telev. Eng. 2001, 55, 671–677. [Google Scholar] [CrossRef]
  38. Tachi, S.; Kawakami, N.; Nii, H.; Watanabe, K.; Minamizawa, K. TELEsarPHONE: Mutual Telexistence Master-Slave Communication System Based on Retroreflective Projection Technology. SICE J. Control Meas. Syst. Integr. 2008, 1, 335–344. [Google Scholar] [CrossRef] [Green Version]
  39. Yamen Saraiji, M.H.D.; Fernando, C.L.; Minamizawa, K.; Tachi, S. Development of Mutual Telexistence System using Virtual Projection of Operator’s Egocentric Body Images. In ICAT-EGVE 2015—International Conference on Artificial Reality and Telexistence and Eurographics Symposium on Virtual Environments; The Eurographics Association: Eindhoven, The Netherlands, 2015; pp. 125–132. [Google Scholar] [CrossRef]
  40. ZED Mini-Mixed-Reality Camera. Available online: https://www.stereolabs.com/zed-mini/ (accessed on 24 January 2022).
  41. Pons, J.L. Wearable Robots: Biomechatronic Exoskeletons; John Wiley & Sons: Chichester, UK, 2008. [Google Scholar]
  42. Gemperle, F.; Kasabach, C.; Stivoric, J.; Bauer, M.; Martin, R. Design for wearability. In Proceedings of the 2nd IEEE International Symposium on Wearable Computers, Pittsburgh, PA, USA, 19–20 October 1998; pp. 116–122. [Google Scholar]
  43. Véronneau, C.; Denis, J.; Lebel, L.-P.; Denninger, M.; Blanchard, V.; Girard, A.; Plante, J.-S. Multifunctional Remotely Actuated 3-DOF Supernumerary Robotic Arm Based on Magnetorheological Clutches and Hydrostatic Transmission Lines. IEEE Robot. Autom. Lett. 2020, 5, 2546–2553. [Google Scholar] [CrossRef]
  44. Jiang, H.; Lin, S.; Prabakaran, V.; Elara, M.R.; Sun, L. A survey of users’ expectations towards on-body companion robots. In Proceedings of the 2019 on Designing Interactive Systems Conference, San Diego, CA, USA, 23–28 June 2019; pp. 621–632. [Google Scholar] [CrossRef]
  45. Al Sada, M.; Khamis, M.; Kato, A.; Sugano, S.; Nakajima, T.; Alt, F. Challenges and Opportunities of Supernumerary Robotic Limbs. In Proceedings of the CHI 2017 workshop on Amplification and Augmentation of Human Perception, Denver, CO, USA, 7 May 2017. [Google Scholar]
  46. Cai, M.; Tanaka, J. Trip together: A remote pair sightseeing system supporting gestural communication. In Proceedings of the 5th International Conference on Human Agent Interaction, Bielefeld, Germany, 17–20 October 2017; pp. 317–324. [Google Scholar] [CrossRef]
  47. Unity Real-Time Development. Available online: https://unity.com/ (accessed on 24 January 2022).
  48. Starke, S.; Hendrich, N.; Zhang, J. Memetic Evolution for Generic Full-Body Inverse Kinematics in Robotics and Animation. IEEE Trans. Evol. Comput. 2019, 23, 406–420. [Google Scholar] [CrossRef]
  49. Liu, Q.; Sun, X. Research of Web Real-Time Communication Based on Web Socket. Int. J. Commun. Netw. Syst. Sci. 2012, 05, 797–801. [Google Scholar] [CrossRef] [Green Version]
  50. Cloete, R.; Holliman, N. Measuring and simulating latency in interactive remote rendering systems. arXiv 2019, arXiv:abs/1905.05411. [Google Scholar]
  51. Li, M.; Junior, F.E.F.; Sheng, W.; Bai, H.; Fan, Z.; Liu, M. Measurement of Latency on Visual Feedback in an Immersive Telepresence Robotic System. In Proceedings of the 2018 13th World Congress on Intelligent Control and Automation (WCICA), Changsha, China, 4–8 July 2018; pp. 1757–1762. [Google Scholar] [CrossRef]
  52. McGovern, D.E. Human Interfaces in Remote Driving; Technical Report; Sandia National Labs.: Albuquerque, NM, USA, 1988. [Google Scholar]
  53. Fernando, C.L.; Yamen Saraiji, M.H.D.; Seishu, Y.; Kuriu, N.; Minamizawa, K.; Tachi, S. Effectiveness of Spatial Coherent Remote Drive Experience with a Telexistence Backhoe for Construction Sites. In Proceedings of the 2015—International Conference on Artificial Reality and Telexistence and Eurographics Symposium on Virtual Environments, Kyoto, Japan, 28–30 October 2015; pp. 69–75. [Google Scholar] [CrossRef]
  54. Saraiji, M.Y. Embodied-Driven Design: A Framework to Configure Body Representation & Mapping. Ph.D. Thesis, Keio University, Minato City, Tokyo, Japan, 2017. [Google Scholar]
  55. Saraiji, M.Y.; Minamizawa, K.; Taichi, S. Foveated Streaming: Optimizing Video Streaming for Telexistence Systems Using Eye-Gaze Based Foveation. Available online: http://tachilab.org/content/files/publication/study_on_telexistence/te088.pdf (accessed on 24 January 2022).
  56. WebRTC Video Chat. Available online: https://assetstore.unity.com/packages/tools/network/webrtc-video-chat-68030 (accessed on 24 January 2022).
  57. ROBOTIS Store. Available online: https://www.robotis.us/ (accessed on 24 January 2022).
  58. LattePanda Alpha 864s. Available online: https://www.lattepanda.com/products/lattepanda-alpha-864s.html (accessed on 24 January 2022).
  59. Gstreamer: Open Source Multimedia Framework. Available online: https://gstreamer.freedesktop.org/ (accessed on 24 January 2022).
  60. Jung, S.; Wisniewski, P.J.; Sandor, C.; Hughes, C.E. RealME: The influence of body and hand representations on body ownership and presence. In Proceedings of the 5th Symposium on Spatial User Interaction, Brighton, UK, 16–17 October 2017; pp. 3–11. [Google Scholar] [CrossRef]
  61. Krekhov, A.; Cmentowski, S.; Kruger, J. The illusion of animal body ownership and its potential for virtual reality games. In Proceedings of the 2019 IEEE Conference on Games (CoG), London, UK, 20–23 August 2019. [Google Scholar] [CrossRef] [Green Version]
  62. How To Play UNO Stacko. Available online: https://www.ultraboardgames.com/uno/stacko-game-rules.php (accessed on 24 January 2022).
  63. González-Franco, M.; Pérez-Marcos, D.; Spanlang, B.; Slater, M. The contribution of real-time mirror reflections of motor actions on virtual body ownership in an immersive virtual environment. In Proceedings of the 2010 IEEE Virtual Reality Conference (VR), Boston, MA, USA, 20–24 March 2010; pp. 111–114. [Google Scholar] [CrossRef] [Green Version]
  64. Van Der Hoort, B.; Reingardt, M.; Henrik Ehrsson, H. Body ownership promotes visual awareness. eLife 2017, 6, e26022. [Google Scholar] [CrossRef] [PubMed]
  65. Chang, E.; Kim, H.T.; Yoo, B. Virtual Reality Sickness: A Review of Causes and Measurements. Int. J. Hum. Comput. Interact. 2020, 36, 1658–1682. [Google Scholar] [CrossRef]
  66. Keshavarz, B.; Riecke, B.E.; Hettinger, L.J.; Campos, J.L. Vection and visually induced motion sickness: How are they related? Front. Psychol. 2015, 6, 472. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  67. Nürnberger, M.; Klingner, C.; Witte, O.W.; Brodoehl, S. Mismatch of Visual-Vestibular Information in Virtual Reality: Is Motion Sickness Part of the Brains Attempt to Reduce the Prediction Error? Front. Hum. Neurosci. 2021, 15, 757735. [Google Scholar] [CrossRef]
  68. Wang, H. Adaptive Control of Robot Manipulators with Uncertain Kinematics and Dynamics. IEEE Trans. Automat. Contr. 2017, 62, 948–954. [Google Scholar] [CrossRef] [Green Version]
  69. Jacob, R.J.K.; Sibert, L.E.; McFarlane, D.C.; Mullen, M.P. Integrality and Separability of Input Devices. ACM Trans. Comput. Interact. 1994, 1, 3–26. [Google Scholar] [CrossRef] [Green Version]
  70. Verma, M.; Lafarga, V.; Dehaeze, T.; Collette, C. Multi-degree of freedom isolation system with high frequency roll-off for drone camera stabilization. IEEE Access 2020, 8, 176188–176201. [Google Scholar] [CrossRef]
  71. Kim, M.; Byun, G.-S.; Kim, G.-H.; Choi, M.-H. The Stabilizer Design for a Drone-Mounted Camera Gimbal System Using Intelligent-PID Controller and Tuned Mass Damper. Int. J. Control Autom. 2016, 9, 387–394. [Google Scholar] [CrossRef]
  72. Iskandar, A.; Basuki, A.; Nurindiyani, A.K.; Putra, F.R.; Safrodin, M. Developing Shooter Game Interaction using Eye Movement Glasses. Emit. Int. J. Eng. Technol. 2020, 8, 67–85. [Google Scholar] [CrossRef]
  73. Iskandar, A.; Nakanishi, T.; Basuki, A.; Okada, R.; Kitagawa, T. Gaze-music Media Transformation by Similarity of Impression Words. In Proceedings of the 2020 International Electronics Symposium (IES), Surabaya, Indonesia, 29–30 September 2020; pp. 655–661. [Google Scholar] [CrossRef]
  74. Haque, F.; Nancel, M.; Vogel, D. Myopoint: Pointing and clicking using forearm mounted electromyography and inertial motion sensors. In Proceedings of the 33rd Annual ACM Conference on Human Factors in Computing Systems, Seoul, Korea, 18–23 April 2015; pp. 3653–3656. [Google Scholar] [CrossRef] [Green Version]
  75. Bernal, G.; Yang, T.; Jain, A.; Maes, P. PhysioHMD: A conformable, modular toolkit for collecting physiological data from head-mounted displays. In Proceedings of the 2018 ACM International Symposium on Wearable Computers, Singapore, 8–12 October 2018; pp. 160–167. [Google Scholar] [CrossRef]
  76. Ho, N.S.K.; Tong, K.Y.; Hu, X.L.; Fung, K.L.; Wei, X.J.; Rong, W.; Susanto, E.A. An EMG-driven exoskeleton hand robotic training device on chronic stroke subjects: Task training system for stroke rehabilitation. In Proceedings of the 2011 IEEE International Conference on Rehabilitation Robotics, Zurich, Switzerland, 29 June–1 July 2011. [Google Scholar] [CrossRef]
Figure 1. Design concept of Piton. (a) At the local site, the user uses the HMD to interact with Piton at the remote site. (b) At the remote site, a surrogate user wears Piton.
Figure 2. Piton can be used in everyday usage contexts. For example, Piton can be used to (a) interact with the surrogate, (b) interact with the remote environment, sharing various experiences with remote users or checking merchandise, or (c) enjoy the outdoor scenery or social activities.
Figure 3. Piton can be used for industrial tasks. (a) Piton can support remote knowledge transfer and training, such as instructing remote users during assembly or machinery operation tasks. (b) The flexibility of Piton can be used for inspecting objects or environments, such as by extending around or above the surrogate user.
Figure 4. This diagram shows the overall architecture of our system. The arrows indicate the data flow between the various components, coded in three colors: red for auditory communication, purple for stereoscopic video streaming, and yellow for robot control.
Figure 5. Control methods: (a) HMD to control the position and orientation of Piton; (b) HMD to control Piton’s rotation and a hand-held tracker to control its position; (c) HMD to control Piton’s rotation and a foot-mounted tracker to control its position.
Figure 6. Visualization of the IK system and robot model, with the green cube presenting the target objective for the positional movement: (a) The robot moves to top position; (b) The robot moves to the right position.
Figure 7. The graphical user interface of our system connects with Gstreamer and WebSocket server (robot control software), allows enabling/disabling the robot’s movements or sending data, and for starting the calibration process of the control methods.
Figure 8. Piton robot structure. The robot is composed of eight servomotors interlinked using aluminum and PLA brackets. The end-effectors comprise a PLA ZED camera holder.
Figure 9. The robot is mounted on a backpack rack. (a) Front view; (b) side view.
Figure 10. The robot control software’s UI enables controlling the robot through WebSocket.
Figure 11. HM control calibration procedure of positional movement. The user moves their head to the corners of the control space, as shown in (a,b), thereby forming a tracking area that maps the user’s head position to the robot’s neck position.
Figure 12. HH control calibration procedure. The user moves their hand to the corners of the control space, as shown in (a,b), thereby forming a tracking area that maps the user’s hand position to the robot’s neck position.
Figure 13. FM control calibration procedure. The user moves his foot to the corners of the control space, as shown in (a,b), thereby forming a tracking area that maps the user’s foot position to the robot’s neck position. Dorsiflexing their foot upwards calibrates the vertical positional movement, enabling them to move Piton within the calibrated workspace.
Figure 14. This diagram illustrates how each data point is captured from the HMD and trackers, and then processed to produce servomotor angles through our control system at the local site. The servomotor angles are then sent to the remote site where they are executed by the robot control software.
Figure 15. Task 1 (mirroring): (a) a monitor with a web camera is used for mirroring task; (b) the user can observe the robot’s movements by looking at the screen (similar to a mirror).
Figure 16. Task 2 (finding numbers and letters): (a) Uno Stacko block game with randomly set numbers and letters; (b) A user moving Piton to find specifically colored numbers during task 2.
Figure 17. Task 3 is text reading: (a) the text is printed and wrapped around a box, and (b) users have to control Piton to look around the box edges to read the text.
Figure 18. Alpha IVBO questionnaire results: acceptance, change, control.
Figure 19. NASA-TLX results of the 17 participants.
Figure 20. VR Sickness Questionnaire: (a) results of oculomotor score and disorientation score; (b) results of VRSQ total score.
Figure 21. Results of the poststudy questionnaires: (a) results of Q1–4; (b) results of ranking questions (Q5, Q6).
Table 1. The results of repeated-measures ANOVA.

Source       SS      df      Mean Square   F       p-Value
Acceptance   4.980   1.570   3.173         4.190   0.035 *
Control      0.605   1.473   0.411         1.054   0.344
Change       4.411   1.737   2.540         6.846   0.005 *

* The significant value is <0.05.
Table 2. IVBO-change results of pairwise comparisons with Bonferroni adjustment.

Control Method   Mean Differences (I–J)   Std. Error   p-Value
HM-HH            0.141                    0.175        1.000
HM-FM            0.682                    0.229        0.027 *
HH-FM            0.541                    0.175        0.021 *

* The significant value is <0.05.
Table 3. The results of repeated-measures ANOVA.

Source            SS      df      Mean Square   F       p-Value
Mental Demand     5.765   1.703   3.385         2.412   0.115
Physical Demand   1.529   1.398   1.094         0.512   0.542
Temporal Demand   0.824   1.935   0.426         0.642   0.528
Performance       4.353   1.871   2.327         2.403   0.111
Effort            5.059   1.970   2.567         2.932   0.69
Frustration       0.980   1.916   0.512         0.567   0.566
Table 4. Motion sickness component mean scores and standard deviation values for each control method.

Method         General Discomfort   Fatigue   Eye Strain   Focus Difficulty   Headache   Fullness of the Head   Blurred Vision   Dizzy   Vertigo
HM   M         0.53                 0.65      0.88         0.41               0.35       0.18                   0.24             0.06    0.00
     STD       0.51                 0.70      1.05         0.87               0.49       0.39                   0.44             0.24    0.00
HH   M         0.29                 0.53      0.65         0.35               0.00       0.12                   0.29             0.06    0.00
     STD       0.59                 0.62      1.00         0.61               0.00       0.33                   0.59             0.24    0.00
FM   M         0.41                 0.41      0.47         0.18               0.06       0.12                   0.18             0.12    0.00
     STD       0.51                 0.62      0.62         0.39               0.24       0.33                   0.39             0.33    0.00
Table 5. The results of the Friedman test of VRSQ.

Source           N    Chi-Square   df   p-Value
Oculomotor       17   2.577        2    0.276
Disorientation   17   8.000        2    0.018 *
VRSQ Total       17   4.230        2    0.121

* The significant value is <0.05.
Table 6. Wilcoxon signed-rank test results on the VRSQ disorientation term.

          HM-HH    HM-FM    HH-FM
Z         −2.251   −1.732   0.000
p-Value   0.024    0.083    1.000
Table 7. The results of repeated-measures ANOVA on the results of Q4–6.

Source   SS       df      Mean Square   F       p-Value
Q4       23.111   1.643   14.068        7.158   0.005 *
Q5       8.588    1.903   4.514         5.407   0.011 *
Q6       12.704   1.788   7.107         7.208   0.004 *

* The significant value is <0.05.
Table 8. Results of pairwise comparisons with Bonferroni adjustment of Q4 results.

Control Method   Mean Differences (I–J)   Std. Error   p-Value
HM-HH            −1.111                   0.411        0.045 *
HM-FM            0.444                    0.506        1.000
HH-FM            1.556                    0.336        <0.001 *

* The significant value is <0.05.
Table 9. Results of pairwise comparisons with Bonferroni adjustment of Q5 results.

Control Method   Mean Differences (I–J)   Std. Error   p-Value
HM-HH            −0.412                   0.272        0.449
HM-FM            −1.000                   0.332        0.025 *
HH-FM            −0.588                   0.310        0.228

* The significant value is <0.05.
Table 10. Pairwise comparisons with Bonferroni adjustment of Q6 results.

Control Method   Mean Differences (I–J)   Std. Error   p-Value
HM-HH            −0.778                   0.319        0.078
HM-FM            −1.167                   0.259        <0.001 *
HH-FM            −0.389                   0.354        0.861

* The significant value is <0.05.
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
