Digital Twin-Based Human–Robot Collaborative Systems

A special issue of Robotics (ISSN 2218-6581). This special issue belongs to the section "Industrial Robots and Automation".

Deadline for manuscript submissions: 31 May 2024 | Viewed by 8623

Special Issue Editor


Dr. Chris Lytridis
Guest Editor
HUMAIN-Lab, Department of Computer Science, International Hellenic University (IHU), 65404 Kavala, Greece
Interests: mobile robotics; social robotics; computational intelligence; augmented reality applications and educational software

Special Issue Information

Dear Colleagues,

In recent years, an increasing number of industries have benefited from the use of digital twins to improve product design, data collection, testing, diagnostics and monitoring. In robotics, a digital twin can be regarded as a virtual representation of a robot and its processes within the context of its intended operating environment. A key challenge in human-robot collaboration is to determine appropriate interaction models in order to better understand the requirements for effective and safe collaboration in shared spaces across domains such as industry, agriculture and education. Following a digital twin paradigm enables more reliable virtual platforms on which proposed human-robot interaction (HRI) models can be applied and the complex issues arising from human-robot interaction can be studied.

In this Special Issue, original research articles and comprehensive reviews are welcome. Contributions may include novel approaches to the application of digital twins across all stages of robot development and in various application areas such as manufacturing, agriculture and education. Research areas may include (but are not limited to) the following:

  • System architectures for digital twins;
  • Human-robot interaction modeling;
  • Digital twins for telepresence and teleoperation tasks;
  • Augmented and virtual reality for digital twins;
  • Industrial applications and implementation.

Dr. Chris Lytridis
Guest Editor

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the Special Issue website. Research articles, review articles and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Robotics is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 1800 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • digital twins
  • human-robot interaction
  • modeling
  • computational models
  • social intelligence
  • cooperation
  • virtual models

Published Papers (6 papers)

Research

19 pages, 11423 KiB  
Article
Designing Digital Twins of Robots Using Simscape Multibody
by Giovanni Boschetti and Teresa Sinico
Robotics 2024, 13(4), 62; https://doi.org/10.3390/robotics13040062 - 14 Apr 2024
Viewed by 582
Abstract
Digital twins of industrial and collaborative robots are widely used to evaluate and predict the behavior of manipulators under different control strategies. However, these digital twins often employ simplified mathematical models that do not fully describe their dynamics. In this paper, we present the design of a high-fidelity digital twin of a six degrees-of-freedom articulated robot using Simscape Multibody, a Matlab toolbox that allows robotic manipulators to be modeled in an intuitive and user-friendly manner. This robot digital twin includes joint friction, transmission gears, and electric actuator dynamics. After assessing the dynamic accuracy of the Simscape model, we used it to test a computed torque control scheme, showing that the model can be reliably used in simulations with different aims, such as validating control schemes, evaluating collaborative functions or minimizing power consumption.
(This article belongs to the Special Issue Digital Twin-Based Human–Robot Collaborative Systems)
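
The computed torque control scheme evaluated on this digital twin follows the standard inverse dynamics law. Below is a minimal numerical sketch of that law (not the paper's Simscape implementation), assuming the mass matrix, Coriolis matrix and gravity vector are supplied as callables and that Kp and Kd are diagonal gain matrices:

    import numpy as np

    def computed_torque(q, dq, q_des, dq_des, ddq_des, M, C, g, Kp, Kd):
        """Inverse dynamics (computed torque) control law:
        tau = M(q) @ (ddq_des + Kd @ de + Kp @ e) + C(q, dq) @ dq + g(q)

        M, C and g are callables returning the manipulator's mass matrix,
        Coriolis matrix and gravity vector for the current joint state.
        """
        e = q_des - q                      # joint position error
        de = dq_des - dq                   # joint velocity error
        v = ddq_des + Kd @ de + Kp @ e     # outer-loop acceleration command
        return M(q) @ v + C(q, dq) @ dq + g(q)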

15 pages, 3678 KiB  
Article
The Impact of Changing Collaborative Workplace Parameters on Assembly Operation Efficiency
by Klemen Kovič, Aljaž Javernik, Robert Ojsteršek and Iztok Palčič
Robotics 2024, 13(3), 36; https://doi.org/10.3390/robotics13030036 - 23 Feb 2024
Viewed by 1056
Abstract
Human–robot collaborative systems bring several benefits by using human and robot capabilities simultaneously. One of the critical questions is the impact of these systems on production process efficiency. Achieving high efficiency depends strongly on the collaborative robot's characteristics and motion parameters, and on the ability of humans to adjust to changing circumstances. Our research therefore analyzes the effect of changing collaborative robot motion parameters, acoustic parameters and visual factors on a specific assembly operation, where efficiency is measured through operation times. To conduct the study, we designed a digital twin-based model and a laboratory experiment in the form of a collaborative workplace. The results show that changing the motion, acoustic and visual parameters of the collaborative workplace significantly impacts assembly process efficiency.
(This article belongs to the Special Issue Digital Twin-Based Human–Robot Collaborative Systems)

32 pages, 17080 KiB  
Article
User Study to Validate the Performance of an Offline Robot Programming Method That Enables Robot-Independent Kinesthetic Instruction through the Use of Augmented Reality and Motion Capturing
by Fabian Müller, Michael Koch and Alexander Hasse
Robotics 2024, 13(3), 35; https://doi.org/10.3390/robotics13030035 - 23 Feb 2024
Viewed by 1000
Abstract
The paper presents a novel offline programming (OLP) method based on programming by demonstration (PbD), which has been validated through a user study. PbD is a programming method that involves physical interaction with robots, and kinesthetic teaching (KT) is a commonly used online programming method in industry. However, online programming methods consume significant robot resources, limiting the speed advantages of PbD and emphasizing the need for an offline approach. The method presented here, based on KT, uses a virtual representation instead of a physical robot, allowing programming independently of the working environment. It employs haptic input devices to teach a simulated robot in augmented reality and uses automatic path planning. A benchmarking test was conducted to standardize equipment, procedures, and evaluation techniques so that different PbD approaches can be compared. The results indicate a 47% decrease in programming time compared to traditional KT methods in established industrial systems. Although the accuracy is not yet at the level of industrial systems, users showed rapid improvement, confirming the learnability of the system. User feedback on perceived workload and ease of use was positive. In conclusion, this method has potential for industrial use due to its learnability, reduction in robot downtime, and applicability across different robot sizes and types.
(This article belongs to the Special Issue Digital Twin-Based Human–Robot Collaborative Systems)
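
As a rough illustration of turning a demonstration into an executable path offline, the sketch below densifies a sequence of demonstrated end-effector waypoints by linear interpolation. It is a simplified stand-in using placeholder data, not the paper's AR- and motion-capture-based method:

    import numpy as np

    def interpolate_path(waypoints, samples_per_segment=20):
        """Linearly interpolate between demonstrated Cartesian waypoints.
        Returns a densely sampled (M, 3) path that a planner or controller
        could track; the input here is placeholder demonstration data."""
        waypoints = np.asarray(waypoints, dtype=float)
        segments = []
        for start, end in zip(waypoints[:-1], waypoints[1:]):
            t = np.linspace(0.0, 1.0, samples_per_segment, endpoint=False)
            segments.append(start + t[:, None] * (end - start))
        segments.append(waypoints[-1:])    # keep the final waypoint
        return np.vstack(segments)

    # Example with made-up demonstration points (metres).
    demo = [[0.4, 0.0, 0.3], [0.4, 0.2, 0.3], [0.3, 0.2, 0.5]]
    path = interpolate_path(demo)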

16 pages, 8410 KiB  
Article
A Path to Industry 5.0 Digital Twins for Human–Robot Collaboration by Bridging NEP+ and ROS
by Enrique Coronado, Toshio Ueshiba and Ixchel G. Ramirez-Alpizar
Robotics 2024, 13(2), 28; https://doi.org/10.3390/robotics13020028 - 1 Feb 2024
Viewed by 1669
Abstract
The integration of heterogeneous hardware and software components to construct human-centered systems for Industry 5.0, particularly human digital twins, presents considerable complexity. Our research addresses this challenge by pioneering a novel approach that harmonizes the techno-centered focus of the Robot Operating System (ROS) with the cross-platform advantages inherent in NEP+ (a human-centered development framework intended to assist users and developers with diverse backgrounds and resources in constructing interactive human–machine systems). We introduce the nep2ros ROS package, aiming to bridge these frameworks and foster a more interconnected and adaptable approach. This initiative can be used to facilitate diverse development scenarios beyond conventional robotics, underpinning a transformative shift in Industry 5.0 applications. Our assessment of NEP+ capabilities includes an evaluation of communication performance utilizing serialization formats like JavaScript Object Notation (JSON) and MessagePack. Additionally, we present a comparative analysis between the nep2ros package and existing solutions, illustrating its efficacy in linking the simulation environment (Unity) and ROS. Moreover, our research demonstrates NEP+’s applicability through an immersive human-in-the-loop collaborative assembly. These findings offer promising prospects for innovative integration possibilities across a broad spectrum of applications, transcending specific platforms or disciplines.
(This article belongs to the Special Issue Digital Twin-Based Human–Robot Collaborative Systems)
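
The nep2ros package itself is not reproduced here; the sketch below only illustrates the general pattern it addresses, forwarding a ROS topic to an external publish/subscribe channel with MessagePack serialization. ZeroMQ is used as a stand-in transport, and the node, topic and port names are placeholders:

    import msgpack
    import rospy
    import zmq
    from sensor_msgs.msg import JointState

    # Illustrative ROS-to-ZeroMQ bridge (not the nep2ros implementation):
    # forward joint states to an external client (e.g., a Unity scene)
    # as MessagePack-encoded dictionaries.
    context = zmq.Context()
    socket = context.socket(zmq.PUB)
    socket.bind("tcp://*:5556")            # placeholder port

    def forward(msg):
        payload = {
            "name": list(msg.name),
            "position": list(msg.position),
            "velocity": list(msg.velocity),
        }
        socket.send(msgpack.packb(payload, use_bin_type=True))

    rospy.init_node("joint_state_bridge")  # placeholder node name
    rospy.Subscriber("/joint_states", JointState, forward)
    rospy.spin()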

21 pages, 57501 KiB  
Article
A 3D World Interpreter System for Safe Autonomous Crane Operation
by Frank Bart ter Haar, Frank Ruis and Bastian Thomas van Manen
Robotics 2024, 13(2), 23; https://doi.org/10.3390/robotics13020023 - 26 Jan 2024
Viewed by 1482
Abstract
In an effort to improve short-sea shipping in Europe, we present a 3D world interpreter (3DWI) system as part of a robotic container-handling system. The 3DWI is an advanced sensor suite combined with AI-based software and the communication infrastructure to connect to both the crane control and the shore control center. From LiDAR data and stereo captures, the 3DWI builds a world model of the operating environment and detects containers. The 3DWI and crane control are the core of an autonomously operating crane that monitors the environment and may trigger an emergency stop while alerting the remote operator of the danger. During container handling, the 3DWI scans for human activity and continuously updates a 3D-Twin model for the operator, enabling situational awareness. The presented methodology covers the sensor suite design, the creation of the world model and the 3D-Twin, innovations in AI detection software, and the interaction with the crane and operator. Supporting experiments quantify the performance of the 3DWI, its AI detectors, and its safety measures; the detectors reach the top of VisDrone's leaderboard, and pilot tests show safe autonomous operation of the crane.
(This article belongs to the Special Issue Digital Twin-Based Human–Robot Collaborative Systems)
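
The safety behavior described above, stopping the crane when human activity enters the working area, can be pictured as a simple distance-threshold monitor. The sketch below is schematic only; the detection input, exclusion radius and crane interface are assumptions, not the paper's implementation:

    import math

    SAFETY_RADIUS_M = 5.0   # assumed exclusion radius around the load

    def distance_xy(p, q):
        """Planar distance between two (x, y, z) points."""
        return math.hypot(p[0] - q[0], p[1] - q[1])

    def monitor_step(person_positions, load_position, crane):
        """One safety-monitor cycle: stop the crane and alert the remote
        operator if any detected person is inside the exclusion zone."""
        for person in person_positions:
            if distance_xy(person, load_position) < SAFETY_RADIUS_M:
                crane.emergency_stop()        # hypothetical crane interface
                crane.alert_operator(person)  # hypothetical alert call
                return True
        return False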

26 pages, 8960 KiB  
Article
Virtual Reality Teleoperation System for Mobile Robot Manipulation
by Bryan R. Galarza, Paulina Ayala, Santiago Manzano and Marcelo V. Garcia
Robotics 2023, 12(6), 163; https://doi.org/10.3390/robotics12060163 - 29 Nov 2023
Cited by 1 | Viewed by 2043
Abstract
Over the past few years, industry has experienced significant growth, leading to what is now known as Industry 4.0. This advancement has been characterized by increasing automation through robots. Industries have embraced mobile robots to enhance efficiency in specific manufacturing tasks, aiming for optimal results and reduced human error. Moreover, robots can perform tasks in areas inaccessible to humans, such as hard-to-reach zones or hazardous environments. However, a key challenge lies in the lack of knowledge about the operation and proper use of the robot. This work presents the development of a teleoperation system using the HTC Vive Pro 2 virtual reality headset, which allows individuals to immerse themselves in a fully virtual environment and become familiar with the operation and control of the KUKA youBot robot. The virtual reality experience is created in Unity and connected to ROS (Robot Operating System), through which the robot's movements are executed. To prevent potential damage to the real robot, a simulation is conducted in Gazebo, facilitating the understanding of the robot's operation.
(This article belongs to the Special Issue Digital Twin-Based Human–Robot Collaborative Systems)
