Article

Scalability of Cyber-Physical Systems with Real and Virtual Robots in ROS 2

by Francisco José Mañas-Álvarez, María Guinaldo, Raquel Dormido and Sebastian Dormido-Canto *
Department of Computer Sciences and Automatic Control, Universidad Nacional de Educación a Distancia (UNED), Juan del Rosal 16, 28040 Madrid, Spain
*
Author to whom correspondence should be addressed.
These authors contributed equally to this work.
Sensors 2023, 23(13), 6073; https://doi.org/10.3390/s23136073
Submission received: 31 May 2023 / Revised: 21 June 2023 / Accepted: 27 June 2023 / Published: 1 July 2023
(This article belongs to the Section Sensors and Robotics)

Abstract:
Nowadays, cyber-physical systems (CPSs) are composed of more and more agents, and designers are increasingly required to develop ever larger multi-agent systems. When the number of agents increases, several challenges related to control or communication arise due to the lack of scalability of existing solutions. It is therefore important to develop tools that allow the evaluation of control strategies for large-scale systems. In this paper, a CPS is considered to be a heterogeneous robot multi-agent system that cooperatively performs a formation task through a wireless network. The goal of this research is to evaluate the system’s performance when the number of agents increases. To this end, two different frameworks developed with the open-source tools Gazebo and Webots are used. These frameworks enable combining both real and virtual agents in a realistic scenario, allowing scalability experiments. They also reduce the costs incurred when a significant number of robots operate in a real environment, as experiments can be conducted with a few real robots and a higher number of virtual robots that mimic the real ones. Currently, the frameworks include several types of robots, such as the aerial robot Crazyflie 2.1 and the differential mobile robot Khepera IV used in this work. To illustrate the usage and performance of the frameworks, an event-based control strategy for rigid formations varying the number of agents is analyzed. The agents should achieve a formation defined by a set of desired Euclidean distances to their neighbors. To compare the scalability of the system in the two different tools, the following metrics have been used: formation error, CPU usage percentage, and the ratio between the simulation time and the real time. The results show the feasibility of using Robot Operating System (ROS) 2 in distributed architectures for multi-agent systems in experiments with real and virtual robots regardless of the number of agents and their nature. However, the two tools under study behave differently in some of these parameters when the number of virtual agents grows, and such discrepancies are analyzed.

1. Introduction

Cyber-physical systems (CPSs) can be considered a new generation of digital systems, and their impact on the acceleration of technological progress is huge. A CPS arises from the tight integration of a cyber system and a physical process. It is defined as a system that deeply joins computing and communication capabilities to control and interact with a process in the physical world [1]. Through the feedback loop between computation and the process, the system interacts in real time with the physical entity to monitor or control it in a secure, efficient, and reliable way.
Examples of CPSs include applications in many areas such as aerospace, transportation, manufacturing, automotive, etc. [2]. The heterogeneous nature of CPSs requires advanced knowledge in several disciplines for their design and construction. For instance, the correct operation of applications with a huge potential impact on the daily lives of many people, such as the so-called Human-in-the-Loop Cyber-Physical Systems (HiLCPSs), requires not only technical knowledge to design the interface between the human and the autonomous components but also control competences to design strategies for these human-in-the-loop systems [3,4]. Currently, most CPS researchers mainly focus on system architecture, information processing, and software design [5,6].
Building CPSs in mobile robotics is particularly interesting [7]. In designing and implementing advanced heterogeneous robotic hardware platforms for controlling ever-larger multi-agent systems, several challenges appear [8]. The integration of all hardware devices and the requirement of reliable, concurrent hardware access with real-time constraints make the design and implementation of these systems a difficult task. In particular, when the number of agents increases, several problems related to control or communication arise due to the lack of scalability of existing solutions. Developing and comparing tools that allow the evaluation of control strategies for large-scale systems is therefore necessary. The main contribution of this paper is oriented towards this purpose.
Usually, before deploying the system in a real-world scenario, tests under controlled environments must be carried out to reduce possible unexpected behaviors. In this regard, different alternatives are available: implementation in virtual environments using robotic simulation tools; using a physical environment that allows validating developments on a real platform; or making use of hybrid platforms that combine real and virtual agents [9]. Whatever scheme is chosen, a robotic simulation tool is always used in the first steps before real implementation, making the selection of the most suitable tool an important task for the scientific and academic community.
In this paper, it is considered that a CPS is formed by a heterogeneous multi-robot system (MRS) that cooperatively performs a formation task through a wireless network. In recent years, the number of simulation tools and technologies available for the development of MRS platforms has grown substantially. Digital twins (DTs) and digital shadows [10,11,12,13] are among those technologies that perfectly integrate the principles of CPSs and their capability to manage information from a virtual world and to allow data collection and exchange in real time with the physical system. Augmented reality (AR) has also been used as a support tool to integrate physical and digital environments [14]. Unlike virtual reality (VR), which generates representations of the physical world, AR, mainly applied in collaborative robotics, superimposes digital information on elements of the physical world. Mixed reality (MR) involves a combination of VR and AR [15,16,17]. An MR-based system interacts with and manipulates both physical and digital elements and environments through fully bidirectional communication [18,19,20]. In this way, MR has great potential to solve classic mobile robotics problems in an intuitive and efficient way [21]. MR technology has been used in different applications in robotics [22,23]. For instance, it has been explored in social robotics to enhance a robot with limited expressivity [22].
Robotic Park [9,24], a recently developed, complete, heterogeneous, flexible, and easy-to-use indoor platform to perform MRS experiments in the Department of Computer Sciences and Automatic Control of UNED, is the experimental framework where our tests are carried out. It supports virtual, real, or hybrid experiments through the two hybrid frameworks available, which enable combining physical agents (and their DTs) and virtual agents in MR experiences. As the DTs behave like the real agents, the nature of an agent is indistinguishable once integrated into ROS 2.
Among the most popular MRS simulators are Gazebo [25], V-REP (now CoppeliaSim) [26], Webots [27], and MVSIM [28]. As they were not built as general-purpose tools, it is difficult to compare them directly. Several works in the literature compare different simulators, pointing out the strengths and weaknesses of each one [29,30,31]. Comparisons of features such as programming languages, supported operating systems, accuracy, open-source availability, and the robots, sensors, or actuators supported are available [32,33]. Gazebo, a versatile open-source simulator with a large community of developers, has long been the reference software tool for robotics research. V-REP is easier to manage and makes editing objects simpler, but a license is required [34]. Recently, Audonnet et al. studied robotic arm manipulation under different simulation software compatible with ROS 2 [35]. They compare Ignition and Webots in terms of stability and analyze PyBullet and CoppeliaSim in terms of resource usage. No simulator is found to be the best overall. In [36], a comparison of four simulation environments for robotics and reinforcement learning can be found. Pitonakova et al. [37] compared the functionalities and simulation speed of Gazebo, ARGoS, and V-REP. If we focus on quantitative comparisons, few works have been reported for the most used robot simulation tools. In [31], a quantitative comparison among CoppeliaSim, Gazebo, MORSE, and Webots on the accuracy of motion for mobile robots can be found. The main motivation of this work is to evaluate the system’s performance when the number of agents increases. The objective is to carry out a comparative analysis of two popular robotic simulators, Gazebo and Webots, determining which simulator allows working with a larger number of agents, what the resource consumption of each one is, and the limit on the possible combination of real and virtual agents beyond which the Real-Time Factor drops. Moreover, an analysis of the convergence times for both simulators to achieve the desired formation using an event-based control strategy has been performed. A quantitative and objective comparison of the two simulators is provided.
For that purpose, two different frameworks within Robotic Park have been developed with the open-source tools Gazebo and Webots. Both tools enable MRS experiments combining physical and virtual agents in a realistic scenario regardless of the number of agents and their nature, allowing scalability tests without the extra cost of operating a significant number of robots in a real environment. In this way, several experiments that address an event-based control strategy for rigid formations varying the number of agents (either the aerial robot Crazyflie 2.1 or the differential mobile robot Khepera IV) are carried out. The required formation is defined by a set of desired Euclidean distances among neighbors. Several metrics are used to analyze the scalability of the tools: the formation error, CPU utilization as a percentage, and the ratio between the simulation time and the real time. The results illustrate the scalability of Robotic Park, as experiments can be conducted with a few physical robots and a large number of virtual robots that mimic the real ones. Moreover, the use of ROS 2 facilitates the addition of new agents, a decentralized communication system, and greater integration with simulation environments.
The paper is organized as follows. Section 2 describes in detail the experimental environment and its components. Section 3 presents the main features of the MRS focused on formation with distance constraints used and defines the experiments carried out to perform the comparison between Gazebo and Webots. Section 4 discusses experimental results using the platform. Finally, Section 5 ends the paper with conclusions and suggestions for future works.

2. Materials and Methods

2.1. Experimental Platform

The experiments conducted in this work have been performed on Robotic Park [9] (Figure 1a). This platform is focused on indoor multi-robot experiments with heterogeneous agents. The workspace has a volume of 3.6 m × 3.6 m × 2 m. This space allows the operation of a relatively large number of small robots in formation and navigation tasks. Different positioning systems based on different technologies (Motion Capture, UWB, Infrared, etc.) are available. This enables combining different agents, as some of them can only use a single system. Among the different robots available on the platform, the micro aerial robot Crazyflie 2.1 (Figure 1b) and the mobile robot Khepera IV (Figure 1c) are used in this work. The Crazyflie 2.X is an open-source platform developed by Bitcraze [38]. It is suitable for indoor experimentation due to its small size (92 mm × 92 mm × 29 mm), low mass (27 g), and low inertia. Khepera IV is a wheeled mobile robot designed by K-Team [39,40]. It is specially designed to work on hard and flat indoor surfaces.

2.2. Simulators

Gazebo and its integration with ROS/ROS 2 distributions is one of the most widely used frameworks in robotics [25]. It is a multi-platform open-source software with a large number of libraries developed by users to support a wide range of sensors, actuators, robots, etc. It allows the development of multi-robot experiences with easy management of the loading and unloading of assets in the environment. Gazebo allows the implementation of distributed simulations and adjustable real-time factor performance. It supports different physics engines: Bullet, Dynamic Animation and Robotics Toolkit (DART), Open Dynamics Engine (ODE), and Simbody. In addition, Gazebo uses the Universal Robot Description Format (URDF) and the Simulation Description Format (SDF), the two most widely used representation formats for robotics simulations. Figure 2a shows the twin of the Robotic Park platform developed with Gazebo.
Webots is a simulation platform with a long history since its launch in 1996 by Olivier Michel at the Swiss Federal Institute of Technology in Lausanne (EPFL) [27]. Since 1998, Cyberbotics Ltd. has been the main maintainer of the tool. It is a multi-platform open-source software with wide use in industry and research. As a physics engine it exclusively uses ODE, and it supports C++, Python, Java, and MATLAB as programming languages. It has recently added a ROS 2 bridge among its features. It has a more user-friendly interface than other tools, allowing users to manually interact in real time with the robots. This facilitates the use of the tool for non-experienced users, such as students. Finally, Webots supports the PROTO format for representing objects and robots. A short URDF file can be used to set robot parameters (either user-defined parameters or ROS 2 plugin characteristics, such as the sensor update rate). Figure 2b shows the developed twin of the Robotic Park platform in Webots.

2.3. ROS 2

ROS is a set of open-source software libraries and standard tools that help in building robot applications [41]. Its most recent version, ROS 2, includes new features that make ROS easier to learn and use. This encourages new users who were until now reluctant to adopt ROS. These improvements include a more efficient communication protocol with better real-time performance than ROS, which allows distributed architectures, support for the most recent language versions, such as Python 3, and native multi-platform development. This last feature brings it closer to macOS and Windows users, two systems with a wide presence among non-professional users.
In Robotic Park, ROS 2 is used as a link between all agents in the system (real and virtual). It allows easier integration of new agents without updates to already installed agents. Each element (robots, positioning systems, cameras, etc.) has a node that serves a single, modular purpose in the system. On the one hand, a ROS 2 node is created for each positioning system, which is responsible for publishing the global position of the robots in their corresponding topics (geometry_msgs/Pose type). On the other hand, the nodes associated with the robots are in charge of subscribing to the topics relevant to them, such as their global position from the external positioning systems. These same nodes publish the robots’ internally estimated position, obtained from their global position and other internal localization systems, such as IMUs or odometry. The nodes of the agents that are connected in the formation subscribe to these topics to close the coordination control loop. A DT model recreates the real robots in the virtual environment with identical characteristics to the real system. DTs run on the simulation software and have a twofold purpose. First, they allow complementing the sensors of the real robot, such as distance sensors; this enables our experiments to emulate the presence of virtual robots in the physical environment and makes obstacle avoidance algorithms respond as if all robots were real. Second, they can also be used for fault detection: for instance, in case of the loss of a real robot, it could be temporarily replaced by its digital twin to avoid triggering failures in the movement of the rest of the agents.
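The following minimal sketch (not the authors' code; node names, topic names, and the 100 Hz rate are illustrative) shows this publish/subscribe pattern in rclpy: a positioning node publishes a geometry_msgs/Pose for one robot, and the robot's driver node subscribes to it and republishes its internal estimate.
```python
import rclpy
from rclpy.executors import MultiThreadedExecutor
from rclpy.node import Node
from geometry_msgs.msg import Pose


class PositioningNode(Node):
    """Publishes the externally measured global pose of one robot."""

    def __init__(self, robot_ns):
        super().__init__('positioning_system', namespace=robot_ns)
        self.pub = self.create_publisher(Pose, 'global_pose', 10)
        self.create_timer(0.01, self.publish_pose)  # 100 Hz

    def publish_pose(self):
        msg = Pose()
        # In the real platform these values come from MoCap/UWB/Lighthouse.
        msg.position.x, msg.position.y, msg.position.z = 0.0, 0.0, 0.0
        self.pub.publish(msg)


class DriverNode(Node):
    """Subscribes to the global pose and republishes an internal estimate."""

    def __init__(self, robot_ns):
        super().__init__('driver', namespace=robot_ns)
        self.create_subscription(Pose, 'global_pose', self.on_pose, 10)
        self.est_pub = self.create_publisher(Pose, 'estimated_pose', 10)

    def on_pose(self, msg):
        # A real driver would fuse this with IMU/odometry; here it is simply echoed.
        self.est_pub.publish(msg)


def main():
    rclpy.init()
    executor = MultiThreadedExecutor()
    for node in (PositioningNode('agent_1'), DriverNode('agent_1')):
        executor.add_node(node)
    executor.spin()


if __name__ == '__main__':
    main()
```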
In the designed ROS 2 architecture, each robot is defined within a namespace where all the nodes necessary for its operation are grouped. For instance, Figure 3 shows a subset of two robots’ namespaces of these experiments. The robot “i” is a physical robot with its DT and the robot “ i + 1 ” is just a virtual robot. A driver node is defined for each robot, which is responsible for the communication with sensors and actuators. DTs in a ROS network are indistinguishable from real robots since they can use the same nomenclature and types of nodes and topics.
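A hypothetical ROS 2 launch sketch of this per-robot namespace layout is shown below; the package and executable names are placeholders, not those used in Robotic Park.
```python
from launch import LaunchDescription
from launch_ros.actions import Node


def generate_launch_description():
    # One namespace per agent; each groups the driver and the controller nodes.
    robots = [
        {'ns': 'khepera_1', 'driver': 'khepera_driver'},      # physical robot with DT
        {'ns': 'crazyflie_5', 'driver': 'crazyflie_driver'},  # purely virtual robot
    ]
    nodes = []
    for robot in robots:
        nodes.append(Node(package='robotic_park_demo',        # placeholder package name
                          executable=robot['driver'],
                          namespace=robot['ns']))
        nodes.append(Node(package='robotic_park_demo',
                          executable='formation_controller',
                          namespace=robot['ns']))
    return LaunchDescription(nodes)
```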

2.4. Computational Resources

The setup of this work runs on a laptop with a centralized architecture. In this way, all the nodes and the simulation software run on the same CPU. It is an HP ZBook Power G7 Mobile Workstation 15.6″ with an Intel Core i9-10885H, 64 GB of DDR4 3200 RAM, a 2 TB SSD, and an NVIDIA TU117GLM [T1200 Laptop GPU] and Intel Corp. TigerLake-H GT1 as graphics cards. Ubuntu 22.04 with kernel version 5.19.0-41-generic is the operating system installed on the PC. To read the CPU consumption, the Python library psutil is used within a ROS 2 node.
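A minimal sketch of such a monitoring node is given below; it uses the standard psutil API, and the topic names and the set of monitored process names are illustrative assumptions rather than the authors' implementation.
```python
import psutil
import rclpy
from rclpy.node import Node
from std_msgs.msg import Float32


class CpuMonitor(Node):
    def __init__(self):
        super().__init__('cpu_monitor')
        self.global_pub = self.create_publisher(Float32, 'cpu/global_percent', 10)
        self.sim_pub = self.create_publisher(Float32, 'cpu/simulator_percent', 10)
        # Simulator processes of interest (Gazebo is split into client and server).
        targets = {'gzserver', 'gzclient', 'webots'}
        self.procs = [p for p in psutil.process_iter(['name']) if p.info['name'] in targets]
        psutil.cpu_percent(interval=None)       # prime the system-wide counter
        for p in self.procs:
            p.cpu_percent(interval=None)        # prime the per-process counters
        self.create_timer(1.0, self.sample)     # sample once per second

    def sample(self):
        self.global_pub.publish(Float32(data=float(psutil.cpu_percent(interval=None))))
        # Per-process usage can exceed 100% when several threads run on different cores.
        sim = sum(p.cpu_percent(interval=None) for p in self.procs if p.is_running())
        self.sim_pub.publish(Float32(data=float(sim)))


def main():
    rclpy.init()
    rclpy.spin(CpuMonitor())


if __name__ == '__main__':
    main()
```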

3. Problem Formulation and Experiments

In this section, we first present the control objective and the architecture implemented to perform the experiments; then, the specifications of the experiments and the metrics analyzed to study the scalability are described.

3.1. Control Architecture

The main goal of this work is to explore the limit in the number of agents supported by two of the main simulators used in the field of control and robotics. For that purpose, we consider a formation control problem in which the complexity of the formation basically depends on the number of agents, N. Specifically, the desired formation is defined in such a way that the agents should be uniformly distributed over a semi-spherical virtual surface. The ground level of the dome (z = 0) is composed of mobile robots, and the rest of the robots are drones. Figure 4 shows an example of the desired 3D formation and its projection over the XY-plane for N = 50 and a semi-sphere centered at (0, 0, 0) with a radius R = 2 m. The drones are distributed in rings of different heights, each of which is drawn with a different color. The uniform distribution of points over a sphere is a classical mathematical problem that is difficult to solve analytically, and recent approximate solutions have been proposed [42]. The MRS is modeled in terms of a graph G(V, E), where V = {v_1, ..., v_N} is a finite set of N vertices representing the nodes or agents and E ⊆ V × V is a finite set of edges representing the communication links between them. The graph G is defined in such a way that the formation is ensured to be rigid [43] and, as a result, target values are generated for the inter-distances between any two nodes v_i and v_j that are connected in the graph ((v_i, v_j) ∈ E).
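As a rough illustration (not the paper's exact construction, which relies on the point configurations discussed in [42]), the following sketch places N agents on a hemisphere of radius R using one ground ring of mobile robots at z = 0 and horizontal rings of drones at increasing heights, and derives the target distances d_ij* for a given edge set; the ring layout itself is an assumption.
```python
import numpy as np


def hemisphere_formation(n_agents, n_ground, radius=2.0):
    """Target positions: ground robots on the z = 0 circle, drones on higher rings."""
    points = []
    for k in range(n_ground):                          # Khepera IV ring at z = 0
        a = 2 * np.pi * k / n_ground
        points.append([radius * np.cos(a), radius * np.sin(a), 0.0])
    n_air = n_agents - n_ground                        # remaining agents are drones
    n_rings = max(1, int(round(np.sqrt(n_air))))
    sizes = np.diff(np.round(np.linspace(0, n_air, n_rings + 1))).astype(int)
    for r, size in enumerate(sizes, start=1):
        phi = 0.5 * np.pi * r / (n_rings + 1)          # polar angle of the ring
        z, rho = radius * np.sin(phi), radius * np.cos(phi)
        for k in range(size):
            a = 2 * np.pi * k / size
            points.append([rho * np.cos(a), rho * np.sin(a), z])
    return np.array(points)


def desired_distances(points, edges):
    """Target inter-agent distances d_ij* for the edges (i, j) of a rigid graph."""
    return {(i, j): float(np.linalg.norm(points[i] - points[j])) for i, j in edges}
```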
In Robotic Park a hierarchical control structure with three levels is available for experiences. It includes a position controller at the lower level, a coordination controller, and path planning at the upper level (see Figure 5). The solution to the formation control problem with collision avoidance proposed in this paper does not involve the upper level. Thus, the intermediate and the lower level participate as described below.
At the lower level, the position controllers follow a distributed architecture and are implemented onboard each robot. They close the control loop with signals generated by internal estimators from their sensors’ data and the external positioning systems (they have to ensure the stability of the closed-loop response) and work at the highest frequency (between 100 and 500 Hz). Different control architectures have been implemented on the two types of robots used in this work [9]. For instance, the Khepera IV robots use two controllers focused on position and orientation, and the Crazyflie is controlled using a two-level cascaded PID controller scheme. Moreover, at this level, an additional term is included to avoid collisions between robots. This control term is derived from repulsive potential fields as follows:
U_k = \begin{cases} \frac{1}{2}\,\eta\left(\frac{1}{d_k}-\frac{1}{d_0}\right)^2 & \text{if } d_k \le d_0 \\ 0 & \text{if } d_k > d_0 \end{cases}   (1)
where U_k denotes the repulsive potential of sensor k, d_0 is a threshold that activates the repulsive potential, d_k is the distance between the sensor and the obstacle, and η is a constant that characterizes the field. Then, the resulting repulsive force F_k is defined by:
F_k = -\nabla U_k = \begin{cases} \eta\left(\frac{1}{d_k}-\frac{1}{d_0}\right)\frac{1}{d_k^2}\,\frac{p_k - p_o}{d_k} & \text{if } d_k \le d_0 \\ 0 & \text{if } d_k > d_0 \end{cases}   (2)
where p_k - p_o is the relative position between the robot and the obstacle. Hence, the sum of all repulsive forces is F = \sum_k F_k, which has an impact on the goal position according to the following expression:
u_{oa} = h \cdot v \cdot \frac{F}{\|F\|}   (3)
where u_{oa} is the deviation of the goal position signal received from the coordination level, h is the period of the controller, and v is a constant velocity.
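A direct sketch of Equations (1)-(3) in code is given below; the parameter values for η, d_0, h, and v are purely illustrative.
```python
import numpy as np


def repulsive_force(p_robot, p_obstacle, eta=0.05, d0=0.3):
    """F_k from Eq. (2) for one distance sensor; zero beyond the threshold d0."""
    diff = np.asarray(p_robot, float) - np.asarray(p_obstacle, float)
    d_k = np.linalg.norm(diff)
    if d_k > d0 or d_k == 0.0:
        return np.zeros(3)
    return eta * (1.0 / d_k - 1.0 / d0) * (1.0 / d_k ** 2) * (diff / d_k)


def obstacle_avoidance_offset(p_robot, obstacles, h=0.02, v=0.2, eta=0.05, d0=0.3):
    """u_oa from Eq. (3): deviation applied to the goal position from the coordination level."""
    F = sum((repulsive_force(p_robot, p_o, eta, d0) for p_o in obstacles), np.zeros(3))
    norm_F = np.linalg.norm(F)
    if norm_F == 0.0:
        return np.zeros(3)
    return h * v * F / norm_F
```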
The intermediate level is responsible for coordinating the movements between agents to ensure that the desired formation is achieved. The implementation of this level can be centralized or decentralized, with the second option being the preferred choice when the number of agents is large. Moreover, a distributed implementation in which a node only communicates with a subset of agents (neighbors), defined by the graph, reduces delays in the transmission of signals between the different control levels. As described above, the formation is defined by a set of target distances. That is, if two nodes v_i and v_j are connected in the graph by an edge, then the distance between them, d_ij, should reach a desired value d_ij*. Both the connection between nodes and the target distances d_ij* are assumed to be given to ensure the desired dome shape. Additionally, to achieve the semi-spherical surface and guarantee rigidity, all the agents should maintain a distance R to the origin.
Let us denote by p_i ∈ R^3 the position of any agent i. In this case, the formation controller is defined as follows:
u_i^s = \sum_{j \in \mathcal{N}_i} \mu_{ij}\left((d_{ij}^*)^2 - \|p_i - p_j\|^2\right)(p_i - p_j) + \left(R^2 - \|p_i\|^2\right) p_i,   (4)
where μ_ij > 0 is a gain and ‖p_i - p_j‖ = d_ij represents the distance between agents i and j. Note that when all the distances between i and its neighbors j ∈ N_i converge to the target values d_ij* and the distance of each agent to the origin is R, the control signal u_i^s approaches zero. For practical reasons, the gains μ_ij are all set to the same value μ < 1, which ensures that the second term in (4) has a higher weight.
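A sketch of the control law (4) for one agent, given the positions of its neighbors and the corresponding target distances, is shown below; variable names and the values of R and μ are illustrative.
```python
import numpy as np


def formation_control(p_i, neighbour_positions, target_distances, R=2.0, mu=0.1):
    """u_i^s from Eq. (4) for agent i.

    neighbour_positions : dict {j: p_j} with the positions of the neighbours in the graph
    target_distances    : dict {j: d_ij*} with the desired distances to those neighbours
    """
    p_i = np.asarray(p_i, dtype=float)
    u = np.zeros(3)
    for j, p_j in neighbour_positions.items():
        diff = p_i - np.asarray(p_j, dtype=float)
        u += mu * (target_distances[j] ** 2 - diff @ diff) * diff
    # Second term: drives the agent towards the sphere of radius R centred at the origin.
    u += (R ** 2 - p_i @ p_i) * p_i
    return u
```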
From the implementation point of view, the control law (4) cannot be computed in continuous time but at discrete instants of time. Additionally, each agent needs to obtain the relative distance to its neighbors. Thus, deciding the updating times of (4) has an impact on both the control performance and the amount of information exchanged through the network. Moreover, the higher the number of nodes N, the larger the demand for network usage. For this reason, an event-based communication protocol has been implemented. Event-based sampling is an alternative to periodic sampling that has been proven to be effective in reducing the number of samples [44]. The main idea is that it is the state or the output of the system, and not the time, that determines when to sample the system. In recent decades, it has experienced a boom due to the advantages it provides in cyber-physical systems (networked control systems), which are typically resource-constrained.
In this case, the trigger function depends on an error function based on the position of the agent and on a threshold, so that an event (a sampling) occurs when the error reaches that limit value. Usually, the error function e(t) is defined as the difference between the last transmitted measurement x(t_k) and the current measurement of the state x(t), and its norm is compared with the threshold. There are different proposals for the threshold in the literature (constant, state-dependent, etc.), but, in general terms, the larger the threshold, the lower the rate of events. Thus, the trigger times of an event-based control system are defined by the following recursive form:
t_{k+1} = \inf\{t : t > t_k,\ f(e(t), x(t)) > 0\}   (5)
where e(t) = x(t_k) - x(t) is the error function and f(e(t), x(t)) is the trigger function. The evaluation of the trigger function runs in a loop that, in the current implementation, has a frequency of 50 Hz.
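A minimal sketch of this event-based transmission rule with a constant threshold, meant to be evaluated at 50 Hz, is given below; the threshold value and the names are illustrative and not taken from the implementation.
```python
import numpy as np


class EventTrigger:
    """Constant-threshold trigger: transmit x(t) only when ||x(t_k) - x(t)|| exceeds the threshold."""

    def __init__(self, threshold=0.05):
        self.threshold = threshold
        self.last_sent = None              # x(t_k): last transmitted state

    def step(self, x, send):
        """Call at 50 Hz with the current state x; `send` forwards it to the neighbours."""
        x = np.asarray(x, dtype=float)
        if self.last_sent is None or np.linalg.norm(self.last_sent - x) > self.threshold:
            self.last_sent = x.copy()      # event: update x(t_k) and transmit
            send(x)
```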

3.2. Experiments Description

Once the control objective has been defined, the experiments designed to study the scalability of the developed tools are described below. The different experiments are characterized by an increasing number of agents located along the surface of a hemisphere. A sweep is made from the simplest case with 5 agents to the limit that the simulation tools are able to support, 40 agents. In the hemisphere, the agents are placed at different levels. The ground level (z = 0) consists of the Khepera IV ground robots moving in the XY-plane. The rest of the agents are of the Crazyflie type and can move in three-dimensional space. Next, we briefly describe the experiments:
  • The MRS of experiment A (see Figure 6a,b) consists of a total of five agents, four of which are Khepera IV and one Crazyflie. In this case, all the robots are real and only their corresponding DTs are running in the virtual environment.
  • In experiment B (see Figure 6c,d), the MRS is composed of 10 agents: four Crazyflies, and six Khepera. In this case, four real Crazyflies and four real Kheperas are used. In the virtual environment, two Kheperas run in addition to the virtual twins of the real robots.
  • In experiment C (see Figure 6e,f), the MRS is composed of 15 agents: seven Crazyflies, and eight Khepera. In this case, five real Crazyflies and four real Kheperas are used. The rest of the agents up to 15 are completely digital.
  • The fourth experiment, D (see Figure 6g,h), employs a total of 20 agents, 11 of which are Kheperas and 9 are Crazyflies. In this experience, six Crazyflies and four Kheperas are real. The rest of the agents up to 20 are completely digital.
  • The MRS in experiment E (see Figure 6i,j) is composed of 30 agents. In this case, the distribution of agents is 18 Crazyflies and 12 Kheperas.
  • For the last experiment, F, depicted in Figure 6k,l, the number of robots is 40 (26 Crazyflies and 14 Khepera).
From experiments A to C, the proportion of real robots is above 50% (50% is reached in D). From D to F, the number of real robots is kept at six Crazyflies and four Kheperas, thus progressively increasing the proportion of digital agents in these experiments. This information is summarized in Table 1, and the final spatial distribution is shown in Figure 6. A video showing the real and virtual environments for experiment E is available online: https://youtu.be/4H3YZ-sr2mw (accessed on 30 June 2023).
To quantitatively evaluate the experiments, several parameters have been considered. On the one hand, we try to measure how computationally demanding these experiments are when the number of agents increases, and on the other hand, the impact on the performance of the system. These are the performance indices analyzed:
  • Global CPU percentage. This value represents the current system-wide CPU utilization as a percentage.
  • CPU percentage. This represents the individual process CPU utilization as a percentage. It can be >100.0 in case of a process running multiple threads on different CPUs.
  • Real-Time Factor (RTF). This shows a ratio of calculation time within a simulation (simulation time) to execution time (real time).
  • Integral Absolute Error (IAE). This index weights all errors equally over time. It gives global information about the agents.
  • Integral of Time-weighted Absolute Error (ITAE). In systems that use step inputs, the initial error is always high. Consequently, to make a fair comparison between systems, errors maintained over time should have a greater weight than the initial errors. The ITAE weights each error by the time at which it occurs, so it de-emphasizes the initial transient and penalizes errors that persist for longer (see the sketch after this list).
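As a reference for how these last two indices can be obtained from logged data, the following sketch computes the IAE and ITAE of an error signal weighted by the number of agents and the experiment duration; this is an illustrative reconstruction, not the authors' evaluation code.
```python
import numpy as np


def weighted_iae_itae(t, e, n_agents):
    """IAE and ITAE of |e(t)| weighted by the number of agents and the experiment duration.

    t : (M,) sample times in seconds
    e : (M,) total formation error at each sample
    """
    t, abs_e = np.asarray(t, dtype=float), np.abs(np.asarray(e, dtype=float))
    duration = t[-1] - t[0]
    iae = np.trapz(abs_e, t) / (n_agents * duration)
    itae = np.trapz((t - t[0]) * abs_e, t) / (n_agents * duration)
    return iae, itae
```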

4. Results

CPU usage, RTF, and formation error (IAE and ITAE) are used as criteria to compare and assess the simulation performance of Gazebo and Webots for all experiments described in Table 1. In each experiment, the number of agents increases and goes from 5 (experiment A) to 40 (experiment F).

4.1. CPU Consumption

Results of the CPU consumption are presented in Figure 7. Figure 7a shows the global CPU usage of the system. Note that this parameter increases with the number of robots for both simulators. However, Gazebo's CPU consumption is higher than Webots' in all experiments and reaches 100% for N = 40 (experiment F). Indeed, in this latter case, Gazebo was unable to run correctly, as the timeout of the tool is exceeded when loading the robots at the beginning of the simulation. Neither simulator was able to carry out an experiment with 50 agents. Furthermore, from a performance perspective, a significant difference is observed between the two tools in terms of resource consumption. Figure 7b shows the specific CPU usage of the processes running in each simulation tool, since Gazebo is split into two processes: Gzclient, responsible for running the GUI, and Gzserver, responsible for the physics engine and sensors. The results show that, in all cases recorded, the resource consumption of Gazebo (Gzclient + Gzserver) is more than double that of Webots. This is a clear indicator that Webots is a more efficient tool in terms of CPU usage management. Note that for experiments C, D, and E, the threshold of 100% is exceeded for Gazebo's processes. This simply indicates that these processes have more than one thread running.

4.2. Real-Time Factor

The Real-Time Factor (RTF) is the ratio between the simulation time and the real execution time. This factor is easily accessible in the simulators. If it reaches the unit value, the simulation is running in real time, and an RTF > 1 means that the process is running at an accelerated rate. Since we are performing experiments with real and virtual robots, even if all nodes are running in simulated time, this index should be as close to 1 as possible.
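Although both simulators report the RTF directly, it can also be estimated from a log of matching wall-clock and simulation-time samples, as in this small illustrative sketch:
```python
import numpy as np


def real_time_factor(wall_times, sim_times):
    """Advance of simulated time per unit of real (wall-clock) time; 1.0 means real time."""
    wall = np.asarray(wall_times, dtype=float)
    sim = np.asarray(sim_times, dtype=float)
    return (sim[-1] - sim[0]) / (wall[-1] - wall[0])
```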
Table 2 shows the RTF obtained for each experiment. Results are similar for both simulators when the number of agents is under 15. Gazebo achieves good performance in experiment C with 15 agents, but when the number increases to 20, its RTF drops below 0.75. In this case, in experiment D (N = 20), Webots still maintains a high performance. Experiment E increases the number of agents to 30, and Webots' performance is slightly lower but remains above 0.8, while Gazebo falls below 0.5. Finally, in experiment F, Webots' RTF drops to a value of 0.56. There is a clear decrease in RTF when the number of robots increases. As Gazebo runs in two processes, its RTF decreases sooner as the physics simulation struggles in the successive experiments, scaling worse than Webots to a large number of robots. Maintaining a high RTF allows running models in real time while using a hardware-in-the-loop simulation to test controllers. In this sense, for experiments like those presented in this paper, where the batteries of the real agents are a limiting factor, the highest RTF is the best. In fact, Crazyflie batteries only support 3–5 min of flight time, so an RTF under 0.75 is not acceptable.

4.3. System Performance

Once metrics related to simulator computational efficiency have been examined, an analysis of convergence times to achieve the desired formation for both simulators is carried out. For this goal, first, the time evolution of the total error weighted by the number of agents for the different experiences is depicted in Figure 8. In a qualitative way, the results are consistent for both simulators, as similar behaviors are found among different tests in all cases. To obtain a quantitative analysis, we compute the IAE and ITAE weighted by the total number of agents and the experiment duration time. The results are shown in Table 3.
Data show that both tools scale well with the number of agents up to the limit supported by each simulator. Indeed, most of the values in Table 3 are in the same range. However, many factors might influence this result, such as the number of digital and real agents, the number of connectivity links, the initial conditions, etc.

5. Conclusions

Robot simulators are a fundamental part of the development of robotic systems. Therefore, the evaluation and comparison of the different simulation systems available is an important task. In this paper, the scalability of the Gazebo and Webots simulators integrated with Robotic Park, an MRS-ROS 2 experimental platform, has been analyzed. To this end, several experiments are conducted to achieve a rigid formation as the number of agents changes. An event-based communication protocol has been implemented to reduce the demand for network usage, an important issue when the number of robots increases. Combining physical and virtual robots in both simulators along with ROS 2 has proven an easy and realistic way of increasing the number of agents in the experiments. Models and environments are identical in both tools, facilitating the comparison. The use of resources such as the CPU, the RTF, and the formation error have been evaluated for both tools.
The analyzed simulators present some limitations. For instance, Gazebo does not scale well to a large number of agents, as its high CPU usage requirement limits the number of robots that can be used in the system. The results show that Webots obtains the best scores, as fewer CPU resources are required when performing the simulation task. It must be noted that, due to its implementation design, Gazebo involves two processes, client and server, even without the agent models integrated. Moreover, the RTF, which is considered a good measure of how efficient a simulator is regardless of the hardware it is run on, has been analyzed. Although the RTF is similar for both tools for a reduced number of agents, it drops when the number of robots increases. This decrease is particularly evident for Gazebo, which is not able to maintain a good RTF above 20 agents.
The results show that Webots allows the combination of a greater number of virtual robots with real robots in collaborative MRS experiments. Moreover, its consumption of CPU resources for the same number of robots is lower, which highlights its suitability for resource-constrained systems. These benefits, in addition to a user interface friendly to inexperienced users, give Webots added value. Although this study provides some insights into the scalability issues of both simulators, further work is needed to detail the influence of different factors, such as the type of robots or the control architecture (frequencies, types, onboard versus offboard, etc.). Future work will include performance and scalability comparisons of other ROS 2-supported simulators and the consideration of other metrics, such as execution time using different physics engines or the study of the impact of delays on the computation. In addition, tests of the simulators under different scenarios involving more complex tasks that require the use of the path planning level will be considered, e.g., in cooperative navigation tasks or in the presence of disturbances.

Author Contributions

Conceptualization, F.J.M.-Á., M.G. and R.D.; methodology, M.G. and R.D.; software, F.J.M.-Á.; validation, F.J.M.-Á. and S.D.-C.; formal analysis, M.G. and F.J.M.-Á.; investigation, F.J.M.-Á., M.G., R.D. and S.D.-C.; resources, M.G.; data curation, R.D. and S.D.-C.; writing—original draft preparation, F.J.M.-Á. and R.D.; writing—review and editing, M.G., R.D. and S.D.-C.; visualization, R.D. and S.D.-C.; supervision, M.G. and R.D.; project administration, M.G. All authors have read and agreed to the published version of the manuscript.

Funding

This work was partly funded by Agencia Estatal de Investigación (AEI) under the Project PID2020-112658RB-I00/AEI/10.13039/501100011033, by the Spanish Ministry of Economy and Competitiveness under the Project No. PID2019-108377RB-C32, by the Project 2021V/-TAJOV/001 and by the Project IEData 2016-6.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Data are contained within the article.

Acknowledgments

This work was supported, in part, by Agencia Estatal de Investigación (AEI) under the Project PID2020-112658RB-I00/AEI/10.13039/501100011033, by the Spanish Ministry of Economy and Competitiveness under the Project No. PID2019-108377RB-C32, by the Project 2021V/-TAJOV/001 and by the Project IEData 2016-6.

Conflicts of Interest

The authors declare no conflict of interest.

Abbreviations

The following abbreviations are used in this manuscript:
AR: Augmented reality
CPS: Cyber-physical system
CPU: Central processing unit
DART: Dynamic Animation and Robotics Toolkit
DT: Digital twin
GUI: Graphical user interface
HiLCPS: Human-in-the-Loop Cyber-Physical System
IAE: Integral Absolute Error
IMU: Inertial Measurement Unit
ITAE: Integral of Time-weighted Absolute Error
MDPI: Multidisciplinary Digital Publishing Institute
MR: Mixed reality
MRS: Multi-robot system
ODE: Open Dynamics Engine
PID: Proportional–Integral–Derivative
ROS: Robot Operating System
RTF: Real-Time Factor
SDF: Simulation Description Format
URDF: Universal Robot Description Format
UWB: Ultra-wideband
VR: Virtual reality

References

  1. Lee, E.A. Cyber physical systems: Design challenges. In Proceedings of the 2008 11th IEEE International Symposium on Object and Component-Oriented Real-Time Distributed Computing (ISORC), Orlando, FL, USA, 5–7 May 2008; IEEE: Piscataway, NJ, USA, 2008; pp. 363–369.
  2. Zanero, S. Cyber-physical systems. Computer 2017, 50, 14–16.
  3. Schirner, G.; Erdogmus, D.; Chowdhury, K.; Padir, T. The future of human-in-the-loop cyber-physical systems. Computer 2013, 46, 36–45.
  4. Kim, J.; Seo, D.; Moon, J.; Kim, J.; Kim, H.; Jeong, J. Design and Implementation of an HCPS-Based PCB Smart Factory System for Next-Generation Intelligent Manufacturing. Appl. Sci. 2022, 12, 7645.
  5. Liu, Y.; Peng, Y.; Wang, B.; Yao, S.; Liu, Z. Review on cyber-physical systems. IEEE/CAA J. Autom. Sin. 2017, 4, 27–40.
  6. Duo, W.; Zhou, M.; Abusorrah, A. A survey of cyber attacks on cyber physical systems: Recent advances and challenges. IEEE/CAA J. Autom. Sin. 2022, 9, 784–800.
  7. Romeo, L.; Petitti, A.; Marani, R.; Milella, A. Internet of robotic things in smart domains: Applications and challenges. Sensors 2020, 20, 3355.
  8. Guo, Y.; Hu, X.; Hu, B.; Cheng, J.; Zhou, M.; Kwok, R.Y. Mobile cyber physical systems: Current challenges and future networking applications. IEEE Access 2017, 6, 12360–12368.
  9. Mañas-Álvarez, F.J.; Guinaldo, M.; Dormido, R.; Dormido, S. Robotic Park. Multi-Agent Platform for Teaching Control and Robotics. IEEE Access 2023, 11, 34899–34911.
  10. Maruyama, T.; Ueshiba, T.; Tada, M.; Toda, H.; Endo, Y.; Domae, Y.; Nakabo, Y.; Mori, T.; Suita, K. Digital twin-driven human robot collaboration using a digital human. Sensors 2021, 21, 8266.
  11. Poursoltan, M.; Traore, M.K.; Pinède, N.; Vallespir, B. A Digital Twin Model-Driven Architecture for Cyber-Physical and Human Systems. In Proceedings of the International Conference on Interoperability for Enterprise Systems and Applications, Tarbes, France, 24–25 March 2020; Springer: Berlin/Heidelberg, Germany, 2023; pp. 135–144.
  12. Phanden, R.K.; Sharma, P.; Dubey, A. A review on simulation in digital twin for aerospace, manufacturing and robotics. Mater. Today Proc. 2021, 38, 174–178.
  13. Guo, J.; Bilal, M.; Qiu, Y.; Qian, C.; Xu, X.; Choo, K.K.R. Survey on digital twins for Internet of Vehicles: Fundamentals, challenges, and opportunities. Digit. Commun. Netw. 2022, in press.
  14. Makhataeva, Z.; Varol, H.A. Augmented reality for robotics: A review. Robotics 2020, 9, 21.
  15. Hoenig, W.; Milanes, C.; Scaria, L.; Phan, T.; Bolas, M.; Ayanian, N. Mixed reality for robotics. In Proceedings of the 2015 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Hamburg, Germany, 28 September–2 October 2015; IEEE: Piscataway, NJ, USA, 2015; pp. 5382–5387.
  16. Cruz Ulloa, C.; Domínguez, D.; Del Cerro, J.; Barrientos, A. A Mixed-Reality Tele-Operation Method for High-Level Control of a Legged-Manipulator Robot. Sensors 2022, 22, 8146.
  17. Blanco-Novoa, Ó.; Fraga-Lamas, P.; Vilar-Montesinos, M.A.; Fernández-Caramés, T.M. Creating the internet of augmented things: An open-source framework to make iot devices and augmented and mixed reality systems talk to each other. Sensors 2020, 20, 3328.
  18. Phan, T.; Hönig, W.; Ayanian, N. Mixed reality collaboration between human-agent teams. In Proceedings of the 2018 IEEE Conference on Virtual Reality and 3D User Interfaces (VR), Tuebingen/Reutlingen, Germany, 18–22 March 2018; IEEE: Piscataway, NJ, USA, 2018; pp. 659–660.
  19. Chen, I.Y.H.; MacDonald, B.; Wunsche, B. Mixed reality simulation for mobile robots. In Proceedings of the 2009 IEEE International Conference on Robotics and Automation, Kobe, Japan, 12–17 May 2009; IEEE: Piscataway, NJ, USA, 2009; pp. 232–237.
  20. Seleckỳ, M.; Faigl, J.; Rollo, M. Communication architecture in mixed-reality simulations of unmanned systems. Sensors 2018, 18, 853.
  21. Ostanin, M.; Klimchik, A. Interactive robot programing using mixed reality. IFAC-PapersOnLine 2018, 51, 50–55.
  22. Groechel, T.; Shi, Z.; Pakkar, R.; Matarić, M.J. Using socially expressive mixed reality arms for enhancing low-expressivity robots. In Proceedings of the 2019 28th IEEE International Conference on Robot and Human Interactive Communication (RO-MAN), New Delhi, India, 14–18 October 2019; IEEE: Piscataway, NJ, USA, 2019; pp. 1–8.
  23. Tian, H.; Lee, G.A.; Bai, H.; Billinghurst, M. Using Virtual Replicas to Improve Mixed Reality Remote Collaboration. IEEE Trans. Vis. Comput. Graph. 2023, 29, 2785–2795.
  24. Mañas-Álvarez, F.J.; Guinaldo, M.; Dormido, R.; Socas, R.; Dormido, S. Formation by Consensus in Heterogeneous Robotic Swarms with Twins-in-the-Loop. In Proceedings of the ROBOT2022: Fifth Iberian Robotics Conference: Advances in Robotics, Zaragoza, Spain, 23–25 November 2022; Springer: Zaragoza, Spain, 2022; Volume 1, pp. 435–447.
  25. Koenig, N.; Howard, A. Design and use paradigms for gazebo, an open-source multi-robot simulator. In Proceedings of the 2004 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Sendai, Japan, 28 September–2 October 2004; IEEE: Piscataway, NJ, USA, 2004; Volume 3, pp. 2149–2154.
  26. Rohmer, E.; Singh, S.P.; Freese, M. V-REP: A versatile and scalable robot simulation framework. In Proceedings of the 2013 IEEE/RSJ International Conference on Intelligent Robots and Systems, Tokyo, Japan, 3–7 November 2013; IEEE: Piscataway, NJ, USA, 2013; pp. 1321–1326.
  27. Michel, O. Cyberbotics ltd. webots™: Professional mobile robot simulation. Int. J. Adv. Robot. Syst. 2004, 1, 5.
  28. Blanco-Claraco, J.L.; Tymchenko, B.; Mañas-Alvarez, F.J.; Cañadas-Aránega, F.; López-Gázquez, Á.; Moreno, J.C. MultiVehicle Simulator (MVSim): Lightweight dynamics simulator for multiagents and mobile robotics research. SoftwareX 2023, 23, 101443.
  29. Collins, J.; Chand, S.; Vanderkop, A.; Howard, D. A review of physics simulators for robotic applications. IEEE Access 2021, 9, 51416–51431.
  30. Kästner, L.; Bhuiyan, T.; Le, T.A.; Treis, E.; Cox, J.; Meinardus, B.; Kmiecik, J.; Carstens, R.; Pichel, D.; Fatloun, B.; et al. Arena-bench: A benchmarking suite for obstacle avoidance approaches in highly dynamic environments. IEEE Robot. Autom. Lett. 2022, 7, 9477–9484.
  31. Farley, A.; Wang, J.; Marshall, J.A. How to pick a mobile robot simulator: A quantitative comparison of CoppeliaSim, Gazebo, MORSE and Webots with a focus on accuracy of motion. Simul. Model. Pract. Theory 2022, 120, 102629.
  32. Noori, F.M.; Portugal, D.; Rocha, R.P.; Couceiro, M.S. On 3D simulators for multi-robot systems in ROS: MORSE or Gazebo? In Proceedings of the 2017 IEEE International Symposium on Safety, Security and Rescue Robotics (SSRR), Shanghai, China, 11–13 October 2017; IEEE: Piscataway, NJ, USA, 2017; pp. 19–24.
  33. Portugal, D.; Iocchi, L.; Farinelli, A. A ROS-based framework for simulation and benchmarking of multi-robot patrolling algorithms. In Robot Operating System (ROS) The Complete Reference (Volume 3); Springer: Berlin/Heidelberg, Germany, 2019; pp. 3–28.
  34. De Melo, M.S.P.; da Silva Neto, J.G.; Da Silva, P.J.L.; Teixeira, J.M.X.N.; Teichrieb, V. Analysis and comparison of robotics 3d simulators. In Proceedings of the 2019 21st Symposium on Virtual and Augmented Reality (SVR), Rio de Janeiro, Brazil, 28–31 October 2019; IEEE: Piscataway, NJ, USA, 2019; pp. 242–251.
  35. Audonnet, F.P.; Hamilton, A.; Aragon-Camarasa, G. A Systematic Comparison of Simulation Software for Robotic Arm Manipulation using ROS2. In Proceedings of the 2022 22nd International Conference on Control, Automation and Systems (ICCAS), Jeju, Republic of Korea, 27 November–1 December 2022; IEEE: Piscataway, NJ, USA, 2022; pp. 755–762.
  36. Körber, M.; Lange, J.; Rediske, S.; Steinmann, S.; Glück, R. Comparing popular simulation environments in the scope of robotics and reinforcement learning. arXiv 2021, arXiv:2103.04616.
  37. Pitonakova, L.; Giuliani, M.; Pipe, A.; Winfield, A. Feature and performance comparison of the V-REP, Gazebo and ARGoS robot simulators. In Proceedings of the Towards Autonomous Robotic Systems: 19th Annual Conference, TAROS 2018, Bristol, UK, 25–27 July 2018; Proceedings 19. Springer: Berlin/Heidelberg, Germany, 2018; pp. 357–368.
  38. Giernacki, W.; Skwierczyński, M.; Witwicki, W.; Wroński, P.; Kozierski, P. Crazyflie 2.0 quadrotor as a platform for research and education in robotics and control engineering. In Proceedings of the 2017 22nd International Conference on Methods and Models in Automation and Robotics (MMAR), Miedzyzdroje, Poland, 28–31 August 2017; IEEE: Piscataway, NJ, USA, 2017; pp. 37–42.
  39. Khepera IV User Manual. Available online: https://www.k-team.com/khepera-iv#manual (accessed on 30 June 2023).
  40. Farias, G.; Fabregas, E.; Torres, E.; Bricas, G.; Dormido-Canto, S.; Dormido, S. A distributed vision-based navigation system for Khepera IV mobile robots. Sensors 2020, 20, 5409.
  41. Macenski, S.; Foote, T.; Gerkey, B.; Lalancette, C.; Woodall, W. Robot Operating System 2: Design, architecture, and uses in the wild. Sci. Robot. 2022, 7, 66.
  42. Hardin, D.P.; Michaels, T.; Saff, E.B. A Comparison of Popular Point Configurations on S2. arXiv 2016, arXiv:1607.04590.
  43. Anderson, B.D.; Yu, C.; Fidan, B.; Hendrickx, J.M. Rigid graph control architectures for autonomous formations. IEEE Control Syst. Mag. 2008, 28, 48–63.
  44. Heemels, W.P.; Johansson, K.H.; Tabuada, P. An introduction to event-triggered and self-triggered control. In Proceedings of the 2012 IEEE 51st IEEE Conference on Decision and Control (CDC), Maui, HI, USA, 10–13 December 2012; IEEE: Piscataway, NJ, USA, 2012; pp. 3270–3285.
Figure 1. (a) Real platform Robotic Park. (b) Crazyflies with Loco Positioning deck (left), Motion capture marker deck (center) and Lighthouse positioning deck (right). (c) Turtlebot3 Burger (left) and Khepera IV (right).
Figure 2. (a) Robotic Park in Gazebo. (b) Robotic Park in Webots.
Figure 3. Subset of multi-robot namespace examples. Agent{i}: physical robot with DT; Agent{i + 1}: virtual robot.
Figure 4. Example of formation with N = 50 agents and R = 2 m. (a) Desired 3D formation. (b) Projection over the XY-plane.
Figure 5. Multi-robot hierarchical control.
Figure 6. Multi-Robot System formation. Drones are distributed in rings of different height, each of which is drawn with a different color. Experiment A: (a) 3D representation, (b) 2D agents distribution; Experiment B: (c) 3D representation, (d) 2D agents distribution; Experiment C: (e) 3D representation, (f) 2D agents distribution; Experiment D: (g) 3D representation, (h) 2D agents distribution; Experiment E: (i) 3D representation, (j) 2D agents distribution; Experiment F: (k) 3D representation, (l) 2D agents distribution.
Figure 7. (a) General CPU percent usage. (b) Simulation tools CPU percent usage.
Figure 8. Instant total errors weighted by the total number of agents: (a) Gazebo. (b) Webots.
Table 1. Number of agents for each experiment. The digital twins of real robots are included in the virtual robots column.
Experiment   Figure         Size   Real Robots (Crazyflie 2.1 / Khepera IV)   Virtual Robots (Crazyflie 2.1 / Khepera IV)
A            Figure 6a,b     5     1 / 4                                      1 / 4
B            Figure 6c,d    10     4 / 4                                      4 / 6
C            Figure 6e,f    15     5 / 4                                      7 / 8
D            Figure 6g,h    20     6 / 4                                      11 / 9
E            Figure 6i,j    30     6 / 4                                      18 / 12
F            Figure 6k,l    40     6 / 4                                      26 / 14
Table 2. Real-Time Factor results in Gazebo and Webots.
Experiment   Size        Gazebo   Webots
A            5 agents    0.995    0.977
B            10 agents   0.967    0.977
C            15 agents   0.866    0.962
D            20 agents   0.716    0.941
E            30 agents   0.477    0.831
F            40 agents   -        0.563
Table 3. IAE and ITAE in Gazebo and Webots experiences weighted by the total number of agents and experiments duration time.
             IAE (m/s)            ITAE (m)
Experiment   Gazebo    Webots     Gazebo    Webots
A            0.4485    0.3879     6.8471    3.5558
B            0.3421    0.2481     4.2980    2.5427
C            0.4293    0.3062     5.0444    1.9985
D            0.4619    0.3182     5.0498    2.4613
E            0.5404    0.3108     5.7722    3.5415
F            -         0.6511     -         8.3456

