Article

A Concept of a Plug-In Simulator for Increasing the Effectiveness of Rescue Operators When Using Hydrostatically Driven Manipulators

Faculty of Mechanical Engineering, Military University of Technology, 2 Gen. S. Kaliskiego Str., 00-908 Warsaw, Poland
Sensors 2024, 24(4), 1084; https://doi.org/10.3390/s24041084
Submission received: 31 October 2023 / Revised: 22 January 2024 / Accepted: 22 January 2024 / Published: 7 February 2024

Abstract

The introduction of Unmanned Ground Vehicles (UGVs) into the field of rescue operations is an ongoing process. New tools, such as UGV platforms and dedicated manipulators, provide new opportunities but also come with a steep learning curve. The best way to familiarize operators with new solutions is hands-on courses, but their deployment is limited, mostly due to high costs and limited equipment numbers. An alternative is to use simulators, which, from the software side, resemble video games. With the recent expansion of the video game engine industry, such software has become easier to produce and maintain. This paper tries to answer the question of whether it is possible to develop a highly accurate simulator of a rescue and IED manipulator using a commercially available game engine. Firstly, the paper describes the different types of robot simulators currently available. Next, it provides an in-depth description of the plug-in simulator concept. Afterward, an example of a hydrostatic manipulator arm and its virtual representation is described alongside validation and evaluation methodologies. Additionally, the paper provides a set of metrics for an example rescue scenario. Finally, the paper describes the research conducted to validate the representation accuracy of the developed simulator.


1. Introduction

Unmanned Ground Vehicles (UGVs) are being used more and more frequently in demining and rescue scenarios. This is due both to the fact that these activities are often carried out in environments that are dangerous to humans, and to the fact that the tools used in these cases are not adapted to be man-portable and man-operated [1,2,3,4]. This trend will continue as we move closer to machine-only autonomous rescue and demining operations. However, because of the complexity and the dynamic nature of such missions, we still see a wide range of unmanned machines being deployed and operated remotely. This creates a need to train IED/EOD and rescue operators in order to increase the operational safety and effectiveness of unmanned units [5]. One method to increase the number of possible training instances and reduce costs at the same time is the use of simulators [6,7]. Useful for training in a wide range of tasks, from maintenance to manipulation, these solutions must use highly accurate virtual models in order to be effective [8,9]. The high level of model detail should cover the structure and kinematics of the robot's or UGV's base platform as well as the functionality of its tools/manipulators, with a wide range of control options such as speed and torque control [10,11,12].
As teleoperation is the most commonly used mode for controlling UGVs in demining and rescue operations, these types of simulators should not only provide the sensor data commonly found in real-world products, such as torque or vision sensors (i.e., RGB and RGB-D cameras), but also additional information which may be beneficial from a training standpoint [13]. Less common, but more relevant from that perspective, especially as computation capabilities increase, is the possibility of having deformable environmental objects and terrain [14]. Conducted analyses have shown that there are several simulators which offer extensive use case possibilities [15,16,17,18]. A solution commonly used in research is MuJoCo [19]. One demonstrated use case was solving a Rubik's cube through gripper-hand manipulation with a tendon-driven 24-DOF robotic hand [20]. Researchers have also used MuJoCo to build and simulate robotic manipulators in order to check initial concepts [21] and then transition them into real-world systems [15,22,23,24]. The described simulator supports most of the functions required in mobile robots, with the exception of inverse kinematics and path planning. For research related to the analysis of object collisions, PyBullet is widely used, especially for the dynamics of gripping [24] and for the manipulation of deformable objects (e.g., fabric) [26]. Another possible solution is Gazebo [25], which is mostly used for research on manipulation robots [27,28]. While none of these studies rely on Gazebo to perform skillful manipulations, one study clearly extended the simulation with external algorithms to deal with non-rigid bodies. Gazebo provides a simulation environment with the necessary actuators and sensors to enable manipulation. It also provides Robot Operating System (ROS) support, which supplies forward and inverse kinematics packages as well as path and motion planning. Another possible solution is CoppeliaSim, a robotics simulator with a range of user-centric features, including sensor and actuator models, motion planning, and support for forward and inverse kinematics. PyRep was recently introduced as a Python toolkit for robot learning. It has been used to manipulate, pick up, and place cubes with the Kinova robot arm [29]. While the above-mentioned simulators focus on manipulation tasks, which are in the scope of this article, they mainly address machine–environment interactions. It should be stated, however, that this is not the only requirement for robot/UGV training simulators: the focus should be placed mainly on training operators with the Human Machine Interfaces (HMIs) they will inevitably have to use in order to carry out a rescue operation. Most of the previously mentioned solutions rely on consumer-grade controllers for operator input. While such controllers are sometimes found in real-world robot solutions, they lack the level of interaction a dedicated control station provides, as it is replaced by interaction with the operating system of a PC.
The main aim of this work was to develop a simulator capable of accepting commands coming from a control station typically used when operating UGVs or robots for IED and rescue missions [2]. Additionally, emphasis has been placed on achieving a high level of accuracy when recreating a real-world robotic arm for IED missions. In order to do so, the robotic arm was equipped with a sensor suite for spatial tracking. Together with a proposed data mapping method, data recorded from those sensors were used to recreate real-world behaviors of the robotic arm in the simulated environment and were also later used for accuracy validation of said model.

2. Materials and Methods

Studies have shown [30,31,32,33] that increasing an operator's immersion yields better learning results when using simulators. In order to take advantage of this fact, this paper proposes a solution which allows for plug-in simulator operation with the ability to assess mission-critical parameters by an outside observer, as well as to log test data for future analysis and tracking of the operator's effectiveness (Figure 1). Plug-in operation refers to the ability to connect an existing control station of a UGV to the simulator and send data between them just as would happen during normal operation. Because of this, the system introduces a hardware/software component called a gateway. This device is meant to work bidirectionally and translate data between the communication protocol of the UGV and that of the simulator. To avoid disclosing said protocols, the simulator offers an open communication protocol, with the implementation of the gateway shifted to UGV manufacturers.
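To make the gateway concept concrete, the sketch below shows one possible station-to-simulator translation path in Python. The frame layout, field names, and the JSON-over-UDP wire format are illustrative assumptions only; the actual UGV protocols remain proprietary and are not described in this paper.

```python
# Minimal sketch of the gateway concept under assumed formats. The 4-byte
# frame layout (joint id as int16, valve control value as int16) and the
# JSON-over-UDP simulator protocol are hypothetical stand-ins.
import json
import socket
import struct

SIM_ADDR = ("127.0.0.1", 9000)  # assumed open UDP endpoint of the simulator

def decode_control_frame(frame: bytes) -> dict:
    """Translate a raw (hypothetical) control frame into simulator units."""
    joint_id, control = struct.unpack_from("<hh", frame)
    # Clamp to the -250..250 span used by the digitally driven valves
    # described in Section 2.3.
    return {"joint": joint_id, "control": max(-250, min(250, control))}

def forward_to_simulator(sock: socket.socket, frame: bytes) -> None:
    # The gateway works bidirectionally; only the station-to-simulator
    # direction is sketched here.
    msg = decode_control_frame(frame)
    sock.sendto(json.dumps(msg).encode("utf-8"), SIM_ADDR)

if __name__ == "__main__":
    udp = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    # Example: joint 2 driven with a control value of 120.
    forward_to_simulator(udp, struct.pack("<hh", 2, 120))
```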
Within the simulator, data sent from the gateway are processed by a software component called the virtual model descriptor. It is meant to house all the code used to model the simulated device and to use that code, along with the control signals, to calculate the device's output state during simulation. That state is then sent to another software component called the scenario manager, which houses all the virtual descriptions of objects taking part in the simulation. Its aim is to determine the ways in which the virtual model interacts with the environment on a scenario level. For example, if the scenario includes a person search and rescue operation, an object defined in the scenario manager may be a human model. Possible modifications to the model may include limb or body deformations, which the search and rescue operator should be able to detect prior to manipulation. A given set of parameters describing objects in the scene is then forwarded to another piece of software called the supervisor tool. The aim of this component is to provide the person supervising the training with sufficient data to assess the operator's performance in real time and to help and guide him/her during testing, if needed. To do so, the supervisor tool is also able to access information from the virtual model descriptor. This also makes it possible to measure the operator's level of familiarity with the controlled device. The next component included in the plug-in simulator is the visualization tool. It is meant to be used by the trainee to control the virtual model in order to interact with the environment. The main requirement for this module is that it needs to represent the virtual device's surroundings as accurately as possible, on both the hardware and the software side. The last piece of code running on the simulator is the data logger. Its aim is to record selected data for more in-depth analysis and post-mission assessment. That is why this component receives inputs from the virtual model descriptor, the scenario manager, and the supervisor tool.
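The dataflow between these components can be summarized in code. The Python sketch below wires minimal stand-ins for the virtual model descriptor, scenario manager, supervisor tool, and data logger through a single simulation tick; all class and method names are illustrative assumptions, since the paper defines the components functionally rather than as a concrete API.

```python
# A compressed, assumed sketch of the component dataflow described above.
from dataclasses import dataclass, field

@dataclass
class VirtualModelDescriptor:
    """Houses the model code; turns control signals into an output state."""
    joint_angles: dict = field(default_factory=dict)

    def apply_control(self, joint: int, control: int, dt: float) -> dict:
        speed = 0.02 * control  # placeholder dynamics; see the DMD in Section 2.1
        self.joint_angles[joint] = self.joint_angles.get(joint, 0.0) + speed * dt
        return {"joint": joint, "angle": self.joint_angles[joint], "speed": speed}

class ScenarioManager:
    """Resolves model-environment interactions on a scenario level."""
    def update(self, model_state: dict) -> dict:
        return {**model_state, "collisions": []}  # placeholder scene result

class SupervisorTool:
    """Feeds the supervisor real-time data from both upstream components."""
    def assess(self, scene_state: dict, model_state: dict) -> dict:
        return {"scene": scene_state, "model": model_state}

class DataLogger:
    """Records inputs from the descriptor, scenario manager and supervisor."""
    def __init__(self):
        self.records = []
    def log(self, *entries: dict) -> None:
        self.records.append(entries)

# One simulation tick through the pipeline:
vmd, sm, sup, logger = VirtualModelDescriptor(), ScenarioManager(), SupervisorTool(), DataLogger()
model_state = vmd.apply_control(joint=1, control=120, dt=0.02)
scene_state = sm.update(model_state)
logger.log(model_state, scene_state, sup.assess(scene_state, model_state))
```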

2.1. Model Descriptor’s and Supervisor Tool’s Building Methodology

The testing methodology for the plug-in simulator consists of two main parts: the model building methodology and the experiment methodology. The first describes the way the virtual model needs to be prepared so that it can be tested, and the second describes the way the model is going to be tested and how the recorded data are processed. The plug-in simulator assumes that the virtual representation and the real-world solution it is based upon should resemble each other as closely as possible. This covers such aspects as kinematic description, manipulation functionalities, mechanical structure, physical dimensions, control scheme, and dynamic description. Because current robot development relies heavily on computer-based designs, developing all but the last two aspects is relatively easy. It is possible to export meshes of the robot's components to external formats, reimport them into the game engine, and then, using built-in functionalities, stitch them together the same way as in the real-life assembly. The more work-intensive part is related to the last two aspects: the control scheme and the dynamic description. The first requires mapping the gateway protocol onto the model, while the second requires the development of a component called the Dynamic Model Description (DMD). The creation of a DMD is a 3-stage process. The first stage is the parameter identification phase, carried out on the real-world unit. Its goal is to acquire reference position data in the time domain from the controlled device, based on a set of discrete control signals for each of the device's functionalities. An example of such an approach could be an IED/EOD manipulator arm (Figure 2). The dataset should have twice the resolution of the control signal set used to create the virtual model. The reason is that the unused data (every second data point in the dataset) will be used in the second (validation) stage. With the gathered data, an initial DMD can be developed, ensuring that each joint achieves the same movement speeds at the same positions in the time domain as its real-world counterpart. This creates a layered model for different control signal values. Its accuracy can later be increased by using approximation functions on each of the control signal speed and position datasets. For the purpose of this article, general polynomials have been assumed. By minimizing the approximation function using the sum of squared differences criterion, it is possible to obtain a continuous function of speed at given rotational angles. To increase the DMD's accuracy, it is possible to run this scenario in both directions, going from the starting angle to the maximum and back for each of the joints (Figure 2).
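To illustrate the approximation stage, the sketch below fits a general polynomial to hypothetical (angle, speed) samples for one joint at one fixed control signal level. `numpy.polyfit` minimizes the sum of squared residuals, matching the criterion named above; the sample data and the polynomial degree are assumptions made for the example.

```python
# A minimal sketch of the polynomial approximation step, assuming measured
# (angle, speed) samples for one joint at one control signal layer.
import numpy as np

# Hypothetical identification data for one control signal layer (deg, deg/s).
angles = np.array([0.0, 15.0, 30.0, 45.0, 60.0, 75.0, 90.0])
speeds = np.array([4.6, 4.9, 5.1, 5.0, 5.1, 4.8, 4.5])

coeffs = np.polyfit(angles, speeds, deg=3)  # least-squares polynomial fit
speed_of_angle = np.poly1d(coeffs)          # continuous speed(angle) function

# Continuous angular speed at an angle not present in the measurements:
print(speed_of_angle(37.5))
```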
The above-described tasks should produce a layered DMD such as the one in Figure 3. While continuous for a set control signal, the DMD is still discrete in the control signal domain. In order to mitigate this problem, it has been assumed that a linear function is calculated from the two sets of known angular speeds for the control signal values directly above and below the value currently generated by the operator. The output value is then proportional to the position of the control signal value relative to these edge cases.
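A minimal sketch of this interpolation is given below, assuming the DMD layers have already been evaluated at the current joint angle, so that each layer reduces to a single speed value; the layer keys and speeds are illustrative.

```python
# Linear interpolation between two known control-signal layers of the DMD.
def interpolate_speed(control: float, layers: dict[int, float]) -> float:
    """Blend the speeds of the layers directly below and above `control`."""
    below = max(c for c in layers if c <= control)
    above = min(c for c in layers if c >= control)
    if below == above:                       # control hits a measured layer
        return layers[below]
    t = (control - below) / (above - below)  # position between edge cases
    return layers[below] + t * (layers[above] - layers[below])

# Measured layers at 40-unit steps (speeds in deg/s at the current angle):
layers = {0: 0.0, 40: 1.1, 80: 2.0, 120: 3.1, 160: 4.0, 200: 4.7, 250: 5.0}
print(interpolate_speed(100, layers))        # halfway between 2.0 and 3.1
```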
To generate data for the supervisor tool, the model needs to be configured in a way which enables data propagation between components. Table 1 lists the parameter types enabled for recording.
The same concept applies to the supervisor tool. Table 2 shows the data made available by this component. The list differentiates between a human body and an inanimate object.
The testing phase covers not only the objective aspects of the rescue operation, like precision of movement, equipment handling, and object handling, but also subjective ones. The reason is to measure the operator's response to the training process and determine whether he or she is becoming more familiar with the scenario. This knowledge is then used to determine two basic factors: environmental familiarity and scenario fatigue.
Transferring the operator's control from a real-world solution to a simulated one introduces new stimuli for the operator. While a lot of care has been given to precisely representing the controlled object, the visual feedback the operator receives differs from what he or she will have to interact with. As such, there is a learning phase that needs to be carried out before the operator becomes familiar with the simulator (environmental familiarity) and can start to learn behavioral responses which can then be transferred to the real world. The second aspect is scenario fatigue: a state in which the operator knows all the specifics of a scenario and is becoming bored with it, which may introduce errors not present in real-world scenarios. In this case, a new scenario needs to be introduced to provide a new challenge and reintroduce uncertainty. For the above-mentioned purposes, two recordable parameters were introduced: subjective accuracy and subjective situational awareness. The first represents the operator's assessment of how well he or she performed during the test. The second determines how well the operator thought he or she knew what was going on around the controlled object during the test. This assessment needs to be carried out after the operator has finished the test and should be recorded alongside the objective parameters.

2.2. Data Evaluation

Data evaluation is a process conducted both in real time and after the tests. Its aim is to compute an overall effectiveness score for the tested operator. This includes objective, measurable factors as well as subjective aspects of the operator's state and his or her perception of the completed task. The simulator uses a performance factor value to introduce a unified scoring system, described in detail in the author's thesis [34] as a performance indicator (PI) methodology for teleoperated unmanned ground systems. The main requirement of this method is having a reference dataset. The original implementation used manned units for that purpose. With the plug-in simulator, there is an initial referencing stage for each of the tested operators. It needs to be stated that not all of the previously listed object and manipulator parameters are used in the PI; the remaining ones are used either for post-test assessments or for additional processing. Calculation of the PI is described by the following Expression (1):
$PI_x = \frac{t_{x\_ref}}{t_{x\_y}} \left( n_{ocol} + n_{bcol} + n_{omax} + n_{omin} + s_a + s_{sa} \right) k$  (1)
where:
  • $PI_x$—performance indicator index;
  • $t_{x\_y}$—test duration;
  • $t_{x\_ref}$—reference test duration;
  • $n_{ocol}$—number of object collisions in which the force exceeded the maximum value;
  • $n_{bcol}$—number of body collisions in which the force exceeded the maximum value;
  • $n_{omax}$—number of overloads in the max direction;
  • $n_{omin}$—number of overloads in the min direction;
  • $s_a$—subjective accuracy;
  • $s_{sa}$—subjective situational awareness;
  • $k$—completion indicator.
Test duration ($t_{x\_y}$) is the timespan calculated from tstart to tstop. Reference test duration ($t_{x\_ref}$) is calculated in a similar fashion, but for the reference test phase. The completion indicator ($k$) is a 0 or 1 value assigned per test by the supervisor via the supervisor tool. All the other components of the PI have certain weights attached to them. This allows for higher testing flexibility when the operator is intended to focus more on a certain operational parameter. These weight values can be set freely; however, they need to be recorded alongside the dataset. By default, the weight values equal 20 per objective parameter and 10 per subjective parameter. An example implementation is shown in (2).
$n_{ocol} = 20 - 5 \sum_{y=1}^{n} jc_{x\_y}$  (2)
It has been assumed that, for the purpose of this simulator, the maximum number of mistakes an operator can make is 4. If this number is exceeded, the component's value is not calculated as being below 0; instead, such a state results in automatic test termination with a $k$ factor equal to 0, unless the supervisor states otherwise.
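Putting Expressions (1) and (2) together, a hedged Python sketch of the scoring could look as follows. The default weights, the 5-point penalty per mistake, and the 4-mistake cut-off follow the text; the function signatures, the input structure, and the exact placement of terms in the reconstructed Expression (1) are illustrative assumptions.

```python
# A sketch of the PI computation under the reconstruction of (1) given above.
def component(weight: float, mistakes: int, penalty: float = 5.0) -> float:
    """One objective component, e.g. n_ocol = 20 - 5 * (collision count)."""
    if mistakes > 4:  # more than 4 mistakes: automatic termination, k = 0
        raise RuntimeError("automatic test termination (k = 0)")
    return weight - penalty * mistakes

def performance_indicator(t_ref: float, t_test: float,
                          mistakes: dict[str, int],
                          sa: float, ssa: float, k: int) -> float:
    objective = sum(component(20.0, m) for m in mistakes.values())
    # sa and ssa are assumed pre-scaled to their default weight of 10.
    return (t_ref / t_test) * (objective + sa + ssa) * k

pi = performance_indicator(
    t_ref=180.0, t_test=210.0,
    mistakes={"n_ocol": 1, "n_bcol": 0, "n_omax": 2, "n_omin": 0},
    sa=8.0, ssa=7.0, k=1)
print(round(pi, 2))
```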

2.3. Hydrostatic Arm Model

The plug-in simulator was created using the Unity game engine. The main reason for this choice was Unity's robust physics engine, which is used to calculate world interactions in real time [7]. Additionally, thanks to its target audience, it is relatively easy to develop HMIs for both the operator and the supervisor.
In order to create a virtual representation of a hydrostatically driven manipulator arm, 3D construction models were used for mesh implementation. Next, a set of joints was created. A joint is a configurable physical constraint which forces a certain type of relation between connected objects. For the purpose of the manipulator's definition, joints were used both in relations between the manipulator's segments (i.e., an arm, a boom, or a gripper) and between an actuator and a segment (Figure 4). Due to their configurable nature, it is possible to change their settings during simulation, which allows for the development of complex relations.
In order to enable world interactions, Unity uses constructs called colliders. A collider is a geometric representation used in physics calculations. It is kept separate from the mesh representation of an object because meshes are, most of the time, too complex to allow for real-time computations. This does not mean that the manipulator's model has only rudimentary collision detection. Figure 5 shows the set of basic colliders used for the lower jaw of the recreated hydrostatic IED/EOD manipulator.
The dataset for the simulator was gathered using absolute sensors (Figure 6a) connected to an input card. The gathered data (Figure 6b) were then processed to be used in the virtual model. Each manipulator segment's speed was registered for control signal levels varying from −250 to 250, with a 10-unit step. This value span is compliant with the J1939 CAN standard, which is normally used with digitally driven hydraulic valves, like the PVED-CC series spool valves from Danfoss. The simulator itself was created using 40-unit increments over the same signal range. This means that the virtual model uses the real-world angular speeds of each of the manipulator's segments measured for valve control signals of 0, 40, 80, 120, 160, 200, 250 and 0, −40, −80, −120, −160, −200, −250. In order to solve the problem of missing speed data for control signals between consecutive control points (which correspond to the manipulator's arm speed), the linear approximation method described earlier in this article was used.

2.4. Simulator’s Functionality

The main aspect of the developed simulator is its ability to interface with standard control stations. The solution presented in Figure 7b is a CAN-bus-based UGV control panel connected to the simulator PC through a specially developed CAN gateway. This solution acts as a translator between the proprietary control protocol and the Windows API for HMI devices (used by the simulator).
The Unity-based simulator has the ability to utilize a wide range of objects to create complex scenarios. Figure 8 shows an example of a scene created for person retrieval tests.
As was the case with the manipulator, objects in the environment also possess physical traits, such as mass and dimensions. This is especially critical for interactions and for assessing the operator's effectiveness. While it is possible to use mesh colliders for environmental objects, using them on humans is not efficient due to how the skeletal structure is managed in Unity. Because of this, multiple simple colliders were used instead (Figure 9).
The simulator's output is a timestamped file with all of the previously mentioned parameters logged for evaluation purposes. In order to automate the process of assessing and scoring the operators, a VBA script was developed which reads all the parameters, classifies them, and implements the evaluation methodology to produce an output Excel file with the final PI score as well as a score breakdown with regard to subjective and objective parameters. Additionally, a suggestion is generated regarding the aspect of the rescue scenario on which the operator needs to focus next.
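The author's pipeline uses VBA and Excel; purely as an illustration, the Python sketch below shows equivalent post-processing of such a log, assuming a hypothetical CSV layout with parameter, value, and timestamp columns and the parameter names from Tables 1 and 2.

```python
# An assumed sketch of log post-processing; the CSV schema is hypothetical.
import csv
from collections import Counter

def summarize_log(path: str) -> Counter:
    """Count logged mistake events per parameter type for the score breakdown."""
    counts: Counter = Counter()
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            # Count collision and overload events (cf. Tables 1 and 2);
            # boolean parameters are assumed to be logged as "True"/"False".
            if row["parameter"].startswith(("jc", "hc", "oc", "tjo")) \
                    and row["value"] == "True":
                counts[row["parameter"]] += 1
    return counts

print(summarize_log("test_run_001.csv"))
```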

3. Results

In order to determine the level of precision at which the virtual model represents the real-life solution, the following metrics were used: Mean Squared Error (MSE), Root Mean Squared Error (RMSE), and Mean Absolute Error (MAE).
The method of evaluating the developed simulator assumed that additional real-world speed measurements were taken for each of the intermediate steps, with a constant increment of the control signal. This means that speed measurements were registered for the following control signal groups:
  • 0, 20, 60, 100, 140, 180, 220.
  • 0, −20, −60, −100, −140, −180, −220.
Afterward, each segment of the simulator was driven with the same control signal values and its angular speeds were measured. Later, both of these signal groups were compared using the previously mentioned metrics.
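For reference, the three metrics can be computed as in the short Python sketch below; the paired speed arrays are hypothetical stand-ins for the registered real-world and simulator measurements.

```python
# MAE, MSE and RMSE between real and simulated angular speeds (deg/s).
import numpy as np

def error_metrics(real: np.ndarray, simulated: np.ndarray) -> dict[str, float]:
    residuals = real - simulated
    mae = float(np.mean(np.abs(residuals)))   # Mean Absolute Error
    mse = float(np.mean(residuals ** 2))      # Mean Squared Error
    rmse = float(np.sqrt(mse))                # Root Mean Squared Error
    return {"MAE": mae, "MSE": mse, "RMSE": rmse}

# Hypothetical boom speeds for one intermediate control signal group:
real = np.array([1.02, 2.11, 3.05, 4.10, 5.05])
simulated = np.array([1.00, 2.05, 3.10, 4.02, 5.11])
print(error_metrics(real, simulated))
```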
Table 3 presents the average values from each of the approximated levels for each of the manipulator's segments. Of note is the fact that the highest average speed differences are observed for the simulator's boom segment, and this trend persists for each consecutive segment. Several factors may be responsible for such an occurrence: approximation errors, the simulator's joint configuration, and weight differences between the model and the real-world unit. The last reason was considered the most probable because of the observed trend in MAE values. If approximation errors were the cause, these values should not form such a trend; it would be expected that they would remain somewhat constant between segments. Joint configuration also does not provide a clear explanation for the observed trend, but weight estimation errors could: the simulation model stacks all the segments of the kinematic chain starting from the one being controlled. As such, if there were a weight calculation error (said calculations were conducted using CAD designs and selected material information), the weight difference would affect each segment independently but would be summed when moving a stack of segments rather than just one (i.e., the jaws).
The maximum mean absolute error of angular speeds, 0.0589 deg/s, shows that the simulator does not generate speeds which would produce unexpected results for the operator controlling it. It should be noted that the maximum speed errors were obtained for the boom segment at the highest control signal levels of 220 and equated to less than 0.1 deg/s (Table 4). This signal produced an error of approx. 1.9%, with the average boom speed for that control signal being 5.048 deg/s.
The deviation levels, when compared to the registered results, can be considered low, with a maximum RMSE of 0.0244. This provides a stable foundation for the thesis that the proposed method of transcribing real-world speed values onto a virtual model can be used without the model feeling "alien" or somehow "off" to the operator.

4. Conclusions

The plug-in simulator presented in this paper is a complete solution for evaluating and training rescue personnel in unmanned machine operation. By using existing control stations, operators can familiarize themselves with the HMI layout and control behaviors. The machine model implemented in the simulator strives to provide an accurate representation of its real-world counterpart through functional identity, dynamic response levels, mechanical constraints, and environmental interactions. Additionally, an ever-growing repository of external objects allows for building complex scenarios with multiple factors and conditions. This is largely due to the use of the commercially available Unity game engine. The research presented in this paper shows that it is possible to create a fully functional plug-in simulator which faithfully represents the mechanical and dynamic aspects of a real-world unit.

Funding

This research was funded by Military University of Technology, grant number UGB 708/2024.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The data presented in this study are available on request from the author. Certain data may have sharing restrictions due to parts of the software being used in operations with restricted access.

Conflicts of Interest

The author declares no conflicts of interest.

References

  1. Amit, D.; Neel, S.; Pranali, A.; Sayali, K. Fire Fighter Robot with Deep Learning and Machine Vision. Neural Comput. Appl. 2021, 34, 2831–2839. [Google Scholar]
  2. Bishop, R. A survey of intelligent vehicle applications worldwide. In Proceedings of the IEEE Intelligent Vehicles Symposium 2000 (Cat. No. 00TH8511), Dearborn, MI, USA, 5 October 2000; IEEE: Piscataway, NJ, USA, 2000; pp. 25–30. [Google Scholar]
  3. Giuseppe, Q.; Paride, C. Rese_Q: UGV for Rescue Tasks Functional Design. In Proceedings of the ASME 2018 International Mechanical Engineering Congress and Exposition, Pittsburgh, PA, USA, 9–15 November 2018. Volume 4A: Dynamics, Vibration, and Control. [Google Scholar]
  4. Halder, S.; Afsari, K. Robots in Inspection and Monitoring of Buildings and Infrastructure: A Systematic Review. Appl. Sci. 2023, 13, 2304. [Google Scholar] [CrossRef]
  5. Mao, Z.; Yan, Y.; Wu, J.; Hajjar, J.F.; Padlr, T. Towards Automated Post-Disaster Damage Assessment of Critical Infrastructure with Small Unmanned Aircraft Systems. In Proceedings of the 2018 IEEE International Symposium on Technologies for Homeland Security (HST), Woburn, MA, USA, 23–24 October 2018. [Google Scholar]
  6. Martirosov, S.; Hořejší, P.; Kopeček, P.; Bureš, M.; Šimon, M. The Effect of Training in Virtual Reality on the Precision of Hand Movements. Appl. Sci. 2021, 11, 8064. [Google Scholar] [CrossRef]
  7. Shi, X.; Yang, S.; Ye, Z. Development of a Unity–VISSIM Co-Simulation Platform to Study Interactive Driving Behavior. Systems 2023, 11, 269. [Google Scholar] [CrossRef]
  8. De Luca, A.; Siciliano, B. Closed-form dynamic model of planar multilink lightweight robots. IEEE Trans. Syst. Man Cybern. 1991, 21, 826–839. [Google Scholar] [CrossRef]
  9. Giorgio, I.; Del Vescovo, D. Energy-based trajectory tracking and vibration control for multi-link highly flexible manipulators. Math. Mech. Complex Syst. 2019, 7, 159–174. [Google Scholar] [CrossRef]
  10. Kim, P.; Park, J.; Cho, Y.K.; Kang, J. UAV-Assisted Autonomous Mobile Robot Navigation for as-Is 3D Data Collection and Registration in Cluttered Environments. Autom. Constr. 2019, 106, 102918. [Google Scholar] [CrossRef]
  11. Stampa, M.; Jahn, U.; Fruhner, D.; Streckert, T.; Röhrig, C. Scenario and system concept for a firefighting UAV-UGV team. In Proceedings of the 2022 Sixth IEEE International Conference on Robotic Computing (IRC), Naples, Italy, 5–7 December 2022. [Google Scholar]
  12. Szrek, J.; Jakubiak, J.; Zimroz, R. A Mobile Robot-Based System for Automatic Inspection of Belt Conveyors in Mining Industry. Energies 2022, 15, 327. [Google Scholar] [CrossRef]
  13. Zhu, S.; Xiong, G.; Chen, H.; Gong, J. Guidance Point Generation-Based Cooperative UGV Teleoperation in Unstructured Environment. Sensors 2021, 21, 2323. [Google Scholar] [CrossRef] [PubMed]
  14. Va, H.; Choi, M.-H.; Hong, M. Efficient Simulation of Volumetric Deformable Objects in Unity3D: GPU-Accelerated Position-Based Dynamics. Electronics 2023, 12, 2229. [Google Scholar] [CrossRef]
  15. Mahler, J.; Goldberg, K. Learning deep policies for robot bin picking by simulating robust grasping sequences. Proc. Mach. Learn. Res. 2017, 78, 515–524. Available online: http://proceedings.mlr.press/v78/mahler17a.html (accessed on 21 January 2024).
  16. Mora-Soto, M.E.; Maldonado-Romo, J.; Rodríguez-Molina, A.; Aldape-Pérez, M. Building a Realistic Virtual Simulator for Unmanned Aerial Vehicle Teleoperation. Appl. Sci. 2021, 11, 12018. [Google Scholar] [CrossRef]
  17. Nuaimi, A.; Ali, F.; Zeddoug, J.; Nasreddine, B.A. Real-time Control of UGV Robot in Gazebo Simulator using P300-based Brain-Computer Interface. In Proceedings of the 2022 IEEE International Conference on Bioinformatics and Biomedicine (BIBM), Las Vegas, NV, USA, 6–8 December 2022. [Google Scholar]
  18. Sánchez, M.; Morales, J.; Martínez, J.L. Reinforcement and Curriculum Learning for Off-Road Navigation of an UGV with a 3D LiDAR. Sensors 2023, 23, 3239. [Google Scholar] [CrossRef] [PubMed]
  19. Todorov, E.; Erez, T.; Tassa, Y. MuJoCo: A physics engine for model-based control. In Proceedings of the 2012 IEEE/RSJ International Conference on Intelligent Robots and Systems, Vilamoura-Algarve, Portugal, 7–12 October 2012; pp. 5026–5033. [Google Scholar]
  20. Akkaya, I.; Andrychowicz, M.; Chociej, M.; Litwin, M.; McGrew, B.; Petron, A.; Paino, A.; Plappert, M.; Powell, G.; Ribas, R.; et al. Solving Rubik’s cube with a robot hand. arXiv 2019, arXiv:1910.07113. Available online: http://arxiv.org/abs/1910.07113 (accessed on 21 January 2024).
  21. Pathak, D.; Gandhi, D.; Gupta, A. Self-supervised exploration via disagreement. Proc. Mach. Learn. Res. 2019, 97, 5062–5071. Available online: http://proceedings.mlr.press/v97/pathak19a.html (accessed on 21 January 2024).
  22. Christiano, P.; Shah, Z.; Mordatch, I.; Schneider, J.; Blackwell, T.; Tobin, J.; Abbeel, P.; Zaremba, W. Transfer from simulation to real-world through learning deep inverse dynamics. arXiv 2016, arXiv:1610.03518. Available online: http://arxiv.org/abs/1610.03518 (accessed on 21 January 2024).
  23. Rusu, A.A.; Večerík, M.; Rothörl, T.; Heess, N.; Pascanu, R.; Hadsell, R. Sim-to-real robot learning from pixels with progressive nets. Proc. Mach. Learn. Res. 2017, 78, 262–270. Available online: http://proceedings.mlr.press/v78/rusu17a.html (accessed on 21 January 2024).
  24. Tobin, J.; Fong, R.; Ray, A.; Schneider, J.; Zaremba, W.; Abbeel, P. Domain randomization for transferring deep neural networks from simulation to the real world. In Proceedings of the 2017 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Vancouver, BC, Canada, 24–28 September 2017; pp. 23–30. [Google Scholar]
  25. Koenig, N.; Howard, A. Design and use paradigms for Gazebo, an opensource multi-robot simulator. IEEE/RSJ Int. Conf. Intell. Robot. Syst. 2004, 3, 2149–2154. Available online: http://ieeexplore.ieee.org/document/1389727 (accessed on 21 January 2024).
  26. Matas, J.; James, S.; Davison, A.J. Sim-to-real reinforcement learning for deformable object manipulation. arXiv 2018, arXiv:1806.07851. Available online: http://arxiv.org/abs/1806.07851 (accessed on 21 January 2024).
  27. Jin, H.; Chen, Q.; Chen, Z.; Hu, Y.; Zhang, J. Multi-LeapMotion sensor based demonstration for robotic refine tabletop object manipulation task. CAAI Trans. Intell. Technol. 2016, 1, 104–113. Available online: http://www.sciencedirect.com/science/article/pii/S2468232216000111 (accessed on 21 January 2024). [CrossRef]
  28. Kunze, L.; Beetz, M. Envisioning the qualitative effects of robot manipulation actions using simulation-based projections. Artif. Intell. 2017, 247, 352–380. Available online: http://www.sciencedirect.com/science/article/pii/S0004370214001544 (accessed on 21 January 2024). [CrossRef]
  29. James, S.; Davison, A.J.; Johns, E. Transferring end-to-end visuomotor control from simulation to real-world for a multi-stage task. Proc. Mach. Learn. Res. 2017, 78, 334–343. Available online: http://proceedings.mlr.press/v78/james17a.html (accessed on 21 January 2024).
  30. Alexander, A.L.; Brunyé, T.; Sidman, J.; Weil, S.A. From Gaming to Training: A Review of Studies on Fidelity, Immersion, Presence, and Buy-in and Their Effects on Transfer in Pc-Based Simulations and Games, DARWARS Training Impact Group; Aptima Inc.: Woburn, MA, USA, 2005; Volume 5, pp. 1–14. [Google Scholar]
  31. Aline, M.; Torchelsen, R.; Nedel, L. The effects of VR in training simulators: Exploring perception and knowledge gain. Comput. Graph. 2022, 102, 402–412. [Google Scholar]
  32. Coulter, R.; Saland, L.; Caudell, T.; Goldsmith, T.E.; Alverson, D. The effect of degree of immersion upon learning performance in virtual reality simulations for medical education. In Medicine Meets Virtual Reality; IOS Press: Amsterdam, The Netherlands, 2007; Volume 15, p. 155. [Google Scholar]
  33. Salman, N.; Colombo, S.; Manca, D. Testing and analyzing different training methods for industrial operators: An experimental approach. Comput. Aided Chem. Eng. 2013, 32, 667–672. Available online: https://www.sciencedirect.com/science/article/pii/B9780444632340501123 (accessed on 21 January 2024).
  34. Typiak, R. Wpływ Konfiguracji Układu Akwizycji Obrazu na Sterowanie Bezzałogową Platformą Lądową. Ph.D. Thesis, Wojskowa Akademia Techniczna, Warsaw, Poland, 2017. [Google Scholar]
Figure 1. System structure of a plug-in simulator for rescue operations.
Figure 2. Testing angle directions used during the development of the DMD.
Figure 3. Visualization of the method for calculating angular joint speeds which are not covered by measured data. Colored lines represent speed values for consecutive, discrete control values. In order to calculate in-between values, a line equation is calculated based on two points from the known speed values, marked as red circles in the figure. The blue line symbolizes that line.
Figure 4. Hydraulic actuator's representation in the virtual model with an anchor point on the arm.
Figure 5. Structure of colliders on the manipulator's lower jaw.
Figure 6. Boom's speed identification: (a) sensor setup and (b) dataset for a control signal of "−200".
Figure 7. Simulator view of an operator using a standard controller (a) and a plug-in control station (b).
Figure 8. A rescue operation scenario created using the developed simulator.
Figure 9. Collider structure of a male body model used for collisions during a rescue operation (green boxes and spheres).
Table 1. Manipulator's parameter list.

ID | Name | Parameter Name | Type | Visualized | Recorded
1 | Joint angle | jax_y | Float | No | Yes
2 | Joint speed | jsx_y | Float | No | Yes
3 | Joint collision | jcx_y | Bool | Possible | Yes
4 | Object name per joint collision | onx_y_z | String | No | Yes
5 | Joint movement start | tjmx_y_start | Bool | No | Yes
6 | Joint movement stop | tjmx_y_stop | Bool | No | Yes
7 | Joint maximum position overload | tjox_y_max | Bool | No | Yes
8 | Joint minimum position overload | tjox_y_min | Bool | No | Yes
9 | Time start | tstart | Long | Yes | Yes
10 | Time end | tstop | Long | Yes | Yes
Table 2. Environmental object parameter list.

ID | Name | Parameter Name | Type | Visualized | Recorded
1 | Human body collision | hcx | Bool | Possible | Yes
2 | Human body maximum force in collision point | hcxf | Float | No | Yes
3 | Object collision | ocx | Bool | No | Yes
4 | Object maximum force in collision point | ocxf | Float | No | Yes
5 | Maximum force on body without manipulator contact | hcxf_nm | Float | No | Yes
6 | Maximum force on object without manipulator contact | ocxf_nm | Float | No | Yes
Table 3. Error table for each of the manipulator's segments.

Manipulator Segment | MAE | MSE | RMSE
Boom | 0.0589 | 0.0006 | 0.0244
Arm | 0.0530 | 0.0004 | 0.0220
Long arm | 0.0424 | 0.0003 | 0.0154
Short arm | 0.0381 | 0.0003 | 0.0123
Jaws | 0.0343 | 0.0002 | 0.0111
Table 4. MAE values per control signal value for the Boom segment of the simulator.

Control signal | 20 | 60 | 100 | 140 | 180
MAE | 0.0445 | 0.0424 | 0.0554 | 0.0665 | 0.0795
Control signal | −20 | −60 | −100 | −140 | −180
MAE | 0.0401 | 0.0386 | 0.0406 | 0.0611 | 0.0694


