Article

Taking Charge of One’s Own Safety While Collaborating with Robots: Enhancing Situational Awareness for a Safe Environment

1 Department of Autonomous & Intelligent Systems, Tekniker, Iñaki Goenaga, 5, 20600 Eibar, Spain
2 Department of Computer Science and Artificial Intelligence, UPV/EHU, Manuel Lardizabal Pasealekua, 1, 20018 Donostia, Spain
* Author to whom correspondence should be addressed.
Sustainability 2024, 16(10), 4024; https://doi.org/10.3390/su16104024
Submission received: 13 February 2024 / Revised: 20 April 2024 / Accepted: 8 May 2024 / Published: 11 May 2024
(This article belongs to the Special Issue Sustainable Production and Manufacturing in the Age of Industry 4.0)

Abstract

Collaborative robots, designed to work alongside humans in industrial manufacturing, are becoming increasingly prevalent. These robots typically monitor their distance from workers and slow down or stop when safety thresholds are breached. However, this results in reduced task execution performance and safety-related uncertainty for the worker. To address these issues, we propose an alternative safety strategy, where the worker is responsible for their own safety and the robot executes its task without modifying its speed except in the case of imminent contact with the worker. The robot provides precise situation-awareness information to the worker using a mixed-reality display, presenting information about relative distance and movement intentions. The worker is then responsible for placing themselves with respect to the robot. A user study was conducted to evaluate the efficiency of task execution, worker safety, and user experience. Results suggest a good user experience and safety perception while maintaining worker safety, which would support social sustainability of human activities in industrial production contexts that require collaboration with robots.

1. Introduction

Traditional industrial robotic systems require fences and peripheral safety equipment, preventing work in spaces shared with humans, which reduces flexibility of production, increases costs, and requires more space to operate. As a result, these conventional systems struggle to adapt to the evolving landscape of the manufacturing industry.
In recent years, there has been a shift in the market, from mass production to mass customization, which requires flexible and multi-purpose production systems [1]. To solve these new challenges of flexible production in industry, collaborative robots (or cobots [2]) have been developed. These robots can operate in close proximity to humans, cooperatively or even collaboratively, in various production tasks. This breakthrough in technology has given rise to what are known as Human–Robot Collaboration (HRC) scenarios [3,4,5,6]. Such hybrid human–robot teams in HRC scenarios combine the strength of humans in problem-solving with the superior capacity for consistently accurate execution of repeatable or precision-demanding tasks of robots. This allows humans to focus on the creative aspects of tasks, trusting that other aspects are well taken care of by the robot, in perfect teamwork. As a result, productivity is enhanced and production costs are reduced, ultimately benefiting the economic sustainability of industrial production activities.
The most important criterion to meet in an HRC scenario is the safety of the human worker [7,8,9]. Current regulations, such as ISO/TS 15066, ISO 10218-2, and ISO 10218-1 [10,11,12], have been put in place to ensure the safety of human operators working alongside cobots. These robots are designed and built to be intrinsically safe, with safety-ensuring capabilities such as anticipating collisions and reacting to them [13]. Ongoing research on collaborative robotics focuses on developing technologies and designing procedures that can further improve safety in HRC scenarios, as reviewed below. What all such current approaches have in common is that they grant the robot a high degree of autonomy to decide how to implement the behavior that guarantees the operator's safety. However, research suggests that relying solely on the robot's autonomy to ensure safety can have a negative impact on the human user's perception of both safety and efficiency during collaboration. This, in turn, can hinder the overall user experience derived from such collaborative efforts [14,15].
In this paper, we challenge this common practice of making the robot accountable for the human worker’s safety based on its action decisions. With this research, we aim to analyze the outcomes when the robot is granted freedom of movement to execute tasks in a shared space, as long as the robot provides situation awareness information for the human operator to administer.
We hypothesize that providing human workers with proper situation awareness, and with most of the autonomy for deciding how to maximize both safety and productivity while collaborating with the robot, will promote social sustainability in the workplace. By removing safety-related uncertainties about how to behave in order to remain safe, and by showing that collaboration does not hinder productivity, workers of any background and experience can become more willing to accept robots as close collaborators. Additionally, multimodal situation-awareness displays can present all the necessary hazard-related information to workers with any degree of hearing impairment, as well as to any worker when environmental noise is significant, making collaboration more inclusive and accessible [16,17]. Furthermore, we hypothesize that providing users with information about the situation while they retain full control will lead not only to objectively safe HRC scenarios, but also to an environment in which users perceive themselves to be safe.
The present paper is an extended version of our previously published conference paper [18]. This article expands upon it with more detailed designs, the description of a conducted user study, an analysis of the data collected, and a more comprehensive discussion and interpretation of the results. It also includes material not present in the conference paper: an expanded introduction, a literature review, an extended user study with 16 users (twice as many as in the study reported at the conference), more detailed results, and a more extensive data analysis.
In the following sections, we analyze current techniques that are typically used to guarantee the safety of operators during collaboration with robots, noting the high level of autonomy that robots are given to apply such techniques. Subsequently, we propose an alternative way of distributing between humans and robots the autonomy to decide, at each moment, how to collaborate most efficiently while preserving safety (as well as the subjective perception of safety) at all times, all of which should contribute to a better user experience.

2. Literature Review

2.1. Current Safety Techniques

The increase in the use of safe cobots in industry has created the need to not only remove residual hazards due to moving parts, but also to improve perception of safety through good situation awareness, thus reducing the user’s anxiety when working with robots in shared spaces. As a result, much research has been conducted to address these needs in manufacturing HRC contexts.
Reactive strategies to improve cobot safety simply aim at minimizing the consequences of accidental physical contact between robot and human operator when such contact occurs. To this end, novel actuators and mechanisms have been proposed, some of which can reduce large impact forces in the case of accidental contact [19]. Counterbalancing mechanisms are another approach to reduce the power of a manipulator [20]. New control methods can also be used to improve safety in HRC [21,22].
More recent research focuses on control strategies that improve the pivotal goals of HRC, namely efficiency, safety, and ergonomics [23], through the use of proactive strategies. Proactive strategies aim to provide a solution before any contact with the user happens. Many cobotic safety protocols depend heavily on precollision systems, which anticipate human intentions through four key approaches: (1) learning-based techniques, (2) exteroceptive and proprioceptive sensors, (3) speed and separation monitoring, and (4) power and force limiting. These approaches allow the cobot to halt or adjust its trajectory preemptively to prevent collisions. In this context, Yu et al. presented a comprehensive framework that ensures safe HRC [24,25]. This framework incorporates robot learning not only for understanding human motion intentions but also for real-time estimation of human impedance, which is calculated using Radial Basis Function Neural Networks (RBFNNs) and a least-squares (LS) method. However, most research has focused on developing sensor-based safety systems for the real-time monitoring of the distance between robot and user [26,27,28,29]. Khatib et al. [30] created a multi-sensor control system for collision avoidance, incorporating static and dynamic constraints. They employed the Saturation in the Null Space (SNS) algorithm to prioritize tasks, giving the highest precedence to whole-body collision avoidance and, subsequently, to the robot's end-effector. In [31], a real-time control method for static and dynamic obstacle avoidance was developed that employs a dual-type proximity sensor to evaluate the human–robot distance. Another common safety strategy in these systems is that, when the distance between the robot and the human becomes too small, the robot simply slows down, stops, or moves away. One popular design for implementing such strategies is the definition of safety zones within which a safety distance is maintained; in each predefined zone, the robot adopts a different safety behavior to ensure the operator's safety [32,33,34]. For instance, in [35], a safe-speed behavior is adopted that ensures safety. In a related approach, 3D sensors are used to monitor the body of the operator: the human's 3D body volume and the robot's 3D model are imported into the same virtual space, and the robot then plans its movements to avoid moving too close to the human operator, or stops otherwise [34,36]. Variations of this same safety strategy, adapted to different production scenarios, have been described in the literature, such as in [32,33,37]. In these examples, the safety of the HRC context is treated as a whole by implementing strategies such as recalculating the robot's path planning in response to the relative position between robot and human. As more advanced examples, Nascimento et al. [38] developed a control system architecture that limits a repulsive-force model with a safety contour, and Nikolakis et al. [39] focused on a cyber-physical system (CPS) for safe HRC assembly.
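To make the behavior of the zone-based strategies just reviewed concrete, the following is a minimal, illustrative sketch (in Python, not taken from any of the cited systems) of a typical speed-and-separation monitoring policy, in which the robot scales its own speed according to the measured human–robot distance. All thresholds are hypothetical example values.

```python
# Illustrative only: conventional zone-based speed scaling with hypothetical thresholds.
def zone_based_speed_factor(distance_m: float,
                            stop_dist: float = 0.2,
                            slow_dist: float = 0.6,
                            warn_dist: float = 1.2) -> float:
    """Return a speed scaling factor in [0, 1] for the robot controller."""
    if distance_m <= stop_dist:
        return 0.0                                   # protective stop
    if distance_m <= slow_dist:
        # ramp up from 0 to 0.5 between the stop and slow thresholds
        return 0.5 * (distance_m - stop_dist) / (slow_dist - stop_dist)
    if distance_m <= warn_dist:
        return 0.5                                   # reduced speed in the warning zone
    return 1.0                                       # full programmed speed otherwise
```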
One of the main drawbacks of these proactive robot behavior techniques is that robots slow down or even stop altogether in the presence of a person. This results in a reduction in productivity, which is undesirable for workers who aim to be productive at work. Approaches in which robots frequently stop and start moving not only impact productivity but also waste energy, exacerbating the sustainability concerns associated with these techniques. Beyond productivity and energy efficiency, and although these techniques have been shown to be sufficiently safe [40], users may still perceive that they are at risk when entering the area of influence of a robot. This is because operators believe that some potential danger may still exist near a robot, such as a malfunction of the safety mechanisms that require the robot to remain constantly vigilant and proactive in keeping a safe distance from the worker, particularly when the robot is holding sharp tools or other objects. The operator is not certain about the actions that the robot will actually take in the immediate future, and this uncertainty takes a toll on the level of trust that the operator can develop in the robot [41]. In the long term, sustained stress caused by working in a state of permanent uncertainty due to poor situation awareness can damage health [42].

2.2. Extended Reality

Improving situation awareness is, hence, necessary for workers' well-being and safety, as well as for obtaining a good user experience (UX) from collaborating with robots [43,44,45]. As stated in [46], good design is a key driver of innovation, efficiency, and sustainability in the Industry 4.0 era. Industrial designers play a crucial role in shaping the products of the future, and, by embracing sustainable design principles, they can positively impact both the industry and the environment. The integration of technology and design thinking not only enhances competitiveness but also fosters a more sustainable approach to manufacturing in the rapidly evolving landscape of industry.
Augmented Reality (AR) has shown potential in tackling this challenge. As analyzed in [47], AR is an important pillar of Industry 4.0, providing help in areas such as product structure, worker training, production collaboration, and many other fields. AR also offers adaptability to diverse user profiles and facilitates the seamless integration of various individuals [16,17]. Brending et al. [48] suggested that presenting contextual information through a visual AR channel to operators working close to the robot can help reduce HRC-related anxiety. Vogel et al. showed that projection-based AR systems could improve performance and UX in HRC [49]. With this motivation, Vogel et al. developed a projection-based sensor system using cameras and data from robot controllers to monitor the configuration of the robot and the position of the object that the robot was going to reach out to and grasp. Based on this information, they projected on the work surface the boundaries of safety areas around the robot and the target object [50,51,52,53]. If any external object crossed the boundary of the safety areas, especially a body part of the collaborating human operator, the robot stopped. While safe, stopping the robot whenever the operator comes close is not compatible, in practice, with collaborative work that requires the worker to constantly cross that boundary. Once again, this robot behavior leads to low efficiency in collaborative task execution, and strict observance by the robot of this safety rule can easily feel unnecessary at times and become very annoying.
We argue that, while the worker needs to be made aware of coming too close to a moving robot, slowing down or stopping the robot's movement may not be the best strategy in every case, in terms of efficiency of collaboration and the UX obtained by the operator. In addition, projected 2D AR interfaces require the user to shift their visual attention between the displayed information and the physical scene. Furthermore, improved situation awareness is also needed in collaborative tasks that require the worker, at times, to look away from the robot. In such contexts, extended reality (XR) technologies, such as head-mounted display (HMD) devices, offer the opportunity to design new ways of presenting information about the relative position of the robot, even when the robot is outside the field of view that human vision provides. An example is the presentation of audio–visual information designed for a mixed-reality (MR) multimodal HMD device, as in [54], raising awareness about potential hazards located anywhere around the user.
As stated by Lee and See [55], transparency of information and operator trust are needed to obtain good situation awareness, and they are built on three key components: purpose, process, and performance. Purpose is the desire or goal selected by the robot. Process is the intention or, in this case, the path to be followed by the robot. Finally, performance is the way in which the trajectory is actually executed. Examples exist in the recent research literature where these three requirements have been satisfied using previsualization of planned trajectories and visualization of the trajectory actually executed [56,57]. The user can observe the action objective of the robot (purpose), a visualization of the trajectory the robot will follow (process), and then the path the robot is executing (performance).
Even when the display of information is designed to achieve good situation awareness, with current safety techniques developed for HRC scenarios the robot bears most of the accountability for the operator's safety. Figure 1 illustrates the estimated distribution between robot and human worker of the decision-making autonomy for implementing actions that keep the worker safe. Hence, the accountability for any safety-related incident is distributed in a similar way, making the robot the primary accountable member of the team. The robot has a high degree of autonomy to decide how the worker will be kept safe, by observing the user and adapting its behavior, primarily its speed of movement. Meanwhile, the human worker is expected to focus on their part of the task execution while trusting, somewhat blindly, that the robot will not miss the next event of them both coming too close to each other, and that the robot will react promptly when that happens.
Each time the robot slows down, the worker receives confirmation that the safety mechanism is working. However, the worker also experiences interruptions in the fluency of the collaboration, which may be frustrating and detrimental to the UX that the worker obtains.
An additional limitation of such a design is the residual uncertainty about the reliability of the monitoring that the robot is performing. Some level of uncertainty often remains, and workers deal with it by keeping an eye on how the robot reacts to proximity. For this reason, the worker tends not to delegate the whole of the decision-making autonomy to the robot, and robot and worker may end up keeping an eye on each other, partly defeating the purpose of freeing the worker's mind with regard to safety. Another unintended consequence may arise when the worker actively tries to stay far enough away from the robot to avoid triggering its slow-down safety mechanism, which can affect productivity. Thus, the worker may end up counteracting the safety assurance behavior of the robot in favor of task execution performance.
The remaining uncertainty just mentioned has been reported in the literature for cases in which the robot holds a high degree of autonomy. This can sometimes feel overwhelming and annoying to users because they feel a lack of control [14,58]. In such scenarios, and as long as the robot’s actions are not very unexpected [59], operators benefit from transparency in order to feel safe while the robot is making decisions [15]. Situation awareness improves transparency about the robot’s intentions, and potential danger is then identified more clearly during collaboration.

3. Proposed New Approach: Worker Taking Charge of Their Own Safety

Based on the discussion above, we want to analyze the strategy of granting most of the autonomy over safety decisions to the operator, as long as the robot provides good situation-awareness information to the operator at all times. In other words, we propose that, if the operator is aware of the robot's relative location and of the trajectory along which it is going to move, it may also be a safe strategy to grant the operator most of the autonomy for safety-related decision making. Being largely in control of the safety strategy may help the operator obtain a sense of safety in a collaboration that is more fluent and efficient, all of which would result in a good user experience. This proposed approach is represented in Figure 2.
In this alternative approach, and to guarantee safety in the case of imminent contact, the robot would still monitor relative distance to the operator and would stop any movement if contact was imminent (very short distance before contact) but would not modify its course and speed of movement otherwise.
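As a minimal sketch under the assumptions just stated (the threshold value is hypothetical), the robot's speed policy in our approach reduces to a single protective-stop check, in contrast to the graded slow-down of conventional zone-based strategies:

```python
def proposed_speed_factor(distance_m: float, imminent_dist: float = 0.2) -> float:
    """Full programmed speed at all times; protective stop only on imminent contact."""
    return 0.0 if distance_m <= imminent_dist else 1.0
```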
To direct research efforts towards characterizing and validating this proposed alternative safety paradigm, we present our hypothesis: users will experience a positive UX, feeling assured, safe, and free from uncertainty, while their actual safety is maintained.
If this hypothesis is confirmed, we anticipate that, when our approach is tested against other current approaches, it could lead to a superior UX under the proposed safety paradigm. This expectation arises from the anticipated reduction in uncertainty and the enhanced sense of control, both of which contribute to an improved UX. Furthermore, we expect actual safety and performance levels to remain comparable. However, this is plausibly a conservative estimate, as collaborative performance may also improve, further enriching the UX.

4. Development of the Proposed Approach

As outlined in Section 2, achieving robust situational awareness hinges upon effectively conveying the aspects of purpose, process, and performance to users [55]. In HRC scenarios, this information takes on paramount significance, so that the user can remain in control of the situation.
In this paper, HRC scenarios involve the collaboration between a human worker and a physical robot arm within a shared workspace. The robot and the human work together, with their movements interdependently adjusting based on real-time information from an awareness display. Notably, the human adapts their behavior in real-time to mitigate potential hazards near the robot. In the context of enhancing HRC through the development of an AR 3D application, insights from a study on Design Thinking in technology-focused organizations provide valuable guidance [60]. The study highlights Design Thinking’s multifaceted nature, seen as both a toolbox of methods and a mindset that fosters human-centered and innovative solutions. Considerations for AR application development, such as the importance of a variable content toolkit, the need for a mindset shift, and challenges associated with Design Thinking, align with this study’s findings. Additionally, the suggestion to seamlessly integrate Design Thinking with established engineering methodologies offers a pertinent perspective for optimizing AR solution development in the realm of HRC improvement.
For the HRC scenario in this paper, the physical cobot arm selected was an LBR iiwa 14 robot, and the virtual environment was presented through a Microsoft HoloLens 2 head-mounted display (HMD). To bring this scenario to life, we used Unity (version 2020.3.11f1) and the Unity Robotics Hub (version 0.7.0), complemented by the ROS-TCP Connector (version 0.7.0), to establish connectivity with the Robot Operating System (ROS).
To facilitate this simulation, we utilized a Dell computer featuring an Intel Core i7 CPU and 32 GB of RAM. This robust hardware configuration powered the execution of ROS, MoveIt, and Gazebo, enabling us to simulate the robot’s movements. In practice, the computer executed commands to position the robot as instructed by the HoloLens. Consequently, the HMD’s visualization was dynamically updated in real-time to reflect the robot’s simulated actions.
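For illustration, the sketch below shows what the ROS side of such a pipeline might look like (ROS 1 / rospy): it publishes the simulated robot's joint configuration so that a client such as the Unity ROS-TCP Endpoint can mirror it on the HMD. The topic name, joint names, publishing rate, and the placeholder state source are assumptions for this example, not the study's actual configuration.

```python
#!/usr/bin/env python
import rospy
from sensor_msgs.msg import JointState


def current_joint_positions():
    # Placeholder: in the real setup these values would come from Gazebo/MoveIt.
    return [0.0] * 7


def publish_joint_states():
    rospy.init_node("hrc_state_publisher")
    pub = rospy.Publisher("/joint_states", JointState, queue_size=10)
    rate = rospy.Rate(30)  # 30 Hz keeps the HMD visualization fluid
    joint_names = ["iiwa_joint_%d" % i for i in range(1, 8)]  # assumed joint names
    while not rospy.is_shutdown():
        msg = JointState()
        msg.header.stamp = rospy.Time.now()
        msg.name = joint_names
        msg.position = current_joint_positions()
        pub.publish(msg)
        rate.sleep()


if __name__ == "__main__":
    publish_joint_states()
```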

4.1. Design of Situation-Awareness Information Displays

With the aim of providing good situational awareness, we used the auditory and visual display designs developed by San Martin et al. [61], which had also been evaluated in a close-proximity HRC scenario. The authors showed that the combined audio–visual design provides information not only about the level of hazard to which the user is exposed, but also about how far the user is from the center of the source of danger. They found that the auditory display is good at capturing attention, while the visual display provides a proper description of the danger. Hence, we decided to use both displays in order to provide a proper description of the hazard to which users are exposed.
The visual part of the selected display (see Figure 3) featured three different levels of hazard, presented to the user as three different color casts that do not occlude the scene.
  • Yellow level of danger: This level represented the lowest level of danger and warned the user about entering a potentially hazardous area. It was represented by the yellow color and a sphere that covered the entire reachable space of the robot;
  • Orange level of danger: The orange level was activated when the user entered the area through which the robot was currently following its trajectory. This space was indicated by the color orange and was visually represented by a cylindrical shape, encompassing the region from the starting point to the target of the current trajectory;
  • Red level of danger: The red level signified imminent danger and was denoted by the color red. It was depicted using spheres and cylinders that represented the robot. The thickness of these shapes varied depending on the robot’s speed.
In addition, as analyzed by different researchers, motion planning plays an important role in ergonomics [62,63]. Hence, we designed the trajectories so that the robot's movements were as straight as possible, without simultaneous changes of direction during a trajectory, and we added a previsualization of each trajectory so that users could identify it before it was executed.
To detect when a user entered each level of danger, transparent boxes were used to represent the user’s body and hands. These boxes were fitted to the user’s body, and, when any of them collided with a level of danger, the corresponding level was activated. The dimensions of each box were adjusted before each user started the task to ensure that the boxes were well-suited for each individual (see Figure 3).
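The sketch below gives an illustrative geometric equivalent of this three-level classification for a single tracked point (for example, the center of one of those boxes). The study itself used Unity collider volumes rather than explicit geometry, all radii here are assumed example values, and, as a simplification, the red level is approximated as a tighter radius around the current trajectory segment rather than around the full robot shape.

```python
import numpy as np


def point_to_segment_distance(p, a, b):
    """Shortest distance from point p to segment a-b (3D numpy arrays)."""
    ab, ap = b - a, p - a
    t = np.clip(np.dot(ap, ab) / np.dot(ab, ab), 0.0, 1.0)
    return np.linalg.norm(p - (a + t * ab))


def hazard_level(user_point, robot_base, traj_start, traj_goal,
                 reach_radius=0.9, traj_radius=0.3, red_radius=0.15):
    """Classify a tracked user point as 'red', 'orange', 'yellow', or 'none'."""
    p = np.asarray(user_point, dtype=float)
    d_traj = point_to_segment_distance(p, np.asarray(traj_start, dtype=float),
                                       np.asarray(traj_goal, dtype=float))
    if d_traj <= red_radius:
        return "red"      # imminent contact: the robot triggers a protective stop
    if d_traj <= traj_radius:
        return "orange"   # inside the cylinder swept by the current trajectory
    if np.linalg.norm(p - np.asarray(robot_base, dtype=float)) <= reach_radius:
        return "yellow"   # inside the sphere covering the robot's reachable space
    return "none"
```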
In terms of the auditory display, a linear design was employed, ranging from 10 Hz at the farthest distance to 2 Hz at the nearest proximity to the hazard. The center of danger was positioned at the midpoint of the volume created by the robot’s current configuration, and the distance to the center of the hazard was calculated by measuring the distance between this center and the HMD (refer to Figure 3).
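A minimal sketch of this linear mapping follows, taking the stated end points (10 Hz at the farthest distance, 2 Hz at the nearest); the normalization range over which the distance is scaled is an assumed example value.

```python
def beep_rate_hz(distance_m: float, max_dist_m: float = 2.0,
                 far_rate: float = 10.0, near_rate: float = 2.0) -> float:
    """Linearly interpolate the auditory repetition rate from the distance to the hazard."""
    x = max(0.0, min(distance_m / max_dist_m, 1.0))  # 0 = at the hazard center, 1 = far away
    return near_rate + x * (far_rate - near_rate)
```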
For a live demonstration of the design in a real environment, please refer to the video (https://shorturl.at/dh029) and, for more detailed information on the specifications of the audio–visual design and its assessment, refer to paper [61].

5. User Study

5.1. Recruiting

Once both the audio and visual designs were implemented, a user study was conducted with 16 participants (mean age 28; 8 female) to assess our approach. Of these, 30% had previous experience with AR and only four had previous experience collaborating with robots. All subjects gave their informed consent for inclusion before they participated in the study.

5.2. Collaborative Task

The aim was to design a task that could be found in industry and that, at the same time, involved the user moving around the robot and working in a shared space. With this in mind, we designed an inspection task involving both robot and user. The task entailed the user retrieving an A4-sized paper document from a location labeled “A” (see Figure 4) and positioning it at specific spots on a tabletop (“B”), where the robot would subsequently inspect it. Each time, the robot would specify the document number to be obtained from location “A”. The user then had to search for the specified document among the papers positioned at that spot and deliver it to the robot by placing it within its reach (position “B”). It is important to note that the robot was in constant motion, transitioning from spot “A” to spot “B” and then executing the necessary inspection movements at spot “B”. Every time the users finished their collaborative task, they had to come back to their individual space, press the “end” button that appeared next to their table, and continue their individual work. While the robot did not require the user’s assistance, the users remained in their individual space, engrossed in arithmetic calculations that kept their attention away from the robot’s activities.
Each participant in the study performed the task three times. The tabletop and part of its surroundings formed the workspace shared between the robot and the participant. Each time the robot needed a new document, it asked the user to bring it by turning on a yellow sphere in the HoloLens and playing a sound, which forced both agents to move inside the shared space. The user was responsible for negotiating their relative position with respect to the robot while in the shared workspace. The participant was assisted in this by the situation-awareness information that the robot provided, explained in Section 4.1 (see Figure 3). The display created three nested zones of potential hazard around the robot, which were represented visually and auditorily. The robot only stopped if the user entered its innermost red zone, which we took to mean that an imminent collision with the robot could occur.
The task was designed as a gamified experience to motivate participants to give their best effort in following the instructions. Consequently, three key facets of task execution (speed, safety, and accuracy) were assessed, with higher resulting scores being better. Speed was quantified by measuring the time taken to complete the task. Safety was evaluated by penalizing each intrusion, where entering the orange zone counted as one entry and entering the red zone counted as two entries. Each facet was rated on a five-point scale (a scoring sketch follows the list):
  • Task execution: from 0–39 s, 5 points; from 40–59 s, 4 points; 1 min–1 min 19 s, 3 points; 1 min 20 s–1 min 39 s, 2 points; and 1 min 40 on, 1 point;
  • Safety: 0–3 entries, 5 points; 4–6 entries, 4 points; 7–9 entries, 3 points; 10–12 entries, 2 points; and 13 or more, 1 point;
  • Accuracy: Perfectly placed, 5 points; well placed, 4 points; poorly placed, 3 points; more than one try to place the sheet, 2 points; and more than two tries to place the sheet, 1 point.
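As a worked example of the rubric above (illustrative only; the paper does not state how the three facets are aggregated, so simply summing them is an assumption), the scoring could be computed as follows:

```python
def execution_points(task_time_s: float) -> int:
    """Speed facet: 0-39 s -> 5, 40-59 s -> 4, 60-79 s -> 3, 80-99 s -> 2, else 1."""
    for limit, pts in ((40, 5), (60, 4), (80, 3), (100, 2)):
        if task_time_s < limit:
            return pts
    return 1


def safety_points(weighted_entries: int) -> int:
    """Safety facet: 0-3 entries -> 5, 4-6 -> 4, 7-9 -> 3, 10-12 -> 2, 13+ -> 1."""
    for limit, pts in ((4, 5), (7, 4), (10, 3), (13, 2)):
        if weighted_entries < limit:
            return pts
    return 1


def total_score(task_time_s: float, orange_entries: int, red_entries: int,
                accuracy_points: int) -> int:
    """Sum of the three facets; orange intrusions count once, red intrusions twice."""
    weighted = orange_entries + 2 * red_entries
    return execution_points(task_time_s) + safety_points(weighted) + accuracy_points


# Example: a 55 s task with 2 orange entries, no red entries, and a perfectly placed sheet
print(total_score(55, 2, 0, 5))  # -> 4 + 5 + 5 = 14
```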

5.3. Evaluation Procedure

All participants read and signed a consent form and then filled out a demographic questionnaire. Users then read a written explanation of the task to be performed and watched an explanatory video in order to obtain a clear idea of the task. Once they had finished filling out the demographic data, we gave the users the opportunity to test the situation-awareness display in a 10 min training session.
For safety reasons, the training session involved interaction with the holographic representation of the robot’s digital twin rather than with the physical robot used during the study’s task execution. The virtual robot was placed in the exact position of the real one, so users could train in the same place where they would perform the task in the experimental session. The users had the chance to experiment with the display as much as they wanted, until they felt comfortable and confident with the technology. They also tried once to take a sheet from place A and place it in place B, so they could understand more specifically what the task would involve. For the training session, the robot performed a different trajectory from the one used in the experimental session.
Once the users finished the training session, the scenario had to be prepared to commence the task. The initial step involved positioning the hologram of the robot’s digital twin precisely at the same location as the real robot. This calibration ensured that the entire scenario was aligned with the real-world setup. The hologram of the robot’s digital twin was transparent during the task, so that the user could see the visual information of the hazard display superimposed on the real robot with no other visual layer in between (Figure 5).
We designed a one-condition study to analyze the safety and UX when users collaborate with a robot using the audio–visual display described earlier. The task was repeated three times per user.
This study had two primary objectives. The first was to confirm whether giving users control over their own safety results in a safe HRC scenario, and the second was to examine whether the user experience (UX) improves when users, rather than the robot, are granted the greater share of autonomy. While the goal was to grant users complete control over their own safety, certain safety precautions were taken. Potentially hazardous objects, such as sharp or pointy items, were removed. The robot’s movement was deliberately kept slow throughout the study, with an average speed of 0.25 m/s, allowing users to readily react to its motions. However, at specific instances, it could briefly accelerate to 0.86 m/s to navigate through small gaps. The experimental study took place in an open area to prevent participants from becoming trapped by the robot. Additionally, an observer was present at all times, equipped with a wireless emergency stop button and responsible for promptly halting the robot in the event of any unforeseen hazardous situation.

5.4. Measures

To assess safety during task execution, the HMD collected different quantitative pieces of information. We recorded the events during which the users entered the red area and also for how long they were exposed to this level of danger. These data enabled us to evaluate the degree to which users could maintain safety during collaboration, with the assistance of the provided situational awareness. The same measures were recorded for the orange level, in order to obtain more detailed information about the users’ behavior. The HMD also recorded the distance between the user and the robot for the safety analysis.
For the assessment of the UX, we analyzed feedback obtained from post-study semi-structured interviews. During these interviews, we explored various themes, including the overall UX, participants’ perception of the level of danger during collaboration, and their sense of safety throughout the collaborative process. The interviews were recorded and subsequently transcribed for further analysis. The questionnaires were also transcribed and analyzed using the methodology of affinity diagramming, as described in [64].
During the execution of the conditions, two experimenters were involved. One of them observed and interacted with users throughout the entire process, while the other one was solely an observer. Both experimenters participated in the development of the data analysis and the creation of the affinity diagram.

6. Results

We analyzed both quantitative and qualitative data collected during the study. Quantitative results included objective observed data. Qualitative data were obtained from semi-structured interviews conducted at the end of each study session.
For the analysis of quantitative data, we used effect size (ES) estimation techniques with 95% confidence intervals (CI) instead of null-hypothesis significance testing (NHST). In this way, we provide an accumulation of evidence, so that readers can extract their own critical conclusions, as currently recommended for user studies in human–robot interaction [65,66,67].
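For reference, a minimal sketch of this kind of interval estimation (a sample mean with a t-based 95% CI, using SciPy) is shown below; the sample values are hypothetical.

```python
import numpy as np
from scipy import stats


def mean_ci95(samples):
    """Return (mean, lower, upper) for a t-based 95% confidence interval of the mean."""
    x = np.asarray(samples, dtype=float)
    m = x.mean()
    half = stats.sem(x) * stats.t.ppf(0.975, len(x) - 1)
    return m, m - half, m + half


# Hypothetical per-participant times (s) spent in the orange zone
print(mean_ci95([9.8, 12.1, 14.0, 10.5, 8.7, 13.2]))
```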

6.1. Quantitative Results

All data regarding the quantitative results are shown in Table 1. The same values are shown graphically in Figure 6, which represents the 95% CIs. The first row contains the numerical values of the total time spent in the orange and red zones during tasks. As can be observed, users spent a considerable amount of time in the orange zone (11.46 s, 95% CI [8.484, 14.44]); however, the level at which users were exposed to real danger, the red area that activated the stop of the robot, had a mean time of almost 0 s (0.342 s, 95% CI [0, 0.911]). At the same time, users spent a mean time of 60.28 s (95% CI [58.78, 61.78]) performing the task, which means that they spent no more than 20% of the time in the orange zone. In terms of the task-execution gamification aspect, 87.5% of users obtained 4 points and 12.5% obtained 3 points. In addition, except in the cases where users entered the robot’s red zone, in which they came as close as 20 cm to the robot, users never moved closer than 60 cm to the robot, showing that they maintained a prudent distance.
The second row displays the total number of times users entered the two different zones during the tasks. Out of 16 users, only 4 entered the red zone, doing so in a total of 6 of the 48 tasks performed collectively (3 tasks per user). Among these four users, two entered only once out of the three tasks they performed, while the remaining two entered twice out of the three tasks. Once again, the number of entries into the red zone is nearly negligible, as depicted in Figure 6 (0.1122, 95% CI [0.029, 0.215]). Shifting focus to the orange zone, a different pattern emerges: users entered it almost uniformly across tasks, not exceeding one entry per task on average (6.717, 95% CI [5.05, 8.38]). Translating these values into the gamification scores, 10 users obtained 4 points, 5 obtained 3 points, and just 1 obtained 2 points. Calculating the ratio of the total time spent in the orange area (11.46 s) to the total number of entries into the orange area (6.717 times) yields a value of 1.7 s. Although this may seem substantial, it signifies that, each time users entered the orange zone, they spent approximately 1.7 s there while maintaining a minimum distance of 60 cm, as previously explained. These observations suggest that the hazard-awareness information provided to users established a safe environment. They also suggest that the situational awareness imparted to users instilled confidence when navigating hazard zones.
A final aspect gathered during the experiments was the accuracy points obtained during the task, where 68.75% obtained 5 points, 18.75% obtained 4 points, and 12.5% obtained 3 points.

6.2. Qualitative Results

After conducting semi-structured interviews at the end of each participant’s session, we utilized an affinity diagram to analyze the gathered data, which consisted of comments and responses to the facilitator’s questions. Only one facilitator took part in the interviews, and it was the same facilitator for all participants. This involved transcribing all the data onto post-it notes. The resulting affinity diagram, depicted in Figure 7, enabled us to categorize the interview data into three main groups: Safety perception, Information of danger, and Negative aspects.
Within the Information of danger category, we observed that some users provided general insights, such as expressing that the system “Provides complete information of the hazard” (U15), while others delved into specific characteristics of hazard comprehensibility, as exemplified by comments like “I could understand perfectly from where the danger was coming” (U14). Consequently, we segmented the comments based on whether they offered a holistic description (users falling under the blue arrow, in the blue category) or focused on specific characteristics, represented by individual leaves, elucidating the aspect of hazard comprehension the user was addressing (refer to Figure 7).
The transcription of all participant comments produced 27 post-it notes, which were best organized into the 3 branch categories just mentioned; the Information of danger branch contained 3 leaf categories. The following is an outline of each leaf, from left to right:
  • Direction of danger: Comments about being able to identify in which direction was the hazard;
  • Distance to center: Comments regarding the possibility of identifying the distance from the danger;
  • Catches the attention: The capability of the display to catch the attention when users are exposed to danger.

7. Discussion of Results

The main conclusion obtained from the study is that the new HRC safety paradigm seemed to provide a safe and collaborative environment, resulting in an overall positive user experience (UX).
During the study, participants almost never positioned themselves right next to the cobot, keeping instead a safe distance. In the rare instances in which they did venture closer than 60 cm (which occurred in only 6 out of 48 trials), participants promptly adjusted their positions, with a maximum delay of 0.911 s (in all cases, the robot stopped itself to avoid any possible contact with a moving part). The quantitative data also show that users were able to properly identify hazardous areas and feel safe while inside them, since they stayed for a total of around 11 s in the orange area and, in 87.5% of cases, maintained a prudent distance from the robot. Another interesting aspect of the quantitative results is the accuracy gamification aspect, where all users were able to place the sheet on the first attempt and almost 70% of them placed it perfectly without moving the other sheets.
These findings suggest an enhancement in the economic sustainability of the collaborative production task. In contrast to methods involving the continuous stop-and-start of robots [32,33,37,38] or the recalculation of paths [30,31] throughout collaboration to avoid the user, our approach maintains a constant speed, which could minimize energy consumption and thereby potentially optimize task efficiency. For example, in the case of safety zones, the robot would slow down in the orange zone, reducing efficiency, whereas in our case the robot is able to keep working at the same pace while safety is maintained.
Analyzing the users’ experiences, we also identified that users felt safe while they were exposed to the danger. A total of 13 out of 16 users explained that they felt safe when using the display, with comments such as “You feel safe” (U11) or “I felt safe while I was collaborating with the robot” (U06). The interviews also showed that the display was capable of properly explaining the danger to which the users were exposed. In this branch, 12 users explained that the display provided complete information about the hazard to which they were exposed (“Provides a complete information of the hazard” (U09)). This appeared to be due to different aspects provided by the display, such as the ability to convey the distance from the danger, as commented by four users (“You can know if you are getting closer or moving further” (U15)), the ability to properly explain the direction of the danger, as pointed out by four other users (“With the display I can easily identify from where the danger is coming” (U04)), and the ability to catch the attention, as mentioned by five users (“Although you are focus, it catches your attention and you keep track of the danger” (U13)). Hence, according to their descriptions, the situation-awareness information received from the multi-modal mixed-reality display was comprehensive, easily noticeable, and helpful for understanding and monitoring the moving area of influence of the robot, as well as how to avoid it. An aspect for improvement reported by some users was that the display possibly provided too much information (“It provides too much information” (U08)).

8. Conclusions

Research in the field of Human–Robot Collaboration (HRC), particularly in safety techniques, emphasizes the development of strategies that enable robots to autonomously decide actions to maintain a safe environment. However, these techniques often fall short in achieving a positive User Experience (UX) due to uncertainties and users’ perceived lack of control. The frequent stopping and starting of robots not only impact productivity, but also contribute to energy waste, heightening sustainability concerns.
In response, we proposed and analyzed how a change in autonomy, coupled with improved situation awareness, could positively impact UX, safety, and various sustainability aspects. We firmly believe that effective design is a key catalyst for innovation, efficiency, and sustainability in the Industry 4.0 era.
The findings indicate that effective situation awareness, designed to alleviate safety uncertainties, can be employed for individuals with visual or auditory impairments. This is made possible through the utilization of dual displays, contributing to the promotion of social sustainability in the workplace. The integration of multimodal situation awareness displays guarantees comprehensive access to hazard-related information. This inclusive approach significantly improves collaboration accessibility for all members of the workforce.
Furthermore, our approach has been shown to maintain the users’ safety while providing a good UX, since users pointed out that they could properly understand the hazard and felt safe while performing the task in collaboration with the robot. In addition, with this approach, in which the robot only stops in the presence of imminent contact, energy waste from frequent stop–start cycles is reduced, which could lead to improved economic and energetic sustainability. On the whole, this approach aims to enhance the collaborative potential of robots while aligning with broader goals of efficiency and sustainability in the evolving landscape of Industry 4.0.

9. Limitations and Future Work

The displays used in this work are just a subset of the available design alternatives that could provide a good situation awareness. There is ample room for variations in the design choices demonstrated in this paper, and we only selected a set of designs as a base that previously showed promising results in this area. It is worth noting that the user study described in this paper focuses exclusively on the designs that were presented and evaluated here. Researchers dealing with similar needs may find it beneficial to search for new designs that might fit better into their scenarios. Additionally, it may be advantageous to investigate how different designs can influence users’ perceived situation awareness by quantifying qualitative questionnaires such as SART or SAGAT [68].
Another constraint arises from the current limitations in technology’s ability to replicate the field of view of human vision. While the HoloLens 2 device offers a wider field of view (54°) compared to its earlier version (34°), it still falls significantly short of matching the field of view of human vision. As a result, we chose to utilize color-coded areas, allowing the user to perceive both the color and the level of danger to which they are exposed, even when facing away from the robot.
Addressing the previously mentioned issue of information overload, future research should explore methods for optimizing the quantity of information presented without diminishing situational awareness. This optimization could be sought by considering single-modality displays, either auditory or visual, in addition to the combined audio–visual approach employed in the current study. However, it must be pointed out that relying on a single modality could exclude people with certain impairments, worsening the social sustainability that the combined use of both displays improves.
In terms of the shared workspace and tasks selected, we chose a scenario that can be found in any industry to create a realistic user study. In this scenario, users and robots had to work in close proximity, needing a robot speed that prioritized user safety. We believe that future research could explore how altering the shared workspace and robot speed could impact UX and safety compared to the methods we have employed.
Our current study aims to analyze the role of good situational awareness, granting users full control, and to evaluate the perceived UX and safety by users. However, for future research, it would be interesting to examine how implementing our approach, as well as other common approaches or a combination thereof, within the same task could vary the results obtained from users’ UX and their perception of safety.
We conclude that this alternative new paradigm should be considered as a subject of research, for its potential to provide a superior UX without compromising safety. Another aspect that could be interesting to analyze in the future is synchronization. While we believe that providing full control to the user will maintain synchronization between the user and the robot, this hypothesis could be investigated in future work to assess how different speeds may affect the efficiency of the scenario.

Author Contributions

Conceptualization, A.S.M. and J.K.; Methodology, A.S.M. and J.K.; Software, A.S.M.; Validation, A.S.M.; Formal Analysis, J.K., E.L. and A.S.M.; Investigation, A.S.M. and J.K.; Resources, A.S.M.; Data Curation, J.K. and A.S.M.; Writing—Original Draft Preparation, A.S.M. and J.K.; Writing—Review & Editing, J.K., E.L. and A.S.M.; Visualization, A.S.M.; Supervision, J.K., E.L. and A.S.M.; Project Administration, J.K. and A.S.M.; Funding Acquisition, J.K. and A.S.M. All authors have read and agreed to the published version of the manuscript.

Funding

This publication has been partially funded by the project “5R- Red Cervera de Tecnologías robóticas en fabricación inteligente”, contract number CER-20211007, under “Centros Tecnológicos de Excelencia Cervera” programme funded by “The Centre for the Development of Industrial Technology (CDTI)”.

Institutional Review Board Statement

An internal board has reviewed the ethical design of the user study, following guidance from the European Commission on Ethics and Social Science and Humanities (https://ec.europa.eu/info/funding-tenders/opportunities/docs/2021-2027/horizon/guidance/ethics-in-social-science-and-humanities_he_en.pdf).

Informed Consent Statement

Informed consent was obtained from all subjects involved in the study. Written informed consent has been obtained from the participants to publish this paper.

Data Availability Statement

Data are available upon request.

Conflicts of Interest

The authors declare no conflicts of interest. The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript; or in the decision to publish the results.

References

  1. Barbazza, L.; Faccio, M.; Oscari, F.; Rosati, G. Agility in assembly systems: A comparison model. Assem. Autom. 2017, 37, 411–421. [Google Scholar] [CrossRef]
  2. Colgate, J.E.; Edward, J.; Peshkin, M.A.; Wannasuphoprasit, W. Cobots: Robots for Collaboration with Human Operators; Northwestern University: Evanston, IL, USA, 1996. [Google Scholar]
  3. Liu, H.; Fang, T.; Zhou, T.; Wang, Y.; Wang, L. Deep learning-based multimodal control interface for human-robot collaboration. Procedia CIRP 2018, 72, 3–8. [Google Scholar] [CrossRef]
  4. Liu, H.; Wang, L. Gesture recognition for human-robot collaboration: A review. Int. J. Ind. Ergon. 2018, 68, 355–367. [Google Scholar] [CrossRef]
  5. Bi, Z.; Wang, L. Dynamic control model of a cobot with three omni-wheels. Robot.-Comput.-Integr. Manuf. 2010, 26, 558–563. [Google Scholar] [CrossRef]
  6. Chen, H.; Leu, M.C.; Yin, Z. Real-time multi-modal human–robot collaboration using gestures and speech. J. Manuf. Sci. Eng. 2022, 144, 101007. [Google Scholar] [CrossRef]
  7. Bolstad, C.; Costello, A.; Endsley, M. Bad situation awareness designs: What went wrong and why. In Proceedings of the 16th World Congress of International Ergonomics Association, Maastricht, The Netherlands, 10–14 July 2006. [Google Scholar]
  8. Lohse, M. The role of expectations and situations in human-robot interaction. New Front.-Hum.-Robot. Interact. 2011, 2, 35–56. [Google Scholar]
  9. Onal, E.; Craddock, C.; Endsley, M.; Chapman, A. From theory to practice: How designing for situation awareness can transform confusing, overloaded shovel operator interfaces, reduce costs, and increase safety. In Proceedings of the ISARC, International Symposium on Automation and Robotics in Construction, Montreal, QC, Canada, 11–15 August 2013; IAARC Publications; Volume 30, p. 1. [Google Scholar]
  10. ISO/TS 15066:2016; Robots and Robotic Devices—Collaborative Robots. International Organization for Standardization: Geneva, Switzerland, 2016.
  11. ISO 10218-2:2011; Robots and Robotic Devices—Safety Requirements for Industrial Robots—Part 2: Robot Systems and Integration. International Organization for Standardization: Geneva, Switzerland, 2011.
  12. ISO 10218-1:2011; Robots and Robotic Devices—Safety Requirements for Industrial Robots–Part 1: Robots. International Organization for Standardization: Geneva, Switzerland, 2011.
  13. Matsas, E.; Vosniakos, G.C.; Batras, D. Prototyping proactive and adaptive techniques for human-robot collaboration in manufacturing using virtual reality. Robot.-Comput.-Integr. Manuf. 2018, 50, 168–180. [Google Scholar] [CrossRef]
  14. Norman, D.A. How might people interact with agents. Commun. ACM 1994, 37, 68–71. [Google Scholar] [CrossRef]
  15. Stubbs, K.; Hinds, P.J.; Wettergreen, D. Autonomy and common ground in human-robot interaction: A field study. IEEE Intell. Syst. 2007, 22, 42–50. [Google Scholar] [CrossRef]
  16. Grandi, F.; Khamaisi, R.K.; Peruzzini, M.; Raffaeli, R.; Pellicciari, M. A reference framework to combine model-based design and AR to improve social sustainability. Sustainability 2021, 13, 2031. [Google Scholar] [CrossRef]
  17. Luna, S.M.; Tigwell, G.W.; Papangelis, K.; Xu, J. Communication and Collaboration Among DHH People in a Co-located Collaborative Multiplayer AR Environment. In Proceedings of the 25th International ACM SIGACCESS Conference on Computers and Accessibility, New York, NY, USA, 22–25 October 2023; pp. 1–5. [Google Scholar]
  18. San Martin, A.; Kildal, J.; Lazkano, E. “You Can Go Your Own Way, but Keep Me Informed”: Taking Charge of Own Safety when Collaborating with a Robot in a Shared Space. In Advances in Manufacturing Technology XXXVI; IOS Press: Amsterdam, The Netherlands, 2023; pp. 93–98. [Google Scholar]
  19. Wang, R.J.; Huang, H.P. An active-passive variable stiffness elastic actuator for safety robot systems. In Proceedings of the 2010 IEEE/RSJ International Conference on Intelligent Robots and Systems, Taipei, Taiwan, 18–22 October 2010; pp. 3664–3669. [Google Scholar]
  20. Do, H.M.; Kim, H.S.; Kim, D.H.; Son, Y.; Cho, Y.; Cheong, J. A manipulator with counterbalancing mechanism for safety in human-robot collaboration. In Proceedings of the IECON 2016–42nd Annual Conference of the IEEE Industrial Electronics Society, Florence, Italy, 23–26 October 2016; pp. 681–685. [Google Scholar]
  21. Oh, S.; Woo, H.; Kong, K. Frequency-shaped impedance control for safe human–robot interaction in reference tracking application. IEEE/ASME Trans. Mechatronics 2014, 19, 1907–1916. [Google Scholar] [CrossRef]
  22. Geravand, M.; Shahriari, E.; De Luca, A.; Peer, A. Port-based modeling of human-robot collaboration towards safety-enhancing energy shaping control. In Proceedings of the 2016 IEEE International Conference on Robotics and Automation (ICRA), Stockholm, Sweden, 16–21 May 2016; pp. 3075–3082. [Google Scholar]
  23. Proia, S.; Carli, R.; Cavone, G.; Dotoli, M. Control techniques for safe, ergonomic, and efficient human-robot collaboration in the digital industry: A survey. IEEE Trans. Autom. Sci. Eng. 2021, 19, 1798–1819. [Google Scholar] [CrossRef]
  24. Yu, X.; He, W.; Li, Y.; Xue, C.; Li, J.; Zou, J.; Yang, C. Bayesian estimation of human impedance and motion intention for human–robot collaboration. IEEE Trans. Cybern. 2019, 51, 1822–1834. [Google Scholar] [CrossRef] [PubMed]
  25. Yu, X.; Li, Y.; Zhang, S.; Xue, C.; Wang, Y. Estimation of human impedance and motion intention for constrained human–robot interaction. Neurocomputing 2020, 390, 268–279. [Google Scholar] [CrossRef]
  26. Wang, L.; Givehchi, M.; Adamson, G.; Holm, M. A sensor-driven 3D model-based approach to remote real-time monitoring. CIRP Ann. 2011, 60, 493–496. [Google Scholar] [CrossRef]
  27. Morato, C.; Kaipa, K.N.; Zhao, B.; Gupta, S.K. Toward safe human robot collaboration by using multiple kinects based real-time human tracking. J. Comput. Inf. Sci. Eng. 2014, 14, 011006. [Google Scholar] [CrossRef]
  28. Zanchettin, A.M.; Ceriani, N.M.; Rocco, P.; Ding, H.; Matthias, B. Safety in human-robot collaborative manufacturing environments: Metrics and control. IEEE Trans. Autom. Sci. Eng. 2015, 13, 882–893. [Google Scholar] [CrossRef]
  29. Scibilia, A.; Pedrocchi, N.; Fortuna, L. Modeling Nonlinear Dynamics in Human–Machine Interaction. IEEE Access 2023, 11, 58664–58678. [Google Scholar] [CrossRef]
  30. Khatib, M.; Al Khudir, K.; De Luca, A. Human-robot contactless collaboration with mixed reality interface. Robot. Comput.-Integr. Manuf. 2021, 67, 102030. [Google Scholar] [CrossRef]
  31. Moon, S.J.; Kim, J.; Yim, H.; Kim, Y.; Choi, H.R. Real-time obstacle avoidance using dual-type proximity sensor for safe human-robot interaction. IEEE Robot. Autom. Lett. 2021, 6, 8021–8028. [Google Scholar] [CrossRef]
  32. Long, P.; Chevallereau, C.; Chablat, D.; Girin, A. An industrial security system for human-robot coexistence. Ind. Robot. Int. J. 2018, 45, 220–226. [Google Scholar] [CrossRef]
  33. Tan, J.T.C.; Duan, F.; Zhang, Y.; Kato, R.; Arai, T. Safety design and development of human-robot collaboration in cellular manufacturing. In Proceedings of the 2009 IEEE International Conference on Automation Science and Engineering, Bangalore, India, 22–25 August 2009; pp. 537–542. [Google Scholar]
  34. Mohammed, A.; Schmidt, B.; Wang, L. Active collision avoidance for human–robot collaboration driven by vision sensors. Int. J. Comput. Integr. Manuf. 2017, 30, 970–980. [Google Scholar] [CrossRef]
  35. Lasota, P.A.; Rossano, G.F.; Shah, J.A. Toward safe close-proximity human-robot interaction with standard industrial robots. In Proceedings of the 2014 IEEE International Conference on Automation Science and Engineering (CASE), New Taipei, Taiwan, 18–22 August 2014; pp. 339–344. [Google Scholar]
  36. Schmidt, B.; Wang, L. Depth camera based collision avoidance via active robot control. J. Manuf. Syst. 2014, 33, 711–718. [Google Scholar] [CrossRef]
  37. Patel, H.; Singh, C.; Liu, G. Safe robot operation alongside humans using spring-assisted modular and reconfigurable robot. In Proceedings of the 2017 IEEE International Conference on Mechatronics and Automation (ICMA), Takamatsu, Japan, 6–9 August 2017; pp. 787–792. [Google Scholar]
  38. Nascimento, H.; Mujica, M.; Benoussaad, M. Collision avoidance interaction between human and a hidden robot based on kinect and robot data fusion. IEEE Robot. Autom. Lett. 2021, 6, 88–94. [Google Scholar] [CrossRef]
  39. Nikolakis, N.; Maratos, V.; Makris, S. A cyber physical system (CPS) approach for safe human-robot collaboration in a shared workplace. Robot. Comput.-Integr. Manuf. 2019, 56, 233–243. [Google Scholar] [CrossRef]
  40. Malm, T.; Salmi, T.; Marstio, I.; Aaltonen, I. Are collaborative robots safe? In Proceedings of the Automaatiopäivät23, Oulu, Finland, 23–25 July 2019; Finnish Society of Automation: Helsinki, Finland, 2019; pp. 110–117. [Google Scholar]
  41. Lasota, P.A.; Shah, J.A. Analyzing the effects of human-aware motion planning on close-proximity human–robot collaboration. Hum. Factors 2015, 57, 21–33. [Google Scholar] [CrossRef]
  42. Oken, B.S.; Chamine, I.; Wakeland, W. A systems approach to stress, stressors and resilience in humans. Behav. Brain Res. 2015, 282, 144–154. [Google Scholar] [CrossRef] [PubMed]
  43. Bagchi, S.; Marvel, J.A. Towards augmented reality interfaces for human-robot interaction in manufacturing environments. In Proceedings of the 1st International Workshop on Virtual, Augmented, and Mixed Reality for HRI (VAM-HRI), Chicago, IL, USA, 5 March 2018. [Google Scholar]
  44. Green, S.A.; Billinghurst, M.; Chen, X.; Chase, J.G. Human robot collaboration: An augmented reality approach—a literature review and analysis. In Proceedings of the International Design Engineering Technical Conferences and Computers and Information in Engineering Conference, Las Vegas, NV, USA, 4–7 September 2007; Volume 48051, pp. 117–126. [Google Scholar]
  45. Rowen, A.; Grabowski, M.; Rancy, J.P.; Crane, A. Impacts of Wearable Augmented Reality Displays on operator performance, Situation Awareness, and communication in safety-critical systems. Appl. Ergon. 2019, 80, 17–27. [Google Scholar] [CrossRef]
  46. Kuys, B.; Koch, C.; Renda, G. The priority given to sustainability by industrial designers within an industry 4.0 paradigm. Sustainability 2021, 14, 76. [Google Scholar] [CrossRef]
  47. Nayyar, A.; Kumar, A. A Roadmap to Industry 4.0: Smart Production, Sharp Business and Sustainable Development; Springer: Berlin/Heidelberg, Germany, 2020. [Google Scholar]
  48. Brending, S.; Khan, A.M.; Lawo, M.; Müller, M.; Zeising, P. Reducing anxiety while interacting with industrial robots. In Proceedings of the 2016 ACM International Symposium on Wearable Computers, Heidelberg, Germany, 12–16 September 2016; pp. 54–55. [Google Scholar]
  49. Vogel, C.; Fritzsche, M.; Elkmann, N. Safe human-robot cooperation with high-payload robots in industrial applications. In Proceedings of the 2016 11th ACM/IEEE International Conference on Human-Robot Interaction (HRI), Christchurch, New Zealand, 7–10 March 2016; pp. 529–530. [Google Scholar]
  50. Vogel, C.; Walter, C.; Elkmann, N. A projection-based sensor system for ensuring safety while grasping and transporting objects by an industrial robot. In Proceedings of the 2015 IEEE International Symposium on Robotics and Intelligent Sensors (IRIS), Langkawi, Malaysia, 18–20 October 2015; pp. 271–277. [Google Scholar]
  51. Vogel, C.; Poggendorf, M.; Walter, C.; Elkmann, N. Towards safe physical human-robot collaboration: A projection-based safety system. In Proceedings of the 2011 IEEE/RSJ International Conference on Intelligent Robots and Systems, San Francisco, CA, USA, 25–30 September 2011; pp. 3355–3360. [Google Scholar]
  52. Vogel, C.; Walter, C.; Elkmann, N. A projection-based sensor system for safe physical human-robot collaboration. In Proceedings of the 2013 IEEE/RSJ International Conference on Intelligent Robots and Systems, Tokyo, Japan, 3–7 November 2013; pp. 5359–5364. [Google Scholar]
  53. Vogel, C.; Walter, C.; Elkmann, N. Space-time extension of the projection and camera-based technology dealing with high-frequency light interference in HRC applications. In Proceedings of the 2021 IEEE 2nd International Conference on Human-Machine Systems (ICHMS), Magdeburg, Germany, 8–10 September 2021; pp. 1–6. [Google Scholar]
  54. San Martín, A.; Kildal, J. Audio-visual AR to improve awareness of hazard zones around robots. In Proceedings of the Extended Abstracts of the 2019 CHI Conference on Human Factors in Computing Systems, Glasgow, UK, 4–9 May 2019; pp. 1–6. [Google Scholar]
  55. Lee, J.D.; See, K.A. Trust in automation: Designing for appropriate reliance. Hum. Factors 2004, 46, 50–80. [Google Scholar] [CrossRef]
  56. Bolano, G.; Roennau, A.; Dillmann, R. Transparent robot behavior by adding intuitive visual and acoustic feedback to motion replanning. In Proceedings of the 2018 27th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN), Nanjing, China, 27–31 August 2018; pp. 1075–1080. [Google Scholar]
  57. Palmarini, R.; del Amo, I.F.; Bertolino, G.; Dini, G.; Erkoyuncu, J.A.; Roy, R.; Farnsworth, M. Designing an AR interface to improve trust in Human-Robots collaboration. Procedia CIRP 2018, 70, 350–355. [Google Scholar] [CrossRef]
  58. Maier, S.F.; Amat, J.; Baratta, M.V.; Paul, E.; Watkins, L.R. Behavioral control, the medial prefrontal cortex, and resilience. Dialogues Clin. Neurosci. 2006, 8, 397–406. [Google Scholar] [CrossRef] [PubMed]
  59. Kim, T.; Hinds, P. Who should I blame? Effects of autonomy and transparency on attributions in human-robot interaction. In Proceedings of the ROMAN 2006—The 15th IEEE International Symposium on Robot and Human Interactive Communication, Hatfield, UK, 6–8 September 2006; pp. 80–85. [Google Scholar]
  60. Eisenbart, B.; Bouwman, S.; Voorendt, J.; McKillagan, S.; Kuys, B.; Ranscombe, C. Implementing design thinking to drive innovation in technical design. Int. J. Des. Creat. Innov. 2022, 10, 141–160. [Google Scholar] [CrossRef]
  61. San Martin, A.; Kildal, J.; Lazkano, E. Mixed Reality Representation of Hazard Zones While Collaborating with a Robot: Sense of Control over Own Safety. preprint. Available online: https://www.researchsquare.com/article/rs-3498014/v1 (accessed on 7 May 2024).
  62. Rojas, R.A.; Garcia, M.A.R.; Wehrle, E.; Vidoni, R. A variational approach to minimum-jerk trajectories for psychological safety in collaborative assembly stations. IEEE Robot. Autom. Lett. 2019, 4, 823–829. [Google Scholar] [CrossRef]
  63. Rojas, R.A.; Vidoni, R. Designing fast and smooth trajectories in collaborative workstations. IEEE Robot. Autom. Lett. 2021, 6, 1700–1706. [Google Scholar] [CrossRef]
  64. Lucero, A. Using affinity diagrams to evaluate interactive prototypes. In Proceedings of the IFIP Conference on Human-Computer Interaction, Bamberg, Germany, 14–18 September 2015; Springer: Berlin/Heidelberg, Germany, 2015; pp. 231–248. [Google Scholar]
  65. Cumming, G. The new statistics: Why and how. Psychol. Sci. 2014, 25, 7–29. [Google Scholar] [CrossRef] [PubMed]
  66. Dragicevic, P. Fair statistical communication in HCI. In Modern Statistical Methods for HCI; Springer: Berlin/Heidelberg, Germany, 2016; pp. 291–330. [Google Scholar]
  67. Roesler, E.; Manzey, D.; Onnasch, L. A meta-analysis on the effectiveness of anthropomorphism in human-robot interaction. Sci. Robot. 2021, 6, eabj5425. [Google Scholar] [CrossRef]
  68. Endsley, M.R.; Selcon, S.J.; Hardiman, T.D.; Croft, D.G. A comparative analysis of SAGAT and SART for evaluations of situation awareness. Proc. Hum. Factors Ergon. Soc. Annu. Meet. 1998, 42, 82–86. [Google Scholar] [CrossRef]
Figure 1. Decision autonomy about how to keep the human worker safe during collaboration with a robot. In the current paradigm, most of the accountability for the operator’s safety is delegated to the robot.
Figure 2. Decision autonomy for the mechanisms to keep the human worker safe in a collaborative task with a robot. In the proposed new paradigm, most of the accountability for the operator’s safety is on the (well-informed) worker.
Figure 3. Audio–visual design for situational awareness and setup for HRC tasks with previsualization of trajectories. The green boxes are the transparent volumes used by the program to detect when the user enters zones of different danger levels.
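The Figure 3 caption describes the mechanism used to determine which danger level the worker currently occupies: transparent volumes against which the user's tracked position is tested. The following is a minimal, purely illustrative sketch of such a containment test with severity ordering; the class name, zone labels, and geometry are assumptions for illustration and are not taken from the study's mixed-reality implementation.

```python
from dataclasses import dataclass


@dataclass
class HazardBox:
    """Axis-aligned transparent volume associated with one danger level (illustrative)."""
    level: str                                   # "red", "orange" or "yellow"
    min_xyz: tuple[float, float, float]          # lower corner, metres
    max_xyz: tuple[float, float, float]          # upper corner, metres

    def contains(self, point: tuple[float, float, float]) -> bool:
        return all(lo <= c <= hi for c, lo, hi in zip(point, self.min_xyz, self.max_xyz))


SEVERITY = ("red", "orange", "yellow")           # checked from most to least severe


def danger_level(tracked_position, boxes):
    """Return the most severe zone containing the worker's tracked position, or 'safe'."""
    for box in sorted(boxes, key=lambda b: SEVERITY.index(b.level)):
        if box.contains(tracked_position):
            return box.level
    return "safe"
```

In use, the list of boxes would be updated every frame to follow the robot's current and previewed motion before calling the containment test with the headset's tracked position.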
Figure 4. Scenario of the user study. While the robot operated on its own, the participant performed calculations in the individual workspace (I). Once the robot called the participant, she moved to the robot’s table (III) to perform the assigned task. While the participant worked at the table, the robot moved in a loop. During each condition, the experimenters (II) took notes on how the participant performed the task.
Figure 5. Example view of the information perceived by the user during the task, superimposed on the real scenario. The image shows the information received by the user when inside a yellow-level hazard zone.
Figure 6. (Left graph): mean time in seconds spent inside each zone per task. (Right graph): mean number of times each zone was entered per task. (Red: red zone; Orange: orange zone). The graphs show average values per condition. Error bars represent 95% confidence intervals. Numerical values of the means and confidence intervals are given in Table 1.
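Figure 6 plots two per-task metrics: the time spent inside each zone and the number of entries into it. As a hedged sketch of how such metrics can be derived from a timestamped log of zone classifications (the log format and sampling scheme below are assumptions for illustration, not the study's instrumentation):

```python
def zone_metrics(log, zone):
    """Total dwell time in `zone` (seconds) and number of distinct entries into it.

    `log` is a chronologically ordered list of (timestamp_s, zone_name) samples,
    e.g. produced by a classifier such as the one sketched for Figure 3.
    """
    total_time, entries, inside = 0.0, 0, False
    for (t0, z0), (t1, _) in zip(log, log[1:]):
        if z0 == zone:
            total_time += t1 - t0      # accumulate time until the next sample
            if not inside:
                entries += 1           # transition from outside to inside
            inside = True
        else:
            inside = False
    if log and log[-1][1] == zone and not inside:
        entries += 1                   # log ends with a new entry into the zone
    return total_time, entries


# Example with illustrative data: 1 s spent in the orange zone, entered once
print(zone_metrics([(0.0, "safe"), (1.0, "orange"), (2.0, "safe")], "orange"))
```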
Figure 7. Representative comments from each user, classified by category, obtained from the transcribed semi-structured interviews.
Table 1. Mean time spent in, and mean number of entries into, the red and orange zones per task. Values in brackets are 95% confidence intervals.

                                 Red                     Orange
Time in the zone (s)             0.342 [0, 0.911726]     11.46 [8.484, 14.44351]
Number of times in the zone      0.122 [0.029, 0.215]    6.717 [5.05, 8.38]
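The means in Table 1 are reported with 95% confidence intervals, in line with the estimation-based statistical reporting advocated in [65,66]. As an illustrative sketch only (the authors' exact interval computation is not restated here), a two-sided t-based 95% interval over per-participant values could be obtained as follows; for non-negative quantities such as dwell times, a lower bound falling below zero is commonly clipped to zero, which is consistent with the [0, 0.911726] interval for the red zone.

```python
import statistics
from math import sqrt

from scipy import stats  # for the t critical value


def mean_ci95(samples):
    """Sample mean and two-sided 95% t-based confidence interval (illustrative)."""
    n = len(samples)
    m = statistics.mean(samples)
    sem = statistics.stdev(samples) / sqrt(n)       # standard error of the mean
    t = stats.t.ppf(0.975, df=n - 1)                # 97.5th percentile of t(n-1)
    return m, (max(0.0, m - t * sem), m + t * sem)  # clip lower bound for durations


# Illustrative per-participant dwell times (s), not data from the study
print(mean_ci95([10.2, 12.8, 9.5, 13.1, 11.7]))
```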
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
