Article

A Human Error Analysis in Human–Robot Interaction Contexts: Evidence from an Empirical Study

by Mario Caterino 1,*, Marta Rinaldi 2, Valentina Di Pasquale 1, Alessandro Greco 2, Salvatore Miranda 1 and Roberto Macchiaroli 2

1 Department of Industrial Engineering, University of Salerno, 84084 Fisciano, Italy
2 Department of Engineering, University of Campania Luigi Vanvitelli, 81031 Aversa, Italy
* Author to whom correspondence should be addressed.
Machines 2023, 11(7), 670; https://doi.org/10.3390/machines11070670
Submission received: 10 May 2023 / Revised: 7 June 2023 / Accepted: 19 June 2023 / Published: 21 June 2023
(This article belongs to the Special Issue Recent Advances in Smart Design and Manufacturing Technology)

Abstract

More than 60 years have passed since the installation of the first robot in an industrial context. Since then, industrial robotics has seen great advancements and, today, robots can collaborate with humans in executing a wide range of working activities. Nevertheless, the impact of robots on human operators has not been deeply investigated. To address this problem, we conducted an empirical study to measure the errors made by two groups of people performing a working task through a virtual reality (VR) device. A sample of 78 engineering students participated in the experiments. The first group worked with a robot, sharing the same workplace, while the second group worked without the presence of a robot. The number of errors made by the participants was collected and analyzed. Although the statistical results show no significant differences between the two groups, the qualitative analysis suggests that the presence of the robot led participants to pay more attention during the execution of the task, but also resulted in a less marked learning experience.

1. Introduction

The first application of industrial robotics dates back to 1961, when General Motors introduced the UNIMATE robot in its factory to perform tasks that were repetitive and dangerous for human operators [1]. Since then, falling prices due to technological developments and the need to replace human workers with machines paved the way for the adoption of robots in many sectors. The introduction of robots into industry was one of the main innovations of the third industrial revolution, and, today, half of the global manufacturing companies have at least one robot in their factories [2]. One of the main differences between the robots adopted in industrial systems during the third industrial revolution and the ones developed within the fourth industrial revolution is the degree of collaboration with human workers. In the past, human workers could not interact with robots, which were confined in delimited zones to carry out the programmed tasks, often dangerous for humans because of the execution speeds required for series production. In recent years, the rise of new technologies has increased the safety of machines, including robots. Thus, the concept of human–robot interaction (HRI) has arisen and seems to be very promising for improving the performance of industrial systems, while also considering the social aspects related to human jobs, which may become less repetitive and exhausting. The HRI discipline brings together scholars and practitioners from various domains (engineers, psychologists, designers, etc.) to study the best solutions for integrating robots that can interact with humans and the social impacts of such an interaction [3]. The common opinion in the scientific community is that human behavior and, consequently, the performance of humans in industrial environments, is strongly affected by the feelings of trust and safety toward robots [4].
Safety is a crucial aspect of HRI because the closeness to the robots can cause injuries to humans due to excessive energy/power transferred by the robots [5]. Zacharaki et al. [6] and Sharkawy and Koustoumpardis [7] claimed that the safety problem in HRI needs to be addressed from different perspectives, in relation to the design of the robots, the standards, and human psychology. The design of robots in HRI concerns the implementation of control methods to tune the speed and the force exerted by robots according to their proximity to a human worker. In Caterino et al.’s work [8], the authors presented an industrial application in which algorithms were introduced to control the speed of the robot according to its proximity to the worker. The method is based on the standard ISO/TS 15066 [9], which provides the guidelines for safety in HRI by defining four different degrees of collaboration: (i) safety-rated monitored stop, when a robot ceases its motion because a worker is entering the collaborative workspace; (ii) hand guiding, when workers use hand-operated devices to transmit motion commands to the robot; (iii) speed and separation monitoring, when robot systems and workers may move concurrently in the shared workspace (co-existence); and (iv) power and force limiting, when physical contacts between the workers and the robots can occur either intentionally or unintentionally.
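For orientation, speed and separation monitoring is usually formalized through a minimum protective separation distance; a simplified form of the relation given in ISO/TS 15066 [9] (which builds on ISO 13855) is recalled below as background, not as the control law implemented in [8]:

S_p(t_0) ≥ S_h + S_r + S_s + C + Z_d + Z_r

where S_h is the contribution of the operator’s motion during the robot’s reaction and stopping time, S_r is the contribution of the robot during its reaction time, S_s is the robot’s stopping distance, C is the intrusion distance, and Z_d and Z_r are the position uncertainties of the operator and the robot, respectively. Whenever the measured separation falls below S_p, the robot must slow down or perform a protective stop.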
It is clear that, according to the degree of HRI, the psychological aspect, alongside the trust of the human worker in the robot system, plays a significant role. Trust is “the attitude that an agent will help achieve an individual’s goals in a situation characterized by uncertainty and vulnerability” [10]. In HRI contexts, trust is important because it affects human decisions and behavior. Social studies have demonstrated that individuals’ trust in robots is linked to their previous personal experiences, their attitudes, and the risks connected to the execution of the tasks [11,12]. In industrial contexts, trust has been assessed by Charalambous et al. [13], who observed that it mainly depends on the robot’s motion and speed and on the safety level perceived by the human during collaboration.
Depending on the level of trust, the perceived safety, and the design of the robots, human performance can be affected while executing a task in collaborative workplaces, leading to errors. Such errors can, at times, result in serious injuries and are, in any case, an indicator of human performance within industrial environments, especially in HRI cases [14].
This paper aims to investigate one such aspect of human performance in an industrial environment, i.e., the number of errors made during the working activity. To this end, the errors made by two groups of workers, one working in a shared environment with a robot and one working without, have been assessed. The assessments have been performed through the use of VR, and statistical analyses have been carried out to evaluate whether there were significant differences in the errors made by the two groups of workers. Such differences would lead to the conclusion that the presence of robots may affect human performance.
The use of VR tools falls within the scope of promoting or testing efficient and safe industrial systems and improving the worker’s well-being [15,16] through digital tools developed during the era of the fourth industrial revolution. In HRI contexts, the use of digital tools represents a strategic way to enable dynamic and safe analyses. Among several tools, VR reproduces the working environment, simulates the production system, and allows for quick, safe, and economical interaction between humans and robots [17]. For years, the use of VR has been validated as an effective tool to simulate industrial HRI systems [18], and it has been demonstrated that it can provide performance assessments of HRI systems [19], including errors [14].
The remainder of the paper is organized as follows. Section 2 analyzes the literature related to the scope of this paper. In Section 3, the experimental phase is explained, while in Section 4, the results are reported and discussed. Finally, Section 5 reports the main conclusions.

2. Literature Review

Human errors in working environments can occur. The consequences of such errors can range from wasted time to more serious problems affecting the safety of the workers themselves, especially in high-risk fields such as nuclear plants or aerospace [20]. In all cases, human errors represent financial losses for the companies. For this reason, the study of human errors has become increasingly important, and today, it is a well-defined discipline called human reliability analysis (HRA).
HRA is the discipline that aims to assess the reliability of a complex system in which humans and machines work together. To elaborate, HRA evaluates the worker’s contribution to the reliability of the system, trying to predict human error rates and to understand the impact caused by human errors in such systems [21]. Although each method attempting to evaluate the human error probability (HEP) falls under the umbrella of HRA, it is commonly recognized by scientists that HRA has three main functions [22]: (i) the identification of human errors; (ii) the prediction of their likelihood of occurrence; and (iii) the reduction of the probability of the occurrence of errors. The focus of this paper is on (i) and (ii).
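As a point of reference for function (ii), the simplest frequentist estimate of the HEP, in the tradition of the handbook by Swain and Guttmann [21], is

HEP = (number of observed errors of a given type) / (number of opportunities for that error to occur),

which corresponds to the kind of ratio collected in the experimental campaign described in Section 3, where error counts are recorded over a known number of performed operations.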
HRA can be conducted by utilizing several methods [23] and its domain of application is very wide. In the assembly and manufacturing fields, HRA methods have been used for different applications and may have different purposes. Kern and Refflinghaus developed an HRA for manual assembly operations in the automotive sector, demonstrating its usefulness in developing time-optimized and quality-optimized assembly operations [24]. The same authors also demonstrated that implementing systematic HRA methods provides companies with a reliable tool for error risk prediction [25]. Caputo et al. [26] defined an HRA model that, besides mapping error types and analyzing the logic conditions in which the errors occurred, was also able to identify the errors causing the highest economic impact and to provide a quantitative assessment of the economic impact of the errors. Di Pasquale et al. used an HRA method to simulate the optimal breaks during a work shift in a manual assembly line [27]. Concerning the causes of errors, Torres et al. highlighted that, during the execution of manual assembly tasks, some operations are the most likely to cause errors because their level of difficulty is very high [28]. In other cases, the main reason for the errors can be related to the poor skills of the operators or to adverse psychological conditions, often due to scarce confidence in the working environment or a lack of safety awareness [29]. The psychological factors which lead to human errors have been widely debated in the literature since 1990, when the study of HRA started combining different areas of scientific study, such as behavioral science and psychology [30]. The need to also study the psychological aspects of HRA arose when it became evident that human behavior is impacted by many aspects that do not impact a machine. The working environment, mental and physiological factors, and many others have been widely recognized as possible causes of human errors [31]. In this context, it is not possible to neglect these factors.
When robots are introduced into working spaces shared with humans in industrial contexts, they represent agents affecting the behavior of human operators. In HRI systems, robots are often seen as “co-workers” by operators, generating social interactions [32]. Additionally, the introduction of cooperative or collaborative robotics leads to many benefits for operators, mainly related to the reduction in the effort required by repetitive and heavy tasks [33]. In other cases, robots may create feelings of fear, anxiety, and surprise, especially when they have dangerous end effectors or move along unpredictable trajectories [34]. According to Lu et al. [34], the factors generating these feelings are mainly related to two aspects. (i) The characteristics of the robots, such as their dimensions or speed, which are connected with safety awareness. In fact, robots’ speed and dimensions lead operators to emphasize their perception, comprehension, and projection of safety-related events during the daily task to understand the potential hazards of the working environment surrounding them [35]. (ii) The robots’ trajectories, which are often perceived as unpredictable and confuse the human operator. In fact, in manufacturing contexts, robots are often programmed to minimize the execution times of a task, thus resulting in movement trajectories that are unpredictable for the operator working with or near the robots [36]. The presence of the safety awareness issue was also confirmed by a study by Di Pasquale et al. [37], who found that, besides safety, the presence of a robot in a shared working environment also influences ergonomics and productivity.
From the previous literature, it is evident that human operators’ performance can be impacted by HRI. The evaluation of such impacts is carried out by means of different methods, both direct (such as questionnaires [38,39]) and indirect (cardiac response, facial expression, electromyogram, etc. [34]). Some empirical studies used direct and indirect methods together to evaluate the impact of robots on human actions in HRI. The findings are often conflicting. Huber et al. [40] demonstrated that robots influence the behavior of humans even in simple tasks, such as those involving hand movements. Similar results were found by Zhang et al. [41], whose empirical study demonstrated that robots affect humans in some actions, influencing the completion times and the trajectories followed. The same results were confirmed by Chen et al. [42], who observed that people changed their behavior according to their anticipation of a potential collision with robots during pedestrian motion. On the other hand, the practical findings of the study proposed by Xie et al. [43] highlighted that there is no difference in the behavior of humans when comparing human–human and human–robot activities, even if the participants declared a higher trust in human–human than in human–robot activities. Vassallo et al. [44] observed similar results for pedestrian activity.
Among the indirect methods, the evaluation of operators’ errors during the working tasks can be used as an approach to carry out HRA in HRI [45,46]. Errors directly influence the productivity of the company because they generate defective items. Morioka and Sakakibara demonstrated that HRI reduces the number of defective items thanks to the precision and repeatability of the robots [47]. In contrast, the feelings of anxiety generated by the presence of robots may lead the operators to make errors [48]. Thus, there is, again, a conflicting perspective on the impacts of robots on human behavior and performance. To address this problem, this paper aims to provide insights from an experimental study by comparing the errors made by operators working with and without robots in a cooperative industrial environment.
To the best of the authors’ knowledge, there are no experimental studies aiming to assess the impact of HRI on the number of errors made by the operators. To cover this research gap, an experimental campaign has been developed and conducted through VR systems to reach the objective of the paper.

3. Materials and Methods

An experimental campaign has been conducted to assess the impact of the robot’s presence on human performance by evaluating the number of errors made by the operators in an HRI environment. The experimental campaign has been conducted in a laboratory environment by virtually reproducing a real task of a company working in the aerospace sector.

3.1. Task and Virtual Environment

VR has been used to reproduce the working environment and simulate the task. The task reproduced is an aircraft fuselage panel assembly task and consists of the application of sealant on a stringer. The task includes several operations to be completed, which involve the use of different tools. For confidentiality reasons, it is not possible to describe the operations in detail, but a general overview is given in Figure 1, where the main operations of the entire task are represented in the VR environment, and in Table A1 in Appendix A, where the errors made during the task are listed. It is worth noting that the simulated task is the reproduction of a real industrial task and was not created specifically for the purposes of this paper.
For realizing the virtual simulation, the HTC Vive Pro® system was used, while the software used to build the interactions among objects and human players was Unity®. Figure 2 shows the simulated and real environments.
To guide the participants through the execution of the required task, a panel was created in the simulated working environment to present instructions on the specific activities to perform.
Two different scenarios have been simulated: scenario#1, in which people perform the task in a shared environment while the robot performs another operation, and scenario#2, in which people perform the task without the presence of the robot. The task that the operator has to perform is identical in both scenarios. Co-existence has been tested in scenario#1 by considering speed and separation monitoring according to ISO/TS 15066. Co-existence is a type of HRI in which humans and robots share the same workspace but cannot have physical contact. When the robot approaches the operator, its speed is reduced until it stops when the distance falls below a safety threshold. Figure 3 shows a point-of-view shot of the operator while completing one of his/her tasks. During the task, the robot operated at different distances from the operator. The robot used for the experiments is the digital reproduction of the one used in the real industrial environment; the model is a Fanuc M20iA.
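The speed-reduction behavior described above can be sketched as a simple distance-based speed override; the function name and threshold values below are illustrative assumptions and are not taken from the simulated Fanuc cell or from [8].

```python
def scaled_speed(distance_m: float,
                 nominal_speed: float = 1.0,
                 stop_distance_m: float = 0.5,
                 slow_distance_m: float = 2.0) -> float:
    """Return a robot speed override for a given human-robot distance.

    Illustrative sketch of speed and separation monitoring (co-existence):
    full speed beyond `slow_distance_m`, a linear slow-down in between,
    and a protective stop below `stop_distance_m`. All threshold values
    are assumptions for illustration, not parameters of the study.
    """
    if distance_m <= stop_distance_m:
        return 0.0                # protective stop
    if distance_m >= slow_distance_m:
        return nominal_speed      # no human nearby: nominal speed
    # Linear interpolation between the stop and slow-down thresholds.
    fraction = (distance_m - stop_distance_m) / (slow_distance_m - stop_distance_m)
    return nominal_speed * fraction


if __name__ == "__main__":
    for d in (0.3, 0.8, 1.5, 2.5):
        print(f"distance = {d:.1f} m -> speed override = {scaled_speed(d):.2f}")
```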

3.2. Participants

Students from the department of engineering (mainly industrial, aerospace, or mechanical engineering) were recruited for the experiment. Overall, 78 students (33 females, 45 males) volunteered for the experimentation (age: mean = 24.8 years, standard deviation = 3.49 years). A large number of participants was recruited in order to limit undesired effects, such as external factors (for example, stress) that may influence the efficiency of an individual participant [49].
Participants reported having no previous experience with robots and no or very little experience with virtual environments. However, most participants reported slight familiarity with joystick devices, which were utilized to control the virtual scene. Additionally, no one reported previous experience with or knowledge of the task to be performed. None declared mobility, sight, or hearing problems.

3.3. Procedure and Experimental Design

The experimental campaign was conducted in a laboratory environment and it was carried out following the rules of the Declaration of Helsinki of 1975 (https://www.wma.net/what-we-do/medical-ethics/declaration-of-helsinki/, (accessed on 10 May 2023)), revised in 2013.
Upon arrival at the laboratory, the context and the setting of the experimental test were introduced, and participants were asked to watch a short video, which trained them on how to interact with VR and how to perform the task (the tools to be used and the objects of the virtual scene). After that, they were given a copy of the information sheet and asked to sign the consent form before starting the experiment. To prevent any potential bias, the purpose of the experiment was not revealed to them during the introduction, and participants were simply told that they would be enrolled in a research study involving VR.
Then, each participant was prepared to interact with VR and assigned to one group (HRI_coex or No_HRI); the assignment was random and the groups were only balanced by sex. Participants were not informed of the group to which they belonged; thus, they did not know in advance whether the robot would be present.
A between-group experiment was conducted so that each person was only exposed to a single condition: group#1 (called “HRI_coex”) performed the assembly task with the presence of the robot (scenario#1), and group#2 (called “No_HRI”) performed the assembly task without the presence of the robot (scenario#2).
Both groups consisted of 39 participants, a number more than sufficient for reaching statistical significance [50,51]. The characteristics of the groups are reported in Table 1.
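A sex-balanced random assignment of this kind can be sketched as follows; this is an illustrative reconstruction with a hypothetical helper, not the authors’ actual assignment procedure.

```python
import random

def assign_balanced(participants, seed=0):
    """Randomly assign participants to HRI_coex / No_HRI, balancing by sex.

    `participants` is a list of (participant_id, sex) tuples, with sex in
    {"F", "M"}. Each sex stratum is shuffled and split roughly in half,
    so the two groups end up balanced by sex (illustrative sketch only).
    """
    rng = random.Random(seed)
    groups = {"HRI_coex": [], "No_HRI": []}
    for sex in ("F", "M"):
        stratum = [p for p in participants if p[1] == sex]
        rng.shuffle(stratum)
        half = len(stratum) // 2
        groups["HRI_coex"].extend(stratum[:half])
        groups["No_HRI"].extend(stratum[half:])
    return groups

# Hypothetical roster: 33 females and 45 males, as in the recruited sample.
roster = [(f"F{i}", "F") for i in range(33)] + [(f"M{i}", "M") for i in range(45)]
assignment = assign_balanced(roster)
print({name: len(members) for name, members in assignment.items()})
```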
Experimental conditions were the same across participants. Each participant repeated the task four times without breaks between repetitions. Overall, the test lasted 20 min on average.
After completing the experiment, participants were asked to give their opinion on the complexity of the task and their perception of the impact of co-existence on their performance. They were asked to respond to seven questions: six were related to the complexity of the specific operations performed during the task, and one was on the participant’s perception of the errors made due to the presence of the robot. This last question was only posed to the HRI_coex group. For all the questions, the respondents were asked to indicate their opinion using a 10-point bipolar scale [52], where 1 = “not complex” or “strongly disagree” and 10 = “highly complex” or “strongly agree”. The detailed questions can be found in Table 6.

3.4. Error Assessment and Data Collection

The purpose of this paper is to investigate whether the number of errors made during the working tasks is affected by the presence of a robot in a co-existing working task. The errors were classified into different categories according to the difficulty of performing the task and their consequences on the product quality and lead time. Task difficulty was considered since the most difficult operations are those most likely to generate errors and compromise the final quality of the product, thus also generating economic damage. Moreover, each error impacts the lead time of the entire task, thereby also producing negative economic consequences. The errors leading to the highest economic consequences are also the ones generating the highest increase in the lead times. Based on these aspects, a panel of experts working in the aerospace field was involved in a roundtable discussion to identify the kinds of errors to investigate. Three categories were proposed and are listed below:
  • Low-severity errors: Errors considered in this category cause a small increase in the lead times of the entire task, but do not cause any problem to the final quality of the product.
  • Medium-severity errors: Errors considered in this category cause a medium increase in the lead times of the entire task with very limited consequences on the quality of the final product.
  • High-severity errors: Errors in this category cause an increase in the lead time of the entire task as well as quality damage that compromises the final quality of the products. Products subject to this type of error must be reworked to make their quality conform to the quality standards.
Twelve error types were identified for the experimental study, including the case in which no error is made (code E0). Each error is briefly described and coded in Table A1 in Appendix A.
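To make the classification operational, the sketch below shows one possible way of tallying observed error codes by severity level; the severity mapping follows Table A1, while the example observation list is a hypothetical placeholder, not data from the study.

```python
from collections import Counter

# Severity level of each error code, as listed in Table A1 (E0 = no error).
SEVERITY = {
    "E0": None,
    "E1": "low", "E3": "low", "E4": "low", "E7": "low", "E10": "low", "E11": "low",
    "E2": "medium", "E6": "medium", "E8": "medium", "E9": "medium",
    "E5": "high",
}

def tally_by_severity(observed_codes):
    """Count operations without errors and errors per severity level."""
    counts = Counter({"no_error": 0, "low": 0, "medium": 0, "high": 0})
    for code in observed_codes:
        severity = SEVERITY[code]
        counts["no_error" if severity is None else severity] += 1
    return dict(counts)

# Hypothetical observation list for one participant and one repetition.
example = ["E0", "E0", "E5", "E0", "E2", "E0"]
print(tally_by_severity(example))
# -> {'no_error': 4, 'low': 0, 'medium': 1, 'high': 1}
```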
According to this classification, data were collected through real-time direct observations and validated through the videos of the simulated environment recorded during the experiments. The errors were manually assessed by expert analysts (researchers in the operations management field).

3.5. Statistical Analysis

The application of the experimental procedure allowed us to obtain independent observations (data) on the number of errors made by each participant. Such data were primarily used to provide an overview of the results, using descriptive statistics in the form of frequencies. Statistical analyses were then used, whenever appropriate, to check for statistical differences between the groups. The first assessment was to verify whether the data followed a normal distribution, which determines the choice of the test to perform. For this reason, a Shapiro–Wilk test was carried out. The next analysis focused on assessing the statistical differences between the two groups, considering the number of errors made in the single experiments, with a focus on potential differences between males and females. Finally, the differences within each group between the first and last experiments were statistically evaluated to determine whether there was a reduction in the number of errors due to the repetition of the experiments. For these assessments, two-tailed Wilcoxon rank sum tests were carried out based on the results of the Shapiro–Wilk test. All the statistical analyses were conducted using RStudio® 2023.
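The analyses were run in RStudio; for orientation, an equivalent test sequence is sketched below in Python with SciPy. The per-participant error counts are hypothetical placeholders, and SciPy’s `mannwhitneyu` is used because the Mann–Whitney U test is equivalent to the two-sample Wilcoxon rank sum test.

```python
import numpy as np
from scipy import stats

# Hypothetical per-participant error counts for one experiment (not the study data).
hri_coex = np.array([1, 0, 2, 1, 3, 0, 1, 2, 0, 1])
no_hri   = np.array([2, 1, 0, 1, 1, 2, 0, 3, 1, 0])

# Step 1: Shapiro-Wilk normality check for each group.
for name, sample in (("HRI_coex", hri_coex), ("No_HRI", no_hri)):
    w_stat, p_norm = stats.shapiro(sample)
    print(f"{name}: Shapiro-Wilk W = {w_stat:.3f}, p = {p_norm:.3f}")

# Step 2: if normality is rejected, compare the groups with a two-tailed
# Wilcoxon rank sum test (Mann-Whitney U).
u_stat, p_value = stats.mannwhitneyu(hri_coex, no_hri, alternative="two-sided")
alpha = 0.05
decision = "reject" if p_value < alpha else "fail to reject"
print(f"Wilcoxon rank sum: U = {u_stat:.1f}, p = {p_value:.3f}, {decision} H0 at alpha = {alpha}")
```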

4. Results and Discussion

4.1. Descriptive Statistics

As mentioned previously, the participants were divided into two groups, namely the group performing the task in scenario#1 with the robot (group #1, “HRI_coex”) and the group performing it in scenario#2, without HRI (group #2, “No_HRI”).
Firstly, a qualitative analysis of the errors was performed to highlight the frequency of errors over the total number of operations performed. Figure 4 shows the total frequencies of the errors for the two groups, i.e., the errors made by the two groups in all the experiments.
From this first analysis, it can be observed that the difference in the proportion of errors committed between the two groups is only 0.1% (13.6% for the HRI_coex group and 13.7% for the No_HRI group). A small difference can be found in the type of errors made; in particular, the HRI_coex group has a higher percentage of high-severity errors.
Table 2 shows the frequency of the total errors made by the two groups in each of the four experiments.
By analyzing the results of Table 2, the difference between the two groups also seems to be negligible. One remarkable finding is the difference between the first and second experiments: although the HRI_coex group made fewer errors during the first experiment, the situation was reversed in the second experiment.

4.2. Statistical Results

Hypothesis tests were performed to evaluate whether there are significant differences in the errors made by the two groups. In order to choose the most appropriate hypothesis test, the normality of the data must be verified. For this reason, the Shapiro–Wilk test was performed. The test demonstrated that the errors are not normally distributed. This can also be noted qualitatively in Figure 5, where the histograms of the mean number of errors per participant (x-axis) against the number of participants (y-axis) are represented for the two groups over the four experiments and compared with the expected normal curve, given the means and the standard deviations of the groups.
Based on the results of the Shapiro–Wilk test, a two-tailed Wilcoxon rank sum test was performed to evaluate whether there are significant differences between the two groups, considering both the errors made in the single experiments and the mean of the errors made during the four experiments (Experiment_mean). The Wilcoxon rank sum test is a nonparametric test that can be used to assess whether independent samples of data come from the same distribution, even if it is not normal [53]. In detail, the Wilcoxon rank sum test was used to evaluate whether the medians of the two groups come from the same distribution. The null and alternative hypotheses, in this paper, are formulated as follows:
  • H0: the median of HRI_coex is equal to the median of No_HRI;
  • H1: the median of HRI_coex is not equal to the median of No_HRI.
Table 3 reports the values of the tests for the four experiments.
The output of the Wilcoxon rank sum test is the same for all the experiments. The p-value is always higher than the α-value; thus, it is not possible to reject the null hypothesis. There is no evidence of a significant difference in the medians of the two groups when considering either the single experiments or the mean of the errors.
This can also be observed in Figure 6, where the boxplots of the two groups are shown.
The boxplots show similar characteristics for the two groups, even if the HRI_coex group shows a higher number of outliers than the No_HRI group, which, instead, presents a greater spread of the data. A plausible interpretation is that the participants of the HRI_coex group paid more attention to the execution of the tasks, but the presence of the robot still had some kind of impact, generating more errors in some cases (three outliers, represented by circles in Figure 6).
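A boxplot comparison of this kind (cf. Figure 6) can be reproduced, for example, with matplotlib; the sketch below uses randomly generated placeholder data, not the study data.

```python
import matplotlib.pyplot as plt
import numpy as np

rng = np.random.default_rng(42)
# Hypothetical mean-error values per participant (39 per group), for illustration only.
hri_coex = rng.poisson(1.2, size=39)
no_hri   = rng.poisson(1.3, size=39)

fig, ax = plt.subplots(figsize=(5, 4))
ax.boxplot([hri_coex, no_hri], labels=["HRI_coex", "No_HRI"])
ax.set_ylabel("Mean number of errors per participant")
ax.set_title("Errors by group (illustrative data)")
plt.tight_layout()
plt.show()
```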
The second analysis was aimed at evaluating whether there were significant differences between male and female participants within the two groups. Again, a Wilcoxon rank sum test was performed. The hypotheses were the same H0 and H1 as in the previous analysis. The results are reported in Table 4.
In none of the experiments considered is it possible to reject the null hypothesis. Thus, in all the cases analyzed, the observed differences in the number of errors between the groups (HRI_coex, No_HRI) are compatible with chance alone.
The analysis of the boxplots for males and females (Figure 7) highlights some differences between them.
For both sexes, the HRI_coex group shows a lower variability than the No_HRI group. When considering only males (Figure 7a), participants performing the tests with the robot show a median very close to the first quartile, meaning that the distribution of the errors is skewed toward higher values, as opposed to the females (Figure 7b), whose median is closer to the third quartile than to the first. In addition, the first and third quartiles are lower for the females (Figure 7b) than for the males (Figure 7a), and the males present more outliers than the females. Thus, it could be concluded that females seem to pay more attention to the task execution than males in the presence of the robot (HRI_coex). This trend is also confirmed in the No_HRI group, where males show a higher median and higher percentiles than females, even if, in this case, the phenomenon is less evident because of the higher variability of the data highlighted by the boxplots. Finally, by comparing the boxplots of the two groups for the same sex, it is evident that the number of errors has a greater variability for the No_HRI group than for the HRI_coex group for both sexes (larger boxes and longer whiskers). Once again, this aspect seems to indicate a greater attention paid to the task execution by the HRI_coex group.
A Wilcoxon rank sum test was also performed to compare the two groups on a specific activity within the task. The activity is related to the deburring of the panel and is the only activity in which the participant was oriented facing the robot. Although the total number of errors during the four experiments was greater for the HRI_coex group (43 vs. 37), the Wilcoxon rank sum test does not allow rejecting the null hypothesis (W statistic = 851, p-value = 0.602, α-value = 0.05). In this case as well, it is possible to affirm that the presence of the robot does not significantly affect the number of errors made by the participants.
The last statistical analysis was aimed at evaluating whether there are significant differences within the same group in the number of errors made in the first and last experiments. A Wilcoxon rank sum test was performed again, this time with the assumption that a learning phenomenon applies to both groups and, consequently, that the number of errors made in the first and last experiments would be significantly different. The results of the tests are shown in Table 5.
For both groups, the p-value is lower than the α-value, indicating that the null hypothesis should be rejected and that the difference in the medians between the first and fourth experiments is significant. Thus, for both groups, a learning phenomenon is verified, even if the higher p-value of the HRI_coex group may lead to the conclusion that this group had a less marked learning phase than the No_HRI group. This thesis is also supported by the results reported in Table 2 on the number of errors made by the groups in the first and second experiments: even though the HRI_coex group made fewer errors during the first experiment, the situation was reversed in the following experiments, thus confirming that the learning phase for the HRI_coex group was less marked than for the No_HRI group.
An assessment was finally carried out to qualitatively evaluate the perception of participants regarding the complexity of the tasks performed and the perceived impact of the robot. Table 6 reports the questions submitted to the participants, while the results of this assessment are reported in Table 7.
Firstly, it can be observed that both groups had a similar perception of the complexity of the single operations within the task. Thus, it can be concluded that the robot does not have any influence on the perception of the complexity, as expected. According to the participants, all the operations have a low degree of complexity. This confirms the results of Table 2, which highlighted that the majority of the operations were carried out without making any error. The deburring operation (Question #3), which was the only one with a high severity level and the one that recorded the highest number of errors, was considered not complex, highlighting that the participants had no real perception of the errors made during the experiment. Moreover, the participants of the HRI_coex group strongly disagreed with the statement that the presence of a robot leads to making a higher number of errors (Question #7). This is a further confirmation of the previous statistical analysis, which highlighted that there are no significant differences between the groups in the number of errors made.

4.3. Discussion

The results achieved by this case study contribute to the theory in different ways. It was demonstrated that the presence of robots does not generate statistical differences in human performance. In fact, no significant difference was observed in the number of errors made between the group working in a co-existing environment with the robot and the group working without the robot in any of the statistical tests carried out. This is in line with part of the current literature, such as the works by Xie et al. [43] and Vassallo et al. [44], who noticed no difference between HRI and non-HRI activities. Most of the literature, however, argues that an influence is created by the presence of the robot in a working environment [40,41,42]. The difference between the results of the present study and those of the contrasting studies can be explained by the parameters monitored in the papers. The parameter monitored in the present study is the total number of errors made by the two groups, assessed empirically through real-time observation of the experiments and the recorded videos. In contrast with the study of Huber et al. [40], who used a very precise motion-tracking system able to detect very small differences between participant groups, and with other papers, which used very sophisticated devices capable of detecting very small differences in the monitored physiological parameters of participant groups [34], this paper evaluated a macroscopic parameter (the number of errors) to assess the impact of the presence of the robot on human performance during working tasks. It is easy to understand that monitoring such precise parameters makes it possible to detect finer differences between groups.
Our statistical analysis was also confirmed by the participants’ answers to a small questionnaire submitted to them after the experiments. The qualitative analysis of the answers highlighted that the perception of the complexity of the operations was not affected by the presence of the robot. Moreover, the participants of the HRI_coex group declared that they did not even perceive the robot as a source of nuisance that may have increased the number of errors made during the task.
However, some small differences between the behaviors of the two groups are also hypothesized in this paper based on some general results. By analyzing the boxplots (Figure 6 and Figure 7), the smaller height of the boxes and the greater number of outliers for the HRI_coex group seem to indicate that this group paid greater attention in carrying out the experiments. This could be explained by the presence of the robot, which may create feelings of anxiety or stress, making people more focused on the task they are executing. This is in line with what many authors have affirmed on the mental stress caused by HRI. The review by Lu et al. [34] demonstrated that psychological factors may affect the behavior and the performance of human operators working in HRI environments. A further demonstration of this is given by the last analysis performed in the previous section, aimed at assessing whether the learning experiences differed between the two groups. The results showed that, although both groups had a learning experience during the repetition of the experiments, the learning experience of the HRI_coex group was less marked than that of the No_HRI group.
Thus, once again, it seems that the presence of the robot has an impact on the feelings of the participants in the experiments.

5. Conclusions

The present study aimed to evaluate the influence of the presence of a robot in an HRI task performed in an industrial environment. An empirical assessment was carried out for this purpose. A task regarding the assembly of aeronautical parts was simulated in a VR environment and participants were asked to complete the task. Half of the participants performed the test in a co-existing environment with a robot (HRI_coex) and half without the robot (No_HRI). The parameter monitored to assess the differences between the groups was the number of errors made during the execution of the task. The results of the experiment highlight that there are no statistically significant differences between the two groups, although some considerations can be drawn from the results: the HRI_coex group seems to have been more focused and to have paid more attention to the correct execution of the task. Similar results were also observed when considering the results of male and female participants separately for the two groups.
This paper has some limitations. Only the number of errors was used as the monitored parameter to assess differences between the groups. The feelings of the participants, which can be assessed through a questionnaire, and physiological factors, which can be measured by means of specific devices (for example, oxygen intake or heart rate), were not evaluated. Moreover, parameters related to the task, such as the time needed by each participant to complete each repetition, were not considered in this paper. This parameter may be very important because it can provide significant insights into the learning rate for HRI tasks in a virtual environment. In fact, by analyzing the times employed by the participants for each repetition of the task, the learning curves for this type of task could be retrieved. Future developments of this paper will aim to cover these gaps. Moreover, this paper does not consider collaboration, but only co-existence. Future works will consider cooperative tasks to assess the effects of robots on human performance during cooperation, where physical contact between the human and the robot is possible.

Author Contributions

Conceptualization, M.C., M.R., V.D.P. and A.G.; methodology, M.C., M.R. and V.D.P.; software, M.C. and A.G.; validation, M.R. and V.D.P.; formal analysis, S.M. and R.M.; investigation, M.C., M.R., V.D.P. and A.G.; resources, A.G. and R.M.; data curation, M.C.; writing—original draft preparation, M.C., M.R., V.D.P. and A.G.; writing—review and editing, S.M. and R.M.; supervision, S.M. and R.M.; project administration, S.M. and R.M. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

The data are available on request due to privacy or ethical restrictions.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A

Table A1. Errors made by the participants during the experiments.

Type of Error | Error Code | Severity Level
No errors committed during the task | E0 | -
Error in the removal of panel's pop rivets | E1 | Low
Error in the positioning of the tool for pop rivets' removal after usage | E2 | Medium
Error in the removal of a panel's stringer | E3 | Low
Error in the positioning of the panel's stringer on the work table | E4 | Low
Error in the operation of deburring | E5 | High
Error in the positioning of the deburring tool after usage | E6 | Medium
Error in positioning the sealant application tool after usage | E7 | Low
Error in releasing the stringer during the transportation to the panel | E8 | Medium
Error in positioning the stringer on the panel | E9 | Medium
Error during the grasp of elements supporting the stringer | E10 | Low
Error in positioning the elements supporting the stringer | E11 | Low

References

  1. Devol, G. Mechanical Arm. U.S. Patent 2,988,237, 16 June 1961. Available online: https://patents.google.com/patent/US2988237A/en (accessed on 6 June 2023).
  2. Humlum, A. Robot Adoption and Labor Market Dynamics; Rockwool Foundation Research Unit: Berlin, Germany, 2022. [Google Scholar]
  3. Bartneck, C.; Belpaeme, T.; Eyssel, F.; Kanda, T.; Keijsers, M.; Šabanović, S. Human-Robot Interaction: An Introduction; Cambridge University Press: Cambridge, UK, 2020. [Google Scholar]
  4. Salem, M.; Dautenhahn, K. Evaluating trust and safety in HRI: Practical issues and ethical challenges. In Emerging Policy and Ethics of Human-Robot Interaction; University of Hertfordshire: London, UK, 2015. [Google Scholar]
  5. De Santis, A.; Siciliano, B.; De Luca, A.; Bicchi, A. An atlas of physical human–robot interaction. Mech. Mach. Theory 2008, 43, 253–270. [Google Scholar] [CrossRef] [Green Version]
  6. Zacharaki, A.; Kostavelis, I.; Gasteratos, A.; Dokas, I. Safety bounds in human robot interaction: A survey. Saf. Sci. 2020, 127, 104667. [Google Scholar] [CrossRef]
  7. Sharkawy, A.N.; Koustoumpardis, P.N. Human–robot interaction: A review and analysis on variable admittance control, safety, and perspectives. Machines 2022, 10, 591. [Google Scholar] [CrossRef]
  8. Caterino, M.; Chiacchio, P.; Cristalli, C.; Fera, M.; Lettera, G.; Natale, C.; Nisi, M. Robotized assembly and inspection of composite fuselage panels: The LABOR project approach. In IOP Conference Series: Materials Science and Engineering; IOP Publishing: Bristol, UK, 2021; Volume 1024, p. 012019. [Google Scholar]
  9. ISO/TS 15066; Robots and Robotic Devices: Collaborative Robots. International Organization for Standardization: Geneva, Switzerland, 2016.
  10. Lee, J.D.; See, K.A. Trust in automation: Designing for appropriate reliance. Hum. Factors 2004, 46, 50–80. [Google Scholar] [CrossRef] [Green Version]
  11. Rossi, A.; Dautenhahn, K.; Koay, K.L.; Walters, M.L. How the timing and magnitude of robot errors influence peoples’ trust of robots in an emergency scenario. In Proceedings of the Social Robotics: 9th International Conference, ICSR 2017, Tsukuba, Japan, 22–24 November 2017; pp. 42–52. [Google Scholar]
  12. Rossi, A.; Dautenhahn, K.; Koay, K.L.; Walters, M.L.; Holthaus, P. Evaluating people’s perceptions of trust in a robot in a repeated interactions study. In Proceedings of the Social Robotics: 12th International Conference, ICSR 2020, Golden, CO, USA, 14–18 November 2020; pp. 453–465. [Google Scholar]
  13. Charalambous, G.; Fletcher, S.; Webb, P. The development of a scale to evaluate trust in industrial human-robot collaboration. Int. J. Soc. Robot. 2016, 8, 193–209. [Google Scholar] [CrossRef]
  14. Koppenborg, M.; Nickel, P.; Naber, B.; Lungfiel, A.; Huelke, M. Effects of movement speed and predictability in human–robot collaboration. Hum. Factors Ergon. Manuf. Serv. Ind. 2017, 27, 197–209. [Google Scholar] [CrossRef]
  15. Kadir, B.A.; Broberg, O. Human-centered Design of Work Systems in the Transition to Industry 4.0. Appl. Ergon. 2021, 92, 103334. [Google Scholar] [CrossRef]
  16. Caterino, M.; Rinaldi, M.; Fera, M. Digital ergonomics: An evaluation framework for the ergonomic risk assessment of heterogeneous workers. Int. J. Comput. Integr. Manuf. 2022, 1–21. [Google Scholar] [CrossRef]
  17. Cárdenas-Robledo, L.A.; Hernández-Uribe, Ó.; Reta, C.; Cantoral-Ceballos, J.A. Extended reality applications in industry 4.0.-A systematic literature review. Telemat. Inform. 2022, 73, 101863. [Google Scholar] [CrossRef]
  18. Or, C.K.; Duffy, V.G.; Cheung, C.C. Perception of safe robot idle time in virtual reality and real industrial environments. Int. J. Ind. Ergon. 2009, 39, 807–812. [Google Scholar] [CrossRef]
  19. Ottogalli, K.; Rosquete, D.; Rojo, J.; Amundarain, A.; María Rodríguez, J.; Borro, D. Virtual reality simulation of human-robot coexistence for an aircraft final assembly line: Process evaluation and ergonomics assessment. Int. J. Comput. Integr. Manuf. 2021, 34, 975–995. [Google Scholar] [CrossRef]
  20. Zio, E. Reliability engineering: Old problems and new challenges. Reliab. Eng. Syst. Saf. 2009, 94, 125–141. [Google Scholar] [CrossRef] [Green Version]
  21. Swain, A.D.; Guttmann, H.E. Handbook of Human-Reliability Analysis with Emphasis on Nuclear Power Plant Applications; Final report (No. NUREG/CR-1278; SAND-80-0200); Sandia National Labs: Albuquerque, NM, USA, 1983.
  22. Di Pasquale, V.; Miranda, S.; Iannone, R.; Riemma, S. A simulator for human error probability analysis (SHERPA). Reliab. Eng. Syst. Saf. 2015, 139, 17–32. [Google Scholar] [CrossRef]
  23. Hou, L.X.; Liu, R.; Liu, H.C.; Jiang, S. Two decades on human reliability analysis: A bibliometric analysis and literature review. Ann. Nucl. Energy 2021, 151, 107969. [Google Scholar] [CrossRef]
  24. Kern, C.; Refflinghaus, R. Cross-disciplinary method for predicting and reducing human error probabilities in manual assembly operations. Total Qual. Manag. Bus. Excell. 2013, 24, 847–858. [Google Scholar] [CrossRef]
  25. Kern, C.; Refflinghaus, R. Assembly-specific database for predicting human reliability in assembly operations. Total Qual. Manag. Bus. Excell. 2015, 26, 1056–1070. [Google Scholar] [CrossRef]
  26. Caputo, A.C.; Pelagagge, P.M.; Salini, P. Modelling human errors and quality issues in kitting processes for assembly lines feeding. Comput. Ind. Eng. 2017, 111, 492–506. [Google Scholar] [CrossRef]
  27. Di Pasquale, V.; Miranda, S.; Iannone, R.; Riemma, S. An HRA-based simulation model for the optimization of the rest breaks configurations in human-intensive working activities. IFAC-PapersOnLine 2015, 48, 332–337. [Google Scholar] [CrossRef]
  28. Torres, Y.; Nadeau, S.; Landau, K. Classification and quantification of human error in manufacturing: A case study in complex manual assembly. Appl. Sci. 2021, 11, 749. [Google Scholar] [CrossRef]
  29. Wang, Y.B.; Zhang, T.; Xue, Q. The design and realization of HRA service system for the engine assembly. In Applied Mechanics and Materials; Trans Tech Publications Ltd.: Wollerau, Switzerland, 2014; Volume 635, pp. 1900–1905. [Google Scholar]
  30. Pan, X.; Lin, Y.; He, C. A review of cognitive models in human reliability analysis. Qual. Reliab. Eng. Int. 2017, 33, 1299–1316. [Google Scholar] [CrossRef]
  31. Mosleh, A.; Chang, Y.H. Model-based human reliability analysis: Prospects and requirements. Reliab. Eng. Syst. Saf. 2004, 83, 241–253. [Google Scholar] [CrossRef]
  32. Sauppé, A.; Mutlu, B. The social impact of a robot co-worker in industrial settings. In Proceedings of the 33rd Annual ACM Conference on Human Factors in Computing Systems, Seoul, Republic of Korea, 18–23 April 2015; pp. 3613–3622. [Google Scholar]
  33. Laudante, E.; Greco, A.; Caterino, M.; Fera, M. Human–robot interaction for improving fuselage assembly tasks: A case study. Appl. Sci. 2020, 10, 5757. [Google Scholar] [CrossRef]
  34. Lu, L.; Xie, Z.; Wang, H.; Li, L.; Xu, X. Mental stress and safety awareness during human-robot collaboration-Review. Appl. Ergon. 2022, 105, 103832. [Google Scholar] [CrossRef] [PubMed]
  35. Stanton, N.A.; Chambers, P.R.; Piggott, J. Situational awareness and safety. Saf. Sci. 2001, 39, 189–204. [Google Scholar] [CrossRef] [Green Version]
  36. Gasparetto, A.; Zanotto, V. A technique for time-jerk optimal planning of robot trajectories. Robot. Comput-Integr. Manuf. 2008, 24, 415–426. [Google Scholar] [CrossRef]
  37. Di Pasquale, V.; De Simone, V.; Giubileo, V.; Miranda, S. A taxonomy of factors influencing worker’s performance in human–robot collaboration. IET Collab. Intell. Manuf. 2023, 5, e12069. [Google Scholar] [CrossRef]
  38. Alves, C.; Cardoso, A.; Colim, A.; Bicho, E.; Braga, A.C.; Cunha, J.; Faria, C.; Rocha, L.A. Human–robot interaction in industrial settings: Perception of multiple participants at a crossroad intersection scenario with different courtesy cues. Robotics 2022, 11, 59. [Google Scholar] [CrossRef]
  39. Bethel, C.L.; Salomon, K.; Murphy, R.R.; Burke, J.L. Survey of psychophysiology measurements applied to human-robot interaction. In Proceedings of the RO-MAN 2007-The 16th IEEE International Symposium on Robot and Human Interactive Communication, Jeju, Republic of Korea, 26–29 August 2007; pp. 732–737. [Google Scholar]
  40. Huber, M.; Rickert, M.; Knoll, A.; Brandt, T.; Glasauer, S. Human-robot interaction in handing-over tasks. In Proceedings of the RO-MAN 2008-The 17th IEEE International Symposium on Robot and Human Interactive Communication, Munich, Germany, 1–3 August 2008; pp. 107–112. [Google Scholar]
  41. Zhang, B.; Amirian, J.; Eberle, H.; Pettré, J.; Holloway, C.; Carlson, T. From HRI to CRI: Crowd Robot Interaction—Understanding the Effect of Robots on Crowd Motion: Empirical Study of Pedestrian Dynamics with a Wheelchair and a Pepper Robot. Int. J. Soc. Robot. 2021, 14, 631–643. [Google Scholar] [CrossRef]
  42. Chen, Z.; Jiang, C.; Guo, Y. Pedestrian-robot interaction experiments in an exit corridor. In Proceedings of the 2018 15th International Conference on Ubiquitous Robots (UR), Honolulu, HI, USA, 26–30 June 2018; pp. 29–34. [Google Scholar]
  43. Xie, Y.; Bodala, I.P.; Ong, D.C.; Hsu, D.; Soh, H. Robot capability and intention in trust-based decisions across tasks. In Proceedings of the 2019 14th ACM/IEEE International Conference on Human-Robot Interaction (HRI), Daegu, Republic of Korea, 11–14 March 2019; pp. 39–47. [Google Scholar]
  44. Vassallo, C.; Olivier, A.H.; Souères, P.; Crétual, A.; Stasse, O.; Pettré, J. How do walkers avoid a mobile robot crossing their way? Gait Posture 2017, 51, 97–103. [Google Scholar] [CrossRef] [Green Version]
  45. De Simone, V.; Di Pasquale, V.; Giubileo, V.; Miranda, S. Human-Robot Collaboration: An analysis of worker’s performance. Procedia Comput. Sci. 2022, 200, 1540–1549. [Google Scholar] [CrossRef]
  46. Hashemi-Petroodi, S.E.; Thevenin, S.; Kovalev, S.; Dolgui, A. Operations management issues in design and control of hybrid human-robot collaborative manufacturing systems: A survey. Annu. Rev. Control 2020, 49, 264–276. [Google Scholar] [CrossRef]
  47. Morioka, M.; Sakakibara, S. A new cell production assembly system with human–robot cooperation. CIRP Ann. 2010, 59, 9–12. [Google Scholar] [CrossRef]
  48. Weems, C.F.; Costa, N.M.; Watts, S.E.; Taylor, L.K.; Cannon, M.F. Cognitive errors, anxiety sensitivity, and anxiety control beliefs: Their unique and specific associations with childhood anxiety symptoms. Behav. Modif. 2007, 31, 174–201. [Google Scholar] [CrossRef] [PubMed]
  49. Barker, L.M.; Nussbaum, M.A. The effects of fatigue on performance in simulated nursing work. Ergonomics 2011, 54, 815–829. [Google Scholar] [CrossRef]
  50. Rahimi, M.; Karwowski, W. Human perception of robot safe speed and idle time. Behav. Inf. Technol. 1990, 9, 381–389. [Google Scholar] [CrossRef]
  51. Dehais, F.; Sisbot, E.A.; Alami, R.; Causse, M. Physiological and subjective evaluation of a human–robot object hand-over task. Appl. Ergon. 2011, 42, 785–791. [Google Scholar] [CrossRef] [Green Version]
  52. Taherdoost, H. What Is the Best Response Scale for Survey and Questionnaire Design; Review of Different Lengths of Rating Scale/Attitude Scale/Likert Scale. Int. J. Acad. Res. Manag. 2019, 8, 1–10. [Google Scholar]
  53. de Barros, R.S.M.; Hidalgo, J.I.G.; de Lima Cabral, D.R. Wilcoxon rank sum test drift detector. Neurocomputing 2018, 275, 1954–1963. [Google Scholar] [CrossRef]
Figure 1. Representation of the task in the VR environment.
Figure 2. Representation of the virtual and real environments.
Figure 3. Point of view shot of the robot in the simulated environment.
Figure 4. Frequencies of the errors for the groups.
Figure 5. Histograms of the errors of the groups.
Figure 6. Boxplots of the two groups.
Figure 7. Boxplots of the males (a) and females (b) errors for the two groups.
Table 1. Characteristics of the two groups.

Characteristic | HRI_coex | No_HRI
Number of male participants | 23 | 22
Number of female participants | 16 | 17
Mean age of participants (years) | 24.5 | 25.1
Standard deviation of age (years) | 3.49 | 3.49
Table 2. Number of operations by error severity for the two groups in each of the four experiments.

 | HRI_coex | No_HRI
First experiment
Number of operations without errors | 191 | 186
Number of operations with low-severity errors | 19 | 28
Number of operations with medium-severity errors | 19 | 19
Number of operations with high-severity errors | 11 | 7
Second experiment
Number of operations without errors | 201 | 211
Number of operations with low-severity errors | 15 | 12
Number of operations with medium-severity errors | 11 | 7
Number of operations with high-severity errors | 13 | 10
Third experiment
Number of operations without errors | 210 | 210
Number of operations with low-severity errors | 17 | 13
Number of operations with medium-severity errors | 5 | 8
Number of operations with high-severity errors | 8 | 9
Fourth experiment
Number of operations without errors | 217 | 219
Number of operations with low-severity errors | 4 | 8
Number of operations with medium-severity errors | 8 | 5
Number of operations with high-severity errors | 11 | 8
Table 3. Results of the Wilcoxon rank sum test.

 | Wilcoxon Statistic | p-Value | α-Value
Experiment 1 | 729.5 | 0.482 | 0.05
Experiment 2 | 866.5 | 0.497 | 0.05
Experiment 3 | 852 | 0.913 | 0.05
Experiment 4 | 783 | 0.858 | 0.05
Experiment_mean | 765.5 | 0.741 | 0.05
Table 4. Results of the Wilcoxon rank sum test for males and females.

 | Males: Wilcoxon Statistic | Males: p-Value | Females: Wilcoxon Statistic | Females: p-Value | α-Value
Experiment 1 | 198.5 | 0.309 | 162.5 | 0.947 | 0.05
Experiment 2 | 247.0 | 0.871 | 194.5 | 0.236 | 0.05
Experiment 3 | 238.5 | 0.980 | 140 | 0.482 | 0.05
Experiment 4 | 196 | 0.261 | 181 | 0.433 | 0.05
Experiment_mean | 198.5 | 0.327 | 173 | 0.688 | 0.05
Table 5. Results of the Wilcoxon rank sum test for evaluating differences between the first and fourth experiments.

 | Wilcoxon Statistic | p-Value | α-Value
HRI_coex | 1108.5 | 0.0016 | 0.05
No_HRI | 1151.0 | 0.0003 | 0.05
Table 6. Questions submitted to the participants after the experiments.

#1 Express the complexity level of the operation "pop rivets removal".
#2 Express the complexity level of the operation "stringer removal".
#3 Express the complexity level of the operation "deburring execution".
#4 Express the complexity level of the operation "sealant application".
#5 Express the complexity level of the operation "stringer positioning".
#6 Express the complexity level of the operation "positioning the elements supporting the stringer".
#7 Express how much you agree with the following sentence: "the presence of the robot leads to making a high number of errors".
Table 7. Results of the assessment on the complexity of the task and the perception of robot influence on the errors.

Question No. | Score Range | Mean (HRI_coex) | Mean (No_HRI) | Standard Deviation (HRI_coex) | Standard Deviation (No_HRI)
#1 | 0 (not complex) to 10 (highly complex) | 0.76 | 0.74 | 2.21 | 1.96
#2 | 0 (not complex) to 10 (highly complex) | 2.57 | 2.71 | 2.16 | 2.37
#3 | 0 (not complex) to 10 (highly complex) | 1.28 | 1.23 | 2.23 | 2.33
#4 | 0 (not complex) to 10 (highly complex) | 0.73 | 0.95 | 2.13 | 2.30
#5 | 0 (not complex) to 10 (highly complex) | 1.97 | 1.95 | 2.34 | 2.53
#6 | 0 (not complex) to 10 (highly complex) | 2.07 | 1.38 | 2.63 | 2.26
#7 | 0 (strongly disagree) to 10 (strongly agree) | 1.90 | NA | 2.11 | NA
