Article

A Model of Adaptive Error Management Practices Addressing the Higher-Order Factors of the Dirty Dozen Error Classification—Implications for Organizational Resilience in Sociotechnical Systems

by Nicki Marquardt 1,*, Ricarda Gades-Büttrich 2, Tammy Brandenberg 1 and Verena Schürmann 1

1 Faculty of Communication and Environment, Rhine-Waal University of Applied Sciences, 47475 Kamp-Lintfort, Germany
2 Hochschule Fresenius, 10117 Hamburg, Germany
* Author to whom correspondence should be addressed.
Safety 2024, 10(3), 64; https://doi.org/10.3390/safety10030064
Submission received: 20 March 2024 / Revised: 27 June 2024 / Accepted: 15 July 2024 / Published: 17 July 2024
(This article belongs to the Special Issue Safety and Risk Management in Process Industries)

Abstract:
Within the dynamic, complex, and often safety-critical operations of many process industries, the integration of technology and human elements has given rise to sociotechnical systems (STSs), where the interaction between people and technology plays a pivotal role. To thrive in this complex environment, organizations must adopt adaptive error management strategies and cultivate organizational resilience. This approach involves managing the unexpected and designing systems to embrace disorder through organizational learning from errors in STSs. The main objective of this article was to present empirical data on error-causing elements in STSs based on the Dirty Dozen concept, their underlying structure, and implications for error causation screening and adaptive error management systems. A sample of 544 workers employed in seven process industries, such as automotive, chemicals, defense, metal, and timber, participated in this study. The results revealed a three-factor model of human error causation in STSs. Based on these results, an adaptive error management system (AEMS), which includes evidence-based interventions to manage causes of human errors and mitigate their risky consequences, is presented. Finally, implications for organizational resilience and safety culture in STSs are discussed.

1. Introduction

In an ever-evolving landscape of complex industrial systems and processes, the ability to manage the unexpected and navigate through uncertainty is paramount for organizational resilience, success, sustainability, and safety [1]. Specifically, during this volatile, uncertain, complex, and ambiguous era (referred to as VUCA), incidents stemming from geopolitical shifts, pandemic outbreaks, scientific and technological innovations, and other uncontrollable factors have become increasingly frequent [2]. These occurrences have significantly impacted global enterprises, akin to the upheaval, disruption, and setbacks witnessed during the sudden onset of the COVID-19 pandemic in 2020 [3]. The process industry in particular, including sectors such as chemicals and manufacturing, has suffered from breakdowns in supply chains and has consequently revealed a lack of organizational resilience. Even the most robust companies, boasting considerable economic prowess, have found themselves inevitably grappling with substantial losses [2]. Within the dynamic, complex, and often dangerous operations of many organizations, the integration of technological and human factors has emphasized the need to understand organizations as sociotechnical systems (STSs) [4]. Specifically, sociotechnical systems theory posits that organizations consist of two interconnected systems: a social system and a technical system [5,6,7]. These systems interact closely, often manifesting in human–machine interactions, such as between operators and displays [8]. Changes in one system invariably impact the other. An STS, grounded in fundamental principles, particularly emphasizes adaptation [9]. Adaptation entails aligning technical elements, like software, hardware, and environmental factors, with human system components [10]. For instance, maladaptive designs in feedback systems, such as excessive false alarms, can lead to issues like the out-of-the-loop performance problem (OOTLPP) [11], which makes it hard for operators to receive relevant cues and to comprehend what is going on in their task environment. Hence, the human being is the hub of the STS, and therefore, the social system is the most valuable and adaptable part of the overall system [9,10]. While humans exhibit performance variations and limitations, they possess the unique ability to swiftly adapt to changing circumstances through double-loop and triple-loop learning [12]. This means people can learn from errors by questioning their underlying assumptions (double-loop learning), as well as by reviewing the learning process itself (triple-loop learning) [12,13]. In contrast, technical components, like robotic systems and artificial intelligence, lack the capacity for such adaptive learning processes, as they cannot question or modify their underlying programming [12,14]. Therefore, sociotechnical systems intertwine human and technological components, creating intricate networks where errors and unexpected events can have far-reaching consequences [1]. Organizational resilience refers to the capacity of an organization to anticipate, respond to, adapt to, and recover from unexpected events and disruptions effectively [15]. In the context of the STS, resilience encompasses the ability to withstand shocks and disturbances while maintaining functionality and achieving desired outcomes [1].
Resilient organizations recognize that unexpected events are inevitable and view them as opportunities for learning and improvement rather than as insurmountable obstacles [16]. The aviation industry in particular, as a leader in error prevention and organizational resilience, has introduced different approaches to address the causes of errors and near misses [17]. For instance, many airlines use the so-called Dirty Dozen concept for the identification and classification of error causes, as well as for countermeasures to ensure organizational resilience [18,19,20].
Expanding upon these findings, this article directs its attention toward the following explorative research questions (RQs): RQ1: Can the aviation-based Dirty Dozen concept be used as a general multilevel model of adaptive error management practices in STSs? RQ2: Can the aviation-based Dirty Dozen concept be used to conduct error causation screening in different process industries by exploring its higher-order structure? RQ3: What are suitable adaptive error management interventions that may fit in a multilevel model of adaptive error management practices? It should be noted, however, that these explorative research questions are treated in a hypothetical rather than a definitive manner. Indeed, this article and the included empirical study do not directly test the quantitative effects of adaptive error management and organizational learning on organizational resilience. Nor does the study compare aviation-based and process-industry-based data on the Dirty Dozen concept. In addition, it does not test the effectiveness of specific adaptive error management interventions. Rather than testing definitive research questions and specific hypotheses, it aims to explore and generate conceptual and methodological ideas that can guide further error management approaches, organizational resilience measures, safety research, and quantitative testing in different safety-critical sectors.
To address these inquiries, the article is structured into three principal sections. The initial theoretical segment elucidates the interconnectedness of adaptive error management, organizational learning from errors, and organizational resilience. Subsequently, an error causation concept based on the Dirty Dozen approach is delineated, paving the way for the development of a research methodology tailored to empirically assess the core factors of error causation in STSs. Finally, the scientific and practical implications of the Dirty Dozen-based classification system and multilevel model for diagnosing and managing the causes of errors are presented. The article closes with a short conclusion, in which the explorative RQ1–RQ3 are answered and the main findings are summarized.

1.1. Adaptive Error Management

In complex and dynamic environments, errors are inevitable. However, the way organizations respond to errors can significantly impact their performance [1,21,22,23], safety [24,25,26,27,28,29,30,31,32,33,34,35,36,37], and resilience [1,38,39,40,41,42,43,44,45,46,47,48]. Adaptive error management (AEM) is a framework designed to not only address errors but also learn and evolve from them [49]. This involves a nuanced understanding of the differences between error management and error prevention, as well as a structured approach based on triple-loop learning models. Error management and error prevention represent two distinct paradigms in dealing with mistakes within an organization [50,51]. On the one hand, error prevention focuses on minimizing the occurrence of errors through measures such as standardization, training, and technology. On the other hand, error management acknowledges the inevitability of errors and emphasizes the need to effectively respond to and learn from them [50]. AEM, as a concept, aligns more closely with error management, as it not only addresses errors but also aims to adapt and improve through the learning process.
At the heart of AEM is the concept of triple-loop learning [49]. Unlike single-loop learning and double-loop learning, which focus on correcting action errors (single-loop learning, e.g., correcting the consequences of lapses) and thinking errors (double-loop learning, e.g., adjusting to mistakes in planning), triple-loop learning delves deeper into questioning the organizational learning process itself (e.g., monitoring, reviewing, and adapting single-loop and double-loop learning) [12]. This process involves reflecting on the assumptions of organizational learning and fostering a more profound and sustainable change within the organizational culture [13].
Based on an organizational triple-loop learning view, AEM unfolds through three interconnected phases [49], each contributing to a comprehensive, safe, and resilient system (see Figure 1):
  • Pre-Operational Phase: This phase occurs before members of an organization perform a task (e.g., maintaining an engine). It involves activities such as anticipating potential errors and vulnerabilities before they occur. Thus, it emphasizes the preparedness for the unexpected by equipping individuals and teams with the skills and knowledge needed to navigate potential challenges of errors and accidents.
  • Operational Phase: This is the phase where actions are executed. It involves activities such as error recognition (e.g., real-time detection of errors through monitoring systems and human observation) and mitigation of the consequences of errors. Thus, it emphasizes decision-making flexibility by empowering individuals to adapt to evolving situations (e.g., by effective communication channels to address errors promptly and collaboratively).
  • Post-Operational Phase: This final phase focuses on learning and reflection. It involves activities such as conducting after-action reviews or analyzing the root causes of errors and identifying opportunities for continuous improvement. Thus, it emphasizes the analysis and planning of changes in processes, structures, and the organizational culture based on the lessons learned from errors to become more resilient over time.
Figure 1. Adaptive error management including three organizational learning loops (based on [12,49]).
In the triple-loop learning view [12], comparing underlying assumptions with real outcomes fosters different modes of organizational learning. Initially, it prompts single-loop learning, involving adjusting actions during the operational phase (see in Figure 1 the short loop linking the post-operational phase with the operational phase). Moving beyond, a deeper reflection at higher levels of consciousness can support double-loop learning, which involves re-evaluating assumptions and processes before operations (see in Figure 1 the medium loop linking the post-operational phase with the pre-operational phase). Additionally, it can facilitate triple-loop learning, which involves scrutinizing the broader learning context at individual, team, or organizational levels (see in Figure 1 the long loop linking the post-operational phase with the level of intervention, e.g., individual mental context, team’s social context, and organizational context). These three feedback loops embedded in the model ensure ongoing learning and underscore the adaptability of the error management system.
Hence, the basic premise of AEM is that the error management system itself is highly adaptable by using triple-loop learning to make adjustments to the error management system as quickly and effectively as possible.
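To make this escalation logic concrete, the following minimal Python sketch models a post-operational review that escalates from single-loop to triple-loop learning. It is a conceptual illustration only; the function and its boolean inputs are hypothetical simplifications, not part of the published model.

```python
def post_operational_review(outcome_matches_goal: bool,
                            actions_flawed: bool,
                            assumptions_flawed: bool,
                            learning_context: str = "team") -> str:
    """Escalate through the three AEM learning loops (conceptual sketch)."""
    if outcome_matches_goal:
        return "no learning loop triggered"
    if actions_flawed:
        # Single-loop: correct actions within the operational phase.
        return "single-loop: adjust actions in the operational phase"
    if assumptions_flawed:
        # Double-loop: revise assumptions set in the pre-operational phase.
        return "double-loop: revise pre-operational assumptions and plans"
    # Triple-loop: question the learning context itself (individual,
    # team, or organizational level).
    return f"triple-loop: review the {learning_context}-level learning context"


# Example: the outcome missed the goal, the actions were sound, but the
# underlying assumptions were wrong -> double-loop learning.
print(post_operational_review(False, False, True))
```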

1.2. Organizational Learning from Errors and Resilience

Organizational learning is a cornerstone of resilience, enabling organizations to adapt and evolve in response to changing circumstances [1,15,16]. Consequently, as depicted in RQ1 (see introduction), learning from errors is a must to ensure organizational resilience and safety [12,42,44,49]. Learning from errors involves the identification and analysis of their causes [12]. Moreover, it is highly recommended that evidence-based approaches and lessons learned from high-reliability organizations (HROs) be used to find the best way for error detection and classification [52]. Despite the inherent complexities and hazards, HROs prioritize safety and demonstrate a steadfast commitment to mitigating risks. They have the distinctive attribute of being able to operate in hazardous, complex, and uncertain environments nearly error free [30] and, at the same time, achieve high safety and production performance [29,32]. HROs enhance organizational resilience [23] by facilitating the detection of, categorization of, and response to uncertainties and unexpected events [39,40] to ensure operational continuity during a crisis [41] and to bounce back after it. Williams et al. [42] conceptualized organizational resilience as the process through which an organization uses its capabilities to positively adapt and maintain functioning amid internal and external dynamics, both before and after adversity. Resilient organizations, such as HROs, are proactively ready and prepared [43]; show the ability to rebound and bounce back [44] by absorbing shocks or changes [45]; and can survive, recover, and flourish in the face of a crisis [46]. Accordingly, organizational learning from errors is critical in resilient systems, as it permits organizations to adapt and change their structures in response to internal and external forces [41].
Among different HROs and high-risk industries, aviation is one of the leading industries in error management [18] and has spent decades investing in the understanding of error causation. One concept for classifying the causes of errors, which has proven transferable from aviation to non-aviation industries, is the Dirty Dozen concept [19]. The Dirty Dozen concept, developed by Gordon Dupont in 1997 [53], explains most of the causes of error in the aviation industry. In contrast to theoretically based error models, such as the Swiss Cheese model [27], the Dirty Dozen concept was generated from a variety of incident-based error analyses in aviation maintenance [17,53]. Accordingly, the concept encapsulates various factors contributing to human errors, incidents, and accidents: complacency, a lack of knowledge, social norms, a lack of assertiveness, a lack of teamwork, a lack of communication, fatigue, stress, a lack of awareness, pressure, distraction, and a lack of resources. Table 1 shows the 12 original categories of the Dirty Dozen concept, with a short description of each category [54]. Many airlines have used this concept successfully to identify and analyze the causes of human error and to conceptualize training programs for maintenance personnel focusing on human factors in error prevention [17,20].
The main assumption of this study is, therefore, that these 12 causes of error can be transferred to other industries involving complex sociotechnical systems. Specifically, work processes in sociotechnical task environments within process industries, such as manufacturing, share several similarities with aviation maintenance tasks. The process industry encompasses sociotechnical systems wherein raw materials undergo a transformation into finished goods [12]. Moreover, sociotechnical systems, such as automotive manufacturing, entail multifaceted interactions among numerous individuals possessing a diverse array of knowledge and skills. Tasks in production processes frequently entail intricate assembly operations with interconnected complexities. Within this multifaceted task environment, the integration of human, technological, and organizational factors forms sociotechnical interdependencies [55]. Consequently, tasks within the process industry bear striking resemblances to those encountered in aviation, particularly within aviation maintenance. Furthermore, the 12 causes of error, as described in the Dirty Dozen concept, have been successfully transferred even to medical sociotechnical work systems, such as operating rooms [19,56]. Moreover, we expect a latent structure behind these 12 categories because some studies have shown higher-order categories [12,54,56] that can subsume several categories and enable a more clearly arranged classification system. In addition, such an error–cause classification system could be used for error analysis and management to finally ensure organizational resilience.

2. Method

2.1. Sample

The study used a sample of 544 participants from seven process industries. Sample selection was based on the principle of purposive sampling using the following inclusion criteria: (a) participants are employed in the process industry, (b) their task environment represents an STS, (c) their tasks involve the potential for error occurrences, and (d) the potential error consequences represent a threat to organizational resilience. The criteria for selecting participants were operationalized and checked by subject matter experts (SMEs) (e.g., human factors specialists, error researchers, quality and safety managers) via in-depth document analysis of industry-specific accident and error reports, as well as direct workplace observations. For instance, criterion b (participants’ task environment represents an STS) was operationalized via the following indicators: the main tasks involve human–machine interactions, such that the operator is connected to the work system by the mediation of software (e.g., digital flow diagrams) or hardware (e.g., displays and controls for error detection). Criteria c (tasks involve the potential for error occurrences) and d (potential error consequences represent a threat to organizational resilience) were assessed by SMEs based on prior error rates and resilience-oriented root cause analyses (RCAs) derived from each company’s internal error reports. Participants were distributed across the process industries as follows: chemical industry (n = 52), metal industry (n = 82), timber industry (n = 76), automotive manufacturing (n = 62), automotive supplier (n = 105), army vehicle manufacturing (n = 90), and army vehicle maintenance (n = 78). All participants were shift workers performing manufacturing tasks in complex sociotechnical production or maintenance systems. Due to the companies’ requirement that the survey be anonymous, personal data, such as age, sex, and qualification, could not be collected. All respondents were assured that their answers would remain anonymous and confidential and that none of their managers were involved in the study.

2.2. Materials

The survey was conducted using the Human Error Questionnaire (HEQ), developed by Marquardt and Hoeger [54]. It consists of 60 items (example items are depicted in Table 1) and draws its conceptual foundation from the Dirty Dozen concept introduced by Dupont [53]. Originally identified as the 12 most prevalent causes of human error in aviation maintenance, the Dirty Dozen errors have become integral to incident and accident analyses, as well as maintenance resource management (MRM) training programs adopted by many airlines [20]. These categories, with their broad applicability, have facilitated the seamless transition of the concept from aviation to diverse industry contexts. Notably, similar error categories manifest across non-aviation sectors, such as medicine [19,56], underscoring the universality of these human error categories. Given the generic nature of the Dirty Dozen categories, the questionnaire requires no significant readjustment for application in different industries. Previous surveys using the HEQ across various domains have demonstrated sufficient reliability and construct validity [12,54,55]. Each of the Dirty Dozen categories constitutes one scale and is represented by five items, with respondents rating their disagreement or agreement on a 5-point Likert scale (1 = strongly disagree to 5 = strongly agree). The reliability of the 12 HEQ scales, assessed as internal consistency via Cronbach’s α, ranged in this sample from nearly satisfactory to good (α = .58 to α = .81; see Table 2) and is in line with former studies [55]. Hence, the scale development of the 12 Dirty Dozen categories was based on a rational-theoretical approach and the internal consistency approach [57]. On the one hand, based on a thorough literature review, items were created for each Dirty Dozen category and evaluated for their appropriateness by subject matter experts (SMEs) (e.g., human factors specialists, error researchers, quality and safety managers) to ensure content validity. On the other hand, statistical values, such as Cronbach’s α, were used to assess the psychometric goodness of each scale.
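As a methodological aside, the internal consistency statistic reported above can be reproduced in a few lines of Python. The sketch below computes Cronbach’s α for one hypothetical five-item HEQ scale; the simulated Likert responses are invented for illustration and are not the study’s data.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, n_items) response matrix."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)       # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)   # variance of the scale sums
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Invented 5-point Likert responses for one five-item scale (n = 544):
# a shared latent rating plus small item-specific deviations.
rng = np.random.default_rng(42)
latent = rng.integers(1, 6, size=(544, 1))
scale = np.clip(latent + rng.integers(-1, 2, size=(544, 5)), 1, 5)
print(f"alpha = {cronbach_alpha(scale):.2f}")
```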

2.3. Design and Procedure

Data collection occurred during the operational shifts of the companies’ three-shift rotation. The HEQ was administered at the end of each shift in a quiet room within the factory, with groups of 15 workers in attendance. All materials were presented in German. Workers completed the HEQ, which typically took around 10 min. Participants were selected at the behest of department managers in collaboration with the company’s work council. This directive was conveyed to all employees during their weekly team gatherings. Consequently, they were aware that the work council had granted authorization for an anonymous survey. Moreover, all employees were assured that participation was optional and that data would only be gathered with the informed consent of each participant.

3. Results

3.1. Descriptive Statistics of Error Causation Categories

As can be seen in Table 2, the mean values of the 12 HEQ scales were at a medium level. Higher values indicate a larger impact of the respective Dirty Dozen categories on human error causation. Thus, workers rated a lack of teamwork (M = 2.72), social norms (M = 2.70), a lack of assertiveness (M = 2.69), a lack of resources (M = 2.57), and pressure (M = 2.53) as the five most relevant causal factors for human error.

3.2. Factor Analysis of Error Causation Categories

To uncover the underlying structure of the Dirty Dozen error categories (HEQ scales), factor analysis, specifically principal component analysis (PCA), was conducted on all 12 categories. Since the use of PCA is a matter of debate and criticism [58], its appropriateness must be justified. According to Lloret-Segura et al. [58] (p. 1152), “If the aim is to identify the number and composition of components needed to summarize the scores obtained on a large set of observed variables, then applying PCA is appropriate.” Indeed, PCA “explains the maximum percentage of variance observed in each item from a lower number of components summarizing this information” [58] (p. 1152), which was applicable in this study. Consequently, using PCA in this study seemed reasonable. The analysis identified three distinct error-causing factors, each with eigenvalues exceeding 1, collectively explaining 62.73% of the variance. After rotating the factor solution and accepting only factor loadings of 0.5 or higher, the following factors emerged: individual factors, social interaction, and organizational context. As shown in Table 3, the individual factors component exhibited strong factor loadings for categories such as a lack of assertiveness (0.79), a lack of awareness (0.59), and a lack of knowledge (0.54). However, the Dirty Dozen category of lack of knowledge demonstrated similar factor loadings across two factors. To resolve this ambiguity, the Fürntratt criterion [59] was used to distinctly associate this HEQ scale with one of the three factors. Thus, the first factor represented elements contributing to the inadequate cognitive state of individuals during task performance. The second factor, social interaction, encompassed categories like a lack of communication (0.76), a lack of teamwork (0.72), social norms (0.70), complacency (0.70), and a lack of resources (0.66), which can notably impair team effectiveness. The third factor, organizational context, encapsulated challenging work conditions, such as stress (0.77), fatigue (0.75), distraction (0.75), and pressure (0.69), all of which detrimentally impact human performance and, ultimately, organizational resilience.
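A condensed Python sketch of this analysis pipeline follows: components are extracted from the scale correlation matrix, retained by the eigenvalue > 1 rule, rotated, screened with the 0.5 loading cutoff, and assigned via a Fürntratt-style rule (a squared loading must account for at least half of a scale’s communality). The data here are random placeholders, and varimax is assumed as the rotation method, since the article does not name one.

```python
import numpy as np

def varimax(L: np.ndarray, max_iter: int = 100, tol: float = 1e-6) -> np.ndarray:
    """Varimax rotation of a (p x k) loading matrix (standard algorithm)."""
    p, k = L.shape
    R = np.eye(k)
    total = 0.0
    for _ in range(max_iter):
        Lr = L @ R
        u, s, vt = np.linalg.svd(L.T @ (Lr ** 3 - Lr * (Lr ** 2).sum(0) / p))
        R = u @ vt
        if s.sum() < total * (1 + tol):
            break
        total = s.sum()
    return L @ R

# Placeholder scores for the 12 HEQ scales (the real study used n = 544).
rng = np.random.default_rng(7)
X = rng.normal(size=(544, 12))

corr = np.corrcoef(X, rowvar=False)              # 12 x 12 correlation matrix
eigvals, eigvecs = np.linalg.eigh(corr)
order = np.argsort(eigvals)[::-1]                # sort components descending
eigvals, eigvecs = eigvals[order], eigvecs[:, order]

keep = eigvals > 1.0                             # Kaiser criterion
print(f"components kept: {keep.sum()}, "
      f"variance explained: {eigvals[keep].sum() / 12:.1%}")

loadings = eigvecs[:, keep] * np.sqrt(eigvals[keep])
rotated = varimax(loadings)

# Fürntratt-style assignment: a scale joins a factor only if its squared
# loading covers at least 50% of the scale's communality and the loading
# itself reaches the 0.5 cutoff; otherwise the assignment stays ambiguous (-1).
communality = (rotated ** 2).sum(axis=1)
best = np.abs(rotated).argmax(axis=1)
share = (rotated ** 2).max(axis=1) / communality
assigned = np.where((share >= 0.5) & (np.abs(rotated).max(axis=1) >= 0.5),
                    best, -1)
print("factor assignment per scale:", assigned)
```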

4. Discussion

4.1. Discussion of Results

The findings of this study revealed higher-order factors of the Dirty Dozen categories within various non-aviation sectors. Unlike previous research that concentrated solely on one industry [12,19,56], our study aimed to extend this error causation model to seven different industries. In line with previous studies [12,54], our analysis also revealed that the 12 error causation categories could be consolidated into three primary higher-order factors through principal component analysis (PCA): individual factors, social interaction, and organizational context.
Individual factors encompass the Dirty Dozen scales of lack of awareness, lack of knowledge, and lack of assertiveness, which contribute to inadequate mental states among workers during task execution [20]. While a lack of awareness and a lack of knowledge align well with this factor, a lack of assertiveness appears at first to fit better under the factor of social interaction. One possible explanation for this result is that participants interpreted the respective items as indicators of individual shortcomings rather than dysfunctional social interactions when considering ways of handling errors at work. In addition to this methodological issue, another more content-validity-related reason may be that a lack of assertiveness tends to be an individual-level problem because it involves an individual’s ability or willingness to speak up; take initiative; or assert opinions, concerns, or ideas within a group or organization, as has been shown by Lorr and More [60]. Specifically, their research on assertiveness revealed factor-analytically confirmed dimensions, such as directiveness, social assertiveness, defense of one’s interests, and independence [60]. Hence, a lack of assertiveness is primarily an issue related to an individual’s personality traits, communication skills, and self-confidence [61]. Moreover, Lazarus [62] identified habits possessed by low-assertive personalities, such as the inability to say “no,” the inability to openly talk about one’s own needs and feelings, and the inability to lead a conversation effectively. Thus, even though a lack of assertiveness is a category that lies close to the edge of the social interaction factor, it is primarily an individual-level, personality-related construct.
The social interaction factor comprises a lack of teamwork, a lack of communication, adherence to social norms, a lack of resources, and complacency. While a lack of teamwork, a lack of communication, and adherence to social norms clearly involve social dynamics, categorizing a lack of resources and complacency under the same factor initially seems perplexing. However, a lack of resources (e.g., a lack of personnel) can be seen as a social issue due to its impact on cooperation within organizations. A lack of resources, such as personnel, support, information, or leadership, can indeed affect both teams and the broader organization [17,20,63]. However, it often manifests as a team-level problem when analyzing human error in an organization, due to its immediate effects on team dynamics, decision making, and task execution. Teams rely on adequate resources to perform their tasks effectively. A lack of personnel can lead to overburdened team members and a higher likelihood of errors. Similarly, a shortage of information or support can hinder a team’s ability to make informed decisions, solve problems, or respond to challenges effectively, thereby increasing the risk of errors [64]. In addition, resource deficiencies can strain communication and collaboration within teams. For example, if there is a lack of support from higher management, team members may feel unsure about their roles and responsibilities. This can lead to misunderstandings, conflicts, or a breakdown in teamwork, all of which can contribute to human error [1,30]. Moreover, teams often need to adapt to changing circumstances or unexpected challenges. However, as has been shown in the context of HROs and team resilience, a lack of resources can impede their ability to be flexible and resilient in the face of adversity [29,30,31,32,33,34,63]. While resource shortages can certainly be considered organizational problems as well, they often have more immediate and pronounced effects at the team level, directly influencing how teams function, communicate, and perform their tasks, as has been proposed by Stoverink et al. [63]. Therefore, when analyzing human error in an organization, it is essential to consider the impact of resource deficiencies on the dynamics and performance of individual teams. Similarly, complacency can lead staff to accept risks, resulting in inattentive monitoring of systems and inadequate execution of tasks, which can strain teamwork as colleagues may need to compensate for incomplete work. At the same time, complacency also seems to have individual-level elements (e.g., carelessness, overconfidence, recklessness), which might raise the question of why complacency is a social interaction category, whereas a lack of assertiveness is an individual-level factor. Since there is no standard definition of complacency [65], some researchers emphasize individual cognitive and attitudinal characteristics. For instance, the National Aeronautics and Space Administration Aviation Safety Reporting System defines complacency as “self-satisfaction that may result in no vigilance based on an unjustified assumption of satisfactory system state” [66]. In contrast to this individual-focused approach, Hyten and Ludwig [67] (p. 240) advocate a behavioral, process-oriented, and multilevel perspective, as they define complacency as a “pattern in which formerly safe behaviors begin varying in form, eventually including deviations that elevate the risk of process incidents and/or put frontline workers at elevated risk of injury. It is a term that describes a particular kind of behavioral trend that can occur within the task-related repertoire of frontline workers as well as within the decision-making repertoire of management.” Thus, in spite of comprising some personality-related elements, complacency is more of a team-level problem because it involves a collective mindset within a team where members become overly comfortable or satisfied with the status quo. Indeed, in a recent study of complacency in maritime STSs, complacency was directly related to team-level issues, such as a lack of collaboration and poor communication [68]. Complacency within a team can develop due to factors such as past successes breeding a sense of invulnerability, a lack of clear goals or challenges to overcome, or ineffective leadership that fails to promote a culture of continuous improvement and accountability. In summary, while both a lack of assertiveness and complacency can contribute to human error in an organization, they operate at different levels. A lack of assertiveness is more of an individual-level issue related to the inability to take initiative and make decisions, whereas complacency is a broader, team-level issue involving risky attitudes, behaviors, and group dynamics.
The organizational context factor combines stress, fatigue, distraction, and pressure, all of which detrimentally influence human performance and organizational resilience [17,20]. All these factors are well represented within this category, as they describe error-provoking work conditions and external pressures that affect performance beyond individual and social dynamics. Nevertheless, the organizational context factor, encompassing the categories stress, fatigue, distraction, and pressure, might look inappropriate at first glance, since these error categories are often viewed as individual problems at work. However, the respective items actually indicate deeper organizational context issues. For instance, the items representing the stress scale refer to systemic problems within the organization, such as excessively high job demands and workload [64]. The fatigue scale contains items covering organizational context issues, such as long working hours, insufficient breaks, or monotonous tasks. In many cases, it is not simply a matter of individuals not getting enough rest outside of work but, rather, a reflection of organizational policies and practices that prioritize productivity over employee well-being [17,20]. For instance, Reason and Hobbs [69] (p. 96) stated that “the problem with errors is not the psychological processes, but the workplaces that exist within complex systems. Identifying these error traps and recognizing their characteristics [e.g., organizational workplace characteristics such as stress and fatigue] are essential preliminaries to effective error management”. Moreover, distraction can stem from both internal and external sources, but organizational factors play a significant role. The distraction scale items encompass issues such as excessive noise, poorly designed workspaces, and frequent interruptions. Addressing distraction requires a holistic approach that tackles both environmental and systemic factors within the organization [20]. Finally, the items belonging to the pressure scale address organizational context issues, such as urgency and time pressure, financial difficulties, and unrealistic goal setting. In summary, stress, fatigue, distraction, and pressure are not just individual factors; they are often symptoms of dysfunctional organizational work conditions.

4.2. Strengths, Weaknesses, and Future Research

This study aimed to uncover a higher-order structure of the Dirty Dozen categories that could serve as the building blocks of a multilevel model of adaptive error management. In addition, as stated in RQ2 (see introduction), we intended to explore this higher-order structure to support error causation screening applicable to various industries, drawing on the Dirty Dozen concept [19] to identify and analyze human error sources outside of aviation. However, this study did not explicitly test the applicability of the Dirty Dozen concept and its related questionnaire to non-aviation industries. Instead, it tested the psychometric goodness of the Dirty Dozen scales and the structure of higher-order factors of error causation in a diverse sample of process industry sectors. Hence, future research should test the questionnaire and its underlying factor structure within the aviation industry to justify the claim of adaptability and to enable comparisons between aviation and process industry results. Nevertheless, with a large sample size of 544 workers across seven industries, the study ensures high external validity. However, the focus on German companies might limit the potential for generalization across other countries and cultures. Therefore, future studies should survey other industries in different countries. In addition to validating the model in other industries and countries, including industry profile comparisons, future studies should also test the factor structure by using confirmatory factor analysis (CFA) with even larger sample sizes to ensure sufficient external and internal validity. Yet, the error causation screening framework offers either a global analysis based on PCA or a detailed examination across the 12 scales, aiding industries in identifying everyday incidents and reducing accident costs. While the study provides indicative findings, further research is necessary to improve the reliability of the Human Error Questionnaire, particularly for the complacency and lack-of-knowledge scales. Additionally, future studies should delve deeper into the PCA results, as certain scales, like lack of assertiveness, may fit better within different factors. Further validation of the Dirty Dozen model and its associated questionnaire is needed, including establishing cutoff scores and ensuring the adequacy of item representation within each category. The continued theoretical development of the Dirty Dozen concept remains a challenge for the future, since the grouping of questionnaire items purely based on PCA can appear arbitrary. In fact, the interpretation of empirical responses from a diverse set of participants across various industries can be difficult because responses depend on specific settings, industry profiles, organizational states, and individual backgrounds. Accordingly, it might be reasonable to perform further statistical analyses to examine the intra- and inter-team, organization, and industry variances in each Dirty Dozen category and its underlying factors, as well as their impact on specific outcome variables, such as error rates. However, due to the companies’ data protection requirements requested by their work councils, it was not possible within this study to collect individual outcome variables, such as error rates, accident involvements, and near misses, or to align data with individual teams or departments.
Consequently, as we did not measure specific error rates or other quantifiable outcomes that could serve as a criterion variable, and as we could not separate different teams, it was not feasible to empirically determine the direct influence of individual, team, or organizational factors on error rates. Nevertheless, future studies could examine the intra- and inter-level variances and their contribution to the prediction of incidents and accidents. In fact, a multilevel analysis of a detailed hierarchical structure (e.g., specific team-, organization-, and industry-level data) would provide valuable insights in future research.

5. Practical Implications

5.1. Error Causation Screening

There are several practical implications that can be derived from this study. As stated in RQ2 (see Section 1), one field of application is the diagnosis of potential error-producing conditions and risk areas via error causation screening. Based on error causation profiles such as those shown in Figure 2, industry comparisons at the level of overarching error causation factors (e.g., differences in the characteristics of the organizational context factor) can help to identify global error potentials, and differences in the individual Dirty Dozen categories (e.g., differences in the pressure scale) can help to identify specific risk areas. Thus, Figure 2 illustrates, by way of example, variances and parallels in the profile analysis of two distinct sectors: military vehicle manufacturing and automotive manufacturing. However, the dissimilarities in organizational contextual elements and the resemblances in individual factors may not solely arise from the disparities or similarities between the two companies. Instead, the congruencies in individual factors might mirror the shared essence of vehicle production, while the disparities in organizational context factors could signify the contrast between military and commercial realms. Hence, naming these factors to denote different tiers might not be suitable. Accordingly, Figure 2 is merely an illustrative example. When using such error causation profiles in practice, additional data regarding the respective industry, organizational context, team structure, job characteristics, and demographic attributes of participants are needed to elucidate how diverse individual, team, and organizational factors have caused human error. Thus, implications for management can vary significantly across different settings and real-world applications, which often involve diverse combinations of the Dirty Dozen categories, leading to unique error management strategies. For example, the organizational context categories pressure, stress, and fatigue differ greatly between the automotive manufacturer and the army vehicle manufacturer (see Figure 2). Since this error causation questionnaire is just a screening tool that allows a first overview of error causation profiles, additional in-depth analyses are needed. Specifically, in the case of the automotive manufacturer, work-setting-oriented principles could be applied, since pressure, stress, and fatigue were grouped as organizational context categories. Work-setting-oriented criteria, such as work schedules, the pace of production processes, workplace ergonomics, and job characteristics, could be changed to reduce the likelihood of errors. Moreover, such work-setting-oriented criteria could, in turn, be decomposed into more detail-oriented sub-categories. For instance, the workplace ergonomics category might include measures such as noise reduction, improved lighting, moderate temperature, and improved cognitive ergonomics of displays. In addition, the job characteristics criterion might relate to interventions that increase task autonomy, task identity, and skill variety by introducing task design approaches such as job rotation, decentralization of authority, and self-directed teamwork. Based on previous evidence, such interventions should be suitable for reducing error causation categories such as stress, pressure, and fatigue [27,69].
For example, in the case of the automotive manufacturer, the following countermeasures could be applied: improved work schedules, including frequent rest breaks, to reduce fatigue; ongoing job rotation and noise reduction to reduce stress; and increased task autonomy by the decentralization of tasks and self-directed teamwork to buffer the impact of pressure. Hence, error causation screening is just the first step that allows one to derive specific hypotheses and perform further in-depth analyses, which can lead to case-specific interventions.
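In practice, an error causation profile of the kind shown in Figure 2 can be drawn directly from the 12 HEQ scale means. The Python sketch below plots two hypothetical industry profiles side by side; the numbers are invented placeholders, not the values underlying Figure 2.

```python
import matplotlib.pyplot as plt

scales = ["Complacency", "Lack of knowledge", "Social norms",
          "Lack of assertiveness", "Lack of teamwork",
          "Lack of communication", "Fatigue", "Stress",
          "Lack of awareness", "Pressure", "Distraction",
          "Lack of resources"]

# Invented illustrative scale means (1-5 Likert), not the study's data.
automotive = [2.4, 2.1, 2.7, 2.6, 2.8, 2.5, 2.9, 3.0, 2.2, 3.1, 2.6, 2.5]
army_vehicle = [2.5, 2.2, 2.6, 2.7, 2.7, 2.4, 2.3, 2.4, 2.1, 2.4, 2.5, 2.6]

x = range(len(scales))
plt.figure(figsize=(9, 4))
plt.plot(x, automotive, marker="o", label="Automotive manufacturing")
plt.plot(x, army_vehicle, marker="s", label="Army vehicle manufacturing")
plt.xticks(x, scales, rotation=45, ha="right")
plt.ylabel("Mean HEQ scale score (1 = low, 5 = high)")
plt.ylim(1, 5)
plt.legend()
plt.tight_layout()
plt.show()
```

In such a rendering, divergence concentrated in the organizational context scales (e.g., pressure, stress, fatigue) would flag a global error potential, while a single divergent scale would point to a specific risk area, consistent with the screening logic described above.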
Another way to display the interrelation between error causation categories (e.g., interaction between pressure, a lack of knowledge, and a lack of teamwork) via this Dirty Dozen concept is the three-dimensional model, which can be seen in Figure 3.
Depending on the target group, problem areas can be presented globally and in aggregate (e.g., for top management) or concretely and specifically (e.g., for occupational safety or quality management staff). This type of error diagnosis can help to quickly sound out intervention measures. The global consideration of the three error causation factors already enables an accentuation of approaches centered on the individual (e.g., ergonomic display design for individual risky workplaces), group-related measures (e.g., team training for error prevention), or organizational design strategies (e.g., loose coupling of production processes). Similar to Figure 2, Figure 3 is just another exemplary illustration. Given the multitude of unique interactions possible in the 3D model, Figure 3 does not sufficiently elucidate the differences or spatial relationships among these interactions. Consequently, in practice, prior to showcasing interactions in such a manner, their nature and detailed characteristics (e.g., time pressure interacting with team conflicts and a lack of goal clarity, which contributed to an incident) must be presented.

5.2. Adaptive Error Management System

Another field of application of this study, as depicted in RQ1 and RQ3 (see Section 1), is the design of a general multilevel adaptive error management system (RQ1) and a compilation of suitable error management interventions (RQ3). An adaptive error management system (AEMS) integrates effective error prevention tools and organizational learning concepts from safety-critical industries and high-performance domains, like aviation, medicine, firefighting, and the military [49]. An AEMS operates through a three-phased intervention program aimed at organizational learning from errors across the three levels uncovered by PCA in this study: individual (individual factors), team (social interaction), and organizational (organizational context) [49]. As can be seen in Figure 4, each intervention level encompasses three temporal phases: pre-operational, operational, and post-operational (see Section 1.1). The pre-operational phase precedes task execution and establishes mental and organizational task contexts, significantly influencing workforce assumptions and goals regarding unexpected events. The operational phase involves task execution and is susceptible to error types such as slips and lapses. The post-operational phase follows task completion, during which individuals reflect on their task performance. According to triple-loop learning models [13,49], reflection during the post-operational phase promotes single-loop learning by adjusting actions within the operational phase. Higher levels of consciousness facilitate double-loop learning (rethinking assumptions and processes within the pre-operational phase) and triple-loop learning (reviewing the mental learning context on the individual, team, or organizational level).
The upcoming sections will delve into the evidence-based tools and principles embedded within an AEMS. Each of the three phases, along with their corresponding error management concepts and tools across the three levels, will be briefly outlined.
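As a compact preview of Sections 5.3–5.5, the 3 × 3 structure of the AEMS can be written down as a simple lookup table in Python; the entries paraphrase the interventions and Dirty Dozen categories discussed below, and the helper function is purely illustrative.

```python
# Levels x phases of the AEMS, populated with the interventions and the
# Dirty Dozen categories they address (see Sections 5.3-5.5).
AEMS = {
    "individual": {
        "pre-operational":  "briefing (lack of knowledge)",
        "operational":      "cue salience (lack of awareness)",
        "post-operational": "debriefing / AAR (lack of assertiveness)",
    },
    "team": {
        "pre-operational":  "CRM training (lack of teamwork, lack of resources)",
        "operational":      "time-out (lack of communication)",
        "post-operational": "root cause analysis (social norms, complacency)",
    },
    "organizational": {
        "pre-operational":  "loose coupling (pressure, fatigue)",
        "operational":      "adaptive SOPs (stress)",
        "post-operational": "CIRS (distraction)",
    },
}

def intervention(level: str, phase: str) -> str:
    """Look up the AEMS intervention for a given level and phase."""
    return AEMS[level][phase]

print(intervention("team", "operational"))
# -> time-out (lack of communication)
```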

5.3. Adaptive Error Management System—Individual Level

As can be seen in Figure 4, the individual-level error causation category lack of knowledge plays a major role in the pre-operational phase. In the pre-operational phase of an action, individuals may lack sufficient knowledge about the task, including the task goals, processes, and context at hand, which can significantly impact their ability to effectively carry out the operation. The error management concept of briefing can be used to prepare for the unexpected and to prevent human error before the actual operation is started. Briefings have been shown empirically to enhance human performance and diminish human error [70,71,72,73]. Thus, briefing methodologies represent an adequate remedy for a lack of knowledge within an STS: by providing essential information, guidance, and clarification in the pre-operational phase, they enable individuals to perform tasks more effectively and with greater understanding.
During the operational phase, a lack of awareness is a severe error-producing element for many operators. However, there exists a design solution aimed at changing the technological component: cue salience. Cue salience plays a crucial role in enhancing the information-processing quality and, consequently, the adaptability of STS operators. By making cues (such as visual or auditory signals) prominent and providing individuals with accurate system feedback, salient cues help prevent the out-of-the-loop performance problem (OOTLPP [11]; see Section 1). Therefore, in intricate STS setups, it is imperative to ensure that critical cues for specific system activities be highlighted in the interface design (e.g., flashing lights, vibrant colors, or loud sounds). Research has consistently demonstrated that perceptually salient critical cues enhance decision-making processes [74].
In the post-operational phase of an action, individuals may encounter challenges related to assertiveness, which can significantly impact the outcome and effectiveness of the action. The primary purpose of the post-operational phase is to facilitate learning loops, for example, to reduce the lack of assertiveness in critical situations. Following the execution of actions, it is imperative to compare performance outcomes against intended goals and assumptions. When discrepancies arise, adjustments must be made to actions (single-loop learning), assumptions (double-loop learning), or the mental context (triple-loop learning) [12]. Debriefing, as proposed by Keiser and Arthur [75], Tannenbaum and Cerasoli [76], and Villado and Arthur [77], emerges as one of the most effective strategies for facilitating learning after task completion. Debriefing is the counterpart to briefing, where outcomes, goals, and actions are assessed against the assumptions established during the briefing phase. Various forms of debriefings exist, including the after-action review (AAR). An AAR constitutes a professional discussion focused on performance standards, enabling individuals to address four central questions (i.e., the GOAL protocol) [49]: 1. What was supposed to happen? (Goal review); 2. What actually happened? (Outcome review); 3. Why was there a difference? (Error review); and 4. What can we learn from this? (Lessons learned review) [78]. Debriefings have become prevalent tools for fostering learning in different military contexts and are increasingly adopted in other sectors, such as aviation, medicine, and non-HROs [71,76,77].
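As a small structural illustration, a GOAL-protocol debriefing record could be captured in a data structure like the Python sketch below; the class and the maintenance example it holds are invented for illustration and are not drawn from the cited studies.

```python
from dataclasses import dataclass

@dataclass
class AfterActionReview:
    """One AAR record following the four-question GOAL protocol [49,78]."""
    goal_review: str      # 1. What was supposed to happen?
    outcome_review: str   # 2. What actually happened?
    error_review: str     # 3. Why was there a difference?
    lessons_learned: str  # 4. What can we learn from this?

# Invented maintenance example.
aar = AfterActionReview(
    goal_review="Complete the engine inspection within the shift",
    outcome_review="Inspection overran; one checklist step was skipped",
    error_review="Time pressure and an unclear hand-over between shifts",
    lessons_learned="Introduce a hand-over briefing; protect the inspection slot",
)
print(aar.lessons_learned)
```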

5.4. Adaptive Error Management System—Team Level

As can be seen in Figure 4, the team-level error causation categories lack of teamwork and lack of resources have a huge impact on social interactions in the pre-operational phase. Specifically, a lack of teamwork and resources (e.g., a lack of personnel, a lack of information, and a lack of leadership support) in the pre-operational phase can impede problem-solving abilities, hinder team development, and limit opportunities for the detection of team-based errors. Opportunities for collaboration, access to resources, and support from team leaders are essential for facilitating successful actions and preventing error during this phase. Crew resource management (CRM) training is a promising concept for preparing teams to cope with unexpected events by making “optimum use of all available resources (e.g., equipment, procedures, and people)” of an STS [79] (p. 1). CRM training originated in the aviation sector as a measure to mitigate human error and has since expanded to diverse fields, including medicine, nuclear power, offshore oil and gas, shipping, and automotive [55,80,81,82,83].
In the operational phase of a team, a lack of communication is a latent threat to team performance, since it can produce different forms of human error. In short, a lack of communication in the operational phase can hinder coordination, clarity, problem solving, feedback, and team cohesion. Time-out is an effective strategy to overcome communication errors. Within the operative phase, teams can perform a time-out to ensure a common understanding of the current task (e.g., roles, procedures, anticipated critical events) among team members [12].
Social norms and complacency at the team level are error-causing elements that can be addressed in the post-operational phase by a root cause analysis (RCA). In the post-operational phase of an action, social norms and complacency can significantly impact the ability of individuals and teams to reflect, learn, and improve from their experiences. If there is a prevailing norm of avoiding criticism or admitting mistakes, it can hinder open and honest reflection, preventing individuals from identifying areas for improvement. In addition, complacency occurs when individuals or teams become satisfied with the status quo and resist change or innovation. Hence, RCA serves as a tool for uncovering the underlying causes and contributory factors behind incidents or accidents [84]. For decades, RCA has effectively unearthed latent errors within high-reliability sectors, like aviation and nuclear power [27,84]. The process involves iteratively asking “why” after an incident, enabling teams to dissect the accident’s causal chain.

5.5. Adaptive Error Management System—Organizational Level

Interventions at the organizational level can be described as long-term changes compared to interventions at the individual and team levels. In the pre-operational phase, pressure and fatigue can significantly impact the ability of the workforce to effectively plan and prepare for the task at hand. Pressure to meet deadlines or achieve objectives within a limited time frame can create stress and urgency, making it difficult for individuals to think critically and consider all relevant factors in their planning efforts. In addition, fatigue, whether from long hours of work, a lack of sleep, or high levels of stress, can impair cognitive functioning and reduce productivity. In the pre-operational phase, employees may be required to engage in intense mental exertion for extended periods, leading to decreased attention, memory, and problem-solving abilities. To address error-provoking Dirty Dozen categories, such as pressure and fatigue, loose coupling can be introduced in the pre-operational phase. Loose coupling can be defined as a complex system attribute where interdependent elements at any organizational level (top, middle, or bottom) exhibit varying degrees of interconnection and autonomy [85]. The term “loosely coupled” implies that while these elements are connected and maintain some level of determinacy, they also retain a degree of independence and unpredictability. Empirical studies by Orton and Weick [85] support several key organizational benefits of loose coupling, including enhanced persistence, buffering, adaptability, and effectiveness. By decoupling various aspects of the operation, such as tasks, resources, and timelines, team units can more easily adjust to changing circumstances and mitigate the effects of pressure and fatigue.
In the operational phase of an action, stress can significantly impact performance and outcomes. In such situations, individuals may struggle to maintain focus, to adapt to changing circumstances, and to make effective decisions under pressure. To address stress as an error-producing condition in the operational phase, adaptive standard operating procedures (SOPs), which are crucial for HROs, can be helpful. SOPs entail the adoption of standardized procedures, techniques, and equipment, whenever feasible, ensuring that similar tasks are executed consistently regardless of the personnel involved. However, in emergent or adverse circumstances, the team must exhibit resilience and adaptability, responding appropriately to the situation without rigidly adhering to standard routines [86].
In the post-operational phase, distractions can significantly impact the ability of individuals and teams to reflect, analyze, and learn from their experiences. Distractions divert attention away from the task at hand, making it difficult for individuals to concentrate on evaluating the outcomes of the operation or engaging in critical reflection. To overcome the problems of distraction, the implementation of critical incident reporting systems (CIRSs) in the post-operational phase is highly recommended. Introducing a CIRS is pivotal for fostering learning throughout the organization. Rall and Dieckmann [86] delineated the essential attributes of an effective CIRS, such as a non-punitive environment for anonymous reporting, incentives provided to team members for reporting incidents, and a prompt reaction to reports. CIRSs help ensure that important information is captured and documented, even in the presence of distractions.

6. Conclusions

In conclusion, this study sought tentative answers to three interrelated explorative research questions (RQ1–RQ3; see Section 1). RQ1 asked, "Can the aviation-based Dirty Dozen concept be used as a general multilevel model of adaptive error management practices in STSs?" Previous theoretical concepts and empirical studies have shown that AEM is crucial for an organizational STS operating in complex environments, aiming not only to address errors but also to learn and evolve from them. The derived multilevel model of AEM (see Figure 4) aligns closely with triple-loop learning, which deeply questions organizational learning processes at multiple levels. Organizational learning from errors, particularly through concepts adopted from aviation such as the Dirty Dozen, might facilitate error causation analysis and management across industries, enhancing organizational resilience.
Regarding RQ2, "Can the aviation-based Dirty Dozen concept be used to conduct error causation screening in different process industries by exploring its higher-order structure?", the study uncovered higher-order factors of the Dirty Dozen categories, offering valuable insights into error causation across seven process industries. Through principal component analysis (PCA), the study consolidated the 12 error causation categories into three primary factors: individual factors, social interaction, and organizational context. These factors encapsulate the multifaceted nature of human error sources and provide a framework for understanding and addressing them effectively. The practical implications are significant: the framework gives industries a clearly structured way to screen error causation and identify everyday incidents, thereby reducing accident costs and enhancing safety. By delving into the nuances of each factor, the study underscores the importance of tailored interventions at the individual, team, and organizational levels.
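For readers who wish to retrace the method, the following sketch reproduces the analysis pipeline (PCA of the 12 HEQ scale scores followed by varimax rotation) on synthetic placeholder data; it demonstrates the computation only and will not reproduce the loadings reported in Table 3.

```python
# Sketch of the reported pipeline: PCA on 12 standardized HEQ scales, varimax rotation.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(544, 12))            # placeholder for the 544 x 12 scale scores
Z = (X - X.mean(axis=0)) / X.std(axis=0)  # standardize each scale

eigvals, eigvecs = np.linalg.eigh(np.corrcoef(Z, rowvar=False))
top = np.argsort(eigvals)[::-1][:3]                 # retain three components
loadings = eigvecs[:, top] * np.sqrt(eigvals[top])  # unrotated component loadings

def varimax(L: np.ndarray, gamma: float = 1.0, max_iter: int = 100,
            tol: float = 1e-6) -> np.ndarray:
    """Orthogonal varimax rotation of a p x k loading matrix."""
    p, k = L.shape
    R, d_old = np.eye(k), 0.0
    for _ in range(max_iter):
        LR = L @ R
        u, s, vt = np.linalg.svd(
            L.T @ (LR**3 - (gamma / p) * LR @ np.diag((LR**2).sum(axis=0))))
        R = u @ vt
        if s.sum() < d_old * (1 + tol):   # stop once the criterion plateaus
            break
        d_old = s.sum()
    return L @ R

print(np.round(varimax(loadings), 3))  # same layout as the rotated loadings in Table 3
```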
To answer RQ3, "What are suitable adaptive error management interventions that may fit in a multilevel model of adaptive error management practices?", the proposed adaptive error management system (AEMS) presents a comprehensive approach to error prevention and organizational learning, drawing on principles from safety-critical industries, as revealed in previous empirical studies of HROs in aviation, defense, firefighting, and medicine. Through pre-operational, operational, and post-operational phases, an AEMS integrates effective error management tools and fosters continuous learning loops across multiple levels of the organization. At the individual level, strategies such as briefings, cue salience, and debriefing seem to promote awareness, knowledge, and assertiveness, fostering a culture of learning and adaptation. Team-level interventions, such as crew resource management (CRM) training, time-outs, and root cause analysis (RCA), address issues of teamwork, communication, social norms, and complacency, enhancing team performance and collaboration. Organizational-level interventions, like loose coupling, adaptive standard operating procedures (SOPs), and critical incident reporting systems (CIRSs), might mitigate stress, fatigue, distraction, and pressure, promoting resilience and adaptability within the organization. Overall, this study highlights the importance of understanding and addressing human error sources across various industries, offering practical insights and strategies for enhancing safety and organizational resilience. Further research is warranted on the content validation of the Dirty Dozen concept and its associated Human Error Questionnaire (HEQ), and on the assignment of the Dirty Dozen categories to specific phases within the AEMS, in order to refine error management measures and promote organizational resilience across different operational domains.

Author Contributions

Conceptualization, N.M.; methodology, N.M. and R.G.-B.; software, R.G.-B.; validation, N.M. and R.G.-B.; formal analysis, N.M. and R.G.-B.; investigation, N.M. and R.G.-B.; resources, N.M. and R.G.-B.; data curation, R.G.-B.; writing—original draft preparation, N.M., R.G.-B., T.B., and V.S.; writing—review and editing, T.B. and V.S.; visualization, N.M. and V.S.; supervision, N.M.; project administration, N.M. and R.G.-B.; funding acquisition, none. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding. We acknowledge support from the Open Access Publication Fund of Rhine-Waal University of Applied Sciences.

Institutional Review Board Statement

The study was conducted in accordance with the Declaration of Helsinki and approved by the Institutional Review Board (or Ethics Committee) of the Rhine-Waal University of Applied Sciences (protocol code HSRW-2022-006 and date of approval 21 September 2022) for studies involving humans.

Informed Consent Statement

Informed consent was obtained from all subjects involved in the study.

Data Availability Statement

The data that support the findings of this study are available upon request from the corresponding author. The data are not publicly available due to privacy or ethical restrictions.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Degerman, H.; Wallo, A. Conceptualising learning from resilient performance: A scoping literature review. Appl. Ergon. 2024, 115, 104165. [Google Scholar] [CrossRef] [PubMed]
  2. Liang, F.; Cao, L. Linking employee resilience with organizational resilience: The roles of coping mechanism and managerial resilience. Psychol. Res. Behav. Manag. 2021, 14, 1063–1075. [Google Scholar] [CrossRef] [PubMed]
  3. Zhao, J. Organizational immunity: How to move from fragility to resilience. Tsinghua Business Rev. 2020, 6, 101–107. [Google Scholar]
  4. Carayon, P.; Hancock, P.; Leveson, N.; Noy, I.; Sznelwar, L.; Van Hootegem, G. Advancing a sociotechnical systems approach to workplace safety: Developing the conceptual framework. Ergonomics 2015, 58, 548–564. [Google Scholar] [CrossRef] [PubMed]
  5. Clegg, C.W. Sociotechnical principles for system design. Appl. Ergon. 2000, 31, 463–477. [Google Scholar] [CrossRef]
  6. Emery, F.E.; Trist, E.L. The causal texture of organizational environments. Hum. Relat. 1965, 18, 21–32. [Google Scholar] [CrossRef]
  7. Stanton, N.A.; Harvey, C. Beyond human error taxonomies in assessment of risk in sociotechnical systems: A new paradigm with the EAST ‘broken-links’ approach. Ergonomics 2017, 60, 221–233. [Google Scholar] [CrossRef]
  8. Hollnagel, E. The art of efficient man-machine interaction: Improving the coupling between man and machine. In Expertise and Technology: Cognition and Human-Computer Cooperation; Hoc, J.M., Cacciabue, P.C., Hollnagel, E., Eds.; Lawrence Erlbaum: Hillsdale, NJ, USA, 1995; pp. 229–242. [Google Scholar]
  9. Hawkins, F.H. Human Factors in Flight; Avebury Technical: Aldershot, UK, 2007. [Google Scholar]
  10. Storm, F.A.; Chiappini, M.; Dei, C.; Piazza, C.; André, E.; Reißner, N.; Brdar, I.; Delle Fave, A.; Gebhard, P.; Malosio, M.; et al. Physical and mental well-being of cobot workers: A scoping review using the Software-Hardware-Environment-Liveware-Liveware-Organization model. Hum. Factors Ergon. Manuf. Serv. Ind. 2022, 32, 419–435. [Google Scholar] [CrossRef]
  11. Endsley, M.R.; Kiris, E.O. The out-of-the-loop performance problem and level of control in automation. Hum. Factors 1995, 37, 381–394. [Google Scholar] [CrossRef]
  12. Marquardt, N. Situation awareness, human error, and organizational learning in sociotechnical systems. Hum. Factors Ergon. Manuf. Serv. Ind. 2019, 29, 327–339. [Google Scholar] [CrossRef]
  13. Snell, R.; Chak, A.M.-K. The Learning Organization: Learning and Empowerment for Whom? Manag. Learn. 1998, 29, 337–364. [Google Scholar] [CrossRef]
  14. Solso, R.L. Cognitive Psychology; Allyn & Bacon: Paramus, NJ, USA, 2001. [Google Scholar]
  15. Sutcliffe, K.M.; Vogus, T.J. Organizing for resilience. In Positive Organizational Scholarship: Foundations of a New Discipline; Cameron, K.S., Dutton, J.E., Quinn, R.E., Eds.; Berrett-Koehler Publishers: San Francisco, CA, USA, 2003; pp. 94–110. [Google Scholar]
  16. Lengnick-Hall, C.A.; Beck, T.E. Beyond Bouncing Back: The Concept of Organizational Resilience. In Proceedings of the National Academy of Management Meetings, Seattle, WA, USA, 1–5 August 2003. [Google Scholar]
  17. Safety Regulation Group. An Introduction to Aircraft Maintenance Engineering Human Factors for JAR 66; TSO/Civil Aviation Authority: Norwich, UK, 2002. [Google Scholar]
  18. Wilf-Miron, R.; Lewenhoff, I.; Benyamini, Z.; Aviram, A. From aviation to medicine: Applying concepts of aviation safety to risk management in ambulatory care. Qual. Saf. Health Care 2003, 12, 35–39. [Google Scholar] [CrossRef] [PubMed]
  19. Poller, D.N.; Bongiovanni, M.; Cochand-Priollet, B.; Johnson, S.J.; Perez-Machado, M. A human factor event-based learning assessment tool for assessment of errors and diagnostic accuracy in histopathology and cytopathology. J. Clin. Pathol. 2020, 73, 681–685. [Google Scholar] [CrossRef] [PubMed]
  20. Patankar, M.S.; Taylor, J.C. Applied Human Factors in Aviation Maintenance; Ashgate: Aldershot, UK, 2004. [Google Scholar]
  21. Cantu, J.; Tolk, J.; Fritts, S.; Gharehyakheh, A. High Reliability Organization (HRO) systematic literature review: Discovery of culture as a foundational hallmark. J. Conting. Crisis Manag. 2020, 28, 399–410. [Google Scholar] [CrossRef]
  22. van Stralen, D. Ambiguity. J. Conting. Crisis Manag. 2015, 23, 47–53. [Google Scholar] [CrossRef]
  23. Ogliastri, E.; Zúñiga, R. An introduction to mindfulness and sensemaking by highly reliable organizations in Latin America. J. Bus. Res. 2016, 69, 4429–4434. [Google Scholar] [CrossRef]
  24. Hudson, P. Aviation safety culture. SafeSkies 2001, 1, 23. [Google Scholar]
  25. Marquardt, N.; Gades, R.; Robelski, S. Implicit Social Cognition and Safety Culture. Hum. Factors Ergon. Manuf. Serv. Ind. 2012, 22, 213–234. [Google Scholar] [CrossRef]
  26. Patankar, M.S.; Sabin, E.J. The Safety Culture Perspective. In Human Factors in Aviation; Salas, E., Maurino, D., Eds.; Elsevier: San Diego, CA, USA, 2010; pp. 95–122. [Google Scholar]
  27. Reason, J. Managing the Risk of Organizational Accidents; Ashgate: Aldershot, UK, 1997. [Google Scholar]
  28. Schein, E. Organisational Culture and Leadership; Jossey-Bass: San Francisco, CA, USA, 1992. [Google Scholar]
  29. La Porte, T. High reliability organizations: Unlikely, demanding and at risk. J. Conting. Crisis Manag. 1996, 4, 60–71. [Google Scholar] [CrossRef]
  30. Roberts, K.H. Some characteristics of one type of high reliability organization. Organ. Sci. 1990, 1, 160–176. [Google Scholar] [CrossRef]
  31. Weick, K.E.; Sutcliffe, K.M. Managing the Unexpected: Assuring High Performance in an Age of Complexity; Jossey-Bass: San Francisco, CA, USA, 2001. [Google Scholar]
  32. Weick, K.E. Organizational culture as a source of high reliability. Calif. Manag. Rev. 1987, 29, 112–127. [Google Scholar] [CrossRef]
  33. Vogus, T.J. Mindful organizing: Establishing and extending the foundations of highly reliable performance. In Handbook of Positive Organizational Scholarship; Cameron, K., Spreitzer, G.M., Eds.; Oxford University Press: New York, NY, USA, 2011; pp. 664–676. [Google Scholar]
  34. Hopkins, A. The Problem of Defining High Reliability Organisations. Working Paper No 51. 2007. Available online: https://theisrm.org/documents/Hopkins%20%282007%29The%20Problem%20of%20Defining%20High%20Reliability%20Organisations.pdf (accessed on 14 March 2024).
  35. Enya, A.; Dempsey, S.; Pillay, M. High Reliability Organisation (HRO) Principles of Collective Mindfulness: An Opportunity to Improve Construction Safety Management. In Advances in Safety Management and Human Factors; Arezes, P.M.F.M., Ed.; Springer International Publishing: Cham, Switzerland, 2019; pp. 3–13. [Google Scholar] [CrossRef]
  36. Sutcliffe, K.M. High reliability organizations (HROs). Best. Pract. Res. Clin. Anaesthesiol. 2011, 25, 133–144. [Google Scholar] [CrossRef]
  37. Hales, D.N.; Chakravorty, S.S. Creating high reliability organizations using mindfulness. J. Bus. Res. 2016, 69, 2873–2881. [Google Scholar] [CrossRef]
  38. Roberts, K.H.; Stout, S.K.; Halpern, J.J. Decision dynamics in two high reliability military organizations. Manag. Sci. 1994, 40, 614–624. [Google Scholar] [CrossRef]
  39. Weick, K.E.; Sutcliffe, K.M.; Obstfeld, D. Organizing for high reliability: Processes of collective mindfulness. In Research in Organizational Behavior; Sutton, R.S., Staw, B.M., Eds.; Elsevier Science/JAI Press: Amsterdam, The Netherlands, 1999; pp. 81–123. [Google Scholar]
  40. Turner, N.; Kutsch, E.; Leybourne, S.A. Rethinking project reliability using the ambidexterity and mindfulness perspectives. Int. J. Manag. Proj. Bus. 2016, 9, 845–864. [Google Scholar] [CrossRef]
  41. Somers, S. Measuring resilience potential: An adaptive strategy for organizational crisis planning. J. Conting. Crisis Manag. 2009, 17, 12–23. [Google Scholar] [CrossRef]
  42. Williams, T.A.; Gruber, D.A.; Sutcliffe, K.M.; Shepherd, D.A.; Zhao, E.Y. Organizational response to adversity: Fusing crisis management and resilience research streams. Acad. Manag. Ann. 2017, 11, 733–769. [Google Scholar] [CrossRef]
  43. Giustiniano, L.; Clegg, S.R.; Cunha, M.P.; Rego, A. Elgar Introduction to Theories of Organizational Resilience; Edward Elgar Publishing Limited: Cheltenham, UK, 2018. [Google Scholar]
  44. Wildavsky, A.B. Searching for Safety; Transaction Books: New Brunswick, NJ, USA, 1988. [Google Scholar]
  45. Holling, C.S. Resilience and Stability of Ecological Systems. Annu. Rev. Ecol. Syst. 1973, 4, 1–23. [Google Scholar] [CrossRef]
  46. Bhamra, R.; Dani, S.; Burnard, K.J. Resilience: The concept, a literature review, and future directions. Int. J. Prod. Res. 2011, 49, 5375–5393. [Google Scholar] [CrossRef]
  47. Hillmann, J.; Guenther, E. Organizational resilience: A valuable construct for management research? Int. J. Manag. Rev. 2020, 23, 7–44. [Google Scholar] [CrossRef]
  48. Linnenluecke, M.K. Resilience in business and management research: A review of influential publications and a research agenda. Int. J. Manag. Rev. 2017, 19, 4–30. [Google Scholar] [CrossRef]
  49. Marquardt, N. The effect of locus of control on organizational learning, situation awareness and safety culture. In Safety Culture: Progress, Trends and Challenges; Sacré, M., Ed.; Nova Science Publishers: New York, NY, USA, 2019; pp. 157–218. [Google Scholar]
  50. Frese, M.; Keith, N. Action Errors, Error Management, and Learning in Organizations. Annu. Rev. Psychol. 2014, 66, 661–687. [Google Scholar] [CrossRef]
  51. van Dyck, C.; Frese, M.; Baer, M.; Sonnentag, S. Organizational error management culture and its impact on performance: A two-study replication. J. Appl. Psychol. 2005, 90, 1228–1240. [Google Scholar] [CrossRef]
  52. Wiggins, M.W. Introduction to Human Factors for Organisational Psychologists; CRC Press: Boca Raton, FL, USA, 2022. [Google Scholar]
  53. Dupont, G. The Dirty Dozen Errors in Maintenance. In Proceedings of the 11th Symposium on Human Factors in Aviation Maintenance, San Diego, CA, USA, 12–13 March 1997. [Google Scholar]
  54. Marquardt, N.; Höger, R. The structure of contributing factors of human error in safety-critical industries. In Human Factors Issues in Complex System Performance; de Waard, D., Hockey, G.R.J., Nickel, P., Brookhuis, K.A., Eds.; Shaker Publishing: Maastricht, The Netherlands, 2007; pp. 67–71. [Google Scholar]
  55. Marquardt, N.; Robelski, S.; Jenkins, G. Designing and evaluating a crew resource management training for manufacturing industries. Hum. Factors Ergon. Manuf. Serv. Ind. 2011, 21, 287–304. [Google Scholar] [CrossRef]
  56. Marquardt, N.; Treffenstädt, C.; Gerstmeyer, K.; Gades-Buettrich, R. Mental workload and cognitive performance in operating rooms. Int. J. Psychol. Res. 2015, 10, 209–233. [Google Scholar]
  57. Simms, L.J. Classical and modern methods of psychological scale construction. Soc. Personal. Psychol. Compass 2008, 2, 414–433. [Google Scholar] [CrossRef]
  58. Lloret-Segura, S.; Ferreres-Traver, A.; Hernandez-Baeza, A.; Tomas-Marco, I. Exploratory item factor analysis: A practical guide revised and updated. Anales de Psicología 2014, 30, 1151–1169. [Google Scholar]
  59. Bortz, J.; Weber, R. Statistik: Für Human- und Sozialwissenschaftler; Springer: Berlin/Heidelberg, Germany, 2005. [Google Scholar]
  60. Lorr, M.; More, W.W. Four Dimensions of Assertiveness. Multivar. Behav. Res. 1980, 15, 127–138. [Google Scholar] [CrossRef]
  61. Peneva, I.; Mavrodiev, S. A Historical Approach to Assertiveness. Psychol. Thought 2013, 6, 3–26. [Google Scholar] [CrossRef]
  62. Lazarus, A.A. On assertive behavior: A brief note. Behav. Ther. 1973, 4, 697–699. [Google Scholar] [CrossRef]
  63. Stoverink, A.C.; Kirkman, B.; Mistry, S.; Rosen, B. Bouncing Back together: Towards a theoretical model of work team resilience. Acad. Manag. Rev. 2020, 45, 395–422. [Google Scholar] [CrossRef]
  64. Ceschi, A.; Demerouti, E.; Sartori, R.; Weller, J. Decision-Making Processes in the Workplace: How Exhaustion, Lack of Resources and Job Demands Impair Them and Affect Performance. Front. Psychol. 2017, 8, 313. [Google Scholar] [CrossRef] [PubMed]
  65. Merritt, S.M.; Ako-Brew, A.; Bryant, W.J.; Staley, A.; McKenna, M.; Leone, A.; Shirase, L. Automation-induced complacency potential: Development and validation of a new scale. Front. Psychol. 2019, 10, 225. [Google Scholar] [CrossRef] [PubMed]
  66. Billings, C.; Lauber, J.; Funkhouser, H.; Lyman, E.; Huff, E. NASA Aviation Safety Reporting System; U.S. Government Printing Office: Washington, DC, USA, 1976. [Google Scholar]
  67. Hyten, C.; Ludwig, T.D. Complacency in process safety: A behavior analysis toward prevention strategies. J. Organ. Behav. Manag. 2017, 37, 240–260. [Google Scholar] [CrossRef]
  68. Bielić, T.; Čulin, J.; Poljak, I.; Orović, J. Causes of and Preventive Measures for Complacency as Viewed by Officers in Charge of the Engineering Watch. J. Mar. Sci. Eng. 2020, 8, 517. [Google Scholar] [CrossRef]
  69. Reason, J.T.; Hobbs, A. Managing Maintenance Error; Ashgate: Aldershot, UK, 2003. [Google Scholar]
  70. Christianson, M.K.; Sutcliffe, K.M.; Miller, M.A.; Iwashyna, T.J. Becoming a high reliability organization. Crit. Care 2011, 15, 314. [Google Scholar] [CrossRef] [PubMed]
  71. Haynes, A.B.; Weiser, T.G.; Berry, W.R.; Lipsitz, S.R.; Breizat, A.H.S.; Dellinger, E.P.; Herbosa, T.; Joseph, S.; Kibatala, P.L.; Lapitan, M.C.M.; et al. A surgical safety checklist to reduce morbidity and mortality in a global population. N. Engl. J. Med. 2009, 360, 491–499. [Google Scholar] [CrossRef] [PubMed]
  72. Weick, K.E.; Sutcliffe, K.M. Managing the Unexpected: Assuring High Performance in an Age of Complexity, 2nd ed.; Jossey-Bass: San Francisco, CA, USA, 2007. [Google Scholar]
  73. Weick, K.E. Organizing for transient reliability: The production of dynamic non-events. J. Conting. Crisis Manag. 2011, 19, 21–27. [Google Scholar] [CrossRef]
  74. Endsley, M.R.; Bolte, B.; Jones, D.G. Designing for Situation Awareness—An Approach to User-Centered Design; Taylor & Francis: London, UK, 2003. [Google Scholar]
  75. Keiser, N.L.; Arthur, W. A meta-analysis of the effectiveness of the after-action review (or debrief) and factors that influence its effectiveness. J. Appl. Psychol. 2021, 106, 1007–1032. [Google Scholar] [CrossRef]
  76. Tannenbaum, S.I.; Cerasoli, C.P. Do team and individual debriefs enhance performance? A meta-analysis. Hum. Factors 2013, 55, 231–245. [Google Scholar] [CrossRef]
  77. Villado, A.J.; Arthur, W. The comparative effect of subjective and objective after-action reviews on team performance on a complex task. J. Appl. Psychol. 2013, 98, 514–528. [Google Scholar] [CrossRef] [PubMed]
  78. U.S. Army Combined Arms Center. A Leader’s Guide to After-Action Reviews (Training Circular 25-20); U.S. Army Combined Arms Center: Fort Leavenworth, KS, USA, 1993. [Google Scholar]
  79. Safety Regulation Group. Crew Resource Management (CRM) Training. Guidance for Flight Crew, CRM Instructors (CRMIS) and CRM Instructor Examiners (CRMIES); Civil Aviation Authority: Norwich, UK, 2006. [Google Scholar]
  80. O’Connor, P. Assessing the effectiveness of bridge resource management. Int. J. Aviat. Psychol. 2011, 21, 357–374. [Google Scholar] [CrossRef]
  81. Salas, E.; Shuffler, M.L.; DiazGranados, D. Team dynamics at 35,000 feet. In Human Factors in Aviation; Salas, E., Maurino, D., Eds.; Academic Press: London, UK, 2010; pp. 249–291. [Google Scholar]
  82. Salas, E.; Wilson, K.A.; Burke, C.S.; Wightman, D.C. Does crew resource management training work? An update, an extension, and some critical needs. Hum. Factors 2006, 48, 392–412. [Google Scholar] [CrossRef] [PubMed]
  83. O’Dea, A.; O’Connor, P.; Keogh, I. A meta-analysis of the effectiveness of crew resource management training in acute care domains. Postgrad. Med. J. 2014, 90, 699–708. [Google Scholar] [CrossRef] [PubMed]
  84. Wu, A.W.; Lipshutz, A.M.; Pronovost, P.J. Effectiveness and efficiency of root cause analysis in medicine. JAMA 2008, 299, 685–687. [Google Scholar] [CrossRef] [PubMed]
  85. Orton, J.D.; Weick, K.E. Loosely coupled systems: A reconceptualization. Acad. Manag. Rev. 1990, 15, 203–223. [Google Scholar] [CrossRef]
  86. Rall, M.; Dieckmann, P. Safety culture and crisis resource management in airway management: General principles to enhance patient safety in critical airway situations. Best. Pract. Res. Clin. Anaesthesiol. 2005, 19, 539–557. [Google Scholar] [CrossRef]
Figure 2. Error causation screening (profile comparison of two companies).
Figure 3. Dirty Dozen 3D model (error causation interaction between a lack of knowledge, pressure, and a lack of teamwork).
Figure 4. Adaptive error management system (AEMS) addressing the Dirty Dozen (based on [49]).
Table 1. Dirty Dozen categories [54].

| Dirty Dozen Category | Description | Example Items (HEQ) |
|---|---|---|
| Lack of knowledge | The workers do not know enough about their tasks, duties, or the corporate work process. | I am well aware of the duties of all those involved in my work process. (-) |
| Complacency | Due to overconfidence, the workers are careless in the execution of their tasks. | I sometimes do not check my work, because I am convinced of my own infallibility. |
| Lack of teamwork | The workers do not cooperate well or have conflicts on a personal level. | Conflicts within the team often remain unsolved. |
| Lack of communication | The workers have deficits in terms of sharing information due to insufficient listening and talking. | Sometimes, I do not understand my supervisors. |
| Social norms | Due to informal group rules, the workers do not always follow safety regulations. | I believe safety regulations often disturb an efficient work environment. |
| Lack of resources | The organization is short of manuals, checklists, technical support, other materials, or staff. | Because of missing technical assistance, there are often delays. |
| Pressure | Due to ambitious goals, tasks have to be performed under high time or economic pressures. | Due to financial pressure, even high-risk tasks have to be performed under an increasing time demand. |
| Lack of awareness | The workers do not recognize dangers in critical situations on time. | I recognize dangers in my work environment promptly. (-) |
| Stress | The workers have a high mental or physical workload. | There are too many tasks I have to do. |
| Fatigue | Due to fatigue, the workers have less capacity to concentrate on their tasks. | I am often tired during the day. |
| Distraction | Due to different contextual factors, the workers do not focus their attention on the current task. | Because of the noise level, our efficiency is negatively affected. |
| Lack of assertiveness | The workers do not refuse to compromise safety standards. | I often make constructive suggestions to solve a problem. (-) |

Note. HEQ = Human Error Questionnaire. Reverse-scored items are denoted with (-).
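Items marked (-) are positively worded, so their raw responses must be flipped before scale scores are computed. A minimal sketch, assuming a 5-point Likert response format (the response format is our assumption, not stated in the table):

```python
# Reverse scoring for HEQ items denoted (-) in Table 1 (5-point scale assumed).
def reverse_score(raw: int, scale_min: int = 1, scale_max: int = 5) -> int:
    """Flip a positively worded item onto the error-risk direction: (min + max) - raw."""
    return scale_min + scale_max - raw

print(reverse_score(5))  # strong agreement with "I am well aware ..." -> low risk (1)
```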
Table 2. Reliabilities and descriptive statistics of the Human Error Questionnaire (HEQ).

| Dirty Dozen Scale | Mean | SD | Reliability (Cronbach's α) |
|---|---|---|---|
| Lack of teamwork | 2.72 | 0.73 | .71 |
| Pressure | 2.53 | 0.78 | .74 |
| Lack of resources | 2.57 | 0.77 | .79 |
| Lack of assertiveness | 2.69 | 0.65 | .71 |
| Social norms | 2.70 | 0.66 | .65 |
| Lack of communication | 2.01 | 0.60 | .73 |
| Stress | 2.36 | 0.75 | .73 |
| Distraction | 2.50 | 0.65 | .63 |
| Fatigue | 2.07 | 0.74 | .81 |
| Lack of awareness | 2.05 | 0.47 | .64 |
| Complacency | 1.90 | 0.52 | .58 |
| Lack of knowledge | 1.90 | 0.50 | .59 |

Note. N = 544.
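The reliability column reports Cronbach's α per scale, computable from the item responses as α = k/(k - 1) · (1 - Σ item variances / variance of the total score). A minimal sketch on placeholder data (random responses yield an α near zero; reproducing Table 2's values would require the real item data):

```python
# Cronbach's alpha for one Dirty Dozen scale (rows = respondents, cols = items).
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)      # per-item sample variances
    total_var = items.sum(axis=1).var(ddof=1)  # variance of the sum score
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

rng = np.random.default_rng(1)
scale_items = rng.integers(1, 6, size=(544, 4)).astype(float)  # e.g., 4 items per scale
print(round(cronbach_alpha(scale_items), 2))
```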
Table 3. Rotated (VARIMAX) factor loadings of error causation categories (HEQ scales).

| Human Error Categories | Individual Factors | Social Interaction | Organizational Context |
|---|---|---|---|
| Lack of knowledge | 0.544 | 0.522 | 0.195 |
| Lack of awareness | 0.587 | 0.180 | 0.347 |
| Lack of assertiveness | 0.792 | −0.043 | −0.060 |
| Complacency | 0.171 | 0.679 | 0.050 |
| Stress | −0.077 | 0.357 | 0.775 |
| Fatigue | 0.258 | 0.211 | 0.753 |
| Social norms | −0.058 | 0.699 | 0.409 |
| Lack of teamwork | −0.051 | 0.723 | 0.255 |
| Lack of communication | 0.205 | 0.763 | 0.262 |
| Lack of resources | 0.243 | 0.657 | 0.367 |
| Pressure | −0.083 | 0.450 | 0.694 |
| Distraction | 0.293 | 0.134 | 0.752 |

Note. Columns are the three extracted error-causing factors (components).