Article

Human Reliability Analysis in Acetylene Filling Operations: Risk Assessment and Mitigation Strategies

by Michaela Balazikova * and Zuzana Kotianova
Department of Quality, Safety and Environment, Institute of Special Engineering Processologies, Faculty of Mechanical Engineering, Technical University of Kosice, Letna 1/9, 042 00 Kosice-Sever, Slovakia
* Author to whom correspondence should be addressed.
Appl. Sci. 2025, 15(8), 4558; https://doi.org/10.3390/app15084558
Submission received: 16 March 2025 / Revised: 5 April 2025 / Accepted: 18 April 2025 / Published: 21 April 2025

Abstract

Human reliability is a key factor in long-term sustainability, especially for tasks that are critical to safety. It is also evident that human behavior is often the main or a significant cause of system failures. Identifying human error is challenging, particularly when it comes to determining the exact moment when an error occurred that led to an accident, as errors develop over time. It is therefore essential to understand the causes and mechanisms of human errors. This finding is not new; for over 30 years, it has been recognized that human operations in safety-critical systems are so important that they should be modeled as part of operational risk assessment. This article discusses various HRA methodologies and argues that further research and development are necessary. Selected HRA techniques are demonstrated through a case study on acetylene filling activities. When filling acetylene into pressure vessels or cylinders, it is critically important to analyze the reliability of the human factor, as the process involves handling a highly explosive gas. Irresponsibility, lack of training, or incorrect decision-making can lead to severe accidents. Any deficiencies in this process can not only damage equipment but also endanger the health and lives of people nearby. The case study may also suggest potential improvements to existing guidelines, international standards, and regulations, which often require the consideration of a wider range of ergonomic factors in the risk assessment process.

1. Introduction

In complex human–machine systems, reliability is a key parameter. The human factor often plays a crucial role, particularly in probabilistic safety assessments of the system. Human errors are often the cause of negative events, and the contribution of human activity to system reliability and safety is often significant. In his work, Reason [1] defines human reliability as the ability of an individual to perform tasks without failure over a certain period of time or number of attempts. This reliability depends on various factors, such as the individual’s capabilities, working environment conditions, and the design of the systems with which the person interacts. Human reliability is not easy to determine or predict. This is because it is not an isolated act or a single decision but a combination of various influences, causes, and factors. Over the years, several approaches to human reliability analysis (HRA) have developed. This progress has led researchers to examine information more closely to determine the most appropriate approach for HRA. These methodologies can be divided into three main categories: first-, second-, and third-generation methods [2].
First-generation methods: These methods are based on static models and primarily use simple techniques to assess the probability of human errors. They rest on the assumption that errors can be quantified using fixed factors such as human capabilities and the environment. These techniques are often used to analyze repetitive operations in stable conditions but tend to ignore the dynamic aspects of complex systems.
Second-generation methods: These methods are more sophisticated and take into account the system’s dynamics, as well as the interaction between humans, machines, and the environment. They consider various factors that may affect human performance, such as stress, fatigue, and other psychological or physiological factors. Second-generation methods focus more on analyzing task complexity and the impact of human behavior in real operational conditions.
Third-generation methods: These methods are the most advanced and include sophisticated quantitative models that allow for more detailed and flexible assessment of human reliability. These methodologies often use simulation-based models and involve various tools that allow for modeling interactions between humans, machines, and systems. The third generation emphasizes complex, dynamic, and unpredictable factors that may affect human performance in critical situations.
The authors in this paper critically evaluate each of these generations and emphasize that although the methods have evolved, there are still certain limitations in effectively modeling human reliability in real, complex, and dynamic systems. Despite these reservations, these methods remain useful and many are commonly used for quantitative risk assessment [3]. Second-generation methods, a term introduced by Dougherty [4], aim to address the shortcomings of traditional approaches, specifically the following:
  • Offering guidance on potential and likely decision-making pathways for operators, based on mental process models from cognitive psychology;
  • Expanding the understanding of errors beyond the typical binary classification (omission-commission), acknowledging the significance of “cognitive errors”;
  • Considering the dynamic nature of human–machine interaction, which can serve as a foundation for developing simulators to assess operator performance.
The problem lies in the fact that we often need to design systems with extremely high reliability, often requiring an overall failure rate of less than 1 in 10 million (i.e., 1 in 10⁷). Designing and analyzing such systems requires a deep understanding of human behavior under all possible circumstances that may occur during their operation and management. In recent years, the limitations and shortcomings of second-generation methods have led to the development of new third-generation methodologies [5]. The use of machine learning and big data analytics has become an innovative approach to predicting human errors from historical data. Machine learning allows for real-time modeling and classification of errors and can provide accurate risk predictions based on behavioral patterns and circumstances. This approach is beginning to be used in industries such as aviation, nuclear energy, and the automotive industry, where human behavior is analyzed in real-world conditions [6]. Virtual and augmented reality-based simulations are increasingly used to test workers’ responses in simulated risky situations. Using VR, different scenarios can be modeled to test decision-making processes and interactions between people and technology. This approach not only enables training but also allows for assessing how human behavior can affect the overall reliability of the system [7]. Cognitive Task Analysis (CTA) is a technique that focuses on a detailed understanding of individual decision-making processes while performing tasks. Models such as the Cognitive Reliability and Error Analysis Method (CREAM) provide a framework for assessing human errors in complex systems and are used to identify cognitive errors that may affect system reliability [8]. Bayesian networks are probabilistic models that allow for the analysis of risks and the prediction of human errors in various scenarios.
This method is used to evaluate dynamic interactions between human factors, technology, and the environment, while accounting for uncertainty and variability in human behavior [9]. The latest trend in the concept of “Safety Culture” focuses on evaluating how organizational factors (e.g., communication, culture, and leadership) influence individual behavior in safety matters. These approaches are based on the assumption that human behavior is influenced not only by individual factors but also by the broader organizational context [10].
HRA is a discipline that offers methods and tools for both qualitative and quantitative assessment of human errors in systems where people perform monitoring and/or control functions [11].
Although HRA methods are widely used, especially in the nuclear sector, they were specifically developed for probabilistic risk assessment in the assembly of nuclear weapons in the 1960s [12]. In 1982, Rasmussen [13] presented a taxonomy aimed at assessing differences in the performance of specific tasks. This behavioral model (based on skills, rules and knowledge—SRK) includes the assessment of three categories:
  • Skill-based errors;
  • Knowledge-based errors;
  • Rule-based errors.
Rasmussen’s research verified that errors due to lack of skills accounted for 16.5% of the total failure rate, errors due to rule violations accounted for 51% and errors based on lack of knowledge accounted for 22.5%.
The Human Reliability Assessment (HRA) is based on a detailed description of the functions, roles and resources in the human-environment relationship. The procedure is based on partial analyses [14]:
  • Task Analysis—TA;
  • Human Error Identification—HEI;
  • Quantification of human factor reliability or Human Error Probability—HEP.
HRA was subsequently adapted for risk assessment in nuclear power plants and integrated into the nuclear regulatory framework. It focused on identifying, assessing, and mitigating risks associated with human errors in nuclear power plants [15,16]. Currently, HRA is used in various safety-critical sectors, such as aviation, railway transportation, public transportation, the oil and gas industry, as well as in other sectors like construction and agriculture [17,18]. A study by the UK Health and Safety Executive identified 17 Human Reliability Assessment methods that could potentially be useful for authoritative organizations performing high-risk activities. The study focuses on reviewing and evaluating various HRA methodologies that can be applied to identify and minimize risks in high-safety potential systems, such as those in the nuclear, chemical, and aviation industries [19]. A similar study was conducted by the International Association of Oil & Gas Producers (IOGP). The IOGP 434-5 standard is aimed at supporting safety and reliability procedures in human error analysis across various operational conditions. This standard recommends using specific HRA methodologies to identify, quantify, and mitigate risks associated with human factors in critical operations [20].
Over the years, several HRA methods have been developed for general application [21,22,23]. While each method includes its own qualitative and quantitative techniques, they can generally be broken down into the following steps:
  • Problem definition.
  • Task analysis.
  • Identification of human errors.
  • Representation.
  • Quantification of human error.
  • Impact assessment.
  • Error reduction analysis.
  • Documentation and quality assurance [24].
Not all methods include every step—some provide only qualitative guidance, while others focus primarily on HEP calculation with minimal qualitative input. Furthermore, each HRA method approaches these steps differently, for example by defining a specific unit of analysis (e.g., a single operator or a crew), using a custom set of performance influencing factors (PIFs) or no predefined PIFs at all, and offering its own qualitative approach.

2. Materials and Methods

Modeling the type of error that can occur in a human–machine system is probably the most important aspect of assessing and eliminating the human contribution to the risk of an accident [5]. For modeling the reliability and performance of dynamic systems (solving repetitive tasks), it is preferable to use the systems approach applied in classical reliability theory. For this case, Askren and Regulinski [25,26] defined a reliability function of human performance, Equation (1):
R_h(t) = exp(−∫₀ᵗ e_r(t) dt) [-]  (1)
The variable er(t) appearing in Equation (1) represents the instantaneous value of the human error rate under the given conditions. In practice, a mean value related to an infinite time interval, the number of work operations performed, and the number of individuals is used. Such variable is referred to as HEP and can be simplistically expressed as the ratio of the number of observed errors NoEi (Number of Erroneously performed tasks of type i) and the number of opportunities (occasions) to make errors NoTi (Number of all Tasks of type i), Equation (2).
HEP = NoE_i / NoT_i [-]  (2)
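As a purely illustrative sketch (all numbers are hypothetical, not from the case study), the HEP ratio and the reliability function for the special case of a time-constant error rate can be computed as follows:

```python
import math

# Illustrative sketch only: HEP as the ratio NoE_i / NoT_i and the
# reliability function R_h(t) = exp(-integral of e_r(t) dt) for a
# time-constant error rate. All values are hypothetical.

def hep(num_errors: int, num_tasks: int) -> float:
    """Human error probability: observed errors / opportunities for error."""
    if num_tasks <= 0:
        raise ValueError("number of tasks must be positive")
    return num_errors / num_tasks

def reliability_constant_rate(error_rate: float, t: float) -> float:
    """R_h(t) when e_r(t) is constant: the integral reduces to error_rate * t."""
    return math.exp(-error_rate * t)

# e.g. 3 erroneous fillings observed in 1000 filling operations
p = hep(3, 1000)
print(p)                                   # 0.003
print(reliability_constant_rate(p, 10.0))  # ≈ 0.97
```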
To determine the HEP, it is necessary to precisely define the cases and states of human activity in the analyzed system (i.e., to determine deviations from the prescribed scheme), which means starting from the knowledge of specific errors that a human can make when performing a given task. For operator actions performed independently, Equation (3) applies; for actions performed as dependent, Equation (4) applies.
P_A = ∏_{i=1}^{n} P(A_i) [-]  (3)
P_A = P(A_1) · ∏_{i=2}^{n} P(A_i | A_1, …, A_{i−1}) [-]  (4)
The resulting probability of human failure is then determined as the logical product of these actions, which represents the particular work step being analyzed.
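As a sketch with invented probabilities, both combination rules reduce to a simple product; for independent actions the marginal probabilities are multiplied, while for dependent actions conditional probabilities are multiplied instead:

```python
import math

# Sketch with invented values: combining per-action failure probabilities.
# Independent actions: multiply the marginal probabilities.
# Dependent actions: multiply conditional probabilities instead.

def combined_probability(probs):
    """Product of the supplied probabilities (the logical product of actions)."""
    return math.prod(probs)

# three independent sub-steps with hypothetical HEPs
p_indep = combined_probability([0.01, 0.005, 0.002])

# dependent case: P(A1), P(A2 | A1), P(A3 | A1, A2) -- hypothetical values
p_dep = combined_probability([0.01, 0.4, 0.9])

print(p_indep)  # ≈ 1e-07
print(p_dep)    # ≈ 0.0036
```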
The classification of the individual error groups is as follows [13]:
  • Activity errors A;
  • Control errors C;
  • Information retrieval errors R;
  • Tracking/checking errors T;
  • Selection errors S;
  • Planning errors P.
Subsequently, for each activity (or partial step of a given activity), plausible error types can be selected from the HEP database, and from these the specifically relevant errors, i.e., errors that can be expected to occur in the human–machine/equipment relationship. For each potential error, the possible consequences must be estimated, i.e., the probability of error occurrence (HEP), or the HEP must be corrected to reflect the actual level of operational safety.
The HEP database was developed by gathering data from various expert sources that provide either generic or statistical information from different types of process industries. Compared to the statistically “average” probability of a specific error occurring in a given operation, which is the mean value obtained from multiple expert sources, the actual probability of an error might be either higher or lower. For each potential error, its possible consequences are evaluated. The consequences are expressed as the probability of occurrence (HEP) of the given error:
  • Low (L)—The occurrence of the error is practically not expected at the current safety level.
  • Medium (M)—The error has been detected in the past, but the current safety level limits its recurrence.
  • High (H)—The error has occurred multiple times (by different team members) or repeatedly by the same employee, and its occurrence must be expected at the current system safety level.
Figure 1 illustrates the human reliability assessment process as outlined in the IEEE 1082 Guide for Incorporating Human Reliability Analysis into Probabilistic Risk Assessments for Nuclear Power Generating Stations and Other Nuclear Facilities [27,28].

3. Results

Based on the above examples, a practical case study was conducted on the acetylene bottling process using three different reliability methods.

3.1. Human Reliability Assessment for Activities with the Potential for Human Failure

When assessing a person’s reliability, it is first necessary to describe, step by step, the activities they perform. The following table, Table 1, describes the responsibilities and co-responsibilities for each activity in acetylene filling. The table also defines the technical barriers for each activity that increase the reliability of the human agent in performing each step of the bottling process.
In the following table, Table 2, the HEP—probabilities of human error for each activity—are assigned. When defining the HEP values for the responsible person, the value from type A—activity error—was set; for the secondary control (co-responsible person), the values from type C—control error—were set.
Based on the reliability assessment performed for acetylene bottling, it can be concluded that step 3 (bottle reacetonation) and step 16 (bottle securing) represent the riskiest activities in terms of human reliability.

3.2. TESEO Method (Technique to Estimate Operator’s Errors)

TESEO is an empirical method for estimating operator errors, proposed by G. C. Bello and C. Colombari in 1980. It is a screening method and the simplest of all methods for detecting human factor failure, requiring the least material and capacity resources.
In the TESEO method [29,30] (Tecnica Empirica Stima Errori Operatori in Italian), the human error index is calculated according to the following formula, Equation (5):
P(EO) = K1 · K2 · K3 · K4 · K5  (5)
where:
K1—the level of complexity of the task;
K2—estimated time required to perform routine and non-routine activities of the operator;
K3—training and experience of the operator;
K4—stress index related to the seriousness of the situation;
K5—interaction between the operator and the system.
The values of the K indices are given in Table 3. The probability of an operator making a mistake is [21]:
P(EO) = 2^(n−1) · P  (6)
where:
P—the primary probability of making a mistake;
n—the number of previous unsuccessful attempts to correct an incorrect decision.
Table 3. TESEO method [29].
K1: Activity Typological Factor
   Simple, routine: 0.001
   Requiring attention, routine: 0.01
   Not routine: 0.1
K2: Time Available
   Routine Activities
          >20 s: 0.5
          >10 s: 1
          >2 s: 10
   Non-routine Activities
          >60 s: 0.1
          >45 s: 0.3
          >30 s: 1
          >3 s: 10
K3: Operator’s Qualities
   Carefully selected, expert, well trained: 0.5
   Average knowledge and training: 1
   Little knowledge, poorly trained: 3
K4: State of Anxiety
   Situation of grave emergency: 3
   Situation of potential emergency: 2
   Normal situation: 1
K5: Environmental Ergonomic Factor
   Excellent microclimate, excellent interface with plant: 0.7
   Good microclimate, good interface with plant: 1
   Discrete microclimate, discrete interface with plant: 3
   Discrete microclimate, poor interface with plant: 7
   Worse microclimate, poor interface with plant: 11
According to Formula (6), the probability can reach a value of 1.0, which indicates that the operator is completely disoriented and has lost control of the system.
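As an illustration (not taken from Table 4), the point estimate of Formula (5) and the retry correction of Formula (6) can be sketched as follows; the chosen K values are hypothetical samples from Table 3:

```python
# Hedged sketch of the TESEO estimate: P(EO) = K1*K2*K3*K4*K5 (Formula (5)),
# scaled by 2**(n-1) after n previous unsuccessful corrections (Formula (6)).
# Probabilities are capped at 1.0, the "complete loss of control" case.

def teseo(k1: float, k2: float, k3: float, k4: float, k5: float) -> float:
    """Base operator error probability as the product of the five K factors."""
    return min(k1 * k2 * k3 * k4 * k5, 1.0)

def teseo_after_retries(p: float, n: int) -> float:
    """Error probability after n previous unsuccessful correction attempts."""
    return min(2 ** (n - 1) * p, 1.0)

# routine task requiring attention (K1=0.01), routine with >20 s available
# (K2=0.5), average training (K3=1), normal situation (K4=1), good interface (K5=1)
p = teseo(0.01, 0.5, 1, 1, 1)
print(p)                          # 0.005
print(teseo_after_retries(p, 3))  # 0.02
```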
Table 4 shows the individual activities performed during the acetylene filling process. The parameters of the TESEO method (K1, K2, K3, K4, and K5) are identified for each activity, and from these the probability of human failure in each activity was estimated. According to the TESEO method, the riskiest activity in the acetylene filling process is reacetonation and ramp degassing.

3.3. FTA

FTA (Fault Tree Analysis) is a structured graphical representation of the conditions that lead to or contribute to the occurrence of a specific undesirable event, known as the top event [30]. The fault tree is designed in a way that allows it to be easily understood, analyzed, and, if needed, modified to facilitate the identification of the fault being monitored. This approach enables the examination of various contexts within the system, as well as its subsystems.
The goals of this procedure are as follows:
  • Represent the causal dependency model to investigate the reliability and safety of the monitored system, in order to understand the input–output interactions;
  • Define the failure frequency in the system or any of its components;
  • Provide a clear analytical representation of the logical operations within the monitored system;
  • Present the monitored system as a graphical model in which both quantitative and qualitative data are recorded.
Fault tree creation procedure:
  • Define the system under analysis, the purpose and scope of the analysis, and the basic assumptions made.
  • Define the fault. Specifying an undesirable fault involves identifying the onset of unsafe conditions or the system’s inability to perform required functions.
  • Create a graphical representation consisting of individual elements connected by logical operations, describing the monitored process. It is important to distinguish between conditioned and unconditioned states during this step.
  • Evaluate the fault state tree.
  • Conduct a logical (qualitative) analysis of the system.
  • Perform a numerical (quantitative) analysis of the system.
  • Assess the plausibility of the information for critical system elements.
  • Determine the diagnostic approaches for improving the existing condition.
Steps 1, 2, and 3 are carried out as part of the preliminary risk assessment, while steps 4, 5, and 6 are specific to this method. The FTA itself is simply a display and calculation of values for already identified events. Each fault tree represents a specific causal dependency of a specific technical object.
The FTA tracks the input/output interactions of defined system or subsystem functions [30,31]. At the same time, possible deviations depending on the change of components and functional links are tracked. An undesired disturbance must be defined with respect to the type of intended analysis (reliability, safety, etc.). The analysis criteria must be proportionate to the purpose of the analysis. The FTA allows the creation of a graphical representation of the observed links and the subsequent creation of a mathematical apparatus used to detect the weaknesses of the system.
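The quantitative side of an FTA can be sketched with two gate functions; the tree below is a hypothetical fragment for independent basic events, not the paper’s Figure 2:

```python
import math

# Minimal fault-tree arithmetic for independent basic events (illustrative):
# an AND gate multiplies event probabilities; an OR gate combines them
# as 1 - prod(1 - p). Event names and probabilities are invented.

def and_gate(probs):
    """All input events must occur: product of probabilities."""
    return math.prod(probs)

def or_gate(probs):
    """At least one input event occurs: complement of none occurring."""
    return 1.0 - math.prod(1.0 - p for p in probs)

# hypothetical top event: operator error OR (valve defect AND failed check)
p_top = or_gate([1e-3, and_gate([1e-2, 5e-2])])
print(p_top)  # ≈ 0.0015
```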

4. Discussion

The reliability of human factors is not easy to establish or predict, because it is not an isolated step or a one-off decision but a set of various influences, causes, and factors that shape a person’s characteristics. Examining the reliability of human factors is one of the most important tasks in probabilistic risk estimation. The paper provides examples of human reliability assessment in acetylene bottling, where failure of the human factor can have fatal consequences. Methods such as HEP determination, the TESEO method, and the FTA method were applied. The HEP determination identified reacetonation and ramp degassing as the riskiest acetylene bottling activity; the TESEO method points to the same activity. This is also evident from the FTA method (Figure 2), where the reliability of the human agent in the acetylene bottling activity is estimated at 1 × 10⁻³ [32]. Figure 3 shows the proposed algorithm for assessing psychosocial risks that can cause stress to employees during work. After identifying the threats, it is necessary to assess the risk by determining the parameters for each individual threat. The risk is determined by a combination of the following factors:
  • The severity of the damage and its consequences;
  • The probability of the occurrence of such damage, which depends on the following:
    (a) The frequency and duration of the threat;
    (b) The probability of an adverse event occurring;
    (c) Measures (technical, organizational, human) to eliminate the risk.
Figure 2. FTA for acetylene-filling operation.
Figure 3. Proposal of the psychosocial risk assessment algorithm.
Mental health and psychosocial risks in the workplace have been considered key priorities in the field of occupational health and safety in the European Union for more than two decades. The management of these risks represents a current challenge in the field of health and safety, mainly due to their impact on work stress and the rapidly changing environment on the labor market. The effective management of psychosocial risks can be carried out on the basis of an integrated multidisciplinary model based on the risk management paradigm.

5. Conclusions

Human reliability analysis (HRA) has become an essential component in many industries for risk management and the prevention of major accidents, where human reliability is a critical factor [33,34]. HRA operates on the underlying assumption that human error is a meaningful concept and can be linked to individual actions [35]. It is increasingly being used independently, both to assess the risks associated with human error and to reduce system vulnerabilities. The current trend in organizations is to keep the level of risk as low as possible. Based on available statistics, humans can be identified as one of the riskiest factors in the human–machine system. Human error affects not only safety but also the environment and quality. Any type of organization can benefit from addressing the issue of human reliability. The impact of the human factor on technological processes is a complex issue, primarily because each person is an individual and behaves differently when exposed to the same conditions. Therefore, it is not possible to elaborate a general model of human behavior that would reveal the factors causing human factor failure in technological processes.
This close relationship between man and machine increases work efficiency, but at the same time brings new incident situations and presents new challenges in the field of occupational health and safety [36]. The reliability of protective barriers is also influenced by the human factor and the activity of individuals in identifying threats or changes in the functionality of these barriers, diagnosing the necessary steps and subsequently taking measures [37,38,39].
Methods for estimating and predicting the probability of human error have historically shown four shortcomings. First, reliable data are insufficient. Second, there are insufficient criteria for selecting performance shaping factors (PSFs). Third, there is a limited ability to assess cognitive behavior. Finally, underlying causes are often ignored [40,41].

Author Contributions

Conceptualization, M.B.; methodology, M.B.; validation, M.B.; formal analysis, Z.K.; investigation, Z.K.; resources, M.B.; data curation, M.B.; writing—original draft preparation, Z.K.; writing—review and editing, Z.K.; visualization, M.B.; supervision, M.B. and Z.K.; project administration, Z.K.; funding acquisition, Z.K. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The data presented in this study are available on request from the corresponding author. The data are not publicly available because the measurements were performed in the private industrial sector.

Acknowledgments

This work has been supported by the Slovak Research and Development Agency under contract APVV-19-0367 and by KEGA 026TUKE-4/2023 Supporting the development of knowledge in the area of implementing quality management system requirements for the aerospace and defense industries.

Conflicts of Interest

The authors declare no conflicts of interest. The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript, or in the decision to publish the results.

References

  1. Reason, J. Human error: Models and management. BMJ 2000, 320, 768–770. [Google Scholar] [CrossRef] [PubMed]
  2. Adhikari, S.; Bayley, C.; Bedford, T.; Busby, J.; Cliffe, A.; Devgun, G.; Eid, M.; French, S.; Keshvala, R.; Pollard, S.J.T.; et al. Human Reliability Analysis: A Review and Critique; Technical Report; University of Manchester: Manchester, UK, 2021. [Google Scholar]
  3. Bell, J.; Holroyd, J. Review of Human Reliability Assessment Methods; Health & Safety Laboratory: Buxton, UK, 2009. [Google Scholar]
  4. Dougherty, E. Human reliability Analysis—Where Shouldst Thou Turn? Reliab. Eng. Syst. Saf. 1990, 29, 283–299. [Google Scholar] [CrossRef]
  5. Ji, C.; Gao, F.; Liu, W. Dependence assessment in human reliability analysis based on cloud model and best-worst method. Reliab. Eng. Syst. Saf. 2024, 242, 109770. [Google Scholar] [CrossRef]
  6. Perera, H.; De Silva, C. Human reliability analysis using machine learning: A review. Saf. Sci. 2021, 134, 105072. [Google Scholar]
  7. Ioffe, A.; Muscarella, A. Virtual reality simulations for human reliability analysis in high-risk environments. Int. J. Hum.-Comput. Stud. 2021, 146, 102552. [Google Scholar]
  8. Hollnagel, E. The Cognitive Reliability and Error Analysis Method (CREAM): A framework for human reliability analysis. Reliab. Eng. Syst. Saf. 2019, 183, 58–67. [Google Scholar]
  9. Shi, H.; Xu, X. A review of Bayesian network-based human reliability analysis methods. Saf. Sci. 2020, 124, 104602. [Google Scholar]
  10. Cooper, M.D. Towards a model of safety culture. Saf. Sci. 2020, 36, 111–136. [Google Scholar] [CrossRef]
  11. Patriarca, R.; Ramos, M.; Paltrinieri, N.; Massaiu, S.; Costantino, F.; Di Gravio, G.; Boring, R.L. Human reliability analysis: Exploring the intellectual structure of a research field. Reliab. Eng. Syst. Saf. 2020, 203, 107102. [Google Scholar] [CrossRef]
  12. Cross, R.; Youngblood, R. Probabilistic Risk Assessment Procedures Guide for Offshore Applications; Technical Report; Bureau of Safety and Environmental Enforcement: Washington, DC, USA, 2018. [Google Scholar]
  13. Rasmussen, J. Risk management in a dynamic society: A modelling problem. Saf. Sci. 1997, 27, 183–213. [Google Scholar] [CrossRef]
  14. Skrehot, P.; Trpiš, J. Analýza Chybování Lidského Činitele Pomocí Integrované Metody HTA-PHEA; Metodická Příručka; Výzkumný Ústav Bezpečnosti Práce, v.v.i.: Prague, Czech Republic, 2009. [Google Scholar]
  15. Swain, A.D.; Guttmann, H.E. Handbook of Human-Reliability Analysis with Emphasis on Nuclear Power Plant Applications; Final Report; Sandia National Labs: Albuquerque, NM, USA, 1983. [Google Scholar]
  16. Park, J.; Boring, R.L. An Approach to Dependence Assessment in Human Reliability Analysis: Application of Lag and Linger Effects. In Proceedings of the ESREL 2020 PSAM 15, Venice, Italy, 1–5 November 2020. [Google Scholar]
  17. Fargnoli, M.; Lombardi, M. Preliminary Human Safety Assessment (PHSA) for the improvement of the behavioral aspects of safety climate in the construction industry. Buildings 2019, 9, 69. [Google Scholar] [CrossRef]
  18. Fargnoli, M.; Lombardi, M.; Puri, D. Applying Hierarchical Task Analysis to Depict Human Safety Errors during Pesticide Use in Vineyard Cultivation. Agriculture 2019, 9, 158. [Google Scholar] [CrossRef]
  19. Jung, W.; Park, J.; Kim, Y.; Choi, S.Y.; Kim, S. HuREX—A framework of HRA data collection from simulators in nuclear power plants. Reliab. Eng. Syst. Saf. 2020, 194, 106235. [Google Scholar] [CrossRef]
  20. IOGP Assessment Risk Data Directory: Human Factors in QRA 66. 2010. Available online: https://www.iogp.org/bookstore/product/risk-assessment-data-directory-human-factors-in-qra/ (accessed on 5 July 2024).
  21. Porthin, M.; Liinasuo, M.; Kling, T. Effects of digitalization of nuclear power plant control rooms on human reliability analysis—A review. Reliab. Eng. Syst. Saf. 2020, 194, 106415. [Google Scholar] [CrossRef]
  22. Ade, N.; Peres, S.C. A review of human reliability assessment methods for proposed application in quantitative risk analysis of offshore industries. Int. J. Ind. Ergonom. 2022, 87, 103238. [Google Scholar] [CrossRef]
  23. De Felice, F.; Petrillo, A. Human Factors and Reliability Engineering for Safety and Security in Critical Infrastructures. Decision Making, Theory, and Practice; Springer International Publishing: Berlin/Heidelberg, Germany, 2018. [Google Scholar]
  24. Kirwan, B. A Guide to Practical Human Reliability Assessment; CRC Press: London, UK, 1994; p. 606. [Google Scholar]
  25. Sandom, C.; Harvey, R.S. Human Factors for Engineers; The Institution of Engineering & Technology: London, UK, 2004. [Google Scholar]
  26. Harris, D.; Stanton, N.A.; Marshall, A.; Young, M.S.; Demagalski, J.; Salmon, P. Using SHERPA to predict design-induced error on the flight deck. Aerosp. Sci. Technol. 2005, 9, 525–532. [Google Scholar] [CrossRef]
  27. IEEE. IEEE Guide for Incorporating Human Reliability Analysis into Probabilistic Risk Assessments for Nuclear Power Generating Stations and Other Nuclear Facilities. In IEEE Std 1082-2017 (Revision of IEEE Std 1082-1997); IEEE: New York, NY, USA, 2018; pp. 1–34. [Google Scholar] [CrossRef]
28. Hannaman, G.W.; Spurgin, A.J. Systematic Human Action Reliability Procedure (SHARP); EPRI NP-3583; NUS Corp.: San Diego, CA, USA, 1984. [Google Scholar]
  29. Oddone, I. Validation of the TESEO Human Reliability Assessment Technique for the Analysis of Aviation Occurrences, Politecnico di Milano, 2017–2018. Available online: https://www.politesi.polimi.it/retrieve/a81cb05d-03fb-616b-e053-1605fe0a889a/2018_12_Cassis.pdf (accessed on 5 July 2024).
  30. Schneeweiss, W.G. The Fault Tree Method: In the Fields of Reliability and Safety Technology; LiLoLe-Verlag: Hagen, Germany, 1999. [Google Scholar]
  31. Mueller, M.; Polinder, H. Electrical Drives for Direct Drive Renewable Energy Systems; Woodhead Publishing: Sawston, UK, 2013; ISBN 978-1-84569-783-9. [Google Scholar]
  32. Haag, U. Reference Manual Bevi Risk Assessment; National Institute of Public Health and the Environment (RIVM): Bilthoven, The Netherlands, 2009. [Google Scholar]
33. Vaurio, J.K. Modelling and quantification of dependent repeatable human errors in system analysis and risk assessment. Reliab. Eng. Syst. Saf. 2001, 71, 179–188. [Google Scholar]
  34. Strand, G.O.; Haskins, C. On Linking of Task Analysis in the HRA Procedure: The Case of HRA in Offshore Drilling Activities. Safety 2018, 4, 39. [Google Scholar] [CrossRef]
  35. Steijn, W.M.P.; Van Gulijk, C.; Van der Beek, D.; Sluijs, T. A System-Dynamic Model for Human–Robot Interaction; Solving the Puzzle of Complex Interactions. Safety 2023, 9, 1. [Google Scholar] [CrossRef]
  36. Li, Y.; Guldenmund, F.W.; Aneziris, O.N. Delivery systems: A systematic approach for barrier management. Saf. Sci. 2017, 121, 679–694. [Google Scholar] [CrossRef]
  37. Winge, S.; Albrechtsen, E. Accident types and barrier failures in the construction industry. Saf. Sci. 2018, 105, 158–166. [Google Scholar] [CrossRef]
  38. Nnaji, C.; Karakhan, A.A. Technologies for safety and health management in construction: Current use, implementation benefits and limitations, and adoption barriers. J. Build. Eng. 2020, 29, 101212. [Google Scholar] [CrossRef]
  39. Selleck, R.; Hassall, M.; Cattani, M. Determining the Reliability of Critical Controls in Construction Projects. Safety 2022, 8, 64. [Google Scholar] [CrossRef]
40. Yang, C.W.; Lin, C.J.; Jou, Y.T.; Yenn, T.C. A review of current human reliability assessment methods utilized in high hazard human-system interface design. In Engineering Psychology and Cognitive Ergonomics: 7th International Conference, EPCE 2007, Held as Part of HCI International 2007, Beijing, China, 22–27 July 2007; LNAI 4562; Springer: Berlin/Heidelberg, Germany, 2007; pp. 212–221. [Google Scholar]
  41. Hollnagel, E. Human reliability assessment in context. Nucl. Eng. Technol. 2005, 37, 2. [Google Scholar]
Figure 1. Human reliability assessment process.
Table 1. Acetylene bottling.
Process: Acetylene bottling. Date:

| Step | Activity Description | Responsibility | Co-Responsibility | Technical Inspection |
|---|---|---|---|---|
| *Preparatory phase* | | | | |
| 1 | Empty bottles check | Bottling plant operator | Production technologist, operations master | Flame arresters, safety caps |
| 2 | Checking the amount of solvent (acetone) in the bottles | Bottling plant operator | Production technologist, operations master | |
| 3 | Reacetonation | Bottling plant operator | Production technologist, operations master | |
| 4 | Connecting the bottles to filling points | Bottling plant operator | Production technologist, operations master | |
| 5 | Checking the tightness of connections | Bottling plant operator | Production technologist, operations master | |
| 6 | Entry in the production record | Bottling plant operator | Production technologist, operations master | |
| *Filling* | | | | |
| 7 | Securing bottles with fall-arrest chains | Bottling plant operator | Production technologist, operations master | Flame arresters, safety caps |
| 8 | Bottling | Bottling plant operator | Production technologist, operations master | |
| 9 | Checking the bottle cooling function and bottle temperature during filling | Bottling plant operator | Production technologist, operations master | |
| 10 | Closing the filling valves at the acetylene inlet on the ramp | Bottling plant operator | Production technologist, operations master | |
| 11 | Closing the inlet valves on the ramps when the compressors are switched off | Bottling plant operator | Production technologist, operations master | |
| 12 | Closing the bottles | Bottling plant operator | Production technologist, operations master | |
| 13 | Opening the degassing valves | Bottling plant operator | Production technologist, operations master | |
| 14 | Ramp degassing | Bottling plant operator | Production technologist, operations master | |
| 15 | Closing the degassing valves | Bottling plant operator | Production technologist, operations master | |
| 16 | Bottle securing | Bottling plant operator | Production technologist, operations master | |
| 17 | Tightness inspection | Bottling plant operator | Production technologist, operations master | |
| 18 | Bottle weighing | Bottling plant operator | Production technologist, operations master | |
| 19 | Closing the bottle and ramp valves | Bottling plant operator | Production technologist, operations master | |
| *Carrying away the acetylene bottles* | | | | |
| 20 | Transport to the handling area | Transport platoon staff | Production technologist, operations master | Flame arresters, safety caps |
| 21 | Entry in the production record | Transport platoon staff | Production technologist, operations master | |
| 22 | Records of delivered empty bottles, dispatched bottles and full bottles | Warehouse keeper or foreman (production and dispatch management) | Production technologist, operations master | |
Table 2. HEP determination for acetylene bottling.
| Step | Activity Description | Responsible Person | Secondary Inspection | Technical Inspection | Resulting HEP |
|---|---|---|---|---|---|
| 1 | Empty bottles check | 0.005 | 0.001 | 0.001 | 5 × 10⁻⁹ |
| 2 | Checking the amount of solvent (acetone) in the bottles | 0.005 | 0.001 | 0.001 | 5 × 10⁻⁹ |
| 3 | Reacetonation | 0.01 | 0.001 | 0.001 | 1 × 10⁻⁸ |
| 4 | Connecting the bottles to filling points | 0.001 | 0.001 | 0.001 | 1 × 10⁻⁹ |
| 5 | Checking the tightness of connections | 0.0021 | 0.001 | 0.001 | 2.1 × 10⁻⁹ |
| 6 | Entry in the production record | 0.001 | 0.001 | 0.001 | 1 × 10⁻⁹ |
| 7 | Securing bottles with fall-arrest chains | 0.005 | 0.001 | 0.001 | 5 × 10⁻⁹ |
| 8 | Bottling | 0.001 | 0.001 | 0.001 | 1 × 10⁻⁹ |
| 9 | Checking the bottle cooling function and bottle temperature during filling | 0.005 | 0.001 | 0.001 | 5 × 10⁻⁹ |
| 10 | Closing the filling valves at the acetylene inlet on the ramp | 0.001 | 0.001 | 0.001 | 1 × 10⁻⁹ |
| 11 | Closing the inlet valves on the ramps when the compressors are switched off | 0.001 | 0.001 | 0.001 | 1 × 10⁻⁹ |
| 12 | Closing the bottles | 0.001 | 0.001 | 0.001 | 1 × 10⁻⁹ |
| 13 | Opening the degassing valves | 0.001 | 0.001 | 0.001 | 1 × 10⁻⁹ |
| 14 | Ramp degassing | 0.01 | 0.001 | 0.001 | 1 × 10⁻⁸ |
| 15 | Closing the degassing valves | 0.001 | 0.001 | 0.001 | 1 × 10⁻⁹ |
| 16 | Bottle securing | 0.01 | 0.001 | 0.001 | 1 × 10⁻⁸ |
| 17 | Tightness inspection | 0.005 | 0.001 | 0.001 | 5 × 10⁻⁹ |
| 18 | Bottle weighing | 0.003 | 0.001 | 0.001 | 3 × 10⁻⁹ |
| 19 | Closing the bottle and ramp valves | 0.001 | 0.001 | 0.001 | 1 × 10⁻⁹ |
| 20 | Transport to the handling area | 0.001 | 0.001 | 0.001 | 1 × 10⁻⁹ |
| 21 | Entry in the production record | 0.001 | 0.001 | 0.001 | 1 × 10⁻⁹ |
| 22 | Records of delivered empty bottles, dispatched bottles and full bottles | 0.001 | 0.001 | 0.001 | 1 × 10⁻⁹ |
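The resulting HEP for each step in Table 2 is obtained by treating the operator's error and the failures of the two downstream checks as independent events, so the step-level HEP is simply their product. A minimal sketch in Python (function and variable names are ours; the probabilities are taken from the first rows of Table 2):

```python
# Step-level human error probability (HEP): the operator's error, a missed
# secondary inspection, and a missed technical inspection are treated as
# independent events, so the resulting HEP is their product.

def resulting_hep(operator: float, secondary: float, technical: float) -> float:
    """Probability that the error occurs AND both checks fail to catch it."""
    return operator * secondary * technical

# Values taken from the first rows of Table 2.
steps = {
    "Empty bottles check": (0.005, 0.001, 0.001),
    "Reacetonation": (0.010, 0.001, 0.001),
    "Checking the tightness of connections": (0.0021, 0.001, 0.001),
}

for activity, probs in steps.items():
    print(f"{activity}: HEP = {resulting_hep(*probs):.2e}")
```

The independence assumption is optimistic: as the dependence literature cited above (e.g., refs. 16 and 33) notes, a checker who trusts the operator is more likely to miss the same error, so the true step-level HEP can be substantially higher.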
Table 4. Application of TESEO method for acetylene filling operation.
| Step | Activity Description | K1 | K2 | K3 | K4 | K5 | P(EO) |
|---|---|---|---|---|---|---|---|
| 1 | Empty bottles check | 0.001 | 0.5 | 1 | 1 | 7 | 0.0035 |
| 2 | Checking the amount of solvent (acetone) in the bottles | 0.001 | 0.5 | 1 | 1 | 7 | 0.0035 |
| 3 | Reacetonation | 0.01 | 1 | 0.5 | 1 | 1 | 0.005 |
| 4 | Connecting the bottles to filling points | 0.001 | 0.5 | 1 | 1 | 7 | 0.0035 |
| 5 | Checking the tightness of connections | 0.001 | 1 | 0.5 | 1 | 3 | 0.0015 |
| 6 | Entry in the production record | 0.001 | 0.5 | 1 | 1 | 7 | 0.0035 |
| 7 | Securing bottles with fall-arrest chains | 0.001 | 1 | 0.5 | 1 | 3 | 0.0015 |
| 8 | Bottling | | | | | | |
| 9 | Checking the bottle cooling function and bottle temperature during filling | 0.001 | 1 | 0.5 | 1 | 3 | 0.0015 |
| 10 | Closing the filling valves at the acetylene inlet on the ramp | 0.001 | 0.5 | 1 | 1 | 7 | 0.0035 |
| 11 | Closing the inlet valves on the ramps when the compressors are switched off | 0.001 | 0.5 | 1 | 1 | 7 | 0.0035 |
| 12 | Closing the bottles | 0.001 | 0.5 | 1 | 1 | 7 | 0.0035 |
| 13 | Opening the degassing valves | 0.001 | 0.5 | 1 | 1 | 7 | 0.0035 |
| 14 | Ramp degassing | 0.01 | 1 | 0.5 | 1 | 1 | 0.005 |
| 15 | Closing the degassing valves | 0.001 | 1 | 0.5 | 1 | 3 | 0.0015 |
| 16 | Bottle securing | 0.001 | 0.5 | 1 | 1 | 7 | 0.0035 |
| 17 | Tightness inspection | 0.001 | 1 | 0.5 | 1 | 3 | 0.0015 |
| 18 | Bottle weighing | 0.001 | 0.5 | 1 | 1 | 3 | 0.0015 |
| 19 | Closing the bottle and ramp valves | 0.001 | 0.5 | 1 | 1 | 7 | 0.0035 |
| 20 | Transport to the handling area | 0.001 | 1 | 0.5 | 1 | 7 | 0.00035 |
| 21 | Entry in the production record | 0.001 | 0.5 | 1 | 1 | 7 | 0.0035 |
| 22 | Records of delivered empty bottles, dispatched bottles and full bottles | 0.001 | 0.5 | 1 | 1 | 7 | 0.0035 |
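In the TESEO technique applied in Table 4, the estimated error probability is the product of five empirically tabulated factors: K1 (type of activity), K2 (temporary stress/available time), K3 (operator qualities), K4 (state of anxiety), and K5 (ergonomic conditions). A short sketch of the calculation (Python; the cap at 1.0 reflects that the product is interpreted as a probability):

```python
from math import prod

# TESEO: P(EO) = K1 * K2 * K3 * K4 * K5, where K1 = type of activity,
# K2 = temporary stress factor, K3 = operator qualities, K4 = state of
# anxiety, K5 = ergonomic conditions. The product is capped at 1.0
# because it is interpreted as a probability.

def teseo(k1: float, k2: float, k3: float, k4: float, k5: float) -> float:
    return min(1.0, prod((k1, k2, k3, k4, k5)))

# Step 5, "Checking the tightness of connections", factors from Table 4:
p_eo = teseo(0.001, 1, 0.5, 1, 3)
print(f"P(EO) = {p_eo:.4f}")
```

Running this for step 5 reproduces the 0.0015 value shown in Table 4; each row of the table is the same product over that step's factor set.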

Share and Cite

MDPI and ACS Style

Balazikova, M.; Kotianova, Z. Human Reliability Analysis in Acetylene Filling Operations: Risk Assessment and Mitigation Strategies. Appl. Sci. 2025, 15, 4558. https://doi.org/10.3390/app15084558
