Article

Human Factors as Predictor of Fatalities in Aviation Accidents: A Neural Network Analysis

by Flávio L. Lázaro 1,2, Rui P. R. Nogueira 1, Rui Melicio 1,3,*, Duarte Valério 1 and Luís F. F. M. Santos 3,4
1 Institute of Mechanical Engineering (IDMEC), Instituto Superior Técnico, Universidade de Lisboa, Av. Rovisco Pais 1, 1049-001 Lisboa, Portugal
2 Instituto Politécnico, Universidade Cuito Cuanavale, Menongue EN280, Angola
3 Aeronautics and Astronautics Research Center (AEROG), Universidade da Beira Interior, Calçada Fonte do Lameiro, 6200-358 Covilhã, Portugal
4 ISEC Lisboa, Alameda das Linhas de Torres, 179, 1750-142 Lisboa, Portugal
* Author to whom correspondence should be addressed.
Appl. Sci. 2024, 14(2), 640; https://doi.org/10.3390/app14020640
Submission received: 26 November 2023 / Revised: 4 January 2024 / Accepted: 7 January 2024 / Published: 11 January 2024
(This article belongs to the Section Aerospace Science and Engineering)

Abstract:
In the area of aviation safety, the importance of human factors is indisputable. This research assesses the importance of human factors in predicting fatalities during aviation mishaps. Utilizing reports from the Aviation Safety Network Database, encompassing 1105 accidents and incidents spanning from 2007 to 2016, neural networks were trained to forecast the probability of fatalities. Our findings underscore that the human factors involved, by themselves, can yield strong predictions. For comparison, other variables (type of occurrence, flight phase, and aircraft fate) were used as predictors, with poorer results; combining these variables with human factors makes the prediction only marginally better, if at all, than that based on human factors alone, so the contribution of these supplementary variables remains minimal. Consequently, this study illuminates the paramount importance of human factors in influencing aviation fatalities, guiding stakeholders on the immediate interventions and investments which are most warranted to prevent them.

1. Introduction

Air transportation is one of the most reliable means of transportation, contributing significantly to social and economic development on a global scale. All the stakeholders in this highly regulated market have a strong interest in preventing operational hazards, thereby increasing the safety of persons and goods. The number of accidents and incidents has significantly decreased over the years [1], both because of technological improvements and because of the introduction of policies, processes, and regulations to deal with human factors, organizational factors, and state safety programs. The evolution of the relative importance of technical and human causes (the latter encompassing human factors, organizational factors, and state safety programs) is depicted in Figure 1. Of course, accidents seldom have only one cause, being more often than not the result of a concurrence of causes. Nevertheless, for a long time, human-related errors have been the main cause of accidents in the air transport industry [2,3]. Consequently, avoiding accidents means addressing human errors and the factors that may cause them [4]. This is why the study of human factors is important to decrease accidents and incidents in this industry [3,5].
In this paper, data taken from reports of aviation accidents and incidents over the 2007–2016 period are used to predict fatalities. This is carried out using different artificial intelligence techniques, in particular neural networks and random forests. The importance of human factors in the prediction of fatalities is highlighted. While human factors have been shown in the literature to allow for the prediction of fatalities [7], our contribution shows that they are the best available predictor, and that neural networks are the best-performing tool.
The remainder of the paper is organized as follows: Section 2 presents the state of the art of the relation between human factors and aviation safety; Section 3 describes the database of accident and incident reports that was used and the extraction of information from these reports; Section 4 describes the neural networks with which fatalities were predicted for each case; and the results are given in Section 5. Finally, Section 6 concludes the paper with a discussion of the results and of the work still to be carried out in this area.

2. State of the Art

The study of human factors in aviation safety has evolved significantly over the years. Early research was primarily focused on individual errors, often attributing accidents to pilot mistakes. However, a paradigm shift occurred with seminal works such as Reason’s study on human errors, which emphasized a systems approach to understanding the root causes of accidents [8]. One of the early methodologies that gained prominence is Crew Resource Management (CRM). CRM aims to improve communication, leadership, and decision making among flight crews. Studies have shown that CRM training is effective in reducing errors attributable to human factors [9]. Recent advancements in technology have also contributed to the depth of this field. For instance, eye-tracking technology is increasingly being used to study pilot attention and situational awareness. Research in this area has shown that technology such as eye-tracking can provide valuable insights into cognitive processes, thereby aiding in the design of more ergonomic cockpits and effective training programs [10]. The International Civil Aviation Organization (ICAO) has been instrumental in shaping the regulatory landscape. ICAO’s Safety Management System (SMS) [11] has set the global standard for proactive hazard identification and risk management. The ICAO’s Document 9859 [6] provides comprehensive guidelines that have been adopted globally, emphasizing the importance of human factors in safety management systems. Different organizations and regulatory bodies have adopted various approaches to tackle human factors in aviation safety. While the ICAO focuses on a broad systems approach, entities like Boeing have specialized initiatives targeting specific aspects such as cockpit design and human–machine interfaces. A comparative analysis of these approaches reveals that no single methodology is universally effective; rather, a multi-faceted approach is often the most effective strategy for mitigating risks associated with human factors [12].
According to the International Civil Aviation Organization (ICAO), aviation safety is “The state in which risks associated with aviation activities, related to, or in direct support of the operation of aircraft, are reduced and controlled to an acceptable level” [6]. Since it is not possible to eliminate risk, the ICAO promotes a process of incorporating defenses, preventive controls, or recovery measures to lower the severity and/or likelihood of a hazard’s projected consequence such as an accident or incident. This process is widely adopted by the aviation ecosystem and encompasses the relation of the Risk Probability of occurrence vs. the Risk Severity of the event. This methodology can be illustrated in a matrix [6,13], like the one in Table 1.
Based on Table 1, special programs can be created to deal with errors and hazardous behaviors. One of the mainstream programs is the already mentioned ICAO Safety Management System, commonly referred to as Annex 19 [11]. The objective of safety management programs is to analyze the environment and build defenses against errors and hazards. These defenses are then responsible for creating specific mitigating actions, preventive controls, or recovery measures, putting them in place to prevent a hazard or its escalation into an undesirable consequence.
Human errors have been implicated in most aviation accidents [3]. The ICAO defines the associated discipline as follows: “Human Factors is concerned to optimize the relationship between people and their activities, by the systematic application of human sciences, integrated within the framework of systems engineering” [14]. Thus, it can be said that human factors in aviation represent an interdisciplinary sphere that aims to elucidate the intricate interactions between aviation personnel, the environments they operate in, and the equipment they utilize, with the ultimate objective of improving safety, performance, and well-being within the aviation sector. The root causes of human error can be reduced to twelve. These twelve root causes are well known as the “Dirty Dozen” and are an aviation standard and a field of study by themselves. The Dirty Dozen delineate twelve prevalent factors that have been empirically shown to contribute to human error in the context of aviation: lack of communication, complacency, lack of knowledge, distraction, lack of teamwork, fatigue, lack of resources, pressure, lack of assertiveness, norms, lack of awareness, and stress [1,11,15,16,17]. By methodically identifying and addressing the elements within the Dirty Dozen, professionals in the aviation industry are better positioned to foster a culture oriented towards safety, thereby reducing the propensity for errors and substantially enhancing operational safety metrics. This framework not only serves as an instrumental tool for training initiatives but also facilitates self-assessment procedures, laying the groundwork for the formulation of risk mitigation strategies pertinent to human factors in aviation. Through ongoing scrutiny and comprehension of these twelve critical elements, the aviation sector is poised for progression toward a more secure and efficient operational paradigm.
While every human, even with the best training, is of course always liable to make mistakes [18], many players have suggested different ways of reducing the impact of human factors. The ICAO has led efforts to improve organizational structures and human management [11]; Boeing addressed visual perception, ergonomics, and the human–computer interface [14,19]; organizational safety culture was addressed in [20]; a constant monitoring of routine operations is envisioned in [8]; an Aviation Maintenance Monitoring Process (AMMP) is suggested in [21]; and a bibliometric analysis can be found in [22]. In [23], a probabilistic and statistical study of aeronautical accidents is carried out: using autoregressive integrated moving average (ARIMA) models and time series analysis of available data and accident reports, the authors assess safety levels in the aviation sector and forecast future values and trends. In [24], a review of problems linked to the mental health of pilots and aviation professionals is presented. That study sought answers from the players about the awareness (or lack thereof) of mental health in the sector, its criticality and importance for aviation safety, and the identification of the factors (fatigue, stress, anxiety, depression, and exhaustion) that most contribute to the deterioration of mental health, with a view to optimizing aviation safety. The topic of mental health has become more relevant since the accidents of a LAM flight in 2013 and a Germanwings flight in 2015, both caused by pilots with mental health problems, and has been attracting the attention of researchers in a constant search for mitigating and preventive measures for aeronautical safety.
Several factors contribute to human errors continuing to be classified as the main cause of aviation accidents in recent years. The study in [25] statistically analyzed the trend in the evolution of air accidents; its authors state that the lack of personnel is one of the factors that compromises the smooth functioning of operations in the sector, as it overloads the professionals present, increasing the probability of errors due to the stress faced by the remaining team members. Although human errors have been identified as the most significant cause of accidents in several studies over the years, technical and environmental factors are recognized as responsible for a smaller proportion of serious accidents. Airlines have included in their reports occurrences of unsafe events with high frequencies but potentially low severity. The article in [26] proposes a technical scheme for automatically classifying and identifying risk factors from Chinese civil aviation incident reports in an intelligent way, in order to prevent the recurrence of such events. The authors state that unsafe events during a flight are always a major threat and concern for aircraft safety and can cause damage to the aircraft or even injury to personnel involved or on board. Flight safety and the sustainable growth of aviation also depend on the good performance of air traffic control (ATC). In [27], an analysis of human factors in air traffic safety is carried out, based on the human factors analysis and classification system–Bayesian network (HFACS-BN) model, in which machine learning techniques were used to evaluate ATC performance. In [28], a hybrid machine learning–statistical method is applied to detect anomalies in flight data.
A useful image is that of the Swiss cheese model of accident causation developed by Reason in 1990 [8]. The Swiss Cheese Model is widely used in aviation safety management for its simplicity and ease of understanding. It provides a visual representation that helps in identifying potential weaknesses in a system and offers a basis for implementing corrective actions. The model has been instrumental in the development of various safety management systems, including the ICAO’s SMS [11]. This human factor model serves as a conceptual framework for understanding how errors occur in complex systems like aviation. This model likens organizational defenses against accidents to layers of Swiss cheese, where each layer represents a barrier or a safeguard. While individual layers may have holes or weaknesses, only the alignment of these holes across multiple layers leads to the occurrence of an accident. This means that in an organization, there are safeguards against accidents at several levels (organizational, supervision, etc.), and, hence, even if there is a failure at one of these levels, a safeguard at another level should prevent an accident. But if all safeguards fail at all levels, it is as if there were a hole at each layer, and the holes are aligned so that an accident can pass through the efforts to prevent it. Human factors can affect each one of these layers. Despite its widespread use, the Swiss Cheese Model has been subject to criticism. One of the main criticisms is that the model is overly simplistic and does not account for the dynamic interactions between different layers of defense. It also tends to focus on individual errors and does not adequately address systemic issues or organizational culture [29].
In this work, human factors are classified according to a general, comprehensive framework developed in [3], called Human Factors Analysis and Classification System (HFACS), shown in Figure 2, for the four different layers of the Swiss cheese model [3].
HFACS was developed for the field of aviation (both civil and military) and has been used to this effect [30], but has since been used in other industries as well, for instance, in the naval industry (in which it was used to show that most accidents are due to decision errors, the planning of inappropriate operations, and supervisory violations) [31].

3. Database

The Aviation Safety Network (ASN) database is a worldwide repository of information on airliner accidents and safety issues kept by a non-profit organization [32]. The data used in this paper correspond to all 1105 accidents and incidents reported during a period of ten years, from 2007 to 2016. This period was chosen since it is the same as that used in [2,7]. For each accident or incident, the available reports were used to find out which human factors were involved. The list of human factors employed, and their numbering, adapted from [3] and from its extension to maintenance activities in [33], is taken from [2,7] and is given in Table 2.
In addition to the human factors involved in each occurrence, three variables in the metadata of the reports were also obtained (a sketch of how these predictors can be encoded is given after the list):
  • Type of Occurrence—This variable captures the nature of what happened. Its possible values are accident, criminal event, incident, and other occurrence.
  • Flight Phase—Different phases of flight have different risk profiles. For example, take-off and landing are often considered the most critical phases of flight. By including this variable, the study can identify which phases are most susceptible to human factors-related accidents, thereby allowing for targeted interventions. The phases considered are, in alphabetical order, approach, en route, initial climb, landing, maneuvering, pushback, standing, take-off, and taxi; occurrences whose phase could not be determined are labeled unknown.
  • Aircraft Fate—The aircraft may be repaired or be in such a condition that it is scrapped. This is consequently a binary variable.
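As an illustration of how these predictors can be encoded, the following Python sketch one-hot encodes the two categorical variables and binarizes the aircraft fate; the column names and the miniature data are assumptions, since the preprocessed dataset is not distributed with the paper:

import pandas as pd

# Hypothetical miniature of the report metadata; the real dataset holds 1105 rows.
df = pd.DataFrame({
    "type_of_occurrence": ["accident", "incident", "accident"],
    "flight_phase": ["landing", "en route", "take-off"],
    "aircraft_fate": ["scrapped", "repaired", "scrapped"],
    "fatalities": [3, 0, 1],
})

# One-hot encode the two categorical metadata variables.
occurrence = pd.get_dummies(df["type_of_occurrence"], prefix="occ")
phase = pd.get_dummies(df["flight_phase"], prefix="phase")
# Aircraft fate is binary: scrapped or repaired.
fate = (df["aircraft_fate"] == "scrapped").astype(int).rename("scrapped")

# Target for all four experiments: 1 if there was at least one fatality.
y = (df["fatalities"] > 0).astype(int)

# Experiment 1 predictors are the three metadata variables; in Experiments 2-4,
# one binary indicator column per HFACS-ME factor of Table 2 would be appended.
X_exp1 = pd.concat([occurrence, phase, fate], axis=1)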

4. Data Processing

4.1. Machine Learning

Among the available machine learning algorithms, several can be used to predict fatalities. Neural networks are an obvious choice, given that their performance in this type of non-linear modeling task is regularly satisfactory [34], that an active learning system (which can be used with a small initial set of labeled data and obtain further data as it runs [35]) is not needed since the data are labeled, and that the data are sufficient to train a neural network. Concerning this last point, it is true that it would be desirable to have more than the 1105 incidents and accidents mentioned above; data augmentation techniques, routinely applied in several areas, are not suitable for this problem, and neural networks with many hidden layers (such as those used in deep learning) are out of the question. Furthermore, for databases of this size, tree-based models may present comparable results [36]. For this reason, each model was developed both with a neural network and with a random forest, to verify which of these tools is more suitable.

4.2. Models Developed

In this paper, different models to predict fatalities were developed, using different inputs, aiming to answer the following question: given some variables about an incident or accident, and the human factors involved, is it possible to predict if there were fatalities? These models will be referred to as experiments and are illustrated in Figure 3. Their rationale is the following:
Experiment 1. 
uses the three variables available in the metadata of the reports, and listed above in Section 3, as predictors of fatalities. This experiment is used as a baseline for the comparison of the utility of employing human factors as predictors of fatalities.
Experiment 2. 
uses the human factors identified in the reports as predictors of fatalities. Its purpose is to verify whether or not they suffice as predictors.
Experiment 3. 
uses one of the variables (type of occurrence) of the neural network in Experiment 1 and the human factors as predictors. The reason why the type of occurrence was chosen, rather than the other two, is that in an incident, by definition, there are no fatalities, and thus this variable can be expected to significantly improve results.
Experiment 4. 
uses the three variables of the neural network in Experiment 1 and the human factors as predictors. These last two experiments serve to verify what is most important in predicting fatalities: human factors, or the other variables.
For each of these experiments, a Multilayer Perceptron (MLP) neural network is trained using the data described in Section 3 (i.e., all 1105 accidents and incidents), combining the corresponding predictors. The architecture of the neural networks was chosen through trial and error as a compromise between performance and overfitting; the corresponding pseudo-code is given below in Algorithm 1. Notice that there are two hidden layers after the input layer with the ReLU (rectified linear unit) activation function

$$\phi(v) = \max(0, v),$$

and that the output layer uses the sigmoid activation function

$$\phi(v) = \frac{1}{1 + e^{-v}}.$$

This can be represented by the sequence {13 13 13 1} (i.e., an input layer consisting of 13 neurons, two hidden layers also with 13 neurons each, and an output layer consisting of only 1 neuron) [37].
As explained above, a random forest was also used for each experiment. The data from the 1105 accidents and incidents were again used for this. The pseudo-code is similar to that in Algorithm 1 for neural networks; default parameters were used.
Algorithm 1. Pseudocode of the Python program used to train the neural networks
Load database information
Select desired data (human factors, type of occurrence, flight phase, aircraft fate)
Split data randomly: 20% test, 80% training
Use Python library Keras to build a Neural Network (NN) with:
    two 13-neuron hidden layers, using the rectified linear unit (ReLU) activation function
    a 1-neuron output layer, using the sigmoid activation function
Train the NN with the Adaptive Moment Estimation (Adam) optimiser, learning rate 0.001, decay rates β₁ = 0.9 and β₂ = 0.999, ε = 10⁻⁷
Test the model and plot the results
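For concreteness, a minimal runnable sketch of Algorithm 1 follows, assuming a Keras/scikit-learn stack; the data here are synthetic placeholders, and the batch size of 16 and the cap of 100 epochs are those reported in Section 5. The random forest counterpart, trained on the same split with default parameters, is included at the end.

import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from tensorflow import keras

# Synthetic placeholder data for illustration only; in the paper, X is built
# from the ASN reports (Section 3) and y flags whether fatalities occurred.
rng = np.random.default_rng(0)
X = rng.integers(0, 2, size=(1105, 13)).astype("float32")
y = rng.integers(0, 2, size=1105)

# Split data randomly: 20% test, 80% training.
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.20, random_state=0)

# {13 13 13 1} architecture: two 13-neuron ReLU hidden layers, sigmoid output.
model = keras.Sequential([
    keras.Input(shape=(X_train.shape[1],)),
    keras.layers.Dense(13, activation="relu"),
    keras.layers.Dense(13, activation="relu"),
    keras.layers.Dense(1, activation="sigmoid"),
])

# Adam optimiser with the parameters listed in Algorithm 1.
model.compile(
    optimizer=keras.optimizers.Adam(learning_rate=0.001, beta_1=0.9,
                                    beta_2=0.999, epsilon=1e-7),
    loss="binary_crossentropy",
    metrics=["accuracy"],
)

# Mini-batches of size 16, at most 100 epochs (Section 5).
model.fit(X_train, y_train, batch_size=16, epochs=100,
          validation_data=(X_test, y_test), verbose=0)

# Random forest counterpart with default parameters, on the same split.
rf = RandomForestClassifier(random_state=0).fit(X_train, y_train)
print("NN accuracy:", model.evaluate(X_test, y_test, verbose=0)[1])
print("RF accuracy:", rf.score(X_test, y_test))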

4.3. Performance Indexes

The confusion matrix is defined in Table 3. The following performance indexes, usually employed in similar problems, were used to quantify the results:

$$\mathrm{Precision} = \frac{TP}{TP + FP}, \qquad \mathrm{Recall} = \frac{TP}{TP + FN},$$
$$F_1\text{-}\mathrm{score} = \frac{2 \times \mathrm{Precision} \times \mathrm{Recall}}{\mathrm{Precision} + \mathrm{Recall}}, \qquad \mathrm{Accuracy} = \frac{TP + TN}{TP + FP + TN + FN},$$

where TP, FP, TN, and FN are the numbers of true positives, false positives, true negatives, and false negatives, respectively. These performance indexes can be found for multiclass analysis as well; in that case, the confusion matrix will have as many rows and columns as there are classes. It should be noticed that some performance indexes can be misleading if there are over-represented classes. Precision–recall plots and receiver operating characteristic (ROC) curves (i.e., plots of the true positive rate as a function of the false positive rate) are better suited for such cases.
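For illustration, these indexes follow directly from the entries of the confusion matrix; a short sketch using scikit-learn (whose confusion matrix places true values in rows, i.e., transposed relative to Table 3):

import numpy as np
from sklearn.metrics import confusion_matrix

# Toy labels for illustration: 0 = no fatality, 1 = fatality.
y_true = np.array([0, 0, 1, 1, 0, 1])
y_pred = np.array([0, 1, 1, 0, 0, 1])

# ravel() yields TN, FP, FN, TP for binary labels {0, 1}.
tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()

precision = tp / (tp + fp)
recall = tp / (tp + fn)
f1_score = 2 * precision * recall / (precision + recall)
accuracy = (tp + tn) / (tp + fp + tn + fn)
print(precision, recall, f1_score, accuracy)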

5. Results

Training algorithms can be run for a fixed number of passes through the training data (known as epochs), until no more data are available, or until no further performance improvement is possible [34]. Our training ran in mini-batches of size 16 and terminated after, at most, 100 epochs.
Another important decision is how to split the dataset into a part used for training and another for validation. We settled on an 80–20% split, a usual heuristic choice in the literature. To confirm its suitability for the problem under study, different splits were used with the neural network models; the resulting final accuracies for the four experiments are shown in Figure 4. The 80–20% split led to the best accuracy in two experiments and the second best in the others; no other split achieves this. For this reason, the results will be shown only for this case; the same split was also used with random forests, for obvious reasons of fairness in the comparison.
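Such a comparison can be sketched with a loop of the following form; here build_mlp is a hypothetical helper wrapping the Keras model of Algorithm 1, X and y are the encoded predictors and labels, and the list of candidate splits is an assumption (Figure 4 reports the accuracies actually obtained):

from sklearn.model_selection import train_test_split

for train_frac in (0.6, 0.7, 0.8, 0.9):
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, train_size=train_frac,
                                              random_state=0)
    model = build_mlp(X_tr.shape[1])  # hypothetical: the {13 13 13 1} MLP
    model.fit(X_tr, y_tr, batch_size=16, epochs=100, verbose=0)
    loss, acc = model.evaluate(X_te, y_te, verbose=0)
    print(f"{train_frac:.0%} training split: accuracy = {acc:.2f}")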
The evolution with the epochs of the cross entropy of the neural network models, computed from predicted and true outcomes, is shown in Figure 5 for each of the four experiments. There seems to be no overfitting: after the initial exponential decay, the values for the training set keep decreasing slightly, while those for the validation set level off rather than rising. The evolution of accuracy, shown in Figure 6, follows a roughly constant pattern, confirming that the neural network architecture is suitable.
Concerning the performance of the neural network models after training is carried out, Figure 7 shows the confusion matrixes for the four experiments; all performance indexes are tabulated in Table 4. The results for the random forest models are presented in a similar way: confusion matrixes are shown in Figure 8, and performance indexes are tabulated in Table 5. It can be seen that their performance is practically the same; numerical values are often equal and, when not, differ only in the second decimal place. It is reasonable to say that neural networks and random forests are equally good at modeling the data; since neural networks require fewer parameters, we conclude that they should be the preferred method to carry out the prediction. Furthermore, the results reported in [38,39] are outperformed.
The results show that Experiment 2, using human factors as inputs, achieves better results than Experiment 1, the inputs of which are the three variables from the reports’ metadata. It is true that Experiments 3 and 4, combining human factors with one of the three variables or with all of them, present slight improvements when compared to Experiment 2. Such improvements, however, are marginal and not consistent since for some indicators, Experiment 2 outperforms all others.
These results are not surprising, since human factors are present in a vast majority of the database reports; that is why they can play a role in fatality prediction.
The statements above are confirmed by the results obtained with random forests, given in Table 5. Consequently, it is possible to conclude that human factors are the most important predictor of fatalities.
Finally, these results ought to be compared with those from [7], shown in Table 6. The similarity of the values confirms that the good results obtained are not outliers. From [7], it could be argued that human factors were useful in predicting fatalities. This paper's novelty is the confirmation that they are the most important variable and can in fact be used as the only one for this purpose.
To conclude, neural networks are shown to dissect and interpret complex aviation data with the objective of predicting the likelihood of fatalities in real aviation accidents. The inherent ability of neural networks to learn and adapt from the data they process significantly bolsters their predictive accuracy. This attribute is of paramount importance in the context of aviation safety as it enables the identification of risk patterns and contributing factors in accidents. This methodology extends beyond basic analytical capabilities; it offers a nuanced comprehension of the factors that culminate in fatalities. Such an understanding is instrumental in formulating targeted safety measures and policies. By leveraging the computational power of neural networks, our ultimate goal is to enhance the industry’s capacity to preempt and mitigate the risks associated with various flight phases and accident types, thereby fostering a safer aviation environment.

6. Conclusions

This paper shows the importance of human factors in predicting whether or not there will be fatalities in aviation accidents. These results allow the industry to adopt measures that will address the relevant factors, and thus mitigate the probability of deaths.
The novelty of these results, when compared for instance with those in [1], is that we show how, using human factors alone, without further inputs, good predictions can be obtained about whether or not there will be fatalities.
Future work includes using a larger period corresponding to more recent reports. Increasing the size of the database is expected to allow not only for more accurate results, but also for the identification of the factors that are more relevant to prevent fatalities. Finally, it is important to address a more advanced question: Given the human factors involved in an accident in which there were fatalities, is it possible to predict how many fatalities there were?

Author Contributions

Conceptualization, D.V. and R.M.; methodology, F.L.L., D.V. and R.M.; software, F.L.L. and R.P.R.N.; validation, R.M., D.V. and L.F.F.M.S.; investigation, F.L.L.; resources, R.M.; data curation, F.L.L. and R.M.; writing—original draft preparation, F.L.L.; writing—review and editing, D.V., R.M. and L.F.F.M.S.; visualization, F.L.L.; supervision, D.V. and R.M.; project administration, D.V.; funding acquisition, R.M. All authors have read and agreed to the published version of the manuscript.

Funding

The author Flávio Lázaro acknowledges a scholarship from Projecto de Desenvolvimento de Ciência e Tecnologia, from MESCTI, number 011/D-UL/PDCT-M003/2022. This work was supported by FCT, through IDMEC, under LAETA, project UIDB/50022/2020; and supported by FCT, through AEROG, under LAETA, project UIDB/50022/2020, project UIDP/50022/2020, and project LA/P/0079/2020.

Data Availability Statement

The data presented in this study are openly available on https://aviation-safety.net/ (accessed on 25 May 2023).

Acknowledgments

The authors would like to acknowledge Harro Ranter for kindly providing access to the Aviation Safety Network (ASN) Database for this research.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Santos, L.F.; Melicio, R. Stress, pressure and fatigue on aircraft maintenance personal. Int. Rev. Aerosp. Eng. 2019, 12, 35–45. [Google Scholar] [CrossRef]
  2. Madeira, T.; Melicio, R.; Valério, D.; Santos, L. Machine learning and natural language processing for prediction of human factors in aviation incident reports. Aerospace 2021, 8, 47. [Google Scholar] [CrossRef]
  3. Shappell, S.A.; Wiegmann, D.A. The Human Factors Analysis and Classification System (HFACS); Report Number DOT/FAA/AM-00/7; Federal Aviation Administration: Washington, DC, USA, 2000.
  4. Kharoufah, H.; Murray, J.; Baxter, G.; Wild, G. A review of human factors causations in commercial air transport accidents and incidents: From 2000 to 2016. Prog. Aerosp. Sci. 2018, 99, 1–13. [Google Scholar] [CrossRef]
  5. ICAO. The World of Air Transport in 2018. Available online: https://www.icao.int/annual-report-2018/Pages/the-world-of-air-transport-in-2018.aspx (accessed on 16 May 2023).
  6. ICAO. Safety Management Manual, 3rd ed.; DOC 9859 AN/474; International Civil Aviation Organization: Montreal, QC, Canada, 2013. [Google Scholar]
  7. Nogueira, R.P.R.; Melicio, R.; Valério, D.; Santos, L.F.F.M. Learning methods and predictive modeling to identify failure by human factors in the aviation industry. Appl. Sci. 2023, 13, 4069. [Google Scholar] [CrossRef]
  8. Reason, J. Human Error; Cambridge University Press: Cambridge, UK, 1990. [Google Scholar] [CrossRef]
  9. Helmreich, R.L.; Foushee, H.C. Chapter 1—Why CRM? Empirical and theoretical bases of human factors training. In Crew Resource Management, 3rd ed.; Kanki, B.G., Anca, J., Chidester, T.R., Eds.; Academic Press: Cambridge, MA, USA, 2019; pp. 3–52. ISBN 9780128129951. [Google Scholar] [CrossRef]
  10. Mengtao, L.; Fan, L.; Gangyan, X.; Su, H. Leveraging eye-tracking technologies to promote aviation safety- a review of key aspects, challenges, and future perspectives. Saf. Sci. 2023, 168, 106295. [Google Scholar] [CrossRef]
  11. ICAO. Annex 19 to the Convention on International Civil Aviation—Safety Management; ICAO: Montréal, QC, Canada, 2013. [Google Scholar]
  12. Dekker, S.; Hollnagel, E. Human factors and folk models. Cogn. Technol. Work 2004, 6, 79–86. [Google Scholar] [CrossRef]
  13. EASA. ICAO Annex 19. In Safety Management, International Standards and Recommended Practices; European Aviation Safety Agency: Cologne, Germany, 2016. [Google Scholar]
  14. ICAO. Human Factors Guidelines for Aircraft Maintenance Manual, 1st ed.; Doc. 9824-AN/450; International Civil Aviation Organization: Montreal, QC, Canada, 2003. [Google Scholar]
  15. Pereira, D.P.; Gomes, I.L.; Melicio, R.; Mendes, V.M. Planning of aircraft fleet maintenance teams. Aerospace 2021, 8, 140. [Google Scholar] [CrossRef]
  16. Dias, N.G.; Santos, L.F.; Melicio, R. Aircraft maintenance professionals: Stress, pressure and fatigue. In Proceedings of the 9th EASN International Conference on Innovation in Aviation and Space, Athens, Greece, 3–6 September 2019; Volume 304, pp. 1–7, 06001. [Google Scholar]
  17. IATA. Safety Report 2017; IATA: Montreal, QC, Canada, 2018. [Google Scholar]
  18. Kanki, G.B.; Helmreich, L.R.; Anca, J. Crew Resource Management, 2nd ed.; Elsevier Inc.: Amsterdam, The Netherlands, 2010. [Google Scholar]
  19. Boeing. Available online: https://www.boeing.com/commercial/aeromagazine/aero_08/human_textonly.html (accessed on 17 May 2023).
  20. Morley, F.J.J.; Harris, D. Ripples in a pond: An open system model of the evolution of safety culture. Int. J. Occup. Saf. Ergon. 2006, 12, 3–15. [Google Scholar] [CrossRef]
  21. Rashid, H.; Place, C.; Braithwaite, G. Helicopter maintenance error analysis: Beyond the third order of the HFACS-ME. Int. J. Ind. Ergon. 2010, 40, 636–647. [Google Scholar] [CrossRef]
  22. Wan, M.; Liang, Y.; Yan, L.; Zhou, T. Bibliometric analysis of human factors in aviation accident using MKD. IET Image Process. 2021, 1–9. [Google Scholar] [CrossRef]
  23. Amaral, Y.; Santos, L.F.F.M.; Valério, D.; Melicio, R.; Barqueira, A. Probabilistic and statistical analysis of aviation accidents. J. Phys. Conf. Ser. 2023, 2526, 012107. [Google Scholar] [CrossRef]
  24. Silva, M.; Santos, L.F.F.M.; Melicio, R.; Valério, D.; Rocha, R.; Barqueira, A.; Brito, E. Aviation’s approach towards pilots’ mental health: A review. Int. Rev. Aerosp. Eng. 2022, 15, 294–307. [Google Scholar] [CrossRef]
  25. Samarra, J.; Santos, L.F.F.M.; Barqueira, A.; Melicio, R.; Valério, D. Uncovering the hidden correlations between socioeconomic indicators and aviation accidents in the United States. Appl. Sci. 2023, 13, 7997. [Google Scholar] [CrossRef]
  26. Jiao, Y.; Dong, J.; Han, J.; Sun, H. Classification and Causes Identification of Chinese Civil Aviation Incident Reports. Appl. Sci. 2022, 12, 10765. [Google Scholar] [CrossRef]
  27. Lyu, T.; Song, W.; Du, K. Human factors analysis of air traffic safety based on HFACS-BN model. Appl. Sci. 2019, 9, 5049. [Google Scholar] [CrossRef]
  28. Jasra, S.K.; Valentino, G.; Muscat, A.; Camilleri, R. Hybrid machine learning–statistical method for anomaly detection in flight data. Appl. Sci. 2022, 12, 10261. [Google Scholar] [CrossRef]
  29. Shappell, S.; Wiegmann, D.A. A Human Error Approach to Aviation Accident Analysis: The Human Factors Analysis and Classification System; Ashgate: Aldershot, UK, 2003. [Google Scholar]
  30. Wiegmann, D.A.; Shappell, S.A. A Human Error Analysis of Commercial AVIATION Accidents Using the Human Factors Analysis and Classification System (HFACS); Technical Report; Office of Aviation Medicine, FAA: Daytona Beach, FL, USA, 2001. [Google Scholar]
  31. Chauvin, C.; Lardjane, S.; Morel, G.; Clostermann, J.-P.; Langard, B. Human and organisational factors in maritime accidents: Analysis of collisions at sea using the HFACS. Accid. Anal. Prev. 2013, 59, 26–37. [Google Scholar] [CrossRef]
  32. Ranter, H.; Lujan, F.I. Aviation Safety Network. 2016. Available online: https://aviation-safety.net/about/ (accessed on 25 May 2023).
  33. Schmidt, J.; Schmorrow, D.; Figlock, R. Human factors analysis of naval aviation maintenance related mishaps. Proc. Hum. Factors Ergon. Soc. Annu. Meet. 2000, 44, 775–778. [Google Scholar] [CrossRef]
  34. Seliya, N.; Khoshgoftaar, T.M. Active learning with neural networks for intrusion detection. In Proceedings of the 2010 IEEE International Conference on Information Reuse & Integration, Las Vegas, NV, USA, 4–6 August 2010; pp. 49–54. [Google Scholar] [CrossRef]
  35. Settles, B. Active Learning Literature Survey; Technical Report 1648; Department of Computer Sciences, University of Wisconsin—Madison: Madison, WI, USA, 2009. [Google Scholar]
  36. Grinsztajn, L.; Oyallon, E.; Varoquaux, G. Why do tree-based models still outperform deep learning on typical tabular data? Adv. Neural Inf. Process. Syst. 2022, 35, 507–520. [Google Scholar]
  37. Chen, B.; Deng, W.; Du, J. Noisy softmax: Improving the generalization ability of DCNN via postponing the early softmax saturation. In Proceedings of the 2017 IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2017; IEEE: Piscataway, NJ, USA, 2017; pp. 4021–4030. [Google Scholar]
  38. Shao-Yu, L.; Hui-Chin, T.; Wang, Y. The Study on the Prediction Models of Human Factor Flight Accidents by Combining Fuzzy Clustering Methods and Neural Networks. J. Aeronaut. Astronaut. Aviat. 2018, 50, 175–186. [Google Scholar] [CrossRef]
  39. Harris, D.; Li, W. Using Neural Networks to Predict HFACS Unsafe Acts from the Pre-Conditions of Unsafe Acts. Ergonomics 2017, 62, 1–30. [Google Scholar] [CrossRef]
Figure 1. Evolution of the causes of aviation accidents (adapted from [2,6]).
Figure 2. Human Factors Analysis and Classification System (adapted from [2,30]).
Figure 3. The four experiments to predict fatalities and their inputs.
Figure 4. Evaluation of the performance of each experiment for different training and validation dataset splits, using the multilayer perceptron.
Figure 5. Loss function of (a) Experiment 1, (b) Experiment 2, (c) Experiment 3, and (d) Experiment 4 with the multilayer perceptron algorithm.
Figure 6. Accuracy of (a) Experiment 1, (b) Experiment 2, (c) Experiment 3, and (d) Experiment 4 with the multilayer perceptron algorithm.
Figure 7. Confusion matrix of each experiment via neural network: (a) Experiment 1, (b) Experiment 2, (c) Experiment 3, and (d) Experiment 4.
Figure 8. Confusion matrix of each experiment via random forest: (a) Experiment 1, (b) Experiment 2, (c) Experiment 3, and (d) Experiment 4.
Table 1. Risk matrix showing different levels of risk acceptability in different colors (adapted from [11]).

Risk Probability       | Catastrophic (A) | Danger (B) | Major (C) | Minor (D) | Insignificant (E)
Frequent—5             | 5A | 5B | 5C | 5D | 5E
Occasional—4           | 4A | 4B | 4C | 4D | 4E
Remote—3               | 3A | 3B | 3C | 3D | 3E
Improbable—2           | 2A | 2B | 2C | 2D | 2E
Extremely improbable—1 | 1A | 1B | 1C | 1D | 1E
Table 2. Human factors, labeled from the contributory causes in the database, with their respective frequency in the database.

#  | Factor According to HFACS-ME | Number of Cases
1  | Adverse mental state         | 73
2  | Adverse physiological state  | 19
3  | Group resource management    | 11
4  | Dated/Uncertified Equipment  | 67
5  | Decision Error               | 62
6  | Exceptional Violation        | 20
7  | Failed to Fix Known Issue    | 12
8  | Inaccessible                 | 2
9  | Inadequate Design            | 42
10 | Inadequate Documentation     | 36
11 | Inadequate supervision       | 63
12 | Inappropriate Operations     | 76
13 | Infraction                   | 1
14 | Lighting                     | 1
15 | Operational process          | 12
16 | Misperception                | 131
17 | Personal Readiness           | 134
18 | Physical environment         | 216
19 | Physical/mental limitations  | 18
20 | Plan for improper operation  | 1
21 | Resource management          | 9
22 | Routine                      | 114
23 | Routine Violation            | 39
24 | Rule                         | 72
25 | Skill                        | 29
26 | Skill-Based Error            | 104
27 | Supervisory violation        | 15
28 | Technological Environment    | 19
29 | Training                     | 4
30 | Problem not fixed            | 8
Table 3. Confusion matrix for binary classification: 0, no fatality; 1, fatality or fatalities.

Predicted Value | True Value 0   | True Value 1
0               | True Negative  | False Negative
1               | False Positive | True Positive
Table 4. Performance indexes of each experiment processed via neural network.

Type of Learning                    | Performance | No Fatality | Fatality | Macro Avg. | Weighted Avg.
Multilayer Perceptron, Experiment 1 | Precision   | 0.91 | 0.76 | 0.83 | 0.87
                                    | Recall      | 0.94 | 0.69 | 0.81 | 0.88
                                    | F1-Score    | 0.92 | 0.72 | 0.82 | 0.88
                                    | Accuracy    | 0.88
Multilayer Perceptron, Experiment 2 | Precision   | 0.92 | 0.79 | 0.86 | 0.89
                                    | Recall      | 0.94 | 0.75 | 0.84 | 0.89
                                    | F1-Score    | 0.93 | 0.77 | 0.85 | 0.89
                                    | Accuracy    | 0.89
Multilayer Perceptron, Experiment 3 | Precision   | 0.93 | 0.80 | 0.86 | 0.90
                                    | Recall      | 0.94 | 0.76 | 0.85 | 0.90
                                    | F1-Score    | 0.94 | 0.78 | 0.86 | 0.90
                                    | Accuracy    | 0.90
Multilayer Perceptron, Experiment 4 | Precision   | 0.91 | 0.88 | 0.89 | 0.90
                                    | Recall      | 0.97 | 0.69 | 0.83 | 0.90
                                    | F1-Score    | 0.94 | 0.77 | 0.85 | 0.90
                                    | Accuracy    | 0.90
Table 5. Performance indexes of each experiment processed via random forest.

Type of Learning            | Performance | No Fatality | Fatality | Macro Avg. | Weighted Avg.
Random Forest, Experiment 1 | Precision   | 0.91 | 0.76 | 0.83 | 0.87
                            | Recall      | 0.94 | 0.69 | 0.81 | 0.88
                            | F1-Score    | 0.92 | 0.72 | 0.82 | 0.88
                            | Accuracy    | 0.88
Random Forest, Experiment 2 | Precision   | 0.92 | 0.79 | 0.86 | 0.89
                            | Recall      | 0.94 | 0.75 | 0.84 | 0.90
                            | F1-Score    | 0.93 | 0.77 | 0.85 | 0.89
                            | Accuracy    | 0.90
Random Forest, Experiment 3 | Precision   | 0.92 | 0.79 | 0.86 | 0.89
                            | Recall      | 0.94 | 0.75 | 0.84 | 0.90
                            | F1-Score    | 0.93 | 0.77 | 0.85 | 0.89
                            | Accuracy    | 0.90
Random Forest, Experiment 4 | Precision   | 0.90 | 0.87 | 0.89 | 0.89
                            | Recall      | 0.97 | 0.65 | 0.81 | 0.90
                            | F1-Score    | 0.93 | 0.74 | 0.84 | 0.89
                            | Accuracy    | 0.90
Table 6. Performance indexes of the models reported in [7].

Type of Learning      | Performance | No Fatality | Fatality | Macro Avg. | Weighted Avg.
Multilayer Perceptron | Precision   | 0.89 | 0.75 | 0.80 | 0.83
                      | Recall      | 0.92 | 0.59 | 0.76 | 0.83
                      | F1-Score    | 0.90 | 0.66 | 0.77 | 0.83
                      | Accuracy    | 0.83
Random Forest         | Precision   | 0.92 | 0.84 | 0.88 | 0.89
                      | Recall      | 0.94 | 0.77 | 0.86 | 0.90
                      | F1-Score    | 0.93 | 0.80 | 0.87 | 0.89
                      | Accuracy    | 0.90
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
