### *3.7. Analysis of SME Survey*

NVivo 12 Plus, a qualitative data analysis software package, was the primary software used for the survey analysis. The responses to each survey question were downloaded from Qualtrics in Microsoft Word format and uploaded to the software as separate projects. Each project was analysed individually to identify recurring words and themes in the responses to that question. The responses were intended to inform the methodology used and to provide in-depth information and guidance for the study. The analysis was not restricted to responses that corresponded directly to the questions asked, so that new and important information was not missed. This is an inductive analysis method, which allows researchers to code data without bias [41].
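As a rough illustration of the recurring-word pass that NVivo performs, the sketch below counts word frequencies across a small set of open-ended responses. The response text and stop-word list are hypothetical and serve only to show the idea; NVivo's own word-frequency query was used in the study itself.

```python
from collections import Counter
import re

# Hypothetical open-ended responses to one survey question exported from Qualtrics
responses = [
    "Maintenance errors are rarely tracked once the incident report is closed",
    "Incident reports rarely capture the underlying maintenance errors",
    "Tracking maintenance errors needs a dedicated taxonomy",
]

# Minimal stop-word list for the illustration; NVivo applies a fuller built-in list
STOPWORDS = {"are", "the", "a", "is", "once", "needs"}

counts = Counter()
for response in responses:
    words = re.findall(r"[a-z']+", response.lower())
    counts.update(w for w in words if w not in STOPWORDS)

# The most frequently recurring words point to candidate themes for coding
print(counts.most_common(5))
```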

### *3.8. Evaluation of Research Rigour*

According to Brink, a valid study demonstrates what is in existence, provides a valid measurement and measures what it was created to measure, while a reliable study should produce the same results consistently when repeated by a different researcher [42]. However, these terms are difficult to apply to qualitative research methods compared with quantitative research methods. Rigour is a more suitable term for assessing the validity and reliability of a qualitative research method [43].

Although Liamputtong and Ezzy argue that coherence between different researchers does not perfectly verify the reliability and validity of qualitative research, as stated above, it is nonetheless important for providing meaningful information. Inter-rater reliability, or inter-rater concordance, can be used to assess the level of coherence between two or more researchers, and Cohen's kappa is the most commonly used measure of this agreement [44].

To assess the proportion of agreement corrected for chance, Cohen's kappa was used in this study. Equation (1) shows how it is derived; Equations (2) and (3) show how its components are determined.

$$\kappa = \frac{P\_o - P\_e}{1 - P\_e} \tag{1}$$

$$P\_o = \frac{\sum\_{i=1}^{n} R\_i}{n} \tag{2}$$

$$P\_e = \frac{\sum\_{i=1}^{n} \frac{c\_i \times r\_i}{n}}{n} \tag{3}$$

where κ = Cohen's kappa, *Po* = joint probability of agreement, *Pe* = chance agreement, *Ri* = rater agreement for rating *i*, *n* = total number of ratings, *ci* = column marginal and *ri* = row marginal.
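A minimal sketch of Equations (1)–(3) is shown below, assuming two raters have coded the same items with categorical labels; the example ratings are hypothetical and for illustration only, and the study itself computed kappa in SPSS, as described next.

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa for two raters' categorical codes (Equations (1)-(3))."""
    if len(rater_a) != len(rater_b) or not rater_a:
        raise ValueError("Both raters must code the same non-empty set of items")
    n = len(rater_a)

    # Equation (2): joint probability of agreement P_o - share of items coded identically
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n

    # Equation (3): chance agreement P_e from the column and row marginals
    col = Counter(rater_a)   # category counts for rater A (column marginals c_i)
    row = Counter(rater_b)   # category counts for rater B (row marginals r_i)
    p_e = sum(col[cat] * row[cat] / n for cat in col) / n

    # Equation (1): agreement corrected for chance
    return (p_o - p_e) / (1 - p_e)

# Hypothetical codes assigned by two raters to ten incident reports
researcher = ["A", "A", "B", "B", "C", "A", "B", "C", "C", "A"]
sme        = ["A", "B", "B", "B", "C", "A", "A", "C", "C", "A"]
print(f"kappa = {cohens_kappa(researcher, sme):.3f}")
```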

To evaluate the research rigour, SMEs from the NCAA coded a sample dataset using Hieminga's maintenance incident taxonomy. This dataset was selected from a single year, and all the information was cleaned. The SMEs from the AIB coded all the maintenance error accidents identified using the MxFACS taxonomy. The researcher's and SMEs' coding were then compared to determine Cohen's kappa using IBM SPSS V.25 statistics software.
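For readers without access to SPSS, the same statistic can be cross-checked with scikit-learn; the ratings below are the same hypothetical codes used in the earlier sketch.

```python
from sklearn.metrics import cohen_kappa_score

# Hypothetical codes assigned by the researcher and an SME to the same ten incidents
researcher = ["A", "A", "B", "B", "C", "A", "B", "C", "C", "A"]
sme        = ["A", "B", "B", "B", "C", "A", "A", "C", "C", "A"]

print(cohen_kappa_score(researcher, sme))  # approximately 0.70 for these ratings
```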
