2.2. Instruments
Participants completed the following questionnaires: a brief demographic survey, the SOC-13, the Connor–Davidson Resilience Scale (CD-RISC10) [28], the PTSD Checklist for DSM-5 (PCL-5) [29], and short forms of the Center for Epidemiological Studies Depression Scale (CES-D10) [30] and the trait scale of the Spielberger State–Trait Anxiety Inventory (STAI-T5) [31].
The SOC-13 is a 13-item measure of the extent to which respondents view the world as comprehensible, manageable and meaningful. Responses are made on a seven-point Likert scale. An example item from the SOC-13 is “Do you have the feeling that you are being treated unfairly?” A 1993 review by the author of the SOC-13 found estimates of internal consistency between 0.74 and 0.91 across 16 studies [1]. A more recent review (2017) found Cronbach’s alphas ranging from 0.70 to 0.92 across 127 studies [22]. The SOC-13 has been used with a sample of schoolteachers [32] and a sample of students in South Africa [33]; a reliability coefficient of 0.81 was reported for both samples.
The CD-RISC10 is a 10-item version of the original 25-item measure of resilience [34]. The items are rated on a five-point scale with anchors not true at all (0) and true nearly all the time (4). An example item is “I tend to bounce back after illness, injury or other hardships.” In the study that developed the short version of the scale, the authors reported a reliability coefficient of 0.85, and the ability of CD-RISC10 scores to moderate the relationship between childhood maltreatment and current psychiatric symptoms served as evidence of construct validity [28]. In a South African study with schoolteachers, the authors used classical test theory and IRT to investigate the psychometric properties of the CD-RISC10; they reported that the instrument was unidimensional and displayed satisfactory reliability (alpha = 0.95, Mokken scale reliability = 0.95), with sufficient evidence of construct, convergent and criterion-related validity [32].
The PCL-5 is a 20-item measure of the presence and severity of PTSD symptoms. Responses are made on a five-point scale ranging from not at all (0) to extremely (4). An example PCL-5 item is “How much have you been bothered by irritable behavior, angry outbursts, or acting aggressively?” In the original validation study, Blevins and colleagues reported a reliability coefficient of 0.94 and provided evidence of convergent and discriminant validity [29]. A South African study reported a reliability coefficient of 0.93 for the PCL-5 in a sample of university students [35].
The CES-D10 is a short form of the original 20-item CES-D and measures symptoms of depression [36]. It consists of 10 items scored on a four-point scale ranging from rarely or none of the time (0) to most or all of the time (3). Examples of items in the scale include “I felt that everything I did was an effort” and “I felt hopeful about the future”. The authors of the short form reported a reliability coefficient of 0.88 and found that the short form was highly correlated with the original 20-item version (r = 0.97). In addition, they reported that the short form was as accurate as the original version in classifying respondents with depressive symptoms. In South Africa, Baron and colleagues reported reliability coefficients for the CES-D10 ranging from 0.69 to 0.89 across different language groups [37].
The STAI-T5 is a five-item version of the original 20-item trait scale of the STAI [38]. Responses are made on a four-point scale ranging from not at all (1) to very much so (4). Examples of scale items are “I cannot get disappointments out of my mind” and “I worry about things that don’t matter”. The authors of the short form used IRT to derive a five-item unidimensional scale and reported a reliability coefficient of 0.86. In addition, the relationships between STAI-T5 scores and measures of depression, life satisfaction and self-esteem provided evidence of external validity [31].
2.4. Analysis
Responding to all items was mandatory: participants could not proceed to the next page of the online survey until they had responded to all items on the current page. Thus, there were no missing data. All classical test theory analyses were conducted with IBM SPSS for Windows Version 28 (IBM Corp., Armonk, NY, USA). These analyses included checks of whether the data were normally distributed (indices of skewness and kurtosis), EFA, descriptive statistics (means and standard deviations), assessments of reliability (alpha and omega) and intercorrelations between the study variables (Pearson r). With respect to the distribution of the data, skewness values between −2 and +2 and kurtosis values between −7 and +7 would indicate that the data are approximately normally distributed [39]. Factor loadings in the EFA > 0.40 [40] and item-total correlations between 0.30 and 0.70 [41] would indicate substantial correlations between the items and the latent construct, providing evidence of construct validity.
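These classical test theory checks can be sketched in Python; the sample size, item counts and data below are purely illustrative, not the study's:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
# Hypothetical responses: 200 respondents x 13 items on a 1-7 scale
items = rng.integers(1, 8, size=(200, 13)).astype(float)
totals = items.sum(axis=1)

# Distribution checks: skewness within -2..+2, kurtosis within -7..+7
skew = stats.skew(totals)
kurt = stats.kurtosis(totals)  # excess kurtosis (0 for a normal distribution)

# Cronbach's alpha: k/(k-1) * (1 - sum of item variances / variance of totals)
k = items.shape[1]
alpha = k / (k - 1) * (1 - items.var(axis=0, ddof=1).sum() / totals.var(ddof=1))

# Corrected item-total correlations (each item vs. the total of the remaining items)
item_total = [stats.pearsonr(items[:, i], totals - items[:, i])[0] for i in range(k)]
```

With real scale data the same computations yield the values reported by SPSS's reliability procedure.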
We used CFA to test three models of the factor structure of the SOC-13: a one-factor model, a bifactor model with one general factor and three specific factors, and a correlated three-factor model. For this purpose, we used IBM SPSS AMOS for Windows Version 28 (IBM Corp., Armonk, NY, USA). Model fit was assessed with the χ2 statistic; the goodness-of-fit index (GFI); the comparative fit index (CFI); the Tucker–Lewis index (TLI); the root mean square error of approximation (RMSEA); and a model comparison index, Akaike’s information criterion (AIC). Good fit would be indicated by a non-significant χ2 (which is, however, a test of perfect fit and is rarely achieved in practice [42]), GFI ≥ 0.95, CFI and TLI ≥ 0.90 and RMSEA ≤ 0.08. In terms of model comparison, the model with the lowest AIC value is considered the best model.
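To make the fit indices concrete, the sketch below computes them from χ2 results; the χ2 values, sample size and parameter count are hypothetical, not results from this study:

```python
# Hypothetical chi-square results (not values from this study)
chi2_m, df_m = 95.0, 62      # tested model
chi2_b, df_b = 1450.0, 78    # baseline (independence) model
n = 350                      # sample size
q = 29                       # number of estimated parameters (hypothetical)

# CFI compares the model's misfit with the baseline model's misfit
cfi = 1 - max(chi2_m - df_m, 0) / max(chi2_b - df_b, chi2_m - df_m, 0)

# TLI penalizes model complexity via the chi-square/df ratios
tli = (chi2_b / df_b - chi2_m / df_m) / (chi2_b / df_b - 1)

# RMSEA: misfit per degree of freedom, adjusted for sample size
rmsea = (max(chi2_m - df_m, 0) / (df_m * (n - 1))) ** 0.5

# AIC for model comparison: lower values indicate the preferred model
aic = chi2_m + 2 * q
```

For these hypothetical inputs, CFI and TLI exceed 0.90 and RMSEA falls below 0.08, so the model would be judged to fit well under the criteria above.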
Since CFA only confirms the structure of an instrument and not whether the identified subscales explain a sufficient amount of variance in the items beyond that explained by the total scale, we used a freely available online Excel calculator to compute ancillary bifactor indices [43]. These included explained common variance (ECV: the amount of common variance explained by the general and specific factors, respectively) [44], omega (ω: a model-based estimate of reliability), omegaH (ωH: the amount of variance in total scores explained by the general factor) [45], the percentage of uncontaminated correlations (PUC: the percentage of correlations between item pairs that are influenced only by the general factor) and the construct replicability coefficient (H: “the correlation between a factor and an optimally weighted item-composite”, p. 230) [46]. For specific factors, ωH is the amount of variance explained by the specific factor after the variance attributable to the general factor has been removed. While there are general guidelines for evaluating each of these indices individually, Reise and colleagues suggested that PUC, ECV and ωH should be considered together [47]. Specifically, when PUC < 0.80, ECV > 0.60 and ωH > 0.70, the instrument under consideration can be regarded as essentially unidimensional, despite the presence of some multidimensionality [47]. In addition, a construct replicability coefficient > 0.80 reflects a well-defined latent variable [46].
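All of these ancillary indices can be computed directly from standardized bifactor loadings. The sketch below uses hypothetical loadings for 13 items in three specific-factor groups (illustrative values only, not loadings obtained in this study):

```python
import numpy as np

# Hypothetical standardized bifactor loadings for 13 items:
# each item loads on the general factor and on one of three specific factors.
general = np.array([.55, .60, .50, .65, .58, .62, .48, .70, .52, .66, .57, .61, .54])
specific = np.array([.30, .25, .35, .20, .28, .33, .40, .22, .31, .26, .29, .24, .36])
groups = [list(range(0, 5)), list(range(5, 9)), list(range(9, 13))]  # items per specific factor

k = len(general)
# ECV: proportion of common variance explained by the general factor
ecv = (general**2).sum() / ((general**2).sum() + (specific**2).sum())

# omega and omegaH from the factor-analytic decomposition of total-score variance
uniq = 1 - general**2 - specific**2                     # item uniquenesses
total_var = general.sum()**2 + sum(specific[g].sum()**2 for g in groups) + uniq.sum()
omega = (general.sum()**2 + sum(specific[g].sum()**2 for g in groups)) / total_var
omega_h = general.sum()**2 / total_var                  # general factor only

# PUC: proportion of item pairs whose correlation reflects only the general factor
all_pairs = k * (k - 1) / 2
within = sum(len(g) * (len(g) - 1) / 2 for g in groups)
puc = (all_pairs - within) / all_pairs

# H: construct replicability of the general factor
h = 1 / (1 + 1 / (general**2 / (1 - general**2)).sum())
```

The Excel calculator cited above implements the same formulas; the code merely shows where each index comes from.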
We used the monotone homogeneity model (MHM) in MSA to examine the dimensionality of the SOC-13 from an IRT perspective. MSA was conducted using the package “mokken” [48] in R [49]. The MHM has three assumptions: unidimensionality (the items are indicators of a single latent construct), local independence (conditional on a person’s latent trait value, the responses to different items are assumed to be independent, meaning that the latent trait is the only source of the relationship between responses) and monotonicity (the likelihood of endorsing an item increases as the latent variable increases).
MSA uses an automated item selection procedure (AISP) to indicate whether an item is unscalable (0), loads on a single scale (1) or loads on multiple scales (as many values as there are scales). MSA also provides a scalability coefficient, Mokken H, indicating the strength of the scale: H below 0.40 indicates a weak scale, H between 0.40 and 0.50 a medium scale and H above 0.50 a strong scale [50]. In addition, a scalability coefficient Hi is provided for each individual item, reflecting the extent to which that item contributes to the measurement of the latent construct. Hi values lower than 0.30 indicate items that do not usefully contribute to the measurement of the latent construct [24]. If a scale is unidimensional, that also implies local independence [25]. Violations of the assumption of monotonicity are evaluated in MSA using a Crit value. Sijtsma and van der Ark indicated that Crit values above 80 represent serious violations of monotonicity, while values below 80 indicate minor and acceptable violations [51]. Lastly, MSA also provides an estimate of internal consistency, MS rho.
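To make the scalability coefficients concrete, the sketch below computes Loevinger's H and the item Hi coefficients for simulated dichotomous items. (The SOC items are polytomous and were analysed with the R mokken package; this simplified dichotomous version only illustrates the logic: observed covariances relative to the maximum covariances the item marginals allow.)

```python
import numpy as np

rng = np.random.default_rng(7)
# Simulated dichotomous responses driven by one latent trait,
# so the items should form a Mokken scale (illustrative only).
n, k = 500, 6
theta = rng.normal(size=n)                    # latent trait values
difficulty = np.linspace(-1.5, 1.5, k)        # item difficulties
x = (theta[:, None] + rng.normal(size=(n, k)) > difficulty).astype(float)

p = x.mean(axis=0)                            # item popularities
cov = np.cov(x, rowvar=False, ddof=0)         # observed item covariances
cov_max = np.minimum.outer(p, p) - np.outer(p, p)  # max covariance given the marginals

# Scale H: summed covariances over summed maximum covariances (pairs i < j)
iu = np.triu_indices(k, 1)
H = cov[iu].sum() / cov_max[iu].sum()

# Item Hi: the same ratio restricted to pairs involving item i
Hi = [(cov[i].sum() - cov[i, i]) / (cov_max[i].sum() - cov_max[i, i]) for i in range(k)]
```

Because the simulated items share a single latent trait, H and all Hi values come out positive, as the MHM predicts.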
To confirm the unidimensionality of the SOC-7, we used the more stringent Rasch analysis. In Rasch analysis, the dimensionality of an instrument is assessed with a principal component analysis (PCA) of the residuals after the presumed latent trait has been removed. If the eigenvalue associated with a presumed second dimension is greater than 2, the instrument is likely multidimensional. Rasch analysis also provides indices of the extent to which items fit the Rasch model, the infit and outfit mean squares (MnSq). In general, MnSq values below 0.5 or above 1.5 indicate misfitting items [52]. We also used differential item functioning (DIF) analysis in Rasch to assess measurement invariance in terms of gender and area of residence (rural/urban). A DIF contrast of less than 0.50 logits would indicate measurement invariance [52]. The Rasch analyses were conducted with Winsteps Version 5.6.0 [53].
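The infit and outfit statistics have a simple form for dichotomous Rasch data. The sketch below simulates responses from known (rather than estimated) person and item parameters, a simplification relative to Winsteps, so both statistics should be close to their expected value of 1:

```python
import numpy as np

rng = np.random.default_rng(11)
# Simulated dichotomous Rasch data: P(X=1) = exp(theta - b) / (1 + exp(theta - b))
n, k = 400, 8
theta = rng.normal(size=n)             # person abilities
b = np.linspace(-2, 2, k)              # item difficulties
prob = 1 / (1 + np.exp(-(theta[:, None] - b)))
x = (rng.random((n, k)) < prob).astype(float)

# Squared residuals and item information (binomial variance)
resid2 = (x - prob) ** 2
info = prob * (1 - prob)

# Outfit MnSq: unweighted mean of squared standardized residuals per item
outfit = (resid2 / info).mean(axis=0)
# Infit MnSq: information-weighted mean square per item
infit = resid2.sum(axis=0) / info.sum(axis=0)
```

Because the data are generated from the Rasch model itself, all items fall inside the 0.5–1.5 acceptance range noted above.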
To examine the criterion-related validity of the SOC, we obtained the zero-order correlations between SOC, resilience, depression, anxiety and PTSD. We predicted that SOC would be positively related to resilience, which the literature has also identified as playing a health-protective role, and negatively related to the indices of psychological distress (depression, anxiety and PTSD symptoms).
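The predicted correlation pattern can be illustrated with simulated scale totals (hypothetical data constructed only to show the sign predictions, not the study's results):

```python
import numpy as np

rng = np.random.default_rng(3)
# Simulated scale totals consistent with the predictions:
# SOC should correlate positively with resilience, negatively with distress.
n = 300
latent = rng.normal(size=n)
soc = latent + rng.normal(scale=0.8, size=n)
resilience = latent + rng.normal(scale=0.8, size=n)
depression = -latent + rng.normal(scale=0.8, size=n)

# Zero-order (Pearson) correlation matrix
r = np.corrcoef(np.column_stack([soc, resilience, depression]), rowvar=False)
r_soc_res, r_soc_dep = r[0, 1], r[0, 2]
```

With real data the same correlation matrix, computed in SPSS, tests these directional predictions.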