Article

Reliability and Exploratory Factor Analysis of a Measure of the Psychological Distance from Climate Change

College of Education, University of Georgia, Athens, GA 30602, USA
Climate 2024, 12(5), 76; https://doi.org/10.3390/cli12050076
Submission received: 31 March 2024 / Revised: 11 May 2024 / Accepted: 13 May 2024 / Published: 18 May 2024
(This article belongs to the Special Issue Anthropogenic Climate Change: Social Science Perspectives - Volume II)

Abstract

Psychological distance from climate change has emerged as an important construct in understanding sustainable behavior and attempts to mitigate and/or adapt to climate change. Yet, few measures exist to assess this construct and little is known about the properties of the existing measures. In this article, the author conducted two studies of a psychological distance measure developed by Wang and her colleagues. In Study 1, the author assessed the test–retest reliability of the measure over a two-week interval and found the scores to be acceptably stable over time. In Study 2, the author conducted two exploratory factor analyses, using different approaches to the correlation and factor extraction. Similar results were observed for each factor analysis: one factor was related to items that specified greater psychological distance from climate change; a second factor involved items that specified closeness to climate change; and a third involved the geographic/spatial distance from climate change. The author discussed the results and provided recommendations on ways that the measure may be used to research the construct of psychological distance from climate change.

1. Introduction

The psychological distance from climate change is a person’s subjective perception of the distance they are from climate change or an impact that follows from climate change [1,2,3]. In this definition, distance can involve a point in time, a geographical place or space, the social distance or impacts of climate change, and hypothetical distances [3]. As a subjective perception of the proximity of climate change or its impacts, psychological distance can encompass one or more of these dimensions singly or a combination of them [1]. Psychological distance from climate change is an application of the concepts of psychological distance and construal level theory (CLT) developed by Yaacov Trope and Nira Liberman in 2010 [4]. Spence and her colleagues first applied psychological distance to the topic of climate change in 2012, where the researchers developed a brief self-report scale of psychological distance and studied its relationships with other variables [5].
Since the emergence of the psychological distance from climate change construct over 10 years ago, researchers have examined its relationship with a number of variables that include, among others, efficacy in adapting to climate change [6], risk perceptions of climate change [6,7], mitigation and adaptation behaviors [8], direct experiences of climate change impacts [8,9], global identity salience [10], individual difference variables [9], and pro-environmental intentions and behaviors [1,11]. Within descriptive and exploratory studies, some researchers have assessed the perceptions of psychological distance qualitatively and linguistically [12,13]. A larger proportion of researchers have assessed the psychological distance from climate change quantitatively using Spence’s measure [5]. More recently, Susie Wang and her colleagues adapted Spence’s original 12-item measure of psychological distance from climate change by adding items that assessed psychological distance for single or combinations of time, space, social, and hypothetical dimensions; the full measure possesses 18 items [1].
The present article examines several psychometric characteristics of Wang’s measure of the psychological distance from climate change (PDCC), which the author refers to as the PDCC scale. The research in this article is important for several reasons, the first of which is that the PDCC scale, and the earlier version (Spence et al.) on which it is based, have been used frequently in quantitative studies [1,3,5]. Second, Spence’s original measure was developed using samples from Australia and Wang’s PDCC scale was evaluated using samples from the United Kingdom [1,5]. To date, no research has examined the functionality of the PDCC scale with a sample of respondents from the United States. Third, no existing research has examined the temporal stability (i.e., test–retest reliability) of any measure of the psychological distance from climate change. Fourth, although the underlying CLT posits four different dimensions of psychological distance, researchers frequently have treated the empirical measurement of psychological distance as unidimensional. In addition, Wang has reported that her PDCC scale is essentially unidimensional [1]. Thus, it becomes important to explore the number of latent variables underlying the observed items.
In the next section, the author first briefly reviews the concepts of CLT, psychological distance, and the psychological distance from climate change. This is followed by a review of the quantitative measurement of PDCC and the PDCC scale. The author then presents the results of two studies of the psychometric characteristics of the PDCC scale, the first of which examines the test–retest reliability. Temporal reliability in this regard is significant to assess because it necessarily limits the magnitude of subsequent validity coefficients one may expect to observe [14]. The second study undertakes an exploratory factor analysis of the PDCC scale items using two different approaches. After presenting descriptive statistical data based upon the factor analytic results, the author then describes the limitations of the studies and discusses the results in terms of the implications of the measurement of the psychological distance from climate change in future studies.

1.1. Construal Level Theory and Psychological Distance

Trope and Liberman based their construal level theory upon prior work in cognitive and personality psychology that dealt with how people categorize objects, events, and people [15,16]; how people form concepts [17]; and how they cognitively represent their actions [18]. Construal within CLT refers to people actively or implicitly constructing a cognitive representation of something. High-level construals pertain to something that is general, encompassing, and correspondingly more abstract. Alternatively, low-level construals are more circumscribed, specific, and concrete [4]. “Construal levels refer to the perception of what will (or did) occur: the processes that give rise to the (cognitive) representations of the event itself” [4] (p. 442) (author added parenthetical terms).
In contrast, according to Trope and Liberman [4] (p. 442), “[p]sychological distance refers to the perception of when an event occurs, where it occurs, to whom it occurs, and whether it occurs”. Here, perception is the person’s subjective experience that an event is proximal or distal to the self in the present time [4]. The terms in the quote refer to the four dimensions of psychological distance: when (temporal distance), where (spatial distance), to whom (social distance), and whether the event occurs (hypothetical distance). The four dimensions of psychological distance are oblique to each other, rather than orthogonal. Thus, events usually find meaningful representation on the four dimensions taken together. Generally, events that are experienced as psychologically close involve low-level construals—things that are near, specific, concrete, and capable of being experienced. Conversely, the high-level construal of an event involves abstraction and generalization that can make it seem psychologically distant, especially in space and time [4].

1.2. Psychological Distance and Climate Change

Trope and Liberman developed CLT and psychological distance as general concepts that could find wide applications within cognitive and social psychology [4]. The nature of CLT and psychological distance makes them appealing constructs for understanding how people understand and perhaps respond to global climate change [3,5]. The global climate and the climate system [19,20] easily represent high-level construals given the impacts of anthropogenic forcing of the climate by greenhouse gases on a global scale and on longer timeframes. In addition, the global climate system, defined as the long-term thermodynamic (energy) and hydrodynamic (water) balances of the Earth, gives rise to the weather over various spatiotemporal scales [20,21]. Here, as something that is more immediate, sensible, and tangible, weather functions as a low-level construal. Weather and weather changes readily satisfy the criteria as events that develop over time and space [22,23]. In Trope and Liberman’s framework, climate is the (global) what that produces weather at smaller and more concrete scales where it is possible to ask about whether, when, where, and to whom [4]. In this way, global climate and climate change may be more distant psychologically from people than the immediate past, present, and/or upcoming forecasted weather. Support for this conceptualization of CLT and psychological distance within weather and climate is provided by several research groups that have inquired whether it is possible to experience climate immediately and whether the occurrence of severe or extreme weather events leads people to be more mindful of climate change [24,25,26,27]. In addition, there is some evidence that construal level and psychological distance may not always be highly correlated [28] and that experiencing climate change locally does not preclude being concerned about its effects on distant places and further in time [29].

1.3. Measuring the Psychological Distance from Climate Change

Thus far, CLT has strongly informed the measurement of psychological distance from climate change [1,3,5]. Spence, Poortinga, and Pidgeon developed the first set of survey items to assess the perceived distance from climate change incorporating the four dimensions two years after the original CLT paper by Trope and Liberman [4,5]. In addition to exploring the nature of psychological distance among a large sample of British respondents, the authors also desired to assess how psychological distance was related to concerns about climate change and sustainable behavior intentions [5]. Spence’s psychological distance measure included 10 items: 2 for geographic distance, 1 for temporal distance, 2 for social distance, and 5 items relating to uncertainty and skepticism about climate change (i.e., hypothetical distance). After its development, researchers used Spence’s measure in whole or in part to explore the relationships of psychological distance from climate change with other variables [1,7,11,30,31,32,33].
Wang and her colleagues adapted and supplemented Spence’s measure of psychological distance to investigate its relationships with construal level in an Australian sample [1]. In her modification, Wang added items to the spatial, temporal, and social dimensions so that each dimension was indicated by four items. In addition, two hybrid items were added, one of which assessed temporal and spatial distance together with the other assessing temporal and social distance. This resulted in 18 items that people responded to with a 1 (Strongly Disagree) to 5 (Strongly Agree) rating scale. The PDCC scale items appear in Appendix A (Table A1).
The PDCC scale items (PD1 in [1]) exhibited good internal consistency for the sample that responded to them (Cronbach’s α = 0.93). Here, it is possible that the additional items resulted in a more homogenous item pool compared with Spence’s initial measure and hence greater internal consistency [5]. In addition, increasing the number of items results in a larger sample of behavior from respondents from which to infer the underlying construct [14]. Wang’s principal components analysis revealed that all the items except one loaded onto a single component [1]. Again, the four dimensions of psychological distance were significantly intercorrelated.
The work of both Spence and Wang has been influential in focusing researchers’ attention upon CLT, psychological distance from climate change, and the relationship between these constructs [1,5]. At the time of this article, 747 researchers have cited Spence’s work and 83 have referenced the work of Wang according to the respective journals in which the research was published. Many researchers have used the items that Spence and Wang developed, in whole or in part, to assess one or more dimensions of the psychological distance from climate change [2,3,8,32,33,34,35,36,37,38,39,40,41]. With this ongoing interest in the measurement of the psychological distance from climate change, it becomes important to assess the psychometric characteristics of the measures that researchers employ. This is especially the case for the psychological distance from climate change because empirical results are used both to evaluate theory (i.e., CLT) and to suggest ways to decrease psychological distance so that people may behave in a more pro-environmental and sustainable manner [2]. The focus of the research in this article is to describe the psychometric characteristics of the PDCC scale items that Wang developed with a sample of undergraduate respondents from the United States. The author chose the PDCC scale items for further examination in this article because they were derived directly from the work of Spence and exhibited promising levels of internal consistency in prior research [1].

1.4. Research Questions

The author pursued two research questions in this article:
  • To what extent are the 18 items of the PDCC scale reliable over a two-week test–retest interval?
  • Given the correlation matrices of the PDCC items, how many latent variables are required to best reproduce these correlations? Relatedly, for the given latent variables, what are the relations (factor structure) of the measured variables to them?
Study 1 reports on the temporal stability of the PDCC items and Study 2 presents results of exploratory factor analyses of the items.

2. Study 1: Test–Retest Reliability

How reproducible are the responses that people make to the 18 items of the PDCC over a short interval of time? Establishing the test–retest (or replication) reliability of the PDCC scale can give researchers an indication of the extent to which people respond to the items in a stable manner [14,42]. In addition, knowledge of the test–retest reliability of a measure can help researchers when designing new research with the measure [43]. Although one’s perceived distance from climate change may be malleable by climate events that may occur in their locality or perhaps by a new report about the status of the global climate in the media, it is reasonable to expect that over a short period of time such as two weeks, the perceived distance from climate change would be stable. With such a test–retest interval, therefore, a low test–retest reliability coefficient could indicate problems with how people interpret and/or respond to the items (e.g., the items are ambiguous in meaning) [14,42]. The first study examined the test–retest reliability of the PDCC items over a two-week interval and estimated the internal consistency of the item set.

2.1. Materials and Methods

2.1.1. Participants and Procedures

The participant sample consisted of undergraduate students at a large state university located in the southeastern United States who were part of a research pool in a college of education. Participation in the study was voluntary. The incentive for the study was credit in the research pool. The participants were able to see the study along with other research alternatives online and then elected to participate. The participants would register for an initial session followed by one that was two weeks afterwards.
At the initial session, the participants received a written informed consent document along with the researcher’s explanation of it. After obtaining consent, the researcher distributed a packet of measures that people usually completed within 30 min. The packet contained several scales used in research involving the psychology of weather and climate in addition to the PDCC scale items. The order of the measures within the packets was randomized. The participants worked in a quiet setting and were asked to refrain from checking or using digital devices while completing the measurement packet.
The participants returned after two weeks and completed the measurement packet a second time. Again, the order of the measures was randomized. After completing the packets, the participants received a written debriefing form, and the session was concluded. The participants completed the measures in groups of approximately 5–15 people. The study procedures and measurement packets were reviewed and approved by the Institutional Review Board at the author’s university (approval: PROJECT00005184).

2.1.2. Data Analysis

The author used the R Statistics Package and the psych package for R for the data analyses in Study 1 [44,45]. The psych package allowed for the calculation of item-to-total score correlations, the Cronbach’s α for the scale if an item were removed, and the total scale (no items removed) value of α. Because Cronbach’s α has some limitations [46], the author also used the psych package to calculate McDonald’s total omega (ωt) [45,47]. To examine the test–retest reliability between the two administrations, the author used R to calculate the Pearson’s correlation (rtrt) and used psych to calculate the intraclass correlation (ricc). Finally, a within-subjects t-test was used to check for differences in the mean PDCC total scores (i.e., all items summed) across the test–retest interval.

2.2. Results

2.2.1. Sample Characteristics

The participants in Study 1 were 66 undergraduate students at a large public university in the southeastern United States. The participants ranged in age from 19 to 29 years (M = 21.1 years, SD = 1.35). The gender identifications were the following: 14% male, 85% female, and 1% gender-fluid or non-binary. Regarding the participants’ race, 68% were white/Caucasian, 14% African American/Black, 11% Asian, 3% Hispanic/Latino/a, and 4% multiracial or another race.

2.2.2. Item-to-Total Correlations, Internal Consistency, and Test–Retest Correlations

The item-level Cronbach’s alpha and item-to-total correlations appear in Table 1. No items were identified that, if removed from the scale, would noticeably improve Cronbach’s alpha. It was observed, however, that items 8 and 11 each exhibited lower associations with the total scores at each administration of items over the two-week test–retest interval. Items 8 (I can identify with victims of climate related disasters) and 11 (If climate change is to happen, it will happen in the remote future.) each correlated in the 0.20 to 0.30 range with the total scores, whereas most of the remaining items exhibited item-total correlations in the 0.50 to 0.80 range.
Overall, the items at each administration exhibited good internal consistency. At the first administration, α = 0.91, 95% CI: 0.87–0.93. Similarly at the second administration, α = 0.91, 95% CI: 0.87–0.94. McDonald’s [47] omega total (ωt) coefficient at each administration was 0.93, which suggested that the items formed an internally consistent set of indicators for assessing the psychological distance from climate change for the participants in the study.
To what extent were the total PDCC scale scores correlated with each other over the two-week test–retest interval? The Pearson correlation suggested that the items exhibited an acceptable level of temporal stability, rtrt = 0.80, p < 0.0001, 95% CI: 0.68–0.89. Similar results were observed when assessing the temporal stability through intraclass correlation, ricc = 0.80, p < 0.0001, 95% CI: 0.69–0.87. The author also evaluated the effects on the internal consistency and test–retest reliability of removing items 8 and 11. The internal consistency estimates increased by approximately 0.01; however, the test–retest reliabilities were unchanged. Finally, the mean PDCC total scale score at the first administration (M = 40.49, SD = 11.02) was not statistically different from the mean score at the second administration (M = 41.77, SD = 11.04) (t (65) = 1.15, p = 0.26).

2.3. Discussion

The participants in this study exhibited an acceptably good level of reliability when responding to the PDCC items over a two-week period with both the Pearson and the intraclass correlations at a value of approximately 0.80. Because the items were part of a larger reliability assessment effort that included several additional weather and climate measures whose presentation order was randomized, the likelihood that participants remembered or became familiar with the PDCC items seems low, although the general topics of weather and climate were probably salient. Beyond this, the test–retest reliability estimates, from the perspective of Classical True Score Theory [14], represent the squared value of the correlation between the observed scores and the true scores (latent construct scores). That is, with a test–retest reliability of 0.80, the observed scores correlate with the latent, true scores on this construct at a value of 0.90 (i.e., 0.90² ≈ 0.80). This suggests that the PDCC items function well in indicating the construct of psychological distance from climate change but that they do not measure or completely tap the construct, with the remaining proportion of variance being measurement error and/or some aspect of psychological distance that was not assessed.
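The Classical True Score Theory relation invoked here, that the correlation between observed and true scores equals the square root of the reliability, can be checked directly; a minimal illustration:

```python
import math

r_tt = 0.80                    # observed test–retest reliability coefficient
r_obs_true = math.sqrt(r_tt)   # estimated observed–true score correlation
# Squaring the observed–true correlation recovers the reliability:
# r_obs_true ** 2 == 0.80 (r_obs_true itself is approximately 0.894)
```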
It was noteworthy that two items (8 and 11) did not exhibit higher item-to-total score correlations. This was observed in both instances of the PDCC scale administration. It is possible, for item 8, that the term identify is ambiguous with respect to what it means for psychological distance from climate change. Also, it is possible that this term may be agreed with regardless of the proximity of climate change impacts. Another possibility is that this item has perhaps a social or interpersonal quality that results in its measuring something other than (or in addition to) psychological distance. Item 11 was interesting because people possibly could interpret it to mean that climate change has not yet occurred (If climate change is to happen….) or perhaps that its occurrence is still a matter to be determined for some individuals [48]. The performance of both items 8 and 11 will be evaluated with the larger sample of Study 2, which involved two exploratory factor analyses of these items.

3. Study 2: Exploratory Factor Analyses

The second study involved completing exploratory factor analyses (EFAs) on the PDCC items. As a relatively new measure with potential for use in subsequent studies, conducting an EFA on the PDCC items will allow for an assessment of the number of latent variables (factors) that are necessary to reproduce the pattern of observed PDCC item correlations. Wang and her colleagues developed the PDCC items to assess the four dimensions of psychological distance [1,4]. Are four latent variables thus necessary to model the intercorrelations of the PDCC items or might a smaller number of latent variables be necessary? Although the items ostensibly were written to correspond to the four dimensions, empirically, are four latent variables discernible and necessary? An EFA can assist in addressing these questions and in developing a measurement model for the PDCC items. Once this is determined, then researchers can subsequently assess this model with different samples of respondents using confirmatory factor analyses [49,50].
Wang and her colleagues used a sample of Australian respondents to conduct a principal components analysis (PCA) of the PDCC items [1]. PCA is not the same as EFA and has a different purpose from EFA, although the two techniques can produce similar results. PCA is useful as a data reduction technique for identifying a smaller number of factors that can represent, via linear composites, the observed variables [49]. In contrast, EFA assumes a common latent variable (factor) and assumes that measurement error exists in the observed variables [49]. Thus, as a tool for assessing and developing a measure of psychological distance from climate change, EFA is the procedure of choice [49,51]. Although Wang observed a single factor in her PCA of the PDCC items with Australian participants, would a similar result be obtained using EFA with participants in the southeastern United States?

3.1. Materials and Methods

3.1.1. Participants and Procedures

Like Study 1, the participant sample consisted of undergraduate students at a large state university located in the southeastern United States who were part of a research pool in a college of education. Participation in the study was voluntary. The incentive for the study was credit in the research pool. The participants were able to see the study along with other research alternatives online and then elected to participate. After signing up for the study online, the participants were then shown the informed consent document and then were directed to the project measures.
The Study 2 measures were administered online using the Qualtrics survey platform. In addition to the PDCC scale items, the participants completed other measures relating to the psychology of weather and climate as part of a larger research effort. The order of the measures was randomized. The participants completed the demographics items at the conclusion of the project, just before receiving their online debriefing and research pool credits. This study was conducted approximately a year after Study 1. The study procedures and measures were reviewed and approved by the Institutional Review Board at the author’s university (approval: PROJECT00005498).

3.1.2. Data Analysis

The author used the R statistics package to compute the sample and item descriptive statistics [44]. The author also used the psych package for R to compute the item-to-total statistics and internal consistency statistics (α, ωt) for the PDCC item set [45]. The MVN package in R was used to assess the univariate and multivariate normality of the PDCC items [52]. The EFAtools package in R provided for the calculation of Bartlett’s test of sphericity [53]. A histogram of scores was plotted using the ggplot2 package [54]. The author used the FACTOR program (release 12.04.01, May 2023) to perform the exploratory factor analyses [55]. The FACTOR program has been developed specifically to perform exploratory factor analyses and contains built-in features that allow one to follow the best practices for EFA [50,51,56]. This software is available free of charge at https://psico.fcep.urv.cat/utilitats/factor/ (accessed on 12 May 2023).
Given the exploratory scope of the factor analyses in this study, the author adopted a broad approach in evaluating the structure and performance of the PDCC items in their contributions to the measurement of the latent variables of psychological distance from climate change. The author conducted two different implementations of EFA that reflect some of the current thinking about the nature of self-response rating scales like that which was used in the PDCC.
First, researchers in psychometrics have argued that rating scales (i.e., 1 to 5 ratings) should be treated statistically as ordinal variables (ordered categories) rather than as metrical variables that possess equal divisions between rating points [51,57,58,59,60,61,62,63]. Making this assumption properly requires the calculation of polychoric correlations among the items. Such polychoric correlation matrices are best factor analyzed with factor extraction methods such as diagonally weighted least squares (DWLS) or unweighted least squares (ULS) [64,65]. Then, if two or more factors are retained for interpretation, they can be rotated obliquely to allow for some degree of intercorrelation as indicated by CLT and psychological distance research [4].
Second, other researchers in psychometrics have maintained that Pearson correlations work acceptably well with ordered data and that maximum likelihood (ML) factor extraction methods of Pearson correlation matrices can be used to produce an EFA [66,67]. The adherents of this approach cite the strengths of the ML algorithm and its wide use in the development of psychological instruments [67]. The ML method of factor extraction in EFA is limited, however, by the assumption that the observed data follow a multivariate normal distribution [49,68]. Further, the DWLS method can provide more accurate parameter estimates than the ML method when working with ordinal-level data that are non-normal [64,65].
In appreciating the contribution of each of these perspectives, the author conducted two factor analyses using the FACTOR program [55]: 1. Calculating polychoric correlations and then extracting the factors with DWLS methods. 2. Calculating Pearson correlations and extracting the factors with the ML method. In the case of multiple factors, which may correspond to the different dimensions of psychological distance from climate change, the author used the promin rotation to interpret the factors. The promin rotation allows for correlation among the factors [69].
The next analytic issue involved the number of latent variables (or factors) to retain in the EFAs. The use of heuristics such as “eigenvalues greater than one” or examining the leveling-off of scree plots is outdated, given the availability of better methods of determining the likely number of latent variables that exist in a measurement model [70]. Thus, at the inception of each EFA, the author used the FACTOR program to estimate the possible number of factors using parallel analyses, Hull’s method, and the Bayesian information criterion (BIC) [55]. In addition to these indices, in each factor analysis, the author used several indices of the EFA model fit that the FACTOR program provides: 1. the root mean square error of approximation (RMSEA); 2. the Tucker–Lewis index; 3. the root mean squared residual; and 4. the minimum fit chi-square statistic (χ2) [55,71,72]. These indices were useful in ascertaining the number of factors in the EFA that resulted in the optimal model fit for the sample that the author used.
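Parallel analysis, one of the retention methods named above, compares the eigenvalues of the observed correlation matrix against those from random data of the same dimensions and retains only factors whose observed eigenvalues exceed the random benchmark. The following is a hedged Python sketch of the basic idea, not the FACTOR program's implementation; the function name, the 95th-percentile criterion, and the defaults are illustrative assumptions:

```python
import numpy as np

def parallel_analysis(data, n_iter=100, seed=0):
    """Horn-style parallel analysis: count the factors whose observed
    correlation-matrix eigenvalues exceed the 95th percentile of
    eigenvalues from random normal data of the same (n, p) shape."""
    rng = np.random.default_rng(seed)
    n, p = data.shape
    # Observed eigenvalues, sorted in descending order
    obs_eig = np.linalg.eigvalsh(np.corrcoef(data, rowvar=False))[::-1]
    # Eigenvalues from repeatedly simulated uncorrelated data
    rand_eig = np.empty((n_iter, p))
    for i in range(n_iter):
        sim = rng.standard_normal((n, p))
        rand_eig[i] = np.linalg.eigvalsh(np.corrcoef(sim, rowvar=False))[::-1]
    threshold = np.percentile(rand_eig, 95, axis=0)
    return int(np.sum(obs_eig > threshold))
```

Applied to simulated data in which six items share a single latent factor, this procedure recovers one factor.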
Finally, the author reported indices of factor determinacy (FDI), factor simplicity (FSI), and construct replicability (i.e., the G-H index). The FDI indicates the extent to which factor score estimates represent the latent factor scores; generally, FDI values at or above 0.80 are considered adequate [72]. The construct replicability index assesses the ability of the items to indicate the latent variables; it also provides an indication of the possible replicability of the EFA factor solution in subsequent studies. Values of 0.80 or higher suggest an acceptable degree of representation and potential for replicability [72]. Bentler’s factor simplicity index ranges from 0 to 1 and indicates the extent to which the PDCC items in the factor pattern matrix were simple rather than complex; higher values of the index suggest that a simple solution was obtained [71].
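As a worked illustration of the construct replicability (G-H) index, Hancock and Mueller's H coefficient can be computed directly from standardized factor loadings. This sketch assumes the standard formula from the psychometric literature and is not taken from the FACTOR program's output or code:

```python
def hancock_h(loadings):
    """Hancock–Mueller H (construct replicability) from standardized
    factor loadings: H = 1 / (1 + 1 / sum(l^2 / (1 - l^2))).
    Values of 0.80 or higher are commonly read as adequate."""
    s = sum(l ** 2 / (1.0 - l ** 2) for l in loadings)
    return 1.0 / (1.0 + 1.0 / s)

# e.g., six items each loading 0.70 on one factor give H of roughly 0.85,
# just above the conventional 0.80 adequacy benchmark
```

As the formula implies, H increases with both the number of items and the magnitude of their loadings, which is why longer, more homogeneous scales tend to replicate better.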

3.2. Results

3.2.1. Sample Characteristics

The participants in Study 2 were 342 undergraduate students at a large public university in the southeastern United States. The participants ranged in age from 18 to 41 years (M = 20.9 years, SD = 2.04). The gender identifications were the following: 16% male, 81% female, and 3% gender-fluid or non-binary. Regarding the participants’ race, 73% were white/Caucasian, 8% African American/Black, 6% Asian, 5% Hispanic/Latino/a, and 8% multiracial or another race.

3.2.2. Item Descriptive Statistics

Prior to conducting the factor analyses, the author calculated descriptive statistics for each of the 18 PDCC items; these statistics appear in Table 2. Most of the individual item mean values fell within the mid-ranges of the 1 to 5 response scales and suggested that the sample experienced climate change as psychologically closer to them rather than distant. For example, the participants somewhat disagreed with item 5, “I don’t see myself as someone who will be affected by climate change”. Similarly, for item 9, which was reverse-scored, people somewhat agreed that “Climate change is happening now”.
Regarding the item response distributions, most of the items exhibited moderate positive skew; item 9 demonstrated the highest positive skew (see Table 2). Similarly, most items demonstrated slight positive or negative kurtosis; items 4 and 8 exhibited the sharpest peaks for the present sample. As Table 2 also shows, both the Shapiro–Wilk and the Anderson–Darling tests rejected, for every item, the null hypothesis that the responses were drawn from a normally distributed population. Three tests of multivariate normality (Mardia, Henze–Zirkler, and Royston) each suggested that the 18 PDCC items taken together did not exhibit multivariate normality [73,74,75].
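For reference, the moment-based skewness and excess kurtosis statistics can be computed as in the sketch below; statistical packages often apply small-sample corrections, so values may differ slightly from those reported in Table 2, and the function names are illustrative.

```python
import math

def skewness(x):
    """Moment-based skewness: m3 / m2^(3/2); zero for symmetric data."""
    n = len(x)
    mean = sum(x) / n
    m2 = sum((v - mean) ** 2 for v in x) / n
    m3 = sum((v - mean) ** 3 for v in x) / n
    return m3 / m2 ** 1.5

def excess_kurtosis(x):
    """Moment-based kurtosis minus 3, so a normal distribution scores 0."""
    n = len(x)
    mean = sum(x) / n
    m2 = sum((v - mean) ** 2 for v in x) / n
    m4 = sum((v - mean) ** 4 for v in x) / n
    return m4 / m2 ** 2 - 3.0

# A symmetric, flat set of 1-5 ratings: zero skew and negative excess
# kurtosis (a plateau rather than a sharp peak).
ratings = [1, 2, 3, 4, 5]
```

A response pile-up at the low (proximal) end of the scale, as in this sample, produces the positive skew values reported above.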
The participants in this study responded in an internally consistent manner to the 18 PDCC items, α = 0.91, 95% CI: 0.90–0.93. A comparable value was observed for McDonald’s [47] omega total, ωt = 0.93. The item-to-total correlations were also computed for the present sample (see Table 2). Most of the items exhibited item-to-total correlations in the 0.50 to 0.80 range. The exceptions, again, were items 8 and 11, which exhibited noticeably lower correlations with the total score.
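Cronbach's α can be computed directly from an item-by-respondent score matrix as α = k/(k − 1) × (1 − Σ item variances / variance of totals). The sketch below uses toy data rather than the study data, and the function name is illustrative.

```python
from statistics import variance

def cronbach_alpha(rows):
    """Cronbach's alpha for rows of per-respondent item scores:
    alpha = k/(k-1) * (1 - sum of item variances / variance of totals)."""
    k = len(rows[0])
    items = list(zip(*rows))                    # columns = items
    item_var = sum(variance(col) for col in items)
    total_var = variance([sum(row) for row in rows])
    return k / (k - 1) * (1 - item_var / total_var)

# Toy data: four respondents, three perfectly consistent items -> alpha = 1.
toy = [[1, 2, 3],
       [2, 3, 4],
       [3, 4, 5],
       [4, 5, 6]]
```

When the items covary strongly, the variance of the totals dwarfs the sum of the item variances and α approaches 1; uncorrelated items drive α toward (or below) zero.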

3.2.3. EFA Using Polychoric Correlations with the DWLS Extraction

The lower triangle of polychoric correlations for the PDCC items is provided in Table A2. An inspection of this matrix shows a generally moderate degree of intercorrelation among the items. Within this matrix, the average correlation using the Fisher r-to-z transformation [76] was 0.47 and the median correlation was 0.49. The median polychoric correlation of an item with all other items ranged from 0.09 for item 8 to 0.60 for item 16.
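Averaging correlations via the Fisher r-to-z transformation means transforming each r with the inverse hyperbolic tangent, averaging in the z metric, and back-transforming the mean; a minimal sketch (the function name is illustrative):

```python
import math

def fisher_mean_r(rs):
    """Average correlations via Fisher's r-to-z transformation:
    z = atanh(r); average the z values; back-transform with tanh."""
    zs = [math.atanh(r) for r in rs]
    return math.tanh(sum(zs) / len(zs))
```

For example, fisher_mean_r([0.0, 0.8]) gives 0.5 rather than the naive arithmetic mean of 0.4, because the z transformation stretches large correlations before averaging.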
This EFA was conducted with 16 PDCC scale items. Items 8 and 11 were removed from further analysis for three reasons. First, items 8 and 11 exhibited the lowest polychoric correlations with the remaining PDCC items (see Table A2). Second, in both Table 1 and Table 2, these items demonstrated low item-to-total correlations. Third, an initial EFA that included all 18 items revealed that items 8 and 11 had very low communalities (h2, the proportion of an item’s variance accounted for by the factors). In this initial analysis, for item 8, h2 = 0.018, and for item 11, h2 = 0.134. These values suggested that the items were only weakly related to the latent variable (psychological distance from climate change) that they were intended to indicate. Thus, these items were removed from the two EFAs performed in this study.
The Kaiser–Meyer–Olkin (KMO) test of sampling adequacy for the factor analysis of the 16 items resulted in a value of 0.883; this result suggested that there was sufficient common variance among the items for the EFA to produce meaningful results. Bartlett’s test of sphericity, which tests whether the correlation matrix differs from an identity matrix, was statistically significant, χ2 (120) = 3273.38, p < 0.001. Thus, the data appeared suitable for factor analysis. Regarding the potential number of latent variables underlying the item correlations, the parallel analysis criterion suggested two factors when considering the mean. Similarly, the Bayesian information criterion (BIC) suggested two factors. Hull’s method for selecting the number of common factors suggested a single factor among the 16 items.
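Bartlett's statistic can be computed from the determinant of the correlation matrix as χ2 = −(n − 1 − (2p + 5)/6)·ln|R|, with p(p − 1)/2 degrees of freedom. The sketch below applies this formula to the Pearson correlations among PDCC items 1–4 (Table A3) purely as an illustration; it reproduces the logic of the test, not the exact FACTOR output, and the function names are illustrative.

```python
import math

def determinant(mat):
    """Determinant via Gaussian elimination with partial pivoting."""
    a = [row[:] for row in mat]
    n = len(a)
    det = 1.0
    for i in range(n):
        pivot = max(range(i, n), key=lambda r: abs(a[r][i]))
        if abs(a[pivot][i]) < 1e-12:
            return 0.0
        if pivot != i:
            a[i], a[pivot] = a[pivot], a[i]
            det = -det
        det *= a[i][i]
        for r in range(i + 1, n):
            f = a[r][i] / a[i][i]
            for c in range(i, n):
                a[r][c] -= f * a[i][c]
    return det

def bartlett_sphericity(corr, n):
    """Bartlett's test that a p x p correlation matrix is an identity:
    chi2 = -(n - 1 - (2p + 5) / 6) * ln|R|, df = p(p - 1) / 2."""
    p = len(corr)
    chi2 = -(n - 1 - (2 * p + 5) / 6) * math.log(determinant(corr))
    df = p * (p - 1) // 2
    return chi2, df

# Pearson correlations among PDCC items 1-4 (Table A3), n = 342.
sub = [[1.000, 0.591, 0.421, 0.356],
       [0.591, 1.000, 0.375, 0.266],
       [0.421, 0.375, 1.000, 0.663],
       [0.356, 0.266, 0.663, 1.000]]
chi2, df = bartlett_sphericity(sub, 342)
```

An identity matrix has determinant 1 and hence χ2 = 0; the stronger the intercorrelation, the smaller |R| and the larger the statistic.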
Because two of the above indices suggested the possibility of two factors, the author explored the fit statistics for EFA models that contained one to four factors. The fit of the one-factor model was poor, with a root mean square error of approximation (RMSEA) value of 0.114 and a root mean square residual (RMSR) value of 0.106. The fit improved slightly with two factors, with RMSEA = 0.07. The three-factor model, however, fit significantly better than the two-factor model, χ2 (14) = 174.85, p < 0.001. The four-factor model also resulted in a good fit; however, only two items loaded onto the fourth factor and, overall, the loadings of this model were more difficult to interpret than those of the three-factor solution. Moreover, the four-factor model did not significantly improve the fit, χ2 (13) = 20.05, p = 0.094. Thus, the author selected the three-factor EFA model of the PDCC items. The first three eigenvalues for this factor analysis appear in Table 3; these factors accounted for 72.6% of the variance among the items. The fit indices appear in Table 4. Both the RMSEA and RMSR indicated good model fit. In addition, the minimum fit chi-square statistic was not statistically significant, also suggesting that this EFA model fit the data well. Finally, the Tucker–Lewis index indicated a good degree of model fit (see Table 4).
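The RMSEA can be recovered from a model's chi-square statistic, its degrees of freedom, and the sample size; a minimal sketch with illustrative input values (FACTOR's computation may include additional corrections, and the function name is illustrative):

```python
import math

def rmsea(chi2, df, n):
    """Root mean square error of approximation:
    sqrt(max(chi2 - df, 0) / (df * (n - 1)))."""
    return math.sqrt(max(chi2 - df, 0.0) / (df * (n - 1)))
```

A model whose chi-square is at or below its degrees of freedom yields RMSEA = 0; excess misfit per degree of freedom, scaled by sample size, drives the index upward.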
The indices of factor determinacy, factor simplicity, and construct replicability appear in Table 5. Overall, the FDI and G-H index suggested that the factor scores represented scores on the latent factors well and that the items performed well in indicating the latent variables. The higher values of the G-H index also suggested that the solution obtained in this EFA may be replicable in future studies. The FSI conveyed that the three-factor solution from the EFA was simple and parsimonious in nature, rather than factorially complex.
The promin-rotated factor loadings appear in Table 6. Generally, the three factors were related to items representing a mixture of the dimensions of psychological distance from climate change. On the left side of Table 6, Factor 1 was related to two temporal items (items 9 and 10), a temporal social item (12), two geographic items (3 and 4), and two items representing the hypothetical dimension (14 and 15). There is a component of temporal distance from climate change represented in Factor 1; however, this content is muted or mixed with geographic and hypothetical distance.
The correlations of Factor 2 with the items suggested that this latent variable pertained to a mixture of hypothetical and social distance from climate change. The items loading onto this factor included two social distance items (5 and 7), one hypothetical distance item (16), one item assessing hypothetical geographical distance (17), one assessing hypothetical social distance (18), and finally one assessing temporal geographic distance (13).
Factor 3 corresponded most clearly to geographical distance from climate change. The four items assessing geographic distance (1 to 4) possessed loadings onto Factor 3. In addition, one social item (6) and one temporal social item (12) were also correlated with this factor.
Table 7 shows the Pearson correlations of the three factors. Because the promin rotation allowed the factors to be related to one another, the correlations in Table 7 suggested that the EFA factors were all related and in the same direction (i.e., there were no negative correlations). Factors 1 and 3 each exhibited higher magnitudes of correlation with Factor 2. The temporal psychological distance associated with Factor 1, however, was somewhat less correlated with the geographical distance content of Factor 3.

3.2.4. EFA Using Pearson Correlations with the ML Extraction

The lower triangle of Pearson correlations for the PDCC items is provided in Table A3. As with the polychoric correlation matrix (Table A2), a generally moderate degree of intercorrelation existed among the items. Within the Pearson matrix, the average correlation was 0.37 and the median correlation was 0.41. The median correlation of an item with all other items ranged from 0.08 for item 8 to 0.51 for item 16. Again, items 8 and 11 were eliminated from the maximum likelihood factor analysis for the reasons described in Section 3.2.3 above. EFA with maximum likelihood extraction assumes an underlying multivariate normal distribution in the data, which was not observed (Section 3.2.2 above). Because a robust ML EFA was performed, however, the author assumed, following Robitzsch, that the results produced would be unbiased [55,67].
The KMO statistic for this analysis was 0.92, which suggested that there was a sufficiently high amount of common variance to produce meaningful factor analytic results. Identical to the results in Section 3.2.3, Bartlett’s test indicated that the correlation matrix was significantly different from an identity matrix (χ2 (120) = 3273.38, p < 0.001). Regarding the number of factors, the parallel analysis criterion suggested the existence of two factors while both Hull’s method and the Bayesian information criterion suggested only a single factor.
The author again fitted EFA models that contained between one and four factors. The one-factor model produced a poor fit (RMSEA = 0.10 and RMSR = 0.10). The fit of a two-factor model improved slightly (RMSEA = 0.089, RMSR = 0.05), although the robust chi-square statistic still indicated a statistically significant degree of misfit (χ2 (89) = 331.77, p < 0.0001). A three-factor model resulted in a statistically better fit (χ2 (14) = 76.73, p < 0.0001), with improvements in the RMSEA, RMSR, and TLI (see Table 3 and Table 4). By the chi-square criterion, however, the three-factor model still exhibited some misfit (Table 4). Because the four-factor solution did not appreciably improve the fit or result in substantially less misfit, the author chose the three-factor solution.
The indices of factor determinacy, factor simplicity, and construct replicability for the ML EFA of the Pearson matrix appear in Table 5. As in the first EFA, the FDI and G-H index suggested that the factor scores represented scores on the latent factors well and that the items performed well in indicating the latent variables. The values of the G-H index also suggested that the solution obtained in this EFA may be replicable in future studies. The FSI conveyed that the three-factor solution from the ML EFA was simple and parsimonious.
The right portion of Table 6 shows the rotated factor loadings for the ML EFA. The results for the ML EFA were quite like those of the DWLS analysis in that the same items loaded onto Factor 2 for each EFA. Factor 1 from the ML analysis corresponded to Factor 3 from the DWLS EFA. Factor 3 from the ML EFA corresponded to Factor 1 from the DWLS analysis. Factor 2 was highly correlated with items that originally were designed to assess social and hypothetical distance from climate change. Factor 1 in the ML analysis of the items corresponds to a mix of geographic and social distance from climate change, whereas in the DWLS analysis, this factor involved more item content related to geographic distance. Factor 3 represented an amalgam of geographic, temporal, and hypothetical distance from climate change. Regarding factor intercorrelations (see Table 7), both Factors 1 and 3 exhibited higher intercorrelations with Factor 2 than they did with each other. Again, the pattern of correlations from the ML-derived factors was quite like that obtained from the DWLS EFA.

3.2.5. Descriptive Statistics for the PDCC Full Scale and Factor Subscales

The author used the item–factor associations produced by the polychoric and DWLS approach (see Table 6, left side) to calculate descriptive statistics for the full 16-item PDCC scale and for the sum of items related to each of the three factors. The author chose the DWLS solution here because of its generally better fit compared to the ML approach. These descriptive data may be useful to other researchers who use the PDCC items in the future and wish to have some basis for comparing or possibly interpreting the magnitudes of the scores that they obtain when using the measure.
The descriptive statistics appear in Table 8. The scores were obtained by summing the items for the relevant full or factor-related subscales after the appropriate items had been reverse-scored according to Table A1. The scores for the full PDCC scale were calculated after omitting items 8 and 11, as in the factor analyses. Also, because three of the items are part of more than one factor subscale, the sums of scores on the factors will exceed the total score. Generally, the mean and median values were quite close and each score distribution showed some slight positive skew. The values of Cronbach’s α were all within the acceptable range. The author checked for score differences between men and women. Although men in the sample tended to report higher mean scores on each of the three factors, none of the score differences between men and women achieved statistical significance.
Figure 1 shows the distribution of the PDCC full/total scores with a superimposed normal curve (using the mean and standard deviation in Table 8). Although the distribution was approximately mound-shaped, Mardia’s test indicated that it deviated significantly from normality, p < 0.0001.

3.3. Discussion

The results of the DWLS and ML factor analyses were noteworthy in two respects. First, despite making different assumptions about the nature of the five-point response scale (either as an ordinal variable or one that is metrical and continuous) and what this means for the appropriate methods of intercorrelating the PDCC items and subsequently performing the EFAs, the two methods yielded nearly identical results. That is, for both the DWLS and ML methods of EFA, a three-factor model was appropriate when considering some of the common fit indices. Beyond this, the promin rotation of the DWLS and ML factor solutions produced item–factor loadings that were very comparable (see Table 6). The differences in the magnitudes of the loadings between the two EFA methods may have stemmed from the more precise estimates of the polychoric and DWLS approach for ordinal data compared to the Pearson and ML approaches [51,57,59,60,62,63,64,65]. Further, it is also possible that because the data did not satisfy the assumption of multivariate normality, some of the ML estimates may have differed from those of the DWLS approach, despite having used robust ML methods. Nonetheless, observing comparable results using two different EFA approaches provides a degree of convergent support for the factor structure of the PDCC scale.
The second noteworthy feature of the EFA results concerned the composition of the three observed factors. Although one might initially have expected the EFA to reflect the four dimensions of psychological distance from climate change (i.e., geographical/spatial, temporal, social, and hypothetical) in the factor loadings, this was not the case. Instead, items from multiple dimensions were associated with each factor. An examination of the item wording, especially as it conveys nearness to climate change impacts or distance from them, lends clarity to what the factors were assessing (refer to Table A1). Within each of the four content dimensions, some items are worded such that climate change impacts are distant while the remaining items in the dimension are worded such that the perceived impacts are proximal. Thus, Factor 1 in the DWLS analysis (or Factor 3 in the ML analysis) represents items that are associated with smaller psychological distances in space and time, for example, item 3 (“My local area will be affected by climate change”) and item 9 (“Climate change is happening now”). All the items associated with this factor were the ones to be reverse-scored when calculating a total psychological distance score. Factor 2 in both EFAs contained items that were worded such that climate change impacts across the four dimensions were distal. Thus, it is possible that the item wording was responsible for the factor loadings more than the dimensions of psychological distance were.
Factor 3 in the DWLS EFA (or Factor 1 in the ML EFA) came the closest of the three latent variables to reflecting the geographic or spatial dimension of distance from climate change (see Table 6). The first four PDCC items were designed to assess geographic distance (with items 3 and 4 worded such that climate change impacts were close). Beyond this, other items that loaded onto this factor implied geographic distance (item 6, “Serious effects of climate change will mostly affect people who are distant from me”) or emphasized a geographic region (item 12, “The region where I live is already experiencing serious effects of climate change”).
The descriptive statistical data (see Table 8) may be useful for future researchers who wish to characterize or compare the degree of psychological distance from climate change that is observed in their samples. This analysis also was useful in suggesting that, at least for this sample, there may have been a floor effect at the proximal end of the scale. That is, there was a greater proportion of people in the sample who perceived climate change as psychologically close to them compared to being psychologically distant. Use of the PDCC scale with other samples can assess whether this is a trend, especially among demographically diverse groups.

3.4. Limitations

Both studies in this article were limited by the nature of the samples that the author used. Each sample included a preponderance of people who identified as White in race and female in gender. In addition, the participants were university students in the southeastern United States. Although the sample demographics in the first study may have had less impact on the results regarding the PDCC’s reliability, the nature of the sample could have had more impact on the results of the EFAs. People who live in different and more climatically vulnerable regions of the world could respond differently to the PDCC items. Similarly, non-students may have different associations with the PDCC items. Thus, the results of the studies presented here should be interpreted and used with caution. Beyond these limitations, the results suggest that it would be worthwhile to explore the functioning of the PDCC items with different samples of people.

4. Conclusions and Recommendations

The results from Study 1 suggested that the total PDCC scores were acceptably stable over a two-week test–retest interval. This result can provide researchers with some confidence that people can respond reliably to the PDCC items over a short time. In both Study 1 and Study 2, the participants were able to respond in a consistent way to the items taken together according to the α and ωt statistics. These results suggested that people can respond in a consistent and reliable manner to the PDCC items.
The issue of the number and nature of the latent variables that underlie the PDCC items was less clear. In both EFAs in Study 2, two of the three factors may have been an artifact of the wording of the items in terms of the perceived proximity of climate change impacts. In both EFAs, Factor 2 represented a greater distance from climate change across the four dimensions of space, time, social effects, and hypothetical outcomes, whereas Factor 1 (DWLS EFA) and Factor 3 (ML EFA) represented a lesser perceived distance to these impacts. The remaining factor (Factor 3 in the DWLS analysis and Factor 1 in the ML analysis) came the closest to representing one of the four dimensions of psychological distance from climate change, that of spatial distance. That the two different EFA approaches largely produced the same overall outcome regarding the factor loadings reduces the likelihood that the findings of Study 2 were an artifact of using a single EFA approach. Beyond this, it is possible that the results of Study 2 are limited by the use of a younger, university-based sample and that the inclusion of a wider cross-section of adults in the United States may have produced different results.
Two recommendations can be made about the use of the PDCC scales. First, given the results from both studies, it seems acceptable to sum the items (after reversing the items listed in Table A1 in Appendix A) to provide an overall indication of psychological distance from climate change. This could be accomplished with all 18 items or, preferably, with 16 items, omitting items 8 and 11 because of their small degree of association with the other items. The author has followed this approach and found meaningful results in the association of the PDCC total scores with measures of climate change worry and anxiety [77].
The second recommendation stems from the results of Study 2 and is offered here to encourage future researchers to experiment with the items to discover what they may offer. That is, if none of the items in Table A1 are reversed for scoring, then the items associated with Factor 2 (5, 7, 13, 16, 17, and 18) represent perceptions of climate change impacts as distal or distant. Conversely, the Factor 1 items from the first EFA (3, 4, 9, 10, 12, 14, and 15) represent perceptions of climate change impacts as closer (or immediate). Without reverse-scoring the items, these item sets would be expected to be negatively correlated with each other at a substantial level (see Table 7). If the arithmetic averages of the items in Factor 1 and Factor 2 were calculated, then this may allow for a more nuanced assessment of a person’s perceptions of closeness to or distance from climate change impacts. The items from Factor 3 in the DWLS EFA could be used to indicate geographic/spatial distance from climate change. To score this factor, items 3, 4, and 12 would need to be reversed; higher scores would indicate greater spatial distance from climate change.
Beyond these practical recommendations for use of the PDCC scale, additional research to assess the measurement model of the items with geographically and demographically different samples would help to develop and refine the measurement of this important construct. In some respects, as Keller and colleagues have observed [2,3], the measurement of the psychological distance of climate change has occurred largely within the framework of CLT, which may have been somewhat constraining. As researchers have observed, the construct of psychological distance within the realm of climate change is complex and layered [2,3]. It is possible that broader theoretical and empirical approaches may be needed to assess the construct of perceived and/or experienced distance from climate change impacts. Experimentation with different response formats beyond Likert-type ratings also may be beneficial, especially if the perceptions of climate change impacts can be anchored or calibrated with known psychological distances within the person.

Funding

This research received no external funding.

Data Availability Statement

The data for both studies can be made available upon request to the author.

Acknowledgments

The author would like to thank Harrison Chapman for his assistance in gathering the data for Study 1 of this project.

Conflicts of Interest

The author declares no conflicts of interest.

Appendix A. Wang’s PDCC Scale Items, Instructions, Rating Scale, and Scoring Procedure [1]

Table A1. Items of the PDCC scale.
1. I feel geographically far from the effects of climate change. (G)
2. Serious effects of climate change will mostly occur in areas far away from here. (G)
3. My local area will be affected by climate change. (G)
4. Climate change will have consequences for every region, including where I live. (G)
5. I don’t see myself as someone who will be affected by climate change. (S)
6. Serious effects of climate change will mostly affect people who are distant from me. (S)
7. My family and I will be safe from the effects of climate change. (S)
8. I can identify with victims of climate related disasters. (S)
9. Climate change is happening now. (T)
10. We will see the serious effects of climate change in my lifetime. (T)
11. If climate change is to happen, it will happen in the remote future. (T)
12. The region where I live is already experiencing serious effects of climate change. (T,S)
13. Climate change will not change my life, or my family’s lives anytime soon. (T,G)
14. Climate change is virtually certain to affect the world. (H)
15. It is almost certain that climate change will change my life for the worse. (H)
16. It is extremely unlikely that climate change will affect me. (H)
17. My local area is very unlikely to be affected by climate change. (H,G)
18. It is virtually certain that my family will be safe from the effects of climate change. (H,S)
Note: The item associations with the four dimensions of psychological distance are indicated in the table by the following: (G) Geographic Distance, (S) Social Distance, (T) Temporal Distance, and (H) Hypothetical Distance. Wang and her colleagues designed items 12, 13, 17, and 18 to measure a combination of distance dimensions.
The following instructions were provided for responding to the items:
Please read each statement and then indicate the extent to which you disagree or agree with each statement using the following scale:
1 = Strongly Disagree
2 = Somewhat Disagree
3 = Neither Disagree nor Agree
4 = Somewhat Agree
5 = Strongly Agree
Scoring:
Most of the scale items are designed so that a greater level of agreement (higher numerical ratings) corresponds to a greater degree of psychological distance from climate change. The ratings of the following items should be reversed in the scoring process: 3, 4, 8, 9, 10, 12, 14, and 15. The total score of psychological distance from climate change is the sum of the 18 items after reverse-scoring the listed items.
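The scoring procedure above can be sketched as follows (the function name is illustrative; reversing a rating x on the 1-to-5 scale is 6 − x):

```python
# Items to reverse-score (1-based numbering, per the scoring instructions).
REVERSED = {3, 4, 8, 9, 10, 12, 14, 15}

def pdcc_total(ratings):
    """Total PDCC score from 18 ratings on the 1-5 scale,
    reverse-scoring the designated items before summing."""
    assert len(ratings) == 18
    return sum(6 - r if i in REVERSED else r
               for i, r in enumerate(ratings, start=1))
```

For example, an all-neutral response pattern (eighteen ratings of 3) yields a total of 54, because reversing a 3 leaves it unchanged.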
Table A2. Polychoric correlations of the PDCC items.

PDCC Item    1      2      3      4      5      6      7      8      9
 1         1.000
 2         0.697  1.000
 3         0.501  0.459  1.000
 4         0.436  0.345  0.788  1.000
 5         0.616  0.493  0.575  0.587  1.000
 6         0.588  0.775  0.310  0.317  0.522  1.000
 7         0.609  0.571  0.531  0.503  0.720  0.619  1.000
 8         0.140  0.104  0.127  0.074  0.101  0.071  0.031  1.000
 9         0.321  0.263  0.577  0.708  0.604  0.231  0.471  0.040  1.000
10         0.362  0.300  0.503  0.648  0.570  0.210  0.465  0.125  0.783
11         0.180  0.318  0.181  0.187  0.294  0.291  0.235  0.046  0.250
12         0.358  0.434  0.467  0.423  0.359  0.289  0.328  0.348  0.351
13         0.473  0.490  0.485  0.497  0.676  0.423  0.642  0.047  0.523
14         0.331  0.188  0.415  0.616  0.470  0.166  0.326  0.118  0.703
15         0.333  0.240  0.417  0.537  0.531  0.202  0.459  0.222  0.635
16         0.513  0.468  0.492  0.601  0.762  0.404  0.664  0.064  0.639
17         0.548  0.496  0.578  0.656  0.702  0.434  0.652  0.005  0.667
18         0.488  0.437  0.421  0.472  0.651  0.408  0.683  0.007  0.481

PDCC Item   10     11     12     13     14     15     16     17     18
10         1.000
11         0.301  1.000
12         0.424  0.119  1.000
13         0.595  0.363  0.398  1.000
14         0.587  0.128  0.291  0.408  1.000
15         0.666  0.196  0.434  0.570  0.581  1.000
16         0.647  0.390  0.388  0.703  0.552  0.604  1.000
17         0.596  0.299  0.460  0.687  0.520  0.502  0.856  1.000
18         0.508  0.303  0.320  0.614  0.407  0.468  0.743  0.726  1.000
Table A3. Pearson correlations of the PDCC items.

PDCC Item    1      2      3      4      5      6      7      8      9
 1         1.000
 2         0.591  1.000
 3         0.421  0.375  1.000
 4         0.356  0.266  0.663  1.000
 5         0.546  0.427  0.492  0.503  1.000
 6         0.512  0.682  0.252  0.261  0.456  1.000
 7         0.538  0.488  0.451  0.421  0.648  0.552  1.000
 8         0.118  0.084  0.129  0.084  0.091  0.055  0.025  1.000
 9         0.246  0.187  0.487  0.617  0.503  0.166  0.378  0.043  1.000
10         0.304  0.240  0.442  0.583  0.501  0.173  0.398  0.110  0.697
11         0.153  0.266  0.150  0.154  0.255  0.255  0.211  0.040  0.182
12         0.303  0.373  0.386  0.367  0.318  0.261  0.288  0.304  0.299
13         0.398  0.414  0.395  0.398  0.597  0.368  0.575  0.035  0.424
14         0.262  0.128  0.364  0.537  0.397  0.127  0.262  0.105  0.626
15         0.282  0.189  0.360  0.459  0.466  0.171  0.404  0.200  0.539
16         0.442  0.393  0.413  0.511  0.693  0.351  0.582  0.059  0.534
17         0.469  0.417  0.487  0.562  0.636  0.371  0.574  0.002  0.551
18         0.434  0.369  0.359  0.410  0.579  0.357  0.610  0.006  0.400

PDCC Item   10     11     12     13     14     15     16     17     18
10         1.000
11         0.251  1.000
12         0.371  0.111  1.000
13         0.518  0.317  0.343  1.000
14         0.505  0.092  0.247  0.317  1.000
15         0.591  0.178  0.383  0.506  0.493  1.000
16         0.572  0.330  0.332  0.622  0.456  0.522  1.000
17         0.528  0.251  0.399  0.580  0.435  0.433  0.773  1.000
18         0.443  0.263  0.286  0.543  0.339  0.414  0.659  0.652  1.000

References

  1. Wang, S.; Hurlstone, M.J.; Leviston, Z.; Walker, I.; Lawrence, C. Climate Change from a Distance: An Analysis of Construal Level and Psychological Distance from Climate Change. Front. Psychol. 2019, 10, 438569. [Google Scholar] [CrossRef] [PubMed]
  2. Maiella, R.; La Malva, P.; Marchetti, D.; Pomarico, E.; Di Crosta, A.; Palumbo, R.; Cetara, L.; Di Domenico, A.; Verrocchio, M.C. The Psychological Distance and Climate Change: A Systematic Review on the Mitigation and Adaptation Behaviors. Front. Psychol. 2020, 11, 568899. [Google Scholar] [CrossRef] [PubMed]
  3. Keller, E.; Marsh, J.E.; Richardson, B.H.; Ball, L.J. A Systematic Review of the Psychological Distance of Climate Change: Towards the Development of an Evidence-Based Construct. J. Environ. Psychol. 2022, 81, 101822. [Google Scholar] [CrossRef]
  4. Trope, Y.; Liberman, N. Construal-Level Theory of Psychological Distance. Psychol. Rev. 2010, 117, 440–463. [Google Scholar] [CrossRef] [PubMed]
  5. Spence, A.; Poortinga, W.; Pidgeon, N. The Psychological Distance of Climate Change. Risk Anal. 2012, 32, 957–972. [Google Scholar] [CrossRef] [PubMed]
  6. van Valkengoed, A.M.; Steg, L.; Perlaviciute, G. The Psychological Distance of Climate Change Is Overestimated. One Earth 2023, 6, 362–391. [Google Scholar] [CrossRef]
  7. Chu, H.; Yang, J.Z. Risk or Efficacy? How Psychological Distance Influences Climate Change Engagement. Risk Analysis 2020, 40, 758–770. [Google Scholar] [CrossRef]
  8. McDonald, R.I.; Chai, H.Y.; Newell, B.R. Personal Experience and the ‘Psychological Distance’ of Climate Change: An Integrative Review. J. Environ. Psychol. 2015, 44, 109–118. [Google Scholar] [CrossRef]
  9. Milfont, T.L.; Evans, L.; Sibley, C.G.; Ries, J.; Cunningham, A. Proximity to Coast Is Linked to Climate Change Belief. PLoS ONE 2014, 9, 103180. [Google Scholar] [CrossRef]
  10. Loy, L.S.; Spence, A. Reducing, and Bridging, the Psychological Distance of Climate Change. J. Environ. Psychol. 2020, 67, 101388. [Google Scholar] [CrossRef]
  11. Chen, M.F. Effects of Psychological Distance Perception and Psychological Factors on Pro-Environmental Behaviors in Taiwan: Application of Construal Level Theory. Int. Sociol. 2020, 35, 70–89. [Google Scholar] [CrossRef]
  12. Schattman, R.E.; Caswell, M.; Faulkner, J.W. Eyes on the Horizon: Temporal and Social Perspectives of Climate Risk and Agricultural Decision Making among Climate-Informed Farmers. Soc. Nat. Resour. 2021, 34, 763–782. [Google Scholar] [CrossRef]
  13. Poortvliet, P.M.; Niles, M.T.; Veraart, J.A.; Werners, S.E.; Korporaal, F.C.; Mulder, B.C. Communicating Climate Change Risk: A Content Analysis of Ipcc’s Summary for Policymakers. Sustainability 2020, 12, 4861. [Google Scholar] [CrossRef]
  14. Allen, M.J.; Yen, W.M. Introduction to Measurement Theory; Brooks/Cole Pub. Co.: Monterey, CA, USA, 1979; ISBN 0818502835. [Google Scholar]
  15. Rosch, E. Cognitive Representations of Semantic Categories. J. Exp. Psychol. Gen. 1975, 104, 192–233. [Google Scholar] [CrossRef]
  16. Semin, G.R.; Fiedler, K. The Cognitive Functions of Linguistic Categories in Describing Persons: Social Cognition and Language. J. Pers. Soc. Psychol. 1988, 54, 558–568. [Google Scholar] [CrossRef]
  17. Medin, D.L.; Smith, E.E. Concepts and Concept Formation. Annu. Rev. Psychol. 1984, 35, 113–138. [Google Scholar] [CrossRef] [PubMed]
  18. Vallacher, R.R.; Wegner, D.M. What Do People Think They’re Doing? Action Identification and Human Behavior. Psychol. Rev. 1987, 94, 3–15. [Google Scholar] [CrossRef]
  19. Calvin, K.; Dasgupta, D.; Krinner, G.; Mukherji, A.; Thorne, P.W.; Trisos, C.; Romero, J.; Aldunce, P.; Barrett, K.; Blanco, G.; et al. IPCC, 2023: Climate Change 2023: Synthesis Report. Contribution of Working Groups I, II and III to the Sixth Assessment Report of the Intergovernmental Panel on Climate Change; Core Writing Team, Lee, H., Romero, J., Eds.; IPCC: Geneva, Switzerland, 2023. [Google Scholar]
  20. Bryson, R.A. The Paradigm of Climatology: An Essay. Bull. Am. Meteorol. Soc. 1997, 78, 449–455. [Google Scholar] [CrossRef]
  21. Fujita, T.T. Mesoscale Classifications: Their History and Their Application to Forecasting. In Mesoscale Meteorology and Forecasting; Ray, P.S., Ed.; American Meteorological Society: Boston, MA, USA, 1986; pp. 18–35. [Google Scholar]
  22. Stewart, A.E.; Blau, J.J.C. Weather as Ecological Events. Ecol. Psychol. 2019, 31, 107–126. [Google Scholar] [CrossRef]
  23. Stewart, A.E.; Oh, J. Weather and Climate as Events: Contributions to the Public Idea of Climate Change. Int. J. Big Data Min. Glob. Warm. 2019, 1, 1950005. [Google Scholar] [CrossRef]
  24. Akerlof, K.; Maibach, E.W.; Fitzgerald, D.; Cedeno, A.Y.; Neuman, A. Do People “Personally Experience” Global Warming, and If so How, and Does It Matter? Glob. Environ. Change 2013, 23, 81–91. [Google Scholar] [CrossRef]
  25. Sambrook, K.; Konstantinidis, E.; Russell, S.; Okan, Y. The Role of Personal Experience and Prior Beliefs in Shaping Climate Change Perceptions: A Narrative Review. Front. Psychol. 2021, 12, 669911. [Google Scholar] [CrossRef] [PubMed]
  26. Weber, E.U. Experience-Based and Description-Based Perceptions of Long-Term Risk: Why Global Warming Does Not Scare Us (Yet). Clim. Chang. 2006, 77, 103–120. [Google Scholar] [CrossRef]
  27. Sisco, M.R.; Bosetti, V.; Weber, E.U. When Do Extreme Weather Events Generate Attention to Climate Change? Clim. Chang. 2017, 143, 227–241. [Google Scholar] [CrossRef]
  28. Williams, L.E.; Stein, R.; Galguera, L. The Distinct Affective Consequences of Psychological Distance and Construal Level. J. Consum. Res. 2014, 40, 1123–1138. [Google Scholar] [CrossRef]
  29. Devine-Wright, P. Think Global, Act Local? The Relevance of Place Attachments and Place Identities in a Climate Changed World. Glob. Environ. Change 2013, 23, 61–69. [Google Scholar] [CrossRef]
  30. Chu, H. Construing Climate Change: Psychological Distance, Individual Difference, and Construal Level of Climate Change. Environ. Commun. 2022, 16, 883–899. [Google Scholar] [CrossRef]
  31. Jones, C.; Hine, D.W.; Marks, A.D.G. The Future Is Now: Reducing Psychological Distance to Increase Public Engagement with Climate Change. Risk Anal. 2017, 37, 331–341. [Google Scholar] [CrossRef]
  32. Verplanken, B.; Marks, E.; Dobromir, A.I. On the Nature of Eco-Anxiety: How Constructive or Unconstructive Is Habitual Worry about Global Warming? J. Environ. Psychol. 2020, 72, 101528. [Google Scholar] [CrossRef]
  33. Većkalov, B.; Zarzeczna, N.; Niehoff, E.; McPhetres, J.; Rutjens, B.T. A Matter of Time… Consideration of Future Consequences and Temporal Distance Contribute to the Ideology Gap in Climate Change Scepticism. J. Environ. Psychol. 2021, 78, 101703. [Google Scholar] [CrossRef]
  34. Demski, C.; Capstick, S.; Pidgeon, N.; Sposato, R.G.; Spence, A. Experience of Extreme Weather Affects Climate Change Mitigation and Adaptation Responses. Clim. Chang. 2017, 140, 149–164. [Google Scholar] [CrossRef] [PubMed]
  35. Hornsey, M.J.; Harris, E.A.; Bain, P.G.; Fielding, K.S. Meta-Analyses of the Determinants and Outcomes of Belief in Climate Change. Nat. Clim. Chang. 2016, 6, 622–626. [Google Scholar] [CrossRef]
  36. van der Linden, S. The Social-Psychological Determinants of Climate Change Risk Perceptions: Towards a Comprehensive Model. J. Environ. Psychol. 2015, 41, 112–124. [Google Scholar] [CrossRef]
  37. Howe, P.D.; Marlon, J.R.; Mildenberger, M.; Shield, B.S. How Will Climate Change Shape Climate Opinion? Environ. Res. Lett. 2019, 14, 113001. [Google Scholar] [CrossRef]
  38. Azadi, Y.; Yazdanpanah, M.; Mahmoudi, H. Understanding Smallholder Farmers’ Adaptation Behaviors through Climate Change Beliefs, Risk Perception, Trust, and Psychological Distance: Evidence from Wheat Growers in Iran. J. Environ. Manag. 2019, 250, 109456. [Google Scholar] [CrossRef] [PubMed]
  39. Singh, A.S.; Zwickle, A.; Bruskotter, J.T.; Wilson, R. The Perceived Psychological Distance of Climate Change Impacts and Its Influence on Support for Adaptation Policy. Environ. Sci. Policy 2017, 73, 93–99. [Google Scholar] [CrossRef]
  40. Rana, I.A.; Lodhi, R.H.; Zia, A.; Jamshed, A.; Nawaz, A. Three-Step Neural Network Approach for Predicting Monsoon Flood Preparedness and Adaptation: Application in Urban Communities of Lahore, Pakistan. Urban Clim. 2022, 45, 101266. [Google Scholar] [CrossRef]
  41. Wang, S.; Hurlstone, M.J.; Leviston, Z.; Walker, I.; Lawrence, C. Construal-Level Theory and Psychological Distancing: Implications for Grand Environmental Challenges. One Earth 2021, 4, 482–486. [Google Scholar] [CrossRef]
  42. Brennan, R.L. An Essay on the History and Future of Reliability from the Perspective of Replications. J. Educ. Meas. 2001, 38, 295–317. [Google Scholar] [CrossRef]
  43. Matheson, G.J. We Need to Talk about Reliability: Making Better Use of Test-Retest Studies for Study Design and Interpretation. PeerJ 2019, 7, e6918. [Google Scholar] [CrossRef]
  44. R Core Team. R: A Language and Environment for Statistical Computing; R Foundation for Statistical Computing: Vienna, Austria, 2023; Available online: https://www.R-project.org/ (accessed on 26 June 2023).
  45. Revelle, W. Psych: Procedures for Psychological, Psychometric, and Personality Research; Northwestern University: Evanston, IL, USA, 2023. [Google Scholar]
  46. McNeish, D. Thanks Coefficient Alpha, We’ll Take It from Here. Psychol. Methods 2018, 23, 412–433. [Google Scholar] [CrossRef] [PubMed]
  47. McDonald, R.P. Test Theory: A Unified Treatment; L. Erlbaum Associates: Mahwah, NJ, USA, 1999; ISBN 0805830758. [Google Scholar]
  48. Hulme, M. Why We Disagree about Climate Change: Understanding Controversy, Inaction and Opportunity; Cambridge University Press: Cambridge, UK, 2009; ISBN 9780511841200. [Google Scholar]
  49. Fabrigar, L.R.; Wegener, D.T.; MacCallum, R.C.; Strahan, E.J. Evaluating the Use of Exploratory Factor Analysis in Psychological Research. Psychol. Methods 1999, 4, 272–299. [Google Scholar] [CrossRef]
  50. Rogers, P. Best Practices for Your Exploratory Factor Analysis: A Factor Tutorial. Rev. Adm. Contemp. 2022, 26, e210085. [Google Scholar] [CrossRef]
  51. Watkins, M.W. Exploratory Factor Analysis: A Guide to Best Practice. J. Black Psychol. 2018, 44, 219–246. [Google Scholar] [CrossRef]
  52. Korkmaz, S.; Goksuluk, D.; Zararsiz, G. MVN: An R Package for Assessing Multivariate Normality. R. J. 2014, 6, 151–162. [Google Scholar] [CrossRef]
  53. Steiner, M.; Grieder, S. EFAtools: An R Package with Fast and Flexible Implementations of Exploratory Factor Analysis Tools. J. Open Source Softw. 2020, 5, 2521. [Google Scholar] [CrossRef]
  54. Wickham, H. Ggplot2: Elegant Graphics for Data Analysis; Springer: New York, NY, USA, 2016. [Google Scholar]
  55. Lorenzo-Seva, U.; Ferrando, P.J. FACTOR 9.2: A Comprehensive Program for Fitting Exploratory and Semiconfirmatory Factor Analysis and IRT Models. Appl. Psychol. Meas. 2013, 37, 497–498. [Google Scholar] [CrossRef]
  56. Baglin, J. Improving Your Exploratory Factor Analysis for Ordinal Data: A Demonstration Using FACTOR. Pract. Assess. Res. Eval. 2014, 19, 5. [Google Scholar]
  57. Barendse, M.T.; Oort, F.J.; Timmerman, M.E. Using Exploratory Factor Analysis to Determine the Dimensionality of Discrete Responses. Struct. Equ. Model. 2015, 22, 87–101. [Google Scholar] [CrossRef]
  58. Timmerman, M.E.; Lorenzo-Seva, U. Dimensionality Assessment of Ordered Polytomous Items with Parallel Analysis. Psychol. Methods 2011, 16, 209–220. [Google Scholar] [CrossRef]
  59. Coenders, G.; Saris, W.E. Categorization and Measurement Quality. The Choice between Pearson and Polychoric Correlations. In The Multitrait-Multimethod Approach to Evaluate Measurement Instruments; Eötvös University Press: Budapest, Hungary, 1995. [Google Scholar]
  60. Choi, J.; Peters, M.; Mueller, R.O. Correlational Analysis of Ordinal Data: From Pearson’s r to Bayesian Polychoric Correlation. Asia Pac. Educ. Rev. 2010, 11, 459–466. [Google Scholar] [CrossRef]
  61. Choi, J.; Kim, S.; Chen, J.; Dannels, S. A Comparison of Maximum Likelihood and Bayesian Estimation for Polychoric Correlation Using Monte Carlo Simulation. J. Educ. Behav. Stat. 2011, 36, 523–549. [Google Scholar] [CrossRef]
  62. Holgado-Tello, F.P.; Chacón-Moscoso, S.; Barbero-García, I.; Vila-Abad, E. Polychoric versus Pearson Correlations in Exploratory and Confirmatory Factor Analysis of Ordinal Variables. Qual. Quant. 2010, 44, 153–166. [Google Scholar] [CrossRef]
  63. Liddell, T.M.; Kruschke, J.K. Analyzing Ordinal Data with Metric Models: What Could Possibly Go Wrong? J. Exp. Soc. Psychol. 2018, 79, 328–348. [Google Scholar] [CrossRef]
  64. Li, C.-H. The Performance of ML, DWLS, and ULS Estimation with Robust Corrections in Structural Equation Models with Ordinal Variables. Psychol. Methods 2016, 21, 369–387. [Google Scholar] [CrossRef] [PubMed]
  65. Mîndrilă, D. Maximum Likelihood (ML) and Diagonally Weighted Least Squares (DWLS) Estimation Procedures: A Comparison of Estimation Bias with Ordinal and Multivariate Non-Normal Data. Int. J. Digit. Soc. 2010, 1, 60–66. [Google Scholar] [CrossRef]
  66. Hu, L.; Bentler, P.M. Cutoff Criteria for Fit Indexes in Covariance Structure Analysis: Conventional Criteria versus New Alternatives. Struct. Equ. Model. 1999, 6, 1–55. [Google Scholar] [CrossRef]
  67. Robitzsch, A. Why Ordinal Variables Can (Almost) Always Be Treated as Continuous Variables: Clarifying Assumptions of Robust Continuous and Ordinal Factor Analysis Estimation Methods. Front. Educ. 2020, 5, 589965. [Google Scholar] [CrossRef]
  68. Curran, P.J.; West, S.G.; Finch, J.F. The Robustness of Test Statistics to Nonnormality and Specification Error in Confirmatory Factor Analysis. Psychol. Methods 1996, 1, 16–29. [Google Scholar] [CrossRef]
  69. Lorenzo-Seva, U.; Ferrando, P.J. Robust Promin: A Method for Diagonally Weighted Factor Rotation. Liberabit Rev. Peru. Psicol. 2019, 25, 99–106. [Google Scholar] [CrossRef]
  70. Yang, Y.; Xia, Y. On the Number of Factors to Retain in Exploratory Factor Analysis for Ordered Categorical Data. Behav. Res. Methods 2015, 47, 756–772. [Google Scholar] [CrossRef] [PubMed]
  71. Bentler, P.M. Factor Simplicity Index and Transformations. Psychometrika 1977, 42, 277–295. [Google Scholar] [CrossRef]
  72. Ferrando, P.J.; Lorenzo-Seva, U. Assessing the Quality and Appropriateness of Factor Solutions and Factor Score Estimates in Exploratory Item Factor Analysis. Educ. Psychol. Meas. 2018, 78, 762–780. [Google Scholar] [CrossRef] [PubMed]
  73. Mardia, K.V. Measures of Multivariate Skewness and Kurtosis with Applications. Biometrika 1970, 57, 519. [Google Scholar] [CrossRef]
  74. Henze, N.; Zirkler, B. A Class of Invariant Consistent Tests for Multivariate Normality. Commun. Stat. Theory Methods 1990, 19, 3595–3617. [Google Scholar] [CrossRef]
  75. Royston, J.P. Some Techniques for Assessing Multivariate Normality Based on the Shapiro-Wilk W. Appl. Stat. 1983, 32, 121. [Google Scholar] [CrossRef]
  76. Bruning, J.L.; Kintz, B.L. Computational Handbook of Statistics, 4th ed.; Longman: New York, NY, USA, 1997. [Google Scholar]
  77. Stewart, A.E.; Chapman, H.E.; Davis, J.B.L. Anxiety and Worry About Six Categories of Climate Change Impacts. Int. J. Environ. Res. Public Health 2023, 21, 23. [Google Scholar] [CrossRef]
Figure 1. Distribution of the PDCC-Full scale scores with superimposed normal curve (n = 342).
Table 1. Standardized Cronbach’s alpha (α) coefficients and item-to-total correlations for each administration of the PDCC Scale (n = 66).
| Item | First Admin.: Standardized α if Item Removed | First Admin.: Item-to-Total Correlation | Second Admin.: Standardized α if Item Removed | Second Admin.: Item-to-Total Correlation |
|---|---|---|---|---|
| 1 | 0.90 | 0.68 | 0.91 | 0.67 |
| 2 | 0.91 | 0.54 | 0.90 | 0.78 |
| 3 | 0.90 | 0.59 | 0.91 | 0.44 |
| 4 | 0.91 | 0.48 | 0.91 | 0.70 |
| 5 | 0.90 | 0.75 | 0.90 | 0.77 |
| 6 | 0.90 | 0.58 | 0.91 | 0.61 |
| 7 | 0.90 | 0.55 | 0.90 | 0.75 |
| 8 | 0.91 | 0.28 | 0.92 | 0.20 |
| 9 | 0.90 | 0.73 | 0.91 | 0.69 |
| 10 | 0.90 | 0.71 | 0.91 | 0.62 |
| 11 | 0.91 | 0.23 | 0.92 | 0.31 |
| 12 | 0.90 | 0.60 | 0.91 | 0.62 |
| 13 | 0.90 | 0.84 | 0.90 | 0.75 |
| 14 | 0.91 | 0.49 | 0.91 | 0.41 |
| 15 | 0.90 | 0.81 | 0.91 | 0.65 |
| 16 | 0.90 | 0.66 | 0.90 | 0.81 |
| 17 | 0.90 | 0.74 | 0.91 | 0.60 |
| 18 | 0.90 | 0.63 | 0.91 | 0.73 |
Note: The item-to-total correlations were corrected for item overlap and scale reliability.
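The standardized alpha and corrected item-to-total columns above can be reproduced on one's own data. The original analyses used R's psych package; the following is only an illustrative Python sketch of the same quantities (function names are the author's own, not from any package):

```python
import numpy as np

def standardized_alpha(items: np.ndarray) -> float:
    """Standardized Cronbach's alpha computed from the mean
    inter-item correlation: k*r_bar / (1 + (k - 1)*r_bar).

    items: (n_respondents, k_items) array of item responses.
    """
    r = np.corrcoef(items, rowvar=False)
    k = r.shape[0]
    mean_r = (r.sum() - k) / (k * (k - 1))  # mean off-diagonal correlation
    return k * mean_r / (1 + (k - 1) * mean_r)

def corrected_item_total(items: np.ndarray) -> np.ndarray:
    """Correlation of each item with the sum of the *remaining* items,
    i.e., corrected for item overlap with the total score."""
    total = items.sum(axis=1)
    return np.array([
        np.corrcoef(items[:, j], total - items[:, j])[0, 1]
        for j in range(items.shape[1])
    ])
```

Because the item is removed from the total before correlating, these values are slightly lower than naive item-total correlations, matching the correction noted under Table 1.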
Table 2. Item descriptive statistics (n = 342).
| PDCC Item | Mean * | Variance | Skew | Kurtosis | Shapiro–Wilk W Statistic ** | Anderson–Darling Statistic | Item-to-Total Correlation |
|---|---|---|---|---|---|---|---|
| 1 | 2.67 | 0.98 | 0.31 | −0.76 | 0.88 | 19.71 | 0.62 |
| 2 | 2.68 | 1.06 | 0.34 | −0.86 | 0.87 | 21.42 | 0.58 |
| 3 | 2.32 | 0.75 | 0.82 | 0.77 | 0.84 | 26.31 | 0.64 |
| 4 | 1.99 | 0.72 | 1.02 | 1.48 | 0.81 | 26.14 | 0.70 |
| 5 | 2.35 | 1.04 | 0.62 | −0.23 | 0.87 | 19.05 | 0.79 |
| 6 | 2.67 | 1.22 | 0.25 | −0.95 | 0.89 | 17.03 | 0.53 |
| 7 | 2.49 | 0.84 | 0.35 | −0.40 | 0.88 | 18.78 | 0.72 |
| 8 | 3.39 | 1.04 | −0.16 | −0.63 | 0.90 | 13.71 | 0.15 |
| 9 | 1.75 | 0.66 | 1.14 | 1.74 | 0.78 | 27.98 | 0.68 |
| 10 | 2.13 | 1.01 | 0.74 | 0.14 | 0.86 | 17.77 | 0.70 |
| 11 | 2.83 | 1.05 | 0.03 | −0.65 | 0.91 | 13.55 | 0.33 |
| 12 | 3.12 | 0.91 | −0.18 | −0.47 | 0.90 | 15.94 | 0.51 |
| 13 | 2.45 | 0.89 | 0.33 | −0.48 | 0.89 | 17.33 | 0.71 |
| 14 | 1.93 | 0.82 | 0.97 | 0.86 | 0.82 | 22.78 | 0.55 |
| 15 | 2.69 | 1.03 | 0.22 | −0.53 | 0.91 | 13.99 | 0.64 |
| 16 | 2.16 | 0.85 | 0.74 | 0.33 | 0.85 | 21.15 | 0.81 |
| 17 | 2.22 | 0.80 | 0.64 | 0.33 | 0.86 | 20.55 | 0.79 |
| 18 | 2.42 | 0.85 | 0.35 | −0.15 | 0.89 | 16.97 | 0.69 |
Note: * Mean values indicate the location on the 1 to 5 rating scale for each item. ** The significance level for each W-statistic was <0.0001, which results in the rejection of the null hypothesis that the population from which the sample was drawn was normally distributed for each of the 18 PDCC items. The statistics were calculated after reverse scoring the necessary items. The significance level for the Anderson–Darling tests of univariate normality were all significant, p < 0.001.
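The per-item normality columns (skew, kurtosis, Shapiro–Wilk W, Anderson–Darling) are standard univariate checks. The study ran them in R (e.g., via the MVN package); a rough equivalent in Python with SciPy, purely as a sketch:

```python
import numpy as np
from scipy import stats

def univariate_normality_summary(x: np.ndarray) -> dict:
    """Skewness, excess kurtosis, Shapiro-Wilk W, and the
    Anderson-Darling A^2 statistic for one item's responses,
    mirroring the columns reported in Table 2."""
    w, p = stats.shapiro(x)
    ad = stats.anderson(x, dist='norm')
    return {
        'skew': stats.skew(x),
        'kurtosis': stats.kurtosis(x),   # excess kurtosis (0 for a normal)
        'shapiro_w': w,
        'shapiro_p': p,
        'anderson_a2': ad.statistic,
    }
```

Note that discrete 1 to 5 ratings will essentially always reject strict normality at these sample sizes, which is why the significance of every W-statistic in Table 2 is unsurprising and why the polychoric/DWLS approach below is of interest.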
Table 3. Eigenvalues and proportion of variance for the 3-factor DWLS and ML analyses.
| Factor | Polychoric/DWLS: Eigenvalue | Polychoric/DWLS: Proportion of Variance | Polychoric/DWLS: Cum. Prop. Variance | Pearson/ML: Eigenvalue | Pearson/ML: Proportion of Variance | Pearson/ML: Cum. Prop. Variance |
|---|---|---|---|---|---|---|
| 1 | 8.744 | 0.547 | 0.547 | 7.663 | 0.479 | 0.479 |
| 2 | 1.886 | 0.118 | 0.664 | 1.875 | 0.117 | 0.596 |
| 3 | 0.990 | 0.062 | 0.726 | 1.018 | 0.064 | 0.660 |
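The proportion-of-variance columns in Table 3 are each eigenvalue divided by the number of analyzed items (16 retained items; e.g., 8.744/16 ≈ 0.547). A minimal sketch of this bookkeeping (the actual extractions were run in R and FACTOR; this illustrates only the eigenvalue arithmetic):

```python
import numpy as np

def variance_explained(corr: np.ndarray, n_factors: int = 3):
    """Eigenvalues of a correlation matrix, sorted descending, and the
    cumulative proportion of variance for the first n_factors.
    With a correlation matrix, total variance equals the number of items."""
    eig = np.sort(np.linalg.eigvalsh(corr))[::-1]
    prop = eig / corr.shape[0]  # each eigenvalue over the item count
    return eig[:n_factors], np.cumsum(prop)[:n_factors]
```

For example, an identity correlation matrix for 4 items yields eigenvalues of 1 and cumulative proportions 0.25, 0.50, 0.75 for the first three factors.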
Table 4. Fit indices for the 3-factor DWLS and ML EFA models.
| Fit Index for a 3-Factor Model | Polychoric/DWLS: Value | Polychoric/DWLS: 95% CI | Pearson/ML: Value | Pearson/ML: 95% CI |
|---|---|---|---|---|
| RMSEA | 0.051 | 0.038–0.056 | 0.084 | 0.082–0.087 |
| Tucker–Lewis Index | 0.992 | 0.990–0.996 | 0.966 | 0.950–0.970 |
| RMSR | 0.044 | 0.038–0.046 | 0.033 | 0.028–0.034 |
| Minimum fit χ² | 68.997, df = 75, ns | — | 252.262, df = 75, p < 0.00001 | — |
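The tabled ML RMSEA can be approximated from the minimum fit χ², its degrees of freedom, and the sample size using the common point-estimate formula. This is only a sketch under that assumption; FACTOR applies its own corrections, so the values differ slightly (here 252.262 with df = 75 and n = 342 gives ≈ 0.083 versus the tabled 0.084):

```python
import math

def rmsea(chi2: float, df: int, n: int) -> float:
    """Common RMSEA point estimate: square root of the estimated
    population misfit per degree of freedom, scaled by sample size.
    A chi-square at or below its df yields 0 (perfect-fit floor)."""
    return math.sqrt(max(chi2 / df - 1.0, 0.0) / (n - 1))
```

Note that the DWLS chi-square (68.997 on 75 df) sits below its degrees of freedom, so this naive formula floors at 0; the 0.051 reported for DWLS comes from FACTOR's own robust computation.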
Table 5. Indices of factor determinacy, factor simplicity, and construct replicability.
| Factor Index | Polychoric/DWLS: Value | Polychoric/DWLS: 95% CI | Pearson/ML: Value | Pearson/ML: 95% CI |
|---|---|---|---|---|
| Factor Determinacy Index (FDI): Factor 1 | 0.966 | 0.943–0.976 | 0.920 | 0.879–0.937 |
| FDI: Factor 2 | 0.979 | 0.971–0.994 | 0.966 | 0.956–0.981 |
| FDI: Factor 3 | 0.958 | 0.927–0.981 | 0.948 | 0.927–0.960 |
| Construct Replicability (G-H) Index: Factor 1 | 0.870 | 0.835–0.891 | 0.846 | 0.773–0.877 |
| G-H Index: Factor 2 | 0.958 | 0.922–0.991 | 0.932 | 0.914–0.962 |
| G-H Index: Factor 3 | 0.893 | 0.844–0.932 | 0.899 | 0.859–0.921 |
| Factor Simplicity Index (FSI) | 0.956 | 0.728–0.991 | 0.980 | 0.868–0.996 |
Table 6. Rotated factor loadings and communalities for the DWLS and ML factor analyses.
| PDCC Item | Polychoric/DWLS: Loadings (>0.3) | Polychoric/DWLS: Comm. | Pearson/ML: Loadings (>0.3) | Pearson/ML: Comm. |
|---|---|---|---|---|
| 1 | 0.578 | 0.597 | 0.555 | 0.524 |
| 2 | 0.908 | 0.864 | 0.792 | 0.679 |
| 3 | 0.674, 0.525 | 0.573 | 0.392, 0.680 | 0.512 |
| 4 | 0.822, 0.353 | 0.702 | 0.827 | 0.642 |
| 5 | 0.641 | 0.708 | 0.629 | 0.645 |
| 6 | 0.715 | 0.739 | 0.728 | 0.617 |
| 7 | 0.713 | 0.693 | 0.593 | 0.604 |
| 9 | 0.839 | 0.859 | 0.829 | 0.694 |
| 10 | 0.701 | 0.747 | 0.668 | 0.622 |
| 12 | 0.415, 0.377 | 0.308 | 0.300, 0.363 | 0.261 |
| 13 | 0.676 | 0.606 | 0.671 | 0.529 |
| 14 | 0.726 | 0.573 | 0.699 | 0.480 |
| 15 | 0.540 | 0.557 | 0.467 | 0.455 |
| 16 | 1.022 | 0.892 | 0.990 | 0.792 |
| 17 | 0.784 | 0.808 | 0.757 | 0.701 |
| 18 | 0.986 | 0.681 | 0.884 | 0.588 |
Note: Only rotated factor loadings greater than 0.3 are shown. Given that the promin factor rotation method allows factors to be correlated, it is possible for factor loadings to exceed a value of 1.0.
Table 7. Factor correlations for the DWLS and ML factor analyses.
| Factors | DWLS Analysis: 1 | DWLS Analysis: 2 | ML Analysis: 1 | ML Analysis: 2 |
|---|---|---|---|---|
| 1. Factor 1 | 1.00 | | 1.00 | |
| 2. Factor 2 | 0.71 | 1.00 | 0.60 | 1.00 |
| 3. Factor 3 | 0.27 | 0.64 | 0.30 | 0.73 |
Note: All correlations were statistically significant, p < 0.05.
Table 8. Descriptive statistics for the PDCC full scale and factor subscales (n = 342).
| Scale | Mean | Median | Stand. Dev. | Skewness | Kurtosis | Cronbach’s α (95% CI) |
|---|---|---|---|---|---|---|
| PDCC-Full | 38.0 | 37 | 10.38 | 0.41 | 0.74 | 0.92 (0.91–0.94) |
| Factor 1 | 15.9 | 16 | 4.76 | 0.88 | 1.94 | 0.86 (0.83–0.89) |
| Factor 2 | 14.1 | 13.5 | 4.65 | 0.46 | 0.25 | 0.91 (0.89–0.92) |
| Factor 3 | 15.5 | 15 | 4.13 | 0.08 | 0.20 | 0.80 (0.76–0.84) |
Note: The PDCC-Full scores are the sum of the 16 retained items after reverse-scoring the appropriate items.
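Scoring as described in the note (sum of the 16 retained items after reverse-scoring) can be sketched as follows. On a 1 to 5 scale, reverse-scoring maps a response x to 6 − x. Which PDCC items are reversed is not listed in this table, so the item indices in the usage below are placeholders, not the actual reversed items:

```python
import numpy as np

def score_pdcc(responses: np.ndarray, reverse_items: list) -> np.ndarray:
    """Sum the retained items after reverse-scoring the indicated columns.

    responses: (n_respondents, n_items) array of 1-5 ratings for the
    retained items; reverse_items: 0-based column indices to flip
    (placeholder indices -- consult the scale's key for the real ones).
    """
    scored = responses.astype(float).copy()
    scored[:, reverse_items] = 6 - scored[:, reverse_items]  # 1<->5, 2<->4
    return scored.sum(axis=1)
```

For instance, with two items and the second reversed, responses (1, 5) score as 1 + 1 = 2 and responses (3, 3) as 3 + 3 = 6.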
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

MDPI and ACS Style

Stewart, A.E. Reliability and Exploratory Factor Analysis of a Measure of the Psychological Distance from Climate Change. Climate 2024, 12, 76. https://doi.org/10.3390/cli12050076
