
Psychometric Investigation of the Cultural Intelligence Scale Using the Rasch Measurement Model in South Korea

1 Department of Human Resource Development, Graduate School, Chung-Ang University, Seoul 06974, Korea
2 Department of Education, College of Education, Chung-Ang University, Seoul 06974, Korea
* Author to whom correspondence should be addressed.
Sustainability 2021, 13(6), 3139; https://doi.org/10.3390/su13063139
Submission received: 20 February 2021 / Revised: 4 March 2021 / Accepted: 8 March 2021 / Published: 12 March 2021
(This article belongs to the Section Sustainable Management)

Abstract

As the business environment and the composition of its workforce have become more diverse, increasing attention has been paid to cultural intelligence, that is, the ability to function effectively in such an environment. Various attempts have been made to measure and develop the cultural intelligence of leaders and staff members. The Cultural Intelligence Scale (CQS) is the most widely used instrument for measuring cultural intelligence worldwide and has frequently been employed in South Korea. Although the validity of the CQS has been verified for several countries and cultural regions, in South Korea it has been used as a simple Korean translation of the original scale, without psychometric verification. The objective of this study was to test the validity of the CQS when administered to employees of South Korean companies, applying the Rasch measurement model to examine validity from several perspectives. First, item fit indices and point-biserial correlations were calculated; all values exceeded the adequacy thresholds. A rating scale analysis was then conducted to check substantive validity, confirming that the 7-point rating scale functioned properly, and the item–person map showed that the CQS items reflected the respondents' ability distribution relatively well. Rasch factor analysis, conducted to examine the validity of the internal structure, revealed that the CQS has a multidimensional factor structure. Lastly, differential item functioning by position was examined to secure evidence of generalizability, and the CQS was confirmed to have an appropriate level of generalization-related validity.

1. Introduction

Owing to the digital transformation accelerated by COVID-19, we now encounter different worlds in real time, interacting across cultures, between organizations, and between departments in the workplace. The Harvard Business Review has identified the ability to coordinate codes across cultures as one of the three key abilities required of managers working in a dynamic 21st-century workplace [1]. Although cross-cultural contact and diversity, which have become commonplace, can enhance the production of creative outcomes, they often lead to cultural conflicts [2]. Employees face more diverse cultural challenges due to increased international business and ethnic diversity [3]. Andreason empirically evaluated the overseas adaptation of sojourning employees and reported that between 40% and 70% of them failed in overseas secondment tasks due to cultural issues in developed and developing countries [4]. Today, it is imperative to identify the factors that promote intercultural harmony and effective functioning in a multicultural context [5].
Discussions of culture in the workplace have therefore expanded. Cultural intelligence (CQ) was developed by Earley [6], who combined the cross-cultural management research that Hofstede, Trompenaars, and others had conducted since the 1980s with the theory of intelligence from the traditional cognitive tradition. CQ refers to the ability to adapt and function effectively in different cultural environments [7]. It is essential for working in an organization whose members come from various nationalities in a culturally diverse workplace [7]. CQ has drawn many researchers' interest because it seeks answers to an important question, "What traits help some people adapt better than others in a culturally different context?", by empirically testing adaptive outcomes across different samples in multicultural environments [8,9,10,11].
In addition, many researchers have continued to assess and develop CQ. Various attempts have been made to measure CQ since Ang and her colleagues developed the Cultural Intelligence Scale [12,13,14,15]. Recent studies have sought to validate the CQS in various countries (China, the Netherlands, Poland, and Italy) and cultural contexts [16,17,18]. However, it remains uncertain whether it is valid to use the CQS, one of the representative tools for measuring CQ, in South Korea, because there is no evidence of its psychometric validity for a Korean population. Examining its validity in the South Korean organizational context will therefore confirm the appropriateness of applying the instrument in this setting and will enhance a multicultural understanding of CQ. In particular, unlike most previous studies that evaluated the validity of the CQS mainly through factor analysis, this study aimed to examine more in-depth psychometric evidence, including the suitability of the items for South Korean subjects, by considering the validity of individual items and the subjects' ability simultaneously using the Rasch measurement model, which is based on item response theory. This study also sought to propose implications for measuring CQ in a way suited to the South Korean organizational context.

2. Literature Review

2.1. Definition and Dimensions of CQ

As more business organizations are characterized by diversity, it has become increasingly necessary for individuals to work or interact regularly with people from different cultural and ethnic backgrounds [19]. To survive fierce global competition, multinational corporations have expanded overseas and manage their business activities by dispatching employees to local operations. In this context, it is critical to understand why some individuals are more effective than others in culturally diverse situations. Earley and Ang conceptualized CQ to find answers to this question [20].
CQ can be defined as an individual’s ability to effectively function and manage in culturally diverse contexts [19,20]. Moreover, CQ is the ability to recognize and express cultural diversity, solve problems caused by cultural differences, and work in harmony with people with various cultural backgrounds while minimizing cultural conflicts [21]. It includes developing behaviors and capacities for promoting effective cross-cultural functioning beyond simply adaptive behaviors [5].
CQ is composed of four factors: meta-cognitive CQ, cognitive CQ, motivational CQ, and behavioral CQ [8]. The following describes these factors:
First, meta-cognitive CQ refers to the individual mental processes used to understand and respond to cultural knowledge; it is a higher-order cognitive concept. In other words, it refers to the level of perception and awareness regarding multiple cultures while an individual interacts with others in a multicultural context [19]. The specific abilities associated with it include planning, monitoring, and modifying mental models related to national or group cultural norms [8]. When people with high meta-cognitive CQ interact with people from different cultural backgrounds, they consciously and intentionally question their own cultural assumptions, reflect on these assumptions during intercultural interactions, and adjust their cultural knowledge [22].
Second, cognitive CQ concerns knowledge of the norms and practices of various cultures obtained from education or personal experience; it has been defined as the level of an individual's cultural knowledge (or knowledge about the cultural environment) [8,23]. People with high cognitive CQ have prompt and excellent social awareness [19]. They can therefore adapt flexibly and effectively to an unfamiliar culture, and this awareness is accompanied by outstanding inductive and analogical reasoning skills [19].
Third, motivational CQ is defined as an individual's direct interest in, and effort toward, learning about and responding appropriately to situations arising from cultural differences. It reflects a tendency to adapt to a different culture, which includes behaving appropriately as well as the cognitive capacity to understand the situation. People with high motivational CQ can direct attention and effort toward culturally different situations, displaying an intrinsic interest in and self-efficacy toward them [19].
Fourth, behavioral CQ is the ability to express appropriate verbal and nonverbal behaviors when interacting with people from different cultural backgrounds [23]. Behavioral CQ refers to acting appropriately for a specific situation from among various behavioral patterns [20,24]. People with high behavioral CQ first recognize what to do and how to act appropriately in a given situation and adopt an attitude of adapting and striving [24].

2.2. Measurement and Validity of CQ

CQ measuring tools have been actively developed since the CQS was established based on Earley and Ang's concept of CQ [8]. Ang and colleagues created 40 items with the help of an expert group and validated them using 576 university students in Singapore, refining them into 20 items that were cross-validated in a second study. A subsequent third study retested a randomly selected portion of the earlier participants to establish retest reliability, and a fourth study was conducted with United States college students to verify generalizability across countries. CQ measurement aims to evaluate performance in multicultural situations that may be affected by differences in race, ethnicity, and nationality. Overall, the factor structure of the CQS showed stable and culturally universal characteristics over time, and each dimension of the 20-item scale achieved satisfactory reliability (ranging from 0.70 to 0.88) [8,9].
Unlike Earley and Ang, who conceptualized CQ as having four dimensions, Thomas et al. [21] developed a CQ measure consisting of three dimensions: cultural metacognition, skills, and knowledge. Although this measurement tool shows high reliability and validity, its use is limited due to its complexity. As a result, Thomas et al. [13] developed and validated a short-form cultural intelligence (SFCQ) scale composed of 10 items. Using the SFCQ scale, Thomas et al. [13] showed that CQ is distinct from personality and emotional intelligence, negatively correlated with ethnocentrism, and positively correlated with other indicators associated with multicultural experience.
In addition, many researchers have continued to develop tools to measure CQ. These include the mini-CQS, a shortened self-report CQ scale developed by Van Dyne et al. [9]; a firm-level CQ scale developed by Ang and Inkpen [12]; the E-CQS, an extended CQ scale developed by Van Dyne et al. [3]; the SFCQ, developed by Thomas et al. [13]; the BCIQ, a business CQ scale developed by Alon et al. [25]; and the COCI, an inter-organizational CQ scale developed by Zhou et al. [26].
The CQS developed by Ang et al. [8] is the most representative and widely used CQ measure. However, several shortcomings have been pointed out. First, although validity testing showed strong correlations among the dimensions of the four-dimensional CQS structure, its discriminant validity has not been established [14]. The issue of measurement invariance across cultures has also been raised: when the scale was tested with subjects from the Netherlands and China, a two-dimensional structure showed higher discriminant validity than the existing four-dimensional structure [16]. These results indicate that validity should be examined before using the instrument in a given context, because the CQS may show a different basis of validity depending on national and cultural contexts. Second, all existing CQ measures, including the CQS, have relied mainly on internal consistency (Cronbach's α) reliability indices and factor analysis techniques based on classical test theory to reduce and validate items during development. Although these techniques are useful for confirming a scale's sub-factor structure based on the correlations between individual items, a high correlation coefficient alone does not guarantee that all the items grouped into a sub-factor measure the target variable well. It is essential to verify whether each item and the range of response categories function properly for the test's purpose, but factor analysis cannot provide such information. In contrast, item response theory can estimate the subject's ability and the functioning of individual items while maintaining the invariance of item characteristics and of the subject's ability [27]. This advantage has led to its active use for validating scales in the business administration field, both in Korea and abroad [28].
The objectives of this study were to verify the validity of the CQS in various dimensions based on item response theory and to confirm its applicability to the South Korean organizational context.

3. Method

3.1. Procedure and Participants

This study used a survey to achieve the study objective. The population was employees of Korean companies (organizations), and the sample was selected using convenience sampling, a non-probability sampling method. The survey was administered to the selected sample using online and offline questionnaires, and 486 questionnaires were used for the statistical analyses after incomplete responses were excluded. Gender showed a relatively even distribution: 250 males (52.2%) and 229 females (47.8%). By age, people in their 30s were the majority (216 subjects; 46.5%), followed by those in their 20s (158 subjects; 34.0%), those in their 40s (74 subjects; 15.9%), and those in their 50s or older (17 subjects; 3.7%). By position, the majority were rank-and-file employees (167 subjects; 34.9%), followed by assistant managers (140 subjects; 29.2%), managers (89 subjects; 18.6%), deputy department heads (44 subjects; 9.2%), and department heads and above (39 subjects; 8.1%). By job function, management support accounted for the largest share (186 subjects; 40.9%), followed by sales/marketing/services (143 subjects; 31.4%), R&D (53 subjects; 11.6%), corporate strategy (20 subjects; 4.4%), others (20 subjects; 4.4%), finance (14 subjects; 3.1%), production (11 subjects; 2.4%), quality (5 subjects; 1.1%), and purchasing (3 subjects; 0.7%).

3.2. Measurement

Cultural intelligence is a specific form of intelligence that focuses on effectively identifying, inferring, and acting in situations characterized by cultural diversity [2]. This study evaluated the validity basis of the Cultural Intelligence Scale (CQS) developed by Ang et al. [8] to measure cultural intelligence, using the Rasch measurement model. The CQS consists of four dimensions (20 items), and the items of each dimension are as follows: four items on metacognitive CQ (e.g., "I am conscious of the cultural knowledge I use when interacting with people with different cultural backgrounds." and "I adjust my cultural knowledge as I interact with people from a culture that is unfamiliar to me."); six items on cognitive CQ (e.g., "I know the legal and economic systems of other cultures." and "I know the cultural values and religious beliefs of other cultures."); five items on motivational CQ (e.g., "I enjoy interacting with people from different cultures." and "I am confident that I can socialize with locals in a culture that is unfamiliar to me."); and five items on behavioral CQ (e.g., "I change my verbal behavior (e.g., accent, tone) when a cross-cultural interaction requires it." and "I use pause and silence differently to suit different cross-cultural situations.").

3.3. Translation Procedure

In this study, the CQS was translated by two researchers who earned their Ph.D. degrees from universities in the United States and South Korea and who had conducted research related to cultural intelligence, one independent English translation expert, and three corporate HR experts. The translation was conducted in four steps according to the guidelines for translating and adapting test tools proposed by the International Test Commission (ITC). First, two researchers who spoke Korean as their mother tongue and were fluent in English translated the original CQS items into Korean. Second, the two translated versions were compared, and differences between them were revised and supplemented to arrive at the most appropriate wording based on discussion between the researchers. Third, the Korean version was translated back into English, and this back-translation was compared with the original CQS items. In the last step, three human resources experts with more than 10 years of working experience in Korean companies verified the final Korean version of the CQS.

3.4. Statistical Analysis

The Rasch measurement model is part of item response theory, a theoretical and mathematical account of how social and psychological variables should fundamentally be measured [29]. It therefore provides prescriptive guidance for scientific and rigorous measurement [29]. To apply the Rasch measurement model to the CQS rating scale, this study used the Rasch rating scale model [30] and, based on the parameter estimates for items and respondents, carried out individual item fit analysis, response scale analysis, dimensionality verification, and differential item functioning analysis by position. The model equation is as follows:
$$\frac{P_{nik}}{P_{ni(k-1)} + P_{nik}} = \frac{\exp\left[D\left(\theta_n - (\delta_i + \tau_k)\right)\right]}{1 + \exp\left[D\left(\theta_n - (\delta_i + \tau_k)\right)\right]}$$
where Pnik indicates the probability that subject n responds in category k of item i, while Pni(k−1) is the probability that subject n responds in category k − 1 of item i. Moreover, θn, δi, and τk refer to the subject's ability, the endorsability of the item, and the location of the boundary between categories k and k − 1, respectively [31], and D is the usual scaling constant.
This study used the Winsteps 3.91 program to estimate the parameters of the rating scale model and to carry out the analyses.
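For readers who want to experiment with the rating scale model outside Winsteps, the following is a minimal Python sketch (not the authors' code) that converts a person ability, an item endorsability, and a set of Andrich thresholds into category probabilities. The threshold values reuse the step calibrations reported later in Table 3; the function name, the default scaling constant, and the example person and item values are hypothetical.

```python
import numpy as np

def category_probabilities(theta, delta, taus, D=1.0):
    """Rasch rating scale model: probability of each response category.

    theta : person ability (logits)
    delta : item endorsability/difficulty (logits)
    taus  : Andrich thresholds for categories 2..m (lowest category has none)
    D     : scaling constant (1.0 for the pure Rasch metric)
    """
    # Each step contributes (theta - delta - tau_k); the cumulative sum gives
    # the log-numerator for each category above the lowest, whose term is 0.
    steps = D * (theta - delta - np.asarray(taus, dtype=float))
    log_num = np.concatenate(([0.0], np.cumsum(steps)))
    num = np.exp(log_num - log_num.max())   # stabilise before normalising
    return num / num.sum()

# Illustrative 7-point item using the step calibrations from Table 3
taus = [-2.36, -1.38, -0.87, 0.29, 1.52, 2.81]
probs = category_probabilities(theta=0.5, delta=0.0, taus=taus)
print(np.round(probs, 3), probs.sum())      # seven probabilities summing to 1
```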

4. Results

4.1. Content Validity

First, to check the adequacy of the content of each item constituting the scale, this study obtained each item's measure, fit indices, and point-biserial correlation coefficient from the Rasch model. The results are shown in Table 1 and Table 2. Item #2 of metacognitive CQ ("I adjust my cultural knowledge as I interact with people from a culture that is unfamiliar to me.") had the lowest measure (−0.64); in other words, it was the easiest item for respondents to endorse. In contrast, item #4 of cognitive CQ ("I know the marriage systems of other cultures.") showed the highest measure (0.79), making it the hardest item for respondents to endorse.
Additionally, each item's fit was analyzed, and all items fell between 0.6 and 1.4, the thresholds for judging the adequacy of infit and outfit statistics for Likert-type rating scales [31]. However, item #5 of motivational CQ ("I am confident that I can get accustomed to the shopping conditions in a different culture.") showed relatively higher values (infit 1.31, outfit 1.29) than the other items, so its content should be reviewed further when composing a questionnaire. The point-biserial correlation coefficient shows the correlation between the response score on a specific item and the subject's total score. The point-biserial correlation coefficients of all items were above the adequacy criterion of 0.4, indicating generally good values.
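As a rough illustration of the point-biserial criterion described above, the sketch below (hypothetical data, not the authors' code) computes a simple item–total correlation for each item, which is the intuition behind the point-measure correlations that Winsteps reports; values above roughly 0.4 are read as adequate.

```python
import numpy as np

def item_total_correlations(responses):
    """Correlate each item's scores with the rest-of-scale total.

    responses : (n_persons, n_items) array of Likert responses (e.g., 1-7)
    Returns one correlation per item.
    """
    responses = np.asarray(responses, dtype=float)
    n_items = responses.shape[1]
    corrs = []
    for i in range(n_items):
        rest_total = responses.sum(axis=1) - responses[:, i]  # avoid self-correlation
        corrs.append(np.corrcoef(responses[:, i], rest_total)[0, 1])
    return np.array(corrs)

# Hypothetical 5-person, 4-item example
data = [[5, 6, 4, 5],
        [3, 4, 2, 3],
        [6, 7, 5, 6],
        [2, 3, 2, 2],
        [4, 5, 4, 4]]
print(np.round(item_total_correlations(data), 2))
```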

4.2. Substantive Validity

To examine the substantive validity of the CQS, this study evaluated how the response categories used to measure the items functioned. Because the original CQS uses a 7-point rating scale, this study also used a 7-point response scale. Table 3 shows the results of the rating scale function analysis. As the category score increased from 1 to 7, the respondents' estimated ability increased correspondingly from −1.14 to 1.99 logits. Moreover, the threshold parameters were ordered properly from 1 to 7, with no reversals in the order of the category difficulties (Figure 1). These results confirmed that the 7-point rating scale of the CQS functioned properly.
Next, the item–subject distribution was examined to check the fit between the respondents' estimated ability values, based on their responses, and the difficulty distribution of the items. The item–subject distribution provides visual information on whether the measurement tool functions properly for the respondents' ability distribution: if the items are located too high (too difficult) or too low (too easy) relative to that distribution, they cannot be considered to reflect the respondents' abilities properly. Figure 2 presents the results and shows that the items constituting the CQS reflect the distribution of respondents' ability relatively well. The range covered by the CQS items included 65.6% of all survey respondents, while the remaining 34.4% of the ability distribution could not be measured. However, the CQS items tended to be somewhat easy relative to the respondents' levels of cultural intelligence. Therefore, it may be necessary to develop additional items for organization members with a high level of cultural intelligence.

4.3. Structural Validity

This study evaluated dimensionality using Rasch factor analysis to confirm the validity of the internal structure. Unlike general factor analysis, Rasch factor analysis tests dimensionality through a principal component analysis that simultaneously considers the standardized residuals and item difficulty. The results showed that the Rasch model explained 47.4% of the total variance, with 19.42% explained by the respondents and 28.15% by the items. Because the Rasch model assumes unidimensionality, a variance explained of less than 50% implies that the scale has multidimensional characteristics.
This study then inspected the dimensionality map in Figure 3 to identify the specific internal structure of the CQS. The dimensionality map shows the factor distribution of the standardized residuals remaining after the data are explained by the Rasch measurement model; it plots the principal component factor loading of each item's residuals, and the loadings increase as the amount of variance left unexplained by the Rasch measures grows. The analysis showed that the items mainly measuring the cognitive dimension (e.g., A (cognition #4), B (cognition #1), C (cognition #3), D (cognition #5), and E (cognition #2)) had factor loadings of ±0.4 or higher and formed one cluster. Items such as I (meta #4) and J (meta #2) showed factor loadings below ±0.4 and relatively low difficulty levels, indicating another cluster. Likewise, items with relatively low difficulty levels (e.g., a (motivation #4), b (behavior #1), c (motivation #4), d (motivation #5), e (behavior #3), f (motivation #2), and g (behavior #4)) had factor loadings of 0.4 or higher in absolute value and formed a cluster consisting mainly of the motivational and behavioral dimensions. These results imply that the CQS has at least three distinct internal dimensions; thus, the Rasch factor analysis of the internal structure confirmed that the CQS is multidimensional.
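Conceptually, the Rasch factor analysis described above is a principal component analysis of standardized residuals. The sketch below is a simplified illustration of that idea with hypothetical inputs, not the Winsteps implementation used in the study: standardize the residuals, correlate them across items, and inspect the leading contrast.

```python
import numpy as np

def residual_pca(observed, expected, variance):
    """PCA of standardized Rasch residuals (the idea behind Figure 3).

    observed, expected, variance : (n_persons, n_items) arrays of observed
    responses, model-expected responses, and model variances of each response
    (all obtainable from a Rasch rating-scale fit). Returns eigenvalues and
    item loadings of the residual correlation matrix; a dominant first
    contrast suggests a secondary dimension.
    """
    z = (observed - expected) / np.sqrt(variance)   # standardized residuals
    r = np.corrcoef(z, rowvar=False)                # item-by-item correlations
    eigvals, eigvecs = np.linalg.eigh(r)
    order = np.argsort(eigvals)[::-1]               # largest eigenvalue first
    return eigvals[order], eigvecs[:, order]

# Hypothetical toy data: 6 persons x 4 items
obs = np.array([[5, 6, 4, 3],
                [3, 4, 2, 5],
                [6, 7, 5, 2],
                [2, 3, 3, 6],
                [4, 5, 4, 4],
                [7, 6, 6, 1]], dtype=float)
exp = np.full_like(obs, 4.0)    # placeholder model expectations
var = np.full_like(obs, 2.0)    # placeholder model variances
eigvals, loadings = residual_pca(obs, exp, var)
print(np.round(eigvals, 2))         # size of each residual contrast
print(np.round(loadings[:, 0], 2))  # item loadings on the first contrast
```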

4.4. Generalizability

This study conducted a differential item functioning (DIF) analysis by position to obtain evidence for the generalizability of the CQS. Respondents were divided into a manager group and a member group based on position and role, and this study examined whether the items were measured differently between the groups. If a specific item yields different measures across groups, it can be judged to have low generalization-related validity. First, the difference in item measures between the groups was examined: no item differed by more than 0.64 logits [32], a criterion for judging meaningful differential item functioning. Next, the Rasch–Welch t-test was performed to evaluate the difference in measures between the manager and member groups; no item reached a statistically significant value of |t| ≥ 1.96. These analyses confirmed that the CQS provided consistent measurements regardless of the respondents' position or role, and it was therefore concluded that the scale has an appropriate level of generalization-related validity.
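To make the two criteria concrete, the following sketch (hypothetical numbers, not the study's data, and not the Winsteps implementation) flags an item as showing meaningful DIF only when the between-group contrast exceeds 0.64 logits and the Welch-style t statistic exceeds 1.96 in absolute value, mirroring the decision rule described above.

```python
import numpy as np

def dif_check(measure_a, se_a, measure_b, se_b,
              contrast_cut=0.64, t_cut=1.96):
    """Flag items for DIF using the two criteria described in the text.

    measure_a/b : item difficulty estimates (logits) from each group
    se_a/b      : their standard errors
    """
    measure_a, measure_b = np.asarray(measure_a), np.asarray(measure_b)
    se_a, se_b = np.asarray(se_a), np.asarray(se_b)
    contrast = measure_a - measure_b                     # DIF contrast in logits
    t = contrast / np.sqrt(se_a**2 + se_b**2)            # Welch-style t statistic
    flagged = (np.abs(contrast) > contrast_cut) & (np.abs(t) > t_cut)
    return contrast, t, flagged

# Hypothetical item measures for a manager group vs. a member group
contrast, t, flagged = dif_check(
    measure_a=[0.10, -0.30, 0.55], se_a=[0.08, 0.09, 0.10],
    measure_b=[0.05, -0.25, 0.40], se_b=[0.07, 0.09, 0.11])
print(np.round(contrast, 2), np.round(t, 2), flagged)
```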
Moreover, this study calculated separation and reliability for respondents and items to test the reliability of the measure. Person reliability was 0.91 and item reliability was 0.99, with separation indices of 3.24 and 9.12, respectively, indicating excellent reliability. In other words, both the ordering of respondents and the ordering of item difficulties were estimated consistently. This study additionally calculated the strata index, which indicates how many statistically distinct levels of respondents and of item difficulty the CQS can distinguish. The results revealed that respondents could be divided into approximately four groups and items into approximately twelve difficulty levels. Considering these reliability indices, it was concluded that the CQS classifies subjects' cultural intelligence levels and item difficulties consistently and stably.
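For clarity, the reported numbers of distinguishable levels follow from the standard strata formula H = (4G + 1)/3, where G is the separation index; this derivation is added here for illustration and does not appear in the original article:

$$H_{\text{person}} = \frac{4(3.24) + 1}{3} \approx 4.7, \qquad H_{\text{item}} = \frac{4(9.12) + 1}{3} \approx 12.5$$

Rounding down gives roughly four statistically distinct respondent groups and about twelve item difficulty levels, consistent with the figures reported above.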

5. Discussion and Conclusions

This study examined the validity of the CQS, the most actively used tool for measuring CQ, at various levels by applying the Rasch measurement model in order to secure a basis for its validity. Statistical analyses were carried out using survey responses from 486 employees of Korean companies, and the results for each validity dimension were as follows.
First, to examine the content validity of the measurement tool, the infit and outfit of each of the 20 CQS items were derived. All infit and outfit values fell between 0.6 and 1.4, the criterion for determining adequacy [31]. Moreover, the point-biserial correlation coefficients of the items, which indicate the degree of correlation between the response score on an item and the subject's total score, were all higher than 0.4, the criterion for judging appropriateness, and thus showed good values in general.
Second, a scale function analysis was carried out to evaluate substantive validity, and the 7-point Likert scale was found to function properly in the CQS. As the category score increased from 1 to 7, the respondents' ability estimates increased from −1.14 to 1.99 logits, and the threshold parameters were appropriately ordered from 1 to 7 according to category difficulty. The item–subject map was also checked to confirm the fit between the distribution of the subjects' ability and the difficulty of the items. The results revealed that most of the CQS items were easy for members of South Korean organizations to answer: the CQS items covered 65.6% of the survey respondents' ability distribution and could not measure the remaining 34.4%. Therefore, it may be beneficial to develop additional items for organization members with high CQ levels.
Third, a principal component analysis of standardized residuals was performed to check the validity of the internal structure. The Rasch model explained 47.4% of the variance, implying that the CQS has multidimensional characteristics. The dimensionality map, which shows the detailed factor clusters, indicated that the CQS items clustered into three sub-factors. This result supports the three-factor model of Thomas et al., which was proposed as an alternative to the existing four-factor model. In particular, the Korean employees studied here perceived motivational CQ and behavioral CQ as one component. This should be considered when elaborating the construct of CQ and improving its measurement tools in the future.
Lastly, a differential item functioning analysis by position was conducted to assess generalization-related validity. After dividing the subjects into a manager group and a member group, this study evaluated whether the items were measured differently between the groups. The items showed consistent measurement results regardless of the respondents' rank or role, so it was concluded that the CQS has an appropriate level of generalization-related validity. This study then calculated separation and reliability for respondents and items to verify the reliability of the measure: person reliability was 0.91, item reliability was 0.99, person separation was 3.24, and item separation was 9.12. Considering these reliability indices, the CQS can be judged to distinguish the subjects' CQ levels and the item difficulties consistently and stably.
The study results suggest the following implications. First, it is necessary to validate the CQS in advance when it is used to measure the CQ of an organization's members. This study verified that the CQS is a valid tool for measuring the CQ level of employees in South Korea, with statistically suitable evidence of content validity, substantive validity, internal structure validity, and generalization-related validity. Second, it will ultimately be necessary to develop a new CQ scale that reflects the characteristics of the South Korean organizational context. Although the CQS generally satisfied the statistical standards, its items were mostly easy for Korean employees to answer, so it cannot adequately measure the CQ of respondents with high CQ levels. Additionally, even though the tool was developed with a conceptual four-factor structure, Korean employees in practice distinguished only three factors and did not clearly separate behavioral CQ from motivational CQ. Therefore, a new measurement tool should be developed that reflects the cultural characteristics of the Korean organizational context and accurately measures the CQ of Korean employees.

6. Limitations and Future Directions

This study has several limitations despite the above findings and implications. First, it could not examine external validity and outcome validity among the six validity dimensions proposed by Messick [33] due to limitations in the composition of the data. External validity can be checked using evidence of concurrent and predictive validity, while outcome validity can be evaluated using the many-facet Rasch measurement model, which analyzes the degree of consistency among raters. It will therefore be necessary to examine the validity of the CQS in more depth using these methods in the future. Second, differential item functioning analysis could not be conducted for demographic variables other than the respondents' position. To secure the measurement invariance of the CQS, it is necessary to survey various organizations and individuals in South Korea and to examine differential item functioning across various dimensions. Third, only the CQS, a representative CQ measurement tool, was examined. It will be possible to select and utilize the CQ measure most appropriate for the Korean organizational context by conducting separate validity tests of the recently developed extended cultural intelligence scale (E-CQS) and the short-form cultural intelligence measure (SFCQ), or by comparing them with the CQS.

Author Contributions

S.Y.L. designed and wrote the draft. A.J.H. modified and edited the paper. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the Ministry of Education of the Republic of Korea and the National Research Foundation of Korea (NRF-2020S1A3A2A02091529).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Informed consent was obtained from all subjects involved in the study.

Data Availability Statement

Data sharing not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Molinsky, A.L.; Davenport, T.H.; Iyer, B.; Davidson, C. Three skills every 21st century manager needs. Harv. Bus. Rev. 2012, 90, 139–143.
  2. Ang, S.; Van Dyne, L.; Tan, M.L. Cultural intelligence. In The Cambridge Handbook of Intelligence; Sternberg, R.J., Kaufman, S.B., Eds.; Cambridge University Press: New York, NY, USA, 2011; pp. 582–602.
  3. Van Dyne, L.; Ang, S.; Ng, K.Y.; Rockstuhl, T.; Tan, M.L.; Koh, C. Sub-dimensions of the four factor model of cultural intelligence: Expanding the conceptualization and measurement of cultural intelligence. Soc. Personal. Psychol. Compass 2012, 6, 295–313.
  4. Andreason, A.W. Expatriate adjustment to foreign assignments. Int. J. Commer. Manag. 2003, 13, 42–61.
  5. Ang, S.; Van Dyne, L.; Rockstuhl, T. Cultural intelligence: Origins, evolution, and methodological diversity. In Advances in Culture and Psychology; Gelfand, M., Chiu, C.Y., Hong, Y.Y., Eds.; Oxford University Press: New York, NY, USA, 2015; Volume 5, pp. 273–324.
  6. Earley, P.C. Redefining interactions across cultures and organizations: Moving forward with cultural intelligence. Res. Organ. Behav. 2002, 24, 271–299.
  7. Earley, P.C. Cultural intelligence. In Encyclopedia of Management Theory; Kessler, E.H., Ed.; SAGE Publications, Inc.: Thousand Oaks, CA, USA, 2013.
  8. Ang, S.; Van Dyne, L.; Koh, C.; Ng, K.Y.; Templer, K.J.; Tay, C.; Chandrasekar, N.A. Cultural intelligence: Its measurement and effects on cultural judgment and decision making, cultural adaptation and task performance. Manag. Organ. Rev. 2007, 3, 335–371.
  9. Van Dyne, L.; Ang, S.; Koh, C. Development and validation of the CQS. In Handbook of Cultural Intelligence: Theory, Measurement, and Application; M.E. Sharpe: Armonk, NY, USA, 2008; pp. 16–38.
  10. Ward, C.; Fischer, R.; Zaid Lam, F.S.; Hall, L. The convergent, discriminant, and incremental validity of scores on a self-report measure of cultural intelligence. Educ. Psychol. Meas. 2009, 69, 85–105.
  11. Matsumoto, D.; Hwang, H.C. Assessing cross-cultural competence: A review of available tests. J. Cross-Cult. Psychol. 2013, 44, 849–873.
  12. Ang, S.; Inkpen, A.C. Cultural intelligence and offshore outsourcing success: A framework of firm-level intercultural capability. Decis. Sci. 2008, 39, 337–358.
  13. Thomas, D.C.; Liao, Y.; Aycan, Z.; Cerdin, J.-L.; Pekerti, A.A.; Ravlin, E.C.; Stahl, G.K.; Lazarova, M.B.; Fock, H.; Arli, D.; et al. Cultural intelligence: A theory-based, short form measure. J. Int. Bus. Stud. 2015, 46, 1099–1118.
  14. Bücker, J.; Furrer, O.; Lin, Y. Measuring cultural intelligence (CQ): A new test of the CQ scale. Int. J. Cross Cult. Manag. 2015, 15, 259–284.
  15. Rockstuhl, T.; Ang, S.; Ng, K.Y.; Lievens, F.; Van Dyne, L. Putting judging situations into situational judgment tests: Evidence from intercultural multimedia SJTs. J. Appl. Psychol. 2015, 100, 464–485.
  16. Bücker, J.; Furrer, O.; Weem, T.P. Robustness and cross-cultural equivalence of the Cultural Intelligence Scale (CQS). J. Glob. Mobil. 2016, 4, 300–325.
  17. Gozzoli, C.; Gazzaroli, D. The cultural intelligence scale (CQS): A contribution to the Italian validation. Front. Psychol. 2018, 9, 1–8.
  18. Barzykowski, K.; Majda, A.; Szkup, M.; Przyłęcki, P. The Polish version of the Cultural Intelligence Scale: Assessment of its reliability and validity among healthcare professionals and medical faculty students. PLoS ONE 2019, 14, e0225240.
  19. Ang, S.; Van Dyne, L.; Koh, C. Personality correlates of the four-factor model of cultural intelligence. Group Organ. Manag. 2006, 31, 100–123.
  20. Earley, P.C.; Ang, S. Cultural Intelligence: Individual Interactions across Cultures; Stanford University Press: Stanford, CA, USA, 2003.
  21. Thomas, D.C.; Elron, E.; Stahl, G.; Ekelund, B.Z.; Ravlin, E.C.; Cerdin, J.L.; Poelmans, S.; Brislin, R.; Pekerti, A.; Lazarova, M.B. Cultural intelligence: Domain and assessment. Int. J. Cross Cult. Manag. 2008, 8, 123–143.
  22. Brislin, R.; Worthley, R.; Macnab, B. Cultural intelligence: Understanding behaviors that serve people's goals. Group Organ. Manag. 2006, 31, 40–55.
  23. Ang, S.; Van Dyne, L. Conceptualization of cultural intelligence: Definition, distinctiveness, and nomological network. In The Cambridge Handbook of Acculturation Psychology; Sam, D.L., Berry, J.W., Eds.; Cambridge University Press: Cambridge, UK, 2008; pp. 3–15.
  24. Earley, P.C.; Peterson, R.S. The elusive cultural chameleon: Cultural intelligence as a new approach to intercultural training for the global manager. Acad. Manag. Learn. Educ. 2004, 3, 100–115.
  25. Alon, I.; Boulanger, M.; Elston, J.A.; Galanaki, E.; de Ibarreta, C.M.; Meyers, J.; Muñiz-Ferrer, M.; Vélez-Calle, A. Business cultural intelligence quotient: A five-country study. Thunderbird Int. Bus. Rev. 2016, 60, 237–250.
  26. Zhou, C.; Hu, N.; Wu, J.; Gu, J. A new scale to measure cross-organizational cultural intelligence. Chin. Manag. Stud. 2018, 12, 658–679.
  27. Yates, S.M. Rasch and attitude scales: Explanatory style. In Applied Rasch Measurement: A Book of Exemplars; Alagumalai, S., Curtis, D.D., Hungi, N., Eds.; Springer: Dordrecht, The Netherlands, 2005; pp. 207–225.
  28. Wolfe, E.W.; Smith, E.V. Instrument development tools and activities for measure validation using Rasch models: Part II—Validation activities. J. Appl. Meas. 2007, 8, 204–234.
  29. Bond, T.G.; Fox, C.M. Applying the Rasch Model: Fundamental Measurement in the Human Sciences, 3rd ed.; L. Erlbaum: Mahwah, NJ, USA, 2015.
  30. Andrich, D. A rating formulation for ordered response categories. Psychometrika 1978, 43, 561–573.
  31. Wright, B.D.; Linacre, J.M. Reasonable mean-square fit values. Rasch Meas. Trans. 1994, 8, 370–371.
  32. Zieky, M. Practical questions in the use of DIF statistics in test development. In Differential Item Functioning; Holland, P.W., Wainer, H., Eds.; Lawrence Erlbaum: Hillsdale, NJ, USA, 1993; pp. 337–347.
  33. Messick, S. Standards of validity and the validity of standards in performance assessment. Educ. Meas. Issues Pract. 1995, 14, 5–8.
Figure 1. CQS category probability curve.
Figure 2. Comparison between person traits and item difficulty measures; # indicates the distribution of respondents.
Figure 3. Dimensionality map.
Table 1. Descriptive statistics of the Cultural Intelligence Scale (CQS) items.

Item                  Mean    Standard Deviation   Skewness   Kurtosis
Metacognitive CQ 1    4.383   1.123                −0.146      0.377
Metacognitive CQ 2    4.991   1.164                −0.550      0.495
Metacognitive CQ 3    4.423   1.084                −0.306      0.287
Metacognitive CQ 4    4.754   1.194                −0.274     −0.106
Cognitive CQ 1        3.707   1.227                −0.132     −0.210
Cognitive CQ 2        3.991   1.223                −0.132     −0.199
Cognitive CQ 3        4.063   1.261                −0.206     −0.313
Cognitive CQ 4        3.683   1.323                −0.050     −0.281
Cognitive CQ 5        3.800   1.338                −0.020     −0.499
Cognitive CQ 6        3.866   1.220                 0.013     −0.094
Motivational CQ 1     4.495   1.444                −0.112     −0.519
Motivational CQ 2     4.485   1.427                −0.198     −0.385
Motivational CQ 3     4.696   1.257                −0.343      0.147
Motivational CQ 4     4.438   1.365                −0.226     −0.332
Motivational CQ 5     4.879   1.364                −0.466     −0.135
Behavioral CQ 1       4.539   1.326                −0.346     −0.218
Behavioral CQ 2       4.769   1.096                −0.203      0.009
Behavioral CQ 3       4.897   1.034                −0.400      0.650
Behavioral CQ 4       4.829   1.124                −0.637      0.878
Behavioral CQ 5       4.785   1.154                −0.446      0.241
Table 2. Technical quality of the CQS items.

Item                  Measure   Infit Mean Square   Outfit Mean Square   Point-Biserial Correlation
Metacognitive CQ 1     0.06     0.83                0.84                 0.62
Metacognitive CQ 2    −0.64     1.01                0.98                 0.61
Metacognitive CQ 3     0.01     0.68                0.67                 0.67
Metacognitive CQ 4    −0.36     0.96                0.96                 0.63
Cognitive CQ 1         0.77     0.98                1.00                 0.60
Cognitive CQ 2         0.48     0.84                0.88                 0.66
Cognitive CQ 3         0.40     0.82                0.84                 0.69
Cognitive CQ 4         0.79     1.07                1.11                 0.62
Cognitive CQ 5         0.67     1.09                1.10                 0.63
Cognitive CQ 6         0.60     0.78                0.77                 0.69
Motivational CQ 1     −0.06     1.01                1.02                 0.76
Motivational CQ 2     −0.05     1.11                1.10                 0.71
Motivational CQ 3     −0.29     0.99                0.96                 0.67
Motivational CQ 4      0.00     0.99                0.99                 0.71
Motivational CQ 5     −0.51     1.31                1.29                 0.63
Behavioral CQ 1       −0.11     1.00                1.00                 0.69
Behavioral CQ 2       −0.38     1.23                1.24                 0.44
Behavioral CQ 3       −0.53     0.89                0.87                 0.57
Behavioral CQ 4       −0.45     1.18                1.16                 0.49
Behavioral CQ 5       −0.40     1.22                1.22                 0.50
Table 3. Category structure of the CQS.

Category Score   Count   %    Measure   Infit MNSQ   Outfit MNSQ   Andrich Threshold (Step Calibration)
1                 174     2   −1.14     1.18         1.22          -
2                 630     6   −0.88     1.01         1.05          −2.36
3                1347    14   −0.42     0.96         0.98          −1.38
4                2739    28    0.07     0.91         0.92          −0.87
5                2865    29    0.61     0.93         0.93           0.29
6                1515    16    1.17     1.03         1.01           1.52
7                 450     5    1.99     1.07         1.03           2.81
Note. Infit MNSQ: information-weighted mean square statistic; Outfit MNSQ: outlier-sensitive mean square statistic; rpm: point-measure correlation.
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
