Article

Adaptation and Psychometric Properties of an Attitude toward Artificial Intelligence Scale (AIAS-4) among Peruvian Nurses

by
Wilter C. Morales-García
1,2,3,4,
Liset Z. Sairitupa-Sanchez
5,
Sandra B. Morales-García
6 and
Mardel Morales-García
7,*
1
Escuela de Posgrado, Universidad Peruana Unión, Lima 15457, Peru
2
Facultad de Teología, Universidad Peruana Unión, Lima 15457, Peru
3
Sociedad Científica de Investigadores Adventistas, SOCIA, Universidad Peruana Unión, Lima 15457, Peru
4
Club de Conquistadores, Orión, Universidad Peruana Unión, Lima 15457, Peru
5
Escuela Profesional de Psicología, Facultad de Ciencias de la Salud, Universidad Peruana Unión, Lima 15457, Peru
6
Escuela Profesional de Medicina Humana, Facultad de Ciencias de la Salud, Universidad Peruana Unión, Lima 15457, Peru
7
Unidad de Salud, Escuela de Posgrado, Universidad Peruana Unión, Km 19, Carretera Central, Lima 15033, Peru
*
Author to whom correspondence should be addressed.
Behav. Sci. 2024, 14(6), 437; https://doi.org/10.3390/bs14060437
Submission received: 2 March 2024 / Revised: 11 May 2024 / Accepted: 14 May 2024 / Published: 23 May 2024

Abstract
Background: The integration of Artificial Intelligence (AI) into various aspects of daily life has sparked growing interest in understanding public attitudes toward this technology. Despite advancements in tools to assess these perceptions, there remains a need for culturally adapted instruments, particularly in specific contexts such as that of Peruvian nurses. Objective: To evaluate the psychometric properties of the AIAS-4 in a sample of Peruvian nurses. Methods: An instrumental design was employed, recruiting 243 Peruvian nurses. The Attitude toward Artificial Intelligence in Spanish (AIAS-S), a cultural and linguistic adaptation of the AIAS-4, was administered, and the data were analyzed using descriptive statistics, confirmatory factor analysis (CFA), and invariance tests. Results: The CFA confirmed a unidimensional factor structure with an excellent model fit (χ2 = 0.410, df = 1, p = 0.522, CFI = 1.00, TLI = 1.00, RMSEA = 0.00, SRMR = 0.00). The scale demonstrated high internal consistency (α = 0.94, ω = 0.91). Tests of invariance from configural to strict confirmed that the scale is stable across gender groups. Conclusions: The AIAS-S proved to be a psychometrically solid tool for assessing attitudes toward AI among Peruvian nurses, providing evidence of validity, reliability, and gender invariance. This study highlights the importance of culturally adapted instruments for exploring attitudes toward emerging technologies in specific groups.

1. Introduction

Artificial Intelligence (AI) has marked a milestone in the modern era, standing out for its ability to simulate human cognitive processes in machines and software, with applications ranging from health to education, promising to revolutionize our existence [1,2,3]. As AI integrates into our daily lives, products like Siri and Alexa, along with the development of social robots, showcase its potential to improve quality of life, offering everything from safer driving to more efficient medical care [4,5]. However, this expansion brings significant challenges, such as the risk of job displacement and ethical concerns, reflecting a spectrum of public opinions ranging from acceptance to anxiety [6,7,8,9].
Despite the benefits AI can bring, it is not without faults and requires expert human supervision to mitigate unpredictable errors and biases. It is crucial that users, especially students, are informed about the opportunities and ethical dilemmas associated with responsible and critical use [10,11]. The need for appropriate regulations becomes evident for its integration into society and governmental processes, facing challenges in terms of scope and impact on innovation [12]. The widespread adoption of AI has also been influenced by the COVID-19 pandemic, increasing emotional and psychological dependency on these technologies. In areas like mental health and financial advising, service chatbots have shown potential benefits, though they also present risks such as social withdrawal and addiction [13,14,15,16].
In the debate over AI, concerns about the development of autonomous weaponry and the existential impact of advanced AI have been fueled by prominent figures, reflecting the need for ongoing reflection on humanity’s future with AI [17,18,19,20]. Research on public perceptions is mixed, highlighting both concerns and acknowledgment of its potential for innovative solutions [21,22]. Personality and trust emerge as crucial factors in shaping attitudes toward AI. Studies indicate that traits such as openness and conscientiousness can influence technological acceptance, while trust in AI and in the corporations developing it plays a significant role in the perception of its risk and utility [23,24,25,26].
The integration of AI into clinical practice represents a significant advancement in healthcare. This progress is reflected in the attitude and perception toward AI by healthcare professionals and nursing students, who play a crucial role in the adoption and application of these technologies. Despite the growing importance of AI, previous studies have identified a notable lack of knowledge and understanding about AI in the nursing context, especially in regions like Jordan, recognized as a center of medical care and medical tourism with steady annual growth in foreign patients [27,28].
The attitude toward AI in the clinical community varies, with studies indicating that, although there is a generally positive attitude toward its use, there is a significant gap in knowledge and practical experience with these technologies [29,30]. This challenge extends to nursing education, where more than 70% of nurses and nursing students acknowledge AI’s potential to revolutionize healthcare, but admit to a limited understanding of its practical application [31]. Anxiety and lack of confidence in using AI are also highlighted issues, affecting future professional decisions and openness to specialization in fields like radiology [32].
Furthermore, interdisciplinary collaboration and the gathering of information from a diverse group of healthcare students are essential for the effective incorporation of AI into modern medicine, highlighting the need for a comprehensive approach that includes training in data acquisition and protection, AI ethics, and critical evaluation and interpretation of AI applications in health [33,34]. As the field of AI in healthcare continues to evolve, it is imperative to develop and enhance nurses’ competencies to adapt to these technological changes, ensuring they are equipped with the necessary knowledge to lead and shape the future of nursing practice in the AI era [35,36].
Gender differences in attitudes toward AI have been observed across various domains, including education, healthcare, and professional environments. For instance, a multiple-group analysis of prospective German teachers revealed gender-specific factors that influence the acceptance of AI-based applications, underscoring the importance of addressing these aspects in educational settings [37]. Similarly, attitudes toward AI among medical and pharmacy students have shown variations that could affect future implementation and use in medical practice [38,39]. In the healthcare sector, a study among Chinese dermatologists indicated varying levels of engagement with AI, influenced by gender among other factors, which could impact the adoption and effective use of AI technologies in dermatology [40]. Furthermore, exploring attitudes toward AI in broader demographic samples, including different age groups and cultural backgrounds, further complicates the landscape, suggesting that these attitudes are not only dependent on gender but also on a multitude of sociodemographic factors [41].
This complex landscape underscores the importance of developing accurate and reliable assessment tools to capture general attitudes toward AI, addressing the need for multidimensional approaches in its study [42,43,44,45,46,47]. In this regard, the evolution in the development of instruments to assess attitudes toward AI has been significant over the last decade, with notable contributions such as the General Attitudes towards Artificial Intelligence Scale (GAAIS) [42], the Attitude toward Artificial Intelligence scale (ATAI) [43], and the Threats of Artificial Intelligence Scale (TAI) [48]. These studies have established a foundation for measuring perceptions and attitudes toward AI, highlighting both the positive aspects and the fears associated with its implementation and development. However, despite these advancements, significant limitations in these scales have been identified, justifying the need for further studies, especially in specific cultural and geographical contexts like Peru. The GAAIS, for example, although comprehensive, may be impractical for large-scale applications due to its length, while the ATAI, being more concise, might not capture the complexity and nuances of attitudes toward AI, focusing on extremes that may not reflect the intermediate perceptions of individuals. Moreover, the TAI specifically focuses on fears related to AI, which might not provide a complete picture of general attitudes toward this technology.
The study by Grassini [49] on the Attitude toward AI Scale (AIAS-4) represents a significant advance by offering a brief and psychometrically validated measure that addresses some of these limitations. However, it is crucial to recognize that attitudes toward AI are neither static nor universal. The rapid evolution of AI technology, along with the emergence of large language models like Chat Generative Pre-trained Transformer (ChatGPT), has transformed the way people interact with and understand AI [46]. This technological dynamism, combined with the cultural, social, and economic particularities of each region, suggests that instruments developed and validated in one context may not be directly applicable or fully relevant in another, such as in Peru [50,51]. Therefore, it is essential to have scores derived from measurement instruments that prove to be valid and reliable within the specific cultural context of Peru. Thus, the aim of this study is to evaluate the psychometric properties of the scores obtained through a brief scale that measures the general attitude toward artificial intelligence among Peruvian nurses.

2. Methods

2.1. Design and Participants

This study, of an instrumental nature [52], was based on convenience sampling for participant selection. The inclusion criteria were as follows: currently employed as a nurse, having at least one year of experience in the nursing field to ensure basic familiarity with health technologies, and having had some previous interaction with AI-based tools or systems in the work environment. To determine the necessary sample size, an electronic sample size calculator was used, following the recommendations of Soper [53]. This calculation took into account several critical factors, including the number of observed and latent variables in the proposed model, an expected moderate effect size (λ = 0.10), a statistical significance level of α = 0.05, and a statistical power of 1 − β = 0.80. Although the minimum required sample size was estimated at 85 participants, a total of 243 nurses were recruited, with ages ranging from 22 to 62 years (M = 34.93, SD = 7.94). The majority of participants were women (63.0%), single (53.5%), held a bachelor’s degree (53.1%), and were employed under Contractual Services Administration (CAS) contracts, a form of temporary employment used in the public sector (26.7%) (Table 1).

2.2. Instruments

General Attitude toward Artificial Intelligence. The Attitude toward Artificial Intelligence Scale (AIAS) assesses public attitudes toward artificial intelligence (AI) [49]. This scale, validated in the UK and USA, consists of four items that capture individual beliefs about the influence of AI on their lives, careers, and humanity in general [49]. The AIAS is structured around a single dimension, reflecting a composite attitude toward AI, including perceived utility and potential impact on society. The scale demonstrated adequate internal consistency, with a Cronbach’s alpha of 0.902 and a McDonald’s omega of 0.904. It employs a 10-point Likert scale (1 = Not at all, 10 = Completely agree).
The Spanish adaptation of the AIAS was carried out using a rigorous cultural adaptation method [54] to ensure linguistic accuracy and conceptual equivalence with the original instrument. It is important to note that permission was obtained from the original author for the use of the instrument. The adaptation procedure included the following stages:
  • Two bilingual translators, native Spanish speakers, independently performed the initial translation of the AIAS into Spanish. Subsequently, both versions were compared to create an initial unified version.
  • This translated version was then back-translated into English by two native English speakers from the United States, competent in Spanish but without prior knowledge of the AIAS. This stage aimed to confirm the preservation of the original meaning in the translation.
  • An expert panel, consisting of two psychologists and two nurses, examined the Spanish version along with the back-translated English versions, with the purpose of developing a preliminary version of the AIAS in Spanish (AIAS-S).
  • This preliminary version was then evaluated by a focus group of 10 nurses to verify its comprehension and readability. The issues identified at this stage led to relevant linguistic adjustments, resulting in the final version of the instrument in Spanish, called the AIAS-S (“Attitude toward Artificial Intelligence in Spanish”) (see Table 2).

2.3. Procedure

The study was conducted under strict ethical principles, having received approval from the Ethics Committee of the Peruvian university, under the reference code 2023-CEUPeU-033. Permission was requested and obtained from the administration of the involved hospitals, thus ensuring institutional collaboration and adequate access to participants in the hospital setting. The privacy and confidentiality of the participants’ information were guaranteed, ensuring the acquisition of their informed consent before proceeding with the survey application. The questionnaire administration took place in person at two major Peruvian universities, emphasizing the voluntary and anonymous nature of participation.

2.4. Data Analysis

A preliminary descriptive analysis of the AIAS-S items was conducted, including the evaluation of the mean, standard deviation, skewness, and kurtosis, along with inter-item and corrected item-total correlation analyses. Skewness (g1) and kurtosis (g2) values within the range of ±1.5 were considered acceptable [55]. Additionally, the corrected item-total correlation analysis was used to discard any item with an r(i−tc) ≤ 0.20 or in cases of multicollinearity [56].
We proceeded with a Confirmatory Factor Analysis (CFA) for the scale, using the Maximum Likelihood Robust (MLR) estimator, which is suitable for data presenting deviations from normality [57]. The indices used to evaluate the model fit included chi-square (χ2), Comparative Fit Index (CFI) and Tucker–Lewis Index (TLI) (≥0.95), as well as Root Mean Square Error of Approximation (RMSEA) and Standardized Root Mean Square Residual (SRMR) (≤0.08) [56,58]. Internal consistency was verified using Cronbach’s alpha and McDonald’s omega, expecting values above 0.70 to consider it adequate [59].
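As a small illustration of how these cutoffs operate together, a fit report can be checked programmatically. This helper is our own illustrative sketch, not part of the authors' R pipeline; the index names and cutoffs follow the criteria stated above.

```python
def check_fit(fit):
    """Return the fit indices that miss the cutoffs (empty list = acceptable fit).

    Cutoffs as stated in the text: CFI and TLI >= 0.95; RMSEA and SRMR <= 0.08.
    """
    cutoffs = {"cfi": (0.95, "min"), "tli": (0.95, "min"),
               "rmsea": (0.08, "max"), "srmr": (0.08, "max")}
    failed = []
    for index, (cutoff, kind) in cutoffs.items():
        ok = fit[index] >= cutoff if kind == "min" else fit[index] <= cutoff
        if not ok:
            failed.append(index)
    return failed
```

A model with CFI = TLI = 1.00 and RMSEA = SRMR = 0.00, as reported later in this study, passes all four checks.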
A hierarchical sequence of measurement invariance models was implemented. Initially, configural invariance was analyzed, serving as the reference model. This was followed by the assessment of metric invariance, which constrains factor loadings to equality across groups; scalar invariance, which additionally equalizes item intercepts; and strict invariance, which further equalizes residual variances. To verify invariance across these models, differences in the Comparative Fit Index (ΔCFI) were examined. According to Chen [60], ΔCFI differences less than 0.010 indicate that invariance is maintained between groups, thus validating the consistency of the scale across gender groups.
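Chen's ΔCFI criterion can be expressed compactly. The sketch below is our own illustrative Python, not the authors' semTools workflow: it walks an ordered configural→metric→scalar→strict sequence and declares each step invariant when the CFI drop from the previous model stays below 0.010.

```python
def invariance_decisions(cfis, threshold=0.010):
    """Apply Chen's (2007) ΔCFI rule to an ordered sequence of nested models.

    `cfis` maps model names, in testing order, to their CFI values; a step
    is held invariant when the CFI drop relative to the previous model is
    below `threshold`. A negative drop means the fit actually improved.
    """
    names = list(cfis)
    decisions = {}
    for prev, curr in zip(names, names[1:]):
        drop = round(cfis[prev] - cfis[curr], 3)
        decisions[curr] = {"cfi_drop": drop, "invariant": drop < threshold}
    return decisions
```

With the CFI values reported later in this study (0.999, 1.000, 0.995, 0.998), every step passes the criterion, matching the conclusion that invariance holds across gender.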
Statistical analyses were performed using RStudio [61] with R version 4.1.1 (R Foundation for Statistical Computing, Vienna, Austria). For the confirmatory factor analysis and structural equation modeling, the “lavaan” package was used [62], and to facilitate the analysis of measurement invariance, the “semTools” package was employed [63].

3. Results

3.1. Descriptive Statistics of Items

The results of the Attitude Scale toward Artificial Intelligence (AIAS) show variability in perceptions about AI, with average scores ranging from 5.84 to 6.73 on a 10-point scale. Item 3 displays the highest average (M = 6.73, SD = 2.65), indicating a positive expectation toward future use of AI. Conversely, Item 1 has the lowest average (M = 5.84, SD = 2.62), reflecting a potentially more cautious view on the immediate personal benefits of AI. All items show skewness (g1) and kurtosis (g2) values within acceptable limits, indicating only slight deviations from normality. The corrected item-total correlations (r.cor) are high for every item, all around 0.85 or more, demonstrating a strong relationship between each item and the total score of the scale. This implies that each item contributes substantially to the overall measurement of attitudes toward AI, and therefore none is recommended for elimination. Moreover, the correlation matrix between items reveals high coefficients, ranging from 0.75 to 0.86, confirming that perceptions of AI are consistently assessed across the different statements of the scale (Table 2).

3.2. Confirmatory Factor Analysis

A Confirmatory Factor Analysis (CFA) was conducted following the guidelines established by Grassini [49]. The evaluation of the model indicated excellent fit indices: χ2 = 0.410, df = 1, p = 0.522, CFI = 1.00, TLI = 1.00, RMSEA = 0.00 (90% CI 0.00–0.13), SRMR = 0.00. The factor loadings were of adequate magnitude, all surpassing the threshold of 0.50, reinforcing the validity of the construct measured by the scale (Figure 1). These results corroborate the unidimensional factor structure of the scale and its appropriateness for assessing the general attitude toward Artificial Intelligence.

3.3. Reliability

In the analysis of internal consistency of our scale, the results were highly satisfactory. The reliability coefficient for Cronbach’s alpha (α) was 0.94 and for McDonald’s omega (ω) was 0.91.
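For reference, Cronbach's alpha follows directly from the item and total-score variances. The reported coefficients were computed in R; this Python sketch is illustrative only, and McDonald's omega is not shown because it additionally requires the factor loadings from the CFA.

```python
import statistics


def cronbach_alpha(items):
    """Cronbach's alpha from a respondents-by-items score matrix.

    alpha = k / (k - 1) * (1 - sum of item variances / variance of total scores)
    """
    k = len(items[0])
    item_vars = [statistics.variance([row[j] for row in items]) for j in range(k)]
    total_var = statistics.variance([sum(row) for row in items])
    return (k / (k - 1)) * (1 - sum(item_vars) / total_var)
```

For perfectly parallel items alpha equals 1; values above 0.70, as stated in the Methods, were treated as adequate.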

3.4. Measurement Invariance

Starting with configural invariance, which establishes a baseline model without constraints, a high CFI of 0.999 was observed, indicating excellent fit of the initial model. Upon introducing metric invariance, which constrains factor loadings to equality across groups, the CFI remained at 1.000, with a ΔCFI of −0.001, demonstrating that the scale measures the construct equivalently across genders. Progressing to scalar invariance, where both factor loadings and intercepts are equalized, the CFI decreased slightly to 0.995 with a ΔCFI of 0.005. Although this change is larger than in the previous step, it remains below the 0.010 threshold, indicating that measurement equivalence between groups still holds. Finally, strict invariance, which adds equality of error variances to the previous constraints, resulted in a CFI of 0.998 with a ΔCFI of −0.003, showing improvement over the scalar model (Table 3).

4. Discussion

Artificial Intelligence (AI) has revolutionized various sectors, including healthcare and education, by simulating human cognitive processes in machines and software. This integration has led to improvements in quality of life but also presents significant challenges such as job displacement and ethical dilemmas. As AI becomes more prevalent in our daily lives, human oversight becomes crucial to mitigate errors and biases. Additionally, its rapid adoption has been driven by circumstances such as the COVID-19 pandemic, increasing our emotional and psychological dependence on these technologies. In the healthcare context, particularly in nursing, although there is generally a positive perception of AI, a notable gap in knowledge and practical experience is observed. This underscores the importance of training in the use of AI, critically evaluating its applications in health. Despite advances in developing instruments to assess attitudes toward AI, there are limitations that justify further research, especially in specific contexts like Peru, to develop culturally relevant evaluation tools. This study aims to evaluate the psychometric properties of an attitude scale toward AI, specifically adapted for Peruvian nurses, addressing the need for precise and reliable evaluation tools in this rapidly evolving field.
The Confirmatory Factor Analysis (CFA) for the AIAS-S was conducted following the guidelines of Grassini [49]. Comparing our findings with the existing literature, we observed both similarities and significant variations. For instance, Grassini [49] reports a good model fit for the AIAS-4, similar to our study, which supports the proposal of a unifactorial structure for measuring attitudes toward AI in various contexts. However, compared to other studies, such as those by Schepman and Rodway [42] and Kieslich et al. [48], who explored more complex and varied factorial structures, significant differences are observed. Schepman and Rodway [42] validated a two-factor model for the GAAIS, while Kieslich et al. [48] reported a significant model with good fit indices for the TAI. These discrepancies highlight the diversity in perceptions and attitudes toward AI and underscore the need to adapt and assess measurement tools specific to each cultural and linguistic context. Although our study identified a unifactorial structure, we advise caution in generalizing this homogeneity to other contexts, given that the Spanish-speaking environment may be shaped by factors such as exposure to technology, the level of understanding of AI, and specific sociocultural variables that modulate the perception of its benefits and risks. This observation invites further reflection on how cultural and contextual differences can affect factorial structures in assessing attitudes toward artificial intelligence.
Similarly, the AIAS-S follows a line similar to the research conducted by Grassini [49], Schepman and Rodway [42], and Kieslich et al. [48], each of which addressed different dimensions and contexts of attitude toward AI. Comparing the reliability findings of our study with those mentioned above, there is a general consistency in the good internal reliability of the scales used, with Cronbach’s alpha coefficients generally indicating acceptable to good internal consistency. In this regard, Grassini’s [49] research established good internal consistency for the AIAS-4, a finding that aligns with our study and supports the reliability of the instrument for measuring attitudes toward AI. Similarly, Schepman and Rodway [42] and Kieslich et al. [48] reported Cronbach’s alpha values reflecting satisfactory reliability in their respective scales, highlighting the robustness of the instruments to capture variations in attitudes and perceptions toward AI. These findings are crucial as they ensure that measurements of attitudes toward AI are stable and comparable across different studies and populations.
Measurement invariance in the AIAS-4 was analyzed across four levels (configural, metric, scalar, and strict) with results indicating exceptional fit in all models. Comparative Fit Index (CFI) values ranged from 0.995 to 1.000, with minimal variations in ΔCFI, suggesting that the AIAS scale consistently measures the same construct among both men and women, which is essential for making valid comparisons between these groups. Finally, the consistency in reliability observed across different studies suggests that attitudes toward AI can be effectively conceptualized and measured using well-designed scales. This indicates a coherent underlying structure in how people evaluate AI, enabling meaningful comparisons of attitudes across various cultural and demographic contexts.

4.1. Implications

The integration of Artificial Intelligence (AI) into professional practice and healthcare systems represents a promising technological advancement with the potential to revolutionize how healthcare is delivered and managed. Assessing the psychometric properties of the AIAS-S among Peruvian nurses provides an empirical foundation for understanding healthcare professionals’ receptiveness and perceptions toward this emerging technology. The findings suggest significant implications for professional practice, health policy, and theoretical development in the field of health AI.
From a practical perspective, the generally positive attitude toward AI identified among Peruvian nurses indicates an openness to incorporating these technologies into their daily practice. This suggests the need to develop training and professional development programs that focus not only on the technical skills required to operate AI technologies but also on understanding their ethical, regulatory, and practical applications in the healthcare setting.
Regarding health policies, the results underline the importance of formulating regulatory frameworks that support the safe and ethical implementation of AI in healthcare. Policies should focus on ensuring patient data privacy and security while promoting equity in access to advanced healthcare technologies. Moreover, it is crucial that these policies encourage interdisciplinary collaboration among engineers, AI designers, healthcare professionals, and patients to ensure that AI solutions are relevant, clinically sensitive, and culturally appropriate.
From a theoretical perspective, this study contributes to the existing body of knowledge by providing empirical evidence about nurses’ attitudes toward AI in a specific context. This suggests the need to continue exploring how cultural, contextual, and educational factors influence healthcare professionals’ perceptions of AI. Future research could aim to examine attitudes toward AI across different medical specialties and geographical contexts to better understand the variables that facilitate or inhibit the adoption of AI in healthcare practice.

4.2. Limitations

One of the main limitations of our study is the use of a cross-sectional design. The cross-sectional nature of the study prevents the assessment of how attitudes toward AI may change over time, especially in a field as dynamic and rapidly evolving as AI technology. Another limitation is the potential for social desirability bias in the participants’ responses. Given the context of the study, the nurses might have responded in a way that reflects more positive perceptions of AI, influenced by the perception of AI as a valuable technological innovation in the healthcare field. Although measures were taken to ensure confidentiality and the importance of honest responses was emphasized, the self-assessment inherent in surveys may still be subject to this type of bias. Additionally, the absence of studies on convergent and discriminant validity is identified as a limitation. Convergent validity studies with other relevant scales for assessing attitudes toward AI, such as the General Attitudes towards Artificial Intelligence Scale (GAAIS) and the Threats of Artificial Intelligence Scale (TAI), would have strengthened the interpretation of the AIAS-4 results. On the other hand, discriminant validity studies with scales measuring conceptually different variables, such as the Technology Anxiety Scale, could have helped establish the specificity of the AIAS-4 in measuring attitudes toward AI, contrasting with other psychological constructs. It is advisable that future research include these types of validation to confirm the robustness of the AIAS-4 and its ability to accurately measure attitudes toward AI without being influenced by related but distinct constructs.

5. Conclusions

The present study marks a significant advancement in understanding attitudes toward Artificial Intelligence (AI) within the specific context of Peruvian nurses, valuably contributing to the body of knowledge at the intersection of technology and health. By evaluating the psychometric properties of the scores obtained with the AIAS-S, which was culturally and linguistically adapted for the Peruvian context, this study demonstrated that the scores are valid and reliable. The findings, revealing a coherent factorial structure and robust internal reliability in the AIAS-S scores, align with previous research in other contexts, highlighting the importance of cultural considerations in psychometric assessment.

Author Contributions

W.C.M.-G. and L.Z.S.-S. participated in the conceptualization of the idea; S.B.M.-G., M.M.-G. and W.C.M.-G. were in charge of the methodology and software; L.Z.S.-S., S.B.M.-G. and W.C.M.-G. carried out the validation, formal analysis, and investigation; M.M.-G., W.C.M.-G. and L.Z.S.-S. were responsible for data curation and resources; and W.C.M.-G., L.Z.S.-S. and M.M.-G. carried out the writing of the first draft, review and editing, visualization, and supervision. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

The study was carried out in accordance with the Declaration of Helsinki and was approved by the ethics committee of the Universidad Peruana Unión (protocol code CEUPeU-033 and approval date 6 October 2023).

Informed Consent Statement

Informed consent was obtained from all subjects involved in the study. Written informed consent has been obtained from the patient(s) to publish this paper.

Data Availability Statement

Data can be provided at the request of the corresponding author.

Conflicts of Interest

The authors declare no conflicts of interest.

Figure 1. Path Diagram of the Confirmatory Factor Analysis Model for the AIAS-4 Scale.
Table 1. Descriptive Statistics.
Characteristic        Category              n      %
Gender                Female                153    63.0
                      Male                  90     37.0
Marital Status        Married               80     32.9
                      Cohabiting            20     8.2
                      Living together       13     5.3
                      Divorced              130    53.5
                      Widowed               54     22.2
Level of Education    Specialty             129    53.1
                      Bachelor's Degree     60     24.7
                      Postgraduate          65     26.7
Employment Status     Contract (CAS)        36     14.8
                      Permanent Contract    68     28.0
                      Tenured               16     6.6
                      Substitute            58     23.9
                      Third-party           153    63.0
Table 2. Descriptive Statistics and the polychoric correlation matrix.
Item (English version / Spanish version)                                                                       M      SD     g1      g2      r.cor   1         2         3         4
1. I believe that AI will improve my life / Creo que la IA mejorará mi vida                                    5.84   2.62   −0.07   −0.85   0.86    —
2. I believe that AI will improve my work / Creo que la IA mejorará mi trabajo                                 5.87   2.77   −0.07   −1.02   0.86    0.86 **   —
3. I think I will use AI technology in the future / Pienso que usaré tecnología de IA en el futuro             6.73   2.65   −0.41   −0.82   0.85    0.78 **   0.78 **   —
4. I think AI technology is positive for humanity / Pienso que la tecnología IA es positiva para la humanidad  6.16   2.61   −0.10   −0.87   0.83    0.75 **   0.76 **   0.82 **   —
** p < 0.01. M = mean; SD = standard deviation; g1 = skewness; g2 = kurtosis; r.cor = corrected item-total correlation.
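As an informal cross-check of the reported internal consistency (α = 0.94), the standardized Cronbach's alpha can be approximated from the six pairwise inter-item correlations in Table 2. This is only an illustrative sketch based on the published matrix, not the authors' actual computation, which used the raw item scores:

```python
# Approximate standardized Cronbach's alpha from the inter-item
# correlations reported in Table 2 (illustrative cross-check only).
correlations = [0.86, 0.78, 0.75, 0.78, 0.76, 0.82]  # six unique item pairs
k = 4  # number of AIAS-4 items

mean_r = sum(correlations) / len(correlations)
# Standardized alpha: k * mean_r / (1 + (k - 1) * mean_r)
alpha = k * mean_r / (1 + (k - 1) * mean_r)
print(round(alpha, 2))  # ≈ 0.94, consistent with the reported coefficient
```

The agreement with the reported α = 0.94 suggests the correlation matrix and the reliability estimate are mutually consistent.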
Table 3. Invariance according to sex.
Invariance    χ2       df   p       TLI     RMSEA   SRMR    CFI     ΔCFI
Configural    2.228    2    0.328   0.996   0.031   0.004   0.999   —
Metric        2.869    5    0.720   1.013   0.000   0.013   1.000   −0.001
Scalar        9.862    8    0.275   0.993   0.044   0.023   0.995   0.005
Strict        12.764   12   0.386   0.998   0.023   0.028   0.998   −0.003
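Invariance at each step is commonly judged by the change in CFI between nested models, with |ΔCFI| ≤ 0.01 taken as support for the more constrained model (Chen's criterion). A minimal sketch of that decision rule, using the CFI values from Table 3 and assuming, as the table's ΔCFI column implies, that ΔCFI is computed as the previous model's CFI minus the current model's CFI:

```python
# Apply Chen's |dCFI| <= 0.01 criterion across the nested invariance
# models reported in Table 3 (configural -> metric -> scalar -> strict).
cfi = {"configural": 0.999, "metric": 1.000, "scalar": 0.995, "strict": 0.998}
steps = ["configural", "metric", "scalar", "strict"]

for prev, curr in zip(steps, steps[1:]):
    delta = cfi[prev] - cfi[curr]  # sign convention matching Table 3
    verdict = "invariance supported" if abs(delta) <= 0.01 else "noninvariance"
    print(f"{curr}: dCFI = {delta:+.3f} ({verdict})")
```

Every step stays within the 0.01 threshold, which is consistent with the conclusion that the scale is invariant from the configural through the strict model.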
