Article

Prediction of the Use of Generative Artificial Intelligence Through ChatGPT Among Costa Rican University Students: A PLS Model Based on UTAUT2

by Julio Cabero-Almenara 1, Antonio Palacios-Rodríguez 1,*, Hazel de los Ángeles Rojas Guzmán 2 and Victoria Fernández-Scagliusi 1

1 Department of Didactics and Educational Organization, University of Seville, 41013 Seville, Spain
2 Department of Social Sciences, University of Costa Rica, San José 11501, Costa Rica
* Author to whom correspondence should be addressed.
Appl. Sci. 2025, 15(6), 3363; https://doi.org/10.3390/app15063363
Submission received: 17 February 2025 / Revised: 8 March 2025 / Accepted: 13 March 2025 / Published: 19 March 2025
(This article belongs to the Special Issue Advanced Technologies Applied in Digital Media Era)

Abstract
The rise in generative artificial intelligence (GenAI) is transforming education, with tools like ChatGPT enhancing learning, content creation, and academic support. This study analyzes ChatGPT’s acceptance among Costa Rican university students using the UTAUT2 model and partial least squares structural equation modeling (PLS-SEM). The research examines key predictors of AI adoption, including performance expectancy, effort expectancy, social influence, facilitating conditions, behavioral intention, and actual usage. The findings from 194 students indicate that performance expectancy (β = 0.596, p < 0.001) is the strongest predictor of behavioral intention, followed by effort expectancy (β = 0.241, p = 0.005), while social influence (β = 0.381, p < 0.001) and facilitating conditions (β = 0.217, p = 0.008) play a smaller role. Behavioral intention significantly influences actual usage (β = 0.643, p < 0.001). Gender and age differences emerge, with male students and those aged 21–30 years showing higher acceptance levels. Despite positive attitudes toward ChatGPT, the students report insufficient training for effective use, underscoring the need for AI literacy programs and structured pedagogical strategies. This study calls for further research on AI training programs and their long-term impact on academic performance to foster responsible GenAI adoption in higher education.

1. Introduction

The emergence of generative artificial intelligence (GenAI) in the educational field is significantly transforming teaching practices and students’ learning processes. This type of artificial intelligence is characterized by its ability to process natural human language and generate content in various formats, such as text, images, and audio. As Franganillo [1] (p. 2) points out, “generative models process a large corpus of complex and unstructured data, such as texts, audio, or images, to then generate new content in the same style as the original data”. Similarly, Abella [2] (p. 7) states that “it refers to a distinct class of artificial intelligence that uses deep learning models to generate human-like content, such as images or words, in response to complex and varied instructions”. This content can be produced in a wide variety of formats (video, text, audio, etc.) and serve different functions (assigning labels to images, clustering data—such as identifying customer segments with similar purchasing behaviors—or selecting actions) [3].
The use of GenAI in educational contexts has been facilitated by the accessibility of tools based on advanced language models, whose intuitive interface allows for interactions through natural language. However, this ease of use does not imply that its educational application is automatically effective. As UNESCO [4] warns, the effective use of these tools requires specific training in both query formulation and the interpretation and refinement of generated responses. In this regard, a transition in the application of technology in teaching is observed: from a model focused on learning “from” technology, where it is perceived merely as a channel for transmitting information and assessment, to a paradigm of learning “with” technology, where its use is enhanced as a cognitive tool for knowledge construction [5].
Despite its growing incorporation into education, the implementation of GenAI presents various challenges and limitations. Among these, the accuracy and validity of generated information, ethical responsibility in its use, the need for curricular adaptation, the redefinition of teaching and student roles, and the presence of inherent biases in AI models stand out. Additionally, aspects such as randomness in responses, dependence on these tools, the need for specific training to critically evaluate their results, and implications for data privacy and security must be considered [6,7,8]. In this last aspect, there are two major models: in one, control is held by companies, as seen in the United States, while in the other, control is exercised by the state, as in China.
Among the most widely used GenAI tools in the educational environment are OpenAI’s ChatGPT, Microsoft’s Copilot, Deepseek by Hangzhou DeepSeek Artificial Intelligence Basic Technology Research, and Google’s Gemini. These platforms offer a wide range of functionalities, including text generation, image creation, document translation, summarization and evaluation rubrics, key idea identification, case correction, assistance in computer programming, and information synthesis, among others. This versatility has made GenAI an attractive resource for integration into teaching, transforming traditional teaching and learning methodologies [9,10].
The rise in the use of GenAI in education has led to an increase in research on its pedagogical applications and has driven meta-analyses on its impact in various educational contexts [11,12,13,14,15]. These systematic reviews have identified key trends, such as the predominance of studies focused on higher education, the concentration of research in developed countries, teachers’ concerns about ethical aspects, the interest in analyzing students’ practices with these tools, the recognition of their potential for adaptive learning, and the identification of a widespread need for training in their use. Additionally, both teachers and students express a positive attitude toward the incorporation of GenAI in teaching, provided that its use is carried out critically and reflectively [16,17].
Despite the increasing global focus on generative AI in education, there is a lack of studies examining its adoption in Latin America, particularly in Costa Rica. The implementation of AI in Costa Rican universities faces unique sociocultural and technological challenges that differ from those in developed countries, where most prior research has been conducted.
One of the main barriers to AI adoption in Costa Rica is the digital divide, which disproportionately affects students based on geographical location and socioeconomic background. Soto Kiewit et al. [49] highlight that residence location significantly influences access to the internet and digital tools, with rural areas experiencing greater limitations. This exacerbates inequalities in technology adoption and creates disparities not only in access but also in the ability to effectively use digital resources. The lack of reliable internet infrastructure in certain regions prevents students from fully benefiting from AI-driven educational tools, widening the existing knowledge gap.
Additionally, Latin America’s structural socioeconomic inequalities present further challenges to AI integration in education. Vergès [18] notes that socioeconomic indicators in the region reflect deep disparities, which worsened due to the COVID-19 pandemic. These inequalities act as a barrier to the adoption of new technologies, as students from disadvantaged backgrounds may lack the necessary digital literacy skills or access to quality education. Furthermore, Rodríguez-Pedró [19] emphasizes that the digital divide in Latin America is multidimensional, encompassing not only access to technology but also skills, resources, and gender-based inequalities, all of which need to be addressed to ensure inclusive AI adoption in higher education.
Beyond infrastructure and socioeconomic barriers, Costa Rica lacks standardized policies regarding AI integration in education, which generates uncertainty among educators and students. Many universities have yet to develop clear guidelines on the ethical use of ChatGPT and similar AI tools in academic settings. This regulatory gap contrasts with other regions where AI implementation has been more systematically structured, such as the United States or Europe. Additionally, Latin American pedagogical models often emphasize teacher-centered instruction, which can hinder the transition to AI-assisted learning methodologies. The lack of formal AI training for both students and faculty further complicates adoption, making it essential to implement AI literacy programs and professional development initiatives.
Understanding how Costa Rican university students perceive and adopt ChatGPT in their academic activities will help bridge existing technological and policy gaps, ensuring that AI is integrated into education in a way that enhances learning outcomes while addressing ethical and practical concerns.
To conclude this section, it is worth highlighting Robert and Muscanell [20] in their Horizon Report for Educause: “… generative artificial intelligence emerged as the fastest-adopted technology in history. All members of the higher education community, from students to administrators, are trying to determine what impact generative AI tools can, will, and should have on life, learning, and work.” [20] (p. 3).

2. Students and ChatGPT

In this regard, artificial intelligence (AI) represents a field of particular interest, given its exponential growth and increasing integration into teaching and learning processes. Exploring students’ attitudes and level of acceptance toward this technology in general, and ChatGPT in particular, not only helps to understand their disposition toward these technologies but is also essential for identifying necessary changes in AI education, the measures that should be adopted to optimize their training in this field, and the possibilities that, according to students, these technologies have for their incorporation into their educational activities.
The incorporation of AI into education has been carried out under three paradigms: AI-directed, where the student is a recipient; AI-supported, where the student is a collaborator; and AI-empowered, where the student is a leader. In these three paradigms, AI techniques are used to address educational and learning problems in different ways. In paradigm one, AI is used to represent knowledge models and guide cognitive learning, while students act as recipients of AI services; in paradigm two, AI is used to support learning, while students collaborate with AI; in paradigm three, AI is used to empower learning, while students take the initiative to learn [21].
Numerous studies have shown that students exhibit positive attitudes and strong interest in the possibilities AI offers for their education as well as an increasing willingness to integrate these tools into their learning [16,22,23,24,25].
However, research results have sometimes been contradictory. Some studies indicate that AI use decreases students’ academic performance [26], although this may be due to poor pedagogical planning by teachers or the unrestricted use of these tools by students without proper guidance. Nevertheless, studies showing positive performance outcomes outnumber those indicating negative results [27,28].
Furthermore, research based on the application of the UTAUT model has highlighted that factors such as hedonic motivation, performance expectancy, effort expectancy, and social influence are key predictors of students’ intention to use ChatGPT. In this regard, Arthur et al. [29] found that behavioral intention and facilitating conditions significantly influence the actual use of this tool. Similar findings were reported by Strzelecki [30], who observed that usage behavior has the most significant impact on behavioral intention, followed by performance expectancy and hedonic motivation. Additionally, behavioral intention significantly influences actual usage, being modulated by habit and facilitating conditions.
Beyond individual psychological factors, broader contextual elements also shape students’ adoption of ChatGPT. One significant determinant is the technological infrastructure available at different universities. Institutions equipped with high-speed internet, well-integrated AI platforms, and access to digital resources provide a more favorable environment for students to engage with AI tools. Conversely, limited digital infrastructure can hinder effective use, restricting students’ exposure and potential benefits from ChatGPT. Network quality, accessibility, and system responsiveness significantly contribute to overall satisfaction and adoption rates, as highlighted by Jo and Bang [41]. Moreover, accessibility across multiple devices—such as mobile phones, laptops, and tablets—further facilitates adoption, as noted by Menon and Shilpa [42].
Cultural and social influences also play an important role in shaping students’ willingness to engage with ChatGPT. Supianto et al. [39] found that social influence from peers and academic circles is a significant predictor of behavioral intention, particularly among female students, who are more likely to use ChatGPT when encouraged by their social environment, as reported by Elshaer et al. [43]. Additionally, universities that foster a culture of technological advancement and digital integration create a more supportive environment for AI adoption, reinforcing students’ confidence in ChatGPT’s educational value.
From an educational policy perspective, adapting curricula to incorporate AI literacy and ethical guidelines is becoming essential. Strzelecki argues that higher education should evolve to develop students’ critical thinking, creativity, and ethical decision-making skills within AI-integrated learning environments. Furthermore, Shaengchart et al. emphasize the need for education authorities to implement clear regulations addressing privacy, security, and responsible AI usage. By establishing structured policies and promoting a positive attitude toward AI adoption, universities can provide a framework that supports effective and responsible use of ChatGPT.
Finally, although students exhibit a high acceptance of AI in general and ChatGPT in particular, they also acknowledge insufficient training for its effective use [22,31]. In this regard, various studies have pointed out that students demand greater training in this field, emphasizing the need for structured and specific education on the use of AI in educational contexts [23]. These demands highlight the importance of designing educational strategies that enable the effective pedagogical integration of AI, ensuring that students acquire the necessary competencies for its critical and reflective use in academic settings.

3. Methods

3.1. Objectives

The main objective of this study is to analyze the factors influencing the acceptance and use of artificial intelligence by Latin American university students, specifically Costa Rican students, based on the UTAUT2 model and using a partial least squares structural equation modeling (PLS-SEM) approach. Alternative models, such as the technology acceptance model (TAM), primarily focus on perceived usefulness and ease of use, which may not fully capture the social and facilitating factors that influence AI adoption in an academic context. Similarly, innovation diffusion theory (IDT) provides a macro-level perspective on how technologies spread but lacks a structured framework to assess behavioral intention and actual usage at the individual level. Given the relevance of performance expectancy, effort expectancy, social influence, and facilitating conditions in predicting students’ acceptance of ChatGPT, UTAUT2 was deemed the most appropriate theoretical framework for this study.
To achieve this purpose, the following specific objectives are established:
  • Evaluate “Performance Expectancy” (PE) in the intention to use artificial intelligence in the university context, considering students’ perceptions of the usefulness of tools like ChatGPT in their learning process.
  • Analyze “Effort Expectancy” (EE) and its impact on AI acceptance, examining whether the simplicity of interaction with these technologies facilitates their adoption.
  • Determine “Social Influence” (SI) in the intention to use AI, exploring the role that academic and social opinions and recommendations play in students’ willingness to use artificial intelligence.
  • Examine the effect of “Facilitating Conditions” (FC) on AI adoption, identifying the availability of resources, technological knowledge, and institutional support as key factors in the implementation of these tools.
  • Measure “Behavioral Intention” (BI) and its relationship with “Usage Behavior” (UB) in artificial intelligence, analyzing whether students’ willingness to use these tools translates into effective use in the educational field.
  • Validate the structure of the UTAUT2 model in the context of higher education in Latin America, assessing the robustness of the model through the analysis of factor loadings, discriminant validity, and global fit using SRMR.

3.2. Research Sample

The study sample consists of a total of 194 Costa Rican university students, distributed according to various sociodemographic and academic characteristics. Regarding gender, the majority of participants are women (n = 124, 63.9%), while men account for 36.1% (n = 70).
In terms of age, the largest group corresponds to students aged 21 to 30 years (n = 99, 51.0%), followed by those aged 18 to 20 years (n = 57, 29.4%). Participants aged 31 to 40 years constitute 10.8% (n = 21), while those aged 41 to 50 years and over 50 years represent 4.1% (n = 8) and 4.6% (n = 9), respectively.
Regarding the field of study, there is a higher representation of students from Health Sciences (n = 73, 37.6%), followed by those from Social Sciences (n = 39, 20.1%), Engineering (n = 32, 16.5%), and Economic Sciences (n = 22, 11.3%). Students from Arts and Humanities (n = 15, 7.7%), Basic Sciences (n = 11, 5.7%), and Agri-Food Sciences (n = 2, 1.0%) have a smaller representation in the sample.
Finally, regarding the type of university, 54.1% (n = 105) of the participants are from public universities, while 45.9% (n = 89) are enrolled in private institutions.
The research was conducted between November 2024 and January 2025.

3.3. Data Collection Instrument

The data collection instrument used in this study was designed following the UTAUT2 model, adapted to assess the degree of acceptance of artificial intelligence in the university setting, specifically regarding the use of ChatGPT. The questionnaire was developed based on the constructs of the model, ensuring the theoretical validity of the measurement and its applicability in the educational context. The questionnaire was administered via an online platform (Google Forms), facilitating access to a larger number of participants and ensuring efficient data collection. Before participating, the students provided informed consent, ensuring compliance with the ethical principles of research.
The questionnaire consists of 19 items distributed across different dimensions of the UTAUT2 model.
  • The first dimension, “Academic Performance Expectancy” (PE), evaluates students’ perceptions of ChatGPT’s usefulness in their academic performance. It includes four items exploring the tool’s impact on productivity, speed in completing tasks, and its contribution to achieving academic goals.
  • The second dimension, “Effort Expectancy” (EE), measures the perceived ease of use of ChatGPT, considering the clarity of interaction and the ease of learning the tool, with a total of four items.
  • The third dimension, “Social Influence” (SI), consists of three items that investigate the extent to which the opinions and recommendations of close individuals influence students’ adoption of ChatGPT.
  • The fourth dimension, “Facilitating Conditions” (FC), gathers information on the availability of resources and necessary knowledge for using artificial intelligence as well as the compatibility of the tool with other technologies used by students. This dimension also includes one item regarding the possibility of receiving external support in case of difficulties with the tool.
Finally, the “Behavioral Intention” (BI) and “Usage Behavior” (UB) dimensions assess students’ future willingness to continue using ChatGPT in their studies. The first consists of three items measuring the intention to keep using the tool over time, while the second collects information on the actual frequency of ChatGPT use in the academic context.
Table 1 presents the different items that make up the instrument.
Regarding the mean scores and standard deviations obtained in each of the dimensions, Table 2 presents a summary of the descriptive statistics for the UTAUT2 model dimensions.

3.4. Data Analysis Procedure

For the data analysis, structural equation modeling (SEM) was employed using partial least squares (PLS-SEM) with SmartPLS v.4 software. This approach was selected due to its ability to model complex relationships between latent and observed variables, allowing for the simultaneous assessment of the validity of the UTAUT2-based theoretical model and the influence of its constructs on university students’ acceptance of artificial intelligence.
First, the validity and reliability of the measurement model were assessed through factor loadings analysis, considering a minimum threshold of 0.70 for item acceptance. Additionally, the internal consistency of each construct was verified using Cronbach’s alpha and composite reliability (CR), with an acceptance criterion set above 0.70. Convergent validity was confirmed using average variance extracted (AVE), ensuring values above 0.50.
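As an illustration only (the study itself relied on SmartPLS v.4), the following sketch shows how these reliability indices are conventionally defined. The data layout, function names, and the example loadings taken from Table 3 are assumptions, and the composite reliability formula shown is the classical ρc, which can differ slightly from the coefficient a given software package reports.

```python
import numpy as np
import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    """Cronbach's alpha: k/(k-1) * (1 - sum of item variances / variance of the total score)."""
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1).sum()
    total_variance = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_variances / total_variance)

def composite_reliability(loadings) -> float:
    """Classical rho_c: (sum of loadings)^2 / ((sum of loadings)^2 + sum of error variances)."""
    lam = np.asarray(loadings, dtype=float)
    error_var = 1.0 - lam**2          # error variance of each standardized indicator
    return float(lam.sum()**2 / (lam.sum()**2 + error_var.sum()))

def average_variance_extracted(loadings) -> float:
    """AVE: mean of the squared standardized loadings."""
    lam = np.asarray(loadings, dtype=float)
    return float(np.mean(lam**2))

# Illustrative check with the Behavioral Intention (BI) loadings reported in Table 3.
bi_loadings = [0.858, 0.939, 0.954]
print(average_variance_extracted(bi_loadings))  # ~0.843, matching the AVE for BI in Table 4
print(composite_reliability(bi_loadings))       # rho_c for BI; SmartPLS may report a different CR variant
```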
To ensure discriminant validity, the Fornell–Larcker criterion was applied, comparing the square root of AVE with the correlations between constructs. After validating the measurement model, the structural model analysis was conducted, evaluating the significance and magnitude of path coefficients using the bootstrapping procedure with 5000 resamples. Determination coefficients (R2) were calculated to estimate the explained variance of endogenous variables, and f2 values were used to assess the effect size of each predictor.
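Once the AVE values and the latent-variable correlation matrix are available (both can be exported from the PLS software), the Fornell–Larcker check reduces to a simple matrix comparison. The sketch below is a minimal illustration under that assumption; the AVE values are those reported in Table 4, and their square roots reproduce the diagonal of Table 5.

```python
import numpy as np
import pandas as pd

# Assumed inputs: AVE per construct (Table 4) and the latent-variable correlation matrix.
ave = pd.Series({"BI": 0.843, "EE": 0.738, "FC": 0.618, "PE": 0.730, "SI": 0.877})

def fornell_larcker_matrix(ave: pd.Series, corr: pd.DataFrame) -> pd.DataFrame:
    """Square root of AVE on the diagonal, inter-construct correlations elsewhere."""
    vals = corr.loc[ave.index, ave.index].to_numpy(dtype=float).copy()
    np.fill_diagonal(vals, np.sqrt(ave.to_numpy()))   # e.g., sqrt(0.843) ~ 0.918 for BI
    return pd.DataFrame(vals, index=ave.index, columns=ave.index)

def discriminant_validity_ok(matrix: pd.DataFrame) -> bool:
    """True when every diagonal entry exceeds all correlations in its row and column."""
    vals = matrix.to_numpy()
    diag = np.diag(vals)
    off = np.abs(vals - np.diag(diag))
    return all(diag[i] > max(off[i, :].max(), off[:, i].max()) for i in range(len(diag)))
```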
Additionally, the standardized root mean square residual (SRMR) was incorporated as a global model fit indicator. An SRMR value below 0.08 was considered a good fit criterion, allowing for an evaluation of the discrepancy between the observed and estimated covariance matrices.
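SmartPLS reports the SRMR directly; for reference, the index is simply the root mean square of the differences between the observed and model-implied correlations, as sketched below for two correlation matrices passed as NumPy arrays (an assumption about the input format, not the software’s internal routine).

```python
import numpy as np

def srmr(observed_corr: np.ndarray, implied_corr: np.ndarray) -> float:
    """Standardized root mean square residual over the non-redundant
    (lower-triangular, including the diagonal) elements of the residual matrix.
    Values below 0.08 are conventionally read as acceptable fit."""
    residuals = np.asarray(observed_corr, dtype=float) - np.asarray(implied_corr, dtype=float)
    rows, cols = np.tril_indices_from(residuals)
    return float(np.sqrt(np.mean(residuals[rows, cols] ** 2)))
```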

4. Results

To begin with, in order to validate the factorial structure of the model, a factor loadings analysis of the items was conducted, ensuring that all loadings were close to 0.7 [32], which guarantees adequate convergent validity (Table 3).
The results confirm adequate convergent validity of the measurement model, as most items exhibit factor loadings above 0.7, indicating that each item significantly contributes to its respective construct.
For behavioral intention (BI), the factor loadings range between 0.858 and 0.954, reflecting a strong correlation between the items and the construct. Perceived ease of use (EE) also shows high values, with loadings between 0.835 and 0.885.
Regarding facilitating conditions (FC), two of its items show adequate values (FC1 and FC2, both above 0.87), while FC3 (0.650) and FC4 (0.693) exhibit slightly lower loadings, which might suggest a weaker contribution of these items to the construct.
Performance expectancy (PE) presents factor loadings ranging from 0.825 to 0.895, while social influence (SI) displays particularly high values, with loadings above 0.90 for all its items.
Finally, actual use (UB) has a value of 1.000, indicating that it is measured with a single item.
After confirming that the factor loadings met the recommended threshold, we proceeded to evaluate the internal consistency of the constructs to ensure the reliability of the measurement model. Internal consistency was assessed using Cronbach’s alpha, composite reliability (CR), and average variance extracted (AVE). The results are presented in Table 4.
The results confirm the adequate internal consistency of the model’s constructs [33]. First, all Cronbach’s alpha values exceed the recommended threshold of 0.7, indicating high internal reliability of the items within each construct. In particular, the social influence (SI) construct shows the highest value (0.930), followed by behavioral intention (BI) with 0.906, reflecting strong coherence among its items.
The composite reliability (CR) also reaches satisfactory values in all cases, exceeding the 0.7 threshold and approaching or surpassing 0.9 in several constructs, reinforcing the robustness of the model.
Finally, the AVE (average variance extracted), which measures the amount of variance explained by the items relative to measurement error, presents values above the minimum criterion of 0.5 for all constructs; facilitating conditions (FC) shows the lowest value (0.618), slightly below the others but still within the acceptable range.
After establishing internal consistency, we assessed discriminant validity using the Fornell–Larcker criterion (Table 5). According to this criterion, a construct should share more variance with its own indicators than with other constructs in the model. This is confirmed when the square root of the AVE for each construct (values on the diagonal) is greater than the correlations between constructs (values outside the diagonal).
In this case, all diagonal values exceed the correlations with other constructs, indicating adequate discriminant validity. In particular, behavioral intention (BI) presents a diagonal value of 0.918, which is higher than its correlations with the other variables, with the highest correlation being with performance expectancy (PE) (0.757).
Similarly, effort expectancy (EE) has a square root of AVE of 0.859, with moderate correlations with the other variables, particularly its relationship with facilitating conditions (FC) (0.662). Likewise, the social influence (SI) construct exhibits the highest level of independence (0.937), with relatively low correlations with the other dimensions.
Once the measurement model was validated, we proceeded to analyze the structural model to test the hypothesized relationships between constructs using structural equation modeling (PLS-SEM) (Figure 1). This analysis examines the strength and significance of path coefficients, allowing us to evaluate the model’s predictive capability.
The most relevant result is the strong influence of performance expectancy (PE) on behavioral intention (BI), with a coefficient of 0.596, suggesting that students perceive the usefulness of AI as a key factor in its adoption. Similarly, behavioral intention (BI) has a significant impact on actual use (UB), with a coefficient of 0.530, confirming that intention is a direct predictor of usage behavior.
Effort expectancy (EE) also shows a positive relationship with behavioral intention (BI) (0.214), although to a lesser extent than performance expectancy. On the other hand, social influence (SI) and facilitating conditions (FC) have a much lower impact on behavioral intention (0.044 and 0.034, respectively), suggesting that these factors are not primary determinants in students’ intention to adopt AI. However, facilitating conditions (FC) exhibit a more relevant effect on actual use (UB) (0.161), indicating that access to resources and technical support may influence the effective adoption of AI in the academic environment.
Additionally, the model explains 64.4% of the variance in behavioral intention (BI) and 39.5% of the variance in actual use (UB), indicating a moderate-to-high explanatory power in predicting AI adoption in the university context.
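These R² values are also the basis of the f² effect sizes mentioned in Section 3.4, which, following the convention in the PLS-SEM literature, compare the explained variance of the model with and without a given predictor:

$$
f^2 = \frac{R^2_{\text{included}} - R^2_{\text{excluded}}}{1 - R^2_{\text{included}}}
$$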
To evaluate the statistical significance of the path coefficients, the bootstrapping method with 5000 resamples was applied (Table 6). Path coefficients (β) represent the strength and direction of the relationships between UTAUT2 model constructs, while the t-values and p-values indicate the statistical significance of these relationships. A higher β coefficient suggests a stronger influence of the predictor variable on the outcome variable.
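As an illustration of how the statistics in Table 6 are derived from the bootstrap, the sketch below resamples cases with replacement and divides the original estimate by the standard deviation of the resampled estimates. For simplicity, the path is approximated here by a standardized bivariate slope between two composite scores; the actual analysis re-estimates the full PLS model on each of the 5000 resamples in SmartPLS, so this is only a conceptual sketch.

```python
import numpy as np
from scipy import stats

def bootstrap_path(x: np.ndarray, y: np.ndarray, n_resamples: int = 5000, seed: int = 0) -> dict:
    """Illustrative bootstrap of a single standardized path coefficient."""
    rng = np.random.default_rng(seed)
    n = len(x)
    beta = np.corrcoef(x, y)[0, 1]               # original-sample estimate (column O in Table 6)
    boot = np.empty(n_resamples)
    for b in range(n_resamples):
        idx = rng.integers(0, n, n)              # resample cases with replacement
        boot[b] = np.corrcoef(x[idx], y[idx])[0, 1]
    se = boot.std(ddof=1)                        # STDEV column
    t_stat = abs(beta) / se                      # T statistic |O/STDEV|
    p_value = 2 * (1 - stats.t.cdf(t_stat, df=n - 1))   # two-tailed p-value
    return {"beta": beta, "sample_mean": boot.mean(), "se": se, "t": t_stat, "p": p_value}
```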
The results confirm that most path coefficients are statistically significant, as their p-values are below 0.05. In particular, the strong influence of performance expectancy (PE) on behavioral intention (BI) stands out, with a coefficient of 0.596 and a t-value of 5.999, indicating a robust and highly significant relationship (p = 0.000). Similarly, behavioral intention (BI) has a considerable impact on actual use (UB), with a coefficient of 0.530 and a t-value of 5.746, reaffirming its role as a key predictor of adoption behavior.
Effort expectancy (EE) also shows a significant relationship with behavioral intention (BI) (β = 0.214, t = 2.079, p = 0.001), although its impact is lower than that of performance expectancy. On the other hand, facilitating conditions (FC) have a significant effect on actual use (UB) (β = 0.161, t = 2.041, p = 0.031), suggesting that access to resources and support influences the effective use of AI. However, their impact on behavioral intention (BI) is very low and not significant (β = 0.034, t = 0.384, p = 0.001).
Social influence (SI), with a coefficient of 0.044 and a t-value of 0.469, shows a weak relationship with behavioral intention, although it remains statistically significant (p = 0.023). This suggests that peer perception has a minor impact on students’ adoption of AI.
Finally, to evaluate the overall model fit, the standardized root mean square residual (SRMR) was analyzed, yielding a value of 0.062. This result indicates a good model fit, as it falls below the recommended threshold of 0.08 in the PLS-SEM literature. A low SRMR value suggests minimal discrepancy between the observed and estimated covariance matrices, supporting the adequacy of the proposed model [33]. These findings reinforce the validity of the UTAUT2 model in predicting students’ acceptance of ChatGPT.
After conducting these analyses, we proceeded to examine the influence of other variables, such as gender, age, and field of study, on the degree of ChatGPT acceptance.
For all cases, the hypotheses were formulated as follows:
  • Null Hypothesis (H0): There are no significant differences based on the grouping variable (gender, age, or field of study), with an alpha risk of 0.05 for the different variables tested.
  • Alternative Hypothesis (H1): There are significant differences based on the grouping variable (gender, age, or field of study), with an alpha risk of 0.05 for the different variables tested.
Starting with the gender variable, and applying the Mann–Whitney U test, we obtained the results presented in Table 7.
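The gender comparison can be reproduced with standard non-parametric routines. The sketch below assumes a hypothetical data frame with one row per student, a "gender" column, and one column per UTAUT2 dimension score; the column names are illustrative.

```python
import pandas as pd
from scipy.stats import mannwhitneyu

def gender_differences(df: pd.DataFrame,
                       dimensions=("PE", "EE", "SI", "FC", "BI", "UB")) -> pd.DataFrame:
    """Two-sided Mann-Whitney U test of each UTAUT2 dimension score by gender."""
    male = df[df["gender"] == "Male"]
    female = df[df["gender"] == "Female"]
    rows = {}
    for dim in dimensions:
        u, p = mannwhitneyu(male[dim], female[dim], alternative="two-sided")
        rows[dim] = {"U": u, "p": p}   # H0 is rejected when p <= 0.05, as in Table 7
    return pd.DataFrame(rows).T
```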
The results obtained only allow for the rejection of H0 at a significance level of p ≤ 0.05 concerning “behavioral intention” and “usage behavior”.
To determine which gender these differences favored, a rank test was applied, yielding the values presented in Table 8 for the two accepted alternative hypotheses.
Regarding age, the Kruskal–Wallis test was applied, and the results obtained are presented in Table 9.
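Under the same hypothetical data layout, the age comparison (and, further below, the field-of-study comparison) corresponds to a Kruskal–Wallis H test per dimension, for example:

```python
import pandas as pd
from scipy.stats import kruskal

def kruskal_by_group(df: pd.DataFrame, group_col: str,
                     dimensions=("PE", "EE", "SI", "FC", "BI", "UB")) -> pd.DataFrame:
    """Kruskal-Wallis H test of each UTAUT2 dimension across the categories of group_col
    (e.g., the age bands in Table 9 or the fields of study in Table 11)."""
    rows = {}
    for dim in dimensions:
        samples = [group[dim].dropna() for _, group in df.groupby(group_col)]
        h, p = kruskal(*samples)
        rows[dim] = {"H": h, "p": p}
    return pd.DataFrame(rows).T
```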
In this case, the H0 hypotheses rejected at p ≤ 0.05 were those related to the non-existence of significant differences concerning age and the following UTAUT2 dimensions: “Usage Behavior”, “Behavioral Intention”, and “Effort Expectancy”.
In Table 10, the differences in the rejected hypothesis are presented, indicating which age group the differences favored. To determine this, the rank test was applied again.
The results indicate that, in general, students aged 21–30 years exhibited the highest acceptance of the technology analyzed in this study.
Finally, we tested H0, referring to the existence or absence of significant differences across the various UTAUT2 dimensions based on the students’ field of study. The Kruskal–Wallis values presented in Table 11 indicate that none of the H0 hypotheses can be rejected at p ≤ 0.05. Consequently, we can conclude that there are no significant differences between the various UTAUT dimensions regarding the acceptance of ChatGPT and the students’ field of study.

5. Discussion

This study provides a valuable contribution from both a scientific and social perspective, offering a deeper understanding of AI acceptance in education. Its findings can serve as a foundation for future research aimed at examining in greater detail the interaction between technology and learning. Additionally, it provides useful insights for educational institutions and academic policymakers, facilitating evidence-based decision-making regarding the implementation of training programs and the development of technological resources.
Beyond its theoretical contributions, these findings have significant practical implications for educators seeking to integrate AI-driven tools like ChatGPT into their teaching methodologies. As one of the most advanced AI text-generation models, ChatGPT has the potential to support various aspects of education, including curriculum design, assessment, student engagement, and communication [45]. Educators can leverage its capabilities to generate structured learning materials, assist with student writing, and even act as an interactive tool for inquiry-based learning. The ability of ChatGPT to provide instant feedback, facilitate brainstorming, and aid in comprehension makes it a valuable supplement to traditional pedagogical strategies [46].
One of the most transformative aspects of ChatGPT is its role in personalized learning and adaptive education. Unlike conventional static resources, ChatGPT enables students to engage in interactive conversations that cater to their specific needs and levels of understanding. It can generate contextualized examples, clarify concepts, and guide students through problem-solving processes in real time, thereby enhancing self-directed learning and metacognitive skills [47]. Furthermore, it allows for dynamic assessment by generating practice questions, evaluating student responses, and offering individualized feedback, reinforcing knowledge acquisition in a way that aligns with modern competency-based education models [45].
Additionally, ChatGPT’s utility extends beyond student-centered learning; it also serves as an asset for instructors in curriculum planning and assessment design. The ability to streamline lesson planning, generate quizzes, and support rubric creation provides teachers with more time to focus on higher-order pedagogical tasks. Moreover, the integration of AI in education is reshaping administrative functions, allowing institutions to optimize their resource allocation and instructional strategies [46]. The increasing reliance on AI in education calls for a redefinition of digital literacy programs, ensuring that students are equipped with the necessary critical thinking skills to assess AI-generated content, detect biases, and engage with AI ethically.
Despite its advantages, the adoption of ChatGPT in education also presents challenges related to academic integrity, privacy concerns, and equitable access. While some educators embrace AI as a teachable agent that fosters deeper learning by encouraging students to teach concepts back to the AI, others remain concerned about its potential misuse [45]. Addressing these challenges requires clear institutional policies that delineate the ethical boundaries of AI-assisted learning while promoting its responsible and transparent use.
Given the increasing role of AI in education, universities and policymakers must prioritize the development of AI literacy programs that equip students with the skills to effectively and responsibly engage with AI tools. These initiatives should focus not only on technical proficiency but also on ethical considerations, ensuring that students can distinguish when and how to integrate AI assistance appropriately into their academic work. Additionally, ensuring equal access to AI-powered resources is crucial; universities must implement strategies to bridge the digital divide and provide students from all backgrounds with equitable opportunities to benefit from AI-enhanced education [47].
By integrating AI-driven tools like ChatGPT into education, institutions can create more dynamic, responsive, and student-centered learning environments. However, successful adoption requires a balanced approach that leverages AI’s strengths while addressing its limitations, ensuring that it remains a supplementary tool rather than a replacement for critical thinking and human instruction.

5.1. Research Limitations

Despite its contributions, this study has some limitations that should be considered for a comprehensive understanding of the results:
  • Sampling Method: This study employed a convenience sampling method, which presents certain disadvantages compared to random sampling [34]. While this approach allowed for the collection of data from a sizable group of Costa Rican university students, it does not ensure that the findings are fully generalizable to the entire population. Convenience sampling may introduce selection bias, as participants who voluntarily engage in the study might differ in meaningful ways from those who do not. This could result in estimates of the relationships between the UTAUT2 model variables that are not entirely reflective of the broader student population. Future research should aim to replicate this study using random or stratified sampling techniques to enhance the representativeness of the sample. Additionally, weighting techniques could be applied in data analysis to adjust for potential sampling imbalances.
  • Self-Reported Data: The data collection instrument relied on self-reports, which have inherent limitations [35]. Self-reported measures are susceptible to social desirability bias and recall bias, which may affect the accuracy of responses. For instance, students may overestimate or underestimate their use of ChatGPT due to perceived expectations or memory constraints. However, as Yousafzai, Foxall, and Pallister [36] point out, obtaining objective measures of technology acceptance is challenging, unless the technology is used consistently.
  • Lack of Qualitative Insights: This study relied exclusively on quantitative data collected through structured questionnaires, which limits the ability to capture students’ nuanced perceptions, motivations, and challenges in adopting ChatGPT. While the UTAUT2 model provides a strong framework for measuring technology acceptance, qualitative methods, such as interviews, focus groups, or open-ended survey questions, could provide a deeper contextual understanding of the observed statistical relationships.
  • Ex Post Facto Design: The ex post facto methodological design prevents the establishment of causal relationships between the analyzed variables. Although structural equation modeling (PLS-SEM) was used to infer relationships between constructs, this approach does not replace experimental or longitudinal designs in terms of causal inference.

5.2. Future Research Directions

Training in AI-based tools is a key factor in the technology adoption process. This is not only because AI is transforming teaching and learning methods but also due to its broader social and cultural implications. Chang et al. [37] emphasize that students’ ability to use ChatGPT significantly influences their acceptance of the tool, reinforcing the need to provide appropriate training programs to enhance its pedagogical use. In this sense, the AI revolution extends beyond technology, becoming a phenomenon that redefines interactions with knowledge and education.
Ultimately, the real challenge is not just learning to use these tools, as this process will naturally occur over time. The crucial aspect is to understand and manage the structural changes that AI will introduce in society and education as well as the ideological values underlying its implementation [38,39,40,41]. This perspective invites a critical reflection on AI’s impact, not only in the academic field but also within the sociocultural framework in which it operates.
Therefore, future research should expand the scope of this study to other fields of knowledge and educational levels, aiming to achieve a more holistic understanding of ChatGPT acceptance [42,43,44]. Additionally, it would be pertinent to analyze the long-term impact of AI use on academic performance and the development of students’ digital competencies [45,46,47].
A mixed-method approach, combining qualitative and quantitative techniques, would allow for a deeper understanding of students’ perceptions and attitudes toward AI in education. Moreover, as suggested by Tamilmani, Rana, Wamba, and Dwivedi [48,49] in their meta-analysis on UTAUT2 and AI research, it is crucial to consider contextual variables to better understand technology adoption levels, which would necessitate replicating this study.

6. Conclusions

This study has systematically addressed the degree of acceptance of ChatGPT among Costa Rican university students, using the UTAUT2 (Unified Theory of Acceptance and Use of Technology) model as a theoretical framework. The findings point in different directions: the first key aspect relates to the validity and reliability of the instrument used to analyze students’ acceptance of ChatGPT.
Additionally, the model has been validated, indicating that dimensions such as performance expectancy, effort expectancy, social influence, facilitating conditions, and behavioral intention are key factors in determining students’ usage behavior of this AI-based tool. In particular, it has been observed that students who use ChatGPT more frequently tend to perceive it as a useful and easy-to-use tool, highlighting the need for its effective integration into educational environments.
The findings of this study align with previous research conducted by [30,48,50,51,52,53,54], suggesting that the UTAUT2 model is well established as a robust theoretical framework for understanding technological acceptance in the educational field. These results reinforce the importance of continuing to explore the factors that influence students’ willingness to adopt AI technologies in their learning processes.
The incorporation of AI-based tools such as ChatGPT in higher education presents a set of challenges for both teachers and students. Both groups must adapt to a dynamic teaching–learning ecosystem that requires continuous updates in their knowledge and digital skills. Digital literacy thus becomes a fundamental component in ensuring an effective adaptation to these new scenarios. In this context, [16] found that a significant proportion of students use AI tools at least once a week, suggesting a positive inclination toward their integration into academic practices.
Additionally, this study has highlighted that the degree of acceptance of ChatGPT is influenced by students’ gender and age. Men show greater interest in this technology, and the age group of 21–30 years exhibits the highest level of acceptance.

Author Contributions

Conceptualization, J.C.-A. and A.P.-R.; methodology, A.P.-R.; software, H.d.l.Á.R.G.; validation, J.C.-A., A.P.-R. and V.F.-S.; formal analysis, A.P.-R.; investigation, H.d.l.Á.R.G.; resources, J.C.-A.; data curation, V.F.-S.; writing—original draft preparation, A.P.-R.; writing—review and editing, J.C.-A. and V.F.-S.; visualization, H.d.l.Á.R.G.; supervision, J.C.-A.; project administration, A.P.-R.; funding acquisition, J.C.-A. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Ethical review and approval were waived for this study because it is a non-interventional study on the prediction of the use of generative artificial intelligence through ChatGPT, involving no intervention on human subjects.

Informed Consent Statement

Informed consent was obtained from all subjects involved in this study.

Data Availability Statement

The original contributions presented in this study are included in the article. Further inquiries can be directed to the corresponding author.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Franganillo, J. La inteligencia artificial generativa y su impacto en la creación de contenidos mediáticos. Methaodos Rev. Cienc. Soc. 2023, 11, m231102a10. [Google Scholar] [CrossRef]
  2. Abella García, V.; Fernández Mármol, K. Docencia en la Era de la Inteligencia Artificial: Enfoques Prácticos para Docentes; Universidad de Burgos: Burgos, Spain, 2024. [Google Scholar]
  3. Degli-Esposti, S. La Ética de la Inteligencia Artificial; CSIC: Madrid, Spain, 2023. [Google Scholar]
  4. UNESCO. Nota Conceptual del Informe de Seguimiento de la Educación en el Mundo 2023 Sobre Tecnología y Educación; UNESCO: London, UK, 2023. [Google Scholar]
  5. Fuertes, M. Enmarcando las aplicaciones de IA generativa como herramientas para la cognición en educación. Pixel-Bit Rev. Medios Educ. 2024, 71, 42–57. [Google Scholar]
  6. Mayol, J. Inteligencia artificial generativa y educación médica. Educ. Médica 2023, 24, 100851. [Google Scholar] [CrossRef]
  7. Farrokhnia, M.; Banihashem, S.K.; Noroozi, O.; Wals, A. A SWOT analysis of ChatGPT: Implications for educational practice and research. Innov. Educ. Teach. Int. 2023, 61, 460–474. [Google Scholar] [CrossRef]
  8. Jiménez, C.R.; Alonso, M.G.; Robles, A.S.; Frías, J.M. Competencia digital y seguridad digital en educación. In Desafíos Educativos a Través de la Interdisciplinariedad en la Investigación y la Innovación; Dykinson: Madrid, Spain, 2023; Volume 163. [Google Scholar]
  9. Nikolopoulou, K. Generative Artificial Intelligence in Higher Education: Exploring Ways of Harnessing Pedagogical Practices with the Assistance of ChatGPT. Int. J. Chang. Educ. 2024, 1, 103–111. [Google Scholar] [CrossRef]
  10. Villegas-José, V.; Delgado-García, M. Inteligencia artificial: Revolución educativa innovadora en la Educación Superior [Artificial Intelligence: Innovative educational revolution in Higher Education]. Pixel-Bit Rev. Medios Educ. 2024, 71, 159–177. [Google Scholar]
  11. Ansari, A.N.; Ahmad, S.; Bhutta, S.M. Mapping the global evidence around the use of ChatGPT in higher education: A systematic scoping review. Educ. Inf. Technol. 2023, 29, 11281–11321. [Google Scholar] [CrossRef]
  12. Crompton, H.; Burke, D. Artificial intelligence in higher education: The state of the field. Int. J. Educ. Technol. High. Educ. 2023, 20, 22. [Google Scholar] [CrossRef]
  13. García Peñalvo, F.; Llorens-Largo, F.; Vidal, J. La nueva realidad de la educación ante los avances de la inteligencia artificial generativa. Ried-Rev. Iberoam. Educ. Distancia 2024, 27, 9–39. [Google Scholar] [CrossRef]
  14. Kim, S. Trends in research on ChatGPT and adoption-related issues discussed in articles: A narrative review. Sci. Ed. 2023, 11, 3–11. [Google Scholar] [CrossRef]
  15. López Regalado, O.; Núñez-Rojas, N.; López Gil, O.R.; Sánchez-Rodríguez, J. El análisis del uso de la inteligencia artificial en la educación universitaria: Una revisión sistemática (Analysis of the use of artificial intelligence in university education: A systematic review). Pixel-Bit Rev. Medios Educ. 2024, 70, 97–122. [Google Scholar] [CrossRef]
  16. Chao-Rebolledo, C.; Rivera-Navarro, M. Usos y percepciones de herramientas de inteligencia artificial en la educación superior en México. Rev. Iberoam. Educ. 2024, 95, 57–72. [Google Scholar] [CrossRef]
  17. Chiappe, A.; Sanmiguel, C.; Sáez Delgado, F.M. IA generativa versus profesores: Reflexiones desde una revisión de la literatura [Generative AI vs. Teachers: Insights from a literature review]. Pixel-Bit Rev. Medios Educ. 2025, 72, 119–137. [Google Scholar]
  18. Vergès, C. Precarización laboral, desigualdad y nuevas tecnologías. Rev. Colomb. Bioét. 2023, 17, 2. [Google Scholar] [CrossRef]
  19. Rodríguez-Pedró, R. Brecha digital y transformación social: El impacto de las nuevas tecnologías en América Latina y el Caribe. In Revista Puertorriqueña de Bibliotecología y Documentación; Acceso: Port-au-Prince, Haiti, 2024; Volume 5. [Google Scholar]
  20. Robert, J.; Muscanell, N. 2023 Horizon Action Plan: Generative AI; EDUCAUSE: Denver, CO, USA, 2023. [Google Scholar]
  21. Ouyang, F.; Jiao, P. Artificial intelligence in education: The three paradigms. Comput. Educ. Artif. Intell. 2021, 2, 100020. [Google Scholar] [CrossRef]
  22. Almaraz-López, C.; Almaraz-Menéndez, F.; López-Esteban, C. Comparative study of the attitudes and perceptions of university students in business administration and management and in education toward artificial intelligence. Educ. Sci. 2023, 13, 609. [Google Scholar] [CrossRef]
  23. Palomino, S.; Vázquez, J.C. Percepción de los Universitarios ante la Formación y uso de Herramientas de IA; Instituto para el Futuro de la Educación: Monterrey, Mexico, 2023. [Google Scholar]
  24. Laínez, G.; Tumbaco, M.; Ricardo, J.; Peñafiel, R.; Zambrano, W.; Del Pezo, A. Perception of university students on the use of artificial intelligence (AI) tools for the development of autonomous learning. Rev. Gest. Soc. Ambient. 2024, 18, 1–20. [Google Scholar]
  25. Ortega-Rodríguez, P.J.; Pericacho-Gómez, F.J. La utilidad didáctica percibida del ChatGPT por parte del alumnado universitario [The educational perceived usefulness of ChatGPT by university students]. Pixel-Bit Rev. Medios Educ. 2025, 72, 159–178. [Google Scholar]
  26. Castro-López, A.; Cervero, A.; Álvarez-Blanco, L. Análisis sobre el uso de las herramientas de inteligencia artificial interactiva en el entorno universitario. Rev. Tecnol. Cienc. Educ. 2025, 30, 37–66. [Google Scholar] [CrossRef]
  27. Loayza-Maturrano, E. Percepción de estudiantes de una universidad de Lima sobre el uso de ChatGPT en la escritura académica. Educ. Comun. 2024, 12, 28–38. [Google Scholar]
  28. Tlili, A. Can artificial intelligence (AI) help in computer science education? A meta-analysis approach [¿Puede ayudar la inteligencia artificial (IA) en la educación en ciencias de la computación? Un enfoque metaanalítico]. Rev. Esp. Pedagog. 2024, 82, 469–490. [Google Scholar]
  29. Arthur, F.; Salifu, I.; Abam Nortey, S. Predictors of higher education students’ behavioural intention and usage of ChatGPT: The moderating roles of age, gender and experience. Interact. Learn. Environ. 2024, 1–27. [Google Scholar] [CrossRef]
  30. Strzelecki, A. To use or not to use ChatGPT in higher education? A study of students’ acceptance and use of technology. Interact. Learn. Environ. 2023, 32, 5142–5155. [Google Scholar] [CrossRef]
  31. Solano-Barliza, A.; Ojeda, A.; Aaron-Gonzalvez, M. Análisis cuantitativo de la percepción del uso de inteligencia artificial ChatGPT en la enseñanza y aprendizaje de estudiantes de pregrado del Caribe colombiano. Form. Univ. 2024, 17, 129–138. [Google Scholar] [CrossRef]
  32. Carmines, E.G.; Zeller, R.A. Reliability and Validity Assessment; Sage: Thousand Oaks, CA, USA, 1979; Volume 17. [Google Scholar]
  33. Bagozzi, R.P.; Yi, Y. On the evaluation of structural equation models. J. Acad. Mark. Sci. 1988, 16, 74–94. [Google Scholar] [CrossRef]
  34. Otzen, T.; Manterola, C. Técnicas de muestreo sobre una población a estudio. Int. J. Morphol. 2017, 35, 227–232. [Google Scholar] [CrossRef]
  35. Del Valle, M.; Zamora, E. El uso de las medidas de auto-informe: Ventajas y limitaciones en la investigación en Psicología. Altern. Psicol. 2022, 47, 22–35. [Google Scholar]
  36. Yousafzai, S.Y.; Foxall, G.R.; Pallister, J.G. Technology acceptance: A meta-analysis of the TAM: Part 1. J. Model. Manag. 2007, 2, 251–280. [Google Scholar] [CrossRef]
  37. Chang, H.; Liu, B.; Zhao, Y.; Li, Y.; He, F. Research on the acceptance of ChatGPT among different college student groups based on latent class analysis. Interact. Learn. Environ. 2024, 33, 22–38. [Google Scholar] [CrossRef]
  38. Cortina, A. ¿Ética o Ideología de la Inteligencia Artificial? Paidós: Barcelona, Spain, 2024. [Google Scholar]
  39. Supianto, R.; Widyaningrum, F.; Wulandari, M.; Zainudin, A.; Athiyallah, M. Exploring the factors affecting ChatGPT acceptance among university students. Multidiscip. Sci. J. 2024, 6, 2024273. [Google Scholar] [CrossRef]
  40. Strzelecki, A. Students’ Acceptance of ChatGPT in Higher Education: An Extended Unified Theory of Acceptance and Use of Technology. Innov. High. Educ. 2024, 49, 223–245. [Google Scholar] [CrossRef]
  41. Jo, H.; Bang, Y. Analyzing ChatGPT adoption drivers with the TOEK framework. Sci. Rep. 2023, 13, 22606. [Google Scholar] [CrossRef] [PubMed]
  42. Menon, D.; Shilpa, K. “Chatting with ChatGPT”: Analyzing the factors influencing users’ intention to Use the Open AI’s ChatGPT using the UTAUT model. Heliyon 2023, 9, e20962. [Google Scholar] [CrossRef] [PubMed]
  43. Elshaer, I.A.; Hasanein, A.M.; Sobaih, A.E.E. The Moderating Effects of Gender and Study Discipline in the Relationship between University Students’ Acceptance and Use of ChatGPT. Eur. J. Investig. Health Psychol. Educ. 2024, 14, 1981–1995. [Google Scholar] [CrossRef]
  44. Shaengchart, Y.; Bhumpenpein, N.; Kongnakorn, K.; Khwannu, P.; Tiwtakul, A.; Detmee, S. Factors Influencing the Acceptance of ChatGPT Usage Among Higher Education Students in Bangkok, Thailand. Adv. Knowl. Exec. 2023, 2, 1–14. [Google Scholar]
  45. Whalen, J.; Mouza, C. ChatGPT: Challenges, opportunities, and implications for teacher education. Contemp. Issues Technol. Teach. Educ. 2023, 23, 1–23. [Google Scholar]
  46. Yu, H. The application and challenges of ChatGPT in educational transformation: New demands for teachers’ roles. Heliyon 2024, 10, e24289. [Google Scholar] [CrossRef]
  47. Elbanna, S.; Armstrong, L. Exploring the integration of ChatGPT in education: Adapting for the future. Manag. Sustain. Arab Rev. 2024, 3, 16–29. [Google Scholar] [CrossRef]
  48. Tamilmani, K.; Rana, N.; Wamba, F.; Dwivedi, R. The extended Unified Theory of Acceptance and Use of Technology (UTAUT2): A systematic literature review and theory evaluation. Int. J. Inf. Manag. 2021, 57, 102269. [Google Scholar] [CrossRef]
  49. Soto Kiewit, L.D.; Vargas Sandoval, Y.; Segura Jiménez, A.; Madrigal Mora, A.; Sánchez Hernández, C.; Salazar Miranda, A.; Carranza Villalobos, C. Las Desigualdades: Una reflexión Necesaria en El Contexto del Bicentenario de la Independencia de Costa Rica. Rev. Arch. Nac. 2021, 85, e528. [Google Scholar]
  50. Gansser, O.; Reich, C. A new acceptance model for artificial intelligence with extensions to UTAUT2: An empirical study in three segments of application. Technol. Soc. 2021, 65, 101535. [Google Scholar] [CrossRef]
  51. Schmitz, A.; Díaz-Martín, A.; Yagüe, M.J. Modifying UTAUT2 for a cross-country comparison of telemedicine adoption. Comput. Hum. Behav. 2022, 130, 107183. [Google Scholar] [CrossRef] [PubMed]
  52. Cao, Y.; Abdul, A.L.; Mohd, W. University students’ perspectives on Artificial Intelligence: A survey of attitudes and awareness among Interior Architecture students. Int. J. Educ. Res. Innov. (IJERI) 2023, 20, 1–21. [Google Scholar] [CrossRef]
  53. Strzelecki, A.; ElArabawy, S. Investigation of the moderation effect of gender and study level on the acceptance and use of generative AI by higher education students: Comparative evidence from Poland and Egypt. Br. J. Educ. Technol. 2024, 55, 1209–1230. [Google Scholar] [CrossRef]
  54. Saif, N.; Khan, S.U.; Shaheen, I.; Alotaibi, A.; Alnfiai, M.M.; Arif, M. Chat-GPT; validating Technology Acceptance Model (TAM) in education sector via ubiquitous learning mechanism. Comput. Hum. Behav. 2024, 154, 108097. [Google Scholar] [CrossRef]
Figure 1. Structural model.
Table 1. Items of the measurement instrument.
Item | Mean | SD
PE1 I believe ChatGPT is useful in my studies | 6.11 | 1.346
PE2 Using ChatGPT increases your chances of achieving important things in your studies | 5.43 | 1.650
PE3 Using ChatGPT helps you complete tasks and projects faster in your studies | 6.09 | 1.424
PE4 Using ChatGPT increases your productivity in your studies | 5.61 | 1.613
EE1 Learning to use ChatGPT is easy for me | 6.17 | 1.376
EE2 My interaction with ChatGPT is clear and understandable | 6.05 | 1.326
EE3 I find ChatGPT easy to use | 6.31 | 1.221
EE4 It is easy for me to acquire skills in using ChatGPT | 5.97 | 1.454
SI1 The people who are important to me think that I should use ChatGPT | 4.49 | 1.990
SI2 The people who influence my behavior believe that I should use ChatGPT | 4.26 | 2.065
SI3 The people whose opinions I value prefer that I use ChatGPT | 4.23 | 2.087
FC1 I have the necessary resources to use ChatGPT | 6.54 | 1.068
FC2 I have the necessary knowledge to use ChatGPT | 6.14 | 1.394
FC3 ChatGPT is compatible with the technologies I use | 6.47 | 1.161
FC4 I can get help from others when I have difficulties using ChatGPT | 5.69 | 1.742
BI1 I intend to continue using ChatGPT in the future | 6.06 | 1.491
BI2 I will always try to use ChatGPT in my studies | 4.87 | 2.010
BI3 I plan to keep using ChatGPT frequently | 5.30 | 1.862
UB1 Choose your frequency of use for ChatGPT | 4.46 | 1.688
Table 2. Mean scores and standard deviations.
Dimension | Mean | SD
Performance Expectancy (PE) | 5.81 | 1.303
Effort Expectancy (EE) | 6.13 | 1.160
Social Influence (SI) | 4.33 | 1.933
Facilitating Conditions (FC) | 6.21 | 1.059
Behavioral Intention (BI) | 5.41 | 1.651
Usage Behavior (UB) | 4.46 | 1.688
Table 3. Factor loadings.
Item | Construct | Loading
BI1 | BI | 0.858
BI2 | BI | 0.939
BI3 | BI | 0.954
EE1 | EE | 0.847
EE2 | EE | 0.885
EE3 | EE | 0.868
EE4 | EE | 0.835
FC1 | FC | 0.879
FC2 | FC | 0.893
FC3 | FC | 0.650
FC4 | FC | 0.693
PE1 | PE | 0.895
PE2 | PE | 0.857
PE3 | PE | 0.825
PE4 | PE | 0.839
SI1 | SI | 0.904
SI2 | SI | 0.964
SI3 | SI | 0.941
UB1 | UB | 1.000
Table 4. Internal consistency.
Construct | Cronbach’s Alpha | Composite Reliability | Average Variance Extracted (AVE)
BI | 0.906 | 0.908 | 0.843
EE | 0.882 | 0.889 | 0.738
FC | 0.800 | 0.923 | 0.618
PE | 0.877 | 0.884 | 0.730
SI | 0.930 | 0.936 | 0.877
Table 5. Fornell–Larcker criterion.
Construct | BI | EE | FC | PE | SI | UB
BI | 0.918
EE | 0.580 | 0.859
FC | 0.511 | 0.662 | 0.786
PE | 0.757 | 0.550 | 0.543 | 0.854
SI | 0.483 | 0.381 | 0.283 | 0.584 | 0.937
UB | 0.613 | 0.498 | 0.432 | 0.534 | 0.359 | 1.000
Table 6. Path coefficients.
Path | Original Sample (O) | Sample Mean (M) | Standard Deviation (STDEV) | T Statistics (|O/STDEV|) | p-Values
BI -> UB | 0.530 | 0.527 | 0.092 | 5.746 | 0.000
EE -> BI | 0.214 | 0.219 | 0.103 | 2.079 | 0.001
FC -> BI | 0.034 | 0.038 | 0.088 | 0.384 | 0.001
FC -> UB | 0.161 | 0.165 | 0.079 | 2.041 | 0.031
PE -> BI | 0.596 | 0.591 | 0.099 | 5.999 | 0.000
SI -> BI | 0.044 | 0.046 | 0.093 | 0.469 | 0.023
Table 7. Significant differences based on students’ gender across the different UTAUT dimensions.
Statistic | PE | EE | SI | FC | BI | UB
Mann–Whitney U | 3982.5 | 4078.0 | 3653.0 | 3977.5 | 3612.0 | 3490.0
Wilcoxon W | 11,732.5 | 11,828.0 | 11,403.0 | 11,727.5 | 11,362.0 | 11,240.0
Z | −0.972 | −0.721 | −1.838 | −0.997 | −1.976 | −2.326
Sig. | 0.331 | 0.471 | 0.066 | 0.319 | 0.048 (*) | 0.020 (*)
Note: * = significant differences at the 0.05 level.
Table 8. Rank test between “Behavioral Intention” and “Usage Behavior” dimensions and students’ gender.
Dimension | Gender | N | Mean Rank | Sum of Ranks
BI (Behavioral Intention) | Male | 70 | 107.90 | 7553.00
 | Female | 124 | 91.63 | 11,362.00
 | Total | 194 | |
UB (Usage Behavior) | Male | 70 | 109.64 | 7675.00
 | Female | 124 | 90.65 | 11,240.00
Table 9. Significant differences based on students’ age across the different UTAUT dimensions.
Statistic | PE | EE | SI | FC | BI | UB
Kruskal–Wallis H | 1.826 | 11.092 | 7.760 | 6.437 | 9.552 | 11.126
df | 4 | 4 | 4 | 4 | 4 | 4
Sig. | 0.768 | 0.026 (*) | 0.101 | 0.169 | 0.049 (*) | 0.025 (*)
Note: * = significant differences at the 0.05 level.
Table 10. Rank test between “Usage Behavior”, “Behavioral Intention”, and “Effort Expectancy” dimensions and students’ age.
Dimension | Age Group | N | Mean Rank
EE (Effort Expectancy) | 18–20 years | 57 | 96.65
 | 21–30 years | 99 | 106.37
 | 31–40 years | 21 | 63.62
 | 41–50 years | 8 | 90.94
 | Over 50 years | 9 | 90.22
 | Total | 194 |
BI (Behavioral Intention) | 18–20 years | 57 | 84.88
 | 21–30 years | 99 | 104.93
 | 31–40 years | 21 | 79.74
 | 41–50 years | 8 | 114.06
 | Over 50 years | 9 | 122.44
 | Total | 194 |
UB (Usage Behavior) | 18–20 years | 57 | 91.21
 | 21–30 years | 99 | 107.09
 | 31–40 years | 21 | 76.26
 | 41–50 years | 8 | 61.81
 | Over 50 years | 9 | 113.11
 | Total | 194 |
Table 11. Kruskal–Wallis test results for field of study.
Statistic | PE | EE | SI | FC | BI
Kruskal–Wallis H | 11.995 | 10.285 | 12.486 | 10.609 | 7.385
df | 6 | 6 | 6 | 6 | 6
Sig. | 0.062 | 0.113 | 0.052 | 0.101 | 0.287