1. Introduction
In the current era of the “Fourth Industrial Revolution”, artificial intelligence (AI) has emerged as a transformative force in multiple spheres, including higher education. Integrating AI tools into the educational process promises to revolutionize how knowledge is imparted and acquired, offering unprecedented opportunities to personalize learning, enhance teaching methods, and optimize administrative management [1]. The ability of artificial intelligence to analyze large volumes of data and generate personalized insights can radically transform future professionals’ experience [2].
In this context, university training in AI tools becomes critical for students’ professional future and for the competitiveness and sustainability of higher education on the global stage. By integrating AI training into their curricula, universities equip their students with the technical skills necessary to navigate and thrive in increasingly digitized work environments and foster a more profound understanding of these technologies’ ethical, social, and economic implications [3].
However, the effective adoption of these technologies critically depends on the perception and acceptance of the primary beneficiaries: university students. The relevance of students maintaining a positive perception of the use of and training in AI tools lies in its direct impact on motivation and commitment to learning [4]. A positive perception increases students’ willingness to integrate these technologies into their educational process, giving them greater confidence in their everyday use within their future profession. In this sense, it is not enough merely to include AI tools in curricular plans; it is also necessary to promote a positive perception of adopting these technologies in the educational process [1].
This article presents the results of an exploratory study of a sample population of students from a technological university in Mexico who responded to the validated instrument “Perception of Adoption and Training in the Use of Artificial Intelligence Tools in the Profession”. The objective was to explore students’ perceptions and examine their degree of familiarity, comfort, and expectations regarding training in and using these technological tools during their university experience. Methodologically, we conducted an exploratory, quantitative investigation using a multigroup study employing PLS-SEM [5,6].
The motivations for conducting the presented study are diverse and deeply relevant in the current context of rapid technological evolution. First, understanding students’ perceptions of integrating artificial intelligence (AI) tools into their academic training is crucial for designing curricula that are not only technically advanced but also well accepted and valued by students. This is vital since a positive perception can directly increase students’ motivation and commitment to their learning and future professional development. Additionally, by exploring students’ familiarity, comfort, and expectations regarding AI tool training, universities can adjust their teaching methods to maximize educational effectiveness and adequately prepare students for the challenges of a digitalized work environment. It is also important to identify potential barriers or challenges in the perception and acceptance of these technologies to implement strategies that promote an effective and conscious integration of AI, considering its ethical, social, and economic implications.
Lastly, this study can provide valuable data for educational policymakers and university administrators, helping them better understand how AI tools can be incorporated into the educational system in a way that aligns with students’ needs and expectations, thereby strengthening the competitiveness and sustainability of higher education on the global stage. This will not only enrich the educational experience of students but also equip them with essential skills for their professional future in an increasingly technological world.
3. Materials and Methods
This article details the findings of an exploratory investigation conducted among 238 students from a university institution in Mexico. The study aimed to compare the differences in students’ attitudes toward the training and use of artificial intelligence (AI) tools in their disciplinary areas. The sample included students from various academic semesters and six disciplines, grouped into four categories: (1) architecture, art, and design; (2) health sciences; (3) business, social sciences, humanities, and education; and (4) engineering. The sample’s gender composition was 101 men and 137 women, reflecting a gender distribution similar to the university’s student population. However, the study found no statistically significant gender differences.
Considering the variety of disciplines among the participants, this study did not specify the use of any particular artificial intelligence tool. It is noteworthy, however, that the majority of participants employ tools based on large language models, such as ChatGPT, and generative image models, like DALL-E and MidJourney, especially students from fields such as architecture, art, and design.
Data collection occurred during the August to December 2023 academic period using the applied, validated research instrument on the Google Forms platform, ensuring accessibility and ease of student participation. To adhere to fundamental ethical principles in research, this study followed institutional guidelines and regulations, ensuring that all participants gave their informed consent to participate in the study and to have their responses used for academic purposes. The regulation and supervision of the study were managed by the R4C research group, with additional technical support from the Writing Lab of the Institute for the Future of Education at Tecnologico de Monterrey, thus ensuring the integrity and methodological quality of the research process. The study was validated by the Institutional Ethics Committee of the Tecnologico de Monterrey, which assessed the research as low risk (ID: IFE-2024-01). It also adheres to the terms and conditions of the Research for Challenges privacy notice: https://tec.mx/es/aviso-privacidad-research-challenges (accessed on 1 July 2024).
3.1. Instrument
This study applied the “Perception of the Adoption and Training in the Use of Artificial Intelligence Tools in the Profession” instrument (Table 1), adapted and validated for the Latin American population by Vázquez-Parra et al. [27]. This instrument is an adaptation of the “Attitudes and Perceptions of Students Towards Artificial Intelligence” scale designed by Sit et al. [28] to measure attitudes and perceptions regarding the teaching and use of AI tools among medical students at King’s College London. The instrument consists of 11 items answered on a 5-point Likert scale, where participants rate their agreement with statements related to their current attitudes towards AI, their professional intentions regarding its use, their current understanding of these tools, their openness to adoption by their profession’s curricula, and their confidence in routinely and critically using AI tools. Additionally, a dichotomous question was included to determine whether participants had received AI training and whether it was mandatory in their curriculum. Although the original instrument was designed for medical students, it has been adapted for other areas of knowledge, such as the adaptation by Almaraz, Almaraz, and López [22], who applied it to Business and Education students at the University of Salamanca. The adaptation of this instrument for the Latin American population is supported by theoretical validation by experts and statistical validation in a pilot test using a MIMIC (Multiple Indicators and Multiple Causes) model analysis [27].
3.2. Data Analysis
Because the data do not follow a normal distribution, we used Partial Least Squares Structural Equation Modeling (PLS-SEM). Unlike traditional statistical methods, PLS-SEM does not assume a normal data distribution. Additionally, PLS-SEM proves particularly effective for analyzing models with multiple relationships between latent variables, even when these latent variables are measured with a small number of indicators [29]. In this study, we employ PLS-SEM to utilize total variance for estimating model parameters [30]. Primarily an exploratory technique [31], PLS-SEM is especially suitable for investigating phenomena that are relatively new or evolving [32].
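The non-normality that motivates this choice is easy to verify on Likert-type items. The following sketch runs a Shapiro-Wilk test on a synthetic, skewed 5-point item; the data and the sample size of 238 are illustrative stand-ins, not the study’s actual responses.

```python
# Sketch: checking the normality assumption that motivates PLS-SEM over
# covariance-based SEM. Data here are synthetic 5-point Likert responses;
# in the actual study this check would run on each instrument item.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
# Simulate a skewed Likert item: most students agree (4s and 5s).
item = rng.choice([1, 2, 3, 4, 5], size=238, p=[0.05, 0.05, 0.15, 0.35, 0.40])

w, p_value = stats.shapiro(item)
print(f"Shapiro-Wilk W = {w:.3f}, p = {p_value:.2e}")

# Likert data on 5 discrete points are essentially never normal, so the
# test rejects normality and a distribution-free estimator is preferred.
assert p_value < 0.05
```

Rejecting normality for each item supports the use of an estimator, like PLS-SEM, that imposes no distributional assumptions.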
A PLS-SEM model in a multigroup study was the data analysis technique. We can express the structural equation for this model as follows:

Attitude = β1 · Impact + β2 · Understanding + β3 · Perception + ζ,  (1)

where β1, β2, and β3 are the path coefficients of the three predictors and ζ is the structural error term. Figure 1 shows the proposed structural model.
The respective indicator variables for attitude towards including this knowledge as part of the professional training process were as follows:
X7. Training in artificial intelligence topics would give me confidence in using basic artificial intelligence tools if necessary.
X8. Training in artificial intelligence topics will allow me to evaluate the various existing artificial intelligence tools and algorithms in the discipline or profession.
X9. Training in artificial intelligence topics would give me the basic knowledge necessary to routinely work with artificial intelligence tools in my discipline or profession.
On the other hand, the predictors from Table 1 refer to questions related to the following:
- (i) The impact of the use of AI tools in the profession.
- (ii) The understanding of AI tools and their professional implications.
- (iii) The attitude and perception toward the relevance of training for appropriately using these tools.
According to Mehmetoglu and Venturini [34], the first stage of PLS-SEM is an iterative process aimed at estimating latent variable scores. This process begins by initializing these scores, allowing information to flow from the “outside” to the “inside” using data from the outer model. Next, latent variable scores are adjusted based on the relationships among latent variables in the inner model. Finally, the information moves back to the “outside” to update the outer weights.
1. Calculate the latent variable scores:

S_q = Σ_{D=1}^{D_q} m_{Dq} · y_{Dq}

This equation implies that the scores for each latent variable are initially set equal to the algebraic sum of the indicator variables in the block to which they belong, where:
- S_q represents the score for latent variable q;
- D_q represents the number of indicator variables for latent variable q;
- m_{Dq} represents the weight for indicator variable D of latent variable q;
- y_{Dq} represents the value of indicator variable D of latent variable q.
2. The inner weights are calculated for the latent variables using the factorial scheme and the path scheme [35].
3. The scores of the latent variables S_q obtained in the previous step are updated, obtaining new scores.
4. The outer weights are then updated. For reflective (Mode A) blocks, each weight is the covariance between the indicator and the inner-estimated score, w_{Dq} = cov(y_{Dq}, S̃_q); for formative (Mode B) blocks, the weights are obtained by regressing the inner-estimated score on the block’s indicators, w_q = (Y_q′ Y_q)^{−1} Y_q′ S̃_q.
5. Latent variable scores are re-estimated with the updated outer weights, as in step 1.
6. Steps 2 through 5 are repeated until a convergence criterion is met.
In the second stage of the PLS-SEM algorithm, we calculated the loadings for reflective constructs and the coefficients for formative constructs. The final stage estimated the path coefficients.
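As an illustration, the iterative first stage and the final path estimation can be sketched in a few lines on synthetic data. This is a minimal, didactic implementation (two reflective Mode A blocks, centroid inner scheme), not the four-construct multigroup model estimated in this study; block sizes, loadings, and variable names are invented for the example.

```python
# Minimal sketch of PLS-SEM stage 1 (the iterative score estimation) on
# synthetic data, with Mode A (reflective) blocks and the centroid inner
# scheme. The block structure is illustrative, not the study's model.
import numpy as np

rng = np.random.default_rng(0)
n = 238
# Two latent variables, three indicators each; LV1 drives LV2.
lv1 = rng.normal(size=n)
lv2 = 0.6 * lv1 + rng.normal(scale=0.8, size=n)
Y1 = lv1[:, None] * [0.9, 0.8, 0.7] + rng.normal(scale=0.4, size=(n, 3))
Y2 = lv2[:, None] * [0.9, 0.8, 0.7] + rng.normal(scale=0.4, size=(n, 3))

def standardize(X):
    return (X - X.mean(0)) / X.std(0)

blocks = [standardize(Y1), standardize(Y2)]
weights = [np.ones(3), np.ones(3)]          # step 1: initial outer weights

for _ in range(100):
    # Outer approximation: latent scores as weighted sums of indicators.
    S = [standardize(B @ w) for B, w in zip(blocks, weights)]
    # Step 2: inner weights (centroid scheme = sign of the correlation).
    e = np.sign(np.corrcoef(S[0], S[1])[0, 1])
    # Step 3: inner approximation of each latent variable.
    Z = [standardize(e * S[1]), standardize(e * S[0])]
    # Step 4 (Mode A): outer weights = covariances of indicators with Z.
    new_w = [B.T @ z / len(z) for B, z in zip(blocks, Z)]
    if all(np.allclose(w, nw, atol=1e-7) for w, nw in zip(weights, new_w)):
        break                               # step 6: convergence reached
    weights = new_w

# Step 5 / final scores, then stages 2-3: loadings and the path coefficient.
S = [standardize(B @ w) for B, w in zip(blocks, weights)]
loadings = [np.corrcoef(B.T, s)[-1, :-1] for B, s in zip(blocks, S)]
path = np.corrcoef(S[0], S[1])[0, 1]        # one predictor: path = correlation
print("path coefficient:", round(path, 3))
```

In the actual analysis, dedicated PLS-SEM software handles the full model, the factorial and path weighting schemes, and the multigroup comparisons.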
To ensure the validity and reliability of constructs in a reflective model, certain criteria must be met. First, there should be only one eigenvalue above 1 associated with a construct, indicating unidimensionality. Second, the construct reliability coefficient (DG rho) should be greater than 0.7, reflecting homogeneity. Third, standardized loadings should exceed 0.7, ensuring item reliability. Fourth, the average variance extracted (AVE) should be above 0.5, indicating convergent validity. Fifth, the AVE should be greater than the squared correlations, guaranteeing discriminant validity. In formative models, items must measure what they are supposed to represent, known as content validity. Additionally, variance inflation values should be less than 2.5 to ensure the absence of multicollinearity, and the weights must be statistically significant [34].
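The reflective-model criteria above can be computed directly from the indicator data. The sketch below evaluates them on one synthetic three-indicator block; the loadings and block composition are invented for illustration and do not reproduce the study’s constructs.

```python
# Sketch of the reflective-model quality checks: unidimensionality (one
# eigenvalue > 1), Dillon-Goldstein rho > 0.7, loadings > 0.7, AVE > 0.5.
# The three-indicator block is synthetic and purely illustrative.
import numpy as np

rng = np.random.default_rng(1)
n = 238
latent = rng.normal(size=n)
Y = latent[:, None] * [0.9, 0.85, 0.8] + rng.normal(scale=0.4, size=(n, 3))
Y = (Y - Y.mean(0)) / Y.std(0)

# Standardized loadings: correlations with the (unweighted) composite score.
score = Y.sum(axis=1)
loadings = np.array([np.corrcoef(Y[:, j], score)[0, 1] for j in range(3)])

# Dillon-Goldstein rho (composite reliability) and AVE from the loadings.
dg_rho = loadings.sum() ** 2 / (loadings.sum() ** 2 + (1 - loadings**2).sum())
ave = (loadings**2).mean()

# Unidimensionality: only the first eigenvalue of the correlation matrix > 1.
eigvals = np.linalg.eigvalsh(np.corrcoef(Y.T))[::-1]

print(f"DG rho = {dg_rho:.3f}, AVE = {ave:.3f}, "
      f"first eigenvalue = {eigvals[0]:.2f}")
assert dg_rho > 0.7 and ave > 0.5 and (eigvals > 1).sum() == 1
```

Each of the study’s four constructs would pass through the same battery of checks before the structural coefficients are interpreted.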
4. Results
The partial least squares structural equation modeling (PLS-SEM) results revealed that the observed variables explain approximately 60.8% of the variability in the latent variables. This finding was complemented by a Relative Global Goodness-of-Fit (Relative GoF) value of 0.97249, indicating an adequate fit compared to a null model. Additionally, the Average Redundancy indicates that around 51.3% of the variance in the latent variables was explained through other latent variables. The minimal tolerance (1.00 × 10⁻⁷) suggests no significant convergence problems during the model estimation, reinforcing the precision of the analysis. These results, presented in Table 2, support the validity and robustness of the proposed model¹.
The standardized loadings measured the strength of the relationship between the latent variables and their indicators, where higher values indicate a more robust definition and measurement of these specific constructs. In other words, this analysis highlights the most critical variables in measuring each latent construct, demonstrating their impact on defining the respective constructs (see Table 3, Equations (2)–(4)).
The factor loadings of the reflective model are significant, indicating a strong relationship between the latent variables and their respective observed variables. Specifically, for the “impact” variable, the factor loading was notably high, with a value of 0.9356. This suggests a significant influence of the observed variables on the latent variable of “impact”. Similarly, the factor loadings for the “understanding”, “attitude”, and “perception” variables are also substantial, with values of 0.6924, 0.7807, and 0.9184, respectively. These results indicate a strong correlation between the observed variables and the associated latent variables. Additionally, the factor loadings for the remaining variables (x5, x6, x7, x8, x9, x10, and x11) are also significant, although they vary in magnitude. These findings suggest that the reflective model adequately captures the correlations between the observed and latent variables, thus providing a solid basis for analysis and interpretation.
Regarding the internal reliability of the variables, Cronbach’s alpha coefficient showed values indicating high internal consistency in the responses to the questions that comprise each latent variable. The Dillon-Goldstein rho (DG) index confirmed the internal reliability of the latent variables. Examining rho_A demonstrated that the model presented composite reliability for each latent variable, indicating consistency in the responses to the questions that constitute the latent variable (Table 4). In summary, the results showed that the PLS-SEM model has a good fit to the data, the latent variables have good standardized loadings on their indicators, and the reliability measures suggest that the latent variables are reliable.
The comparison between the squared interfactor correlations and the Average Variance Extracted (AVE) was used to assess discriminant validity in the Partial Least Squares model. Discriminant validity is the ability to clearly distinguish between the different latent constructs in a model. The squared correlation between two latent factors measures their shared variance; the goal is for this shared variance to be low, and in particular lower than each factor’s AVE, so that the factors are distinct from each other (Table 5).
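This comparison is the Fornell-Larcker criterion, and it reduces to a simple element-wise check. The sketch below runs it on invented AVE values and an invented correlation matrix; these numbers are illustrative placeholders, not the contents of Table 5.

```python
# Sketch of the Fornell-Larcker discriminant-validity check: each
# construct's AVE must exceed its squared correlation with every other
# construct. All values below are illustrative, not the study's Table 5.
import numpy as np

ave = {"impact": 0.74, "understanding": 0.58, "attitude": 0.69,
       "perception": 0.66}
names = list(ave)
# Hypothetical latent-variable correlation matrix (same construct order).
corr = np.array([
    [1.00, 0.35, 0.63, 0.40],
    [0.35, 1.00, 0.36, 0.52],
    [0.63, 0.36, 1.00, 0.61],
    [0.40, 0.52, 0.61, 1.00],
])

ok = all(
    ave[names[i]] > corr[i, j] ** 2
    for i in range(len(names)) for j in range(len(names)) if i != j
)
print("discriminant validity holds:", ok)
```

When the check fails for a pair of constructs, the usual remedies are dropping cross-loading items or merging the constructs.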
The standardized path coefficients indicate the strength and direction of the relationships between the predictor variables and the latent outcome. High values (close to 1) indicate a strong relationship; low values (close to 0) indicate a weak one. Most of the standardized loadings in this table are above 0.7, suggesting a good correlation between the measured indicators and the latent variables (Table 6 and Equation (1)).
Table 6 presents the standardized path coefficients of the structural model.
Impact: This variable has a standardized path coefficient of 0.3114 and a p-value of 0. This indicates that it positively and significantly impacts attitude. A positive standardized coefficient means that as the impact increases, the attitude also increases. The p-value of 0 indicates that this effect is statistically significant.
Understanding: This variable has a standardized path coefficient of 0.0522 and a p-value of 0.2451. The coefficient is close to zero, suggesting that it has a weak effect on attitude. The p-value, greater than 0.05, indicates that this effect is not statistically significant.
Perception: This variable has a standardized path coefficient of 0.5357 and a p-value of 0. Like impact, it positively and significantly affects attitude. The higher coefficient indicates that perception has a stronger effect on attitude than impact does.
Additionally, the coefficient of determination R2 for the “attitude” variable is 0.6028, indicating that these predictor variables explain approximately 60.28% of the variability in the attitude variable.
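The R² reported here is simply the squared multiple correlation of the structural regression. The sketch below reproduces the computation on synthetic standardized scores whose path coefficients roughly mimic those reported above; the data are invented, so the resulting R² only approximates the study’s 0.6028.

```python
# Sketch of how the R^2 for "attitude" is obtained: regress the attitude
# scores on the three predictor scores and take 1 - residual variance /
# total variance. Scores are synthetic stand-ins for the study's data.
import numpy as np

rng = np.random.default_rng(5)
n = 238
impact = rng.normal(size=n)
understanding = 0.35 * impact + rng.normal(scale=0.9, size=n)
perception = 0.5 * impact + rng.normal(scale=0.8, size=n)
# Path coefficients chosen to roughly mimic the reported values.
attitude = (0.31 * impact + 0.05 * understanding + 0.54 * perception
            + rng.normal(scale=0.6, size=n))

X = np.column_stack([np.ones(n), impact, understanding, perception])
beta, *_ = np.linalg.lstsq(X, attitude, rcond=None)
resid = attitude - X @ beta
r2 = 1 - resid.var() / attitude.var()
print(f"R^2 = {r2:.3f}")   # proportion of attitude variance explained
```

An R² around 0.6 is conventionally read as a moderate-to-substantial level of explained variance for a behavioral model.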
The results show moderate positive correlations between the different latent variables in the model. This suggests that there are associations between these constructs (Table 7).
Table 7 presents the correlations between the latent variables of the model. There is a moderate positive correlation between impact and understanding (r = 0.3502), as well as between impact and attitude (r = 0.6336). This suggests a significant relationship between the perception of impact and the understanding of the situation and an even stronger relationship between the perception of impact and the resulting attitude. Additionally, there is a moderate positive correlation between understanding and attitude (r = 0.3582), as well as a strong correlation between understanding and perception (r = 0.7315). This indicates that better understanding is associated with more positive attitudes and a clearer perception of the phenomenon. Finally, there is a strong correlation between attitude and perception (r = 0.7315), suggesting that a favorable attitude is related to a more positive perception of the phenomenon under study.
Cross-loadings in a Partial Least Squares model represent the strength of the relationship between the indicator variables and the latent variables. The values in Table 8 are the standardized regression coefficients that indicate how much each indicator variable contributes to the corresponding latent variable. Cross-loadings show how the indicator variables relate to multiple latent variables in the model, which is useful for understanding the measurement structure of the model.
Table 8 presents the cross-loading coefficients of the observed variables on the latent variables of the model (Equation (4)). Significant coefficients are observed in various cells, indicating the strength of the relationship between the observed variables and the corresponding latent variables. For the latent variable “impact”, x1 shows the highest cross-loading (0.9356), while x4, x10, and x11 load at 0.3718, 0.5955, and 0.5689, respectively, indicating that “impact” is driven primarily by x1. For the latent variable “understanding”, x4 loads strongly (0.9184), whereas the cross-loadings of x1, x10, and x11 are markedly lower (0.3564, 0.3579, and 0.3001), supporting the distinctness of this construct. For the latent variable “attitude”, the observed variables x1, x4, x10, x11, x7, x8, and x9 exhibit significant cross-loading coefficients, suggesting a meaningful correlation between these observed variables and “attitude”. Finally, for the latent variable “perception”, the same observed variables also show notable cross-loading coefficients, indicating a strong relationship with “perception”.
The VIFs (variance inflation factors) for the predictor variables in relation to the dependent variable are used to check for multicollinearity in a model.
Table 9 presents the results of the multicollinearity check using the variance inflation factors (VIFs) for the variable “attitude” in the structural model: 1.527 for impact, 1.197 for understanding, and 1.549 for perception. These values are below the commonly accepted threshold of 3, indicating no significant multicollinearity issues among these predictor variables in the structural model. This suggests that the predictor variables independently contribute to explaining the variance in the “attitude” variable.
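The VIF computation itself is a per-predictor auxiliary regression. The sketch below applies it to synthetic stand-ins for the three predictor scores; the simulated correlations are illustrative, so the resulting VIFs only resemble, not reproduce, those in Table 9.

```python
# Sketch of the VIF check: each predictor is regressed on the others and
# VIF = 1 / (1 - R^2). Synthetic predictors stand in for the study's
# impact, understanding, and perception scores.
import numpy as np

rng = np.random.default_rng(7)
n = 238
impact = rng.normal(size=n)
understanding = 0.3 * impact + rng.normal(size=n)
perception = 0.4 * impact + 0.3 * understanding + rng.normal(size=n)
X = np.column_stack([impact, understanding, perception])

def vif(X, j):
    """VIF of column j: regress it on the remaining columns."""
    y = X[:, j]
    Z = np.column_stack([np.ones(len(X)), np.delete(X, j, axis=1)])
    beta, *_ = np.linalg.lstsq(Z, y, rcond=None)
    resid = y - Z @ beta
    r2 = 1 - resid.var() / y.var()
    return 1 / (1 - r2)

vifs = [vif(X, j) for j in range(X.shape[1])]
print("VIFs:", [round(v, 3) for v in vifs])
# Values below ~3 indicate no problematic multicollinearity.
assert all(v < 3 for v in vifs)
```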
Finally, regarding the differences between disciplines, the measurement effects are reported (see Table 10 and Figure 2). For analysis purposes, the disciplines were grouped as follows: (1) architecture, art, and design; (2) health sciences; (3) business, social sciences, humanities, and education; and (4) engineering.
- a. Latent variable: impact
  - x1: The loading coefficients were high across all groups, indicating that the latent variable “impact” was well represented by the measure “x1” in all groups.
  - x2: The measure “x2” represented the variable “impact” well in all groups, although it was lower in Group 3.
- b. Latent variable: understanding
  - x3: Overall, “x3” was a good measure of the variable “understanding”, although its representation was weaker in Group 4.
  - x4: The measure “x4” appeared to represent the variable “understanding” in all groups.
  - x5: The measure “x5” had a moderate representation of the variable “understanding” in all groups, with a weaker representation in Group 4.
  - x6: The measure “x6” appeared less consistent in representing the variable “understanding”, especially in Group 4.
- c. Latent variable: attitude
  - x10: The measure “x10” represented the latent variable “attitude” well in all groups.
  - x11: Similar to “x10”, “x11” represented the latent variable “attitude” well in all groups.
- d. Latent variable: perception
  - x7: The measure “x7” represented the variable “perception” well in all groups.
  - x8: “x8” had a solid representation of the variable “perception”, although it was slightly weaker in Group 4.
  - x9: The measure “x9” had a moderate representation of the variable “perception” in all groups.
Regarding the structural effects (see Table 11 and Figures 2 and 3), the relationship between the variables “impact” and “attitude” was moderate in Groups 1 and 4 but lower in Groups 2 and 3. This suggests that students in Groups 1 and 4 are more likely to be interested in AI training if they perceive that AI will significantly impact their work.
The model did not identify significant differences in the gender variable (Figure 4 and Figure 5). This is corroborated in Appendix A, which includes the measurement effects and the structural effects. The p-value was higher than 0.05 in all cases, indicating no statistically significant differences between men and women.
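One common way to obtain such p-values for group differences is a permutation-style multigroup test, in which group labels are shuffled and the difference in path coefficients is recomputed. The sketch below illustrates the idea on synthetic data with the study’s group sizes (101 and 137); the generating process, seed, and use of a simple correlation as the path coefficient are all simplifying assumptions, not the study’s procedure.

```python
# Sketch of a permutation-style multigroup comparison of one path
# coefficient (proxied here by the impact-attitude correlation) between
# two groups. Both synthetic groups share the same generating process.
import numpy as np

rng = np.random.default_rng(3)

def simulate_group(n):
    impact = rng.normal(size=n)
    attitude = 0.5 * impact + rng.normal(scale=0.8, size=n)
    return impact, attitude

def path(impact, attitude):
    return np.corrcoef(impact, attitude)[0, 1]

x1, y1 = simulate_group(101)   # e.g., men
x2, y2 = simulate_group(137)   # e.g., women
observed = abs(path(x1, y1) - path(x2, y2))

# Permutation test: shuffle group membership (keeping (x, y) pairs
# together) and recompute the difference in path coefficients.
x = np.concatenate([x1, x2])
y = np.concatenate([y1, y2])
n_perm = 2000
count = 0
for _ in range(n_perm):
    idx = rng.permutation(len(x))
    d = abs(path(x[idx[:101]], y[idx[:101]])
            - path(x[idx[101:]], y[idx[101:]]))
    count += d >= observed
p_value = count / n_perm

print(f"|difference in path| = {observed:.3f}, permutation p = {p_value:.3f}")
```

A p-value above 0.05, as reported for gender here, means the observed difference between groups is within the range expected under random group assignment.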
5. Discussion
The Partial Least Squares Structural Equation Modeling (PLS-SEM) used to analyze the relationship between latent and observed variables showed a very adequate fit, explaining about 60.8% of the variability in the latent variables. This suggests that the model is robust and can be used to understand the relationships between the variables of interest. The latent variables also have high reliability and internal consistency, measured through Cronbach’s alpha coefficient, the DG index, and the rho_A index. This suggests that the questions used to measure each latent variable were consistent and reliable in different contexts.
Furthermore, significant relationships were found between the latent variables related to the impact of using AI tools in the profession, the understanding of these tools, the attitude toward AI training, and the perception of their relevance.
The multigroup analysis revealed differences in the variables’ representation and the relationships between the latent variables per discipline. This indicates that the perception of the impact of AI tools and the attitude towards AI training may vary depending on the group studied, suggesting the importance of considering these factors when designing training and professional development strategies.
The model suggests that perception and impact positively and significantly affect attitude, while understanding does not have a significant effect. In this regard, the presented study offers valuable information to understand the attitudes and perceptions of Latin American students towards AI training. In conclusion, the following findings are indicated:
- The perception of the impact of AI tools in the profession has a positive and significant effect on students’ attitudes towards adopting and training in these tools. This suggests that professionals who perceive that AI will substantially impact their work are more likely to be willing to embrace and learn to use it.
- Understanding AI tools and their professional implications does not significantly affect attitude. This finding suggests that while understanding AI is essential, it alone is not enough to foster a positive attitude toward its adoption.
- The perception of AI tools positively and significantly affects attitude. Similar to perceived impact, this suggests that professionals who have a positive perception of AI are more likely to be willing to adopt and learn to use it.
- The predictor variables in the model suggest that training in AI tools, the perception of their impact, and the understanding of their implications are vital factors influencing professionals’ attitudes toward adopting these tools.
- No significant differences were found in the correlations between the study variables based on gender. This implies that the results apply to both men and women.
To establish a connection between the theoretical framework and the study’s findings, it is crucial to highlight how perception of and training in artificial intelligence (AI) tools influence students’ attitudes toward adopting these technologies. The theoretical framework emphasizes the importance of integrating AI in higher education, preparing students to face the challenges of Industry 4.0 and fostering innovation [7,10,17]. The study’s findings complement this approach by demonstrating that the perception of AI’s positive impact on the profession significantly affects students’ attitudes towards adopting and training in these tools [17]. This indicates that students who perceive that AI will substantially impact their work are more likely to embrace and learn to use it.
Additionally, the study shows that a positive perception of AI tools significantly influences attitudes towards their adoption, much as the perception of their impact does [22]. This finding underscores the importance of fostering a positive perception among students, as suggested in the theoretical framework, where motivation and a positive perception of new technologies are seen as crucial for effective training [17,22]. Students with a positive attitude toward AI are more willing to integrate these technologies into their educational process, highlighting the need for pedagogical strategies that enhance their perception and motivation.
However, the findings also indicate that simply understanding AI tools and their professional implications is not enough to foster a positive attitude towards their adoption [9,13,21]. Although the theoretical framework emphasizes the need to comprehend and manage these technologies to foster innovation, it is evident that other factors, such as the perception of impact and the general attitude toward AI, are more influential [13,21]. This suggests that educational programs need to go beyond mere technical understanding and focus on how these technologies are perceived and the value that students assign to them.
Finally, the study reveals that there are no significant gender-based differences in the perception of and attitudes toward AI, implying that teaching strategies should be inclusive and universally applicable [25,26]. The theoretical framework and findings highlight the importance of adapting teaching methods and curricular content to meet the expectations of all students, creating a more inclusive and stimulating learning environment [25,26]. Recognizing and addressing students’ perceptions is crucial for overcoming potential learning barriers and ensuring that AI education is accessible, relevant, and motivating for all students.
6. Conclusions
6.1. Theoretical and Practical Implications
The findings have practical implications for educational institutions, students, and employers and can be used to promote greater adoption and responsible use of AI in the region.
Educational institutions, universities, and technical schools must incorporate modules or courses in their curricula that address the impact of AI in various professional areas. This will allow students to understand how these tools can transform their future work and the opportunities they offer. AI training programs should be designed to consider each professional field’s specific characteristics and needs. This involves addressing AI’s challenges and opportunities for each discipline and developing the technical skills and knowledge necessary for its practical application. Finally, AI training should not be limited to the technical aspects of the tools but should also address these technologies’ ethical, social, and economic implications. This will enable students to develop a critical and responsible view of the use of AI in their professional practice.
It is important for students to investigate how AI is transforming their field of interest and the opportunities it offers for their professional development. This will allow them to make informed decisions about their training and preparation for the future labor market. Additionally, students need to approach AI with an open and critical mind. They should be aware of both the benefits and risks of these technologies and develop the ability to evaluate their use responsibly and ethically.
When hiring staff, employers should consider AI training an important factor. Employees with AI skills can bring an innovative perspective and the ability to solve problems creatively using these technologies. Companies should offer AI training opportunities to their current employees so they can develop the necessary skills to adapt to the changing demands of the labor market.
6.2. Limitations and Future Lines of Research
To address the limitations identified in the current study, several strategies are suggested. First, considering that the results are based on a specific sample of Latin American students, it is essential to expand and diversify the sample. This can be achieved by including participants from different geographic regions and various demographic groups, incorporating variations in age, socioeconomic levels, and educational backgrounds. Such expansion would allow for a broader generalization of the results and provide a more comprehensive view of attitudes towards AI. Second, since the study relies on self-reported data, which may be subject to biases, mixed methods of data collection are recommended. Combining surveys with interviews, focus groups, or observations could mitigate social desirability bias and provide a deeper understanding of attitudes towards AI. Additionally, validating and triangulating data through cross-validation techniques and comparison with other sources could help verify the accuracy of the collected information and reduce the biases inherent in self-reported data.
Moreover, the study does not include all potential variables that might influence attitude towards AI adoption. Incorporating additional variables such as the level of prior exposure to technology, practical experience with AI tools, and psychological aspects like resistance to change and openness to new experiences would enrich the model. These measures would help provide a more detailed understanding of the factors influencing AI adoption.
Regarding future research directions, it is suggested to explore the impact of different educational programs on the perception and adoption of AI, such as project-based learning or gamification. Investigating the effects of specific policies designed to promote AI adoption in the workplace and academic settings would also be beneficial, assessing their effectiveness and areas for improvement. Additionally, conducting international comparative studies that compare attitudes towards AI across different educational systems and cultures could identify both global and local factors influencing the adoption of these technologies. These investigations would not only help overcome the limitations of the current study but also contribute to a richer and deeper understanding of how AI tools are perceived and adopted across various global and demographic contexts.