Article

Artificial Intelligence Tool Adoption in Higher Education: A Structural Equation Modeling Approach to Understanding Impact Factors among Economics Students

by Robert Sova 1, Cristiana Tudor 2,*, Cristina Venera Tartavulea 1 and Ramona Iulia Dieaconescu 2
1 Faculty of Accounting and Management Information Systems, Bucharest University of Economic Studies, Romana Square 6, 010374 Bucharest, Romania
2 Faculty of International Business and Economics, Bucharest University of Economic Studies, Romana Square 6, 010374 Bucharest, Romania
* Author to whom correspondence should be addressed.
Electronics 2024, 13(18), 3632; https://doi.org/10.3390/electronics13183632
Submission received: 10 August 2024 / Revised: 27 August 2024 / Accepted: 9 September 2024 / Published: 12 September 2024
(This article belongs to the Section Artificial Intelligence)

Abstract:
The integration of Artificial Intelligence (AI) in higher education has the potential to significantly enhance the educational process and student outcomes. However, there is a limited understanding of the factors influencing AI adoption among university students, particularly in economic programs. This study examines the relationship between students’ perceptions of the efficacy and usefulness of AI tools, their access to these tools, and their concerns regarding AI usage. A comprehensive survey of Romanian university students, focusing on economics students, was undertaken. This study identifies critical latent factors and investigates their interrelationships by employing advanced analytical techniques, such as Exploratory Factor Analysis (EFA), Confirmatory Factor Analysis (CFA), and Structural Equation Modeling (SEM), with robust standard errors. The results suggest that formal training and integration, concerns regarding AI, perceived utility, and positive attitudes towards AI are positively influenced by general awareness and familiarity with AI tools. The frequency of AI tool usage is substantially increased by perceived usefulness, positive attitudes, and formal training and integration. Conversely, positive attitudes and perceived utility are adversely affected by AI-related concerns. Indirect effects suggest that formal training and positive attitudes indirectly increase the usage frequency by increasing general awareness. This research is relevant to computer science, as it helps to build strategies to integrate AI technologies into educational processes. Increasing students’ awareness and access to AI tools and addressing their concerns can facilitate the widespread adoption and effective integration of AI technologies, improving academic experiences and outcomes.

1. Introduction

In recent years, artificial intelligence has been among the technologies with the most accelerated rate of development, with AI tools evolving from rule-based systems, reproducing human intelligence, to complex models capable of deep learning, adapting, problem solving, understanding and generating human language, processing and analyzing images, analyzing historical data to predict future outcomes, making informed decisions, and much more. Since 2010, the overall computing power, including advancements in both CPUs and GPUs, has increased by approximately 350 million times. This exponential growth has been a key enabler in the development of more robust machine learning models, enhancing their capacity to process and analyze vast amounts of data, and thereby significantly contributing to the advancement of AI systems’ learning capabilities. These advancements are supported by trends predicting a continued growth in computing power, with significant implications for the future of AI development [1,2].
The education sector has benefited from AI advancements. The utilization of AI tools in higher education has the potential to transform student learning and assessment processes. Also, the successful integration of AI in higher education can contribute to societal and economic growth, as AI-driven technologies are increasingly implemented across various industries and organizations [3].
The dynamic expansion of AI tools and their usage in education require a deeper understanding of their possible academic applications [4,5]. In recent years, a growing body of research has addressed the subject of AI integration in higher education. Most of this research is conducted in developed countries, as the field is still in its early stages in most developing countries [3]. China and the US are the countries producing the most research on AI in higher education [3,4].
AI use in higher education has also been addressed by Romanian researchers. Ref. [6] conducted qualitative research through semi-structured interviews with academics from Romanian universities to elicit their perspectives on implementing AI in the social sciences and humanities. The findings show that AI implementation in higher education has positive aspects related to improvements in the educational process and sense of inclusion, the development of students’ skills, support for and acceleration of the research process, and a decrease in administrative costs. Among the negative implications, the study reveals concerns regarding data security, ethical problems, potential unemployment, and psychosocial effects. Teachers’ perceptions of the difficulties and opportunities of using AI in universities in Romania, as well as in Serbia, were also analyzed by [7] through quantitative survey-based research, highlighting that education for a sustainable future should address present and future challenges related to emerging technologies.
The potential shift towards more expensive AI tools raises critical concerns about socio-economic inequalities in education [8]. Lower-income students may not be able to afford advanced AI tools, leading to a digital divide that increases existing socio-economic disparities in education institutions [9]. This division could lead to unequal learning opportunities and outcomes, where only students who can afford to pay for AI tools would benefit from their full potential. Another major concern regarding the use of AI tools is the potential negative impact on natural human learning processes and the limiting of critical thinking development in students [10].
Although many studies address the topic of AI tool utilization in higher education, most of them are focused on generative AI, such as ChatGPT [11,12,13,14,15,16,17,18,19,20]. Our study attempts to fill this gap by analyzing the utilization of a wider range of AI tools, such as AI for machine learning projects, automated essay scoring tools, AI-powered data visualization tools, and AI tools integrated with Microsoft Office 365 Applications.
Against this background, this study addresses a critical gap in the current literature by examining a broader spectrum of AI tools beyond the commonly studied generative AI, such as ChatGPT. This research uniquely focuses on the adoption and utilization of AI tools, including those for machine learning projects, automated essay scoring, AI-powered data visualization, and tools integrated with Microsoft Office Applications. Additionally, by concentrating on economics students in Romania, this study provides insights into AI adoption in a developing country context, which is underrepresented in the existing research. The application of advanced methodologies like Exploratory Factor Analysis (EFA), Confirmatory Factor Analysis (CFA), and Structural Equation Modeling (SEM) further distinguishes this work, offering a nuanced understanding of the latent factors influencing AI adoption in higher education.
The primary objective of this research is to explore the key factors influencing the use and adoption of AI tools among economics university students [11,21,22,23,24]. This study specifically addresses the following main research questions:
  • What are the underlying factors that influence students’ interaction with and perception of AI tools?
  • How do these underlying factors interrelate and affect each other?
  • What are the direct and indirect effects of these factors on the frequency of AI tool usage?
This focused inquiry facilitates a clearer and more impactful understanding of the factors driving AI tool utilization within the academic context of economics education. This study explores the interrelationships between these factors, employing advanced analytical techniques such as Exploratory Factor Analysis (EFA), Confirmatory Factor Analysis (CFA), and Structural Equation Modeling (SEM). These methods aim to uncover the latent constructs that govern AI adoption, thereby contributing meaningfully to the discourse on AI integration in higher education, particularly within the field of economics.
The remainder of this paper presents a literature review of relevant research on AI usage, the materials and methods, the results and discussion, and the conclusions and future research directions.

2. Literature Review

Artificial intelligence (AI) has evolved considerably since the concept was first introduced in 1950 by the computer scientist Alan Turing. Ref. [25] define AI “as computing systems that are able to engage in human-like processes such as learning, adapting, synthesizing, self-correction and use of data for complex processing tasks.” The application of AI in education has been a topic of extensive research for more than 30 years [26]. Progress in the field of AI opens new opportunities, as well as challenges, for teaching and learning processes in higher education [25]. AI provides students the opportunity to engage in a more interactive, customized, and motivating learning experience [14,21]. The history of AI in education is marked by significant milestones, beginning with early chatbot programs like ELIZA and PARRY, which laid the groundwork for natural language processing and conversational AI [27,28]. Among recent advancements, we note the development of sophisticated AI chatbots such as OpenAI’s ChatGPT and Google Bard, which generate creative content and provide informative responses, though there are some concerns about their accuracy.
The integration of artificial intelligence (AI) tools in higher education has been increasingly recognized as a transformative approach to address various challenges and enhance the learning experience. AI-powered chatbots, for instance, can address issues such as overcrowded classrooms and lack of personalized attention, as they can provide instant feedback and adapt to individual learning styles, while offering benefits that include personalized tutoring, homework assistance, and mental health support [17].
In the next sections of this literature review, we will summarize the identified key factors that could have an impact on the adoption and utilization of AI tools among university students.

2.1. Familiarity with AI Tools and Their Utilization in Higher Education

The rapid evolution in the field of AI, in recent years, has led to the development of a wide variety of AI tools that can be used by students in higher education to perform their academic tasks. Being aware of AI tools can empower students to use them effectively and enhance their educational process.
A study conducted by [29] analyzed how familiar students are with AI tools used in higher education and their potential applications. The results showed that Google Translate, ChatGPT, and DeepL were known by more than 80% of students and professors, while DALL-E, OpenAI GPT-3, OpenAI Codex, Stable Diffusion, and GitHub Copilot were mostly known by professors and less by students. In addition to the pre-defined list of AI tools used in the survey, participants were asked to mention other tools that they are familiar with in regard to the following three main categories: tools for generating images, tools for chat and text generation, and tools for supporting scientific writing.
Refs. [16,29] have conducted SWOT analyses of using AI tools in higher educational contexts, with the study of [16] being focused only on ChatGPT, while [29] analyzed a broad range of AI-based tools.
Research conducted by [18] examined the potential AI applications (such as chatbots, personalized learning and testing, and predictive analytics) regarding the experience of international students. The results showed that AI tools can enhance their learning experience by developing customized content (such as personalized interactive lessons, exercises, and quizzes), offering language translation, and providing feedback on students’ work. AI-powered essay grading tools can score essays without human intervention by using machine learning and natural language processing based on patterns from previous human-scored essays [30]. Students could use this type of AI tool to pre-test their essays.
A study conducted by [29] showed that students in higher education most commonly use AI-based tools (ABTs) for conducting tasks in project work, such as translation, writing text, writing code, summarizing, topic analysis, and image generation/editing. Generative AI tools can assist students in writing, learning, solving assessments [13] and improving their quality [31], carrying out research activities, and performing analysis [15]. Ref. [32] revealed that students use AI for the following tasks: analyzing data, identifying trends and patterns, and making decisions more quickly and efficiently.
The perception of students of economics and business studies on the use of AI for data analysis was analyzed in a study by [33]. The results indicated a high average level of agreement among undergraduate students that cloud-based AI apps are quicker at processing big data and producing results, and that AI solutions can predict outcomes based on data analysis, such as sales volume, demand, and stock volumes.
Collaboration between students may influence AI tool adoption, and recommendations from colleagues can create positive interactions and increase the willingness to engage with AI. Social influence can represent an important factor in the familiarity and use of new technologies in education processes [34].

2.2. Access and Subscription to AI Tools

Accessibility refers to the availability of technology, Internet connectivity, and institutional support, all of which may significantly influence the utilization of AI tools among students. A study by [35] shows that limited access to technological resources can constrain students’ ability to engage with AI tools effectively. A study by [36] concludes that limited access to digital infrastructure is a barrier to AI tool utilization. The availability of high-speed Internet is a crucial factor in enabling access to AI tools. According to a report, students with access to a reliable Internet connection use online educational resources more frequently, including AI tools [37].
Access to AI tools can also be influenced by institutional support. Universities that invest in providing access to AI technologies, through training, subscriptions, and support services, have higher levels of adoption among students [38], as it helps students to overcome barriers to technology use. Ref. [6] raise the issue that it can be difficult for many higher education institutions to adapt to dynamic changes in technology and to integrate them into the educational process. The study states that these issues create a greater gap between the universities that are benefiting from sufficient funding and those with scarce financial resources.

2.3. Frequency and Impact of AI Tool Usage

With the rapid development of AI tools, we have seen a growing interest in students using these instruments to enhance their academic performance. Using AI tools can contribute to increasing the efficiency [16,18,19,29] and the quality [18] of the academic activities undertaken by students. Ref. [39] found that the frequent use of AI tools positively impacts students’ attitudes and satisfaction in the educational process.
Generative AI can increase efficiency by increasing the accessibility of information [16], generating ideas, summarizing and synthesizing information [15], and providing instant clarification on complex concepts [19]. Students participating in the study conducted by [29] identified “an increased efficiency” as the most expected change to future learning as a result of using AI-based tools (ABTs).
AI can provide students with a more personalized education, addressing different learning needs. The most useful AI tools, according to students, are as follows: chatbots, voice assistants, personalized tutoring, feedback and assessment, and gamification instruments [6]. Personalized support and feedback can be provided to students at different levels of complexity [16], leading to an increase in the quality of education. As the results of previous research show, the use of AI tools in higher education can generate a positive impact on students’ results and satisfaction.
The concept of ease of use is a critical factor in understanding the adoption and frequent utilization of AI tools in higher education. The importance of this variable in educational settings is underscored by numerous studies, which suggest that, when students perceive AI tools as easy to use, they are more likely to integrate these tools into their academic routine [40].
Research has shown that technologies that are perceived as user-friendly can significantly enhance students’ learning experiences and outcomes. For instance, ref. [41] found that perceived ease of use was a strong predictor of students’ intention to use technology in their studies. This relationship is particularly pertinent in the context of AI tools, which often involve complex interfaces and functionalities. If students find these tools intuitive and easy to navigate, their likelihood of frequent use increases, thereby maximizing the tools’ educational benefits [42].
Ref. [43] conducted a study regarding the adoption of e-learning platforms and concluded that the perceived ease of use influences the acceptance and use of technology. Similarly, in a study by [44], the perceived ease of use of AI tools in education is linked to higher engagement levels and better academic performance. These findings are corroborated by recent research, which suggests that, if AI tools are easy to use, students can focus more on learning rather than on figuring out how to use the technology [45].

2.4. Training and Support

Training and support could be an important factor influencing the use of AI tools by students in higher education. To ensure the effective and ethical use of AI, educators and students should receive adequate training and support [29]. The survey conducted by [32] on Spanish students in higher education, in the fields of economics and business management and education, revealed that the students’ current knowledge of AI tools is limited because most of them have not received any formal training in AI. Ref. [32] emphasize that training regarding the use of AI, especially by presenting realistic use cases and the real limitations of the tools, could support students in using them confidently and responsibly.
AI integration in education requires a strategic vision at the level of institutions and the training of all parties involved in the teaching–learning process [6]. Offering training opportunities to professors can enhance their ability to integrate AI tools into their teaching practices [46]. Without training programs, the process of the adoption and utilization of AI tools may be unsuccessful, leading to skills gaps and the manifestation of resistance to change [29].

2.5. General Attitudes

The attitudes of students towards using AI tools for academic tasks may play a crucial role in determining the frequency and extent of their utilization. Attitude (positive or negative feelings about a certain task) can be a predictive factor of technology adoption and usage [47].
Research indicates that a positive attitude towards AI tools is associated with higher levels of adoption and frequent use. For instance, ref. [48] found that students who held favorable views about the usefulness and benefits of educational technologies were more likely to use them regularly. According to the Technology Acceptance Model (TAM), the perceived ease of use and usefulness shape the users’ attitudes towards technology, ultimately influencing their behavioral intention to use it [49].
Moreover, a study by [50] found that a positive attitude favored the use of AI tools by students, as they are willing to explore the capabilities and potential applications, leading to an increased frequency of use. Conversely, negative attitudes, often stemming from concerns about accuracy, privacy, and ethical issues, can act as barriers to adoption and frequent use [51].
Other recent studies [15,19] have revealed a positive attitude of students towards the use of generative AI, such as ChatGPT, in their academic activities. Ref. [15] conducted a survey of 399 undergraduate and postgraduate students in Hong Kong with the purpose of exploring their perception of generative AI, focusing on familiarity, willingness to engage, potential benefits and challenges, and effective integration. The results indicated a generally positive attitude toward the use of generative AI in teaching and learning.
Ref. [52] conducted a study based on in-depth interviews to investigate the perceptions of students from South Korea regarding the use of AI tools. The results indicated that students have positive attitudes towards the interaction with AI and believe that it helps them to improve their task performance and also provides emotional support.
The ease of use is closely associated with students’ self-efficacy in using AI tools, as they are more likely to be confident about their ability to use them, which in turn creates positive attitudes towards these technologies [53].
A positive attitude can be a crucial factor regarding the use of AI tools in educational processes, as it can lead to a virtuous cycle, where the ease of use enhances confidence, leading to more frequent use and further ease of use through familiarity [54].

2.6. Concerns

A literature review carried out by [20] revealed that the main issues associated with using ChatGPT in education are related to accuracy, reliability, and plagiarism prevention. In addition to these issues, refs. [14,15] mention problems related to privacy and security risks. The results of the study conducted by Almaraz-López et al. (2023) [32] have also revealed that students in higher education are concerned about the risks of the possible effect that using AI could have on data privacy and security. All of these concerns can be extended to all types of AI tools, not only to the generative ones.
Students and lecturers in higher education participating in a study conducted by [29] have reported that they have experienced AI-based tools (ABTs) producing wrong or inaccurate results. In the case of ChatGPT, ref. [16] identified limitations that could lead to inaccurate results, such as a lack of deep understanding of the text it processes and difficulty in evaluating the quality of responses. Ref. [17] also found that students acknowledge the possibility that ChatGPT may generate inaccurate results and emphasize the need for having a solid background knowledge to utilize it effectively.
The limitations and concerns related to language proficiency, privacy, and the ethical implications of using AI tools in higher education were also analyzed by [18]. The study mentions that AI voice recognition and dictation tools may potentially lead to misunderstandings or miscommunications due to the difficulties in understanding accents and dialects. AI tools that generate customized learning content use personal data to tailor the learning experience and may therefore raise privacy and security concerns for students.
Plagiarism and cheating are considered among the most important problems in contemporary academia. Refs. [6,55] raise the issue of ethical concerns related to implementing AI in higher education, as students might be tempted to cheat by using AI tools to elaborate papers required by their teachers. More than this, students may even use AI without realizing that it may lead to plagiarism, due to the fact that it can produce responses that are similar to existing sources [16]. To address the concerns related to plagiarism and academic integrity, innovative educational environments are required [31].
To conclude, the main concerns associated with the use of AI tools in higher education are related to accuracy [16,17], data privacy and security risks [14,15,32], and ethical issues [6,55]. The students’ perception of these concerns may influence the frequency of using the AI tools.

2.7. Integration of AI Tools

The integration of AI tools into academic activities is significantly influenced by teachers’ readiness to adopt and effectively utilize these technologies. Teachers’ AI readiness refers to their knowledge, skills, and attitudes towards AI, which collectively determine their ability to integrate AI into the teaching process. Studies by [56,57] found that teachers who are well-prepared and confident in using AI tools are more likely to incorporate these technologies into their coursework. Teachers who have competencies in using AI tools are more likely to integrate these tools into their courses [58].
Professional development and training activities for teachers are important factors for enhancing their AI readiness. According to [59], targeted training programs that focus on AI literacy and practical application can significantly boost teachers’ confidence and proficiency in using AI tools, leading to a higher degree of coursework integration and improved student outcomes. The integration of AI tools in the learning and evaluation process in higher education could impact the frequency of the utilization of AI tools by students. Students’ interest in courses is increased when AI tools are integrated into learning environments [60].
The key factors that could impact the adoption and utilization of AI tools by students in higher education, summarized above, provide a conceptual foundation for exploring the relationships between students’ perceptions and their actual use of AI tools in academic contexts. Furthermore, our study employs Exploratory Factor Analysis (EFA) and Confirmatory Factor Analysis (CFA) to empirically identify and validate these factors. Subsequently, Structural Equation Modeling (SEM) is used to examine the interrelationships between these variables, including their collective impact on the frequency of AI tool usage.
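To convey the intuition behind the factor-extraction step that underpins EFA, the following is a deliberately simplified Python sketch (the study’s actual analyses were performed in R; the simulated survey items, their loadings, and the two latent factors below are entirely hypothetical). It extracts factors from an item correlation matrix via eigendecomposition and retains those with eigenvalues above 1 (the Kaiser criterion):

```python
import numpy as np

rng = np.random.default_rng(42)
n = 500

# Two hypothetical latent factors (e.g., "awareness" and "concerns")
f1 = rng.normal(size=n)
f2 = rng.normal(size=n)

# Six observed survey items: three load on each factor, plus noise
items = np.column_stack([
    0.8 * f1 + 0.3 * rng.normal(size=n),
    0.7 * f1 + 0.4 * rng.normal(size=n),
    0.9 * f1 + 0.3 * rng.normal(size=n),
    0.8 * f2 + 0.3 * rng.normal(size=n),
    0.7 * f2 + 0.4 * rng.normal(size=n),
    0.9 * f2 + 0.3 * rng.normal(size=n),
])

R = np.corrcoef(items, rowvar=False)      # item correlation matrix
eigvals, eigvecs = np.linalg.eigh(R)      # eigh returns ascending order
eigvals, eigvecs = eigvals[::-1], eigvecs[:, ::-1]

n_factors = int(np.sum(eigvals > 1.0))    # Kaiser criterion: eigenvalue > 1
loadings = eigvecs[:, :n_factors] * np.sqrt(eigvals[:n_factors])

print(n_factors)                          # the two simulated factors are recovered
```

In practice, dedicated packages (e.g., psych and lavaan in R) additionally handle rotation, CFA fit indices, and SEM estimation with robust standard errors; this sketch only illustrates why distinct latent constructs emerge from correlated survey items.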

3. Materials and Methods

3.1. Data Collection and Preprocessing

The dataset utilized in this study was collected via a comprehensive survey distributed to students from various universities in Romania during the second semester of 2024. The survey aimed to capture a broad spectrum of variables related to demographic information, familiarity with AI tools, and the usage of these tools in academic settings. The survey was conducted using an online platform but was completed in situ, meaning that the students filled out the survey during their class time under supervision. This approach ensured a high response rate and data accuracy by combining the benefits of digital data collection with the controlled environment of in-class administration. The in situ completion of the survey helped to minimize potential biases related to the respondents’ environment and ensured consistent conditions for all participants. The sample comprised 748 participants, who were informed about this study’s purpose and provided consent before proceeding to complete the questionnaire. Participation in the survey was entirely voluntary, and the students were explicitly informed that their responses would remain anonymous. Furthermore, it was clearly communicated that participation would have no bearing on their grades or academic standing, thereby ensuring that their input was provided freely and without any external pressure.
Prior to the main survey, a pilot study was conducted with a smaller subset of students to refine the survey instrument and ensure the clarity of the questions. The pilot study involved 50 participants who provided detailed feedback on the survey items. Based on their input, we made several adjustments to the wording and structure of the questions to improve comprehension and reduce ambiguity. Additionally, we conducted preliminary reliability analyses, such as calculating Cronbach’s alpha for the Likert scales, which resulted in values above the acceptable threshold. These steps ensured that the final survey instrument was both reliable and valid, facilitating accurate data collection in the main study. Thus, this preliminary phase allowed for the calibration of the survey tool, which was essential in capturing comprehensive data on the demographic characteristics, familiarity with AI tools, and their utilization in academic contexts.
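For readers unfamiliar with the reliability check mentioned above, Cronbach’s alpha can be computed directly from a respondents-by-items matrix. The following Python sketch uses invented pilot responses purely for illustration (the study’s analyses were conducted in R, and neither the data nor the scale below come from the actual survey):

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents, n_items) matrix of Likert scores."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1).sum()   # sum of per-item variances
    total_variance = items.sum(axis=1).var(ddof=1)     # variance of respondents' totals
    return (k / (k - 1)) * (1 - item_variances / total_variance)

# Hypothetical 5-point Likert responses from a small pilot (rows = respondents)
pilot = np.array([
    [5, 4, 5, 4],
    [2, 2, 3, 2],
    [4, 4, 4, 5],
    [1, 2, 1, 2],
    [3, 3, 4, 3],
])

print(round(cronbach_alpha(pilot), 2))  # → 0.95 for this toy sample
```

Values above roughly 0.7 are conventionally treated as acceptable internal consistency, which is the threshold the scales in this study exceeded.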
Upon collection, the dataset underwent rigorous preprocessing to ensure that it was primed for analysis. The initial steps included renaming the columns to more concise and descriptive labels, enhancing readability and clarity. This was followed by the conversion of the categorical variables into factors, a crucial step for facilitating statistical modeling and interpretation. The transformed dataset included recoded values for ordinal and nominal variables, ensuring that the data were in a suitable format for advanced statistical analyses, such as Exploratory Factor Analysis (EFA) and Structural Equation Modeling (SEM).
Missing data were handled through listwise deletion, removing any cases with incomplete responses. This method was chosen to preserve the integrity of the dataset by ensuring that analyses were based on complete and reliable data points. Although listwise deletion can reduce the sample size, it was deemed appropriate in this context because the proportion of missing data was relatively small (less than 5% of the total responses). Additionally, listwise deletion helps to avoid the potential biases that can arise from imputation methods when the data are not missing completely at random (MCAR).
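The preprocessing steps described above (renaming columns, converting categoricals, and listwise deletion) can be sketched in Python with pandas; the column names, category levels, and toy records below are hypothetical stand-ins for the actual R workflow:

```python
import pandas as pd

# Hypothetical raw survey export; column names and values are illustrative only
raw = pd.DataFrame({
    "Q1_HowFamiliarAreYouWithAITools": ["Very", "Not at all", None, "Extremely"],
    "Q2_Gender": ["F", "M", "F", "M"],
    "Q3_UsageFrequency": ["Daily", "Weekly", "Daily", None],
})

# 1. Rename columns to concise, descriptive labels
df = raw.rename(columns={
    "Q1_HowFamiliarAreYouWithAITools": "familiarity",
    "Q2_Gender": "gender",
    "Q3_UsageFrequency": "usage_freq",
})

# 2. Convert categoricals: ordered for Likert-type items, unordered for nominal ones
likert_levels = ["Not at all", "Slightly", "Moderately", "Very", "Extremely"]
df["familiarity"] = pd.Categorical(df["familiarity"],
                                   categories=likert_levels, ordered=True)
df["gender"] = df["gender"].astype("category")

# 3. Listwise deletion: drop any respondent with an incomplete record
complete = df.dropna()

print(len(complete))  # 2 of the 4 toy records are complete
```

This mirrors the logic of the R pipeline: ordinal items become ordered factors so that EFA/CFA/SEM routines can treat them appropriately, and only fully complete cases enter the analysis.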
All data analyses and estimations were performed using R, a statistical software environment widely used for its powerful data manipulation and analysis capabilities. Specifically, we used RStudio Version 2024.04.2+764 for the estimations.

3.2. Survey Instrument

The survey instrument was designed to capture a multi-dimensional view of the students’ demographics, their interaction with AI tools, and the perceived impact of these tools on their academic activities. The instrument comprised the following sections:
  • Demographics: This section collected fundamental data on the respondents’ age, gender, education level, university affiliation, specialization, and domicile type. These variables were essential for contextualizing the respondents’ backgrounds and ensuring the representativeness of the sample.
  • Familiarity with AI Tools: The respondents’ familiarity with AI tools was assessed using a 5-point Likert scale, ranging from “Not at all” to “Extremely”. This scale provided a nuanced understanding of the respondents’ exposure to AI technologies. The reliability of this scale was validated using Cronbach’s alpha, yielding a value of 0.85, indicating good internal consistency.
  • Known and Used AI Tools: Multiple-response items were employed to capture the variety of AI tools known and utilized by the respondents. This section provided an insight into the specific AI applications that the students were aware of and actively using.
  • Access to AI Tools: The questions in this section measured the respondents’ access to AI tools, again using a Likert scale to capture the extent of accessibility, ranging from “Not at all” to “Extremely”. The reliability of this scale was also validated with a Cronbach’s alpha value of 0.87, indicating good internal consistency.
  • Subscription to AI Tools: This part of the survey inquired whether the respondents had access to AI tools through university-provided subscriptions or through personal subscriptions. This was critical for understanding the sources of access to AI resources.
  • Frequency and Impact of AI Tool Usage: Various Likert scales were used to assess the frequency of AI tool usage, as well as the perceived impact on academic efficiency, quality of work, and ease of use. These scales helped in quantifying the tangible benefits perceived by the students.
  • Training and Support: This section included questions about any formal training received on the use of AI tools and the perceived usefulness of such training. This was essential for understanding the level of institutional support provided to the students.
  • General Attitudes and Concerns: The questions were designed to gauge the respondents’ overall attitudes towards AI tools, including concerns about inaccuracy, cheating, privacy, and the broader integration of AI into academic activities.
  • Integration of AI Tools: This section assessed how AI tools were integrated into the curriculum and academic activities. The questions focused on the manner of AI tool usage in coursework, projects, and other academic tasks, providing a clearer picture of the practical application of AI tools in the educational environment.
Appendix A provides an overview of the survey instrument, including the sections, items, and assessment type.
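For reference, the Cronbach's alpha statistic used to validate the familiarity (0.85) and access (0.87) scales has a simple closed form; the following is a minimal NumPy sketch with made-up 5-point Likert responses (the study's computations were done in R):

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, k_items) score matrix."""
    k = items.shape[1]
    item_var = items.var(axis=0, ddof=1).sum()   # sum of item variances
    total_var = items.sum(axis=1).var(ddof=1)    # variance of the scale total
    return k / (k - 1) * (1 - item_var / total_var)

# Illustrative responses (not the study's data); items co-vary strongly,
# so alpha should come out high, signaling good internal consistency.
scores = np.array([
    [4, 5, 4],
    [2, 2, 3],
    [5, 4, 5],
    [3, 3, 3],
    [1, 2, 2],
])
alpha = cronbach_alpha(scores)
```

Values of alpha above roughly 0.7 are conventionally read as acceptable internal consistency, which is the benchmark against which the reported 0.85 and 0.87 qualify as good.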
The survey instrument underwent a thorough validation process during the pilot study phase, ensuring that each question was both relevant and clear. This validation process was pivotal in refining the instrument, resulting in a robust tool capable of capturing detailed and accurate data.
The survey was translated into Romanian to ensure better comprehension by the respondents. The translation was validated through a back-translation process, where the survey was first translated from English to Romanian and then independently translated back to English by a different translator. This process ensured that the translated survey maintained the accuracy and meaning of the original questions, minimizing potential misunderstandings and enhancing the validity of the responses.
The data collected were then transformed into appropriate formats for analysis. Categorical variables were converted into factors, and ordinal scales were recoded to ensure compatibility with the statistical models employed. For example, the age categories were defined as follows: 1 (18–21 years), 2 (22–25 years), 3 (26–30 years), 4 (31–35 years), and 5 (over 35 years). Gender was coded as 1 (Female), 2 (Male), and 3 (Other). Education levels were coded from 1 (High School) to 6 (Other), and domicile types were defined as 1 (Urban), 2 (Rural), 3 (Suburban), and 4 (Prefer not to say).
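The coding scheme above can be expressed as explicit mappings; the following short sketch is illustrative, and the raw response labels used as dictionary keys are assumptions rather than the survey's exact wording:

```python
# Coding scheme as described in the text; keys are assumed response labels.
AGE_CODES = {"18-21": 1, "22-25": 2, "26-30": 3, "31-35": 4, "Over 35": 5}
GENDER_CODES = {"Female": 1, "Male": 2, "Other": 3}
DOMICILE_CODES = {"Urban": 1, "Rural": 2, "Suburban": 3, "Prefer not to say": 4}

def recode(value: str, codes: dict) -> int:
    """Map a raw categorical response to its ordinal/nominal code."""
    return codes[value]

# Example: one hypothetical respondent record.
respondent = {"age": "22-25", "gender": "Female", "domicile": "Urban"}
coded = {
    "age": recode(respondent["age"], AGE_CODES),
    "gender": recode(respondent["gender"], GENDER_CODES),
    "domicile": recode(respondent["domicile"], DOMICILE_CODES),
}
# coded == {"age": 2, "gender": 1, "domicile": 1}
```

Making the mappings explicit in this way keeps the ordinal interpretation of each code unambiguous when the recoded variables later enter the factor models.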
The demographic profile of the respondents summarized in Table 1 provides a detailed overview of the age categories, gender distribution, education levels, and domicile types. This comprehensive demographic analysis ensures that the findings of this study are contextualized within a diverse sample. By capturing a wide range of demographic variables, we can better understand how the different segments of the student population interact with AI tools. While we did not have access to national statistics for direct comparison, the diversity in our sample allows us to explore variations and commonalities across different demographic groups, providing valuable insights into AI tool adoption among economics students.
The age distribution indicates a predominantly young sample, with most respondents aged between 18 and 25 years, while the gender distribution shows a higher representation of females. Moreover, the education levels span a broad spectrum, with many respondents pursuing undergraduate or postgraduate studies. Additionally, most respondents reside in suburban or rural areas. Appendix B provides visualizations that succinctly represent the demographic characteristics of the study participants, enhancing the understanding of the sample’s composition.
The respondents represented a diverse array of Romanian universities that offer economics programs. Notable concentrations were observed at the following universities: (i) the Bucharest University of Economic Studies (42.9%); (ii) the University of Craiova (9.1%); and (iii) the “1 December 1918” University of Alba Iulia (7.8%). Regarding specializations, the participants were concentrated in Accounting and Management Information Systems (30%), International Business and Economics (26.9%), and Management (10.9%).
It should be mentioned that this study is based on a robust and representative sample of 748 valid responses (i.e., after addressing any missing or incomplete responses) drawn from economics students across all four macroregions of Romania, as follows: Macroregion 1 (North-West and Centre), Macroregion 2 (North-East and South-East), Macroregion 3 (South-Muntenia and Bucharest-Ilfov), and Macroregion 4 (South-West and West). These macroregions are the principal territorial divisions used in national statistics and policy making, ensuring that the sample captures the geographic diversity and academic distribution of the country [61].
The sample encompasses nine prestigious Romanian universities, including some of the most renowned institutions for economics education. This selection represents 46% of the universities that include economics specialization featured in the Romanian Ministry of Education’s metaranking of top institutions [62]. Additionally, 44% of the universities within the Universitaria Consortium—the most significant and influential academic alliance in Romania—are included. It is important to note that, while the Universitaria Consortium consists of the country’s leading universities, not all of its members offer specialized economics programs. However, the sample is specifically focused on those that do, thereby reinforcing the representativeness of the findings for the field of economics education. The Universitaria Consortium is a key collaborative platform among Romania’s top universities, dedicated to enhancing the quality of higher education and research at the national level [63]. Membership in this consortium is a hallmark of academic excellence and relevance. By including universities that are part of this consortium, this study aligns with the highest academic standards and reflects the most influential centers of economics higher education in Romania.
Furthermore, the sample proportionately reflects the student populations within Romania’s major economic hubs. For example, a significant portion of respondents come from the Bucharest University of Economic Studies, which accounts for 42.9% of the sample. This institution is the largest and most prominent economics-focused university in Romania [64], and its inclusion, along with that of other key universities, provides a solid foundation for the representativeness of the data. Consequently, we argue that the strategic selection of respondents across diverse and representative institutions, coupled with the alignment of the sample with the geographic and academic landscape of Romanian economics education, ensures that the findings are statistically sound and broadly generalizable across the country’s higher education system in this field.

3.3. Exploratory and Confirmatory Factor Analysis

Exploratory Factor Analysis (EFA) was conducted to identify the underlying structure of the survey items. The number of factors was determined using the Kaiser criterion (eigenvalues greater than 1) and the scree plot method. Varimax rotation was applied to achieve a simpler and more interpretable factor structure. The EFA model can be represented as follows:
X = LF + E
where
  • X is the matrix of observed variables;
  • L is the factor loading matrix;
  • F is the matrix of latent factors;
  • E is the matrix of unique variances (errors).
The adequacy of the factor model was assessed using measures such as the Kaiser–Meyer–Olkin (KMO) test for sampling adequacy and Bartlett’s test of sphericity.
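Both adequacy checks have simple closed forms: Bartlett's statistic is a scaled log-determinant of the correlation matrix, and the KMO index compares squared zero-order correlations with squared partial correlations derived from the inverse correlation matrix. The following NumPy sketch is illustrative only (the correlation matrix is made up, not the study's data), using the study's sample size of n = 748:

```python
import numpy as np

def bartlett_sphericity(R: np.ndarray, n: int):
    """Bartlett's test of sphericity: H0 is that R is an identity matrix."""
    p = R.shape[0]
    chi2 = -(n - 1 - (2 * p + 5) / 6) * np.log(np.linalg.det(R))
    df = p * (p - 1) / 2
    return chi2, df

def kmo(R: np.ndarray) -> float:
    """Overall Kaiser-Meyer-Olkin measure of sampling adequacy."""
    inv = np.linalg.inv(R)
    # Partial correlations from the inverse correlation matrix.
    d = np.sqrt(np.outer(np.diag(inv), np.diag(inv)))
    partial = -inv / d
    off = ~np.eye(R.shape[0], dtype=bool)
    r2 = (R[off] ** 2).sum()
    p2 = (partial[off] ** 2).sum()
    return r2 / (r2 + p2)

# Illustrative 3-variable correlation matrix (not the study's data).
R = np.array([
    [1.0, 0.6, 0.5],
    [0.6, 1.0, 0.4],
    [0.5, 0.4, 1.0],
])
chi2, df = bartlett_sphericity(R, n=748)   # n = 748 valid responses
adequacy = kmo(R)
```

A significant Bartlett chi-square and a KMO value above roughly 0.6 are the conventional signals that the correlation matrix is suitable for factoring.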
Confirmatory Factor Analysis (CFA) was performed to validate the factor structure identified by the EFA.
The CFA can be specified as follows:
X = Λξ + δ
where
  • Λ is the factor loading matrix;
  • ξ is the vector of latent variables;
  • δ is the vector of measurement errors.

3.4. Structural Equation Modeling

A comprehensive Structural Equation Modeling (SEM) framework was employed to examine the relationships between the latent variables [65]. SEM is an advanced statistical technique that combines features of factor analysis and multiple regression, allowing for the examination of complex relationships between observed and latent variables. The model was also estimated using Maximum Likelihood with robust standard errors (MLR) to account for deviations from normality and provide more accurate parameter estimates [66]. This robust estimation technique enhances the validity of our findings, particularly in handling non-normal data typical of survey research.
The use of SEM is particularly appropriate for this study due to the following reasons: (i) SEM is ideal for modeling complex relationships between multiple dependent and independent variables simultaneously. In this study, we hypothesize multiple interrelated paths among constructs, whereas traditional regression techniques would not adequately capture these interdependencies [67,68]. (ii) SEM allows for the inclusion of latent variables, which are not directly observed but rather inferred from multiple indicators. For example, constructs like “perceived usefulness” and “ease of use” are measured through several survey items. SEM provides a robust framework to model these latent constructs and their relationships accurately [69]. (iii) SEM accounts for measurement error in the observed variables, providing more accurate estimates of the relationships between latent constructs. This is crucial in survey-based research, where measurement error can be significant [70]. (iv) The use of SEM also allows for the assessment of both direct and indirect effects, providing a comprehensive view of the relationships between variables, including mediation effects that would be challenging to identify using traditional regression techniques. (v) SEM provides comprehensive fit indices (e.g., CFI, TLI, RMSEA, and SRMR) to evaluate how well the proposed model fits the observed data. This allows for a more rigorous validation of the hypothesized model compared to traditional statistical methods [66,71]. (vi) SEM offers modification indices that suggest potential improvements to the model. This feature facilitates the refinement of the model to achieve a better fit with the data, making SEM a flexible and iterative approach to model building [72].
The structural model can be represented as follows:
η = Bη + Γξ + ζ
where
  • η is the vector of endogenous latent variables;
  • B is the matrix of coefficients for the relationships among endogenous variables;
  • Γ is the matrix of coefficients for the relationships between exogenous and endogenous variables;
  • ξ is the vector of exogenous latent variables;
  • ζ is the vector of disturbances (errors).
Of note, in conducting the statistical analyses for this study, including Exploratory Factor Analysis (EFA), Confirmatory Factor Analysis (CFA), and Structural Equation Modeling (SEM), several key assumptions were assessed to ensure the validity and reliability of the results. The normality of the data was evaluated using the Shapiro–Wilk test, and visual inspection with QQ plots revealed deviations from normality, characteristic of ordinal data. The linearity between variables and factors was assessed through scatter plot analysis, which indicated some degree of linearity, despite the ordinal nature of the data. Homoscedasticity was checked using the Breusch–Pagan test, which is particularly important for reliable parameter estimates in CFA and SEM. Additionally, the absence of multicollinearity was verified by calculating the Variance Inflation Factor (VIF) for all predictor variables, ensuring distinct and interpretable factors. Although preliminary checks indicated some violations of the assumptions, robust estimation methods were employed to address these issues, thereby supporting the robustness of the statistical findings.
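The multicollinearity check can be illustrated concisely: each predictor's VIF equals 1/(1 − R²) from regressing that predictor on the others, which is also the corresponding diagonal element of the inverse correlation matrix. A sketch with simulated predictors (purely illustrative, not the study's data):

```python
import numpy as np

def vif(X: np.ndarray) -> np.ndarray:
    """Variance Inflation Factors: diagonal of the inverse correlation matrix."""
    R = np.corrcoef(X, rowvar=False)
    return np.diag(np.linalg.inv(R))

# Illustrative predictors: x2 is strongly correlated with x1, x3 is independent.
rng = np.random.default_rng(0)
x1 = rng.normal(size=500)
x2 = 0.8 * x1 + 0.6 * rng.normal(size=500)
x3 = rng.normal(size=500)
vifs = vif(np.column_stack([x1, x2, x3]))
# Common rules of thumb flag multicollinearity when a VIF exceeds 5 or 10;
# here x1 and x2 show inflated values, while x3 stays near 1.
```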
Additionally, post-estimation diagnostics were also performed to further ensure the reliability of the findings. Goodness-of-fit indices, including CFI, TLI, RMSEA, and SRMR, were evaluated to assess the model fit. Composite Reliability (CR) and Average Variance Extracted (AVE) were calculated to ensure the reliability and validity of the constructs. Modification indices were inspected to identify potential areas for model improvement, and residual analysis was conducted to check for systematic patterns that could indicate model misspecification. Multi-collinearity diagnostics were also performed to confirm the absence of multicollinearity among the variables. These comprehensive checks ensure the robustness and reliability of this study’s findings.
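Composite Reliability and AVE follow directly from the standardized loadings: CR is the squared sum of loadings over itself plus the summed residual variances, and AVE is the mean squared loading. A minimal sketch, using the standardized MR1 loadings reported in Table 4 (0.713, 0.740, 0.756):

```python
import numpy as np

def composite_reliability(loadings) -> float:
    """CR = (sum of loadings)^2 / ((sum of loadings)^2 + sum of residuals)."""
    lam = np.asarray(loadings)
    num = lam.sum() ** 2
    theta = (1 - lam ** 2).sum()   # residual variances under standardization
    return num / (num + theta)

def average_variance_extracted(loadings) -> float:
    """AVE = mean of the squared standardized loadings."""
    lam = np.asarray(loadings)
    return (lam ** 2).mean()

# Standardized CFA loadings reported for MR1 (awareness of AI tools).
mr1 = [0.713, 0.740, 0.756]
cr = composite_reliability(mr1)        # conventional threshold: above 0.70
ave = average_variance_extracted(mr1)  # conventional threshold: above 0.50
```

With these loadings, CR lands near 0.78 and AVE near 0.54, both clearing the conventional 0.70 and 0.50 thresholds.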

4. Results

4.1. Exploratory Factor Analysis (EFA)

The Exploratory Factor Analysis (EFA) was conducted using the “minres” (minimum residual) method and varimax rotation. The “minres” method, as outlined by [73], is a commonly used approach in EFA for estimating the factor loadings by minimizing the sum of the squared residuals, which represents the difference between the observed and predicted correlations. This method is particularly useful in providing a more accurate representation of the underlying factor structure, especially when the data do not perfectly fit the assumed model [74].
Varimax rotation, an orthogonal rotation technique developed by [75], was applied to the factor solution to simplify the interpretation of the factors. The goal of varimax rotation is to maximize the variance of the squared loadings of a factor across the variables, thereby producing factors that are more easily interpretable. Each factor tends to have high loadings for a smaller number of variables, while the remaining variables have loadings closer to zero, which clarifies the relationship between the variables and factors [76].
The combination of the “minres” method and varimax rotation yielded a five-factor solution that accounts for a cumulative variance of 49%, as presented in Table 2.
Factor 1 (MR1) primarily captures items related to AI awareness, including familiarity with AI, access to AI, and usage frequency, with substantial loadings of 0.68, 0.73, and 0.65, respectively, indicating the degree of AI familiarity and accessibility. Factor 2 (MR2) encompasses items related to training and integration, such as formal training, training usefulness, and AI integration, with significant loadings of 0.71, 0.66, and 0.64, respectively, reflecting the emphasis on training and its usefulness for AI integration. Factor 3 (MR3) is characterized by items pertaining to concerns about AI, including concerns about inaccuracy, cheating, and privacy, with strong loadings of 0.62, 0.75, and 0.55, respectively, suggesting that this factor measures various apprehensions regarding AI use. Factor 4 (MR4) pertains to efficiency and quality, encapsulating efficiency increase, quality increase, and general attitude towards AI, with notably high loadings of 0.75, 0.79, and 0.59, respectively, indicating perceived improvements in efficiency and quality due to AI. Factor 5 (MR5) has relatively low loadings and does not distinctly capture any specific theme, contributing only 2% to the cumulative variance. Given its low loadings and minimal variance contribution, MR5 is not considered meaningful and will not be retained in the subsequent Confirmatory Factor Analysis (CFA). Model fit indices further support the adequacy of this factor structure, with an RMSR of 0.02, an RMSEA of 0.033 (90% CI: 0.017–0.047), and a Tucker–Lewis Index (TLI) of 0.974, indicating a good fit.
This refined factor structure, visually represented in Appendix C, provides a comprehensive basis for the subsequent CFA, ensuring a more parsimonious and theoretically sound model.
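The Kaiser criterion used to select the number of factors simply counts the eigenvalues of the correlation matrix that exceed 1 (each such factor explains more variance than a single standardized variable). A small deterministic illustration with a block-structured matrix (not the study's data), where two uncorrelated pairs of variables yield exactly two retained factors:

```python
import numpy as np

def kaiser_retain(R: np.ndarray) -> int:
    """Number of factors with eigenvalues greater than 1 (Kaiser criterion)."""
    eig = np.linalg.eigvalsh(R)
    return int((eig > 1).sum())

# Illustrative correlation matrix: two uncorrelated variable pairs,
# giving eigenvalues {1.6, 0.4, 1.5, 0.5} -> two factors retained.
R = np.array([
    [1.0, 0.6, 0.0, 0.0],
    [0.6, 1.0, 0.0, 0.0],
    [0.0, 0.0, 1.0, 0.5],
    [0.0, 0.0, 0.5, 1.0],
])
n_factors = kaiser_retain(R)
```

In practice the criterion is cross-checked against the scree plot, as was done in this study, since eigenvalue counts alone can over- or under-extract factors.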

4.2. Confirmatory Factor Analysis (CFA)

Confirmatory Factor Analysis (CFA) was conducted to validate the factor structure identified by the Exploratory Factor Analysis (EFA). Its results demonstrate a robust fit for the measurement model, confirming the validity of the hypothesized latent constructs. Of note, the estimated CFA model specified the following four latent factors: awareness of AI (MR1), training and integration (MR2), concerns about AI (MR3), and efficiency and quality (MR4), and was estimated using Maximum Likelihood (ML), converging after 46 iterations. The corresponding fit indices are reported in Table 3.
The Comparative Fit Index (CFI) of 0.957 indicates a good fit, as values above 0.90 are generally considered acceptable in Structural Equation Modeling [77]. The Tucker–Lewis Index (TLI) of 0.941, above the recommended threshold of 0.90, also reinforces this conclusion. The Root Mean Square Error of Approximation (RMSEA) is 0.059, with a 90% confidence interval ranging from 0.048 to 0.070, further confirming the robustness of the factor structure identified.
Additionally, the Standardized Root Mean Square Residual (SRMR) of 0.049 is within the acceptable range (less than 0.08), supporting the model’s adequacy. Consequently, the CFA results support the validity of the four-factor model, confirming the structure proposed by the EFA, with all specified factors showing significant and substantial loadings on their respective items. The exclusion of MR5 from the CFA model is justified by its low loadings and minimal contribution to the explained variance in the EFA, ensuring a more parsimonious and interpretable model.
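The fit indices discussed above are deterministic functions of the model and baseline (independence-model) chi-square statistics. The following sketch gives the standard formulas; the chi-square inputs are illustrative values (not the study's), chosen so the outputs land near the reported CFA indices, and the RMSEA formula follows the common convention that divides by n − 1:

```python
import math

def fit_indices(chi2_m, df_m, chi2_0, df_0, n):
    """CFI, TLI, and RMSEA from model (m) and baseline (0) chi-squares."""
    d_m = max(chi2_m - df_m, 0.0)   # model noncentrality
    d_0 = max(chi2_0 - df_0, 0.0)   # baseline noncentrality
    cfi = 1.0 - d_m / max(d_0, d_m)
    tli = (chi2_0 / df_0 - chi2_m / df_m) / (chi2_0 / df_0 - 1.0)
    rmsea = math.sqrt(d_m / (df_m * (n - 1)))
    return cfi, tli, rmsea

# Illustrative chi-square values; n = 748 respondents.
cfi, tli, rmsea = fit_indices(chi2_m=170.0, df_m=48,
                              chi2_0=2900.0, df_0=66, n=748)
```

Reading the formulas directly explains the reported pattern: CFI and TLI approach 1 as the model's excess chi-square shrinks relative to the baseline, while RMSEA rewards low per-degree-of-freedom misfit in large samples.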
The parameter estimates reported in Table 4 reveal that the latent variable MR1 (general awareness and familiarity with AI tools) significantly loads on familiarity_AI (0.713), access_to_AI (0.740), and usage_frequency (0.756). MR2 (formal training and integration of AI tools) significantly loads on formal_training (0.665), training_usefulness (0.717), and AI_integration (0.621). MR3 (concerns regarding AI) significantly loads on concern_inaccuracy (0.627), concern_cheating (0.767), and concern_privacy (0.494). MR4 (perceived usefulness and positive attitudes towards AI) significantly loads on efficiency_increase (0.888), quality_increase (0.867), and general_attitude (0.668).
Moreover, the variances of the latent constructs are well defined (see Table 5), with MR1 (1.000), MR2 (1.000), MR3 (1.000), and MR4 (1.000) indicating well-identified constructs. The residual variances for the observed variables are moderate, indicating that a substantial portion of the variance in the observed measures is explained by the latent constructs. As such, the CFA results provide strong evidence for the reliability and validity of the measurement model, confirming the hypothesized factor structure and supporting the use of these latent constructs in subsequent Structural Equation Modeling (SEM) analyses.
To further explore the relationships among the observed variables, a correlation heatmap was generated (Figure 1). This heatmap provides a visual representation of the pairwise correlations between the variables included in this study. The intensity and color of the circles indicate the strength and direction of the correlations, with red hues representing positive correlations and blue hues representing negative correlations.
The correlation heatmap reveals several notable patterns. Positive correlations, such as the strong association between efficiency_increase and quality_increase (r = 0.77), indicate that improvements in the perceived efficiency are closely linked with enhancements in the perceived quality. Conversely, some variables, although not prominently featured, show weaker negative correlations. This visual representation allows for a quick assessment of how variables are interrelated, which is crucial for understanding the underlying structure of the data. Certain variables, like concern_cheating and concern_privacy, exhibit low correlations with most other variables, indicating that they may represent distinct dimensions of the students’ perceptions and concerns about AI tools.
These insights from the correlation heatmap complement the findings from the CFA by providing a broader context of how individual variables interact with each other. This holistic view aids in validating the constructs identified through EFA and CFA, offering a more comprehensive understanding of the data.

4.3. Structural Equation Model (SEM) Results

The Structural Equation Modeling (SEM) analysis was conducted to examine the complex interrelationships among the latent factors identified through Exploratory Factor Analysis (EFA) and Confirmatory Factor Analysis (CFA), complemented by the correlation analysis. The final SEM was specified as follows:
# Measurement model
MR1 =~ familiarity_AI + access_to_AI + usage_frequency
MR2 =~ formal_training + training_usefulness + AI_integration
MR3 =~ concern_inaccuracy + concern_cheating + concern_privacy
MR4 =~ efficiency_increase + quality_increase + general_attitude
# Structural model
MR4 ~ MR1 + MR2 + MR3
MR3 ~ MR1 + MR2
MR2 ~ MR1
usage_frequency ~ MR2 + MR3 + MR4
Of note, in the context of our study, we introduced a structural path for usage frequency in the SEM, beyond its role as an indicator of MR1 in the CFA, to better capture the behavioral component of the students’ interaction with AI tools. While the CFA primarily validated the latent constructs related to awareness, training, concerns, and attitudes, it became evident that understanding the actual frequency of AI tool usage was crucial for a comprehensive analysis. Incorporating usage frequency as an outcome in the SEM allows us to explicitly examine how formal training, concerns about AI, and positive attitudes towards AI directly and indirectly influence students’ engagement with AI tools. This addition provides a more nuanced understanding of the practical implications of the identified latent factors and their impact on actual behavior, thereby enhancing the overall explanatory power and relevance of our model.
The SEM demonstrates excellent fit to the data, as indicated by the high CFI (0.962) and TLI (0.945), both being well above the threshold of 0.90. The RMSEA value of 0.055, with a 90% confidence interval ranging from 0.043 to 0.065, is within acceptable limits, indicating a reasonable error of approximation. The SRMR of 0.046, being below the threshold of 0.08, further confirms the good fit of the model. Additionally, it should be mentioned that the robustness of these results was confirmed by re-estimating the model using the Maximum Likelihood with robust standard errors (MLR). The robust estimation yielded consistent results, reinforcing the reliability of the findings. The model fit indices for both error specifications are presented in Table 6.
The parameters for the robust model estimation using the MLR estimator are reported in Table 7.
The standardized factor loadings for the latent variables are strong and significant, affirming that the observed variables are robust indicators of their respective constructs. For MR1 (awareness and familiarity with AI Tools), the factor loadings are as follows: familiarity with AI (0.739), access to AI (0.762), and usage frequency (0.584). MR2 (formal training and integration) shows loadings of formal training (0.664), training usefulness (0.718), and AI integration (0.621). MR3 (concerns about AI) includes concerns about inaccuracy (0.626), concerns about cheating (0.767), and concerns about privacy (0.495). MR4 (perceived usefulness and positive attitudes towards AI) consists of efficiency increase (0.890), quality increase (0.864), and general attitude (0.669).
The structural model path coefficients reveal significant relationships between the latent variables, shedding light on the intricate dynamics at play. Specifically, MR1 (general awareness and familiarity with AI tools) exerts a substantial positive influence on MR4 (perceived usefulness and positive attitudes), with a standardized path coefficient of 0.663. This finding shows that heightened awareness and familiarity with AI tools significantly enhance students’ perceptions of the usefulness of these tools and foster more positive attitudes toward their adoption.
Moreover, MR2 (formal training and integration) exhibits dual effects, as follows: it positively influences MR4 with a path coefficient of 0.124 and negatively influences MR3 (concerns regarding AI) with a coefficient of −0.116. These results indicate that structured training programs and effective integration of AI tools within the educational framework not only bolster students’ perceptions of the tools’ usefulness and generate positive attitudes, but also alleviate concerns related to the accuracy, privacy, and ethical implications of AI use.
On the other hand, MR3 demonstrates a negative influence on MR4, with a path coefficient of −0.116, signifying that increased concerns about AI can detract from its perceived usefulness and foster negative attitudes towards its use. This relationship highlights the importance of addressing and mitigating student concerns in order to promote a more favorable perception of AI tools.
Crucially, the model elucidates the factors driving the frequency of AI tool usage among students. MR2 exerts a direct positive influence on usage frequency, evidenced by a path coefficient of 0.137. This underscores the pivotal role of formal training and integration in fostering regular and effective use of AI tools. Students who receive comprehensive training and perceive the integration of AI into their curriculum as beneficial are more likely to use these tools frequently.
Furthermore, MR4 also positively influences usage frequency, with a path coefficient of 0.142. This relationship suggests that students who perceive AI tools as useful and hold positive attitudes towards them are more inclined to incorporate these tools into their routine activities. The alignment of positive perceptions and attitudes with actual usage behavior underscores the critical role of fostering a favorable outlook on AI tools to drive their adoption and utilization.
In conclusion, the strong fit indices, significant factor loadings, and meaningful path coefficients collectively illustrate the intricate dynamics between students’ awareness, training, concerns, attitudes, and their actual usage of AI tools. The robustness checks, using the MLR estimator, further solidify the validity of these results, ensuring that the conclusions drawn are not only statistically sound, but also practically relevant for enhancing AI adoption in educational contexts.
This comprehensive analysis, depicted in the path diagram in Figure 2, provides a detailed understanding of the factors influencing students’ engagement with AI tools. It underscores the importance of enhancing awareness and familiarity, providing formal training, and addressing concerns to foster positive attitudes and increase the frequency of AI tool usage among students.
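As a worked arithmetic illustration, an indirect effect in SEM is the product of the standardized coefficients along the mediating chain. Using two coefficients reported above (the labels in the comments restate their meaning in the model):

```python
# Standardized path coefficients reported in the SEM results.
path_mr1_mr4 = 0.663    # MR1 (awareness) -> MR4 (usefulness/attitudes)
path_mr4_usage = 0.142  # MR4 (usefulness/attitudes) -> usage frequency

# Indirect effect of awareness on usage frequency via attitudes:
indirect_mr1_usage = path_mr1_mr4 * path_mr4_usage  # ~0.094
```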

5. Discussion

The findings from this study provide valuable insights into the factors influencing university students’ use of AI tools in their academic activities. The results of the Exploratory Factor Analysis (EFA), Confirmatory Factor Analysis (CFA), and Structural Equation Modeling (SEM) offer a robust framework for understanding these dynamics.
The analysis of AI tool familiarity among students revealed several insightful trends. Most students (81.3%) reported familiarity with AI chatbots, such as ChatGPT, indicating widespread awareness and use of conversational AI technologies. This high level of recognition underscores the integration of conversational AI in everyday academic and personal tasks, reflecting broader global trends where chatbots are increasingly used for various applications [78]. A study by [29] reports similar findings, with over 80% of higher education students and professors declaring that they are familiar with AI tools such as Google Translate, ChatGPT, and DeepL.
Interestingly, 36.1% of the students were familiar with AI tools integrated with Microsoft Office applications, such as Microsoft 365 Copilot. This significant penetration highlights the growing adoption of AI in productivity tools that are commonly used in academic settings. Similarly, AI tools for data analysis, such as Tableau, Microsoft Power BI, IBM Watson Analytics, and DataRobot, were familiar to 14.3% of students. This suggests an engagement with sophisticated AI applications that support analytical tasks and data-driven decision making [79].
Additionally, 14% of students reported familiarity with AI tools for image generation, such as Dall-E, Midjourney, and Adobe Firefly. This indicates an interest in creative AI applications, which are becoming increasingly relevant in various fields of study, including art and design. However, there remains a notable proportion of students (9.4%) who reported no familiarity with any AI tools. This gap indicates an opportunity for educational institutions to enhance AI literacy and provide more targeted AI-related education and training programs [80].
Addressing this gap is crucial for ensuring that all students can benefit from the potential advantages that AI tools offer in academic and future professional settings. Overall, these insights into the specific AI tools that students are familiar with provide a detailed understanding of the current state of AI tool adoption in higher education. This information is valuable for educational policymakers and institutions aiming to design interventions and programs that can further integrate AI technologies into the academic curriculum, thereby improving student engagement and learning outcomes.
The final SEM demonstrated an excellent fit to the data, with high values for the Comparative Fit Index (CFI) and the Tucker–Lewis Index (TLI), and acceptable values for the Root Mean Square Error of Approximation (RMSEA) and the Standardized Root Mean Square Residual (SRMR). These fit indices confirm the robustness of our model, consistent with the benchmarks suggested by Hu and Bentler (1999) [77]. This robust model fit suggests that the hypothesized effects are well supported by the data.

The structural model path coefficients reveal significant relationships between the latent variables. Specifically, awareness and familiarity with AI tools (MR1) positively influence the perceived usefulness and positive attitudes towards AI (MR4), with a coefficient of 0.663. This finding indicates that higher awareness and familiarity with AI tools enhance students’ positive perceptions and attitudes towards AI. Formal training and integration (MR2) also positively influence the perceived usefulness and attitudes (MR4), with a coefficient of 0.124, while negatively influencing concerns about AI (MR3), with a coefficient of −0.116. These results suggest that structured training programs not only improve students’ perceptions of AI, but also help to mitigate their concerns. Additionally, concerns regarding AI (MR3) negatively influence the perceived usefulness and positive attitudes (MR4), with a coefficient of −0.116. Furthermore, MR2 positively influences the frequency of AI tool usage, with a coefficient of 0.137, and MR4 positively influences usage frequency, with a coefficient of 0.142, indicating that both formal training and positive attitudes are crucial drivers of AI tool usage.
These findings underscore the significance of institutional support mechanisms, which extend beyond individual awareness and familiarity with AI tools. For instance, peer influence and collaboration among students are increasingly recognized as vital components of AI adoption in educational settings. As [34] suggest, social dynamics and recommendations from peers can significantly enhance the willingness to engage with AI technologies. This phenomenon aligns with broader theories of social learning, where the diffusion of innovation within peer networks can accelerate the adoption of new technologies.
Moreover, faculty training and curriculum integration are critical in shaping student attitudes toward AI. As indicated by [46], ongoing professional development for educators is essential for the effective integration of AI tools into teaching practices. By equipping faculty with the necessary skills and knowledge, educational institutions can foster a learning environment where AI technologies are seamlessly integrated, thereby enhancing both teaching effectiveness and student engagement.
In addition to these institutional factors, this study also highlights the varied purposes that different AI tools serve in the academic context. While AI chatbots and productivity tools have gained widespread recognition, other applications, such as data analysis and creative AI tools, remain underutilized. This discrepancy points to the need for targeted interventions that not only broaden student exposure to diverse AI tools, but also align these tools with specific academic needs and disciplines. As [60] notes, when AI tools are integrated into learning environments in a way that resonates with students’ interests and academic requirements, there is a marked increase in engagement and motivation.
A more detailed view can be gained by specifying indirect effects within the structural model. This approach explores how awareness, training, and concerns about AI interrelate and collectively impact AI tool usage and attitudes. The extended SEM analysis includes the following indirect effects:
  • Ind 1. The influence of general awareness and familiarity with AI tools (MR1) on perceived usefulness and positive attitudes towards AI (MR4) through formal training and integration (MR2).
  • Ind 2. The influence of general awareness and familiarity with AI tools (MR1) on perceived usefulness and positive attitudes towards AI (MR4) through concerns regarding AI (MR3).
  • Ind 3. The influence of general awareness and familiarity with AI tools (MR1) on the frequency of AI tool usage through formal training and integration (MR2).
  • Ind 4. The influence of general awareness and familiarity with AI tools (MR1) on the frequency of AI tool usage through concerns regarding AI (MR3).
  • Ind 5. The influence of general awareness and familiarity with AI tools (MR1) on the frequency of AI tool usage through perceived usefulness and positive attitudes towards AI (MR4).
  • Ind 6. The influence of formal training and integration (MR2) on the frequency of AI tool usage through perceived usefulness and positive attitudes towards AI (MR4).
  • Ind 7. The influence of concerns regarding AI (MR3) on the frequency of AI tool usage through perceived usefulness and positive attitudes towards AI (MR4).
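Indirect effects such as Ind 1 through Ind 7 are typically declared as products of labeled structural paths. A lavaan-style specification is sketched below as a Python string for illustration; the path labels and the exact set of direct paths are assumptions inferred from the hypotheses above, not the authors' published estimation code:

```python
# lavaan-style model string; labels (a1, b1, ...) and the precise set of
# direct paths are illustrative assumptions, not the study's syntax.
model = """
  MR2   ~ a1*MR1                     # awareness -> training/integration
  MR3   ~ a2*MR1 + d1*MR2            # -> concerns regarding AI
  MR4   ~ c1*MR1 + b1*MR2 + b2*MR3   # -> usefulness / positive attitudes
  usage ~ c2*MR2 + c3*MR3 + b3*MR4   # -> frequency of AI tool usage

  ind1 := a1*b1    # MR1 -> MR2 -> MR4
  ind2 := a2*b2    # MR1 -> MR3 -> MR4
  ind3 := a1*c2    # MR1 -> MR2 -> usage
  ind4 := a2*c3    # MR1 -> MR3 -> usage
  ind5 := c1*b3    # MR1 -> MR4 -> usage
  ind6 := b1*b3    # MR2 -> MR4 -> usage
  ind7 := b2*b3    # MR3 -> MR4 -> usage
"""
print(model.count(":="))  # seven defined indirect effects
```

In lavaan, passing such a string to `sem()` with robust standard errors would estimate both the labeled paths and the defined products; comparable specifications are possible in Python SEM packages such as semopy.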
Table 8 presents the results.
The extended SEM analysis reveals several noteworthy indirect effects. The indirect pathway from MR1 to MR4 through MR2 is statistically significant, with an indirect effect of 0.041. This indicates that formal training and integration significantly mediate the positive influence of awareness and familiarity on perceived usefulness and attitudes towards AI. Another significant indirect effect is from MR1 to usage frequency through MR2, with an estimate of 0.055, highlighting the crucial role of formal training in translating awareness into actual usage behavior.
The analysis also shows a significant indirect effect of MR2 on usage frequency through MR4, with an estimate of 0.073. This finding underscores the importance of developing positive perceptions and attitudes towards AI, which in turn drive higher usage rates. Although the indirect effects involving MR3 are not all statistically significant, they provide valuable insights. For instance, the pathway from MR3 to usage frequency through MR4, though only marginally significant, suggests that concerns negatively affect attitudes but exert a comparatively weak influence on overall usage behavior. This is consistent with the findings of [51], which indicate that concerns about accuracy, privacy, and ethical issues induce negative attitudes that act as barriers to students’ adoption of AI tools.
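The product-of-coefficients logic behind such estimates can be reproduced on simulated data. In the sketch below, every variable name, true coefficient, and the simple regression-based estimator are illustrative assumptions, not the study's data or estimates; it computes an indirect effect a*b for a chain analogous in form to MR2 -> MR4 -> usage, with a percentile-bootstrap confidence interval:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated data for a simple mediation chain (illustrative only)
n = 500
training = rng.normal(size=n)                      # stand-in for MR2
attitude = 0.40 * training + rng.normal(size=n)    # stand-in for MR4
usage = 0.30 * attitude + 0.15 * training + rng.normal(size=n)

def paths(x, m, y):
    """Return (a, b): x -> m slope, and m -> y slope controlling for x."""
    a = np.polyfit(x, m, 1)[0]
    X = np.column_stack([m, x, np.ones_like(x)])
    b = np.linalg.lstsq(X, y, rcond=None)[0][0]
    return a, b

a, b = paths(training, attitude, usage)
point = a * b  # product-of-coefficients indirect effect

# Percentile bootstrap for the indirect effect
boot = []
for _ in range(1000):
    idx = rng.integers(0, n, n)
    ab = paths(training[idx], attitude[idx], usage[idx])
    boot.append(ab[0] * ab[1])
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"indirect = {point:.3f}, 95% CI [{lo:.3f}, {hi:.3f}]")
```

A confidence interval excluding zero is the usual criterion for calling such an indirect effect significant, mirroring the significance statements reported for Ind 1, Ind 3, and Ind 6.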
This study’s findings are consistent with several key studies in the field of technology acceptance and usage. For instance, the Technology Acceptance Model (TAM) has been widely used to explain the determinants of technology acceptance and usage, emphasizing the perceived ease of use and perceived usefulness as critical factors [81,82]. However, our study uniquely highlights the intermediary role of access, particularly in the context of AI tools for academic purposes. Recent studies by [82,83] have also indicated that access to technology significantly impacts students’ perceptions and adoption behaviors, reinforcing the importance of addressing access issues to enhance the effectiveness of AI tools in education.
The findings of this study have significant implications for educational institutions and policymakers. Firstly, educational institutions should prioritize the integration of AI tools into their curricula. This can be achieved by offering targeted training programs for both students and faculty to increase their familiarity and competence with AI technologies. Such programs could include workshops, seminars, and online courses tailored to different disciplines in order to ensure broad coverage and relevance.
Secondly, institutions should ensure that students have easy access to a variety of AI tools, for example by providing subscriptions or licenses to AI software programs as part of the institution’s educational resources, by partnering with software providers, or by leveraging open-source AI tools where feasible. Broad access of this kind would limit the potential negative socio-economic impact on students from lower-income backgrounds.
The findings of this study underscore the importance of ensuring equitable access to AI tools in higher education, particularly as the costs associated with these technologies continue to rise. The potential shift towards more expensive AI models, driven by the increasing costs of energy and development, raises significant concerns about socio-economic disparities. Students from lower-income backgrounds may find it increasingly difficult to afford access to the most advanced AI tools, which could exacerbate existing inequalities in educational outcomes.
To mitigate these risks, educational institutions must prioritize providing universal access to AI tools, ensuring that all students, regardless of their socio-economic status, can benefit from these technologies. This could involve partnerships with software providers to offer discounted or free licenses to students, as well as leveraging open-source AI tools that are accessible to a broader audience. Additionally, policymakers should consider implementing funding mechanisms, such as grants and subsidies, to support institutions in acquiring these tools and making them available to all students.
Furthermore, institutions should develop targeted programs that address the specific needs of lower-income students, such as providing additional training and support to help them to effectively utilize AI tools. By taking these steps, the potential negative socio-economic impact of AI in education can be minimized, ensuring that AI serves as a tool for enhancing educational equity, rather than reinforcing existing disparities.
Policymakers can support these initiatives by funding improvements in technological infrastructure and developing policies that encourage the adoption and effective use of AI in educational settings. For example, grants and subsidies for educational institutions to acquire AI tools and training resources can significantly enhance their capabilities.
Furthermore, to address the observed gap in AI tool familiarity among some students, institutions should implement targeted AI literacy programs. These programs can help to ensure that all students, regardless of their initial level of exposure, can benefit from AI technologies. Additionally, creating collaborative environments where students can share their experiences and knowledge about AI tools can foster a community of practice, enhancing the overall adoption and effective use.
By focusing on these areas, educational institutions can significantly enhance the perceived usefulness and actual usage of AI tools, thereby improving educational outcomes and preparing students for the evolving job market. These efforts will not only bridge the gap in AI tool adoption, but also ensure that students are well equipped with the necessary skills to leverage AI technologies in their future careers.

6. Conclusions

The integration of artificial intelligence (AI) in higher education is increasingly recognized as a transformative force with the potential to enhance academic outcomes significantly. However, the factors influencing university students’ adoption and utilization of AI tools in their educational pursuits remain underexplored, particularly within specific academic disciplines. This study addresses this gap by examining the relationships between students’ awareness of AI tools, their access to these tools, and the perceived effectiveness and usefulness of AI in academic settings. Focusing on economics students, who are uniquely positioned at the nexus of technology and economic analysis, this research provides valuable insights into how AI might be integrated into future economic practices and educational paradigms.
The current findings reveal that, while a substantial number of students are familiar with AI tools such as chatbots (81.3%) and AI-integrated applications like Microsoft Office (36.1%), such awareness is not universal, with over 9% of respondents indicating no familiarity with any AI tools. This underscores the critical importance of enhancing awareness and addressing barriers to AI adoption. Through a rigorous methodological approach, including Exploratory Factor Analysis (EFA), Confirmatory Factor Analysis (CFA), and Structural Equation Modeling (SEM), this study identified significant pathways through which awareness, training, and institutional support influence AI tool usage. Furthermore, the expanded SEM analysis reveals significant indirect effects that further elucidate the pathways through which awareness, training, and concerns about AI interact to influence usage behavior. The critical role of formal training and positive attitudes in driving AI tool adoption underscores the importance of comprehensive educational strategies that address both the technical and attitudinal dimensions of AI integration.
The findings of this study are not only relevant to the field of economics education, but also have broader implications across various domains where AI adoption plays a critical role. In fields such as business, healthcare, engineering, and social sciences, understanding the factors that influence the adoption and effective use of AI tools is essential for designing educational programs and professional development initiatives. For instance, in healthcare, AI tools are increasingly used for diagnostic purposes and personalized treatment plans, making the insights from this study valuable for enhancing AI literacy and acceptance among medical professionals. Similarly, in engineering and business, where AI is applied to optimize processes and decision making, the identified factors can guide the development of targeted training programs that improve the integration of AI technologies into professional practice. Therefore, this study’s conclusions contribute not only to the academic discourse on AI in education, but also offer practical implications for various industries undergoing digital transformation.
However, several limitations warrant acknowledgment. Firstly, this study’s geographic focus on Romania restricts the generalizability of the findings to other regions with different educational systems, technological infrastructures, and cultural attitudes toward AI. Recognizing this limitation, the conclusions drawn are not intended to represent universal AI adoption patterns but rather to provide a foundation for further research in diverse contexts. Future studies should incorporate a broader geographical scope in order to enhance the external validity of the findings.
Secondly, the cross-sectional nature of this study limits the ability to draw causal inferences regarding the relationships between the awareness, access, and perceived usefulness of AI tools. While current results offer valuable preliminary insights, longitudinal studies would be instrumental in tracking changes in AI adoption over time and in understanding the long-term impacts of awareness and training on AI usage behavior. The dynamic and rapidly evolving nature of AI technologies further emphasizes the need for ongoing research that can capture these temporal changes.
The reliance on self-reported data is another limitation that may introduce response biases, such as social desirability bias, where students might overestimate their familiarity with or positive attitudes toward AI tools. Although anonymity was ensured to encourage candid responses, the potential for bias cannot be entirely eliminated. Future research should consider employing complementary data collection methods, such as observational studies or usage analytics, to validate self-reported measures.
Moreover, while this study concentrated on economics students, due to their strategic role in understanding the impact of AI on business and economic decision making, there is a clear need to explore AI adoption across a wider range of disciplines. Different academic fields may encounter unique challenges and opportunities in integrating AI, which could be overlooked when focusing solely on a single discipline. Future research could explore how disciplinary differences shape AI adoption, thus providing a more nuanced understanding of the factors that influence AI integration in higher education.
In addition, although this study acknowledges the importance of institutional support mechanisms, such as faculty training and curriculum integration, further exploration is warranted. Understanding how these factors interact with student awareness and attitudes could offer a more comprehensive view of AI adoption in educational settings. Expanding research to investigate the influence of peer networks and social learning dynamics on AI usage could also yield valuable insights, particularly given the increasingly collaborative nature of higher education environments.
This study significantly contributes to the existing body of knowledge by providing a detailed analysis of the factors influencing AI adoption among university students. The findings underscore the importance of awareness, access, and positive attitudes in the successful integration of AI tools in higher education. By highlighting both the strengths and limitations of the current study, we pave the way for future research that can build on these insights, offering a deeper understanding of the complex dynamics at play in AI adoption across diverse educational contexts.

Author Contributions

Conceptualization, all authors; introduction, R.I.D. and C.V.T.; literature review, C.V.T. and R.I.D.; data gathering (survey responses), all authors; methodology, C.T. and R.S.; software, C.T. and R.S.; validation, C.T.; formal analysis, C.T. and R.S.; data curation, C.T.; investigation, all authors; writing—original draft preparation, all authors; writing—review and editing, all authors. All authors have read and agreed to the published version of the manuscript.

Funding

This work was partially funded by the European Union’s NextGenerationEU instrument through the National Recovery and Resilience Plan of Romania—Pillar III-C9-I8, managed by the Ministry of Research, Innovation, and Digitalization of Romania, within the project with code CF 194/31.07.2023, contract no. 760243/28.12.2023.

Data Availability Statement

Data are contained within the article.

Conflicts of Interest

The authors declare no conflicts of interest.

Appendix A. The Survey Instrument

Demographics
  - Age: Single-choice answer
  - Gender: Single-choice answer
  - Education level: Single-choice answer
  - University affiliation: Single-choice answer, including an open text response
  - Specialization: Single-choice answer, including an open text response
  - Domicile type: Single-choice answer
Familiarity with AI Tools
  - Level of familiarity with AI tools: 5-point Likert scale
Known and Used AI Tools
  - Known AI tools (a variety of AI tools with applications in higher education): Multiple-choice answers, including an open text response
  - Used AI tools (a variety of AI tools with applications in higher education): Multiple-choice answers, including an open text response
Access to AI Tools
  - Access to AI tools: 5-point Likert scale
Subscription to AI Tools
  - University-provided subscriptions: Yes/No
  - Personal subscriptions: Yes/No
Frequency and Impact of AI Tool Usage
  - Frequency of AI tool usage in academic tasks: 5-point Likert scale
  - Perceived impact on academic efficiency: 5-point Likert scale
  - Perceived impact on quality of work: 5-point Likert scale
  - Ease of use: 5-point Likert scale
Training and Support
  - Formal training received: Yes/No
  - Usefulness of training: 5-point Likert scale
General Attitudes and Concerns
  - General attitudes towards AI tools: 5-point Likert scale
  - Concerns about inaccuracy: 5-point Likert scale
  - Concerns about cheating: 5-point Likert scale
  - Concerns about privacy: 5-point Likert scale
Integration of AI Tools
  - Integration into the curriculum and academic activities: 5-point Likert scale

Appendix B. Demographic Characteristics of the Study Participants

[Images: demographic characteristics of the study participants]

Appendix C. EFA Factor Loadings

[Image: EFA factor loadings]

References

  1. Centre for the Governance of AI. Computing Power and the Governance of AI. 2024. Available online: https://www.governance.ai/post/computing-power-and-the-governance-of-ai (accessed on 22 August 2024).
  2. Utilitynet. Outlook 2024: Nine Major Trends in Computing Power Development. 2024. Available online: https://medium.com/@utilitynet_community/outlook-2024-nine-major-trends-in-computing-power-development-52cf46df8f48 (accessed on 23 August 2024).
  3. Maphosa, V.; Maphosa, M. Artificial intelligence in higher education: A bibliometric analysis and topic modeling approach. Appl. Artif. Intell. 2023, 37, 2261730. [Google Scholar] [CrossRef]
  4. Crompton, H.; Burke, D. Artificial intelligence in higher education: The state of the field. Int. J. Educ. Technol. High Educ. 2023, 20, 22. [Google Scholar] [CrossRef]
  5. Zhang, Z.; Xu, L. Student engagement with automated feedback on academic writing: A study on Uyghur ethnic minority students in China. J. Multiling. Multicult. Dev. 2022, 43, 1–14. [Google Scholar] [CrossRef]
  6. Pisica, A.I.; Edu, T.; Zaharia, R.M.; Zaharia, R. Implementing Artificial Intelligence in Higher Education: Pros and Cons from the Perspectives of Academics. Societies 2023, 13, 118. [Google Scholar] [CrossRef]
  7. Bucea-Manea-Țoniş, R.; Kuleto, V.; Gudei, S.C.D.; Lianu, C.; Lianu, C.; Ilić, M.P.; Păun, D. Artificial Intelligence Potential in Higher Education Institutions Enhanced Learning Environment in Romania and Serbia. Sustainability 2022, 14, 5842. [Google Scholar] [CrossRef]
  8. Săseanu, A.S.; Gogonea, R.-M.; Ghiţă, S.I. The social impact of using artificial intelligence in education. Amfiteatru Econ. 2024, 26, 89–105. [Google Scholar] [CrossRef]
  9. Farahani, M.S.; Ghasemi, G. Artificial Intelligence and Inequality: Challenges and Opportunities. Qeios 2024, 7HWUZ2, 1–14. [Google Scholar] [CrossRef]
  10. Wang, Y.M.; Wei, C.L.; Lin, H.H.; Wang, S.C.; Wang, Y.S. What drives students’ AI learning behavior: A perspective of AI anxiety. Interact. Learn. Environ. 2022, 2022, 1–17. [Google Scholar] [CrossRef]
  11. Al-Sharafi, M.A.; Al-Emran, M.; Iranmanesh, M.; Al-Qaysi, N.; Iahad, N.A.; Arpaci, I. Understanding the impact of knowledge management factors on the sustainable use of AI-based chatbots for educational purposes using a hybrid SEM-ANN approach. Interact. Learn. Environ. 2023, 31, 7491–7510. [Google Scholar] [CrossRef]
  12. Hultberg, P.T.; Santandreu Calonge, D.; Kamalov, F.; Smail, L. Comparing and assessing four AI chatbots’ competence in economics. PLoS ONE 2024, 19, e0297804. [Google Scholar] [CrossRef]
  13. Strzelecki, A. To use or not to use ChatGPT in higher education? A study of students’ acceptance and use of technology. Interact. Learn. Environ. 2023, 31, 1–14. [Google Scholar] [CrossRef]
  14. Al-Abdullatif, A.M. Modeling Students’ Perceptions of Chatbots in Learning: Integrating Technology Acceptance with the Value-Based Adoption Model. Educ. Sci. 2023, 13, 1151. [Google Scholar] [CrossRef]
  15. Chan, C.K.Y.; Hu, W. Students’ voices on generative AI: Perceptions, benefits, and challenges in higher education. Int. J. Educ. Technol. High Educ. 2023, 20, 43. [Google Scholar] [CrossRef]
  16. Farrokhnia, M.; Banihashem, S.K.; Noroozi, O.; Wals, A. A SWOT analysis of ChatGPT: Implications for educational practice and research. Innov. Educ. Teach. Int. 2023, 61, 460–474. [Google Scholar] [CrossRef]
  17. Labadze, L.; Grigolia, M.; Machaidze, L. Role of AI chatbots in education: Systematic literature review. Int. J. Educ. Technol. High Educ. 2023, 20, 56. [Google Scholar] [CrossRef]
  18. Wang, T.; Lund, B.D.; Marengo, A.; Pagano, A.; Mannuru, N.R.; Teel, Z.A.; Pange, J. Exploring the Potential Impact of Artificial Intelligence (AI) on International Students in Higher Education: Generative AI, Chatbots, Analytics, and International Student Success. Appl. Sci. 2023, 13, 6716. [Google Scholar] [CrossRef]
  19. Hasanein, A.M.; Sobaih, A.E.E. Drivers and Consequences of ChatGPT Use in Higher Education: Key Stakeholder Perspectives. Eur. J. Investig. Health Psychol. Educ. 2023, 13, 2599–2614. [Google Scholar] [CrossRef]
  20. Lo, C.K. What Is the Impact of ChatGPT on Education? A Rapid Review of the Literature. Educ. Sci. 2023, 13, 410. [Google Scholar] [CrossRef]
  21. Kuleto, V.; Ilić, M.; Dumangiu, M.; Ranković, M.; Martins, O.M.D.; Păun, D.; Mihoreanu, L. Exploring Opportunities and Challenges of Artificial Intelligence and Machine Learning in Higher Education Institutions. Sustainability 2021, 13, 10424. [Google Scholar] [CrossRef]
  22. Kashive, N.; Powale, L.; Kashive, K. Understanding user perception toward artificial intelligence (AI) enabled e-learning. Int. J. Inf. Learn. Technol. 2020, 38, 1–19. [Google Scholar] [CrossRef]
  23. Chen, L.; Chen, P.; Lin, Z. Artificial Intelligence in Education: A Review. IEEE Access 2020, 8, 75264–75278. [Google Scholar] [CrossRef]
  24. Ruiz-Real, J.L.; Uribe-Toril, J.; Torres, J.A.; De Pablo, J. Artificial intelligence in business and economics research: Trends and future. J. Bus. Econ. Manag. 2021, 22, 98–117. [Google Scholar] [CrossRef]
  25. Popenici, S.A.D.; Kerr, S. Exploring the impact of artificial intelligence on teaching and learning in higher education. Res. Pract. Technol. Enhanc. Learn. 2017, 12, 22. [Google Scholar] [CrossRef]
  26. Zawacki-Richter, O.; Marín, V.I.; Bond, M.; Gouverneur, F. Systematic review of research on artificial intelligence applications in higher education—Where are the educators? Int. J. Educ. Technol. High Educ. 2019, 16, 1–27. [Google Scholar] [CrossRef]
  27. Weizenbaum, J. ELIZA: A computer program for the study of natural language communication between man and machine. Commun. ACM 1966, 9, 36–45. [Google Scholar] [CrossRef]
  28. Colby, K.M. Modeling a paranoid mind. Behav. Brain Sci. 1981, 4, 515–534. [Google Scholar] [CrossRef]
  29. Denecke, K.; Glauser, R.; Reichenpfader, D. Assessing the Potential and Risks of AI-Based Tools in Higher Education: Results from an eSurvey and SWOT Analysis. Trends High. Educ. 2023, 2, 667–688. [Google Scholar] [CrossRef]
  30. Nguyen, N.D. Exploring the role of AI in education. Lond. J. Soc. Sci. 2023, 6, 84–95. [Google Scholar] [CrossRef]
  31. Salinas-Navarro, D.E.; Vilalta-Perdomo, E.; Michel-Villarreal, R.; Montesinos, L. Using Generative Artificial Intelligence Tools to Explain and Enhance Experiential Learning for Authentic Assessment. Educ. Sci. 2024, 14, 83. [Google Scholar] [CrossRef]
  32. Almaraz-López, C.; Almaraz-Menéndez, F.; López-Esteban, C. Comparative Study of the Attitudes and Perceptions of University Students in Business Administration and Management and in Education toward Artificial Intelligence. Educ. Sci. 2023, 13, 609. [Google Scholar] [CrossRef]
  33. Tominc, P.; Rožman, M. Artificial Intelligence and Business Studies: Study Cycle Differences Regarding the Perceptions of the Key Future Competences. Educ. Sci. 2023, 13, 580. [Google Scholar] [CrossRef]
  34. Jo, H.; Bang, Y. Analyzing ChatGPT adoption drivers with the TOEK framework. Sci. Rep. 2023, 13, 1–17. [Google Scholar] [CrossRef] [PubMed]
  35. Selwyn, N. The Digital Native—Myth and Reality. Aslib Proc. 2009, 61, 364–379. [Google Scholar] [CrossRef]
  36. Warschauer, M.; Matuchniak, T. New Technology and Digital Worlds: Analyzing Evidence of Equity in Access, Use, and Outcomes. Rev. Res. Educ. 2010, 34, 179–225. [Google Scholar] [CrossRef]
  37. Pew Research Center. Digital Divide Persists Even as Lower-Income Americans Make Gains in Tech Adoption. Available online: https://www.pewresearch.org/fact-tank/2019/05/07/digital-divide-persists-even-as-lower-income-americans-make-gains-in-tech-adoption/ (accessed on 12 July 2024).
  38. Bower, M.; Kenney, J.; Dalgarno, B.; Lee, M.J.W.; Kennedy, G.E. Patterns and Principles for Blended Synchronous Learning: Engaging Remote and Face-to-Face Learners in Rich-Media Real-Time Collaborative Learning Environments. Australas J. Educ. Technol. 2017, 33, 1–23. [Google Scholar] [CrossRef]
  39. Fakhri, M.M.; Ahmar, A.S.; Isma, A.; Fadhilatunisa, R.; Fadhilatunisa, D. Exploring Generative AI Tools Frequency: Impacts on Attitude, Satisfaction, and Competency in Achieving Higher Education Learning Goals. J. Educ. Learn. Innov. 2024, 4, 196–208. [Google Scholar] [CrossRef]
  40. Venkatesh, V.; Davis, F.D. A theoretical extension of the technology acceptance model: Four longitudinal field studies. Manag. Sci. 2000, 46, 186–204. [Google Scholar] [CrossRef]
  41. Teo, T. Factors influencing teachers’ intention to use technology: Model development and test. Comput. Educ. 2011, 57, 2432–2440. [Google Scholar] [CrossRef]
  42. Sánchez, R.A.; Hueros, A.D. Motivational Factors that Influence the Acceptance of Moodle Using TAM. Comput. Hum. Behav. 2010, 26, 1632–1640. [Google Scholar] [CrossRef]
  43. Abdullah, F.; Ward, R. Developing a General Extended Technology Acceptance Model for E-Learning (GETAMEL) by Analysing Commonly Used External Factors. Comput. Hum. Behav. 2016, 56, 238–256. [Google Scholar] [CrossRef]
  44. Cheng, Y. Effects of quality antecedents on e-learning acceptance. Internet Res. 2012, 22, 361–390. [Google Scholar] [CrossRef]
  45. Park, S.Y.; Nam, M.-W.; Cha, S.-B. University Students’ Behavioral Intention to Use Mobile Learning: Evaluating the Technology Acceptance Model. Br. J. Educ. Technol. 2012, 43, 592–605. [Google Scholar] [CrossRef]
  46. AACSB. Available online: https://www.aacsb.edu/insights/articles/2024/06/ai-upskilling-for-business-school-faculty (accessed on 5 June 2024).
  47. Ajzen, I. The theory of planned behavior. Organ. Behav. Hum. Decis. Process. 1991, 50, 179–211. [Google Scholar] [CrossRef]
  48. Teo, T. Modelling technology acceptance in education: A study of pre-service teachers. Comput. Educ. 2009, 52, 302–312. [Google Scholar] [CrossRef]
  49. Davis, F.D. Perceived usefulness, perceived ease of use, and user acceptance of information technology. MIS Q. 1989, 13, 319–340. [Google Scholar] [CrossRef]
  50. Lee, M.K.; Cheung, C.M.; Chen, Z. Acceptance of Internet-based learning medium: The role of extrinsic and intrinsic motivation. Inf. Manag. 2005, 42, 1095–1104. [Google Scholar] [CrossRef]
  51. Yuen, A.H.K.; Ma, W.W.K. Exploring teacher acceptance of e-learning technology. Asia-Pac. J. Teach. Educ. 2008, 36, 229–243. [Google Scholar] [CrossRef]
  52. Kim, J.; Cho, Y.H. My teammate is AI: Understanding students’ perceptions of student-AI collaboration in drawing tasks. Asia Pac. J. Educ. 2023, 43, 1–15. [Google Scholar] [CrossRef]
  53. Holden, H.; Rada, R. Understanding the Influence of Perceived Usability and Technology Self-Efficacy on Teachers’ Technology Acceptance. J. Res. Technol. Educ. 2011, 43, 343–367. [Google Scholar] [CrossRef]
  54. Tarhini, A.; Hone, K.; Liu, X. Measuring the Moderating Effect of Gender and Age on E-Learning Acceptance in England: A Structural Equation Modeling Approach for An Extended Technology Acceptance Model. J. Educ. Comput. Res. 2014, 51, 163–184. [Google Scholar] [CrossRef]
  55. McIntire, A.; Calvert, I.; Ashcraft, J. Pressure to Plagiarize and the Choice to Cheat: Toward a Pragmatic Reframing of the Ethics of Academic Integrity. Educ. Sci. 2024, 14, 244. [Google Scholar] [CrossRef]
  56. Ertmer, P.A.; Ottenbreit-Leftwich, A.T. Teacher Technology Change: How Knowledge, Confidence, Beliefs, and Culture Intersect. J. Res. Technol. Educ. 2010, 42, 255–284. [Google Scholar] [CrossRef]
57. Howard, S.K.; Ma, J.; Yang, J. Student rules: Exploring patterns of students’ computer-efficacy and engagement with digital technologies in learning. Comput. Educ. 2016, 101, 29–42.
58. Zhu, C.; Wang, D.; Cai, Y.; Engels, N. What core competencies are related to teachers’ innovative teaching? Asia-Pac. J. Teach. Educ. 2013, 41, 9–27.
59. Scherer, R.; Siddiq, F.; Tondeur, J. The Technology Acceptance Model (TAM): A Meta-Analytic Structural Equation Modeling Approach to Explaining Teachers’ Adoption of Digital Technology in Education. Comput. Educ. 2019, 128, 13–35.
60. Almasri, F. Exploring the Impact of Artificial Intelligence in Teaching and Learning of Science: A Systematic Review of Empirical Research. Res. Sci. Educ. 2024, 54, 977–997.
61. Davidescu, A.; Strat, V.A. An empirical analysis of regional poles from the perspective of Romanian sustainable development. Procedia Econ. Financ. 2014, 10, 114–124.
62. Romanian Ministry of Education. 2023. Available online: https://www.edu.ro/sites/default/files/_fi%C8%99iere/Minister/2024/div/Raport_metaranking_2023.pdf (accessed on 15 May 2024).
63. Babeș-Bolyai University. Available online: https://consortiul-universitaria.ubbcluj.ro/ (accessed on 13 July 2024).
64. Bucharest University of Economic Studies. Available online: https://international.ase.ro/21/ (accessed on 5 July 2024).
65. Bollen, K.A. Structural Equations with Latent Variables; Wiley-Interscience: New York, NY, USA, 1989.
66. Bentler, P.M. Comparative fit indexes in structural models. Psychol. Bull. 1990, 107, 238–246.
67. Kline, R.B. Principles and Practice of Structural Equation Modeling, 4th ed.; Guilford Press: New York, NY, USA, 2015.
68. Hair, J.F.; Black, W.C.; Babin, B.J.; Anderson, R.E. Multivariate Data Analysis, 8th ed.; Cengage Learning: Boston, MA, USA, 2018.
69. Byrne, B.M. Structural Equation Modeling with AMOS: Basic Concepts, Applications, and Programming, 3rd ed.; Routledge: London, UK, 2016.
70. Hoyle, R.H. (Ed.) Handbook of Structural Equation Modeling; Guilford Press: New York, NY, USA, 2012.
71. Iacobucci, D. Structural equations modeling: Fit indices, sample size, and advanced topics. J. Consum. Psychol. 2010, 20, 90–98.
72. MacCallum, R.C.; Austin, J.T. Applications of structural equation modeling in psychological research. Annu. Rev. Psychol. 2000, 51, 201–226.
73. Harman, H.H. Modern Factor Analysis; University of Chicago Press: Chicago, IL, USA, 1976.
74. Fabrigar, L.R.; Wegener, D.T.; MacCallum, R.C.; Strahan, E.J. Evaluating the use of exploratory factor analysis in psychological research. Psychol. Methods 1999, 4, 272.
75. Kaiser, H.F. The varimax criterion for analytic rotation in factor analysis. Psychometrika 1958, 23, 187–200.
76. Tabachnick, B.G.; Fidell, L.S.; Ullman, J.B. Using Multivariate Statistics; Pearson: Boston, MA, USA, 2013; Volume 6, pp. 497–516.
77. Hu, L.; Bentler, P.M. Cutoff criteria for fit indexes in covariance structure analysis: Conventional criteria versus new alternatives. Struct. Equ. Model. A Multidiscip. J. 1999, 6, 1–55.
78. Crompton, H.; Edmett, A.; Ichaporia, N.; Burke, D. AI and English language teaching: Affordances and challenges. Br. J. Educ. Technol. 2024, 1–27.
79. Bsharat, M.; Ibrahim, O. Quality of service acceptance in cloud service utilization: An empirical study in Palestinian higher education institutions. Educ. Inf. Technol. 2020, 25, 863–888.
80. Zhai, X.; Chu, X.; Chai, C.S.; Jong, M.S.Y.; Istenic, A.; Spector, M.; Liu, J.B.; Yuan, J.; Li, Y. A Review of Artificial Intelligence (AI) in Education from 2010 to 2020. Complexity 2021, 1, 8812542.
81. Venkatesh, V.; Bala, H. Technology Acceptance Model 3 and a Research Agenda on Interventions. Decis. Sci. 2008, 39, 273–315.
82. Kumar, V.R.; Raman, R. Student Perceptions on Artificial Intelligence (AI) in higher education. In Proceedings of the 2022 IEEE Integrated STEM Education Conference (ISEC), Princeton, NJ, USA, 26 March 2022; pp. 450–454.
83. Dahri, N.A.; Yahaya, N.; Al-Rahmi, W.M.; Vighio, M.S.; Alblehai, F.; Soomro, R.B.; Shutaleva, A. Investigating AI-based academic support acceptance and its impact on students’ performance in Malaysian and Pakistani higher education institutions. Educ. Inf. Technol. 2024, 29, 1–50.
Figure 1. Correlation heatmap of observed variables.
Figure 2. Path diagram of the SEM.
Table 1. Descriptive statistics for respondents’ demographic profile.
| Variable | Mean | SD | Median | Min | Max | Range | Skew | Kurtosis | SE |
|---|---|---|---|---|---|---|---|---|---|
| Age Category | 1.75 | 1.16 | 1 | 1 | 5 | 4 | 1.57 | 1.35 | 0.05 |
| Gender | 1.31 | 0.49 | 1 | 1 | 3 | 2 | 1.16 | 0.08 | 0.02 |
| Education Level | 3.25 | 1.16 | 3 | 1 | 6 | 5 | 0.38 | −1.00 | 0.05 |
| Domicile Type | 3.28 | 0.97 | 4 | 1 | 4 | 3 | −0.65 | −1.43 | 0.04 |
Table 2. EFA Results.
| Standardized Loadings | MR1 | MR4 | MR2 | MR3 | MR5 | h2 | u2 | com |
|---|---|---|---|---|---|---|---|---|
| familiarity_AI | 0.68 | 0.21 | 0.06 | 0.05 | 0.04 | 0.52 | 0.48 | 1.2 |
| access_to_AI | 0.73 | 0.18 | 0.15 | 0.06 | 0.04 | 0.59 | 0.41 | 1.2 |
| university_subscription | 0.07 | 0.03 | 0.46 | −0.05 | 0.13 | 0.24 | 0.76 | 1.2 |
| own_subscription | 0.24 | 0.1 | 0.21 | 0.03 | 0.07 | 0.12 | 0.88 | 2.6 |
| usage_frequency | 0.65 | 0.29 | 0.23 | −0.02 | −0.05 | 0.57 | 0.43 | 1.7 |
| efficiency_increase | 0.42 | 0.75 | 0.11 | 0.04 | −0.02 | 0.76 | 0.24 | 1.6 |
| quality_increase | 0.35 | 0.79 | 0.21 | −0.02 | 0.15 | 0.81 | 0.19 | 1.6 |
| ease_of_use | 0.46 | 0.3 | −0.1 | −0.01 | −0.1 | 0.33 | 0.67 | 1.9 |
| formal_training | 0.01 | 0.02 | 0.71 | 0.01 | −0.25 | 0.57 | 0.43 | 1.2 |
| training_usefulness | 0.04 | 0.08 | 0.66 | −0.01 | −0.08 | 0.45 | 0.55 | 1.1 |
| general_attitude | 0.33 | 0.59 | 0.09 | −0.21 | −0.09 | 0.52 | 0.48 | 2 |
| concern_inaccuracy | 0.02 | −0.05 | −0.02 | 0.62 | −0.07 | 0.39 | 0.61 | 1 |
| concern_cheating | 0.12 | −0.04 | −0.08 | 0.75 | −0.1 | 0.6 | 0.4 | 1.1 |
| concern_privacy | −0.06 | 0 | 0.04 | 0.55 | 0.24 | 0.37 | 0.63 | 1.4 |
| AI_integration | 0.19 | 0.12 | 0.64 | −0.03 | 0.18 | 0.49 | 0.51 | 1.4 |
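The h2, u2, and com columns are the usual EFA row summaries: communality (variance of an item explained by the retained factors), uniqueness, and Hoffman's complexity index. A minimal pure-Python sketch that recomputes them from a row of standardized loadings, using the familiarity_AI row above; the small differences from the tabled 0.52/0.48 come from the loadings being rounded to two decimals:

```python
def efa_summary(loadings):
    """Communality (h2), uniqueness (u2), and Hoffman's complexity (com)
    for one row of a standardized EFA loading matrix."""
    h2 = sum(l**2 for l in loadings)           # variance explained by the retained factors
    u2 = 1.0 - h2                              # unique (unexplained) variance
    com = h2**2 / sum(l**4 for l in loadings)  # Hoffman's row-complexity index
    return round(h2, 2), round(u2, 2), round(com, 1)

# familiarity_AI row of Table 2 (loadings on MR1..MR5)
print(efa_summary([0.68, 0.21, 0.06, 0.05, 0.04]))  # → (0.51, 0.49, 1.2)
```

A complexity near 1 (e.g. concern_inaccuracy) means an item loads on essentially one factor; larger values (e.g. own_subscription at 2.6) flag cross-loading items.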
Table 3. CFA model fit indices.
| Fit Index | Value |
|---|---|
| Comparative Fit Index (CFI) | 0.957 |
| Tucker–Lewis Index (TLI) | 0.941 |
| Root Mean Square Error of Approximation (RMSEA) | 0.059 |
| 90% Confidence Interval of RMSEA | 0.048–0.070 |
| Standardized Root Mean Square Residual (SRMR) | 0.049 |
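For reference, the CFI, TLI, and RMSEA reported above are computed from the model and baseline (independence) chi-square statistics via standard formulas (Bentler 1990 [66]; Hu and Bentler 1999 [77]). The chi-square values themselves are not reported in this excerpt, so the inputs in the example call below are purely illustrative placeholders, not the study's values:

```python
import math

def fit_indices(chisq_m, df_m, chisq_b, df_b, n):
    """CFI, TLI, and RMSEA from model (chisq_m, df_m) and baseline
    (chisq_b, df_b) chi-square statistics, with sample size n."""
    d_m = max(chisq_m - df_m, 0.0)                 # model noncentrality
    d_b = max(chisq_b - df_b, 0.0)                 # baseline noncentrality
    cfi = 1.0 - d_m / max(d_b, d_m, 1e-12)
    tli = ((chisq_b / df_b) - (chisq_m / df_m)) / ((chisq_b / df_b) - 1.0)
    rmsea = math.sqrt(d_m / (df_m * (n - 1)))
    return round(cfi, 3), round(tli, 3), round(rmsea, 3)

# Illustrative placeholder values only:
print(fit_indices(chisq_m=200, df_m=80, chisq_b=2000, df_b=105, n=500))
# → (0.937, 0.917, 0.055)
```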
Table 4. CFA standardized factor loadings.
| Latent Variables | Estimate | Std.Err | z-Value | p (>\|z\|) | Std.lv | Std.all |
|---|---|---|---|---|---|---|
| MR1 =~ | | | | | | |
| familiarity_AI | 1 | | | | 0.66 | 0.713 |
| access_to_AI | 1.123 | 0.075 | 14.917 | 0 | 0.741 | 0.74 |
| usage_frequency | 1.203 | 0.08 | 15.13 | 0 | 0.795 | 0.756 |
| MR2 =~ | | | | | | |
| formal_training | 1 | | | | 0.253 | 0.665 |
| training_usefulness | 4.7 | 0.445 | 10.558 | 0 | 1.191 | 0.717 |
| AI_integration | 2.439 | 0.234 | 10.428 | 0 | 0.618 | 0.621 |
| MR3 =~ | | | | | | |
| concern_inaccuracy | 1 | | | | 0.554 | 0.627 |
| concern_cheating | 1.487 | 0.183 | 8.113 | 0 | 0.823 | 0.767 |
| concern_privacy | 0.923 | 0.108 | 8.556 | 0 | 0.511 | 0.494 |
| MR4 =~ | | | | | | |
| efficiency_increase | 1 | | | | 0.861 | 0.888 |
| quality_increase | 0.974 | 0.04 | 24.128 | 0 | 0.838 | 0.867 |
| general_attitude | 0.603 | 0.034 | 17.567 | 0 | 0.519 | 0.668 |
Table 5. Estimated variances of the latent constructs.
| Variances | Estimate | Std.Err | z-Value | p (>\|z\|) | Std.lv | Std.all |
|---|---|---|---|---|---|---|
| .familiarity_AI | 0.423 | 0.033 | 12.913 | 0 | 0.423 | 0.492 |
| .access_to_AI | 0.455 | 0.037 | 12.203 | 0 | 0.455 | 0.453 |
| .usage_frequency | 0.474 | 0.04 | 11.705 | 0 | 0.474 | 0.428 |
| .formal_training | 0.081 | 0.007 | 11.183 | 0 | 0.081 | 0.558 |
| .training_usefulness | 1.345 | 0.143 | 9.421 | 0 | 1.345 | 0.487 |
| .AI_integration | 0.608 | 0.049 | 12.485 | 0 | 0.608 | 0.614 |
| .concern_inaccuracy | 0.472 | 0.045 | 10.51 | 0 | 0.472 | 0.606 |
| .concern_cheating | 0.476 | 0.083 | 5.765 | 0 | 0.476 | 0.412 |
| .concern_privacy | 0.808 | 0.057 | 14.266 | 0 | 0.808 | 0.756 |
| .efficiency_increase | 0.199 | 0.024 | 8.232 | 0 | 0.199 | 0.212 |
| .quality_increase | 0.233 | 0.024 | 9.541 | 0 | 0.233 | 0.249 |
| .general_attitude | 0.333 | 0.022 | 15.217 | 0 | 0.333 | 0.553 |
| MR1 | 0.436 | 0.049 | 8.904 | 0 | 1 | 1 |
| MR2 | 0.064 | 0.009 | 7.169 | 0 | 1 | 1 |
| MR3 | 0.307 | 0.051 | 6.035 | 0 | 1 | 1 |
| MR4 | 0.741 | 0.058 | 12.722 | 0 | 1 | 1 |

A leading dot marks the residual variance of an observed indicator; MR1–MR4 are the variances of the latent factors themselves.
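The Std.lv and Std.all columns follow the usual standardizations for reflective indicators: Std.lv rescales an unstandardized loading by the latent standard deviation, and Std.all further divides by the model-implied indicator standard deviation. A small sketch checking this against the access_to_AI numbers (loading 1.123 from Table 4; MR1 variance 0.436 and residual variance 0.455 from this table), which matches the tabled 0.741/0.74 up to rounding:

```python
import math

def standardize_loading(estimate, latent_var, resid_var):
    """Standardize a reflective-indicator loading:
    Std.lv  = estimate * sd(latent)
    Std.all = Std.lv / sd(indicator), where the model-implied
    indicator variance is Std.lv**2 + residual variance."""
    std_lv = estimate * math.sqrt(latent_var)
    std_all = std_lv / math.sqrt(std_lv**2 + resid_var)
    return round(std_lv, 3), round(std_all, 3)

# access_to_AI on MR1: loading 1.123 (Table 4), latent variance 0.436
# and residual variance 0.455 (this table)
print(standardize_loading(1.123, 0.436, 0.455))  # → (0.742, 0.74)
```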
Table 6. SEM model fit indices.
| Fit Index | SEM (Conventional) | SEM with Robust Standard Errors |
|---|---|---|
| Comparative Fit Index (CFI) | 0.962 | 0.965 |
| Tucker–Lewis Index (TLI) | 0.945 | 0.949 |
| Root Mean Square Error of Approximation (RMSEA) | 0.055 | 0.054 |
| 90% Confidence Interval of RMSEA | 0.043–0.065 | 0.043–0.067 |
| Standardized Root Mean Square Residual (SRMR) | 0.046 | 0.046 |
Table 7. Robust SEM estimation results.
| Latent Variables | Estimate | Std.Err | z-Value | p (>\|z\|) | Std.lv | Std.all |
|---|---|---|---|---|---|---|
| MR1 =~ | | | | | | |
| familiarity_AI | 1 | | | | 0.685 | 0.739 |
| access_to_AI | 1.115 | 0.077 | 14.523 | 0 | 0.764 | 0.762 |
| usage_frequency | 0.896 | 0.117 | 7.682 | 0 | 0.614 | 0.584 |
| MR2 =~ | | | | | | |
| formal_training | 1 | | | | 0.253 | 0.664 |
| training_usefulness | 4.713 | 0.39 | 12.076 | 0 | 1.193 | 0.718 |
| AI_integration | 2.439 | 0.27 | 9.018 | 0 | 0.617 | 0.621 |
| MR3 =~ | | | | | | |
| concern_inaccuracy | 1 | | | | 0.553 | 0.626 |
| concern_cheating | 1.49 | 0.228 | 6.527 | 0 | 0.823 | 0.767 |
| concern_privacy | 0.927 | 0.122 | 7.595 | 0 | 0.512 | 0.495 |
| MR4 =~ | | | | | | |
| efficiency_increase | 1 | | | | 0.862 | 0.89 |
| quality_increase | 0.969 | 0.039 | 24.908 | 0 | 0.836 | 0.864 |
| general_attitude | 0.602 | 0.037 | 16.305 | 0 | 0.519 | 0.669 |
| Regressions | | | | | | |
| MR4 ~ | | | | | | |
| MR1 | 0.835 | 0.078 | 10.673 | 0 | 0.663 | 0.663 |
| MR2 | 0.422 | 0.164 | 2.583 | 0.01 | 0.124 | 0.124 |
| MR3 | −0.182 | 0.083 | −2.177 | 0.029 | −0.116 | −0.116 |
| MR3 ~ | | | | | | |
| MR1 | 0.12 | 0.06 | 1.994 | 0.046 | 0.148 | 0.148 |
| MR2 | −0.218 | 0.146 | −1.495 | 0.135 | −0.1 | −0.1 |
| MR2 ~ | | | | | | |
| MR1 | 0.096 | 0.024 | 3.985 | 0 | 0.261 | 0.261 |
| usage_frequency ~ | | | | | | |
| MR2 | 0.569 | 0.194 | 2.94 | 0.003 | 0.144 | 0.137 |
| MR3 | −0.079 | 0.084 | −0.941 | 0.347 | −0.044 | −0.042 |
| MR4 | 0.173 | 0.075 | 2.319 | 0.02 | 0.149 | 0.142 |
Table 8. Indirect effects estimation results.
| Indirect Effect | Estimate | Std.Err | z-Value | p (>\|z\|) | Std.lv | Std.all |
|---|---|---|---|---|---|---|
| Ind1 | 0.041 | 0.016 | 2.51 | 0.012 | 0.032 | 0.032 |
| Ind2 | −0.022 | 0.014 | −1.557 | 0.12 | −0.017 | −0.017 |
| Ind3 | 0.055 | 0.018 | 3.029 | 0.002 | 0.038 | 0.036 |
| Ind4 | −0.009 | 0.011 | −0.87 | 0.384 | −0.007 | −0.006 |
| Ind5 | 0.145 | 0.06 | 2.4 | 0.016 | 0.099 | 0.094 |
| Ind6 | 0.073 | 0.044 | 1.678 | 0.093 | 0.019 | 0.018 |
| Ind7 | −0.031 | 0.018 | −1.756 | 0.079 | −0.017 | −0.017 |
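The labels Ind1–Ind7 are not defined in this excerpt. Assuming the usual SEM convention of estimating each indirect effect as the product of its component path coefficients, the regression estimates in Table 7 reproduce the tabled values almost exactly; the chain assigned to each label in the sketch below is therefore an inference from that arithmetic, not something stated in the source:

```python
# Unstandardized path coefficients from Table 7
paths = {
    ("MR1", "MR2"): 0.096, ("MR1", "MR3"): 0.12, ("MR1", "MR4"): 0.835,
    ("MR2", "MR4"): 0.422, ("MR3", "MR4"): -0.182,
    ("MR2", "usage"): 0.569, ("MR3", "usage"): -0.079, ("MR4", "usage"): 0.173,
}

def indirect(*chain):
    """Indirect effect as the product of consecutive path coefficients."""
    est = 1.0
    for a, b in zip(chain, chain[1:]):
        est *= paths[(a, b)]
    return round(est, 3)

print(indirect("MR1", "MR2", "MR4"))    # 0.041  -> Ind1
print(indirect("MR1", "MR3", "MR4"))    # -0.022 -> Ind2
print(indirect("MR1", "MR2", "usage"))  # 0.055  -> Ind3
print(indirect("MR1", "MR3", "usage"))  # -0.009 -> Ind4
print(indirect("MR1", "MR4", "usage"))  # 0.144  -> Ind5 (table: 0.145, from unrounded paths)
print(indirect("MR2", "MR4", "usage"))  # 0.073  -> Ind6
print(indirect("MR3", "MR4", "usage"))  # -0.031 -> Ind7
```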