Article

Use of Generative AI by Higher Education Students

1 School of Tourism and Maritime Technology, CiTUR, Polytechnic University of Leiria, 2411-901 Leiria, Portugal
2 Laboratory of Distance Education and e-Learning, Universidade Aberta, 1250-100 Lisboa, Portugal
* Author to whom correspondence should be addressed.
Electronics 2025, 14(7), 1258; https://doi.org/10.3390/electronics14071258
Submission received: 2 February 2025 / Revised: 19 March 2025 / Accepted: 20 March 2025 / Published: 22 March 2025
(This article belongs to the Special Issue Techniques and Applications in Prompt Engineering and Generative AI)

Abstract

This research aims to explore the use, perceptions, and challenges associated with generative AI (GenAI) among higher education students. As GenAI technologies, such as language models, image generators, and code assistants, become increasingly prevalent in academic settings, it is essential to understand how students engage with these tools and their impact on their learning process. The study investigates students’ awareness, adoption patterns, and perceptions of generative AI’s role in academic tasks, alongside the benefits they identify and the challenges they face, including ethical concerns, reliability, and accessibility. Through quantitative methods, the research provides a comprehensive analysis of student experiences with generative AI in higher education. The findings aim to inform educators, technologists, and institutions about the opportunities and barriers of integrating these technologies into educational practices and guide the development of strategies that support effective and responsible AI use in academia.

1. Introduction

The development of artificial intelligence (AI) is revolutionizing various sectors, including education, and is likely to redefine the ways of teaching and learning. In higher education, generative artificial intelligence (GenAI) has many applications that can potentially assist in customizing learning, promoting collaborative learning and meta-cognition, providing useful feedback, and fostering student motivation, thereby improving students’ learning experiences [1]. GenAI technologies, especially large language models (LLMs) such as ChatGPT, developed by OpenAI, have attracted public and, in particular, educational interest due to their promise of comprehending and producing human-like natural language, generating coherent responses, and offering personalized feedback [2]. Such tools allow educators to design their own learning content, examine student work more efficiently, and promote interactive learning environments, with the potential of making education more accessible and equitable [3]. However, for this to happen, certain conditions need to be met, as discussed further in this paper; otherwise, GenAI may well worsen existing biases and inequalities.
However, the use of GenAI in higher education is not without its problems. Issues with the use of GenAI systems include concerns about data security, moral principles, and overdependence on technology [4]. Furthermore, differences in technological infrastructure and digital competence at the institutional level may worsen existing inequalities in the availability of high-quality education [5]. Educators worldwide are concerned about the potential for increased plagiarism and the automation of tasks [6]; therefore, it becomes crucial to understand how students are making use of GenAI in order to take full advantage of its features and minimize the risks associated with potential misuse.
As GenAI technology continues to develop, understanding how students engage with these tools around the world is essential to effectively integrating them into teaching, learning, and research. The integration of the technology acceptance model (TAM) and task–technology fit (TTF) theory in this study contributes to its relevance, as it provides a broad basis for exploring the multifaceted aspects of GenAI tools among higher education students.
This work thus investigates how higher education students in Portugal are making use of generative AI technologies in their academic life, particularly students’ awareness, adoption patterns, and perceptions of GenAI’s role in academic tasks, alongside the benefits they identify and the challenges they face, including ethical concerns, reliability, and accessibility. Simultaneously, we intend to compare the results obtained in this research with the study by Almassaad et al. [7], developed in Saudi Arabia, since, according to the AI Index Report 2023 [8], students in this country are among the global leaders in the adoption of GenAI tools, and Saudi Arabia is actively exploring the potential of GenAI in various fields, including education, with the aim of effectively leveraging its capabilities [7]. Based on data from a 2022 IPSOS survey (Figure 1), the AI Index Report also states that Saudi Arabia comes in second place (76%), right after Chinese respondents (78%), when it comes to how positive respondents feel towards AI products. According to the same study, Saudi Arabia has had a national AI strategy since 2020, an important indicator of how aware countries are of, and how much they prioritize, the management and regulation of AI technologies. Portugal implemented its national AI strategy in 2019, one year earlier, which also allows us to compare trends in the two countries’ national strategies.
The research questions for this study are:
  • What are higher education students’ adoption practices of GenAI technologies?
  • What are the potential benefits and challenges of using GenAI technologies, as perceived by students?
  • How do Portuguese students’ perceptions of GenAI compare with Saudi Arabian students’ perceptions?

2. Literature Review

2.1. An Overview of Artificial Intelligence in Education

Although artificial intelligence has gained significant attention and momentum in recent years, it is not a new concept; its foundations were established decades ago, with its origins tracing back to early theoretical work in the mid-20th century, particularly associated with the fields of mathematics, computer science, and even philosophy [9,10]. In fact, the foundations for the concept of AI were laid in the paper “Computing Machinery and Intelligence”, in which Alan Turing postulated that machines could simulate human intelligence through computation, with the Turing Test then proposed as a means to measure machine intelligence [11]. The term artificial intelligence was later coined in 1956 during the Dartmouth Conference; the context was not based on a common methodology or general theory proposed by the conference participants, but rather on the shared vision that computers can be made to perform intelligent tasks [12]. The guiding principle was to proceed on the conjecture that every aspect of learning, or any other characteristic of intelligence, can be described so precisely that a machine can be made to simulate it. Despite further developments, the field has experienced several periods of stagnation, known as “AI winters,” mainly due to reduced interest and funding, as well as limited computational power [13,14,15].
However, the field has seen greater progress more recently, particularly since the 1990s, thanks to advancements in computing capacity and machine learning, together with the availability of large amounts of data and other innovative computational approaches. Currently, AI is involved in most aspects of our lives: we find AI systems at the core of everything, from mobile apps to online shopping, weather forecasts to medical diagnoses, financial and legal services to autonomous vehicles, and much more.
But it is not only industries that have been transformed by AI. Education is also being reshaped by its potential. If we look at artificial intelligence through an educational lens, the definition provided by UNICEF becomes particularly relevant: “AI refers to machine-based systems that can, given a set of human-defined objectives, make predictions, recommendations, or decisions that influence real or virtual environments. AI systems interact with us and act on our environment, either directly or indirectly. Often, they appear to operate autonomously and can adapt their behavior by learning about the context” [16] (p. 13). This is crucial in the educational field because it emphasizes the role of human interactions and, despite acting on our environment, AI systems do so based on human decision-making. This principle is particularly relevant when considering generative AI (GenAI), which, while capable of producing content autonomously, still depends on human guidance to set objectives, refine outputs, and ensure ethical use.

2.2. Generative Artificial Intelligence (GenAI) and Its Potential for Learning

According to Ref. [15], “GenAI is an unsupervised or partially supervised machine learning framework that seamlessly generates artificial creations by analyzing existing digital content like videos, images/graphics, text, and audio”. It characterizes a class of AI systems conceived to generate different types of data, from simple text or images to complex combinations of data, by identifying and mimicking patterns and structures through machine learning techniques, in particular deep learning [17]. Machine learning (ML) refers to analytical models that capitalize on the availability of data and computing power to generate predictions, rules, recommendations, and similar outcomes, therefore allowing humans to formalize their knowledge into more efficient, machine-accessible forms [16]. Deep learning (DL) represents some of the progress made within ML through the evolution of artificial neural networks (ANNs), allowing increasingly deep neural network architectures with advanced learning abilities [18,19].
In education, generative AI serves as a tool to improve learning rather than replace human educators, supporting tasks such as content creation, personalized feedback, and adaptive learning experiences. However, its effectiveness depends on how educators and students engage with it, thus reinforcing the idea suggested in UNICEF’s definition that AI should expand—not replace—human expertise and creativity in the learning process. One of the most significant breakthroughs in GenAI is generative pre-trained transformer (GPT) models, which underlie the widespread ChatGPT tools and are based on the use of “publicly available digital content data (natural language processing [NLP]) to read and produce human-like text in several languages and can exhibit creativity in writing” [17] (p. 53). In fact, tools fueled by GenAI, such as language models and content creation platforms, have been increasingly incorporated into educational practices in order to enhance learning experiences, automate administrative tasks, and support personalized education for students.
However, despite the excitement around recent developments, AI also raises multiple concerns, and its role in education brings both benefits as well as challenges.

2.3. Benefits, Challenges, and Ethical Concerns

Just like previous technology-driven tools or applications, such as the introduction of the first computers and the internet, learning management systems (LMSs), and mobile learning tools, GenAI also holds transformative potential, particularly if we consider the incredibly large number of applications that have been developed for multiple functionalities, such as:
  • AI-powered writing and content creation (like ChatGPT, Claude, or Microsoft Copilot);
  • AI-based tutoring and personalized learning (Socratic, Khanmigo or Quizlet AI);
  • AI for coding and computational thinking (GitHub Copilot, Code.org AI Lab, or Replit Ghostwriter);
  • AI-powered language learning (Duolingo Max, Talk-to-ChatGPT, or Elsa Speak);
  • AI for research and study support (Elicit, Scite.ai, or Perplexity AI);
  • AI-based presentations and lesson planning (Canva Magic Write, Slidesgo AI, or Curipod);
  • AI-powered assessment and feedback (Grammarly Go, Turnitin AI Detection, or Gradescope);
  • AI for STEM education and simulations (Wolfram Alpha, Labster AI, or PhET AI Simulations).
These tools represent only some of the possibilities, but they allow us to explain how generative AI is transforming many different fields of education within the ideal of promoting enhanced teaching and learning experiences.
The benefits it brings to the educational field have been extensively studied in recent literature. For teachers, these benefits include enhanced content creation that can be used to expand traditional teaching methods, creating a more interactive and personalized learning experience [6,20,21], automating administrative tasks such as grading and scheduling, enhancing active learning and problem-solving skills [21], and providing adaptive assessment and feedback, thus improving learning outcomes [22], among others. For students, several studies have also identified multiple advantages of the use of GenAI tools, such as instant and personalized learning support [20]; immediate and diverse feedback [22]; personalized tutoring [17]; writing support to students, especially non-native English-speaking students [23]; and promoting critical thinking skills and therefore helping reduce learning barriers, improve work efficiency, and save time [7], among others.
However, this does not happen without challenges [3]. Equity of access, data privacy, and ethical issues are some of the main challenges in using GenAI tools in education. Several concerns such as lack of authenticity and academic integrity, quality of prompts, response variability and hallucinations, over-reliance on technology, inaccurate or biased information, and ethical and security concerns have also been noted by Refs. [24,25,26].
These challenges emphasize the importance of conducting thorough studies on the real use of GenAI tools in the context of student learning, specifically to research its applications and implications in shaping the future of educational practices, in order to promote its ethical and responsible implementation.

2.4. Gaps in Current Research

The main studies cited in the current research provide an extensive review of how generative AI has been studied in recent years. The overview of these studies contributes to a nuanced understanding of both the potential and challenges of using generative AI in educational settings, not only because they employ a variety of methodologies to explore the role of GenAI in education, but also because they allow us to identify the main gaps in current research. On the one hand, studies such as Refs. [7,20] investigate student and teacher perceptions through survey-based research; on the other hand, Refs. [22,25] adopt case studies as a methodology to analyze specific examples of how GenAI is integrated in educational contexts, exploring its contextual factors and outcomes. The current literature review is also supported by bibliometric and content analysis [21] and scoping reviews [22], which help review the existing literature and identify its trends and main arguments, as well as potential gaps in research.
The studies by Refs. [17,20,22] focus on student perceptions of GenAI and share as common findings the recognition of GenAI as a useful tool in students’ educational process, as well as the main challenges perceived by students, such as fear of overdependence and its impact on cognitive abilities like critical thinking. On the other hand, the research by Refs. [6,22] concludes that, although GenAI may allow the teaching and learning process to become more customized, dynamic, and potentially efficient, it may bring inherent risks such as misinformation and equity issues in access, which also depend on different institutional adaptation and implementation levels. Furthermore, Refs. [6,21,24,25] highlight potential ethical issues, such as data privacy and potential manipulation, thus reinforcing the need for regulation.
Although the literature review reveals the potential of GenAI for reshaping education, there are still some research gaps that need to be addressed, particularly as the body of research around GenAI expands. There is a need to develop comprehensive guidelines on its use in different educational settings if it is to truly bridge the gap between access to technology and cultural bias, and pave the way for an inclusive and effective landscape. Similarly, as research on GenAI continues to develop, conditions are being created for longitudinal research, allowing researchers to understand the long-term impact of GenAI on educational practices. This issue connects deeply with the need for further research on how GenAI can effectively be integrated into existing curricular programs and teaching and learning methodologies. Finally, drawing on the conclusions in Ref. [21], we also point out that research should address and try to mitigate the existing biases in the global database of knowledge, in order not to perpetuate stereotypes or discriminate against certain groups. Similarly, it should also explore how GenAI can make learning more inclusive and accessible to students with disabilities or language barriers.

2.5. Theoretical Framework

2.5.1. TAM Model

The technology acceptance model (TAM) is a theoretical framework that identifies and explains the key factors influencing technology adoption and use [27].
The TAM model emphasizes two main components: perceived ease of use and perceived usefulness [28]. Perceived ease of use is the degree to which an individual believes that using the technology will be easy, and perceived usefulness is the degree to which an individual believes that using the technology will be useful and will improve his or her performance [27]. According to the causal relationship suggested in TAM, an individual will have a positive attitude towards the use of technology if the individual considers it to be easy and useful.
This model has been used by researchers in educational contexts to reveal the nuances of students’ perceptions and behaviors in accepting new technologies [29]. Thus, TAM can be an instrumental tool to understand to what extent students interact with or will eventually interact with AI tools. In the context of artificial intelligence, TAM helps to understand how the perceived ease of use and usefulness of AI-oriented technologies impact the adoption of GenAI tools. More specifically, the TAM, in the case of this study, sets the background to understand students’ awareness, familiarity, and willingness to use GenAI tools in their academic work. In the context of GenAI tools, perceived usefulness reflects how students believe these tools improve their academic performance, while perceived ease of use is related to their comfort and familiarity in using these technologies. Therefore, we intend to verify how these factors influence the adoption and use of GenAI tools in their academic career.

2.5.2. Task–Technology Fit (TTF)

Task–technology fit (TTF) theory posits that the effectiveness of technology in enhancing performance depends on how well it aligns with the requirements of a specific task [30]. Task–technology fit refers to the extent to which a technology meets users’ needs in accomplishing their goals [31].
As a framework to assess the suitability of a technology for a task [30], TTF associates the efficiency of the technology with its compatibility with the task requirements, user attributes, and organizational context. The greater the match between technology and task, the greater the potential benefits [32]. Thus, the TTF model postulates that technology acceptance depends on its suitability to the requirements of a specific task [33] and extends the TAM by taking into account how the task influences intention [34]. Essentially, the TTF model illustrates the relationship between the “tasks” users seek to accomplish, the “technology” or digital tools at their disposal, and their willingness to use these tools—a decision largely affected by the perceived alignment between their tasks and the technology on offer [35]. Ref. [36] confirmed that task and technology characteristics affect the TTF model. More specifically, the TTF model is crucial for assessing the effect of digital interactions, namely in technologically advanced learning environments for students [37]. A strong task–technology adaptation, achieved through well-designed features, leads to greater user satisfaction, as it effectively satisfies their needs [38]. Essentially, users are more likely to engage with a digital product when its “technology” directly supports the specific “tasks” they intend to perform [39].
The TTF model has proven effective in understanding user behavioral intention [36]. Widely recognized as a key factor in successful information technology decisions in various types of digital transformation [40,41], the TTF model has significantly impacted user adoption. When applied to AI, the TTF model can analyze the degree of alignment of AI technologies with specific tasks or roles, which helps determine whether AI can increase productivity, performance, and satisfaction in task performance [42]. Furthermore, TTF theory can be used to gain deeper insight into higher education students’ perceptions of GenAI tools and the benefits and challenges they experience in their learning. In this framework, task requirements guide how students engage in their academic tasks, influencing their use of GenAI tools, while technology functionality refers to the capabilities of these tools to support those tasks.

3. Material and Methods

To investigate students’ use and perceptions of GenAI tools, we conducted a quantitative study using a survey. The survey was designed to understand the use that higher education students make of generative artificial intelligence tools and their perceptions regarding the benefits and challenges associated with their use. The survey questions were developed based on relevant literature and the research questions listed above.
The questionnaire applied was mainly based on the questionnaire survey used in the study by Ref. [7], whose research instrument had been previously applied and validated. Therefore, one of the objectives of the current research is to gather data from Portuguese higher education students and compare the results obtained with those of the study by Ref. [7] as well as with other similar studies, in order to advance scientific knowledge regarding the use and perceptions of GenAI tools by higher education students.
The survey was divided into the following sections: demographic information, use of GenAI tools and reasons for non-use, and perceived benefits and challenges associated with their use. The survey was developed using the technology acceptance model (TAM) and the task–technology fit (TTF) framework [7]. Thus, the TAM model is represented in the awareness and familiarity section by the question “Do you use generative AI tools in higher education?” and in the question about educational objectives, “Uses generative AI tools in teaching?”, as these questions contain the concepts of perceived ease of use and perceived usefulness and also reflect TTF’s focus on task requirements. TTF is further illustrated by the perceived benefits, such as “enhances academic performance,” “improves learning engagement,” “enhances general language ability,” and “fosters critical thinking and problem-solving,” and by the challenges encountered when using GenAI tools, such as “provides inaccurate or false references” and “provides unreliable information,” which correspond to technology functionality [7].
In the specific case of the present research, the TTF theory was used to gain deeper insights into higher education students’ perceptions of GenAI tools and the benefits and challenges they experience in their learning. Given that the task–technology fit suggests that the effectiveness of technology relies on the alignment between task requirements and technology functionality, we thus verified that task requirements guide the way students engage with their academic tasks, influencing their use of GenAI tools, while technology functionality concerns the capabilities of these tools to support these tasks.
The online questionnaire was created in Google Forms and was distributed via social networks to students currently attending higher education in Portugal. Social media platforms provide researchers with an efficient way to connect with a large, diverse, and geographically dispersed audience in real time. This was particularly beneficial for our study, which sought to collect data from participants across multiple regions in Portugal. As Ref. [26] highlights, Facebook is increasingly recognized as a valuable research tool in the social sciences, offering access to a broad and varied pool of participants who can be selectively recruited for both online and offline studies. Similarly, Ref. [43] emphasizes the advantages of using social media for survey distribution, noting its convenience, flexibility in survey design, cost-effectiveness, respondent anonymity, and ability to reach a wider audience across different locations.
The survey was available from 15 October 2024 to 30 December 2024. A total of 132 students participated in the survey, with 62.9% identifying as female and 37.1% as male. Among the respondents, 77.3% were enrolled in an undergraduate program, 10.6% were pursuing a master’s degree, 3.8% were doctoral students, and 8.3% were enrolled in a TeSP—a higher vocational technical course (Table 1).
The quantitative survey data were analyzed using descriptive statistics to calculate frequency distributions and summarize participants’ responses to Likert scale questions. Inferential statistical analyses were also conducted, and the appropriate statistical tests were selected based on the distribution of the data, with results discussed with reference to the relevant literature. The Shapiro–Wilk test assessed the normality of distribution for quantitative variables, and the nonparametric Mann–Whitney test compared differences in data distribution between groups. Furthermore, to address concerns about the non-validated questionnaire, we assessed internal consistency through Cronbach’s alpha, with a score of 0.82, which is considered adequate for research purposes. For all hypothesis tests, a 95% confidence interval was used, with statistical significance set at p < 0.05. All statistical analyses were performed with SPSS v29.
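Although the analyses were carried out in SPSS, the workflow described above can be sketched in general-purpose code for readers who wish to reproduce it with other tools. The following Python snippet is only an illustrative sketch, not the authors’ analysis script; the file name and column names (for example, pb_1 to pb_8 for the benefit items) are hypothetical.

# Illustrative sketch of the descriptive and normality checks described above
# (hypothetical file and column names; not the authors' SPSS procedure).
import pandas as pd
from scipy import stats

df = pd.read_csv("survey_responses.csv")  # hypothetical export of the questionnaire

# Frequency distribution of a single Likert item (descriptive statistics)
print(df["pb_easy_to_use"].value_counts(normalize=True).round(3))

# Composite scores for the two constructs (eight benefit items, nine challenge items)
df["pb_score"] = df[[f"pb_{i}" for i in range(1, 9)]].sum(axis=1)
df["pc_score"] = df[[f"pc_{i}" for i in range(1, 10)]].sum(axis=1)

# Shapiro-Wilk normality test; p < 0.05 indicates a departure from normality,
# motivating the non-parametric tests reported in Section 4.4.
for col in ("pb_score", "pc_score"):
    w, p = stats.shapiro(df[col])
    print(f"{col}: W = {w:.3f}, p = {p:.4f}")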

4. Findings and Discussion

4.1. Higher Education Students’ Use of GenAI Tools

Among the respondents, 129 (97.7%) use GenAI tools. Only 3 students (2.3%) do not, citing lack of interest as the main reason. In the study by Ref. [7], some interviewees justified not using GenAI with a lack of familiarity, experience, and confidence in using these emerging tools. Those students believed they could achieve better results through their intellectual and problem-solving skills than by using AI tools. In fact, some students considered that in scientific areas, such as engineering and the humanities, AI tools are not useful because of their inaccurate, misleading, and not very useful responses to prompts. In line with this finding, the studies by Refs. [2,44] also suggest that some students are not interested in using these tools due to a lack of digital literacy or technological proficiency, lack of experience, and lack of knowledge.
Regarding the frequency of use, among the 129 students who use GenAI tools, 27.3% use these tools very often, 36.4% use them often, 28% reported occasional use, and 8.3% rarely use these tools (Figure 2). These data reveal different levels of use of generative artificial intelligence tools among higher education students, highlighting varying degrees of dependence and familiarity with these technologies.
The fact that 27.3% of students use GenAI tools very often suggests that, for part of the student population, these tools have become part of their academic routine, as they report relying on GenAI for tasks such as content generation, writing assistance, coding support, brainstorming ideas, and summarizing complex materials, demonstrating a high level of AI integration in their learning processes. Their frequent use may stem from the perceived benefits of efficiency, accessibility, and enhancement of cognitive tasks, as well as a growing comfort in integrating AI into academic workflows [20,45]. The 36.4% of students who use GenAI often represent the largest group, suggesting that while they do not depend on these tools daily, they regularly incorporate them into their studies. These students may use AI strategically, for specific assignments, research tasks, or language refinement. Their engagement indicates a moderate but consistent reliance, most likely balancing AI assistance with traditional learning approaches [7,20]. The 28% of students who use GenAI tools occasionally suggest a more situational or experimental approach to AI adoption. This group may still be exploring the capabilities of these tools, using them only when necessary, or might have concerns about reliability, academic integrity, or institutional policies that limit their engagement. These students may also prefer traditional study methods and only use AI when they perceive it as beneficial for specific challenges. Finally, the 8.3% of students who rarely use GenAI tools reflect a segment of the population that either lacks awareness, access, or confidence in AI tools, or consciously chooses not to engage with them. Their limited use could be due to ethical concerns, a preference for independent learning, or institutional restrictions discouraging AI-generated content. Additionally, some students may perceive AI as unnecessary for their field of study or lack the digital literacy skills required to integrate these tools effectively [20,46,47,48].
Data in Table 2 below reveal that the use of GenAI tools is becoming a common practice in higher education, as 62.9% of respondents describe it as widespread among peers, which also indicates its increasing acceptance and recognition in educational settings. However, only 27.8% indicate that its use is encouraged by teachers, which suggests that there is still a long way to go for teachers to understand the benefits of using GenAI and integrate it into the teaching and learning process. Despite this, the number is encouraging in that it is in line with other similar studies [7,49] suggesting that an increasing number of teachers are starting to make use of GenAI, recognizing its potential advantages for improving learning and collaboration among students. Furthermore, 28% of respondents to this study are aware of the rules or guidelines established by the university for the responsible use of GenAI tools. This number, higher than in previous studies [7,48], seems to indicate a growing awareness and commitment among higher education institutions to a culture of responsible use of GenAI tools in the teaching and learning process.
Regarding the use of specific generative AI tools by higher education students, the findings reveal a clear divergence in preferences and adoption patterns. On one hand, there is a considerable concentration of use around ChatGPT, which dominates as the preferred GenAI tool among students. The overwhelming reliance on ChatGPT (93.8%) aligns with the existing literature [7,47,50,51], reinforcing its status as the most widely recognized and accessible AI-powered assistant in academic settings. Several factors could explain this trend, including ChatGPT’s user-friendly interface, free accessibility (at least in its basic version), versatility across different academic tasks, and extensive media exposure, all of which have contributed to its widespread adoption. Ref. [46] identifies perceived ease of use, perceived usefulness, feedback quality and assessment quality, and trust as the factors that most consistently lead ChatGPT users to adopt it. Beyond ChatGPT, the usage rates of other GenAI tools drop significantly, illustrating a long-tail distribution of AI adoption among students. Gemini (26.4%) emerges as the second most popular tool, likely due to Google’s ecosystem integration, which offers students a seamless experience across platforms. QuillBot (18.6%), primarily known for paraphrasing and text refinement, suggests that students leverage AI not only for generating content but also for improving their writing quality. Similarly, GPTZero (13.2%), an AI detection tool, indicates that some students are actively engaging with verification mechanisms, either out of curiosity or to ensure compliance with academic integrity policies. A smaller subset of students explores more specialized GenAI tools, such as Galileo AI (10.1%) and Socratic (9.3%), which focus on visual and interactive learning support. The relatively low adoption of Copilot (8.5%), despite its integration into Microsoft products, suggests that AI assistants embedded in productivity tools may not yet be as widely recognized or utilized in academic workflows. The 6.8% of students who use niche tools like Elicit, PDF AI, Aithor, Napkin AI, ChatPDF, and Aria suggest that some learners actively seek task-specific AI applications (Table 3). These tools offer capabilities such as literature review automation (Elicit), AI-powered PDF summarization (ChatPDF, PDF AI), and structured notetaking (Napkin AI), indicating a growing interest in AI for research and personalized learning support.
When asked about the main purposes of using GenAI tools to support their academic tasks (Table 4), the majority of students reported using them to define or clarify concepts (76%), to generate ideas while writing (68.2%), and to assist in completing assignments (65.1%), while a smaller share use them to assist with home exams (14%). Among the primary purposes of using generative AI tools, their role in assisting with writing, understanding academic material, and supporting task completion stands out. Specifically, 36.4% of students use GenAI to enhance the quality of their writing, while 31% rely on it for proofreading and editing. Additionally, 42.6% use these tools to summarize articles, books, and videos, making information more accessible and digestible. These findings align with previous studies [7,20,52,53], which highlight the widespread use of ChatGPT, Grammarly, and QuillBot for learning, writing, and research. These tools support various academic activities, including idea generation, literature searches, text summarization, grammar correction, brainstorming, paraphrasing, and hypothesis formulation based on data analysis.
Another significant finding is that 45% of respondents use GenAI tools for translation purposes, indicating that many students seek not only to understand academic content but also to reinforce their knowledge by consulting additional sources [53,54,55]. This suggests that students view GenAI as a means to broaden their academic comprehension and improve their multilingual literacy. Beyond text-based applications, students also leverage GenAI tools for collaborative and creative academic tasks. The data show that 40.3% use them to facilitate project work, 27.1% to assist in creating digital media and presentations, and 22.5% to solve numerical problems. These insights reinforce the idea that GenAI tools enhance efficiency, productivity, and the overall quality of students’ work [7].
Interestingly, the use of GenAI for coding assistance was less frequently mentioned, although open-ended responses revealed additional creative applications, such as using GenAI tools to generate flashcards for studying, demonstrating how students tailor AI functionalities to fit their individual learning needs.

4.2. Higher Education Students’ Perceived Benefits of Using GenAI Tools

In the next section of the questionnaire, students were asked to indicate their level of agreement with a series of statements, derived from relevant literature, that reflected their perceptions of the benefits and challenges of using GenAI tools.
Table 5 below highlights that the majority of students recognize the benefits of generative artificial intelligence tools, particularly in terms of ease of access and use (94.6%), time-saving on tasks (83.7%), instant feedback (75.9%), and increased confidence while learning (65.1%). The fact that 94.6% of students agree that GenAI tools are easy to access and use, with no disagreement recorded, highlights the user-friendly nature of these tools, which aligns with previous research indicating that intuitive design and availability contribute to widespread adoption and that students perceive AI as a valuable resource for enhancing efficiency and optimizing time management [7,56,57]. A total of 83.7% of students believe that AI saves time on tasks, reinforcing its role in academic efficiency. Only 1.6% disagree, suggesting that students widely recognize AI’s ability to optimize workflows and reduce effort in academic tasks. Furthermore, 75.9% agree that these tools provide instant feedback, which is crucial for self-paced learning and revision. This feature likely helps students refine their understanding and correct mistakes in real time, making AI a valuable study companion.
Regarding confidence and learning engagement, there is a more moderate level of agreement, since 65.1% feel AI increases their confidence while learning, with 25.6% feeling neutral and 9.3% disagreeing. This indicates that while many students find GenAI beneficial for building self-assurance in their academic work, a significant minority remain unconvinced, possibly due to concerns about over-reliance or accuracy. Also, 53.5% believe these tools enhance learning engagement, though 38.8% are neutral. This suggests that although they perceive it as useful, it does not necessarily make learning more engaging for all students, potentially due to differences in individual learning preferences. These trends are consistent with the findings of Ref. [7], where agreement levels were also lower for higher-order cognitive skills such as critical thinking and problem-solving (55.0%), language development (58.6%), and academic performance (64.5%).
In terms of academic and cognitive beliefs, students’ opinions are more divided, since 54.3% of students agree that it enhances academic performance, but a substantial 35.7% remain neutral. This response may indicate that while GenAI helps with tasks, it does not always lead to measurable academic improvement. A total of 46.5% agree that these tools foster critical thinking and problem-solving, yet 17.8% disagree and 35.7% are neutral. The higher level of skepticism suggests that while AI can assist with information processing, students may feel it does not encourage deep analytical thinking or independent problem-solving. Finally, 41.8% believe AI enhances general language ability, while 41.4% are neutral and 17.1% disagree. This even distribution of opinions suggests that students may use it for grammar checks and writing assistance but may not see a significant impact on their overall language proficiency.

4.3. Higher Education Students’ Perceived Challenges of Using GenAI Tools

The data in Table 6 below highlight key challenges and concerns students associate with the use of Generative AI tools, with notable variations in their perceptions. While some concerns, such as reliability and academic integrity, receive higher agreement, others, like subscription fees and internet speed requirements, show a more balanced distribution of opinions.
The biggest challenges that students perceive when using GenAI tools are related to unreliable information, plagiarism and cheating, and inaccurate or false references. A total of 65.1% of students agree that GenAI provides unreliable information, with only 3.9% disagreeing, making this the most widely recognized challenge. Similarly, 58.2% believe that GenAI generates inaccurate or false references, indicating skepticism toward AI-generated academic content. A total of 58.9% agree that GenAI tools could lead to plagiarism and cheating, reinforcing ongoing concerns about ethical misuse and academic dishonesty. These perceptions reveal that respondents are wary of GenAI outputs and regard these tools as returning inaccurate results. The research by Refs. [7,53] states that students often report errors in AI results, noting that AI tools are not infallible and sometimes provide incorrect or misleading information. The study by Ref. [55] had already suggested that the use of some of these tools is a challenge because it increases the likelihood of plagiarism, while at the same time presenting challenges related to transparency, privacy, ethics, and increasing dependence on technology, reducing students’ capacity to think critically [20]. It would therefore be relevant to integrate the use of these tools into the classroom so that students learn to use them correctly and make good use of them to improve their learning. To do so, adequate support from teachers and higher education institutions would also be important.
Regarding privacy, learning autonomy, and human interaction, students show moderate concern. A total of 46.5% of students view GenAI as a risk to privacy and data security, while 36.4% remain neutral. This suggests that while many students are aware of potential security risks, not all fully understand or prioritize data privacy concerns. A total of 41.9% believe GenAI restricts learning autonomy and narrows learning experiences, while 40.3% remain neutral. This even distribution of opinions suggests that some students feel AI limits their independent thinking, while others may not experience this as a major issue. A total of 44.9% agree that AI reduces human-to-human interaction, but 29.5% remain neutral, indicating that not all students feel AI significantly impacts interpersonal communication in learning environments.
In terms of practical barriers concerning internet speed and subscription costs, 51.9% of students agree that GenAI tools require a fast internet connection, which can be a limiting factor for students in areas with poor connectivity. Moreover, 39.5% believe that advanced AI features require a subscription fee, while 31.8% remain neutral, suggesting that while some students perceive cost as a barrier, others may rely on free or basic versions. One way to overcome the limitation regarding subscription fees might be to make higher education institutions aware of the importance of using these tools to improve student learning and teachers’ teaching, and of how they can support free access to these tools for the academic community.
Finally, regarding students’ perceptions on the long-term impact on learning, 34.8% believe AI will negatively impact learning in the future, but 32.6% disagree and 32.6% remain neutral—indicating divergent opinions on the long-term educational effects of AI.

4.4. Internal Reliability of Combined Items

In the survey, there were eight items for the construct of perceived benefits (PBs) and nine items for the construct of perceived challenges (PCs), rated on a three-point Likert scale. The reliability of the questions for each construct was tested with Cronbach’s alpha. As Table 7 below presents, Cronbach’s alpha for the questions that measured PBs and PCs was 0.731 and 0.774, respectively. Since it was greater than 0.7 and there were fewer than 10 items per construct, the internal consistency of the items was considered good [58].
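For readers unfamiliar with the statistic, Cronbach’s alpha can be computed directly from the item responses as alpha = k/(k-1) * (1 - sum of item variances / variance of the total score), where k is the number of items. The short Python sketch below illustrates this formula with a hypothetical response matrix; it does not use the data from this study.

# Cronbach's alpha for an (n_respondents x n_items) matrix of Likert scores.
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    k = items.shape[1]                               # number of items in the construct
    item_variances = items.var(axis=0, ddof=1)       # variance of each item
    total_variance = items.sum(axis=1).var(ddof=1)   # variance of the summed score
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

# Hypothetical example: five respondents answering three items on a 1-3 scale
demo = np.array([[3, 3, 2],
                 [2, 2, 2],
                 [3, 2, 3],
                 [1, 1, 2],
                 [2, 3, 3]])
print(round(cronbach_alpha(demo), 3))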
The Shapiro–Wilk test was run to test whether the data were normally distributed, both for perceived benefits and for perceived challenges. As Table 8 below shows, the significance values of the test were less than 0.001, which means that the data were not normally distributed.
In order to understand whether the perceived benefits and perceived challenges vary according to gender and academic level, non-parametric tests were performed to compare the different groups. The Mann–Whitney U test can be considered a possible alternative to the parametric independent samples t-test when certain distributional assumptions (e.g., normality) are not met. Therefore, the Mann–Whitney U test and the Kruskal–Wallis test were used, as they do not assume normality [59]. The Mann–Whitney U test was used for comparisons across gender groups, and the Kruskal–Wallis test was used for comparisons across academic levels, as there were more than two subgroups.
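A minimal sketch of these group comparisons, under the same hypothetical column names as in the earlier snippet, could look as follows; scipy provides both tests, and this is illustrative rather than the original SPSS output.

# Non-parametric group comparisons described above (hypothetical column names).
import pandas as pd
from scipy import stats

df = pd.read_csv("survey_responses.csv")
df["pb_score"] = df[[f"pb_{i}" for i in range(1, 9)]].sum(axis=1)  # composite benefits score

# Mann-Whitney U: exactly two gender groups are assumed here
by_gender = [g["pb_score"].to_numpy() for _, g in df.groupby("gender")]
u_stat, p_gender = stats.mannwhitneyu(*by_gender, alternative="two-sided")

# Kruskal-Wallis H: more than two academic-level groups (TeSP, undergraduate, master's, Ph.D.)
by_level = [g["pb_score"].to_numpy() for _, g in df.groupby("academic_level")]
h_stat, p_level = stats.kruskal(*by_level)

print(f"Gender: U = {u_stat:.1f}, p = {p_gender:.3f}")
print(f"Academic level: H = {h_stat:.2f}, p = {p_level:.3f}")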
As Table 9 below shows, the null hypothesis that the perceived benefits are the same among female and male respondents is retained, which means there were no statistically significant differences between these two groups. The same applies to the variable of perceived challenges, where the null hypothesis is also retained, despite the lower significance value of 0.248.
The Kruskal–Wallis test results are shown in Table 10 below. The asymptotic significance value for perceived benefits was 0.815, while that for perceived challenges was 0.122. As all significance values were greater than 0.05, it is concluded that there were no significant differences in perceptions among respondents according to their academic level (TeSP, undergraduate, master’s, and Ph.D.).
Overall, these findings suggest that perceptions of the benefits and challenges of generative AI do not significantly differ based on gender or academic level, implying relatively homogeneous perceptions across demographic subgroups.
One of the objectives of the current research was to compare this research with the Saudi Arabian study by Ref. [7]. Although the current study is more theoretical and connects the TAM and TTF models to students’ perceptions, the Saudi Arabian study is slightly more policy-oriented and applies the models to policy implementation. Another difference is related to the sample size, as the Saudi sample is significantly larger (n = 859) than that of the current study (n = 132). Due to cultural differences, the demographics also differ, as the Saudi sample has a male majority, while the Portuguese sample has more female respondents. The main differences in the survey results show that Portuguese students use GenAI more frequently (97.7%) compared to Saudi students (78.7%) and also rely more on ChatGPT (93.8%), whereas Saudi students show slightly more variety in tool usage. For instance, Portuguese students report higher usage of GenAI tools for specific academic tasks compared to their Saudi counterparts, particularly in “defining or clarifying concepts” (76% vs. 69.2%), “generating ideas while writing” (68.2% vs. 53.3%), “assisting in completing assignments” (65.1% vs. 41.4%), “creating digital multimedia and presentations” (27.1% vs. 21%), and “solving numerical problems” (22.5% vs. 18.5%). Conversely, Saudi students demonstrate slightly higher usage in other educational applications of GenAI: “translation” (50.7% vs. 45%), “summarizing articles, books, and videos” (47.5% vs. 40.3%), “searching for academic literature” (41.7% vs. 36.4%), “enhancing the quality of writing” (40.8% vs. 36.4%), “editing writing” (41.1% vs. 31%), “assisting in home exams” (17% vs. 14%), and “supporting coding” (21.6% vs. 10.1%).
In both countries, teachers are somewhat reluctant to encourage AI use (~27%), despite high adoption among students. Regarding the perceived benefits, respondents from both studies agree on ease of use and time-saving as the main benefits, although Portuguese students are slightly more confident about AI improving their learning and Saudi students are more concerned about plagiarism and academic autonomy.
In general, both studies confirm high adoption rates of GenAI tools in higher education, with Portuguese students using GenAI tools more frequently and Saudi students expressing more concerns about academic integrity. Similarly, Portuguese students focus more on practical benefits, while Saudi students highlight ethical and financial concerns.

5. Conclusions

This study offers important insights into how Portuguese higher education students perceive and engage with generative AI tools, helping to shape educational policies and strategies for effectively integrating these technologies into academic settings.
Applying the TAM model in this study reveals that students’ perceptions of the usefulness and ease of use of GenAI tools are crucial for the adoption of these tools. Despite this, students are concerned about plagiarism and cheating, unreliable information, and inaccurate or false references, concerns which need to be weighed against perceived usefulness. It is essential that higher education institutions (HEIs) take these concerns into account and work together to mitigate them in order to improve student acceptance and adoption of GenAI tools.
On the other hand, the TTF theory reinforces the need for GenAI tools to respond to students’ academic needs. This research, similar to the study by Ref. [7], shows that students consider that GenAI tools can improve understanding, learning, and research processing; in this sense, GenAI tools are accepted because they are suitable for academic tasks, which in turn drives user satisfaction. However, concerns remain regarding the mismatch between the tools’ capabilities and task requirements, since GenAI tools still have limitations, including those related to the available computing power. It is therefore essential that HEIs align students’ academic needs with the capabilities offered by GenAI tools.
The findings indicate that 97.7% of Portuguese students use GenAI tools, significantly higher than the 78.7% reported in the Saudi Arabian study that informed this research, which also pointed out a notable 21.3% who do not use these tools, largely due to a lack of awareness or interest [7].
Among those who actively utilize GenAI, the most commonly used tool is, by far, ChatGPT, and students mainly use these tools to define and clarify concepts, generate ideas while writing, and assist in completing assignments, while in the previous study the main uses included defining and clarifying concepts, translation, generating ideas for writing, and summarizing academic literature. These findings highlight the growing role of AI in supporting academic tasks while also emphasizing the need for initiatives that promote AI literacy and accessibility to ensure all students can make informed decisions about its use [7].
Overall, data regarding the most used tools suggest that while ChatGPT remains the dominant tool, students are increasingly experimenting with a broader range of GenAI applications to enhance different aspects of their academic work. This diversity of use cases underscores the evolving role of GenAI in education, where students are not only generating content but also refining, analyzing, and validating their work with AI assistance. Future research could further explore how students select and combine GenAI tools based on their academic needs, as well as the potential gaps in digital literacy that may limit the adoption of lesser-known but highly effective AI applications.
The findings also reveal a diverse landscape of GenAI adoption, with a progressive increase in engagement among students. While a significant proportion of students frequently use GenAI tools to optimize their academic work, others remain hesitant or infrequent users, potentially due to uncertainty, limited exposure, or varying levels of digital confidence. Understanding these patterns is crucial for designing AI literacy initiatives, institutional policies, and pedagogical strategies that address both the benefits and the challenges of AI integration in higher education.
Generally, students perceive the benefits of using GenAI tools in terms of accessibility and efficiency, appreciating its ease of use, time-saving capabilities, and instant feedback. However, despite regarding GenAI as a supportive tool, many students remain neutral about its direct impact on academic success. Critical thinking, problem-solving, and language development remain areas where students are less convinced about AI’s effectiveness, highlighting potential limitations in AI’s role in higher-order learning skills.
Despite the perceived advantages, concerns remain regarding over-reliance on GenAI for problem-solving and the potential impact on students’ ability to develop independent critical thinking skills [53]. Indeed, among the biggest concerns related to the use of GenAI tools by students are misinformation, plagiarism, and critical thinking, as the convenience of having GenAI tools readily provide information and solve problems can lead to a passive learning attitude, in which what is generated by AI is trusted and accepted without question. This lack of questioning and acceptance of all information generated by AI leads to misinformation and plagiarism, because there is no concern in seeking the original source and discussing the results obtained. Reliance on these tools can therefore impact the ability to generate unique ideas independently [60,61]. This suggests a need for balanced AI integration in education, ensuring that while students benefit from AI’s support, they also engage in active and reflective learning processes to develop essential analytical skills.
These findings highlight the need for AI literacy initiatives to help students critically evaluate AI-generated content and prevent over-reliance on unverified sources. A significant majority worry about misinformation, plagiarism, and cheating risks. Students’ concerns also emphasize the importance of ensuring equal access to AI tools for students, particularly in regions with connectivity or financial limitations. This uncertainty highlights the need for further research and institutional policies to guide ethical and effective AI use in education.
This research brings important implications for the effective integration of GenAI tools in education. To maximize its benefits, educators should promote a balanced use, leveraging the strengths of generative artificial intelligence tools for efficiency while encouraging independent thinking. Furthermore, educators should also design learning activities that integrate AI-assisted critical thinking exercises, as well as provide training on AI literacy to help students use these tools effectively while maintaining academic integrity.
In turn, institutions should also be encouraged to implement AI literacy programs to help students critically evaluate AI-generated content, as well as develop guidelines for ethical AI use to prevent plagiarism and encourage responsible learning, ensuring equitable access by advocating for affordable AI tools and better digital infrastructure.
According to Ref. [62], some of the strategies implemented by higher education institutions consist of banning ChatGPT in assessments or altogether; using software to detect AI-generated text; switching to oral, handwritten, or checked exams; using assessments that AI has difficulty producing, such as podcasts, lab activities, and group work; establishing policies and guidelines for the ethical and transparent use of AI in teaching, learning, and research (e.g., by enabling a thoughtful use of ChatGPT); and creating new forms of assessment that indicate the explicit use of GenAI tools. These issues become increasingly pressing as these tools are more deeply integrated into teaching: rather than prohibiting them, it is necessary to teach students how to use the information generated by GenAI tools. Based on UNESCO’s guidelines, it is relevant that learning in HEIs prioritizes critical thinking, problem solving, and creativity over mechanical memorization, integrating GenAI as a learning tool and not as a substitute for learning.
Despite this study’s valuable insights and implications, it does not come without limitations. The sample size is a limitation that affects the potential generalization of these findings to the broader population of higher education students. Also, the fact that data rely on self-reported responses may be subject to bias, as students may overstate or understate their use of GenAI tools. Furthermore, the study identifies use trends, but further research would be important to assess students’ AI literacy levels, as well as a focus on learning outcomes, measuring the impact these tools have on academic performance or learning effectiveness.

Author Contributions

A.E.S. and P.C. contributed equally to this research paper. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding. The APC was supported by CITUR/IPLeiria (Centre for Tourism Research, Development and Innovation). FCT 2025-2029.

Data Availability Statement

The original contributions presented in this study are included in the article. Further inquiries can be directed to the corresponding author.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Luckin, R.; Holmes, W.; Griffiths, M.; Forcier, L.B. Intelligence Unleashed: An Argument for AI in Education; Pearson: Hong Kong, China, 2016; Available online: https://static.googleusercontent.com/media/edu.google.com/en//pdfs/Intelligence-Unleashed-Publication.pdf (accessed on 23 December 2024).
  2. Zawacki-Richter, O.; Marín, V.; Bond, M.; Gouverneur, F. Systematic review of research on artificial intelligence applications in higher education—Where are the educators? Int. J. Educ. Technol. High. Educ. 2019, 16, 39. [Google Scholar] [CrossRef]
  3. Holmes, W.; Bialik, M.; Fadel, C. Artificial Intelligence in Education: Promise and Implications for Teaching and Learning; Center for Curriculum Redesign: Boston, MA, USA, 2019; Available online: https://www.researchgate.net/publication/332180327 (accessed on 27 November 2024).
  4. Licht, F. Generative Artificial Intelligence in Higher Education: Why the ‘Banning Approach’ to Student Use Is Sometimes Morally Justified. Philos. Technol. 2024, 37, 113. [Google Scholar] [CrossRef]
  5. Williamson, B.; Eynon, R.; Potter, J. Pandemic politics, pedagogies and practices: Digital technologies and distance education during the coronavirus emergency. Learn. Media Technol. 2020, 45, 107–114. [Google Scholar] [CrossRef]
  6. Alier, M.; Camba, J.; García-Peñalvo, F. Generative Artificial Intelligence in Education: From Deceptive to Disruptive. Int. J. Interact. Multimed. Artif. Intell. 2024, 5, 5–14. [Google Scholar] [CrossRef]
  7. Almassaad, A.; Alajlan, H.; Alebaikan, R. Student Perceptions of Generative Artificial Intelligence: Investigating Utilization, Benefits, and Challenges in Higher Education. Systems 2024, 12, 385. [Google Scholar] [CrossRef]
  8. Maslej, N.; Fattorini, L.; Brynjolfsson, E.; Etchemendy, J.; Ligett, K.; Lyons, T.; Manyika, J.; Ngo, H.; Niebles, J.; Parli, V.; et al. The AI Index 2023 Annual Report; AI Index Steering Committee, Institute for Human-Centered AI, Stanford University: Stanford, CA, USA, 2023. [Google Scholar]
  9. Buchanan, B.G. A (Very) Brief History of Artificial Intelligence. AI Mag. 2005, 26, 53–60. [Google Scholar] [CrossRef]
  10. Haenlein, M.; Kaplan, A. A Brief History of Artificial Intelligence: On the Past, Present, and Future of Artificial Intelligence. Calif. Manag. Rev. 2019, 61, 5–14. [Google Scholar]
  11. Turing, A.M. Computing Machinery and Intelligence. Mind 1950, 59, 433–460. [Google Scholar] [CrossRef]
  12. Moor, J. The Dartmouth College Artificial Intelligence Conference: The Next Fifty Years. AI Mag. 2006, 27, 87. [Google Scholar] [CrossRef]
  13. McCorduck, P. Machines Who Think: A Personal Inquiry into the History and Prospects of Artificial Intelligence, 2nd ed.; A K Peters: Natick, MA, USA, 2004. [Google Scholar]
  14. Floridi, L. AI and Its New Winter: From Myths to Realities. Philos. Technol. 2020, 33, 1–3. [Google Scholar] [CrossRef]
  15. Toosi, A.; Bottino, A.; Saboury, B.; Siegel, E.; Rahmim, A. A brief history of AI: How to prevent another winter (a critical review). arXiv 2021, arXiv:2109.01517. [Google Scholar] [CrossRef]
  16. UNICEF. Policy Guidance on AI for Children; UNICEF: Paris, France, 2021; Available online: https://www.unicef.org/innocenti/media/1326/file/UNICEF-Global-Insight-policy-guidance-AI-children-draft-1.0-2020.pdf (accessed on 27 November 2024).
  17. Baidoo-Anu, D.; Owusu, L. Education in the Era of Generative Artificial Intelligence (AI): Understanding the Potential Benefits of ChatGPT in Promoting Teaching and Learning. J. AI 2023, 7, 52–62. [Google Scholar] [CrossRef]
  18. Goodfellow, I.; Bengio, Y.; Courville, A. Deep Learning; The MIT Press: Cambridge, MA, USA, 2016; ISBN 9780262035613. [Google Scholar]
  19. LeCun, Y.; Bengio, Y.; Hinton, G. Deep learning. Nature 2015, 521, 436–444. [Google Scholar] [CrossRef]
  20. Chan, C.K.Y.; Hu, W. Students’ voices on generative AI: Perceptions, benefits, and challenges in higher education. arXiv 2023, arXiv:2305.00290. [Google Scholar] [CrossRef]
  21. Bahroun, Z.; Anane, C.; Ahmed, V.; Zacca, A. Transforming Education: A Comprehensive Review of Generative Artificial Intelligence in Educational Settings through Bibliometric and Content Analysis. Sustainability 2023, 15, 12983. [Google Scholar] [CrossRef]
  22. Xia, Q.; Weng, X.; Ouyang, F.; Tzung, J.; Chiu, T. A scoping review on how generative artificial intelligence transforms assessment in higher education. Int. J. Educ. Technol. High. Educ. 2024, 21, 40. [Google Scholar] [CrossRef]
  23. Chan, C.K.Y.; Lee, K.K. The AI generation gap: Are Gen Z students more interested in adopting generative AI such as ChatGPT in teaching and learning than their Gen X and Millennial Generation teachers? Smart Learn. Environ. 2023, 10, 60. [Google Scholar] [CrossRef]
  24. Peres, R.; Shreier, M.; Schweidel, D.; Sorescu, A. On ChatGPT and beyond: How generative artificial intelligence may affect research, teaching, and practice. Int. J. Res. Mark. 2023, 40, 269–275. [Google Scholar] [CrossRef]
  25. Farrelly, T.; Baker, N. Generative Artificial Intelligence: Implications and Considerations for Higher Education Practice. Educ. Sci. 2023, 13, 1109. [Google Scholar] [CrossRef]
  26. Kosinski, M.; Matz, S.; Gosling, S.; Popov, V.; Stillwell, D. Facebook as a research tool for the social sciences: Opportunities, challenges, ethical considerations, and practical guidelines. Am. Psychol. 2015, 70, 534–556. [Google Scholar] [CrossRef]
  27. Davis, F.D. Perceived usefulness, perceived ease of use, and user acceptance of information technology. MIS Q. 1989, 13, 319–340. [Google Scholar] [CrossRef]
  28. He, Y.; Chen, Q.; Kitkuakul, S. Regulatory focus and technology acceptance: Perceived ease of use and usefulness as efficacy. Cogent Bus. Manag. 2018, 5, 1459006. [Google Scholar]
  29. Jan, A.U.; Contreras, V. Technology acceptance model for the use of information technology in universities. Comput. Hum. Behav. 2011, 27, 845–851. [Google Scholar] [CrossRef]
  30. Goodhue, D.L.; Thompson, R.L. Task-technology fit and individual performance. MIS Q. 1995, 19, 213–236. [Google Scholar] [CrossRef]
  31. Aljukhadar, M.; Senecal, S.; Nantel, J. Is more always better? Investigating the task-technology fit theory in an online user context. Inf. Manag. 2014, 51, 391–397. [Google Scholar]
  32. Furneaux, B. Task-technology fit theory: A survey and synopsis of the literature. Information Systems Theory. Explain. Predict. Our Digit. Soc. 2012, 1, 87–102. [Google Scholar]
  33. Wu, B.; Chen, X. Continuance intention to use MOOCs: Integrating the technology acceptance model (TAM) and task technology fit (TTF) model. Comput. Hum. Behav. 2017, 67, 221–232. [Google Scholar]
  34. Al-Maatouk, Q.; Othman, M.S.; Aldraiweesh, A.; Alturki, U.; Al-Rahmi, W.M.; Aljeraiwi, A.A. Task-technology fit and technology acceptance model application to structure and evaluate the adoption of social media in academia. IEEE Access 2020, 8, 78427–78440. [Google Scholar]
  35. Jung, T.; Bae, S.; Moorhouse, N.; Kwon, O. The effects of experience-technology fit (ETF) on consumption behavior: Extended reality (XR) visitor experience. Inf. Technol. People 2023, 37, 2006–2034. [Google Scholar]
  36. Zhou, T.; Lu, Y.; Wang, B. Integrating TTF and UTAUT to explain mobile banking user adoption. Comput. Hum. Behav. 2010, 26, 760–767. [Google Scholar] [CrossRef]
  37. Elçi, A.; Abubakar, A.M. The configurational effects of task-technology fit, technology-induced engagement and motivation on learning performance during COVID-19 pandemic: An fsQCA approach. Educ. Inf. Technol. 2021, 26, 7259–7277. [Google Scholar] [CrossRef] [PubMed]
  38. Przegalinska, A.; Triantoro, T.; Kovbasiuk, A.; Ciechanowski, L.; Freeman, R.; Sowa, K. Collaborative AI in the workplace: Enhancing organizational performance through resource-based and task-technology fit perspectives. Int. J. Inf. Manag. 2024, 81, 102853. [Google Scholar] [CrossRef]
  39. Sun, J.; Guo, Y. A new destination on the palm? The moderating effect of travel anxiety on digital tourism behavior in extended UTAUT2 and TTF models. Front. Psychol. 2022, 13, 965655. [Google Scholar] [CrossRef] [PubMed]
  40. Lafi, G.A. The effect of knowledge management systems on organizational ambidexterity: A conceptual model. J. Econ. Manag. Trade 2023, 29, 40–51. [Google Scholar] [CrossRef]
  41. Sinha, A.; Kumar, P.; Rana, N.P.; Islam, R.; Dwivedi, Y.K. Impact of internet of things (IoT) in disaster management: A task-technology fit perspective. Ann. Oper. Res. 2019, 283, 759–794. [Google Scholar] [CrossRef]
  42. Sturm, T.; Peters, F. The Impact of Artificial Intelligence on Individual Performance: Exploring the Fit between Task, Data, and Technology. In Proceedings of the 28th European Conference on Information Systems (ECIS), Online, 15–17 June 2020. [Google Scholar]
  43. Ong, W.; Gauhar, V.; Castellani, D.; Teoh, J. Tips and Pitfalls in Using Social Media Platforms for Survey Dissemination. Société Int. D’urologie J. 2023, 4, 118–124. [Google Scholar] [CrossRef]
  44. Kelly, A.; Sullivan, M.; Strampel, K. Generative artificial intelligence: University student awareness, experience, and confidence in use across disciplines. J. Univ. Teach. Learn. Pract. 2023, 20, 12. [Google Scholar] [CrossRef]
  45. Krause, S.; Panchal, B.; Ubhe, N. The Evolution of Learning: Assessing the Transformative Impact of Generative AI on Higher Education. In International Conference on Artificial Intelligence in Education Technology; Springer Nature: Singapore, 2024; pp. 356–371. [Google Scholar] [CrossRef]
  46. Al-kfairy, M. Factors Impacting the Adoption and Acceptance of ChatGPT in Educational Settings: A Narrative Review of Empirical Studies. Appl. Syst. Innov. 2024, 7, 110. [Google Scholar] [CrossRef]
  47. Johnston, H.; Wells, R.F.; Shanks, E.M.; Boey, T.; Parsons, B. Student perspectives on the use of generative artificial intelligence technologies in higher education. Int. J. Educ. Integr. 2024, 20, 2. [Google Scholar] [CrossRef]
  48. Malmström, H.; Stöhr, C.; Ou, W. Chatbots and Other AI for Learning: A Survey of Use and Views Among University Students in Sweden; Chalmers Studies in Communication and Learning in Higher Education: Gothenburg, Sweden, 2023. [Google Scholar] [CrossRef]
  49. Strzelecki, A.; ElArabawy, S. Investigation of the moderation effect of gender and study level on the acceptance and use of generative AI by higher education students: Comparative evidence from Poland and Egypt. Br. J. Educ. Technol. 2024, 55, 1209–1230. [Google Scholar] [CrossRef]
  50. Von Garrel, J.; Mayer, J. Artificial Intelligence in studies—Use of ChatGPT and AI-based tools among students in Germany. Humanit. Soc. Sci. Commun. 2023, 10, 799. [Google Scholar] [CrossRef]
  51. Pan, Z.; Xie, Z.; Liu, T.; Xia, T. Exploring the Key Factors Influencing College Students’ Willingness to Use AI Coding Assistant Tools: An Expanded Technology Acceptance Model. Systems 2024, 12, 176. [Google Scholar] [CrossRef]
  52. Grájeda, A.; Burgos, J.; Córdova, P.; Sanjinés, A. Assessing student-perceived impact of using artificial intelligence tools: Construction of a synthetic index of application in higher education. Cogent Educ. 2024, 11, 2287917. [Google Scholar] [CrossRef]
  53. Zhou, X.; Zhang, J.; Chan, C. Unveiling Students’ Experiences and Perceptions of Artificial Intelligence Usage in Higher Education. J. Univ. Teach. Learn. Pract. 2024, 21, 1–20. [Google Scholar] [CrossRef]
  54. JISC. Report on Student Perceptions of Generative AI; National Centre for AI in Tertiary Education, University of Manchester: Manchester, UK, 2023; Available online: https://www.jisc.ac.uk/reports/student-perceptions-of-generative-ai (accessed on 4 November 2024).
  55. Hosseini, M.; Gao, C.A.; Liebovitz, D.M.; Carvalho, A.M.; Ahmad, F.S.; Luo, Y.; MacDonald, N.; Holmes, K.L.; Kho, A. An exploratory survey about using ChatGPT in education, healthcare, and research. PLoS ONE 2023, 18, e0292216. [Google Scholar] [CrossRef] [PubMed]
  56. Ngo, T.T.A. The Perception by University Students of the Use of ChatGPT in Education. Int. J. Emerg. Technol. Learn. 2023, 18, 4–19. [Google Scholar] [CrossRef]
  57. Fischer, I.; Mirbahai, L.; Beer, L.; Buxton, D.; Grierson, S.; Griffin, L.; Gupta, N. Report on Transforming Higher Education: How We Can Harness AI in Teaching and Assessments and Uphold Academic Rigour and Integrity; University of Warwick: Coventry, UK, 2023; Available online: https://warwick.ac.uk/fac/cross_fac/academy/activities/learningcircles/future-of-learning/ai__education_12-7-23.pdf (accessed on 27 November 2024).
  58. Pallant, J. SPSS Survival Manual; McGraw-Hill Education: New York City, NY, USA, 2013. [Google Scholar]
  59. Carver, R.; Nash, J. Doing Data Analysis with SPSS: Version 18.0; Cengage Learning: Belmont, CA, USA, 2011. [Google Scholar]
  60. Kurtz, G.; Amzalag, M.; Shaked, N.; Zaguri, Y.; Kohen-Vacs, D.; Gal, E.; Zailer, G.; Barak-Medina, E. Strategies for integrating generative AI into higher education: Navigating challenges and leveraging opportunities. Educ. Sci. 2024, 14, 503. [Google Scholar] [CrossRef]
  61. Abbas, M.; Jam, F.A.; Khan, T.I. Is it harmful or helpful? Examining the causes and consequences of generative AI usage among university students. Int. J. Educ. Technol. High. Educ. 2024, 21, 10. [Google Scholar] [CrossRef]
  62. UNESCO. Harnessing the Era of Artificial Intelligence in Higher Education: A Primer for Higher Education Stakeholders; Bosen, L.L., Morales, D., Roser-Chinchilla, J., Sabzalieva, E., Valentini, A., Nascimento, D.V.D., Yerovi, C., Eds.; UNESCO IESALC: Caracas, Venezuela, 2023; Available online: https://unesdoc.unesco.org/ark:/48223/pf0000386670/PDF/386670eng.pdf.multi (accessed on 4 November 2024).
Figure 1. National AI strategies by country (AI Index Report 2023).
Figure 2. Frequency of use of GenAI tools by higher education students (n = 129).
Table 1. Demographic information of the participants (n = 132). Values are frequencies, with percentages in parentheses.
Gender: Male 49 (37.1%); Female 83 (62.9%)
Academic level: TeSP 11 (8.3%); Undergraduate 102 (77.3%); Master’s 14 (10.6%); Ph.D. 5 (3.8%)
Total: 132 (100%)
Table 2. Use of GenAI tools in the educational setting (n = 132). Values are frequencies, with percentages in parentheses.
Widespread among my peers: 83 (62.9%)
My university has established rules or guidelines for its responsible use: 37 (28%)
My teacher(s) encourage its use: 34 (27.8%)
Table 3. GenAI tools used by higher education students (n = 129). Values are frequencies, with percentages in parentheses.
ChatGPT: 121 (93.8%)
Gemini: 34 (26.4%)
QuillBot: 24 (18.6%)
GPTZero: 17 (13.2%)
Galileo AI: 13 (10.1%)
Socratic: 12 (9.3%)
CoPilot: 11 (8.5%)
Other: 9 (6.8%)
Midjourney: 3 (2.3%)
ChatSonic: 2 (1.6%)
Table 4. Educational purposes of using GenAI (n = 129). Values are frequencies, with percentages in parentheses.
To define or clarify concepts: 98 (76%)
To generate ideas while writing: 88 (68.2%)
To assist in completing assignments: 84 (65.1%)
For translation: 58 (45%)
To summarize articles, books, and videos: 55 (42.6%)
To facilitate project work: 52 (40.3%)
To search for academic literature: 47 (36.4%)
To enhance the quality of writing: 47 (36.4%)
To proofread and edit my writing: 40 (31%)
To assist in creating digital multimedia and presentations: 35 (27.1%)
To aid in solving numerical problems: 29 (22.5%)
To assist in completing home exams: 18 (14%)
To support coding: 13 (10.1%)
Others: 1 (0.8%)
Table 5. Perceived benefits of using GenAI (n = 129). Values are frequencies, with percentages in parentheses.
Easy to access and use: Disagree 0 (0%); Neutral 7 (5.4%); Agree 122 (94.6%)
Save time on tasks: Disagree 2 (1.6%); Neutral 19 (14.7%); Agree 108 (83.7%)
Provide instant feedback: Disagree 2 (1.6%); Neutral 29 (22.5%); Agree 98 (75.9%)
Increase confidence while learning: Disagree 12 (9.3%); Neutral 33 (25.6%); Agree 84 (65.1%)
Enhance academic performance: Disagree 13 (10%); Neutral 46 (35.7%); Agree 70 (54.3%)
Improve learning engagement: Disagree 10 (7.7%); Neutral 50 (38.8%); Agree 69 (53.5%)
Foster critical thinking and problem-solving: Disagree 23 (17.8%); Neutral 46 (35.7%); Agree 60 (46.5%)
Enhance general language ability: Disagree 22 (17.1%); Neutral 53 (41.4%); Agree 54 (41.8%)
Table 6. Perceived challenges of using GenAI (n = 129). Values are frequencies, with percentages in parentheses.
Provide unreliable information: Disagree 5 (3.9%); Neutral 40 (31%); Agree 84 (65.1%)
Lead to plagiarism and cheating: Disagree 23 (17.8%); Neutral 30 (23.3%); Agree 76 (58.9%)
Provide inaccurate or false references: Disagree 7 (5.4%); Neutral 47 (36.4%); Agree 75 (58.2%)
Require fast internet connection: Disagree 25 (19.4%); Neutral 37 (28.7%); Agree 67 (51.9%)
Pose a risk to privacy and data security: Disagree 22 (17.1%); Neutral 47 (36.4%); Agree 60 (46.5%)
Reduce human-to-human interaction: Disagree 33 (25.6%); Neutral 38 (29.5%); Agree 58 (44.9%)
Restrict learning autonomy and narrow learning experiences: Disagree 23 (17.8%); Neutral 52 (40.3%); Agree 54 (41.9%)
Require a subscription fee for accessing advanced features: Disagree 37 (28.7%); Neutral 41 (31.8%); Agree 51 (39.5%)
Negatively impact learning in the future: Disagree 42 (32.6%); Neutral 42 (32.6%); Agree 45 (34.8%)
Table 7. Reliability statistics (n = 129).
Perceived benefits (PBs): Cronbach’s alpha = 0.731; Cronbach’s alpha based on standardized items = 0.726; number of items = 8
Perceived challenges (PCs): Cronbach’s alpha = 0.774; Cronbach’s alpha based on standardized items = 0.768; number of items = 9
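For readers who wish to run a comparable reliability check on their own data, the sketch below shows one common way to compute Cronbach’s alpha from item-level responses in Python. The DataFrame and column names are illustrative assumptions only and do not correspond to the study’s actual dataset or analysis software.

```python
import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    """Cronbach's alpha for Likert-type items (rows = respondents, columns = items)."""
    k = items.shape[1]                               # number of items in the construct
    item_variances = items.var(axis=0, ddof=1)       # sample variance of each item
    total_variance = items.sum(axis=1).var(ddof=1)   # variance of the summed scale
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

# Hypothetical usage: eight perceived-benefit items named pb1..pb8 in a survey DataFrame
# pb_items = survey[[f"pb{i}" for i in range(1, 9)]]
# print(round(cronbach_alpha(pb_items), 3))  # Table 7 reports 0.731 for the PB construct
```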
Table 8. Tests of normality, Shapiro–Wilk test (n = 129).
Perceived benefits (PBs): statistic = 0.953; degrees of freedom = 129; significance < 0.001
Perceived challenges (PCs): statistic = 0.960; degrees of freedom = 129; significance < 0.001
Table 9. Mann–Whitney U results (n = 129).
Null hypothesis 1: The distribution of benefits is the same across categories of gender. Test: independent-samples Mann–Whitney U test. Significance a,b = 0.857
Null hypothesis 2: The distribution of challenges is the same across categories of gender. Test: independent-samples Mann–Whitney U test. Significance a,b = 0.248
a. The significance level is 0.050. b. Asymptotic significance is displayed.
Table 10. Kruskal–Wallis H results (n = 129). Grouping variable: academic level.
Perceived benefits: Kruskal–Wallis H = 0.945; df = 3; asymptotic significance = 0.815
Perceived challenges: Kruskal–Wallis H = 5.798; df = 3; asymptotic significance = 0.122
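As a rough illustration of the testing sequence summarized in Tables 8–10 (normality checks followed by rank-based group comparisons), the following sketch uses scipy.stats on a hypothetical DataFrame of construct scores. The column names, grouping variables, and data are assumptions for illustration only and do not reproduce the study’s analysis.

```python
import pandas as pd
from scipy import stats

def run_group_tests(df: pd.DataFrame) -> None:
    """df: one row per student, with hypothetical columns
    'pb_score', 'pc_score' (mean construct scores), 'gender', 'academic_level'."""
    for construct in ["pb_score", "pc_score"]:
        # Shapiro–Wilk: p < 0.05 suggests non-normal scores, motivating non-parametric tests
        w, p = stats.shapiro(df[construct])
        print(f"{construct}: Shapiro–Wilk W = {w:.3f}, p = {p:.4f}")

        # Mann–Whitney U: compare score distributions between the two gender categories
        male = df.loc[df["gender"] == "Male", construct]
        female = df.loc[df["gender"] == "Female", construct]
        u, p = stats.mannwhitneyu(male, female, alternative="two-sided")
        print(f"{construct}: Mann–Whitney U = {u:.1f}, p = {p:.4f}")

        # Kruskal–Wallis H: compare score distributions across the four academic levels
        groups = [g[construct].to_numpy() for _, g in df.groupby("academic_level")]
        h, p = stats.kruskal(*groups)
        print(f"{construct}: Kruskal–Wallis H = {h:.3f}, p = {p:.4f}")
```

In such a workflow, non-significant Mann–Whitney and Kruskal–Wallis results, as in Tables 9 and 10, would indicate no detectable differences in perceived benefits or challenges across gender or academic level.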
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
