1. Introduction
The development of artificial intelligence (AI) is revolutionizing various sectors, including education, and is likely to redefine the ways of teaching and learning. In higher education, generative artificial intelligence (GenAI) has many applications that can potentially assist in customizing learning, promoting collaborative learning and meta-cognition, providing useful feedback, and fostering student motivation, thereby improving students’ learning experiences [1]. GenAI technologies, especially large language models (LLMs) such as ChatGPT, developed by OpenAI, have attracted public and, in particular, educational interest due to their promised ability to comprehend natural language, produce coherent, human-like responses, and offer personalized feedback [2]. Such tools allow educators to design their own learning content, examine student work more efficiently, and promote interactive learning environments, with the potential to make education more accessible and equitable [3]. For this to happen, however, certain conditions need to be met, as discussed further in this paper; otherwise, GenAI may well worsen existing biases and inequalities.
However, the use of GenAI in higher education is not without its problems. Issues with GenAI systems include concerns about data security, ethical principles, and overdependence on technology [4]. Furthermore, differences in technological infrastructure and digital competence at the institutional level may worsen existing inequalities in access to high-quality education [5]. Educators worldwide are concerned about the potential for increased plagiarism and the automation of tasks [6]; it therefore becomes crucial to understand how students are making use of GenAI in order to take full advantage of its features and minimize the risks associated with potential misuse.
As GenAI technology continues to develop, understanding how students engage with these tools around the world is essential to effectively integrating them into teaching, learning, and research. The integration of the technology acceptance model (TAM) and task–technology fit (TTF) theory in this study contributes to its relevance, as it provides a broad basis for exploring the multifaceted aspects of GenAI tools among higher education students.
This work thus investigates how higher education students in Portugal are making use of generative AI technologies in their academic life, particularly students’ awareness, adoption patterns, and perceptions of GenAI’s role in academic tasks, alongside the benefits they identify and the challenges they face, including ethical concerns, reliability, and accessibility. Simultaneously, we intend to compare the results obtained in this research with the study by Almassaad et al. [
7] developed in Saudi Arabia, as according to the AI Index Report 2023 [
8], students in this country are among the global leaders in the adoption of GenAI tools and Saudi Arabia is actively exploring the potential of GenAI in various fields, including education, with the aim of effectively leveraging its capabilities [
7]. Based on data from a 2022 IPSOS survey (
Figure 1), the AI Index Report also states that Saudi Arabia ranks second (76%), right after Chinese respondents (78%), in terms of how positively respondents feel towards AI products. According to the same study, Saudi Arabia has had a national AI strategy since 2020, an important indicator of how aware countries are of AI technologies and how much they prioritize their management and regulation. In Portugal, the national AI strategy was implemented in 2019, one year earlier, which also allows us to compare trends in the two countries’ national strategies.
The research questions for this study are:
What are higher education students’ adoption practices of GenAI technologies?
What are the potential benefits and challenges of using GenAI technologies, as perceived by students?
How do Portuguese students’ perceptions of GenAI compare with Saudi Arabian students’ perceptions?
3. Material and Methods
To investigate students’ use and perceptions of GenAI tools, we conducted a quantitative study using a survey. The survey was designed to understand the use that higher education students make of generative artificial intelligence tools and their perceptions regarding the benefits and challenges associated with their use. The survey questions were developed based on relevant literature and the research questions listed above.
The questionnaire was largely based on the survey instrument used in the study by Ref. [7], which had been previously applied and validated. Accordingly, one of the objectives of the current research is to gather data from Portuguese higher education students and compare the results obtained with those of the study by Ref. [7], as well as with other similar studies, in order to advance scientific knowledge regarding the use and perceptions of GenAI tools by higher education students.
The survey was divided into the following sections: demographic information, use of GenAI tools and reasons for non-use, and perceived benefits and challenges associated with their use. The survey was developed using the technology acceptance model (TAM) and the task–technology fit (TTF) framework [7]. The TAM is reflected in the awareness and familiarity section through the question “Do you use generative AI tools in higher education?” and in the question about educational objectives, “Uses generative AI tools in teaching?”, since these questions capture the concepts of perceived ease of use and perceived usefulness, as well as TTF’s focus on task requirements. TTF is also illustrated by the perceived benefits, such as “enhances academic performance,” “improves learning engagement,” “enhances general language ability,” and “fosters critical thinking and problem-solving,” and by the challenges encountered when using GenAI tools, such as “provides inaccurate or false references” and “provides unreliable information,” which correspond to technology functionality [7].
In the specific case of the present research, the TTF theory was used to gain deeper insights into higher education students’ perceptions of GenAI tools and the benefits and challenges they experience in their learning. Since task–technology fit posits that the effectiveness of a technology depends on the alignment between task requirements and technology functionality, we examined how task requirements guide the way students engage with their academic tasks and thereby influence their use of GenAI tools, while technology functionality concerns the capabilities of these tools to support those tasks.
The online questionnaire was created in Google Forms and was distributed via social networks to students currently attending higher education in Portugal. Social media platforms provide researchers with an efficient way to connect with a large, diverse, and geographically dispersed audience in real time. This was particularly beneficial for our study, which sought to collect data from participants across multiple regions in Portugal. As Ref. [
26] highlights, Facebook is increasingly recognized as a valuable research tool in the social sciences, offering access to a broad and varied pool of participants who can be selectively recruited for both online and offline studies. Similarly, Ref. [
43] emphasizes the advantages of using social media for survey distribution, noting its convenience, flexibility in survey design, cost-effectiveness, respondent anonymity, and ability to reach a wider audience across different locations.
The survey was available from 15 October 2024 to 30 December 2024. A total of 132 students participated in the survey, with 62.9% identifying as female and 37.1% as male. Among the respondents, 77.3% were enrolled in an undergraduate program, 10.6% were pursuing a master’s degree, 3.8% were doctoral students, and 8.3% were enrolled in a TeSP—a higher vocational technical course (
Table 1).
The quantitative survey data were analyzed in SPSS v29 using descriptive statistics to calculate frequency distributions and summarize participants’ responses to the Likert scale questions. Inferential statistical analyses were also conducted, with the appropriate statistical tests selected based on the distribution of the data, and the results were discussed with reference to relevant literature. The Shapiro–Wilk test assessed the normality of the distribution of the quantitative variables, and the nonparametric Mann–Whitney test compared differences in data distribution between groups. Furthermore, to address concerns about questionnaire validation, we assessed internal consistency through Cronbach’s alpha, which yielded a score of 0.82, considered adequate for research purposes. For all hypothesis tests, a 95% confidence interval was used, with statistical significance set at p < 0.05.
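Although all analyses were performed in SPSS v29, the same descriptive and normality checks can be reproduced with open-source tools. The sketch below, in Python with pandas and SciPy, is a minimal illustration assuming a hypothetical data frame of three-point Likert responses; the column names and values are invented for the example and do not correspond to the actual survey export.

```python
# Minimal sketch of the descriptive and normality checks described above,
# assuming a hypothetical DataFrame of three-point Likert responses
# (1 = disagree, 2 = neutral, 3 = agree). Illustrative data only.
import pandas as pd
from scipy import stats

responses = pd.DataFrame({
    "easy_to_access":   [3, 3, 2, 3, 3, 2, 3, 3, 1, 3],
    "saves_time":       [3, 2, 3, 3, 2, 3, 3, 1, 2, 3],
    "instant_feedback": [2, 3, 3, 2, 3, 3, 2, 2, 3, 1],
})

# Frequency distributions (percentages) for each Likert item
for item in responses.columns:
    freq = responses[item].value_counts(normalize=True).sort_index() * 100
    summary = ", ".join(f"{int(level)}: {pct:.1f}%" for level, pct in freq.items())
    print(f"{item} -> {summary}")

# Shapiro-Wilk normality test on a composite score (sum of the items)
composite = responses.sum(axis=1)
w_stat, p_value = stats.shapiro(composite)
print(f"Shapiro-Wilk: W = {w_stat:.3f}, p = {p_value:.3f}")
```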
4. Findings and Discussion
4.1. Higher Education Students’ Use of GenAI Tools
Among the respondents, 129 (97.7%) use GenAI tools. Only 3 students (2.3%) do not, citing lack of interest as the main reason. In the study by Ref. [7], some interviewees justified not using GenAI with a lack of familiarity, experience, and confidence in using these emerging tools. Those students believed they could achieve better results through their own intellectual and problem-solving skills than by using AI tools. In fact, some students considered that in fields such as engineering and the humanities, AI tools are not useful because of their inaccurate, misleading, and unhelpful responses to prompts. In line with this finding, the studies by Refs. [
2,
44] also suggest that some students are not interested in using these tools due to a lack of digital literacy or technological proficiency, lack of experience, and lack of knowledge.
Regarding the frequency of use, among the 129 students who use GenAI tools, 27.3% use these tools very often, 36.4% use them often, 28% reported occasional use, and 8.3% rarely use these tools (
Figure 2). These data reveal different levels of use of generative artificial intelligence tools among higher education students, highlighting varying degrees of dependence and familiarity with these technologies.
The fact that 27.3% of students use GenAI tools very often suggests that, for part of the student population, these tools have become embedded in the academic routine, as they report relying on GenAI for tasks such as content generation, writing assistance, coding support, brainstorming ideas, and summarizing complex materials, demonstrating a high level of AI integration in their learning processes. Their frequent use may stem from the perceived benefits of efficiency, accessibility, and enhancement of cognitive tasks, as well as a growing comfort in integrating AI into academic workflows [
20,
45]. The 36.4% of students who use GenAI often represent the largest group, suggesting that while they do not depend on these tools daily, they regularly incorporate them into their studies. These students may use AI strategically for specific assignments, research tasks, or language refinement. Their engagement indicates a moderate but consistent reliance, most likely balancing AI assistance with traditional learning approaches [
7,
20]. The 28% of students who use GenAI tools occasionally suggest a more situational or experimental approach to AI adoption. This group may still be exploring the capabilities of these tools, using them only when necessary, or may have concerns about reliability, academic integrity, or institutional policies that limit their engagement. These students may also prefer traditional study methods and only use AI when they perceive it as beneficial for specific challenges. Finally, the 8.3% of students who rarely use GenAI tools reflect a segment of the population that either lacks awareness of, access to, or confidence in AI tools, or consciously chooses not to engage with them. Their limited use could be due to ethical concerns, a preference for independent learning, or institutional restrictions discouraging AI-generated content. Additionally, some students may perceive AI as unnecessary for their field of study or lack the digital literacy skills required to integrate these tools effectively [
20,
46,
47,
48].
Data in Table 2 below reveal that the use of GenAI tools is becoming common practice in higher education, as 62.9% of respondents report that it is widespread among their peers, which also indicates its increasing acceptance and recognition in educational settings. However, only 27.8% indicate that its use is encouraged by teachers, suggesting that there is still a long way to go before teachers understand the benefits of GenAI and integrate it into the teaching and learning process. Even so, this figure is encouraging in the sense that it is in line with other similar studies [7,49] that suggest that an increasing number of teachers are starting to use GenAI, recognizing its potential to improve learning and collaboration among students. Furthermore, 28% of respondents to this study are aware of the rules or guidelines established by their university for the responsible use of GenAI tools. This figure, higher than in previous studies [7,48], seems to indicate a growing awareness and commitment among higher education institutions to a culture of responsible use of GenAI tools in the teaching and learning process.
Regarding the use of specific generative AI tools by higher education students, the findings reveal a clear divergence in their preferences and adoption patterns. On one hand, there is a considerable concentration of use around ChatGPT, which dominates as the preferred GenAI tool among students. The overwhelming reliance on ChatGPT (93.8%) aligns with existing literature [
7,
47,
50,
51], reinforcing its status as the most widely recognized and accessible AI-powered assistant in academic settings. Several factors could explain this trend, including ChatGPT’s user-friendly interface, free accessibility (at least in its basic version), versatility across different academic tasks, and extensive media exposure, which have contributed to its widespread adoption. Ref. [
46] identifies perceived ease of use, perceived usefulness, feedback and assessment quality, and trust as the factors that most consistently influence users’ adoption of ChatGPT. Beyond ChatGPT, the usage rates of other GenAI tools drop significantly, illustrating a long-tail distribution of AI adoption among students. Gemini (26.4%) emerges as the second most popular tool, likely due to Google’s ecosystem integration, which offers students a seamless experience across platforms. QuillBot (18.6%), primarily known for paraphrasing and text refinement, suggests that students leverage AI not only for generating content but also for improving their writing quality. Similarly, GPTZero (13.2%), an AI detection tool, indicates that some students are actively engaging with verification mechanisms, either out of curiosity or to ensure compliance with academic integrity policies. A smaller subset of students explores more specialized GenAI tools, such as Galileo AI (10.1%) and Socratic (9.3%), which focus on visual and interactive learning support. The relatively low adoption of Copilot (8.5%), despite its integration into Microsoft products, suggests that AI assistants embedded in productivity tools may not yet be as widely recognized or utilized in academic workflows. The 6.8% of students who use niche tools like Elicit, PDF AI, Aithor, Napkin AI, ChatPDF, and Aria suggests that some learners actively seek task-specific AI applications (
Table 3). These tools offer capabilities such as literature review automation (Elicit), AI-powered PDF summarization (ChatPDF, PDF AI), and structured notetaking (Napkin AI), indicating a growing interest in AI for research and personalized learning support.
When asked about the main purposes of using GenAI tools to support their academic tasks (
Table 4), most students reported using them to define or clarify concepts (76%), generate ideas while writing (68.2%), and assist in completing assignments (65.1%), while a smaller share (14%) use them to assist with home exams. Among the primary purposes of using generative AI tools, their role in assisting with writing, understanding academic material, and supporting task completion stands out. Specifically, 36.4% of students use GenAI to enhance the quality of their writing, while 31% rely on it for proofreading and editing. Additionally, 42.6% use these tools to summarize articles, books, and videos, making information more accessible and digestible. These findings align with previous studies [
7,
20,
52,
53], which highlight the widespread use of ChatGPT, Grammarly, and QuillBot for learning, writing, and research. These tools support various academic activities, including idea generation, literature searches, text summarization, grammar correction, brainstorming, paraphrasing, and hypothesis formulation based on data analysis.
Another significant finding is that 45% of respondents use GenAI tools for translation purposes, indicating that many students seek not only to understand academic content but also to reinforce their knowledge by consulting additional sources [
53,
54,
55]. This suggests that students view GenAI as a means to broaden their academic comprehension and improve their multilingual literacy. Beyond text-based applications, students also leverage GenAI tools for collaborative and creative academic tasks. The data show that 40.3% use them to facilitate project work, 27.1% to assist in creating digital media and presentations, and 22.5% to solve numerical problems. These insights reinforce the idea that GenAI tools enhance efficiency, productivity, and the overall quality of students’ work [
7].
Interestingly, the use of GenAI for coding assistance was less frequently mentioned, although open-ended responses revealed additional creative applications, such as using GenAI tools to generate flashcards for studying, demonstrating how students tailor AI functionalities to fit their individual learning needs.
4.2. Higher Education Students’ Perceived Benefits of Using GenAI Tools
In the next section of the questionnaire, students were asked to indicate their level of agreement with a series of statements, derived from relevant literature, that reflected their perceptions of the benefits and challenges of using GenAI tools.
Table 5 below highlights that the majority of students recognize the benefits of generative artificial intelligence tools, particularly in terms of ease of access and use (94.6%), time-saving on tasks (83.7%), instant feedback (75.9%), and increased confidence while learning (65.1%). The fact that 94.6% of students agree that GenAI tools are easy to access and use, with no disagreement recorded, highlights their user-friendly nature, which aligns with previous research indicating that intuitive design and availability contribute to widespread adoption and that students perceive AI as a valuable resource for enhancing efficiency and optimizing time management [
7,
56,
57]. A total of 83.7% of students believe that AI saves time on tasks, reinforcing its role in academic efficiency. Only 1.6% disagree, suggesting that students widely recognize AI’s ability to optimize workflows and reduce effort in academic tasks. Furthermore, 75.9% agree that these tools provide instant feedback, which is crucial for self-paced learning and revision. This feature likely helps students refine their understanding and correct mistakes in real time, making AI a valuable study companion.
Regarding confidence and learning engagement, there is a more moderate level of agreement, since 65.1% feel AI increases their confidence while learning, with 25.6% feeling neutral and 9.3% disagreeing. This indicates that while many students find GenAI beneficial for building self-assurance in their academic work, a significant minority remain unconvinced, possibly due to concerns about over-reliance or accuracy. Also, 53.5% believe these tools enhance learning engagement, though 38.8% are neutral, suggesting that although students perceive GenAI as useful, it does not necessarily make learning more engaging for all of them, potentially due to differences in individual learning preferences. These trends are consistent with the findings of Ref. [
7], where agreement levels were also lower for higher-order cognitive skills such as critical thinking and problem-solving (55.0%), language development (58.6%), and academic performance (64.5%).
In terms of academic and cognitive beliefs, students’ opinions are more divided, since 54.3% of students agree that GenAI enhances academic performance, but a substantial 35.7% remain neutral. This response may indicate that while GenAI helps with tasks, it does not always lead to measurable academic improvement. A total of 46.5% agree that these tools foster critical thinking and problem-solving, yet 17.8% disagree and 35.7% are neutral. The higher level of skepticism suggests that while AI can assist with information processing, students may feel it does not encourage deep analytical thinking or independent problem-solving. Finally, 41.8% believe AI enhances general language ability, while 41.4% are neutral and 17.1% disagree. This even distribution of opinions suggests that students may use GenAI for grammar checks and writing assistance but may not see a significant impact on their overall language proficiency.
4.3. Higher Education Students’ Perceived Challenges of Using GenAI Tools
The data in
Table 6 below highlight key challenges and concerns students associate with the use of Generative AI tools, with notable variations in their perceptions. While some concerns, such as reliability and academic integrity, receive higher agreement, others, like subscription fees and internet speed requirements, show a more balanced distribution of opinions.
The biggest challenges that students perceive when using GenAI tools relate to unreliable information, plagiarism and cheating, and inaccurate or false references. A total of 65.1% of students agree that GenAI provides unreliable information, with only 3.9% disagreeing, making this the most widely recognized challenge. Similarly, 58.2% believe that GenAI generates inaccurate or false references, indicating skepticism toward AI-generated academic content. A total of 58.9% agree that GenAI tools could lead to plagiarism and cheating, reinforcing ongoing concerns about ethical misuse and academic dishonesty. These perceptions reveal that respondents are wary of GenAI outputs and of the accuracy of the results these tools return. The research by Refs. [7,53] notes that students often report errors in AI results, observing that AI tools are not infallible and sometimes provide incorrect or misleading information. The study by Ref. [
55] already suggested that the use of some of these tools is challenging because it increases the likelihood of plagiarism, while also raising issues related to transparency, privacy, ethics, and growing dependence on technologies, which can reduce students’ capacity to think critically [20]. It would therefore be relevant to integrate these tools into the classroom so that students learn to use them correctly and productively to improve their learning, which would also require adequate support from teachers and higher education institutions.
Regarding privacy, learning autonomy, and human interaction, students show moderate concern. A total of 46.5% of students view GenAI as a risk to privacy and data security, while 36.4% remain neutral. This suggests that while many students are aware of potential security risks, not all fully understand or prioritize data privacy concerns. A total of 41.9% believe GenAI restricts learning autonomy and narrows learning experiences, while 40.3% remain neutral. This even distribution of opinions suggests that some students feel AI limits their independent thinking, while others may not experience this as a major issue. A total of 44.9% agree that AI reduces human-to-human interaction, but 29.5% remain neutral, indicating that not all students feel AI significantly impacts interpersonal communication in learning environments.
In terms of practical barriers concerning internet speed and subscription costs, 51.9% of students agree that GenAI tools require a fast internet connection, which can be a limiting factor for students in areas with poor connectivity. Moreover, 39.5% believe that advanced AI features require a subscription fee, but 31.8% remain neutral, suggesting that while some students perceive cost as a barrier, others may rely on free or basic versions. One way to overcome the limitation of subscription fees might be to make higher education institutions aware of the importance of these tools for improving student learning and teaching, and of how they can provide the academic community with free access to them.
Finally, regarding students’ perceptions on the long-term impact on learning, 34.8% believe AI will negatively impact learning in the future, but 32.6% disagree and 32.6% remain neutral—indicating divergent opinions on the long-term educational effects of AI.
4.4. Internal Reliability of Combined Items
In the survey, there were eight items for the construct of perceived benefits (PBs) and nine items for the construct of perceived challenges (PCs), each rated on a three-point Likert scale. The reliability of the questions for each construct was tested with Cronbach’s alpha. As Table 7 below presents, Cronbach’s alpha for the questions measuring PBs and PCs was 0.731 and 0.774, respectively. Since both values were greater than 0.7 and there were fewer than 10 items per construct, the internal consistency of the items was considered good [
58].
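To make the reliability calculation in Table 7 concrete, the following sketch computes Cronbach's alpha in Python for a hypothetical block of Likert items; the item labels and responses are assumptions for illustration, not the actual survey data.

```python
# Cronbach's alpha: alpha = k/(k-1) * (1 - sum of item variances / variance of total score),
# computed for a hypothetical set of three-point Likert items (rows = respondents).
import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    """Internal consistency of a multi-item construct."""
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1)
    total_variance = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

# Illustrative data: 8 respondents x 4 hypothetical perceived-benefit items
pb_items = pd.DataFrame(
    [[3, 3, 2, 3],
     [2, 2, 2, 1],
     [3, 2, 3, 3],
     [1, 2, 1, 1],
     [3, 3, 3, 2],
     [2, 1, 2, 2],
     [3, 3, 2, 3],
     [1, 1, 2, 1]],
    columns=["pb_1", "pb_2", "pb_3", "pb_4"],
)
print(f"Cronbach's alpha: {cronbach_alpha(pb_items):.3f}")
```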
The Shapiro–Wilk test was run to test whether the data were normally distributed, both for perceived benefits and for perceived challenges. As
Table 8 below shows, the significance values of the test were less than 0.001, which means that the data were not normally distributed.
In order to understand whether the perceived benefits and perceived challenges vary according to gender and academic level, nonparametric tests were performed to compare the different groups. The Mann–Whitney U test is an alternative to the parametric independent samples t-test when distributional assumptions such as normality are not met. Therefore, the Mann–Whitney U test and the Kruskal–Wallis test were used, as they do not assume normality [59]. The Mann–Whitney U test was used to compare the gender groups, and the Kruskal–Wallis test was used to compare academic levels, as there were more than two subgroups.
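As an illustration of these nonparametric comparisons, the sketch below applies the same tests with SciPy to a hypothetical data frame with assumed columns gender, academic_level, and a composite perceived-benefits score; the study's analyses were run in SPSS, so this is only an equivalent open-source sketch.

```python
# Illustrative nonparametric group comparisons on a hypothetical data set.
import pandas as pd
from scipy import stats

survey = pd.DataFrame({
    "gender": ["female", "male", "female", "male", "female",
               "male", "female", "male", "female", "male"],
    "academic_level": ["undergraduate", "undergraduate", "master", "TeSP", "phd",
                       "master", "undergraduate", "TeSP", "phd", "undergraduate"],
    "pb_score": [21, 18, 19, 22, 20, 17, 23, 19, 24, 18],  # invented composite scores
})

# Mann-Whitney U test: perceived benefits across the two gender groups
female = survey.loc[survey["gender"] == "female", "pb_score"]
male = survey.loc[survey["gender"] == "male", "pb_score"]
u_stat, p_gender = stats.mannwhitneyu(female, male, alternative="two-sided")
print(f"Mann-Whitney U = {u_stat:.1f}, p = {p_gender:.3f}")

# Kruskal-Wallis test: perceived benefits across more than two academic levels
groups = [g["pb_score"].to_numpy() for _, g in survey.groupby("academic_level")]
h_stat, p_level = stats.kruskal(*groups)
print(f"Kruskal-Wallis H = {h_stat:.2f}, p = {p_level:.3f}")
```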
As
Table 9 below shows, the null hypothesis that the perceived benefits are the same for female and male respondents is retained, meaning there were no statistically significant differences between these two groups. The same holds for perceived challenges, where the null hypothesis is also retained, albeit with a lower significance value (0.248).
The Kruskal–Wallis test results are shown in
Table 10 below. The asymptotic significance value for perceived benefits was 0.815, while that for perceived challenges was 0.122. As all significance values were greater than 0.05, it is concluded that there were no significant differences in perceptions among the groups of respondents depending on their academic level (TeSP, undergraduate, master’s, and Ph.D.).
Overall, these findings suggest that perceptions of the benefits and challenges of generative AI do not significantly differ based on gender or academic level, implying relatively homogeneous perceptions across demographic subgroups.
One of the objectives of the current research was to compare its results with those of the Saudi Arabian study by Ref. [7]. While the current study is more theoretical and connects the TAM and TTF models to students’ perceptions, the Saudi Arabian study is slightly more policy-oriented and applies the models to policy implementation. Another interesting difference relates to sample size, as the Saudi research is considerably larger (n = 859) than the current study (n = 132). Due to cultural differences, the demographics also differ, as the Saudi sample has a male majority, while the Portuguese sample has more female respondents. The main differences in the survey results show that Portuguese students use GenAI more frequently (97.7%) than Saudi students (78.7%) and also rely more on ChatGPT (93.8%), whereas Saudi students show slightly more variety in tool usage. For instance, Portuguese students report higher usage of GenAI tools for specific academic tasks than their Saudi counterparts, particularly in “defining or clarifying concepts” (76% vs. 69.2%), “generating ideas while writing” (68.2% vs. 53.3%), “assisting in completing assignments” (65.1% vs. 41.4%), “creating digital multimedia and presentations” (27.1% vs. 21%), and “solving numerical problems” (22.5% vs. 18.5%). Conversely, Saudi students demonstrate slightly higher usage in several other educational applications of GenAI: “translation” (50.7% vs. 45%), “summarizing articles, books, and videos” (47.5% vs. 40.3%), “searching for academic literature” (41.7% vs. 36.4%), “enhancing the quality of writing” (40.8% vs. 36.4%), “editing writing” (41.1% vs. 31%), “assisting in home exams” (17% vs. 14%), and “supporting coding” (21.6% vs. 10.1%).
In both countries, teachers are somewhat reluctant to encourage AI use (~27%), despite high adoption among students. Regarding the perceived benefits, respondents from both studies agree on ease of use and time-saving as the main benefits, although Portuguese students are slightly more confident about AI improving their learning and Saudi students are more concerned about plagiarism and academic autonomy.
In general, both studies confirm high adoption rates of GenAI tools in higher education, with Portuguese students using GenAI tools more frequently and Saudi students expressing more concerns about academic integrity. Similarly, Portuguese students focus more on practical benefits, while Saudi students highlight ethical and financial concerns.
5. Conclusions
This study offers important insights into how Portuguese higher education students perceive and engage with generative AI tools, helping to shape educational policies and strategies for effectively integrating these technologies into academic settings.
Applying the TAM in this study reveals that students’ perceptions of the usefulness and ease of use of GenAI tools are crucial for their adoption. Even so, students are concerned that these tools facilitate plagiarism and cheating, provide unreliable information, and produce inaccurate or false references, concerns which need to be weighed against perceived usefulness. It is essential that higher education institutions (HEIs) take these concerns into account and work to mitigate them in order to improve student acceptance and adoption of GenAI tools.
On the other hand, the TTF theory reinforces the need for GenAI tools to respond to students’ academic needs. This research, similar to the study by Ref. [
7], shows that students consider that GenAI tools can improve understanding, learning, and research processes; in this sense, GenAI tools are accepted because they are well suited to academic tasks, which in turn fosters user satisfaction. However, concerns remain regarding mismatches between task requirements and the tools’ capabilities, partly because GenAI tools still have limitations related to the available computing power. It is therefore essential that HEIs align students’ academic needs with the capabilities that GenAI tools can actually deliver.
The findings indicate that 97.7% of Portuguese students use GenAI tools, considerably higher than the 78.7% reported for Saudi Arabian students by the study that informed this research, which also pointed out that a notable 21.3% do not use these tools, largely due to a lack of awareness or interest [
7].
Among those who actively use GenAI, the most widely used tool is, by far, ChatGPT, and students mainly turn to these tools to define and clarify concepts, generate ideas while writing, and assist in completing assignments, whereas in the previous study the main uses included defining and clarifying concepts, translation, generating ideas for writing, and summarizing academic literature. These findings highlight the growing role of AI in supporting academic tasks while also emphasizing the need for initiatives that promote AI literacy and accessibility to ensure all students can make informed decisions about its use [
7].
Overall, data regarding the most used tools suggest that while ChatGPT remains the dominant tool, students are increasingly experimenting with a broader range of GenAI applications to enhance different aspects of their academic work. This diversity of use cases underscores the evolving role of GenAI in education, where students are not only generating content but also refining, analyzing, and validating their work with AI assistance. Future research could further explore how students select and combine GenAI tools based on their academic needs, as well as the potential gaps in digital literacy that may limit the adoption of lesser-known but highly effective AI applications.
The findings also reveal a diverse landscape of GenAI adoption, with a progressive increase in engagement among students. While a significant proportion of students frequently use GenAI tools to optimize their academic work, others remain hesitant or infrequent users, potentially due to uncertainty, limited exposure, or varying levels of digital confidence. Understanding these patterns is crucial for designing AI literacy initiatives, institutional policies, and pedagogical strategies that address both the benefits and the challenges of AI integration in higher education.
Generally, students perceive the benefits of using GenAI tools in terms of accessibility and efficiency, appreciating its ease of use, time-saving capabilities, and instant feedback. However, despite regarding GenAI as a supportive tool, many students remain neutral about its direct impact on academic success. Critical thinking, problem-solving, and language development remain areas where students are less convinced about AI’s effectiveness, highlighting potential limitations in AI’s role in higher-order learning skills.
Despite the perceived advantages, concerns remain regarding over-reliance on GenAI for problem-solving and the potential impact on students’ ability to develop independent critical thinking skills [
53]. Indeed, among the biggest concerns related to students’ use of GenAI tools are misinformation, plagiarism, and the erosion of critical thinking: the convenience of having GenAI tools readily provide information and solve problems can foster a passive learning attitude in which AI-generated content is trusted and accepted without question. This uncritical acceptance in turn facilitates misinformation and plagiarism, since there is little concern with seeking the original source or discussing the results obtained. Reliance on these tools can therefore impair the ability to generate unique ideas independently [
60,
61]. This suggests a need for balanced AI integration in education, ensuring that while students benefit from AI’s support, they also engage in active and reflective learning processes to develop essential analytical skills.
These findings highlight the need for AI literacy initiatives to help students critically evaluate AI-generated content and prevent over-reliance on unverified sources, as a significant majority worry about misinformation, plagiarism, and cheating risks. Students’ concerns also emphasize the importance of ensuring equitable access to AI tools, particularly in regions with connectivity or financial limitations, and highlight the need for further research and institutional policies to guide ethical and effective AI use in education.
This research has important implications for the effective integration of GenAI tools in education. To maximize their benefits, educators should promote balanced use, leveraging the strengths of generative artificial intelligence tools for efficiency while encouraging independent thinking. Educators should also design learning activities that integrate AI-assisted critical thinking exercises and provide training on AI literacy to help students use these tools effectively while maintaining academic integrity.
In turn, institutions should be encouraged to implement AI literacy programs that help students critically evaluate AI-generated content, to develop guidelines for ethical AI use that prevent plagiarism and encourage responsible learning, and to ensure equitable access by advocating for affordable AI tools and better digital infrastructure.
According to Ref. [
62], some of the strategies implemented by higher education institutions consist of banning ChatGPT in assessments or altogether; using software to detect AI-generated text; switching to oral, handwritten, or supervised exams; using assessments that AI has difficulty producing, such as podcasts, lab activities, and group work; establishing policies and guidelines for the ethical and transparent use of AI in teaching, learning, and research (e.g., by enabling a thoughtful use of ChatGPT); and creating new forms of assessment that explicitly indicate the use of GenAI tools. These issues become ever more pressing as these tools become increasingly integrated into teaching; rather than prohibiting them, it is necessary to teach students how to use the information generated by GenAI tools. In line with UNESCO’s guidelines, learning in HEIs should prioritize critical thinking, problem solving, and creativity over mechanical memorization, integrating GenAI as a learning tool rather than a substitute for learning.
Despite its valuable insights and implications, this study is not without limitations. The sample size limits the generalization of these findings to the broader population of higher education students. In addition, the data rely on self-reported responses, which may be subject to bias, as students may overstate or understate their use of GenAI tools. Furthermore, the study identifies usage trends, but further research is needed to assess students’ AI literacy levels and to examine learning outcomes, measuring the impact these tools have on academic performance and learning effectiveness.