Article

Perceptions of Generative AI Tools in Higher Education: Insights from Students and Academics at Sultan Qaboos University

by
Alsaeed Alshamy
1,2,*,
Aisha Salim Ali Al-Harthi
1 and
Shubair Abdullah
3
1
Educational Foundations and Administration, College of Education, Sultan Qaboos University, Muscat 123, Oman
2
Faculty of Education, Alexandria University, Alexandria 21526, Egypt
3
Instructional and Learning Technologies, College of Education, Sultan Qaboos University, Muscat 123, Oman
*
Author to whom correspondence should be addressed.
Educ. Sci. 2025, 15(4), 501; https://doi.org/10.3390/educsci15040501
Submission received: 17 March 2025 / Revised: 8 April 2025 / Accepted: 9 April 2025 / Published: 16 April 2025
(This article belongs to the Section Technology Enhanced Education)

Abstract
This study investigates the perceptions of generative artificial intelligence (GenAI) tools, such as ChatGPT, among students and academics at Sultan Qaboos University (SQU) within the context of higher education in Oman. Using the Technology Acceptance Model (TAM), it explores five key dimensions: actual use (AU), ease of use (EU), perceived usefulness (PU), perceived challenges (PC), and intention to use (IU). Data collected from 555 students and 168 academics provide valuable insights into the opportunities and challenges associated with the adoption of GenAI tools, based on the results of a t-test. The findings reveal notable differences between students and academics regarding their perceptions of GenAI tools across all TAM variables. Students report frequent use of GenAI for academic support, including personalized learning, brainstorming, and completing assignments, while academics highlight its role in developing learning materials, assessments, lesson plans, and customizing learning content. Both groups recognize its potential to enhance efficiency and innovation in academic practices. However, concerns arise regarding over-reliance on GenAI, diminished critical thinking and creativity, and academic integrity risks. Academics consistently express greater concerns about these challenges than students, particularly regarding plagiarism, academic misconduct, and the potential for over-reliance on GenAI. Despite these challenges, the majority of students and academics indicate a willingness to continue using GenAI tools. This contrast underscores the need for tailored interventions to address the distinct concerns of students and academics. These findings highlight the need for regulatory frameworks, comprehensive institutional guidelines, and targeted training programs to ensure the ethical and responsible use of GenAI technologies. 
By addressing these critical areas, higher education institutions in Oman can leverage the potential of GenAI while safeguarding academic integrity and fostering essential skills such as critical thinking and creativity.

1. Introduction

Academic interest in new Generative Artificial Intelligence (GenAI) models such as ChatGPT-4 and GPT-4o has focused largely on determining how GenAI improves educational outcomes (Chiu, 2024; Su & Yang, 2023b). Its promise to automate, customize, and improve student participation has encouraged research into its use in many instructional settings (Sharma et al., 2024; Su & Yang, 2023b). In addition, GenAI shows promise for assisting instruction and administrative duties and for creating adaptive, interactive learning environments (Pratama et al., 2023). This growing interest in GenAI, particularly among higher education institutions (HEIs) like Sultan Qaboos University (SQU), reflects a changing research agenda aimed at understanding how students and academics perceive the integration of GenAI tools in teaching and learning. As HEIs adapt to technological advances, GenAI holds great potential for improving learning and teaching across a wide range of subjects.
ChatGPT has shown unique promise for enhancing learning experiences and educational outcomes in HEIs, but it has also raised several challenges. The capacity of ChatGPT to create logical, relevant material makes it an invaluable resource for students, such as non-native speakers of English, who require assistance with language skills and brainstorming ideas, as emphasized by Firaina and Sulisworo (2023). According to Téllez et al. (2024), ChatGPT can assist math students by transforming difficult mathematical operations and concepts into easy explanations. Owing to its immediate feedback, ChatGPT is also believed to help students grasp complex concepts and to promote autonomous learning. However, relying on GenAI to solve problems may weaken students’ involvement with the fundamental mathematical abilities and critical thinking necessary for precise calculations and proofs (Téllez et al., 2024).
Furthermore, ChatGPT can facilitate project-based learning in engineering education by giving technical explanations, design ideas, and coding help for activities such as developing new applications (Chan & Hu, 2023). It also aids students learning about systems and ideas, like structural mechanics and coding principles, thus increasing effectiveness and engagement (Atlas, 2023). However, it is argued that students’ overreliance on GenAI can lead to gaps in knowledge of crucial engineering concepts important for creativity and practical problem solving (Nikolic et al., 2023; Sullivan et al., 2023).
Another crucial concern about using ChatGPT is academic integrity, because students’ irresponsible use of ChatGPT to complete tasks may result in plagiarism and compromise genuine skill development (Cotton et al., 2024; Eke, 2023). Another major concern is privacy, because users may put their personal information at risk when they unintentionally reveal sensitive data (Huang, 2023). Furthermore, while ChatGPT gives quick and organized replies, it lacks the ability to critically assess material, which might lead to factual mistakes if it is used alone without human oversight (Branum & Schiavenato, 2023). Concerns also include assessment validity, as the distinction between student-authored and AI-generated content may be blurred, consequently affecting fair evaluation processes (Téllez et al., 2024; Chaudhry et al., 2023).
For academics, ChatGPT can be utilized to transform teaching practices, material delivery, and support systems (Hashem et al., 2024). ChatGPT can increase academic productivity by automating repetitive processes such as answering commonly asked questions, creating examples, and simplifying difficult explanations (Al Ghazali et al., 2024; Hashem et al., 2024). It can also help academics provide personalized learning experiences and feedback for their students, especially in large courses that allow little or no individualized attention (Q. Lu et al., 2024). Academics can also utilize ChatGPT to automate administrative duties like grading, feedback, and enrollment management, saving time and creating space for more demanding tasks (Al Ghazali et al., 2024; Okulu & Muslu, 2024). Furthermore, assisting academics with new content design and development (Okulu & Muslu, 2024) and inspiring course materials and evaluations are among the major benefits of ChatGPT (Al Ghazali et al., 2024). For example, academics can use ChatGPT to increase educational accessibility and inclusivity by creating flexible study materials and developing scenarios for active learning exercises (Sullivan et al., 2023).
However, incorporating ChatGPT presents certain challenges for academics as well. Academic integrity is one of the key challenges (Cong-Lem et al., 2024; Eke, 2023; Miao et al., 2024). Because AI programs may inadvertently foster reliance, instructors must educate students in appropriate use (Miao et al., 2024). Furthermore, ChatGPT’s responses may lack nuance and subject-specific accuracy, which can be problematic in subjects that require critical thinking (Cong-Lem et al., 2024). Additionally, the human-like responses generated by ChatGPT raise ethical concerns about plagiarism (Ray, 2023), intellectual property (David et al., 2024), and academic integrity (Qiao-Franco & Zhu, 2024). Other concerns relate to privacy and data security (Huang, 2023; Ray, 2023), especially when sensitive or private student data are shared with GenAI tools (Huang, 2023). Thus, as the technology advances, academics and students must continue to receive professional development and training to fully utilize ChatGPT’s potential in the classroom in an ethical and responsible manner (J. Lu et al., 2024).
Given these benefits and challenges, GenAI tools such as ChatGPT play a significant role in enhancing the teaching and learning experience in HEIs; thus, they have drawn significant attention aimed at promoting their benefits and mitigating their threats (Rahman & Watanobe, 2023; Trust et al., 2023). However, despite the great potential of GenAI integration in education (Parycek et al., 2023), there is still an acute need to investigate students’ and academics’ acceptance of the emerging GenAI technologies and how they can be integrated into teaching and learning.
While the potential of GenAI in education demonstrates its promise for students and academics, its attendant concerns demand more research on how students and academics view its effectiveness in learning and teaching. This is particularly important for higher education institutions like Sultan Qaboos University (SQU), so they can better address the challenges arising from the emergence of GenAI in teaching and learning environments, as well as benefit from the new opportunities it brings. To explain technology adoption, Davis (1989) introduced the Technology Acceptance Model (TAM), a framework that explains how users accept and use technology (Pillai et al., 2024). It is based on perceived ease of use and perceived usefulness, the initial factors shaping the acceptance of technology (Sánchez-Prieto et al., 2019). TAM has been applied in various contexts to assess students’ and academics’ acceptance of educational technology. Research on TAM has indicated that the perceived ease of use (EU) and perceived usefulness (PU) factors can predict technology adoption among students and academics (Pillai et al., 2024).
The main purpose of many studies using the Technology Acceptance Model (TAM) is to determine the reasons for accepting or rejecting technology and to predict behavioral intention to use it in the future. TAM has thus evolved through several variations (TAM1, TAM2, and TAM3) (Ursavaş, 2022). This study adopts TAM3, developed by Venkatesh and Bala (2008), which is better aligned with capturing the characteristics of newer technology because it adds predictors related to perceived ease of use and perceived usefulness. TAM3 provides a more comprehensive approach to technology adoption, with more variables to consider. The extent to which students and academics find a new technology easy to utilize reflects perceived ease of use (EU). They are more likely to incorporate new technologies into their learning and teaching practices when they find the technology user-friendly and accessible (Dwivedi et al., 2023). Research on technology acceptance shows that while complicated tools can discourage involvement and lead to technology resistance, simplicity of use is an important factor in the acceptability of educational technology (Mhlanga, 2023; Silva et al., 2024). Perceived usefulness (PU) in the context of GenAI adoption reflects the extent to which students and academics believe that GenAI tools, like ChatGPT, can improve their teaching and learning experiences. Several studies found that students are more likely to utilize technology when they perceive its usefulness in generating ideas, enhancing writing quality, and aiding research (Albayati, 2024; Rohan et al., 2023). Furthermore, GenAI’s ability to deliver quick feedback and individualized learning experiences can considerably improve perceived usefulness for both students and academics (Su & Yang, 2023a).
Sánchez-Vera (2025) found that chatbots trained in specific subject areas serve as effective tutors for higher education students by fostering autonomous learning. These chatbots help clarify students’ content-related doubts and support their understanding, contributing to improved academic performance. Notably, these benefits were observed with moderate use of the chatbot, rather than minimal or excessive engagement. In line with TAM3, this indicates an additional advantage in students’ perceived usefulness of GenAI when it is employed as a complementary learning tool rather than as a primary source. Moreover, successful adoption of GenAI tools requires addressing ease-of-use issues by equipping students with prompt-engineering skills that enable effective utilization of such technologies. By addressing both perceived usefulness and ease of use, students are more likely to develop a stronger behavioral intention to use GenAI tools, leading to more consistent and effective usage.
Therefore, based on the TAM model, this study aims to investigate the perceptions of students and academics at SQU on the integration of GenAI technologies in teaching and learning environments by addressing the following research questions:
  • How familiar are students and academics at SQU with GenAI tools like ChatGPT?
  • Are there any differences between students’ and academics’ perceptions of GenAI (perceived usefulness (PU), ease of use (EU), and perceived challenges (PC)) at SQU?
  • What are the factors influencing students’ and academics’ intentions to use GenAI in the future at Sultan Qaboos University?

1.1. Students’ Perceptions on the Use of GenAI in Higher Education

The way students perceive GenAI tools affects the integration of these tools in higher education and impacts the achievement of learning outcomes (Cao et al., 2023). Research on TAM has shown that perceived usefulness and ease of use influence students’ attitudes towards the use and adoption of new educational technology in higher education (Saqr et al., 2024). Amoozadeh et al. (2024) found that students have different perceptions about using GenAI in computer science courses, which, in turn, affected their GenAI adoption and learning outcomes in these courses. Students are more likely to use GenAI for enhancing content understanding (Jeong, 2023; Trust et al., 2023), providing personalized learning and tailored support (Dogan et al., 2023), and improving productivity in content creation and problem-solving (Taylor, 2023) when they have a positive perception of its usefulness. On the other hand, students’ acceptance of GenAI tools is significantly influenced by its reliability and ethical use. Students have expressed concerns about the production of low-quality, inaccurate, or biased outputs (Crawford et al., 2023), which pose challenges to academic integrity, cultural stereotyping (Cotton et al., 2024; Eke, 2023), privacy, fairness, and transparency (Qiao-Franco & Zhu, 2024; Ray, 2023). Furthermore, over-reliance on GenAI may hinder the development of essential skills (Taylor, 2023).
Recent research has revealed several factors that influence students’ acceptance of GenAI in teaching and learning, such as trust, confidence, motivation, and the adaptation of assessment and curricula to integrate GenAI use. For instance, Amoozadeh et al. (2024) found that students’ trust in GenAI, as well as their confidence and motivation towards using it, affect not only the extent to which they integrate GenAI but also their performance. Similarly, Wang and Zhang (2023) stated that effort expectancy, price value, and motivation, together with optimism, creativity, and trait curiosity, affect Chinese Generation Z’s adoption of AI in art design. Qashou (2021) stated that students’ intentions to adopt AI technology in Palestine were significantly influenced by perceived usefulness, ease of use, and self-efficacy. Locally, in Oman, Sultan Qaboos University administered a web-based questionnaire to all clinical-year medical students to study their knowledge, perspectives, attitudes, and readiness regarding AI in healthcare (Al Hadithy et al., 2023). The findings revealed that although students had positive perceptions and attitudes about AI, 60% of them had no prior exposure to AI in healthcare, with a median knowledge score of 3.25 out of 5 (Al Hadithy et al., 2023).

1.2. Academics’ Perceptions on the Use of GenAI in Higher Education

Academics’ perceptions of teaching elements, such as teaching materials, strategies, and assessment tools, shape their teaching practices and significantly influence their willingness to adopt new technological approaches. For instance, tools like chatbots can be utilized to foster a deeper understanding of the content or merely achieve surface-level learning outcomes (Sweet Moore et al., 2023). When academics believe that GenAI is a constructive aiding tool in teaching, they are more likely to adopt such tools in their teaching practices to enhance learning outcomes (Donlon & Tiernan, 2023; Farazouli et al., 2024). Likewise, academics’ concerns and reservations about GenAI tools may make them more careful and cautious about their adoption in teaching and learning (Kaplan-Rakowski et al., 2023; Vartiainen & Tedre, 2023).
The way academics perceive the advantages, drawbacks, and usability of GenAI tools determines to what extent the adoption of such tools is effective. Academics believe that the integration of GenAI in teaching has significant potential to enhance teaching and learning (Potter et al., 2023) despite its challenges related to ethical use, the need for proper training, and the impact on academic integrity (Crawford et al., 2023). Research on academics’ perceptions of AI tools, such as automated grading systems and virtual teaching assistants, showed how useful these tools are for enhancing assessment accuracy, supporting individualized instruction, fostering critical AI literacy development, and reducing workload (Qu et al., 2023; Xiong et al., 2024). However, academics also raised some concerns about GenAI’s weaknesses. It is argued that GenAI is unable to foster genuine learning interactions, raises data privacy issues, and that AI-mediated learning lacks human involvement (Kamoun et al., 2024). Moreover, academics expressed concerns regarding the integration of AI literacy into educational programs (Su & Yang, 2024), job security (Qizi, 2023), and potential biases in decision making and evaluation procedures within education (Vartiainen & Tedre, 2023). Alrishan (2023) used TAM to examine factors affecting academics’ adoption of GenAI, such as instructor support, personal innovativeness, and perceived learning value, and found that these factors impact academics’ perceptions of ChatGPT’s utility and ease of use.

2. Materials and Methods

2.1. Survey Development and Validation

Data for this study were collected using a survey specifically developed to measure students’ and academics’ perceptions of ease of use (EU), perceived usefulness (PU), and perceived challenges (PC) of GenAI. The survey was based on previous studies of TAM generally (Dwivedi et al., 2023; Ursavaş, 2022; Venkatesh & Bala, 2008) and on studies that adopted some of the TAM variables to measure adoption of GenAI (Bernabei et al., 2023; Chan & Hu, 2023; Ngo, 2023; Shoufan, 2023). To ensure the content validity of the survey, the initial version was reviewed by 7 academics who are researching and using GenAI at SQU. Modifications were made based on their feedback, which included clarifying some of the statements, adding some additional AI tools, and incorporating contextual comments related to the situation at SQU. The final survey included 11 items for EU and 8 items for PC for both students and academics. However, the PU items differed to reflect the different ways the two groups benefit from GenAI, with 12 items for students and 9 items for academics. Table 1 presents the reliability coefficients and correlations for the study variables in both samples. As the table shows, all internal consistency Cronbach alpha coefficients are higher than 0.7, which demonstrates high reliability of all survey scales in both groups. Additionally, there are significant bivariate correlations between all study variables in both surveys, supporting the convergent validity of the TAM scales. The strongest bivariate correlation was between EU and PU in both samples. The strength of the bivariate correlations between the other variables differed in each sample.
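For readers unfamiliar with the reliability statistic reported above, the following sketch shows how an internal consistency Cronbach alpha coefficient is computed for a multi-item Likert scale. The response matrix is invented purely for illustration and is not the study’s data; values above 0.7 are conventionally taken as acceptable, as in this study.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents x n_items) matrix of Likert responses."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]                          # number of items in the scale
    item_vars = items.var(axis=0, ddof=1)       # sample variance of each item
    total_var = items.sum(axis=1).var(ddof=1)   # variance of respondents' scale totals
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Illustrative data: 6 respondents x 4 hypothetical EU items on a 1-5 scale
responses = np.array([
    [4, 5, 4, 4],
    [3, 3, 4, 3],
    [5, 5, 5, 4],
    [2, 3, 2, 3],
    [4, 4, 5, 4],
    [3, 4, 3, 3],
])
alpha = cronbach_alpha(responses)
print(f"Cronbach's alpha = {alpha:.2f}")
```

The coefficient rises when items covary strongly relative to their individual variances, which is why it is read as evidence that the items measure a single underlying construct.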

2.2. Data Collection and Sample

Data were collected via a survey sent in a bulk email to all academics and students at Sultan Qaboos University (SQU). The sample of respondents consisted of 555 students and 168 academics from different colleges at SQU. Table 2 presents full details about the sample demographics.

3. Results

Concerning the first research question: How familiar are students and academics at SQU with GenAI tools like ChatGPT?, the data illustrate a high level of familiarity with GenAI tools among students and academics at SQU, with heavier student use compared to academic use, as depicted in Figure 1. Only about a quarter of the participants in both samples have “never” or “rarely” used it, while 88% of students and 82% of academics reported using ChatGPT 3.5, and 20% of students and 32% of academics reported using ChatGPT Plus-4, along with lower and varying percentages of both samples reporting using other GenAI tools, such as Gemini, Claude, Perplexity, etc.

3.1. Differences Between Students’ and Academics’ Perceptions of GenAI

To address the second research question: Are there any differences between students’ and academics’ perceptions of GenAI (perceived usefulness (PU), ease of use (EU), and perceived challenges (PC)) at SQU?, a t-test was conducted. Based on the results presented in Table 3, there are significant differences between students and academics in their perceptions of GenAI across all TAM variables (EU, PU, and PC). Academics perceive the use of GenAI as easier than students do, yet they also perceive more challenges; students, on the other hand, find the use of GenAI more useful than academics do.
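The group comparison above can be illustrated in outline with an independent-samples t-test. The sketch below uses simulated scores whose means and standard deviations merely mimic the reported PU figures (students: 3.85, academics: 3.64); it is not the study’s data, and Welch’s variant is assumed here since the two groups are of unequal size.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# Simulated scale scores on a 1-5 Likert metric; the location/scale values
# mimic the reported PU group statistics and are not the actual data.
students = rng.normal(loc=3.85, scale=0.68, size=555)
academics = rng.normal(loc=3.64, scale=0.70, size=168)

# Welch's t-test: does not assume equal variances across the two groups
t_stat, p_value = stats.ttest_ind(students, academics, equal_var=False)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
```

A p-value below the chosen significance level (typically 0.05) would indicate a significant difference between the two groups’ mean perceptions, as Table 3 reports.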

3.1.1. Perceptions of Perceived Usefulness (PU)

To understand the perceptions of specific statements for the TAM variables, we explore the descriptive statistics for both students and academics provided in Table 4, Table 5 and Table 6. For perceived usefulness, the findings indicate that students generally rated the benefits of ChatGPT higher (mean = 3.85) than academics (mean = 3.64). The highest-rated statements for students are that ChatGPT “can help me save time”, “can provide information in a variety of fields”, “is useful for research” and “allows students to ask questions that they would otherwise not ask their teachers”. It is clear from these statements that students regard GenAI tools like ChatGPT as both a valuable resource and a private tutor. Similarly, academics found ChatGPT to be a great resource for teaching, with the highest-rated statements concerning its usefulness in developing learning materials, assessments, and lesson plans, and in customizing learning content.

3.1.2. Perceptions of Perceived Ease of Use (EU)

For perceived ease of use (EU), the findings indicate that both students and academics perceive ChatGPT as a highly user-friendly tool, with slight variations in their evaluations. The ease of use of GenAI tools was rated slightly higher by academics (mean = 3.96) than by students (mean = 3.85). The four highest-rated statements are the same for both students and academics, albeit in different orders within the samples. These are that ChatGPT “is easy to use”, “can give answers quickly”, “is available 24/7 for me” and “provides a friendly user experience”. All these statements reflect the ease and availability of ChatGPT and suggest that both students and academics at SQU feel confident leveraging these tools for various educational purposes.
Table 5. Descriptive statistics for perceived ease of use (EU). Values are mean (standard deviation) for students (n1 = 555) and academics (n2 = 168).
1. ChatGPT is easy to use. Students: 4.01 (0.99); Academics: 4.20 (0.86)
2. ChatGPT can give answers quickly. Students: 4.03 (0.99); Academics: 4.24 (0.75)
3. ChatGPT can be used with various input languages. Students: 3.82 (1.01); Academics: 3.86 (0.90)
4. ChatGPT is available 24/7 for me. Students: 4.18 (1.06); Academics: 4.16 (0.84)
5. ChatGPT provides a friendly user experience. Students: 4.10 (0.91); Academics: 4.11 (0.79)
6. ChatGPT can provide human-like responses. Students: 3.43 (1.02); Academics: 3.68 (1.06)
7. ChatGPT offers the possibility to ask follow-up questions and prompts. Students: 3.90 (0.96); Academics: 4.01 (0.94)
8. ChatGPT can improve my efficiency and productivity. Students: 3.71 (1.03); Academics: 4.05 (0.86)
9. ChatGPT can process and analyze large amounts of data quickly and accurately. Students: 3.66 (0.98); Academics: 3.77 (0.97)
10. I understand how ChatGPT works. Students: 3.50 (1.07); Academics: 3.77 (0.94)
11. I feel comfortable using ChatGPT as it does not judge me. Students: 3.56 (1.03); Academics: 3.76 (0.92)
Scale mean: Students 3.85 (0.68); Academics 3.96 (0.62)
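As an illustration of how the per-item figures in a table like the one above are derived, the sketch below computes group means and standard deviations with pandas. The responses and the item name are invented stand-ins, not the study’s data.

```python
import pandas as pd

# Invented Likert responses (1-5) for one hypothetical EU item, tagged by group
df = pd.DataFrame({
    "group": ["student"] * 6 + ["academic"] * 4,
    "eu_item_1": [4, 5, 3, 4, 2, 5, 5, 4, 4, 5],
})

# Mean and sample standard deviation (ddof=1) per group, rounded to 2 decimals
summary = df.groupby("group")["eu_item_1"].agg(["mean", "std"]).round(2)
print(summary)
```

Repeating this aggregation over every survey item and both samples yields exactly the kind of mean/SD grid shown in Tables 5 and 6.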

3.1.3. Perceptions of Perceived Challenges (PC)

For perceived challenges, the data illustrate that the challenges associated with GenAI tools were rated higher by academics (mean = 3.74) than by students (mean = 3.56). The statement “ChatGPT can make people over-reliant on it, reducing critical thinking and creativity” is the highest-rated challenge from the students’ perspective and the second highest from the academics’ perspective. In the sample of academics, the statement with the highest mean was “ChatGPT can facilitate cheating in exams or assignments”, reflecting worries about students using ChatGPT to cheat. The second highest-rated statement for academics was “ChatGPT can give biased/misleading and out-of-context answers/information”. For students, the second highest statement was “ChatGPT can exhibit logical errors and contradictions”, followed by “ChatGPT is unable to cite sources accurately and may fabricate references”. This clearly shows that students are well aware of the shortcomings of ChatGPT and know that they should not fully trust information generated by it.
Table 6. Descriptive statistics for perceived challenges (PC). Values are mean (standard deviation) for students (n1 = 555) and academics (n2 = 168).
1. ChatGPT can exhibit logical errors and contradictions. Students: 3.76 (0.93); Academics: 3.79 (0.84)
2. ChatGPT can give biased/misleading and out-of-context answers/information. Students: 3.66 (0.97); Academics: 3.95 (0.85)
3. ChatGPT is unable to cite sources accurately and may fabricate references. Students: 3.74 (1.02); Academics: 3.90 (0.90)
4. ChatGPT does not understand a single word of what it produces. Students: 2.97 (1.08); Academics: 3.07 (1.13)
5. ChatGPT can facilitate cheating in exams or assignments. Students: 3.67 (1.11); Academics: 3.96 (1.00)
6. ChatGPT can exacerbate the inequality and digital divide among students. Students: 3.46 (1.10); Academics: 3.78 (0.90)
7. ChatGPT can make people over-reliant on it, reducing critical thinking and creativity. Students: 3.83 (1.11); Academics: 3.95 (0.92)
8. It is increasingly difficult to distinguish between human-like responses generated by generative AI tools and those of a human. Students: 3.39 (1.02); Academics: 3.50 (1.01)
Scale mean: Students 3.56 (0.64); Academics 3.74 (0.61)

3.2. Intended Future Use of GenAI at SQU

Table 7 addresses the third research question: What are the factors influencing students’ and academics’ intentions to use GenAI in the future at Sultan Qaboos University? To understand the adoption of GenAI and its future use at SQU, it was critical to understand students’ and academics’ future intention to use (IU), their opinion on whether GenAI should be adopted or prohibited at SQU, and their awareness of any available policies related to ethical GenAI use at SQU. These results are summarized in Table 7.
The majority of respondents expressed a positive attitude toward using GenAI tools like ChatGPT in the future, with 81% of students and 86% of academics indicating “yes”. This strong interest aligns with earlier findings related to the Technology Acceptance Model (TAM) variables concerning perceived usefulness and ease of use (Dwivedi et al., 2023). However, a small percentage of respondents—16.3% who were unsure and 1.7% who explicitly stated “no” to future use—suggests that there may be potential barriers to the adoption of GenAI tools at SQU, such as perceived challenges and situational factors.
The previous results indicate that both students and academics perceive GenAI tools as highly beneficial. Consequently, it is not surprising that a significant majority (about 68% of students and 74% of academics) believe that GenAI technologies like ChatGPT should be embraced at SQU. In contrast, a small minority believe it should be prohibited (about 4% of both students and academics), which may stem from concerns about its use and possibly a lack of familiarity with it. Meanwhile, about a quarter (27%) of respondents in both samples were unsure whether GenAI tools should be embraced or prohibited.
While the majority of respondents indicate an intention to continue using GenAI tools, like ChatGPT, in the future and support embracing them at SQU, a majority as well (63% of students and 74% of academics) reported that they were either unsure about the availability of GenAI policies or guidelines at SQU or believed such policies were unavailable at the time of data collection (March–July 2024). This is a key situational barrier to GenAI acceptance at the university, as policies legitimize and organize GenAI use and clarify institutional expectations, thereby contributing to students’ and academics’ perceptions of ease of use and perceived usefulness. Since that time, Sultan Qaboos University has produced several documents pertaining to the use of artificial intelligence by students and faculty, including “Guidelines for using GenAI tools in Teaching, Learning and Assessment” and “SQU AI Guidelines for administrative purposes” (Centre for Excellence in Teaching and Learning (CETL), 2025).

4. Discussion

The findings from this study provide a nuanced understanding of the promises and challenges associated with using GenAI tools, such as ChatGPT, at SQU from the perspective of both students and academics. They reveal that GenAI usage is relatively common among both students and academics, with a substantial number of respondents reporting frequent use of these tools. This widespread adoption points to a recognition of GenAI’s benefits in supporting academic tasks by providing adaptive and personalized learning, which increases teaching efficiency and enhances the quality of student learning. This finding aligns with previous research that reported similar potential benefits of incorporating GenAI in education, such as personalizing learning experiences, enhancing student engagement (Sharma et al., 2024; Atlas, 2023; Su & Yang, 2023b; Sullivan et al., 2023), improving academic performance (Chiu, 2024; Su & Yang, 2023b), supporting both teaching and administrative tasks, and facilitating adaptive and interactive learning environments (Pratama et al., 2023; Q. Lu et al., 2024; Al Ghazali et al., 2024; Okulu & Muslu, 2024). However, this high level of engagement may also raise several concerns.
On the other hand, the findings also reflect the concerns both academics and students have regarding the negative impact of over-reliance on GenAI on academic integrity and critical thinking and creativity. This finding suggests that both groups see GenAI as a potential threat to students’ intellectual independence. The fact that academics tend to perceive higher levels of challenges associated with GenAI use than students indicates that academics may have a more critical view of GenAI because of their greater awareness of its potential ethical and academic risks, such as misuse or dependency, whereas students may be more open to its perceived benefits or may not yet fully recognize certain risks. This underscores academics’ cautious approach to GenAI integration in the academic setting.
These findings align with several studies that reported similar concerns about academic integrity and the potential for students to use GenAI unethically to cheat on projects and assignments (Cotton et al., 2024; Eke, 2023; Cong-Lem et al., 2024; Miao et al., 2024). They also echo findings from similar studies, which found that over-reliance on GenAI technologies could undermine students’ creativity, critical thinking skills, and overall learning experience (Téllez et al., 2024; Nikolic et al., 2023; Sullivan et al., 2023; Taylor, 2023). Additionally, while ChatGPT provides quick and structured responses, it lacks the ability to critically evaluate information, potentially leading to factual inaccuracies if used without human oversight (Branum & Schiavenato, 2023; Cong-Lem et al., 2024).
The general positive attitude of both students and academics towards the adoption of GenAI at SQU, despite their concerns, suggests readiness among both groups to explore GenAI’s potential benefits, with an acknowledgment that regulatory frameworks may be necessary to address its potential drawbacks. However, a sizable minority of both samples (28.3% of students and 23.2% of academics) remains unsure whether GenAI tools should be embraced or prohibited, pointing to the need for greater awareness and understanding of GenAI’s applications and limitations. This finding reinforces the importance of involving both students and academics in discussions and decisions regarding GenAI policies and usage guidelines to help foster a collaborative approach to GenAI integration that addresses the needs and values of the academic community. It aligns with research on the TAM, which has indicated that perceived ease of use (EU) and perceived usefulness (PU) can predict technology adoption among students and academics (Pillai et al., 2024; Albayati, 2024; Rohan et al., 2023). It also echoes findings from previous research indicating that, as technology continues to evolve, it is essential for both students and academics to engage in ongoing professional development and training to integrate GenAI tools in HEIs in an effective, ethical, and responsible manner (J. Lu et al., 2024).

5. Conclusions

To conclude, the findings of this study contribute valuable insights into the promises and challenges associated with GenAI usage at SQU. They indicate that although both students and academics at SQU recognize the challenges of GenAI in higher education, there is substantial support for its integration, tempered by notable concerns around ethical and practical implications, especially from academics. Understanding these challenges and concerns is crucial for creating an educational environment that fosters both technological advancement and academic integrity.
By examining these challenges, the study highlights the need for clear, structured guidelines and policies within SQU and other HEIs in Oman and internationally to support informed, ethical, and responsible use of GenAI tools in academic settings. Involving both students and academics in the development of GenAI-related policies and best practices can lead to more relevant, practical, and widely accepted frameworks. Furthermore, HEIs must establish mechanisms to regularly assess the evolving impact of GenAI on academic integrity and make timely adjustments to policies to address emerging concerns.
To address specific ethical risks, such as data privacy, algorithmic bias, and the preservation of academic integrity, higher education institutions must develop transparent AI governance frameworks and promote critical AI literacy. In this context, the authors recommend the adoption of an AI disclosure policy in teaching and learning, based on their own instructional practices at SQU. Under this policy, students are required to disclose the GenAI tools they used, describe how the tools were applied (e.g., to brainstorm ideas, to generate text, to assist with coding, to paraphrase, to enhance the language, to translate, etc.), include the exact prompts used, and attach the AI-generated output as an appendix to their assignments. This practice mirrors AI disclosure policies already adopted in scientific research and publishing and serves as a practical mechanism to maintain academic integrity while fostering ethical and reflective AI use in education.
In addition, the findings emphasize the critical importance of implementing ongoing training programs that go beyond basic digital literacy. Such programs should focus on enhancing AI ethics and critical thinking and prompt engineering skills, equipping users with the knowledge and competencies needed to engage with GenAI tools effectively, ethically, and responsibly. As GenAI technologies continue to evolve in complexity and functionality, continuous capacity-building efforts will ensure that students and academics can meaningfully and safely integrate these tools as complementary aids in teaching and learning, rather than as primary sources of information.
For policymakers, academic leaders and practitioners, the study’s findings provide a foundation for developing comprehensive frameworks that promote a balanced approach to GenAI integration. By proactively addressing ethical concerns, fostering transparency through adopting AI disclosure policies, and promoting responsible use through education and training, HEIs in Oman and beyond can leverage the benefits of GenAI while mitigating associated risks, safeguarding academic integrity, and cultivating essential skills in critical thinking, creativity, and ethical reasoning.

6. Limitations and Future Research

This study was conducted within the context of a single higher education institution in the Sultanate of Oman, which may limit the generalizability of its findings. While the insights gained from students and academics at Sultan Qaboos University (SQU) offer valuable contributions to understanding the current state of GenAI adoption in higher education, they may not fully represent the broader experiences and practices of other institutions, both within Oman and internationally. Nevertheless, these findings offer a strong foundation for further empirical investigation into the role and effectiveness of integrating GenAI tools into teaching and learning.
Future research could focus on investigating the long-term impact of GenAI on the development of students’ academic skills and the achievement of learning outcomes across different disciplines at multiple HEIs. Employing mixed-method research designs is recommended to provide both breadth and depth in understanding user experiences and engagement with GenAI tools, capturing both quantitative patterns and qualitative insights. Longitudinal studies could also offer valuable insights into how perceptions and usage patterns evolve over time with continued exposure to these technologies. Furthermore, future investigations could examine how institutional AI policies, training programs, and awareness initiatives influence both students’ and academics’ perceptions and adherence to ethical and responsible use of GenAI in higher education. Beyond national boundaries, comparative studies across countries and cultural contexts would be valuable in understanding the role of cultural factors in shaping GenAI adoption and usage in higher education institutions. In addition, interdisciplinary research – particularly involving education, computer science, ethics, and psychology – can offer more holistic perspectives on the pedagogical, technical, ethical, and cognitive dimensions of GenAI integration in higher education. Finally, as a rapidly evolving and transformative technological innovation, GenAI continues to reshape teaching and learning practices in higher education. Therefore, HEIs must closely monitor its development and regularly review and refine their policies and practices to ensure its ethical, informed, and effective integration into academic environments.

Author Contributions

Conceptualization, A.A. and A.S.A.A.-H.; methodology, A.A. and A.S.A.A.-H.; validation, A.A. and A.S.A.A.-H.; formal analysis, S.A. and A.S.A.A.-H.; resources, A.A.; data curation, A.A., A.S.A.A.-H. and S.A.; writing—original draft preparation, A.A.; writing—review and editing, A.A., A.S.A.A.-H. and S.A.; visualization, A.S.A.A.-H.; project administration, A.A.; funding acquisition, A.A. All authors have read and agreed to the published version of the manuscript.

Funding

This research was supported by the Ministry of Higher Education, Research and Innovation in Oman under SQU project code: RC/RG-EDU/DEFA/23/01.

Institutional Review Board Statement

The data collection tool (the survey) was approved by the Research Ethics Committee of the College of Education at Sultan Qaboos University. The approval number is: REAAF/EDU/DEFA/2023/01, and the approval date is 7 January 2024.

Informed Consent Statement

Informed consent was obtained from all participants involved in the study at the beginning of the online survey.

Data Availability Statement

The data presented in this study are available on request from the corresponding author due to the privacy regulations at SQU.

Acknowledgments

The authors acknowledge the Ministry of Higher Education, Research and Innovation in Oman for funding this research under SQU project code: RC/RG-EDU/DEFA/23/01. They also acknowledge the participants of the study for their time and valuable feedback.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Albayati, H. (2024). Investigating undergraduate students’ perceptions and awareness of using ChatGPT as a regular assistance tool: A user acceptance perspective study. Computers and Education: Artificial Intelligence, 6.
  2. Al Ghazali, S., Zaki, N., Ali, L., & Harous, S. (2024). Exploring the potential of ChatGPT as a substitute teacher: A case study. International Journal of Information and Education Technology, 14(2), 271–278.
  3. Al Hadithy, Z. A., Al Lawati, A., Al-Zadjali, R., & Al Sinawi, H. (2023). Knowledge, attitudes, and perceptions of artificial intelligence in healthcare among medical students at Sultan Qaboos University. Cureus, 15(9), e44887.
  4. Alrishan, A. M. H. (2023). Determinants of intention to use ChatGPT for professional development among Omani EFL pre-service teachers. International Journal of Learning, Teaching and Educational Research, 22(12), 187–209.
  5. Amoozadeh, M., Daniels, D., Nam, D., Kumar, A., Chen, S., Hilton, M., Srinivasa Ragavan, S., & Alipour, M. A. (2024, March 20–23). Trust in generative AI among students: An exploratory study. SIGCSE 2024—Proceedings of the 55th ACM Technical Symposium on Computer Science Education (Vol. 1), Portland, OR, USA.
  6. Atlas, S. (2023). ChatGPT for higher education and professional development: A guide to conversational AI. In DigitalCommons@URI (Vol. 1). University of Rhode Island.
  7. Bernabei, M., Colabianchi, S., Falegnami, A., & Costantino, F. (2023). Students’ use of large language models in engineering education: A case study on technology acceptance, perceptions, efficacy, and detection chances. Computers and Education: Artificial Intelligence, 5, 100172.
  8. Branum, C., & Schiavenato, M. (2023). Can ChatGPT accurately answer a PICOT question? Assessing AI response to a clinical question. Nurse Educator, 48(5), 231–233.
  9. Cao, Y., Aziz, A. A., & Arshard, W. N. R. M. (2023). University students’ perspectives on artificial intelligence: A survey of attitudes and awareness among interior architecture students. International Journal of Educational Research and Innovation, 2023(20), 1–21.
  10. Centre for Excellence in Teaching and Learning (CETL). (2025). Useful resources. Available online: https://www.squ.edu.om/About/Support-Centers-/Centre-For-Excellence-In-Teaching-and-Learning (accessed on 5 January 2025).
  11. Chan, C. K. Y., & Hu, W. (2023). Students’ voices on generative AI: Perceptions, benefits, and challenges in higher education. International Journal of Educational Technology in Higher Education, 20(1), 43.
  12. Chaudhry, I. S., Sarwary, S. A. M., El Refae, G. A., & Chabchoub, H. (2023). Time to revisit existing student’s performance evaluation approach in higher education sector in a new era of ChatGPT—A case study. Cogent Education, 10(1), 2210461.
  13. Chiu, T. K. F. (2024). Future research recommendations for transforming higher education with generative AI. Computers and Education: Artificial Intelligence, 6, 100197.
  14. Cong-Lem, N., Tran, T. N., & Nguyen, T. T. (2024). Academic integrity in the age of generative AI: Perceptions and responses of Vietnamese EFL teachers. Teaching English with Technology, 24(1), 28–47.
  15. Cotton, D. R. E., Cotton, P. A., & Shipway, J. R. (2024). Chatting and cheating: Ensuring academic integrity in the era of ChatGPT. Innovations in Education and Teaching International, 61(2), 228–239.
  16. Crawford, J., Vallis, C., Yang, J., Fitzgerald, R., & O’dea, C. (2023). Editorial: Artificial intelligence is awesome, but good teaching should always come first. Journal of University Teaching and Learning Practice, 20(7), 1–12.
  17. David, P., Choung, H., & Seberger, J. S. (2024). Who is responsible? US public perceptions of AI governance through the lenses of trust and ethics. Public Understanding of Science, 33(5), 654–672.
  18. Davis, F. D., & Granić, A. (1989). Technology acceptance model. Springer Nature. Available online: https://link.springer.com/content/pdf/10.1007/978-3-030-45274-2.pdf (accessed on 3 January 2025).
  19. Dogan, M. E., Goru Dogan, T., & Bozkurt, A. (2023). The use of artificial intelligence (AI) in online learning and distance education processes: A systematic review of empirical studies. Applied Sciences, 13(5), 3056.
  20. Donlon, E., & Tiernan, P. (2023). Chatbots and citations: An experiment in academic writing with generative AI. Irish Journal of Technology Enhanced Learning, 7(2), 75–87.
  21. Dwivedi, Y. K., Kshetri, N., Hughes, L., Slade, E. L., Jeyaraj, A., Kar, A. K., Baabdullah, A. M., Koohang, A., Raghavan, V., Ahuja, M., Albanna, H., Albashrawi, M. A., Al-Busaidi, A. S., Balakrishnan, J., Barlette, Y., Basu, S., Bose, I., Brooks, L., Buhalis, D., … Wright, R. (2023). “So what if ChatGPT wrote it?” Multidisciplinary perspectives on opportunities, challenges and implications of generative conversational AI for research, practice and policy. International Journal of Information Management, 71, 102642.
  22. Eke, D. O. (2023). ChatGPT and the rise of generative AI: Threat to academic integrity? Journal of Responsible Technology, 13, 100060.
  23. Farazouli, A., Cerratto-Pargman, T., Bolander-Laksov, K., & McGrath, C. (2024). Hello GPT! Goodbye home examination? An exploratory study of AI chatbots impact on university teachers’ assessment practices. Assessment and Evaluation in Higher Education, 49(3), 363–375.
  24. Firaina, R., & Sulisworo, D. (2023). Exploring the usage of ChatGPT in higher education: Frequency and impact on productivity. Buletin Edukasi Indonesia, 2(01), 39–46.
  25. Hashem, R., Ali, N., El Zein, F., Fidalgo, P., & Khurma, O. A. (2024). AI to the rescue: Exploring the potential of ChatGPT as a teacher ally for workload relief and burnout prevention. Research and Practice in Technology Enhanced Learning, 19, 23.
  26. Huang, L. (2023). Ethics of artificial intelligence in education: Student privacy and data protection. Science Insights Education Frontiers, 16(2), 2577–2587.
  27. Jeong, C. (2023). A study on the implementation of generative AI services using an enterprise data-based LLM application architecture. Advances in Artificial Intelligence and Machine Learning, 3(4), 1588–1618.
  28. Kamoun, F., El Ayeb, W., Jabri, I., Sifi, S., & Iqbal, F. (2024). Exploring students’ and faculty’s knowledge, attitudes, and perceptions towards ChatGPT: A cross-sectional empirical study. Journal of Information Technology Education: Research, 23, 1.
  29. Kaplan-Rakowski, R., Grotewold, K., Hartwick, P., & Papin, K. (2023). Generative AI and teachers’ perspectives on its implementation in education. Journal of Interactive Learning Research, 34(2), 313–338.
  30. Lu, J., Zheng, R., Gong, Z., & Xu, H. (2024). Supporting teachers’ professional development with generative AI: The effects on higher order thinking and self-efficacy. IEEE Transactions on Learning Technologies, 17, 1267–1277.
  31. Lu, Q., Yao, Y., Xiao, L., Yuan, M., Wang, J., & Zhu, X. (2024). Can ChatGPT effectively complement teacher assessment of undergraduate students’ academic writing? Assessment and Evaluation in Higher Education, 49(5), 616–633.
  32. Mhlanga, D. (2023). Open AI in education, the responsible and ethical use of ChatGPT towards lifelong learning. SSRN Electronic Journal.
  33. Miao, J., Thongprayoon, C., Suppadungsuk, S., Garcia Valencia, O. A., Qureshi, F., & Cheungpasitporn, W. (2024). Ethical dilemmas in using AI for academic writing and an example framework for peer review in nephrology academia: A narrative review. Clinics and Practice, 14(1), 89–105.
  34. Ngo, T. T. A. (2023). The perception by university students of the use of ChatGPT in education. International Journal of Emerging Technologies in Learning (Online), 18(17), 4.
  35. Nikolic, S., Daniel, S., Haque, R., Belkina, M., Hassan, G. M., Grundy, S., Lyden, S., Neal, P., & Sandison, C. (2023). ChatGPT versus engineering education assessment: A multidisciplinary and multi-institutional benchmarking and analysis of this generative artificial intelligence tool to investigate assessment integrity. European Journal of Engineering Education, 48(4), 559–614.
  36. Okulu, H. Z., & Muslu, N. (2024). Designing a course for pre-service science teachers using ChatGPT: What ChatGPT brings to the table. Interactive Learning Environments, 32, 7450–7467.
  37. Parycek, P., Schmid, V., & Novak, A. S. (2023). Artificial intelligence (AI) and automation in administrative procedures: Potentials, limitations, and framework conditions. Journal of the Knowledge Economy, 15, 8390–8415.
  38. Pillai, R., Sivathanu, B., Metri, B., & Kaushik, N. (2024). Students’ adoption of AI-based teacher-bots (T-bots) for learning in higher education. Information Technology and People, 37(1), 328–355.
  39. Potter, J., Welsh, K., & Milne, L. (2023). Evaluating an institutional response to Generative Artificial Intelligence (GenAI): Applying Kotter’s change model and sharing lessons learned for educational development. Journal of Perspectives in Applied Academic Practice, 11(3), 139–152.
  40. Pratama, M. P., Sampelolo, R., & Lura, H. (2023). Revolutionizing education: Harnessing the power of artificial intelligence for personalized learning. Klasikal: Journal of Education, Language Teaching and Science, 5(2), 350–357.
  41. Qashou, A. (2021). Influencing factors in M-learning adoption in higher education. Education and Information Technologies, 26(2), 1755–1785.
  42. Qiao-Franco, G., & Zhu, R. (2024). China’s artificial intelligence ethics: Policy development in an emergent community of practice. Journal of Contemporary China, 33(146), 189–205.
  43. Qizi, S. (2023). The role of AI in teaching: Advantages, disadvantages, and future implications. Eurasian Scientific Herald, 24, 8–12. Available online: https://geniusjournals.org/index.php/esh/article/view/4885 (accessed on 3 January 2025).
  44. Qu, J. H., Qin, X. R., Li, C. D., Peng, R. M., Xiao, G. G., Cheng, J., Gu, S. F., Wang, H. K., & Hong, J. (2023). Fully automated grading system for the evaluation of punctate epithelial erosions using deep neural networks. British Journal of Ophthalmology, 107(4), 453–460.
  45. Rahman, M. M., & Watanobe, Y. (2023). ChatGPT for education and research: Opportunities, threats, and strategies. Applied Sciences, 13(9), 5783.
  46. Ray, P. P. (2023). ChatGPT: A comprehensive review on background, applications, key challenges, bias, ethics, limitations and future scope. Internet of Things and Cyber-Physical Systems, 3, 121–154.
  47. Rohan, R., Faruk, L. I. D., Puapholthep, K., & Pal, D. (2023, December 6–9). Unlocking the black box: Exploring the use of generative AI (ChatGPT) in information systems research [ACM International Conference Proceeding Series]. 13th International Conference on Advances in Information Technology, Bangkok, Thailand.
  48. Saqr, R. R., Al-Somali, S. A., & Sarhan, M. Y. (2024). Exploring the acceptance and user satisfaction of AI-driven e-learning platforms (Blackboard, Moodle, Edmodo, Coursera and edX): An integrated technology model. Sustainability, 16(1), 204.
  49. Sánchez-Prieto, J. C., Cruz-Benito, J., Therón, R., & García-Peñalvo, F. J. (2019, October 16–18). How to measure teachers’ acceptance of AI-driven assessment in eLearning: A TAM-based proposal [ACM International Conference Proceeding Series]. TEEM’19: Technological Ecosystems for Enhancing Multiculturality, León, Spain.
  50. Sánchez-Vera, F. (2025). Subject-specialized chatbot in higher education as a tutor for autonomous exam preparation: Analysis of the impact on academic performance and students’ perception of its usefulness. Education Sciences, 15(1), 26.
  51. Sharma, S., Singh, G., Sharma, C. S., & Kapoor, S. (2024). Artificial intelligence in Indian higher education institutions: A quantitative study on adoption and perceptions. International Journal of System Assurance Engineering and Management.
  52. Shoufan, A. (2023). Exploring students’ perceptions of ChatGPT: Thematic analysis and follow-up survey. IEEE Access, 11, 38805–38818.
  53. Silva, C. A. G. d., Ramos, F. N., de Moraes, R. V., & Santos, E. L. d. (2024). ChatGPT: Challenges and benefits in software programming for higher education. Sustainability, 16(3), 1245.
  54. Su, J., & Yang, W. (2023a). Powerful or mediocre? Kindergarten teachers’ perspectives on using ChatGPT in early childhood education. Interactive Learning Environments, 32(10), 6496–6508.
  55. Su, J., & Yang, W. (2023b). Unlocking the power of ChatGPT: A framework for applying generative AI in education. ECNU Review of Education, 6(3), 355–366.
  56. Su, J., & Yang, W. (2024). AI literacy curriculum and its relation to children’s perceptions of robots and attitudes towards engineering and science: An intervention study in early childhood education. Journal of Computer Assisted Learning, 40(1), 241–253.
  57. Sullivan, M., Kelly, A., & McLaughlan, P. (2023). ChatGPT in higher education: Considerations for academic integrity and student learning. Journal of Applied Learning and Teaching, 6(1), 1–10.
  58. Sweet Moore, P., Coleman, B., Young, H., Bunch, J. C., & Jagger, C. (2023). Preservice teachers’ perceptions of important elements of the student teaching experience. Journal of Agricultural Education, 64(1), 171–183.
  59. Taylor, K. (2023). Supporting students and educators in using generative artificial intelligence. ASCILITE Publications.
  60. Téllez, N. R., Villela, P. R., & Bautista, R. B. (2024). Evaluating ChatGPT-generated linear algebra formative assessments. International Journal of Interactive Multimedia and Artificial Intelligence, 8(5), 75–82.
  61. Trust, T., Whalen, J., & Mouza, C. (2023). Editorial: ChatGPT: Challenges, opportunities, and implications for teacher education. Contemporary Issues in Technology and Teacher Education, 23(1), 1–23.
  62. Ursavaş, Ö. F. (2022). Conducting technology acceptance research in education: Theory, models, implementation, and analysis. Springer Nature.
  63. Vartiainen, H., & Tedre, M. (2023). Using artificial intelligence in craft education: Crafting with text-to-image generative models. Digital Creativity, 34(1), 1–21.
  64. Venkatesh, V., & Bala, H. (2008). Technology acceptance model 3 and a research agenda on interventions. Decision Sciences, 39(2), 273–315.
  65. Wang, Y., & Zhang, W. (2023). Factors influencing the adoption of generative AI for art designing among Chinese generation Z: A structural equation modeling approach. IEEE Access, 11, 143272–143284.
  66. Xiong, L., Chen, Y., Peng, Y., & Ghadi, Y. Y. (2024). Improving robot-assisted virtual teaching using transformers, GANs, and computer vision. Journal of Organizational and End User Computing, 36(1), 1–32.
Figure 1. Percentages of reported use of GenAI by students and academics.
Table 1. Reliability and validity of study variables.
                             Students’ Survey                                  Academics’ Survey
Variable                     Items   Cronbach α   Pearson r (1, 2, 3)          Items   Cronbach α   Pearson r (1, 2, 3)
Ease of Use (EU)             11      0.879        1, 0.750 **, 0.221 **        11      0.891        1, 0.674 **, 0.363 **
Perceived Usefulness (PU)    12      0.915        1, 0.259 **                  9       0.907        1, 0.360 **
Perceived Challenges (PC)    8       0.763        1                            8       0.769        1
** Correlation is significant at the 0.01 level (2-tailed).
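The internal-consistency values in Table 1 are Cronbach’s alpha coefficients. As an illustrative sketch only (the item-level data below are synthetic, not the study’s survey responses), alpha can be computed from an items-by-respondents score matrix as follows:

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, k_items) score matrix."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)      # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)  # variance of the scale total
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Synthetic 5-point Likert responses driven by one shared latent trait
# (illustrative only; not the SQU survey data)
rng = np.random.default_rng(0)
latent = rng.normal(size=(100, 1))
scores = np.clip(np.round(3 + latent + rng.normal(scale=0.7, size=(100, 6))), 1, 5)
print(round(cronbach_alpha(scores), 3))
```

Because the synthetic items share a common latent trait, the resulting alpha is high, mirroring the 0.76–0.92 range reported in Table 1.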
Table 2. Demographics of study samples.
Variable         Sub-Variable                      Students (n1 = 555)   Academics (n2 = 168)   Total (N = 723)
                                                   Number    %           Number    %            Number    %
Gender           Male                              229       41.3        98        58.3         327       45.2
                 Female                            326       58.7        70        41.7         396       54.8
Specialization   Social Sciences and Humanities    288       51.9        90        53.6         378       52.3
                 Natural Sciences                  85        15.3        32        19.0         117       16.2
                 Engineering and Technology        91        16.4        25        14.9         116       16.0
                 Health Sciences                   91        16.4        21        12.5         112       15.5
Table 3. Differences between students’ and academics’ perceptions of TAM Variables.
Scale                   Students (n1 = 555)   Academics (n2 = 168)   t-Test
                        Mean (SD)             Mean (SD)
Ease of Use             3.847 (0.66)          3.963 (0.62)           t = −1.97, df = 721, SE = 0.06, p = 0.049
Perceived Usefulness    3.853 (0.69)          3.634 (0.70)           t = 3.56, df = 721, SE = 0.06, p = 0.000
Perceived Challenges    3.561 (0.64)          3.736 (0.61)           t = −1.76.732, df = 721, SE = 0.06, p = 0.002
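The comparisons in Table 3 are independent-samples t-tests with df = 721, which implies a pooled-variance formulation. As a hedged sketch, the Ease of Use statistic can be reproduced from the reported summary values alone; small discrepancies from the printed t arise from rounding of the means and SDs:

```python
from math import sqrt

def pooled_t(m1, s1, n1, m2, s2, n2):
    """Independent-samples t statistic from summary statistics (pooled variance)."""
    df = n1 + n2 - 2
    sp2 = ((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / df  # pooled variance estimate
    se = sqrt(sp2 * (1 / n1 + 1 / n2))                # SE of the mean difference
    return (m1 - m2) / se, df, se

# Ease of Use: students 3.847 (0.66), n = 555; academics 3.963 (0.62), n = 168
t, df, se = pooled_t(3.847, 0.66, 555, 3.963, 0.62, 168)
print(f"t = {t:.2f}, df = {df}, SE = {se:.2f}")  # close to the reported t = -1.97
```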
Table 4. Descriptive statistics for perceived usefulness (PU).
Table 4. Descriptive statistics for perceived usefulness (PU).
Statements for StudentsStudents
(n1 = 555)
Statements for AcademicsAcademics
(n2 = 168)
MeanSDMeanSD
  • ChatGPT can help me save time.
4.150.92
  • Generative AI tools can be used to develop learning materials (e.g., images, text, videos).
3.890.82
2.
ChatGPT can provide information in diverse fields.
4.180.87
2.
Generative AI tools can be used to develop different types of assessment (e.g., quizzes, assignments, rubrics, exams).
3.890.85
3.
ChatGPT is useful in research.
3.980.92
3.
ChatGPT assists in preparing lesson plans and educational activities.
3.850.84
4.
ChatGPT allows students to ask questions that they would otherwise not ask their teachers.
3.960.95
4.
Generative AI tools can be used to customize learning content for different students.
3.800.91
5.
ChatGPT is useful in writing tasks (summarizing, paraphrasing, editing, proofreading, etc.)
3.921.01
5.
ChatGPT can provide me with unique insights and perspectives that I may not have thought of myself.
3.740.94
6.
ChatGPT is useful in brainstorming/generating ideas.
3.871.02
6.
ChatGPT supports academics’ professional development through access to diverse learning resources and customized training.
3.620.92
7.
ChatGPT can help me better understand theories and concepts.
3.830.94
7.
Generative AI tools can be used to analyze assessment data and assessment criteria.
3.491.00
8.
ChatGPT can provide me with unique insights and perspectives that I may not have thought of myself.
3.8?0.99
8.
Generative AI tools can be utilized in grading and providing feedback.
3.270.97
9.
ChatGPT can be used to translate educational materials into different languages.
3.740.96
9.
ChatGPT can provide guidance for coursework as effectively as human teachers.
3.171.13
10.
ChatGPT can provide personalized tutoring and feedback based on the students’ needs and learning progress.
3.680.97
11.
ChatGPT provides well-structured and convincing answers.
3.6?0.99
12.
ChatGPT can help me learn by offering personalized and adaptive learning experiences.
3.511.03
Mean of Usefulness3.850.69 3.640.71
Table 7. Descriptive statistics for GenAI at SQU: Intention to Use (IU), Adoption/Prohibition Opinion, and Awareness of GenAI Policies.
Category                               Response                  Students (n = 555)   Academics (n = 168)   Total (N = 723)
                                                                 Number    %          Number    %           Number    %
Intention to use GenAI in the future   Yes                       448       80.7       145       86.3        593       82.0
                                       Not sure                  96        17.3       22        13.1        118       16.3
                                       No                        11        2.0        1         0.6         12        1.7
Adoption of GenAI                      It should be embraced     375       67.6       124       73.8        499       69.0
                                       I am not sure             157       28.3       39        23.2        196       27.1
                                       It should be prohibited   23        4.1        5         3.0         28        3.9
Awareness of GenAI policies            Yes, available            206       37.1       43        25.6        249       34.4
                                       I am not sure             300       54.1       85        50.6        385       53.3
                                       No, not available         49        8.8        40        23.8        89        12.3
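Table 7 reports categorical responses for the two groups. Whether a gap such as the intention-to-use rates (80.7% of students vs. 86.3% of academics) is statistically meaningful can be probed with a two-proportion z-test; the following is an illustrative sketch, not an analysis reported in the study:

```python
from math import sqrt, erf

def two_prop_z(x1, n1, x2, n2):
    """Two-proportion z-test (pooled), returning z and the two-sided p-value."""
    p1, p2 = x1 / n1, x2 / n2
    p = (x1 + x2) / (n1 + n2)                   # pooled proportion under H0
    se = sqrt(p * (1 - p) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    p_two = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))  # normal-tail p-value
    return z, p_two

# "Yes" to future GenAI use: 448/555 students vs. 145/168 academics (Table 7)
z, p = two_prop_z(448, 555, 145, 168)
print(f"z = {z:.2f}, p = {p:.3f}")
```

On these counts the difference falls short of the conventional 0.05 threshold, consistent with the paper’s reading that both groups show similarly strong intention to continue using GenAI.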
