Article

Exploring the Ethical Implications of Using Generative AI Tools in Higher Education

1 Information and Communications Technology—Postgraduate Program, University North, 48000 Koprivnica, Croatia
2 Department of Computer Science and Informatics, University North, 48000 Koprivnica, Croatia
3 Public Relations Department, University North, 42000 Varaždin, Croatia
* Authors to whom correspondence should be addressed.
Informatics 2025, 12(2), 36; https://doi.org/10.3390/informatics12020036
Submission received: 2 February 2025 / Revised: 18 March 2025 / Accepted: 2 April 2025 / Published: 7 April 2025

Abstract
A significant portion of the academic community, including students, teachers, and researchers, has incorporated generative artificial intelligence (GenAI) tools into their everyday tasks. Alongside increased productivity and numerous benefits, specific challenges that are fundamental to maintaining academic integrity and excellence must be addressed. This paper examines whether ethical implications related to copyrights and authorship, transparency, responsibility, and academic integrity influence the usage of GenAI tools in higher education, with emphasis on differences across academic segments. The findings, based on a survey of 883 students, teachers, and researchers at University North in Croatia, reveal significant differences in ethical awareness across academic roles, gender, and experience with GenAI tools. Teachers and researchers demonstrated the highest awareness of ethical principles, personal responsibility, and potential negative consequences, while students—particularly undergraduates—showed lower levels, likely due to limited exposure to structured ethical training. Gender differences were also significant, with females consistently demonstrating higher awareness across all ethical dimensions compared to males. Longer experience with GenAI tools was associated with greater ethical awareness, emphasizing the role of familiarity in fostering understanding. Although strong correlations were observed between ethical dimensions, their connection to future adoption was weaker, highlighting the need to integrate ethical education with practical strategies for responsible GenAI tool use.

1. Introduction

Many authors refer to 2023 as artificial intelligence’s (AI) breakout year due to the launch of ChatGPT (powered by the GPT-3.5 model), an AI chatbot built on OpenAI’s foundational large language models (LLMs). ChatGPT debuted publicly, reaching 1 million users within five days of its launch and 100 million users within two months, setting the record for the fastest-growing consumer application in history [1,2]. The viral success of Generative AI tools like ChatGPT, Microsoft Copilot, and Gemini, owing to their broad range of applications across various industries and domains, has made them central topics of contemporary digital transformation and technology-driven discussions. According to a McKinsey report, “Generative AI’s impact on productivity could add trillions of dollars in value to the global economy, as well as automate half of today’s work activities between 2030 and 2060, with a midpoint in 2045, or roughly a decade earlier” [3]. Beyond the business context, in which AI is seen as a catalyst for growth and a disruptive force in the economy and the future of work [4], Generative AI (GenAI) tools have increasingly penetrated the academic environment. While many opportunities and challenges are common to both ecosystems, the academic setting presents a unique set of concerns that need to be critically analyzed and addressed. Foremost among these are ethical implications [5], such as copyright issues, plagiarism, transparency, and responsibility, which are fundamental to maintaining academic excellence and integrity. Adding to the complexity and intricacy of this debate is the lack of formal consensus on whether, how, and under what conditions GenAI tools can (or should) be integrated into current and future higher education curricula.
The emergence of GenAI has created a novel research topic within Artificial Intelligence in Education (AIED)—a multidisciplinary field that has been developing since the late 20th century, combining advances in artificial intelligence with insights from education, psychology, and other social sciences to enhance teaching, learning [6], and administration. AIED encompasses a wide range of technologies, including intelligent tutoring systems, personalized learning environments, data-driven educational insights [7], and, most recently, the use of GenAI in academia and education.
Some universities, professors, and researchers have advocated banning GenAI tools from educational contexts [8,9], expressing concern that these tools might undermine students’ independent and critical thinking or facilitate cheating and academic misconduct, especially in online examination environments fueled by factors like anonymity, reduced supervision, and easier access to unauthorized resources during exams [10]. Other scholars add that GenAI tools negatively affect teachers’ ability to differentiate between meticulous and automation-dependent students, as well as to measure the achievement of learning outcomes [11]. On the other hand, some higher education institutions (HEIs) have swiftly adapted and enthusiastically leveraged the potential of new tools, while the majority have chosen a middle ground by embracing the GenAI benefits “while practicing pragmatic caution and scrutiny” [12] (p. 3). Since there is no global consensus or standard guidelines for AIED, adopting and using these tools remain an individual’s choice and preference, resulting in multiple avenues for exploration and analysis.
Despite the increasing number of scholarly articles and studies on integrating GenAI tools in HEIs, a significant gap remains in understanding how challenges specific to the academic environment—particularly ethical implications and user trust—differ across various academic segments. This study seeks to address this gap by exploring the unique experiences and perspectives of distinct groups, including undergraduate and graduate students as well as teachers and researchers. The aim is to outline the complexities of AI adoption, emphasizing ethical implications to offer insights and recommendations for strategies that promote responsible and effective AI integration across all levels of the academic community.

2. Background

2.1. Overview of Generative AI Tool Use in Higher Education

Although AI has become an indispensable buzzword in many stakeholders’ discussions worldwide, there is still no agreed-upon definition of the term. Artificial intelligence, a term coined by emeritus Stanford professor John McCarthy in 1955, was defined by him as “the science and engineering of making intelligent machines, especially intelligent computer programs” [13] (p. 2). Haenlein and Kaplan define AI as a “system’s ability to correctly interpret external data, to learn from such data, and to use those learnings to achieve specific goals and tasks through flexible adaptation” [14] (p. 17). Other scholars, like Mutasa et al. (2020), explain artificial intelligence (AI) as “a broad umbrella term used to encompass a wide variety of subfields dedicated to, simply put, creating algorithms to perform tasks that mimic human intelligence” [15] (p. 96). Generative AI (GenAI) is a subfield of artificial intelligence and machine learning (ML) and is a “technology that leverages deep learning models to generate human-like content in response to complex and varied prompts” [16] (p. 10) like languages, instructions, and questions. Compared to other AI systems, which are explicitly programmed to follow predetermined rules or generate specific outputs, GenAI can self-formulate new and original outputs, like images, video, audio, text, code, and 3D renderings [17].
Unlike teaching methods, educational systems, and institutional reforms, which historically implemented changes at a very slow pace [18], the landscape of higher education has significantly transformed towards online learning, which was accelerated by the recent pandemic [19]. Many students, teachers, and researchers became eager to explore and experiment with technological innovations and global trends. Therefore, the academic community quickly recognized the potential of large language models (LLMs) powered by natural language processing (NLP), which enables real-time assistance and contextual understanding as powerful assets to enhance various academic tasks, including research, learning, and teaching.
The widely recognized AI-powered chatbots, such as OpenAI’s ChatGPT (GPT-4.5), Microsoft’s Copilot, and Google’s Gemini (v2.5), provide general virtual assistance by offering step-by-step instructions, brainstorming ideas, analyzing data, translating languages, and crafting personalized answers tailored to individuals’ prompts [20]. These tools also provide a potential replacement for search engines. For example, manual information screening is very time-consuming, while GenAI tools offer simple and relevant results within seconds [11]. This way, students can automate their manual search and screening tasks. Moreover, the academic community can also benefit from utilizing more specialized GenAI tools. Specific applications range from solving complex equations with step-by-step explanations in AI tools like Photomath (v8.43) and Wolfram Alpha (v14.2), to real-time code-writing assistance from GitHub Copilot (GPT-4.5), to boosting creativity by creating art from textual descriptions in tools like DALL·E 3 and Midjourney (v6.1) and producing compelling visual storytelling content without traditional filming through apps like Synthesia (v10.9) and HeyGen (v3.0). Researchers can also benefit from utilizing AI-driven tools like Scite.ai, Elicit, and Semantic Scholar to assist with academic literature reviews, summarize scholarly articles, synthesize research findings, and discover related studies [21]. Teachers and educators can leverage GenAI tools to help them brainstorm content, customize lecture tasks, draft curriculum strategies and objectives, or assist with grading—especially for open-ended responses [22]. Additionally, AI can help them automate manual tasks like data entry, course scheduling [23], or analyzing student performance insights to automatically identify low-engagement students and implement interventions [24]. By embracing these tools, educators can enhance their teaching effectiveness and efficiency, which can help them focus on student engagement, improving learning outcomes [25] and “fulfilling their primary role—empathic human teaching” [26] (p. 21) because true, authentic learning and personal growth emerge from deep, reciprocal connections and meaningful relationships [27].

2.2. Identifying Challenges and Concerns in AIED

As shown by many examples, generative artificial intelligence has made a significant impact on streamlining and facilitating daily academic tasks, as well as on disrupting the knowledge creation and dissemination process. However, along with numerous benefits, the usage of GenAI tools raises a specific set of challenges that need to be discussed because “educational technology is not (only) about technology—it is the pedagogical, ethical, socio-technological, cultural and economic dimensions of AIED we should be concerned about” [28] (p. 106). In their book, The Ethics of Artificial Intelligence in Education: Practices, Challenges, and Debates, Holmes and Porayska-Pomsta (2023) argue that the moral dilemmas and challenges surrounding AI ethics in education cannot be simply fixed through better design or programming. They believe that instead of using a set of common-sense but somewhat random principles, the development of robust and actionable guidelines for AIED research and practice should be grounded in fundamental concepts from moral philosophy [29] (p. xiv). Moreover, since AIED involves a complex mix of human–computer interaction, philosophy, social sciences, psychology, and educational practice, Wagner (2018) emphasizes the importance of remaining skeptical of oversimplified and conceptually shallow “technical fixes” that promise to engineer achievable “ethical” actions. He warns that we need to stay vigilant against superficial responses that lead to obfuscation and ethics-washing [30]. Similarly, “designing new social AI systems for education requires more than fine-tuning existing language models for academic purposes. It demands a multidisciplinary approach, where AI is built to uphold human rights, respect teachers’ expertise, and support the diverse needs of students” [31]. Sharples (2023) [31] adds that effective GenAI integration should be a “collaborative effort, bringing together experts in neural and symbolic AI with specialists in pedagogy and learning sciences. By engaging educators and practitioners in this process, AI systems can be designed to foster meaningful dialogue and enhance learning experiences, rather than merely automating educational tasks” [31].

2.3. Key Differences: Ethical Implications and User Trust

Following a review of the existing literature [5,32,33,34,35], the majority of the above-mentioned implications can be separated into two categories: (1) ethical implications specific to the academic environment, and (2) user trust issues that focus on the technical aspects of AI tools. While both categories combined cover the most significant concerns of GenAI tool usage in education, distinguishing them into separate categories allows for a more thorough analysis. Ethical concerns are primarily related to (i) copyrights and authorship, (ii) transparency, (iii) responsibility, and (iv) academic integrity. In contrast, user trust issues pertain to (i) the accuracy of generated content, (ii) privacy and safety, and (iii) the non-maleficence of the AI systems. This separation maintains the specificity of each topic, avoids dilution, and supports the objective of this research paper to closely examine whether and how ethical issues affect the adoption and use of GenAI tools in HEIs. Moreover, the same approach aligns with Holmes’s argument that “the ethics of AIED cannot be reduced to questions about data or computational approaches alone and AIED research also needs to account for the ethics of education, which, although the subject of decades of research, is all too often overlooked” [36] (p. 161). In the context of technology adoption, ethical implications, such as concerns about violating academic integrity including plagiarism, copyright infringement, transparency, and unauthorized resource use [10], often shape external barriers that are closely related to organizational policies, legal compliance, and societal acceptance [37,38]. These implications originate from normative ethics models (deontology, consequentialism, and virtue ethics), which address the moral principles of right and wrong conduct [26], as well as bioethics, which aligns with digital technologies by tackling issues related to new forms of agents, users, and environments [37]. On the other hand, issues of user trust (doubts about the accuracy of AI-generated content, explainability, fairness and protection of privacy, personal data, and system security) are rooted in psychology and sociology and focus on an individual’s internal willingness and motivation to engage with technology, influencing perceptions of risk and expected outcomes [39,40,41]. In addition to similar (yet not identical) theoretical domains, there are differences between ethical implications and user trust, which should be addressed to facilitate stakeholders’ development of targeted interventions for each area, as they require different mitigation strategies. Addressing ethical challenges primarily requires changes in policy development, educational curricula, and the establishment of community-wide accepted AIED ethical frameworks and guidelines [42], while enhancing user trust involves improving system transparency, strengthening security features, and optimizing user experience design [43]. Ethical concerns could primarily affect the initial decision to adopt or reject technology due to perceptions of ethical acceptability influenced by evolving legal standards and societal norms [44,45]. User trust, on the other hand, can be built or eroded more rapidly based on individual perceptions and experiences with system performance [46], which affects continued engagement with the technology. Addressing this challenge therefore leads to the creation of user-centric design to ensure that end-users feel confident and secure in their interactions with GenAI systems [41].

2.4. Shifting the Focus to Ethical Implications

After conducting a systematic review, the ethical implications were extracted, analyzed, and grouped into the following categories:
(i) Copyrights and Authorship: With the rapid advancement of GenAI tools, distinguishing between content purely generated by AI, content co-created with AI, and content created by humans has become increasingly challenging. This complexity disrupts traditional concepts of originality and creativity, raising fundamental questions about the application of copyright frameworks originally developed for human authorship, as only humans can be legally responsible for their creations. Per Sandiumenge [47] (p. 71), “Lawsuits have been filed not only alleging that AI is being trained on datasets that contain copyrighted materials without asking their rightsholders for permission to use them (which would result in copyright infringement) but also alleging that the AI-generated works are derivatives from the works used to train the AI”. Another grey area and common question that arises is whether GenAI tools can be credited as co-authors. While the postdigital approach and some scholars like O’Connor [48] and Jiang et al. [49] underscore the entangled nature of agency and co-authorship while advocating for recognizing AI tools as co-authors and emphasizing a human–machine collaborative approach, others argue that attributing GenAI tools as authors does not diminish human accountability for the outputs and responsibility for the final content [50]. Furthermore, some researchers firmly reject the idea of naming large language models (LLMs) as authors or even mentioning them in acknowledgments, arguing that LLMs lack free will and cannot be held morally or legally responsible for their outputs. Instead, they emphasize that AI tools should be cited in-text like any other software or tool, followed by a reference in the bibliography [51]. Dwivedi et al. [52] shed light on unnamed authors who trained the AI algorithms and claimed that their contributions are often overlooked. There are numerous potential challenges associated with violating copyrights and authorship when using Generative AI (GenAI) tools in the academic environment. For instance, students may misuse AI to falsely present AI-generated content as their own effort and knowledge [5]. Among researchers, attribution and copyright issues can lead to confusion and the fear of obscuring human intellectual input when facing dilemmas on how to credit AI for translation into foreign languages (primarily for publishing in international journals), enhancing vocabulary and grammar, or using GenAI tools to summarize their original content, which may still be detected as AI-generated. Teachers also face multiple challenges, not only related to professional use for producing teaching materials and tasks but also for the evaluation of students’ assignments—particularly in distinguishing between work genuinely created by students and that which was co-authored or entirely generated by GenAI.
(ii) Transparency: This is closely related to authorship, fairness, and openness regarding the use of GenAI tools for creating or enhancing content in an academic environment. For example, challenges might occur if teachers use GenAI tools to evaluate the quality of student essays and assignments. As many large language models (LLMs) lack explainability due to their complex architecture, it remains unclear and difficult to understand how outputs are generated, which may lead to bias and inequality in the decision-making process. Another example concerns the transparency of researchers while conducting research and creating papers for publishing in academic journals. Frank et al. [53] examine and compare multiple international publishers’ guidelines on the attribution of GenAI tools in papers. While all of them share the standpoint that full AI authorship is not allowed, the majority allow textual enhancements (e.g., for readability and language) but discourage the use for image generation (unless the editors grant explicit permission). Focusing only on transparency requirements, most publishers require that AI assistance be attributed in the Methods or Acknowledgements section through a declaration or statement of GenAI tool usage or, in some cases, a full disclosure of the prompt and AI tool details (name and version).
(iii) Responsibility: Unlike the transparency implications, which are more informational and procedural and focus on trust, disclosure, and communication about AI’s contribution to academic tasks, responsibility pertains to ethical and accountable use while emphasizing the ownership of actions, outputs, and consequences. Responsibility is more subjective and relies on individuals’ interpersonal and moral beliefs in order for them to be accountable for their work and maintain academic integrity. While discussing the responsible use of GenAI tools aligned with ethical and societal values and expectations, Malik et al. [54], like many other scholars [5,42], emphasize the need to develop clear policies and guidelines that all academic segments should be aware of, as well as to highlight potential challenges and conduct regular evaluations and audits of AI systems. Responsibility is closely tied to and intersects with authorship and copyright issues. As noted by Bozkurt [50], citing the US Copyright Office, GenAI programs’ outputs lack copyright regulation, placing the primary responsibility for the content on the human author, who should be fully accountable and transparent about their use.
It is important to stress that ethical challenges and responsibility in GenAI tool use are not solely limited to students. The growing pressure on scientists to produce more publications to sustain and progress in their academic careers, a phenomenon widely known as “publish or perish” [55], has raised concerns over “AI-generated mass output rather than work that is innovative or informed by social values and priorities” [56]. Studies have found that researchers in highly competitive environments, including China [57,58], have turned to AI to meet publication quotas, raising questions about the authenticity and depth of AI-assisted work. This practice undermines academic integrity and contributes to the proliferation of low-quality or duplicate research, making it difficult to distinguish quality work from AI-generated work. Without regulation, the overuse of AI in mass-producing papers can potentially generate multiple ethical issues, including unattributed AI authorship, plagiarism, and the dissemination of misinformation [56,59]. Although AI can help researchers write and brainstorm, its misuse in academic environments results in a surge of poor studies, which makes it harder to distinguish between proper research and AI-generated content. Academic institutions must adopt stricter policies around using AI in research, emphasizing ethical use and maintaining scholarly standards to counteract this trend. If left unchecked, the AI-driven “publish or perish” culture could destabilize the credibility of academic discourse and erode public trust in scientific research.
(iv) Academic integrity: The International Center for Academic Integrity (ICAI) defines academic integrity as a “commitment, even in the face of adversity, to six fundamental values: honesty, trust, fairness, respect, responsibility, and courage” [60]. With the advent of Generative AI, which contributes to effectiveness and efficiency in performing a wide range of academic tasks, there is also growing concern that pillars of ethical academic practices may be at risk [50,61] due to misuse in the form of academic misconduct and plagiarism [62]. Therefore, some authors suggest deploying new tools that detect academic texts failing to meet academic standards due to low quality, inauthenticity, an unreliable citation process, or non-adherence to scientific methodologies. Other research [63] has demonstrated that students are often concerned about breaching academic rules and regulations, as many of them fear they cannot fully comply due to a lack of understanding about proper citation and referencing practices. As technology evolves, the rapid advancement of sophisticated writing tools that manipulate language for various purposes will further complicate this landscape [64]. This dilemma highlights the need for institutions to provide clearer guidelines and support mechanisms, like hands-on training and education, to facilitate students’ navigation through AI-assisted writing [65]. Previous research [64,66] emphasized that “training has shown to be an effective intervention in reducing academic integrity violations, so they present one of the most important interventions available”. Along with practical training, the authors suggest that “institutions must commit to a continuous dialogue on ethical practices, keeping pace with the evolving capabilities of AI in order to safeguard the foundational principles that are challenged in the new digital age” [5] (p. 7).
Ethical implications often involve legal considerations, such as compliance with copyright laws and academic integrity policies [32,67], which makes them of primary importance to institutions, regulators, and society at large [68]. The rationale for the division between ethical implications and user trust is to acknowledge a specific set of ethical concerns emerging in AIED, address the complexity and differences among HEIs’ academic segments, and shed light on academic integrity—one of the core principles shaping this topic. Therefore, this paper aims to examine and empirically test the influence of ethical challenges on GenAI tool adoption and use among the academic community while comparing students’, teachers’, and researchers’ perspectives.
The specific research questions that will be addressed are as follows:
RQ1: How do perceptions of the ethical implications of using Generative AI tools differ across academic roles in HEIs?
RQ2: To what extent does understanding the ethical implications of Generative AI tools depend on gender?
RQ3: How does the duration of Generative AI tool usage affect users’ understanding of ethical principles, personal responsibility, and potential negative consequences?
RQ4: What is the impact of ethical implications on the adoption and future use of Generative AI tools in HEIs?
The following hypotheses will be empirically tested:
H1: 
Teachers and researchers have a higher understanding of ethical principles, personal responsibility, and the potential negative consequences of Generative AI tool usage compared to students.
Academic roles influence the understanding of ethical implications such as copyright, authorship, responsibility, and academic integrity within the context of AI technologies. This assumption is grounded in the observation that teachers and researchers in higher education institutions (HEIs) are often more actively engaged in ethical discussions and policy formation through participation in conferences, institutional initiatives, and professional working groups. Such engagement provides them with broader exposure to these challenges, shaping their awareness and understanding of ethical issues. Prior research underscores the connection between professional experience and ethical decision-making in the use of technology, suggesting that individuals with more exposure are better equipped to navigate ethical dilemmas. Therefore, teachers and researchers, due to their distinct roles and responsibilities, are particularly sensitive to concerns like plagiarism, intellectual property rights, and ensuring transparency in scholarly outputs. Some authors claim that they should “play a critical role in introducing professional codes of ethics, cultivating critical thinking and ethical reasoning which is transferable across different professional contexts” [69] (p. 64).
H2: 
Gender influences the understanding of ethical implications, with female users demonstrating greater awareness of ethical principles, personal responsibility, and the potential negative consequences of GenAI tool usage.
The second hypothesis relies on documented gender-based differences in ethical awareness. Previous research found that women have higher, more steadfast ethical standards and act more ethically than men in a variety of behavioral realms [70,71]. Likewise, there are studies that show the positive impact of female leadership (e.g., institutional boards) on performance and ethical and social compliance [72].
H3: 
Users using Generative AI tools for a longer time exhibit a higher understanding of ethical implications than users who use them for a shorter time.
The prolonged use of GenAI tools, like ChatGPT, Microsoft Copilot, and Gemini, allows users to gain deeper insights into their capabilities and limitations, which enhances their understanding of, and ability to navigate, certain ethical implications. Familiarity with tools through regular interaction provides users with a practical framework to evaluate ethical principles and personal responsibility through crediting AI-generated content (copyrights and authorship), acknowledging potential biases (transparency), and understanding the consequences of handling sensitive or confidential data (e.g., a student’s personal or institutional data) and misuse (responsibility). Previous studies [73] have suggested that repeated exposure to technology encourages users to reflect on their actions and adapt their behaviors to align with ethical norms.
H4: 
Higher awareness of ethical implications positively correlates with intentions of adoption and future usage of Generative AI tools in HEIs.
Users’ perceptions of ethical alignment with societal norms and individual values directly affect whether users view GenAI tools as acceptable for their tasks. Relying on frameworks and rationale from user acceptance theories, such as the Theory of Planned Behavior (TPB) [74], awareness and knowledge influence users’ perceived behavioral control, a key factor in behavioral intention. From a behavioral perspective, an awareness of ethical implications reduces perceived risks and uncertainties, enabling users to adopt GenAI tools confidently and responsibly. Moreover, ethical awareness fosters trust, which is one of the crucial factors in technology adoption [75]. When users are aware of ethical implications, such as copyright and authorship, transparency, and responsibility to maintain academic integrity, and feel confident that they can navigate these implications, they are more likely to feel in control and empowered to use these tools responsibly.

3. Materials and Methods

This research employed a quantitative research design, utilizing an online questionnaire as the primary data collection instrument. The questionnaire was created using Google Forms and was disseminated during February and March through official email lists and the Moodle-based learning system “Merlin” to reach the entire academic population at University North in Croatia. To ensure the reliability and validity of the instrument, a pilot study was conducted with a small subset of respondents, and questions were refined according to their feedback. Participants were categorized into four academic segments, treated as independent samples for analysis: (1) undergraduate students, (2) graduate students, (3) doctoral students, and (4) teachers and researchers. It is important to note that although doctoral students were included as respondents of the questionnaire, their sample size (n = 26) was too small for separate statistical analysis, which is presented as one of the study’s limitations. Considering the overall population of students at University North, nearly 20% of active students and around 30% of active teachers participated in this survey. To ensure respondent authenticity and eligibility, the Single Sign-On (SSO) feature in Google Forms was utilized, requiring credentials specific to the higher education institution (HEI). No further exclusions were needed, since participants joined with their official email addresses and no responses were received from non-university members. Data analysis was performed using the IBM SPSS Statistics 29.0.2 software for a comprehensive statistical examination of the results.

4. Research Results

The analysis centered on the ethical implications of adopting GenAI tools, such as ChatGPT, Microsoft Copilot, and Gemini, by examining how their adoption is influenced and shaped by individual factors such as age, gender, and academic role (as shown in Table 1). The results present variations among different academic subgroups on topics like understanding potential negative consequences, personal user responsibility, and awareness of ethical principles. The discussion aims to contextualize the observed differences in perceptions within the framework of broader social and technological trends, showing how these findings can inform guidelines for developing strategies to address the complexities of integrating GenAI tools into academic environments.
This study included 883 respondents to examine the perception and use of Generative AI tools. The sample was structured based on key demographic and academic characteristics, such as gender, age, role at the university, and year of study. Women comprised 55.9% (N = 494) of the sample, while men accounted for 44.1% (N = 389). Most respondents, 71.1% (N = 628), were 18 to 25 years old. Participants aged 26 to 35 years represented 11.0% (N = 97), while 15.1% were between 36 and 55 years. Only 2.3% (N = 20) were over 55 years old, and there were no participants under 18 or over 65 years old. Undergraduate students constituted the largest portion of the sample, representing 69.9% (N = 617), followed by graduate students at 16.4% (N = 145) and doctoral students at 2.9% (N = 26). Teachers and researchers accounted for 10.8% (N = 95), reflecting a diverse yet student-centered sample. Distribution by year of study showed a strong presence of younger students, with first-year students making up 35.9% of the sample (N = 317) and second-year students close behind at 34.7% (N = 306).
The analysis of descriptive statistics and Kolmogorov–Smirnov (K-S) test results provides an overview of perceptions related to ethical implications, personal responsibility, and the potential negative consequences of using GenAI tools in the academic environment. The average values for the three items—“I understand the potential negative consequences of using GenAI tools”, “I am aware of my responsibility as a user of GenAI tools”, and “I understand the ethical principles of using GenAI tools”—ranged between 3.62 and 3.68, indicating a moderate level of awareness among respondents (Table 2).
However, the relatively high standard deviations (1.136–1.336) and coefficients of variation (above 30%) suggest considerable variability in responses. Understanding ethical principles showed the highest relative variability (CV = 36.89%) among respondents, indicating less consistency in their understanding and perceptions. The Kolmogorov–Smirnov test results showed that responses for all three items deviated significantly from a normal distribution (p-values < 0.05), which reflects the diverse perspectives and heterogeneity in the sample.
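To make the reported procedure concrete, the following minimal Python sketch reproduces the same type of computation (mean, standard deviation, coefficient of variation, and a one-sample Kolmogorov–Smirnov test) on simulated placeholder data. The variable name, the simulated responses, and the uncorrected K-S variant are illustrative assumptions; the study itself used IBM SPSS 29.0.2, whose K-S test applies a Lilliefors correction when distribution parameters are estimated from the sample.

```python
# Minimal sketch (not the study's SPSS workflow): descriptive statistics and a
# normality check for one 5-point Likert item, using simulated placeholder data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
ethics = rng.integers(1, 6, size=883).astype(float)  # hypothetical responses, 1-5

mean, sd = ethics.mean(), ethics.std(ddof=1)
cv = sd / mean * 100  # coefficient of variation, in percent

# One-sample Kolmogorov-Smirnov test against a normal distribution with the
# sample mean and SD; only an approximation of SPSS's Lilliefors-corrected test.
ks_stat, p_value = stats.kstest(ethics, "norm", args=(mean, sd))

print(f"mean={mean:.2f}, SD={sd:.3f}, CV={cv:.2f}%")
print(f"K-S D={ks_stat:.3f}, p={p_value:.4g}")
```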
Table 3 provides an analysis of understanding the potential negative consequences of GenAI tool usage and its association with multiple factors, such as tool usage duration, academic role, and gender. When dividing by academic role, the participants’ responses were grouped into two categories: (i) students, comprising undergraduate (UGS), graduate (GS), and doctoral (DS) students; and (ii) teachers and researchers. The data indicate that the use of GenAI tools was similar across both groups. As highlighted above, the sample size (n = 26) of doctoral students alone was too small to be analyzed as an individual category, so their responses were integrated into the “students” group. Short-term use (up to three months) was observed among 13.38% of students and 12.71% of teachers and researchers, while the majority in both groups (over 86%) had been using the GenAI tools for more than three months (86.62% of students and 87.29% of teachers and researchers), which indicates that usage does not depend on academic role. However, the academic role does impact the understanding of potential negative consequences while using GenAI tools, because teachers and researchers (58.91%) indeed showed a better understanding compared to students (41.09%). This association is statistically significant and supported by the Chi-square test, which yielded a p-value < 0.001.
Gender differences also emerged as a significant factor in the perception of potential negative consequences of GenAI tool usage. Females, regardless of academic role, were more likely to understand negative consequences (78.02%) compared to males (21.98%). The Chi-square test showed a p-value of 0.023, which indicates a significant association between gender and the perception of potential negative consequences of using GenAI tools. Users who had used the tools for more than three months showed greater awareness of possible risks than those who had used them for a shorter period (9.81%).
The analysis in Table 4 combines the Chi-square (χ2) test with descriptive statistics to explore the relationship between users’ awareness of personal responsibility while using GenAI tools and the duration of use, academic role (teachers/researchers and students), and gender. The results demonstrate a statistically significant relationship between academic role and the awareness of personal responsibility (χ2 = 26.39, p < 0.001). Teachers and researchers showed significantly higher levels of awareness (65.32%) compared to students (34.68%). However, the relatively high awareness of personal user responsibility among students with more than three months of experience (89.98%) suggests adaptability among younger users. Gender showed a highly significant association with personal user responsibility (χ2 = 24.2, p < 0.001). Females again exhibited much higher levels of awareness (81.12%) compared to males (18.88%).
The results from Table 5 indicate a highly significant relationship between academic role and understanding of ethical principles (χ2 = 18.26, p < 0.001). Teachers and researchers (72.71%) demonstrated much higher understanding and awareness compared to students (27.29%). An additional explanation as to why doctoral students, grouped as “students” with undergraduate- and graduate-level students, do not display higher ethical awareness might be the sample structure and small number of respondents, which is listed as one of this study’s limitations. Gender differences show that females had a slightly higher understanding of ethical principles. However, after more than three months of tool use, the results became very similar, which indicates that gender disparity is less prominent with prolonged use. Chi-square test results indicated statistically significant differences (χ2 = 21.39, p < 0.001) in this category, meaning that gender has a notable impact on understanding ethical principles in AI technology usage.
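The role-by-awareness comparisons in Tables 3–5 all rest on the Chi-square test of independence. A minimal Python equivalent is sketched below on a hypothetical 2×2 contingency table; the counts are illustrative placeholders, not the study’s data.

```python
# Chi-square test of independence on a hypothetical role x awareness table.
# Rows: students vs. teachers/researchers; columns: aware vs. not aware.
import numpy as np
from scipy.stats import chi2_contingency

observed = np.array([
    [320, 468],  # students (placeholder counts)
    [ 70,  25],  # teachers and researchers (placeholder counts)
])

# correction=False disables Yates' continuity correction, matching the plain
# Pearson Chi-square statistic that SPSS reports for tables of this kind.
chi2, p, dof, expected = chi2_contingency(observed, correction=False)
print(f"chi2={chi2:.2f}, dof={dof}, p={p:.4g}")
# Note: SPSS prints very small p-values as ".000"; report these as p < 0.001.
```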
The analysis of pairwise correlations and the results of Pearson correlation coefficients (Table 6) highlight interrelationships between observed variables. Strong positive correlations are observed between understanding potential negative consequences, an awareness of user responsibility, and an understanding of ethical principles in using GenAI tools. Notably, the correlation between awareness of user responsibility and understanding of ethical principles is the strongest at 0.874, reflecting a close relationship.
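A correlation matrix such as Table 6 can be produced in a few lines. The sketch below assumes the three items are columns of a pandas DataFrame; the column names and the simulated data, which share a common latent component to induce strong positive correlations, are illustrative assumptions.

```python
# Pairwise Pearson correlations for three Likert items (simulated placeholders).
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
latent = rng.normal(3.6, 1.0, size=883)  # shared "awareness" component

def item(noise_sd: float) -> np.ndarray:
    """Simulate a 1-5 Likert item correlated through the latent component."""
    return np.clip(np.round(latent + rng.normal(0, noise_sd, 883)), 1, 5)

df = pd.DataFrame({
    "negative_consequences": item(0.6),
    "user_responsibility": item(0.4),
    "ethical_principles": item(0.4),
})

print(df.corr(method="pearson").round(3))  # 3x3 matrix, cf. Table 6
```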
The reliability analysis of measurement instruments is essential for understanding the consistency and validity of variables that assess different aspects of user perceptions and behaviors regarding generative artificial intelligence (GenAI) tools. This analysis, presented in Table 7, included the following variables: the frequency of GenAI tool use, an understanding of possible negative consequences when using these tools, an awareness of user responsibility, an understanding of the ethical principles of GenAI tool usage, and the prediction of future use of these tools. Reliability was assessed using Cronbach’s Alpha, the Spearman–Brown coefficient, and the Guttman Split-Half coefficient.
The results revealed significant differences in consistency between two subsets of variables. The first subset, which includes understanding the possible negative consequences and an awareness of user responsibility, demonstrated high internal consistency with a Cronbach’s Alpha of 0.876. This result indicates that these two variables are closely interrelated and assess aspects of the responsible use of GenAI tools together. The strong correlation suggests that users who are aware of the potential risks of using GenAI tools also acknowledge their responsibility to manage these risks. The second subset, which includes understanding ethical principles and predicting future use of GenAI tools, showed a slightly lower Cronbach’s Alpha value of 0.678 but was still reliable enough to draw conclusions. This weaker correlation suggests that understanding ethical principles does not necessarily influence predictions about the long-term use of these tools. Users may recognize ethical implications but fail to connect this awareness with their attitudes toward future usage. Overall, the instrument demonstrates strong reliability, with a correlation between the forms of 0.723, a Spearman–Brown coefficient of 0.839, and a Guttman coefficient of 0.829. These findings suggest that the instrument effectively measures user awareness of responsibility and potential harms but highlight the need for further research and the possible refinement of the second subset to better align ethical understanding with behavioral predictions.
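Since base SciPy offers no ready-made Cronbach’s Alpha or split-half coefficients, the sketch below computes them from their textbook formulas on simulated placeholder data; the two-item “forms” and their noise levels are assumptions for illustration.

```python
# Split-half reliability from first principles (simulated placeholder data).
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's Alpha for a respondents x items matrix."""
    k = items.shape[1]
    item_var = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_var / total_var)

rng = np.random.default_rng(1)
latent = rng.normal(3.6, 1.0, size=883)
form_a = np.column_stack([latent + rng.normal(0, 0.6, 883) for _ in range(2)])
form_b = np.column_stack([latent + rng.normal(0, 0.9, 883) for _ in range(2)])

a_sum, b_sum = form_a.sum(axis=1), form_b.sum(axis=1)
r = np.corrcoef(a_sum, b_sum)[0, 1]          # correlation between the forms
spearman_brown = 2 * r / (1 + r)             # equal-length split-half estimate
total_var = (a_sum + b_sum).var(ddof=1)
guttman = 2 * (1 - (a_sum.var(ddof=1) + b_sum.var(ddof=1)) / total_var)

print(f"alpha A={cronbach_alpha(form_a):.3f}, alpha B={cronbach_alpha(form_b):.3f}")
print(f"r={r:.3f}, Spearman-Brown={spearman_brown:.3f}, Guttman={guttman:.3f}")
```

As a sanity check against Table 7, applying the Spearman–Brown formula to the reported between-forms correlation gives 2 × 0.723 / (1 + 0.723) ≈ 0.839, exactly the coefficient reported, while the Guttman coefficient (0.829) is, as expected, slightly more conservative.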
Table 8 provides the results of an analysis of variance (ANOVA) with Cochran’s test, which evaluates variability between and within subjects to determine whether significant differences exist in the responses. ANOVA analyzes overall variability by separating it into “between people” (differences between individuals) and “within people” (differences between items or questions for the same individual). Cochran’s test, applied to the within-subject variation, helps assess whether the variability between items is significant. Together, these methods provide insights into how responses vary across individuals and within each respondent’s answers. The results show a Sum of Squares (2717.136) for “Between People”, reflecting the total variability between respondents, such as differences between subgroups (e.g., undergraduate, graduate, and doctoral students and teachers/researchers). This high value indicates significant variability among individuals, which is expected given the diversity of the respondent groups. The Sum of Squares (3.310) for “Between Items” shows the total variability across items, and the corresponding Cochran’s Q (6.413) with a significance value of 0.093 suggests that the variability between items is not statistically significant. These findings indicate that while responses vary significantly between individuals, the instrument shows consistency across items. The grand mean value of 3.63 summarizes the average response, helping contextualize the variability results. Overall, this analysis highlights inter-subject differences and the relative consistency of the instrument in measuring the intended constructs.
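Cochran’s Q applies to dichotomous responses, so the within-items test in Table 8 presumes a binarization of the instrument’s items. The sketch below codes “agree” as a rating of 4 or 5 and computes Q from its closed-form definition; the cutoff, the four-item layout, and the simulated data are assumptions for illustration.

```python
# Cochran's Q for k related dichotomous items (simulated placeholder data).
import numpy as np
from scipy.stats import chi2

rng = np.random.default_rng(7)
likert = rng.integers(1, 6, size=(883, 4))  # respondents x 4 items, 1-5 scale
binary = (likert >= 4).astype(int)          # "agree" coded as 1 (assumed cutoff)

k = binary.shape[1]
col = binary.sum(axis=0)   # per-item success totals
row = binary.sum(axis=1)   # per-respondent success totals
n = binary.sum()           # grand total of successes

# Q = (k-1) * [k * sum(C_j^2) - N^2] / [k * N - sum(R_i^2)],
# chi-square distributed with k-1 degrees of freedom.
q = (k - 1) * (k * (col**2).sum() - n**2) / (k * n - (row**2).sum())
p = chi2.sf(q, df=k - 1)
print(f"Cochran's Q={q:.3f}, df={k - 1}, p={p:.3f}")
```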

5. Discussion

This study included 883 respondents, and the sample was structured based on key demographic and academic characteristics, such as gender, age, role at the university, and year of study. The socio-demographic structure of the respondents (Table 1) revealed a clear dominance of younger respondents in the study, with 71.1% of participants aged 18 to 25 years. This trend aligns with the expected demographic structure of higher education institutions, where undergraduate students form the largest proportion of the population. The slight overrepresentation of female participants (55.9%) may reflect broader trends in academia, where female students often outnumber their male colleagues. The inclusion of teachers and researchers (10.8%) ensured multiple perspectives on the ethical implications of GenAI tool usage in academic settings. The distribution by year of study also highlights the predominance of undergraduate students (70.6% first- and second-year students combined). This suggests that the findings are influenced by a demographic that is likely to have less experience with using AI for advanced research and academic purposes, potentially shaping their perceptions of the ethical implications while using GenAI tools.
The results of descriptive statistics and the Kolmogorov–Smirnov (K-S) test provided an overview of perceptions related to ethical implications, personal responsibility, and the potential negative consequences of using GenAI tools in the academic environment (Table 2). The average scores for understanding the potential negative consequences (3.68), an awareness of personal responsibility (3.68), and an understanding of ethical principles (3.62) indicate a moderate level of awareness across the sample. However, the standard deviations (1.136–1.336) and coefficients of variation (above 30%) highlight considerable variability in responses, suggesting significant differences in understanding among participants. The highest relative variability was observed in the understanding of ethical principles (CV = 36.89%), indicating less consistency in respondents’ awareness. This variability points to diverse interpretations of ethical principles and underscores the challenges of achieving a shared understanding of ethics in GenAI tool usage, which supports the findings from the existing literature. The Kolmogorov–Smirnov test results also showed significant deviations from normal distribution for all three implications, which highlights the heterogeneity in respondents’ perceptions. These findings suggest the need for targeted educational initiatives to address inconsistencies in understanding ethical principles, potential negative consequences, and the role of personal responsibility in technology usage.
The findings strongly support the first hypothesis, H1. Teachers and researchers display a significantly higher awareness of ethical principles (72.71%), personal responsibility (65.32%), and the potential negative consequences of GenAI tool usage (58.91%) compared to students (27.29%, 34.68%, 41.09%, respectively). These differences are statistically significant (p < 0.001), as shown in Table 3, Table 4 and Table 5. The given results can be attributed to their professional roles, which involve guiding students, ensuring academic integrity, and participating in multiple discussions about transparency, authorship, and ethical considerations in technology use. Students, particularly undergraduates, showed lower levels of awareness of ethical implications, likely due to limited exposure to structured ethical training or a lack of experience with advanced use of tools for research and scholarly purposes. However, students showed adaptability through increased awareness of personal responsibility within three months of GenAI usage (>89%, Table 4), which indicates that targeted interventions can effectively address this gap.
The findings from this research also confirmed the second hypothesis, H2. Gender differences were significant across all dimensions of ethical implications. Females demonstrated a consistently higher understanding of ethical principles (Table 5) and an awareness of personal responsibility (Table 4) and potential negative consequences (Table 3) compared to males (p < 0.001 in all cases). For example, females’ understanding of negative consequences was 78.02%, compared to males’ understanding at 21.98%, and their awareness of personal responsibility was 81.12%, compared to 18.88% for males. These results suggest that cognitive or societal factors may drive enhanced ethical awareness among females.
The third hypothesis (H3) assumed that users with longer experience in GenAI tool usage have a higher understanding of ethical implications than those with a shorter usage duration; this was supported by findings across all dimensions—an understanding of ethical principles (Table 5), personal responsibility (Table 4), and negative consequences (Table 3). Respondents who had used GenAI tools for more than three months consistently demonstrated higher awareness compared to those with shorter experience. For instance, 90.19% of females with over three months of experience understood potential negative consequences compared to 9.81% of females with less than three months of experience. Similarly, students and teachers with longer usage exhibited greater awareness, underscoring the importance of experience in shaping their understanding of ethical implications. These results suggest that familiarity and hands-on experience with the tools contribute to a deeper understanding of their ethical and societal impacts.
The results provided only partial support for H4. Strong positive correlations were observed between understanding potential negative consequences, an awareness of user responsibility, and an understanding of ethical principles in using GenAI tools (Table 6 and Table 7). The strongest relationship (r = 0.874) was between an awareness of personal responsibility and an understanding of ethical principles, suggesting a close connection between these dimensions of ethical implications. These findings indicate that fostering ethical awareness, particularly by emphasizing user responsibility and an understanding of ethical principles, could play a significant role in cultivating a reinforcing cycle of ethical behavior. Education on ethical implications should focus on these interconnections to build a stronger foundation for the responsible use of GenAI tools.
However, the correlations between these ethical dimensions and the prediction of future GenAI tool use were weaker, suggesting that while ethical awareness contributes to shaping user attitudes, it may not be the primary factor that influences adoption behavior. This aligns with the findings from ANOVA and Cochran’s test (Table 8), which revealed significant variability between individuals (Sum of Squares = 2717.136), indicating diverse user perspectives and motivations. These findings underscore the complex nature of technology adoption, which is influenced by a combination of cognitive, emotional, and contextual factors [76]. Established technology acceptance models, such as TAM [77], UTAUT [78], and UTAUT2 [79], highlight the necessity of integrating multiple factors—such as performance expectancy, effort expectancy, social influence, facilitating conditions, hedonic motivation, price value, and habit—to fully explain why some individuals adopt specific technologies while others do not. Consequently, ethical considerations (e.g., copyright and authorship, transparency, responsibility, and academic integrity), which are particularly central in academic contexts, should be examined in conjunction with these broader factors. Extending existing models to incorporate ethical dimensions may provide a more comprehensive understanding of the factors influencing GenAI adoption in higher education.

6. Limitations

This study has several limitations that should be considered when interpreting the findings. Considering the overall population of students at University North, nearly 20% of active students and around 30% of active teachers participated in this survey, forming a database of 883 respondents. However, the uneven distribution of academic levels may affect generalizability. First, the sample size of doctoral students was relatively small (n = 26), which precluded the independent analysis of their responses. Consequently, their responses could only be pooled with the much larger undergraduate and graduate samples, within which their influence on the findings is limited. Likewise, all doctoral candidates in this study belonged to the same study program at University North, which may have shaped their perspectives. Their research-oriented focus and frequent use of AI tools for data analysis and automation could lead to different views on AI tools’ usefulness and ethics in academic settings.
While the sample size was sufficient, all participants, as a source of survey data, came from a single institution, consequently limiting the generalizability of the findings. Institutional policies, academic culture, and even geographic context may significantly influence participants’ perceptions of GenAI and related ethical issues. Expanding future research to include more universities with different profiles or diverse cultural contexts would enhance the applicability of the findings. Additionally, future surveys could capture a deeper and more nuanced understanding of participants’ ethical perceptions (across different dimensions), allowing the findings to be translated into actionable insights for educators and policymakers. Relatedly, addressing factors such as institutional policies or peer influence would provide greater clarity on whether external pressures shape adoption decisions more than ethical considerations. The current findings suggest that peer influence plays a dominant role in the absence of clear policies, but further research is needed to explore the extent of this effect.
Considering this study’s results and findings, future work should investigate whether the weak correlation between ethical awareness and AI adoption indicates that ethical awareness alone is insufficient to drive behavior change. A mix of additional factors may drive such decisions in an educational environment. Finally, the research instrument, initially written in English and later translated into Croatian to facilitate dissemination within the target population, may have introduced subtle differences in interpretation or phrasing. While careful attention was given to the translation process, future studies should address potential differences and consider further elaboration and rephrasing to ensure consistency across languages and contexts.

7. Conclusions

This study provides important insights into the awareness and understanding of the ethical implications of Generative AI tool usage in higher education institutions. The findings reveal significant differences in awareness based on academic role, gender, and experience with GenAI tools, which might serve as valuable implications for creating institutional policies and frameworks regarding the use of AI technology in educational settings.
Teachers and researchers demonstrated higher levels of awareness across all ethical dimensions (an awareness of potential negative consequences, personal responsibility, and understanding of ethical principles) compared to students. Their professional experience and roles, which involve safeguarding academic integrity and addressing risks related to plagiarism and authorship, contributed to their increased understanding. On the other hand, students, particularly undergraduates, showed lower awareness levels, which might result from limited exposure to courses and guidelines that help them develop critical thinking and ethical reasoning. However, the adaptability of students, evidenced by their development of personal responsibility with prolonged GenAI tool usage (over 3 months), underscores the potential of early and structured educational programs. Supported by previous studies, gender also emerged as a significant factor, with female respondents consistently demonstrating greater awareness of ethical principles, personal responsibility, and risks compared to male colleagues. This finding highlights the need for gender-sensitive approaches to ethical education, particularly for male users, who showed consistently lower awareness levels.
Experience with GenAI tools was a strong predictor of higher ethical awareness, reaffirming the role of familiarity and hands-on use in fostering deeper understanding. Respondents with longer usage demonstrated greater awareness of ethical principles, personal responsibility, and potential risks. These findings underscore the importance of integrating practical exposure to GenAI tools into academic curricula. Early exposure, combined with structured education, can help all academic segments to develop a deeper understanding of their responsibilities when using these tools to enhance their academic tasks and performance. The results of this study highlight the importance of targeted and tailored training programs, such as workshops and case studies focused on transparency, authorship, responsibility, and academic integrity, which could effectively enhance understanding and address gaps in ethical awareness, particularly among students.
Future research should prioritize exploring and empirically testing challenges unique to the academic environment, such as user trust in the accuracy of generated content, privacy, safety, and the non-maleficence of AI systems. As indicated by the findings of this study, the field of Artificial Intelligence in Education (AIED) would benefit from empirical studies that incorporate multiple factors specific to educational settings within the framework of established and widely used technology acceptance models. Future research should also consider differences between academic disciplines, as ethical concerns with AI tools likely vary across fields. For instance, STEM researchers may be more concerned with data accuracy and reproducibility, while those in the humanities may prioritize originality, authorship, and the implications of AI-generated content for creative work. Addressing these discipline-specific perspectives could offer a holistic overview and help shape more targeted guidelines and insights for various academic communities. Given this study’s limitation of data collection within a single institution, future studies should include universities with different institutional profiles and cultural contexts. This would allow researchers to assess whether AI-related ethical concerns and adoption trends vary or remain consistent across educational settings. Insights derived from these studies, combined with an understanding of ethical considerations, could inform the creation of comprehensive and customizable guidelines to effectively support the integration of AI technologies across varied educational practices.
To summarize, fostering a deeper understanding of ethical principles, personal responsibility, and the potential risks associated with GenAI tool usage is essential for equipping academic communities with the skills required for effective, responsible, and sustainable engagement with transformative AI technologies. The findings of this study underscore the importance of a balanced and integrated approach that combines ethical education, practical application, and institutional support. Such an approach ensures that GenAI tools are adopted and utilized ethically and responsibly within higher education institutions (HEIs), ultimately enhancing teaching and learning processes.

Author Contributions

Conceptualization, E.Đ. and D.F.; methodology, E.Đ.; software, D.V.; validation, D.V.; formal analysis, D.V.; investigation, E.Đ.; resources, E.Đ.; data curation, D.V.; writing—original draft preparation, E.Đ.; writing—review and editing, E.Đ.; visualization, E.Đ.; supervision, D.F.; project administration, E.Đ.; funding acquisition, D.F. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Ethical review and approval were waived for this study based on the response from the Institutional Ethics Committee. The Committee was consulted before questionnaire distribution and confirmed that this research was not subject to its approval, since the questionnaire was voluntary, anonymous, and quantitative. For questionnaire dissemination, the authors (D.F. and E.Đ.) were granted access to the official mailing list, which includes all university members (teachers, researchers, students of all levels, and administrative personnel).

Informed Consent Statement

All participants were informed via the preamble of the questionnaire, and participation was voluntary.

Data Availability Statement

Data supporting the reported results can be found at https://docs.google.com/forms/d/1N5enFr0JhC_P8L8AnFH3lzga3zba_-KKVxT3QFPwSyk/edit#responses (accessed on 15 March 2025). However, due to ongoing analysis for future research articles, we kindly ask that these data not be made publicly available. Thank you in advance for your understanding.

Conflicts of Interest

The authors declare no conflicts of interest.

Abbreviations

The following abbreviations are used in this manuscript:
| Abbreviation | Meaning |
| --- | --- |
| AI | Artificial Intelligence |
| GenAI | Generative Artificial Intelligence |
| AIED | Artificial Intelligence in Education |
| HEIs | Higher Education Institutions |
| TEACH | Teachers |
| RES | Researchers |
| STUD | Students |
| UGS | Undergraduate students |
| GS | Graduate students |
| DS | Doctoral students |

References

  1. Teubner, T.; Flath, C.M.; Weinhardt, C.; van der Aalst, W.; Hinz, O. Welcome to the Era of ChatGPT et al. Bus. Inf. Syst. Eng. 2023, 65, 95–101. [Google Scholar] [CrossRef]
  2. Gordon, C. ChatGPT Is the Fastest Growing App in the History of Web Applications. Available online: https://www.forbes.com/sites/cindygordon/2023/02/02/chatgpt-is-the-fastest-growing-ap-in-the-history-of-web-applications/ (accessed on 14 November 2024).
  3. Economic Potential of Generative AI|McKinsey. Available online: https://www.mckinsey.com/capabilities/mckinsey-digital/our-insights/the-economic-potential-of-generative-ai-the-next-productivity-frontier#introduction (accessed on 17 October 2023).
  4. AI for Everyone? Critical Perspectives; University of Westminster Press: London, UK, 2021; Available online: https://www.jstor.org/stable/j.ctv26qjjhj (accessed on 14 November 2024).
  5. Al-kfairy, M.; Mustafa, D.; Kshetri, N.; Insiew, M.; Alfandi, O. Ethical Challenges and Solutions of Generative AI: An Interdisciplinary Perspective. Informatics 2024, 11, 58. [Google Scholar] [CrossRef]
  6. Wang, S.; Wang, F.; Zhu, Z.; Wang, J.; Tran, T.; Du, Z. Artificial intelligence in education: A systematic literature review. Expert Syst. Appl. 2024, 252, 124167. [Google Scholar] [CrossRef]
  7. Chaudhry, M.A.; Kazim, E. Artificial Intelligence in Education (AIEd): A high-level academic and industry note 2021. AI Ethics 2022, 2, 157–165. [Google Scholar] [CrossRef] [PubMed]
  8. Lau, S.; Guo, P. From “Ban It Till We Understand It” to “Resistance is Futile”: How University Programming Instructors Plan to Adapt as More Students Use AI Code Generation and Explanation Tools such as ChatGPT and GitHub Copilot. In Proceedings of the 2023 ACM Conference on International Computing Education Research—Volume 1, Cambridge, UK, 8–10 August 2023; Association for Computing Machinery: New York, NY, USA, 2023; Volume 1, pp. 106–121. [Google Scholar] [CrossRef]
  9. Huang, K. Alarmed by A.I. Chatbots, Universities Start Revamping How They Teach. The New York Times. 16 January 2023. Available online: https://www.nytimes.com/2023/01/16/technology/chatgpt-artificial-intelligence-universities.html (accessed on 8 November 2024).
  10. Susnjak, T.; McIntosh, T.R. ChatGPT: The End of Online Exam Integrity? Educ. Sci. 2024, 14, 656. [Google Scholar] [CrossRef]
  11. AlAfnan, M.A.; Dishari, S.; Jovic, M.; Lomidze, K. ChatGPT as an Educational Tool: Opportunities, Challenges, and Recommendations for Communication, Business Writing, and Composition Courses. J. Artif. Intell. Technol. 2023, 3, 60–68. [Google Scholar] [CrossRef]
  12. Robert, J.; Muscanell, N. 2023 EDUCAUSE Horizon Action Plan: Generative AI. 2023. Available online: https://library.educause.edu/resources/2023/9/2023-educause-horizon-action-plan-generative-ai (accessed on 10 December 2024).
  13. McCarthy, J. What Is Artificial Intelligence? Available online: https://www-formal.stanford.edu/jmc/whatisai.pdf (accessed on 26 November 2024).
  14. Haenlein, M.; Kaplan, A. A Brief History of Artificial Intelligence: On the Past, Present, and Future of Artificial Intelligence. Calif. Manag. Rev. 2019, 61, 5–14. [Google Scholar] [CrossRef]
  15. Mutasa, S.; Sun, S.; Ha, R. Understanding artificial intelligence based radiology studies: What is overfitting? Clin. Imaging 2020, 65, 96–99. [Google Scholar] [CrossRef]
  16. Lim, W.M.; Gunasekara, A.; Pallant, J.L.; Pallant, J.I.; Pechenkina, E. Generative AI and the future of education: Ragnarök or reformation? A paradoxical perspective from management educators. Int. J. Manag. Educ. 2023, 21, 100790. [Google Scholar] [CrossRef]
  17. UNIN|Innovation Library|Generative AI Primer. Available online: https://www.uninnovation.network/innovation-library/generative-ai-primer (accessed on 14 November 2024).
  18. Barker, B. Public service reform in education: Why is progress so slow? J. Educ. Adm. Hist. 2009, 41, 57–72. [Google Scholar] [CrossRef]
  19. Barber, M.; Bird, L.; Fleming, J.; Titterington-Giles, E.; Edwards, E.; Leyland, C. Gravity assist: Propelling higher education towards a brighter future—Digital teaching and learning review. Available online: https://blobofsproduks.blob.core.windows.net/files/Gravity%20assist/Gravity-assist-DTL-finalforweb.pdf (accessed on 5 December 2024).
  20. Kasneci, E.; Sessler, K.; Küchemann, S.; Bannert, M.; Dementieva, D.; Fischer, F.; Gasser, U.; Groh, G.; Günnemann, S.; Hüllermeier, E.; et al. ChatGPT for good? On opportunities and challenges of large language models for education. Learn. Individ. Differ. 2023, 103, 102274. [Google Scholar] [CrossRef]
  21. Ammar, W.; Groeneveld, D.; Bhagavatula, C.; Beltagy, I.; Crawford, M.; Downey, D.; Dunkelberger, J.; Elgohary, A.; Feldman, S.; Ha, V.; et al. Construction of the Literature Graph in Semantic Scholar. arXiv 2018, arXiv:1805.02262. [Google Scholar]
  22. Baidoo-anu, D.; Ansah, L.O. Education in the Era of Generative Artificial Intelligence (AI): Understanding the Potential Benefits of ChatGPT in Promoting Teaching and Learning. J. AI 2023, 7, 52–62. [Google Scholar] [CrossRef]
  23. Kardan, A.A.; Sadeghi, H.; Ghidary, S.S.; Sani, M.R.F. Prediction of student course selection in online higher education institutes using neural network. Comput. Educ. 2013, 65, 1–11. [Google Scholar] [CrossRef]
  24. Shoaib, M.; Sayed, N.; Singh, J.; Shafi, J.; Khan, S.; Ali, F. AI student success predictor: Enhancing personalized learning in campus management systems. Comput. Hum. Behav. 2024, 158, 108301. [Google Scholar] [CrossRef]
  25. Ding, A.-C.E.; Shi, L.; Yang, H.; Choi, I. Enhancing teacher AI literacy and integration through different types of cases in teacher professional development. Comput. Educ. Open 2024, 6, 100178. [Google Scholar] [CrossRef]
  26. Zawacki-Richter, O.; Marín, V.I.; Bond, M.; Gouverneur, F. Systematic review of research on artificial intelligence applications in higher education—Where are the educators? Int. J. Educ. Technol. High. Educ. 2019, 16, 39. [Google Scholar] [CrossRef]
  27. Guilherme, A. AI and education: The importance of teacher and student relations. AI Soc. 2019, 34, 47–54. [Google Scholar] [CrossRef]
  28. Selwyn, N. Is Technology Good for Education? Polity Press: Cambridge, UK, 2016; Available online: https://research.monash.edu/en/publications/is-technology-good-for-education (accessed on 17 March 2024).
  29. The Ethics of Artificial Intelligence in Education: Practices, Challenges, and Debates. Available online: https://www.routledge.com/The-Ethics-of-Artificial-Intelligence-in-Education-Practices-Challenges-and-Debates/Holmes-Porayska-Pomsta/p/book/9780367349721 (accessed on 14 November 2024).
  30. Wagner, B. Ethics As An Escape From Regulation. From “Ethics-Washing” To Ethics-Shopping? In Being Profiled: Cogitas Ergo Sum: 10 Years of Profiling the European Citizen; Bayamlioglu, E., Baraliuc, I., Janssens, L.A.W., Hildebrandt, M., Eds.; Amsterdam University Press: Amsterdam, The Netherlands, 2018; pp. 84–89. [Google Scholar] [CrossRef]
  31. Sharples, M. Towards social generative AI for education: Theory, practices and ethics. Learn. Res. Pract. 2023, 9, 159–167. [Google Scholar] [CrossRef]
  32. Prather, J.; Denny, P.; Leinonen, J.; Becker, B.A.; Albluwi, I.; Craig, M.; Keuning, H.; Kiesler, N.; Kohn, T.; Luxton-Reilly, A.; et al. The Robots Are Here: Navigating the Generative AI Revolution in Computing Education. In Proceedings of the 2023 Working Group Reports on Innovation and Technology in Computer Science Education, Turku, Finland, 8–12 July 2023; Association for Computing Machinery: New York, NY, USA, 2023; pp. 108–159. [Google Scholar] [CrossRef]
  33. Yan, L.; Sha, L.; Zhao, L.; Li, Y.; Martinez-Maldonado, R.; Chen, G.; Li, X.; Jin, Y.; Gasevic, D. Practical and Ethical Challenges of Large Language Models in Education: A Systematic Scoping Review. Br. J. Educ. Technol. 2024, 55, 90–112. [Google Scholar] [CrossRef]
  34. Zhan, H.; Zheng, A.; Lee, Y.K.; Suh, J.; Li, J.J.; Ong, D.C. Large Language Models are Capable of Offering Cognitive Reappraisal, if Guided. arXiv 2024, arXiv:2404.01288. [Google Scholar]
  35. Khosravi, H.; Shum, S.B.; Chen, G.; Conati, C.; Tsai, Y.-S.; Kay, J.; Knight, S.; Martinez-Maldonado, R.; Sadiq, S.; Gašević, D. Explainable Artificial Intelligence in education. Comput. Educ. Artif. Intell. 2022, 3, 100074. [Google Scholar] [CrossRef]
  36. Holmes, W.; Bialik, M.; Fadel, C. Artificial Intelligence in Education. Promise and Implications for Teaching and Learning; Center for Curriculum Redesign: Boston, MA, USA, 2019. [Google Scholar]
  37. Floridi, L.; Cowls, J. A Unified Framework of Five Principles for AI in Society. Harv. Data Sci. Rev. 2019, 1, 5. [Google Scholar] [CrossRef]
  38. Jobin, A.; Ienca, M.; Vayena, E. The global landscape of AI ethics guidelines. Nat. Mach. Intell. 2019, 1, 389–399. [Google Scholar] [CrossRef]
  39. Shin, D. The effects of explainability and causability on perception, trust, and acceptance: Implications for explainable AI. Int. J. Hum.-Comput. Stud. 2021, 146, 102551. [Google Scholar] [CrossRef]
  40. Lockey, S.; Gillespie, N.; Holm, D.; Asadi Someh, I. A Review of Trust in Artificial Intelligence: Challenges, Vulnerabilities and Future Directions; 2021; Available online: http://hdl.handle.net/10125/71284 (accessed on 10 January 2025).
  41. Araujo, T.; Helberger, N.; Kruikemeier, S.; de Vreese, C.H. In AI we trust? Perceptions about automated decision-making by artificial intelligence. AI Soc. 2020, 35, 611–623. [Google Scholar] [CrossRef]
  42. Holmes, W.; Porayska-Pomsta, K.; Holstein, K.; Sutherland, E.; Baker, T.; Shum, S.B.; Santos, O.C.; Rodrigo, M.T.; Cukurova, M.; Bittencourt, I.I.; et al. Ethics of AI in Education: Towards a Community-Wide Framework. Int. J. Artif. Intell. Educ. 2022, 32, 504–526. [Google Scholar] [CrossRef]
  43. Yuanyuan, X.; Gao, W.; Wang, Y.; Shan, X.; Lin, Y.-S. Enhancing user experience and trust in advanced LLM-based conversational agents. Comput. Artif. Intell. 2024, 2, 1467. [Google Scholar] [CrossRef]
  44. European Parliament. Directorate General for Parliamentary Research Services. The Ethics of Artificial Intelligence: Issues and Initiatives; Publications Office: Luxembourg, 2020; Available online: https://data.europa.eu/doi/10.2861/6644 (accessed on 14 November 2024).
  45. Xue, L.; Pang, Z. Ethical governance of artificial intelligence: An integrated analytical framework. J. Digit. Econ. 2022, 1, 44–52. [Google Scholar] [CrossRef]
  46. Kim, D.; Benbasat, I. The Effects of Trust-Assuring Arguments on Consumer Trust in Internet Stores: Application of Toulmin’s Model of Argumentation. Inf. Syst. Res. 2006, 17, 286–300. [Google Scholar] [CrossRef]
  47. Sandiumenge, I. Copyright Implications of the Use of Generative AI; Social Science Research Network: Rochester, NY, USA, 2023. [Google Scholar] [CrossRef]
  48. O’Connor, S. Open Artificial Intelligence Platforms in Nursing Education: Tools for Academic Progress or Abuse? Nurse Educ. Pract. 2023, 66, 103537. [Google Scholar] [CrossRef] [PubMed]
  49. Jiang, J.; Vetter, M.A.; Lucia, B. Toward a ‘More-Than-Digital’ AI Literacy: Reimagining Agency and Authorship in the Postdigital Era with ChatGPT. Postdigit Sci. Educ. 2024, 6, 922–939. [Google Scholar] [CrossRef]
  50. Bozkurt, A. GenAI et al.: Cocreation, Authorship, Ownership, Academic Ethics and Integrity in a Time of Generative. Open Prax. 2024, 16, 1–10. [Google Scholar] [CrossRef]
  51. The Ethics of Disclosing the Use of Artificial Intelligence Tools in Writing Scholarly Manuscripts. Available online: https://journals.sagepub.com/doi/epub/10.1177/17470161231180449 (accessed on 16 March 2025).
  52. Dwivedi, Y.K.; Kshetri, N.; Hughes, L.; Slade, E.L.; Jeyaraj, A.; Kar, A.K.; Baabdullah, A.M.; Koohang, A.; Raghavan, V.; Ahuja, M.; et al. Opinion Paper: “So what if ChatGPT wrote it?” Multidisciplinary perspectives on opportunities, challenges and implications of generative conversational AI for research, practice and policy. Int. J. Inf. Manag. 2023, 71, 102642. [Google Scholar] [CrossRef]
  53. Frank, D.; Bernik, A.; Milković, M. Efficient Generative AI-Assisted Academic Research: Considerations for a Research Model Proposal. In Proceedings of the 2024 IEEE 11th International Conference on Computational Cybernetics and Cyber-Medical Systems (ICCC), Rijeka, Croatia, 16–19 July 2024; pp. 000025–000030. [Google Scholar] [CrossRef]
  54. Malik, T.; Hughes, L.; Dwivedi, Y.K.; Dettmer, S. Exploring the Transformative Impact of Generative AI on Higher Education. In Proceedings of the New Sustainable Horizons in Artificial Intelligence and Digital Solutions: 22nd IFIP WG 6.11 Conference on e-Business, e-Services and e-Society, I3E 2023, Curitiba, Brazil, 9–11 November 2023; Proceedings. Springer: Berlin/Heidelberg, Germany, 2023; pp. 69–77. [Google Scholar] [CrossRef]
  55. Al-leimon, O.; Juweid, M.E. “Publish or Perish” Paradigm and Medical Research: Replication Crisis in the Context of Artificial Intelligence Trend. Ann. Biomed. Eng. 2025, 53, 3–4. [Google Scholar] [CrossRef]
  56. Elbanna, S.; Child, J. From ‘publish or perish’ to ‘publish for purpose’. Eur. Manag. Rev. 2023, 20, 614–618. [Google Scholar] [CrossRef]
  57. Wu, C. Publish or perish: A study on academic misconduct in publishing among Chinese doctoral students. Br. J. Sociol. Educ. 2025, 46, 303–322. [Google Scholar] [CrossRef]
  58. Qiu, J. Publish or perish in China. Nature 2010, 463, 142. [Google Scholar] [CrossRef] [PubMed]
  59. Ajwang, S.O.; Ikoha, A.P. Publish or perish in the era of artificial intelligence: Which way for the Kenyan research community? Libr. Hi Tech News 2024, 41, 7–11. [Google Scholar] [CrossRef]
  60. Fundamental Values. Available online: https://academicintegrity.org/resources/fundamental-values (accessed on 30 December 2024).
  61. Eke, D.O. ChatGPT and the rise of generative AI: Threat to academic integrity? J. Responsible Technol. 2023, 13, 100060. [Google Scholar] [CrossRef]
  62. Bin-Nashwan, S.A.; Sadallah, M.; Bouteraa, M. Use of ChatGPT in academia: Academic integrity hangs in the balance. Technol. Soc. 2023, 75, 102370. [Google Scholar] [CrossRef]
  63. Perkins, M.; Roe, J. Decoding Academic Integrity Policies: A Corpus Linguistics Investigation of AI and Other Technological Threats. High. Educ. Policy 2023, 37, 633–653. [Google Scholar] [CrossRef]
  64. Roe, J.; Renandya, W.; Jacobs, G. A Review of AI-Powered Writing Tools and Their Implications for Academic Integrity in the Language Classroom. J. Engl. Appl. Linguist. 2023, 2, 3. [Google Scholar] [CrossRef]
  65. Vetter, M.A.; Lucia, B.; Jiang, J.; Othman, M. Towards a framework for local interrogation of AI ethics: A case study on text generators, academic integrity, and composing with ChatGPT. Comput. Compos. 2024, 71, 102831. [Google Scholar] [CrossRef]
  66. Perkins, M.; Basar Gezgin, U.; Roe, J. Reducing plagiarism through academic misconduct education. Int. J. Educ. Integr. 2020, 16, 3. [Google Scholar] [CrossRef]
  67. Levendowski, A. How Copyright Law Can Fix Artificial Intelligence’s Implicit Bias Problem; Social Science Research Network: Rochester, NY, USA, 2017; Available online: https://papers.ssrn.com/abstract=3024938 (accessed on 14 November 2024).
  68. Morley, J.; Kinsey, L.; Elhalal, A.; Garcia, F.; Ziosi, M.; Floridi, L. Operationalising AI ethics: Barriers, enablers and next steps. AI Soc. 2023, 38, 411–423. [Google Scholar] [CrossRef]
  69. Borenstein, J.; Howard, A. Emerging challenges in AI and the need for AI ethics education. AI Ethics 2021, 1, 61–65. [Google Scholar] [CrossRef]
  70. Franke, G.R.; Crown, D.F.; Spake, D.F. Gender differences in ethical perceptions of business practices: A social role theory perspective. J. Appl. Psychol. 1997, 82, 920–934. [Google Scholar] [CrossRef]
  71. Are Women More Ethical Than Men? Available online: https://greatergood.berkeley.edu/article/item/are_women_more_ethical_than_men (accessed on 2 January 2025).
  72. Isidro, H.; Sobral, M. The Effects of Women on Corporate Boards on Firm Value, Financial Performance, and Ethical and Social Compliance. J. Bus. Ethics 2015, 132, 1–19. [Google Scholar] [CrossRef]
  73. Ethics, Technology, and Engineering: An Introduction, 2nd Edition|Wiley. Available online: https://www.wiley.com/en-ie/Ethics%2C+Technology%2C+and+Engineering%3A+An+Introduction%2C+2nd+Edition-p-9781119879435 (accessed on 2 January 2025).
  74. Ajzen, I. The theory of planned behavior. Organ. Behav. Hum. Decis. Process. 1991, 50, 179–211. [Google Scholar] [CrossRef]
  75. Farooq, A.; Dubinina, A.; Virtanen, S.; Isoaho, J. Understanding Dynamics of Initial Trust and its Antecedents in Password Managers Adoption Intention among Young Adults. Procedia Comput. Sci. 2021, 184, 266–274. [Google Scholar] [CrossRef]
  76. Straub, E.T. Understanding Technology Adoption: Theory and Future Directions for Informal Learning. Rev. Educ. Res. 2009, 79, 625–649. [Google Scholar] [CrossRef]
  77. Davis, F.D. Perceived Usefulness, Perceived Ease of Use, and User Acceptance of Information Technology. MIS Q. 1989, 13, 319–340. [Google Scholar] [CrossRef]
  78. Venkatesh, V.; Morris, M.G.; Davis, G.B.; Davis, F.D. User Acceptance of Information Technology: Toward a Unified View. MIS Q. 2003, 27, 425–478. [Google Scholar] [CrossRef]
  79. Venkatesh, V.; Thong, J.Y.L.; Xu, X. Consumer Acceptance and Use of Information Technology: Extending the Unified Theory of Acceptance and Use of Technology. MIS Q. 2012, 36, 157–178. [Google Scholar] [CrossRef]
Table 1. Socio-demographic structure of respondents.

| Question | % | N |
| --- | --- | --- |
| 1. Gender | | |
| Male | 44.1% | 389 |
| Female | 55.9% | 494 |
| 2. Age group | | |
| <18 | 0.0% | 0 |
| 18–25 | 71.1% | 628 |
| 26–35 | 11.0% | 97 |
| 36–45 | 8.1% | 71 |
| 46–55 | 7.0% | 63 |
| 56–65 | 2.3% | 20 |
| 66+ | | |
| 3. Academic segment/role at the university | | |
| Teachers and researchers | 10.8% | 95 |
| Undergraduate students | 69.9% | 617 |
| Graduate students | 16.4% | 145 |
| Doctoral students | 2.9% | 26 |
| 4. Year of study | | |
| 0 | 10.4% | 92 |
| 1 | 35.9% | 317 |
| 2 | 34.7% | 306 |
| 3 | 19.0% | 168 |
Table 2. Descriptive statistics and the Kolmogorov–Smirnov test on the role and significance of the use of GenAI tools in the student population and the academic community.

| Item | N | Min | Max | Avg | Std Dev | CV | KS Statistic | p-Value |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| “I understand the potential negative consequences of using GenAI tools” | 883 | 1.00 | 5.00 | 3.68 | 1.138 | 1.318 | 0.161 | 0.000 |
| “I am aware of my responsibility as a user of GenAI tools” | 883 | 1.00 | 5.00 | 3.68 | 1.136 | 1.318 | 0.125 | 0.002 |
| “I understand the ethical principles of using GenAI tools” | 883 | 1.00 | 5.00 | 3.62 | 1.336 | 1.336 | 0.261 | 0.003 |
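For readers who wish to reproduce this style of analysis, the following is a minimal sketch (in Python, not the authors’ own script) of the descriptive statistics and one-sample Kolmogorov–Smirnov test reported in Table 2; the `responses` array is hypothetical illustrative data.

```python
# Minimal sketch on hypothetical Likert-scale data; not the authors' script.
import numpy as np
from scipy import stats

# Hypothetical 5-point Likert responses for one survey item (N = 883).
responses = np.random.default_rng(0).integers(1, 6, size=883).astype(float)

mean = responses.mean()
std = responses.std(ddof=1)
cv = std / mean  # coefficient of variation

# One-sample KS test against a normal distribution parameterized from the
# sample. Note: scipy does not apply the Lilliefors correction that SPSS
# uses when parameters are estimated, so p-values differ slightly.
statistic, p_value = stats.kstest(responses, "norm", args=(mean, std))
print(f"avg={mean:.2f}, sd={std:.3f}, CV={cv:.3f}, KS={statistic:.3f}, p={p_value:.3f}")
```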
Table 3. Chi-square: the influence of GenAI tools usage duration and user’s academic role and gender on understanding the potential negative consequences of usage.

| Understanding of potential negative consequences | Academic role: Students (UG, GS, DS) | Academic role: Teachers/Researchers | Gender: Male | Gender: Female |
| --- | --- | --- | --- | --- |
| >3 months of usage (%) | 13.38% | 12.71% | 24.69% | 9.81% |
| <3 months of usage (%) | 86.62% | 87.29% | 75.31% | 90.19% |
| Total (%) | 41.09% | 58.91% | 21.98% | 78.02% |
| Chi-square (p) | 26.43 (p < 0.001) | | 23.27 (p < 0.001) | |
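Tables 3–5 all rest on the same procedure: a chi-square test of independence on a contingency table crossing usage duration with academic role or gender. The sketch below, assuming hypothetical counts (the published tables report only percentages), shows how such a test is run:

```python
# Minimal sketch with hypothetical counts; not the frequencies behind
# Tables 3-5, which are not reported in the paper.
from scipy.stats import chi2_contingency

#                 Male  Female
observed = [[  96,    48],   # > 3 months of GenAI tool usage
            [ 293,   446]]   # < 3 months of GenAI tool usage

# For 2x2 tables scipy applies Yates' continuity correction by default.
chi2, p, dof, expected = chi2_contingency(observed)
print(f"chi-square={chi2:.2f}, p={p:.4f}, dof={dof}")
```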
Table 4. Chi-square: the influence of GenAI tools usage duration and user’s academic role and gender on awareness of personal responsibility while using GenAI tools.

| Awareness of user responsibility | Academic role: Students (UG, GS, DS) | Academic role: Teachers/Researchers | Gender: Male | Gender: Female |
| --- | --- | --- | --- | --- |
| >3 months of usage (%) | 10.02% | 12.09% | 22.13% | 6.24% |
| <3 months of usage (%) | 89.98% | 87.91% | 77.87% | 93.76% |
| Total (%) | 34.68% | 65.32% | 18.88% | 81.12% |
| Chi-square (p) | 26.39 (p < 0.001) | | 24.2 (p < 0.001) | |
Table 5. Chi-square: the influence of GenAI tools usage length and user’s academic role and gender on understanding of the ethical principles while using GenAI tools.

| Understanding of ethical principles | Academic role: Students (UG, GS, DS) | Academic role: Teachers/Researchers | Gender: Male | Gender: Female |
| --- | --- | --- | --- | --- |
| >3 months of usage (%) | 9.16% | 14.27% | 20.91% | 21.22% |
| <3 months of usage (%) | 90.84% | 85.73% | 79.09% | 78.78% |
| Total (%) | 27.29% | 72.71% | 24.47% | 75.73% |
| Chi-square (p) | 18.26 (p < 0.001) | | 21.39 (p < 0.001) | |
Table 6. Posterior distribution characterization for pairwise correlations: the influence of ethical implications (understanding potential negative consequences, personal user responsibility, and ethical principles) on the prediction of future GenAI tools usage. 1

| Item | Statistic | Underst. of Negative Consequen. | Awareness of User Responsibility | Understanding of Ethical Principles |
| --- | --- | --- | --- | --- |
| Understanding of potential negative consequences | Posterior mode | | 0.844 | 0.824 |
| | Posterior mean | | 0.844 | 0.823 |
| | Posterior variance | | 0.000 | 0.000 |
| | 95% credible interval (lower, upper) | | (0.825, 0.863) | (0.802, 0.844) |
| | N | 883 | 883 | 883 |
| Awareness of user responsibility | Posterior mode | 0.844 | | 0.875 |
| | Posterior mean | 0.844 | | 0.874 |
| | Posterior variance | 0.000 | | 0.000 |
| | 95% credible interval (lower, upper) | (0.825, 0.863) | | (0.858, 0.889) |
| | N | 883 | 883 | 883 |
| Understanding of ethical principles | Posterior mode | 0.824 | 0.875 | |
| | Posterior mean | 0.823 | 0.874 | |
| | Posterior variance | 0.000 | 0.000 | |
| | 95% credible interval (lower, upper) | (0.802, 0.844) | (0.858, 0.889) | |
| | N | 883 | 883 | 883 |

1 The analyses assume reference priors (c = 0).
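Table 6 characterizes the posterior distributions of pairwise Pearson correlations under a reference prior. As a rough frequentist analogue, and only as an illustration on hypothetical data, the sketch below computes a Pearson correlation with a 95% interval from the Fisher z-transform; with reference priors and N = 883, the Bayesian credible interval is numerically very close to such an interval.

```python
# Illustrative approximation on hypothetical data; not the exact Bayesian
# procedure used for Table 6, which characterizes the posterior under a
# reference prior (c = 0).
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
x = rng.normal(size=883)
y = 0.85 * x + rng.normal(scale=0.55, size=883)  # strongly correlated pair

r, _ = stats.pearsonr(x, y)
z = np.arctanh(r)                 # Fisher z-transform
se = 1.0 / np.sqrt(len(x) - 3)    # standard error on the z scale
lo, hi = np.tanh(z - 1.96 * se), np.tanh(z + 1.96 * se)
print(f"r = {r:.3f}, 95% interval = ({lo:.3f}, {hi:.3f})")
```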
Table 7. Reliability analysis of measuring instruments: Cronbach’s Alpha, Spearman–Brown, and Guttman coefficients.

| Reliability Statistic | Value |
| --- | --- |
| Cronbach’s Alpha, Part 1 | 0.876 (2 items a) |
| Cronbach’s Alpha, Part 2 | 0.678 (2 items b) |
| Total N of items | 4 |
| Correlation between forms | 0.723 |
| Spearman–Brown coefficient (equal length) | 0.839 |
| Spearman–Brown coefficient (unequal length) | 0.839 |
| Guttman split-half coefficient | 0.829 |

a. The items are as follows: the frequency of GenAI tools usage and an understanding of possible negative consequences//an awareness of user responsibility. b. The items are as follows: an understanding of the ethical principles of GenAI tools usage//the prediction of future use.
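The coefficients in Table 7 follow standard formulas: Cronbach’s alpha from the ratio of summed item variances to total-score variance, and the Spearman–Brown coefficient from the correlation between the two test halves. A minimal sketch, assuming a hypothetical 883 × 4 item matrix split into two halves as in the Part 1/Part 2 layout above:

```python
# Minimal sketch on hypothetical data; the matrix shape and split mirror
# Table 7, but the values are illustrative only.
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """alpha = k/(k-1) * (1 - sum of item variances / variance of total)."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars / total_var)

rng = np.random.default_rng(2)
common = rng.normal(size=(883, 1))                     # shared trait
items = common + rng.normal(scale=0.8, size=(883, 4))  # 4 correlated items

# Split-half reliability: Part 1 (items 1-2) vs. Part 2 (items 3-4).
half1, half2 = items[:, :2].sum(axis=1), items[:, 2:].sum(axis=1)
r_halves = np.corrcoef(half1, half2)[0, 1]
spearman_brown = 2 * r_halves / (1 + r_halves)  # equal-length correction

print(f"alpha = {cronbach_alpha(items):.3f}, "
      f"between-forms r = {r_halves:.3f}, "
      f"Spearman-Brown = {spearman_brown:.3f}")
```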
Table 8. Analysis of variance (ANOVA) with Cochran’s test for variability between items and individuals.

| Source of Variance | Sum of Squares | df | Mean Square | Cochran’s Q | Sig |
| --- | --- | --- | --- | --- | --- |
| Between people | 2717.136 | 882 | 3.081 | | |
| Within people: between items | 3.310 | 3 | 1.103 | 6.413 | 0.093 |
| Within people: residual | 1363.940 | 2646 | 0.515 | | |
| Within people: total | 1367.250 | 2649 | 0.516 | | |
| Total | 4084.386 | 3531 | 1.157 | | |

Grand Mean = 3.63
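The decomposition in Table 8 is the standard repeated-measures split of total variability into between-people and within-people (between-items plus residual) components; the degrees of freedom are consistent with 883 respondents rating 4 items (882 between people, 3 between items, 882 × 3 = 2646 residual). A minimal sketch of the sum-of-squares arithmetic on hypothetical data:

```python
# Minimal sketch on hypothetical data; reproduces the sum-of-squares
# bookkeeping of Table 8, not the authors' analysis or Cochran's Q itself.
import numpy as np

rng = np.random.default_rng(3)
scores = rng.integers(1, 6, size=(883, 4)).astype(float)  # n = 883, k = 4

n, k = scores.shape
grand_mean = scores.mean()

ss_between_people = k * ((scores.mean(axis=1) - grand_mean) ** 2).sum()
ss_between_items = n * ((scores.mean(axis=0) - grand_mean) ** 2).sum()
ss_total = ((scores - grand_mean) ** 2).sum()
ss_residual = ss_total - ss_between_people - ss_between_items

print(f"grand mean        = {grand_mean:.2f}")
print(f"SS between people = {ss_between_people:10.3f}  (df = {n - 1})")
print(f"SS between items  = {ss_between_items:10.3f}  (df = {k - 1})")
print(f"SS residual       = {ss_residual:10.3f}  (df = {(n - 1) * (k - 1)})")
print(f"SS total          = {ss_total:10.3f}  (df = {n * k - 1})")
```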
