Article

Prioritizing Ethical Conundrums in the Utilization of ChatGPT in Education through an Analytical Hierarchical Approach

by Umar Ali Bukar 1, Md Shohel Sayeed 1,*, Siti Fatimah Abdul Razak 1, Sumendra Yogarayan 1 and Radhwan Sneesl 2
1 Centre for Intelligent Cloud Computing (CICC), Faculty of Information Science and Technology, Multimedia University, Melaka 75450, Malaysia
2 College of Science, University of Basrah, Basrah 61001, Iraq
* Author to whom correspondence should be addressed.
Educ. Sci. 2024, 14(9), 959; https://doi.org/10.3390/educsci14090959
Submission received: 15 June 2024 / Revised: 9 August 2024 / Accepted: 22 August 2024 / Published: 30 August 2024
(This article belongs to the Section Technology Enhanced Education)

Abstract:
The transformative integration of artificial intelligence (AI) into educational settings, exemplified by ChatGPT, presents a myriad of ethical considerations that extend beyond conventional risk assessments. This study employs a pioneering framework encapsulating risk, reward, and resilience (RRR) dynamics to explore the ethical landscape of ChatGPT utilization in education. Drawing on an extensive literature review and a robust conceptual framework, the research identifies and categorizes ethical concerns associated with ChatGPT, offering decision-makers a structured approach to navigate this intricate terrain. Through the Analytic Hierarchy Process (AHP), the study prioritizes ethical themes based on global weights. The findings underscore the paramount importance of resilience elements such as solidifying ethical values, higher-level reasoning skills, and transforming educative systems. Privacy and confidentiality emerge as critical risk concerns, along with safety and security concerns. This work also highlights reward elements, including increasing productivity, personalized learning, and streamlining workflows. This study not only addresses immediate practical implications but also establishes a theoretical foundation for future AI ethics research in education.

1. Introduction

The concept of AI-driven education, as proposed by George and Wooden [1], envisions a revolutionary shift where artificial intelligence (AI) plays a central role in transforming and enriching the learning experience. This evolution in education and research has been a captivating journey [2] marked by rapid growth, substantial investments, and swift adoption. However, educators may harbor concerns about effectively leveraging the pedagogical advantages of AI and its potential positive impact on the teaching and learning processes [3]. The imminent era of AI and augmented human intelligence poses challenges for educational environments, where the demands on human capacities risk outpacing the potential response of the existing educational system [4]. Innovative technologies, representing cutting-edge advancements, have the capability to transform various aspects of society and the economy [2]. In the face of powerful new technologies, changing job landscapes, and the looming threat of increased inequality, the current configuration of the education sector is poised for a destabilizing shift. Successfully addressing the challenges ahead will necessitate a reconfigured learning sector [4] and effective policy strategies.
In light of the transformation brought about by AI technologies, the methods of learning, pedagogy, and research have undergone a significant metamorphosis [2]. For instance, the integration of AI into educational settings has ushered in unprecedented opportunities, transforming the learning landscape [1,5]. Among these AI applications, ChatGPT, a state-of-the-art language generation model, has emerged as a versatile tool in educational contexts. The academic community is increasingly leveraging ChatGPT for diverse purposes, ranging from automated grading to personalized learning experiences [6,7,8,9]. However, its proliferation brings forth a myriad of ethical challenges [10,11,12,13,14], demanding an integrated approach to promote responsible usage [15,16]. Accordingly, as institutions embrace these advancements, questions of ethical responsibility, transparency, and the potential societal impact of ChatGPT deployment become paramount [15].
This study recognizes the need for a holistic understanding of the ethical landscape, acknowledging that ethical considerations extend beyond risk assessment to encompass potential rewards and the resilience needed to navigate evolving challenges. As a result, the study undertakes an exploration of the ethical themes surrounding ChatGPT utilization in education [17], employing innovative risk, reward, and resilience (RRR) terminologies to guide the data extraction process. In this context, the study identifies and categorizes ethical concerns associated with ChatGPT within educational contexts, offering a structured lens for evaluating risk, reward, and resilience dynamics using the Analytic Hierarchy Process (AHP). This approach positions the study at the forefront of AI ethics research. By examining the interplay between risk, reward, and resilience, the research not only offers immediate practical implications but also establishes a theoretical foundation for future studies in the rapidly evolving field of AI in education.
This study aims to investigate the ethical conundrums of ChatGPT by employing the RRR integrative framework and the AHP method. It proposes a framework to address these ethical dilemmas and supports decision-making regarding its use in educational settings. The paper is organized as follows: Section 2 reviews existing literature and identifies the research gap addressed by this study; Section 3 discusses the theoretical framework underpinning the research; and Section 4 details the research methodology, discussing the application of AHP. The study’s results, along with insights and analyses derived from the methodology, are presented in Section 5. Section 6 discusses the contributions and implications of the study, covering theoretical aspects, practical applications, limitations, and future research directions. Finally, Section 7 concludes the paper.

2. Review of Related Studies

To comprehensively understand the literature on ChatGPT, this study conducted a quick search on Scopus on 2 August 2024 using the keywords ‘ChatGPT AND utilization OR usage AND education’. The search yielded 315 documents. After excluding non-English documents, 307 remained, comprising various types of publications such as articles, conference papers, reviews, and book chapters. We further refined the selection by focusing only on journal articles, resulting in 181 documents. We assumed that other publication types (conference papers, conference reviews, notes, letters, editorials, and book chapters) often repeat ideas or present work in progress that eventually appears, or has already appeared, in journal articles [18,19,20,21]. Review articles were also excluded as they are not original research papers. The dataset was then downloaded, and we scanned the titles and abstracts to identify articles relevant to this study’s scope: those that investigated ChatGPT utilization and usage in educational contexts. Among the 181 articles, several themes emerged, such as technology adoption, acceptance, usage and capabilities, performance, review and conceptual studies, and unrelated topics. However, a detailed analysis of these articles is beyond the scope of this paper. Therefore, this study discusses the articles representing the relevant literature based on taxonomy, theoretical models, methodologies, and research gaps.

2.1. Taxonomy of ChatGPT Utilization in Education

The comprehensive review of literature spans diverse fields, shedding light on the varied applications and implications of ChatGPT. Specifically, significant contributions have been made in the educational sector [6,8,9,22,23]; ref. [14] provided insights into scholarly publishing, and [7] narrowed their scope to health care education through a literature review based on PRISMA guidelines. Similarly, Qasem [24] explored scientific research and academic works, showcasing the broader academic application. Many contributions have focused on scientific research [12,25]. Moreover, a study by Yan [26] explored L2 learning; Ray [27] delved into customer service, healthcare, and education; and Taecharungroj [28] analyzed ChatGPT tweets. In addition, Gruenebaum et al. [29] focused on medicine, specifically obstetrics and gynecology. Accordingly, Kooli [23] provided insights that span education and research, providing a multifaceted understanding of ChatGPT’s impact on these interconnected realms. Furthermore, Cox and Tzoc [30] delved into more general applications, potentially encapsulating a broader perspective on ChatGPT utilization. Karaali [31] focused on quantitative literacy, providing valuable insights into the intersection of ChatGPT and numerical comprehension. Jungwirth and Haluza [32] ventured into scientific writing, providing valuable insights from public health perspectives. Pavlik [13] contributed to journalism and media, examining the impact of ChatGPT in shaping narratives and content creation. Finally, Geerling et al. [33] focused on economics and assessment, unraveling the potential implications of ChatGPT in these domains. These studies underline the diverse applications of ChatGPT across educational environments, emphasizing its transformative potential in reshaping various educational tasks, prompting the need for effective considerations in policy, research, and practical applications. Figure 1 captures the taxonomy of specific areas and contributions made in the literature, as well as the methodologies employed thus far.

2.2. Theoretical Models of Prior Research

The current literature on the adoption and utilization of ChatGPT in educational settings employs a variety of theoretical models to understand and predict behavior. Among these, the unified theory of acceptance and use of technology (UTAUT) and its extension, UTAUT2, as well as the technology acceptance model (TAM) are prominently featured. Figure 2 presents the frequency with which these theoretical models have been used to investigate ChatGPT utilization in education.
Firstly, several studies employ the UTAUT or UTAUT2 model to investigate the behavioral intentions of students and faculty members toward ChatGPT [34,35,36,37]. These studies span various countries and educational settings. For instance, Elkefi et al. [38] utilized a mixed-method triangulation design based on UTAUT, gathering data from engineering students in developing countries through semi-structured surveys. Bouteraa et al. [39] also adopted UTAUT in conjunction with social cognitive theory (SCT), focusing on the role of students’ integrity in adoption behavior. Similarly, Bhat et al. [40] examined educators’ acceptance and utilization using an extended UTAUT model. Strzelecki et al. [36] focused on Polish academics, incorporating personal innovativeness into the UTAUT2. In addition, Arthur et al. [41] and Grassini et al. [42] employed the UTAUT2 model to examine predictors of adoption among higher education students. Salifu et al. [43] similarly investigated economics students in Ghana, and Elshaer et al. [44] integrated gender and study disciplines as moderators, finding significant moderating effects on the relationship between performance expectancy and ChatGPT usage.
Secondly, TAM is another widely used framework in the literature. Gustilo et al. [45] used TAM to understand factors influencing the acceptance of ChatGPT, while Kajiwara and Kawabata [46] examined the impact of teaching ethical use among students aged 12 to 24. Cambra-Fierro et al. [47] assessed university faculty members using TAM, and Tiwari et al. [48] investigated students’ attitudes toward ChatGPT for educational purposes. Abdalla [49] used a modified version of TAM to investigate college students, with personalization acting as a moderator. Abdaljaleel et al. [50] and Sallam et al. [51] employed a TAM-based survey instrument called TAME-ChatGPT (Technology Acceptance Model Edited to Assess ChatGPT Adoption) to examine ChatGPT integration in education, with participants from several countries, including Egypt, Iraq, Jordan, Kuwait, and Lebanon. The applicability of TAM to ChatGPT research is further evident from its extensive use in the current literature [52,53,54,55,56,57,58,59,60].
In addition to UTAUT and TAM, other theoretical frameworks have been employed. Specifically, Duong et al. [61] integrated the information systems success (ISS) model with the stimulus–organism–response (SOR) paradigm to explore factors affecting students’ trust, satisfaction, and continuance usage intention of ChatGPT. Mandai et al. [62] approached ChatGPT’s impact on higher education through John Dewey’s reflective-thought-and-action model and revised Bloom’s taxonomy. Jochim and Lenz-Kesekamp [63] used domestication theory to explore the adaptation process of text-generative AI among students and teachers. Abdalla et al. [64] applied the diffusion of innovation theory (DIT) to investigate ChatGPT adoption by business and management students. Mahmud et al. [65] evaluated factors within the extended value-based adoption model (VAM) to understand university students’ attitudes toward ChatGPT. Other frameworks employed include the social construction of technology (SCOT) theory [66], interpretative phenomenological analysis (IPA) [67], and the hedonic motivation system adoption model (HMSAM) [68].
Overall, the diversity of theoretical models highlights the multifaceted nature of research on ChatGPT adoption in education, with each model providing unique insights into the factors influencing acceptance and usage. This theoretical pluralism enriches the understanding of ChatGPT’s integration into educational contexts and underscores the importance of a comprehensive approach to studying technology adoption.

2.3. Methodologies of Prior Research

The methodologies employed in the research on ChatGPT adoption and utilization in education span a variety of quantitative and qualitative approaches, often leveraging established theoretical frameworks and advanced statistical techniques to examine different influencing factors. Several studies have utilized structural equation modeling (SEM) to explore the factors influencing ChatGPT adoption [40,49,54]. For example, Cambra-Fierro et al. [47] assessed the impact of a series of factors on ChatGPT adoption among university faculty members using covariance-based structural equation modeling (CB-SEM). Abdalla et al. [64] investigated business and management students using partial least squares structural equation modeling (PLS-SEM). Additional studies that use CB-SEM or PLS-SEM are evident across the literature, and the research spans higher education teachers, undergraduates, and postgraduate students, as well as post-primary education teachers and students, in various countries including Vietnam, Norway, Egypt, Poland, Bangladesh, Oman, South Africa, the Czech Republic, China, Malaysia, and Saudi Arabia [34,35,36,37,41,42,44,50,55,56,57,60,68,69,70].
Additionally, hybrid methodologies have been adopted to capture both linear and nonlinear relationships in the data. Mahmud et al. [65] and Salifu et al. [43] integrated PLS-SEM with artificial neural networks (ANN) and deep neural networks (DNN) to enhance the precision of their analyses. This hybrid approach reveals a growing trend towards utilizing complex models to better understand the multifaceted nature of ChatGPT adoption. Moreover, qualitative methods [71,72,73] also play a significant role in understanding the subjective experiences and perceptions related to ChatGPT. Specifically, studies by Jangjarat et al. [72] and Komba [73] employed content and thematic analysis, using software such as NVivo to analyze qualitative data from interviews and chat content. In addition, Espartinez [74] utilized Q-Methodology, Sun et al. [75] employed a quasi-experimental design, and others applied statistical methods such as descriptive analysis and inferential statistics (e.g., Chi-square or regression analyses) [76,77] to capture the perspectives of students and academics, as well as the impact on programming behaviors regarding ChatGPT.
Furthermore, Bukar et al. [78] utilized AHP to propose a decision-making framework for ChatGPT utilization in education based on a panel of 10 experts. Mixed-method designs have been used to combine the strengths of both quantitative and qualitative approaches. Adams et al. [79] employed a sequential explanatory mixed-method design to explore university students’ readiness and perceived usefulness of ChatGPT for academic purposes, using SPSS and ATLAS.ti for data analysis. These diverse methodologies highlight the comprehensive efforts undertaken by researchers to dissect the multifactorial influences on ChatGPT adoption and utilization in educational settings, providing rich insights and guiding future research directions.

2.4. Research Gap and Motivation

Despite the extensive research in the current literature, in which various technology adoption theories have been applied through SEM-based and qualitative methodologies to explore the factors influencing ChatGPT adoption in educational contexts, there is a notable gap in the application of the RRR framework and AHP to rank and prioritize these factors. While the RRR framework provides a robust foundation for understanding the multifaceted dimensions of Gen-AI [17], it lacks a structured approach to quantitatively assess and prioritize its components. This limitation hampers the ability of policymakers and stakeholders to make well-informed decisions regarding the adoption and utilization of Gen-AI. Moreover, because applications of the AHP method to ChatGPT utilization for policy and decision-making remain scarce [78], and because the RRR components need to be quantitatively assessed and prioritized, its application can be instrumental. AHP is a powerful decision-making tool that can systematically rank and prioritize the components of the RRR framework. Hence, the study aims to close this gap by employing this methodological approach to evaluate the relative importance of risk, reward, and resilience themes, facilitating an evidence-based approach to Gen-AI utilization decisions.

3. Theoretical Background and Framework

The diverse studies identified from the literature have collectively reported a spectrum of findings regarding ChatGPT. The insights from these studies encompass a wide range of themes, including ethical concerns, perceived benefits, and strategies for individuals and society to navigate the implications of ChatGPT. The information was extracted and strategically classified into the three interconnected categories of risk, reward, and resilience through a systematic literature review (SLR) as outlined in Bukar et al. [17]. This classification was motivated by the integrated policy-making framework known as RRR, as proposed by Roberts [80].
This framework provided a structural foundation for understanding and categorizing the identified issues, ensuring a comprehensive exploration of ChatGPT’s ethical landscape. Following this, the study delved into a frequency analysis to quantify the prevalence of these identified concerns. Subsequently, ethical and risk-related themes, specifically those falling under the risk category, were selected for a detailed examination, where the study employed these themes to construct a decision-support framework, leveraging the AHP to discern their relative importance and guide decisions on whether to restrict or legislate ChatGPT utilization [78]. Accordingly, this study elevated this analysis by encompassing all elements of the RRR framework: risk, reward, and resilience. This comprehensive approach allowed for a holistic consideration of themes gleaned from previous studies. By incorporating all the associated themes of RRR, the study ensured that ethical concerns, risks, and potential rewards associated with ChatGPT were thoroughly examined and prioritized. This methodological progression underscores the depth and rigor of the study, providing valuable insights for policymakers and stakeholders grappling with the ethical implications of ChatGPT in the educational environment. The following sections discuss these issues accordingly.

3.1. Risk

Risk arises where threat, exposure, and vulnerability converge [80]. This convergence depends on the severity of the threat and its interaction with the exposure and vulnerability. In the context of ChatGPT utilization in education, risk emerges at the intersection of potential threats (such as misuse or ethical breaches), exposure (the extent to which ChatGPT is integrated into educational practices), and vulnerability (the susceptibility of students and educators to these threats). The level of risk is influenced by the severity of the potential threats and how they interact with the degree of exposure and vulnerability within the educational environment. Accordingly, the SLR and RRR conceptual study [17] played a crucial role in identifying and synthesizing a comprehensive list of ethical and risk-related concerns associated with the implementation of ChatGPT. Building upon the insights gleaned from previous studies [17,78], the current study refined and narrowed down the list [78]. As a result, seven concerns were selected based on their frequency count, as presented in Table 1. For a detailed account of the extraction process, refer to Bukar et al. [17,78]. These concerns have been earmarked as risk themes for this investigation. This approach aims to focus on the most significant risks and to assess and rank them alongside reward and resilience issues.

3.2. Reward

The reward is determined by factors like opportunity, access, and capability [80]. In the context of ChatGPT utilization in education, opportunity refers to the potential benefits that ChatGPT can offer, like enhanced learning experiences and personalized education. Access involves the conditions and channels through which educators and students can effectively leverage ChatGPT, such as technological infrastructure and availability of resources. Capability encompasses the internal attributes of the educational system and its participants, including their digital literacy and adaptability, which determine the extent of the benefits they can achieve from utilizing ChatGPT. Together, these elements define the potential rewards that can be realized through the strategic and informed integration of ChatGPT into educational practices.
Accordingly, themes associated with the rewards of ChatGPT usage were extracted [17] and synthesized. This process involved not only identifying these themes but also computing their frequency count to discern patterns and assess their significance within the dataset. Further, a detailed examination of the reward-related information was performed, focusing on identifying recurring themes such as common topics, ideas, patterns, and approaches. The identification of primary themes was grounded in the observation of connections among sub-themes sharing a logical context. For instance, themes like providing feedback, prompt writing, collaboration and friendship, and increased student engagement were collectively categorized under the umbrella term “question answering”. This systematic approach was consistently applied to the remaining themes.
Building upon this analysis, seven reward-related themes were singled out based on their frequency count and thematic relevance, as outlined in Table 2. These carefully selected themes are regarded as more significant and have been designated as criteria for further investigation utilizing the AHP. This strategic selection aims to concentrate on the most significant rewards stemming from the utilization of ChatGPT in educational settings. Nevertheless, some themes were not analyzed in this study owing to their low frequency and limited thematic relevance; these comprise decision support, expertise, and judgment [23,81], multilingual communication and translation [14,81], cost-saving [7], passed exams [82], pitches [83], support societal megatrends [84], and transformation [11].

3.3. Resilience

The core of resilience is found in the “ability of entities and systems to absorb, adapt to, and transform in response to ongoing changes” [80,85]. In the context of ChatGPT utilization in education, resilience is rooted in the capacity of educational systems and stakeholders (students, educators, and institutions) to absorb, adapt to, and transform in response to the ongoing changes introduced by Gen-AI. As ChatGPT and similar AI tools continue to evolve and impact educational practices, the emphasis on resilience thinking becomes essential. This enables educational systems to not only withstand risk-related concerns but also to adapt and transform these challenges into opportunities for innovation and growth, like developing plagiarism detection tools, encouraging higher-level reasoning skills, etc. Drawing from the literature, resilience-related themes connected to addressing the ethical challenges posed by ChatGPT were identified [17] and synthesized. Accordingly, the frequency of these themes was computed to unravel underlying patterns and significance within them.
In addition, this study further synthesized the gathered information, aiming to adopt a primary theme that would become the focal point for further investigation. For example, themes such as improved human–AI interaction and the balance between AI-assisted innovation and human expertise are recurrent terminologies in the resilience discourse. Recognizing their shared logical context, these were amalgamated under the overarching theme of “co-creation between humans and AI”, as presented in Table 3. Extending this process to encompass various other keywords and conceptual clusters, this study curated a selection of seven resilience-related themes. This strategic curation was guided by both the frequency of occurrence and thematic relevance. The chosen themes stand as pivotal factors within the resilience category, laying the foundation for their exploration in the subsequent AHP analysis. Nonetheless, other resilience-related themes that were not analyzed in this study include addressing the digital divide and potential mitigation strategies [27], auditing the trail of queries [86], sustainability, raising awareness, scientific discourse [23], and experimental learning framework [33].

3.4. Summary and Conceptual Framework Based on AHP

Given the research’s objective of employing the AHP, chosen for its wide use [87,88,89,90,91,92,93], to prioritize themes crucial to policymaking and decision-making in ChatGPT utilization, a conceptual framework was devised. This framework unfolds across three hierarchical levels: the overarching objective, the criteria (primary themes or factors), and the sub-criteria (sub-factors or sub-themes). The primary focus of this study is to ascertain priorities among elements shaping the policies and decisions surrounding ChatGPT application. Positioned at the peak of this hierarchical structure is the central research objective, representing the goal of the study. At level 2, the framework delineates the main factors or overarching categories, namely, risk, reward, and resilience, drawn from the well-established integrated policymaking model known as RRR [17,80]. These categories stand as pivotal themes influencing decisions regarding ChatGPT usage in educational settings. Level 3 of the hierarchy details the sub-factors with the potential to exert influence on decisions concerning ChatGPT’s utilization in the educational sector. To treat the RRR categories evenly, seven elements were considered from each category. The detailed structure of this conceptual framework is visually presented in Figure 3, illustrating the hierarchical relationships. Subsequently, the breakdown of the research methodology is expounded upon in the succeeding section, explaining the systematic steps undertaken to unravel the multifactorial relevance of ChatGPT’s ethical themes within the educational context.
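To make the three-level structure concrete, the sketch below encodes the hierarchy as a plain Python dictionary. This encoding is illustrative only and is not part of the study’s materials; the sub-theme labels are those reported in the risk, reward, and resilience tables later in the paper.

```python
# Illustrative encoding of the three-level AHP hierarchy: the goal, the three
# RRR categories, and the seven sub-themes per category (labels as reported
# in the paper's risk, reward, and resilience tables).
AHP_HIERARCHY = {
    "goal": "Prioritize ethical themes of ChatGPT utilization in education",
    "criteria": {
        "Risk": [
            "Privacy and confidentiality", "Safety and security",
            "Academic integrity concern", "Plagiarism",
            "Infodemics and misinformation",
            "Hallucination through manipulation and misleading",
            "Biased responses",
        ],
        "Reward": [
            "Increased productivity and efficiency",
            "Idea and text generation and summarization",
            "Decreased teaching workload", "Personalized learning",
            "Streamlining workflow",
            "Dissemination and diffusion of new information",
            "Question answering",
        ],
        "Resilience": [
            "Solidifying ethical values", "Higher-level reasoning skills",
            "Academic integrity policies", "Transforming educative systems",
            "Acceptable usage in science", "Co-creation between humans and AI",
            "Appropriate testing framework",
        ],
    },
}
```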

4. Research Design and Methodology

This study is divided into three parts, each making a distinct and significant contribution to the ethics conundrum of ChatGPT and similar large language models (LLMs), as depicted in Figure 4. Firstly, a systematic review was used to identify articles concerning the ethical issues of ChatGPT. The outcomes of the review helped this study identify the issues surrounding ChatGPT and propose a policymaking framework based on the RRR integrated framework for policy making [17,80]. The proposed framework discussed the complex nature of creating and developing policy to guide the implementation and utilization of LLMs. One key limitation of that framework, which is conceptual and theoretical in nature, is the lack of an objective assessment of the elements proposed in RRR. In particular, it does not demonstrate how the elements would guide initial policy creation or how the components of RRR could be weighed objectively. The former was investigated and reported in Bukar et al. [78], and the latter is covered in this study. Specifically, this study utilized the concept of AHP by following the guidelines provided by Gupta et al. [87] to compare and prioritize the various elements of RRR.
Accordingly, the AHP stands as a widely applied technique in the literature on multiple-criteria decision-making (MCDM), especially suited for scenarios involving a multitude of criteria or factors, addressing complex challenges within MCDM [87,93,94]. The core methodology of AHP involves breaking down an MCDM problem into a hierarchical structure with a minimum of three levels comprising objectives, criteria, and decision alternatives [94]. In this hierarchical model, AHP systematically constructs an evaluation framework, gauges the relative priorities of the criteria, conducts comparisons among available decision alternatives for each criterion, and ultimately establishes a ranking of these alternatives [95]. The determination of ranking of factors using the AHP involves expert pairwise comparisons, where judgments express the degree to which one element dominates another concerning a specific attribute, as outlined by [94]. The AHP unfolds through various phases, as illustrated in Figure 5, to analyze the RRR themes regarding ChatGPT utilization in education.
Firstly, the initial phase of the AHP approach is dedicated to the construction of a cohesive hierarchy for the research problem. Consequently, the structure of the AHP problem specific to this study is illustrated in Figure 3. It is crucial to emphasize that the AHP model employed in this study deliberately excludes alternative options, as the primary objective revolves around prioritizing these identified RRR themes.
Secondly, an AHP questionnaire was carefully crafted utilizing the pairwise comparison method proposed by [94]. This comprehensive questionnaire encompassed all the primary themes and sub-themes relevant to the study. This study developed the AHP questionnaire based on the comparison concept, where each factor is compared on a scale of 1–9. The questionnaire is organized into several sections. The introduction includes the invitation letter and privacy notice/PDPA clause. Detailed instructions on how to complete the questionnaire follow. The main body comprises comparison tables for the main classifications (risk, reward, and resilience), as well as for risk-related themes, reward-related themes, and resilience-related themes. The final section collects the respondents’ demographic information.
Accordingly, the questionnaire was distributed among academics within the university environment, and a total of 12 responses were garnered. This sample is considered adequate for an AHP investigation for several reasons: AHP yields reliable results with small samples [96,97] and typically uses sample sizes ranging from 2 to 100 experts [98]. There is no strict minimum sample size [99], and involving more experts may introduce repetitive information, diminishing the value of additional responses [100]. Smaller samples make AHP efficient, especially when analyzing equally important alternatives [96,97]. As AHP is not a statistical method, it does not require a statistically significant sample size [101,102], and the emphasis is on the decisions made rather than the respondents, negating the need for a representative sample [102]. Finally, AHP is often applied to knowledgeable individuals, further justifying smaller sample sizes [103]. Accordingly, the participating academics provided valuable data through pairwise comparisons, grounded in the themes of risk, reward, and resilience. The relative scores assigned to these pairwise comparisons adhered to the well-established nine-point scale introduced by Saaty [94], as delineated in Table 4.
Thirdly, the study undertook the transformation of raw pairwise comparison data into priority weights. This pivotal step involved converting the judgments provided by the respondents into a quantifiable format, laying the groundwork for subsequent analyses. The detailed methodology employed to ensure an accurate reflection of the relative importance of the RRR components in this study is elaborated in the following subsections. These steps systematically outline the determination of normalized weights and the ranking of the RRR elements, aligning with the methodological approach of Gupta et al. [87].

4.1. Building Pairwise Comparison Matrices

The process of pairwise comparison plays a crucial role in assessing the relative significance of factors, a methodology introduced by Saaty [94,104] and further expounded upon by Forman and Peniwati [105]. In this comparison phase, judgments are formulated and expressed as integers, representing the preference for one factor over another. If the judgment signifies that the $x$th factor is more important than the $y$th factor, the integer is placed in the $x$th row and $y$th column of the comparison matrix. Simultaneously, the reciprocal of this integer is recorded in the $y$th row and $x$th column of the matrix. In situations where the factors being compared are deemed equally important, a value of one is assigned to both locations in the matrix. Consequently, each comparison matrix, denoted as $M = [M_{xy}]$, takes the form of a square matrix of order $n$, where $n$ is the number of factors compared, and it includes reciprocated elements, as depicted in Equation (1). This systematic process sets the stage for the subsequent computation of normalized priority weights.

$$M_{yx} = \frac{1}{M_{xy}}; \quad x, y = 1, 2, 3, \ldots, n \tag{1}$$
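As an illustration of Equation (1), the following Python sketch builds a reciprocal comparison matrix from the upper-triangular judgments supplied by a single respondent. It is not part of the study’s materials; the function name and the example judgments are hypothetical.

```python
import numpy as np

def build_comparison_matrix(judgments: dict, n: int) -> np.ndarray:
    """Construct a reciprocal pairwise comparison matrix M (Equation (1)).

    `judgments` maps a pair (x, y) with x < y to the Saaty-scale value
    expressing how strongly factor x dominates factor y; values below 1
    indicate that factor y dominates factor x. The reciprocal entry and
    the unit diagonal are filled in automatically.
    """
    M = np.ones((n, n))
    for (x, y), value in judgments.items():
        M[x, y] = value
        M[y, x] = 1.0 / value
    return M

# Hypothetical single-respondent judgments for the three RRR categories
# (Risk = 0, Reward = 1, Resilience = 2) on the 1-9 scale.
M = build_comparison_matrix({(0, 1): 2, (0, 2): 1 / 3, (1, 2): 1 / 4}, n=3)
```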

4.2. Construction of Aggregate Comparison Matrices

To synthesize the evaluations or judgments for each element within the comparison matrix, we employ the geometric mean method, a technique that aggregates the responses obtained from all academics engaged in the pairwise comparisons of the various themes and sub-themes of RRR. This approach facilitates the attainment of a consensus assessment [104,105]. The resulting aggregated comparison matrix is denoted as $A = [a_{xy}]$. In this matrix, each element $a_{xy}$ is the geometric mean of the judgments provided by the $N$ decision-makers, calculated as outlined in Equation (2), where $N$ represents the number of academics participating in the assessments and $C_{xy}^{(k)}$ denotes the judgment furnished by the $k$th participant for the corresponding factor or sub-factor pair under comparison. This aggregation process contributes to obtaining a collective perspective on the relative importance of these elements within the realm of ChatGPT utilization decisions.

$$a_{xy} = \left( \prod_{k=1}^{N} C_{xy}^{(k)} \right)^{1/N} \tag{2}$$
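A minimal sketch of this aggregation step, assuming each respondent’s judgments are already stored as an n-by-n reciprocal matrix, is shown below; it simply takes the element-wise geometric mean across respondents, as in Equation (2). The function name is illustrative.

```python
import numpy as np

def aggregate_judgments(matrices: list[np.ndarray]) -> np.ndarray:
    """Element-wise geometric mean of N individual comparison matrices
    (Equation (2)), yielding the aggregated consensus matrix A."""
    stacked = np.stack(matrices)                       # shape (N, n, n)
    return np.prod(stacked, axis=0) ** (1.0 / len(matrices))
```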

4.3. Computation of RRR Theme Relative Weights

To ascertain the precedence of each category (primary factor) and sub-factor using the AHP methodology, a normalized matrix denoted as $N$ is formulated. Derived from the corresponding comparison matrix $A$, the creation of $N$ follows the procedure outlined in Equation (3). Here, $N$ represents the normalized matrix specific to the category or sub-factor, with $n_{xy}$ denoting the element at the $x$th row and $y$th column of the normalized matrix $N$, $a_{xy}$ representing the corresponding element in the comparison matrix $A$, and $n$ representing the total count of factors or sub-factors under consideration. This normalization procedure serves to standardize the data within the matrix, rendering them amenable to subsequent calculations of priority weights. The resultant normalized matrix $N$ enables a quantitative evaluation of the relative significance of each category and sub-factor within the context of ChatGPT utilization in the educational sector.

$$N = [n_{xy}], \quad \text{where } n_{xy} = \frac{a_{xy}}{\sum_{x=1}^{n} a_{xy}} \tag{3}$$

Moreover, to derive the priority weights for each factor, the investigation computed the average of the elements within each row of the normalized matrix $N$. This computation yields a priority vector denoted as $W = [w_x]$, representing a column matrix of order $n \times 1$, as depicted in Equation (4). In this context, $W$ signifies the priority vector, where each element $w_x$ corresponds to the priority weight of a specific factor, and $n$ denotes the total count of factors under consideration. Significantly, the priority vector $W$ serves as a concise quantitative depiction of the relative significance of each factor within the study. This allows the study to effectively rank and prioritize these themes based on the collective evaluations provided by the academics engaged in the pairwise comparisons and normalization procedures.

$$w_x = \frac{\sum_{y=1}^{n} n_{xy}}{n} \tag{4}$$
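The two steps above reduce to a few lines of array arithmetic. The sketch below is an illustration rather than the authors’ code: it column-normalizes the aggregated matrix as in Equation (3) and averages each row to obtain the priority vector of Equation (4).

```python
import numpy as np

def priority_weights(A: np.ndarray) -> np.ndarray:
    """Return the priority vector W for an aggregated comparison matrix A.

    Each column of A is normalized by its column sum (Equation (3)), and
    the row-wise mean of the normalized matrix gives the weights
    (Equation (4)); the weights sum to 1 by construction.
    """
    N = A / A.sum(axis=0, keepdims=True)   # n_xy = a_xy / sum_x a_xy
    return N.mean(axis=1)                  # w_x = (1/n) * sum_y n_xy
```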

4.4. Validation of the Comparison Matrix through Consistency Test

According to the AHP approach, in response to the variability in human judgments, it is essential to evaluate the consistency of the comparison matrices to validate the predicted priority vectors. The consistency ratio ($CR$) acts as the metric for assessing pairwise comparisons. If the $CR$ is equal to or less than 0.10 ($CR \le 0.10$), it signifies an acceptable level of consistency within the comparison matrix $A$. In such cases, the ranking results can be considered reliable and accepted, aligning with the guidelines from [94]. However, if the $CR$ surpasses the 0.10 threshold ($CR > 0.10$), it indicates that the ranking results are unacceptable due to excessive inconsistency. In such situations, it is recommended that the decision-maker revisits the evaluation process, as advised by prior research [87,93]. Ensuring consistency in the comparison matrices is crucial for obtaining robust and dependable priority vectors. Thus, matrix $A$ is deemed consistent if it fulfills the condition outlined in Equation (5), where $A$ represents the comparison matrix and $W$ is the priority vector.

$$A W = n W \tag{5}$$
Additionally, the expression in Equation (5) is an eigenvalue problem. In this scenario, the principal eigenvalue, referred to as $\lambda_{max}$, must be equal to or greater than $n$, as specified by [94]. A crucial criterion for consistency is that $\lambda_{max}$ should lie close to $n$: the closer the principal eigenvalue is to the number of themes $n$, the more consistent the matrix $A$. Therefore, to evaluate the consistency ratio $CR$ associated with a comparison matrix $A$, the typical steps involve the following:
  • Step 1: Calculate the principal eigenvalue $\lambda_{max}$ using Equation (6):
    $$A W = \lambda_{max} W \tag{6}$$
  • Step 2: Calculate the consistency index $CI$, given by Equation (7):
    $$CI = \frac{\lambda_{max} - n}{n - 1} \tag{7}$$
  • Step 3: Determine the random index ($RI$), a predefined value determined by the matrix order ($n$). It is obtained from a reference table corresponding to the matrix order, resulting in distinct values of $RI$ for different numbers of criteria ($n$), as outlined in Table 5.
  • Step 4: Calculate $CR$ using Equation (8):
    $$CR = \frac{CI}{RI} \tag{8}$$
  • Step 5: Verify the acceptance of $CR$. If the $CR$ is equal to or less than 0.10 ($CR \le 0.10$), the level of inconsistency within the comparison matrix $A$ is acceptable, and the reliability of the ranking results can be affirmed. Among the 12 responses, only 10 passed the consistency test; therefore, only those are reported in this study.
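Steps 1 to 5 can be expressed compactly as follows. This is a sketch under the usual AHP conventions, not the study’s own code; the random index values are the ones commonly tabulated for Saaty’s method (reproduced as Table 5), and the function assumes matrices of order three or more, since $RI$ is zero below that.

```python
import numpy as np

# Random index (RI) values commonly tabulated for matrix orders 1-10 (Table 5).
RANDOM_INDEX = {1: 0.00, 2: 0.00, 3: 0.58, 4: 0.90, 5: 1.12,
                6: 1.24, 7: 1.32, 8: 1.41, 9: 1.45, 10: 1.49}

def consistency_ratio(A: np.ndarray, W: np.ndarray) -> float:
    """Consistency ratio CR of comparison matrix A with priority vector W.

    Estimates the principal eigenvalue from A @ W (Equation (6)), derives
    the consistency index CI (Equation (7)), and divides by the random
    index RI (Equation (8)). Valid for n >= 3, where RI > 0.
    """
    n = A.shape[0]
    lam_max = float(np.mean((A @ W) / W))   # principal eigenvalue estimate
    ci = (lam_max - n) / (n - 1)            # consistency index
    return ci / RANDOM_INDEX[n]             # CR <= 0.10 is acceptable
```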

4.5. Computation of Global Weights of the RRR Themes

Equation (4) is utilized to compute the local weights for both the primary category and sub-themes associated with ChatGPT utilization decisions. These local weights offer insights into the relative importance of themes and sub-themes within their specific categories. For the primary themes (categories), their global weights coincide with their local weights. In essence, the local weights directly signify the global weights for the main categories. However, concerning sub-themes, the determination of global weights follows a distinct process. Equation (9) is employed to calculate the global weights for sub-themes. This implies that the significance of sub-themes within their parent category is evaluated in connection with the overall priorities of all themes and sub-themes considered in the study. This methodology facilitates a comprehensive assessment of the importance of sub-themes in the broader context of decision-making regarding ChatGPT utilization.
$$GW_{SF} = LW_{SF} \times GW_{CMF} \tag{9}$$
Accordingly, $GW_{SF}$ stands for the global weight of a sub-theme, $LW_{SF}$ represents the local weight of the sub-theme, and $GW_{CMF}$ signifies the global weight of the corresponding main factor (category).
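Equation (9) is a straightforward product. The toy check below is illustrative only: it multiplies the local weight of a sub-theme reported in Section 5 (privacy and confidentiality, 0.236, within the Risk category, whose global weight is 0.3279) and recovers the corresponding global weight up to rounding.

```python
def global_weight(local_weight_sub: float, global_weight_category: float) -> float:
    """Global weight of a sub-theme (Equation (9)): its local weight within
    its parent category times the category's global weight."""
    return local_weight_sub * global_weight_category

# 0.236 * 0.3279 = 0.0774, consistent with the reported value of 0.07736.
print(round(global_weight(0.236, 0.3279), 4))
```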

5. Results and Discussion

The AHP results and discussion were prepared following the presentation and guidelines in Gupta et al. [87]. Accordingly, the data amassed for this study were saved in MS Excel. Expert responses gathered through the process of pairwise comparisons for the various themes and sub-themes were systematically aggregated using the geometric mean approach outlined in Equation (2). The presentation of findings encapsulates comparison matrices, weights, and consistency ratios for all categories within the hierarchical model. These values were derived adhering to the methodology elucidated in the preceding section. It is noteworthy that not all CR values obtained from the 12 responses fell below the predefined threshold of 0.10; the 10 responses that did signify a commendable level of consistency in the comparison matrices and are the ones reported in this study. This underscores the reliability of the calculated weights or priorities. The high consistency observed in the matrices serves as a crucial validation, fortifying the trustworthiness of the prioritization results for key themes and dimensions in the intricate landscape of ranking the RRR elements for ChatGPT. The following sections discuss the results obtained in this study.

5.1. RRR Normalized Matrix and Weight

Upon delving into the results presented in Table 6, a discernible pattern emerges. Among the three primary categories, “Resilience” emerges as the heavyweight with a substantial weight of 0.4589. This underscores resilience as the preeminent factor wielding the most influence in the ethical conundrum surrounding ChatGPT utilization. Following closely, “Risk” occupies an intermediate position with a weight of 0.3279, denoting its significant but balanced importance. On the other end, “Reward” assumes the role of the third primary category with a weight of 0.2132, highlighting its role as a critical consideration. These weights not only provide a hierarchical perspective but also offer valuable insights into the relative priorities of these categories within the framework of RRR for ChatGPT utilization in education. Accordingly, the calculated values for $\lambda_{max}$ (3.00101), $CI$ (0.000504257), and $CR$ (0.000869408 < 0.10) indicated that the consistency of the RRR matrices is acceptable.
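As a quick arithmetic check, and assuming the standard random index of 0.58 for a 3-by-3 matrix (Table 5), the reported CI and CR follow directly from the reported principal eigenvalue:

```python
# Worked check of the reported consistency figures for the 3x3 RRR matrix,
# assuming RI = 0.58 for n = 3.
lam_max, n, ri = 3.00101, 3, 0.58
ci = (lam_max - n) / (n - 1)   # = 0.000505, approximately the reported CI
cr = ci / ri                   # = 0.00087, well below the 0.10 threshold
```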

5.2. Normalized Matrix and Weight of Risk Themes

The analysis of the “Risk” category delves into the intricate sub-themes, unraveling the ethical complexities associated with ChatGPT. Table 7 presents a detailed breakdown of the weight analysis for the risk-related themes. The result reveals that privacy and confidentiality concerns (weight = 0.236) take precedence as the most pivotal and pertinent dimension in the realm of risk-related concerns. Following closely are safety and security concerns (weight = 0.208), emphasizing the paramount importance of data security and confidentiality. Next, in descending order of significance, are academic integrity concern (weight = 0.175), plagiarism (weight = 0.128), infodemics and misinformation (weight = 0.095), risk of hallucination through manipulation and misleading (weight = 0.090), and biased responses (weight = 0.068). The respondents’ heightened concerns regarding privacy and confidentiality underscore the critical nature of safeguarding data and confidential information, alongside the pressing need for safety and security measures.
Secondly, the calculated values of $\lambda_{max}$ (7.0436), $CI$ (0.0073), and $CR$ (0.005499 < 0.10) affirm the acceptable consistency of the comparison matrices employed in the analysis, solidifying the reliability of the ranking results. These rankings provide a comprehensive understanding of the themes shaping risk-related concerns in ChatGPT utilization. Stakeholders and policymakers can leverage this information to craft tailored policy strategies and guidelines, considering the relative importance of each concern. This strategic approach should aim to foster the responsible and ethical utilization of ChatGPT and, by extension, generative AI.

5.3. Normalized Matrix and Weight of Reward Themes

The comprehensive analysis within the “Reward” category delves into the multifaceted themes that wield substantial influence over the benefits emanating from the integration of ChatGPT, as presented in Table 8. Each dimension, viewed through the perspectives of the respondents, contributes uniquely to the overall landscape of rewards in ChatGPT utilization. Notably, increased productivity and efficiency emerges as the preeminent reward, commanding the highest weight of 0.303. This underscores the unanimous acknowledgment by respondents of the transformative impact that ChatGPT can have on enhancing productivity and streamlining various tasks. Furthermore, idea and text generation and summarization (weight = 0.175) follows closely, emphasizing the prowess of ChatGPT in creative ideation and content summarization. The remaining reward themes include decreased teaching workload (weight = 0.143), personalized learning (weight = 0.134), streamlining workflow (weight = 0.083), dissemination and diffusion of new information (weight = 0.082), and question answering (weight = 0.081). Notably, the respondents accorded the highest importance to increased productivity and efficiency as the most significant reward derived from the utilization of ChatGPT.
Secondly, the calculated values of $\lambda_{max}$ (7.0764), $CI$ (0.012726), and $CR$ (0.009641 < 0.10) affirm the acceptable consistency of the comparison matrices used in the analysis, reinforcing the reliability of the ranking results. The insights obtained from this investigation into the varied themes of rewards associated with ChatGPT utilization offer stakeholders a profound understanding of the potential benefits. From empowering educators to streamlining organizational workflows, the findings underscore the transformative potential of ChatGPT across diverse domains. Stakeholders can leverage this comprehensive understanding to formulate targeted policies and strategies that maximize the positive impact of ChatGPT while addressing specific challenges and concerns associated with its application.

5.4. Normalized Matrix and Weight of Resilience Themes

The exploration within the “Resilience” category illuminates the pivotal themes that underpin the resilience of ChatGPT adoption, as meticulously outlined in Table 9. These themes, shaped by the discerning perspectives of respondents, collectively contribute to the overarching landscape of resilience in the utilization of ChatGPT. At the forefront is the dimension of solidifying ethical values (weight = 0.191), underscoring the paramount importance of ethical considerations in fortifying the resilience of ChatGPT adoption. This dimension reflects the commitment to upholding ethical standards and ensuring responsible usage. In addition, higher-level reasoning skills (weight = 0.184) follows closely, emphasizing the role of ChatGPT in fostering advanced cognitive abilities and critical thinking. The weights of the remaining resilience themes include academic integrity policies (weight = 0.179), transforming educative systems (weight = 0.146), acceptable usage in science (weight = 0.121), co-creation between humans and AI (weight = 0.116), and appropriate testing framework (weight = 0.063).
Furthermore, the computed values, including $\lambda_{max}$ (7.0136), $CI$ (0.002273), and $CR$ (0.001722 < 0.10), provide compelling evidence of the consistency within the comparison matrices employed in the analysis. These metrics not only validate the robustness of the analytical process but also underscore the reliability of the obtained ranking results. The weights obtained from this analysis offer insights into the multifaceted nature of the resilience dimension. Hence, educational stakeholders can leverage these weightings to tailor policies and strategies that bolster the resilience of ChatGPT adoption, fostering an environment where ethical considerations, cognitive development, academic integrity, transformative potential, and collaborative frameworks converge for responsible and impactful utilization.

5.5. Global Weights

Table 10 presents the global weights and rankings of the RRR dimensions concerning the ethical conundrum of ChatGPT utilization in education. Each element is assigned a global weight, reflecting its importance within the study’s context. The rankings of the RRR themes are discussed based on the study’s findings. Notably, solidifying ethical values (global weight = 0.08743) from the resilience category claims the top position in the ranking order, emphasizing the significant influence that ethical values hold in decision-making for ChatGPT utilization. The result indicates the importance of informed decisions by educational stakeholders in promoting ethical values within the educational system. Following closely is higher-level reasoning skills (global weight = 0.08458) from the resilience category, emphasizing the importance of improving human reasoning capabilities. This ranking guides stakeholders in prioritizing strategies to enhance higher-order cognitive and reasoning skills for effective ChatGPT utilization in education. Similarly, academic integrity policies (global weight = 0.08225) from the resilience category secure the third position, highlighting the importance of establishing policies for academic integrity.
Furthermore, in the risk category, privacy and confidentiality (global weight = 0.07736) claim the fourth rank, stressing the need to address privacy and confidentiality issues impacting users’ data. Safety and security concerns (global weight = 0.06834) from the risk category secure the fifth position, emphasizing the significance of addressing security-related issues in the ethical conundrum of ChatGPT utilization. Transformative educational systems (global weight = 0.06697) from the resilience category take the sixth spot, indicating the fundamental role of transformative educational systems in ChatGPT utilization. Increasing productivity and efficiency (global weight = 0.06453) is the only factor from the reward category in the top ten, ranking seventh, suggesting that ChatGPT enhances work efficiency. Academic integrity concern (global weight = 0.05739) from the risk category occupies the eighth position, highlighting concerns related to integrity emerging with the advent of ChatGPT. Acceptable usage in science (global weight = 0.05537) from the resilience category ranks ninth, indicating the possibility of implementing acceptable usage policies for ChatGPT. Finally, the co-creation between humans and AI (global weight = 0.05341) from the resilience category completes the top ten themes, showcasing respondents’ support for the co-existence of AI tools and humans. It is noteworthy that the resilience category dominates the top ten RRR elements, accounting for six themes, while the risk category contributes three themes, and the reward category features only one.

5.6. Discussion

The transformative integration of Gen-AI into educational settings presents a myriad of ethical considerations that extend beyond conventional risk assessments. Unlike technology adoption theories such as UTAUT [34,35,36,37,38,39,40,41,42,43,44] and TAM [45,46,47,48,49,50,51,52,53,54,55,56,57,58,59,60], this study employs the RRR framework (risk, reward, and resilience) to comprehensively explore the ethical landscape of ChatGPT utilization in education. Additionally, the study uses the AHP methodology, in contrast to PLS-SEM and CB-SEM [40,47,48,49,54,64] or qualitative text analysis [66,106,107,108,109,110,111], to prioritize ethical themes based on their global weights. The prioritization framework, as illustrated in Figure 6, highlights the paramount importance of resilience elements such as solidifying ethical values, enhancing higher-level reasoning skills, and transforming educational systems. Privacy and confidentiality emerge as critical risk concerns, along with safety and security issues. The research also highlights reward elements including increased productivity, personalized learning, and streamlined workflows.
These findings align with the work of Gammoh [112], who conducted a thematic analysis and classified risks associated with ChatGPT integration, such as plagiarism, compromised originality, overdependency on technology, diminished critical thinking skills, and reduced overall assignment quality. Similarly, Murtiningsih et al. [113] emphasized that excessive dependence on ChatGPT risks diminishing the quality of human resources in education. In addition, Gammoh [112] further suggested risk mitigation strategies, including using plagiarism detection tools, implementing measures to improve student assignments, raising awareness about the benefits and risks, and establishing clear guidelines. These recommendations reinforce the findings of this study, particularly supporting resilience themes that balance risk control with the efficient utilization of benefits (rewards), emphasizing solidifying ethical values. Moreover, Murtiningsih et al. [113] advocated for the development of strategies by educators to harness technological advancements effectively while fostering critical thinking skills in students, which supports the findings of this study that identify higher-level reasoning skills as key resilience elements.
Additionally, the emphasis on instructional guidance for engagement with Gen-AI tools [114] suggests that the literature favors the use of ChatGPT as a means to enhance resilience—the ability to balance risks and rewards. Similarly, the findings of Moorhouse [115]—which report that experienced teachers generally recognize the potential of Gen-AI to support their professional work—along with those of Espartinez [74]—who identified ethical tech guardians and balanced pedagogy integrators as key factors—further emphasize this point. Furthermore, Ogugua et al. [116] identified several recommendations, such as the integration of Gen-AI into the curriculum, defining specific goals for using Gen-AI tools in classes, establishing clear guidelines and boundaries, and emphasizing the importance of critical thinking and independent problem-solving skills. These studies collectively support several resilience themes highlighted in this study [74,114,116], including co-creation between humans and AI, academic integrity policies, solidifying ethical values, acceptable usage, transforming educational systems, and fostering higher-level reasoning skills.
The ranking provides valuable insights into the themes that should be prioritized when evaluating the ethical conundrums associated with ChatGPT utilization in education. The dominance of resilience elements underscores their critical role in adapting to and absorbing concerns related to ChatGPT utilization, shaping people's ability to use ChatGPT ethically and responsibly. However, the inclusion of elements from the risk and reward categories suggests a balanced approach that considers not only how individuals adapt to ChatGPT but also how risks are mitigated and rewards optimized. The findings highlight the complexity of ChatGPT utilization in academic environments, emphasizing the need for a holistic approach that considers the various facets of how individuals interact with the technology. This understanding can inform more effective and targeted efforts aimed at building ethical and responsible usage of ChatGPT and similar generative AI tools.
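To make the synthesis behind the ranking concrete, the short Python sketch below (not the authors' published code) multiplies each category weight from Table 6 by the corresponding local theme weight from Tables 7–9 to obtain global weights; sorting them reproduces the top of Table 10 up to rounding of the reported figures. The dictionary structure and rounding are illustrative choices only.

```python
# Minimal sketch of the AHP synthesis assumed here:
# global weight = category weight (Table 6) x local theme weight (Tables 7-9).
category_weights = {"Risk": 0.3279, "Reward": 0.2132, "Resilience": 0.4589}

local_weights = {
    ("Resilience", "Solidify ethical values"): 0.191,
    ("Resilience", "Higher-level reasoning skills"): 0.184,
    ("Resilience", "Academic integrity policies"): 0.179,
    ("Risk", "Privacy and confidentiality"): 0.236,
    ("Risk", "Safety and security concern"): 0.208,
    ("Resilience", "Transform educative systems"): 0.146,
    ("Reward", "Increase productivity and efficiency"): 0.303,
}

# Multiply and sort descending; this reproduces ranks 1-7 of Table 10
# up to rounding of the published weights.
global_weights = {
    theme: category_weights[cat] * w for (cat, theme), w in local_weights.items()
}
for rank, (theme, gw) in enumerate(
        sorted(global_weights.items(), key=lambda kv: kv[1], reverse=True), start=1):
    print(f"{rank}. {theme}: {gw:.5f}")
```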

6. Contribution and Implications

This section discusses the distinctive and substantial contributions of the study concerning the ethical challenges posed by ChatGPT in educational settings. This study uniquely explores risk, reward, and resilience dynamics within the landscape of ChatGPT implementation. By employing the AHP, it systematically evaluates and prioritizes the significance of the sub-themes under RRR, providing insights into their pivotal roles in shaping decisions surrounding ChatGPT utilization. The following sections expound on the novel contributions and implications that emerge from this study while also acknowledging its limitations and suggesting avenues for future research.

6.1. Theoretical Contributions

This study contributes to and expands the theoretical landscape of ChatGPT utilization and related Gen-AI tools. This approach is distinct from the theoretical models (UTAUT, TAM, etc.) employed by previous studies. Accordingly, this study addresses the need to evaluate and integrate the dynamics of risk, reward, and resilience within the ethical considerations of AI utilization. By exploring the interplay of these themes, it enhances our understanding of how the RRR framework can be applied objectively to decision-making in the context of ChatGPT utilization, enriching the theoretical foundations laid by existing research [17,80]. While the initial RRR framework offers a structured model for navigating complex problems, this study goes a step further by providing a framework capable of systematically weighing different risks and rewards, while accounting for resilience, through the AHP. This contribution stands as a valuable theoretical foundation for future research in this domain.
Furthermore, the framework's examination of three interconnected categories (risk, reward, and resilience) allows for a systematic assessment of their relative priorities, fostering comparisons and resulting in the establishment of a category ranking. The study's methodology involves expert pair-wise comparisons, allowing judgments to quantify the dominance of one element over another concerning specific attributes. By contrast, the existing RRR framework does not prescribe specific conclusions for policymakers but guides them in approaching complex problems, facilitating the inclusion of diverse perspectives in the decision-making process. This study, in turn, offers an objective decision-making tool by analyzing and weighing themes or factors quantitatively. Notably, the prioritization of RRR elements establishes a hierarchy that informs future research, policymaking, and implementation strategies for Gen-AI utilization in education.
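To illustrate the pair-wise comparison step described above, the following sketch derives priority weights from a single hypothetical 3 × 3 judgment matrix over the risk, reward, and resilience categories using the standard principal-eigenvector method and then applies Saaty's consistency test. The matrix entries are invented for illustration and are not the expert panel's actual judgments.

```python
import numpy as np

# Hypothetical 3x3 pairwise comparison matrix over (Risk, Reward, Resilience).
# Entry A[i, j] expresses how strongly element i dominates element j on the
# 1-9 Saaty scale (Table 4); these judgments are invented for illustration.
A = np.array([
    [1.0,   3.0, 1 / 2],
    [1 / 3, 1.0, 1 / 3],
    [2.0,   3.0, 1.0],
])

# Priority weights from the principal eigenvector, normalized to sum to 1.
eigenvalues, eigenvectors = np.linalg.eig(A)
k = np.argmax(eigenvalues.real)
weights = eigenvectors[:, k].real
weights = weights / weights.sum()

# Consistency check: CI = (lambda_max - n) / (n - 1) and CR = CI / RI,
# with RI = 0.58 for n = 3 (Table 5); CR < 0.10 is conventionally acceptable.
n = A.shape[0]
lambda_max = eigenvalues.real[k]
CI = (lambda_max - n) / (n - 1)
CR = CI / 0.58

print("weights:", np.round(weights, 4))   # roughly [0.33, 0.14, 0.53] for this matrix
print("lambda_max:", round(lambda_max, 4), "CI:", round(CI, 4), "CR:", round(CR, 4))
```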

6.2. Practical Implications

This study provides valuable practical implications for a range of stakeholders engaged in formulating policies within the educational sector. These practical implications extend throughout the educational landscape, offering actionable insights for entities such as organizations, government agencies, policymakers, and researchers. Therefore, by acknowledging the central role of Gen-AI and navigating through the complexities of risk, reward, and resilience, educational stakeholders can make well-informed decisions, develop policies with care, and devise strategies that strengthen individuals’ ability to effectively manage the ethical challenges posed by ChatGPT. To illustrate, the findings furnish actionable insights for educational leaders and decision-makers by pinpointing elements with the most significant influence related to ChatGPT utilization. This guidance aids in strategically shaping policies to enhance the ethical and responsible use of AI tools. Additionally, grasping the relative importance of themes can assist educational institutions in crafting more effective policies for promoting responsible ChatGPT usage, involving targeted investments in training and promotion initiatives.
Moreover, the study's prioritization of RRR elements, particularly the dominance of resilience themes in the top ten, provides valuable insights for practical decision-making in the utilization of ChatGPT in educational environments. This prioritization framework aids decision-makers by offering a hierarchy of themes that need attention. Policymakers, educators, and institutions can use these insights to develop targeted strategies, policies, and guidelines for responsible ChatGPT utilization in educational settings. Hence, employing the AHP provides a practical methodology for systematically evaluating and prioritizing these themes, supporting a structured analysis of Gen-AI tool utilization. Overall, the practical implication is that stakeholders should adopt a balanced approach, considering resilience, risks, and rewards when integrating ChatGPT into educational practices to ensure ethical, responsible, and effective utilization.
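Where several experts supply judgments, a common AHP practice is to merge their matrices into a single group matrix by element-wise geometric-mean aggregation (cf. Forman and Peniwati [105]) before deriving weights. The snippet below is a generic sketch of that practice with invented matrices, not a restatement of the exact aggregation procedure used in this study.

```python
import numpy as np

# Generic sketch: combine several experts' pairwise comparison matrices into
# one group matrix via the element-wise geometric mean (a common AHP practice,
# cf. Forman and Peniwati [105]). The two matrices below are invented examples.
expert_matrices = [
    np.array([[1.0,   3.0, 1 / 2],
              [1 / 3, 1.0, 1 / 3],
              [2.0,   3.0, 1.0]]),
    np.array([[1.0,   2.0, 1 / 3],
              [1 / 2, 1.0, 1 / 4],
              [3.0,   4.0, 1.0]]),
]

# The element-wise geometric mean preserves the reciprocal property A[j, i] = 1 / A[i, j].
group_matrix = np.prod(expert_matrices, axis=0) ** (1 / len(expert_matrices))
print(np.round(group_matrix, 3))
```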

6.3. Limitations and Future Work

The study is not exempt from limitations, and a transparent discussion of these constraints is imperative for guiding future research endeavors. These limitations and aspects of the methodology present opportunities for future exploration. Firstly, although the RRR elements were derived from the general literature, the findings may lack contextual generalizability, as they were not analyzed for specific educational institutions and participant demographics were not taken into account. A focus on particular institutions, countries, or demographic groups might therefore warrant further investigation. Hence, expanding this research to encompass a broader range of cultural and regional contexts could unveil variations in the significance of ethical concerns associated with ChatGPT utilization in educational settings, thereby enhancing the generalizability of the findings. Secondly, the methodology employed in this study relies on expert judgments, which are inherently subjective. Conducting a comparative analysis of the effectiveness of AHP against other decision-making methodologies in the realm of ChatGPT ethics could illuminate the strengths and weaknesses of different approaches. Furthermore, the study focused on seven themes from each category (risk, reward, and resilience), potentially overlooking other pertinent themes that could influence decision-making in ChatGPT utilization. Future research may explore additional themes that contribute to the ethical considerations surrounding ChatGPT implementation in education.
Additionally, this study has paved the way for a comprehensive exploration, underscoring the need to thoroughly investigate the adoption of ChatGPT and similar technologies from three critical perspectives. It is recommended that forthcoming studies examine the primary themes, namely risk, reward, and resilience, as individual outcome variables or themes within the adoption of LLMs. This approach will offer a more granular understanding of the multifaceted landscape surrounding LLM adoption. By dissecting these themes independently, researchers can gain deeper insights into the intricate dynamics that shape the utilization of ChatGPT. This, in turn, will facilitate the formulation of well-informed policies, ensuring the responsible and beneficial deployment of ChatGPT for societal advancement. For instance, elucidating the specific risks associated with ChatGPT adoption, exploring the resilience mechanisms embedded in its deployment, or assessing the tangible rewards can collectively inform further research that addresses the complex interplay of themes or factors in the utilization of ChatGPT.
Moreover, considering the mixed views among academics regarding ChatGPT, this study's findings may not fully capture how ethical concerns evolve over time. Developing dynamic models that account for the evolving nature of ethical considerations associated with ChatGPT utilization would provide a more realistic representation of the landscape. Therefore, while this study makes a substantial contribution to understanding the themes or factors influencing decision-making related to ChatGPT, it also underscores avenues for further research to address these limitations and propel the field forward. Finally, the elements extracted for risk, reward, and resilience could be updated as the literature develops. Future studies should therefore replicate this analysis to capture additional factors from the current literature, which will help consolidate the findings of this study and add further insight into ChatGPT utilization in educational settings.

6.4. Next Steps

Building on the insights gleaned from this study, our future research endeavors are poised to delve deeper into the themes influencing ChatGPT utilization within educational contexts. Our next step involves employing SEM to investigate the intricate relationships among the identified RRR elements. SEM offers a robust analytical framework that allows for the assessment of both observed and latent variables [117,118,119], providing a more nuanced perspective on the complex dynamics involved. To gauge the effectiveness of our model, we plan to collect empirical data directly from users or students engaged with ChatGPT in educational settings. This user-centric approach will enable us to measure the real-world impact of RRR elements surrounding ChatGPT utilization. Developing a tailored instrument for data collection will be crucial, allowing us to probe user experiences and perceptions related to ethical considerations and the risk, reward, and resilience themes.
Accordingly, the future research methodology should encompass various statistical techniques, with SEM taking center stage in validating the relationships between the structural elements. By employing SEM, we aim to ascertain the interplay between observed variables and latent constructs, providing a more comprehensive understanding of the factors influencing ChatGPT usage. Additionally, we will explore the application of regression models and artificial neural networks, similar to previous research [120,121,122,123], to further validate and complement future research efforts. These analytical tools will offer a multi-faceted approach to scrutinizing the complex relationships among factors shaping ChatGPT use in educational environments.
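As a purely hypothetical illustration of the regression and neural-network analyses envisaged here, the sketch below fits a linear regression and a small artificial neural network to simulated composite RRR scores predicting a usage-related outcome; the data, constructs, and effect sizes are invented and serve only to indicate the intended two-stage workflow.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import r2_score
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor

# Simulated survey data: composite risk, reward, and resilience scores on a
# 7-point scale and a usage-related outcome. All values and effect sizes are
# invented; a real study would use responses collected from students.
rng = np.random.default_rng(42)
n = 300
X = rng.uniform(1, 7, size=(n, 3))                       # [risk, reward, resilience]
y = 0.2 * X[:, 0] + 0.3 * X[:, 1] + 0.5 * X[:, 2] + rng.normal(0, 0.5, n)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Stage 1: linear regression for the hypothesized (linear) relationships.
linear = LinearRegression().fit(X_train, y_train)

# Stage 2: a small artificial neural network to probe non-linear effects.
ann = MLPRegressor(hidden_layer_sizes=(16,), max_iter=5000,
                   random_state=0).fit(X_train, y_train)

print("Linear R^2:", round(r2_score(y_test, linear.predict(X_test)), 3))
print("ANN R^2:   ", round(r2_score(y_test, ann.predict(X_test)), 3))
```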

7. Conclusions

This study delves into the complex landscape of ChatGPT utilization within educational environments, focusing on the ethical conundrums associated with its adoption. Employing an SLR and frequency analysis, we selected seven themes for each of the RRR components. Our study not only contributes a decision-making prioritization framework for educational stakeholders but also offers an understanding of the ethical considerations, providing valuable insights for policymakers and institutions navigating the integration of ChatGPT. Furthermore, our exploration of the risk, reward, and resilience themes, guided by the AHP, yielded critical insights. The results show that solidifying ethical values, higher-level reasoning skills, and academic integrity policies emerged as the top-ranking themes, emphasizing their paramount importance in decision-making for ChatGPT utilization. These findings inform a holistic understanding of the themes influencing ethical considerations.
In addition, the study’s practical implications extend to diverse stakeholders involved in educational policymaking. By acknowledging the intertwined dynamics of risk, reward, and resilience, institutions can make informed decisions, formulate cautious policies, and develop strategies to enhance ethical decision-making surrounding ChatGPT. Our research provides actionable insights for educational leaders and policymakers, guiding the creation of policies that promote responsible ChatGPT utilization. While this study contributes substantially to the theoretical foundations and practical considerations in the ethical implementation of ChatGPT in education, it is not without limitations. Future research endeavors should address these limitations, fostering a continuous dialogue and exploration of the multifaceted landscape surrounding ChatGPT utilization. In this ever-evolving domain, the study offers a valuable framework for decision-makers, researchers, and institutions navigating the ethical complexities of integrating ChatGPT into educational settings.

Author Contributions

Conceptualization, U.A.B. and M.S.S.; investigation, U.A.B. and R.S.; resources, M.S.S. and S.F.A.R.; writing—original draft preparation, U.A.B. and R.S.; writing—review and editing, U.A.B. and S.Y.; methodology, U.A.B. and R.S.; visualization, U.A.B. and S.Y.; supervision, M.S.S. and S.F.A.R.; project administration, M.S.S. and S.Y.; funding acquisition, M.S.S. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by Telekom Malaysia Research and Development under Project No. MMUI/220169 at Multimedia University Malaysia.

Institutional Review Board Statement

This work involved human subjects in its research. Approval of all ethical and experimental procedures and protocols was granted by the MMU Research Ethics Committee under Application No. EA0202023, and the study was performed in line with the Personal Data Protection Act 2010 (PDPA 2010) and other relevant research ethics requirements.

Informed Consent Statement

Informed consent was obtained from all participants involved in the study.

Data Availability Statement

The data supporting this study are available on Mendeley Data and can be accessed at https://data.mendeley.com/preview/jfkbrx8m2x. The file covers all relevant information, computation, and findings that underlie the results presented in this study, allowing for further analysis and replication of the research.

Acknowledgments

This work was made possible through support rendered by Telekom Malaysia Research and Development at Multimedia University Malaysia. Gratitude is also extended to all individuals and entities whose contributions were instrumental in the successful execution of this research endeavor.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. George, B.; Wooden, O. Managing the strategic transformation of higher education through artificial intelligence. Adm. Sci. 2023, 13, 196. [Google Scholar] [CrossRef]
  2. Kumar, D. How Emerging Technologies are Transforming Education and Research: Trends, Opportunities, and Challenges. In Infinite Horizons: Exploring the Unknown; CIRS Publication: Patna, India, 2023; p. 89. [Google Scholar]
  3. Tan, S. Harnessing Artificial Intelligence for innovation in education. In Learning Intelligence: Innovative and Digital Transformative Learning Strategies: Cultural and Social Engineering Perspectives; Springer Nature: Singapore, 2023; pp. 335–363. [Google Scholar]
  4. Natriello, G.; Chae, H. The Paradox of Learning in the Intelligence Age: Creating a New Learning Ecosystem to Meet the Challenge. In Bridging Human Intelligence and Artificial Intelligence; Springer International Publishing: Cham, Switzerland, 2022; pp. 287–300. [Google Scholar]
  5. Michel-Villarreal, R.; Vilalta-Perdomo, E.; Salinas-Navarro, D.; Thierry-Aguilera, R.; Gerardou, F. Challenges and opportunities of generative AI for higher education as explained by ChatGPT. Educ. Sci. 2023, 13, 856. [Google Scholar] [CrossRef]
  6. Farrokhnia, M.; Banihashem, S.; Noroozi, O.; Wals, A. A SWOT analysis of ChatGPT: Implications for educational practice and research. Innov. Educ. Teach. Int. 2023, 61, 460–474. [Google Scholar] [CrossRef]
  7. Sallam, M. ChatGPT utility in healthcare education, research, and practice: Systematic review on the promising perspectives and valid concerns. Healthcare 2023, 11, 887. [Google Scholar] [CrossRef] [PubMed]
  8. Cotton, D.; Cotton, P.; Shipway, J. Chatting and cheating: Ensuring academic integrity in the era of ChatGPT. Innov. Educ. Teach. Int. 2023, 61, 228–239. [Google Scholar] [CrossRef]
  9. Su, J.; Yang, W. Unlocking the power of ChatGPT: A framework for applying generative AI in education. ECNU Rev. Educ. 2023, 6, 355–366. [Google Scholar]
  10. Liebrenz, M.; Schleifer, R.; Buadze, A.; Bhugra, D.; Smith, A. Generating scholarly content with ChatGPT: Ethical challenges for medical publishing. Lancet Digit. Health 2023, 5, 105–106. [Google Scholar] [CrossRef] [PubMed]
  11. Tlili, A.; Shehata, B.; Adarkwah, M.; Bozkurt, A.; Hickey, D.; Huang, R.; Agyemang, B. What if the devil is my guardian angel: ChatGPT as a case study of using chatbots in education. Smart Learn. Environ. 2023, 10, 15. [Google Scholar] [CrossRef]
  12. Lee, H. The rise of ChatGPT: Exploring its potential in medical education. Anat. Sci. Educ. 2023, 17, 926–931. [Google Scholar] [CrossRef]
  13. Pavlik, J. Collaborating with ChatGPT: Considering the Implications of Generative Artificial Intelligence. J. Mass Commun. Educ. 2023, 78, 84–93. [Google Scholar]
  14. Lund, B.; Wang, T.; Mannuru, N.; Nie, B.; Shimray, S.; Wang, Z. ChatGPT and a new academic reality: Artificial Intelligence-written research papers and the ethics of the large language models in scholarly publishing. J. Assoc. Inf. Sci. Technol. 2023, 74, 570–581. [Google Scholar] [CrossRef]
  15. Dwivedi, Y.; Kshetri, N.; Hughes, L.; Slade, E.; Jeyaraj, A.; Kar, A.; Wright, R. “So what if ChatGPT wrote it?” Multidisciplinary perspectives on opportunities, challenges and implications of generative conversational AI for research, practice and policy. Int. J. Inf. Manag. 2023, 71, 102642. [Google Scholar] [CrossRef]
  16. Lim, W.; Gunasekara, A.; Pallant, J.; Pallant, J.; Pechenkina, E. Generative AI and the future of education: Ragnarök or reformation? A paradoxical perspective from management educators. Int. J. Manag. Educ. 2023, 21, 100790. [Google Scholar] [CrossRef]
  17. Bukar, U.A.; Sayeed, M.S.; Razak, S.F.A.; Yogarayan, S.; Amodu, O.A. An integrative decision-making framework to guide policies on regulating ChatGPT usage. PeerJ Comput. Sci. 2024, 10, e1845. [Google Scholar] [CrossRef]
  18. Bukar, U.A.; Jabar, M.A.; Sidi, F.; Nor, R.N.H.B.; Abdullah, S.; Othman, M. Crisis informatics in the context of social media crisis communication: Theoretical models, taxonomy, and open issues. IEEE Access 2020, 8, 185842–185869. [Google Scholar] [CrossRef]
  19. Zhang, L.; Glänzel, W. Proceeding papers in journals versus the “regular” journal publications. J. Inf. 2012, 6, 88–96. [Google Scholar] [CrossRef]
  20. Zhang, Y.; Jia, X. Republication of conference papers in journals? Learn. Publ. 2013, 26, 189–196. [Google Scholar] [CrossRef]
  21. Montesi, M.; Owen, J.M. From conference to journal publication: How conference papers in software engineering are extended for publication in journals. J. Am. Soc. Inf. Sci. Technol. 2008, 59, 816–829. [Google Scholar] [CrossRef]
  22. Perkins, M. Academic Integrity considerations of AI Large Language Models in the post-pandemic era: ChatGPT and beyond. J. Univ. Teach. Learn. Pract. 2023, 20, 07. [Google Scholar] [CrossRef]
  23. Kooli, C. Chatbots in education and research: A critical examination of ethical implications and solutions. Sustainability 2023, 15, 5614. [Google Scholar] [CrossRef]
  24. Qasem, F. ChatGPT in scientific and academic research: Future fears and reassurances. Libr. Tech News 2023, 40, 30–32. [Google Scholar] [CrossRef]
  25. Ariyaratne, S.; Iyengar, K.; Nischal, N.; Chitti Babu, N.; Botchu, R. A comparison of ChatGPT-generated articles with human-written articles. Skelet. Radiol. 2023, 52, 1755–1758. [Google Scholar] [CrossRef]
  26. Yan, D. Impact of ChatGPT on learners in a L2 writing practicum: An exploratory investigation. Educ. Inf. Technol. 2023, 28, 13943–13967. [Google Scholar] [CrossRef]
  27. Ray, P. ChatGPT: A comprehensive review on background, applications, key challenges, bias, ethics, limitations and future scope. Internet Things-Cyber-Phys. Syst. 2023, 3, 121–154. [Google Scholar] [CrossRef]
  28. Taecharungroj, V. “What Can ChatGPT Do?” Analyzing Early Reactions to the Innovative AI Chatbot on Twitter. Big Data Cogn. Comput. 2023, 7, 35. [Google Scholar] [CrossRef]
  29. Grünebaum, A.; Chervenak, J.; Pollet, S.; Katz, A.; Chervenak, F. The exciting potential for ChatGPT in obstetrics and gynecology. Am. J. Obstet. Gynecol. 2023, 228, 696–705. [Google Scholar] [CrossRef]
  30. Cox, C.; Tzoc, E. ChatGPT: Implications for academic libraries. Coll. Res. Libr. News 2023, 84, 99. [Google Scholar] [CrossRef]
  31. Karaali, G. Artificial Intelligence, Basic Skills, and Quantitative Literacy. Numeracy 2023, 16, 9. [Google Scholar] [CrossRef]
  32. Jungwirth, D.; Haluza, D. Artificial intelligence and public health: An exploratory study. Int. J. Environ. Res. Public Health 2023, 20, 4541. [Google Scholar] [CrossRef]
  33. Geerling, W.; Mateer, G.; Wooten, J.; Damodaran, N. ChatGPT has aced the test of understanding in college economics: Now what? Am. Econ. 2023, 68, 233–245. [Google Scholar] [CrossRef]
  34. Supianto; Widyaningrum, R.; Wulandari, F.; Zainudin, M.; Athiyallah, A.; Rizqa, M. Exploring the factors affecting ChatGPT acceptance among university students. Multidiscip. Sci. J. 2024, 6, 2024273. [Google Scholar] [CrossRef]
  35. Alshammari, S.H.; Alshammari, M.H. Factors Affecting the Adoption and Use of ChatGPT in Higher Education. Int. J. Inf. Commun. Technol. Educ. 2024, 20, 1–16. [Google Scholar] [CrossRef]
  36. Strzelecki, A.; Cicha, K.; Rizun, M.; Rutecka, P. Acceptance and use of ChatGPT in the academic community. Educ. Inf. Technol. 2024, 1–26. [Google Scholar] [CrossRef]
  37. Strzelecki, A.; ElArabawy, S. Investigation of the moderation effect of gender and study level on the acceptance and use of generative AI by higher education students: Comparative evidence from Poland and Egypt. Br. J. Educ. Technol. 2024, 55, 1209–1230. [Google Scholar] [CrossRef]
  38. Elkefi, S.; Tounsi, A.; Kefi, M.A. Use of ChatGPT for education by engineering students in developing countries: A mixed-methods study. Behav. Inf. Technol. 2024, 1–17. [Google Scholar] [CrossRef]
  39. Bouteraa, M.; Bin-Nashwan, S.A.; Al-Daihani, M.; Dirie, K.A.; Benlahcene, A.; Sadallah, M.; Zaki, H.O.; Lada, S.; Ansar, R.; Fook, L.M.; et al. Understanding the diffusion of AI-generative (ChatGPT) in higher education: Does students’ integrity matter? Comput. Hum. Behav. Rep. 2024, 14, 100402. [Google Scholar] [CrossRef]
  40. Bhat, M.A.; Tiwari, C.K.; Bhaskar, P.; Khan, S.T. Examining ChatGPT adoption among educators in higher educational institutions using extended UTAUT model. J. Inf. Commun. Ethics Soc. 2024; ahead-of-print. [Google Scholar]
  41. Arthur, F.; Salifu, I.; Abam Nortey, S. Predictors of higher education students’ behavioural intention and usage of ChatGPT: The moderating roles of age, gender and experience. Interact. Learn. Environ. 2024, 1–27. [Google Scholar] [CrossRef]
  42. Grassini, S.; Aasen, M.L.; Møgelvang, A. Understanding University Students’ Acceptance of ChatGPT: Insights from the UTAUT2 Model. Appl. Artif. Intell. 2024, 38, 2371168. [Google Scholar] [CrossRef]
  43. Salifu, I.; Arthur, F.; Arkorful, V.; Abam Nortey, S.; Solomon Osei-Yaw, R. Economics students’ behavioural intention and usage of ChatGPT in higher education: A hybrid structural equation modelling-artificial neural network approach. Cogent Soc. Sci. 2024, 10, 2300177. [Google Scholar] [CrossRef]
  44. Elshaer, I.A.; Hasanein, A.M.; Sobaih, A.E.E. The Moderating Effects of Gender and Study Discipline in the Relationship between University Students’ Acceptance and Use of ChatGPT. Eur. J. Investig. Heal. Psychol. Educ. 2024, 14, 1981–1995. [Google Scholar] [CrossRef]
  45. Gustilo, L.; Ong, E.; Lapinid, M.R. Algorithmically-driven writing and academic integrity: Exploring educators’ practices, perceptions, and policies in AI era. Int. J. Educ. Integr. 2024, 20, 3. [Google Scholar] [CrossRef]
  46. Kajiwara, Y.; Kawabata, K. AI literacy for ethical use of chatbot: Will students accept AI ethics? Comput. Educ. Artif. Intell. 2024, 6, 100251. [Google Scholar] [CrossRef]
  47. Cambra-Fierro, J.J.; Blasco, M.F.; López-Pérez, M.E.E.; Trifu, A. ChatGPT adoption and its influence on faculty well-being: An empirical research in higher education. Educ. Inf. Technol. 2024, 1–22. [Google Scholar] [CrossRef]
  48. Tiwari, C.K.; Bhat, M.A.; Khan, S.T.; Subramaniam, R.; Khan, M.A.I. What drives students toward ChatGPT? An investigation of the factors influencing adoption and usage of ChatGPT. Interact. Technol. Smart Educ. 2024, 21, 333–355. [Google Scholar] [CrossRef]
  49. Abdalla, R.A. Examining awareness, social influence, and perceived enjoyment in the TAM framework as determinants of ChatGPT. Personalization as a moderator. J. Open Innov. Technol. Mark. Complex. 2024, 10, 100327. [Google Scholar] [CrossRef]
  50. Abdaljaleel, M.; Barakat, M.; Alsanafi, M.; Salim, N.A.; Abazid, H.; Malaeb, D.; Mohammed, A.H.; Hassan, B.A.R.; Wayyes, A.M.; Farhan, S.S.; et al. A multinational study on the factors influencing university students’ attitudes and usage of ChatGPT. Sci. Rep. 2024, 14, 1983. [Google Scholar] [CrossRef] [PubMed]
  51. Sallam, M.; Salim, N.A.; Barakat, M.; Al-Mahzoum, K.; Al-Tammemi, A.B.; Malaeb, D.; Hallit, R.; Hallit, S. Assessing Health Students’ Attitudes and Usage of ChatGPT in Jordan: Validation Study. JMIR Med Educ. 2023, 9, e48254. [Google Scholar] [CrossRef]
  52. Sevnarayan, K.; Potter, M.A. Generative Artificial Intelligence in distance education: Transformations, challenges, and impact on academic integrity and student voice. J. Appl. Learn. Teach. 2024, 7, 104–114. [Google Scholar]
  53. García-Alonso, E.M.; León-Mejía, A.C.; Sánchez-Cabrero, R.; Guzmán-Ordaz, R. Training and Technology Acceptance of ChatGPT in University Students of Social Sciences: A Netcoincidental Analysis. Behav. Sci. 2024, 14, 612. [Google Scholar] [CrossRef]
  54. Masa’deh, R.; Majali, S.A.L.; Alkhaffaf, M.; Thurasamy, R.; Almajali, D.; Altarawneh, K.; Al-Sherideh, A.; Altarawni, I. Antecedents of adoption and usage of ChatGPT among Jordanian university students: Empirical study. Int. J. Data Netw. Sci. 2024, 8, 1099–1110. [Google Scholar] [CrossRef]
  55. Dahri, N.A.; Yahaya, N.; Al-Rahmi, W.M.; Aldraiweesh, A.; Alturki, U.; Almutairy, S.; Shutaleva, A.; Soomro, R.B. Extended TAM based acceptance of AI-Powered ChatGPT for supporting metacognitive self-regulated learning in education: A mixed-methods study. Heliyon 2024, 10, e29317. [Google Scholar] [CrossRef] [PubMed]
  56. Rahman, M.S.; Sabbir, M.M.; Zhang, J.; Moral, I.H.; Hossain, G.M.S. Examining students’ intention to use ChatGPT: Does trust matter? Australas. J. Educ. Technol. 2023, 39, 51–71. [Google Scholar] [CrossRef]
  57. Maheshwari, G. Factors influencing students’ intention to adopt and use ChatGPT in higher education: A study in the Vietnamese context. Educ. Inf. Technol. 2024, 29, 12167–12195. [Google Scholar] [CrossRef]
  58. Awal, M.R.; Haque, M.E. Revisiting university students’ intention to accept AI-Powered chatbot with an integration between TAM and SCT: A south Asian perspective. J. Appl. Res. High. Educ. 2024; ahead-of-print. [Google Scholar] [CrossRef]
  59. Duong, C.D.; Vu, T.N.; Ngo, T.V.N. Applying a modified technology acceptance model to explain higher education students’ usage of ChatGPT: A serial multiple mediation model with knowledge sharing as a moderator. Int. J. Manag. Educ. 2023, 21, 100883. [Google Scholar] [CrossRef]
  60. Alrishan, A.M.H. Determinants of Intention to Use ChatGPT for Professional Development among Omani EFL Pre-service Teachers. Int. J. Learn. Teach. Educ. Res. 2023, 22, 187–209. [Google Scholar] [CrossRef]
  61. Duong, C.D.; Nguyen, T.H.; Ngo, T.V.N.; Dao, V.T.; Do, N.D.; Pham, T.V. Exploring higher education students’ continuance usage intention of ChatGPT: Amalgamation of the information system success model and the stimulus-organism-response paradigm. Int. J. Inf. Learn. Technol. 2024; ahead-of-print. [Google Scholar]
  62. Mandai, K.; Tan, M.J.H.; Padhi, S.; Pang, K.T. A Cross-Era Discourse on ChatGPT’s Influence in Higher Education through the Lens of John Dewey and Benjamin Bloom. Educ. Sci. 2024, 14, 64. [Google Scholar] [CrossRef]
  63. Jochim, J.; Lenz-Kesekamp, V.K. Teaching and testing in the era of text-generative AI: Exploring the needs of students and teachers. Inf. Learn. Sci. 2024; ahead-of-print. [Google Scholar]
  64. Abdalla, A.A.; Bhat, M.A.; Tiwari, C.K.; Khan, S.T.; Wedajo, A.D. Exploring ChatGPT adoption among business and management students through the lens of diffusion of Innovation Theory. Comput. Educ. Artif. Intell. 2024, 7, 100257. [Google Scholar] [CrossRef]
  65. Mahmud, A.; Sarower, A.H.; Sohel, A.; Assaduzzaman, M.; Bhuiyan, T. Adoption of ChatGPT by university students for academic purposes: Partial least square, artificial neural network, deep neural network and classification algorithms approach. Array 2024, 21, 100339. [Google Scholar] [CrossRef]
  66. Gupta, P.; Mahajan, R.; Badhera, U.; Kushwaha, P. Integrating generative AI in management education: A mixed-methods study using social construction of technology theory. Int. J. Manag. Educ. 2024, 22, 101017. [Google Scholar] [CrossRef]
  67. Al-Mughairi, H.; Bhaskar, P. Exploring the factors affecting the adoption AI techniques in higher education: Insights from teachers’ perspectives on ChatGPT. J. Res. Innov. Teach. Learn. 2024; ahead-of-print. [Google Scholar]
  68. Qu, K.; Wu, X. ChatGPT as a CALL tool in language education: A study of hedonic motivation adoption models in English learning environments. Educ. Inf. Technol. 2024, 1–33. [Google Scholar] [CrossRef]
  69. Crawford, J.; Allen, K.A.; Pani, B.; Cowling, M. When artificial intelligence substitutes humans in higher education: The cost of loneliness, student success, and retention. Stud. High. Educ. 2024, 49, 883–897. [Google Scholar] [CrossRef]
  70. Ngo, T.T.A.; Tran, T.T.; An, G.K.; Nguyen, P.T. ChatGPT for Educational Purposes: Investigating the Impact of Knowledge Management Factors on Student Satisfaction and Continuous Usage. IEEE Trans. Learn. Technol. 2024, 17, 1367–1378. [Google Scholar] [CrossRef]
  71. Okulu, H.Z.; Muslu, N. Designing a course for pre-service science teachers using ChatGPT: What ChatGPT brings to the table. Interact. Learn. Environ. 2024, 1–18. [Google Scholar] [CrossRef]
  72. Jangjarat, K.; Kraiwanit, T.; Limna, P.; Sonsuphap, R. Public Perceptions towards ChatGPT as the Robo-Assistant. Online J. Commun. Media Technol. 2023, 13, e202338. [Google Scholar] [CrossRef]
  73. Komba, M.M. The influence of ChatGPT on digital learning: Experience among university students. Glob. Knowledge, Mem. Commun. 2024; ahead-of-print. [Google Scholar]
  74. Espartinez, A.S. Exploring student and teacher perceptions of ChatGPT use in higher education: A Q-Methodology study. Comput. Educ. Artif. Intell. 2024, 7, 100264. [Google Scholar] [CrossRef]
  75. Sun, D.; Boudouaia, A.; Zhu, C.; Li, Y. Would ChatGPT-facilitated programming mode impact college students’ programming behaviors, performances, and perceptions? An empirical study. Int. J. Educ. Technol. High. Educ. 2024, 21, 14. [Google Scholar] [CrossRef]
  76. Sánchez-Ruiz, L.M.; Moll-López, S.; Nuñez-Pérez, A.; Moraño-Fernández, J.A.; Vega-Fleitas, E. ChatGPT Challenges Blended Learning Methodologies in Engineering Education: A Case Study in Mathematics. Appl. Sci. 2023, 13, 6039. [Google Scholar] [CrossRef]
  77. Mohammed, M.; Kumar, N.; Zawiah, M.; Al-Ashwal, F.Y.; Bala, A.A.; Lawal, B.K.; Wada, A.S.; Halboup, A.; Muhammad, S.; Ahmad, R.; et al. Psychometric Properties and Assessment of Knowledge, Attitude, and Practice Towards ChatGPT in Pharmacy Practice and Education: A Study Protocol. J. Racial Ethn. Health Disparities 2024, 11, 2284–2293. [Google Scholar] [CrossRef]
  78. Bukar, U.A.; Sayeed, M.S.; Fatimah Abdul Razak, S.; Yogarayan, S.; Sneesl, R. Decision-Making Framework for the Utilization of Generative Artificial Intelligence in Education: A Case Study of ChatGPT. IEEE Access 2024, 12, 95368–95389. [Google Scholar] [CrossRef]
  79. Adams, D.; Chuah, K.M.; Devadason, E.; Azzis, M.S.A. From novice to navigator: Students’ academic help-seeking behaviour, readiness, and perceived usefulness of ChatGPT in learning. Educ. Inf. Technol. 2023, 1–18. [Google Scholar] [CrossRef]
  80. Roberts, A. Risk, reward, and resilience framework: Integrative policy making in a complex world. J. Int. Econ. Law 2023, 26, 233–265. [Google Scholar] [CrossRef]
  81. Eggmann, F.; Weiger, R.; Zitzmann, N.; Blatz, M. Implications of large language models such as ChatGPT for dental medicine. J. Esthet. Restor. Dent. 2023, 35, 1098–1102. [Google Scholar] [CrossRef]
  82. Victor, B.; Kubiak, S.; Angell, B.; Perron, B. Time to Move Beyond the ASWB Licensing Exams: Can Generative Artificial Intelligence Offer a Way Forward for Social Work? Res. Soc. Work Pract. 2023, 33, 511–517. [Google Scholar] [CrossRef]
  83. Short, C.; Short, J. The artificially intelligent entrepreneur: ChatGPT, prompt engineering, and entrepreneurial rhetoric creation. J. Bus. Ventur. Insights 2023, 19, e00388. [Google Scholar] [CrossRef]
  84. Haluza, D.; Jungwirth, D. Artificial Intelligence and Ten Societal Megatrends: An Exploratory Study Using GPT-3. Systems 2023, 11, 120. [Google Scholar] [CrossRef]
  85. Béné, C.; Wood, R.G.; Newsham, A.; Davies, M. Resilience: New utopia or new tyranny? Reflection about the potentials and limits of the concept of resilience in relation to vulnerability reduction programmes. IDS Work. Pap. 2012, 2012, 1–61. [Google Scholar] [CrossRef]
  86. Halaweh, M. ChatGPT in education: Strategies for responsible implementation. Contemp. Educ. Technol. 2023, 15, ep421. [Google Scholar] [CrossRef]
  87. Gupta, K.; Bhaskar, P.; Singh, S. Prioritization of factors influencing employee adoption of e-government using the analytic hierarchy process. J. Syst. Inf. Technol. 2017, 19, 116–137. [Google Scholar] [CrossRef]
  88. Canco, I.; Kruja, D.; Iancu, T. AHP, a reliable method for quality decision making: A case study in business. Sustainability 2021, 13, 13932. [Google Scholar] [CrossRef]
  89. Felice, F.; Deldoost, M.; Faizollahi, M.; Petrillo, A. Performance measurement model for the supplier selection based on AHP. Int. J. Eng. Bus. Manag. 2015, 7, 17. [Google Scholar] [CrossRef]
  90. Jurenka, R.; Cagáňová, D.; Špirková, D. Application of AHP method in decision-making process. In Smart Technology Trends in Industrial and Business Management; Springer: Cham, Switzerland, 2019; pp. 3–15. [Google Scholar]
  91. Singh, R. Prioritizing the factors for coordinated supply chain using analytic hierarchy process (AHP). Meas. Bus. Excell. 2013, 17, 80–97. [Google Scholar] [CrossRef]
  92. Sneesl, R.; Jusoh, Y.; Jabar, M.; Abdullah, S.; Bukar, U. Factors Affecting the Adoption of IoT-Based Smart Campus: An Investigation Using Analytical Hierarchical Process (AHP). Sustainability 2022, 14, 8359. [Google Scholar] [CrossRef]
  93. Sharma, M.; Gupta, R.; Acharya, P. Prioritizing the critical factors of cloud computing adoption using multi-criteria decision-making techniques. Glob. Bus. Rev. 2020, 21, 142–161. [Google Scholar] [CrossRef]
  94. Saaty, T. The Analytic Hierarchy Process; McGraw-Hill International: New York, NY, USA, 1980. [Google Scholar]
  95. Douligeris, C.; Pereira, I. A telecommunications quality study using the analytic hierarchy process. IEEE J. Sel. Areas Commun. 1994, 12, 241–250. [Google Scholar] [CrossRef]
  96. Abduh, M.; Omar, M.A. Islamic-bank selection criteria in Malaysia: An AHP approach. Bus. Intell. J. 2012, 5, 271–281. [Google Scholar]
  97. Melillo, P.; Pecchia, L. What is the appropriate sample size to run analytic hierarchy process in a survey-based research. In Proceedings of the International Symposium on the Analytic Hierarchy Process, London, UK, 4–7 August 2016; pp. 4–8. [Google Scholar]
  98. Şahin, M.; Yurdugül, H. A content analysis study on the use of analytic hierarchy process in educational studies. J. Meas. Eval. Educ. Psychol. 2018, 9, 376–392. [Google Scholar] [CrossRef]
  99. Darko, A.; Chan, A.P.C.; Ameyaw, E.E.; Owusu, E.K.; Pärn, E.; Edwards, D.J. Review of application of analytic hierarchy process (AHP) in construction. Int. J. Constr. Manag. 2019, 19, 436–452. [Google Scholar] [CrossRef]
  100. Raišienė, A.G.; Raišys, S.J. Business customer satisfaction with B2B consulting services: AHP-based criteria for a new perspective. Sustainability 2022, 14, 7437. [Google Scholar] [CrossRef]
  101. Dias Jr, A.; Ioannou, P.G. Company and project evaluation model for privately promoted infrastructure projects. J. Constr. Eng. Manag. 1996, 122, 71–82. [Google Scholar] [CrossRef]
  102. Duke, J.M.; Aull-Hyde, R. Identifying public preferences for land preservation using the analytic hierarchy process. Ecol. Econ. 2002, 42, 131–145. [Google Scholar] [CrossRef]
  103. Shrestha, R.K.; Alavalapati, J.R.; Kalmbacher, R.S. Exploring the potential for silvopasture adoption in south-central Florida: An application of SWOT–AHP method. Agric. Syst. 2004, 81, 185–199. [Google Scholar] [CrossRef]
  104. Saaty, T. Decision making, scaling, and number crunching. Decis. Sci. 1989, 20, 404–409. [Google Scholar] [CrossRef]
  105. Forman, E.; Peniwati, K. Aggregating individual judgments and priorities with the analytic hierarchy process. Eur. J. Oper. Res. 1998, 108, 165–169. [Google Scholar] [CrossRef]
  106. Gorichanaz, T. Accused: How students respond to allegations of using ChatGPT on assessments. Learn. Res. Pract. 2023, 9, 183–196. [Google Scholar] [CrossRef]
  107. Yang, S.; Dong, Y.; Yu, Z.G. ChatGPT in Education: Ethical Considerations and Sentiment Analysis. Int. J. Inf. Commun. Technol. Educ. 2024, 20, 1–19. [Google Scholar] [CrossRef]
  108. Naing, S.Z.S.; Udomwong, P. Public Opinions on ChatGPT: An Analysis of Reddit Discussions by Using Sentiment Analysis, Topic Modeling, and SWOT Analysis. Data Intell. 2024, 6, 344–374. [Google Scholar] [CrossRef]
  109. Mamo, Y.; Crompton, H.; Burke, D.; Nickel, C. Higher Education Faculty Perceptions of ChatGPT and the Influencing Factors: A Sentiment Analysis of X. TechTrends 2024, 68, 520–534. [Google Scholar] [CrossRef]
  110. Rejeb, A.; Rejeb, K.; Appolloni, A.; Treiblmaier, H.; Iranmanesh, M. Exploring the impact of ChatGPT on education: A web mining and machine learning approach. Int. J. Manag. Educ. 2024, 22, 100932. [Google Scholar] [CrossRef]
  111. Bukar, U.A.; Sayeed, M.S.; Razak, S.F.A.; Yogarayan, S.; Amodu, O.A.; Raja Mahmood, R.A. Text Analysis on Early Reactions to ChatGPT as a Tool for Academic Progress or Exploitation. SN Comput. Sci. 2024, 5, 366. [Google Scholar] [CrossRef]
  112. Gammoh, L.A. ChatGPT in academia: Exploring university students’ risks, misuses, and challenges in Jordan. J. Furth. High. Educ. 2024, 48, 608–624. [Google Scholar] [CrossRef]
  113. Murtiningsih, S.; Sujito, A.; Soe, K.K. Challenges of using ChatGPT in education: A digital pedagogy analysis. Int. J. Eval. Res. Educ. 2024, 13, 3466–3473. [Google Scholar] [CrossRef]
  114. Reddy, M.R.; Walter, N.G.; Sevryugina, Y.V. Implementation and Evaluation of a ChatGPT-Assisted Special Topics Writing Assignment in Biochemistry. J. Chem. Educ. 2024, 101, 2740–2748. [Google Scholar] [CrossRef]
  115. Moorhouse, B.L. Beginning and first-year language teachers’ readiness for the generative AI age. Comput. Educ. Artif. Intell. 2024, 6, 100201. [Google Scholar] [CrossRef]
  116. Ogugua, D.; Yoon, S.N.; Lee, D. Academic Integrity in a Digital Era: Should the Use of ChatGPT Be Banned in Schools? Glob. Bus. Financ. Rev. 2023, 28, 1–10. [Google Scholar]
  117. Hair, J.; Sarstedt, M.; Ringle, C.; Mena, J. An assessment of the use of partial least squares structural equation modeling in marketing research. J. Acad. Mark. Sci. 2012, 40, 414–433. [Google Scholar] [CrossRef]
  118. Hair, J.; Risher, J.; Sarstedt, M.; Ringle, C. When to use and how to report the results of PLS-SEM. Eur. Bus. Rev. 2019, 31, 2–24. [Google Scholar] [CrossRef]
  119. Hair, J.F., Jr.; Hult, G.T.M.; Ringle, C.M.; Sarstedt, M.; Danks, N.P.; Ray, S. Partial Least Squares Structural Equation Modeling (PLS-SEM) Using R: A Workbook; Springer Nature: Singapore, 2021. [Google Scholar]
  120. Sohaib, O.; Hussain, W.; Asif, M.; Ahmad, M.; Mazzara, M. A PLS-SEM neural network approach for understanding cryptocurrency adoption. IEEE Access 2019, 8, 13138–13150. [Google Scholar] [CrossRef]
  121. Sarstedt, M.; Liu, Y. Advanced marketing analytics using partial least squares structural equation modeling (PLS-SEM). J. Mark. Anal. 2023, 12, 1–5. [Google Scholar] [CrossRef]
  122. Bukar, U.A.; Sidi, F.; Jabar, M.A.; Nor, R.N.H.B.; Abdullah, S.; Ishak, I. A Multistage Analysis of Predicting Public Resilience of Impactful Social Media Crisis Communication in Flooding Emergencies. IEEE Access 2022, 10, 57266–57282. [Google Scholar] [CrossRef]
  123. Sneesl, R.; Jusoh, Y.Y.; Jabar, M.A.; Abdullah, S. Examining IoT-Based Smart Campus Adoption Model: An Investigation Using Two-Stage Analysis Comprising Structural Equation Modelling and Artificial Neural Network. IEEE Access 2023, 11, 125995–126026. [Google Scholar] [CrossRef]
Figure 1. Taxonomy of research areas in the literature.
Figure 2. Theoretical models from the existing literature.
Figure 3. Conceptual framework based on hierarchical structure of the AHP.
Figure 4. Research design and process [17,78].
Figure 5. Phases of AHP approach.
Figure 6. Framework of RRR ranking for decision-making.
Table 1. Risk ethical themes based on frequency count.

| S/N | Code | Risk Themes | Frequency Count | Related Themes |
|---|---|---|---|---|
| 1 | RIS1 | Infodemics and misinformation | 18 | Quality of output, inaccuracy, nonsense content, data not apparently updated; limited knowledge, lack of originality |
| 2 | RIS2 | Bias response | 08 | |
| 3 | RIS3 | Plagiarism | 08 | |
| 4 | RIS4 | Privacy and confidentiality | 07 | Data confidentiality |
| 5 | RIS5 | Academic integrity concern | 07 | |
| 6 | RIS6 | Risk hallucination through manipulation and mislead | 07 | Deception |
| 7 | RIS7 | Safety and security concern | 10 | Cybersecurity concerns |
Table 2. Reward themes of ChatGPT based on frequency count.

| S/N | Code | Reward Themes | Frequency Count | Related Themes |
|---|---|---|---|---|
| 1 | REW1 | Question answering | 10 | Provide feedback, prompt writing, collaboration and friendship, and increased student engagement |
| 2 | REW2 | Dissemination and diffusion of new information | 08 | Data processing, data identification, code writing, search engines |
| 3 | REW3 | Streamlining the workflow | 06 | Documentation |
| 4 | REW4 | Personalized learning | 08 | Improved literacy, critical thinking and problem-based learning |
| 5 | REW5 | Decrease teaching workload | 05 | Teaching and mentoring, support professional activities |
| 6 | REW6 | Idea and text generation and summarization | 18 | Assemble or organize text, writing fluency and efficiency, hypothesis generation, code writing |
| 7 | REW7 | Increase productivity and efficiency | 05 | Usefulness |
Table 3. Resilience themes of ChatGPT based on frequency count.

| S/N | Code | Resilience Themes | Frequency Count | Related Themes |
|---|---|---|---|---|
| 1 | RES1 | Appropriate testing framework | 06 | Use AI detector tools |
| 2 | RES2 | Acceptable usage in science | 05 | |
| 3 | RES3 | Co-creation between humans and AI | 08 | Improved human-AI interaction, balance between AI-assisted innovation and human expertise |
| 4 | RES4 | Academic integrity policies | 10 | Rigorous guidelines; developing policies and procedures |
| 5 | RES5 | Solidify ethical values | 09 | |
| 6 | RES6 | Transform educative systems | 11 | Establishment of corresponding pedagogical adjustments, reintroduce proctored, in-person assessments |
| 7 | RES7 | Higher-level reasoning skills | 11 | Significant training and upskilling |
Table 4. The proposed AHP scale based on Saaty.

| Score | Meaning | Explanation in This Study |
|---|---|---|
| 1 | Equal | Two themes are equally important |
| 2 | Weakly important | One theme is weakly more important than the other |
| 3 | Moderately important | One theme is slightly preferred over the other |
| 4 | Moderate plus | One theme is moderately more important than the other |
| 5 | Strongly important | One theme is strongly preferred over the other |
| 6 | Strong plus | One theme is stronger than the other |
| 7 | Very strong | One theme is very strongly preferred over the other |
| 8 | Very, very strong | One theme is much, much stronger than the other |
| 9 | Absolutely important | One theme is absolutely more important than the other |
Table 5. Saaty (1980) predefined values of the random index (RI).

| N | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 | 11 | 12 | 13 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| RI | 0 | 0 | 0.58 | 0.90 | 1.12 | 1.24 | 1.32 | 1.41 | 1.45 | 1.49 | 1.51 | 1.58 | 1.56 |
Table 6. Normalized matrix and weight of primary RRR themes.

| RRR Elements | Risk | Reward | Resilience | Weights | AW | Lambda |
|---|---|---|---|---|---|---|
| Risk | 0.33 | 0.32 | 0.34 | 0.3279 | 0.984057 | 3.00098 |
| Reward | 0.22 | 0.21 | 0.21 | 0.2132 | 0.639722 | 3.00064 |
| Resilience | 0.45 | 0.47 | 0.46 | 0.4589 | 1.377321 | 3.00139 |

Consistency test: λmax = 3.00101; CI = 0.000504257; RI = 0.58; CR = 0.000869408.
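A note for readers who wish to verify the consistency figures in Table 6: they follow directly from Saaty's standard definitions,

CI = (λmax − n)/(n − 1) = (3.00101 − 3)/(3 − 1) ≈ 0.000505, CR = CI/RI = 0.000504257/0.58 ≈ 0.000869,

both well below the conventional acceptance threshold of 0.10 for n = 3 (RI = 0.58, Table 5). The same formulas apply to the consistency tests reported in Tables 7–9 with n = 7 and RI = 1.32.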
Table 7. Normalized matrix and weight of risk-related themes: Infodemics and misinformation (RIS1), biased responses (RIS2), plagiarism (RIS3), privacy and confidentiality (RIS4), academic integrity concern (RIS5), risk hallucination through manipulation and misleading (RIS6), and safety and security concerns (RIS7).

| Risk | RIS1 | RIS2 | RIS3 | RIS4 | RIS5 | RIS6 | RIS7 | Weights |
|---|---|---|---|---|---|---|---|---|
| RIS1 | 0.090 | 0.103 | 0.115 | 0.079 | 0.083 | 0.105 | 0.088 | 0.095 |
| RIS2 | 0.060 | 0.069 | 0.073 | 0.071 | 0.080 | 0.054 | 0.066 | 0.068 |
| RIS3 | 0.104 | 0.125 | 0.132 | 0.145 | 0.124 | 0.109 | 0.158 | 0.128 |
| RIS4 | 0.268 | 0.227 | 0.213 | 0.234 | 0.224 | 0.249 | 0.236 | 0.236 |
| RIS5 | 0.192 | 0.152 | 0.189 | 0.186 | 0.178 | 0.156 | 0.172 | 0.175 |
| RIS6 | 0.074 | 0.110 | 0.105 | 0.082 | 0.099 | 0.087 | 0.075 | 0.090 |
| RIS7 | 0.211 | 0.214 | 0.172 | 0.204 | 0.212 | 0.239 | 0.206 | 0.208 |

Consistency test: λmax = 7.04355095; CI = 0.007258; RI = 1.32; CR = 0.005499.
Table 8. Normalized matrix and weight of reward-related themes: Question answering (REW1), dissemination and diffusion of new information (REW2), streamlining the workflow (REW3), personalized learning (REW4), decrease teaching workload (REW5), idea and text generation and summarization (REW6), and increased productivity and efficiency (REW7).

| Reward | REW1 | REW2 | REW3 | REW4 | REW5 | REW6 | REW7 | Weights |
|---|---|---|---|---|---|---|---|---|
| REW1 | 0.084 | 0.063 | 0.073 | 0.078 | 0.087 | 0.084 | 0.096 | 0.081 |
| REW2 | 0.110 | 0.082 | 0.066 | 0.071 | 0.061 | 0.089 | 0.094 | 0.082 |
| REW3 | 0.095 | 0.101 | 0.082 | 0.071 | 0.069 | 0.069 | 0.093 | 0.083 |
| REW4 | 0.140 | 0.150 | 0.148 | 0.130 | 0.128 | 0.126 | 0.118 | 0.134 |
| REW5 | 0.132 | 0.183 | 0.164 | 0.139 | 0.137 | 0.095 | 0.151 | 0.143 |
| REW6 | 0.166 | 0.151 | 0.195 | 0.170 | 0.237 | 0.165 | 0.138 | 0.175 |
| REW7 | 0.273 | 0.270 | 0.272 | 0.342 | 0.281 | 0.371 | 0.310 | 0.303 |

Consistency test: λmax = 7.07635432; CI = 0.012726; RI = 1.32; CR = 0.009641.
Table 9. Normalized matrix and weight of resilience-related themes: Appropriate testing framework (RES1), acceptable usage in science (RES2), co-creation between humans and AI (RES3), academic integrity policies (RES4), solidifying ethical values (RES5), transform educative systems (RES6), and higher-level reasoning skills (RES7).

| Resilience | RES1 | RES2 | RES3 | RES4 | RES5 | RES6 | RES7 | Weights |
|---|---|---|---|---|---|---|---|---|
| RES1 | 0.062 | 0.062 | 0.048 | 0.059 | 0.055 | 0.077 | 0.078 | 0.063 |
| RES2 | 0.118 | 0.119 | 0.099 | 0.126 | 0.097 | 0.153 | 0.133 | 0.121 |
| RES3 | 0.144 | 0.132 | 0.110 | 0.098 | 0.099 | 0.100 | 0.132 | 0.116 |
| RES4 | 0.197 | 0.177 | 0.212 | 0.187 | 0.199 | 0.095 | 0.188 | 0.179 |
| RES5 | 0.207 | 0.224 | 0.204 | 0.173 | 0.183 | 0.178 | 0.165 | 0.191 |
| RES6 | 0.125 | 0.120 | 0.171 | 0.173 | 0.160 | 0.155 | 0.119 | 0.146 |
| RES7 | 0.147 | 0.166 | 0.156 | 0.185 | 0.207 | 0.243 | 0.186 | 0.184 |

Consistency test: λmax = 7.01364045; CI = 0.002273; RI = 1.32; CR = 0.001722.
Table 10. Ranking of RRR themes for ChatGPT ethics conundrums.

| Themes | RRR Element | Global Weight | Rank |
|---|---|---|---|
| Solidify ethical values | Resilience | 0.08743 | 1 |
| Higher-level reasoning skills | Resilience | 0.08458 | 2 |
| Academic integrity policies | Resilience | 0.08225 | 3 |
| Privacy and confidentiality | Risk | 0.07736 | 4 |
| Safety and security concern | Risk | 0.06834 | 5 |
| Transform educative systems | Resilience | 0.06697 | 6 |
| Increase productivity and efficiency | Reward | 0.06453 | 7 |
| Academic integrity concern | Risk | 0.05739 | 8 |
| Acceptable usage in science | Resilience | 0.05537 | 9 |
| Co-creation between humans and AI | Resilience | 0.05341 | 10 |
| Plagiarism | Risk | 0.04202 | 11 |
| Idea and text generation and summarization | Reward | 0.03721 | 12 |
| Infodemics and misinformation | Risk | 0.03105 | 13 |
| Decrease teaching workload | Reward | 0.03051 | 14 |
| Risk hallucination through manipulation and misleading | Risk | 0.02957 | 15 |
| Appropriate testing framework | Resilience | 0.02888 | 16 |
| Personalized learning | Reward | 0.02862 | 17 |
| Biased responses | Risk | 0.02220 | 18 |
| Streamlining the workflow | Reward | 0.01766 | 19 |
| Dissemination and diffusion of new information | Reward | 0.01746 | 20 |
| Question answering | Reward | 0.01721 | 21 |