Article

Artificial Intelligence and the Transformation of Higher Education Institutions: A Systems Approach

1 Gabelli School of Business, Fordham University, New York, NY 10023, USA
2 Department of Social Science and Policy Studies, Worcester Polytechnic Institute, Worcester, MA 01609, USA
* Author to whom correspondence should be addressed.
Sustainability 2024, 16(14), 6118; https://doi.org/10.3390/su16146118
Submission received: 16 April 2024 / Revised: 9 July 2024 / Accepted: 16 July 2024 / Published: 17 July 2024
(This article belongs to the Section Economic and Business Aspects of Sustainability)

Abstract

Artificial intelligence (AI) advances and the rapid adoption of generative AI tools, like ChatGPT, present new opportunities and challenges for higher education. While substantial literature discusses AI in higher education, there is a lack of a systems approach that captures a holistic view of the structure and dynamics of the AI transformation of higher education institutions (HEIs). To fill this gap, this article develops a causal loop diagram (CLD) to map the causal feedback mechanisms of AI transformation in a typical HEI. We identify important variables and their relationships and map multiple reinforcing and balancing feedback loops accounting for the forces that drive the AI transformation and its impact on value creation in a typical HEI. The model shows how, motivated by AI technology advances, the HEI can invest in AI to improve student learning, research, and administration while dealing with academic integrity problems and adapting to job market changes by emphasizing AI-complementary student skills. We explore model insights, scenarios, and policy interventions and recommend that HEI leaders become systems thinkers to manage the complexity of the AI transformation and benefit from the AI feedback loops while avoiding policy traps that may lead to decline. We also discuss the notion of HEIs influencing the direction of AI and directions for future research on AI transformation and the sustainability of HEIs.

1. Introduction

The spectacular growth of generative artificial intelligence (AI) tools, like ChatGPT, since late 2022 has brought AI to the forefront of all debates about technology and its impact on the economy and society [1]. While companies explore how to benefit from generative AI investment [2], there are concerns about the future of work and the adverse social consequences of automation that may lead to a jobless future [3,4,5].
In higher education, the rapid adoption of ChatGPT brings excitement about opportunities for learning as well as concerns about challenges, such as students cheating on their assignments [6], for instance, by asking ChatGPT to write an essay about any topic [7]. While the initial reaction was banning generative AI, several organizations have developed guidelines about the beneficial use of such tools in higher education institutions (HEIs), such as colleges or universities. The Russell Group of universities in the UK developed five principles, emphasizing the need for “students and staff to become AI-literate”, adapting “teaching and assessment to incorporate the ethical use of generative AI”, upholding academic integrity and rigor, and working collaboratively to share best practices [8]. The intense interest in developing guidelines around AI in higher education underscores the topic’s significance.
AI brings several opportunities and challenges for teaching, learning, student support, scholarship, and administration in HEIs. AI is not a new phenomenon in education, and it has been studied for more than 30 years, as captured in several review articles [9,10,11,12,13,14] that provide a background to inform our research. Still, less understood is how AI will transform education [15,16] and what HEIs could do about it, especially about generative AI, due to its novelty [17,18,19,20].
This article aims to study the AI transformation of higher education by deploying a systems approach [21]. It develops a causal loop diagram (CLD) model that captures the major factors that affect AI transformation in an HEI. The CLD shows the feedback loop structure that defines how an HEI creates value and how AI restructures value creation in an HEI. That allows us to understand the causal mechanism underlying several AI effects relevant to HEI, such as effects on learning, academic integrity, and jobs. Visualizing the university as a complex system helps to derive novel insights into the complex dynamics of higher education and practical implications for higher education leaders. The study underscores the significance and value of a systems approach in developing theory and understanding, designing, and managing AI transformation to create value in higher education.
The article makes several research contributions. First, it contributes to our understanding of the AI transformation of HEIs by providing a holistic view of the driving forces and the consequences of the AI transformation. Integrating systems thinking with economic concepts and incentives, we show that investment in AI can have strategic value because AI can transform the structure of value creation in an HEI. The CLD allows us to see the strategic significance of AI within an HEI from a whole-system viewpoint, contributing to higher education economics and strategy. A key concept is the AI feedback loop [22], which captures novel reinforcing value-creation processes due to AI.
Additionally, this article contributes to sustainability through the study of HEIs. Goal four of the United Nations Sustainable Development Goals (SDGs) concerns access to quality education [23]. We show that AI can support the advancement of goal four by demonstrating that AI can help HEIs improve their quality of learning, deal with associated challenges, and better their reputation. Moreover, the model provides insights into the AI-enabled sustainability of HEIs. Therefore, our work connects with two interrelated aspects of sustainability.
Moreover, the article provides practical insights for HEI leaders seeking to understand and leverage AI in higher education. We argue that HEI leaders need to become systems thinkers to manage the complexity of the AI transformation, benefiting from AI feedback loops while avoiding the associated pitfalls. We also aim to clarify what is new about generative AI in the broader historical context of AI use in higher education.
Section 2 develops the theoretical framework and Section 3 explains the research methods. Section 4 presents the CLD model and feedback analysis. Section 5 and Section 6 are the discussion and conclusions, respectively.

2. Theoretical Framework

The theoretical framework provides the foundation for the development of our CLD. We study AI transformation in a typical HEI, focusing on the processes that create value in the HEI and the impact of AI on those processes while emphasizing novel opportunities and challenges due to generative AI. Therefore, we decided to organize our theoretical framework into three parts: advances in AI technology that enable the AI transformation, dimensions of AI transformation in the HEI, and AI’s impact on jobs for graduating students. These three parts are aligned with the three main processes mapped in the CLD model presented in Section 4, following the methodological choices and steps explained in Section 3.

2.1. Advances in Artificial Intelligence (AI) Technology

With its continuous advances, AI has many promising business applications, and it is expected to transform our lives, businesses, and society [1,24,25,26,27,28]. Artificial intelligence as a field has a 70-year history, with multiple waves of progress followed by periods of challenges called AI winters. It is a diverse field of research and practice related to creating and evaluating intelligent systems [29] with various problems (e.g., reasoning, prediction, planning, vision, language understanding), approaches, technologies, and applications. One popular approach has been creating rule-based systems that encode the knowledge of experts, e.g., rules about making a medical diagnosis, but these systems have substantial limitations. Instead of capturing knowledge in software, the approach that proved most fruitful is designing algorithms that learn from data and training them with large quantities of data on powerful computers—this is the machine learning approach. Various approaches to learning are used depending on the problem: supervised learning, unsupervised learning, reinforcement learning, and others.
Most recent AI advances rely on machine learning using large-scale neural networks, called deep learning due to the multiple layers of neurons. One example is large-scale neural networks for language, called large language models (LLMs), that can generate text, including code, following a user prompt or a sequence of user prompts (a dialogue with the user), hence generative AI. LLMs are trained using large datasets [30], and because they deal with language, they also belong to the area of AI called natural language processing (NLP). OpenAI’s ChatGPT, using a generative pre-trained transformer architecture with billions of parameters (weights), is the most well-known example, amongst many, of a conversational generative AI application built on an LLM. Other generative AI applications produce images, music, videos, or multiple types of media (multimodal models), so the general term ‘foundation model’ is sometimes used for generative AI models. The art of writing prompts to obtain the best results from the system is called prompt engineering. The systems typically incorporate filters called guardrails to ensure they do not produce offensive or otherwise undesirable content. Other significant challenges and risks are discussed in Section 2.2.5. Overall, AI advances create opportunities for benefiting from AI within an HEI, as we explain next.
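To make the idea of learning language patterns from data concrete, the following toy sketch (ours, not drawn from the cited literature) trains a word-level bigram model and continues a prompt. Real LLMs replace the counting with deep neural networks and billions of parameters, but the principle of predicting the next token from patterns in the training data is the same.

```python
import random
from collections import defaultdict

def train_bigram(corpus):
    """Count, for each word, how often each other word follows it."""
    counts = defaultdict(lambda: defaultdict(int))
    words = corpus.split()
    for prev, nxt in zip(words, words[1:]):
        counts[prev][nxt] += 1
    return counts

def generate(counts, prompt, length=5, seed=0):
    """Continue the prompt by repeatedly sampling a likely next word."""
    rng = random.Random(seed)
    out = prompt.split()
    for _ in range(length):
        followers = counts.get(out[-1])
        if not followers:
            break  # no observed continuation for this word
        words_, weights = zip(*followers.items())
        out.append(rng.choices(words_, weights=weights)[0])
    return " ".join(out)

# Tiny illustrative corpus; real models train on vast text collections.
corpus = "ai can support student learning and ai can support faculty research"
model = train_bigram(corpus)
print(generate(model, "ai can", length=3))
```

Every generated word is a continuation observed in the training data, which also illustrates why such models can only remix patterns they have seen.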

2.2. Dimensions of AI Transformation in HEIs

We identify and discuss five dimensions of AI transformation in an HEI: student learning, academic integrity problems, faculty research productivity, administration and operations, and AI-related risks.

2.2.1. Student Learning

AI can support student learning by empowering instructors and students [31]. In particular, AI has the potential to transform teaching by supporting instructors. Instructors could use AI to design programs or courses, create new educational material and assignments, deliver better instruction that increases student engagement and motivation for learning, and assess learning more creatively and authentically. Faculty can also use AI to automate time-consuming administrative tasks so that they can focus on creativity and innovation in teaching and research. AI and other Industry 4.0 technologies, such as the Internet of Things, can enable smart classrooms and the digital transformation of education management, teaching, and learning [32]. Other examples include learning analytics, educational data mining, intelligent web-based education [9], and cobots (collaborative robots) that assist teachers in the classroom [33]. A large-scale review of more than 4500 articles published between 2000 and 2019 [34] found that the main research topics include intelligent tutoring systems for special education, natural language processing for language education, educational data mining for performance prediction, discourse analysis in computer-supported collaborative learning, neural networks for teaching evaluation, affective computing for learner emotion detection, and recommender systems for personalized learning. Another review of 138 articles from 2016 to 2022 [10] found five topics: assessment/evaluation, predicting, AI assistant, intelligent tutoring system, and managing student learning.
Students can use AI as a support tool to meet their learning goals via personalized adaptive learning. Applications come in various forms, such as personalized learning [35], AI teaching assistants, teacherbots [36,37], intelligent tutoring systems [38], and others. An experimental study in India found that personalized technology-aided after-school instruction improves student scores in math and language [39]. Gains attributed to the tutoring effect can be expected to be larger using more recent AI technologies, such as GPT-4. Generative AI can empower students and enhance their educational resources and experiences [40]. There are several ways that generative AI can be used in the classroom, such as a tutor, coach, or teammate [41]. Alternatively, AI can be used as a tutor or coach outside the classroom, while classroom time is used for activities that apply knowledge.
While publicly available general-purpose tools, like ChatGPT, receive most of the attention, the greatest value may come from specialized tools created with specific education objectives and trained with appropriate data or using retrieval-augmented generation (RAG). An example is Khanmigo by Khan Academy (https://www.khanacademy.org/khan-labs, accessed on 14 January 2024), which aims to bring one-to-one tutoring to all students and an assistant to teachers using AI. It runs on top of the OpenAI platform and has been piloted widely, with research on its efficacy expected in 2024 [42].

2.2.2. Academic Integrity Problems

There is significant concern that generative AI tools will facilitate high levels of cheating in higher education, undermining learning and academic honesty [43,44]. Although cheating existed before ChatGPT [45,46], just two months after ChatGPT’s release, an estimated one-fifth to over one-third of students reported using it, with the vast majority believing that using it constituted cheating [47]. Furthermore, as students become more familiar with the technology, they also become more effective at using it.
Moreover, academic integrity problems may relate to employers seeing higher education as a signaling device [48]. For instance, employers will only consider applicants who graduated college and screen candidates by grade point average (GPA) [49]. As a result, students could perceive that graduating with a degree and GPA that employers will desire is more important than learning. This creates an incentive for students to cheat using AI.
HEIs can respond by reducing incentives to cheat, increasing the value of learning, making it harder to cheat, or increasing the risk and consequences of getting caught. A systematic review of cheating in online exams from 2010 to 2021 found several approaches to reduce academic dishonesty before testing [50], such as strengthening student ethics, bringing the learning goal of the exams to mind, and moving away from summative assessments toward formative assessments. Instructors have modified their teaching and assessment in response to technologies that make cheating easier, such as the calculator [45] and Wikipedia [51]. However, with widespread AI usage, randomizing questions or shifting toward essays becomes less effective. Anti-cheating measures also have tradeoffs. For example, online proctoring software may reduce cheating, but it also costs money, causes technological difficulties, produces false positives, and reduces students’ privacy. The most common initial approach by schools was using AI detection software. Unfortunately, AI detection software has extremely high false positive and false negative rates and flags the work of non-native speakers significantly more often than that of their peers [52]. There is a need for clear policies to deal with academic integrity and plagiarism detection challenges [53]. Therefore, HEIs must update their academic integrity policies, and faculty must update their course syllabi to account for generative AI. For instance, some courses could allow the creative use of generative AI and adjust assignments and assessments accordingly, while others prohibit it. Overall, as AI advances, students may discover new ways to cheat, and HEIs must take measures to deal with those challenges.

2.2.3. Faculty Research and Accelerated Scientific Discovery

AI, such as machine learning techniques, is increasingly used in science research, and researchers are excited about its potential [54]. However, they are also concerned about the quality of work and reproducibility of results [55]. Generative AI can support scholarly work and faculty research productivity [56]. Such tools can support problem formulation, data collection, analysis, and writing [57], including research brainstorming, identifying research questions, hypothesis generation [58,59,60], summarizing or conducting a literature review, creating graphs from data, and drafting parts of manuscripts.
However, all those uses come with challenges, such as AI hallucinations (making things up), accuracy, completeness, quality, and others. Moreover, the ease of creating content using generative AI tools may increase academic misconduct or result in the mass production of low-quality papers flooding journals and overwhelming the established peer-review process. Both would have significant negative consequences for scholarly publishing and research, and journals are updating their editorial policies. For instance, some journals, such as Science, do not accept text written by AI tools [61]. Ultimately, the authors are responsible for all aspects of the research output, and they also need to be transparent about whether and how they use AI tools. While conversational generative AI tools have the potential to play a significant role in the research workflow, the details of the practical application of those tools need to be clarified (Table 6 in [57]), and guidelines must be defined [58]. Overall, AI can positively impact faculty research productivity, accelerating research and scientific discovery [59,60,62].

2.2.4. Administration and Operations: Institutional Learning

Although our review of the literature on AI in higher education finds that the main focus is student learning and teaching, other HEI areas can benefit from AI [63,64]. AI can support the HEI administration at multiple levels, including departments and schools. Moreover, admissions can use AI and data to target the right students and manage the admission process to improve enrollments. Academic advisors can use AI to guide students, improving student educational experience, satisfaction, and retention. AI can also support career advising [65], internships, and job placements for students. Managing alumni relations can be important for many HEIs, and AI helps manage the relationship. AI can support IT, human resources, athletics, facilities, and operations [66]. For instance, the IT department can use AI to automate tasks and workflows and lower the cost of managing the IT infrastructure. Facilities can use AI to make infrastructure more intelligent, allowing for efficiencies, remote management, and maintenance.
In summary, AI and data can help improve effectiveness and lower the operating costs of all university areas. Many of those opportunities for improvement can be seen as institutional learning. Therefore, an HEI can use AI to become a learning organization and pursue continuous improvement while adapting to changes in its environment.

2.2.5. AI Risks and Ethics in HEIs

Generative AI has a long history [67], and while recent generative AI signifies progress, we should be aware of its limitations [68,69,70] and discount the hype. For instance, LLMs are probabilistic language modelers that predict how to continue a text based on patterns learned from training data. They lack causal models of the world, and their outputs need critical evaluation. ChatGPT and related tools are designed to create persuasive and authoritative output, even when they make things up, a well-known problem called hallucination. This is a severe problem for education because the only thing worse than not learning anything is learning the wrong things very well. AI-created fake media, such as images and videos (deepfakes), will exacerbate challenges to learning and social cohesion.
In addition to clearly damaging misinformation, large quantities of poor-quality content are a problem for student learning. Humans have limited time and attention (cognitive capacity), and those resources can be easily wasted in an environment where multiple services compete for user attention (attention economy) using algorithms optimized for user engagement. Moreover, poor-quality content from GenAI tools may pollute the Web, affecting all users, including GenAI tools that use that content for training.
Algorithmic bias is another significant concern [71]. Algorithms may reinforce decision biases when evaluating student work, admissions, job placements, etc. In a reinforcing feedback loop, bias in historical data drives algorithmic bias, which drives decision bias, which leads to even more human bias and bias in the data.
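The reinforcing character of this loop can be illustrated with a minimal difference-equation sketch (our illustration; the initial bias and gain values are arbitrary, not calibrated to any data). A gain above 1 represents unchecked amplification around the loop; a gain below 1 represents a loop dampened by, for example, regular audits.

```python
def simulate_bias_loop(initial_bias, gain, cycles):
    """Track bias through repeated data -> algorithm -> decision -> data cycles.

    Each cycle, algorithmic bias inherits the bias in historical data,
    shapes decisions, and feeds biased outcomes back into the data.
    """
    bias = initial_bias
    history = [bias]
    for _ in range(cycles):
        bias = bias * gain  # one trip around the reinforcing loop
        history.append(bias)
    return history

unchecked = simulate_bias_loop(initial_bias=0.10, gain=1.2, cycles=5)
audited = simulate_bias_loop(initial_bias=0.10, gain=0.8, cycles=5)
print(f"unchecked: {unchecked[-1]:.3f}, audited: {audited[-1]:.3f}")
```

The point of the sketch is structural: the same loop amplifies or dampens depending on whether interventions push its gain below 1.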
In addition, AI in higher education also has a dark side related to data [72]. Data is an essential resource for AI. The need for large quantities of data creates privacy, security, and copyright risks. For instance, sensitive student data must be well protected. Confidential data may leak if it is used to interact with publicly available AI chatbots. Malicious actors can use AI for cyberattacks. Ignoring copyrights in model training is another issue, and ongoing lawsuits may affect how future generative AI systems work [73].
Multiple ethical issues arise. The process of training AI models often utilizes cheap global labor to label data, moderate content, or provide feedback, creating ethical concerns about labor practices [74]. Increased complexity due to fast change, loss of control, manipulation of behavior, dependence on tech firms, like OpenAI, controlling the AI platform, and lack of transparency and accountability are other issues due to AI that may negatively affect multiple areas of an HEI. Constant surveillance by AI [75] damages trust and meaningful education [76]. Automation itself is a risk, if not well designed, because it could cause an organization to do the wrong things faster and in an automated way while no one pays attention. Accountability in AI-mediated education practices is an issue that needs to be studied more [77]. The environmental impacts, carbon and water footprints, and energy consumption of AI data centers are also concerning [78].
Organizations need to take measures to manage all these AI-related risks. The explainability, transparency, and fairness [79] of AI decisions should be priorities in the design of AI systems. Human oversight, critical thinking [80], and education on the responsible and ethical use of new tools [81,82] are vital. Learning analytic systems must be thoroughly audited to ensure they are fair, transparent, and robust [83]. Generative AI tools, such as ChatGPT, raise even more ethical challenges and call for stakeholder engagement and a systemic view of the benefits and risks when applications are developed [81]. The UNESCO guidance proposes the regulation of generative AI tools by government agencies and validation of the ethical and pedagogical aspects of those tools by education institutions [82].

2.3. Jobs for Graduating Students

HEIs educate students who seek jobs after graduation. Therefore, the state of the job (labor) market and the workforce needs of companies are crucial determinants of the value of an HEI degree.
AI can be a tool that makes a worker more productive (AI augmentation) or an automation engine that eliminates the worker’s job (AI substitution). Therefore, which jobs will be most affected by AI, and how, is a complex question [84,85,86,87]. One way to approach that question is to think of a job as a set of tasks and consider how AI affects each task. Then, a job with many tasks automated or augmented by AI will be affected the most [88,89]. Our study aims to connect job market changes due to AI with the value created by HEIs, considering AI substitution and augmentation.
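The task-based view can be sketched in a few lines of code. The job, its tasks, and the labels below are hypothetical, invented only to illustrate the scoring idea; real exposure studies use detailed occupational task databases.

```python
def ai_exposure(tasks):
    """Score a job's AI exposure as the share of its tasks AI automates or augments."""
    affected = [t for t, effect in tasks.items() if effect in ("automate", "augment")]
    return len(affected) / len(tasks)

# Hypothetical job with hand-labeled task effects (for illustration only).
paralegal = {
    "document review": "automate",
    "legal drafting": "augment",
    "client meetings": "neither",
    "court filing": "automate",
}
print(f"AI exposure: {ai_exposure(paralegal):.2f}")  # 3 of 4 tasks affected
```

Under this scoring, a job is highly exposed when most of its tasks are automatable or augmentable, regardless of whether the net effect on the worker is substitution or augmentation.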
Generative AI can make knowledge workers more productive. Software developers randomly assigned to use GitHub Copilot, an AI coding assistant, completed their task 55% faster than the control group [90]. Moreover, using GitHub Copilot improves other metrics, such as developer job satisfaction [91]. College-educated professionals randomly assigned to use ChatGPT in a writing task took 40% less time and produced 18% higher output quality, and participants with weaker skills benefited the most [92]. Customer support workers using generative AI achieve higher productivity, but with significant heterogeneity across workers, as novice and low-skilled workers benefit the most [93]. While AI can help improve the effectiveness of consultants in many tasks, there are tasks in which AI fails, implying that overreliance on AI can lower performance [94]; for instance, LLMs hallucinate and sometimes do poorly in basic math.
Companies care about the optimal mix of humans and AI that maximizes the company’s performance. The interaction of companies’ needs and workers’ skills and preferences will determine the effect of AI on employment outcomes. For instance, a recent study using data from a large online platform found that generative AI negatively affects freelancers’ employment and earnings [95].

3. Methods

We introduce the systems approach, describe the typical CLD development process, and explain the steps we followed to develop our CLD model.

3.1. Systems Approach and CLD

A systems approach calls for a holistic view of systems with multiple interacting parts because the behavior of a complex system can only be understood by studying the whole system [21,96]. A systems approach using a CLD is called systems thinking or qualitative system dynamics [21,97,98]. The CLD is a causal system mapping tool [99] used to map the structure of a system. It shows the causal feedback processes, or feedback loops, that drive the dynamic behavior of a system. The process helps to visualize the interconnectedness of different system parts, externalize and explore mental models, and identify leverage points for system change. In addition, building a CLD with the participation of multiple stakeholders aids in visualizing the whole system and building consensus for action [100]. From a practical standpoint, a CLD can help a manager anticipate and manage dynamic complexity.
Developing a CLD to gain insight into a system has been widely used in multiple applications across multiple fields [21,101]. Examples include understanding complexity in organizations [102], business strategy [103], health systems [104,105,106], sustainability [107], digital technologies and business models [108,109], pandemics [110], diffusion of innovations, such as car-sharing [111], and many others. The systems approach has been used for the study of several issues in higher education, such as university management and planning [112,113,114], quality management [115], the enrollment crisis due to demographics [116], university funding [117], tuition inflation [118], program development [119], and others.

3.2. Development of a CLD

We built the CLD following the relevant literature on systems approach and qualitative system dynamics methodology [21,98,120,121,122,123]. We defined the problem, identified key variables (factors), and defined the system boundary. Then, we identified the rest of the variables, the causal links between variables, and the feedback loops that emerged from connecting the causal links. Making those feedback loops visible is a significant value of the CLD modeling process. A feedback loop is reinforcing (a change in a factor amplifies via the loop) or balancing (a change is dampened via the loop). The structure and interaction of the feedback loops determine the system behavior through time. A CLD is, in essence, a dynamic theory of the problem under study, and we want as many variables as possible to be endogenous.
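The two loop types can be illustrated with a small simulation sketch (ours, with arbitrary parameters): a reinforcing loop compounds change over time, while a balancing loop progressively closes the gap to a goal.

```python
def reinforcing(x0, growth_rate, steps):
    """Reinforcing loop: each step amplifies the variable (exponential growth)."""
    xs = [x0]
    for _ in range(steps):
        xs.append(xs[-1] * (1 + growth_rate))
    return xs

def balancing(x0, goal, adjustment, steps):
    """Balancing loop: each step closes part of the gap to a goal (goal-seeking)."""
    xs = [x0]
    for _ in range(steps):
        xs.append(xs[-1] + adjustment * (goal - xs[-1]))
    return xs

# Arbitrary illustrative values.
print(reinforcing(100, 0.05, 10)[-1])    # moves ever further from 100
print(balancing(100, 500, 0.3, 10)[-1])  # approaches the goal of 500
```

In a CLD, the observed behavior of a real system emerges from many such loops interacting, with dominance shifting among them over time.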

3.3. Steps We Followed to Develop Our CLD

Our study follows the standard process for developing a CLD described above; here we provide more study-specific methodological details. Our study relies on an extensive review of the literature on our topic and on our exploration of current AI-related developments, leveraging our domain expertise of more than 50 years of cumulative experience in higher education.
Our objective is to create a high-level, holistic map of AI transformation in a typical HEI, focusing on the processes that create value in the HEI and the impact of AI on those processes. Therefore, the key variables we want to focus on are student learning (because the primary mission of an HEI is to teach students), AI investment (because this determines whether the HEI adopts and uses AI), and HEI reputation (because HEIs compete on reputation [124,125]). Therefore, explaining how those key variables behave over time is crucial.
The definition of the system boundary is also driven by the problem we want to solve. We decided to focus on the processes within the HEI and the primary interaction of the HEI with its environment. This suggests three main processes: the AI industry that drives AI advances that affect the HEI, the focal HEI that uses AI for transformation, and the companies that offer jobs to students graduating from the HEI. These three overlapping processes were identified after an initial review of the literature on AI and education, and as a result of our study of current developments in the area. They define the boundary of the system we will explore using our CLD.
After we defined our system boundary, we went back to expand and refine the literature review and organize the theoretical framework of our research (Section 2) according to the three main processes we decided to focus on. That way, the organization of the theoretical framework is aligned with the main model components. The theoretical framework and the CLD constitute an integrated whole.
Like all models, a CLD is an abstraction of reality, and the theoretical framework section is a crucial step toward building the CLD model. In addition to the key variables mentioned above, all the CLD variables and their relationships were identified following the three main overlapping processes in the theoretical framework. A complete list of variables is presented in Table 1, and the relevant theoretical framework sections for each variable are listed in parentheses.
After several iterations of adding, refining, and building confidence that the CLD maps what we know about the system, the validity of the resulting CLD model was further established by feedback from three domain experts—a student, a faculty member, and a university administrator—following [126]. This concludes the development of the CLD.
In the next section, we explain all the relationships between variables and present the CLD model. We emphasize the important feedback loops and derive insights from the feedback loops and their interactions. In addition, we evaluate policy interventions (leverage points) qualitatively. This can be carried out because the CLD allows us to assess how a change in one part of the system ripples through the whole system. The CLD is an essential output of this research; other researchers and practitioners can use it as a starting point for more exploration. Like all methods, the systems approach we use has limitations, discussed in Section 4.2, alongside recommendations for future research that could address those limitations.

4. CLD Model and Insights

The CLD model maps the causal mechanisms of AI transformation in a typical HEI (Figure 1). A positive arrow signifies that cause and effect move in the same direction, while a negative arrow signifies that they move in opposite directions. Letters R and B denote reinforcing and balancing feedback loops, respectively. Our model captures three interconnected processes: the AI industry that drives AI advances that the HEI adopts, the focal HEI that uses AI for transformation, and the companies that offer jobs to students graduating from the HEI.
We identified and analyzed 15 reinforcing (R) and 4 balancing (B) feedback loops that define the structure of value creation in the HEI and its interaction with the business sector and the AI industry. The CLD defines the system structure, which determines the system behavior through time. The feedback loops are summarized in Table 2 and discussed below.
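The loop conventions above lend themselves to a simple computational check: a feedback loop is reinforcing if the product of its link polarities is positive and balancing if it is negative. The sketch below illustrates this rule on a toy subset of links inspired by loops R1 and B2; the variable names and link subset are our simplification for illustration, not the full Figure 1 model.

```python
from math import prod

# Toy subset of CLD links (a simplification for illustration, not the full model):
# (cause, effect) -> link polarity, +1 for a positive arrow, -1 for a negative one.
links = {
    ("AI R&D", "AI capabilities"): +1,
    ("AI capabilities", "business AI investment"): +1,
    ("business AI investment", "AI R&D"): +1,               # closes loop R1
    ("academic integrity problems", "anti-AIP effort"): +1,
    ("anti-AIP effort", "academic integrity problems"): -1,  # closes loop B2
}

def loop_polarity(cycle):
    """A loop's polarity is the product of its link polarities:
    positive -> reinforcing (R), negative -> balancing (B)."""
    edges = zip(cycle, cycle[1:] + cycle[:1])
    return prod(links[edge] for edge in edges)

print(loop_polarity(["AI R&D", "AI capabilities", "business AI investment"]))  # 1, so R
print(loop_polarity(["academic integrity problems", "anti-AIP effort"]))       # -1, so B
```

This polarity rule is standard in qualitative system dynamics and is how each R and B label in Table 2 can be verified from the arrows in Figure 1.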

4.1. Advances in AI Technology

We start with feedback loops related to AI advances (improved AI capabilities) in the AI industry following Section 2.1. AI advances, such as generative AI and LLMs, create opportunities for AI transformation in HEIs. The following two feedback loops capture the main mechanism of AI advances.
R1: Due to AI research (R&D), AI capabilities improve and encourage more business investment in AI, motivating the AI industry to invest even more in AI R&D.
R2: As AI capabilities improve, businesses invest more in AI, thus increasing business automation. More automation benefits businesses and encourages even more investment in AI.
Loops R1 and R2, supported by R5 and R8 discussed below, are the primary economic forces driving AI advances. We focus on the AI transformation within the HEI next.

4.2. Student Learning

We now focus on mechanisms within the HEI, starting with student learning following Section 2.2.1.
The following loop, R3, is the most fundamental feedback process that creates value for students and financially sustains a typical college or university.
R3: The HEI invests in quality education, which improves student learning and job placement and positively affects the HEI’s reputation, ensuring the enrollment and revenue that enable further investment.
The following feedback loop shows how AI investment by the HEI contributes to student learning.
R4: The AI investment leads to better learning analytics, AI tools, and data, which improve student learning, allow students to find good jobs, and build the HEI’s reputation. A strong reputation contributes to healthy enrollments and revenues, enabling more investment.
Another loop that affects student learning is R5, as described below.
R5: Advances in AI capabilities facilitate students’ self-learning, which improves student learning.
A trade-off between formal learning (R4) and self-learning (R5) is apparent here. If students undertake an increasing share of their learning through self-learning, the HEI’s position weakens over time because fewer students will be interested in enrolling, or those enrolled will ask for tuition discounts.

4.3. Student Academic Integrity Problems

AI-assisted academic integrity problems might undermine the HEI’s business model, as captured by loop B1, following Section 2.2.2.
B1: Better AI leads to more academic integrity problems (AIPs), such as student cheating, which negatively affects student learning, job placement, and the HEI’s reputation.
The HEI can use data about AIPs and AI to fight academic integrity problems, as shown in loop B2.
B2: As AIPs increase, the HEI will increase its efforts to deal with AIPs, aided by collecting more data about the problems.
It is in the HEI’s interests to invest in measures to deal with AIPs, as shown in R15.
R15: AI investment can support dealing with AIPs, and this improves student learning, HEI reputation, and enrollments, enabling more AI investment.

4.4. Faculty Research

AI can support faculty research and contribute to further value-creation in the university, as captured by feedback loops R6 and R7, following Section 2.2.3.
R6: AI investment supporting the research productivity of the HEI faculty has a positive effect on the reputation of the HEI and leads to more robust enrollment numbers and positive net revenue.
R7: AI investment supporting the research productivity of the HEI faculty adds value to student learning due to research–teaching complementarity.
In summary, two mechanisms add value when AI investment supports faculty research. Improved research productivity is positive for the HEI’s reputation (direct mechanism), and better research can support innovative teaching (indirect mechanism).

4.5. HEI Administration and Operations

Following Section 2.2.4, AI can support HEI administration and operations in multiple ways. The following feedback loops capture important mechanisms that add value to HEIs.
R8: Advances in AI motivate the HEI to invest more in AI.
R9: The HEI uses AI to lower operating costs, so there is a higher net revenue for investments in quality education and AI supporting it.
R10: The HEI uses AI to support admissions and improve new enrollment numbers, student support, student retention, and graduation rates, thus increasing total enrollment in the HEI.
R11: The HEI uses AI to support alumni engagement and improve alumni giving.

4.6. AI Risks

Following Section 2.2.5, multiple AI-related risks (biased decisions, privacy, security, and misinformation) can harm the reputation of the HEI. This mechanism is captured by feedback loop B4.
B4: Increased AI investment and adoption increase the risks of AI, which could harm the HEI’s reputation, hence harming enrollment and revenues.
The HEI must manage this feedback loop with risk prevention and mitigation measures.

4.7. Job Placement

We now focus on the interaction between HEI students looking for jobs and businesses offering jobs (Section 2.3). The following three feedback loops capture the main mechanisms.
R12: Business adoption of AI is an opportunity for job placement of students who acquire AI-complementary skills. These skills are discussed in more detail later.
The job-substitution effect of AI manifests itself as a balancing loop (B3): Business automation is a challenge for job placement because it lowers the number of available jobs.
R13: The HEI’s relative reputation and student job placement reinforce each other.
R14: When the HEI does well in student job placement, it enlarges its alumni network, creating opportunities for greater alumni giving (which supports all the other investments) and improving the job placement of new graduates.

4.8. AI Transformation and HEI Success

The model sheds light on the dynamic complexity of value creation in an HEI and the impact of AI. It identifies the mechanisms through which an HEI creates value and explains its success.
Student job placement is vital to the HEI’s success because students who graduate expect to find jobs. The CLD shows the pivotal role of student job placement: it affects enrollments and revenues through several pathways (e.g., R3, R4, R7). Job placement depends on student learning, an HEI’s relative reputation, and job availability. AI impacts all three factors through several pathways, as shown in Figure 1. Therefore, the HEI needs to make the best use of AI to prepare its students for a job market shaped by AI, while other HEIs are likely to do the same, creating new AI opportunities and challenges over time.
AI helps the HEI improve the quality of its offered services (R4, R6, R10, R11) and lower the cost of operations for a given level of service (R9). AI can help an HEI improve learning and increase its reputation, student enrollment, and revenue through multiple reinforcing loops (e.g., R3, R4, R7). The reinforcing feedback loops together work for the benefit of a well-managed HEI. As long as AI keeps advancing, driven primarily by business demand, the reinforcing feedback loops create a virtuous cycle for an HEI that invests in AI and improves its reputation relative to its competitors. However, those same loops will hinder any HEI that falls behind in the competitive higher education market because HEIs compete on reputation. In that context, AI investments can help an HEI differentiate itself and soften competition.
In addition, AI advances intensify academic integrity problems (balancing loop B1), which, if not adequately addressed, may undermine learning and the associated benefits for HEIs. A potential danger is education turning into a ‘market for lemons’ in the eyes of employers, who cannot easily discern which students learned and which used AI to cheat. In extreme cases, the employment market for graduates could collapse. Measures to fight AIPs can differentiate an HEI from others if AIPs become a significant problem in the higher education sector.
In summary, AI rewires the feedback loop structure that defines how an HEI creates value. Therefore, our study underscores the crucial role of AI feedback loops [22,108,127] in the success of HEIs. Depending on AI investment and policies, an HEI can prosper or decline.

4.9. Job Market Scenarios and HEI

In the business world, AI automation lowers the demand for labor (B3) but increases the demand for new skills (R12). Successful HEIs adapt to these changes by teaching AI-complementary skills. In the long-term scenario where AI automates all or most jobs, the current HEI model collapses (see feedback loops R3, R4). HEIs as we know them today may disappear if there is no demand for degrees, except perhaps for a small number of elite HEIs educating government and business leaders. Those HEIs that survive and thrive will need models disconnected from degrees for jobs. They will need to create value in other ways, perhaps teaching humans leisure skills, providing lifelong learning training (instead of the intensive higher education degrees we know today), or training and tuning AI systems in partnership with companies. If humans are supported by a universal basic income (UBI) [3] due to the lack of jobs, then part of that income could support lifelong learning, i.e., a universal basic lifelong learning income (UBLI). Under this scenario, government support will be a source of revenue for future HEIs.
An alternative long-term scenario is that AI will become a new platform for new types of jobs, and there will be an enormous demand for people to fill those jobs (similar to jobs in factories after the Industrial Revolution or office jobs with the adoption of computing). In that case, the future of HEIs is bright, especially if the job market is very fluid and people need multiple degrees over their lifetime.

4.10. Interventions

The model lets us see why and how an intervention propagates through the system. For instance, increasing AI investment will be reinforced through multiple feedback loops (R4, R6, R9, R10, R11, R15). An intervention that increases research productivity will be reinforced in R6 and R7 and then in additional feedback loops, interacting with those.
A policy focused on cost-cutting at the expense of education quality risks placing the HEI on a reinforcing decline trajectory due to R3 and other reinforcing feedback loops. If AI is used to support such a policy, AI will accelerate the decline: revenues keep falling, and the HEI keeps cutting costs until both approach zero.
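The cost-cutting trap can be illustrated with a toy numerical sketch. All functional forms and parameters below are hypothetical, chosen only to show the loop structure of R3 running in reverse, not to estimate any real HEI:

```python
# Toy sketch of the cost-cutting trap (hypothetical parameters; the point is the
# reinforcing loop, not the numbers): cutting education-quality spending erodes
# reputation, which shrinks enrollment-driven revenue and forces further cuts.
revenue, quality_spend = 100.0, 60.0
for year in range(10):
    reputation = quality_spend / 60.0           # normalized reputation proxy
    revenue = 100.0 * reputation                # enrollment-driven revenue
    quality_spend = max(0.0, revenue - 50.0)    # fixed costs crowd out quality
    print(f"year {year}: revenue={revenue:.1f}, quality_spend={quality_spend:.1f}")
```

In this sketch, revenue collapses within a few periods. The same loop runs in the opposite direction when quality spending exceeds the break-even point, which is the virtuous cycle discussed above.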
Data is a valuable resource for the effective use of AI in HEIs (see, for instance, R4 and B2). Indeed, the more data the HEI collects about all areas (learning effectiveness, job placement, alumni, reputation, admissions, student retention, etc.), the more effective its AI can become. For an HEI, value comes from AI plus data. Therefore, interventions targeting the accumulation of high-quality data can be powerful.
Interventions targeting one variable are not a system’s most potent leverage points. More powerful leverage points include creating new desirable feedback loops and changing the system’s rules or goals in a desirable direction [96].
In addition to the interventions explored here, other scholars can use our CLD as a map for exploring additional policy interventions or scenarios.

5. Discussion

This article takes a novel complex systems approach to how an HEI creates value and how AI affects those value-creation processes. The article explores the effects of AI in higher education using a CLD, and it identifies multiple feedback loops and their interactions. Next, we discuss implications for academic leadership and policymakers, research limitations, and future research directions.

5.1. Lessons for Academic Leadership

AI advances in the form of generative AI create several opportunities for AI transformation, including the promise to bring HEIs closer to the vision of personalized AI assistants that support students, faculty, and administrators. In that context, our research provides a first map of AI causal mechanisms to help HEI leaders navigate an uncharted landscape of opportunities and pitfalls.
Leaders can use the CLD to build intuition and evaluate the benefits and risks of various scenarios and HEI policies. Our discussion of feedback loops in Section 4.10 is a starting point in that direction, but many other policies can be evaluated.
A crucial question for academic leaders is what competencies and skills students will need to find a job. Following our earlier exploration, students should avoid competing head-to-head with AI. Instead, they need foundational human skills that AI lacks, such as critical thinking, planning, complex problem-solving, creativity, lifelong learning, communication, management, and collaboration. Students need to learn and think in ways that differentiate them from machines. If AI becomes ubiquitous in firms, humans will need skills that complement what AI can do well. That includes skills to build, train, deploy, use, and manage AI systems, identify valuable use cases, devise AI strategies, and lead teams or companies. Moreover, students need to acquire those AI-complementary skills in a way (quality, breadth, and depth) that allows them to compete effectively against other humans seeking similar jobs. For instance, managers who use AI effectively may replace those who do not.
HEIs need to monitor changes in the job market [4] and remain adaptive. For instance, a recent study argues that LLMs can transform the role of a data scientist from coding and data-wrangling to assessing and managing analyses performed by AI tools [128]. In that case, skills related to strategic planning, coordinating resources, and overseeing the product life cycle become more critical, and those teaching data scientists must adapt accordingly, perhaps gradually over time.
The effects of AI on productivity and automation are also relevant to what happens to jobs within HEIs. Will AI make instructors, administrators, and staff more productive and their jobs more fulfilling? Will AI replace them in the longer term? Multiple effects play a role simultaneously, and the time horizon matters. However, a crucial framing question is: What does the HEI want to achieve with AI? The university’s policy and mission matter. For instance, a university that does not grow and does not aspire to the highest learning standards may manage with a few instructors, administrators, and staff, provided those roles become more productive and many tasks are automated. However, a student-centered and human-centered university that appreciates its people may succeed by providing a superior education, differentiating itself from competitors focused on cost-cutting.
A related issue is the future direction of AI. Our exploration suggests that the direction of AI advances is not predefined [129], and the social responsibility of a university lies in prioritizing how AI can empower humans by augmenting jobs rather than eliminating them [130]. As a starting point, HEIs could focus on designing and adopting personalized AI assistants for higher education, such as for faculty, students, staff, administrators (including department chairs and deans), advising, and more. At the same time, there is a need for careful integration of generative AI tools into education [131]; during the COVID-19 pandemic, students suffered both academically and socially, and we re-learned that education is a “deeply human act rooted in social interaction” (p. 7). Beyond the boundaries of the education sector, HEIs could promote AI assistants for various roles (e.g., financial analyst, CEO) across all industries and teach students accordingly.
In that direction, our CLD suggests that a single HEI has very little influence over the direction of AI, but multiple HEIs working together can have a meaningful influence. Moreover, similar to the proposals in the healthcare industry [132], there is value in open-source LLMs developed by a community of HEIs. Those insights suggest a trade-off for an HEI: Investment in AI is a tool for getting ahead of its competition, but if it wants to influence the direction of AI meaningfully, the HEI needs to collaborate with other HEIs. Along those lines, AI advances could support educational research that provides novel, rigorously validated insights into teaching and learning methods that could benefit all HEIs.
AI’s promise to accelerate research and scientific discovery is aligned with the knowledge-creation mission of HEIs. However, in the longer term, only large tech companies may have the computing and data resources for complex, large-scale, and high-impact science research, such as Google DeepMind’s AlphaFold for protein folding in biology [133] and discovering thousands of new materials in material science [134,135]. As a result, HEIs may be sidelined unless they partner with big tech companies, the research divide in higher education may get bigger, and big tech firms may become the gatekeepers of consequential research agendas.
Overall, AI promises several benefits but entails challenges, and ultimately, it depends on what policy the HEI wants to follow and how it intends to position itself by leveraging AI-enabled transformation while protecting itself from the associated pitfalls. Regarding generative AI, HEIs are dealing with fast-changing technology and applications, so they need to be adaptive. HEIs are advised to start with small-scale experiments by faculty, students, and staff; learn from them; aggregate the experiences and perceptions; allow for more stability; and then plan and develop more comprehensive policies and guidelines. Leaders must take a balanced and cautious approach. At this point, both businesses and HEIs are exploring how to take advantage of the latest AI innovations. Generative AI is the current novel technology, and it is natural that it has been overhyped and accompanied by an aura that it will solve all our problems. This pattern is typical in technology and tends to recur every few years. AI can bring new benefits and challenges, but it cannot do everything. As long as AI advances, HEIs and AI will co-evolve. Within that process, universities could also learn from partnering with AI firms or other universities.
The complexity associated with the rapid adoption of AI underscores the need for academic leaders who are systems thinkers. They must study the feedback loops that define the value-creation structure and determine the system behavior. Moreover, AI can bring substantial restructuring by creating new feedback loops, rewiring existing ones, and strengthening or weakening others. Leaders should aim to leverage those feedback loops for their benefit. A systems approach appreciates complexity, takes a whole-system view, understands that system behavior over time is often non-trivial and counterintuitive, and considers unintended consequences. For instance, an overreliance on cost-cutting can place an HEI in a self-reinforcing decline. Another underappreciated systemic risk arises from all HEIs uniformly adopting identical AI models and practices, which escalates academic competition.

5.2. Limitations and Future Research Directions

This article provides the first holistic map of AI transformation in HEIs. Future work could enhance and refine that map or go deeper into specific aspects of the map. While the level of analysis here is an HEI, future research could be more micro-focused, taking an in-depth look into particular aspects of a university. An example would be exploring the details of various learning methods and their impact on learning outcomes. Alternatively, future research could be more macro-focused, using the higher education sector as a unit of analysis.
At the sector level, ‘superstar effects’ may be significant in the longer term. A global education marketplace and ubiquitous online access create positive feedback loops where the positive reputation of a school, program, course, or instructor keeps increasing. As a result, superstars may emerge, similar to superstars in the sports or entertainment industries.
Our model suggests that the AI industry plays a significant role because it drives AI advances affecting businesses and HEIs. More work is needed on how established and startup tech and edtech companies affect the broader transformation of the higher education sector. More generally, higher education has a lot to learn from other sectors, such as media and advertising, already transformed by AI and related digital technologies, and this has to be a topic of rigorous future research.
Future research also needs to explore the ethical implications of AI in education, examine the long-term effects of AI on student learning outcomes, and investigate AI’s role in promoting inclusivity and accessibility in higher education. Another promising direction is to consider and evaluate novel business models for higher education.
Future research could study various scenarios or interventions in more detail. For instance, potential decreases or a plateau in AI capabilities through regulations, limitations of current AI approaches, another AI winter, black swan events, or otherwise, could cause significant economic shocks to HEIs and businesses. Approaches to prevent ‘lemon market’ effects, including exit exams, micro-certifications, and employment tests, should be examined. Future educational advances, like customized courses and AI tutoring, will need to be studied empirically.
Because generative AI lowers the cost of knowledge tasks [93], it can have a crucial impact on higher education. In essence, HEIs manage knowledge: they create new knowledge via research, deliver knowledge to students via teaching, and assess learning by asking students to perform knowledge tasks, such as essay writing. Future research could benefit from a thorough exploration of such a knowledge perspective.
Methodologically, the current article focuses on a CLD, or qualitative system dynamics. This does not allow for quantitative evaluation of policy interventions and planning. A natural next step is developing and analyzing quantitative models to derive additional insight into AI in higher education. For instance, a natural next step is to build a system dynamics simulation using a stock-and-flow model. Such a model could consider additional extensions, such as endogenizing HEI competition. However, one could also use other computational modeling approaches, such as agent-based, or analytical modeling, if the aim is to develop a simplified model.
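As one possible starting point for the stock-and-flow simulation suggested above, the sketch below integrates a single enrollment stock with a reputation-driven admissions inflow and a graduation outflow. The structure, functional forms, and parameters are hypothetical placeholders for illustration only; a full model would quantify the loops in Figure 1.

```python
# Minimal stock-and-flow sketch (hypothetical structure and parameters).
dt, horizon = 0.25, 20.0                 # time step and horizon, in years
enrollment = 8_000.0                     # stock: total enrolled students
t = 0.0
while t < horizon:
    reputation = min(1.0, enrollment / 10_000.0)     # auxiliary variable
    admissions = 500.0 + 2_000.0 * reputation        # inflow (students/year)
    graduations = enrollment / 4.0                   # outflow (avg. 4-year stay)
    enrollment += dt * (admissions - graduations)    # Euler integration
    t += dt
print(round(enrollment))
```

With these placeholder parameters, the stock rises smoothly toward an equilibrium near 10,000 students, where the inflow balances the outflow. Extensions would add stocks for reputation, net revenue, AI capability, and data, endogenizing the reinforcing and balancing loops discussed in Section 4.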

6. Conclusions

This article presents the first causal loop diagram of the AI transformation in higher education, providing a holistic view of how important variables interact to drive AI investment and impact. We show that several reinforcing and balancing AI feedback loops work together to shape value creation in an HEI that interacts with the companies that provide jobs and the AI industry that drives AI advances. The model shows that the HEI invests in AI to improve teaching, research, and administration. Still, it must adapt to changes in the job market and take measures to deal with academic integrity problems. Student job placement is a crucial factor for the sustainability of the HEI model; therefore, the HEI needs to emphasize AI-complementary skills for its students. However, HEIs face competitive threats and several traps that may lead to decline. For instance, HEI policies focusing on excessive cost-cutting may reinforce decline. In the long term, the current HEI model will not be viable if AI automation in companies becomes increasingly labor-displacing.
The article makes several contributions. It provides a systemic view of AI in education and proposes that academic leaders become systems thinkers to benefit from AI opportunities. It contributes to our understanding of the AI transformation of higher education from a complex systems perspective that focuses on the etiology and consequences of AI-transformed value creation in HEIs. The article integrates systems thinking and economic concepts, contributing to higher education economics and strategy. Moreover, it contributes to our thinking about how AI can support the sustainability of HEIs and high-quality education, one of the UN’s Sustainable Development Goals. Another significant contribution is connecting the HEI model affected by AI with job market factors, also affected by AI. Still, a systems approach to higher education suggests that we are only starting to explore the impact of AI on that sector. Therefore, the article outlines several directions for future research on AI transformation and provides a basis for developing quantitative models.

Author Contributions

Conceptualization, E.K., O.V.P., and R.S.; formal analysis, E.K. and O.V.P.; writing—original draft preparation, E.K. and R.S.; writing—review and editing, E.K. and O.V.P.; visualization, E.K. and O.V.P.; project administration, E.K. and O.V.P. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Data is contained within the article.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Dwivedi, Y.K.; Kshetri, N.; Hughes, L.; Slade, E.L.; Jeyaraj, A.; Kar, A.K.; Baabdullah, A.M.; Koohang, A.; Raghavan, V.; Ahuja, M.; et al. “So What If ChatGPT Wrote It?” Multidisciplinary Perspectives on Opportunities, Challenges and Implications of Generative Conversational AI for Research, Practice and Policy. Int. J. Inf. Manag. 2023, 71, 102642. [Google Scholar] [CrossRef]
  2. McAfee, A.; Rock, D.; Brynjolfsson, E. How to Capitalize on Generative AI. Available online: https://hbr.org/2023/11/how-to-capitalize-on-generative-ai (accessed on 31 October 2023).
  3. Ford, M. Rise of the Robots: Technology and the Threat of a Jobless Future; Basic Books: New York, NY, USA, 2015. [Google Scholar]
  4. McKinsey Generative AI and the Future of Work in America. Available online: https://www.mckinsey.com/mgi/our-research/generative-ai-and-the-future-of-work-in-america (accessed on 6 December 2023).
  5. Brynjolfsson, E.; McAfee, A. The Second Machine Age; W. W. Norton & Company: New York, NY, USA, 2016. [Google Scholar]
  6. Fütterer, T.; Fischer, C.; Alekseeva, A.; Chen, X.; Tate, T.; Warschauer, M.; Gerjets, P. ChatGPT in Education: Global Reactions to AI Innovations. Sci. Rep. 2023, 13, 15310. [Google Scholar] [CrossRef] [PubMed]
  7. Anders, B.A. Is Using ChatGPT Cheating, Plagiarism, Both, Neither, or Forward Thinking? Patterns 2023, 4, 100694. [Google Scholar] [CrossRef] [PubMed]
  8. Russell Group Russell Group Principles on the Use of Generative AI Tools in Education. Available online: https://russellgroup.ac.uk/news/new-principles-on-use-of-ai-in-education/ (accessed on 10 November 2023).
  9. Chen, L.; Chen, P.; Lin, Z. Artificial Intelligence in Education: A Review. IEEE Access 2020, 8, 75264–75278. [Google Scholar] [CrossRef]
  10. Crompton, H.; Burke, D. Artificial Intelligence in Higher Education: The State of the Field. Int. J. Educ. Technol. High. Educ. 2023, 20, 22. [Google Scholar] [CrossRef]
  11. Zawacki-Richter, O.; Marín, V.I.; Bond, M.; Gouverneur, F. Systematic Review of Research on Artificial Intelligence Applications in Higher Education—Where Are the Educators? Int. J. Educ. Technol. High. Educ. 2019, 16, 39. [Google Scholar] [CrossRef]
  12. Roll, I.; Wylie, R. Evolution and Revolution in Artificial Intelligence in Education. Int. J. Artif. Intell. Educ. 2016, 26, 582–599. [Google Scholar] [CrossRef]
  13. Maphosa, V.; Maphosa, M. Artificial Intelligence in Higher Education: A Bibliometric Analysis and Topic Modeling Approach. Appl. Artif. Intell. 2023, 37, 2261730. [Google Scholar] [CrossRef]
  14. Bahroun, Z.; Anane, C.; Ahmed, V.; Zacca, A. Transforming Education: A Comprehensive Review of Generative Artificial Intelligence in Educational Settings through Bibliometric and Content Analysis. Sustainability 2023, 15, 12983. [Google Scholar] [CrossRef]
  15. Ma, Y.; Siau, K.L. Artificial Intelligence Impacts on Higher Education. In Proceedings of the Thirteenth Midwest Association for Information Systems Conference (MWAIS 2018), St. Louis, MO, USA, 17–18 May 2018; Volume 42, pp. 1–5. [Google Scholar]
  16. Bates, T.; Cobo, C.; Mariño, O.; Wheeler, S. Can Artificial Intelligence Transform Higher Education? Int. J. Educ. Technol. High. Educ. 2020, 17, 42. [Google Scholar] [CrossRef]
  17. Kshetri, N. The Economics of Generative Artificial Intelligence in the Academic Industry. Computer 2023, 56, 77–83. [Google Scholar] [CrossRef]
  18. Gill, S.S.; Xu, M.; Patros, P.; Wu, H.; Kaur, R.; Kaur, K.; Fuller, S.; Singh, M.; Arora, P.; Parlikad, A.K.; et al. Transformative Effects of ChatGPT on Modern Education: Emerging Era of AI Chatbots. Internet Things Cyber-Physical Syst. 2024, 4, 19–23. [Google Scholar] [CrossRef]
  19. Yeralan, S.; Lee, L.A. Generative AI: Challenges to Higher Education. Sustain. Eng. Innov. 2023, 5, 107–116. [Google Scholar] [CrossRef]
  20. Dempere, J.; Modugu, K.; Hesham, A.; Ramasamy, L.K. The Impact of ChatGPT on Higher Education. Front. Educ. 2023, 8, 1206936. [Google Scholar] [CrossRef]
  21. Sterman, J.D. Business Dynamics: Systems Thinking and Modeling for a Complex World; Irwin McGraw-Hill: Boston, MA, USA, 2000; ISBN 007238915X. [Google Scholar]
  22. Katsamakas, E.; Pavlov, O.V. AI and Business Model Innovation: Leverage the AI Feedback Loops. J. Bus. Model. 2020, 8, 22–30. [Google Scholar] [CrossRef]
  23. UN Department of Economic and Social Affairs THE 17 GOALS | Sustainable Development. Available online: https://sdgs.un.org/goals (accessed on 8 February 2022).
  24. Dwivedi, Y.K.; Hughes, L.; Ismagilova, E.; Aarts, G.; Coombs, C.; Crick, T.; Duan, Y.; Dwivedi, R.; Edwards, J.; Eirug, A.; et al. Artificial Intelligence (AI): Multidisciplinary Perspectives on Emerging Challenges, Opportunities, and Agenda for Research, Practice and Policy. Int. J. Inf. Manage. 2021, 57, 101994. [Google Scholar] [CrossRef]
  25. Duan, Y.; Edwards, J.S.; Dwivedi, Y.K. Artificial Intelligence for Decision Making in the Era of Big Data—Evolution, Challenges and Research Agenda. Int. J. Inf. Manage. 2019, 48, 63–71. [Google Scholar] [CrossRef]
  26. Kshetri, N.; Dwivedi, Y.K.; Davenport, T.H.; Panteli, N. Generative Artificial Intelligence in Marketing: Applications, Opportunities, Challenges, and Research Agenda. Int. J. Inf. Manage. 2023, 31, 102716. [Google Scholar] [CrossRef]
  27. Feuerriegel, S.; Hartmann, J.; Janiesch, C.; Zschech, P. Generative AI. Bus. Inf. Syst. Eng. 2023, 66, 111–126. [Google Scholar] [CrossRef]
  28. Fui-Hoon Nah, F.; Zheng, R.; Cai, J.; Siau, K.; Chen, L. Generative AI and ChatGPT: Applications, Challenges, and AI-Human Collaboration. J. Inf. Technol. Case Appl. Res. 2023, 25, 277–304. [Google Scholar] [CrossRef]
  29. Russell, S.; Norvig, P. Artificial Intelligence: A Modern Approach, 4th ed.; Prentice Hall: New York, NY, USA, 2022; ISBN 9780136042594. [Google Scholar]
  30. Zhao, W.X.; Zhou, K.; Li, J.; Tang, T.; Wang, X.; Hou, Y.; Min, Y.; Zhang, B.; Zhang, J.; Dong, Z.; et al. A Survey of Large Language Models. arXiv 2023, arXiv:2303.18223. [Google Scholar]
  31. Popenici, S.; Kerr, S. Exploring the Impact of Artificial Intelligence on Teaching and Learning in Higher Education. Res. Pract. Technol. Enhanc. Learn. 2017, 12, 22. [Google Scholar] [CrossRef] [PubMed]
  32. Quy, V.K.; Thanh, B.T.; Chehri, A.; Linh, D.M.; Tuan, D.A. AI and Digital Transformation in Higher Education: Vision and Approach of a Specific University in Vietnam. Sustainability 2023, 15, 11093. [Google Scholar] [CrossRef]
  33. Timms, M.J. Letting Artificial Intelligence in Education Out of the Box: Educational Cobots and Smart Classrooms. Int. J. Artif. Intell. Educ. 2016, 26, 701–712. [Google Scholar] [CrossRef]
  34. Chen, X.; Zou, D.; Xie, H.; Cheng, G.; Liu, C. Two Decades of Artificial Intelligence in Education. Educ. Technol. Soc. 2022, 25, 28–47. [Google Scholar] [CrossRef]
  35. Dziuban, C.; Moskal, P.; Parker, L.; Campbell, M.; Howlin, C.; Johnson, C. Adaptive Learning: A Stabilizing Influence across Disciplines and Universities. Online Learn. J. 2018, 22, 7–39. [Google Scholar] [CrossRef]
  36. Pillai, R.; Sivathanu, B.; Metri, B.; Kaushik, N. Students’ Adoption of AI-Based Teacher-Bots (T-Bots) for Learning in Higher Education. Inf. Technol. People 2023, 37, 328–355. [Google Scholar] [CrossRef]
  37. Bayne, S. Teacherbot: Interventions in Automated Teaching. Teach. High. Educ. 2015, 20, 455–467. [Google Scholar] [CrossRef]
  38. Gillani, N.; Eynon, R.; Chiabaut, C.; Finkel, K. Unpacking the “Black Box” of AI in Education. Educ. Technol. Soc. 2023, 26, 99–111. [Google Scholar] [CrossRef]
  39. Muralidharan, K.; Singh, A.; Ganimian, A.J. Disrupting Education? Experimental Evidence on Technology-Aided Instruction in India. Am. Econ. Rev. 2019, 109, 1426–1460. [Google Scholar] [CrossRef]
  40. Dai, Y.; Liu, A.; Lim, C.P. Reconceptualizing ChatGPT and Generative AI as a Student-Driven Innovation in Higher Education. Procedia CIRP 2023, 119, 84–90. [Google Scholar] [CrossRef]
  41. Mollick, E.; Mollick, L. Assigning AI: Seven Approaches for Students, with Prompts. arXiv 2023, arXiv:2306.10052. [Google Scholar] [CrossRef]
  42. Extance, A. ChatGPT Enters the Classroom. Nature 2023, 623, 474–477. [Google Scholar] [CrossRef]
  43. Grassini, S. Shaping the Future of Education: Exploring the Potential and Consequences of AI and ChatGPT in Educational Settings. Educ. Sci. 2023, 13, 692. [Google Scholar] [CrossRef]
  44. Malinka, K.; Peresíni, M.; Firc, A.; Hujnák, O.; Janus, F. On the Educational Impact of ChatGPT: Is Artificial Intelligence Ready to Obtain a University Degree? In Proceedings of the 2023 Conference on Innovation and Technology in Computer Science Education, Turku, Finland, 10–12 July 2023; ACM: New York, NY, USA, 2023; Volume 1, pp. 47–53. [Google Scholar]
  45. Mollick, E. The Homework Apocalypse. Available online: https://www.oneusefulthing.org/p/the-homework-apocalypse (accessed on 11 October 2023).
  46. Pickering, E.; Schuller, C. Widespread Usage of Chegg for Academic Misconduct: Perspective from an Audit of Chegg Usage Within an Australian Engineering School. EdArXiv 2022. [Google Scholar] [CrossRef]
  47. Sullivan, M.; Kelly, A.; McLaughlan, P. ChatGPT in Higher Education: Considerations for Academic Integrity and Student Learning. J. Appl. Learn. Teach. 2023, 6, 1–10. [Google Scholar] [CrossRef]
  48. Bonevac, D.A. The Signaling Device. Acad. Quest. 2018, 31, 506–511. [Google Scholar] [CrossRef]
  49. Swift, S.A.; Moore, D.A.; Sharek, Z.S.; Gino, F. Inflated Applicants: Attribution Errors in Performance Evaluation by Professionals. PLoS ONE 2013, 8, e69258. [Google Scholar] [CrossRef] [PubMed]
  50. Noorbehbahani, F.; Mohammadi, A.; Aminazadeh, M. A Systematic Review of Research on Cheating in Online Exams from 2010 to 2021. Educ. Inf. Technol. 2022, 27, 8413–8460. [Google Scholar] [CrossRef]
  51. Willems, J. ChatGPT at Universities—The Least of Our Concerns. SSRN Electron. J. 2023. [Google Scholar] [CrossRef]
  52. Mathewson, T.G. AI Detection Tools Falsely Accuse International Students of Cheating. Available online: https://themarkup.org/machine-learning/2023/08/14/ai-detection-tools-falsely-accuse-international-students-of-cheating (accessed on 11 October 2023).
  53. Michel-Villarreal, R.; Vilalta-Perdomo, E.; Salinas-Navarro, D.E.; Thierry-Aguilera, R.; Gerardou, F.S. Challenges and Opportunities of Generative AI for Higher Education as Explained by ChatGPT. Educ. Sci. 2023, 13, 856. [Google Scholar] [CrossRef]
  54. Van Noorden, R.; Perkel, J.M. AI and Science: What 1,600 Researchers Think. Nature 2023, 621, 672–675. [Google Scholar] [CrossRef]
  55. Ball, P. Is AI Leading to a Reproducibility Crisis in Science? Nature 2023, 624, 22–25. [Google Scholar] [CrossRef] [PubMed]
  56. van Dis, E.A.M.; Bollen, J.; Zuidema, W.; van Rooij, R.; Bockting, C.L. ChatGPT: Five Priorities for Research. Nature 2023, 614, 224–226. [Google Scholar] [CrossRef]
  57. Susarla, A.; Gopal, R.; Thatcher, J.B.; Sarker, S. The Janus Effect of Generative AI: Charting the Path for Responsible Conduct of Scholarly Activities in Information Systems. Inf. Syst. Res. 2023, 34, 399–408. [Google Scholar] [CrossRef]
  58. Bockting, C.L.; van Dis, E.A.M.; van Rooij, R.; Zuidema, W.; Bollen, J. Living Guidelines for Generative AI—Why Scientists Must Oversee Its Use. Nature 2023, 622, 693–696. [Google Scholar] [CrossRef] [PubMed]
  59. Hassabis, D. Using AI to Accelerate Scientific Discovery; Institute for Ethics in AI; Oxford University: Oxford, UK, 2022. [Google Scholar]
  60. Wang, H.; Fu, T.; Du, Y.; Gao, W.; Huang, K.; Liu, Z.; Chandak, P.; Liu, S.; Van Katwyk, P.; Deac, A.; et al. Scientific Discovery in the Age of Artificial Intelligence. Nature 2023, 620, 47–60. [Google Scholar] [CrossRef]
  61. Thorp, H.H. ChatGPT Is Fun, but Not an Author. Science 2023, 379, 313. [Google Scholar] [CrossRef]
  62. Davies, A.; Veličković, P.; Buesing, L.; Blackwell, S.; Zheng, D.; Tomašev, N.; Tanburn, R.; Battaglia, P.; Blundell, C.; Juhász, A.; et al. Advancing Mathematics by Guiding Human Intuition with AI. Nature 2021, 600, 70–74. [Google Scholar] [CrossRef]
  63. Picciano, A.G. Planning for Online Education: A Systems Model. Online Learn. 2015, 19, 142–158. [Google Scholar] [CrossRef]
  64. Picciano, A.G. Artificial Intelligence and the Academy’s Loss of Purpose. Online Learn. 2019, 23, 270–284. [Google Scholar] [CrossRef]
  65. Akiba, D.; Fraboni, M.C. AI-Supported Academic Advising: Exploring ChatGPT’s Current State and Future Potential toward Student Empowerment. Educ. Sci. 2023, 13, 885. [Google Scholar] [CrossRef]
  66. Daniel, B. Big Data and Analytics in Higher Education: Opportunities and Challenges. Br. J. Educ. Technol. 2015, 46, 904–920. [Google Scholar] [CrossRef]
  67. Cao, Y.; Li, S.; Liu, Y.; Yan, Z.; Dai, Y.; Yu, P.S.; Sun, L. A Comprehensive Survey of AI-Generated Content (AIGC): A History of Generative AI from GAN to ChatGPT. arXiv 2023, arXiv:2303.04226. [Google Scholar]
  68. Chomsky, N. The False Promise of ChatGPT. Available online: https://www.nytimes.com/2023/03/08/opinion/noam-chomsky-chatgpt-ai.html (accessed on 3 November 2023).
  69. Mitchell, M. How Do We Know How Smart AI Systems Are? Science 2023, 381, eadj5957. [Google Scholar] [CrossRef] [PubMed]
  70. Floridi, L. AI as Agency without Intelligence: On ChatGPT, Large Language Models, and Other Generative Models. Philos. Technol. 2023, 36, 15. [Google Scholar] [CrossRef]
  71. Baker, R.S.; Hawn, A. Algorithmic Bias in Education. Int. J. Artif. Intell. Educ. 2022, 32, 1052–1092. [Google Scholar] [CrossRef]
  72. Ivanov, S. The Dark Side of Artificial Intelligence in Higher Education. Serv. Ind. J. 2023, 43, 1055–1082. [Google Scholar] [CrossRef]
  73. Samuelson, P. Generative AI Meets Copyright. Science 2023, 381, 158–161. [Google Scholar] [CrossRef]
  74. Wired Millions of Workers Are Training AI Models for Pennies. Available online: https://www.wired.com/story/millions-of-workers-are-training-ai-models-for-pennies/ (accessed on 9 December 2023).
  75. Zuboff, S. The Age of Surveillance Capitalism: The Fight for a Human Future at the New Frontier of Power; PublicAffairs: New York, NY, USA, 2019; ISBN 1610395697. [Google Scholar]
  76. Popenici, S.; Rudolph, J.; Tan, S.; Tan, S. A Critical Perspective on Generative AI and Learning Futures. An Interview with Stefan Popenici. J. Appl. Learn. Teach. 2023, 6, 311–331. [Google Scholar] [CrossRef]
  77. Bearman, M.; Ryan, J.; Ajjawi, R. Discourses of Artificial Intelligence in Higher Education: A Critical Literature Review. High. Educ. 2023, 86, 369–385. [Google Scholar] [CrossRef]
  78. Li, P.; Yang, J.; Islam, M.A.; Ren, S. Making AI Less “Thirsty”: Uncovering and Addressing the Secret Water Footprint of AI Models. arXiv 2023, arXiv:2304.03271. [Google Scholar]
  79. Bogina, V.; Hartman, A.; Kuflik, T.; Shulner-Tal, A. Educating Software and AI Stakeholders about Algorithmic Fairness, Accountability, Transparency and Ethics. Int. J. Artif. Intell. Educ. 2022, 32, 808–833. [Google Scholar] [CrossRef]
  80. Kasneci, E.; Sessler, K.; Küchemann, S.; Bannert, M.; Dementieva, D.; Fischer, F.; Gasser, U.; Groh, G.; Günnemann, S.; Hüllermeier, E.; et al. ChatGPT for Good? On Opportunities and Challenges of Large Language Models for Education. Learn. Individ. Differ. 2023, 103, 102274. [Google Scholar] [CrossRef]
  81. Stahl, B.C.; Eke, D. The Ethics of ChatGPT—Exploring the Ethical Issues of an Emerging Technology. Int. J. Inf. Manag. 2024, 74, 102700. [Google Scholar] [CrossRef]
  82. Miao, F.; Holmes, W. Guidance for Generative AI in Education and Research; UNESCO: Paris, France, 2023; ISBN 9789231006128. [Google Scholar]
  83. Simbeck, K. They Shall Be Fair, Transparent, and Robust: Auditing Learning Analytics Systems. AI Ethics 2023, 4, 555–571. [Google Scholar] [CrossRef]
  84. Agrawal, A.; Gans, J.S.; Goldfarb, A. Do We Want Less Automation? AI May Provide a Path to Decrease Inequality. Science 2023, 381, 155–158. [Google Scholar] [CrossRef] [PubMed]
  85. Acemoglu, D.; Restrepo, P. The Wrong Kind of AI? Artificial Intelligence and the Future of Labour Demand. Cambridge J. Reg. Econ. Soc. 2020, 13, 25–35. [Google Scholar] [CrossRef]
  86. Acemoglu, D.; Restrepo, P. The Race between Man and Machine: Implications of Technology for Growth, Factor Shares, and Employment. Am. Econ. Rev. 2018, 108, 1488–1542. [Google Scholar] [CrossRef]
  87. MacCrory, F.; Westerman, G.; Alhammadi, Y.; Brynjolfsson, E. Racing with and against the Machine: Changes in Occupational Skill Composition in an Era of Rapid Technological Advance. In Proceedings of the 35th International Conference on Information Systems, Auckland, New Zealand, 14–17 December 2014; pp. 1–17. [Google Scholar]
  88. Felten, E.W.; Raj, M.; Seamans, R. How Will Language Modelers like ChatGPT Affect Occupations and Industries? arXiv 2023, arXiv:2303.01157. [Google Scholar] [CrossRef]
  89. Eloundou, T.; Manning, S.; Mishkin, P.; Rock, D. GPTs Are GPTs: An Early Look at the Labor Market Impact Potential of Large Language Models. arXiv 2023, arXiv:2303.10130. [Google Scholar]
  90. Peng, S.; Kalliamvakou, E.; Cihon, P.; Demirer, M. The Impact of AI on Developer Productivity: Evidence from GitHub Copilot. arXiv 2023, arXiv:2302.06590. [Google Scholar]
  91. Kalliamvakou, E. Research: Quantifying GitHub Copilot’s Impact on Developer Productivity and Happiness. GitHub Blog, 7 September 2022. [Google Scholar]
  92. Noy, S.; Zhang, W. Experimental Evidence on the Productivity Effects of Generative Artificial Intelligence. Science 2023, 381, 187–192. [Google Scholar] [CrossRef] [PubMed]
  93. Brynjolfsson, E.; Li, D.; Raymond, L.R. Generative AI at Work. Available online: http://www.nber.org/papers/w31161 (accessed on 20 November 2023).
  94. Dell’Acqua, F.; McFowland, E.; Mollick, E.R.; Lifshitz-Assaf, H.; Kellogg, K.; Rajendran, S.; Krayer, L.; Candelon, F.; Lakhani, K.R. Navigating the Jagged Technological Frontier: Field Experimental Evidence of the Effects of AI on Knowledge Worker Productivity and Quality; Working Paper 24-013; Harvard Business School: Boston, MA, USA, 2023. [Google Scholar]
  95. Hui, X.; Reshef, O.; Zhou, L. The Short-Term Effects of Generative Artificial Intelligence on Employment: Evidence from an Online Labor Market. SSRN Electron. J. 2023, 1–30. [Google Scholar] [CrossRef]
  96. Meadows, D.H. Thinking in Systems; Chelsea Green Publishing: Hartford, VT, USA, 2008; ISBN 9781844077267. [Google Scholar]
  97. Homer, J.; Oliva, R. Maps and Models in System Dynamics: A Response to Coyle. Syst. Dyn. Rev. 2001, 17, 347–355. [Google Scholar] [CrossRef]
  98. Wittenborn, A.K.; Rahmandad, H.; Rick, J.; Hosseinichimeh, N. Depression as a Systemic Syndrome: Mapping the Feedback Loops of Major Depressive Disorder. Psychol. Med. 2016, 46, 551–562. [Google Scholar] [CrossRef] [PubMed]
  99. Barbrook-Johnson, P.; Penn, A.S. Systems Mapping—How to Build and Use Causal Models of Systems; Palgrave Macmillan: London, UK, 2022; ISBN 9783031018336. [Google Scholar]
  100. Amissah, M.; Gannon, T.; Monat, J. What Is Systems Thinking? Expert Perspectives from the WPI Systems Thinking Colloquium of 2 October 2019. Systems 2020, 8, 6. [Google Scholar] [CrossRef]
  101. Crielaard, L.; Quax, R.; Sawyer, A.D.M.; Vasconcelos, V.V.; Nicolaou, M.; Stronks, K.; Sloot, P.M.A. Using Network Analysis to Identify Leverage Points Based on Causal Loop Diagrams Leads to False Inference. Sci. Rep. 2023, 13, 21046. [Google Scholar] [CrossRef]
  102. Senge, P. The Fifth Discipline: The Art and Practice of the Learning Organization; Doubleday/Currency: New York, NY, USA, 1990. [Google Scholar]
  103. Casadesus-Masanell, R.; Ricart, J.E. From Strategy to Business Models and onto Tactics. Long Range Plann. 2010, 43, 195–215. [Google Scholar] [CrossRef]
  104. Cassidy, R.; Borghi, J.; Semwanga, A.R.; Binyaruka, P.; Singh, N.S.; Blanchet, K. How to Do (or Not to Do)…using Causal Loop Diagrams for Health System Research in Low and Middle-Income Settings. Health Policy Plan. 2022, 37, 1328–1336. [Google Scholar] [CrossRef]
  105. Yourkavitch, J.; Lich, K.H.; Flax, V.L.; Okello, E.S.; Kadzandira, J.; Katahoire, A.R.; Munthali, A.C.; Thomas, J.C. Interactions among Poverty, Gender, and Health Systems Affect Women’s Participation in Services to Prevent HIV Transmission from Mother to Child: A Causal Loop Analysis. PLoS ONE 2018, 13, e0197239. [Google Scholar] [CrossRef] [PubMed]
  106. Gaveikaite, V.; Grundstrom, C.; Lourida, K.; Winter, S.; Priori, R.; Chouvarda, I.; Maglaveras, N. Developing a Strategic Understanding of Telehealth Service Adoption for COPD Care Management: A Causal Loop Analysis of Healthcare Professionals. PLoS ONE 2020, 15, e0229619. [Google Scholar] [CrossRef] [PubMed]
  107. Voulvoulis, N.; Giakoumis, T.; Hunt, C.; Kioupi, V.; Petrou, N.; Souliotis, I.; Vaghela, C.; binti Wan Rosely, W. Systems Thinking as a Paradigm Shift for Sustainability Transformation. Glob. Environ. Change 2022, 75, 102544. [Google Scholar] [CrossRef]
  108. Katsamakas, E.; Pavlov, O.V. Artificial Intelligence Feedback Loops in Mobile Platform Business Models. Int. J. Wirel. Inf. Networks 2022, 29, 250–256. [Google Scholar] [CrossRef]
  109. von Kutzschenbach, M.; Schmid, A.; Schoenenberger, L. Using Feedback Systems Thinking to Explore Theories of Digital Business for Medtech Companies. In Business Information Systems and Technology 4.0; Springer: Cham, Switzerland, 2018; Volume 141, pp. 161–175. [Google Scholar]
  110. Sahin, O.; Salim, H.; Suprun, E.; Richards, R.; MacAskill, S.; Heilgeist, S.; Rutherford, S.; Stewart, R.A.; Beal, C.D. Developing a Preliminary Causal Loop Diagram for Understanding the Wicked Complexity of the COVID-19 Pandemic. Systems 2020, 8, 20. [Google Scholar] [CrossRef]
  111. Shams Esfandabadi, Z.; Ranjbari, M. Exploring Carsharing Diffusion Challenges through Systems Thinking and Causal Loop Diagrams. Systems 2023, 11, 93. [Google Scholar] [CrossRef]
  112. Galbraith, P.L. System Dynamics and University Management. Syst. Dyn. Rev. 1998, 14, 69–84. [Google Scholar] [CrossRef]
  113. Strauss, L.M.; Borenstein, D. A System Dynamics Model for Long-Term Planning of the Undergraduate Education in Brazil. High. Educ. 2015, 69, 375–397. [Google Scholar] [CrossRef]
  114. Barlas, Y.; Diker, V.G. A Dynamic Simulation Game (UNIGAME) for Strategic University Management. Simul. Gaming 2000, 31, 331–358. [Google Scholar] [CrossRef]
  115. Oyo, B.; Williams, D. Re-Conceptualisation of Higher Education Quality Management Problems Using Feedback Systems Thinking. Int. J. Manag. Educ. 2009, 3, 220–233. [Google Scholar] [CrossRef]
  116. Pavlov, O.V.; Katsamakas, E. Will Colleges Survive the Storm of Declining Enrollments? A Computational Model. PLoS ONE 2020, 15, e0236872. [Google Scholar] [CrossRef] [PubMed]
  117. Rissanen, M.; Savolainen, J.; Collan, M. Analyzing the Finnish University Funding System through System-Based Simulation. Policy Futur. Educ. 2024. [Google Scholar] [CrossRef]
  118. Pavlov, O.V.; Katsamakas, E. Tuition Too High? Blame Competition. J. Econ. Behav. Organ. 2023, 213, 409–431. [Google Scholar] [CrossRef]
  119. Faham, E.; Rezvanfar, A.; Movahed Mohammadi, S.H.; Rajabi Nohooji, M. Using System Dynamics to Develop Education for Sustainable Development in Higher Education with the Emphasis on the Sustainability Competencies of Students. Technol. Forecast. Soc. Change 2017, 123, 307–326. [Google Scholar] [CrossRef]
  120. Dhirasasna, N.; Sahin, O. A Multi-Methodology Approach to Creating a Causal Loop Diagram. Systems 2019, 7, 42. [Google Scholar] [CrossRef]
  121. Sterman, J.D.; Henderson, R.; Beinhocker, E.D.; Newman, L.I. Getting Big Too Fast: Strategic Dynamics with Increasing Returns and Bounded Rationality. Manag. Sci. 2007, 53, 683–696. [Google Scholar] [CrossRef]
  122. Martinez-Moyano, I.J.; Richardson, G.P. Best Practices in System Dynamics Modeling. Syst. Dyn. Rev. 2013, 29, 102–123. [Google Scholar] [CrossRef]
  123. Katsamakas, E.; Miliaresis, K.; Pavlov, O.V. Digital Platforms for the Common Good: Social Innovation for Active Citizenship and ESG. Sustainability 2022, 14, 639. [Google Scholar] [CrossRef]
  124. Angliss, K. An Alternative Approach to Measuring University Reputation. Corp. Reput. Rev. 2022, 25, 33–49. [Google Scholar] [CrossRef]
  125. Pucciarelli, F.; Kaplan, A. Competition and Strategy in Higher Education: Managing Complexity and Uncertainty. Bus. Horiz. 2016, 59, 311–320. [Google Scholar] [CrossRef]
  126. Burns, J.R.; Musa, P. Structural Validation of Causal Loop Diagrams. In Proceedings of the 19th International Conference of the System Dynamics Society, Atlanta, GA, USA, 23–27 July 2001. [Google Scholar]
  127. Hagiu, A.; Wright, J. Data-enabled Learning, Network Effects, and Competitive Advantage. RAND J. Econ. 2023, 54, 638–667. [Google Scholar] [CrossRef]
  128. Tu, X.; Zou, J.; Su, W.J.; Zhang, L. What Should Data Science Education Do with Large Language Models? arXiv 2023, arXiv:2307.02792. [Google Scholar] [CrossRef]
  129. Acemoglu, D.; Johnson, S. Power and Progress: Our Thousand-Year Struggle Over Technology and Prosperity; PublicAffairs: New York, NY, USA, 2023. [Google Scholar]
  130. Acemoglu, D.; Johnson, S. What’s Wrong with ChatGPT? Available online: https://www.project-syndicate.org/commentary/chatgpt-ai-big-tech-corporate-america-investing-in-eliminating-workers-by-daron-acemoglu-and-simon-johnson-2023-02 (accessed on 12 November 2023).
  131. Giannini, S. Generative AI and the Future of Education. Available online: https://unesdoc.unesco.org/ark:/48223/pf0000371316?posInSet=4&queryId=N-EXPLORE-f75771a0-8dfc-4471-9183-e25037d0705c (accessed on 11 December 2023).
  132. Toma, A.; Senkaiahliyan, S.; Lawler, P.R.; Rubin, B.; Wang, B. Generative AI Could Revolutionize Health Care—But Not If Control Is Ceded to Big Tech. Nature 2023, 624, 36–38. [Google Scholar] [CrossRef] [PubMed]
  133. Jumper, J.; Evans, R.; Pritzel, A.; Green, T.; Figurnov, M.; Ronneberger, O.; Tunyasuvunakool, K.; Bates, R.; Žídek, A.; Potapenko, A.; et al. Highly Accurate Protein Structure Prediction with AlphaFold. Nature 2021, 596, 583–589. [Google Scholar] [CrossRef]
  134. Merchant, A.; Batzner, S.; Schoenholz, S.S.; Aykol, M.; Cheon, G.; Cubuk, E.D. Scaling Deep Learning for Materials Discovery. Nature 2023, 624, 80–85. [Google Scholar] [CrossRef]
  135. Szymanski, N.J.; Rendy, B.; Fei, Y.; Kumar, R.E.; He, T.; Milsted, D.; McDermott, M.J.; Gallant, M.; Cubuk, E.D.; Merchant, A.; et al. An Autonomous Laboratory for the Accelerated Synthesis of Novel Materials. Nature 2023, 624, 86–91. [Google Scholar] [CrossRef]
Figure 1. AI and the transformation of a higher education institution (HEI). HEI investment in AI aggregates investment for teaching, learning, research, admissions, student advising, and alumni relations. HEI investment in quality education aggregates all other investments in faculty, facilities, methods, advising, etc.
Table 1. Model variables and their brief descriptions (the relevant theoretical framework section is listed in parentheses).
| # | Variable | Brief Description |
|---|----------|-------------------|
| 1 | AI R&D | Total AI R&D leading to AI advances (2.1) |
| 2 | AI capabilities | Capabilities of AI resulting from AI advances (2.1) |
| 3 | Business investment in AI | Business sector investment in AI applications (2.1 and 2.3) |
| 4 | Total AI demand | Total demand for AI in the economy (2.3) |
| 5 | Automation in business | Level of business automation using AI (2.3) |
| 6 | Business benefit from automation | The value businesses gain from AI (2.3) |
| 7 | HEI investment in education | Level of the HEI’s education investment (2.2) |
| 8 | HEI student learning | Student knowledge acquisition in the HEI (2.2.1) |
| 9 | HEI student job placement | Successful HEI graduate employment (2.3) |
| 10 | HEI relative reputation | Overall HEI reputation (perceived quality) (2.2) |
| 11 | Enrollment in HEI | Total student enrollment in the HEI (standard HEI metric) |
| 12 | HEI net revenues | HEI revenues minus costs (standard HEI metric) |
| 13 | HEI investment in AI | The HEI’s AI funding (2.2) |
| 14 | Learning analytics, tools, and data | Level of learning analytics use in the HEI (2.2.1) |
| 15 | Self-learning | Independent learning by students (2.2.1) |
| 16 | HEI alumni network | Size of the HEI’s alumni network (2.2.4) |
| 17 | Alumni giving | Level of alumni giving to the HEI (2.2.4) |
| 18 | Total AI demand from HEIs | Total AI needs of colleges and universities (2.2) |
| 19 | Academic integrity problems (student cheating) | Violations of academic standards in the HEI (2.2.2) |
| 20 | Measures to deal with AIPs | HEI efforts against academic misconduct (2.2.2) |
| 21 | Data about AIPs | Data about academic misconduct (2.2.2) |
| 22 | Research productivity | Scholarly output by HEI faculty (2.2.3) |
| 23 | HEI operating costs | The HEI’s operational expenses (standard HEI metric) |
| 24 | Personalized recruitment and advising | AI-supported student recruitment and help (2.2.4) |
| 25 | Alumni engagement | HEI engagement with the alumni network (2.2.4) |
| 26 | Demand for AI-skilled workforce | Business need for AI-skilled employees (2.3) |
| 27 | HEI teaching AI skills | Quality of AI-related education in the HEI (2.2.1) |
| 28 | Competitor reputation | Reputation of HEI competitors (2.2) |
| 29 | AI investment by other HEIs | AI funding by other colleges and universities (2.2) |
| 30 | AI risks | Bias, security, and other AI risks (2.2.5) |
Table 2. Feedback loops and their brief descriptions. Names of reinforcing loops begin with the letter R; names of balancing loops begin with the letter B.
| Name | Variables | Brief Description |
|------|-----------|-------------------|
| R1 | 1, 2, 3, 4 | Business investment drives AI R&D and AI advances |
| R2 | 3, 5, 6 | Benefits from automation drive business investment in AI |
| R3 | 7, 8, 9, 10, 11, 12 | The HEI creates value (and revenues) through quality education |
| R4 | 13, 14, 8, 9, 10, 11, 12 | The HEI invests in AI to improve learning |
| R5 | 2, 15, 8, 9, 10, 11, 12, 13, 18, 4, 1 | AI facilitates students’ self-learning |
| R6 | 22, 10, 11, 12, 13 | AI can support research productivity and HEI reputation |
| R7 | 22, 8, 9, 10, 11, 12, 13 | AI supports research that contributes to student learning |
| R8 | 2, 13, 18, 4, 1 | Advances in AI motivate the HEI to invest more in AI |
| R9 | 23, 12, 13 | The HEI uses AI to lower operating costs |
| R10 | 24, 11, 12, 13 | AI supports admissions and student advising |
| R11 | 13, 25, 17, 12 | The HEI uses AI to support alumni engagement and giving |
| R12 | 26, 27, 9, 10, 11, 12, 13, 18, 4, 1, 2, 3, 5 | The HEI teaches AI skills in response to business demand for an AI-skilled workforce |
| R13 | 9, 10 | The HEI’s reputation and job placement reinforce each other |
| R14 | 9, 16 | The alumni network helps job placement, which in turn grows the alumni network |
| R15 | 20, 19, 8, 9, 10, 11, 12, 13 | The HEI benefits from measures to deal with academic integrity problems (AIPs) |
| B1 | 2, 19, 8, 9, 10, 11, 12, 13, 18, 4, 1 | AI advances lead to more AIPs, which hurt the HEI |
| B2 | 19, 21, 20 | The HEI’s efforts to deal with AIPs |
| B3 | 5, 9, 10, 11, 12, 13, 18, 4, 1, 2, 3 | The job-substitution effect of AI hurts HEI job placement |
| B4 | 30, 10, 11, 12, 13 | AI risks can harm the HEI’s reputation |
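
The loop structure in Tables 1 and 2 can also be inspected programmatically. The following is a small, hypothetical Python sketch (an illustration, not part of the article's model): it encodes the variable IDs from Table 1 and the loop membership from Table 2 as plain data, checks that every loop references only defined variables, and counts how many loops each variable participates in.

```python
from collections import Counter

# Variable IDs and names, as listed in Table 1.
VARIABLES = {
    1: "AI R&D", 2: "AI capabilities", 3: "Business investment in AI",
    4: "Total AI demand", 5: "Automation in business",
    6: "Business benefit from automation", 7: "HEI investment in education",
    8: "HEI student learning", 9: "HEI student job placement",
    10: "HEI relative reputation", 11: "Enrollment in HEI",
    12: "HEI net revenues", 13: "HEI investment in AI",
    14: "Learning analytics, tools, and data", 15: "Self-learning",
    16: "HEI alumni network", 17: "Alumni giving",
    18: "Total AI demand from HEIs", 19: "Academic integrity problems",
    20: "Measures to deal with AIPs", 21: "Data about AIPs",
    22: "Research productivity", 23: "HEI operating costs",
    24: "Personalized recruitment and advising", 25: "Alumni engagement",
    26: "Demand for AI-skilled workforce", 27: "HEI teaching AI skills",
    28: "Competitor reputation", 29: "AI investment by other HEIs",
    30: "AI risks",
}

# Feedback loops from Table 2, as ordered variable-ID walks that close back
# on their first element. "R" = reinforcing, "B" = balancing.
LOOPS = {
    "R1": [1, 2, 3, 4], "R2": [3, 5, 6],
    "R3": [7, 8, 9, 10, 11, 12],
    "R4": [13, 14, 8, 9, 10, 11, 12],
    "R5": [2, 15, 8, 9, 10, 11, 12, 13, 18, 4, 1],
    "R6": [22, 10, 11, 12, 13],
    "R7": [22, 8, 9, 10, 11, 12, 13],
    "R8": [2, 13, 18, 4, 1],
    "R9": [23, 12, 13],
    "R10": [24, 11, 12, 13],
    "R11": [13, 25, 17, 12],
    "R12": [26, 27, 9, 10, 11, 12, 13, 18, 4, 1, 2, 3, 5],
    "R13": [9, 10], "R14": [9, 16],
    "R15": [20, 19, 8, 9, 10, 11, 12, 13],
    "B1": [2, 19, 8, 9, 10, 11, 12, 13, 18, 4, 1],
    "B2": [19, 21, 20],
    "B3": [5, 9, 10, 11, 12, 13, 18, 4, 1, 2, 3],
    "B4": [30, 10, 11, 12, 13],
}

def loop_participation(loops):
    """Count how many loops each variable ID appears in."""
    counts = Counter()
    for ids in loops.values():
        counts.update(set(ids))  # each variable counts once per loop
    return counts

# Sanity check: every loop references only variables defined in Table 1.
assert all(v in VARIABLES for ids in LOOPS.values() for v in ids)

counts = loop_participation(LOOPS)
for var_id, n in counts.most_common(4):
    print(f"{VARIABLES[var_id]}: in {n} of {len(LOOPS)} loops")
```

On the Table 2 data, "HEI net revenues" and "HEI investment in AI" each appear in 13 of the 19 loops, reflecting how many of the mapped feedback paths pass through the HEI's revenue-to-AI-investment channel.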
Katsamakas, E.; Pavlov, O.V.; Saklad, R. Artificial Intelligence and the Transformation of Higher Education Institutions: A Systems Approach. Sustainability 2024, 16, 6118. https://doi.org/10.3390/su16146118
