1. Introduction
The integration of generative artificial intelligence (GenAI) into higher education (HE) presents a transformative potential that extends beyond simple technological advancement. This review explores the multifaceted impact of GenAI, drawing parallels with the “more-than-human” world as well as other creative and artistic domains. The term “more-than-human” is used here to describe “the real relation between our species and the countless other shapes of sensitivity and sentience with whom our lives are entangled” [1]. By considering GenAI in the context of broader ethical and ecological systems, we can emphasize the importance of humility in knowledge acquisition, transparency, and ethical engagement. As we wrestle with the promises and perils of GenAI in HE, it will be necessary to balance its benefits with ethical considerations, fostering an environment that promotes holistic student development and respects the complex interplay between technology and humanity.
Whilst still in its infancy, the literature to date on GenAI and its application in HE has been largely optimistic. Common themes include enthusiasm for GenAI’s performance and capabilities across a wide range of academic disciplines [2,3,4], the importance of integrating GenAI training into educational curricula and practice [5], and its role as a personal tutor [6]. Differences in findings often stem from discipline-specific needs and the varied impact on educational practices and student–educator relationships, but there is widespread interest among students and faculty, who express excitement and motivation towards GenAI and who see these tools as useful for both academic and professional development [7]. Broadly, these perceived benefits relate to personalized learning support, feedback, and assistance with tasks such as writing, brainstorming, data analysis, and the broader research process. There is a strong belief that GenAI will play a significant role in certain future careers, and students appreciate the potential benefits of learning processes that prepare them for professional environments that will be increasingly GenAI driven [8]. However, concerns about the accuracy and reliability of GenAI-generated information are common among students and faculty alike, highlighting the need for careful, critical evaluation of GenAI outputs [9]. Ethical considerations (including data privacy), the potential for misuse and over-reliance, and AI-mediated shifts in relationships between students, faculty, and institutions are also often cited [10,11,12].
Much like the advent of the digital calculator or the internet, GenAI represents a truly disruptive technology. Twenty-five years ago, it was impossible to foresee all the ramifications of the internet, but it profoundly altered people’s lives. Similarly, while we could not have anticipated some of the deeply negative impacts of social media (e.g., the impact on mental health and the spread of misinformation), we cannot fully predict all of the outcomes of integrating GenAI into HE. Our focus, therefore, should not be on the binary question of embracing or banning GenAI but on understanding and preparing for the ramifications of this technology. In the context of HE, GenAI has the potential to revolutionize teaching and learning processes, enhance administrative efficiency, and open new avenues for research and collaboration. However, it also necessitates a careful examination of ethical considerations, the potential for dependency and learning loss, and the wider societal implications. By concentrating on the future impacts and preparing to mitigate the negative consequences while maximizing positive outcomes, we can better navigate the journey ahead.
Only four months after its release in November 2022, half of the 17–24-year-olds in one HE survey said they were already using OpenAI’s ChatGPT to support their studies [13], and 42% of primary and secondary school teachers said they had used GenAI in their role [14]. Two-thirds of secondary school students admitted they had used chatbots such as ChatGPT to write essays [15], and 74% of undergraduate students from China, India, the USA, Brazil, and the UK indicated they would use ChatGPT to assist with their next term of studies [7]. Of 443 South Indian college and HE students, 20% reported using ChatGPT daily, and 31% reported using it a few times each week [16]. In a recent UK-wide survey of HE students, more than 50% reported using GenAI to help them prepare assignments; however, only 22% were satisfied with the support they had received around the use of GenAI [17].
2. A “More-than-Human World”
In his book, “Ways of Being: Animals, Plants, Machines: The Search for a Planetary Intelligence” [18], James Bridle uses the Monetary National Income Analogue Computer (MONIAC) to illustrate the importance of distinguishing between understanding the steps involved in arriving at an answer (the process) and simply being informed of the result (the output). The MONIAC was built in 1949 by the economist Bill Phillips and consisted of a series of water tanks and pipes that represented a fully functional model of the British economy. A large tank marked “Treasury” was positioned at the top, and government expenditure on things like healthcare, education, and infrastructure was represented by additional tanks. Opening and closing taps would adjust “spending” on these things by draining water from the treasury. At various stages, water was siphoned off into private savings and returned in the form of investments. Tanks could run dry if balances were not maintained, and water (money) could be pumped back up to the treasury in the form of taxation. The MONIAC was transparent and comprehensible, allowing users to see clearly (and interact with) the complex system of floats, pulleys, and counterweights, which led to different economic outcomes. This transparency contrasts sharply with many “black box” GenAI tools, which often obscure the processes leading to their outputs. In HE, GenAI would ideally function as a modern-day MONIAC, demystifying complex subjects and engaging students more actively in the learning process. By focusing on how knowledge is constructed and understood rather than simply delivering a polished final product, there is an opportunity to use GenAI to enhance critical thinking, creativity, and ethical awareness.
Throughout history, unusual animal behavior has often been recorded preceding natural disasters, such as earthquakes and volcanic eruptions. In 1975, the earthquake in Haicheng, China, was preceded by observations of unusual behavior in snakes and rats [19]. More recent studies using accelerometers attached to animals near seismically active zones have shown increased agitation and activity prior to eruptions, suggesting animals may detect subtle environmental changes that we do not presently understand [20,21]. Similarly, Monica Gagliano’s research on memory in the Mimosa pudica plant highlights how plants also exhibit behaviors that challenge our contemporary understanding of some biological processes [22]. These examples underscore a crucial point: studying complex systems often requires humility and recognition of our own limitations. In this vein, GenAI should be viewed as a tool that can enhance the learning process, without the need or desire to have all of the answers. Like animals sensing seismic changes, GenAI may be able to provide novel insights, but it must be used in a way that acknowledges its limitations and promotes a deeper, process-oriented approach to learning. As Bridle expounds, “If we are prepared to relinquish our desire for totalizing knowledge and view understanding as a process and negotiation rather than a route to mastery and dominance, there is much we can learn from the wisdom of others” [18]. This mindset can apply not only to our relationship with animals and plants but also to our engagement with GenAI within HE. Just as we must admit that we cannot know everything or be everywhere, we must recognize that GenAI, like plants and animals, may offer insights beyond our current understanding, without being subjected to total control or surveillance. Whilst we should acknowledge that GenAI systems can enhance our understanding and capabilities, this should not be at the expense of human autonomy, dignity, and privacy. Like our nonhuman counterparts, GenAI should be integrated into HE in ways that respect and preserve individual intellects and freedoms.
3. GenAI in Learning and Assessment: Outcome vs. Process
Summative assessment in HE often emphasizes the end product (e.g., essay, presentation, exam) rather than the learning process or journey [23]. This outcome-focused approach often fails to capture or assess the skills developed throughout a student’s educational experience. Precisely because GenAI is so effective at producing polished essays and professional-sounding summaries, it may inadvertently reinforce this trend. It will be crucial to approach these technologies with humility, acknowledging their limitations as well as our own when we use them. It can be argued that GenAI systems, while able to process vast amounts of data and recognize patterns, lack the deep contextual understanding and ethical judgment inherent in human cognition. Because of this, educators and students should learn to view GenAI as a partner in the learning process rather than a single infallible source of truth. This partnership necessitates a recognition that both humans and GenAI have limitations, but both can be improved in the presence of the other, as opposed to operating in isolation.
Since the release of ChatGPT in late 2022, an increasingly common question among educators when marking student assessments has been “How do we know the student produced this?”. With the existence of essay mills and sophisticated writing assistants (e.g., Grammarly, Wordtune), this is not an entirely new problem. Wherever there are summative assessments to which students attach value, some may look for shortcuts or ways to deceive the marker. Whilst the use of essay mills is often cost-prohibitive and represents an unambiguous breach of assessment regulations and academic integrity, GenAI produces outputs quickly, often for free, and may not seem as overtly unethical as purchasing an essay from an online mill [24]. One possible solution is to adopt assessment practices that emphasize the significance of the learning process over the final outcome—one example is the “processfolio” [25]. Characterized by artifacts submitted alongside the final product, processfolios aim to evidence the steps and learning that occurred en route to the final submission. Such assessments encourage students to reflect on their strengths and weaknesses in relation to specific aspects of the assessment task, as well as the strategies employed to overcome any challenges.
Assessment design must now be cognizant of the fact that students have access to powerful GenAI tools. This may necessitate adjusting expectations and revisiting learning outcomes to ensure they are appropriate for the task and the context in which a student may complete it. One response may be to ask students to declare when and how they have used GenAI. If it becomes clear that GenAI is replacing knowledge or skills stipulated by the learning outcomes, then the assessment will need to change. If a student is expected to complete a task without reference to external resources (e.g., books, notes, the internet, GenAI), the assessment should be an examination, where the conditions under which students complete the task are constrained. However, as Powell and Forsyth [24] suggest, it may be rare that we would want to constrain any truly “authentic assessment” in this way. Such “authentic” assessments should be “set in a rich societal context”, and therefore “we should expect students to have access to a range of resources while they are working on an assessment task” [24]. Ideally, authentic assessment would mean students learn how to make use of and discriminate between a varied range of resources based on their relevance and validity and ultimately feel they have produced a consequential piece of contextually specific work. Used effectively, GenAI can play a meaningful role in this process. A “processfolio” approach to assessment could also facilitate transparency regarding GenAI use. GenAI tools can support a student to complete an assessment task flexibly and autonomously, but it is important that they can critically evaluate and articulate why they chose to use a specific tool and how it added value (or improved efficiency)—a processfolio is a way to assess these skills.
Balancing engagement with GenAI whilst maintaining student autonomy will be essential. Whilst GenAI may offer personalized learning experiences and assist in repetitive tasks, it should not replace human interaction or the development of independent critical thinking. HE should endeavor to be a process-oriented journey, where GenAI tools can facilitate iterative and interactive learning rather than delivering the illusion of one-time mastery. By promoting a balanced approach, where GenAI supports rather than supplants human agency, HE can help students develop the ability to analyze, synthesize, and make informed decisions collaboratively with GenAI. Effective integration of GenAI into HE will help prepare students not only to use these tools proficiently but also to navigate the complex ethical landscapes they will encounter in an increasingly GenAI-driven workplace.
4. Originality and Academic Integrity
As students (and faculty) increasingly co-produce work alongside GenAI, we will be compelled to reflect on the implications for human–AI collaboration, specifically related to notions of originality and ownership. Within the music industry, a “cover version” of a piece of music typically involves an artist performing someone else’s song whilst retaining the original structure and melody. In academic work, this is akin to quoting or paraphrasing other authors’ ideas or theories. Similar to a musician crediting the original songwriter, proper citation of the original sources is essential within academic work. Distinct from a cover version, “sampling” in music involves taking a portion of a recording and reusing it in a new song, creating something original by mixing it with other sounds or musical elements. This may be considered analogous to a student synthesizing multiple academic sources and combining them with their own ideas or data to produce novel insights. As in music, proper attribution of the original source is still necessary. Taking a sample of a piece of music without permission and acknowledgment of the original source is an infringement of copyright law, and artists who do this may face legal consequences. In academia, copying text, ideas, or research findings without proper attribution is plagiarism, and such an ethical violation would typically result in disciplinary action.
To identify plagiarism, HE institutions often rely upon proprietary software to detect passages of text that match published or previously submitted work. A possible case of plagiarism is typically indicated via a high “similarity” score. Users familiar with such systems know all too well the limitations of this approach and the risk of both false positives and false negatives. Conversely, a low similarity score is often taken to mean an assessment is “original” and reflects the student’s own work. However, similarity and originality are not inversely proportional—saying a piece of work has a “low similarity score” is not the same as saying the work is original. This already complex dynamic will become harder to untangle as students increasingly collaborate with GenAI.
True originality in academic work goes beyond just the correct attribution of sources; it requires the addition of new insights and interpretations of existing knowledge. Originality here is defined by novel contributions that extend beyond merely rephrasing or summarizing others’ work. Just as musicians credit original songwriters when covering or sampling existing work, HE requires that students credit original authors, and now, increasingly, any GenAI tools that are used to produce a final output. Whilst this may be a pragmatic approach for now, we must continue to encourage students to use their existing knowledge as a foundation and to carefully critique any new perspectives, insights, or analyses provided to them by GenAI. It is vital that students understand the importance of academic integrity and ethics in education and research—mirroring the legal and ethical boundaries in creative industries. Ultimately, students should be responsible for the content, accuracy, and credibility of any work they submit.
When responding to student use of GenAI in the context of assessment, institutions face a challenge that is underpinned by notions of originality and ownership. To address this issue robustly, it is necessary to explore how these concepts have been understood historically and in contemporary contexts. Originality has long been considered a cornerstone of academic and artistic endeavors. Historical accounts of the solitary academic creating works ex nihilo have shaped our understanding of what it means to be an author or creator. Postmodernist critiques would challenge this notion, arguing that texts are intertextual and that authors combine pre-existing ideas and concepts into cultural narratives [26]. Viewed in this light, GenAI can be seen as a continuation of this approach, assembling and recontextualizing existing knowledge rather than creating it anew.
GenAI’s capacity to produce human-like text blurs the traditional boundaries of ownership and originality. If a GenAI output draws from a vast corpus of human knowledge, to what extent should it be considered original, and who can claim ownership? The student, the developers of the GenAI tool, or those whose work was used to train the model? To help answer these questions, HE must be clear about the ethical and pedagogical goals of the assessments they set. Is the primary objective to assess a student’s individual knowledge and understanding or to evaluate their ability to critically appraise, synthesize, and apply knowledge?
The issue of GenAI content parallels historical concerns about plagiarism, where the uncredited use of another’s work is deemed unethical. HE has established mechanisms to detect and deter plagiarism, emphasizing the importance of intellectual honesty and the correct attribution of sources. However, GenAI is forcing us to re-evaluate what constitutes a “source” if it is not attributable to a human author. The U.S. Copyright Office, for example, has generally held that copyright protection is only afforded to works created by humans. This stance suggests a need for HE to similarly explore the requirement for “meaningful human input” when evaluating work that may have been created using GenAI. Considering writing, reading, teaching, and assessing alongside GenAI, Sarah Eaton suggests that HE may need to rethink its current conceptualization of plagiarism [27]. In light of the fact that historical notions of what it means to write and create are being challenged, the idea of “postplagiarism” may represent a fitting paradigm shift. Postplagiarism refers to “an era in human society in which advanced technologies, including artificial intelligence and neurotechnology, including brain-computer interfaces (BCIs), are a normal part of life, including how we teach, learn, and interact daily” [27]. When defining what is considered an acceptable use of GenAI in the context of assessment, HE currently lacks a collective sense of what the new ethical normal will be. From a practical perspective, HE should consider the following responses:
Transparent Policies: Clearly articulate an institutional stance on the use of GenAI in the context of summative assessment, delineating what is considered acceptable assistance versus unacceptable use and substitution of student effort.
Education and Training: Provide students and staff with the necessary training to develop a deep understanding of the ethical implications of using GenAI tools, including discussions of originality and intellectual property.
Innovative Assessment Methods: Design authentic assessments that emphasize critical thinking, problem solving, and personal reflection, areas where GenAI is less likely to produce high-quality one-shot responses. A greater focus on the “process” as opposed to the final product would also be useful.
Human–AI Collaboration: Encourage the use of GenAI as a tool for learning and collaboration rather than a shortcut. This could include exercises where students critique or improve upon GenAI-generated content.
HE will need to adopt a nuanced and dynamic approach to the use of GenAI in assessment, one that respects the evolving nature of concepts such as originality and ownership. By drawing on lessons from plagiarism and copyright and by implementing thoughtful pedagogical strategies, HE can uphold the integrity of academic work while embracing the benefits of GenAI. This requires a balance between safeguarding the values of intellectual honesty and fostering an environment where technology enhances, rather than diminishes, the educational experience.
5. Ethics and Transparency
OpenAI’s ChatGPT recorded one million users within five days of its launch and 100 million users just two months later [28]. By March 2024, this had surged to 180.5 million users [29]. GenAI tools have undeniably captured the public’s attention, with newer, more powerful tools seemingly released weekly. However, advocating for GenAI in HE requires awareness of its impact beyond the lecture room or campus. The view of GenAI as an abstract or otherworldly technology serves to distance it from its significant environmental impact. To fuel its rapidly expanding infrastructure, GenAI is dependent upon the continued extraction of rare earth minerals, water, coal, and oil to support the construction of key components and run large data centers [30,31]. Training GenAI models involves considerable energy, labor, and capital—the process is resource intensive and environmentally destructive; however, the true impact of GenAI often goes unacknowledged. The opacity of the GenAI supply chain (e.g., less than one-third of data centers measure and report water consumption) allows companies to avoid restitution for environmental harm, a practice rooted in long-established business models of exploiting common resources [32]. The expansion of data centers, coupled with the sector’s collaboration with the oil and gas industry, exacerbates this environmental degradation. It has been estimated that each GenAI query uses approximately ten times the power of a traditional Google search [33] and that using GenAI to produce a single image consumes roughly as much energy as fully charging a modern smartphone [34]. Training a single large language model has an estimated carbon footprint of around 300,000 kg of CO2 emissions, comparable to 125 round-trip flights between New York and Beijing [35].
In contrast to medicine or law, GenAI operates without formal professional governance structures. Such a lack of regulation means companies decide for themselves what constitutes the ethical use of GenAI, often prioritizing profits over privacy. Equally, technology companies rarely face severe consequences for breaches of ethical principles. Instead of retroactively applying legal and ethical standards, we require a more proactive approach, starting with a commitment to social justice and environmental sustainability. By integrating GenAI into HE, there is an opportunity to enhance both ethical awareness and critical thinking. By embedding ethics-focused content into our curricula, students can engage with case studies and debates that challenge them to consider the implications of GenAI, developing an appreciation of issues such as data privacy, algorithmic bias, and societal impact. Utilizing transparent AI systems, such as explainable AI (XAI) tools, would allow students to interact with and comprehend the underlying mechanisms of GenAI, promoting greater transparency and ethical engagement. Collaborative projects have the potential to further deepen this learning by bringing together students from diverse fields to solve real-world problems, encouraging interdisciplinary critical analysis. Problem-based learning (PBL) scenarios, where students apply GenAI to address contemporary societal issues (e.g., the climate crisis, housing, human rights, immigration), can help develop iterative feedback, group work, and problem-solving skills. If we are going to ask students to engage and even collaborate with GenAI, we must be transparent and reflective about our (and especially our students’) relationship with existing technological platforms and infrastructures.
As Naomi Klein outlines in her book “Doppelganger: A Trip to the Mirror World” [36], many of our previously private actions are now enclosed by corporate technological platforms. Founders of these platforms claim to be “helping us connect with the people in our life” but are in reality focused on extracting our data. Klein describes how this process of enclosure not only changes how we relate to each other but also the underlying purpose of those relations. She draws parallels with the forms of enclosure that transformed common lands in England into privately held commodities surrounded by hedges and fences. Land no longer benefitted the community via communal grazing and food growing but was mechanized to increase yields and profits for landowners. Our unrelenting online presence (e.g., our “likes”, comments, and photos) is the technological yield, and modern online platforms are designed to harvest it. Just as the health of the soil that grows our food is irrevocably sacrificed to grow ever more monocrops, there is a risk that the misuse of GenAI in HE could mean critical thought and individuality are sacrificed in favor of standardization, homogenization, and efficiency [36].
Modern social media reduces individual identities and personalities to mere data points. Writing specifically about the rise of Facebook (and by extension all other social media platforms), Zadie Smith noted, “When a human being becomes a set of data on a website like Facebook, he or she is reduced, everything shrinks. Individual character, friendships, language, sensibility” [37]. Just as social media reduces individual character and sensibility to data points, GenAI could similarly reduce students’ unique learning experiences and intellectual growth to standardized data outputs, diminishing the richness and individuality of the experience. GenAI may offer significant benefits, including personalized learning experiences tailored to individual students’ needs. However, it is crucial to ensure that this personalization does not paradoxically lead to the homogenization of educational content, as seen in the standardization of trends in social media and online activities. Richard Seymour’s 2019 book “The Twittering Machine” [38] highlights how our daily online activities are reduced to data for machine learning algorithms, losing their human essence. Seymour writes, “They have created a machine for us to write to. The bait is that we are interacting with other people: our friends, professional colleagues, celebrities, politicians, royals, terrorists, porn actors—anyone we like. We are not interacting with them, however, but with the machine. We write to it, and it passes on the message for us after keeping a record of the data” [38].
Just as with the algorithms that drive social media, there is a risk that HE may use GenAI as a way to expand its collection and analysis of student data. The 1999 movie “The Matrix” serves as a powerful metaphor for understanding this concern. Just as the humans in the film are reduced to mere energy sources for machines, there is a risk that students’ educational journeys become driven more by data extraction to feed computer algorithms than by true intellectual growth and development. Much as the enclosure of common lands transformed communal resources into private commodities, the inappropriate integration of GenAI in HE risks enclosing intellectual and educational activities within proprietary platforms. This transformation risks shifting the focus even further from communal learning and student engagement to profit- and outcome-driven objectives.
6. Bias and Structural Inequalities
From personalized learning and feedback tools that adapt to individual student needs to AI-driven administrative tools that can optimize scheduling, monitoring, and resource allocation, the potential benefits of GenAI in HE are significant. However, these benefits may not be distributed equally. To democratize GenAI in HE means ensuring that the tools and resources are accessible to all students, regardless of their (or their institution’s) financial standing. If students from wealthy families and/or well-funded institutions have greater access to frontier GenAI tools, this could create a divide between students who can benefit from these tools and those who cannot.
It is also important to acknowledge that GenAI tools are not neutral computational techniques that make observations or give opinions without any human input. GenAI tools are subject to training processes and data that are rooted in social, political, cultural, and economic landscapes, and these are shaped by humans. The large language models (LLMs) that underpin GenAI tools are designed to consume vast amounts of data and identify patterns, amplify hierarchies, and propagate classifications, which risks reproducing and amplifying existing biases and inequalities. For example, Stable Diffusion, a prevalent text-to-image generating AI model, has been found to amplify stereotypes about ethnicity and gender. When asked to produce images of judges, it portrayed them as whiter and more male than in reality: Stable Diffusion depicts judges as male 97% of the time, even though 34% of judges in the US are women. When asked to produce pictures of fast food workers, 70% had darker skin tones, even though 70% of American fast food workers are white [39]. GenAI seems to adopt a generally liberal pro-capitalist political ideology, a result of fine-tuning and reinforcement learning from human feedback (RLHF). It is important, however, to remember that GenAI has no intrinsic sense of morality; RLHF simply restricts the model’s ability to behave in ways its creators (and consumers) consider immoral [40].
GenAI systems are created to function in ways that benefit the institutions and corporations they serve. They mirror the power dynamics that arise from economic and political environments curated to increase profit and control for those in power. The risk for HE is that GenAI tools might reinforce existing biases in student assessment, attainment, or resource allocation, privileging certain groups over others. By reflecting and amplifying the inequities present in their training data, GenAI tools may perpetuate a cycle of disadvantage and privilege. As Bridle writes, “…we should be thinking more carefully about the ecosystem in which we’re raising A.I…. that these systems are overly concerned with profit and loss, control and dominance, suggests that the slice of the environment shaping their evolution, is somewhat narrow. Their responses are that of a corporate intelligence, evolving within the arid ecology of neoliberal capitalism and increasing financial and social disparities. If we wish them to evolve differently, we need to address and alter this ecology” [18]. Current GenAI systems are often developed and driven by corporate interests, prioritizing profitability, efficiency, and market dominance. This focus on profit and loss could manifest in GenAI tools in HE that are implemented primarily for cost-saving measures or competitive advantage rather than for pedagogical enhancement or equity.
GenAI has the potential to level the playing field in HE by providing personalized learning experiences that cater to the diverse needs of all students. For example, GenAI-driven tutoring systems can offer individualized support to students who may otherwise struggle in large, impersonal classroom settings. Additionally, GenAI can help identify students at risk of falling behind and provide timely interventions. However, if GenAI systems perpetuate existing biases or are implemented without considering the specific needs of diverse student populations, they may reinforce existing disparities. There is also a risk that GenAI tools are viewed as a way to replace (rather than augment) human educators, particularly in under-resourced institutions, leading to a decline in the quality of education. To ensure that GenAI in education is equitable, it is essential to involve diverse stakeholders in the development and implementation of these technologies. This includes educators, students, administrators, and representatives from marginalized communities. By incorporating a wide range of perspectives and experiences, GenAI tools could be deployed to help address the unique challenges faced by different student populations and ensure that the potential benefits are more evenly distributed. One example of successfully democratizing access to education is the Khan Academy, which offers high-quality educational resources to students worldwide, free of charge. The Khan Academy has recently launched “Khanmigo”, a GenAI Socratic tutor that supports students with their learning. Similarly, initiatives that provide GenAI-driven learning tools to rural and underserved educational institutions demonstrate how technology can help to bridge educational gaps [
6].
7. Policy and Implementation
Government and institutional policies will play an increasingly important role in democratizing GenAI in HE. Policies that promote funding for GenAI research and development in public and under-resourced institutions can help ensure that all students benefit from these technological advancements. Moreover, regulations that mandate transparency and accountability in GenAI use can help guard against misuse of the technology and protect the interests of vulnerable populations. Democratizing GenAI in HE will require a concerted effort to make the technology accessible, equitable, and inclusive. By addressing these challenges and leveraging the potential of GenAI, there is an opportunity to create an educational landscape where GenAI serves as a tool for empowerment and equity. This will involve not just technological innovation but also thoughtful policymaking and inclusive practices that prioritize the needs of all learners.
In her book “Atlas of AI: Power, Politics, and the Planetary Costs of Artificial Intelligence”, Kate Crawford asks “Could there not be an AI for the people, that is reoriented toward justice and equality, rather than industrial extraction and discrimination?” [
41]. The current infrastructures that enable (and are enabled by) GenAI are heavily skewed toward the centralization of control. Historically, the response to so-called “disruptive technologies” has been the development of frameworks for ethical use. However, such frameworks often reflect only the perspectives of economically developed countries. As Crawford writes, “The voices of the people most harmed by AI systems are largely missing from the processes that produce them” [
41]. At its core, AI is built to enhance and replicate existing power structures. Arguably, it is here that we should concentrate our efforts, rather than on producing another set of unenforceable ethical principles. This will involve focusing on the needs of the most impacted members of society and understanding the lives of those who are already marginalized and discriminated against by AI systems.
8. Conclusions
While GenAI has made remarkable advancements in recent years, it arguably still lags behind human capabilities in areas such as causal reasoning, abstract thinking, and creativity. Nonetheless, its swift evolution presents both opportunities and challenges for HE. It is crucial to acknowledge that GenAI, having been trained on vast amounts of internet data, has absorbed many human biases and prejudices. This unintended assimilation of biases raises important ethical considerations for its use in HE, underscoring the importance of a critical evaluation of GenAI outputs as well as diverse and representative training data [
42]. If the integration of GenAI in HE creates the kinds of efficiencies many hope it will, we must consider how the time saved will be utilized. Ideally, this efficiency would translate into more meaningful human interactions, for example, increased one-on-one time between students and educators, more opportunities for in-depth discussions and debates, and greater focus on mentorship and personal development.
GenAI’s ability to produce work at the level of an “average” student in some domains and on some tasks should make us question the current assessment methods, marking criteria, and feedback processes employed in HE. It also raises concerns linked to graduate employability—GenAI does not necessarily need to be the “best” writer, coder, or analyst to be truly disruptive. Its ability to produce outputs at or above the level of many graduates raises significant questions about the future job market. Employers may begin to question the value of hiring graduates or offering placements for roles that require a level of competency provided by a GenAI tool. To remain relevant, students and HE institutions must focus on developing skills and knowledge that surpass what GenAI can offer, allowing students to harness the power of GenAI in effective and ethical ways and at the appropriate times. This may involve a greater emphasis on specialized knowledge, interdisciplinary thinking, and uniquely human skills such as emotional intelligence, complex ethical reasoning, and the ability to make truly novel connections across disparate fields of knowledge.
In conclusion, the integration of GenAI in HE presents both an opportunity and a significant challenge. As we navigate this transformative landscape, it is crucial to adopt an approach that balances technological advancements with ethical considerations and human-centered values. Our focus will increasingly need to shift from outcome-based assessments to process-oriented assessments for learning, fostering critical thinking and ethical awareness among our students. Simultaneously, we must be honest and transparent about the environmental impact of GenAI and work towards sustainable methods of implementation. To harness the potential of GenAI in HE, efforts must be made to democratize access, mitigate biases, and prevent the amplification of existing structural inequalities. This requires thoughtful policymaking and a commitment to preserving the richness of individual learning experiences. Involving and collaborating with students at each step of this process will be crucial to ensure the transparency, authenticity, and practical relevance of any changes implemented within HE. Moving forward, the goal should be to create an educational ecosystem where GenAI serves as a tool for empowerment and equity, enhancing rather than replacing human intellect, effort, and creativity. If successful, we will prepare students not just to use these tools but to critically engage with and navigate the complex ethical landscapes they will encounter in their increasingly GenAI-driven futures.