Article

AI Governance in Higher Education: Case Studies of Guidance at Big Ten Universities

College of Information Sciences and Technology, Pennsylvania State University, University Park, PA 16802, USA
* Authors to whom correspondence should be addressed.
Future Internet 2024, 16(10), 354; https://doi.org/10.3390/fi16100354
Submission received: 31 August 2024 / Revised: 23 September 2024 / Accepted: 26 September 2024 / Published: 28 September 2024
(This article belongs to the Special Issue ICT and AI in Intelligent E-systems)

Abstract

Generative AI has drawn significant attention from stakeholders in higher education. While it introduces new opportunities for personalized learning and tutoring support, it also poses challenges to academic integrity and raises ethical concerns. Consequently, governing responsible AI usage within higher education institutions (HEIs) has become increasingly important. Leading universities have already published guidelines on Generative AI, with most attempting to embrace this technology responsibly. This study provides a new perspective by focusing on the strategies for responsible AI governance demonstrated in these guidelines. Through a case study of 14 prestigious universities in the United States, we identified the multi-unit governance of AI, the role-specific governance of AI, and the academic characteristics of AI governance in their AI guidelines. The strengths and potential limitations of these strategies and characteristics are discussed. The findings offer practical implications for guiding responsible AI usage in HEIs and beyond.

1. Introduction

AI applications in education (AIEd) have been at the forefront of discussions among education stakeholders, particularly in light of the advancements in Generative AI (GenAI). Undoubtedly, the continually improving performance of AI holds immense potential for enhancing educational experiences, enabling personalized learning, and automating administrative tasks [1,2]. However, alongside these benefits, there are legitimate concerns about the potential negative impact of AI, especially GenAI. The fact that AI is trained on existing work, combined with its ability to quickly generate content that may constitute unauthorized imitation, poses fundamental challenges to the regulation of plagiarism and academic integrity [3,4]. These challenges will potentially affect both faculty and students in their work and study. Specifically, for educators, the need to learn about AI and address it in teaching imposes extra requirements on their knowledge and skills [5]. For students, the prevalence of AI also raises concerns about ethical issues, personal development, career prospects, and societal values [6].
Higher education institutions (HEIs) play a crucial role in technology innovation and its diffusion by creating knowledge, providing talent, and translating research into innovations [7]. In the development and implementation of norms for AI governance, academic institutions have responded by establishing centers for the study of AI governance, engaging in basic research, and influencing AI norms through academic expertise and collaborations [8,9]. While HEIs contribute to AI governance at a societal level, technology diffusion within universities themselves can also be a complex process involving issues of power, legitimacy, and identity [10]. In the case of GenAI, due to the complex implications of encouraging or discouraging such an important technology, universities are urged to take action and devise strategies for guiding and regulating its usage among students, faculty, and staff [11]. Despite significant uncertainty about the pros and cons of incorporating GenAI in education, many universities have published policies and guidelines aiming to promote responsible and beneficial AI usage. For example, the early review conducted by Moorhouse et al. [12] found that 23 of the 50 top-ranking universities had developed publicly available guidelines by June 2023, addressing the potential influence of GenAI on three main areas: academic integrity, advice on assessment design, and communicating with students. However, the authors pointed out that because these guidelines were developed rapidly, many of the suggestions have likely not been sufficiently tested. Similarly, Adams and Ivanov [13] examined documents produced by 116 high research activity (R1) universities in the US, concluding that the majority of universities encourage the use of GenAI, but in a way that presents potential burdens for faculty and students and without much regard for ethical concerns.
The current study contributes to this line of research by extending the focus beyond teaching and learning implications and analyzing the guidelines through the lens of AI governance. Prior studies on AI guidelines have provided an overview of how many HEIs are embracing GenAI usage and publishing AI guidelines and what main topics are discussed. This study aims to further analyze the structure of AI guidelines at the organizational level and from a community perspective. Specifically, we aim to illustrate the approaches HEIs take to promote responsible AI usage by answering the following research questions:
  • What strategies of AI governance are demonstrated by these guidelines?
  • What are the characteristics of the guidance provided to the university community?

2. Background

2.1. Responsible AI Governance

As AI technology continues to advance and to be widely adopted, the governance of its development and implementation has become a key issue of concern and discussion among various stakeholders [14]. AI governance refers to the frameworks, policies, and practices designed to ensure that AI systems are developed, deployed, and managed in a way that aligns with ethical principles, legal requirements, and societal values. It encompasses the establishment of guidelines for responsible AI use, mechanisms for accountability, and processes for managing risks associated with AI technologies [15,16,17]. Organizations establish ethical guidelines for AI usage through a multifaceted process that involves identifying core ethical principles, developing practical frameworks, and implementing governance models that ensure adherence to these guidelines throughout the AI system's lifecycle [18,19]. The process often begins with the recognition of the need for ethical AI principles that sustain human values and rights [20], acknowledging the complex and opaque nature of AI systems and their potential to impact fairness, accountability, and transparency [21,22]. This recognition is supported by international and organizational efforts to publish principles of ethical AI, which aim to outline values and abstract requirements for AI development and deployment [23]. However, the effectiveness of these principles is contingent upon their translation into measurable and actionable guidelines that can be practically applied [24].
To address the gap between ethical principles and their application, organizations are adopting frameworks like the hourglass model of organizational AI governance. This model emphasizes the need for governance at environmental, organizational, and AI system levels, connecting ethical principles to the AI system lifecycle to ensure comprehensive governance [25]. These collective efforts underline the importance of a structured approach to AI governance, ensuring that AI technologies are developed and deployed in ways that uphold ethical standards and promote social good [26]. HEIs also make prominent contributions to the process as research efforts are critical in assessing the impact of intelligent applications, minimizing harm, and promoting well-being and social good [27]. In addition, a comprehensive understanding of the impact of disruptive AI technologies on education and the development of frameworks is critical for society to make informed decisions about the use of Generative AI tools [28]. This study aims to bridge the gap between theoretical principles and practical application, offering a detailed analysis of whether and how HEIs implement responsible AI governance frameworks.

2.2. Technology Diffusion in HEIs

Technology diffusion refers to the process by which new technologies are adopted and spread across different regions, sectors, and social groups, during which technology is transferred, implemented, and utilized, leading to widespread acceptance and integration into everyday practices [29]. Technology diffusion in universities is a complicated process, which often faces multifaceted challenges rooted in both the organizational and cultural aspects of HEIs [30,31]. One primary challenge is the rapid and massive development of technology, which requires continuous adaptation and innovation from HEIs to maintain their educational and academic quality [32]. This adaptation is contingent upon the effective diffusion of innovation models that consider the profile of human resources, technological conditions, organizational policies, documentation, and environmental dynamics [33]. Additionally, the adoption of new information and communication technology is hindered by issues such as technology that is incompatible with faculty's traditional teaching practices, inadequate faculty support, and insufficient plans for implementation [34]. Instructors face challenges in staying updated with the changing uses of technology in the classroom, necessitating training, funding, and alignment of perceptions to facilitate technology adoption [35]. The integration of technology in teaching and learning processes is influenced by technological, pedagogical, and organizational dimensions, with contextual factors playing a determinant role [36].
Regarding the adoption of AI in HEIs, there is still a lack of sound evidence on the pedagogical impact of AI technologies, which raises questions about their ability to improve learning outcomes or facilitate effective pedagogical changes [37]. Additionally, the ethical concerns associated with biased algorithms, which could adversely affect students if used in admissions or grading processes, represent a significant technological and ethical barrier [38]. Educators' perspectives and the social, psychological, and cultural factors influencing their trust and adoption of educational technology also play a role in the slow uptake of AI tools [39]. The barriers to digital technology integration, including technophobia and the absence of planning, directly impact the adoption of AI in university teaching [40]. Therefore, to effectively govern the implementation of AI tools, it is crucial to integrate robust communication practices. AI governance in HEIs involves not just creating policies but also ensuring they are clearly communicated and understood at all levels. This includes identifying the key stakeholders responsible for crafting and delivering these messages, such as senior leaders or governance committees. Effective communication from authoritative figures emphasizes the importance of these guidelines and fosters a culture of ethical compliance. Thus, a connection between governance and AI guidelines is established through the need for clear, consistent, and authoritative communications. Through the analysis of public guidelines on GenAI, we aim to shed light on how HEIs actively adopt AI technology and how they manage the potential harms through communication with their communities. These communications, in the form of policy documentation and webpages, facilitate understanding, transparency, and collaboration between the institution and its stakeholders and effectively convey how the institution is integrating AI technology and addressing any associated risks.
It should be noted that this case study focused mainly on universities in the US, and there can be cultural and societal differences across countries regarding AI development and governance. For example, Zahra and Nurmandi [41] compared the AI development strategies of Singapore, the US, and the UK, concluding that Singapore prefers to rely on government sectors and the UK on education sectors, while the US utilizes both education and private sectors. In terms of AI policy for higher education, a comprehensive examination in May 2023 [42] identified a dispersed geographic distribution of policy implementation, with European and North American universities showing higher responsiveness to this new technology. Xie et al. [43] conducted a comparative analysis of four specific countries and revealed that all exhibit a positive attitude toward GenAI in higher education; Japan and the USA prioritize a human-centered approach with a focus on teaching and learning, while China and Mongolia prioritize national security and concerns at the societal level. English-speaking countries in the Global North tend to be ranked higher in various university rankings [12]. While studying universities in these regions may provide valuable insights for universities with more limited resources, the geographic and cultural differences must be considered before generalizing the findings and implications.

3. Methods

This study is a qualitative case study focusing on AI guidelines across the Big Ten universities. The case study approach allows for an in-depth exploration of AI policy development and implementation within a specific set of institutions—namely, the Big Ten. These universities were chosen due to their shared membership in a prominent collegiate conference and their diverse approaches to AI integration in education. The research does not aim to generalize across all higher education institutions but rather seeks to understand the localized issues, policies, and experiences of AI guidance within this particular context. By concentrating on this group, the study provides valuable insights into how large, research-intensive universities manage the emerging challenges and opportunities posed by AI in education.

3.1. The Big Ten Universities

The Big Ten Conference is the oldest collegiate athletic conference in the United States. While historically celebrated for its athletic prowess, its member institutions are also major research universities with large financial endowments and strong academic reputations. Established in 1896, the Big Ten has grown to include fourteen universities spread across the Midwest and Northeast. These institutions are known for their research and academic excellence, contributing significantly to advancements in science, technology, engineering, and mathematics (STEM) education. The list of the 14 universities, their locations, and their enrollments is presented in Table 1. (On 2 August 2024, the conference expanded to 18 member institutions and 2 affiliate institutions. The additional 4 members and the affiliate members were not included in this study.) The Big Ten universities are widely recognized for their high academic rankings and robust research output, particularly in fields related to science and technology. Institutions like the University of Michigan, Northwestern University, and the University of Wisconsin–Madison consistently rank among the top universities nationally and globally in rankings such as the U.S. News & World Report [44] and the Times Higher Education World University Rankings [45]. These universities have strong research reputations, with several members of the Big Ten listed among the top research institutions in the United States based on research expenditures [46]. For example, the University of Michigan reported annual research expenditures exceeding USD 1.6 billion, placing it among the top in the nation for research funding. This substantial financial support enables extensive research initiatives, including significant contributions to Artificial Intelligence and related fields.
The choice to focus on the United States, specifically the Big Ten academic institutions, as the context for this study on AI guidance is driven by several factors. First, the US is a global leader in AI research and development, with significant investments in both technology and policy shaping the direction of AI innovation. The country's educational and research institutions, particularly those in the Big Ten, are at the forefront of integrating AI technologies into various academic disciplines, providing a robust environment for studying how AI guidance is being implemented and developed. Moreover, focusing on the US allows for a deep, context-specific analysis. Including other countries could have enriched the findings, but the additional analysis needed to account for background differences while ensuring depth would have been beyond the scope of the current study. The Big Ten universities are known for their research contributions and scholarly output. They often have substantial funding and resources dedicated to technological advancement, including AI, making them influential in shaping research agendas and policies related to AI nationally and internationally. Studying their guidance on AI can provide insights into how leading research institutions envision the future of AI governance and ethical considerations. The Big Ten members also share similar profiles as large, research-intensive institutions with significant resources and academic influence. This homogeneity provides a level of consistency in comparing AI guidelines, making it easier to identify patterns and common themes within a similar institutional context. Focusing on this list limits the sample to a manageable number of institutions, which facilitates a more in-depth and detailed analysis of each university's AI guidelines and policies. Previous research has also utilized this sample to study campus recreation [47], ethnic diversity [48], sexual harassment [49], and transgender policy [50] in HEIs. Therefore, focusing on the Big Ten allows us to identify insights on AI governance that are interpretable and meaningful to other HEIs.

3.2. Data Collection

The data collection was conducted in March 2024. We define AI guidance as guidelines regarding AI usage published officially at the university level. Therefore, guidelines published by a specific college or a branch campus were not included in the analysis. To identify AI guidelines for each university, the primary author visited the 14 universities' official websites and conducted manual searches. The keywords 'AI', 'Generative AI', 'Guidance', 'Guideline', and 'Policy' were used in combination to conduct the search. Additionally, we used the keywords appended with the university name in Google Search and inspected the results returned on the first page to ensure comprehensive coverage. After the initial round of data collection, another researcher repeated the search process independently to validate the results and check for missing information. All 14 universities have official guidelines related to AI. Only publicly accessible documents or websites were extracted for further analysis. The data collection and analysis process was reviewed and approved by the IRB office at the authors' home institution.
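To make the search procedure concrete, the sketch below enumerates the keyword combinations described above for a given institution. It is a minimal illustration only: the study's searches were conducted manually, and the function name and scripted form are our own assumptions rather than tooling used in the study.

```python
from itertools import product

# Keywords reported in Section 3.2; each AI term is paired with each
# policy term and appended with the university name, mirroring the
# supplementary Google searches. Illustrative sketch only -- the
# actual searches were performed manually, not scripted.
AI_TERMS = ["AI", "Generative AI"]
POLICY_TERMS = ["Guidance", "Guideline", "Policy"]

def build_queries(university: str) -> list[str]:
    """Enumerate the keyword combinations used to locate AI guidelines."""
    return [f"{university} {ai_term} {policy_term}"
            for ai_term, policy_term in product(AI_TERMS, POLICY_TERMS)]

# Example: six query strings for one member institution
for query in build_queries("Michigan State University"):
    print(query)
```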

3.3. Data Analysis

The mind mapping technique [51] and thematic analysis [52] were employed to analyze the guidelines. First, the primary author read through all the collected data and created mind maps using the university as the base unit. Each mind map represents the structure of the AI guidelines published by a university, including the specific unit that publishes each guideline and the organization of its content. The mind maps enable an overview of how guidelines are organized by each university, revealing commonalities and differences in AI governance. The initial mind maps were then reviewed collectively by all researchers to ensure the accurate representation of the original data. In the next round of data analysis, the primary author conducted a thematic analysis of the mind maps, systematically coding and categorizing the mind map nodes while referencing the original guideline text to maintain contextual accuracy. Specifically, topics and segments that represented the strategies and characteristics of the AI guidance were identified, categorized, and linked to broader themes within the research. This thematic analysis not only highlighted the core elements of AI guidance but also allowed for the identification of patterns, relationships, and emerging trends within the data, thereby providing a deeper understanding of the subject matter. The coding was then shared and reviewed by all researchers, who discussed the consistency of the coding and resolved minor discrepancies. The codes were grouped into three major themes: the multi-unit governance of AI, the role-specific governance of AI, and the academic characteristics of AI governance. After completing this process, the researchers collectively translated the themes and sub-themes into narratives. The mind maps of the 14 universities can be found in the Supplementary Materials.
These methods were chosen for their ability to capture the complexity and contextual nuances inherent in policy documents. The mind mapping technique offers a visual and structural representation, facilitating the identification of patterns and differences in AI governance across institutions. Thematic analysis allows for an in-depth exploration of the strategies and characteristics within the AI guidelines. However, we acknowledge that qualitative methods can be subjective, relying on the researchers’ interpretations. Alternative methodologies, such as quantitative content analysis or computational text analysis, could offer statistical insights and handle larger datasets. However, these methods may not capture the detailed contextual information that is crucial for understanding the nuances of AI governance policies. To mitigate the disadvantages of qualitative methods, we incorporated multiple rounds of analysis and collective reviews among all researchers to enhance reliability and reduce bias.
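As a rough illustration of the coding workflow described above, the following sketch models each mind map node as a record carrying its provenance and assigned code, then groups the coded nodes under broader themes. The data structure, field names, and example codes are hypothetical simplifications of the qualitative process, not software used in the study.

```python
from collections import defaultdict
from dataclasses import dataclass

# Hypothetical simplification of the analysis workflow: each mind map
# node keeps its provenance (university, publishing unit) plus the code
# assigned during thematic analysis. Field names are illustrative only.
@dataclass
class MindMapNode:
    university: str   # e.g., "U Wisconsin"
    unit: str         # e.g., "IT", "Teaching and Learning"
    excerpt: str      # guideline text the node summarizes
    code: str         # code assigned in thematic analysis

def group_by_theme(nodes: list[MindMapNode],
                   theme_of_code: dict[str, str]) -> dict[str, list[MindMapNode]]:
    """Collect coded nodes under the broader themes they were linked to."""
    themes: dict[str, list[MindMapNode]] = defaultdict(list)
    for node in nodes:
        themes[theme_of_code[node.code]].append(node)
    return themes

# Example codes mapped to the three major themes reported in the paper
theme_of_code = {
    "data-sharing policy": "multi-unit governance",
    "syllabus statements": "role-specific governance",
    "FAQ-style questions": "academic characteristics",
}
```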

4. Results

Instead of specific rules or suggestions for AI usage, our analysis focuses on the strategies taken by universities to guide AI usage and the common characteristics observed in these AI guidelines. The findings and insights are detailed in the following subsections.

4.1. Multi-Unit Governance of AI

The adoption and management of new technologies in HEIs can be complex due to their diverse constituencies, including faculty, students, and staff, each with distinct needs and priorities. In the case of the Big Ten, multiple units have been involved in publishing AI guidelines for the university community, as shown in Table 2. While the advice offered by different units can still overlap and reference each other, some key differences entail the unique role of each unit in the organizational management and AI governance in HEIs.

4.1.1. Information Technology

Information Technology (IT) and similar departments in HEIs play a crucial role in managing network services, supporting various technologies and platforms, and resolving software and hardware issues encountered by students and staff [53]. Six universities in the Big Ten have issued AI guidelines through their IT departments. These guidelines mainly focus on the security and privacy risks associated with GenAI tools and how to avoid them. Specifically, these risks are managed by IT through statements related to the following topics:

Data-Sharing Policy

IT established a specific policy for data usage as the community interacts with AI tools. These statements instruct users to input only limited types of data into AI tools and prohibit the sharing of institutional data and other sensitive information, in accordance with the university's existing data classification system:
Do not share institutional data with this tool. Providing any personally identifiable information or university internal information, such as development code for systems hosting institutional data, is a violation of IU policy.
—Indiana U
The data classification can be found on other pages of the IT website, which usually categorize data into four levels: Public (Low Sensitivity), University Internal (Moderate Sensitivity), Sensitive/Restricted (High Sensitivity), and Restricted/Critical (Very High Sensitivity). Among the four levels, the Public data are considered safe to share without the need for further IT review. By recapitulating this data classification in AI guidelines, IT helps university users avoid undesired information leakage while using AI.

Enterprise Agreement

IT also provides information on AI tools that are considered more secure through an enterprise contract or agreement with the university. These tools are approved for interaction with data classified up to and including the University Internal level. The most common tool with such an agreement is Microsoft Copilot, owing to universities' existing licensing of Microsoft services such as Office 365. Even with such an agreement in place, IT still seeks to enhance risk avoidance through further qualification of AI tools:
Since Microsoft released this tool as part of existing licensing, it has not yet gone through the formal review process that is part of our standard procurement process. As with any such tool, caution is advised.
—Michigan State
As this statement shows, IT has a formal review process to ensure that a tool or software meets the university's security requirements. The guidelines invite faculty to contact IT to assess the security attributes of a given implementation of AI.
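The sharing rules implied by the data classification and the enterprise agreements can be summarized as a simple decision function. The four levels follow the scheme described in the Data-Sharing Policy above; the function itself and its thresholds are our hedged rendering of the described policies, not code published by any university.

```python
from enum import IntEnum

# Four-tier data classification described in the IT guidelines;
# higher values indicate higher sensitivity.
class DataLevel(IntEnum):
    PUBLIC = 0               # Low Sensitivity
    UNIVERSITY_INTERNAL = 1  # Moderate Sensitivity
    SENSITIVE = 2            # High Sensitivity (Sensitive/Restricted)
    RESTRICTED = 3           # Very High Sensitivity (Restricted/Critical)

def may_share_with_ai(level: DataLevel, enterprise_tool: bool) -> bool:
    """Hypothetical rendering of the rules in Section 4.1.1: public data
    is considered safe for any tool, while enterprise-licensed tools are
    approved up to and including University Internal data."""
    if enterprise_tool:
        return level <= DataLevel.UNIVERSITY_INTERNAL
    return level == DataLevel.PUBLIC

# Example: internal data may go to an enterprise-licensed tool only
assert may_share_with_ai(DataLevel.UNIVERSITY_INTERNAL, enterprise_tool=True)
assert not may_share_with_ai(DataLevel.UNIVERSITY_INTERNAL, enterprise_tool=False)
```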

Trustworthy AI

While it is impossible for IT to comment on all AI tools available on the market, it has encouraged the community to use “Trustworthy AI” tools. The characteristics of “Trustworthy AI” are described using the existing National Institute of Standards and Technology (NIST) AI Risk Management Framework (see Figure 1):
UW Madison faculty, staff, students, and affiliates can help protect themselves and others by choosing tools and services that exhibit the NIST’s characteristics of trustworthy AI.
—U Wisconsin
By referencing NIST’s framework, IT promotes responsible AI usage in compliance with national standards.

4.1.2. Teaching and Learning

These units are university departments committed to providing various services and solutions to enhance teaching and learning practices. With the grand objective of supporting teaching and learning, some of these departments also emphasize the role of technology in empowering students and instructors:
The ITS Office of Teaching, Learning, and Technology provides expertise, tools, and services to optimize teaching and learning through learning sciences research, ICON, teaching and learning data, and advanced classroom and instructional technology.
—U Iowa
Naturally, these units aim to provide guidelines for faculty to understand GenAI and how it can impact their teaching practices. As teaching is a core activity for universities, these guidelines cover a diverse range of AI implications, including the benefits of personalized learning, ethical issues with education equity, and the long-term impact on higher education. We summarize three common approaches the guidelines take to shape responsible AI behaviors among faculty:

Emphasize the Need to Learn about GenAI

These guidelines provide a useful introduction to GenAI and related concepts in the format of text or video. Faculty are encouraged to learn more about the benefits and limitations of GenAI not only by reading and viewing these resources but also by experimenting with AI tools:
Learn what AI tools can and cannot do by reading up on these tools and experimenting with them before incorporating an AI tool into a class activity or restricting its use.
—Rutgers U
‘Frequently asked questions’ is a commonly used format to describe how AI could be appropriately used by faculty, e.g., “What are some examples of assessments that incorporate AI tools?”. While Teaching and Learning attempts to help faculty learn the implications of AI across a broad range of aspects, Ohio State also points out that “much of what we know about Generative AI applications and AI language models will shift over time”. Therefore, these guidelines emphasize that faculty need to take active steps to become more knowledgeable about AI technology.

Example-Based Recommendations

In addition to learning about GenAI, these AI guidelines make recommendations for faculty to modify their teaching practices to address potential issues brought by AI. These recommendations cover the full cycle of education delivery, including course preparation, class management, assignment design, and grading methods, with examples to follow. The examples aim to help faculty integrate AI in teaching and regulate AI usage among students. Importantly, they also highlight the fact that efforts are needed to ensure the essential objectives of teaching are not disrupted by the existence of AI tools:
Consider developing assignments that require students to use higher-order thinking, connect concepts to specific personal experiences, cite class readings and discussions, and make innovative connections. These types of prompts are more difficult for students to answer using AI tools.
—U Nebraska

Responsibilities for Guiding Student Usage

Teaching and Learning units mostly provide guidance for faculty and instructors. By doing so, they also delegate the responsibility of guiding students to the faculty. Specifically, the guidelines recommend that instructors “develop clear policies for each course” that specify whether and when AI usage is allowed, communicate expectations with students in “inclusive, student-centered language”, and “include time for ethics discussions” that help students not only use AI properly but also learn the implications of using AI. Another important responsibility for faculty is to protect academic integrity, as GenAI is generally considered a threat to it. Instead of simply restricting or punishing the use of GenAI, the guidelines encourage faculty to foster a positive mindset and emphasize the value of academic integrity for students. For example:
Share your perspectives on how you think these tools can help or hinder their learning, and why you value academic integrity. We suggest focusing on the benefit to students and their learning, and not potential negative consequences to their grade.
—Purdue U

4.1.3. President and Provost

The Office of the President or Office of the Provost typically plays a central role in academic administration, serving as the chief academic officer of an institution. However, in terms of AI governance, the role of the President and Provost seems ambiguously defined, with guidance content overlapping with that of several other units. For example, at U Maryland, the Provost provides high-level summary guidance for students, faculty, and staff while Teaching and Learning provides more comprehensive guidance for faculty. Similarly, at U Iowa, the President and Provost mainly recapitulate issues addressed by other units and serve to raise awareness of available resources:
The Office of Teaching, Learning, and Technology and the Center for Teaching have put together an AI Tools and Teaching webpage that includes sample policy language for the use of AI tools in a variety of contexts that you can incorporate into your syllabus.
—U Iowa

4.1.4. University Libraries

Libraries often provide a wide range of support to the university community, including research, learning, and engagement. The guidance published by libraries mainly targets scholarly activities; it can include teaching activities but focuses more on research and academic publishing. The content provides instructions for scholars to improve their productivity with AI while also raising awareness of the complications involving accuracy, bias, academic integrity, and intellectual property.
Although it’s not advised to use Generative AI directly to find sources on your topic, AI chatbots may be helpful for some parts of your research process.
—U Wisconsin
The statement above indicates an awareness of both the potential risks and the productivity gains of using AI in research. Therefore, AI guidance by libraries features both recommendations for AI tools and ethical considerations for using them, e.g., citing AI-generated content to address academic integrity concerns:
Before including AI-generated content in a project you intend to get published, check publisher policies regarding permissible use and attribution. Below are some examples of publisher policies regarding the use of AI.
—Rutgers U
Similar to the Trustworthy AI framework, the reference to example policies from well-known publishers shows that institutions try to shape responsible AI behaviors that are in compliance with existing policies and standards from established organizations.

4.1.5. AI Center

AI Centers refer to units specifically devoted to AI-related issues in the university. In the case of Northwestern U, the AI Center is a website focused generally on AI-related research and education, broadcasting AI activities on campus. It also provides AI guidance that resembles what is offered by Teaching and Learning in other institutions:
This website is intended to acquaint you with GAI and to give you some suggestions for its use in the classroom.
—Northwestern U
In contrast, the AI Center at U Michigan is a website dedicated to GenAI that offers comprehensive guidelines covering students, faculty, and staff (see Figure 2). Beyond the impact on teaching and the classroom, it addresses enterprise agreements, data privacy, and research usage, topics typically covered by other units. It is possible for an institution to have more than one AI Center. In the case of Penn State, the ‘AI, Pedagogy, and Academic Integrity’ website focuses on the impact of GenAI, with content similar to that of Teaching and Learning, mainly discussing the impact of AI in the classroom. In addition, the ‘AI Hub’ addresses all AI-related research and education news, and its guidance covers accessibility and data security considerations.

4.1.6. Additional Units

While the units mentioned above cover the most important activities impacted by GenAI in HEIs, some less common units complement the AI guidelines. For instance, University Relations (U Minnesota) provides guidance on AI for university marketing and communications, which “does not address academic use by students or faculty”. The Office of Research and Innovation (Michigan State) focuses on the use of AI in scholarly activity, similar to the role of libraries in other institutions:
This document outlines best practices for employing Generative AI in various research processes, ensuring its application supports the university’s mission while adhering to legal and ethical standards.
—Michigan State
Additionally, the Office of Student Conduct and Community Standards (U Wisconsin) and Learning.IU (Indiana U) publish guidance from a student’s perspective, emphasizing the importance of students proactively seeking guidance before using AI:
It is your responsibility to know and follow your instructor’s expectations. Expectations will vary across courses. If unsure, check your course syllabi, course information in Canvas, or talk with your instructors.
—U Wisconsin
The variety of units involved in AI governance demonstrates the organizational complexity of HEIs and the profound implications of GenAI for higher education. While each unit has its specific emphasis, there is also a significant degree of overlap and cross-referencing in their guidelines, mainly related to teaching practices or AI usage in the classroom. Although the efforts from multiple units contribute to the comprehensiveness of AI guidelines, it can be difficult for a university member to identify all information relevant to their role across multiple units, creating challenges for guiding responsible AI usage.

4.2. Role-Specific Governance of AI

The involvement of multiple units reveals another important characteristic of AI governance in HEIs: the need to address the concerns of different roles within the university community. In the case of the Big Ten, four predominant roles emerged from the AI guidelines: faculty (or instructor), student, staff, and researcher. Some guidelines apply to all members regardless of their roles, such as the data-sharing policy published by IT, but most of the guidelines are written with a specific role as the intended audience, as detailed below.

4.2.1. Faculty

Delivering high-quality education is one of the most important objectives of universities, and faculty play an indispensable role in this process. Therefore, it is unsurprising that most guidelines are written from the faculty's perspective:
At the TLTC, we look forward to helping you think creatively about your assessments and your specific learning outcomes to put authentic, relevant, student-centered learning at the forefront of your academic planning
—U Maryland
As this statement suggests, although some AI guidelines are written for faculty, their content is still student-centered, aiming to improve learning outcomes. Specifically, university guidelines often imply three tasks for faculty. First, the guidelines present plenty of resources for faculty to learn what AI is, what its benefits and limitations are, how it may impact assessment and student learning, how it may impact education equity, etc. Therefore, the first task is for faculty to become knowledgeable about AI. Although there is no enforcement of, or specific requirement for, faculty's level of knowledge on these topics, the following statement shows universities' preference for faculty to become more familiar with AI:
AI is quickly becoming an embedded element of the teaching and learning process that requires the acknowledgment and attention of instructors, instructional designers, and academic leaders.
—Ohio State
Second, faculty are encouraged to integrate AI into their pedagogical practices such as lecture preparation, assignment design, and brainstorming for active learning activities. On the one hand, the need to integrate AI is motivated by the increased productivity brought by AI, e.g., “shortening the time instructors spend on creating course materials, coming up with examples and assignments, as well as making grading more efficient. (Northwestern U)” On the other hand, inappropriate use of AI among students can hamper the learning process, making ‘AI-proof’ teaching practices necessary:
Probably the best way to guard against inappropriate use of AI-generated text is to redesign your assignments, both the prompts themselves and the related processes.
—Indiana U
This concern also leads to the third task for faculty: providing guidance for students and supervising students’ AI usage. The guidelines support faculty in creating syllabus statements, communicating expectations of AI usage with students, and discussing the implications with students. While instructors have the freedom to define acceptable AI usage in their classes, the guidelines have made recommendations regarding the detection of AI-generated content. Generally, universities discourage the use of AI detection tools, highlighting their high false-positive rates and emphasizing the need to focus on the learning process rather than the assignment product:
The available tools are simply not effective in providing the evidence needed to build an academic integrity case against a student. Our pedagogies should be built with critical AI literacy in mind, so it’s important to think through what goals AI prohibition is going to meet and whether enforcement is how you want to spend your time and energy.
—U Illinois

4.2.2. Student

Students typically make up the largest population in the university community, and AI guidelines are also student-centered, highlighting the need to promote student learning. However, as the regulation of students' use of AI has been primarily delegated to faculty, AI guidelines are often not written from the student's perspective. The limited guidelines targeting students echo those written for faculty, urging students to communicate with their instructors and seek suggestions and guidance. In addition, the guidelines highlight that students should thoroughly consider the consequences of their usage of AI both for themselves and for society:
As GenAI poses to be a revolutionary tool that can change the academic space and beyond, it is important for you to understand why and how you intend to use these new, powerful tools… Understand that your usage of GenAI-based tools can give you the means to better not just yourself, but also society as a whole, and there is an ethical responsibility towards doing so.
—U Michigan

4.2.3. Staff

Staff can be involved with a broad range of administrative and operational tasks that support the university community. However, due to the variety of staff positions and their unique responsibilities, it is difficult to provide specific guidance for this role. U Minnesota provided guidance for staff involved in university communication and marketing, demonstrating the importance of these activities for HEIs. In addition to examples of using AI for operational tasks, writing, and editing, the guidance also provides examples where GenAI should not be used:
At this time, we advise AI should not be used in the creation of institution-specific content (e.g., leadership messaging) or information regarding the immediate health and safety of our community (e.g., updates and triage.)
Generative AI should not be used to modify any University trademarks, mascots, or otherwise without explicit permission from University Relations.
—U Minnesota
These statements show the university's utmost caution about how AI may impact the authenticity, accountability, and reliability of its communications.

4.2.4. Researcher

While the role of a researcher often coexists with the other three roles, it indicates involvement with core scholarly activities, including conducting research, gathering literature, publishing scholarly work, and more. Since all of the Big Ten are research-intensive universities, AI usage in research is an important topic in their guidance. Similar to the suggestions for faculty, researchers are encouraged to utilize AI to strengthen productivity in research processes such as literature review, experiment design, and data analysis. Meanwhile, researchers are advised to take responsibility for their AI usage in research, documenting their usage and accounting for bias and limitations:
Researchers should utilize GenAI systems in research only where they perform well and exhibit few hallucinations. Researchers should verify all outputs for accuracy and attribution and attest that this has been done in all cases, detailing the methods used to do so.
—U Michigan
Overall, these guidelines attempt to address the various concerns of the university community regarding AI usage by providing specific guidelines for four distinct roles. Among these roles, the guidelines for faculty are the most detailed, as faculty play a core role in maintaining educational values and quality while also governing AI usage among students within the classroom context. Guidelines for researchers are also prominent, reflecting the importance of research activities in the Big Ten. In contrast, a smaller portion of the content is dedicated to student-specific or staff-specific perspectives.

4.3. The Academic Characteristics of AI Governance

The multi-unit and role-specific characteristics of AI guidelines reflect the organizational complexity and multifaceted functionality of HEIs. However, regardless of the unit and role, AI governance in the Big Ten has generally incorporated academic characteristics, emphasizing the intention to empower the university community for informed decision-making. We summarize these characteristics as educative and advisory, flexible, and Socratic, as described below:

4.3.1. Educative and Advisory Guidance

Except for the IT policies related to data sharing and privacy issues, most of the guidelines do not enforce or prohibit any actions for the university community. Instead, they focus on providing resources for people to learn more about AI and recommend possible behaviors to avoid risks and promote benefits:
Purdue University continues to support the autonomy and choice of faculty and instructors to utilize instructional technology that best suits their teaching and learning environments. As such, there is no official university policy restricting or governing the use of Artificial Intelligence, Large Language Models or similar generative technologies.
—Purdue U
As this statement shows, the guidelines are written with the mindset of supporting the community's autonomy to make the best use of the technology. In addition to being educative, the guidelines also make specific suggestions on appropriate or inappropriate usage of AI (see Figure 3). However, it should be noted that the educative and advisory characteristics also bring uncertainty regarding the use and management of AI in some scenarios. An obvious example is the use of AI detection tools. Northwestern U concluded that “We do not recommend using this detection tool as the basis for reporting a suspected case of academic dishonesty”, which seems to be a consensus among the Big Ten. Yet, it remains unclear to faculty what could be used as the basis for reporting academic dishonesty related to AI usage.

4.3.2. Flexible Guidance

On the positive side, this uncertainty is accompanied by flexibility in the AI guidelines, which encourages the community to experiment and explore the potential benefits of using AI. The flexibility is exemplified by the acknowledgment that AI is a rapidly evolving technology and that the guidelines will evolve as new information arises. In some cases, universities also make the development of AI guidelines an engaging and interactive process in which different community members are encouraged to share their perspectives on the implications of GenAI. For instance, U Illinois leveraged the power of social media and created an online space for GenAI discussions among the university community:
Welcome to our budding community, a space where we hope to see collaboration and knowledge exchange thrive. Here, you can both contribute and gain insights into the innovative ways in which our faculty, instructors, students, and staff are using GenAI tools to develop new teaching and learning methodologies. In addition, we hope that this platform will serve as a forum for thoughtful and respectful conversations to address the ethical complexities of GenAI.
—U Illinois
Moreover, this flexibility is also demonstrated through the multiple options provided for resolving AI-related issues. For instance, for the course syllabus statements managing students' use of AI, universities often provide three types of sample statements that faculty can customize for their classroom setting, e.g., “no restrictions”, “allow limited usage of ChatGPT”, and “prohibit the usage of ChatGPT”.

4.3.3. Socratic Method

The Socratic Method is a form of logical argumentation that promotes critical thinking, provoked by the continual probing questions of the teacher [57]. While the AI guidelines are mostly static descriptions, some of their approaches resemble the Socratic Method by posing questions to readers. For some questions, the guidelines provide possible answers or a rationale, creating an FAQ section, but there are also questions that come with no answers, provoking readers to think critically about the impact of GenAI:
As GenAI poses to be a revolutionary tool that can change higher education and beyond, it is important for you to understand why and how you intend to use these new, powerful tools. These are a few questions to consider and note that the answers to these questions will vary for each person.
—U Michigan
Again, the Socratic Method approach resonates with the other academic characteristics, highlighting that the overall intention of AI guidance in universities is to improve AI literacy and raise awareness of responsible AI usage in the community rather than to regulate usage in a strict sense.

4.4. Summary

The strategies and characteristics of the AI guidelines can be summarized in the framework in Figure 4. The governance of AI is practiced through AI guidelines published by multiple units. Among them, the President and Provost recapitulate the content from other units and direct people's attention to the available resources. The AI Center can act as a one-stop shop for all guidelines. Other units tend to address the concerns of specific roles in the university community, including faculty, students, staff, and researchers, with guidance for students often delegated to faculty. Regardless of their role, community members are encouraged to participate in the discussion of the implications of GenAI. Only the guidelines published by IT specify scenarios of prohibited use of AI. The rest of the guidelines tend to be educative, advisory, flexible, and Socratic, demonstrating the objective of improving the community's understanding of AI and empowering them to use GenAI responsibly.

5. Discussion

In this section, we discuss the findings in relation to prior literature and their practical implications. First of all, the guidelines published by the Big Ten agree with the literature on the ethical development and deployment of AI: they attempt to maximize the benefits of AI while acknowledging the complex and opaque nature of AI systems and their potential limitations [21,22]. They also demonstrate recognition of human values and rights in the community [20]. At the same time, the unique strategies and characteristics of the AI guidelines in the Big Ten reflect the challenges faced by HEIs in adapting to the advancement of AI technology.
Specifically, the multi-unit governance of AI helps address the challenge of diverse stakeholders during technology diffusion in HEIs. By engaging different units in the process of AI governance, the university can consider more factors that impact technology implementation, such as technological infrastructure and human resources [33]. As our findings suggest, each unit has a specific emphasis on the roles and activities impacted by GenAI, contributing to more comprehensive guidelines that address the needs of different stakeholders. Despite its advantages, the involvement of multiple units can result in more web pages and a more complicated information structure for the guidelines. Consequently, finding information inevitably becomes more difficult due to the complicated structure [58]. For instance, a faculty member may need to access IT, Teaching and Learning, and University Libraries to find all the information they need, as data privacy, teaching implications, and research use of AI are all related to the role of faculty and are addressed at different levels of detail by these units. Moreover, as our analysis shows, the content can also overlap between different units, requiring further effort to compare those guidelines and check for discrepancies. Therefore, while the involvement of multiple units is beneficial, the publishing and rendering of AI guidelines may need to be simplified to make them more accessible to the university community.
One possible solution is to leverage the AI Center, i.e., create a website devoted to AI-related issues and make all AI guidelines accessible on this single site. In addition, strategies in website design could be used to optimize the guidelines and reduce the information load for readers [59]. Specifically, information seeking is easier when user needs are considered in the information architecture [60], suggesting the guidelines could be designed to be role-oriented. Currently, the guidelines provide role-specific content that addresses the concerns of different roles. However, few of them are organized in a role-oriented way, making it difficult for students, faculty, and staff to find the corresponding information. Organizing the guidelines in a role-oriented way, as sketched below, may help the university community comprehend the guidelines and put them into practice.
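As a minimal sketch of what such a role-oriented information architecture might look like, the mapping below indexes guideline topics by reader role rather than by publishing unit. The topic labels are drawn from the findings in Section 4; the structure itself is our hypothetical suggestion, not an architecture observed at any of the universities.

```python
# Hypothetical role-oriented index for a single AI Center site,
# inverting the unit-oriented structure observed in the guidelines.
# Topics paraphrase Section 4; the mapping itself is illustrative.
GUIDELINES_BY_ROLE: dict[str, list[str]] = {
    "faculty": [
        "Learning about GenAI",
        "AI-resilient assignment design",
        "Sample syllabus statements",
        "Advice on AI detection tools",
    ],
    "student": [
        "Checking course-specific AI policies",
        "Ethical and societal implications of GenAI",
    ],
    "staff": [
        "AI in communications and marketing",
        "Prohibited uses (e.g., trademarks, safety messaging)",
    ],
    "researcher": [
        "Documenting AI use in research",
        "Publisher policies on AI-generated content",
        "Data-sharing and classification rules",
    ],
}
```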
Another important issue regarding the different roles is that faculty are the primary focus of most AI guidelines, with the responsibility for guiding students largely delegated to them. While guiding students through classroom-level management may be an effective way to maintain academic quality [61] and address potential issues brought by AI, it should be noted that this responsibility may lead to an increased workload for faculty. Recent research has shown that faculty have complained about the increased workload caused by the need to manage more AI-related academic integrity problems [62]. This is especially important as the lack of work-life balance has been a known problem for junior faculty in the United States [63]. Therefore, it is important for HEIs to consider how they may effectively guide students' use of AI through faculty without creating extra workload for them.
The workload issue may be further amplified by some of the academic characteristics of AI guidance. While the educative, advisory, and flexible nature gives the community valuable autonomy to make their own decisions about AI usage, it also requires them to become more knowledgeable about AI to make informed decisions. One potential problem is that faculty and students may have difficulty knowing what to do even after reading the guidelines. For example, identifying inappropriate use of AI in coursework is a known struggle for faculty and instructors [62], and the flexibility regarding this issue does not make it easier. Similarly, the Socratic Method is ineffective when there is no active participation from students or when the questions are misaligned with students' knowledge levels. For example, students may not think seriously about the questions listed in the guidelines. Even if they do, it is difficult for them to know exactly how AI could impact the future job market due to their lack of relevant knowledge and perspectives [64,65]. Strengthening the communication between the institution and its community, as well as the community's active participation, can improve engagement and reduce uncertainty in interpreting AI guidelines [66]. This communication can include targeted educational sessions, accessible resources, and open forums for discussion, enabling students to build their knowledge base. By fostering a more informed community, students can actively participate in AI governance and take stronger ownership of their AI usage.
It should also be emphasized that the academic characteristics of AI guidance reflect the intention to improve AI literacy and empower the university community to explore the possibilities of AI in a somewhat protected manner. This provides insights for other institutions, and even organizations outside academia, in terms of AI governance: it is important to engage the community in defining responsible AI usage and to encourage the community to take responsibility for their actions when using AI tools. Other HEIs may utilize the framework in Figure 4 to develop their own AI guidelines while also seeking ways to balance empowerment and uncertainty in AI governance.

6. Limitations and Future Work

There are limitations to the current study that can be addressed through further research. First, while focusing on the Big Ten allowed us to identify meaningful patterns in AI guidance for HEIs, we recognize that the findings should not be generalized without considering cultural and socioeconomic differences among HEIs across the world. Specifically, HEIs in non-English-speaking countries and those in the Global South may face different challenges at the organizational level, and insights from the Big Ten may not be applicable in their scenarios. Future studies may seek to compare AI governance in universities with different cultural and societal backgrounds. Second, even though most AI guidelines are straightforward and clearly written, readers may still have perceptions that misalign with the guidelines' intention. Investigating the perception of AI guidelines among different university members is important for assessing the effectiveness of AI guidelines in HEIs' AI governance. Another key limitation is the methodology of the case study, which mainly relies on qualitative analysis. Quantitative data, such as surveys or statistical analyses, could provide additional support for the thematic conclusions. Future research could incorporate stakeholder perceptions of AI governance, gathered through surveys or interviews, to quantify the themes identified in this study and offer a broader perspective on the issues at hand. Moreover, while this research focuses on AI governance strategies and their general characteristics, it does not investigate specific, practical examples of how these guidelines have been implemented or their tangible effects on stakeholders. A follow-up study could explore real-world case studies within these universities to demonstrate the impact of AI governance in practice, adding further depth to the discussion. Student perspectives are also essential to understanding how AI guidelines are experienced by the primary users in educational settings. Future research could incorporate student feedback through interviews or surveys, examining challenges such as ethical dilemmas or technical complexities encountered during the implementation of AI guidelines.

7. Conclusions

The current study presents a case study of AI guidance at the Big Ten universities. Through thematic analysis of different AI guidelines, we identify the multi-unit governance of AI, the role-specific governance of AI, and the academic characteristics of AI governance in these universities. These strategies and characteristics reflect the universities’ intention to develop comprehensive guidelines for AI usage that both maintain the autonomy of their communities and help members take advantage of AI. However, the complicated information structure and the flexibility of the guidance may cause problems and confusion for the community. The findings have practical implications for other HEIs and organizations regarding AI governance.

8. Positionality Statement

The authors acknowledge their backgrounds and the potential biases these might introduce when interpreting the findings of this study. The primary author’s training spans multiple disciplines, including engineering, management, and educational research, and encompasses diverse roles such as student, instructor, and researcher in HEIs in the United States. The broader research team, with its substantial experience at HEIs comparable to those in the Big Ten, likewise brought a comprehensive perspective to the data analysis. All team members actively engaged in interpreting the findings and discussing the implications of the study. Despite this breadth of experience, we recognize that our educational and cultural backgrounds may have shaped our interpretations. To address potential biases, we rigorously documented our initial assumptions and continuously reflected on our understanding at each stage of the research.

9. Ethics Statement

All data utilized were publicly available at the time of data collection. It is important to note that the information may have been updated or changed afterward. Furthermore, the interpretations and conclusions drawn represent the authors’ viewpoints. While they are based on the universities’ AI guidelines, they should not be taken as the official positions or attitudes of the universities.

Supplementary Materials

The following supporting information can be downloaded at https://www.mdpi.com/article/10.3390/fi16100354/s1, AI Guidelines (.pdf): Mind maps of AI Guidelines for U1–U14.

Author Contributions

Conceptualization, C.W. and J.M.C.; methodology, C.W.; software, C.W.; validation, C.W., H.Z., and J.M.C.; formal analysis, C.W., H.Z., and J.M.C.; investigation, C.W.; resources, C.W.; data curation, C.W.; writing—original draft preparation, C.W. and H.Z.; writing—review and editing, C.W., H.Z., and J.M.C.; visualization, C.W.; supervision, J.M.C.; project administration, C.W. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

The original contributions presented in the study are included in the article and Supplementary Materials. Further inquiries can be directed to the corresponding author.

Acknowledgments

The authors would like to thank Jiyoon Kim for contributing to the data collection. The authors also thank Sarah Zipf, Tehniyet Azam, and Zimeng Shao for their suggestions and help in improving this paper.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Baidoo-Anu, D.; Ansah, L.O. Education in the era of generative Artificial Intelligence (AI): Understanding the potential benefits of ChatGPT in promoting teaching and learning. J. AI 2023, 7, 52–62. [Google Scholar] [CrossRef]
  2. Chen, L.; Chen, P.; Lin, Z. Artificial intelligence in education: A review. IEEE Access 2020, 8, 75264–75278. [Google Scholar] [CrossRef]
  3. Dehouche, N. Plagiarism in the age of massive Generative Pre-trained Transformers (GPT-3). Ethics Sci. Environ. Politics 2021, 21, 17–23. [Google Scholar] [CrossRef]
  4. Smits, J.; Borghuis, T. Generative AI and Intellectual Property Rights. In Law and Artificial Intelligence: Regulating AI and Applying AI in Legal Practice; T.M.C. Asser Press: The Hague, The Netherlands, 2022; pp. 323–344. [Google Scholar]
  5. Wu, Y. Integrating Generative AI in education: How ChatGPT brings challenges for future learning and teaching. J. Adv. Res. Educ. 2023, 2, 6–10. [Google Scholar] [CrossRef]
  6. Chan, C.K.Y.; Hu, W. Students’ voices on Generative AI: Perceptions, benefits, and challenges in higher education. Int. J. Educ. Technol. High. Educ. 2023, 20, 43. [Google Scholar] [CrossRef]
  7. Gunasekara, C. Reframing the role of universities in the development of regional innovation systems. J. Technol. Transf. 2006, 31, 101–113. [Google Scholar] [CrossRef]
  8. Chinen, M. AI developers, associations, and the academic community. In The International Governance of Artificial Intelligence; Edward Elgar Publishing: Cheltenham, UK, 2023; pp. 107–137. [Google Scholar]
  9. Mainzer, K. Responsible Artificial Intelligence. Challenges in Research, University, and Society. In Evolving Business Ethics: Integrity, Experimental Method and Responsible Innovation in the Digital Age; J.B. Metzler: Stuttgart, Germany, 2022; pp. 117–127. [Google Scholar]
  10. Smith, K.M. Higher education culture and the diffusion of technology in classroom instruction. In Case Studies on Information Technology in Higher Education: Implications for Policy and Practice; IGI Global: Hershey, PA, USA, 2000; pp. 144–156. [Google Scholar]
  11. Chan, C.K.Y. A comprehensive AI policy education framework for university teaching and learning. Int. J. Educ. Technol. High. Educ. 2023, 20, 38. [Google Scholar] [CrossRef]
  12. Moorhouse, B.L.; Yeo, M.A.; Wan, Y. Generative AI tools and assessment: Guidelines of the world’s top-ranking universities. Comput. Educ. Open 2023, 5, 100151. [Google Scholar] [CrossRef]
  13. Adams, R.H.; Ivanov, I.I. Using socio-technical system methodology to analyze emerging information technology implementation in the higher education settings. Int. J. e-Educ. e-Business e-Manag. e-Learn. 2015, 5, 31–39. [Google Scholar]
  14. Thiebes, S.; Lins, S.; Sunyaev, A. Trustworthy Artificial Intelligence. Electron. Mark. 2021, 31, 447–464. [Google Scholar] [CrossRef]
  15. Felzmann, H.; Fosch-Villaronga, E.; Lutz, C.; Tamò-Larrieux, A. Towards transparency by design for Artificial Intelligence. Sci. Eng. Ethics 2020, 26, 3333–3361. [Google Scholar] [CrossRef] [PubMed]
  16. Balasubramaniam, N.; Kauppinen, M.; Rannisto, A.; Hiekkanen, K.; Kujala, S. Transparency and explainability of AI systems: From ethical guidelines to requirements. Inf. Softw. Technol. 2023, 159, 107197. [Google Scholar] [CrossRef]
  17. Rismani, S.; Moon, A. What does it mean to be a responsible AI practitioner: An ontology of roles and skills. In Proceedings of the 2023 AAAI/ACM Conference on AI, Ethics, and Society, Montreal, QC, Canada, 8–10 August 2023; pp. 584–595. [Google Scholar]
  18. Janssen, M.; Brous, P.; Estevez, E.; Barbosa, L.S.; Janowski, T. Data governance: Organizing data for trustworthy Artificial Intelligence. Gov. Inf. Q. 2020, 37, 101493. [Google Scholar] [CrossRef]
  19. Georgieva, I.; Lazo, C.; Timan, T.; van Veenstra, A.F. From AI ethics principles to data science practice: A reflection and a gap analysis based on recent frameworks and practical experience. AI Ethics 2022, 2, 697–711. [Google Scholar] [CrossRef]
  20. Díaz-Rodríguez, N.; Del Ser, J.; Coeckelbergh, M.; de Prado, M.L.; Herrera-Viedma, E.; Herrera, F. Connecting the dots in trustworthy Artificial Intelligence: From AI principles, ethics, and key requirements to responsible AI systems and regulation. Inf. Fusion 2023, 99, 101896. [Google Scholar] [CrossRef]
  21. Akinrinola, O.; Okoye, C.C.; Ofodile, O.C.; Ugochukwu, C.E. Navigating and reviewing ethical dilemmas in AI development: Strategies for transparency, fairness, and accountability. GSC Adv. Res. Rev. 2024, 18, 050–058. [Google Scholar] [CrossRef]
  22. Diakopoulos, N. Accountability, transparency, and algorithms. Oxf. Handb. Ethics AI 2020, 17, 197. [Google Scholar]
  23. Rees, C.; Müller, B. All that glitters is not gold: Trustworthy and ethical AI principles. AI Ethics 2023, 3, 1241–1254. [Google Scholar] [CrossRef] [PubMed]
  24. Felländer, A.; Rebane, J.; Larsson, S.; Wiggberg, M.; Heintz, F. Achieving a data-driven risk assessment methodology for ethical AI. Digit. Soc. 2022, 1, 13. [Google Scholar] [CrossRef]
  25. Huriye, A.Z. The ethics of Artificial Intelligence: Examining the ethical considerations surrounding the development and use of AI. Am. J. Technol. 2023, 2, 37–44. [Google Scholar] [CrossRef]
  26. Smuha, N.A. The EU approach to ethics guidelines for trustworthy Artificial Intelligence. Comput. Law Rev. Int. 2019, 20, 97–106. [Google Scholar] [CrossRef]
  27. Safdar, N.M.; Banja, J.D.; Meltzer, C.C. Ethical considerations in Artificial Intelligence. Eur. J. Radiol. 2020, 122, 108768. [Google Scholar] [CrossRef] [PubMed]
  28. Khan, Z.R. Ethics of Artificial Intelligence in Academia. In Second Handbook of Academic Integrity; Springer: Cham, Switzerland, 2024; pp. 1551–1582. [Google Scholar]
  29. Rogers, E.M.; Singhal, A.; Quinlan, M.M. Diffusion of innovations. In An Integrated Approach to Communication Theory and Research; Routledge: New York, NY, USA, 2014; pp. 432–448. [Google Scholar]
  30. Hawawini, G. The Internationalization of Higher Education Institutions: A Critical Review and a Radical Proposal; INSEAD: Singapore, 2011. [Google Scholar]
  31. Liu, Q.; Geertshuis, S.; Grainger, R. Understanding academics’ adoption of learning technologies: A systematic review. Comput. Educ. 2020, 151, 103857. [Google Scholar] [CrossRef]
  32. Christensen, C.M.; Eyring, H.J. The Innovative University: Changing the DNA of Higher Education from the Inside Out; John Wiley & Sons: Hoboken, NJ, USA, 2011. [Google Scholar]
  33. Ramdhani, M.A.; Priatna, T.; Maylawati, D.S.; Sugilar, H.; Mahmud, M.; Gerhana, Y.A. Diffusion of Innovations for Optimizing the Information Technology Implementation in Higher Education. In Proceedings of the 2021 9th International Conference on Cyber and IT Service Management (CITSM), Bengkulu, Indonesia, 22–23 September 2021; pp. 1–8. [Google Scholar]
  34. Dintoe, S.S. Technology innovation diffusion at the University of Botswana: A comparative literature survey. Int. J. Educ. Dev. Using Inf. Commun. Technol. 2019, 15, n1. [Google Scholar]
  35. Baadel, S.; Majeed, A.; Kabene, S. Technology adoption and diffusion in the Gulf: Some challenges. In Proceedings of the 8th International Conference on E-Education, E-Business, E-Management and E-Learning, Kuala Lumpur, Malaysia, 5–7 January 2017; pp. 16–18. [Google Scholar]
  36. Rodríguez-Abitia, G.; Martínez-Pérez, S.; Ramirez-Montoya, M.S.; Lopez-Caudana, E. Digital gap in universities and challenges for quality education: A diagnostic study in Mexico and Spain. Sustainability 2020, 12, 9069. [Google Scholar] [CrossRef]
  37. O’Dea, X.C.; O’Dea, M. Is Artificial Intelligence really the next big thing in learning and teaching in higher education? A conceptual paper. J. Univ. Teach. Learn. Pract. 2023, 20. [Google Scholar] [CrossRef]
  38. Slimi, Z.; Carballido, B.V. Navigating the Ethical Challenges of Artificial Intelligence in Higher Education: An Analysis of Seven Global AI Ethics Policies. TEM J. 2023, 12, 590–602. [Google Scholar] [CrossRef]
  39. Kizilcec, R.F. To advance AI use in education, focus on understanding educators. Int. J. Artif. Intell. Educ. 2024, 34, 12–19. [Google Scholar] [CrossRef]
  40. Mercader, C. Explanatory model of barriers to integration of digital technologies in higher education institutions. Educ. Inf. Technol. 2020, 25, 5133–5147. [Google Scholar] [CrossRef]
  41. Zahra, A.A.; Nurmandi, A. The strategy of develop Artificial Intelligence in Singapore, United States, and United Kingdom. IOP Conf. Ser. Earth Environ. Sci. 2021, 717, 012012. [Google Scholar] [CrossRef]
  42. Xiao, P.; Chen, Y.; Bao, W. Waiting, banning, and embracing: An empirical analysis of adapting policies for Generative AI in higher education. arXiv 2023, arXiv:2305.18617. [Google Scholar] [CrossRef]
  43. Xie, Q.; Li, M.; Enkhtur, A. Exploring Generative AI Policies in Higher Education: A Comparative Perspective from China, Japan, Mongolia, and the USA. arXiv 2024, arXiv:2407.08986. [Google Scholar]
  44. U.S. News and World Report. Best Colleges Rankings. 2024. Available online: https://www.usnews.com/best-colleges (accessed on 22 September 2024).
  45. Times Higher Education. World University Rankings. 2024. Available online: https://www.timeshighereducation.com/world-university-rankings (accessed on 22 September 2024).
  46. Big Ten Academic Alliance. Research Initiatives and Collaborations. 2024. Available online: https://btaa.org/research (accessed on 22 September 2024).
  47. Wilson, O.W.; Guthrie, D.; Bopp, M. Big 10 Institution Campus Recreation: A Review of Current Values, Policies, and Practices. J. Campus Act. Pract. Scholarsh. 2020, 2, 72–79. [Google Scholar] [CrossRef] [PubMed]
  48. Bennett, C.I. Enhancing ethnic diversity at a big ten university through project TEAM: A case study in teacher education. Educ. Res. 2002, 31, 21–29. [Google Scholar] [CrossRef]
  49. Clair, R.P. The bureaucratization, commodification, and privatization of sexual harassment through institutional discourse: A study of the big ten universities. Manag. Commun. Q. 1993, 7, 123–157. [Google Scholar] [CrossRef]
  50. Dirks, D.A.D. Transgender people at four Big Ten campuses: A policy discourse analysis. Rev. High. Educ. 2016, 39, 371–393. [Google Scholar] [CrossRef]
  51. Wheeldon, J.; Faubert, J. Framing experience: Concept maps, mind maps, and data collection in qualitative research. Int. J. Qual. Methods 2009, 8, 68–83. [Google Scholar] [CrossRef]
  52. Drisko, J.W.; Maschi, T. Content Analysis; Oxford University Press: Cary, NC, USA, 2016. [Google Scholar]
  53. Volk, M.; Jamous, N. IT-landscape management in the higher educational institutions. In Proceedings of the 2018 Sixth International Conference on Enterprise Systems (ES), Limassol, Cyprus, 1–2 October 2018; pp. 211–216. [Google Scholar]
  54. Tabassi, E. Artificial Intelligence Risk Management Framework (AI RMF 1.0). 2023. Available online: https://airc.nist.gov/AI_RMF_Knowledge_Base/AI_RMF/Foundational_Information/3-sec-characteristics (accessed on 22 September 2024).
  55. University of Michigan. Generative AI Resources. 2024. Available online: https://genai.umich.edu/resources (accessed on 22 September 2024).
  56. Indiana University. Generative AI in Teaching and Learning. 2024. Available online: https://teaching.iu.edu/resources/generative-ai/teaching-learning.html (accessed on 22 September 2024).
  57. Delić, H.; Bećirović, S. Socratic method as an approach to teaching. Eur. Res. Ser. A 2016, 111, 511–517. [Google Scholar]
  58. Tombros, A.; Ruthven, I.; Jose, J.M. How users assess web pages for information seeking. J. Am. Soc. Inf. Sci. Technol. 2005, 56, 327–344. [Google Scholar] [CrossRef]
  59. Chen, M. Improving website structure through reducing information overload. Decis. Support Syst. 2018, 110, 84–94. [Google Scholar] [CrossRef]
  60. Shih, C.W.; Chen, M.Y.; Chu, H.C.; Chen, Y.M. Enhancement of information seeking using an information needs radar model. Inf. Process. Manag. 2012, 48, 524–536. [Google Scholar] [CrossRef]
  61. Korpershoek, H.; Harms, T.; de Boer, H.; van Kuijk, M.; Doolaard, S. A meta-analysis of the effects of classroom management strategies and classroom management programs on students’ academic, behavioral, emotional, and motivational outcomes. Rev. Educ. Res. 2016, 86, 643–680. [Google Scholar] [CrossRef]
  62. Wu, C.; Wang, X.; Carroll, J.; Rajtmajer, S. Reacting to Generative AI: Insights from Student and Faculty Discussions on Reddit. In Proceedings of the 16th ACM Web Science Conference, Stuttgart, Germany, 21–24 May 2024; pp. 103–113. [Google Scholar]
  63. Azevedo, L.; Shi, W.; Medina, P.S.; Bagwell, M.T. Examining junior faculty work-life balance in public affairs programs in the United States. In Work-Life Balance in Higher Education; Routledge: New York, NY, USA, 2022; pp. 21–41. [Google Scholar]
  64. Vicsek, L.; Bokor, T.; Pataki, G. Younger generations’ expectations regarding Artificial Intelligence in the job market: Mapping accounts about the future relationship of automation and work. J. Sociol. 2024, 60, 21–38. [Google Scholar] [CrossRef]
  65. Abdelwahab, H.R.; Rauf, A.; Chen, D. Business students’ perceptions of Dutch higher educational institutions in preparing them for Artificial Intelligence work environments. Ind. High. Educ. 2023, 37, 22–34. [Google Scholar] [CrossRef]
  66. Moon, M.J. Searching for inclusive Artificial Intelligence for social good: Participatory governance and policy recommendations for making AI more inclusive and benign for society. Public Adm. Rev. 2023, 83, 1496–1505. [Google Scholar] [CrossRef]
Figure 1. Characteristics of trustworthy AI systems defined by NIST: Valid and Reliable is a necessary condition of trustworthiness and is shown as the base for other trustworthiness characteristics. Accountable and Transparent is shown as a vertical box because it relates to all other characteristics. Figure Source: National Institute of Standards and Technology (NIST) [54].
Figure 2. GenAI guidance published by the AI Center at U Michigan, categorized by student, faculty, and staff. Figure Source: U Michigan [55].
Figure 3. Example use cases for GenAI provided by Indiana U. Figure Source: Indiana U [56].
Figure 4. Summarized framework of AI Governance through Official Guidance.
Table 1. The background information of the Big Ten universities.

| ID | Institution | Short Name | Location | Type | Enrollment 1 |
|----|-------------|------------|----------|------|--------------|
| U1 | University of Iowa | U Iowa | Iowa City, Iowa | Public | 31,452 |
| U2 | University of Wisconsin–Madison | U Wisconsin | Madison, Wisconsin | Public (land-grant) | 50,662 |
| U3 | University of Maryland, College Park | U Maryland | College Park, Maryland | Public (land-grant) | 40,813 |
| U4 | Michigan State University | Michigan State | East Lansing, Michigan | Public (land-grant) | 51,316 |
| U5 | Pennsylvania State University | Penn State | University Park, Pennsylvania | Public (land-grant) | 48,535 |
| U6 | Indiana University Bloomington | Indiana U | Bloomington, Indiana | Public | 47,527 |
| U7 | University of Michigan | U Michigan | Ann Arbor, Michigan | Public | 52,065 |
| U8 | University of Minnesota, Twin Cities | U Minnesota | Minneapolis-St. Paul, Minnesota | Public (land-grant) | 54,890 |
| U9 | Rutgers University-New Brunswick | Rutgers U | New Brunswick–Piscataway, New Jersey | Public (land-grant) | 50,617 |
| U10 | Purdue University | Purdue U | West Lafayette, Indiana | Public (land-grant) | 52,211 |
| U11 | University of Nebraska–Lincoln | U Nebraska | Lincoln, Nebraska | Public (land-grant) | 23,600 |
| U12 | Northwestern University | Northwestern U | Evanston, Illinois | Private not-for-profit | 22,801 |
| U13 | Ohio State University | Ohio State | Columbus, Ohio | Public (land-grant) | 60,046 |
| U14 | University of Illinois Urbana–Champaign | U Illinois | Urbana-Champaign, Illinois | Public (land-grant) | 56,403 |

1 The enrollment data, sourced from the National Center for Education Statistics as of Fall 2023, indicates a total of 642,938 students, representing approximately 4.5% of the overall U.S. college enrollment.
Table 2. Units that publish AI guidelines on a university level. Columns: U1–U14; rows: Information Technology; Teaching and Learning; President and Provost; University Libraries; AI Center; Additional units.
