Proceeding Paper

Integrating Large Language Models into Higher Education: Guidelines for Effective Implementation †

by
Karl de Fine Licht
Department of Technology Management and Economics, Chalmers University of Technology, 412 96 Gothenburg, Sweden
† Presented at the Workshop on AI and People, IS4SI Summit 2023, Beijing, China, 14–16 August 2023.
Comput. Sci. Math. Forum 2023, 8(1), 65; https://doi.org/10.3390/cmsf2023008065
Published: 11 August 2023
(This article belongs to the Proceedings of 2023 International Summit on the Study of Information)

Abstract

The emergence of large language models (LLMs), such as OpenAI’s GPT-4, introduces transformative opportunities for higher education across various disciplines. While the integration of LLMs into higher education has sparked significant debate over whether to fully incorporate these systems into curricula or restrict their use, this paper contends that too little attention has been paid to the process of establishing suitable guidelines for their usage. Given the importance of stakeholder buy-in, especially in the sense that stakeholders perceive the final decision as legitimate, this paper advocates for transparent and inclusive procedures that involve faculty, administration, and students during the integration process. Once a decision is made, clear justifications for LLM guidelines should be provided, paired with an effective implementation strategy, to ensure widespread acceptance and adherence.

1. Introduction

The emergence of large language models (LLMs), such as OpenAI’s GPT-4, introduces transformative opportunities for higher education across diverse fields such as literature and computer science [1,2,3,4]. These AI-driven platforms have the potential to dramatically reshape the landscape of teaching and learning, influencing pedagogical strategies and educational outcomes. A vibrant debate has unfolded surrounding the integration of LLMs into higher education. Should we fully integrate these systems and instruct students in their use? Should we, conversely, ban them and leave students to use them outside the university? Or should we do something in between?
However, irrespective of our position on whether to permit LLMs, there has been limited discussion of the appropriate procedures for establishing guidelines for their use. Whatever approach we choose requires substantial stakeholder buy-in, in the sense that stakeholders perceive the final decision as legitimate, and because the process of implementation influences this perception, the process itself must be thoroughly discussed. In this paper, I argue that when considering the process of integration, transparent and inclusive procedures involving faculty, administration, and students become paramount. Offering clear justifications for guidelines concerning LLMs, in combination with an effective implementation strategy, is also essential for securing widespread acceptance of the final decision. I first give some background on the question in the opening part of Section 2 and then argue my case in the rest of the section. I conclude with a brief outlook.

2. The Process of Implementing LLMs in Higher Education

At many universities, we are currently deliberating our approach to the emergence of large language models (LLMs). The topic touches on sensitive issues and can easily lead to disputes.
The divide typically runs between ‘AI-optimists’, who perceive LLMs primarily as a significant opportunity to enhance student learning and performance, and ‘AI-skeptics’, who worry that students will not acquire the necessary knowledge and will subsequently struggle post-graduation. This dichotomy is an instance of the general divide between technology optimists and pessimists (for references, see [5]). Both camps present (or, in principle, could present) cogent arguments that are not easily dismissed. The integration of LLMs into higher education is thus not merely about adopting a new tool; it is a complex process that necessitates careful planning and strategic execution.
Ultimately, the goal in producing guidelines for the use of LLMs is to create the best possible opportunities for students to receive a high-quality education. To achieve this, we need input from at least faculty and students [1], and ideally, all parties should perceive the final decision as legitimate, regardless of their standpoints. A vast body of research in political science and public administration shows that perceived legitimacy is vital for effective governance and for avoiding obstruction or suboptimal compliance (see, e.g., [6,7]). Such obstruction could range from faculty not abiding by the guidelines to students seizing every opportunity to cheat.
The deployment of LLMs in higher education, and its impact on perceived legitimacy, can be delineated into three stages: preparation, explanation, and implementation (see, e.g., [8]). The ‘preparation phase’ necessitates comprehensive research and input from faculty members and students to formulate guidelines or recommendations. The ‘explanation phase’ entails disclosing the selected course of action along with its underlying rationale. Lastly, the ‘implementation phase’ involves the application of these guidelines. Considering the pivotal role of legitimacy beliefs, meticulous planning of these processes is of utmost importance. Of course, outcome favorability will be a strong predictor of acceptance, and hence not everyone will be satisfied when the decision does not go their way (see, e.g., [9]). But careful planning can still help us do better rather than worse in a tough spot.
Beginning with the preparation phase, various strategies can enhance perceived legitimacy (see, for example, [5] for references). These include maintaining transparency about the process and involving all relevant stakeholders. Although the evidence that ‘transparency in process’ produces perceived legitimacy may not be as strong as one might assume [10], faculty and student involvement is still likely needed to achieve high-quality guidelines for LLMs. At many universities, faculty, and sometimes students, have a mandate over such guidelines, making it inconceivable to bypass them.
Nevertheless, when involving faculty members and students in the creation of guidelines, it is essential to anticipate the potential for polarization. To circumvent such polarization, research advocates the employment of deliberative norms [11,12]. These norms encompass principles such as inclusion, equality of discussion, reciprocity, reasoned justification, reflection, sincerity, and respect, and they can help mitigate opinion polarization, fostering a productive discourse whose results are acceptable to all participants irrespective of their stances. Even though these and similar norms should be ingrained in universities’ standard procedures, it remains paramount to remind participants of them periodically and to be ready to enforce them in various ways. This ensures that both faculty members and students maintain appropriate conduct in settings that deviate slightly from the usual contexts in which such discussions occur.
In the ‘explanation phase’, it is crucial that justifications not only reference the preceding process but also clearly outline the reasoning behind the decisions. Such justifications should be anchored in a set of shared values, which in this context likely include the quality of education and concern for students’ future careers; they should also acknowledge the strong counterarguments and explain why the final decision was made despite them. This approach fulfills the criteria for a satisfactory explanation with respect to both perceived and actual legitimacy [6,13,14]. It can also be beneficial to deliver the decision and justification in person [15].
For example, if a university bans the use of LLMs, as some have, it is crucial to carefully describe the strong arguments in favor of the ban without ‘strawmanning’ or ignoring the arguments against it. Delivering this information directly to faculty and students might be the best approach. Formulating the best arguments for and against the decision in the knowledge that they will be made public could also improve the quality of the decision-making process itself (see, e.g., [5]). This type of communication can further ‘soften the blow’ for those opposed to the decision, signaling that it was made after careful consideration, and could lead to higher perceived legitimacy than if such steps were not taken [16].
Regarding explanations, it is noteworthy that satisfactory justifications for unfavorable decisions can foster a positive disposition toward the outcome and inspire compliance [14,17]. Explanations emphasizing a policy’s benefits, such as enhancing the quality of education, might therefore be received more favorably than those citing inevitable external factors, such as the unavoidable spread of LLMs, as excuses. However, attempts to sway perceptions of a decision should be undertaken cautiously: manipulating students and faculty fundamentally undermines legitimacy and fails to secure the high-quality decisions universities aspire to make. That said, if sound reasons exist for a given policy, irrespective of whether it includes or excludes LLMs, these reasons should probably be incorporated into the explanation of the decision.
During the implementation phase, the focus is not on the decision itself but on its execution (see, e.g., [6]). This phase frequently allows for compensatory measures that address individual instrumental concerns and thereby foster acceptance [18]. A compelling example of this strategy comes from conflict resolution, where practitioners emphasize the importance of understanding the needs of the dissenting party to secure their compliance [19]. Additionally, decision makers can signal their commitment to ameliorating the negative impacts of a decision by planning for compensatory measures, a readiness they can show irrespective of direct consultation with affected parties.
Applying this strategy to higher education, one feasible approach might involve providing the ‘losers’ with resources, time, and larger portions of the course curriculum. If the AI-skeptics were the ones to lose out, they could concentrate on enhancing student language or programming skills without resorting to LLMs. They might also use these tools exclusively as teaching aids to improve writing and coding skills rather than letting students use them for these tasks. Conversely, if the AI-optimists made up the ‘losing side’, they could be granted a similar allocation of resources and time to develop methods that positively utilize LLMs, impacting learning outcomes favorably despite current concerns. These groups could also be assigned portions of the curriculum to experiment with their ideas, instructing students on how to use LLMs, even when the wider student body is typically barred from doing so.
Finally, it should be noted that even though the process of implementing LLM guidelines has been described here as a single pass, it is an iterative process that hinges on evaluation and feedback for continuous improvement and adaptation. This cyclical process helps identify potential shortcomings and ensures the guidelines remain relevant and effective. Feedback from students and teachers, classroom observations, and academic performance data could all serve as valuable inputs for evaluation. This implies that the guidelines should undergo regular revision, with justifications adapted correspondingly to uphold high perceived legitimacy and educational quality.

3. Conclusions and Outlook

In conclusion, the potential integration of large language models (LLMs) into higher education is a complex and multifaceted process. It requires meticulous planning, strategic execution, and careful stakeholder engagement. This process would involve developing guidelines for the use of LLMs that would support broad acceptance and elevate educational quality. These guidelines should be underpinned by robust justifications for the potential inclusion of LLMs, with a specific implementation process tailored to enhance acceptance and quality outcomes. However, it is important to note that universities might also choose not to allow the use of LLMs, in which case alternative strategies for enhancing learning would need to be identified.
Should universities decide to introduce LLMs, this would need to align with the core objectives of higher education, which encompass skills development in areas such as writing and presenting, critical thinking, advanced methodologies, and programming, among others. Students would then need to learn how to use these advanced models responsibly and effectively and to understand the benefits, limitations, and ethical implications associated with LLMs. To this end, this paper provides a starting point for developing comprehensive guidelines and considerations for teachers and examiners who might be considering the use of LLMs in their teaching and assessment strategies. Emphasizing continuous improvement and adaptability, the discussion underlines the necessity of an iterative approach to potential LLM integration, including maintaining transparency and frequent engagement with stakeholders, irrespective of whether the decision is to incorporate or exclude LLMs.
As we look to the future, the landscape of LLMs in education appears expansive yet still uncertain. With the continuing evolution of AI, there could be a shift towards more personalized, interactive, and integrated LLMs in teaching and learning. However, if universities decide against their use, this will likely influence the direction of AI development in the education sector. In both scenarios, the ethical, privacy, and pedagogical considerations linked to LLMs will become more pronounced, demanding ongoing dialogue and research. This dynamic situation also uncovers numerous areas for future research, such as the long-term impacts of LLM usage—or lack thereof—on students’ learning experiences and wellbeing, the effects of LLM integration on teaching practices, and the potential impact on educational equity. Further, exploring effective ways to measure educational objectives, with or without LLMs, is another significant area for future study.

Funding

This paper was funded by CHAIR and Chalmers University of Technology.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

No new data were created.

Acknowledgments

I would like to thank Gordana Dodig Crnkovic for kindly inviting me to take part in this conference and CHAIR for paying the fees.

Conflicts of Interest

The author declares no conflict of interest.

References

1. Gimpel, H.; Hall, K.; Decker, S.; Eymann, T.; Lämmermann, L.; Mädche, A.; Röglinger, R.; Ruiner, C.; Schoch, M.; Schoop, M.; et al. Unlocking the Power of Generative AI Models and Systems Such as GPT-4 and ChatGPT for Higher Education: A Guide for Students and Lecturers; University of Hohenheim: Stuttgart, Germany, 2023.
2. Mollick, E.R.; Mollick, L. New Modes of Learning Enabled by AI Chatbots: Three Methods and Assignments. SSRN 2022. Available online: https://ssrn.com/abstract=4300783 (accessed on 30 July 2023).
3. Mollick, E.R.; Mollick, L. Using AI to Implement Effective Teaching Strategies in Classrooms: Five Strategies, Including Prompts. SSRN 2023. Available online: https://ssrn.com/abstract=4391243 (accessed on 30 July 2023).
4. Rudolph, J.; Tan, S.; Tan, S. ChatGPT: Bullshit Spewer or the End of Traditional Assessments in Higher Education? J. Appl. Learn. Teach. 2023, 6, 1.
5. Danaher, J. Techno-optimism: An Analysis, an Evaluation and a Modest Defence. Philos. Technol. 2022, 35, 54.
6. de Fine Licht, K.; de Fine Licht, J. Artificial Intelligence, Transparency, and Public Decision-Making: Why Explanations Are Key When Trying to Produce Perceived Legitimacy. AI Soc. 2020, 35, 917–926.
7. Tyler, T. Psychological Perspectives on Legitimacy and Legitimation. Annu. Rev. Psychol. 2006, 57, 375–400.
8. de Fine Licht, J.; Agerberg, M.; Esaiasson, P. “It’s Not Over When It’s Over”: Post-Decision Arrangements and Empirical Legitimacy. J. Public Adm. Res. Theory 2022, 32, 183–199.
9. Esaiasson, P.; Persson, M.; Gilljam, M.; Lindholm, T. Reconsidering the Role of Procedures for Decision Acceptance. Br. J. Political Sci. 2019, 49, 291–314.
10. Cucciniello, M.; Porumbescu, G.A.; Grimmelikhuijsen, S. 25 Years of Transparency Research: Evidence and Future Directions. Public Adm. Rev. 2017, 77, 32–44.
11. Strandberg, K.; Himmelroos, S.; Grönlund, K. Do Discussions in Like-Minded Groups Necessarily Lead to More Extreme Opinions? Deliberative Democracy and Group Polarization. Int. Political Sci. Rev. 2019, 40, 41–57.
12. Grönlund, K.; Herne, K.; Setälä, M. Does Enclave Deliberation Polarize Opinions? Political Behav. 2015, 37, 995–1020.
13. McGraw, K.M. Managing Blame: An Experimental Test of the Effects of Political Accounts. Am. Political Sci. Rev. 1991, 85, 1133–1157.
14. Colquitt, J.A. On the Dimensionality of Organizational Justice: A Construct Validation of a Measure. J. Appl. Psychol. 2001, 86, 386.
15. Christensen, H.S. How Citizens Evaluate Participatory Processes: A Conjoint Analysis. Eur. Political Sci. Rev. 2020, 12, 239–253.
16. Goovaerts, I.; de Fine Licht, J.; Marien, S. Legitimacy Perceptions in Times of Participatory Decision-Making: Examining Elite Communication When Participatory Process Outcomes and Political Decisions Clash. In Proceedings of the APSA Annual Conference, Montréal, QC, Canada, 15–18 September 2022.
17. Burlacu, D.; Vössing, K. Beyond Blame Avoidance: Elite Explanations for Social Policy Reform and Their Effects on Public Opinion. In Proceedings of the ECPR Joint Sessions, Nicosia, Cyprus, 10–14 April 2018.
18. Hildreth, J.A.D.; Moore, D.A.; Blader, S.L. Revisiting the Instrumentality of Voice: Having Voice in the Process Makes People Think They Will Get What They Want. Soc. Justice Res. 2014, 27, 209–230.
19. Lewis, M.; Woodhull, J. Inside the No: Five Steps to Decisions That Last; M. Lewis: Pretoria, South Africa, 2008.