Review

A Scoping Review of Generative Artificial Intelligence (GenAI) and Pedagogy Nexus: Implications for the Higher Education Sector

Subas P. Dhakal
UNE Business School, University of New England, Elm Avenue, Armidale, NSW 2351, Australia
Metrics 2025, 2(3), 17; https://doi.org/10.3390/metrics2030017
Submission received: 19 June 2025 / Revised: 13 August 2025 / Accepted: 26 August 2025 / Published: 1 September 2025

Abstract

The higher education sector is increasingly being reshaped and reimagined in the era of Generative Artificial Intelligence (GenAI). For instance, the promise of GenAI to innovate pedagogical approaches in the way teaching and learning (T&L) occur across universities has been increasingly recognised. It is in this context that the question of how literature on the GenAI and Pedagogy (GenAIP) nexus has evolved in recent years has the potential to generate insights that inform and shape T&L policies and practices. However, the systematic analysis of scholarly literature on the GenAIP nexus has remained under the radar. This study responds to this gap and draws on PRISMA for the Scoping Review (PRISMA-ScR) method to carry out a Bibliometric Scoping Review of the GenAIP nexus. It examines scholarly research outputs (n = 310) published between 2023 and 2025 that are available on the Scopus database with two research objectives: (i) to ascertain research trends, thematic emphasis, prominent authors, countries and outlets, and (ii) to map various pedagogical approaches. Beyond revealing that authors from developing economies have produced significantly fewer research outputs than those from developed economies, the analysis highlights an urgent need for appropriate GenAI policies and curriculum redesign. It also documents 40 distinct pedagogical approaches reported in the literature. In light of the growing academic integrity challenges posed by GenAI, this article discusses three key implications for the higher education sector and future research: (i) redesigning courses and assessments to foster AI literacy, (ii) developing fit-for-purpose academic integrity policies, and (iii) delivering AI-focused professional development for academic staff.

1. Introduction

Generative artificial intelligence (GenAI), such as ChatGPT and Gemini [1], is having an overwhelming impact on how teaching and learning (T&L) occur across universities. In general, GenAI refers to “algorithms that can be used to create new content, including audio, code, images, text, simulations, and videos” [2] (para 1). On the one hand, the promise of GenAI to innovate pedagogical approaches in the way T&L occur across universities has been increasingly recognised [3]. On the other hand, the growing sophistication of GenAI tools in a relatively short period has raised concerns about the effectiveness and relevance of traditional T&L practices [4]. Consequently, there is a growing recognition that T&L practices, often underpinned by concepts such as andragogy, heutagogy, and pedagogy, need to be recalibrated in the GenAI era. Succinct descriptions of these constructs are as follows: “(i) Pedagogy is the teaching of children, or dependent personalities, (ii) Andragogy is the facilitation of learning for adults, who are self-directed learners, and (iii) Heutagogy is the management of learning for self-managed learners” [5] (para 1). Although these three different concepts have nuanced differences, they represent a lens through which the purpose and mechanics of T&L practices can be understood. Notwithstanding the scholarly debate [6] on the blurry dichotomy between pedagogy and andragogy as well as the theoretical foundations on the trichotomy of pedagogy, andragogy, and heutagogy [7], these three concepts have been interpreted [8] as being in a continuum with a variation in T&L approaches, that is, teacher-led, self-directed, and self-determined learning processes, respectively. Since pedagogy is the most familiar and commonly used representative concept of the three in the higher education setting [9], this study concentrates explicitly on pedagogy. For this study, Waring and Evans’ [10] conceptualisation of pedagogy is adopted—one that views it as a construct encompassing the nature and purpose of effective learning, along with the interactions between teachers, students, the learning environment, and learning tasks.
Three broad T&L elements are integral to the GenAI and Pedagogy (GenAIP) nexus: (i) assessment, (ii) policy, and (iii) professional development. For example, (i) assessment design based on pedagogies such as authentic assessment and experiential learning has the potential to ensure learning outcomes are met regardless of GenAI usage [11]; (ii) policies grounded in a nuanced understanding of how GenAI can not only align with pedagogical objectives but also safeguard academic integrity and responsible use are necessary [12]; and (iii) it has been posited that, "responding to GenAI technologies, academics press for professional learning and development that informs pedagogical practice and policy development" [13] (p. 547). GenAI has challenged traditional pedagogies and associated T&L practices because it allows learners to complete assignments without acquiring knowledge or demonstrating actual capability [14]. This means that GenAI not only offers transformative pedagogical prospects but also poses ethical dilemmas in ensuring learning outcomes, as well as policy challenges surrounding academic integrity in the higher education sector. It is in this context that the question of how literature on the GenAIP nexus has evolved in recent years can generate insights that inform and shape T&L policies and practices. However, systematic analysis of scholarly literature on the GenAIP nexus is lacking. This study responds to this gap and draws on the PRISMA extension for Scoping Reviews (PRISMA-ScR) to carry out a Bibliometric Scoping Review (BSR) of the GenAIP nexus.
This article is divided into five parts. Following this introduction, the materials and methods used to carry out a BSR are presented. Subsequently, the article reports on its findings and discusses the implications before concluding with final remarks.

2. Materials and Methods

2.1. Bibliometric Scoping Review

A Bibliometric Scoping Review (BSR) is a methodological approach that combines a scoping review with bibliometric analysis to map the extent and characteristics of research on a particular topic, utilising quantitative techniques to analyse trends and identify knowledge gaps. On the one hand, bibliometric analysis focuses on gauging the reach of literature within a field of study; a BSR is therefore aligned with an exploratory research approach, as it aims to examine the magnitude or scope of emerging research trends [15]. On the other hand, scoping reviews help map the existing literature to identify prevalent notions as well as gaps in knowledge [16]. This study adopts the PRISMA extension for Scoping Reviews (PRISMA-ScR) for the purpose of "synthesising evidence and assessing the scope of literature on a topic" [17] (para 1). The primary advantage of bibliometric techniques in the context of scoping reviews lies in capturing unfolding issues or evolving research areas "iteratively and expeditiously" [18] (p. 357). This study examines the GenAIP nexus with two research objectives (RO), and Figure 1 captures the screening process based on the PRISMA-ScR guidelines [19]:
RO1: to ascertain research trends, thematic emphasis, prominent authors, countries and outlets, and
RO2: to map various pedagogical approaches found in the literature.

2.2. Data Source and Analysis

This study relied solely on the Scopus database, primarily because sourcing information from multiple databases complicates screening for duplicate outputs. Scholars have noted that the Scopus database offers a broader scope and scale of coverage, efficient indexing, and a more contemporary depiction of research outputs [20,21]. More importantly, utilising multiple databases does not necessarily add value and can instead yield diminishing returns [22,23]. Figure 2 shows the reproducible code used to search for scholarly literature.
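Since Figure 2 is reproduced as an image, the following Python sketch illustrates how a comparable search could be issued programmatically against the Elsevier Scopus Search API. It is an assumption-laden illustration rather than the study's actual code: the query string, document-type codes, and API key are placeholders, and readers should consult Figure 2 for the exact search used.

```python
# Illustrative sketch only (not the study's Figure 2 code): querying the
# Elsevier Scopus Search API for GenAI-and-pedagogy records. Requires a valid
# institutional API key.
import requests

API_KEY = "YOUR-ELSEVIER-API-KEY"  # placeholder
BASE_URL = "https://api.elsevier.com/content/search/scopus"

# Hypothetical query: GenAI variants combined with pedagog*, limited to
# 2023-2025 and to articles (ar), book chapters (ch), and conference papers (cp).
query = (
    'TITLE-ABS-KEY(("generative artificial intelligence" OR "generative AI" '
    'OR "GenAI") AND pedagog*) '
    'AND PUBYEAR > 2022 AND PUBYEAR < 2026 '
    'AND (DOCTYPE(ar) OR DOCTYPE(ch) OR DOCTYPE(cp))'
)

response = requests.get(
    BASE_URL,
    headers={"X-ELS-APIKey": API_KEY, "Accept": "application/json"},
    params={"query": query, "count": 25},
)
response.raise_for_status()
results = response.json()["search-results"]

print("Total records found:", results["opensearch:totalResults"])
for entry in results.get("entry", []):
    print(entry.get("dc:title"), "-", entry.get("prism:coverDate"))
```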
Applying the inclusion and exclusion criteria based on the PRISMA guidelines [19] yielded 310 research outputs. Although bibliometric analysis generally relies only on journal articles, given that the GenAIP research landscape is still in its early years, three categories of scholarly items—journal articles, book chapters, and conference papers—were included in the analysis [24,25]. Tabular and graphical summaries of the analysis were generated using Microsoft Excel and VOSviewer 1.6.20 software.
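As a complementary illustration of the screening step, the short pandas sketch below filters a standard Scopus CSV export to the three included document types and the 2023–2025 window. The file name and column labels ("Document Type", "Year") are assumptions based on a typical Scopus export rather than details reported in the study.

```python
# Illustrative sketch only: screening a hypothetical Scopus CSV export down to
# the three included document types and the 2023-2025 publication window.
import pandas as pd

records = pd.read_csv("scopus_export.csv")  # hypothetical export file

included_types = {"Article", "Book chapter", "Conference paper"}
screened = records[
    records["Document Type"].isin(included_types)
    & records["Year"].between(2023, 2025)
]

# Breakdown by document type (journal articles vs. conference papers vs. chapters).
print(screened["Document Type"].value_counts())
screened.to_csv("screened_records.csv", index=False)
```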

3. Results

RO1: to ascertain research trends, thematic emphasis, prominent authors, countries and outlets

3.1. Research Trend

The search was conducted on 1 June 2025, and of the 359 research outputs retrieved, 310 published between 2023 and 2025 (Figure 3) were selected for analysis after screening. These comprised 192 journal articles (62%), 70 conference papers (23%), and 48 book chapters (15%).
The research outputs belonged to 23 different disciplines (Figure 4). The top five disciplines were Social Sciences (n = 238), Computer Science (n = 134), Arts and Humanities (n = 52), Engineering (n = 40), and Business, Management, and Accounting (n = 27). It is worth noting that some outputs are classified as multidisciplinary (n = 4) and assigned to multiple discipline categories, which may potentially affect the accuracy of the distribution count [24].

3.2. Top Outlets and Articles

Table 1 lists the most prominent outlets (n ≥ 4) in the Scopus database and their quality metrics, e.g., CiteScore (CS), Impact Factor (IF), Scientific Journal Rankings (SJR), and SJR Quartiles where applicable. That these nine outlets are the main venues publishing studies on the GenAIP nexus is not surprising, given the observation that, "to realise the transformative potential of GenAI, educators must reimagine the role of AI, transitioning from mere automation to meaningful learner-teacher-GenAI interaction and collaboration" [3] (p. 12).
The journal Computers and Education: Artificial Intelligence had the most outputs (n = 8) and also tops the list in terms of quality metrics. The journal aims to "afford a worldwide platform for researchers, developers, and educators to present their research studies, exchange new ideas, and demonstrate novel systems and pedagogical innovations on the research topics concerning applications of artificial intelligence (AI) in education and AI education" [26] (para 1). The Journal of Applied Learning and Teaching and Reading Research Quarterly were the second- and third-ranked outlets based on the quality metrics.
Three prominent conference proceedings—ASEE Annual Conference and Exposition Conference Proceedings, ACM International Conference Proceeding Series, and Lecture Notes in Computer Science—feature on the list of outlets. Although the relative importance of proceedings in terms of scholarly contribution is shrinking over time, their impact in the computer science discipline should not be underestimated [27].

3.3. Top-Cited Outputs

Analysing top-cited research outputs provides valuable insights into research trends as well as a measure of their influence and impact. Of the 310 outputs, five had citation counts exceeding 100 in both the Scopus database and Google Scholar (Table 2). The most cited output was [28], titled "Examining science education in ChatGPT: An exploratory study of generative artificial intelligence", in which the author examines some of the ways educators could harness the potential of ChatGPT to foster science-specific pedagogies.

3.4. Prominent Authors and Countries

A total of 160 authors were associated with the 310 research outputs. McKnight, L. (Deakin University, Australia) was the most prominent author, with three research outputs. Authors collectively represented 63 nations, with the United States of America (n = 123) producing the largest number of outputs. The other prominent countries included Australia (n = 36), the United Kingdom (n = 33), China (n = 22), Canada (n = 17), Hong Kong (n = 16), and India (n = 15).
Although the countries represent a mix of developed and emerging economies from around the world, there appears to be a paucity of research produced by authors from developing economies overall. This is consistent with the observation that the "AI revolution has exposed a divide between developed and developing economies" [33] (p. 8). Consequently, the divide in research outputs can be viewed as a symptom of the broader phenomenon in which access to and skills for benefiting from GenAI remain limited for both learners and educators in many countries in the Global South.

3.5. Research Focus

Bibliometric data from Scopus were exported to VOSviewer. A co-occurrence network analysis was conducted using author keywords (with a minimum occurrence threshold of three) to identify prevalent research themes within the literature [34]. Of the 1491 keywords, 129 met the threshold, and after removing generic words, such as country names, 126 keywords were used to create a map (Figure 5). Keyword analysis helps unravel the trends and patterns of scholarly research and knowledge [20]. The network map shows five distinct clusters: (i) Higher Education (38 items in red), (ii) Generative Artificial Intelligence (24 items in yellow), (iii) Students (23 items in blue), (iv) Academic Integrity (22 items in green), and (v) Pedagogy (19 items in purple).
First, the Higher Education cluster illustrates the rapid, collective impact of GenAI across universities. However, the mechanics of this transition are often fuzzy. For example, the current trend suggests that "higher education institutions are considering revamping assessment and evaluative methods, fostering policies that uphold integrity and guide educators effectively" [35] (p. 140). Second, the Generative Artificial Intelligence cluster encompasses themes related to how universities and academia in general are responding to disruptions associated with GenAI. For instance, it has been argued that universities should adapt to enhance the AI literacy of their graduates and focus on GenAI skills to future-proof their prospects in the evolving labour market [29]. Third, the Students cluster revolves around the need for university administrators to not only formulate apt policies and guidelines to promote the ethical and responsible use of GenAI but also provide professional development support for teachers. For example, universities can no longer ignore the favourable perceptions of students towards GenAI, given its personalised learning support [36]. Additionally, the importance of GenAI-specific professional development for educators should not be underestimated [37]. Fourth, the Academic Integrity cluster encompasses both the potential and the pitfalls of GenAI in the context of T&L practices. The focus is primarily on simplistic approaches to maintaining academic integrity, such as prevention, detection, monitoring, and evaluation of GenAI use [12]. However, a growing number of scholars have pointed out the need for academic integrity policies that enable the responsible and ethical utilisation of GenAI [4,30]. Fifth, the Pedagogy cluster groups pedagogical approaches that enhance student engagement and experiences by leveraging GenAI rather than resisting it. For example, it has been posited that "… pedagogy for an AI-mediated world involves learning to work with opaque, partial and ambiguous situations, which reflect the entangled relationships between people and technologies" [38] (p. 1160).
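For readers wishing to approximate the keyword analysis outside VOSviewer, the Python sketch below counts author-keyword occurrences, applies the minimum occurrence threshold of three used above, and tallies pairwise co-occurrences. It is a simplified stand-in that assumes a semicolon-separated "Author Keywords" column in the export; it does not reproduce VOSviewer's clustering or map layout.

```python
# Illustrative sketch only: author-keyword co-occurrence counts with a minimum
# occurrence threshold, approximating (not reproducing) the VOSviewer analysis.
from collections import Counter
from itertools import combinations

import pandas as pd

records = pd.read_csv("screened_records.csv")  # hypothetical screened export

# Scopus separates author keywords with semicolons.
keyword_lists = (
    records["Author Keywords"]
    .dropna()
    .apply(lambda cell: [kw.strip().lower() for kw in cell.split(";") if kw.strip()])
)

occurrences = Counter(kw for kws in keyword_lists for kw in kws)
kept = {kw for kw, n in occurrences.items() if n >= 3}  # threshold used in the study

# Pairwise co-occurrences among the retained keywords.
co_occurrence = Counter()
for kws in keyword_lists:
    retained = sorted(set(kws) & kept)
    co_occurrence.update(combinations(retained, 2))

for (kw1, kw2), n in co_occurrence.most_common(10):
    print(f"{kw1} <-> {kw2}: {n}")
```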

3.6. Collaborative Research Clusters

The analysis examined influential scholars and collaborative research clusters. It has been noted that outputs produced through collaborative efforts tend to have a greater impact and, consequently, attract more citations [39]. In the co-authorship analysis, authors were chosen as the unit of analysis and the fractional counting method was applied. The analysis revealed four prominent institutions (n ≥ 5) in terms of research outputs tied to them: Purdue University, USA (n = 6), Deakin University, Australia (n = 6), the University of Hong Kong, Hong Kong (n = 5), and the Education University of Hong Kong, Hong Kong (n = 5). VOSviewer's overlay network map shows 30 authors connected in three distinct clusters (Figure 6).
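To convey the intuition behind fractional counting, the sketch below gives each output a total credit of 1 shared equally among its co-authors and ranks authors by their cumulative share. This is a simplified stand-in: VOSviewer's fractional counting weights co-authorship links rather than outputs, and the semicolon-separated "Authors" column is an assumption about the export format.

```python
# Illustrative sketch only: simplified fractional author counting, in which each
# output contributes a total credit of 1 split equally among its co-authors.
from collections import defaultdict

import pandas as pd

records = pd.read_csv("screened_records.csv")  # hypothetical screened export

credit = defaultdict(float)
for cell in records["Authors"].dropna():  # assumed semicolon-separated author list
    authors = [name.strip() for name in cell.split(";") if name.strip()]
    if not authors:
        continue
    share = 1.0 / len(authors)
    for name in authors:
        credit[name] += share

# Ten authors with the largest fractional share of the analysed outputs.
for name, share in sorted(credit.items(), key=lambda item: item[1], reverse=True)[:10]:
    print(f"{name}: {share:.2f}")
```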
RO2: to map various pedagogical approaches found in the literature.

3.7. Pedagogical Approaches

Keywords associated with pedagogical approaches (n ≥ 2) were extracted from the list generated by VOSviewer. A total of 40 pedagogical approaches were detected, and MS Excel was utilised to create a radar map (Figure 7). The top five approaches, based on occurrence, were (i) AI literacy (n = 21), (ii) critical thinking (n = 11), (iii) design thinking pedagogy (n = 11), (iv) Bloom's taxonomy (n = 10), and (v) writing pedagogy (n = 10).
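The radar map in Figure 7 was produced in MS Excel; as an equivalent, the matplotlib sketch below plots only the five most frequent approaches listed above. It is illustrative rather than a reproduction of the published figure, which covers all 40 approaches.

```python
# Illustrative sketch only: a radar (spider) chart of the five most frequent
# pedagogical approaches reported in the article; the published figure was
# created in MS Excel and covers all 40 approaches.
import numpy as np
import matplotlib.pyplot as plt

approaches = ["AI literacy", "Critical thinking", "Design thinking pedagogy",
              "Bloom's taxonomy", "Writing pedagogy"]
counts = [21, 11, 11, 10, 10]

# Close the polygon by repeating the first point.
angles = np.linspace(0, 2 * np.pi, len(approaches), endpoint=False).tolist()
angles += angles[:1]
values = counts + counts[:1]

fig, ax = plt.subplots(subplot_kw={"projection": "polar"})
ax.plot(angles, values, linewidth=1.5)
ax.fill(angles, values, alpha=0.25)
ax.set_xticks(angles[:-1])
ax.set_xticklabels(approaches, fontsize=8)
ax.set_title("Top pedagogical approaches by occurrence")
plt.tight_layout()
plt.show()
```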
The mapping illustrates the dual nature of the GenAIP nexus. On the one hand, the scholarly outputs have acknowledged that integrating AI literacy as a core component of pedagogy is essential. As AI has become increasingly ubiquitous in society, university programmes must prioritise AI literacy and competency as key graduate attributes. For instance, it has been contended that "basic understanding and knowledge of AI should be a critical component of student education to foster successful global citizens" [40] (p. 1). On the other hand, universities cannot allow GenAI to erode critical thinking pedagogy, i.e., a teaching approach that emphasises developing learners' abilities to analyse information, form reasoned judgements, and solve problems effectively. For example, it has been argued that scholarly research on the GenAIP nexus "… should engage in questions about the interrelationships between GenAI use and both individual and social critical thinking, and how … education can help students counteract GenAI's threats to critical thinking while also leveraging its benefits" [41] (p. 373).
Design thinking pedagogy (an iterative approach to addressing real-world challenges) has been shown, when integrated with GenAI, to improve student learning outcomes by fostering creativity and ethical reasoning, particularly when supported by reflective and team-based learning [42]. Furthermore, the adequacy of traditional frameworks such as Bloom's Taxonomy (one of the most widely used pedagogical tools for shaping assessment design and assuring learning) has been questioned in the era of GenAI [43]. At the same time, the utility of emerging GenAI-specific pedagogies, such as prompt engineering (the practice of crafting instructions, or prompts, that guide GenAI in generating desired outputs and fostering learning outcomes), has gained prominence [44]. Similarly, scholars have emphasised the need to reimagine how writing is taught and learnt in the GenAI era [45]. The map therefore captures divergent needs: (i) innovative pedagogical approaches in the context of the declining utility of traditional pedagogies, and (ii) recalibration of assessment design, academic integrity policies, and the professional development needs of academics.

4. Discussion

The BSR indicates that GenAI has emerged as both a significant challenge and an opportunity for the higher education sector. Apart from the revelation that authors from developing economies have produced a relatively small number of research outputs compared to those from developed economies, the analysis uncovered a total of 40 distinct pedagogical approaches in the literature. These findings suggest that pedagogical underpinnings have not kept pace with the GenAI disruption for various reasons. For instance, regulatory bodies in some developed economies have emphasised the need for comprehensive and measured policy responses to adopt, integrate, and harness GenAI in teaching and learning (T&L) practices across universities. However, given the rapid and continuous evolution of these technologies, the way universities approach GenAI is likely to be influenced by the GenAI-readiness of academics, disciplinary contexts, and accreditation requirements [46,47,48,49]. It is in this context that three implications of the findings on the GenAIP nexus for the higher education sector (Figure 8) and future research are thematically discussed next.

4.1. Redesigning Courses and Assessments to Advance AI Literacy

The increasing ubiquity of GenAI in everyday contexts means that course and assessment designs warrant pedagogical innovations in the broader context of AI literacy. Appropriate course and assessment designs can mitigate the risks of GenAI misuse by incorporating high-security elements that validate learning outcomes and teach students to use GenAI critically and responsibly, adhering to educational and professional standards. Given that AI literacy is expected to become an integral component of graduate learning outcomes [50,51], course and assessment designs should adopt a thinking-outside-the-box approach and not be limited to controlling whether and to what extent students can utilise GenAI. For example, there is a growing recognition that "… merely incorporating AI literacy into the curriculum will not suffice. Instead, a cultural shift towards digital and technological proficiency among students is necessary" [52] (p. 2). More importantly, assessment design in the GenAI era can benefit from honing pedagogies that focus on minimising GenAI dependency while maximising AI literacy so that learners understand how GenAI tools are socially constructed and interpreted [53]. Future studies could focus on the status, impact, and effectiveness of pedagogically innovative courses and assessments in promoting AI literacy.

4.2. Reshaping Fit-for-Purpose Academic Integrity Policy

The higher education sector has generally struggled to develop an apt GenAI policy response in a timely fashion. While some universities have adopted a zero-resistance approach, others have adopted a zero-tolerance approach. For example, one of the leading Australian universities has fully endorsed the responsible use of GenAI [54]. In contrast, another university [55] has taken a hardline stance on the responsible use of GenAI. Such policy divergence on GenAI use is labelled as a ‘Two-Lane approach’ [56], i.e., Lane 1, where any use of GenAI is prohibited, and Lane 2, where any use of GenAI is permitted (p. 1). Other scholars have focused on policies to ensure academic integrity through the Artificial Intelligence Assessment Scale [12], which is often equated with the traffic light metaphor [14]. However, the utility of such academic integrity policy options is already being questioned [57]. Ideally, a fit-for-purpose policy should include clear regulations governing the responsible use of GenAI tools, explicitly define the roles and responsibilities of all stakeholders, such as students, teachers, and administrators, and adopt and communicate procedures for detecting academic misconduct, along with associated penalties and sanctions [58]. However, despite the urgent need to develop appropriate policies for the responsible use of GenAI, universities have generally struggled with this task [59]. Given the importance of fit-for-purpose policies in proactively integrating GenAI into curricula in line with the principles of responsible use of GenAI [60], future research on the GenAIP nexus could explore how policies have evolved across universities and assess their impact on both academics and learners.

4.3. Reimagining GenAI-Centric Professional Development of Educators

GenAI-centric professional development for academics is crucial for effectively integrating this technology into teaching, research, and administrative tasks. For example, the Tertiary Education Quality and Standards Agency (TEQSA), Australia's independent national quality assurance and regulatory agency for higher education, has recommended that universities complement the focus on preventing, detecting, and evaluating students' GenAI use with professional development efforts that foster capabilities to design courses and assessments [61] (para 10). However, studies have shown that piecemeal professional development initiatives and policy ambiguities across universities have often impeded learning outcomes [62]. More importantly, GenAI tools can enhance student learning and pedagogical innovations only when educators are supported in developing the capabilities to pursue innovative pedagogical interventions [30]. As professional development initiatives, alongside institutional support, are essential for fostering AI-readiness among academics [13], future research on the GenAIP nexus could investigate the needs, effectiveness, and impact of GenAI-focused professional development on academic efficiency and productivity.

4.4. Limitations

The BSR outlined above has certain limitations, especially given the rapid pace of GenAI development. First, the reproducible code used to search the Scopus database included only the generic term “GenAI” (and its variants), while excluding more technical terms such as “large language models” (LLMs). Second, grey literature—such as industry reports and university policies on GenAI—was not considered. Future research could integrate grey literature into an evidence-based approach to provide a more comprehensive understanding of the GenAIP nexus and its implications.

5. Conclusions

The higher education sector is increasingly being reshaped and reimagined in the GenAI era. This article analysed 310 scholarly outputs related to the GenAIP nexus that are available in the Scopus database. This article’s main contribution lies in revealing the GenAI divide in research outputs between developed and developing economies and documenting 40 distinct pedagogical approaches reported in the literature. In addition, it highlights an urgent need to reshape academic integrity policies, redesign the curriculum, and reimagine professional development initiatives for academics in the GenAI era. The findings reported in the article concur with the observations of other scholars [63,64,65]. The analysis demonstrates the duality of GenAI, as it can have both positive and negative consequences for T&L practices. The article recommends that future research on the GenAIP nexus examine (i) the status, impact, and effectiveness of pedagogically innovative courses and assessments in promoting AI literacy, (ii) the evolution and influence of GenAI policies, and (iii) the effects of GenAI-focused professional development on academic efficiency and productivity.

Funding

This research received no external funding.

Data Availability Statement

Not applicable.

Conflicts of Interest

The author declares no conflicts of interest.

References

  1. Barros, A.; Prasad, A.; Śliwa, M. Generative artificial intelligence and academia: Implications for research, teaching and service. Manag. Learn. 2023, 54, 597–604. [Google Scholar] [CrossRef]
  2. McKinsey. What Is Generative AI? 2024. Available online: https://www.mckinsey.com/featured-insights/mckinsey-explainers/what-is-generative-ai (accessed on 15 June 2025).
  3. Qian, Y. Pedagogical Applications of Generative AI in Higher Education: A Systematic Review of the Field. TechTrends 2025, 1–16. [Google Scholar] [CrossRef]
  4. Yusuf, A.; Pervin, N.; Román-González, M. Generative AI and the future of higher education: A threat to academic integrity or reformation? Evidence from multicultural perspectives. Int. J. Educ. Technol. High. Educ. 2024, 21, 21. [Google Scholar] [CrossRef]
  5. University of Illinois. Pedagogy, Andragogy and Heutagogy. 2025. Available online: https://www.uis.edu/colrs/teaching-resources/foundations-good-teaching/pedagogy-andragogy-heutagogy#:~:text=Pedagogy%20is%20the%20teaching%20of,learning%20for%20self%2Dmanaged%20learners (accessed on 1 July 2025).
  6. Holmes, G.; Abington-Cooper, M. Pedagogy vs. andragogy: A false dichotomy? J. Technol. Stud. 2000, 26, 50–55. [Google Scholar] [CrossRef]
  7. Blaschke, L.M. The pedagogy–andragogy–heutagogy continuum and technology-supported personal learning environments. In Open and Distance Education Theory Revisited: Implications for the Digital Era; Springer: Singapore, 2019; pp. 75–84. [Google Scholar]
  8. Halupa, C.M. Pedagogy, andragogy, and heutagogy. In Transformative Curriculum Design in Health Sciences Education; IGI Global: Hershey, PA, USA, 2015; pp. 143–158. [Google Scholar]
  9. McMaster University. The Three “Gogies”. 2025. Available online: https://mi.mcmaster.ca/adult-learning-theories/adult-learning-theories-3-gogies/ (accessed on 1 July 2025).
  10. Waring, M.; Evans, C. Understanding Pedagogy: Developing a Critical Approach to Teaching and Learning; Taylor & Francis: London, UK, 2014. [Google Scholar]
  11. Salinas-Navarro, D.E.; Vilalta-Perdomo, E.; Michel-Villarreal, R.; Montesinos, L. Using generative artificial intelligence tools to explain and enhance experiential learning for authentic assessment. Educ. Sci. 2024, 14, 83. [Google Scholar] [CrossRef]
  12. Perkins, M.; Furze, L.; Roe, J.; MacVaugh, J. The Artificial Intelligence Assessment Scale (AIAS): A framework for ethical integration of generative AI in educational assessment. J. Univ. Teach. Learn. Pract. 2024, 21, 49–66. [Google Scholar] [CrossRef]
  13. Bannister, P.; Carver, M. ‘I don’t need professional development; I want institutional development’: Legitimising marginalised epistemic capital that disrupts generative AI discourse. Prof. Dev. Educ. 2025, 51, 547–565. [Google Scholar] [CrossRef]
  14. Corbin, T.; Dawson, P.; Liu, D. Talk is cheap: Why structural assessment changes are needed for a time of GenAI. Assess. Eval. High. Educ. 2025, 1–11. [Google Scholar] [CrossRef]
  15. Bhattacherjee, A. Social Science Research: Principles, Methods, and Practices; University of South Florida: Tampa, FL, USA, 2012; Available online: https://digitalcommons.usf.edu/cgi/viewcontent.cgi?article=1002&context=oa_textbooks (accessed on 18 July 2025).
  16. Munn, Z.; Peters, M.D.; Stern, C.; Tufanaru, C.; McArthur, A.; Aromataris, E. Systematic review or scoping review? Guidance for authors when choosing between a systematic or scoping review approach. BMC Med. Res. Methodol. 2018, 18, 143. [Google Scholar] [CrossRef]
  17. PRISMA. PRISMA for Scoping Reviews (PRISMA-ScR). 2025. Available online: https://www.prisma-statement.org/scoping (accessed on 17 July 2025).
  18. Dhakal, S.; Burgess, J.; Connell, J. COVID-19 crisis, work and employment: Policy and research trends. Labour Ind. 2021, 1, 353–365. [Google Scholar] [CrossRef]
  19. Page, M.J.; Moher, D. Evaluations of the uptake and impact of the Preferred Reporting Items for Systematic reviews and Meta-Analyses (PRISMA) Statement and extensions: A scoping review. Syst. Rev. 2017, 6, 263. [Google Scholar] [CrossRef]
  20. Mahmood, M.N.; Dhakal, S.P. Ageing Population and Society: A Scientometric Analysis. Qual. Quant. 2023, 57, 3133–3150. [Google Scholar] [CrossRef]
  21. Dhakal, S.P.; Mahmood, M.N. A scientometric analysis of three decades of research on workplace psychosocial hazards: Implications for policy and practice. J. Saf. Res. 2025, 93, 79–89. [Google Scholar] [CrossRef]
  22. Bramer, W.M.; Rethlefsen, M.L.; Kleijnen, J.; Franco, O.H. Optimal database combinations for literature searches in systematic reviews: A prospective exploratory study. Syst. Rev. 2017, 6, 245. [Google Scholar] [CrossRef]
  23. Öztürk, O.; Kocaman, R.; Kanbach, D.K. How to design bibliometric research: An overview and a framework proposal. Rev. Manag. Sci. 2024, 18, 3333–3361. [Google Scholar] [CrossRef]
  24. Arévalo, Y.B.; García, M.B. Scientific production on dialogical pedagogy: A bibliometric analysis. Data Metadata 2023, 2, 7. [Google Scholar] [CrossRef]
  25. Irwanto, I.; Saputro, A.D.; Widiyanti, W.; Laksana, S.D. Global trends on mobile learning in higher education: A bibliometric analysis (2002–2022). IJIET Int. J. Inf. Educ. Technol. 2023, 13, 223–231. [Google Scholar] [CrossRef]
  26. Computers and Education: Artificial Intelligence. About the Journal. 2025. Available online: https://www.sciencedirect.com/journal/computers-and-education-artificial-intelligence (accessed on 7 July 2025).
  27. Lisée, C.; Larivière, V.; Archambault, É. Conference proceedings as a source of scientific information: A bibliometric analysis. J. Am. Soc. Inf. Sci. Technol. 2008, 59, 1776–1784. [Google Scholar] [CrossRef]
  28. Cooper, G. Examining science education in ChatGPT: An exploratory study of generative artificial intelligence. J. Sci. Educ. Technol. 2023, 32, 444–452. [Google Scholar] [CrossRef]
  29. Prather, J.; Denny, P.; Leinonen, J.; Becker, B.A.; Albluwi, I.; Craig, M.; Keuning, H.; Kiesler, N.; Kohn, T.; Luxton-Reilly, A.; et al. The robots are here: Navigating the generative AI revolution in computing education. In Proceedings of the 2023 Working Group Reports on Innovation and Technology in Computer Science Education, Turku, Finland, 7–12 July 2023; pp. 108–159. [Google Scholar]
  30. Chiu, T.K. Future research recommendations for transforming higher education with generative AI. Comput. Educ. Artif. Intell. 2024, 6, 100197. [Google Scholar] [CrossRef]
  31. Eager, B.; Brunton, R. Prompting higher education towards AI-augmented teaching and learning practice. J. Univ. Teach. Learn. Pract. 2023, 20, 1–19. [Google Scholar] [CrossRef]
  32. Chang, D.H.; Lin, M.P.C.; Hajian, S.; Wang, Q.Q. Educational design principles of using AI chatbot that supports self-regulated learning in education: Goal setting, feedback, and personalisation. Sustainability 2023, 15, 12921. [Google Scholar] [CrossRef]
  33. Anzolin, G. Bridging the AI Divide. Empowering Developing Countries Through Manufacturing. United Nations Industrial Development Organisation (UNIDO). 2024. Available online: https://hub.unido.org/sites/default/files/publications/Bridging%20the%20AI%20Divide%20Empowering%20Developing%20Countries%20Through%20Manufacturing.pdf (accessed on 8 July 2025).
  34. van Eck, N.J.; Waltman, L. VOSviewer Manual: Manual for VOSviewer Version 1.6. 2019. Available online: https://www.vosviewer.com/documentation/Manual_VOSviewer_1.6.11.pdf (accessed on 15 July 2025).
  35. Tan, S.; Rudolph, J.; Tan, S. Riding the Generative AI Tsunami: Addressing the Teaching and Learning Crisis in Higher Education. In The Palgrave Handbook of Crisis Leadership in Higher Education; Springer Nature: Cham, Switzerland, 2024; pp. 135–154. [Google Scholar]
  36. Chan, C.K.Y.; Hu, W. Students’ voices on generative AI: Perceptions, benefits, and challenges in higher education. Int. J. Educ. Technol. High. Educ. 2023, 20, 43. [Google Scholar] [CrossRef]
  37. Meli, K.; Taouki, J.; Pantazatos, D. Empowering educators with generative AI: The GenAI education frontier initiative. In EDULEARN24 Proceedings; IATED: Palma, Spain, 2024; pp. 4289–4299. [Google Scholar]
  38. Bearman, M.; Ajjawi, R. Learning to work with the black box: Pedagogy for a world with artificial intelligence. Br. J. Educ. Technol. 2023, 54, 1160–1173. [Google Scholar] [CrossRef]
  39. Glänzel, W.; Schubert, A. Analysing scientific networks through co-authorship. In Handbook of Quantitative Science and Technology Research: The Use of Publication and Patent Statistics in Studies of S&T Systems; Springer: Dordrecht, The Netherlands, 2004; pp. 257–276. [Google Scholar]
  40. Southworth, J.; Migliaccio, K.; Glover, J.; Glover, J.N.; Reed, D.; McCarty, C.; Brendemuhl, J.; Thomas, A. Developing a model for AI Across the curriculum: Transforming the higher education landscape via innovation in AI literacy. Comput. Educ. Artif. Intell. 2023, 4, 100127. [Google Scholar] [CrossRef]
  41. Larson, B.Z.; Moser, C.; Caza, A.; Muehlfeld, K.; Colombo, L.A. Critical thinking in the age of generative AI. Acad. Manag. Learn. Educ. 2024, 23, 373–378. [Google Scholar] [CrossRef]
  42. Rana, V.; Verhoeven, K.B.T.; Sharma, M. Generative AI in design thinking pedagogy: Enhancing creativity, critical thinking, and ethical reasoning in higher education. J. Univ. Teach. Learn. Pract. 2025, 22. Available online: https://open-publishing.org/journals/index.php/jutlp/article/view/1193/1030 (accessed on 17 July 2025). [CrossRef]
  43. Gonsalves, C. Generative AI’s Impact on Critical Thinking: Revisiting Bloom’s Taxonomy. J. Mark. Educ. 2024, 1–16. [Google Scholar] [CrossRef]
  44. Cain, W. Prompting change: Exploring prompt engineering in large language model AI and its potential to transform education. TechTrends 2024, 68, 47–57. [Google Scholar] [CrossRef]
  45. De Matas, J. ChatGPT and the future of writing about writing. Double Helix 2023, 11, 1–7. [Google Scholar] [CrossRef]
  46. Li, W.; Song, R.; Yu, K. GenAI enabling the high-quality development of higher education: Operational mechanisms and pathways. Innov. Educ. Teach. Int. 2025, 1–16. [Google Scholar] [CrossRef]
  47. Kohnke, L.; Zou, D.; Ou, A.W.; Gu, M.M. Preparing future educators for AI-enhanced classrooms: Insights into AI literacy and integration. Comput. Educ. Artif. Intell. 2025, 8, 100398. [Google Scholar] [CrossRef]
  48. Dosumu, O.; Porumb, V.A.; Stafford, A.; Zimmer, A. In the wake of ChatGPT: Early reflections on marking open-book online accounting assessments. Account. Educ. 2025, 1–32. [Google Scholar] [CrossRef]
  49. Kolade, O.; Owoseni, A.; Egbetokun, A. Is AI changing learning and assessment as we know it? Evidence from a ChatGPT experiment and a conceptual framework. Heliyon 2024, 10, e25953. [Google Scholar] [CrossRef] [PubMed]
  50. Ng, D.T.K.; Leung, J.K.L.; Chu, S.K.W.; Qiao, M.S. Conceptualising AI literacy: An exploratory review. Comput. Educ. Artif. Intell. 2021, 2, 100041. [Google Scholar] [CrossRef]
  51. Beckman, K.; Apps, T.; Howard, S.K.; Rogerson, C.; Rogerson, A.; Tondeur, J. The GenAI divide among university students: A call for action. Internet High. Educ. 2025, 67, 101036. [Google Scholar] [CrossRef]
  52. Salhab, R. AI Literacy across Curriculum Design: Investigating College Instructors’ Perspectives. Online Learn. 2024, 28, n2. [Google Scholar] [CrossRef]
  53. Beninger, S.; Reppel, A.; Stanton, J.; Watson, F. Facilitating Generative AI Literacy in the Face of Evolving Technology: Interventions in Marketing Classrooms. J. Mark. Educ. 2025, 47, 112–125. [Google Scholar] [CrossRef]
  54. University of Sydney. Academic Integrity. 2025. Available online: https://www.sydney.edu.au/students/academic-integrity/artificial-intelligence.html#:~:text=Semester%201%2C%202025%3A-,you%20are%20allowed%20to%20use%20generative%20AI%20and%20automated%20writing,coordinator%20has%20expressly%20permitted%20it (accessed on 8 June 2025).
  55. University of Canberra. GenAI at UC. 2025. Available online: https://canberra.libguides.com/c.php?g=970043&p=7053081 (accessed on 19 August 2025).
  56. Curtis, G.J. The two-lane road to hell is paved with good intentions: Why an all-or-none approach to generative AI, integrity, and assessment is insupportable. High. Educ. Res. Dev. 2025, 1–8. [Google Scholar] [CrossRef]
  57. Liu, D. Menus, Not Traffic Lights: A Different Way to Think About AI and Assessments. 2025. Available online: https://educational-innovation.sydney.edu.au/teaching@sydney/menus-not-traffic-lights-a-different-way-to-think-about-ai-and-assessments/ (accessed on 20 August 2025).
  58. Rasul, T.; Nair, S.; Kalendra, D.; Balaji, M.S.; de Oliveira Santini, F.; Ladeira, W.J.; Rather, R.A.; Yasin, N.; Rodriguez, R.V.; Kokkalis, P.; et al. Enhancing academic integrity among students in GenAI Era: A holistic framework. Int. J. Manag. Educ. 2024, 22, 101041. [Google Scholar] [CrossRef]
  59. Cervini, E. Universities Struggle To Keep Pace with AI Integrity Challenges. 25 July 2024. Available online: https://www.eurekastreet.com.au/article/universities-struggle-to-keep-pace-with-ai-integrity-challenges (accessed on 20 August 2025).
  60. Association to Advance Collegiate Schools of Business (AACSB). GenAI Adoption in Business Schools: Deans and Faculty Respond. 2025. Available online: https://www.aacsb.edu/insights/reports/2025/genai-adoption-in-business-schools-deans-and-faculty-respond (accessed on 20 August 2025).
  61. TEQSA. Artificial Intelligence. 2023. Available online: https://www.teqsa.gov.au/guides-resources/higher-education-good-practice-hub/artificial-intelligence (accessed on 20 August 2025).
  62. Parida, S.; Dhakal, S.P.; Dayaram, K.; Mohammadi, H.; Ayentimi, D.T.; Amankwaa, A.; D'Cruz, D. Rhetoric and realities in Australian universities of student engagement in online learning: Implications for a post-pandemic era. Int. J. Manag. Educ. 2023, 21, 100795. [Google Scholar] [CrossRef]
  63. Ogunleye, B.; Zakariyyah, K.I.; Ajao, O.; Olayinka, O.; Sharma, H. A systematic review of generative AI for teaching and learning practice. Educ. Sci. 2024, 14, 636. [Google Scholar] [CrossRef]
  64. Tillmanns, T.; Salomão Filho, A.; Rudra, S.; Weber, P.; Dawitz, J.; Wiersma, E.; Dudenaite, D.; Reynolds, S. Mapping tomorrow’s teaching and learning spaces: A systematic review on GenAI in higher education. Trends High. Educ. 2025, 4, 2. [Google Scholar] [CrossRef]
  65. Eacersall, D.; Pretorius, L.; Smirnov, I.; Spray, E.; Illingworth, S.; Chugh, R.; Strydom, S.; Stratton-Maher, D.; Simmons, J.; Jennings, I.; et al. Navigating ethical challenges in generative AI-enhanced research: The ethical framework for responsible generative AI use. J. Appl. Learn. Teach. 2024, 8, 1–14. [Google Scholar]
Figure 1. Study selection procedures based on PRISMA guidelines [19].
Figure 2. Reproducible search code used.
Figure 3. Trend of research outputs related to the GenAIP nexus (2023–2025).
Figure 4. Research outputs on the GenAIP nexus by disciplines.
Figure 5. Co-occurrence network analysis map of themes on the GenAIP nexus.
Figure 6. Clusters map of authors producing research outputs on the GenAIP nexus. Authors' surnames depicted in red indicate a recent collaboration (2025), while those in light blue denote collaboration in the early years of the GenAI era (2023). The network diagram also clearly shows Lee, C. J. K., affiliated with the Education University of Hong Kong, Hong Kong, is the influential bridging author between the two different clusters.
Figure 7. Radar map of pedagogical approaches.
Figure 8. Implications of the BSR on the GenAIP nexus.
Table 1. Prominent outlets and quality metrics (n ≥ 4).

Outlet | Outputs Count | CiteScore | Impact Factor | SJR (SciMago Quartile)
Computers and Education: Artificial Intelligence | 8 | 23.7 | 8.9 | 5.21 (Q1)
Computers and Composition | 7 | 3.3 | n/a | 0.42 (Q1)
ASEE Annual Conference and Exposition Conference Proceedings | 6 | 0.7 | n/a | n/a
Journal of Applied Learning and Teaching | 5 | 10.4 | n/a | 1.76 (Q1)
Education Sciences | 5 | 5.5 | 2.5 | 0.73 (Q1)
ACM International Conference Proceeding Series | 5 | 2.0 | n/a | 0.19 (n/a)
Sustainability (Switzerland) | 4 | 7.7 | 3.3 | 0.68 (Q1)
Reading Research Quarterly | 4 | 10.0 | 3.89 | 2.51 (Q1)
Lecture Notes in Computer Science | 4 | 2.4 | n/a | 0.35 (Q2)
Source: Compiled by the author.
Table 2. Most cited research outputs (n ≥ 100).

Author(s)/Date | Title | Outlet | Citation Count *
[28] | Examining science education in ChatGPT: An exploratory study of generative artificial intelligence | Journal of Science Education and Technology | Scopus = 654; GS = 1172
[29] | The robots are here: Navigating the generative AI revolution in computing education | Proceedings of the 2023 Working Group Reports on Innovation and Technology in Computer Science Education | Scopus = 165; GS = 274
[30] | Future research recommendations for transforming higher education with generative AI | Computers and Education: Artificial Intelligence | Scopus = 155; GS = 329
[31] | Prompting higher education towards AI-augmented teaching and learning practice | Journal of University Teaching and Learning Practice | Scopus = 130; GS = 244
[32] | Educational design principles of using AI chatbot that supports self-regulated learning in education: Goal setting, feedback, and personalisation | Sustainability | Scopus = 129; GS = 249
* GS = Google Scholar.