Article

Human-Centered Artificial Intelligence in Higher Education: A Framework for Systematic Literature Reviews

1 Research Institute on SMEs, Université du Québec à Trois-Rivières, Trois-Rivières, QC G9A 5H7, Canada
2 Mathematics, Statistics & Computer Science Department, University of Wisconsin-Stout, Menomonie, WI 54751, USA
* Author to whom correspondence should be addressed.
Information 2025, 16(3), 240; https://doi.org/10.3390/info16030240
Submission received: 20 February 2025 / Revised: 11 March 2025 / Accepted: 13 March 2025 / Published: 18 March 2025

Abstract

Human-centered approaches are vital to manage the rapid growth of artificial intelligence (AI) in higher education, where AI-driven applications can reshape teaching, research, and student engagement. This study presents the Human-Centered AI for Systematic Literature Reviews (HCAI-SLR) framework to guide educators and researchers in integrating AI tools effectively. The methodology combines AI augmentation with human oversight and ethical checkpoints at each review stage to balance automation and expertise. An illustrative example and experiments demonstrate how AI supports tasks such as searching, screening, extracting, and synthesizing large volumes of literature, leading to measurable gains in efficiency and comprehensiveness. Results show that HCAI-driven processes can reduce time costs while preserving rigor, transparency, and user control. By embedding human values through constant oversight, trust in AI-generated findings is bolstered and potential biases are mitigated. Overall, the framework promotes ethical, transparent, and robust approaches to AI integration in higher education without compromising academic standards. Future work will refine its adaptability across various research contexts and further validate its impact on scholarly practices.


1. Introduction

Nowadays, the potential benefits of artificial intelligence (AI)-powered applications have led to their widespread adoption in various spheres of scientific and socio-economic life, and this adoption is affecting humans and their activities in ways that are not yet well understood [1,2]. For several studies in different domains [3,4,5], AI-powered applications are black boxes, whose data analysis and decision-making processes are not transparent or explainable. This raises the question of the reliability, safety, and trustworthiness of AI-based results and leads to the idea of developing Human-Centered AI (HCAI). HCAI systems are designed to enhance and complement human capabilities, focusing on improving human well-being and ensuring that AI technologies are aligned with human values and needs [6].
In the field of education, the concept of Artificial Intelligence in Education (AIED) has seen significant advancements in recent years, indicating a growing interest and adoption of AI in educational settings [7,8]. Moreover, the shift towards HCAI systems in education emphasizes the importance of considering AI as a tool for augmenting human intelligence while simultaneously pursuing high levels of automation and user control [9]. This trend prioritizes user experience, ethical considerations, and societal impacts, aiming to create AI that is accessible, transparent, and beneficial to the academic community, students, and teachers/researchers alike.
Accordingly, there is an urgent need to create a synergy between human intelligence and AI-powered applications in education. HCAI can revolutionize teaching and learning processes, but it requires careful planning, efficient management, and attention to ethical concerns. This will enable harnessing the potential of AI to augment human capabilities while addressing the very challenges of transparency and trustworthiness that are crucial in the context of higher education. This paper proposes a new research direction that relies on an HCAI-based framework specifically tailored for conducting systematic literature reviews (SLRs), hereafter referred to as the HCAI-SLR framework.
Based on the design science research method [10], the paper is organized as follows. Section 2 introduces the theoretical background and related work to highlight the problem. Section 3 presents the principles of the HCAI-SLR framework as a novel artifact, including the constructs, process, and model. Section 4 presents the SLR process related to human–AI interactions to operationalize the proposed framework. Section 5 provides an illustrative example and experiments in education and research. Finally, Section 6 ends the paper with a conclusion and future research directions.

2. Conceptual Foundation

The basic concepts related to AI and HCAI in education are briefly introduced. The main limitations of existing studies in AI-powered applications in education and research, along with new insights about HCAI requirements for AI-powered applications, are presented. On this basis, an HCAI-based framework for SLRs is developed.

2.1. AI in Education

Artificial intelligence has the potential to transform the way we learn and teach by making education more personalized, engaging, and efficient [11]. AI in education involves utilizing AI-powered technologies like machine learning and natural language processing to improve the overall learning experience [12]. AI-powered applications are tools, systems, or platforms that leverage AI techniques to perform tasks that typically require human intelligence. In the domain of education, there are popular applications such as chatbots, expert systems, intelligent agents, machine learning, personalized learning systems, and virtual learning environments [13]. To this end, chatbots can play the role of intelligent assistants, providing solutions for higher education institutions to improve their current teaching and research services and to create new innovative services [14,15]. An expert system is an application that solves complicated problems by simulating the human reasoning process by applying specific knowledge [16]. Intelligent agents deliver personalized, timely, and suitable materials, guidance, and feedback to learners [13]. Machine learning (ML) can be used in education to personalize and improve student learning experiences in different applications such as personalized learning, adaptive testing, intelligent tutoring systems, learning analytics, smart services, and content creation [17,18]. Personalized learning systems can generate appropriate course materials for learners based on individual learners’ requirements [19,20]. Finally, virtual learning environments allow students to enjoy the learning experience in virtual reality to facilitate learning and collaboration [13].
In the context of evolving AI, generative AI has emerged as a powerful exemplar of how AI can be harnessed as an ally in the pursuit of knowledge [21,22]. These tools, which are not simply vast repositories of information, represent a sophisticated blend of AI capabilities designed to enhance human cognitive functions. They enable researchers and users alike to sift through the expansive realms of the academic literature and complex data with unprecedented efficiency [23]. Generative AI in education holds transformative potential by supporting personalized learning but also raises concerns about academic integrity that emphasize the need for balanced implementation to improve educational outcomes and efficiency [21]. Accordingly, a human-centered approach to AI in education has emerged, which advocates that AI should be used to enhance human capabilities and promote an inclusive, just, and sustainable future [21].

2.2. Human-Centered AI in Education

The development of HCAI systems requires the clarification of the levels of human control versus computer automation. Rather than viewing human control and computer automation as opposing ends of a single spectrum—where one increases at the expense of the other—they should be regarded as two distinct dimensions that can be combined in various ways [6]. This perspective promotes human intelligence augmentation through AI-based systems that combine high levels of human control with high levels of computer automation to enable significant improvements in human performance [24]. The combination of high levels of both human control and computer automation will involve determining which situations and circumstances require human control and which require computer automation. This determination, if performed correctly, is essential to improve the outcome of collaboration between humans and AI [25]. Indeed, experimentation by the latter authors suggests that humans fail to properly identify which tasks to delegate to AI for enhanced combined performance.
The main idea of HCAI applications is to be able to use AI-based tools to improve human performance while preserving human values [26]. In other words, HCAI systems simultaneously pursue two objectives: empowering the human user and ensuring compliance with ethical considerations [27]. Human empowerment means that AI-based systems are designed with properties that allow humans to stay in control for the sake of safety and self-determination [26]. Ethical considerations or human values refer to the issues of social responsibility that include fairness, accountability, interpretability, and transparency [2]. In addition, it is also important to consider the properties of HCAI systems, such as reliability, safety, and trustworthiness [6].
In the realm of smart education, HCAI promises not just a technological advancement but also a reaffirmation of its humanistic goals by embedding human values from the outset [28]. HCAI in education can revolutionize education, ensuring that AI acts as a catalyst for learning and accessibility, all grounded in an ethical foundation for the common good [29]. Furthermore, a paradigm change in educational programs is necessary for human–AI collaboration, with an emphasis on the new knowledge associated with the development of enhanced metacognitive capacities rather than the principles of communication [30].

2.3. HCAI-Based Systematic Literature Reviews

In higher education, scientific research is a fundamental aspect, particularly at research-intensive universities, as it encourages the creation and dissemination of new knowledge, provides valuable learning experiences for students, and contributes to the overall advancement of various fields of study [31].
A key aspect of scientific research is the principle of cumulative knowledge, which underscores the vital role of literature reviews in advancing understanding within the field [32]. Each new study refers to previous studies to establish its theoretical and methodological foundations and to justify and demonstrate its incremental contribution. In the current research landscape, literature reviews that satisfy the principle of cumulative knowledge have become difficult because of the ever-increasing number of publications, the intersection of research disciplines, and the diversity of stakeholders involved in research projects [33]. The resulting scientific information overload can be challenging for researchers since it may require a significant amount of time [34]. AI-powered tools can be mobilized to alleviate this difficulty through the automation of some aspects of the literature review process.
Moreover, the literature review process seems appropriate for the combination of human control and computer automation. This process involves both creative tasks that are amenable to human control and mechanical, repetitive, and time-consuming tasks that are suited to computer automation. The difficulty in determining tasks to delegate to AI seems lower in this case [25]. An examination of the use of AI to support literature reviews in the information systems field [35] reveals that AI-based tools can be mobilized at each stage of the literature review process, from problem formulation to data analysis and interpretation.
Therefore, this study aims to develop a structured approach through an HCAI-based framework for conducting SLRs, hereafter called the HCAI-SLR framework, to serve as a guideline in higher education with a focus on research activities. In other words, this study leverages AI-powered applications to offer personalized support, guidance, and resources, making the SLR process more efficient and effective for researchers and students in higher education.

3. Research Design

Research gap. As previously mentioned, HCAI emphasizes aligning technological innovation with human-centric values; therefore, it is vital to apply an interdisciplinary approach in the development of HCAI-based applications to ensure that they are intuitive, accessible, and adhere to both educational and technical perspectives [13]. However, AIED has not yet kept pace with the rapid advancements in AI technology, leaving a gap in evidence-based guidelines and support for AI applications in education [13]. Despite the progress in AIED technologies, recent studies highlight a persistent lack of educational perspectives in AIED research, which is mostly performed by researchers with STEM backgrounds. Overall, educators do not seem to have any clear ideas about how to derive pedagogical benefits from AI [36]. Interdisciplinary collaboration with educators and educational researchers is more likely to yield practical guidelines and exemplary practices for fellow educators [13].
Research question. The study concerns the guidelines and support for AI applications in higher education, with a focus on a key research activity: the systematic literature review. To address this gap, the present study proposes an interdisciplinary framework rooted in design science for HCAI-based SLRs, named the HCAI-SLR framework. The primary research question of this paper is, “How to effectively integrate human-centered artificial intelligence into the process of systematic literature reviews in higher education?”.
Research method. To address this research question, design science research (DSR) has been employed as an interdisciplinary approach to create and evaluate artifacts aimed at solving identified problems systematically [10,37]. The DSR first focuses on the environment within which a particular problem exists and then on potential artifacts that can be developed to address the problems identified.
Research model. Following DSR principles, the study is structured around a research model illustrated in Figure 1. This model outlines key elements of DSR, including problem diagnosis, theory building, technology intervention, and technology evaluation [37]. It also emphasizes the components of HCAI, which incorporate HCAI principles and guidance throughout the research process.
Following the research model, Problem diagnosis is addressed at the beginning of this section. Section 4 continues with the HCAI-SLR framework, which focuses on the incorporation of Human-centered AI elements and the remainder of the model, including Theory building and Technology intervention. Then, Section 5 presents the validation of the framework, which corresponds to Technology evaluation.

4. HCAI-SLR Framework

This section begins with the basis of Theory building, including different artifacts of the DSR, such as the SLR process, its corresponding constructs, and a model representing the relationship between constructs. Then, it continues with HCAI elements such as HCAI principles and guidance and ends with Technology intervention, including AI-powered tools and HCAI activities [38].

4.1. Theory Building

4.1.1. SLR Process

Concerning the SLR process, AIED leads to the need to adapt the existing education systems and processes to the advances in AI-powered technologies [39]. To adapt the traditional SLR process to incorporate AI tools, a focus group was created, including researchers and educators in different domains [40]. Meetings and interviews were organized to discuss the adaptation of the SLR process and the literature [41,42,43]. Different steps of the process have been identified, analyzed, and described using content and theme analysis [40]. A process, which is defined as a set of steps used to perform a task, is based on a set of underlying constructs [10]. In the HCAI-SLR framework, an adapted process for incorporating HCAI is required. Table 1 presents the proposed SLR process, including different steps, the objectives of each step, and their corresponding constructs.

4.1.2. Model and Constructs

Constructs (also called concepts), which form the vocabulary of a domain, constitute a conceptualization used to describe knowledge within this domain [10]. The model is a set of propositions expressing relationships among constructs, which can be viewed as a representation of how constructs should be used in the SLR process [10]. Adapted from Levy and Ellis [42], Okoli and Schabram [41], and Xiao and Watson [43], the model and constructs of the HCAI-SLR framework are presented in Figure 2 as a UML diagram [46].
Concerning the Identification step (S1), the purpose states the goals and rationale of the literature review. A research question is defined as a clear, focused, and sufficiently complex statement that specifies the intended audience, purpose, and end use of the review.
Concerning the Protocol and training step (S2), no related constructs are defined; hence, this step is omitted from the model.
Concerning the Searching for the literature step (S3), electronic sources are open-access databases or specific subject databases that contain academic papers. Based on a research question, a search keyword is a specific word or phrase used to find the relevant literature on a topic, for example through keyword searches and backward and forward searches in different electronic sources. A paper is a research publication, such as an academic journal article or conference proceedings. A title refers to the words and phrasing that name a research paper. A journal/conference is a publication venue for a research paper that may focus on specialized topics and have a peer-reviewed evaluation process.
Concerning the Practical screen step (S4), an abstract is a summary of a research paper. For practical screening, inclusion criteria are used to include papers that meet the standards and/or the limits of a search established by the reviewer; such criteria can be expressed through keywords and their relationships, or through constraints such as publication period, source type (e.g., academic journals), and language. Conversely, exclusion criteria are used to eliminate papers that do not meet the standards and/or the limits of the search established by the reviewer. Thus, screening refers to the process of determining which papers should be included or excluded during a literature review. Accordingly, screened papers refer to the results that remain after applying the screening (inclusion/exclusion) criteria and process to filter out irrelevant or low-quality papers.
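To make these screening constructs concrete, the following minimal Python sketch applies hypothetical inclusion/exclusion criteria (publication period, language, venue type) to a small set of paper records; the field names, criteria, and sample records are illustrative assumptions rather than elements prescribed by the framework.

```python
# Minimal sketch of the practical screen step (S4): applying hypothetical
# inclusion/exclusion criteria to paper records. Field names, criteria, and
# the sample records are illustrative assumptions, not part of the framework.

papers = [
    {"title": "AI tutoring in STEM education", "year": 2021,
     "language": "English", "venue_type": "journal"},
    {"title": "Early expert systems survey", "year": 2009,
     "language": "English", "venue_type": "conference"},
]

def include(paper: dict) -> bool:
    """Return True if the paper meets all assumed inclusion criteria."""
    return (
        2015 <= paper["year"] <= 2025                          # publication period
        and paper["language"] == "English"                     # language criterion
        and paper["venue_type"] in {"journal", "conference"}   # venue criterion
    )

screened_papers = [p for p in papers if include(p)]
excluded_papers = [p for p in papers if not include(p)]
print(f"Included: {len(screened_papers)}, excluded: {len(excluded_papers)}")
```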
Concerning the Quality appraisal step (S5), paper full text is the content of a research paper, which typically contains Title, Abstract, Review of literature, Introduction, Methods, Results, Discussion, and References sections. Selected paper refers to a study chosen for inclusion in the literature review after undergoing the full-text screening process and passing the eligibility and quality criteria outlined in the review protocol.
Concerning the Data extraction step (S6), extracted data are the data captured from the selected studies; the aim is to record and organize essential data from the literature in a standardized format. To support data extraction, AI-powered tools are various software programs and applications that incorporate AI capabilities to assist with aspects of the literature review process.
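As one way to picture such a standardized format, the sketch below defines a hypothetical extraction record as a Python dataclass; the chosen fields are assumptions for illustration and would normally be derived from the review's (sub-)research questions.

```python
from dataclasses import dataclass, asdict

# Hypothetical standardized extraction record; the fields are illustrative
# assumptions and would normally mirror the review's (sub-)research questions.
@dataclass
class ExtractionRecord:
    paper_id: str
    research_question: str   # which RQ/sub-RQ the extracted data answers
    method: str              # study method reported in the paper
    key_findings: str        # short summary of the relevant findings
    extracted_by: str        # "human" or the name of the AI tool used

record = ExtractionRecord(
    paper_id="scopus-0001",
    research_question="RQ1",
    method="case study",
    key_findings="Supervised ML used for intrusion detection in SMEs",
    extracted_by="human",
)
print(asdict(record))  # rows like this can be exported to a spreadsheet
```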
Concerning the Synthesis of studies and Writing the review steps (S7 and S8), literature synthesis includes activities such as combining, integrating, modifying, rearranging, designing, composing, and generating to assemble the literature being reviewed for a given concept into a whole. Moreover, literature analysis provides a foundation for the research by highlighting what is known, identifying gaps, and suggesting directions for future study [47].

4.2. HCAI Elements

4.2.1. HCAI Principles

To ensure the development and deployment of AI systems that are designed with human needs, values, and ethical considerations, HCAI principles are often associated with constructs such as Explainability, Accountability, Fairness, and Ethics [6,48]. These principles seek to address the concerns about the opacity of AI mentioned in the introduction (AI as a black box). Accountability ensures that AI actions and decisions comply with the legal frame of the organization and society [49]. Explainability guarantees that AI-based decision-making processes can be explained and justified [50]. Fairness addresses biases and ensures that AI benefits all segments of society [51]. Ethics ensure that the morality of decisions is considered [52].

4.2.2. HCAI Guidance

To implement HCAI principles, HCAI guidance is proposed and it includes the following three phases: Human-before-the-loop, Human-in-the-loop, and Human-over-the-loop [48,53]. Human-before-the-loop focuses on the initial design phase for embedding human values and HCAI principles into the AI system’s foundation. Human-in-the-loop covers the design, development, and deployment phases for ensuring active human participation and oversight. Human-over-the-loop addresses AI governance during deployment, ensuring ongoing monitoring, evaluation, and adjustment to align with human values and address biases.
To ensure controllability and mitigate potential risks, distinct control points are proposed within this HCAI guidance:
  • First control point (Human-before-the-loop): This control point verifies whether all planning requirements, ethical considerations, and human-centered design principles are met during the initial design phase. For example, when planning a literature review, researchers ensure that the research questions are clearly defined, comprehensive and unbiased search strategies are used, and ethical data collection and analysis considerations are addressed.
  • Second control point (Human-in-the-loop): This control point encompasses two crucial checks:
  • Data diversity and bias check: Verifies if the collected data objects are sufficiently diverse and representative to minimize bias in the AI system. For example, researchers ensure that the literature search includes a diverse range of sources, including different databases, journals, and publication types, to avoid bias towards a particular perspective or geographical region.
  • Model/processing validity check: Ensures that the chosen modeling algorithms or data processing techniques are appropriate, accurate, and aligned with the desired outcomes. For example, when using AI for data extraction, researchers evaluate the accuracy of the extracted data by manually checking a sample of papers (a minimal spot-check of this kind is sketched after this list). They also ensure the chosen extraction tool is suitable for the type of data being collected and the research questions being addressed.
  • Third control point (Human-over-the-loop): This control point confirms that rigorous testing and validation procedures are appropriately performed during deployment. It ensures that the AI functions as intended and that any unintended consequences or biases are identified and addressed. For example, if, after deployment, user feedback reveals that the AI-generated summaries are often too short or that the text lacks appropriate citations, the team would adjust the AI’s parameters to provide more comprehensive summaries and include the appropriate citations, addressing these unintended consequences.
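As a minimal illustration of the spot-check mentioned under the second control point, the sketch below samples AI-extracted values, compares them with manual extractions, and reports an agreement rate; the records, sample size, and 90% threshold are assumptions for illustration only.

```python
import random

# Hypothetical spot-check for the model/processing validity check: compare a
# random sample of AI-extracted values against manual extractions and report
# the agreement rate. Records, sample size, and threshold are assumptions.
ai_extracted = {"p1": "survey", "p2": "case study", "p3": "experiment", "p4": "survey"}
manual_check = {"p1": "survey", "p2": "case study", "p3": "case study", "p4": "survey"}

sample_ids = random.sample(list(ai_extracted), k=3)
agreements = sum(ai_extracted[pid] == manual_check[pid] for pid in sample_ids)
agreement_rate = agreements / len(sample_ids)

print(f"Agreement on sampled papers: {agreement_rate:.0%}")
if agreement_rate < 0.90:
    print("Below threshold: review the extraction tool or prompt before proceeding.")
```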

4.3. Technology Intervention

4.3.1. AI-Powered Tools

Based on the study of the literature related to AI in education and generative AI in education, this study suggests two related categories of AI-powered tools used in the SLR process:
  • Type 1 AI tools—Prompt-based tools: The first category includes conversational AI systems like ChatGPT-4o, Claude-3.5-Sonnet, and Google Bard (Gemini Flash 2.0) [54] that allow interactive querying through natural language prompts and responses. Their conversational nature makes them well-suited for interactive use in the literature review process [23]. These tools utilize large language models (LLMs) [55] trained on massive text datasets. Prompt engineering techniques [56] are crucial to optimize their performance for specific tasks (a minimal sketch of such prompt-based querying follows this list).
  • Type 2 AI tools—Task-oriented tools: The second category comprises platforms with graphical user interfaces to support specific literature review tasks [57]. These tools incorporate AI and machine learning (AI/ML) capabilities like natural language processing or machine learning algorithms but operate through predefined interfaces rather than open-ended prompts. Examples are tools for citation screening, quality assessment, and data extraction [57].
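To illustrate the interaction style of Type 1 tools, the following sketch shows how a prompt-based tool might be queried programmatically for title/abstract screening; call_llm is a hypothetical placeholder for whichever conversational AI API is actually used, and the prompt wording and criteria are assumptions, not a prescribed part of the framework.

```python
# Hypothetical sketch of querying a Type 1 (prompt-based) tool for
# title/abstract screening. call_llm stands in for whichever vendor API
# (ChatGPT, Claude, Gemini, ...) is used; prompt wording is an assumption.

def call_llm(prompt: str) -> str:
    """Placeholder for a real conversational AI API call."""
    raise NotImplementedError("Wire this function to the chosen LLM provider.")

def screen_abstract(title: str, abstract: str, criteria: str) -> str:
    prompt = (
        "You assist with a systematic literature review.\n"
        f"Inclusion criteria: {criteria}\n"
        f"Title: {title}\nAbstract: {abstract}\n"
        "Answer INCLUDE or EXCLUDE and give a one-sentence reason."
    )
    return call_llm(prompt)

# Example usage (requires a real LLM backend behind call_llm):
# decision = screen_abstract("AI tutoring in STEM", "This study examines ...",
#                            "peer-reviewed, 2015-2025, higher education context")
```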

4.3.2. HCAI Activities

To ensure adherence to the HCAI principles and guidance, this study proposes four HCAI activities: Human initiation, AI augmentation, AI triangulation, and Human decision (Table 2).
Human initiation activity. The process begins with human experts, who set the direction of the review. They initiate the research topic and meticulously outline the research objectives and initial research questions, ensuring precision in the subsequent stages. It is also important to provide a clear direction for the AI tools in the next steps by preparing the prompts and selecting appropriate tools.
In this activity, the principle of accountability plays a pivotal role. Here, researchers ensure that the AI tools are configured to comply with the legal and ethical standards required by both the organization and broader societal norms. For example, when setting up the AI for an SLR, researchers ensure that all data used by AI tools is sourced from credible and ethical sources and that any personal data used complies with privacy regulations. This sets a foundation ensuring that the AI’s operations are transparent and accountable right from the start. Explainability and ethical engagement are also involved in this activity. For example, when defining research questions for a “systematic review on AI in education”, researchers explicitly document their rationale for focusing on specific educational levels or AI applications. They might explain why they chose to explore the impact of AI tutoring systems on undergraduate STEM education, detailing how this aligns with current educational needs and ethical considerations. Engaging diverse stakeholders through structured workshops or focus groups can facilitate the collaborative refinement of research questions, ensuring that the research aligns with community needs and ethical considerations.
AI augmentation activity. Once the human researcher has initiated the review, AI tools can augment the process through two groups of tools: Prompt-based and Task-oriented.
The Prompt-based AI tools are conversational AI systems such as ChatGPT, ClaudeAI, and Google Gemini [54]. They are designed to interact through natural language prompts and are used primarily during the phases of the SLR where creative synthesis and integration of information are required. Prompt-based tools can be particularly useful in the following SLR steps: Identification, Synthesis of Studies, and Writing (Table 3).
Prompt engineering is crucial to optimize LLMs’ performance. It is the process of designing and refining the prompts given to an LLM, with the goal of eliciting the most useful and relevant responses from the model. This is important because the quality of the model’s output is heavily dependent on the quality of the input it receives. Techniques include providing context, asking clear questions, providing examples, and iterating based on model responses. For example, instead of asking “Find relevant literature”, a more effective prompt might be: “Find peer-reviewed articles published in the last five years that study the impact of data on marketing decision-making”.
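A minimal sketch of this kind of iterative refinement is given below, treating the prompt as a parameterized template; the template wording and parameters are illustrative assumptions rather than recommended phrasing.

```python
# Minimal sketch of prompt engineering as a parameterized template that can be
# refined iteratively. Template wording and parameters are assumptions.

VAGUE_PROMPT = "Find relevant literature"

TEMPLATE = (
    "Find peer-reviewed articles published in the last {years} years that "
    "study {topic}. For each article, return the title, venue, and year, and "
    "briefly justify its relevance. Context: {context}"
)

def build_prompt(topic: str, years: int, context: str) -> str:
    """Fill the template; refine the wording based on the model's responses."""
    return TEMPLATE.format(topic=topic, years=years, context=context)

print(build_prompt(
    topic="the impact of data on marketing decision-making",
    years=5,
    context="systematic literature review for a graduate research project",
))
```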
The Task-oriented AI tools are designed to support specific tasks, including features for screening, quality assessment, and data extraction. Task-oriented functions are crucial in the initial stages of the SLR, such as data collection and preliminary analysis, where structured tasks like identifying relevant studies and extracting key data are performed. They can be particularly useful in the following SLR steps: Searching for the Literature, Practical Screening, Quality Appraisal, and Data Extraction (Table 4).
This activity also incorporates the HCAI principles of accountability and fairness. AI tools must be used within a structured and ethical framework to ensure that their outputs are reliable and valid. For example, when using AI to draft sections of the literature review, the human researcher must review and validate the AI-generated content to ensure it meets academic standards and is free from biases. This collaborative approach ensures that AI augments human capabilities without compromising the review’s quality and fairness.
AI triangulation activity. To ensure the robustness and accuracy of the data that are obtained, a process akin to triangulation is implemented [66]. Different AI tools are cross-checked against each other under human supervision to identify discrepancies or inconsistencies in the results. Any difference in outcomes from varied tools is flagged for further human evaluation. This activity aligns with the HCAI principles of fairness and explainability. For accountability, researchers document any discrepancies between AI tools and their reasoning for final decisions. For instance, if different AI tools provide conflicting recommendations on a study’s inclusion, researchers would record this conflict and explain their resolution process, ensuring that all AI actions and decisions can be traced and justified.
AI triangulation provides two key benefits:
  • Identifies inconsistencies across tools: No AI-based tool is perfect. Thus, the discrepancies help pinpoint areas needing refinement. For example, if Tool A includes a paper but Tool B excludes it, it indicates the screening criteria may need adjustment.
  • Reduces systemic biases: Overreliance on one AI tool risks bias inherent in that tool’s training data or algorithms. Testing outputs across diverse tools minimizes singular blind spots. The “wisdom of crowds” principle [67] creates more robust results.
It should be noted that while this study advocates for the inter-referencing of various AI tools to cross-check results, it is imperative to understand that the process is not as straightforward as it seems. Validating the outcomes produced by these tools often necessitates a more intricate procedure [68]. However, it is crucial to highlight that the primary objective of integrating AI tools in this paper is to supplement and expedite the literature review process, not to replace the human role. The expertise and experience of the researcher in utilizing these tools remain paramount. AI tools, no matter how advanced, are adjuncts; they cannot substitute for human intuition, judgment, and expertise. Comparing the results from different tools serves as a mechanism to bolster confidence in the findings. Hence, it is always the human researcher who makes the final call, armed with the understanding that there is an inherent margin of error. The essence of AI triangulation, therefore, lies not in replacing human judgment but in augmenting it, ensuring that decisions are made with a higher degree of reliability and precision.
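The following minimal sketch shows one way this triangulation idea could be implemented for screening decisions: votes from several tools are compared, agreements are accepted provisionally, and disagreements are flagged for human consensus review. The tool names and decisions are illustrative assumptions.

```python
# Minimal sketch of AI triangulation for screening decisions: compare the
# include/exclude votes of several tools and flag disagreements for human
# consensus review. Tool names and decisions are illustrative assumptions.

decisions = {
    "paper-01": {"tool_a": "include", "tool_b": "include"},
    "paper-02": {"tool_a": "include", "tool_b": "exclude"},
    "paper-03": {"tool_a": "exclude", "tool_b": "exclude"},
}

flagged_for_human_review = []
provisionally_accepted = {}

for paper_id, votes in decisions.items():
    if len(set(votes.values())) == 1:       # all tools agree
        provisionally_accepted[paper_id] = next(iter(votes.values()))
    else:                                   # discrepancy: humans decide
        flagged_for_human_review.append((paper_id, votes))

print("Provisionally accepted:", provisionally_accepted)
print("Needs human consensus review:", flagged_for_human_review)
```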
Human decision activity. After the AI tools have screened and procured the data, human intervention—the most important part—is reintroduced to ensure the validity and relevance of the findings [69]. In the case of flagged discrepancies from the AI triangulation phase, a consensus review is undertaken [70]. A team of experts reviews the differences and makes collective decisions on the inclusion or exclusion of specific pieces of literature or findings. This phase ensures that the final decision on the literature is both comprehensive and relevant, minimizing the biases that may arise solely from AI processing. It upholds the HCAI principles of accountability, ethics, and explainability. It is important to note that the effectiveness of these tools depends on their proper use and integration into the process, as well as the quality of the input data and the expertise of the users. Therefore, training and guidance on using these tools should be provided to the users, and continuous evaluation and improvement of the tools should be carried out to ensure their effectiveness.

5. Technology Evaluation

Concerning Technology evaluation, different evaluation strategies of DSR [71] have been used to demonstrate the application and validation of the proposed framework, including an illustrative scenario, a case study, and a demonstration. This section begins with the illustrative scenario as an application of the framework to a particular situation. Then, the section continues with a case study highlighting the framework’s application to a real-world scenario. Furthermore, the demonstration evaluation is also used to refine and continuously enhance the framework and its practices.

5.1. Illustrative Scenario

This section provides a brief illustration of how the framework is operationalized in a real situation (Figure 3), namely a specific SLR on a particular subject. The basic elements of the framework, such as its constructs, model, and process, are adapted and customized to meet the specific needs of the synthesis situation. The objective in presenting the illustrative example below is not to prescribe a standard procedure or mandatory tools but rather to show a simple approach where humans and AI tools can collaboratively undertake SLR tasks. The tools employed in this example can be substituted with others, and the steps delineated can be rearranged, adjusted, or replaced (See Supplementary Material for further details).
The illustration focused on the topic “Artificial Intelligence and Machine Learning in Cybersecurity in small and medium sized enterprises (SMEs)”, following an adapted process:
  • Step 1—Identification phase, ChatGPT (chat.openai.com) and ClaudeAI (claude.ai) were used to establish the scope and parameters to guide the literature search. Three core goals were defined, including synthesizing the landscape, challenges, and outlook. Five specific research questions (RQ) were developed, covering the AI/ML tools used, benefits, adoption barriers, the literature gaps, and evolution. A comprehensive set of 44 keywords was identified and organized into three groups: 24 cybersecurity terms, 13 AI/ML terms, and 7 SME terms. These keywords were combined into a Boolean search string to link the concepts (a minimal sketch of this assembly is shown after the list below).
  • Step 2—Searching phase, using the SCOPUS database, filters were applied, and 144 English language papers published from 2013 to 2023 sourced from journals and conferences were selected.
  • Step 3—Screening phase, these 144 papers were imported into the AI-powered platform Covidence (https://www.covidence.org/, accessed on 1 March 2025), which focuses on managing and streamlining SLRs, to resolve conflicts and remove duplicates. Inclusion and exclusion criteria were set and aligned to the review scope. Two reviewers collaboratively performed title/abstract screening supported by Covidence’s machine learning algorithms and highlighted keywords function to expedite the process. A total of 27 papers remained for the full-text review.
  • Step 4—Practical screen phase, the full-text screening was conducted using Typeset (typeset.io), a tool designed to enhance the comprehension of research papers. This phase included two quality assessment questions, and 22 papers were retained. Moreover, forward and backward searches were conducted via Citationchaser (estech.shinyapps.io/citationchaser), which allows lists of references to be obtained across studies; this process did not add any new papers, so 22 papers were selected in total. Also in this step, ChatGPT and ClaudeAI were again mobilized to divide the main research questions into more focused sub-research questions (sub-RQs), designed to guide the data extraction using AI tools such as Typeset. In summary, there were 4 sub-RQs for RQ1, 2 for RQ2, 4 for RQ3, 7 for RQ4, and 5 for RQ5.
  • Step 5—Quality appraisal and Step 6—Data extraction, these 22 papers then underwent data extraction, guided by the sub-questions formulated in Step 4. Again, the AI tool Typeset was leveraged to accelerate the extraction process through its summarization, data extraction, and chat features. Data were then compiled into a Microsoft Excel sheet aligned with each sub-research question.
  • Step 7—Synthesis phase, the extracted unstructured data compiled in the Excel sheet from Step 6 were synthesized using ChatGPT-4, which analyzed the spreadsheet data to identify key themes, trends, and insights aligned with the research questions and sub-research questions. Finally, the sub-RQs with too little supporting data were removed.
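As the sketch referred to in Step 1, the code below assembles a Boolean search string from the three keyword groups; the keywords shown are a small assumed subset of the 44 actually used in the review, and the grouping logic is illustrative.

```python
# Sketch of building the Step 1 Boolean search string from keyword groups.
# The keywords below are a small assumed subset of the 44 used in the review.

cybersecurity_terms = ["cybersecurity", "cyber attack", "intrusion detection"]
ai_ml_terms = ["artificial intelligence", "machine learning", "deep learning"]
sme_terms = ["SME", "small and medium-sized enterprise", "small business"]

def or_group(terms):
    """Join a keyword group with OR, quoting multi-word phrases."""
    return "(" + " OR ".join(f'"{t}"' if " " in t else t for t in terms) + ")"

search_string = " AND ".join(
    or_group(group) for group in (cybersecurity_terms, ai_ml_terms, sme_terms)
)
print(search_string)
# The resulting string can be pasted into a database's advanced search field,
# e.g., Scopus TITLE-ABS-KEY(...).
```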

5.2. Case Study

This case study explores an SLR titled “A Big Picture of Circular Economy in Organizations: A Focus on North American Researchers”, conducted for the Québec Circular Economy Research Network (RRECQ) in 2024 (https://rrecq.ca/revue-de-litterature-sur-lintegration-et-la-mise-en-oeuvre-de-la-circularite-dans-les-entreprises-et-les-organizations/, accessed on 1 March 2025). The RRECQ is a leading organization dedicated to advancing the circular economy (CE) through interdisciplinary research, collaboration, and knowledge dissemination. Considering the large number of studies on the CE, this SLR was crucial for the RRECQ to synthesize existing research, identify challenges, and outline new research directions.
Background and needs. The RRECQ required this SLR to synthesize the vast number of studies on circular economy practices within organizations, which had not been previously compiled comprehensively. The primary stakeholders involved in this project included researchers, community members, businesses, policymakers, and other relevant parties within the RRECQ.
Objectives. The SLR aimed to address five main goals:
  • Synthesize and understand circular economy integration across different organizational types.
  • Identify challenges, benefits, and key themes in CE implementation.
  • Explore sector-specific approaches and determine critical strategies or models.
  • Assess factors and enablers for successful CE implementation.
  • Provide a comprehensive overview and future directions for CE research.
Methodology. The HCAI-SLR framework was applied to conduct the review. The PRISMA diagram was used to manage the review [72]. The Scopus database was chosen for the literature search, applying strict inclusion and exclusion criteria, focusing on papers published between 2014 and 2024, relevance to the research questions, and a North American geographical scope, reflecting the RRECQ’s regional focus.
AI tools and control points. Several AI tools were employed to enhance efficiency and accuracy throughout the SLR process. To mitigate potential biases and ensure human oversight, control points were implemented at each stage:
  • First control point: Before initiating the literature search, the research team collaborated with the RRECQ experts to validate the research questions, ensuring their clarity, neutrality, and ethical soundness. This ensured alignment with the RRECQ’s research priorities and ethical considerations. A list of critical keywords was defined by the group of experts.
  • Second control point: During the screening, three independent reviewers manually reviewed the selected papers to assess representation across sectors, organizational types, and research perspectives, which helped identify and address any potential geographical or methodological biases. Furthermore, the accuracy of data extracted using AI tools was verified through manual spot-checking. The results of different AI tools were compared, and expert feedback was incorporated.
  • Third control point: Sensitivity analysis was conducted by adjusting search terms and screening criteria to assess the robustness of the findings and minimize potential biases introduced by the AI tools or the review process itself. Moreover, a group of experts validated AI-generated syntheses via many online meetings, and conclusions were drawn before finalizing the review.
Key findings. The SLR revealed several critical insights:
  • A comprehensive overview of circular economy practices across organizations, highlighting diverse approaches and the importance of technological, policy, and social factors.
  • Identification of drivers for adopting circular economy practices, including environmental stewardship and regulatory compliance, alongside significant barriers such as economic constraints and resistance to change.
  • The influence of organizational size on the adoption of circular economy practices, noting distinct challenges faced by large corporations versus SMEs.
Impact and dissemination. The SLR findings were synthesized into a “Special Report on the Organizations & Territories Review (2024)” and disseminated widely within the RRECQ community in different languages. The SLR findings have had a significant impact on the RRECQ’s research and activities such as Informed Research Priorities, Enhanced Collaboration, and Policy Recommendations.
Challenges and lessons learned. The SLR process presented several challenges, primarily related to managing the vast number of studies on CE (170 selected studies). Techniques for refining the search scope, selecting relevant papers, and managing extracted data were crucial. Collaboration among geographically dispersed experts required effective online communication and coordination strategies. Prompt engineering for AI tools proved essential for optimizing their performance and achieving desired outcomes. Key lessons learned from this experience include: (i) The importance of a well-defined scope and clear research questions for managing large literature volumes; (ii) The need for effective collaboration and communication strategies when working with diverse stakeholders; and (iii) The value of prompt engineering for maximizing the effectiveness of AI tools in SLRs.
Stakeholder feedback. The SLR received positive feedback for its comprehensive approach and the actionable insights it provided. Stakeholders appreciated the integration of AI tools that enhanced the review’s efficiency and depth, acknowledging the significant contribution to the academic and practical understanding of the circular economy.
Conclusions. The application of the HCAI-SLR framework in conducting this review demonstrated its effectiveness in producing a high-quality and comprehensive SLR. This case study highlights the potential of the framework in advancing research in complex fields such as circular economy, providing valuable insights for both academic and practical applications.

5.3. Demonstration

To demonstrate the use of the proposed framework, the team applied it retrospectively to a research seminar and a training workshop. First, a research seminar titled “A Framework Focused on Human-Centered Artificial Intelligence for Conducting a Literature Review” was held in December 2023 at the Université du Québec à Trois-Rivières, Canada. The target audience included researchers, graduate students, librarians, and other practitioners, with over 50 participants attending both online and in person.
Second, participants intending to use the framework for their research activities were invited to a training workshop titled “AI-Assisted Systematic Reviews: A Step-by-Step Guide for Researchers”, conducted in May 2024, to carry out their own studies with AI tools. The three-hour workshop aimed to introduce the theoretical background and the HCAI-SLR framework, demonstrate how AI tools can augment each step of the SLR process, provide hands-on experience with prompt engineering and data synthesis techniques using AI, and equip participants with practical strategies for conducting efficient and rigorous SLRs.
Participant Feedback and Data Collection. Data on participants’ experiences and learning outcomes were collected through a group survey conducted via Google Forms. The survey was completed by 21 participants (researchers, graduate students, and practitioners) who provided detailed feedback on the seminar’s impact.
Key Findings and Impact. The seminar received positive feedback from participants, highlighting several key outcomes:
  • Enhanced Understanding: Participants reported a significant improvement in their understanding of the HCAI-SLR framework and its practical application. A majority of participants (57.1%) found the framework easy to understand and apply: specifically, 9.5% rated it very easy and 47.6% rated it easy.
  • Improved Performance: The majority (71.4%) confirmed that utilizing AI in research enhanced the performance, effectiveness, efficiency, and impact of their research activities. Specifically, 38.1% were very satisfied, and 33.3% were satisfied with the framework’s performance.
  • Convenience and Collaboration: 52.4% found the framework convenient for organizing the SLR process, with 19% rating it as very convenient. Furthermore, 57.1% strongly agreed, and 38.1% agreed that the human–AI collaborative approach provided clear benefits.
Discussion. The findings suggest a positive reception of the HCAI-SLR framework among diverse stakeholders in higher education. The high satisfaction rates and perceived benefits indicate the potential of this approach to significantly enhance the SLR process. The demonstration effectively illustrated the practical application and advantages of the HCAI-SLR framework in an educational setting. The positive feedback and active engagement from the participants underscored the effectiveness of this approach in enhancing research methodologies through AI integration.

6. Conclusions

This paper introduces the Human-Centered AI for Systematic Literature Reviews (HCAI-SLR) framework, presenting a structured approach to integrate AI capabilities into the systematic literature review (SLR) process while emphasizing human oversight and ethical considerations. To our knowledge, this framework is among the first to address the strategic integration of human and AI capabilities in a coherent manner. This framework serves as a comprehensive roadmap for researchers looking to harness emerging AI technologies without sacrificing the rigor and reliability of research activities in general and SLRs in particular. It underscores the critical role of researchers in the review process and illustrates the collaborative interaction between humans and AI, highlighting its significant implications for higher education.
This work contributes to both research and practice in several ways. The HCAI-SLR framework provides a structured method for researchers to effectively balance AI automation of repetitive tasks with human expertise for nuanced judgment and interpretation that is crucial for ensuring both scientific productivity and ethical considerations. The framework offers a holistic approach to support AI augmentation across all core review steps. It guides researchers in identifying the appropriate AI tools for specific tasks such as searching, screening, and data extraction, recognizing that human researchers remain essential for interpreting findings and drawing conclusions. The framework promotes AI triangulation as a validation mechanism, encouraging cross-checking of findings through multiple AI tools to minimize bias and enhance confidence in the results. It emphasizes the crucial role of human researchers in directing the review process by advocating for careful prompt engineering, meticulous evaluation of AI outputs, and adherence to rigorous analysis standards. Finally, the demonstration through different evaluation strategies highlights the continuous and collaborative interplay between humans and AI throughout the review process, demonstrating the practical applicability of the HCAI-SLR framework.
The HCAI-SLR framework has several implications to significantly impact higher education. First, by reducing the time and effort required for SLR, researchers can focus on higher-level analysis, synthesis, and the development of new ideas. Accordingly, the principles of HCAI-SLR could be incorporated into research methodology courses, providing students with cutting-edge skills that blend traditional research methods with AI-assisted techniques. Thus, the HCAI-SLR framework provides researchers with a structured approach to conduct a comprehensive SLR that can significantly enhance research skills. Furthermore, the framework can serve as a teaching tool in academic settings, where students learn to interact with AI technologies, understand their capabilities and limitations, and apply these tools in their research methodologies. Finally, by mitigating bias through AI triangulation and human validation, the framework can promote research that is more representative of diverse perspectives and populations.
The study enhances smart education by integrating Human-Centered AI, focusing on creating systems that are sensitive, manageable, adaptable, responsive, and timely. The HCAI-SLR framework ensures AI technologies align with human values, allowing for personalized and ethical learning experiences. By combining human oversight with AI automation, the framework supports educators in managing and tailoring educational strategies to diverse learners’ needs. AI tools quickly process large volumes of data, providing timely feedback and insights, which helps educators adapt to various teaching contexts efficiently. This synergy between human intelligence and AI not only personalizes learning but also ensures that educational processes are aligned with ethical considerations in order to ultimately support a more flexible and dynamic approach to smart education.
While this paper establishes a foundation for integrating AI into systematic reviews within a human-centered framework, several areas require further exploration. Continuous advancements in AI and machine learning necessitate ongoing updates and refinements to the tools integrated within the HCAI-SLR framework. Longitudinal studies assessing the impact of AI-integrated SLRs on research quality, publication speed, and educational outcomes are essential to validate the benefits and identify potential drawbacks. Exploring the applicability and adaptation of the framework in disciplines beyond education, such as medical research, social sciences, and engineering, could provide broader insights and enhancements. As AI tools become more integrated into research processes, addressing ethical concerns and privacy implications, especially when handling sensitive data, becomes paramount. Finally, ensuring that these advanced tools are accessible to a broad range of researchers, including those from under-resourced institutions or regions, is crucial for democratizing research capabilities.

Supplementary Materials

The comprehensive illustration (Section 5.1) and its complete results are available at the following link: https://github.com/tranducle/HCAI-SLR.

Author Contributions

T.L.D.: conceptualization, methodology, writing—original draft preparation (Section 1, Section 2, Section 3, Section 4, and Section 6). T.D.L.: writing (Section 4 and Section 5)—review and editing. S.U.: review and editing. C.P.: review and editing. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Data is contained within the article (and Supplementary Materials).

Acknowledgments

The authors acknowledge the use of a generative AI tool (ChatGPT by OpenAI) for editing and proofreading this paper. The tool was employed to refine the text and improve writing quality, including rewriting sentences and identifying grammatical errors. The authors reviewed all revisions, conducted contextual analysis, and ensured the accuracy and integrity of the final version.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Ozmen Garibay, O.; Winslow, B.; Andolina, S.; Antona, M.; Bodenschatz, A.; Coursaris, C.; Falco, G.; Fiore, S.M.; Garibay, I.; Grieman, K. Six human-centered artificial intelligence grand challenges. Int. J. Hum. Comput. Interact. 2023, 39, 391–437. [Google Scholar] [CrossRef]
  2. Riedl, M.O. Human-centered artificial intelligence and machine learning. Hum. Behav. Emerg. Technol. 2019, 1, 33–36. [Google Scholar] [CrossRef]
  3. Brożek, B.; Furman, M.; Jakubiec, M.; Kucharzyk, B. The black box problem revisited. Real and imaginary challenges for automated legal decision making. Artif. Intell. Law 2024, 32, 427–440. [Google Scholar] [CrossRef]
  4. Wadden, J.J. Defining the undefinable: The black box problem in healthcare artificial intelligence. J. Med. Ethics 2022, 48, 764–768. [Google Scholar] [CrossRef]
  5. Zednik, C. Solving the black box problem: A normative framework for explainable artificial intelligence. Philos. Technol. 2021, 34, 265–288. [Google Scholar] [CrossRef]
  6. Shneiderman, B. Human-centered artificial intelligence: Reliable, safe & trustworthy. Int. J. Hum. Comput. Interact. 2020, 36, 495–504. [Google Scholar]
  7. Chen, L.; Chen, P.; Lin, Z. Artificial intelligence in education: A review. IEEE Access 2020, 8, 75264–75278. [Google Scholar] [CrossRef]
  8. Munir, H.; Vogel, B.; Jacobsson, A. Artificial intelligence and machine learning approaches in digital education: A systematic revision. Information 2022, 13, 203. [Google Scholar] [CrossRef]
  9. Yang, S.J.; Ogata, H.; Matsui, T.; Chen, N.-S. Human-centered artificial intelligence in education: Seeing the invisible through the visible. Comput. Educ. Artif. Intell. 2021, 2, 100008. [Google Scholar] [CrossRef]
  10. March, S.T.; Smith, G.F. Design and natural science research on information technology. Decis. Support Syst. 1995, 15, 251–266. [Google Scholar] [CrossRef]
  11. Qu, J.; Zhao, Y.; Xie, Y. Artificial intelligence leads the reform of education models. Syst. Res. Behav. Sci. 2022, 39, 581–588. [Google Scholar] [CrossRef]
  12. Harry, A. Role of AI in Education. Interdiciplinary J. Hummanity 2023, 2, 260–268. [Google Scholar] [CrossRef]
  13. Zhang, K.; Aslan, A.B. AI technologies for education: Recent research & future directions. Comput. Educ. Artif. Intell. 2021, 2, 100025. [Google Scholar]
  14. Hien, H.T.; Cuong, P.-N.; Nam, L.N.H.; Nhung, H.L.T.K.; Thang, L.D. Intelligent assistants in higher-education environments: The FIT-EBot, a chatbot for administrative and learning support. In Proceedings of the 9th International Symposium on Information and Communication Technology, Da Nang, Vietnam, 6–7 December 2018; pp. 69–76. [Google Scholar]
  15. Sajja, R.; Sermet, Y.; Cikmaz, M.; Cwiertny, D.; Demir, I. Artificial intelligence-enabled intelligent assistant for personalized and adaptive learning in higher education. Information 2024, 15, 596. [Google Scholar] [CrossRef]
  16. Khanna, S.; Kaushik, A.; Barnela, M. Expert systems advances in education. In Proceedings of the National Conference on Computational Instrumentation NCCI-2010, Chandigarh, India, 19–20 March 2010; Central Scientific Instruments Organisation: Chandigarh, India, 2010; pp. 109–112. [Google Scholar]
  17. Le Dinh, T.; Pham Thi, T.T.; Pham-Nguyen, C.; Nam, L.N.H. A knowledge-based model for context-aware smart service systems. J. Inf. Telecommun. 2022, 6, 141–162. [Google Scholar] [CrossRef]
  18. Tiwari, R. The integration of AI and machine learning in education and its potential to personalize and improve student learning experiences. Int. J. Sci. Res. Eng. Manag. 2023, 7. [Google Scholar] [CrossRef]
  19. Chen, C.-M. Intelligent web-based learning system with personalized learning path guidance. Comput. Educ. 2008, 51, 787–814. [Google Scholar] [CrossRef]
  20. Li, K.C.; Wong, B.T.-M. Artificial intelligence in personalised learning: A bibliometric analysis. Interact. Technol. Smart Educ. 2023, 20, 422–445. [Google Scholar] [CrossRef]
  21. Holmes, W.; Miao, F. Guidance for Generative AI in Education and Research; UNESCO Publishing: Paris, France, 2023. [Google Scholar]
  22. Batista, J.; Mesquita, A.; Carnaz, G. Generative AI and higher education: Trends, challenges, and future directions from a systematic literature review. Information 2024, 15, 676. [Google Scholar] [CrossRef]
  23. Alshami, A.; Elsayed, M.; Ali, E.; Eltoukhy, A.E.; Zayed, T. Harnessing the power of ChatGPT for automating systematic review process: Methodology, case study, limitations, and future directions. Systems 2023, 11, 351. [Google Scholar] [CrossRef]
  24. Carter, S.; Nielsen, M. Using artificial intelligence to augment human intelligence. Distill 2017, 2, e9. [Google Scholar] [CrossRef]
  25. Fügener, A.; Grahl, J.; Gupta, A.; Ketter, W. Cognitive challenges in human–artificial intelligence collaboration: Investigating the path toward productive delegation. Inf. Syst. Res. 2022, 33, 678–696. [Google Scholar] [CrossRef]
  26. Schmidt, A. Interactive human centered artificial intelligence: A definition and research challenges. In Proceedings of the International Conference on Advanced Visual Interfaces, Ischia Island, Italy, 28 September–2 October 2020; pp. 1–4. [Google Scholar]
  27. Usmani, U.A.; Happonen, A.; Watada, J. Human-centered artificial intelligence: Designing for user empowerment and ethical considerations. In Proceedings of the 2023 5th International Congress on Human-Computer Interaction, Optimization and Robotic Applications (HORA), Istanbul, Turkey, 8–10 June 2023; pp. 1–7. [Google Scholar]
  28. Li, K.C.; Wong, B.T.-M. Research landscape of smart education: A bibliometric analysis. Interact. Technol. Smart Educ. 2022, 19, 3–19. [Google Scholar] [CrossRef]
  29. Gattupalli, S.; Maloy, R.W. On Human-Centered AI in Education; University of Massachusetts Amherst: Amherst, MA, USA, 2024. [Google Scholar]
  30. Hutson, J.; Plate, D. Human-AI collaboration for smart education: Reframing applied learning to support metacognition. In Advanced Virtual Assistants—A Window to the Virtual Future; IntechOpen: London, UK, 2023. [Google Scholar]
  31. Hajdarpasic, A.; Brew, A.; Popenici, S. The contribution of academics’ engagement in research to undergraduate education. Stud. High. Educ. 2015, 40, 644–657. [Google Scholar] [CrossRef]
  32. Richardson, A.J. The discovery of cumulative knowledge: Strategies for designing and communicating qualitative research. Account. Audit. Account. J. 2018, 31, 563–585. [Google Scholar] [CrossRef]
  33. Correia, A.; Grover, A.; Jameel, S.; Schneider, D.; Antunes, P.; Fonseca, B. A hybrid human–AI tool for scientometric analysis. Artif. Intell. Rev. 2023, 56, 983–1010. [Google Scholar] [CrossRef]
  34. Walsh, I.; Renaud, A.; Medina, M.J.; Baudet, C.; Mourmant, G. ARTIREV: An integrated bibliometric tool to efficiently conduct quality literature reviews. Systèmes D’information Manag. 2022, 27, 5–50. [Google Scholar] [CrossRef]
  35. Wagner, G.; Lukyanenko, R.; Paré, G. Artificial intelligence and the conduct of literature reviews. J. Inf. Technol. 2022, 37, 209–226. [Google Scholar] [CrossRef]
  36. Zawacki-Richter, O.; Marín, V.I.; Bond, M.; Gouverneur, F. Systematic review of research on artificial intelligence applications in higher education–where are the educators? Int. J. Educ. Technol. High. Educ. 2019, 16, 1–27. [Google Scholar] [CrossRef]
  37. Venable, J. The role of theory and theorising in design science research. In Proceedings of the 1st International Conference on Design Science in Information Systems and Technology (DESRIST 2006), Claremont, CA, USA, 24–25 February 2006; pp. 1–18. [Google Scholar]
  38. Vom Brocke, J.; Hevner, A.; Maedche, A. Introduction to design science research. In Design Science Research. Cases; Springer Nature: Berlin, Germany, 2020; pp. 1–13. [Google Scholar]
  39. Muzammul, M. Education System re-engineering with AI (artificial intelligence) for Quality Improvements with proposed model. Adcaij Adv. Distrib. Comput. Artif. Intell. J. 2019, 8, 51. [Google Scholar]
  40. Rabiee, F. Focus-group interview and data analysis. Proc. Nutr. Soc. 2004, 63, 655–660. [Google Scholar] [CrossRef] [PubMed]
  41. Okoli, C.; Schabram, K. A Guide to Conducting a Systematic Literature Review of Information Systems Research. 2015. Available online: https://ssrn.com/abstract=1954824 (accessed on 1 March 2025).
  42. Levy, Y.; Ellis, T.J. A systems approach to conduct an effective literature review in support of information systems research. Informing Sci. 2006, 9, 181–212. [Google Scholar] [CrossRef] [PubMed]
  43. Xiao, Y.; Watson, M. Guidance on conducting a systematic literature review. J. Plan. Educ. Res. 2019, 39, 93–112. [Google Scholar] [CrossRef]
  44. Frické, M. The knowledge pyramid: The DIKW hierarchy. Ko Knowl. Organ. 2019, 46, 33–46. [Google Scholar] [CrossRef]
  45. Le Dinh, T.; Van, T.H.; Nomo, T.S. A framework for knowledge management in project management offices. J. Mod. Proj. Manag. 2016, 3, 159. [Google Scholar]
  46. Pilone, D.; Pitman, N. UML 2.0 in a Nutshell; O’Reilly Media, Inc.: Sebastopol, CA, USA, 2005. [Google Scholar]
  47. Müller-Bloch, C.; Kranz, J. A framework for rigorously identifying research gaps in qualitative literature reviews. In Proceedings of the 36th International Conference on Information Systems (ICIS), Fort Worth, TX, USA, 13–16 December 2015. [Google Scholar]
  48. Smuha, N.A. The EU Approach to Ethics Guidelines for Trustworthy Artificial Intelligence. Comput. Law Rev. Int. 2019, 20, 97–106. [Google Scholar] [CrossRef]
  49. ISO/IEC TR 24028; Information Technology. Artificial Intelligence. Overview of Trustworthiness in Artificial Intelligence. ISO: Geneva, Switzerland, 2020. [CrossRef]
  50. Arrieta, A.B.; Rodríguez, N.D.; Ser, J.D.; Bennetot, A.; Tabik, S.; Barbado, A.; García, S.; Gil-Lopez, S.; Molina, D.; Benjamins, R.; et al. Explainable Artificial Intelligence (XAI): Concepts, Taxonomies, Opportunities and Challenges toward Responsible AI. Inf. Fusion 2019, 58, 82–115. [Google Scholar] [CrossRef]
  51. Berkel, N.V.; Tag, B.; Gonçalves, J.; Hosio, S.J. Human-centred artificial intelligence: A contextual morality perspective. Behav. Inf. Technol. 2020, 41, 502–518. [Google Scholar] [CrossRef]
  52. Tahaei, M.; Constantinides, M.; Quercia, D.; Muller, M. A Systematic Literature Review of Human-Centered, Ethical, and Responsible AI. arXiv 2023, arXiv:2302.05284. [Google Scholar]
  53. Xu, W.; Gao, Z. Enabling Human-Centered AI: A Methodological Perspective. In Proceedings of the 2024 IEEE 4th International Conference on Human-Machine Systems (ICHMS), Toronto, ON, Canada, 15–17 May 2024; pp. 1–6. [Google Scholar]
  54. Lozić, E.; Štular, B. ChatGPT v Bard v Bing v Claude 2 v Aria v human-expert. How good are AI chatbots at scientific writing? arXiv 2023, arXiv:2309.08636. [Google Scholar]
  55. Hadi, M.U.; Tashi, A.; Qureshi, R.; Shah, A.; Irfan, M.; Zafar, A.; Shaikh, M.B.; Akhtar, N.; Wu, J.; Mirjalili, S.; et al. Large Language Models: A Comprehensive Survey of its Applications, Challenges, Limitations, and Future Prospects. TechRxiv 2023. [Google Scholar] [CrossRef]
  56. Giray, L.G. Prompt Engineering with ChatGPT: A Guide for Academic Writers. Ann. Biomed. Eng. 2023, 51, 2629–2633. [Google Scholar] [CrossRef] [PubMed]
  57. Pinzolits, R. AI in academia: An overview of selected tools and their areas of application. MAP Educ. Humanit. 2023, 4, 37–50. [Google Scholar] [CrossRef]
  58. Wang, S.; Scells, H.; Koopman, B.; Zuccon, G. Can ChatGPT Write a Good Boolean Query for Systematic Review Literature Search? In Proceedings of the 46th International ACM SIGIR Conference on Research and Development in Information Retrieval 2023, Taipei, Taiwan, 23–27 July 2023. [Google Scholar]
  59. Khalil, H.; Ameen, D.; Zarnegar, A. Tools to support the automation of systematic reviews: A scoping review. J. Clin. Epidemiol. 2021, 144, 22–42. [Google Scholar] [CrossRef] [PubMed]
  60. Ramirez-Orta, J.; Xamena, E.; Maguitman, A.G.; Soto, A.J.; Zanoto, F.P.; Milios, E.E. QuOTeS: Query-Oriented Technical Summarization. arXiv 2023, arXiv:2306.11832. [Google Scholar]
  61. D’Amico, S.; Dall’Olio, D.; Sala, C.; Dall’Olio, L.; Sauta, E.; Zampini, M.; Asti, G.; Lanino, L.; Maggioni, G.; Campagna, A.; et al. Synthetic Data Generation by Artificial Intelligence to Accelerate Research and Precision Medicine in Hematology. JCO Clin. Cancer Inform. 2023, 7, e2300021. [Google Scholar] [CrossRef]
  62. Wu, J.; Williams, K.; Chen, H.-H.; Khabsa, M.; Caragea, C.; Tuarob, S.; Ororbia, A.; Jordan, D.; Mitra, P.; Giles, C.L. CiteSeerX: AI in a Digital Library Search Engine. AI Mag. 2014, 36, 35–48. [Google Scholar] [CrossRef]
  63. Harfield, S.; Davy, C.; McArthur, A.; Munn, Z.; Brown, A.; Brown, N.J. Covidence vs Excel for the title and abstract review stage of a systematic review. Int. J. Evid. Based Healthc. 2016, 14, 200–201. [Google Scholar] [CrossRef]
  64. Marshall, I.J.; Kuiper, J.; Wallace, B.C. RobotReviewer: Evaluation of a system for automatically assessing bias in clinical trials. J. Am. Med. Inform. Assoc. JAMIA 2015, 23, 193–201. [Google Scholar] [CrossRef]
  65. Baviskar, D.; Ahirrao, S.; Potdar, V.; Kotecha, K.V. Efficient Automated Processing of the Unstructured Documents Using Artificial Intelligence: A Systematic Literature Review and Future Directions. IEEE Access 2021, 9, 72894–72936. [Google Scholar] [CrossRef]
  66. Hamilton, L.; Elliott, D.; Quick, A.; Smith, S.; Choplin, V. Exploring the Use of AI in Qualitative Analysis: A Comparative Study of Guaranteed Income Data. Int. J. Qual. Methods 2023, 22, 16094069231201504. [Google Scholar] [CrossRef]
  67. Lyon, A.; Pacuit, E. The Wisdom of Crowds: Methods of Human Judgement Aggregation. In Handbook of Human Computation; Springer Nature: Berlin, Germany, 2013. [Google Scholar]
  68. Greene, J.; McClintock, C.C. Triangulation in Evaluation. Eval. Rev. 1985, 9, 523–545. [Google Scholar] [CrossRef]
  69. Santana, V.F.d.; Galeno, L.M.D.F.; Brazil, E.V.; Heching, A.R.; Cerqueira, R.F.G. Retrospective End-User Walkthrough: A Method for Assessing How People Combine Multiple AI Models in Decision-Making Systems. arXiv 2023, arXiv:2305.07530. [Google Scholar]
  70. Dobrow, M.J.; Hagens, V.; Chafe, R.; Sullivan, T.J.; Rabeneck, L. Consolidated principles for screening based on a systematic review and consensus process. Can. Med. Assoc. J. 2018, 190, E422–E429. [Google Scholar] [CrossRef]
  71. Peffers, K.; Rothenberger, M.; Tuunanen, T.; Vaezi, R. Design science research evaluation. In Proceedings of the Design Science Research in Information Systems. Advances in Theory and Practice: 7th International Conference, DESRIST 2012, Las Vegas, NV, USA, 14–15 May 2012; pp. 398–410. [Google Scholar]
  72. Page, M.J.; McKenzie, J.E.; Bossuyt, P.M.; Boutron, I.; Hoffmann, T.C.; Mulrow, C.D.; Shamseer, L.; Tetzlaff, J.M.; Akl, E.A.; Brennan, S.E. The PRISMA 2020 statement: An updated guideline for reporting systematic reviews. BMJ 2021, 372, n71. [Google Scholar] [CrossRef]
Figure 1. Research model for the HCAI-SLR framework.
Figure 2. Model of the HCAI-SLR framework.
Figure 3. Demonstration flow for the HCAI-SLR framework.
Table 1. Steps for conducting an SLR and their related constructs.

Step | Objective | Constructs
Identification (S1) | Identify the purpose, goals, title, keywords, and research questions of the review | Purpose, Research question
Protocol and training (S2) | Define the protocol of the review process, especially when more than one reviewer is involved |
Searching for the literature (S3) | Find the related papers in different databases | Electronic sources, Search keyword, Title, Paper, Journal/Conference
Practical screen (S4) | Identify which studies are considered for review based on criteria such as content, publication language, journals, authors, setting, participants or subjects, program or intervention, research design, sampling methodology, date of publication or of data collection, and source of financial support [41] | Abstract, Inclusion criteria, Exclusion criteria, Screened paper
Quality appraisal (S5) | Identify the exclusion criteria for judging which articles are of insufficient quality to be included | Selected paper, Paper full text
Data extraction (S6) | Extract the applicable information from the selected research papers | Extracted data
Synthesis of studies (S7) | Combine the facts extracted from the studies reported in the selected research papers | Literature synthesis
Writing the review (S8) | Present the results of the review systematically [42]. This step builds on the DIKW hierarchy (data–information–knowledge–wisdom) [44]: data gather the parts, information connects the parts, knowledge forms a whole, and wisdom joins the wholes [45] | Literature analysis
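To make the constructs in Table 1 more concrete for readers who build tool support around the framework, the following minimal Python sketch models a subset of them as plain data classes. The class and field names (SLRStep, Paper, Review, and their attributes) are illustrative assumptions, not part of the framework itself.

```python
from dataclasses import dataclass, field
from enum import Enum
from typing import List, Optional


class SLRStep(Enum):
    """The eight SLR steps from Table 1 (S1-S8)."""
    IDENTIFICATION = "S1"
    PROTOCOL_AND_TRAINING = "S2"
    SEARCHING = "S3"
    PRACTICAL_SCREEN = "S4"
    QUALITY_APPRAISAL = "S5"
    DATA_EXTRACTION = "S6"
    SYNTHESIS = "S7"
    WRITING = "S8"


@dataclass
class Paper:
    """A candidate paper located during the search step (S3)."""
    title: str
    source: str                      # electronic source, e.g., a bibliographic database
    abstract: str = ""
    full_text: Optional[str] = None  # populated for quality appraisal (S5)


@dataclass
class Review:
    """Minimal container linking the main constructs across the SLR steps."""
    purpose: str                                                  # S1
    research_questions: List[str]                                 # S1
    search_keywords: List[str] = field(default_factory=list)      # S3
    inclusion_criteria: List[str] = field(default_factory=list)   # S4
    exclusion_criteria: List[str] = field(default_factory=list)   # S4
    screened_papers: List[Paper] = field(default_factory=list)    # S4
    selected_papers: List[Paper] = field(default_factory=list)    # S5
    extracted_data: List[dict] = field(default_factory=list)      # S6
```

Even this much structure makes it easy to log which step produced which artifact, which supports the traceability that the framework emphasizes.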
Table 2. HCAI activities.

HCAI Activity | HCAI Guidance | Objective
Human initiation | Human-before-the-loop | Initiating the research topic and outlining the research objectives and initial research questions
AI augmentation | Human-in-the-loop | Augmenting the SLR process using prompt-based and task-oriented AI tools
AI triangulation | Human-in-the-loop | Using different AI tools to cross-check against each other under human supervision to identify discrepancies or inconsistencies in the results
Human decision | Human-over-the-loop | Ensuring the validity and relevance of the findings
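As a rough sketch of how the four activities in Table 2 could be chained for a single SLR task, the code below runs two stand-in AI tools on the same prompt, flags disagreements, and defers the final choice to the human reviewer. The function names, the lambda "tools", and the simple string comparison used for triangulation are illustrative placeholders rather than prescribed implementations.

```python
from typing import Callable, List

# Each AI tool is modeled as a function from a task description to a textual output.
AITool = Callable[[str], str]


def human_initiation() -> str:
    """Human-before-the-loop: the researcher frames the task without AI involvement."""
    return "Suggest search keywords for a review of human-centered AI in higher education."


def ai_augmentation(task: str, tool: AITool) -> str:
    """Human-in-the-loop: a single AI tool augments the task; its output stays provisional."""
    return tool(task)


def ai_triangulation(task: str, tools: List[AITool]) -> List[str]:
    """Human-in-the-loop: run several tools on the same task to surface discrepancies."""
    outputs = [tool(task) for tool in tools]
    if len(set(outputs)) > 1:
        print("Discrepancy detected: outputs differ and require human review.")
    return outputs


def human_decision(candidates: List[str], chosen_index: int) -> str:
    """Human-over-the-loop: the researcher inspects every candidate and validates one."""
    for i, candidate in enumerate(candidates):
        print(f"Candidate {i}: {candidate}")
    return candidates[chosen_index]


if __name__ == "__main__":
    # Stub tools standing in for real prompt-based or task-oriented AI services.
    tool_a: AITool = lambda task: "higher education; artificial intelligence; human-centered AI"
    tool_b: AITool = lambda task: "AI in education; HCAI; systematic literature review"

    task = human_initiation()
    print("Provisional output:", ai_augmentation(task, tool_a))
    candidates = ai_triangulation(task, [tool_a, tool_b])
    final = human_decision(candidates, chosen_index=0)  # index chosen by the human after inspection
    print("Validated result:", final)
```

In practice, triangulation would compare richer outputs (for example, keyword lists or screening decisions) with more tolerant matching, but the control flow stays the same: the AI proposes, and the human decides.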
Table 3. SLR steps involving prompt-based AI tools.

Step | Role of AI Tools | Role of Humans
Identification (S1) | Suggest, refine, and select the research title, keywords, research outline, and initial research questions from input data [58] | Define the review's scope, objectives, and critical keywords based on expertise and research needs; prepare or refine the appropriate prompts; determine the search strategy; shortlist the keywords and research questions
Synthesis of studies (S7) | Summarize and synthesize the key findings [59] of papers through prompted queries [60]; identify and cluster recurring themes, patterns, or insights; suggest conclusions and research gaps based on the structured findings | Choose appropriate methods (quantitative or qualitative) to synthesize data from the selected studies; draw insightful conclusions from the aggregated findings
Writing (S8) | Draft initial sections, format references, and ensure consistency throughout the review; paraphrase the text of the report in an academic and concise tone | Write the draft of the report; craft the narrative, ensuring that interpretations align with evidence [54,61]
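For the identification step (S1) in Table 3, a reviewer might capture the human-defined scope and objectives in a reusable prompt template before submitting it to a chat-based model. The template wording below is only one possible phrasing, and send_to_llm is a hypothetical placeholder for whatever client or interface the reviewer actually uses.

```python
# Minimal sketch of prompt preparation for step S1 (identification), assuming a
# generic chat-based LLM. send_to_llm is a hypothetical stand-in for the
# reviewer's actual client library or web interface.

PROMPT_TEMPLATE = """You are assisting with a systematic literature review.
Review scope: {scope}
Review objectives: {objectives}

Suggest:
1. Three candidate titles for the review.
2. Ten search keywords or phrases.
3. Three initial research questions.
Present each list separately so that a human reviewer can screen it."""


def build_identification_prompt(scope: str, objectives: str) -> str:
    """Fill the template with the scope and objectives defined by the human reviewer."""
    return PROMPT_TEMPLATE.format(scope=scope, objectives=objectives)


def send_to_llm(prompt: str) -> str:
    """Placeholder: replace with a call to the reviewer's chosen LLM service."""
    raise NotImplementedError("Connect this function to an actual LLM client.")


if __name__ == "__main__":
    prompt = build_identification_prompt(
        scope="Human-centered AI tools supporting systematic literature reviews in higher education",
        objectives="Identify how AI augments each SLR step while keeping human oversight",
    )
    print(prompt)  # The reviewer inspects and refines the prompt before sending it.
```

Printing the prompt before sending it keeps the human-in-the-loop checkpoint explicit: the reviewer refines the wording first and only then submits it.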
Table 4. SLR steps involving task-oriented AI tools.

Step | Role of AI Tools | Role of Humans
Protocol and training (S2) | AI tools have a limited role | Choose a supported platform and define the review protocol
Searching for the literature (S3) | Search for papers related to specific keywords or research questions [62]; note that AI search engines cannot cover all databases | Ensure a comprehensive search across databases and refine the search strategy for systematic coverage
Practical screen (S4) | Prioritize relevant titles/abstracts with keyword highlights [63], provide relevance scores, and detect duplicates | Validate and decide on the list of screened papers
Quality appraisal (S5) | AI tools have a limited role; semi-automated assessment based on the quality assessment questions or keywords [64] | Manually verify with full-text screening
Data extraction (S6) | Automatically extract key information such as study characteristics, outcomes, contributions, and results from papers according to the research questions [65] | Verify AI-extracted data for accuracy; interpret the results
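To illustrate the kind of task-oriented support listed for the practical screen (S4) in Table 4, the sketch below ranks titles and abstracts by a crude keyword-count score and flags near-duplicate titles with a string-similarity check. The scoring rule, the 0.9 similarity threshold, and the sample records are arbitrary illustrative choices; the include/exclude decision remains with the human reviewer.

```python
import re
from difflib import SequenceMatcher
from typing import Dict, List


def relevance_score(text: str, keywords: List[str]) -> int:
    """Count keyword occurrences in a title/abstract as a crude relevance score."""
    text = text.lower()
    return sum(len(re.findall(re.escape(kw.lower()), text)) for kw in keywords)


def is_duplicate(title_a: str, title_b: str, threshold: float = 0.9) -> bool:
    """Flag two titles as likely duplicates when their similarity ratio is high."""
    return SequenceMatcher(None, title_a.lower(), title_b.lower()).ratio() >= threshold


def prioritize(records: List[Dict[str, str]], keywords: List[str]) -> List[Dict[str, str]]:
    """Rank records by relevance; the final screening decision stays with the human."""
    return sorted(
        records,
        key=lambda r: relevance_score(r["title"] + " " + r["abstract"], keywords),
        reverse=True,
    )


if __name__ == "__main__":
    keywords = ["artificial intelligence", "higher education", "systematic review"]
    records = [
        {"title": "AI in higher education: a systematic review", "abstract": "..."},
        {"title": "Blockchain scalability", "abstract": "..."},
        {"title": "AI in Higher Education: A Systematic Review", "abstract": "..."},
    ]
    for r in prioritize(records, keywords):
        print(relevance_score(r["title"] + " " + r["abstract"], keywords), r["title"])
    print("Possible duplicate:", is_duplicate(records[0]["title"], records[2]["title"]))
```

Dedicated screening platforms provide far more sophisticated versions of these functions; the sketch only shows where human validation fits relative to the automated ranking.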
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
