Article

Leveraging Generative AI for Sustainable Academic Advising: Enhancing Educational Practices through AI-Driven Recommendations

by
Omiros Iatrellis
1,*,
Nicholas Samaras
1,
Konstantinos Kokkinos
1 and
Theodor Panagiotakopoulos
2,3
1
Department of Digital Systems, University of Thessaly, 41500 Larissa, Greece
2
School of Science and Technology, Hellenic Open University, 26335 Patras, Greece
3
Business School, University of Nicosia, CY-2417 Nicosia, Cyprus
*
Author to whom correspondence should be addressed.
Sustainability 2024, 16(17), 7829; https://doi.org/10.3390/su16177829
Submission received: 17 August 2024 / Revised: 3 September 2024 / Accepted: 6 September 2024 / Published: 8 September 2024
(This article belongs to the Special Issue Sustainable Education and Innovative Teaching Methods)

Abstract:
This study explores the integration of ChatGPT, a generative AI tool, into academic advising systems, aiming to assess its efficacy compared to traditional human-generated advisories. Conducted within the INVEST European University, which emphasizes sustainable and innovative educational practices, this research leverages AI to demonstrate its potential in enhancing sustainability within the context of academic advising. By providing ChatGPT with scenarios from academic advising, we evaluated the AI-generated recommendations against traditional advisories across multiple dimensions, including acceptance, clarity, practicality, impact, and relevance, in real academic settings. Five academic advisors reviewed recommendations across diverse advising scenarios such as pursuing certifications, selecting bachelor dissertation topics, enrolling in micro-credential programs, and securing internships. AI-generated recommendations provided unique insights and were considered highly relevant and understandable, although they received moderate scores in acceptance and practicality. This study demonstrates that while AI does not replace human judgment, it can reduce administrative burdens, significantly enhance the decision-making process in academic advising, and provide a foundation for a new framework that improves the efficacy and sustainability of academic advising practices.

1. Introduction

Amid growing environmental and societal concerns, sustainable education has emerged as a pivotal agenda in global educational reform [1,2]. Recognizing the transformative potential of generative AI, we assert that its integration will revolutionize educational practices, driving significant advancements in academic advising and fostering the development of educational systems that are scalable, efficient, and less resource intensive [3]. This approach aligns with evolving educational paradigms that emphasize the necessity of integrating innovative technologies to address both current and future educational challenges sustainably.
The role of academic advising is increasingly critical in this context. It serves not just to guide students through their academic journeys but to operationalize education practices effectively [4,5]. Effective academic advising can significantly enhance student retention and success, as it directly impacts students’ decision-making regarding course selection, major choice, and career planning [6,7]. According to the Council for the Advancement of Standards in Higher Education, comprehensive academic advising is fundamental to fostering learning and development outcomes that are critical to the success of students [8]. In addition to guiding individual academic decisions, academic advising tackles broader educational challenges, such as pursuing scholarships and financial aid, facilitating access to international exchange programs, and effectively integrating extracurricular programs into students’ educational pathways [9]. The European Union has emphasized the importance of academic guidance in ensuring that learners can make informed decisions about their education in line with emerging trends such as micro-credentials, which are recognized for their potential to enhance lifelong learning and employability [10].
In this context, the adoption of electronic academic advising systems has increased, driven by the need to manage a growing array of course offerings and complex graduation pathways effectively [11,12]. These systems, which range from basic program requirement tracking to more sophisticated predictive analytics platforms, are increasingly recognized as vital tools for delivering timely and personalized advice to students [13]. By integrating AI, specifically through generative models, we aim to reduce administrative burdens and allow for scalable solutions that adeptly address the needs of a diverse student body. This integration not only enhances the functionality and reach of academic advising systems but also contributes significantly to the sustainability of educational operations by optimizing resource use and improving accessibility.
Recognizing these challenges, this study employs a mixed-methods research design, integrating both qualitative and quantitative methods to explore the integration of generative AI technologies for enhancing academic advising systems. Specifically, we utilize ChatGPT-4, an advanced AI language model developed by OpenAI, known for its ability to generate human-like text responses [14]. Since its advent, the academic research community has been actively investigating its potential and exploring its diverse applications [15]. The sections that follow will detail the methodology employed to leverage ChatGPT in academic advising, including the design, deployment, and evaluation phases of our study. We will present a series of case studies and experiments that demonstrate the practical applications and impacts of AI-driven recommendations within academic settings. Throughout the discussion, we will critically analyze the benefits and limitations of integrating AI into the advising process, providing insights and observations from our empirical research. This paper culminates in the introduction of a new framework developed from our findings, designed to enhance the efficacy and sustainability of academic advising. Our goal is to demonstrate that generative AI tools can be used not to replace but to augment the capabilities of academic advisors, thus creating a hybrid advising framework where human expertise and artificial intelligence work in concert to provide superior student support.

2. Research Design and Methods

2.1. Research Context and Setting

This study was conducted at the University of Thessaly, a member of the INVEST European University Alliance (https://www.invest-alliance.eu, accessed on 1 July 2024), known for its commitment to sustainable regional development. The INVEST European University Alliance, comprising seven universities, was chosen for its emphasis on sustainability as a core part of its mission; its diverse student body and comprehensive range of academic programs, from undergraduate to doctoral levels, provided an ideal environment for this study. This diversity offered a broad testing ground for deploying and evaluating approaches designed to assist students in course selection, identify prerequisite gaps, and suggest academic paths aligned with their career aspirations. Within the context of the INVEST project, several academic advising scenarios were examined, considering various factors that influence academic advising, including students’ interests and motivations, academic achievements, socio-economic backgrounds, and personality and behavioral patterns. These considerations were instrumental in developing the intelligent academic advising system named EDUC8EU, which is built around a rule-based expert system [16].

2.2. Academic Advising Scenarios and Criteria

This study examined seven academic advising scenarios at the University of Thessaly, detailed in Table 1. These scenarios, selected from the INVEST project, were previously assessed for potential enhancements by academic advisors. Throughout the review process, the logic of each scenario and the recommendations made by human experts were meticulously documented. The recommendations included specific inclusion and exclusion criteria, which could be easily converted into rule format. This conversion allowed the criteria to be fed into the EDUC8EU rule-based expert system, enabling it to generate precise and tailored recommendations.

2.3. Integration and Application of Generative AI

To enhance academic advising practices, we utilized ChatGPT-4, tailored to address a wide range of advising requirements. We provided the model with descriptions of the seven academic advising scenarios listed in Table 1. The prompts were structured as follows: “I have a student seeking academic advising to [specific advising scenario], considering [any relevant academic regulations, prerequisites, and criteria], what criteria/solutions should we add to the scenario?” An example of how ChatGPT-4 was used to refine academic advising practices is depicted in Figure 1, while the complete prompts for the predefined scenarios are detailed in the Supplementary Materials.
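The prompt template above can be expressed as a small helper. This is an illustrative sketch rather than the study’s actual code: the function name `build_advising_prompt` and the example scenario and constraints are our own, and the commented-out request assumes the standard OpenAI Python SDK.

```python
# Illustrative sketch of the paper's prompt template.
# The helper name and example values are hypothetical, not from the study.

def build_advising_prompt(scenario: str, constraints: str) -> str:
    """Fill the prompt template for one academic advising scenario."""
    return (
        f"I have a student seeking academic advising to {scenario}, "
        f"considering {constraints}, "
        "what criteria/solutions should we add to the scenario?"
    )

prompt = build_advising_prompt(
    "pursue a pedagogical and teaching certification",
    "the maximum credit load allowed per semester and course prerequisites",
)

# Sending the prompt with the OpenAI Python SDK would look roughly like this
# (left commented out because it requires an API key):
#
# from openai import OpenAI
# client = OpenAI()
# response = client.chat.completions.create(
#     model="gpt-4",
#     messages=[{"role": "user", "content": prompt}],
# )
# print(response.choices[0].message.content)

print(prompt)
```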

2.4. Expert Assessment of Human and AI Recommendations

We combined recommendations generated by AI with those created by five professors serving as academic advisors within the INVEST project, organizing them according to the relevant academic advising scenarios. In each scenario, we shuffled the order of AI- and human-generated recommendations. Because recommendations from human advisors often contained institution-specific keywords that ChatGPT would not produce, we reformatted them to remove these details. For instance, if an advisor suggested “Student has sufficient knowledge in TELECOM_I”, where “TELECOM_I” referred to the “Telecommunication Systems I” course, we revised it to “Student has sufficient knowledge in telecommunication systems”. The evaluators were not told whether each suggestion came from a human or from ChatGPT, though they knew they were evaluating a mixture of both.
We developed an online semi-structured questionnaire to collect data. Each academic advising scenario included the following:
  • The scenario’s description;
  • The logic behind the advice.
The questionnaire was first tested within the research team to improve its structure. Participants, who were academic advisors experienced in student guidance and academic planning, were recruited from the University of Thessaly. Each was given a unique identifier to maintain anonymity. They assessed each recommendation independently using a 5-point Likert scale ranging from 1 (strongly disagree) to 5 (strongly agree) from various angles:
  • Clarity: “I understand this recommendation”;
  • Relevance: “This recommendation effectively addresses the scenario”;
  • Practicality: “This recommendation can be implemented practically”;
  • Acceptance: “I would implement this recommendation without any changes”;
  • Impact: “This suggestion offers practical enhancements and meaningful changes that effectively address specific academic challenges”.
Additional comments were solicited for each recommendation through a provided text box.

2.5. Statistical Analysis

To evaluate the validity of expert ratings between AI-generated and human-generated recommendations, we utilized the Mann–Whitney U test [17]. Furthermore, for a deeper analysis at the scenario level, the Kruskal–Wallis H-test was employed to compare the median values of each criterion, setting the significance threshold at p < 0.01 [18].
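As a minimal sketch of this analysis, both tests are available in `scipy.stats`; the rating arrays below are synthetic placeholders, not the study’s data.

```python
# Sketch of the statistical comparisons described above, using scipy.stats.
# The ratings below are synthetic placeholders, not the study's data.
from scipy.stats import mannwhitneyu, kruskal

ai_ratings    = [3, 4, 3, 2, 4, 3, 3, 5, 2, 3]   # AI-generated recommendations
human_ratings = [4, 4, 3, 5, 4, 3, 4, 5, 3, 4]   # human-generated recommendations

# Mann-Whitney U test: AI vs. human ratings (two independent, ordinal samples)
u_stat, u_p = mannwhitneyu(ai_ratings, human_ratings, alternative="two-sided")

# Kruskal-Wallis H-test: ratings grouped by advising scenario
scenario_a = [3, 4, 4, 5]
scenario_b = [2, 3, 3, 4]
scenario_c = [4, 5, 4, 5]
h_stat, h_p = kruskal(scenario_a, scenario_b, scenario_c)

print(f"Mann-Whitney U = {u_stat:.1f}, p = {u_p:.3f}")
print(f"Kruskal-Wallis H = {h_stat:.2f}, p = {h_p:.3f}  (significant if p < 0.01)")
```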
To assess the consistency of ratings among different reviewers, the intraclass correlation coefficient (ICC) was calculated [19]. The analysis used a two-way mixed-effects model based on the mean of k raters. ICC values below 0.5 indicate low reliability, values between 0.5 and 0.74 suggest moderate reliability, values between 0.75 and 0.9 indicate good reliability, and values above 0.9 denote excellent reliability.
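The average-measures ICC can be computed from a two-way ANOVA decomposition of the rating matrix. The sketch below implements the consistency form ICC(3,k) under our own reading of the description above (two-way mixed effects, mean of k raters); the rating matrix is an illustrative placeholder, and the reliability labels follow the thresholds stated in the text.

```python
# Sketch of an average-measures ICC from a two-way ANOVA decomposition.
# Implements ICC(3,k) (two-way mixed effects, consistency, mean of k raters)
# under our reading of the paper's description; the matrix is synthetic.
import numpy as np

def icc3k(x: np.ndarray) -> float:
    """x: (n targets x k raters) rating matrix."""
    n, k = x.shape
    grand = x.mean()
    ss_rows = k * ((x.mean(axis=1) - grand) ** 2).sum()   # between targets
    ss_cols = n * ((x.mean(axis=0) - grand) ** 2).sum()   # between raters
    ss_total = ((x - grand) ** 2).sum()
    bms = ss_rows / (n - 1)                               # target mean square
    ems = (ss_total - ss_rows - ss_cols) / ((n - 1) * (k - 1))  # residual
    return (bms - ems) / bms

ratings = np.array([
    [4, 5, 4, 4, 5],
    [3, 3, 4, 3, 3],
    [2, 2, 2, 3, 2],
    [5, 4, 5, 5, 4],
], dtype=float)  # 4 recommendations rated by 5 advisors (synthetic)

icc = icc3k(ratings)
label = ("excellent" if icc > 0.9 else "good" if icc >= 0.75
         else "moderate" if icc >= 0.5 else "low")
print(f"ICC(3,k) = {icc:.2f} ({label} reliability)")
```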
All statistical analyses were executed in Python 3.6. Additionally, qualitative feedback received via free-text comments was methodically analyzed using NVivo 13 through an inductive approach [20]. An open coding scheme guided the thematic analysis, which was conducted by the authors, who meticulously read and coded all remarks to extract and summarize the prevalent themes.
Furthermore, we provided a comprehensive report on the descriptive statistics of participants’ demographics, including their roles, academic departments, and years of experience in the academic field. This multi-faceted approach ensures a thorough examination of the data, enhancing the credibility and utility of the findings.

3. Results

Five academic advising experts, specializing in computer science, computer networks, data science, software engineering, and economics, participated in this study. They brought an average of 18.5 years of academic experience. Participant details are summarized in Table 2. The ICC was 0.86 (95% CI [0.84, 0.89]), indicating good to excellent reliability.

3.1. AI and Human Advisory Examples

ChatGPT produced 40 recommendations spanning seven advising scenarios, in contrast to the 27 developed by human advisors. Each of these recommendations was incorporated into the final survey. On average, ChatGPT’s recommendations were 193.0 characters long (SD = 47.0), whereas human advisors’ recommendations averaged 78.6 characters (SD = 31.2). Of the top 15 highest-rated recommendations in the survey, 6 were from ChatGPT. Table 3 presents these top 15 recommendations along with their scores for acceptance, relevance, understanding, practicality, and impact.

3.2. Results for Recommendations Generated by Experts and AI

Among the 40 recommendations produced by AI, 28 (70%) reached a score of 3 or above, peaking at 3.8 (SD = 0.4) and bottoming out at 2.4 (SD = 0.6). The mean score was 3.1 (SD = 0.6). These AI-generated recommendations focused on ensuring students’ academic readiness and commitment to their chosen programs by suggesting criteria like manageable course loads, formal assessments of potential, and demonstrated involvement in relevant community or educational activities.
In Table 4, the AI-generated recommendations are quantitatively evaluated across the five criteria. The average scores for relevance and clarity were notably high, categorized as “Agree”. Practicality, however, was rated much lower, falling under “Strongly disagree”, indicating challenges in the implementation of these AI suggestions. The scores for acceptance and impact were moderate, aligning more with “Neutral” and suggesting some hesitance toward adopting AI recommendations without modifications. In comparison, the 27 human-generated recommendations achieved slightly higher overall scores, with more consistent ratings across all dimensions. The mean score for human-generated recommendations was 3.4 (SD = 0.6), with a maximum score of 4.4 (SD = 0.4) and a minimum score of 2.8 (SD = 0.2). These suggestions often focused on targeted academic strategies that were closely aligned with the department’s offerings and collaborative initiatives.
AI-generated recommendations were praised for their high levels of relevance and clarity, but they received moderate scores for impact and acceptance and low scores for practical implementation. For instance, in the scenario “S1: Pursuing Pedagogical and Teaching Certification”, the AI-generated suggestion to include conditions where the number of courses taken each semester does not exceed a maximum credit limit was noted for ensuring that students can handle the additional workload of the parallel program. This suggestion was highlighted for its high relevance and clarity, resonating well with academic standards and expectations. In contrast, human-generated recommendations were noted for their higher acceptance rates and perceived practicality, reflecting advisors’ greater confidence in concrete and tangible methods.
Figure 2 and Figure 3 display the scores for each evaluation item for both AI-generated and human-generated suggestions. The results show that human recommendations were rated higher in terms of acceptance, impact, and practicality compared to those generated by AI, although the ratings for clarity and relevance were comparable between the two sources of recommendations.
To explore the differences in expert ratings across the seven academic advising scenarios, we conducted a Kruskal–Wallis H-test. The analysis revealed a statistically significant difference in expert ratings across the scenarios, χ2 (6, N = 335) = 14.56, p = 0.023, η2 = 0.22. This result indicates that the ratings varied significantly depending on the scenario, with a moderate effect size suggesting meaningful differences in how experts evaluated the recommendations. Additionally, the analysis highlighted that variations in acceptance, clarity, and impact scores were statistically significant across different scenarios (p < 0.001). However, scores for relevance and practicality did not show significant differences (relevance p = 0.4, practicality p = 0.8), underscoring a consistent perception of these attributes across various contexts.

3.3. Qualitative Analysis of Comments on AI-Generated Recommendations

A recurring theme in the qualitative feedback from academic advisors was the difficulty of effectively managing and implementing AI-generated recommendations. While advisors appreciated the precision and detail of the AI-generated suggestions, they often noted that these recommendations required additional context and resources to be successfully put into practice.
For instance, concerning the “S8: Overcoming Challenges in Telecommunications Courses” scenario, one AI-generated recommendation suggested that students who are struggling with core courses should be automatically enrolled in supplementary tutoring sessions. Although advisors recognized the potential benefits of this recommendation, they pointed out several implementation challenges. First, the institution would need a robust system to identify students at risk in real time, which involves integrating multiple data sources such as grades, attendance records, and feedback from instructors. Second, ensuring that there are enough qualified tutors available at the times needed by the students would require significant coordination and resource allocation. This includes recruiting, training, and scheduling tutors, as well as maintaining up-to-date records of student progress and tutor effectiveness.
This example illustrates that while AI-generated recommendations can provide valuable insights, their practical application often demands extensive coordination and resource management. Advisors stressed the importance of developing comprehensive implementation plans that integrate AI tools with existing institutional resources and frameworks. This approach ensures that the recommendations are not only actionable but also effectively support student success.
In addition, we observed incorrect outputs in the AI-generated academic advising recommendations, where the AI produced suggestions involving inaccurate or non-existent criteria. This issue, known as AI hallucination, stems from the inherent nature of AI models that rely on probabilistic algorithms to make inferences [21]. For example, in the scenario “S6: Transition to MBA After a Bachelor’s in Digital Systems”, the AI suggested that applicants must have “a good GMAT or DAT score” as an inclusion criterion for application to the MBA program. Academic advisors pointed out that while a good GMAT score is a typical requirement for MBA admissions, the DAT (Dental Admission Test) is entirely unrelated to business school applications. They suspected that the AI might have meant to refer to the CAT (Common Admission Test), which is a widely accepted test for business schools in some countries.
Moreover, we encountered instances of partially correct information. One expert noted that an AI-generated recommendation for scenario “S20: Selection of an INVEST Bachelor Specialization”, which proposed that students need proficiency in both English and the local language of the host university, was not entirely accurate. While English proficiency is indeed required, the requirement for local language skills is not universally applicable: most student exchange programs offer instruction in English, either as lectures or personal tutoring, eliminating the necessity for additional language qualifications. This oversight highlights the need for AI systems to align more closely with actual program requirements to prevent the potential misguidance of students.
Finally, in our evaluation of AI-generated recommendations for scenario “S7: Securing Internships in the Bioinformatics Sector”, we observed differing expert opinions. The AI suggested that applicants should possess substantial biological knowledge. While two advisors supported this recommendation, emphasizing the importance of biology for many roles within bioinformatics, another expert raised concerns about its potential limitations. This expert pointed out that numerous internships, particularly those centered on computational tasks and data analysis, demand robust programming and data management skills over extensive biological expertise. They expressed that the AI’s broad recommendation might unintentionally restrict the pool of eligible applicants, sidelining those with strong technical capabilities but more moderate biology knowledge.

4. Discussion

In our research, we utilized ChatGPT to enhance the academic advising framework. We assessed AI-generated recommendations by combining them with those crafted by humans, requesting feedback from experts in academic advising on aspects such as clarity, impact, relevance, and practicality. Although the recommendations generated by AI did not receive ratings as high as those from human contributors, they were noted for their clarity and relevance, with moderate scores for impact and acceptance but low scores for practicality. The study’s outcomes suggest that ChatGPT has the potential to systematically review and refine academic advising methods. While most recommendations from AI required alterations, they provided foundational insights that could significantly contribute to enhancing the processes. This methodology supports swift evaluations across numerous advising situations, potentially broadening the scope of academic advising improvements. It also integrates seamlessly into the preliminary stages of the advising enhancement process, enabling the incorporation of AI-driven insights early on.
Building on the foundational work of Yang and Huang [22], which emphasized the enhancement of counselor workflows through AI, our research broadens the application by integrating AI directly into the academic advising processes. This expansion not only complements but also enriches advisor–student interactions, going beyond professional development to directly support student engagement and success. Echoing findings from other studies on AI’s impact on student motivation and learning outcomes [23], our work also delves into the administrative and strategic benefits of employing AI in academic settings. We provide a comprehensive examination of how AI-driven recommendations can complement traditional advising methods by offering rapid, personalized feedback, a key component in improving student retention and success.
Furthermore, the integration of AI technologies, as discussed in research by Smith and colleagues [24], demonstrates the synergy between human expertise and machine efficiency, enhancing the responsiveness of academic advising systems. Our study corroborates these findings, showing that AI can reduce the administrative burdens traditionally associated with academic advising. By implementing scalable solutions, as seen in Johnson’s exploration of AI in higher education [25], we further validate the capacity of AI to adapt to a diverse student body, enhancing the overall accessibility and effectiveness of educational systems.
In terms of technology acceptance and user integration, the insights from Thompson et al. [26] underscore the critical role of user engagement in adopting new technologies. Our research aligns with these perspectives, revealing that while AI recommendations are highly innovative, their practical implementation requires the careful consideration of user acceptance and ease of integration into existing advising practices.
With these positive results for AI in mind, we sought to refine academic advising through an updated integration of the EDUC8EU expert system with ChatGPT. To achieve this, we enhanced the backend, which allows end users to model the inclusion and exclusion criteria using semantic rules (Figure 4) derived from a comprehensive ontology. This ontology holistically describes the academic domain, incorporating concepts related to learners, learning pathways, organizational aspects, and quality assurance. Because the semantic rules derive directly from this well-defined ontology, their consistency across the system is ensured. These rules are crucial for shaping tailored advising and encapsulate complex academic knowledge, including domain-specific prerequisites and career pathways. Continuously refined through real-world application and feedback, the system’s advice remains both accurate and aligned with current academic standards. In the enhanced backend, the implementation of semantic rules is linked to rich text generation (Figure 5). This setup produces an explainable output that serves as the initial academic advising output. Subsequently, the EDUC8EU recommendation is further enhanced by embedding it within ChatGPT’s conversational AI framework using the OpenAI API (Figure 6). ChatGPT enriches the output from EDUC8EU with detailed, contextually appropriate expansions in natural language. This enhancement not only makes the advice more accessible but also enables interactive feedback, allowing the system to refine its recommendations based on student interactions.
This hybrid approach leverages the strengths of both systems: the reliability and depth of EDUC8EU’s expert system and the adaptability and user-friendly interface of ChatGPT. Together, they form a dynamic academic advising tool that combines precise expert knowledge with the engaging, flexible capabilities of conversational AI. As a result, students receive personalized, understandable, and actionable academic advice, empowering them to make informed decisions aligned with their educational and career aspirations. This prototype represents a significant advancement in academic advising, aiming toward a new standard for integrating structured expert systems with the nuanced capabilities of AI-driven interactions.
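The hybrid pipeline described above can be sketched as follows. This is an illustrative outline, not the EDUC8EU implementation: the function name, the sample rule-based output, and the system prompt are our own assumptions, and the actual OpenAI request is commented out because it requires an API key.

```python
# Illustrative outline of the hybrid advising pipeline described above.
# The function name, the sample rule output, and the system prompt are our
# own assumptions, not the EDUC8EU implementation.

def enrichment_messages(rule_based_advice: str) -> list:
    """Wrap the expert system's explainable output in a chat request so a
    conversational model can expand it into natural-language advice."""
    return [
        {
            "role": "system",
            "content": (
                "You are an academic advisor. Expand the following "
                "rule-based recommendation into clear, supportive advice "
                "for the student, without changing its substance."
            ),
        },
        {"role": "user", "content": rule_based_advice},
    ]

# Hypothetical explainable output from the rule-based expert system:
educ8eu_output = (
    "Recommendation: enroll in the micro-credential 'Data Analytics'. "
    "Criteria met: GPA >= 7.0; prerequisite 'Statistics I' passed."
)

messages = enrichment_messages(educ8eu_output)

# With the OpenAI Python SDK, the enrichment step would look roughly like:
#
# from openai import OpenAI
# client = OpenAI()
# reply = client.chat.completions.create(model="gpt-4", messages=messages)
# print(reply.choices[0].message.content)

print(messages[1]["content"])
```

Keeping the rule-based recommendation as the user message preserves the expert system’s output as the ground truth that the conversational layer may only elaborate, not override.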
Over the coming years of the INVEST project, the new system will collect data on its effectiveness in academic advising. A comprehensive evaluation of these data will be conducted in subsequent years to assess the system’s impact on student decision-making and advising quality. This forthcoming analysis, though crucial for validating the prototype’s practical utility, falls outside the scope of the current paper.

5. Practical Implications

This study makes several key contributions to the field of academic advising by leveraging generative AI, specifically enhancing the framework for using AI tools in practical educational settings. Conducted within the context of the INVEST European University, which prioritizes sustainable regional development, this study aligns with broader sustainability goals. By integrating AI to enhance academic advising, we contribute to the development of scalable, efficient, and less resource-intensive educational support systems. These advancements not only support the sustainability objectives of the INVEST project but also offer a pathway to more sustainable educational practices by optimizing resource use and improving accessibility to personalized guidance.
The key practical implications of this research are as follows:
  • Support for academic advisors: The findings demonstrate that AI-generated recommendations can augment, rather than replace, human expertise. This hybrid approach supports academic advisors by reducing administrative burdens and enabling them to focus on more complex, personalized student interactions, ultimately enhancing the quality of advising.
  • Scalable educational solutions: The use of AI in academic advising systems addresses the growing need for scalable solutions in higher education, particularly in institutions with large and diverse student populations. AI’s ability to provide timely, relevant, and context-sensitive advice can significantly improve the student experience.
  • Contribution to sustainable education: By optimizing resource use and improving accessibility to personalized guidance, AI-driven advising systems contribute to the sustainability objectives of educational institutions. This aligns with the broader goals of fostering sustainable practices within the academic environment.
  • Customization and flexibility: The integration of AI in academic advising allows for the development of customized advising solutions tailored to the specific needs and policies of individual institutions. This flexibility ensures that the advising experience is aligned with the unique educational offerings and values of each institution.

6. Limitations

Our study encountered several challenges. The most significant limitation is that ChatGPT’s training only included data available up to a certain year, not accounting for subsequent advancements in educational methods, technologies, or policies. Consequently, the model lacks the capability to provide insights on recent educational strategies or tools. Additionally, while the AI-derived recommendations were evaluated based on feedback from academic advising experts, their actual impact on educational outcomes remains unmeasured, leaving uncertainty regarding the effectiveness of these recommendations. Another issue is the responsiveness of the ChatGPT model to specific prompts, which leads to variability in outputs depending on the nuances of the input sentences. Despite our prompt design being based on an extensive analysis of various formats, alternative approaches might better harness the ChatGPT model for specific tasks. Finally, the differences in length and style between AI-generated and human-generated recommendations could potentially allow reviewers to distinguish between the two sources, possibly affecting their evaluations.

7. Conclusions

Academic advising plays a crucial role in fostering student success, though delivering tailored and effective guidance poses significant challenges. This research assessed the potential of employing ChatGPT to develop recommendations aimed at refining the precision of academic advising processes. The AI-crafted recommendations provided distinctive insights and were commended for their clear relevance, earning them moderate evaluations for effectiveness and acceptability but lower assessments for ease of practical application. Consequently, these recommendations from AI can serve as a valuable supplement to the enhancement of academic advising by pinpointing and facilitating the enhancement of advisory strategies. Additionally, they could provide advisors with a foundation to develop their own improved strategies for student support.
Furthermore, our research underscores the effectiveness of a hybrid advising model in which AI-generated recommendations, grounded in validated inclusion and exclusion criteria, supplement human expertise. This model preserves high levels of knowledge management, implementation, and practicality, and can be further enriched with ChatGPT’s detailed proposals, creating a robust support system for academic advising.
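To make the hybrid flow concrete, the sketch below shows one minimal way it could be wired together in Python: a rule-based step applies validated inclusion/exclusion criteria, and the resulting recommendation is packaged into an OpenAI Chat Completions request for elaboration. All names here (apply_rules, build_enrichment_request, the sample rule, and the model identifier) are illustrative assumptions, not the EDUC8EU implementation; the final network call is deliberately omitted.

```python
# Hypothetical sketch of the hybrid advising model described above.
# Rule names and criteria are illustrative, not the paper's actual rule base.

def apply_rules(student, rules):
    """Return recommendations whose inclusion criteria the student satisfies."""
    return [r["recommendation"] for r in rules if r["criterion"](student)]

def build_enrichment_request(scenario, base_recommendation):
    """Compose a Chat Completions payload asking the model to extend a
    validated rule-based recommendation with actionable detail."""
    return {
        "model": "gpt-4",  # assumed model name; substitute as appropriate
        "messages": [
            {"role": "system",
             "content": "You are an academic advisor. Extend the given "
                        "recommendation with concrete, actionable steps."},
            {"role": "user",
             "content": f"Scenario: {scenario}\n"
                        f"Recommendation: {base_recommendation}"},
        ],
    }

# Example: an inclusion rule loosely modeled on scenario S5 (internships).
rules = [{
    "criterion": lambda s: {"Python", "R"} <= set(s["languages"]),
    "recommendation": "Eligible for the bioinformatics internship track.",
}]

student = {"languages": ["Python", "R", "SQL"]}
base = apply_rules(student, rules)[0]
request = build_enrichment_request(
    "S5: Securing Internships in the Bioinformatics Sector", base)
# The payload could then be sent via the OpenAI API, e.g.
# client.chat.completions.create(**request), with the reply appended to the
# expert system's output for the advisor to review.
```

Keeping the rule evaluation separate from the prompt construction mirrors the division of labor in the model: the validated criteria remain authoritative, while the generative step only elaborates recommendations that have already passed them.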

Supplementary Materials

The following supporting information can be downloaded at: https://www.mdpi.com/article/10.3390/su16177829/s1.

Author Contributions

Conceptualization, O.I., N.S., K.K. and T.P.; methodology, N.S., K.K. and T.P.; validation, N.S., K.K. and T.P.; formal analysis, N.S., K.K. and T.P.; investigation, N.S., K.K. and T.P.; resources, N.S., K.K. and T.P.; data curation, N.S., K.K. and T.P.; writing—original draft preparation, N.S., K.K. and T.P.; writing—review and editing, N.S., K.K. and T.P.; visualization, N.S., K.K. and T.P.; supervision, N.S., K.K. and T.P.; project administration, N.S., K.K. and T.P.; funding acquisition, N.S., K.K. and T.P. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the INVEST project under the ERASMUS-EDU-2023-EUR-UNIV program (Project No.: 101124598, www.invest-alliance.eu, accessed on 1 July 2024), and the APC was also funded by the same project.

Institutional Review Board Statement

The INVEST project, supported by ERASMUS+ and H2020, has developed the “INVEST4EXCELLENCE Open Science Code”, which serves as a comprehensive framework for ethical research practices, transparency, and collaboration. Our research is aligned with these principles.

Informed Consent Statement

Not applicable.

Data Availability Statement

The AI-generated data used in this study are available. However, the human-generated data are not publicly available due to privacy considerations. Access to the human-generated data is restricted by the respective university’s regulations. Researchers interested in accessing the human-generated data may contact the corresponding author for inquiries or requests regarding data access.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Mian, S.H.; Salah, B.; Ameen, W.; Moiduddin, K.; Alkhalefah, H. Adapting Universities for Sustainability Education in Industry 4.0: Channel of Challenges and Opportunities. Sustainability 2020, 12, 6100. [Google Scholar] [CrossRef]
  2. Higgins, B.; Thomas, I. Education for Sustainability in Universities: Challenges and Opportunities for Change. Aust. J. Environ. Educ. 2016, 32, 91–108. [Google Scholar] [CrossRef]
  3. Su, J.; Yang, W. Unlocking the Power of ChatGPT: A Framework for Applying Generative AI in Education. ECNU Rev. Educ. 2023, 6, 355–366. [Google Scholar] [CrossRef]
  4. Kouatli, I. The Need for Social and Academic Responsibility Advisor (SARA): A Catalyst toward the Sustainability of Educational Institutes. Soc. Responsib. J. 2020, 16, 1275–1291. [Google Scholar] [CrossRef]
  5. Chan, Z.C.Y.; Chan, H.Y.; Chow, H.C.J.; Choy, S.N.; Ng, K.Y.; Wong, K.Y.; Yu, P.K. Academic Advising in Undergraduate Education: A Systematic Review. Nurse Educ. Today 2019, 75, 58–74. [Google Scholar] [CrossRef] [PubMed]
  6. Cardona, T.A.; Cudney, E.A. Predicting Student Retention Using Support Vector Machines. Procedia Manuf. 2019, 39, 1827–1833. [Google Scholar] [CrossRef]
  7. Iatrellis, O.; Kameas, A.; Fitsilis, P. EDUC8 Pathways: Executing Self-Evolving and Personalized Intra-Organizational Educational Processes. Evol. Syst. 2020, 11, 227–240. [Google Scholar] [CrossRef]
  8. Drake, J.K. The Role of Academic Advising in Student Retention and Persistence. CAS Stand. Context. Statement 2015, 16, 8–12. [Google Scholar] [CrossRef]
  9. Iatrellis, O.; Samaras, N.; Kokkinos, K. Towards a Capability Maturity Model for Micro-Credential Providers in European Higher Education. Trends High. Educ. 2024, 3, 504–527. [Google Scholar] [CrossRef]
  10. European Commission. A European Approach to Micro-Credentials: Final Report; European Commission: Brussels, Belgium, 2020. [Google Scholar]
  11. Maphosa, V.; Maphosa, M. Fifteen Years of Recommender Systems Research in Higher Education: Current Trends and Future Direction. Appl. Artif. Intell. 2023, 37, 2175106. [Google Scholar] [CrossRef]
  12. Iatrellis, O.; Kameas, A.; Fitsilis, P. Academic Advising Systems: A Systematic Literature Review of Empirical Evidence. Educ. Sci. 2017, 7, 90. [Google Scholar] [CrossRef]
  13. Kuhail, M.A.; Al Katheeri, H.; Negreiros, J.; Seffah, A.; Alfandi, O. Engaging Students with a Chatbot-Based Academic Advising System. Int. J. Hum. Comput. Interact. 2023, 39, 2115–2141. [Google Scholar] [CrossRef]
  14. OpenAI. Available online: https://openai.com/ (accessed on 23 July 2024).
  15. Bahrini, A.; Khamoshifar, M.; Abbasimehr, H.; Riggs, R.J.; Esmaeili, M.; Majdabadkohne, R.M.; Pasehvar, M. ChatGPT: Applications, Opportunities, and Threats. In Proceedings of the 2023 Systems and Information Engineering Design Symposium, SIEDS 2023, Charlottesville, VA, USA, 27–28 April 2023; Institute of Electrical and Electronics Engineers Inc.: Piscataway, NJ, USA, 2023; pp. 274–279. [Google Scholar]
  16. Iatrellis, O.; Stamatiadis, E.; Samaras, N.; Panagiotakopoulos, T.; Fitsilis, P. An Intelligent Expert System for Academic Advising Utilizing Fuzzy Logic and Semantic Web Technologies for Smart Cities Education. J. Comput. Educ. 2023, 10, 293–323. [Google Scholar] [CrossRef]
  17. Chen, L.-T.; Liu, L. Methods to Analyze Likert-Type Data in Educational Technology Research. J. Educ. Technol. Dev. Exch. 2020, 13, 39–60. [Google Scholar] [CrossRef]
  18. MacFarland, T.W.; Yates, J.M. Kruskal–Wallis H-Test for Oneway Analysis of Variance (ANOVA) by Ranks. In Introduction to Nonparametric Statistics for the Biological Sciences Using R; Springer: Cham, Switzerland, 2016. [Google Scholar]
  19. Bartko, J.J. The Intraclass Correlation Coefficient as a Measure of Reliability. Psychol. Rep. 1966, 19, 3–11. [Google Scholar] [CrossRef]
  20. Ozone, S.; Haruta, J.; Takayashiki, A.; Maeno, T.; Maeno, T. Students’ Understanding of Social Determinants of Health in a Community-Based Curriculum: A General Inductive Approach for Qualitative Data Analysis. BMC Med. Educ. 2020, 20, 470. [Google Scholar] [CrossRef] [PubMed]
  21. Feuerriegel, S.; Hartmann, J.; Janiesch, C.; Zschech, P. Generative AI. Bus. Inf. Syst. Eng. 2024, 66, 111–126. [Google Scholar] [CrossRef]
  22. Exploration of Career Development Paths for Counselors Empowered by ChatGPT. Adv. Vocat. Tech. Educ. 2023, 5, 8–14. [CrossRef]
  23. Caratiquit, K.D.; Caratiquit, L.J.C. ChatGPT as an Academic Support Tool on the Academic Performance among Students: The Mediating Role of Learning Motivation. J. Soc. Humanit. Educ. 2023, 4, 21–33. [Google Scholar] [CrossRef]
  24. Lekan, K.; Pardos, Z.A. AI-Augmented Advising: A Comparative Study of GPT-4 and Advisor-Based Major Recommendations. Proc. Mach. Learn. Res. 2024, 257, 85–96. [Google Scholar]
  25. Akiba, D.; Fraboni, M.C. AI-Supported Academic Advising: Exploring ChatGPT’s Current State and Future Potential toward Student Empowerment. Educ. Sci. 2023, 13, 885. [Google Scholar] [CrossRef]
  26. Haglund, J.H. Students Acceptance and Use of ChatGPT in Academic Settings; Uppsala Universitet: Uppsala, Sweden, 2023. [Google Scholar]
Figure 1. Dialogue with ChatGPT about “S1: Pursuing Pedagogical and Teaching Certification” academic scenario.
Figure 2. Scores for acceptance, clarity, relevance, practicality, and impact for recommendations generated by AI.
Figure 3. Scores for acceptance, clarity, relevance, practicality, and impact for recommendations generated by humans.
Figure 4. Modeling academic advising recommendations using SWRL rules.
Figure 5. Graphical tool for creating and editing SWRL rules that model the academic advising inclusion/exclusion criteria.
Figure 6. The initial recommendations produced by the EDUC8EU rule-based expert system can be extended through a conversation with ChatGPT using the OpenAI API.
Table 1. Selected academic advising scenarios and descriptions.

Scenario | Description
S1: Pursuing a Pedagogical and Teaching Certification | Some departments offer an optional, parallel program granting a Pedagogical and Teaching Certification, qualifying students to teach in schools. This certification equips them with essential teaching skills and methodologies, enhancing their career opportunities in education.
S2: Selecting a BSc Dissertation Topic in Machine Learning | Assisting in the selection of a dissertation topic in machine learning.
S3: Enrolling in a Micro-Credential Program in Database Management Systems | Recommending the pursuit of a micro-credential in database management systems for graduate students to supplement their qualifications and enhance career readiness in data management.
S4: Transitioning to an MBA After a Bachelor’s in Digital Systems | Advising on the strategic steps for transitioning from a Bachelor’s degree in digital systems to pursuing an MBA.
S5: Securing Internships in the Bioinformatics Sector | Providing expert guidance on securing internships within the bioinformatics sector.
S6: Assisting Students Struggling with Telecommunications Courses | Providing tailored support to students facing difficulties in telecommunications courses.
S7: Selecting an INVEST Bachelor Specialization | The INVEST university offers students from partner universities the opportunity to complete their final academic year at a partner institution, aligned with the ERASMUS student exchange framework, allowing them to receive a Bachelor’s specialization.
Table 2. Profile of five academic advising experts involved in the academic advising survey.

Characteristic | Details
Gender | Male: 4, female: 1
Academic Specialty | Computer science: 1, computer networks: 1, economics: 1, data science: 1, software engineering: 1
Academic Role | Professor and academic advisor: 5
Years of Experience in Academia | 18.5
Table 4. Mean evaluation scores (with standard deviations) for AI-generated and human-generated recommendations across the five dimensions.

Source | Impact (Mean, SD) | Practicality (Mean, SD) | Relevance (Mean, SD) | Clarity (Mean, SD) | Acceptance (Mean, SD)
AI | 2.95 (1.3) | 2.05 (0.8) | 3.65 (1.1) | 3.6 (1.0) | 2.65 (1.4)
Human | 3.25 (1.3) | 2.75 (0.9) | 3.75 (1.0) | 3.8 (0.5) | 3.0 (0.6)
Table 3. Recommendations and their evaluation scores.

Scenario | Source | Recommendation | Acceptance (Mean, SD) | Clarity (Mean, SD) | Relevance (Mean, SD) | Practicality (Mean, SD) | Impact (Mean, SD)
S5: Securing Internships in the Bioinformatics Sector | HU | Include students who possess adequate knowledge of the programming languages Python and R. | 4.8 (0.5) | 4.8 (0.5) | 4.8 (0.5) | 4.8 (0.5) | 4.8 (0.5)
S4: Transition to MBA After a Bachelor’s in Digital Systems | HU | Include students with a GMAT score of ≥700. | 4.4 (0.9) | 4.4 (0.6) | 4.8 (4.8) | 4.8 (0.5) | 4.8 (0.5)
S6: Students in Need of Support for Telecommunications Courses | HU | Implement a solution to establish a peer-tutoring program where PhD students specializing in telecommunications provide targeted tutoring sessions. | 4.4 (0.9) | 4.8 (0.5) | 4.8 (0.5) | 4.4 (0.6) | 4.6 (0.6)
S1: Pursuing a Pedagogical and Teaching Certification | AI | Include conditions where the number of courses taken each semester does not exceed a maximum credit limit, ensuring students can handle the additional workload of the parallel program. | 3.2 (1.2) | 4.6 (0.6) | 4.6 (0.6) | 4.2 (0.8) | 4.6 (0.6)
S7: Selection of an INVEST Bachelor Specialization | HU | Include students who must have a fluent level of English. | 4.2 (0.8) | 4.8 (0.5) | 4.2 (0.5) | 3.8 (1.6) | 4.0 (1.0)
S6: Students in Need of Support for Telecommunications Courses | AI | Include students who have not met the prerequisite knowledge in mathematics (including calculus and linear algebra) or physics (especially electromagnetism). | 2.2 (0.9) | 4.8 (0.5) | 4.6 (0.6) | 3.8 (1.1) | 4.4 (0.6)
S3: Enrollment in a Micro-Credential Program in Database Management Systems | HU | Exclude students who do not have a basic understanding of how computers work, including knowledge of hardware, software, and operating systems. | 2.2 (0.9) | 4.2 (0.5) | 4.6 (0.6) | 4.2 (0.5) | 4.2 (0.5)
S4: Transition to MBA After a Bachelor’s in Digital Systems | AI | Include students who have relevant work experience. | 2.2 (0.9) | 4.6 (0.6) | 4.4 (0.6) | 3.8 (0.9) | 4.2 (0.5)
S3: Enrollment in a Micro-Credential Program in Database Management Systems | AI | Exclude students who lack fundamental knowledge of database languages, such as SQL. | 3.4 (0.9) | 4.4 (0.6) | 4.2 (0.5) | 3.8 (1.6) | 3.4 (0.9)
S1: Pursuing a Pedagogical and Teaching Certification | HU | Include students who have passed at least two-thirds of the core program of study courses. | 2.6 (1.5) | 4.8 (0.5) | 4.6 (0.6) | 3.8 (0.9) | 3.0 (1.0)
S3: Enrollment in a Micro-Credential Program in Database Management Systems | HU | Exclude students who do not have a basic understanding of arrays, linked lists, stacks, queues, trees. | 3.2 (1.0) | 4.6 (0.6) | 4.4 (0.6) | 3.0 (1.0) | 3.4 (1.4)
S2: Selecting a BSc Dissertation Topic in Machine Learning | HU | Include students who have experience with machine learning tools such as WEKA and Orange. | 3.4 (1.4) | 4.4 (0.6) | 4.8 (0.5) | 3.2 (0.9) | 2.6 (1.3)
S5: Securing Internships in the Bioinformatics Sector | HU | Include students familiar with machine learning algorithms for analyzing and interpreting complex biological datasets. | 2.0 (1.5) | 3.6 (1.3) | 4.2 (0.5) | 3.8 (1.6) | 4.0 (1.0)
S5: Securing Internships in the Bioinformatics Sector | AI | Include the requirement that an understanding of database management systems is essential for working with large biological datasets stored in databases. | 2.4 (1.5) | 4.4 (0.6) | 4.2 (0.8) | 2.8 (0.9) | 3.4 (1.4)
S7: Selection of an INVEST Bachelor Specialization | AI | Include students who have demonstrated strong teamwork and collaboration skills through participation in group projects, extracurricular activities, or professional experiences. | 3.2 (1.0) | 3.8 (1.6) | 3.8 (1.6) | 2.6 (1.3) | 3.4 (1.3)
