Review

Recommender Systems for Teachers: A Systematic Literature Review of Recent (2011–2023) Research

by Vissarion Siafis 1,*, Maria Rangoussi 1 and Yannis Psaromiligkos 2
1 Department of Electrical and Electronics Engineering, University of West Attica, Egaleo, GR-12241 Athens, Greece
2 Department of Business Administration, University of West Attica, Egaleo, GR-12241 Athens, Greece
* Author to whom correspondence should be addressed.
Educ. Sci. 2024, 14(7), 723; https://doi.org/10.3390/educsci14070723
Submission received: 20 May 2024 / Revised: 21 June 2024 / Accepted: 1 July 2024 / Published: 3 July 2024
(This article belongs to the Section Technology Enhanced Education)

Abstract

Recommender Systems (RSs) have recently emerged as a practical solution to the information overload problem users face when searching for digital content. In general, RSs provide their respective users with specialized advice and guidance in order to make informed decisions on the selection of suitable digital content. This paper is a systematic literature review of recent (2011–2023) publications on RSs designed and developed in the context of education to support teachers in particular—one of the target groups least frequently addressed by existing RSs. A body of 61 journal papers is selected and analyzed to answer research questions focusing on experimental studies that include RS evaluation and report evaluation results. This review is expected to help teachers in better exploiting RS technology as well as new researchers/developers in this field in better designing and developing RSs for the benefit of teachers. An interesting result obtained through this study is that the recent employment of machine learning algorithms for the generation of recommendations has brought about significant RS quality and performance improvements in terms of recommendation accuracy, personalization and timeliness.

1. Introduction

Recommender Systems or Recommendation Systems (RSs) were developed in order to address the accumulation of large volumes of diverse data in digital form, as a result of advances in Internet and web/cloud technologies. The need to store, process and analyze such data has arisen in fields as diverse as e-commerce, e-health, e-entertainment, e-tourism, the recommendation of personalized content (e-news, webpage recommenders, and e-mail filters), the provision of services (financial services, life insurance, real estate, job searching and recruiting) and e-learning, e.g., [1,2,3,4]. The motivation behind the rapid growth of RSs during the last two decades came from the need to respond to a double challenge: (i) to provide personalized advice and directions to users of diverse profiles and interests searching for specific digital content and (ii) to provide specialized software tools to professionals and other interested parties for informed decision making [2]. As a result, an increasing number of professionals and decision-making bodies resort to the services of RSs to cope with information overload. Indeed, as reported by Souabi et al. [3], increased user interest in RSs has been evident since 2017.
An efficient RS relies on the one hand on access to various types of data and information, and on the other hand on all available information regarding the profile and history of the user (past activities, choices, decisions, etc.) possibly stored in digital databases. The goal of a successful RS is to achieve an optimal match between the two—not once, but every time the user asks for a recommendation. As soon as a recommendation is available, the user is free to examine it and adopt or reject it; in that sense, the RS offers counsel rather than imposing a choice. Furthermore, the user may contribute explicit or implicit feedback to the RS. User decisions or activities in response to a recommendation as well as explicit user comments and feedback on a given recommendation are valuable data. RSs store such feedback in their knowledge base and use it to improve their recommendation generation algorithms for subsequent user requests [4]. Despite the employment of carefully selected algorithms and sets of rules, current RSs may provide an overwhelming number of recommendations that are not always optimal. Such an ‘over-reaction’ of an RS to user requests may backfire: users may feel that the RS threatens their independence and autonomy and eventually reduces their well-being [5].
RS development is a multi-disciplinary task involving the fields of machine learning, data mining, statistics, human–computer interaction, marketing, decision-support systems, adaptive user interfaces, etc. [6]. The employment of machine learning algorithms, in particular, has dramatically increased RS performance and the quality of results, e.g., [7,8].
In the field of education, the adoption of RSs in web-based environments and e-learning platforms has greatly promoted open and distant learning by offering valuable services to multiple target groups, such as learners, teachers, content authors, administrators and policy and decision makers. RSs for learners/students constitute the most frequent and popular class [3,9,10], followed by RSs for OERs [11], ontology-based RSs for e-learning [12,13,14], RSs for MOOCs [15], RSs for mobile learning, social networks or cloud computing environments [16,17], and deep learning-based RSs that guide researchers and professionals in understanding trends and challenges in their respective areas [18,19,20]. In contrast, the number of RSs that focus on recommendations for teachers or instructors remains limited. Sandoussi et al. [11] report that out of the total research works on RSs for OER that they reviewed, 64% were aimed at learners, while only 36% were aimed at teachers. In certain cases, RSs are aimed at the teacher side, but the results are evaluated through the student side, as in [8], where recommendations to improve teachers’ strategies are evaluated through student learning experiences. The fact that RSs for learners account for nearly double the percentage of RSs for teachers cannot be attributed only to the different sizes of the respective target groups. The need to boost student motivation for learning, which often fades in e-learning contexts, is certainly one reason for this imbalance [3]. On the other hand, in their work on RSs for MOOCs, Khalid et al. [15] clearly conclude that there is an urgent need for RSs for teachers. Deschênes [10] stresses the value of peer recommendations and the need to involve learners and/or teachers in the process of RS design and development right from the early stages. The importance of the proper evaluation of RS performance is emphasized in Erdt et al. [21] as a tool for their improvement. As the authors pertinently point out, accurate predictions or ‘good’ recommendations do not necessarily correspond to high levels of user satisfaction. According to Da’u and Salim [19], RSs of increased performance can be obtained through (i) the incorporation of more data on user profile and social behavior, (ii) the adoption of machine learning algorithms with ‘deeper’ architectures (more layers) and (iii) the optimization of these algorithms for the generation of recommendations. These results are only a few highlights of extensive and intense research activity on RSs, following various directions and perspectives. The important role of machine learning algorithms in the generation of recommendations is evident; yet, the evaluation of the quality of recommendations produced by RSs remains an open challenge that will determine the future of the field.
Along these lines, the present systematic literature review (SLR) focuses on RSs for teachers, i.e., RSs that aim to aid and support teachers in making the various choices and decisions necessary to improve the process and outcomes of (online) education. The main objective of this study is to shed light on the status of current research on (and the development of) RSs for teachers and to provide new researchers in the field with insights to allow them to take meaningful directions in the development of innovative RSs for teachers. The perspective adopted here encompasses all education settings and contexts, be they traditional face-to-face in-class, fully online (e.g., MOOCs, open and distant learning, etc.) or blended (e.g., face-to-face in-class learning with asynchronous online support, synchronous online learning or the various blended learning plans, such as flipped classroom, etc.). The research works selected through a systematic process are analyzed as to their added value along the following major axes that constitute the research questions of this review at the same time:
  • What is the extent of interest in research on RSs for teachers, as expressed by the volume and other features of recent publications?
  • What are the research aims, research questions and approaches adopted for the design and development of RSs?
  • In which educational settings or contexts are RSs employed and evaluated?
  • What are the methods, algorithms and tools employed for the generation of recommendations?
  • What are the RS quality evaluation methods and tools and the evaluation results obtained?
  • What is the impact of the use of RSs and their endorsement by researchers and teachers?
In comparison to existing relevant reviews/surveys on RSs, such as [1,3,7,10,11,12,13,14,15,16,17,18,19,20,21], the current review is differentiated across several axes: focus (RSs for teachers), extent (longer than a decade), timeliness (up to 2023), database coverage (four major databases), the quality of sources (journal papers only) and research questions (research aims and objectives, RS evaluation methods and tools and the classification of machine learning algorithms employed to generate the recommendations). The current study constitutes the first step towards the authors’ research plan to design and develop an innovative RS for teachers; subsequent steps are therefore expected to benefit from discerning possible gaps or under-researched aspects in existing research on RSs. Furthermore, this review is expected to be useful to all parties interested in RS development and use. It aspires to support and aid new researchers in the field, education practitioners and professionals, education administrative and decision-making bodies and learning content authors and developers, to help them save time, effort and research resources by offering them an overview of the field along with its trends and research opportunities, and to lead them to make informed decisions when planning their research steps in the field.

2. Research Methodology and Selection Procedure

2.1. Research Methodology

The methodology adopted in this study is that of SLR, following the steps proposed (i) by Pai et al. [22], for the field of Medical Clinical Research, and (ii) by Kitchenham [23], for the field of Software Engineering, with appropriate adaptations [24,25]. The major steps include research planning, conducting and reporting; the PRISMA-S methodology [26] is adopted for research reporting.
The current SLR covers, practically, a little more than the last decade (years 2011–2023)—a period that has at the same time witnessed the proliferation of RS-related publications and established the critical role of machine learning algorithms in RSs. The mixed nature of RSs in both computer science/engineering and educational/social sciences fields led to the selection of the following bibliographic databases for article searching and retrieval: (i) ERIC-Education Resources Information Center (https://eric.ed.gov/ (accessed on 15 March 2024)), (ii) Scopus (https://www.scopus.com/ (accessed on 15 March 2024)), (iii) Web of Science (https://www.webofscience.com/ (accessed on 15 March 2024)) and (iv) Science Direct (https://www.sciencedirect.com/ (accessed on 15 March 2024)). The former two databases were selected thanks to (i) their advanced organization and filtering functionalities, which greatly facilitate the search and retrieval tasks, (ii) their selective policy and yet unrestricted coverage of publication sources, and (iii) the statistics readily available, as well as their direct linking to the referenced sources. The latter two databases were added for the sake of more complete coverage: Web of Science is strongly selective and yet it includes some sources not present in the first two bases; similarly, Science Direct has a considerable, but certainly not complete, overlap with Scopus.

2.2. Selection Procedure

The query formed in order to search in the four selected databases was based on either of the two essential keywords “Recommendation System(s)” OR “Recommender System(s)”. In the cases of Scopus, Web of Science and Science Direct, these were logically ‘AND’-ed with the keywords “Teachers OR Educators”, because Scopus, Web of Science and Science Direct cover a broad spectrum of disciplines besides education. This was not necessary with ERIC, however, since ERIC is a purely educational database. In all cases, the search was performed in the {title, abstract, keywords} triplet; the year span was set to 2011–2023 and the source type was set to journal papers. A total of N0 = 599 articles were thus retrieved from ERIC (191 articles), Scopus (138 articles), Web of Science (82 articles) and Science Direct (188 articles). After the removal of 117 duplicates, a body of N1 = 482 unique articles remained for screening, as detailed in Table 1.
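The exact query syntax differs across the four databases; purely as an illustration of the boolean logic described above, a minimal sketch follows (the string layout and the helper function are assumptions for illustration, not the literal queries submitted to each database):

```python
# Illustrative sketch of the boolean search logic described above; the actual
# field codes and syntax differ across ERIC, Scopus, Web of Science and Science Direct.

CORE = ('("Recommendation System" OR "Recommendation Systems" '
        'OR "Recommender System" OR "Recommender Systems")')
AUDIENCE = '("Teachers" OR "Educators")'

def build_query(database: str) -> str:
    """Return the keyword query for a given database.

    ERIC indexes only educational sources, so the audience terms are omitted there;
    the other three databases span many disciplines and need the extra AND clause.
    """
    if database.lower() == "eric":
        return CORE
    return f"{CORE} AND {AUDIENCE}"

# Search restricted to title/abstract/keywords, years 2011-2023, journal papers only.
for db in ["ERIC", "Scopus", "Web of Science", "Science Direct"]:
    print(db, "->", build_query(db))
```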
The first screening of these N1 = 482 articles was based on the triplet {title, abstract, keywords} and the use of the following exclusion criteria, empirically set by the authors:
  • Not a journal paper (e.g., article in conference proceedings, book, patent, technical report, thesis, etc.);
  • Not a primary study (e.g., review or meta-analysis);
  • Not referring to e-learning or distant learning;
  • The RS involved is not addressed to teachers or educators;
  • Not an English-language publication.
A total of 302 articles were thus excluded, while N2 = 180 articles were forwarded to the second screening (see Table 2 for details). The first two authors performed the 1st screening independently. Inter-rater reliability was calculated as k = 0.875 (Cohen k index). The third author resolved conflicts in his capacity as a field expert. The second screening was performed using the same exclusion criteria, yet applied to the full texts of the screened articles; 119 more articles were thus excluded, leaving a set of N3 = 61 articles for further analysis across the research questions (see Table 2 for details). The 2nd screening was also performed independently by the first two authors, with a higher inter-rater reliability (k = 0.931); again, the third author resolved the conflicts. The whole procedure is presented in steps following PRISMA in Figure 1. The number of articles (61) eventually retained for analysis through the systematic methodology described above is considered adequate for the purposes of the current review, because it is the net outcome of (a) the definition of a valid area of interest (RSs for teachers), and (b) strict conformance to a recognized SLR methodology (PRISMA). It is evident that this specific area of interest is not over-researched, and consequently it constitutes a field worthy of further investigation.
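Cohen’s kappa measures the agreement between two raters beyond what chance alone would produce. A minimal sketch of how such an index can be computed from two raters’ include/exclude decisions is given below; the decision lists are hypothetical placeholders, not the actual screening data:

```python
def cohen_kappa(rater_a, rater_b):
    """Cohen's kappa for two raters' binary include/exclude decisions."""
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    labels = set(rater_a) | set(rater_b)
    # Observed agreement: proportion of items on which the two raters coincide.
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Expected agreement by chance, from each rater's marginal label frequencies.
    p_e = sum((rater_a.count(l) / n) * (rater_b.count(l) / n) for l in labels)
    return (p_o - p_e) / (1 - p_e)

# Hypothetical screening decisions for a handful of articles (1 = include, 0 = exclude).
rater_1 = [1, 0, 1, 1, 0, 0, 1, 0]
rater_2 = [1, 0, 1, 0, 0, 0, 1, 0]
print(round(cohen_kappa(rater_1, rater_2), 3))  # -> 0.75 for this toy data
```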
On the other hand, the current review has inevitably certain limitations that come mainly as a result of practical considerations and of the adopted methodology. These include the year span of the review, the English language of the sources, the restriction to journal publications and also the restriction to published research. Certainly, a number of interesting and fruitful research projects that have not resulted in a journal publication lie out of the reach of our methodology. The same holds true for works produced in non-English-speaking parts of the world and published in languages other than English. These limitations are important, especially when global coverage of the subject is sought.

3. Analysis and Results

3.1. RQ1: What Is the Extent of Interest in Research on RSs for Teachers, as Expressed by the Volume and Other Features of Recent Publications?

3.1.1. Evolution of the Number of Publications on RSs for Teachers over Time

The popularity of research on RSs for teachers over the 2011–2023 span of this review was deduced from the number of relevant publications. In Figure 2, the publication counts per year, along the time span of this review, are shown (i) for the total number of N1 = 482 initially retrieved unique articles (blue), (ii) for the N2 = 180 articles retained after the first screening (red) and (iii) for the N3 = 61 articles retained after the second screening (green). Blue and red sequences exhibit an almost linearly increasing trend, meaning that the interest in RS-related research in education in general is steadily increasing. The green sequence, however, would more accurately be described as that of sustained interest, with no clearly increasing or decreasing trend. It represents research interest in RSs for teachers in particular, given that the exclusion criterion with the highest count in the second screening in Figure 1 (91 out of the 119 exclusions) was “4. RS not addressed to teachers/educators”.

3.1.2. Number of Authors per Publication

The complexity of the relevant research can be deduced from the number of authors that had to collaborate in order to produce publishable research results. As indicated by the results in Table 3, the majority of publications are authored by three (27.87%), two (23.31%) or four (19.67%) authors; 10 out of the 61 publications (16.39%) have more than five authors, while a single publication (1.64%) has one author.

3.1.3. Journals That Host Relevant Publications

Regarding the journals that host the reviewed publications, 45 different sources were identified. These are listed in Table 4, in descending order of frequency (number of hosted publications in each journal). Education and Information Technologies and IEEE Transactions on Learning Technologies head this list with five publications each, followed by Expert Systems with Applications and Soft Computing, each with three publications, and by a set of four journals hosting two publications each (Information Sciences; International Journal of Information and Communication Technology Education; International Journal of Emerging Technologies in Learning; and Information Processing & Management). Thirty-seven journals hosting one publication each follow.

3.1.4. Geographic Distribution of Research on RSs for Teachers

The geographic distribution of research on RSs for teachers was deduced from the affiliation of the first author of each of the reviewed publications. Table 5 shows the cumulative results at the continent scale, while Figure 3 and Figure 4 further detail the results at the country scale. Asia, with 20 publications (32.79%), heads the list in Table 5, closely followed by Europe with 19 publications (31.14%). The Americas account for 29.51% of the publications; Oceania and Africa close the list with lower representation rates. As revealed by the results per country in Figure 4, the top place of Asia is held thanks to China (nine publications or 14.75%), while the second place of Europe is held thanks to Spain (seven publications or 11.47%). Regarding the Americas, the USA is the top country, with seven publications (11.47%), followed by Brazil with four publications (6.55%).

3.2. RQ2: What Are the Aims of the Recommendations and the Aims of Research on RSs, as Expressed by the Respective Research Questions?

3.2.1. The Identification of the Major Aims of the Generated Recommendations

The aim of any recommendation is an essential characteristic of a research study on RSs. The qualitative (content) analysis of the recommendation aims explicitly or implicitly adopted by the researchers in the reviewed publications has led to the identification of five different classes (major recommendation aims), as listed in Table 6. The aim of improving teaching practices heads the list (32.79%): this result verifies that teaching quality improvement is a major issue and challenge that teachers face [27]. Personalized recommendations for multiple categories of users (including teachers) follow (24.59%), while personalized search/recommendations for learning objects account for 22.95% of publications. A smaller number of publications (16.39%) focus on personalized recommendations for teachers in particular, while a small number of articles are aimed at other tasks, such as personalized recommendations for social navigation (3.28%).

3.2.2. Aims of Research on RSs, as Expressed through the Respective Research Questions

Another qualitative characteristic of the reviewed publications refers to the research aims they pursue, as expressed by their respective research questions. As a result of qualitative (content) analysis, five major research aims are identified. Certain publications pursue more than one of these aims, as shown in Table 7. The top three aims are (i) the improvement of RS efficiency/quality/accuracy, pursued by 21 of the 61 publications (34.42%), (ii) personalization in the RS, with 18 studies (29.51%), and (iii) technology-specific RSs, with 17 studies (27.87%). The last two aims are far less frequent: affective/emotional aspects in RSs are researched by three publications (4.92%) and RSs based on teachers’ ICT profiles/competences/skills/attitudes are represented by two publications (3.28%).

3.3. RQ3: In Which Educational Settings or Contexts Are RSs Employed and Evaluated?

Regarding the educational settings or contexts where RSs are employed and evaluated, four major settings are identified: (i) educational environments, including web-based learning environments, Learning Management Systems (LMSs), Learning Content Management Systems (LCMSs), Learning Activity Management Systems (LAMSs) and Social Learning Platforms, (ii) decision-support systems or frameworks, (iii) educational tool collections and (iv) Repositories. The popularity of each setting in general is shown in Table 8a. The results reveal the clear precedence of educational environments (37 out of the 61 papers, or 60.65%), followed by decision-support systems or frameworks (19 out of the 61 papers, or 31.14%). Educational tool collections are used in 15 papers (24.59%), while Repositories are used in 10 papers (16.39%).
The majority of RSs are designed for hybrid use in more than one setting, however—indeed, the counts in Table 8 add up to more than 61 papers. For a more detailed view, Table 8b tabulates the results of RS usage in a single setting (upper zone) or in more than one setting (lower zones). Regarding papers where RSs are used in a single setting, the results in the upper zone of Table 8b are headed by educational environments (29 out of the 61 cases, or 47.54%), followed by decision-support systems or frameworks (9 out of the 61 cases, or 14.75%). Educational tool collections alone account for only three cases (4.92%), while Repositories alone do not constitute a setting of choice. In the cases of RSs intended for hybrid usage, depicted in the lower zones of Table 8b, educational tool collections constitute the most frequent ‘ingredient’ in hybrid cases, followed by decision-support systems or frameworks.

3.4. RQ4: What Are the Filtering Methods, Algorithms and Tools Employed for the Generation of Recommendations?

3.4.1. Filtering Methods Adopted in the Design and Development of the RS

The reviewed publications employ one of the three well-known filtering methods for the design and development of their RS: (i) collaborative filtering—the RS relies on user behavior or the user evaluation of the proposed objects and recommends objects that similar users have liked/adopted [7,89,90]; (ii) content-based filtering—the RS relies on the identification of features the user has already shown a preference for and recommends similar objects [91]; and (iii) hybrid filtering—the RS relies on more than one method, e.g., collaborative, content-based and/or knowledge-based filtering (knowledge-based Recommendation Systems, e.g., in Aggarwal [2], among others). The publication counts in Table 9 show that collaborative filtering is the most popular approach (26 out of the 61 cases, or 42.62%), while content-based filtering is far less frequent (13 out of the 61 cases, 21.31%), possibly because the introduction of new users is not straightforward in that case, while content analysis poses numerous limitations [90]. Hybrid filtering, although not a homogeneous class, is adopted in over one-third of cases (22 out of the 61 cases, or 36.07%).
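As an illustration of the collaborative filtering idea, the most frequent choice in Table 9, the following minimal sketch scores an unrated learning object for a teacher using the similarity-weighted ratings of other teachers; the rating matrix and the choice of cosine similarity are assumptions for illustration only, not a description of any reviewed system:

```python
import numpy as np

# Hypothetical teacher x learning-object rating matrix (0 = not yet rated).
R = np.array([
    [5, 4, 0, 1],
    [4, 5, 0, 2],
    [1, 0, 5, 4],
], dtype=float)

def cosine(u, v):
    """Cosine similarity between two rating vectors."""
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-9))

def predict(R, user, item):
    """Similarity-weighted average of the other users' ratings for one item."""
    num, den = 0.0, 0.0
    for other in range(R.shape[0]):
        if other == user or R[other, item] == 0:
            continue  # skip the target user and users who have not rated the item
        s = cosine(R[user], R[other])
        num += s * R[other, item]
        den += abs(s)
    return num / den if den else 0.0

# Predicted rating of teacher 0 for the still-unrated object 2.
print(round(predict(R, user=0, item=2), 2))
```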

3.4.2. Algorithms and Tools Employed for the Generation of the Recommendations

Algorithms constitute the core mechanism for the generation of a recommendation. The selection and employment of the appropriate algorithm, therefore, has a direct impact on the quality of service provided by an RS. It is not surprising, consequently, that considerable research efforts are allocated to this direction. As mentioned in the Introduction, the last decade has witnessed the adoption of machine learning/deep learning algorithms in RS-related research. The relevant results are shown in Table 10. Given the large variety of algorithms employed, and the tools internally used by these algorithms, the publications in Table 10 are grouped under three categories: (A) those using supervised learning algorithms, (B) those using unsupervised learning algorithms and (C) those not disclosing any information about the algorithm(s) employed. Let it be noted that the use of more than one algorithm is typical in the reviewed publications, as shown in Table 10.
The general picture, as depicted in the three header rows, (A), (B) and (C), in Table 10, clearly favors supervised learning algorithms. Within the 61 reviewed publications, the algorithms in the supervised learning category were used 78 times out of a total of 111 times (70.27%), while those in the unsupervised learning category were used 21 times in a total of 111 times (18.92%). Finally, in 12 out of the total of 111 times (10.81%), the authors do not report on the algorithms employed.
The results of a closer inspection within the supervised/unsupervised categories are also detailed in Table 10. The following groups of algorithms are identified in the supervised category (percentages refer to the 111 total algorithm uses): ranking algorithms (17 uses, or 15.32%), Text Mining–NLP algorithms (16 uses, or 14.41%), Tree and Graph algorithms (16 uses, or 14.41%), ANN and Factorization algorithms (11 uses, or 9.91%), classification algorithms (7 uses, or 6.31%), Association Rule algorithms (5 uses, or 4.51%), filtering algorithms (3 uses, or 2.70%), Evolutionary Computing algorithms (2 uses, or 1.80%) and Meta-Algorithms, such as Adaboost (1 use, or 0.90%).
In the unsupervised category, the groups identified are the following (again, percentages refer to the 111 total algorithm uses): the K-means family of algorithms (13 uses, or 11.72%), other clustering/grouping algorithms (4 uses, or 3.60%) and model-driven–performance criterion optimization algorithms (4 uses, or 3.60%).
Furthermore, a closer inspection reveals that several different tools, such as distance measures and/or similarity measures, are employed within the above-mentioned algorithms. The following measures are used in 13 different papers (the numbers of papers follow in parentheses): the Fisher algorithm (1), Euclidean Distance (4), Damerau-Levenshtein Distance or Edit Distance (for strings) (1), the Manhattan Distance (2), Cosine similarity (2), Pearson Correlation (4), the NJWDE algorithm (1) and the Tanimoto algorithm (1). Notably, the majority of papers (45 out of 61 or 73.77%) do not refer explicitly to the measure used to quantify distance or similarity.
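For reference, the most frequently reported measures can be computed directly from two rating vectors, as in the following minimal sketch (the vectors are hypothetical):

```python
import numpy as np

# Two hypothetical rating vectors over the same set of items.
x = np.array([5.0, 3.0, 4.0, 1.0])
y = np.array([4.0, 2.0, 5.0, 2.0])

euclidean = np.linalg.norm(x - y)                          # straight-line distance
manhattan = np.abs(x - y).sum()                            # sum of absolute differences
cosine = x @ y / (np.linalg.norm(x) * np.linalg.norm(y))   # angle-based similarity
pearson = np.corrcoef(x, y)[0, 1]                          # linear correlation of ratings

print(f"Euclidean={euclidean:.3f}, Manhattan={manhattan:.3f}, "
      f"cosine={cosine:.3f}, Pearson={pearson:.3f}")
```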
At a more abstract level, the machine learning algorithms identified and categorized in Table 10 are employed in order to address and solve the basic problems of prediction, classification, clustering, detection and identification. In fact, one or more of these problems is expected to arise in any data processing and analysis task. At this level, the results are shown in Table 11. Prediction is the most common problem, addressed in 28 out of the 61 studies (45.90%), followed by classification (25 studies, or 40.98%), identification (22 studies, or 36.06%), clustering (16 studies, or 26.23%) and detection (9 studies, or 14.75%). It is typical in the reviewed publications to address more than one of these problems through RSs. A closer look reveals that 11 studies address both classification and prediction problems, 6 studies address classification and identification problems, 5 studies address clustering and identification problems and other problem pairs are present in lower frequencies.
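To make the prediction framing concrete, the following minimal sketch casts rating prediction as low-rank matrix factorization, one member of the factorization family counted in Table 10; the interaction matrix is hypothetical and, for simplicity, unobserved entries are coded as zero and treated as implicit feedback, which real systems handle more carefully:

```python
import numpy as np
from sklearn.decomposition import NMF

# Hypothetical teacher x resource interaction matrix (0 = no interaction yet).
R = np.array([
    [5, 4, 0, 1, 0],
    [4, 5, 0, 2, 1],
    [1, 0, 5, 4, 5],
    [0, 1, 4, 5, 4],
], dtype=float)

# Factor R into two low-rank non-negative matrices; their product yields
# predicted scores, including for the zero (unobserved) cells.
model = NMF(n_components=2, init="nndsvda", max_iter=500, random_state=0)
W = model.fit_transform(R)   # teacher factors
H = model.components_        # resource factors
R_hat = W @ H

# Recommend, for each teacher, the unseen resource with the highest predicted score.
for t in range(R.shape[0]):
    unseen = np.where(R[t] == 0)[0]
    best = unseen[np.argmax(R_hat[t, unseen])]
    print(f"teacher {t}: recommend resource {best} (score {R_hat[t, best]:.2f})")
```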

3.5. RQ5: What Are the RS Quality Evaluation Methods and Tools and the Evaluation Results Obtained?

3.5.1. Research Methodology (Experimental Plan) Used for Evaluation of the Proposed RS

The reviewed publications were analyzed according to the methodology (experimental plan) adopted for the testing and evaluation of the proposed RS in each of them. The major experimental setups identified are (i) pure experiments, (ii) quasi-experiments and (iii) case studies. As can be seen in Table 12, the majority of the reviewed research works (47 out of the 61, or 77.05%) resort to quasi-experimental setups for the evaluation of the proposed RS. This preference may be ascribed to the flexibility of the quasi-experimental setup that does not require an equivalent control group or a randomized sample selection procedure (convenience sampling is a typical choice in quasi-experiments). Case studies are the second most frequent category, with 8 out of 61 cases (13.11%), followed by only four pure experiments (6.56%). A positive result is that only 2 out of the total 61 research works (3.28%) do not include an evaluation phase or do not report on the method employed for evaluation.

3.5.2. The Characteristics of the Sample Used for Evaluation of the Proposed RS

The reviewed publications were analyzed regarding (i) the sample sizes of users involved in the evaluation of the proposed RS and (ii) the types and volumes of the objects recommended to those users. Three different classes of individuals are identified in the role of users of the proposed RS for evaluation purposes: ‘teachers’, ‘students’ and ‘users in general’. These are included in the current review, despite the keywords ‘teachers’ or ‘educators’ being applied during article selection, because certain publications use mixed samples. The ‘users in general’ class involves teachers and students, among other users. Table 13 presents the results across sets of empirically defined, increasing intervals to accommodate the sample size span of the reviewed publications. Regarding the objects recommended to users, these are learning objects and movies, or similar objects (videos, etc.). Table 14 shows the results across sets of empirically defined, increasing intervals. Figure 5 illustrates results from both Table 13 and Table 14. It is worth noticing in Table 13 that ‘teachers’ peaks at the first interval (smaller groups of 1–20 people) while ‘students’ as well as ‘users in general’ (a class that includes students) both peak at the first (1–40 people) and the last interval (groups of 80–140 people). On the other hand, ‘Learning objects’ peaks at the first interval (recommendations refer to pools of 1–500 objects) while ‘movies, etc.’ peaks at much larger pools (more than 3500 items in the pool). These results indicate the diversity of the setups in which RSs are used and evaluated.

3.5.3. RS Evaluation Results Reported in the Reviewed Publications

RS evaluation results, as these are reported in the respective publications, are of interest as they concisely describe the effectiveness of the RS paradigm. The results in Table 15 show that the majority of research studies (47 out of 61 or 77.05%) report positive results from the use of the proposed RS, while only 14 out of 61 studies (22.95%) report neutral results. Interestingly, no study reports negative results.

3.6. RQ6: What Is the Impact of the Use of RS and Their Endorsement by Researchers and Teachers?

The Impact of Research on RSs as Expressed by Publication Citations per Country and per Institution
The impact of research on RSs is deduced from the number of citations the relevant publications receive by peer researchers. The citation counts for the present analysis come from the database the corresponding publication was retrieved from. A total of 2262 citations were found for the 61 reviewed publications (an average of 37.1 citations per publication; counts as of May 2024).
At the level of the first author’s country of affiliation, the results are given in Figure 5. The countries are shown in decreasing order of citation count; 29 different countries are represented. Spain heads the list with 452 citations, followed by the USA (332), Italy (187), Mexico (157), Croatia (143), China (132), Brazil (120) and Greece (113). The remaining 21 countries follow with lower citation counts.
At the level of the first author’s institution (university, institute, etc.), the results are given in Figure 6. The institutions are shown in decreasing order of citation count; of the 54 different institutions represented, the 36 with 10 citations or more are shown in Figure 6 for practical purposes. The University of Cordoba heads the list with 209 citations, closely followed by George Mason University with 197 citations. University Roma III is next with 187 citations, followed by the University of Rijeka with 143 citations; the Autonomous University of Yucatan had 139 citations and the Universidad Nacional de Educación a Distancia had 136 citations. Forty-eight more institutions follow, with decreasing citation counts.

4. Discussion

Through the results of analysis presented in the previous sections, RSs for teachers emerge as an intensively active area of research and development within the field of educational technology. Indeed, publication counts keep increasing, especially in the most recent years following 2018, while novel RSs keep coming up. An initial observation is that this field’s popularity is on the rise: its ‘footprint’ is already conspicuous, as is numerically documented in Figure 2; furthermore, it is expected to increase in the near future, as an extrapolation beyond year 2023 of the trends illustrated in Figure 2 would indicate. This view is corroborated by the answers to the first research question on the popularity of RSs for teachers. It was found that 45 different journals have hosted relevant research publications within the time span of this review; each publication is co-authored by a median number of three authors; these authors come from countries across all continents, headed by Asia and Europe at the continent scale or by China, Spain and the USA at the country scale.
Regarding the aims of the generated recommendations (Table 6), the top two aims of RSs for teachers are (i) to support teachers in the improvement of their education practices (20 cases or 32.79%) and (ii) to offer them personalized advice and directions as to the retrieval of educational content, OERs, etc. (15 cases or 24.59%). This result is in good agreement with existing research that places emphasis on the improvement of teaching strategies [8]. The top two research objectives, as expressed through the research questions of the respective studies (Table 7), are (i) the higher performance, quality and accuracy of the generated recommendations (21 cases or 34.42%), as well as (ii) the increased personalization of the generated recommendations (18 cases, or 29.51%). Indeed, personalization has already been recognized as a major feature in teaching and learning; education-related RSs hold great promise for meeting the individual needs of students and teachers [92]. Specialized technologies and Artificial Intelligence/machine learning algorithms are employed to these ends.
Machine learning algorithms are gaining ground in RSs for teachers as major tools for performance optimization (increased timeliness, personalization and accuracy of the generated recommendations) (Table 10). This result is in good agreement with Khanal et al. [7], who state that machine learning methods, algorithms, data sets, evaluations, assessments and results are the constituents of an RS. In this context, it was found that supervised methods are far more frequently used than unsupervised methods, as was also concluded by Souabi et al. [3]. In particular, prediction problems hold the top place in the list of problems addressed via machine learning algorithms within the RS-for-teachers framework, followed by classification, identification and clustering problems (Table 11). The top place of prediction (supervised learning), along with the much lower place of clustering (unsupervised learning), is in good agreement with relevant findings by Souabi et al. [3].
Collaborative filtering is clearly the preferred method for the generation of recommendations; hybrid methods follow, while content-based filtering is the least frequent option (Table 9). This result is in good agreement with Urdaneta-Ponte et al. [93]; on the contrary, Deschênes [10] finds that content-based filtering is the preferred type, while Souabi et al. [3] find that content-based and collaborative filtering approaches are equally frequent. The contexts or digital environments where RSs are embedded are primarily e-learning platforms (LMS, VLE, LCMS, etc.), followed by decision-making support platforms or frameworks (Table 8).
Another observation of interest is that the vast majority (more than 96%) of the reviewed publications include experimentation (either pure-/quasi-experiments or case studies, Table 12) and report evaluation results (Table 15). This signifies a welcome deviation from previous practices reported in Erdt et al. [21], where 42% of the reviewed publications did not evaluate the relevant RS, or later on in Souabi et al. [3], who found that 25% of the reviewed publications did not include an evaluation of the RS and the results. Certainly, the evaluation of RS efficiency and success remains an open issue, as existing research often concludes. For example, da Silva et al. [92] report that real-life testing was scarcely employed in the reviewed publications, and warn against biases in studies of user satisfaction through questionnaires. They also notice that multiple targets may obscure evaluation results due to cross-effects.
Regarding the sample sizes of the experiments for the evaluation of the proposed and developed RS, a great variability is registered in RSs for teachers. This result agrees with previous results on RSs for students or generally for users, with the significant differentiation that sample sizes of RSs for students/users peak at the small and the large scale, while RSs for teachers peak only at the small scale (Table 13). While the sizes of the sets of courses under recommendation remain limited, the number of other items (especially those in digital form, such as OERs, movies, videos, etc.) is considerably larger (Table 14), a fact that verifies the need for and the value of a good RS.
As a final comment, it is worth noticing the results obtained regarding the impact of research on RS, as expressed via peer-citations (Figure 5 and Figure 6): Spain (University of Cordoba, etc.), the USA (George Mason University, etc.), Italy (University Roma III, etc.), Mexico (Autonomous University of Yucatan, etc.), Croatia (University of Rijeka, etc.) and China (Tsinghua University, etc.) occupy the top places in the relevant lists and emerge as the focal points of relevant research worldwide.

5. Conclusions–Further Research

A systematic literature review of recent publications (2011–2023) on RSs for teachers is presented in this paper. The analysis carried out across six research questions reveals interesting aspects of this subfield of the general field of RS for education and learning. This area is active and its activity is expected to keep rising, especially as it incorporates and benefits from modern machine learning algorithms for performance optimization and the general improvement of the generated recommendations. Currently, recommendations for teachers are produced mainly through collaborative filtering; they address prediction, classification, identification and clustering problems and aim to support teachers in improving their educational practices and in the search and retrieval of suitable digital learning content, such as OERs. A large variety of algorithms, methods and tools are employed to meet the RS research aims and objectives; the majority falls under the supervised learning paradigm. Relevant research is carried out in numerous focal points around the globe, is disseminated through numerous journals and has a considerable impact worldwide. Fortunately, the current trend favors the experimental evaluation of the proposed RS and the reporting of experimental results—a welcome development that is expected to raise the quality of relevant research.
Despite these findings, the area of RSs for teachers is found to be under-researched—certainly not as researched as RSs for students or RSs for ‘users’ in general. Furthermore, the results (Table 6) indicate that the reviewed RSs for teachers aid teachers in navigating, locating and retrieving suitable material and learning objects according to the respective requests and the algorithm employed. However, they do not aid teachers in designing, structuring or putting together the components of a (new) course on the basis of a preferred and adopted theory of learning. To achieve these aims, teachers often resort to LAMS or to LMS/LCMS environments. While working in such an environment, teachers select the learning activities to include in their course/lesson plans according to their preferred theory of learning. The results of the current review imply that (i) RSs for teachers are not yet seamlessly embedded in LAMS or LMS/LCMS environments, while (ii) they also do not generate recommendations on the basis of specific theories of learning. Future research that would address these points and embed relevant aspects in the algorithms and filters used to generate recommendations is expected to enhance the quality of RSs for teachers and promote their acceptance and use among teachers.
Along this line, the future steps in our research plan are to design, develop and experimentally evaluate a novel RS for teachers that will support them in implementing the specific educational methods and scenarios each of them selects and adopts under an overarching theory of learning. Different theories of learning lead to different educational practices, methods and scenarios; they consequently result in different needs for the teacher. A personalized RS that will guide and advise on the search and retrieval of relevant content, material, course plans, etc., is expected to provide a substantial aid to the teacher and, therefore, to enhance education and learning.

Author Contributions

Conceptualization, V.S., M.R. and Y.P.; methodology, V.S. and M.R.; formal analysis, V.S. and M.R.; validation, V.S. and Y.P.; writing—original draft preparation, V.S.; writing—review and editing, M.R. and Y.P.; supervision, M.R. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The data presented in this study are available in the article.

Acknowledgments

The authors wish to thank the Special Account for Research Grants of the University of West Attica, Athens-Egaleo, Greece, for partially covering the cost of this publication (APC).

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Uta, M.; Felfernig, A.; Le, V.-M.; Tran, T.N.T.; Garber, D.; Lubos, S.; Burgstaller, T. Knowledge-based recommender systems: Overview and research directions. Front. Big Data 2024, 7, 1304439. [Google Scholar] [CrossRef] [PubMed]
  2. Aggarwal, C.C. Recommender Systems: The Textbook, 1st ed.; Springer: Cham, Switzerland, 2016. [Google Scholar]
  3. Souabi, S.; Retbi, A.; KhalidiIdrissi, M.; Bennani, S. Recommendation Systems on E-Learning and Social Learning: A Systematic Review. Electron. J. e-Learn. 2021, 19, 5. [Google Scholar] [CrossRef]
  4. Ricci, F.; Rokach, L.; Shapira, B. Recommender Systems Handbook, 2nd ed.; Springer: New York, NY, USA, 2015. [Google Scholar]
  5. Schwartz, B. The Paradox of Choice. Why More Is Less, 1st ed.; HarperCollins: New York, NY, USA, 2004. [Google Scholar]
  6. Manouselis, N.; Drachsler, H.; Verbert, K.; Duval, E. Recommender Systems for Learning, 1st ed.; Springer: New York, NY, USA, 2013. [Google Scholar]
  7. Khanal, S.S.; Prasad, P.W.C.; Alsadoon, A.; Maag, A. A Systematic Review: Machine Learning Based Recommendation Systems for E-Learning. Educ. Inf. Technol. 2020, 25, 2635–2664. [Google Scholar] [CrossRef]
  8. Yanes, N.; Mostafa, A.M.; Ezz, M.; Almuayqil, S.N. A machine learning-based recommender system for improving students learning experiences. IEEE Access 2020, 8, 201218–201235. [Google Scholar] [CrossRef]
  9. Bodily, R.; Verbert, K. Review of Research on Student-Facing Learning Analytics Dashboards and Educational Recommender Systems. IEEE Trans. Learn. Technol. 2017, 10, 405–418. [Google Scholar] [CrossRef]
  10. Deschênes, M. Recommender systems to support learners’ Agency in a Learning Context: A systematic review. Int. J. Educ. Technol. High. Educ. 2020, 17, 50. [Google Scholar] [CrossRef]
  11. Sandoussi, R.; Hnida, M.; Daoudi, N.; Ajhoun, R. Systematic Literature Review on Open Educational Resources Recommender Systems. Int. J. Interact. Mob. Technol. (iJIM) 2022, 16, 44–77. [Google Scholar] [CrossRef]
  12. Tarus, J.; Niu, Z.; Mustafa, G. Knowledge-based recommendation: A review of ontology-based recommender systems for e-learning. Artif. Intell. Rev. 2018, 50, 21–48. [Google Scholar] [CrossRef]
  13. George, G.; Lal, A.M. Review of ontology-based recommender systems in e-learning. Comput. Educ. 2019, 142, 103642. [Google Scholar] [CrossRef]
  14. Rahayu, N.W.; Ferdiana, R.; Kusumawardani, S.S. A systematic review of ontology use in E-learning recommender system. Comput. Educ. Artif. Intell. 2022, 3, 100047. [Google Scholar] [CrossRef]
  15. Khalid, A.; Lundqvist, K.; Yates, A. Recommender Systems for MOOCs: A Systematic Literature Survey (January 1, 2012–July 12, 2019). Int. Rev. Res. Open Distrib. Learn. 2020, 21, 255–291. [Google Scholar] [CrossRef]
  16. Gonzalez Camacho, L.A.; Alves-Souza, S.N. Social network data to alleviate cold-start in recommender system: A systematic review. Inf. Process. Manag. 2018, 54, 529–544. [Google Scholar] [CrossRef]
  17. Alhijawi, B.; Kilani, Y. The recommender system: A survey. Int. J. Adv. Intell. Paradig. 2020, 15, 229–251. [Google Scholar] [CrossRef]
  18. Zhang, S.; Yao, L.; Sun, A.; Tay, Y. Deep Learning Based Recommender System: A Survey and New Perspectives. ACM Comput. Surv. 2019, 52, 1–38. [Google Scholar] [CrossRef]
  19. Da’u, A.; Salim, N. Recommendation system based on deep learning methods: A systematic review and new directions. Artif. Intell. Rev. 2020, 53, 2709–2748. [Google Scholar] [CrossRef]
  20. Liu, T.; Wu, Q.; Chang, L.; Gu, T. A review of deep learning-based recommender system in e-learning environments. Artif. Intell. Rev. 2022, 55, 5953–5980. [Google Scholar] [CrossRef]
  21. Erdt, M.; Fernández, A.; Rensing, C. Evaluating Recommender Systems for Technology Enhanced Learning: A Quantitative Survey. IEEE Trans. Learn. Technol. 2015, 8, 326–344. [Google Scholar] [CrossRef]
  22. Pai, M.; McCulloch, M.; Gorman, J.D.; Pai, N.; Enanoria, W.; Kennedy, G.; Tharyan, P.; Colford, J. Systematic reviews and meta-analyses: An illustrated, step-by-step guide. Natl. Med. J. India 2004, 17, 86–95. [Google Scholar] [PubMed]
  23. Kitchenham, B.A. Procedures for Undertaking Systematic Reviews (Report No. TR-SE 0401); Report No. 0400011T.1; Computer Science Department, Keele University: Keele, UK; National ICT: Eversleigh, Australia, 2004. [Google Scholar]
  24. Kitchenham, B.A.; Charters, S. Guidelines for Performing Systematic Literature Reviews in Software Engineering; Report No. EB-SE 2007-001; School of Computer Science and Mathematics, Keele University: Keele, UK; University of Durham: Durham, UK, 2007. [Google Scholar]
  25. Kitchenham, B.A.; Brereton, P. A systematic review of systematic review process research in software engineering. Inf. Softw. Technol. 2013, 55, 2049–2075. [Google Scholar] [CrossRef]
  26. Rethlefsen, M.L.; Kirtley, S.; Waffenschmidt, S.; Ayala, A.P.; Moher, D.; Page, M.J.; Koffel, J.B. PRISMA-S: An extension to the PRISMA statement for reporting literature searches in systematic reviews. J. Med. Libr. Assoc. 2021, 109, 174–200. [Google Scholar] [CrossRef]
  27. Dhahri, M.; Khribi, M.K. A Review of Educational Recommender Systems for Teachers. In Proceedings of the 18th International Conference on Cognition and Exploratory Learning in Digital Age (CELDA 2021), Virtual, 13–15 October 2021. [Google Scholar]
  28. Liu, C.; Zhang, H.; Zhang, J.; Zhang, Z.; Yuan, P. Design of a Learning Path Recommendation System Based on a Knowledge Graph. Int. J. Inf. Commun. Technol. Educ. (IJICTE) 2023, 19, 1–18. [Google Scholar] [CrossRef]
  29. Tahir, S.; Hafeez, Y.; Abbas, M.; Nawaz, A.; Hamid, B. Smart Learning Objects Retrieval for E-Learning with Contextual Recommendation based on Collaborative Filtering. Educ. Inf. Technol. 2022, 27, 8631–8668. [Google Scholar] [CrossRef]
  30. Yao, D.; Deng, X.; Qing, X. A Course Teacher Recommendation Method Based on an Improved Weighted Bipartite Graph and Slope One. IEEE Access 2022, 10, 129763–129780. [Google Scholar] [CrossRef]
  31. Liu, H.; Sun, Z.; Qu, X.; Yuan, F. Top-aware recommender distillation with deep reinforcement Learning. Inf. Sci. 2021, 576, 642–657. [Google Scholar] [CrossRef]
  32. Ma, Z.H.; Hwang, W.Y.; Shih, T.K. Effects of a peer tutor recommender system (PTRS) with machine learning and automated assessment on vocational high school students’ computer application operating skills. J. Comput. Educ. 2020, 7, 435–462. [Google Scholar] [CrossRef]
  33. Gordillo, A.; López-Fernández, D.; Verbert, K. Examining the Usefulness of Quality Scores for Generating Learning Object Recommendations in Repositories of Open Educational Resources. Appl. Sci. 2020, 10, 4638. [Google Scholar] [CrossRef]
  34. Poitras, E.; Mayne, Z.; Huang, L.; Udy, L.; Lajoie, S. Scaffolding Student Teachers’ Information-Seeking Behaviours with a Network-Based Tutoring System. J. Comput. Assist. Learn. 2019, 35, 731–746. [Google Scholar] [CrossRef]
  35. Mimis, M.; El Hajji, M.; Es-saady, Y.; Guejdi, A.O.; Douzi, H.; Mammass, D. A framework for smart academic guidance using educational data mining. Educ. Inf. Technol. 2019, 24, 1379–1393. [Google Scholar] [CrossRef]
  36. Chen, X.; Zhang, Y.; Xu, H.; Qin, Z.; Zha, H. Adversarial distillation for efficient recommendation with external knowledge. ACM Trans. Inf. Syst. 2019, 37, 1–28. [Google Scholar] [CrossRef]
  37. Fazeli, S.; Drachsler, H.; Bitter-Rijpkema, M.; Brouns, F.; Van der Vegt, W.; Sloep, P.B. User-Centric Evaluation of Recommender Systems in Social Learning Platforms: Accuracy is Just the Tip of the Iceberg. IEEE Trans. Learn. Technol. 2018, 11, 294–306. [Google Scholar] [CrossRef]
  38. Wongthongtham, P.; Chan, K.Y.; Potdar, V.; Abu-Salih, B.; Gaikwad, S.; Jain, P. State-of-the-Art Ontology Annotation for Personalised Teaching and Learning and Prospects for Smart Learning Recommender Based on Multiple Intelligence and Fuzzy Ontology. Int. J. Fuzzy Syst. 2018, 20, 1357–1372. [Google Scholar] [CrossRef]
  39. Almohammadi, K.; Hagras, H.; Alzahrani, A.; Alghazzawi, D.; Aldabbagh, G. A type-2 fuzzy logic recommendation system for adaptive teaching. Soft Comput. 2017, 21, 965–979. [Google Scholar] [CrossRef]
  40. Knez, T.; Dlab, M.H.; Hoic-Bozic, N. Implementation of group formation algorithms in the ELARS recommender system. Int. J. Emerg. Technol. Learn. (Ijet) 2017, 12, 198–207. [Google Scholar] [CrossRef]
  41. Hoic-Bozic, N.; HolenkoDlab, M.; Mornar, V. Recommender System and Web 2.0 Tools to Enhance a Blended Learning Model. IEEE Trans. Educ. 2016, 59, 39–44. [Google Scholar] [CrossRef]
  42. Khadiev, K.; Miftakhutdinov, Z.; Sidikov, M. Collaborative filtering approach in adaptive learning. Int. J. Pharm. Technol. 2016, 8, 15124–15132. [Google Scholar]
  43. Zervas, P.; Sergis, S.; Sampson, D.G.; Fyskilis, S. Towards Competence-Based Learning Design Driven Remote and Virtual Labs Recommendations for Science Teachers. Technol. Knowl. Learn. 2015, 20, 185–199. [Google Scholar] [CrossRef]
  44. Zapata, A.; Menéndez, V.H.; Prieto, M.E.; Romero, C. Evaluation and selection of group recommendation strategies for collaborative searching of learning objects. Int. J. Hum.-Comput. Stud. 2015, 76, 22–39. [Google Scholar] [CrossRef]
  45. Pedro, L.; Santos, C.; Almeida, S.F.; Ramos, F.; Moreira, A.; Almeida, M.; Antunes, M.J. The SAPO Campus Recommender System: A Study about Students’ and Teachers’ Opinions. Res. Learn. Technol. 2014, 22, 22921. [Google Scholar] [CrossRef]
  46. Thaiklang, S.; Arch-Int, N.; Arch-Int, S. Learning resources recommendation framework using rule-based reasoning approach. J. Theor. Appl. Inf. Technol. 2014, 69, 68–76. [Google Scholar]
  47. Cobos, C.; Rodriguez, O.; Rivera, J.; Betancourt, J.; Mendoza, M.; Leon, E.; Herrera-Viedma, E. A hybrid system of pedagogical pattern recommendations based on singular value decomposition and variable data attributes. Inf. Process. Manag. 2013, 49, 607–625. [Google Scholar] [CrossRef]
  48. García, E.; Romero, C.; Ventura, S.; De Castro, C. A collaborative educational association rule mining tool. Internet High. Educ. 2011, 14, 77–88. [Google Scholar] [CrossRef]
  49. Andrade, T.; Almeira, C.; Barbosa, J.; Rigo, S. Recommendation System model integrated with Active Methodologies, EDM, and Learning Analytics for dropout mitigation in Distance Education. Rev. Latinoam. Tecnol. Educ.-RELATEC 2023, 22, 185–205. [Google Scholar]
  50. Pereira, F.D.; Rodrigues, L.; Henklain, M.H.O.; Freitas, H.; Oliveira, D.F.; Cristea, A.I.; Carvlho, L.; Isotani, S.; Benedict, A.; Dorodchi, M.; et al. Toward Human–AI Collaboration: A Recommender System to Support CS1 Instructors to Select Problems for Assignments and Exams. IEEE Trans. Learn. Technol. 2023, 16, 457–472. [Google Scholar] [CrossRef]
  51. Kang, S.; Hwang, J.; Kweon, W.; Yu, H. Item-side ranking regularized distillation for recommender system. Inf. Sci. 2021, 580, 15–34. [Google Scholar] [CrossRef]
  52. Ali, S.; Hafeez, Y.; Abbas, M.A.; Aqib, M.; Nawaz, A. Enabling remote learning system for virtual personalized preferences during COVID-19 pandemic. Multimed. Tools Appl. 2021, 80, 33329–33355. [Google Scholar] [CrossRef] [PubMed]
  53. Gao, J.; Yue, X.G.; Hao, L.; Crabbe, M.J.C.; Manta, O.; Duarte, N. Optimization Analysis and Implementation of Online Wisdom Teaching Mode in Cloud Classroom Based on Data Mining and Processing. Int. J. Emerg. Technol. Learn. (iJET) 2021, 16, 205–218. [Google Scholar] [CrossRef]
  54. Bulut, O.; Cormier, D.C.; Shin, J. An Intelligent Recommender System for Personalized Test Administration Scheduling with Computerized Formative Assessments. Front. Educ. 2020, 5, 572612. [Google Scholar] [CrossRef]
  55. De Medio, C.; Limongelli, C.; Sciarrone, F.; Temperini, M. MoodleREC: A recommendation system for creating courses using the Moodle e-learning Platform. Comput. Hum. Behav. 2020, 104, 106168. [Google Scholar] [CrossRef]
  56. Li, J.; Ye, Z. Course Recommendations in Online Education Based on Collaborative Filtering Recommendation Algorithm. Complexity 2020, 2020, 6619249. [Google Scholar] [CrossRef]
  57. Graesser, A.C.; Hu, X.; Nye, B.D.; VanLehn, K.; Kumar, R.; Heffernan, C.; Heffernan, N.; Woolf, B.; Olney, A.M.; Rus, V.; et al. ElectronixTutor: An Intelligent Tutoring System with Multiple Learning Resources for Electronics. Int. J. STEM Educ. 2018, 5, 15. [Google Scholar] [CrossRef]
  58. Karga, S.; Satratzemi, M. A Hybrid Recommender System Integrated into LAMS for Learning Designers. Educ. Inf. Technol. 2018, 23, 1297–1329. [Google Scholar] [CrossRef]
  59. Afridi, A.H. Stakeholders Analysis for Serendipitous Recommenders system in Learning Environments. Procedia Comput. Sci. 2018, 130, 222–230. [Google Scholar] [CrossRef]
  60. Revilla Muñoz, O.; Alpiste Penalba, F.; Fernández Sánchez, J. The Skills, Competences, and Attitude toward Information and Communications Technology Recommender System: An online support program for teachers with personalized recommendations. New Rev. Hypermedia Multimed. 2016, 22, 83–110. [Google Scholar] [CrossRef]
  61. Bozo, J.; Alarcon, R.; Peralta, M.; Mery, T.; Cabezas, V. Metadata for recommending primary and secondary level learning resources. JUCS—J. Univers. Comput. Sci. 2016, 22, 197–227. [Google Scholar]
  62. Sergis, S.; Sampson, D.G. Learning Object Recommendations for Teachers Based on Elicited ICT Competence Profiles. IEEE Trans. Learn. Technol. 2016, 9, 67–80. [Google Scholar] [CrossRef]
  63. Zapata, A.; Menéndez, V.H. A framework for recommendation in learning object repositories: An example of application in civil engineering. Adv. Eng. Softw. 2013, 56, 1–14. [Google Scholar] [CrossRef]
  64. Berkani, L.; Chikh, A.; Nouali, O. Using hybrid semantic information filtering approach in communities of practice of E-learning. J. Web Eng. 2013, 12, 383–402. [Google Scholar]
  65. Cechinel, C.; Sicilia, M.Á.; Sánchez-Alonso, S.; García-Barriocanal, E. Evaluating collaborative filtering recommendations inside large learning object repositories. Inf. Process. Manag. 2013, 49, 34–50. [Google Scholar] [CrossRef]
  66. Peiris, K.D.A.; Gallupe, R.B. A Conceptual Framework for Evolving, Recommender Online Learning Systems. Decis. Sci. J. Innov. Educ. 2012, 10, 389–412. [Google Scholar] [CrossRef]
  67. Zhu, Z.; Sun, Y. Personalized information push system for education management based on big data mode and collaborative filtering algorithm. Soft Comput. 2023, 27, 10057–10067. [Google Scholar] [CrossRef]
  68. Tong, W.; Wang, Y.; Su, Q.; Hu, Z. Digital twin campus with a novel double-layer collaborative filtering recommendation algorithm framework. Educ. Inf. Technol. 2022, 27, 11901–11917. [Google Scholar] [CrossRef]
  69. Dias, L.L.; Barrere, E.; De Souza, J.F. The impact of semantic annotation techniques on content-based video lecture recommendation. J. Inf. Sci. 2021, 47, 740–752. [Google Scholar] [CrossRef]
  70. Rawat, B.; Dwivedi, S.K. Discovering Learners’ Characteristics through Cluster Analysis for Recommendation of Courses in E-Learning Environment. Int. J. Inf. Commun. Technol. Educ. (IJICTE) 2019, 15, 42–66. [Google Scholar] [CrossRef]
  71. Karga, S.; Satratzemi, M. Using Explanations for Recommender Systems in Learning Design Settings to Enhance Teachers’ Acceptance and Perceived Experience. Educ. Inf. Technol. 2019, 24, 2953–2974. [Google Scholar] [CrossRef]
  72. Peralta, M.; Alarcon, R.; Pichara, K.; Mery, T.; Cano, F.; Bozo, J. Understanding learning resources metadata for primary and secondary education. IEEE Trans. Learn. Technol. 2018, 11, 456–467. [Google Scholar] [CrossRef]
  73. Liu, L.; Liang, Y.; Li, W. Dynamic assessment and prediction in online learning: Exploring the methods of collaborative filtering in a task recommender system. Int. J. Technol. Teach. Learn. 2017, 13, 103–117. [Google Scholar]
  74. Dabbagh, N.; Fake, H. Tech Select Decision Aide: A Mobile Application to Facilitate Just-in-Time Decision Support for Instructional Designers. TechTrends 2017, 61, 393–403. [Google Scholar] [CrossRef]
  75. Aguilar, J.; Valdiviezo-Diaz, P.; Riofrio, G. A general framework for intelligent recommender systems. Appl. Comput. Inform. 2017, 13, 147–160. [Google Scholar] [CrossRef]
  76. Sweeney, M.; Rangwala, H.; Lester, J.; Johri, A. Next-Term Student Performance Prediction: A Recommender Systems Approach. J. Educ. Data Min. 2016, 8, 22–50. [Google Scholar]
  77. Niemann, K.; Wolpers, M. Creating Usage Context-based Object Similarities to Boost Recommender Systems in Technology Enhanced Learning. IEEE Trans. Learn. Technol. 2015, 8, 274–285. [Google Scholar] [CrossRef]
  78. Santos, O.C.; Boticario, J.G. User-centred design and educational data mining support during the recommendations elicitation process in social online learning environments. Expert Syst. 2015, 32, 293–311. [Google Scholar] [CrossRef]
  79. Kortemeyer, G.; Droschler, S.; Pritchard, D.E. Harvesting latent and usage-based metadata in a course management system to enrich the underlying educational digital library: A case study. Int. J. Digit. Libr. 2014, 14, 1–15. [Google Scholar] [CrossRef]
  80. Anaya, A.R.; Luque, M.; García-Saiz, T. Recommender system in collaborative learning environment using an influence diagram. Expert Syst. Appl. 2013, 40, 7193–7202. [Google Scholar] [CrossRef]
  81. Sevarac, Z.; Devedzic, V.; Jovanovic, J. Adaptive neuro-fuzzy pedagogical recommender. Expert Syst. Appl. 2012, 39, 9797–9806. [Google Scholar] [CrossRef]
  82. Ferreira-Satler, M.; Romero, F.P.; Menendez-Dominguez, V.H.; Zapata, A.; Prieto, E. Fuzzy ontologies-based user profiles applied to enhance e-learning activities. Soft Comput. 2012, 16, 1129–1141. [Google Scholar] [CrossRef]
  83. Alinaghi, T.; Bahreininejad, A. A multi-agent question-answering system for E-learning and collaborative learning environment. Int. J. Distance Educ. Technol. (IJDET) 2011, 9, 23–39. [Google Scholar] [CrossRef]
  84. Caglar-Ozhan, S.; Altun, A.; Ekmekcioglu, E. Emotional patterns in a simulated virtual classroom supported with an affective recommendation system. Br. J. Educ. Technol. 2022, 53, 1724–1749. [Google Scholar] [CrossRef]
  85. López, M.B.; Alor-Hernández, G.; Sánchez-Cervantes, J.L.; Paredes-Valverde, M.A.; Salas-Zárate, M.D.P. EduRecomSys: An Educational Resource Recommender System Based on Collaborative Filtering and Emotion Detection. Interact. Comput. 2020, 32, 407–432. [Google Scholar] [CrossRef]
  86. Poitras, E.G.; Fazeli, N.; Mayne, Z.R. Modeling Student Teachers’ Information-Seeking Behaviors While Learning with Network-Based Tutors. J. Educ. Technol. Syst. 2018, 47, 227–247. [Google Scholar] [CrossRef]
  87. Clemente, J.; Yago, H.; Pedro-Carracedo, J.; Bueno, J. A proposal for an adaptive Recommender System based on competences and ontologies. Expert Syst. Appl. 2022, 208, 118171. [Google Scholar] [CrossRef]
  88. Song, B.; Li, X. The Research of Intelligent Virtual Learning Community. Int. J. Mach. Learn. Comput. 2019, 9, 621–628. [Google Scholar] [CrossRef]
  89. Schafer, J.B.; Frankowski, D.; Herlocker, J.; Sen, S. Collaborative Filtering Recommender Systems. In The Adaptive Web. Lecture Notes in Computer Science, 1st ed.; Brusilovsky, P., Kobsa, A., Nejdl, W., Eds.; Springer: Heidelberg, Germany, 2007; Volume 4321, pp. 291–324. [Google Scholar]
  90. Adomavicius, G.; Tuzhilin, A. Toward the next generation of recommender systems: A survey of the state-of-the-art and possible extensions. IEEE Trans. Knowl. Data Eng. 2005, 17, 734–749. [Google Scholar] [CrossRef]
  91. Pérez-Almaguer, Y.; Yera, R.; Alzahrani, A.A.; Martínez, L. Content-based group recommender systems: A general taxonomy and further improvements. Expert Syst. Appl. 2021, 184, 115444. [Google Scholar] [CrossRef]
  92. da Silva, F.L.; Slodkowski, B.K.; da Silva, K.K.A.; Cazella, S.C. A systematic literature review on educational recommender systems for teaching and learning: Research trends, limitations and opportunities. Educ. Inf. Technol. 2023, 28, 3289–3328. [Google Scholar] [CrossRef]
  93. Urdaneta-Ponte, M.C.; Mendez-Zorrilla, A.; Oleagordia-Ruiz, I. Recommendation Systems for Education: Systematic Review. Electronics 2021, 10, 1611. [Google Scholar] [CrossRef]
Figure 1. The retrieval and selection procedure, step by step, according to the PRISMA methodology.
Figure 2. Evolution of the number of relevant publications along the time span of this review (2011–2023) [blue: initially retrieved unique articles (N1 = 482); red: articles retained after 1st screening (N2 = 180); green: articles retained after 2nd screening (N3 = 61)].
Figure 3. The geographic distribution of research on RSs for teachers (location of 1st author affiliation), at the country scale—map view.
Figure 4. The geographic distribution of research on RSs for teachers (location of 1st author affiliation), at the country scale—bar chart view. Vertical axis: number of papers from a given country (in absolute numbers). Horizontal axis: countries of the 61 reviewed papers.
Figure 5. The impact of RS-related research, as expressed by citation counts per country of 1st author affiliation, in descending order. Vertical axis: number of citations received by papers from a given country, within the set of 61 reviewed papers (in absolute numbers). Horizontal axis: countries of 1st author affiliation within the 61 reviewed papers.
Figure 6. The impact of RS-related research, as expressed by citation counts in descending order (by institution of 1st author affiliation). Vertical axis: number of citations received by papers from a given university/institute, within the set of 61 reviewed papers (in absolute numbers). Horizontal axis: university/institute of 1st author affiliation within the 61 reviewed papers.
Table 1. Articles retrieved from the 4 databases (ERIC, Scopus, Web of Science and Science Direct).
| Database | Keywords | Articles (Retrieved) | Duplicates (Excluded) | Articles (Remaining) |
|---|---|---|---|---|
| ERIC | Recommendation System(s) OR Recommender System(s) | 191 | 12 | 179 |
| SCOPUS | (Recommendation System(s) OR Recommender System(s)) AND (Teacher OR Educator) | 138 | 9 | 129 |
| Web of Science | (Recommendation System(s) OR Recommender System(s)) AND (Teacher OR Educator) | 82 | 74 | 8 |
| Science Direct | (Recommendation System(s) OR Recommender System(s)) AND (Teacher OR Educator) | 188 | 22 | 166 |
| Total | | 599 | 117 | 482 |
Table 2. Articles excluded during the two screenings.
(1st screening: {Title, Abstract, Keywords}; 2nd screening: {Full Text}.)
| Nr | Exclusion Criterion | ERIC (1st) | Scopus (1st) | Web of Science (1st) | Science Direct (1st) | ERIC (2nd) | Scopus (2nd) | Web of Science (2nd) | Science Direct (2nd) |
|---|---|---|---|---|---|---|---|---|---|
| 1 | Not a journal paper (e.g., article in conference proceedings, book, etc.) | 0 | 0 | 2 | 0 | 0 | 0 | 2 | 0 |
| 2 | Not a primary study (e.g., review or meta-analysis) | 21 | 5 | 0 | 22 | 0 | 1 | 0 | 0 |
| 3 | Not referring to e-learning or distance learning | 19 | 29 | 0 | 82 | 4 | 10 | 1 | 10 |
| 4 | The RS involved is not addressed to teachers or educators | 81 | 11 | 0 | 30 | 33 | 42 | 1 | 15 |
| 5 | Not an English-language publication | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
| Total Excluded | | 121 | 45 | 2 | 134 | 37 | 53 | 4 | 25 |
Table 3. The number of authors per RS publication.
| Number of Authors | Number of Publications (Absolute Number) | Number of Publications (Percentage) |
|---|---|---|
| >5 | 10 | 16.39 |
| 5 | 8 | 13.12 |
| 4 | 12 | 19.67 |
| 3 | 17 | 27.87 |
| 2 | 13 | 21.31 |
| 1 | 1 | 1.64 |
| Total | 61 | 100% |
Table 4. The journals that host publications on RSs for teachers.
| Number of Publications Hosted | Number of Journals | Journal Titles (in Alphabetic Order within Each Cell) |
|---|---|---|
| 5 | 2 | Education and Information Technologies; IEEE Transactions on Learning Technologies |
| 3 | 2 | Expert Systems with Applications; Soft Computing |
| 2 | 4 | International Journal of Emerging Technologies in Learning; Information Sciences; International Journal of Information and Communication Technology Education; Information Processing & Management |
| 1 | 37 | Journal of Computers in Education; Computers in Human Behavior; IEEE Access; Advances in Engineering Software; Applied Computing and Informatics; Applied Sciences; British Journal of Educational Technology; Complexity; Decision Sciences Journal of Innovative Education; Expert Systems; Frontiers in Education; IEEE Transactions on Education; Interacting with Computers; International Journal of Fuzzy Systems; International Journal of Human-Computer Studies; International Journal of Machine Learning and Computing; International Journal of Pharmacy and Technology; International Journal of STEM Education; International Journal of Technology in Teaching and Learning; International Journal on Digital Libraries; Journal of Computer Assisted Learning; Journal of Educational Data Mining; Journal of Educational Technology Systems; Journal of Information Science; Journal of Theoretical and Applied Information Technology; JUCS—Journal of Universal Computer Science; Multimedia Tools and Applications; New Review of Hypermedia and Multimedia; Research in Learning Technology; Technology, Knowledge and Learning; TechTrends; The Internet and Higher Education; ACM Transactions on Information Systems; International Journal of Distance Education Technologies; Revista Latinoamericana de Tecnologia Educativa (RELATEC); Journal of Web Engineering; Institute of Management Sciences |
| Total | 45 | 100% |
Table 5. The geographic distribution of research on RSs for teachers (location of 1st author affiliation), at the continent scale.
| Continent | Number of Publications (Absolute Number) | Number of Publications (Percentage) |
|---|---|---|
| Asia | 20 | 32.79 |
| Europe | 19 | 31.14 |
| The Americas | 18 | 29.51 |
| Oceania | 2 | 3.28 |
| Africa | 2 | 3.28 |
| Total | 61 | 100.00 |
Table 6. Classification of the reviewed publications according to their (major) recommendation aims.
| Recommendation Aims | Number of Publications (Absolute Number) | Number of Publications (Percentage) |
|---|---|---|
| Improve Teaching Practices | 20 | 32.79 |
| Personalized Recommendations for Users (including Teachers) | 15 | 24.59 |
| Personalized Search/Recommendation for Learning Objects (LOs) | 14 | 22.95 |
| Personalized Recommendations for Teachers | 10 | 16.39 |
| Personalized Recommendations for Social Navigation | 2 | 3.28 |
| Total | 61 | 100.00 |
Table 7. The reviewed publications grouped according to their research aims, as expressed by the respective research questions.
| Research Aims, as Expressed in the Research Questions Posed | Nr. of Publications (Absolute Number) | Nr. of Publications (%) | References to Reviewed Publications |
|---|---|---|---|
| 1. Improvement of RS efficiency/quality/accuracy | 21 | 34.42 | [28,29,30,31,32,33,34,35,36,37,38,39,40,41,42,43,44,45,46,47,48] |
| 2. Personalization in the RS | 18 | 29.51 | [49,50,51,52,53,54,55,56,57,58,59,60,61,62,63,64,65,66] |
| 3. Technology-specific RS | 17 | 27.87 | [67,68,69,70,71,72,73,74,75,76,77,78,79,80,81,82,83] |
| 4. Affective/emotional aspects in RS | 3 | 4.92 | [84,85,86] |
| 5. RS based on teachers’ ICT profiles/competences/skills/attitudes | 2 | 3.28 | [87,88] |
| Total | 61 | 100.00 | |
Table 8. (a) Classification of the reviewed publications according to the educational settings or contexts of RS usage—general view. (b) Classification of the reviewed publications according to the educational settings or contexts of RS usage—detailed view.
(a)
| Educational Setting or Context | Number of Publications (Absolute Number) | Number of Publications (Percentage over 61) |
|---|---|---|
| Educational Environments | 37 | 60.65 |
| Decision-Support Systems or Frameworks | 19 | 31.14 |
| Educational Tool Collections | 15 | 24.59 |
| Repositories | 10 | 16.39 |

(b)
| Educational Setting or Context | Number of Publications (Absolute Number) | Number of Publications (Percentage) |
|---|---|---|
| Educational Environments | 29 | 47.54 |
| Decision-Support Systems or Frameworks | 9 | 14.75 |
| Educational Tool Collections | 3 | 4.92 |
| Repositories | 0 | 0.00 |
| Educational Environments and Decision-Support Systems or Frameworks | 2 | 3.28 |
| Educational Environments and Educational Tool Collections | 6 | 9.84 |
| Educational Environments and Repositories | 0 | 0.00 |
| Decision-Support Systems or Frameworks and Educational Tool Collections | 2 | 3.28 |
| Decision-Support Systems or Frameworks and Repositories | 6 | 9.84 |
| Educational Tool Collections and Repositories | 4 | 6.55 |
| Total | 61 | 100.00 |
Table 9. Classification of the reviewed publications according to the filtering method employed.
| Filtering Method | Number of Publications (Absolute Number) | Number of Publications (Percentage) |
|---|---|---|
| Collaborative Filtering | 26 | 42.62 |
| Content-Based Filtering | 13 | 21.30 |
| Hybrid Filtering | 22 | 36.07 |
| Total | 61 | 100.00 |
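To make the filtering categories of Table 9 concrete, the sketch below illustrates the core collaborative-filtering idea (score a user's unrated items from the ratings of similar users) on a small, entirely hypothetical ratings matrix; it is a minimal Python illustration written for this summary, not code taken from any of the reviewed systems. A content-based filter would instead match item features against a profile built from the items the user has already adopted, and hybrid RSs combine both signals.

```python
import numpy as np

# Hypothetical user-item ratings (rows = teachers, columns = learning objects; 0 = not rated).
ratings = np.array([
    [5, 3, 0, 1],
    [4, 0, 0, 1],
    [1, 1, 0, 5],
    [0, 1, 5, 4],
], dtype=float)

def cosine_sim(a, b):
    """Cosine similarity between two rating vectors (0 if either vector is all zeros)."""
    norm = np.linalg.norm(a) * np.linalg.norm(b)
    return float(a @ b / norm) if norm > 0 else 0.0

def recommend(user_idx, ratings, top_n=2):
    """Rank the target user's unrated items by a similarity-weighted average of other users' ratings."""
    target = ratings[user_idx]
    sims = np.array([cosine_sim(target, other) if i != user_idx else 0.0
                     for i, other in enumerate(ratings)])
    scores = sims @ ratings / (sims.sum() + 1e-9)  # weighted average rating per item
    scores[target > 0] = -np.inf                   # do not re-recommend items already rated
    return np.argsort(scores)[::-1][:top_n]

print(recommend(user_idx=1, ratings=ratings))  # item indices suggested for user 1
```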
Table 10. Classification of the reviewed publications according to the algorithms and tools employed for recommendation generation.
| Algorithms and Tools | Used in Nr. of Papers (Absolute Number) | Used in Nr. of Papers (Percentage) |
|---|---|---|
| (A) Supervised Learning Algorithms | | |
| Ranking algorithms: kNN (9), Personal Rank Algorithm (1), instance-based classifier—IBK, a form of kNN (1), Item-kNN (1), Scoring Algorithms—PageRank (1), Search Ranking Algorithm (1), Ranking Algorithm (query-based) for Text Documents (1), Item-side ranking regularized distillation (1), MostPop Algorithm (1) | 17 | 15.32 |
| Text Mining—NLP algorithms: NLP (1), Text Mining and Topic Retrieval Algorithms—Latent Dirichlet Allocation, Matrix Factorization (4), Singular Value Decomposition—SVD (6), Factorization Machine (1), Key-phrase Extraction Algorithm—KEA (1), Text Pre-processing (1), Latent factor-based method (1), Latent Factors Model—BPRMF (1) | 16 | 14.41 |
| Tree and Graph algorithms: Decision Tree (6), Random Forest (3), C4.5 Algorithm (J48) (3), Algorithm 1: Available Set of previous and current Similar Multi-perspective preferences (1), Graph-searching algorithms—Dijkstra’s Shortest Path First (1) and Breadth First Search (1), Influence Diagrams (IDs) (1) | 16 | 14.41 |
| ANN and Factorization algorithms: Artificial Neural Networks—MLP (4), Deep Neural Networks—DNN (1), Convolutional Neural Networks—CNN (3), KERAS Neural Network deep learning library with TensorFlow (1), RNN-LSTM (1), Neural Matrix Factorization—NeuMF (1) | 11 | 9.91 |
| Classification algorithms: Naïve Bayes (5), Support Vector Machines—SVMs (1), LogLikelihood Algorithm (1) | 7 | 6.31 |
| Association Rule algorithms: Rule Induction Algorithm (1), Apriori algorithm (2), Ripper Algorithm (2) | 5 | 4.51 |
| Filtering Algorithms: Collaborative (1), Content-based (1), Hybrid (1) | 3 | 2.70 |
| Evolutionary Computing algorithms: Genetic Algorithms (1), Particle Swarm Optimization (1) | 2 | 1.80 |
| Meta-Algorithms: AdaBoost (1) | 1 | 0.90 |
| Total Supervised | 78 | 70.27 |
| (B) Unsupervised Learning Algorithms | | |
| K-means family of algorithms: k-means (2), k-means++ (3), Fuzzy c-means (1), Expectation–Maximization—EM (2), Top-N (3), Affinity Propagation (1), Compatibility Degree Algorithm (1) | 13 | 11.72 |
| Other clustering/grouping algorithms: Clustering Algorithm (1), Algorithm 1—Calculating group sizes (1), Algorithm 2—Forming homogeneous groups (1), Algorithm 3—Forming heterogeneous groups (1) | 4 | 3.60 |
| Model-driven / Performance Criterion Optimization algorithms: Stochastic Gradient Descent Regression—SGD (1), simple weighted summation average (1), complex weighted summation average (1), Personalized Linear Multiple Regression—PLMR (1) | 4 | 3.60 |
| Total Unsupervised | 21 | 18.92 |
| (C) Algorithms used are not reported | 12 | 10.81 |
| Total cases of algorithm use | 111 | 100.00 |
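Among the supervised techniques counted in Table 10, Singular Value Decomposition (SVD) and related matrix-factorization methods appear particularly often. The following minimal sketch, again on a hypothetical ratings matrix, shows how a truncated SVD yields a low-rank prediction matrix from which unrated items can be ranked; the per-user mean imputation is an assumption made here only to keep the example short, not a step reported by the reviewed papers.

```python
import numpy as np

# Hypothetical user-item ratings (0 = not rated); illustrative only.
ratings = np.array([
    [5, 3, 0, 1],
    [4, 0, 0, 1],
    [1, 1, 0, 5],
    [0, 1, 5, 4],
], dtype=float)

mask = ratings > 0
# Per-user mean over observed ratings, used as a simple imputation for missing entries.
user_means = ratings.sum(axis=1) / np.maximum(mask.sum(axis=1), 1)
filled = np.where(mask, ratings, user_means[:, None])

# Truncated SVD: keep k latent factors and rebuild a dense prediction matrix.
U, s, Vt = np.linalg.svd(filled - user_means[:, None], full_matrices=False)
k = 2
pred = user_means[:, None] + (U[:, :k] * s[:k]) @ Vt[:k, :]

# For each user, recommend the unrated item with the highest predicted rating.
pred_unrated = np.where(mask, -np.inf, pred)
print(pred_unrated.argmax(axis=1))
```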
Table 11. Problems addressed through the use of RSs.
| Problem Addressed | Number of Publications (Absolute Number) | Number of Publications (Percentage over 61) |
|---|---|---|
| Prediction | 28 | 45.90 |
| Classification | 25 | 40.98 |
| Identification | 22 | 36.06 |
| Clustering | 16 | 16.00 |
| Detection | 9 | 9.00 |
Table 12. Methodology (experimental design) for the evaluation of the proposed RS.
| Experimental Design | Number of Publications (Absolute Number) | Number of Publications (Percentage) |
|---|---|---|
| Quasi experiment | 47 | 77.05 |
| Case study | 8 | 13.11 |
| Pure experiment | 4 | 6.56 |
| No evaluation/not reported | 2 | 3.28 |
| Total | 61 | 100.00 |
Table 13. The characteristics of the sample used for the evaluation of the proposed RS.
| No. of Individuals/Items | Teachers | Students | Users in General |
|---|---|---|---|
| [1–20) | 8 | 5 | 7 |
| [20–40) | 7 | 8 | 1 |
| [40–60) | 2 | 2 | 0 |
| [60–80) | 2 | 2 | 3 |
| [80–100) | 0 | 0 | 1 |
| [100–120) | 0 | 3 | 0 |
| [120–140) | 1 | 0 | 0 |
| [140–… | 1 | 6 | 5 |
| Not reported | 7 | 6 | 6 |
| Total | 28 | 32 | 23 |
Table 14. The objects recommended to the users during the evaluation of the proposed RS.
| Number of Items | Learning Objects | Movies, etc. |
|---|---|---|
| [1–500) | 8 | 6 |
| [500–1000) | 1 | 1 |
| [1000–1500) | 2 | 1 |
| [1500–2000) | 2 | 2 |
| [2000–2500) | 0 | 2 |
| [2500–3000) | 1 | 0 |
| [3000–3500) | 1 | 0 |
| [3500–… | 0 | 8 |
| Not reported | 5 | 3 |
| Total | 20 | 23 |
Table 15. RS evaluation results reported in the reviewed publications.
| Evaluation Results | Number of Publications (Absolute Number) | Number of Publications (Percentage) |
|---|---|---|
| Positive | 47 | 77.05 |
| Neutral | 14 | 22.95 |
| Negative | 0 | 0.00 |
| Total | 61 | 100.00 |