Review

A Review on Scholarly Publication Recommender Systems: Features, Approaches, Evaluation, and Open Research Directions

1 WMG, University of Warwick, Coventry CV4 7AL, UK
2 Software and Security Group, Analog Devices, Edinburgh EH3 9FQ, UK
* Author to whom correspondence should be addressed.
Informatics 2025, 12(4), 108; https://doi.org/10.3390/informatics12040108
Submission received: 2 May 2025 / Revised: 25 August 2025 / Accepted: 26 August 2025 / Published: 10 October 2025

Abstract

The exponential growth of scientific literature has made it increasingly difficult for researchers to identify relevant and timely publications within vast academic digital libraries. Although academic search engines, reference management tools, and recommender systems have evolved, many still rely heavily on metadata and lack mechanisms to incorporate full-text content or time-awareness. This review systematically examines the landscape of scholarly publication recommender systems, employing the PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) methodology for a comprehensive and transparent selection of relevant studies. We highlight the limitations of current systems and explore the potential of integrating fine-grained citation knowledge—such as citation proximity, context, section, graph, and intention—extracted from full-text documents. These elements have shown promise in enhancing both the contextual relevance and recency of recommendations. Our findings highlight the importance of moving beyond accuracy-focused metrics toward user-centric evaluations that emphasise novelty, diversity, and serendipity. This paper advocates for the development of more holistic and adaptive recommender systems that better align with the evolving needs of researchers.

1. Introduction

The need for recommender systems in academia is increasingly evident as new research entities, such as papers, grants, and proposals, are published daily. In the 1960s, De Solla Price [1] forecast that the number of journals would reach 1,000,000 by 2000, up from roughly 60,000 in the 1950s. A study covering the period up to 2010 reported an annual growth rate of 8–9% in publications [2], while [3] reported a 3.7% annual increase. Owing to this information overload, discovering relevant research documents in the huge corpora of digital libraries is like finding a needle in a haystack. To illustrate the magnitude of the problem, consider the publication volumes of major digital libraries. The Association for Computing Machinery Digital Library (ACM DL) alone holds 1430 periodicals, 32,228 proceedings, 181,514 books and theses, and 140,477 publishers [4]. Google Scholar has not disclosed the size of its index; however, an empirical study estimated it at around 160 million research documents—including patents, citations, theses, and books—as of 2014 [5], and a scientometric study estimated 389 million records as of 2018 [6]. Moreover, cumulative submissions to the electronic preprint server ArXiv, launched in August 1991, reached 2,764,327 as of June 2025, as visualised in Figure 1. Other digital libraries, such as IEEE Xplore and CiteSeer, show similar growth.
The problem is exacerbated in interdisciplinary research domains, as publications appear across a wider variety of venues, proceedings, and journals. This creates challenges for researchers trying to stay abreast of relevant articles and for governments seeking to identify high-quality research for funding and innovation. Publishers must satisfy customer needs by recommending relevant content, while universities face pressure to design and teach up-to-date courses. Preprint servers such as ArXiv and Preprints.org have established themselves as alternatives to traditional peer-reviewed venues owing to rapid publication, open access, and strong academic support. However, these benefits come with the risk of publishing false or biased information. For academic Recommender Systems (RecSys), this means both more items to sift through and a danger of recommending false or biased work. A system is therefore needed that can sift through the many items in the huge corpora of digital libraries and provide relevant items to users according to their preferences. The primary goal and real-world purpose of a RecSys is to assist researchers in discovering relevant items.
The concept of digital recommender systems was introduced in the early 90s by Goldberg et al. [8], and one of the earliest academic RecSys was developed in the late 90s [9]. Since then, various features, aspects, and algorithms have been researched and added to improve academic RecSys. The purpose of this literature review is to examine the trends and research progress in academic RecSys over the years and to outline future directions and open research questions in the field. This review provides a comprehensive overview of key components, including feature representations, baseline algorithms, datasets, and evaluation metrics, that have been employed in the development and assessment of these systems. The aim is to serve as a valuable resource for both novice and experienced researchers and practitioners, offering insights into the landscape of scholarly publication recommender systems. The rest of this paper is structured as follows. Section 2 explains the methodology of the survey, and Section 3 presents features related to items and user/target modelling. An overview of different approaches is presented in Section 4. Reviews of different evaluation methods and metrics, both item-centric and user-centric, are presented in Section 5. The shortcomings of current approaches are discussed in Section 6, and avenues for future research are described in Section 7.

2. Research Methodology

This section presents the methodology used to conduct this survey. Figure 2 shows how the methodology adheres to the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines.
In the identification stage, digital libraries were selected to search for relevant literature, namely Scopus (https://www.scopus.com/ (accessed on 11 November 2024)) and Web of Knowledge (www.webofknowledge.com (accessed on 11 November 2024)). Next, queries related to research publication RecSys were constructed by combining two types of queries: (1) retrieving documents on recommender systems and (2) retrieving documents on scholarly publications.
The query containing key phrases on recommender systems included the following: recommend*, recommendation systems*, recommender system*, recommendation service*, recommender service*, recommendation approach*, recommender approach*, recommendation model*, recommender model*, recommendation method*, recommender method*, recommendation algorithm*, recommender algorithm*, recommendation application*, recommender application*, recommendation engine*, recommender engine*, recommendation framework*, and recommender framework*.
The key phrases for the research paper query included the following: “research paper*”, “research publication*”, “research article*”, “research document*”, “research literature*”, “scientific paper*”, “scientific publication*”, “scientific document*”, “scientific article*”, “scientific literature*”, “scholarly publication*”, “scholarly paper*”, “scholarly document*”, “scholarly literature*”, “scholarly article*”, “academic publication*”, “academic paper*”, “academic document*”, “academic article*”, “academic literature*”, “related publication*”, “related paper*”, “related document*”, “related literature*”, “related article*”, “digital librar*”, “citation recommend*”, and “citation-based*”.
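For illustration, the two query groups were combined with a logical AND, with each group OR-ing its wildcard phrases. A minimal sketch of the assembled query follows; Scopus-style TITLE-ABS-KEY syntax is assumed, and only a truncated subset of the phrases listed above is shown.
    # Illustrative only: each group ORs its wildcard phrases; the two groups are ANDed.
    recsys_terms = ['"recommender system*"', '"recommendation algorithm*"', '"recommendation engine*"']
    paper_terms = ['"research paper*"', '"scholarly publication*"', '"citation recommend*"']
    query = (f'TITLE-ABS-KEY({" OR ".join(recsys_terms)}) '
             f'AND TITLE-ABS-KEY({" OR ".join(paper_terms)})')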
The search queries returned 856 records up to March 2024. The queries were also checked on Google Scholar, which confirmed that no additional records were found. In the second phase, the papers were screened manually, and 103 duplicates were removed. In the next phase, further records were excluded based on eligibility: two academics independently reviewed the titles and abstracts and, after a thorough discussion, identified 442 unrelated works, 13 non-English records, and 17 records without full-text access.
In the end, 220 papers remained. Additionally, we included relevant backwards citations from the reviewed papers, along with supplementary literature such as survey articles and general resources on Machine Learning (ML) methods, recommender systems, and evaluation metrics, to deepen our understanding of the field. This step yielded 30 additional records. A total of 252 papers were analysed in this work.
Throughout the remainder of this paper, the terms scholarly publication recommender system and research paper recommender system are used interchangeably.

3. User-Item Modelling

Recommender systems comprise two main components, items and users, where items are suggested to users based on their preferences [10,11]. In academic RecSys, research publications are the items and researchers are the users. Recommendation tasks may go beyond recommending research articles, for instance suggesting potential collaborators or suitable publication venues; however, this work focuses specifically on the recommendation of research publications to researchers. The following sections review the features involved in modelling both items and users (or targets) in the context of research paper recommendation.

3.1. Item Modelling and Features

A research paper is a content-rich entity comprising various sections and types of information. The contents refer to the textual components of scholarly publications and play a vital role in academic RecSys. Typically, a research paper consists of various elements, such as the title, abstract, keywords, and various other sections, including the Introduction, Methodology, and Bibliography. We present a list of item features that are used to model an item for recommendation in Table 1.
It has been observed that item features are commonly represented using vector and graph representation schemes in the literature. The Vector Space Model (VSM), Term Frequency (TF), Term Frequency-Inverse Document Frequency (TF-IDF) [12], BM25 [13], bag-of-words, Word2Vec [14], and GloVe [15] are commonly used methods for term representation. Bollacker et al. [9,16] developed the Co-Citation Inverse Document Frequency (CCIDF) method, which is similar to TF-IDF but uses citation frequencies instead of term frequencies. West et al. [17] constructed citation network graphs, where nodes represent citing papers and edges represent citations, and generated recommendations based on centrality measures. Other examples, including PaperRank [18], Katz distance-based methods [19], and direction-aware random walks [20], have also been used for graphical representations.
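As a concrete illustration of the vector-based scheme, the following minimal sketch builds TF-IDF vectors for a hypothetical three-paper corpus and ranks candidates by cosine similarity; the same pattern underlies most term-based representations reviewed here.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.metrics.pairwise import cosine_similarity

    # Hypothetical corpus: one string (e.g., title + abstract) per candidate paper.
    papers = [
        "Collaborative filtering for research paper recommendation",
        "Citation networks and random walks for scholarly search",
        "Content-based filtering of scientific literature with TF-IDF",
    ]
    vectorizer = TfidfVectorizer(stop_words="english")
    X = vectorizer.fit_transform(papers)   # sparse document-term matrix
    sim = cosine_similarity(X)             # pairwise paper-to-paper similarity
    # Papers most similar to paper 0, best first, excluding paper 0 itself:
    ranking = sim[0].argsort()[::-1][1:]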
Table 1. List of reviewed papers utilising different item features for modelling item profiles. Abbreviations: Ti—Title, Ab—Abstract, Ke—Keywords, Au—Author, Af—Affiliation, Pd—Publication Date, Ve—Venue, Tx—Taxonomy, Rl—Reference List, Ck—Citation Knowledge.
References | Ti | Ab | Ke | Au | Af | Pd | Ve | Tx | Rl | Ck
[21]xxxxx x x
[22]xxxx xx
[23]xxxx x
[24]xxxx x x
[25,26]xxxx xx
[27]xxxx x
[28]xxxx
[29]xxx xx
[30]xxx x xx
[31]xxx x
[32]xxx x
[9]xxx xx
[33]xxx xx
[34,35,36,37,38,39,40,41,42,43,44,45,46]xxx
[47]xx x xx x
[48]xx x x xx
[49]xx x x xx
[50]xx x x x
[51]xx x x
[52,53]xx x xx
[54]xx x
[55]xx x x
[56]xx x
[57,58]xx xx
[59,60,61,62,63,64,65,66]xx
[67]x xx xx
[68]x xx xx
[69]x xx x
[70]x x x xx
[71]x x x
[72,73]x x
[74]x x x
[75]x x
[76]x x
[77]x xx
[78]x x
[79] xx xx
[80] x x
[81] x x
[82] x xx
[83] x x
[84,85,86,87,88] x
[89] xxxxx
[90] xx x xx
[91] xx xx
[92] x xx x
[93,94] x xx
[95,96,97] x x
[98,99,100,101,102,103,104] x
[105] x xx
[106] x xx
[107,108] x x
[109,110] x x xx
[111] x xx
[112,113] x x
[114,115] x
[115] x
[116] x xx
[117] x x
[118] x
[119] x
[120] x
[121] x
[122] x
[123,124,125,126,127] xx
[17,20,128,129,130,131,132,133,134,135,136,137,138,139,140,141,142,143,144,145,146,147,148,149,150,151,152,153,154,155,156,157,158,159,160] xx
[161] x
[19] x
[18,162,163,164,165,166,167] x

3.2. User Modelling and Features

A user is a target who receives recommendations based on their needs or preferences. Therefore, building a user profile is a crucial task in any recommender system. This section explores the different types of targets that receive research paper recommendations. There are two types of recommendation tasks: (1) recommending for a piece of work and (2) recommending for a user. A piece of work can be (i) a paper, (ii) a set of papers, (iii) a snapshot of text (titles, abstracts, etc.), or (iv) an ongoing (yet-to-be-published) manuscript [168]. The reviewed work is presented based on the different tasks in Table 2, and further details are available in [168].
Based on the two categories of the target, (i) a piece of work and (ii) a user, different modelling strategies are used. As mentioned earlier, features and preferences are two critical factors of modelling. To model a piece of work, preferences can be information derived from metadata or full text, such as the title [59,114], abstract [68,80], keywords [24,27,90], authors [93,98], publication date [47,92], publication venue [24,90], bibliography (i.e., the list of publications that are referenced in a paper) [19,125,128,149], and various types of citation knowledge [52,134,168,193,194]. Citation knowledge comprises a citation graph, citation section, citation proximity, citation intention, and citation context [168]. Table 3 provides a brief description of each component of citation knowledge. Citation graphs are the most popular citation knowledge, and others are slowly being adopted by the field. A summary of the works that have used citation knowledge to capture the preferences of a recommendation target when the recommendation target is a given piece of work can be seen in Table 4. The distribution of all other features to model targets across reviewed works is summarised in Figure 3. For paper-based details, see Table A1. Note that the citation knowledge in Table A1 comprises all the categorisations of the citation knowledge, and Table 4 presents the finer granularity of the usage of citation knowledge.
In contrast, a user is a researcher whose preferences can be captured using their implicit and explicit feedback. Explicit feedback may consist of ratings [40,88,104,174,179], scoring [37,158], or user accounts, with the topic of interest stated by the user [75,108]. Implicit feedback captures user interactions, such as browsing sessions [89,172,173,191], clicks [73,100,175], bookmarks [38,181,182], and tags [174,179], to name a few. Figure 4 details which target preferences were used by the reviewed papers when the target is the user and whether they were explicit or implicit in nature. For details on individual papers, see Table A2.
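As a minimal sketch of how such signals can be folded into a single preference score per paper, consider the following; the event weights are illustrative assumptions, not values taken from the reviewed literature.
    # Illustrative weights for implicit interactions (assumed, not from the literature).
    IMPLICIT_WEIGHTS = {"click": 1.0, "tag": 2.0, "bookmark": 3.0}

    def preference_score(events, rating=None):
        """Combine implicit events with an optional explicit rating (1-5)."""
        score = sum(IMPLICIT_WEIGHTS.get(e, 0.0) for e in events)
        if rating is not None:
            score += 5.0 * rating   # explicit feedback dominates when present
        return score

    preference_score(["click", "bookmark"], rating=4)   # -> 24.0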
These features are fundamental in constructing a target profile that accurately reflects the user’s current research needs and preferences. For example, the seed paper indicates immediate interests, while authorship and co-authorship reveal broader collaborative contexts. Several works have considered users’ authored publications to extract research interests [66,155,157]. Understanding and effectively modelling these features is crucial for developing an academic RecSys that can deliver personalised and contextually relevant recommendations to users. Each of these features contributes to building a comprehensive user profile that can significantly enhance the user experience by aligning recommendations closely with the user’s needs.

4. Recommendation Approaches

Recommender systems can broadly be classified into Content-Based Filtering (CBF), Collaborative Filtering (CF), and hybrid approaches, which differ in which user and item features they use and how those features are represented. Among them, the hybrid approach, which combines CBF and CF to generate recommendations, is the most widely adopted: about 45% of the reviewed papers used it. Table 5 categorises the papers based on their adopted approaches. Note that a few papers used and/or compared multiple approaches; for example, [82] proposed the use of both CBF and CF. The following sections explain how each method has been applied to the recommendation of scientific publications.

4.1. Content-Based Filtering (CBF) Approach

CBF is a widely researched technique in recommender systems [196]. It analyses content, for example the set of items a user has previously interacted with, and extracts features from those items to build the user profile [196,197]. CBF approaches then match item features against user profiles and generate recommendations based on similarity scores. Following this approach, the CiteSeer system, developed in [9,198], was one of the earliest content-based scholarly recommender systems, recommending relevant scientific literature to users based on their needs. It used textual information from metadata and analysed common citations between documents. The idea, proposed in [9,198], of using citations to build a comprehensive citation network, where nodes are scientific papers and edges are their citations, has been followed by numerous researchers, including [18,20,82,117,128]. Examples include TheAdvisor [20,117], PaperRank [18], and Human Recommender Interaction (HRI) [82].
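A minimal content-based sketch of the profile-matching step described above, with a hypothetical catalogue and interaction history, might look as follows:
    import numpy as np
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.metrics.pairwise import cosine_similarity

    catalogue = [
        "Deep learning for citation recommendation",
        "A survey of collaborative filtering",
        "Neural citation networks for context-aware recommendation",
        "Gardening tips for beginners",
    ]
    liked = [0, 2]   # indices of papers the user has interacted with (assumed)

    vec = TfidfVectorizer(stop_words="english")
    X = vec.fit_transform(catalogue)
    profile = np.asarray(X[liked].mean(axis=0))   # user profile = centroid of liked items
    scores = cosine_similarity(profile, X).ravel()
    scores[liked] = -1.0                          # exclude already-seen items
    recommendations = scores.argsort()[::-1]      # best candidates first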
Table 5. List of reviewed papers on different recommendation approaches.
Recommendation Approach | References
Content-Based Filtering | [19,23,24,31,33,36,37,42,43,44,45,47,49,51,54,55,56,59,60,65,72,73,81,82,83,84,85,86,88,89,91,95,96,97,98,100,101,102,103,105,108,114,118,120,121,124,126,127,142,154,163,166,167,169,170,175,177,181,187,191,199]
Collaborative Filtering | [17,18,20,28,35,41,67,79,82,93,109,112,113,115,117,128,130,131,136,137,138,140,141,143,144,145,148,149,150,151,153,159,160,161,165,179,180,182,200]
Hybrid Filtering | [9,21,22,25,26,27,29,30,32,34,38,39,40,46,48,50,52,53,57,58,61,62,63,64,66,68,69,70,71,74,75,76,77,78,80,87,90,92,94,99,104,106,107,110,111,116,119,122,123,125,129,132,133,134,135,139,146,147,152,155,156,157,158,162,164,168,171,172,173,174,176,178,183,184,185,186,188,189,190,192,193,194,201,202,203]

4.2. Collaborative Filtering (CF) Approach

CF is a popular technique in recommender systems, known for recommending items that are preferred by users with similar preferences [8,204]. Many researchers have adopted CF to develop a research paper RecSys [28,60,108,113,115,128,137,149,153,160,186,205,206,207,208,209,210]. In these systems, user feedback is frequently gathered through citations, as authors acknowledge other researchers’ work by citing it. This citation-based feedback helps construct a user-item matrix by treating research papers as users and their references as items [58,126,128,155,157,160,211]. While CF is a popular technique in e-commerce, it is less commonly adopted in academic recommendations compared to CBF.
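Under the papers-as-users, references-as-items formulation described above, a minimal item-based sketch (with an assumed toy citation matrix) can score unseen references by how often they are co-cited with the references a paper already contains:
    import numpy as np

    # Toy citation matrix (assumed): rows are citing papers ("users"),
    # columns are candidate references ("items"); R[i, j] = 1 if paper i cites j.
    R = np.array([
        [1, 1, 0, 1],
        [1, 0, 1, 0],
        [0, 1, 0, 1],
    ], dtype=float)

    co = R.T @ R                 # item-item co-citation counts
    np.fill_diagonal(co, 0.0)    # ignore self-similarity

    seen = R[0] > 0              # references paper 0 already cites
    scores = co[:, seen].sum(axis=1)
    scores[seen] = -np.inf       # only recommend unseen references
    best = int(scores.argmax())  # most strongly co-cited unseen reference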

4.3. Hybrid-Based Filtering Approach

The hybrid approach combines CBF and CF to leverage the strengths of each method while overcoming their individual limitations. Burke [212] pioneered hybrid systems and demonstrated that combining multiple techniques improves recommendation accuracy and flexibility. Examples include the Entree restaurant recommender and the FindMe systems, where users can update features and receive relevant recommendations [189,212,213,214]. These systems highlight the flexibility and adaptability of hybrid approaches in providing personalised and effective recommendations. Burke [215] later refined the hybrid approach in the Entree system, while [125,149] explored hybrid methods for recommending research publications.
Further advancements include West et al. [17], who developed a state-of-the-art hybrid system using citation data. Their work builds on the taxonomy of hybrid systems proposed by Burke [212], emphasising that no single technique can address all recommendation challenges. Several hybrid systems have been created; for example, ref. [62] combined traditional CF with probabilistic topic modelling, specifically Latent Dirichlet Allocation (LDA) as in CBF, to provide an interpretable latent structure for users and items, allowing recommendations for both existing and newly published articles. This method demonstrates how hybrid systems can alleviate the cold-start issue. Likewise, ref. [189] proposed a hybrid system that utilised research disciplines and key terms from papers’ titles, abstracts, keywords, and body sections to link publications within a graph network. Similarly, Hristakeva [66] combined CF with implicit feedback from user interactions to develop a hybrid scholarly recommender system that incorporated various features, such as users’ personal library information. The reviewed works are categorised by approach in Table 5.
Recent advances in Natural Language Processing (NLP) have led to a shift from sparse or topic-based representations (e.g., TF-IDF, LDA, or Doc2Vec) to contextual embeddings that capture richer semantic and syntactic information. Transformer-based architectures, notably Bidirectional Encoder Representations from Transformers (BERT), have played a pivotal role in this transition by introducing deep bidirectional encoding of text sequences. SciBERT is a domain-specific BERT model trained on 1.14M full-text papers from Semantic Scholar [216]. Unlike general-purpose BERT, SciBERT captures the technical and domain-specific terminology common in academic writing. When used in recommendation pipelines, SciBERT consistently yields better representations for downstream tasks such as clustering, linking, and classification than vanilla BERT or word2vec-style embeddings. SPECTER [200] is another example; it uses SciBERT as the base transformer and fine-tunes it on citation triples so that the embeddings of citing and cited papers lie closer in the vector space. SPECTER outperforms TF-IDF, Doc2Vec, and unsupervised BERT-based baselines across multiple benchmark tasks, including citation prediction and related-paper retrieval. On the Microsoft Academic Graph (MAG) and OpenCitations benchmarks, SPECTER has shown up to +10% NDCG@10 gains over TF-IDF and Doc2Vec. Likewise, BERT-GCN [217] combines contextual embeddings with graph convolutional networks by integrating text and citation graph features. This hybrid design enables joint learning from paper content and citation structure. On public citation networks such as Cora, PubMed, and MAG, BERT-GCN improves micro-F1 scores for paper classification and link prediction over both GCN-only and BERT-only setups.
In contrast to earlier methods such as TF-IDF, which rely on sparse vector spaces with limited semantic understanding, or topic models like LDA, which assume a fixed vocabulary and topic space, transformer-based models dynamically learn context-aware representations. These models can disambiguate polysemous terms, model long-range dependencies, and generalise better across disciplines, making them particularly well suited to scholarly recommendation tasks where nuanced textual signals are critical.
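As a sketch of the embedding pipeline described above, the following assumes the publicly released SPECTER checkpoint ("allenai/specter" on Hugging Face) and embeds each paper from its title and abstract; any SciBERT-style encoder could be substituted, and the paper metadata shown is hypothetical.
    import torch
    from transformers import AutoTokenizer, AutoModel

    tok = AutoTokenizer.from_pretrained("allenai/specter")
    model = AutoModel.from_pretrained("allenai/specter")

    papers = [  # hypothetical metadata
        {"title": "Citation recommendation survey", "abstract": "We review ..."},
        {"title": "Graph neural networks", "abstract": "We study ..."},
    ]
    texts = [p["title"] + tok.sep_token + p["abstract"] for p in papers]
    batch = tok(texts, padding=True, truncation=True, max_length=512, return_tensors="pt")
    with torch.no_grad():
        out = model(**batch)
    embeddings = out.last_hidden_state[:, 0, :]   # [CLS] vector per paper
    # Related-paper retrieval: rank candidates by cosine similarity between rows.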

5. Evaluation

This section discusses the evaluation of recommender systems and related components, including evaluation methods and metrics, used for the academic RecSys.

5.1. Dataset

A dataset is crucial for assessing the relevance of recommendations generated by a recommender system. During this review, it was observed that ground-truth datasets specifically for research paper recommendation are limited and often unsatisfactory. Not all research publications, especially peer-reviewed ones, are publicly available, and no available dataset contains all published (i.e., peer-reviewed and preprint) research publications. Many researchers have therefore created their own datasets by downloading or crawling research publications from sources such as digital libraries. The experimental datasets used in the reviewed works range from 15 articles to 2 million; given this irregularity, dataset sizes are not tabulated here. Table 6 presents a curated list of publicly available datasets, including AMiner, OpenCitations, Open Academic Graph, ArXiv, CORE, and CiteULike. While these datasets offer valuable resources, they often lack full-text access, user interaction histories, or citation contexts, which are critical for advanced recommendation tasks. The scarcity of datasets that combine full-text content with user behaviour and citation metadata remains a major bottleneck in the field.

5.2. Evaluation Methods

Evaluation methods in scholarly RecSys can be broadly categorised into three types: offline evaluations, online evaluations, and user studies. User studies typically involve a small group of participants who either complete questionnaires or use a controlled application for a set period. Online evaluations, by contrast, involve live systems, often without users being aware that they are part of an evaluation process [219,220]. More than 70% of the reviewed papers employed offline evaluation methods, followed by user studies, with only a few utilising live online evaluation. Surprisingly, several papers did not specify or conduct any evaluation [65,76,180,189,191,192,207,221,222,223,224,225,226,227], which raises questions about the validity and quality of that work.

5.2.1. Offline Evaluation Method

The offline evaluation method does not require active user participation and typically measures the accuracy of a system using pre-collected, static datasets. The most common approach is to split the dataset into training and testing sets, where the system is trained on the former and predictions are made on the latter. The “leave-one-out” method, where a reference from a paper’s bibliography is removed and the system’s ability to predict the missing reference is tested, is widely used [149,186]. However, offline methods have limitations. They rely on static datasets that may not include recent or novel items, leading to potential biases in the evaluation. Assessing user-centric judgement is also challenging through offline evaluation [82,228,229]. Nevertheless, offline evaluation remains popular due to its cost-effectiveness and convenience, allowing for rapid testing of multiple algorithms [219,230]. The most common metrics for offline evaluation include Precision, Recall, F-measure, Normalised Discounted Cumulative Gain (nDCG), Mean Reciprocal Rank (MRR), and Mean Average Precision (MAP).
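A minimal sketch of these ranking metrics, applied to a leave-one-out split in which the held-out reference is the only relevant item, might look like this (the paper identifiers are toy assumptions):
    import numpy as np

    def precision_at_k(ranked, relevant, k):
        return len(set(ranked[:k]) & relevant) / k

    def recall_at_k(ranked, relevant, k):
        return len(set(ranked[:k]) & relevant) / len(relevant)

    def ndcg_at_k(ranked, relevant, k):
        dcg = sum(1.0 / np.log2(i + 2) for i, d in enumerate(ranked[:k]) if d in relevant)
        idcg = sum(1.0 / np.log2(i + 2) for i in range(min(len(relevant), k)))
        return dcg / idcg

    def reciprocal_rank(ranked, relevant):
        return next((1.0 / (i + 1) for i, d in enumerate(ranked) if d in relevant), 0.0)

    ranked = ["p4", "p1", "p7", "p9"]    # system output, best first
    relevant = {"p1"}                    # the held-out ("left-out") reference
    precision_at_k(ranked, relevant, 3)  # -> 1/3
    reciprocal_rank(ranked, relevant)    # -> 1/2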

5.2.2. Online Evaluation Method

Online evaluations assess the interaction between users and recommendations in live systems. They provide a more accurate reflection of user satisfaction, as they capture real user behaviour [11,82,231]. Despite their importance, online evaluations are less common, with only a few studies utilising this method [17,48,187,220]. Usage logs are a valuable tool in online evaluations, offering insights into how users interact with recommendations and allowing for retrospective analysis of system performance.
A/B testing is a key online evaluation method, enabling comparisons between different system versions by measuring variations in user interactions, such as clicks and downloads [230]. However, relying solely on implicit feedback from these interactions may not fully capture user satisfaction, as clicks might be accidental or not indicative of actual interest [220,232]. Therefore, combining implicit feedback with explicit user input, like reviews or comments, is recommended for a more comprehensive evaluation.
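For instance, click-through rates from the two variants of an A/B test are commonly compared with a two-proportion z-test; a minimal sketch with made-up counts follows:
    from math import sqrt
    from statistics import NormalDist

    clicks_a, views_a = 120, 2400   # control variant (hypothetical counts)
    clicks_b, views_b = 150, 2380   # variant with the new ranking

    p_a, p_b = clicks_a / views_a, clicks_b / views_b
    p = (clicks_a + clicks_b) / (views_a + views_b)       # pooled click-through rate
    z = (p_b - p_a) / sqrt(p * (1 - p) * (1 / views_a + 1 / views_b))
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))          # two-sided significance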

5.2.3. User Studies

User studies focus on user feedback to evaluate recommendations. Participants are typically asked to evaluate recommendations based on aspects such as novelty, usefulness, and serendipity [70,125,128,149,233]. This method is valuable for simulating user behaviour and can be particularly useful before deploying a system to ensure that it meets user expectations [230]. However, user studies can be expensive and time-consuming, especially when recruiting knowledgeable participants [219,220].
In summary, offline methods are suitable for initial algorithm comparison due to their efficiency and cost-effectiveness. However, user-centric evaluations, such as user studies or online testing, are essential for ensuring that systems meet the ultimate goal of satisfying user needs. Some researchers have effectively combined these methods, conducting both offline evaluations for accuracy and user studies for user-centric assessment [28,125,128,134,149]. Table 7 lists the popularity of different evaluation methods in academic recommender systems.

5.3. Evaluation Metrics

Evaluation metrics are quantifiable measures used to assess the performance of a RecSys. These metrics are crucial for understanding how well a system meets its intended goals. In the domain of recommender systems, metrics are generally categorised into two types: item-centric and user-centric. Item-centric metrics primarily focus on the accuracy of recommendations. The most common method in this category is the “leave-one-out” approach, where a portion of the dataset is withheld and used as test data to evaluate the system’s ability to predict accurate results [230]. Accuracy is typically measured by the precision and correctness of the recommended items [262,263]. While accuracy is important, it alone may not be sufficient to meet the diverse and subjective needs of users. Researchers have argued for a broader focus that includes user-centric evaluations such as serendipity, novelty, and diversity [82,264,265,266]. These user-centric metrics address the qualitative aspects of user experience, which are crucial for building trust and satisfaction with the system.
Serendipity refers to the discovery of unexpected yet useful items. It captures the element of surprise in recommendations, where users find something valuable that they did not actively seek [264,267,268,269]. Although only a few studies focus on serendipity, it is key to increasing user engagement by providing novel and surprising recommendations [57,126,155]. Different techniques have been explored: for example, ref. [268] used long-tail items, while [269] utilised time rareness and dissimilarity to achieve serendipity. Diversity measures how dissimilar the recommended items are from one another. It helps prevent overspecialisation, where the system repeatedly recommends similar items, reducing the overall effectiveness of the recommendations [117,210]. Strategies to increase diversity include re-ranking recommendations and promoting long-tail items to the top of the list [270]. Novelty focuses on recommending items that are unknown or new to the user [271]. This metric is particularly useful for experienced researchers who are already familiar with much of the existing literature in their field, as novel recommendations can keep them informed about recent developments and emerging trends [118,251].
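Simple operationalisations of two of these metrics, assuming item vectors and per-item popularity counts are available, are sketched below:
    import numpy as np
    from sklearn.metrics.pairwise import cosine_similarity

    def intra_list_diversity(item_vectors):
        """Mean pairwise dissimilarity (1 - cosine) within one recommendation list."""
        sim = cosine_similarity(item_vectors)
        n = sim.shape[0]
        return (1.0 - sim)[np.triu_indices(n, k=1)].mean()

    def novelty(recommended, popularity, n_users):
        """Mean self-information: rarely interacted-with items score as more novel."""
        return float(np.mean([-np.log2(popularity[i] / n_users) for i in recommended]))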
The shift from accuracy-focused metrics to user-centric evaluations is gaining interest within the research community. While accuracy remains important, it is increasingly recognised that a sole focus on precision can fail to meet user expectations and reduce engagement with the system [82,265,266]. Therefore, the ongoing debate about the best evaluation methods is critical; user-centric approaches like user studies and online evaluations become essential for systems aiming to provide more than just accuracy.

6. Discussion and Conclusions

Finding relevant publications in huge document libraries is becoming ever more challenging. Although new tools such as large language models (LLMs) have emerged, they are still in their infancy and may suffer from hallucination. Therefore, a robust academic RecSys that can suggest serendipitous, recent, diverse, and relevant materials—not only similar ones—is essential. This section discusses our investigation of the various factors relating to academic RecSys. This review reveals that CBF is the predominant technique, employed (alone or within hybrid systems) by over 70% of the reviewed papers. The TF-IDF algorithm is widely used to identify relationships between documents and generate recommendations. However, recommendations based on textual similarity may fail to distinguish between different types of papers or their quality, potentially leading to recommendations of less relevant or lower-quality materials. For example, influential papers and their reproductions by novice researchers might be weighted equally despite their differing impacts. Citation-based approaches are also common, as citations are less prone to issues such as ambiguity and synonymy than text-based methods. However, citation-based approaches have their own limitations, such as treating all citations equally, which does not reflect their varying significance. Additionally, these approaches are susceptible to “topic drifting”, where citations serve different purposes (e.g., defining concepts, providing background, and supporting methodology) and thus should not be treated uniformly.
A major gap identified in this review is the underutilisation of rich citation knowledge, including citation context, intention, and section. These features, though shown to improve recommendation quality, are rarely implemented due to the lack of standardised datasets and the computational complexity of extracting them from full-text documents. This highlights a pressing need for open, annotated corpora and scalable NLP pipelines that can support fine-grained citation analysis.
Evaluation methods and metrics are another important aspect of academic RecSys that needs attention. There are significant challenges, particularly when determining the most promising methods. Offline evaluation is cost-effective and generates results quickly, making it a popular choice compared to online methods. However, user studies, which involve human judges to assess user satisfaction, offer deeper insights but are more expensive in terms of time and cost, and they require subject matter experts, who can be challenging to find. Online testing, while comprehensive, is also costly due to the need for sophisticated infrastructure and extended time frames to obtain stable results. Moreover, online testing can be compromised by noisy data, such as unintentional clicks or downloads, which may introduce false positives. Researchers, including [3,11,82,231], have argued that offline evaluation is insufficient, as it fails to reflect real-world scenarios accurately. Offline methods struggle to capture users’ preferences, which are the ultimate goal of recommender systems. Despite these limitations, offline evaluation remains widely favoured, particularly when access to real-world systems is limited.
Given these challenges, it has been proposed that a combination of evaluation methods, specifically offline testing followed by user studies or online evaluations, could be a more effective approach. Initially, offline testing can be used to validate the effectiveness and efficiency of algorithms. Once these algorithms demonstrate accuracy, they can be subjected to user studies or extensive online evaluations to assess user satisfaction and subjective metrics. This mixed-method approach could enhance the reliability and applicability of recommender systems.
There is significant inconsistency in the size and scope of datasets used for experiments, ranging from as few as fifteen articles [50] to over two million [272]. This variability, coupled with the lack of publicly available datasets, contributes to issues of reproducibility. While a handful of researchers share their datasets and facilitate reproducibility [58,75,193,194], there are significant difficulties in replicating and validating findings across the field. Many papers suffer from a lack of clarity in their descriptions of methodologies, making it difficult to replicate studies. For example, ambiguities in the representation of features, the absence of comparison with baselines, and insufficiently detailed explanations are common issues that hinder the reproducibility of research in this field.
Despite rapid advances in modelling capabilities, ethical considerations such as fairness, bias, and privacy remain underexplored in scholarly recommender systems. Demographic and institutional biases, for example, overrepresentation of English-language or Western-affiliated research, can be amplified by algorithmic pipelines, leading to homogenised or exclusionary outputs [273]. Similarly, filter bubbles may emerge when recommender systems overfit to narrow domains or citation cliques, reinforcing intellectual silos and limiting exposure to diverse or interdisciplinary work. Another major concern is the privacy of usage data, particularly reading logs or download histories, which are often used for implicit feedback signals but can reveal sensitive user attributes or affiliations if not handled responsibly [274]. Addressing these issues will require integrating fairness-aware learning objectives, differential privacy mechanisms, and critical audits of training data pipelines into future system designs. Finally, there is a pressing need to move beyond accuracy as the sole metric for evaluating recommendation systems. User satisfaction, trust, and confidence are equally important, yet they are often overlooked. Higher accuracy does not necessarily correlate with user satisfaction, and neglecting these subjective factors can undermine the effectiveness of recommendation systems. Future research should emphasise user-centric evaluations to ensure that systems meet the diverse needs of their users.

7. Future Research Directions

This survey reviewed over 200 research papers published between 1990 and 2024 that address the task of research paper recommendation and highlighted the evolution of feature selection and augmentation aimed at improving research paper recommendations. It was noticed that early studies primarily relied on keyword searches extracted from the title, abstract, and keyword sections of publications. With advancements in technology, including full-text accessibility and enhanced software capabilities, feature augmentation has expanded to include citation position, citation context, and critical information from sections such as the Introduction, Related Work, and Conclusion. Researchers now leverage a wide range of ML algorithms, from simple models like K-Nearest Neighbour (KNN) to deep learning techniques like Long Short-Term Memory (LSTM) networks. As a result, several new research avenues have emerged, which are outlined below:
  • Interdisciplinary Recommendations: Interdisciplinary recommendations have become increasingly significant, with data indicating that 80% of recent studies are interdisciplinary in nature. Despite the recognition of its importance, as mentioned by researchers [126,155], there remains a gap in developing recommender systems that cater specifically to interdisciplinary studies. It is suggested that future research should focus on creating systems capable of facilitating interdisciplinary recommendations, thereby pushing the boundaries of academic exploration.
  • Recommendation with Explanation: Recommender systems are designed to help users navigate vast information spaces. As these systems evolve to address users’ diverse informational needs, incorporating explanations for recommendations becomes critical. Providing reasoning for why a particular item is recommended can significantly enhance user satisfaction and trust. However, achieving this will require the development of richer datasets, comprehensive evaluation metrics, and possibly larger volunteer-driven studies to test and refine these systems.
  • User Modelling, Satisfaction, and Personalised Recommendations: Our review indicates that current research tends to prioritise similarity-based matching between user profiles and item attributes. This approach, while effective, often leads to redundant recommendations, reducing user satisfaction. Future research should focus on developing more nuanced user models that go beyond content-based matching, emphasising serendipity and diversity in recommendations that could increase user engagement. Additionally, as user-centric approaches gain prominence, there is a growing need for personalised recommendations that respect user privacy, a concern that must be addressed in the design of future systems.
  • Topic Evolution: An intriguing direction for future research involves incorporating topic evolution into recommender systems. By tracking how research areas evolve over time, systems could generate “must-read” lists tailored to a user’s previous reading history. This would be particularly useful for providing recommendations that reflect the latest developments in a field. Additionally, recommending various types of content—such as literature reviews or interdisciplinary papers—based on a user’s expertise could enhance the utility of these systems.
  • Situational Awareness: The needs of a new PhD student differ significantly from those of an established researcher. Current recommender systems do not adequately account for these different research contexts. Addressing situational awareness in recommendation systems could lead to more tailored and effective recommendations for users at different stages of their academic careers.
  • Sparsity: The vast discrepancy between the number of publications and user interactions creates a highly sparse user-item matrix, posing a significant challenge for recommendation systems. Therefore, developing advanced techniques to mitigate this sparsity, particularly in collaborative filtering, is crucial for improving recommendation accuracy.
  • Reproducibility: A significant issue in the field is the lack of transparency in the implementation of recommendation approaches. The absence of shared code, datasets, and detailed methodological information impedes reproducibility, which is critical for the advancement of the field. Addressing these issues by promoting openness and methodological clarity will be essential for fostering robust scientific progress.
  • Emerging Role of Generative AI (GenAI) and Large Language Models (LLMs): Recent advances in GenAI and LLMs, such as GPT-4, LLaMA, and Claude, have started to influence scholarly paper recommendation systems, as in several other domains. These models enable novel capabilities such as generative retrieval, conversational recommendation, and cold-start mitigation by synthesising paper representations from minimal metadata. However, they also introduce challenges around hallucination, bias amplification, reproducibility, and computational cost. While our survey focused on established and domain-adapted traditional approaches and LLMs (e.g., SciBERT, SPECTER, and BERT-GCN), exploring the integration of general-purpose GenAI in RecSys and addressing its unique risks represent promising directions for future research and warrant dedicated investigation.

Author Contributions

Conceptualisation, A.K.; Methodology, A.K.; Formal Analysis, A.K.; Investigation, A.K.; Writing—Original Draft Preparation, A.K.; Writing—Review & Editing, A.K. and S.S.; Funding Acquisition, S.S. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Conflicts of Interest

Author Saurav Sthapit was employed by the company Analog Devices. The remaining authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Appendix A. Additional Materials

Table A1. List of reviewed papers categorised based on target preferences when the target is a piece of work. Abbreviations: Ci—Citing, Ti—Title, Ab—Abstract, Ke—Keywords, Au—Author, Ve—Venue, Py—Publication Year, Ft—Terms from Free Text, Tx—Taxonomy, Ck—Citation Knowledge.
References | Ci | Ti | Ab | Ke | Au | Ve | Py | Ft | Tx | Ck
[24]xxxxxx x
[82]xxxxx x
[136,143]x x
[50]xxx xx x
[52]xxx x
[70]xx x x x
[78]xx x
[90]x xx x x
[83]x x x
[27] 1x xx x
[109]x xx x
[135,164]x x x
[116]x x x
[19,61,123] 2x x
[17,18,20,128,129,133,134,137,138,139,140,144,145,146,148,149,150,151,152,162,163]x x
[165]x
[69] x xx
[141,147] x
[130,131,132]
[49] xx xx
[59,60] xx
[71,72] x x
[76] x
[84,85] x
[93] x
[124] xx
[142] 3 x
[169,170] 4
[9,21,25,47,51,53,55,56,68,74,77,79,81,86,92,95,96,97,98,110,111,112,114,153,154,161,166,167,171] x
[76] x
1: No mention of entities to extract terms; 2: NoMT; 3: NoMT; 4: NoMT.
Table A2. Reviewed papers categorised by target preferences when the target is a user. Abbreviations: A—Authoring, B—Browsing, T—Tagging, Bm—Bookmarking, Sc—Scoring, Rd—Reading, Cl—Clicking, R—Rating, V—Viewing, D—Downloading, P—Profile availability, Sr—Searching, Ac—Accessing, Sh—Sharing, Vo—Voting, Cm—Commenting, An—Annotating, Ci—Citing.
Ref. | Implicit Feedback: A, B, T, Sv, Bm, Rd, Cl, V, D, Sr, Ac, Sh, Cm, An, Ci | Explicit Feedback: Sc, R, P, Vo
[91,185] x x x
[26] x x
[34,36,43,44,89,122,172,173,191] x
[182] xxxx
[37]x xx x x
[41] xx
[38,87,181] x x
[40,88,94,104,174,177,179] x x
[175] xxxx
[187] x x x
[66] x x
[62,63,67,115] x
[48] xxxx x x x
[107,186,188] x
[29,156]x x
[184]x x x
[45,176,183] x
[158] x x x x
[113] x x
[101,102,103,117] x
[39] xx x x
[73,100] x
[42,54,180] x
[65,99,118,178] x
[125] x
[75,108] x
[64] x x
[22] x
[127] x
[28,30,57,58,126,155,157,159]x x
[119] x
[23,31,32,33,35,46,106,120,121,160,189,190]x

References

  1. De Solla Price, D.J. Networks of Scientific Papers. Science 1965, 149, 510–515. [Google Scholar] [CrossRef]
  2. Bornmann, L.; Mutz, R. Growth rates of modern science: A bibliometric analysis. J. Assoc. Inf. Sci. Technol. 2015, 66, 2215–2222. [Google Scholar] [CrossRef]
  3. Beel, J.; Gipp, B.; Langer, S.; Breitinger, C. Research Paper Recommender Systems: A Literature Survey. Int. J. Digit. Libr. 2016, 17, 305–338. [Google Scholar] [CrossRef]
  4. Holdings of the ACM DL. 2018. Available online: https://dl.acm.org/contents_guide.cfm?coll=portal&dl=GUIDE (accessed on 28 January 2019).
  5. Orduna-Malea, E.; Ayllón, J.M.; Martín-Martín, A.; Delgado López-Cózar, E. Methods for estimating the size of Google Scholar. Scientometrics 2015, 104, 931–949. [Google Scholar] [CrossRef]
  6. Gusenbauer, M. Google Scholar to Overshadow Them All? Comparing the Sizes of 12 Academic Search Engines and Bibliographic Databases. Scientometrics 2019, 118, 177–214. [Google Scholar] [CrossRef]
  7. arXiv Monthly Submission Rates. Available online: https://arxiv.org/stats/monthly_submissions (accessed on 14 July 2024).
  8. Goldberg, D.; Nichols, D.; Oki, B.M.; Terry, D. Using Collaborative Filtering to Weave an Information Tapestry. Commun. ACM 1992, 35, 61–70. [Google Scholar] [CrossRef]
  9. Bollacker, K.D.; Lawrence, S.; Giles, C.L. CiteSeer: An Autonomous Web Agent for Automatic Retrieval and Identification of Interesting Publications. In Proceedings of the Second International Conference on Autonomous Agents, Minneapolis, MN, USA, 10–13 May 1998; pp. 116–123. [Google Scholar] [CrossRef]
  10. Resnick, P.; Iacovou, N.; Suchak, M.; Bergstrom, P.; Riedl, J. GroupLens: An open architecture for collaborative filtering of netnews. In Proceedings of the CSCW ’94, Chapel Hill, NC, USA, 22–26 October 1994. [Google Scholar]
  11. Konstan, J.; Riedl, J. Recommender systems: From algorithms to user experience. User Model.-User-Adapt. Interact. 2012, 22, 101–123. [Google Scholar] [CrossRef]
  12. Jones, K.S. A statistical interpretation of term specificity and its application in retrieval. J. Doc. 1972, 28, 11–21. [Google Scholar] [CrossRef]
  13. Robertson, S.; Zaragoza, H. The Probabilistic Relevance Framework: BM25 and Beyond. Found. Trends Inf. Retr. 2009, 3, 333–389. [Google Scholar] [CrossRef]
  14. Mikolov, T.; Sutskever, I.; Chen, K.; Corrado, G.; Dean, J. Distributed Representations of Words and Phrases and Their Compositionality. In Proceedings of the 26th International Conference on Neural Information Processing Systems, Lake Tahoe, NV, USA, 3–8 December 2013; Curran Associates Inc.: Red Hook, NY, USA, 2013; Volume 2, pp. 3111–3119. [Google Scholar]
  15. Pennington, J.; Socher, R.; Manning, C.D. GloVe: Global Vectors for Word Representation. In Proceedings of the Empirical Methods in Natural Language Processing (EMNLP), Doha, Qatar, 25–29 October 2014; pp. 1532–1543. [Google Scholar]
  16. Lawrence, S.; Giles, C.L.; Bollacker, K. Digital libraries and autonomous citation indexing. Computer 1999, 32, 67–71. [Google Scholar] [CrossRef]
  17. West, J.D.; Wesley-Smith, I.; Bergstrom, C.T. A Recommendation System Based on Hierarchical Clustering of an Article-Level Citation Network. IEEE Trans. Big Data 2016, 2, 113–123. [Google Scholar] [CrossRef]
  18. Gori, M.; Pucci, A. Research Paper Recommender Systems: A Random-Walk Based Approach. In Proceedings of the 2006 IEEE/WIC/ACM International Conference on Web Intelligence (WI 2006 Main Conference Proceedings) (WI’06), Hong Kong, China, 18–22 December 2006; pp. 778–781. [Google Scholar] [CrossRef]
  19. Strohman, T.; Croft, W.B.; Jensen, D. Recommending Citations for Academic Papers. In Proceedings of the 30th International ACM SIGIR Conference on Research and Development in Information Retrieval, Amsterdam, The Netherlands, 23–27 July 2007; pp. 705–706. [Google Scholar]
  20. Kucuktunc, O.; Saule, E.; Kaya, K.; Çatalyürek, U. TheAdvisor: A Webservice for Academic Recommendation. In Proceedings of the 13th ACM/IEEE-CS Joint Conference on Digital Libraries, JCDL ’13, Indianapolis, IN, USA, 22–26 July 2013; pp. 433–434. [Google Scholar] [CrossRef]
  21. Liu, X.; Yu, Y.; Guo, C.; Sun, Y. Meta-Path-Based Ranking with Pseudo Relevance Feedback on Heterogeneous Graph for Citation Recommendation. In Proceedings of the 23rd ACM International Conference on Conference on Information and Knowledge Management, CIKM ’14, Shanghai, China, 3–7 November 2014; ACM: New York, NY, USA, 2014; pp. 121–130. [Google Scholar] [CrossRef]
  22. Hwang, S.Y.; Wei, C.P.; Lee, C.H.; Chen, Y.S. Coauthorship network-based literature recommendation with topic model. Online Inf. Rev. 2017, 41, 318–336. [Google Scholar] [CrossRef]
  23. Alzoghbi, A.; Ayala, V.; Fischer, P.; Lausen, G. PubRec: Recommending Publications Based on Publicly Available Meta-Data; CEUR-WS: Aachen, Germany, 2015; Volume 1458, pp. 11–18. [Google Scholar]
  24. Livne, A.; Gokuladas, V.; Teevan, J.; Dumais, S.; Adar, E. CiteSight: Supporting Contextual Citation Recommendation Using Differential Search; ACM—Association for Computing Machinery: New York, NY, USA, 2014. [Google Scholar]
  25. Mu, D.; Guo, L.; Cai, X.; Hao, F. Query-Focused Personalized Citation Recommendation with Mutually Reinforced Ranking. IEEE Access 2017, 6, 3107–3119. [Google Scholar] [CrossRef]
  26. Hwang, S.Y.; Wei, C.P.; Liao, Y.F. Coauthorship networks and academic literature recommendation. Electron. Commer. Res. Appl. 2010, 9, 323–334. [Google Scholar] [CrossRef]
  27. Guo, L.; Cai, X.; Hao, F.; Mu, D.; Fang, C.; Yang, L. Exploiting Fine-Grained Co-Authorship for Personalized Citation Recommendation. IEEE Access 2017, 5, 12714–12725. [Google Scholar] [CrossRef]
  28. Lee, J.; Lee, K.; Kim, J.G. Personalized Academic Research Paper Recommendation System. arXiv 2013, arXiv:1304.5457. [Google Scholar] [CrossRef]
  29. Sun, J.; Jiang, Y.; Cheng, X.; Du, W.; Liu, Y.; Ma, J. A hybrid approach for article recommendation in research social networks. J. Inf. Sci. 2018, 44, 696–711. [Google Scholar] [CrossRef]
  30. Yang, W.S.; Lin, Y.R. A task-focused literature recommender system for digital libraries. Online Inf. Rev. 2013, 37, 581–601. [Google Scholar] [CrossRef]
  31. Sun, J.; Ma, J.; Liu, X.; Liu, Z.; Wang, G.; Jiang, H.; Silva, T. A Novel Approach for Personalized Article Recommendation in Online Scientific Communities. In Proceedings of the 2013 46th Hawaii International Conference on System Sciences, Wailea, HI, USA, 7–10 January 2013; pp. 1543–1552. [Google Scholar] [CrossRef]
  32. Tejeda-Lorente, A.; Porcel, C.; Bernabé-Moreno, J.; Herrera-Viedma, E. REFORE: A recommender system for researchers based on bibliometrics. Appl. Soft Comput. J. 2015, 30, 778–791. [Google Scholar] [CrossRef]
  33. Chen, J.; Ban, Z. Literature recommendation by researchers’ publication analysis. In Proceedings of the 2016 IEEE International Conference on Information and Automation (ICIA), Ningbo, China, 1–3 August 2016; pp. 1964–1969. [Google Scholar] [CrossRef]
  34. Hwang, S.Y.; Chuang, S.M. Combining article content and Web usage for literature recommendation in digital libraries. Online Inf. Rev. 2004, 28, 260–272. [Google Scholar] [CrossRef]
  35. Loh, S.; Lorenzi, F.; Granada, R.; Lichtnow, D.; Krug Wives, L.; De Oliveira, J. Identifying Similar Users by Their Scientific Publications to Reduce Cold Start in Recommender Systems, Lisbon. In Proceedings of the WEBIST 2009–Proceedings of the 5th International Conference on Web Information Systems and Technologies, Lisbon, Portugal, 23–26 March 2009; pp. 593–600. [Google Scholar]
  36. Ohta, M.; Hachiki, T.; Takasu, A. Related paper recommendation to support online-browsing of research papers. In Proceedings of the Fourth International Conference on the Applications of Digital Information and Web Technologies (ICADIWT 2011), Stevens Point, WI, USA, 4–6 August 2011; pp. 130–136. [Google Scholar] [CrossRef]
  37. Wang, Y.; Liu, J.; Dong, X.; Liu, T.; Huang, Y. Personalized Paper Recommendation Based on User Historical Behavior. In Natural Language Processing and Chinese Computing; Zhou, M., Zhou, G., Zhao, D., Liu, Q., Zou, L., Eds.; Springer: Berlin/Heidelberg, Germany, 2012; pp. 1–12. [Google Scholar]
  38. Pera, M.; Ng, Y.K. Exploiting the wisdom of social connections to make personalized recommendations on scholarly articles. J. Intell. Inf. Syst. 2014, 42, 371–391. [Google Scholar] [CrossRef]
  39. Ma, K.; Lu, T.; Abraham, A. Hybrid Parallel Approach for Personalized Literature Recommendation System; Institute of Electrical and Electronics Engineers Inc.: Porto, Portugal, 2014; pp. 31–36. [Google Scholar] [CrossRef]
  40. Sun, J.; Ma, J.; Liu, Z.; Miao, Y. Leveraging Content and Connections for Scientific Article Recommendation in Social Computing Contexts. Comput. J. 2014, 57, 1331–1342. [Google Scholar] [CrossRef]
  41. Bansal, T.; Belanger, D.; McCallum, A. Ask the GRU: Multi-Task Learning for Deep Text Recommendations; Association for Computing Machinery, Inc.: New York, NY, USA, 2016; pp. 107–114. [Google Scholar] [CrossRef]
  42. Alzoghbi, A.; Ayala, V.; Fischer, P.; Lausen, G. Learning-to-rank in research paper CBF recommendation: Leveraging irrelevant papers. CEUR Workshop Proc. 2016, 1673, 43–46. [Google Scholar]
  43. Zhao, W.; Wu, R.; Dai, W.; Dai, Y. Research Paper Recommendation Based on the Knowledge Gap. In Proceedings of the 2015 IEEE International Conference on Data Mining Workshop (ICDMW), Atlantic City, NJ, USA, 14–17 November 2015; pp. 373–380. [Google Scholar] [CrossRef]
  44. Zhao, W.; Wu, R.; Liu, H. Paper recommendation based on the knowledge gap between a researcher’s background knowledge and research target. Inf. Process. Manag. 2016, 52, 976–988. [Google Scholar] [CrossRef]
  45. Al Alshaikh, M.; Uchyigit, G.; Evans, R. A Research Paper Recommender System Using a Dynamic Normalized Tree of Concepts Model for User Modelling; IEEE Computer Society: Brighton, UK, 2017; pp. 200–210. [Google Scholar] [CrossRef]
  46. Ma, X.; Wang, R. Personalized Scientific Paper Recommendation Based on Heterogeneous Graph Representation. IEEE Access 2019, 7, 79887–79894. [Google Scholar] [CrossRef]
  47. Zarrinkalam, F.; Kahani, M. SemCiR: A citation recommendation system based on a novel semantic distance measure. Program 2013, 47, 92–112. [Google Scholar] [CrossRef]
  48. Xue, H.; Guo, J.; Lan, Y.; Cao, L. Personalized Paper Recommendation in Online Social Scholar System; Institute of Electrical and Electronics Engineers Inc.: Beijing, China, 2014; pp. 612–619. [Google Scholar] [CrossRef]
  49. Ren, X.; Liu, J.; Yu, X.; Khandelwal, U.; Gu, Q.; Wang, L.; Han, J. ClusCite: Effective Citation Recommendation by Information Network-Based Clustering. In Proceedings of the KDD ’14: The 20th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, New York, NY, USA, 24–27 August 2014; pp. 821–830. [Google Scholar] [CrossRef]
  50. Yang, L.; Zheng, Y.; Cai, X.; Dai, H.; Mu, D.; Guo, L.; Dai, T. A LSTM Based Model for Personalized Context-Aware Citation Recommendation. IEEE Access 2018, 6, 59618–59627. [Google Scholar] [CrossRef]
  51. Mayr, P. How Do Practitioners, PhD Students and Postdocs in the Social Sciences Assess Topic-Specific Recommendations? CEUR-WS: Aachen, Germany, 2016; Volume 1610, pp. 84–92. [Google Scholar]
  52. He, Q.; Pei, J.; Kifer, D.; Mitra, P.; Giles, L. Context-aware Citation Recommendation. In Proceedings of the 19th International Conference on World Wide Web, WWW ’10, Raleigh, NC, USA, 26–30 April 2010; pp. 421–430. [Google Scholar] [CrossRef]
  53. Liu, X.Y.; Chien, B.C. Applying Citation Network Analysis on Recommendation of Research Paper Collection. In Proceedings of the 4th Multidisciplinary International Social Networks Conference, MISNC ’17, Bangkok, Thailand, 17–19 July 2017; pp. 1–6. [Google Scholar] [CrossRef]
  54. Semeraro, G.; Basile, P.; de Gemmis, M.; Lops, P. Discovering User Profiles from Semantically Indexed Scientific Papers. In From Web to Social Web: Discovering and Deploying User and Content Profiles; Berendt, B., Hotho, A., Mladenic, D., Eds.; Springer: Berlin/Heidelberg, Germany, 2007; pp. 61–81. [Google Scholar]
55. Jiang, Z.; Liu, X.; Gao, L. Chronological Citation Recommendation with Information-Need Shifting. In Proceedings of the 24th ACM International Conference on Information and Knowledge Management, CIKM ’15, Melbourne, Australia, 18–23 October 2015; pp. 1291–1300. [Google Scholar] [CrossRef]
  56. Bethard, S.; Jurafsky, D. Who Should I Cite: Learning Literature Search Models from Citation Behavior. In Proceedings of the 19th ACM International Conference on Information and Knowledge Management, Toronto, ON, Canada, 26–30 October 2010; pp. 609–618. [Google Scholar]
  57. Sugiyama, K.; Kan, M.Y. A comprehensive evaluation of scholarly paper recommendation using potential citation papers. Int. J. Digit. Libr. 2015, 16, 91–109. [Google Scholar] [CrossRef]
  58. Sugiyama, K.; Kan, M.Y. Exploiting Potential Citation Papers in Scholarly Paper Recommendation. In Proceedings of the 13th ACM/IEEE-CS Joint Conference on Digital Libraries, Indianapolis, IN, USA, 22–26 July 2013; pp. 153–162. [Google Scholar]
  59. Nascimento, C.; Laender, A.; Da Silva, A.; Gonçalves, M. A Source Independent Framework for Research Paper Recommendation; Association for Computing Machinery: Ottawa, ON, Canada, 2011; pp. 297–306. [Google Scholar] [CrossRef]
  60. Xiao, Z.; Che, F.; Miao, E.; Lu, M. Increasing serendipity of recommender system with ranking topic model. Appl. Math. Inf. Sci. 2014, 8, 2041–2053. [Google Scholar] [CrossRef]
  61. He, Q.; Kifer, D.; Pei, J.; Mitra, P.; Lee Giles, C. Citation Recommendation Without Author Supervision. In Proceedings of the WSDM ’11: Proceedings of the Fourth ACM International Conference on Web Search and Data Mining, Hong Kong, China, 9–12 February 2011; Association for Computing Machinery: New York, NY, USA, 2011; pp. 755–764. [Google Scholar] [CrossRef]
  62. Wang, C.; Blei, D.M. Collaborative topic modeling for recommending scientific articles. In Proceedings of the 17th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, San Diego, CA, USA, 21–24 August 2011; pp. 448–456. [Google Scholar]
  63. Tian, G.; Jing, L. Recommending scientific articles using bi-relational graph-based iterative RWR. In Proceedings of the RecSys ’13: Proceedings of the 7th ACM Conference on Recommender Systems, Hong Kong, 12–16 October 2013; pp. 399–402. [CrossRef]
  64. Yang, M.; Li, Y.M.; Zhang, Z. Scientific articles recommendation with topic regression and relational matrix factorization. J. Zhejiang Univ. Sci. C 2014, 15, 984–998. [Google Scholar] [CrossRef]
  65. Mohamed Hassan, H. Personalized Research Paper Recommendation Using Deep Learning; Association for Computing Machinery, Inc.: New York, NY, USA, 2017; pp. 327–330. [Google Scholar] [CrossRef]
  66. Hristakeva, M.; Kershaw, D.; Rossetti, M.; Knoth, P.; Pettit, B.; Vargas, S.; Jack, K. Building Recommender Systems for Scholarly Information. In Proceedings of the 1st Workshop on Scholarly Web Mining, Cambridge, UK, 10 February 2017; pp. 25–32. [Google Scholar]
  67. Bogers, T.; van den Bosch, A. Recommending Scientific Articles Using Citeulike. In Proceedings of the 2008 ACM Conference on Recommender Systems, RecSys ’08, Lausanne, Switzerland, 23–25 October 2008; pp. 287–290. [Google Scholar] [CrossRef]
  68. Zhang, Y.; Yang, L.; Cai, X.; Dai, H. A Novel Personalized Citation Recommendation Approach Based on GAN. In International Symposium on Methodologies for Intelligent Systems; Springer: Berlin/Heidelberg, Germany, 2018; pp. 268–278. [Google Scholar]
  69. Ahmad, S.; Afzal, M. Combining Co-Citation and Metadata for Recommending More Related Papers; Institute of Electrical and Electronics Engineers Inc.: Islamabad, Pakistan, 2018; pp. 218–222. [Google Scholar] [CrossRef]
  70. Sesagiri Raamkumar, A.; Foo, S.; Pang, N. Rec4LRW-scientific paper recommender system for literature review and writing. Front. Artif. Intell. Appl. 2015, 275, 106–119. [Google Scholar] [CrossRef]
  71. Sesagiri Raamkumar, A.; Foo, S.; Pang, N. User evaluation of a task for shortlisting papers from researcher’s reading list for citing in manuscripts. Aslib J. Inf. Manag. 2017, 69, 740–760. [Google Scholar] [CrossRef]
  72. Magara, M.; Ojo, S.; Zuva, T. Towards a Serendipitous Research Paper Recommender System Using Bisociative Information Networks (BisoNets); Institute of Electrical and Electronics Engineers Inc.: Durban, South Africa, 2018. [Google Scholar] [CrossRef]
  73. Hong, K.; Jeon, H.; Jeon, C. UserProfile-based personalized research paper recommendation system. In Proceedings of the 2012 8th International Conference on Computing and Networking Technology (INC, ICCIS and ICMIC), Gyeongju, Republic of Korea, 27–29 August 2012; pp. 134–138. [Google Scholar]
74. Ebesu, T.; Fang, Y. Neural Citation Network for Context-Aware Citation Recommendation. In Proceedings of the 40th International ACM SIGIR Conference on Research and Development in Information Retrieval, SIGIR ’17, Tokyo, Japan, 7–11 August 2017; pp. 1093–1096. [Google Scholar] [CrossRef]
  75. Nishioka, C.; Scherp, A. Profiling vs. Time vs. Content: What Does Matter for Top-K Publication Recommendation Based on Twitter Profiles? Institute of Electrical and Electronics Engineers Inc.: Newark, NJ, USA, 2016; pp. 171–180. [Google Scholar] [CrossRef]
  76. Neethukrishnan, K.; Swaraj, K. Ontology Based Research Paper Recommendation Using Personal Ontology Similarity Method; Institute of Electrical and Electronics Engineers Inc.: Coimbatore, India, 2017. [Google Scholar] [CrossRef]
  77. Sesagiri Raamkumar, A.; Foo, S.; Pang, N. Evaluating a threefold intervention framework for assisting researchers in literature review and manuscript preparatory tasks. J. Doc. 2017, 73, 555–580. [Google Scholar] [CrossRef]
  78. Sesagiri Raamkumar, A.; Foo, S.; Pang, N. Can I have more of these please?: Assisting researchers in finding similar research papers from a seed basket of papers. Electron. Libr. 2018, 36, 568–587. [Google Scholar] [CrossRef]
  79. Chakraborty, T.; Modani, N.; Narayanam, R.; Nagar, S. DiSCern: A Diversified Citation Recommendation System for Scientific Queries; IEEE Computer Society: Brighton, UK, 2015; pp. 555–566. [Google Scholar] [CrossRef]
  80. Hwang, S.Y.; Wei, C.P.; Huang, Y.c.; Tang, Y. Combining Coauthorship Network and Content for Literature Recommendation. In Proceedings of the PACIS, Taipei, Taiwan, 9–12 July 2010; p. 40. [Google Scholar]
  81. Ayala-Gomez, F.; Daroczy, B.; Benczur, A.; Mathioudakis, M.; Gionis, A. Global citation recommendation using knowledge graphs. J. Intell. Fuzzy Syst. 2018, 34, 3089–3100. [Google Scholar] [CrossRef]
  82. McNee, S.; Kapoor, N.; Konstan, J. Don’t Look Stupid: Avoiding Pitfalls When Recommending Research Papers. In Proceedings of the CSCW06: Computer Supported Cooperative Work, Banff, AB, Canada, 4–8 November 2006; pp. 171–180. [Google Scholar] [CrossRef]
83. Jiang, Y.; Jia, A.; Feng, Y.; Zhao, D. Recommending Academic Papers via Users’ Reading Purposes. In Proceedings of the RecSys ’12: Sixth ACM Conference on Recommender Systems, Dublin, Ireland, 9–13 September 2012; pp. 241–244. [Google Scholar] [CrossRef]
  84. Achakulvisut, T.; Acuna, D.; Ruangrong, T.; Kording, K. Science Concierge: A fast content-based recommendation system for scientific publications. PLoS ONE 2016, 11, e0158423. [Google Scholar] [CrossRef]
  85. Kazemi, B.; Abhari, A. A Comparative Study on Content-Based Paper-to-Paper Recommendation Approaches in Scientific Literature. Soc. Model. Simul. Int. 2017, 49, 47–56. [Google Scholar]
  86. Paraschiv, I.; Dascalu, M.; Dessus, P.; Trausan-Matu, S.; McNamara, D. A paper recommendation system with readerbench: The graphical visualization of semantically related papers and concepts. Lect. Notes Educ. Technol. 2016, 445–451. [Google Scholar] [CrossRef]
  87. Guan, Z.; Wang, C.; Bu, J.; Chen, C.; Yang, K.; Cai, D.; He, X. Document recommendation in social tagging services. In Proceedings of the WWW ’10: Proceedings of the 19th International Conference on World Wide Web, Raleigh, NC, USA, 26–30 April 2010. [Google Scholar] [CrossRef]
  88. Ferrara, F.; Pudota, N.; Tasso, C. A keyphrase-based paper recommender system. Commun. Comput. Inf. Sci. 2011, 249, 14–25. [Google Scholar] [CrossRef]
  89. Wang, Z.; Liu, Y.; Yang, J.; Zheng, Z.; Wu, K. A personalization-oriented academic literature recommendation method. Data Sci. J. 2015, 14, 17. [Google Scholar] [CrossRef]
  90. Jia, H.; Saule, E. An Analysis of Citation Recommender Systems: Beyond the Obvious; Association for Computing Machinery, Inc.: New York, NY, USA, 2017; pp. 216–223. [Google Scholar] [CrossRef]
  91. Cui, T.; Tang, X.; Zeng, Q. User Network Construction Within Online Paper Recommendation Systems. In Proceedings of the 2010 IEEE 2nd Symposium on Web Society, Beijing, China, 16–17 August 2010; pp. 361–366. [Google Scholar] [CrossRef]
92. Matsatsinis, N.F.; Lakiotaki, K.; Delias, P. A System based on Multiple Criteria Analysis for Scientific Paper Recommendation. In Proceedings of the PCI 2007, 11th Panhellenic Conference in Informatics, Patra, Greece, 18–20 May 2007. [Google Scholar]
  93. Anand, A.; Chakraborty, T.; Das, A. FairScholar: Balancing relevance and diversity for scientific paper recommendation. Lect. Notes Comput. Sci. 2017, 10193, 753–757. [Google Scholar] [CrossRef]
  94. Yin, P.; Zhang, M.; Li, X. Recommending scientific literatures in a collaborative tagging environment. Lect. Notes Comput. Sci. 2007, 4822, 478–481. [Google Scholar]
  95. Le Anh, V.; Hoang, H.; Tran, H.; Jung, J. SciRecSys: A recommendation system for scientific publication by discovering keyword relationships. Lect. Notes Comput. Sci. 2014, 8733, 72–82. [Google Scholar]
  96. Raamkumar, A.; Foo, S.; Pang, N. Comparison of Techniques for Measuring Research Coverage of Scientific Papers: A Case Study; Institute of Electrical and Electronics Engineers Inc.: New York, NY, USA, 2016; pp. 132–137. [Google Scholar] [CrossRef]
  97. Sesagiri Raamkumar, A.; Foo, S.; Pang, N. Using author-specified keywords in building an initial reading list of research papers in scientific paper retrieval and recommender systems. Inf. Process. Manag. 2017, 53, 577–594. [Google Scholar] [CrossRef]
  98. Bruns, S.; Valdez, A.C.; Greven, C.; Ziefle, M.; Schroeder, U. What Should I Read Next? A Personalized Visual Publication Recommender System. In Human Interface and the Management of Information. Information and Knowledge in Context; Yamamoto, S., Ed.; Springer: Cham, Switzerland, 2015; pp. 89–100. [Google Scholar]
  99. Vellino, A.; Zeber, D. A Hybrid, Multi-Dimensional Recommender for Journal Articles in a Scientific Digital Library. In Proceedings of the 2007 IEEE/WIC/ACM International Conferences on Web Intelligence and Intelligent Agent Technology—Workshops, Silicon Valley, CA, USA, 5–12 November 2007; pp. 111–114. [Google Scholar] [CrossRef]
  100. Kodakateri Pudhiyaveetil, A.; Gauch, S.; Luong, H.; Eno, J. Conceptual Recommender System for CiteSeerX. In Proceedings of the RecSys ’09: Third ACM Conference on Recommender Systems, New York, NY, USA, 23–25 October 2009; pp. 241–244. [Google Scholar] [CrossRef]
  101. De Nart, D.; Ferrara, F.; Tasso, C. Personalized access to scientific publications: From recommendation to explanation. Lect. Notes Comput. Sci. 2013, 7899, 296–301. [Google Scholar] [CrossRef]
102. De Nart, D.; Ferrara, F.; Tasso, C. Personalized Recommendation and Explanation by Using Keyphrases Automatically Extracted from Scientific Literature; SciTePress: Vilamoura, Portugal, 2013; pp. 96–103. [Google Scholar]
  103. De Nart, D.; Ferrara, F.; Tasso, C. RES: A personalized filtering tool for CiteSeerX queries based on keyphrase extraction. Lect. Notes Comput. Sci. 2013, 7899, 341–343. [Google Scholar] [CrossRef]
  104. Asabere, N.; Xia, F.; Meng, Q.; Li, F.; Liu, H. Scholarly paper recommendation based on social awareness and folksonomy. Int. J. Parallel Emergent Distrib. Syst. 2015, 30, 211–232. [Google Scholar] [CrossRef]
  105. Guan, P.; Wang, Y. Personalized scientific literature recommendation based on user’s research interest. In Proceedings of the 2016 12th International Conference on Natural Computation, Fuzzy Systems and Knowledge Discovery (ICNC-FSKD), Changsha, China, 13–15 August 2016; pp. 1273–1277. [Google Scholar] [CrossRef]
  106. Tejeda-Lorente, A.; Bernabe-Moreno, J.; Porcel, C.; Herrera-Viedma, E. Using bibliometrics and fuzzy linguistic modeling to deal with cold start in recommender systems for digital libraries. Adv. Intell. Syst. Comput. 2018, 643, 393–404. [Google Scholar] [CrossRef]
  107. Alotaibi, S.; Vassileva, J. Personalized Recommendation of Research Papers by Fusing Recommendations from Explicit and Implicit Social Networks; CEUR-WS: Aachen, Germany, 2016; Volume 1618. [Google Scholar]
  108. Siebert, S.; Dinesh, S.; Feyer, S. Extending a Research-Paper Recommendation System with Scientometric Measures; CEUR-WS: Aachen, Germany, 2017; Volume 1823, pp. 112–121. [Google Scholar]
109. Zhou, D.; Zhu, S.; Yu, K.; Song, X.; Tseng, B.; Zha, H.; Giles, C. Learning multiple graphs for document recommendations. In Proceedings of the WWW ’08: Proceedings of the 17th International Conference on World Wide Web, Beijing, China, 21–25 April 2008. [CrossRef]
  110. Liu, X.; Yu, Y.; Guo, C.; Sun, Y.; Gao, L. Full-Text Based Context-Rich Heterogeneous Network Mining Approach for Citation Recommendation; Institute of Electrical and Electronics Engineers Inc.: New York, NY, USA, 2014; pp. 361–370. [Google Scholar] [CrossRef]
  111. Wang, Q.; Li, W.; Zhang, X.; Lu, S. Academic Paper Recommendation Based on Community Detection in Citation-Collaboration Networks. In Web Technologies and Applications; Li, F., Shim, K., Zheng, K., Liu, G., Eds.; Springer: Cham, Switzerland, 2016; pp. 124–136. [Google Scholar]
  112. Meng, F.; Gao, D.; Li, W.; Sun, X.; Hou, Y. A Unified Graph Model for Personalized Query-Oriented Reference Paper Recommendation. In Proceedings of the CIKM ’13: Proceedings of the 22nd ACM international conference on Information & Knowledge Management, San Francisco, CA, USA, 27 October–1 November 2013; pp. 1509–1512. [Google Scholar] [CrossRef]
  113. Liu, H.; Yang, Z.; Lee, I.; Xu, Z.; Yu, S.; Xia, F. CAR: Incorporating Filtered Citation Relations for Scientific Article Recommendation. In Proceedings of the 2015 IEEE International Conference on Smart City/SocialCom/SustainCom (SmartCity), Chengdu, China, 19–21 December 2015; pp. 513–518. [Google Scholar]
  114. Färber, M.; Thiemann, A.; Jatowt, A. CITEWERTs: A system combining cite-worthiness with citation recommendation. Lect. Notes Comput. Sci. 2018, 10772, 815–819. [Google Scholar] [CrossRef]
  115. Xia, F.; Liu, H.; Lee, I.; Cao, L. Scientific Article Recommendation: Exploiting Common Author Relations and Historical Preferences. IEEE Trans. Big Data 2016, 2, 101–112. [Google Scholar] [CrossRef]
  116. Dhanda, M.; Verma, V. Recommender System for Academic Literature with Incremental Dataset. Procedia Comput. Sci. 2016, 89, 483–491. [Google Scholar] [CrossRef]
  117. Küçüktunç, O.; Saule, E.; Kaya, K.; Çatalyürek, U.V. Diversifying Citation Recommendations. ACM Trans. Intell. Syst. Technol. 2014, 5. [Google Scholar] [CrossRef]
  118. Watanabe, S.; Ito, T.; Ozono, T.; Shintani, T. A Paper Recommendation Mechanism for the Research Support System Papits. In Proceedings of the International Workshop on Data Engineering Issues in E-Commerce, Tokyo, Japan, 9 April 2005; Volume 2005, pp. 71–80. [Google Scholar] [CrossRef]
  119. Dhanda, M.; Verma, V. Personalized recommendation approach for academic literature using high-utility itemset mining technique. Adv. Intell. Syst. Comput. 2018, 519, 247–254. [Google Scholar] [CrossRef]
  120. Chandrasekaran, K.; Gauch, S.; Lakkaraju, P.; Luong, H.P. Concept-Based Document Recommendations for CiteSeer Authors. In Adaptive Hypermedia and Adaptive Web-Based Systems; Nejdl, W., Kay, J., Pu, P., Herder, E., Eds.; Springer: Berlin/Heidelberg, Germany, 2008; pp. 83–92. [Google Scholar]
  121. Chandrasekaran, K.; Gauch, S.; Lakkaraju, P.; Luong, H. Concept-based document recommendations for CiteSeer authors. Lect. Notes Comput. Sci. 2008, 5149, 83–92. [Google Scholar] [CrossRef]
  122. Weng, S.S.; Chang, H.L. Using ontology network analysis for research document recommendation. Expert Syst. Appl. 2008, 34, 1857–1869. [Google Scholar] [CrossRef]
  123. Pan, L.; Dai, X.; Huang, S.; Chen, J. Academic paper recommendation based on heterogeneous graph. Lect. Notes Comput. Sci. 2015, 9427, 381–392. [Google Scholar] [CrossRef]
  124. Gao, Z. Examining Influences of Publication Dates on Citation Recommendation Systems; Institute of Electrical and Electronics Engineers Inc.: New York, NY, USA, 2016; pp. 1400–1405. [Google Scholar] [CrossRef]
125. Torres, R.; McNee, S.M.; Abel, M.; Konstan, J.A.; Riedl, J. Enhancing Digital Libraries with TechLens+. In Proceedings of the 4th ACM/IEEE-CS Joint Conference on Digital Libraries, Tucson, AZ, USA, 7–11 June 2004; pp. 228–236. [Google Scholar]
  126. Sugiyama, K.; Kan, M.Y. Scholarly Paper Recommendation via User’s Recent Research Interests. In Proceedings of the 10th Joint Conference on Digital Libraries, Gold Coast, Australia, 21–25 June 2010; pp. 29–38. [Google Scholar]
  127. Beel, J.; Langer, S.; Kapitsaki, G.; Breitinger, C.; Gipp, B. Exploring the potential of user modeling based on mind maps. Lect. Notes Comput. Sci. 2015, 9146, 3–17. [Google Scholar] [CrossRef]
128. McNee, S.; Albert, I.; Cosley, D.; Gopalkrishnan, P.; Lam, S.; Rashid, A.; Konstan, J.; Riedl, J. On the Recommending of Citations for Research Papers. In Proceedings of the CSCW02: Computer Supported Cooperative Work, New Orleans, LA, USA, 16–20 November 2002; pp. 116–125. [Google Scholar]
  129. Huang, S.; Xue, G.R.; Zhang, B.Y.; Zheng, C.; Yu, Y.; Ma, W.Y. TSSP: A Reinforcement Algorithm to Find Related Papers. In Proceedings of the IEEE/WIC/ACM International Conference on Web Intelligence (WI’04), Beijing, China, 20–24 September 2004; pp. 117–123. [Google Scholar] [CrossRef]
130. Gipp, B.; Beel, J. Citation Proximity Analysis (CPA): A New Approach for Identifying Related Work Based on Co-Citation Analysis. In Proceedings of the 12th International Conference on Scientometrics and Informetrics, Rio de Janeiro, Brazil, 14–17 July 2009; Larsen, B., Ed.; BIREME/PAHO/WHO: São Paulo, Brazil, 2009; Volume 1, pp. 571–575. [Google Scholar]
  131. Gipp, B.; Beel, J. Identifying related documents for research paper recommender by CPA and COA. In Proceedings of the World Congress on Engineering and Computer Science, San Francisco, CA, USA, 20–22 October 2009; Volume 1, pp. 20–22. [Google Scholar]
  132. Gipp, B.; Beel, J.; Hentschel, C. Scienstein: A research paper recommender system. In Proceedings of the International Conference on Emerging Trends in Computing (ICETIC’09), Virudhunagar, India, 8–10 January 2009; pp. 309–315. [Google Scholar]
133. Liang, S.; Liu, Y.; Jian, L.; Gao, Y.; Lin, Z. A Utility-Based Recommendation Approach for Academic Literatures. In Proceedings of the 2011 IEEE/WIC/ACM International Conferences on Web Intelligence and Intelligent Agent Technology, Lyon, France, 22–27 August 2011; Volume 3, pp. 229–232. [Google Scholar] [CrossRef]
  134. Liang, Y.; Li, Q.; Qian, T. Finding Relevant Papers Based on Citation Relations. In Proceedings of the 12th International Conference on Web-Age Information Management, WAIM’11, Wuhan, China, 14–16 September 2011; Springer: Berlin/Heidelberg, Germany, 2011; pp. 403–414. [Google Scholar]
  135. Zarrinkalam, F.; Kahani, M. A multi-criteria hybrid citation recommendation system based on linked data. In Proceedings of the 2012 2nd International eConference on Computer and Knowledge Engineering (ICCKE), Mashhad, Iran, 18–19 October 2012; pp. 283–288. [Google Scholar] [CrossRef]
  136. Huynh, T.; Hoang, K.; Do, L.; Tran, H.; Luong, H.; Gauch, S. Scientific Publication Recommendations Based on Collaborative Citation Networks. In Proceedings of the 2012 International Conference on Collaboration Technologies and Systems (CTS), Denver, CO, USA, 21–25 May 2012; pp. 316–321. [Google Scholar] [CrossRef]
  137. Caragea, C.; Silvescu, A.; Mitra, P.; Lee Giles, C. Can’t See the Forest for the Trees? A Citation Recommendation System. In Proceedings of the JCDL ’13: 13th ACM/IEEE-CS Joint Conference on Digital Libraries, Indianapolis, IN, USA, 22–26 July 2013; pp. 111–114. [Google Scholar] [CrossRef]
  138. Liu, H.; Kong, X.; Bai, X.; Wang, W.; Bekele, T.M.; Xia, F. Context-Based Collaborative Filtering for Citation Recommendation. IEEE Access 2015, 3, 1695–1703. [Google Scholar] [CrossRef]
  139. Chakraborty, T.; Krishna, A.; Singh, M.; Ganguly, N.; Goyal, P.; Mukherjee, A. FeRoSA: A Faceted Recommendation System for Scientific Articles. In Proceedings of the 20th Pacific-Asia Conference on Advances in Knowledge Discovery and Data Mining, PAKDD, Auckland, New Zealand, 19–22 April 2016; Springer: Berlin/Heidelberg, Germany, 2016; Volume 9652, pp. 528–541. [Google Scholar] [CrossRef]
  140. Haruna, K.; Ismail, M.; Damiasih, D.; Sutopo, J.; Herawan, T. A collaborative approach for research paper recommender system. PLoS ONE 2017, 12, e0184516. [Google Scholar] [CrossRef]
  141. Knoth, P.; Khadka, A. Can we do better than co-citations? Bringing Citation Proximity Analysis from idea to practice in research articles recommendation. In Proceedings of the CEUR Workshop Proceedings, Poznań, Poland, 19–21 June 2017; Volume 1888, pp. 14–25. [Google Scholar]
  142. Arif, M. Content Aware Citation Recommendation System; Institute of Electrical and Electronics Engineers Inc.: New York, NY, USA, 2017. [Google Scholar] [CrossRef]
  143. Haruna, K.; Ismail, M.; Bichi, A.; Chang, V.; Wibawa, S.; Herawan, T. A citation-based recommender system for scholarly paper recommendation. Lect. Notes Comput. Sci. 2018, 10960, 514–525. [Google Scholar] [CrossRef]
  144. Son, J.; Kim, S. Academic paper recommender system using multilevel simultaneous citation networks. Decis. Support Syst. 2018, 105, 24–33. [Google Scholar] [CrossRef]
  145. Ollagnier, A.; Fournier, S.; Bellot, P. BIBLME RecSys: Harnessing Bibliometric Measures for a Scholarly Paper Recommender System; CEUR-WS: Aachen, Germany, 2018; Volume 2080, pp. 34–45. [Google Scholar]
  146. Kobayashi, Y.; Shimbo, M.; Matsumoto, Y. Citation Recommendation Using Distributed Representation of Discourse Facets in Scientific Articles. In Proceedings of the 18th ACM/IEEE on Joint Conference on Digital Libraries, JCDL ’18, Fort Worth, TX, USA, 3–6 June 2018; pp. 243–251. [Google Scholar] [CrossRef]
  147. Khadka, A.; Knoth, P. Using Citation-context to Reduce Topic Drifting on Pure Citation-based Recommendation. In Proceedings of the 12th ACM Conference on Recommender Systems, Vancouver, Canada, 2 October 2018; pp. 362–366. [Google Scholar]
  148. Tanner, W.; Akbas, E.; Hasan, M. Paper Recommendation Based on Citation Relation. In Proceedings of the 2019 IEEE International Conference on Big Data (Big Data), Los Angeles, CA, USA, 9–12 December 2019; pp. 3053–3059. [Google Scholar] [CrossRef]
  149. Ekstrand, M.D.; Kannan, P.; Stemper, J.A.; Butler, J.T.; Konstan, J.A.; Riedl, J.T. Automatically Building Research Reading Lists. In Proceedings of the 4th ACM Conference on Recommender Systems, Barcelona, Spain, 26–30 September 2010; pp. 159–166. [Google Scholar]
150. Küçüktunç, O.; Saule, E.; Kaya, K.; Çatalyürek, U.V. Recommendation on Academic Networks using Direction Aware Citation Analysis. arXiv 2012, arXiv:1205.1143. [Google Scholar] [CrossRef]
  151. Jia, H.; Saule, E. Local Is Good: A Fast Citation Recommendation Approach. In Advances in Information Retrieval; Pasi, G., Piwowarski, B., Azzopardi, L., Hanbury, A., Eds.; Springer: Cham, Switzerland, 2018; pp. 758–764. [Google Scholar]
  152. Jeong, C.; Jang, S.; Shin, H.; Park, E.; Choi, S. A Context-Aware Citation Recommendation Model with BERT and Graph Convolutional Networks. arXiv 2019, arXiv:1903.06464. [Google Scholar] [CrossRef]
  153. Zhou, Q.; Chen, X.; Chen, C. Authoritative scholarly paper recommendation based on paper communities. In Proceedings of the 2014 IEEE 17th International Conference on Computational Science and Engineering (CSE), Chengdu, China, 19–21 December 2014; pp. 1536–1540. [Google Scholar]
  154. Huang, W.; Wu, Z.; Liang, C.; Mitra, P.; Giles, C.L. A Neural Probabilistic Model for Context Based Citation Recommendation. In Proceedings of the 29th AAAI Conference on Artificial Intelligence, Austin, TX, USA, 25–30 January 2015; pp. 2404–2410. [Google Scholar]
155. Sugiyama, K.; Kan, M.Y. Serendipitous Recommendation for Scholarly Papers Considering Relations Among Researchers. In Proceedings of the 11th Annual International ACM/IEEE Joint Conference on Digital Libraries, JCDL ’11, Ottawa, ON, Canada, 13–17 June 2011; pp. 307–310. [Google Scholar] [CrossRef]
  156. Alotaibi, S.; Vassileva, J. Trust-based recommendations for scientific papers based on the researcher’s current interest. Lect. Notes Comput. Sci. 2013, 7926, 717–720. [Google Scholar] [CrossRef]
  157. Sugiyama, K.; Kan, M.Y. “Towards Higher Relevance and Serendipity in Scholarly Paper Recommendation” by Kazunari Sugiyama and Min-Yen Kan with Martin Vesely as Coordinator. SIGWEB Newsl. 2015, 2015, 1–16. [Google Scholar] [CrossRef]
  158. Guo, S.; Zhang, W.; Zhang, S. A Pagerank-based Collaborative Filtering Recommendation Approach in Digital Libraries. 2017. Available online: https://www.semanticscholar.org/paper/A-PAGERANK-BASED-COLLABORATIVE-FILTERING-APPROACH-Guo-Zhang/7338927eb70e467efedd3b2ef996a0629504c29e (accessed on 25 August 2025).
  159. Ha, J.; Kwon, S.H.; Kim, S.W.; Lee, D. Recommendation of Newly Published Research Papers Using Belief Propagation. In Proceedings of the 2014 Conference on Research in Adaptive and Convergent Systems, Towson, MD, USA, 5–8 October 2014; pp. 77–81. [Google Scholar]
  160. Chen, T.; Lee, M. Research paper recommender systems on big scholarly data. Lect. Notes Comput. Sci. 2018, 11016, 251–260. [Google Scholar] [CrossRef]
  161. Haruna, K.; Ismail, M. Research Paper Recommender System Evaluation Using Collaborative Filtering; American Institute of Physics Inc.: New York, NY, USA, 2018; Volume 1974. [Google Scholar] [CrossRef]
  162. Steinert, L.; Chounta, I.A.; Hoppe, H.U. Where to Begin? Using Network Analytics for the Recommendation of Scientific Papers. In Collaboration and Technology; Baloian, N., Zorian, Y., Taslakian, P., Shoukouryan, S., Eds.; Springer: Cham, Switzerland, 2015; pp. 124–139. [Google Scholar]
  163. Habib, R.; Afzal, M.T. Paper recommendation using citation proximity in bibliographic coupling. Turk. J. Electr. Eng. Comput. Sci. 2017, 33, 13. [Google Scholar] [CrossRef]
  164. Chen, X.; Zhao, H.; Zhao, S.Z.; Chen, J.; Zhang, Y. Citation recommendation based on citation tendency. Scientometrics 2019, 121, 937–956. [Google Scholar] [CrossRef]
  165. Jia, H.; Saule, E. Graph Embedding for Citation Recommendation. arXiv 2018, arXiv:1812.03835. [Google Scholar] [CrossRef]
  166. Duma, D.; Liakata, M.; Clare, A.; Ravenscroft, J.; Klein, E. Applying Core Scientific Concepts to Context-Based Citation Recommendation; European Language Resources Association (ELRA): Reykjavik, Iceland, 2016; pp. 1737–1742. [Google Scholar]
  167. Färber, M.; Saier, T. Semantic Modelling of Citation Contexts for Context-aware Citation Recommendation. In Proceedings of the European Conference on Information Retrieval (ECIR’20), Lisbon, Portugal, 14–17 April 2020; Springer: Berlin/Heidelberg, Germany, 2020. [Google Scholar]
  168. Khadka, A. Capturing and Exploiting Citation Knowledge for the Recommendation of Scientific Publications; Open University: Milton Keynes, UK, 2020. [Google Scholar]
  169. Benard Magara, M.; Ojo, S.; Zuva, T. A Comparative Analysis of Text Similarity Measures and Algorithms in Research Paper Recommender Systems; Institute of Electrical and Electronics Engineers Inc.: New York, NY, USA, 2018; pp. 1–5. [Google Scholar] [CrossRef]
  170. Sharma, R.; Gopalani, D.; Meena, Y. Concept-Based Approach for Research Paper Recommendation. Lect. Notes Comput. Sci. 2017, 10597, 687–692. [Google Scholar] [CrossRef]
  171. Huang, W.; Wu, Z.; Mitra, P.; Giles, C. RefSeer: A Citation Recommendation System; Institute of Electrical and Electronics Engineers Inc.: New York, NY, USA, 2014; pp. 371–374. [Google Scholar] [CrossRef]
  172. Middleton, S.E.; De Roure, D.C.; Shadbolt, N.R. Capturing Knowledge of User Preferences: Ontologies in Recommender Systems. In Proceedings of the 1st International Conference on Knowledge Capture, Victoria, BC, Canada, 21–23 October 2001; pp. 100–107. [Google Scholar]
  173. Middleton, S.; Shadbolt, N.; De Roure, D. Capturing Interest Through Inference and Visualization: Ontological User Profiling in Recommender Systems; Association for Computing Machinery, Inc.: Las Vegas, NV, USA, 2003; pp. 62–69. [Google Scholar] [CrossRef]
  174. Zhang, M.; Wang, W.; Li, X. A paper recommender for scientific literatures based on semantic concept similarity. Lect. Notes Comput. Sci. 2008, 5362, 359–362. [Google Scholar] [CrossRef]
  175. Popa, H.E.; Negru, V.; Pop, D.; Muscalagiu, I. DL-Agentrecom—A Multi-Agent Based Recommendation System for Scientific Documents. In Proceedings of the 2008 10th International Symposium on Symbolic and Numeric Algorithms for Scientific Computing, Timisoara, Romania, 26–29 September 2008; pp. 320–324. [Google Scholar] [CrossRef]
  176. Dong, R.; Tokarchuk, L.; Ma, A. Digging friendship: Paper recommendation in social network. In Proceedings of the Networking & Electronic Commerce Research Conference (NAEC 2009), Riva Del Garda, Italy, 8–11 October 2009; pp. 21–28. [Google Scholar]
177. Jomsri, P.; Sanguansintukul, S.; Choochaiwattana, W. A Framework for Tag-Based Research Paper Recommender System: An IR Approach. In Proceedings of the 2010 IEEE 24th International Conference on Advanced Information Networking and Applications Workshops, Perth, Australia, 20–23 April 2010; pp. 103–108. [Google Scholar] [CrossRef]
178. Zhang, Z.; Li, L. A Research Paper Recommender System Based on Spreading Activation Model. In Proceedings of the 2nd International Conference on Information Science and Engineering, Hangzhou, China, 4–6 December 2010; pp. 928–931. [Google Scholar] [CrossRef]
179. Parra-Santander, D.; Brusilovsky, P. Improving Collaborative Filtering in Social Tagging Systems for the Recommendation of Scientific Articles. In Proceedings of the 2010 IEEE/WIC/ACM International Conference on Web Intelligence and Intelligent Agent Technology, Toronto, ON, Canada, 31 August–3 September 2010; Volume 1, pp. 136–142. [Google Scholar] [CrossRef]
180. Pan, C.; Li, W. Research Paper Recommendation with Topic Analysis. In Proceedings of the 2010 International Conference on Computer Design and Applications, Qinhuangdao, China, 25–27 June 2010; Volume 4, pp. V4264–V4268. [Google Scholar] [CrossRef]
  181. Choochaiwattana, W. Usage of Tagging for Research Paper Recommendation. In Proceedings of the 2010 3rd International Conference on Advanced Computer Theory and Engineering(ICACTE), Chengdu, China, 20–22 August 2010; Volume 2, pp. V2439–V2442. [Google Scholar] [CrossRef]
  182. Pera, M.; Ng, Y.K. A Personalized Recommendation System on Scholarly Publications. In Proceedings of the CIKM ’11: Proceedings of the 20th ACM International Conference on Information and Knowledge Management, Glasgow, UK, 24–28 October 2011. [CrossRef]
  183. Amini, B.; Ibrahim, R.; Othman, M.; Rastegari, H. Incorporating scholar’s background knowledge into recommender system for digital libraries. In Proceedings of the 2011 Malaysian Conference in Software Engineering, Johor Bahru, Malaysia, 13–14 December 2011; pp. 516–523. [Google Scholar] [CrossRef]
  184. Amini, B.; Ibrahim, R.; Othman, M. Exploiting scholar’s background knowledge to improve recommender system for digital libraries. Int. J. Digit. Content Technol. Its Appl. 2012, 6, 119–128. [Google Scholar] [CrossRef]
  185. Tang, X.; Zeng, Q. Keyword clustering for user interest profiling refinement within paper recommender systems. J. Syst. Softw. 2012, 85, 87–101. [Google Scholar] [CrossRef]
186. Doerfel, S.; Jäschke, R.; Hotho, A.; Stumme, G. Leveraging Publication Metadata and Social Data into Folkrank for Scientific Publication Recommendation. In Proceedings of the RSWeb ’12: Proceedings of the 4th ACM RecSys Workshop on Recommender Systems and the Social Web, Dublin, Ireland, 9 September 2012; pp. 9–16. [Google Scholar] [CrossRef]
  187. Beel, J.; Langer, S.; Genzmehr, M.; Nürnberger, A. Introducing Docear’S Research Paper Recommender System. In Proceedings of the JCDL ’13: Proceedings of the 13th ACM/IEEE-CS Joint Conference on Digital Libraries, Indianapolis, IN, USA, 22–26 July 2013; pp. 459–460. [Google Scholar] [CrossRef]
  188. Alotaibi, S.; Vassileva, J. Effect of Different Implicit Social Networks on Recommending Research Papers; Association for Computing Machinery, Inc.: New York, NY, USA, 2016; pp. 217–221. [Google Scholar] [CrossRef]
  189. Tsolakidis, A.; Triperina, E.; Christidis, N.; Sgouropoulou, C. Research Publication Recommendation System Based on a Hybrid Approach; Association for Computing Machinery: New York, NY, USA, 2016. [Google Scholar] [CrossRef]
  190. Amami, M.; Faiz, R.; Stella, F.; Pasi, G. A Graph Based Approach to Scientific Paper Recommendation; Association for Computing Machinery, Inc.: New York, NY, USA, 2017; pp. 777–782. [Google Scholar] [CrossRef]
  191. Sripadh, T.; Ramesh, G. Personalized research paper recommender system. Lect. Notes Comput. Vis. Biomech. 2018, 28, 437–446. [Google Scholar] [CrossRef]
  192. Magara, M.; Ojo, S.; Zuva, T. Toward Altmetric-Driven Research-Paper Recommender System Framework; Institute of Electrical and Electronics Engineers Inc.: New York, NY, USA, 2018; pp. 63–68. [Google Scholar] [CrossRef]
  193. Khadka, A.; Cantador, I.; Fernandez, M. Capturing and Exploiting Citation Knowledge for Recommending Recently Published Papers. In Proceedings of the 2020 IEEE 29th International Conference on Enabling Technologies: Infrastructure for Collaborative Enterprises (WETICE), Virtual, 4–6 November 2020; pp. 239–244. [Google Scholar] [CrossRef]
  194. Khadka, A.; Cantador, I.; Fernandez, M. Exploiting Citation Knowledge in Personalised Recommendation of Recent Scientific Publications. In Proceedings of the Twelfth Language Resources and Evaluation Conference, Marseille, France, 11–16 May 2020; pp. 2231–2240. [Google Scholar]
  195. Schwarzer, M.; Schubotz, M.; Meuschke, N.; Breitinger, C.; Markl, V.; Gipp, B. Evaluating Link-Based Recommendations for Wikipedia. In Proceedings of the 16th ACM/IEEE-CS on Joint Conference on Digital Libraries, JCDL ’16, New York, NY, USA, 19–23 June 2016; pp. 191–200. [Google Scholar] [CrossRef]
  196. Lops, P.; de Gemmis, M.; Semeraro, G. Content-based Recommender Systems: State of the Art and Trends. In Recommender Systems Handbook; Ricci, F., Rokach, L., Shapira, B., Kantor, P.B., Eds.; Springer: Boston, MA, USA, 2011; pp. 73–105. [Google Scholar] [CrossRef]
  197. Basu, C.; Hirsh, H.; Cohen, W. Recommendation As Classification: Using Social and Content-based Information in Recommendation. In Proceedings of the Fifteenth National/Tenth Conference on Artificial Intelligence/Innovative Applications of Artificial Intelligence, AAAI ’98/IAAI ’98, Menlo Park, CA, USA, 26–30 July 1998; pp. 714–720. [Google Scholar]
  198. Bollacker, K.D.; Lawrence, S.; Giles, C.L. A System for Automatic Personalized Tracking of Scientific Literature on the Web. In Proceedings of the Fourth ACM Conference on Digital Libraries, DL ’99, New York, NY, USA, 11–14 August 1999; pp. 105–113. [Google Scholar] [CrossRef]
  199. Zhu, J.; Patra, B.G.; Yaseen, A. Recommender system of scholarly papers using public datasets. AMIA Summits Transl. Sci. Proc. 2021, 2021, 672–679. [Google Scholar]
200. Cohan, A.; Feldman, S.; Beltagy, I.; Downey, D.; Weld, D.S. SPECTER: Document-level representation learning using citation-informed transformers. arXiv 2020, arXiv:2004.07180. [Google Scholar]
  201. Haruna, K.; Ismail, M.A.; Qazi, A.; Kakudi, H.A.; Hassan, M.; Muaz, S.A.; Chiroma, H. Research paper recommender system based on public contextual metadata. Scientometrics 2020, 125, 101–114. [Google Scholar] [CrossRef]
202. Yadav, P.; Pervin, N. Towards efficient navigation in digital libraries: Leveraging popularity, semantics and communities to recommend scholarly articles. J. Informetr. 2022, 16, 101336. [Google Scholar] [CrossRef]
  203. Hadhiatma, A.; Azhari, A.; Suyanto, Y. A Scientific Paper Recommendation Framework Based on Multi-Topic Communities and Modified PageRank. IEEE Access 2023, 11, 25303–25317. [Google Scholar] [CrossRef]
  204. Sarwar, B.; Karypis, G.; Konstan, J.; Riedl, J. Item-based Collaborative Filtering Recommendation Algorithms. In Proceedings of the 10th International Conference on World Wide Web, Hong Kong, 1–5 May 2001; pp. 285–295. [Google Scholar]
  205. Agarwal, N.; Haque, E.; Liu, H.; Parsons, L. Research Paper Recommender Systems: A Subspace Clustering Approach. In Advances in Web-Age Information Management; Fan, W., Wu, Z., Yang, J., Eds.; Springer: Berlin/Heidelberg, Germany, 2005; pp. 475–491. [Google Scholar]
  206. Tang, T.Y.; McCalla, G.I. The Pedagogical Value of Papers: A Collaborative-Filtering based Paper Recommender. J. Digit. Inf. 2009, 10. [Google Scholar]
  207. Besimi, N.; Çiço, B.; Besimi, A. Hybrid solution for scalable research articles recommendation. In Proceedings of the 2018 7th Mediterranean Conference on Embedded Computing (MECO), Budva, Montenegro, 10–14 June 2018; pp. 1–4. [Google Scholar] [CrossRef]
  208. Vellino, A. A comparison between usage-based and citation-based methods for recommending scholarly research articles. In Proceedings of the ASIST Annual Meeting, Pittsburgh, PA, USA, 22–27 October 2010; Volume 47. [Google Scholar] [CrossRef]
209. Yang, C.; Wei, B.; Wu, J.; Zhang, Y.; Zhang, L. CARES: A Ranking-Oriented CADAL Recommender System. In Proceedings of the 9th ACM/IEEE-CS Joint Conference on Digital Libraries, JCDL ’09, Austin, TX, USA, 15–19 June 2009; pp. 203–212. [Google Scholar] [CrossRef]
  210. Mei, Q.; Guo, J.; Radev, D. DivRank: The Interplay of Prestige and Diversity in Information Networks. In Proceedings of the 16th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, KDD ’10, New York, NY, USA, 24–28 July 2010; pp. 1009–1018. [Google Scholar] [CrossRef]
  211. Cano, V. Citation behavior: Classification, utility, and location. J. Am. Soc. Inf. Sci. 1989, 40, 284–290. [Google Scholar] [CrossRef]
  212. Burke, R. Hybrid Recommender Systems: Survey and Experiments. User Model.-User-Adapt. Interact. 2002, 12, 331–370. [Google Scholar] [CrossRef]
213. Burke, R. Knowledge-Based Recommender Systems. In Encyclopedia of Library and Information Systems; Marcel Dekker: New York, NY, USA, 2000. [Google Scholar]
  214. Mobasher, B.; Cooley, R.; Srivastava, J. Automatic Personalization Based on Web Usage Mining. Commun. ACM 2000, 43, 142–151. [Google Scholar] [CrossRef]
  215. Burke, R. Hybrid Web Recommender Systems. In The Adaptive Web: Methods and Strategies of Web Personalization; Springer: Berlin/Heidelberg, Germany, 2007; pp. 377–408. [Google Scholar]
  216. Beltagy, I.; Lo, K.; Cohan, A. SciBERT: A pretrained language model for scientific text. arXiv 2019, arXiv:1903.10676. [Google Scholar] [CrossRef]
  217. Lin, Y.; Meng, Y.; Sun, X.; Han, Q.; Kuang, K.; Li, J.; Wu, F. BertGCN: Transductive Text Classification by Combining GNN and BERT. In Proceedings of the Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021, Online, 1–6 August 2021; Zong, C., Xia, F., Li, W., Navigli, R., Eds.; Association for Computational Linguistics: Stroudsburg, PA, USA, 2021; pp. 1456–1462. [Google Scholar] [CrossRef]
218. Jack, K.; Hammerton, J.; Harvey, D.; Hoyt, J.J.; Reichelt, J.; Henning, V. Mendeley’s reply to the datatel challenge. Procedia Comput. Sci. 2010, 1, 1–3. [Google Scholar]
  219. Shani, G.; Gunawardana, A. Evaluating Recommendation Systems. In Recommender Systems Handbook; Ricci, F., Rokach, L., Shapira, B., Kantor, P.B., Eds.; Springer: Boston, MA, USA, 2011; pp. 257–297. [Google Scholar] [CrossRef]
  220. Beel, J.; Genzmehr, M.; Langer, S.; Nürnberger, A.; Gipp, B. A Comparative Analysis of Offline and Online Evaluations and Discussion of Research Paper Recommender System Evaluation. In Proceedings of the International Workshop on Reproducibility and Replication in Recommender Systems Evaluation, RepSys ’13, New York, NY, USA, 12 October 2013; pp. 7–14. [Google Scholar] [CrossRef]
  221. Yang, S.; Hsu, C. A New Ontology-Supported and Hybrid Recommending Information System for Scholars. In Proceedings of the 2010 13th International Conference on Network-Based Information Systems, Gifu, Japan, 14–16 September 2010; pp. 379–384. [Google Scholar] [CrossRef]
  222. Philip, S.; Musa, E. A Paper Recommender System Based on the Past Ratings of a User. Int. J. Adv. Comput. Technol. 2014, 3, 41–46. [Google Scholar]
  223. Yu, L.; Yang, J.; Yang, D.; Yang, X. A Decision Support System for Finding Research Topic based on Paper Recommendation. In Proceedings of the PACIS, Jeju Island, Republic of Korea, 18–22 June 2013. [Google Scholar]
  224. Patil, S.; Ansari, P.M.B. User Profile Based Personalized Research Paper Recommendation System Using Top-K Query. 2015. Available online: https://www.semanticscholar.org/paper/UserProfile-based-personalized-research-paper-Hong-Jeon/9814ffbfb182ea6e9f30efbe20789d2f84c968e0 (accessed on 11 November 2024).
  225. Jain, M. Algorithm for Research Paper Recommendation System. Int. J. Inf. Technol. Knowl. Manag. 2012, 443–445. [Google Scholar]
  226. Pruitikanee, S.; Jorio, L.D.; Laurent, A.; Sala, M. Paper Recommendation System: A Global and Soft Approach. 2012. Available online: https://hal-lirmm.ccsd.cnrs.fr/lirmm-00803915v1/document (accessed on 28 January 2019).
  227. Singh, S.; Ahuja, N. Article recommendation system based on keyword using map-reduce. In Proceedings of the 2015 Third International Conference on Image Information Processing (ICIIP), Waknaghat, India, 21–24 December 2015; pp. 548–550. [Google Scholar] [CrossRef]
  228. Rashid, A.M.; Albert, I.; Cosley, D.; Lam, S.K.; McNee, S.M.; Konstan, J.A.; Riedl, J. Getting to Know You: Learning New User Preferences in Recommender Systems. In Proceedings of the 7th International Conference on Intelligent User Interfaces, IUI ’02, New York, NY, USA, 13–16 January 2002; pp. 127–134. [Google Scholar] [CrossRef]
  229. Adamopoulos, P.; Tuzhilin, A. On Unexpectedness in Recommender Systems: Or How to Better Expect the Unexpected. ACM Trans. Intell. Syst. Technol. 2014, 5, 54. [Google Scholar] [CrossRef]
  230. Herlocker, J.L.; Konstan, J.A.; Terveen, L.G.; Riedl, J.T. Evaluating Collaborative Filtering Recommender Systems. ACM Trans. Inf. Syst. 2004, 22, 5–53. [Google Scholar] [CrossRef]
  231. McNee, S.M.; Riedl, J.; Konstan, J.A. Being Accurate is Not Enough: How Accuracy Metrics Have Hurt Recommender Systems. In Proceedings of the CHI ’06 Extended Abstracts on Human Factors in Computing Systems, CHI EA ’06, New York, NY, USA, 22–27 April 2006; pp. 1097–1101. [Google Scholar] [CrossRef]
  232. Zheng, H.; Wang, D.; Zhang, Q.; Li, H.; Yang, T. Do Clicks Measure Recommendation Relevancy?: An Empirical User Study. In Proceedings of the Fourth ACM Conference on Recommender Systems, RecSys ’10, New York, NY, USA, 26–30 September 2010; pp. 249–252. [Google Scholar] [CrossRef]
  233. Tang, T.; McCalla, G. Utilizing Artificial Learners to Help Overcome the Cold-Start Problem in a Pedagogically-Oriented Paper Recommendation System. In Adaptive Hypermedia and Adaptive Web-Based Systems; De Bra, P.M.E., Nejdl, W., Eds.; Springer: Berlin/Heidelberg, Germany, 2004; pp. 245–254. [Google Scholar]
  234. Tang, T.Y.; McCalla, G. A Multidimensional Paper Recommender: Experiments and Evaluations. IEEE Internet Comput. 2009, 13, 34–41. [Google Scholar] [CrossRef]
  235. Jokar, N.; Honarvar, A.; Esfandiari, K. A contextual information based scholary paper recommender system using big data platform. J. Fundam. Appl. Sci. 2016, 8, 914–924. [Google Scholar] [CrossRef]
  236. Raamkumar, A.S.; Foo, S.; Pang, N. A Framework for Scientific Paper Retrieval and Recommender Systems. arXiv 2016, arXiv:1609.01415. [Google Scholar] [CrossRef]
  237. Sesagiri Raamkumar, A.; Foo, S. Multi-method Evaluation in Scientific Paper Recommender Systems. In Proceedings of the Adjunct Publication of the 26th Conference on User Modeling, Adaptation and Personalization, UMAP ’18, New York, NY, USA, 8–11 July 2018; pp. 179–182. [Google Scholar] [CrossRef]
  238. Alotaibi, S.; Vassileva, J. Multi-dimensional Ratings for Research Paper Recommender Systems: A Qualitative Study. In Proceedings of the International Symposium on Web AlGorithms, Victoria, BC, Canada, 5–7 August 2015. [Google Scholar]
  239. Huang, Z.; Chung, W.; Ong, T.H.; Chen, H. A Graph-Based Recommender System for Digital Library. In Proceedings of the 2nd ACM/IEEE-CS Joint Conference on Digital Libraries, JCDL ’02, New York, NY, USA, 14–18 July 2002; pp. 65–73. [Google Scholar] [CrossRef]
240. Labille, K.; Gauch, S.; Joseph, A.S.; Bogers, T.; Koolen, M. Conceptual Impact-Based Recommender System for CiteSeerx. In Proceedings of the CBRecSys@RecSys, Vienna, Austria, 2 December 2015; pp. 50–53. [Google Scholar]
241. Mönnich, M.; Spiering, M. Adding value to the library catalog by implementing a recommendation system. D-Lib Mag. 2008, 14. [Google Scholar] [CrossRef]
  242. Tran, H.N.; Huynh, T.; Hoang, K. A Potential Approach to Overcome Data Limitation in Scientific Publication Recommendation. In Proceedings of the 2015 Seventh International Conference on Knowledge and Systems Engineering (KSE), Ho Chi Minh City, Vietnam, 8–10 October 2015; pp. 310–313. [Google Scholar] [CrossRef]
  243. Igbe, T.; Ojokoh, B. Incorporating user’s preferences into scholarly publications recommendation. Intell. Inf. Manag. 2016, 8, 27. [Google Scholar] [CrossRef]
  244. Cai, T.; Cheng, H.; Luo, J.; Zhou, S. An Efficient and Simple Graph Model for Scientific Article Cold Start Recommendation. In Conceptual Modeling; Comyn-Wattiau, I., Tanaka, K., Song, I.Y., Yamamoto, S., Saeki, M., Eds.; Springer: Cham, Switzerland, 2016; pp. 248–259. [Google Scholar]
  245. Wang, G.; He, X.; Ishuga, C.I. HAR-SI: A novel hybrid article recommendation approach integrating with social information in scientific social network. Knowl.-Based Syst. 2018, 148, 85–99. [Google Scholar] [CrossRef]
  246. Keshavarz, S.; Honarvar, A.R. A Parallel Paper recommender system in Big Data Scholarly. In Proceedings of the International Conference on Electrical Engineering and Computer, Dhaka, Bangladesh, 19–20 December 2015. [Google Scholar]
  247. Tang, T.; McCalla, G. Beyond Learners’ Interest: Personalized Paper Recommendation Based on Their Pedagogical Features for an e-Learning System. In PRICAI 2004: Trends in Artificial Intelligence; Zhang, C., Guesgen, H.W., Yeap, W.K., Eds.; Springer: Berlin/Heidelberg, Germany, 2004; pp. 301–310. [Google Scholar]
  248. Winoto, P. Contexts in a Paper Recommendation System with Collaborative Filtering. 2012. Available online: https://www.irrodl.org/index.php/irrodl/article/view/1243/2367 (accessed on 25 August 2025).
  249. Shimbo, M.; Takahiko, I.; Matsumoto, Y. Evaluation of kernel-based link analysis measures on research paper recommendation. In Proceedings of the 7th ACM International Conference on Digital Libraries, Vancouver, BC, Canada, 18–23 June 2007; pp. 354–355. [Google Scholar]
  250. Manouselis, N.; Verbert, K. Layered Evaluation of Multi-Criteria Collaborative Filtering for Scientific Paper Recommendation. Procedia Comput. Sci. 2013, 18, 1189–1197. [Google Scholar] [CrossRef]
  251. Chen, C.; Mayanglambam, S.D.; Hsu, F.; Lu, C.; Lee, H.; Ho, J. Novelty Paper Recommendation Using Citation Authority Diffusion. In Proceedings of the 2011 International Conference on Technologies and Applications of Artificial Intelligence, Chung-Li, Taiwan, 11–13 November 2011; pp. 126–131. [Google Scholar] [CrossRef]
  252. Sun, Y.; Ni, W.; Men, R. A Personalized Paper Recommendation Approach Based on Web Paper Mining and Reviewer’s Interest Modeling. In Proceedings of the 2009 International Conference on Research Challenges in Computer Science, Shanghai, China, 28–29 December 2009; pp. 49–52. [Google Scholar] [CrossRef]
  253. Hong, K.; Jeon, H.; Jeon, C. Advanced personalized research paper recommendation system based on expanded userprofile through semantic analysis. Int. J. Digit. Content Technol. Its Appl. 2013, 7, 67. [Google Scholar]
  254. Amami, M.; Pasi, G.; Stella, F.; Faiz, R. An LDA-Based Approach to Scientific Paper Recommendation. In Natural Language Processing and Information Systems; Métais, E., Meziane, F., Saraee, M., Sugumaran, V., Vadera, S., Eds.; Springer: Cham, Switzerland, 2016; pp. 200–210. [Google Scholar]
  255. Siama, P.; Yaneeb, J.; Ukrit, M. Digital Research Paper Recommendation System Appling Feature Based Method. In Proceedings of the International Conference on Language Education and Innovation, Phuket, Thailand, June 2015; Volume 2. [Google Scholar]
  256. Jiang, Z.; Liu, X.; Gao, L. Dynamic Topic/Citation Influence Modeling for Chronological Citation Recommendation. In Proceedings of the 5th International Workshop on Web-scale Knowledge Representation Retrieval & Reasoning, Web-KR ’14, New York, NY, USA, 3 November 2014; pp. 15–18. [Google Scholar] [CrossRef]
257. Jiang, Z.; Lu, Y.; Liu, X. Cross-language Citation Recommendation via Publication Content and Citation Representation Fusion. In Proceedings of the 18th ACM/IEEE on Joint Conference on Digital Libraries, JCDL ’18, Fort Worth, TX, USA, 3–6 June 2018; pp. 347–348. [Google Scholar] [CrossRef]
258. Roy, D. An Improved Test Collection and Baselines for Bibliographic Citation Recommendation. In Proceedings of the 2017 ACM Conference on Information and Knowledge Management, CIKM ’17, Singapore, 6–10 November 2017; pp. 2271–2274. [Google Scholar] [CrossRef]
  259. Tang, X.; Wan, X.; Zhang, X. Cross-language Context-aware Citation Recommendation in Scientific Articles. In Proceedings of the 37th International ACM SIGIR Conference on Research & Development in Information Retrieval, SIGIR ’14, New York, NY, USA, 6–11 July 2014; pp. 817–826. [Google Scholar] [CrossRef]
  260. Lin, J.; Wilbur, W.J. PubMed related articles: A probabilistic topic-based model for content similarity. BMC Bioinform. 2007, 8, 423. [Google Scholar] [CrossRef]
261. Rokach, L.; Mitra, P.; Kataria, S.; Huang, W.; Giles, L. A Supervised Learning Method for Context-Aware Citation Recommendation in a Large Corpus. In Proceedings of the Workshop on Large-Scale and Distributed Systems for Information Retrieval (LSDS-IR 2013), 2013. Available online: https://fontoura.org/papers/lsdsir2013.pdf (accessed on 25 August 2025). [Google Scholar]
  262. Billsus, D.; Pazzani, M.J. Learning Collaborative Information Filters. In Proceedings of the Fifteenth International Conference on Machine Learning, ICML ’98, San Francisco, CA, USA, 24–27 July 1998; pp. 46–54. [Google Scholar]
  263. Breese, J.S.; Heckerman, D.; Kadie, C. Empirical Analysis of Predictive Algorithms for Collaborative Filtering. In Proceedings of the Fourteenth Conference on Uncertainty in Artificial Intelligence, San Francisco, CA, USA, 24–26 July 1998; pp. 43–52. [Google Scholar]
  264. Ge, M.; Delgado-Battenfeld, C.; Jannach, D. Beyond Accuracy: Evaluating Recommender Systems by Coverage and Serendipity. In Proceedings of the Fourth ACM Conference on Recommender Systems, RecSys ’10, New York, NY, USA, 26–30 September 2010; pp. 257–260. [Google Scholar] [CrossRef]
  265. Knijnenburg, B.P.; Willemsen, M.C.; Gantner, Z.; Soncu, H.; Newell, C. Explaining the User Experience of Recommender Systems. User Model.-User-Adapt. Interact. 2012, 22, 441–504. [Google Scholar] [CrossRef]
  266. Pu, P.; Chen, L.; Hu, R. Evaluating Recommender Systems from the User’s Perspective: Survey of the State of the Art. User Model.-User-Adapt. Interact. 2012, 22, 317–355. [Google Scholar] [CrossRef]
  267. Corneli, J.; Jordanous, A.; Guckelsberger, C.; Pease, A.; Colton, S. Modelling serendipity in a computational context. arXiv 2014, arXiv:1411.0440. [Google Scholar]
  268. Lu, Q.; Chen, T.; Zhang, W.; Yang, D.; Yu, Y. Serendipitous Personalized Ranking for Top-N Recommendation. In Proceedings of the 2012 IEEE/WIC/ACM International Joint Conferences on Web Intelligence and Intelligent Agent Technology, WI-IAT ’12, Macau, China, 4–7 December 2012; IEEE Computer Society: Brighton, UK, 2012; Volume 1, pp. 258–265. [Google Scholar]
  269. Zheng, Q.; Chan, C.K.; Ip, H.H.S. An Unexpectedness Augmented Utility Model for Making Serendipitous Recommendation. In Advances in Data Mining: Applications and Theoretical Aspects; Perner, P., Ed.; Springer: Cham, Switzerland, 2015; pp. 216–230. [Google Scholar]
  270. Adomavicius, G.; Kwon, Y. Improving Aggregate Recommendation Diversity Using Ranking-Based Techniques. IEEE Trans. Knowl. Data Eng. 2012, 24, 896–911. [Google Scholar] [CrossRef]
  271. Konstan, J.A.; McNee, S.M.; Ziegler, C.N.; Torres, R.; Kapoor, N.; Riedl, J.T. Lessons on Applying Automated Recommender Systems to Information-seeking Tasks. In Proceedings of the 21st National Conference on Artificial Intelligence, AAAI’06, Boston, MA, USA, 16–17 July 2006; AAAI Press: Washington, DC, USA, 2006; Volume 2, pp. 1630–1633. [Google Scholar]
  272. Steinert, L.; Hoppe, H.U. A Comparative Analysis of Network-Based Similarity Measures for Scientific Paper Recommendations. In Proceedings of the 2016 Third European Network Intelligence Conference (ENIC), Wroclaw, Poland, 5–7 September 2016; pp. 17–24. [Google Scholar] [CrossRef]
  273. Färber, M.; Coutinho, M.; Yuan, S. Biases in scholarly recommender systems: Impact, prevalence, and mitigation. Scientometrics 2023, 128, 2703–2736. [Google Scholar] [CrossRef]
  274. Pramod, D. Privacy-preserving techniques in recommender systems: State-of-the-art review and future research agenda. Data Technol. Appl. 2023, 57, 32–55. [Google Scholar] [CrossRef]
Figure 1. Monthly e-preprint scholarly publication submission rates in ArXiv from July 1991 to June 2025 show the exponential rate at which new resources are being added [7].
Figure 2. PRISMA flow diagram of the literature search and selection process.
Figure 3. Distribution of feature types used in scholarly recommendation tasks targeting a piece of work. Each bar represents the number of reviewed papers utilising a specific feature (e.g., citation data, title, or abstract).
Figure 4. Taxonomy of user feedback types used in scholarly recommender systems, grouped by feedback mode. Each bar represents the number of reviewed studies that incorporate the corresponding signal. Implicit feedback types (e.g., reading and bookmarking) dominate the literature, while explicit signals (e.g., rating, scoring, and voting) are less frequently used. A dashed line visually separates implicit and explicit categories.
Table 2. Different recommendation tasks adopted by the reviewed literature.
A piece of work (a paper): [17,20,49,59,60,69,70,71,72,76,78,80,82,83,84,85,90,93,109,116,123,124,128,129,130,131,132,133,134,135,136,137,138,139,140,141,142,143,144,145,146,147,148,149,150,151,162,163,164,165,169,170]
A piece of work (a set of papers): [20,70,71,78,80,85,90,116,149,150,151,164,165,170]
A piece of work (a manuscript): [18,19,24,27,50,52,61,152]
A piece of work (a snapshot of text): [9,21,25,47,51,53,55,56,68,74,77,79,81,86,92,95,96,97,98,110,111,112,114,153,154,161,166,167,171]
A user: [22,23,26,28,29,30,31,32,33,34,35,36,37,38,39,40,41,42,43,44,45,46,48,54,57,58,62,63,64,65,66,67,73,75,87,88,89,91,94,99,100,101,102,103,104,105,106,107,108,113,115,117,118,119,120,121,122,125,126,127,155,156,157,158,159,160,172,173,174,175,176,177,178,179,180,181,182,183,184,185,186,187,188,189,190,191,192]
Table 3. Brief description of citation knowledge.
Citation Graph: Captures citation relations between papers as a graph, where nodes represent papers and edges represent citation relations between them. Relations can be directed [128,148] or undirected [159]. Although this method is commonly used due to the availability of metadata, it may not always accurately reflect preferences, as citations can serve different purposes, including criticism [168,193,194].
Citation Proximity: Refers to the distance between co-cited papers in a publication; shorter distances imply stronger relevance. The notion was conceptualised in 2009 by [130]; [195] applied it to web page recommendation, and [141] utilised it for the research paper recommendation task.
Citation Context: The text surrounding a citation, indicating the semantics of the citation [52,58,147]. It has been used to enrich the profiles of target manuscripts [52] or user preferences [58,193,194] when recommending scientific publications.
Citation Intention: Captures the purpose of a citation, such as providing background or comparing work. Different intentions may reflect varying levels of relevance. While extensively used in scientometrics, it has been less explored in recommender systems [134,166,193].
Citation Section: Refers to the section of a paper in which a citation appears (e.g., the introduction or related work) [139,168]. Different sections imply different degrees of relevance. Ref. [168] explored this notion in combination with citation graphs and found improved performance, especially for citations appearing in the introduction, background, and method sections.
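To make the notions in Table 3 concrete, the sketch below derives a citation graph and a citation-proximity score from citation markers extracted from a full-text document. It is a minimal illustration rather than an implementation from any reviewed system: the input structure, the citing-paper identifier, and the proximity scaling constant are all assumptions.

```python
# Minimal sketch (illustrative assumptions throughout): building a citation
# graph and a citation-proximity score from citation markers extracted from
# full text, e.g., by a PDF/XML parser.
from itertools import combinations

import networkx as nx

# Assumed input: each citation marker carries the cited paper's id, the
# character offset at which it appears, and the enclosing section.
citations = [
    {"cited_id": "paper_A", "offset": 1200, "section": "introduction"},
    {"cited_id": "paper_B", "offset": 1260, "section": "introduction"},
    {"cited_id": "paper_C", "offset": 8400, "section": "method"},
]
citing_id = "paper_X"  # hypothetical id of the citing paper

# Citation graph: directed edges from the citing paper to each cited paper,
# annotated with the citation section (cf. Citation Section in Table 3).
G = nx.DiGraph()
for c in citations:
    G.add_edge(citing_id, c["cited_id"], section=c["section"])

def proximity(c1, c2, scale=5000.0):
    """Toy proximity score in (0, 1]: co-citations that appear closer in the
    text score higher; 'scale' is an arbitrary assumption."""
    return 1.0 / (1.0 + abs(c1["offset"] - c2["offset"]) / scale)

for c1, c2 in combinations(citations, 2):
    print(c1["cited_id"], c2["cited_id"], round(proximity(c1, c2), 3))
```

In this toy input, paper_A and paper_B are co-cited 60 characters apart and receive a much higher proximity score than either does with paper_C, mirroring the intuition in Table 3 that shorter distances imply stronger relevance.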
Table 4. Reviewed papers utilising different notions of citation knowledge for modelling as a target (a piece of work). Abbreviations: CG—Citation Graph, CC—Citation Context, CS—Citation Section, CP—Citation Proximity, CI—Citation Intention.
[146]: CG, CC, CS
[9,21,52,74,110,129,152,164]: CG, CC
[139]: CG, CS
[17,18,19,20,24,25,49,53,68,70,71,77,78,79,82,83,85,93,109,111,116,123,124,128,133,135,136,137,138,140,142,143,144,145,148,149,150,151,153,162,163,165]: CG
[47,50,114,147,154,167]: CC
[166]: CC, CI
[134]: CG, CC, CI
Table 6. Publicly available datasets for academic RecSys. Here, PDFav stands for Portable Document Format (PDF) document availability and UPHav represents the availability of authors’ publication history; A/P = Accessed/Published, R = Ratings, and NS = Not Specified.
AMiner 1: A series of datasets capturing relations among citations, academic social networks, topics, etc.; the figures here refer to the citations dataset V11. A/P: 2019; Users: NS; Items: 4 M; R: No; PDFav: No; UPHav: No.
Open Citations 2: Open repository of scholarly citation data. A/P: 2019; Users: NS; Items: 7.5 M; R: No; PDFav: No; UPHav: No.
Open Academic Graph 3: Large knowledge graph combining Microsoft Academic Graph and AMiner. A/P: 2019; Users: 253 M; Items: 381 M; R: No; PDFav: No; UPHav: No.
ArXiv 4: Open-access e-prints of publications in different fields such as physics, mathematics, etc. A/P: 2019; Users: NS; Items: 1.5 M; R: No; PDFav: Yes 5; UPHav: No.
CORE 6: Dataset of open-access research publications published up to 2018. A/P: 2019; Users: No; Items: 9.8 M; R: No; PDFav: Yes 7; UPHav: No.
CiteULike [67]: Dataset of users’ selected bookmarks to academic papers. A/P: 2019; Users: 5551; Items: 16,980; R: No; PDFav: No; UPHav: No.
Mendeley [218]: Dataset shared by Mendeley for a recommender system challenge. A/P: 2010 8; Users: 50,000; Items: 4.8 M; R: Yes 9; PDFav: No; UPHav: No.
SPD 1 [126]: ACL anthology-based papers published between 2000 and 2006. A/P: 2019; Users: 28; Items: 597; R: Yes; PDFav: Yes; UPHav: No.
SPD 2 [67]: ACM proceedings-based papers published between 2000 and 2010. A/P: 2019; Users: 50; Items: 100,531; R: Yes 10; PDFav: No; UPHav: No.
[193]: 35,473 articles collected after selecting authors from DBLP. A/P: 2020; Users: 547; Items: 15,174; R: 17,637; PDFav: No; UPHav: No.
[194]: 35,473 articles collected after selecting authors from DBLP. A/P: 2020; Users: 446; Items: 9399; R: 11,381; PDFav: No; UPHav: No.
1: https://www.aminer.cn/aminer_data (accessed on 14 July 2024); 2: https://download.opencitations.net/ (accessed on 14 July 2024); 3: https://www.microsoft.com/en-us/research/project/open-academic-graph/ (accessed on 14 July 2024); 4: https://arxiv.org/help/bulk_data (accessed on 14 July 2024); 5: Download via requester-pays Amazon S3 bucket https://arxiv.org/help/bulk_data_s3 (accessed on 14 July 2024); 6: https://core.ac.uk/services/dataset/ (accessed on 14 July 2024); 7: Data need to be requested; 8: Published date; 9: Anonymised data that need to be requested; 10: Anonymised data.
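Several of the datasets in Table 6 are distributed as large JSON-lines dumps. As a hedged example, the sketch below streams citation edges from a file in the style of the AMiner citations dataset; the field names ("id", "references") and the file name are assumptions to verify against the version actually downloaded.

```python
# Minimal sketch, assuming a JSON-lines dump in which each record carries an
# "id" and a list of cited ids under "references" (field names vary between
# dataset versions, so check them against the downloaded files).
import json

def iter_citation_edges(path):
    """Yield (citing_id, cited_id) pairs without loading the dump into memory."""
    with open(path, encoding="utf-8") as f:
        for line in f:
            record = json.loads(line)
            for cited in record.get("references", []):
                yield record["id"], cited

# Hypothetical usage:
# for citing, cited in iter_citation_edges("aminer_citations_v11.txt"):
#     ...  # e.g., feed the edges into a citation graph
```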
Table 7. Evaluation methods used by the reviewed papers (offline, online, or user study; participant counts in parentheses where reported).
[98]: user study (16)
[75]: user study (123)
[136]: user study (participants not reported)
[145]: online (31)
[50]: user study (4)
[141]: user study (10)
[147]: user study (14)
[187]: online (938)
[206,233,234]: user study (24)
[178]: user study (12)
[235]: user study (25)
[236,237]: user study (119)
[28]: offline and user study (3)
[181,238]: user study (15)
[73]: user study (5)
[44]: user study (200)
[163,239]: user study (2)
[31,43]: user study (40)
[100]: user study (7)
[240]: user study (30)
[134]: offline and user study (5)
[149]: user study (19)
[125]: offline and user study (111)
[82]: user study (138)
[128]: offline and user study (participants not reported)
[129]: user study (participants not reported)
[17,173,241]: online
[17,18,21,28,36,37,38,42,45,47,49,52,53,55,57,58,59,60,61,62,66,67,74,76,83,92,93,110,111,112,113,115,118,120,126,137,140,143,144,146,155,160,161,170,171,174,176,177,205,242,243,244,245,246,247,248,249,250,251,252,253,254,255,256,257,258,259,260,261]: offline
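Table 7 shows that purely offline evaluation dominates the reviewed literature. A typical offline protocol holds out items a user has interacted with and scores a ranked recommendation list with accuracy-style metrics; the sketch below computes two common ones, Precision@k and NDCG@k, over illustrative (assumed) data.

```python
# Minimal sketch of accuracy-style offline metrics; the recommendation list
# and held-out ground truth below are illustrative assumptions.
import math

def precision_at_k(ranked, relevant, k):
    """Fraction of the top-k recommendations that are held-out relevant items."""
    hits = sum(1 for item in ranked[:k] if item in relevant)
    return hits / k

def ndcg_at_k(ranked, relevant, k):
    """Discounted cumulative gain of the top-k list, normalised by the ideal
    ranking (all relevant items first)."""
    dcg = sum(1.0 / math.log2(i + 2)
              for i, item in enumerate(ranked[:k]) if item in relevant)
    ideal = sum(1.0 / math.log2(i + 2) for i in range(min(len(relevant), k)))
    return dcg / ideal if ideal > 0 else 0.0

ranked = ["p3", "p7", "p1", "p9", "p4"]   # system's top-5 recommendations
relevant = {"p1", "p3"}                   # held-out ground truth for one user
print(precision_at_k(ranked, relevant, 5))  # 0.4
print(ndcg_at_k(ranked, relevant, 5))       # ~0.92
```

As discussed throughout this review, such accuracy metrics capture only part of recommendation quality; beyond-accuracy criteria such as novelty, diversity, and serendipity require the user-centric evaluations that the online and user-study rows of Table 7 represent.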