Popularity Bias in Recommender Systems: The Search for Fairness in the Long Tail
Abstract
1. Introduction
1.1. Research Questions
- RQ1: How does popularity bias affect the functioning and fairness of recommender systems towards users and stakeholders?
- RQ2: What can be done to mitigate the effects of popularity bias and showcase more of the long tail in recommendations?
1.2. Methodology
- Section 2 will review definitions of popularity bias, a term that applies both to human behavior and to the characteristics of algorithms. We will examine how these two aspects are related and how this problem affects recommender systems specifically.
- Section 3 will dive deeper into how the fairness of recommender systems is affected by this bias in order to answer RQ1. Since recommender systems have multiple stakeholders and can be used for a variety of applications, we will review how fairness is impacted from multiple viewpoints.
- Section 4 addresses RQ2 by reviewing the methods proposed in the literature to expose more of the long tail to users, grouping these algorithms by their general approach and describing metrics for evaluating their effectiveness. The section also discusses open challenges related to these approaches.
2. Understanding Popularity Bias
2.1. Human Popularity Bias
2.2. Algorithmic Popularity Bias
3. Fairness Perspectives
3.1. Popularity and Fairness for Objects
3.2. Popularity and Fairness for Subjects
3.3. Popularity Bias Effect in Different Systems
4. Mitigation Approaches
4.1. Novelty and Diversity
4.1.1. Metrics
Metrics over (Lists of) Recommendations
Global Metrics
4.1.2. Algorithms
Re-Ranking
Algorithm Modification
4.1.3. Challenges
4.2. Serendipity
4.2.1. Metrics
4.2.2. Algorithms
Re-Ranking
Algorithm Modification
Other Approaches
4.2.3. Challenges
4.3. Other Approaches
4.3.1. Algorithms
4.3.2. Challenges
5. Discussion and Future Opportunities
- RQ1: How does popularity bias affect the functioning and fairness of recommender systems towards users and stakeholders?

In order to fully understand this question, we first discussed definitions of popularity bias in various contexts, showing how the preference for more popular items is rooted in human nature (see Section 2). In recommender systems in particular, popular items are disproportionately likely to be recommended compared with less popular items, as discussed in Section 3. While this is partly to be expected, because items can become popular due to their inherent quality, it can also be detrimental to the quality of the system: users who would like to see more niche items are still recommended popular ones, and items that deserve more recognition are not recommended because of this bias. Moreover, the literature shows that popularity bias can interact with and amplify pre-existing systematic biases, and can affect different demographic groups in different ways. All of these effects hinder the fairness of recommender systems and can worsen the overall user experience.
- RQ2: What can be done to mitigate the effects of popularity bias and showcase more of the long tail in recommendations?

In Section 4, we reviewed the large body of research devoted to the diversification of recommendations, which aims to expose more items from the long tail. Three recurring concepts emerged from the literature: diversity (the inclusion of dissimilar items within a set of recommendations), novelty (the inclusion of items that differ from the user’s history), and serendipity (the recommendation of items that are unexpected yet valuable to the user). For each concept, researchers have proposed metrics to quantify the level of variety in the recommendations, as well as algorithms that leverage those metrics. Some researchers have also proposed novel recommendation algorithms that explicitly encode popularity and learn to mitigate the preference towards popular items.
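To make the first two concepts concrete, the following minimal Python sketch computes intra-list diversity as the average pairwise cosine distance between recommended items and novelty as the mean self-information of item popularity. This is our own illustration under common definitions from the literature (cf. Section 4.1.1), not code from any surveyed system; item feature vectors and per-item interaction counts are assumed to be available.

```python
import numpy as np

def intra_list_diversity(item_vectors):
    """Average pairwise cosine distance among the recommended items (higher = more diverse)."""
    V = np.asarray(item_vectors, dtype=float)
    V = V / np.linalg.norm(V, axis=1, keepdims=True)   # unit-normalise each item vector
    sims = V @ V.T                                     # pairwise cosine similarities
    iu = np.triu_indices(len(V), k=1)                  # each unordered pair counted once
    return float(np.mean(1.0 - sims[iu]))

def novelty(recommended, interactions_per_item, n_users):
    """Mean self-information -log2 p(i), where p(i) is the share of users who interacted with item i."""
    p = np.array([interactions_per_item[i] / n_users for i in recommended], dtype=float)
    return float(np.mean(-np.log2(p)))

# Example: three items described by 2-D feature vectors; popularity counts out of 1000 users.
print(intra_list_diversity([[1, 0], [0, 1], [1, 1]]))
print(novelty(["a", "b"], {"a": 900, "b": 3}, n_users=1000))
```

Recommending only long-tail items drives the novelty term up, while mixing items from dissimilar regions of the catalogue drives the diversity term up; the two are related but not interchangeable.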
- Open Problem 1: Evaluation Paradigms

The evaluation of approaches that mitigate popularity bias and promote serendipitous recommendations has largely relied on metrics adapted from information retrieval, such as novelty, diversity, and their variants. However, these metrics have recognized limitations when applied to recommender systems [109,125], as they do not fully capture user experience and utility. For serendipity in particular, the current quantitative metrics based on unexpectedness and accuracy serve as proxies but may fail to capture the qualitative aspects that make a recommendation truly serendipitous from a user’s perspective. There is a need to move beyond evaluating only the recommendation outputs towards more holistic paradigms that assess the capability of the recommendation process itself to generate serendipitous results. Drawing inspiration from the field of computational creativity [126], which has long grappled with evaluating creative artifacts, could provide a fresh perspective [122,127]. Computational creativity emphasizes evaluating the process that gives rise to creative outcomes rather than just the outcomes themselves, and it employs multi-faceted evaluation techniques, including human studies and analyses of how the conceptual space is explored. Adapting such process-centric evaluation paradigms could enable a more insightful assessment of serendipity in recommender systems beyond what is currently possible with output-based metrics alone.
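As an example of the output-based proxies criticized above, the sketch below computes serendipity as the share of recommended items that are relevant to the user but absent from an "expected" baseline list (for instance, one produced by a pure popularity recommender). This is one illustrative formulation among several in the literature, not a standard definition; the inputs are assumed to be sets of item identifiers.

```python
def serendipity(recommended, relevant, expected):
    """Share of recommendations that are relevant to the user AND not in the 'expected'
    baseline (e.g., items a popularity-only recommender would also have suggested)."""
    recommended = set(recommended)
    unexpected_hits = (recommended & set(relevant)) - set(expected)
    return len(unexpected_hits) / len(recommended) if recommended else 0.0

# Example: two of five recommendations are relevant, but one of them is an obvious popular pick.
print(serendipity(recommended=["a", "b", "c", "d", "e"],
                  relevant=["a", "b", "x"],
                  expected=["a", "p", "q"]))   # -> 0.2 (only "b" counts as serendipitous)
```

The limitation discussed in this open problem is visible in the code itself: the metric only inspects the output list and a baseline, and says nothing about whether the process that produced "b" could reliably surprise the user again.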
- Open Problem 2: Leveraging Additional Information

Most current approaches that address popularity bias rely primarily on user–item interaction data, such as explicit ratings or implicit interaction logs. However, these data are inherently skewed by the very popularity of the items themselves. To overcome this intrinsic limitation and enable more meaningful recommendations beyond popularity, recommender systems should leverage additional sources of information. Content metadata about the items, such as textual descriptions, tags, and multimedia attributes, can provide a semantic understanding of the items themselves, separate from their popularity. Similarly, contextual signals such as the user’s recent activity, location, and device can enrich the user profile beyond past interactions. Incorporating such additional information through hybrid or multi-signal approaches can help overcome the limitations of collaborative data alone: content-based and context-aware recommendations, which are unbiased by popularity, can be combined with collaborative ones to mitigate bias. Furthermore, this additional information can feed more advanced learning models, such as neural networks or reasoning systems, to infer latent preferences and discover non-trivial connections between users and items. While acquiring and integrating these additional data presents operational challenges, it represents a promising direction for moving beyond the intrinsic limitations of popularity-skewed interaction data.
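As a minimal illustration of such a hybrid, multi-signal approach, the sketch below blends a collaborative-filtering score with popularity-agnostic content and context scores using a fixed weighted sum. The weights, and the assumption that the three scores are pre-computed and normalised to a comparable scale, are ours for illustration only; a real system would learn or tune them.

```python
def hybrid_score(cf_score, content_score, context_score, weights=(0.5, 0.3, 0.2)):
    """Blend a collaborative-filtering score with popularity-agnostic content and
    context signals; all scores are assumed to lie in [0, 1]."""
    w_cf, w_content, w_ctx = weights
    return w_cf * cf_score + w_content * content_score + w_ctx * context_score

# Example: a niche item with a weak collaborative score but a strong content match
# can outrank a popular item once the extra signals are taken into account.
print(hybrid_score(cf_score=0.35, content_score=0.90, context_score=0.60))  # 0.565
print(hybrid_score(cf_score=0.70, content_score=0.20, context_score=0.10))  # 0.43
```

The design choice here is that the collaborative term carries the popularity skew, while the other two terms give long-tail items an independent route into the ranking.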
- Future Opportunity 1: Unified View of Popularity

The issue of popularity bias has two interconnected sides: the over-recommendation of already popular items and the under-recommendation of long-tail, niche items. Current research efforts tend to tackle these as separate problems, with some approaches aimed at limiting exposure of popular items while others try to boost the recommendation of novel and diverse items from the long tail. However, such a siloed view fails to address the inherent skewness of the recommendation distribution towards popularity. A unified perspective is needed that simultaneously accounts for both the head and the long tail. Hybrid recommendation approaches that combine different models and strategies could provide a path towards this unified treatment. Additionally, evaluation metrics should evolve to characterize the overall distribution rather than focusing on either extreme: measures such as the Gini index or Shannon entropy offer a more comprehensive view than metrics that only capture popular items or novel recommendations in isolation. Exploring unified metrics aligned with a hybrid recommendation framework could pave the way for more balanced and less popularity-skewed recommendations.
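Both distributional measures mentioned above can be computed directly from the exposure counts of items across all users' recommendation lists. The sketch below shows one way of doing so, assuming recommendations are available as plain lists of item identifiers; note that, as written, items never recommended at all do not enter the distribution.

```python
import numpy as np
from collections import Counter

def exposure_counts(recommendation_lists):
    """How many times each item appears across all users' recommendation lists."""
    counts = Counter(item for rec in recommendation_lists for item in rec)
    return np.array(list(counts.values()), dtype=float)

def gini_index(counts):
    """0 = exposure spread perfectly evenly across items, 1 = all exposure on a single item."""
    x = np.sort(counts)
    n = len(x)
    cum = np.cumsum(x)
    return float((n + 1 - 2 * cum.sum() / cum[-1]) / n)

def shannon_entropy(counts):
    """Entropy (in bits) of the exposure distribution; higher means a more even spread."""
    p = counts / counts.sum()
    return float(-np.sum(p * np.log2(p)))

# Example: two users' top-3 lists, with one item ("hit") dominating exposure.
recs = [["hit", "a", "b"], ["hit", "c", "b"]]
counts = exposure_counts(recs)
print(gini_index(counts), shannon_entropy(counts))
```

A recommender that over-serves the head pushes the Gini index towards 1 and the entropy down; a unified evaluation would track how mitigation strategies move both numbers without collapsing accuracy.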
- Future Opportunity 2: Personalized Levels of Popularity

As described in Section 2.1, popularity bias is intrinsic to human nature and can sometimes even be seen as a positive feature of human psychology or of an information system. However, simply accepting that a system can be biased towards popular items because humans are similarly biased misses the bigger picture: some users want to find more niche items and would not accept being exposed to many popular ones [56]. Moreover, without explicit control over the popularity of recommendations, different categories of users can receive unbalanced treatment [18]. Therefore, to increase the overall effectiveness of recommender systems and user satisfaction, the popularity of recommended items could be explicitly controlled by the system, producing recommendations that are personalized both in their content and in this meta-feature, matching each user's level of explorativeness.
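One simple way to operationalize this idea is a greedy re-ranking that trades off predicted relevance against keeping the mean popularity of the list close to the user's own historical tendency. The sketch below is a hypothetical heuristic of ours, not a method proposed in the surveyed literature; item relevance and popularity scores are assumed to be pre-computed in [0, 1], and the target popularity could, for instance, be the mean popularity of the items in the user's profile.

```python
def popularity_calibrated_rerank(candidates, relevance, item_pop, target_pop, k=10, lam=0.5):
    """Greedily build a list of k items, balancing predicted relevance against keeping the
    list's mean item popularity close to the user's own tendency (target_pop).
    `relevance` and `item_pop` map item ids to scores in [0, 1]; `lam` sets the trade-off."""
    selected, remaining = [], list(candidates)
    for _ in range(min(k, len(remaining))):
        def gain(item):
            pops = [item_pop[i] for i in selected] + [item_pop[item]]
            calibration_error = abs(sum(pops) / len(pops) - target_pop)
            return (1 - lam) * relevance[item] - lam * calibration_error
        best = max(remaining, key=gain)
        selected.append(best)
        remaining.remove(best)
    return selected

# Example: a mainstream-leaning user (target_pop = 0.7) and an explorative one (target_pop = 0.2)
# receive different lists from the same candidates.
rel = {"pop_hit": 0.9, "mid": 0.7, "niche": 0.65}
pop = {"pop_hit": 0.95, "mid": 0.5, "niche": 0.05}
print(popularity_calibrated_rerank(rel.keys(), rel, pop, target_pop=0.7, k=2))
print(popularity_calibrated_rerank(rel.keys(), rel, pop, target_pop=0.2, k=2))
```

The point of the example is the meta-feature: the same relevance model yields different exposure to the long tail depending on the user's inferred or declared explorativeness.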
6. Conclusions
Author Contributions
Funding
Conflicts of Interest
References
- Bobadilla, J.; Ortega, F.; Hernando, A.; Gutiérrez, A. Recommender systems survey. Knowl.-Based Syst. 2013, 46, 109–132. [Google Scholar] [CrossRef]
- Vermaas, P.; Kroes, P.; Van de Poel, I.; Franssen, M.; Houkes, W. A philosophy of technology: From technical artefacts to sociotechnical systems. Synth. Lect. Eng. Technol. Soc. 2011, 6, 1–134. [Google Scholar]
- Ntoutsi, E.; Fafalios, P.; Gadiraju, U.; Iosifidis, V.; Nejdl, W.; Vidal, M.E.; Ruggieri, S.; Turini, F.; Papadopoulos, S.; Krasanakis, E. Bias in data-driven artificial intelligence systems—An introductory survey. Wiley Interdiscip. Rev. Data Min. Knowl. Discov. 2020, 10, e1356. [Google Scholar] [CrossRef]
- Osoba, O.A.; Welser, W., IV. An Intelligence in Our Image: The Risks of Bias and Errors in Artificial Intelligence; Rand Corporation: Santa Monica, CA, USA, 2017. [Google Scholar]
- Banerjee, A.V. A Simple Model of Herd Behavior. Q. J. Econ. 1992, 107, 797–817. [Google Scholar] [CrossRef]
- Abdollahpouri, H.; Mansoury, M.; Burke, R.; Mobasher, B. The Unfairness of Popularity Bias in Recommendation. arXiv 2019, arXiv:1907.13286. [Google Scholar]
- Idrissi, N.; Zellou, A. A systematic literature review of sparsity issues in recommender systems. Soc. Netw. Anal. Min. 2020, 10, 15. [Google Scholar] [CrossRef]
- Bobadilla, J.; Serradilla, F. The effect of sparsity on collaborative filtering metrics. In Proceedings of the Twentieth Australasian Conference on Australasian Database, Wellington, New Zealand, 20–23 January 2009; Volume 92, pp. 9–18. [Google Scholar]
- Park, Y.J.; Tuzhilin, A. The long tail of recommender systems and how to leverage it. In Proceedings of the 2008 ACM Conference on Recommender Systems, Lausanne, Switzerland, 23–25 October 2008; pp. 11–18. [Google Scholar]
- Wang, Y.; Ma, W.; Zhang, M.; Liu, Y.; Ma, S. A survey on the fairness of recommender systems. Acm Trans. Inf. Syst. 2023, 41, 1–43. [Google Scholar] [CrossRef]
- Zhao, Y.; Wang, Y.; Liu, Y.; Cheng, X.; Aggarwal, C.C.; Derr, T. Fairness and diversity in recommender systems: A survey. Acm Trans. Intell. Syst. Technol. 2023, 16, 1–28. [Google Scholar] [CrossRef]
- Jin, D.; Wang, L.; Zhang, H.; Zheng, Y.; Ding, W.; Xia, F.; Pan, S. A survey on fairness-aware recommender systems. Inf. Fusion 2023, 100, 101906. [Google Scholar] [CrossRef]
- Chen, J.; Dong, H.; Wang, X.; Feng, F.; Wang, M.; He, X. Bias and debias in recommender system: A survey and future directions. Acm Trans. Inf. Syst. 2023, 41, 1–39. [Google Scholar] [CrossRef]
- Klimashevskaia, A.; Jannach, D.; Elahi, M.; Trattner, C. A survey on popularity bias in recommender systems. User Model.-User-Adapt. Interact. 2024, 34, 1777–1834. [Google Scholar] [CrossRef]
- Greenhalgh, T.; Thorne, S.; Malterud, K. Time to challenge the spurious hierarchy of systematic over narrative reviews? Eur. J. Clin. Investig. 2018, 48, e12931. [Google Scholar] [CrossRef]
- Abdollahpouri, H.; Burke, R. Multi-stakeholder recommendation and its connection to multi-sided fairness. arXiv 2019, arXiv:1907.13158. [Google Scholar]
- Abdollahpouri, H. Popularity Bias in Ranking and Recommendation. In Proceedings of the 2019 AAAI/ACM Conference on AI, Ethics, and Society, Honolulu, HI, USA, 27–28 January 2019; pp. 529–530. [Google Scholar] [CrossRef]
- Lesota, O.; Melchiorre, A.; Rekabsaz, N.; Brandl, S.; Kowald, D.; Lex, E.; Schedl, M. Analyzing Item Popularity Bias of Music Recommender Systems: Are Different Genders Equally Affected? In Proceedings of the Fifteenth ACM Conference on Recommender Systems, Amsterdam, The Netherlands, 27 September–1 October 2021; pp. 601–606. [Google Scholar] [CrossRef]
- Porcaro, L.; Castillo, C.; Gómez, E. Diversity by Design in Music Recommender Systems. Trans. Int. Soc. Music. Inf. Retr. 2021, 4, 114–126. [Google Scholar] [CrossRef]
- Thompson, B.; Griffiths, T.L. Human biases limit cumulative innovation. Proc. R. Soc. B Biol. Sci. 2021, 288, 20202752. [Google Scholar] [CrossRef] [PubMed]
- Kahneman, D.; Slovic, S.P.; Slovic, P.; Tversky, A. Judgment Under Uncertainty: Heuristics and Biases; Cambridge University Press: Cambridge, UK, 1982. [Google Scholar]
- Gilovich, T.; Griffin, D.; Kahneman, D. Heuristics and Biases: The Psychology of Intuitive Judgment; Cambridge University Press: Cambridge, UK, 2002. [Google Scholar]
- Bikhchandani, S.; Hirshleifer, D.; Welch, I. Learning from the behavior of others: Conformity, fads, and informational cascades. J. Econ. Perspect. 1998, 12, 151–170. [Google Scholar] [CrossRef]
- Bikhchandani, S.; Sharma, S. Herd behavior in financial markets: A review. IMF Work. Pap. 2000, 47, 279–310. [Google Scholar] [CrossRef]
- Choijil, E.; Méndez, C.E.; Wong, W.K.; Vieito, J.P.; Batmunkh, M.U. Thirty years of herd behavior in financial markets: A bibliometric analysis. Res. Int. Bus. Financ. 2022, 59, 101506. [Google Scholar] [CrossRef]
- Rook, L. An Economic Psychological Approach to Herd Behavior. J. Econ. Issues 2006, 40, 75–95. [Google Scholar] [CrossRef]
- Calvó-Armengol, A.; Jackson, M.O. Peer Pressure. J. Eur. Econ. Assoc. 2010, 8, 62–89. [Google Scholar] [CrossRef]
- Bornstein, R.F.; Craver-Lemley, C. Mere exposure effect. In Cognitive Illusions; Psychology Press: Hove, UK, 2016; pp. 266–285. [Google Scholar]
- Montoya, R.M.; Horton, R.S.; Vevea, J.L.; Citkowicz, M.; Lauber, E.A. A re-examination of the mere exposure effect: The influence of repeated exposure on recognition, familiarity, and liking. Psychol. Bull. 2017, 143, 459–498. [Google Scholar] [CrossRef]
- Chen, Y.F. Herd behavior in purchasing books online. Comput. Hum. Behav. 2008, 24, 1977–1992. [Google Scholar] [CrossRef]
- Hanson, W.A.; Putler, D.S. Hits and misses: Herd behavior and online product popularity. Mark. Lett. 1996, 7, 297–305. [Google Scholar] [CrossRef]
- Zhu, F.; Zhang, X.M. Impact of Online Consumer Reviews on Sales: The Moderating Role of Product and Consumer Characteristics. J. Mark. 2010, 74, 133–148. [Google Scholar] [CrossRef]
- Dholakia, U.M.; Basuroy, S.; Soltysinski, K. Auction or agent (or both)? A study of moderators of the herding bias in digital auctions. Int. J. Res. Mark. 2002, 19, 115–130. [Google Scholar] [CrossRef]
- Griskevicius, V.; Goldstein, N.J.; Mortensen, C.R.; Sundie, J.M.; Cialdini, R.B.; Kenrick, D.T. Fear and Loving in Las Vegas: Evolution, Emotion, and Persuasion. J. Mark. Res. 2009, 46, 384–395. [Google Scholar] [CrossRef] [PubMed]
- Nolan, J.M.; Schultz, P.W.; Cialdini, R.B.; Goldstein, N.J.; Griskevicius, V. Normative social influence is underdetected. Personal. Soc. Psychol. Bull. 2008, 34, 913–923. [Google Scholar] [CrossRef] [PubMed]
- Bearden, W.O.; Etzel, M.J. Reference group influence on product and brand purchase decisions. J. Consum. Res. 1982, 9, 183–194. [Google Scholar] [CrossRef]
- Letheren, K.; Russell-Bennett, R.; Whittaker, L. Black, white or grey magic? Our future with artificial intelligence. J. Mark. Manag. 2020, 36, 216–232. [Google Scholar] [CrossRef]
- Biswas, M.; Murray, J. The effects of cognitive biases and imperfectness in long-term robot-human interactions: Case studies using five cognitive biases on three robots. Cogn. Syst. Res. 2017, 43, 266–290. [Google Scholar] [CrossRef]
- Sengupta, E.; Garg, D.; Choudhury, T.; Aggarwal, A. Techniques to Elimenate Human Bias in Machine Learning. In Proceedings of the 2018 International Conference on System Modeling & Advancement in Research Trends (SMART), Moradabad, India, 23–24 November 2018; pp. 226–230. [Google Scholar] [CrossRef]
- Abdollahpouri, H.; Burke, R.; Mobasher, B. Controlling Popularity Bias in Learning-to-Rank Recommendation. In Proceedings of the Eleventh ACM Conference on Recommender Systems, Como, Italy, 27–31 August 2017; pp. 42–46. [Google Scholar] [CrossRef]
- Cañamares, R.; Castells, P. Should I follow the crowd? A probabilistic analysis of the effectiveness of popularity in recommender systems. In Proceedings of the 41st International ACM SIGIR Conference on Research & Development in Information Retrieval, Ann Arbor, MI, USA, 8–12 July 2018; pp. 415–424. [Google Scholar]
- Weiss, G.M.; McCarthy, K.; Zabar, B. Cost-sensitive learning vs. sampling: Which is best for handling unbalanced classes with unequal error costs? Dmin 2007, 7, 24. [Google Scholar]
- Elkan, C. The foundations of cost-sensitive learning. In Proceedings of the International Joint Conference on Artificial Intelligence, Seattle, WA, USA, 4–10 August 2001; Lawrence Erlbaum Associates Ltd.: Mahwah, NJ, USA, 2001; Volume 17, pp. 973–978. [Google Scholar]
- Zhao, Z.; Chen, J.; Zhou, S.; He, X.; Cao, X.; Zhang, F.; Wu, W. Popularity Bias Is Not Always Evil: Disentangling Benign and Harmful Bias for Recommendation. arXiv 2021, arXiv:2109.07946. [Google Scholar] [CrossRef]
- Ciampaglia, G.L.; Nematzadeh, A.; Menczer, F.; Flammini, A. How algorithmic popularity bias hinders or promotes quality. Sci. Rep. 2018, 8, 15951. [Google Scholar] [CrossRef]
- Anderson, A.; Maystre, L.; Anderson, I.; Mehrotra, R.; Lalmas, M. Algorithmic effects on the diversity of consumption on spotify. In Proceedings of the Web Conference, Taipei, Taiwan, 20–24 April 2020; pp. 2155–2165. [Google Scholar]
- Abdollahpouri, H.; Mansoury, M.; Burke, R.; Mobasher, B. The Connection Between Popularity Bias, Calibration, and Fairness in Recommendation. In Proceedings of the Fourteenth ACM Conference on Recommender Systems, Virtual, 22–26 September 2020; pp. 726–731. [Google Scholar] [CrossRef]
- Brynjolfsson, E.; Hu, Y.J.; Smith, M.D. From niches to riches: Anatomy of the long tail. Sloan Manag. Rev. 2006, 47, 67–71. [Google Scholar]
- Schelenz, L. Diversity-aware Recommendations for Social Justice? Exploring User Diversity and Fairness in Recommender Systems. In Proceedings of the UMAP 2021—Adjunct Publication of the 29th ACM Conference on User Modeling, Adaptation and Personalization, Utrecht, The Netherlands, 21–25 June 2021; pp. 404–410. [Google Scholar] [CrossRef]
- Ferraro, A.; Serra, X.; Bauer, C. Break the Loop: Gender Imbalance in Music Recommenders. In Proceedings of the 2021 Conference on Human Information Interaction and Retrieval, Canberra, Australia, 14–19 March 2021; pp. 249–254. [Google Scholar] [CrossRef]
- Shakespeare, D.; Porcaro, L.; Gómez, E.; Castillo, C. Exploring artist gender bias in music recommendation. arXiv 2020, arXiv:2009.01715. [Google Scholar]
- Park, M.; Weber, I.; Naaman, M.; Vieweg, S. Understanding musical diversity via online social media. In Proceedings of the International AAAI Conference on Web and Social Media, Oxford, UK, 26–29 May 2015; Volume 9, pp. 308–317. [Google Scholar]
- Beel, J.; Langer, S.; Nürnberger, A.; Genzmehr, M. The Impact of Demographics (Age and Gender) and Other User-Characteristics on Evaluating Recommender Systems. In Research and Advanced Technology for Digital Libraries; Lecture Notes in Computer Science; Hutchison, D., Kanade, T., Kittler, J., Kleinberg, J.M., Mattern, F., Mitchell, J.C., Naor, M., Nierstrasz, O., Pandu Rangan, C., Steffen, B., et al., Eds.; Springer: Berlin/Heidelberg, Germany, 2013; Volume 8092, pp. 396–400. [Google Scholar] [CrossRef]
- Mansoury, M.; Mobasher, B.; Burke, R.; Pechenizkiy, M. Bias disparity in collaborative recommendation: Algorithmic evaluation and comparison. arXiv 2019, arXiv:1908.00831. [Google Scholar]
- Abdollahpouri, H.; Mansoury, M.; Burke, R.; Mobasher, B.; Malthouse, E. User-centered Evaluation of Popularity Bias in Recommender Systems. In Proceedings of the 29th ACM Conference on User Modeling, Adaptation and Personalization, Utrecht, The Netherlands, 21–25 June 2021; pp. 119–129. [Google Scholar] [CrossRef]
- Kowald, D.; Schedl, M.; Lex, E. The Unfairness of Popularity Bias in Music Recommendation: A Reproducibility Study. In Advances in Information Retrieval; Lecture Notes in Computer Science; Jose, J.M., Yilmaz, E., Magalhães, J., Castells, P., Ferro, N., Silva, M.J., Martins, F., Eds.; Springer International Publishing: Cham, Switzerland, 2020; Volume 12036, pp. 35–42. [Google Scholar] [CrossRef]
- Lin, K.; Sonboli, N.; Mobasher, B.; Burke, R. Calibration in Collaborative Filtering Recommender Systems: A User-Centered Analysis. In Proceedings of the 31st ACM Conference on Hypertext and Social Media, Virtual Event, 13–15 July 2020; pp. 197–206. [Google Scholar] [CrossRef]
- Tsintzou, V.; Pitoura, E.; Tsaparas, P. Bias disparity in recommendation systems. arXiv 2018, arXiv:1811.01461. [Google Scholar]
- Yang, J. Effects of popularity-based news recommendations (“most-viewed”) on users’ exposure to online news. Media Psychol. 2016, 19, 243–271. [Google Scholar] [CrossRef]
- Lunardi, G.M.; Machado, G.M.; Maran, V.; de Oliveira, J.P.M. A metric for Filter Bubble measurement in recommender algorithms considering the news domain. Appl. Soft Comput. 2020, 97, 106771. [Google Scholar] [CrossRef]
- Nguyen, T.T.; Hui, P.M.; Harper, F.M.; Terveen, L.; Konstan, J.A. Exploring the filter bubble: The effect of using recommender systems on content diversity. In Proceedings of the 23rd International Conference on World Wide Web, Seoul, Republic of Korea, 7–11 April 2014; pp. 677–686. [Google Scholar]
- Akar, E.; Hakyemez, T.C.; Bozanta, A.; Akar, S. What Sells on the Fake News Market? Examining the Impact of Contextualized Rhetorical Features on the Popularity of Fake Tweets. Online J. Commun. Media Technol. 2021, 12, e202201. [Google Scholar] [CrossRef]
- Smyth, B.; McClave, P. Similarity vs. diversity. In Proceedings of the International Conference on Case-Based Reasoning, Vancouver, BC, Canada, 30 July–2 August 2001; Springer: Berlin/Heidelberg, Germany, 2001; pp. 347–361. [Google Scholar]
- Castells, P.; Hurley, N.; Vargas, S. Novelty and diversity in recommender systems. In Recommender Systems Handbook; Springer: Berlin/Heidelberg, Germany, 2022; pp. 603–646. [Google Scholar]
- Adomavicius, G.; Kwon, Y. Improving aggregate recommendation diversity using ranking-based techniques. IEEE Trans. Knowl. Data Eng. 2012, 24, 896–911. [Google Scholar] [CrossRef]
- Kaminskas, M.; Bridge, D. Diversity, Serendipity, Novelty, and Coverage: A Survey and Empirical Analysis of Beyond-Accuracy Objectives in Recommender Systems. Acm Trans. Interact. Intell. Syst. 2017, 7, 1–42. [Google Scholar] [CrossRef]
- Bellogín, A.; Cantador, I.; Castells, P. A comparative study of heterogeneous item recommendations in social systems. Inf. Sci. 2013, 221, 142–169. [Google Scholar] [CrossRef]
- Bellogín, A.; Cantador, I.; Díez, F.; Castells, P.; Chavarriaga, E. An empirical comparison of social, collaborative filtering, and hybrid recommenders. Acm Trans. Intell. Syst. Technol. (TIST) 2013, 4, 1–29. [Google Scholar] [CrossRef]
- Herlocker, J.L.; Konstan, J.A.; Terveen, L.G.; Riedl, J.T. Evaluating collaborative filtering recommender systems. Acm Trans. Inf. Syst. (TOIS) 2004, 22, 5–53. [Google Scholar] [CrossRef]
- Vargas, S.; Castells, P. Improving sales diversity by recommending users to items. In Proceedings of the 8th ACM Conference on Recommender Systems, Silicon Valley, CA, USA, 6–10 October 2014; pp. 145–152. [Google Scholar]
- Szlávik, Z.; Kowalczyk, W.; Schut, M. Diversity measurement of recommender systems under different user choice models. In Proceedings of the International AAAI Conference on Web and Social Media, Barcelona, Spain, 17–21 July 2011; Volume 5, pp. 369–376. [Google Scholar]
- Kotkov, D.; Wang, S.; Veijalainen, J. A survey of serendipity in recommender systems. Knowl.-Based Syst. 2016, 111, 180–192. [Google Scholar] [CrossRef]
- Ziarani, R.J.; Ravanmehr, R. Serendipity in Recommender Systems: A Systematic Literature Review. J. Comput. Sci. Technol. 2021, 36, 375–396. [Google Scholar] [CrossRef]
- Chantanurak, N.; Punyabukkana, P.; Suchato, A. Video recommender system using textual data: Its application on LMS and serendipity evaluation. In Proceedings of the 2016 IEEE International Conference on Teaching, Assessment, and Learning for Engineering (TALE), Bangkok, Thailand, 7–9 December 2016; IEEE: Piscataway, NJ, USA, 2016; pp. 289–295. [Google Scholar]
- De Gemmis, M.; Lops, P.; Semeraro, G.; Musto, C. An investigation on the serendipity problem in recommender systems. Inf. Process. Manag. 2015, 51, 695–717. [Google Scholar] [CrossRef]
- Steck, H. Item popularity and recommendation accuracy. In Proceedings of the Fifth ACM Conference on Recommender Systems—RecSys ’11, Chicago, IL, USA, 23–27 October 2011; p. 125. [Google Scholar] [CrossRef]
- Yu, C.; Lakshmanan, L.; Amer-Yahia, S. It takes variety to make a world: Diversification in recommender systems. In Proceedings of the 12th International Conference on Extending Database Technology: Advances in Database Technology, Saint-Petersburg, Russia, 24–26 March 2009; pp. 368–378. [Google Scholar]
- Deselaers, T.; Gass, T.; Dreuw, P.; Ney, H. Jointly optimising relevance and diversity in image retrieval. In Proceedings of the ACM International Conference on Image and Video Retrieval, Santorini Island, Greece, 8–10 July 2009; pp. 1–8. [Google Scholar]
- Carbonell, J.; Goldstein, J. The use of MMR, diversity-based reranking for reordering documents and producing summaries. In Proceedings of the 21st Annual International ACM SIGIR Conference on Research and Development in Information Retrieval, Melbourne, Australia, 24–28 August 1998; pp. 335–336. [Google Scholar]
- Ito, H.; Yoshikawa, T.; Furuhashi, T. A study on improvement of serendipity in item-based collaborative filtering using association rule. In Proceedings of the 2014 IEEE International Conference on Fuzzy Systems (FUZZ-IEEE), Beijing, China, 6–11 July 2014; IEEE: Piscataway, NJ, USA, 2014; pp. 977–981. [Google Scholar]
- Zhang, Y.C.; Séaghdha, D.Ó.; Quercia, D.; Jambor, T. Auralist: Introducing serendipity into music recommendation. In Proceedings of the Fifth ACM International Conference on Web Search and Data Mining—WSDM ’12, Seattle, WA, USA, 8–12 February 2012; p. 13. [Google Scholar] [CrossRef]
- Said, A.; Fields, B.; Jain, B.J.; Albayrak, S. User-centric evaluation of a k-furthest neighbor collaborative filtering recommender algorithm. In Proceedings of the 2013 Conference on Computer Supported Cooperative Work, San Antonio, TX, USA, 23–27 February 2013; pp. 1399–1408. [Google Scholar]
- Nakatsuji, M.; Fujiwara, Y.; Tanaka, A.; Uchiyama, T.; Fujimura, K.; Ishida, T. Classical music for rock fans? Novel recommendations for expanding user interests. In Proceedings of the 19th ACM International Conference on Information and Knowledge Management, Toronto, ON, Canada, 26–30 October 2010; pp. 949–958. [Google Scholar]
- Vargas, S.; Castells, P. Rank and relevance in novelty and diversity metrics for recommender systems. In Proceedings of the Fifth ACM Conference on Recommender Systems—RecSys ’11, Chicago, IL, USA, 23–27 October 2011; p. 109. [Google Scholar] [CrossRef]
- Zhang, M.; Hurley, N. Novel item recommendation by user profile partitioning. In Proceedings of the 2009 IEEE/WIC/ACM International Joint Conference on Web Intelligence and Intelligent Agent Technology, Milan, Italy, 15–18 September 2009; IEEE: Piscataway, NJ, USA, 2009; Volume 1, pp. 508–515. [Google Scholar]
- Kito, N.; Oku, K.; Kawagoe, K. Correlation analysis among the metadata-based similarity, acoustic-based distance, and serendipity of music. In Proceedings of the 19th International Database Engineering & Applications Symposium, Yokohama, Japan, 13–15 July 2015; pp. 198–199. [Google Scholar]
- Wang, C.D.; Deng, Z.H.; Lai, J.H.; Philip, S.Y. Serendipitous recommendation in e-commerce using innovator-based collaborative filtering. IEEE Trans. Cybern. 2018, 49, 2678–2692. [Google Scholar] [CrossRef]
- Kawamae, N. Serendipitous recommendations via innovators. In Proceedings of the 33rd International ACM SIGIR Conference on Research and Development in Information Retrieval, Geneva, Switzerland, 19–23 July 2010; pp. 218–225. [Google Scholar]
- Deng, Z.H.; Huang, L.; Wang, C.D.; Lai, J.H.; Philip, S.Y. Deepcf: A unified framework of representation learning and matching function learning in recommender system. In Proceedings of the AAAI Conference on Artificial Intelligence, Honolulu, HI, USA, 27 January–1 February 2019; Volume 33, pp. 61–68. [Google Scholar]
- He, X.; Liao, L.; Zhang, H.; Nie, L.; Hu, X.; Chua, T.S. Neural collaborative filtering. In Proceedings of the 26th International Conference on World Wide Web, Perth, Australia, 3–7 April 2017; pp. 173–182. [Google Scholar]
- Borges, R.; Stefanidis, K. On mitigating popularity bias in recommendations via variational autoencoders. In Proceedings of the 36th Annual ACM Symposium on Applied Computing, Virtual Event, 22–26 March 2021; pp. 1383–1389. [Google Scholar] [CrossRef]
- Lu, W.; Chung, F.L. Computational Creativity Based Video Recommendation. In Proceedings of the 39th International ACM SIGIR Conference on Research and Development in Information Retrieval, Pisa, Italy, 17–21 July 2016; pp. 793–796. [Google Scholar] [CrossRef]
- Sayahi, S.; Ghorbel, L.; Zayani, C.; Champagnat, R. Towards Serendipitous Learning Resource Recommendation. In Proceedings of the 15th International Conference on Computer Supported Education—Volume 1: EKM, Prague, Czech Republic, 21–23 April 2023; INSTICC, SciTePress: Setúbal, Portugal, 2023; pp. 454–462. [Google Scholar] [CrossRef]
- Wei, T.; Feng, F.; Chen, J.; Wu, Z.; Yi, J.; He, X. Model-Agnostic Counterfactual Reasoning for Eliminating Popularity Bias in Recommender System. In Proceedings of the 27th ACM SIGKDD Conference on Knowledge Discovery & Data Mining, Singapore, 14–18 August 2021; pp. 1791–1800. [Google Scholar] [CrossRef]
- Castro, L.; Toro, M.A. Cumulative cultural evolution: The role of teaching. J. Theor. Biol. 2014, 347, 74–83. [Google Scholar] [CrossRef] [PubMed]
- Salganik, M.J.; Dodds, P.S.; Watts, D.J. Experimental Study of Inequality and Unpredictability in an Artificial Cultural Market. Science 2006, 311, 854–856. [Google Scholar] [CrossRef]
- Fraiberger, S.P.; Sinatra, R.; Resch, M.; Riedl, C.; Barabási, A.L. Quantifying reputation and success in art. Science 2018, 362, 825–829. [Google Scholar] [CrossRef] [PubMed]
- Powell, D.; Yu, J.; DeWolf, M.; Holyoak, K.J. The love of large numbers: A popularity bias in consumer choice. Psychol. Sci. 2017, 28, 1432–1442. [Google Scholar] [CrossRef] [PubMed]
- Heck, D.W.; Seiling, L.; Bröder, A. The Love of Large Numbers Revisited: A Coherence Model of the Popularity Bias. Cognition 2020, 195, 104069. [Google Scholar] [CrossRef] [PubMed]
- Rescher, N. Fairness; Routledge: London, UK, 2017. [Google Scholar]
- Geyik, S.C.; Ambler, S.; Kenthapadi, K. Fairness-Aware Ranking in Search & Recommendation Systems with Application to LinkedIn Talent Search. In Proceedings of the 25th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, Anchorage, AK, USA, 4–8 August 2019; pp. 2221–2231. [Google Scholar] [CrossRef]
- Vall, A.; Quadrana, M.; Schedl, M.; Widmer, G. Order, context and popularity bias in next-song recommendations. Int. J. Multimed. Inf. Retr. 2019, 8, 101–113. [Google Scholar] [CrossRef]
- Xiao, L.; Min, Z.; Yongfeng, Z.; Zhaoquan, G.; Yiqun, L.; Shaoping, M. Fairness-aware group recommendation with pareto-efficiency. In Proceedings of the Eleventh ACM Conference on Recommender Systems, Como, Italy, 27–31 August 2017; pp. 107–115. [Google Scholar]
- Htun, N.N.; Lecluse, E.; Verbert, K. Perception of fairness in group music recommender systems. In Proceedings of the 26th International Conference on Intelligent User Interfaces, College Station, TX, USA, 14–17 April 2021; pp. 302–306. [Google Scholar]
- Yalcin, E.; Bilge, A. Investigating and counteracting popularity bias in group recommendations. Inf. Process. Manag. 2021, 58, 102608. [Google Scholar] [CrossRef]
- Deshpande, M.; Karypis, G. Item-based top-N recommendation algorithms. Acm Trans. Inf. Syst. 2004, 22, 143–177. [Google Scholar] [CrossRef]
- Karypis, G. Evaluation of Item-Based Top-N Recommendation Algorithms. In Proceedings of the Tenth International Conference on Information and Knowledge Management—CIKM’01, Atlanta, GA, USA, 5–10 November 2001; p. 247. [Google Scholar] [CrossRef]
- Zhao, S.; Zhou, M.X.; Yuan, Q.; Zhang, X.; Zheng, W.; Fu, R. Who is talking about what: Social map-based recommendation for content-centric social websites. In Proceedings of the Fourth ACM Conference on Recommender Systems, Barcelona, Spain, 26–30 September 2010; pp. 143–150. [Google Scholar]
- Valcarce, D.; Bellogín, A.; Parapar, J.; Castells, P. On the robustness and discriminative power of information retrieval metrics for top-N recommendation. In Proceedings of the 12th ACM Conference on Recommender Systems, Vancouver, BC, Canada, 2–7 October 2018; pp. 260–268. [Google Scholar] [CrossRef]
- Cremonesi, P.; Koren, Y.; Turrin, R. Performance of recommender algorithms on top-N recommendation tasks. In Proceedings of the Fourth ACM Conference on Recommender Systems—RecSys ’10, Barcelona, Spain, 26–30 September 2010; p. 39. [Google Scholar] [CrossRef]
- Tong, H.; Faloutsos, C.; Pan, J.Y. Fast random walk with restart and its applications. In Proceedings of the Sixth International Conference on Data Mining (ICDM’06), Hong Kong, China, 18–22 December 2006; IEEE: Piscataway, NJ, USA, 2006; pp. 613–622. [Google Scholar]
- Celma, Ò.; Herrera, P. A new approach to evaluating novel recommendations. In Proceedings of the 2008 ACM Conference on Recommender Systems, Lausanne, Switzerland, 23–25 October 2008; pp. 179–186. [Google Scholar]
- Merton, R.K.; Barber, E. The travels and adventures of serendipity. In The Travels and Adventures of Serendipity; Princeton University Press: Princeton, NJ, USA, 2011. [Google Scholar]
- Leong, T.W.; Vetere, F.; Howard, S. The serendipity shuffle. In Proceedings of the 17th Australia Conference on Computer-Human Interaction: Citizens Online: Considerations for Today and the Future, Canberra, Australia, 21–25 November 2005; pp. 1–4. [Google Scholar]
- Kotkov, D.; Konstan, J.A.; Zhao, Q.; Veijalainen, J. Investigating serendipity in recommender systems based on real user feedback. In Proceedings of the 33rd Annual ACM Symposium on Applied Computing, Pau, France, 9–13 April 2018; pp. 1341–1350. [Google Scholar] [CrossRef]
- Maccatrozzo, V.; Terstall, M.; Aroyo, L.; Schreiber, G. SIRUP: Serendipity In Recommendations via User Perceptions. In Proceedings of the 22nd International Conference on Intelligent User Interfaces, Limassol, Cyprus, 13–16 March 2017; pp. 35–44. [Google Scholar] [CrossRef]
- Sarkar, P.; Chakrabarti, A. Studying engineering design creativity-developing a common definition and associated measures. In Proceedings of the NSF Workshop on Studying Design Creativity, Aix-en-Provence, France, 10–11 March 2008; p. 20. [Google Scholar]
- Carnovalini, F.; Rodà, A. Computational Creativity and Music Generation Systems: An Introduction to the State of the Art. Front. Artif. Intell. 2020, 3, 14. [Google Scholar] [CrossRef] [PubMed]
- Jordanous, A. Four PPPPerspectives on computational creativity in theory and in practice. Connect. Sci. 2016, 28, 194–216. [Google Scholar] [CrossRef]
- Hodson, J. The Creative Machine. In Proceedings of the ICCC, Atlanta, GA, USA, 19–23 June 2017; pp. 143–150. [Google Scholar]
- Wiggins, G.A. Computational Creativity and Consciousness: Framing, Fiction and Fraud. In Proceedings of the 12th International Conference on Computational Creativity (ICCC ’21), México City, Mexico, 14–18 September 2021; p. 10. [Google Scholar]
- Jordanous, A. Evaluating Evaluation: Assessing Progress and Practices in Computational Creativity Research. In Computational Creativity: The Philosophy and Engineering of Autonomously Creative Systems; Veale, T., Cardoso, F.A., Eds.; Computational Synthesis and Creative Systems; Springer International Publishing: Cham, Switzerland, 2019; pp. 211–236. [Google Scholar] [CrossRef]
- Boden, M.A. The Creative Mind: Myths and Mechanisms; Routledge: London, UK, 2004. [Google Scholar]
- Wiggins, G.A. A Framework for Description, Analysis and Comparison of Creative Systems. In Computational Creativity: The Philosophy and Engineering of Autonomously Creative Systems; Veale, T., Cardoso, F.A., Eds.; Computational Synthesis and Creative Systems; Springer International Publishing: Cham, Switzerland, 2019; pp. 21–47. [Google Scholar] [CrossRef]
- Ferro, N.; Fuhr, N.; Grefenstette, G.; Konstan, J.A.; Castells, P.; Daly, E.M.; Declerck, T.; Ekstrand, M.D.; Geyer, W.; Gonzalo, J.; et al. The Dagstuhl Perspectives Workshop on Performance Modeling and Prediction. Acm SIGIR Forum 2018, 52, 91–101. [Google Scholar] [CrossRef]
- Colton, S.; Wiggins, G.A. Computational creativity: The final frontier? In Proceedings of the ECAI, Montpellier, France, 27–31 August 2012; Volume 2012, pp. 21–26. [Google Scholar]
- Jordanous, A. A standardised procedure for evaluating creative systems: Computational creativity evaluation based on what it is to be creative. Cogn. Comput. 2012, 4, 246–279. [Google Scholar] [CrossRef]
Main Topic | Subtopic/Method | References
---|---|---
Human Popularity Bias | psychological heuristic | [20,21,22]
 | herd behavior | [5,23,24,25,26,27]
 | mere exposure effect | [28,29]
 | powerful persuasion mechanism | [30,31,32,33,34,35,36]
Algorithmic Popularity Bias | ranking | [37,38,39]
 | collaborative filtering | [9,40,41]
 | benign or harmful popularity bias | [42,43,44,45,46]
Fairness | related to objects/items to be recommended | [16,44,45,47,48,49,50,51]
 | related to subjects/users that receive the recommendation | [18,49,52,53,54,55,56,57,58]
 | related to news/contents of the recommendation | [59,60,61,62]
Metrics | intra-list diversity | [63]
 | novelty | [64]
 | aggregate diversity | [65]
 | coverage | [66,67,68,69]
 | Gini index | [70]
 | Shannon entropy | [71]
 | serendipity | [72,73,74,75]
 | conformity | [44]
 | popularity-stratified recall | [76]
Mitigation Algorithms | re-ranking | [77,78,79,80,81]
 | random approach | [64]
 | K-furthest neighbors | [82]
 | relatedness approach | [83]
 | transposed recommendation matrix | [84]
 | item clustering approach | [85]
 | content-based | [86]
 | innovators-based approach | [87,88]
 | NN-based approach | [89,90]
 | DL-based approach | [91]
 | graph-based approach | [75]
 | tags-based approach | [92]
 | emotional analysis | [93]
 | TIDE (Time-aware DisEntangled framework) | [44]
 | causal inference graph | [94]