Hierarchical Self-Supervised Learning for Knowledge-Aware Recommendation
Abstract
1. Introduction
- We highlight the significance of the hierarchical learning mechanism in both KG and UIG for knowledge graph-aware recommendation within a joint self-supervised learning paradigm. This is crucial as it generates more valuable self-supervised signals for item and user representations, enhancing the ability to mine useful knowledge in scenarios of data sparsity.
- We present HKRec, a model that unifies the generative and contrastive learning paradigms in a hierarchical manner. HKRec captures implicit semantic information from the knowledge graph through a multi-perspective hierarchy and integrates the learned representations into the recommender system through hierarchical entity–item alignments.
- We perform extensive experiments on three real-world benchmark datasets to verify that HKRec achieves significant performance gains over recent state-of-the-art baselines.
2. Related Work
2.1. Knowledge-Aware Recommendation
2.2. Self-Supervised Learning
3. Preliminaries
- Input: a user–item interaction graph and a knowledge graph.
- Output: a predictive function that estimates how likely a user would adopt an item (a minimal sketch of such a scoring function follows).
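A common way to realize this predictive function, and the convention assumed for illustration here, is the inner product of learned user and item embeddings. The following minimal sketch uses illustrative names and dimensions rather than the paper's exact architecture.

```python
import torch

# Minimal sketch of the predictive function: score(u, i) as the inner product of
# learned user and item embeddings. Dimensions and names are illustrative only.
n_users, n_items, dim = 1000, 500, 64
user_emb = torch.nn.Embedding(n_users, dim)
item_emb = torch.nn.Embedding(n_items, dim)

def predict(user_ids: torch.Tensor, item_ids: torch.Tensor) -> torch.Tensor:
    """Estimate how likely each user would adopt the paired item."""
    return (user_emb(user_ids) * item_emb(item_ids)).sum(dim=-1)

scores = predict(torch.tensor([0, 1]), torch.tensor([10, 20]))  # higher = more likely
```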
4. Methodology
4.1. Triple-Graph Masked Autoencoder
4.1.1. Node Connection Reconstruction
4.1.2. Node Feature Reconstruction
Algorithm 1 Triple-Graph Masked Autoencoder (T-GMAE)
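The algorithm body is not reproduced here. As a rough orientation, the sketch below shows a generic masked-autoencoder formulation of the two reconstruction objectives named above, with a degree decoder for node connections and a feature decoder for masked node features; every module name and hyperparameter in it is an assumption rather than the paper's exact design.

```python
import torch
import torch.nn.functional as F

# Generic sketch of the two T-GMAE reconstruction objectives (NOT the paper's
# exact formulation): (1) masked node-feature reconstruction and (2) node
# connection reconstruction via a degree decoder. Encoder/decoders are assumed.
def tgmae_losses(node_feat, node_degree, encoder, feat_decoder, degree_decoder,
                 mask_ratio=0.5):
    n = node_feat.size(0)
    mask = torch.rand(n) < mask_ratio          # randomly choose nodes to mask
    corrupted = node_feat.clone()
    corrupted[mask] = 0.0                      # replace masked features with a zero token

    h = encoder(corrupted)                     # shared graph encoder (stand-in below)
    feat_loss = F.mse_loss(feat_decoder(h[mask]), node_feat[mask])      # rebuild masked features
    deg_loss = F.mse_loss(degree_decoder(h).squeeze(-1), node_degree)   # rebuild node degrees
    return feat_loss, deg_loss

# Toy usage with linear layers standing in for the graph encoder/decoders.
enc, fdec, ddec = torch.nn.Linear(64, 32), torch.nn.Linear(32, 64), torch.nn.Linear(32, 1)
x, deg = torch.randn(100, 64), torch.randint(1, 10, (100,)).float()
feat_loss, deg_loss = tgmae_losses(x, deg, enc, fdec, ddec)
```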
4.2. Parallel Cross-View Contrastive Learning
4.2.1. Graph Augmentations
4.2.2. Contrastive Learning
4.2.3. Joint Learning Strategy
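The specific augmentations, contrastive objective, and joint weighting are defined in the paper; the sketch below shows only a generic InfoNCE-style cross-view loss and a weighted joint objective, with the temperature and loss weights assumed.

```python
import torch
import torch.nn.functional as F

# Generic cross-view InfoNCE loss between two augmented views of the same nodes,
# and a weighted joint objective combining recommendation, generative (T-GMAE),
# and contrastive terms. Temperature and weights are assumptions.
def info_nce(z1: torch.Tensor, z2: torch.Tensor, temperature: float = 0.2) -> torch.Tensor:
    z1, z2 = F.normalize(z1, dim=-1), F.normalize(z2, dim=-1)
    logits = z1 @ z2.t() / temperature         # similarities between all cross-view pairs
    labels = torch.arange(z1.size(0))          # positives lie on the diagonal
    return F.cross_entropy(logits, labels)

def joint_loss(loss_rec, loss_gen, loss_cl, lambda_gen=0.1, lambda_cl=0.1):
    # Joint learning strategy as a weighted sum of the three objectives (weights assumed).
    return loss_rec + lambda_gen * loss_gen + lambda_cl * loss_cl

# Toy usage: two augmented views of 100 node embeddings.
view_a, view_b = torch.randn(100, 32), torch.randn(100, 32)
cl_loss = info_nce(view_a, view_b)
```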
5. Experiments
- RQ1: How does our model’s performance compare to state-of-the-art methods?
- RQ2: What is the effectiveness of the key components in our model?
- RQ3: How effective is our model when tackling cold start and long-tail issues?
- RQ4: Does the self-supervised paradigm of our model lead to improved item representations?
5.1. Experimental Settings
5.1.1. Datasets
- Last-FM: Last-FM is a dataset commonly used in the field of music recommender systems that collects user music listening history and tagging data from the Last.fm platform.
- MIND: The MIND dataset is collected from the Microsoft News platform for news recommendation tasks, with its knowledge graph built from Wikidata entities. It contains a considerable amount of user browsing behavior data and news topic information.
- Alibaba-iFashion: The Alibaba-iFashion dataset comprises fashion outfits with various fashion items, categorized according to a fashion taxonomy, collected from the Alibaba-iFashion online shopping platform.
5.1.2. Evaluation Metrics
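The experiments report Recall@20 and NDCG@20 (Section 5.2). For reference, a minimal per-user implementation of the two metrics is sketched below; the ranked list and ground-truth set are illustrative.

```python
import numpy as np

# Per-user Recall@K and NDCG@K as used in Section 5.2 (K = 20 there).
def recall_at_k(ranked_items, relevant, k):
    hits = len(set(ranked_items[:k]) & relevant)
    return hits / len(relevant) if relevant else 0.0

def ndcg_at_k(ranked_items, relevant, k):
    dcg = sum(1.0 / np.log2(rank + 2)
              for rank, item in enumerate(ranked_items[:k]) if item in relevant)
    idcg = sum(1.0 / np.log2(rank + 2) for rank in range(min(len(relevant), k)))
    return dcg / idcg if idcg > 0 else 0.0

ranked = [5, 3, 9, 1, 7]      # model's top-5 ranking for one user (illustrative)
truth = {3, 7, 8}             # items the user actually interacted with
print(recall_at_k(ranked, truth, 5), ndcg_at_k(ranked, truth, 5))
```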
5.1.3. Baseline Models
- BPR [56]: A widely used recommendation model that ranks items with a Bayesian pairwise objective over implicit feedback to generate user preference predictions (a minimal sketch of this pairwise loss is given after this list).
- LightGCN [41]: A simplified collaborative filtering method that integrates GCN.
- CKE [11]: A typical method that integrates collaborative filtering with knowledge graph feature learning, incorporating three key components to learn representations including structural, textual, and visual information.
- KGAT [19]: This model first utilizes the TransR model for knowledge graph embedding learning and then employs an attentive GNN for information propagation and aggregation.
- KGIN [23]: KGIN explores the relational modeling of user intents and provides explainable semantics captured from knowledge graphs for recommendation tasks.
- SGL [25]: This method introduces contrastive learning into recommendation by generating multiple views through augmentations such as node dropout, edge dropout, and random walk.
- KGCL [27]: This model generates auxiliary self-supervised signals through data augmentation strategies to construct contrastive views on both KG and UIG.
- KGRec [31]: This method generates rational scores for knowledge graph triplets and designs a self-supervised learning framework based on the scores.
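As referenced in the BPR entry above, a minimal sketch of the Bayesian personalized ranking loss that BPR and many of the graph-based baselines optimize: each observed (positive) interaction should score higher than a sampled unobserved (negative) one. The score values below are illustrative.

```python
import torch
import torch.nn.functional as F

# Minimal BPR pairwise ranking loss (illustrative scores, not from the paper).
def bpr_loss(pos_scores: torch.Tensor, neg_scores: torch.Tensor) -> torch.Tensor:
    return -F.logsigmoid(pos_scores - neg_scores).mean()

pos = torch.tensor([2.3, 1.1, 0.7])    # scores of observed user-item interactions
neg = torch.tensor([0.5, 1.4, -0.2])   # scores of sampled negative items
loss = bpr_loss(pos, neg)
```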
5.2. Performance Comparison (RQ1)
- The proposed HKRec consistently outperforms all baselines across all three datasets, demonstrating its superiority in recommendation performance. Relative to the baselines, Recall@20 and NDCG@20 improve by 2.2–24.95% and 3.38–22.32% on Last-FM, by 7.0–23.82% and 5.7–39.66% on MIND, and by 1.76–34.73% and 1.62–35.13% on Alibaba-iFashion, where the lower bound of each range is the gain over the strongest baseline (see the worked example after this list). We attribute the effectiveness of HKRec to the following: (1) the design of T-GMAE effectively extracts semantic relatedness among items that contributes to the recommendation task; (2) the multiple contrastive cross-views help HKRec capture deeper semantic relatedness and inject entity representations into the user–item interaction modeling.
- Examining Table 1 and Table 2, we observe that when the KG contains more diverse entities and relations, HKRec, through its T-GMAE and cross-view contrastive learning mechanisms, can effectively discover more useful information from the KG. This enriches the model with semantic information for understanding and recommending items. Consequently, the model achieves the highest recall of 0.1192 on the Alibaba-iFashion dataset. In addition, compared with the strongest baseline, recall improves by 2.2%, 7.0%, and 1.76% on the three datasets, respectively. The improvement is most pronounced on the MIND dataset, which has the sparsest interactions; in such cases, the more diverse information provided by the KG has the greatest impact.
- The knowledge-aware methods outperform CF-based methods, underscoring the effective alleviation of data sparsity issues in recommender systems through the incorporation of KG.
- The GNN-based methods (i.e., KGAT, KGIN) outperform the embedding-based approach CKE on the Last-FM and Alibaba-iFashion datasets. This highlights the effectiveness of GNNs in capturing higher-order dependencies and attaining superior knowledge representations. However, this advantage is not observed on the MIND dataset. The reason lies in its relatively small knowledge graph, where entities exhibit low connection density and lack high-order dependencies; consequently, the efficacy of GNNs is limited in this scenario.
- The two methods combining the contrastive and generative paradigms (i.e., KGRec, HKRec) outperform the methods employing only a single contrastive learning paradigm (i.e., SGL, KGCL). This indicates that the generative paradigm facilitates distilling richer semantic information from the KG; leveraging it effectively is therefore crucial for enhancing recommendation performance.
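As a worked example of how the relative gains quoted above relate to Table 2: the lower bounds are the improvements over the strongest baseline, KGRec. Recomputing from the rounded table entries gives values close to, but not exactly equal to, the reported percentages, which are presumably derived from unrounded scores.

```python
# Relative Recall@20 improvement of HKRec over KGRec from the rounded Table 2 entries.
hkrec_lastfm, kgrec_lastfm = 0.0954, 0.0933
print((hkrec_lastfm - kgrec_lastfm) / kgrec_lastfm * 100)   # ~2.25%, matching the ~2.2% lower bound
```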
5.3. Ablation Study (RQ2)
- w/o ND: We remove the neighborhood-level dropout components from the contrastive learning module.
- w/o T-GMAE: We remove the T-GMAE module, which includes the degree decoder component and the node feature masking reconstruction component.
5.4. Benefits of HKRec (RQ3)
5.5. Item Embedding Visualization (RQ4)
- We find that the lightest colors, i.e., the highest similarities, are observed among items of the same topic (a sketch of how such a similarity matrix can be computed follows this list). Specifically, Figure 6a presents the result of HKRec, showing similarity ranges of [0.82, 1] for actor items, [0.95, 1] for institution items, and [0.79, 1] for sports items. This is because, in recommender systems, items of the same topic typically share similar features, exhibit similar contexts in the KG, and are often interacted with by the same users in the UIG. In contrast, as illustrated in Figure 6b, KGCL yields similarity ranges of [0.45, 1], [0.20, 1], and [0.58, 1], respectively. Overall, HKRec effectively learns the correlations among items of the same topic, placing them close to each other in the embedding space and producing high-quality item embeddings, whereas KGCL exhibits limitations in its node representation learning ability, especially for sports-related items.
- Compared with items of the same topic, the cells between items of different topics are darker, indicating lower similarity. In particular, for institution items, the similarities between institutions and actor items and between institutions and sports items are the lowest (darkest cells). This is because, in the UIG, only one or two users click on both an institution item and an actor item, or on both an institution item and a sports item, and there is no correlation between institutions and the other two item types in the KG.
- There is moderate similarity between actor and sports items, ranging from 0.55 to 0.81. This arises because some users click on both types of items in the UIG, while the correlations between the two categories in the KG further shape the observed differences in similarity. For example, with HKRec, the similarities between the actor Jennifer Lopez and the sports items World Series and Pittsburgh Steelers are 0.79 and 0.61, respectively. As shown in Figure 7, almost 30 users concurrently click news related to Jennifer Lopez and the World Series, and likewise for Jennifer Lopez and the Pittsburgh Steelers; such limited interactions alone yield only weak correlations for both pairs. However, the relations between these items in the KG provide additional information and strengthen the correlations. Specifically, as shown in Figure 7a, there is one three-hop path between Jennifer Lopez and the World Series, whereas no three-hop path exists between Jennifer Lopez and the Pittsburgh Steelers. In addition, as shown in Figure 7b, there are twenty-six four-hop paths between Jennifer Lopez and the World Series but only eleven between Jennifer Lopez and the Pittsburgh Steelers. Thus, the similarity between Jennifer Lopez and the World Series (0.79) is higher than that between Jennifer Lopez and the Pittsburgh Steelers (0.61).
- Distinct from HKRec, the results of KGCL show corresponding values of only 0.40 and 0.34. This could be due to the inherent limitations of single graph augmentation or single cross-view contrastive learning, which are insufficient for extracting meaningful semantic information and fail to capture user interests effectively. These experimental findings indicate that HKRec exhibits superior node representation learning capabilities compared to KGCL. The resulting high-quality item embeddings effectively capture and reflect the complex correlations between items.
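The similarity values discussed in this section are presumably cosine similarities between the learned item embeddings; the short sketch below shows how such a visualization matrix would be computed, using random embeddings as stand-ins.

```python
import torch
import torch.nn.functional as F

# Pairwise cosine-similarity matrix behind a heatmap like Figure 6 (random
# stand-in embeddings; values near 1 render as the lightest cells).
item_emb = torch.randn(12, 64)          # e.g., 12 items drawn from three topics
z = F.normalize(item_emb, dim=-1)
similarity = z @ z.t()                  # (12, 12) item-item cosine similarities
```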
6. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Acknowledgments
Conflicts of Interest
References
- Niu, Y.; Lin, R.; Xue, H. Research on learning resource recommendation based on knowledge graph and collaborative filtering. Appl. Sci. 2023, 13, 10933. [Google Scholar] [CrossRef]
- Lei, C.; Liu, Y.; Zhang, L.; Wang, G.; Tang, H.; Li, H.; Miao, C. Semi: A sequential multi-modal information transfer network for e-commerce micro-video recommendations. In Proceedings of the 27th ACM SIGKDD Conference on Knowledge Discovery & Data Mining, Singapore, 14–18 August 2021; pp. 3161–3171. [Google Scholar]
- Long, X.; Huang, C.; Xu, Y.; Xu, H.; Dai, P.; Xia, L.; Bo, L. Social recommendation with self-supervised metagraph informax network. In Proceedings of the 30th ACM International Conference on Information & Knowledge Management, Gold Coast, QLD, Australia, 1–5 November 2021; pp. 1160–1169. [Google Scholar]
- Hu, B.; Shi, C.; Zhao, W.; Yu, P. Leveraging meta-path based context for top-n recommendation with a neural co-attention model. In Proceedings of the 24th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, London, UK, 19–23 August 2018; pp. 1531–1540. [Google Scholar]
- Sarwar, B.; Karypis, G.; Konstan, J.; Riedl, J. Item-based collaborative filtering recommendation algorithms. In Proceedings of the 10th International Conference on World Wide Web, Hong Kong, China, 1–5 May 2001; pp. 285–295. [Google Scholar]
- He, X.; Liao, L.; Zhang, H.; Nie, L.; Hu, X. Neural collaborative filtering. In Proceedings of the 26th International Conference on World Wide Web, Perth, WA, Australia, 3–7 April 2017; pp. 173–182. [Google Scholar]
- Wang, Y.; Javari, A.; Balaji, J.; Shalaby, W.; Derr, T.; Cui, X. Knowledge graph-based session recommendation with session-adaptive propagation. In Proceedings of the ACM Web Conference 2024, Singapore, 13–17 May 2024; pp. 264–273. [Google Scholar]
- Wang, L.; Du, W.; Chen, Z. Multi-feature-enhanced academic paper recommendation model with knowledge graph. Appl. Sci. 2024, 14, 5022. [Google Scholar] [CrossRef]
- Ai, Q.; Azizi, V.; Chen, X.; Zhang, Y. Learning heterogeneous knowledge base embeddings for explainable recommendation. Algorithms 2018, 11, 137. [Google Scholar] [CrossRef]
- Cao, Y.; Wang, X.; He, X.; Hu, Z.; Chua, T. Unifying knowledge graph learning and recommendation: Towards a better understanding of user preferences. In Proceedings of the WWW’19: The World Wide Web Conference, San Francisco, CA, USA, 13–17 May 2019; pp. 151–161. [Google Scholar]
- Zhang, F.; Yuan, N.; Lian, D.; Xie, X.; Ma, W. Collaborative knowledge base embedding for recommender systems. In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, San Francisco, CA, USA, 13–17 August 2016; pp. 353–362. [Google Scholar]
- Bordes, A.; Usunier, N.; Garcia-Duran, A.; Weston, J.; Yakhnenko, O. Translating embeddings for modeling multi-relational data. In Proceedings of the 26th International Conference on Neural Information Processing Systems, Lake Tahoe, NV, USA, 5–10 December 2013; Volume 26, pp. 1–9. [Google Scholar]
- Lin, Y.; Liu, Z.; Sun, M.; Liu, Y.; Zhu, X. Learning entity and relation embeddings for knowledge graph completion. In Proceedings of the 29th AAAI Conference on Artificial Intelligence, Austin, TX, USA, 25–30 January 2015; pp. 2181–2187. [Google Scholar]
- Guo, Q.; Zhuang, F.; Qin, C.; Zhu, H.; Xie, X.; Xiong, H.; He, Q. A survey on knowledge graph-based recommender systems. IEEE Trans. Knowl. Data Eng. 2022, 34, 3549–3568. [Google Scholar] [CrossRef]
- Wang, H.; Zhang, F.; Wang, J. Ripplenet: Propagating user preferences on the knowledge graph for recommender systems. In Proceedings of the 27th ACM International Conference on Information and Knowledge Management, Torino, Italy, 22–26 October 2018; pp. 417–426. [Google Scholar]
- Wang, X.; Wang, D.; Xu, C.; He, X.; Cao, Y.; Chua, T. Explainable reasoning over knowledge graphs for recommendation. In Proceedings of the AAAI Conference on Artificial Intelligence, Honolulu, HI, USA, 27 January–1 February 2019; pp. 5329–5336. [Google Scholar]
- Zhao, H.; Yao, Q.; Li, J.; Song, Y.; Lee, D. Meta-graph based recommendation fusion over heterogeneous information networks. In Proceedings of the 23rd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, Halifax, NS, Canada, 13–17 August 2017; pp. 635–644. [Google Scholar]
- Wang, H.; Zhao, M.; Xie, X.; Li, W.; Guo, M. Knowledge graph convolutional networks for recommender systems. In Proceedings of the WWW’19: The World Wide Web Conference, San Francisco, CA, USA, 13–17 May 2019; pp. 3307–3313. [Google Scholar]
- Wang, X.; He, X.; Cao, Y.; Liu, M.; Chua, T.S. KGAT: Knowledge graph attention network for recommendation. In Proceedings of the 25th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, Anchorage, AK, USA, 4–8 August 2019; pp. 950–958. [Google Scholar]
- Wu, S.; Sun, F.; Zhang, W.; Xie, X. Graph neural networks in recommender systems: A survey. ACM Comput. Surv. 2022, 55, 1–37. [Google Scholar] [CrossRef]
- Zou, D.; Wei, W.; Wang, Z.; Mao, X.; Zhu, F.; Fang, R.; Chen, D. Improving knowledge-aware recommendation with multi-level interactive contrastive learning. In Proceedings of the 31st ACM International Conference on Information & Knowledge Management, Atlanta, GA, USA, 17–21 October 2022; pp. 2817–2826. [Google Scholar]
- Duan, H.; Liang, X.; Zhu, Y.; Zhu, Z.; Liu, P. Reducing noise-triplets via differentiable sampling for knowledge-enhanced recommendation with collaborative signal guidance. Neurocomputing 2023, 558, 126771. [Google Scholar] [CrossRef]
- Wang, X.; Huang, T.; Wang, D.; Yuan, Y.; Liu, Z.; He, X.; Chua, T. Learning intents behind interactions with knowledge graph for recommendation. In Proceedings of the Web Conference 2021, Ljubljana, Slovenia, 19–23 April 2021; pp. 878–887. [Google Scholar]
- Wu, L.; Lin, H.; Tan, C.; Gao, Z.; Li, S.Z. Self-supervised learning on graphs: Contrastive, generative, or predictive. IEEE Trans. Knowl. Data Eng. 2023, 35, 4216–4235. [Google Scholar] [CrossRef]
- Wu, J.; Wang, X.; Feng, F.; He, X.; Chen, L.; Lian, J.; Xie, X. Self-supervised graph learning for recommendation. In Proceedings of the 44th International ACM SIGIR Conference on Research and Development in Information Retrieval, Virtual, 11–15 July 2021; pp. 726–735. [Google Scholar]
- Jiang, Y.; Huang, C.; Huang, L. Adaptive graph contrastive learning for recommendation. In Proceedings of the 29th ACM SIGKDD Conference on Knowledge Discovery and Data Mining, Long Beach, CA, USA, 6–10 August 2023; pp. 4252–4261. [Google Scholar]
- Yang, Y.; Huang, C.; Xia, L.; Li, C. Knowledge graph contrastive learning for recommendation. In Proceedings of the 45th International ACM SIGIR Conference on Research and Development in Information Retrieval, Madrid, Spain, 11–15 July 2022; pp. 1434–1443. [Google Scholar]
- Zou, D.; Wei, W.; Zhu, F. Knowledge enhanced multi-intent transformer network for recommendation. In Proceedings of the ACM Web Conference 2024, Singapore, 13–17 May 2024; pp. 151–159. [Google Scholar]
- Chen, S.; Li, Z. Hierarchically Coupled View-Crossing Contrastive Learning for Knowledge Enhanced Recommendation. IEEE Access 2024, 12, 75532–75541. [Google Scholar] [CrossRef]
- Ma, Y.; Zhang, X.; Gao, C.; Tang, Y.; Li, L.; Zhu, R.; Yin, C. Enhancing recommendations with contrastive learning from collaborative knowledge graph. Neurocomputing 2023, 523, 103–115. [Google Scholar] [CrossRef]
- Yang, Y.; Huang, C.; Xia, L.; Huang, C. Knowledge graph self-supervised rationalization for recommendation. In Proceedings of the 29th ACM SIGKDD Conference on Knowledge Discovery and Data Mining, Long Beach, CA, USA, 6–10 August 2023; pp. 3046–3056. [Google Scholar]
- Zou, D.; Wei, W.; Mao, X.; Wang, Z.; Qiu, M.; Zhu, F.; Cao, X. Multi-level cross-view contrastive learning for knowledge-aware recommender system. In Proceedings of the 45th International ACM SIGIR Conference on Research and Development in Information Retrieval, Madrid, Spain, 11–15 July 2022; pp. 1358–1368. [Google Scholar]
- Wang, H.; Zhang, F.; Zhao, M.; Li, W.; Xie, X.; Guo, M. Multi-task feature learning for knowledge graph enhanced recommendation. In Proceedings of the WWW’19: The World Wide Web Conference, San Francisco, CA, USA, 13–17 May 2019; pp. 2000–2010. [Google Scholar]
- Shu, H.; Huang, J. Multi-task feature and structure learning for user-preference based knowledge-aware recommendation. Neurocomputing 2023, 532, 43–55. [Google Scholar] [CrossRef]
- Wang, Z.; Zhang, J.; Feng, J.; Chen, Z. Knowledge graph embedding by translating on hyperplanes. In Proceedings of the AAAI Conference on Artificial Intelligence, Québec City, QC, Canada, 27–31 July 2014; pp. 1112–1119. [Google Scholar]
- Balloccu, G.; Boratto, L.; Fenu, G.; Marras, M. Reinforcement recommendation reasoning through knowledge graphs for explanation path quality. Knowl.-Based Syst. 2023, 260, 110098. [Google Scholar] [CrossRef]
- Catherine, R.; Cohen, W. Personalized recommendations using knowledge graphs: A probabilistic logic programming approach. In Proceedings of the 10th ACM Conference on Recommender Systems, Boston, MA, USA, 15–19 September 2016; pp. 325–332. [Google Scholar]
- Yu, X.; Ren, X.; Sun, Y.; Gu, Q.; Sturt, B.; Khandelwal, U.; Norick, B.; Han, J. Personalized entity recommendation: A heterogeneous information network approach. In Proceedings of the 7th ACM International Conference on Web Search and Data Mining, New York, NY, USA, 24–28 February 2014; pp. 283–292. [Google Scholar]
- Wang, H.; Zhang, F.; Zhang, M.; Leskovec, J.; Zhao, M.; Li, W.; Wang, Z. Knowledge-aware graph neural networks with label smoothness regularization for recommender systems. In Proceedings of the 25th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, Anchorage, AK, USA, 4–8 August 2019; pp. 968–977. [Google Scholar]
- Du, Y.; Zhu, X.; Chen, L.; Zheng, B.; Gao, Y. Hakg: Hierarchy-aware knowledge gated network for recommendation. In Proceedings of the 45th International ACM SIGIR Conference on Research and Development in Information Retrieval, Madrid, Spain, 11–15 July 2022; pp. 1390–1400. [Google Scholar]
- He, X.; Deng, K.; Wang, X.; Li, Y.; Zhang, Y.; Wang, M. Lightgcn: Simplifying and powering graph convolution network for recommendation. In Proceedings of the 43rd International ACM SIGIR Conference on Research and Development in Information Retrieval, Virtual Event, 25–30 July 2020; pp. 639–648. [Google Scholar]
- Chen, T.; Kornblith, S.; Norouzi, M.; Hinton, G. A simple framework for contrastive learning of visual representations. In Proceedings of the 37th International Conference on Machine Learning, PMLR, Vienna, Austria, 12–18 July 2020; pp. 1597–1607. [Google Scholar]
- Jing, L.; Tian, Y. Self-supervised visual feature learning with deep neural networks: A survey. IEEE Trans. Pattern Anal. Mach. Intell. 2020, 43, 4037–4058. [Google Scholar] [CrossRef] [PubMed]
- Misra, I.; van der Maaten, L. Self-supervised learning of pretext-invariant representations. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA, 13–19 June 2020; pp. 6707–6717. [Google Scholar]
- Lan, Z.; Chen, M.; Goodman, S.; Gimpel, K.; Sharma, P.; Soricut, R. Albert: A lite bert for self-supervised learning of language representations. arXiv 2019, arXiv:1909.11942. [Google Scholar]
- Yang, Z.; Dai, Z.; Yang, Y.; Carbonell, J.; Salakhutdinov, R. XLNet: Generalized autoregressive pretraining for language understanding. arXiv 2019, arXiv:1906.08237. [Google Scholar]
- Xia, J.; Wu, L.; Chen, J.; Hu, B.; Li, S. Simgrace: A simple framework for graph contrastive learning without data augmentation. In Proceedings of the ACM Web Conference 2022, Virtual Event, 25–29 April 2022; pp. 1070–1079. [Google Scholar]
- You, Y.; Chen, T.; Sui, Y.; Chen, T.; Wang, Z.; Shen, Y. Graph contrastive learning with augmentations. Adv. Neural Inf. Process. Syst. 2020, 33, 5812–5823. [Google Scholar]
- Hao, B.; Zhang, J.; Yin, H.; Li, C.; Chen, H. Pre-training graph neural networks for cold-start users and items representation. In Proceedings of the 14th ACM International Conference on Web Search and Data Mining, Israel, 8–12 March 2021; pp. 265–273. [Google Scholar]
- Yao, T.; Yi, X.; Cheng, D.; Yu, F.; Chen, T. Self-supervised learning for large-scale item recommendations. In Proceedings of the 30th ACM International Conference on Information & Knowledge Management, Gold Coast, QLD, Australia, 1–5 November 2021; pp. 4321–4330. [Google Scholar]
- Devlin, J.; Chang, M.; Lee, K.; Toutanova, K. BERT: Pre-training of deep bidirectional transformers for language understanding. arXiv 2018, arXiv:1810.04805. [Google Scholar]
- He, K.; Chen, X.; Xie, S.; Li, Y.; Dollar, P.; Girshick, R. Masked autoencoders are scalable vision learners. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, New Orleans, LA, USA, 18–24 June 2022; pp. 16000–16009. [Google Scholar]
- Kipf, T.; Welling, M. Semi-supervised classification with graph convolutional networks. arXiv 2016, arXiv:1609.02907. [Google Scholar]
- Hamilton, W.; Ying, Z.; Leskovec, J. Inductive representation learning on large graphs. In Proceedings of the 31st Conference on Neural Information Processing Systems, Long Beach, CA, USA, 4–9 December 2017; pp. 1025–1035. [Google Scholar]
- Guo, K.; Hu, Y.; Sun, Y.; Qian, S.; Gao, J.; Yin, B. Hierarchical graph convolution network for traffic forecasting. In Proceedings of the AAAI Conference on Artificial Intelligence, Vancouver, BC, Canada, 2–9 February 2021; pp. 151–159. [Google Scholar]
- Rendle, S.; Freudenthaler, C.; Gantner, Z.; Schmidt-Thieme, L. BPR: Bayesian personalized ranking from implicit feedback. In Proceedings of the 25th Conference on Uncertainty in Artificial Intelligence, Montreal, QC, Canada, 18–21 June 2009; pp. 452–461. [Google Scholar]
Table 1. Statistics of the datasets.

| Dataset | | Last-FM | MIND | Alibaba-iFashion |
|---|---|---|---|---|
| User–Item graph | #Users | 23,566 | 100,000 | 114,737 |
| | #Items | 48,123 | 30,577 | 30,040 |
| | #Interactions | 3,034,796 | 2,975,319 | 1,781,093 |
| | #Density | | | |
| Knowledge graph | #Entities | 58,266 | 24,733 | 59,156 |
| | #Relations | 9 | 512 | 51 |
| | #Triplets | 464,567 | 148,568 | 279,155 |
Table 2. Overall performance comparison (Recall@20 and NDCG@20).

| Model | Last-FM Recall | Last-FM NDCG | MIND Recall | MIND NDCG | Alibaba-iFashion Recall | Alibaba-iFashion NDCG |
|---|---|---|---|---|---|---|
| BPR [56] | 0.0847 | 0.0720 | 0.0392 | 0.0264 | 0.0821 | 0.0506 |
| LightGCN [41] | 0.0716 | 0.0644 | 0.0403 | 0.0279 | 0.0999 | 0.0614 |
| CKE [11] | 0.0853 | 0.0707 | 0.0385 | 0.0276 | 0.0778 | 0.0482 |
| KGAT [19] | 0.0873 | 0.0743 | 0.0339 | 0.0294 | 0.0961 | 0.0582 |
| KGIN [23] | 0.0914 | 0.0786 | 0.0382 | 0.0245 | 0.1171 | 0.0730 |
| SGL [25] | 0.0768 | 0.0679 | 0.0339 | 0.0210 | 0.1118 | 0.0702 |
| KGCL [27] | 0.0876 | 0.0789 | 0.0352 | 0.0221 | 0.1097 | 0.0693 |
| KGRec [31] | 0.0933 | 0.0801 | 0.0414 | 0.0328 | 0.1170 | 0.0731 |
| HKRec | 0.0954 | 0.0829 | 0.0445 | 0.0348 | 0.1192 | 0.0743 |
| Model | MIND Recall | MIND NDCG | Alibaba-iFashion Recall | Alibaba-iFashion NDCG |
|---|---|---|---|---|
| HKRec | 0.0445 | 0.0348 | 0.1192 | 0.0743 |
| w/o ND | 0.0438 | 0.0346 | 0.1185 | 0.0741 |
| w/o T-GMAE | 0.0432 | 0.0334 | 0.1180 | 0.0737 |
| KGRec | 0.0414 | 0.0328 | 0.1170 | 0.0731 |