Enhancing In-Context Learning of Large Language Models for Knowledge Graph Reasoning via Rule-and-Reinforce Selected Triples
Abstract
1. Introduction
- To address the problem that existing LLM-based reasoning methods fail to fully exploit the data already present in knowledge graphs, we propose a rule-and-reinforce triple extraction method that enhances the in-context learning of LLMs for knowledge graph reasoning;
- To obtain more effective in-context triples, we construct an in-context triple extractor based on the encoder-decoder architecture. Triples involved in the logical rules of the knowledge graph serve as supervision for pre-training, and the extractor is then further trained with reinforcement learning using feedback from LLMs (a minimal sketch of this two-stage training appears after this list);
- The experimental results on five different knowledge graphs indicate that the in-context triples extracted by the proposed method can effectively enhance the capabilities of LLMs in knowledge graph reasoning.
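The following sketch illustrates the two-stage training described above under stated assumptions: an encoder-decoder extractor is first pre-trained on rule-grounded triples and then updated with reinforcement learning using LLM feedback as the reward. All names (`pretrain_extractor`, `reinforce_with_llm_feedback`, `llm_reward_fn`) are hypothetical, and the specific losses (cross-entropy, REINFORCE) are assumptions for illustration rather than the paper's exact algorithm.

```python
# Minimal sketch (not the paper's implementation) of the two-stage training:
# (1) supervised pre-training on rule-grounded triples, (2) RL fine-tuning
# with LLM feedback as the reward. Losses and names here are assumptions.
import torch
import torch.nn.functional as F

def pretrain_extractor(extractor, rule_grounded_batches, optimizer):
    """Stage 1: supervised pre-training on triples grounded by logical rules."""
    for query, target_triples in rule_grounded_batches:
        logits = extractor(query)                       # (batch, num_candidate_triples)
        loss = F.cross_entropy(logits, target_triples)  # imitate the rule-selected triples
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

def reinforce_with_llm_feedback(extractor, queries, llm_reward_fn, optimizer, n_triples=5):
    """Stage 2: REINFORCE-style update; the reward is the LLM's feedback on the prediction."""
    for query in queries:
        logits = extractor(query)                       # (batch, num_candidate_triples)
        dist = torch.distributions.Categorical(logits=logits)
        selected = dist.sample((n_triples,))            # sample a set of in-context triples
        reward = llm_reward_fn(query, selected)         # e.g., 1.0 if the LLM then reasons correctly
        loss = -(dist.log_prob(selected).sum() * reward)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
```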
2. Related Work
2.1. Traditional Knowledge Graph Reasoning Methods
2.2. Pre-Trained Language Model- and Large Language Model-Based Methods
3. Formalized Description of the Proposed Solution
4. Methodology
4.1. Logical Rules-Guided In-Context Triples Retrieval and Extractor Pre-Training
4.1.1. Logical Rules-Guided In-Context Triples Retrieval
4.1.2. Supervised Extractor Pre-Training
4.2. Reinforcement Learning with LLM’s Feedback as Rewards
4.2.1. Collecting Feedback from the LLM
4.2.2. Reinforcement Learning with LLM’s Feedback
4.3. In-Context Learning and Reasoning
5. Experimental Setup
5.1. Datasets
- FB15k-237 [41] is derived from FB15k by removing inverse relations.
5.2. Baseline Methods
5.3. Evaluation Metrics
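The result tables below report MRR, Hits@1, and Hits@10. The snippet below is a generic reference computation of these standard link-prediction metrics, not code taken from the paper; `ranks` is assumed to hold the 1-based rank of each gold entity among all candidate entities.

```python
# Generic MRR / Hits@k computation for link prediction (illustrative, not from the paper).
def mrr(ranks):
    """Mean reciprocal rank over a list of 1-based gold-entity ranks."""
    return sum(1.0 / r for r in ranks) / len(ranks)

def hits_at_k(ranks, k):
    """Fraction of test cases whose gold entity is ranked within the top k."""
    return sum(1 for r in ranks if r <= k) / len(ranks)

# Example: gold entities ranked 1st, 3rd, and 12th by the model.
ranks = [1, 3, 12]
print(round(mrr(ranks), 3))            # 0.472
print(round(hits_at_k(ranks, 1), 3))   # 0.333
print(round(hits_at_k(ranks, 10), 3))  # 0.667
```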
5.4. Experimental Settings
6. Experimental Results and Discussion
6.1. Comparison with Baselines
- Comparing rows 1–4 (traditional methods) with rows 5–8 (pre-trained language model- and large language model-based methods) across the two tables, the language model-based methods obtain the best result on 10 of the 15 metrics. These results demonstrate that employing a large language model can enhance the performance of knowledge graph reasoning. One reason could be that, compared with reasoning over a limited set of knowledge graph triples, pre-trained language models contain more extensive knowledge, which is more conducive to completing the reasoning process.
- Comparing rows 5–6 with rows 7–8 in the two tables, most results in rows 7–8 are better than those in rows 5–6. A likely reason is that the methods in rows 7–8, including RuleLLM, employ in-context learning during reasoning, which helps LLMs comprehend the given reasoning task and further improves reasoning performance. These results validate the importance and effectiveness of in-context learning for LLM-based reasoning.
- Our proposed method performs best on three datasets (WN18RR, FB15k-237, and Wikidata5m). The largest improvement over traditional reasoning methods is 0.147 in Hits@10 on FB15k-237 (relative to AnyBURL). These results demonstrate the effectiveness of the proposed method. One reason may be that the rule-and-reinforce in-context triple selection method extracts better in-context examples for the specific reasoning task, providing more helpful factual evidence for reasoning.
6.2. Ablation Study
6.3. Performance on Entities with Different Frequencies
6.4. Performance of Methods with Different Numbers of In-Context Triples
6.5. Manual Evaluation of In-Context Triples by Different Methods
7. Conclusions and Future Work
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Conflicts of Interest
Appendix A. Examples of the Logical Rules-Guided in-Context Triples Retrieval
- (x, nationality, y) ← (x, works in, z) ∧ (z, located in state, w) ∧ (w, state in country, y)
- (x, nationality, y) ← (x, born in, z) ∧ (z, located in state, w) ∧ (w, state in country, y)
- {(Zuckerberg, works in, Facebook), (Facebook, located in state, California), (California, state in country, USA)}
- {(Zuckerberg, born in, New York City), (New York City, located in state, New York State), (New York State, state in country, USA)}
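As a concrete illustration of how such a rule grounds to a triple chain, the sketch below follows the body of the second rule from the query entity over a toy graph and collects the matched triples as in-context evidence. The entity and relation strings mirror the example above; the traversal code itself is illustrative and not taken from the paper.

```python
# Toy knowledge graph covering the grounding example above: (head, relation) -> tail.
toy_kg = {
    ("Zuckerberg", "born in"): "New York City",
    ("New York City", "located in state"): "New York State",
    ("New York State", "state in country"): "USA",
}

def ground_rule(kg, start_entity, body_relations):
    """Follow a chain-shaped rule body (r1 ∧ r2 ∧ ...) and return the grounded triples."""
    triples, current = [], start_entity
    for relation in body_relations:
        tail = kg.get((current, relation))
        if tail is None:                 # the rule body cannot be grounded from this entity
            return []
        triples.append((current, relation, tail))
        current = tail
    return triples

# Grounding: (x, nationality, y) ← (x, born in, z) ∧ (z, located in state, w) ∧ (w, state in country, y)
print(ground_rule(toy_kg, "Zuckerberg", ["born in", "located in state", "state in country"]))
# [('Zuckerberg', 'born in', 'New York City'),
#  ('New York City', 'located in state', 'New York State'),
#  ('New York State', 'state in country', 'USA')]
```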
Appendix B. Top-N Prediction Prompt
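The paper's exact prompt wording is not reproduced here; the sketch below is a hypothetical template showing how the selected in-context triples and a query triple with a missing tail could be assembled into a Top-N prediction prompt. The function name `build_top_n_prompt` and the phrasing are assumptions for illustration.

```python
# Hypothetical Top-N prediction prompt template (illustrative, not the paper's exact prompt).
def build_top_n_prompt(context_triples, head, relation, n=10):
    facts = "\n".join(f"({h}, {r}, {t})" for h, r, t in context_triples)
    return (
        "Here are some facts from a knowledge graph:\n"
        f"{facts}\n\n"
        f"Question: ({head}, {relation}, ?)\n"
        f"List the {n} most likely tail entities, ranked from most to least likely."
    )

print(build_top_n_prompt(
    [("Eric Allin Cornell", "works at", "University of Colorado Boulder"),
     ("University of Colorado Boulder", "is located in", "Colorado")],
    "Eric Allin Cornell", "nationality", n=10))
```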
References
- Ji, S.; Pan, S.; Cambria, E.; Marttinen, P.; Philip, S.Y. A survey on knowledge graphs: Representation, acquisition, and applications. IEEE Trans. Neural Netw. Learn. Syst. 2021, 33, 494–514. [Google Scholar] [CrossRef] [PubMed]
- Ren, Z.; Zhao, Y.; Zong, C. Towards Informative Open-ended Text Generation with Dynamic Knowledge Triples. In Proceedings of the Empirical Methods in Natural Language Processing 2023, Singapore, 6–10 December 2023; pp. 3189–3203. [Google Scholar]
- Wang, S.; Dang, D. Robust cross-lingual knowledge base question answering via knowledge distillation. Data Technol. Appl. 2021, 55, 661–681. [Google Scholar] [CrossRef]
- Zhao, Y.; Xiang, L.; Zhu, J.; Zhang, J.; Zhou, Y.; Zong, C. Knowledge graph enhanced neural machine translation via multi-task learning on sub-entity granularity. In Proceedings of the 28th International Conference on Computational Linguistics, Barcelona, Spain, 8–13 December 2020; pp. 4495–4505. [Google Scholar]
- Zhao, Y.; Zhang, J.; Zhou, Y.; Zong, C. Knowledge graphs enhanced neural machine translation. In Proceedings of the Twenty-Ninth International Conference on International Joint Conferences on Artificial Intelligence, Yokohama, Japan, 7–15 January 2021; pp. 4039–4045. [Google Scholar]
- Zhang, W.; Chen, J.; Li, J.; Xu, Z.; Pan, J.Z.; Chen, H. Knowledge graph reasoning with logics and embeddings: Survey and perspective. arXiv 2022, arXiv:2202.07412. [Google Scholar]
- Xue, B.; Zou, L. Knowledge graph quality management: A comprehensive survey. IEEE Trans. Knowl. Data Eng. 2022, 35, 4969–4988. [Google Scholar] [CrossRef]
- Shen, T.; Zhang, F.; Cheng, J. A comprehensive overview of knowledge graph completion. Knowl. Based Syst. 2022, 255, 109597–109661. [Google Scholar] [CrossRef]
- Jia, N.; Yao, C. A Brief Survey on Deep Learning-Based Temporal Knowledge Graph Completion. Appl. Sci. 2024, 14, 8871. [Google Scholar] [CrossRef]
- Liang, Y.; Zhang, Y.; Ma, C.; Zhang, Z.; Zhao, Y.; Xiang, L.; Zong, C.; Zhou, Y. Document Image Machine Translation with Dynamic Multi-pre-trained Models Assembling. In Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers), Mexico City, Mexico, 16–21 June 2024; pp. 7077–7088. [Google Scholar]
- Krawczyk, N.; Probierz, B.; Kozak, J. Towards AI-Generated Essay Classification Using Numerical Text Representation. Appl. Sci. 2024, 14, 9795. [Google Scholar] [CrossRef]
- Iaroshev, I.; Pillai, R.; Vaglietti, L.; Hanne, T. Evaluating Retrieval-Augmented Generation Models for Financial Report Question and Answering. Appl. Sci. 2024, 14, 9318. [Google Scholar] [CrossRef]
- Hu, L.; Liu, Z.; Zhao, Z.; Hou, L.; Nie, L.; Li, J. A survey of knowledge enhanced pre-trained language models. IEEE Trans. Knowl. Data Eng. 2023, 36, 1413–1430. [Google Scholar] [CrossRef]
- Wang, X.; Zhu, W.; Saxon, M.; Steyvers, M.; Wang, W.Y. Large language models are latent variable models: Explaining and finding good demonstrations for in-context learning. In Advances in Neural Information Processing Systems, Proceedings of the Thirty-seventh Conference on Neural Information Processing Systems, New Orleans, LA, USA, 10–16 December 2023; Curran Associates, Inc.: Red Hook, New York, USA, 2024; Volume 36. [Google Scholar]
- Zhou, D.; Schärli, N.; Hou, L.; Wei, J.; Scales, N.; Wang, X.; Schuurmans, D.; Cui, C.; Bousquet, O.; Le, Q.V.; et al. Least-to-Most Prompting Enables Complex Reasoning in Large Language Models. In Proceedings of the Eleventh International Conference on Learning Representations, Kigali, Rwanda, 1–5 May 2023. [Google Scholar]
- Bordes, A.; Usunier, N.; Garcia-Duran, A.; Weston, J.; Yakhnenko, O. Translating embeddings for modeling multi-relational data. In Advances in Neural Information Processing Systems, Proceedings of the 27th Annual Conference on Neural Information Processing Systems 2013, Lake Tahoe, Nevada, 5–8 December 2013; Curran Associates, Inc.: Red Hook, New York, USA, 2013; Volume 26, p. 26. [Google Scholar]
- Wang, Z.; Zhang, J.; Feng, J.; Chen, Z. Knowledge graph embedding by translating on hyperplanes. In Proceedings of the AAAI Conference on Artificial Intelligence, Québec City, QC, Canada, 27–31 July 2014; Volume 28. [Google Scholar]
- Lin, Y.; Liu, Z.; Sun, M.; Liu, Y.; Zhu, X. Learning entity and relation embeddings for knowledge graph completion. In Proceedings of the AAAI Conference on Artificial Intelligence, Austin, TX, USA, 25–30 January 2015; Volume 29. [Google Scholar]
- Ji, G.; He, S.; Xu, L.; Liu, K.; Zhao, J. Knowledge graph embedding via dynamic mapping matrix. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), Beijing, China, 26–31 July 2015; pp. 687–696. [Google Scholar]
- Trouillon, T.; Welbl, J.; Riedel, S.; Gaussier, É.; Bouchard, G. Complex embeddings for simple link prediction. In Proceedings of the International Conference on Machine Learning. PMLR, New York, NY, USA, 19–24 June 2016; pp. 2071–2080. [Google Scholar]
- Sun, Z.; Deng, Z.H.; Nie, J.Y.; Tang, J. RotatE: Knowledge Graph Embedding by Relational Rotation in Complex Space. In Proceedings of the International Conference on Learning Representations, New Orleans, LA, USA, 6–9 May 2019. [Google Scholar]
- Galárraga, L.A.; Teflioudi, C.; Hose, K.; Suchanek, F. AMIE: Association rule mining under incomplete evidence in ontological knowledge bases. In Proceedings of the 22nd International Conference on World Wide Web, Rio de Janeiro, Brazil, 13–17 May 2013; pp. 413–422. [Google Scholar]
- Galárraga, L.; Teflioudi, C.; Hose, K.; Suchanek, F.M. Fast rule mining in ontological knowledge bases with AMIE+. VLDB J. 2015, 24, 707–730. [Google Scholar] [CrossRef]
- Lajus, J.; Galárraga, L.; Suchanek, F. Fast and exact rule mining with AMIE 3. In The Semantic Web: 17th International Conference, ESWC 2020; Springer: Berlin/Heidelberg, Germany, 2020; pp. 36–52. [Google Scholar]
- Omran, P.G.; Wang, K.; Wang, Z. An embedding-based approach to rule learning in knowledge graphs. IEEE Trans. Knowl. Data Eng. 2019, 33, 1348–1359. [Google Scholar] [CrossRef]
- Yang, F.; Yang, Z.; Cohen, W.W. Differentiable learning of logical rules for knowledge base reasoning. In Advances in Neural Information Processing Systems, Proceedings of the Thirty-First Annual Conference on Neural Information Processing Systems, Long Beach, CA, USA, 4–9 December 2017; Curran Associates, Inc.: Red Hook, New York, USA, 2017; Volume 30. [Google Scholar]
- Meilicke, C.; Chekol, M.W.; Ruffinelli, D.; Stuckenschmidt, H. Anytime bottom-up rule learning for knowledge graph completion. In Proceedings of the 28th International Joint Conference on Artificial Intelligence, Macao, China, 10–16 August 2019; pp. 3137–3143. [Google Scholar]
- Yao, L.; Mao, C.; Luo, Y. KG-BERT: BERT for knowledge graph completion. arXiv 2019, arXiv:1909.03193. [Google Scholar]
- Devlin, J.; Chang, M.-W.; Lee, K.; Toutanova, K. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long and Short Papers), Minneapolis, MN, USA, 2–7 June 2019; pp. 4171–4186. [Google Scholar]
- Saxena, A.; Kochsiek, A.; Gemulla, R. Sequence-to-Sequence Knowledge Graph Completion and Question Answering. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), Dublin, Ireland, 22–27 May 2022; pp. 2814–2828. [Google Scholar]
- Raffel, C.; Shazeer, N.; Roberts, A.; Lee, K.; Narang, S.; Matena, M.; Zhou, Y.; Li, W.; Liu, P.J. Exploring the limits of transfer learning with a unified text-to-text transformer. J. Mach. Learn. Res. 2020, 21, 1–67. [Google Scholar]
- Xie, X.; Zhang, N.; Li, Z.; Deng, S.; Chen, H.; Xiong, F.; Chen, M.; Chen, H. From discrimination to generation: Knowledge graph completion with generative transformer. In Proceedings of the Companion Proceedings of the Web Conference 2022, Lyon, France, 25–29 April 2022; pp. 162–165. [Google Scholar]
- Lewis, M.; Liu, Y.; Goyal, N.; Ghazvininejad, M.; Mohamed, A.; Levy, O.; Stoyanov, V.; Zettlemoyer, L. BART: Denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension. arXiv 2019, arXiv:1910.13461. [Google Scholar]
- Zhu, Y.; Wang, X.; Chen, J.; Qiao, S.; Ou, Y.; Yao, Y.; Deng, S.; Chen, H.; Zhang, N. Llms for knowledge graph construction and reasoning: Recent capabilities and future opportunities. World Wide Web 2024, 27, 58. [Google Scholar] [CrossRef]
- Baek, J.; Aji, A.F.; Saffari, A. Knowledge-augmented language model prompting for zero-shot knowledge graph question answering. arXiv 2023, arXiv:2306.04136. [Google Scholar]
- Zhang, Y.; Chen, Z.; Zhang, W.; Chen, H. Making Large Language Models Perform Better in Knowledge Graph Completion. In Proceedings of the 32nd ACM International Conference on Multimedia, Melbourne, Australia, 28 October–1 November 2024. [Google Scholar]
- Wei, Y.; Huang, Q.; Zhang, Y.; Kwok, J. KICGPT: Large Language Model with Knowledge in Context for Knowledge Graph Completion. In Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, Singapore, 6–10 December 2023. [Google Scholar]
- Dettmers, T.; Minervini, P.; Stenetorp, P.; Riedel, S. Convolutional 2d knowledge graph embeddings. In Proceedings of the AAAI Conference on Artificial Intelligence, New Orleans, LA, USA, 2–7 February 2018; Volume 32. [Google Scholar]
- Miller, G.A. WordNet: A lexical database for English. Commun. ACM 1995, 38, 39–41. [Google Scholar] [CrossRef]
- Bollacker, K.; Evans, C.; Paritosh, P.; Sturge, T.; Taylor, J. Freebase: A collaboratively created graph database for structuring human knowledge. In Proceedings of the 2008 ACM SIGMOD International Conference on Management of Data, Vancouver, BC, Canada, 10–12 June 2008; pp. 1247–1250. [Google Scholar]
- Toutanova, K.; Chen, D. Observed versus latent features for knowledge base and text inference. In Proceedings of the 3rd Workshop on Continuous Vector Space Models and Their Compositionality, Beijing, China, 26–31 July 2015; pp. 57–66. [Google Scholar]
- Akrami, F.; Saeef, M.S.; Zhang, Q.; Hu, W.; Li, C. Realistic re-evaluation of knowledge graph completion methods: An experimental study. In Proceedings of the 2020 ACM SIGMOD International Conference on Management of Data, Portland, OR, USA, 14–19 June 2020; pp. 1995–2010. [Google Scholar]
- Wang, X.; Gao, T.; Zhu, Z.; Zhang, Z.; Liu, Z.; Li, J.; Tang, J. KEPLER: A Unified Model for Knowledge Embedding and Pre-trained Language Representation. Trans. Assoc. Comput. Linguist. 2021, 9, 176–194. [Google Scholar] [CrossRef]
- Vrandečić, D.; Krötzsch, M. Wikidata: A free collaborative knowledgebase. Commun. ACM 2014, 57, 78–85. [Google Scholar] [CrossRef]
- Wang, S.; Li, S.; Zou, L. Analogy-Triple Enhanced Fine-Grained Transformer for Sparse Knowledge Graph Completion. In Proceedings of the International Conference on Database Systems for Advanced Applications, Tianjin, China, 17–20 April 2023; Springer: Berlin/Heidelberg, Germany, 2023; pp. 742–757. [Google Scholar]
- Vaswani, A.; Shazeer, N.; Parmar, N.; Uszkoreit, J.; Jones, L.; Gomez, A.N.; Kaiser, Ł.; Polosukhin, I. Attention is all you need. In Advances in Neural Information Processing Systems, Proceedings of the 31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA, 4–9 December 2017; Curran Associates, Inc.: Red Hook, New York, USA, 2017; Volume 30. [Google Scholar]
- Zhao, Y.; Zhang, J.; Zong, C. Transformer: A general framework from machine translation to others. Mach. Intell. Res. 2023, 20, 514–538. [Google Scholar] [CrossRef]
- Tan, Z.; Zhang, J.; Huang, X.; Chen, G.; Wang, S.; Sun, M.; Luan, H.; Liu, Y. THUMT: An open-source toolkit for neural machine translation. In Proceedings of the 14th Conference of the Association for Machine Translation in the Americas (Volume 1: Research Track), Online, 6–9 October 2020; pp. 116–122. [Google Scholar]
Dataset | #Entity | #Relation | #Train | #Valid | #Test |
---|---|---|---|---|---|
WN18RR | 40,943 | 11 | 86,835 | 3034 | 3134 |
FB15k | 14,951 | 1345 | 483,142 | 50,000 | 59,071 |
FB15k-237 | 14,541 | 237 | 272,115 | 17,535 | 20,046 |
YAGO3-10-dr | 122,837 | 36 | 732,556 | 3390 | 3359 |
Wikidata5m | 4,818,503 | 828 | 21,343,515 | 5357 | 5321 |
# | Model | WN18RR MRR | WN18RR Hits@1 | WN18RR Hits@10 | FB15k MRR | FB15k Hits@1 | FB15k Hits@10 | FB15k-237 MRR | FB15k-237 Hits@1 | FB15k-237 Hits@10 |
---|---|---|---|---|---|---|---|---|---|---|
1 | TransE | 0.243 | 0.043 | 0.532 | 0.391 | 0.031 | 0.624 | 0.279 | 0.198 | 0.441 |
2 | RotatE | 0.476 | 0.428 | 0.571 | 0.791 | 0.742 | 0.881 | 0.338 | 0.241 | 0.533 |
3 | AMIE | 0.357 | 0.287 | 0.356 | 0.797 | 0.617 | 0.881 | 0.308 | 0.174 | 0.477 |
4 | AnyBURL | 0.480 | 0.445 | 0.549 | 0.830 | 0.808 | 0.876 | 0.260 | 0.196 | 0.410 |
5 | KG-BERT | 0.219 | 0.095 | 0.497 | – | – | – | 0.237 | 0.144 | 0.427 |
6 | KGT5 | 0.508 | 0.487 | 0.544 | – | – | – | 0.276 | 0.210 | 0.414 |
7 |  | 0.511 | 0.482 | 0.556 | 0.815 | 0.731 | 0.867 | 0.323 | 0.255 | 0.551 |
8 | RuleLLM | 0.518 | 0.493 | 0.573 | 0.826 | 0.737 | 0.879 | 0.341 | 0.260 | 0.557 |
# | Model | YAGO3-10-dr MRR | YAGO3-10-dr Hits@1 | YAGO3-10-dr Hits@10 | Wikidata5m MRR | Wikidata5m Hits@1 | Wikidata5m Hits@10 |
---|---|---|---|---|---|---|---|
1 | TransE | 0.190 | 0.136 | 0.323 | 0.253 | 0.170 | 0.392 |
2 | RotatE | 0.214 | 0.153 | 0.332 | 0.290 | 0.234 | 0.390 |
3 | AMIE | – | – | – | – | – | – |
4 | AnyBURL | 0.211 | 0.154 | 0.331 | – | – | – |
5 | KG-BERT | – | – | – | – | – | – |
6 | KGT5 | 0.211 | 0.151 | 0.327 | 0.300 | 0.267 | 0.365 |
7 |  | 0.213 | 0.153 | 0.333 | 0.326 | 0.298 | 0.397 |
8 | RuleLLM | 0.212 | 0.152 | 0.337 | 0.357 | 0.321 | 0.448 |
# | Model | WN18RR MRR | WN18RR Hits@1 | WN18RR Hits@10 | FB15k-237 MRR | FB15k-237 Hits@1 | FB15k-237 Hits@10 |
---|---|---|---|---|---|---|---|
1 | w/o in-context triples | 0.326 | 0.177 | 0.528 | 0.233 | 0.151 | 0.431 |
2 | directly connected in-context triples | 0.315 | 0.171 | 0.503 | 0.226 | 0.151 | 0.422 |
3 | in-context triples guided by logical rules | 0.457 | 0.403 | 0.552 | 0.326 | 0.254 | 0.547 |
4 | in-context triples by logical rules and LLM’s feedback | 0.517 | 0.494 | 0.571 | 0.343 | 0.264 | 0.558 |
Method | Extracted In-Context Triples | Ex. 1 | Ex. 2 | Ex. 3 | Ex. 4 | Ex. 5 | Average |
---|---|---|---|---|---|---|---|
directly connected in-context triples | 1. (Eric Allin Cornell, occupation, scientist) 2. (Eric Allin Cornell, has won prize, Nobel Prize) 3. (Eric Allin Cornell, has friend, Wolfgang Ketterle) 4. (Eric Allin Cornell, has gender, male) 5. (Eric Allin Cornell, year of birth, 1961) | 2 | 3 | 3 | 1 | 2 | 2.2 |
in-context triples by logical rules | 1. (Eric Allin Cornell, has friend, Wolfgang Ketterle) 2. (Wolfgang Ketterle, graduated from, Heidelberg University) 3. (Heidelberg University, is located in, Heidelberg) 4. (Heidelberg, is located in, Germany) 5. (Eric Allin Cornell, graduated from, Massachusetts Institute of Technology) | 4 | 5 | 6 | 5 | 6 | 5.2 |
in-context triples by logical rules and LLM’s feedback | 1. (Eric Allin Cornell, works at, University of Colorado Boulder) 2. (University of Colorado Boulder, is located in, Colorado) 3. (Eric Allin Cornell, was born in, Palo Alto) 4. (Palo Alto, is located in, California) 5. (Eric Allin Cornell, graduated from, Massachusetts Institute of Technology) | 8 | 8 | 7 | 8 | 9 | 8.0 |
# | Method | Ex. 1 | Ex. 2 | Ex. 3 | Ex. 4 | Ex. 5 | Average |
---|---|---|---|---|---|---|---|
1 | directly connected in-context triples | 2.50 | 3.44 | 3.86 | 2.46 | 1.84 | 2.82 |
2 | in-context triples guided by logical rules | 4.64 | 4.78 | 5.56 | 5.34 | 4.64 | 4.99 |
3 | in-context triples by logical rules and LLM’s feedback | 5.82 | 5.66 | 6.22 | 6.78 | 7.02 | 6.30 |