Commonsense-Guided Inductive Relation Prediction with Dual Attention Mechanism
Abstract
1. Introduction
- Contributions. The contributions of this work can be summarized into three aspects:
- We put forward CNIA, a commonsense-guided inductive relation prediction method with a dual attention mechanism, which enhances representation learning and the accuracy of predictions;
- We propose to construct a neighbor-enriched subgraph to retain more useful neighboring information to aid the prediction;
- We compare the CNIA with state-of-the-art models on benchmark datasets, and the results demonstrate the superior performance of our model.
2. Related Works
2.1. Relation Prediction Methods
2.2. Commonsense Knowledge
3. Methodology
3.1. Problem Statement
3.2. Model Overview
3.3. Foundation Framework
- Neighboring Relational Feature Extraction. The neighboring relational feature model consists of two parts: subgraph extraction and node initialization. The local graph neighborhood of a specific triple in the KG contains the logical evidence needed to infer the relation between the target nodes; the first step is therefore to extract the subgraph around the target nodes that contains the complete neighboring relations, from which the node features are initialized. For subgraph extraction, the target triple is identified and the enclosing subgraph around it is extracted; more details can be found in the GraIL paper [15]. For node initialization, since the inductive setting cannot rely on node attribute features, the initial node features are obtained from each node's positional and adjacency features; the details can be found in [21].
- Subgraph Representation Learning. As the main component of the foundation framework, this stage includes two parts: (1) obtaining the representations of subgraph entity nodes via subgraph neural networks; and (2) extracting and modeling the neighboring relational paths of the target triple.
- Supervised Training. A loss function for supervised learning is constructed:
- Contrastive Learning. Contrastive learning is widely used in unsupervised learning to pull positive samples together and push negative samples apart, yielding higher-quality representations. To prevent the SNN in SNRI from over-emphasizing local structure, the neighboring relations are further modeled in a global way through subgraph–graph mutual information (MI) maximization; that is, SNRI enables the neighboring relational features and paths to capture the global information of the entire KG, as realized in [21]. A corresponding loss function is obtained.
- Joint Training. The final training objective jointly combines the supervised and contrastive loss functions.
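The node initialization described above builds on GraIL's double-radius vertex labeling, in which each subgraph node receives its pair of shortest-path distances to the two target nodes as a positional feature. A minimal sketch of that labeling step, assuming an undirected adjacency-list representation of the subgraph (the function names are ours for illustration, not from the authors' code):

```python
from collections import deque

def bfs_distances(adj, source):
    """Shortest-path distance (in hops) from `source` to every reachable node."""
    dist = {source: 0}
    queue = deque([source])
    while queue:
        node = queue.popleft()
        for nbr in adj.get(node, ()):
            if nbr not in dist:
                dist[nbr] = dist[node] + 1
                queue.append(nbr)
    return dist

def double_radius_labels(adj, u, v, nodes):
    """GraIL-style positional features: each subgraph node is labeled with its
    distances to both target nodes; the targets themselves get (0, 1) and (1, 0)."""
    du = bfs_distances(adj, u)
    dv = bfs_distances(adj, v)
    labels = {}
    for n in nodes:
        if n == u:
            labels[n] = (0, 1)
        elif n == v:
            labels[n] = (1, 0)
        else:
            labels[n] = (du.get(n, float("inf")), dv.get(n, float("inf")))
    return labels
```

In GraIL these distance pairs are one-hot encoded and concatenated to form the initial node embedding; the sketch stops at the raw distance pairs.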
3.4. Our Model
3.4.1. Neighbor-Enriched Subgraph Extraction
- Step 1
- Obtain the sets of three-hop neighbor nodes of the target nodes u and v in the KG, respectively. The neighbors collected here do not distinguish edge direction.
- Step 2
- Take the intersection of the neighbor nodes of u and v to obtain the nodes of the enclosing subgraph.
- Step 3
- Filter out isolated nodes and nodes whose distance to either target node is greater than three, to obtain an enclosing subgraph in which path lengths do not exceed the bound between the target nodes.
- Step 4
- Keep the complete three-hop neighbor relations of each node i, including those omitted by the plain enclosing subgraph.
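The four steps above can be sketched as follows. This is an illustrative reconstruction, assuming an undirected adjacency-list KG; all function names are ours rather than the authors':

```python
from collections import deque

def k_hop_nodes(adj, source, k=3):
    """Step 1: all nodes within k undirected hops of `source` (hop distances)."""
    dist = {source: 0}
    queue = deque([source])
    while queue:
        n = queue.popleft()
        if dist[n] == k:
            continue  # do not expand beyond k hops
        for nbr in adj.get(n, ()):
            if nbr not in dist:
                dist[nbr] = dist[n] + 1
                queue.append(nbr)
    return dist

def neighbor_enriched_subgraph(adj, u, v, k=3):
    du = k_hop_nodes(adj, u, k)        # Step 1: k-hop neighborhoods of u and v
    dv = k_hop_nodes(adj, v, k)
    nodes = set(du) & set(dv)          # Step 2: intersection -> enclosing nodes
    # Step 3: the intersection already bounds the distance to both targets by k;
    # additionally drop nodes left with no edge inside the subgraph (isolated).
    nodes = {n for n in nodes
             if n in (u, v) or any(m in nodes for m in adj.get(n, ()))}
    # Step 4: keep each retained node's complete neighbor relations, including
    # edges that the plain enclosing subgraph would omit.
    enriched_edges = {(n, m) for n in nodes for m in adj.get(n, ())}
    return nodes, enriched_edges
```

For example, on a toy graph where u and v are bridged by node a and u has a pendant neighbor b, all four nodes survive the intersection and filtering, while components disconnected from both targets are discarded.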
3.4.2. Neighboring Relational Path
3.4.3. Subgraph Modeling Based on a Dual Attention Mechanism
3.4.4. Commonsense Re-Ranking
- Fine-tuning BERT to acquire contextual representations. The pre-trained language model BERT is suitable for acquiring the contextual representations of triples. Specifically, given a triple (u, r, v), the elements u, r, and v are iteratively masked so that the encoder predicts each masked element from the two unmasked ones, which enables the BERT model to better capture the relationships among the triple elements.
- Filtering abstract concepts. Replacing the entities of a triple with their concepts may yield concepts that are too abstract. We use the entity and concept representations to compute the probability of a concept appearing for an entity, which measures the concept's abstraction level, and we filter out overly abstract concepts whose occurrence expectation falls below a preset threshold.
- Entity-to-concept mapping. After filtering out the overly abstract concepts, the remaining concepts replace the entities in each triple to obtain concept-level triples. Commonsense knowledge in the individual form C1 is obtained by removing duplicate concept-level triples; commonsense knowledge in the set form C2 is then obtained by merging concept-level triples that share the same relation.
- Filtering relation-independent concepts. The commonsense knowledge obtained by substituting concepts for entities may still contain concepts that are unrelated to the relation. To measure the relevance of a concept to a relation, the cosine similarity between their representations is computed to obtain a similarity score.
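The concept-filtering and mapping steps above can be sketched end to end. This is a hedged illustration, not the authors' implementation: the thresholds, the probability table, and the toy embedding vectors are assumptions, and the function names are ours.

```python
import math

def cosine(p, q):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(p, q))
    norm = math.sqrt(sum(a * a for a in p)) * math.sqrt(sum(b * b for b in q))
    return dot / norm if norm else 0.0

def concept_triples(triples, ent2concepts, concept_prob, rel_vecs, concept_vecs,
                    abs_threshold=0.1, rel_threshold=0.5):
    """Map entity-level triples to the concept level, filtering out
    (i) overly abstract concepts whose occurrence expectation is below
    `abs_threshold`, and (ii) concepts whose cosine similarity to the
    relation representation is below `rel_threshold`. Returns the
    deduplicated individual form C1 and the relation-merged set form C2."""
    def keep(concept, relation):
        return (concept_prob.get(concept, 0.0) >= abs_threshold
                and cosine(concept_vecs[concept], rel_vecs[relation]) >= rel_threshold)

    c1 = set()  # individual form: deduplicated concept-level triples
    for u, r, v in triples:
        for cu in ent2concepts.get(u, ()):
            if not keep(cu, r):
                continue
            for cv in ent2concepts.get(v, ()):
                if keep(cv, r):
                    c1.add((cu, r, cv))
    c2 = {}     # set form: head/tail concept sets merged per relation
    for cu, r, cv in c1:
        heads, tails = c2.setdefault(r, (set(), set()))
        heads.add(cu)
        tails.add(cv)
    return c1, c2
```

On a toy input, an overly abstract concept (low occurrence probability) is dropped while relation-relevant concepts survive and are merged into the set form.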
3.4.5. Algorithmic Descriptions
4. Experiments
4.1. Experimental Configurations
4.1.1. Datasets
Algorithm 1 The inductive process of the CNIA model
Input: …, target triple. Output: …
4.1.2. Evaluation Metrics
4.1.3. Parameter Settings
4.1.4. Baseline Models
- GraIL [15]: This method pioneered subgraph-based inductive reasoning by introducing enclosing-subgraph encoding, thereby handling entities unseen during training in entirely new KGs.
- TACT [16]: This approach models the semantic correlations between relations and exploits the topological structure among relations for inductive relation prediction.
- CoMPILE [17]: This method is grounded in the structure of local directed subgraphs, and it exhibits a robust inductive bias for handling entity-independent semantic relations.
- SNRI [21]: This approach exploits the complete neighboring relations of entities in subgraphs through neighboring relational features and neighboring relational paths, on which its inductive relation prediction in KGs is based.
- ConGLR [13]: This method constructs a context graph that represents relational paths and processes it, together with the enclosing subgraph, using two GCNs.
- RMPI [29]: This approach passes messages directly between relations to make full use of the relation patterns for subgraph reasoning.
4.2. Experimental Results
4.2.1. Main Results
4.2.2. Ablation Study
- CNIA w/o Att: Remove the dual attention mechanism and directly concatenate features to obtain the neighboring relational features. Hits@10 decreased by 3.72%, 1.25%, 5.37%, and 0.63% on the four WN18RR versions, showing that ignoring the influence of different edges on nodes, and of different relations on the target relation, reduces inference accuracy.
- CNIA w/o NRF: Remove the neighboring relational features and predict directly from the initialized node features. Ignoring the neighboring relations makes the node features less expressive, losing effective information and failing to fully characterize the nodes.
- CNIA w/o NRP: Remove the neighboring relational path feature, ignoring the message-propagation path from the head node to the tail node. The resulting clear performance drop indicates that the neighboring relational path feature plays an important role in handling sparse subgraphs.
- CNIA w/o CSR: Remove the commonsense re-ranking module, so predictions that violate commonsense are no longer filtered out. The experiments verified that without commonsense re-ranking, relations that contradict commonsense are retained, degrading model performance.
- CNIA w/o CL: Remove contrastive learning, i.e., skip the MI maximization. Hits@10 decreased by 4.24%, 4.99%, 7.57%, and 9.24% on the four WN18RR versions. Without contrastive learning, most metrics dropped, demonstrating that global information helps to better model the neighboring relational features.
4.2.3. Hyper-Parameter Analysis
4.2.4. Learning Rate Analysis
5. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Conflicts of Interest
References
- Zhang, Y.; Dai, H.; Kozareva, Z.; Smola, A.J.; Song, L. Variational Reasoning for Question Answering With Knowledge Graph. In Proceedings of the Thirty-Second AAAI Conference on Artificial Intelligence (AAAI 2018), New Orleans, LA, USA, 2–7 February 2018; pp. 6069–6076. [Google Scholar]
- Verlinden, S.; Zaporojets, K.; Deleu, J.; Demeester, T.; Develder, C. Injecting Knowledge Base Information into End-to-End Joint Entity and Relation Extraction and Coreference Resolution. In Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021; Association for Computational Linguistics: Kerrville, TX, USA, 2021; pp. 1952–1957. [Google Scholar]
- Wang, H.; Zhao, M.; Xie, X.; Li, W.; Guo, M. Knowledge Graph Convolutional Networks for Recommender Systems. In Proceedings of the WWW 2019, San Francisco, CA, USA, 13–17 May 2019; pp. 3307–3313. [Google Scholar]
- Zhao, X.; Zeng, W.; Tang, J. Entity Alignment—Concepts, Recent Advances and Novel Approaches; Springer: Singapore, 2023. [Google Scholar] [CrossRef]
- Zeng, W.; Zhao, X.; Li, X.; Tang, J.; Wang, W. On entity alignment at scale. VLDB J. 2022, 31, 1009–1033. [Google Scholar] [CrossRef]
- Bollacker, K.D.; Evans, C.; Paritosh, P.K.; Sturge, T.; Taylor, J. Freebase: A collaboratively created graph database for structuring human knowledge. In Proceedings of the SIGMOD Conference 2008, Vancouver, BC, Canada, 10–12 June 2008; pp. 1247–1250. [Google Scholar]
- Vrandecic, D. Wikidata: A new platform for collaborative data collection. In Proceedings of the WWW 2012, Lyon, France, 16–20 April 2012; pp. 1063–1064. [Google Scholar]
- Bordes, A.; Usunier, N.; García-Durán, A.; Weston, J.; Yakhnenko, O. Translating Embeddings for Modeling Multi-relational Data. In Proceedings of the NIPS 2013, Lake Tahoe, NV, USA, 5–10 December 2013; pp. 2787–2795. [Google Scholar]
- Trouillon, T.; Welbl, J.; Riedel, S.; Gaussier, É.; Bouchard, G. Complex Embeddings for Simple Link Prediction. In Proceedings of the 33rd International Conference on International Conference on Machine Learning (ICML 2016), New York, NY, USA, 19–24 June 2016; Volume 48, pp. 2071–2080. [Google Scholar]
- Schlichtkrull, M.S.; Kipf, T.N.; Bloem, P.; van den Berg, R.; Titov, I.; Welling, M. Modeling Relational Data with Graph Convolutional Networks. In The Semantic Web, Proceedings of the 15th International Conference, ESWC 2018, Heraklion, Crete, Greece, 3–7 June 2018; Proceedings 15; Lecture Notes in Computer Science; Springer: Cham, Switzerland, 2018; Volume 10843, pp. 593–607. [Google Scholar]
- Vashishth, S.; Sanyal, S.; Nitin, V.; Talukdar, P.P. Composition-based Multi-Relational Graph Convolutional Networks. In Proceedings of the ICLR 2020, Addis Ababa, Ethiopia, 26–30 April 2020. [Google Scholar]
- Liu, J.; Fan, C.; Zhou, F.; Xu, H. Complete feature learning and consistent relation modeling for few-shot knowledge graph completion. Expert Syst. Appl. 2024, 238, 121725. [Google Scholar] [CrossRef]
- Lin, Q.; Liu, J.; Xu, F.; Pan, Y.; Zhu, Y.; Zhang, L.; Zhao, T. Incorporating Context Graph with Logical Reasoning for Inductive Relation Prediction. In Proceedings of the SIGIR 2022, Madrid, Spain, 11–15 July 2022; pp. 893–903. [Google Scholar]
- Yang, F.; Yang, Z.; Cohen, W.W. Differentiable Learning of Logical Rules for Knowledge Base Reasoning. In Proceedings of the NIPS 2017, Long Beach, CA, USA, 4–9 December 2017; pp. 2319–2328. [Google Scholar]
- Teru, K.K.; Denis, E.G.; Hamilton, W.L. Inductive Relation Prediction by Subgraph Reasoning. In Proceedings of the ICML 2020, Virtual, 13–18 July 2020; Volume 119, pp. 9448–9457. [Google Scholar]
- Chen, J.; He, H.; Wu, F.; Wang, J. Topology-Aware Correlations Between Relations for Inductive Link Prediction in Knowledge Graphs. In Proceedings of the AAAI 2021, Virtual, 2–9 February 2021; pp. 6271–6278. [Google Scholar]
- Mai, S.; Zheng, S.; Yang, Y.; Hu, H. Communicative Message Passing for Inductive Relation Reasoning. In Proceedings of the AAAI 2021, Virtual, 2–9 February 2021; pp. 4294–4302. [Google Scholar]
- Galkin, M.; Denis, E.G.; Wu, J.; Hamilton, W.L. NodePiece: Compositional and Parameter-Efficient Representations of Large Knowledge Graphs. In Proceedings of the ICLR 2022, Virtual, 25–29 April 2022. [Google Scholar]
- Baek, J.; Lee, D.B.; Hwang, S.J. Learning to Extrapolate Knowledge: Transductive Few-shot Out-of-Graph Link Prediction. In Proceedings of the NeurIPS 2020, Virtual, 6–12 December 2020. [Google Scholar]
- Zhang, Y.; Wang, W.; Chen, W.; Xu, J.; Liu, A.; Zhao, L. Meta-Learning Based Hyper-Relation Feature Modeling for Out-of-Knowledge-Base Embedding. In Proceedings of the CIKM 2021, Gold Coast, QLD, Australia, 1–5 November 2021; pp. 2637–2646. [Google Scholar]
- Xu, X.; Zhang, P.; He, Y.; Chao, C.; Yan, C. Subgraph Neighboring Relations Infomax for Inductive Link Prediction on Knowledge Graphs. In Proceedings of the IJCAI 2022, Vienna, Austria, 23–29 July 2022; pp. 2341–2347. [Google Scholar]
- Zeng, W.; Zhao, X.; Tang, J.; Lin, X. Collective Entity Alignment via Adaptive Features. In Proceedings of the 36th IEEE International Conference on Data Engineering, ICDE 2020, Dallas, TX, USA, 20–24 April 2020; pp. 1870–1873. [Google Scholar]
- Zeng, W.; Zhao, X.; Tang, J.; Lin, X.; Groth, P. Reinforcement Learning-based Collective Entity Alignment with Adaptive Features. ACM Trans. Inf. Syst. 2021, 39, 26:1–26:31. [Google Scholar] [CrossRef]
- Wang, J.; Lin, X.; Huang, H.; Ke, X.; Wu, R.; You, C.; Guo, K. GLANet: Temporal knowledge graph completion based on global and local information-aware network. Appl. Intell. 2023, 53, 19285–19301. [Google Scholar] [CrossRef]
- Meng, X.; Bai, L.; Hu, J.; Zhu, L. Multi-hop path reasoning over sparse temporal knowledge graphs based on path completion and reward shaping. Inf. Process. Manag. 2024, 61, 103605. [Google Scholar] [CrossRef]
- Wang, J.; Wang, B.; Gao, J.; Hu, S.; Hu, Y.; Yin, B. Multi-Level Interaction Based Knowledge Graph Completion. IEEE ACM Trans. Audio Speech Lang. Process. 2024, 32, 386–396. [Google Scholar] [CrossRef]
- Sadeghian, A.; Armandpour, M.; Ding, P.; Wang, D.Z. DRUM: End-To-End Differentiable Rule Mining on Knowledge Graphs. In Proceedings of the NeurIPS 2019, Vancouver, BC, Canada, 8–14 December 2019; pp. 15321–15331. [Google Scholar]
- Li, J.; Wang, Q.; Mao, Z. Inductive Relation Prediction from Relational Paths and Context with Hierarchical Transformers. In Proceedings of the ICASSP 2023, Rhodes Island, Greece, 4–10 June 2023; pp. 1–5. [Google Scholar]
- Geng, Y.; Chen, J.; Pan, J.Z.; Chen, M.; Jiang, S.; Zhang, W.; Chen, H. Relational Message Passing for Fully Inductive Knowledge Graph Completion. In Proceedings of the ICDE 2023, Anaheim, CA, USA, 3–7 April 2023; pp. 1221–1233. [Google Scholar]
- Lenat, D.B. CYC: A Large-Scale Investment in Knowledge Infrastructure. Commun. ACM 1995, 38, 32–38. [Google Scholar] [CrossRef]
- Suchanek, F.M.; Kasneci, G.; Weikum, G. Yago: A core of semantic knowledge. In Proceedings of the WWW 2007, Banff, AB, Canada, 8–12 May 2007; pp. 697–706. [Google Scholar]
- Song, Y.; Zheng, S.; Niu, Z.; Fu, Z.; Lu, Y.; Yang, Y. Communicative Representation Learning on Attributed Molecular Graphs. In Proceedings of the IJCAI 2020, Yokohama, Japan, 7–15 January 2021; pp. 2831–2838. [Google Scholar]
- Dettmers, T.; Minervini, P.; Stenetorp, P.; Riedel, S. Convolutional 2D Knowledge Graph Embeddings. In Proceedings of the AAAI 2018, New Orleans, LA, USA, 2–7 February 2018; pp. 1811–1818. [Google Scholar]
- Toutanova, K.; Chen, D.; Pantel, P.; Poon, H.; Choudhury, P.; Gamon, M. Representing Text for Joint Embedding of Text and Knowledge Bases. In Proceedings of the EMNLP 2015, Lisbon, Portugal, 17–21 September 2015; pp. 1499–1509. [Google Scholar]
- Xiong, W.; Hoang, T.; Wang, W.Y. DeepPath: A Reinforcement Learning Method for Knowledge Graph Reasoning. In Proceedings of the EMNLP 2017, Copenhagen, Denmark, 7–11 September 2017; pp. 564–573. [Google Scholar]
Symbols | Descriptions
---|---
 | Target triple
 | Target relation to be predicted
 | Extracted subgraph for the target triple
 | The representation of subgraph G
 | Node set of subgraph G
 | The weight of the MI maximization mechanism
 | The relational path representation of the subgraph
 | The representation of subgraph G
 | Edge-aware attention
 | Relation-aware attention
 | Joint neighbor attention
 | Commonsense knowledge
 | The weight of commonsense in the score function
Version | Split | WN18RR #R | #E | #TR1 | #TR2 | FB15k-237 #R | #E | #TR1 | #TR2 | NELL-995 #R | #E | #TR1 | #TR2
---|---|---|---|---|---|---|---|---|---|---|---|---|---
v1 | Train | 9 | 2746 | 5410 | 630 | 183 | 2000 | 4245 | 489 | 14 | 10,915 | 4687 | 414
 | Test | 9 | 922 | 1618 | 188 | 146 | 1500 | 1993 | 205 | 14 | 225 | 833 | 100
v2 | Train | 10 | 6954 | 15,262 | 1838 | 203 | 3000 | 9739 | 1166 | 88 | 2564 | 8219 | 922
 | Test | 10 | 2923 | 4011 | 441 | 176 | 2000 | 4145 | 478 | 79 | 4937 | 4586 | 476
v3 | Train | 11 | 12,078 | 25,901 | 3097 | 218 | 4000 | 17,986 | 2194 | 142 | 4647 | 16,393 | 1851
 | Test | 11 | 5084 | 6327 | 605 | 187 | 3000 | 7406 | 865 | 122 | 4921 | 8048 | 809
v4 | Train | 9 | 3861 | 7940 | 934 | 222 | 5000 | 27,203 | 3352 | 77 | 2092 | 7546 | 876
 | Test | 9 | 7208 | 12,334 | 1429 | 204 | 3500 | 11,714 | 1424 | 61 | 3294 | 7073 | 731
Model | WN18RR v1 | v2 | v3 | v4 | Avg. | FB15k-237 v1 | v2 | v3 | v4 | Avg. | NELL-995 v1 | v2 | v3 | v4 | Avg.
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---
GraIL | 94.32 | 94.18 | 85.80 | 92.72 | 91.75 | 84.69 | 90.57 | 91.68 | 94.46 | 90.35 | 86.05 | 92.62 | 93.34 | 87.50 | 89.87 |
TACT | 95.43 | 97.54 | 87.65 | 96.04 | 94.16 | 83.15 | 93.01 | 92.10 | 94.25 | 90.62 | 81.06 | 93.12 | 96.07 | 85.75 | 89.00 |
CoMPILE | 98.23 | 99.56 | 93.60 | 99.80 | 97.79 | 85.50 | 91.68 | 93.12 | 94.90 | 91.30 | 80.16 | 95.88 | 96.08 | 85.48 | 89.04 |
SNRI | 99.10 | 99.92 | 94.90 | 99.61 | 98.38 | 86.69 | 91.77 | 91.22 | 93.37 | 90.76 | — | — | — | — | — |
ConGLR | 99.58 | 99.67 | 93.78 | 99.88 | 98.22 | 85.68 | 92.32 | 93.91 | 95.05 | 91.74 | 86.48 | 95.22 | 96.16 | 88.46 | 91.58 |
RMPI | 95.00 | 95.96 | 88.53 | 95.78 | 93.82 | 85.25 | 92.19 | 92.09 | 92.80 | 90.58 | 81.12 | 93.46 | 95.35 | 91.77 | 90.43 |
CNIA (ours) | 99.89 | 99.91 | 94.95 | 99.90 | 98.66 | 86.99 | 92.75 | 93.42 | 94.80 | 91.99 | 95.50 | 95.43 | 96.25 | 88.13 | 93.83 |
Model | WN18RR v1 | v2 | v3 | v4 | Avg. | FB15k-237 v1 | v2 | v3 | v4 | Avg. | NELL-995 v1 | v2 | v3 | v4 | Avg.
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---
GraIL | 82.45 | 78.68 | 58.43 | 73.41 | 73.24 | 64.15 | 81.80 | 82.83 | 89.29 | 79.51 | 59.50 | 93.25 | 91.41 | 73.19 | 79.33 |
TACT | 84.04 | 81.63 | 67.97 | 76.56 | 77.55 | 65.76 | 83.56 | 85.20 | 88.69 | 80.80 | 79.80 | 88.91 | 94.02 | 73.78 | 84.12 |
CoMPILE | 83.60 | 79.82 | 60.69 | 75.49 | 74.90 | 67.64 | 82.98 | 84.67 | 87.44 | 80.68 | 58.38 | 93.87 | 92.77 | 75.19 | 80.05 |
SNRI | 87.23 | 83.10 | 67.31 | 83.32 | 80.24 | 71.79 | 86.50 | 89.59 | 89.39 | 84.32 | — | — | — | — | — |
ConGLR | 85.64 | 92.93 | 70.74 | 92.90 | 85.55 | 68.29 | 85.98 | 88.61 | 89.31 | 82.93 | 81.07 | 94.92 | 94.36 | 81.61 | 87.99 |
RMPI | 82.45 | 78.68 | 58.68 | 73.41 | 73.31 | 65.37 | 81.80 | 81.10 | 87.25 | 78.88 | 59.50 | 92.23 | 93.57 | 87.62 | 83.23 |
CNIA (ours) | 89.36 | 85.94 | 72.06 | 83.03 | 82.60 | 74.20 | 86.51 | 89.43 | 89.39 | 84.88 | 85.22 | 94.11 | 94.55 | 82.03 | 88.98 |
Ablation | WN18RR v1 | v2 | v3 | v4
---|---|---|---|---
CNIA | 89.36 | 85.94 | 72.06 | 83.03 |
CNIA w/o Att | 85.64 | 84.69 | 66.69 | 82.40 |
CNIA w/o NRF | 86.45 | 85.06 | 71.40 | 72.98 |
CNIA w/o NRP | 84.16 | 85.10 | 71.81 | 74.07 |
CNIA w/o CSR | 86.44 | 83.90 | 70.00 | 82.15 |
CNIA w/o CL | 85.12 | 80.95 | 64.49 | 73.79 |
© 2024 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Duan, Y.; Tang, J.; Xu, H.; Liu, C.; Zeng, W. Commonsense-Guided Inductive Relation Prediction with Dual Attention Mechanism. Appl. Sci. 2024, 14, 2044. https://doi.org/10.3390/app14052044