Search Results (271)

Search Parameters:
Keywords = knowledge graph representation learning

25 pages, 783 KB  
Systematic Review
KAVAI: A Systematic Review of the Building Blocks for Knowledge-Assisted Visual Analytics in Industrial Manufacturing
by Adrian J. Böck, Stefanie Größbacher, Jan Vrablicz, Christina Stoiber, Alexander Rind, Josef Suschnigg, Tobias Schreck, Wolfgang Aigner and Markus Wagner
Appl. Sci. 2025, 15(18), 10172; https://doi.org/10.3390/app151810172 - 18 Sep 2025
Viewed by 430
Abstract
Industry 4.0 produces large volumes of sensor and machine data, offering new possibilities for manufacturing analytics but also creating challenges in combining domain knowledge with visual analysis. We present a systematic review of 13 peer-reviewed knowledge-assisted visual analytics (KAVA) systems published between 2014 and 2024, following PRISMA guidelines for the identification, screening, and inclusion processes. The survey is organized around six predefined building blocks, namely, user group, industrial domain, visualization, knowledge, data and machine learning, with a specific emphasis on the integration of knowledge and visualization in the reviewed studies. We find that ontologies, taxonomies, rule sets, and knowledge graphs provide explicit representations of expert understanding, sometimes enriched with annotations and threshold specifications. These structures are stored in RDF or graph databases, relational tables, or flat files, though interoperability is limited, and post-design contributions are not always persisted. Explicit knowledge is visualized through standard and specialized techniques, including thresholds in time-series plots, annotated dashboards, node–link diagrams, customized machine views from ontologies, and 3D digital twins with expert-defined rules. Line graphs, bar charts, and scatterplots are the most frequently used chart types, often augmented with thresholds and annotations derived from explicit knowledge. Recurring challenges include fragmented storage, heterogeneous data and knowledge types, limited automation, inconsistent validation of user input, and scarce long-term evaluations. Addressing these gaps will be essential for developing adaptable, reusable KAVA systems for industrial analytics. Full article
(This article belongs to the Section Applied Industrial Technologies)

34 pages, 1982 KB  
Article
Knowledge Graphs and Artificial Intelligence for the Implementation of Cognitive Heritage Digital Twins
by Achille Felicetti, Aida Himmiche and Miriana Somenzi
Appl. Sci. 2025, 15(18), 10061; https://doi.org/10.3390/app151810061 - 15 Sep 2025
Viewed by 1128
Abstract
This paper explores the integration of Artificial Intelligence and semantic technologies to support the creation of intelligent Heritage Digital Twins, digital constructs capable of representing, interpreting, and reasoning over cultural data. This study focuses on transforming the often fragmented and unstructured documentation produced in cultural heritage into coherent Knowledge Graphs aligned with internationally recognised standards and ontologies. Two complementary AI-assisted workflows are proposed: one for extracting and formalising structured knowledge from heritage science reports and another for enhancing AI models through the integration of curated ontological knowledge. The experiments demonstrate how this synergy facilitates both the retrieval and the reuse of complex information while ensuring interpretability and semantic consistency. Beyond technical efficacy, this paper also addresses the ethical implications of AI use in cultural heritage, with particular attention to transparency, bias mitigation, and meaningful representation of diverse narratives. The results highlight the importance of a reflexive and ethically grounded deployment of AI, where knowledge extraction and machine learning are guided by structured ontologies and human oversight, to ensure conceptual rigour and respect for cultural complexity. Full article

16 pages, 846 KB  
Article
MMKT: Multimodal Sentiment Analysis Model Based on Knowledge-Enhanced and Text-Guided Learning
by Chengkai Shi and Yunhua Zhang
Appl. Sci. 2025, 15(17), 9815; https://doi.org/10.3390/app15179815 - 7 Sep 2025
Viewed by 753
Abstract
Multimodal Sentiment Analysis (MSA) aims to predict subjective human emotions by leveraging multimodal information. However, existing research inadequately utilizes explicit sentiment semantic information at the lexical level in text and overlooks noise interference from non-dominant modalities, such as irrelevant movements in visual modalities and background noise in audio modalities. To address these issues, we propose a multimodal sentiment analysis model based on knowledge enhancement and text-guided learning (MMKT). The model constructs a sentiment knowledge graph for the textual modality using the SenticNet knowledge base. This graph directly annotates word-level sentiment polarity, strengthening the model’s understanding of emotional vocabulary. Furthermore, global sentiment knowledge features are generated through graph embedding computations to enhance the multimodal fusion process. Simultaneously, a dynamic text-guided learning approach is introduced, which leverages multi-scale textual features to actively suppress redundant or conflicting information in the visual and audio modalities, thereby generating purer cross-modal representations. Finally, concatenated textual features, cross-modal features, and knowledge features are utilized for sentiment prediction. Experimental results on the CMU-MOSEI and Twitter2019 datasets demonstrate the superior performance of the MMKT model. Full article
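
As a rough, self-contained illustration of the word-level sentiment annotation described in this abstract, the sketch below uses a tiny hand-made polarity lexicon in place of SenticNet and replaces the paper's graph-embedding computation with a simple polarity-weighted pooling; all names, scores, and dimensions are invented for the example.

```python
# Illustrative sketch (not the authors' code): word-level sentiment annotation
# with a toy polarity lexicon standing in for SenticNet, plus a pooled
# "knowledge feature" that could be concatenated with text/cross-modal features.
import numpy as np

# Hypothetical polarity scores in [-1, 1]; SenticNet would supply these per concept.
LEXICON = {"love": 0.82, "delay": -0.55, "awful": -0.91, "fine": 0.31}

def annotate(tokens):
    """Return per-token polarity, 0.0 for words absent from the lexicon."""
    return np.array([LEXICON.get(t.lower(), 0.0) for t in tokens])

def knowledge_feature(tokens, embed, dim=8):
    """Polarity-weighted mean of token embeddings -> global sentiment knowledge vector."""
    polarity = annotate(tokens)
    vectors = np.stack([embed(t, dim) for t in tokens])        # (T, dim)
    weights = np.abs(polarity) + 1e-6                          # emphasise emotional words
    return (weights[:, None] * vectors).sum(0) / weights.sum()

def toy_embed(token, dim):
    rng = np.random.default_rng(abs(hash(token)) % (2 ** 32))  # deterministic toy embedding
    return rng.standard_normal(dim)

tokens = "the delay was awful but the staff were fine".split()
print(annotate(tokens))
print(knowledge_feature(tokens, toy_embed))
```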

28 pages, 6171 KB  
Article
Semantic Path-Guided Remote Sensing Recommendation for Natural Disasters Based on Knowledge Graph
by Xiangyu Zhao, Chunju Zhang, Chenchen Luo, Jun Zhang, Chaoqun Chu, Chenxi Li, Yifan Pei and Zhaofu Wu
Sensors 2025, 25(17), 5575; https://doi.org/10.3390/s25175575 - 6 Sep 2025
Viewed by 1213
Abstract
To address the challenges of complex task matching, limited semantic representation, and low recommendation efficiency in remote sensing data acquisition for natural disasters, this study proposes a semantic path-guided recommendation method based on a knowledge graph framework. A disaster-oriented remote sensing knowledge graph is constructed by integrating entities such as disaster types, remote sensing tasks, observation requirements, sensors, and satellite platforms. High-order meta-paths with semantic closure are designed to model task–resource relationships structurally. A Meta-Path2Vec embedding mechanism is employed to learn vector representations of nodes through path-constrained random walks and Skip-Gram training, capturing implicit semantic correlations between tasks and sensors. Cosine similarity and a Top-K ranking strategy are then applied to perform intelligent task-driven sensor recommendation. Experiments on multiple disaster scenarios—such as floods, landslides, and wildfires—demonstrate the model’s high accuracy and robust stability. An interactive recommendation system is also developed, integrating data querying, model inference, and visual feedback, validating the method’s practicality and effectiveness in real-world applications. This work provides a theoretical foundation and practical solution for intelligent remote sensing data matching in disaster contexts. Full article
(This article belongs to the Collection Machine Learning and AI for Sensors)
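
To make the recommendation pipeline concrete, here is a minimal sketch of a metapath-guided random walk and cosine Top-K ranking over a toy task–sensor graph; the Skip-Gram training step of Meta-Path2Vec is omitted and random vectors stand in for the embeddings it would learn, so the graph, node names, and scores are purely illustrative.

```python
# Sketch only: metapath-guided random walks over a toy disaster-task/sensor graph,
# then cosine Top-K ranking of sensors for a task.
import random
import numpy as np

# node -> (type, neighbours); a real graph would come from the disaster KG
GRAPH = {
    "flood_task":    ("task",   ["optical_sat", "sar_sat"]),
    "wildfire_task": ("task",   ["thermal_sat", "optical_sat"]),
    "optical_sat":   ("sensor", ["flood_task", "wildfire_task"]),
    "sar_sat":       ("sensor", ["flood_task"]),
    "thermal_sat":   ("sensor", ["wildfire_task"]),
}
METAPATH = ["task", "sensor", "task"]   # simple closed task-sensor-task schema

def metapath_walk(start, length=20):
    walk, node = [start], start
    for step in range(1, length):
        wanted = METAPATH[step % (len(METAPATH) - 1)]  # next node type on the metapath
        nbrs = [n for n in GRAPH[node][1] if GRAPH[n][0] == wanted]
        if not nbrs:
            break
        node = random.choice(nbrs)
        walk.append(node)
    return walk

def topk_sensors(task_vec, sensor_vecs, k=2):
    def cos(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
    scores = {s: cos(task_vec, v) for s, v in sensor_vecs.items()}
    return sorted(scores.items(), key=lambda kv: -kv[1])[:k]

rng = np.random.default_rng(0)
emb = {n: rng.standard_normal(16) for n in GRAPH}          # stand-in embeddings
print(metapath_walk("flood_task", 8))
print(topk_sensors(emb["flood_task"],
                   {s: v for s, v in emb.items() if GRAPH[s][0] == "sensor"}))
```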

35 pages, 8966 KB  
Article
Verified Language Processing with Hybrid Explainability
by Oliver Robert Fox, Giacomo Bergami and Graham Morgan
Electronics 2025, 14(17), 3490; https://doi.org/10.3390/electronics14173490 - 31 Aug 2025
Cited by 1 | Viewed by 622
Abstract
The volume and diversity of digital information have led to a growing reliance on Machine Learning (ML) techniques, such as Natural Language Processing (NLP), for interpreting and accessing appropriate data. While vector and graph embeddings represent data for similarity tasks, current state-of-the-art pipelines lack guaranteed explainability, failing to accurately determine similarity for given full texts. These considerations can also be applied to classifiers exploiting generative language models with logical prompts, which fail to correctly distinguish between logical implication, indifference, and inconsistency, despite being explicitly trained to recognise the first two classes. We present a novel pipeline designed for hybrid explainability to address this. Our methodology combines graphs and logic to produce First-Order Logic (FOL) representations, creating machine- and human-readable representations through Montague Grammar (MG). The preliminary results indicate the effectiveness of this approach in accurately capturing full text similarity. To the best of our knowledge, this is the first approach to differentiate between implication, inconsistency, and indifference for text classification tasks. To address the limitations of existing approaches, we use three self-contained datasets annotated for the former classification task to determine the suitability of these approaches in capturing sentence structure equivalence, logical connectives, and spatiotemporal reasoning. We also use these data to compare the proposed method with language models pre-trained for detecting sentence entailment. The results show that the proposed method outperforms state-of-the-art models, indicating that natural language understanding cannot be easily generalised by training over extensive document corpora. This work offers a step toward more transparent and reliable Information Retrieval (IR) from extensive textual data. Full article

26 pages, 1255 KB  
Article
Interpretable Knowledge Tracing via Transformer-Bayesian Hybrid Networks: Learning Temporal Dependencies and Causal Structures in Educational Data
by Nhu Tam Mai, Wenyang Cao and Wenhe Liu
Appl. Sci. 2025, 15(17), 9605; https://doi.org/10.3390/app15179605 - 31 Aug 2025
Cited by 3 | Viewed by 896
Abstract
Knowledge tracing, the computational modeling of student learning progression through sequential educational interactions, represents a critical component for adaptive learning systems and personalized education platforms. However, existing approaches face a fundamental trade-off between predictive accuracy and interpretability: deep sequence models excel at capturing complex temporal dependencies in student interaction data but lack transparency in their decision-making processes, while probabilistic graphical models provide interpretable causal relationships but struggle with the complexity of real-world educational sequences. We propose a hybrid architecture that integrates transformer-based sequence modeling with structured Bayesian causal networks to overcome this limitation. Our dual-pathway design employs a transformer encoder to capture complex temporal patterns in student interaction sequences, while a differentiable Bayesian network explicitly models prerequisite relationships between knowledge components. These pathways are unified through a cross-attention mechanism that enables bidirectional information flow between temporal representations and causal structures. We introduce a joint training objective that simultaneously optimizes sequence prediction accuracy and causal graph consistency, ensuring learned temporal patterns align with interpretable domain knowledge. The model undergoes pre-training on 3.2 million student–problem interactions from diverse MOOCs to establish foundational representations, followed by domain-specific fine-tuning. Comprehensive experiments across mathematics, computer science, and language learning demonstrate substantial improvements: 8.7% increase in AUC over state-of-the-art knowledge tracing models (0.847 vs. 0.779), 12.3% reduction in RMSE for performance prediction, and 89.2% accuracy in discovering expert-validated prerequisite relationships. The model achieves a 0.763 F1-score for early at-risk student identification, outperforming baselines by 15.4%. This work demonstrates that sophisticated temporal modeling and interpretable causal reasoning can be effectively unified for educational applications. Full article
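
The cross-attention mechanism that couples the two pathways can be sketched in a few lines of PyTorch; the tensor shapes and dimensions below are assumptions chosen for illustration, not the paper's configuration.

```python
# Minimal sketch (assumptions, not the paper's code): cross-attention that lets
# a temporal pathway (transformer states over student interactions) attend to a
# causal pathway (embeddings of knowledge components / prerequisite nodes).
import torch
import torch.nn as nn

class CrossPathwayAttention(nn.Module):
    def __init__(self, dim=64, heads=4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, temporal, causal):
        # temporal: (batch, seq_len, dim) transformer states
        # causal:   (batch, n_concepts, dim) knowledge-component embeddings
        fused, weights = self.attn(query=temporal, key=causal, value=causal)
        return self.norm(temporal + fused), weights   # residual + norm

batch, seq_len, n_concepts, dim = 2, 10, 5, 64
fused, w = CrossPathwayAttention(dim)(torch.randn(batch, seq_len, dim),
                                      torch.randn(batch, n_concepts, dim))
print(fused.shape, w.shape)   # (2, 10, 64), (2, 10, 5)
```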

19 pages, 1297 KB  
Article
A Novel Method for Named Entity Recognition in Long-Text Safety Accident Reports of Prefabricated Construction
by Qianmai Luo, Guozong Zhang and Yuan Sun
Buildings 2025, 15(17), 3063; https://doi.org/10.3390/buildings15173063 - 27 Aug 2025
Viewed by 504
Abstract
Prefabricated construction represents an advanced approach to sustainable development, and safety issues in prefabricated construction projects have drawn widespread attention. Safety accident case reports contain a wealth of safety knowledge, and extracting and learning from such historical reports can significantly enhance safety management capabilities. However, these texts are often semantically complex and lengthy, posing challenges for traditional Information Extraction (IE) methods. This study focuses on the challenge of Named Entity Recognition (NER) in long texts under complex engineering contexts and proposes a novel model that integrates Modern Bidirectional Encoder Representations from Transformers (ModernBERT), Bidirectional Long Short-Term Memory (BiLSTM), and Conditional Random Field (CRF). A comparative analysis with current mainstream methods is conducted. The results show that the proposed model achieves an F1 score of 0.6234, outperforming mainstream baseline methods. Notably, it attains F1 scores of 0.95 and 0.92 for the critical entity categories “Consequence” and “Type,” respectively. The model maintains stable performance even under semantic noise interference, demonstrating strong robustness in processing unstructured and highly heterogeneous engineering texts. Compared with existing long-text NER models, the proposed method exhibits superior semantic parsing ability in engineering contexts. This study enhances information extraction methods and provides solid technical support for constructing safety knowledge graphs in prefabricated construction, thereby advancing the level of intelligence in the construction industry. Full article
(This article belongs to the Special Issue Large-Scale AI Models Across the Construction Lifecycle)
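
A minimal PyTorch sketch of the tagging stack is shown below; random tensors stand in for ModernBERT token embeddings, and a greedy argmax stands in for the CRF decoding step, so the dimensions and tag count are assumptions rather than the published configuration.

```python
# Sketch of the tagging stack described above, under stated assumptions: random
# tensors replace ModernBERT token embeddings, and a linear layer emits per-tag
# scores where the paper's CRF layer would decode the best tag sequence.
import torch
import torch.nn as nn

class BiLSTMTagger(nn.Module):
    def __init__(self, enc_dim=768, hidden=256, n_tags=9):
        super().__init__()
        self.bilstm = nn.LSTM(enc_dim, hidden, batch_first=True, bidirectional=True)
        self.emit = nn.Linear(2 * hidden, n_tags)    # emission scores per token

    def forward(self, encoder_out):
        h, _ = self.bilstm(encoder_out)              # (batch, seq, 2*hidden)
        return self.emit(h)                          # (batch, seq, n_tags)

# Stand-in for ModernBERT output over a 128-token accident-report segment.
encoder_out = torch.randn(4, 128, 768)
emissions = BiLSTMTagger()(encoder_out)
tags = emissions.argmax(-1)                          # a CRF would replace this greedy step
print(emissions.shape, tags.shape)                   # (4, 128, 9), (4, 128)
```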

17 pages, 2751 KB  
Article
Joint Extraction of Cyber Threat Intelligence Entity Relationships Based on a Parallel Ensemble Prediction Model
by Huan Wang, Shenao Zhang, Zhe Wang, Jing Sun and Qingzheng Liu
Sensors 2025, 25(16), 5193; https://doi.org/10.3390/s25165193 - 21 Aug 2025
Viewed by 864
Abstract
The construction of knowledge graphs in cyber threat intelligence (CTI) critically relies on automated entity–relation extraction. However, sequence tagging-based methods for joint entity–relation extraction are affected by the order-dependency problem. As a result, overlapping relations are handled ineffectively. To address this limitation, a parallel, ensemble-prediction–based model is proposed for joint entity–relation extraction in CTI. The joint extraction task is reformulated as an ensemble prediction problem. A joint network that combines Bidirectional Encoder Representations from Transformers (BERT) with a Bidirectional Gated Recurrent Unit (BiGRU) is constructed to capture deep contextual features in sentences. An ensemble prediction module and a triad representation of entity–relation facts are designed for joint extraction. A non-autoregressive decoder is employed to generate relation triad sets in parallel, thereby avoiding unnecessary sequential constraints during decoding. In the threat intelligence domain, labeled data are scarce and manual annotation is costly. To mitigate these constraints, the SecCti dataset is constructed by leveraging ChatGPT’s small-sample learning capability for labeling and augmentation. This approach reduces annotation costs effectively. Experimental results show a 4.6% absolute F1 improvement over the baseline on joint entity–relation extraction for threat intelligence concerning Advanced Persistent Threats (APTs) and cybercrime activities. Full article
(This article belongs to the Section Sensor Networks)
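
The parallel (non-autoregressive) decoding idea can be illustrated with a small set-prediction head: a fixed number of learned queries each emit a relation label and head/tail token pointers in one pass. The sketch below is an assumption-laden simplification, with random tensors standing in for BERT+BiGRU token features.

```python
# Hedged sketch of parallel triple-set decoding: M learned queries each predict a
# relation label plus head/tail token pointers in one shot, with no sequential
# ordering imposed between triples.
import torch
import torch.nn as nn

class TripleSetDecoder(nn.Module):
    def __init__(self, dim=256, n_relations=12, n_queries=8):
        super().__init__()
        self.queries = nn.Parameter(torch.randn(n_queries, dim))
        self.rel_head = nn.Linear(dim, n_relations + 1)   # +1 for a "no triple" class
        self.head_ptr = nn.Linear(dim, dim)
        self.tail_ptr = nn.Linear(dim, dim)

    def forward(self, tokens):                 # tokens: (batch, seq, dim)
        q = self.queries.unsqueeze(0).expand(tokens.size(0), -1, -1)
        rel_logits = self.rel_head(q)                              # (B, M, R+1)
        head_logits = self.head_ptr(q) @ tokens.transpose(1, 2)    # (B, M, seq)
        tail_logits = self.tail_ptr(q) @ tokens.transpose(1, 2)
        return rel_logits, head_logits, tail_logits               # decoded in parallel

tokens = torch.randn(2, 64, 256)               # stand-in BERT+BiGRU features
rel, head, tail = TripleSetDecoder()(tokens)
print(rel.shape, head.shape, tail.shape)        # (2, 8, 13) (2, 8, 64) (2, 8, 64)
```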

19 pages, 3172 KB  
Article
RASD: Relation Aware Spectral Decoupling Attention Network for Knowledge Graph Reasoning
by Zheng Wang, Taiyu Li and Zengzhao Chen
Appl. Sci. 2025, 15(16), 9049; https://doi.org/10.3390/app15169049 - 16 Aug 2025
Viewed by 669
Abstract
Knowledge Graph Reasoning (KGR) aims to deduce missing or novel knowledge by learning structured information and semantic relationships within Knowledge Graphs (KGs). Despite significant advances achieved by deep neural networks in recent years, existing models typically extract non-linear representations from explicit features in a relatively simplistic manner and fail to fully exploit the semantic heterogeneity of relation types and entity co-occurrence frequencies. Consequently, these models struggle to capture critical predictive cues embedded in various entities and relations. To address these limitations, this paper proposes a relation-aware spectral decoupling attention network for KGR (RASD). First, a spectral decoupling attention network module projects joint embeddings of entities and relations into the frequency domain, extracting features across different frequency bands and adaptively allocating attention at the global level to model frequency-specific information. Next, a relation-aware learning module employs relation-aware filters and an augmentation mechanism to preserve distinct relational properties and suppress redundant features, thereby enhancing the representation of heterogeneous relations. Experimental results demonstrate that RASD achieves significant and consistent improvements over multiple leading baseline models on link prediction tasks across five public benchmark datasets. Full article
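
As an intuition for the spectral decoupling step, the sketch below projects a joint entity–relation embedding into the frequency domain with an FFT, splits it into bands, and reweights the bands with a softmax over band energies; the band count and weighting rule are illustrative assumptions, not the RASD architecture.

```python
# Illustrative sketch (assumptions, not the RASD implementation): frequency-domain
# view of a joint entity-relation embedding with adaptive per-band reweighting.
import numpy as np

def band_attention(embedding, n_bands=4):
    spec = np.fft.rfft(embedding)                       # frequency-domain projection
    bands = np.array_split(np.arange(spec.size), n_bands)
    energy = np.array([np.abs(spec[idx]).mean() for idx in bands])
    attn = np.exp(energy) / np.exp(energy).sum()        # softmax over band energies
    for weight, idx in zip(attn, bands):
        spec[idx] *= weight                             # adaptive per-band scaling
    return np.fft.irfft(spec, n=embedding.size), attn

rng = np.random.default_rng(0)
entity, relation = rng.standard_normal(64), rng.standard_normal(64)
filtered, attn = band_attention(entity + relation)      # joint embedding, as in the text
print(filtered.shape, attn.round(3))
```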

22 pages, 894 KB  
Article
Adaptive Knowledge Assessment via Symmetric Hierarchical Bayesian Neural Networks with Graph Symmetry-Aware Concept Dependencies
by Wenyang Cao, Nhu Tam Mai and Wenhe Liu
Symmetry 2025, 17(8), 1332; https://doi.org/10.3390/sym17081332 - 15 Aug 2025
Cited by 9 | Viewed by 703
Abstract
Traditional educational assessment systems suffer from inefficient question selection strategies that fail to optimally probe student knowledge while requiring extensive testing time. We present a novel hierarchical probabilistic neural framework that integrates Bayesian inference with symmetric deep neural architectures to enable adaptive, efficient knowledge assessment. Our method models student knowledge as latent representations within a graph-structured concept dependency network, where probabilistic mastery states, updated through variational inference, are encoded by symmetric graph properties and symmetric concept representations that preserve structural equivalences across similar knowledge configurations. The system employs a symmetric dual-network architecture: a concept embedding network that learns scale-invariant hierarchical knowledge representations from assessment data and a question selection network that optimizes symmetric information gain through deep reinforcement learning with symmetric reward structures. We introduce a novel uncertainty-aware objective function that leverages symmetric uncertainty measures to balance exploration of uncertain knowledge regions with exploitation of informative question patterns. The hierarchical structure captures both fine-grained concept mastery and broader domain understanding through multi-scale graph convolutions that preserve local graph symmetries and global structural invariances. Our symmetric information-theoretic method ensures balanced assessment strategies that maintain diagnostic equivalence across isomorphic concept subgraphs. Experimental validation on large-scale educational datasets demonstrates that our method achieves 76.3% diagnostic accuracy while reducing the question count by 35.1% compared to traditional assessments. The learned concept embeddings reveal interpretable knowledge structures with symmetric dependency patterns that align with pedagogical theory. Our work generalizes across domains and student populations through symmetric transfer learning mechanisms, providing a principled framework for intelligent tutoring systems and adaptive testing platforms. The integration of probabilistic reasoning with symmetric neural pattern recognition offers a robust solution to the fundamental trade-off between assessment efficiency and diagnostic precision in educational technology. Full article
(This article belongs to the Special Issue Advances in Graph Theory Ⅱ)
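
A heavily simplified, hypothetical version of uncertainty-aware question selection is sketched below: mastery of each concept is a Bernoulli belief, and the next question is the one with the largest expected entropy reduction under a slip/guess response model. The paper's graph-structured, neural formulation is far richer; this only shows the information-gain intuition.

```python
# Toy sketch (assumed simplification, not the paper's objective): pick the concept
# whose question most reduces the expected entropy of a Bernoulli mastery belief.
import numpy as np

SLIP, GUESS = 0.1, 0.2   # assumed noisy-response parameters

def entropy(p):
    p = np.clip(p, 1e-9, 1 - 1e-9)
    return -(p * np.log2(p) + (1 - p) * np.log2(1 - p))

def expected_info_gain(p_mastery):
    p_correct = p_mastery * (1 - SLIP) + (1 - p_mastery) * GUESS
    post_correct = p_mastery * (1 - SLIP) / p_correct             # Bayes update if correct
    post_wrong = p_mastery * SLIP / (1 - p_correct)               # Bayes update if wrong
    expected_post = p_correct * entropy(post_correct) + (1 - p_correct) * entropy(post_wrong)
    return entropy(p_mastery) - expected_post

belief = np.array([0.05, 0.50, 0.93, 0.70])                       # current mastery beliefs
gains = expected_info_gain(belief)
print("ask about concept", int(np.argmax(gains)), gains.round(3)) # most informative question
```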

24 pages, 653 KB  
Article
Yul2Vec: Yul Code Embeddings
by Krzysztof Fonał
Appl. Sci. 2025, 15(16), 8915; https://doi.org/10.3390/app15168915 - 13 Aug 2025
Viewed by 550
Abstract
In this paper, I propose Yul2Vec, a novel method for representing Yul programs as distributed embeddings in continuous space. Yul serves as an intermediate language between Solidity and Ethereum Virtual Machine (EVM) bytecode, designed to enable more efficient optimization of smart contract execution compared to direct Solidity-to-bytecode compilation. The vectorization of a program is achieved by aggregating the embeddings of its constituent code elements from the bottom to the top of the program structure. The representation of the smallest construction units, known as opcodes (operation codes), along with their types and arguments, is generated using knowledge graph relationships to construct a seed vocabulary, which forms the foundation for this approach. This research is important for enabling future enhancements to the Solidity compiler, paving the way for advanced optimizations of Yul and, consequently, EVM code. Optimizing the EVM bytecode is essential not only for improving performance but also for minimizing the operational costs of smart contracts—a key concern for decentralized applications. By introducing Yul2Vec, this paper aims to provide a foundation for further research into compiler optimization techniques and cost-efficient smart contract execution on Ethereum. The proposed method is not only fast in learning embeddings but also efficient in calculating the final vector representation of Yul code, making it feasible to integrate this step into the future compilation process of Solidity-based smart contracts. Full article
(This article belongs to the Section Computing and Artificial Intelligence)
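
The bottom-up aggregation described here can be sketched with a toy expression tree: seed vectors for opcodes (which Yul2Vec would derive from its knowledge-graph vocabulary) are pooled child-first up to the program root. The pooling rule and vocabulary below are invented for illustration.

```python
# Hedged sketch of bottom-up code vectorization in the spirit described above;
# the seed vocabulary and pooling rule are stand-ins, not Yul2Vec itself.
import numpy as np

DIM = 16
rng = np.random.default_rng(42)
SEED_VOCAB = {op: rng.standard_normal(DIM) for op in
              ["add", "mload", "mstore", "literal", "identifier"]}   # toy seed vocabulary

def embed(node):
    """node = (opcode, [children]); returns a vector for the whole subtree."""
    opcode, children = node
    own = SEED_VOCAB[opcode]
    if not children:
        return own
    child_vecs = np.stack([embed(c) for c in children])
    return (own + child_vecs.mean(axis=0)) / 2.0       # simple parent/child pooling

# mstore(0x40, add(mload(0x40), 0x20)) as a nested (opcode, children) tree
tree = ("mstore", [("literal", []),
                   ("add", [("mload", [("literal", [])]), ("literal", [])])])
print(embed(tree)[:4])        # program-level vector built bottom-up
```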

31 pages, 8113 KB  
Article
An Autoencoder-like Non-Negative Matrix Factorization with Structure Regularization Algorithm for Clustering
by Haiyan Gao and Ling Zhong
Symmetry 2025, 17(8), 1283; https://doi.org/10.3390/sym17081283 - 10 Aug 2025
Viewed by 723
Abstract
Clustering plays a crucial role in data mining and knowledge discovery, where non-negative matrix factorization (NMF) has attracted widespread attention due to its effective data representation and dimensionality reduction capabilities. However, standard NMF has inherent limitations when processing sampled data embedded in low-dimensional manifold structures within high-dimensional ambient spaces, failing to effectively capture the complex structural information hidden in feature manifolds and sampling manifolds, and neglecting the learning of global structures. To address these issues, a novel structure regularization autoencoder-like non-negative matrix factorization for clustering (SRANMF) is proposed. Firstly, based on the non-negative symmetric encoder-decoder framework, we construct an autoencoder-like NMF model to enhance the characterization ability of latent information in data. Then, by fully considering high-order neighborhood relationships in the data, an optimal graph regularization strategy is introduced to preserve multi-order topological information structures. Additionally, principal component analysis (PCA) is employed to measure global data structures by maximizing the variance of projected data. Comparative experiments on 11 benchmark datasets demonstrate that SRANMF exhibits excellent clustering performance. Specifically, on the large-scale complex datasets MNIST and COIL100, the clustering evaluation metrics improved by an average of 35.31% and 46.17% (ACC) and 47.12% and 18.10% (NMI), respectively. Full article
(This article belongs to the Section Computer)
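
For orientation, the sketch below implements the classical graph-regularized NMF baseline (X ≈ UVᵀ with a Tr(VᵀLV) penalty and multiplicative updates) that SRANMF builds on; the autoencoder-like structure and higher-order neighborhood terms of the proposed method are not reproduced here.

```python
# Baseline sketch: classical graph-regularized NMF with multiplicative updates.
# This illustrates the core recipe only, not the SRANMF variant described above.
import numpy as np

def graph_nmf(X, W, k=5, lam=1.0, iters=200, eps=1e-9):
    """X: (features, samples) non-negative data; W: (samples, samples) affinity graph."""
    m, n = X.shape
    D = np.diag(W.sum(axis=1))                        # degree matrix of the graph
    rng = np.random.default_rng(0)
    U, V = rng.random((m, k)), rng.random((n, k))
    for _ in range(iters):
        U *= (X @ V) / (U @ V.T @ V + eps)
        V *= (X.T @ U + lam * (W @ V)) / (V @ U.T @ U + lam * (D @ V) + eps)
    return U, V   # rows of V act as cluster-friendly sample representations

rng = np.random.default_rng(1)
X = np.abs(rng.standard_normal((40, 60)))
W = (rng.random((60, 60)) < 0.1).astype(float)
W = np.maximum(W, W.T)                                # symmetric affinity stand-in
U, V = graph_nmf(X, W)
print(U.shape, V.shape, np.linalg.norm(X - U @ V.T))
```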

12 pages, 2368 KB  
Article
Uncertainty-Aware Continual Reinforcement Learning via PPO with Graph Representation Learning
by Dongjae Kim
Mathematics 2025, 13(16), 2542; https://doi.org/10.3390/math13162542 - 8 Aug 2025
Viewed by 734
Abstract
Continual reinforcement learning (CRL) agents face significant challenges when encountering distributional shifts. This paper formalizes these shifts into two key scenarios, namely virtual drift (domain switches), where object semantics change (e.g., walls becoming lava), and concept drift (task switches), where the environment’s structure is reconfigured (e.g., moving from object navigation to a door key puzzle). This paper demonstrates that while conventional convolutional neural networks (CNNs) struggle to preserve relational knowledge during these transitions, graph convolutional networks (GCNs) can inherently mitigate catastrophic forgetting by encoding object interactions through explicit topological reasoning. A unified framework is proposed that integrates GCN-based state representation learning with a proximal policy optimization (PPO) agent. The GCN’s message-passing mechanism preserves invariant relational structures, which diminishes performance degradation during abrupt domain switches. Experiments conducted in procedurally generated MiniGrid environments show that the method significantly reduces catastrophic forgetting in domain switch scenarios. While showing comparable mean performance in task switch scenarios, our method demonstrates substantially lower performance variance (Levene’s test, p < 1.0 × 10⁻¹⁰), indicating superior learning stability compared to CNN-based methods. By bridging graph representation learning with robust policy optimization in CRL, this research advances the stability of decision-making in dynamic environments and establishes GCNs as a principled alternative to CNNs for applications requiring stable, continual learning. Full article
(This article belongs to the Special Issue Decision Making under Uncertainty in Soft Computing)
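
A minimal sketch of the state-representation idea: one GCN layer over an object-interaction graph, mean-pooled into a state embedding that feeds PPO-style policy and value heads. The graph construction from MiniGrid observations and the PPO update itself are omitted, and all sizes are assumptions.

```python
# Sketch under stated assumptions: a single GCN layer (H' = ReLU(A_hat H W)) encodes
# an object-interaction graph, and the pooled embedding feeds policy/value heads.
import torch
import torch.nn as nn

class GCNPolicy(nn.Module):
    def __init__(self, feat_dim=8, hidden=32, n_actions=7):
        super().__init__()
        self.w = nn.Linear(feat_dim, hidden, bias=False)
        self.policy = nn.Linear(hidden, n_actions)
        self.value = nn.Linear(hidden, 1)

    def forward(self, x, adj):
        # adj: (nodes, nodes) adjacency with self-loops; symmetric normalisation
        deg = adj.sum(-1)
        a_norm = adj / torch.sqrt(deg.unsqueeze(0) * deg.unsqueeze(1))
        h = torch.relu(a_norm @ self.w(x))         # message passing over object relations
        state = h.mean(dim=0)                       # graph-level state embedding
        return torch.distributions.Categorical(logits=self.policy(state)), self.value(state)

x = torch.randn(6, 8)                               # 6 objects (agent, wall, lava, key, ...)
adj = torch.eye(6) + (torch.rand(6, 6) > 0.7).float()
adj = ((adj + adj.T) > 0).float()                   # symmetric adjacency with self-loops
dist, value = GCNPolicy()(x, adj)
print(dist.sample().item(), value.item())
```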

16 pages, 1618 KB  
Article
Multimodal Temporal Knowledge Graph Embedding Method Based on Mixture of Experts for Recommendation
by Bingchen Liu, Guangyuan Dong, Zihao Li, Yuanyuan Fang, Jingchen Li, Wenqi Sun, Bohan Zhang, Changzhi Li and Xin Li
Mathematics 2025, 13(15), 2496; https://doi.org/10.3390/math13152496 - 3 Aug 2025
Viewed by 1260
Abstract
Knowledge-graph-based recommendation aims to provide personalized recommendation services to users based on their historical interaction information, which is of great significance for shopping transaction rates and other aspects. With the rapid growth of online shopping, the knowledge graph constructed from users’ historical interaction data now incorporates multiattribute information, including timestamps, images, and textual content. Information from these multiple modalities is difficult to utilize effectively because of their differing representation structures and spaces. Existing methods attempt to utilize this information through simple embedding representation and aggregation, but they ignore targeted representation learning for information with different attributes and fail to learn effective weights for aggregation. In addition, existing methods do not model temporal information effectively. In this article, we propose MTR, a knowledge graph recommendation framework based on a mixture-of-experts network. Specifically, we use a mixture-of-experts network to learn targeted representations and weights of different product attributes for effective modeling and utilization. In addition, we model the temporal information of the user shopping process. A thorough experimental study on popular benchmarks validates that MTR achieves competitive results. Full article
(This article belongs to the Special Issue Data-Driven Decentralized Learning for Future Communication Networks)
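
The gated fusion of attribute-specific experts can be sketched as follows: one small expert per modality (text, image, timestamp) and a gate that learns per-item mixture weights. The modalities and dimensions are illustrative assumptions, not the MTR configuration.

```python
# Hedged sketch of the mixture-of-experts fusion idea described above: attribute-
# specific experts plus a learned gate that weights their contributions per item.
import torch
import torch.nn as nn

class AttributeMoE(nn.Module):
    def __init__(self, dims, out=32):
        super().__init__()
        self.experts = nn.ModuleDict({k: nn.Linear(d, out) for k, d in dims.items()})
        self.gate = nn.Linear(sum(dims.values()), len(dims))

    def forward(self, feats):                       # feats: dict of (batch, dim) tensors
        keys = list(self.experts.keys())
        gate_w = torch.softmax(self.gate(torch.cat([feats[k] for k in keys], dim=-1)), dim=-1)
        expert_out = torch.stack([self.experts[k](feats[k]) for k in keys], dim=1)  # (B, E, out)
        return (gate_w.unsqueeze(-1) * expert_out).sum(dim=1)    # gated fusion per item

moe = AttributeMoE({"text": 64, "image": 128, "time": 8})
feats = {"text": torch.randn(4, 64), "image": torch.randn(4, 128), "time": torch.randn(4, 8)}
print(moe(feats).shape)                             # torch.Size([4, 32])
```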

15 pages, 1515 KB  
Article
Ontology-Based Data Pipeline for Semantic Reaction Classification and Research Data Management
by Hendrik Borgelt, Frederick Gabriel Kitel and Norbert Kockmann
Computers 2025, 14(8), 311; https://doi.org/10.3390/computers14080311 - 1 Aug 2025
Viewed by 546
Abstract
Catalysis research is complex and interdisciplinary, involving diverse physical effects and challenging data practices. Research data often captures only selected aspects, such as specific reactants and products, limiting its utility for machine learning and the implementation of FAIR (Findable, Accessible, Interoperable, Reusable) workflows. To improve this, semantic structuring through ontologies is essential. This work extends established ontologies by refining logical relations and integrating semantic tools such as the Web Ontology Language (OWL) and the Shapes Constraint Language (SHACL). It incorporates application programming interfaces from chemical databases, such as the Kyoto Encyclopedia of Genes and Genomes and the National Institutes of Health’s PubChem database, and builds upon established ontologies. A key innovation lies in automatically decomposing chemical substances through database entries and chemical identifier representations to identify functional groups, enabling more generalized reaction classification. Using this new semantic functionality, functional groups are addressed flexibly, improving the classification of reactions such as saponification and ester cleavage with simultaneous oxidation. A graphical user interface (GUI) supports user interaction with the knowledge graph, enabling ontological reasoning and querying. This approach demonstrates the improved specificity of the newly established ontology over its predecessors and offers a more user-friendly interface for engaging with structured chemical knowledge. Future work will focus on expanding ontology coverage to support a wider range of reactions in catalysis research. Full article
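
A minimal rdflib sketch of the kind of knowledge-graph querying such a pipeline enables is shown below; the namespace, class, and property names are hypothetical stand-ins, not the published ontology's terms.

```python
# Minimal sketch (hypothetical namespace and terms, not the published ontology):
# a tiny RDF graph annotates a reaction's reactant with an ester functional group,
# and a SPARQL query classifies reactions by the groups their reactants carry.
from rdflib import Graph, Namespace, RDF

EX = Namespace("http://example.org/reaction#")
g = Graph()
g.add((EX.r1, RDF.type, EX.Reaction))
g.add((EX.r1, EX.hasReactant, EX.ethylAcetate))
g.add((EX.ethylAcetate, EX.hasFunctionalGroup, EX.Ester))
g.add((EX.r1, EX.hasProduct, EX.ethanol))

query = """
PREFIX ex: <http://example.org/reaction#>
SELECT ?reaction WHERE {
  ?reaction a ex:Reaction ;
            ex:hasReactant ?substance .
  ?substance ex:hasFunctionalGroup ex:Ester .
}
"""
for row in g.query(query):
    print("ester-cleaving candidate:", row.reaction)
```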