Search Results (640)

Search Parameters:
Keywords = graph attention neural network

33 pages, 2850 KB  
Review
Network Traffic Analysis Based on Graph Neural Networks: A Scoping Review
by Ruonan Wang, Jinjing Zhao, Hongzheng Zhang, Liqiang He, Hu Li and Minhuan Huang
Big Data Cogn. Comput. 2025, 9(11), 270; https://doi.org/10.3390/bdcc9110270 - 24 Oct 2025
Abstract
Network traffic analysis is crucial for understanding network behavior and identifying underlying applications, protocols, and service groups. The increasing complexity of network environments, driven by the evolution of the Internet, poses significant challenges to traditional analytical approaches. Graph Neural Networks (GNNs) have recently garnered considerable attention in network traffic analysis due to their ability to model complex relationships within network flows and between communicating entities. This scoping review systematically surveys major academic databases, employing predefined eligibility criteria to identify and synthesize key research in the field, following the Preferred Reporting Items for Systematic reviews and Meta-Analyses extension for Scoping Reviews (PRISMA-ScR) methodology. We present a comprehensive overview of a generalized architecture for GNN-based traffic analysis and categorize recent methods into three primary types: node prediction, edge prediction, and graph prediction. We discuss challenges in network traffic analysis, summarize solutions from various methods, and provide practical recommendations for model selection. This review also compiles publicly available datasets and open-source code, serving as valuable resources for further research. Finally, we outline future research directions to advance this field. This work offers an updated understanding of GNN applications in network traffic analysis and provides practical guidance for researchers and practitioners.

14 pages, 1036 KB  
Article
Biomedical Knowledge Graph Embedding with Hierarchical Capsule Network and Rotational Symmetry for Drug-Drug Interaction Prediction
by Sensen Zhang, Xia Li, Yang Liu, Peng Bi and Tiangui Hu
Symmetry 2025, 17(11), 1793; https://doi.org/10.3390/sym17111793 - 23 Oct 2025
Abstract
The forecasting of Drug-Drug Interactions (DDIs) is essential in pharmacology and clinical practice to prevent adverse drug reactions. Existing approaches, often based on neural networks and knowledge graph embedding, face limitations in modeling correlations among drug features and in handling complex BioKG relations, such as one-to-many, hierarchical, and composite interactions. To address these issues, we propose Rot4Cap, a novel framework that embeds drug entity pairs and BioKG relationships into a four-dimensional vector space, enabling effective modeling of diverse mapping properties and hierarchical structures. In addition, our method integrates molecular structures and drug descriptions with BioKG entities, and it employs capsule network–based attention routing to capture feature correlations. Experiments on three benchmark BioKG datasets demonstrate that Rot4Cap outperforms state-of-the-art baselines, highlighting its effectiveness and robustness.
(This article belongs to the Section Computer)

26 pages, 32866 KB  
Article
Low-Altitude Multi-Object Tracking via Graph Neural Networks with Cross-Attention and Reliable Neighbor Guidance
by Hanxiang Qian, Xiaoyong Sun, Runze Guo, Shaojing Su, Bing Ding and Xiaojun Guo
Remote Sens. 2025, 17(20), 3502; https://doi.org/10.3390/rs17203502 - 21 Oct 2025
Abstract
In low-altitude multi-object tracking (MOT), challenges such as frequent inter-object occlusion and complex non-linear motion disrupt the appearance of individual targets and the continuity of their trajectories, leading to frequent tracking failures. We posit that the relatively stable spatio-temporal relationships within object groups (e.g., pedestrians and vehicles) offer powerful contextual cues to resolve such ambiguities. We present NOWA-MOT (Neighbors Know Who We Are), a novel tracking-by-detection framework designed to systematically exploit this principle through a multi-stage association process. We make three primary contributions. First, we introduce a Low-Confidence Occlusion Recovery (LOR) module that dynamically adjusts detection scores by integrating IoU, a novel Recovery IoU (RIoU) metric, and location similarity to surrounding objects, enabling occluded targets to participate in high-priority matching. Second, for initial data association, we propose a Graph Cross-Attention (GCA) mechanism. In this module, separate graphs are constructed for detections and trajectories, and a cross-attention architecture is employed to propagate rich contextual information between them, yielding highly discriminative feature representations for robust matching. Third, to resolve the remaining ambiguities, we design a cascaded Matched Neighbor Guidance (MNG) module, which uniquely leverages the reliably matched pairs from the first stage as contextual anchors. Through MNG, star-shaped topological features are built for unmatched objects relative to their stable neighbors, enabling accurate association even when intrinsic features are weak. Our comprehensive experimental evaluation on the VisDrone2019 and UAVDT datasets confirms the superiority of our approach, achieving state-of-the-art HOTA scores of 51.34% and 62.69%, respectively, and drastically reducing identity switches compared to previous methods.

18 pages, 1825 KB  
Article
Fast Deep Belief Propagation: An Efficient Learning-Based Algorithm for Solving Constraint Optimization Problems
by Shufeng Kong, Feifan Chen, Zijie Wang and Caihua Liu
Mathematics 2025, 13(20), 3349; https://doi.org/10.3390/math13203349 - 21 Oct 2025
Abstract
Belief Propagation (BP) is a fundamental heuristic for solving Constraint Optimization Problems (COPs), yet its practical applicability is constrained by slow convergence and instability in loopy factor graphs. While Damped BP (DBP) improves convergence by using manually tuned damping factors, its reliance on labor-intensive hyperparameter optimization limits scalability. Deep Attentive BP (DABP) addresses this by automating damping through recurrent neural networks (RNNs), but introduces significant memory overhead and sequential computation bottlenecks. To reduce memory usage and accelerate deep belief propagation, this paper introduces Fast Deep Belief Propagation (FDBP), a deep learning framework that improves COP solving through online self-supervised learning and graphics processing unit (GPU) acceleration. FDBP decouples the learning of damping factors from BP message passing, inferring all parameters for an entire BP iteration in a single step, and leverages mixed precision to further optimize GPU memory usage. This approach substantially improves both the efficiency and scalability of BP optimization. Extensive evaluations on synthetic and real-world benchmarks highlight the superiority of FDBP, especially for large-scale instances where DABP fails due to memory constraints. Moreover, FDBP achieves an average speedup of 2.87× over DABP with the same restart counts. Because BP for COPs is a mathematically grounded GPU-parallel message-passing framework that bridges applied mathematics, computing, and machine learning, and is widely applicable across science and engineering, our work offers a promising step toward more efficient solutions to these problems.
(This article belongs to the Special Issue Applied Mathematics, Computing, and Machine Learning)
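The damping idea that DBP and FDBP build on fits in a few lines. The sketch below shows only the generic damped message update on made-up values; the function name, the damping constant, and the 3-value message are illustrative, not the paper's learned per-iteration damping:

```python
import numpy as np

def damped_update(m_old, m_new, damping):
    """Damped belief-propagation message update: a convex mix of the
    previous message and the freshly computed one. Damping close to 1
    changes messages slowly, which stabilizes BP on loopy graphs."""
    return damping * m_old + (1.0 - damping) * m_new

# Illustrative message over a 3-value variable domain.
m = damped_update(np.array([0.2, 0.5, 0.3]),
                  np.array([0.6, 0.1, 0.3]),
                  damping=0.5)
```

FDBP's contribution is to infer such damping factors for a whole BP iteration in one step instead of tuning them by hand or unrolling an RNN per message.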

19 pages, 674 KB  
Article
Reservoir Computation with Networks of Differentiating Neuron Ring Oscillators
by Alexander Yeung, Peter DelMastro, Arjun Karuvally, Hava Siegelmann, Edward Rietman and Hananel Hazan
Analytics 2025, 4(4), 28; https://doi.org/10.3390/analytics4040028 - 20 Oct 2025
Abstract
Reservoir computing is an approach to machine learning that leverages the dynamics of a complex system alongside a simple, often linear, machine learning model for a designated task. While many efforts have previously focused their attention on integrating neurons, which produce an output in response to large, sustained inputs, we focus on using differentiating neurons, which produce an output in response to large changes in input. Here, we introduce a small-world graph built from rings of differentiating neurons as a Reservoir Computing substrate. We find the coupling strength and network topology that enable these small-world networks to function as an effective reservoir. The dynamics of differentiating neurons naturally give rise to oscillatory dynamics when arranged in rings, where we study their computational use in the Reservoir Computing setting. We demonstrate the efficacy of these networks in the MNIST digit recognition task, achieving performance of 90.65%, comparable to existing Reservoir Computing approaches. Beyond accuracy, we conduct systematic analysis of our reservoir’s internal dynamics using three complementary complexity measures that quantify neuronal activity balance, input dependence, and effective dimensionality. Our analysis reveals that optimal performance emerges when the reservoir operates with intermediate levels of neural entropy and input sensitivity, consistent with the edge-of-chaos hypothesis, where the system balances stability and responsiveness. The findings suggest that differentiating neurons can be a potential alternative to integrating neurons and can provide a sustainable future alternative for power-hungry AI applications.
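A minimal reservoir in this spirit can be sketched as follows. The wiring (one input weight vector, a single ring coupling, tanh of the change in drive) is an illustrative simplification of differentiating-neuron dynamics, not the authors' exact model, and all parameter values are assumptions:

```python
import numpy as np

def ring_reservoir(inputs, n_neurons=16, coupling=0.9, seed=0):
    """Tiny reservoir of differentiating neurons arranged in a ring.
    Each neuron responds to the *change* in its drive rather than
    its magnitude, so sustained inputs fade while transitions echo
    around the ring."""
    rng = np.random.default_rng(seed)
    w_in = rng.standard_normal(n_neurons) * 0.5
    state = np.zeros(n_neurons)
    prev_drive = np.zeros(n_neurons)
    states = []
    for u in inputs:
        # Drive = external input + coupling from the ring neighbor.
        drive = w_in * u + coupling * np.roll(state, 1)
        state = np.tanh(drive - prev_drive)  # differentiate: react to change
        prev_drive = drive
        states.append(state.copy())
    return np.array(states)

# Collected states would feed a simple linear readout for the task.
states = ring_reservoir(np.sin(np.linspace(0, 4 * np.pi, 50)))
```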

18 pages, 1960 KB  
Article
CasDacGCN: A Dynamic Attention-Calibrated Graph Convolutional Network for Information Popularity Prediction
by Bofeng Zhang, Yanlin Zhu, Zhirong Zhang, Kaili Liao, Sen Niu, Bingchun Li and Haiyan Li
Entropy 2025, 27(10), 1064; https://doi.org/10.3390/e27101064 - 14 Oct 2025
Abstract
Information popularity prediction is a critical problem in social network analysis. With the increasing prevalence of social platforms, accurate prediction of the diffusion process has become increasingly important. Existing methods mainly rely on graph neural networks to model structural relationships, but they are often insufficient in capturing the complex interplay between temporal evolution and local cascade structures, especially in real-world scenarios involving sparse or rapidly changing cascades. To address this issue, we propose the Cascading Dynamic attention-calibrated Graph Convolutional Network, named CasDacGCN. It enhances prediction performance through spatiotemporal feature fusion and adaptive representation learning. The model integrates snapshot-level local encoding, global temporal modeling, cross-attention mechanisms, and a hypernetwork-based sample-wise calibration strategy, enabling flexible modeling of multi-scale diffusion patterns. Results from experiments demonstrate that the proposed model consistently surpasses existing approaches on two real-world datasets, validating its effectiveness in popularity prediction tasks.

22 pages, 724 KB  
Article
State of Health Estimation for Batteries Based on a Dynamic Graph Pruning Neural Network with a Self-Attention Mechanism
by Xuanyuan Gu, Mu Liu and Jilun Tian
Energies 2025, 18(20), 5333; https://doi.org/10.3390/en18205333 - 10 Oct 2025
Abstract
The accurate estimation of the state of health (SOH) of lithium-ion batteries is critical for ensuring the safety, reliability, and efficiency of modern energy storage systems. Traditional model-based and data-driven methods often struggle to capture complex spatiotemporal degradation patterns, leading to reduced accuracy and robustness. To address these limitations, this paper proposes a novel dynamic graph pruning neural network with self-attention mechanism (DynaGPNN-SAM) for SOH estimation. The method transforms sequential battery features into graph-structured representations, enabling the explicit modeling of spatial dependencies among operational variables. A self-attention-guided pruning strategy is introduced to dynamically preserve informative nodes while filtering redundant ones, thereby enhancing interpretability and computational efficiency. The framework is validated on the NASA lithium-ion battery dataset, with extensive experiments and ablation studies demonstrating superior performance compared to conventional approaches. Results show that DynaGPNN-SAM achieves lower root mean square error (RMSE) and mean absolute error (MAE) values across multiple batteries, particularly excelling during rapid degradation phases. Overall, the proposed approach provides an accurate, robust, and scalable solution for real-world battery management systems.
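Attention-guided node pruning of the kind DynaGPNN-SAM describes can be illustrated with a toy scorer. Here a feature-norm score stands in for the learned self-attention weights, and the function name and keep ratio are hypothetical:

```python
import numpy as np

def attention_prune(node_feats, keep_ratio=0.5):
    """Score each node, softmax the scores into attention weights,
    and keep only the top-k most informative nodes. A feature-norm
    score substitutes for a learned attention module."""
    scores = np.linalg.norm(node_feats, axis=1)      # stand-in attention score
    weights = np.exp(scores) / np.exp(scores).sum()  # softmax over nodes
    k = max(1, int(keep_ratio * len(node_feats)))
    keep = np.sort(np.argsort(weights)[-k:])         # indices of top-k nodes
    return keep, node_feats[keep]

# Four nodes with 2-d features; the two largest-norm nodes survive.
feats = np.array([[0.1, 0.2], [2.0, 1.5], [0.0, 0.1], [1.0, 1.0]])
idx, pruned = attention_prune(feats, keep_ratio=0.5)
```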

30 pages, 5986 KB  
Article
Attention-Aware Graph Neural Network Modeling for AIS Reception Area Prediction
by Ambroise Renaud, Clément Iphar and Aldo Napoli
Sensors 2025, 25(19), 6259; https://doi.org/10.3390/s25196259 - 9 Oct 2025
Abstract
Accurately predicting the reception area of the Automatic Identification System (AIS) is critical for ship tracking and anomaly detection, as errors in signal interpretation may lead to incorrect vessel localization and behavior analysis. However, traditional propagation models, whether deterministic, empirical, or semi-empirical, face limitations in dynamic environments due to their reliance on detailed atmospheric and terrain inputs. To address these challenges, we propose a data-driven approach based on graph neural networks (GNNs) to model AIS reception as a function of environmental and geographic variables. Inspired by the attention mechanisms that power transformers in large language models, our framework employs GraphSAGE (SAmple and aggreGatE) convolutions to aggregate neighborhood features, combines layer outputs through Jumping Knowledge (JK) with Bidirectional Long Short-Term Memory (BiLSTM)-derived attention coefficients, and integrates an attentional pooling module at the graph-level readout. Trained on real-world AIS data enriched with terrain and meteorological features, the model captures both local and long-range reception patterns. As a result, it outperforms classical baselines, including ITU-R P.2001 and XGBoost, in F1-score and accuracy. Ultimately, this work illustrates the value of deep learning and AIS sensor networks for detecting positioning anomalies in ship tracking and highlights the potential of data-driven approaches in modeling sensor reception.
(This article belongs to the Special Issue Transformer Applications in Target Tracking)
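The GraphSAGE mean aggregation the abstract mentions is simple to sketch. This toy layer with identity weight matrices is a generic illustration of the mean aggregator, not the paper's configuration:

```python
import numpy as np

def sage_mean_layer(h, adj, w_self, w_neigh):
    """One GraphSAGE-style mean-aggregator layer: each node combines
    its own features with the mean of its neighbors' features, then
    applies a ReLU nonlinearity."""
    deg = adj.sum(axis=1, keepdims=True).clip(min=1)
    neigh_mean = (adj @ h) / deg
    return np.maximum(0.0, h @ w_self + neigh_mean @ w_neigh)

# Tiny 3-node path graph; identity weights keep the arithmetic readable.
adj = np.array([[0., 1., 0.], [1., 0., 1.], [0., 1., 0.]])
h = np.array([[1., 0.], [0., 1.], [1., 1.]])
out = sage_mean_layer(h, adj, np.eye(2), np.eye(2))
```

Stacking several such layers and collecting their outputs is what Jumping Knowledge then combines.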

19 pages, 4133 KB  
Article
FLOW-GLIDE: Global–Local Interleaved Dynamics Estimator for Flow Field Prediction
by Jinghan Su, Li Xiao and Jingyu Wang
Appl. Sci. 2025, 15(19), 10834; https://doi.org/10.3390/app151910834 - 9 Oct 2025
Abstract
Accurate prediction of the flow field is crucial to evaluating the aerodynamic performance of an aircraft. While traditional computational fluid dynamics (CFD) methods solve the governing equations to capture both global flow structures and localized gradients, they are computationally intensive. Deep learning-based surrogate models offer a promising alternative, yet often struggle to simultaneously model long-range dependencies and near-wall flow gradients with sufficient fidelity. To address this challenge, this paper introduces the Message-passing And Global-attention block (MAG-BLOCK), a graph neural network module that combines local message passing with global self-attention mechanisms to jointly learn fine-scale features and large-scale flow patterns. Building on MAG-BLOCK, we propose FLOW-GLIDE, a cross-architecture deep learning framework that learns a mapping from initial conditions to steady-state flow fields in a latent space. Evaluated on the AirfRANS dataset, FLOW-GLIDE outperforms existing models on key performance metrics. Specifically, it reduces the error in the volumetric flow field by 62% and surface pressure prediction by 82% compared to the state-of-the-art.
(This article belongs to the Section Fluid Science and Technology)
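The local-plus-global combination behind a block like MAG-BLOCK can be sketched generically: mean-aggregate over graph neighbors, attend over all nodes, and sum the two views. This is an assumption-laden toy, not the published module:

```python
import numpy as np

def mag_block(h, adj):
    """Mix a local view (neighbor mean, captures near-wall detail)
    with a global view (self-attention over all nodes, captures
    long-range structure) and sum them."""
    deg = adj.sum(axis=1, keepdims=True).clip(min=1)
    local = (adj @ h) / deg                        # local message passing
    logits = h @ h.T / np.sqrt(h.shape[1])         # all-pairs similarity
    attn = np.exp(logits - logits.max(axis=1, keepdims=True))
    attn /= attn.sum(axis=1, keepdims=True)        # row-wise softmax
    return local + attn @ h                        # local + global views

# Two connected nodes with 2-d features.
adj = np.array([[0., 1.], [1., 0.]])
h = np.array([[1., 0.], [0., 2.]])
out = mag_block(h, adj)
```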

16 pages, 4740 KB  
Article
Measuring Inter-Bias Effects and Fairness-Accuracy Trade-Offs in GNN-Based Recommender Systems
by Nikzad Chizari, Keywan Tajfar and María N. Moreno-García
Future Internet 2025, 17(10), 461; https://doi.org/10.3390/fi17100461 - 8 Oct 2025
Abstract
Bias in artificial intelligence is a critical issue because these technologies increasingly influence decision-making in a wide range of areas. The recommender system field is one of them, where biases can lead to unfair or skewed outcomes. The origin usually lies in data biases coming from historical inequalities or irregular sampling. Recommendation algorithms using such data contribute to a greater or lesser extent to amplify and perpetuate those imbalances. On the other hand, different types of biases can be found in the outputs of recommender systems, and they can be evaluated by a variety of metrics specific to each of them. However, biases should not be treated independently, as they are interrelated and can potentiate or mask each other. Properly assessing the biases is crucial for ensuring fair and equitable recommendations. This work focuses on analyzing the interrelationship between different types of biases and proposes metrics designed to jointly evaluate multiple interrelated biases, with particular emphasis on those biases that tend to mask or obscure discriminatory treatment against minority or protected demographic groups, evaluated in terms of disparities in recommendation quality outcomes. This approach enables a more comprehensive assessment of algorithmic performance in terms of both fairness and predictive accuracy. Special attention is given to Graph Neural Network-based recommender systems, due to their strong performance in this application domain.
(This article belongs to the Special Issue Deep Learning in Recommender Systems)

32 pages, 3383 KB  
Article
DLG–IDS: Dynamic Graph and LLM–Semantic Enhanced Spatiotemporal GNN for Lightweight Intrusion Detection in Industrial Control Systems
by Junyi Liu, Jiarong Wang, Tian Yan, Fazhi Qi and Gang Chen
Electronics 2025, 14(19), 3952; https://doi.org/10.3390/electronics14193952 - 7 Oct 2025
Abstract
Industrial control systems (ICSs) face escalating security challenges due to evolving cyber threats and the inherent limitations of traditional intrusion detection methods, which fail to adequately model spatiotemporal dependencies or interpret complex protocol semantics. To address these gaps, this paper proposes DLG–IDS—a lightweight intrusion detection framework that innovatively integrates dynamic graph construction for capturing real–time device interactions and logical control relationships from traffic, LLM–driven semantic enhancement to extract fine–grained embeddings from graphs, and a spatio–temporal graph neural network (STGNN) optimized via sparse attention and local window Transformers to minimize computational overhead. Evaluations on SWaT and SBFF datasets demonstrate the framework’s superiority, achieving a state–of–the–art accuracy of 0.986 while reducing latency by 53.2% compared to baseline models. Ablation studies further validate the critical contributions of semantic fusion, sparse topology modeling, and localized temporal attention. The proposed solution establishes a robust, real–time detection mechanism tailored for resource–constrained industrial environments, effectively balancing high accuracy with operational efficiency.

40 pages, 1929 KB  
Review
The Evolution and Taxonomy of Deep Learning Models for Aircraft Trajectory Prediction: A Review of Performance and Future Directions
by NaeJoung Kwak and ByoungYup Lee
Appl. Sci. 2025, 15(19), 10739; https://doi.org/10.3390/app151910739 - 5 Oct 2025
Abstract
Accurate aircraft trajectory prediction is fundamental to air traffic management, operational safety, and intelligent aerospace systems. With the growing availability of flight data, deep learning has emerged as a powerful tool for modeling the spatiotemporal complexity of 4D trajectories. This paper presents a comprehensive review of deep learning-based approaches for aircraft trajectory prediction, focusing on their evolution, taxonomy, performance, and future directions. We classify existing models into five groups—RNN-based, attention-based, generative, graph-based, and hybrid and integrated models—and evaluate them using standardized metrics such as the RMSE, MAE, ADE, and FDE. Common datasets, including ADS-B and OpenSky, are summarized, along with the prevailing evaluation metrics. Beyond model comparison, we discuss real-world applications in anomaly detection, decision support, and real-time air traffic management, and highlight ongoing challenges such as data standardization, multimodal integration, uncertainty quantification, and self-supervised learning. This review provides a structured taxonomy and forward-looking perspectives, offering valuable insights for researchers and practitioners working to advance next-generation trajectory prediction technologies.
(This article belongs to the Section Aerospace Science and Engineering)

19 pages, 1381 KB  
Article
MAMGN-HTI: A Graph Neural Network Model with Metapath and Attention Mechanisms for Hyperthyroidism Herb–Target Interaction Prediction
by Yanqin Zhou, Xiaona Yang, Ru Lv, Xufeng Lang, Yao Zhu, Zuojian Zhou and Kankan She
Bioengineering 2025, 12(10), 1085; https://doi.org/10.3390/bioengineering12101085 - 5 Oct 2025
Abstract
The accurate prediction of herb–target interactions is essential for the modernization of traditional Chinese medicine (TCM) and the advancement of drug discovery. Nonetheless, the inherent complexity of herbal compositions and diversity of molecular targets render experimental validation both time-consuming and labor-intensive. We propose a graph neural network model, MAMGN-HTI, which integrates metapaths with attention mechanisms. A heterogeneous graph consisting of herbs, efficacies, ingredients, and targets is constructed, where semantic metapaths capture latent relationships among nodes. An attention mechanism is employed to dynamically assign weights, thereby emphasizing the most informative metapaths. In addition, ResGCN and DenseGCN architectures are combined with cross-layer skip connections to improve feature propagation and enable effective feature reuse. Experiments show that MAMGN-HTI outperforms several state-of-the-art methods across multiple metrics, exhibiting superior accuracy, robustness, and generalizability in HTI prediction and candidate drug screening. Validation against literature and databases further confirms the model’s predictive reliability. The model also successfully identified herbs with potential therapeutic effects for hyperthyroidism, including Vinegar-processed Bupleuri Radix (Cu Chaihu), Prunellae Spica (Xiakucao), and Processed Cyperi Rhizoma (Zhi Xiangfu). MAMGN-HTI provides a reliable computational framework and theoretical foundation for applying TCM in hyperthyroidism treatment, providing mechanistic insights while improving research efficiency and resource utilization.

21 pages, 831 KB  
Article
TSAD: Transformer-Based Semi-Supervised Anomaly Detection for Dynamic Graphs
by Jin Zhang and Ke Feng
Mathematics 2025, 13(19), 3123; https://doi.org/10.3390/math13193123 - 30 Sep 2025
Abstract
Anomaly detection aims to identify abnormal instances that significantly deviate from normal samples. With the natural connectivity between instances in the real world, graph neural networks have become increasingly important in solving anomaly detection problems. However, existing research mainly focuses on static graphs, while there is less research on mining anomaly patterns in dynamic graphs, which has important application value. This paper proposes a Transformer-based semi-supervised anomaly detection framework for dynamic graphs. The framework adopts the Transformer architecture as the core encoder, which can effectively capture long-range dependencies and complex temporal patterns between nodes in dynamic graphs. By introducing time-aware attention mechanisms, the model can adaptively focus on important information at different time steps, thereby better understanding the evolution process of graph structures. The multi-head attention mechanism of Transformer enables the model to simultaneously learn structural and temporal features of nodes, while positional encoding helps the model understand periodic patterns in time series. Comprehensive experiments on three real datasets show that TSAD significantly outperforms existing methods in anomaly detection accuracy, particularly demonstrating excellent performance in label-scarce scenarios.
(This article belongs to the Special Issue New Advances in Graph Neural Networks (GNNs) and Applications)
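The positional encoding the abstract refers to is typically the Transformer's sinusoidal scheme. A minimal time-step version, assuming that standard formulation rather than the paper's exact encoding, looks like this:

```python
import numpy as np

def time_encoding(t, dim=8):
    """Sinusoidal time encoding (Transformer-style): each time step
    maps to sines and cosines at geometrically spaced frequencies,
    letting a model distinguish and interpolate time steps and pick
    up periodic patterns."""
    i = np.arange(dim // 2)
    freqs = 1.0 / (10000.0 ** (2 * i / dim))  # geometric frequency ladder
    angles = np.outer(t, freqs)
    return np.concatenate([np.sin(angles), np.cos(angles)], axis=1)

# Encodings for five consecutive time steps.
enc = time_encoding(np.arange(5), dim=8)
```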

22 pages, 5899 KB  
Article
Research on Power Flow Prediction Based on Physics-Informed Graph Attention Network
by Qiyue Huang, Yapeng Wang, Xu Yang, Sio-Kei Im and Jianxiu Cai
Appl. Sci. 2025, 15(19), 10555; https://doi.org/10.3390/app151910555 - 29 Sep 2025
Abstract
Power flow prediction for microgrids, an emerging class of distributed energy systems, plays a crucial role in optimizing energy dispatch and power grid operation. Traditional methods of power flow prediction mainly rely on statistics and time series models, neglecting the spatial relationships among different nodes within the microgrid. To overcome this limitation, a Physics-Informed Graph Attention Network (PI-GAT) is proposed to capture the spatial structure of the microgrid, while an attention mechanism is introduced to measure the importance weights between nodes. In this study, we constructed a representative 14-node microgrid power flow dataset. After collecting the data, we preprocessed and transformed it into a format suitable for graph neural networks. Next, an autoencoder was employed for pre-training, enabling unsupervised dimensionality reduction to enhance the expressive power of the data. Subsequently, the extended data were fed into a graph convolution module with an attention mechanism, allowing adaptive weight learning and capturing relationships between nodes. The physical state equations are integrated into the loss function to achieve high-precision power flow prediction. Finally, simulation verification was conducted, comparing the PI-GAT method with traditional approaches. The results indicate that the proposed model outperforms other recent models across various evaluation metrics; specifically, it achieves a 46.9% improvement in MSE and a 14.08% improvement in MAE.
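A physics-informed loss of the kind PI-GAT describes combines a data-fit term with a penalty on the physical equation residual. The sketch below assumes the power-flow residual is precomputed per node, and the weighting scalar `lam` is a hypothetical hyperparameter:

```python
import numpy as np

def physics_informed_loss(pred, target, residual, lam=0.1):
    """Data-fit MSE plus a penalty on the physical balance-equation
    residual evaluated at the prediction. Minimizing it pulls the
    model toward both the labels and the physics."""
    mse = np.mean((pred - target) ** 2)   # supervised data-fit term
    phys = np.mean(residual ** 2)         # physics-consistency term
    return mse + lam * phys

# Two nodes: one exact prediction, one off by 1, with small residuals.
loss = physics_informed_loss(np.array([1.0, 2.0]),
                             np.array([1.0, 1.0]),
                             residual=np.array([0.5, -0.5]),
                             lam=0.1)
```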
