Search Results (502)

Search Parameters:
Keywords = traffic anomalies

31 pages, 7259 KB  
Article
Enhancing IoT Network Security: A BPSO-Optimized Attention-GRU Deep Learning Framework for Intrusion Detection
by Abdallah Elayan and Michel Kadoch
Computers 2026, 15(5), 266; https://doi.org/10.3390/computers15050266 - 23 Apr 2026
Abstract
The exponential expansion of computer networks, alongside the rapid development of the Internet of Things (IoT), has significantly increased the volume and complexity of transmitted data, emphasizing the need for robust network security measures to secure sensitive data and prevent unauthorized access or breaches. Intrusion Detection Systems (IDSs) have emerged as a vital tool for protecting networks and IoT environments from threats. Various IDSs have been proposed in the literature; however, suboptimal feature learning, limited computational efficiency, and reliance on obsolete datasets pose significant challenges, limiting their effectiveness against evolving cyber threats. Moreover, traditional IDSs struggle to efficiently manage the high-dimensional and imbalanced nature of IoT network traffic data. To address these challenges, this research proposes a hybrid deep learning (DL)-based IDS integrating Binary Particle Swarm Optimization (BPSO), MultiHead Attention mechanisms (MHA), and a deep Gated Recurrent Unit (GRU) architecture, improving detection effectiveness while reducing computational overhead. Our proposed approach also utilizes a Target Sampling strategy to balance class distributions, enhancing the model’s ability to accurately identify minority attacks. The BPSO algorithm is employed to identify the most influential features from the high-dimensional network traffic datasets, enhancing model interpretability and supporting more efficient learning. This optimized feature subset is then fed into a GRU-based DL architecture augmented with MHA, which performs sequence processing and attention-based learning for intrusion detection. The performance of the proposed model is evaluated utilizing the BoT-IoT and the CIC-IDS2017 benchmark datasets, ensuring a comprehensive assessment of anomaly detection capabilities. Extensive experimental results demonstrate the superior performance of the proposed model, achieving recalls of 98.42% and 99.76% with F1-scores of 98.94% and 99.76% for binary classification, and recalls of 99.79% and 98.69% with F1-scores of 99.89% and 98.04% for multiclass classification, on the BoT-IoT and CIC-IDS2017 datasets, respectively, highlighting the effectiveness of our model in enhancing threat detection for computer networks and IoT environments in comparison to recent state-of-the-art IDSs. Full article
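
The sequence-modeling core of such a pipeline is straightforward to prototype. Below is a minimal PyTorch sketch of the attention-GRU idea, not the authors' implementation: the layer sizes, head count, and the 20-feature input standing in for a BPSO-selected subset are illustrative assumptions.

```python
# Minimal sketch (not the authors' code) of an attention-augmented GRU
# classifier: a stacked GRU encodes flow-feature sequences, multi-head
# self-attention re-weights the hidden states, and a linear head emits
# attack/benign logits. All sizes are illustrative assumptions.
import torch
import torch.nn as nn

class AttentionGRUIDS(nn.Module):
    def __init__(self, n_features: int, hidden: int = 64,
                 heads: int = 4, n_classes: int = 2):
        super().__init__()
        self.gru = nn.GRU(n_features, hidden, num_layers=2, batch_first=True)
        self.mha = nn.MultiheadAttention(hidden, heads, batch_first=True)
        self.head = nn.Linear(hidden, n_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq_len, n_features), e.g. sequences of selected flow features
        h, _ = self.gru(x)                    # encode the sequence
        attn, _ = self.mha(h, h, h)           # self-attention over time steps
        return self.head(attn.mean(dim=1))    # pool and classify

model = AttentionGRUIDS(n_features=20)
logits = model(torch.randn(8, 30, 20))        # 8 flows, 30 steps, 20 features
print(logits.shape)                           # torch.Size([8, 2])
```
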
28 pages, 1805 KB  
Article
Intelligent Threat Defense Mechanisms for 5G APIs
by Asif Yasin, Seyed Ebrahim Hosseini, Muhammad Nadeem and Shahbaz Pervez
Future Internet 2026, 18(5), 223; https://doi.org/10.3390/fi18050223 - 22 Apr 2026
Abstract
As 5G Standalone Core networks grow, Application Programming Interfaces (APIs) have become a key part of how network systems talk to each other. They allow different functions to share data and complete tasks quickly. However, this also makes them targets for attacks. 5G Standalone Core networks rely on Service-Based Architecture (SBA), where network functions communicate through exposed APIs. These APIs are attractive targets for cyberattacks because they are externally accessible, handle sensitive control-plane operations, and exchange structured data using Hypertext Transfer Protocol version 2 (HTTP/2) and JavaScript Object Notation (JSON) protocols. Most older security tools work using fixed rules, which cannot always detect new or changing threats. This study aimed to fix that gap by using Artificial Intelligence to make API security smarter. Two AI models were tested: Long Short-Term Memory (LSTM), which learns from past traffic, and Reinforcement Learning (RL), which learns by adapting to network behavior. Both were used to assess API traffic and assign a real-time risk score. Synthetic traffic was created using Python, including both normal API calls and different types of attacks like Distributed Denial-of-Service (DDoS), brute force, and Structured Query Language (SQL) injection. The results show that both LSTM and RL models were better than traditional rule-based systems. They found more threats, gave fewer false alerts, and responded faster. RL was especially strong at handling unknown or changing attacks. Experimental results show that the proposed LSTM and RL models achieved approximately 95% detection accuracy, significantly outperforming the static rule-based baseline model, which achieved 58% accuracy. The results demonstrate the effectiveness of adaptive AI-based security mechanisms for detecting evolving API threats. This research shows that AI can help protect 5G APIs in a smarter and more flexible way. It can support telecom networks by making threat detection faster, more accurate, and ready for future challenges. Full article
(This article belongs to the Section Cybersecurity)
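
For readers who want to experiment with the LSTM risk-scoring idea, here is a hedged sketch: an LSTM consumes a sliding window of per-call API features and emits a risk score in [0, 1]. The window length and the six-feature layout are assumptions, not details taken from the paper.

```python
# Hedged sketch of LSTM-based API risk scoring: score a sliding window of
# per-call features (e.g. rate, payload size, status mix) with a value in
# [0, 1]. Window length and feature layout are assumptions.
import torch
import torch.nn as nn

class APIRiskLSTM(nn.Module):
    def __init__(self, n_features: int = 6, hidden: int = 32):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        self.score = nn.Sequential(nn.Linear(hidden, 1), nn.Sigmoid())

    def forward(self, window: torch.Tensor) -> torch.Tensor:
        _, (h, _) = self.lstm(window)   # final hidden state summarizes the window
        return self.score(h[-1])        # per-window risk in [0, 1]

risk = APIRiskLSTM()(torch.randn(4, 50, 6))   # 4 windows of 50 API calls each
print(risk.squeeze(-1))
```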

30 pages, 1289 KB  
Article
Anomaly Detection for Substations Based on IEC 61850-NFA Model
by Deniz Berfin Tastan and Musa Balta
Appl. Sci. 2026, 16(8), 4000; https://doi.org/10.3390/app16084000 - 20 Apr 2026
Abstract
The increasing digitalization of energy transmission and distribution infrastructures has made industrial control systems (ICS), and especially IEC 61850-based communication structures, critical. IEC 61850 performs protection and control functions in substations in real time via GOOSE and MMS protocols. The fast and low-latency operation of these protocols is essential; however, their open structure leaves systems vulnerable to cyberattacks. Traditional signature-based solutions are insufficient for detecting such anomalies, and models capable of learning both time and state relationships are needed. This study develops a time-aware probabilistic NFA model to detect anomalous behavior in IEC 61850 traffic. The model analyzes GOOSE and MMS message sequences with both state transitions and time differences (Δt). Thus, not only the message sequence but also the timing variations between events are learned. The probability of each transition is dynamically updated, and deviations from normal behavior are marked as “anomalies”. The dataset used in this study was created based on normal and attack scenarios conducted in the Sakarya University Critical Infrastructure National Testbed Center Energy Laboratory (Center Energy). The experimental results obtained in the study show that the model detects time-based, structural, and behavioral anomalies with high accuracy. With a dual-model configuration, results of 91.7% accuracy, 88.9% precision, 100% recall, and a 94.1% F1-score were achieved; particularly in time-based attack scenarios, the model performance reached an accuracy level of up to 93%. Full article
(This article belongs to the Section Computing and Artificial Intelligence)
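
The time-aware probabilistic automaton can be approximated in a few lines. The sketch below, an illustration rather than the authors' model, learns transition probabilities and per-transition Δt statistics from (state, event, Δt) triples and flags rare transitions or timing deviations; the thresholds p_min and z_max are assumed knobs.

```python
# Illustrative sketch (assumptions, not the authors' code) of a time-aware
# probabilistic automaton for GOOSE/MMS sequences: transition probabilities
# come from (state, event) counts, and per-transition delta-t statistics
# flag timing deviations as anomalies.
from collections import defaultdict
import statistics

class TimedNFA:
    def __init__(self, p_min: float = 0.01, z_max: float = 3.0):
        self.counts = defaultdict(lambda: defaultdict(int))
        self.deltas = defaultdict(list)
        self.p_min, self.z_max = p_min, z_max

    def learn(self, state, event, dt):
        self.counts[state][event] += 1
        self.deltas[(state, event)].append(dt)

    def check(self, state, event, dt):
        total = sum(self.counts[state].values())
        p = self.counts[state][event] / total if total else 0.0
        if p < self.p_min:
            return "anomaly: rare transition"
        ds = self.deltas[(state, event)]
        if len(ds) >= 2:
            mu, sd = statistics.mean(ds), statistics.stdev(ds)
            if sd and abs(dt - mu) / sd > self.z_max:
                return "anomaly: timing deviation"
        return "normal"

nfa = TimedNFA()
for dt in (0.10, 0.11, 0.09, 0.10):
    nfa.learn("idle", "GOOSE_heartbeat", dt)
print(nfa.check("idle", "GOOSE_heartbeat", 0.10))  # normal
print(nfa.check("idle", "MMS_write", 0.10))        # anomaly: rare transition
```
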
32 pages, 550 KB  
Article
Resilient Multi-Agent State Estimation for Smart City Traffic: A Systems Engineering Approach to Emission Mitigation
by Ahmet Cihan
Appl. Sci. 2026, 16(8), 3972; https://doi.org/10.3390/app16083972 - 19 Apr 2026
Viewed by 102
Abstract
Uninterrupted traffic flow monitoring is a prerequisite for optimal resource allocation and minimizing vehicular emissions in smart cities. However, centralized traffic management architectures are highly vulnerable to single points of failure. When structural sensor malfunctions occur, the resulting network unobservability paralyzes dynamic signalization, triggering cascading traffic congestion, extended idling times, and severe greenhouse gas emissions. To address this cyber-ecological vulnerability, we propose the Hybrid Multi-Agent State Estimation (H-MASE) protocol, a fully decentralized decision-support framework designed from an applied systems reliability engineering perspective. By deploying PSAs and VLAs directly onto IoT-enabled edge devices at smart intersections, H-MASE leverages a hop-by-hop edge computing topology to collaboratively track macroscopic route flow dynamics. Mathematically, this distributed estimation process is formulated as a network-wide least-squares convex optimization problem, where local projection operators function as exact Distributed Gradient Descent steps to minimize the global residual sum of squares. The distributed consensus mechanism acts as a spatial variance reduction tool, effectively dampening measurement noise and stochastic demand fluctuations. Furthermore, we introduce an autonomous anomaly detection logic that isolates severe structural faults rapidly, which is mathematically structured to prevent false alarms under bounded disturbance conditions. Numerical simulations demonstrate that the protocol yields a highly resilient optimality gap (e.g., a Root Mean Square Error of merely 0.81 vehicles per estimated state) even under catastrophic hardware failures. Ultimately, H-MASE provides a robust, fail-safe data foundation for sustainable urban logistics and green-wave signalization, ensuring that smart cities maintain ecological resilience and optimal resource utilization under severe structural disruptions. Full article
(This article belongs to the Special Issue Advances in Transportation and Smart City)
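
The consensus-plus-gradient mechanism can be illustrated numerically. In the toy sketch below, each agent holds a noisy local measurement of the same route-flow vector and alternates a neighbor-averaging (consensus) step with a local least-squares gradient step; the ring topology, uniform averaging weights, and step size are assumptions for illustration, not the H-MASE protocol itself.

```python
# Toy sketch of consensus + local gradient descent on a shared
# least-squares objective: agents average with ring neighbors, then take a
# gradient step toward their own noisy measurement. Topology, weights, and
# step size are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
true_flow = np.array([12.0, 7.0, 3.0])               # vehicles per interval
n_agents, lr, rounds = 5, 0.1, 200
measurements = true_flow + rng.normal(0.0, 1.0, (n_agents, 3))
estimates = np.zeros((n_agents, 3))
neighbors = [((i - 1) % n_agents, (i + 1) % n_agents) for i in range(n_agents)]

for _ in range(rounds):
    averaged = np.array([(estimates[i] + estimates[a] + estimates[b]) / 3.0
                         for i, (a, b) in enumerate(neighbors)])
    # local gradient of 0.5 * ||x - y_i||^2 is (x - y_i)
    estimates = averaged - lr * (averaged - measurements)

rmse = np.sqrt(np.mean((estimates - true_flow) ** 2))
print(f"consensus RMSE: {rmse:.2f} vehicles")        # noise averaged across agents
```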

19 pages, 510 KB  
Article
From Vector Space to Symbolic Space: Informational and Semantic Analysis of Benign and DDoS IoT Traffic Using LLMs
by Mironela Pirnau, Iustin Priescu, Mihai-Alexandru Botezatu, Catalina Mihaela Priescu and Daniela Joita
Electronics 2026, 15(8), 1724; https://doi.org/10.3390/electronics15081724 - 18 Apr 2026
Viewed by 180
Abstract
This paper investigates the feasibility of using Large Language Models (LLMs) for the structural analysis of flow-based network data. This analysis is carried out in the presence of a structural difference between the multidimensional numerical space of IoT features and the symbolic space in which LLMs operate. The primary objective was the development of a formal framework that enables the controlled transformation of numerical data into linguistically analyzable semantic representations, without resorting to classification or machine learning mechanisms. We propose the Semantic Flow Encoding (SFE) mechanism, a deterministic method for robust discretization and behavioral abstraction that converts the numerical characteristics of Internet of Things (IoT) flows into structural semantic descriptions using the Canadian Institute for Cybersecurity Internet of Things Device Identification and Anomaly Detection (CIC IoT-DIAD) 2024 dataset. Through formal informational measures, it is demonstrated that an intrinsic structural difference exists between benign and DDoS traffic in the analyzed dataset. In the validation stage, we evaluated whether these informational differences are reflected at the level of linguistic abstraction through controlled inference experiments in IBM WatsonX. The present paper suggests that LLMs may support semantic auditing of distributional structure when guided by a formal encoding layer. In this manner, a reproducible framework for integrating numerical security data into language-model-based analysis is suggested. Full article
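
A deterministic flow-to-text encoding in the spirit of SFE can be sketched as ordinal binning followed by templated description; the bin edges and vocabulary below are illustrative assumptions, not the paper's actual encoding.

```python
# Minimal sketch of deterministic flow-to-text encoding: numeric features
# are binned into ordinal labels and joined into a structural description
# an LLM can audit. Bin edges and vocabulary are assumptions.
BINS = {
    "pkt_rate":  [(10, "sparse"), (1000, "moderate"), (float("inf"), "flooding")],
    "mean_len":  [(100, "tiny-packets"), (800, "mixed-size"), (float("inf"), "bulky")],
    "dst_ports": [(3, "focused"), (50, "scanning"), (float("inf"), "spraying")],
}

def encode_flow(features: dict) -> str:
    terms = []
    for name, edges in BINS.items():
        label = next(lab for edge, lab in edges if features[name] < edge)
        terms.append(f"{name}={label}")
    return "flow with " + ", ".join(terms)

print(encode_flow({"pkt_rate": 4200, "mean_len": 60, "dst_ports": 1}))
# flow with pkt_rate=flooding, mean_len=tiny-packets, dst_ports=focused
```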

40 pages, 1741 KB  
Article
Edge AI Bridge: A Micro-Layer Intrusion Detection Architecture for Smart-City IoT Networks
by Sethu Subramanian N, Prabu P, Kurunandan Jain and Prabhakar Krishnan
IoT 2026, 7(2), 33; https://doi.org/10.3390/iot7020033 - 16 Apr 2026
Viewed by 277
Abstract
Smart-city IoT ecosystems depend on a large number of devices with limited resources, which often lack built-in security mechanisms. While traditional cloud-based or gateway-centric intrusion detection systems (IDSs) offer essential security, they are still characterized by high detection latency, considerable bandwidth demand, and a lack of precise monitoring of individual device actions. This study proposes the Edge AI Bridge, a novel micro-computing security layer positioned between IoT devices and the gateway to enable early-stage threat interception. The architecture integrates embedded AI hardware with a hybrid pipeline, utilizing unsupervised anomaly detection for behavioral profiling and a lightweight signature-matching module to minimize false positives. System operations—including localized traffic inspection, protocol parsing, and feature extraction—are performed before data aggregation, which preserves device-level privacy and reduces the computational burden on the IoT gateway. The contemporary CIC-IoT-2023 dataset, which captures a wide range of smart-city protocols and attack vectors, is used to evaluate the architecture. The Edge AI Bridge significantly reduces detection latency (≈50 ms on average versus the 500 ms of cloud-based solutions) while keeping the resource footprint low, at about 20% CPU utilization. The Edge AI Bridge thus demonstrates a scalable, modular, and privacy-preserving approach to improving the cyber resilience of large, heterogeneous, and difficult-to-manage smart-city infrastructures. Full article
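
The hybrid pipeline can be sketched with off-the-shelf parts: an unsupervised detector profiles per-device traffic, and a signature check vets flagged flows. The scikit-learn IsolationForest, the three-feature layout, and the toy signatures below are stand-ins chosen for illustration, not the system's actual components.

```python
# Hedged sketch of a hybrid anomaly + signature pipeline: an unsupervised
# model profiles benign per-device traffic, and lightweight signature
# checks vet flagged flows to cut false positives. Features and signatures
# are illustrative assumptions.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(1)
benign = rng.normal([200, 500, 5], [30, 80, 2], (500, 3))   # rate, bytes, ports
profiler = IsolationForest(random_state=0).fit(benign)

SIGNATURES = [lambda f: f[0] > 5000,          # e.g. SYN-flood-like rate
              lambda f: f[2] > 100]           # e.g. port-scan-like fanout

def inspect(flow: np.ndarray) -> str:
    anomalous = profiler.predict(flow.reshape(1, -1))[0] == -1
    matched = any(sig(flow) for sig in SIGNATURES)
    if anomalous and matched:
        return "block"        # both stages agree: high-confidence threat
    if anomalous:
        return "quarantine"   # behavioral outlier, no known signature
    return "forward"

print(inspect(np.array([9000.0, 400.0, 150.0])))  # block
print(inspect(np.array([210.0, 480.0, 6.0])))     # forward
```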

27 pages, 2093 KB  
Article
Comparative Analysis of Supervised and Unsupervised Learning for Intrusion Detection in Network Logs
by Paulo Castro, Fernando Santos and Pedro Lopes
Computation 2026, 14(4), 92; https://doi.org/10.3390/computation14040092 - 15 Apr 2026
Viewed by 267
Abstract
The escalating complexity of network infrastructures and the increasing sophistication of cyber threats require increasingly robust and automated Intrusion Detection Systems (IDS). This article presents a comparative investigation of the effectiveness of various Machine Learning and Deep Learning architectures in detecting anomalies in network logs. The methodology ranged from classic supervised and ensemble algorithms, such as Random Forest and XGBoost, to sequential Deep Learning approaches (LSTM, GRU) and unsupervised models based on latent reconstruction (VAE, DeepLog). The results demonstrate that supervised approaches significantly outperformed unsupervised methods in the analyzed context. The optimized XGBoost model established a performance benchmark, achieving a Recall of 0.96 and a Precision of 0.85, thereby offering an optimal balance between detecting rare threats and minimizing false alarms. In contrast, unsupervised models revealed critical limitations, suggesting that statistical mimicry between normal and anomalous traffic hinders detection based solely on reconstruction error. Additionally, the study documents the technical interoperability challenges encountered when attempting to integrate state-of-the-art language models, such as BERT. In conclusion, this work validates the effectiveness of Gradient Boosting algorithms and recurrent networks as viable and scalable solutions for critical network security, providing guidelines for model selection in real monitoring environments. Full article
(This article belongs to the Section Computational Engineering)
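
The supervised evaluation loop underlying such comparisons is easy to reproduce on synthetic data. The sketch below uses scikit-learn's GradientBoostingClassifier as a dependency-light stand-in for XGBoost; the roughly 10% attack imbalance and feature count are assumptions.

```python
# Sketch of a supervised IDS evaluation loop on synthetic stand-in data:
# train a gradient-boosting classifier on imbalanced labeled flows and
# report the recall/precision trade-off. All parameters are assumptions.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import precision_score, recall_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=4000, n_features=20,
                           weights=[0.9, 0.1], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)
clf = GradientBoostingClassifier(random_state=0).fit(X_tr, y_tr)
pred = clf.predict(X_te)
print(f"recall={recall_score(y_te, pred):.2f} "
      f"precision={precision_score(y_te, pred):.2f}")
```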

20 pages, 604 KB  
Article
eMQTT Traffic Generator for IoT Intrusion Detection Systems
by Jorge Ortega-Moody, Cesar Isaza, Kouroush Jenab, Karina Anaya, Adrian Leon and Cristian Felipe Ramirez-Gutierrez
Future Internet 2026, 18(4), 203; https://doi.org/10.3390/fi18040203 - 13 Apr 2026
Viewed by 451
Abstract
The development of effective Intrusion Detection Systems (IDS) for Internet of Things (IoT) environments is constrained by the absence of realistic, large-scale datasets, particularly for the Message Queuing Telemetry Transport (MQTT) protocol, which is prevalent in industrial IoT. Existing datasets are frequently limited in scope, imbalanced, or do not capture MQTT-specific attack patterns, thereby impeding the training of accurate machine learning models. To address this gap, the extensible Message Queuing Telemetry Transport (eMQTT) Traffic Generator is introduced as a modular platform capable of simulating both legitimate MQTT communication and targeted denial-of-service (DoS) attacks. The framework features a scalable and reproducible architecture that incorporates protocol-aware attack modeling, automated traffic labeling, and direct export of datasets suitable for machine learning applications. The system produces standardized, configurable, repeatable, and publicly accessible datasets, thereby facilitating reproducible research and scalable experimentation. Experimental validation demonstrates that the simulated traffic aligns with established DoS behavior models. Two high-volume datasets were generated: one representing normal MQTT traffic and another emulating CONNECT-flooding attacks. Machine learning classifiers trained on these datasets exhibited strong performance, with gradient boosting models achieving over 95% accuracy in distinguishing benign from malicious traffic. This work offers a practical solution to the scarcity of datasets in IoT security research. By providing a controlled, extensible, and reproducible traffic-generation platform alongside validated datasets, eMQTT enables systematic experimentation, supports the advancement of IDS solutions, and enhances MQTT security for critical IoT infrastructures. Full article
(This article belongs to the Section Internet of Things)
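
At the protocol level, a CONNECT-flood generator reduces to crafting MQTT 3.1.1 CONNECT packets and opening many short-lived connections. The sketch below is not the eMQTT tool itself; the broker address, volumes, and label schema are assumptions, and it should only ever be pointed at a test broker you control.

```python
# Hedged, protocol-level sketch (not the eMQTT tool) of labeled
# CONNECT-flood generation: hand-crafted MQTT 3.1.1 CONNECT packets are
# sent to a local test broker and each event is tagged for ML export.
import socket
import struct
import time

BROKER, PORT = "127.0.0.1", 1883          # a broker you control, e.g. mosquitto

def mqtt_connect_packet(client_id: bytes) -> bytes:
    # variable header: protocol "MQTT", level 4, clean-session flag, 60 s keepalive
    var_header = b"\x00\x04MQTT\x04\x02\x00\x3c"
    payload = struct.pack("!H", len(client_id)) + client_id
    remaining = len(var_header) + len(payload)     # < 128, so one length byte
    return bytes([0x10, remaining]) + var_header + payload

def connect_flood(n: int = 100):
    for i in range(n):                             # many short-lived CONNECTs
        with socket.create_connection((BROKER, PORT), timeout=2) as s:
            s.sendall(mqtt_connect_packet(b"flood-%d" % i))
            s.recv(4)                              # read CONNACK, then drop
        yield {"t": time.time(), "kind": "CONNECT", "label": "dos"}

if __name__ == "__main__":
    records = list(connect_flood(10))              # only against your own broker
    print(len(records), "labeled attack records")
```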

22 pages, 903 KB  
Review
Exploring Recent Maritime Research on AIS-Based Ship Behavior Analysis and Modeling
by Anila Duka, Houxiang Zhang, Pero Vidan and Guoyuan Li
J. Mar. Sci. Eng. 2026, 14(8), 712; https://doi.org/10.3390/jmse14080712 - 11 Apr 2026
Viewed by 246
Abstract
Automatic Identification System (AIS) data provide valuable insights into ship behavior, supporting maritime safety, situational awareness, and operational efficiency, capabilities that are increasingly required for autonomous ship functions and harbor maneuvering assistance. This review synthesizes recent research on AIS-based ship behavior analysis and modeling published between 2022 and 2024, using a structured literature search and screening process informed by PRISMA principles. The review presents a five-stage workflow, spanning data processing, data analysis, knowledge extraction, modeling, and runtime applications, with emphasis on how these stages contribute to perception, prediction, and decision support in automated navigation. Four dimensions are considered in data analysis, including statistical analysis, safety indicators, situational awareness, and anomaly detection. The modeling approaches are categorized into classification, regression, and optimization, highlighting current limitations such as data quality, algorithmic transparency, and real-time performance, while also assessing runtime feasibility for onboard or edge deployment. Three runtime application directions are identified: autonomous vessel functions, remote monitoring and control operations, and onboard decision-support tools, with numerous studies focusing on constrained waterways and port-approach scenarios. Future directions suggest integrating multi-source data and advancing machine learning models to improve robustness in complex traffic and harbor environments. By linking theoretical insights with practical onboard needs, this study provides guidance for developing intelligent, adaptive, and safety-enhancing maritime systems. Full article
(This article belongs to the Special Issue Autonomous Ship and Harbor Maneuvering: Modeling and Control)

21 pages, 1353 KB  
Article
Chaos Theory with AI Analysis in IoT Network Scenarios
by Antonio Francesco Gentile and Maria Cilione
Cryptography 2026, 10(2), 25; https://doi.org/10.3390/cryptography10020025 - 10 Apr 2026
Viewed by 189
Abstract
While general network dynamics have been extensively modeled using stochastic methods, the emergence of dense Internet of Things (IoT) ecosystems demands a more specialized analytical framework. IoT environments are characterized by extreme non-linearity and sensitivity to initial conditions, where traditional models often fail to account for chaotic latency and packet loss. This paper introduces a specialized approach that integrates Chaos Theory with the innovative paradigm of Vibe Coding—an AI-assisted development and analysis methodology that allows for the ‘encoding’ and interpretation of the dynamic ‘vibe’ or signature of network fluctuations in real time. By categorizing network behavior into four distinct scenarios (quiescent, perturbed, attacked, and perturbed–attacked), the proposed framework utilizes deep learning to transform chaotic signals into actionable intelligence. Our findings demonstrate that this specialized synergy between chaos analysis and Vibe Coding provides superior classification of adversarial threats, such as DoS and injection attacks, fostering intelligent native security for next-generation IoT infrastructures. Full article
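
As one concrete example of a chaos indicator such a framework might compute (an assumption on our part; the paper's Vibe Coding pipeline is AI-assisted and not specified at this level), the sketch below estimates a divergence rate for a time series by tracking how delay-embedded nearest neighbors separate; a clearly positive rate is consistent with sensitivity to initial conditions.

```python
# Illustrative divergence-rate estimate for a scalar "latency" series via
# delay embedding and nearest-neighbor separation (a simplified
# Rosenstein-style indicator; parameters are assumptions).
import numpy as np

def divergence_rate(x: np.ndarray, dim: int = 3, lag: int = 1, horizon: int = 10):
    n = len(x) - (dim - 1) * lag
    emb = np.column_stack([x[i * lag: i * lag + n] for i in range(dim)])
    usable = n - horizon
    rates = []
    for i in range(usable):
        d = np.linalg.norm(emb[:usable] - emb[i], axis=1)
        d[max(0, i - 2): i + 3] = np.inf           # exclude temporal neighbors
        j = int(np.argmin(d))
        d0 = np.linalg.norm(emb[i] - emb[j])
        dh = np.linalg.norm(emb[i + horizon] - emb[j + horizon])
        if d0 > 0 and dh > 0:
            rates.append(np.log(dh / d0) / horizon)
    return float(np.mean(rates))

# logistic map in its chaotic regime as a stand-in signal
x = np.empty(600); x[0] = 0.4
for i in range(1, 600):
    x[i] = 3.9 * x[i - 1] * (1 - x[i - 1])
print(f"estimated divergence rate: {divergence_rate(x):.2f}")  # > 0: chaotic
```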

24 pages, 1361 KB  
Article
Adaptive Decision-Level Intrusion Detection for Known and Zero-Day Attacks
by Joseph P. Mchina, Neema Mduma and Ramadhani S. Sinde
Network 2026, 6(2), 23; https://doi.org/10.3390/network6020023 - 9 Apr 2026
Viewed by 306
Abstract
Network Intrusion Detection Systems (NIDS) face increasing challenges from sophisticated cyber threats, particularly zero-day attacks that evade signature-based methods. While supervised learning is effective for known attack classification, it struggles with novel threats, whereas anomaly-based approaches suffer from high false positive rates and unstable thresholds. To address these limitations, this paper proposes a decision-level adaptive intrusion-detection framework combining hierarchical CNN-based closed-set classification with autoencoder-based zero-day detection in a cascade architecture. The framework enables deployment-time adaptation by dynamically adjusting class-specific confidence thresholds and fusion parameters without model retraining. Experiments on the CSE-CIC-IDS2018 dataset demonstrate strong closed-set performance, achieving 98.98% accuracy and a macro-F1-score of 0.9342, with improved recall for minority attack classes under adaptive thresholding. Under a zero-day evaluation protocol in which Web_Attacks and Infiltration are excluded from training and validation, the proposed approach achieves an F1-score of 0.9319 while maintaining a low false positive rate of 0.0019. The framework is further evaluated on the Simulated University Network Environment (SUNE) dataset representing campus network traffic, achieving 96.18% closed-set accuracy and 97.54% accuracy in the integrated cascade setting. These results demonstrate that the proposed framework effectively balances minority attack detection, zero-day identification, and false-alarm control in dynamic and resource-constrained network environments. Full article
(This article belongs to the Special Issue Artificial Intelligence in Effective Intrusion Detection for Clouds)
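
The decision-level cascade can be expressed compactly: a closed-set classifier handles confident predictions, and low-confidence flows fall through to a reconstruction-error check for zero-day traffic. In the sketch below the CNN and autoencoder are replaced by simple stand-ins, and the per-class confidence thresholds (deployment-tunable, as in the paper) carry assumed values.

```python
# Conceptual sketch of a decision-level cascade with adaptive thresholds.
# The classifier and autoencoder are stand-ins; thresholds are assumptions.
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(3, 4))                    # stand-in for the trained CNN

def classify(flow: np.ndarray) -> np.ndarray:
    z = W @ flow
    e = np.exp(z - z.max())
    return e / e.sum()                         # softmax class probabilities

def reconstruction_error(flow: np.ndarray) -> float:
    return float(np.mean((flow - flow.mean()) ** 2))   # autoencoder stand-in

CLASS_NAMES = ["benign", "dos", "scan"]
CONF_THRESH = {"benign": 0.8, "dos": 0.6, "scan": 0.6}  # per-class, adjustable
AE_THRESH = 0.5

def decide(flow: np.ndarray) -> str:
    probs = classify(flow)
    k = int(np.argmax(probs))
    if probs[k] >= CONF_THRESH[CLASS_NAMES[k]]:
        return CLASS_NAMES[k]                  # confident closed-set decision
    if reconstruction_error(flow) > AE_THRESH:
        return "zero-day"                      # poorly reconstructed novelty
    return "benign"                            # uncertain but familiar

print(decide(np.array([0.1, 5.0, 0.2, 9.0])))
```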

23 pages, 1950 KB  
Article
Encrypted Traffic Detection via a Federated Learning-Based Multi-Scale Feature Fusion Framework
by Yichao Fei, Youfeng Zhao, Wenrui Liu, Fei Wu, Shangdong Liu, Xinyu Zhu, Yimu Ji and Pingsheng Jia
Electronics 2026, 15(8), 1570; https://doi.org/10.3390/electronics15081570 - 9 Apr 2026
Viewed by 270
Abstract
With the proliferation of edge computing in IoT and smart security, there is a growing demand for large-scale encrypted traffic anomaly detection. However, the opaque nature of encrypted traffic makes it difficult for traditional detection methods to balance efficiency and accuracy. To address this challenge, this paper proposes FMTF, a Multi-Scale Feature Fusion method based on Federated Learning for encrypted traffic anomaly detection. FMTF constructs graph structures at three scales—spatial, statistical, and content—to comprehensively characterize traffic features. At the spatial scale, communication graphs are constructed based on host-to-host IP interactions, where each node represents the IP address of a host and edges capture the communication relationships between them. The statistical scale builds traffic statistic graphs based on interactions between port numbers, with nodes representing individual ports and edge weights corresponding to the lengths of transmitted packets. At the content scale, byte-level traffic graphs are generated, where nodes represent pairs of bytes extracted from the traffic data, and edges are weighted using pointwise mutual information (PMI) to reflect the statistical association between byte occurrences. To extract and fuse these multi-scale features, FMTF employs the Graph Attention Network (GAT), enhancing the model’s traffic representation capability. Furthermore, to reduce raw-data exposure in distributed edge environments, FMTF integrates a federated learning framework. In this framework, edge devices train models locally based on their multi-scale traffic features and periodically share model parameters with a central server for aggregation, thereby optimizing the global model without exposing raw data. Experimental results demonstrate that FMTF maintains efficient and accurate anomaly detection performance even under limited computing resources, offering a practical and effective solution for encrypted traffic identification and network security protection in edge computing environments. Full article
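
One concrete ingredient named above, PMI weighting of byte co-occurrence edges in the content-scale graph, can be sketched directly; the co-occurrence window and toy payloads below are illustrative assumptions.

```python
# Small sketch of pointwise mutual information (PMI) weights for byte-pair
# edges. Window size and payloads are illustrative assumptions.
import math
from collections import Counter

payloads = [bytes.fromhex("16030116"), bytes.fromhex("16030100"),
            bytes.fromhex("474554af")]
WINDOW = 2
uni, pair = Counter(), Counter()
for p in payloads:
    for i in range(len(p)):
        uni[p[i]] += 1
        for j in range(i + 1, min(i + 1 + WINDOW, len(p))):
            pair[tuple(sorted((p[i], p[j])))] += 1
n_uni, n_pair = sum(uni.values()), sum(pair.values())

def pmi(a: int, b: int) -> float:
    p_ab = pair[tuple(sorted((a, b)))] / n_pair
    p_a, p_b = uni[a] / n_uni, uni[b] / n_uni
    return math.log(p_ab / (p_a * p_b)) if p_ab > 0 else float("-inf")

edges = {e: pmi(*e) for e in pair if pmi(*e) > 0}   # keep positive-PMI edges
print(f"{len(edges)} positive-PMI byte-pair edges")
```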

17 pages, 33215 KB  
Data Descriptor
ANAID: Autonomous Naturalistic Obstacle-Avoidance Interaction Dataset
by Manuel Garcia-Fernandez, Maria Juarez Molera, Adrian Canadas Gallardo, Nourdine Aliane and Javier Fernandez Andres
Data 2026, 11(4), 77; https://doi.org/10.3390/data11040077 - 8 Apr 2026
Viewed by 321
Abstract
This paper presents ANAID (Autonomous Naturalistic obstacle-Avoidance Interaction Dataset), a new multimodal dataset designed to support research on autonomous driving, particularly with regard to obstacle avoidance and naturalistic driver–vehicle interaction. Data were collected using a Hyundai Tucson Hybrid equipped with a Comma-3X autonomous-driving development kit, combining high-resolution front-facing video with detailed CAN-bus telemetry. The dataset comprises four data collection campaigns, each corresponding to a single continuous driving session, yielding a total of 208 videos and 240,014 synchronized frames. In addition to the video data, the dataset provides vehicle state measurements (speed, acceleration, steering, pedal positions, turn signals, etc.) and an additional annotation layer identifying evasive maneuvers derived from steering-related signals. Data were recorded across four driving campaigns on an urban circuit at Universidad Europea de Madrid, capturing diverse real-world scenarios such as roundabouts, intersections, pedestrian areas, and segments requiring obstacle avoidance. A multi-stage processing pipeline aligns telemetry and visual data, extracts frames at 20 FPS, and detects evasive maneuvers using threshold-based time-series analysis. ANAID provides a fully aligned and non-destructive representation of naturalistic driving behavior, enabling research on control prediction, driver modeling, anomaly detection, and human–autonomy interaction in realistic traffic conditions. Full article
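
Threshold-based detection of evasive maneuvers from steering telemetry can be sketched as a rate test over the 20 FPS series; the angular-rate threshold and minimum duration below are illustrative assumptions, not the dataset's actual parameters.

```python
# Sketch of threshold-based maneuver detection over a steering-angle
# series: mark spans where the angular rate stays above a threshold.
# Threshold and minimum duration are assumptions.
import numpy as np

def detect_evasive(steering_deg: np.ndarray, fps: int = 20,
                   rate_thresh: float = 60.0, min_frames: int = 4):
    rate = np.abs(np.diff(steering_deg)) * fps        # deg/s between frames
    active = rate > rate_thresh
    spans, start = [], None
    for i, a in enumerate(active):
        if a and start is None:
            start = i
        elif not a and start is not None:
            if i - start >= min_frames:
                spans.append((start / fps, i / fps))  # seconds
            start = None
    if start is not None and len(active) - start >= min_frames:
        spans.append((start / fps, len(active) / fps))
    return spans

t = np.linspace(0.0, 5.0, 101)                        # 5 s of driving at 20 FPS
angle = np.clip((t - 2.0) * 100.0, 0.0, 25.0)         # 100 deg/s swerve at t = 2 s
print(detect_evasive(angle))                          # [(2.0, 2.25)]
```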

27 pages, 3109 KB  
Article
Early Detection of Virtual Machine Failures in Cloud Computing Using Quantum-Enhanced Support Vector Machine
by Bhargavi Krishnamurthy, Saikat Das and Sajjan G. Shiva
Mathematics 2026, 14(7), 1229; https://doi.org/10.3390/math14071229 - 7 Apr 2026
Viewed by 241
Abstract
Cloud computing is one of the essential computing platforms for modern enterprises. A total of 84 percent of large businesses used cloud computing services in 2025 to enable remote working and more flexible operations at reduced cost. Cloud environments are dynamic and multitenant, often demanding high computational resources for real-time processing. However, cloud system behavior is subject to various kinds of anomalies in which data patterns deviate from normal traffic. The varieties of anomalies that exist are performance anomalies, security anomalies, resource anomalies, and network anomalies. These anomalies disrupt the normal operation of cloud systems by increasing latency, reducing throughput, and frequently violating service level agreements (SLAs), and they can culminate in the failure of virtual machines. Among these, virtual machine failures are particularly consequential: the normal operation of the virtual machine is interrupted, resulting in the degradation of services. Virtual machine failures happen because of resource exhaustion, malware access, packet loss, Distributed Denial of Service attacks, etc. Hence, there is a need to detect impending virtual machine failures and prevent them through proactive measures. Traditional machine learning techniques often struggle with high-dimensional data and nonlinear correlations, resulting in poor real-time adaptation. Quantum machine learning is therefore a promising alternative that effectively deals with combinatorially complex and high-dimensional data. In this paper, a novel quantum-enhanced support vector machine (QSVM) is designed as an optimized binary classifier that combines the principles of quantum computing and the support vector machine. It encodes the classical data into quantum states. Feature mapping transforms the data into a high-dimensional Hilbert space, and quantum kernel evaluation measures similarities between the encoded states. Through effective optimization, optimal hyperplanes are designed to detect the anomalous behavior of virtual machines. This results in an exponential speed-up of operation and avoids local minima through entanglement and superposition. The performance of the proposed QSVM is analyzed using the QuCloudSim 1.0 simulator and further validated using an expected value analysis methodology. Full article
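
The quantum-kernel idea can be emulated classically for intuition: angle-encode each feature vector into a product state, take state fidelity as the kernel, and train an SVM on the precomputed Gram matrix. The sketch below is such an emulation under assumed encodings, not the paper's QuCloudSim setup.

```python
# Classical numpy emulation of a fidelity-kernel SVM (an assumption-laden
# sketch, not the paper's setup): angle-encode features into a product
# state and use |<psi_a|psi_b>|^2 as the kernel.
import numpy as np
from sklearn.svm import SVC

def encode(x: np.ndarray) -> np.ndarray:
    # product of single-qubit states [cos(x_i/2), sin(x_i/2)]
    state = np.array([1.0])
    for xi in x:
        state = np.kron(state, np.array([np.cos(xi / 2), np.sin(xi / 2)]))
    return state

def fidelity_kernel(A: np.ndarray, B: np.ndarray) -> np.ndarray:
    SA = np.array([encode(a) for a in A])
    SB = np.array([encode(b) for b in B])
    return (SA @ SB.T) ** 2                   # squared overlap of real states

rng = np.random.default_rng(0)
X = rng.uniform(0, np.pi, (60, 4))            # 4 scaled VM health features
y = (X.sum(axis=1) > 2 * np.pi).astype(int)   # toy "failure" label
clf = SVC(kernel="precomputed").fit(fidelity_kernel(X, X), y)
print(f"train accuracy: {clf.score(fidelity_kernel(X, X), y):.2f}")
```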

23 pages, 2167 KB  
Article
Congestion-Aware Traffic Forecasting with Physics-Guided Spatio-Temporal Graph Convolutional Networks
by Yueqiao Zhang and Jian Zhang
Appl. Sci. 2026, 16(7), 3546; https://doi.org/10.3390/app16073546 - 4 Apr 2026
Viewed by 325
Abstract
Traffic flow forecasting provides essential support for the construction of smart transportation systems. Despite the superiority of the ASTGCN, which uses an attention mechanism to capture spatio-temporal correlations, it lacks an explicit physical interpretation, a limitation it shares with purely data-driven models in general. As a result, in the presence of sparse or unstable congestion, these data-driven models often violate conservation laws and may generate “physical anomalies” or other logically impossible states. To close the gap between data-driven expressiveness and physical consistency, we propose the congestion-aware physics-guided STGCN (CAP-STGCN). This framework builds a synergistic model that intrinsically couples macroscopic traffic flow kinematics (the fundamental diagram) with the spatio-temporal learning process. By constraining the solution space, it confines predictions to a feasible manifold, enforcing kinematic consistency among flow, density, and speed. Concurrently, to address slow convergence under long-tailed distributions caused by scarce training samples for congested states, a dynamic congestion-rectification mechanism is introduced. This mechanism redefines the optimization landscape by prioritizing hard-to-predict saturation occurrences. Experiments show that, compared with other models, CAP-STGCN achieves higher prediction accuracy; more importantly, it is free of physical anomalies during inference and can be directly used in practice. Full article
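
The physics-guided coupling can be illustrated as a composite loss: a data term plus a penalty on violations of the fundamental-diagram identity q = k * v. The weighting and the plain-MSE data term below are assumptions for illustration, not the paper's exact loss.

```python
# Sketch of a physics-guided loss: the data term is augmented with a
# penalty on violations of flow = density * speed, keeping predictions on
# the feasible manifold. Weighting and MSE form are assumptions.
import torch

def physics_guided_loss(q_pred, k_pred, v_pred, q_true, lam=0.1):
    data_term = torch.mean((q_pred - q_true) ** 2)
    # fundamental-diagram consistency: predicted flow must equal density * speed
    physics_term = torch.mean((q_pred - k_pred * v_pred) ** 2)
    return data_term + lam * physics_term

q = torch.tensor([1800.0, 900.0])          # veh/h
k = torch.tensor([30.0, 60.0])             # veh/km
v = torch.tensor([60.0, 15.0])             # km/h
print(physics_guided_loss(q, k, v, torch.tensor([1750.0, 950.0])))
```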
