Search Results (1,153)

Search Parameters:
Keywords = real-time anomaly detection

13 pages, 1599 KB  
Article
VCMA-MRAM In-Memory Stochastic Sampling for Edge Boltzmann Machine Inference
by Xuesheng Deng, Yuesheng Li, Bin Fang and Lin Wang
Electronics 2026, 15(8), 1622; https://doi.org/10.3390/electronics15081622 - 13 Apr 2026
Abstract
Edge intelligence is often limited by the computation–energy trade-off in resource-constrained devices. Boltzmann machines (BMs) provide strong unsupervised learning capability, yet their reliance on Gibbs sampling makes digital implementations costly in both computation and energy. In this paper, we present a voltage-controlled magnetic anisotropy magnetic tunnel junction (VCMA-MTJ)-based MRAM system that performs in-memory stochastic sampling for state generation and updates in restricted/deep Boltzmann machines (RBMs/DBMs). By exploiting the intrinsic stochastic switching of VCMA-MTJs, the proposed system achieves probabilistic sampling with an energy as low as ∼10 fJ per sample. Implemented on a microcontroller-based edge platform, it enables real-time multi-sensor anomaly detection with an F1-score of 0.9854 and stable operation. The proposed hardware–algorithm co-design achieves in situ stochastic computing and storage within a single MRAM cell, providing an ultra-low-power substrate for probabilistic inference at the edge. Full article
(This article belongs to the Section Electronic Materials, Devices and Applications)
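
The sampling primitive this abstract moves into hardware is the Bernoulli draw inside Gibbs sampling. The sketch below is a minimal software stand-in (the layer sizes, biases, and the stochastic_bit helper are assumptions for illustration, not the authors' design); in the proposed system, stochastic_bit is the operation a voltage-biased VCMA-MTJ cell would perform in-memory at roughly 10 fJ per sample.

```python
# Minimal sketch (not the authors' implementation): one Gibbs step of a
# restricted Boltzmann machine, with the Bernoulli draw isolated in
# stochastic_bit() -- the step that stochastic MTJ switching would replace.
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def stochastic_bit(p):
    """Software stand-in for an MTJ biased so its switching probability is p."""
    return (rng.random(p.shape) < p).astype(np.float64)

def gibbs_step(v, W, b_h, b_v):
    """One visible -> hidden -> visible Gibbs sampling step of an RBM."""
    h = stochastic_bit(sigmoid(v @ W + b_h))        # sample hidden units
    v_new = stochastic_bit(sigmoid(h @ W.T + b_v))  # sample visible units
    return v_new, h

# Toy usage: 6 visible units, 4 hidden units, zero biases.
W = rng.normal(scale=0.1, size=(6, 4))
v = stochastic_bit(np.full(6, 0.5))
for _ in range(100):
    v, h = gibbs_step(v, W, np.zeros(4), np.zeros(6))
print("final visible sample:", v)
```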

20 pages, 1585 KB  
Article
CNN-LSTM-POT-Based Anomaly Detection for Smart Greenhouse Sensor Data: A Real-Time Edge Deployment Approach
by Jun Shu and Dengke Yang
Future Internet 2026, 18(4), 205; https://doi.org/10.3390/fi18040205 - 13 Apr 2026
Abstract
Traditional agricultural greenhouse environmental monitoring systems often lack effective anomaly detection mechanisms, which can lead to inaccurate environmental regulation and negatively affect plant growth. To address this issue, this paper proposes a greenhouse monitoring system integrating Zigbee and 4G communication technologies, combined with a CNN-LSTM-POT anomaly detection algorithm. The system employs a Convolutional Neural Network (CNN) to extract local spatial features from multi-source sensor data and a Long Short-Term Memory (LSTM) network to model long-term temporal dependencies. To accurately identify anomalies, the Peaks Over Threshold (POT) method from extreme value theory is applied to prediction residuals, enabling adaptive dynamic threshold determination. Experimental results show that the proposed algorithm substantially improves anomaly detection precision, prevents erroneous data from disrupting greenhouse control decisions and reduces the volume of data transmitted to the cloud platform, thereby lowering computational overhead. This work provides a reliable and efficient solution for data monitoring and precise environmental control in smart agricultural greenhouses. Full article
(This article belongs to the Topic Smart Edge Devices: Design and Applications)
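
The adaptive thresholding step named in the abstract follows the standard Peaks Over Threshold recipe: fit a Generalized Pareto Distribution to residual excesses over an initial high quantile, then invert its tail at a target risk level. The sketch below illustrates that general recipe; the initial quantile and risk level q are assumed values, not taken from the paper.

```python
# Minimal POT sketch: derive an adaptive anomaly threshold from prediction residuals.
import numpy as np
from scipy.stats import genpareto

def pot_threshold(residuals, init_quantile=0.98, q=1e-3):
    """Return an anomaly threshold on residuals via Peaks Over Threshold."""
    u = np.quantile(residuals, init_quantile)      # initial high threshold
    excesses = residuals[residuals > u] - u        # peaks over threshold
    c, _, scale = genpareto.fit(excesses, floc=0)  # fit GPD to the excesses
    p_u = excesses.size / residuals.size           # empirical exceedance prob.
    # Tail inversion: P(X > z) ~= p_u * (1 - F_GPD(z - u)) = q.
    return u + genpareto.ppf(1.0 - q / p_u, c, loc=0, scale=scale)

rng = np.random.default_rng(1)
residuals = np.abs(rng.normal(size=20_000))        # stand-in for |y - y_hat|
thr = pot_threshold(residuals)
print("adaptive threshold:", thr, "flagged points:", int((residuals > thr).sum()))
```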

28 pages, 1987 KB  
Review
Applications, Challenges, and Future Trends of Artificial Intelligence of Things (AIoT)-Enabled Water Quality and Resource Management
by Ashikur Rahman, Gwo Chin Chung and Yin Hoe Ng
Water 2026, 18(8), 919; https://doi.org/10.3390/w18080919 - 12 Apr 2026
Viewed by 81
Abstract
Safe and sustainable water sources are a serious global concern because of population growth, urbanization, industrialization, and climate change. Conventional water surveillance systems that rely on periodic sampling and laboratory analysis fail to provide the time-sensitive, high-resolution data needed for proactive water management. Artificial Intelligence of Things (AIoT) offers a viable solution, as it provides tools for continuous monitoring and predictive analytics. The integration of IoT sensor networks with machine learning (ML) methods enables real-time data-driven water resource monitoring and intelligent decision-making, enhances water quality assessment, supports early detection of anomalies, improves predictive capabilities for floods and droughts, and facilitates efficient irrigation and reservoir management, ultimately leading to sustainable and resilient water management systems. The paper presents an extensive overview of AIoT solutions for water quality monitoring and water resource management, including IoT sensor networks for real-time data acquisition, machine learning methods for prediction, classification, and anomaly detection, and edge computing platforms for data processing and decision support. This study also highlights existing possibilities, obstacles, and research gaps identified through a review of the recent literature. Key challenges reported across multiple studies include limited data availability, sensor calibration bias, integration of heterogeneous data, and insufficient model interpretability. Advanced paradigms such as digital twin systems, TinyML, federated learning, and explainable AI (XAI) are examined as enabling technologies to enhance system efficiency, flexibility, and transparency. Future research directions are outlined to develop scalable, interpretable, and real-time water management solutions. Full article

25 pages, 3642 KB  
Article
Label-Free Deep Learning with Feature Adaptation for Crop Anomaly Detection on Small Datasets
by Ming-Der Yang, Tzu-Han Lee, Hsin-Hung Tseng, Tung-Ching Su and Yu-Chun Hsu
Agriculture 2026, 16(8), 854; https://doi.org/10.3390/agriculture16080854 - 12 Apr 2026
Viewed by 72
Abstract
Efficient crop health monitoring is crucial for global food security. Supervised deep learning approaches are often impractical due to the scarcity of large, labeled datasets. To address this limitation, this study adapts EfficientAD, an unsupervised, label-free anomaly detection framework originally designed for industrial inspection, for agricultural imagery on small datasets. The method utilizes a Patch Description Network (PDN) for localized feature extraction, a student network for local anomalies, and an autoencoder for global structural constraints. Benchmarked against AnoGAN, Pix2Pix, InTra, and Teacher–Student models, the framework demonstrated superior performance on the MVTec AD, PlantVillage, Coffee Leaf, and a custom real-world Sweet Potato dataset. The model achieved perfect area under the receiver operating characteristic curve (AUROC) scores of up to 100% in categories like “Pongamia”, “Potato”, and “Coffee Leaf”. While image-level classification was exceptionally robust, pixel-level localization (AUPRO) proved sensitive to complex agricultural backgrounds. To overcome this, a background interference analysis was conducted using Background Removed (BGRM) and out-of-distribution Background Replaced-Green (BGRP-G) strategies on the custom dataset. Notably, the BGRP-G strategy remarkably improved the image-level AUROC from 88.9% to 99.5% and substantially boosted the pixel-level AUPRO from 47.1% to 61.9%, successfully preserving the boundary integrity of severe structural defects. Achieving millisecond-level latency without complex data augmentation, this adapted label-free framework offers a versatile, highly efficient solution for real-time crop health diagnostics on resource-constrained Edge AI devices. Full article
(This article belongs to the Section Artificial Intelligence and Digital Agriculture)
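
The scoring design described here (a student network for local anomalies plus an autoencoder for global structure) can be illustrated by fusing two feature-discrepancy maps into one anomaly map. The sketch below is an assumed simplification in the spirit of that design, not the released EfficientAD code; the feature shapes and the 50/50 fusion weights are illustrative.

```python
# Minimal sketch: combine local (teacher-student) and global (autoencoder)
# discrepancy maps into one anomaly map; the image score is its maximum.
import numpy as np

def anomaly_map(teacher_feats, student_feats, ae_feats):
    """All inputs: (H, W, C) feature maps extracted from the same image."""
    local_map = ((teacher_feats - student_feats) ** 2).mean(axis=-1)
    global_map = ((ae_feats - student_feats) ** 2).mean(axis=-1)
    # Normalize each map before fusing so neither branch dominates.
    local_map = (local_map - local_map.mean()) / (local_map.std() + 1e-8)
    global_map = (global_map - global_map.mean()) / (global_map.std() + 1e-8)
    return 0.5 * (local_map + global_map)

rng = np.random.default_rng(2)
t, s, a = (rng.normal(size=(64, 64, 384)) for _ in range(3))
amap = anomaly_map(t, s, a)
print("image-level anomaly score:", float(amap.max()))
```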

22 pages, 908 KB  
Review
Exploring Recent Maritime Research on AIS-Based Ship Behavior Analysis and Modeling
by Anila Duka, Houxiang Zhang, Pero Vidan and Guoyuan Li
J. Mar. Sci. Eng. 2026, 14(8), 712; https://doi.org/10.3390/jmse14080712 - 11 Apr 2026
Viewed by 113
Abstract
Automatic Identification System (AIS) data provide valuable insights into ship behavior, supporting maritime safety, situational awareness, and operational efficiency capabilities that are increasingly required for autonomous ship functions and harbor maneuvering assistance. This review synthesizes recent research on AIS-based ship behavior analysis and modeling published between 2022 and 2024 using a structured literature search and screening process informed by PRISMA principles. The review presents a five-stage workflow, spanning data processing, data analysis, knowledge extraction, modeling, and runtime applications with emphasis on how these stages contribute to perception, prediction, and decision support in automated navigation. Four dimensions are considered in data analysis, including statistical analysis, safety indicators, situational awareness, and anomaly detection. The modeling approaches are categorized into classification, regression, and optimization, highlighting current limitations such as data quality, algorithmic transparency, and real-time performance, while also assessing runtime feasibility for onboard or edge deployment. Three runtime application directions are identified: autonomous vessel functions, remote monitoring and control operations, and onboard decision-support tools, with numerous studies focusing on constrained waterways and port-approach scenarios. Future directions suggest integrating multi-source data and advancing machine learning models to improve robustness in complex traffic and harbor environments. By linking theoretical insights with practical onboard needs, this study provides guidance for developing intelligent, adaptive, and safety-enhancing maritime systems. Full article
(This article belongs to the Special Issue Autonomous Ship and Harbor Maneuvering: Modeling and Control)
20 pages, 4549 KB  
Article
Online Track Anomaly Detection: Comparison of Different Machine Learning Techniques Through Injection of Synthetic Defects on Experimental Datasets
by Giovanni Bellacci, Luca Di Carlo, Marco Fiaschi, Luca Bocciolini, Carmine Zappacosta and Luca Pugi
Machines 2026, 14(4), 424; https://doi.org/10.3390/machines14040424 - 10 Apr 2026
Viewed by 279
Abstract
The adoption of instrumented wheelsets on diagnostic trains offers the possibility of continuous monitoring of wheel–rail contact forces. The collection of large datasets can be exploited for diagnostic purposes, aiming to localize specific track defects, allowing significant improvements in terms of safety and maintenance costs. Machine learning (ML) techniques can be used to automate anomaly detection. In this work, the authors compare the application of various ML algorithms based on the identification of different frequency or time-based features of analyzed signals. To perform the activity, a significant number and variety of local defects have been included in the recorded data. From a practical point of view, the insertion of real known defects into an existing line is extremely time-consuming, expensive, and not immune to safety issues. On the other hand, the design of anomaly detection algorithms involves the usage of relatively extended datasets with different faulty conditions. The authors propose deliberately adding real contact force profiles of healthy lines to a mix of synthetic signals, which substantially reproduce the behavior and the variability of foreseen faulty conditions. The results of this work, although preliminary and still to be completed, offer a contribution to the scientific community both in terms of obtained results and adopted methodologies. Full article
(This article belongs to the Special Issue AI-Driven Reliability Analysis and Predictive Maintenance)
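
The core idea of injecting synthetic defects into healthy recordings can be illustrated as follows; the burst model, amplitude, decay rate, and sampling rate are assumptions for illustration, not the defect signatures used by the authors.

```python
# Minimal sketch: add a short decaying oscillatory burst -- a stand-in for a
# local track defect -- to a healthy wheel-rail contact-force recording,
# producing per-sample labels for training anomaly detectors.
import numpy as np

def inject_defect(force, fs, t0, freq_hz=60.0, amp=5.0, decay=25.0):
    """Inject a decaying sinusoidal burst starting at time t0 (seconds)."""
    t = np.arange(force.size) / fs
    burst = (amp * np.exp(-decay * np.clip(t - t0, 0.0, None))
             * np.sin(2 * np.pi * freq_hz * (t - t0)))
    burst[t < t0] = 0.0
    labels = (np.abs(burst) > 0.05 * amp).astype(int)  # per-sample defect label
    return force + burst, labels

rng = np.random.default_rng(3)
fs = 1000.0                                        # Hz, assumed sampling rate
healthy = 50.0 + rng.normal(scale=1.0, size=5000)  # kN, healthy force profile
faulty, labels = inject_defect(healthy, fs, t0=2.0)
print("labelled defect samples:", int(labels.sum()))
```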

27 pages, 10569 KB  
Article
Operational Discharge Severity Analysis and Multi-Horizon Forecasting Based on Reservoir Operation Data: A Case Study of Ba Ha Hydropower Reservoir, Vietnam
by Nguyen Thi Huong, Vo Quang Tuong and Ho Huu Loc
Hydrology 2026, 13(4), 110; https://doi.org/10.3390/hydrology13040110 - 10 Apr 2026
Viewed by 180
Abstract
Reservoir-release-induced flooding is a major downstream hazard worldwide, yet most warning systems rely on hydraulic modeling and underuse real-time reservoir operation data. This study presents a data-driven framework to detect flood discharge events, assess downstream operational severity, and forecast daily discharges using deep learning. The approach was validated at the Ba Ha hydropower reservoir (Vietnam) with inflow, discharge, water level, and CHIRPS rainfall data to represent basin-scale precipitation forcing. More than 160 discharge events were identified using a composite Operational Severity Index (OSI) based on peak discharge, duration, and rise rate; although only ~2% were extreme, they posed the greatest risks. Among three Transformer-based models, Informer achieved the best short-term forecasting performance (RMSE ≈ 78 m³/s, R² ≈ 0.80), while Autoformer showed greater stability at longer horizons (3–7 days). In contrast, all models exhibited reduced skill under abrupt and extreme discharge conditions. These results demonstrate that combining trend- and anomaly-aware modeling enables reliable discharge prediction and severity assessment without complex hydraulic simulations. The proposed framework provides a practical foundation for reservoir early warning systems by transforming routine operational data into actionable flood-risk information. Full article
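
The composite Operational Severity Index combines peak discharge, duration, and rise rate per event. A minimal sketch of such a composite score follows; the min-max normalization and the weights are assumptions for illustration, as the paper defines its own OSI formulation.

```python
# Minimal sketch: score discharge events by a weighted sum of normalized
# peak discharge, event duration, and rise rate.
import numpy as np

def severity_index(peaks, durations, rise_rates, weights=(0.5, 0.25, 0.25)):
    """Composite OSI in [0, 1] per event from three event descriptors."""
    def minmax(x):
        x = np.asarray(x, dtype=float)
        return (x - x.min()) / (x.max() - x.min() + 1e-12)
    w_p, w_d, w_r = weights
    return w_p * minmax(peaks) + w_d * minmax(durations) + w_r * minmax(rise_rates)

# Toy usage: three discharge events (m^3/s, hours, m^3/s per hour).
osi = severity_index(peaks=[450, 1200, 300],
                     durations=[12, 48, 6],
                     rise_rates=[40, 150, 20])
print("OSI per event:", np.round(osi, 3))   # the rare extreme event scores highest
```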

23 pages, 7215 KB  
Article
Applications of Distributed Optical Fiber Sensing Technology in Wellbore Leakage Monitoring and Its Integrity Analysis of Underground Gas Storage
by Zhentao Li, Xianjian Zou and Pengtao Wu
Energies 2026, 19(8), 1859; https://doi.org/10.3390/en19081859 - 10 Apr 2026
Viewed by 143
Abstract
With the exponential growth of natural gas reserves and utilization scale in China, underground gas storage (UGS) facilities—critical infrastructure within the natural gas production-supply-storage-sales system—have entered a phase of rapid expansion. As the core component connecting subsurface reservoirs with surface systems, wellbore integrity directly influences operational safety and service lifespan of UGS facilities. However, current leakage detection and integrity analysis methodologies for gas storage wellbores remain deficient in effective real-time monitoring capabilities. Traditional methods, however, are constrained by limited spatial coverage and insufficient precision, rendering them inadequate for comprehensive, continuous safety monitoring requirements. To address this industry challenge, this study proposes a real-time wellbore integrity monitoring framework based on distributed fiber optic sensing technology, integrating distributed temperature sensing (DTS) and distributed acoustic sensing (DAS) devices into a synergistic monitoring system. The DTS component enables preliminary localization of potential leakage points through detection of minute temperature anomalies along the wellbore, while the DAS unit accurately identifies acoustic signatures caused by gas leakage within casings via monitoring of acoustic vibration signals propagating along the optical fiber. Through joint analysis of DTS and DAS data streams, real-time diagnosis of wellbore leakage events and integrity status can be achieved. Field trials demonstrated that this hybrid monitoring system achieved leakage localization accuracy within 1.0 m, effectively distinguishing normal operational signals from abnormal leakage characteristics. During actual monitoring operations, no indications of wellbore integrity compromise were detected; only minor noise and interference signals originating from surface construction activities were observed. Full article
(This article belongs to the Section D: Energy Storage and Application)
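
The joint DTS/DAS diagnosis described above amounts to flagging depths where a temperature anomaly and an acoustic-energy anomaly coincide. The sketch below illustrates one such fusion rule on synthetic profiles; the thresholds, the geothermal-trend baseline, and the simulated leak signature are assumptions, not the field system's processing chain.

```python
# Minimal sketch: joint DTS/DAS leak localization along the wellbore depth.
import numpy as np

def locate_leaks(depth_m, dts_temp, das_energy, z_thr=4.0):
    """Return candidate leak depths where DTS and DAS anomalies coincide."""
    baseline = np.polyval(np.polyfit(depth_m, dts_temp, 1), depth_m)  # geothermal trend
    temp_dev = np.abs(dts_temp - baseline)
    def zscore(x):
        return (x - np.median(x)) / (x.std() + 1e-12)
    joint = (zscore(temp_dev) > z_thr) & (zscore(das_energy) > z_thr)
    return depth_m[joint]

rng = np.random.default_rng(9)
depth = np.arange(0.0, 2000.0, 1.0)                 # 1 m spatial resolution
dts = 25 + 0.03 * depth + 0.05 * rng.normal(size=depth.size)   # temperature log
das = rng.normal(size=depth.size) ** 2                          # acoustic energy
dts[1200:1203] -= 1.5                                # cooling at a simulated leak
das[1200:1203] += 40.0                               # leak-induced acoustic energy
print("candidate leak depths (m):", locate_leaks(depth, dts, das))
```
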
24 pages, 4186 KB  
Article
Progressive Spatiotemporal Graph Modeling for Spacecraft Anomaly Detection
by Zihan Chen, Zewen Li, Yuge Cao, Yue Wang and Hsi Chang
Entropy 2026, 28(4), 426; https://doi.org/10.3390/e28040426 - 10 Apr 2026
Viewed by 215
Abstract
The growing number of on-orbit spacecraft and the increasing volume of telemetry data have made intelligent anomaly detection in multi-channel telemetry essential for mission operations. Current spacecraft anomaly detection methods primarily rely on statistical models or time-series deep learning approaches, which often fail to explicitly model spatiotemporal dependencies across multiple telemetry channels. This shortcoming limits their ability to capture the dynamically evolving and intricately coupled relationships between variables. To overcome this limitation, a Progressive Spatiotemporal Graph (PSTG) model is proposed for anomaly detection in multi-channel spacecraft telemetry. PSTG employs a multi-scale patch embedding module to extract hierarchical semantic features from multi-channel time series, effectively reducing the dimensionality of the spatiotemporal graph. It constructs a sparse adjacency matrix using a multi-head attention mechanism that integrates intra-channel temporal dynamics, inter-channel spatial correlations, and cross-channel spatiotemporal interactions. An improved multi-head graph attention network then captures pairwise dependencies among nodes within the adjacency matrix. As a result, PSTG encodes rich spatiotemporal representations derived from intricate variable interactions, enabling accurate, real-time prediction of multi-channel telemetry. Furthermore, a dynamic thresholding mechanism is incorporated into PSTG to perform online anomaly detection based on prediction residuals. Extensive experiments on real-world spacecraft telemetry data collected over 84 months show that PSTG outperforms eleven state-of-the-art benchmark methods in almost all cases across multiple evaluation metrics. Finally, visualizations of the learned adjacency and attention matrices are presented to interpret the spatiotemporal modeling process, providing operators with actionable insights into the detected anomalies and facilitating root cause analysis. Full article
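
One concrete ingredient described here, deriving a sparse inter-channel adjacency matrix from attention scores, can be sketched as follows. This is an assumed single-head simplification (no multi-scale patch embedding, no cross-channel spatiotemporal terms), not the PSTG model itself.

```python
# Minimal sketch: build a sparse channel adjacency matrix from scaled
# dot-product attention over per-channel embeddings, keeping top-k neighbours.
import numpy as np

def attention_adjacency(channel_emb, k=3):
    """channel_emb: (num_channels, dim) -> sparse (num_channels, num_channels)."""
    d = channel_emb.shape[1]
    scores = channel_emb @ channel_emb.T / np.sqrt(d)      # dot-product attention
    scores = np.exp(scores - scores.max(axis=1, keepdims=True))
    attn = scores / scores.sum(axis=1, keepdims=True)      # row-wise softmax
    adj = np.zeros_like(attn)
    top_k = np.argsort(attn, axis=1)[:, -k:]               # keep top-k per channel
    np.put_along_axis(adj, top_k, np.take_along_axis(attn, top_k, axis=1), axis=1)
    return adj

rng = np.random.default_rng(6)
emb = rng.normal(size=(12, 16))      # 12 telemetry channels, 16-dim embeddings
A = attention_adjacency(emb)
print("non-zero edges:", int((A > 0).sum()))   # 12 channels x k = 36
```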

24 pages, 2983 KB  
Article
A Neural Network-Enhanced Kalman Filter for Time Series Anomaly Detection in Cyber-Physical Systems
by Zhongnan Ma, Wentao Xu, Hao Zhou, Ke Yu and Xiaofei Wu
Sensors 2026, 26(8), 2332; https://doi.org/10.3390/s26082332 - 9 Apr 2026
Viewed by 170
Abstract
Cyber-physical systems (CPSs) represent sophisticated intelligent architectures that tightly couple computational elements, communication networks, and physical processes. Their deployments now span virtually every industrial and civilian domain—from power grids and manufacturing plants to autonomous transportation networks. Ensuring the secure operation of CPSs relies fundamentally on effective time series anomaly detection, which remains a challenging task due to the complex, often unknown system dynamics and non-negligible sensor noise present in real-world environments. To address these challenges, we introduce a Neural Network-Enhanced Kalman Filter (NNEKF), a novel anomaly detection framework that combines model-based filtering with data-driven learning. The NNEKF employs a two-stage trained neural network with a specialized architecture: the first stage learns the underlying dynamics of the CPS, while the second stage optimizes the computation of the Kalman gain during the update step. At inference time, the enhanced Kalman filter recursively estimates the likelihood of observed sensor measurements to identify anomalies, supported by a batched parallel inference scheme that delivers substantial speedups. Extensive experiments on benchmark datasets demonstrate that the NNEKF attains an average F1-score of 0.935, coupled with rapid inference and minimal model footprint—surpassing all competitive baselines and facilitating dependable real-time anomaly detection for CPS environments. Full article
(This article belongs to the Section Industrial Sensors)
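
The NNEKF idea, a Kalman recursion in which the gain computation is learnable and the anomaly score is the likelihood of each new measurement, can be illustrated with a plain linear-Gaussian filter. The sketch below uses the analytic gain as a placeholder where the paper's trained network would plug in; the system matrices, noise covariances, and toy data are assumptions.

```python
# Minimal sketch: Kalman-filter anomaly scoring via the negative log-likelihood
# of each measurement under the predicted distribution; gain_fn marks where a
# learned Kalman gain would replace the analytic one.
import numpy as np

def kf_anomaly_scores(z, A, H, Q, R, x0, P0, gain_fn=None):
    x, P = x0.copy(), P0.copy()
    scores = []
    for zk in z:
        x, P = A @ x, A @ P @ A.T + Q                  # predict
        S = H @ P @ H.T + R                            # innovation covariance
        innov = zk - H @ x
        scores.append(0.5 * (innov @ np.linalg.solve(S, innov)
                             + np.log(np.linalg.det(2 * np.pi * S))))
        K = gain_fn(P, S) if gain_fn else P @ H.T @ np.linalg.inv(S)
        x = x + K @ innov                              # update
        P = (np.eye(P.shape[0]) - K @ H) @ P
    return np.array(scores)

# Toy usage: scalar sensor tracked with a constant-velocity state model.
A = np.array([[1.0, 1.0], [0.0, 1.0]])
H = np.array([[1.0, 0.0]])
Q = 0.01 * np.eye(2)
R = np.array([[0.5]])
rng = np.random.default_rng(4)
z = np.cumsum(rng.normal(size=(200, 1)), axis=0)
z[150] += 25.0                                         # injected fault
s = kf_anomaly_scores(z, A, H, Q, R, x0=np.zeros(2), P0=np.eye(2))
print("most anomalous step:", int(s.argmax()))
```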

28 pages, 664 KB  
Article
A Cross-Modal Temporal Alignment Framework for Artificial Intelligence-Driven Sensing in Multilingual Risk Monitoring
by Hanzhi Sun, Jiarui Zhang, Wei Hong, Yihan Fang, Mengqi Ma, Kehan Shi and Manzhou Li
Sensors 2026, 26(8), 2319; https://doi.org/10.3390/s26082319 - 9 Apr 2026
Viewed by 159
Abstract
Against the background of highly interconnected global capital markets and rapidly propagating cross-lingual information streams, traditional anomaly detection paradigms based solely on single-modality numerical time-series sensors are insufficient for forward-looking risk sensing. From the perspective of artificial intelligence-driven sensing, this study proposes a multilingual semantic–numerical collaborative Transformer framework to construct a unified multimodal financial sensing architecture for intelligent anomaly sensing and risk perception. Within the proposed sensing paradigm, multilingual texts are conceptualized as semantic sensors that continuously emit event-driven sensing signals, while market prices, trading volumes, and order book dynamics are modeled as heterogeneous numerical sensor streams reflecting behavioral market sensing responses. These heterogeneous sensors are jointly integrated through a cross-modal sensor fusion architecture. A cross-modal temporal alignment attention mechanism is designed to explicitly model dynamic lag structures between semantic sensing signals and numerical sensor responses, enabling temporally adaptive sensor-level alignment and fusion. To enhance sensing robustness, a multilingual semantic noise-robust encoding module is introduced to suppress unreliable textual sensor noise and stabilize cross-lingual semantic sensing representations. Furthermore, a semantic–numerical collaborative risk fusion module is constructed within a shared latent sensing space to achieve adaptive sensor contribution weighting and cross-sensor feature coupling, thereby improving anomaly sensing accuracy and robustness under complex multimodal sensing environments. Extensive experiments conducted on real-world multi-market financial sensing datasets demonstrate that the proposed artificial intelligence-driven sensing framework significantly outperforms representative statistical and deep learning baselines. The framework achieves a Precision of 0.852, Recall of 0.781, F1-score of 0.815, and an AUC of 0.892, while substantially improving early warning time in practical risk sensing scenarios. In cross-market transfer settings, the proposed sensing architecture maintains stable anomaly sensing performance under bidirectional domain shifts, with AUC consistently exceeding 0.86, indicating strong structural generalization across heterogeneous sensing environments. Ablation analysis further verifies that temporal sensor alignment, semantic sensor denoising, and collaborative cross-sensor risk coupling contribute independently and synergistically to the overall sensing performance. Overall, this study establishes a scalable multimodal intelligent sensing framework for dynamic financial anomaly sensing, providing an effective artificial intelligence-driven sensing solution for cross-market risk surveillance and adaptive financial signal sensing. Full article
(This article belongs to the Special Issue Artificial Intelligence-Driven Sensing)
18 pages, 1606 KB  
Article
Multi-Scale Dynamic Perception and Context Guidance Modulation for Efficient Deepfake Detection
by Yuanqing Ding, Fanliang Bu and Hanming Zhai
Electronics 2026, 15(8), 1569; https://doi.org/10.3390/electronics15081569 - 9 Apr 2026
Viewed by 190
Abstract
Deepfake technology poses significant threats to information authenticity and social trust, necessitating effective detection methods. However, existing detection approaches predominantly rely on high-complexity network architectures that, while accurate in controlled environments, suffer from prohibitive computational costs that hinder deployment in resource-constrained scenarios such as social media platforms. To address this efficiency-accuracy dilemma, we propose a lightweight face forgery detection method that systematically learns multi-scale forgery traces. Our approach features a four-stage lightweight architecture that hierarchically extracts features from local textures to global semantics, mimicking the human visual system. Within each stage, a multi-scale dynamic perception mechanism divides feature channels into parallel groups equipped with lightweight attention modules to capture forgery cues spanning pixel-level anomalies, local structures, regional patterns, and semantic inconsistencies. Furthermore, rather than relying on conventional feature fusion that risks suppressing subtle artifacts, we introduce a novel Context-Guided Dynamic Convolution. This mechanism uses mid-level spatial anomalies as active anchors to dynamically modulate high-level semantic filters, with the goal of mitigating the disconnect between semantic content and forgery evidence. Our model achieves strong performance, yielding an AUC of 91.98% on FaceForensics++ and 93.50% on DeepFake Detection Challenge, outperforming current state-of-the-art lightweight methods. Furthermore, compared to heavy Vision Transformers, our model achieves a superior performance-efficiency trade-off, requiring only 3.06 M parameters and 1.36 G FLOPs, making it highly suitable for real-time, resource-constrained deployment. Full article
(This article belongs to the Section Electronic Multimedia)

17 pages, 33215 KB  
Data Descriptor
ANAID: Autonomous Naturalistic Obstacle-Avoidance Interaction Dataset
by Manuel Garcia-Fernandez, Maria Juarez Molera, Adrian Canadas Gallardo, Nourdine Aliane and Javier Fernandez Andres
Data 2026, 11(4), 77; https://doi.org/10.3390/data11040077 - 8 Apr 2026
Viewed by 241
Abstract
This paper presents ANAID (Autonomous Naturalistic obstacle-Avoidance Interaction Dataset), a new multimodal dataset designed to support research on autonomous driving, particularly with regard to obstacle avoidance and naturalistic driver–vehicle interaction. Data were collected using a Hyundai Tucson Hybrid equipped with a Comma-3X autonomous-driving development kit, combining high-resolution front-facing video with detailed CAN-bus telemetry. The dataset comprises four data collection campaigns, each corresponding to a single continuous driving session, yielding a total of 208 videos and 240,014 synchronized frames. In addition to the video data, the dataset provides vehicle state measurements (speed, acceleration, steering, pedal positions, turn signals, etc.) and an additional annotation layer identifying evasive maneuvers derived from steering-related signals. Data were recorded across four driving campaigns on an urban circuit at Universidad Europea de Madrid, capturing diverse real-world scenarios such as roundabouts, intersections, pedestrian areas, and segments requiring obstacle avoidance. A multi-stage processing pipeline aligns telemetry and visual data, extracts frames at 20 FPS, and detects evasive maneuvers using threshold-based time-series analysis. ANAID provides a fully aligned and non-destructive representation of naturalistic driving behavior, enabling research on control prediction, driver modeling, anomaly detection, and human–autonomy interaction in realistic traffic conditions. Full article
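
The evasive-maneuver annotation layer is derived from steering-related signals via threshold-based time-series analysis. A minimal sketch of such a rule follows; the specific thresholds, minimum duration, and signal names are assumptions rather than the ANAID pipeline's values.

```python
# Minimal sketch: flag candidate evasive maneuvers when the steering-angle
# rate exceeds a threshold for a minimum number of consecutive 20 FPS frames.
import numpy as np

def evasive_segments(steering_deg, fps=20, rate_thr=60.0, min_dur_s=0.25):
    """Return (start, end) frame indices of candidate evasive maneuvers."""
    rate = np.abs(np.diff(steering_deg)) * fps           # deg/s
    active = rate > rate_thr
    segments, start = [], None
    for i, flag in enumerate(active):
        if flag and start is None:
            start = i
        elif not flag and start is not None:
            if (i - start) / fps >= min_dur_s:
                segments.append((start, i))
            start = None
    if start is not None and (len(active) - start) / fps >= min_dur_s:
        segments.append((start, len(active)))
    return segments

# Toy usage: steady lane keeping with one sharp swerve and recovery.
steering = np.zeros(200)
steering[100:110] = np.linspace(0, 45, 10)
steering[110:120] = np.linspace(45, 0, 10)
print(evasive_segments(steering))
```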

25 pages, 4742 KB  
Article
An Edge-Enabled Predictive Maintenance Approach Based on Anomaly-Driven Health Indicators for Industrial Production Systems
by Bouzidi Lamdjad and Adem Chaiter
Algorithms 2026, 19(4), 286; https://doi.org/10.3390/a19040286 - 8 Apr 2026
Viewed by 297
Abstract
This study develops a data-driven framework for predictive maintenance and prognostic health management in industrial systems using edge-enabled predictive algorithms. The objective is to support early identification of abnormal operating conditions and improve maintenance decision making under real production environments. The proposed approach combines edge-level monitoring, anomaly detection, and predictive modeling to analyze operational signals and estimate system health conditions from high-frequency industrial data. Empirical validation was conducted using operational datasets collected from two industrial production facilities between 2024 and 2025. The model evaluates patterns associated with operational instability and degradation-related anomalies and translates them into interpretable health indicators that can support proactive intervention. The empirical results show strong predictive performance, with R2 reaching 0.989, a mean absolute percentage error of 3.67%, and a root mean square error of 0.79. In addition, the mitigation of early anomaly signals was associated with an observed improvement of approximately 3.99% in system stability. Unlike many existing studies that treat anomaly detection, predictive modeling, and prognostic analysis as separate tasks, the proposed framework connects these stages within a unified analytical structure designed for deployment in industrial environments. The findings indicate that edge-generated anomaly signals can provide meaningful early information about potential system deterioration and can assist in planning timely maintenance actions even when explicit failure labels are limited. The study contributes to the development of scalable predictive maintenance solutions that integrate artificial intelligence with edge-based industrial monitoring systems. Full article
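
One way to turn edge-level anomaly signals into an interpretable health indicator, as the abstract describes, is to smooth the anomaly rate over time. The sketch below is an assumed illustration of that idea; the paper's indicator definition may differ.

```python
# Minimal sketch: convert a per-sample anomaly flag stream into a smooth
# health indicator via an exponentially weighted anomaly rate.
import numpy as np

def health_indicator(anomaly_flags, alpha=0.02):
    """Return a health score in [0, 1]; 1 = healthy, falling as anomalies accumulate."""
    rate = 0.0
    hi = np.empty(len(anomaly_flags))
    for i, flag in enumerate(anomaly_flags):
        rate = (1 - alpha) * rate + alpha * float(flag)   # EWMA anomaly rate
        hi[i] = 1.0 - rate
    return hi

rng = np.random.default_rng(7)
# Toy asset whose anomaly probability ramps up as it degrades.
flags = (rng.random(2000) < np.linspace(0.01, 0.3, 2000)).astype(int)
hi = health_indicator(flags)
print("health at start/end:", round(hi[0], 3), round(hi[-1], 3))
```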

27 pages, 3109 KB  
Article
Early Detection of Virtual Machine Failures in Cloud Computing Using Quantum-Enhanced Support Vector Machine
by Bhargavi Krishnamurthy, Saikat Das and Sajjan G. Shiva
Mathematics 2026, 14(7), 1229; https://doi.org/10.3390/math14071229 - 7 Apr 2026
Viewed by 170
Abstract
Cloud computing is one of the essential computing platforms for modern enterprises. As of 2025, 84 percent of large businesses use cloud computing services to enable remote working and greater operational flexibility at lower operating cost. Cloud environments are dynamic and multitenant, often demanding high computational resources for real-time processing. However, cloud system behavior is subject to various kinds of anomalies in which data patterns deviate from normal traffic, including performance, security, resource, and network anomalies. These anomalies disrupt the normal operation of cloud systems by increasing latency, reducing throughput, violating service level agreements (SLAs), and causing virtual machine failures. Among these, virtual machine failures interrupt the normal operation of the virtual machine and degrade services; they arise from resource exhaustion, malware, packet loss, Distributed Denial of Service attacks, and similar causes. Hence, there is a need to detect impending virtual machine failures and prevent them through proactive measures. Traditional machine learning techniques often struggle with high-dimensional data and nonlinear correlations, resulting in poor real-time adaptation. Quantum machine learning is therefore a promising alternative that deals effectively with combinatorially complex, high-dimensional data. In this paper, a novel quantum-enhanced support vector machine (QSVM) is designed as an optimized binary classifier that combines the principles of quantum computing and support vector machines. It encodes classical data into quantum states, maps features into a high-dimensional Hilbert space, and evaluates similarities through a quantum kernel. Optimal hyperplanes are then obtained to detect the anomalous behavior of virtual machines, yielding exponential speed-up and avoiding local minima through entanglement and superposition. The performance of the proposed QSVM is analyzed using the QuCloudSim 1.0 simulator and further validated using an expected-value analysis methodology. Full article
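
The QSVM pipeline, encoding data into quantum states, mapping features into Hilbert space, and classifying with a quantum kernel, can be approximated classically for the special case of an entanglement-free product-state feature map, where the fidelity kernel has a closed form. The sketch below uses that simplified kernel with scikit-learn's precomputed-kernel SVM; it illustrates the kernel-SVM structure only and is not the paper's entangled feature map or QuCloudSim setup, and the toy "VM telemetry" data are invented for the example.

```python
# Minimal sketch: quantum fidelity kernel for a product-state feature map
# (one RY(x_j) rotation per qubit), fed to a classical SVM as a precomputed kernel.
import numpy as np
from sklearn.svm import SVC

def product_state_kernel(X, Y):
    """K[i, j] = |<phi(x_i)|phi(y_j)>|^2 with |phi(x)> = prod_j RY(x_j)|0>."""
    diff = X[:, None, :] - Y[None, :, :]            # pairwise feature differences
    return np.prod(np.cos(diff / 2.0) ** 2, axis=-1)

rng = np.random.default_rng(5)
# Toy telemetry: 2 features (e.g., CPU load, packet loss), binary failure label.
X = rng.uniform(0, np.pi, size=(300, 2))
y = (X.sum(axis=1) + 0.3 * rng.normal(size=300) > np.pi).astype(int)
X_tr, X_te, y_tr, y_te = X[:200], X[200:], y[:200], y[200:]

clf = SVC(kernel="precomputed", C=1.0)
clf.fit(product_state_kernel(X_tr, X_tr), y_tr)
acc = clf.score(product_state_kernel(X_te, X_tr), y_te)
print("toy failure-classification accuracy:", round(acc, 3))
```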
