Search Results (674)

Search Parameters:
Keywords = anomaly detection and localization

20 pages, 4549 KB  
Article
Online Track Anomaly Detection: Comparison of Different Machine Learning Techniques Through Injection of Synthetic Defects on Experimental Datasets
by Giovanni Bellacci, Luca Di Carlo, Marco Fiaschi, Luca Bocciolini, Carmine Zappacosta and Luca Pugi
Machines 2026, 14(4), 424; https://doi.org/10.3390/machines14040424 - 10 Apr 2026
Abstract
The adoption of instrumented wheelsets on diagnostic trains enables continuous monitoring of wheel–rail contact forces. The resulting large datasets can be exploited for diagnostic purposes, aiming to localize specific track defects and allowing significant improvements in safety and maintenance costs. Machine learning (ML) techniques can be used to automate anomaly detection. In this work, the authors compare various ML algorithms based on the identification of different frequency- or time-based features of the analyzed signals. To support this comparison, a significant number and variety of local defects were included in the recorded data. From a practical point of view, inserting real, known defects into an existing line is extremely time-consuming, expensive, and not free of safety issues. On the other hand, designing anomaly detection algorithms requires relatively large datasets covering different faulty conditions. The authors therefore propose deliberately combining real contact-force profiles of healthy lines with synthetic signals that reproduce the behavior and variability of the foreseen faulty conditions. Although preliminary, the results of this work offer a contribution to the scientific community both in the results obtained and in the methodologies adopted.
(This article belongs to the Special Issue AI-Driven Reliability Analysis and Predictive Maintenance)
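The abstract does not specify the defect model used for injection; as an illustration of the synthetic-injection idea, the sketch below (plain NumPy, with an assumed damped-oscillation defect signature and invented load/noise parameters) adds a localized synthetic defect to a healthy contact-force record:

```python
import numpy as np

def inject_defect(signal, position, amplitude=5.0, freq_hz=80.0,
                  decay=40.0, fs=1000.0):
    """Add a localized damped-oscillation burst (a stand-in for a
    local track defect signature) to a healthy force signal."""
    out = signal.copy()
    t = np.arange(len(signal) - position) / fs   # time after defect onset
    burst = amplitude * np.exp(-decay * t) * np.sin(2 * np.pi * freq_hz * t)
    out[position:] += burst
    return out

# Healthy vertical contact force: static load plus broadband noise (invented values).
rng = np.random.default_rng(0)
healthy = 50.0 + 0.5 * rng.standard_normal(4000)   # kN, 4 s at 1 kHz
faulty = inject_defect(healthy, position=2500)
```

Sweeping the injection position, amplitude, and decay would then produce the variety of faulty conditions a detector can be trained and scored against.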

23 pages, 7215 KB  
Article
Applications of Distributed Optical Fiber Sensing Technology in Wellbore Leakage Monitoring and Its Integrity Analysis of Underground Gas Storage
by Zhentao Li, Xianjian Zou and Pengtao Wu
Energies 2026, 19(8), 1859; https://doi.org/10.3390/en19081859 - 10 Apr 2026
Abstract
With the exponential growth of natural gas reserves and utilization scale in China, underground gas storage (UGS) facilities—critical infrastructure within the natural gas production-supply-storage-sales system—have entered a phase of rapid expansion. As the core component connecting subsurface reservoirs with surface systems, wellbore integrity directly influences the operational safety and service lifespan of UGS facilities. However, current leakage detection and integrity analysis methodologies for gas storage wellbores lack effective real-time monitoring capabilities: traditional methods are constrained by limited spatial coverage and insufficient precision, rendering them inadequate for comprehensive, continuous safety monitoring. To address this industry challenge, this study proposes a real-time wellbore integrity monitoring framework based on distributed fiber optic sensing technology, integrating distributed temperature sensing (DTS) and distributed acoustic sensing (DAS) devices into a synergistic monitoring system. The DTS component enables preliminary localization of potential leakage points by detecting minute temperature anomalies along the wellbore, while the DAS unit accurately identifies acoustic signatures caused by gas leakage within casings by monitoring acoustic vibration signals propagating along the optical fiber. Through joint analysis of the DTS and DAS data streams, real-time diagnosis of wellbore leakage events and integrity status can be achieved. Field trials demonstrated that this hybrid monitoring system achieved leakage localization accuracy within 1.0 m, effectively distinguishing normal operational signals from abnormal leakage characteristics. During actual monitoring operations, no indications of wellbore integrity compromise were detected; only minor noise and interference signals originating from surface construction activities were observed.
(This article belongs to the Section D: Energy Storage and Application)
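The DTS-based preliminary localization described above can be illustrated with a minimal sketch, assuming per-depth baseline statistics from normal-condition scans; the threshold, sampling, and leak signature below are invented for illustration, not taken from the paper:

```python
import numpy as np

def locate_temperature_anomalies(baseline_traces, new_trace, k=4.0):
    """Flag depths where a new DTS trace deviates from per-depth
    baseline statistics by more than k standard deviations.

    baseline_traces: (n_scans, n_depths) normal-condition temperatures
    new_trace:       (n_depths,) current temperature profile
    Returns indices of candidate leakage depths."""
    mu = baseline_traces.mean(axis=0)
    sigma = baseline_traces.std(axis=0) + 1e-9     # avoid divide-by-zero
    z = np.abs(new_trace - mu) / sigma
    return np.flatnonzero(z > k)

rng = np.random.default_rng(1)
depths = 500                                       # e.g. 0.5 m sampling over 250 m
baseline = 60.0 + 0.05 * rng.standard_normal((100, depths))
trace = baseline.mean(axis=0).copy()
trace[240:243] -= 1.5                              # localized cooling at a leak point
hits = locate_temperature_anomalies(baseline, trace)
```

In the paper's framework, depths flagged this way would then be cross-checked against DAS acoustic signatures before a leak is declared.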
23 pages, 1950 KB  
Article
Encrypted Traffic Detection via a Federated Learning-Based Multi-Scale Feature Fusion Framework
by Yichao Fei, Youfeng Zhao, Wenrui Liu, Fei Wu, Shangdong Liu, Xinyu Zhu, Yimu Ji and Pingsheng Jia
Electronics 2026, 15(8), 1570; https://doi.org/10.3390/electronics15081570 - 9 Apr 2026
Abstract
With the proliferation of edge computing in IoT and smart security, there is a growing demand for large-scale encrypted traffic anomaly detection. However, the opaque nature of encrypted traffic makes it difficult for traditional detection methods to balance efficiency and accuracy. To address this challenge, this paper proposes FMTF, a Multi-Scale Feature Fusion method based on Federated Learning for encrypted traffic anomaly detection. FMTF constructs graph structures at three scales—spatial, statistical, and content—to comprehensively characterize traffic features. At the spatial scale, communication graphs are constructed based on host-to-host IP interactions, where each node represents the IP address of a host and edges capture the communication relationships between them. The statistical scale builds traffic statistic graphs based on interactions between port numbers, with nodes representing individual ports and edge weights corresponding to the lengths of transmitted packets. At the content scale, byte-level traffic graphs are generated, where nodes represent pairs of bytes extracted from the traffic data, and edges are weighted using pointwise mutual information (PMI) to reflect the statistical association between byte occurrences. To extract and fuse these multi-scale features, FMTF employs the Graph Attention Network (GAT), enhancing the model’s traffic representation capability. Furthermore, to reduce raw-data exposure in distributed edge environments, FMTF integrates a federated learning framework. In this framework, edge devices train models locally based on their multi-scale traffic features and periodically share model parameters with a central server for aggregation, thereby optimizing the global model without exposing raw data. Experimental results demonstrate that FMTF maintains efficient and accurate anomaly detection performance even under limited computing resources, offering a practical and effective solution for encrypted traffic identification and network security protection in edge computing environments.
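As a rough illustration of the content-scale graph construction, the sketch below computes PMI edge weights from co-occurrence counts in a sliding window over payload bytes. For brevity it treats individual byte values as nodes, whereas FMTF uses byte pairs, and the window size is an assumption:

```python
import numpy as np
from collections import Counter
from itertools import combinations

def pmi_edges(payload: bytes, window: int = 5):
    """Weight edges between byte values by pointwise mutual information,
    estimated from co-occurrence within a sliding window; only
    positive-PMI (i.e. positively associated) edges are kept."""
    n = len(payload)
    occur = Counter(payload)                       # byte frequencies
    co = Counter()
    for i in range(n - window + 1):
        for a, b in combinations(set(payload[i:i + window]), 2):
            co[frozenset((a, b))] += 1
    n_windows = n - window + 1
    edges = {}
    for pair, count in co.items():
        a, b = tuple(pair)
        p_ab = count / n_windows
        p_a, p_b = occur[a] / n, occur[b] / n
        pmi = np.log(p_ab / (p_a * p_b))
        if pmi > 0:
            edges[(a, b)] = pmi
    return edges

edges = pmi_edges(bytes([1, 2, 1, 2, 1, 2, 3, 9, 9, 9, 1, 2]))
```

The resulting weighted graph would then be fed to the GAT layers for feature extraction.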

18 pages, 1606 KB  
Article
Multi-Scale Dynamic Perception and Context Guidance Modulation for Efficient Deepfake Detection
by Yuanqing Ding, Fanliang Bu and Hanming Zhai
Electronics 2026, 15(8), 1569; https://doi.org/10.3390/electronics15081569 - 9 Apr 2026
Abstract
Deepfake technology poses significant threats to information authenticity and social trust, necessitating effective detection methods. However, existing detection approaches predominantly rely on high-complexity network architectures that, while accurate in controlled environments, suffer from prohibitive computational costs that hinder deployment in resource-constrained scenarios such as social media platforms. To address this efficiency-accuracy dilemma, we propose a lightweight face forgery detection method that systematically learns multi-scale forgery traces. Our approach features a four-stage lightweight architecture that hierarchically extracts features from local textures to global semantics, mimicking the human visual system. Within each stage, a multi-scale dynamic perception mechanism divides feature channels into parallel groups equipped with lightweight attention modules to capture forgery cues spanning pixel-level anomalies, local structures, regional patterns, and semantic inconsistencies. Furthermore, rather than relying on conventional feature fusion that risks suppressing subtle artifacts, we introduce a novel Context-Guided Dynamic Convolution. This mechanism uses mid-level spatial anomalies as active anchors to dynamically modulate high-level semantic filters, with the goal of mitigating the disconnect between semantic content and forgery evidence. Our model achieves strong performance, yielding an AUC of 91.98% on FaceForensics++ and 93.50% on DeepFake Detection Challenge, outperforming current state-of-the-art lightweight methods. Furthermore, compared to heavy Vision Transformers, our model achieves a superior performance-efficiency trade-off, requiring only 3.06 M parameters and 1.36 G FLOPs, making it highly suitable for real-time, resource-constrained deployment.
(This article belongs to the Section Electronic Multimedia)

28 pages, 658 KB  
Article
Dual-Branch Deep Remote Sensing for Growth Anomaly and Risk Perception in Smart Horticultural Systems
by Yan Bai, Ceteng Fu, Shen Liu, Xichen Wang, Jibo Fan, Yuecheng Li and Yihong Song
Horticulturae 2026, 12(4), 461; https://doi.org/10.3390/horticulturae12040461 - 8 Apr 2026
Abstract
In the context of the rapid development of smart horticulture, a deep remote sensing-based dual detection method for horticultural crop growth anomalies and safety risks was proposed to address the limitations of existing remote sensing monitoring approaches. These conventional methods, which predominantly focused on growth vigor assessment or single-task anomaly detection, had difficulty distinguishing anomalies from actual production risks and exhibited insufficient sensitivity to weak anomalies and complex temporal disturbances. Within a unified framework, a growth state modeling branch and an anomaly perception branch were constructed, enabling the joint modeling of normal growth trajectories and anomalous deviation features. By further introducing a risk joint discrimination mechanism, an integrated analysis pipeline from anomaly identification to risk assessment was achieved. Multi-temporal remote sensing features were used as inputs, through which normal crop growth patterns were characterized via trend perception, texture modeling, and temporal aggregation, while sensitivity to local disturbances and weak anomaly signals was enhanced by anomaly embeddings and energy representations. Systematic experiments conducted on multi-regional and multi-crop horticultural remote sensing datasets demonstrated that the proposed method significantly outperformed comparative approaches, including traditional threshold-based methods, support vector machines, random forests, autoencoders, ConvLSTM, and temporal transformer models. In the dual task of horticultural crop growth anomaly detection and safety risk identification, an accuracy of approximately 0.91 and an F1 score of 0.88 were achieved, indicating higher anomaly recognition accuracy and more stable risk discrimination capability. Further anomaly-type awareness experiments showed that consistent performance was maintained across diverse real-world production scenarios, including climate stress, disease-induced anomalies, and management errors.
(This article belongs to the Special Issue New Trends in Smart Horticulture)

26 pages, 7110 KB  
Article
Research on an Automatic Detection Method for Response Keypoints of Three-Dimensional Targets in Directional Borehole Radar Profiles
by Xiaosong Tang, Maoxuan Xu, Feng Yang, Jialin Liu, Suping Peng and Xu Qiao
Remote Sens. 2026, 18(7), 1102; https://doi.org/10.3390/rs18071102 - 7 Apr 2026
Abstract
During the interpretation of Borehole Radar (BHR) B-scan profiles, the accurate determination of the azimuth of geological targets in three-dimensional space is a critical issue for achieving precise anomaly localization and spatial structure inversion. However, existing directional BHR anomaly localization methods exhibit limited intelligence, insufficient adaptability to multi-site data, and weak generalization capability, rendering them inadequate for engineering applications under complex geological conditions. To address these challenges, a robust deep learning model, termed BSS-Pose-BHR, is developed based on YOLOv11n-pose for keypoint detection in directional BHR profiles. The model incorporates three key optimizations: Bi-Level Routing Attention (BRA) replaces Multi-Head Self-Attention (MHSA) in the backbone to improve computational efficiency; Conv_SAMWS enhances keypoint-related feature weighting in the backbone and neck; and Spatial and Channel Reconstruction Convolution (SCConv) is integrated into the detection head to reduce redundancy and strengthen local feature extraction, thereby improving suitability for keypoint detection tasks. In addition, a three-dimensional electromagnetic model of limestone containing a certain density of clay particles is established to construct a simulation dataset. On the simulated test set, compared with current mainstream deep learning approaches and conventional directional borehole radar anomaly localization algorithms, BSS-Pose-BHR achieves superior performance, with an mAP50(B) of 0.9686, an mAP50–95(B) of 0.7712, an mAP50(P) of 0.9951, and an mAP50–95(P) of 0.9952. Ablation experiments demonstrate that each proposed module contributes significantly to performance improvement. Compared with the baseline, BSS-Pose-BHR improves mAP50(B) by 5.39% and mAP50(P) by 0.86%, while increasing model weight by only 1.05 MB, thereby achieving a reasonable trade-off between detection accuracy and complexity. Furthermore, indoor physical model experiments validate the effectiveness of the method on measured data. Robustness experiments under different Peak Signal-to-Noise Ratio (PSNR) conditions and varying missing-trace rates indicate that BSS-Pose-BHR maintains high detection accuracy under moderate noise and data loss, demonstrating strong engineering applicability and practical value.

30 pages, 3109 KB  
Article
Early Detection of Virtual Machine Failures in Cloud Computing Using Quantum-Enhanced Support Vector Machine
by Bhargavi Krishnamurthy, Saikat Das and Sajjan G. Shiva
Mathematics 2026, 14(7), 1229; https://doi.org/10.3390/math14071229 - 7 Apr 2026
Abstract
Cloud computing is one of the essential computing platforms for modern enterprises: as of 2025, 84 percent of large businesses use cloud computing services to enable remote working and more flexible operation at reduced cost. Cloud environments are dynamic and multitenant, often demanding high computational resources for real-time processing. However, cloud system behavior is subject to various kinds of anomalies, in which data patterns deviate from normal traffic. These include performance anomalies, security anomalies, resource anomalies, and network anomalies, and they disrupt the normal operation of cloud systems by increasing latency, reducing throughput, violating service level agreements (SLAs), and causing virtual machine failures. Among these, virtual machine failures are particularly consequential: the normal operation of the virtual machine is interrupted, resulting in degraded services. Virtual machine failures arise from resource exhaustion, malware access, packet loss, Distributed Denial of Service attacks, etc. Hence, there is a need to detect impending virtual machine failures and prevent them through proactive measures. Traditional machine learning techniques often struggle with high-dimensional data and nonlinear correlations, resulting in poor real-time adaptation. Quantum machine learning is therefore a promising alternative that deals effectively with combinatorially complex, high-dimensional data. In this paper, a novel quantum-enhanced support vector machine (QSVM) is designed as an optimized binary classifier that combines the principles of quantum computing and the support vector machine. It encodes classical data into quantum states, and feature mapping transforms the data into a high-dimensional Hilbert space. Quantum kernel evaluation is then performed to measure similarities. Through effective optimization, optimal hyperplanes are designed to detect anomalous virtual machine behavior, yielding an exponential speed-up and avoiding local minima through entanglement and superposition. The performance of the proposed QSVM is analyzed using the QuCloudSim 1.0 simulator and further validated using an expected value analysis methodology.
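The quantum kernel itself requires quantum hardware or a simulator, but the surrounding kernel-SVM classification stage can be sketched classically with scikit-learn's precomputed-kernel interface. In the sketch below an RBF Gram matrix merely stands in for the quantum kernel evaluation, and the toy VM telemetry features are invented:

```python
import numpy as np
from sklearn.svm import SVC

def rbf_gram(A, B, gamma=0.5):
    """Stand-in Gram matrix; in a QSVM this would come from
    quantum kernel evaluation on the feature-mapped states."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

rng = np.random.default_rng(2)
# Toy VM telemetry [cpu_load, packet_loss]; failing VMs cluster high.
normal = rng.normal([0.3, 0.01], 0.05, size=(40, 2))
failing = rng.normal([0.9, 0.20], 0.05, size=(40, 2))
X = np.vstack([normal, failing])
y = np.array([0] * 40 + [1] * 40)

# Train on the kernel matrix; predict from test-vs-train kernel rows.
clf = SVC(kernel="precomputed").fit(rbf_gram(X, X), y)
pred = clf.predict(rbf_gram(np.array([[0.95, 0.25], [0.25, 0.0]]), X))
```

Swapping `rbf_gram` for a quantum kernel routine leaves the rest of the pipeline unchanged, which is the appeal of the precomputed-kernel formulation.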

33 pages, 15356 KB  
Article
Active Acoustic Sensing of Ground Surface Condition Using a Drone-Mounted Speaker–Microphone Array
by Kotaro Hoshiba, Kai Shirota, Yuta Tsukamoto and Hiroshi Yamaura
Drones 2026, 10(4), 258; https://doi.org/10.3390/drones10040258 - 3 Apr 2026
Abstract
Rapid assessment of ground surface conditions is essential in disaster response and search-and-rescue operations, where drones are increasingly deployed for aerial inspection and victim localization. This paper proposes an active acoustic sensing method for estimating ground surface conditions using a drone-mounted speaker and microphone array. The method is based on the multiple signal classification framework and enables three-dimensional localization of reflection points according to the principle of echolocation. A key feature of the proposed approach is that it shares both hardware and signal processing components with acoustic-based victim search, allowing simultaneous execution of surface sensing and sound source localization (SSL) on a single drone platform without increasing system complexity. Outdoor experiments were conducted to evaluate sensing performance for ground surface anomalies, specifically ground surface depressions and cracks. The experimental results clarify the achievable sensing performance and coverage in real environments and reveal key factors affecting detection performance. The feasibility of simultaneous execution of active acoustic sensing and SSL was also investigated, and the mutual interactions between sensing and localization performance were clarified. These findings highlight both the potential and the practical limitations of integrating environmental sensing and victim localization on a single drone platform.

24 pages, 1929 KB  
Article
Speech-Adaptive Detection of Unnatural Intra-Sentential Pauses Using Contextual Anomaly Modeling for Interpreter Training
by Hyoeun Kang, Jin-Dong Kim, Juriae Lee, Hee-Jo Nam, Kon Woo Kim, Joowon Lim and Hyun-Seok Park
Appl. Sci. 2026, 16(7), 3492; https://doi.org/10.3390/app16073492 - 3 Apr 2026
Abstract
Detecting unnatural pauses is a critical component of automated quality assessment (AQA) in interpreter training, as pause patterns directly reflect an interpreter’s cognitive load and fluency. Traditional pause detection methods rely on static temporal thresholds (e.g., 1.0 s), which often fail to account for segment-specific speech rate variability and individual speaking styles. This study proposes a context-adaptive pause detection framework that integrates unsupervised anomaly detection using Isolation Forest (iForest) with a sliding window technique. To enhance pedagogical validity, we specifically focused on intra-sentential pauses by delineating sentence boundaries using a specialized segmentation model. The proposed model was evaluated against ground-truth labels annotated by professional interpreting experts. Our results demonstrate that the sliding window–based contextual anomaly detection model significantly outperforms the conventional static baseline, particularly in terms of recall and Cohen’s kappa. Furthermore, by applying a weighted F3-score and the “Recognition-over-Recall” principle, we confirmed that the proposed model substantially reduces the instructor’s total operational burden by shifting the workload from de novo annotation creation to more efficient corrective pruning. These findings suggest that speech-adaptive modeling provides a more reliable and labor-saving framework for automated interpreting assessment and feedback. Specifically, this study makes three main contributions: (1) the proposal of a context-adaptive pause detection framework using anomaly detection, (2) the integration of sliding window–based local contextual modeling for speech-rate–aware analysis, and (3) the introduction of an evaluation strategy based on the Recognition-over-Recall principle to reduce instructor workload in interpreter training.
(This article belongs to the Special Issue The Application of Digital Technology in Education)
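The sliding-window iForest idea can be sketched as below, assuming pause durations have already been extracted and sentence boundaries handled upstream; the window size, feature choice, and contamination rate are illustrative assumptions, not the paper's settings:

```python
import numpy as np
from sklearn.ensemble import IsolationForest

def flag_unnatural_pauses(durations, window=5, contamination=0.1, seed=0):
    """Score each intra-sentential pause against its local context.

    durations: per-pause lengths in seconds, in utterance order.
    Each pause is described by (duration, local mean, local std) over a
    centered sliding window, so a pause is judged relative to the
    speaker's surrounding rhythm rather than a fixed 1.0 s threshold."""
    d = np.asarray(durations, dtype=float)
    half = window // 2
    feats = []
    for i in range(len(d)):
        ctx = d[max(0, i - half):i + half + 1]
        feats.append([d[i], ctx.mean(), ctx.std()])
    model = IsolationForest(contamination=contamination, random_state=seed)
    labels = model.fit_predict(np.array(feats))    # -1 marks anomalies
    return np.flatnonzero(labels == -1)

rng = np.random.default_rng(3)
pauses = rng.normal(0.35, 0.05, 60)    # a fluent passage of short pauses
pauses[30] = 2.4                       # one conspicuous hesitation
flags = flag_unnatural_pauses(pauses)
```

Because the context features travel with the speaker, the same model adapts to fast and slow delivery without retuning a global threshold.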

15 pages, 789 KB  
Article
EdgeRescue: Lightweight AI-Based Self-Healing for Energy-Constrained IoT Meshes
by Haifa A. Alanazi, Abdulaziz G. Alanazi and Nasser S. Albalawi
Computation 2026, 14(4), 84; https://doi.org/10.3390/computation14040084 - 3 Apr 2026
Abstract
As the scale and complexity of Internet of Things (IoT) deployments increase, maintaining resilience in resource-constrained mesh networks becomes a significant challenge. Frequent node failures due to battery depletion, environmental interference, or hardware degradation can disrupt data flows and lead to operational downtime. To address this, we propose EdgeRescue, a novel lightweight AI-driven framework for self-healing in energy-constrained IoT mesh environments. EdgeRescue enables each node to perform local anomaly detection using compact 1D Convolutional Neural Networks (1D-CNNs) and initiates distributed, energy-aware routing reconfiguration when faults are detected. Unlike cloud-dependent methods, EdgeRescue operates entirely at the edge, requiring minimal computation, memory, and communication overhead. Extensive simulations on a 100-node testbed demonstrate that EdgeRescue improves packet delivery by 13.2%, reduces recovery latency by 57%, and lowers average node energy consumption by 18.8% compared to state-of-the-art baselines. These results establish EdgeRescue as a scalable and practical solution for achieving real-time resilience in next-generation IoT mesh networks.
(This article belongs to the Section Computational Engineering)

16 pages, 2953 KB  
Article
Drone-Based Statistical Detection of Methane Anomalies Around Abandoned Oil and Gas Well Sites
by William Hoyt Thomas and Caixia Wang
Sensors 2026, 26(7), 2205; https://doi.org/10.3390/s26072205 - 2 Apr 2026
Abstract
Abandoned oil and gas wells pose significant risks to human health and the environment by emitting air pollutants, contaminating groundwater, and leaving behind hazardous debris. In the United States, approximately 3.9 million documented wells vary widely in the accuracy of their recorded locations and plugging status, creating major challenges for detection, mapping, and remediation. Existing well detection methods show some promise but often lose effectiveness under complex conditions, such as vegetation occlusion or construction without metal components. In this study, we propose a drone-based approach equipped with a highly sensitive methane sensor to identify statistical anomalies in methane concentrations around abandoned oil and gas well sites. To address the noisy and variable nature of environmental sensor data, statistical methods were developed that enable reliable anomaly detection under field conditions. Controlled release experiments with known emission points validated the method’s ability to statistically detect methane anomalies that may indicate nearby emission sources. We further tested the approach at a field site containing three abandoned wells with known locations and sparse emission profiles. The results demonstrate that the proposed drone-based sensing method can serve as a rapid survey approach to identify areas with elevated methane signals around well sites, helping to reduce the scope of the ground survey area and supporting prioritization of follow-up ground investigations. This approach provides a practical means to support targeted monitoring and prioritization of remediation efforts, while supporting the future development of source attribution and localization methods.
(This article belongs to the Special Issue Smart Gas Sensor Applications in Environmental Change Monitoring)
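The abstract does not state which statistics the authors developed; one common robust choice for heavy-tailed field sensor data, sketched below with invented numbers, is a median/MAD z-score with a one-sided threshold (only concentration enhancements matter for emissions):

```python
import numpy as np

def methane_anomalies(ppm, k=5.0):
    """Flag readings whose robust (median/MAD) z-score exceeds k.
    Median/MAD statistics tolerate the spikes and drift typical of
    field gas measurements better than mean/std."""
    x = np.asarray(ppm, dtype=float)
    med = np.median(x)
    mad = np.median(np.abs(x - med)) + 1e-12
    z = 0.6745 * (x - med) / mad          # one-sided: enhancements only
    return np.flatnonzero(z > k)

rng = np.random.default_rng(4)
background = rng.normal(2.0, 0.1, 300)    # ~2 ppm ambient methane
background[120:123] += 3.0                # plume crossing during a transect
hits = methane_anomalies(background)
```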

17 pages, 22047 KB  
Article
Urban Water Leakage Detection System over Dark Fiber Networks Based on Distributed Acoustic Sensing and Sparse Autoencoders
by Vahid Sharif, Yuanyuan Yao, Alayn Loayssa and Mikel Sagues
Sensors 2026, 26(7), 2152; https://doi.org/10.3390/s26072152 - 31 Mar 2026
Abstract
We propose and experimentally validate an automatic urban water leakage detection architecture that leverages dark fiber links already deployed in telecommunication networks in underground conduits in the vicinity of water pipelines. The sensing stage relies on a differential-phase coherent optical time-domain reflectometry interrogator enhanced with optical pulse compression to improve sensitivity. Building on this vibration acquisition stage, automatic leakage detection algorithms are implemented by searching for leak-induced activity in the frequency domain, which is well suited to revealing leakage-related features. After acquiring a baseline calibration to characterize normal-condition vibrations at each sensing position, leakage candidates are identified by comparing distribution-based metrics computed over multiple measurements against the corresponding baseline statistics. Two automatic leakage detection strategies are developed. First, low-complexity feature-based metrics are implemented, enabling continuous monitoring with minimal computational requirements. Second, an autoencoder-based anomaly detection technique is introduced, which also relies on location-specific normal-condition calibration but reduces the dependence on prior knowledge of the expected leakage vibration signatures. A real-world field trial on an urban network demonstrates reliable detection and localization using controlled leak events generated in the field, with measurements performed over a 17 km sensing fiber and an effective spatial resolution of 2.6 m. Benchmarking against a commercial punctual electro-acoustic leak detector yields consistent trends. Overall, the proposed system could complement existing technologies by enabling automated, continuous city-scale monitoring over already deployed dark fiber infrastructure.
(This article belongs to the Special Issue Sensors in 2026)
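The position-wise baseline comparison can be sketched as follows, assuming in-band spectral energy per sensing position as the low-complexity feature; the band limits, threshold, and signal model below are illustrative assumptions rather than the paper's actual processing chain:

```python
import numpy as np

def band_energy(trace, fs, lo=200.0, hi=800.0):
    """Energy of a vibration trace inside an assumed leak-sensitive band."""
    spec = np.abs(np.fft.rfft(trace)) ** 2
    freqs = np.fft.rfftfreq(len(trace), 1.0 / fs)
    return spec[(freqs >= lo) & (freqs <= hi)].sum()

def leak_candidates(baseline, current, fs, k=5.0):
    """baseline: (n_meas, n_pos, n_samp) normal-condition acquisitions;
    current: (n_pos, n_samp) new acquisition. Flags positions whose
    in-band energy exceeds that position's baseline mean + k*std."""
    base = np.array([[band_energy(tr, fs) for tr in meas] for meas in baseline])
    mu, sd = base.mean(axis=0), base.std(axis=0) + 1e-12
    cur = np.array([band_energy(tr, fs) for tr in current])
    return np.flatnonzero(cur > mu + k * sd)

rng = np.random.default_rng(5)
fs, n_pos, n_samp = 4000.0, 50, 1024
baseline = 0.01 * rng.standard_normal((20, n_pos, n_samp))
current = 0.01 * rng.standard_normal((n_pos, n_samp))
t = np.arange(n_samp) / fs
current[17] += 0.05 * np.sin(2 * np.pi * 400.0 * t)   # leak-like tone at one position
cands = leak_candidates(baseline, current, fs)
```

Calibrating `mu` and `sd` per position is what lets the detector tolerate the very different ambient vibration levels along a 17 km urban fiber route.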

14 pages, 5017 KB  
Article
Calibrated Feature Fusion: Enhancing Few-Shot Industrial Anomaly Detection via Cross-Stage Representation Alignment
by Shuangjun Zheng, Songtao Zhang, Zhihuan Huang, Kuoteng Sun, Yuzhong Gong, Jiayan Wen and Eryun Liu
Sensors 2026, 26(7), 2164; https://doi.org/10.3390/s26072164 - 31 Mar 2026
Abstract
Few-shot industrial anomaly detection has attracted increasing attention because it does not require a large number of abnormal samples for training. Recent few-shot industrial anomaly detection methods commonly fuse multi-stage features from frozen vision transformers for anomaly scoring. However, we find that such direct fusion suffers from cross-stage representation misalignment: shallow and deep features differ significantly in scale and semantic granularity, leading to inconsistent anomaly maps and degraded localization. To address this problem, we propose Calibrated Feature Fusion (CFF), a lightweight adapter that enhances feature fusion via cross-stage representation alignment. The CFF module can be integrated into existing state-of-the-art frameworks and operates effectively in few-shot settings. Experiments on MVTec AD and VisA show that CFF consistently improves the state-of-the-art method across 1/2/4-shot settings, achieving gains of up to +1.6% AUROC and +4.1% AP in pixel-level segmentation. Notably, CFF enhances both precision and recall in 4-shot scenarios. Ablation studies confirm that cross-stage alignment is key to stable multi-stage fusion. Full article
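The cross-stage alignment problem can be illustrated with a minimal NumPy sketch. This is not the CFF implementation: the paper's learned calibration is replaced here by fixed linear projections to a shared channel dimension, L2 normalization, and nearest-neighbor upsampling, and the few-shot memory is reduced to a single normal support feature map per stage.

```python
import numpy as np

def align_stage(feat, proj, out_hw):
    """Project one stage's feature map (C, H, W) to a shared channel dim,
    L2-normalize each spatial location, and upsample to a common size."""
    c, h, w = feat.shape
    f = proj @ feat.reshape(c, h * w)                 # (d, H*W): shared channel dim
    f /= np.linalg.norm(f, axis=0, keepdims=True) + 1e-12
    f = f.reshape(-1, h, w)
    rh, rw = out_hw[0] // h, out_hw[1] // w           # integer upsampling factors
    return f.repeat(rh, axis=1).repeat(rw, axis=2)

def fused_anomaly_map(test_stages, memory_stages, projs, out_hw):
    """Average per-stage cosine-distance maps after cross-stage alignment."""
    maps = []
    for feat, mem, proj in zip(test_stages, memory_stages, projs):
        q = align_stage(feat, proj, out_hw)           # aligned test features
        m = align_stage(mem, proj, out_hw)            # aligned normal support features
        sim = (q * m).sum(axis=0)                     # cosine similarity per pixel
        maps.append(1.0 - sim)
    return np.mean(maps, axis=0)
```

Because every stage is mapped into the same normalized space before fusion, the per-stage distance maps live on a comparable scale, which is the property that naive concatenation of raw shallow and deep features lacks.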
(This article belongs to the Section Fault Diagnosis & Sensors)
24 pages, 6716 KB  
Article
In-Situ Infrared Camera Monitoring for Defect and Anomaly Detection in Laser Powder Bed Fusion: Calibration, Data Mapping, and Feature Extraction
by Shawn Hinnebusch, David Anderson, Berkay Bostan and Albert C. To
Appl. Sci. 2026, 16(7), 3378; https://doi.org/10.3390/app16073378 - 31 Mar 2026
Abstract
Laser powder bed fusion (LPBF) is susceptible to defects arising from melt pool instabilities, spatter, heat accumulation, and powder spreading anomalies. In situ infrared (IR) monitoring can detect these issues; however, it typically generates large volumes of data that are costly to store and analyze. This work proposes a projection-based framework that directly maps in situ thermal measurements onto a three-dimensional (3D) voxelized part geometry, substantially reducing storage requirements while preserving spatial fidelity. In addition, several IR-derived features are incorporated into a practical workflow for defect detection and process model calibration, including laser scan order, local pre-deposition temperature, maximum pre-scan temperature, and spatter generation and landing locations. For completeness, commonly used metrics such as interpass temperature, heat intensity, cooling rate, and relative melt pool area are extracted within the same unified processing pipeline. All features are computed using a consistent, reproducible Python-based implementation to streamline integration into routine monitoring and analysis tasks. Multiple parts are fabricated, monitored, and characterized to evaluate the proposed framework, demonstrating that the extracted features reliably identify process anomalies and correlate with observed defects. Full article
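The pixel-to-voxel projection step can be sketched for a single build layer as follows. This is a simplified stand-in for the paper's framework: the camera calibration is assumed to reduce to a known 2x3 affine map from pixel coordinates to build-plate millimeters, a single layer is treated as one 2D voxel slice, and only the per-voxel maximum temperature is kept (the actual pipeline stacks layers into 3D and stores several features).

```python
import numpy as np

def project_to_voxels(pixel_xy, temps, calib, voxel_size, grid_shape):
    """Map IR pixel measurements onto a voxelized layer, keeping the
    per-voxel maximum temperature.

    pixel_xy : (N, 2) pixel coordinates of IR samples
    temps    : (N,) temperatures at those pixels
    calib    : 2x3 affine matrix, pixel coords -> build-plate mm (assumed known)
    """
    # Apply the affine calibration: x_mm = A @ px + b
    xy_mm = (calib[:, :2] @ pixel_xy.T + calib[:, 2:3]).T
    idx = np.floor(xy_mm / voxel_size).astype(int)
    grid = np.full(grid_shape, -np.inf)               # -inf marks "never observed"
    for (i, j), t in zip(idx, temps):
        if 0 <= i < grid_shape[0] and 0 <= j < grid_shape[1]:
            grid[i, j] = max(grid[i, j], t)           # max-temperature aggregation
    return grid
```

Storing one small feature grid per layer instead of the full IR frame stream is what yields the storage reduction: the per-frame pixel data can be discarded once each frame has been folded into the voxel grid.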
19 pages, 1755 KB  
Article
New Fault Diagnosis Strategy Based on KGLRT Chart for Monitoring Chemical Processes
by Hajer Lahdhiri, Imen Hamrouni, Okba Taouali, Ali Alshehri and Esam Aloufi
Appl. Sci. 2026, 16(7), 3334; https://doi.org/10.3390/app16073334 - 30 Mar 2026
Abstract
Process monitoring methods play a crucial role in identifying equipment malfunctions and instrument failures, as well as in maintaining process safety and product quality. Selecting the right approach for fault detection and diagnosis is therefore vital. Several localization methods based on Kernel Principal Component Analysis (KPCA) exist, such as the partial localization approach, which is effective at detecting anomalies but does not always pinpoint faults precisely. This method often identifies a suspicious area or group of variables without isolating the exact source of the fault. In complex systems such as chemical reactors, it can produce false positives or incorrect localizations if the data are noisy or if the fault affects multiple correlated variables. Conversely, the reconstruction-based contribution approach, when integrated with KPCA, is both widely documented in the literature and highly effective for fault localization. This method first identifies anomalies using Hotelling's T2 statistic and the Q (squared prediction error) statistic, then analyzes the contributions of individual variables to these indices in order to isolate the fault. However, the convergence of the optimization algorithm using the T2 index is not guaranteed. To address this limitation, we introduce RBC-KGLRT, a novel localization framework that integrates reconstruction-based contribution with KPCA and the Generalized Likelihood Ratio Test in its kernel form to improve both precision and reliability in localization tasks. This work turns traditional KPCA and reduced-rank KPCA fault detection approaches, enhanced by the KGLRT metric, into a powerful fault localization solution through the reconstruction-based contribution (RBC) method. Its effectiveness is rigorously evaluated using the Tennessee Eastman Process (TEP), a widely recognized simulation benchmark in process control and chemical engineering. Full article
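The reconstruction-based contribution idea can be illustrated with plain linear PCA and the Q (SPE) index; this is a simplified stand-in for the kernel (KPCA/KGLRT) form used in the paper. For the SPE detection-index matrix M = I - P P^T, the contribution of variable i is RBC_i = (e_i^T M x)^2 / M_ii, and the faulty variable is the one with the largest contribution.

```python
import numpy as np

def rbc_spe(X_normal, x_test, n_comp):
    """Reconstruction-based contribution of each variable to the SPE (Q) index,
    using linear PCA as a simplified stand-in for the kernel version."""
    mu = X_normal.mean(axis=0)
    Xc = X_normal - mu
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    P = Vt[:n_comp].T                            # PCA loading matrix (m, n_comp)
    M = np.eye(X_normal.shape[1]) - P @ P.T      # residual projector = SPE index matrix
    x = x_test - mu
    mx = M @ x                                   # residual of the test sample
    return mx ** 2 / (np.diag(M) + 1e-12)        # RBC_i = (e_i^T M x)^2 / M_ii
```

Because M is a projector, a large fault added along one sensor direction is guaranteed (by a Cauchy-Schwarz argument on the rows of M) to yield its largest RBC on that sensor, which is why this diagnosis avoids the smearing seen in raw contribution plots.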