Search Results (675)

Search Parameters:
Keywords = time-frequency fusion

15 pages, 2373 KB  
Article
LLM-Empowered Kolmogorov-Arnold Frequency Learning for Time Series Forecasting in Power Systems
by Zheng Yang, Yang Yu, Shanshan Lin and Yue Zhang
Mathematics 2025, 13(19), 3149; https://doi.org/10.3390/math13193149 - 2 Oct 2025
Abstract
With the rapid evolution of artificial intelligence technologies in power systems, data-driven time-series forecasting has become instrumental in enhancing the stability and reliability of power systems, allowing operators to anticipate demand fluctuations and optimize energy distribution. Despite the notable progress made by current methods, they are still hindered by two major limitations: most existing models are relatively small in architecture, failing to fully leverage the potential of large-scale models, and they are based on fixed nonlinear mapping functions that cannot adequately capture complex patterns, leading to information loss. To this end, an LLM-Empowered Kolmogorov–Arnold frequency learning (LKFL) is proposed for time series forecasting in power systems, which consists of LLM-based prompt representation learning, KAN-based frequency representation learning, and entropy-oriented cross-modal fusion. Specifically, LKFL first transforms multivariable time-series data into text prompts and leverages a pre-trained LLM to extract semantic-rich prompt representations. It then applies Fast Fourier Transform to convert the time-series data into the frequency domain and employs Kolmogorov–Arnold networks (KAN) to capture multi-scale periodic structures and complex frequency characteristics. Finally, LKFL integrates the prompt and frequency representations through an entropy-oriented cross-modal fusion strategy, which minimizes the semantic gap between different modalities and ensures full integration of complementary information. This comprehensive approach enables LKFL to achieve superior forecasting performance in power systems. Extensive evaluations on five benchmarks verify that LKFL sets a new standard for time-series forecasting in power systems compared with baseline methods. Full article
(This article belongs to the Special Issue Artificial Intelligence and Data Science, 2nd Edition)
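
The two representation paths above reduce to fairly simple preprocessing steps. The sketch below is illustrative rather than the authors' code: it converts a hypothetical (T, C) window of load data into per-channel FFT magnitudes for the frequency branch and serialises the same window into a text prompt for an LLM; the prompt template and function names are assumptions.

```python
import numpy as np

def frequency_representation(window: np.ndarray) -> np.ndarray:
    """Map a (T, C) multivariate window to per-channel spectral magnitudes via FFT,
    the domain conversion step that feeds the KAN-based frequency branch."""
    spectrum = np.fft.rfft(window, axis=0)              # (T//2 + 1, C) complex bins
    return np.abs(spectrum) / window.shape[0]            # normalised magnitudes

def prompt_representation(window: np.ndarray, freq_hz: float = 0.5) -> str:
    """Serialise the same window into a text prompt for a pre-trained LLM
    (template is illustrative only, not the paper's)."""
    stats = ", ".join(
        f"channel {c}: mean={window[:, c].mean():.2f}, std={window[:, c].std():.2f}"
        for c in range(window.shape[1])
    )
    return (f"Power-system load sampled at {freq_hz} Hz over {window.shape[0]} steps. "
            f"Summary: {stats}. Forecast the next values.")

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    demo = rng.normal(size=(96, 3))                      # 96 time steps, 3 variables
    print(frequency_representation(demo).shape)          # (49, 3)
    print(prompt_representation(demo)[:80], "...")
```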

23 pages, 4303 KB  
Article
LMCSleepNet: A Lightweight Multi-Channel Sleep Staging Model Based on Wavelet Transform and Multi-Scale Convolutions
by Jiayi Yang, Yuanyuan Chen, Tingting Yu and Ying Zhang
Sensors 2025, 25(19), 6065; https://doi.org/10.3390/s25196065 - 2 Oct 2025
Abstract
Sleep staging is a crucial indicator for assessing sleep quality, which contributes to sleep monitoring and the diagnosis of sleep disorders. Although existing sleep staging methods achieve high classification performance, two major challenges remain: (1) the ability to effectively extract salient features from multi-channel sleep data remains limited; (2) excessive model parameters hinder efficiency improvements. To address these challenges, this work proposes a lightweight multi-channel sleep staging network (LMCSleepNet). LMCSleepNet is composed of four modules. The first module enhances frequency domain features through continuous wavelet transform. The second module extracts time–frequency features using multi-scale convolutions. The third module optimizes ResNet18 with depthwise separable convolutions to reduce parameters. The fourth module improves spatial correlation using the Convolutional Block Attention Module (CBAM). On the public datasets SleepEDF-20 and SleepEDF-78, LMCSleepNet achieved classification accuracies of 88.2% (κ = 0.84, MF1 = 82.4%) and 84.1% (κ = 0.77, MF1 = 77.7%), respectively, while reducing model parameters to 1.49 M. Furthermore, experiments validated the influence of temporal sampling points in wavelet time–frequency maps on sleep classification performance (accuracy, Cohen’s kappa, and macro-average F1-score) and the influence of multi-scale dilated convolution module fusion methods on classification performance. LMCSleepNet is an efficient lightweight model for extracting and integrating multimodal features from multichannel Polysomnography (PSG) data, which facilitates its application in resource-constrained scenarios. Full article
(This article belongs to the Section Biomedical Sensors)
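
The parameter saving in the third module comes from swapping standard convolutions for depthwise separable ones. A minimal PyTorch sketch of that substitution; kernel and channel sizes are illustrative, not taken from the paper.

```python
import torch
import torch.nn as nn

class DepthwiseSeparableConv(nn.Module):
    """Depthwise + pointwise convolution, the substitution applied to ResNet18
    blocks to cut parameters (sketch with illustrative sizes)."""
    def __init__(self, in_ch: int, out_ch: int, kernel: int = 3):
        super().__init__()
        self.depthwise = nn.Conv2d(in_ch, in_ch, kernel, padding=kernel // 2, groups=in_ch)
        self.pointwise = nn.Conv2d(in_ch, out_ch, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.pointwise(self.depthwise(x))

standard = nn.Conv2d(64, 128, 3, padding=1)
separable = DepthwiseSeparableConv(64, 128)
count = lambda m: sum(p.numel() for p in m.parameters())
print(count(standard), "vs", count(separable))   # ~73.9k vs ~9.0k parameters
```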

19 pages, 7222 KB  
Article
Multi-Channel Spectro-Temporal Representations for Speech-Based Parkinson’s Disease Detection
by Hadi Sedigh Malekroodi, Nuwan Madusanka, Byeong-il Lee and Myunggi Yi
J. Imaging 2025, 11(10), 341; https://doi.org/10.3390/jimaging11100341 - 1 Oct 2025
Abstract
Early, non-invasive detection of Parkinson’s Disease (PD) using speech analysis offers promise for scalable screening. In this work, we propose a multi-channel spectro-temporal deep-learning approach for PD detection from sentence-level speech, a clinically relevant yet underexplored modality. We extract and fuse three complementary time–frequency representations—mel spectrogram, constant-Q transform (CQT), and gammatone spectrogram—into a three-channel input analogous to an RGB image. This fused representation is evaluated across CNNs (ResNet, DenseNet, and EfficientNet) and Vision Transformer using the PC-GITA dataset, under 10-fold subject-independent cross-validation for robust assessment. Results showed that fusion consistently improves performance over single representations across architectures. EfficientNet-B2 achieves the highest accuracy (84.39% ± 5.19%) and F1-score (84.35% ± 5.52%), outperforming recent methods using handcrafted features or pretrained models (e.g., Wav2Vec2.0, HuBERT) on the same task and dataset. Performance varies with sentence type, with emotionally salient and prosodically emphasized utterances yielding higher AUC, suggesting that richer prosody enhances discriminability. Our findings indicate that multi-channel fusion enhances sensitivity to subtle speech impairments in PD by integrating complementary spectral information. Our approach implies that multi-channel fusion could enhance the detection of discriminative acoustic biomarkers, potentially offering a more robust and effective framework for speech-based PD screening, though further validation is needed before clinical application. Full article
(This article belongs to the Special Issue Celebrating the 10th Anniversary of the Journal of Imaging)
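
The fused input is essentially three time–frequency images stacked like RGB channels. A hedged sketch using librosa for the mel spectrogram and CQT; the gammatone channel is replaced here by a second mel variant as a stand-in, since a gammatone spectrogram needs a separate package not used in this sketch.

```python
import numpy as np
import librosa
from scipy.ndimage import zoom

def three_channel_input(y: np.ndarray, sr: int, size: int = 224) -> np.ndarray:
    """Stack complementary time-frequency views into an RGB-like (3, H, W) array.
    Mel and CQT follow the paper; the third channel is a stand-in for the
    gammatone spectrogram."""
    mel = librosa.power_to_db(librosa.feature.melspectrogram(y=y, sr=sr, n_mels=128))
    cqt = librosa.amplitude_to_db(np.abs(librosa.cqt(y, sr=sr)))
    third = librosa.power_to_db(librosa.feature.melspectrogram(y=y, sr=sr, n_mels=64))

    def fit(img: np.ndarray) -> np.ndarray:               # resize each view to size x size
        return zoom(img, (size / img.shape[0], size / img.shape[1]), order=1)

    stacked = np.stack([fit(mel), fit(cqt), fit(third)])
    # per-channel min-max normalisation so the CNN sees comparable ranges
    mins = stacked.min(axis=(1, 2), keepdims=True)
    maxs = stacked.max(axis=(1, 2), keepdims=True)
    return (stacked - mins) / (maxs - mins + 1e-8)

if __name__ == "__main__":
    wave = np.random.randn(16000 * 3).astype(np.float32)   # 3 s of dummy audio at 16 kHz
    print(three_channel_input(wave, 16000).shape)           # (3, 224, 224)
```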

15 pages, 930 KB  
Article
Analysis of Sensor Location and Time–Frequency Feature Contributions in IMU-Based Gait Identity Recognition
by Fangyu Liu, Hao Wang, Xiang Li and Fangmin Sun
Electronics 2025, 14(19), 3905; https://doi.org/10.3390/electronics14193905 - 30 Sep 2025
Abstract
Inertial measurement unit (IMU)-based gait biometrics have attracted increasing attention for unobtrusive identity recognition. While recent studies often fuse signals from multiple sensor positions and time–frequency features, the actual contribution of each sensor location and signal modality remains insufficiently explored. In this work, we present a comprehensive quantitative analysis of the role of different IMU placements and feature domains in gait-based identity recognition. IMU data were collected from three body positions (shank, waist, and wrist) and processed to extract both time-domain and frequency-domain features. An attention-gated fusion network was employed to weight each signal branch adaptively, enabling interpretable assessment of their discriminative power. Experimental results show that shank IMU dominates recognition accuracy, while waist and wrist sensors primarily provide auxiliary information. Similarly, the contribution of time-domain features to classification performance is the greatest, while frequency-domain features offer complementary robustness. These findings illustrate the importance of sensor and feature selection in designing efficient, scalable IMU-based identity recognition systems for wearable applications. Full article
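
An attention-gated fusion layer of the kind described can be a few lines of PyTorch: each branch embedding receives a softmax gate, and the gate values double as the interpretable contribution scores. This is a sketch of the general idea, not the paper's exact network.

```python
import torch
import torch.nn as nn

class AttentionGatedFusion(nn.Module):
    """Fuse per-branch embeddings (e.g. shank/waist/wrist, time/frequency features)
    with learned gates; the gate values expose branch importance."""
    def __init__(self, num_branches: int, dim: int):
        super().__init__()
        self.gate = nn.Sequential(nn.Linear(num_branches * dim, num_branches),
                                  nn.Softmax(dim=-1))

    def forward(self, branches: list[torch.Tensor]):
        flat = torch.cat(branches, dim=-1)               # (B, num_branches * dim)
        weights = self.gate(flat)                        # (B, num_branches), sums to 1
        fused = sum(w.unsqueeze(-1) * b
                    for w, b in zip(weights.unbind(dim=-1), branches))
        return fused, weights

if __name__ == "__main__":
    shank, waist, wrist = (torch.randn(8, 64) for _ in range(3))
    fused, w = AttentionGatedFusion(3, 64)([shank, waist, wrist])
    print(fused.shape, w.mean(dim=0))                    # (8, 64) and average branch weights
```
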
18 pages, 2459 KB  
Article
FFMamba: Feature Fusion State Space Model Based on Sound Event Localization and Detection
by Yibo Li, Dongyuan Ge, Jieke Xu and Xifan Yao
Electronics 2025, 14(19), 3874; https://doi.org/10.3390/electronics14193874 - 29 Sep 2025
Abstract
Previous studies on Sound Event Localization and Detection (SELD) have primarily focused on CNN- and Transformer-based designs. While CNNs possess local receptive fields, making it difficult to capture global dependencies over long sequences, Transformers excel at modeling long-range dependencies but have limited sensitivity to local time–frequency features. Recently, the VMamba architecture, built upon the Visual State Space (VSS) model, has shown great promise in handling long sequences, yet it remains limited in modeling local spatial details. To address this issue, we propose a novel state space model with an attention-enhanced feature fusion mechanism, termed FFMamba, which balances both local spatial modeling and long-range dependency capture. At a fine-grained level, we design two key modules: the Multi-Scale Fusion Visual State Space (MSFVSS) module and the Wavelet Transform-Enhanced Downsampling (WTED) module. Specifically, the MSFVSS module integrates a Multi-Scale Fusion (MSF) component into the VSS framework, enhancing its ability to capture both long-range temporal dependencies and detailed local spatial information. Meanwhile, the WTED module employs a dual-branch design to fuse spatial and frequency domain features, improving the richness of feature representations. Comparative experiments were conducted on the DCASE2021 Task 3 and DCASE2022 Task 3 datasets. The results demonstrate that the proposed FFMamba model outperforms recent approaches in capturing long-range temporal dependencies and effectively integrating multi-scale audio features. In addition, ablation studies confirmed the effectiveness of the MSFVSS and WTED modules. Full article
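
As a rough illustration of wavelet-enhanced downsampling, the sketch below pairs a strided-convolution branch with a one-level Haar-style decomposition and fuses them with a 1 × 1 convolution; it mirrors the spirit of the WTED module, not its actual design.

```python
import torch
import torch.nn as nn

def haar_dwt2(x: torch.Tensor) -> torch.Tensor:
    """One-level Haar-style decomposition (averages and differences): returns the
    four sub-bands stacked on the channel axis, halving H and W of (B, C, H, W)."""
    a, b = x[..., 0::2, :], x[..., 1::2, :]
    out = []
    for band in ((a + b) / 2, (a - b) / 2):
        left, right = band[..., 0::2], band[..., 1::2]
        out += [(left + right) / 2, (left - right) / 2]
    return torch.cat(out, dim=1)                          # (B, 4C, H/2, W/2)

class DualBranchDownsample(nn.Module):
    """Spatial branch (strided conv) + frequency branch (Haar sub-bands), fused by
    concatenation and a 1x1 conv."""
    def __init__(self, in_ch: int, out_ch: int):
        super().__init__()
        self.spatial = nn.Conv2d(in_ch, out_ch, kernel_size=3, stride=2, padding=1)
        self.mix = nn.Conv2d(out_ch + 4 * in_ch, out_ch, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.mix(torch.cat([self.spatial(x), haar_dwt2(x)], dim=1))

print(DualBranchDownsample(16, 32)(torch.randn(2, 16, 64, 64)).shape)  # (2, 32, 32, 32)
```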

12 pages, 1923 KB  
Article
Microwave Resonant Probe-Based Defect Detection for Butt Fusion Joints in High-Density Polyethylene Pipes
by Jinping Pan, Chaoming Zhu and Lianjiang Tan
Polymers 2025, 17(19), 2617; https://doi.org/10.3390/polym17192617 - 27 Sep 2025
Abstract
With the widespread use of high-density polyethylene (HDPE) pipes in various industrial and municipal applications, ensuring the structural integrity of their joints is crucial. This paper presents a novel defect detection method based on a microwave resonant probe, designed to perform efficient and non-destructive evaluation of butt fusion joints in HDPE pipes. The experimental setup integrates a microwave antenna and resonant cavity to record real-time variations in resonance frequency and S21 magnitude while scanning the pipe surface. This method effectively detects common defects, including cracks, holes, and inclusions, within the butt fusion joints. The results show that the microwave resonant probe exhibits high sensitivity in detecting HDPE pipe defects. It can identify different sizes of cracks and holes, and can distinguish between talc powder and sand particles. This technique offers a promising solution for pipeline health monitoring, particularly for evaluating the quality of welded joints in non-metallic materials. Full article
(This article belongs to the Special Issue Advanced Joining Technologies for Polymers and Polymer Composites)
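
Processing such sweeps typically amounts to tracking the resonance dip. The toy example below, with made-up frequencies and tolerances, locates the dip in an |S21| trace and flags a scan position whose resonance frequency or depth departs from a baseline.

```python
import numpy as np

def resonance_features(freq_hz: np.ndarray, s21_db: np.ndarray) -> tuple[float, float]:
    """Return (resonance frequency, dip magnitude) from a |S21| sweep in dB."""
    i = int(np.argmin(s21_db))
    return float(freq_hz[i]), float(s21_db[i])

def is_defect(baseline: tuple[float, float], scan: tuple[float, float],
              df_tol_hz: float = 2e6, dmag_tol_db: float = 1.0) -> bool:
    """Flag a scan position when the resonance shifts beyond illustrative tolerances."""
    return abs(scan[0] - baseline[0]) > df_tol_hz or abs(scan[1] - baseline[1]) > dmag_tol_db

freq = np.linspace(2.0e9, 3.0e9, 1001)                    # 2-3 GHz sweep (made up)
sound = -3 - 20 * np.exp(-((freq - 2.45e9) / 8e6) ** 2)    # healthy joint: dip at 2.45 GHz
flawed = -3 - 14 * np.exp(-((freq - 2.47e9) / 8e6) ** 2)   # void shifts and weakens the dip
print(is_defect(resonance_features(freq, sound), resonance_features(freq, flawed)))  # True
```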

23 pages, 1950 KB  
Article
Multi-Classification Model for PPG Signal Arrhythmia Based on Time–Frequency Dual-Domain Attention Fusion
by Yubo Sun, Keyu Meng, Shipan Lang, Pei Li, Wentao Wang and Jun Yang
Sensors 2025, 25(19), 5985; https://doi.org/10.3390/s25195985 - 27 Sep 2025
Abstract
Cardiac arrhythmia is a leading cause of sudden cardiac death. Its early detection and continuous monitoring hold significant clinical value. Photoplethysmography (PPG) signals, owing to their non-invasive nature, low cost, and convenience, have become a vital information source for monitoring cardiac activity and vascular health. However, the inherent non-stationarity of PPG signals and significant inter-individual variations pose a major challenge in developing highly accurate and efficient arrhythmia classification methods. To address this challenge, we propose a Fusion Deep Multi-domain Attention Network (Fusion-DMA-Net). Within this framework, we innovatively introduce a cross-scale residual attention structure to comprehensively capture discriminative features in both the time and frequency domains. Additionally, to exploit complementary information embedded in PPG signals across these domains, we develop a fusion strategy integrating interactive attention, self-attention, and gating mechanisms. The proposed Fusion-DMA-Net model is evaluated for classifying four major types of cardiac arrhythmias. Experimental results demonstrate its outstanding classification performance, achieving an overall accuracy of 99.05%, precision of 99.06%, and an F1-score of 99.04%. These results demonstrate the feasibility of the Fusion-DMA-Net model in classifying four types of cardiac arrhythmias using single-channel PPG signals, thereby contributing to the early diagnosis and treatment of cardiovascular diseases and supporting the development of future wearable health technologies. Full article
(This article belongs to the Special Issue Systems for Contactless Monitoring of Vital Signs)
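
The fusion strategy combines cross-modal attention with gating. A compact PyTorch sketch of that pattern, with token counts and dimensions chosen for illustration rather than taken from Fusion-DMA-Net:

```python
import torch
import torch.nn as nn

class InteractiveAttentionFusion(nn.Module):
    """Let time-domain tokens attend to frequency-domain tokens (and vice versa),
    then merge the two streams with a learned gate."""
    def __init__(self, dim: int, heads: int = 4):
        super().__init__()
        self.t2f = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.f2t = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.gate = nn.Sequential(nn.Linear(2 * dim, dim), nn.Sigmoid())

    def forward(self, time_feats: torch.Tensor, freq_feats: torch.Tensor) -> torch.Tensor:
        t_ctx, _ = self.t2f(time_feats, freq_feats, freq_feats)   # time queries frequency
        f_ctx, _ = self.f2t(freq_feats, time_feats, time_feats)   # frequency queries time
        t_vec, f_vec = t_ctx.mean(dim=1), f_ctx.mean(dim=1)       # pool over tokens
        g = self.gate(torch.cat([t_vec, f_vec], dim=-1))          # per-feature gate in (0, 1)
        return g * t_vec + (1 - g) * f_vec

fusion = InteractiveAttentionFusion(dim=64)
out = fusion(torch.randn(8, 50, 64), torch.randn(8, 33, 64))      # dummy PPG token streams
print(out.shape)                                                   # (8, 64)
```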

16 pages, 9648 KB  
Article
A Novel Classification Framework for VLF/LF Lightning-Radiation Electric-Field Waveforms
by Wenxing Sun, Tingxiu Jiang, Duanjiao Li, Yun Zhang, Xinru Li, Yunlong Wang and Jiachen Gao
Atmosphere 2025, 16(10), 1130; https://doi.org/10.3390/atmos16101130 - 26 Sep 2025
Abstract
The classification of very-low-frequency and low-frequency (VLF/LF) lightning-radiation electric-field waveforms is of paramount importance for lightning-disaster prevention and mitigation. However, traditional waveform classification methods suffer from the complex characteristics of lightning waveforms, such as non-stationarity, strong noise interference, and feature coupling, limiting classification accuracy and generalization. To address this problem, a novel framework is proposed for VLF/LF lightning-radiated electric-field waveform classification. Firstly, an improved Kalman filter (IKF) is meticulously designed to eliminate possible high-frequency interferences (such as atmospheric noise, electromagnetic radiation from power systems, and electronic noise from measurement equipment) embedded within the waveforms based on the maximum entropy criterion. Subsequently, an attention-based multi-fusion convolutional neural network (AMCNN) is developed for waveform classification. In the AMCNN architecture, waveform information is comprehensively extracted and enhanced through an optimized feature fusion structure, which allows for a more thorough consideration of feature diversity, thereby significantly improving the classification accuracy. An actual dataset from Anhui province in China is used to validate the proposed classification framework. Experimental results demonstrate that our framework achieves a classification accuracy of 98.9% within a processing time of no more than 5.3 ms, proving its superior classification performance for lightning-radiation electric-field waveforms. Full article
(This article belongs to the Section Meteorology)
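
The IKF builds on ordinary Kalman filtering with a maximum-entropy criterion; only the ordinary part is sketched below, as a scalar random-walk filter applied to a synthetic VLF pulse with illustrative noise parameters.

```python
import numpy as np

def kalman_denoise(z: np.ndarray, q: float = 1e-4, r: float = 1e-2) -> np.ndarray:
    """Standard random-walk Kalman filter over a sampled waveform z; q and r are the
    process and measurement noise variances (values here are illustrative)."""
    x_hat, p = z[0], 1.0
    out = np.empty_like(z)
    for k, zk in enumerate(z):
        p = p + q                          # predict: random-walk state, variance grows by q
        g = p / (p + r)                    # Kalman gain
        x_hat = x_hat + g * (zk - x_hat)   # update with the new measurement
        p = (1 - g) * p
        out[k] = x_hat
    return out

t = np.linspace(0, 1e-3, 2000)                                    # 1 ms record, 2 MHz sampling
clean = np.exp(-t / 2e-4) * np.sin(2 * np.pi * 10e3 * t)           # idealised VLF pulse
noisy = clean + 0.15 * np.random.default_rng(0).normal(size=t.size)
print(np.std(noisy - clean), ">", np.std(kalman_denoise(noisy) - clean))
```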

25 pages, 20535 KB  
Article
DWTF-DETR: A DETR-Based Model for Inshore Ship Detection in SAR Imagery via Dynamically Weighted Joint Time–Frequency Feature Fusion
by Tiancheng Dong, Taoyang Wang, Yuqi Han, Deren Li, Guo Zhang and Yuan Peng
Remote Sens. 2025, 17(19), 3301; https://doi.org/10.3390/rs17193301 - 25 Sep 2025
Abstract
Inshore ship detection in synthetic aperture radar (SAR) imagery poses significant challenges due to the high density and diversity of ships. However, low inter-object backscatter contrast and blurred boundaries of docked ships often result in performance degradation for traditional object detection methods, especially under complex backgrounds and low signal-to-noise ratio (SNR) conditions. To address these issues, this paper proposes a novel detection framework, the Dynamic Weighted Joint Time–Frequency Feature Fusion DEtection TRansformer (DETR) Model (DWTF-DETR), specifically designed for SAR-based ship detection in inshore areas. The proposed model integrates a Dual-Domain Feature Fusion Module (DDFM) to extract and fuse features from both SAR images and their frequency-domain representations, enhancing sensitivity to both high- and low-frequency target features. Subsequently, a Dual-Path Attention Fusion Module (DPAFM) is introduced to dynamically weight and fuse shallow detail features with deep semantic representations. By leveraging an attention mechanism, the module adaptively adjusts the importance of different feature paths, thereby enhancing the model’s ability to perceive targets with ambiguous structural characteristics. Experiments conducted on a self-constructed inshore SAR ship detection dataset and the public HRSID dataset demonstrate that DWTF-DETR achieves superior performance compared to the baseline RT-DETR. Specifically, the proposed method improves mAP@50 by 1.60% and 0.72%, and F1-score by 0.58% and 1.40%, respectively. Moreover, comparative experiments show that the proposed approach outperforms several state-of-the-art SAR ship detection methods. The results confirm that DWTF-DETR is capable of achieving accurate and robust detection in diverse and complex maritime environments. Full article
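
Feeding the detector both domains can be as simple as concatenating a chip with its log-magnitude spectrum. The sketch below shows that dual-domain input construction in PyTorch; the DDFM itself learns the fusion and is not reproduced here.

```python
import torch

def dual_domain_input(sar: torch.Tensor) -> torch.Tensor:
    """Augment a single-channel SAR chip (B, 1, H, W) with its centred log-magnitude
    spectrum so a detector sees both domains."""
    spec = torch.fft.fftshift(torch.fft.fft2(sar), dim=(-2, -1))
    log_mag = torch.log1p(spec.abs())
    # normalise each chip's spectrum to [0, 1] so it matches the image dynamic range
    flat = log_mag.flatten(start_dim=-2)
    lo, hi = flat.min(dim=-1).values, flat.max(dim=-1).values
    log_mag = (log_mag - lo[..., None, None]) / (hi[..., None, None] - lo[..., None, None] + 1e-8)
    return torch.cat([sar, log_mag], dim=1)               # (B, 2, H, W)

chips = torch.rand(4, 1, 256, 256)                         # dummy SAR chips
print(dual_domain_input(chips).shape)                      # (4, 2, 256, 256)
```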

22 pages, 6045 KB  
Article
Early Warning of Anthracnose on Illicium verum Through the Synergistic Integration of Environmental and Remote Sensing Time Series Data
by Junji Li, Yuxin Zhao, Tianteng Zhang, Jiahui Du, Yucai Li, Ling Wu and Xiangnan Liu
Remote Sens. 2025, 17(19), 3294; https://doi.org/10.3390/rs17193294 - 25 Sep 2025
Abstract
Anthracnose on Illicium verum Hook.f (I. verum) significantly affects the yield and quality of I. verum, and timely detection methods are urgently needed for early control. However, early warning is difficult due to two major challenges, including the sparse availability of optical remote sensing observations due to frequent cloud and rain interference, and the weak spectral responses caused by infestation during early stages. In this article, a framework for early warning of anthracnose on I. verum that combines high-frequency environmental (meteorological and topographical) data and Sentinel-2 remote sensing time-series data, along with a Time-Aware Long Short-Term Memory (T-LSTM) network incorporating an attentional mechanism (At-T-LSTM) was proposed. First, all available environmental and remote sensing data during the study period were analyzed to characterize the early anthracnose outbreaks, and sensitive features were selected as the algorithm input. On this basis, to address the issue of unequal temporal lengths between environmental and remote sensing time series, the At-T-LSTM model incorporates a time-aware mechanism to capture intra-feature temporal dependencies, while a Self-Attention layer is used to quantify inter-feature interaction weights, enabling effective multi-source features time-series fusion. The results show that the proposed framework achieves a spatial accuracy (F1-score) of 0.86 and a temporal accuracy of 83% in early-stage detection, demonstrating high reliability. By integrating remote sensing features with environmental drivers, this approach enables multi-feature collaborative modeling for the risk assessment and monitoring of I. verum anthracnose. It effectively mitigates the impact of sparse observations and significantly improves the accuracy of early warnings. Full article
(This article belongs to the Special Issue Application of Remote Sensing in Agroforestry (Third Edition))
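
One common way to make an LSTM "time-aware" for unevenly spaced acquisitions is to append the gap between observations to each input step and pool the hidden states with attention. The sketch below follows that simplified pattern with invented feature sizes; it is not the At-T-LSTM architecture itself.

```python
import torch
import torch.nn as nn

class TimeAwareAttnLSTM(nn.Module):
    """Handle irregularly spaced observations by appending the time gap to each step,
    then pool hidden states with a learned attention layer."""
    def __init__(self, n_features: int, hidden: int = 64):
        super().__init__()
        self.lstm = nn.LSTM(n_features + 1, hidden, batch_first=True)   # +1 for delta-t
        self.attn = nn.Linear(hidden, 1)
        self.head = nn.Linear(hidden, 1)                                 # outbreak-risk score

    def forward(self, x: torch.Tensor, days: torch.Tensor) -> torch.Tensor:
        dt = torch.diff(days, prepend=days[:, :1], dim=1).unsqueeze(-1)  # gaps in days
        h, _ = self.lstm(torch.cat([x, dt], dim=-1))                     # (B, T, hidden)
        w = torch.softmax(self.attn(h), dim=1)                           # step weights
        return torch.sigmoid(self.head((w * h).sum(dim=1)))              # (B, 1)

obs = torch.randn(2, 12, 6)                       # 12 cloud-free dates, 6 features each
dates = torch.cumsum(torch.randint(3, 16, (2, 12)).float(), dim=1)  # uneven revisit gaps
print(TimeAwareAttnLSTM(6)(obs, dates).shape)     # (2, 1)
```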

24 pages, 7350 KB  
Article
An Attention-Driven Multi-Scale Framework for Rotating-Machinery Fault Diagnosis Under Noisy Conditions
by Le-Min Xu, Pak Kin Wong, Zhi-Jiang Gao, Zhi-Xin Yang, Jing Zhao and Xian-Bo Wang
Electronics 2025, 14(19), 3805; https://doi.org/10.3390/electronics14193805 - 25 Sep 2025
Abstract
Failures of rotating machinery, such as bearings and gears, are a critical concern in industrial systems, leading to significant operational downtime and economic losses. A primary research challenge is achieving accurate fault diagnosis under complex industrial noise, where weak fault signatures are often masked by interference signals. This problem is particularly acute in demanding applications like offshore wind turbines, where harsh operating conditions and high maintenance costs necessitate highly robust and reliable diagnostic methods. To address this challenge, this paper proposes a novel Multi-Scale Domain Convolutional Attention Network (MSDCAN). The method integrates enhanced adaptive multi-domain feature extraction with a hybrid attention mechanism, combining information from the time, frequency, wavelet, and cyclic spectral domains with domain-specific attention weighting. A core innovation is the hybrid attention fusion mechanism, which enables cross-modal interaction between deep convolutional features and domain-specific features, enhanced by channel attention modules. The model’s effectiveness is validated on two public benchmark datasets for key rotating components. On the Case Western Reserve University (CWRU) bearing dataset, the MSDCAN achieves accuracies of 97.3% under clean conditions, 96.6% at 15 dB signal-to-noise ratio (SNR), 94.4% at 10 dB SNR, and a robust 85.5% under severe 5 dB SNR. To further validate its generalization, on the Xi’an Jiaotong University (XJTU) gear dataset, the model attains accuracies of 94.8% under clean conditions, 95.0% at 15 dB SNR, 83.6% at 10 dB SNR, and 63.8% at 5 dB SNR. These comprehensive results quantitatively validate the model’s superior diagnostic accuracy and exceptional noise robustness for rotating machinery, establishing a strong foundation for its application in reliable condition monitoring for complex systems, including wind turbines. Full article
(This article belongs to the Special Issue Advances in Condition Monitoring and Fault Diagnosis)
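
Before any attention weighting, each domain contributes its own descriptors. A lightweight NumPy/PyWavelets sketch of time-, frequency-, and wavelet-domain features for a vibration segment; the cyclic spectral domain and the learned attention are omitted, and all sizes are illustrative.

```python
import numpy as np
import pywt

def multi_domain_features(x: np.ndarray) -> dict[str, np.ndarray]:
    """Extract simple descriptors from the time, frequency, and wavelet domains of a
    vibration segment (a lightweight stand-in for richer multi-domain inputs)."""
    rms = np.sqrt(np.mean(x ** 2))
    time_feats = np.array([rms,
                           np.max(np.abs(x)) / (rms + 1e-12),                       # crest factor
                           ((x - x.mean()) ** 4).mean() / (x.var() ** 2 + 1e-12)])  # kurtosis
    spectrum = np.abs(np.fft.rfft(x)) / x.size
    bands = np.array_split(spectrum, 8)                    # 8 coarse band energies
    freq_feats = np.array([np.sum(b ** 2) for b in bands])
    coeffs = pywt.wavedec(x, "db4", level=4)                # wavelet sub-band energies
    wav_feats = np.array([np.sum(c ** 2) / x.size for c in coeffs])
    return {"time": time_feats, "freq": freq_feats, "wavelet": wav_feats}

rng = np.random.default_rng(1)
seg = np.sin(2 * np.pi * 157 * np.arange(4096) / 12_000) + 0.3 * rng.normal(size=4096)
feats = multi_domain_features(seg)
print({k: v.shape for k, v in feats.items()})
```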

16 pages, 3004 KB  
Article
Lamb Wave-Based Damage Fusion Detection of Composite Laminate Panels Using Distance Analysis and Evidence Theory
by Li Wang, Guoqiang Liu, Xiaguang Wang and Yu Yang
Sensors 2025, 25(18), 5930; https://doi.org/10.3390/s25185930 - 22 Sep 2025
Abstract
The Lamb wave-based damage detection method shows great potential for composite impact failure assessments. However, the traditional single signal feature-based methods only depend on partial structural state monitoring information, without considering the inconsistency of damage sensitivity and detection capability for different signal features. Therefore, this paper proposes a damage fusion detection method based on distance analysis and evidence theory for composite laminate panels. Firstly, the signal features of different dimensions are extracted from time–frequency domain perspectives. Correlational analysis and cluster analysis are applied to achieve feature reduction and retain highly sensitive signal features. Secondly, the damage detection results of highly sensitive features and the corresponding basic probability assignments (BPAs) are acquired using distance analysis. Finally, the consistent damage detection result can be acquired by applying evidence theory to the decision level to fuse detection results for highly sensitive signal features. Impact tests on ten composite laminate panels are implemented to validate the proposed fusion detection method. The results show that the proposed method can accurately identify the delamination damage with different locations and different areas. In addition, the classification accuracy is above 85%, the false alarm rate is below 25% and the missing alarm rate is below 15%. Full article
(This article belongs to the Section Physical Sensors)
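
Decision-level fusion here rests on Dempster's rule of combination. A self-contained example with made-up masses over a two-hypothesis frame:

```python
def dempster_combine(m1: dict, m2: dict) -> dict:
    """Combine two basic probability assignments over the same frame of discernment
    with Dempster's rule; keys are frozensets of hypotheses, values are masses."""
    fused, conflict = {}, 0.0
    for a, ma in m1.items():
        for b, mb in m2.items():
            inter = a & b
            if inter:
                fused[inter] = fused.get(inter, 0.0) + ma * mb
            else:
                conflict += ma * mb
    return {k: v / (1.0 - conflict) for k, v in fused.items()}   # normalise out conflict

# Illustrative BPAs from two signal features for {damage, healthy} (values are made up).
feature_a = {frozenset({"damage"}): 0.6, frozenset({"healthy"}): 0.1,
             frozenset({"damage", "healthy"}): 0.3}
feature_b = {frozenset({"damage"}): 0.5, frozenset({"healthy"}): 0.2,
             frozenset({"damage", "healthy"}): 0.3}
print(dempster_combine(feature_a, feature_b))
```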

28 pages, 8918 KB  
Article
A Multi-Channel Multi-Scale Spatiotemporal Convolutional Cross-Attention Fusion Network for Bearing Fault Diagnosis
by Ruixue Li, Guohai Zhang, Yi Niu, Kai Rong, Wei Liu and Haoxuan Hong
Sensors 2025, 25(18), 5923; https://doi.org/10.3390/s25185923 - 22 Sep 2025
Abstract
Bearings, as commonly used elements in mechanical apparatus, are essential in transmission systems. Fault diagnosis is of significant importance for the normal and safe functioning of mechanical systems. Conventional fault diagnosis methods depend on one or more vibration sensors, and their diagnostic results are often unsatisfactory under strong noise interference. To tackle this problem, this research develops a bearing fault diagnosis technique utilizing a multi-channel, multi-scale spatiotemporal convolutional cross-attention fusion network. First, continuous wavelet transform (CWT) is applied to convert the raw 1D acoustic and vibration signals of the dataset into 2D time–frequency images. These acoustic and vibration time–frequency images are then simultaneously fed into two parallel structures. After rough feature extraction using ResNet, deep feature extraction is performed using the Multi-Scale Temporal Convolutional Module (MTCM) and the Multi-Feature Extraction Block (MFE). Next, these features are input into a dual cross-attention mechanism module (DCA), where fusion is achieved using attention interaction. Tests and comparisons on two bearing datasets validate the efficacy of the proposed method and show that it outperforms five existing multi-sensor fusion diagnostic methods (1DCNN-VAF, MFAN-VAF, 2MNET, MRSDF, and FAC-CNN). Full article
(This article belongs to the Section Fault Diagnosis & Sensors)
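
The preprocessing step, converting each 1D channel into a 2D time–frequency image with the continuous wavelet transform, can be sketched with PyWavelets; the scale range and wavelet choice are illustrative.

```python
import numpy as np
import pywt

def cwt_image(signal: np.ndarray, fs: float, n_scales: int = 64) -> np.ndarray:
    """Turn a 1D acoustic or vibration segment into a 2D time-frequency image with a
    Morlet continuous wavelet transform, the step that feeds the parallel CNN branches."""
    scales = np.geomspace(2, 128, num=n_scales)
    coeffs, freqs = pywt.cwt(signal, scales, "morl", sampling_period=1.0 / fs)
    image = np.abs(coeffs)                                  # (n_scales, len(signal))
    return (image - image.min()) / (image.max() - image.min() + 1e-12)

fs = 12_000
t = np.arange(2048) / fs
segment = np.sin(2 * np.pi * 160 * t) + 0.5 * np.sin(2 * np.pi * 1_600 * t)
print(cwt_image(segment, fs).shape)                         # (64, 2048)
```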

27 pages, 4710 KB  
Article
Compound Jamming Recognition Under Low JNR Setting Based on a Dual-Branch Residual Fusion Network
by Wen Lu, Junbao Li, Feng Xie and Huanyu Liu
Sensors 2025, 25(18), 5881; https://doi.org/10.3390/s25185881 - 19 Sep 2025
Abstract
In complex electromagnetic environments, radar systems face increasing challenges from advanced jamming techniques. These challenges mainly stem from the diversity of jamming patterns, the complexity of compound jamming signals, and the difficulty of recognition under low jamming-to-noise ratio conditions. Accurate recognition of such signals is critical for enhancing radar anti-jamming capabilities. However, traditional methods often struggle with diverse and evolving jamming patterns. To address this issue, we propose a novel deep learning-based approach for accurate and robust recognition of complex radar jamming signals. Specifically, the proposed network adopts a dual-branch architecture that concurrently processes time-domain and time–frequency-domain features of jamming signals. It further incorporates a multi-branch convolutional structure to strengthen feature extraction and applies an effective feature fusion strategy to capture subtle patterns. Simulation results demonstrate that the proposed method outperforms six representative baseline approaches in recognition accuracy and noise robustness, particularly under low jamming-to-noise ratio conditions. Full article
(This article belongs to the Section Radar Sensors)
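
A dual-branch layout of the kind described pairs a 1D CNN on the raw sequence with a 2D CNN on its time–frequency map and fuses the pooled features. A minimal PyTorch sketch with invented layer sizes, not the paper's network:

```python
import torch
import torch.nn as nn

class DualBranchJammingNet(nn.Module):
    """Dual-branch classifier: a 1D CNN on the raw I/Q-like sequence and a 2D CNN on
    its time-frequency map, fused before the classifier head."""
    def __init__(self, n_classes: int = 6):
        super().__init__()
        self.time = nn.Sequential(nn.Conv1d(2, 16, 7, stride=2, padding=3), nn.ReLU(),
                                  nn.AdaptiveAvgPool1d(1), nn.Flatten())
        self.tf = nn.Sequential(nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
                                nn.AdaptiveAvgPool2d(1), nn.Flatten())
        self.head = nn.Linear(32, n_classes)

    def forward(self, seq: torch.Tensor, tf_map: torch.Tensor) -> torch.Tensor:
        return self.head(torch.cat([self.time(seq), self.tf(tf_map)], dim=-1))

net = DualBranchJammingNet()
logits = net(torch.randn(4, 2, 1024), torch.randn(4, 1, 64, 64))
print(logits.shape)                                           # (4, 6)
```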

19 pages, 4815 KB  
Article
Strain Sensor-Based Fatigue Prediction for Hydraulic Turbine Governor Servomotor in Complementary Energy Systems
by Hong Hua, Zhizhong Zhang, Xiaobing Liu and Wanquan Deng
Sensors 2025, 25(18), 5860; https://doi.org/10.3390/s25185860 - 19 Sep 2025
Abstract
Hydraulic turbine governor servomotors in wind–solar–hydro complementary energy systems face significant fatigue failure challenges due to high-frequency regulation. This study develops an intelligent fatigue monitoring and prediction system based on strain sensors, specifically designed for the frequent regulation requirements of complementary systems. A multi-point monitoring network was constructed using resistive strain sensors, integrated with temperature and vibration sensors for multimodal data fusion. Field validation was conducted at an 18.56 MW hydroelectric unit, covering guide vane opening ranges from 13% to 63%, with system response time <1 ms and a signal-to-noise ratio of 65 dB. A simulation model combining sensor measurements with finite element simulation was established through fine-mesh modeling to identify critical fatigue locations. The finite element analysis results show excellent agreement with experimental measurements (error < 8%), validating the simulation model approach. The fork head was identified as the critical component with a stress concentration factor of 3.4, maximum stress of 51.7 MPa, and predicted fatigue life of 1.2 × 10⁶ cycles (12–16 years). The cylindrical pin shows a maximum shear stress of 36.1 MPa, with fatigue life of 3.8 × 10⁶ cycles (16–20 years). Monte Carlo reliability analysis indicates a system reliability of 51.2% over 20 years. This work provides an effective technical solution for the predictive maintenance and digital operation of wind–solar–hydro complementary systems. Full article
(This article belongs to the Section Fault Diagnosis & Sensors)
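
Translating counted stress cycles into a life estimate typically goes through Miner's rule against an S-N curve. The sketch below uses invented S-N constants and an invented one-year cycle histogram purely to show the arithmetic; it is not the paper's fatigue model.

```python
import numpy as np

def miner_damage(stress_ranges_mpa: np.ndarray, counts: np.ndarray,
                 sn_c: float = 8.0e9, sn_m: float = 3.0) -> float:
    """Accumulate fatigue damage with Miner's rule against a Basquin-type S-N curve
    N = C * S^(-m); the C and m values here are illustrative, not the paper's."""
    allowable = sn_c * stress_ranges_mpa ** (-sn_m)        # cycles to failure per stress range
    return float(np.sum(counts / allowable))                # damage fraction (failure at 1.0)

# Hypothetical one-year histogram of strain-gauge stress ranges at the fork head.
ranges = np.array([10.0, 25.0, 40.0, 51.7])                 # MPa
cycles = np.array([60_000, 15_000, 3_000, 400])              # counted cycles per year
d = miner_damage(ranges, cycles)
print(f"yearly damage = {d:.4f}, estimated life = {1.0 / d:.1f} years")
```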
