Search Results (2,000)

Search Parameters:
Keywords = wavelet feature

22 pages, 2410 KB  
Article
Feature-Shuffle and Multi-Head Attention-Based Autoencoder for Eliminating Electrode Motion Noise in ECG Applications
by Szu-Ting Wang, Wen-Yen Hsu, Shin-Chi Lai, Ming-Hwa Sheu, Chuan-Yu Chang, Shih-Chang Hsia and Szu-Hong Wang
Sensors 2025, 25(20), 6322; https://doi.org/10.3390/s25206322 (registering DOI) - 13 Oct 2025
Abstract
Electrocardiograms (ECGs) are critical for cardiovascular disease diagnosis, but their accuracy is often compromised by electrode motion (EM) artifacts: large, nonstationary distortions caused by patient movement and electrode-skin interface shifts. These artifacts overlap in frequency with genuine cardiac signals, rendering traditional filtering methods ineffective and increasing the risk of false alarms and misdiagnosis, particularly in wearable and ambulatory ECG applications. To address this, we propose the Feature-Shuffle Multi-Head Attention Autoencoder (FMHA-AE), a novel architecture integrating multi-head self-attention (MHSA) and a feature-shuffle mechanism to enhance ECG denoising. MHSA captures long-range temporal and spatial dependencies, while feature shuffling improves representation robustness and generalization. Experimental results show that FMHA-AE achieves an average signal-to-noise ratio (SNR) improvement of 25.34 dB and a percentage root mean square difference (PRD) of 10.29%, outperforming conventional wavelet-based and deep learning baselines. These results confirm the model's ability to retain critical ECG morphology while effectively removing noise. FMHA-AE demonstrates strong potential for real-time ECG monitoring in mobile and clinical environments. This work contributes an efficient deep learning approach for noise-robust ECG analysis, supporting accurate cardiovascular assessment under motion-prone conditions.
(This article belongs to the Special Issue AI on Biomedical Signal Sensing and Processing for Health Monitoring)
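
The two figures quoted above (SNR improvement and PRD) are standard ECG-denoising metrics. As a point of reference, here is a minimal NumPy sketch of how they are typically computed; the signals and the crude moving-average "denoiser" are placeholders, not the FMHA-AE model.

```python
import numpy as np

def snr_db(clean, signal):
    """SNR in dB of `signal` relative to the noise-free reference `clean`."""
    noise = signal - clean
    return 10 * np.log10(np.sum(clean**2) / np.sum(noise**2))

def snr_improvement(clean, noisy, denoised):
    """SNR gain (dB) achieved by denoising: output SNR minus input SNR."""
    return snr_db(clean, denoised) - snr_db(clean, noisy)

def prd(clean, denoised):
    """Percentage root-mean-square difference between reference and output."""
    return 100 * np.sqrt(np.sum((clean - denoised)**2) / np.sum(clean**2))

# Toy example: a synthetic "ECG" corrupted by noise and crudely smoothed.
rng = np.random.default_rng(0)
t = np.linspace(0, 10, 3600)
clean = np.sin(2 * np.pi * 1.2 * t)            # stand-in for an ECG trace
noisy = clean + 0.4 * rng.standard_normal(t.size)
denoised = np.convolve(noisy, np.ones(15) / 15, mode="same")

print(f"SNR improvement: {snr_improvement(clean, noisy, denoised):.2f} dB")
print(f"PRD:             {prd(clean, denoised):.2f} %")
```
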
20 pages, 1656 KB  
Article
Transformer Core Loosening Diagnosis Based on Fusion Feature Extraction and CPO-Optimized CatBoost
by Yuanqi Xiao, Yipeng Yin, Jiaqi Xu and Yuxin Zhang
Processes 2025, 13(10), 3247; https://doi.org/10.3390/pr13103247 (registering DOI) - 12 Oct 2025
Abstract
Transformer reliability is crucial to grid security, with core loosening a common fault. This paper proposes a transformer core loosening fault diagnosis method based on a fusion feature extraction approach and Categorical Boosting (CatBoost) optimized by the Crested Porcupine Optimizer (CPO) algorithm. Firstly, the audio signal is decomposed into six Intrinsic Mode Function (IMF) components through Variational Mode Decomposition (VMD). This paper utilizes Gaussian membership functions to quantify the energy proportion, central frequency, and kurtosis of each IMF and constructs a fuzzy entropy discrimination function. Then, the noisy IMF components are removed through an adaptive threshold. Subsequently, the denoised signal undergoes a wavelet packet transform, in place of a short-time Fourier transform, to compute optimized Mel-frequency cepstral coefficients (WPT-MFCC), combining time-domain statistical features and frequency-band energy distribution to form a 24-dimensional fusion feature. Finally, the CatBoost algorithm is employed to validate the effects of different feature schemes. The CPO is introduced to optimize its iteration number, learning rate, tree depth, and random strength parameters, thereby enhancing overall performance. In experimental testing, the CPO-optimized CatBoost model achieved 99.0196% fault recognition accuracy, 15% higher than the standard CatBoost, and accuracy exceeded 90% even under extreme 0 dB noise. This method makes fault diagnosis more accurate and reliable.
(This article belongs to the Section AI-Enabled Process Engineering)
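
The abstract describes scoring each IMF by its energy proportion, central frequency, and kurtosis before adaptive thresholding. The sketch below shows one generic way to compute those three descriptors for an array of IMFs; the decomposition itself, the Gaussian membership functions, the fuzzy entropy screening, and the CPO-tuned CatBoost stage are all omitted, and the placeholder data are random.

```python
import numpy as np
from scipy.stats import kurtosis

def imf_descriptors(imfs, fs):
    """Energy proportion, spectral-centroid frequency, and kurtosis per IMF.

    `imfs` is an (n_imfs, n_samples) array, e.g. the output of a VMD/EMD
    routine; `fs` is the sampling rate in Hz.
    """
    energies = np.sum(imfs**2, axis=1)
    energy_ratio = energies / energies.sum()

    freqs = np.fft.rfftfreq(imfs.shape[1], d=1.0 / fs)
    spectra = np.abs(np.fft.rfft(imfs, axis=1))**2
    central_freq = (spectra @ freqs) / spectra.sum(axis=1)   # spectral centroid

    kurt = kurtosis(imfs, axis=1, fisher=False)
    return np.column_stack([energy_ratio, central_freq, kurt])

# Example with random placeholder "IMFs" (6 components, 1 s at 20 kHz).
rng = np.random.default_rng(1)
imfs = rng.standard_normal((6, 20_000))
print(imf_descriptors(imfs, fs=20_000))
```
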
25 pages, 4301 KB  
Article
Diagnosing Hydraulic Directional Valve Spool Stick Faults Enabled by Hybridized Intelligent Algorithms
by Zicheng Wang, Binbin Qiu, Chunhua Feng, Weidong Li and Xin Lu
Appl. Sci. 2025, 15(20), 10937; https://doi.org/10.3390/app152010937 - 11 Oct 2025
Viewed by 35
Abstract
The hydraulic directional valve is a fundamental component of a hydraulic system. Severe operating environments can cause undesirable faults, with spool sticking being of particular concern: it degrades the overall performance of the operating system and can even lead to failure. To address this issue, this study presents a hybrid intelligent algorithm-based diagnostic approach for the hydraulic directional valve spool stick fault to facilitate timely industrial inspection and maintenance. Firstly, the monitoring signals of hydraulic directional valves are denoised using wavelet packet denoising (WPD). Then, the denoised signals are decomposed via sparrow search algorithm (SSA)-optimized variational mode decomposition (VMD) to obtain a typical fault feature vector. Finally, a combined convolutional neural network (CNN) and long short-term memory (LSTM) model is employed to diagnose the valve spool stick fault. The results indicate that the proposed approach reduces the signal processing time by 56.60%. The diagnostic accuracy is 97.01% and 96.24% for sensors located at different positions, and the accuracy of the fusion sensor group is 99.55%. These fault diagnostic performances provide a basis for further research into hydraulic directional valve spool stick faults and are applicable to other hydraulic equipment fault diagnosis applications.
(This article belongs to the Section Computing and Artificial Intelligence)
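
For readers unfamiliar with the first step of the pipeline, the following is a minimal wavelet packet denoising sketch using PyWavelets. The wavelet, decomposition level, and universal-threshold heuristic are illustrative assumptions, not the settings used in the paper.

```python
import numpy as np
import pywt

def wpd_denoise(signal, wavelet="db4", level=3):
    """Generic wavelet-packet denoising: soft-threshold every terminal node."""
    wp = pywt.WaveletPacket(data=signal, wavelet=wavelet,
                            mode="symmetric", maxlevel=level)
    # Universal threshold estimated from the median absolute deviation of
    # the finest detail coefficients (a common heuristic, not the paper's).
    detail = pywt.wavedec(signal, wavelet, level=1)[-1]
    sigma = np.median(np.abs(detail)) / 0.6745
    thr = sigma * np.sqrt(2 * np.log(len(signal)))

    for node in wp.get_level(level, order="natural"):
        node.data = pywt.threshold(node.data, thr, mode="soft")
    return wp.reconstruct(update=False)[: len(signal)]

# Placeholder sensor trace: a tone buried in noise.
rng = np.random.default_rng(2)
t = np.linspace(0, 1, 4096)
raw = np.sin(2 * np.pi * 50 * t) + 0.5 * rng.standard_normal(t.size)
clean = wpd_denoise(raw)
```
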
17 pages, 2165 KB  
Article
Seizure Type Classification Based on Hybrid Feature Engineering and Mutual Information Analysis Using Electroencephalogram
by Yao Miao
Entropy 2025, 27(10), 1057; https://doi.org/10.3390/e27101057 - 11 Oct 2025
Viewed by 44
Abstract
Epilepsy has diverse seizure types that challenge diagnosis and treatment, requiring automated and accurate classification to improve patient outcomes. Traditional electroencephalogram (EEG)-based diagnosis relies on manual interpretation, which is subjective and inefficient, particularly for multi-class differentiation in imbalanced datasets. This study aims to develop a hybrid framework for automated multi-class seizure type classification using segment-wise EEG processing and multi-band feature engineering to enhance precision and address data challenges. EEG signals from the TUSZ dataset were segmented into 1-s windows with 0.5-s overlaps, followed by the extraction of multi-band features, including statistical measures, sample entropy, wavelet energies, Hurst exponent, and Hjorth parameters. The mutual information (MI) approach was employed to select the optimal features, and seven machine learning models (SVM, KNN, DT, RF, XGBoost, CatBoost, LightGBM) were evaluated via 10-fold stratified cross-validation with a class balancing strategy. The results showed the following: (1) XGBoost achieved the highest performance (accuracy: 0.8710, F1 score: 0.8721, AUC: 0.9797), with γ-band features dominating importance. (2) Confusion matrices indicated robust discrimination but revealed overlaps among focal subtypes. This framework advances seizure type classification by integrating multi-band features and the MI method, offering a scalable and interpretable tool to support clinical epilepsy diagnostics.
(This article belongs to the Section Signal and Data Analysis)
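
Two of the feature families listed (band powers and Hjorth parameters) and the MI-based ranking step can be illustrated compactly with SciPy and scikit-learn. The band edges, segment length, and random placeholder data below are assumptions; sample entropy, wavelet energies, the Hurst exponent, and the class-balancing strategy are not shown.

```python
import numpy as np
from scipy.signal import welch
from sklearn.feature_selection import mutual_info_classif

BANDS = {"delta": (0.5, 4), "theta": (4, 8), "alpha": (8, 13),
         "beta": (13, 30), "gamma": (30, 45)}          # illustrative edges

def band_powers(segment, fs):
    """Average Welch power in each canonical EEG band for one 1-s segment."""
    freqs, psd = welch(segment, fs=fs, nperseg=min(len(segment), fs))
    return [psd[(freqs >= lo) & (freqs < hi)].mean() for lo, hi in BANDS.values()]

def hjorth(segment):
    """Hjorth activity, mobility, and complexity."""
    d1 = np.diff(segment)
    d2 = np.diff(d1)
    activity = np.var(segment)
    mobility = np.sqrt(np.var(d1) / activity)
    complexity = np.sqrt(np.var(d2) / np.var(d1)) / mobility
    return [activity, mobility, complexity]

# Placeholder data: 200 one-second segments at 250 Hz with binary labels.
rng = np.random.default_rng(3)
fs, segments = 250, rng.standard_normal((200, 250))
labels = rng.integers(0, 2, 200)

X = np.array([band_powers(s, fs) + hjorth(s) for s in segments])
mi = mutual_info_classif(X, labels, random_state=0)
print("MI ranking (best first):", np.argsort(mi)[::-1])
```
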
24 pages, 1597 KB  
Article
A Comparative Study of Electricity Sales Forecasting Models Based on Different Feature Decomposition Methods
by Shichong Chen, Yushu Zhang, Xiaoteng Ma, Xu Yang, Junyi Shi and Haoyang Ji
Energies 2025, 18(20), 5352; https://doi.org/10.3390/en18205352 (registering DOI) - 11 Oct 2025
Viewed by 38
Abstract
Accurate forecasting of electricity sales holds significant practical importance. On the one hand, it helps power companies implement and achieve their annual targets; on the other, it helps manage the balance of enterprise profits. This study was conducted in China using data from the State Grid Corporation (Henan, Fujian, and national data) obtained from the Wind database. Based on the collected data, such as electricity sales, this study addresses a limitation of the existing literature, which mostly employs a single feature decomposition method for forecasting. We simultaneously apply three decomposition techniques, namely seasonal adjustment decomposition (X13), empirical mode decomposition (EMD), and discrete wavelet transform (DWT), to decompose electricity sales into multiple components. Subsequently, we model each component using the ADL, SARIMAX, and LSTM models, aggregate the component-level forecasts, and compare electricity sales forecasting models based on the different feature decomposition methods. The findings reveal that (1) forecasting based on feature decomposition generally outperforms direct forecasting without decomposition; (2) different regions may benefit from different decomposition methods: EMD is more suitable for regions with high sales volatility, while DWT is preferable for more stable regions; and (3) among the forecasting models, ADL performs better than SARIMAX, while LSTM yields the least accurate results when combined with decomposition methods.
(This article belongs to the Section C: Energy Economics and Policy)
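
Of the three decomposition branches, the DWT one is easiest to sketch: decompose the series with pywt.wavedec and rebuild one additive component per coefficient level. The wavelet, level, and synthetic "sales" series below are illustrative; the X13 and EMD branches and the ADL/SARIMAX/LSTM forecasters are not reproduced.

```python
import numpy as np
import pywt

def dwt_components(series, wavelet="db4", level=3):
    """Split a 1-D series into `level`+1 additive components via the DWT.

    Component k keeps only the k-th coefficient array and inverts the
    transform, so the components sum back (approximately) to the input.
    """
    coeffs = pywt.wavedec(series, wavelet, level=level)
    components = []
    for k in range(len(coeffs)):
        kept = [c if i == k else np.zeros_like(c) for i, c in enumerate(coeffs)]
        components.append(pywt.waverec(kept, wavelet)[: len(series)])
    return components   # [approximation, coarsest detail, ..., finest detail]

# Placeholder "monthly electricity sales": trend + seasonality + noise.
rng = np.random.default_rng(4)
months = np.arange(120)
sales = 100 + 0.8 * months + 10 * np.sin(2 * np.pi * months / 12) \
        + rng.standard_normal(120)
parts = dwt_components(sales)
print("reconstruction error:", np.abs(sum(parts) - sales).max())
```
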
25 pages, 3977 KB  
Article
Multi-Sensor Data Fusion and Vibro-Acoustic Feature Engineering for Health Monitoring and Remaining Useful Life Prediction of Hydraulic Valves
by Xiaomin Li, Liming Zhang, Tian Tan, Xiaolong Wang, Xinwen Zhao and Yanlong Xu
Sensors 2025, 25(20), 6294; https://doi.org/10.3390/s25206294 (registering DOI) - 11 Oct 2025
Viewed by 168
Abstract
The reliability of hydraulic valves is critical for the safety and efficiency of industrial systems. While vibration and pressure sensors are widely deployed for condition monitoring, leveraging the heterogeneous data from these multi-sensor systems for accurate remaining useful life (RUL) prediction remains challenging due to noise, outliers, and inconsistent sampling rates. This study proposes a sensor data-driven framework that integrates multi-step signal preprocessing, time–frequency feature fusion, and a machine learning model to address these challenges. Specifically, raw data from vibration and pressure sensors are first harmonized through a multi-step preprocessing pipeline including Hampel filtering for impulse noise, Robust Scaler for outlier mitigation, Butterworth low-pass filtering for effective frequency band retention, and resampling to a unified rate. Subsequently, vibro-acoustic features are extracted from the preprocessed sensor signals, including Fast Fourier Transform (FFT)-based frequency domain features and Wavelet Packet Decomposition (WPD)-based time–frequency features, to comprehensively characterize the valve's degradation. A health indicator (HI) is constructed by fusing the most sensitive features. Finally, a Kernel Principal Component Analysis (KPCA)-optimized Random Forest model is developed for HI prediction, which strongly correlates with RUL. Validated on the UCI hydraulic condition monitoring dataset through 20-run Monte-Carlo cross-validation, our method achieves a root mean square error (RMSE) of 0.0319 ± 0.0090, a mean absolute error (MAE) of 0.0109 ± 0.0014, and a coefficient of determination (R²) of 0.9828 ± 0.0097, demonstrating consistent performance across different data partitions. These results confirm the framework's effectiveness in translating multi-sensor data into actionable insights for predictive maintenance, offering a viable solution for industrial health management systems.
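
The preprocessing chain named in the abstract (Hampel filtering, robust scaling, Butterworth low-pass filtering, resampling) can be sketched as follows; the window size, cutoff, target rate, and synthetic input are illustrative assumptions rather than the paper's settings, and the feature-fusion and KPCA-Random Forest stages are omitted.

```python
import numpy as np
from scipy.signal import butter, filtfilt, resample
from sklearn.preprocessing import RobustScaler

def hampel(x, window=11, n_sigmas=3.0):
    """Replace impulse outliers with the local median (rolling Hampel filter)."""
    y, half = x.copy(), window // 2
    for i in range(half, len(x) - half):
        win = x[i - half:i + half + 1]
        med = np.median(win)
        mad = 1.4826 * np.median(np.abs(win - med))
        if mad > 0 and abs(x[i] - med) > n_sigmas * mad:
            y[i] = med
    return y

def preprocess(signal, fs, cutoff_hz=500.0, target_fs=1000):
    """Hampel -> robust scaling -> Butterworth low-pass -> resampling."""
    x = hampel(signal)
    x = RobustScaler().fit_transform(x.reshape(-1, 1)).ravel()
    b, a = butter(4, cutoff_hz / (fs / 2), btype="low")
    x = filtfilt(b, a, x)
    n_out = int(round(len(x) * target_fs / fs))
    return resample(x, n_out)

rng = np.random.default_rng(5)
raw = rng.standard_normal(20_000)          # placeholder vibration trace at 4 kHz
raw[::500] += 25                           # inject impulse outliers
clean = preprocess(raw, fs=4000)
```
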
20 pages, 2793 KB  
Article
Investigating Brain Activity of Children with Autism Spectrum Disorder During STEM-Related Cognitive Tasks
by Harshith Penmetsa, Rahma Abbasi, Nagasree Yellamilli, Kimberly Winkelman, Jeff Chan, Jaejin Hwang and Kyu Taek Cho
Information 2025, 16(10), 880; https://doi.org/10.3390/info16100880 - 10 Oct 2025
Viewed by 173
Abstract
Children with Autism Spectrum Disorder (ASD) often experience cognitive difficulties that impact learning. This study explores the use of electroencephalogram data collected with the MUSE 2 headband during task-based cognitive sessions to understand how cognitive states in children with ASD change across three structured tasks: Shape Matching, Shape Sorting, and Number Matching. Following signal preprocessing using Independent Component Analysis (ICA), power across various frequency bands was extracted using the Welch method. These features were used to analyze cognitive states in children with ASD in comparison to typically developing (TD) peers. To capture dynamic changes in attention over time, the Morlet wavelet transform was applied, revealing distinct brain signal patterns. Machine learning classifiers were then developed to accurately distinguish between ASD and TD groups using the EEG data. Models included Support Vector Machine, K-Nearest Neighbors, Random Forest, an Ensemble method, and a Neural Network. Among these, the Ensemble method achieved the highest accuracy at 0.90. Feature importance analysis was conducted to identify the most influential EEG features contributing to classification performance. Based on these findings, an ASD map was generated to visually highlight the key EEG regions associated with ASD-related cognitive patterns. These findings highlight the potential of EEG-based models to capture ASD-specific neural and attentional patterns during learning, supporting their application in developing more personalized educational approaches. However, due to the limited sample size and participant heterogeneity, these findings should be considered exploratory. Future studies with larger samples are needed to validate and generalize the results.
(This article belongs to the Special Issue AI Technology-Enhanced Learning and Teaching)
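
The two signal-analysis steps mentioned (Welch band power and the Morlet wavelet transform) look roughly like this with SciPy and PyWavelets; the sampling rate, frequency grid, and single-channel placeholder trace are assumptions, and ICA cleaning and the classifiers are not shown.

```python
import numpy as np
import pywt
from scipy.signal import welch

fs = 256                                   # assumed consumer-EEG sampling rate
rng = np.random.default_rng(6)
eeg = rng.standard_normal(30 * fs)         # placeholder 30-s single-channel trace

# Welch band power (example: alpha band, 8-13 Hz).
freqs, psd = welch(eeg, fs=fs, nperseg=2 * fs)
alpha_power = psd[(freqs >= 8) & (freqs < 13)].mean()

# Morlet continuous wavelet transform -> time-frequency magnitude map.
target_freqs = np.arange(2, 41)            # 2-40 Hz
scales = pywt.central_frequency("morl") * fs / target_freqs
coeffs, cwt_freqs = pywt.cwt(eeg, scales, "morl", sampling_period=1 / fs)
tf_map = np.abs(coeffs)                    # shape: (n_freqs, n_samples)
print(alpha_power, tf_map.shape)
```
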
24 pages, 2472 KB  
Article
Beyond Radiomics Alone: Enhancing Prostate Cancer Classification with ADC Ratio in a Multicenter Benchmarking Study
by Dimitrios Samaras, Georgios Agrotis, Alexandros Vamvakas, Maria Vakalopoulou, Marianna Vlychou, Katerina Vassiou, Vasileios Tzortzis and Ioannis Tsougos
Diagnostics 2025, 15(19), 2546; https://doi.org/10.3390/diagnostics15192546 - 9 Oct 2025
Viewed by 179
Abstract
Background/Objectives: Radiomics enables extraction of quantitative imaging features to support non-invasive classification of prostate cancer (PCa). Accurate detection of clinically significant PCa (csPCa; Gleason score ≥ 3 + 4) is crucial for guiding treatment decisions. However, many studies explore limited feature selection, classifier, and harmonization combinations, and lack external validation. We aimed to systematically benchmark modeling pipelines and evaluate whether combining radiomics with the lesion-to-normal ADC ratio improves classification robustness and generalizability in multicenter datasets. Methods: Radiomic features were extracted from ADC maps using IBSI-compliant pipelines. Over 100 model configurations were tested, combining eight feature selection methods, fifteen classifiers, and two harmonization strategies across two scenarios: (1) repeated cross-validation on a multicenter dataset and (2) nested cross-validation with external testing on the PROSTATEx dataset. The ADC ratio was defined as the mean lesion ADC divided by the contralateral normal-tissue ADC, obtained by placing two identical ROIs, one on each side, enabling patient-specific normalization. Results: In Scenario 1, the best model combined radiomics, ADC ratio, LASSO, and Naïve Bayes (AUC-PR = 0.844 ± 0.040). In Scenario 2, the top-performing configuration used Recursive Feature Elimination (RFE) and Boosted GLM (a generalized linear model trained with boosting), generalizing well to the external set (AUC-PR = 0.722; F1 = 0.741). ComBat harmonization improved calibration but not external discrimination. Frequently selected features were texture-based (GLCM, GLSZM) from wavelet- and LoG-filtered ADC maps. Conclusions: Integrating radiomics with the ADC ratio improves csPCa classification and enhances generalizability, supporting its potential role as a robust, clinically interpretable imaging biomarker in multicenter MRI studies.
(This article belongs to the Special Issue AI in Radiology and Nuclear Medicine: Challenges and Opportunities)
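
The ADC ratio defined in the abstract, mean lesion ADC divided by the mean ADC of a mirrored normal-tissue ROI, reduces to a one-line computation once the two ROI masks exist; the synthetic ADC map and hand-placed masks below are purely illustrative.

```python
import numpy as np

def adc_ratio(adc_map, lesion_mask, contralateral_mask):
    """Mean lesion ADC divided by mean ADC in the mirrored normal-tissue ROI."""
    return adc_map[lesion_mask].mean() / adc_map[contralateral_mask].mean()

# Placeholder ADC map (units of 1e-6 mm^2/s) with two square ROIs.
rng = np.random.default_rng(7)
adc = rng.normal(1500, 100, size=(128, 128))
lesion = np.zeros_like(adc, dtype=bool); lesion[40:48, 40:48] = True
normal = np.zeros_like(adc, dtype=bool); normal[40:48, 80:88] = True
adc[lesion] = rng.normal(800, 50, size=lesion.sum())   # simulated restricted diffusion

print(f"ADC ratio: {adc_ratio(adc, lesion, normal):.2f}")   # below 1 for the simulated lesion
```
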
20 pages, 4005 KB  
Article
EEG Complexity Analysis of Psychogenic Non-Epileptic and Epileptic Seizures Using Entropy and Machine Learning
by Hesam Shokouh Alaei, Samaneh Kouchaki, Mahinda Yogarajah and Daniel Abasolo
Entropy 2025, 27(10), 1044; https://doi.org/10.3390/e27101044 - 7 Oct 2025
Viewed by 295
Abstract
Psychogenic non-epileptic seizures (PNES) are often misdiagnosed as epileptic seizures (ES), leading to inappropriate treatment and delayed psychological care. To address this challenge, we analysed electroencephalogram (EEG) data from 74 patients (46 PNES, 28 ES) using one-minute preictal and interictal recordings per subject. Nine entropy measures (Sample, Fuzzy, Permutation, Dispersion, Conditional, Phase, Spectral, Rényi, and Wavelet entropy) were evaluated individually to classify PNES from ES using k-nearest neighbours, Naïve Bayes, linear discriminant analysis, logistic regression, support vector machine, random forest, multilayer perceptron, and XGBoost within a leave-one-subject-out cross-validation framework. In addition, a dynamic state, defined as the entropy difference between interictal and preictal periods, was examined. Sample, Fuzzy, Conditional, and Dispersion entropy were higher in PNES than in ES during interictal recordings (not significant), but significantly lower in the preictal (p < 0.05) and dynamic states (p < 0.01). Spatial mapping and permutation-based importance analyses highlighted O1, O2, T5, F7, and Pz as key discriminative channels. Classification performance peaked in the dynamic state, with Fuzzy entropy and support vector machine achieving the best results (balanced accuracy = 72.4%, F1 score = 77.8%, sensitivity = 74.5%, specificity = 70.4%). These results demonstrate the potential of entropy features for differentiating PNES from ES.
(This article belongs to the Special Issue Entropy Analysis of ECG and EEG Signals)
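
The "dynamic state" feature (interictal minus preictal entropy per channel) and the leave-one-subject-out evaluation can be sketched with scikit-learn as below. The entropy values are random placeholders standing in for the Fuzzy entropy used in the study, and the SVM settings are assumptions.

```python
import numpy as np
from sklearn.model_selection import LeaveOneGroupOut, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Placeholder per-subject entropy values: shape (n_subjects, n_channels).
# In the study these would be Fuzzy entropy per EEG channel; random here.
rng = np.random.default_rng(8)
n_subjects, n_channels = 74, 19
interictal = rng.normal(1.0, 0.1, (n_subjects, n_channels))
preictal = rng.normal(0.9, 0.1, (n_subjects, n_channels))
labels = np.r_[np.ones(46), np.zeros(28)]        # 46 PNES vs 28 ES

# "Dynamic state" feature: interictal minus preictal entropy, per channel.
X = interictal - preictal
groups = np.arange(n_subjects)                    # one group per subject -> LOSO

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", class_weight="balanced"))
scores = cross_val_score(clf, X, labels, groups=groups,
                         cv=LeaveOneGroupOut(), scoring="accuracy")
print("LOSO accuracy:", scores.mean())
```
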
29 pages, 4573 KB  
Article
LCW-YOLO: A Lightweight Multi-Scale Object Detection Method Based on YOLOv11 and Its Performance Evaluation in Complex Natural Scenes
by Gang Li and Juelong Fang
Sensors 2025, 25(19), 6209; https://doi.org/10.3390/s25196209 - 7 Oct 2025
Viewed by 502
Abstract
Accurate object detection is fundamental to computer vision, yet detecting small targets in complex backgrounds remains challenging due to feature loss and limited model efficiency. To address this, we propose LCW-YOLO, a lightweight detection framework that integrates three innovations: Wavelet Pooling, a CGBlock-enhanced C3K2 structure, and an improved LDHead detection head. The Wavelet Pooling strategy employs Haar-based multi-frequency reconstruction to preserve fine-grained details while mitigating noise sensitivity. CGBlock introduces dynamic channel interactions within C3K2, facilitating the fusion of shallow visual cues with deep semantic features without excessive computational overhead. LDHead incorporates classification and localization functions, thereby improving target recognition accuracy and spatial precision. Extensive experiments across multiple public datasets demonstrate that LCW-YOLO outperforms mainstream detectors in both accuracy and inference speed, with notable advantages in small-object, sparse, and cluttered scenarios. Here we show that the combination of multi-frequency feature preservation and efficient feature fusion enables stronger representations under complex conditions, advancing the design of resource-efficient detection models for safety-critical and real-time applications.
(This article belongs to the Section Remote Sensors)
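
The idea behind Haar-based Wavelet Pooling, halving spatial resolution while retaining all four frequency sub-bands instead of discarding detail as max pooling does, can be illustrated with a single pywt.dwt2 call; this is only the transform on a placeholder feature map, not the trained LCW-YOLO module.

```python
import numpy as np
import pywt

def haar_wavelet_pool(feature_map):
    """2-D Haar DWT as a pooling step: halve H and W, keep all four sub-bands.

    Input  : (H, W) feature map.
    Output : (4, H//2, W//2) array stacking LL, LH, HL, HH, so no frequency
             content is discarded (unlike max/average pooling).
    """
    ll, (lh, hl, hh) = pywt.dwt2(feature_map, "haar")
    return np.stack([ll, lh, hl, hh])

rng = np.random.default_rng(9)
fmap = rng.standard_normal((64, 64))            # placeholder activation map
pooled = haar_wavelet_pool(fmap)
print(pooled.shape)                             # (4, 32, 32)
```
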
26 pages, 1958 KB  
Article
Real-Time Heartbeat Classification on Distributed Edge Devices: A Performance and Resource Utilization Study
by Eko Sakti Pramukantoro, Kasyful Amron, Putri Annisa Kamila and Viera Wardhani
Sensors 2025, 25(19), 6116; https://doi.org/10.3390/s25196116 - 3 Oct 2025
Viewed by 185
Abstract
Early detection is crucial for preventing heart disease. Advances in health technology, particularly wearable devices for automated heartbeat detection and machine learning, can enhance early diagnosis efforts. However, previous studies on heartbeat classification inference systems have primarily relied on batch processing, which introduces delays. To address this limitation, a real-time system utilizing stream processing with a distributed computing architecture is needed for continuous, immediate, and scalable data analysis. Real-time ECG inference is particularly crucial for immediate heartbeat classification, as human heartbeats occur with durations between 0.6 and 1 s, requiring inference times significantly below this threshold for effective real-time processing. This study implements a real-time heartbeat classification inference system using distributed stream processing with LSTM-512, LSTM-256, and FCN models, incorporating RR-interval, morphology, and wavelet features. The system is developed as a distributed web-based application using the Flask framework with distributed backend processing, integrating Polar H10 sensors via Bluetooth and Web Bluetooth API in JavaScript. The implementation consists of a frontend interface, distributed backend services, and coordinated inference processing. The frontend handles sensor pairing and manages real-time streaming for continuous ECG data transmission. The backend processes incoming ECG streams, performing preprocessing and model inference. Performance evaluations demonstrate that LSTM-based heartbeat classification can achieve real-time performance on distributed edge devices by carefully selecting features and models. Wavelet-based features with an LSTM-Sequential architecture deliver optimal results, achieving 99% accuracy with balanced precision-recall metrics and an inference time of 0.12 s, well below the 0.6–1 s heartbeat duration requirement. Resource analysis on Jetson Orin devices reveals that Wavelet-FCN models offer exceptional efficiency with 24.75% CPU usage, minimal GPU utilization (0.34%), and 293 MB memory consumption. The distributed architecture's dynamic load balancing ensures resilience under varying workloads, enabling effective horizontal scaling.
(This article belongs to the Special Issue Advanced Sensors for Human Health Management)
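
Two of the three feature families mentioned (RR intervals and per-beat wavelet sub-band energies) are simple to sketch; the Polar H10 sampling rate, window length, wavelet, and fake R-peak locations below are assumptions, and the morphology features and LSTM/FCN models are not reproduced.

```python
import numpy as np
import pywt

def rr_intervals(r_peak_samples, fs):
    """RR intervals in seconds from R-peak sample indices."""
    return np.diff(r_peak_samples) / fs

def wavelet_energies(beat_window, wavelet="db4", level=3):
    """Relative energy of each DWT sub-band for one beat-centred window."""
    coeffs = pywt.wavedec(beat_window, wavelet, level=level)
    energies = np.array([np.sum(c**2) for c in coeffs])
    return energies / energies.sum()

fs = 130                                        # assumed Polar H10 ECG rate
rng = np.random.default_rng(10)
ecg = rng.standard_normal(10 * fs)              # placeholder 10-s strip
r_peaks = np.arange(20, 10 * fs, int(0.8 * fs)) # fake R peaks every 0.8 s

print("RR (s):", rr_intervals(r_peaks, fs)[:3])
beat = ecg[r_peaks[1] - 45: r_peaks[1] + 45]    # ~0.7 s window around a beat
print("wavelet energies:", wavelet_energies(beat))
```
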
23 pages, 4303 KB  
Article
LMCSleepNet: A Lightweight Multi-Channel Sleep Staging Model Based on Wavelet Transform and Multi-Scale Convolutions
by Jiayi Yang, Yuanyuan Chen, Tingting Yu and Ying Zhang
Sensors 2025, 25(19), 6065; https://doi.org/10.3390/s25196065 - 2 Oct 2025
Viewed by 212
Abstract
Sleep staging is a crucial indicator for assessing sleep quality, which contributes to sleep monitoring and the diagnosis of sleep disorders. Although existing sleep staging methods achieve high classification performance, two major challenges remain: (1) the ability to effectively extract salient features from multi-channel sleep data remains limited; (2) excessive model parameters hinder efficiency improvements. To address these challenges, this work proposes a lightweight multi-channel sleep staging network (LMCSleepNet). LMCSleepNet is composed of four modules. The first module enhances frequency domain features through continuous wavelet transform. The second module extracts time–frequency features using multi-scale convolutions. The third module optimizes ResNet18 with depthwise separable convolutions to reduce parameters. The fourth module improves spatial correlation using the Convolutional Block Attention Module (CBAM). On the public datasets SleepEDF-20 and SleepEDF-78, LMCSleepNet achieved classification accuracies of 88.2% (κ = 0.84, MF1 = 82.4%) and 84.1% (κ = 0.77, MF1 = 77.7%), respectively, while reducing model parameters to 1.49 M. Furthermore, experiments validated the influence of temporal sampling points in wavelet time–frequency maps and of multi-scale dilated convolution module fusion methods on sleep classification performance (accuracy, Cohen's kappa, and macro-average F1-score). LMCSleepNet is an efficient lightweight model for extracting and integrating multimodal features from multichannel Polysomnography (PSG) data, which facilitates its application in resource-constrained scenarios.
(This article belongs to the Section Biomedical Sensors)
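
The parameter-saving trick used in the third module, replacing standard convolutions with depthwise separable ones, can be illustrated in a few lines of PyTorch; the channel counts are arbitrary and this is not the LMCSleepNet implementation.

```python
import torch
from torch import nn

class DepthwiseSeparableConv(nn.Module):
    """Depthwise (per-channel) conv followed by a 1x1 pointwise conv.

    Replaces a standard k x k convolution to cut parameters roughly by a
    factor of k*k, the idea behind slimming ResNet18 in the abstract above.
    """
    def __init__(self, in_ch, out_ch, kernel_size=3, stride=1):
        super().__init__()
        self.depthwise = nn.Conv2d(in_ch, in_ch, kernel_size, stride,
                                   padding=kernel_size // 2, groups=in_ch)
        self.pointwise = nn.Conv2d(in_ch, out_ch, kernel_size=1)

    def forward(self, x):
        return self.pointwise(self.depthwise(x))

def n_params(m):
    return sum(p.numel() for p in m.parameters())

standard = nn.Conv2d(64, 128, kernel_size=3, padding=1)
separable = DepthwiseSeparableConv(64, 128)
print(n_params(standard), "vs", n_params(separable))   # 73,856 vs 8,960

x = torch.randn(1, 64, 32, 32)
print(separable(x).shape)                              # torch.Size([1, 128, 32, 32])
```
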
23 pages, 17632 KB  
Article
Multipath Identification and Mitigation for Enhanced GNSS Positioning in Urban Environments
by Qianxia Li, Xue Hou, Yuanbin Ye, Wenfeng Zhang, Qingsong Li and Yuezhen Cai
Sensors 2025, 25(19), 6061; https://doi.org/10.3390/s25196061 - 2 Oct 2025
Viewed by 343
Abstract
With the increasing demand for accurate and robust GNSS positioning for location-based services (LBS) in urban regions, effects prevalent in metropolitan areas, such as multipath reflections and various interferences, have become persistent challenges. Consequently, developing effective strategies to address these sophisticated influences has become both a primary research focus and a shared priority. In this paper, the authors explore an approach to identify and mitigate the drawbacks arising from multipath effects in urban positioning. Unlike conventional approaches that build complex models, an adaptive, data-driven methodology is proposed to identify the fingerprints of multipath in GNSS observations. This approach utilizes the Fourier transform (FT) to examine code multipath and other error sources in terms of frequency, as represented by the power spectrum. Wavelet decomposition and signal spectrum methods are subsequently applied to seek traces of code multipath in the multilayer decompositions. Based on the exhibited multipath features, the impacts of multipath in GNSS observations are detected and mitigated in the reconstructed observations. The proposed method is validated for both static and dynamic positioning scenarios, demonstrating seamless integration with existing positioning models. Its feasibility has been verified through a series of experiments and tests in urban environments using navigation terminals and smartphones.
(This article belongs to the Special Issue Advances in GNSS Signal Processing and Navigation—Second Edition)
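
The "decompose, locate the multipath-dominated levels, reconstruct" idea can be sketched generically with PyWavelets as below. The wavelet, decomposition depth, synthetic residual series, and choice of levels to drop are all assumptions; in the paper those levels are identified from the power spectrum rather than fixed by hand.

```python
import numpy as np
import pywt

def mitigate_levels(observable, drop_levels, wavelet="db6", level=6):
    """Zero selected detail levels of a DWT and reconstruct the observable.

    `drop_levels` indexes the wavedec output (1 = coarsest detail band) and is
    assumed to have been chosen from a prior spectral inspection.
    """
    coeffs = pywt.wavedec(observable, wavelet, level=level)
    for k in drop_levels:
        coeffs[k] = np.zeros_like(coeffs[k])
    return pywt.waverec(coeffs, wavelet)[: len(observable)]

# Placeholder code-residual series: a slow multipath-like oscillation
# (period ~120 s) plus white receiver noise, sampled at 1 Hz.
rng = np.random.default_rng(11)
t = np.arange(3600)
residual = 0.5 * np.sin(2 * np.pi * t / 120) + 0.2 * rng.standard_normal(t.size)
cleaned = mitigate_levels(residual, drop_levels=[1, 2])   # coarsest two detail bands (illustrative)
```
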
15 pages, 1392 KB  
Article
Optimal Source Selection for Distributed Bearing Fault Classification Using Wavelet Transform and Machine Learning Algorithms
by Ramin Rajabioun and Özkan Atan
Appl. Sci. 2025, 15(19), 10631; https://doi.org/10.3390/app151910631 - 1 Oct 2025
Viewed by 206
Abstract
Early and accurate detection of distributed bearing faults is essential to prevent equipment failures and reduce downtime in industrial environments. This study explores the optimal selection of input signal sources for high-accuracy distributed fault classification, employing wavelet transform and machine learning algorithms. The primary contribution of this work is to demonstrate that robust distributed bearing fault diagnosis can be achieved through optimal sensor fusion and wavelet-based feature engineering, without the need for deep learning or high-dimensional inputs. This approach provides interpretable, computationally efficient, and generalizable fault classification, setting it apart from most existing studies that rely on larger models or more extensive data. All experiments were conducted in a controlled laboratory environment across multiple loads and speeds. A comprehensive dataset, including three-axis vibration, stray magnetic flux, and two-phase current signals, was used to diagnose six distinct bearing fault conditions. The wavelet transform is applied to extract frequency-domain features, capturing intricate fault signatures. To identify the most effective input signal combinations, we systematically evaluated Random Forest, XGBoost, and Support Vector Machine (SVM) models. The analysis reveals that specific signal pairs significantly enhance classification accuracy. Notably, combining vibration signals with stray magnetic flux consistently achieved the highest performance across models, with Random Forest reaching perfect test accuracy (100%) and SVM showing robust results. These findings underscore the importance of optimal source selection and wavelet-transformed features for improving machine learning model performance in bearing fault classification tasks. While the results are promising, validation in real-world industrial settings is needed to fully assess the method's practical reliability and impact on predictive maintenance systems.
(This article belongs to the Section Electrical, Electronics and Communications Engineering)
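
The overall shape of the pipeline, wavelet sub-band energies per sensor channel, fusion by concatenation, and a Random Forest classifier, can be sketched as follows on synthetic data; the channel pairing, wavelet, level, and dataset dimensions are illustrative assumptions, not the study's configuration.

```python
import numpy as np
import pywt
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

def band_energies(signal, wavelet="db4", level=4):
    """Relative DWT sub-band energies for one sensor channel of one sample."""
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    e = np.array([np.sum(c**2) for c in coeffs])
    return e / e.sum()

def fuse_features(sample):
    """Concatenate wavelet features from selected channels (vibration + flux)."""
    return np.concatenate([band_energies(ch) for ch in sample])

# Placeholder dataset: 120 samples x 2 channels (vibration-X, stray flux)
# x 2048 points, with 6 bearing-condition labels.
rng = np.random.default_rng(12)
samples = rng.standard_normal((120, 2, 2048))
labels = rng.integers(0, 6, 120)

X = np.array([fuse_features(s) for s in samples])
clf = RandomForestClassifier(n_estimators=200, random_state=0)
print("CV accuracy:", cross_val_score(clf, X, labels, cv=5).mean())
```
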
17 pages, 930 KB  
Article
Investigation of the MobileNetV2 Optimal Feature Extraction Layer for EEG-Based Dementia Severity Classification: A Comparative Study
by Noor Kamal Al-Qazzaz, Sawal Hamid Bin Mohd Ali and Siti Anom Ahmad
Algorithms 2025, 18(10), 620; https://doi.org/10.3390/a18100620 - 1 Oct 2025
Viewed by 168
Abstract
Diagnosing dementia and recognizing substantial cognitive decline are challenging tasks. Thus, the objective of this study was to classify electroencephalograms (EEGs) recorded during a working memory task in 15 patients with mild cognitive impairment (MCogImp), 5 patients with vascular dementia (VasD), and 15 healthy controls (NC). Before creating spectrogram images from the EEG dataset, the data were preprocessed using conventional filters and the discrete wavelet transform. The convolutional neural network (CNN) MobileNetV2 was employed in our investigation to identify features and assess the severity of dementia. Features were extracted from five layers of the MobileNetV2 CNN architecture: the convolutional ('Conv-1'), batch normalization ('Conv-1-bn'), clipped ReLU ('out-relu'), 2D Global Average Pooling ('global-average-pooling2d-1'), and fully connected ('Logits') layers. This was carried out to identify the most effective feature layer for assessing dementia severity from EEGs. Features extracted from the five MobileNetV2 layers were classified using decision tree (DT) and k-nearest neighbor (KNN) machine learning (ML) classifiers, in conjunction with the MobileNetV2 deep learning (DL) network. The study's findings show that the DT classifier performed best using features derived from MobileNetV2's 2D Global Average Pooling (global-average-pooling2d-1) layer, achieving an accuracy of 95.9%, followed by the features of the fully connected (Logits) layer at 95.3%. These findings endorse the utilization of deep learning algorithms, offering a viable approach for improving early dementia identification with high precision and facilitating differentiation among NC individuals, VasD patients, and MCogImp patients.
(This article belongs to the Special Issue Machine Learning in Medical Signal and Image Processing (3rd Edition))
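
Extracting features at a MobileNetV2 layer and handing them to a classical classifier can be sketched with torchvision and scikit-learn as below; pretrained ImageNet weights are downloaded on first use, the pooled `model.features` output merely approximates the paper's named layers, and the random "spectrogram" batch and KNN settings are placeholders.

```python
import torch
from torchvision import models
from sklearn.neighbors import KNeighborsClassifier

# Pretrained MobileNetV2 backbone; `model.features` ends just before the
# classifier, so averaging its output mimics a global-average-pooling layer.
model = models.mobilenet_v2(weights=models.MobileNet_V2_Weights.DEFAULT)
model.eval()

@torch.no_grad()
def gap_features(images):
    """(N, 3, 224, 224) image batch -> (N, 1280) pooled feature vectors."""
    fmap = model.features(images)            # (N, 1280, 7, 7)
    return fmap.mean(dim=(2, 3)).numpy()     # global average pooling

# Placeholder "spectrogram images" and 3-class labels (NC / VasD / MCogImp).
images = torch.randn(12, 3, 224, 224)
labels = [0, 1, 2] * 4

knn = KNeighborsClassifier(n_neighbors=3)
knn.fit(gap_features(images), labels)
print(knn.predict(gap_features(images[:3])))
```
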