Search Results (4,307)

Search Parameters:
Keywords = signal-detection algorithm

25 pages, 4741 KB  
Article
An Edge-Enabled Predictive Maintenance Approach Based on Anomaly-Driven Health Indicators for Industrial Production Systems
by Bouzidi Lamdjad and Adem Chaiter
Algorithms 2026, 19(4), 286; https://doi.org/10.3390/a19040286 - 8 Apr 2026
Abstract
This study develops a data-driven framework for predictive maintenance and prognostic health management in industrial systems using edge-enabled predictive algorithms. The objective is to support early identification of abnormal operating conditions and improve maintenance decision making under real production environments. The proposed approach combines edge-level monitoring, anomaly detection, and predictive modeling to analyze operational signals and estimate system health conditions from high-frequency industrial data. Empirical validation was conducted using operational datasets collected from two industrial production facilities between 2024 and 2025. The model evaluates patterns associated with operational instability and degradation-related anomalies and translates them into interpretable health indicators that can support proactive intervention. The empirical results show strong predictive performance, with R2 reaching 0.989, a mean absolute percentage error of 3.67%, and a root mean square error of 0.79. In addition, the mitigation of early anomaly signals was associated with an observed improvement of approximately 3.99% in system stability. Unlike many existing studies that treat anomaly detection, predictive modeling, and prognostic analysis as separate tasks, the proposed framework connects these stages within a unified analytical structure designed for deployment in industrial environments. The findings indicate that edge-generated anomaly signals can provide meaningful early information about potential system deterioration and can assist in planning timely maintenance actions even when explicit failure labels are limited. The study contributes to the development of scalable predictive maintenance solutions that integrate artificial intelligence with edge-based industrial monitoring systems. Full article
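The abstract reports R², MAPE, and RMSE as its headline metrics. As a hedged illustration (not the paper's code), the three can be computed from a predicted health-indicator series like this; the toy values below are invented for demonstration:

```python
import numpy as np

def regression_metrics(y_true, y_pred):
    """R^2, MAPE (%), and RMSE -- the three metrics the abstract reports."""
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    ss_res = np.sum((y_true - y_pred) ** 2)
    ss_tot = np.sum((y_true - y_true.mean()) ** 2)
    r2 = 1.0 - ss_res / ss_tot
    mape = 100.0 * np.mean(np.abs((y_true - y_pred) / y_true))
    rmse = np.sqrt(np.mean((y_true - y_pred) ** 2))
    return r2, mape, rmse

# toy health-indicator series (illustrative values, not the paper's data)
y_true = [10.0, 12.0, 14.0, 16.0]
y_pred = [10.1, 11.8, 14.2, 15.9]
r2, mape, rmse = regression_metrics(y_true, y_pred)
```

Note that MAPE is undefined when a true value is zero, which is why it suits strictly positive health indicators like those evaluated here.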

26 pages, 7110 KB  
Article
Research on an Automatic Detection Method for Response Keypoints of Three-Dimensional Targets in Directional Borehole Radar Profiles
by Xiaosong Tang, Maoxuan Xu, Feng Yang, Jialin Liu, Suping Peng and Xu Qiao
Remote Sens. 2026, 18(7), 1102; https://doi.org/10.3390/rs18071102 - 7 Apr 2026
Abstract
During the interpretation of Borehole Radar (BHR) B-scan profiles, the accurate determination of the azimuth of geological targets in three-dimensional space is a critical issue for achieving precise anomaly localization and spatial structure inversion. However, existing directional BHR anomaly localization methods exhibit limited intelligence, insufficient adaptability to multi-site data, and weak generalization capability, rendering them inadequate for engineering applications under complex geological conditions. To address these challenges, a robust deep learning model, termed BSS-Pose-BHR, is developed based on YOLOv11n-pose for keypoint detection in directional BHR profiles. The model incorporates three key optimizations: Bi-Level Routing Attention (BRA) replaces Multi-Head Self-Attention (MHSA) in the backbone to improve computational efficiency; Conv_SAMWS enhances keypoint-related feature weighting in the backbone and neck; and Spatial and Channel Reconstruction Convolution (SCConv) is integrated into the detection head to reduce redundancy and strengthen local feature extraction, thereby improving suitability for keypoint detection tasks. In addition, a three-dimensional electromagnetic model of limestone containing a certain density of clay particles is established to construct a simulation dataset. On the simulated test set, compared with current mainstream deep learning approaches and conventional directional borehole radar anomaly localization algorithms, BSS-Pose-BHR achieves superior performance, with an mAP50(B) of 0.9686, an mAP50–95(B) of 0.7712, an mAP50(P) of 0.9951, and an mAP50–95(P) of 0.9952. Ablation experiments demonstrate that each proposed module contributes significantly to performance improvement. Compared with the baseline, BSS-Pose-BHR improves mAP50(B) by 5.39% and mAP50(P) by 0.86%, while increasing model weight by only 1.05 MB, thereby achieving a reasonable trade-off between detection accuracy and complexity. Furthermore, indoor physical model experiments validate the effectiveness of the method on measured data. Robustness experiments under different Peak Signal-to-Noise Ratio (PSNR) conditions and varying missing-trace rates indicate that BSS-Pose-BHR maintains high detection accuracy under moderate noise and data loss, demonstrating strong engineering applicability and practical value. Full article

9 pages, 640 KB  
Communication
Noninvasive Measurement of Infant Respiration During Sleep: A Validation Study
by Melissa N. Horger, Maristella Lucchini, Shambhavi Thakur, Rebecca M. C. Spencer and Natalie Barnett
Sensors 2026, 26(7), 2275; https://doi.org/10.3390/s26072275 - 7 Apr 2026
Abstract
Infant respiration is a physiological marker of health and wellbeing that can provide insight into sleep and wake patterns. Technological innovation presents opportunities to enhance measurements of physiological signals, which improves ecological validity and participant experiences. This is particularly true in the context of studying infant sleep, as it can be disrupted by changes in the environment and the physical sensation of unfamiliar or uncomfortable sensors. The goal of this study was to examine if a commercially available video baby monitor (Nanit system) can accurately estimate respiration during a nap relative to a commonly used cardiorespiratory sensor (Isansys Lifetouch sensor). Thirty-three infants (M = 9.7 months; range = 1–22 months) took a nap while wearing the Lifetouch sensor and Nanit Breathing Band. Infants slept in view of the Nanit camera. A computer vision algorithm applied to the video detected movement of the patterns on the fabric band worn around the infant’s torso to determine respiratory rates. The results showed strong consistency between the devices. More than 95% of the minute-by-minute respiration data fell within the limits of agreement, with little bias. Agreement was not influenced by age or nap duration, suggesting the Nanit Breathing Band provides a valid measure of respiration across infancy. Full article
(This article belongs to the Collection Biomedical Imaging and Sensing)
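The agreement analysis described above (bias plus the percentage of minute-by-minute points falling within the limits of agreement) is the standard Bland-Altman method. A minimal sketch, with invented breaths-per-minute values rather than the study's data:

```python
import numpy as np

def bland_altman(ref, dev):
    """Bias and 95% limits of agreement (bias +/- 1.96 * SD of the differences)."""
    diffs = np.asarray(dev, float) - np.asarray(ref, float)
    bias = diffs.mean()
    sd = diffs.std(ddof=1)          # sample standard deviation
    return bias, bias - 1.96 * sd, bias + 1.96 * sd

# illustrative respiration-rate pairs: reference sensor vs. camera estimate
ref = [30.0, 32.0, 28.0, 31.0, 29.0]
cam = [30.5, 31.5, 28.5, 31.0, 29.5]
bias, lo, hi = bland_altman(ref, cam)
```

One would then count the fraction of differences inside `[lo, hi]` to reproduce the ">95% within the limits of agreement" style of result reported in the abstract.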

25 pages, 3586 KB  
Article
A Classification Algorithm of UAV and Bird Target Based on L/K Dual-Band Micro-Doppler and Mamba
by Tao Zhang and Xiaoru Song
Drones 2026, 10(4), 265; https://doi.org/10.3390/drones10040265 - 6 Apr 2026
Abstract
To address the challenge of accurately distinguishing UAVs and birds in low-altitude detection, this paper proposes a classification algorithm for UAVs and birds based on L/K dual-band micro-Doppler spectrograms and Mamba. We establish a dual-band radar detection model for unmanned aerial vehicles (UAVs) and birds, provide a method for characterizing the Doppler parameters of the echo signals, and develop a UAV and bird target classification network model that integrates micro-Doppler and Mamba. Based on a dual-branch encoding framework, we use patch-block decomposition to serialize the two-dimensional spectrogram of the echo signal and introduce the Mamba state-space backbone network to extract long-term sequence features of the target's micro-motion. The main contribution of the proposed classification algorithm lies in the feature fusion stage, where a late fusion strategy integrates the dual-path high-level representations, fully leveraging the K-band's sensitivity to high-frequency textures and the L-band's scale complementarity. A joint loss function combining mutual learning and contrastive learning then improves the model's prediction consistency and representation discriminability. Experimental results show that the proposed method effectively classifies UAVs and birds, reaching an accuracy of 97.5% and outperforming the compared algorithms. Full article
(This article belongs to the Section Drone Communications)

30 pages, 8434 KB  
Review
AI-Assisted Molecular Biosensors: Design Strategies for Wearable and Real-Time Monitoring
by Sishi Zhu, Jie Zhang, Xuming He, Lijun Ding, Xiao Luo and Weijia Wen
Int. J. Mol. Sci. 2026, 27(7), 3305; https://doi.org/10.3390/ijms27073305 - 6 Apr 2026
Abstract
Artificial intelligence (AI) has become a transformative tool in the field of molecular biosensing, enabling data-driven optimization in sensor design, signal processing, and real-time monitoring. AI promotes the discovery of biomarkers, the design of high-affinity receptors, and the rational engineering of sensing materials, thereby enhancing sensitivity, specificity, and detection accuracy. In the development of biosensors, AI-assisted strategies have accelerated the identification of novel molecular targets, guided the design of proteins and aptamers with enhanced binding performance, and optimized plasmonic and nanophotonic structures through forward prediction and inverse design frameworks. The integration of artificial intelligence has significantly enhanced the performance of various biosensing platforms, including optical, electrochemical, and microfluidic biosensors. It also enabled automatic feature extraction, noise reduction, dimensionality reduction, and multimodal data fusion, overcoming the challenges posed by complex signals, environmental interference, and device variations. These capabilities are particularly crucial for wearable molecular biosensors, as low signal strength, motion artifacts, and fluctuations in physiological conditions impose strict requirements on robustness and real-time reliability. This review systematically summarizes the latest advancements in AI-assisted molecular biosensors, highlighting representative sensing strategies and algorithms for wearable and real-time monitoring, and discusses the current challenges and future development opportunities of intelligent biosensing technologies. Full article
(This article belongs to the Special Issue Biosensors: Emerging Technologies and Real-Time Monitoring)

14 pages, 2277 KB  
Article
Deep Learning Denoising for Enhanced Acetone Detection in Cavity Ring-Down Spectroscopy
by Wenxuan Li, Dongxin Shi, Feifei Wang, Yuxiao Song, Yong Yang, Jing Sun and Chenyu Jiang
Chemosensors 2026, 14(4), 92; https://doi.org/10.3390/chemosensors14040092 - 5 Apr 2026
Abstract
Cavity ring-down spectroscopy has significant potential for detecting trace volatile organic compounds, owing to its long absorption path and high sensitivity. However, in practical measurements, noise severely degrades the accuracy of decay curves and the reliability of concentration retrieval. To address this, we developed a deep learning-based denoising model called decay-upsampling FC-Net. Experimental results showed that the model improved the signal-to-noise ratio from 13.86 dB to 26.79 dB and processed a single decay curve in only 0.000207 s on average. Moreover, under high-noise conditions, it determined the ring-down time more accurately than conventional methods. This study provides an effective signal processing solution to enhance the practical reliability of cavity ring-down spectroscopy gas detection systems. Full article
(This article belongs to the Special Issue Spectroscopic Techniques for Chemical Analysis, 2nd Edition)
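The SNR figures quoted above (13.86 dB to 26.79 dB) use the decibel definition of signal-to-noise ratio. A hedged sketch on a synthetic exponential ring-down curve; the decay constant, noise level, and sample counts are invented for illustration:

```python
import numpy as np

def snr_db(clean, noisy):
    """SNR in dB: signal power over residual-noise power."""
    clean, noisy = np.asarray(clean, float), np.asarray(noisy, float)
    noise = noisy - clean
    return 10.0 * np.log10(np.sum(clean ** 2) / np.sum(noise ** 2))

# synthetic ring-down decay (tau = 1) with additive Gaussian noise
t = np.linspace(0.0, 5.0, 1000)
clean = np.exp(-t)
rng = np.random.default_rng(0)
noisy = clean + 0.01 * rng.standard_normal(t.size)
snr = snr_db(clean, noisy)
```

In a denoising study, the same formula would be evaluated before and after the network, with the fitted (denoised) curve in place of `clean`.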

28 pages, 5004 KB  
Article
High-Precision Spoofing Detection Using an Auxiliary Baseline Three-Antenna Configuration for GNSS Systems
by Jiajia Chen, Xing’ao Wang, Zhibo Fang, Ming Gao and Ying Xu
Aerospace 2026, 13(4), 339; https://doi.org/10.3390/aerospace13040339 - 3 Apr 2026
Abstract
As Global Navigation Satellite Systems (GNSSs) underpin safety-critical infrastructure, their vulnerability to sophisticated spoofing attacks poses severe physical layer security risks. To address the limitations of existing single-antenna defense mechanisms, this paper proposes a rigorous instantaneous spoofing detection framework utilizing a novel “one-primary-two-auxiliary” three-antenna configuration. By embedding the rigid baseline length as a hard geometric constraint into the Integer Least Squares (ILS) model, we derive a specialized constrained LAMBDA algorithm that significantly shrinks the ambiguity search space. A rigorous hypothesis testing mechanism is established based on the Sum of Squared Residuals (SSR), analytically deriving the detection threshold from the central Chi-square distribution and analyzing the sensitivity via the non-central parameter. Through conducting field experiments using commercial receivers and professional GNSS signal simulators, the proposed method was validated using both single-satellite spoofing and full-constellation spoofing scenarios. Results demonstrate that the system achieves a Minimum Detectable Deviation (MDD) of spatial direction as low as 0.33 and maintains an empirical detection rate of >99% with a negligible false alarm rate. Notably, the method exhibits instantaneous response capabilities, effectively identifying both single-satellite and full-constellation spoofing attacks within a single epoch without requiring prior attitude information or external aiding. Full article
(This article belongs to the Section Astronautics & Space Science)
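The hypothesis test described above compares a Sum of Squared Residuals against a threshold drawn from the central Chi-square distribution at a chosen false-alarm rate. A self-contained sketch (the degrees of freedom and the 1% false-alarm rate are illustrative assumptions, not the paper's settings); the closed-form survival function below holds for even degrees of freedom:

```python
import math

def chi2_sf(x, dof):
    """Survival function of the central chi-square distribution, even dof only."""
    assert dof % 2 == 0
    s = sum((x / 2.0) ** k / math.factorial(k) for k in range(dof // 2))
    return math.exp(-x / 2.0) * s

def chi2_threshold(alpha, dof, lo=0.0, hi=1000.0):
    """Threshold T with P(SSR > T | no spoofing) = alpha, found by bisection."""
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if chi2_sf(mid, dof) > alpha:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# e.g. 6 redundant residuals, 1% false-alarm rate (hypothetical values)
T = chi2_threshold(0.01, 6)

def spoofing_detected(ssr):
    return ssr > T
```

Under spoofing, the SSR follows a non-central chi-square distribution, and the non-centrality parameter governs the detection sensitivity the abstract refers to.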

26 pages, 8175 KB  
Article
In Situ Damage Detection Method for Metallic Shear Plate Dampers Based on the Active Sensing Method and Machine Learning Algorithms
by Yunfei Li, Feng Xiong, Hong Liu, Xiongfei Li, Huanlong Ding, Yi Liao and Yi Zeng
Sensors 2026, 26(7), 2203; https://doi.org/10.3390/s26072203 - 2 Apr 2026
Abstract
Metallic Shear Plate Dampers (MSPDs) are essential components in passive vibration control systems and require rapid post-earthquake inspection to assess damage and determine replacement needs. Traditional visual inspection methods suffer from low efficiency and limited ability to detect concealed damage. This study proposes a novel MSPD damage detection method based on active sensing and the k-nearest neighbor (KNN) algorithm, featuring high accuracy, efficiency, and low cost. Quasi-static tests were conducted to simulate various damage states. Sweep-frequency excitation was applied using a charge amplifier, and piezoelectric sensors were employed to generate and receive stress wave signals corresponding to different damage conditions. The acquired signals were processed using wavelet packet transform (WPT) and energy spectrum analysis to extract discriminative time–frequency features, which were used to train and validate the KNN model. Results show that the model achieved a validation accuracy of 98.9% using all valid data and 98.1% using a single excitation-sensing channel. When tested on an MSPD with a similar overall structure but lacking stiffeners, the model achieved an accuracy of 92.6% in distinguishing between healthy and damaged states. This indicates that the proposed method has good robustness and practical potential for MSPDs with similar damage evolution and failure modes despite certain structural variations. Full article
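The pipeline above (time-frequency energy features feeding a KNN classifier) can be sketched as follows. For a self-contained illustration, a simple FFT band-energy spectrum stands in for the paper's wavelet packet transform, and toy sinusoids stand in for the measured stress-wave signals:

```python
import numpy as np

def band_energies(signal, n_bands=8):
    # normalized energy per frequency band -- an FFT stand-in for the
    # wavelet-packet energy spectrum used in the paper
    spec = np.abs(np.fft.rfft(signal)) ** 2
    e = np.array([b.sum() for b in np.array_split(spec, n_bands)])
    return e / e.sum()

def knn_predict(X_train, y_train, x, k=3):
    # minimal k-nearest-neighbor majority vote (Euclidean distance)
    d = np.linalg.norm(X_train - x, axis=1)
    return np.bincount(y_train[np.argsort(d)[:k]]).argmax()

t = np.linspace(0.0, 1.0, 256, endpoint=False)
# class 0 ("healthy"): low-frequency response; class 1 ("damaged"): high-frequency
signals = [np.sin(2 * np.pi * f * t + p)
           for f, p in [(5, 0.0), (5, 0.5), (5, 1.0),
                        (60, 0.0), (60, 0.5), (60, 1.0)]]
X = np.array([band_energies(s) for s in signals])
y = np.array([0, 0, 0, 1, 1, 1])
pred = knn_predict(X, y, band_energies(np.sin(2 * np.pi * 6 * t)))
```

A real implementation would replace `band_energies` with wavelet packet energies (e.g. via a wavelet library) computed from the piezoelectric sensor responses.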

17 pages, 771 KB  
Article
MSA-Net: A Deep Learning Network with Multi-Axial Hadamard Attention and Pyramid Pooling for Stroke Microwave Imaging
by Bo Han, Dongliang Li, Xuhui Zhu, Mingshuai Zhang and Peng Li
Algorithms 2026, 19(4), 276; https://doi.org/10.3390/a19040276 - 2 Apr 2026
Abstract
Microwave imaging is emerging as an alternative to conventional medical diagnostic techniques. Traditional analytical and numerical methods face fundamental limitations: they often rely on strict linear approximations or simplified physical models, leading to low reconstruction accuracy, poor robustness, and limited generalization ability in complex clinical scenarios. As a result, they cannot meet the high-precision requirements of practical stroke microwave imaging. To further improve the accuracy of microwave imaging algorithms in recognizing stroke regions and solving the backscattering problem, this study combines microwave imaging with deep learning and presents the Multi-Scale Attention Network (MSA-Net). The network builds on the EGE-UNet structure with improved multi-axial Hadamard attention, incorporates null-space pyramid pooling, and introduces a deep supervision mechanism to further improve performance. To combine microwave imaging with deep learning, a large amount of microwave data is first simulated in HFSS using a human brain stroke model constructed in the HFSS simulation system. The simulated data are then converted into tensor format and fed into MSA-Net, which generates a binary mask image indicating the size and location of the stroke. The model is also encouraged to converge faster by sparsifying the microwave data, improving training efficiency. In comparison experiments on simulation data, MSA-Net detects stroke location and bleed size more accurately than other networks, achieving a 1.08 improvement in peak signal-to-noise ratio and a 0.017 reduction in learned perceptual image patch similarity, validating the effectiveness of the structural optimization strategy proposed in this paper. Full article
(This article belongs to the Special Issue Algorithms for Computer Aided Diagnosis: 3rd Edition)

52 pages, 18820 KB  
Article
Multimodal Industrial Scene Characterisation for Pouring Process Monitoring Using a Mixture of Experts
by Javier Nieves, Javier Selva, Guillermo Elejoste-Rementeria, Jorge Angulo-Pines, Jon Leiñena, Xuban Barberena and Fátima A. Saiz
Appl. Sci. 2026, 16(7), 3430; https://doi.org/10.3390/app16073430 - 1 Apr 2026
Abstract
Industrial pouring processes operate under highly dynamic conditions where small deviations can lead to defects, scrap, and production losses. Although modern foundries are equipped with multiple sensors and visual inspection systems, most monitoring approaches remain fragmented, unimodal, and difficult to interpret. Furthermore, annotated anomalous samples in industrial settings are scarce, hindering the development of traditional methods. As a result, many critical pouring anomalies are detected too late or lack sufficient contextual information for effective decision making. In this work, we propose a multimodal framework for industrial scene characterisation that combines visual information and process signals through an explainable Mixture-of-Experts (MoE)-style expert-fusion strategy. First, we deploy an ensemble of specialised modules that collaborate to identify regions of interest, assess pouring quality, and contextualise events within the production process, thereby generating an interpretable description of pouring events. Second, we introduce a novel anomaly detection method for multimodal video data, combining a self-supervised transformer with an outlier-aware clustering algorithm. Our approach effectively identifies rare anomalies without requiring extensive manual labelling. The resulting information is structured into a digital twin-ready representation, supporting synchronisation between the physical system and its virtual counterpart. This solution provides a scalable, deployable pathway to transform heterogeneous industrial data into actionable knowledge, supporting advanced monitoring, anomaly detection, and quality control in real foundry environments. Full article

24 pages, 1855 KB  
Article
Fairness-Aware Optimization in Spatio-Temporal Epidemic Data Mining: A Graph-Augmented Temporal Fusion Transformer
by Saleh Albahli
Mathematics 2026, 14(7), 1179; https://doi.org/10.3390/math14071179 - 1 Apr 2026
Abstract
Modeling the complex spatio-temporal dynamics of infectious diseases presents a significant computational challenge due to heterogeneous regional interactions, high-dimensional multimodal data streams, and the critical need for algorithmic fairness. This paper proposes a novel computational framework that unifies graph-based spatio-temporal forecasting, anomaly detection, and retrieval-augmented generation (RAG) into a single mathematical architecture. The predictive backbone employs a graph-augmented Temporal Fusion Transformer to capture non-linear temporal dependencies and spatial disease propagation. By formalizing regional topology and mobility flows as a weighted mathematical graph, the model systematically integrates structured epidemiological counts, continuous environmental covariates, and digital trace signals. To address algorithmic bias, we formulate a fairness-aware optimization problem by embedding a specific regularization term into the training objective, which mathematically penalizes disparities in true-positive rates across diverse socio-demographic strata. Furthermore, the numerical outputs and anomaly scores are processed by a large language model equipped with hybrid dense and sparse retrieval to generate interpretable, computationally grounded decision support. Extensive experiments on a longitudinal dataset comprising 62 administrative regions over 260 weeks validate the mathematical robustness of the proposed framework. The graph-augmented architecture improved forecasting accuracy by up to 24% and anomaly detection F1 scores by over 6% compared to state-of-the-art deep learning baselines, while the fairness-regularized loss function reduced the maximum subgroup recall gap by more than 50%. These findings demonstrate that predictive accuracy and algorithmic fairness can be jointly optimized, providing a rigorous computational methodology for spatio-temporal epidemic modeling and AI-driven surveillance. Full article
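The fairness term described above penalizes disparities in true-positive rates across socio-demographic strata. One common formulation (an illustrative sketch; the paper's exact regularizer may differ) penalizes the maximum pairwise recall gap:

```python
def group_recalls(y_true, y_pred, groups):
    # true-positive rate (recall) per subgroup
    recalls = {}
    for g in set(groups):
        hits = [yp for yt, yp, gr in zip(y_true, y_pred, groups)
                if gr == g and yt == 1]
        recalls[g] = sum(hits) / len(hits)
    return recalls

def fairness_penalty(recalls, lam=1.0):
    # penalize the max-min recall gap across subgroups, scaled by lambda
    vals = list(recalls.values())
    return lam * (max(vals) - min(vals))

# toy data: every sample is a true positive case; group "b" is under-served
y_true = [1, 1, 1, 1]
y_pred = [1, 1, 1, 0]
groups = ["a", "a", "b", "b"]
r = group_recalls(y_true, y_pred, groups)
penalty = fairness_penalty(r)
```

For gradient-based training, hard predictions would be replaced by predicted probabilities (a differentiable surrogate), so the penalty can be embedded directly in the loss as the abstract describes.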

20 pages, 3356 KB  
Article
Experimental Study of High-Frequency Current Transformer for Partial Discharge Detection Using Frequency and Impulse Metrics
by Laura Della Giovanna, Francesco Guastavino and Eugenia Torello
Metrology 2026, 6(2), 24; https://doi.org/10.3390/metrology6020024 - 1 Apr 2026
Abstract
This study presents a characterization method for High-Frequency Current Transformers (HFCTs) intended for partial discharge (PD) measurement in on-line acquisition systems designed for AI-based processing and clustering. The primary objective is to analyze how key design parameters (ferrite core material and number of turns) influence HFCT frequency response, attenuation, and sensitivity, thereby providing a basis for optimized sensor design when data analysis is to be performed by AI-based algorithms. The investigation focuses on the influence of different ferrite core materials and varying secondary turn numbers on the frequency spectrum and on the response to IEC 60270-compliant calibrator impulses. Both concentrated and well-distributed HFCT secondary winding configurations are analyzed to evaluate their impact on signal behavior and sensitivity. The experimental results are compared with a simplified theoretical model to validate performance trends and identify key design factors. The HFCT response to IEC 60270-compliant calibrator impulses is examined to assess its suitability for PD measurement and monitoring systems. The results highlight the critical role of core selection and number of turns in shaping HFCT bandwidth, attenuation, and impulse response, which are essential for accurate and reliable PD detection in continuous monitoring systems that diagnose the condition of electrical insulation. This diagnostic approach is based on the detection of partial discharge (PD) activity over time, with the objective of identifying evolving phenomena by monitoring the amplitude and characteristics of the signals associated with different defects. Therefore, accurate separation of signals originating from different defects and from noise is essential. These results provide a foundation for designing HFCT sensors suitable for integration into advanced, AI-aided diagnostic frameworks for Condition-Based Maintenance (CBM). Full article

20 pages, 3108 KB  
Article
Intrusion Detection in the Structure of Signal-Code Design in Cyber-Physical Systems of Swarm Small Aerial Vehicles Group Interaction
by Vadim A. Nenashev, Renata I. Chembarisova, Svetlana S. Dymkova and Oleg V. Varlamov
Future Internet 2026, 18(4), 183; https://doi.org/10.3390/fi18040183 - 1 Apr 2026
Abstract
The fault tolerance of a swarm of small aerial vehicles (SAVs) is directly dependent on the reliability of data transmitted over communication channels. One of the key threats is the intentional distortion by an attacker of signal sequences, such as Barker codes or M-sequences, which are used for synchronization and control of the swarm. Such an attack can disable the entire swarm. The aim of this study is to develop a method for detecting such intrusions. The proposed algorithm analyzes mathematical expressions that describe the sidelobe levels of the code's autocorrelation function. This approach not only detects unauthorized changes but also accurately identifies the location and magnitude of the distorted element. The conducted experiments confirm the high accuracy of the algorithm. The practical significance of the work lies in the possibility of integrating this method into the security subsystem of group interaction for small aerial vehicles. This creates a mechanism for active anomaly detection in communication channels: when a threat is detected, the swarm can respond promptly by switching to a backup channel, requesting data retransmission, or isolating the compromised channel, which enhances the survivability and fault tolerance of the swarm as a whole. Full article
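The detection idea (monitoring the autocorrelation sidelobe levels of the synchronization code) can be illustrated with a Barker-13 code, whose aperiodic autocorrelation sidelobes never exceed 1 in magnitude; flipping a single chip raises a sidelobe well above that bound. A hedged sketch of the principle, not the authors' algorithm:

```python
def autocorr(code):
    # aperiodic autocorrelation of a +/-1 code, lags 0..n-1
    n = len(code)
    return [sum(code[i] * code[i + k] for i in range(n - k)) for k in range(n)]

def max_sidelobe(code):
    # largest sidelobe magnitude (lag 0 is the mainlobe, so skip it)
    return max(abs(v) for v in autocorr(code)[1:])

barker13 = [1, 1, 1, 1, 1, -1, -1, 1, 1, -1, 1, -1, 1]
clean_sl = max_sidelobe(barker13)      # Barker property: all sidelobes <= 1

tampered = barker13.copy()
tampered[6] = -tampered[6]             # attacker flips one chip
tampered_sl = max_sidelobe(tampered)

def intrusion(code, threshold=1):
    # flag codes whose sidelobe level exceeds the ideal Barker bound
    return max_sidelobe(code) > threshold
```

Locating which chip was flipped, as the paper's method does, would additionally examine which lags show the sidelobe deviation.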
16 pages, 1022 KB  
Article
An Effective and Interpretable EEG-Based Depression Recognition Method Using Hybrid Feature Selection
by Xin Xu, Qiuyun Fan, Shanjing Ju and Ruoyu Du
Bioengineering 2026, 13(4), 410; https://doi.org/10.3390/bioengineering13040410 - 31 Mar 2026
Viewed by 195
Abstract
Recent studies on EEG-based automated depression detection have primarily depended on complex deep learning models. While these methods improve classification performance, their practical application is limited by high computational complexity, challenging training processes, and poor interpretability. This paper proposes an efficient method for depression recognition, which extracts multi-domain features from preprocessed EEG signals and selects the most discriminative feature subset by integrating the rapid preliminary screening capability of RankSearch with the interactive optimization ability of the Genetic Algorithm (GA). Our approach first eliminates redundant features efficiently through RankSearch, then deeply explores inter-feature relationships via GA, significantly enhancing classification performance while maintaining feature-level interpretability. Using the optimized feature subset, we evaluate performance with multiple machine learning classifiers (Decision Tree, KNN, Random Forest, SVM, XGBoost). Experiments on the public HUSM dataset demonstrate superior performance under rigorous cross-validation (accuracy = 95.08%, sensitivity = 95.99%, specificity = 94.30%, F1-score = 95%, AUC = 0.9514), with feature importance analysis further confirming interpretability. Compared to existing models, our method achieves lower computational complexity and higher clinical practicality, offering a more efficient technical solution for objective depression diagnosis.
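The two-stage filter-then-search strategy this abstract describes can be sketched in miniature (everything concrete here is an illustrative assumption: the relevance scores, the pairwise-redundancy fitness, and the GA operators are invented; the paper's actual RankSearch statistics and GA configuration are not specified on this page). Stage 1 ranks candidate features and keeps the top-k; stage 2 runs a GA over binary selection masks of the survivors, with a final bit-flip polish.

```python
import random

random.seed(7)

# Hypothetical univariate relevance scores for 10 candidate features
# (stand-ins for the statistics a RankSearch-style filter would compute).
relevance = [0.02, 1.0, 0.05, 0.08, 1.0, 0.03, 0.06, 0.04, 1.0, 0.01]

# Stage 1 - rank-based filtering: keep the top-k features by relevance.
k = 6
filtered = sorted(range(len(relevance)), key=lambda i: -relevance[i])[:k]

# Stage 2 fitness: reward relevance, penalize every selected pair
# (a crude proxy for inter-feature redundancy).
def fitness(mask):
    chosen = [f for f, bit in zip(filtered, mask) if bit]
    gain = sum(1.0 if relevance[f] > 0.5 else 0.1 for f in chosen)
    pairs = len(chosen) * (len(chosen) - 1) // 2
    return gain - 0.3 * pairs

def ga(pop_size=20, gens=40, mut=0.15):
    pop = [tuple(random.randint(0, 1) for _ in range(k)) for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=fitness, reverse=True)
        nxt = pop[:2]                              # elitism
        while len(nxt) < pop_size:
            a, b = random.sample(pop[:10], 2)      # truncation selection
            cut = random.randrange(1, k)           # one-point crossover
            child = a[:cut] + b[cut:]
            child = tuple(1 - g if random.random() < mut else g for g in child)
            nxt.append(child)
        pop = nxt
    return max(pop, key=fitness)

best = ga()

# Deterministic bit-flip polish so the result is a local optimum of the fitness.
improved = True
while improved:
    improved = False
    for i in range(k):
        cand = best[:i] + (1 - best[i],) + best[i + 1:]
        if fitness(cand) > fitness(best):
            best, improved = cand, True

best_features = sorted(f for f, bit in zip(filtered, best) if bit)
print(best_features)  # the three high-relevance features: [1, 4, 8]
```

With these toy numbers the fitness has a unique local optimum (the three relevance-1.0 features), so the polish step guarantees the GA's answer; on real EEG features the landscape is rougher, which is where the GA's population search earns its keep.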
26 pages, 2101 KB  
Article
A Localization Method Based on Nonlinear Batch Processing for Non-Cooperative Underwater Acoustic Pulse Source
by Xiaoyan Wang, Yang Ye, Haopeng Deng, Yuntian Ji, Hongli Cao and Liang An
Electronics 2026, 15(7), 1452; https://doi.org/10.3390/electronics15071452 - 31 Mar 2026
Viewed by 175
Abstract
The position of a non-cooperative underwater pulse signal source can be estimated by applying target motion analysis techniques to the direction of arrival (DOA) and frequency of arrival (FOA) measurements obtained from a hydrophone array. However, the harsh underwater acoustic environment, with its pronounced multipath propagation, high signal attenuation, and sparse detectable pulses, introduces considerable errors into the estimation of DOA and FOA. These errors can degrade the performance of conventional estimators such as the pseudolinear estimation (PLE) method, leading to significant bias and divergence issues. To address these issues, this paper proposes a method based on nonlinear batch processing for underwater non-cooperative target localization. A cost function is constructed based on a nonlinear observation model and the weighted least squares principle to ensure high modeling fidelity. Subsequently, a multi-start grid search combined with a trust region dogleg algorithm is employed for global iterative optimization of the cost function, enhancing the accuracy and stability of the final position estimate. Numerical simulation results demonstrate that the proposed method achieves high convergence speed and localization accuracy under adverse noise conditions and with a limited number of received pulses. Moreover, the sea trial results confirm that the algorithm attained a convergence rate of 93% with only 25 received pulses, and outperformed the conventional PLE method by approximately 80% in terms of positioning accuracy.
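The "nonlinear cost + multi-start global search + local refinement" pipeline this abstract outlines can be caricatured in 2D with bearings-only measurements (illustrative assumptions throughout: the sensor layout, target, and noise-free DOAs are invented, and a shrinking-step coordinate descent stands in for the paper's trust-region dogleg step).

```python
import math

# Weighted least-squares cost over DOA (bearing) measurements,
# with angular residuals wrapped into (-pi, pi].
def cost(pos, sensors, bearings, weights):
    total = 0.0
    for (sx, sy), theta, w in zip(sensors, bearings, weights):
        r = math.atan2(pos[1] - sy, pos[0] - sx) - theta
        r = math.atan2(math.sin(r), math.cos(r))  # wrap the residual
        total += w * r * r
    return total

def locate(sensors, bearings, weights, span=100.0, step=10.0):
    # Multi-start stage: evaluate the cost on a coarse grid and keep the best cell.
    best_c, best_p = float("inf"), (0.0, 0.0)
    n = int(span / step)
    for i in range(n + 1):
        for j in range(n + 1):
            p = (i * step, j * step)
            c = cost(p, sensors, bearings, weights)
            if c < best_c:
                best_c, best_p = c, p
    # Local refinement: shrinking-step coordinate descent from the best cell
    # (a gradient-free stand-in for a trust-region dogleg iteration).
    x, y = best_p
    h = step
    while h > 1e-6:
        moved = False
        for dx, dy in ((h, 0.0), (-h, 0.0), (0.0, h), (0.0, -h)):
            c = cost((x + dx, y + dy), sensors, bearings, weights)
            if c < best_c:
                best_c, x, y, moved = c, x + dx, y + dy, True
        if not moved:
            h /= 2.0
    return x, y

sensors = [(0.0, 0.0), (100.0, 0.0), (0.0, 100.0), (100.0, 100.0)]
target = (33.0, 47.0)
bearings = [math.atan2(target[1] - sy, target[0] - sx) for sx, sy in sensors]
weights = [1.0] * len(sensors)

est = locate(sensors, bearings, weights)
```

The multi-start grid is what protects the refinement stage from the local minima and divergence that the abstract attributes to pseudolinear estimators; the paper's actual method additionally fuses FOA measurements and handles measurement noise.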