
Search Results (196)

Search Parameters:
Keywords = automatic partition

14 pages, 22465 KB  
Article
Automatic SEA Substructuring on Shell Meshes Using Physical Discontinuity Detection
by Yifan Xue, Li Tang, Hao Zan and Chen Qiang
Appl. Sci. 2026, 16(6), 2941; https://doi.org/10.3390/app16062941 - 18 Mar 2026
Viewed by 171
Abstract
Statistical Energy Analysis (SEA) requires a physically meaningful subsystem definition, whereas manual partitioning of complex shell structures is often time-consuming and strongly dependent on engineering experience. To address this issue, this study proposes an automatic initial subsystem partitioning framework for shell FE models based on explicit prior attributes available in the model definition. The method unifies four classes of physical discontinuities—geometric discontinuity, thickness discontinuity, material/property discontinuity, and topological discontinuity—within a single adjacency evaluation procedure. The shell FE mesh is represented through element adjacencies, and adjacencies crossing any identified physical discontinuity are removed so that the remaining connected components define the partitioned subsystems. In this way, the framework generates partitioning results with explicit boundaries and traceable origins without relying on posterior response-field analysis or manually prescribed subsystem boundaries. Because the procedure operates directly on existing large-scale shell FE models and does not require additional response-feature construction or complex pre-partitioning, it provides a lightweight, repeatable, and practically executable automation path for SEA-related front-end modeling. The resulting partitions are intended as physically explicit initial partitioning results that provide a reliable boundary basis for higher-level statistical modeling objectives. When a coarser subsystem representation is required for subsequent modeling, further aggregation may be introduced as an optional enhancement according to the modeling objective, rather than as a prerequisite for the validity of the present method. Full article
(This article belongs to the Section Acoustics and Vibrations)
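The adjacency-cutting procedure the abstract describes reduces to a connected-components computation once discontinuity-crossing adjacencies are removed. A minimal sketch under assumed attribute names (`material`, `thickness` are hypothetical keys), covering only two of the paper's four discontinuity classes:

```python
from collections import defaultdict

def partition_subsystems(elements, adjacency, thickness_tol=1e-6):
    """Split a shell mesh into subsystems by cutting adjacencies that cross
    a physical discontinuity (material or thickness here; geometric and
    topological checks would plug into the same keep/cut decision)."""
    # Union-find over element ids: surviving adjacencies merge components.
    parent = {e: e for e in elements}

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    def union(a, b):
        ra, rb = find(a), find(b)
        if ra != rb:
            parent[ra] = rb

    for a, b in adjacency:
        ea, eb = elements[a], elements[b]
        # Keep the adjacency only if no discontinuity is crossed.
        if ea["material"] == eb["material"] and \
           abs(ea["thickness"] - eb["thickness"]) <= thickness_tol:
            union(a, b)

    groups = defaultdict(list)
    for e in elements:
        groups[find(e)].append(e)
    return list(groups.values())
```

Each returned group is one candidate SEA subsystem with explicit, traceable boundaries (the cut adjacencies).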

21 pages, 1119 KB  
Article
Risk-Weighted D-Optimal Sensor Placement for Substructure-Level Damage-Parameter Identification in Space Grid Structures Using Differentiable Flexibility-Submatrix Surrogates
by Jiakai Xiu
Buildings 2026, 16(5), 966; https://doi.org/10.3390/buildings16050966 - 1 Mar 2026
Viewed by 226
Abstract
Optimal sensor placement (OSP) for structural health monitoring of large-scale space grid structures must enable reliable identification of localized member deterioration with sparse instrumentation. Modal-based OSP criteria optimize observability of a healthy model but do not directly minimize uncertainty in substructure-level damage parameters. We partition the structure into substructures, simulate axial and biaxial bending stiffness-loss cases, and compute truncated modal flexibility. Each element is encoded by stacked end-node flexibility submatrices over m=6 modes. A multi-task, zero-anchored multi-layer perceptron is trained to regress three nonnegative damage parameters and classify damage presence using losses tailored for small-damage accuracy. Sensor sensitivities are obtained by automatic differentiation of the surrogate with respect to flexibility features and aggregated with scenario weights emphasizing critical bending and neighbor-substructure interference scenarios. A greedy D-optimal design then maximizes the log-determinant of a regularized Fisher information matrix under practical coverage constraints; substructure selections are merged into a globally feasible layout. On a representative space grid, the method improves task-oriented identifiability over EFI and MKE across budgets Ktot=30–60 (higher-damage D-optimality, lower A-optimality trace, and reduced proxy variance indicators), while yielding lower modal log-determinants. These findings indicate risk-weighted, substructure-first task design as an alternative to purely modal criteria for substructure-level damage-parameter identification. Full article
(This article belongs to the Section Building Structures)
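The greedy D-optimal selection loop mentioned above can be sketched generically: repeatedly add the candidate whose per-sensor Fisher information increment most increases the regularized log-determinant. This is an illustrative stand-in, not the paper's implementation; the ridge term and candidate matrices are assumptions.

```python
import numpy as np

def greedy_d_optimal(candidate_infos, budget, ridge=1e-6):
    """Greedy D-optimal design: at each step add the candidate whose
    Fisher information matrix most increases log det(F + ridge * I).
    candidate_infos: list of (p, p) PSD matrices, one per candidate."""
    p = candidate_infos[0].shape[0]
    F = ridge * np.eye(p)          # regularization keeps F invertible early on
    chosen, remaining = [], list(range(len(candidate_infos)))
    for _ in range(budget):
        scores = [np.linalg.slogdet(F + candidate_infos[i])[1]
                  for i in remaining]
        best = remaining[int(np.argmax(scores))]
        chosen.append(best)
        remaining.remove(best)
        F = F + candidate_infos[best]
    return chosen
```

With two orthogonal rank-one candidates, the second pick complements the first rather than duplicating it, which is exactly the behavior the log-determinant criterion rewards.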

16 pages, 1691 KB  
Article
Weakly Supervised Optimization for Power Distribution Transformer Area Identification Based on Frequency-Domain Representation
by Suwei Zhai, Junkai Liang, Wangxia Yang, Chao Zheng, Dongdong Wang, Xiaodong Xing and Yanjun Feng
Electronics 2026, 15(5), 1000; https://doi.org/10.3390/electronics15051000 - 28 Feb 2026
Viewed by 264
Abstract
Accurate identification of user–transformer relationships is fundamental to refined management, load forecasting, and fault diagnosis in low-voltage distribution networks. Traditional approaches often rely on costly manual inspection or complex physical modeling, which limits their scalability. This paper proposes a frequency-domain representation learning and weakly supervised optimization method for automatic transformer-area identification from large-scale user electricity data with incomplete labels. Specifically, the proposed method first applies the Fast Fourier Transform (FFT) to convert users’ voltage and current time series into robust frequency-domain feature vectors, effectively revealing intrinsic periodic structures while reducing noise interference. Then, under limited supervision, a deep metric learning framework is employed to optimize the embedding space such that users belonging to the same transformer area are clustered more compactly, while those from different areas are separated farther apart. Finally, a high-density clustering algorithm is applied in the optimized embedding space to complete the transformer-area partition for all users. Experimental results demonstrate that the proposed approach can effectively leverage limited label information and significantly improve transformer-area identification accuracy, providing an efficient and low-cost solution for digitalized operation and maintenance of low-voltage distribution networks. Full article
(This article belongs to the Special Issue AI Applications for Smart Grid: 2nd Edition)
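The FFT feature-extraction step described above can be sketched as follows. The particular choice of features (magnitudes of the strongest nonzero-frequency bins, unit-normalised) is illustrative; the paper's exact feature construction is not specified here.

```python
import numpy as np

def fft_features(series, k=8):
    """Convert a voltage/current time series into a compact frequency-domain
    vector: magnitudes of the k strongest nonzero-frequency components,
    normalised so overall amplitude scaling cancels out."""
    spec = np.abs(np.fft.rfft(series - np.mean(series)))
    spec = spec[1:]                      # drop the DC bin
    top = np.sort(spec)[::-1][:k]        # strongest components first
    norm = np.linalg.norm(top)
    return top / norm if norm > 0 else top
```

Vectors like these would then feed the metric-learning and density-clustering stages; periodic load structure concentrates energy in a few bins, which is what makes the representation robust to noise.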

14 pages, 1506 KB  
Article
LightGBM-Based Seizure Detection Method in Pilocarpine Mouse Model of Epilepsy
by Mercy Edoho, Nicolas Partouche, Christiaan Warner Hoornenborg, Tycho M. Hoogland, Stéphane Baudouin, Catherine Mooney and Lan Wei
Algorithms 2026, 19(3), 167; https://doi.org/10.3390/a19030167 - 24 Feb 2026
Viewed by 383
Abstract
Electroencephalogram (EEG) has been the gold standard for measuring epileptic activity in rodent models of epilepsy. Manual scoring of seizures in EEG recordings lasting from days to months is laborious and prone to human error. The existing literature on automatic seizure detection in rodent models of epilepsy is limited, and the electrographic characteristics of induced epilepsy significantly differ from those of other epilepsy types. This study employed a Light Gradient Boosting Machine (LightGBM), with the dataset carefully partitioned into separate training and testing sets to ensure no data overlap. The model was trained using five-fold cross-validation to enhance robustness and generalisability. The training, validation, and independent test sets comprised 29,722 h of EEG recordings from 102 mice with pilocarpine-induced temporal lobe epilepsy. Following feature selection, model training, and post-processing, the LightGBM-based model exhibited a sensitivity of 80%, a specificity of 99%, and an F1-score of 0.71 on the independent test set. Multiple pairwise and non-parametric statistical tests indicated that envelope, skewness, and kurtosis, identified as the three most significant features in the feature importance ranking, exhibit statistically significant differences in their distributions (p-value < 0.05). The statistical analysis revealed significant differences across the three features and between seizure and non-seizure events for each feature, highlighting their relevance for discriminating epileptic activity. This study highlights the potential to support the automation of seizure event detection in preclinical rodent models of epilepsy. Full article
(This article belongs to the Special Issue Machine Learning in Medical Signal and Image Processing (4th Edition))
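The "no data overlap" requirement above means whole animals, not individual windows, must be assigned to folds. A minimal sketch of such a subject-wise split (the greedy hour-balancing heuristic is an illustrative choice, not the paper's stated method):

```python
def subject_wise_folds(recordings, n_folds=5):
    """Assign whole subjects (mice) to folds so no animal's recordings
    appear in both training and validation. recordings maps subject id
    to hours of EEG; folds are balanced greedily by total hours."""
    folds = [[] for _ in range(n_folds)]
    load = [0.0] * n_folds
    # Place the largest subjects first, each into the lightest fold so far.
    for subj, hours in sorted(recordings.items(), key=lambda kv: -kv[1]):
        i = load.index(min(load))
        folds[i].append(subj)
        load[i] += hours
    return folds
```

Splitting by window instead of by subject would leak highly correlated EEG segments across sets and inflate the reported sensitivity/specificity.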

28 pages, 14898 KB  
Article
Deep Learning for Classification of Internal Defects in Fused Filament Fabrication Using Optical Coherence Tomography
by Valentin Lang, Qichen Zhu, Malgorzata Kopycinska-Müller and Steffen Ihlenfeldt
Appl. Syst. Innov. 2026, 9(2), 42; https://doi.org/10.3390/asi9020042 - 14 Feb 2026
Viewed by 628
Abstract
Additive manufacturing is increasingly adopted for the industrial production of small series of functional components, particularly in thermoplastic strand extrusion processes such as Fused Filament Fabrication. This transition relies on technological advances addressing key process limitations, including dimensional instability, weak interlayer bonding, extrusion defects, moisture sensitivity, and insufficient melting. Process monitoring therefore focuses on early defect detection to minimize failed builds and costs, while ultimately enabling process optimization and adaptive control to mitigate defects during fabrication. For this purpose, a data processing pipeline for monitoring Optical Coherence Tomography images acquired in Fused Filament Fabrication is introduced. Convolutional neural networks are used for the automatic classification of tomographic cross-sections. A dataset of tomographic images undergoes semi-automatic labeling, preprocessing, model training, and evaluation. A sliding window detects outlier regions in the tomographic cross-sections, while masks suppress peripheral noise, enabling label generation based on outlier ratios. Data are split into training, validation, and test sets using block-based partitioning to limit leakage. The classification model employs a ResNet-V2 architecture with BottleneckV2 modules. Hyperparameters are optimized, with N = 2, K = 2, dropout 0.5, and learning rate 0.001 yielding the best performance. The model achieves 0.9446 accuracy and outperforms EfficientNet-B0 and VGG16 in accuracy and efficiency. Full article
(This article belongs to the Special Issue AI-Driven Decision Support for Systemic Innovation)
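Block-based partitioning, as used above to limit leakage, keeps contiguous runs of frames together so near-duplicate neighbouring cross-sections cannot straddle the train/test boundary. A sketch with assumed block size and split fractions:

```python
import random

def block_split(n_items, block_size, fractions=(0.7, 0.15, 0.15), seed=0):
    """Split item indices into train/val/test by whole contiguous blocks,
    so neighbouring (highly correlated) frames stay in one set."""
    blocks = [list(range(i, min(i + block_size, n_items)))
              for i in range(0, n_items, block_size)]
    rng = random.Random(seed)
    rng.shuffle(blocks)                       # randomise block assignment only
    n_train = round(len(blocks) * fractions[0])
    n_val = round(len(blocks) * fractions[1])
    train = [i for b in blocks[:n_train] for i in b]
    val = [i for b in blocks[n_train:n_train + n_val] for i in b]
    test = [i for b in blocks[n_train + n_val:] for i in b]
    return train, val, test
```

A purely random per-frame shuffle would place almost-identical adjacent cross-sections on both sides of the split and overstate test accuracy.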

24 pages, 10247 KB  
Article
A Segmented Adaptive Filtering Method for Nearshore Bathymetry Using ICESat-2 Dataset
by Yifu Chen, Ziqiang Wang, Wuxing Song, Yuan Le, Liqin Zhou, Haichao Guo, Lin Wu and Lin Yi
Remote Sens. 2026, 18(4), 568; https://doi.org/10.3390/rs18040568 - 11 Feb 2026
Viewed by 378
Abstract
Equipped with an Advanced Topographic Laser Altimeter System (ATLAS), ICESat-2 (Ice, Cloud and land Elevation Satellite-2) is a photon-counting laser altimetry mission with strong potential for nearshore bathymetry. In this study, a novel filtering and bathymetric method, termed segmented adaptive filtering bathymetry, is proposed. Sea-surface photons are identified from peaks in the elevation-density histogram, enabling separation of surface and seafloor photons. The seafloor photons are then partitioned into along-track segments, where seafloor signal photons are extracted using an adaptive elliptical kernel whose parameters and orientation are determined from local density patterns and seafloor slope. The seafloor profile is obtained by polynomial fitting, and nearshore depth is estimated from the elevations of the surface and seafloor signal photons. To assess the accuracy and reliability of the proposed method, experiments were performed with ICESat-2 data from the Qilianyu Islands in the South China Sea and West Island in the Florida Keys, United States. The bathymetric results at the different experimental areas were compared with reference bathymetry obtained by an airborne light detection and ranging (LiDAR) bathymetry (ALB) system. The best root mean square error (RMSE) and coefficient of determination (R2) reached 0.37 m and 98%, respectively. The accuracy validation at the different study areas demonstrated that the proposed method can automatically and effectively achieve high-precision nearshore bathymetry and topographic surveys. Full article
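The first step of the pipeline, locating the sea surface from the elevation-density histogram, is simple to sketch. Bin width and the synthetic photon cloud below are illustrative assumptions:

```python
import numpy as np

def sea_surface_elevation(photon_z, bin_width=0.1):
    """Locate the sea surface as the centre of the densest bin in the
    photon elevation histogram; photons near this elevation would be
    labelled surface, those well below it seafloor candidates."""
    lo, hi = photon_z.min(), photon_z.max()
    counts, edges = np.histogram(photon_z,
                                 bins=np.arange(lo, hi + bin_width, bin_width))
    peak = int(np.argmax(counts))
    return 0.5 * (edges[peak] + edges[peak + 1])
```

Surface returns are far denser than scattered seafloor and noise photons, so the histogram peak is a robust surface estimate even before any along-track segmentation.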

29 pages, 11326 KB  
Article
Constrained Soft Actor–Critic for Joint Computation Offloading and Resource Allocation in UAV-Assisted Edge Computing
by Nawazish Muhammad Alvi, Waqas Muhammad Alvi, Xiaolong Zhou, Jun Li and Yifei Wei
Sensors 2026, 26(4), 1149; https://doi.org/10.3390/s26041149 - 10 Feb 2026
Viewed by 597
Abstract
Unmanned Aerial Vehicle (UAV)-assisted edge computing supports latency-sensitive applications by offloading computational tasks to ground-based servers. However, determining optimal resource allocation under strict latency constraints and stochastic channel conditions remains challenging. This paper addresses the joint computation partitioning and power allocation problem for UAV-assisted edge computing systems. We formulate the problem as a Constrained Markov Decision Process (CMDP) that explicitly models latency constraints, rather than relying on implicit reward shaping. To solve this CMDP, we propose Constrained Soft Actor–Critic (C-SAC), a deep reinforcement learning algorithm that combines maximum-entropy policy optimization with Lagrangian dual methods. C-SAC employs a dedicated constraint critic network to estimate long-term constraint violations and an adaptive Lagrange multiplier that automatically balances energy efficiency against latency satisfaction without manual tuning. Extensive experiments demonstrate that C-SAC achieves an 18.9% constraint violation rate, a 60.6-percentage-point improvement over unconstrained Soft Actor–Critic (79.5%) and a 22.4-percentage-point improvement over deterministic TD3-Lagrangian (41.3%). The learned policies exhibit strong channel-adaptive behavior with a correlation coefficient of 0.894 between the local computation ratio and channel quality, despite the absence of explicit channel modeling in the reward function. Ablation studies confirm that both adaptive mechanisms are essential, while sensitivity analyses show that C-SAC maintains robust performance with violation rates varying by less than 2 percentage points even as channel variability triples. These results establish constrained reinforcement learning as an effective approach for reliable UAV edge computing under stringent quality-of-service requirements. Full article
(This article belongs to the Special Issue Communications and Networking Based on Artificial Intelligence)
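The adaptive Lagrange multiplier in Lagrangian-constrained RL typically follows a projected dual-ascent rule: raise the multiplier while the (estimated) constraint is violated, relax it toward zero otherwise. A minimal sketch; the learning rate and the sign convention for `violation` are illustrative, not C-SAC's exact update:

```python
def update_lagrange_multiplier(lmbda, violation, lr=0.05):
    """One projected dual-ascent step for an objective of the form
    reward - lmbda * constraint_excess. violation > 0 means the latency
    constraint is currently exceeded; the max() keeps lmbda >= 0."""
    return max(0.0, lmbda + lr * violation)
```

Sustained violations steadily increase the penalty weight until the policy prioritises latency; once the constraint is satisfied the multiplier decays back to zero, restoring the energy objective.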

19 pages, 6934 KB  
Article
Machine Learning-Based Automatic Control of Shield Tunneling Attitude in Karst Strata
by Liang Li, Changming Hu, Jianbo Tang, Zhipeng Wu and Peng Zhang
Buildings 2026, 16(4), 701; https://doi.org/10.3390/buildings16040701 - 8 Feb 2026
Viewed by 469
Abstract
Accurate prediction and optimized control of shield tunneling attitude are critical for ensuring tunneling quality and construction safety. In karst and other highly heterogeneous strata, complex geological conditions and construction parameters exhibit significant nonlinear coupling, greatly increasing the difficulty of attitude regulation. To address this challenge, this study proposes a machine learning-based approach for the automatic control of shield tunneling attitude. First, a Tree-structured Parzen Estimator-optimized Light Gradient Boosting Machine model is employed to construct a nonlinear mapping between construction parameters and shield tunneling attitude. Subsequently, the SHapley Additive exPlanations (SHAP) interpretability model is introduced to identify the core tunneling factors influencing attitude stability. On this basis, the developed predictive model is integrated into the multi-objective evolutionary algorithm based on decomposition (MOEA/D) framework as a fitness function to achieve multi-objective optimization of key construction parameters. Using field data from shield tunneling construction in the karst strata of Shenzhen Metro Line 16, the proposed model achieved prediction accuracies of R2 = 0.959 for pitch and R2 = 0.936 for roll, outperforming XGBoost, Random Forest, Long Short-Term Memory, and Transformer baselines. SHAP analysis identified the partitioned propulsion thrust, partitioned chamber pressure, cutterhead rotational speed, and advance rate as key parameters influencing attitude. Further, MOEA/D optimization generated a Pareto set of construction parameters, from which the optimal solution was selected using the ideal point method, resulting in reductions of 26.45% and 39.47% in pitch and roll deviations, respectively. These findings demonstrate the feasibility and effectiveness of the proposed method in achieving high-precision prediction and intelligent optimization control of shield tunneling attitude under complex geological conditions, providing a reliable technical pathway for metro and tunnel construction projects. Full article
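The ideal point method used above to pick a single compromise from the Pareto set has a standard form: normalise each objective, then choose the point closest to the vector of per-objective best values. A generic sketch (minimisation on every objective assumed):

```python
import numpy as np

def ideal_point_select(pareto):
    """Pick the compromise solution from a Pareto set: the point with the
    smallest normalised Euclidean distance to the 'ideal point' formed
    from each objective's best (minimum) value."""
    f = np.asarray(pareto, dtype=float)
    ideal = f.min(axis=0)
    span = f.max(axis=0) - ideal
    span[span == 0] = 1.0                     # guard constant objectives
    d = np.linalg.norm((f - ideal) / span, axis=1)
    return int(np.argmin(d))
```

For two deviation objectives (pitch, roll), this rejects the extreme trade-off points and favours a balanced reduction in both.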

31 pages, 20786 KB  
Article
Multi-Scale Analysis of Ecosystem Service Trade-Off Intensity and Its Drivers Based on Wavelet Transform: A Case Study of the Plain–Mountain Transition Zone in China
by Congyi Li, Penggen Cheng, Xiaojian Wei, Bei Liu, Yunju Nie and Zhanhui Zhao
Land 2026, 15(2), 278; https://doi.org/10.3390/land15020278 - 7 Feb 2026
Viewed by 407
Abstract
Identifying the multi-scale drivers of ecosystem service (ES) trade-off intensity is essential for promoting regional sustainability. However, existing multi-scale ES studies typically rely on predefined administrative units or fixed grid sizes due to the absence of scientifically sound scale-partitioning approaches, which limits the identification of characteristic scales and obscures scale-dependent interactions. This study combines continuous wavelet transform (CWT) and the optimal parameter geographic detector (OPGD) to automatically identify the characteristic scales of trade-offs between ecosystem services, opening a new avenue for multi-scale studies. Taking China's plain–mountain transition zone as a case study, we evaluate trade-off intensity among four key ecosystem services—water yield (WY), habitat quality (HQ), soil conservation (SC), and carbon storage (CS). The results show the following: (1) The identification of 36 characteristic scales (ranging from 5 km to 55 km) indicates that ecosystem service trade-offs operate across a wide range of spatial extents, implying that a single management scale cannot effectively address all ES interactions. (2) From 2000 to 2020, CS-HQ, SC-HQ, and WY-HQ trade-off intensities were jointly driven by both natural conditions and human activities, whereas CS-SC was predominantly influenced by natural and climatic factors. The trade-off intensities between CS-WY and WY-SC were mainly controlled by climatic forces. (3) The explanatory power (q value) of each factor varied distinctly with spatial scale, and the interaction effects between multiple factors were substantially stronger than their individual effects. This indicates that ecosystem service trade-offs are primarily governed by coupled processes rather than isolated drivers. Consequently, management strategies targeting single drivers are unlikely to be effective; instead, ecosystem management should be designed around combinations of drivers that operate at specific spatial scales, providing a concrete pathway for translating trade-off analyses into spatially differentiated management actions. Full article
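The characteristic-scale idea behind the CWT step can be illustrated with a hand-rolled Ricker (Mexican hat) wavelet: transform a spatial transect at a range of scales and report the scale with the largest wavelet power. This is a loose stand-in for the paper's procedure, with all kernel and scale choices being assumptions:

```python
import numpy as np

def ricker(points, a):
    """Unit-energy Ricker (Mexican hat) wavelet of width parameter a."""
    x = np.arange(points) - (points - 1) / 2.0
    amp = 2.0 / (np.sqrt(3.0 * a) * np.pi ** 0.25)
    return amp * (1 - (x / a) ** 2) * np.exp(-0.5 * (x / a) ** 2)

def characteristic_scale(signal, scales):
    """Return the scale whose mean squared CWT coefficient (wavelet power)
    is largest along the transect -- a minimal characteristic-scale pick."""
    power = []
    for a in scales:
        w = ricker(min(10 * int(a), len(signal)), a)
        coef = np.convolve(signal, w, mode="same")
        power.append(np.mean(coef ** 2))
    return scales[int(np.argmax(power))]
```

Short-wavelength spatial variation peaks at a small scale and long-wavelength variation at a large one, which is the property that lets CWT separate fine-grained from broad trade-off patterns.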

28 pages, 2256 KB  
Article
A Moving Window-Based Feature Extraction Method for Gearbox Fault Detection Using Vibration Signals
by Ietezaz ul Hassan, Krishna Panduru, Daniel Riordan and Joseph Walsh
Machines 2026, 14(2), 178; https://doi.org/10.3390/machines14020178 - 4 Feb 2026
Viewed by 446
Abstract
Early gearbox defect detection is imperative for reducing unplanned downtime, ensuring reliability and efficiency, and minimizing maintenance expenses. In recent years, with the rise of Artificial Intelligence (AI) and digital transformation, gearbox defect detection using AI has gained popularity. Machine learning (ML) classifiers are widely used to transform gearbox condition monitoring from manual inspection to automatic monitoring. This work proposes a moving window-based method for extracting statistical features from vibration signals recorded from the gearbox. The extracted features were used to train traditional ML classifiers. Moving window sizes of 300, 400, 500, 600, 700, and 800 were applied to extract statistical features from the publicly available benchmark dataset. The six window sizes yielded six datasets, one per window size. The generated datasets were partitioned using the K-fold cross-validation method to train and test ML models. This study explored and evaluated seven prominent ML classifiers: Decision Tree, Random Forest, Support Vector Machine (SVM), Naïve Bayes, K-Nearest Neighbor (KNN), Gradient Boosting Classifier (GBC), and Logistic Regression. The experimental results demonstrated that SVM, Logistic Regression, and GBC outperform the other ML classifiers. The results in terms of accuracy, precision, and recall also revealed that classifier performance improves as the size of the moving window used for feature extraction increases. Full article
(This article belongs to the Section Machines Testing and Maintenance)
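Moving-window statistical feature extraction of the kind described above is straightforward to sketch. The specific feature set (mean, std, RMS, peak, a kurtosis-style moment) is an illustrative guess at typical vibration features, not the paper's exact list:

```python
import numpy as np

def window_features(signal, window, step=None):
    """Slide a fixed-size window along a vibration signal and emit one row
    of statistical features per window. Window sizes such as 300-800
    samples match the range explored in the paper."""
    step = step or window
    rows = []
    for start in range(0, len(signal) - window + 1, step):
        w = signal[start:start + window]
        mu, sd = w.mean(), w.std()
        rms = np.sqrt(np.mean(w ** 2))
        kurt = np.mean((w - mu) ** 4) / sd ** 4 if sd > 0 else 0.0
        rows.append([mu, sd, rms, np.abs(w).max(), kurt])
    return np.array(rows)
```

Each row becomes one training sample; larger windows average over more shaft revolutions, which is consistent with the reported accuracy gains at bigger window sizes.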

23 pages, 842 KB  
Article
Vine Copula Modelling of Extreme Temperature, Wind Speed, and Relative Humidity Towards Enhancement of Renewable Energy Production
by Maashele Kholofelo Metwane, Daniel Maposa and Caston Sigauke
Math. Comput. Appl. 2026, 31(1), 19; https://doi.org/10.3390/mca31010019 - 1 Feb 2026
Viewed by 553
Abstract
The increasing global reliance on wind and solar energy underscores the critical vulnerability of renewable systems to extreme weather, which can severely disrupt power generation. Accurately modelling the complex, multivariate dependencies of weather extremes is essential for building grid resilience, yet conventional statistical models often fail to capture critical tail dependencies. This study aims to develop a robust framework using vine copulas to model the tail dependencies among key meteorological variables, extreme temperature, wind speed, and relative humidity, across the Eastern Cape province, South Africa, in order to identify optimal seasons for renewable energy production. We first clustered weather stations across the province into five distinct groups using Partitioning Around Medoids (PAM), based on geographical features (elevation, longitude, and latitude). This study explored an automatic selection of the optimal vine copula structure that adequately describes the dependence structure of the meteorological variables employed. The analysis demonstrated that R-vine copulas successfully captured the multivariate tail behaviour of temperature and relative humidity, while D-vine copulas were highly effective for wind speed. The models revealed significant tail dependencies, indicating a high potential for concurrent extreme weather events that impact energy generation. Our findings confirm that vine copulas offer a superior framework for assessing the risks associated with extreme weather to renewable energy systems. The results provide critical insights for regional energy policy and grid resilience planning, highlighting the importance of advanced risk assessment to safeguard renewable energy production against climate extremes. Full article
(This article belongs to the Section Natural Sciences)
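The Partitioning Around Medoids step used above to cluster stations by elevation, longitude, and latitude can be sketched with a minimal k-medoids loop (the Voronoi-iteration variant, a simplification of full PAM's BUILD/SWAP phases):

```python
import numpy as np

def k_medoids(points, k, n_iter=50, seed=0):
    """Minimal k-medoids clustering: assign each point to its nearest
    medoid, then move each medoid to the member minimising total
    in-cluster distance; iterate to convergence."""
    rng = np.random.default_rng(seed)
    D = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=2)
    medoids = rng.choice(len(points), size=k, replace=False)
    for _ in range(n_iter):
        labels = np.argmin(D[:, medoids], axis=1)
        new = medoids.copy()
        for j in range(k):
            members = np.where(labels == j)[0]
            if len(members):
                new[j] = members[np.argmin(D[np.ix_(members, members)].sum(axis=1))]
        if np.array_equal(new, medoids):
            break
        medoids = new
    labels = np.argmin(D[:, medoids], axis=1)
    return medoids, labels
```

Unlike k-means, the cluster prototypes are actual stations, which is convenient when each group must be represented by a physical site.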

27 pages, 2218 KB  
Article
A Deep Learning-Based Pipeline for Detecting Rip Currents from Satellite Imagery
by Yuli Liu, Yifei Yang, Xiang Li, Fan Yang, Huarong Xie, Wei Wang and Changming Dong
Remote Sens. 2026, 18(2), 368; https://doi.org/10.3390/rs18020368 - 22 Jan 2026
Cited by 1 | Viewed by 575
Abstract
Detecting rip currents from satellite imagery offers valuable information for the characterization and assessment of this coastal hazard. While recent advances in deep learning have enabled automatic detection from close-view beach images, the broader geospatial context available in far-view satellite imagery has not yet been fully exploited. The main challenge lies in identifying rips as small objects within large and visually complex scenes that include both beach and non-beach areas. To address this, we propose a detection pipeline that partitions high-resolution satellite images into small regions, applies a deep learning object detection model to each region, and merges the results. The merged results are then refined by a deep learning classification model that filters out non-beach scenes, followed by re-applying the detection model to augmented images to remove spurious detections. The proposed pipeline achieved an overall accuracy of 98.4%, a recall of 0.890, a precision of 0.633, and an F2 score of 0.823 on the testing dataset, demonstrating its effectiveness in locating rip currents within complex coastal scenes and its potential applicability to other regions. In addition, a new rip image dataset containing far-view satellite imagery was constructed. With the new dataset, we demonstrated a potential application of the proposed method in characterizing rip occurrences and found that rip currents tended to occur at open beaches under moderate-energy, onshore-directed wave conditions. Overall, the proposed pipeline, unlike existing near-real-time rip current monitoring systems, provides a high-accuracy offline analysis tool for rip current assessment using satellite imagery. Along with the new dataset introduced in this work, it represents a valuable step towards expanding available resources for improving automated detection methods and rip current research. Full article
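The image-partitioning stage of such a pipeline is typically an overlapping-tile scheme so that small objects on tile borders are still seen whole in at least one tile. A sketch with illustrative tile size and overlap (not the paper's values):

```python
def tile_image(height, width, tile=640, overlap=64):
    """Partition a large satellite image into overlapping, equally sized
    tiles for a small-object detector. Returns (top, left, bottom, right)
    boxes; edge tiles are shifted inward so every tile is full-size."""
    stride = tile - overlap
    boxes = []
    for top in range(0, max(height - overlap, 1), stride):
        for left in range(0, max(width - overlap, 1), stride):
            bottom = min(top + tile, height)
            right = min(left + tile, width)
            boxes.append((max(0, bottom - tile), max(0, right - tile),
                          bottom, right))
    return boxes
```

Detections from all tiles are then mapped back to image coordinates and merged (e.g. by non-maximum suppression) before the beach/non-beach filtering stage.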

20 pages, 3262 KB  
Article
Glass Fall-Offs Detection for Glass Insulated Terminals via a Coarse-to-Fine Machine-Learning Framework
by Weibo Li, Bingxun Zeng, Weibin Li, Nian Cai, Yinghong Zhou, Shuai Zhou and Hao Xia
Micromachines 2026, 17(1), 128; https://doi.org/10.3390/mi17010128 - 19 Jan 2026
Viewed by 987
Abstract
Glass-insulated terminals (GITs) are widely used in high-reliability microelectronic systems, where glass fall-offs in the sealing region may seriously degrade the reliability of the microelectronic component and, in turn, of the device. Automatic inspection of such defects is challenging due to strong light reflection, irregular defect appearances, and limited defective samples. To address these issues, a coarse-to-fine machine-learning framework is proposed for glass fall-off detection in GIT images. By exploiting the circular-ring geometric prior of GITs, an adaptive sector partition scheme is introduced to divide the region of interest into sectors. Four categories of sector features, including color statistics, gray-level variations, reflective properties, and gradient distributions, are designed for coarse classification using a gradient boosting decision tree (GBDT). Furthermore, a sector neighbor (SN) feature vector is constructed from adjacent sectors to enhance fine classification. Experiments on real industrial GIT images show that the proposed method outperforms several representative inspection approaches, achieving an average IoU of 96.85%, an F1-score of 0.984, a pixel-level false alarm rate of 0.55%, and a pixel-level missed alarm rate of 35.62% at a practical inspection speed of 32.18 s per image. Full article
(This article belongs to the Special Issue Emerging Technologies and Applications for Semiconductor Industry)
19 pages, 2028 KB  
Article
RSSI-Based Localization of Smart Mattresses in Hospital Settings
by Yeh-Liang Hsu, Chun-Hung Yi, Shu-Chiung Lee and Kuei-Hua Yen
J. Low Power Electron. Appl. 2026, 16(1), 4; https://doi.org/10.3390/jlpea16010004 - 14 Jan 2026
Viewed by 543
Abstract
(1) Background: In hospitals, mattresses are often relocated for cleaning or patient transfer, leading to mismatches between actual and recorded bed locations. Manual updates are time-consuming and error-prone, motivating an automatic localization system that is cost-effective and easy to deploy, ensuring traceability and reducing nursing workload. (2) Purpose: This study presents a pragmatic, large-scale implementation and validation of a BLE-based localization system using RSSI measurements. The goal was to achieve reliable room-level identification of smart mattresses by leveraging existing hospital infrastructure. (3) Method: RSSI–distance relationships were characterized under different partition conditions to determine parameters for room differentiation. To evaluate real-world scalability, a 42 h field validation involving 266 mattresses in 101 rooms tested performance, complemented by relocation tests and nurse feedback. (4) Results: The system showed stable signals in the complex hospital environment, with a 12.04 dB mean gap between primary and secondary rooms, accurately detecting mattress movements and restoring location confidence. Nurses reported easier operation, fewer manual checks, and improved accuracy, though occasional mismatches occurred when receivers were offline. (5) Conclusions: The RSSI-based system demonstrates a feasible and scalable model for real-world asset tracking. Future upgrades include receiver health monitoring, watchdog restarts, and enhanced user training to improve reliability and usability. Full article
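Room-level identification of this kind typically rests on two pieces: a log-distance path-loss model relating RSSI to distance, and a decision rule that assigns the mattress to the room whose receiver reports the strongest signal, using the primary–secondary gap as a confidence check. A minimal sketch, assuming a standard log-distance model and an illustrative gap threshold (`min_gap_db` and the function names are not from the paper):

```python
import math

def rssi_at_distance(rssi_1m, n, d):
    """Log-distance path-loss model: RSSI(d) = RSSI(1 m) - 10*n*log10(d),
    where n is the path-loss exponent fitted per partition condition."""
    return rssi_1m - 10.0 * n * math.log10(d)

def assign_room(readings, min_gap_db=6.0):
    """Assign the mattress to the room with the strongest RSSI.
    `readings` maps room id -> RSSI in dBm. Returns (room, gap_db, confident),
    where the gap between the top two rooms gauges decision confidence."""
    ranked = sorted(readings.items(), key=lambda kv: kv[1], reverse=True)
    room, best = ranked[0]
    gap = best - ranked[1][1] if len(ranked) > 1 else float("inf")
    return room, gap, gap >= min_gap_db
```

With the reported 12.04 dB mean primary–secondary gap, such a rule would cleanly separate rooms whenever the gap threshold is set below that margin.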

17 pages, 49679 KB  
Article
A Lightweight Denoising Network with TCN–Mamba Fusion for Modulation Classification
by Yubo Kong, Yang Ge and Zhengbing Guo
Electronics 2026, 15(1), 188; https://doi.org/10.3390/electronics15010188 - 31 Dec 2025
Viewed by 635
Abstract
Automatic modulation classification (AMC) under low signal-to-noise ratio (SNR) and complex channel conditions remains challenging because of the trade-off between robustness and efficiency. This study proposes a lightweight temporal convolutional network (TCN) and Mamba fusion architecture designed to improve modulation recognition performance. In the signal denoising stage, a non-local adaptive thresholding denoising module (NATM) is introduced to explicitly improve the effective SNR. In the parallel feature extraction stage, the TCN captures local symbol-level dependencies while Mamba models long-range temporal relationships. In the output stage, their outputs are integrated through additive layer-wise fusion, which avoids parameter explosion. Experiments were conducted on the RadioML 2016.10A, 2016.10B, and 2018.01A datasets with leakage-controlled partitioning strategies, including GroupKFold and Leave-One-SNR-Out cross-validation. Compared with state-of-the-art baselines, the proposed method reduces the SNR required for 90% accuracy by up to 3.8 dB while maintaining a substantially lower parameter count and reduced inference latency. The denoising module provides clear robustness improvements at low SNR, particularly below −8 dB. The results show that the proposed network balances accuracy and efficiency, highlighting its application potential for real-time wireless receivers under resource constraints. Full article
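The Leave-One-SNR-Out protocol mentioned above is straightforward to sketch: each fold holds out all samples at one SNR so the model is always tested on an SNR it never saw during training. A minimal stdlib version, assuming samples are tagged with their SNR (the record format and function name are illustrative):

```python
def leave_one_snr_out(samples):
    """Yield (held_out_snr, train_idx, test_idx) splits where each test
    fold contains every sample of exactly one SNR, so no SNR value is
    shared between the train and test sets of a fold.
    `samples` is a sequence of (snr, ...) records; only snr is inspected."""
    snrs = sorted({s[0] for s in samples})
    for held_out in snrs:
        train = [i for i, s in enumerate(samples) if s[0] != held_out]
        test = [i for i, s in enumerate(samples) if s[0] == held_out]
        yield held_out, train, test
```

The same grouping idea underlies GroupKFold (e.g. scikit-learn's `LeaveOneGroupOut` with SNR as the group label), which is why both are described as leakage-controlled partitioning.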
(This article belongs to the Special Issue AI-Driven Signal Processing in Communications)