Search Results (775)

Search Parameters:
Keywords = supervision signal

13 pages, 1676 KB  
Article
Robust and Interpretable Machine Learning for Network Quality Prediction with Noisy and Incomplete Data
by Pei Huang, Yicheng Li, Hai Gong and Herman Koara
Photonics 2025, 12(10), 965; https://doi.org/10.3390/photonics12100965 - 29 Sep 2025
Abstract
Accurate classification of optical communication signal quality is crucial for maintaining the reliability and performance of high-speed communication networks. While existing supervised learning approaches achieve high accuracy on laboratory-collected datasets, they often face difficulties in generalizing to real-world conditions due to the lack of variability and noise in controlled experimental data. In this study, we propose a targeted data augmentation framework designed to improve the robustness and generalization of binary optical signal quality classifiers. Using the OptiCom Signal Quality Dataset, we systematically inject controlled perturbations into the training data, including label boundary flipping, Gaussian noise addition, and missing-value simulation. To further approximate real-world deployment scenarios, the test set is subjected to additional distribution shifts, including feature drift and scaling. Experiments are conducted under 5-fold cross-validation to evaluate the individual and combined impacts of augmentation strategies. Results show that the optimal augmentation setting (flip_rate = 0.10, noise_level = 0.50, missing_rate = 0.20) substantially improves robustness to unseen distributions, raising accuracy from 0.863 to 0.950, precision from 0.384 to 0.632, F1 from 0.551 to 0.771, and ROC-AUC from 0.926 to 0.999 compared to the model without augmentation. Our research provides an example of balancing data augmentation intensity to optimize generalization without unduly compromising accuracy on clean data. Full article
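A minimal NumPy sketch of the three augmentation operations named in this abstract (label boundary flipping, Gaussian noise addition, and missing-value simulation); the function name, array-based interface, and the exact way the rates are applied are illustrative assumptions, not the authors' code.

```python
import numpy as np

def augment(X, y, flip_rate=0.10, noise_level=0.50, missing_rate=0.20, seed=0):
    """X: (n_samples, n_features) float array, y: binary integer labels."""
    rng = np.random.default_rng(seed)
    X_aug, y_aug = X.astype(float).copy(), y.copy()

    # 1) Flip a fraction of labels to simulate ambiguous boundary cases (binary labels assumed).
    flip_idx = rng.choice(len(y), size=int(flip_rate * len(y)), replace=False)
    y_aug[flip_idx] = 1 - y_aug[flip_idx]

    # 2) Add Gaussian noise scaled by each feature's standard deviation.
    X_aug += rng.normal(0.0, noise_level * X.std(axis=0), size=X.shape)

    # 3) Randomly blank out entries to mimic missing measurements.
    mask = rng.random(X.shape) < missing_rate
    X_aug[mask] = np.nan
    return X_aug, y_aug

# Toy usage on synthetic data.
X = np.random.default_rng(1).random((100, 8))
y = np.random.default_rng(2).integers(0, 2, size=100)
X_aug, y_aug = augment(X, y)
```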

27 pages, 4168 KB  
Article
Electromyographic Diaphragm and Electrocardiographic Signal Analysis for Weaning Outcome Classification in Mechanically Ventilated Patients
by Alejandro Arboleda, Manuel Franco, Francisco Naranjo and Beatriz Fabiola Giraldo
Sensors 2025, 25(19), 6000; https://doi.org/10.3390/s25196000 - 29 Sep 2025
Abstract
Early prediction of weaning outcomes in mechanically ventilated patients has significant potential to influence the duration of treatment as well as associated morbidity and mortality. This study investigated the utility of diaphragm electromyographic (EMG) and electrocardiographic (ECG) signal analysis for classifying the success or failure of weaning in mechanically ventilated patients. EMG signals of 40 mechanically ventilated patients undergoing weaning trials were recorded during extubation using 5-channel surface electrodes placed around the diaphragm muscle, along with an ECG recorded through a 3-lead Holter system. Linear and nonlinear signal analysis techniques were used to assess the interaction between diaphragm muscle activity and cardiac activity. Supervised machine learning algorithms were then used to classify the weaning outcomes. The study revealed clear differences in diaphragmatic and cardiac patterns between patients who succeeded in the weaning trials and those who failed. Successful weaning was characterised by a higher ECG-derived respiration amplitude, whereas failed weaning was characterised by an elevated EMG amplitude. Furthermore, successful weaning exhibited greater oscillations in diaphragmatic muscle activity. Spectral analysis and parameter extraction identified 320 parameters, of which 43 were significant predictors of weaning outcomes. Using seven of these parameters, a Naive Bayes classifier demonstrated high accuracy in classifying weaning outcomes. Surface electromyographic and electrocardiographic signal analyses can predict weaning outcomes in mechanically ventilated patients. This approach could facilitate the early identification of patients at risk of weaning failure, allowing for improved clinical management. Full article
(This article belongs to the Section Biomedical Sensors)
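A hedged sketch of the final classification step described above: reduce the 320 extracted parameters to seven via a univariate test and fit a Naive Bayes classifier. The synthetic data, the ANOVA F-test criterion, and the scikit-learn pipeline are illustrative assumptions, not the authors' implementation.

```python
import numpy as np
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.naive_bayes import GaussianNB
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
X = rng.standard_normal((40, 320))   # 40 patients x 320 extracted parameters (synthetic)
y = rng.integers(0, 2, size=40)      # weaning success (1) vs. failure (0), toy labels

# Keep the seven most discriminative parameters, then fit Naive Bayes.
clf = make_pipeline(SelectKBest(f_classif, k=7), GaussianNB()).fit(X, y)
print(clf.score(X, y))
```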

12 pages, 622 KB  
Article
Combined Infrared Thermography and Agitated Behavior in Sows Improve Estrus Detection When Applied to Supervised Machine Learning Algorithms
by Leila Cristina Salles Moura, Janaina Palermo Mendes, Yann Malini Ferreira, Rayna Sousa Vieira Amaral, Diana Assis Oliveira, Fabiana Ribeiro Caldara, Bianca Thais Baumann, Jansller Luiz Genova, Charles Kiefer, Luciano Hauschild and Luan Sousa Santos
Animals 2025, 15(19), 2798; https://doi.org/10.3390/ani15192798 - 25 Sep 2025
Abstract
Identifying estrus at the right moment increases the success of artificial insemination. Evaluating changes in the body surface temperature of sows during the estrus period with an infrared thermography camera (ITC) can provide an accurate model to predict estrus. This pilot study comprised nine crossbred Large White × Landrace sows, providing 59 data records for analysis. Estrus was identified from observed changes in the sows' behavior and physiological signs. Images of the ocular area, ear tips, breast, back, vulva, and perianal area were collected with the ITC and analyzed using the FLIR Thermal Studio Starter software. Infrared mean temperatures were reported and compared using ANOVA and Tukey–Kramer tests (p < 0.05). Supervised machine learning models were tested using random forest (RF), conditional inference trees (Ctree), partial least squares (PLS), and k-nearest neighbors (KNN), and performance was measured using a confusion matrix. The orbital region showed significant temperature differences between estrus and non-estrus states. The algorithm predicted estrus with 87% accuracy on the test set (40% of the data) when agitated behavior was combined with orbital area temperature. These findings suggest the potential of integrating behavioral and physiological observations with orbital thermography and machine learning to accurately detect estrus in sows under field conditions. Full article
(This article belongs to the Section Pigs)
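A hedged sketch of the classification step described above: orbital-area temperature combined with a binary agitated-behavior flag as inputs to a random forest, evaluated on a 40% test split with a confusion matrix. All data here are synthetic placeholders, not the study's records.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import confusion_matrix
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
orbital_temp = rng.normal(36.5, 0.6, 59)   # degrees C, one value per record (synthetic)
agitated = rng.integers(0, 2, size=59)     # observed agitated behavior (0/1)
estrus = rng.integers(0, 2, size=59)       # estrus vs. non-estrus label (toy)
X = np.column_stack([orbital_temp, agitated])

# 60/40 split mirrors the abstract's test-set proportion.
X_tr, X_te, y_tr, y_te = train_test_split(X, estrus, test_size=0.4, random_state=0)
clf = RandomForestClassifier(random_state=0).fit(X_tr, y_tr)
print(confusion_matrix(y_te, clf.predict(X_te)))
```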

30 pages, 3852 KB  
Article
Application of Supervised Neural Networks to Classify Failure Modes in Reinforced Concrete Columns Using Basic Structural Data
by Konstantinos G. Megalooikonomou and Grigorios N. Beligiannis
Appl. Sci. 2025, 15(18), 10175; https://doi.org/10.3390/app151810175 - 18 Sep 2025
Viewed by 574
Abstract
Reinforced concrete (RC) columns play a vital role in structural integrity, and accurately predicting their failure modes is essential for enhancing seismic safety and performance. This study explores the use of a supervised machine learning approach—specifically, an artificial neural network (ANN) model—to classify failure modes of RC columns. The model is trained using data from the well-established Pacific Earthquake Engineering Research Center (PEER) structural performance database, which contains results from over 400 cyclic lateral-load tests on RC columns. These tests encompass a wide range of column types, including those with spiral or circular hoop confinement, rectangular ties, and varying configurations of longitudinal reinforcement with or without lap splices at critical sections. The ANNs were evaluated using a randomly selected subset from the PEER database, achieving classification accuracies of 94% for rectangular columns and 95% for circular columns. Notably, in certain cases, the model’s predictions aligned with or exceeded the accuracy of traditional building code-based methods. These findings underscore the strong potential of machine learning—particularly ANNs—for reliably postdicting failure modes (even the brittle ones) in RC columns, signaling a promising advancement in the field of earthquake engineering. Full article
(This article belongs to the Section Computing and Artificial Intelligence)
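A hedged sketch of the kind of supervised neural network described above: a small feed-forward classifier mapping basic column properties to a failure-mode class. The feature set, network size, and toy labels are illustrative assumptions, not the PEER data or the authors' model.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
# Toy features: aspect ratio, axial load ratio, longitudinal and transverse reinforcement ratios.
X = rng.random((400, 4))
y = rng.integers(0, 3, size=400)   # flexure / flexure-shear / shear (toy failure-mode labels)

clf = make_pipeline(
    StandardScaler(),
    MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=2000, random_state=0),
).fit(X, y)
print(clf.score(X, y))
```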

22 pages, 580 KB  
Article
Fuzzy Classifier Based on Mamdani Inference and Statistical Features of the Target Population
by Miguel Antonio Caraveo-Cacep, Rubén Vázquez-Medina and Antonio Hernández Zavala
Modelling 2025, 6(3), 106; https://doi.org/10.3390/modelling6030106 - 18 Sep 2025
Viewed by 246
Abstract
Classifying study objects into groups is facilitated by fuzzy classifiers based on a set of rules and membership functions. Typically, the characteristics of the study objects are used to establish the criteria for classification. This work arises from the need to design fuzzy classifiers in contexts where real data is scarce or highly random, proposing a design based on statistics and chaotic maps that simplifies the design process. This study introduces the development of a fuzzy classifier, assuming that three features of the population to be classified are random variables. A Mamdani fuzzy inference system and three pseudorandom number generators based on one-dimensional chaotic maps are utilized to achieve this. The logistic, Bernoulli, and tent chaotic maps are implemented to emulate the random features of the target population, and their statistical distribution functions serve as input to the fuzzy inference system. Four experimental tests were conducted to demonstrate the functionality of the proposed classifier. The results show that it is possible to achieve a symmetric and robust classification through simple adjustments to membership functions, without the need for supervised training, which represents a significant methodological contribution, especially because this indicates that designers with minimal experience can build effective classifiers in just a few steps. Real applications of the proposed design may focus on the classification of biomedical signals (sEMG), network traffic, and personalized medical assistance systems, where data exhibits high variability and randomness. Full article
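A hedged sketch of the three one-dimensional chaotic maps named above, used as pseudorandom generators for the emulated population features; the parameter values are common textbook choices, and the Mamdani inference stage itself is not shown.

```python
import numpy as np

def logistic_map(x0=0.4, r=3.99, n=1000):
    # x_{i+1} = r * x_i * (1 - x_i), chaotic for r near 4
    x = np.empty(n); x[0] = x0
    for i in range(1, n):
        x[i] = r * x[i - 1] * (1 - x[i - 1])
    return x

def tent_map(x0=0.4, mu=1.99, n=1000):
    # x_{i+1} = mu * min(x_i, 1 - x_i)
    x = np.empty(n); x[0] = x0
    for i in range(1, n):
        x[i] = mu * min(x[i - 1], 1 - x[i - 1])
    return x

def bernoulli_map(x0=0.4, n=1000):
    # x_{i+1} = 2 * x_i mod 1 (doubling map)
    x = np.empty(n); x[0] = x0
    for i in range(1, n):
        x[i] = (2 * x[i - 1]) % 1
    return x

# Each sequence can stand in for one random feature of the target population.
features = np.column_stack([logistic_map(), tent_map(), bernoulli_map()])
```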

20 pages, 2930 KB  
Article
Pain Level Classification from Speech Using GRU-Mixer Architecture with Log-Mel Spectrogram Features
by Adi Alhudhaif
Diagnostics 2025, 15(18), 2362; https://doi.org/10.3390/diagnostics15182362 - 17 Sep 2025
Viewed by 279
Abstract
Background/Objectives: Automatic pain detection from speech signals holds strong promise for non-invasive and real-time assessment in clinical and caregiving settings, particularly for populations with limited capacity for self-report. Methods: In this study, we introduce a lightweight recurrent deep learning approach, namely the Gated Recurrent Unit (GRU)-Mixer model, for pain level classification based on speech signals. The proposed model maps raw audio inputs into Log-Mel spectrogram features, which are passed through a stacked bidirectional GRU for modeling the spectral and temporal dynamics of vocal expressions. To extract compact utterance-level embeddings, an adaptive average pooling-based temporal mixing mechanism is applied over the GRU outputs, followed by a fully connected classification head alongside dropout regularization. This architecture is used for several supervised classification tasks, including binary (pain/non-pain), graded intensity (mild, moderate, severe), and thermal-state (cold/warm) classification. End-to-end training is performed using speaker-independent splits and a class-balanced loss to promote generalization and discourage bias. Audio inputs are normalized to a 3-s window and resampled to 8 kHz for consistency and computational efficiency. Results: Experiments on the TAME Pain dataset showcase strong classification performance, achieving 83.86% accuracy for binary pain detection and as high as 75.36% for multiclass pain intensity classification. Conclusions: As the first deep learning-based classification work on the TAME Pain dataset, this work introduces the GRU-Mixer as an effective benchmark architecture for future studies on speech-based pain recognition and affective computing. Full article
(This article belongs to the Section Machine Learning and Artificial Intelligence in Diagnostics)
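A minimal PyTorch sketch of the GRU-Mixer idea described above: Log-Mel frames pass through a stacked bidirectional GRU, are averaged over time, and feed a dropout-regularized linear head. Layer sizes and pooling details are assumptions, not the paper's exact configuration.

```python
import torch
import torch.nn as nn

class GRUMixer(nn.Module):
    def __init__(self, n_mels=64, hidden=128, n_layers=2, n_classes=2, p_drop=0.3):
        super().__init__()
        self.gru = nn.GRU(n_mels, hidden, num_layers=n_layers,
                          batch_first=True, bidirectional=True)
        self.pool = nn.AdaptiveAvgPool1d(1)   # temporal "mixing" by average pooling
        self.head = nn.Sequential(nn.Dropout(p_drop), nn.Linear(2 * hidden, n_classes))

    def forward(self, logmel):                # logmel: (batch, time, n_mels)
        out, _ = self.gru(logmel)             # (batch, time, 2 * hidden)
        pooled = self.pool(out.transpose(1, 2)).squeeze(-1)   # (batch, 2 * hidden)
        return self.head(pooled)

model = GRUMixer()
dummy = torch.randn(4, 300, 64)               # toy batch of Log-Mel frames (~3 s utterances)
print(model(dummy).shape)                      # torch.Size([4, 2])
```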

16 pages, 1094 KB  
Article
Recognition of EEG Features in Autism Disorder Using SWT and Fisher Linear Discriminant Analysis
by Fahmi Fahmi, Melinda Melinda, Prima Dewi Purnamasari, Elizar Elizar and Aufa Rafiki
Diagnostics 2025, 15(18), 2291; https://doi.org/10.3390/diagnostics15182291 - 10 Sep 2025
Viewed by 389
Abstract
Background/Objectives: An ASD diagnosis from EEG is challenging due to non-stationary, low-SNR signals and small cohorts. We propose a compact, interpretable pipeline that pairs a shift-invariant Stationary Wavelet Transform (SWT) with Fisher’s Linear Discriminant (FLDA) as a supervised projection method, delivering band-level insight and subject-wise evaluation suitable for resource-constrained clinics. Methods: EEG from the KAU dataset (eight ASD, eight controls; 256 Hz) was decomposed with SWT (db4). We retained levels 3, 4, and 6 (γ/β/θ) as features. FLDA learned a low-dimensional discriminant subspace, followed by a linear decision rule. Evaluation was conducted using a subject-wise 70/30 split (no subject overlap) with accuracy, precision, recall, F1, and confusion matrices. Results: The β band (Level 4) achieved the best performance (accuracy/precision/recall/F1 = 0.95), followed by γ (0.92) and θ (0.85). Despite partial overlap in FLDA scores, the projection maximized between-class separation relative to within-class variance, yielding robust linear decisions. Conclusions: Unlike earlier FLDA-only pipelines and wavelet–entropy–ANN approaches, our study (1) employs SWT (undecimated, shift-invariant) rather than DWT to stabilize sub-band features on short resting segments, (2) uses FLDA as a supervised projection to mitigate small-sample covariance pathologies before classification, (3) provides band-specific discriminative insight (β > γ/θ) under a subject-wise protocol, and (4) targets low-compute deployment. These choices yield a reproducible baseline with competitive accuracy and clear clinical interpretability. Future work will benchmark kernel/regularized discriminants and lightweight deep models as cohort size and compute permit. Full article
(This article belongs to the Special Issue Advances in the Diagnosis of Nervous System Diseases—3rd Edition)
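A hedged sketch of the pipeline described above: a 6-level stationary wavelet transform (db4) of an EEG segment, simple energy features from the retained detail levels, and Fisher's LDA as the supervised projection and classifier. The per-band variance feature and the toy data are illustrative assumptions.

```python
import numpy as np
import pywt
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def swt_band_features(sig, wavelet="db4", level=6, keep_levels=(3, 4, 6)):
    # pywt.swt returns [(cA_L, cD_L), ..., (cA_1, cD_1)]; input length must be a multiple of 2**level.
    coeffs = pywt.swt(sig, wavelet, level=level)
    feats = []
    for lv in keep_levels:
        cD = coeffs[level - lv][1]     # detail coefficients at level lv
        feats.append(np.var(cD))       # simple per-band energy feature (assumption)
    return np.array(feats)

# Toy usage on random "EEG" segments.
rng = np.random.default_rng(0)
X = np.array([swt_band_features(rng.standard_normal(1024)) for _ in range(40)])
y = rng.integers(0, 2, size=40)        # ASD vs. control labels (toy)
clf = LinearDiscriminantAnalysis().fit(X, y)
print(clf.score(X, y))
```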

23 pages, 1928 KB  
Systematic Review
Eye Tracking-Enhanced Deep Learning for Medical Image Analysis: A Systematic Review on Data Efficiency, Interpretability, and Multimodal Integration
by Jiangxia Duan, Meiwei Zhang, Minghui Song, Xiaopan Xu and Hongbing Lu
Bioengineering 2025, 12(9), 954; https://doi.org/10.3390/bioengineering12090954 - 5 Sep 2025
Viewed by 768
Abstract
Deep learning (DL) has revolutionized medical image analysis (MIA), enabling early anomaly detection, precise lesion segmentation, and automated disease classification. However, its clinical integration faces two major challenges: reliance on limited, narrowly annotated datasets that inadequately capture real-world patient diversity, and the inherent “black-box” nature of DL decision-making, which complicates physician scrutiny and accountability. Eye tracking (ET) technology offers a transformative solution by capturing radiologists’ gaze patterns to generate supervisory signals. These signals enhance DL models through two key mechanisms: providing weak supervision to improve feature recognition and diagnostic accuracy, particularly when labeled data are scarce, and enabling direct comparison between machine and human attention to bridge interpretability gaps and build clinician trust. This approach also extends effectively to multimodal learning models (MLMs) and vision–language models (VLMs), supporting the alignment of machine reasoning with clinical expertise by grounding visual observations in diagnostic context, refining attention mechanisms, and validating complex decision pathways. Conducted in accordance with the PRISMA statement and registered in PROSPERO (ID: CRD42024569630), this review synthesizes state-of-the-art strategies for ET-DL integration. We further propose a unified framework in which ET innovatively serves as a data efficiency optimizer, a model interpretability validator, and a multimodal alignment supervisor. This framework paves the way for clinician-centered AI systems that prioritize verifiable reasoning, seamless workflow integration, and intelligible performance, thereby addressing key implementation barriers and outlining a path for future clinical deployment. Full article
(This article belongs to the Section Biosignal Processing)

13 pages, 2561 KB  
Article
Unsupervised Bearing Fault Diagnosis Using Masked Self-Supervised Learning and Swin Transformer
by Pengping Luo and Zhiwei Liu
Machines 2025, 13(9), 792; https://doi.org/10.3390/machines13090792 - 1 Sep 2025
Viewed by 611
Abstract
Bearings are vital to rotating machinery, where undetected faults can cause severe failures. Conventional fault diagnosis methods depend on manual feature engineering and labeled data, struggling with complex industrial conditions. This study introduces an innovative unsupervised framework combining masked self-supervised learning with the Swin Transformer for bearing fault diagnosis. The novel integration leverages masked autoencoders to learn robust features from unlabeled vibration signals through reconstruction-based pretraining, while the Swin Transformer’s shifted window attention mechanism enhances efficient capture of fault-related patterns in long-sequence signals. This approach eliminates reliance on labeled data, enabling precise detection of unknown faults. The proposed method achieves 99.53% accuracy on the Paderborn dataset and 100% accuracy on the CWRU dataset, significantly surpassing other unsupervised autoencoder-based methods. This method’s innovative design offers high adaptability and substantial potential for predictive maintenance in industrial applications. Full article
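A minimal sketch of the masked pretraining idea described above: split an unlabeled vibration segment into patches, keep only a random subset visible, and train a model (omitted here) to reconstruct the hidden patches. The patch length and mask ratio are assumptions, not the paper's settings.

```python
import torch

def mask_patches(signal, patch_len=64, mask_ratio=0.75, seed=0):
    # signal: (batch, length); length must be divisible by patch_len
    b, length = signal.shape
    patches = signal.view(b, length // patch_len, patch_len)
    g = torch.Generator().manual_seed(seed)
    n_keep = int(patches.shape[1] * (1 - mask_ratio))
    idx = torch.rand(b, patches.shape[1], generator=g).argsort(dim=1)
    keep_idx = idx[:, :n_keep]                      # indices of the visible patches
    visible = torch.gather(patches, 1,
                           keep_idx.unsqueeze(-1).expand(-1, -1, patch_len))
    return visible, keep_idx, patches               # reconstruction target = all patches

x = torch.randn(8, 1024)                            # unlabeled vibration segments (toy)
visible, keep_idx, target = mask_patches(x)
print(visible.shape)                                # torch.Size([8, 4, 64])
```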

19 pages, 20365 KB  
Article
GeoNR-PSW: Prompt-Aligned Localization Leveraging Ray-Traced 5G Channels and LLM Reasoning
by Wenbin Shi, Zhongxu Zhan, Jingsheng Lei and Xingli Gan
Sensors 2025, 25(17), 5397; https://doi.org/10.3390/s25175397 - 1 Sep 2025
Viewed by 464
Abstract
Accurate user-equipment positioning is crucial for the successful deployment of 5G New Radio (NR) networks, particularly in dense urban and vehicular environments where multipath effects and signal blockage frequently compromise GNSS reliability. Building upon the pseudo-signal-word (PSW) paradigm initially developed for low-power wide-area networks, this paper proposes GeoNR-PSW, a novel localization architecture designed for sub-6 GHz (FR1, 2.8 GHz) and mmWave (FR2, 60 GHz) fingerprints from the Raymobtime S007 dataset. GeoNR-PSW encodes 5G channel snapshots into concise PSW sequences and leverages a frozen GPT-2 backbone enhanced by lightweight PSW-Adapters to enable few-shot 3D localization. Despite the limited size of the dataset, the proposed method achieves median localization errors of 5.90 m at FR1 and 3.25 m at FR2. These results highlight the potential of prompt-aligned language models for accurate and scalable 5G positioning with minimal supervision. Full article
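A hedged sketch of the overall idea above: a frozen GPT-2 backbone over pseudo-signal-word token sequences with only a small trainable head regressing 3D coordinates. The lightweight PSW-Adapters are omitted and all sizes are assumptions; this is not the paper's code.

```python
import torch
import torch.nn as nn
from transformers import GPT2Config, GPT2Model

# Small randomly initialized GPT-2 as a stand-in backbone (the paper uses a pretrained model).
backbone = GPT2Model(GPT2Config(vocab_size=1024, n_layer=4, n_head=4, n_embd=128))
for p in backbone.parameters():
    p.requires_grad = False                      # freeze the language-model backbone

head = nn.Linear(128, 3)                         # trainable head -> (x, y, z) coordinates

psw_tokens = torch.randint(0, 1024, (2, 32))     # two encoded 5G channel snapshots (toy)
hidden = backbone(input_ids=psw_tokens).last_hidden_state   # (2, 32, 128)
coords = head(hidden.mean(dim=1))                # pool over tokens, regress position
print(coords.shape)                              # torch.Size([2, 3])
```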

16 pages, 951 KB  
Article
Deep LSTM Surrogates for MEMD: A Noise-Assisted Approach to EEG Intrinsic Mode Function Extraction
by Pablo Andres Muñoz-Gutierrez, Diego Fernando Ramirez-Jimenez and Eduardo Giraldo
Information 2025, 16(9), 754; https://doi.org/10.3390/info16090754 - 31 Aug 2025
Viewed by 393
Abstract
In this paper, we propose a deep learning-based surrogate model for Multivariate Empirical Mode Decomposition (MEMD) using Long Short-Term Memory (LSTM) networks, aimed at efficiently extracting Intrinsic Mode Functions (IMFs) from electroencephalographic (EEG) signals. Unlike traditional data-driven methods, our approach leverages temporal sequence modeling to learn the decomposition process in an end-to-end fashion. We further enhance the decomposition targets by employing Noise-Assisted MEMD (NA-MEMD), which stabilizes mode separation and mitigates mode mixing effects, leading to better supervised learning signals. Extensive experiments on synthetic and real EEG data demonstrate the superior performance of the proposed LSTM surrogate over conventional feedforward neural networks and standard MEMD-based targets. Specifically, the LSTM trained on NA-MEMD outputs achieved the lowest mean squared error (MSE) and the highest signal-to-noise ratio (SNR), significantly outperforming the feedforward baseline, even when compared using the Power Spectral Density (PSD). These results confirm the effectiveness of combining LSTM architectures with noise-assisted decomposition strategies to approximate nonlinear signal analysis tasks such as MEMD. The proposed surrogate model offers a fast and accurate alternative to classical empirical methods, enabling real-time and scalable EEG analysis. Full article
(This article belongs to the Special Issue Signal Processing and Machine Learning, 2nd Edition)
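A minimal PyTorch sketch of the surrogate idea above: an LSTM that maps a multichannel EEG window to a fixed number of IMFs per channel, trained with an MSE loss against precomputed NA-MEMD targets. The channel count, number of IMFs, and layer sizes are assumptions.

```python
import torch
import torch.nn as nn

class LSTMSurrogate(nn.Module):
    def __init__(self, n_channels=8, n_imfs=4, hidden=128):
        super().__init__()
        self.lstm = nn.LSTM(n_channels, hidden, num_layers=2, batch_first=True)
        self.proj = nn.Linear(hidden, n_channels * n_imfs)
        self.n_channels, self.n_imfs = n_channels, n_imfs

    def forward(self, x):                        # x: (batch, time, channels)
        h, _ = self.lstm(x)
        out = self.proj(h)                       # (batch, time, channels * n_imfs)
        return out.view(x.shape[0], x.shape[1], self.n_channels, self.n_imfs)

model = LSTMSurrogate()
eeg = torch.randn(2, 512, 8)                     # two EEG windows (toy)
imf_targets = torch.randn(2, 512, 8, 4)          # NA-MEMD IMFs, precomputed offline
loss = nn.functional.mse_loss(model(eeg), imf_targets)
loss.backward()
```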

23 pages, 5508 KB  
Article
From CSI to Coordinates: An IoT-Driven Testbed for Individual Indoor Localization
by Diana Macedo, Miguel Loureiro, Óscar G. Martins, Joana Coutinho Sousa, David Belo and Marco Gomes
Future Internet 2025, 17(9), 395; https://doi.org/10.3390/fi17090395 - 30 Aug 2025
Viewed by 599
Abstract
Indoor wireless networks face increasing challenges in maintaining stable coverage and performance, particularly with the widespread use of high-frequency Wi-Fi and growing demands from smart home devices. Traditional methods to improve signal quality, such as adding access points, often fall short in dynamic environments where user movement and physical obstructions affect signal behavior. In this work, we propose a system that leverages existing Internet of Things (IoT) devices to perform real-time user localization and network adaptation using fine-grained Channel State Information (CSI) and Received Signal Strength Indicator (RSSI) measurements. We deploy multiple ESP-32 microcontroller-based receivers in fixed positions to capture wireless signal characteristics and process them through a pipeline that includes filtering, segmentation, and feature extraction. Using supervised machine learning, we accurately predict the user’s location within a defined indoor grid. Our system achieves over 82% accuracy in a realistic laboratory setting and shows improved performance when excluding redundant sensors. The results demonstrate the potential of communication-based sensing to enhance both user tracking and wireless connectivity without requiring additional infrastructure. Full article
(This article belongs to the Special Issue Joint Design and Integration in Smart IoT Systems, 2nd Edition)
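A hedged sketch of the localization step described above: per-window summary features from CSI amplitudes and RSSI feed a supervised classifier that predicts a grid cell. The window length, feature set, and grid size are illustrative assumptions.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def window_features(csi_amp, rssi):
    # csi_amp: (n_samples, n_subcarriers), rssi: (n_samples,) for one time window
    return np.hstack([csi_amp.mean(axis=0), csi_amp.std(axis=0),
                      [rssi.mean(), rssi.std()]])

rng = np.random.default_rng(1)
X = np.array([window_features(rng.standard_normal((100, 52)),
                              rng.normal(-55, 3, 100)) for _ in range(200)])
y = rng.integers(0, 9, size=200)     # 3x3 indoor grid cells (toy labels)

clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
print(clf.score(X, y))
```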

28 pages, 2198 KB  
Article
A Large Kernel Convolutional Neural Network with a Noise Transfer Mechanism for Real-Time Semantic Segmentation
by Jinhang Liu, Yuhe Du, Jing Wang and Xing Tang
Sensors 2025, 25(17), 5357; https://doi.org/10.3390/s25175357 - 29 Aug 2025
Viewed by 485
Abstract
In semantic segmentation tasks, large kernels and Atrous convolution have been utilized to increase the receptive field, enabling models to achieve competitive performance with fewer parameters. However, due to the fixed size of kernel functions, networks incorporating large convolutional kernels are limited in adaptively capturing multi-scale features and fail to effectively leverage global contextual information. To address this issue, we combine Atrous convolution with large kernel convolution, using different dilation rates to compensate for the single-scale receptive field limitation of large kernels. Simultaneously, we employ a dynamic selection mechanism to adaptively highlight the most important spatial features based on global information. Additionally, to enhance the model’s ability to fit the true label distribution, we propose a Multi-Scale Contextual Noise Transfer Matrix (NTM), which uses high-order consistency information from neighborhood representations to estimate the NTM and correct supervision signals, thereby improving the model’s generalization capability. Extensive experiments conducted on Cityscapes, ADE20K, and COCO-Stuff-10K demonstrate that this approach achieves a new state-of-the-art balance between speed and accuracy. Specifically, LKNTNet achieves 80.05% mIoU on Cityscapes with an inference speed of 80.7 FPS and 42.7% mIoU on ADE20K with an inference speed of 143.6 FPS. Full article
(This article belongs to the Section Sensing and Imaging)
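A minimal PyTorch sketch of the convolutional idea described above: parallel large-kernel depthwise convolutions with different dilation rates, fused by a selection weight computed from global context. Channel counts, kernel size, and dilation rates are assumptions; the noise transfer matrix is not shown.

```python
import torch
import torch.nn as nn

class MultiDilationLargeKernel(nn.Module):
    def __init__(self, channels=64, kernel=7, dilations=(1, 2, 3)):
        super().__init__()
        self.branches = nn.ModuleList([
            nn.Conv2d(channels, channels, kernel, padding=d * (kernel // 2),
                      dilation=d, groups=channels)        # depthwise large-kernel branch
            for d in dilations])
        self.select = nn.Sequential(                      # global-context gating over branches
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, len(dilations), 1),
            nn.Softmax(dim=1))

    def forward(self, x):
        outs = torch.stack([b(x) for b in self.branches], dim=1)   # (B, D, C, H, W)
        w = self.select(x).unsqueeze(2)                             # (B, D, 1, 1, 1)
        return (outs * w).sum(dim=1)

block = MultiDilationLargeKernel()
print(block(torch.randn(2, 64, 32, 32)).shape)             # torch.Size([2, 64, 32, 32])
```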

17 pages, 3976 KB  
Article
A Self-Supervised Pre-Trained Transformer Model for Accurate Genomic Prediction of Swine Phenotypes
by Weixi Xiang, Zhaoxin Li, Qixin Sun, Xiujuan Chai and Tan Sun
Animals 2025, 15(17), 2485; https://doi.org/10.3390/ani15172485 - 24 Aug 2025
Viewed by 482
Abstract
Accurate genomic prediction of complex phenotypes is crucial for accelerating genetic progress in swine breeding. However, conventional methods like Genomic Best Linear Unbiased Prediction (GBLUP) face limitations in capturing complex non-additive effects that contribute significantly to phenotypic variation, restricting the potential accuracy of phenotype prediction. To address this challenge, we introduce a novel framework based on a self-supervised, pre-trained encoder-only Transformer model. Its core novelty lies in tokenizing SNP sequences into non-overlapping 6-mers (sequences of 6 SNPs), enabling the model to directly learn local haplotype patterns instead of treating SNPs as independent markers. The model first undergoes self-supervised pre-training on the unlabeled version of the same SNP dataset used for subsequent fine-tuning, learning intrinsic genomic representations through a masked 6-mer prediction task. Subsequently, the pre-trained model is fine-tuned on labeled data to predict phenotypic values for specific economic traits. Experimental validation demonstrates that our proposed model consistently outperforms baseline methods, including GBLUP and a Transformer of the same architecture trained from scratch (without pre-training), in prediction accuracy across key economic traits. This outperformance suggests the model’s capacity to capture non-linear genetic signals missed by linear models. This research contributes not only a new, more accurate methodology for genomic phenotype prediction but also validates the potential of self-supervised learning to decipher complex genomic patterns for direct application in breeding programs. Ultimately, this approach offers a powerful new tool to enhance the rate of genetic gain in swine production by enabling more precise selection based on predicted phenotypes. Full article
(This article belongs to the Section Pigs)
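A hedged sketch of the tokenization step described above: group a genotype-coded SNP sequence into non-overlapping 6-mers and map each 6-mer to a vocabulary id, ready for masked pretraining. The genotype coding and special tokens are illustrative assumptions.

```python
def snp_to_6mers(snps, k=6):
    # snps: list of genotype codes, e.g. [0, 2, 1, 1, 0, 2, ...]; trailing remainder is dropped
    return ["".join(str(g) for g in snps[i:i + k])
            for i in range(0, len(snps) - len(snps) % k, k)]

def build_vocab(sequences):
    vocab = {"[PAD]": 0, "[MASK]": 1}    # special tokens for masked pretraining (assumption)
    for seq in sequences:
        for tok in seq:
            vocab.setdefault(tok, len(vocab))
    return vocab

seqs = [snp_to_6mers([0, 2, 1, 1, 0, 2, 0, 0, 1, 2, 2, 1])]
vocab = build_vocab(seqs)
ids = [[vocab[t] for t in seq] for seq in seqs]
print(seqs, ids)   # [['021102', '001221']] [[2, 3]]
```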

13 pages, 1341 KB  
Proceeding Paper
Predicting Nurse Stress Levels Using Time-Series Sensor Data and Comparative Evaluation of Classification Algorithms
by Ayşe Çiçek Korkmaz, Adem Korkmaz and Selahattin Koşunalp
Eng. Proc. 2025, 104(1), 30; https://doi.org/10.3390/engproc2025104030 - 22 Aug 2025
Viewed by 372
Abstract
This study proposes a machine learning-based framework for classifying occupational stress levels among nurses using physiological time-series data collected from wearable sensors. The dataset comprises multimodal signals including electrodermal activity (EDA), heart rate (HR), skin temperature (TEMP), and tri-axial accelerometer measurements (X, Y, Z), which are labeled into three categorical stress levels: low (0), medium (1), and high (2). To enhance the usability of the raw data, a resampling process was performed to aggregate the measurements into one-minute intervals, followed by the application of the Synthetic Minority Over-sampling Technique (SMOTE) to mitigate severe class imbalance. Subsequently, a comparative classification analysis was conducted using four supervised learning algorithms: Random Forest, XGBoost, k-Nearest Neighbors (k-NN), and LightGBM. Model performances were evaluated based on accuracy, weighted F1-score, and confusion matrices to ensure robustness across imbalanced class distributions. Additionally, temporal pattern analyses by day of the week and hour of the day revealed significant trends in stress variation, underscoring the influence of circadian and organizational factors. Among the models tested, ensemble-based methods, particularly Random Forest and XGBoost with optimized hyperparameters, demonstrated superior predictive performance. These findings highlight the feasibility of integrating real-time, sensor-driven stress monitoring systems into healthcare environments to support proactive workforce management and improve care quality. Full article
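A hedged sketch of the preprocessing and classification steps described above: resample the multimodal streams to one-minute aggregates, balance the three stress classes with SMOTE, and fit a random forest. Column names, aggregation rules, and the synthetic data are illustrative assumptions.

```python
import numpy as np
import pandas as pd
from imblearn.over_sampling import SMOTE
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
minute_labels = rng.permutation(np.repeat([0, 1, 2], [70, 20, 10]))   # 100 one-minute stress labels
idx = pd.date_range("2024-01-01", periods=100 * 60, freq="s")
df = pd.DataFrame({"EDA": rng.random(6000),
                   "HR": rng.normal(75, 8, 6000),
                   "TEMP": rng.normal(33, 0.5, 6000),
                   "stress": np.repeat(minute_labels, 60)}, index=idx)

# Aggregate raw 1 Hz measurements into one-minute intervals.
per_min = df.resample("1min").agg({"EDA": "mean", "HR": "mean",
                                   "TEMP": "mean", "stress": "first"})
X, y = per_min[["EDA", "HR", "TEMP"]], per_min["stress"].astype(int)

# Oversample the minority classes, then fit one of the compared ensemble models.
X_bal, y_bal = SMOTE(random_state=0).fit_resample(X, y)
clf = RandomForestClassifier(random_state=0).fit(X_bal, y_bal)
print(clf.score(X_bal, y_bal))
```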