Search Results (2,787)

Search Parameters:
Keywords = transformer encoder

33 pages, 4267 KB  
Article
Isolated German Sign Language Recognition for Classifying Polar Answers Using Landmarks and Lightweight Transformers
by Cristina Luna-Jiménez, Lennart Eing, Sergio Esteban-Romero, Manuel Gil-Martín and Elisabeth André
Appl. Sci. 2025, 15(21), 11571; https://doi.org/10.3390/app152111571 - 29 Oct 2025
Abstract
Sign Languages are the primary communication modality of deaf communities, yet building effective Isolated Sign Language Recognition (ISLR) systems remains difficult under data limitations. In this work, we curated a sub-dataset from the DGS-Korpus focused on recognizing affirmations and negations (polar answers) in German Sign Language (DGS). We designed lightweight transformer models using landmark-based inputs and evaluated them on two tasks: the binary classification of affirmations versus negations (binary semantic recognition) and the multi-class recognition of sign variations expressing positive or negative replies (multi-class gloss recognition). The main contribution of the article is therefore the exploration of models for polar answer recognition in DGS and of the differences between multi-class and binary classification. Our best binary model achieved an accuracy of 97.71% using only hand landmarks without Positional Encoding, highlighting the potential of lightweight landmark-based transformers for efficient ISLR in constrained domains.
(This article belongs to the Special Issue Affective Computing for Human–Computer Interactions)
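As a rough illustration of the kind of model this abstract describes, the sketch below builds a lightweight Transformer encoder that classifies a sequence of hand-landmark frames into polar answers. It is not the authors' code: the landmark count (42 x 3 coordinates), layer sizes, and mean pooling are assumptions, and, following the abstract's best binary model, no positional encoding is added.

```python
# Minimal sketch (not the authors' implementation): a lightweight Transformer
# encoder over per-frame hand landmarks, without positional encoding.
import torch
import torch.nn as nn

class LandmarkTransformer(nn.Module):
    def __init__(self, n_landmarks=42, n_coords=3, d_model=64, n_heads=4,
                 n_layers=2, n_classes=2):
        super().__init__()
        self.proj = nn.Linear(n_landmarks * n_coords, d_model)  # per-frame embedding
        layer = nn.TransformerEncoderLayer(d_model, n_heads, dim_feedforward=128,
                                           batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=n_layers)
        self.head = nn.Linear(d_model, n_classes)
        # Note: no positional encoding, mirroring the abstract's best binary model.

    def forward(self, x):                # x: (batch, frames, n_landmarks * n_coords)
        h = self.encoder(self.proj(x))   # (batch, frames, d_model)
        return self.head(h.mean(dim=1))  # mean-pool over frames -> class logits

logits = LandmarkTransformer()(torch.randn(8, 30, 42 * 3))  # 8 clips, 30 frames each
print(logits.shape)  # torch.Size([8, 2])
```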
28 pages, 1624 KB  
Article
Domain-Constrained Stacking Framework for Credit Default Prediction
by Ming-Liang Ding, Yu-Liang Ma and Fu-Qiang You
Mathematics 2025, 13(21), 3451; https://doi.org/10.3390/math13213451 (registering DOI) - 29 Oct 2025
Abstract
Accurate and reliable credit risk classification is fundamental to the stability of financial systems and the efficient allocation of capital. However, with the rapid expansion of customer information in both volume and complexity, traditional rule-based or purely statistical approaches have become increasingly inadequate. Motivated by these challenges, this study introduces a domain-constrained stacking ensemble framework that systematically integrates business knowledge with advanced machine learning techniques. First, domain heuristics are embedded at multiple stages of the pipeline: threshold-based outlier removal improves data quality, target variable redefinition ensures consistency with industry practice, and feature discretization with monotonicity verification enhances interpretability. Then, each variable is transformed through Weight-of-Evidence (WOE) encoding and evaluated via Information Value (IV), which enables robust feature selection and effective dimensionality reduction. Next, on this transformed feature space, we train logistic regression (LR), random forest (RF), extreme gradient boosting (XGBoost), and a two-layer stacking ensemble. Finally, the ensemble aggregates cross-validated out-of-fold predictions from LR, RF and XGBoost as meta-features, which are fused by a meta-level logistic regression, thereby capturing both linear and nonlinear relationships while mitigating overfitting. Experimental results across two credit datasets demonstrate that the proposed framework achieves superior predictive performance compared with single models, highlighting its potential as a practical solution for credit risk assessment in real-world financial applications.
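For readers unfamiliar with the WOE/IV step mentioned in this abstract, here is a minimal, self-contained sketch under our own assumptions (toy data, a single pre-binned feature, simple additive smoothing); it is illustrative, not the paper's pipeline.

```python
# Minimal sketch: Weight-of-Evidence (WOE) per bin and Information Value (IV)
# for one binned feature, as commonly used in credit scorecards.
import numpy as np
import pandas as pd

def woe_iv(binned, target, eps=0.5):
    """binned: bin index per customer; target: 1 = default, 0 = non-default."""
    tab = pd.crosstab(binned, target)                # rows: bins, columns: {0, 1}
    good = (tab[0] + eps) / (tab[0].sum() + eps)     # share of non-defaults per bin
    bad = (tab[1] + eps) / (tab[1].sum() + eps)      # share of defaults per bin
    woe = np.log(good / bad)
    iv = float(((good - bad) * woe).sum())
    return woe, iv

rng = np.random.default_rng(0)
income_bin = pd.Series(pd.qcut(rng.normal(size=1000), 5, labels=False))  # toy feature
default = pd.Series((rng.random(1000) < 0.2).astype(int))                # toy target
woe, iv = woe_iv(income_bin, default)
print(woe.round(3), f"IV = {iv:.3f}")
```

Features whose IV falls below a chosen threshold would be dropped, and the WOE-encoded variables would feed the LR/RF/XGBoost base learners whose out-of-fold predictions the stacking meta-model combines.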
26 pages, 896 KB  
Article
EXPERT: EXchange Rate Prediction Using Encoder Representation from Transformers
by Efstratios Bilis, Theophilos Papadimitriou, Konstantinos Diamantaras and Konstantinos Goulianas
Forecasting 2025, 7(4), 65; https://doi.org/10.3390/forecast7040065 (registering DOI) - 29 Oct 2025
Abstract
This study introduces a Transformer-based forecasting tool termed EXPERT (EXchange rate Prediction using Encoder Representation from Transformers) and applies it to exchange rate forecasting. We developed and trained a Transformer-based forecasting model, then evaluated its performance on nine currency pairs with various characteristics. Finally, we benchmarked its effectiveness against six established forecasting models: Linear Regression, Random Forest, Stochastic Gradient Descent, XGBoost, Bagging Regression, and Long Short-Term Memory. Our dataset covers the period from 1999 to 2022. The models were evaluated for their ability to predict the next day’s closing price using three performance metrics. In addition, the EXPERT system was evaluated on its ability to extend forecast horizons and as the core of a trading strategy. The model’s robustness was further evaluated using the Multiple Comparisons with the Best (MCB) metric on five dataset samples.
(This article belongs to the Section Forecasting in Economics and Management)
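The evaluation protocol described here (predicting the next day's closing price from past observations) can be framed as in the following sketch, which uses synthetic data, an assumed 30-day lookback window, and two of the benchmark model families named in the abstract; it is not the EXPERT implementation.

```python
# Minimal sketch: next-day closing-price prediction as supervised learning
# over sliding lookback windows, with a chronological train/test split.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_absolute_error

def make_windows(close, lookback=30):
    """X[i] = the `lookback` closes before day i, y[i] = close on day i."""
    X = np.stack([close[i - lookback:i] for i in range(lookback, len(close))])
    return X, close[lookback:]

rng = np.random.default_rng(1)
close = np.cumsum(rng.normal(0, 0.005, size=2000)) + 1.10   # synthetic FX-like series
X, y = make_windows(close)
split = int(0.8 * len(X))                                   # no shuffling: keep time order
for model in (LinearRegression(),
              RandomForestRegressor(n_estimators=200, random_state=0)):
    model.fit(X[:split], y[:split])
    pred = model.predict(X[split:])
    print(type(model).__name__, "MAE:", round(mean_absolute_error(y[split:], pred), 5))
```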
14 pages, 1656 KB  
Article
Capacity Estimation Method for Lithium-Ion Battery Based on Frequency-Domain Enhancement and Residual Connection
by Wenqing Xu and Dejin Yan
Processes 2025, 13(11), 3467; https://doi.org/10.3390/pr13113467 - 28 Oct 2025
Abstract
With the widespread adoption of lithium-ion batteries in electric vehicles, energy storage, and consumer electronics, accurate capacity estimation has become critical for battery management systems (BMS). To address the limitations of existing methods—which emphasize time-domain features, struggle to capture periodic degradation and high-frequency disturbances in the frequency domain, and whose deep networks often under-represent long-term degradation trends due to gradient issues—this paper proposes a frequency-domain enhanced Transformer for lithium-ion battery capacity estimation. Specifically, for each cycle we apply FFT to voltage, current, and temperature signals to extract frequency-domain features and fuse them with time-domain statistics; a residual connection is introduced within the Transformer encoder to stabilize optimization and preserve long-term degradation trends, enabling high-precision capacity estimation. Evaluated on three batches of the MIT public fast-charging dataset, the method achieves Avg RMSE 0.0013, Avg MAE 0.0006, and Avg R2 0.9977 on the test set; compared with the Transformer baseline, RMSE decreases by 60.6%, MAE decreases by 73.9%, and R2 increases by 0.83%. The contribution lies in jointly embedding explicit time–frequency feature fusion and residual connections into a Transformer backbone to obtain accurate estimates with stable generalization. In terms of societal benefit, more reliable capacity/health estimates can support better BMS decision-making, improve the safety and lifetime of electric vehicles and energy-storage systems, and reduce lifecycle costs.
(This article belongs to the Special Issue Fault Diagnosis of Equipment in the Process Industry)
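A minimal sketch of the per-cycle time/frequency feature fusion described above, with our own assumptions about which statistics to compute and how many FFT bins to keep:

```python
# Minimal sketch: fuse time-domain statistics with low-frequency FFT magnitudes
# of voltage, current, and temperature for one charge/discharge cycle.
import numpy as np

def cycle_features(voltage, current, temperature, n_bins=8):
    feats = []
    for signal in (voltage, current, temperature):
        signal = np.asarray(signal, dtype=float)
        # time-domain statistics
        feats += [signal.mean(), signal.std(), signal.min(), signal.max()]
        # frequency-domain features: magnitudes of the first n_bins rFFT coefficients
        spectrum = np.abs(np.fft.rfft(signal - signal.mean()))
        feats += list(spectrum[:n_bins])
    return np.array(feats)              # 3 * (4 + n_bins) = 36 features per cycle

t = np.linspace(0, 1, 500)
v = 3.6 + 0.2 * np.sin(2 * np.pi * 5 * t) + 0.01 * np.random.randn(500)
i = 1.0 + 0.05 * np.random.randn(500)
temp = 25 + 2 * t
print(cycle_features(v, i, temp).shape)  # (36,)
```

The per-cycle vectors produced this way would then be fed to the Transformer encoder, whose residual connection (as we read the abstract) is added on top of the skip paths already present inside standard encoder layers.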
47 pages, 5755 KB  
Article
ZeroDay-LLM: A Large Language Model Framework for Zero-Day Threat Detection in Cybersecurity
by Mohammed Abdullah Alsuwaiket
Information 2025, 16(11), 939; https://doi.org/10.3390/info16110939 (registering DOI) - 28 Oct 2025
Abstract
Zero-day attacks pose unprecedented challenges to modern cybersecurity frameworks, exploiting unknown vulnerabilities that evade traditional signature-based detection systems. This paper presents ZeroDay-LLM, a novel large language model framework specifically designed for real-time zero-day threat detection in IoT and cloud networks. The proposed system integrates lightweight edge encoders with centralized transformer-based reasoning engines, enabling contextual understanding of network traffic patterns and behavioral anomalies. Through comprehensive evaluation on benchmark cybersecurity datasets including CICIDS2017, NSL-KDD, and UNSW-NB15, ZeroDay-LLM demonstrates superior performance, with a 97.8% accuracy in detecting novel attack signatures, a 23% reduction in false positives compared to traditional intrusion detection systems, and enhanced resilience against adversarial evasion techniques. The framework achieves real-time processing capabilities with an average latency of 12.3 ms per packet analysis while maintaining scalability across heterogeneous network infrastructures. Experimental results across urban, rural, and mixed deployment scenarios validate the practical applicability and robustness of the proposed approach.
(This article belongs to the Special Issue Cyber Security in IoT)
17 pages, 1454 KB  
Technical Note
PolarFormer: A Registration-Free Fusion Transformer with Polar Coordinate Position Encoding for Multi-View SAR Target Recognition
by Xiang Yu, Ying Qian, Guodong Jin, Zhe Geng and Daiyin Zhu
Remote Sens. 2025, 17(21), 3559; https://doi.org/10.3390/rs17213559 - 28 Oct 2025
Abstract
Multi-view Synthetic Aperture Radar (SAR) provides rich information for target recognition. However, fusing features from unaligned multi-view images presents challenges for existing methods. Conventional early fusion methods often rely on image registration, a process that is computationally intensive and can introduce feature distortions. More recent registration-free approaches based on the Transformer architecture are constrained by standard position encodings, which were not designed to represent the rotational relationships among multi-view SAR data and thus can cause spatial ambiguity. To address this specific limitation of position encodings, we propose a registration-free fusion framework based on a spatially aware Transformer. The framework includes two key components: (1) a multi-view polar coordinate position encoding that models the geometric relationships of patches both within and across views in a unified coordinate system; and (2) a spatially aware self-attention mechanism that injects this geometric information as a learnable inductive bias. Experiments were conducted on our self-developed FAST-Vehicle dataset, which provides full 360° azimuthal coverage. The results show that our method outperforms both registration-based strategies and Transformer baselines that use conventional position encodings. This work indicates that for multi-view SAR fusion, explicitly modeling the underlying geometric relationships with a suitable position encoding is an effective alternative to physical image registration or the use of generic, single-image position encodings.
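The sketch below illustrates one plausible reading of a multi-view polar coordinate position encoding: patch centers are converted to radius and angle about the image center, the view azimuth is added to the angle so all views share one frame, and the result is projected to the token dimension. The feature layout, the linear projection, and the view flag are assumptions, not the released PolarFormer design.

```python
# Minimal sketch: polar-coordinate position features shared across views,
# mapped to the embedding dimension and added to patch tokens.
import math
import torch
import torch.nn as nn

class PolarPositionEncoding(nn.Module):
    def __init__(self, d_model=128):
        super().__init__()
        self.proj = nn.Linear(4, d_model)   # [r, sin(theta), cos(theta), view flag]

    def forward(self, centers, view_azimuth_deg, view_id=0.0):
        # centers: (n_patches, 2) patch-center offsets (x, y) from the image center
        r = centers.norm(dim=-1, keepdim=True)
        theta = torch.atan2(centers[:, 1:2], centers[:, 0:1]) + math.radians(view_azimuth_deg)
        feats = torch.cat([r, theta.sin(), theta.cos(),
                           torch.full_like(r, view_id)], dim=-1)
        return self.proj(feats)             # (n_patches, d_model)

centers = torch.randn(49, 2)                # stand-in for 49 patch-center offsets
pe = PolarPositionEncoding()(centers, view_azimuth_deg=30.0)
print(pe.shape)                              # torch.Size([49, 128])
```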
19 pages, 2431 KB  
Article
Predicting the Remaining Service Life of Power Transformers Using Machine Learning
by Zimo Gao, Binkai Yu, Jiahe Guang, Shanghua Jiang, Xinze Cong, Minglei Zhang and Lin Yu
Processes 2025, 13(11), 3459; https://doi.org/10.3390/pr13113459 - 28 Oct 2025
Abstract
In response to the insufficient adaptability of power transformer remaining useful life (RUL) prediction under complex working conditions and the difficulty of multi-scale feature fusion, this study proposes an industrial time series prediction model based on a parallel Transformer–BiGRU–GlobalAttention architecture. The parallel Transformer encoder captures long-range temporal dependencies, the BiGRU network enhances local sequence associations through bidirectional modeling, the global attention mechanism dynamically weights key temporal features, and cross-attention achieves spatiotemporal feature interaction and fusion. Experiments were conducted based on the public ETT transformer temperature dataset, employing sliding window and piecewise linear label processing techniques, with MAE, MSE, and RMSE as evaluation metrics. The results show that the model achieved excellent predictive performance on the test set, with an MSE of 0.078, MAE of 0.233, and RMSE of 11.13. Compared with traditional LSTM, CNN-BiGRU-Attention, and other methods, the model achieved improvements of 17.2%, 6.0%, and 8.9%, respectively. Ablation experiments verified that the global attention mechanism rationalizes the feature contribution distribution, with the core temporal feature OT having a contribution rate of 0.41. Multiple experiments demonstrated that this method achieves higher precision than the compared methods.
(This article belongs to the Section Energy Systems)
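Two preprocessing steps named in this abstract, sliding windows and piecewise linear labels, are standard in RUL work; a minimal sketch under an assumed window length and label cap follows (not the paper's exact settings).

```python
# Minimal sketch: sliding windows over a multivariate sensor run and the common
# piecewise-linear RUL label, capped early in life and decreasing linearly to zero.
import numpy as np

def sliding_windows(series, window=64, stride=1):
    """series: (timesteps, features) -> (n_windows, window, features)."""
    idx = range(0, len(series) - window + 1, stride)
    return np.stack([series[i:i + window] for i in idx])

def piecewise_linear_rul(n_steps, cap=130):
    """RUL per time step: min(cap, steps remaining until the end of the run)."""
    return np.minimum(cap, np.arange(n_steps)[::-1])

run = np.random.randn(500, 7)            # one degradation run, 7 sensor channels
X = sliding_windows(run)                 # (437, 64, 7)
y = piecewise_linear_rul(len(run))[63:]  # label each window by its last time step
print(X.shape, y.shape)                  # (437, 64, 7) (437,)
```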
26 pages, 3155 KB  
Article
Symmetry and Asymmetry in Pre-Trained Transformer Models: A Comparative Study of TinyBERT, BERT, and RoBERTa for Chinese Educational Text Classification
by Munire Muhetaer, Xiaoyan Meng, Jing Zhu, Aixiding Aikebaier, Liyaer Zu and Yawen Bai
Symmetry 2025, 17(11), 1812; https://doi.org/10.3390/sym17111812 - 27 Oct 2025
Abstract
With the advancement of educational informatization, vast amounts of Chinese text are generated across online platforms and digital textbooks. Effectively classifying such text is essential for intelligent education systems. This study conducts a systematic comparative evaluation of three Transformer-based models—TinyBERT-4L, BERT-base-Chinese, and RoBERTa-wwm-ext—for Chinese educational text classification. Using a balanced four-category subset of the THUCNews corpus (Education, Technology, Finance, and Stock), the research investigates the trade-off between classification effectiveness and computational efficiency under a unified experimental framework. The experimental results show that RoBERTa-wwm-ext achieves the highest effectiveness (93.12% Accuracy, 93.08% weighted F1), validating the benefits of whole-word masking and extended pre-training. BERT-base-Chinese maintains a balanced performance (91.74% Accuracy, 91.66% F1) with moderate computational demand. These findings reveal a clear symmetry–asymmetry dynamic: structural symmetry arises from the shared Transformer encoder and identical fine-tuning setup, while asymmetry emerges from differences in model scale and pre-training strategy. This interplay leads to distinct accuracy–latency trade-offs, providing practical guidance for deploying pre-trained language models in resource-constrained intelligent education systems.
(This article belongs to the Section Computer)
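The shared fine-tuning setup that gives this comparison its structural symmetry can be sketched as below, assuming the Hugging Face transformers library, the public checkpoints "bert-base-chinese" and "hfl/chinese-roberta-wwm-ext", and illustrative hyperparameters; it is not the authors' training script.

```python
# Minimal sketch: one fine-tuning step for a 4-class Chinese text classifier;
# swapping the checkpoint string changes the encoder while keeping the setup fixed.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

checkpoint = "bert-base-chinese"          # or "hfl/chinese-roberta-wwm-ext"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForSequenceClassification.from_pretrained(checkpoint, num_labels=4)

texts = ["教育部发布新课程标准", "股市今日大幅波动"]   # toy THUCNews-style examples
labels = torch.tensor([0, 3])                           # assumed mapping: 0=Education, 3=Stock
batch = tokenizer(texts, padding=True, truncation=True, max_length=128, return_tensors="pt")

optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)
outputs = model(**batch, labels=labels)   # cross-entropy loss + logits
outputs.loss.backward()
optimizer.step()
print(outputs.logits.shape)               # torch.Size([2, 4])
```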
21 pages, 2145 KB  
Article
AI-Based Decision Support System for Attenuating Traffic Congestion
by Catalin Dumitrescu, Alina-Iuliana Tăbîrcă, Alina Stanciu, Lacramioara Nemtoi, Valentin Radu and Beatrice Elena Gore
Appl. Sci. 2025, 15(21), 11470; https://doi.org/10.3390/app152111470 - 27 Oct 2025
Abstract
The transportation industry and transportation infrastructure are undergoing a profound transformation due to advances in artificial intelligence (AI) algorithms, which are no longer just a concept of the future but a reality. Advanced algorithms, predictive systems, and intelligent automation contribute to optimizing logistics, reducing costs, increasing safety, and reducing traffic congestion. AI is also used to optimize routes by analyzing multiple variables, such as distance, traffic, time constraints, and user preferences, to generate optimal routes between departure and destination points. Route planning systems can be integrated with real-time data on traffic, planned or unforeseen events, and other conditions that may affect the trip. AI algorithms can use this data to adapt routes and estimated arrival times based on changes in traffic or other conditions. The purpose of this article is to develop a model for predicting traffic flows at intersections based on historical and real-time data. The focus is on the genetic algorithm used to optimize a Long Short-Term Memory (LSTM) encoder–decoder. Specifically, the research aims to determine how well the proposed model performs when it is optimized using the genetic algorithm. The results obtained for the proposed GA-LSTM show an average TTS reduction of −18.7%, a maximum improvement of −27.3%, an RMSE of 0.003587, and an MSE of 0.00348 compared to traditional models used in real time for traffic management. Finally, the performance of GA-LSTM was compared with the results reported in the literature to demonstrate the usefulness of the proposed algorithm.
(This article belongs to the Special Issue Sustainable Urban Mobility)
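To make the GA-LSTM idea concrete, the sketch below shows a tiny genetic algorithm over LSTM hyperparameters with selection, crossover, and mutation. The search space is invented, and validation_rmse is a placeholder that would, in practice, train the encoder-decoder LSTM and return its validation error.

```python
# Minimal sketch: an elitist genetic algorithm searching LSTM hyperparameters.
import random

SEARCH_SPACE = {"hidden": [32, 64, 128, 256], "layers": [1, 2, 3],
                "lr": [1e-4, 5e-4, 1e-3, 5e-3]}

def random_individual():
    return {k: random.choice(v) for k, v in SEARCH_SPACE.items()}

def validation_rmse(ind):      # placeholder fitness; lower is better
    return abs(ind["hidden"] - 128) / 256 + abs(ind["lr"] - 1e-3) + 0.05 * ind["layers"]

def crossover(a, b):
    return {k: random.choice([a[k], b[k]]) for k in SEARCH_SPACE}

def mutate(ind, rate=0.2):
    return {k: (random.choice(v) if random.random() < rate else ind[k])
            for k, v in SEARCH_SPACE.items()}

random.seed(0)
population = [random_individual() for _ in range(12)]
for generation in range(10):
    population.sort(key=validation_rmse)
    parents = population[:4]                       # keep the fittest individuals
    children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                for _ in range(len(population) - len(parents))]
    population = parents + children
print("best hyperparameters:", min(population, key=validation_rmse))
```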
17 pages, 4058 KB  
Article
Cloning and Functional Analysis of the Aldehyde Dehydrogenase Gene VvALDH in the IAA Synthesis Pathway of Volvariella volvacea
by Mingjuan Mao, Lijuan Hou, Lin Ma, Ning Jiang, Jinsheng Lin, Shaoxuan Qu, Huiping Li, Ping Xu, Di Liu and Wei Ji
J. Fungi 2025, 11(11), 773; https://doi.org/10.3390/jof11110773 (registering DOI) - 27 Oct 2025
Abstract
Volvariella volvacea, the Chinese mushroom, is a high-temperature grass-rot fungus with great production potential, yet its low yield limits industrial development. Exogenous sodium acetate (NaAc) has been shown to increase yield by promoting indole-3-acetic acid (IAA) synthesis during the primordium stage, but the underlying mechanism remains unclear. In this study, the aldehyde dehydrogenase gene VvALDH, highly expressed at the primordium stage, was cloned and functionally characterized. VvALDH encodes a 1509 bp cDNA with a conserved aldehyde dehydrogenase domain. Using Agrobacterium-mediated transformation, overexpression lines showed a 4.76-fold increase in VvALDH expression, accompanied by higher biomass (38%), yield (83%), and IAA content (34%), while RNAi lines showed opposite trends. These results demonstrate that VvALDH promotes IAA biosynthesis, enhances primordium differentiation, and increases yield. Further analysis revealed its involvement in multiple IAA biosynthetic pathways, including indolepyruvate, tryptamine, and tryptophan side-chain oxidase pathways. This work clarifies the molecular basis of NaAc-mediated yield improvement and provides a theoretical foundation for genetic and cultivation strategies in V. volvacea.
(This article belongs to the Section Fungal Genomics, Genetics and Molecular Biology)
19 pages, 1950 KB  
Article
Thermo-Mechanical Fault Diagnosis for Marine Steam Turbines: A Hybrid DLinear–Transformer Anomaly Detection Framework
by Ziyi Zou, Guobing Chen, Luotao Xie, Jintao Wang and Zichun Yang
J. Mar. Sci. Eng. 2025, 13(11), 2050; https://doi.org/10.3390/jmse13112050 - 27 Oct 2025
Abstract
Thermodynamic fault diagnosis of marine steam turbines remains challenging due to non-stationary multivariate sensor data under stochastic loads and transient conditions. While conventional threshold-based methods lack the sophistication for such dynamics, existing data-driven Transformers struggle with inherent non-stationarity. To address this, we propose a hybrid DLinear–Transformer framework that synergistically integrates localized trend decomposition with global feature extraction. The model employs a dual-branch architecture with adaptive positional encoding and a gated fusion mechanism to enhance robustness. Extensive evaluations demonstrate the framework’s superiority: on public benchmarks (SMD, SWaT), it achieves statistically significant F1-score improvements of 2.7% and 0.3% over the state-of-the-art TranAD model under a controlled, reproducible setup. Most importantly, validation on a real-world marine steam turbine dataset confirms a leading fault detection accuracy of 94.6% under variable conditions. By providing a reliable foundation for identifying precursor anomalies, this work establishes a robust offline benchmark that paves the way for practical predictive maintenance in marine engineering.
(This article belongs to the Section Ocean Engineering)
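Two ingredients named in this abstract, DLinear-style trend decomposition and gated fusion of local and global branches, can be sketched as follows under our own assumptions about shapes and layer sizes; this is illustrative, not the authors' architecture.

```python
# Minimal sketch: moving-average series decomposition plus a learned gate that
# mixes a "local trend" feature vector with a "global Transformer" feature vector.
import torch
import torch.nn as nn

def decompose(x, kernel=25):
    """x: (batch, time, channels) -> (trend, residual) via a centered moving average."""
    pad = kernel // 2
    xt = nn.functional.avg_pool1d(x.transpose(1, 2), kernel, stride=1, padding=pad,
                                  count_include_pad=False)
    trend = xt.transpose(1, 2)[:, : x.size(1)]
    return trend, x - trend

class GatedFusion(nn.Module):
    def __init__(self, d_model=64):
        super().__init__()
        self.gate = nn.Linear(2 * d_model, d_model)

    def forward(self, local_feat, global_feat):        # both: (batch, d_model)
        g = torch.sigmoid(self.gate(torch.cat([local_feat, global_feat], dim=-1)))
        return g * local_feat + (1 - g) * global_feat  # learned convex mix

x = torch.randn(4, 100, 38)                            # e.g. 38 SMD-like sensor channels
trend, residual = decompose(x)
fused = GatedFusion()(torch.randn(4, 64), torch.randn(4, 64))
print(trend.shape, residual.shape, fused.shape)
```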
33 pages, 1433 KB  
Article
Hybrid Time Series Transformer–Deep Belief Network for Robust Anomaly Detection in Mobile Communication Networks
by Anita Ershadi Oskouei, Mehrdad Kaveh, Francisco Hernando-Gallego and Diego Martín
Symmetry 2025, 17(11), 1800; https://doi.org/10.3390/sym17111800 - 25 Oct 2025
Abstract
The rapid evolution of 5G and emerging 6G networks has increased system complexity, data volume, and security risks, making anomaly detection vital for ensuring reliability and resilience. However, existing machine learning (ML)-based approaches still face challenges related to poor generalization, weak temporal modeling, and degraded accuracy under heterogeneous and imbalanced real-world conditions. To overcome these limitations, a hybrid time series transformer–deep belief network (HTST-DBN) is introduced, integrating the sequential modeling strength of TST with the hierarchical feature representation of DBN, while an improved orchard algorithm (IOA) performs adaptive hyper-parameter optimization. The framework also embodies the concept of symmetry and asymmetry. The IOA introduces controlled symmetry-breaking between exploration and exploitation, while the TST captures symmetric temporal patterns in network traffic whose asymmetric deviations often indicate anomalies. The proposed method is evaluated across four benchmark datasets (ToN-IoT, 5G-NIDD, CICDDoS2019, and Edge-IoTset) that capture diverse network environments, including 5G core traffic, IoT telemetry, mobile edge computing, and DDoS attacks. Experimental evaluation is conducted by benchmarking HTST-DBN against several state-of-the-art models, including TST, bidirectional encoder representations from transformers (BERT), DBN, deep reinforcement learning (DRL), convolutional neural network (CNN), and random forest (RF) classifiers. The proposed HTST-DBN achieves outstanding performance, with the highest accuracy reaching 99.61%, alongside strong recall and area under the curve (AUC) scores. The HTST-DBN framework presents a scalable and reliable solution for anomaly detection in next-generation mobile networks. Its hybrid architecture, reinforced by hyper-parameter optimization, enables effective learning in complex, dynamic, and heterogeneous environments, making it suitable for real-world deployment in future 5G/6G infrastructures.
(This article belongs to the Special Issue AI-Driven Optimization for EDA: Balancing Symmetry and Asymmetry)
16 pages, 3350 KB  
Article
A Novel Demographic Indicator Fusion Network (DIFNet) for Dynamic Fusion of EEG and Demographic Indicators for Robust Depression Detection
by Chaoliang Wang, Qingshu Zhou, Mengfan Li, Jiaxin Li and Jing Zhao
Sensors 2025, 25(21), 6549; https://doi.org/10.3390/s25216549 - 24 Oct 2025
Abstract
Electroencephalography (EEG) has proven to be effective for detecting major depressive disorder (MDD), with deep learning models further advancing its potential. However, the performance of these models may be limited by their neglect of demographic factors (e.g., age, sex, and education), which are known to influence EEG characteristics of depression. To address this, we propose DIFNet, a deep learning framework that dynamically fuses EEG features with demographic indicators (age, sex, and years of education) to enhance depression recognition accuracy. DIFNet is composed of four modules: a multiscale convolutional module, a Transformer encoder module, a temporal convolutional network (TCN) module, and a demographic indicator fusion module. The fusion model leverages convolution to process demographic vectors and integrates them with spatiotemporal EEG features, thereby embedding demographic indicators within the deep learning model for classification. Cross-validation across data trials showed that DIFNet, when fusing age and years of education, achieves a superior accuracy of 99.66%; the dynamic fusion mechanism improves accuracy by 0.72% compared to the baseline without fusing demographic indicators (98.94%), outperforming state-of-the-art methods (SparNet 94.37% and DBGCN 98.30%).
(This article belongs to the Collection EEG-Based Brain–Computer Interface for a Real-Life Appliance)
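A minimal sketch of the demographic fusion idea: a short vector of indicators (age, sex, years of education) is processed by a small convolution and concatenated with the EEG feature vector before classification. Layer sizes and the 1x1 convolution are assumptions, not DIFNet's published configuration.

```python
# Minimal sketch: fuse a 3-element demographic vector with pooled EEG features.
import torch
import torch.nn as nn

class DemographicFusionHead(nn.Module):
    def __init__(self, eeg_dim=128, demo_dim=3, n_classes=2):
        super().__init__()
        self.demo_net = nn.Sequential(nn.Conv1d(1, 8, kernel_size=1), nn.ReLU(),
                                      nn.Flatten())          # -> (batch, 8 * demo_dim)
        self.classifier = nn.Linear(eeg_dim + 8 * demo_dim, n_classes)

    def forward(self, eeg_feat, demographics):
        # eeg_feat: (batch, eeg_dim) output of the EEG branches (CNN/Transformer/TCN)
        # demographics: (batch, demo_dim), e.g. normalized [age, sex, years_of_education]
        d = self.demo_net(demographics.unsqueeze(1))          # conv over the 3 indicators
        return self.classifier(torch.cat([eeg_feat, d], dim=-1))

logits = DemographicFusionHead()(torch.randn(16, 128), torch.randn(16, 3))
print(logits.shape)                                           # torch.Size([16, 2])
```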
25 pages, 1874 KB  
Article
Industry 5.0 Digital DNA: A Genetic Code of Human-Centric Smart Manufacturing
by Khaled Djebbouri, Hind Alofaysan, Fatma Ahmed Hassan and Kamal Si Mohammed
Sustainability 2025, 17(21), 9450; https://doi.org/10.3390/su17219450 - 24 Oct 2025
Abstract
This study proposes and empirically assesses a bio-inspired conceptual framework, termed Digital DNA, for modeling Industry 5.0 transformation as a complementary extension of established Industry 4.0 principles with an explicit focus on human-centricity, sustainability, and resilience. Rather than positing a new industrial revolution, our positioning follows the European Commission’s view that Industry 5.0 complements Industry 4.0 by emphasizing stakeholder value and human-technology symbiosis. We encode organizational capabilities (genotype) into four gene groups, Adaptability, Technology, Governance, and Culture, and link them to five human-centric outcomes (phenotype). Twenty capability genes and ten outcome measures were scored, normalized (0–100 scale), and analyzed using correlations, K-means clustering, and mutation/drift tracking to capture both static maturity levels and dynamic change patterns. Results show that high Industry 5.0 readiness is consistently associated with elevated Governance and Culture scores. Three transformation archetypes were identified: Alpha, representing holistic socio-technical integration; Beta, with strong technical capacity but weaker cultural alignment; and Gamma, with fragmented capabilities and elevated vulnerability. The Digital DNA framework offers a replicable diagnostic tool for linking socio-technical capabilities to human-centric outcomes, enabling readiness assessment and guiding adaptive, ethical manufacturing strategies.
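The analysis steps listed here (scoring capability genes, normalizing to a 0-100 scale, and K-means clustering into archetypes) can be reproduced in outline as below, on toy data with an assumed column layout; the cluster-to-archetype naming is purely illustrative.

```python
# Minimal sketch: normalize capability-gene scores to 0-100 and cluster organizations.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import MinMaxScaler

rng = np.random.default_rng(42)
raw_scores = rng.uniform(1, 5, size=(60, 20))          # 60 organizations x 20 capability genes
scores = MinMaxScaler((0, 100)).fit_transform(raw_scores)

kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(scores)
# Archetype names follow the abstract; mapping them to cluster indices is arbitrary here.
for label, name in enumerate(["Alpha", "Beta", "Gamma"]):
    members = scores[kmeans.labels_ == label]
    print(name, "n =", len(members),
          "mean score of assumed Governance columns:", round(members[:, 8:13].mean(), 1))
```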
25 pages, 4755 KB  
Article
DA-GSGTNet: Dynamic Aggregation Gated Stratified Graph Transformer for Multispectral LiDAR Point Cloud Segmentation
by Qiong Ding, Runyuan Zhang, Alex Hay-Man Ng, Long Tang, Bohua Ling, Dan Wang and Yuelin Hou
Remote Sens. 2025, 17(21), 3515; https://doi.org/10.3390/rs17213515 - 23 Oct 2025
Abstract
Multispectral LiDAR point clouds, which integrate both geometric and spectral information, offer rich semantic content for scene understanding. However, due to data scarcity and distributional discrepancies, existing methods often struggle to balance accuracy and efficiency in complex urban environments. To address these challenges, we propose DA-GSGTNet, a novel segmentation framework that integrates Gated Stratified Graph Transformer Blocks (GSGT-Block) with Dynamic Aggregation Transition Down (DATD). The GSGT-Block employs graph convolutions to enhance the local continuity of windowed attention in sparse neighborhoods and adaptively fuses these features via a gating mechanism. The DATD module dynamically adjusts k-NN strides based on point density, while jointly aggregating coordinates and feature vectors to preserve structural integrity during downsampling. Additionally, we introduce a relative position encoding scheme using quantized lookup tables with a Euclidean distance bias to improve recognition of elongated and underrepresented classes. Experimental results on a benchmark multispectral point cloud dataset demonstrate that DA-GSGTNet achieves 86.43% mIoU, 93.74% mAcc, and 90.78% OA, outperforming current state-of-the-art methods. Moreover, by fine-tuning from source-domain pretrained weights and using only ~30% of the training samples (4 regions) and 30% of the training epochs (30 epochs), we achieve over 90% of the full-training segmentation accuracy (100 epochs). These results validate the effectiveness of transfer learning for rapid convergence and efficient adaptation in data-scarce scenarios, offering practical guidance for future multispectral LiDAR applications with limited annotation.
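One plausible reading of the relative position encoding described above is sketched below: pairwise point offsets are quantized into a learnable lookup table and combined with a Euclidean distance bias before being added to the attention logits. Table size, quantization step, and head count are assumptions, not the DA-GSGTNet settings.

```python
# Minimal sketch: quantized relative-position lookup with a Euclidean-distance bias
# for point-cloud attention within a local window.
import torch
import torch.nn as nn

class QuantizedRelPosBias(nn.Module):
    def __init__(self, n_heads=4, max_offset=2.0, step=0.1):
        super().__init__()
        self.step, self.max_offset = step, max_offset
        n_bins = int(2 * max_offset / step) + 1
        self.table = nn.Parameter(torch.zeros(n_bins, n_bins, n_bins, n_heads))
        self.dist_scale = nn.Parameter(torch.ones(n_heads))

    def forward(self, xyz):                       # xyz: (n_points, 3) window coordinates
        rel = xyz[:, None, :] - xyz[None, :, :]   # (n, n, 3) pairwise offsets
        idx = ((rel.clamp(-self.max_offset, self.max_offset) + self.max_offset)
               / self.step).round().long()        # quantized offsets -> table indices
        bias = self.table[idx[..., 0], idx[..., 1], idx[..., 2]]   # (n, n, heads)
        dist = rel.norm(dim=-1, keepdim=True)                      # Euclidean distance bias
        return (bias - self.dist_scale * dist).permute(2, 0, 1)    # (heads, n, n)

bias = QuantizedRelPosBias()(torch.rand(32, 3) * 2 - 1)
print(bias.shape)                                 # torch.Size([4, 32, 32])
```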