Search Results (1,043)

Search Parameters:
Keywords = cross-domain generalization

24 pages, 773 KB  
Article
Vocabulary at the Living–Machine Interface: A Narrative Review of Shared Lexicon for Hybrid AI
by Andrew Prahl and Yan Li
Biomimetics 2025, 10(11), 723; https://doi.org/10.3390/biomimetics10110723 (registering DOI) - 29 Oct 2025
Abstract
The rapid rise of bio-hybrid robots and hybrid human–AI systems has triggered an explosion of terminology that inhibits clarity and progress. To investigate how terms are defined, we conduct a narrative scoping review and concept analysis. We extract 60 verbatim definitions spanning engineering, human–computer interaction, human factors, biomimetics, philosophy, and policy. Entries are coded on three axes: agency locus (human, shared, machine), integration depth (loose, moderate, high), and normative valence (negative, neutral, positive), and then clustered. Four categories emerged from the analysis: (i) machine-led, low-integration architectures such as neuro-symbolic or “Hybrid-AI” models; (ii) shared, moderately integrated systems like mixed-initiative cobots; (iii) human-led, medium-coupling decision aids; and (iv) human-centric, low-integration frameworks that focus on user agency. Most definitions adopt a generally positive valence, suggesting a gap with risk-heavy popular narratives. We show that, for researchers investigating where living meets machine, terminological precision is more than semantics and it can shape design, accountability, and public trust. This narrative review contributes a comparative taxonomy and a shared lexicon for reporting hybrid systems. Researchers are encouraged to clarify which sense of Hybrid-AI is intended (algorithmic fusion vs. human–AI ensemble), to specify agency locus and integration depth, and to adopt measures consistent with these conceptualizations. Such practices can reduce construct confusion, enhance cross-study comparability, and align design, safety, and regulatory expectations across domains. Full article
(This article belongs to the Section Bioinspired Sensorics, Information Processing and Control)

23 pages, 1313 KB  
Article
Data Component Method Based on Dual-Factor Ownership Identification with Multimodal Feature Fusion
by Shenghao Nie, Jin Shi, Xiaoyang Zhou and Mingxin Lu
Sensors 2025, 25(21), 6632; https://doi.org/10.3390/s25216632 (registering DOI) - 29 Oct 2025
Abstract
In the booming digital economy, data circulation—particularly for massive multimodal data generated by IoT sensor networks—faces critical challenges: ambiguous ownership and broken cross-domain traceability. Traditional property rights theory, ill-suited to data’s non-rivalrous nature, leads to ownership fuzziness after multi-source fusion and traceability gaps in cross-organizational flows, hindering marketization. This study aims to establish native ownership confirmation capabilities in trusted IoT-driven data ecosystems. The approach involves a dual-factor system: the collaborative extraction of text (from sensor-generated inspection reports), numerical (from industrial sensor measurements), visual (from 3D scanning sensors), and spatio-temporal features (from GPS and IoT device logs) generates unique SHA-256 fingerprints (first factor), while RSA/ECDSA private key signatures (linked to sensor node identities) bind ownership (second factor). An intermediate state integrates these with metadata, supported by blockchain (consortium chain + IPFS) and cross-domain protocols optimized for IoT environments to ensure full-link traceability. This scheme, tailored to the characteristics of IoT sensor networks, breaks traditional ownership confirmation bottlenecks in multi-source fusion, demonstrating strong performance in ownership recognition, anti-tampering robustness, cross-domain traceability and encryption performance. It offers technical and theoretical support for standardized data components and the marketization of data elements within IoT ecosystems. Full article
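The dual-factor scheme summarized above (a content fingerprint as the first factor, an owner signature as the second) can be illustrated with a short Python sketch. The serialization format, the toy feature values, and the choice of ECDSA over SECP256R1 via the `cryptography` package are illustrative assumptions; the paper's actual multimodal feature fusion and consortium-chain/IPFS anchoring are not reproduced here.

```python
import hashlib
import json

from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec

def data_fingerprint(features: dict) -> bytes:
    """First factor: canonically serialize the fused multimodal features, then hash."""
    blob = json.dumps(features, sort_keys=True).encode("utf-8")
    return hashlib.sha256(blob).digest()

# Illustrative stand-ins for text, numerical, visual, and spatio-temporal features.
features = {
    "text": [0.12, 0.87],
    "numeric": [3.4, 5.6],
    "visual": [0.40, 0.11],
    "spatiotemporal": [31.23, 121.47, 1698560000],
}
fp = data_fingerprint(features)

# Second factor: bind ownership by signing the fingerprint with the owner's private key.
owner_key = ec.generate_private_key(ec.SECP256R1())
signature = owner_key.sign(fp, ec.ECDSA(hashes.SHA256()))

# Verification with the corresponding public key (raises InvalidSignature on tampering).
owner_key.public_key().verify(signature, fp, ec.ECDSA(hashes.SHA256()))
print("ownership binding verified for fingerprint", fp.hex()[:16] + "...")
```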

25 pages, 2392 KB  
Article
Causal Intervention and Counterfactual Reasoning for Multimodal Pedestrian Trajectory Prediction
by Xinyu Han and Huosheng Xu
J. Imaging 2025, 11(11), 379; https://doi.org/10.3390/jimaging11110379 - 28 Oct 2025
Abstract
Pedestrian trajectory prediction is crucial for autonomous systems navigating human-populated environments. However, existing methods face fundamental challenges including spurious correlations induced by confounding social environments, passive uncertainty modeling that limits prediction diversity, and bias coupling during feature interaction that contaminates trajectory representations. To address these issues, we propose a novel Causal Intervention and Counterfactual Reasoning (CICR) framework that shifts trajectory prediction from associative learning to a causal inference paradigm. Our approach features a hierarchical architecture having three core components: a Multisource Encoder that extracts comprehensive spatio-temporal and social context features; a Causal Intervention Fusion Module that eliminates confounding bias through the front-door criterion and cross-attention mechanisms; and a Counterfactual Reasoning Decoder that proactively generates diverse future trajectories by simulating hypothetical scenarios. Extensive experiments on the ETH/UCY, SDD, and AVD datasets demonstrate superior performance, achieving an average ADE/FDE of 0.17/0.24 on ETH/UCY and 7.13/10.29 on SDD, with particular advantages in long-term prediction and cross-domain generalization. Full article
(This article belongs to the Special Issue Advances in Machine Learning for Computer Vision Applications)
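As one concrete reading of the cross-attention step in the fusion module described above, the PyTorch sketch below lets trajectory features attend to social-context features. The dimensions, the residual-plus-LayerNorm arrangement, and the module name are assumptions for illustration; the front-door adjustment itself is not implemented here.

```python
import torch
import torch.nn as nn

class CrossAttentionFusion(nn.Module):
    """Minimal sketch: trajectory queries attend to social-context keys/values."""
    def __init__(self, dim: int = 64, heads: int = 4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, traj_feats, context_feats):
        # traj_feats: (B, T, dim) queries; context_feats: (B, N, dim) keys/values.
        fused, _ = self.attn(traj_feats, context_feats, context_feats)
        return self.norm(traj_feats + fused)   # residual fusion of the attended context

traj = torch.randn(8, 12, 64)    # 12 observed time steps per pedestrian
ctx = torch.randn(8, 5, 64)      # 5 neighbouring agents as social context
print(CrossAttentionFusion()(traj, ctx).shape)   # torch.Size([8, 12, 64])
```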

38 pages, 9358 KB  
Article
Generation of a Multi-Class IoT Malware Dataset for Cybersecurity
by Mazdak Maghanaki, Soraya Keramati, F. Frank Chen and Mohammad Shahin
Electronics 2025, 14(21), 4196; https://doi.org/10.3390/electronics14214196 - 27 Oct 2025
Abstract
This study introduces a modular, behaviorally curated malware dataset suite consisting of eight independent sets, each specifically designed to represent a single malware class: Trojan, Mirai (botnet), ransomware, rootkit, worm, spyware, keylogger, and virus. In contrast to earlier approaches that aggregate all malware into large, monolithic collections, this work emphasizes the selection of features unique to each malware type. Feature selection was guided by established domain knowledge and detailed behavioral telemetry obtained through sandbox execution and a subsequent report analysis on the AnyRun platform. The datasets were compiled from two primary sources: (i) the AnyRun platform, which hosts more than two million samples and provides controlled, instrumented sandbox execution for malware, and (ii) publicly available GitHub repositories. To ensure data integrity and prevent cross-contamination of behavioral logs, each sample was executed in complete isolation, allowing for the precise capture of both static attributes and dynamic runtime behavior. Feature construction was informed by operational signatures characteristic of each malware category, ensuring that the datasets accurately represent the tactics, techniques, and procedures distinguishing one class from another. This targeted design enabled the identification of subtle but significant behavioral markers that are frequently overlooked in aggregated datasets. Each dataset was balanced to include benign, suspicious, and malicious samples, thereby supporting the training and evaluation of machine learning models while minimizing bias from disproportionate class representation. Across the full suite, 10,000 samples and 171 carefully curated features were included. This constitutes one of the first dataset collections intentionally developed to capture the behavioral diversity of multiple malware categories within the context of Internet of Things (IoT) security, representing a deliberate effort to bridge the gap between generalized malware corpora and class-specific behavioral modeling. Full article

31 pages, 2985 KB  
Article
Heterogeneous Ensemble Sentiment Classification Model Integrating Multi-View Features and Dynamic Weighting
by Song Yang, Jiayao Xing, Zongran Dong and Zhaoxia Liu
Electronics 2025, 14(21), 4189; https://doi.org/10.3390/electronics14214189 - 27 Oct 2025
Viewed by 16
Abstract
With the continuous growth of user reviews, identifying underlying sentiment across multi-source texts efficiently and accurately has become a significant challenge in NLP. Traditional single models in cross-domain sentiment analysis often exhibit insufficient stability, limited generalization capabilities, and sensitivity to class imbalance. Existing ensemble methods predominantly rely on static weighting or voting strategies among homogeneous models, failing to fully leverage the complementary advantages between models. To address these issues, this study proposes a heterogeneous ensemble sentiment classification model integrating multi-view features and dynamic weighting. At the feature learning layer, the model constructs three complementary base learners, a RoBERTa-FC for extracting global semantic features, a BERT-BiGRU for capturing temporal dependencies, and a TextCNN-Attention for focusing on local semantic features, thereby achieving multi-level text representation. At the decision layer, a meta-learner is used to fuse multi-view features, and dynamic uncertainty weighting and attention weighting strategies are employed to adaptively adjust outputs from different base learners. Experimental results across multiple domains demonstrate that the proposed model consistently outperforms single learners and comparison methods in terms of Accuracy, Precision, Recall, F1 Score, and Macro-AUC. On average, the ensemble model achieves a Macro-AUC of 0.9582 ± 0.023 across five datasets, with an Accuracy of 0.9423, an F1 Score of 0.9590, and a Macro-AUC of 0.9797 on the AlY_ds dataset. Moreover, in cross-dataset ranking evaluation based on equally weighted metrics, the model consistently ranks within the top two, confirming its superior cross-domain adaptability and robustness. These findings highlight the effectiveness of the proposed framework in enhancing sentiment classification performance and provide valuable insights for future research on lightweight dynamic ensembles, multilingual, and multimodal applications. Full article
(This article belongs to the Section Artificial Intelligence)
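The dynamic uncertainty weighting mentioned above can be sketched as per-sample, entropy-based weighting of the three base learners' class probabilities: the less uncertain a learner is on a given sample, the more it contributes. The inverse-entropy form and the toy outputs below are assumptions, not the paper's exact formulation.

```python
import numpy as np

def predictive_entropy(p, eps=1e-12):
    return -(p * np.log(p + eps)).sum(axis=-1)

def dynamic_uncertainty_fusion(probs_list):
    """probs_list: list of (N, C) class-probability arrays, one per base learner.
    Weight each learner per sample by inverse predictive entropy, then renormalize."""
    probs = np.stack(probs_list)                  # (M, N, C)
    w = 1.0 / (predictive_entropy(probs) + 1e-6)  # (M, N): confident learners weigh more
    w = w / w.sum(axis=0, keepdims=True)
    return (w[..., None] * probs).sum(axis=0)     # (N, C) fused probabilities

# Toy outputs standing in for RoBERTa-FC, BERT-BiGRU, and TextCNN-Attention learners.
p1 = np.array([[0.70, 0.30], [0.50, 0.50]])
p2 = np.array([[0.60, 0.40], [0.90, 0.10]])
p3 = np.array([[0.80, 0.20], [0.55, 0.45]])
print(dynamic_uncertainty_fusion([p1, p2, p3]))
```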

23 pages, 1943 KB  
Article
Modeling of New Agents with Potential Antidiabetic Activity Based on Machine Learning Algorithms
by Yevhen Pruhlo, Ivan Iurchenko and Alina Tomenko
AppliedChem 2025, 5(4), 30; https://doi.org/10.3390/appliedchem5040030 - 27 Oct 2025
Viewed by 19
Abstract
Type 2 diabetes mellitus (T2DM) is a growing global health challenge, expected to affect over 600 million people by 2045. The discovery of new antidiabetic agents remains resource-intensive, motivating the use of machine learning (ML) for virtual screening based on molecular structure. In this study, we developed a predictive pipeline integrating two distinct descriptor types: high-dimensional numerical features from the Mordred library (>1800 2D/3D descriptors) and categorical ontological annotations from the ClassyFire and ChEBI systems. These encode hierarchical chemical classifications and functional group labels. The dataset included 45 active compounds and thousands of inactive molecules, depending on the descriptor system. To address class imbalance, we applied SMOTE and created balanced training and test sets while preserving independent validation sets. Thirteen ML models—including regression, SVM, naive Bayes, decision trees, ensemble methods, and others—were trained using stratified 12-fold cross-validation and evaluated across training, test, and validation. Ridge Regression showed the best generalization (MCC = 0.814), with Gradient Boosting following (MCC = 0.570). Feature importance analysis highlighted the complementary nature of the descriptors: Ridge Regression emphasized ClassyFire taxonomies such as CHEMONTID:0000229 and CHEBI:35622, while Mordred-based models (e.g., Random Forest) prioritized structural and electronic features like MAXsssCH and ETA_dEpsilon_D. This study is the first to systematically integrate and compare structural and ontological descriptors for antidiabetic compound prediction. The framework offers a scalable and interpretable approach to virtual screening and can be extended to other therapeutic domains to accelerate early-stage drug discovery. Full article
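A minimal scikit-learn/imbalanced-learn sketch of the evaluation protocol described above (SMOTE applied inside the training folds, stratified 12-fold cross-validation, MCC scoring, and a ridge-type classifier) follows. The synthetic data and the `RidgeClassifier` stand-in are assumptions; the Mordred, ClassyFire, and ChEBI descriptors themselves are not reproduced.

```python
from imblearn.over_sampling import SMOTE
from imblearn.pipeline import Pipeline
from sklearn.datasets import make_classification
from sklearn.linear_model import RidgeClassifier
from sklearn.model_selection import StratifiedKFold, cross_val_score
from sklearn.preprocessing import StandardScaler

# Imbalanced toy data standing in for descriptor matrices (actives are the minority).
X, y = make_classification(n_samples=600, n_features=50, weights=[0.95, 0.05],
                           random_state=0)

pipe = Pipeline([
    ("scale", StandardScaler()),
    ("smote", SMOTE(random_state=0)),     # oversampling happens inside each training fold
    ("clf", RidgeClassifier(alpha=1.0)),
])

cv = StratifiedKFold(n_splits=12, shuffle=True, random_state=0)
scores = cross_val_score(pipe, X, y, cv=cv, scoring="matthews_corrcoef")
print("mean MCC over 12 folds:", scores.mean().round(3))
```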

44 pages, 1049 KB  
Review
Toward Intelligent AIoT: A Comprehensive Survey on Digital Twin and Multimodal Generative AI Integration
by Xiaoyi Luo, Aiwen Wang, Xinling Zhang, Kunda Huang, Songyu Wang, Lixin Chen and Yejia Cui
Mathematics 2025, 13(21), 3382; https://doi.org/10.3390/math13213382 - 23 Oct 2025
Viewed by 402
Abstract
The Artificial Intelligence of Things (AIoT) is rapidly evolving from basic connectivity to intelligent perception, reasoning, and decision making across domains such as healthcare, manufacturing, transportation, and smart cities. Multimodal generative AI (GAI) and digital twins (DTs) provide complementary solutions. DTs deliver high-fidelity virtual replicas for real-time monitoring, simulation, and optimization with GAI enhancing cognition, cross-modal understanding, and the generation of synthetic data. This survey presents a comprehensive overview of DT–GAI integration in the AIoT. We review the foundations of DTs and multimodal GAI and highlight their complementary roles. We further introduce the Sense–Map–Generate–Act (SMGA) framework, illustrating their interaction through the SMGA loop. We discuss key enabling technologies, including multimodal data fusion, dynamic DT evolution, and cloud–edge–end collaboration. Representative application scenarios, including smart manufacturing, smart cities, autonomous driving, and healthcare, are examined to demonstrate their practical impact. Finally, we outline open challenges, including efficiency, reliability, privacy, and standardization, and we provide directions for future research toward sustainable, trustworthy, and intelligent AIoT systems. Full article

17 pages, 1406 KB  
Article
Interleaved Fusion Learning for Trustworthy AI: Improving Cross-Dataset Performance in Cervical Cancer Analysis
by Carlos Martínez, Laura Busto, Olivia Zulaica and César Veiga
Mach. Learn. Knowl. Extr. 2025, 7(4), 128; https://doi.org/10.3390/make7040128 - 23 Oct 2025
Viewed by 190
Abstract
This study introduces a novel Interleaved Fusion Learning (IFL) methodology leveraging transfer learning to generate a family of models optimized for specific datasets while maintaining superior generalization performance across others. The approach is demonstrated in cervical cancer screening, where cytology image datasets present challenges of heterogeneity and imbalance. By interleaving transfer steps across dataset partitions and regulating adaptation through a dynamic learning parameter, IFL promotes both domain-specific accuracy and cross-domain robustness. To evaluate its effectiveness, complementary metrics are used to capture not only predictive accuracy but also fairness in performance distribution across datasets. Results highlight the potential of IFL to deliver reliable and unbiased models in clinical decision support. Beyond cervical cytology, the methodology is designed to be scalable to other medical imaging tasks and, more broadly, to domains requiring equitable AI solutions across multiple heterogeneous datasets. Full article

18 pages, 4081 KB  
Article
DAFSF: A Defect-Aware Fine Segmentation Framework Based on Hybrid Encoder and Adaptive Optimization for Image Analysis
by Xiaoyi Liu, Jianyu Zhu, Zhanyu Zhu and Jianjun He
Appl. Sci. 2025, 15(21), 11351; https://doi.org/10.3390/app152111351 - 23 Oct 2025
Viewed by 181
Abstract
Accurate image segmentation is a fundamental requirement for fine-grained image analysis, providing critical support for applications such as medical diagnostics, remote sensing, and industrial fault detection. However, in complex industrial environments, conventional deep learning-based methods often struggle with noisy backgrounds, blurred boundaries, and highly imbalanced class distributions, which make fine-grained fault localization particularly challenging. To address these issues, this paper proposes a Defect-Aware Fine Segmentation Framework (DAFSF) that integrates three complementary components. First, a multi-scale hybrid encoder combines convolutional neural networks for capturing local texture details with Transformer-based modules for modeling global contextual dependencies. Second, a boundary-aware refinement module explicitly learns edge features to improve segmentation accuracy in damaged or ambiguous fault regions. Third, a defect-aware adaptive loss function jointly considers boundary weighting, hard-sample reweighting, and class balance, which enables the model to focus on challenging pixels while alleviating class imbalance. The proposed framework is evaluated on public benchmarks including Aeroscapes, Magnetic Tile Defect, and MVTec AD. The proposed DAFSF achieves mF1 scores of 85.3%, 85.9%, and 87.2%, and pixel accuracy (PA) of 91.5%, 91.8%, and 92.0% on the respective datasets. These findings highlight the effectiveness of the proposed framework for advancing fine-grained fault localization in industrial applications. Full article
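The defect-aware adaptive loss combines the three ingredients named above: boundary weighting, hard-sample reweighting, and class balance. The PyTorch sketch below shows one plausible composition (a focal-style hard-pixel term and a boundary weight map applied on top of class-weighted cross-entropy); the exact weighting scheme used in DAFSF may differ.

```python
import torch
import torch.nn.functional as F

def defect_aware_loss(logits, target, boundary_w, class_w, gamma=2.0):
    """Hedged sketch of a per-pixel adaptive segmentation loss.
    class_w    -- (C,) inverse-frequency class weights (class balance)
    boundary_w -- (B, H, W) map emphasizing edge pixels (boundary weighting)
    gamma      -- focal-style exponent that up-weights hard pixels."""
    ce = F.cross_entropy(logits, target, weight=class_w, reduction="none")  # (B, H, W)
    pt = torch.exp(-ce)         # approx. true-class probability (exact if class_w is uniform)
    hard = (1.0 - pt) ** gamma  # hard-sample reweighting
    return (hard * boundary_w * ce).mean()

logits = torch.randn(2, 3, 32, 32)           # 3 classes, 32x32 toy resolution
target = torch.randint(0, 3, (2, 32, 32))
boundary_w = torch.ones(2, 32, 32)           # e.g. >1.0 near annotated defect edges
class_w = torch.tensor([0.2, 1.0, 3.0])      # rarer classes weighted higher
print(defect_aware_loss(logits, target, boundary_w, class_w))
```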

28 pages, 990 KB  
Article
Cross-Domain Adversarial Alignment for Network Anomaly Detection Through Behavioral Embedding Enrichment
by Cristian Salvador-Najar and Luis Julián Domínguez Pérez
Computers 2025, 14(11), 450; https://doi.org/10.3390/computers14110450 - 22 Oct 2025
Viewed by 251
Abstract
Detecting anomalies in network traffic is a central task in cybersecurity and digital infrastructure management. Traditional approaches rely on statistical models, rule-based systems, or machine learning techniques to identify deviations from expected patterns, but often face limitations in generalization across domains. This study proposes a cross-domain data enrichment framework that integrates behavioral embeddings with network traffic features through adversarial autoencoders. Each network traffic record is paired with the most similar behavioral profile embedding from user web activity data (Charles dataset) using cosine similarity, thereby providing contextual enrichment for anomaly detection. The proposed system comprises (i) behavioral profile clustering via autoencoder embeddings and (ii) cross-domain latent alignment through adversarial autoencoders, with a discriminator to enable feature fusion. A Deep Feedforward Neural Network trained on the enriched feature space achieves 97.17% accuracy, 96.95% precision, 97.34% recall, and 97.14% F1-score, with stable cross-validation performance (99.79% average accuracy across folds). Behavioral clustering quality is supported by a silhouette score of 0.86 and a Davies–Bouldin index of 0.57. To assess robustness and transferability, the framework was evaluated on the UNSW-NB15 and the CIC-IDS2017 datasets, where results confirmed consistent performance and reliability when compared to traffic-only baselines. This supports the feasibility of cross-domain alignment and shows that adversarial training enables stable feature integration without evidence of overfitting or memorization. Full article
(This article belongs to the Section ICT Infrastructures for Cybersecurity)
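The enrichment step described above, pairing each traffic record with its most cosine-similar behavioral profile embedding and concatenating the two, can be sketched in a few lines. The sketch assumes both feature spaces have already been projected to a shared latent dimension by the autoencoders; array sizes are illustrative.

```python
import numpy as np
from sklearn.metrics.pairwise import cosine_similarity

def enrich_with_behavior(traffic_feats, behavior_embs):
    """Attach to each traffic record the most cosine-similar behavioral embedding,
    then concatenate the two feature spaces for downstream classification."""
    sims = cosine_similarity(traffic_feats, behavior_embs)   # (N_traffic, N_profiles)
    best = sims.argmax(axis=1)                               # nearest profile per record
    return np.hstack([traffic_feats, behavior_embs[best]])

rng = np.random.default_rng(0)
traffic = rng.random((1000, 32))     # latent traffic-flow features
profiles = rng.random((50, 32))      # behavioral profile embeddings (Charles dataset side)
print(enrich_with_behavior(traffic, profiles).shape)   # (1000, 64)
```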

10 pages, 214 KB  
Article
Quality of Life in Adults with Congenital Heart Disease: Insights from a Tertiary Centre
by Polona Kacar, Melita Flander and Katja Prokselj
J. Clin. Med. 2025, 14(20), 7451; https://doi.org/10.3390/jcm14207451 - 21 Oct 2025
Viewed by 301
Abstract
Objective: As the survival of individuals born with congenital heart disease (CHD) improves into adulthood, the focus has shifted from traditional clinical outcomes to patient-reported outcome measures that better reflect the impact of the disease on daily life. Our aim was to assess the quality of life (QoL) of adult patients with congenital heart disease (ACHD) followed in a tertiary centre and to evaluate the parameters that influence QoL in this population. Methods: This cross-sectional observational study included patients followed up at the national referral ACHD centre between April and September 2022. Sociodemographic and clinical data were collected from medical records and self-report questionnaires. Quality of life (QoL) was assessed using the validated Short Form–36 (SF-36) and Euro Quality of Life–5 Dimension (EQ-5D) questionnaires, including the EQ Visual Analogue Scale (VAS). Results: A total of 123 ACHD patients were included (median age 34 (29–41) years; 43.9% male). Most participants had moderate CHD (61%), and 14.6% were cyanotic. Overall, SF-36 Physical Component Summary scores were higher than Mental Component Summary scores. Almost half of the patients (48.8%) reported no problems in all five domains of the EQ-5D, with most problems reported in anxiety/depression domain. Patients with severe CHD, cyanosis, or HF reported lower QoL scores across multiple SF-36 domains, particularly general health, role–physical, and physical functioning domains. Conclusions: QoL among ACHD patients in our cohort was generally high in most domains as assessed by the SF-36 and EQ-5D. Patients with HF reported lower QoL scores, emphasizing the importance of close clinical follow-up and the need for tailored QoL assessment tools for this complex population. Full article

21 pages, 2245 KB  
Article
Frequency-Aware and Interactive Spatial-Temporal Graph Convolutional Network for Traffic Flow Prediction
by Guoqing Teng, Han Wu, Hao Wu, Jiahao Cao and Meng Zhao
Appl. Sci. 2025, 15(20), 11254; https://doi.org/10.3390/app152011254 - 21 Oct 2025
Viewed by 363
Abstract
Accurate traffic flow prediction is pivotal for intelligent transportation systems; yet, existing spatial-temporal graph neural networks (STGNNs) struggle to jointly capture the long-term structural stability, short-term dynamics, and multi-scale temporal patterns of road networks. To address these shortcomings, we propose FISTGCN, a Frequency-Aware Interactive Spatial-Temporal Graph Convolutional Network. FISTGCN enriches raw traffic flow features with learnable spatial and temporal embeddings, thereby providing comprehensive spatial-temporal representations for subsequent modeling. Specifically, it utilizes an interactive dynamic graph convolutional block that generates a time-evolving fused adjacency matrix by combining adaptive and dynamic adjacency matrices. It then applies dual sparse graph convolutions with cross-scale interactions to capture multi-scale spatial dependencies. The gated spectral block projects the input features into the frequency domain and adaptively separates low- and high-frequency components using a learnable threshold. It then employs learnable filters to extract features from different frequency bands and adopts a gating mechanism to adaptively fuse low- and high-frequency information, thereby dynamically highlighting short-term fluctuations or long-term trends. Extensive experiments on four benchmark datasets demonstrate that FISTGCN delivers state-of-the-art predictive accuracy while maintaining competitive computational efficiency. Full article
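A hedged PyTorch sketch of the gated spectral idea follows: the temporal signal is moved to the frequency domain, split into low and high bands around a learnable soft threshold, filtered per band, and fused by a learned gate. The module structure, the sigmoid soft split, and all dimensions are assumptions rather than FISTGCN's exact design.

```python
import torch
import torch.nn as nn

class GatedSpectralBlock(nn.Module):
    """Sketch: learnable split of the temporal spectrum plus gated low/high fusion."""
    def __init__(self, seq_len: int, channels: int):
        super().__init__()
        n_freq = seq_len // 2 + 1
        self.low_filter = nn.Parameter(torch.ones(n_freq, channels))
        self.high_filter = nn.Parameter(torch.ones(n_freq, channels))
        self.threshold = nn.Parameter(torch.tensor(0.3))   # fraction of the frequency band
        self.gate = nn.Linear(2 * channels, channels)

    def forward(self, x):                       # x: (B, T, C) traffic sequence
        spec = torch.fft.rfft(x, dim=1)         # (B, F, C), complex spectrum
        freq = torch.linspace(0, 1, spec.size(1), device=x.device).view(1, -1, 1)
        low_mask = torch.sigmoid((self.threshold - freq) * 50.0)   # soft band split
        low = torch.fft.irfft(spec * low_mask * self.low_filter, n=x.size(1), dim=1)
        high = torch.fft.irfft(spec * (1 - low_mask) * self.high_filter, n=x.size(1), dim=1)
        g = torch.sigmoid(self.gate(torch.cat([low, high], dim=-1)))   # adaptive gate
        return g * low + (1 - g) * high         # highlight trends vs. fluctuations

out = GatedSpectralBlock(seq_len=12, channels=16)(torch.randn(4, 12, 16))
print(out.shape)   # torch.Size([4, 12, 16])
```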

14 pages, 3001 KB  
Article
Investigation of Debris Mitigation in Droplet-Based Terbium Plasma Sources Produced by Laser Ablation Under Varying Buffer Gas Pressures
by Shuaichao Zhou, Tao Wu, Ziyue Wu, Junjie Tian and Peixiang Lu
Photonics 2025, 12(10), 1035; https://doi.org/10.3390/photonics12101035 - 19 Oct 2025
Viewed by 241
Abstract
The fragment suppression ability of terbium plasma generated by laser at different environmental pressures is investigated, with a focus on exploring the slowing effect of buffer gas on high-energy particles. Using two-dimensional radiation hydrodynamic simulations with the FLASH code, this study evaluates the debris mitigation efficiency of terbium plasma across a range of buffer gas pressures (50–1000 Pa). Key findings reveal that helium buffer gas exhibits a nonlinear pressure-dependent response in plasma dynamics and debris suppression. Specifically, at 1000 Pa helium, the plasma shockwave stops within stopping distance xst = 12.13 mm with an attenuation coefficient of b = 0.0013 ns−1, reducing radial expansion by 40% compared to 50 Pa (xst = 23.15 mm, b = 0.0010). This pressure scaling arises from enhanced collisional dissipation, confining over 80% of debris kinetic energy below 200 eV under 1000 Pa conditions. In contrast, argon exhibits superior stopping power within ion energy domains (≤1300 eV), attaining a maximum stopping power of 2000 eV·mm−1 at 1300 eV–a value associated with a 6.4-times-larger scattering cross-section compared to helium under equivalent conditions. The study uncovers a nonlinear relationship between kinetic energy and gas pressure, where the deceleration capability of buffer gases intensifies with increasing kinetic energy. This work demonstrates that by leveraging argon’s broadband stopping efficiency and helium’s confinement capacity, debris and high energy ions can be effectively suppressed, thereby securing mirror integrity and source efficiency at high repetition rates. Full article
(This article belongs to the Special Issue The Principle and Application of Photonic Metasurfaces)
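The quoted stopping distances x_st and attenuation coefficients b read like parameters of the drag model often used for plume-front propagation in a background gas, R(t) = x_st(1 - exp(-b t)). Assuming that is the fit behind the numbers above (an assumption, since the abstract does not state the functional form), the two helium curves can be reproduced as follows.

```python
import numpy as np

def plume_front_mm(t_ns, x_st_mm, b_per_ns):
    """Drag-model plume front R(t) = x_st * (1 - exp(-b t)); the functional form
    is an assumption about the fit behind the quoted x_st and b values."""
    return x_st_mm * (1.0 - np.exp(-b_per_ns * t_ns))

t = np.linspace(0.0, 3000.0, 7)              # ns
print(plume_front_mm(t, 12.13, 0.0013))      # 1000 Pa helium
print(plume_front_mm(t, 23.15, 0.0010))      # 50 Pa helium
```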

19 pages, 1935 KB  
Article
Domain Generalization for Bearing Fault Diagnosis via Meta-Learning with Gradient Alignment and Data Augmentation
by Gang Chen, Jun Ye, Dengke Li, Lai Hu, Zixi Wang, Mengchen Zi, Chao Liang and Jiahao Zhang
Machines 2025, 13(10), 960; https://doi.org/10.3390/machines13100960 - 17 Oct 2025
Viewed by 333
Abstract
Rotating machinery is a core component of modern industry, and its operational state directly affects system safety and reliability. In order to achieve intelligent fault diagnosis of bearings under complex working conditions, the health management of bearings has become an important issue. Although deep learning has shown remarkable advantages, its performance still relies on the assumption that the training and testing data share the same distribution, which often deteriorates in real applications due to variations in load and rotational speed. This study focused on the scenario of domain generalization (DG) and proposed a Meta-Learning with Gradient Alignment and Data Augmentation (MGADA) method for cross-domain bearing fault diagnosis. Within the meta-learning framework, Mixup-based data augmentation was performed on the support set in the inner loop to alleviate overfitting under small-sample conditions and enhanced task-level data diversity. In the outer loop optimization stage, an arithmetic gradient alignment constraint was introduced to ensure consistent update directions across different source domains, thereby reducing cross-domain optimization conflicts. Meanwhile, a centroid convergence constraint was incorporated to enforce samples of the same class from different domains to converge to a shared centroid in the feature space, thus enhancing intra-class compactness and semantic consistency. Cross-working-condition experiments conducted on the Case Western Reserve University (CWRU) bearing dataset demonstrate that the proposed method achieves high classification accuracy across different target domains, with an average accuracy of 98.89%. Furthermore, ablation studies confirm the necessity of each module (Mixup, gradient alignment, and centroid convergence), while t-SNE and confusion matrix visualizations further illustrate that the proposed approach effectively achieves cross-domain feature alignment and intra-class aggregation. The proposed method provides an efficient and robust solution for bearing fault diagnosis under complex working conditions and offers new insights and theoretical references for promoting domain generalization in practical industrial applications. Full article
(This article belongs to the Section Machines Testing and Maintenance)
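Two of the ingredients named above, Mixup on the support set and gradient alignment across source domains, can be sketched in PyTorch as follows. The Beta(0.2, 0.2) mixing parameter, the pairwise-cosine form of the alignment term, and the toy model are assumptions; the centroid-convergence constraint and the full meta-learning loop are omitted.

```python
import torch
import torch.nn.functional as F

def mixup(x, y_onehot, alpha=0.2):
    """Mixup on a support batch: convex-combine inputs and one-hot labels."""
    lam = torch.distributions.Beta(alpha, alpha).sample()
    idx = torch.randperm(x.size(0))
    return lam * x + (1 - lam) * x[idx], lam * y_onehot + (1 - lam) * y_onehot[idx]

def gradient_alignment_penalty(model, domain_losses):
    """Penalize misaligned update directions: mean (1 - cosine similarity)
    between flattened per-domain gradients, over all domain pairs."""
    grads = []
    for loss in domain_losses:
        g = torch.autograd.grad(loss, model.parameters(),
                                retain_graph=True, create_graph=True)
        grads.append(torch.cat([p.reshape(-1) for p in g]))
    penalty, pairs = 0.0, 0
    for i in range(len(grads)):
        for j in range(i + 1, len(grads)):
            penalty = penalty + (1 - F.cosine_similarity(grads[i], grads[j], dim=0))
            pairs += 1
    return penalty / max(pairs, 1)

model = torch.nn.Linear(8, 4)                     # toy fault classifier
x, y = torch.randn(16, 8), torch.randint(0, 4, (16,))
xm, ym = mixup(x, F.one_hot(y, 4).float())        # augmented support batch
domain_losses = [F.cross_entropy(model(torch.randn(16, 8)), torch.randint(0, 4, (16,)))
                 for _ in range(3)]               # one loss per source working condition
print(gradient_alignment_penalty(model, domain_losses))
```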

30 pages, 6302 KB  
Article
Pixel-Attention W-Shaped Network for Joint Lesion Segmentation and Diabetic Retinopathy Severity Staging
by Archana Singh, Sushma Jain and Vinay Arora
Diagnostics 2025, 15(20), 2619; https://doi.org/10.3390/diagnostics15202619 - 17 Oct 2025
Viewed by 356
Abstract
Background: Visual impairment remains a critical public health challenge, and diabetic retinopathy (DR) is a leading cause of preventable blindness worldwide. Early stages of the disease are particularly difficult to identify, as lesions are subtle, expert review is time-consuming, and conventional diagnostic workflows remain subjective. Methods: To address these challenges, we propose a novel Pixel-Attention W-shaped (PAW-Net) deep learning framework that integrates a Lesion-Prior Cross Attention (LPCA) module with a W-shaped encoder–decoder architecture. The LPCA module enhances pixel-level representation of microaneurysms, hemorrhages, and exudates, while the dual-branch W-shaped design jointly performs lesion segmentation and disease severity grading in a single, clinically interpretable pass. The framework has been trained and validated using DDR and a preprocessed Messidor + EyePACS dataset, with APTOS-2019 reserved for external, out-of-distribution evaluation. Results: The proposed PAW-Net framework achieved robust performance across severity levels, with an accuracy of 98.65%, precision of 98.42%, recall (sensitivity) of 98.83%, specificity of 99.12%, F1-score of 98.61%, and a Dice coefficient of 98.61%. Comparative analyses demonstrate consistent improvements over contemporary architectures, particularly in accuracy and F1-score. Conclusions: The PAW-Net framework generates interpretable lesion overlays that facilitate rapid triage and follow-up, exhibits resilience under domain shift, and maintains an efficient computational footprint suitable for telemedicine and mobile deployment. Full article
(This article belongs to the Section Machine Learning and Artificial Intelligence in Diagnostics)
