Search Results (1,147)

Search Parameters:
Keywords = disentanglement

20 pages, 4665 KB  
Article
Robust Bathymetric Mapping in Shallow Waters: A Digital Surface Model-Integrated Machine Learning Approach Using UAV-Based Multispectral Imagery
by Mandi Zhou, Ai Chin Lee, Ali Eimran Alip, Huong Trinh Dieu, Yi Lin Leong and Seng Keat Ooi
Remote Sens. 2025, 17(17), 3066; https://doi.org/10.3390/rs17173066 - 3 Sep 2025
Abstract
The accurate monitoring of short-term bathymetric changes in shallow waters is essential for effective coastal management and planning. Machine Learning (ML) applied to Unmanned Aerial Vehicle (UAV)-based multispectral imagery offers a rapid and cost-effective solution for bathymetric surveys. However, models based solely on multispectral imagery are inherently limited by confounding factors such as shadow effects, poor water quality, and complex seafloor textures, which obscure the spectral–depth relationship, particularly in heterogeneous coastal environments. To address these issues, we developed a hybrid bathymetric inversion model that integrates digital surface model (DSM) data—providing high-resolution topographic information—with ML applied to UAV-based multispectral imagery. The model training was supported by multibeam sonar measurements collected from an Unmanned Surface Vehicle (USV), ensuring high accuracy and adaptability to diverse underwater terrains. The study area, located around Lazarus Island, Singapore, encompasses a sandy beach slope transitioning into seagrass meadows, coral reef communities, and a fine-sediment seabed. Incorporating DSM-derived topographic information substantially improved prediction accuracy and correlation, particularly in complex environments. Compared with linear and bio-optical models, the proposed approach achieved accuracy improvements exceeding 20% in shallow-water regions, with performance reaching an R2 > 0.93. The results highlighted the effectiveness of DSM integration in disentangling spectral ambiguities caused by environmental variability and improving bathymetric prediction accuracy. By combining UAV-based remote sensing with the ML model, this study presents a scalable and high-precision approach for bathymetric mapping in complex shallow-water environments, thereby enhancing the reliability of UAV-based surveys and supporting the broader application of ML in coastal monitoring and management. Full article
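The abstract's core idea, augmenting a spectral depth predictor with DSM-derived topography, can be sketched as a toy linear inversion. This is an illustration on synthetic data only: the band-ratio predictor, noise levels, and coefficients below are assumptions, and the paper itself trains ML models on multibeam sonar measurements rather than fitting ordinary least squares.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200

# Synthetic scene: true depth (m), two spectral bands whose log-ratio
# correlates with depth (in the spirit of classic band-ratio models),
# plus a DSM elevation feature that tracks the seabed topography.
depth = rng.uniform(0.5, 8.0, n)
blue = np.exp(-0.08 * depth) + rng.normal(0, 0.01, n)
green = np.exp(-0.15 * depth) + rng.normal(0, 0.01, n)
dsm = -depth + rng.normal(0, 0.2, n)

# Design matrix: [1, log-ratio, DSM] -> ordinary least squares inversion.
log_ratio = np.log(blue.clip(1e-6)) / np.log(green.clip(1e-6))
X = np.column_stack([np.ones(n), log_ratio, dsm])
coef, *_ = np.linalg.lstsq(X, depth, rcond=None)
pred = X @ coef

ss_res = np.sum((depth - pred) ** 2)
ss_tot = np.sum((depth - depth.mean()) ** 2)
r2 = 1 - ss_res / ss_tot
```

On this synthetic scene the DSM feature carries most of the signal, which mirrors the abstract's point that topographic information disambiguates the spectral-depth relationship.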

16 pages, 22201 KB  
Article
MECO: Mixture-of-Expert Codebooks for Multiple Dense Prediction Tasks
by Gyutae Hwang and Sang Jun Lee
Sensors 2025, 25(17), 5387; https://doi.org/10.3390/s25175387 - 1 Sep 2025
Abstract
Autonomous systems operating in embedded environments require robust scene understanding under computational constraints. Multi-task learning offers a compact alternative to deploying multiple task-specific models by jointly solving dense prediction tasks. However, recent MTL models often suffer from entangled shared feature representations and significant computational overhead. To address these limitations, we propose Mixture-of-Expert Codebooks (MECO), a novel multi-task learning framework that leverages vector quantization to design Mixture-of-Experts with lightweight codebooks. MECO disentangles task-generic and task-specific representations and enables efficient learning across multiple dense prediction tasks such as semantic segmentation and monocular depth estimation. The proposed multi-task learning model is trained end-to-end using a composite loss that combines task-specific objectives and vector quantization losses. We evaluate MECO on a real-world driving dataset collected in challenging embedded scenarios. MECO achieves a +0.4% mIoU improvement in semantic segmentation and maintains comparable depth estimation accuracy to the baseline, while reducing model parameters and FLOPs by 18.33% and 28.83%, respectively. These results demonstrate the potential of vector quantization-based Mixture-of-Experts modeling for efficient and scalable multi-task learning in embedded environments. Full article
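The vector-quantization step behind "expert codebooks" can be illustrated in a few lines: each feature vector is replaced by its nearest codebook entry, and a commitment-style loss measures the quantization gap. A minimal numpy sketch with assumed sizes, not the MECO architecture itself:

```python
import numpy as np

rng = np.random.default_rng(1)

# A tiny "expert codebook": K code vectors of dimension D.
K, D = 8, 4
codebook = rng.normal(size=(K, D))

def quantize(features, codebook):
    """Map each feature vector to its nearest codebook entry (L2)."""
    # Pairwise squared distances, shape (N, K).
    d2 = ((features[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
    idx = d2.argmin(axis=1)
    return codebook[idx], idx

feats = rng.normal(size=(16, D))
quantized, indices = quantize(feats, codebook)

# Commitment-style term, as used in VQ training objectives.
commit_loss = ((feats - quantized) ** 2).mean()
```

In a full VQ model the straight-through estimator lets gradients flow through this non-differentiable lookup; that part is omitted here.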

25 pages, 709 KB  
Article
ESG Disclosure Frequency and Its Association with Market Performance: Evidence from Taiwan
by Chih-Feng Liao
Sustainability 2025, 17(17), 7812; https://doi.org/10.3390/su17177812 - 29 Aug 2025
Abstract
This study challenges the conventional wisdom that investor reactions to Environmental, Social, and Governance (ESG) information are primarily driven by disclosure sentiment. We propose and test an alternative hypothesis: that for investors navigating information-rich environments, the frequency of ESG disclosures can serve as a more potent signal of a firm’s underlying commitment and risk profile than the sentiment of the announcements themselves. Focusing on Taiwan’s capital market—a globally pivotal technology hub—we analyze 2576 firm-initiated ESG events from 2014 to 2023 using an event study methodology. We innovate by employing a BERT-based NLP model, specifically fine-tuned for Traditional Chinese, to disentangle the effects of disclosure frequency from sentiment. Our results reveal that announcement frequency is a more robust predictor of abnormal returns than sentiment, but its effect is highly contingent on the ESG pillar. A higher frequency of negative Social (S) and Governance (G) disclosures incurs a significant market penalty, whereas frequent proactive Environmental (E) disclosures are rewarded. These findings establish a “disclosure frequency premium/penalty” and offer critical, nuanced insights for corporate strategy and sustainable investment. By demonstrating how communication patterns shape market perceptions, this research directly informs UN SDG 12 (Responsible Production) and SDG 16 (Strong Institutions). Full article
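The event-study machinery mentioned above, a market model fitted on an estimation window and abnormal returns computed in the event window, can be sketched on synthetic returns (all numbers here are assumptions, not Taiwanese market data):

```python
import numpy as np

rng = np.random.default_rng(2)

# Estimation window: fit the market model r_firm = alpha + beta * r_mkt.
T = 120
r_mkt = rng.normal(0.0003, 0.01, T)
r_firm = 0.0001 + 1.2 * r_mkt + rng.normal(0, 0.005, T)

X = np.column_stack([np.ones(T), r_mkt])
(alpha, beta), *_ = np.linalg.lstsq(X, r_firm, rcond=None)

# Event window: abnormal return = actual - expected; CAR = cumulative sum.
ev_mkt = rng.normal(0.0003, 0.01, 5)
ev_firm = 0.0001 + 1.2 * ev_mkt + 0.004   # +40 bps/day simulated "event effect"
ar = ev_firm - (alpha + beta * ev_mkt)
car = ar.cumsum()
```

The study then regresses such abnormal returns on disclosure frequency and sentiment; that second stage is not shown.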

22 pages, 305 KB  
Article
Public Perceptions on the Efficiency of National Healthcare Systems Before and After the COVID-19 Pandemic
by Athina Economou
Healthcare 2025, 13(17), 2146; https://doi.org/10.3390/healthcare13172146 - 28 Aug 2025
Abstract
Background/Objectives: This study examines individual perceptions of national healthcare system efficiency before and after the COVID-19 pandemic across 18 countries grouped into three clusters (the Anglo-world, Europe, East Asia). This paper aims to identify the demographic, socioeconomic, health-related, and macroeconomic healthcare drivers of public assessments, and explain changes in attitudes between 2011–2013 and 2021–2023. Methods: Using individual-level data from the International Social Survey Programme (ISSP) for 2011–2013 and 2021–2023, logistic regression models of perceived healthcare inefficiency are estimated. In addition, the Oaxaca–Blinder decomposition model is adopted in order to decompose the assessment gap between the two periods. Models include a range of individual demographic and socioeconomic characteristics and national healthcare controls (healthcare expenditure, potential years of life lost). Results: Health-related factors, especially self-assessed health and trust in doctors, consistently emerge as predictors of more favourable evaluations across regions and periods. Higher national healthcare expenditure is associated with more positive public views and is the single largest contributor to the improved assessments in 2021–2023. Demographic and socioeconomic variables show smaller regionally and temporally heterogeneous effects. Decomposition indicates that both changes in observed characteristics (notably, expenditure and trust) and unobserved behavioural, cultural, or institutional shifts account for the gap in public healthcare assessments between the two time periods. Conclusions: Public assessments of healthcare systems are primarily shaped by individual health status, trust in providers, and national spending rather than differential demographic and socioeconomic traits. 
Therefore, policymakers should couple targeted investment in the healthcare sector with efforts to strengthen doctor–patient relationships, both to address public healthcare needs adequately and to sustain public support. Future research should focus on disentangling the cultural and behavioural pathways influencing healthcare attitudes. Full article
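The Oaxaca–Blinder step can be illustrated for the linear-model case, where the mean outcome gap between two periods splits exactly into an "explained" part (differences in characteristics) and an "unexplained" part (differences in coefficients). A synthetic-data sketch, not the ISSP analysis itself, which uses logistic models:

```python
import numpy as np

rng = np.random.default_rng(8)
n = 5000

# Two periods with different characteristics (X) and different coefficients.
X1 = np.column_stack([np.ones(n), rng.normal(1.0, 1.0, n)])   # period 1
X2 = np.column_stack([np.ones(n), rng.normal(1.5, 1.0, n)])   # period 2
b1_true, b2_true = np.array([0.2, 0.5]), np.array([0.4, 0.6])
y1 = X1 @ b1_true + rng.normal(0, 0.1, n)
y2 = X2 @ b2_true + rng.normal(0, 0.1, n)

b1, *_ = np.linalg.lstsq(X1, y1, rcond=None)
b2, *_ = np.linalg.lstsq(X2, y2, rcond=None)

# Twofold decomposition: gap = (Xbar2 - Xbar1) b1 + Xbar2 (b2 - b1).
gap = y2.mean() - y1.mean()
explained = (X2.mean(0) - X1.mean(0)) @ b1     # characteristics effect
unexplained = X2.mean(0) @ (b2 - b1)           # coefficients effect
```

Because OLS with an intercept fits the sample means exactly, the two parts sum to the gap to machine precision.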
18 pages, 526 KB  
Article
DPBD: Disentangling Preferences via Borrowing Duration for Book Recommendation
by Zhifang Liao, Liping Chen, Yuelan Qi and Fei Li
Big Data Cogn. Comput. 2025, 9(9), 222; https://doi.org/10.3390/bdcc9090222 - 28 Aug 2025
Abstract
Traditional book recommendation methods predominantly rely on collaborative filtering and context-based approaches. However, existing methods fail to account for the order of users’ book borrowings and the duration they hold them, both of which are crucial indicators reflecting users’ book preferences. To address this challenge, we propose a book recommendation framework called DPBD, which disentangles preferences based on borrowing duration, thereby explicitly modeling temporal patterns in library borrowing behaviors. The DPBD model adopts a dual-path neural architecture comprising the following: (1) The item-level path utilizes self-attention networks to encode historical borrowing sequences while incorporating borrowing duration as an adaptive weighting mechanism for attention score refinement. (2) The feature-level path employs gated fusion modules to effectively aggregate multi-source item attributes (e.g., category and title), followed by self-attention networks to model feature transition patterns. The framework subsequently combines both path representations through fully connected layers to generate user preference embeddings for next-book recommendation. Extensive experiments conducted on two real-world university library datasets demonstrate the superior performance of the proposed DPBD model compared with baseline methods. Specifically, DPBD achieved HR@1 scores of 13.67% and 15.75%, and NDCG@1 scores of 15.75% and 12.90%, on the two datasets, respectively. Full article
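The item-level path's idea, biasing self-attention scores by borrowing duration, can be sketched as follows. Normalizing the durations and adding their log to the attention logits is one plausible instantiation of "duration as an adaptive weighting mechanism", not necessarily the paper's exact formula:

```python
import numpy as np

rng = np.random.default_rng(3)

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

# A borrowing history of L items with embedding dimension D.
L, D = 6, 8
seq = rng.normal(size=(L, D))

# Borrowing durations in days, normalized to (0, 1] as attention weights.
durations = np.array([30.0, 2.0, 14.0, 60.0, 7.0, 21.0])
w = durations / durations.max()

# Plain dot-product self-attention, with a log-weight bias per key:
# long borrows get proportionally more attention mass.
q, k = rng.normal(size=(L, D)), rng.normal(size=(L, D))
scores = q @ k.T / np.sqrt(D)              # (L, L) attention logits
attn = softmax(scores + np.log(w)[None, :])
context = attn @ seq
```

Adding `log(w)` before the softmax multiplies each key's unnormalized probability by its duration weight, which is why this is a clean way to fold a scalar importance signal into attention.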

18 pages, 485 KB  
Study Protocol
SANA-Biome: A Protocol for a Cross-Sectional Study on Oral Health, Diet, and the Oral Microbiome in Romania
by Sterling L. Wright, Oana Slusanschi, Ana Cristina Giura, Ioanina Părlătescu, Cristian Funieru, Samantha M. Gaidula, Nicole E. Moore and Laura S. Weyrich
Healthcare 2025, 13(17), 2133; https://doi.org/10.3390/healthcare13172133 - 27 Aug 2025
Abstract
Periodontal disease is a widespread chronic condition linked to systemic illnesses such as cardiovascular disease, diabetes, and adverse pregnancy outcomes. Despite its global burden, population-specific studies on its risk factors remain limited, particularly in Central and Eastern Europe. The SANA-biome Project is a cross-sectional, community-based study designed to investigate the biological and social determinants of periodontal disease in Romania, a country with disproportionately high oral disease rates and minimal microbiome data. This protocol will integrate metagenomic, proteomic, and metabolomic data of the oral microbiome from saliva and dental calculus samples with detailed sociodemographic and lifestyle data collected through a structured 44-question survey. This study is grounded in two complementary frameworks: the IMPEDE model, which conceptualizes inflammation as both a driver and a consequence of microbial dysbiosis, and Ecosocial Theory, which situates disease within social and structural contexts. Our aims are as follows: (1) to identify lifestyle and behavioral predictors of periodontal disease; (2) to characterize the oral microbiome in individuals with and without periodontal disease; and (3) to evaluate the predictive value of combined microbial and sociodemographic features using statistical and machine learning approaches. Power calculations based on pilot data indicate a target enrollment of 120 participants. This integrative approach will help disentangle the complex interplay between microbiological and structural determinants of periodontal disease and inform culturally relevant prevention strategies. By focusing on an underrepresented population, this work contributes to a more equitable and interdisciplinary model of oral health research and supports the development of future precision public health interventions. Full article
(This article belongs to the Special Issue Oral Health in Healthcare)

19 pages, 2394 KB  
Article
A Decoupled Contrastive Learning Framework for Backdoor Defense in Federated Learning
by Jiahao Cheng, Tingrui Zhang, Meijiao Li, Wenbin Wang, Jun Wang and Ying Zhang
Symmetry 2025, 17(9), 1398; https://doi.org/10.3390/sym17091398 - 27 Aug 2025
Abstract
Federated learning (FL) enables collaborative model training across distributed clients while preserving data privacy by sharing only local parameters. However, this decentralized setup, while preserving data privacy, also introduces new vulnerabilities, particularly to backdoor attacks, in which compromised clients inject poisoned data or gradients to manipulate the global model. Existing defenses rely on the global server to inspect model parameters, while mitigating backdoor effects locally remains underexplored. To address this, we propose a decoupled contrastive learning–based defense. We first train a backdoor model using poisoned data, then extract intermediate features from both the local and backdoor models, and apply a contrastive objective to reduce their similarity, encouraging the local model to focus on clean patterns and suppress backdoor behaviors. Crucially, we leverage an implicit symmetry between clean and poisoned representations—structurally similar but semantically different. Disrupting this symmetry helps disentangle benign and malicious components. Our approach requires no prior attack knowledge or clean validation data, making it suitable for practical FL deployments. Full article
(This article belongs to the Section Computer)
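The defense described above hinges on pushing local-model features away from backdoor-model features. A toy numpy sketch of that decoupling objective: a cosine-similarity penalty, with an explicit orthogonal projection standing in for what gradient descent on the penalty would converge toward (all shapes and noise levels are assumptions):

```python
import numpy as np

rng = np.random.default_rng(4)
B, D = 32, 16

def cosine(a, b):
    na = np.linalg.norm(a, axis=-1)
    nb = np.linalg.norm(b, axis=-1)
    return (a * b).sum(-1) / (na * nb)

feat_local = rng.normal(size=(B, D))
# Backdoor-model features: initially close to the local model's features.
feat_backdoor = feat_local + 0.1 * rng.normal(size=(B, D))

# Decoupling penalty: squared cosine similarity to the backdoor features.
sim_before = cosine(feat_local, feat_backdoor)
loss_before = (sim_before ** 2).mean()

# Minimizing that penalty drives the local features toward the component
# orthogonal to the backdoor features; here we project it out directly.
dot = (feat_local * feat_backdoor).sum(1, keepdims=True)
nrm2 = (feat_backdoor ** 2).sum(1, keepdims=True)
feat_orth = feat_local - dot / nrm2 * feat_backdoor

loss_after = (cosine(feat_orth, feat_backdoor) ** 2).mean()
```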

23 pages, 2967 KB  
Article
Ultra-Short-Term Wind Power Prediction Based on Spatiotemporal Contrastive Learning
by Jie Xu, Tie Chen, Jiaxin Yuan, Youyuan Fan, Liping Li and Xinyu Gong
Electronics 2025, 14(17), 3373; https://doi.org/10.3390/electronics14173373 - 25 Aug 2025
Abstract
With the accelerating global energy transition, wind power has become a core pillar of renewable energy systems. However, its inherent intermittency and volatility pose significant challenges to the safe, stable, and economical operation of power grids, making ultra-short-term wind power prediction a critical technical link in optimizing grid scheduling and promoting large-scale wind power integration. Current forecasting techniques are hampered by inadequate feature representation, poor feature separation, and the limited interpretability of deep learning models. This study introduces a wind power prediction method based on spatiotemporal contrastive learning, employing seasonal-trend decomposition to capture the diverse characteristics of the time series. A contrastive learning framework and a feature disentanglement loss function are employed to effectively decouple spatiotemporal features. Geographical position data are integrated to model spatial correlations, and a spatiotemporal graph convolutional network combined with a multi-head attention mechanism is designed to improve interpretability. The proposed method is validated using operational data from two actual wind farms in Northwestern China. The results indicate that, compared with typical baselines (e.g., STGCN), this method reduces the RMSE by up to 38.47% and the MAE by up to 44.71% for ultra-short-term wind power prediction, markedly enhancing prediction precision and offering a more efficient way to forecast wind power. Full article
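One common way to turn geographical positions into spatial correlations for a graph model is a Gaussian-kernel adjacency matrix over site coordinates. A sketch with made-up turbine coordinates and bandwidth, not the paper's exact construction:

```python
import numpy as np

# Coordinates (km) of five hypothetical wind turbines.
coords = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0],
                   [5.0, 5.0], [6.0, 5.0]])

# Pairwise distances -> Gaussian-kernel adjacency with zero diagonal.
diff = coords[:, None, :] - coords[None, :, :]
dist = np.sqrt((diff ** 2).sum(-1))
sigma = 2.0
A = np.exp(-dist ** 2 / (2 * sigma ** 2))
np.fill_diagonal(A, 0.0)

# Row-normalize so each node aggregates a weighted neighbor average,
# as a graph convolution layer would.
A_norm = A / A.sum(axis=1, keepdims=True)
```

Nearby turbines end up strongly connected and distant ones weakly connected, which is exactly the spatial prior a spatiotemporal graph network exploits.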

38 pages, 4467 KB  
Article
Causal Decoupling for Temporal Knowledge Graph Reasoning via Contrastive Learning and Adaptive Fusion
by Siling Feng, Housheng Lu, Qian Liu, Peng Xu, Yujie Zheng, Bolin Chen and Mengxing Huang
Information 2025, 16(9), 717; https://doi.org/10.3390/info16090717 - 22 Aug 2025
Abstract
Temporal knowledge graphs (TKGs) are crucial for modeling evolving real-world facts and are widely applied in event forecasting and risk analysis. However, current TKG reasoning models struggle to separate causal signals from noisy observations, align temporal dynamics with semantic structures, and integrate long-term and short-term knowledge effectively. To address these challenges, we propose the Temporal Causal Contrast Graph Network (TCCGN), a unified framework that disentangles causal features from noise via orthogonal decomposition and adversarial learning; applies dual-domain contrastive learning to enhance both temporal and semantic consistency; and introduces a gated fusion module for adaptive integration of static and dynamic features across time scales. Extensive experiments on five benchmarks (ICEWS14/05-15/18, YAGO, GDELT) show that TCCGN consistently outperforms prior models. On ICEWS14, it achieves 42.46% MRR and 31.63% Hits@1, surpassing RE-GCN by 1.21 points. On the high-noise GDELT dataset, it improves MRR by 1.0%. These results highlight TCCGN’s robustness and its promise for real-world temporal reasoning tasks involving fine-grained causal inference under noisy conditions. Full article
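The gated fusion module for static and dynamic features can be sketched as a per-dimension sigmoid gate interpolating the two embeddings. The gate weights below are random placeholders, purely illustrative of the mechanism:

```python
import numpy as np

rng = np.random.default_rng(5)

def sigmoid(x):
    return 1 / (1 + np.exp(-x))

D = 8
static = rng.normal(size=D)    # static entity embedding
dynamic = rng.normal(size=D)   # short-term temporal embedding

# Gate computed from both inputs; in a trained model W, b are learned.
W = rng.normal(size=(D, 2 * D)) * 0.1
b = np.zeros(D)
g = sigmoid(W @ np.concatenate([static, dynamic]) + b)  # per-dim gate in (0, 1)

# Adaptive fusion: each dimension blends static and dynamic information.
fused = g * static + (1 - g) * dynamic
```

Because each `fused[i]` is a convex combination, the gate decides dimension by dimension how much long-term versus short-term knowledge survives.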

24 pages, 1841 KB  
Article
Symmetric Decomposition Framework for Enhanced Air Quality Prediction: A Component-Wise Linear Modeling Approach
by Yuke Jiang, Chenyue Wang and Peng Qin
Symmetry 2025, 17(9), 1370; https://doi.org/10.3390/sym17091370 - 22 Aug 2025
Abstract
Air quality prediction is a critical and complex time-series forecasting task, where traditional linear models often struggle to capture non-stationary, multiscale temporal dynamics. While recent advances in deep learning have improved predictive performance, challenges remain in interpretability, component disentanglement, and structured modeling. In this work, we propose a symmetric decomposition–modeling–evaluation framework tailored for air quality forecasting. Our method decomposes air quality time series into three complementary components: trend, periodicity, and fluctuation, based on their structural characteristics and symmetry relationships. Each component is modeled using lightweight, component-specific linear modules that preserve interpretability and computational efficiency. Importantly, the framework explicitly leverages the symmetrical properties between components to inform both model design and multiscale interaction. To evaluate component-level prediction quality, we introduce dual series-wise metrics that assess temporal correlation and distributional symmetry, addressing the limitations of conventional point-wise error metrics in capturing sequence-level consistency. Experimental results on real-world AQI datasets and other public time-series benchmarks demonstrate that our approach achieves competitive forecasting accuracy. On the Beijing dataset, our method achieved an MAE of 0.445 at the 96 h prediction horizon, outperforming the best baseline model, PatchFormer, with an MAE of 0.464. On the Tianjin dataset, our method achieved an MAE of 0.466 compared to the best competing model’s MAE of 0.481. Across multiple datasets, our approach consistently outperformed traditional methods, with improvements in MAE ranging from 0.019 to 0.086, demonstrating its effectiveness in capturing complex temporal patterns while enhancing interpretability through symmetry-aware modeling and evaluation. Full article
(This article belongs to the Section Computer)
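The decompose-then-model pipeline can be sketched on a synthetic series: a linear fit for the trend, phase-wise means for the periodic component, and a residual fluctuation term. These component modules are the simplest possible stand-ins, not the paper's actual ones:

```python
import numpy as np

rng = np.random.default_rng(6)

# Synthetic AQI-like series: linear trend + daily cycle + fluctuation.
n, period = 240, 24
t = np.arange(n)
series = 0.05 * t + 10.0 * np.sin(2 * np.pi * t / period) + rng.normal(0, 1.0, n)

# Trend component: a linear fit (a one-parameter "linear module").
trend = np.polyval(np.polyfit(t, series, 1), t)

# Periodic component: phase-wise means of the detrended series.
detrended = series - trend
phase_mean = np.array([detrended[p::period].mean() for p in range(period)])
periodic = np.tile(phase_mean, n // period)

# Fluctuation: whatever the other two components do not explain.
fluct = series - trend - periodic
```

The three components sum back to the original series by construction, so each can be forecast by its own lightweight model and the predictions recombined.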

12 pages, 1776 KB  
Article
Effect of Midazolam Premedication on Salivary Cortisol Levels in Pediatric Patients with Negative Frankl Behavior: A Pilot Study
by Juan Ignacio Aura-Tormos, Laura Marqués-Martínez, Esther Garcia-Miralles, Isabel Torres-Cuevas, Bianca Quartararo and Clara Guinot-Barona
Children 2025, 12(8), 1097; https://doi.org/10.3390/children12081097 - 20 Aug 2025
Abstract
Background/Objectives: This pilot study aimed to evaluate stress levels in pediatric patients classified as definitely negative according to the Frankl scale by measuring salivary cortisol concentrations. Additionally, the study assessed the impact of Midazolam premedication on stress reduction during dental procedures. Methods. Children and adolescents attending the Pediatric Dentistry Master’s program at the Catholic University of Valencia participated in the study. Salivary cortisol levels were measured before and after dental treatments, differentiating between invasive and non-invasive procedures. Patients were divided into two groups: those receiving Midazolam premedication and those who did not. Results. Findings showed a significant increase in cortisol levels following invasive dental treatments (0.991), whereas non-invasive treatments (0.992) did not lead to notable changes (p < 0.001). Patients premedicated with Midazolam exhibited significantly lower post-treatment cortisol levels compared to those who did not receive the medication (p < 0.05). Conclusions. These preliminary findings suggest that Midazolam-based management in children with definitely negative behavior may be associated with reduced physiological stress responses. As a pilot study with a limited sample and inherent group-allocation bias, the results should be interpreted with caution. The methodology proved feasible and supports the use of salivary cortisol in future, larger-scale studies designed to disentangle behavioral and pharmacological effects. Full article
(This article belongs to the Section Pediatric Dentistry & Oral Medicine)

20 pages, 1466 KB  
Article
Towards Controllable and Explainable Text Generation via Causal Intervention in LLMs
by Jie Qiu, Quanrong Fang and Wenhao Kang
Electronics 2025, 14(16), 3279; https://doi.org/10.3390/electronics14163279 - 18 Aug 2025
Abstract
Large Language Models (LLMs) excel in diverse text generation tasks but still face limited controllability, opaque decision processes, and frequent hallucinations. This paper presents a structural causal intervention framework that models input–hidden–output dependencies through a structural causal model and performs targeted interventions on hidden representations. By combining counterfactual sample construction with contrastive training, our method enables precise control of style, sentiment, and factual consistency while providing explicit causal explanations for output changes. Experiments on three representative tasks demonstrate consistent and substantial improvements: style transfer accuracy reaches 92.3% (+7–14 percentage points over strong baselines), sentiment-controlled generation achieves 90.1% accuracy (+1.3–10.9 points), and multi-attribute conflict rates drop to 3.7% (a 40–60% relative reduction). Our method also improves causal attribution scores to 0.83–0.85 and human agreement rates to 87–88%, while reducing training and inference latency by 25–30% through sparse masking that modifies ≤10% of hidden units per attribute. These results confirm that integrating structural causal intervention with counterfactual training advances controllability, interpretability, and efficiency in LLM-based generation, offering a robust foundation for deployment in reliability-critical and resource-constrained applications. Full article
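The sparse-masking idea, intervening on at most 10% of hidden units per attribute, can be sketched directly. The hidden vector and the per-attribute "direction" below are random placeholders, not anything extracted from a real LLM:

```python
import numpy as np

rng = np.random.default_rng(7)

D = 100
hidden = rng.normal(size=D)    # a hidden-state vector (placeholder)

# Sparse intervention: pick at most 10% of the units for one attribute
# and shift them along an (illustrative) attribute direction.
k = D // 10
units = rng.choice(D, size=k, replace=False)
delta = rng.normal(size=k)

intervened = hidden.copy()
intervened[units] += delta

changed = np.count_nonzero(intervened != hidden)
```

Keeping the mask sparse is what limits the side effects of an intervention and keeps the causal attribution of an output change localized to a few units.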

26 pages, 4379 KB  
Article
Carbon Dioxide Emission-Reduction Efficiency in China’s New Energy Vehicle Sector Toward Sustainable Development: Evidence from a Three-Stage Super-Slacks Based-Measure Data Envelopment Analysis Model
by Liying Zheng, Fangjuan Zhan and Fangrong Ren
Sustainability 2025, 17(16), 7440; https://doi.org/10.3390/su17167440 - 17 Aug 2025
Abstract
This research evaluates the carbon dioxide emission-reduction efficiency of new energy vehicles (NEVs) in China from 2018 to 2023 by applying a three-stage super-SBM data envelopment analysis (DEA) model that incorporates undesirable outputs. This model offers significant advantages over traditional DEA models, as it effectively disentangles the influences of external environmental factors and stochastic noise, thereby providing a more accurate and robust assessment of true efficiency. Its super-efficiency characteristic also allows for effective ranking of all decision-making units (DMUs) on the efficiency frontier. The empirical findings reveal several key insights. (1) The NEV industry’s carbon-reduction efficiency in China between 2018 and 2023 displayed an upward trend accompanied by pronounced fluctuations. Its mean super-efficiency score was 0.353, indicating substantial scope for improvements in scale efficiency. (2) Significant interprovincial disparities in efficiency are evident. Unbalanced coordination between production and consumption in provinces such as Shaanxi, Beijing, and Liaoning has produced correspondingly high or low efficiency values. (3) Although accelerated urbanization has reduced the capital and labor inputs required by the NEV industry and has raised energy consumption, its net effect enhances carbon-reduction efficiency. Household consumption levels and technological advancement exert divergent effects on efficiency: the former is negatively related to efficiency, whereas the latter is positively associated with it. Full article
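For intuition about what a DEA efficiency score measures: in the degenerate single-input, single-output case, CCR efficiency reduces to each unit's productivity ratio divided by the best observed ratio. The paper's three-stage super-SBM handles multiple inputs, undesirable outputs, slacks, and environmental adjustment, all of which this toy deliberately omits; the numbers are invented:

```python
import numpy as np

# Three hypothetical DMUs (e.g., provinces), one input and one output each.
x = np.array([2.0, 4.0, 5.0])   # input (e.g., energy consumed)
y = np.array([2.0, 4.0, 2.5])   # output (e.g., emissions avoided)

# Single-input/single-output CCR efficiency: ratio relative to the frontier.
ratio = y / x
eff = ratio / ratio.max()
```

Here the third DMU produces only half as much output per unit input as the frontier units, so its efficiency is 0.5.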

21 pages, 8269 KB  
Article
Context-Aware Feature Adaptation for Mitigating Negative Transfer in 3D LiDAR Semantic Segmentation
by Lamiae El Mendili, Sylvie Daniel and Thierry Badard
Remote Sens. 2025, 17(16), 2825; https://doi.org/10.3390/rs17162825 - 14 Aug 2025
Abstract
Semantic segmentation of 3D LiDAR point clouds is crucial for autonomous driving and urban modeling but requires extensive labeled data. Unsupervised domain adaptation from synthetic to real data offers a promising solution, yet faces the challenge of negative transfer, particularly due to context shifts between domains. This paper introduces Context-Aware Feature Adaptation, a novel approach to mitigate negative transfer in 3D unsupervised domain adaptation. The proposed approach disentangles object-specific and context-specific features, refines source context features through cross-attention with target information, and adaptively fuses the results. We evaluate our approach on challenging synthetic-to-real adaptation scenarios, demonstrating consistent improvements over state-of-the-art domain adaptation methods with up to 7.9% improvement in classes subject to context shift. Our comprehensive domain shift analysis reveals a positive correlation between context shift magnitude and performance improvement. Extensive ablation studies and visualizations further validate the efficacy in handling context shift for 3D semantic segmentation. Full article
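The refinement step, source context features attending to target-domain features followed by adaptive fusion, can be sketched as single-head cross-attention. The projection weights and the fixed fusion gate are assumptions; the paper learns these and fuses adaptively:

```python
import numpy as np

rng = np.random.default_rng(9)

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

Ns, Nt, D = 5, 7, 16
src_ctx = rng.normal(size=(Ns, D))   # source context features (queries)
tgt = rng.normal(size=(Nt, D))       # target-domain features (keys/values)

# Single-head cross-attention: source context attends to target features.
Wq, Wk, Wv = (rng.normal(size=(D, D)) * 0.1 for _ in range(3))
Q, K, V = src_ctx @ Wq, tgt @ Wk, tgt @ Wv
attn = softmax(Q @ K.T / np.sqrt(D))     # (Ns, Nt)
refined = attn @ V

# Fusion of original and refined context (fixed gate here; learned in practice).
g = 0.5
fused = g * src_ctx + (1 - g) * refined
```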

18 pages, 2639 KB  
Article
CA-NodeNet: A Category-Aware Graph Neural Network for Semi-Supervised Node Classification
by Zichang Lu, Meiyu Zhong, Qiguo Sun and Kai Ma
Electronics 2025, 14(16), 3215; https://doi.org/10.3390/electronics14163215 - 13 Aug 2025
Abstract
Graph convolutional networks (GCNs) have demonstrated remarkable effectiveness in processing graph-structured data and have been widely adopted across various domains. Existing methods mitigate over-smoothing through selective aggregation strategies such as attention mechanisms, edge dropout, and neighbor sampling. While some approaches incorporate global structural context, they often underexplore category-aware representations and inter-category differences, which are crucial for enhancing node discriminability. To address these limitations, a novel framework, CA-NodeNet, is proposed for semi-supervised node classification. CA-NodeNet comprises three key components: (1) coarse-grained node feature learning, (2) category-decoupled multi-branch attention, and (3) inter-category difference feature learning. Initially, a GCN-based encoder is employed to aggregate neighborhood information and learn coarse-grained representations. Subsequently, the category-decoupled multi-branch attention module employs a hierarchical multi-branch architecture, in which each branch incorporates category-specific attention mechanisms to project coarse-grained features into disentangled semantic subspaces. Furthermore, a layer-wise intermediate supervision strategy is adopted to facilitate the learning of discriminative category-specific features within each branch. To further enhance node feature discriminability, we introduce an inter-category difference feature learning module. This module first encodes pairwise differences between the category-specific features obtained from the previous stage and then integrates complementary information across multiple feature pairs to refine node representations. Finally, we design a dual-component optimization function that synergistically combines intermediate supervision loss with the final classification objective, encouraging the network to learn robust and fine-grained node representations. 
Extensive experiments on multiple real-world benchmark datasets demonstrate the superior performance of CA-NodeNet over existing state-of-the-art methods. Ablation studies further validate the effectiveness of each module in contributing to overall performance gains. Full article
(This article belongs to the Special Issue How Graph Convolutional Networks Work: Mechanisms and Models)
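The inter-category difference module's core operation, encoding pairwise differences between category-specific features and aggregating them into a refined node representation, can be sketched as follows (the sizes and the mean aggregation are assumptions, not the paper's exact design):

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(10)

C, D = 4, 8                           # number of categories, feature dim
cat_feats = rng.normal(size=(C, D))   # one category-specific vector per node

# Pairwise difference features for every unordered category pair.
pairs = list(combinations(range(C), 2))
diff_feats = np.stack([cat_feats[i] - cat_feats[j] for i, j in pairs])

# Aggregate complementary information across pairs to refine the node:
# concatenate the mean category feature with the mean difference feature.
refined = np.concatenate([cat_feats.mean(axis=0), diff_feats.mean(axis=0)])
```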
