Search Results (1,271)

Search Parameters:
Keywords = heterogeneous integrated network

19 pages, 1381 KB  
Article
MAMGN-HTI: A Graph Neural Network Model with Metapath and Attention Mechanisms for Hyperthyroidism Herb–Target Interaction Prediction
by Yanqin Zhou, Xiaona Yang, Ru Lv, Xufeng Lang, Yao Zhu, Zuojian Zhou and Kankan She
Bioengineering 2025, 12(10), 1085; https://doi.org/10.3390/bioengineering12101085 - 5 Oct 2025
Viewed by 229
Abstract
The accurate prediction of herb–target interactions is essential for the modernization of traditional Chinese medicine (TCM) and the advancement of drug discovery. Nonetheless, the inherent complexity of herbal compositions and diversity of molecular targets render experimental validation both time-consuming and labor-intensive. We propose a graph neural network model, MAMGN-HTI, which integrates metapaths with attention mechanisms. A heterogeneous graph consisting of herbs, efficacies, ingredients, and targets is constructed, where semantic metapaths capture latent relationships among nodes. An attention mechanism is employed to dynamically assign weights, thereby emphasizing the most informative metapaths. In addition, ResGCN and DenseGCN architectures are combined with cross-layer skip connections to improve feature propagation and enable effective feature reuse. Experiments show that MAMGN-HTI outperforms several state-of-the-art methods across multiple metrics, exhibiting superior accuracy, robustness, and generalizability in HTI prediction and candidate drug screening. Validation against literature and databases further confirms the model’s predictive reliability. The model also successfully identified herbs with potential therapeutic effects for hyperthyroidism, including Vinegar-processed Bupleuri Radix (Cu Chaihu), Prunellae Spica (Xiakucao), and Processed Cyperi Rhizoma (Zhi Xiangfu). MAMGN-HTI provides a reliable computational framework and theoretical foundation for applying TCM in hyperthyroidism treatment, providing mechanistic insights while improving research efficiency and resource utilization. Full article
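The metapath-plus-attention idea above can be illustrated compactly. Below is a minimal NumPy sketch of metapath-level attention — per-metapath node embeddings are scored, softmax-weighted, and fused — assuming toy shapes and a tanh scoring form that are not taken from the paper.

```python
# Minimal NumPy sketch of metapath-level (semantic) attention, in the spirit of
# the mechanism described in the abstract. Shapes, parameter names, and the
# scoring form are illustrative assumptions, not the paper's code.
import numpy as np

rng = np.random.default_rng(0)

def softmax(x):
    x = x - x.max()
    e = np.exp(x)
    return e / e.sum()

def metapath_attention(per_metapath_embeddings, W, q):
    """Fuse node embeddings computed along different metapaths.

    per_metapath_embeddings: list of (num_nodes, dim) arrays, one per metapath
                             (e.g. herb-ingredient-target, herb-efficacy-herb).
    W, q: projection matrix and attention vector that score each metapath.
    Returns the fused (num_nodes, dim) embedding and the metapath weights.
    """
    scores = []
    for Z in per_metapath_embeddings:
        # Mean over nodes of a tanh-projected embedding -> one scalar per metapath.
        scores.append(np.tanh(Z @ W).mean(axis=0) @ q)
    beta = softmax(np.array(scores))          # attention weight per metapath
    fused = sum(b * Z for b, Z in zip(beta, per_metapath_embeddings))
    return fused, beta

# Toy usage: 5 herb nodes, 8-dim embeddings from 3 metapaths.
dim = 8
Z_list = [rng.normal(size=(5, dim)) for _ in range(3)]
W = rng.normal(size=(dim, dim))
q = rng.normal(size=dim)
fused, beta = metapath_attention(Z_list, W, q)
print("metapath weights:", np.round(beta, 3))
```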

22 pages, 3580 KB  
Article
Edge-AI Enabled Resource Allocation for Federated Learning in Cell-Free Massive MIMO-Based 6G Wireless Networks: A Joint Optimization Perspective
by Chen Yang and Quanrong Fang
Electronics 2025, 14(19), 3938; https://doi.org/10.3390/electronics14193938 - 4 Oct 2025
Viewed by 109
Abstract
The advent of sixth-generation (6G) wireless networks and cell-free massive multiple-input multiple-output (MIMO) architectures underscores the need for efficient resource allocation to support federated learning (FL) at the network edge. Existing approaches often treat communication, computation, and learning in isolation, overlooking dynamic heterogeneity and fairness, which leads to degraded performance in large-scale deployments. To address this gap, we propose a joint optimization framework that integrates communication–computation co-design, fairness-aware aggregation, and a hybrid strategy combining convex relaxation with deep reinforcement learning. Extensive experiments on benchmark vision datasets and real-world wireless traces demonstrate that the framework achieves up to 23% higher accuracy, 18% lower latency, and 21% energy savings compared with state-of-the-art baselines. These findings advance joint optimization in federated learning (FL) and demonstrate scalability for 6G applications. Full article
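As one illustration of the fairness-aware aggregation the abstract mentions, here is a minimal sketch of a loss-weighted (q-FedAvg-style) parameter average; the weighting rule and client data are assumptions, not the paper's algorithm.

```python
# Minimal sketch of a fairness-aware model aggregation step for federated
# learning, one ingredient of the joint framework described above. The
# loss-proportional weighting is an illustrative assumption.
import numpy as np

def fairness_aware_aggregate(client_models, client_losses, q=1.0):
    """Aggregate client parameter vectors, upweighting clients with high loss.

    client_models: list of 1-D parameter arrays of equal length.
    client_losses: per-client empirical losses from the last local round.
    q: fairness exponent; q=0 recovers plain (unweighted) averaging.
    """
    losses = np.asarray(client_losses, dtype=float)
    weights = losses ** q
    weights = weights / weights.sum()
    stacked = np.stack(client_models)
    return weights @ stacked  # weighted average of parameters

# Toy usage with 4 clients and a 3-parameter model.
models = [np.array([1.0, 0.0, 2.0]),
          np.array([0.8, 0.1, 1.9]),
          np.array([1.2, -0.1, 2.2]),
          np.array([0.5, 0.4, 1.5])]
losses = [0.9, 0.4, 0.3, 1.6]   # the struggling client gets more say
print(fairness_aware_aggregate(models, losses, q=1.0))
```
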
12 pages, 284 KB  
Article
AI-Enabled Secure and Scalable Distributed Web Architecture for Medical Informatics
by Marian Ileana, Pavel Petrov and Vassil Milev
Appl. Sci. 2025, 15(19), 10710; https://doi.org/10.3390/app151910710 - 4 Oct 2025
Viewed by 228
Abstract
Current medical informatics systems face critical challenges, including limited scalability across distributed institutions, insufficient real-time AI-driven decision support, and lack of standardized interoperability for heterogeneous medical data exchange. To address these challenges, this paper proposes a novel distributed web system architecture for medical informatics, integrating artificial intelligence techniques and cloud-based services. The system ensures interoperability via HL7 FHIR standards and preserves data privacy and fault tolerance across interconnected medical institutions. A hybrid AI pipeline combining principal component analysis (PCA), K-Means clustering, and convolutional neural networks (CNNs) is applied to diffusion tensor imaging (DTI) data for early detection of neurological anomalies. The architecture leverages containerized microservices orchestrated with Docker Swarm, enabling adaptive resource management and high availability. Experimental validation confirms reduced latency, improved system reliability, and enhanced compliance with medical data exchange protocols. Results demonstrate superior performance with an average latency of 94 ms, a diagnostic accuracy of 91.3%, and enhanced clinical workflow efficiency compared to traditional monolithic architectures. The proposed solution successfully addresses scalability limitations while maintaining data security and regulatory compliance across multi-institutional deployments. This work contributes to the advancement of intelligent, interoperable, and scalable e-health infrastructures aligned with the evolution of digital healthcare ecosystems. Full article
(This article belongs to the Special Issue Data Science and Medical Informatics)
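The PCA + K-Means front end of the hybrid pipeline described above can be sketched with scikit-learn as below; synthetic features stand in for DTI data, the CNN stage is omitted, and all dimensions and cluster counts are illustrative assumptions.

```python
# Minimal sketch of the unsupervised front end of a PCA + K-Means (+ CNN)
# pipeline on flattened imaging features. Synthetic data stand in for DTI
# volumes; the supervised CNN stage is not shown.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(42)
X = rng.normal(size=(200, 512))  # 200 scans, 512 flattened DTI-derived features

pipeline = make_pipeline(
    PCA(n_components=20, random_state=0),              # compress correlated features
    KMeans(n_clusters=3, n_init=10, random_state=0),   # group scans into candidate phenotypes
)
labels = pipeline.fit_predict(X)
print("cluster sizes:", np.bincount(labels))
# A CNN would then be trained (e.g. on cluster-balanced batches) to flag
# anomalous scans; that stage is intentionally omitted here.
```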

17 pages, 10273 KB  
Article
Deep Learning-Based Approach for Automatic Defect Detection in Complex Structures Using PAUT Data
by Kseniia Barshok, Jung-In Choi and Jaesun Lee
Sensors 2025, 25(19), 6128; https://doi.org/10.3390/s25196128 - 3 Oct 2025
Viewed by 334
Abstract
This paper presents a comprehensive study on automated defect detection in complex structures using phased array ultrasonic testing data, focusing on both traditional signal processing and advanced deep learning methods. As a non-AI baseline, the well-known signal-to-noise ratio algorithm was improved by introducing automatic depth gate calculation using derivative analysis and eliminated the need for manual parameter tuning. Even though this method demonstrates robust flaw indication, it faces difficulties for automatic defect detection in highly noisy data or in cases with large pore zones. Considering this, multiple DL architectures—including fully connected networks, convolutional neural networks, and a novel Convolutional Attention Temporal Transformer for Sequences—are developed and trained on diverse datasets comprising simulated CIVA data and real-world data files from welded and composite specimens. Experimental results show that while the FCN architecture is limited in its ability to model dependencies, the CNN achieves a strong performance with a test accuracy of 94.9%, effectively capturing local features from PAUT signals. The CATT-S model, which integrates a convolutional feature extractor with a self-attention mechanism, consistently outperforms the other baselines by effectively modeling both fine-grained signal morphology and long-range inter-beam dependencies. Achieving a remarkable accuracy of 99.4% and a strong F1-score of 0.905 on experimental data, this integrated approach demonstrates significant practical potential for improving the reliability and efficiency of NDT in complex, heterogeneous materials. Full article
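The non-AI baseline above — an SNR indicator with a depth gate chosen by derivative analysis — can be sketched as follows; the synthetic A-scan, smoothing window, and gate offsets are assumptions rather than the paper's tuned procedure.

```python
# Minimal sketch of an SNR-style flaw indicator with an automatically chosen
# depth gate, picked from the derivative of the smoothed A-scan envelope.
import numpy as np

def auto_gate_snr(ascan, smooth=15):
    """Return (gate_start, gate_end, snr_db) for a 1-D A-scan."""
    env = np.abs(ascan)
    env_s = np.convolve(env, np.ones(smooth) / smooth, mode="same")  # smoothed envelope
    d = np.gradient(env_s)                                           # derivative analysis
    half = len(env_s) // 2
    front = int(np.argmax(d[:half]))          # strongest rising edge: front-wall echo
    back = half + int(np.argmin(d[half:]))    # strongest falling edge: back-wall echo
    start, end = front + 2 * smooth, back - 2 * smooth               # gate between the walls
    gate = env_s[start:end]
    noise = np.median(env_s[start:end])                              # robust in-gate noise floor
    snr_db = 20 * np.log10(gate.max() / max(noise, 1e-12))
    return start, end, snr_db

# Toy A-scan: front-wall echo, a small flaw echo, back-wall echo, plus noise.
t = np.arange(1000)
rng = np.random.default_rng(1)
ascan = (np.exp(-(t - 60) ** 2 / 50) + 0.3 * np.exp(-(t - 450) ** 2 / 80)
         + 0.9 * np.exp(-(t - 900) ** 2 / 50) + 0.02 * rng.normal(size=t.size))
print(auto_gate_snr(ascan))
```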

15 pages, 2076 KB  
Article
Forecasting Urban Water Demand Using Multi-Scale Artificial Neural Networks with Temporal Lag Optimization
by Elias Farah and Isam Shahrour
Water 2025, 17(19), 2886; https://doi.org/10.3390/w17192886 - 3 Oct 2025
Viewed by 268
Abstract
Accurate short-term forecasting of urban water demand is a persistent challenge for utilities seeking to optimize operations, reduce energy costs, and enhance resilience in smart distribution systems. This study presents a multi-scale Artificial Neural Network (ANN) modeling approach that integrates temporal lag optimization to predict daily and hourly water consumption across heterogeneous user profiles. Using high-resolution smart metering data from the SunRise Smart City Project in Lille, France, four demand nodes were analyzed: a District Metered Area (DMA), a student residence, a university restaurant, and an engineering school. Results demonstrate that incorporating lagged consumption variables substantially improves prediction accuracy, with daily R2 values increasing from 0.490 to 0.827 at the DMA and from 0.420 to 0.806 at the student residence. At the hourly scale, the 1-h lag model consistently outperformed other configurations, achieving R2 up to 0.944 at the DMA, thus capturing both peak and off-peak consumption dynamics. The findings confirm that short-term autocorrelation is a dominant driver of demand variability, and that ANN-based forecasting enhanced by temporal lag features provides a robust, computationally efficient tool for real-time water network management. Beyond improving forecasting performance, the proposed methodology supports operational applications such as leakage detection, anomaly identification, and demand-responsive planning, contributing to more sustainable and resilient urban water systems. Full article
(This article belongs to the Section Urban Water Management)
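A minimal sketch of the lag-feature approach described above, assuming a synthetic hourly series, 1-hour and 24-hour lags, and a small scikit-learn MLP rather than the authors' network:

```python
# Minimal sketch of lag-feature construction plus a small ANN regressor for
# hourly demand, in the spirit of the temporal-lag approach above.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.metrics import r2_score

rng = np.random.default_rng(7)
hours = np.arange(24 * 120)                                # ~4 months of hourly data
demand = (10 + 3 * np.sin(2 * np.pi * hours / 24)          # daily cycle
          + 1.5 * np.sin(2 * np.pi * hours / (24 * 7))     # weekly cycle
          + 0.5 * rng.normal(size=hours.size))

def make_lagged(series, lags=(1, 24)):
    """Feature matrix with the chosen lags; target is the current hour."""
    max_lag = max(lags)
    X = np.column_stack([series[max_lag - lag:-lag] for lag in lags])
    y = series[max_lag:]
    return X, y

X, y = make_lagged(demand, lags=(1, 24))
split = int(0.8 * len(y))
model = MLPRegressor(hidden_layer_sizes=(32, 16), max_iter=2000, random_state=0)
model.fit(X[:split], y[:split])
print("test R^2:", round(r2_score(y[split:], model.predict(X[split:])), 3))
```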

18 pages, 748 KB  
Review
Statistical Methods for Multi-Omics Analysis in Neurodevelopmental Disorders: From High Dimensionality to Mechanistic Insight
by Manuel Airoldi, Veronica Remori and Mauro Fasano
Biomolecules 2025, 15(10), 1401; https://doi.org/10.3390/biom15101401 - 2 Oct 2025
Viewed by 451
Abstract
Neurodevelopmental disorders (NDDs), including autism spectrum disorder, intellectual disability, and attention-deficit/hyperactivity disorder, are genetically and phenotypically heterogeneous conditions affecting millions worldwide. High-throughput omics technologies—transcriptomics, proteomics, metabolomics, and epigenomics—offer a unique opportunity to link genetic variation to molecular and cellular mechanisms underlying these disorders. However, the high dimensionality, sparsity, batch effects, and complex covariance structures of omics data present significant statistical challenges, requiring robust normalization, batch correction, imputation, dimensionality reduction, and multivariate modeling approaches. This review provides a comprehensive overview of statistical frameworks for analyzing high-dimensional omics datasets in NDDs, including univariate and multivariate models, penalized regression, sparse canonical correlation analysis, partial least squares, and integrative multi-omics methods such as DIABLO, similarity network fusion, and MOFA. We illustrate how these approaches have revealed convergent molecular signatures—synaptic, mitochondrial, and immune dysregulation—across transcriptomic, proteomic, and metabolomic layers in human cohorts and experimental models. Finally, we discuss emerging strategies, including single-cell and spatially resolved omics, machine learning-driven integration, and longitudinal multi-modal analyses, highlighting their potential to translate complex molecular patterns into mechanistic insights, biomarkers, and therapeutic targets. Integrative multi-omics analyses, grounded in rigorous statistical methodology, are poised to advance mechanistic understanding and precision medicine in NDDs. Full article
(This article belongs to the Section Bioinformatics and Systems Biology)
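Of the multivariate tools surveyed above, a two-block PLS linking one omics layer to another can be sketched briefly; the synthetic transcriptomic and proteomic blocks, component count, and shared-factor construction are illustrative assumptions.

```python
# Minimal sketch of a two-block latent-variable analysis (PLS) linking a
# transcriptomic block to a proteomic block, one of the multivariate methods
# surveyed above. Data are synthetic.
import numpy as np
from sklearn.cross_decomposition import PLSCanonical

rng = np.random.default_rng(3)
n = 100                                   # subjects
shared = rng.normal(size=(n, 2))          # latent biology common to both layers
X_rna = shared @ rng.normal(size=(2, 300)) + 0.5 * rng.normal(size=(n, 300))
X_prot = shared @ rng.normal(size=(2, 80)) + 0.5 * rng.normal(size=(n, 80))

pls = PLSCanonical(n_components=2)
T, U = pls.fit_transform(X_rna, X_prot)   # per-block latent scores

# Correlation of the paired component scores indicates cross-omics signal.
for k in range(2):
    r = np.corrcoef(T[:, k], U[:, k])[0, 1]
    print(f"component {k + 1}: cross-block correlation = {r:.2f}")
```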

49 pages, 517 KB  
Review
A Comprehensive Review of Data-Driven Techniques for Air Pollution Concentration Forecasting
by Jaroslaw Bernacki and Rafał Scherer
Sensors 2025, 25(19), 6044; https://doi.org/10.3390/s25196044 - 1 Oct 2025
Viewed by 449
Abstract
Air quality is crucial for public health and the environment, which makes it important to both monitor and forecast the level of pollution. Polluted air, containing harmful substances such as particulate matter, nitrogen oxides, or ozone, can lead to serious respiratory and circulatory diseases, especially in people at risk. Air quality forecasting allows for early warning of smog episodes and taking actions to reduce pollutant emissions. In this article, we review air pollutant concentration forecasting methods, analyzing both classical statistical approaches and modern techniques based on artificial intelligence, including deep models, neural networks, and machine learning, as well as advanced sensing technologies. This work aims to present the current state of research and identify the most promising directions of development in air quality modeling, which can contribute to more effective health and environmental protection. According to the reviewed literature, deep learning–based models, particularly hybrid and attention-driven architectures, emerge as the most promising approaches, while persistent challenges such as data quality, interpretability, and integration of heterogeneous sensing systems define the open issues for future research. Full article
(This article belongs to the Special Issue Smart Gas Sensor Applications in Environmental Change Monitoring)

19 pages, 1182 KB  
Article
HGAA: A Heterogeneous Graph Adaptive Augmentation Method for Asymmetric Datasets
by Hongbo Zhao, Wei Liu, Congming Gao, Weining Shi, Zhihong Zhang and Jianfei Chen
Symmetry 2025, 17(10), 1623; https://doi.org/10.3390/sym17101623 - 1 Oct 2025
Viewed by 188
Abstract
Edge intelligence plays an increasingly vital role in ensuring the reliability of distributed microservice-based applications, which are widely used in domains such as e-commerce, industrial IoT, and cloud-edge collaborative platforms. However, anomaly detection in these systems encounters a critical challenge: labeled anomaly data are scarce. This scarcity leads to severe class asymmetry and compromised detection performance, particularly under the resource constraints of edge environments. Recent approaches based on Graph Neural Networks (GNNs)—often integrated with DeepSVDD and regularization techniques—have shown potential, but they rarely address this asymmetry in an adaptive, scenario-specific way. This work proposes Heterogeneous Graph Adaptive Augmentation (HGAA), a framework tailored for edge intelligence scenarios. HGAA dynamically optimizes graph data augmentation by leveraging feedback from online anomaly detection. To enhance detection accuracy while adhering to resource constraints, the framework incorporates a selective bias toward underrepresented anomaly types. It uses knowledge distillation to model dataset-dependent distributions and adaptively adjusts augmentation probabilities, thus avoiding excessive computational overhead in edge environments. Additionally, a dynamic adjustment mechanism evaluates augmentation success rates in real time, refining the selection processes to maintain model robustness. Experiments were conducted on two real-world datasets (TraceLog and FlowGraph) under simulated edge scenarios. Results show that HGAA consistently outperforms competitive baseline methods. Specifically, compared with the best non-adaptive augmentation strategies, HGAA achieves an average improvement of 4.5% in AUC and 4.6% in AP. Even larger gains are observed in challenging cases: for example, when using the HGT model on the TraceLog dataset, AUC improves by 14.6% and AP by 18.1%. Beyond accuracy, HGAA also significantly enhances efficiency: compared with filter-based methods, training time is reduced by up to 71% on TraceLog and 8.6% on FlowGraph, confirming its suitability for resource-constrained edge environments. These results highlight the potential of adaptive, edge-aware augmentation techniques in improving microservice anomaly detection within heterogeneous, resource-limited environments. Full article
(This article belongs to the Special Issue Symmetry and Asymmetry in Embedded Systems)
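The success-rate feedback loop described above can be illustrated with a small selector that re-weights augmentation probabilities from online detection outcomes; the smoothing/softmax update, augmentation names, and toy success rates are assumptions, not the HGAA implementation.

```python
# Minimal sketch of a success-rate-driven update for augmentation selection
# probabilities, echoing the adaptive adjustment mechanism described above.
import numpy as np

class AdaptiveAugmentationSelector:
    def __init__(self, augmentations, smoothing=0.9, temperature=0.5, seed=0):
        self.augs = list(augmentations)
        self.success = np.full(len(self.augs), 0.5)   # smoothed success rates
        self.smoothing = smoothing
        self.temperature = temperature
        self.rng = np.random.default_rng(seed)

    def probabilities(self):
        z = self.success / self.temperature
        z = z - z.max()
        p = np.exp(z)
        return p / p.sum()

    def sample(self):
        return self.rng.choice(len(self.augs), p=self.probabilities())

    def update(self, aug_index, succeeded):
        """Feed back whether the augmented graph improved online detection."""
        s = self.smoothing
        self.success[aug_index] = s * self.success[aug_index] + (1 - s) * float(succeeded)

# Toy loop: "edge_drop" happens to help more often than the others.
selector = AdaptiveAugmentationSelector(["edge_drop", "feature_mask", "subgraph"])
rng = np.random.default_rng(1)
for _ in range(500):
    i = selector.sample()
    succeeded = rng.random() < (0.7 if i == 0 else 0.4)
    selector.update(i, succeeded)
print(dict(zip(selector.augs, np.round(selector.probabilities(), 2))))
```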

25 pages, 4270 KB  
Article
Policy Coordination and Green Transformation of STAR Market Enterprises Under “Dual Carbon” Goals
by Wenchao Feng, Yueyue Liu and Zhenxing Liu
Sustainability 2025, 17(19), 8790; https://doi.org/10.3390/su17198790 - 30 Sep 2025
Viewed by 372
Abstract
China’s dual carbon goals necessitate green transformation across industries, with STAR Market enterprises serving as crucial drivers of technological innovation. Existing studies predominantly focus on traditional sectors, overlooking dynamic policy interactions and structural heterogeneity in these technology-intensive firms. This study examines how coordinated environmental tax reforms, green finance initiatives, and equity network synergies collectively shape enterprise green transition, using multi-period difference-in-differences and triple-difference models across 2019 Q3–2023 Q4. By integrating financial records, patent filings, and carbon emission data from 487 STAR Market firms, the analysis identifies environmental cost pressures as the dominant policy driver, complemented by delayed financing incentives and accelerated resource integration through corporate networks. Regional institutional environments further modulate these effects, with areas implementing stricter tax reforms exhibiting stronger outcomes. The findings advocate for adaptive policy designs that align fiscal instruments with regional innovation capacities, optimize financial tools for technology commercialization cycles, and leverage inter-firm networks to amplify sustainability efforts. These insights contribute to refining China’s climate governance framework for emerging technology sectors. Full article
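A minimal sketch of the multi-period difference-in-differences design named above, using a two-way fixed-effects OLS with firm-clustered errors on a synthetic panel; the variable names, staggered adoption scheme, and effect size are assumptions, not the paper's specification or data.

```python
# Minimal sketch of a multi-period (staggered-treatment) difference-in-
# differences regression with firm and quarter fixed effects.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
firms, quarters = 60, 12
rows = []
for f in range(firms):
    adopt = rng.choice([4, 6, 8, np.inf])          # quarter the policy reaches firm f (inf = never)
    firm_effect = rng.normal(scale=0.5)
    for q in range(quarters):
        treated_post = int(q >= adopt)
        y = (0.2 * q / quarters + firm_effect       # common trend + firm heterogeneity
             + 0.8 * treated_post                   # "true" policy effect
             + rng.normal(scale=0.3))
        rows.append({"firm": f, "quarter": q, "treated_post": treated_post, "green": y})
panel = pd.DataFrame(rows)

twfe = smf.ols("green ~ treated_post + C(firm) + C(quarter)", data=panel).fit(
    cov_type="cluster", cov_kwds={"groups": panel["firm"]})
print(round(twfe.params["treated_post"], 3), round(twfe.bse["treated_post"], 3))
```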

37 pages, 523 KB  
Review
Artificial Intelligence and Machine Learning Approaches for Indoor Air Quality Prediction: A Comprehensive Review of Methods and Applications
by Dominik Latoń, Jakub Grela, Andrzej Ożadowicz and Lukasz Wisniewski
Energies 2025, 18(19), 5194; https://doi.org/10.3390/en18195194 - 30 Sep 2025
Viewed by 341
Abstract
Indoor air quality (IAQ) is a critical determinant of health, comfort, and productivity, and is strongly connected to building energy demand due to the role of ventilation and air treatment in HVAC systems. This review examines recent applications of Artificial Intelligence (AI) and Machine Learning (ML) for IAQ prediction across residential, educational, commercial, and public environments. Approaches are categorized by predicted parameters, forecasting horizons, facility types, and model architectures. Particular focus is given to pollutants such as CO2, PM2.5, PM10, VOCs, and formaldehyde. Deep learning methods, especially the LSTM and GRU networks, achieve superior accuracy in short-term forecasting, while hybrid models integrating physical simulations or optimization algorithms enhance robustness and generalizability. Importantly, predictive IAQ frameworks are increasingly applied to support demand-controlled ventilation, adaptive HVAC strategies, and retrofit planning, contributing directly to reduced energy consumption and carbon emissions without compromising indoor environmental quality. Remaining challenges include data heterogeneity, sensor reliability, and limited interpretability of deep models. This review highlights the need for scalable, explainable, and energy-aware IAQ prediction systems that align health-oriented indoor management with energy efficiency and sustainability goals. Such approaches directly contribute to policy priorities, including the EU Green Deal and Fit for 55 package, advancing both occupant well-being and low-carbon smart building operation. Full article
(This article belongs to the Collection Energy Efficiency and Environmental Issues)

29 pages, 1328 KB  
Article
A Resilient Energy-Efficient Framework for Jamming Mitigation in Cluster-Based Wireless Sensor Networks
by Carolina Del-Valle-Soto, José A. Del-Puerto-Flores, Leonardo J. Valdivia, Aimé Lay-Ekuakille and Paolo Visconti
Algorithms 2025, 18(10), 614; https://doi.org/10.3390/a18100614 - 29 Sep 2025
Viewed by 138
Abstract
This paper presents a resilient and energy-efficient framework for jamming mitigation in cluster-based wireless sensor networks (WSNs), addressing a critical vulnerability in hostile or interference-prone environments. The proposed approach integrates dynamic cluster reorganization, adaptive MAC-layer behavior, and multipath routing strategies to restore communication capabilities and sustain network functionality under jamming conditions. The framework is evaluated across heterogeneous topologies using Zigbee and Bluetooth Low Energy (BLE); both stacks were validated in a physical testbed with matched jammer and traffic conditions, while simulation was used solely to tune parameters and support sensitivity analyses. Results demonstrate significant improvements in Packet Delivery Ratio, end-to-end delay, energy consumption, and retransmission rate, with BLE showing particularly high resilience when combined with the mitigation mechanism. Furthermore, a comparative analysis of routing protocols including AODV, GAF, and LEACH reveals that hierarchical protocols achieve superior performance when integrated with the proposed method. This framework has broader applicability in mission-critical IoT domains, including environmental monitoring, industrial automation, and healthcare systems. The findings confirm that the framework offers a scalable and protocol-agnostic defense mechanism, with potential applicability in mission-critical and interference-sensitive IoT deployments. Full article

23 pages, 18084 KB  
Article
WetSegNet: An Edge-Guided Multi-Scale Feature Interaction Network for Wetland Classification
by Li Chen, Shaogang Xia, Xun Liu, Zhan Xie, Haohong Chen, Feiyu Long, Yehong Wu and Meng Zhang
Remote Sens. 2025, 17(19), 3330; https://doi.org/10.3390/rs17193330 - 29 Sep 2025
Viewed by 233
Abstract
Wetlands play a crucial role in climate regulation, pollutant filtration, and biodiversity conservation. Accurate wetland classification through high-resolution remote sensing imagery is pivotal for the scientific management, ecological monitoring, and sustainable development of these ecosystems. However, the intricate spatial details in such imagery pose significant challenges to conventional interpretation techniques, necessitating precise boundary extraction and multi-scale contextual modeling. In this study, we propose WetSegNet, an edge-guided Multi-Scale Feature Interaction network for wetland classification, which integrates a convolutional neural network (CNN) and Swin Transformer within a U-Net architecture to synergize local texture perception and global semantic comprehension. Specifically, the framework incorporates two novel components: (1) a Multi-Scale Feature Interaction (MFI) module employing cross-attention mechanisms to mitigate semantic discrepancies between encoder–decoder features, and (2) a Multi-Feature Fusion (MFF) module that hierarchically enhances boundary delineation through edge-guided spatial attention (EGA). Experimental validation on GF-2 satellite imagery of Dongting Lake wetlands demonstrates that WetSegNet achieves state-of-the-art performance, with an overall accuracy (OA) of 90.81% and a Kappa coefficient of 0.88. Notably, it achieves classification accuracies exceeding 90% for water, sedge, and reed habitats, surpassing the baseline U-Net by 3.3% in overall accuracy and 0.05 in Kappa. The proposed model effectively addresses heterogeneous wetland classification challenges, validating its capability to reconcile local–global feature representation. Full article
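The edge-guided spatial attention (EGA) idea above can be sketched as a small PyTorch block in which a predicted edge map gates decoder features; the channel sizes, learned edge head, and gating form are assumptions, not the paper's exact module.

```python
# Minimal PyTorch sketch of an edge-guided spatial attention (EGA) block:
# an edge map modulates decoder features so boundaries are emphasized.
import torch
import torch.nn as nn

class EdgeGuidedAttention(nn.Module):
    def __init__(self, channels):
        super().__init__()
        self.edge_head = nn.Conv2d(channels, 1, kernel_size=3, padding=1)  # predicts an edge map
        self.refine = nn.Sequential(
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
            nn.BatchNorm2d(channels),
            nn.ReLU(inplace=True),
        )

    def forward(self, features):
        edge_logits = self.edge_head(features)            # (B, 1, H, W)
        attention = torch.sigmoid(edge_logits)            # boundary-focused weights
        gated = features * (1.0 + attention)              # amplify edge regions
        return self.refine(gated), edge_logits            # edge_logits can take an edge loss

# Toy usage on a random feature map.
block = EdgeGuidedAttention(channels=64)
x = torch.randn(2, 64, 32, 32)
out, edges = block(x)
print(out.shape, edges.shape)
```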

21 pages, 1618 KB  
Article
Towards Realistic Virtual Power Plant Operation: Behavioral Uncertainty Modeling and Robust Dispatch Through Prospect Theory and Social Network-Driven Scenario Design
by Yi Lu, Ziteng Liu, Shanna Luo, Jianli Zhao, Changbin Hu and Kun Shi
Sustainability 2025, 17(19), 8736; https://doi.org/10.3390/su17198736 - 29 Sep 2025
Viewed by 206
Abstract
The growing complexity of distribution-level virtual power plants (VPPs) demands a rethinking of how flexible demand is modeled, aggregated, and dispatched under uncertainty. Traditional optimization frameworks often rely on deterministic or homogeneous assumptions about end-user behavior, thereby overestimating controllability and underestimating risk. In this paper, we propose a behavior-aware, two-stage stochastic dispatch framework for VPPs that explicitly models heterogeneous user participation via integrated behavioral economics and social interaction structures. At the behavioral layer, user responses to demand response (DR) incentives are captured using a Prospect Theory-based utility function, parameterized by loss aversion, nonlinear gain perception, and subjective probability weighting. In parallel, social influence dynamics are modeled using a peer interaction network that modulates individual participation probabilities through local contagion effects. These two mechanisms are combined to produce a high-dimensional, time-varying participation map across user classes, including residential, commercial, and industrial actors. This probabilistic behavioral landscape is embedded within a scenario-based two-stage stochastic optimization model. The first stage determines pre-committed dispatch quantities across flexible loads, electric vehicles, and distributed storage systems, while the second stage executes real-time recourse based on realized participation trajectories. The dispatch model includes physical constraints (e.g., energy balance, network limits), behavioral fatigue, and the intertemporal coupling of flexible resources. A scenario reduction technique and the Conditional Value-at-Risk (CVaR) metric are used to ensure computational tractability and robustness against extreme behavior deviations. Full article
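Two ingredients named above — the Prospect Theory value and probability-weighting functions, and the CVaR tail-risk metric — can be sketched as follows; the parameter values are textbook-style assumptions, not the paper's calibration.

```python
# Minimal sketch of a Prospect Theory value function with probability
# weighting, and a CVaR estimate over sampled dispatch costs.
import numpy as np

def prospect_value(x, alpha=0.88, beta=0.88, loss_aversion=2.25):
    """Kahneman-Tversky style value of a gain/loss x relative to a reference."""
    x = np.asarray(x, dtype=float)
    out = np.empty_like(x)
    gains = x >= 0
    out[gains] = x[gains] ** alpha
    out[~gains] = -loss_aversion * (-x[~gains]) ** beta
    return out

def probability_weight(p, gamma=0.61):
    """Inverse-S subjective weighting of an objective probability p."""
    p = np.asarray(p, dtype=float)
    return p ** gamma / (p ** gamma + (1 - p) ** gamma) ** (1 / gamma)

def cvar(costs, alpha=0.95):
    """Average of the worst (1 - alpha) share of sampled costs."""
    costs = np.sort(np.asarray(costs, dtype=float))
    tail = costs[int(np.ceil(alpha * len(costs))):]
    return tail.mean() if len(tail) else costs[-1]

# Toy usage: how a user values a DR incentive, and tail risk of recourse cost.
print(prospect_value([5.0, -5.0]))            # losses loom larger than gains
print(probability_weight([0.05, 0.5, 0.95]))  # small p overweighted, large p underweighted
rng = np.random.default_rng(0)
scenario_costs = rng.lognormal(mean=3.0, sigma=0.4, size=2000)
print(round(cvar(scenario_costs, alpha=0.95), 1))
```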

18 pages, 3524 KB  
Article
Transformer-Embedded Task-Adaptive-Regularized Prototypical Network for Few-Shot Fault Diagnosis
by Mingkai Xu, Huichao Pan, Siyuan Wang and Shiying Sun
Electronics 2025, 14(19), 3838; https://doi.org/10.3390/electronics14193838 - 27 Sep 2025
Viewed by 187
Abstract
Few-shot fault diagnosis (FSFD) seeks to build accurate models from scarce labeled data, a frequent challenge in industrial settings with noisy measurements and varying operating conditions. Conventional metric-based meta-learning (MBML) often assumes task-invariant, class-separable feature spaces, which rarely hold in heterogeneous environments. To address this, we propose a Transformer-embedded Task-Adaptive-Regularized Prototypical Network (TETARPN). A tailored Transformer-based Temporal Encoder Module is integrated into MBML to capture long-range dependencies and global temporal correlations in industrial time series. In parallel, a task-adaptive prototype regularization dynamically adjusts constraints according to task difficulty, enhancing intra-class compactness and inter-class separability. This combination improves both adaptability and robustness in FSFD. Experiments on bearing benchmark datasets show that TETARPN consistently outperforms state-of-the-art methods under diverse fault types and operating conditions, demonstrating its effectiveness and potential for real-world deployment. Full article
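The prototypical-network core above can be illustrated with one toy episode plus a compactness-versus-separability diagnostic of the kind the task-adaptive regularizer targets; the random embeddings, 3-way 5-shot layout, and diagnostic form are illustrative assumptions.

```python
# Minimal NumPy sketch of one prototypical-network episode. Random embeddings
# stand in for encoder outputs.
import numpy as np

rng = np.random.default_rng(0)
n_way, k_shot, dim = 3, 5, 16

# Fake encoder outputs: class c lives around a random center.
centers = rng.normal(scale=3.0, size=(n_way, dim))
support = centers[:, None, :] + rng.normal(size=(n_way, k_shot, dim))
query = centers + rng.normal(size=(n_way, dim))       # one query per class

prototypes = support.mean(axis=1)                     # (n_way, dim)

def classify(q, prototypes):
    d2 = ((prototypes - q) ** 2).sum(axis=1)          # squared Euclidean distance
    return int(np.argmin(d2)), d2

correct = sum(classify(query[c], prototypes)[0] == c for c in range(n_way))
print(f"episode accuracy: {correct}/{n_way}")

# Compactness (within-class spread) vs. separability (prototype spacing):
within = np.mean([((support[c] - prototypes[c]) ** 2).sum(axis=1).mean()
                  for c in range(n_way)])
between = np.mean([((prototypes[c] - prototypes[j]) ** 2).sum()
                   for c in range(n_way) for j in range(n_way) if j != c])
print("within / between ratio:", round(within / between, 3))
# A task-adaptive regularizer would scale a penalty on this ratio with task difficulty.
```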

23 pages, 4130 KB  
Article
Spectral Properties of Complex Distributed Intelligence Systems Coupled with an Environment
by Alexander P. Alodjants, Dmitriy V. Tsarev, Petr V. Zakharenko and Andrei Yu. Khrennikov
Entropy 2025, 27(10), 1016; https://doi.org/10.3390/e27101016 - 27 Sep 2025
Viewed by 188
Abstract
The increasing integration of artificial intelligence agents (AIAs) based on large language models (LLMs) is transforming many spheres of society. These agents act as human assistants, forming Distributed Intelligent Systems (DISs) and engaging in opinion formation, consensus-building, and collective decision-making. However, complex DIS network topologies introduce significant uncertainty into these processes. We propose a quantum-inspired graph signal processing framework to model collective behavior in a DIS interacting with an external environment represented by an influence matrix (IM). System topology is captured using scale-free and Watts–Strogatz graphs. Two contrasting interaction regimes are considered. In the first case, the internal structure fully aligns with the external influence, as expressed by the commutativity between the adjacency matrix and the IM. Here, a renormalization-group-based scaling approach reveals minimal reservoir influence, characterized by full phase synchronization and coherent dynamics. In the second case, the IM includes heterogeneous negative (antagonistic) couplings that do not commute with the network, producing partial or complete spectral disorder. This disrupts phase coherence and may fragment opinions, except for the dominant collective (Perron) mode, which remains robust. Spectral entropy quantifies disorder and external influence. The proposed framework offers insights into designing LLM-participated DISs that can maintain coherence under environmental perturbations. Full article
(This article belongs to the Section Complexity)
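The commuting-versus-antagonistic contrast described above can be sketched numerically: build a Watts–Strogatz network, form an influence matrix that either commutes with the adjacency matrix or carries mixed-sign couplings, and compare commutator norms and a spectral entropy; the IM constructions and the entropy normalization are illustrative assumptions, not the paper's model.

```python
# Minimal sketch contrasting a commuting vs. a non-commuting influence matrix
# (IM) on a Watts-Strogatz network, with a spectral entropy as disorder measure.
import numpy as np
import networkx as nx

rng = np.random.default_rng(0)
G = nx.watts_strogatz_graph(n=60, k=6, p=0.1, seed=0)
A = nx.to_numpy_array(G)

def spectral_entropy(M):
    """Entropy of the normalized eigenvalue spectrum of a symmetric matrix."""
    w = np.abs(np.linalg.eigvalsh(M))
    p = w / w.sum()
    p = p[p > 1e-12]
    return float(-(p * np.log(p)).sum())

def commutator_norm(A, B):
    return float(np.linalg.norm(A @ B - B @ A))

# Case 1: IM is a polynomial in A, so it commutes with the network exactly.
IM_aligned = 0.5 * A + 0.1 * (A @ A)
# Case 2: IM carries random antagonistic (negative) couplings; no commutation.
signs = np.triu(rng.choice([-1.0, 1.0], size=A.shape), k=1)
signs = signs + signs.T
IM_antagonistic = A * signs       # symmetric, mixed-sign couplings

for name, IM in [("aligned", IM_aligned), ("antagonistic", IM_antagonistic)]:
    print(name,
          "| commutator norm:", round(commutator_norm(A, IM), 2),
          "| spectral entropy:", round(spectral_entropy(A + IM), 3))
```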
