Entropy, Volume 28, Issue 3 (March 2026) – 112 articles

Cover Story: Understanding if and when quantum coherence and correlations may boost quantum machine performance is central to quantum thermodynamics. Here, we propose using Nth-root gates to drive a quantum heat engine: they allow for the incremental application of two-qubit operations while pacing the creation of correlations in the system. By comparing two- and three-qubit circuits, we find that providing one qubit with initial quantum coherence as a free resource consistently improves the engine performance. Across most parameters, this increases the maximum extractable work, with efficiencies as high as 84% to 100%. Furthermore, we identify a linear correlation between work production and the many-body correlations generated by these specific gates within the working medium.
  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive the tables of contents of newly released issues.
  • PDF is the official format for papers published in both HTML and PDF forms. To view the papers in PDF format, click on the "PDF Full-text" link, and use the free Adobe Reader to open them.
10 pages, 381 KB  
Article
Spectral Signatures of Prime Factorization
by Giuseppe Mussardo and Andrea Trombettoni
Entropy 2026, 28(3), 363; https://doi.org/10.3390/e28030363 - 23 Mar 2026
Viewed by 267
Abstract
We present a protocol for integer factorization for all integers N below a certain cut-off Λ = 2^d, grounded in the theory of quantum measurement. In this framework, the factorization of an integer N ≤ Λ is achieved in a number of steps equal to the total number I of primes present in its factorization; explicitly, the procedure consists of a sequence of I quantum measurements. The method requires a single-purpose quantum device designed to perform measurements of an observable with a prescribed spectrum. Crucially, the construction of this device involves solving, once and for all, a set of approximately 2^d differential equations, independently of the specific integer to be factorized. We argue that the initialization task of this device can be efficiently implemented on a quantum computer in d steps, thereby decoupling the computational cost of device preparation from the factorization process itself. Full article
(This article belongs to the Section Quantum Information)
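A classical illustration of the claimed step count (not the quantum protocol itself): a short Python sketch counting I, the number of prime factors of N with multiplicity, for any N below the cut-off Λ = 2^d.

```python
# Classical arithmetic illustration only (not the quantum protocol): the
# claimed number of measurement steps is I, the number of prime factors of N
# counted with multiplicity, for any N below the cut-off Lambda = 2**d.
def measurement_steps(N):
    I, p = 0, 2
    while p * p <= N:
        while N % p == 0:
            N //= p
            I += 1
        p += 1
    return I + (1 if N > 1 else 0)

print(measurement_steps(360))   # 360 = 2^3 * 3^2 * 5, so I = 6
```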
12 pages, 2032 KB  
Article
The Scaled Hirshfeld Partitioning: Mathematical Development and Information-Theoretic Foundation
by Farnaz Heidar-Zadeh
Entropy 2026, 28(3), 362; https://doi.org/10.3390/e28030362 - 23 Mar 2026
Viewed by 302
Abstract
Atomic charges play a central role in the analysis of molecular electronic structure and are widely used in the development of computational models. We introduce a simple and computationally efficient extension of Hirshfeld’s 1977 stockholder partitioning method, called scaled Hirshfeld, in which neutral proatom densities are scaled to construct a promolecular density better adapted to the molecular electron density. We present a fixed-point iterative algorithm to compute the proatom scaling coefficients and show that this formulation is equivalent to the information-theoretic additive variational Hirshfeld method with a minimal basis. This equivalence establishes a rigorous mathematical foundation for the scaled Hirshfeld method and ensures size consistency as well as the existence of a unique solution. Numerical results demonstrate that the proposed approach yields charges larger than those obtained with the original Hirshfeld method, while retaining computational efficiency and providing an improved description of molecular dipole moments and electrostatic potentials. Full article
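A minimal 1-D sketch of a scaled-Hirshfeld-style fixed point. The population-matching update rule below is our assumption for illustration; the paper's actual iteration may differ.

```python
import numpy as np

# 1-D scaled-Hirshfeld-style fixed point (assumed update rule, for
# illustration): scale neutral proatoms so proatom populations match the
# stockholder populations extracted from the molecular density.
r = np.linspace(-8, 8, 2001)
dr = r[1] - r[0]
gauss = lambda c, h, w: h * np.exp(-((r - c) / w) ** 2)

pro = [gauss(-1.2, 1.0, 1.0), gauss(+1.2, 0.7, 1.3)]   # neutral proatoms
rho_mol = 0.9 * pro[0] + 1.15 * pro[1]                 # mock molecular density

c = np.ones(2)                                         # scaling coefficients
for it in range(200):
    rho0 = c[0] * pro[0] + c[1] * pro[1] + 1e-30       # scaled promolecule
    pops = np.array([np.trapz(c[a] * pro[a] / rho0 * rho_mol, dx=dr)
                     for a in range(2)])
    c_new = pops / np.array([np.trapz(p, dx=dr) for p in pro])
    if np.max(np.abs(c_new - c)) < 1e-12:
        break
    c = c_new
print(it, c)   # converges to the exact coefficients (0.9, 1.15) here
```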
44 pages, 2527 KB  
Article
Managing Uncertainty and Information Dynamics with Graphics-Enhanced TOGAF Architecture in Higher Education
by A’aeshah Alhakamy
Entropy 2026, 28(3), 361; https://doi.org/10.3390/e28030361 - 22 Mar 2026
Viewed by 311
Abstract
Adaptive learning at scale requires explicit handling of uncertainty and information flow across diverse educational technologies. This paper proposes a TOGAF-conformant enterprise architecture for the University of Tabuk (UT) that embeds entropy- and uncertainty-aware requirements from the outset and aligns them with institutional goals in teaching, research, and administration. Using the Architecture Development Method (ADM), we map information-theoretic requirements to architectural artifacts across the architecture vision, business, information systems, and technology domains; formally specify core entropy-informed observables, including predictive entropy, expected information gain, workflow variability entropy, and uncertainty hot-spot severity; and define semantic and metadata standards for their near-real-time computation. These indicators are positioned explicitly across the TOGAF domains: business architecture identifies where uncertainty matters, information systems architecture defines the computable data and application representations, technology architecture operationalizes secure and scalable computation, and later ADM phases use the resulting metrics for prioritization and governance. The architecture also establishes governance that ranks initiatives by their expected uncertainty reduction through Architecture Review Board (ARB) decision gates. We address three research questions: (R.Q.1) how to design a TOGAF-conformant architecture for UT that natively encodes uncertainty-aware requirements and aligns with institutional needs; (R.Q.2) how to integrate dispersed data, achieve semantic harmonization, and deliver analytics-ready streams that support information-theoretic indicators for personalization without delay; and (R.Q.3) how to embed IT demand planning in opportunities and solutions and migration planning using uncertainty reduction and expected information gain as prioritization criteria. The resulting architecture offers a university-wide foundation for adaptive learning: it unifies learner and system interaction data under governed schemas, supports low-latency analytics, and formalizes decision processes that treat uncertainty as a primary metric. Though learner-level operational validation is future work, the design establishes the technical and organizational foundations for responsible, large-scale deployment of entropy-driven learner modeling, content sequencing, and feedback optimization. Full article
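The first two entropy-informed observables have standard information-theoretic forms; a sketch under those textbook definitions (function names and toy numbers are ours).

```python
import numpy as np

# Textbook forms of two of the observables; names and toy numbers are ours.
def predictive_entropy(p):
    p = np.asarray(p, float)
    p = p[p > 0]
    return float(-(p * np.log(p)).sum())

def expected_information_gain(prior, likelihood):
    """likelihood[y, h] = P(observation y | hypothesis h)."""
    prior = np.asarray(prior, float)
    evidence = likelihood @ prior                    # P(y)
    eig = predictive_entropy(prior)
    for y, p_y in enumerate(evidence):
        posterior = likelihood[y] * prior / p_y
        eig -= p_y * predictive_entropy(posterior)   # E_y[H(posterior)]
    return eig

lik = np.array([[0.9, 0.2], [0.1, 0.8]])             # 2 outcomes x 2 hypotheses
print(expected_information_gain([0.5, 0.5], lik))
```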
38 pages, 4142 KB  
Article
Failure Mode and Effect Analysis Using Large-Scale Group Decision Making and Normal Cloud Model
by Lijie Wu, Changchun Liu and Hanwen Song
Entropy 2026, 28(3), 360; https://doi.org/10.3390/e28030360 - 22 Mar 2026
Viewed by 356
Abstract
Failure Modes and Effects Analysis (FMEA) is crucial for complex system reliability. However, traditional FMEA and its existing enhancements face significant limitations. These notably include difficulties in handling diverse heterogeneous data, effectively coordinating large expert groups, and robustly propagating inherent uncertainties. To bridge these critical gaps, this paper proposes an innovative and robust FMEA framework, specifically designed for Large Group Decision Making (LGDM) under uncertainty, leveraging the Normal Cloud Model (NCM). First, LGDM is genuinely integrated into FMEA by involving an unprecedented number of experts (>50). Second, a broad spectrum of heterogeneous data, including exact numbers, interval numbers, NCMs, linguistic terms, and linguistic expressions, is utilized to effectively model and manage diverse uncertainties. Third, a four-step data preprocessing method is incorporated to efficiently screen invalid and low-quality inputs, significantly enhancing the reliability of aggregated results. Fourth, an innovative and comprehensive expert weight determination method that judiciously combines subjective factors with objective data quality is proposed, ensuring more trustworthy and equitable aggregation of judgments. Distinctively, our method explicitly preserves and propagates uncertainty information across the entire computational process, yielding more insightful and informative results beyond simple rankings, encompassing detailed quantitative uncertainty analysis. A practical case study, alongside detailed result analysis, sensitivity analysis, both qualitative and quantitative comparative analysis, and advantages and limitations analysis, collectively confirms the effectiveness, practicality, rationality, and robustness of the proposed method. The sensitivity analyses demonstrate that the final risk rankings are highly stable even under varying trade-off coefficients, confirming the method’s strong robustness and insensitivity to parameter fluctuations. Our framework provides a scientifically advanced and robust approach for FMEA in complex decision-making environments, particularly applicable to high-stakes industries such as modern aviation, thereby enabling more informed risk management decisions. Full article
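NCM evaluations are conventionally sampled with the forward normal cloud generator; a sketch under that standard definition (the parameter values are illustrative, not from the case study).

```python
import numpy as np

# Standard forward normal cloud generator: from the NCM triple
# (Ex, En, He) = (expectation, entropy, hyper-entropy), draw a drop x and its
# membership degree mu; parameter values here are illustrative only.
def cloud_drops(Ex, En, He, n=1000, rng=np.random.default_rng(0)):
    En_prime = rng.normal(En, He, size=n)       # second-order randomness
    x = rng.normal(Ex, np.abs(En_prime))
    mu = np.exp(-(x - Ex) ** 2 / (2 * En_prime ** 2 + 1e-12))
    return x, mu

x, mu = cloud_drops(Ex=7.0, En=0.8, He=0.1)     # e.g., an expert's "high risk"
print(x[:3], mu[:3])
```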
21 pages, 1500 KB  
Article
Additomultiplicative Cascades Govern Multifractal Scaling Reliability Across Cardiac, Financial, and Climate Systems
by Madhur Mangalam, Eiichi Watanabe and Ken Kiyono
Entropy 2026, 28(3), 359; https://doi.org/10.3390/e28030359 - 22 Mar 2026
Viewed by 267
Abstract
The generative mechanisms underlying multifractal scaling in complex systems remain a fundamental unsolved problem, limiting our ability to distinguish healthy from pathological dynamics, predict system failures, or understand how scale-invariant organization emerges across vastly different physical domains. We resolve this challenge by introducing threshold sensitivity analysis—an extension of Chhabra–Jensen’s direct method—as a framework that classifies cascade types by examining how scaling reliability varies across moment orders q. Different q values systematically probe weak fluctuations (negative q) versus strong fluctuations (positive q), and the coefficient of determination (r²) of partition function regressions quantifies scaling reliability at each q. Analyzing r²(q) patterns in 280 cardiac recordings (healthy controls through fatal heart failure), 200 financial time series (global equity markets and currencies, 2000–2025), and 80 climate stations (tropical to continental zones, 2000–2025), we discover a universal diagnostic signature: symmetric expansion of valid scaling behavior under relaxed r² thresholds, spanning both weak and strong fluctuations. This threshold sensitivity fingerprint—predicted by synthetic cascade simulations but never before validated empirically—uniquely identifies additomultiplicative cascades, hybrid processes that randomly alternate between additive stabilization and multiplicative amplification. Critically, this symmetric signature persists universally across domains: cardiac dynamics maintain consistent patterns across health and disease states, financial markets show varying robustness across asset classes (currencies more variable than US equities) while preserving a hybrid structure, and climate systems exhibit geographical variations (subtropical/continental stronger than tropical) without altering fundamental cascade type. These findings suggest that additomultiplicative organization is a unifying feature of complex adaptive systems, offering a resolution to decades of debate between additive and multiplicative models. The r²(q) profiling provides a mechanistic diagnostic capable of detecting early dysfunction, assessing system resilience, and revealing how environmental constraints shape—but do not determine—the fundamental principles governing multifractal complexity. Full article
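A sketch of the r²(q) diagnostic under standard Chhabra–Jensen-style partition sums (binning and scales are our choices; avoid q = 1, where Z(1, s) ≡ 1).

```python
import numpy as np

# Sketch of the r^2(q) diagnostic: regress the log partition sum against log
# scale (Chhabra–Jensen style) and record the regression r^2 at each q.
# Avoid q = 1, where Z(1, s) = 1 identically.
def r2_profile(signal, qs, scales):
    x = np.abs(np.asarray(signal, float)) + 1e-12
    out = []
    for q in qs:
        logZ = []
        for s in scales:
            n = (len(x) // s) * s
            p = x[:n].reshape(-1, s).sum(axis=1)
            p /= p.sum()                          # bin measure at scale s
            logZ.append(np.log(np.sum(p ** q)))   # partition function Z(q, s)
        logZ = np.array(logZ)
        slope, icept = np.polyfit(np.log(scales), logZ, 1)
        fit = slope * np.log(scales) + icept
        out.append(1 - np.sum((logZ - fit) ** 2) / np.sum((logZ - logZ.mean()) ** 2))
    return np.array(out)

rng = np.random.default_rng(0)
print(r2_profile(rng.standard_normal(4096), [-3, -2, 2, 3], [8, 16, 32, 64]))
```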
49 pages, 2023 KB  
Article
Secure Multiplicative Aggregation and Key-Reuse Optimization: Achieving Dropout Resilience with Amortized Efficiency
by Hongyuan Cai, Bei Liang, Yue Qin and Jintai Ding
Entropy 2026, 28(3), 358; https://doi.org/10.3390/e28030358 - 22 Mar 2026
Viewed by 195
Abstract
We present the first secure multiplicative aggregation protocol as a variant of secure aggregation. In this case, a server can compute the component-wise product of the input vectors of users while handling the possible dropout of users during protocol execution. Using pairwise masks, threshold secret sharing and the secure aggregation protocol itself, our construction is correct and secure against semi-honest adversaries. We also consider secure aggregation protocols for the case in which fixed users can reuse their private keys to do aggregation many times, and we propose key reusable secure aggregation protocols. Our protocols have an overhead polynomial in the number of users. We conduct a comprehensive evaluation of our proposed protocols. For the multiplicative aggregation protocol, experiments varying the number of users (K) from 50 to 300 (with fixed input size X_u = 100 KB) demonstrate that user computation scales monotonically with K and is largely insensitive to dropout rates. In contrast, server computation is highly dropout-sensitive and exhibits a steeper growth rate with respect to K. When varying the input size (10–250 KB) with a fixed K, both user and server communication overheads increase linearly, while server computation remains the primary bottleneck affected by dropouts. We compare reusable and non-reusable secure aggregation protocols over repeated interactions q ∈ {1, …, 10} at X_u = 100 KB and K = 100, showing that reusing Round 1 reduces the cumulative user computation time by about 2.5 times and reduces the cumulative server computation overhead by about 1.2 times at q = 10 while leaving the server communication overhead nearly unchanged, which indicates that the overall communication overhead is dominated by the non-reused rounds. Full article
(This article belongs to the Special Issue Secure Aggregation for Federated Learning and Distributed Computation)
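A toy sketch of the pairwise-mask idea for multiplicative aggregation, under simplifying assumptions of ours (scalar inputs in Z_p, semi-honest setting, no dropout recovery): each agreed mask appears once directly and once inverted, so only the product of inputs survives aggregation.

```python
import random

# Toy pairwise multiplicative masking in Z_p (assumed construction; semi-honest,
# scalar inputs, no dropout recovery): each agreed mask appears once directly
# and once inverted, so only the product of inputs survives aggregation.
P = 2**61 - 1  # a Mersenne prime modulus

def aggregate(inputs):
    n = len(inputs)
    s = [[random.randrange(1, P) for _ in range(n)] for _ in range(n)]
    masked = []
    for u, x in enumerate(inputs):
        m = x % P
        for v in range(n):
            if v > u:
                m = (m * s[u][v]) % P                 # shared mask s[u][v]
            elif v < u:
                m = (m * pow(s[v][u], P - 2, P)) % P  # its modular inverse
        masked.append(m)
    prod = 1
    for m in masked:                                  # server-side product
        prod = (prod * m) % P
    return prod

assert aggregate([3, 5, 7]) == 3 * 5 * 7
print("server recovers only the product of inputs")
```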
22 pages, 1269 KB  
Article
Mining the Collaborative Networks: A Machine Learning-Based Approach to Firm Innovation in the Digital Transformation Era
by Wenhao Zhou and Zhiwei Zhang
Entropy 2026, 28(3), 357; https://doi.org/10.3390/e28030357 - 22 Mar 2026
Viewed by 325
Abstract
Understanding how collaborative network structures and digital transformation jointly shape firm innovation has become a critical issue amid rapid technological change. Drawing on social network theory and a configurational perspective, this study investigates the nonlinear and interactive effects of collaborative network characteristics and digital transformation on firm innovation performance. Using patent data from Chinese listed manufacturing firms for the period between 2012 and 2022, inter-firm technological collaboration networks are constructed based on co-patenting relationships. A Classification and Regression Tree (CART) model is employed to uncover complex configurational patterns, complemented by regression-based robustness tests. The results reveal that innovation outcomes are not driven by single network attributes but by joint configurations of structural hole positions, centrality measures, and digital transformation. Among all factors, structural holes emerge as the most influential determinant. The findings further show that digital transformation interacts with network positions, generating multiple paths leading to high or low innovation performance. Model comparisons demonstrate that the CART approach outperforms traditional linear models in capturing nonlinear effects. This study contributes to the literature by highlighting the configurational logic of collaborative innovation and providing a machine learning-based framework for analyzing network–digital transformation interplay. Full article
(This article belongs to the Special Issue Dynamics in Biological and Social Networks, Second Edition)
21 pages, 6628 KB  
Article
Shannon Entropy of a Hydrogenic Impurity on a Conical Surface: Confinement and Aharonov–Bohm Effects
by Luis Manuel Arvizu, Eleuterio Castaño and Norberto Aquino
Entropy 2026, 28(3), 356; https://doi.org/10.3390/e28030356 - 22 Mar 2026
Viewed by 203
Abstract
In this work, we solve the Schrödinger equation for a hydrogenic impurity located at the apex of a right circular cone, with the electron constrained to move on the conical surface of semi-aperture angle θ_0 and subjected to an Aharonov–Bohm magnetic flux along the symmetry axis. Analytical expressions for the energy eigenvalues and normalized radial wave functions are obtained in terms of the principal quantum number n, the angular quantum number m, the magnetic flux ν, and the cone angle. The Shannon entropy is evaluated in both configuration and momentum spaces for several low-lying states, and its variation with ν and θ_0 is analyzed in detail. When the magnetic flux vanishes, pairs of states (n, m) and (n, −m) share the same entropic behavior; for finite flux, this degeneracy is lifted and the entropies depend explicitly on the state, the cone geometry, and the flux strength. Finally, we verify that the entropic sum S_r + S_p fulfills the Bialynicki-Birula–Mycielski bound, providing an information-theoretic consistency check for the model. Full article
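The entropic consistency check uses the Bialynicki-Birula–Mycielski inequality S_r + S_p ≥ D(1 + ln π); a quick numerical check for a 1-D Gaussian ground state (ħ = 1), which saturates the bound.

```python
import numpy as np

# Bialynicki-Birula–Mycielski check, S_r + S_p >= D*(1 + ln(pi)), for a 1-D
# Gaussian ground state (hbar = 1), which saturates the bound for any width.
sigma = 1.7                                            # position-space width
S_r = 0.5 * np.log(2 * np.pi * np.e * sigma**2)
S_p = 0.5 * np.log(2 * np.pi * np.e / (4 * sigma**2))  # momentum width 1/(2*sigma)
print(S_r + S_p, 1 + np.log(np.pi))                    # both equal 1 + ln(pi)
```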
18 pages, 3126 KB  
Article
SS-AdaMoE: Spatio-Spectral Adaptive Mixture of Experts with Global Structural Priors for Graph Node Classification
by Xilin Kang, Tianyue Yu, Letao Wang, Yutong Guo and Fengjun Zhang
Entropy 2026, 28(3), 355; https://doi.org/10.3390/e28030355 - 21 Mar 2026
Viewed by 213
Abstract
Graph Neural Networks (GNNs) have emerged as the standard for learning representations from graph-structured data. While traditional architectures relying on message-passing mechanisms excel in homophilic settings, they essentially function as fixed low-pass filters. However, this smoothing operation limits their ability to generalize to heterophilic graphs, where connected nodes often exhibit dissimilar labels and high-frequency signals are crucial for discrimination. Furthermore, existing Mixture-of-Experts (MoE) methods for graphs often suffer from local-view routing, failing to capture global structural context during expert selection. To address these challenges, this paper proposes SS-AdaMoE, a novel Spatio-Spectral Adaptive Mixture of Experts framework designed for robust node classification across diverse graph patterns. Specifically, a Dual-Domain Expert System is constructed, integrating heterogeneous spatial aggregators with learnable spectral filters based on Bernstein polynomials. This allows the model to adaptively capture arbitrary frequency responses—including high-pass and band-pass signals—which are overlooked by standard GNNs. To resolve the locality bias, a Hierarchical Global-Prior Gating Network augmented by a Linear Graph Transformer is introduced, ensuring that expert selection is guided by both local node features and global topological awareness. Extensive experiments are conducted on five benchmark datasets spanning both homophilic and heterophilic networks. The results demonstrate that SS-AdaMoE consistently outperforms baselines, achieving accuracy improvements of up to 2.65% on Chameleon and 1.41% on Roman-empire over the strongest MoE baseline, while surpassing traditional GCN architectures by margins exceeding 28% on heterophilic datasets such as Texas. These findings validate that the synergy of learnable spectral priors and global gating effectively bridges the gap between spatial aggregation and spectral filtering. Full article
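A sketch of a Bernstein-polynomial spectral response in the BernNet style (our assumption for how the learnable filters could look; the coefficients θ would be learned in practice).

```python
import numpy as np
from math import comb

# BernNet-style spectral response (assumed form): a degree-K Bernstein basis
# over normalized graph eigenvalues lam in [0, 2]; coefficients theta shape
# the filter into low-, high-, or band-pass behavior and would be learned.
def bernstein_response(lam, theta):
    K = len(theta) - 1
    x = np.asarray(lam) / 2.0                  # map spectrum [0, 2] -> [0, 1]
    return sum(t * comb(K, k) * x**k * (1 - x)**(K - k)
               for k, t in enumerate(theta))

lam = np.linspace(0, 2, 5)
print(bernstein_response(lam, [0, 0, 1, 0, 0]))  # band-pass-like bump at lam = 1
```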
42 pages, 3547 KB  
Article
Risk-Sensitive Machine Learning for Financial Decision Modeling Under Imbalanced Data: Evidence from Bank Telemarketing
by Bowen Dong, Xinyu Zhang, Yang Liu, Tianhui Zhang, Xianchen Liu, Lingmin Hou, Lingyi Meng, Zhen Guo and Aliya Mulati
Entropy 2026, 28(3), 354; https://doi.org/10.3390/e28030354 - 21 Mar 2026
Viewed by 363
Abstract
Bank telemarketing campaigns often experience low subscription rates due to customer heterogeneity and severe class imbalance, which pose challenges for reliable predictive modeling. This study investigates a data-driven approach that integrates synthetic minority oversampling and cost-sensitive learning to improve the prediction of telemarketing outcomes. Experiments are conducted using the Portuguese Bank Marketing dataset, comprising 41,188 instances with a positive response rate of 11.3%. Eight machine learning models are evaluated under a unified preprocessing pipeline and five-fold stratified cross-validation, including Logistic Regression, Decision Tree, Random Forest, and Ensemble methods. The results show that Ensemble models, particularly CatBoost, XGBoost, and LightGBM, achieve improved performance compared with traditional baselines, with notable gains in minority-class recall and overall discrimination ability. The best-performing model attains an F1-score of 0.540, a recall of 0.812 for the positive class, and a ROC–AUC of 0.908. To enhance interpretability, SHAP-based analysis is applied to quantify feature contributions, identifying campaign duration, previous contact outcomes, and selected macroeconomic indicators as key predictors. These findings indicate that combining resampling strategies with cost-sensitive optimization provides a robust and transparent approach for learning from imbalanced telemarketing data, thereby supporting reproducible and data-driven financial decision-making by explicitly addressing the difficulty of minority-class identification under cross-entropy training on imbalanced banking data. Full article
(This article belongs to the Special Issue Entropy in Machine Learning Applications, 2nd Edition)
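A hedged sketch of the oversampling-plus-cost-sensitive recipe using scikit-learn and imbalanced-learn; the synthetic data and settings are our assumptions, with weights chosen to mimic the dataset's 11.3% positive rate.

```python
from imblearn.over_sampling import SMOTE
from imblearn.pipeline import Pipeline
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import StratifiedKFold, cross_val_score

# Synthetic stand-in for the task: ~11.3% positives, as in the dataset.
X, y = make_classification(n_samples=5000, weights=[0.887], random_state=0)
pipe = Pipeline([
    ("smote", SMOTE(random_state=0)),      # synthetic minority oversampling
    ("clf", RandomForestClassifier(class_weight="balanced", random_state=0)),
])
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
print(cross_val_score(pipe, X, y, scoring="f1", cv=cv).mean())
```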
25 pages, 6261 KB  
Article
Stochastic and Statistical Analysis of Cnoidal, Snoidal, Dnoidal, Hyperbolic, Trigonometric and Exponential Wave Solutions of a Coupled Volatility Option-Pricing System
by L. M. Abdalgadir, Shabir Ahmad, Bakri Youniso and Khaled Aldwoah
Entropy 2026, 28(3), 353; https://doi.org/10.3390/e28030353 - 20 Mar 2026
Viewed by 234
Abstract
We investigate a stochastic coupled nonlinear Schrödinger (Manakov-type) system for option price and volatility wave fields within the Ivancevic adaptive-wave option-pricing paradigm, and derive exact wave families together with statistical diagnostics of the resulting dynamics. This system combines behavioral market effects with classical efficient-market dynamics and incorporates a controlled stochastic volatility component. Randomness in both the option price and volatility is incorporated via white noise, and a system of stochastic partial differential equations (PDEs) is developed that governs the joint evolution of option prices and stock price volatility. We derive advanced solutions of the proposed system using a newly created methodology. The obtained solutions are expressions of cnoidal, snoidal, dnoidal, hyperbolic, trigonometric, and exponential functions. The stochastic dynamical investigation, together with the statistical measures, is presented. The autocorrelation function (ACF) of squared returns for the obtained analytical solutions is demonstrated to show distinct differences in second-order temporal dependence, while asymmetries in the temporal evolution of the fluctuations are depicted via leverage correlation (LC). The probability distribution function (PDF) dynamics of the soliton solutions illustrate prominent temporal variability and non-stationary statistical dynamics. Differences in dynamical coupling between the two components of the considered system are presented via phase velocity cross-correlation analysis and are supported by phase difference dynamics visualizations. The strength and structure of coupling between components are displayed via the amplitude cross-correlation function. Mean amplitude dynamics and variance as functions of the noise intensity σ reveal the systematic influence of stochastic forcing on soliton energy and provide a quantitative measure of the stochastic dispersion of the soliton solutions. All the results are displayed in 3D and 2D graphs of the stochastic and statistical dynamics of the obtained solutions. Full article
(This article belongs to the Special Issue Stochastic Processes in Pricing Financial Derivatives)
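The ACF of squared returns is a standard estimator; a sketch (helper name and synthetic input are ours).

```python
import numpy as np

# Sample autocorrelation of squared returns, the second-order dependence
# diagnostic compared across the solution families (helper and input are ours).
def acf_squared(returns, max_lag=20):
    r2 = returns ** 2
    r2 = r2 - r2.mean()
    denom = float(np.dot(r2, r2))
    return np.array([np.dot(r2[:-k], r2[k:]) / denom for k in range(1, max_lag + 1)])

print(acf_squared(np.random.default_rng(0).standard_normal(5000))[:5])
```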
19 pages, 1416 KB  
Article
On the Communication–Key Rate Region of Hierarchical Vector Linear Secure Aggregation
by Jiawen Lv, Xiang Zhang and Zhou Li
Entropy 2026, 28(3), 352; https://doi.org/10.3390/e28030352 - 20 Mar 2026
Viewed by 174
Abstract
Motivated by heterogeneous data distributions and task-dependent aggregation requirements in federated learning, we study information-theoretic secure aggregation of linear functions over a two-hop hierarchical network. The system comprises an aggregation server, an intermediate layer of U relays, and U·V users, where each relay serves a disjoint cluster of V users. Each relay observes all uplink transmissions within its cluster and forwards a coded message to the server. The server is authorized to compute a prescribed linear function F of the users’ inputs with zero error, while being prevented from learning any additional information about an unauthorized linear function G. Moreover, each relay must obtain no information about any non-trivial linear function B_u of the inputs in its own cluster. We define the communication rates on both hops as the number of transmitted symbols per input symbol. By deriving matching information-theoretic converse and achievability bounds, we fully characterize the optimal communication rates and propose an explicit linear coding scheme that achieves the resulting optimal region. Our results demonstrate that hierarchical architectures can attain optimal communication rates while substantially reducing the server-side masking burden, thereby enabling scalable secure aggregation of authorized linear functions. Full article
(This article belongs to the Special Issue Secure Aggregation for Federated Learning and Distributed Computation)
23 pages, 7446 KB  
Article
MCMC Correction of Score-Based Diffusion Models for Model Composition
by Anders Sjöberg, Jakob Lindqvist, Magnus Önnheim, Mats Jirstrand and Lennart Svensson
Entropy 2026, 28(3), 351; https://doi.org/10.3390/e28030351 - 20 Mar 2026
Viewed by 242
Abstract
Diffusion models can be parameterized in terms of either a score or an energy function. The energy parameterization is attractive as it enables sampling procedures such as Markov Chain Monte Carlo (MCMC) that incorporate a Metropolis–Hastings (MH) correction step based on energy differences between proposed samples. Such corrections can significantly improve sampling quality, particularly in the context of model composition, where pre-trained models are combined to generate samples from novel distributions. Score-based diffusion models, on the other hand, are more widely adopted and come with a rich ecosystem of pre-trained models. However, they do not, in general, define an underlying energy function, making MH-based sampling inapplicable. In this work, we address this limitation by retaining the score parameterization and introducing a novel MH-like acceptance rule based on line integration of the score function. This allows the reuse of existing diffusion models while still combining the reverse process with various MCMC techniques, viewed as an instance of annealed MCMC. Through experiments on synthetic and real-world data, we show that our MH-like samplers yield relative improvements of similar magnitude to those observed with energy-based models, without requiring explicit energy parameterization. Full article
(This article belongs to the Section Statistical Physics)
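A 1-D sketch of the described acceptance rule: estimate log p(x′) − log p(x) by numerically integrating the score along the segment between the points, then apply the usual Metropolis test (proposal, step size, and quadrature are our choices).

```python
import numpy as np

# MH-like step from a score s(x) = d/dx log p(x): estimate the log-density
# difference by quadrature of the score along the segment x -> x_new, then
# apply the usual Metropolis test (1-D, symmetric Gaussian proposal).
def mh_step(x, score, step=0.5, n_quad=16, rng=np.random.default_rng(0)):
    x_new = x + step * rng.standard_normal()
    pts = x + np.linspace(0.0, 1.0, n_quad) * (x_new - x)
    log_ratio = np.trapz(score(pts), pts)         # line integral of the score
    return x_new if np.log(rng.random()) < min(0.0, log_ratio) else x

score = lambda x: -x                              # standard normal target
x = 3.0
for _ in range(1000):
    x = mh_step(x, score)
print(x)                                          # near the bulk of N(0, 1)
```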
24 pages, 423 KB  
Article
Exact Response Theory for Delay Equations
by Federico Gollinucci, Enrico Ortu and Lamberto Rondoni
Entropy 2026, 28(3), 350; https://doi.org/10.3390/e28030350 - 20 Mar 2026
Viewed by 205
Abstract
The exact response theory, also known as the Transient Time Correlation Function formalism, is a powerful method concerning how observables respond to a given perturbation of the dynamics of the systems of interest, and it extends linear response theory to generic (autonomous) dynamical systems. Its main ingredient is the so-called dissipation function. In this paper, we adapt this theory to time-lagged systems, and we illustrate its applicability by considering simple examples of delay equations with different memory terms. Adopting the technique already used for deterministic as well as stochastic time-dependent perturbations, the delay-dependent dynamics is mapped into an augmented, higher-dimensional phase space: the new dynamics is proven to be autonomous and suitable for the exact responses to be computed. In addition, we compare the linear and exact approaches for a specific kernel choice. Full article
(This article belongs to the Section Non-equilibrium Phenomena)
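The augmentation idea has a simple discrete analogue (our illustration, not the paper's construction): sampling the history of x′(t) = a·x(t) + b·x(t − τ) turns the delay equation into an autonomous finite-dimensional map.

```python
import numpy as np

# Discrete illustration: x'(t) = a*x(t) + b*x(t - tau) with the delay interval
# sampled into n buffer slots becomes an autonomous (n+1)-dimensional map.
a, b, tau, n = -1.0, 0.5, 1.0, 200
dt = tau / n
x = np.ones(n + 1)                 # x[0] = current state, x[1:] = history
for _ in range(5000):
    x_new = x[0] + dt * (a * x[0] + b * x[-1])   # x[-1] ~ x(t - tau)
    x = np.roll(x, 1)                            # age the history buffer
    x[0] = x_new
print(x[0])                        # decays toward 0 for these parameters
```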
3 pages, 144 KB  
Editorial
Can Information and Entropic Dynamics Bridge the Gap Between Biology and the Physical Sciences?
by Richard L. Summers and Melvin M. Vopson
Entropy 2026, 28(3), 349; https://doi.org/10.3390/e28030349 - 20 Mar 2026
Viewed by 282
Abstract
This Special Issue, entitled Entropy and Information in Biological Systems, comprises several unique contributions that attempt to bridge the gap between the fields of biology and the physical sciences [...] Full article
(This article belongs to the Special Issue Entropy and Information in Biological Systems)
17 pages, 1950 KB  
Article
Stark Many-Body Localization-Induced Quantum Mpemba Effect
by Yi-Rui Zhang, Han-Ze Li, Xu-Yang Huang, Yu-Jun Zhao and Jian-Xin Zhong
Entropy 2026, 28(3), 348; https://doi.org/10.3390/e28030348 - 19 Mar 2026
Viewed by 271
Abstract
The quantum Mpemba effect (QME) describes the counterintuitive phenomenon where a system initially further from equilibrium relaxes faster than one closer to it. Specifically, the QME associated with symmetry restoration has been extensively investigated across integrable, ergodic, and disordered localized systems. However, its fate in disorder-free ergodicity-breaking settings, such as the Stark many-body localized (Stark-MBL) phase, remains an open question. Here, we explore the dynamics of local U(1) symmetry restoration in a Stark-MBL XXZ spin-1/2 chain, using the Rényi-2 entanglement asymmetry (EA) as a probe. Using an analytical operator-string expansion supported by numerical simulations, we demonstrate that the QME transitions from an initial-state-dependent anomaly in the ergodic phase to a universal feature in the Stark-MBL regime. Moreover, the Mpemba time scales exponentially with the subsystem size, even in the absence of global transport, and is governed by high-order off-resonant processes. We attribute this robust inversion to a Stark-induced hierarchy of relaxation channels that fundamentally constrains the effective Hilbert space dimension. The findings pave the way for utilizing tunable potentials to engineer and control anomalous relaxation timescales in quantum technologies without reliance on quenched disorder. Full article
17 pages, 639 KB  
Article
Characterizing the Evolution of Inter-Actor Networks in the South China Sea Arbitration via Entropy-Driven Graph Representation Learning from Massive Media Event Data
by Menglan Ma, Hong Yu and Peng Fang
Entropy 2026, 28(3), 347; https://doi.org/10.3390/e28030347 - 19 Mar 2026
Viewed by 187
Abstract
On 12 July 2016, the ruling on the South China Sea Arbitration was announced and rapidly drew worldwide attention, turning the event into a major international hotspot. Quantifying the dynamics of such hotspot events and understanding the evolution of media-based inter-actor networks during major shocks are of substantial research interest. Viewing these interactions as dynamic networks, we analyze the time-varying actor interaction structure surrounding the arbitration using the Global Database of Events, Language, and Tone (GDELT), a large-scale media-based event database with global coverage since 1979. We extract nearly 30,000 events related to the arbitration from 5 July to 25 July 2016, constructing daily cooperation and conflict networks to quantify structural changes via network size and degree-entropy dynamics. To further reveal actor-level structural roles, we learn node embeddings on each daily network via an entropy-driven graph representation learning scheme and perform embedding-based clustering with automatically selected cluster numbers, visualized via t-SNE. The results show that key dates in the event window are associated with pronounced structural shifts in the networks, including changes in participation breadth, degree-distribution heterogeneity, and clearer differentiation and reconfiguration of actor roles, with distinct patterns between cooperation and conflict networks. These findings demonstrate the potential of massive media event data for characterizing structural responses and actor-role evolution in event-driven inter-actor networks. Full article
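Degree entropy is computed here in its standard form (a sketch using networkx; the random graph stands in for a daily GDELT network).

```python
import numpy as np
import networkx as nx

# Shannon entropy of the empirical degree distribution of a daily network
# (random graph as a stand-in for one day's GDELT interaction network).
def degree_entropy(G):
    degrees = np.array([d for _, d in G.degree()])
    _, counts = np.unique(degrees, return_counts=True)
    p = counts / counts.sum()
    return float(-(p * np.log(p)).sum())

print(degree_entropy(nx.barabasi_albert_graph(200, 2, seed=0)))
```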
19 pages, 1409 KB  
Article
A Q-Learning-Based Distributed Energy-Efficient Routing Protocol in UASNs
by Xuan Geng, Qingyuan Li, Xiaowei Pan and Fang Cao
Entropy 2026, 28(3), 346; https://doi.org/10.3390/e28030346 - 19 Mar 2026
Viewed by 265
Abstract
This paper proposes a Q-Learning-Based Distributed Energy-Efficient Routing (QDER) protocol for underwater acoustic sensor networks (UASNs). The routing problem is formulated as a Markov Decision Process (MDP) and a distributed Q-learning approach is proposed. Each sensor node is treated as an agent that independently selects its next-hop node based on a Q-table. A reward function is designed that jointly considers node residual energy and depth information, enabling each node to learn an effective routing policy through distributed decision-making. Unlike centralized routing approaches that rely on extensive global information exchange, the proposed scheme allows nodes to make local decisions, thereby reducing communication overhead and energy consumption while maintaining efficient routing paths. In addition, link quality is incorporated into the reward to account for channel conditions, which improves the robustness of the routing strategy in noisy underwater acoustic environments. Simulation results demonstrate that QDER achieves better system performance compared with Depth-Based Routing (DBR) and Deep Q-Network-Based Intelligent Routing (DQIR). Considering channel attenuation and noise, the proposed method with the link quality metric achieves improved network lifetime and energy efficiency. It also shows good robustness and adaptability under different signal-to-noise ratio (SNR) conditions. Full article
(This article belongs to the Special Issue Space-Air-Ground-Sea Integrated Communication Networks)
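A minimal sketch of the per-node tabular update; the reward ingredients follow the abstract (residual energy, depth, link quality), while the weights and helper names are our assumptions.

```python
# Minimal per-node Q-table update for QDER-style routing; the reward weights
# below are assumptions for illustration.
def reward(residual_energy, depth_gain, link_quality, w=(0.4, 0.4, 0.2)):
    return w[0] * residual_energy + w[1] * depth_gain + w[2] * link_quality

def q_update(Q, node, next_hop, next_hop_neighbors, r, alpha=0.1, gamma=0.9):
    best_next = max((Q[next_hop][n] for n in next_hop_neighbors), default=0.0)
    Q[node][next_hop] += alpha * (r + gamma * best_next - Q[node][next_hop])

Q = {"n1": {"n2": 0.0}, "n2": {"sink": 0.0}, "sink": {}}
q_update(Q, "n1", "n2", ["sink"], reward(0.8, 0.5, 0.9))
print(Q["n1"]["n2"])
```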
15 pages, 377 KB  
Article
Planar Black Holes and Entanglement Entropy in Analog Gravity Models
by Neven Bilic and Tobias Zingg
Entropy 2026, 28(3), 345; https://doi.org/10.3390/e28030345 - 19 Mar 2026
Viewed by 267
Abstract
By constructing an explicit Lagrangian for which the perturbation equations are analogs of a scalar field propagating in a planar black-hole space–time, it is found that all planar black holes conformal to a Painlevé–Gullstrand-type line element can be realized as analog metrics. This holds for an arbitrary choice of conformal and blackening factor, thereby vastly extending the set of explicitly known analog metrics. We also introduce the concept of holographic entanglement entropy for planar black-hole space–times. Full article
(This article belongs to the Special Issue Coarse and Fine-Grained Aspects of Gravitational Entropy)
30 pages, 4894 KB  
Article
Comparing Ising and Spin Glass Dynamics in Financial Markets: A Complex Systems Approach to Asset Interdependence
by Irina Georgescu and Jani Kinnunen
Entropy 2026, 28(3), 344; https://doi.org/10.3390/e28030344 - 19 Mar 2026
Cited by 1 | Viewed by 397
Abstract
This paper analyzes financial market interdependence from a statistical-physics perspective by comparing Ising and spin glass representations of asset interactions. Financial markets are modeled as complex systems in which collective behavior emerges from time-varying interaction structures. Using daily data for a diversified 15-asset commodity system, including precious metals, energy commodities, industrial metals and soft commodities, over the period 2020–2024, we construct rolling coupling matrices based on both linear correlations and nonlinear mutual information and embed them into Ising and Sherrington–Kirkpatrick-type interaction frameworks. While aggregate synchronization indicators—such as average coupling strength and the largest eigenvalue—exhibit similar dynamics across the two representations, the spin glass framework reveals substantially richer structural heterogeneity. Preserving the sign structure of the interactions leads to wider dispersion, higher variability and nontrivial network configurations that are suppressed in the Ising representation. The results identify the Ising model as a benchmark for market coherence. The spin glass model is essential for capturing heterogeneous interactions and nonlinear dependence in financial markets. Full article
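A sketch of the aggregate synchronization indicators: rolling correlation couplings J_ij and the largest eigenvalue per window (window length and the synthetic returns are our choices).

```python
import numpy as np

# Rolling coupling matrices J_ij from windowed correlations and the largest
# eigenvalue per window (two aggregate synchronization indicators; window
# length and the synthetic returns are illustrative).
def rolling_coupling_spectra(returns, window=60):
    T, N = returns.shape
    out = []
    for t in range(window, T):
        J = np.corrcoef(returns[t - window:t].T)
        np.fill_diagonal(J, 0.0)                  # no self-coupling
        out.append((np.abs(J).mean(), np.linalg.eigvalsh(J)[-1]))
    return np.array(out)                          # columns: mean |J|, lambda_max

rng = np.random.default_rng(0)
print(rolling_coupling_spectra(rng.standard_normal((300, 15)))[:3])
```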
25 pages, 3042 KB  
Article
Quantifying Epistemic Uncertainty in Multimodal Long-Tailed Classification: A Belief Entropy-Based Evidential Fusion Framework
by Guorui Zhu
Entropy 2026, 28(3), 343; https://doi.org/10.3390/e28030343 - 19 Mar 2026
Viewed by 292
Abstract
Deep multimodal learning has excelled in tasks involving vision, language, and audio modalities. Nevertheless, performance on tail classes degrades significantly under the long-tailed distributions common in real-world data, while related fusion schemes often provide only limited treatment of modality-specific uncertainty and rarely incorporate explicit mechanisms for class-level fairness. To address these gaps, we present a framework that integrates evidential reasoning with deep learning: Uncertainty-Quantified Multimodal Learning for Long-Tailed Classification (UMuLT). The framework includes: (i) an uncertainty-gated evidential fusion module that adaptively down-weights unreliable modalities; (ii) an exponential moving average (EMA) fairness regularizer that dynamically amplifies tail-class gradients; and (iii) a cross-modal consistency regularizer optimized in two stages: tail specialization with lightweight adapters on tail-class data to obtain a balanced initialization, followed by end-to-end fine-tuning. The effectiveness and practicality of our method are verified on three long-tailed benchmarks for multimodal classification. Experiments show consistent gains over strong baselines in overall metrics, calibration, and tail subset performance. Statistical significance tests confirm the superiority of the proposed framework. Full article
(This article belongs to the Section Signal and Data Analysis)
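A generic sketch of uncertainty-gated evidential fusion in the subjective-logic style; this is our assumed rule for illustration, not necessarily UMuLT's exact formulation.

```python
import numpy as np

# Subjective-logic-style evidential fusion sketch (assumed rule, not
# necessarily UMuLT's): per-modality Dirichlet evidence e over K classes gives
# uncertainty mass u = K / S; less certain modalities get down-weighted.
def fuse(evidences):
    K = len(evidences[0])
    fused = np.zeros(K)
    for e in evidences:
        S = e.sum() + K                 # Dirichlet strength
        u = K / S                       # epistemic uncertainty mass
        fused += (1.0 - u) * (e / S)    # belief masses, gated by certainty
    return fused / fused.sum()

# a confident visual modality vs. a weak audio modality
print(fuse([np.array([9.0, 1.0, 0.0]), np.array([0.2, 0.3, 0.1])]))
```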
22 pages, 4742 KB  
Article
PromptSeg: An End-to-End Universal Medical Image Segmentation Method via Visual Prompts
by Minfan Zhao, Bingxun Wang, Jun Shi and Hong An
Entropy 2026, 28(3), 342; https://doi.org/10.3390/e28030342 - 18 Mar 2026
Viewed by 254
Abstract
Deep learning has achieved remarkable advancements in medical image segmentation, yet its generalization capability across unseen tasks remains a significant challenge. The variety of task objectives, disease-dependent labeling variations, and multi-center data contribute to the high uncertainty of task-specific models on unseen distributions. In this study, we propose PromptSeg, an innovative Transformer-based unified framework for universal 2D medical image segmentation. From an information-theoretic perspective, PromptSeg formulates the segmentation process as a conditional entropy minimization problem, utilizing visual prompts as side information to reduce the uncertainty of the target task. Guided by the information bottleneck principle, PromptSeg aims to utilize the provided visual prompts to filter out redundant noise and learn contextual representations, thereby breaking the restrictions of the task-specific paradigm. When faced with unseen datasets or segmentation targets, our method only requires a few annotated visual prompt pairs to extract task-specific semantics and segment the query images without retraining. Extensive experiments on CT and MRI datasets demonstrate that PromptSeg not only outperforms state-of-the-art methods but also exhibits strong multi-modality generalization capabilities. Full article
20 pages, 1252 KB  
Article
Tail-Latency-Aware Federated Learning with Pinching Antenna: Latency, Participation, and Placement
by Yushen Lin and Zhiguo Ding
Entropy 2026, 28(3), 341; https://doi.org/10.3390/e28030341 - 18 Mar 2026
Viewed by 188
Abstract
Straggler synchronization is a dominant wall-clock bottleneck in synchronous wireless federated learning (FL). Under non-IID data, however, aggressively sampling only fast clients may significantly slow convergence due to statistical heterogeneity. This paper studies PASS-enabled FL, where a radiating pinching antenna (PA) can be activated at an arbitrary position along a dielectric waveguide to reshape uplink latencies. We consider a joint optimization of PA placement and client participation to minimize a proxy for time-to-accuracy, coupling the exact expected maximum round latency via order statistics with a heterogeneity-aware statistical-efficiency proxy. We derive first-order optimality conditions that reveal an explicit tail-latency premium in the KKT recursion, quantifying how latency gaps are amplified by maximum-order-statistic synchronization. Under a latency-class structure, we obtain a within-class square-root sampling law and establish a two-class phase transition where slow-class participation collapses under an explicit heterogeneity-threshold condition as the per-round sample size grows. For PA placement, we prove a piecewise envelope-derivative characterization and provide an exact breakpoint-and-root candidate-enumeration procedure. Simulation results validate the structural findings and show that PASS enables more eligible participation, yielding higher wall-clock accuracy. Full article
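The coupling of round time to the maximum order statistic has a compact closed form under an illustrative i.i.d. exponential latency assumption: E[max of k] = H_k/λ, the harmonic-number growth behind the tail-latency premium.

```python
import numpy as np

# Expected synchronous round time when waiting for the slowest of k clients
# with i.i.d. Exp(rate) latencies: E[max] = H_k / rate (harmonic number).
def expected_max_exp(k, rate=1.0):
    return sum(1.0 / i for i in range(1, k + 1)) / rate

rng = np.random.default_rng(0)
k = 20
mc = rng.exponential(scale=1.0, size=(200_000, k)).max(axis=1).mean()
print(expected_max_exp(k), mc)   # analytic ~3.5977 vs Monte Carlo estimate
```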
9 pages, 227 KB  
Article
Blocked Two-Level Regular Designs with Individual Aliased Effect Number Pattern
by Min Han, Shengli Zhao and Tao Sun
Entropy 2026, 28(3), 340; https://doi.org/10.3390/e28030340 - 18 Mar 2026
Viewed by 189
Abstract
This paper proposes a blocked individual aliased effect number pattern (BI-AENP) for regular blocked designs and establishes its relationships with the core patterns of several existing optimality criteria. We develop an algorithm to compute the BI-AENP. A catalogue of 16-, 32-, and 64-run BI-AENP 2^(n−k):2^r designs is presented, together with comparisons with the minimum aberration and clear effects criteria. Full article
11 pages, 9243 KB  
Article
Reversal of the Skin Effect in Disordered Non-Hermitian Systems
by Xiansheng Zeng
Entropy 2026, 28(3), 339; https://doi.org/10.3390/e28030339 - 18 Mar 2026
Viewed by 291
Abstract
Non-Hermitian systems under nonreciprocity-induced evolution present an exotic phenomenon, known as the non-Hermitian skin effect. Yet, the control mechanisms and the generalized Brillouin zone in disordered systems have not been fully understood. Here, using Floquet quantum-walk models with disorder, we demonstrate effective control of the direction of the non-Hermitian skin effect in both one- and two-dimensional systems. Once the disorder strength exceeds a critical threshold, the direction of the skin effect is reversed. We develop a modified generalized Brillouin zone theory that correctly predicts skin effect reversal. Furthermore, we also investigate how the direction of the non-Hermitian skin effect depends on the disorder strength in each subsystem of the two-dimensional quantum walk. Our work paves the way for the design of quantum transport devices in quantum simulation platforms. Full article
(This article belongs to the Section Quantum Information)
17 pages, 3230 KB  
Article
Semi-Supervised Graph Attention Network for Screw Pump Fault Diagnosis: Revealing the Dynamic Coupling of Multi-Source Information
by Weigang Wen, Jingqi Qin and Qiuying Chang
Entropy 2026, 28(3), 338; https://doi.org/10.3390/e28030338 - 18 Mar 2026
Viewed by 209
Abstract
The screw pump is a critical device for lifting downhole petroleum to the surface, and screw pump failure can significantly disrupt oil well production. Due to the complex structure of the screw pump, the same pump fault can cause different changes in the monitoring parameters, and different faults can also cause the same parameter change. As a consequence of this complexity, a diagnosis model requires a large amount of labeled data to achieve fault diagnosis of a screw pump in practical applications. To address this condition, we identified a dynamic coupling effect between multi-source information through detailed study of collected screw pump data. To fully leverage this information dynamic coupling (IDC) effect, a semi-supervised learning graph attention network (SSL-GAT) fault diagnosis method is proposed. This approach integrates a semi-supervised learning framework and a graph attention neural network for the fault diagnosis of screw pumps. Experimental validation of the SSL-GAT method demonstrates its outstanding performance in screw pump fault diagnosis. Full article
25 pages, 27044 KB  
Article
Joint Model Partitioning and Bandwidth Allocation for UAV-Assisted Space–Air–Ground–Sea Integrated Network: A Hybrid A3C-PPO Approach
by Yuanmo Lin, Yuanyuan Han, Minmin Wu, Shaoyu Lin, Xia Zhang and Zhiyong Xu
Entropy 2026, 28(3), 337; https://doi.org/10.3390/e28030337 - 18 Mar 2026
Viewed by 207
Abstract
Unmanned Aerial Vehicle (UAV)-assisted mobile edge computing is pivotal for the Space–Air–Ground–Sea Integrated Network (SAGSIN) to support heterogeneous task offloading. However, the inherent resource constraints of UAVs limit their ability to support intensive and concurrent task processing in dynamic environments. In such complex scenarios, the dual requirements of discrete model partitioning and continuous bandwidth allocation make it difficult for traditional reinforcement learning algorithms to achieve optimal resource matching. Therefore, in this paper, we design a joint optimization framework based on Asynchronous Advantage Actor-Critic (A3C) and proximal policy optimization (PPO). Specifically, the model partitioning strategy is learned through PPO, which utilizes a clipped objective function to ensure training stability and generalization across complex Deep Neural Network (DNN) structures. Moreover, the framework leverages the asynchronous multi-threaded architecture of A3C to dynamically allocate bandwidth, effectively accommodating rapid fluctuations in terminal access. Finally, to prevent resource monopolization and ensure fairness, a weighted priority scheduling mechanism based on task urgency and computation time is introduced. Extensive simulations show that the proposed algorithm outperforms existing approaches in terms of task completion rate, task processing latency, and resource utilization under dynamic SAGSIN scenarios. Full article
(This article belongs to the Special Issue Space-Air-Ground-Sea Integrated Communication Networks)
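The PPO clipped surrogate objective used for the partitioning policy, in its standard form (batch shapes and ε are assumptions).

```python
import numpy as np

# Standard PPO clipped surrogate loss (to be minimized); inputs are per-sample
# log-probabilities under the new and old policies and advantage estimates.
def ppo_clip_loss(logp_new, logp_old, advantages, eps=0.2):
    ratio = np.exp(logp_new - logp_old)               # pi_new / pi_old
    clipped = np.clip(ratio, 1.0 - eps, 1.0 + eps)
    return -np.mean(np.minimum(ratio * advantages, clipped * advantages))

print(ppo_clip_loss(np.array([-0.9]), np.array([-1.1]), np.array([1.0])))
```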
23 pages, 3177 KB  
Article
Weighted Copula Entropy for Structural Pruning in Long-Tailed Autonomous Driving Object Detection
by Yue Zhou, Jihui Ma and Honghui Dong
Entropy 2026, 28(3), 336; https://doi.org/10.3390/e28030336 - 17 Mar 2026
Viewed by 271
Abstract
In autonomous driving, deep convolutional neural networks face a core conflict between computational efficiency and safety-critical robustness on resource-constrained onboard computing units. Dominant structural pruning, based on weight magnitude or geometric statistics, fails in long-tailed traffic scenarios by equating parameter magnitude with feature importance and pruning critical filters in the tail classes. To address this, we propose a structural pruning framework that evaluates the semantic utility of features using weighted copula entropy rather than relying solely on their magnitude. Our novel approach integrates Elastic Net regularization for inducing sparsity and weighted copula entropy for unbiased information-theoretic feature selection. By incorporating inverse class frequency weighting into empirical Copula estimation, we decouple feature relevance from sample abundance, ensuring the preservation of rare-class discriminators based on their information content rather than occurrence frequency. Furthermore, this metric is embedded into an enhanced max-relevance and min-redundancy algorithm to eliminate semantic redundancy while maintaining representational diversity. Extensive experiments on the BDD100K dataset with YOLOv5l and YOLOv8l architectures demonstrate that, at a 50% pruning rate, the proposed method reduces FLOPs and parameters by nearly 50%, with only 0.09% mAP@0.5 loss for YOLOv5l and 0.14% mAP@0.5 loss for YOLOv8l, while significantly improving the mAP of the extreme tail class Train from 0% to 3.84% and 2.76% to 5.12%, respectively. It achieves a more favorable trade-off between detection accuracy and computational efficiency than mainstream pruning approaches. This work provides a lightweight scheme for autonomous driving perception models and a new information-theoretic perspective for structured network pruning. Full article
(This article belongs to the Section Multidisciplinary Applications)
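A binned sketch of a weighted empirical-copula entropy estimate; the inverse class-frequency weighting follows the abstract, while the estimator details are our assumptions.

```python
import numpy as np
from scipy.stats import rankdata

# Binned weighted empirical-copula entropy (estimator details are assumed;
# inverse class-frequency weights follow the abstract). Lower entropy of the
# copula mass indicates stronger dependence between the two features.
def weighted_copula_entropy(x, y, labels, bins=10):
    u = rankdata(x) / (len(x) + 1)                # empirical copula coordinates
    v = rankdata(y) / (len(y) + 1)
    classes, counts = np.unique(labels, return_counts=True)
    w_cls = dict(zip(classes, 1.0 / counts))      # inverse class frequency
    w = np.array([w_cls[c] for c in labels])
    H, _, _ = np.histogram2d(u, v, bins=bins, range=[[0, 1], [0, 1]], weights=w)
    p = H / H.sum()
    p = p[p > 0]
    return float(-(p * np.log(p)).sum())
```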
13 pages, 274 KB  
Article
Modified Bekenstein Hawking Entropy of Five-Dimensioned Static Multi-Charge AdS Black Holes in Gauged Supergravity Theory
by Cong Wang and Shu-Zheng Yang
Entropy 2026, 28(3), 335; https://doi.org/10.3390/e28030335 - 17 Mar 2026
Viewed by 243
Abstract
Considering the dynamics of spin-1/2 fermions in higher-dimensional static multi-charge black holes in gauged supergravity theory, and taking into account Lorentz symmetry breaking and quantum perturbation theory, this study derives new expressions for the Hawking temperature and Bekenstein–Hawking entropy of such black holes based on WKB theory, quantum tunneling radiation theory, and the laws of black hole thermodynamics. The physical significance of the research methods used in this paper and of the related results is analyzed. Furthermore, an in-depth discussion is provided regarding the implications of this work for addressing related problems in higher-dimensional curved spacetime. Full article
(This article belongs to the Section Astrophysics, Cosmology, and Black Holes)
17 pages, 673 KB  
Article
An Information-Theoretic Analysis of High-Frequency Load Disaggregation
by Gabriel Arquelau Pimenta Rodrigues, André Luiz Marques Serrano, Geraldo Pereira Rocha Filho, Vinícius Pereira Gonçalves and Rodolfo Ipolito Meneguette
Entropy 2026, 28(3), 334; https://doi.org/10.3390/e28030334 - 17 Mar 2026
Viewed by 303
Abstract
High-frequency non-intrusive load monitoring (NILM) provides detailed harmonic information for appliance power disaggregation, and machine-learning approaches have demonstrated good performance in this task. However, these methods provide little transparency regarding the information structure of the aggregate signal. To address this, this paper models NILM as a coding-decoding process and applies information-theoretic measures to quantify uncertainty, recoverability, temporal contribution, and inter-appliance masking effects in aggregate signals. In the analyzed dataset, transfer entropy suggests negligible temporal gains, which is consistent with the observed effectiveness of pointwise models such as Random Forest. Moreover, conditional mutual information emphasizes the asymmetric masking relationships between appliances, with the laptop charger acting as a dominant interferer in the considered measurements. These findings are validated through a Random Forest regression model with minimum Redundancy Maximum Relevance feature selection. The results show that the mutual information between an appliance and the aggregate is a good predictor of disaggregation performance in the examined data, as appliances with high mutual information, such as the hair dryer and electric water heater, achieve lower estimation errors, while others, such as the iron, are difficult to recover despite stable distributions. This relationship is statistically supported by a strong negative monotonic correlation between normalized mutual information and the disaggregation error (Spearman r_s = −0.81, p = 0.015). Hence, this work demonstrates how information-theoretic analysis can help characterize disaggregation difficulty prior to model training and assess the observability of appliances in high-frequency NILM. Full article
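A sketch of the observability diagnostic: normalized mutual information between an appliance channel and the aggregate (binning is our assumption), to be correlated against per-appliance errors via Spearman's test.

```python
import numpy as np
from scipy.stats import spearmanr
from sklearn.metrics import mutual_info_score

# Normalized mutual information between an appliance channel and the
# aggregate signal (binning choice is our assumption).
def normalized_mi(a, b, bins=32):
    a_d = np.digitize(a, np.histogram_bin_edges(a, bins))
    b_d = np.digitize(b, np.histogram_bin_edges(b, bins))
    mi = mutual_info_score(a_d, b_d)
    ha, hb = mutual_info_score(a_d, a_d), mutual_info_score(b_d, b_d)
    return mi / np.sqrt(ha * hb)                 # self-MI equals entropy

rng = np.random.default_rng(0)
appliance = rng.gamma(2.0, size=4096)
aggregate = appliance + rng.gamma(2.0, size=4096)   # appliance + other loads
print(normalized_mi(appliance, aggregate))
# with per-appliance NMIs and errors in hand: rho, p = spearmanr(nmis, errors)
```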