Entropy, Volume 27, Issue 3 (March 2025) – 109 articles

Cover Story: In this figure, graphite foils (carbon atoms in grey) are depicted in contact with H₂ gas (shown in pink closer to the surface and in blue farther from it). The molecules are mobile on the surface. Molecular adsorption and desorption continuously occur, resulting in opposite unidirectional fluxes, symbolized by red and blue arrows. Molecular dynamics simulations show that the fluxes are proportional to the activity of the gas and the adsorbed phase. This differs from the well-known Langmuir adsorption kinetics. New analytical expressions are derived to model adsorption/desorption kinetics and clarify the link with non-equilibrium thermodynamics. The analysis sheds new light on adsorption/desorption dynamics in both equilibrium and non-equilibrium conditions.
  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive the tables of contents of newly released issues.
  • PDF is the official format for papers, which are published in both HTML and PDF forms. To view a paper in PDF format, click on the "PDF Full-text" link and use the free Adobe Reader to open it.
25 pages, 23174 KiB  
Article
Optimal Scheduling of Energy Systems for Gas-to-Methanol Processes Using Operating Zone Models and Entropy Weights
by Xueteng Wang, Mengyao Wei, Jiandong Wang and Yang Yue
Entropy 2025, 27(3), 324; https://doi.org/10.3390/e27030324 - 20 Mar 2025
Viewed by 315
Abstract
In coal chemical industries, the optimal allocation of gas and steam is crucial for enhancing production efficiency and maximizing economic returns. This paper proposes an optimal scheduling method using operating zone models and entropy weights for an energy system in a gas-to-methanol process. The first step is to develop mechanistic models for the main facilities in methanol production, namely desulfurization, air separation, syngas compressors, and steam boilers. A genetic algorithm is employed to estimate the unknown parameters of the models. These models are grounded in physical mechanisms such as energy conservation, mass conservation, and thermodynamic laws. A multi-objective optimization problem is formulated, with the objectives of minimizing gas loss, steam loss, and operating costs. The required operating constraints include equipment capacities, energy balance, and energy coupling relationships. The entropy weights are then employed to convert this problem into a single-objective optimization problem. The second step is to solve the optimization problem based on an operating zone model, which describes a high-dimensional geometric space consisting of all steady-state data points that satisfy the operation constraints. By projecting the operating zone model onto the decision-variable plane, an optimal scheduling solution is obtained in a visual manner with contour lines and auxiliary lines. Case studies based on Aspen HYSYS are used to support and validate the effectiveness of the proposed method. Full article
(This article belongs to the Section Multidisciplinary Applications)
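A minimal sketch of the entropy-weight step is given below: it shows how entropy weights collapse several objectives (here gas loss, steam loss, and operating cost) into a single score. The decision matrix, function names, and numbers are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def entropy_weights(X):
    """Entropy-weight method: rows of X are candidate operating points,
    columns are objectives to minimize (gas loss, steam loss, cost)."""
    P = X / X.sum(axis=0)                            # column-wise proportions
    n = X.shape[0]
    E = -(P * np.log(P)).sum(axis=0) / np.log(n)     # per-objective entropy in [0, 1]
    d = 1.0 - E                                      # lower entropy -> more informative
    return d / d.sum()

# Illustrative data: 4 candidate schedules x 3 objectives (all to minimize).
X = np.array([[1.2, 0.8, 310.0],
              [1.0, 1.1, 295.0],
              [0.9, 0.9, 305.0],
              [1.4, 0.7, 290.0]])
w = entropy_weights(X)
scores = X @ w                 # scalarized single-objective values
print(w, scores.argmin())      # weights and the best candidate schedule
```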
24 pages, 691 KiB  
Article
Goalie: Defending Against Correlated Value and Sign Encoding Attacks
by Rongfei Zhuang, Ximing Fu, Chuanyi Liu, Peiyi Han and Shaoming Duan
Entropy 2025, 27(3), 323; https://doi.org/10.3390/e27030323 - 20 Mar 2025
Viewed by 251
Abstract
In this paper, we propose a method, namely Goalie, to defend against the correlated value and sign encoding attacks used to steal shared data from data trusts. Existing methods prevent these attacks by perturbing model parameters, gradients, or training data while significantly degrading model performance. To guarantee the performance of the benign models, Goalie detects the malicious models and stops their training. The key insight of detection is that encoding additional information in model parameters through regularization terms changes the parameter distributions. Our theoretical analysis suggests that the regularization terms lead to the differences in parameter distributions between benign and malicious models. According to the analysis, Goalie extracts features from the parameters in the early training epochs of the models and uses these features to detect malicious models. The experimental results show the high effectiveness and efficiency of Goalie. The accuracy of Goalie in detecting the models with one regularization term is more than 0.9, and Goalie has high performance in some extreme situations. Meanwhile, Goalie takes only 1.1 ms to detect a model using the features extracted from the first 30 training epochs. Full article
(This article belongs to the Special Issue Applications of Information Theory to Machine Learning)
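The core detection idea (encoding extra information through regularization terms shifts the distribution of model parameters, so early-epoch parameter statistics can separate benign from malicious models) can be sketched roughly as follows. The synthetic data, the four distribution features, and the classifier are stand-ins chosen for illustration; they are not Goalie's actual feature set or detector.

```python
import numpy as np
from scipy.stats import kurtosis, skew
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

def param_features(weights_per_epoch):
    """Summarize the parameter distribution over early training epochs."""
    feats = []
    for w in weights_per_epoch:
        feats += [w.mean(), w.std(), skew(w), kurtosis(w)]
    return np.array(feats)

def fake_run(malicious, epochs=30, n=1000):
    """Synthetic stand-in: benign parameters are Gaussian; an encoding
    regularizer is assumed to leave heavier tails (purely illustrative)."""
    return [rng.standard_t(3, n) * 0.1 if malicious else rng.normal(0, 0.1, n)
            for _ in range(epochs)]

runs = [fake_run(False) for _ in range(40)] + [fake_run(True) for _ in range(40)]
X = np.stack([param_features(r) for r in runs])
y = [0] * 40 + [1] * 40

clf = RandomForestClassifier(random_state=0).fit(X, y)
print(clf.predict([param_features(fake_run(True))]))   # expect [1] (malicious)
```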
15 pages, 818 KiB  
Article
DeeWaNA: An Unsupervised Network Representation Learning Framework Integrating Deepwalk and Neighborhood Aggregation for Node Classification
by Xin Xu, Xinya Lu and Jianan Wang
Entropy 2025, 27(3), 322; https://doi.org/10.3390/e27030322 - 20 Mar 2025
Viewed by 252
Abstract
This paper introduces DeeWaNA, an unsupervised network representation learning framework that unifies random walk strategies and neighborhood aggregation mechanisms to improve node classification performance. Unlike existing methods that treat these two paradigms separately, our approach integrates them into a cohesive model, addressing limitations in structural feature extraction and neighborhood relationship modeling. DeeWaNA first leverages DeepWalk to capture global structural information and then employs an attention-based weighting mechanism to refine neighborhood relationships through a novel distance metric. Finally, a weighted aggregation operator fuses these representations into a unified low-dimensional space. By bridging the gap between random-walk-based and neural-network-based techniques, our framework enhances representation quality and improves classification accuracy. Extensive evaluations on real-world networks demonstrate that DeeWaNA outperforms four widely used unsupervised network representation learning methods, underscoring its effectiveness and broader applicability. Full article
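A rough sketch of the two ingredients combined by the framework follows: DeepWalk-style truncated random walks embedded with Word2Vec, then a neighborhood-aggregation pass. The paper's attention-based weighting and novel distance metric are replaced here by a plain unweighted neighbor mean, and the graph and hyperparameters are illustrative.

```python
import networkx as nx
import numpy as np
from gensim.models import Word2Vec

G = nx.karate_club_graph()          # stand-in network
rng = np.random.default_rng(0)

def random_walks(G, num_walks=10, walk_len=20):
    walks = []
    for _ in range(num_walks):
        for v in G.nodes():
            walk = [v]
            for _ in range(walk_len - 1):
                walk.append(rng.choice(list(G.neighbors(walk[-1]))))
            walks.append([str(u) for u in walk])
    return walks

# Step 1: DeepWalk captures global structure.
model = Word2Vec(random_walks(G), vector_size=32, window=5, min_count=0, sg=1)
emb = {v: model.wv[str(v)] for v in G.nodes()}

# Step 2: fuse each node with its neighborhood (unweighted stand-in for
# the paper's attention-based weighting and distance metric).
agg = {v: (emb[v] + np.mean([emb[u] for u in G.neighbors(v)], axis=0)) / 2
       for v in G.nodes()}
```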
17 pages, 461 KiB  
Article
Weibull-Type Incubation Period and Time of Exposure Using γ-Divergence
by Daisuke Yoneoka, Takayuki Kawashima, Yuta Tanoue, Shuhei Nomura and Akifumi Eguchi
Entropy 2025, 27(3), 321; https://doi.org/10.3390/e27030321 - 19 Mar 2025
Viewed by 196
Abstract
Accurately determining the exposure time to an infectious pathogen, together with the corresponding incubation period, is vital for identifying infection sources and implementing targeted public health interventions. However, real-world outbreak data often include outliers—namely, tertiary or subsequent infection cases not directly linked to the initial source—that complicate the estimation of exposure time. To address this challenge, we introduce a robust estimation framework based on a three-parameter Weibull distribution in which the location parameter naturally corresponds to the unknown exposure time. Our method employs a γ-divergence criterion—a robust generalization of the standard cross-entropy criterion—optimized via a tailored majorization–minimization (MM) algorithm designed to guarantee a monotonic decrease in the objective function despite the non-convexity typically present in robust formulations. Extensive Monte Carlo simulations demonstrate that our approach outperforms conventional estimation methods in terms of bias and mean squared error as well as in estimating the incubation period. Moreover, applications to real-world surveillance data on COVID-19 illustrate the practical advantages of the proposed method. These findings highlight the method’s robustness and efficiency in scenarios where data contamination from secondary or tertiary infections is common, showing its potential value for early outbreak detection and rapid epidemiological response. Full article
(This article belongs to the Special Issue Entropy in Biomedical Engineering, 3rd Edition)
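For readers unfamiliar with the criterion, a sketch of γ-divergence-based fitting of a three-parameter Weibull density follows, with the location parameter playing the role of the unknown exposure time. The paper optimizes an objective of this type with a tailored MM algorithm; the sketch substitutes a generic Nelder–Mead search, and the data, γ value, and starting point are illustrative assumptions.

```python
import numpy as np
from scipy.integrate import quad
from scipy.optimize import minimize
from scipy.stats import weibull_min

def neg_gamma_objective(theta, x, gam=0.5):
    """Negative empirical gamma-cross-entropy of a 3-parameter Weibull:
    maximize (1/g) log mean(f^g) - 1/(1+g) log int f^(1+g)."""
    k, lam, mu = theta                  # shape, scale, location (= exposure time)
    if k <= 0 or lam <= 0 or mu >= x.min():
        return np.inf
    f = weibull_min.pdf(x, k, loc=mu, scale=lam)
    term1 = np.log(np.mean(f ** gam)) / gam
    integral, _ = quad(lambda t: weibull_min.pdf(t, k, loc=mu, scale=lam) ** (1 + gam),
                       mu, np.inf)
    return -(term1 - np.log(integral) / (1 + gam))

# Incubation-period-like data contaminated by a few late secondary cases.
rng = np.random.default_rng(1)
x = np.concatenate([weibull_min.rvs(2.0, loc=3.0, scale=5.0, size=95, random_state=1),
                    rng.uniform(20, 40, 5)])
res = minimize(neg_gamma_objective, x0=[1.5, 4.0, x.min() - 1.0], args=(x,),
               method="Nelder-Mead")
print(res.x)    # estimated (shape, scale, exposure time); robust to the outliers
```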
21 pages, 389 KiB  
Article
Distribution Approach to Local Volatility for European Options in the Merton Model with Stochastic Interest Rates
by Piotr Nowak and Dariusz Gatarek
Entropy 2025, 27(3), 320; https://doi.org/10.3390/e27030320 - 19 Mar 2025
Viewed by 263
Abstract
The Dupire formula is a very useful tool for pricing financial derivatives. This paper is dedicated to deriving the aforementioned formula for the European call option in the space of distributions by applying a mathematically rigorous approach developed in our previous paper concerning the case of the Margrabe option. We assume that the underlying asset is described by the Merton jump-diffusion model. Using this stochastic process allows us to take into account jumps in the price of the considered asset. Moreover, we assume that the instantaneous interest rate follows the Merton model (1973). Therefore, in contrast to the models combining a constant interest rate and a continuous underlying asset price process, frequently observed in the literature, applying both stochastic processes could accurately reflect financial market behaviour. Moreover, we illustrate the possibility of using the minimal entropy martingale measure as the risk-neutral measure in our approach. Full article
(This article belongs to the Special Issue Probabilistic Models for Dynamical Systems)
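For context, in the classical setting with a constant short rate r, no dividends, and a continuous (jump-free) underlying, the Dupire formula recovers the local volatility surface from call prices C(K,T):

```latex
\sigma_{\mathrm{loc}}^{2}(K,T)
  = \frac{\partial C/\partial T + r K\, \partial C/\partial K}
         {\tfrac{1}{2} K^{2}\, \partial^{2} C/\partial K^{2}}.
```

The paper derives a rigorous analogue of this identity in the space of distributions when the underlying follows Merton jump-diffusion dynamics and the short rate is itself stochastic; in that setting, additional jump-integral terms enter the numerator.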
32 pages, 34979 KiB  
Article
Generative Large Model-Driven Methodology for Color Matching and Shape Design in IP Products
by Fan Wu, Peng Lu and Shih-Wen Hsiao
Entropy 2025, 27(3), 319; https://doi.org/10.3390/e27030319 - 19 Mar 2025
Viewed by 333
Abstract
The rise of generative large models has gradually influenced traditional product design processes, with AI-generated content (AIGC) playing an increasingly significant role. Globally, tourism IP cultural products are crucial for promoting sustainable tourism development. However, there is a lack of practical design methodologies incorporating generative large models for tourism IP cultural products. Therefore, this study proposes a methodology for the color matching and shape design of tourism IP cultural products using multimodal generative large models. The process includes four phases, as follows: (1) GPT-4o is used to explore visitors’ emotional needs and identify target imagery; (2) Midjourney generates shape options that align with the target imagery, and the optimal shape is selected through the quadratic curvature entropy method based on shape curves; (3) Midjourney generates colored images reflecting the target imagery, and representative colors are selected using AHP and OpenCV; and (4) color harmony calculations are used to identify the best color combination. These alternatives are evaluated quantitatively and qualitatively using a color-matching aesthetic measurement formula and a sensibility questionnaire. The effectiveness of the methodology is demonstrated through a case study on the harbor seal, showing a strong correlation between quantitative and qualitative evaluations, confirming its effectiveness in tourism IP product design. Full article
(This article belongs to the Section Multidisciplinary Applications)
20 pages, 1341 KiB  
Article
Entropy of Volatility Changes: Novel Method for Assessment of Regularity in Volatility Time Series
by Joanna Olbryś
Entropy 2025, 27(3), 318; https://doi.org/10.3390/e27030318 - 19 Mar 2025
Viewed by 309
Abstract
The goal of this research is to introduce and thoroughly investigate a new methodology for the assessment of sequential regularity in volatility time series. Three volatility estimators based on daily range data are analyzed: (1) the Parkinson estimator, (2) the Garman–Klass estimator, and (3) the Rogers–Satchell estimator. To measure the level of complexity of time series, the modified Shannon entropy based on symbol-sequence histograms is utilized. Discretization of the time series of volatility changes into a sequence of symbols is performed using a novel encoding procedure with two thresholds. Five main stock market indexes are analyzed. The whole sample covers the period from January 2017 to December 2023 (seven years). To check the robustness of our empirical findings, two sub-samples of equal length are investigated: (1) the pre-COVID-19 period from January 2017 to February 2020 and (2) the COVID-19 pandemic period from March 2020 to April 2023. An additional formal statistical analysis of the symbol-sequence histograms is conducted. The empirical results for all volatility estimators and stock market indexes are homogeneous and confirm that the level of regularity (in terms of sequential patterns) in the time series of daily volatility changes is high, independently of the choice of sample period. These results are important for academics and practitioners since the existence of regularity in the time series of volatility changes implies the possibility of volatility prediction. Full article
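A compact sketch of the encoding-plus-entropy pipeline follows: volatility changes are discretized into three symbols using two thresholds, consecutive symbols form short words, and the normalized Shannon entropy of the word histogram measures regularity (values well below 1 indicate sequential patterns). The thresholds, word length, and data are illustrative; the paper's exact encoding procedure differs in its details.

```python
import numpy as np
from collections import Counter

def symbol_entropy(changes, thresholds=(-0.01, 0.01), word_len=3):
    """Normalized Shannon entropy of the symbol-sequence (word) histogram."""
    lo, hi = thresholds
    symbols = np.digitize(changes, [lo, hi])       # 0: down, 1: flat, 2: up
    words = [tuple(symbols[i:i + word_len])
             for i in range(len(symbols) - word_len + 1)]
    counts = np.array(list(Counter(words).values()), dtype=float)
    p = counts / counts.sum()
    H = -(p * np.log2(p)).sum()
    return H / np.log2(3 ** word_len)              # 1.0 = maximally irregular

rng = np.random.default_rng(0)
print(symbol_entropy(rng.normal(0, 0.02, 2000)))   # i.i.d. noise: close to 1
```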
12 pages, 1912 KiB  
Article
Framework for Groove Rating in Exercise-Enhancing Music Based on a CNN–TCN Architecture with Integrated Entropy Regularization and Pooling
by Jiangang Chen, Junbo Han, Pei Su and Gaoquan Zhou
Entropy 2025, 27(3), 317; https://doi.org/10.3390/e27030317 - 18 Mar 2025
Viewed by 338
Abstract
Groove, a complex aspect of music perception, plays a crucial role in eliciting emotional and physical responses from listeners. However, accurately quantifying and predicting groove remains challenging due to its intricate acoustic features. To address this, we propose a novel framework for groove rating that integrates Convolutional Neural Networks (CNNs) with Temporal Convolutional Networks (TCNs), enhanced by entropy regularization and entropy-pooling techniques. Our approach processes audio files into Mel-spectrograms, which are analyzed by a CNN for feature extraction and by a TCN to capture long-range temporal dependencies, enabling precise groove-level prediction. Experimental results show that our CNN–TCN framework significantly outperforms benchmark methods in predictive accuracy. The integration of entropy pooling and regularization is critical, with their omission leading to notable reductions in R² values. Our method also surpasses the performance of CNN and other machine-learning models, including long short-term memory (LSTM) networks and support vector machine (SVM) variants. This study establishes a strong foundation for the automated assessment of musical groove, with potential applications in music education, therapy, and composition. Future research will focus on expanding the dataset, enhancing model generalization, and exploring additional machine-learning techniques to further elucidate the factors influencing groove perception. Full article
(This article belongs to the Special Issue Entropy Based Machine Learning Models)
15 pages, 548 KiB  
Article
Centralized Hierarchical Coded Caching Scheme for Two-Layer Network
by Kun Zhao, Jinyu Wang and Minquan Cheng
Entropy 2025, 27(3), 316; https://doi.org/10.3390/e27030316 - 18 Mar 2025
Viewed by 234
Abstract
This paper considers a two-layer hierarchical network, where a server containing N files is connected to K₁ mirrors and each mirror is connected to K₂ users. Each mirror and each user has a cache memory of size M₁ and M₂ files, respectively. The server can only broadcast to the mirrors, and each mirror can only broadcast to its connected users. For such a network, we propose a novel coded caching scheme based on two known placement delivery arrays (PDAs). To fully utilize the cache memory of both the mirrors and users, we first treat the mirrors and users as cache nodes of the same type; i.e., the cache memory of each mirror is regarded as an additional part of the connected users’ cache, then the server broadcasts messages to all mirrors according to a K₁K₂-user PDA in the first layer. In the second layer, each mirror first cancels useless file packets (if any) in the received useful messages and forwards them to the connected users, such that each user can decode the requested packets not cached by the mirror, then broadcasts coded subpackets to the connected users according to a K₂-user PDA, such that each user can decode the requested packets cached by the mirror. The proposed scheme is extended to a heterogeneous two-layer hierarchical network, where the number of users connected to different mirrors may be different. Numerical comparison showed that the proposed scheme achieved lower coding delays compared to existing hierarchical coded caching schemes at most memory ratio points. Full article
(This article belongs to the Special Issue Network Information Theory and Its Applications)
17 pages, 283 KiB  
Article
What Is Ontic and What Is Epistemic in the Quantum Mechanics of Spin?
by Ariel Caticha
Entropy 2025, 27(3), 315; https://doi.org/10.3390/e27030315 - 18 Mar 2025
Viewed by 231
Abstract
Entropic Dynamics (ED) provides a framework that allows the reconstruction of the formalism of quantum mechanics by insisting on ontological and epistemic clarity and adopting entropic methods and information geometry. Our present goal is to extend the ED framework to account for spin. The result is a realist ψ-epistemic model in which the ontology consists of a particle described by a definite position plus a discrete variable that describes Pauli’s peculiar two-valuedness. The resulting dynamics of probabilities is, as might be expected, described by the Pauli equation. What may be unexpected is that the generators of transformations (Hamiltonians and angular momenta, including spin) are all granted clear epistemic status. To the old question, ‘what is spinning?’ ED provides a crisp answer: nothing is spinning. Full article
(This article belongs to the Special Issue Maximum Entropy Principle and Applications)
11 pages, 706 KiB  
Article
On the Resistance Coefficients for Heat Conduction in Anisotropic Bodies at the Limit of Linear Extended Thermodynamics
by Devyani Thapliyal, Raj Kumar Arya, Dimitris S. Achilias and George D. Verros
Entropy 2025, 27(3), 314; https://doi.org/10.3390/e27030314 - 18 Mar 2025
Viewed by 245
Abstract
This study examines the thermal conduction resistance in anisotropic bodies using linear extended irreversible thermodynamics. The fulfilment of the Onsager Reciprocal Relations in anisotropic bodies, such as crystals, has been demonstrated. This fulfilment is achieved by incorporating Newton’s heat transfer coefficients into the calculation of the entropy production rate. Furthermore, a basic principle for the transport of heat, similar to the Onsager–Fuoss formalism for multicomponent diffusion at a constant temperature, was established. This work has the potential to be applied not just in the field of materials science, but also to enhance our understanding of heat conduction in crystals. A novel formalism for heat transfer analogous to the Onsager–Fuoss model for multicomponent diffusion was developed. It is believed that this work could be applied for educational purposes. Full article
(This article belongs to the Section Thermodynamics)
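As background, the linear phenomenological framework the abstract builds on couples fluxes and forces through a symmetric coefficient matrix, with non-negative entropy production:

```latex
J_i = \sum_j L_{ij} X_j, \qquad L_{ij} = L_{ji}, \qquad
\sigma = \sum_i J_i X_i \ge 0,
```

where, for heat conduction in an anisotropic body, the J_i are the heat-flux components and the conjugate forces are X_j = ∂(1/T)/∂x_j. The paper's argument establishes the symmetry L_ij = L_ji once Newton's heat transfer coefficients are incorporated into the entropy production rate.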
19 pages, 1196 KiB  
Article
Clustered Distributed Data Storage Repairing Multiple Failures
by Shiqiu Liu, Fangwei Ye and Qihui Wu
Entropy 2025, 27(3), 313; https://doi.org/10.3390/e27030313 - 17 Mar 2025
Viewed by 215
Abstract
A clustered distributed storage system (DSS), also called a rack-aware storage system, is a distributed storage system in which the nodes are grouped into several clusters. The communication between two clusters may be restricted by their connectivity; that is to say, the communication cost between nodes differs depending on their location. As such, when repairing a failed node, downloading data from nodes that are in the same cluster is much cheaper and more efficient than downloading data from nodes in another cluster. In this article, we consider a scenario in which the failed nodes only download data from nodes in the same cluster, which is an extreme and important case that leverages the fact that the intra-cluster bandwidth is much cheaper than the cross-cluster repair bandwidth. Also, we study the problem of repairing multiple failures in this article, which allows for collaboration within the same cluster, i.e., failed nodes in the same cluster can exchange data with each other. We derive the trade-off between the storage and repair bandwidth for the clustered DSSs and provide explicit code constructions achieving two extreme points in the trade-off, namely the minimum storage clustered collaborative repair (MSCCR) point and the minimum bandwidth clustered collaborative repair (MBCCR) point, respectively. Full article
(This article belongs to the Special Issue Network Information Theory and Its Applications)
15 pages, 4838 KiB  
Article
Numerical Investigation of Effect of Nozzle Upper Divergent Angle on Asymmetric Rectangular Section Ejector
by Manfei Lu, Jingming Dong, Chi Feng, Shuaiyu Song, Miao Zhang and Runfa Wang
Entropy 2025, 27(3), 312; https://doi.org/10.3390/e27030312 - 17 Mar 2025
Viewed by 255
Abstract
Ejectors, as widely utilized devices in the field of industrial energy conservation, exhibit a performance that is significantly affected by their structural parameters. However, the study of the influence of nozzle geometry parameters on asymmetric ejector performance is still limited. In this paper, the effect of the nozzle upper divergent angle on the operating characteristics of an asymmetric rectangular section ejector was comprehensively investigated. The results indicated that the entrainment ratio gradually decreased with an increase in the nozzle upper divergent angle, and the maximum decrease could be 20%. At the same time, the relationship between the upper and lower divergent angles was closely linked to the trend of change in the secondary fluid mass flow rate. The analysis of flow characteristics found that the deflection of the central jet was caused by the pressure difference between the walls of the upper and lower divergent sections of the nozzle. Additionally, quantitative analysis of the development of the mixing layer showed that the mass flow rate of the secondary fluid inlet was related to the development of the mixing boundary. Shock wave analysis demonstrated that the deterioration in ejector performance was due to the reduction in the shock wave strength caused by Mach reflection and the increase in the Mach stem height. Full article
(This article belongs to the Special Issue Thermal Science and Engineering Applications)
12 pages, 284 KiB  
Article
Coded Distributed Computing Under Combination Networks
by Yongcheng Yang, Yifei Huang, Xiaohuan Qin and Shenglian Lu
Entropy 2025, 27(3), 311; https://doi.org/10.3390/e27030311 - 16 Mar 2025
Viewed by 371
Abstract
Coded distributed computing (CDC) is a powerful approach to reduce the communication overhead in distributed computing frameworks by utilizing coding techniques. In this paper, we focus on the CDC problem in (H,L)-combination networks, where H APs act as intermediate pivots and K = C(H, L) workers are connected to different subsets of L APs. Each worker processes a subset of the input file and computes intermediate values (IVs) locally, which are then exchanged via uplink and downlink transmissions through the AP station to ensure that all workers compute their assigned output functions. We first characterize the transmission scheme for the shuffle phase in a novel way, from the viewpoint of the coefficient matrix, and then obtain the scheme by using the Combined Placement Delivery Array (CPDA). Compared with the baseline scheme, our scheme significantly improves the uplink and downlink communication loads while maintaining the robustness and efficiency of the combined multi-AP network. Full article
(This article belongs to the Special Issue Network Information Theory and Its Applications)
10 pages, 273 KiB  
Article
Broadcast Channel Cooperative Gain: An Operational Interpretation of Partial Information Decomposition
by Chao Tian and Shlomo Shamai (Shitz)
Entropy 2025, 27(3), 310; https://doi.org/10.3390/e27030310 - 15 Mar 2025
Viewed by 439
Abstract
Partial information decomposition has recently found applications in biological signal processing and machine learning. Despite its impacts, the decomposition was introduced through an informal and heuristic route, and its exact operational meaning is unclear. In this work, we fill this gap by connecting partial information decomposition to the capacity of the broadcast channel, which has been well studied in the information theory literature. We show that the synergistic information in the decomposition can be rigorously interpreted as the cooperative gain, or a lower bound of this gain, on the corresponding broadcast channel. This interpretation can help practitioners to better explain and expand the applications of the partial information decomposition technique. Full article
(This article belongs to the Special Issue Semantic Information Theory)
22 pages, 428 KiB  
Article
Restart Mechanisms for the Successive-Cancellation List-Flip Decoding of Polar Codes
by Charles Pillet, Ilshat Sagitov, Alexios Balatsoukas-Stimming and Pascal Giard
Entropy 2025, 27(3), 309; https://doi.org/10.3390/e27030309 - 14 Mar 2025
Viewed by 438
Abstract
Polar codes concatenated with a cyclic redundancy check (CRC) code have been selected in the 5G standard with the successive-cancellation list (SCL) of list size L = 8 as the baseline algorithm. Despite providing great error-correction performance, a large list size increases the hardware complexity of the SCL decoder. Alternatively, flip decoding algorithms were proposed to improve the error-correction performance with a low-complexity hardware implementation. The combination of list and flip algorithms, the successive-cancellation list flip (SCLF) and dynamic SCLF (DSCLF) algorithms, provides error-correction performance close to SCL-32 with a list size L = 2 and T_max = 300 maximum additional trials. However, these decoders have a variable execution time, a characteristic that poses a challenge to some practical applications. In this work, we propose a restart mechanism for list–flip algorithms that allows us to skip parts of the decoding computations without affecting the error-correction performance. We show that the restart location cannot realistically be allowed to occur at any location in a codeword as it would lead to an unreasonable memory overhead under DSCLF. Hence, we propose a mechanism where the possible restart locations are limited to a set and propose various construction methods for that set. The construction methods are compared, and the tradeoffs are discussed. For a polar code of length N = 1024 and rate ¼, under DSCLF decoding with a list size L = 2 and a maximum number of trials T_max = 300, our proposed approach is shown to reduce the average execution time by 41.7% with four restart locations at the cost of approximately 1.5% in memory overhead. Full article
(This article belongs to the Special Issue Advances in Modern Channel Coding)
42 pages, 3013 KiB  
Article
Optimal Power Procurement for Green Cellular Wireless Networks Under Uncertainty and Chance Constraints
by Nadhir Ben Rached, Shyam Mohan Subbiah Pillai and Raúl Tempone
Entropy 2025, 27(3), 308; https://doi.org/10.3390/e27030308 - 14 Mar 2025
Viewed by 397
Abstract
Given the increasing global emphasis on sustainable energy usage and the rising energy demands of cellular wireless networks, this work seeks an optimal short-term, continuous-time power-procurement schedule to minimize operating expenditure and the carbon footprint of cellular wireless networks equipped with energy-storage capacity, and hybrid energy systems comprising uncertain renewable energy sources. Despite the stochastic nature of wireless fading channels, the network operator must ensure a certain quality-of-service (QoS) constraint with high probability. This probabilistic constraint prevents using the dynamic programming principle to solve the stochastic optimal control problem. This work introduces a novel time-continuous Lagrangian relaxation approach tailored for real-time, near-optimal energy procurement in cellular networks, overcoming tractability problems associated with the probabilistic QoS constraint. The numerical solution procedure includes an efficient upwind finite-difference solver for the Hamilton–Jacobi–Bellman equation corresponding to the relaxed problem, and an effective combination of the limited memory bundle method (LMBM) for handling nonsmooth optimization and the stochastic subgradient method (SSM) to navigate the stochasticity of the dual problem. Numerical results, based on the German power system and daily cellular traffic data, demonstrate the computational efficiency of the proposed numerical approach, providing a near-optimal policy in a practical timeframe. Full article
25 pages, 2080 KiB  
Article
Multilabel Classification for Entry-Dependent Expert Selection in Distributed Gaussian Processes
by Hamed Jalali and Gjergji Kasneci
Entropy 2025, 27(3), 307; https://doi.org/10.3390/e27030307 - 14 Mar 2025
Viewed by 342
Abstract
By distributing the training process, local approximation reduces the cost of the standard Gaussian process. An ensemble method aggregates predictions from local Gaussian experts, each trained on different data partitions, under the assumption of perfect diversity among them. While this assumption ensures tractable aggregation, it is frequently violated in practice. Although ensemble methods provide consistent results by modeling dependencies among experts, they incur a high computational cost, scaling cubically with the number of experts. Implementing an expert-selection strategy reduces the number of experts involved in the final aggregation step, thereby improving efficiency. However, selection approaches that assign a fixed set of experts to each data point cannot account for the unique properties of individual data points. This paper introduces a flexible expert-selection approach tailored to the characteristics of individual data points. To achieve this, we frame the selection task as a multi-label classification problem in which experts define the labels, and each data point is associated with specific experts. We discuss in detail the prediction quality, efficiency, and asymptotic properties of the proposed solution. We demonstrate the efficiency of the proposed method through extensive numerical experiments on synthetic and real-world datasets. This strategy is easily extendable to distributed learning scenarios and multi-agent models, regardless of Gaussian assumptions regarding the experts. Full article
19 pages, 1138 KiB  
Article
Democratic Thwarting of Majority Rule in Opinion Dynamics: 1. Unavowed Prejudices Versus Contrarians
by Serge Galam
Entropy 2025, 27(3), 306; https://doi.org/10.3390/e27030306 - 14 Mar 2025
Viewed by 382
Abstract
I study the conditions under which the democratic dynamics of a public debate drives a minority-to-majority transition. A landscape of the opinion dynamics is thus built using the Galam Majority Model (GMM) in a 3-dimensional parameter space for three different sizes, r=2,3,4, of local discussion groups. The related parameters are (p₀, k, x), the respective proportions of initial agents supporting opinion A, unavowed tie prejudices breaking in favor of opinion A, and contrarians. Combining k and x yields unexpected and counterintuitive results. In most of the landscape the final outcome is predetermined, with a single-attractor dynamics, independent of the initial support for the competing opinions. Large domains of (k, x) values are found to lead an initial minority to turn into a majority democratically without any external influence. A new alternating regime is also unveiled in narrow ranges of extreme proportions of contrarians. The findings indicate that the expected democratic character of free opinion dynamics is indeed rarely satisfied. The actual values of (k, x) are found to be instrumental to predetermining the final winning opinion independently of p₀. Therefore, the conflicting challenge for the predetermined opinion to lose is to modify these values appropriately to become the winner. However, developing a model which could help in manipulating public opinion raises ethical questions. This issue is discussed in the Conclusions. Full article
(This article belongs to the Special Issue Entropy-Based Applications in Sociophysics II)
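For the smallest group size, the GMM update is simple enough to state directly: in groups of r = 2, unanimity wins, a tie is broken in favor of A with probability k, and a fraction x of contrarians then adopts the locally minority choice. The sketch below iterates this map to show how (k, x) can carry an initial minority (p₀ = 0.3) to a majority; it follows the standard GMM update rule, and the parameter values are illustrative.

```python
def gmm_update(p, k, x):
    """One GMM update for group size r = 2: local majority with
    tie-breaking toward A (probability k), then contrarian flips (fraction x)."""
    p_major = p**2 + 2 * p * (1 - p) * k
    return (1 - x) * p_major + x * (1 - p_major)

for k, x in [(0.8, 0.05), (0.2, 0.05), (0.8, 0.4)]:
    p = 0.3                                   # initial support for opinion A
    for _ in range(100):
        p = gmm_update(p, k, x)
    print(f"k={k}, x={x} -> p* = {p:.3f}")    # the minority wins when k is large
```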
20 pages, 19302 KiB  
Article
Variability Identification and Uncertainty Evolution Characteristic Analysis of Hydrological Variables in Anhui Province, China
by Xia Bai, Jinhuang Yu, Yule Li, Juliang Jin, Chengguo Wu and Rongxing Zhou
Entropy 2025, 27(3), 305; https://doi.org/10.3390/e27030305 - 14 Mar 2025
Viewed by 309
Abstract
Variability identification and uncertainty characteristic analysis, under the impacts of climate change and human activities, is beneficial for accurately predicting the future evolution trend of hydrological variables. In this study, based on the evolution trend and characteristic analyses of historical precipitation and temperature sequences from monthly, annual, and interannual scales through the Linear Tendency Rate (LTR) index, as well as its variability point identification using the M–K trend test method, we further utilized three cloud characteristic parameters comprising the average Ex, entropy En, and hyper-entropy He of the Cloud Model (CM) method to quantitatively reveal the uncertainty features corresponding to the diverse cloud distribution of precipitation and temperature sample scatters. Then, through an application analysis of the proposed research framework in Anhui Province, China, the following can be summarized from the application results: (1) The annual precipitation of Anhui Province presented a remarkable decreasing trend from south to north and an annual increasing trend from 1960 to 2020, especially in the southern area, with the LTR index equaling 55.87 mm/10a, and the annual average temperature of the entire provincial area also presented an obvious increasing trend from 1960 to 2020, with LTR equaling about 0.226 °C/10a. (2) The uncertainty characteristic of the precipitation series was evidently intensified after the variability points in 2013 and 2014 in the southern and provincial areas, respectively, according to the derived values of entropy En and hyper-entropy He, whereas the opposite largely holds for the historical annual average temperature series in southern Anhui Province. (3) The obtained result was basically consistent with the practical statistics of historical hydrological and disaster data, indicating that the proposed research methodologies can be further applied in related variability diagnosis analyses of non-stationary hydrological variables. Full article
(This article belongs to the Section Information Theory, Probability and Statistics)
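The three cloud characteristic parameters are typically estimated with the standard backward cloud generator, sketched below on a synthetic series (the numbers are illustrative, not the study's data).

```python
import numpy as np

def backward_cloud(x):
    """Backward cloud generator: estimate Cloud Model characteristics
    (expectation Ex, entropy En, hyper-entropy He) from a sample."""
    Ex = x.mean()
    En = np.sqrt(np.pi / 2) * np.abs(x - Ex).mean()
    He = np.sqrt(max(x.var(ddof=1) - En**2, 0.0))
    return Ex, En, He

rng = np.random.default_rng(0)
annual_precip = rng.normal(1200, 150, 61)      # stand-in 1960-2020 series (mm)
print(backward_cloud(annual_precip))
```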
22 pages, 4990 KiB  
Article
Edge-Centric Embeddings of Digraphs: Properties and Stability Under Sparsification
by Ahmed Begga, Francisco Escolano Ruiz and Miguel Ángel Lozano
Entropy 2025, 27(3), 304; https://doi.org/10.3390/e27030304 - 14 Mar 2025
Viewed by 564
Abstract
In this paper, we define and characterize the embedding of edges and higher-order entities in directed graphs (digraphs) and relate these embeddings to those of nodes. Our edge-centric approach consists of the following: (a) Embedding line digraphs (or their iterated versions); (b) Exploiting the rank properties of these embeddings to show that edge/path similarity can be posed as a linear combination of node similarities; (c) Solving scalability issues through digraph sparsification; (d) Evaluating the performance of these embeddings for classification and clustering. We commence by identifying the motive behind the need for edge-centric approaches. Then we proceed to introduce all the elements of the approach, and finally, we validate it. Our edge-centric embedding entails a top-down mining of links, instead of inferring them from the similarities of node embeddings. This analysis is key to discovering inter-subgraph links that hold the whole graph connected, i.e., central edges. Using directed graphs (digraphs) allows us to cluster edge-like hubs and authorities. In addition, since directed edges inherit their labels from destination (origin) nodes, their embedding provides a proxy representation for node classification and clustering as well. This representation is obtained by embedding the line digraph of the original one. The line digraph provides nice formal properties with respect to the original graph; in particular, it produces more entropic latent spaces. With these properties at hand, we can relate edge embeddings to node embeddings. The main contribution of this paper is to set and prove the linearity theorem, which poses each element of the transition matrix for an edge embedding as a linear combination of the elements of the transition matrix for the node embedding. As a result, the rank preservation property explains why embedding the line digraph and using the labels of the destination nodes provides better classification and clustering performances than embedding the nodes of the original graph. In other words, we do not only facilitate edge mining but enforce node classification and clustering. However, computing the line digraph is challenging, and a sparsification strategy is implemented for the sake of scalability. Our experimental results show that the line digraph representation of the sparsified input graph is quite stable as we increase the sparsification level, and also that it outperforms the original (node-centric) representation. For the sake of simplicity, our theorem relies on node2vec-like (factorization) embeddings. However, we also include several experiments showing how line digraphs may improve the performance of Graph Neural Networks (GNNs), also following the principle of maximum entropy. Full article
(This article belongs to the Section Information Theory, Probability and Statistics)
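The central construction, embedding the line digraph rather than the original digraph, is easy to demonstrate; a minimal sketch with networkx follows, on a toy digraph chosen for illustration.

```python
import networkx as nx

# The line digraph L(G) has one node per directed edge of G, with an arc
# (u, v) -> (v, w) whenever two edges are consecutive in G.
G = nx.DiGraph([(0, 1), (1, 2), (2, 0), (1, 3)])
L = nx.line_graph(G)

print(sorted(L.nodes()))   # [(0, 1), (1, 2), (1, 3), (2, 0)]
print(sorted(L.edges()))   # consecutive-edge pairs such as ((0, 1), (1, 2))

# Running any node-embedding method (node2vec-like factorization, or a GNN)
# on L yields edge-centric embeddings of G; labels for the new nodes can be
# inherited from the destination nodes, as described in the abstract.
```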
17 pages, 498 KiB  
Article
Minimizing System Entropy: A Dual-Phase Optimization Approach for EV Charging Scheduling
by Wenpeng Yuan and Lin Guan
Entropy 2025, 27(3), 303; https://doi.org/10.3390/e27030303 - 14 Mar 2025
Viewed by 422
Abstract
To address the electric vehicle (EV) charging scheduling problem in rural distribution networks, this study proposes a novel two-phase optimization strategy that combines particle swarm optimization (PSO) and Q-learning for global optimization and real-time adaptation. In the first stage, PSO is used to generate an initial charging plan that minimizes voltage deviations and line overloads while maximizing user satisfaction. In the second phase, a Q-learning approach dynamically adjusts the plan based on real-time grid conditions and feedback. The strategy reduces the system’s entropy by minimizing the uncertainty and disorder in power distribution caused by variable EV charging loads. Experimental results on a 33-bus distribution system under baseline and high-load scenarios demonstrate significant improvements over conventional dispatch methods, with voltage deviation reduced from 5.8% to 1.9%, maximum load factor reduced from 95% to 82%, and average customer satisfaction increased from 75% to 88%. While the computation time increases compared to standalone PSO (66 min vs. 34 min), the enhanced grid stability and customer satisfaction justify the trade-off. By effectively minimizing system entropy and balancing grid reliability with user convenience, the proposed two-phase strategy offers a practical and robust solution for integrating EVs into rural power systems. Full article
(This article belongs to the Section Multidisciplinary Applications)
14 pages, 653 KiB  
Article
A Consistent Approach to Modeling Quantum Observers
by David W. Ring
Entropy 2025, 27(3), 302; https://doi.org/10.3390/e27030302 - 14 Mar 2025
Viewed by 371
Abstract
A number of no-go theorems have shown that Wigner’s Friend scenarios combined with various metaphysical assumptions lead to contradictions in any version of quantum theory. We present an alternative constructive approach that only assumes that agents make properly qualified true statements. Quantum observers are modeled rigorously, although simplistically, using quantum circuits. Terminology is suggested to help avoid contradictions. Our methodology is applied to the Frauchiger-Renner paradox and results in statements by all agents that are both true and consistent. Quantum theory evades the no-go theorems because they make an incorrect implicit assumption about how quantum agents behave. Full article
(This article belongs to the Special Issue Classical and Quantum Networks: Theory, Modeling and Optimization)
11 pages, 548 KiB  
Article
Enhancing Visual-Language Prompt Tuning Through Sparse Knowledge-Guided Context Optimization
by Qiangxing Tian and Min Zhang
Entropy 2025, 27(3), 301; https://doi.org/10.3390/e27030301 - 14 Mar 2025
Viewed by 425
Abstract
Prompt tuning visual-language models (VLMs) for specialized tasks often involves leveraging task-specific textual tokens, which can tailor the pre-existing, broad capabilities of a VLM to more narrowly focused applications. This approach, exemplified by CoOp-based methods, integrates mutable textual tokens with categorical tokens to foster nuanced textual comprehension. Nonetheless, such specialized textual insights often fail to generalize beyond the scope of familiar categories, as they tend to overshadow the versatile, general textual knowledge intrinsic to the model’s wide-ranging applicability. Addressing this base-novel dilemma, we propose the innovative concept of Sparse Knowledge-guided Context Optimization (Sparse-KgCoOp). This technique aims to fortify the adaptable prompts’ capacity to generalize to categories yet unencountered. The cornerstone of Sparse-KgCoOp is based on the premise that reducing the differences between adaptive prompts and their hand-crafted counterparts through sparsification operations can mitigate the erosion of fundamental knowledge. Specifically, Sparse-KgCoOp seeks to narrow the gap between the textual embeddings produced by both the dynamic prompts and the manually devised ones, thus preserving the foundational knowledge while maintaining adaptability. Extensive experiments on several benchmarks demonstrate that the proposed Sparse-KgCoOp is an efficient method for prompt tuning. Full article
(This article belongs to the Section Multidisciplinary Applications)
13 pages, 447 KiB  
Article
Biswas–Chatterjee–Sen Model Defined on Solomon Networks in (1 ≤ D ≤ 6)-Dimensional Lattices
by Gessineide Sousa Oliveira, David Santana Alencar, Tayroni Alencar Alves, José Ferreira da Silva Neto, Gladstone Alencar Alves, Antônio Macedo-Filho, Ronan S. Ferreira, Francisco Welington Lima and João Antônio Plascak
Entropy 2025, 27(3), 300; https://doi.org/10.3390/e27030300 - 14 Mar 2025
Viewed by 356
Abstract
The discrete version of the Biswas–Chatterjee–Sen model, defined on D-dimensional hypercubic Solomon networks, with 1 ≤ D ≤ 6, has been studied by means of extensive Monte Carlo simulations. Thermodynamic-like variables have been computed as a function of the external noise probability. Finite-size scaling theory, applied to different network sizes, has been utilized in order to characterize the phase transition of the system in the thermodynamic limit. The results show that the model presents a phase transition of the second order for all considered dimensions. Despite the lower critical dimension being zero, this dynamical system seems not to have any upper critical dimension since the critical exponents change with D and deviate from the expected mean-field values. Although larger networks could not be simulated because the number of sites drastically increases with the dimension D, the scaling regime has been achieved when computing the critical exponent ratios and the corresponding critical noise probability. Full article
(This article belongs to the Special Issue Entropy-Based Applications in Sociophysics II)
25 pages, 37855 KiB  
Article
Hyperchaotic System-Based PRNG and S-Box Design for a Novel Secure Image Encryption
by Erman Özpolat, Vedat Çelik and Arif Gülten
Entropy 2025, 27(3), 299; https://doi.org/10.3390/e27030299 - 13 Mar 2025
Viewed by 535
Abstract
A hyperchaotic system was analyzed in this study, and its hyperchaotic behavior was confirmed through dynamic analysis. The system was utilized to develop a pseudo-random number generator (PRNG), whose statistical reliability was validated through NIST SP800-22 tests, demonstrating its suitability for cryptographic applications. Additionally, a 16 × 16 S-box was constructed based on the hyperchaotic system, ensuring high nonlinearity and strong cryptographic performance. A comparative analysis revealed that the proposed S-box structure outperforms existing designs in terms of security and efficiency. A new image encryption algorithm was designed using the PRNG and S-box, and its performance was evaluated on 512 × 512 grayscale images, including the commonly used baboon and pepper images. The decryption process successfully restored the original images, confirming the encryption scheme’s reliability. Security evaluations, including histogram analysis, entropy measurement, correlation analysis, and resistance to differential and noise attacks, were conducted. The findings showed that the suggested encryption algorithm outperforms current techniques in terms of security and efficiency. This study contributes to the advancement of robust PRNG generation, secure S-box design, and efficient image encryption algorithms using hyperchaotic systems, offering a promising approach for secure communication and data protection. Full article
(This article belongs to the Section Information Theory, Probability and Statistics)
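The general recipe (iterate a chaotic system past its transient, then harvest bits from the state) can be sketched as follows. A one-dimensional logistic map stands in for the paper's hyperchaotic system, and the bit-extraction rule is an illustrative choice rather than the authors' design; any such generator still needs to pass NIST SP800-22 before cryptographic use.

```python
import numpy as np

def chaotic_prng_bits(n_bits, x0=0.123456, r=3.99, burn_in=1000):
    """Harvest one low-order bit per iteration of a chaotic map."""
    x = x0
    for _ in range(burn_in):          # discard the transient
        x = r * x * (1 - x)
    bits = np.empty(n_bits, dtype=np.uint8)
    for i in range(n_bits):
        x = r * x * (1 - x)
        bits[i] = int(x * 2**32) & 1  # low-order bit of the scaled state
    return bits

print(chaotic_prng_bits(10000).mean())   # should sit near 0.5
```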
17 pages, 9871 KiB  
Article
HA: An Influential Node Identification Algorithm Based on Hub-Triggered Neighborhood Decomposition and Asymmetric Order-by-Order Recurrence Model
by Min Zhao, Junhan Ye, Jiayun Li, Yuzhuo Dai, Tianze Zhao and Gengchen Zhang
Entropy 2025, 27(3), 298; https://doi.org/10.3390/e27030298 - 13 Mar 2025
Viewed by 371
Abstract
In recent years, the rise of power network security incidents caused by malicious attacks has drawn considerable attention to identifying influential nodes in power networks. Power networks are a special class of complex networks characterized by a high relative clustering coefficient, which reflects a more intricate connection between nodes. This paper proposes a novel node influence evaluation algorithm based on hub-triggered neighborhood decomposition and asymmetric order-by-order recurrence model. First, the concepts of network directionalization strategy and hub-triggered neighborhood decomposition are introduced to distinguish the functional differences among nodes in the virus-spreading process. Second, this paper proposes the concepts of infected and infecting potential, then constructs a calculation model with asymmetric characteristics based on the order-by-order recurrence method to fully use the information in the connection structure of the adjacent neighborhood. Finally, the influence of the hub node is evaluated by integrating the infected potential and infecting potential of neighbors of multiple orders. We compare our method with the traditional and state-of-the-art algorithms on six power networks regarding Susceptible–Infected–Recovered (SIR) correlation coefficients, imprecision functions, and algorithmic resolution. The experimental results show that the algorithm proposed in this paper is superior in the above aspects. Full article
(This article belongs to the Section Complexity)
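The SIR correlation benchmark scores an algorithm by how well its ranking tracks nodes' true spreading power, which is typically estimated by Monte Carlo simulation as in the sketch below (the graph, infection probability, and run count are illustrative).

```python
import random
import networkx as nx

def sir_influence(G, seed, beta=0.1, runs=200):
    """Average final outbreak size of SIR epidemics seeded at one node
    (recovery after exactly one step, i.e., recovery rate 1)."""
    total = 0
    for _ in range(runs):
        infected, recovered = {seed}, set()
        while infected:
            new = {v for u in infected for v in G.neighbors(u)
                   if v not in infected and v not in recovered
                   and random.random() < beta}
            recovered |= infected
            infected = new
        total += len(recovered)
    return total / runs

G = nx.karate_club_graph()
ranking = sorted(G.nodes(), key=lambda v: sir_influence(G, v), reverse=True)
print(ranking[:5])   # ground-truth top spreaders to compare rankings against
```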
22 pages, 7837 KiB  
Article
Online Monitoring and Fault Diagnosis for High-Dimensional Stream with Application in Electron Probe X-Ray Microanalysis
by Tao Wang, Yunfei Guo, Fubo Zhu and Zhonghua Li
Entropy 2025, 27(3), 297; https://doi.org/10.3390/e27030297 - 13 Mar 2025
Viewed by 448
Abstract
This study introduces an innovative two-stage framework for monitoring and diagnosing high-dimensional data streams with sparse changes. The first stage utilizes an exponentially weighted moving average (EWMA) statistic for online monitoring, identifying change points through extreme value theory and multiple hypothesis testing. The second stage involves a fault diagnosis mechanism that accurately pinpoints abnormal components upon detecting anomalies. Through extensive numerical simulations and electron probe X-ray microanalysis applications, the method demonstrates exceptional performance. It rapidly detects anomalies, often within one or two sampling intervals post-change, achieves near 100% detection power, and maintains type-I error rates around the nominal 5%. The fault diagnosis mechanism shows a 99.1% accuracy in identifying components in 200-dimensional anomaly streams, surpassing principal component analysis (PCA)-based methods by 28.0% in precision and controlling the false discovery rate within 3%. Case analyses confirm the method’s effectiveness in monitoring and identifying abnormal data, aligning with previous studies. These findings represent significant progress in managing high-dimensional sparse-change data streams over existing methods. Full article
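A stripped-down version of the first stage is sketched below: an elementwise EWMA statistic with an asymptotic control limit, inflated here in a crude Bonferroni-like way to cope with the many parallel streams. The extreme-value-theory thresholding and the multiple-testing machinery of the paper are omitted, and all constants are illustrative.

```python
import numpy as np

def ewma_monitor(stream, lam=0.2, L=5.0):
    """Signal when any coordinate's EWMA leaves its control limits;
    in-control observations are assumed standardized to N(0, 1)."""
    z = np.zeros(stream.shape[1])
    limit = L * np.sqrt(lam / (2 - lam))      # L x asymptotic EWMA std. dev.
    for t, x in enumerate(stream):
        z = lam * x + (1 - lam) * z
        hits = np.abs(z) > limit
        if hits.any():
            return t, np.where(hits)[0]       # alarm time and suspect streams
    return None, None

rng = np.random.default_rng(0)
data = rng.normal(size=(100, 200))
data[50:, :5] += 3.0                 # sparse mean shift in 5 of 200 streams
print(ewma_monitor(data))            # alarms a few steps after t = 50
```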
19 pages, 709 KiB  
Article
Design Particularities of Quadrature Chaos Shift Keying Communication System with Enhanced Noise Immunity for IoT Applications
by Darja Cirjulina, Ruslans Babajans and Deniss Kolosovs
Entropy 2025, 27(3), 296; https://doi.org/10.3390/e27030296 - 12 Mar 2025
Cited by 1 | Viewed by 435
Abstract
This article is devoted to the investigation of synchronization noise immunity in quadrature chaos shift keying (QCSK) communication systems and its profound impact on system performance. The study focuses on Colpitts and Vilnius chaos oscillators in different synchronization configurations, and the reliability of the system in the particular configuration is assessed using the bit error rate (BER) estimation. The research considers synchronization imbalances and demonstrates their effect on the accuracy of data detection and overall transmission stability. The article proposes an approach for optimal bit detection in the case of imbalanced synchronization and correlated chaotic signals in data transmission. The study practically shows the importance of the proposed decision-making technique, revealing that certain adjustments can significantly enhance system noise resilience. Full article
23 pages, 11439 KiB  
Article
Enterprise Digital Transformation Strategy: The Impact of Digital Platforms
by Qiong Huang and Yifan Tang
Entropy 2025, 27(3), 295; https://doi.org/10.3390/e27030295 - 12 Mar 2025
Viewed by 862
Abstract
The development of the digital economy is a strategic choice for seizing new opportunities in the latest wave of technological revolution and industrial transformation. As a critical tool for driving the digital transformation of enterprises, digital platforms play a pivotal role in this process. This study employs the evolutionary game theory of complex networks to develop a game model for the digital transformation of enterprises and utilizes the Fermi rule from sociophysics to characterize the evolution of enterprise strategies. Throughout this process, the interactive behaviors and strategic choices of enterprises embody the features of information flow and dynamic adjustment within the network. These features are crucial for elucidating the complexity and uncertainty inherent in strategic decision-making. The research findings indicate that digital platforms, through the provision of high-quality services and the implementation of effective pricing strategies, can significantly reduce the costs associated with digital transformation, thereby enhancing operational efficiency and innovation capacity. Moreover, the model reveals the competitive relationships between enterprises and their impact on transformation strategies, offering theoretical insights for policymakers. Based on these findings, the paper proposes policy recommendations such as strengthening infrastructure, implementing differentiated service strategies, and enhancing decision-making capability training, with the aim of supporting the digital transformation of enterprises across various industries and promoting sustainable development. Full article
(This article belongs to the Special Issue Entropy, Econophysics, and Complexity)
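The Fermi rule mentioned above gives the probability that an agent imitates a neighbor's strategy as a logistic function of the payoff difference; a short sketch follows (the payoffs and noise parameter are illustrative).

```python
import math

def fermi_adopt_prob(payoff_self, payoff_neighbor, kappa=0.1):
    """Probability that a firm copies a neighbor's strategy (e.g., adopting
    a digital platform), rising with the neighbor's payoff advantage."""
    return 1.0 / (1.0 + math.exp(-(payoff_neighbor - payoff_self) / kappa))

print(fermi_adopt_prob(1.00, 1.05))   # ~0.62: slightly better-off neighbor
print(fermi_adopt_prob(1.00, 0.80))   # ~0.12: clearly worse-off neighbor
```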