Entropy, Volume 27, Issue 8 (August 2025) – 115 articles

Cover Story (view full-size image): The image depicts system and environmental complexity as functions of scale (the level of detail in their descriptions), with system complexity exceeding environmental complexity at all scales—a necessary condition for effective adaptation. This requirement, which we term the multi-scale law of requisite variety, frames complexity not merely as an abstract concept but as a property essential for successful interaction. Below the curves, the grids represent successive coarse-graining, illustrating how finer-scale descriptions can be systematically aggregated into larger-scale ones. View this paper
  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive the tables of contents of newly released issues.
  • Papers are published in both HTML and PDF forms; PDF is the official format. To view a paper in PDF form, click its "PDF Full-text" link and open it with the free Adobe Reader.
21 pages, 1208 KB  
Article
A Hyperbolic Graph Neural Network Model with Contrastive Learning for Rating–Review Recommendation
by Shuyun Fang, Junling Wang and Fukun Chen
Entropy 2025, 27(8), 886; https://doi.org/10.3390/e27080886 - 21 Aug 2025
Viewed by 1068
Abstract
In recommender systems research, the data sparsity problem has driven the development of hybrid recommendation algorithms integrating multimodal information and the application of graph neural networks (GNNs). However, conventional GNNs relying on homogeneous Euclidean embeddings fail to effectively model the non-Euclidean geometric manifold structures prevalent in real-world scenarios, which restricts their capacity to represent heterogeneous interaction patterns and compromises recommendation accuracy. In this paper, we propose a hyperbolic graph neural network model with contrastive learning for rating–review recommendation, implementing a dual-graph construction strategy. First, it constructs a review-aware graph to integrate rich semantic information from reviews, thus enhancing the recommendation system’s context awareness. Second, it builds a user–item interaction graph to capture user preferences and item characteristics. The hyperbolic graph neural network architecture enables joint learning of high-order features from these two graphs, effectively avoiding the embedding distortion commonly associated with high-order feature learning. Furthermore, through contrastive learning in hyperbolic space, the model effectively leverages review information and user–item interaction data to enhance recommendation performance. Experimental results demonstrate that the proposed algorithm achieves excellent performance on multiple real-world datasets, significantly improving recommendation accuracy. Full article
(This article belongs to the Special Issue Causal Inference in Recommender Systems)
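The embedding distortion the abstract refers to is a consequence of representing hierarchical, tree-like interaction graphs in Euclidean space; in the Poincaré ball, distances grow rapidly toward the boundary, giving exponentially more "room". A minimal sketch of the standard Poincaré-ball geodesic distance (an illustration, not the paper's code; the function name is ours):

```python
import numpy as np

def poincare_distance(u, v, eps=1e-9):
    """Geodesic distance between two points inside the unit Poincare ball."""
    u, v = np.asarray(u, float), np.asarray(v, float)
    su = np.dot(u, u)                      # squared norms, must be < 1
    sv = np.dot(v, v)
    diff = np.dot(u - v, u - v)
    arg = 1.0 + 2.0 * diff / ((1.0 - su) * (1.0 - sv) + eps)
    return np.arccosh(arg)

# Pairs near the boundary are much farther apart than equally spaced
# pairs near the center -- the property that lets hyperbolic embeddings
# represent hierarchical graphs with low distortion.
d_center = poincare_distance([0.0, 0.0], [0.1, 0.0])
d_border = poincare_distance([0.9, 0.0], [0.9, 0.05])
```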

9 pages, 306 KB  
Article
Description of the Condensed Phases of Water in Terms of Quantum Condensates
by François Fillaux
Entropy 2025, 27(8), 885; https://doi.org/10.3390/e27080885 - 21 Aug 2025
Viewed by 603
Abstract
The “abnormal” properties of ice and liquid water can be explained by a hybrid quantum/classical framework based on objective facts. Internal decoherence due to the low dissociation energy of the H-bond and the strong electric dipole moment leads to a quantum condensate of O atoms dressed with classical oscillators and a degenerate electric field. These classical oscillators are either subject to equipartition in the liquid or enslaved to the field interference in the ice. A set of four observables and the degeneracy entropy explain the heat capacities, temperatures, and latent heats of the quantum phase transition; the super-thermal-insulator state of the ice; the transition between high- and low-density liquids by supercooling; and the temperature of the liquid’s maximum density. The condensate also describes an aerosol of water droplets. In conclusion, quantum condensates turn out to be an essential part of our everyday environment. Full article
(This article belongs to the Special Issue Entanglement Entropy and Quantum Phase Transition)

16 pages, 15431 KB  
Article
Investigation of Signal Transmission Dynamics in Rulkov Neuronal Networks with Q-Learned Pathways
by Mio Kobayashi
Entropy 2025, 27(8), 884; https://doi.org/10.3390/e27080884 - 21 Aug 2025
Viewed by 642
Abstract
The dynamics of signal transmission in neuronal networks remain incompletely understood. In this study, we propose a novel Rulkov neuronal network model that incorporates Q-learning, a reinforcement learning method, to establish efficient signal transmission pathways. Using a simulated neuronal network, we focused on a key parameter that modulates both the intrinsic dynamics of individual neurons and the input signals received from active neighbors. We investigated how variations in this parameter affect signal transmission efficiency by analyzing changes in the attenuation rate, as well as the maximum and minimum firing intervals of the start and goal neurons. Our simulations revealed that signal transmission efficiency between distant neurons was significantly impaired in the parameter region where a chaotic attractor coexists with an attractor of eight periodic points. A key finding was that low-frequency oscillatory bursts, while failing to propagate over long distances, were capable of amplifying signals in neighboring neurons. Furthermore, we observed variation in signal transmission even when individual neuron dynamics remained similar. This variability, which persists despite similar presynaptic activity, is biologically significant and may contribute to the flexibility and robustness of information processing. These findings are discussed in the context of their biological implications. Full article
(This article belongs to the Section Complexity)
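The Rulkov map underlying each network node is compact enough to state directly. A single-neuron sketch in its common chaotic form (parameter values are illustrative; the paper's network coupling and Q-learned pathways are omitted):

```python
import numpy as np

def rulkov(alpha, sigma, mu=0.001, n_steps=5000, x0=-1.0, y0=-3.0):
    """Iterate one Rulkov map neuron; return the fast-variable trace.

    x is the fast (membrane-potential-like) variable, y the slow variable;
    sigma plays the role of an external/input parameter. Chaotic spiking
    and bursting occur for alpha > 4.
    """
    x, y = x0, y0
    xs = np.empty(n_steps)
    for n in range(n_steps):
        # Both updates use the previous (x, y) via tuple assignment.
        x, y = alpha / (1.0 + x * x) + y, y - mu * (x + 1.0) + mu * sigma
        xs[n] = x
    return xs

trace = rulkov(alpha=4.1, sigma=0.1)
```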

24 pages, 756 KB  
Article
Complex Time Approach to the Hamiltonian and the Entropy Production of the Damped Harmonic Oscillator
by Kyriaki-Evangelia Aslani
Entropy 2025, 27(8), 883; https://doi.org/10.3390/e27080883 - 21 Aug 2025
Viewed by 867
Abstract
The present work applies and extends the previously developed Quantitative Geometrical Thermodynamics (QGT) formalism to the derivation of a Hamiltonian for the damped harmonic oscillator (DHO) across all damping regimes. By introducing complex time, with the real part encoding entropy production and the imaginary part governing reversible dynamics, QGT provides a unified geometric framework for irreversible thermodynamics, showing that the DHO Hamiltonian can be obtained directly from the (complex) entropy production in a simple exponential form that is generalized across all damping regimes. The derived Hamiltonian preserves a modified Poisson bracket structure and embeds thermodynamic irreversibility into the system’s evolution. Moreover, the resulting expression coincides in form with the well-known Caldirola–Kanai Hamiltonian, despite arising from fundamentally different principles, reinforcing the validity of the QGT approach. The results are also compared with the GENERIC framework, showing that QGT offers an elegant alternative to existing approaches that maintains consistency with symplectic geometry. Furthermore, the imaginary time component is interpreted as isomorphic to the antisymmetric Poisson matrix through the lens of geometric algebra. The formalism opens promising avenues for extending Hamiltonian mechanics to dissipative systems, with potential applications in nonlinear dynamics, quantum thermodynamics, and spacetime algebra. Full article
(This article belongs to the Special Issue Geometry in Thermodynamics, 4th Edition)
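For context, the Caldirola–Kanai Hamiltonian with which the QGT-derived expression is said to coincide can be written as (γ the damping coefficient, ω the natural frequency; sign and factor conventions vary across the literature):

```latex
H_{\mathrm{CK}}(q,p,t) \;=\; \frac{p^{2}}{2m}\,e^{-2\gamma t} \;+\; \frac{1}{2}\,m\,\omega^{2} q^{2}\,e^{2\gamma t}
```

Hamilton's equations give p = m e^{2γt} q̇, and eliminating p reproduces the damped-oscillator equation q̈ + 2γq̇ + ω²q = 0, which is why this Hamiltonian is the standard benchmark for dissipative extensions of Hamiltonian mechanics.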

24 pages, 3024 KB  
Article
Varying-Coefficient Additive Models with Density Responses and Functional Auto-Regressive Error Process
by Zixuan Han, Tao Li, Jinhong You and Narayanaswamy Balakrishnan
Entropy 2025, 27(8), 882; https://doi.org/10.3390/e27080882 - 20 Aug 2025
Viewed by 564
Abstract
In many practical applications, data collected over time often exhibit autocorrelation, which, if unaccounted for, can lead to biased or misleading statistical inferences. To address this issue, we propose a varying-coefficient additive model for density-valued responses, incorporating a functional auto-regressive (FAR) error process to capture serial dependence. Our estimation procedure consists of three main steps, utilizing spline-based methods after mapping density functions into a linear space via the log-quantile density transformation. First, we obtain initial estimates of the bivariate varying-coefficient functions using a B-spline series approximation. Second, we estimate the error process from the residuals using spline smoothing techniques. Finally, we refine the estimates of the additive components by adjusting for the estimated error process. We establish theoretical properties of the proposed method, including convergence rates and asymptotic behavior. The effectiveness of our approach is further demonstrated through simulation studies and applications to real-world data. Full article
(This article belongs to the Section Information Theory, Probability and Statistics)
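The log-quantile-density (LQD) transformation mentioned in the abstract maps a density f with quantile function Q to ψ(t) = log q(t), where q = Q′ is the quantile density (equivalently ψ(t) = −log f(Q(t))), so densities can be handled in an unconstrained linear space. A rough empirical sketch (grid size and finite-difference scheme are our choices, not the paper's):

```python
import numpy as np

def log_quantile_density(samples, grid_size=100):
    """Empirical LQD transform psi(t) = log Q'(t) on a uniform t-grid."""
    t = np.linspace(0.0, 1.0, grid_size)
    Q = np.quantile(samples, t)            # empirical quantile function
    q = np.gradient(Q, t)                  # quantile density Q'(t)
    return t, np.log(np.maximum(q, 1e-12))

rng = np.random.default_rng(0)
t, psi = log_quantile_density(rng.normal(size=5000))
```

For a Uniform(0, 1) sample, Q(t) ≈ t, so ψ should be near zero across the grid — a quick sanity check on the transform.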

17 pages, 3264 KB  
Article
Hybrid CNN-LSTM-GNN Neural Network for A-Share Stock Prediction
by Junhao Dong and Shi Liang
Entropy 2025, 27(8), 881; https://doi.org/10.3390/e27080881 - 20 Aug 2025
Viewed by 1525
Abstract
Optimization of stock selection strategies has long been a topic of interest in finance. Although deep learning models have demonstrated superior performance over traditional methods, shortcomings remain: previous studies often provide little justification for their feature selection and typically feed raw features such as the closing price directly into prediction, and most predict the trends of stock indices or of individual stocks only, which makes them difficult to apply directly to actual stock selection. In this paper, a multivariate hybrid neural network model, CNN-LSTM-GNN (CLGNN), is proposed for stock prediction, in which the CNN and LSTM modules analyze local and global patterns, respectively, while a multivariate time-series GNN module explores potential relationships between the data through graph learning, graph convolutional, and temporal convolutional layers. CLGNN classifies stocks by analyzing the potential relationships between the data based on returns, develops a stock selection strategy accordingly, and directly outputs the returns and stock codes. A hybrid filter approach based on entropy and Pearson correlation is also proposed for feature selection, and experiments are conducted on all stocks in the CSI All Share Index (CSI); the results show that, for every model tested, returns are highest when daily return, turnover rate, relative strength index, volume, and forward-adjusted closing price are used as input features, and that the return obtained by CLGNN exceeds that of the other models (e.g., TCN, Transformer, etc.). Full article
(This article belongs to the Special Issue Entropy, Artificial Intelligence and the Financial Markets)
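A hybrid entropy/Pearson filter can be illustrated generically: score features by the Shannon entropy of their discretized values, then prune redundant, highly correlated ones. The abstract does not give the paper's exact rule, so the scoring order, threshold, and names below are hypothetical:

```python
import numpy as np

def entropy_pearson_filter(X, n_bins=10, corr_thresh=0.9):
    """Rank features by discretized Shannon entropy, then greedily drop any
    feature whose absolute Pearson correlation with an already-kept feature
    exceeds corr_thresh. Generic illustration only."""
    def shannon(col):
        hist, _ = np.histogram(col, bins=n_bins)
        p = hist / hist.sum()
        p = p[p > 0]
        return -np.sum(p * np.log2(p))

    order = np.argsort([-shannon(X[:, j]) for j in range(X.shape[1])])
    kept = []
    for j in order:
        if all(abs(np.corrcoef(X[:, j], X[:, k])[0, 1]) < corr_thresh
               for k in kept):
            kept.append(int(j))
    return kept

rng = np.random.default_rng(1)
a = rng.normal(size=500)
X = np.column_stack([a,
                     2.0 * a + 0.01 * rng.normal(size=500),  # near-duplicate of a
                     rng.normal(size=500)])                  # independent feature
kept = entropy_pearson_filter(X)   # keeps one of the correlated pair, plus col 2
```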

23 pages, 12472 KB  
Article
Fixed-Time Active Disturbance Rejection Temperature–Pressure Decoupling Control for a High-Flow Air Intake System
by Louyue Zhang, Hehong Zhang, Duoqi Shi, Zhihong Dan, Xi Wang, Chao Zhai, Gaoxi Xiao and Zhouzhe Xu
Entropy 2025, 27(8), 880; https://doi.org/10.3390/e27080880 - 20 Aug 2025
Viewed by 548
Abstract
High-flow aeroengine transient tests involve strong coupling and external disturbances, which pose significant challenges for intake environment simulation systems (IESSs). This study proposes a compound control scheme that combines fixed-time active disturbance rejection with static decoupling methods. The scheme integrates a fixed-time sliding-mode controller (FT-SMC) and a super-twisting fixed-time extended-state observer (ST-FT-ESO). A decoupling transformation separates pressure and temperature dynamics into two independent loops. The observer estimates system states and total disturbances, including residual coupling, while the controller ensures fixed-time convergence. The method is deployed on a real-time programmable logic controller (PLC) and validated through hardware-in-the-loop (HIL) simulations under representative high-flow scenarios. Compared to conventional linear active disturbance rejection decoupling control (LADRDC), the proposed scheme reduces the absolute integral error (AIE) in pressure and temperature tracking by 71.9% and 77.9%, respectively, and reduces the mean-squared error (MSE) by 46.0% and 41.3%. The settling time improves from over 5 s to under 2 s. These results demonstrate improved tracking accuracy, faster convergence, and enhanced robustness against disturbances. Full article
(This article belongs to the Section Complexity)

32 pages, 2072 KB  
Article
Airline Ranking Using Social Feedback and Adapted Fuzzy Belief TOPSIS
by Ewa Roszkowska and Marzena Filipowicz-Chomko
Entropy 2025, 27(8), 879; https://doi.org/10.3390/e27080879 - 19 Aug 2025
Viewed by 827
Abstract
In the era of digital interconnectivity, user-generated reviews on platforms such as TripAdvisor have become a valuable source of social feedback, reflecting collective experiences and perceptions of airline services. However, aggregating such feedback presents several challenges: evaluations are typically expressed using linguistic ordinal scales, are subjective, often incomplete, and influenced by opinion dynamics within social networks. To effectively deal with these complexities and extract meaningful insights, this study proposes an information-driven decision-making framework that integrates Fuzzy Belief Structures with the TOPSIS method. To handle the uncertainty and imprecision of linguistic ratings, user opinions are modeled as fuzzy belief distributions over satisfaction levels. Rankings are then derived using TOPSIS by comparing each airline’s aggregated profile to ideal satisfaction benchmarks via a belief-based distance measure. This framework presents a novel solution for measuring synthetic satisfaction in complex social feedback systems, thereby contributing to the understanding of information flow, belief aggregation, and emergent order in digital opinion networks. The methodology is demonstrated using a real-world dataset of TripAdvisor airline reviews, providing a robust and interpretable benchmark for service quality. Moreover, this study applies Shannon entropy to classify and interpret the consistency of customer satisfaction ratings among Star Alliance airlines. The results confirm the stability of the Airline Satisfaction Index (ASI), with extremely high correlations among the five rankings generated using different fuzzy utility function models. The methodology reveals that airlines such as Singapore Airlines, ANA, EVA Air, and Air New Zealand consistently achieve high satisfaction scores across all fuzzy model configurations, highlighting their strong and stable performance regardless of model variation. 
These airlines also show both low entropy and high average scores, confirming their consistent excellence. Full article
(This article belongs to the Special Issue Dynamics in Biological and Social Networks)
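Shannon entropy as a consistency measure for ratings can be shown directly: a five-level rating distribution concentrated on one level has near-zero entropy, while a uniform spread has maximal entropy. A small sketch (the normalization to [0, 1] is our choice):

```python
import numpy as np

def rating_entropy(counts):
    """Normalized Shannon entropy of a rating-count distribution.
    0 = all reviewers agree; 1 = ratings uniformly spread over all levels."""
    p = np.asarray(counts, float)
    p = p / p.sum()
    p = p[p > 0]
    h = -np.sum(p * np.log(p))
    return h / np.log(len(counts))

consistent = rating_entropy([0, 0, 1, 9, 90])    # mostly 5-star: low entropy
mixed      = rating_entropy([20, 20, 20, 20, 20])  # uniform: entropy = 1
```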

25 pages, 484 KB  
Tutorial
Geometric Neural Ordinary Differential Equations: From Manifolds to Lie Groups
by Yannik P. Wotte, Federico Califano and Stefano Stramigioli
Entropy 2025, 27(8), 878; https://doi.org/10.3390/e27080878 - 19 Aug 2025
Viewed by 1746
Abstract
Neural ordinary differential equations (neural ODEs) are a well-established tool for optimizing the parameters of dynamical systems, with applications in image classification, optimal control, and physics learning. Although dynamical systems of interest often evolve on Lie groups and more general differentiable manifolds, theoretical results for neural ODEs are frequently phrased on ℝ^n. We collect recent results for neural ODEs on manifolds and present a unifying derivation of various results that serves as a tutorial to extend existing methods to differentiable manifolds. We also extend the results to the recent class of neural ODEs on Lie groups, highlighting a non-trivial extension of manifold neural ODEs that exploits the Lie group structure. Full article
(This article belongs to the Special Issue Lie Group Machine Learning)

24 pages, 8653 KB  
Article
Sea Surface Wind Speed Retrieval from Marine Radar Image Sequences Based on GLCM-Derived Texture Features
by Hui Wang, Haiyang Qiu, Lei Wang, Jingxi Huang and Xingbo Ruan
Entropy 2025, 27(8), 877; https://doi.org/10.3390/e27080877 - 19 Aug 2025
Viewed by 706
Abstract
Sea surface wind speed is a key parameter in marine meteorology, navigation safety, and offshore engineering. Traditional marine radar wind speed retrieval algorithms often suffer from poor environmental adaptability and limited applicability across different radar systems, while existing empirical models face challenges in accuracy and generalization. To address these issues, this study proposes a novel wind speed retrieval method based on X-band marine radar image sequences and texture features derived from the Gray-Level Co-occurrence Matrix (GLCM). A three-stage preprocessing pipeline—comprising noise suppression, geometric correction, and interpolation—is employed to extract small-scale wind streaks that reflect wind field characteristics, ensuring high-quality image data. Two key GLCM texture features of wind streaks, energy and entropy, are identified, and their stable values are used to construct a segmented dual-parameter wind speed model with a division at 10 m/s. Experimental results show that both energy- and entropy-based models outperform traditional empirical models, reducing mean errors by approximately 49.3% and 16.7%, respectively. The energy stable model achieves the best overall performance with a correlation coefficient of 0.89, while the entropy stable model demonstrates superior performance at low wind speeds. The complementary nature of the two models enhances robustness under varying conditions, providing a more accurate and efficient solution for sea surface wind speed retrieval. Full article
(This article belongs to the Section Multidisciplinary Applications)
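The two texture features the model relies on, GLCM energy (angular second moment) and GLCM entropy, are standard and easy to state. A compact sketch for a single offset (quantization to 8 gray levels and the horizontal offset are illustrative choices; the paper's radar preprocessing pipeline is not reproduced):

```python
import numpy as np

def glcm(img, levels=8, dx=1, dy=0):
    """Gray-Level Co-occurrence Matrix for one pixel offset, normalized
    to a joint probability distribution over gray-level pairs."""
    q = (img.astype(float) / img.max() * (levels - 1)).astype(int)
    P = np.zeros((levels, levels))
    h, w = q.shape
    for i in range(h - dy):
        for j in range(w - dx):
            P[q[i, j], q[i + dy, j + dx]] += 1
    return P / P.sum()

def glcm_energy(P):
    return np.sum(P ** 2)          # high for homogeneous texture

def glcm_entropy(P):
    p = P[P > 0]
    return -np.sum(p * np.log2(p))  # high for disordered texture

rng = np.random.default_rng(2)
flat  = np.ones((32, 32)) * 100          # perfectly homogeneous patch
noisy = rng.integers(0, 256, (32, 32))   # disordered patch
```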

30 pages, 2110 KB  
Article
Navigating Cross-Border E-Commerce: Prioritizing Logistics Partners with Hybrid MCGDM
by Xingyu Ma and Chuanxu Wang
Entropy 2025, 27(8), 876; https://doi.org/10.3390/e27080876 - 19 Aug 2025
Viewed by 765
Abstract
As global e-commerce expands, efficient cross-border logistics services have become essential. To support the evaluation of logistics service providers (LSPs), we propose HD-CBDTOPSIS (Technique for Order Preference by Similarity to Ideal Solution with heterogeneous data and cloud Bhattacharyya distance), a hybrid multi-criteria group decision-making (MCGDM) model designed to handle complex, uncertain data. Our criteria system integrates traditional supplier evaluation with cross-border e-commerce characteristics, using heterogeneous data types—including exact numbers, intervals, digital datasets, multi-granularity linguistic terms, and linguistic expressions. These are unified using normal cloud models (NCMs), ensuring uncertainty is consistently represented. A novel algorithm, improved multi-step backward cloud transformation with sampling replacement (IMBCT-SR), is developed for converting dataset-type indicators into cloud models. We also introduce a new similarity measure, the Cloud Bhattacharyya Distance (CBD), which shows superior discrimination ability compared to traditional distances. Using the coefficient of variation (CV) based on CBD, we objectively determine criteria weights. A cloud-based TOPSIS approach is then applied to rank alternative LSPs, with all variables modeled using NCMs to ensure consistent uncertainty representation. An application case and comparative experiments demonstrate that HD-CBDTOPSIS is an effective, flexible, and robust tool for evaluating cross-border LSPs under uncertain and multi-dimensional conditions. Full article
(This article belongs to the Section Complexity)
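The paper's Cloud Bhattacharyya Distance is defined on normal cloud models; for orientation, here is the classical Bhattacharyya distance between two univariate Gaussians, on which such a measure builds (the CBD itself may add cloud-specific terms):

```python
import numpy as np

def bhattacharyya_gauss(mu1, s1, mu2, s2):
    """Bhattacharyya distance between N(mu1, s1^2) and N(mu2, s2^2):
    a mean-separation term plus a variance-mismatch term."""
    s2avg = 0.5 * (s1 ** 2 + s2 ** 2)
    return (mu1 - mu2) ** 2 / (8.0 * s2avg) + 0.5 * np.log(s2avg / (s1 * s2))

d_same = bhattacharyya_gauss(0.0, 1.0, 0.0, 1.0)   # identical distributions
d_far  = bhattacharyya_gauss(0.0, 1.0, 3.0, 1.0)   # well-separated means
```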

27 pages, 7664 KB  
Article
Autoencoder-like Sparse Non-Negative Matrix Factorization with Structure Relationship Preservation
by Ling Zhong and Haiyan Gao
Entropy 2025, 27(8), 875; https://doi.org/10.3390/e27080875 - 19 Aug 2025
Viewed by 604
Abstract
Clustering algorithms based on non-negative matrix factorization (NMF) have garnered significant attention in data mining due to their strong interpretability and computational simplicity. However, traditional NMF often struggles to effectively capture and preserve topological structure information between data during low-dimensional representation. Therefore, this paper proposes an autoencoder-like sparse non-negative matrix factorization with structure relationship preservation (ASNMF-SRP). Firstly, drawing on the principle of autoencoders, a “decoder-encoder” co-optimization matrix factorization framework is constructed to enhance the factorization stability and representation capability of the coefficient matrix. Then, a preference-adjusted random walk strategy is introduced to capture higher-order neighborhood relationships between samples, encoding multi-order topological structure information of the data through an optimal graph regularization term. Simultaneously, to mitigate the impact of noise and outliers, the ℓ2,1-norm is used to constrain the feature correlation between low-dimensional representations and the original data, preserving feature relationships between data, and a sparse constraint is imposed on the coefficient matrix via the inner product. Finally, clustering experiments conducted on 8 public datasets demonstrate that ASNMF-SRP consistently exhibits favorable clustering performance. Full article
(This article belongs to the Section Information Theory, Probability and Statistics)
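ASNMF-SRP layers autoencoder-style, graph-regularization, and sparsity terms on top of a base NMF objective; those terms are not reproduced here. For orientation, the classical Lee–Seung multiplicative updates for the plain Frobenius-loss NMF that such methods extend:

```python
import numpy as np

def nmf(X, rank, n_iter=500, eps=1e-9, seed=0):
    """Baseline NMF via Lee-Seung multiplicative updates, minimizing
    ||X - WH||_F^2 while keeping W and H non-negative."""
    rng = np.random.default_rng(seed)
    n, m = X.shape
    W = rng.random((n, rank)) + 0.1   # positive init keeps updates positive
    H = rng.random((rank, m)) + 0.1
    for _ in range(n_iter):
        H *= (W.T @ X) / (W.T @ W @ H + eps)
        W *= (X @ H.T) / (W @ H @ H.T + eps)
    return W, H

rng = np.random.default_rng(3)
X = rng.random((20, 4)) @ rng.random((4, 30))   # exactly rank-4, non-negative
W, H = nmf(X, rank=4)
err = np.linalg.norm(X - W @ H) / np.linalg.norm(X)  # small on exact-rank data
```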

32 pages, 14643 KB  
Article
Image Encryption Algorithm Based on Dynamic Rhombus Transformation and Digital Tube Model
by Xiaoqiang Zhang, Yupeng Song and Ke Huang
Entropy 2025, 27(8), 874; https://doi.org/10.3390/e27080874 - 18 Aug 2025
Viewed by 696
Abstract
With the rapid advancement of information technology, images, as critical information carriers, are confronted with significant security risks. To ensure image security, this paper proposes an image encryption algorithm based on a dynamic rhombus transformation and a digital tube model. Firstly, a two-dimensional hyper-chaotic system is constructed by combining the Sine map, Cubic map, and May map. The analysis results demonstrate that the constructed hybrid chaotic map exhibits superior chaotic characteristics in terms of bifurcation diagrams, Lyapunov exponents, sample entropy, etc. Secondly, a dynamic rhombus transformation is proposed to scramble pixel positions, with chaotic sequences used to dynamically select transformation centers and traversal orders. Finally, a digital tube model is designed to diffuse pixel values; it utilizes chaotic sequences to dynamically control bit-reversal and circular-shift operations, and an exclusive-OR operation to diffuse pixel values. The performance analyses show that the information entropy of the cipher image is 7.9993, and the correlation coefficients in the horizontal, vertical, and diagonal directions are 0.0008, 0.0001, and 0.0005, respectively. Moreover, the proposed algorithm shows strong resistance against noise, cropping, and exhaustive attacks, effectively ensuring the security of images during storage and transmission. Full article
(This article belongs to the Section Signal and Data Analysis)
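The two figures of merit quoted in the abstract, cipher-image information entropy (ideally near 8 bits/pixel for an 8-bit image) and adjacent-pixel correlation (ideally near 0), can be computed directly. A sketch with synthetic test images (not the paper's cipher):

```python
import numpy as np

def image_entropy(img):
    """Shannon entropy of an 8-bit image in bits per pixel."""
    hist, _ = np.histogram(img, bins=256, range=(0, 256))
    p = hist / hist.sum()
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

def adjacent_correlation(img, axis=0):
    """Pearson correlation of vertically (axis=0) or horizontally (axis=1)
    adjacent pixel pairs."""
    a = img.astype(float)
    if axis == 0:
        x, y = a[:-1, :].ravel(), a[1:, :].ravel()
    else:
        x, y = a[:, :-1].ravel(), a[:, 1:].ravel()
    return np.corrcoef(x, y)[0, 1]

rng = np.random.default_rng(4)
cipher_like = rng.integers(0, 256, (256, 256))  # what a good cipher resembles
smooth      = np.tile(np.arange(256), (256, 1))  # plain-image-like gradient
```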

21 pages, 2034 KB  
Article
Brain Oscillations and Autonomic Synthonization via Comodulation in Collaborative Negotiation
by Katia Rovelli, Carlotta Acconito, Laura Angioletti and Michela Balconi
Entropy 2025, 27(8), 873; https://doi.org/10.3390/e27080873 - 18 Aug 2025
Viewed by 651
Abstract
This study investigates the relationship between neural and physiological synthonization via comodulation (Synth) in dyadic exchanges centered on negotiation processes. In total, 13 dyads participated in a negotiation task with three phases: Initiation (IP), Negotiation Core (NCP), and Resolution (RP). Electroencephalographic (EEG) frequency bands (i.e., delta, theta, alpha) and autonomic responses (heart rate variability, HRV) were recorded. Synth was analyzed using the Euclidean distance (EuDist) for EEG and autonomic indices. Significant Synth in the delta, theta, and alpha bands in temporo-central and parieto-occipital regions was observed, indicating social cognitive alignment. HRV Synth was higher during the NCP than the IP, suggesting better coordination. Based on this result, a cluster analysis was performed on HRV EuDist to identify groups with distinct HRV, and ultimately personality, patterns; it revealed one cluster with higher Synth and reward sensitivity and another with lower Synth and reward sensitivity. These findings show how neural and autonomic Synth enhances social cognition and emotional regulation. Full article
(This article belongs to the Special Issue Active Inference in Cognitive Neuroscience)

23 pages, 1276 KB  
Article
Data-Driven Assessment of Carbon Emission and Optimization of Carbon Emission Reduction in the Ceramic Industry
by Xingbin Huang and Weihua He
Entropy 2025, 27(8), 872; https://doi.org/10.3390/e27080872 - 18 Aug 2025
Viewed by 734
Abstract
By integrating statistical modeling and data analysis techniques, we systematically assess the carbon emission performance of the ceramic industry and propose targeted emission reduction pathways. Firstly, the entropy weight TOPSIS model is employed to quantitatively evaluate the carbon emission performance of the three major Chinese ceramic production areas: Foshan, Jingdezhen, and Zibo. Through data-driven quantitative analysis, it is disclosed that the carbon emission intensity in Foshan is significantly higher than that in the other two regions (with a relative closeness degree of 0.5185). The key issues identified include high energy consumption in the production process, a high reliance on raw coal, and insufficient investment in environmental protection. Furthermore, through the XGBoost-SHAP combined modeling, the key drivers of carbon emissions are precisely identified from multi-dimensional data. It is found that the elasticity coefficient of raw coal in the carbon emission proportion is as high as 25.84%, while the potential for substitution with natural gas is remarkable. Based on statistical prediction techniques, a carbon emission trend model under the scenario of energy structure optimization is constructed, predicting that after reaching a peak in 2017, Foshan’s carbon emissions will continue to decline, with the proportion of raw coal dropping to 48% and that of natural gas rising to 10%, thereby verifying the feasibility of the green transformation. Additionally, a multi-agent carbon trading simulation model is constructed to explore the emission reduction behaviors of enterprises under different carbon price scenarios. This study not only achieves precise quantitative analysis of carbon emissions through statistical method innovation but also verifies the feasible paths of low-carbon transformation through data modeling. Full article
(This article belongs to the Section Multidisciplinary Applications)

26 pages, 2734 KB  
Article
Time-Marching Quantum Algorithm for Simulation of Nonlinear Lorenz Dynamics
by Efstratios Koukoutsis, George Vahala, Min Soe, Kyriakos Hizanidis, Linda Vahala and Abhay K. Ram
Entropy 2025, 27(8), 871; https://doi.org/10.3390/e27080871 - 17 Aug 2025
Viewed by 878
Abstract
Simulating nonlinear classical dynamics on a quantum computer is an inherently challenging task due to the linear operator formulation of quantum mechanics. In this work, we provide a systematic approach to alleviate this difficulty by developing an explicit quantum algorithm that implements the time evolution of a second-order time-discretized version of the Lorenz model. The Lorenz model is a celebrated system of nonlinear ordinary differential equations that has been extensively studied in the contexts of climate science, fluid dynamics, and chaos theory. Our algorithm possesses a recursive structure and requires only a linear number of copies of the initial state with respect to the number of integration time-steps. This provides a significant improvement over previous approaches, while preserving the characteristic quantum speed-up in terms of the dimensionality of the underlying differential equations system, which similar time-marching quantum algorithms have previously demonstrated. Notably, by classically implementing the proposed algorithm, we showcase that it accurately captures the structural characteristics of the Lorenz system, reproducing both regular attractors (limit cycles) and the chaotic attractor within the chosen parameter regime. Full article
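The algorithm targets a second-order time-discretized Lorenz system; a purely classical sketch of such an integration is below, using the explicit midpoint rule as the second-order scheme (the paper's exact discretization may differ, and the standard chaotic parameters are assumed):

```python
import numpy as np

def lorenz_rhs(u, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    # Right-hand side of the Lorenz ODE system
    x, y, z = u
    return np.array([sigma * (y - x), x * (rho - z) - y, x * y - beta * z])

def step_midpoint(u, dt):
    # Explicit midpoint rule: a second-order one-step method
    k1 = lorenz_rhs(u)
    return u + dt * lorenz_rhs(u + 0.5 * dt * k1)

u = np.array([1.0, 1.0, 1.0])
dt = 1e-3
traj = [u]
for _ in range(5000):
    u = step_midpoint(u, dt)
    traj.append(u)
traj = np.array(traj)    # (5001, 3) classical trajectory on the attractor
```

The quantum algorithm's recursion replaces this classical loop; the trajectory above is the reference against which the paper's classical implementation reproduces the attractor structure.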
(This article belongs to the Special Issue Quantum Computing in the NISQ Era)

11 pages, 745 KB  
Article
Information Storage in a Black Hole’s Gravitational Field
by Dongshan He, Jinfang Li and Qian Qiu
Entropy 2025, 27(8), 870; https://doi.org/10.3390/e27080870 - 16 Aug 2025
Viewed by 945
Abstract
The key to resolving the black hole information loss paradox lies in clarifying the origin of black hole entropy and the mechanism by which black holes store information. By applying thermodynamic principles, we demonstrate that the entropy of a gravitational field is negative and proportional to the strength of the field, indicating that gravitational fields possess information storage capacity. For Schwarzschild black holes, we further demonstrate that information conventionally attributed to the black hole’s interior is in fact encoded within its external gravitational field. During black hole evaporation, the emitted particles transmit this information via gravitational correlations. This study advances our understanding of gravitational field entropy and provides valuable insights toward resolving the black hole information loss problem. Full article
(This article belongs to the Special Issue Black Hole Information Problem: Challenges and Perspectives)

21 pages, 3126 KB  
Article
CViT Weakly Supervised Network Fusing Dual-Branch Local-Global Features for Hyperspectral Image Classification
by Wentao Fu, Xiyan Sun, Xiuhua Zhang, Yuanfa Ji and Jiayuan Zhang
Entropy 2025, 27(8), 869; https://doi.org/10.3390/e27080869 - 15 Aug 2025
Viewed by 675
Abstract
In hyperspectral image (HSI) classification, feature learning and label accuracy play a crucial role. In actual hyperspectral scenes, however, noisy labels are unavoidable and seriously impact the performance of methods. While deep learning has achieved remarkable results in HSI classification tasks, its noise-resistant performance usually comes at the cost of feature representation capabilities. High-dimensional and deep convolution can capture rich deep semantic features, but with high complexity and resource consumption. To deal with these problems, we propose a CViT Weakly Supervised Network (CWSN) for HSI classification. Specifically, a lightweight 1D-2D two-branch network is used for local generalization and enhancement of spatial–spectral features. Then, the fusion and characterization of local and global features are achieved through the CNN-Vision Transformer (CViT) cascade strategy. The experimental results on four benchmark HSI datasets show that CWSN has good anti-noise ability and ensures the robustness and versatility of the network facing both clean and noisy training sets. Compared to other methods, the CWSN has better classification accuracy. Full article
(This article belongs to the Section Signal and Data Analysis)

11 pages, 9959 KB  
Article
Are Human Judgments of Real and Fake Faces Quantum-like Contextual?
by Peter Bruza, Aaron Lee and Pamela Hoyte
Entropy 2025, 27(8), 868; https://doi.org/10.3390/e27080868 - 15 Aug 2025
Viewed by 628
Abstract
This paper describes a crowdsourced experiment in which participants were asked to judge which of two simultaneously presented facial images (one real, one AI-generated) was fake. With the growing presence of synthetic imagery in digital environments, cognitive systems must adapt to novel and often deceptive visual stimuli. Recent developments in cognitive science propose that some mental processes may exhibit quantum-like characteristics, particularly in their context sensitivity. Drawing on Tezzin’s “generalized fair coin” model, this study applied Contextuality-by-Default (CbD) theory to investigate whether human judgments of human faces exhibit quantum-like contextuality. Across 20 trials, each treated as a “generalized coin”, bootstrap resampling (10,000 iterations per coin) revealed that nine trials demonstrated quantum-like contextuality. Notably, Coin 4 exhibited strong context-sensitive causal asymmetry, where both the real and synthetic faces elicited inverse judgments due to their unusually strong resemblance to one another. These results support the growing evidence that cognitive judgments are sometimes quantum-like contextual, suggesting that adopting comparative strategies, such as evaluating unfamiliar faces alongside known-real exemplars, may enhance accuracy in detecting synthetic images. Such pairwise methods align with the strengths of human perception and may inform future interventions, user interfaces, or educational tools aimed at improving visual judgment under uncertainty. Full article
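The bootstrap step mentioned above (10,000 iterations per "generalized coin") can be sketched as follows, on simulated judgments for a single image pair; the data and the 0.62 judged-fake rate are illustrative, and the CbD contextuality test itself is not reproduced here:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative only: 1 = participant judged the AI-generated face to be fake,
# for one "generalized coin" (one real/synthetic image pair).
judgments = (rng.random(200) < 0.62).astype(int)

# Bootstrap resampling of the judged-fake rate, as in the abstract's
# 10,000-iteration resampling per coin.
B = 10_000
boot = rng.choice(judgments, size=(B, judgments.size), replace=True).mean(axis=1)
ci_low, ci_high = np.quantile(boot, [0.025, 0.975])   # 95% percentile interval
```

Intervals like this one feed the per-coin decision of whether the observed context dependence exceeds what CbD theory allows for a non-contextual system.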
(This article belongs to the Special Issue Quantum Probability and Randomness V)

18 pages, 2704 KB  
Article
A Robust Hybrid Weighting Scheme Based on IQRBOW and Entropy for MCDM: Stability and Advantage Criteria in the VIKOR Framework
by Ali Erbey, Üzeyir Fidan and Cemil Gündüz
Entropy 2025, 27(8), 867; https://doi.org/10.3390/e27080867 - 15 Aug 2025
Viewed by 757
Abstract
In multi-criteria decision-making (MCDM) environments characterized by uncertainty and data irregularities, the reliability of weighting methods becomes critical for ensuring robust and accurate decisions. This study introduces a novel hybrid objective weighting method—IQRBOW-E (Interquartile Range-Based Objective Weighting with Entropy)—which dynamically combines the statistical robustness of the IQRBOW method with the information sensitivity of Entropy through a tunable parameter β. The method allows decision-makers to flexibly control the trade-off between robustness and information contribution, enhancing the adaptability of decision support systems. A comprehensive experimental design involving ten simulation scenarios was implemented, in which the number of criteria, alternatives, and outlier ratios were varied. The IQRBOW-E method was integrated into the VIKOR framework and evaluated through average Q values, stability ratios, SRD scores, and the Friedman test. The results indicate that the proposed hybrid approach achieves superior decision stability and performance, particularly in data environments with increasing outlier contamination. Optimal β values were shown to shift systematically depending on data conditions, highlighting the model’s sensitivity and adaptability. This study not only advances the methodological landscape of MCDM by introducing a parameterized hybrid weighting model but also contributes a robust and generalizable weighting infrastructure for modern decision-making under uncertainty. Full article
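A sketch of the β-blended weighting idea follows, under the assumption that the IQR-based component weights each criterion by its normalized interquartile range (the paper's exact IQRBOW definition may differ); the entropy component is standard entropy weighting:

```python
import numpy as np

def entropy_weights(X):
    # Standard entropy weighting: more dispersion across alternatives,
    # more weight for the criterion. Requires strictly positive X.
    P = X / X.sum(axis=0)
    E = -(P * np.log(P)).sum(axis=0) / np.log(X.shape[0])
    d = 1 - E
    return d / d.sum()

def iqr_weights(X):
    # Assumed form: weight proportional to each criterion's interquartile
    # range, a robust spread measure insensitive to outliers.
    q75, q25 = np.percentile(X, [75, 25], axis=0)
    iqr = q75 - q25
    return iqr / iqr.sum()

def iqrbow_e(X, beta):
    # beta in [0, 1] trades robustness (IQR side) against information
    # contribution (entropy side), mirroring the tunable parameter above.
    w = beta * iqr_weights(X) + (1 - beta) * entropy_weights(X)
    return w / w.sum()

X = np.array([[7.0, 3.0, 9.0],
              [6.0, 4.0, 8.0],
              [9.0, 2.0, 7.0],
              [30.0, 3.5, 8.5]])   # last row: outlier in criterion 1
w = iqrbow_e(X, beta=0.6)
```

Raising β lets the IQR side dampen the influence the outlier would otherwise exert through the entropy weights, which is the stability property the simulations above evaluate.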
(This article belongs to the Special Issue Entropy Method for Decision Making with Uncertainty)

33 pages, 2080 KB  
Article
Latent Class Analysis with Arbitrary-Distribution Responses
by Huan Qing and Xiaofei Xu
Entropy 2025, 27(8), 866; https://doi.org/10.3390/e27080866 - 14 Aug 2025
Viewed by 898
Abstract
The latent class model has been proposed as a powerful tool for understanding human behavior in fields such as the social, psychological, behavioral, and biological sciences. However, one important limitation of the latent class model is that it is primarily applied to data with binary or categorical responses, which prevents it from modeling real-world data with continuous or negative responses. In many applications, ignoring the weights discards much of the potentially valuable information they contain. To address this limitation, we propose a novel generative model, the arbitrary-distribution latent class model (adLCM). Our model enables the generation of a data response matrix from an arbitrary distribution with a latent class structure. Compared to the latent class model, our adLCM is both more realistic and more general. To our knowledge, the adLCM is the first model for latent class analysis with arbitrary real-valued responses, including continuous, negative, and signed values, thereby extending the classical latent class model beyond its traditional limitation to binary or categorical outcomes. We investigate the identifiability of the model and propose an efficient algorithm for estimating the latent classes and other model parameters. We show that the proposed algorithm enjoys consistent estimation. The performance of our algorithm is evaluated using both computer-generated data and real-world personality test data. Full article
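A generative sketch of the kind of model described: discrete latent classes, but real-valued (here Gaussian, including negative) responses. The sizes and parameters are illustrative assumptions; this shows the data-generating idea, not the adLCM's estimation procedure:

```python
import numpy as np

rng = np.random.default_rng(1)

# n subjects, J items, K latent classes; responses may be continuous,
# negative, or signed, unlike the binary/categorical classical model.
n, J, K = 300, 5, 2
z = rng.integers(0, K, size=n)                    # latent class of each subject
theta = np.array([[-1.0, 0.5, 2.0, -0.3, 1.2],    # item means, class 0
                  [1.5, -2.0, 0.0, 0.8, -1.0]])   # item means, class 1

# Response matrix: any distribution could be plugged in here; a Gaussian
# around the class-specific item mean is just one concrete choice.
R = rng.normal(loc=theta[z], scale=0.5)           # n x J real-valued responses
```

Estimation would then recover `z` and `theta` from `R` alone; the abstract's identifiability and consistency results concern exactly that inverse problem.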
(This article belongs to the Section Information Theory, Probability and Statistics)

20 pages, 407 KB  
Article
Variations on the Expectation Due to Changes in the Probability Measure
by Samir M. Perlaza and Gaetan Bisson
Entropy 2025, 27(8), 865; https://doi.org/10.3390/e27080865 - 14 Aug 2025
Viewed by 554
Abstract
In this paper, closed-form expressions for the variation of the expectation of a given function due to changes in the probability measure (probability distribution drifts) are presented. These expressions unveil interesting connections with Gibbs probability measures, information projections, Pythagorean identities for relative entropy, mutual information, and lautum information. Full article
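The simplest such variation admits the discrete change-of-measure identity E_Q[f] − E_P[f] = E_P[(dQ/dP − 1)·f]; a numerical check of this standard fact is below (the paper's closed-form expressions, connecting these variations to Gibbs measures and relative entropy, are not reproduced here):

```python
import numpy as np

# Finite-alphabet illustration of
#   E_Q[f] - E_P[f] = E_P[(dQ/dP - 1) * f],
# where dQ/dP is the (discrete) Radon-Nikodym derivative Q(x)/P(x).
x = np.arange(5)
f = x.astype(float) ** 2
P = np.array([0.3, 0.3, 0.2, 0.1, 0.1])   # reference probability measure
Q = np.array([0.1, 0.2, 0.2, 0.2, 0.3])   # drifted probability measure

lhs = (Q * f).sum() - (P * f).sum()        # variation of the expectation
rhs = (P * (Q / P - 1.0) * f).sum()        # expectation of (dQ/dP - 1) * f under P
```

Both sides agree to machine precision; the paper's contribution is expressing such variations through information-theoretic quantities rather than through the raw likelihood ratio.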
(This article belongs to the Section Information Theory, Probability and Statistics)

23 pages, 374 KB  
Article
Empirical Lossless Compression Bound of a Data Sequence
by Lei M. Li
Entropy 2025, 27(8), 864; https://doi.org/10.3390/e27080864 - 14 Aug 2025
Viewed by 1007
Abstract
We consider the lossless compression bound of any individual data sequence. Conceptually, its Kolmogorov complexity is such a bound, yet it is uncomputable. According to Shannon's source coding theorem, the average compression bound is $nH$, where $n$ is the number of words and $H$ is the entropy of an oracle probability distribution characterizing the data source. The quantity $nH(\hat{\theta}_n)$ obtained by plugging in the maximum likelihood estimate is an underestimate of the bound. Shtarkov showed that the normalized maximum likelihood (NML) distribution is optimal in a minimax sense for any parametric family. Fitting a data sequence, without any a priori distributional assumption, by a relevant exponential family, we apply local asymptotic normality to show that the NML code length is $nH(\hat{\theta}_n)+\frac{d}{2}\log\frac{n}{2\pi}+\log\int_{\Theta}|I(\theta)|^{1/2}\,d\theta+o(1)$, where $d$ is the dictionary size, $|I(\theta)|$ is the determinant of the Fisher information matrix, and $\Theta$ is the parameter space. We demonstrate that sequentially predicting the optimal code length for the next word via a Bayesian mechanism leads to the mixture code, whose length is $nH(\hat{\theta}_n)+\frac{d}{2}\log\frac{n}{2\pi}+\log\frac{|I(\hat{\theta}_n)|^{1/2}}{w(\hat{\theta}_n)}+o(1)$, where $w(\theta)$ is a prior. The asymptotics apply not only to discrete symbols but also to continuous data if the code length for the former is replaced by the description length for the latter. The analytical result is exemplified by calculating compression bounds of protein-encoding DNA sequences under different parsing models. Typically, compression is maximized when parsing aligns with amino acid codons, while pseudo-random sequences remain incompressible, as predicted by Kolmogorov complexity. Notably, the empirical bound becomes more accurate as the dictionary size increases. Full article
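A minimal sketch of the plug-in part of such a bound for an i.i.d. multinomial fit: it computes nH(θ̂_n) plus the (d/2)·log(n/2π) penalty, in bits, omitting the constant Fisher-information term from the abstract; the example sequence is illustrative:

```python
import math
from collections import Counter

def plugin_complexity_bits(seq):
    # Two-term approximation of the NML code length for an i.i.d.
    # multinomial model over the observed alphabet:
    #   n * H(theta_hat) + (d / 2) * log2(n / (2 * pi)),
    # with d = alphabet size - 1 free parameters. The constant term
    # log ∫ |I(theta)|^{1/2} dtheta is omitted in this sketch.
    n = len(seq)
    counts = Counter(seq)
    H = -sum((c / n) * math.log2(c / n) for c in counts.values())
    d = len(counts) - 1
    penalty = (d / 2) * math.log2(n / (2 * math.pi))
    return n * H + penalty

bound = plugin_complexity_bits("ACGTACGTAAAACCCC")   # illustrative DNA-like string
```

For a constant sequence the empirical entropy and the parameter count both vanish, so the bound collapses to zero, matching the intuition that such a sequence is maximally compressible.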
(This article belongs to the Section Information Theory, Probability and Statistics)

18 pages, 737 KB  
Article
Mutual Information and Quantum Coherence in Minimum Error Discrimination of N Pure Equidistant Quantum States
by Omar Jiménez
Entropy 2025, 27(8), 863; https://doi.org/10.3390/e27080863 - 14 Aug 2025
Viewed by 605
Abstract
We study the quantum state discrimination problem under the minimum error (ME) strategy for a set of N pure equidistant states. These states are characterized by the property that the inner product between any pair of states is given by a unique complex number S. We provide the explicit form of the states and analyze their main structural properties. The optimal success probability for ME discrimination is evaluated as a function of the number of states, as well as the modulus and phase of the inner product S. Furthermore, we propose an experimental scheme for implementing the ME discrimination of equidistant states. We also investigate the quantum coherence consumed in the implementation of the minimum error discrimination of the equidistant states, which has an established operational interpretation as cryptographic randomness gain. As an application, we propose a quantum communication protocol in which Alice prepares and sends one of the equidistant states, while Bob applies the minimum error discrimination to extract the classical information encoded in the state. Finally, we discuss the optimal conditions under which the protocol achieves an optimal balance of classical correlations and quantum coherence, thereby ensuring effective information transfer and cryptographic security. Full article
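For the special case N = 2, the minimum-error success probability is the standard Helstrom bound; a sketch follows (the paper's N-state equidistant formula, which depends on the modulus and phase of S, is not reproduced here):

```python
import math
import numpy as np

def helstrom_success(psi1, psi2, p1=0.5):
    # Helstrom bound: optimal minimum-error success probability for
    # discriminating two pure states prepared with priors p1 and 1 - p1.
    overlap = abs(np.vdot(psi1, psi2)) ** 2
    p2 = 1.0 - p1
    return 0.5 * (1.0 + math.sqrt(1.0 - 4.0 * p1 * p2 * overlap))

theta = math.pi / 6
psi1 = np.array([1.0, 0.0])
psi2 = np.array([math.cos(theta), math.sin(theta)])   # overlap cos^2(30 deg) = 0.75
p_success = helstrom_success(psi1, psi2)
```

Orthogonal states give success probability 1, and the success probability decreases monotonically as the overlap (the analogue of |S| above) grows, which is the trade-off the abstract analyzes for general N.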
(This article belongs to the Special Issue Insight into Entropy)

24 pages, 3961 KB  
Article
Hierarchical Multi-Scale Mamba with Tubular Structure-Aware Convolution for Retinal Vessel Segmentation
by Tao Wang, Dongyuan Tian, Haonan Zhao, Jiamin Liu, Weijie Wang, Chunpei Li and Guixia Liu
Entropy 2025, 27(8), 862; https://doi.org/10.3390/e27080862 - 14 Aug 2025
Viewed by 820
Abstract
Retinal vessel segmentation plays a crucial role in diagnosing various retinal and cardiovascular diseases and serves as a foundation for computer-aided diagnostic systems. Blood vessels in color retinal fundus images, captured using fundus cameras, are often affected by illumination variations and noise, making it difficult to preserve vascular integrity and posing a significant challenge for vessel segmentation. In this paper, we propose HM-Mamba, a novel hierarchical multi-scale Mamba-based architecture that incorporates tubular structure-aware convolution to extract both local and global vascular features for retinal vessel segmentation. First, we introduce a tubular structure-aware convolution to reinforce vessel continuity and integrity. Building on this, we design a multi-scale fusion module that aggregates features across varying receptive fields, enhancing the model’s robustness in representing both primary trunks and fine branches. Second, we integrate multi-branch Fourier transform with the dynamic state modeling capability of Mamba to capture both long-range dependencies and multi-frequency information. This design enables robust feature representation and adaptive fusion, thereby enhancing the network’s ability to model complex spatial patterns. Furthermore, we propose a hierarchical multi-scale interactive Mamba block that integrates multi-level encoder features through gated Mamba-based global context modeling and residual connections, enabling effective multi-scale semantic fusion and reducing detail loss during downsampling. Extensive evaluations on five widely used benchmark datasets—DRIVE, CHASE_DB1, STARE, IOSTAR, and LES-AV—demonstrate the superior performance of HM-Mamba, yielding Dice coefficients of 0.8327, 0.8197, 0.8239, 0.8307, and 0.8426, respectively. Full article

19 pages, 1692 KB  
Article
Overview of Mathematical Relations Between Poincaré Plot Measures and Time and Frequency Domain Measures of Heart Rate Variability
by Arie M. van Roon, Mark M. Span, Joop D. Lefrandt and Harriëtte Riese
Entropy 2025, 27(8), 861; https://doi.org/10.3390/e27080861 - 14 Aug 2025
Viewed by 789
Abstract
The Poincaré plot was introduced as a tool to analyze heart rate variations caused by arrhythmias. Later, it was applied to time series of normal beats. The plot shows the relationship between each inter-beat interval (IBI) and the next. Several parameters have been developed to characterize this relationship: the short and long axes of the fitted ellipse, SD1 and SD2, respectively, their ratio, and their product. The differences between the IBI of a beat and that of the beat m steps later are also studied, giving SD1(m) and SD2(m). We studied the mathematical relations between the Poincaré measures and heart rate variability measures in the time domain (standard deviation of IBI, SDNN; root mean square of successive differences, RMSSD) and the frequency domain (power in the low and high frequency bands, and their ratio). We conclude that SD1 and SD2 provide no new information beyond SDNN and RMSSD. Only the correlation coefficient r(m) provides new information, for m > 1. Novel findings are that ln(SD2(m)/SD1(m)) = tanh⁻¹(r(m)), which is an approximately normally distributed transformation of r(m), and that SD1(m) and SD2(m) can be calculated by multiplying the power spectrum by a weighting function that depends on m, revealing their relationship with spectral measures as well as the relationship between SD1(m) and SD2(m). Both lagged parameters are considerably harder to interpret than low and high frequency power, which are more closely related to the functioning of the autonomic nervous system. Full article
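A numerical check of the identity ln(SD2(m)/SD1(m)) = tanh⁻¹(r(m)) at lag m = 1 is below, on simulated inter-beat intervals (illustrative AR(1)-style data, not the study's recordings). Here r is defined from the pairwise variances, a form that coincides with the Pearson correlation when the paired variances are equal:

```python
import numpy as np

rng = np.random.default_rng(2)

# Simulated inter-beat intervals in ms: mean 800, AR(1)-like fluctuations.
n = 2000
ibi = np.full(n, 800.0)
noise = rng.normal(0, 20, n)
for i in range(1, n):
    ibi[i] = 800 + 0.7 * (ibi[i - 1] - 800) + noise[i]

a, b = ibi[:-1], ibi[1:]              # successive IBI pairs (lag m = 1)
sd1 = np.sqrt(np.var(b - a) / 2)      # short axis of the Poincaré ellipse
sd2 = np.sqrt(np.var(b + a) / 2)      # long axis

# With r built from the same pair variances, the identity holds exactly:
r = (sd2**2 - sd1**2) / (sd2**2 + sd1**2)
identity_gap = abs(np.log(sd2 / sd1) - np.arctanh(r))
```

The algebra behind the exactness: sd2² − sd1² = 2·Cov(a, b) and sd2² + sd1² = Var(a) + Var(b), so tanh⁻¹(r) telescopes to ½·ln(sd2²/sd1²).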
(This article belongs to the Section Information Theory, Probability and Statistics)

15 pages, 2607 KB  
Article
Adaptive Feedback Compensation Algorithm for Quantum Random Number Generators
by Wei Deng, Kun Chen, Fei Hua, Jing Cheng, Banghong Guo and Huanwen Xie
Entropy 2025, 27(8), 860; https://doi.org/10.3390/e27080860 - 14 Aug 2025
Viewed by 668
Abstract
As a core component in quantum cryptography, Quantum Random Number Generators (QRNGs) face dual critical challenges: insufficient randomness enhancement and limited compatibility with post-processing algorithms. This study proposes an Adaptive Feedback Compensation Algorithm (AFCA) to address these limitations through dynamic parameter feedback and selective encryption strategies. The AFCA dynamically adjusts nonlinear transformation intensity based on real-time statistical deviations, retaining over 50% of original bits while correcting local imbalances. Experimental results demonstrate significant improvements across QRNG types: the Monobit Test p-value for continuous QRNGs increased from 0.1376 to 0.9743, and the 0/1 distribution deviation in discrete QRNGs decreased from 7.9% to 0.5%. Compared to traditional methods like von Neumann correction, AFCA reduces data discard rates by over 55% without compromising processing efficiency. These advancements provide a robust solution for high-security quantum communication systems requiring multi-layered encryption architectures. Full article
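For comparison with the discard rates quoted above, the traditional von Neumann correction can be sketched as follows (standard textbook construction; the AFCA itself is not reproduced here):

```python
def von_neumann_extract(bits):
    # Classic von Neumann debiasing: read non-overlapping pairs,
    # map 01 -> 0 and 10 -> 1, and discard 00 and 11 entirely.
    out = []
    for i in range(0, len(bits) - 1, 2):
        a, b = bits[i], bits[i + 1]
        if a != b:
            out.append(a)
    return out

raw = [1, 0, 0, 0, 1, 1, 0, 1, 1, 0]       # illustrative biased raw bits
clean = von_neumann_extract(raw)           # unbiased output bits
discard_rate = 1 - len(clean) / len(raw)   # fraction of input bits thrown away
```

Even an unbiased source loses at least 75% of its bits on average under this scheme (half of each kept pair plus all equal pairs), which is the inefficiency the AFCA's selective, feedback-driven correction is designed to avoid.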
(This article belongs to the Section Quantum Information)

17 pages, 386 KB  
Article
A Horizon-as-Apparatus Model That Reproduces Black Hole Thermodynamics
by Daegene Song
Entropy 2025, 27(8), 859; https://doi.org/10.3390/e27080859 - 14 Aug 2025
Viewed by 702
Abstract
We present a measurement-driven model in which the black hole horizon functions as a classical apparatus, with Planck-scale patches acting as detectors for quantum field modes. This approach reproduces the Bekenstein–Hawking area law $S_{BH}=A/(4\ell_p^2)$ and provides a concrete statistical interpretation of the 1/4 factor, while adhering to established principles rather than deriving the entropy anew from first principles. Each patch generates a thermal ensemble (∼0.25 nat per mode), and summing over area-scaling patches yields the total entropy. Quantum simulations incorporating a realistic Hawking spectrum produce $S_k=0.257$ nat (3% above 0.25 nat), and we outline testable predictions for analogue systems. Our main contribution is the horizon-as-apparatus mechanism and its information-theoretic bookkeeping. Full article
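The area law quoted above can be evaluated numerically; a sketch for a solar-mass Schwarzschild black hole using textbook constants (this illustrates only the standard formula, not the paper's per-patch mechanism):

```python
import math

# Bekenstein-Hawking entropy S = A / (4 * l_p^2) in nats (k_B = 1),
# for a solar-mass Schwarzschild black hole.
G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8          # speed of light, m/s
hbar = 1.055e-34     # reduced Planck constant, J s
M_sun = 1.989e30     # solar mass, kg

r_s = 2 * G * M_sun / c**2      # Schwarzschild radius (~3 km)
A = 4 * math.pi * r_s**2        # horizon area
l_p2 = hbar * G / c**3          # Planck length squared
S = A / (4 * l_p2)              # entropy, about 1e77 nats
```

The enormous magnitude (~10⁷⁷ nats) is exactly what the patch picture must account for: an area-scaling number of Planck-scale detectors, each contributing roughly a quarter nat per measured mode.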
(This article belongs to the Special Issue Coarse and Fine-Grained Aspects of Gravitational Entropy)

23 pages, 418 KB  
Article
Robust Stability and Robust Stabilization of Discrete-Time Markov Jump Linear Systems Under a Class of Stochastic Structured Nonlinear Uncertainties
by Vasile Dragan and Samir Aberkane
Entropy 2025, 27(8), 858; https://doi.org/10.3390/e27080858 - 13 Aug 2025
Viewed by 616
Abstract
Robust stability/stabilization for discrete-time time-varying Markovian jump linear systems subject to block-diagonal stochastic parameter perturbations is addressed in this paper. Using a scaling technique, we succeed in effectively addressing the multi-perturbations case. We obtain an estimation of the lower bound of the stability radius in terms of the unique bounded and positive semidefinite solutions of adequately defined parameterized backward Lyapunov difference equations. In the time-invariant case, we show that such a lower bound is actually the exact value of the stability radius. Using the obtained result, we effectively address the state-feedback robust stabilization problem. Full article
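In the time-invariant, jump-free special case, the backward Lyapunov equation reduces to the standard discrete Lyapunov equation X = AᵀXA + Q; a fixed-point sketch is below (the paper's parameterized Markov-jump version is not reproduced here):

```python
import numpy as np

def discrete_lyapunov(A, Q, iters=500):
    # Fixed-point iteration for X = A^T X A + Q, which converges when the
    # spectral radius of A is below 1 (Schur stability). Each step adds one
    # more term of the series sum_k (A^T)^k Q A^k.
    X = np.zeros_like(Q)
    for _ in range(iters):
        X = A.T @ X @ A + Q
    return X

A = np.array([[0.5, 0.1],
              [0.0, 0.3]])       # Schur-stable: eigenvalues 0.5 and 0.3
Q = np.eye(2)
X = discrete_lyapunov(A, Q)
residual = np.linalg.norm(A.T @ X @ A + Q - X)   # should be ~0 at the fixed point
```

The existence of a bounded positive semidefinite solution for Q ⪰ 0 is exactly the certificate of stability that the paper's stability-radius bounds are phrased in.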
(This article belongs to the Special Issue Information Theory in Control Systems, 2nd Edition)

13 pages, 662 KB  
Article
Phase-Space Approach for Topological Phase Transitions in Silicene
by Maciej Kalka, Piotr Pigoń and Bartłomiej J. Spisak
Entropy 2025, 27(8), 857; https://doi.org/10.3390/e27080857 - 12 Aug 2025
Viewed by 668
Abstract
Silicene is a two-dimensional silicon monolayer with a band gap caused by relatively strong spin–orbit coupling. This band gap can be steered using a vertical electric field; changing the field's value drives a transition from a topological insulator to a bulk insulator regime. This study develops a phase-space approach to detecting the topological phase transitions in silicene induced by the presence of parallel magnetic and electric fields, with the aid of a topological quantum number based on the Wigner–Rényi entropy. A reinterpreted definition of the Wigner distribution function is employed to determine this indicator. The topological phase transition in silicene as a function of the electric field in the presence of the magnetic field is confirmed through the topological quantum number determined for the one-half, Shannon, and collision entropies. Full article
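The Rényi entropies named above (one-half, Shannon, collision) follow one standard definition; a sketch on an arbitrary discrete distribution is below (the Wigner-function-based topological quantum number itself is not reproduced here):

```python
import numpy as np

def renyi_entropy(p, alpha):
    # Rényi entropy of order alpha, in nats. The limit alpha -> 1 recovers
    # the Shannon entropy; alpha = 2 is the collision entropy; alpha = 1/2
    # is the "one-half" entropy used in the abstract.
    p = np.asarray(p, dtype=float)
    p = p[p > 0]
    if np.isclose(alpha, 1.0):
        return -(p * np.log(p)).sum()
    return np.log((p ** alpha).sum()) / (1 - alpha)

p = np.array([0.5, 0.25, 0.125, 0.125])   # illustrative distribution
H_half = renyi_entropy(p, 0.5)
H_shannon = renyi_entropy(p, 1.0)
H_collision = renyi_entropy(p, 2.0)
```

The Rényi entropy is non-increasing in its order, so the three indicators always satisfy H_half ≥ H_shannon ≥ H_collision; the topological quantum number tracks jumps in such entropies computed from the (reinterpreted) Wigner distribution.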
(This article belongs to the Section Statistical Physics)
