Entropy, Volume 27, Issue 8 (August 2025) – 108 articles

  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive the tables of contents of newly released issues.
  • PDF is the official format for papers, which are published in both HTML and PDF forms. To view a paper in PDF format, click the "PDF Full-text" link and open it with the free Adobe Reader.
33 pages, 1695 KiB  
Article
Airline Ranking Using Social Feedback and Adapted Fuzzy Belief TOPSIS
by Ewa Roszkowska and Marzena Filipowicz-Chomko
Entropy 2025, 27(8), 879; https://doi.org/10.3390/e27080879 - 19 Aug 2025
Abstract
In the era of digital interconnectivity, user-generated reviews on platforms such as TripAdvisor have become a valuable source of social feedback, reflecting collective experiences and perceptions of airline services. However, aggregating such feedback presents several challenges: evaluations are typically expressed using linguistic ordinal scales, are subjective, often incomplete, and influenced by opinion dynamics within social networks. To effectively deal with these complexities and extract meaningful insights, this study proposes an information-driven decision-making framework that integrates Fuzzy Belief Structures with the TOPSIS method. To handle the uncertainty and imprecision of linguistic ratings, user opinions are modeled as fuzzy belief distributions over satisfaction levels. Rankings are then derived using TOPSIS by comparing each airline’s aggregated profile to ideal satisfaction benchmarks via a belief-based distance measure. This framework presents a novel solution for measuring synthetic satisfaction in complex social feedback systems, thereby contributing to the understanding of information flow, belief aggregation, and emergent order in digital opinion networks. The methodology is demonstrated using a real-world dataset of TripAdvisor airline reviews, providing a robust and interpretable benchmark for service quality. Moreover, this study applies Shannon entropy to classify and interpret the consistency of customer satisfaction ratings among Star Alliance airlines. The results confirm the stability of the Airline Satisfaction Index (ASI), with extremely high correlations among the five rankings generated using different fuzzy utility function models. The methodology reveals that airlines such as Singapore Airlines, ANA, EVA Air, and Air New Zealand consistently achieve high satisfaction scores across all fuzzy model configurations, highlighting their strong and stable performance regardless of model variation. These airlines also show both low entropy and high average scores, confirming their consistent excellence. Full article
(This article belongs to the Special Issue Dynamics in Biological and Social Networks)
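As a quick illustration of the Shannon-entropy consistency check described above: the entropy of a star-rating histogram is low when reviewers agree and high when opinions are split, so pairing it with the mean score separates consistently excellent carriers from merely average ones. A minimal sketch in Python, with hypothetical review counts rather than the paper's TripAdvisor data:

```python
import numpy as np

def rating_entropy(counts):
    """Shannon entropy (bits) of a 1-5 star rating histogram."""
    p = np.asarray(counts, dtype=float)
    p = p / p.sum()
    p = p[p > 0]                      # convention: 0 * log(0) = 0
    return float(-(p * np.log2(p)).sum())

# Hypothetical review counts for two carriers (1 to 5 stars):
consistent = [5, 10, 20, 400, 565]   # opinions concentrated on 4-5 stars
divided    = [200, 150, 100, 250, 300]

for name, counts in [("consistent", consistent), ("divided", divided)]:
    mean = np.average([1, 2, 3, 4, 5], weights=counts)
    print(f"{name}: H = {rating_entropy(counts):.3f} bits, mean = {mean:.2f}")
```

Low entropy combined with a high mean score is the signature the study associates with stable top performers.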
25 pages, 484 KiB  
Tutorial
Geometric Neural Ordinary Differential Equations: From Manifolds to Lie Groups
by Yannik P. Wotte, Federico Califano and Stefano Stramigioli
Entropy 2025, 27(8), 878; https://doi.org/10.3390/e27080878 - 19 Aug 2025
Abstract
Neural ordinary differential equations (neural ODEs) are a well-established tool for optimizing the parameters of dynamical systems, with applications in image classification, optimal control, and physics learning. Although dynamical systems of interest often evolve on Lie groups and more general differentiable manifolds, theoretical results for neural ODEs are frequently phrased on $\mathbb{R}^n$. We collect recent results for neural ODEs on manifolds and present a unifying derivation of various results that serves as a tutorial to extend existing methods to differentiable manifolds. We also extend the results to the recent class of neural ODEs on Lie groups, highlighting a non-trivial extension of manifold neural ODEs that exploits the Lie group structure. Full article
(This article belongs to the Special Issue Lie Group Machine Learning)
24 pages, 8653 KiB  
Article
Sea Surface Wind Speed Retrieval from Marine Radar Image Sequences Based on GLCM-Derived Texture Features
by Hui Wang, Haiyang Qiu, Lei Wang, Jingxi Huang and Xingbo Ruan
Entropy 2025, 27(8), 877; https://doi.org/10.3390/e27080877 - 19 Aug 2025
Abstract
Sea surface wind speed is a key parameter in marine meteorology, navigation safety, and offshore engineering. Traditional marine radar wind speed retrieval algorithms often suffer from poor environmental adaptability and limited applicability across different radar systems, while existing empirical models face challenges in accuracy and generalization. To address these issues, this study proposes a novel wind speed retrieval method based on X-band marine radar image sequences and texture features derived from the Gray-Level Co-occurrence Matrix (GLCM). A three-stage preprocessing pipeline—comprising noise suppression, geometric correction, and interpolation—is employed to extract small-scale wind streaks that reflect wind field characteristics, ensuring high-quality image data. Two key GLCM texture features of wind streaks, energy and entropy, are identified, and their stable values are used to construct a segmented dual-parameter wind speed model with a division at 10 m/s. Experimental results show that both energy- and entropy-based models outperform traditional empirical models, reducing mean errors by approximately 49.3% and 16.7%, respectively. The energy stable model achieves the best overall performance with a correlation coefficient of 0.89, while the entropy stable model demonstrates superior performance at low wind speeds. The complementary nature of the two models enhances robustness under varying conditions, providing a more accurate and efficient solution for sea surface wind speed retrieval. Full article
(This article belongs to the Section Multidisciplinary Applications)
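For readers who want to reproduce the two texture features the wind speed model is built on, the sketch below computes GLCM energy and entropy with scikit-image (0.19+). The quantization level, distance, angle, and synthetic test image are arbitrary stand-ins, not the paper's radar preprocessing:

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops

def glcm_energy_entropy(img, distance=1, angle=0.0, levels=64):
    """Energy and entropy of the normalized GLCM of an 8-bit image."""
    q = (img.astype(np.uint16) * levels // 256).astype(np.uint8)  # quantize
    glcm = graycomatrix(q, [distance], [angle], levels=levels,
                        symmetric=True, normed=True)
    p = glcm[:, :, 0, 0]                    # normalized co-occurrence matrix
    energy = float(graycoprops(glcm, "energy")[0, 0])
    nz = p[p > 0]
    entropy = float(-(nz * np.log2(nz)).sum())
    return energy, entropy

rng = np.random.default_rng(0)
img = rng.integers(0, 256, size=(128, 128), dtype=np.uint8)  # noise stand-in
print(glcm_energy_entropy(img))
```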
30 pages, 2099 KiB  
Article
Navigating Cross-Border E-Commerce: Prioritizing Logistics Partners with Hybrid MCGDM
by Xingyu Ma and Chuanxu Wang
Entropy 2025, 27(8), 876; https://doi.org/10.3390/e27080876 - 19 Aug 2025
Abstract
As global e-commerce expands, efficient cross-border logistics services have become essential. To support the evaluation of logistics service providers (LSPs), we propose HD-CBDTOPSIS (Technique for Order Preference by Similarity to Ideal Solution with heterogeneous data and cloud Bhattacharyya distance), a hybrid multi-criteria group decision-making (MCGDM) model designed to handle complex, uncertain data. Our criteria system integrates traditional supplier evaluation with cross-border e-commerce characteristics, using heterogeneous data types—including exact numbers, intervals, digital datasets, multi-granularity linguistic terms, and linguistic expressions. These are unified using normal cloud models (NCMs), ensuring uncertainty is consistently represented. A novel algorithm, improved multi-step backward cloud transformation with sampling replacement (IMBCT-SR), is developed for converting dataset-type indicators into cloud models. We also introduce a new similarity measure, the Cloud Bhattacharyya Distance (CBD), which shows superior discrimination ability compared to traditional distances. Using the coefficient of variation (CV) based on CBD, we objectively determine criteria weights. A cloud-based TOPSIS approach is then applied to rank alternative LSPs, with all variables modeled using NCMs to ensure consistent uncertainty representation. An application case and comparative experiments demonstrate that HD-CBDTOPSIS is an effective, flexible, and robust tool for evaluating cross-border LSPs under uncertain and multi-dimensional conditions. Full article
(This article belongs to the Section Complexity)
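The Cloud Bhattacharyya Distance introduced in the paper is built for normal cloud models; the classical Bhattacharyya distance between two univariate Gaussians, sketched below, is the textbook quantity it generalizes (the cloud-model version itself is not reproduced here):

```python
import math

def bhattacharyya_gauss(mu1, s1, mu2, s2):
    """Bhattacharyya distance between N(mu1, s1^2) and N(mu2, s2^2)."""
    var_avg = (s1 ** 2 + s2 ** 2) / 2.0
    return ((mu1 - mu2) ** 2 / (8.0 * var_avg)
            + 0.5 * math.log(var_avg / (s1 * s2)))

print(bhattacharyya_gauss(0.0, 1.0, 1.0, 2.0))  # grows with mean/spread gaps
```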
28 pages, 5196 KiB  
Article
Autoencoder-Like Sparse Non-Negative Matrix Factorization with Structure Relationship Preservation
by Ling Zhong and Haiyan Gao
Entropy 2025, 27(8), 875; https://doi.org/10.3390/e27080875 - 19 Aug 2025
Abstract
Clustering algorithms based on non-negative matrix factorization (NMF) have garnered significant attention in data mining due to their strong interpretability and computational simplicity. However, traditional NMF often struggles to effectively capture and preserve topological structure information between data during low-dimensional representation. Therefore, this paper proposes an autoencoder-like sparse non-negative matrix factorization with structure relationship preservation (ASNMF-SRP). Firstly, drawing on the principle of autoencoders, a “decoder-encoder” co-optimization matrix factorization framework is constructed to enhance the factorization stability and representation capability of the coefficient matrix. Then, a preference-adjusted random walk strategy is introduced to capture higher-order neighborhood relationships between samples, encoding multi-order topological structure information of the data through an optimal graph regularization term. Simultaneously, to mitigate the impact of noise and outliers, the -norm is used to constrain the feature correlation between low-dimensional representations and the original data, preserving feature relationships between data, and a sparse constraint is imposed on the coefficient matrix via the inner product. Finally, clustering experiments conducted on 8 public datasets demonstrate that ASNMF-SRP consistently exhibits favorable clustering performance. Full article
(This article belongs to the Section Information Theory, Probability and Statistics)
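For orientation, the classical NMF baseline that ASNMF-SRP builds on factorizes a nonnegative matrix V into nonnegative factors W and H. A minimal sketch using the standard Lee-Seung multiplicative updates (the paper's autoencoder-like framework, graph regularization, and sparsity terms are not reproduced):

```python
import numpy as np

def nmf(V, k, iters=200, eps=1e-9, seed=0):
    """Classical NMF: V (n x m, nonnegative) ~ W (n x k) @ H (k x m)."""
    rng = np.random.default_rng(seed)
    n, m = V.shape
    W = rng.random((n, k)) + eps
    H = rng.random((k, m)) + eps
    for _ in range(iters):
        H *= (W.T @ V) / (W.T @ W @ H + eps)   # multiplicative update for H
        W *= (V @ H.T) / (W @ H @ H.T + eps)   # multiplicative update for W
    return W, H

V = np.random.default_rng(1).random((20, 12))
W, H = nmf(V, k=3)
print(np.linalg.norm(V - W @ H))   # reconstruction error after the updates
```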
32 pages, 14643 KiB  
Article
Image Encryption Algorithm Based on Dynamic Rhombus Transformation and Digital Tube Model
by Xiaoqiang Zhang, Yupeng Song and Ke Huang
Entropy 2025, 27(8), 874; https://doi.org/10.3390/e27080874 - 18 Aug 2025
Abstract
With the rapid advancement of information technology, images, as critical information carriers, are confronted with significant security risks. To ensure image security, this paper proposes an image encryption algorithm based on a dynamic rhombus transformation and digital tube model. Firstly, a two-dimensional hyper-chaotic system is constructed by combining the Sine map, Cubic map and May map. The analysis results demonstrate that the constructed hybrid chaotic map exhibits superior chaotic characteristics in terms of bifurcation diagrams, Lyapunov exponents, sample entropy, etc. Secondly, a dynamic rhombus transformation is proposed to scramble pixel positions, and chaotic sequences are used to dynamically select transformation centers and traversal orders. Finally, a digital tube model is designed to diffuse pixel values, which utilizes chaotic sequences to dynamically control the bit reversal and circular shift operations, and the exclusive OR operation to diffuse pixel values. The performance analyses show that the information entropy of the cipher image is 7.9993, and the correlation coefficients in horizontal, vertical, and diagonal directions are 0.0008, 0.0001, and 0.0005, respectively. Moreover, the proposed algorithm has strong resistance against noise attacks, cropping attacks, and exhaustive attacks, effectively ensuring the security of images during storage and transmission. Full article
(This article belongs to the Section Signal and Data Analysis)
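The two headline security figures, information entropy near 8 bits/pixel and adjacent-pixel correlation near 0, are standard cipher-image diagnostics and easy to reproduce. A minimal sketch on a random stand-in image (the encryption algorithm itself is not reproduced):

```python
import numpy as np

def image_entropy(img):
    """Information entropy (bits/pixel) of an 8-bit grayscale image."""
    hist = np.bincount(img.ravel(), minlength=256).astype(float)
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

def horizontal_correlation(img):
    """Correlation coefficient of horizontally adjacent pixel pairs."""
    a = img[:, :-1].ravel().astype(float)
    b = img[:, 1:].ravel().astype(float)
    return float(np.corrcoef(a, b)[0, 1])

rng = np.random.default_rng(0)
cipher_like = rng.integers(0, 256, (256, 256), dtype=np.uint8)
print(image_entropy(cipher_like))          # ~7.999 for a good cipher image
print(horizontal_correlation(cipher_like)) # ~0 for a good cipher image
```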
21 pages, 2034 KiB  
Article
Brain Oscillations and Autonomic Synthonization via Comodulation in Collaborative Negotiation
by Katia Rovelli, Carlotta Acconito, Laura Angioletti and Michela Balconi
Entropy 2025, 27(8), 873; https://doi.org/10.3390/e27080873 - 18 Aug 2025
Abstract
This study investigates the relationship between neural and physiological synthonization via comodulation (Synth) in dyadic exchanges centered on negotiation processes. In total, 13 dyads participated in a negotiation task with three phases: Initiation (IP), Negotiation Core (NCP), and Resolution (RP). Electroencephalographic (EEG) frequency bands (i.e., delta, theta, alpha) and autonomic responses (heart rate variability, HRV) were recorded. Synth was analyzed using Euclidean distance (EuDist) for EEG and autonomic indices. Significant Synth in the delta, theta, and alpha bands in temporo-central and parieto-occipital regions was observed, indicating social cognitive alignment. HRV Synth was higher during the NCP than the IP, suggesting better coordination. Based on this result, a cluster analysis was performed on HRV EuDist to identify distinct groups based on HRV and, ultimately, personality patterns; it revealed one cluster with higher Synth and reward sensitivity, and another with lower Synth and reward sensitivity. These findings show how neural and autonomic Synth enhances social cognition and emotional regulation. Full article
(This article belongs to the Special Issue Active Inference in Cognitive Neuroscience)
23 pages, 1276 KiB  
Article
Data-Driven Assessment of Carbon Emission and Optimization of Carbon Emission Reduction in the Ceramic Industry
by Xingbin Huang and Weihua He
Entropy 2025, 27(8), 872; https://doi.org/10.3390/e27080872 - 18 Aug 2025
Abstract
By integrating statistical modeling and data analysis techniques, we systematically assess the carbon emission performance of the ceramic industry and propose targeted emission reduction pathways. Firstly, the entropy weight TOPSIS model is employed to quantitatively evaluate the carbon emission performance of the three major Chinese ceramic production areas: Foshan, Jingdezhen, and Zibo. The data-driven quantitative analysis shows that the carbon emission intensity in Foshan is significantly higher than that in the other two regions (with a relative closeness degree of 0.5185). The key issues identified include high energy consumption in the production process, a high reliance on raw coal, and insufficient investment in environmental protection. Furthermore, through combined XGBoost-SHAP modeling, the key drivers of carbon emissions are precisely identified from multi-dimensional data. It is found that the elasticity coefficient of raw coal in the carbon emission proportion is as high as 25.84%, while the potential for substitution with natural gas is remarkable. Based on statistical prediction techniques, a carbon emission trend model under the scenario of energy structure optimization is constructed, predicting that after reaching a peak in 2017, Foshan's carbon emissions will continue to decline, with the proportion of raw coal dropping to 48% and that of natural gas rising to 10%, thereby verifying the feasibility of the green transformation. Additionally, a multi-agent carbon trading simulation model is constructed to explore the emission reduction behaviors of enterprises under different carbon price scenarios. This study not only achieves precise quantitative analysis of carbon emissions through statistical method innovation but also verifies the feasible paths of low-carbon transformation through data modeling. Full article
(This article belongs to the Section Multidisciplinary Applications)
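The entropy weight step used here follows the standard recipe: criteria whose values are more dispersed across alternatives carry more information and therefore receive larger weights. A minimal sketch with a made-up decision matrix (not the paper's indicator data):

```python
import numpy as np

def entropy_weights(X):
    """Entropy weights for a decision matrix X (alternatives x criteria);
    entries are assumed strictly positive after normalization."""
    n = X.shape[0]
    P = X / X.sum(axis=0, keepdims=True)            # column-wise proportions
    E = -(P * np.log(P)).sum(axis=0) / np.log(n)    # entropy per criterion
    d = 1.0 - E                                     # degree of divergence
    return d / d.sum()

X = np.array([[0.90, 0.30, 0.50],
              [0.80, 0.90, 0.40],
              [0.20, 0.80, 0.60]])
print(entropy_weights(X))   # more dispersed criteria get larger weights
```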
26 pages, 2734 KiB  
Article
Time-Marching Quantum Algorithm for Simulation of Nonlinear Lorenz Dynamics
by Efstratios Koukoutsis, George Vahala, Min Soe, Kyriakos Hizanidis, Linda Vahala and Abhay K. Ram
Entropy 2025, 27(8), 871; https://doi.org/10.3390/e27080871 - 17 Aug 2025
Abstract
Simulating nonlinear classical dynamics on a quantum computer is an inherently challenging task due to the linear operator formulation of quantum mechanics. In this work, we provide a systematic approach to alleviate this difficulty by developing an explicit quantum algorithm that implements the time evolution of a second-order time-discretized version of the Lorenz model. The Lorenz model is a celebrated system of nonlinear ordinary differential equations that has been extensively studied in the contexts of climate science, fluid dynamics, and chaos theory. Our algorithm possesses a recursive structure and requires only a linear number of copies of the initial state with respect to the number of integration time-steps. This provides a significant improvement over previous approaches, while preserving the characteristic quantum speed-up in terms of the dimensionality of the underlying differential equations system, which similar time-marching quantum algorithms have previously demonstrated. Notably, by classically implementing the proposed algorithm, we showcase that it accurately captures the structural characteristics of the Lorenz system, reproducing both regular attractors (limit cycles) and the chaotic attractor within the chosen parameter regime. Full article
(This article belongs to the Special Issue Quantum Computing in the NISQ Era)
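The classical dynamics being time-marched are the standard Lorenz equations; any second-order integrator reproduces the attractor structure the authors verify classically. A minimal sketch with an explicit midpoint step and the usual chaotic parameters (the paper's exact discretization and quantum encoding are not reproduced):

```python
import numpy as np

def lorenz_rhs(v, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    x, y, z = v
    return np.array([sigma * (y - x), x * (rho - z) - y, x * y - beta * z])

def step_midpoint(v, dt):
    """One explicit midpoint (second-order) step of the Lorenz system."""
    k1 = lorenz_rhs(v)
    return v + dt * lorenz_rhs(v + 0.5 * dt * k1)

v = np.array([1.0, 1.0, 1.0])
for _ in range(10_000):
    v = step_midpoint(v, 0.005)
print(v)   # the trajectory wanders on the chaotic attractor for rho = 28
```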
11 pages, 745 KiB  
Article
Information Storage in a Black Hole’s Gravitational Field
by Dongshan He, Jinfang Li and Qian Qiu
Entropy 2025, 27(8), 870; https://doi.org/10.3390/e27080870 - 16 Aug 2025
Abstract
The key to resolving the black hole information loss paradox lies in clarifying the origin of black hole entropy and the mechanism by which black holes store information. By applying thermodynamic principles, we demonstrate that the entropy of a gravitational field is negative and proportional to the strength of the field, indicating that gravitational fields possess information storage capacity. For Schwarzschild black holes, we further demonstrate that information conventionally attributed to the black hole’s interior is in fact encoded within its external gravitational field. During black hole evaporation, the emitted particles transmit this information via gravitational correlations. This study advances our understanding of gravitational field entropy and provides valuable insights toward resolving the black hole information loss problem. Full article
(This article belongs to the Special Issue Black Hole Information Problem: Challenges and Perspectives)
21 pages, 3126 KiB  
Article
CViT Weakly Supervised Network Fusing Dual-Branch Local-Global Features for Hyperspectral Image Classification
by Wentao Fu, Xiyan Sun, Xiuhua Zhang, Yuanfa Ji and Jiayuan Zhang
Entropy 2025, 27(8), 869; https://doi.org/10.3390/e27080869 - 15 Aug 2025
Abstract
In hyperspectral image (HSI) classification, feature learning and label accuracy play a crucial role. In actual hyperspectral scenes, however, noisy labels are unavoidable and seriously impact the performance of methods. While deep learning has achieved remarkable results in HSI classification tasks, its noise-resistant performance usually comes at the cost of feature representation capabilities. High-dimensional and deep convolution can capture rich deep semantic features, but with high complexity and resource consumption. To deal with these problems, we propose a CViT Weakly Supervised Network (CWSN) for HSI classification. Specifically, a lightweight 1D-2D two-branch network is used for local generalization and enhancement of spatial–spectral features. Then, the fusion and characterization of local and global features are achieved through the CNN-Vision Transformer (CViT) cascade strategy. The experimental results on four benchmark HSI datasets show that CWSN has good anti-noise ability and ensures the robustness and versatility of the network facing both clean and noisy training sets. Compared to other methods, the CWSN has better classification accuracy. Full article
(This article belongs to the Section Signal and Data Analysis)
11 pages, 9959 KiB  
Article
Are Human Judgments of Real and Fake Faces Quantum-like Contextual?
by Peter Bruza, Aaron Lee and Pamela Hoyte
Entropy 2025, 27(8), 868; https://doi.org/10.3390/e27080868 - 15 Aug 2025
Abstract
This paper describes a crowdsourced experiment in which participants were asked to judge which of two simultaneously presented facial images (one real, one AI-generated) was fake. With the growing presence of synthetic imagery in digital environments, cognitive systems must adapt to novel and often deceptive visual stimuli. Recent developments in cognitive science propose that some mental processes may exhibit quantum-like characteristics, particularly in their context sensitivity. Drawing on Tezzin’s “generalized fair coin” model, this study applied Contextuality-by-Default (CbD) theory to investigate whether human judgments of human faces exhibit quantum-like contextuality. Across 20 trials, each treated as a “generalized coin”, bootstrap resampling (10,000 iterations per coin) revealed that nine trials demonstrated quantum-like contextuality. Notably, Coin 4 exhibited strong context-sensitive causal asymmetry, where both the real and synthetic faces elicited inverse judgments due to their unusually strong resemblance to one another. These results support the growing evidence that cognitive judgments are sometimes quantum-like contextual, suggesting that adopting comparative strategies, such as evaluating unfamiliar faces alongside known-real exemplars, may enhance accuracy in detecting synthetic images. Such pairwise methods align with the strengths of human perception and may inform future interventions, user interfaces, or educational tools aimed at improving visual judgment under uncertainty. Full article
(This article belongs to the Special Issue Quantum Probability and Randomness V)
18 pages, 2704 KiB  
Article
A Robust Hybrid Weighting Scheme Based on IQRBOW and Entropy for MCDM: Stability and Advantage Criteria in the VIKOR Framework
by Ali Erbey, Üzeyir Fidan and Cemil Gündüz
Entropy 2025, 27(8), 867; https://doi.org/10.3390/e27080867 - 15 Aug 2025
Abstract
In multi-criteria decision-making (MCDM) environments characterized by uncertainty and data irregularities, the reliability of weighting methods becomes critical for ensuring robust and accurate decisions. This study introduces a novel hybrid objective weighting method—IQRBOW-E (Interquartile Range-Based Objective Weighting with Entropy)—which dynamically combines the statistical robustness of the IQRBOW method with the information sensitivity of Entropy through a tunable parameter β. The method allows decision-makers to flexibly control the trade-off between robustness and information contribution, enhancing the adaptability of decision support systems. A comprehensive experimental design involving ten simulation scenarios was implemented, in which the number of criteria, alternatives, and outlier ratios were varied. The IQRBOW-E method was integrated into the VIKOR framework and evaluated through average Q values, stability ratios, SRD scores, and the Friedman test. The results indicate that the proposed hybrid approach achieves superior decision stability and performance, particularly in data environments with increasing outlier contamination. Optimal β values were shown to shift systematically depending on data conditions, highlighting the model’s sensitivity and adaptability. This study not only advances the methodological landscape of MCDM by introducing a parameterized hybrid weighting model but also contributes a robust and generalizable weighting infrastructure for modern decision-making under uncertainty. Full article
(This article belongs to the Special Issue Entropy Method for Decision Making with Uncertainty)
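The abstract states only that IQRBOW and Entropy weights are combined through a tunable β. The sketch below assumes a convex blend and a simple normalized-interquartile-range form for the robust weights; both are illustrative guesses, not the authors' definitions:

```python
import numpy as np

def iqr_weights(X):
    """Robust dispersion weights from column IQRs (assumed form)."""
    q75, q25 = np.percentile(X, [75, 25], axis=0)
    iqr = q75 - q25
    return iqr / iqr.sum()

def entropy_weights(X):
    """Standard entropy weights; X assumed strictly positive."""
    P = X / X.sum(axis=0, keepdims=True)
    E = -(P * np.log(P)).sum(axis=0) / np.log(X.shape[0])
    d = 1.0 - E
    return d / d.sum()

def hybrid_weights(X, beta=0.5):
    """Assumed combination rule: convex beta-blend of the two weight vectors."""
    return beta * iqr_weights(X) + (1.0 - beta) * entropy_weights(X)

X = np.random.default_rng(2).random((12, 4)) + 0.1   # strictly positive
print(hybrid_weights(X, beta=0.7))
```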
33 pages, 2080 KiB  
Article
Latent Class Analysis with Arbitrary-Distribution Responses
by Huan Qing and Xiaofei Xu
Entropy 2025, 27(8), 866; https://doi.org/10.3390/e27080866 - 14 Aug 2025
Abstract
The latent class model has been proposed as a powerful tool for understanding human behavior in various fields such as the social, psychological, behavioral, and biological sciences. However, one important limitation of the latent class model is that it is primarily applied to data with binary or categorical responses, which prevents it from modeling real-world data with continuous or negative responses. In many applications, ignoring the weights attached to responses discards a great deal of potentially valuable information. To address this limitation, we propose a novel generative model, the arbitrary-distribution latent class model (adLCM). Our model enables the generation of a data response matrix from an arbitrary distribution with a latent class structure. Compared to the latent class model, the adLCM is both more realistic and more general. To our knowledge, the adLCM is the first model for latent class analysis with any real-valued responses, including continuous, negative, and signed values, thereby extending the classical latent class model beyond its traditional limitation to binary or categorical outcomes. We investigate the identifiability of the model and propose an efficient algorithm for estimating the latent classes and other model parameters. We show that the proposed algorithm enjoys consistent estimation. The performance of our algorithm is evaluated using both computer-generated data and real-world personality test data. Full article
(This article belongs to the Section Information Theory, Probability and Statistics)
20 pages, 402 KiB  
Article
Variations on the Expectation Due to Changes in the Probability Measure
by Samir M. Perlaza and Gaetan Bisson
Entropy 2025, 27(8), 865; https://doi.org/10.3390/e27080865 - 14 Aug 2025
Abstract
In this paper, closed-form expressions for the variation of the expectation of a given function due to changes in the probability measure (probability distribution drifts) are presented. These expressions unveil interesting connections with Gibbs probability measures, information projections, Pythagorean identities for relative entropy, mutual information, and lautum information. Full article
(This article belongs to the Section Information Theory, Probability and Statistics)
23 pages, 374 KiB  
Article
Empirical Lossless Compression Bound of a Data Sequence
by Lei M. Li
Entropy 2025, 27(8), 864; https://doi.org/10.3390/e27080864 - 14 Aug 2025
Abstract
We consider the lossless compression bound of any individual data sequence. Conceptually, its Kolmogorov complexity is such a bound, yet it is uncomputable. According to Shannon's source coding theorem, the average compression bound is $nH$, where $n$ is the number of words and $H$ is the entropy of an oracle probability distribution characterizing the data source. The quantity $nH(\hat{\theta}_n)$ obtained by plugging in the maximum likelihood estimate is an underestimate of the bound. Shtarkov showed that the normalized maximum likelihood (NML) distribution is optimal in a minimax sense for any parametric family. Fitting a data sequence, without any a priori distributional assumption, by a relevant exponential family, we apply local asymptotic normality to show that the NML code length is $nH(\hat{\theta}_n) + \frac{d}{2}\log\frac{n}{2\pi} + \log\int_{\Theta}|I(\theta)|^{1/2}\,d\theta + o(1)$, where $d$ is the dictionary size, $|I(\theta)|$ is the determinant of the Fisher information matrix, and $\Theta$ is the parameter space. We demonstrate that sequentially predicting the optimal code length for the next word via a Bayesian mechanism leads to the mixture code, whose length is given by $nH(\hat{\theta}_n) + \frac{d}{2}\log\frac{n}{2\pi} + \log\frac{|I(\hat{\theta}_n)|^{1/2}}{w(\hat{\theta}_n)} + o(1)$, where $w(\theta)$ is a prior. The asymptotics apply not only to discrete symbols but also to continuous data if the code length for the former is replaced by the description length for the latter. The analytical result is exemplified by calculating compression bounds of protein-encoding DNA sequences under different parsing models. Typically, compression is maximized when parsing aligns with amino acid codons, while pseudo-random sequences remain incompressible, as predicted by Kolmogorov complexity. Notably, the empirical bound becomes more accurate as the dictionary size increases. Full article
(This article belongs to the Section Information Theory, Probability and Statistics)
18 pages, 737 KiB  
Article
Mutual Information and Quantum Coherence in Minimum Error Discrimination of N Pure Equidistant Quantum States
by Omar Jiménez
Entropy 2025, 27(8), 863; https://doi.org/10.3390/e27080863 - 14 Aug 2025
Abstract
We study the quantum state discrimination problem under the minimum error (ME) strategy for a set of N pure equidistant states. These states are characterized by the property that the inner product between any pair of states is given by a unique complex number S. We provide the explicit form of the states and analyze their main structural properties. The optimal success probability for ME discrimination is evaluated as a function of the number of states, as well as the modulus and phase of the inner product S. Furthermore, we propose an experimental scheme for implementing the ME discrimination of equidistant states. We also investigate the quantum coherence consumed in the implementation of the minimum error discrimination of the equidistant states, which has an established operational interpretation as cryptographic randomness gain. As an application, we propose a quantum communication protocol in which Alice prepares and sends one of the equidistant states, while Bob applies the minimum error discrimination to extract the classical information encoded in the state. Finally, we discuss the optimal conditions under which the protocol achieves an optimal balance of classical correlations and quantum coherence, thereby ensuring effective information transfer and cryptographic security. Full article
(This article belongs to the Special Issue Insight into Entropy)
24 pages, 3961 KiB  
Article
Hierarchical Multi-Scale Mamba with Tubular Structure-Aware Convolution for Retinal Vessel Segmentation
by Tao Wang, Dongyuan Tian, Haonan Zhao, Jiamin Liu, Weijie Wang, Chunpei Li and Guixia Liu
Entropy 2025, 27(8), 862; https://doi.org/10.3390/e27080862 - 14 Aug 2025
Abstract
Retinal vessel segmentation plays a crucial role in diagnosing various retinal and cardiovascular diseases and serves as a foundation for computer-aided diagnostic systems. Blood vessels in color retinal fundus images, captured using fundus cameras, are often affected by illumination variations and noise, making it difficult to preserve vascular integrity and posing a significant challenge for vessel segmentation. In this paper, we propose HM-Mamba, a novel hierarchical multi-scale Mamba-based architecture that incorporates tubular structure-aware convolution to extract both local and global vascular features for retinal vessel segmentation. First, we introduce a tubular structure-aware convolution to reinforce vessel continuity and integrity. Building on this, we design a multi-scale fusion module that aggregates features across varying receptive fields, enhancing the model’s robustness in representing both primary trunks and fine branches. Second, we integrate multi-branch Fourier transform with the dynamic state modeling capability of Mamba to capture both long-range dependencies and multi-frequency information. This design enables robust feature representation and adaptive fusion, thereby enhancing the network’s ability to model complex spatial patterns. Furthermore, we propose a hierarchical multi-scale interactive Mamba block that integrates multi-level encoder features through gated Mamba-based global context modeling and residual connections, enabling effective multi-scale semantic fusion and reducing detail loss during downsampling. Extensive evaluations on five widely used benchmark datasets—DRIVE, CHASE_DB1, STARE, IOSTAR, and LES-AV—demonstrate the superior performance of HM-Mamba, yielding Dice coefficients of 0.8327, 0.8197, 0.8239, 0.8307, and 0.8426, respectively. Full article
19 pages, 1692 KiB  
Article
Overview of Mathematical Relations Between Poincaré Plot Measures and Time and Frequency Domain Measures of Heart Rate Variability
by Arie M. van Roon, Mark M. Span, Joop D. Lefrandt and Harriëtte Riese
Entropy 2025, 27(8), 861; https://doi.org/10.3390/e27080861 - 14 Aug 2025
Abstract
The Poincaré plot was introduced as a tool to analyze heart rate variations caused by arrhythmias. Later, it was applied to time series with normal beats. The plot shows the relationship between the inter-beat interval (IBI) of one beat and that of the next. Several parameters have been developed to characterize this relationship: the short and long axes of the fitted ellipse, SD1 and SD2, respectively, as well as their ratio and their product. Differences between the IBI of a beat and that of the beat m positions later are also studied, giving SD1(m) and SD2(m). We studied the mathematical relations between the Poincaré measures and heart rate variability measures in the time domain (standard deviation of IBI, SDNN; root mean square of successive differences, RMSSD) and the frequency domain (power in the low and high frequency bands, and their ratio). We concluded that SD1 and SD2 do not provide new information compared to SDNN and RMSSD. Only the correlation coefficient r(m) provides new information for m > 1. Novel findings are that $\ln(\mathrm{SD2}(m)/\mathrm{SD1}(m)) = \tanh^{-1}(r(m))$, which is an approximately normally distributed transformation of r(m), and that SD1(m) and SD2(m) can be calculated by multiplying the power spectrum by a weighting function that depends on m, revealing their relationship with spectral measures as well as with each other. Both lagged parameters are much harder to interpret than low and high frequency power, which are more closely related to the functioning of the autonomic nervous system. Full article
(This article belongs to the Section Information Theory, Probability and Statistics)
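The identity reported above is easy to check numerically: with SD1 and SD2 computed from successive inter-beat intervals and r the lag-1 correlation, ln(SD2/SD1) matches arctanh(r). A minimal sketch on a synthetic AR(1) RR series (not physiological data):

```python
import numpy as np

rng = np.random.default_rng(0)
rr = np.empty(5000)                 # toy RR-interval series (ms), AR(1)
rr[0] = 800.0
for i in range(1, rr.size):
    rr[i] = 800.0 + 0.6 * (rr[i - 1] - 800.0) + rng.normal(0.0, 20.0)

x, y = rr[:-1], rr[1:]              # Poincare plot coordinates
sd1 = np.sqrt(np.var(y - x) / 2.0)  # spread across the identity line
sd2 = np.sqrt(np.var(y + x) / 2.0)  # spread along the identity line
r = np.corrcoef(x, y)[0, 1]         # lag-1 autocorrelation

print(np.log(sd2 / sd1), np.arctanh(r))   # approximately equal
```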
15 pages, 2607 KiB  
Article
Adaptive Feedback Compensation Algorithm for Quantum Random Number Generators
by Wei Deng, Kun Chen, Fei Hua, Jing Cheng, Banghong Guo and Huanwen Xie
Entropy 2025, 27(8), 860; https://doi.org/10.3390/e27080860 - 14 Aug 2025
Abstract
As a core component in quantum cryptography, Quantum Random Number Generators (QRNGs) face dual critical challenges: insufficient randomness enhancement and limited compatibility with post-processing algorithms. This study proposes an Adaptive Feedback Compensation Algorithm (AFCA) to address these limitations through dynamic parameter feedback and selective encryption strategies. The AFCA dynamically adjusts nonlinear transformation intensity based on real-time statistical deviations, retaining over 50% of original bits while correcting local imbalances. Experimental results demonstrate significant improvements across QRNG types: the Monobit Test p-value for continuous QRNGs increased from 0.1376 to 0.9743, and the 0/1 distribution deviation in discrete QRNGs decreased from 7.9% to 0.5%. Compared to traditional methods like von Neumann correction, AFCA reduces data discard rates by over 55% without compromising processing efficiency. These advancements provide a robust solution for high-security quantum communication systems requiring multi-layered encryption architectures. Full article
(This article belongs to the Section Quantum Information)
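The Monobit Test quoted in the results is the NIST SP 800-22 frequency test: map bits to ±1, sum them, and the p-value is erfc(|S|/sqrt(2n)). A minimal sketch (the AFCA post-processing itself is not reproduced):

```python
import math
import random

def monobit_pvalue(bits):
    """NIST SP 800-22 frequency (monobit) test p-value."""
    n = len(bits)
    s = sum(2 * b - 1 for b in bits)           # 0/1 -> -1/+1, then sum
    return math.erfc(abs(s) / math.sqrt(2 * n))

random.seed(0)
bits = [random.getrandbits(1) for _ in range(10_000)]
print(monobit_pvalue(bits))   # well above 0.01 for balanced output
```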
17 pages, 386 KiB  
Article
A Horizon-as-Apparatus Model That Reproduces Black Hole Thermodynamics
by Daegene Song
Entropy 2025, 27(8), 859; https://doi.org/10.3390/e27080859 - 14 Aug 2025
Abstract
We present a measurement-driven model in which the black hole horizon functions as a classical apparatus, with Planck-scale patches acting as detectors for quantum field modes. This approach reproduces the Bekenstein–Hawking area law $S_{\mathrm{BH}} = A/(4\ell_p^2)$ and provides a concrete statistical interpretation of the 1/4 factor, while adhering to established principles rather than deriving the entropy anew from first principles. Each patch generates a thermal ensemble (∼0.25 nat per mode), and summing over area-scaling patches yields the total entropy. Quantum simulations incorporating a realistic Hawking spectrum produce $S_k = 0.257$ nat (3% above 0.25 nat), and we outline testable predictions for analogue systems. Our main contribution is the horizon-as-apparatus mechanism and its information-theoretic bookkeeping. Full article
(This article belongs to the Special Issue Coarse and Fine-Grained Aspects of Gravitational Entropy)
23 pages, 418 KiB  
Article
Robust Stability and Robust Stabilization of Discrete-Time Markov Jump Linear Systems Under a Class of Stochastic Structured Nonlinear Uncertainties
by Vasile Dragan and Samir Aberkane
Entropy 2025, 27(8), 858; https://doi.org/10.3390/e27080858 - 13 Aug 2025
Abstract
Robust stability/stabilization for discrete-time time-varying Markovian jump linear systems subject to block-diagonal stochastic parameter perturbations is addressed in this paper. Using a scaling technique, we succeed in effectively addressing the multi-perturbations case. We obtain an estimation of the lower bound of the stability radius in terms of the unique bounded and positive semidefinite solutions of adequately defined parameterized backward Lyapunov difference equations. In the time-invariant case, we show that such a lower bound is actually the exact value of the stability radius. Using the obtained result, we effectively address the state-feedback robust stabilization problem. Full article
(This article belongs to the Special Issue Information Theory in Control Systems, 2nd Edition)
13 pages, 662 KiB  
Article
Phase-Space Approach for Topological Phase Transitions in Silicene
by Maciej Kalka, Piotr Pigoń and Bartłomiej J. Spisak
Entropy 2025, 27(8), 857; https://doi.org/10.3390/e27080857 - 12 Aug 2025
Abstract
Silicene is a two-dimensional silicon monolayer with a band gap caused by relatively strong spin–orbit coupling. This band gap can be steered using a vertical electric field. In turn, the change in this electric field value leads to a transition from a topological insulator to a bulk insulator regime. This study aims to develop a phase-space approach to detecting the topological phase transitions in silicene induced by the presence of parallel magnetic and electric fields with the aid of the concept of topological quantum number based on the Wigner–Rényi entropy. A reinterpreted definition of the Wigner distribution function is employed to determine this indicator. The topological phase transition in silicene as a function of the electric field in the presence of the magnetic field is confirmed through the use of the topological quantum number determined for the one-half, Shannon and collision entropies. Full article
(This article belongs to the Section Statistical Physics)
19 pages, 1029 KiB  
Article
Scaling Invariance: A Gateway to Phase Transitions
by Edson Denis Leonel
Entropy 2025, 27(8), 856; https://doi.org/10.3390/e27080856 - 11 Aug 2025
Abstract
We explore the concept of scaling invariance in a class of dynamical systems that undergo a transition from regularity to chaos. The systems are described by a two-dimensional, nonlinear mapping that preserves area in phase space. The key variables are the action and the angle, as is usual for Hamiltonian systems. The transition is governed by a control parameter that determines the form of the order parameter. We observe a scaling invariance in the average squared action within the chaotic region, providing evidence that this change from regularity (integrability) to chaos (non-integrability) is akin to a second-order, or continuous, phase transition. As the order parameter approaches zero, its response to variation of the control parameter (susceptibility) becomes increasingly pronounced (indeed, diverging), resembling a phase transition. Full article
24 pages, 1233 KiB  
Article
DRL-Based Scheduling for AoI Minimization in CR Networks with Perfect Sensing
by Juan Sun, Shubin Zhang and Xinjie Yu
Entropy 2025, 27(8), 855; https://doi.org/10.3390/e27080855 - 11 Aug 2025
Abstract
Age of Information (AoI) is a newly introduced metric that quantifies the freshness and timeliness of data, playing a crucial role in applications reliant on time-sensitive information. Minimizing AoI through optimal scheduling is challenging, especially in energy-constrained Internet of Things (IoT) networks. In this work, we begin by analyzing a simplified cognitive radio network (CRN) in which a single secondary user (SU) harvests RF energy from the primary user (PU) and transmits status update packets when the PU spectrum is available. Time is divided into equal slots, and in each slot the SU performs either energy harvesting, spectrum sensing, or status update transmission. To optimize the AoI within the CRN, we formulate the sequential decision-making process as a partially observable Markov decision process (POMDP) and employ dynamic programming to determine optimal actions. We then extend our investigation to the long-term average weighted sum of AoIs in a multi-SU CRN. Unlike the single-SU scenario, decisions must be made regarding which SU performs sensing and which SU forwards the status update packets. Given the partially observable nature of the PU spectrum, we propose an enhanced Deep Q-Network (DQN) algorithm. Simulation results demonstrate that the proposed policies significantly outperform the myopic policy. Additionally, we analyze the effect of various parameter settings on system performance. Full article
(This article belongs to the Section Information Theory, Probability and Statistics)
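The AoI metric being minimized obeys a simple slotted recursion: the age grows by one each slot and resets when a fresh update is delivered. The toy simulation below illustrates the metric only; it replaces the paper's POMDP/DQN scheduling with a fixed delivery probability, which is an assumption for illustration:

```python
import random

def simulate_aoi(slots=100_000, p_success=0.6, seed=0):
    """Toy slotted AoI model: age += 1 per slot; resets to 1 on delivery."""
    rng = random.Random(seed)
    age, total = 1, 0
    for _ in range(slots):
        age = 1 if rng.random() < p_success else age + 1
        total += age
    return total / slots

print(simulate_aoi(p_success=0.6))   # average AoI falls as p_success rises
```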
22 pages, 370 KiB  
Article
Tight Bounds Between the Jensen–Shannon Divergence and the Minmax Divergence
by Arseniy Akopyan, Herbert Edelsbrunner, Žiga Virk and Hubert Wagner
Entropy 2025, 27(8), 854; https://doi.org/10.3390/e27080854 - 11 Aug 2025
Abstract
Motivated by questions arising at the intersection of information theory and geometry, we compare two dissimilarity measures between finite categorical distributions. One is the well-known Jensen–Shannon divergence, which is easy to compute and whose square root is a proper metric. The other is what we call the minmax divergence, which is harder to compute. Just like the Jensen–Shannon divergence, it arises naturally from the Kullback–Leibler divergence. The main contribution of this paper is a proof showing that the minmax divergence can be tightly approximated by the Jensen–Shannon divergence. The bounds suggest that the square root of the minmax divergence is a metric, and we prove that this is indeed true in the one-dimensional case. The general case remains open. Finally, we consider analogous questions in the context of another Bregman divergence and the corresponding Burbea–Rao (Jensen–Bregman) divergence. Full article
(This article belongs to the Section Information Theory, Probability and Statistics)
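For reference, the Jensen-Shannon divergence being bounded here has the closed form JSD(P, Q) = H((P+Q)/2) - (H(P) + H(Q))/2, and its square root is a proper metric. A minimal sketch:

```python
import numpy as np

def shannon_entropy(p):
    p = p[p > 0]                    # convention: 0 * log(0) = 0
    return float(-(p * np.log2(p)).sum())

def jensen_shannon(p, q):
    """JSD in bits between two finite categorical distributions."""
    p, q = np.asarray(p, float), np.asarray(q, float)
    m = (p + q) / 2.0
    return shannon_entropy(m) - (shannon_entropy(p) + shannon_entropy(q)) / 2.0

p = np.array([0.5, 0.3, 0.2])
q = np.array([0.1, 0.4, 0.5])
d = jensen_shannon(p, q)
print(d, np.sqrt(d))               # sqrt(JSD) is the metric form
```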
20 pages, 1876 KiB  
Article
Efficient AES Side-Channel Attacks Based on Residual Mamba Enhanced CNN
by Zhaobin Li, Chenchong Du and Xiaoyi Duan
Entropy 2025, 27(8), 853; https://doi.org/10.3390/e27080853 - 11 Aug 2025
Abstract
With the continuous advancement of side-channel attacks (SCA), deep learning-based methods have emerged as a prominent research focus due to their powerful feature extraction and nonlinear modeling capabilities. Traditional convolutional neural networks (CNNs) excel at capturing local temporal dependencies but struggle to model long-range sequential information effectively, limiting attack efficiency and generalization. In this paper, we propose a hybrid deep neural network architecture that integrates Residual Mamba blocks with multi-layer perceptrons (MLP) to enhance the modeling of side-channel information from AES implementations. The Residual Mamba module leverages state-space modeling to capture long-range dependencies, improving the model’s global temporal perception, while the MLP module further fuses high-dimensional features. Experiments conducted on the publicly available ASCAD dataset targeting the second byte of AES demonstrate that our model achieves guessing entropy (GE) rank 1 with fewer than 100 attack traces, significantly outperforming traditional CNNs and recent Transformer-based models. The proposed approach exhibits fast convergence and high attack efficiency, offering an effective new paradigm for deep learning in side-channel analysis with important theoretical and practical implications. Full article
(This article belongs to the Section Information Theory, Probability and Statistics)
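Guessing entropy, the success metric reported above, is the mean rank of the correct key across repeated attacks. A minimal sketch with synthetic per-key scores (hypothetical data; the proposed network is not reproduced):

```python
import numpy as np

def guessing_entropy(scores, true_key):
    """Mean rank of the correct key; scores is a (runs, 256) array of
    accumulated per-key-candidate scores, higher = more likely."""
    ranks = []
    for run in scores:
        order = np.argsort(run)[::-1]                  # best candidate first
        ranks.append(int(np.where(order == true_key)[0][0]) + 1)
    return float(np.mean(ranks))                       # GE = 1: key recovered

rng = np.random.default_rng(0)
scores = rng.normal(0.0, 1.0, (50, 256))
scores[:, 42] += 3.0                                   # leakage favors 0x2A
print(guessing_entropy(scores, true_key=42))
```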
19 pages, 8180 KiB  
Article
Weighted Color Image Encryption Algorithm Based on RNA Extended Dynamic Coding and Quantum Chaotic System
by Xiangyu Zhang, Heping Wen, Wei Feng, Shenghao Kang, Zhiyu Xie, Xuexi Zhang and Yiting Lin
Entropy 2025, 27(8), 852; https://doi.org/10.3390/e27080852 - 11 Aug 2025
Abstract
The rapid development of Internet technology, while providing convenient services for users, has also aroused deep concern among the public about the issue of privacy leakage during image data transmission. To address this situation, this article proposes a color image encryption algorithm based on RNA extended dynamic coding and quantum chaos (CIEA-RQ). This algorithm significantly improves the ability of the system to withstand cryptographic attacks by introducing RNA extended dynamic encoding with 384 encoding rules. The employed quantum chaotic map improves the randomness of chaotic sequences and increases the key space. First, the algorithm decomposes the plaintext image into bit planes and obtains two parts, high 4-bit and low 4-bit planes, based on different weights of information. Then, the high 4-bit planes are partitioned into blocks and scrambled, and the scrambled planes are confused using RNA extended coding rules. Meanwhile, the low 4-bit planes employ a lightweight XOR operation to improve encryption efficiency. Finally, the algorithm performs cross-iterative diffusion on the processed high 4-bit and low 4-bit planes and then synthesizes a color ciphertext image. Experimental simulations and security assessments demonstrate the superior numerical statistical outcomes of the CIEA-RQ. According to the criteria of cryptanalysis, it can effectively resist known-plaintext attacks and chosen-plaintext attacks. Therefore, the CIEA-RQ presented in this article serves as an efficient digital image privacy safeguard technique, promising extensive applications in image secure transmission for the upcoming generation of networks. Full article
(This article belongs to the Section Multidisciplinary Applications)
11 pages, 1243 KiB  
Article
Fast and Robust Optical Cooling via Shortcut to Adiabaticity
by Zhiyu Wang and Jie Lu
Entropy 2025, 27(8), 851; https://doi.org/10.3390/e27080851 - 11 Aug 2025
Abstract
Optical cooling is a key technique for preparing ultracold atoms in quantum technologies and precision experiments. We employ shortcut-to-adiabaticity (STA) techniques to accelerate and stabilize laser-based atomic cooling protocols. This approach improves the performance of conventional adiabatic momentum transfer schemes by addressing key limitations such as Doppler shifts, laser intensity fluctuations, and spontaneous emission. We first examine two- and three-level atomic systems subjected to counter-propagating laser pulses that induce momentum reduction through photon recoil. STA methods are then employed to construct pulse sequences that are robust against detuning errors and amplitude noise, outperforming standard π-pulse schemes in resilience. Meanwhile, we analyze the dissipative dynamics during the momentum transfer and demonstrate the superiority of the STA protocol in enhancing momentum transfer efficiency via accelerated control. The results demonstrate that STA can significantly improve both the efficiency and robustness of cooling. These findings have implications for applications in atomic physics, quantum information processing, and precision metrology. Full article
(This article belongs to the Special Issue Shortcut to Adiabaticity in Classical and Quantum Systems)
20 pages, 1350 KiB  
Article
Beyond the Second Law: Darwinian Evolution as a Tendency for Entropy Production to Increase
by Charles H. Lineweaver
Entropy 2025, 27(8), 850; https://doi.org/10.3390/e27080850 - 11 Aug 2025
Abstract
There is much confusion about the apparent opposition between Darwinian evolution and the second law of thermodynamics. Both entropy and entropy production play more fundamental roles in the origin of life and Darwinian evolution than is generally recognized. I argue that Darwinian evolution can be understood as a tendency for entropy production to increase. Since the second law is about the increase in entropy, this hypothesis goes beyond the second law because it is about the increase in entropy production. This hypothesis can explain some aspects of biology that Darwinism struggles with, such as the origin of life, the origin of Darwinism, ecological successions, and an apparent general trend towards biological complexity. Gould proposed a wall of minimal complexity to explain this apparent increase in biological complexity. I argue that the apparent increase in biological complexity can be understood as a tendency for biological entropy production to increase through a broader range of free energy transduction mechanisms. In the context of a simple universe-in-a-cup-of-coffee model, entropy production is proposed as a more quantifiable replacement for the notion of complexity. Finally, I sketch the cosmic history of entropy production, which suggests that increases and decreases of free energy availability constrain the tendency for entropy production to increase. Full article