Entropy, Volume 26, Issue 6 (June 2024) – 98 articles

Cover Story (view full-size image): Entanglement engines are autonomous quantum thermal machines designed to generate entanglement from the presence of a particle current flowing through the device. In this work, we investigate the functioning of a two-qubit entanglement engine beyond the steady-state regime. Within a master equation approach, we derive the time-dependent state, the particle current, as well as the associated current correlation functions. Our findings establish a direct connection between coherence and the internal current, elucidating the existence of a critical current that serves as an indicator for entanglement in the steady state. View this paper
  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive table of contents of newly released issues.
  • Papers are published in both HTML and PDF forms; the PDF is the official version of record. To view a paper in PDF form, click the "PDF Full-text" link and open it with the free Adobe Reader.
21 pages, 14392 KiB  
Article
Entropy Fluctuations and Correlations in Compressible Turbulent Plane Channel Flow
by G. A. Gerolymos and I. Vallet
Entropy 2024, 26(6), 530; https://doi.org/10.3390/e26060530 - 20 Jun 2024
Viewed by 409
Abstract
The thermodynamic turbulence structure of compressible aerodynamic flows is often characterised by the correlation coefficient of entropy with pressure or temperature. We study entropy fluctuations s′ and their correlations with the fluctuations of the other thermodynamic variables in compressible turbulent plane channel flow using DNS data. We investigate the influence of the HCB (Huang–Coleman–Bradshaw) friction Reynolds number (100 ≤ Re_τ ≤ 1000) and of the centreline Mach number (0.3 ≤ M̄_CLx ≤ 2.5) on the magnitude and location of the peak of the root-mean-square s′_rms. The complete series expansions of s′ with respect to the fluctuations of the basic thermodynamic variables (pressure p′, density ρ′ and temperature T′) are calculated for the general case of variable heat-capacity c_p(T) thermodynamics. The correlation coefficients of s′ with the fluctuations of the basic thermodynamic quantities (c_{s′p′}, c_{s′ρ′}, c_{s′T′}), for varying (Re_τ, M̄_CLx), are studied. Insight into these correlations is provided by considering the probability density function (PDF) of s′ and its joint PDFs with the other thermodynamic variables. Full article
(This article belongs to the Section Thermodynamics)
15 pages, 3929 KiB  
Article
Coreference Resolution Based on High-Dimensional Multi-Scale Information
by Yu Wang, Zenghui Ding, Tao Wang, Shu Xu, Xianjun Yang and Yining Sun
Entropy 2024, 26(6), 529; https://doi.org/10.3390/e26060529 - 19 Jun 2024
Viewed by 372
Abstract
Coreference resolution is a key task in Natural Language Processing. It is difficult to evaluate the similarity of long-span texts, which makes text-level encoding somewhat challenging. This paper first compares the impact of commonly used methods to improve the global information collection ability of the model on the BERT encoding performance. Based on this, a multi-scale context information module is designed to improve the applicability of the BERT encoding model under different text spans. In addition, linear separability is improved through dimension expansion. Finally, cross-entropy loss is used as the loss function. After adding the module designed in this article to BERT and SpanBERT, F1 increased by 0.5% and 0.2%, respectively. Full article
(This article belongs to the Special Issue Natural Language Processing and Data Mining)
16 pages, 468 KiB  
Article
Measuring the Subjective Passage of Time: A Sociophysics Modeling
by Serge Galam
Entropy 2024, 26(6), 528; https://doi.org/10.3390/e26060528 - 19 Jun 2024
Viewed by 357
Abstract
A simple model is built to evaluate quantitatively the individual feeling of the passage of time using a sociophysics approach. Given an objective unit of time like the year, I introduce an individualized mirror-subjective counterpart, which is inversely proportional to the number of objective units of time already experienced by a person. An associated duration of time is then calculated. Past and future individual horizons are also defined together with a subjective speed of time. Furthermore, I rescale the subjective unit of time by activating additional clocks connected to ritualized socializations, which mark and shape the specific times of an individual throughout their life. The model shows that without any ritual socialization, an individual perceives their anticipated life as infinite via a “soft” infinity. The past horizon is also perceived at infinity but with a “hard” infinity. However, the price for the first ritualized socialization is to exit eternity in terms of the anticipated future with the simultaneous reward of experiencing a finite moment of infinity analogous to that related to birth. I then extend the model using a power law of the number of past objective units of time to mitigate the phenomenon of shrinking of time. The findings are sound and recover common feelings about the passage of time over a lifetime; in particular, they recover the fact that time passes more quickly with aging, with a concomitant slowing down of the speed of time. Full article
(This article belongs to the Section Complexity)
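The inverse-proportionality idea in the abstract can be sketched numerically. This is an illustrative toy, not the paper's exact formulation: we assume a subjective length u(n) = 1/n for the n-th objective year, so accumulated subjective time grows only logarithmically and later decades feel shorter. The function names are ours.

```python
def subjective_year(n):
    """Toy subjective length of the n-th year of life, assuming u(n) = 1/n
    (inversely proportional to the objective units already experienced)."""
    return 1.0 / n

def subjective_span(start, end):
    """Perceived duration of the objective ages [start, end)."""
    return sum(subjective_year(n) for n in range(start, end))

# Under this toy law, the decade from age 10 to 20 is perceived
# as far longer than the decade from age 50 to 60.
teens = subjective_span(10, 20)
fifties = subjective_span(50, 60)
```

Summing u(n) over a lifetime gives a harmonic series, which is why, absent the rescaling clocks the abstract describes, the anticipated future diverges only "softly".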
23 pages, 407 KiB  
Article
Time-Dependent Effective Hamiltonians for Light–Matter Interactions
by Aroaldo S. Santos, Pedro H. Pereira, Patrícia P. Abrantes, Carlos Farina, Paulo A. Maia Neto and Reinaldo de Melo e Souza
Entropy 2024, 26(6), 527; https://doi.org/10.3390/e26060527 - 19 Jun 2024
Viewed by 366
Abstract
In this paper, we present a systematic approach to building useful time-dependent effective Hamiltonians in molecular quantum electrodynamics. The method is based on considering part of the system as an open quantum system and choosing a convenient unitary transformation based on the evolution [...] Read more.
In this paper, we present a systematic approach to building useful time-dependent effective Hamiltonians in molecular quantum electrodynamics. The method is based on considering part of the system as an open quantum system and choosing a convenient unitary transformation based on the evolution operator. We illustrate our formalism by obtaining four Hamiltonians, each suitable to a different class of applications. We show that we may treat several effects of molecular quantum electrodynamics with a direct first-order perturbation theory. In addition, our effective Hamiltonians shed light on interesting physical aspects that are not explicit when employing more standard approaches. As applications, we discuss three examples: two-photon spontaneous emission, resonance energy transfer, and dispersion interactions. Full article
16 pages, 471 KiB  
Article
A Metric Based on the Efficient Determination Criterion
by Jesús E. García, Verónica A. González-López and Johsac I. Gomez Sanchez
Entropy 2024, 26(6), 526; https://doi.org/10.3390/e26060526 - 19 Jun 2024
Viewed by 327
Abstract
This paper extends the concept of metrics based on the Bayesian information criterion (BIC) to achieve strongly consistent estimation of partition Markov models (PMMs). We introduce a set of metrics drawn from the family of model selection criteria known as efficient determination criteria (EDC). This generalization extends the range of options available in BIC for penalizing the number of model parameters. We formally specify the relationship that determines how EDC works when selecting a model based on a threshold associated with the metric. Furthermore, we improve the penalty options within EDC, identifying the penalty ln(ln(n)) as a viable choice that maintains the strongly consistent estimation of a PMM. To demonstrate the utility of these new metrics, we apply them to the modeling of three DNA sequences of dengue virus type 3, endemic in Brazil in 2023. Full article
(This article belongs to the Special Issue Bayesianism)
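For intuition, the growth rates of the two penalties can be compared directly. The snippet below is a sketch under the usual convention that BIC penalizes each free parameter by (1/2)ln(n), against the ln(ln(n)) choice highlighted in the abstract; the function names are ours.

```python
import math

def bic_penalty(n):
    """Per-parameter BIC penalty, (1/2) ln(n)."""
    return 0.5 * math.log(n)

def edc_penalty(n):
    """Per-parameter penalty ln(ln(n)) from the EDC family."""
    return math.log(math.log(n))

# The EDC penalty grows far more slowly with sample size n,
# so it penalizes extra parameters much more gently.
for n in (10**3, 10**6, 10**9):
    print(n, round(bic_penalty(n), 3), round(edc_penalty(n), 3))
```

A milder penalty admits richer models at a given sample size; the paper's contribution is showing that this particular slow-growing choice still yields strongly consistent estimation of a PMM.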
20 pages, 2272 KiB  
Article
Design of a Robust Synchronization-Based Topology Observer for Complex Delayed Networks with Fixed and Adaptive Coupling Strength
by Yanqin Sun, Huaiyu Wu, Zhihuan Chen, Yang Chen and Xiujuan Zheng
Entropy 2024, 26(6), 525; https://doi.org/10.3390/e26060525 - 18 Jun 2024
Viewed by 326
Abstract
Network topology plays a key role in determining the characteristics and dynamical behaviors of a network. But in practice, network topology is sometimes hidden or uncertain ahead of time because of network complexity. In this paper, a robust-synchronization-based topology observer (STO) is proposed and applied to solve the problem of identifying the topology of complex delayed networks (TICDNs). In comparison to the existing literature, the proposed STO does not require any prior knowledge about the range of topological parameters and does not have strict limits on topology type. Furthermore, the proposed STO is suitable not only for networks with fixed coupling strength, but also for networks with adaptive coupling strength. Finally, a few comparison examples for TICDNs are used to verify the feasibility and efficiency of the proposed STO, and the results show that the proposed STO outperforms the other methods. Full article
(This article belongs to the Section Complexity)
23 pages, 2186 KiB  
Article
Effect of Private Deliberation: Deception of Large Language Models in Game Play
by Kristijan Poje, Mario Brcic, Mihael Kovac and Marina Bagic Babac
Entropy 2024, 26(6), 524; https://doi.org/10.3390/e26060524 - 18 Jun 2024
Viewed by 577
Abstract
Integrating large language model (LLM) agents within game theory demonstrates their ability to replicate human-like behaviors through strategic decision making. In this paper, we introduce an augmented LLM agent, called the private agent, which engages in private deliberation and employs deception in repeated games. Utilizing the partially observable stochastic game (POSG) framework and incorporating in-context learning (ICL) and chain-of-thought (CoT) prompting, we investigated the private agent’s proficiency in both competitive and cooperative scenarios. Our empirical analysis demonstrated that the private agent consistently achieved higher long-term payoffs than its baseline counterpart and performed similarly or better in various game settings. However, we also found inherent deficiencies of LLMs in certain algorithmic capabilities crucial for high-quality decision making in games. These findings highlight the potential for enhancing LLM agents’ performance in multi-player games using information-theoretic approaches of deception and communication with complex environments. Full article
(This article belongs to the Section Information Theory, Probability and Statistics)
2 pages, 146 KiB  
Correction
Correction: Sandlersky et al. Multispectral Remote Sensing Data Application in Modelling Non-Extensive Tsallis Thermodynamics for Mountain Forests in Northern Mongolia. Entropy 2023, 25, 1653
by Robert Sandlersky, Nataliya Petrzhik, Tushigma Jargalsaikhan and Ivan Shironiya
Entropy 2024, 26(6), 523; https://doi.org/10.3390/e26060523 - 18 Jun 2024
Viewed by 265
Abstract
There were some errors in the original publication [...] Full article
(This article belongs to the Section Entropy and Biology)
34 pages, 10834 KiB  
Article
Unambiguous Models and Machine Learning Strategies for Anomalous Extreme Events in Turbulent Dynamical System
by Di Qi
Entropy 2024, 26(6), 522; https://doi.org/10.3390/e26060522 - 17 Jun 2024
Viewed by 350
Abstract
Data-driven modeling methods are studied for turbulent dynamical systems with extreme events under an unambiguous model framework. New neural network architectures are proposed to effectively learn the key dynamical mechanisms including the multiscale coupling and strong instability, and gain robust skill for long-time prediction resistive to the accumulated model errors from the data-driven approximation. The machine learning model overcomes the inherent limitations in traditional long short-time memory networks by exploiting a conditional Gaussian structure informed of the essential physical dynamics. The model performance is demonstrated under a prototype model from idealized geophysical flow and passive tracers, which exhibits analytical solutions with representative statistical features. Many attractive properties are found in the trained model in recovering the hidden dynamics using a limited dataset and sparse observation time, showing uniformly high skill with persistent numerical stability in predicting both the trajectory and statistical solutions among different statistical regimes away from the training regime. The model framework is promising to be applied to a wider class of turbulent systems with complex structures. Full article
(This article belongs to the Special Issue An Information-Theoretical Perspective on Complex Dynamical Systems)
15 pages, 3678 KiB  
Article
Tsallis Entropy-Based Complexity-IPE Casualty Plane: A Novel Method for Complex Time Series Analysis
by Zhe Chen, Changling Wu, Junyi Wang and Hongbing Qiu
Entropy 2024, 26(6), 521; https://doi.org/10.3390/e26060521 - 17 Jun 2024
Viewed by 379
Abstract
Due to its capacity to unveil the dynamic characteristics of time series data, entropy has attracted growing interest. However, traditional entropy feature extraction methods, such as permutation entropy, fall short in concurrently considering both the absolute amplitude information of signals and the temporal correlation between sample points. Consequently, this limitation leads to inadequate differentiation among different time series and susceptibility to noise interference. In order to augment the discriminative power and noise robustness of entropy features in time series analysis, this paper introduces a novel method called Tsallis entropy-based complexity-improved permutation entropy casualty plane (TC-IPE-CP). TC-IPE-CP adopts a novel symbolization approach that preserves both absolute amplitude information and inter-point correlations within sequences, thereby enhancing feature separability and noise resilience. Additionally, by incorporating Tsallis entropy and weighting the probability distribution with parameter q, it integrates with statistical complexity to establish a feature plane of complexity and entropy, further enriching signal features. Through the integration of multiscale algorithms, a multiscale Tsallis-improved permutation entropy algorithm is also developed. The simulation results indicate that TC-IPE-CP requires a small amount of data, exhibits strong noise resistance, and possesses high separability for signals. When applied to the analysis of heart rate signals, fault diagnosis, and underwater acoustic signal recognition, experimental findings demonstrate that TC-IPE-CP can accurately differentiate between electrocardiographic signals of elderly and young subjects, achieve precise bearing fault diagnosis, and identify four types of underwater targets. 
Particularly in underwater acoustic signal recognition experiments, TC-IPE-CP achieves a recognition rate of 96.67%, surpassing the well-known multi-scale dispersion entropy and multi-scale permutation entropy by 7.34% and 19.17%, respectively. This suggests that TC-IPE-CP is highly suitable for the analysis of complex time series. Full article
(This article belongs to the Special Issue Ordinal Pattern-Based Entropies: New Ideas and Challenges)
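The amplitude-blindness of plain permutation entropy, which TC-IPE-CP is designed to overcome, is easy to demonstrate. The sketch below is a textbook ordinal-pattern implementation, not the paper's TC-IPE-CP algorithm: rescaling a series leaves its ordinal patterns, and hence its permutation entropy, unchanged.

```python
import math
from collections import Counter

def permutation_entropy(x, m=3):
    """Normalized permutation entropy of order m: the Shannon entropy of the
    ordinal patterns of consecutive length-m windows, divided by ln(m!).
    Absolute amplitudes are discarded -- only the ranking within each
    window matters, which is the limitation discussed in the abstract."""
    patterns = Counter()
    for i in range(len(x) - m + 1):
        window = x[i:i + m]
        patterns[tuple(sorted(range(m), key=lambda j: window[j]))] += 1
    total = sum(patterns.values())
    h = -sum(c / total * math.log(c / total) for c in patterns.values())
    return h / math.log(math.factorial(m))

series = [4, 7, 9, 10, 6, 11, 3, 5, 8, 2]
scaled = [100 * v for v in series]  # same ordinal structure, very different amplitudes
```

Because `series` and `scaled` generate identical ordinal patterns, the estimator cannot tell them apart; preserving amplitude information alongside the ordinal structure is precisely what the symbolization in TC-IPE-CP adds.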
13 pages, 428 KiB  
Article
A Possibilistic Formulation of Autonomous Search for Targets
by Zhijin Chen, Branko Ristic and Du Yong Kim
Entropy 2024, 26(6), 520; https://doi.org/10.3390/e26060520 - 17 Jun 2024
Viewed by 337
Abstract
Autonomous search is an ongoing cycle of sensing, statistical estimation, and motion control with the objective to find and localise targets in a designated search area. Traditionally, the theoretical framework for autonomous search combines sequential Bayesian estimation with information theoretic motion control. This paper formulates autonomous search in the framework of possibility theory. Although the possibilistic formulation is slightly more involved than the traditional method, it provides a means for quantitative modelling and reasoning in the presence of epistemic uncertainty. This feature is demonstrated in the paper in the context of partially known probability of detection, expressed as an interval value. The paper presents an elegant Bayes-like solution to sequential estimation, with the reward function for motion control defined to take into account the epistemic uncertainty. The advantages of the proposed search algorithm are demonstrated by numerical simulations. Full article
(This article belongs to the Special Issue Advances in Uncertain Information Fusion)
21 pages, 561 KiB  
Article
Comparative Analysis of Deterministic and Nondeterministic Decision Trees for Decision Tables from Closed Classes
by Azimkhon Ostonov and Mikhail Moshkov
Entropy 2024, 26(6), 519; https://doi.org/10.3390/e26060519 - 17 Jun 2024
Cited by 1 | Viewed by 322
Abstract
In this paper, we consider classes of decision tables with many-valued decisions closed under operations of the removal of columns, the changing of decisions, the permutation of columns, and the duplication of columns. We study relationships among three parameters of these tables: the complexity of a decision table (if we consider the depth of the decision trees, then the complexity of a decision table is the number of columns in it), the minimum complexity of a deterministic decision tree, and the minimum complexity of a nondeterministic decision tree. We consider a rough classification of functions characterizing these relationships and enumerate all seven possible types of relationships. Full article
(This article belongs to the Section Information Theory, Probability and Statistics)
12 pages, 845 KiB  
Article
The Universal Optimism of the Self-Evidencing Mind
by Elizabeth L. Fisher and Jakob Hohwy
Entropy 2024, 26(6), 518; https://doi.org/10.3390/e26060518 - 17 Jun 2024
Viewed by 563
Abstract
Karl Friston’s free-energy principle casts agents as self-evidencing through active inference. This implies that decision-making, planning and information-seeking are, in a generic sense, ‘wishful’. We take an interdisciplinary perspective on this perplexing aspect of the free-energy principle and unpack the epistemological implications of wishful thinking under the free-energy principle. We use this epistemic framing to discuss the emergence of biases for self-evidencing agents. In particular, we argue that this elucidates an optimism bias as a foundational tenet of self-evidencing. We allude to a historical precursor to some of these themes, interestingly found in Machiavelli’s oeuvre, to contextualise the universal optimism of the free-energy principle. Full article
34 pages, 23631 KiB  
Article
FFT-Based Probability Density Imaging of Euler Solutions
by Shujin Cao, Peng Chen, Guangyin Lu, Zhiyuan Ma, Bo Yang and Xinyue Chen
Entropy 2024, 26(6), 517; https://doi.org/10.3390/e26060517 - 15 Jun 2024
Viewed by 588
Abstract
When using traditional Euler deconvolution optimization strategies, it is difficult to distinguish between anomalies and their corresponding Euler tails (those solutions are often distributed outside the anomaly source, forming “tail”-shaped spurious solutions, i.e., misplaced Euler solutions, which must be removed or marked) with only the structural index. The nonparametric estimation method based on the normalized B-spline probability density (BSS) is used to separate the Euler solution clusters and mark different anomaly sources according to the similarity and density characteristics of the Euler solutions. For display purposes, the BSS needs to map the samples onto the estimation grid at the points where density will be estimated in order to obtain the probability density distribution. However, if the size of the samples or the estimation grid is too large, this process can lead to high levels of memory consumption and excessive computation times. To address this issue, a fast linear binning approximation algorithm is introduced in the BSS to speed up the computation process and save time. Subsequently, the sample data are quickly projected onto the estimation grid to facilitate the discrete convolution between the grid and the density function using a fast Fourier transform. A method involving multivariate B-spline probability density estimation based on the FFT (BSSFFT), in conjunction with fast linear binning approximation, is proposed in this paper. The results of two random normal distributions show the correctness of the BSS and BSSFFT algorithms, which is verified via a comparison with the true probability density function (pdf) and Gaussian kernel smoothing estimation algorithms. Then, the Euler solutions of the two synthetic models are analyzed using the BSS and BSSFFT algorithms. The results are consistent with their theoretical values, which verify their correctness regarding Euler solutions. Finally, the BSSFFT is applied to Bishop 5X data, and the numerical results show that the comprehensive analysis of the 3D probability density distributions using the BSSFFT algorithm, derived from the Euler solution subset (x0, y0, z0), can effectively separate and locate adjacent anomaly sources, demonstrating strong adaptability. Full article
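The binning-plus-FFT speed-up described in the abstract can be sketched in one dimension with a plain Gaussian kernel (the paper uses a normalized B-spline density instead; the function name and parameters here are illustrative). Each sample's unit weight is split linearly between its two neighbouring grid nodes, and the binned counts are then convolved with the kernel in a single circular FFT pass rather than an O(n²) direct sum.

```python
import numpy as np

def fft_binned_kde(samples, grid_min, grid_max, n_grid=512, bandwidth=0.3):
    """Gaussian kernel density estimate on a regular grid via linear binning
    plus circular FFT convolution. Samples are assumed to lie inside
    [grid_min, grid_max]."""
    grid = np.linspace(grid_min, grid_max, n_grid)
    dx = grid[1] - grid[0]
    # Linear binning: split each sample's unit weight between its two
    # neighbouring grid nodes, proportionally to proximity.
    pos = (np.asarray(samples, dtype=float) - grid_min) / dx
    idx = np.clip(pos.astype(int), 0, n_grid - 2)
    frac = pos - idx
    counts = np.zeros(n_grid)
    np.add.at(counts, idx, 1.0 - frac)
    np.add.at(counts, idx + 1, frac)
    # Gaussian kernel sampled on the grid, normalised so sum(kernel)*dx = 1.
    offsets = (np.arange(n_grid) - n_grid // 2) * dx
    kernel = np.exp(-0.5 * (offsets / bandwidth) ** 2)
    kernel /= kernel.sum() * dx
    # One circular convolution of binned counts with the kernel via FFT.
    density = np.fft.ifft(np.fft.fft(counts) * np.fft.fft(np.fft.ifftshift(kernel))).real
    return grid, density / len(samples)

rng = np.random.default_rng(0)
grid, density = fft_binned_kde(rng.normal(0.0, 1.0, 2000), -8.0, 8.0)
```

The binning step costs O(N), and the convolution O(n log n) in the grid size n, which is the source of the memory and runtime savings the abstract claims over mapping every sample onto every grid node.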
18 pages, 482 KiB  
Article
Dual-Tower Counterfactual Session-Aware Recommender System
by Wenzhuo Song and Xiaoyu Xing
Entropy 2024, 26(6), 516; https://doi.org/10.3390/e26060516 - 14 Jun 2024
Viewed by 374
Abstract
In the complex dynamics of modern information systems such as e-commerce and streaming services, managing uncertainty and leveraging information theory are crucial in enhancing session-aware recommender systems (SARSs). This paper presents an innovative approach to SARSs that combines static long-term and dynamic short-term preferences within a counterfactual causal framework. Our method addresses the shortcomings of current prediction models that tend to capture spurious correlations, leading to biased recommendations. By incorporating a counterfactual viewpoint, we aim to elucidate the causal influences of static long-term preferences on next-item selections and enhance the overall robustness of predictive models. We introduce a dual-tower architecture with a novel data augmentation process and a self-supervised training strategy, tailored to tackle inherent biases and unreliable correlations. Extensive experiments demonstrate the effectiveness of our approach, outperforming existing benchmarks and paving the way for more accurate and reliable session-based recommendations. Full article
(This article belongs to the Section Complexity)
13 pages, 756 KiB  
Article
Underwater Wavelength Attack on Discrete Modulated Continuous-Variable Quantum Key Distribution
by Kangyi Feng, Yijun Wang, Yin Li, Yuang Wang, Zhiyue Zuo and Ying Guo
Entropy 2024, 26(6), 515; https://doi.org/10.3390/e26060515 - 14 Jun 2024
Viewed by 407
Abstract
The wavelength attack utilizes the dependence of beam splitters (BSs) on wavelength to cause legitimate users Alice and Bob to underestimate their excess noise so that Eve can steal more secret keys without being detected. Recently, the wavelength attack on Gaussian-modulated continuous-variable quantum key distribution (CV-QKD) has been researched in both fiber and atmospheric channels. However, the wavelength attack may also pose a threat to the case of ocean turbulent channels, which are vital for the secure communication of both ocean sensor networks and submarines. In this work, we propose two wavelength attack schemes on the underwater discrete modulated (DM) CV-QKD protocol, which are effective for the cases with and without a local oscillator (LO) intensity monitor, respectively. In terms of the transmittance properties of the fused biconical taper (FBT) BS, two sets of wavelengths are determined for Eve’s pulse manipulation, which are all located in the so-called blue–green band. The derived successful criterion shows that both attack schemes can control the estimated excess noise of Alice and Bob close to zero by selecting the corresponding condition parameters based on channel transmittance. Additionally, our numerical analysis shows that Eve can steal more bits when the wavelength attack controls the value of the estimated excess noise closer to zero. Full article
(This article belongs to the Special Issue Quantum Communications Networks: Trends and Challenges)
11 pages, 1561 KiB  
Article
A Symmetric Form of the Clausius Statement of the Second Law of Thermodynamics
by Ti-Wei Xue, Tian Zhao and Zeng-Yuan Guo
Entropy 2024, 26(6), 514; https://doi.org/10.3390/e26060514 - 14 Jun 2024
Viewed by 380
Abstract
Bridgman once reflected on thermodynamics that the laws of thermodynamics were formulated in their present form by the great founders of thermodynamics, Kelvin and Clausius, before all the essential physical facts were in, and there has been no adequate reexamination of the fundamentals since. Thermodynamics still has unknown possibilities waiting to be explored. This paper begins with a brief review of Clausius’s work on the second law of thermodynamics and a reassessment of the content of Clausius’s statement. The review shows that what Clausius originally referred to as the second law of thermodynamics was, in fact, the theorem of equivalence of transformations (TET) in a reversible cycle. On this basis, a new symmetric form of Clausius’s TET is proposed. This theorem says that the two transformations, i.e., the transformation of heat to work and the transformation of work from high pressure to low pressure, should be equivalent in a reversible work-to-heat cycle. New thermodynamic cyclic laws are developed on the basis of the cycle with two work reservoirs (two pressures), which enriches the foundations of the second law of thermodynamics. Full article
(This article belongs to the Special Issue Trends in the Second Law of Thermodynamics)
12 pages, 274 KiB  
Article
Building Test Batteries Based on Analyzing Random Number Generator Tests within the Framework of Algorithmic Information Theory
by Boris Ryabko
Entropy 2024, 26(6), 513; https://doi.org/10.3390/e26060513 - 14 Jun 2024
Viewed by 346
Abstract
The problem of testing random number generators is considered, and a new method for comparing the power of different statistical tests is proposed. It is based on the definitions of random sequences developed within the framework of algorithmic information theory and makes it possible to compare the power of different tests in cases where the available methods of mathematical statistics do not distinguish between them. In particular, it is shown that tests based on data compression methods using dictionaries should be included in test batteries. Full article
(This article belongs to the Special Issue Complexity, Entropy and the Physics of Information II)
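A sketch of the kind of compression-based test the abstract argues for, using Python's zlib (a dictionary/LZ77-based coder) as the compressor; the margin and sample size here are illustrative assumptions, not values from the paper. The idea follows the algorithmic-information view: a sequence that a dictionary compressor can shrink appreciably is not random.

```python
import os
import zlib

def compression_randomness_test(data: bytes, margin: int = 16) -> bool:
    """Crude randomness check: pass if a dictionary-based compressor
    (DEFLATE) cannot shrink `data` by more than `margin` bytes.
    Truly random bytes are incompressible on average, so significant
    compression is evidence of structure."""
    compressed = zlib.compress(data, level=9)
    return len(compressed) >= len(data) - margin

# Random bytes from the OS CSPRNG should pass ...
assert compression_randomness_test(os.urandom(4096))
# ... while a highly repetitive sequence should fail.
assert not compression_randomness_test(b"abab" * 1024)
```

A real battery would calibrate the rejection threshold to a chosen significance level rather than use a fixed byte margin.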
14 pages, 1090 KiB  
Article
New Quantum Private Comparison Using Four-Particle Cluster State
by Min Hou, Yue Wu and Shibin Zhang
Entropy 2024, 26(6), 512; https://doi.org/10.3390/e26060512 - 14 Jun 2024
Viewed by 323
Abstract
Quantum private comparison (QPC) enables two users to securely conduct private comparisons in a network characterized by mutual distrust while guaranteeing the confidentiality of their private inputs. Most previous QPC protocols were designed only to determine the equality of private information between two users, which constrained their scalability. In this paper, we propose a QPC protocol that leverages the entanglement correlation between particles in a four-particle cluster state. The protocol can compare the information of two groups of users within one protocol execution, with each group consisting of two users. A semi-honest third party (TP), who will not deviate from the protocol execution or conspire with any participant, assists the users in achieving the private comparison. Users encode their inputs into specific angles of rotational operations performed on the received quantum sequence, which is then sent back to TP. Security analysis shows that both external attacks and insider threats are ineffective at stealing private data. Finally, we compare our protocol with some previously proposed QPC protocols. Full article
(This article belongs to the Special Issue Entropy, Quantum Information and Entanglement)
29 pages, 4721 KiB  
Article
Exergoeconomic Analysis and Optimization of a Biomass Integrated Gasification Combined Cycle Based on Externally Fired Gas Turbine, Steam Rankine Cycle, Organic Rankine Cycle, and Absorption Refrigeration Cycle
by Jie Ren, Chen Xu, Zuoqin Qian, Weilong Huang and Baolin Wang
Entropy 2024, 26(6), 511; https://doi.org/10.3390/e26060511 - 12 Jun 2024
Viewed by 414
Abstract
Adopting biomass energy as an alternative to fossil fuels for electricity production presents a viable strategy for addressing the prevailing energy deficits and environmental concerns, although it faces challenges related to suboptimal energy efficiency. This study introduces a novel combined cooling and power (CCP) system, incorporating an externally fired gas turbine (EFGT), steam Rankine cycle (SRC), absorption refrigeration cycle (ARC), and organic Rankine cycle (ORC), aimed at boosting the efficiency of biomass integrated gasification combined cycle systems. Through the development of mathematical models, this research evaluates the system’s performance from both thermodynamic and exergoeconomic perspectives. Results show that the system can achieve a thermal efficiency of 70.67%, an exergy efficiency of 39.13%, and a levelized cost of exergy (LCOE) of 11.67 USD/GJ. The analysis identifies the combustion chamber of the EFGT as the component with the highest rate of exergy destruction. Parametric analysis indicates that improvements in thermodynamic performance are achievable with an increased air compressor pressure ratio and gas turbine inlet temperature or a reduced pinch point temperature difference, while the LCOE can be minimized through adjustments in these parameters. Optimized operating conditions demonstrate a potential 5.7% reduction in LCOE at the expense of a 2.5% decrease in exergy efficiency compared to the baseline scenario. Full article
(This article belongs to the Special Issue Thermodynamic Optimization of Industrial Energy Systems)
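For readers unfamiliar with the metric, the levelized cost of exergy is conventionally the ratio of the total cost rate to the product exergy rate; a generic sketch with illustrative symbols (this is the standard exergoeconomic form, not the paper's exact notation):

```latex
% Generic levelized cost of exergy (standard exergoeconomic form;
% symbols are illustrative, not the paper's notation):
\mathrm{LCOE} \;=\; \frac{\dot{Z}_{\mathrm{tot}} + \dot{C}_{\mathrm{fuel}}}
                         {\dot{E}x_{\mathrm{prod}}}
% \dot{Z}_{tot}: annualized capital and maintenance cost rate,
% \dot{C}_{fuel}: fuel (biomass) cost rate,
% \dot{E}x_{prod}: total exergy rate of the delivered products.
```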
16 pages, 597 KiB  
Article
A Bayesian Measure of Model Accuracy
by Gabriel Hideki Vatanabe Brunello and Eduardo Yoshio Nakano
Entropy 2024, 26(6), 510; https://doi.org/10.3390/e26060510 - 12 Jun 2024
Viewed by 406
Abstract
Ensuring that the proposed probabilistic model accurately represents the problem is a critical step in statistical modeling, as choosing a poorly fitting model can have significant repercussions on the decision-making process. The primary objective of statistical modeling often revolves around predicting new observations, highlighting the importance of assessing the model’s accuracy. However, current methods for evaluating predictive ability typically involve model comparison, which may not guarantee a good model selection. This work presents an accuracy measure designed for evaluating a model’s predictive capability. The measure, which is straightforward and easy to understand, includes a decision criterion for model rejection. The development of this proposal adopts a Bayesian perspective of inference, elucidating the underlying concepts and outlining the necessary procedures for application. To illustrate its utility, the proposed methodology was applied to real data, facilitating an assessment of its practicality in real-world scenarios. Full article
(This article belongs to the Section Multidisciplinary Applications)
15 pages, 374 KiB  
Article
Violations of Hyperscaling in Finite-Size Scaling above the Upper Critical Dimension
by A. Peter Young
Entropy 2024, 26(6), 509; https://doi.org/10.3390/e26060509 - 12 Jun 2024
Viewed by 359
Abstract
We consider how finite-size scaling (FSS) is modified above the upper critical dimension, du=4, due to hyperscaling violations, which in turn arise from a dangerous irrelevant variable. In addition to the commonly studied case of periodic boundary conditions, we also consider new effects that arise with free boundary conditions. Some numerical results are presented in addition to theoretical arguments. Full article
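As background for the abstract's claim, the standard modified FSS forms above $d_u=4$ with periodic boundary conditions (a textbook result for the Ising/$\varphi^4$ class, stated here as context rather than taken from the paper) are:

```latex
% Modified finite-size scaling above d_u = 4 (periodic boundary
% conditions, Ising/\varphi^4 class): the dangerous irrelevant
% variable replaces the naive combination t L^{1/\nu} by t L^{d/2},
% so that at criticality
\chi(t, L) \sim L^{d/2}\,\tilde{\chi}\!\left(t L^{d/2}\right),
\qquad \xi_L(T_c) \sim L^{d/4},
% instead of the hyperscaling forms \chi \sim L^{\gamma/\nu} and
% \xi_L \sim L that hold below d_u.
```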
12 pages, 996 KiB  
Article
Characterizing Complex Spatiotemporal Patterns from Entropy Measures
by Luan Orion Barauna, Rubens Andreas Sautter, Reinaldo Roberto Rosa, Erico Luiz Rempel and Alejandro C. Frery
Entropy 2024, 26(6), 508; https://doi.org/10.3390/e26060508 - 12 Jun 2024
Viewed by 432
Abstract
In addition to their importance in statistical thermodynamics, probabilistic entropy measures are crucial for understanding and analyzing complex systems, with diverse applications in time series and one-dimensional profiles. However, extending these methods to two- and three-dimensional data still requires further development. In this study, we present a new method for classifying spatiotemporal processes based on entropy measurements. To test and validate the method, we selected five classes of similar processes related to the evolution of random patterns: (i) white noise; (ii) red noise; (iii) weak reaction–diffusion turbulence; (iv) fully developed hydrodynamic turbulence; and (v) MHD plasma turbulence. Considering seven possible ways to measure entropy from a matrix, we present the method as a parameter space composed of the two measures that best separate the five selected classes. The results highlight the superior combined performance of Shannon permutation entropy (SHp) and a new approach based on Tsallis spectral permutation entropy (Sqs). Notably, our observations reveal the segregation of the reaction terms in this SHp×Sqs space, a result that identifies specific sectors for each class of dynamic process and can be used to train machine learning models for the automatic classification of complex spatiotemporal patterns. Full article
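As a pointer to the SHp ingredient named above, the Bandt–Pompe permutation entropy of a one-dimensional series can be sketched as follows; this is a minimal illustration, and the paper's seven measures operate on matrices, which this sketch does not cover:

```python
from collections import Counter
from math import factorial, log

def permutation_entropy(series, order=3):
    """Normalized Shannon permutation entropy (Bandt-Pompe):
    the entropy of the distribution of ordinal patterns of length
    `order`, divided by log(order!) so the result lies in [0, 1]."""
    patterns = Counter(
        tuple(sorted(range(order), key=lambda k: series[i + k]))
        for i in range(len(series) - order + 1)
    )
    total = sum(patterns.values())
    h = -sum((n / total) * log(n / total) for n in patterns.values())
    return h / log(factorial(order))

# A monotone ramp realizes a single ordinal pattern: entropy 0.
assert permutation_entropy(list(range(100))) == 0.0
# An alternating series mixes patterns: entropy strictly between 0 and 1.
assert 0.0 < permutation_entropy([0, 1] * 50) < 1.0
```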
24 pages, 7581 KiB  
Article
Fault Diagnosis of Wind Turbine Gearbox Based on Modified Hierarchical Fluctuation Dispersion Entropy of Tan-Sigmoid Mapping
by Xiang Wang and Yang Du
Entropy 2024, 26(6), 507; https://doi.org/10.3390/e26060507 - 11 Jun 2024
Viewed by 419
Abstract
Vibration monitoring and analysis are important methods in wind turbine gearbox fault diagnosis, and determining how to extract fault characteristics from the vibration signal is of primary importance. This paper presents a fault diagnosis approach based on modified hierarchical fluctuation dispersion entropy of tan-sigmoid mapping (MHFDE_TANSIG) and northern goshawk optimization–support vector machine (NGO–SVM) for wind turbine gearboxes. The tan-sigmoid (TANSIG) mapping function replaces the normal cumulative distribution function (NCDF) of the hierarchical fluctuation dispersion entropy (HFDE) method. Additionally, the hierarchical decomposition of the HFDE method is improved, resulting in the proposed MHFDE_TANSIG method. The vibration signals of wind turbine gearboxes are analyzed using the MHFDE_TANSIG method to extract fault features. The constructed fault feature set is used to intelligently recognize and classify the fault type of the gearboxes with the NGO–SVM classifier. The fault diagnosis methods based on MHFDE_TANSIG and NGO–SVM are applied to the experimental data analysis of gearboxes with different operating conditions. The results show that the fault diagnosis model proposed in this paper has the best performance with an average accuracy rate of 97.25%. Full article
(This article belongs to the Special Issue Entropy Applications in Condition Monitoring and Fault Diagnosis)
8 pages, 226 KiB  
Article
Multimodel Approaches Are Not the Best Way to Understand Multifactorial Systems
by Benjamin M. Bolker
Entropy 2024, 26(6), 506; https://doi.org/10.3390/e26060506 - 11 Jun 2024
Viewed by 402
Abstract
Information-theoretic (IT) and multi-model averaging (MMA) statistical approaches are widely used but suboptimal tools for pursuing a multifactorial approach (also known as the method of multiple working hypotheses) in ecology. (1) Conceptually, IT encourages ecologists to perform tests on sets of artificially simplified models. (2) MMA improves on IT model selection by implementing a simple form of shrinkage estimation (a way to make accurate predictions from a model with many parameters relative to the amount of data, by “shrinking” parameter estimates toward zero). However, other shrinkage estimators such as penalized regression or Bayesian hierarchical models with regularizing priors are more computationally efficient and better supported theoretically. (3) In general, the procedures for extracting confidence intervals from MMA are overconfident, providing overly narrow intervals. If researchers want to use limited data sets to accurately estimate the strength of multiple competing ecological processes along with reliable confidence intervals, the current best approach is to use full (maximal) statistical models (possibly with Bayesian priors) after making principled, a priori decisions about model complexity. Full article
23 pages, 6880 KiB  
Article
Code Similarity Prediction Model for Industrial Management Features Based on Graph Neural Networks
by Zhenhao Li, Hang Lei, Zhichao Ma and Fengyun Zhang
Entropy 2024, 26(6), 505; https://doi.org/10.3390/e26060505 - 9 Jun 2024
Viewed by 585
Abstract
The code of industrial management software typically features few system API calls and a high number of customized variables and structures. This makes the similarity of such code difficult to compute using text features or traditional neural network methods. In this paper, we propose an FSPS-GNN model, based on graph neural networks (GNNs), to address this problem. The model categorizes code features into two types, outer graph and inner graph, and conducts training and prediction in four stages: feature embedding, feature enhancement, feature fusion, and similarity prediction. Moreover, GNNs with different structures were used in the embedding and enhancement stages to increase the interaction of code features. Experiments with code from three open-source projects demonstrate that the model achieves an average precision of 87.57% and an F0.5 score of 89.12%. Compared to existing GNN-based similarity-computation models, this model exhibits a Mean Squared Error (MSE) that is approximately 0.0041 to 0.0266 lower and an F0.5 score that is 3.3259% to 6.4392% higher. It broadens the application scope of GNNs and offers additional insights for the study of code-similarity problems. Full article
11 pages, 283 KiB  
Article
Derivation of Bose’s Entropy Spectral Density from the Multiplicity of Energy Eigenvalues
by Arnaldo Spalvieri
Entropy 2024, 26(6), 504; https://doi.org/10.3390/e26060504 - 9 Jun 2024
Viewed by 496
Abstract
The modern textbook analysis of the thermal state of photons inside a three-dimensional reflective cavity is based on the three quantum numbers that characterize the photon’s energy eigenvalues, which emerge when the boundary conditions are imposed. The crucial passage from the quantum numbers to the continuous frequency is carried out by introducing a three-dimensional continuous version of the three discrete quantum numbers, which leads to the energy spectral density and to the entropy spectral density. This standard analysis obscures the role of the multiplicity of the energy eigenvalues associated with the same eigenfrequency. In this paper, we review past derivations of Bose’s entropy spectral density and present a new analysis of the energy spectral density and entropy spectral density based on the multiplicity of the energy eigenvalues. Our analysis explicitly defines the eigenfrequency distribution of energy and entropy and uses it as the starting point for the passage from the discrete eigenfrequencies to the continuous frequency. Full article
(This article belongs to the Section Thermodynamics)
15 pages, 289 KiB  
Article
Refinements and Extensions of Ziv’s Model of Perfect Secrecy for Individual Sequences
by Neri Merhav
Entropy 2024, 26(6), 503; https://doi.org/10.3390/e26060503 - 9 Jun 2024
Viewed by 409
Abstract
We refine and extend Ziv’s model and results regarding perfectly secure encryption of individual sequences. According to this model, the encrypter and the legitimate decrypter share a common secret key that is not shared with the unauthorized eavesdropper. The eavesdropper is aware of the encryption scheme and has some prior knowledge concerning the individual plaintext source sequence. This prior knowledge, combined with the cryptogram, is harnessed by the eavesdropper, who implements a finite-state machine as a mechanism for accepting or rejecting attempted guesses of the plaintext source. The encryption is considered perfectly secure if the cryptogram does not provide any new information to the eavesdropper that may enhance their knowledge concerning the plaintext beyond their prior knowledge. Ziv has shown that the key rate needed for perfect secrecy is essentially lower bounded by the finite-state compressibility of the plaintext sequence, a bound that is clearly asymptotically attained through Lempel–Ziv compression followed by one-time pad encryption. In this work, we consider some more general classes of finite-state eavesdroppers and derive the respective lower bounds on the key rates needed for perfect secrecy. These bounds are tighter and more refined than Ziv’s bound, and they are attained using encryption schemes that are based on different universal lossless compression schemes. We also extend our findings to the case where side information is available to the eavesdropper and the legitimate decrypter but may or may not be available to the encrypter. Full article
(This article belongs to the Collection Feature Papers in Information Theory)
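The asymptotically optimal scheme mentioned in the abstract, Lempel–Ziv compression followed by a one-time pad, can be sketched as follows (zlib stands in for an LZ-family coder; the framing and key handling are illustrative assumptions, not the paper's construction):

```python
import os
import zlib

def lz_otp_encrypt(plaintext: bytes) -> tuple[bytes, bytes]:
    """Compress with an LZ-family coder, then XOR with a fresh
    uniformly random key of the same length (one-time pad).
    The key length, and hence the key rate, tracks the
    compressibility of the plaintext rather than its raw length."""
    compressed = zlib.compress(plaintext, level=9)
    key = os.urandom(len(compressed))
    ciphertext = bytes(c ^ k for c, k in zip(compressed, key))
    return ciphertext, key

def lz_otp_decrypt(ciphertext: bytes, key: bytes) -> bytes:
    return zlib.decompress(bytes(c ^ k for c, k in zip(ciphertext, key)))

msg = b"an individual plaintext sequence " * 32
ct, key = lz_otp_encrypt(msg)
assert lz_otp_decrypt(ct, key) == msg
assert len(key) < len(msg)  # key rate below one key bit per plaintext bit
```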
9 pages, 608 KiB  
Article
Modelling Heterogeneous Anomalous Dynamics of Radiation-Induced Double-Strand Breaks in DNA during Non-Homologous End-Joining Pathway
by Nickolay Korabel, John W. Warmenhoven, Nicholas T. Henthorn, Samuel Ingram, Sergei Fedotov, Charlotte J. Heaven, Karen J. Kirkby, Michael J. Taylor and Michael J. Merchant
Entropy 2024, 26(6), 502; https://doi.org/10.3390/e26060502 - 8 Jun 2024
Viewed by 550
Abstract
The process of end-joining during nonhomologous repair of DNA double-strand breaks (DSBs) after radiation damage is considered. Experimental evidence has revealed that the dynamics of DSB ends exhibit subdiffusive motion rather than simple diffusion with rare directional movement. Traditional models often overlook the rare long-range directed motion. To address this limitation, we present a heterogeneous anomalous diffusion model consisting of subdiffusive fractional Brownian motion interchanged with short periods of long-range movement. Our model sheds light on the underlying mechanisms of heterogeneous diffusion in DSB repair and could be used to quantify the DSB dynamics on a time scale inaccessible to single particle tracking analysis. The model predicts that the long-range movement of DSB ends is responsible for the misrepair of DSBs in the form of dicentric chromosome lesions. Full article
27 pages, 9015 KiB  
Article
Simultaneous Optimization and Integration of Multiple Process Heat Cascade and Site Utility Selection for the Design of a New Generation of Sugarcane Biorefinery
by Victor Fernandes Garcia and Adriano Viana Ensinas
Entropy 2024, 26(6), 501; https://doi.org/10.3390/e26060501 - 8 Jun 2024
Viewed by 651
Abstract
Biorefineries play a crucial role in the decarbonization of the current economic model, but their high investments and costs make their products less competitive. Identifying the technological route that maximizes operational synergies is crucial for viability. This study presents a new superstructure model based on mixed integer linear programming to identify an ideal biorefinery configuration. The proposed formulation considers process selection and scale adjustment, utility selection, and heat integration through the integration of heat cascades from different processes. The formulation is tested in a study evaluating the impact of new technologies on the energy efficiency and total annualized cost of a sugarcane biorefinery. As a result, the energy efficiency of the biorefinery increased from 50.25% to 74.5% with methanol production through bagasse gasification, mainly due to its high heat availability, which can be transferred to the distillery and made it possible to shift the bagasse flow from the cogeneration process to the gasification process. Additionally, the production of dimethyl ether (DME) yields outcomes comparable to methanol production. However, CO2 hydrogenation negatively impacts profitability and energy efficiency due to its significant electricity consumption and cost. Nonetheless, it is advantageous for surface power density, as it increases biofuel production without expanding the biomass area. Full article
(This article belongs to the Special Issue Thermodynamic Optimization of Industrial Energy Systems)