Entropy, Volume 26, Issue 5 (May 2024) – 83 articles

Cover Story (view full-size image): Information theory has found applications in diverse disciplines due to its solid foundation in probability theory. However, data often do not follow specific probability distributions, complicating direct application of information-theoretic concepts. Non-parametric methods can estimate these quantities from data, but which existing method is best? We evaluated different estimation methods practically by measuring the relative error of their estimates across different uni- and multivariate distributions and sample sizes. We also considered each estimator’s hyperparameters and implementation challenges. Through synthetic case studies, we show the behavior of different methods and highlight the advantages of nearest neighbor-based estimation. View this paper
  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive the tables of contents of newly released issues.
  • PDF is the official format for papers, which are published in both HTML and PDF forms. To view a paper in PDF format, click the "PDF Full-text" link and open it with the free Adobe Reader.
14 pages, 601 KiB  
Article
Revisiting the Characterization of Resting Brain Dynamics with the Permutation Jensen–Shannon Distance
by Luciano Zunino
Entropy 2024, 26(5), 432; https://doi.org/10.3390/e26050432 - 20 May 2024
Viewed by 273
Abstract
Taking into account the complexity of human brain dynamics, appropriately characterizing any brain state is a challenge not easily met. Indeed, even the discrimination of simple behavioral tasks, such as resting with eyes closed or eyes open, is an intricate problem, and many efforts have been and are being made to overcome it. In this work, this issue is carefully addressed by performing multiscale analyses of electroencephalogram records with the permutation Jensen–Shannon distance. The influence that linear and nonlinear temporal correlations have on the discrimination is unveiled. The results lead to significant conclusions that help to achieve an improved distinction between these resting brain states. Full article
(This article belongs to the Section Entropy and Biology)
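The ordinal machinery behind the paper's tool can be sketched compactly. Below is a minimal, illustrative implementation of the permutation Jensen–Shannon distance (ordinal-pattern histograms compared via the Jensen–Shannon divergence, normalized to [0, 1]); the function names, the embedding dimension d = 3, and the absence of an embedding delay are simplifying assumptions, not the paper's implementation:

```python
import math
from collections import Counter
from itertools import permutations

def ordinal_distribution(series, d=3):
    """Relative frequencies of the d! ordinal patterns of embedding dimension d."""
    counts = Counter()
    for i in range(len(series) - d + 1):
        window = series[i:i + d]
        counts[tuple(sorted(range(d), key=lambda k: window[k]))] += 1
    total = sum(counts.values())
    return [counts[p] / total for p in permutations(range(d))]

def shannon(dist):
    """Shannon entropy (natural log) of a probability vector."""
    return -sum(p * math.log(p) for p in dist if p > 0)

def permutation_jsd(x, y, d=3):
    """Permutation Jensen-Shannon distance between two time series."""
    p, q = ordinal_distribution(x, d), ordinal_distribution(y, d)
    m = [(a + b) / 2 for a, b in zip(p, q)]
    divergence = shannon(m) - 0.5 * shannon(p) - 0.5 * shannon(q)
    # normalise by ln 2 (the maximal divergence) and take the square root,
    # which makes the quantity a metric in [0, 1]
    return math.sqrt(max(divergence, 0.0) / math.log(2))
```

Two identical series give a distance of 0, while a strictly increasing and a strictly decreasing series, whose ordinal distributions are disjoint, give the maximal distance of 1.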

19 pages, 1256 KiB  
Article
AM-MSFF: A Pest Recognition Network Based on Attention Mechanism and Multi-Scale Feature Fusion
by Meng Zhang, Wenzhong Yang, Danny Chen, Chenghao Fu and Fuyuan Wei
Entropy 2024, 26(5), 431; https://doi.org/10.3390/e26050431 - 20 May 2024
Viewed by 217
Abstract
Traditional methods for pest recognition have certain limitations in addressing the challenges posed by diverse pest species, varying sizes, diverse morphologies, and complex field backgrounds, resulting in lower recognition accuracy. To overcome these limitations, this paper proposes a novel pest recognition method based on an attention mechanism and multi-scale feature fusion (AM-MSFF). By combining the advantages of the attention mechanism and multi-scale feature fusion, this method significantly improves the accuracy of pest recognition. Firstly, we introduce the relation-aware global attention (RGA) module to adaptively adjust the feature weights of each position, thereby focusing more on the regions relevant to pests and reducing background interference. Then, we propose the multi-scale feature fusion (MSFF) module to fuse feature maps from different scales, which better captures the subtle differences and the overall shape features in pest images. Moreover, we introduce generalized-mean pooling (GeMP) to more accurately extract feature information from pest images and better distinguish different pest categories. In terms of the loss function, this study proposes an improved focal loss (FL), termed balanced focal loss (BFL), as a replacement for cross-entropy loss. This improvement addresses the common issue of class imbalance in pest datasets, thereby enhancing the recognition accuracy of pest identification models. To evaluate the performance of the AM-MSFF model, we conduct experiments on two publicly available pest datasets (IP102 and D0). Extensive experiments demonstrate that our proposed AM-MSFF outperforms most state-of-the-art methods. On the IP102 dataset, the accuracy reaches 72.64%, while on the D0 dataset, it reaches 99.05%. Full article
(This article belongs to the Section Entropy and Biology)
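Two of the ingredients named above have compact textbook forms. The sketch below shows the standard focal loss (γ = 0 recovers cross-entropy), one common inverse-frequency class-balancing scheme, and generalized-mean pooling; the abstract does not spell out the exact form of the proposed balanced focal loss, so the balancing shown here is an illustrative assumption:

```python
import math

def focal_loss(p_true, gamma=2.0, alpha=1.0):
    """Focal loss for a sample whose true class has predicted probability
    p_true: FL = -alpha * (1 - p)^gamma * log(p). With gamma = 0 this is
    ordinary cross-entropy; gamma > 0 down-weights easy examples."""
    return -alpha * (1.0 - p_true) ** gamma * math.log(p_true)

def inverse_frequency_weights(class_counts):
    """Per-class weights proportional to inverse frequency, scaled so the
    weights average to 1; rarer classes get larger weights."""
    inv = [1.0 / c for c in class_counts]
    scale = len(inv) / sum(inv)
    return [w * scale for w in inv]

def gem_pool(activations, p=3.0):
    """Generalized-mean (GeM) pooling: the arithmetic mean for p = 1,
    approaching the maximum as p grows."""
    return (sum(a ** p for a in activations) / len(activations)) ** (1.0 / p)
```

The focal term (1 − p)^γ shrinks the loss on well-classified samples, while the inverse-frequency weights counteract class imbalance; GeM interpolates between average and max pooling via a single parameter p.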

12 pages, 1987 KiB  
Review
Information Theory, Living Systems, and Communication Engineering
by Dragana Bajić
Entropy 2024, 26(5), 430; https://doi.org/10.3390/e26050430 - 18 May 2024
Viewed by 287
Abstract
Mainstream research on information theory within the field of living systems involves the application of analytical tools to understand a broad range of life processes. This paper is dedicated to the opposite problem: it explores the information theory and communication engineering methods that have counterparts in the data transmission carried out by DNA structures and neural fibers. Considering the requirements of modern multimedia, the transmission methods chosen by nature may be different, suboptimal, or even far from optimal. However, nature is known for rational resource usage, so its methods have a significant advantage: they are proven to be sustainable. Perhaps understanding the engineering aspects of nature's methods can inspire the design of alternative green, stable, and low-cost transmission. Full article
(This article belongs to the Section Entropy Reviews)

8 pages, 944 KiB  
Article
Heat Bath in a Quantum Circuit
by Jukka P. Pekola and Bayan Karimi
Entropy 2024, 26(5), 429; https://doi.org/10.3390/e26050429 - 17 May 2024
Viewed by 355
Abstract
We discuss the concept and realization of a heat bath in solid-state quantum systems. We demonstrate that, unlike a true resistor, a finite one-dimensional Josephson junction array, or analogously a transmission line with non-vanishing frequency spacing, commonly considered a reservoir of a quantum circuit, does not strictly qualify as a Caldeira–Leggett type dissipative environment. We then consider a set of quantum two-level systems as a bath, which can be realized as a collection of qubits. We show that only a dense and wide distribution of energies of the two-level systems can secure the long Poincaré recurrence times characteristic of a proper heat bath. An alternative for this bath is a collection of harmonic oscillators, for instance, in the form of superconducting resonators. Full article
(This article belongs to the Special Issue Advances in Quantum Thermodynamics)

15 pages, 2610 KiB  
Article
A Novel Fault Diagnosis Method of High-Speed Train Based on Few-Shot Learning
by Yunpu Wu, Jianhua Chen, Xia Lei and Weidong Jin
Entropy 2024, 26(5), 428; https://doi.org/10.3390/e26050428 - 16 May 2024
Viewed by 269
Abstract
Ensuring the safe and stable operation of high-speed trains necessitates real-time monitoring and diagnostics of their suspension systems. While machine learning technology is widely employed for industrial equipment fault diagnosis, its effective application relies on the availability of a large dataset with annotated fault data for model training. However, in practice, the availability of informational data samples is often insufficient, with most of them being unlabeled. The challenge arises when traditional machine learning methods encounter a scarcity of training data, leading to overfitting due to limited information. To address this issue, this paper proposes a novel few-shot learning method for high-speed train fault diagnosis, incorporating sensor-perturbation injection and meta-confidence learning to improve detection accuracy. Experimental results demonstrate the superior performance of the proposed method, which introduces perturbations, compared to existing methods. The impact of perturbation effects and class numbers on fault detection is analyzed, confirming the effectiveness of our learning strategy. Full article

18 pages, 3382 KiB  
Article
Fault Diagnosis Method for Space Fluid Loop Systems Based on Improved Evidence Theory
by Yue Liu, Zhenxiang Li, Lu Zhang and Hongyong Fu
Entropy 2024, 26(5), 427; https://doi.org/10.3390/e26050427 - 16 May 2024
Viewed by 276
Abstract
Addressing the challenges posed by the complexity of the structure and the multitude of sensor types installed in space application fluid loop systems, this paper proposes a fault diagnosis method based on an improved D-S evidence theory. The method first employs a Gaussian membership function to convert the information acquired by sensors into basic probability assignment (BPA) functions. Subsequently, it applies the pignistic probability transformation to convert multiple-subset focal elements into single-subset focal elements. Finally, it comprehensively evaluates the credibility and uncertainty factors between bodies of evidence, introducing the Bray–Curtis dissimilarity and belief entropy to achieve the fusion of conflicting evidence. The proposed method is initially validated on the classic Iris dataset, demonstrating its reliability. Furthermore, when applied to fault diagnosis in space application fluid circuit loop pumps, the results indicate that the method can effectively fuse data from multiple sensors and accurately identify faults. Full article
(This article belongs to the Section Multidisciplinary Applications)
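For readers unfamiliar with the ingredients, a minimal sketch of classic Dempster–Shafer combination and the pignistic transform mentioned above follows; the paper's improved fusion rule (using Bray–Curtis dissimilarity and belief entropy) is not reproduced here, and all names are illustrative:

```python
def dempster_combine(m1, m2):
    """Classic Dempster's rule for two basic probability assignments (BPAs),
    given as dicts mapping frozenset focal elements to masses."""
    combined, conflict = {}, 0.0
    for a, ma in m1.items():
        for b, mb in m2.items():
            inter = a & b
            if inter:
                combined[inter] = combined.get(inter, 0.0) + ma * mb
            else:
                conflict += ma * mb  # mass assigned to contradictory hypotheses
    if conflict >= 1.0:
        raise ValueError("totally conflicting evidence cannot be combined")
    return {s: v / (1.0 - conflict) for s, v in combined.items()}

def pignistic(m):
    """Pignistic transform: spread each multi-element focal mass uniformly
    over its singletons, yielding an ordinary probability distribution."""
    bet = {}
    for s, mass in m.items():
        for x in s:
            bet[x] = bet.get(x, 0.0) + mass / len(s)
    return bet
```

The renormalization by 1 − conflict is exactly what makes the classic rule fragile under highly conflicting evidence, which is the situation the paper's modified fusion is designed to handle.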

20 pages, 1657 KiB  
Article
Exploring Simplicity Bias in 1D Dynamical Systems
by Kamal Dingle, Mohammad Alaskandarani, Boumediene Hamzi and Ard A. Louis
Entropy 2024, 26(5), 426; https://doi.org/10.3390/e26050426 - 16 May 2024
Viewed by 316
Abstract
Arguments inspired by algorithmic information theory predict an inverse relation between the probability and complexity of output patterns in a wide range of input–output maps. This phenomenon is known as simplicity bias. By viewing the parameters of dynamical systems as inputs, and the resulting (digitised) trajectories as outputs, we study simplicity bias in the logistic map, Gauss map, sine map, Bernoulli map, and tent map. We find that the logistic map, Gauss map, and sine map all exhibit simplicity bias upon sampling of map initial values and parameter values, but the Bernoulli map and tent map do not. The simplicity bias upper bound on the output pattern probability is used to make a priori predictions regarding the probability of output patterns. In some cases, the predictions are surprisingly accurate, given that almost no details of the underlying dynamical systems are assumed. More generally, we argue that studying probability–complexity relationships may be a useful tool when studying patterns in dynamical systems. Full article
(This article belongs to the Section Complexity)
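The sampling procedure described above (map parameters and initial values as inputs, digitised trajectories as outputs) can be sketched as follows; the coarse-graining threshold of 1/2, the transient length, and the use of zlib compression as a crude stand-in for descriptional complexity are illustrative assumptions, not the paper's choices:

```python
import random
import zlib
from collections import Counter

def digitised_logistic(mu, x0, n, transient=10):
    """Binary coarse-grained logistic-map trajectory: emit '1' when x > 1/2,
    after discarding an initial transient."""
    x, bits = x0, []
    for i in range(n):
        x = mu * x * (1.0 - x)
        if i >= transient:
            bits.append("1" if x > 0.5 else "0")
    return "".join(bits)

def compressed_size(pattern):
    """Crude stand-in for descriptional complexity: zlib-compressed length."""
    return len(zlib.compress(pattern.encode()))

def sample_output_frequencies(samples=500, seed=0):
    """Sample (mu, x0) uniformly and count the resulting output patterns."""
    rng = random.Random(seed)
    counts = Counter()
    for _ in range(samples):
        counts[digitised_logistic(rng.uniform(0.0, 4.0),
                                  rng.uniform(0.01, 0.99), 30)] += 1
    return counts
```

With a fixed seed, the most frequent output pattern is a constant string, i.e. a minimally complex pattern, which is the qualitative signature of simplicity bias: high-probability outputs are low-complexity outputs.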

10 pages, 290 KiB  
Article
Memory Corrections to Markovian Langevin Dynamics
by Mateusz Wiśniewski, Jerzy Łuczka and Jakub Spiechowicz
Entropy 2024, 26(5), 425; https://doi.org/10.3390/e26050425 - 16 May 2024
Viewed by 353
Abstract
Analysis of non-Markovian systems and memory-induced phenomena poses an everlasting challenge in the realm of physics. As a paradigmatic example, we consider a classical Brownian particle of mass M subjected to an external force and exposed to correlated thermal fluctuations. We show that the recently developed approach to this system, in which its non-Markovian dynamics given by the Generalized Langevin Equation is approximated by a memoryless counterpart with an effective particle mass M* < M, can be derived within the Markovian embedding technique. Using this method, we calculate the first- and second-order memory corrections to the Markovian dynamics of the Brownian particle for a memory kernel represented as a Prony series. The second correction lowers the effective mass of the system further and improves the precision of the approximation. Our work opens the door to the derivation of higher-order memory corrections to Markovian Langevin dynamics. Full article
(This article belongs to the Collection Foundations of Statistical Mechanics)
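The Markovian embedding mentioned above replaces the memory integral of the Generalized Langevin Equation with auxiliary variables, one per Prony mode. The noise-free, single-mode sketch below shows only the mechanics of that embedding; the paper's effective-mass corrections and the thermal forcing are omitted, and all names are illustrative:

```python
import math

def embedded_velocity(v0, mass, c, tau, dt, steps):
    """Markovian embedding of the GLE memory term for a single Prony mode
    K(t) = c * exp(-t / tau): the friction integral
    -int_0^t K(t - s) v(s) ds is represented exactly by an auxiliary
    variable z obeying dz/dt = -z/tau - c*v, coupled to dv/dt = z/mass.
    (Noise-free explicit-Euler sketch, for illustration only.)"""
    v, z = v0, 0.0
    for _ in range(steps):
        v, z = v + (z / mass) * dt, z + (-z / tau - c * v) * dt
    return v
```

For a short memory time τ the auxiliary variable adiabatically follows z ≈ −cτ v, so the embedded dynamics approach the memoryless limit M dv/dt = −cτ v; this zeroth-order approximation is what the memory corrections discussed in the paper refine.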

50 pages, 652 KiB  
Article
Non-Negative Decomposition of Multivariate Information: From Minimum to Blackwell-Specific Information
by Tobias Mages, Elli Anastasiadi and Christian Rohner
Entropy 2024, 26(5), 424; https://doi.org/10.3390/e26050424 - 15 May 2024
Viewed by 311
Abstract
Partial information decompositions (PIDs) aim to categorize how a set of source variables provides information about a target variable redundantly, uniquely, or synergistically. The original proposal for such an analysis used a lattice-based approach and gained significant attention. However, finding a suitable underlying decomposition measure for an arbitrary number of discrete random variables is still an open research question. This work proposes a solution with a non-negative PID that satisfies an inclusion–exclusion relation for any f-information measure. The decomposition is constructed from a pointwise perspective of the target variable to take advantage of the equivalence between the Blackwell and zonogon orders in this setting. Zonogons are the Neyman–Pearson regions for an indicator variable of each target state, and f-information is the expected value of quantifying their boundary. We prove that the proposed decomposition satisfies the desired axioms and guarantees non-negative partial information results. Moreover, we demonstrate how the obtained decomposition can be transformed between different decomposition lattices and that it directly provides a non-negative decomposition of Rényi information at a transformed inclusion–exclusion relation. Finally, we highlight that the decomposition behaves differently depending on the information measure used and show how it can be used for tracing partial information flows through Markov chains. Full article

18 pages, 957 KiB  
Article
Landauer Bound in the Context of Minimal Physical Principles: Meaning, Experimental Verification, Controversies and Perspectives
by Edward Bormashenko
Entropy 2024, 26(5), 423; https://doi.org/10.3390/e26050423 - 15 May 2024
Viewed by 588
Abstract
The physical roots, interpretation, controversies, and precise meaning of the Landauer principle are surveyed. The Landauer principle is a physical principle defining the lower theoretical limit of energy consumption necessary for computation. It states that an irreversible change in information stored in a computer, such as merging two computational paths, dissipates a minimum amount of heat k_B T ln 2 per bit of information to its surroundings. The Landauer principle is discussed in the context of fundamental physical limiting principles, such as the Abbe diffraction limit, the Margolus–Levitin limit, and the Bekenstein limit. Synthesis of the Landauer bound with the Abbe, Margolus–Levitin, and Bekenstein limits yields the minimal time of computation, which scales as τ_min ~ h/(k_B T). Decreasing the temperature of the thermal bath will decrease the energy consumption of a single computation, but in parallel, it will slow the computation. The Landauer principle bridges John Archibald Wheeler's "it from bit" paradigm and thermodynamics. Experimental verifications of the Landauer principle are surveyed. The interrelation between thermodynamic and logical irreversibility is addressed, as is the generalization of the Landauer principle to quantum and non-equilibrium systems. The Landauer principle represents a powerful heuristic principle bridging physics, information theory, and computer engineering. Full article
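The two scales quoted in the abstract are easy to evaluate numerically. The sketch below uses the exact SI values of the constants and treats τ_min ~ h/(k_B T) as an order-of-magnitude relation, as the survey does:

```python
import math

K_B = 1.380649e-23       # Boltzmann constant, J/K (exact SI value)
PLANCK = 6.62607015e-34  # Planck constant, J*s (exact SI value)

def landauer_heat(temperature):
    """Minimum heat dissipated when one bit is irreversibly erased:
    k_B * T * ln 2."""
    return K_B * temperature * math.log(2)

def min_computation_time(temperature):
    """Order-of-magnitude minimal time of a computation, tau_min ~ h / (k_B * T)."""
    return PLANCK / (K_B * temperature)
```

At room temperature (300 K) the bound is about 2.87 × 10⁻²¹ J per bit and the minimal time is about 1.6 × 10⁻¹³ s; cooling the bath lowers the former but raises the latter, the trade-off the abstract notes.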

16 pages, 299 KiB  
Article
Model Selection for Exponential Power Mixture Regression Models
by Yunlu Jiang, Jiangchuan Liu, Hang Zou and Xiaowen Huang
Entropy 2024, 26(5), 422; https://doi.org/10.3390/e26050422 - 15 May 2024
Viewed by 326
Abstract
Finite mixture of linear regression (FMLR) models are among the most widely used statistical tools for dealing with heterogeneous data. In this paper, we introduce a new procedure to simultaneously determine the number of components and perform variable selection for the different regressions in FMLR models via an exponential power error distribution, which includes the normal and Laplace distributions as special cases. Under some regularity conditions, the consistency of order selection and of variable selection are established, and the asymptotic normality of the estimators of the non-zero parameters is investigated. In addition, an efficient modified expectation-maximization (EM) algorithm and a majorization-maximization (MM) algorithm are proposed to solve the resulting optimization problem. Furthermore, we use numerical simulations to demonstrate the finite-sample performance of the proposed methodology. Finally, we apply the proposed approach to analyze a baseball salary data set. The results indicate that our method obtains a smaller BIC value than the existing method. Full article
(This article belongs to the Section Information Theory, Probability and Statistics)
13 pages, 371 KiB  
Article
Optimal Decoding Order and Power Allocation for Sum Throughput Maximization in Downlink NOMA Systems
by Zhuo Han, Wanming Hao, Zhiqing Tang and Shouyi Yang
Entropy 2024, 26(5), 421; https://doi.org/10.3390/e26050421 - 15 May 2024
Viewed by 298
Abstract
In this paper, we consider a downlink non-orthogonal multiple access (NOMA) system over Nakagami-m channels. The single-antenna base station serves two single-antenna NOMA users based on statistical channel state information (CSI). We derive the closed-form expression of the exact outage probability under a given decoding order, and we also deduce the asymptotic outage probability and diversity order in the high-SNR regime. Then, we analyze all the possible power allocation ranges and theoretically prove the optimal power allocation range under the corresponding decoding order. The demarcation points of the optimal power allocation ranges are determined by the target data rates and the total power, and are unaffected by the CSI; in particular, their values are proportional to the total power. Furthermore, we formulate a joint decoding order and power allocation optimization problem to maximize the sum throughput, which is solved by efficiently searching within the obtained optimal power allocation ranges. Finally, Monte Carlo simulations are conducted to confirm the accuracy of our derived exact outage probability. Numerical results confirm the accuracy of the deduced demarcation points of the optimal power allocation ranges and show that the optimal decoding order is not constant across different total transmit power levels. Full article
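The kind of Monte Carlo outage check mentioned above can be sketched for the Rayleigh special case (m = 1 of the paper's Nakagami-m channels); the thresholds, the power split, and the unit-mean fading are illustrative choices, and this is not the paper's simulator:

```python
import math
import random

def noma_outage_mc(power, a_near, th_near, th_far, n=200000, seed=1):
    """Monte Carlo outage probabilities for a two-user downlink NOMA pair
    over unit-mean Rayleigh fading. The far user receives power fraction
    1 - a_near and is decoded first; the near user applies successive
    interference cancellation (SIC). th_* are SINR thresholds."""
    rng = random.Random(seed)
    a_far = 1.0 - a_near
    out_near = out_far = 0
    for _ in range(n):
        g_near, g_far = rng.expovariate(1.0), rng.expovariate(1.0)  # |h|^2
        # far user decodes its own signal, treating the near user's as noise
        if a_far * power * g_far / (a_near * power * g_far + 1.0) < th_far:
            out_far += 1
        # near user: first cancel the far user's signal, then decode its own
        sic_ok = a_far * power * g_near / (a_near * power * g_near + 1.0) >= th_far
        if not sic_ok or a_near * power * g_near < th_near:
            out_near += 1
    return out_near / n, out_far / n
```

For the far user the simulation can be checked against the elementary Rayleigh closed form 1 − exp(−θ/(P(a_far − θ·a_near))), valid when a_far > θ·a_near; the paper derives the corresponding (more involved) Nakagami-m expressions.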

28 pages, 509 KiB  
Review
The SU(3)C × SU(3)L × U(1)X (331) Model: Addressing the Fermion Families Problem within Horizontal Anomalies Cancellation
by Claudio Corianò and Dario Melle
Entropy 2024, 26(5), 420; https://doi.org/10.3390/e26050420 - 14 May 2024
Viewed by 346
Abstract
One of the most important and unanswered problems in particle physics is the origin of the three generations of quarks and leptons. The Standard Model does not provide any hint regarding its sequential charge assignments, which remain a fundamental mystery of Nature. One possible solution to the puzzle is to look for charge assignments, in a given gauge theory, that are inter-generational, by employing the cancellation of the gravitational and gauge anomalies horizontally. The 331 model, based on an SU(3)C × SU(3)L × U(1)X gauge symmetry, does this in an economical way and defines a possible extension of the Standard Model in which the number of families must be three. We review the model in Pisano, Pleitez, and Frampton's formulation, which predicts the existence of bileptons. Another characteristic of the model is that it unifies the SU(3)C × SU(2)L × U(1)Y symmetry of the Standard Model into the 331 symmetry at a scale in the TeV range. Expressions for the scalar mass eigenstates and for the renormalization group equations of the model are also presented. Full article

32 pages, 3331 KiB  
Article
Capacity Analysis of Hybrid Satellite–Terrestrial Systems with Selection Relaying
by Predrag Ivaniš, Jovan Milojković, Vesna Blagojević and Srđan Brkić
Entropy 2024, 26(5), 419; https://doi.org/10.3390/e26050419 - 13 May 2024
Viewed by 310
Abstract
A hybrid satellite–terrestrial relay network is a simple and flexible solution that can be used to improve the performance of land mobile satellite systems, where the communication links between the satellite and mobile terrestrial users can be unstable due to the multipath effect, obstacles, and additional atmospheric losses. Motivated by these facts, in this paper we analyze a system in which the satellite–terrestrial links undergo shadowed Rice fading and the terrestrial relay applies the selection relaying protocol, forwarding the information to the final destination over a communication link subject to Nakagami-m fading. For the considered relaying protocol, we derive exact closed-form expressions for the outage probability, outage capacity, and ergodic capacity, presented in polynomial–exponential form for integer-valued fading parameters. The presented numerical results illustrate the usefulness of selection relaying for various propagation scenarios and system geometry parameters. The analytical results are corroborated by an independent simulation method based on an originally developed fading simulator. Full article
(This article belongs to the Special Issue Information Theory and Coding for Wireless Communications II)

49 pages, 468 KiB  
Article
In Our Mind’s Eye: Thinkable and Unthinkable, and Classical and Quantum in Fundamental Physics, with Schrödinger’s Cat Experiment
by Arkady Plotnitsky
Entropy 2024, 26(5), 418; https://doi.org/10.3390/e26050418 - 13 May 2024
Viewed by 403
Abstract
This article reconsiders E. Schrödinger's cat paradox experiment from a new perspective, grounded in an interpretation of quantum mechanics belonging to the class designated as "reality without realism" (RWR) interpretations. These interpretations assume that the reality ultimately responsible for quantum phenomena is beyond conception, an assumption designated as the Heisenberg postulate. Accordingly, in these interpretations, quantum physics is understood in terms of the relationships between what is thinkable and what is unthinkable, with the classical and the quantum corresponding to the thinkable and the unthinkable, respectively. The role of classical physics thus becomes unavoidable in quantum physics, a circumstance designated as the Bohr postulate, which restores to classical physics its position as part of fundamental physics, a position commonly reserved for quantum physics and relativity. This view of quantum physics and relativity is maintained by this article as well, but is argued not to be sufficient, by itself, for understanding fundamental physics. Establishing this role of classical physics is a distinctive contribution of the article, which allows it to reconsider Schrödinger's cat experiment but has a broader significance for understanding fundamental physics. RWR interpretations have not previously been applied to the cat experiment, including by N. Bohr, whose interpretation, in its ultimate form (he changed it a few times), was an RWR interpretation. The interpretation adopted in this article follows Bohr's, based on the Heisenberg and Bohr postulates, but adds the Dirac postulate, which states that the concept of a quantum object applies only at the time of observation and not independently. Full article
18 pages, 734 KiB  
Article
Aging Intensity for Step-Stress Accelerated Life Testing Experiments
by Francesco Buono and Maria Kateri
Entropy 2024, 26(5), 417; https://doi.org/10.3390/e26050417 - 13 May 2024
Viewed by 439
Abstract
The aging intensity (AI), defined as the ratio of the instantaneous hazard rate to a baseline hazard rate, is a useful tool for describing the reliability properties of a random variable corresponding to a lifetime. In this work, the concept of AI is introduced into step-stress accelerated life testing (SSALT) experiments, providing new insights into the model and enabling further clarification of the differences between the two commonly employed cumulative exposure (CE) and tampered failure rate (TFR) models. New AI-based estimators for the parameters of an SSALT model are proposed and compared to the MLEs through examples and a simulation study. Full article
(This article belongs to the Special Issue Information-Theoretic Criteria for Statistical Model Selection)
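The AI of the abstract has a classical special case that is easy to compute: taking the running-average hazard H(t)/t as the baseline gives AI(t) = t·h(t)/H(t), where H is the cumulative hazard. A minimal sketch follows, with the Weibull distribution as a worked example (its AI is constant and equals the shape parameter); this is illustrative and not the paper's SSALT construction:

```python
def aging_intensity(t, hazard, cum_hazard):
    """Aging intensity AI(t) = t * h(t) / H(t): the instantaneous hazard
    relative to the average hazard over [0, t]. (The abstract's more general
    definition, a ratio of the instantaneous hazard to a baseline hazard,
    reduces to this classical form when the baseline is the running average.)"""
    return t * hazard(t) / cum_hazard(t)

def weibull_hazard(t, shape, scale=1.0):
    """Hazard rate of a Weibull(shape, scale) lifetime."""
    return (shape / scale) * (t / scale) ** (shape - 1)

def weibull_cum_hazard(t, shape, scale=1.0):
    """Cumulative hazard H(t) = (t / scale)^shape of a Weibull lifetime."""
    return (t / scale) ** shape
```

For a Weibull lifetime, t·h(t)/H(t) = shape at every t, so AI > 1 (shape > 1) signals aging and AI < 1 signals anti-aging, independently of the time scale.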

20 pages, 6707 KiB  
Article
Towards Multi-Objective Object Push-Grasp Policy Based on Maximum Entropy Deep Reinforcement Learning under Sparse Rewards
by Tengteng Zhang and Hongwei Mo
Entropy 2024, 26(5), 416; https://doi.org/10.3390/e26050416 - 12 May 2024
Viewed by 464
Abstract
In unstructured environments, robots need to deal with a wide variety of objects with diverse shapes, and often, the instances of these objects are unknown. Traditional methods rely on training with large-scale labeled data, but in environments with continuous and high-dimensional state spaces, the data become sparse, leading to weak generalization ability of the trained models when transferred to real-world applications. To address this challenge, we present an innovative maximum entropy Deep Q-Network (ME-DQN), which leverages an attention mechanism. The framework solves complex and sparse reward tasks through probabilistic reasoning while eliminating the trouble of adjusting hyper-parameters. This approach aims to merge the robust feature extraction capabilities of Fully Convolutional Networks (FCNs) with the efficient feature selection of the attention mechanism across diverse task scenarios. By integrating an advantage function with the reasoning and decision-making of deep reinforcement learning, ME-DQN propels the frontier of robotic grasping and expands the boundaries of intelligent perception and grasping decision-making in unstructured environments. Our simulations demonstrate a remarkable grasping success rate of 91.6%, while maintaining excellent generalization performance in the real world. Full article
26 pages, 1014 KiB  
Article
Quantum Synchronization and Entanglement of Dissipative Qubits Coupled to a Resonator
by Alexei D. Chepelianskii and Dima L. Shepelyansky
Entropy 2024, 26(5), 415; https://doi.org/10.3390/e26050415 - 11 May 2024
Viewed by 371
Abstract
In a dissipative regime, we study the properties of several qubits coupled to a driven resonator in the framework of a Jaynes–Cummings model. The time evolution and the steady state of the system are numerically analyzed within the Lindblad master equation, with up [...] Read more.
In a dissipative regime, we study the properties of several qubits coupled to a driven resonator in the framework of a Jaynes–Cummings model. The time evolution and the steady state of the system are numerically analyzed within the Lindblad master equation, with up to several million components. Two semi-analytical approaches, at weak and strong (semiclassical) dissipation, are developed to describe the steady state of this system, and their validity is determined by comparing them with the Lindblad equation results. We show that the synchronization of several qubits with the driving phase can be obtained due to their coupling to the resonator. We establish the existence of two different qubit synchronization regimes: In the first one, the semiclassical approach describes the dynamics of the qubits well; thus, their quantum features and entanglement are suppressed by dissipation, and the synchronization is essentially classical. In the second one, the entangled steady state of a pair of qubits remains synchronized in the presence of dissipation and decoherence, corresponding to a regime that does not exist in classical synchronization. Full article
(This article belongs to the Section Quantum Information)
16 pages, 3749 KiB  
Article
Quantum Tunneling and Complex Dynamics in the Suris’s Integrable Map
by Yasutaka Hanada and Akira Shudo
Entropy 2024, 26(5), 414; https://doi.org/10.3390/e26050414 - 11 May 2024
Viewed by 520
Abstract
Quantum tunneling in a two-dimensional integrable map is studied. The orbits of the map are all confined to the curves specified by the one-dimensional Hamiltonian. It is found that the behavior of tunneling splitting for the integrable map and the associated Hamiltonian system [...] Read more.
Quantum tunneling in a two-dimensional integrable map is studied. The orbits of the map are all confined to the curves specified by the one-dimensional Hamiltonian. It is found that the behavior of tunneling splitting for the integrable map and the associated Hamiltonian system is qualitatively the same, with only a slight difference in magnitude. However, the tunneling tails of the wave functions, obtained by superposing the eigenfunctions that form the doublet, exhibit significant differences. To explore the origin of the difference, we observe the classical dynamics in the complex plane and find that the branch points appearing in the potential function of the integrable map could play the role of yielding non-trivial behavior in the tunneling tail. The result highlights the subtlety of quantum tunneling, whose nature cannot be captured by the dynamics in the real plane alone. Full article
(This article belongs to the Special Issue Tunneling in Complex Systems)
17 pages, 3408 KiB  
Article
Efficient Quantum Private Comparison Based on GHZ States
by Min Hou, Yue Wu and Shibin Zhang
Entropy 2024, 26(5), 413; https://doi.org/10.3390/e26050413 - 10 May 2024
Viewed by 372
Abstract
Quantum private comparison (QPC) is a fundamental cryptographic protocol that allows two parties to compare the equality of their private inputs without revealing any information about those inputs to each other. In recent years, QPC protocols utilizing various quantum resources have been proposed. [...] Read more.
Quantum private comparison (QPC) is a fundamental cryptographic protocol that allows two parties to compare the equality of their private inputs without revealing any information about those inputs to each other. In recent years, QPC protocols utilizing various quantum resources have been proposed. However, these protocols make inefficient use of quantum resources and have low qubit efficiency. To address this issue, we propose an efficient QPC protocol based on GHZ states, which leverages the unique properties of GHZ states and rotation operations to achieve secure and efficient private comparison. The secret information is encoded in the rotation angles of the rotation operations performed on the received quantum sequence, which is transmitted along the circular mode. This results in the multiplexing of quantum resources and enhances their utilization. Our protocol does not require quantum key distribution (QKD) to share a secret key for securing the inputs, so no quantum resources are consumed for key sharing. One GHZ state can compare three bits of classical information in each comparison, so the qubit efficiency reaches 100%. Compared with existing QPC protocols, ours requires no quantum resources for sharing a secret key and demonstrates enhanced qubit efficiency and utilization of quantum resources. Full article
(This article belongs to the Special Issue Quantum Computation, Communication and Cryptography)
15 pages, 349 KiB  
Article
Finite-Temperature Correlation Functions Obtained from Combined Real- and Imaginary-Time Propagation of Variational Thawed Gaussian Wavepackets
by Jens Aage Poulsen and Gunnar Nyman
Entropy 2024, 26(5), 412; https://doi.org/10.3390/e26050412 - 10 May 2024
Viewed by 355
Abstract
We apply the so-called variational Gaussian wavepacket approximation (VGA) for conducting both real- and imaginary-time dynamics to calculate thermal correlation functions. By considering strongly anharmonic systems, such as a quartic potential and a double-well potential at high and low temperatures, it is shown [...] Read more.
We apply the so-called variational Gaussian wavepacket approximation (VGA) for conducting both real- and imaginary-time dynamics to calculate thermal correlation functions. By considering strongly anharmonic systems, such as a quartic potential and a double-well potential at high and low temperatures, it is shown that this method is partially able to account for tunneling. This is contrary to other popular many-body methods, such as ring polymer molecular dynamics and the classical Wigner method, which fail in this respect. It is a historical peculiarity that no one has considered the VGA method for representing both the Boltzmann operator and the real-time propagation. This method should be well suited for molecular systems containing many atoms. Full article
(This article belongs to the Special Issue Tunneling in Complex Systems)
14 pages, 323 KiB  
Article
Does Quantum Mechanics Require “Conspiracy”?
by Ovidiu Cristinel Stoica
Entropy 2024, 26(5), 411; https://doi.org/10.3390/e26050411 - 9 May 2024
Viewed by 407
Abstract
Quantum states containing records of incompatible outcomes of quantum measurements are valid states in the tensor-product Hilbert space. Since they contain false records, they conflict with the Born rule and with our observations. I show that excluding them requires a fine-tuning to an [...] Read more.
Quantum states containing records of incompatible outcomes of quantum measurements are valid states in the tensor-product Hilbert space. Since they contain false records, they conflict with the Born rule and with our observations. I show that excluding them requires a fine-tuning to an extremely restricted subspace of the Hilbert space that seems “conspiratorial”, in the sense that (1) it seems to depend on future events that involve records (including measurement settings) and on the dynamical law (normally thought to be independent of the initial conditions), and (2) it violates Statistical Independence, even when it is valid in the context of Bell’s theorem. To solve the puzzle, I build a model in which, by changing the dynamical law, the same initial conditions can lead to different histories in which the validity of records is relative to the new dynamical law. This relative validity of the records may restore causality, but the initial conditions still must depend, at least partially, on the dynamical law. While violations of Statistical Independence are often seen as non-scientific, they turn out to be needed to ensure the validity of records and our own memories and, by this, of science itself. A Past Hypothesis is needed to ensure the existence of records and turns out to require violations of Statistical Independence. It is not excluded that its explanation, still unknown, ensures such violations in the way needed by local interpretations of quantum mechanics. I suggest that an as-yet unknown law or superselection rule may restrict the full tensor-product Hilbert space to the very special subspace required by the validity of records and the Past Hypothesis. Full article
(This article belongs to the Section Quantum Information)
22 pages, 331 KiB  
Article
Geometric Algebra Jordan–Wigner Transformation for Quantum Simulation
by Grégoire Veyrac and Zeno Toffano
Entropy 2024, 26(5), 410; https://doi.org/10.3390/e26050410 - 8 May 2024
Viewed by 593
Abstract
Quantum simulation qubit models of electronic Hamiltonians rely on specific transformations in order to take into account the fermionic permutation properties of electrons. These transformations (principally the Jordan–Wigner transformation (JWT) and the Bravyi–Kitaev transformation) correspond in a quantum circuit to the introduction of [...] Read more.
Quantum simulation qubit models of electronic Hamiltonians rely on specific transformations in order to take into account the fermionic permutation properties of electrons. These transformations (principally the Jordan–Wigner transformation (JWT) and the Bravyi–Kitaev transformation) correspond in a quantum circuit to the introduction of a supplementary circuit level. In order to include the fermionic properties in a more straightforward way in quantum computations, we propose to use methods from Geometric Algebra (GA), which, due to their commutation properties, are well adapted for fermionic systems. First, we apply the Witt basis method in GA to reformulate the JWT in this framework and use this formulation to express various quantum gates. We then rewrite the general one- and two-electron Hamiltonian and use it to build a quantum simulation circuit for the hydrogen molecule. Finally, the quantum Ising Hamiltonian, widely used in quantum simulation, is reformulated in this framework. Full article
17 pages, 6146 KiB  
Article
Entropy-Aided Meshing-Order Modulation Analysis for Wind Turbine Planetary Gear Weak Fault Detection under Variable Rotational Speed
by Shaodan Zhi, Hengshan Wu, Haikuo Shen, Tianyang Wang and Hongfei Fu
Entropy 2024, 26(5), 409; https://doi.org/10.3390/e26050409 - 8 May 2024
Viewed by 395
Abstract
Wind turbines are among the most vital energy conversion systems, and their safe operation is very important; however, weak faults and time-varying speeds can challenge conventional monitoring strategies. Thus, an entropy-aided meshing-order modulation method is proposed for detecting the optimal frequency [...] Read more.
Wind turbines are among the most vital energy conversion systems, and their safe operation is very important; however, weak faults and time-varying speeds can challenge conventional monitoring strategies. Thus, an entropy-aided meshing-order modulation method is proposed for detecting the optimal frequency band, which contains the weak fault-related information. Specifically, the variable rotational frequency trend is first identified and extracted from the time–frequency representation of the raw signal by constructing a novel scaling-basis local reassigning chirplet transform (SLRCT). A new entropy-aided meshing-order modulation (EMOM) indicator is then constructed to locate the most sensitive modulation frequency area according to the extracted fine speed trend with the help of the order tracking technique. Finally, the raw vibration signal is bandpass filtered via the optimal frequency band with the highest EMOM indicator. The order components resulting from the weak fault can thus be highlighted to accomplish weak fault detection. The effectiveness of the proposed EMOM analysis-based method has been tested using experimental data for three different gear fault types at different fault levels from a planetary test rig. Full article
25 pages, 14468 KiB  
Article
Investigation of Thermo-Hydraulics in a Lid-Driven Square Cavity with a Heated Hemispherical Obstacle at the Bottom
by Farhan Lafta Rashid, Abbas Fadhil Khalaf, Arman Ameen and Mudhar A. Al-Obaidi
Entropy 2024, 26(5), 408; https://doi.org/10.3390/e26050408 - 8 May 2024
Viewed by 406
Abstract
Lid-driven cavity (LDC) flow is a significant area of study in fluid mechanics due to its common occurrence in engineering challenges. However, using numerical simulations (ANSYS Fluent) to accurately predict fluid flow and mixed convective heat transfer features, incorporating both a moving top [...] Read more.
Lid-driven cavity (LDC) flow is a significant area of study in fluid mechanics due to its common occurrence in engineering challenges. However, using numerical simulations (ANSYS Fluent) to accurately predict fluid flow and mixed convective heat transfer features, incorporating both a moving top wall and a heated hemispherical obstruction at the bottom, has not yet been attempted. This study aims to numerically demonstrate forced convection in a lid-driven square cavity (LDSC) with a moving top wall and a heated hemispherical obstacle at the bottom. The cavity is filled with a Newtonian fluid and subjected to a specific set of velocities (5, 10, 15, and 20 m/s) at the moving wall. The finite volume method is used to solve the governing equations using the Boussinesq approximation and the parallel flow assumption. The impact of various cavity geometries, as well as the influence of the moving top wall on fluid flow and heat transfer within the cavity, are evaluated. The results of this study indicate that the movement of the wall significantly disrupts the flow field inside the cavity, promoting excellent mixing between the flow field below the moving wall and within the cavity. The static pressure exhibits fluctuations, with the highest value observed at the top of the cavity of 1 m width (adjacent to the moving wall) and the lowest at 0.6 m. Furthermore, dynamic pressure experiences a linear increase until reaching its peak at 0.7 m, followed by a steady decrease toward the moving wall. The velocity of the internal surface fluctuates unpredictably along its length while other parameters remain relatively stable. Full article
(This article belongs to the Special Issue Modern Trends in Multi-Phase Flow and Heat Transfer)
18 pages, 579 KiB  
Article
Minimizing Computation and Communication Costs of Two-Sided Secure Distributed Matrix Multiplication under Arbitrary Collusion Pattern
by Jin Li, Nan Liu and Wei Kang
Entropy 2024, 26(5), 407; https://doi.org/10.3390/e26050407 - 8 May 2024
Viewed by 376
Abstract
This paper studies the problem of minimizing the total cost, including computation cost and communication cost, in the system of two-sided secure distributed matrix multiplication (SDMM) under an arbitrary collusion pattern. In order to perform SDMM, the two input matrices are split into [...] Read more.
This paper studies the problem of minimizing the total cost, including computation cost and communication cost, in a system of two-sided secure distributed matrix multiplication (SDMM) under an arbitrary collusion pattern. To perform SDMM, the two input matrices are split into blocks, blocks of random matrices are appended to protect the security of the two input matrices, and encoded copies of the blocks are distributed to all computing nodes for the matrix multiplication calculation. Our aim is to minimize the total cost over all matrix splitting factors, numbers of appended random matrices, and distribution vectors, while satisfying the security constraint on the two input matrices, the decodability constraint on the desired result of the multiplication, the storage capacity of the computing nodes, and the delay constraint. First, a strategy of appending zeros to the input matrices is proposed to overcome the divisibility problem of matrix splitting. Next, the optimization problem is divided into two subproblems with the aid of alternating optimization (AO), from which a feasible solution can be obtained. In addition, some necessary conditions for the problem to be feasible are provided. Simulation results demonstrate the superiority of the proposed scheme over the scheme without appending zeros and the scheme without alternating optimization. Full article
(This article belongs to the Special Issue Information-Theoretic Cryptography and Security)
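The zero-appending step the abstract mentions can be stated simply: pad each input matrix with zero rows and columns until its dimensions are divisible by the chosen splitting factors, so that all blocks are equally sized while the product is unchanged (the padded entries contribute only zeros, which are trimmed after decoding). A hedged sketch of the padding step only — the encoding, security, and collusion-pattern machinery of the paper is not reproduced:

```python
def pad_to_divisible(matrix, row_factor, col_factor):
    """Append zero rows/columns so the dimensions divide the splitting factors.

    Zero padding leaves the matrix product unchanged: the extra rows and
    columns contribute only zeros to the result.
    """
    rows, cols = len(matrix), len(matrix[0])
    new_rows = -(-rows // row_factor) * row_factor   # ceil to a multiple
    new_cols = -(-cols // col_factor) * col_factor
    padded = [row + [0] * (new_cols - cols) for row in matrix]
    padded += [[0] * new_cols for _ in range(new_rows - rows)]
    return padded

A = [[1, 2, 3],
     [4, 5, 6]]
P = pad_to_divisible(A, 2, 2)   # 2x3 -> 2x4: one zero column appended
```

After padding, the matrix splits evenly into `row_factor * col_factor` equal blocks, which is what the block-encoding step requires.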
22 pages, 391 KiB  
Article
Relativistic Roots of κ-Entropy
by Giorgio Kaniadakis
Entropy 2024, 26(5), 406; https://doi.org/10.3390/e26050406 - 7 May 2024
Viewed by 422
Abstract
The axiomatic structure of the κ-statistical theory is proven. In addition to the first three standard Khinchin–Shannon axioms of continuity, maximality, and expansibility, two further axioms are identified, namely the self-duality axiom and the scaling axiom. It is shown that both the [...] Read more.
The axiomatic structure of the κ-statistical theory is proven. In addition to the first three standard Khinchin–Shannon axioms of continuity, maximality, and expansibility, two further axioms are identified, namely the self-duality axiom and the scaling axiom. It is shown that both the κ-entropy and its special limiting case, the classical Boltzmann–Gibbs–Shannon entropy, follow unambiguously from the above new set of five axioms. It is emphasized that the statistical theory that can be built from the κ-entropy has a validity that goes beyond physics and can be used to treat physical, natural, or artificial complex systems. The physical origin of the self-duality and scaling axioms has been investigated and traced back to the first principles of relativistic physics, i.e., the Galileo relativity principle and the Einstein principle of the constancy of the speed of light. It has been shown that the κ-formalism, which emerges from the κ-entropy, can treat both simple (few-body) and complex (statistical) systems in a unified way. Relativistic statistical mechanics based on the κ-entropy is shown to preserve the main features of classical statistical mechanics (kinetic theory, molecular chaos hypothesis, maximum entropy principle, thermodynamic stability, H-theorem, and Lesche stability). The answers that the κ-statistical theory gives to the more-than-a-century-old open problems of relativistic physics, such as how thermodynamic quantities like temperature and entropy vary with the speed of the reference frame, are emphasized. Full article
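The κ-entropy discussed above is built on the standard Kaniadakis κ-deformed logarithm, ln_κ(x) = (x^κ − x^(−κ))/(2κ), which recovers the ordinary logarithm as κ → 0; the entropy is S_κ = −Σ_i p_i ln_κ(p_i). A small numerical sketch of these textbook definitions and the κ → 0 limit (not code from the paper):

```python
import math

def kappa_log(x, kappa):
    """Kaniadakis deformed logarithm: (x**kappa - x**(-kappa)) / (2*kappa)."""
    return (x**kappa - x**(-kappa)) / (2 * kappa)

def kappa_entropy(p, kappa):
    """S_kappa = -sum_i p_i * ln_kappa(p_i); reduces to the
    Boltzmann-Gibbs-Shannon entropy in the kappa -> 0 limit."""
    return -sum(pi * kappa_log(pi, kappa) for pi in p if pi > 0)

p = [0.5, 0.3, 0.2]
shannon = -sum(pi * math.log(pi) for pi in p)
print(abs(kappa_entropy(p, 1e-6) - shannon) < 1e-9)  # True: kappa -> 0 limit
print(kappa_log(2.0, 0.5) + kappa_log(0.5, 0.5))     # self-duality: ln_k(1/x) = -ln_k(x)
```

The second print illustrates the self-duality axiom mentioned in the abstract: the deformed logarithm satisfies ln_κ(1/x) = −ln_κ(x) for any κ.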
9 pages, 2877 KiB  
Article
Chaos Synchronization of Integrated Five-Section Semiconductor Lasers
by Yuanyuan Guo, Yao Du, Hua Gao, Min Tan, Tong Zhao, Zhiwei Jia, Pengfa Chang and Longsheng Wang
Entropy 2024, 26(5), 405; https://doi.org/10.3390/e26050405 - 6 May 2024
Viewed by 507
Abstract
We proposed and verified a scheme of chaos synchronization for integrated five-section semiconductor lasers with matching parameters. The simulation results demonstrated that the integrated five-section semiconductor laser could generate a chaotic signal within a large parameter range of the driving currents of five [...] Read more.
We proposed and verified a scheme of chaos synchronization for integrated five-section semiconductor lasers with matching parameters. The simulation results demonstrated that the integrated five-section semiconductor laser could generate a chaotic signal within a large parameter range of the driving currents of the five sections. Subsequently, chaos synchronization between two integrated five-section semiconductor lasers with matched parameters was realized by using a common noise signal as a driver. Moreover, it was found that the synchronization was sensitive to current mismatch in all five sections, indicating that the driving currents of the five sections could be used as keys for chaotic optical communication. Therefore, this synchronization scheme provides a candidate for increasing the dimension of the key space and enhancing the security of the system. Full article
32 pages, 2235 KiB  
Article
Importance of Characteristic Features and Their Form for Data Exploration
by Urszula Stańczyk, Beata Zielosko and Grzegorz Baron
Entropy 2024, 26(5), 404; https://doi.org/10.3390/e26050404 - 6 May 2024
Viewed by 516
Abstract
The nature of the input features is one of the key factors indicating what kind of tools, methods, or approaches can be used in a knowledge discovery process. Depending on the characteristics of the available attributes, some techniques could lead to unsatisfactory performance [...] Read more.
The nature of the input features is one of the key factors indicating what kinds of tools, methods, or approaches can be used in a knowledge discovery process. Depending on the characteristics of the available attributes, some techniques could lead to unsatisfactory performance or may not proceed at all without additional preprocessing steps. The types of variables and their domains affect performance, and any changes to their form can influence it as well, or even enable some learners. On the other hand, the relevance of features for a task constitutes another element with a noticeable impact on data exploration. The importance of attributes can be estimated through mechanisms belonging to the area of feature selection and reduction, such as rankings. In the described research framework, the data form was conditioned on relevance by the proposed procedure of gradual discretisation controlled by a ranking of attributes. Supervised and unsupervised discretisation methods were applied to datasets from the stylometric domain for the task of binary authorship attribution. For the selected classifiers, extensive tests were performed, and they indicated many cases of enhanced prediction for partially discretised datasets. Full article
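The ranking-controlled gradual discretisation described above can be pictured as discretising only the k top-ranked attributes and leaving the rest continuous; sweeping k from 0 to the number of attributes yields the partially discretised dataset variants. A minimal sketch using unsupervised equal-width binning — the attribute names and the ranking below are illustrative, not taken from the paper:

```python
def equal_width_bins(values, n_bins):
    """Unsupervised equal-width discretisation: map each value to a bin index."""
    lo, hi = min(values), max(values)
    width = (hi - lo) / n_bins or 1.0  # guard against constant attributes
    return [min(int((v - lo) / width), n_bins - 1) for v in values]

def gradual_discretise(columns, ranking, k, n_bins=3):
    """Discretise only the k top-ranked attributes; keep the rest continuous."""
    top = set(ranking[:k])
    return {name: equal_width_bins(col, n_bins) if name in top else col
            for name, col in columns.items()}

# Hypothetical stylometric attributes (e.g. function-word frequency,
# average sentence length) with a hypothetical relevance ranking.
data = {"freq_the": [0.1, 0.4, 0.9, 0.5],
        "avg_sentence_len": [12.0, 25.0, 18.0, 30.0]}
ranking = ["freq_the", "avg_sentence_len"]
partial = gradual_discretise(data, ranking, k=1)  # discretise the top attribute only
```

With `k=1`, only `freq_the` is replaced by bin indices while `avg_sentence_len` stays continuous; raising `k` reproduces the gradual character of the procedure.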
21 pages, 5009 KiB  
Article
A Novel Classification Method: Neighborhood-Based Positive Unlabeled Learning Using Decision Tree (NPULUD)
by Bita Ghasemkhani, Kadriye Filiz Balbal, Kokten Ulas Birant and Derya Birant
Entropy 2024, 26(5), 403; https://doi.org/10.3390/e26050403 - 4 May 2024
Viewed by 611
Abstract
In a standard binary supervised classification task, the existence of both negative and positive samples in the training dataset is required to construct a classification model. However, this condition is not met in certain applications where only one class of samples is obtainable. [...] Read more.
In a standard binary supervised classification task, the existence of both negative and positive samples in the training dataset is required to construct a classification model. However, this condition is not met in certain applications where only one class of samples is obtainable. To overcome this problem, a different classification method, which learns from positive and unlabeled (PU) data, must be incorporated. In this study, a novel method is presented: neighborhood-based positive unlabeled learning using a decision tree (NPULUD). First, NPULUD uses the nearest neighborhood approach for the PU strategy and then employs a decision tree algorithm for the classification task by utilizing the entropy measure. Entropy plays a pivotal role in assessing the level of uncertainty in the training dataset as the decision tree is developed for classification. Through experiments, we validated our method on 24 real-world datasets. The proposed method attained an average accuracy of 87.24%, while the traditional supervised learning approach obtained an average accuracy of 83.99% on the same datasets. Additionally, our method achieved a statistically notable improvement (7.74%) over state-of-the-art peers, on average. Full article
(This article belongs to the Special Issue Entropy in Real-World Datasets and Its Impact on Machine Learning II)
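The two ingredients named in the abstract — a neighbourhood-based step that turns PU data into usable training labels, and entropy as the decision tree's impurity measure — can be sketched generically. This is an illustrative outline of those general ideas, not the exact NPULUD rule; the distance threshold and sample points are invented for the example:

```python
import math

def entropy(labels):
    """Shannon entropy of a label list, the impurity measure used by
    entropy-based decision-tree splits (ID3/C4.5-style)."""
    n = len(labels)
    counts = {c: labels.count(c) for c in set(labels)}
    return -sum((k / n) * math.log2(k / n) for k in counts.values())

def reliable_negatives(positives, unlabeled, threshold):
    """Neighbourhood step of PU learning: an unlabeled point whose distance
    to its nearest known positive exceeds the threshold is treated as a
    likely negative, giving the tree both classes to learn from."""
    return [u for u in unlabeled
            if min(math.dist(u, p) for p in positives) > threshold]

pos = [(0.0, 0.0), (0.2, 0.1)]
unl = [(0.1, 0.1), (3.0, 3.0), (2.5, 2.8)]
negs = reliable_negatives(pos, unl, threshold=1.0)
print(negs)                              # [(3.0, 3.0), (2.5, 2.8)]
print(entropy(["p", "p", "n", "n"]))     # 1.0: maximally uncertain split
```

Once likely negatives are identified, a standard entropy-driven decision tree can be trained on the combined positive and likely-negative samples.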