Editor’s Choice Articles

Editor’s Choice articles are based on recommendations by the scientific editors of MDPI journals from around the world. Editors select a small number of articles recently published in the journal that they believe will be particularly interesting to readers, or important in the respective research area. The aim is to provide a snapshot of some of the most exciting work published in the various research areas of the journal.

53 pages, 574 KiB  
Article
Mathematical Models of Consciousness
by Johannes Kleiner
Entropy 2020, 22(6), 609; https://doi.org/10.3390/e22060609 - 30 May 2020
Cited by 22 | Viewed by 9878
Abstract
In recent years, promising mathematical models have been proposed that aim to describe conscious experience and its relation to the physical domain. Whereas the axioms and metaphysical ideas of these theories have been carefully motivated, their mathematical formalism has not. In this article, we aim to remedy this situation. We give an account of what warrants mathematical representation of phenomenal experience, derive a general mathematical framework that takes into account consciousness’ epistemic context, and study which mathematical structures some of the key characteristics of conscious experience imply, showing precisely where mathematical approaches allow us to go beyond what the standard methodology can do. The result is a general mathematical framework for models of consciousness that can be employed in the theory-building process.
(This article belongs to the Special Issue Models of Consciousness)
44 pages, 628 KiB  
Review
What Is So Special about Quantum Clicks?
by Karl Svozil
Entropy 2020, 22(6), 602; https://doi.org/10.3390/e22060602 - 28 May 2020
Cited by 24 | Viewed by 4686
Abstract
This is an elaboration of the “extra” advantage of the performance of quantized physical systems over classical ones, both in terms of single outcomes as well as probabilistic predictions. From a formal point of view, it is based on entities related to (dual) vectors in (dual) Hilbert spaces, as compared to the Boolean algebra of subsets of a set and the additive measures they support.
(This article belongs to the Special Issue Quantum Probability and Randomness II)
25 pages, 494 KiB  
Article
Differential Parametric Formalism for the Evolution of Gaussian States: Nonunitary Evolution and Invariant States
by Julio A. López-Saldívar, Margarita A. Man’ko and Vladimir I. Man’ko
Entropy 2020, 22(5), 586; https://doi.org/10.3390/e22050586 - 23 May 2020
Cited by 13 | Viewed by 3144
Abstract
In the differential approach elaborated, we study the evolution of the parameters of Gaussian, mixed, continuous variable density matrices, whose dynamics are given by Hermitian Hamiltonians expressed as quadratic forms of the position and momentum operators or quadrature components. Specifically, we obtain in generic form the differential equations for the covariance matrix, the mean values, and the density matrix parameters of a multipartite Gaussian state, unitarily evolving according to a Hamiltonian Ĥ. We also present the corresponding differential equations, which describe the nonunitary evolution of the subsystems. The resulting nonlinear equations are used to solve the dynamics of the system instead of the Schrödinger equation. The formalism elaborated allows us to define new specific invariant and quasi-invariant states, as well as states with invariant covariance matrices, i.e., states where only the mean values evolve according to the classical Hamilton equations. By using density matrices in the position and in the tomographic-probability representations, we study examples of these properties. As examples, we present novel invariant states for the two-mode frequency converter and quasi-invariant states for the bipartite parametric amplifier.
(This article belongs to the Special Issue Quantum Probability and Randomness II)
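For orientation, the unitary part of such dynamics has a compact textbook form. The following is a generic sketch in standard symplectic notation, not the authors' exact parametrization: for a quadratic Hamiltonian in the quadratures, Gaussian states stay Gaussian and their first two moments evolve autonomously.

```latex
% Quadratures \hat{r} = (\hat{q}_1, \hat{p}_1, \dots)^{\mathsf T}, B symmetric,
% \Omega the symplectic form ([\hat{r}_j,\hat{r}_k] = i\,\Omega_{jk}, \hbar = 1):
\hat{H} = \tfrac{1}{2}\,\hat{r}^{\mathsf T} B\,\hat{r},
\qquad
\frac{d\langle\hat{r}\rangle}{dt} = \Omega B\,\langle\hat{r}\rangle,
\qquad
\frac{d\sigma}{dt} = (\Omega B)\,\sigma + \sigma\,(\Omega B)^{\mathsf T}.
```

The closure of these moment equations is what allows parameter dynamics to stand in for the Schrödinger equation, as the abstract notes; the paper's nonunitary subsystem equations generalize the covariance equation above.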
14 pages, 1842 KiB  
Article
A Method to Present and Analyze Ensembles of Information Sources
by Nicholas M. Timme, David Linsenbardt and Christopher C. Lapish
Entropy 2020, 22(5), 580; https://doi.org/10.3390/e22050580 - 21 May 2020
Viewed by 3700
Abstract
Information theory is a powerful tool for analyzing complex systems. In many areas of neuroscience, it is now possible to gather data from large ensembles of neural variables (e.g., data from many neurons, genes, or voxels). The individual variables can be analyzed with information theory to provide estimates of information shared between variables (forming a network between variables), or between neural variables and other variables (e.g., behavior or sensory stimuli). However, it can be difficult to (1) evaluate if the ensemble is significantly different from what would be expected in a purely noisy system and (2) determine if two ensembles are different. Herein, we introduce relatively simple methods to address these problems by analyzing ensembles of information sources. We demonstrate how an ensemble built of mutual information connections can be compared to null surrogate data to determine if the ensemble is significantly different from noise. Next, we show how two ensembles can be compared using a randomization process to determine if the sources in one contain more information than the other. All code necessary to carry out these analyses and demonstrations is provided.
(This article belongs to the Special Issue Information Theory in Computational Neuroscience)
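As a toy version of the first test described here—comparing an observed information value against shuffle surrogates—the following is a minimal pure-Python sketch. The plug-in estimator and the function names are illustrative assumptions, not the authors' code (the paper ships its own analysis code).

```python
import math
import random
from collections import Counter

def mutual_information(xs, ys):
    """Plug-in estimate of I(X;Y) in bits for paired discrete samples."""
    n = len(xs)
    px, py, pxy = Counter(xs), Counter(ys), Counter(zip(xs, ys))
    mi = 0.0
    for (x, y), c in pxy.items():
        p_joint = c / n
        mi += p_joint * math.log2(p_joint / ((px[x] / n) * (py[y] / n)))
    return mi

def surrogate_p_value(xs, ys, n_surrogates=1000, seed=0):
    """Null-test an MI value by shuffling ys, which destroys the pairing
    while preserving both marginal distributions."""
    rng = random.Random(seed)
    observed = mutual_information(xs, ys)
    ys = list(ys)
    exceed = 0
    for _ in range(n_surrogates):
        rng.shuffle(ys)
        if mutual_information(xs, ys) >= observed:
            exceed += 1
    return observed, (exceed + 1) / (n_surrogates + 1)
```

A perfectly coupled pair yields 1 bit of mutual information and a small surrogate p-value; the same machinery applies edge-by-edge when building an ensemble of information sources.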
26 pages, 7851 KiB  
Article
Goal-Directed Planning for Habituated Agents by Active Inference Using a Variational Recurrent Neural Network
by Takazumi Matsumoto and Jun Tani
Entropy 2020, 22(5), 564; https://doi.org/10.3390/e22050564 - 18 May 2020
Cited by 26 | Viewed by 6319
Abstract
It is crucial to ask how agents can achieve goals by generating action plans using only partial models of the world acquired through habituated sensory-motor experiences. Although many existing robotics studies use a forward model framework, there are generalization issues with high degrees of freedom. The current study shows that the predictive coding (PC) and active inference (AIF) frameworks, which employ a generative model, can develop better generalization by learning a prior distribution in a low dimensional latent state space representing probabilistic structures extracted from well habituated sensory-motor trajectories. In our proposed model, learning is carried out by inferring optimal latent variables as well as synaptic weights for maximizing the evidence lower bound, while goal-directed planning is accomplished by inferring latent variables for maximizing the estimated lower bound. Our proposed model was evaluated with both simple and complex robotic tasks in simulation, which demonstrated sufficient generalization in learning with limited training data by setting an intermediate value for a regularization coefficient. Furthermore, comparative simulation results show that the proposed model outperforms a conventional forward model in goal-directed planning, due to the learned prior confining the search of motor plans within the range of habituated trajectories.
35 pages, 1693 KiB  
Article
Re-Thinking the World with Neutral Monism: Removing the Boundaries Between Mind, Matter, and Spacetime
by Michael Silberstein and William Stuckey
Entropy 2020, 22(5), 551; https://doi.org/10.3390/e22050551 - 14 May 2020
Cited by 2 | Viewed by 10899
Abstract
Herein we are not interested in merely using dynamical systems theory, graph theory, information theory, etc., to model the relationship between brain dynamics and networks, and various states and degrees of conscious processes. We are interested in the question of how phenomenal conscious experience and fundamental physics are most deeply related. Any attempt to mathematically and formally model conscious experience and its relationship to physics must begin with some metaphysical assumption in mind about the nature of conscious experience, the nature of matter and the nature of the relationship between them. These days the most prominent metaphysical fixed points are strong emergence or some variant of panpsychism. In this paper we will detail another distinct metaphysical starting point known as neutral monism. In particular, we will focus on a variant of the neutral monism of William James and Bertrand Russell. Rather than starting with physics as fundamental, as both strong emergence and panpsychism do in their own way, our goal is to suggest how one might derive fundamental physics from neutral monism. Thus, starting with two axioms grounded in our characterization of neutral monism, we will sketch out a derivation of and explanation for some key features of relativity and quantum mechanics that suggest a unity between those two theories that is generally unappreciated. Our mode of explanation throughout will be of the principle as opposed to constructive variety in something like Einstein’s sense of those terms. We will argue throughout that a bias towards property dualism and a bias toward reductive dynamical and constructive explanation lead to the hard problem and the explanatory gap in consciousness studies, and lead to serious unresolved problems in fundamental physics, such as the measurement problem and the mystery of entanglement in quantum mechanics and lack of progress in producing an empirically well-grounded theory of quantum gravity. We hope to show that given our take on neutral monism and all that follows from it, the aforementioned problems can be satisfactorily resolved leaving us with a far more intuitive and commonsense model of the relationship between conscious experience and physics.
(This article belongs to the Special Issue Models of Consciousness)
18 pages, 865 KiB  
Article
Prediction and Variable Selection in High-Dimensional Misspecified Binary Classification
by Konrad Furmańczyk and Wojciech Rejchel
Entropy 2020, 22(5), 543; https://doi.org/10.3390/e22050543 - 13 May 2020
Cited by 5 | Viewed by 3755
Abstract
In this paper, we consider prediction and variable selection in misspecified binary classification models under the high-dimensional scenario. We focus on two approaches to classification, which are computationally efficient but lead to model misspecification. The first is to apply penalized logistic regression to the classification data, which possibly do not follow the logistic model. The second method is even more radical: we simply treat the class labels of objects as if they were numbers and apply penalized linear regression. In this paper, we investigate these two approaches thoroughly and provide conditions which guarantee that they are successful in prediction and variable selection. Our results hold even if the number of predictors is much larger than the sample size. The paper is completed by experimental results.
20 pages, 2082 KiB  
Article
Inferring What to Do (And What Not to)
by Thomas Parr
Entropy 2020, 22(5), 536; https://doi.org/10.3390/e22050536 - 11 May 2020
Cited by 7 | Viewed by 4279
Abstract
In recent years, the “planning as inference” paradigm has become central to the study of behaviour. The advance it offers is the formalisation of motivation as a prior belief about “how I am going to act”. This paper provides an overview of the factors that contribute to this prior. These are rooted in optimal experimental design, information theory, and statistical decision making. We unpack how these factors imply a functional architecture for motivated behaviour. This raises an important question: how can we put this architecture to work in the service of understanding observed neurobiological structure? To answer this question, we draw from established techniques in experimental studies of behaviour. Typically, these examine perturbations of the nervous system—including pathological insults or optogenetic manipulations—and their influence on behaviour. Here, we argue that the message passing that emerges from inferring what to do can be similarly perturbed. If a given perturbation elicits the same behaviours as a focal brain lesion, this provides a functional interpretation of empirical findings and an anatomical grounding for theoretical results. We highlight examples of this approach that influence different sorts of goal-directed behaviour, active learning, and decision making. Finally, we summarise their implications for the neuroanatomy of inferring what to do (and what not to).
21 pages, 1102 KiB  
Article
Non-Hermitian Hamiltonians and Quantum Transport in Multi-Terminal Conductors
by Nikolay M. Shubin, Alexander A. Gorbatsevich and Gennadiy Ya. Krasnikov
Entropy 2020, 22(4), 459; https://doi.org/10.3390/e22040459 - 17 Apr 2020
Cited by 3 | Viewed by 3677
Abstract
We study the transport properties of multi-terminal Hermitian structures within the non-equilibrium Green’s function formalism in a tight-binding approximation. We show that non-Hermitian Hamiltonians naturally appear in the description of coherent tunneling and are indispensable for the derivation of a general compact expression for the lead-to-lead transmission coefficients of an arbitrary multi-terminal system. This expression can be easily analyzed, and a robust set of conditions for finding zero and unity transmissions (even in the presence of extra electrodes) can be formulated. Using the proposed formalism, a detailed comparison between three- and two-terminal systems is performed, and it is shown, in particular, that transmission at bound states in the continuum does not change with the insertion of a third electrode. The main conclusions are illustratively exemplified by some three-terminal toy models. For instance, the influence of the tunneling coupling to the gate electrode is discussed for a model of a quantum interference transistor. The results of this paper will be of high interest, in particular, within the field of quantum design of molecular electronic devices.
(This article belongs to the Special Issue Quantum Dynamics with Non-Hermitian Hamiltonians)
36 pages, 531 KiB  
Article
Towards a Unified Theory of Learning and Information
by Ibrahim Alabdulmohsin
Entropy 2020, 22(4), 438; https://doi.org/10.3390/e22040438 - 13 Apr 2020
Cited by 4 | Viewed by 4499
Abstract
In this paper, we introduce the notion of “learning capacity” for algorithms that learn from data, which is analogous to the Shannon channel capacity for communication systems. We show how “learning capacity” bridges the gap between statistical learning theory and information theory, and use it to derive generalization bounds for finite hypothesis spaces, differential privacy, and countable domains, among others. Moreover, we prove that under the Axiom of Choice, the existence of an empirical risk minimization (ERM) rule that has a vanishing learning capacity is equivalent to the assertion that the hypothesis space has a finite Vapnik–Chervonenkis (VC) dimension, thus establishing an equivalence relation between two of the most fundamental concepts in statistical learning theory and information theory. In addition, we show how the learning capacity of an algorithm provides important qualitative results, such as on the relation between generalization and algorithmic stability, information leakage, and data processing. Finally, we conclude by listing some open problems and suggesting future directions of research.
10 pages, 618 KiB  
Article
Measuring the Tangle of Three-Qubit States
by Adrián Pérez-Salinas, Diego García-Martín, Carlos Bravo-Prieto and José I. Latorre
Entropy 2020, 22(4), 436; https://doi.org/10.3390/e22040436 - 11 Apr 2020
Cited by 12 | Viewed by 5132
Abstract
We present a quantum circuit that transforms an unknown three-qubit state into its canonical form, up to relative phases, given many copies of the original state. The circuit is made of three single-qubit parametrized quantum gates, and the optimal values for the parameters are learned in a variational fashion. Once this transformation is achieved, direct measurement of outcome probabilities in the computational basis provides an estimate of the tangle, which quantifies genuine tripartite entanglement. We perform simulations on a set of random states under different noise conditions to assess the validity of the method.
(This article belongs to the Section Quantum Information)
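For context, and hedged against the standard literature rather than this paper's own text: in the Acín canonical form of a three-qubit pure state, which is the kind of form such a circuit targets, the tangle reduces to a product of two computational-basis probabilities, which is what makes direct measurement sufficient.

```latex
% Ac\'in canonical form of a three-qubit pure state:
|\psi\rangle = \lambda_0|000\rangle + \lambda_1 e^{i\varphi}|100\rangle
             + \lambda_2|101\rangle + \lambda_3|110\rangle + \lambda_4|111\rangle,
% for which the three-tangle is
\tau = 4\,\lambda_0^2\,\lambda_4^2 = 4\,p_{000}\,p_{111}.
```

As a sanity check, the GHZ state has \(\lambda_0 = \lambda_4 = 1/\sqrt{2}\) and \(\tau = 1\), while the W state has \(\lambda_4 = 0\) and \(\tau = 0\).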
21 pages, 1900 KiB  
Article
Sensitivity Analysis of Selected Parameters in the Order Picking Process Simulation Model, with Randomly Generated Orders
by Mariusz Kostrzewski
Entropy 2020, 22(4), 423; https://doi.org/10.3390/e22040423 - 9 Apr 2020
Cited by 27 | Viewed by 4311
Abstract
Sensitivity analysis of selected parameters in simulation models of logistics facilities is one of the key aspects of efficient, self-aware management. In order to develop simulation models adequate to the processes of real logistics facilities, it is important to input actual data on material flows at the entry of the models, whereas most models assume unified load units as a default. To provide such data, pseudorandom number generators (PRNGs) are used. The original generator described in the paper was employed to generate picking lists for the order picking process (OPP). This ensures a hypothetical, yet close-to-reality, process in terms of unpredictable customers’ orders. Models with applied PRNGs ensure a more detailed and more understandable representation of OPPs in comparison to analytical models. Therefore, the author’s motivation was to present the original model as a tool for enterprise managers who might control the OPP and the devices and means of transport employed therein. The outcomes and implications of the contribution concern selected possibilities in OPP analyses, which might be developed and solved within the model. The presented model has some limitations. One of them is the assumption that only one means of transport per aisle is taken into consideration. Another is the indirect randomization of certain model parameters.
22 pages, 3546 KiB  
Article
On Geometry of Information Flow for Causal Inference
by Sudam Surasinghe and Erik M. Bollt
Entropy 2020, 22(4), 396; https://doi.org/10.3390/e22040396 - 30 Mar 2020
Cited by 3 | Viewed by 5836
Abstract
Causal inference is perhaps one of the most fundamental concepts in science, beginning originally from the works of some of the ancient philosophers, through today, but also woven strongly into current work from statisticians, machine learning experts, and scientists from many other fields. This paper takes the perspective of information flow, which includes the Nobel-prize-winning work on Granger causality and the recently highly popular transfer entropy, these being probabilistic in nature. Our main contribution will be to develop analysis tools that will allow a geometric interpretation of information flow as a causal inference indicated by positive transfer entropy. We will describe the effective dimensionality of an underlying manifold as projected into the outcome space that summarizes information flow. Therefore, contrasting the probabilistic and geometric perspectives, we will introduce a new measure of causal inference based on the fractal correlation dimension conditionally applied to competing explanations of future forecasts, which we will write GeoC_{y→x}. This avoids some of the boundedness issues that we show exist for the transfer entropy, T_{y→x}. We will highlight our discussions with data developed from synthetic models of successively more complex nature: these include the Hénon map example, and finally a real physiological example relating breathing and heart rate function.
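The geometric measure is this paper's contribution; the probabilistic quantity it is contrasted with, the transfer entropy T_{y→x} = I(X_{t+1}; Y_t | X_t), can be sketched in a few lines for discrete (e.g. symbolized) series. The function name and the simple plug-in estimator here are illustrative assumptions, not the authors' implementation.

```python
import math
from collections import Counter

def transfer_entropy(x, y):
    """Plug-in estimate of T_{y->x} = I(X_{t+1}; Y_t | X_t) in bits,
    for discrete time series of equal length."""
    triples = list(zip(x[1:], y[:-1], x[:-1]))   # (x_{t+1}, y_t, x_t)
    n = len(triples)
    c_xyx = Counter(triples)
    c_yx = Counter((yt, xt) for _, yt, xt in triples)
    c_xx = Counter((x1, xt) for x1, _, xt in triples)
    c_x = Counter(xt for _, _, xt in triples)
    te = 0.0
    for (x1, yt, xt), c in c_xyx.items():
        # p(x1|yt,xt) / p(x1|xt) expressed with raw counts
        te += (c / n) * math.log2((c * c_x[xt]) / (c_yx[(yt, xt)] * c_xx[(x1, xt)]))
    return te
```

Plug-in estimates like this are biased upward on finite data, one of the practical issues that motivates complementary measures.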
29 pages, 4801 KiB  
Article
Spectral Structure and Many-Body Dynamics of Ultracold Bosons in a Double-Well
by Frank Schäfer, Miguel A. Bastarrachea-Magnani, Axel U. J. Lode, Laurent de Forges de Parny and Andreas Buchleitner
Entropy 2020, 22(4), 382; https://doi.org/10.3390/e22040382 - 26 Mar 2020
Cited by 4 | Viewed by 4228
Abstract
We examine the spectral structure and many-body dynamics of two and three repulsively interacting bosons trapped in a one-dimensional double-well, for variable barrier height, inter-particle interaction strength, and initial conditions. By exact diagonalization of the many-particle Hamiltonian, we specifically explore the dynamical behavior of the particles launched either at the single-particle ground state or saddle-point energy, in a time-independent potential. We complement these results by a characterization of the cross-over from diabatic to quasi-adiabatic evolution under finite-time switching of the potential barrier, via the associated time evolution of a single particle’s von Neumann entropy. This is achieved with the help of the multiconfigurational time-dependent Hartree method for indistinguishable particles (MCTDH-X)—which also allows us to extrapolate our results for increasing particle numbers.
(This article belongs to the Special Issue Quantum Entropies and Complexity)
24 pages, 1285 KiB  
Article
Endoreversible Modeling of a Hydraulic Recuperation System
by Robin Masser and Karl Heinz Hoffmann
Entropy 2020, 22(4), 383; https://doi.org/10.3390/e22040383 - 26 Mar 2020
Cited by 22 | Viewed by 3378
Abstract
Hybrid drive systems able to recover and reuse the braking energy of a vehicle can reduce fuel consumption, air pollution and operating costs. Among them, hydraulic recuperation systems are particularly suitable for commercial vehicles, especially if they are already equipped with a hydraulic system. Thus far, the investigation of such systems has been limited to individual components or to optimizing their control. In this paper, we focus on thermodynamic effects and their impact on the overall system’s energy saving potential, using endoreversible thermodynamics as the ideal framework for modeling. The dynamical behavior of the hydraulic recuperation system as well as the energy savings are estimated using real data of a vehicle suitable for application. Here, energy savings of around 10% when accelerating the vehicle and a reduction of around 58% in the energy transferred to the conventional disc brakes are predicted. We further vary certain design and loss parameters—such as accumulator volume, displacement of the hydraulic unit, heat transfer coefficients or pipe diameter—and discuss their influence on the energy saving potential of the system. It turns out that heat transfer coefficients and pipe diameter are of less importance than accumulator volume and displacement of the hydraulic unit.
(This article belongs to the Section Thermodynamics)
28 pages, 4982 KiB  
Article
Theory, Analysis, and Applications of the Entropic Lattice Boltzmann Model for Compressible Flows
by Nicolò Frapolli, Shyam Chikatamarla and Ilya Karlin
Entropy 2020, 22(3), 370; https://doi.org/10.3390/e22030370 - 24 Mar 2020
Cited by 16 | Viewed by 5301
Abstract
The entropic lattice Boltzmann method for the simulation of compressible flows is studied in detail and new opportunities for extending operating range are explored. We address limitations on the maximum Mach number and temperature range allowed for a given lattice. Solutions to both these problems are presented by modifying the original lattices without increasing the number of discrete velocities and without altering the numerical algorithm. In order to increase the Mach number, we employ shifted lattices while the magnitude of lattice speeds is increased in order to extend the temperature range. Accuracy and efficiency of the shifted lattices are demonstrated with simulations of the supersonic flow field around a diamond-shaped and NACA0012 airfoil, the subsonic, transonic, and supersonic flow field around the Busemann biplane, and the interaction of vortices with a planar shock wave. For the lattices with extended temperature range, the model is validated with the simulation of the Richtmyer–Meshkov instability. We also discuss some key ideas of how to reduce the number of discrete speeds in three-dimensional simulations by pruning of the higher-order lattices, and introduce a new construction of the corresponding guided equilibrium by entropy minimization.
(This article belongs to the Special Issue Entropies: Between Information Geometry and Kinetics)
32 pages, 5963 KiB  
Review
On the Evidence of Thermodynamic Self-Organization during Fatigue: A Review
by Mehdi Naderi
Entropy 2020, 22(3), 372; https://doi.org/10.3390/e22030372 - 24 Mar 2020
Cited by 7 | Viewed by 5900
Abstract
In this review paper, the evidence for and application of thermodynamic self-organization are reviewed for metals, typically single crystals, subjected to cyclic loading. The theory of self-organization in thermodynamic processes far from equilibrium is a cutting-edge theme for the development of a new generation of materials. It can be interpreted as the formation of globally coherent patterns, configurations and orderliness through local interactions, via a “cascade evolution of dissipative structures”. Non-equilibrium thermodynamics, entropy, and dissipative structures connected to the self-organization phenomenon (patterning, orderliness) are briefly discussed. Several examples are reviewed in detail to show how thermodynamic self-organization can emerge from a non-equilibrium process: fatigue. Evidence including dislocation density evolution, stored energy, temperature, and acoustic signals can be considered a signature of self-organization. Most of the attention is given to the analogy between persistent slip bands (PSBs) and self-organization in single-crystal metals. Some aspects of the stability of dislocations during the fatigue of single crystals are discussed using the formulation of excess entropy generation.
(This article belongs to the Special Issue Review Papers for Entropy)

14 pages, 2541 KiB  
Review
New Invariant Expressions in Chemical Kinetics
by Gregory S. Yablonsky, Daniel Branco, Guy B. Marin and Denis Constales
Entropy 2020, 22(3), 373; https://doi.org/10.3390/e22030373 - 24 Mar 2020
Cited by 6 | Viewed by 3248
Abstract
This paper presents a review of our original results obtained during the last decade. These results have been found theoretically for classical mass-action-law models of chemical kinetics and justified experimentally. In contrast with the traditional invariances, they relate to a special battery of [...] Read more.
This paper presents a review of our original results obtained during the last decade. These results have been found theoretically for classical mass-action-law models of chemical kinetics and justified experimentally. In contrast with the traditional invariances, they relate to a special battery of kinetic experiments rather than a single experiment. Two types of invariance are distinguished and described in detail: thermodynamic invariants, i.e., special combinations of kinetic dependences that yield the equilibrium constants or simple functions of the equilibrium constants; and “mixed” kinetico-thermodynamic invariants, which are functions of both equilibrium constants and non-thermodynamic ratios of kinetic coefficients. Full article
(This article belongs to the Special Issue Entropies: Between Information Geometry and Kinetics)

18 pages, 3218 KiB  
Article
BiEntropy, TriEntropy and Primality
by Grenville J. Croll
Entropy 2020, 22(3), 311; https://doi.org/10.3390/e22030311 - 10 Mar 2020
Cited by 4 | Viewed by 7374
Abstract
The order and disorder of binary representations of the natural numbers < 2⁸ is measured using the BiEntropy function. Significant differences are detected between the primes and the non-primes. The BiEntropic prime density is shown to be quadratic with a very small [...] Read more.
The order and disorder of binary representations of the natural numbers < 2⁸ is measured using the BiEntropy function. Significant differences are detected between the primes and the non-primes. The BiEntropic prime density is shown to be quadratic with a very small Gaussian distributed error. The work is repeated in binary using a Monte Carlo simulation of a sample of natural numbers < 2³² and in trinary for all natural numbers < 3⁹ with similar but cubic results. We found a significant relationship between BiEntropy and TriEntropy such that we can discriminate between the primes and numbers divisible by six. We discuss the theoretical basis of these results and show how they generalise to give a tight bound on the variance of π(x) − Li(x) for all x. This bound is much tighter than the bound given by von Koch in 1901 as an equivalence for proof of the Riemann Hypothesis. Since the primes are Gaussian due to a simple induction on the binary derivative, this implies that the twin primes conjecture is true. We also provide absolutely convergent asymptotes for the numbers of Fermat and Mersenne primes in the appendices. Full article
(This article belongs to the Section Information Theory, Probability and Statistics)
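Croll's BiEntropy function weights the Shannon entropies of a bit string and its successive binary derivatives (the XOR of adjacent bits). The following is a minimal sketch of the published definition — the weight 2^k and the normalisation 2^(n−1) − 1 follow the earlier BiEntropy paper — intended as an illustration, not the article's reference implementation:

```python
import math

def binary_derivative(bits):
    """XOR of each adjacent pair of bits: the 'binary derivative'."""
    return [a ^ b for a, b in zip(bits, bits[1:])]

def shannon(p):
    """Binary Shannon entropy, with the convention 0*log(0) = 0."""
    if p in (0.0, 1.0):
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def bientropy(bits):
    """Weighted entropy of a string and its successive binary derivatives:
    weight 2**k for the k-th derivative, normalised by 2**(n-1) - 1."""
    n = len(bits)
    total = 0.0
    s = list(bits)
    for k in range(n - 1):
        p = sum(s) / len(s)
        total += shannon(p) * 2 ** k
        s = binary_derivative(s)
    return total / (2 ** (n - 1) - 1)

# A perfectly ordered string has BiEntropy 0; mixed strings fall in (0, 1].
print(bientropy([1, 1, 1, 1]))  # 0.0
print(bientropy([1, 0, 1, 0]))  # 1/7: only the undifferentiated string is mixed
```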

17 pages, 538 KiB  
Article
Two Faced Janus of Quantum Nonlocality
by Andrei Khrennikov
Entropy 2020, 22(3), 303; https://doi.org/10.3390/e22030303 - 6 Mar 2020
Cited by 31 | Viewed by 3825
Abstract
This paper is a new step towards understanding why “quantum nonlocality” is a misleading concept. Metaphorically speaking, “quantum nonlocality” is Janus faced. One face is an apparent nonlocality of the Lüders projection and another face is Bell nonlocality (a wrong conclusion that the [...] Read more.
This paper is a new step towards understanding why “quantum nonlocality” is a misleading concept. Metaphorically speaking, “quantum nonlocality” is Janus faced. One face is an apparent nonlocality of the Lüders projection and another face is Bell nonlocality (a wrong conclusion that the violation of Bell type inequalities implies the existence of mysterious instantaneous influences between distant physical systems). According to the Lüders projection postulate, a quantum measurement performed on one of two distant entangled physical systems modifies their compound quantum state instantaneously. Therefore, if the quantum state is considered to be an attribute of the individual physical system and if one assumes that experimental outcomes are produced in a perfectly random way, one quickly arrives at a contradiction. This is a primary source of speculations about spooky action at a distance. Bell nonlocality as defined above has been explained and rejected by several authors; thus, we concentrate in this paper on the apparent nonlocality of the Lüders projection. As already pointed out by Einstein, the quantum paradoxes disappear if one adopts the purely statistical interpretation of quantum mechanics (QM). In the statistical interpretation of QM, if probabilities are considered to be objective properties of random experiments, we show that the Lüders projection corresponds to the passage from joint probabilities describing the full set of data to marginal conditional probabilities describing particular subsets of data. If one adopts a subjective interpretation of probabilities, such as QBism, then the Lüders projection corresponds to standard Bayesian updating of the probabilities. The latter represents degrees of belief of local agents about outcomes of individual measurements which are placed, or which will be placed, at distant locations. In both approaches, the probability transformation does not happen in physical space, but only in information space. Thus, all speculations about spooky interactions or spooky predictions at a distance are simply misleading. Coming back to Bell nonlocality, we recall that in a recent paper we demonstrated, using exclusively the quantum formalism, that CHSH inequalities may be violated for some quantum states only because of the incompatibility of quantum observables and Bohr’s complementarity. Finally, we explain that our criticism of quantum nonlocality is in the spirit of the Hertz–Boltzmann methodology of scientific theories. Full article
(This article belongs to the Special Issue Quantum Probability and Randomness II)
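The CHSH violation invoked at the end of the abstract can be reproduced from the quantum formalism alone: for the singlet state the correlation of spin measurements along angles a and b is E(a, b) = −cos(a − b), and the standard angle choices yield 2√2, Tsirelson's bound. A small illustrative check (textbook physics, not code from the paper):

```python
import math

def E(a, b):
    # Singlet-state correlation for spin measurements along angles a, b.
    return -math.cos(a - b)

# Standard optimal measurement angles for maximal CHSH violation.
a, a2 = 0.0, math.pi / 2
b, b2 = math.pi / 4, 3 * math.pi / 4

chsh = abs(E(a, b) - E(a, b2) + E(a2, b) + E(a2, b2))
print(chsh)  # 2*sqrt(2) ≈ 2.828, exceeding the classical bound of 2
```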

12 pages, 8723 KiB  
Article
Exploring the Phase Space of Multi-Principal-Element Alloys and Predicting the Formation of Bulk Metallic Glasses
by Mirko Gabski, Martin Peterlechner and Gerhard Wilde
Entropy 2020, 22(3), 292; https://doi.org/10.3390/e22030292 - 2 Mar 2020
Cited by 3 | Viewed by 3476
Abstract
Multi-principal-element alloys share a set of thermodynamic and structural parameters that, in their range of adopted values, correlate to the tendency of the alloys to assume a solid solution, whether as a crystalline or an amorphous phase. Based on empirical correlations, this work [...] Read more.
Multi-principal-element alloys share a set of thermodynamic and structural parameters that, in their range of adopted values, correlate to the tendency of the alloys to assume a solid solution, whether as a crystalline or an amorphous phase. Based on empirical correlations, this work presents a computational method for the prediction of possible glass-forming compositions for a chosen alloy system, as well as the calculation of their critical cooling rates. The obtained results compare well with experimental data for Pd-Ni-P, micro-alloyed Pd-Ni-P, Cu-Mg-Ca, and Cu-Zr-Ti. Furthermore, a random-number-generator-based algorithm is employed to explore glass-forming candidate alloys with a minimum critical cooling rate, reducing the number of datapoints necessary to find suitable glass-forming compositions. A comparison with experimental results for the quaternary Ti-Zr-Cu-Ni system shows a promising overlap of calculation and experiment, implying that it is a reasonable method to find candidates for glass-forming alloys with a sufficiently low critical cooling rate to allow the formation of bulk metallic glasses. Full article
(This article belongs to the Special Issue Crystallization Thermodynamics)
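One of the standard empirical screening parameters for multi-principal-element alloys is the ideal configurational (mixing) entropy, ΔS_mix = −R Σ c_i ln c_i. The sketch below illustrates that single parameter only; the paper's method combines several such thermodynamic and structural parameters, which are not reproduced here:

```python
import math

R = 8.314  # gas constant, J/(mol*K)

def mixing_entropy(fractions):
    """Ideal configurational entropy -R * sum(c_i ln c_i) for mole fractions."""
    assert abs(sum(fractions) - 1.0) < 1e-9
    return -R * sum(c * math.log(c) for c in fractions if c > 0)

# Equiatomic five-component alloy: ΔS_mix = R ln 5 ≈ 13.4 J/(mol K),
# the usual "high-entropy alloy" benchmark value.
print(mixing_entropy([0.2] * 5))
```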

15 pages, 456 KiB  
Article
Deng Entropy Weighted Risk Priority Number Model for Failure Mode and Effects Analysis
by Haixia Zheng and Yongchuan Tang
Entropy 2020, 22(3), 280; https://doi.org/10.3390/e22030280 - 28 Feb 2020
Cited by 25 | Viewed by 4602
Abstract
Failure mode and effects analysis (FMEA), as a commonly used risk management method, has been extensively applied to the engineering domain. A vital parameter in FMEA is the risk priority number (RPN), which is the product of occurrence (O), severity (S), and detection [...] Read more.
Failure mode and effects analysis (FMEA), as a commonly used risk management method, has been extensively applied in the engineering domain. A vital parameter in FMEA is the risk priority number (RPN), which is the product of occurrence (O), severity (S), and detection (D) of a failure mode. To deal with the uncertainty in the assessments given by domain experts, a novel Deng entropy weighted risk priority number (DEWRPN) for FMEA is proposed in the framework of Dempster–Shafer evidence theory (DST). DEWRPN takes into consideration the relative importance of both risk factors and FMEA experts. The degree of uncertainty in the objective assessments coming from experts is measured by the Deng entropy. An expert’s weight is composed of the three risk factors’ weights obtained independently from the expert’s assessments. In DEWRPN, the strategy of assigning a weight to each expert is flexible and compatible with real decision-making situations. The entropy-based relative weight symbolizes the relative importance: the higher the degree of uncertainty of a risk factor from an expert, the lower the weight of the corresponding risk factor will be, and vice versa. We utilize Deng entropy to construct the exponential weight of each risk factor as well as an expert’s relative importance on an FMEA item. A case study is adopted to verify the practicability and effectiveness of the proposed model. Full article
(This article belongs to the Special Issue Entropy: The Scientific Tool of the 21st Century)
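The two quantities at the heart of the model are easy to state: the classical RPN is the product O × S × D, and the Deng entropy of a basic belief assignment m is E_d(m) = −Σ_A m(A) log₂[m(A)/(2^|A| − 1)]. A minimal sketch of both (the full DEWRPN weighting scheme is not reproduced here):

```python
import math

def rpn(o, s, d):
    """Classical risk priority number: occurrence * severity * detection."""
    return o * s * d

def deng_entropy(bba):
    """Deng entropy of a basic belief assignment.

    bba maps focal elements (frozensets) to masses; the (2**|A| - 1) term
    spreads the mass of a set-valued hypothesis over its non-empty subsets.
    """
    assert abs(sum(bba.values()) - 1.0) < 1e-9
    return -sum(m * math.log2(m / (2 ** len(A) - 1))
                for A, m in bba.items() if m > 0)

print(rpn(3, 7, 5))  # 105

# For singleton-only assignments, Deng entropy reduces to Shannon entropy.
bba = {frozenset({'a'}): 0.5, frozenset({'b'}): 0.5}
print(deng_entropy(bba))  # 1.0 bit
```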

19 pages, 509 KiB  
Review
Thermodynamic Limits and Optimality of Microbial Growth
by Nima P. Saadat, Tim Nies, Yvan Rousset and Oliver Ebenhöh
Entropy 2020, 22(3), 277; https://doi.org/10.3390/e22030277 - 28 Feb 2020
Cited by 18 | Viewed by 6900
Abstract
Understanding microbial growth with the use of mathematical models has a long history that dates back to the pioneering work of Jacques Monod in the 1940s. Monod’s famous growth law expressed microbial growth rate as a simple function of the limiting nutrient concentration. [...] Read more.
Understanding microbial growth with the use of mathematical models has a long history that dates back to the pioneering work of Jacques Monod in the 1940s. Monod’s famous growth law expressed the microbial growth rate as a simple function of the limiting nutrient concentration. However, explaining growth laws from underlying principles is extremely challenging. In the second half of the 20th century, numerous experimental approaches aimed at precisely measuring heat production during microbial growth to determine the entropy balance in a growing cell and to quantify the exported entropy. This led to the development of thermodynamic theories of microbial growth, which have generated fundamental understanding and identified the principal limitations of the growth process. Whereas these classical approaches ignored metabolic details and instead considered microbial metabolism as a black box, modern theories rely heavily on genomic resources to describe and model metabolism in great detail in order to explain microbial growth. Interestingly, however, thermodynamic constraints are often included in modern modeling approaches only in a rather superficial fashion, and it appears that recent modeling approaches and classical theories are rather disconnected fields. To stimulate a closer interaction between these fields, we here review various theoretical approaches that aim at describing microbial growth based on thermodynamics, and outline the resulting thermodynamic limits and optimality principles. We start with classical black box models of cellular growth, and continue with recent metabolic modeling approaches that include thermodynamics, before we place these models in the context of fundamental considerations based on non-equilibrium statistical mechanics. We conclude by identifying conceptual overlaps between the fields and suggest how the various types of theories and models can be integrated. We outline how concepts from one approach may help to inform or constrain another, and we demonstrate how genome-scale models can be used to infer key black box parameters, such as the energy of formation or the degree of reduction of biomass. Such integration will allow us to understand to what extent microbes can be viewed as thermodynamic machines, and how close they operate to theoretical optima. Full article
(This article belongs to the Special Issue Information Flow and Entropy Production in Biomolecular Networks)
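Monod's growth law, the starting point of the review, reads μ(S) = μ_max · S/(K_s + S): half-maximal growth at S = K_s, saturation at μ_max for S ≫ K_s. A one-line illustration (parameter values are arbitrary):

```python
def monod(S, mu_max, Ks):
    """Monod growth law: specific growth rate as a function of the
    limiting nutrient concentration S."""
    return mu_max * S / (Ks + S)

# At S = Ks the rate is half-maximal; at S >> Ks it saturates at mu_max.
print(monod(0.5, mu_max=1.0, Ks=0.5))    # 0.5
print(monod(500.0, mu_max=1.0, Ks=0.5))  # ≈ 0.999
```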

14 pages, 477 KiB  
Article
Maxwell’s Demon in Quantum Mechanics
by Orly Shenker and Meir Hemmo
Entropy 2020, 22(3), 269; https://doi.org/10.3390/e22030269 - 27 Feb 2020
Cited by 4 | Viewed by 4393
Abstract
Maxwell’s Demon is a thought experiment devised by J. C. Maxwell in 1867 in order to show that the Second Law of thermodynamics is not universal, since it has a counter-example. Since the Second Law is taken by many to provide an arrow [...] Read more.
Maxwell’s Demon is a thought experiment devised by J. C. Maxwell in 1867 in order to show that the Second Law of thermodynamics is not universal, since it has a counter-example. Since the Second Law is taken by many to provide an arrow of time, the threat to its universality threatens the account of temporal directionality as well. Various attempts to “exorcise” the Demon, by proving that it is impossible for one reason or another, have been made throughout the years, but none of them were successful. We have shown (in a number of publications) by a general state-space argument that Maxwell’s Demon is compatible with classical mechanics, and that the most recent solutions, based on Landauer’s thesis, are not general. In this paper we demonstrate that Maxwell’s Demon is also compatible with quantum mechanics. We do so by analyzing a particular (but highly idealized) experimental setup and proving that it violates the Second Law. Our discussion is in the framework of standard quantum mechanics; we give two separate arguments in the framework of quantum mechanics with and without the projection postulate. We address in our analysis the connection between measurement and erasure interactions and we show how these notions are applicable in the microscopic quantum mechanical structure. We discuss what might be the quantum mechanical counterpart of the classical notion of “macrostates”, thus explaining why our Quantum Demon setup works not only at the micro level but also at the macro level, properly understood. One implication of our analysis is that the Second Law cannot provide a universal lawlike basis for an account of the arrow of time; this account has to be sought elsewhere. Full article
(This article belongs to the Special Issue Time and Entropy)

11 pages, 1108 KiB  
Review
Some Notes on Counterfactuals in Quantum Mechanics
by Avshalom C. Elitzur and Eliahu Cohen
Entropy 2020, 22(3), 266; https://doi.org/10.3390/e22030266 - 26 Feb 2020
Cited by 1 | Viewed by 3683
Abstract
Counterfactuals, i.e., events that could have occurred but eventually did not, play a unique role in quantum mechanics in that they exert causal effects despite their non-occurrence. They are therefore vital for a better understanding of quantum mechanics (QM) and possibly the universe [...] Read more.
Counterfactuals, i.e., events that could have occurred but eventually did not, play a unique role in quantum mechanics in that they exert causal effects despite their non-occurrence. They are therefore vital for a better understanding of quantum mechanics (QM) and possibly the universe as a whole. In earlier works, we have studied counterfactuals both conceptually and experimentally. A fruitful framework termed quantum oblivion has emerged, referring to situations where one particle seems to "forget" its interaction with other particles despite the latter being visibly affected. This framework proved to have significant explanatory power, which we now extend to tackle additional riddles. The time-symmetric causality employed by the Two State-Vector Formalism (TSVF) reveals a subtle realm ruled by “weak values,” already demonstrated by numerous experiments. They offer a realistic, simple and intuitively appealing explanation to the unique role of quantum non-events, as well as to the foundations of QM. In this spirit, we performed a weak value analysis of quantum oblivion and suggest some new avenues for further research. Full article
(This article belongs to the Special Issue Quantum Information Revolution: Impact to Foundations)
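The weak values referred to above are computed from pre- and post-selected states as A_w = ⟨φ|A|ψ⟩/⟨φ|ψ⟩, and for nearly orthogonal selections they can lie far outside the operator's eigenvalue range. A toy numerical illustration with made-up states (not an example from the paper):

```python
# Weak value A_w = <phi|A|psi> / <phi|psi> for pre-selected |psi>,
# post-selected |phi>; it can lie outside the eigenvalue range of A.

def dot(u, v):
    # Hermitian inner product <u|v> for 2-component vectors.
    return u[0].conjugate() * v[0] + u[1].conjugate() * v[1]

def matvec(M, v):
    return (M[0][0] * v[0] + M[0][1] * v[1],
            M[1][0] * v[0] + M[1][1] * v[1])

SZ = ((1, 0), (0, -1))  # Pauli z, eigenvalues +/-1

psi = (0.6, 0.8)    # pre-selected state (made-up)
phi = (0.8, -0.58)  # nearly orthogonal post-selection (made-up)
A_w = dot(phi, matvec(SZ, psi)) / dot(phi, psi)
print(A_w)  # a 'weak value' far outside the eigenvalue range [-1, 1]
```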

8 pages, 1136 KiB  
Article
Entropy of Conduction Electrons from Transport Experiments
by Nicolás Pérez, Constantin Wolf, Alexander Kunzmann, Jens Freudenberger, Maria Krautz, Bruno Weise, Kornelius Nielsch and Gabi Schierning
Entropy 2020, 22(2), 244; https://doi.org/10.3390/e22020244 - 21 Feb 2020
Cited by 5 | Viewed by 4098
Abstract
The entropy of conduction electrons was evaluated utilizing the thermodynamic definition of the Seebeck coefficient as a tool. This analysis was applied to two different kinds of scientific questions that can—if at all—be only partially addressed by other methods. These are the field-dependence [...] Read more.
The entropy of conduction electrons was evaluated utilizing the thermodynamic definition of the Seebeck coefficient as a tool. This analysis was applied to two different kinds of scientific questions that can—if at all—be only partially addressed by other methods. These are the field-dependence of meta-magnetic phase transitions and the electronic structure in strongly disordered materials, such as alloys. We showed that the electronic entropy change in meta-magnetic transitions is not constant with the applied magnetic field, as is usually assumed. Furthermore, we traced the evolution of the electronic entropy with respect to the chemical composition of an alloy series. Insights about the strength and kind of interactions appearing in the exemplary materials can be identified in the experiments. Full article
(This article belongs to the Special Issue Simulation with Entropy Thermodynamics)
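The "thermodynamic definition of the Seebeck coefficient" used as the analysis tool reads it as entropy transported per unit charge, so the entropy per carrier is s = e·α for carriers of charge e. A minimal unit-conversion sketch (the sample value is illustrative, not taken from the experiments):

```python
E_CHARGE = 1.602176634e-19  # elementary charge, C
K_B = 1.380649e-23          # Boltzmann constant, J/K

def entropy_per_carrier(seebeck_v_per_k):
    """Entropy transported per charge carrier, reading the Seebeck
    coefficient thermodynamically: s = e * alpha (for |q| = e)."""
    return E_CHARGE * seebeck_v_per_k

# A Seebeck coefficient of k_B/e ≈ 86.17 uV/K corresponds to an entropy
# of ~1 k_B per carrier.
s = entropy_per_carrier(86.17e-6)
print(s / K_B)  # ≈ 1.0
```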

10 pages, 658 KiB  
Article
Thermodynamic and Transport Properties of Equilibrium Debye Plasmas
by Gianpiero Colonna and Annarita Laricchiuta
Entropy 2020, 22(2), 237; https://doi.org/10.3390/e22020237 - 20 Feb 2020
Cited by 5 | Viewed by 3284
Abstract
The thermodynamic and transport properties of weakly non-ideal, high-density partially ionized hydrogen plasma are investigated, accounting for quantum effects due to the change in the energy spectrum of atomic hydrogen when the electron–proton interaction is considered embedded in the surrounding particles. The complexity [...] Read more.
The thermodynamic and transport properties of weakly non-ideal, high-density partially ionized hydrogen plasma are investigated, accounting for quantum effects due to the change in the energy spectrum of atomic hydrogen when the electron–proton interaction is considered embedded in the surrounding particles. The complexity of the rigorous approach led to the development of simplified models, able to include the neighbor-effects on the isolated system while remaining consistent with the traditional thermodynamic approach. High-density conditions have been simulated assuming particle interactions described by a screened Coulomb potential. Full article
(This article belongs to the Special Issue Simulation with Entropy Thermodynamics)
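The screened Coulomb potential used to model the high-density conditions is the Debye–Hückel (Yukawa) form V(r) ∝ exp(−r/λ_D)/r, with the Debye length set by the electron temperature and density. A sketch with illustrative plasma parameters (not values from the paper):

```python
import math

EPS0 = 8.8541878128e-12  # vacuum permittivity, F/m
K_B = 1.380649e-23       # Boltzmann constant, J/K
E = 1.602176634e-19      # elementary charge, C

def debye_length(T_e, n_e):
    """Electron Debye length sqrt(eps0 * kB * T / (n * e^2)), in metres."""
    return math.sqrt(EPS0 * K_B * T_e / (n_e * E ** 2))

def screened_coulomb(r, lam):
    """Screened (Debye-Hückel/Yukawa) potential, in units of e/(4 pi eps0):
    exp(-r/lam) / r."""
    return math.exp(-r / lam) / r

lam = debye_length(T_e=2.0e4, n_e=1.0e24)  # hot, dense hydrogen plasma
print(lam)  # a few nanometres
# Screening suppresses the potential much faster than the bare 1/r fall-off:
print(screened_coulomb(2 * lam, lam) / screened_coulomb(lam, lam))  # exp(-1)/2
```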

12 pages, 290 KiB  
Article
Global Geometry of Bayesian Statistics
by Atsuhide Mori
Entropy 2020, 22(2), 240; https://doi.org/10.3390/e22020240 - 20 Feb 2020
Cited by 2 | Viewed by 3097
Abstract
In the previous work of the author, a non-trivial symmetry of the relative entropy in the information geometry of normal distributions was discovered. The same symmetry also appears in the symplectic/contact geometry of Hilbert modular cusps. Further, it was observed that a contact [...] Read more.
In previous work of the author, a non-trivial symmetry of the relative entropy in the information geometry of normal distributions was discovered. The same symmetry also appears in the symplectic/contact geometry of Hilbert modular cusps. Further, it was observed that a contact Hamiltonian flow presents a certain Bayesian inference on normal distributions. In this paper, we describe Bayesian statistics and information geometry in the language of current geometry, in order to spread interest in statistics among general geometers and topologists. Then, we foliate the space of multivariate normal distributions by symplectic leaves to generalize the above result of the author. This foliation arises from the Cholesky decomposition of the covariance matrices. Full article
(This article belongs to the Special Issue Information Geometry III)
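The relative entropy (Kullback–Leibler divergence) between normal distributions, whose non-trivial symmetry the paper studies, has a simple closed form in the univariate case. A sketch for orientation only (the paper works with multivariate normals and a deeper symmetry than shown here):

```python
import math

def kl_normal(mu0, s0, mu1, s1):
    """Relative entropy D(N(mu0, s0^2) || N(mu1, s1^2)) in nats."""
    return math.log(s1 / s0) + (s0 ** 2 + (mu0 - mu1) ** 2) / (2 * s1 ** 2) - 0.5

# The relative entropy is famously asymmetric in its arguments...
print(kl_normal(0, 1, 1, 2), kl_normal(1, 2, 0, 1))
# ...and vanishes only when the two distributions coincide.
print(kl_normal(0.3, 1.7, 0.3, 1.7))  # 0.0
```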

6 pages, 733 KiB  
Article
Entropy, Information, and Symmetry; Ordered Is Symmetrical, II: System of Spins in the Magnetic Field
by Edward Bormashenko
Entropy 2020, 22(2), 235; https://doi.org/10.3390/e22020235 - 19 Feb 2020
Cited by 10 | Viewed by 3322
Abstract
The second part of this paper develops an approach suggested in Entropy 2020, 22(1), 11, which relates ordering in physical systems to symmetrizing. Entropy is frequently interpreted as a quantitative measure of “chaos” or “disorder”. However, the notions of “chaos” and [...] Read more.
The second part of this paper develops an approach suggested in Entropy 2020, 22(1), 11, which relates ordering in physical systems to symmetrizing. Entropy is frequently interpreted as a quantitative measure of “chaos” or “disorder”. However, the notions of “chaos” and “disorder” are vague and subjective, to a great extent. This leads to numerous misinterpretations of entropy. We propose that disorder be viewed as an absence of symmetry, and we identify “ordering” with the symmetrizing of a physical system; in other words, with introducing elements of symmetry into an initially disordered physical system. We explore an initially disordered system of elementary magnets subjected to an external magnetic field H. Imposing symmetry restrictions diminishes the entropy of the system and decreases its temperature. The general case of a system of elementary magnets demonstrating j-fold symmetry is studied. The interrelation T_j = T/j takes place, where T and T_j are the temperatures of the non-symmetrized and j-fold-symmetrized systems of magnets, correspondingly. Full article
(This article belongs to the Section Statistical Physics)
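The standard two-level (spin-1/2) paramagnet already illustrates the abstract's central point: switching on the field orders the magnets and lowers the entropy per site from k_B ln 2 toward zero. A textbook sketch (not the paper's j-fold-symmetry construction):

```python
import math

def spin_entropy_per_site(B, T, k_B=1.0, mu=1.0):
    """Entropy per elementary magnet (two-level spin) in units of k_B:
    S/k_B = ln(2 cosh x) - x tanh x,  with x = mu*B/(k_B*T)."""
    x = mu * B / (k_B * T)
    return math.log(2 * math.cosh(x)) - x * math.tanh(x)

# Zero field: maximal disorder, S = k_B ln 2; a strong field orders the
# spins and drives the entropy toward zero.
print(spin_entropy_per_site(0.0, 1.0))   # ln 2 ≈ 0.693
print(spin_entropy_per_site(10.0, 1.0))  # nearly 0
```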

14 pages, 585 KiB  
Communication
The Brevity Law as a Scaling Law, and a Possible Origin of Zipf’s Law for Word Frequencies
by Álvaro Corral and Isabel Serra
Entropy 2020, 22(2), 224; https://doi.org/10.3390/e22020224 - 17 Feb 2020
Cited by 21 | Viewed by 4829
Abstract
An important body of quantitative linguistics is constituted by a series of statistical laws about language usage. Despite the importance of these linguistic laws, some of them are poorly formulated, and, more importantly, there is no unified framework that encompasses them all. This [...] Read more.
An important body of quantitative linguistics is constituted by a series of statistical laws about language usage. Despite the importance of these linguistic laws, some of them are poorly formulated, and, more importantly, there is no unified framework that encompasses them all. This paper presents a new perspective to establish a connection between different statistical linguistic laws. Characterizing each word type by two random variables—length (in number of characters) and absolute frequency—we show that the corresponding bivariate joint probability distribution shows a rich and precise phenomenology, with the type-length and the type-frequency distributions as its two marginals, and the conditional distribution of frequency at fixed length providing a clear formulation for the brevity-frequency phenomenon. The type-length distribution turns out to be well fitted by a gamma distribution (much better than with the previously proposed lognormal), and the conditional frequency distributions at fixed length display power-law-decay behavior with a fixed exponent α ≈ 1.4 and a characteristic-frequency crossover that scales as an inverse power δ ≈ 2.8 of length, which implies the fulfillment of a scaling law analogous to those found in the thermodynamics of critical phenomena. As a by-product, we find a possible model-free explanation for the origin of Zipf’s law, which should arise as a mixture of conditional frequency distributions governed by the crossover length-dependent frequency. Full article
(This article belongs to the Special Issue Information Theory and Language)
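The basic object of the paper — the bivariate (length, frequency) distribution over word types, with its two marginals — can be assembled in a few lines. A toy-corpus sketch, illustrative only (the paper uses large corpora and careful distributional fits):

```python
from collections import Counter

text = ("the quick brown fox jumps over the lazy dog the fox "
        "the dog barks and the fox runs").split()

freq = Counter(text)                            # type -> absolute frequency
pairs = [(len(w), f) for w, f in freq.items()]  # (length, frequency) per type

# The two marginals of the bivariate (length, frequency) distribution:
length_marginal = Counter(l for l, _ in pairs)
freq_marginal = Counter(f for _, f in pairs)
print(length_marginal, freq_marginal)

# Brevity-frequency tendency: short types are, on average, more frequent.
short = [f for l, f in pairs if l <= 3]
long_ = [f for l, f in pairs if l > 3]
print(sum(short) / len(short), sum(long_) / len(long_))
```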

34 pages, 514 KiB  
Article
Generalised Measures of Multivariate Information Content
by Conor Finn and Joseph T. Lizier
Entropy 2020, 22(2), 216; https://doi.org/10.3390/e22020216 - 14 Feb 2020
Cited by 16 | Viewed by 6024
Abstract
The entropy of a pair of random variables is commonly depicted using a Venn diagram. This representation is potentially misleading, however, since the multivariate mutual information can be negative. This paper presents new measures of multivariate information content that can be accurately depicted [...] Read more.
The entropy of a pair of random variables is commonly depicted using a Venn diagram. This representation is potentially misleading, however, since the multivariate mutual information can be negative. This paper presents new measures of multivariate information content that can be accurately depicted using Venn diagrams for any number of random variables. These measures complement the existing measures of multivariate mutual information and are constructed by considering the algebraic structure of information sharing. It is shown that the distinct ways in which a set of marginal observers can share their information with a non-observing third party correspond to the elements of a free distributive lattice. The redundancy lattice from partial information decomposition is then independently derived by combining the algebraic structures of joint and shared information content. Full article
(This article belongs to the Section Information Theory, Probability and Statistics)
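A concrete instance of why the Venn picture misleads: for two fair independent bits and their XOR, the central region of the three-variable Venn diagram — the multivariate mutual information — equals −1 bit. A self-contained check:

```python
import math
from itertools import product

def H(dist):
    """Shannon entropy (bits) of a distribution given as {outcome: prob}."""
    return -sum(p * math.log2(p) for p in dist.values() if p > 0)

def marginal(dist, idx):
    """Marginal distribution over the coordinates listed in idx."""
    out = {}
    for k, p in dist.items():
        key = tuple(k[i] for i in idx)
        out[key] = out.get(key, 0.0) + p
    return out

# X, Y fair independent bits, Z = X XOR Y.
pxyz = {(x, y, x ^ y): 0.25 for x, y in product((0, 1), repeat=2)}

# Multivariate mutual information via inclusion-exclusion over entropies.
I3 = (H(marginal(pxyz, [0])) + H(marginal(pxyz, [1])) + H(marginal(pxyz, [2]))
      - H(marginal(pxyz, [0, 1])) - H(marginal(pxyz, [0, 2]))
      - H(marginal(pxyz, [1, 2]))
      + H(pxyz))
print(I3)  # -1.0: the 'central' Venn region is negative for XOR
```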

17 pages, 2442 KiB  
Article
Finite-Time Thermodynamic Model for Evaluating Heat Engines in Ocean Thermal Energy Conversion
by Takeshi Yasunaga and Yasuyuki Ikegami
Entropy 2020, 22(2), 211; https://doi.org/10.3390/e22020211 - 13 Feb 2020
Cited by 28 | Viewed by 4333
Abstract
Ocean thermal energy conversion (OTEC) converts the thermal energy stored in the ocean temperature difference between warm surface seawater and cold deep seawater into electricity. The necessary temperature difference to drive OTEC heat engines is only 15–25 K, which yields a theoretically [...] Read more.
Ocean thermal energy conversion (OTEC) converts the thermal energy stored in the ocean temperature difference between warm surface seawater and cold deep seawater into electricity. The necessary temperature difference to drive OTEC heat engines is only 15–25 K, which yields a theoretically low thermal efficiency. Research has been conducted to propose unique systems that can increase this thermal efficiency. Thermal efficiency is generally used as the system performance metric, and researchers have focused on using the higher available temperature difference of heat engines to improve it, without considering the finite flow rate and sensible heat of seawater. In this study, our model presents a new thermodynamic concept for OTEC. The first step is to define the transferable thermal energy in OTEC by taking the equilibrium state, instead of the atmospheric condition, as the dead state. Second, the model derives the maximum available work, a new concept of exergy, by minimizing the entropy generation while considering external heat loss. The maximum thermal energy and exergy allow the normalization of the first- and second-law thermal efficiencies. These evaluation methods can be applied to optimized OTEC systems, and their effectiveness is confirmed. Full article
(This article belongs to the Special Issue Entropy in Renewable Energy Systems)
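The low theoretical efficiency quoted for a 15–25 K temperature difference follows directly from the Carnot bound 1 − T_cold/T_warm. A one-line check with typical seawater temperatures (illustrative values):

```python
def carnot_efficiency(T_warm, T_cold):
    """Upper bound on thermal efficiency for reservoirs at T_warm, T_cold (K)."""
    return 1.0 - T_cold / T_warm

# Warm surface seawater ~25 °C vs cold deep seawater ~5 °C:
eta = carnot_efficiency(298.15, 278.15)
print(eta)  # ≈ 0.067, i.e. under 7% even in the ideal limit
```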

12 pages, 1921 KiB  
Article
On the Irrationality of Being in Two Minds
by Shahram Dehdashti, Lauren Fell and Peter Bruza
Entropy 2020, 22(2), 174; https://doi.org/10.3390/e22020174 - 4 Feb 2020
Cited by 4 | Viewed by 4108
Abstract
This article presents a general framework that allows irrational decision making to be theoretically investigated and simulated. Rationality in human decision making under uncertainty is normatively prescribed by the axioms of probability theory in order to maximize utility. However, substantial literature from psychology [...] Read more.
This article presents a general framework that allows irrational decision making to be theoretically investigated and simulated. Rationality in human decision making under uncertainty is normatively prescribed by the axioms of probability theory in order to maximize utility. However, substantial literature from psychology and cognitive science shows that human decisions regularly deviate from these axioms. Bistable probabilities are proposed as a principled and straightforward means for modeling (ir)rational decision making, which occurs when a decision maker is in “two minds”. We show that bistable probabilities can be formalized by positive-operator-valued projections in quantum mechanics. We found that (1) irrational decision making necessarily involves a wider spectrum of causal relationships than rational decision making, (2) the accessible information turns out to be greater in irrational decision making when compared to rational decision making, and (3) irrational decision making is quantum-like because it violates the Bell–Wigner polytope. Full article
(This article belongs to the Special Issue Quantum Information Revolution: Impact to Foundations)

13 pages, 448 KiB  
Article
Nonlinear Fokker–Planck Equation Approach to Systems of Interacting Particles: Thermostatistical Features Related to the Range of the Interactions
by Angel R. Plastino and Roseli S. Wedemann
Entropy 2020, 22(2), 163; https://doi.org/10.3390/e22020163 - 31 Jan 2020
Cited by 8 | Viewed by 3077
Abstract
Nonlinear Fokker–Planck equations (NLFPEs) constitute useful effective descriptions of some interacting many-body systems. Important instances of these nonlinear evolution equations are closely related to the thermostatistics based on the S_q power-law entropic functionals. Most applications of the connection between the NLFPE and [...] Read more.
Nonlinear Fokker–Planck equations (NLFPEs) constitute useful effective descriptions of some interacting many-body systems. Important instances of these nonlinear evolution equations are closely related to the thermostatistics based on the S_q power-law entropic functionals. Most applications of the connection between the NLFPE and the S_q entropies have focused on systems interacting through short-range forces. In the present contribution we revisit the NLFPE approach to interacting systems in order to clarify the role played by the range of the interactions, and to explore the possibility of developing similar treatments for systems with long-range interactions, such as those corresponding to Newtonian gravitation. In particular, we consider a system of particles interacting via forces following the inverse square law and performing overdamped motion, that is described by a density obeying an integro-differential evolution equation that admits exact time-dependent solutions of the q-Gaussian form. These q-Gaussian solutions, which constitute a signature of S_q-thermostatistics, evolve in a similar but not identical way to the solutions of an appropriate nonlinear, power-law Fokker–Planck equation. Full article
(This article belongs to the Special Issue Entropy and Gravitation)
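The q-Gaussian shape at the heart of S_q-thermostatistics reduces to the ordinary Gaussian as q → 1 and develops power-law tails for q > 1. A minimal sketch of the function itself (illustrative parameter values, not the paper's integro-differential model):

```python
import numpy as np

def q_gaussian(x, q, beta):
    """Unnormalized q-Gaussian [1 - (1-q)*beta*x^2]_+^(1/(1-q)).
    Recovers the ordinary Gaussian exp(-beta*x^2) as q -> 1."""
    if abs(q - 1.0) < 1e-12:
        return np.exp(-beta * x**2)
    # Clip the base at zero: for q < 1 the support is compact.
    base = np.clip(1.0 - (1.0 - q) * beta * x**2, 0.0, None)
    return base ** (1.0 / (1.0 - q))

x = np.linspace(-3.0, 3.0, 61)
g_gauss = q_gaussian(x, 1.0, 1.0)   # q = 1: exactly Gaussian
g_heavy = q_gaussian(x, 1.5, 1.0)   # q > 1: heavy power-law tails
```

For q = 1.5 the tail at x = 3 decays only polynomially, so it lies well above the Gaussian value there.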

22 pages, 3190 KiB  
Article
Evolution of Neuroaesthetic Variables in Portrait Paintings throughout the Renaissance
by Ivan Correa-Herran, Hassan Aleem and Norberto M. Grzywacz
Entropy 2020, 22(2), 146; https://doi.org/10.3390/e22020146 - 26 Jan 2020
Cited by 9 | Viewed by 4457
Abstract
To compose art, artists rely on a set of sensory evaluations performed fluently by the brain. The outcome of these evaluations, which we call neuroaesthetic variables, helps to compose art with high aesthetic value. In this study, we probed whether these variables varied across art periods despite relatively unvaried neural function. We measured several neuroaesthetic variables in portrait paintings from the Early and High Renaissance, and from Mannerism. The variables included symmetry, balance, and contrast (chiaroscuro), as well as intensity and spatial complexities measured by two forms of normalized entropy. The results showed that the degree of symmetry remained relatively constant during the Renaissance. However, the balance of portraits decayed abruptly at the end of the Early Renaissance, that is, at the closing of the 15th century. The intensity and spatial complexities, and thus the entropies, of portraits also fell in a similar manner around the same time. Our data also showed that the decline of complexity and entropy could be attributed to the rise of chiaroscuro. With few exceptions, the values of the aesthetic variables of the top artists of the Renaissance resembled those of their peers. We conclude that neuroaesthetic variables have the flexibility to change in the brains of artists (and observers). Full article
(This article belongs to the Special Issue Entropy in Image Analysis II)
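A normalized intensity entropy of the kind used here can be computed from a grayscale histogram: Shannon entropy of the intensity distribution divided by its maximum possible value. A minimal sketch on synthetic "images" (not the paper's pipeline):

```python
import numpy as np

def normalized_intensity_entropy(image, bins=256):
    """Shannon entropy of the intensity histogram, normalized to [0, 1]
    by the maximum possible entropy log2(bins)."""
    hist, _ = np.histogram(image, bins=bins, range=(0.0, 1.0))
    p = hist / hist.sum()
    p = p[p > 0]                     # 0 * log(0) := 0
    h = -np.sum(p * np.log2(p))
    return h / np.log2(bins)

rng = np.random.default_rng(0)
flat = rng.uniform(0, 1, (64, 64))   # uniform intensities: high entropy
dark = np.full((64, 64), 0.1)        # constant image: zero entropy
h_flat = normalized_intensity_entropy(flat)
h_dark = normalized_intensity_entropy(dark)
```

Strong chiaroscuro concentrates intensity mass into fewer histogram bins, which is one intuition for why its rise could drive the measured entropy down.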

20 pages, 855 KiB  
Article
Adapting Logic to Physics: The Quantum-Like Eigenlogic Program
by Zeno Toffano and François Dubois
Entropy 2020, 22(2), 139; https://doi.org/10.3390/e22020139 - 24 Jan 2020
Cited by 11 | Viewed by 3748
Abstract
Considering links between logic and physics is important because of the fast development of quantum information technologies in our everyday life. This paper discusses a new method in logic inspired by quantum theory using operators, named Eigenlogic. It expresses logical propositions using linear algebra. Logical functions are represented by operators, and logical truth tables correspond to the eigenvalue structure. It extends the possibilities of classical logic by changing the semantics from the Boolean binary alphabet {0, 1}, using projection operators, to the binary alphabet {+1, −1}, employing reversible involution operators. Many-valued logical operators are also synthesized, for arbitrary alphabets, using operator methods based on Lagrange interpolation and on the Cayley–Hamilton theorem. Considering a superposition of logical input states, one obtains a fuzzy logic representation in which the fuzzy membership function is the quantum probability given by the Born rule. Historical parallels from Boole, Post, Poincaré and Combinatory Logic are presented in relation to probability theory, non-commutative quaternion algebra and Turing machines. An extension to first-order logic is proposed, inspired by Grover's algorithm. Eigenlogic is essentially a logic of operators, and its truth-table logical semantics is provided by the eigenvalue structure, which is shown to be related to the universality of logical quantum gates, a fundamental role being played by non-commutativity and entanglement. Full article
(This article belongs to the Special Issue Quantum Information Revolution: Impact to Foundations)
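In the {0, 1} projection-operator semantics, a connective's truth table sits on the diagonal (the eigenvalues) of its operator, and the Born rule on superposed inputs yields a fuzzy truth probability. A small sketch of this idea (my own minimal construction, not code from the paper):

```python
import numpy as np

# Projector onto logical "1" for one input (Boolean alphabet {0, 1}).
P1 = np.diag([0.0, 1.0])
I2 = np.eye(2)

# Two-input connectives as operators: eigenvalues = truth-table column.
AND = np.kron(P1, P1)                                  # diag(0, 0, 0, 1)
OR = np.kron(P1, I2) + np.kron(I2, P1) - np.kron(P1, P1)

truth_and = np.diag(AND)   # truth values for inputs 00, 01, 10, 11
truth_or = np.diag(OR)

# Fuzzy reading: for superposed inputs, the Born rule <psi|F|psi> gives
# the probability that the proposition is true.
a = np.array([np.sqrt(0.5), np.sqrt(0.5)])   # input 1 is "true" w.p. 0.5
b = np.array([np.sqrt(0.2), np.sqrt(0.8)])   # input 2 is "true" w.p. 0.8
psi = np.kron(a, b)
p_and = psi @ AND @ psi                       # product rule: 0.5 * 0.8
```

The projective AND is literally the product of the two input projectors, so its fuzzy membership reduces to the product of the individual truth probabilities.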

15 pages, 940 KiB  
Article
Electric Double Layers with Surface Charge Regulation Using Density Functional Theory
by Dirk Gillespie, Dimiter N. Petsev and Frank van Swol
Entropy 2020, 22(2), 132; https://doi.org/10.3390/e22020132 - 22 Jan 2020
Cited by 9 | Viewed by 3328
Abstract
Surprisingly, the local structure of electrolyte solutions in electric double layers is primarily determined by the solvent. This is initially unexpected, as the solvent is usually a neutral species not subject to dominant Coulombic interactions. Part of the solvent's dominance in determining the local structure is simply due to the much larger number of solvent molecules in a typical electrolyte solution. The dominant local packing of the solvent then creates the space left for the charged species. Our classical density functional theory work demonstrates that the solvent structural effect strongly couples to the surface chemistry, which governs the charge and potential. In this article we address some outstanding questions relating to double-layer modeling. Firstly, we address the role of ion-ion correlations that go beyond mean-field correlations. Secondly, we consider the effects of a density-dependent dielectric constant, which is crucial in the description of an electrolyte-vapor interface. Full article
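For orientation, the classical mean-field baseline that density functional treatments go beyond is the linearized (Debye–Hückel) double layer, where the potential decays over the Debye length. A sketch of that screening length for a 1:1 aqueous electrolyte (standard textbook model, not the paper's DFT):

```python
import numpy as np

eps0 = 8.854e-12        # vacuum permittivity, F/m
eps_r = 78.5            # relative permittivity of water near 25 degC
kB, T = 1.381e-23, 298.15
e = 1.602e-19
NA = 6.022e23

def debye_length(c_molar):
    """Debye length [m] of a 1:1 electrolyte at molar concentration c;
    phi(x) ~ phi0 * exp(-x / lambda_D) in the linearized theory."""
    n = 2 * c_molar * 1000 * NA          # total ion number density, 1/m^3
    return np.sqrt(eps0 * eps_r * kB * T / (n * e**2))

lam_dilute = debye_length(0.001)   # ~10 nm at 1 mM
lam_conc = debye_length(0.1)       # ~1 nm at 100 mM
```

At realistic concentrations the diffuse layer is only a few solvent diameters thick, which is why explicit solvent packing, absent from this mean-field picture, dominates the local structure.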

13 pages, 4382 KiB  
Article
Eigenvalues of Two-State Quantum Walks Induced by the Hadamard Walk
by Shimpei Endo, Takako Endo, Takashi Komatsu and Norio Konno
Entropy 2020, 22(1), 127; https://doi.org/10.3390/e22010127 - 20 Jan 2020
Cited by 7 | Viewed by 4023
Abstract
The existence of eigenvalues of discrete-time quantum walks is deeply related to the localization of the walks. We reveal, for the first time, the distributions of the eigenvalues given by the splitted generating function (SGF) method for the space-inhomogeneous quantum walks in one dimension treated in our previous studies. In particular, we clarify the characteristic parameter dependence of the distributions of the eigenvalues with the aid of numerical simulation. Full article
(This article belongs to the Special Issue Quantum Walks and Related Issues)
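The spectrum in question is that of the walk's unitary evolution operator, so all eigenvalues lie on the unit circle; localization is tied to the point spectrum. A minimal, spatially homogeneous Hadamard walk on an N-cycle (a toy baseline, not the paper's space-inhomogeneous model):

```python
import numpy as np

N = 8  # sites on a cycle; state index = 2*site + coin
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # Hadamard coin

# Coin step: apply H to the coin space at every site.
C = np.kron(np.eye(N), H)

# Shift step: coin |0> moves one site left, coin |1> one site right.
S = np.zeros((2 * N, 2 * N))
for x in range(N):
    S[2 * ((x - 1) % N), 2 * x] = 1.0          # left-mover
    S[2 * ((x + 1) % N) + 1, 2 * x + 1] = 1.0  # right-mover

U = S @ C                       # one step of the walk
eigvals = np.linalg.eigvals(U)  # all on the unit circle
```

Introducing a site-dependent coin (the inhomogeneous case) is what can split isolated point-spectrum eigenvalues off the continuous bands and produce localization.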

19 pages, 4838 KiB  
Article
Energy and Exergy Evaluation of a Two-Stage Axial Vapour Compressor on the LNG Carrier
by Igor Poljak, Josip Orović, Vedran Mrzljak and Dean Bernečić
Entropy 2020, 22(1), 115; https://doi.org/10.3390/e22010115 - 17 Jan 2020
Cited by 7 | Viewed by 4667
Abstract
Data from a two-stage axial vapor cryogenic compressor on a dual-fuel diesel–electric (DFDE) liquefied natural gas (LNG) carrier were measured and analyzed to investigate compressor energy and exergy efficiency under real exploitation conditions. The running parameters of the two-stage compressor were collected while changing the main propeller shaft rpm. As the compressor supply of vaporized gas to the main engines increases, so do the load and rpm of the propulsion electric motors, and vice versa. The results show that, as the main propulsion shaft speed varied from 46 to 56 rpm, the increased mass flow rate of vaporized LNG through the two-stage compressor influenced compressor performance. The compressor's average energy efficiency is around 50%, while its exergy efficiency is significantly lower over the whole measured range, averaging around 34%. A change in the ambient temperature from 0 to 50 °C also influences the compressor's exergy efficiency: higher exergy efficiency is achieved at lower ambient temperatures. As temperature increases, overall compressor exergy efficiency decreases by about 7% on average over the whole analyzed range. A proposed new concept for energy saving and increasing compressor efficiency, based on pre-cooling of the compressor's second stage, is also analyzed. The temperature at the second stage was varied in the range from 0 to −50 °C, which results in power savings of up to 26 kW for optimal running regimes. Full article
(This article belongs to the Special Issue Carnot Cycle and Heat Engine Fundamentals and Applications)
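The ambient-temperature trend has a simple textbook explanation: exergy efficiency compares the flow-exergy increase to the work input, and the exergy increase shrinks as the dead-state temperature T0 rises. A sketch with illustrative numbers (not the paper's measured data):

```python
# For an adiabatic compressor with specific enthalpy rise dh and entropy
# generation ds, a common exergy-efficiency definition is
#   eps = (dh - T0 * ds) / dh,
# so a higher dead-state (ambient) temperature T0 lowers eps.

def exergy_efficiency(dh, ds, T0):
    """dh [kJ/kg]: enthalpy rise, ds [kJ/(kg K)]: entropy generation,
    T0 [K]: ambient (dead-state) temperature."""
    return (dh - T0 * ds) / dh

dh, ds = 180.0, 0.25                        # illustrative values only
eff_cold = exergy_efficiency(dh, ds, 273.15)  # 0 degC ambient
eff_hot = exergy_efficiency(dh, ds, 323.15)   # 50 degC ambient
```

The same dh and ds give a visibly lower exergy efficiency at the hotter dead state, mirroring the roughly 7% drop reported over the 0 to 50 °C range.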

14 pages, 3812 KiB  
Article
Determining the Bulk Parameters of Plasma Electrons from Pitch-Angle Distribution Measurements
by Georgios Nicolaou, Robert Wicks, George Livadiotis, Daniel Verscharen, Christopher Owen and Dhiren Kataria
Entropy 2020, 22(1), 103; https://doi.org/10.3390/e22010103 - 16 Jan 2020
Cited by 13 | Viewed by 4610
Abstract
Electrostatic analysers measure the flux of plasma particles in velocity space and determine their velocity distribution function. There are occasions when science objectives require high time-resolution measurements, and the instrument operates in short measurement cycles, sampling only a portion of the velocity distribution function. One such high-resolution measurement strategy consists of sampling the two-dimensional pitch-angle distributions of the plasma particles, which describe the velocities of the particles with respect to the local magnetic field direction. Here, we investigate the accuracy of plasma bulk parameters derived from such high-resolution measurements. We simulate electron observations from the Solar Wind Analyser's (SWA) Electron Analyser System (EAS) on board Solar Orbiter. We show that fitting analysis of the synthetic datasets determines the plasma temperature and kappa index of the distribution within 10% of their actual values, even at large heliocentric distances where the expected solar wind flux is very low. Interestingly, we show that although measurement points with zero counts are not statistically significant, they provide information about the particle distribution function which becomes important when the particle flux is low. We also examine the convergence of the fitting algorithm for expected plasma conditions and discuss the sources of statistical and systematic uncertainties. Full article
(This article belongs to the Special Issue Theoretical Aspects of Kappa Distributions)

29 pages, 970 KiB  
Article
Quantifying Athermality and Quantum Induced Deviations from Classical Fluctuation Relations
by Zoë Holmes, Erick Hinds Mingo, Calvin Y.-R. Chen and Florian Mintert
Entropy 2020, 22(1), 111; https://doi.org/10.3390/e22010111 - 16 Jan 2020
Cited by 2 | Viewed by 3942
Abstract
In recent years, a quantum information theoretic framework has emerged for incorporating non-classical phenomena into fluctuation relations. Here, we elucidate this framework by exploring deviations from classical fluctuation relations resulting from the athermality of the initial thermal system and the quantum coherence of the system's energy supply. In particular, we develop Crooks-like equalities for an oscillator system prepared in either photon-added or photon-subtracted thermal states, and derive a Jarzynski-like equality for average work extraction. We use these equalities to discuss the extent to which adding or subtracting a photon increases the informational content of a state, thereby amplifying the suppression of free-energy-increasing processes. We go on to derive a Crooks-like equality for an energy supply prepared in a pure binomial state, leading to a non-trivial contribution from energy and coherence to the resultant irreversibility. We show how the binomial state equality relates to a previously derived coherent state equality and offers a richer feature set. Full article
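The classical baseline these quantum deviations are measured against can be checked numerically. For Gaussian work statistics W ~ N(mu, sigma^2), the Jarzynski equality <exp(-beta W)> = exp(-beta dF) holds exactly with dF = mu - beta*sigma^2/2. A Monte Carlo sanity check of that classical relation (my illustration, not the paper's quantum equalities):

```python
import numpy as np

rng = np.random.default_rng(0)
beta, mu, sigma = 1.0, 2.0, 1.0

# Sample work values from a Gaussian work distribution.
W = rng.normal(mu, sigma, 200_000)
lhs = np.mean(np.exp(-beta * W))        # <exp(-beta W)> over samples

# For Gaussian work statistics the free-energy difference is known:
dF = mu - beta * sigma**2 / 2
rhs = np.exp(-beta * dF)                # Jarzynski prediction
```

Photon-added/subtracted initial states and coherent energy supplies modify the right-hand side of such equalities, which is exactly the deviation the paper quantifies.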

19 pages, 2322 KiB  
Article
Learning in Feedforward Neural Networks Accelerated by Transfer Entropy
by Adrian Moldovan, Angel Caţaron and Răzvan Andonie
Entropy 2020, 22(1), 102; https://doi.org/10.3390/e22010102 - 16 Jan 2020
Cited by 16 | Viewed by 4909
Abstract
Current neural network architectures are increasingly hard to train because of the growing size and complexity of the datasets used. Our objective is to design more efficient training algorithms by utilizing causal relationships inferred from neural networks. Transfer entropy (TE) was initially introduced as an information transfer measure used to quantify the statistical coherence between events (time series). It was later related to causality, even though the two are not the same. Only a few papers report applications of causality or TE in neural networks. Our contribution is an information-theoretical method for analyzing information transfer between the nodes of feedforward neural networks. The information transfer is measured by the TE of feedback neural connections. Intuitively, TE measures the relevance of a connection in the network, and the feedback amplifies this connection. We introduce a backpropagation-type training algorithm that uses TE feedback connections to improve its performance. Full article
(This article belongs to the Section Information Theory, Probability and Statistics)
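Transfer entropy itself is straightforward to compute for discrete series: TE_{X→Y} = sum p(y1, y0, x0) * log[p(y1 | y0, x0) / p(y1 | y0)], i.e., how much knowing x_t improves prediction of y_{t+1} beyond y_t. A minimal binary-series sketch (history length 1; not the paper's network-training code):

```python
import numpy as np
from collections import Counter

def transfer_entropy(x, y):
    """TE_{X->Y} in bits for binary series, history length 1."""
    triples = Counter(zip(y[1:], y[:-1], x[:-1]))   # (y_{t+1}, y_t, x_t)
    pairs = Counter(zip(y[1:], y[:-1]))             # (y_{t+1}, y_t)
    cond = Counter(zip(y[:-1], x[:-1]))             # (y_t, x_t)
    marg = Counter(y[:-1])                          # y_t
    n = len(y) - 1
    te = 0.0
    for (y1, y0, x0), c in triples.items():
        p_joint = c / n
        p_y1_given_y0x0 = c / cond[(y0, x0)]
        p_y1_given_y0 = pairs[(y1, y0)] / marg[y0]
        te += p_joint * np.log2(p_y1_given_y0x0 / p_y1_given_y0)
    return te

rng = np.random.default_rng(0)
x = rng.integers(0, 2, 5000)
y = np.empty_like(x)
y[0] = 0
y[1:] = x[:-1]          # y copies x with a one-step lag: x drives y
te_xy = transfer_entropy(x, y)   # ~1 bit: x fully predicts y's next step
te_yx = transfer_entropy(y, x)   # ~0: x is i.i.d., y adds nothing
```

The asymmetry te_xy >> te_yx is what lets TE rank the "relevance" of directed connections, the quantity the proposed algorithm feeds back into training.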

27 pages, 1642 KiB  
Article
The Convex Information Bottleneck Lagrangian
by Borja Rodríguez Gálvez, Ragnar Thobaben and Mikael Skoglund
Entropy 2020, 22(1), 98; https://doi.org/10.3390/e22010098 - 14 Jan 2020
Cited by 17 | Viewed by 4789
Abstract
The information bottleneck (IB) problem tackles the issue of obtaining relevant compressed representations T of some random variable X for the task of predicting Y. It is defined as a constrained optimization problem that maximizes the information the representation has about the task, I(T;Y), while ensuring that a certain level of compression r is achieved (i.e., I(X;T) ≤ r). For practical reasons, the problem is usually solved by maximizing the IB Lagrangian (i.e., L_IB(T;β) = I(T;Y) − βI(X;T)) for many values of β ∈ [0, 1]. Then, the curve of maximal I(T;Y) for a given I(X;T) is drawn, and a representation with the desired predictability and compression is selected. It is known that, when Y is a deterministic function of X, the IB curve cannot be explored in this way, and another Lagrangian has been proposed to tackle this problem, the squared IB Lagrangian: L_sq-IB(T;β_sq) = I(T;Y) − β_sq I(X;T)². In this paper, we (i) present a general family of Lagrangians which allow for the exploration of the IB curve in all scenarios; (ii) provide the exact one-to-one mapping between the Lagrange multiplier and the desired compression rate r for known IB curve shapes; and (iii) show that we can approximately obtain a specific compression level with the convex IB Lagrangian for both known and unknown IB curve shapes. This eliminates the burden of solving the optimization problem for many values of the Lagrange multiplier. That is, we prove that we can solve the original constrained problem with a single optimization. Full article
(This article belongs to the Special Issue Information Bottleneck: Theory and Applications in Deep Learning)
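The two Lagrangians can be evaluated directly on a toy problem. Below, Y = X (the deterministic case the squared Lagrangian targets) and T is X passed through a binary symmetric channel with flip probability eps; both information terms then reduce to the same mutual information. A sketch of the objective values only (not the paper's optimization):

```python
import numpy as np

def mi(pxy):
    """Mutual information (bits) from a joint distribution matrix."""
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    mask = pxy > 0
    return float(np.sum(pxy[mask] * np.log2((pxy / (px * py))[mask])))

def ib_terms(eps):
    """I(X;T) and I(T;Y) for Y = X and a noisy encoder p(t|x)."""
    px = np.array([0.5, 0.5])
    p_t_given_x = np.array([[1 - eps, eps], [eps, 1 - eps]])
    pxt = px[:, None] * p_t_given_x       # joint p(x, t)
    # Since Y = X, the joint p(t, y) is just p(x, t) transposed.
    return mi(pxt), mi(pxt.T)

i_xt, i_ty = ib_terms(0.1)
beta = 0.5
L_linear = i_ty - beta * i_xt            # standard IB Lagrangian
L_squared = i_ty - beta * i_xt**2        # squared (convex) IB Lagrangian
```

Because I(X;T) < 1 bit here, squaring shrinks the penalty term, which is one way to see how the convex family changes the trade-off landscape when Y is deterministic in X.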

13 pages, 5513 KiB  
Article
On Heat Transfer Performance of Cooling Systems Using Nanofluid for Electric Motor Applications
by Ali Deriszadeh and Filippo de Monte
Entropy 2020, 22(1), 99; https://doi.org/10.3390/e22010099 - 14 Jan 2020
Cited by 25 | Viewed by 7122
Abstract
This paper studies the fluid flow and heat transfer characteristics of nanofluids as advanced coolants for the cooling systems of electric motors. Investigations are carried out using numerical analysis for a cooling system with spiral channels. To solve the governing equations, computational fluid dynamics and 3D fluid motion analysis are used. The base fluid is water with a laminar flow. The fluid Reynolds number and the turn-number of the spiral channels are the evaluation parameters. The effect of the nanoparticle volume fraction in the base fluid on the heat transfer performance of the cooling system is studied. Increasing the volume fraction of nanoparticles improves the heat transfer performance of the cooling system. On the other hand, a high volume fraction of the nanofluid increases the pressure drop of the coolant fluid and increases the required pumping power. This paper aims at finding a trade-off between these effective parameters by studying both the fluid flow and heat transfer characteristics of the nanofluid. Full article
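The trade-off has a classical dilute-suspension caricature: the Maxwell model predicts how conductivity grows with particle volume fraction, while the Einstein model predicts the accompanying viscosity (hence pumping-power) penalty. A sketch with approximate water/alumina properties (standard textbook models, not the paper's CFD):

```python
def maxwell_k_eff(phi, k_f, k_p):
    """Maxwell effective thermal conductivity of a dilute nanofluid."""
    return k_f * (k_p + 2 * k_f + 2 * phi * (k_p - k_f)) / \
                 (k_p + 2 * k_f - phi * (k_p - k_f))

def einstein_viscosity(phi, mu_f):
    """Einstein dilute-suspension effective viscosity."""
    return mu_f * (1.0 + 2.5 * phi)

k_water, mu_water = 0.613, 1.0e-3   # W/(m K), Pa*s, near 25 degC
k_al2o3 = 40.0                      # W/(m K), approximate

k_low = maxwell_k_eff(0.01, k_water, k_al2o3)
k_high = maxwell_k_eff(0.04, k_water, k_al2o3)
mu_low = einstein_viscosity(0.01, mu_water)
mu_high = einstein_viscosity(0.04, mu_water)
```

Both the conductivity gain and the viscosity penalty grow with volume fraction, which is exactly the tension the paper resolves numerically for its spiral-channel geometry.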

12 pages, 607 KiB  
Article
On Unitary t-Designs from Relaxed Seeds
by Rawad Mezher, Joe Ghalbouni, Joseph Dgheim and Damian Markham
Entropy 2020, 22(1), 92; https://doi.org/10.3390/e22010092 - 12 Jan 2020
Cited by 3 | Viewed by 4671
Abstract
The capacity to randomly pick a unitary across the whole unitary group is a powerful tool across physics and quantum information. A unitary t-design is designed to tackle this challenge in an efficient way, yet constructions to date rely on heavy constraints. In particular, they are composed of ensembles of unitaries which, for technical reasons, must contain inverses and whose entries are algebraic. In this work, we reduce the requirements for generating an ε-approximate unitary t-design. To do so, we first construct a specific n-qubit random quantum circuit composed of a sequence of randomly chosen 2-qubit gates, drawn from a set of unitaries which is approximately universal on U(4), yet need not contain unitaries together with their inverses, nor be composed of unitaries whose entries are algebraic; we dub this a relaxed seed. We then show that this relaxed seed, when used as a basis for our construction, gives rise to an ε-approximate unitary t-design efficiently, where the depth of our random circuit scales as poly(n, t, log(1/ε)), thereby overcoming the two requirements which limited previous constructions. We suspect the result found here is not optimal and can be improved, particularly because the number of gates in the relaxed seeds introduced here grows with n and t. We conjecture that constant-sized seeds, such as those usually present in the literature, are sufficient. Full article
(This article belongs to the Special Issue Quantum Information: Fragility and the Challenges of Fault Tolerance)
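A standard way to quantify how close an ensemble is to a t-design is the frame potential F_t = E|Tr(U†V)|^(2t), which equals t! for the Haar measure when t ≤ d. A Monte Carlo sketch on U(2) (my illustration of the diagnostic, not the paper's construction):

```python
import numpy as np

def haar_unitary(d, rng):
    """Haar-random d x d unitary via QR of a complex Ginibre matrix."""
    z = (rng.standard_normal((d, d)) +
         1j * rng.standard_normal((d, d))) / np.sqrt(2)
    q, r = np.linalg.qr(z)
    return q * (np.diag(r) / np.abs(np.diag(r)))   # fix column phases

def frame_potential(sampler, t, n_samples, rng):
    """Monte Carlo estimate of F_t = E |Tr(U^dag V)|^(2t).
    An exact t-design reproduces the Haar value (t! for t <= d)."""
    total = 0.0
    for _ in range(n_samples):
        U, V = sampler(rng), sampler(rng)
        total += np.abs(np.trace(U.conj().T @ V)) ** (2 * t)
    return total / n_samples

rng = np.random.default_rng(0)
f1 = frame_potential(lambda r: haar_unitary(2, r), t=1, n_samples=2000, rng=rng)
f2 = frame_potential(lambda r: haar_unitary(2, r), t=2, n_samples=2000, rng=rng)
```

An ensemble built from a (relaxed) seed circuit would be an ε-approximate t-design precisely when its frame potential comes within the allowed tolerance of these Haar values.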

25 pages, 832 KiB  
Concept Paper
Introduction to Extreme Seeking Entropy
by Jan Vrba and Jan Mareš
Entropy 2020, 22(1), 93; https://doi.org/10.3390/e22010093 - 12 Jan 2020
Cited by 7 | Viewed by 3776
Abstract
Recently, the concept of evaluating an unusually large learning effort of an adaptive system to detect novelties in the observed data was introduced. The present paper introduces a new measure of the learning effort of an adaptive system. The proposed method also uses adaptable parameters. Instead of a multi-scale enhanced approach, the generalized Pareto distribution is employed to estimate the probability of unusual updates, as well as to detect novelties. The measure was successfully tested in various scenarios with (i) synthetic data and (ii) real time-series datasets, and with multiple adaptive filters and learning algorithms. The results of these experiments are presented. Full article
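The generalized Pareto step follows the peaks-over-threshold recipe from extreme value theory: fit a GPD to the exceedances of the learning-effort signal above a high threshold, then flag updates whose exceedance probability is tiny. A sketch on synthetic update magnitudes (my illustration, not the paper's algorithm):

```python
import numpy as np
from scipy.stats import genpareto

rng = np.random.default_rng(0)

# "Learning effort" proxy: magnitudes of an adaptive filter's weight
# updates; synthetic exponential-like values with one injected novelty.
effort = rng.exponential(scale=1.0, size=2000)
effort[1500] = 15.0                     # unusually large update

# Peaks-over-threshold: fit a GPD to exceedances above a high quantile.
u = np.quantile(effort, 0.95)
exceed = effort[effort > u] - u
c, loc, scale = genpareto.fit(exceed, floc=0.0)

# Tail probability of an update at least as extreme as the injected one,
# versus a merely large-but-ordinary update.
p_novel = genpareto.sf(15.0 - u, c, loc=loc, scale=scale)
p_typical = genpareto.sf(np.quantile(effort, 0.99) - u, c, loc=loc, scale=scale)
```

A detection rule then reduces to thresholding these tail probabilities: the injected spike gets an exceedance probability orders of magnitude below that of ordinary large updates.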

14 pages, 12760 KiB  
Article
Development of Novel Lightweight Dual-Phase Al-Ti-Cr-Mn-V Medium-Entropy Alloys with High Strength and Ductility
by Yu-Chin Liao, Po-Sung Chen, Chao-Hsiu Li, Pei-Hua Tsai, Jason S. C. Jang, Ker-Chang Hsieh, Chih-Yen Chen, Ping-Hung Lin, Jacob C. Huang, Hsin-Jay Wu, Yu-Chieh Lo, Chang-Wei Huang and I-Yu Tsao
Entropy 2020, 22(1), 74; https://doi.org/10.3390/e22010074 - 6 Jan 2020
Cited by 13 | Viewed by 4852
Abstract
A novel lightweight Al-Ti-Cr-Mn-V medium-entropy alloy (MEA) system was developed using a non-equiatomic approach, and the alloys were produced through arc melting and drop casting. These alloys comprised a body-centered cubic (BCC) and face-centered cubic (FCC) dual phase with a density of approximately 4.5 g/cm³. However, the fraction of the BCC phase and the morphology of the FCC phase can be controlled by incorporating other elements. The results of compression tests indicated that these Al-Ti-Cr-Mn-V alloys exhibit a prominent compression strength (~1940 MPa) and ductility (~30%). Moreover, homogenized samples maintained a high compression strength of 1900 MPa and similar ductility (30%). Due to the high specific compressive strength (0.433 GPa·cm³/g) and the excellent combination of strength and ductility, the cast lightweight Al-Ti-Cr-Mn-V MEAs are a promising alloy system for applications in the transportation and energy industries. Full article
(This article belongs to the Special Issue High-Entropy Materials)
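The "medium-entropy" label comes from the ideal configurational entropy of mixing, ΔS_mix = −R Σ c_i ln c_i, which peaks at R ln(N) for an equiatomic N-element alloy; compositions with ΔS_mix roughly between R and 1.5R are commonly classed as medium entropy. A sketch with illustrative fractions (not the paper's actual compositions):

```python
import math

R = 8.314  # gas constant, J/(mol K)

def mixing_entropy(fractions):
    """Ideal configurational entropy of mixing, -R * sum(c_i * ln c_i)."""
    assert abs(sum(fractions) - 1.0) < 1e-9
    return -R * sum(c * math.log(c) for c in fractions if c > 0)

# Equiatomic 5-element alloy: the maximum, R * ln(5) (~1.61 R).
s_equi = mixing_entropy([0.2] * 5)

# A non-equiatomic 5-element composition (illustrative, not the paper's):
# the entropy drops into the commonly cited medium-entropy band R..1.5R.
s_mea = mixing_entropy([0.40, 0.30, 0.15, 0.10, 0.05])
```

Skewing the composition away from equiatomic, as the non-equiatomic design approach here does, trades some configurational entropy for control over phase fractions and properties.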

17 pages, 384 KiB  
Article
Energy Disaggregation Using Elastic Matching Algorithms
by Pascal A. Schirmer, Iosif Mporas and Michael Paraskevas
Entropy 2020, 22(1), 71; https://doi.org/10.3390/e22010071 - 6 Jan 2020
Cited by 24 | Viewed by 3542
Abstract
In this article, an energy disaggregation architecture using elastic matching algorithms is presented. The architecture uses a database of reference energy consumption signatures and compares them with incoming energy consumption frames using template matching. In contrast to machine learning-based approaches, which require a significant amount of data to train a model, elastic matching-based approaches have no model training process but perform recognition using template matching. Five different elastic matching algorithms were evaluated across different datasets, and the experimental results showed that the minimum variance matching algorithm outperforms all other evaluated matching algorithms. The best-performing minimum variance matching algorithm improved the energy disaggregation accuracy by 2.7% when compared to the baseline dynamic time warping algorithm. Full article
(This article belongs to the Section Signal and Data Analysis)
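The baseline mentioned here, dynamic time warping, illustrates what "elastic" matching means: a consumption frame is compared to a reference signature while allowing the time axis to stretch. A minimal DTW sketch on made-up appliance signatures (illustrative, not the paper's datasets):

```python
import numpy as np

def dtw_distance(a, b):
    """Classic dynamic time warping distance between two 1-D sequences."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j],      # insertion
                                 D[i, j - 1],      # deletion
                                 D[i - 1, j - 1])  # match
    return D[n, m]

# A time-stretched copy of a signature matches far better than a
# different device's signature (values in watts, purely illustrative).
fridge = [0, 0, 120, 120, 120, 0, 0]
fridge_stretched = [0, 0, 0, 120, 120, 120, 120, 0, 0]
kettle = [0, 2000, 2000, 0, 0, 0, 0]
d_same = dtw_distance(fridge, fridge_stretched)   # 0: warping absorbs stretch
d_diff = dtw_distance(fridge, kettle)             # large
```

Template-based disaggregation then assigns each incoming frame to the reference signature with the smallest elastic distance, with no training phase at all.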

13 pages, 2502 KiB  
Article
Bounds on Mixed State Entanglement
by Bruno Leggio, Anna Napoli, Hiromichi Nakazato and Antonino Messina
Entropy 2020, 22(1), 62; https://doi.org/10.3390/e22010062 - 1 Jan 2020
Cited by 6 | Viewed by 3474
Abstract
In the general framework of d₁ × d₂ mixed states, we derive an explicit bound for bipartite negative-partial-transpose (NPT) entanglement based on the mixedness characterization of the physical system. The derived result is very general, being based only on the assumption of finite dimensionality. In addition, it turns out to be of experimental interest, since some purity-measuring protocols are known. Exploiting the bound in the particular case of thermal entanglement, a way to connect thermodynamic features to the monogamy of quantum correlations is suggested, and some recent results on the subject are given a physically clear explanation. Full article
(This article belongs to the Section Quantum Information)
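The NPT entanglement in question is measured by negativity: the summed magnitude of the negative eigenvalues of the partially transposed state, with purity Tr(ρ²) as the mixedness quantity the bound is built on. A sketch on two-qubit Werner states, which are NPT exactly when p > 1/3 (standard example, not the paper's derivation):

```python
import numpy as np

def partial_transpose(rho, d1=2, d2=2):
    """Partial transpose over the second subsystem of a d1 x d2 state."""
    r = rho.reshape(d1, d2, d1, d2)        # indices (i, j; k, l)
    return r.transpose(0, 3, 2, 1).reshape(d1 * d2, d1 * d2)  # (i, l; k, j)

def negativity(rho):
    """Sum of |negative eigenvalues| of the partial transpose."""
    ev = np.linalg.eigvalsh(partial_transpose(rho))
    return float(-ev[ev < 0].sum())

def purity(rho):
    """Mixedness characterization Tr(rho^2), 1/4 <= value <= 1 for 2 qubits."""
    return float(np.trace(rho @ rho).real)

# Werner state: p * |phi+><phi+| + (1 - p) * I/4; NPT iff p > 1/3.
phi = np.zeros(4)
phi[0] = phi[3] = 1 / np.sqrt(2)
bell = np.outer(phi, phi)

def werner(p):
    return p * bell + (1 - p) * np.eye(4) / 4

n_sep = negativity(werner(0.2))   # below the p = 1/3 threshold: zero
n_ent = negativity(werner(0.9))   # NPT: negativity (3p - 1)/4 = 0.425
```

Since purity is accessible through known measurement protocols, a bound of negativity in terms of mixedness makes the entanglement estimate experimentally usable, which is the point stressed in the abstract.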

15 pages, 2142 KiB  
Article
Nonlinear Information Bottleneck
by Artemy Kolchinsky, Brendan D. Tracey and David H. Wolpert
Entropy 2019, 21(12), 1181; https://doi.org/10.3390/e21121181 - 30 Nov 2019
Cited by 85 | Viewed by 9299
Abstract
The information bottleneck (IB) is a technique for extracting information in one random variable X that is relevant for predicting another random variable Y. IB works by encoding X in a compressed "bottleneck" random variable M from which Y can be accurately decoded. However, finding the optimal bottleneck variable involves a difficult optimization problem, which until recently had been considered for only two limited cases: discrete X and Y with small state spaces, and continuous X and Y with a Gaussian joint distribution (in which case the optimal encoding and decoding maps are linear). We propose a method for performing IB on arbitrarily distributed discrete and/or continuous X and Y, while allowing for nonlinear encoding and decoding maps. Our approach relies on a novel non-parametric upper bound for mutual information. We describe how to implement our method using neural networks. We then show that it achieves better performance than the recently proposed "variational IB" method on several real-world datasets. Full article
(This article belongs to the Special Issue Information Bottleneck: Theory and Applications in Deep Learning)
