Table of Contents

Entropy, Volume 16, Issue 7 (July 2014), Pages 3552-4184

  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive table of contents of newly released issues.
  • PDF is the official format for papers, which are published in both HTML and PDF forms. To view the papers in PDF format, click on the "PDF Full-text" link and use the free Adobe Reader to open them.
Open Access Article: Characterizing the Asymptotic Per-Symbol Redundancy of Memoryless Sources over Countable Alphabets in Terms of Single-Letter Marginals
Entropy 2014, 16(7), 4168-4184; https://doi.org/10.3390/e16074168
Received: 27 May 2014 / Revised: 24 June 2014 / Accepted: 7 July 2014 / Published: 23 July 2014
PDF Full-text (258 KB) | HTML Full-text | XML Full-text
Abstract
The minimum expected number of bits needed to describe a random variable is its entropy, assuming knowledge of the distribution of the random variable. On the other hand, universal compression describes data supposing that the underlying distribution is unknown, but that it belongs to a known set P of distributions. However, since universal descriptions are not matched exactly to the underlying distribution, the number of bits they use on average is higher, and the excess over the entropy is the redundancy. In this paper, we study the redundancy incurred by the universal description of strings of positive integers (Z+), the strings being generated independently and identically distributed (i.i.d.) according to an unknown distribution over Z+ in a known collection P. We first show that if describing a single symbol incurs finite redundancy, then P is tight, but that the converse does not always hold. If a single symbol can be described with finite worst-case regret (a more stringent formulation than the redundancy above), then it is known that describing length-n i.i.d. strings incurs only vanishing (to zero) redundancy per symbol as n increases. On the contrary, we show it is possible that the description of a single symbol from an unknown distribution in P incurs finite redundancy, yet the description of length-n i.i.d. strings incurs a constant (> 0) redundancy per symbol encoded. We then show a sufficient condition on single-letter marginals, such that length-n i.i.d. samples will incur vanishing redundancy per symbol encoded.
(This article belongs to the Section Information Theory)
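As context for the abstract above, a minimal reminder of the standard single-shot quantities it refers to (textbook definitions, not notation taken from the paper itself):

```latex
% Entropy of a source with known distribution p over Z^+
H(p) = -\sum_{x \in \mathbb{Z}^+} p(x) \log p(x)

% Expected redundancy of a (universal) code with length function L,
% used on data drawn from p: the excess of its average length over the entropy
R(L, p) = \mathbb{E}_p[L(X)] - H(p)

% Worst-case expected redundancy over the known collection \mathcal{P}
R(L, \mathcal{P}) = \sup_{p \in \mathcal{P}} \bigl( \mathbb{E}_p[L(X)] - H(p) \bigr)
```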
Open Access Article: Network Decomposition and Complexity Measures: An Information Geometrical Approach
Entropy 2014, 16(7), 4132-4167; https://doi.org/10.3390/e16074132
Received: 28 March 2014 / Revised: 24 June 2014 / Accepted: 14 July 2014 / Published: 23 July 2014
Cited by 3 | PDF Full-text (1740 KB) | HTML Full-text | XML Full-text
Abstract
We consider the graph representation of a stochastic model with n binary variables and develop an information-theoretical framework to measure the degree of statistical association between subsystems, as well as the associations represented by each edge of the graph. In addition, we introduce novel measures of complexity with respect to the decomposability of the system, based on the geometric product of Kullback–Leibler (KL) divergences. The novel complexity measures satisfy the boundary condition of vanishing in the limits of completely random and completely ordered states, as well as in the presence of an independent subsystem of any size. Such complexity measures, being based on geometric means, are sensitive to the heterogeneity of dependencies between subsystems and to the amount of information propagation shared across the entire system.
(This article belongs to the Special Issue Information Geometry)
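As a loose illustration of measuring statistical association among binary variables, the sketch below computes the multi-information (the KL divergence between the joint distribution and the product of its marginals); this is a standard quantity, not the paper's geometric-product measure:

```python
import numpy as np

def multi_information(p_joint):
    """KL divergence between a joint distribution over binary variables
    and the product of its marginals (multi-information)."""
    p_joint = np.asarray(p_joint, dtype=float)
    n = p_joint.ndim
    # Marginal of each binary variable
    marginals = [p_joint.sum(axis=tuple(j for j in range(n) if j != i)) for i in range(n)]
    # Product of marginals, broadcast back to the joint's shape
    p_indep = np.ones_like(p_joint)
    for i, m in enumerate(marginals):
        shape = [1] * n
        shape[i] = 2
        p_indep = p_indep * m.reshape(shape)
    mask = p_joint > 0
    return float(np.sum(p_joint[mask] * np.log(p_joint[mask] / p_indep[mask])))

# Two perfectly correlated bits: multi-information equals log 2
p = np.array([[0.5, 0.0], [0.0, 0.5]])
print(multi_information(p))  # ~0.6931
```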
Open Access Article: Numerical Investigation on the Temperature Characteristics of the Voice Coil for a Woofer Using Thermal Equivalent Heat Conduction Models
Entropy 2014, 16(7), 4121-4131; https://doi.org/10.3390/e16074121
Received: 21 May 2014 / Revised: 3 July 2014 / Accepted: 7 July 2014 / Published: 21 July 2014
PDF Full-text (443 KB) | HTML Full-text | XML Full-text
Abstract
The objective of this study is to numerically investigate the temperature and heat transfer characteristics of the voice coil for a woofer, with and without bobbins, using thermal equivalent heat conduction models. The temperature and heat transfer characteristics of the main components of the woofer were analyzed with input powers ranging from 5 W to 60 W. The numerical results for the voice coil showed good agreement, within ±1%, with the data of Odenbach (2003). The temperatures of the voice coil and its units for the woofer without the bobbin were, on average, 6.1% and 5.0% lower, respectively, than those of the woofer with the bobbin. However, at an input power of 30 W for the voice coil, the temperatures of the main components of the woofer without the bobbin were 40.0% higher on average than those of the woofer obtained by Lee et al. (2013).
Open Access Article: Can the Hexagonal Ice-like Model Render the Spectroscopic Fingerprints of Structured Water? Feedback from Quantum-Chemical Computations
Entropy 2014, 16(7), 4101-4120; https://doi.org/10.3390/e16074101
Received: 2 June 2014 / Revised: 14 July 2014 / Accepted: 16 July 2014 / Published: 21 July 2014
PDF Full-text (1528 KB) | HTML Full-text | XML Full-text
Abstract
The spectroscopic features of the multilayer honeycomb model of structured water are analyzed on theoretical grounds, by using high-level ab initio quantum-chemical methodologies, through model systems built from two fused hexagons of water molecules: the monomeric system [H19O10], in different oxidation states (anionic and neutral species). The findings do not support anionic species as the origin of the spectroscopic fingerprints observed experimentally for structured water. In this context, hexameric anions can just be seen as a source of hydrated hydroxyl anions and cationic species. The results for the neutral dimer are, however, fully consistent with the experimental evidence related to both absorption and fluorescence spectra. The neutral π-stacked dimer [H38O20] can be assigned as the species mainly responsible for the recorded absorption and fluorescence spectra, with computed band maxima at 271 nm (4.58 eV) and 441 nm (2.81 eV), respectively. The important role of triplet excited states is finally discussed. The most intense vertical triplet → triplet transition is predicted to be at 318 nm (3.90 eV).
(This article belongs to the Special Issue Entropy and EZ-Water)
Open Access Article: Using Geometry to Select One Dimensional Exponential Families That Are Monotone Likelihood Ratio in the Sample Space, Are Weakly Unimodal and Can Be Parametrized by a Measure of Central Tendency
Entropy 2014, 16(7), 4088-4100; https://doi.org/10.3390/e16074088
Received: 30 April 2014 / Revised: 30 June 2014 / Accepted: 14 July 2014 / Published: 18 July 2014
PDF Full-text (262 KB) | HTML Full-text | XML Full-text
Abstract
One dimensional exponential families on finite sample spaces are studied using the geometry of the simplex Δ°_{n-1} and that of a transformation V_{n-1} of its interior. This transformation is the natural parameter space associated with the family of multinomial distributions. The space V_{n-1} is partitioned into cones that are used to find one dimensional families with desirable properties for modeling and inference. These properties include the availability of uniformly most powerful tests and estimators that exhibit optimal properties in terms of variability and unbiasedness.
(This article belongs to the Special Issue Information Geometry)
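For background, the standard form of a one-dimensional exponential family and the monotone likelihood ratio property it can enjoy (textbook facts, not the paper's notation):

```latex
% One-dimensional exponential family with natural parameter \theta and statistic T(x)
p_\theta(x) = h(x)\, \exp\bigl(\theta\, T(x) - A(\theta)\bigr), \qquad
A(\theta) = \log \sum_x h(x)\, e^{\theta T(x)}

% Monotone likelihood ratio in T: for \theta_2 > \theta_1 the ratio
%   p_{\theta_2}(x) / p_{\theta_1}(x) = \exp\bigl((\theta_2-\theta_1) T(x) - (A(\theta_2)-A(\theta_1))\bigr)
% is nondecreasing in T(x), which yields uniformly most powerful one-sided tests.
```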
Open Access Editorial: Biosemiotic Entropy: Concluding the Series
Entropy 2014, 16(7), 4060-4087; https://doi.org/10.3390/e16074060
Received: 21 October 2013 / Revised: 9 June 2014 / Accepted: 1 July 2014 / Published: 18 July 2014
Cited by 1 | PDF Full-text (859 KB) | HTML Full-text | XML Full-text
Abstract
This article concludes the special issue on Biosemiotic Entropy, looking toward the future on the basis of current and prior results. It highlights certain aspects of the series concerning factors that damage and degenerate biosignaling systems. As in ordinary linguistic discourse, well-formedness (coherence) in biological signaling systems depends on valid representations correctly construed: a series of proofs are presented and generalized to all meaningful sign systems. The proofs show why infants must (as empirical evidence shows they do) proceed through a strict sequence of formal steps in acquiring any language. Classical and contemporary conceptions of entropy and information are deployed to show why factors that interfere with coherence in biological signaling systems are necessary and sufficient causes of disorders, diseases, and mortality. Known sources of such formal degeneracy in living organisms (here termed biosemiotic entropy) include: (a) toxicants; (b) pathogens; (c) excessive exposures to radiant energy and/or sufficiently powerful electromagnetic fields; (d) traumatic injuries; and (e) interactions between the foregoing factors. Just as Jaynes proved that irreversible changes invariably increase entropy, the theory of true narrative representations (TNR theory) demonstrates that factors disrupting the well-formedness (coherence) of valid representations, all else being held equal, must increase biosemiotic entropy, the kind impacting biosignaling systems.
(This article belongs to the Special Issue Biosemiotic Entropy: Disorder, Disease, and Mortality)
Open Access Article: Entropy and Its Discontents: A Note on Definitions
Entropy 2014, 16(7), 4044-4059; https://doi.org/10.3390/e16074044
Received: 29 May 2014 / Revised: 27 June 2014 / Accepted: 8 July 2014 / Published: 17 July 2014
Cited by 2 | PDF Full-text (431 KB) | HTML Full-text | XML Full-text
Abstract
The routine definitions of Shannon entropy for discrete and continuous probability laws show inconsistencies that make them not reciprocally coherent. We propose a few possible modifications of these quantities so that: (1) they no longer show incongruities; and (2) they go one into the other in a suitable limit, as the result of a renormalization. The properties of the new quantities would differ slightly from those of the usual entropies in a few other respects.
(This article belongs to the Section Information Theory)
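For context, the two routine definitions the abstract contrasts, and the standard quantization relation that exposes their mismatch (textbook material, not the paper's proposed renormalization):

```latex
% Discrete (Shannon) entropy and continuous (differential) entropy
H(p) = -\sum_i p_i \log p_i, \qquad
h(f) = -\int f(x) \log f(x)\, dx

% Quantizing a continuous X with density f into bins of width \Delta gives,
% for small \Delta, the familiar incoherence between the two definitions:
H(X^{\Delta}) \approx h(X) - \log \Delta \;\longrightarrow\; +\infty \quad (\Delta \to 0)
```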
Open Access Article: Application of a Modified Entropy Computational Method in Assessing the Complexity of Pulse Wave Velocity Signals in Healthy and Diabetic Subjects
Entropy 2014, 16(7), 4032-4043; https://doi.org/10.3390/e16074032
Received: 19 May 2014 / Revised: 2 July 2014 / Accepted: 8 July 2014 / Published: 17 July 2014
Cited by 13 | PDF Full-text (1232 KB) | HTML Full-text | XML Full-text
Abstract
Using 1000 successive points of a pulse wave velocity (PWV) series, we previously distinguished healthy from diabetic subjects with multi-scale entropy (MSE) using a scale factor of 10. One major limitation is the long time required for data acquisition (i.e., 20 min). This study aimed at validating the sensitivity of a novel method, short-time MSE (sMSE), that utilizes a substantially smaller sample size (i.e., 600 consecutive points) in differentiating the complexity of PWV signals, both in simulation and in human subjects divided into four groups: healthy young (Group 1; n = 24) and middle-aged (Group 2; n = 30) subjects without known cardiovascular disease, and middle-aged individuals with well-controlled (Group 3; n = 18) and poorly-controlled (Group 4; n = 22) type 2 diabetes mellitus. The results demonstrated that although conventional MSE could differentiate the subjects using 1000 consecutive PWV series points, sensitivity was lost using only 600 points. The simulation study revealed consistent results. By contrast, the novel sMSE method produced significant differences in entropy in both the simulation and the test subjects. In conclusion, this study demonstrated that using the novel sMSE approach for PWV analysis, the time for data acquisition can be substantially reduced to that required for 600 cardiac cycles (~10 min), with remarkable preservation of sensitivity in differentiating among healthy, aged, and diabetic populations.
(This article belongs to the Special Issue Entropy and Cardiac Physics)
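A minimal sketch of the conventional MSE building blocks (coarse-graining followed by sample entropy), included only as background; the paper's short-time sMSE variant is not reproduced here, and the parameter choices below are illustrative:

```python
import numpy as np

def coarse_grain(x, scale):
    """Average non-overlapping windows of length `scale`."""
    x = np.asarray(x, dtype=float)
    n = len(x) // scale
    return x[:n * scale].reshape(n, scale).mean(axis=1)

def sample_entropy(x, m=2, r_factor=0.15):
    """SampEn(m, r) with tolerance r = r_factor * std(x)."""
    x = np.asarray(x, dtype=float)
    r = r_factor * x.std()

    def count_matches(length):
        templates = np.array([x[i:i + length] for i in range(len(x) - length)])
        # Chebyshev distance between all pairs of templates
        dists = np.max(np.abs(templates[:, None, :] - templates[None, :, :]), axis=2)
        # Count unordered pairs (i != j) with distance <= r
        return (np.sum(dists <= r) - len(templates)) / 2

    B, A = count_matches(m), count_matches(m + 1)
    return -np.log(A / B) if A > 0 and B > 0 else np.inf

def mse(x, scales=range(1, 11)):
    return [sample_entropy(coarse_grain(x, s)) for s in scales]

# Example on 600 points of white noise (entropy drops with scale for uncorrelated noise)
print(mse(np.random.default_rng(0).standard_normal(600), scales=range(1, 4)))
```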
Open Access Article: New Riemannian Priors on the Univariate Normal Model
Entropy 2014, 16(7), 4015-4031; https://doi.org/10.3390/e16074015
Received: 17 April 2014 / Revised: 23 June 2014 / Accepted: 9 July 2014 / Published: 17 July 2014
Cited by 6 | PDF Full-text (2418 KB) | HTML Full-text | XML Full-text
Abstract
The current paper introduces new prior distributions on the univariate normal model, with the aim of applying them to the classification of univariate normal populations. These new prior distributions are entirely based on the Riemannian geometry of the univariate normal model, so that they can be thought of as "Riemannian priors". Precisely, if {p_θ ; θ ∈ Θ} is any parametrization of the univariate normal model, the paper considers prior distributions G(θ̄, γ) with hyperparameters θ̄ ∈ Θ and γ > 0, whose density with respect to Riemannian volume is proportional to exp(−d²(θ, θ̄)/2γ²), where d²(θ, θ̄) is the square of Rao's Riemannian distance. The distributions G(θ̄, γ) are termed Gaussian distributions on the univariate normal model. The motivation for considering a distribution G(θ̄, γ) is that this distribution gives a geometric representation of a class or cluster of univariate normal populations. Indeed, G(θ̄, γ) has a unique mode θ̄ (precisely, θ̄ is the unique Riemannian center of mass of G(θ̄, γ), as shown in the paper), and its dispersion away from θ̄ is given by γ. Therefore, one thinks of members of the class represented by G(θ̄, γ) as being centered around θ̄ and lying within a typical distance determined by γ. The paper defines rigorously the Gaussian distributions G(θ̄, γ) and describes an algorithm for computing maximum likelihood estimates of their hyperparameters. Based on this algorithm and on the Laplace approximation, it describes how the distributions G(θ̄, γ) can be used as prior distributions for Bayesian classification of large univariate normal populations. In a concrete application to texture image classification, it is shown that this leads to an improvement in performance over the use of conjugate priors.
(This article belongs to the Special Issue Information Geometry)
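As background for the abstract's notation: the Fisher–Rao metric on the univariate normal model takes the standard form below (a textbook fact), and the Riemannian prior restated from the abstract has the Gaussian-like density shown:

```latex
% Fisher-Rao metric on the univariate normal model, \theta = (\mu, \sigma)
ds^2 = \frac{d\mu^2 + 2\, d\sigma^2}{\sigma^2}

% Riemannian ("Gaussian") prior with center \bar\theta and dispersion \gamma,
% density taken with respect to the Riemannian volume element
p(\theta \mid \bar\theta, \gamma) \;\propto\; \exp\!\left(-\frac{d^2(\theta, \bar\theta)}{2\gamma^2}\right)
```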
Open Access Article: A Note of Caution on Maximizing Entropy
Entropy 2014, 16(7), 4004-4014; https://doi.org/10.3390/e16074004
Received: 4 May 2014 / Revised: 10 July 2014 / Accepted: 15 July 2014 / Published: 17 July 2014
Cited by 2 | PDF Full-text (709 KB) | HTML Full-text | XML Full-text
Abstract
The Principle of Maximum Entropy is often used to update probabilities in light of evidence instead of performing Bayesian updating using Bayes' Theorem, and its use often yields efficacious results. However, in some circumstances the results seem unacceptable and unintuitive. This paper discusses some of these cases and how to identify some of the situations in which the principle should not be used. The paper starts by reviewing three approaches to probability, namely the classical approach, the limiting frequency approach, and the Bayesian approach. It then introduces maximum entropy and shows its relationship to the three approaches. Next, through examples, it shows that maximizing entropy can sometimes stand in direct opposition to Bayesian updating based on reasonable prior beliefs. The paper concludes that if we take the Bayesian view that probability is about reasonable belief based on all available information, then we can resolve the conflict between the maximum entropy approach and the Bayesian approach that is demonstrated in the examples.
(This article belongs to the Special Issue Maximum Entropy and Its Application)
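A toy illustration of the Principle of Maximum Entropy itself (Jaynes' die with a constrained mean); this is only context for the abstract and does not reproduce the paper's counterexamples:

```python
import numpy as np
from scipy.optimize import brentq

# Among all distributions on {1,...,6} with a prescribed mean, the maximum
# entropy distribution is exponential in the outcome, p_i ∝ exp(λ i).
outcomes = np.arange(1, 7)
target_mean = 4.5

def mean_given_lambda(lam):
    w = np.exp(lam * outcomes)
    return (outcomes * w).sum() / w.sum()

# Solve for the Lagrange multiplier that matches the mean constraint
lam = brentq(lambda l: mean_given_lambda(l) - target_mean, -10, 10)
p = np.exp(lam * outcomes)
p /= p.sum()

print(np.round(p, 4))          # skewed toward 6, as expected
print(p @ outcomes)            # ~4.5
print(-(p * np.log(p)).sum())  # entropy of the maxent distribution
```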
Open Access Article: Human Brain Networks: Spiking Neuron Models, Multistability, Synchronization, Thermodynamics, Maximum Entropy Production, and Anesthetic Cascade Mechanisms
Entropy 2014, 16(7), 3939-4003; https://doi.org/10.3390/e16073939
Received: 6 May 2014 / Revised: 19 June 2014 / Accepted: 3 July 2014 / Published: 17 July 2014
Cited by 7 | PDF Full-text (1906 KB) | HTML Full-text | XML Full-text
Abstract
Advances in neuroscience have been closely linked to mathematical modeling beginning with the integrate-and-fire model of Lapicque and proceeding through the modeling of the action potential by Hodgkin and Huxley to the current era. The fundamental building block of the central nervous system, the neuron, may be thought of as a dynamic element that is "excitable", and can generate a pulse or spike whenever the electrochemical potential across the cell membrane of the neuron exceeds a threshold. A key application of nonlinear dynamical systems theory to the neurosciences is to study phenomena of the central nervous system that exhibit nearly discontinuous transitions between macroscopic states. A very challenging and clinically important problem exhibiting this phenomenon is the induction of general anesthesia. In any specific patient, the transition from consciousness to unconsciousness as the concentration of anesthetic drugs increases is very sharp, resembling a thermodynamic phase transition. This paper focuses on multistability theory for continuous and discontinuous dynamical systems having a set of multiple isolated equilibria and/or a continuum of equilibria. Multistability is the property whereby the solutions of a dynamical system can alternate between two or more mutually exclusive Lyapunov stable and convergent equilibrium states under asymptotically slowly changing inputs or system parameters. In this paper, we extend the theory of multistability to continuous, discontinuous, and stochastic nonlinear dynamical systems. In particular, Lyapunov-based tests for multistability and synchronization of dynamical systems with continuously differentiable and absolutely continuous flows are established. The results are then applied to excitatory and inhibitory biological neuronal networks to explain the underlying mechanism of action for anesthesia and consciousness from a multistable dynamical system perspective, thereby providing a theoretical foundation for general anesthesia using the network properties of the brain. Finally, we present some key emergent properties from the fields of thermodynamics and electromagnetic field theory to qualitatively explain the underlying neuronal mechanisms of action for anesthesia and consciousness.
(This article belongs to the Special Issue Entropy in Human Brain Networks)
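A minimal sketch of Lapicque's leaky integrate-and-fire neuron, the historical model the abstract cites as a starting point; parameter values are illustrative and not taken from the paper:

```python
import numpy as np

def lif_spike_times(I, dt=0.1, tau=10.0, R=1.0, v_rest=0.0, v_thresh=1.0, v_reset=0.0):
    """Simulate dv/dt = (-(v - v_rest) + R*I(t)) / tau and return spike times (ms)."""
    v, spikes = v_rest, []
    for k, i_k in enumerate(I):
        v += dt * (-(v - v_rest) + R * i_k) / tau
        if v >= v_thresh:          # threshold crossing -> spike, then reset
            spikes.append(k * dt)
            v = v_reset
    return spikes

t = np.arange(0, 200, 0.1)               # 200 ms at 0.1 ms resolution
current = np.where(t > 50, 1.5, 0.0)     # step input switched on at 50 ms
print(lif_spike_times(current)[:5])      # regular spiking once the input is on
```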
Open Access Review: Panel I: Connecting 2nd Law Analysis with Economics, Ecology and Energy Policy
Entropy 2014, 16(7), 3903-3938; https://doi.org/10.3390/e16073903
Received: 11 February 2014 / Revised: 4 June 2014 / Accepted: 10 June 2014 / Published: 16 July 2014
Cited by 5 | PDF Full-text (1057 KB) | HTML Full-text | XML Full-text
Abstract
The present paper is a review of several papers from the Proceedings of the Joint European Thermodynamics Conference, held in Brescia, Italy, 1–5 July 2013, namely papers introduced by their authors at Panel I of the conference. Panel I was devoted to applications of the Second Law of Thermodynamics to social issues: economics, ecology, sustainability, and energy policy. The concept called Available Energy, which goes back to the mid-nineteenth-century work of Kelvin, Rankine, Maxwell and Gibbs, is relevant to all of the papers. Various names have been applied to the concept when interactions between the system of interest and an environment are involved. Today, the name exergy is generally accepted. The scope of the papers being reviewed is wide, and they complement one another well.
(This article belongs to the Special Issue Advances in Methods and Foundations of Non-Equilibrium Thermodynamics)
Open Access Article: Identifying Chaotic FitzHugh–Nagumo Neurons Using Compressive Sensing
Entropy 2014, 16(7), 3889-3902; https://doi.org/10.3390/e16073889
Received: 4 March 2014 / Revised: 23 June 2014 / Accepted: 7 July 2014 / Published: 15 July 2014
Cited by 6 | PDF Full-text (339 KB) | HTML Full-text | XML Full-text
Abstract
We develop a completely data-driven approach to reconstructing coupled neuronal networks that contain a small subset of chaotic neurons. Such chaotic elements can be the result of parameter shifts in their individual dynamical systems and may lead to abnormal functions of the network. Accurately identifying the chaotic neurons may thus be necessary and important, for example, for applying appropriate controls to bring the network back to a normal state. However, due to couplings among the nodes, the measured time series, even those from non-chaotic neurons, would appear random, rendering inapplicable traditional nonlinear time-series analysis, such as the delay-coordinate embedding method, which yields information about the global dynamics of the entire network. Our method is based on compressive sensing. In particular, we demonstrate that identifying chaotic elements can be formulated as a general problem of reconstructing the nodal dynamical systems, the network connections, and all coupling functions, as well as their weights. The workings and efficiency of the method are illustrated using networks of non-identical FitzHugh–Nagumo neurons with randomly-distributed coupling weights.
(This article belongs to the Special Issue Information in Dynamical Systems and Complex Systems)
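A miniature of the general compressive-sensing idea (sparse regression selecting which candidate terms drive a measured signal); this uses a generic Lasso on synthetic data and is not the authors' exact formulation:

```python
import numpy as np
from sklearn.linear_model import Lasso

# The observed rate of change of a node is assumed to be a sparse combination of
# candidate basis functions of the measured states; sparsity-promoting regression
# then recovers which terms (couplings, nonlinearities) are actually present.
rng = np.random.default_rng(1)
x = rng.uniform(-1, 1, size=(200, 1))                          # measured state samples
basis = np.hstack([x**k for k in range(1, 8)])                 # candidate terms x, x^2, ..., x^7
true_coeffs = np.array([1.0, 0.0, -1.0, 0.0, 0.0, 0.0, 0.0])   # sparse truth: x - x^3
dxdt = basis @ true_coeffs + 0.01 * rng.standard_normal(200)   # noisy "measurements"

model = Lasso(alpha=1e-3, fit_intercept=False, max_iter=50_000).fit(basis, dxdt)
print(np.round(model.coef_, 3))   # close to [1, 0, -1, 0, 0, 0, 0]
```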
Open Access Article: The Entropy-Based Quantum Metric
Entropy 2014, 16(7), 3878-3888; https://doi.org/10.3390/e16073878
Received: 15 May 2014 / Revised: 25 June 2014 / Accepted: 11 July 2014 / Published: 15 July 2014
Cited by 6 | PDF Full-text (215 KB) | HTML Full-text | XML Full-text
Abstract
The von Neumann entropy S(D̂) generates in the space of quantum density matrices D̂ the Riemannian metric ds² = −d²S(D̂), which is physically founded and which characterises the amount of quantum information lost by mixing D̂ and D̂ + dD̂. A rich geometric structure is thereby implemented in quantum mechanics. It includes a canonical mapping between the spaces of states and of observables, which involves the Legendre transform of S(D̂). The Kubo scalar product is recovered within the space of observables. Applications are given to equilibrium and non-equilibrium quantum statistical mechanics. There the formalism is specialised to the relevant space of observables and to the associated reduced states issued from the maximum entropy criterion, which result from the exact states through an orthogonal projection. Von Neumann's entropy specialises into a relevant entropy. Comparison is made with other metrics. The Riemannian properties of the metric ds² = −d²S(D̂) are derived. The curvature arises from the non-Abelian nature of quantum mechanics; its general expression and its explicit form for qubits are given, as well as geodesics.
(This article belongs to the Special Issue Information Geometry)
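For reference, the standard definition of the von Neumann entropy and the metric the abstract builds from it:

```latex
% von Neumann entropy of a density matrix \hat{D} (standard definition)
S(\hat{D}) = -\,\mathrm{Tr}\bigl(\hat{D} \ln \hat{D}\bigr)

% The entropy-based metric considered in the abstract: (minus) the second
% differential of S, evaluated along a variation d\hat{D} of the state
ds^2 = -\,d^2 S(\hat{D})
```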
Open Access Article: Many Can Work Better than the Best: Diagnosing with Medical Images via Crowdsourcing
Entropy 2014, 16(7), 3866-3877; https://doi.org/10.3390/e16073866
Received: 8 March 2014 / Revised: 22 June 2014 / Accepted: 3 July 2014 / Published: 14 July 2014
Cited by 2 | PDF Full-text (1130 KB) | HTML Full-text | XML Full-text
Abstract
We study a crowdsourcing-based diagnosis algorithm, motivated by the fact that what is currently in short supply is not medical staff but high-level experts. Our approach is to make use of general practitioners' efforts: for every patient whose illness cannot be judged definitely, we arrange for them to be diagnosed multiple times by different doctors, and we collect all the diagnosis results to derive the final judgement. Our inference model is based on the statistical consistency of the diagnosis data. To evaluate the proposed model, we conduct experiments on both synthetic and real data; the results show that it outperforms the benchmarks.
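A minimal baseline for aggregating repeated diagnoses is simple majority voting, sketched below; the paper's inference model, based on the statistical consistency of the diagnosis data, is more refined and is not reproduced here. All names and data in the sketch are hypothetical:

```python
from collections import Counter

def majority_vote(labels):
    """Return the most frequent label; ties are broken by first occurrence."""
    return Counter(labels).most_common(1)[0][0]

# Each patient is examined by several general practitioners (hypothetical data)
diagnoses = {
    "patient_1": ["benign", "benign", "malignant"],
    "patient_2": ["malignant", "malignant", "benign", "malignant"],
}
final = {pid: majority_vote(votes) for pid, votes in diagnoses.items()}
print(final)  # {'patient_1': 'benign', 'patient_2': 'malignant'}
```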