Entropy, Volume 15, Issue 9 (September 2013) – 33 articles, Pages 3312-3982

722 KiB  
Article
Methods of Evaluating Thermodynamic Properties of Landscape Cover Using Multispectral Reflected Radiation Measurements by the Landsat Satellite
by Yuriy Puzachenko, Robert Sandlersky and Alexey Sankovski
Entropy 2013, 15(9), 3970-3982; https://doi.org/10.3390/e15093970 - 23 Sep 2013
Cited by 21 | Viewed by 5934
Abstract
The paper discusses methods of evaluating thermodynamic properties of landscape cover based on multi-spectral measurements by the Landsat satellites. The authors demonstrate how these methods can be used for studying the functionality of landscapes and for spatial interpolation of Flux NET system measurements. Full article
(This article belongs to the Special Issue Exergy: Analysis and Applications)

1039 KiB  
Review
Molecular Dynamics at Constant Pressure: Allowing the System to Control Volume Fluctuations via a “Shell” Particle
by Mark J. Uline and David S. Corti
Entropy 2013, 15(9), 3941-3969; https://doi.org/10.3390/e15093941 - 23 Sep 2013
Cited by 21 | Viewed by 6708
Abstract
Since most experimental observations are performed at constant temperature and pressure, the isothermal-isobaric (NPT) ensemble has been widely used in molecular simulations. Nevertheless, the NPT ensemble has only recently been placed on a rigorous foundation. The proper formulation of the NPT ensemble requires a “shell” particle to uniquely identify the volume of the system, thereby avoiding the redundant counting of configurations. Here, we review our recent work in incorporating a shell particle into molecular dynamics simulation algorithms to generate the correct NPT ensemble averages. Unlike previous methods, a piston of unknown mass is no longer needed to control the response time of the volume fluctuations. As the volume of the system is attached to the shell particle, the system itself now sets the time scales for volume and pressure fluctuations. Finally, we discuss a number of tests that ensure the equations of motion sample phase space correctly and consider the response time of the system to pressure changes with and without the shell particle. Overall, the shell particle algorithm is an effective simulation method for studying systems exposed to a constant external pressure and may provide an advantage over other existing constant pressure approaches when developing nonequilibrium molecular dynamics methods. Full article
(This article belongs to the Special Issue Molecular Dynamics Simulation)

163 KiB  
Article
Solutions of Some Nonlinear Diffusion Equations and Generalized Entropy Framework
by Ervin K. Lenzi, Maike A. F. Dos Santos, Flavio S. Michels, Renio S. Mendes and Luiz R. Evangelista
Entropy 2013, 15(9), 3931-3940; https://doi.org/10.3390/e15093931 - 18 Sep 2013
Cited by 5 | Viewed by 6082
Abstract
We investigate solutions of a generalized diffusion equation that contains nonlinear terms in the presence of external forces and reaction terms. The solutions found here can have a compact or long tail behavior and can be expressed in terms of the q-exponential functions present in the Tsallis framework. In the case of the long-tailed behavior, in the asymptotic limit, these solutions can also be connected with the Lévy distributions. In addition, from the results presented here, a rich class of diffusive processes, including normal and anomalous ones, can be obtained. Full article
(This article belongs to the Collection Advances in Applied Statistical Mechanics)
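The Tsallis q-exponential in which these solutions are expressed is easy to evaluate numerically. A minimal sketch (the standard q-exponential definition, not the paper's specific solutions; the parameter values are purely illustrative):

```python
import numpy as np

def q_exponential(x, q):
    """Tsallis q-exponential: [1 + (1 - q) x]_+^(1/(1 - q)), reducing to exp(x) as q -> 1."""
    if abs(q - 1.0) < 1e-12:
        return np.exp(x)
    base = np.maximum(1.0 + (1.0 - q) * x, 0.0)
    with np.errstate(divide="ignore"):
        return base ** (1.0 / (1.0 - q))

# q < 1 gives compact support (the function is cut off at zero);
# q > 1 gives power-law (long) tails, the regime connected with Levy distributions.
print(q_exponential(np.array([-3.0, 0.0, 1.0]), 0.5))  # compact support: first entry is 0
```

A convenient hand check: for q = 2, exp_q(x) = 1/(1 − x) on x < 1, so exp_2(0.5) = 2.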

306 KiB  
Article
Permutation Complexity and Coupling Measures in Hidden Markov Models
by Taichi Haruna and Kohei Nakajima
Entropy 2013, 15(9), 3910-3930; https://doi.org/10.3390/e15093910 - 16 Sep 2013
Cited by 6 | Viewed by 4871
Abstract
Recently, the duality between values (words) and orderings (permutations) has been proposed by the authors as a basis to discuss the relationship between information theoretic measures for finite-alphabet stationary stochastic processes and their permutation analogues. It has been used to give a simple proof of the equality between the entropy rate and the permutation entropy rate for any finite-alphabet stationary stochastic process and to show some results on the excess entropy and the transfer entropy for finite-alphabet stationary ergodic Markov processes. In this paper, we extend our previous results to hidden Markov models and show the equalities between various information theoretic complexity and coupling measures and their permutation analogues. In particular, we show the following two results within the realm of hidden Markov models with ergodic internal processes: the two permutation analogues of the transfer entropy, the symbolic transfer entropy and the transfer entropy on rank vectors, are both equivalent to the transfer entropy if they are considered as the rates, and the directed information theory can be captured by the permutation entropy approach. Full article
843 KiB  
Article
Analysis and Visualization of Seismic Data Using Mutual Information
by José A. Tenreiro Machado and António M. Lopes
Entropy 2013, 15(9), 3892-3909; https://doi.org/10.3390/e15093892 - 16 Sep 2013
Cited by 42 | Viewed by 6686
Abstract
Seismic data is difficult to analyze and classical mathematical tools reveal strong limitations in exposing hidden relationships between earthquakes. In this paper, we study earthquake phenomena from the perspective of complex systems. Global seismic data covering the period from 1962 up to 2011 is analyzed. The events, characterized by their magnitude, geographic location and time of occurrence, are divided into groups, either according to the Flinn-Engdahl (F-E) seismic regions of Earth or using a rectangular grid based on latitude and longitude coordinates. Two methods of analysis are considered and compared in this study. In the first method, the distributions of magnitudes are approximated by Gutenberg-Richter (G-R) distributions and the parameters are used to reveal the relationships among regions. In the second method, the mutual information is calculated and adopted as a measure of similarity between regions. In both cases, using clustering analysis, visualization maps are generated, providing an intuitive and useful representation of the complex relationships that are present among seismic data. Such relationships might not be perceived on classical geographic maps. Therefore, the generated charts are a valid alternative to other visualization tools for understanding the global behavior of earthquakes. Full article
(This article belongs to the Special Issue Dynamical Systems)
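The mutual-information step can be illustrated with a histogram-based estimator. A sketch under simple assumptions (Gaussian toy data in place of earthquake magnitude sequences; the bin count is arbitrary):

```python
import numpy as np

def mutual_information(x, y, bins=8):
    """Mutual information (bits) between two samples, estimated from a joint 2-D histogram."""
    joint, _, _ = np.histogram2d(x, y, bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)   # marginal of x
    py = pxy.sum(axis=0, keepdims=True)   # marginal of y
    nz = pxy > 0
    return float((pxy[nz] * np.log2(pxy[nz] / (px @ py)[nz])).sum())

rng = np.random.default_rng(0)
a = rng.normal(size=5000)
b = rng.normal(size=5000)
print(mutual_information(a, a))   # identical series: MI equals the entropy of the binned data
print(mutual_information(a, b))   # independent series: MI near zero (small positive bias)
```

In the paper's setting, x and y would be the magnitude records of two seismic regions, and the resulting pairwise MI matrix feeds the clustering and visualization steps.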

584 KiB  
Article
Blind Demodulation of Chaotic Direct Sequence Spread Spectrum Signals Based on Particle Filters
by Ting Li, Dexin Zhao, Zhiping Huang, Chunwu Liu, Shaojing Su and Yimeng Zhang
Entropy 2013, 15(9), 3877-3891; https://doi.org/10.3390/e15093877 - 13 Sep 2013
Cited by 6 | Viewed by 5526
Abstract
Applying the particle filter (PF) technique, this paper proposes a PF-based algorithm to blindly demodulate chaotic direct sequence spread spectrum (CDS-SS) signals under colored or non-Gaussian noise conditions. To implement this algorithm, the PFs are modified as follows: (i) the colored or non-Gaussian noises are formulated by autoregressive moving average (ARMA) models, and the parameters that model the noises are included in the state vector; (ii) a range-differentiating factor is introduced into the intruder’s chaotic system equation. Since the range-differentiating factor is able to make the inevitable chaos fitting error advantageous based on the chaos fitting method, the CDS-SS signals can be demodulated according to the range of the estimated message. Simulations show that the proposed PF-based algorithm can obtain a good bit-error rate performance when extracting the original binary message from the CDS-SS signals without any knowledge of the transmitter’s chaotic map or initial value, even when colored or non-Gaussian noises exist. Full article
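The generic PF machinery can be sketched in a few lines. Below is a minimal bootstrap particle filter on a toy linear-Gaussian model (our own illustration only; the paper's algorithm additionally augments the state with the ARMA noise parameters and the range-differentiating factor, none of which is shown here):

```python
import numpy as np

rng = np.random.default_rng(42)

def bootstrap_pf(ys, n_particles=500, a=0.9, sv=1.0, sw=1.0):
    """Bootstrap PF for x_t = a*x_{t-1} + v_t, y_t = x_t + w_t (v, w Gaussian).
    Returns the posterior-mean state estimate at each time step."""
    particles = rng.normal(0.0, 1.0, n_particles)
    estimates = []
    for y in ys:
        particles = a * particles + rng.normal(0.0, sv, n_particles)      # propagate
        w = np.exp(-0.5 * ((y - particles) / sw) ** 2)                    # observation likelihood
        w /= w.sum()
        particles = particles[rng.choice(n_particles, n_particles, p=w)]  # resample
        estimates.append(particles.mean())
    return np.array(estimates)

# Simulate a trajectory, observe it through noise, and filter.
x, xs, ys = 0.0, [], []
for _ in range(200):
    x = 0.9 * x + rng.normal()
    xs.append(x)
    ys.append(x + rng.normal())
est = bootstrap_pf(ys)
```

The filtered posterior mean should track the hidden state more closely than the raw noisy observations do, which is the property the blind demodulator exploits.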

708 KiB  
Review
Biological Water Dynamics and Entropy: A Biophysical Origin of Cancer and Other Diseases
by Robert M. Davidson, Ann Lauritzen and Stephanie Seneff
Entropy 2013, 15(9), 3822-3876; https://doi.org/10.3390/e15093822 - 13 Sep 2013
Cited by 40 | Viewed by 18794
Abstract
This paper postulates that water structure is altered by biomolecules as well as by disease-enabling entities such as certain solvated ions, and in turn water dynamics and structure affect the function of biomolecular interactions. Although the structural and dynamical alterations are subtle, they perturb a well-balanced system sufficiently to facilitate disease. We propose that the disruption of water dynamics between and within cells underlies many disease conditions. We survey recent advances in magnetobiology, nanobiology, and colloid and interface science that point compellingly to the crucial role played by the unique physical properties of quantum coherent nanomolecular clusters of magnetized water in enabling life at the cellular level by solving the “problems” of thermal diffusion, intracellular crowding, and molecular self-assembly. Interphase water and cellular surface tension, normally maintained by biological sulfates at membrane surfaces, are compromised by exogenous interfacial water stressors such as cationic aluminum, with consequences that include greater local water hydrophobicity, increased water tension, and interphase stretching. The ultimate result is greater “stiffness” in the extracellular matrix and either the “soft” cancerous state or the “soft” neurodegenerative state within cells. Our hypothesis provides a basis for understanding why so many idiopathic diseases of today are highly stereotyped and pluricausal. Full article
(This article belongs to the Section Entropy Reviews)

1339 KiB  
Article
Entropies in Alloy Design for High-Entropy and Bulk Glassy Alloys
by Akira Takeuchi, Kenji Amiya, Takeshi Wada, Kunio Yubuta, Wei Zhang and Akihiro Makino
Entropy 2013, 15(9), 3810-3821; https://doi.org/10.3390/e15093810 - 12 Sep 2013
Cited by 104 | Viewed by 10819
Abstract
High-entropy (H-E) alloys, bulk metallic glasses (BMGs) and high-entropy BMGs (HE-BMGs) were statistically analyzed with the help of a database of ternary amorphous alloys. Thermodynamic quantities corresponding to heat of mixing and atomic size differences were calculated as a function of composition of the multicomponent alloys. Actual calculations were performed for the configurational entropy (Sconfig) in defining the H-E alloys and the mismatch entropy (Sσ) normalized with the Boltzmann constant (kB), together with the mixing enthalpy (ΔHmix) based on Miedema’s empirical model and the delta parameter (δ) as a parameter corresponding to Sσ/kB. The comparison between the ΔHmix–δ and ΔHmix–Sσ/kB diagrams for the ternary amorphous alloys revealed Sσ/kB ~ (δ/22)². The zones S, S′ and B’s, where H-E alloys with disordered solid solutions, ordered alloys and BMGs are plotted in the ΔHmix–δ diagram, are correlated with the areas in the ΔHmix–Sσ/kB diagram. The results provide mutual understandings among H-E alloys, BMGs and HE-BMGs. Full article
(This article belongs to the Special Issue High Entropy Alloys)
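The configurational entropy used to define H-E alloys has a simple closed form, Sconfig = −R Σ cᵢ ln cᵢ, which is worth making concrete. A sketch (the standard ideal-mixing formula only; the paper's mismatch-entropy calculation is not reproduced here):

```python
from math import log

R = 8.314  # gas constant, J/(mol K)

def s_config(fractions):
    """Ideal configurational entropy of mixing: S_config = -R * sum(c_i * ln c_i)."""
    assert abs(sum(fractions) - 1.0) < 1e-9, "mole fractions must sum to 1"
    return -R * sum(c * log(c) for c in fractions if c > 0)

# Equiatomic 5-component alloy: S_config = R ln 5 ~ 1.61 R, above the 1.5 R
# threshold commonly used to classify high-entropy alloys.
print(s_config([0.2] * 5) / R)
```

Equiatomic compositions maximize Sconfig for a fixed number of components, which is why H-E alloys are typically near-equiatomic multicomponent systems.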

1782 KiB  
Article
Phase Composition of a CrMo0.5NbTa0.5TiZr High Entropy Alloy: Comparison of Experimental and Simulated Data
by Oleg N. Senkov, Fan Zhang and Jonathan D. Miller
Entropy 2013, 15(9), 3796-3809; https://doi.org/10.3390/e15093796 - 12 Sep 2013
Cited by 59 | Viewed by 8436
Abstract
Microstructure and phase composition of a CrMo0.5NbTa0.5TiZr high entropy alloy were studied in the as-solidified and heat treated conditions. In the as-solidified condition, the alloy consisted of two disordered BCC phases and an ordered cubic Laves phase. The BCC1 phase solidified in the form of dendrites enriched with Mo, Ta and Nb, and its volume fraction was 42%. The BCC2 and Laves phases solidified by the eutectic-type reaction, and their volume fractions were 27% and 31%, respectively. The BCC2 phase was enriched with Ti and Zr and the Laves phase was heavily enriched with Cr. After hot isostatic pressing at 1450 °C for 3 h, the BCC1 dendrites coagulated into round-shaped particles and their volume fraction increased to 67%. The volume fractions of the BCC2 and Laves phases decreased to 16% and 17%, respectively. After subsequent annealing at 1000 °C for 100 h, submicron-sized Laves particles precipitated inside the BCC1 phase, and the alloy consisted of 52% BCC1, 16% BCC2 and 32% Laves phases. Solidification and phase equilibrium simulations were conducted for the CrMo0.5NbTa0.5TiZr alloy using a thermodynamic database developed by CompuTherm LLC. Some discrepancies were found between the calculated and experimental results and the reasons for these discrepancies were discussed. Full article
(This article belongs to the Special Issue High Entropy Alloys)

1256 KiB  
Article
Improved Time Complexities for Learning Boolean Networks
by Yun Zheng and Chee Keong Kwoh
Entropy 2013, 15(9), 3762-3795; https://doi.org/10.3390/e15093762 - 11 Sep 2013
Cited by 3 | Viewed by 5200
Abstract
Existing algorithms for learning Boolean networks (BNs) have time complexities of at least O(N · n^(0.7(k+1))), where n is the number of variables, N is the number of samples and k is the number of inputs in Boolean functions. Some recent studies propose more efficient methods with O(N · n^2) time complexities. However, these methods can only be used to learn monotonic BNs, and their performances are not satisfactory when the sample size is small. In this paper, we mathematically prove that OR/AND BNs, where the variables are related with logical OR/AND operations, can be found with a time complexity of O(k · (N + log n) · n^2), if there are enough noiseless training samples randomly generated from a uniform distribution. We also demonstrate that our method can successfully learn most BNs, whose variables are not related with exclusive OR and Boolean equality operations, with the same order of time complexity for learning OR/AND BNs, indicating our method has good efficiency for learning general BNs other than monotonic BNs. When the datasets are noisy, our method can still successfully identify most BNs with the same efficiency. When compared with two existing methods with the same settings, our method achieves a better comprehensive performance than both of them, especially for small training sample sizes. More importantly, our method can be used to learn all BNs. However, of the two methods that are compared, one can only be used to learn monotonic BNs, and the other one has a much worse time complexity than our method. In conclusion, our results demonstrate that Boolean networks can be learned with improved time complexities. Full article
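The flavor of OR/AND identification can be shown in a few lines. A toy sketch (our own illustration of recovering the input set of an unknown OR function from uniform random noiseless samples; not the paper's actual algorithm or its complexity analysis):

```python
import random

def learn_or_inputs(samples):
    """Keep variable j as an input iff x_j = 1 never co-occurs with y = 0.
    For an OR function and enough uniform noiseless samples, this recovers
    the true input set with high probability."""
    n = len(samples[0][0])
    return {j for j in range(n)
            if all(y == 1 or x[j] == 0 for x, y in samples)}

random.seed(1)
n, true_inputs = 10, {2, 5, 7}
data = []
for _ in range(500):
    x = [random.randint(0, 1) for _ in range(n)]
    data.append((x, int(any(x[j] for j in true_inputs))))
print(learn_or_inputs(data))
```

Each sample rules out spurious variables; a non-input j survives only if x_j = 1 never coincides with y = 0, an event whose probability vanishes exponentially in the sample size.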

280 KiB  
Article
Combination Synchronization of Three Identical or Different Nonlinear Complex Hyperchaotic Systems
by Xiaobing Zhou, Murong Jiang and Yaqun Huang
Entropy 2013, 15(9), 3746-3761; https://doi.org/10.3390/e15093746 - 10 Sep 2013
Cited by 23 | Viewed by 5480
Abstract
In this paper, we investigate the combination synchronization of three nonlinear complex hyperchaotic systems: the complex hyperchaotic Lorenz system, the complex hyperchaotic Chen system and the complex hyperchaotic Lü system. Based on the Lyapunov stability theory, corresponding controllers to achieve combination synchronization among three identical or different nonlinear complex hyperchaotic systems are derived, respectively. Numerical simulations are presented to demonstrate the validity and feasibility of the theoretical analysis. Full article
(This article belongs to the Special Issue Dynamical Systems)

2804 KiB  
Article
On the Calculation of Solid-Fluid Contact Angles from Molecular Dynamics
by Erik E. Santiso, Carmelo Herdes and Erich A. Müller
Entropy 2013, 15(9), 3734-3745; https://doi.org/10.3390/e15093734 - 6 Sep 2013
Cited by 73 | Viewed by 12409
Abstract
A methodology for the determination of the solid-fluid contact angle, to be employed within molecular dynamics (MD) simulations, is developed and systematically applied. The calculation of the contact angle of a fluid drop on a given surface, averaged over an equilibrated MD trajectory, is divided into three main steps: (i) the determination of the fluid molecules that constitute the interface, (ii) the treatment of the interfacial molecules as a point cloud data set to define a geometric surface, using surface meshing techniques to compute the surface normals from the mesh, (iii) the collection and averaging of the interface normals collected from the post-processing of the MD trajectory. The average vector thus found is used to calculate the Cassie contact angle (i.e., the arccosine of the averaged normal z-component). As an example, we explore the effect of the size of a drop of water on the observed solid-fluid contact angle. A single coarse-grained bead representing two water molecules and parameterized using the SAFT-γ Mie equation of state (EoS) is employed, while the solid surfaces are mimicked using integrated potentials. The contact angle is seen to be a strong function of the system size for small nano-droplets. The thermodynamic limit, corresponding to the infinite size (macroscopic) drop, is only truly recovered when using an excess of half a million water coarse-grained beads and/or a drop radius of over 26 nm. Full article
(This article belongs to the Special Issue Molecular Dynamics Simulation)
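The final step of the methodology, the Cassie angle as the arccosine of the averaged normal z-component, is easy to verify on synthetic normals. A sketch (a synthetic cone of unit normals stands in for a meshed drop interface; the interface-detection and meshing steps are not shown):

```python
import numpy as np

def cassie_angle_deg(normals):
    """Contact angle (degrees) from the z-component of the averaged unit surface normals."""
    return float(np.degrees(np.arccos(np.mean(normals[:, 2]))))

# Synthetic check: unit normals tilted 120 degrees from +z with random azimuths
# all share n_z = cos(120 deg), so the recovered angle is 120 degrees.
rng = np.random.default_rng(0)
theta = np.radians(120.0)
phi = rng.uniform(0.0, 2.0 * np.pi, 10000)
normals = np.column_stack([np.sin(theta) * np.cos(phi),
                           np.sin(theta) * np.sin(phi),
                           np.full_like(phi, np.cos(theta))])
print(cassie_angle_deg(normals))  # 120.0 (up to floating-point rounding)
```

In an actual trajectory the normals come from the mesh of the instantaneous interface, and the average is taken over both the mesh and the sampled frames.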

210 KiB  
Article
Consideration on Singularities in Learning Theory and the Learning Coefficient
by Miki Aoyagi
Entropy 2013, 15(9), 3714-3733; https://doi.org/10.3390/e15093714 - 6 Sep 2013
Cited by 5 | Viewed by 4921
Abstract
We consider the learning coefficients in learning theory and give two new methods for obtaining these coefficients in a homogeneous case: a method for finding a deepest singular point and a method to add variables. In application to Vandermonde matrix-type singularities, we show that these methods are effective. The learning coefficient of the generalization error in Bayesian estimation serves to measure the learning efficiency in singular learning models. Mathematically, the learning coefficient corresponds to a real log canonical threshold of singularities for the Kullback functions (relative entropy) in learning theory. Full article
(This article belongs to the Special Issue The Information Bottleneck Method)

321 KiB  
Article
Correlation Distance and Bounds for Mutual Information
by Michael J. W. Hall
Entropy 2013, 15(9), 3698-3713; https://doi.org/10.3390/e15093698 - 6 Sep 2013
Cited by 7 | Viewed by 6896
Abstract
The correlation distance quantifies the statistical independence of two classical or quantum systems, via the distance from their joint state to the product of the marginal states. Tight lower bounds are given for the mutual information between pairs of two-valued classical variables and quantum qubits, in terms of the corresponding classical and quantum correlation distances. These bounds are stronger than the Pinsker inequality (and refinements thereof) for relative entropy. The classical lower bound may be used to quantify properties of statistical models that violate Bell inequalities. Partially entangled qubits can have lower mutual information than can any two-valued classical variables having the same correlation distance. The qubit correlation distance also provides a direct entanglement criterion, related to the spin covariance matrix. Connections of results with classically-correlated quantum states are briefly discussed. Full article
(This article belongs to the Special Issue Distance in Information and Statistical Physics Volume 2)
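For two-valued classical variables, both quantities in these bounds reduce to a few lines of arithmetic. A sketch (perfectly correlated bits as the worked example; note the paper's bounds are tighter than the plain Pinsker inequality checked here):

```python
import numpy as np

def mi_nats(pxy):
    """Mutual information (nats) of a discrete joint distribution given as a matrix."""
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0
    return float((pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])).sum())

def correlation_distance(pxy):
    """Variational distance from the joint distribution to the product of its marginals."""
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    return float(0.5 * np.abs(pxy - px @ py).sum())

# Perfectly correlated bits: I = ln 2 nats and D = 1/2, while Pinsker
# only guarantees I >= 2 D^2 = 1/2.
p = np.array([[0.5, 0.0], [0.0, 0.5]])
print(mi_nats(p), correlation_distance(p))
```

Since mutual information is the relative entropy between the joint state and the product of the marginals, the Pinsker inequality I ≥ 2D² applies directly, and the paper's bounds sharpen it for this two-valued setting.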

247 KiB  
Article
Dynamic Distance Measure on Spaces of Isospectral Mixed Quantum States
by Ole Andersson and Hoshang Heydari
Entropy 2013, 15(9), 3688-3697; https://doi.org/10.3390/e15093688 - 6 Sep 2013
Cited by 9 | Viewed by 4686
Abstract
Distance measures are used to quantify the extent to which information is preserved or altered by quantum processes, and thus are indispensable tools in quantum information and quantum computing. In this paper we propose a new distance measure for mixed quantum states, which we call the dynamic distance measure, and we show that it is a proper distance measure. The dynamic distance measure is defined in terms of a measurable quantity, which makes it suitable for applications. In a final section we compare the dynamic distance measure with the well-known Bures distance measure. Full article
(This article belongs to the Special Issue Quantum Information 2012)
1468 KiB  
Article
SpaGrOW—A Derivative-Free Optimization Scheme for Intermolecular Force Field Parameters Based on Sparse Grid Methods
by Marco Hülsmann and Dirk Reith
Entropy 2013, 15(9), 3640-3687; https://doi.org/10.3390/e15093640 - 6 Sep 2013
Cited by 12 | Viewed by 7359
Abstract
Molecular modeling is an important subdomain in the field of computational modeling, regarding both scientific and industrial applications. This is because computer simulations on a molecular level are a virtuous instrument to study the impact of microscopic on macroscopic phenomena. Accurate molecular models are indispensable for such simulations in order to predict physical target observables, like density, pressure, diffusion coefficients or energetic properties, quantitatively over a wide range of temperatures. Thereby, molecular interactions are described mathematically by force fields. The mathematical description includes parameters for both intramolecular and intermolecular interactions. While intramolecular force field parameters can be determined by quantum mechanics, the parameterization of the intermolecular part is often tedious. Recently, an empirical procedure, based on the minimization of a loss function between simulated and experimental physical properties, was published by the authors. Thereby, efficient gradient-based numerical optimization algorithms were used. However, empirical force field optimization is inhibited by the two following central issues appearing in molecular simulations: firstly, they are extremely time-consuming, even on modern and high-performance computer clusters, and secondly, simulation data is affected by statistical noise. The latter provokes the fact that an accurate computation of gradients or Hessians is nearly impossible close to a local or global minimum, mainly because the loss function is flat. Therefore, the question arises of whether to apply a derivative-free method approximating the loss function by an appropriate model function. In this paper, a new Sparse Grid-based Optimization Workflow (SpaGrOW) is presented, which accomplishes this task robustly and, at the same time, keeps the number of time-consuming simulations relatively small. 
This is achieved by an efficient sampling procedure for the approximation based on sparse grids, which is described in full detail: in order to counteract the fact that sparse grids are fully occupied on their boundaries, a mathematical transformation is applied to generate homogeneous Dirichlet boundary conditions. As the main drawback of sparse grids methods is the assumption that the function to be modeled exhibits certain smoothness properties, it has to be approximated by smooth functions first. Radial basis functions turned out to be very suitable to solve this task. The smoothing procedure and the subsequent interpolation on sparse grids are performed within sufficiently large compact trust regions of the parameter space. It is shown and explained how the combination of the three ingredients leads to a new efficient derivative-free algorithm, which has the additional advantage that it is capable of reducing the overall number of simulations by a factor of about two in comparison to gradient-based optimization methods. At the same time, the robustness with respect to statistical noise is maintained. This assertion is proven by both theoretical considerations and practical evaluations for molecular simulations on chemical example substances. Full article
(This article belongs to the Special Issue Molecular Dynamics Simulation)

427 KiB  
Article
Gravitational Entropy and Inflation
by Øystein Elgarøy and Øyvind Grøn
Entropy 2013, 15(9), 3620-3639; https://doi.org/10.3390/e15093620 - 4 Sep 2013
Cited by 1 | Viewed by 6817
Abstract
The main topic of this paper is a description of the generation of entropy at the end of the inflationary era. As a generalization of the present standard model of the Universe dominated by pressureless dust and a Lorentz invariant vacuum energy (LIVE), we first present a flat Friedmann universe model, where the dust is replaced with an ideal gas. It is shown that the pressure of the gas is inversely proportional to the fifth power of the scale factor and that the entropy in a comoving volume does not change during the expansion. We then review different measures of gravitational entropy related to the Weyl curvature conjecture and calculate the time evolution of two proposed measures of gravitational entropy in a LIVE-dominated Bianchi type I universe, and a Lemaitre-Bondi-Tolman universe with LIVE. Finally, we elaborate upon a model of energy transition from vacuum energy to radiation energy, that of Bonanno and Reuter, and calculate the time evolution of the entropies of vacuum energy and radiation energy. We also calculate the evolution of the maximal entropy according to some recipes and demonstrate how a gap between the maximal entropy and the actual entropy opens up at the end of the inflationary era. Full article
(This article belongs to the Special Issue Entropy and the Second Law of Thermodynamics)

228 KiB  
Article
Unification of Quantum and Gravity by Non Classical Information Entropy Space
by Germano Resconi, Ignazio Licata and Davide Fiscaletti
Entropy 2013, 15(9), 3602-3619; https://doi.org/10.3390/e15093602 - 4 Sep 2013
Cited by 18 | Viewed by 6862
Abstract
A quantum entropy space is suggested as the fundamental arena describing quantum effects. In the quantum regime, the entropy is expressed as a superposition of many different Boltzmann entropies that span the space of entropies before any measurement. When a measurement is performed, the quantum entropy collapses to one component. A suggestive reading of the relational interpretation of quantum mechanics and of Bohm’s quantum potential in terms of the quantum entropy is provided. The space associated with the quantum entropy determines a distortion in the classical space of positions, which appears as a Weyl-like gauge potential connected with Fisher information. This Weyl-like gauge potential produces a deformation of the momenta that changes the classical action in such a way that Bohm’s quantum potential emerges as a consequence of the non-classical definition of entropy, in a non-Euclidean information space under the constraint of minimum Fisher information (Fisher-Bohm entropy). Finally, possible quantum relativistic extensions of the theory and connections with the problem of quantum gravity are investigated. The non-classical thermodynamic approach to quantum phenomena changes the geometry of the particle phase space. In light of the representation of gravity in ordinary phase space by torsion in flat space (teleparallel gravity), this change of geometry introduces quantum phenomena in a natural way. This lends new force to F. Shojai’s and A. Shojai’s theory, in which the geometry of space-time is strongly coupled with a quantum potential whose origin is not the Schrödinger equation but the non-classical entropy of a system of many particles that together change the geometry of the phase space of positions (entanglement). In this way, non-classical thermodynamics changes the classical geodesics as a consequence of quantum phenomena, and quantum mechanics and gravity are unified: quantum effects shape the geometry of the multidimensional phase space, while gravity changes, at every point, the torsion in the ordinary four-dimensional Lorentz space-time metric. Full article
231 KiB  
Article
A Discrete Meta-Control Procedure for Approximating Solutions to Binary Programs
by Pengbo Zhang, Wolf Kohn and Zelda B. Zabinsky
Entropy 2013, 15(9), 3592-3601; https://doi.org/10.3390/e15093592 - 4 Sep 2013
Viewed by 4508
Abstract
Large-scale binary integer programs occur frequently in many real-world applications. For some binary integer problems, finding an optimal solution or even a feasible solution is computationally expensive. In this paper, we develop a discrete meta-control procedure to approximately solve large-scale binary integer programs efficiently. The key idea is to map the vector of n binary decision variables into a scalar function defined over the time interval [0, n] and construct a linear quadratic tracking (LQT) problem that can be solved efficiently. We prove that an LQT formulation has an optimal binary solution, analogous to a classical bang-bang control in continuous time. Our LQT approach can provide advantages in reducing computation while generating a good approximate solution. Numerical examples are presented to demonstrate the usefulness of the proposed method. Full article
(This article belongs to the Special Issue Dynamical Systems)
732 KiB  
Article
Objective Bayesianism and the Maximum Entropy Principle
by Jürgen Landes and Jon Williamson
Entropy 2013, 15(9), 3528-3591; https://doi.org/10.3390/e15093528 - 4 Sep 2013
Cited by 25 | Viewed by 6924
Abstract
Objective Bayesian epistemology invokes three norms: the strengths of our beliefs should be probabilities; they should be calibrated to our evidence of physical probabilities; and they should otherwise equivocate sufficiently between the basic propositions that we can express. The three norms are sometimes explicated by appealing to the maximum entropy principle, which says that a belief function should be a probability function, from all those that are calibrated to evidence, that has maximum entropy. However, the three norms of objective Bayesianism are usually justified in different ways. In this paper, we show that the three norms can all be subsumed under a single justification in terms of minimising worst-case expected loss. This, in turn, is equivalent to maximising a generalised notion of entropy. We suggest that requiring language invariance, in addition to minimising worst-case expected loss, motivates maximisation of standard entropy as opposed to maximisation of other instances of generalised entropy. Our argument also provides a qualified justification for updating degrees of belief by Bayesian conditionalisation. However, conditional probabilities play a less central part in the objective Bayesian account than they do under the subjective view of Bayesianism, leading to a reduced role for Bayes’ Theorem. Full article
(This article belongs to the Special Issue Maximum Entropy and Bayes Theorem)
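The equivocation norm discussed in this abstract can be illustrated numerically: with no evidential constraints, the probability function of maximum Shannon entropy is the uniform one. The sketch below is not taken from the paper; it is a plain-Python brute force over a discretized three-outcome simplex that checks this fact.

```python
import math

def shannon_entropy(p):
    """Shannon entropy in bits, skipping zero-probability outcomes."""
    return -sum(x * math.log2(x) for x in p if x > 0)

# Grid search over probability distributions on three outcomes: among all
# candidates, the maximally equivocal (uniform) distribution should win.
step = 0.01
n = int(round(1 / step))
best_p, best_h = None, -1.0
for i in range(n + 1):
    for j in range(n + 1 - i):
        p = (i * step, j * step, 1 - (i + j) * step)
        h = shannon_entropy(p)
        if h > best_h:
            best_p, best_h = p, h

print(best_p)  # close to (1/3, 1/3, 1/3)
print(best_h)  # close to log2(3)
```

With evidential constraints added, the same search restricted to calibrated distributions would pick out the constrained maximum-entropy function the abstract describes.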
1625 KiB  
Article
The Measurement of Information Transmitted by a Neural Population: Promises and Challenges
by Marshall Crumiller, Bruce Knight and Ehud Kaplan
Entropy 2013, 15(9), 3507-3527; https://doi.org/10.3390/e15093507 - 3 Sep 2013
Cited by 13 | Viewed by 6224
Abstract
All brain functions require the coordinated activity of many neurons, and therefore there is considerable interest in estimating the amount of information that the discharge of a neural population transmits to its targets. In the past, such estimates had presented a significant challenge for populations of more than a few neurons, but we have recently described a novel method for providing such estimates for populations of essentially arbitrary size. Here, we explore the influence of some important aspects of the neuronal population discharge on such estimates. In particular, we investigate the roles of mean firing rate and of the degree and nature of correlations among neurons. The results provide constraints on the applicability of our new method and should help neuroscientists determine whether such an application is appropriate for their data. Full article
(This article belongs to the Special Issue Estimating Information-Theoretic Quantities from Data)
476 KiB  
Article
Land-Use Planning for Urban Sprawl Based on the CLUE-S Model: A Case Study of Guangzhou, China
by Linyu Xu, Zhaoxue Li, Huimin Song and Hao Yin
Entropy 2013, 15(9), 3490-3506; https://doi.org/10.3390/e15093490 - 2 Sep 2013
Cited by 51 | Viewed by 8703
Abstract
In recent years, changes in land use resulting from rapid urbanization or urban sprawl have brought about many negative effects to land ecosystems, and have led to entropy increases. This study introduces the novel idea of a planning regulation coefficient for sustainable land-use planning in order to decrease entropy, combined with the CLUE-S model to predict land-use change. Three scenarios were designed as the basis for land-use projections for Guangzhou, China, in 2015, and the changes in the land ecological service function for each scenario were predicted. The results show that, although the current land-use plan is quite reasonable, it will be necessary to further strengthen the protection of farmland and important ecological service function areas. Full article
(This article belongs to the Special Issue Entropy and Urban Sprawl)
134 KiB  
Article
Deformed Exponentials and Applications to Finance
by Barbara Trivellato
Entropy 2013, 15(9), 3471-3489; https://doi.org/10.3390/e15093471 - 2 Sep 2013
Cited by 48 | Viewed by 6692
Abstract
We illustrate some financial applications of the Tsallis and Kaniadakis deformed exponentials. The minimization of the corresponding deformed divergence is discussed as a criterion to select a pricing measure in the valuation problems of incomplete markets. Moreover, heavy-tailed models for price processes are proposed, which generalize the well-known Black-Scholes model. Full article
(This article belongs to the Collection Advances in Applied Statistical Mechanics)
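For readers unfamiliar with the deformed exponentials the abstract refers to: the Tsallis q-exponential exp_q(x) = [1 + (1 - q)x]^(1/(1-q)) reduces to the ordinary exponential as q approaches 1 and produces power-law (heavy) tails otherwise, which is what makes it useful for price models. A minimal sketch, not code from the paper:

```python
import math

def exp_q(x, q):
    """Tsallis q-deformed exponential; reduces to exp(x) as q -> 1."""
    if abs(q - 1.0) < 1e-12:
        return math.exp(x)
    base = 1.0 + (1.0 - q) * x
    # By the usual convention, the q-exponential is 0 where the base
    # is non-positive (the cutoff of the deformed function).
    if base <= 0:
        return 0.0
    return base ** (1.0 / (1.0 - q))

print(exp_q(1.0, 1.0))      # exactly e
print(exp_q(1.0, 0.999))    # close to e: the deformation vanishes as q -> 1
print(exp_q(-10.0, 1.5))    # 6**-2 = 1/36, far larger than exp(-10): heavy tail
```

The last line shows the heavy-tail behavior: for q = 1.5 the deformed exponential decays as a power law, so deep out-of-the-money events keep non-negligible weight.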
1070 KiB  
Article
Analysis of EEG via Multivariate Empirical Mode Decomposition for Depth of Anesthesia Based on Sample Entropy
by Qin Wei, Quan Liu, Shou-Zhen Fan, Cheng-Wei Lu, Tzu-Yu Lin, Maysam F. Abbod and Jiann-Shing Shieh
Entropy 2013, 15(9), 3458-3470; https://doi.org/10.3390/e15093458 - 30 Aug 2013
Cited by 58 | Viewed by 10308
Abstract
In monitoring the depth of anesthesia (DOA), the electroencephalography (EEG) signals of patients have been utilized during surgeries to diagnose their level of consciousness. Different entropy methods have been applied to analyze the EEG signal and measure its complexity, such as spectral entropy, approximate entropy (ApEn) and sample entropy (SampEn). However, as a weak physiological signal, EEG is easily subject to interference from external sources such as the electric power supply, electric knives and other electrophysiological signal sources, which leads to a reduction in the accuracy of DOA determination. In this study, we adopt multivariate empirical mode decomposition (MEMD) to decompose and reconstruct the EEG recorded from clinical surgeries, chosen for its superior performance over empirical mode decomposition (EMD), ensemble EMD (EEMD) and complementary EEMD (CEEMD). Moreover, a comparison between SampEn and ApEn in measuring DOA shows that SampEn is a practical and efficient method for monitoring the DOA during surgery in real time. Full article
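Sample entropy, the measure the study settles on, is simple to state: SampEn(m, r) is the negative logarithm of the conditional probability that two subsequences agreeing for m points (within tolerance r, Chebyshev distance) also agree for m + 1 points. The following is a minimal reference implementation of that definition, not the authors' clinical pipeline:

```python
import math
import random

def sample_entropy(x, m=2, r=None):
    """SampEn(m, r): -ln of the conditional probability that templates
    matching for m points (Chebyshev distance <= r) also match at m + 1.
    Minimal O(N^2) implementation; r defaults to 0.2 * std, a common choice."""
    n = len(x)
    if r is None:
        mean = sum(x) / n
        r = 0.2 * math.sqrt(sum((v - mean) ** 2 for v in x) / n)

    def count_matches(length):
        # Compare all distinct template pairs, using the same n - m
        # starting points for both lengths so the counts are comparable.
        count = 0
        for i in range(n - m):
            for j in range(i + 1, n - m):
                if max(abs(a - b) for a, b in zip(x[i:i + length], x[j:j + length])) <= r:
                    count += 1
        return count

    b = count_matches(m)      # matches of length m
    a = count_matches(m + 1)  # matches of length m + 1
    return -math.log(a / b) if a > 0 and b > 0 else float("inf")

random.seed(0)
regular = [math.sin(0.5 * i) for i in range(200)]    # predictable rhythm
noisy = [random.uniform(-1, 1) for _ in range(200)]  # irregular signal
print(sample_entropy(regular))  # low: length-m matches persist at m + 1
print(sample_entropy(noisy))    # higher: irregularity breaks longer matches
```

The contrast between the two signals mirrors the clinical use: deeper anesthesia makes the EEG more regular, lowering its sample entropy.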
1102 KiB  
Article
Implication of Negative Entropy Flow for Local Rainfall
by Ying Liu, Chongjian Liu and Zhaohui Li
Entropy 2013, 15(9), 3449-3457; https://doi.org/10.3390/e15093449 - 30 Aug 2013
Cited by 2 | Viewed by 5292
Abstract
The relation between the atmospheric entropy flow field and local rainfall is examined in terms of the theory of dissipative structures. In this paper, the entropy balance equation in a form suitable for describing the entropy budget of the Earth’s atmosphere is derived starting from the Gibbs relation, and, as examples, the entropy flows of two severe weather events, associated with the development of an extratropical cyclone and a tropical storm, are calculated. The results show that negative entropy flow (NEF) has a significant effect on precipitation intensity and scope, with an apparent match between the NEF pattern and the rainfall distribution, and that the diagnosis of NEF can provide a good indicator for precipitation forecasting. Full article
526 KiB  
Article
Bacterial DNA Sequence Compression Models Using Artificial Neural Networks
by Manuel J. Duarte and Armando J. Pinho
Entropy 2013, 15(9), 3435-3448; https://doi.org/10.3390/e15093435 - 30 Aug 2013
Viewed by 5690
Abstract
It is widely accepted that advances in DNA sequencing techniques have contributed to an unprecedented growth of genomic data. This fact has increased the interest in DNA compression, not only from the information theory and biology points of view, but also from a practical perspective, since such sequences require storage resources. Several compression methods exist, and particularly, those using finite-context models (FCMs) have received increasing attention, as they have been proven to compress DNA sequences effectively, with low bits-per-base as well as low encoding/decoding time-per-base. However, the amount of run-time memory required to store high-order finite-context models may become impractical, since a context order as low as 16 requires a maximum of 17.2 × 10^9 memory entries. This paper presents a method to reduce this memory requirement through a novel application of artificial neural networks (ANN), which build such probabilistic models in a compact way, and shows how to use them to estimate the probabilities. Such a system was implemented, and its performance compared against state-of-the-art compressors, such as XM-DNA (expert model) and FCM-Mx (mixture of finite-context models), as well as with general-purpose compressors. Using a combination of an order-10 FCM and an ANN, encoding results similar to those of FCMs up to order 16 are obtained using only 17 megabytes of memory, whereas the latter, even employing hash tables, use several hundred megabytes. Full article
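The memory figure quoted in the abstract follows directly from the alphabet size: an order-k finite-context model over the four DNA symbols needs up to 4^k distinct contexts, each holding one counter per possible next symbol. A quick check of that arithmetic (a sketch of the counting argument, not the paper's implementation):

```python
# Order-k finite-context model over the DNA alphabet {A, C, G, T}:
# up to 4**k conditioning contexts, each storing one counter per next symbol.
def fcm_table_entries(k, alphabet_size=4):
    return alphabet_size ** k * alphabet_size

print(fcm_table_entries(16))  # 17179869184, i.e. the 17.2 x 10^9 entries cited
print(fcm_table_entries(10))  # 4194304: the order-10 model is ~4000x smaller
```

This gap between the order-16 and order-10 tables is exactly what the ANN is introduced to bridge: approximate the high-order statistics without materializing the full table.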
648 KiB  
Article
Determination of Optimal Water Quality Monitoring Points in Sewer Systems using Entropy Theory
by Jung Ho Lee
Entropy 2013, 15(9), 3419-3434; https://doi.org/10.3390/e15093419 - 29 Aug 2013
Cited by 20 | Viewed by 6545
Abstract
To monitor water quality continuously over the entire sewer network is important for efficient management of the system. However, it is practically impossible to implement continuous water quality monitoring at all junctions of a sewer system due to budget constraints. Therefore, water quality monitoring locations must be selected as those points which are most representative of the dataset throughout a system. However, the optimal selection of water quality monitoring locations in urban sewer networks has rarely been studied. This study proposes a method for the optimal selection of water quality monitoring points in sewer systems based on entropy theory. The proposed model quantitatively assesses the information content of data collected at candidate monitoring points, and the points that maximize the total information among the collected data at multiple locations are selected for water quality monitoring using a genetic algorithm (GA). The proposed model is demonstrated on a small urban sewer system. Full article
913 KiB  
Article
Information Entropy As a Basic Building Block of Complexity Theory
by Jianbo Gao, Feiyan Liu, Jianfang Zhang, Jing Hu and Yinhe Cao
Entropy 2013, 15(9), 3396-3418; https://doi.org/10.3390/e15093396 - 29 Aug 2013
Cited by 57 | Viewed by 12405
Abstract
What is information? What role does information entropy play in this age of information explosion, especially in understanding the emergent behaviors of complex systems? To answer these questions, we discuss the origin of information entropy, the difference between information entropy and thermodynamic entropy, and the role of information entropy in complexity theories, including chaos theory and fractal theory, and speculate about new fields in which information entropy may play important roles. Full article
(This article belongs to the Special Issue Dynamical Systems)
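The distinction the abstract draws starts from Shannon's definition: the information entropy of a source is the expected number of bits needed per symbol, computed from the symbol frequencies. A minimal illustration in plain Python (not code from the paper):

```python
import math
from collections import Counter

def information_entropy(symbols):
    """Shannon entropy in bits per symbol of an observed sequence."""
    counts = Counter(symbols)
    total = len(symbols)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

print(information_entropy("aaaa"))      # 0.0 bits: no uncertainty at all
print(information_entropy("abab"))      # 1.0 bit: one fair binary choice
print(information_entropy("abcdabcd"))  # 2.0 bits: four equiprobable symbols
```

Unlike thermodynamic entropy, nothing here refers to energy or temperature; the quantity depends only on the probability distribution of the symbols, which is the conceptual difference the article elaborates.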
284 KiB  
Article
A New Perspective on Classical Ideal Gases
by Jacques Arnaud, Laurent Chusseau and Fabrice Philippe
Entropy 2013, 15(9), 3379-3395; https://doi.org/10.3390/e15093379 - 29 Aug 2013
Cited by 1 | Viewed by 6739
Abstract
The ideal-gas barometric and pressure laws are derived from the Democritean concept of independent corpuscles moving in vacuum, plus a principle of simplicity, namely that these laws are independent of the kinetic part of the Hamiltonian. A single corpuscle in contact with a heat bath, confined to a cylinder and submitted to a constant force (weight), is considered. The paper supplements a previously published one in two important respects: first, the stability of ideal gases is established; second, we show that when walls separating the cylinder into parts are later removed, the entropy is unaffected. We obtain full agreement with the classical thermodynamic result of Landsberg and others (1994) for the entropy of a column of gas subjected to gravity. Full article
293 KiB  
Article
Information Geometry of Complex Hamiltonians and Exceptional Points
by Dorje C. Brody and Eva-Maria Graefe
Entropy 2013, 15(9), 3361-3378; https://doi.org/10.3390/e15093361 - 23 Aug 2013
Cited by 36 | Viewed by 7666
Abstract
Information geometry provides a tool to systematically investigate the parameter sensitivity of the state of a system. If a physical system is described by a linear combination of eigenstates of a complex (that is, non-Hermitian) Hamiltonian, then there can be phase transitions where dynamical properties of the system change abruptly. In the vicinities of the transition points, the state of the system becomes highly sensitive to the changes of the parameters in the Hamiltonian. The parameter sensitivity can then be measured in terms of the Fisher-Rao metric and the associated curvature of the parameter-space manifold. A general scheme for the geometric study of parameter-space manifolds of eigenstates of complex Hamiltonians is outlined here, leading to generic expressions for the metric. Full article
(This article belongs to the Special Issue Distance in Information and Statistical Physics Volume 2)