Table of Contents

Entropy, Volume 16, Issue 6 (June 2014), Pages 2904-3551

  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive the table of contents of newly released issues.
  • PDF is the official format for papers, which are published in both HTML and PDF forms. To view a paper in PDF format, click on the "PDF Full-text" link and open it with the free Adobe Reader.
Research

Open Access Article Reaction Kinetics Path Based on Entropy Production Rate and Its Relevance to Low-Dimensional Manifolds
Entropy 2014, 16(6), 2904-2943; doi:10.3390/e16062904
Received: 22 December 2013 / Revised: 13 May 2014 / Accepted: 14 May 2014 / Published: 26 May 2014
Cited by 1 | PDF Full-text (2080 KB) | HTML Full-text | XML Full-text
Abstract
The equation that approximately traces the trajectory in the concentration phase space of chemical kinetics is derived based on the rate of entropy production. The equation coincides with the true chemical kinetics equation to first order in a variable that characterizes the degree of quasi-equilibrium for each reaction, and the equation approximates the trajectory along at least the final part of the one-dimensional (1-D) manifold of true chemical kinetics that reaches equilibrium in concentration phase space. Besides the 1-D manifold, each higher-dimensional manifold of the trajectories given by the equation is an approximation to that of true chemical kinetics when the contour of the entropy production rate in the concentration phase space is not highly distorted, because the Jacobian and its eigenvectors for the equation are exactly the same as those of true chemical kinetics at equilibrium; however, the path or trajectory itself is not necessarily an approximation to that of true chemical kinetics in manifolds higher than 1-D. The equation is for the path of steepest descent that sufficiently accounts for the constraints inherent in chemical kinetics, such as element conservation, whereas the simple steepest-descent-path formulation whose Jacobian is the Hessian of the entropy production rate cannot even approximately reproduce any part of the 1-D manifold of true chemical kinetics, except for the special case where the eigenvector of the Hessian is nearly identical to that of the Jacobian of chemical kinetics. Full article
(This article belongs to the Special Issue Maximum Entropy and Its Application)
Open Access Article Information Geometric Complexity of a Trivariate Gaussian Statistical Model
Entropy 2014, 16(6), 2944-2958; doi:10.3390/e16062944
Received: 1 April 2014 / Revised: 21 May 2014 / Accepted: 22 May 2014 / Published: 26 May 2014
Cited by 3 | PDF Full-text (237 KB) | HTML Full-text | XML Full-text
Abstract
We evaluate the information geometric complexity of entropic motion on low-dimensional Gaussian statistical manifolds in order to quantify how difficult it is to make macroscopic predictions about systems in the presence of limited information. Specifically, we observe that the complexity of such entropic inferences not only depends on the amount of available pieces of information but also on the manner in which such pieces are correlated. Finally, we uncover that, for certain correlational structures, the impossibility of reaching the most favorable configuration from an entropic inference viewpoint seems to lead to an information geometric analog of the well-known frustration effect that occurs in statistical physics. Full article
(This article belongs to the Special Issue Information Geometry)
Open Access Article How to Determine Losses in a Flow Field: A Paradigm Shift towards the Second Law Analysis
Entropy 2014, 16(6), 2959-2989; doi:10.3390/e16062959
Received: 26 February 2014 / Revised: 2 April 2014 / Accepted: 20 May 2014 / Published: 26 May 2014
Cited by 4 | PDF Full-text (2515 KB) | HTML Full-text | XML Full-text
Abstract
Assuming that CFD solutions will increasingly be used to characterize losses in terms of drag for external flows and head loss for internal flows, we suggest replacing single-valued data, like the drag force or a pressure drop, by field information about the losses. This information is gained by analyzing the entropy generation in the flow field, an approach often called second law analysis (SLA), referring to the second law of thermodynamics. We show that this SLA approach is straightforward, systematic and helpful when it comes to the physical interpretation of the losses in a flow field. Various examples are given, including external and internal flows, two-phase flow, compressible flow and unsteady flow. Finally, we show that an energy transfer within a certain process can be put into a broader perspective by introducing the entropic potential of an energy. Full article
(This article belongs to the Special Issue Advances in Applied Thermodynamics)
Open Access Article Constraints of Compound Systems: Prerequisites for Thermodynamic Modeling Based on Shannon Entropy
Entropy 2014, 16(6), 2990-3008; doi:10.3390/e16062990
Received: 14 April 2014 / Revised: 17 May 2014 / Accepted: 21 May 2014 / Published: 26 May 2014
Cited by 2 | PDF Full-text (218 KB) | HTML Full-text | XML Full-text | Supplementary Files
Abstract
Thermodynamic modeling of extensive systems usually implicitly assumes the additivity of entropy. Furthermore, if this modeling is based on the concept of Shannon entropy, additivity of the latter function must also be guaranteed. In this case, the constituents of a thermodynamic system are treated as subsystems of a compound system, and the Shannon entropy of the compound system must be subjected to constrained maximization. The scope of this paper is to clarify prerequisites for applying the concept of Shannon entropy and the maximum entropy principle to thermodynamic modeling of extensive systems. This is accomplished by investigating how the constraints of the compound system have to depend on mean values of the subsystems in order to ensure additivity. Two examples illustrate the basic ideas behind this approach, comprising the ideal gas model and condensed phase lattice systems as limiting cases of fluid phases. The paper is the first step towards developing a new approach for modeling interacting systems using the concept of Shannon entropy. Full article
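The additivity requirement discussed in this abstract can be checked numerically: for statistically independent subsystems, the Shannon entropy of the compound system equals the sum of the subsystem entropies. A minimal sketch (the distributions below are arbitrary illustrative choices, not taken from the paper):

```python
from math import log

def shannon_entropy(p):
    """Shannon entropy H(p) = -sum p_i ln p_i (natural log)."""
    return -sum(pi * log(pi) for pi in p if pi > 0)

# Two independent subsystems with arbitrary distributions.
p = [0.5, 0.3, 0.2]
q = [0.7, 0.3]

# Joint distribution of the compound system: product of the marginals.
joint = [pi * qi for pi in p for qi in q]

# Additivity holds exactly for independent subsystems: H(joint) = H(p) + H(q).
print(abs(shannon_entropy(joint) - (shannon_entropy(p) + shannon_entropy(q))) < 1e-12)  # True
```

For correlated subsystems the joint distribution is no longer a product of marginals and this equality generally fails, which is why the paper's constraints on the compound system matter.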
Open Access Article Tsallis Wavelet Entropy and Its Application in Power Signal Analysis
Entropy 2014, 16(6), 3009-3025; doi:10.3390/e16063009
Received: 29 March 2014 / Revised: 29 March 2014 / Accepted: 20 May 2014 / Published: 27 May 2014
Cited by 11 | PDF Full-text (788 KB) | HTML Full-text | XML Full-text
Abstract
As a novel data mining approach, a wavelet entropy algorithm is used to perform entropy statistics on wavelet coefficients (or reconstructed signals) at various wavelet scales on the basis of wavelet decomposition and entropy statistic theory. Shannon wavelet energy entropy, one kind of wavelet entropy algorithm, has been widely used in many areas since its introduction. However, because wavelet aliasing occurs after the wavelet decomposition, and the information set of different-scale wavelet decomposition coefficients (or reconstructed signals) is non-additive to a certain extent, Shannon entropy, which is better suited to extensive systems, cannot perform accurate uncertainty statistics on the wavelet decomposition results. The transient signal features are therefore extracted incorrectly when Shannon wavelet energy entropy is used. The problems that arise in extracting transient signal features with Shannon wavelet energy entropy are discussed in depth from two aspects: the theoretical limitations and the negative effect of wavelet aliasing on extraction accuracy. To address these defects, a novel wavelet entropy named Tsallis wavelet energy entropy is proposed by using Tsallis entropy instead of Shannon entropy, and it is applied to the feature extraction of transient signals in power systems. Theoretical derivation and experimental results prove that, compared with Shannon wavelet energy entropy, Tsallis wavelet energy entropy reduces the negative effect of wavelet aliasing on feature extraction accuracy and extracts transient signal features of power systems accurately. Full article
(This article belongs to the Special Issue Entropy in Experimental Design, Sensor Placement, Inquiry and Search)
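The Tsallis entropy that replaces Shannon entropy here is the standard non-extensive form S_q = (1 − Σ p_i^q)/(q − 1), applied to the relative wavelet energies per decomposition scale. A minimal sketch (the per-scale energies and the choice q = 2 are illustrative assumptions, not values from the paper):

```python
from math import log

def tsallis_entropy(p, q):
    """Tsallis entropy S_q = (1 - sum p_i^q) / (q - 1); q -> 1 recovers Shannon."""
    if abs(q - 1.0) < 1e-12:
        return -sum(pi * log(pi) for pi in p if pi > 0)  # Shannon limit
    return (1.0 - sum(pi ** q for pi in p)) / (q - 1.0)

# Energies of wavelet coefficients at each decomposition scale (illustrative numbers).
energies = [4.0, 2.0, 1.0, 0.5]
total = sum(energies)
p = [e / total for e in energies]  # relative wavelet energy per scale

s_q = tsallis_entropy(p, q=2.0)
print(round(s_q, 4))  # → 0.6222
```

Unlike the Shannon form, the q ≠ 1 entropy is non-additive, which is the property the paper exploits for the non-additive information shared between aliased wavelet scales.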
Open Access Article Asymptotically Constant-Risk Predictive Densities When the Distributions of Data and Target Variables Are Different
Entropy 2014, 16(6), 3026-3048; doi:10.3390/e16063026
Received: 28 March 2014 / Revised: 9 May 2014 / Accepted: 22 May 2014 / Published: 28 May 2014
PDF Full-text (293 KB) | HTML Full-text | XML Full-text
Abstract
We investigate the asymptotic construction of constant-risk Bayesian predictive densities under the Kullback–Leibler risk when the distributions of data and target variables are different and have a common unknown parameter. It is known that the Kullback–Leibler risk is asymptotically equal to a trace of the product of two matrices: the inverse of the Fisher information matrix for the data and the Fisher information matrix for the target variables. We assume that the trace has a unique maximum point with respect to the parameter. We construct asymptotically constant-risk Bayesian predictive densities using a prior depending on the sample size. Further, we apply the theory to the subminimax estimator problem and the prediction based on the binary regression model. Full article
(This article belongs to the Special Issue Information Geometry)
Open Access Article Using Permutation Entropy to Measure the Changes in EEG Signals During Absence Seizures
Entropy 2014, 16(6), 3049-3061; doi:10.3390/e16063049
Received: 14 April 2014 / Revised: 26 May 2014 / Accepted: 27 May 2014 / Published: 30 May 2014
Cited by 13 | PDF Full-text (614 KB) | HTML Full-text | XML Full-text
Abstract
In this paper, we propose to use permutation entropy (PE) to explore whether the changes in electroencephalogram (EEG) data can effectively distinguish different phases in human absence epilepsy, i.e., the seizure-free, pre-seizure and seizure phases. Permutation entropy is applied to analyze the EEG data from these three phases, each containing 100 19-channel EEG epochs of 2 s duration. The experimental results show that the mean value of PE gradually decreases from the seizure-free to the seizure phase and provide evidence that these three seizure phases in absence epilepsy can be effectively distinguished. Furthermore, our results strengthen the view that most frontal electrodes carry useful information and patterns that can help discriminate among different absence seizure phases. Full article
(This article belongs to the Special Issue Entropy and Electroencephalography)
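Permutation entropy itself is simple to compute: embed the series into ordinal patterns, count pattern frequencies, and take the Shannon entropy of those frequencies (usually normalized by log of order!). A minimal sketch (the embedding order, delay and test series are illustrative; the paper's EEG-specific settings are not reproduced here):

```python
from math import log, factorial
from collections import Counter

def permutation_entropy(x, order=3, delay=1):
    """Normalized permutation entropy of series x, in [0, 1]."""
    patterns = Counter()
    for i in range(len(x) - (order - 1) * delay):
        window = [x[i + j * delay] for j in range(order)]
        # Ordinal pattern: the argsort of the samples in the window.
        patterns[tuple(sorted(range(order), key=lambda k: window[k]))] += 1
    n = sum(patterns.values())
    h = -sum((c / n) * log(c / n) for c in patterns.values())
    return h / log(factorial(order))  # normalize by the maximum log(order!)

# A strictly monotonic series contains a single ordinal pattern, so PE = 0.
print(permutation_entropy(list(range(100))))  # 0.0
```

Lower PE indicates a more regular (less complex) signal, consistent with the reported decrease from the seizure-free phase to the seizure phase.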
Open Access Article Entropy Content During Nanometric Stick-Slip Motion
Entropy 2014, 16(6), 3062-3073; doi:10.3390/e16063062
Received: 5 May 2014 / Revised: 27 May 2014 / Accepted: 27 May 2014 / Published: 3 June 2014
Cited by 1 | PDF Full-text (313 KB) | HTML Full-text | XML Full-text
Abstract
To explore the existence of self-organization during friction, this paper considers the motion of all atoms in a system consisting of an Atomic Force Microscope metal tip sliding on a metal slab. The tip and the slab are set in relative motion with constant velocity. The vibrations of individual atoms with respect to that relative motion are obtained explicitly using Molecular Dynamics with Embedded Atom Method potentials. First, we obtain signatures of Self-Organized Criticality, in that the stick-slip jump force probability densities are power laws with exponents in the range (0.5, 1.5) for aluminum and copper. Second, we characterize the dynamical attractor by the entropy content of the overall atomic jittering. We find that in all cases friction minimizes the entropy, which makes a strong case for self-organization. Full article
(This article belongs to the Special Issue Entropy and Friction Volume 2)
Open Access Article Information-Geometric Markov Chain Monte Carlo Methods Using Diffusions
Entropy 2014, 16(6), 3074-3102; doi:10.3390/e16063074
Received: 29 March 2014 / Revised: 23 May 2014 / Accepted: 28 May 2014 / Published: 3 June 2014
Cited by 5 | PDF Full-text (611 KB) | HTML Full-text | XML Full-text
Abstract
Recent work incorporating geometric ideas in Markov chain Monte Carlo is reviewed in order to highlight these advances and their possible application in a range of domains beyond statistics. A full exposition of Markov chains and their use in Monte Carlo simulation for statistical inference and molecular dynamics is provided, with particular emphasis on methods based on Langevin diffusions. After this, geometric concepts in Markov chain Monte Carlo are introduced. A full derivation of the Langevin diffusion on a Riemannian manifold is given, together with a discussion of the appropriate Riemannian metric choice for different problems. A survey of applications is provided, and some open questions are discussed. Full article
(This article belongs to the Special Issue Information Geometry)
Open Access Article Analysis and Optimization of a Compressed Air Energy Storage—Combined Cycle System
Entropy 2014, 16(6), 3103-3120; doi:10.3390/e16063103
Received: 11 March 2014 / Revised: 25 May 2014 / Accepted: 28 May 2014 / Published: 4 June 2014
Cited by 6 | PDF Full-text (408 KB) | HTML Full-text | XML Full-text
Abstract
Compressed air energy storage (CAES) is a commercial, utility-scale technology that provides long-duration energy storage with fast ramp rates and good part-load operation. It is a promising storage technology for balancing the large-scale penetration of renewable energies, such as wind and solar power, into electric grids. This study proposes a CAES-CC system, which is based on a conventional CAES combined with a steam turbine cycle through a waste heat boiler. Simulation and thermodynamic analysis are carried out on the proposed CAES-CC system. The electricity and heating rates of the proposed CAES-CC system are lower than those of the conventional CAES by 0.127 kWh/kWh and 0.338 kWh/kWh, respectively, because the CAES-CC system recycles high-temperature turbine-exhaust air. The overall efficiency of the CAES-CC system is improved by approximately 10% compared with that of the conventional CAES. In the CAES-CC system, compression intercooler heat can keep the steam turbine on hot standby, thus improving the flexibility of CAES-CC. This study provides a new method for improving the efficiency of CAES and new ideas for integrating CAES with other electricity-generating modes. Full article
Open Access Article Quantum Flows for Secret Key Distribution in the Presence of the Photon Number Splitting Attack
Entropy 2014, 16(6), 3121-3135; doi:10.3390/e16063121
Received: 8 April 2014 / Revised: 23 May 2014 / Accepted: 29 May 2014 / Published: 5 June 2014
Cited by 1 | PDF Full-text (3598 KB) | HTML Full-text | XML Full-text
Abstract
Physical implementations of quantum key distribution (QKD) protocols, like Bennett-Brassard (BB84), are forced to use attenuated coherent quantum states, because sources of single-photon states are not yet practical for QKD applications. However, when using attenuated coherent states, the relatively high rate of multi-photon pulses introduces vulnerabilities that can be exploited by the photon number splitting (PNS) attack to break the quantum key. Some QKD protocols have been developed to be resistant to the PNS attack, like the decoy method, but these define a single photonic gain in the quantum channel. To overcome this limitation, we have developed a new QKD protocol, called ack-QKD, which is resistant to the PNS attack. Moreover, it uses attenuated quantum states, but defines two interleaved photonic quantum flows to detect eavesdropper activity by means of the quantum photonic error gain (QPEG) or the quantum bit error rate (QBER). The physical implementation of ack-QKD is similar to the well-known BB84 protocol. Full article
Open Access Article Minimum Entropy-Based Cascade Control for Governing Hydroelectric Turbines
Entropy 2014, 16(6), 3136-3148; doi:10.3390/e16063136
Received: 21 January 2014 / Revised: 21 May 2014 / Accepted: 27 May 2014 / Published: 5 June 2014
Cited by 3 | PDF Full-text (465 KB) | HTML Full-text | XML Full-text
Abstract
In this paper, an improved cascade control strategy is presented for hydroturbine speed governors. Different from traditional proportional-integral-derivative (PID) control and model predictive control (MPC) strategies, the performance index of the outer controller is constructed by integrating the entropy and mean value of the tracking error with the constraints on control energy. The inner controller is implemented by a proportional controller. Compared with the conventional PID-P and MPC-P cascade control methods, the proposed cascade control strategy can effectively decrease fluctuations of hydro-turbine speed under non-Gaussian disturbance conditions in practical hydropower plants. Simulation results show the advantages of the proposed cascade control method. Full article
Open Access Article A Derivation of a Microscopic Entropy and Time Irreversibility From the Discreteness of Time
Entropy 2014, 16(6), 3149-3172; doi:10.3390/e16063149
Received: 2 April 2014 / Revised: 4 May 2014 / Accepted: 21 May 2014 / Published: 6 June 2014
Cited by 2 | PDF Full-text (808 KB) | HTML Full-text | XML Full-text
Abstract
The basic microscopic physical laws are time reversible. In contrast, the second law of thermodynamics, which is a macroscopic physical representation of the world, is able to describe irreversible processes in an isolated system through the change of entropy ΔS > 0. The present manuscript attempts to bridge the microscopic physical world with the macroscopic one using an alternative approach to the statistical mechanics theory of Gibbs and Boltzmann. It is proposed that time is discrete with constant step size. Its consequence is the presence of time irreversibility at the microscopic level if the present force is of complex nature (F(r) ≠ const). In order to compare this discrete time irreversible mechanics (for simplicity a “classical”, single particle in a one-dimensional space is selected) with its classical Newton analog, time reversibility is reintroduced by scaling the time steps for any given time step n by the variable sn, leading to the Nosé-Hoover Lagrangian. The corresponding Nosé-Hoover Hamiltonian comprises a term Ndf kB T ln sn (kB the Boltzmann constant, T the temperature, and Ndf the number of degrees of freedom), which is defined as the microscopic entropy Sn at time point n multiplied by T. Upon ensemble averaging, this microscopic entropy Sn in equilibrium, for a system which does not have fast-changing forces, approximates its macroscopic counterpart known from thermodynamics. The presented derivation, with the resulting analogy between the ensemble-averaged microscopic entropy and its thermodynamic analog, suggests that the original description of the entropy by Boltzmann and Gibbs is just an ensemble averaging of the time scaling variable sn, which in equilibrium is close to 1, but that the entropy [...] Full article
Open Access Article Relative Entropy, Interaction Energy and the Nature of Dissipation
Entropy 2014, 16(6), 3173-3206; doi:10.3390/e16063173
Received: 10 February 2014 / Revised: 20 April 2014 / Accepted: 23 May 2014 / Published: 6 June 2014
Cited by 4 | PDF Full-text (314 KB) | HTML Full-text | XML Full-text
Abstract
Many thermodynamic relations involve inequalities, with equality if a process does not involve dissipation. In this article we provide equalities in which the dissipative contribution is shown to involve the relative entropy (a.k.a. Kullback-Leibler divergence). The processes considered are general time evolutions both in classical and quantum mechanics, and the initial state is sometimes thermal, sometimes partially so. By calculating a transport coefficient we show that indeed—at least in this case—the source of dissipation in that coefficient is the relative entropy. Full article
(This article belongs to the Special Issue Complex Systems)
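For discrete distributions, the relative entropy (Kullback-Leibler divergence) invoked above is D(p‖q) = Σ p_i ln(p_i/q_i); it is non-negative and vanishes exactly when p = q, which is what makes it a natural measure of dissipation. A minimal numerical sketch (the distributions are illustrative, not from the paper):

```python
from math import log

def relative_entropy(p, q):
    """Kullback-Leibler divergence D(p || q) = sum p_i ln(p_i / q_i)."""
    return sum(pi * log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

p = [0.4, 0.4, 0.2]
q = [1 / 3, 1 / 3, 1 / 3]

print(relative_entropy(p, p))       # 0.0: vanishes when the states coincide
print(relative_entropy(p, q) > 0)   # True: strictly positive whenever p != q
```

Note that D(p‖q) is not symmetric in its arguments, which is consistent with its role as a directed measure of how far a state has departed from a reference (e.g. thermal) state.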
Open Access Article On the Fisher Metric of Conditional Probability Polytopes
Entropy 2014, 16(6), 3207-3233; doi:10.3390/e16063207
Received: 31 March 2014 / Revised: 18 May 2014 / Accepted: 29 May 2014 / Published: 6 June 2014
PDF Full-text (354 KB) | HTML Full-text | XML Full-text
Abstract
We consider three different approaches to define natural Riemannian metrics on polytopes of stochastic matrices. First, we define a natural class of stochastic maps between these polytopes and give a metric characterization of Chentsov type in terms of invariance with respect to these maps. Second, we consider the Fisher metric defined on arbitrary polytopes through their embeddings as exponential families in the probability simplex. We show that these metrics can also be characterized by an invariance principle with respect to morphisms of exponential families. Third, we consider the Fisher metric resulting from embedding the polytope of stochastic matrices in a simplex of joint distributions by specifying a marginal distribution. All three approaches result in slight variations of products of Fisher metrics. This is consistent with the nature of polytopes of stochastic matrices, which are Cartesian products of probability simplices. The first approach yields a scaled product of Fisher metrics; the second, a product of Fisher metrics; and the third, a product of Fisher metrics scaled by the marginal distribution. Full article
(This article belongs to the Special Issue Information Geometry)
Open Access Article On Spatial Covariance, Second Law of Thermodynamics and Configurational Forces in Continua
Entropy 2014, 16(6), 3234-3256; doi:10.3390/e16063234
Received: 27 December 2013 / Revised: 10 February 2014 / Accepted: 12 May 2014 / Published: 10 June 2014
Cited by 1 | PDF Full-text (457 KB) | HTML Full-text | XML Full-text
Abstract
This paper studies the transformation properties of the spatial balance of energy equation for a dissipative material, under the superposition of arbitrary spatial diffeomorphisms. The study reveals that for a dissipative material the transformed energy balance equation has some non-standard terms in it. These terms are related to a system of microforces with its own balance equation. These microforces act during the superposition of the spatial diffeomorphism, because of the dissipative properties of the material. Moreover, it is shown that for the case in question the stress tensor is additively decomposed into a conventional part given by the standard Doyle-Ericksen formula and a non-conventional one which is related to changes in the material internal structure in the course of deformation. On the basis of the second law of thermodynamics and the integrability condition of a Pfaffian form, it is shown that the non-conventional part of the stress tensor can be related not only to dissipative but also to conservative response. Further insight into this conservative response is provided by exploiting the invariance properties of the balance of energy equation within the context of the material intrinsic “physical” metric concept. In this case, it is shown that the assumption of spatial covariance yields the standard conservation and balance laws of classical mechanics, but it does not yield the standard Doyle-Ericksen formula. In fact, the Doyle-Ericksen formula has an additional term in it, which is related directly to the evolution of the material internal structure, as determined by the (time) evolution of the material metric in the spatial configuration. A formal connection between this term and the Eshelby energy-momentum tensor is derived as well. Full article
(This article belongs to the Special Issue Entropy and the Second Law of Thermodynamics)
Open Access Article Density Reconstructions with Errors in the Data
Entropy 2014, 16(6), 3257-3272; doi:10.3390/e16063257
Received: 2 May 2014 / Revised: 3 June 2014 / Accepted: 9 June 2014 / Published: 12 June 2014
Cited by 3 | PDF Full-text (136 KB) | HTML Full-text | XML Full-text
Abstract
The maximum entropy method was originally proposed as a variational technique to determine probability densities from the knowledge of a few expected values. The applications of the method beyond its original role in statistical physics are manifold. An interesting feature of the method is its potential to incorporate errors in the data. Here, we examine two possible ways of doing that. The two approaches have different intuitive interpretations, and one of them allows for error estimation. Our motivating example comes from the field of risk analysis, but the statement of the problem might as well come from any branch of applied sciences. We apply the methodology to a problem consisting of the determination of a probability density from a few values of its numerically-determined Laplace transform. This problem can be mapped onto a problem consisting of the determination of a probability density on [0, 1] from the knowledge of a few of its fractional moments up to some measurement errors stemming from insufficient data. Full article
(This article belongs to the Special Issue Maximum Entropy and Its Application)
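The variational technique referred to above has a classic closed form: maximizing entropy subject to expectation constraints yields an exponential-family density, with Lagrange multipliers fixed by the data. A minimal sketch for a single mean constraint, with the multiplier found by bisection (the loaded-die example is a standard illustration, not from the paper):

```python
from math import exp

def maxent_distribution(values, mean):
    """Maximum entropy distribution on `values` with a prescribed mean:
    p_i proportional to exp(lam * x_i), with lam found by bisection
    (the constrained mean is monotone increasing in lam)."""
    def mean_of(lam):
        w = [exp(lam * x) for x in values]
        return sum(x * wi for x, wi in zip(values, w)) / sum(w)
    lo, hi = -50.0, 50.0
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if mean_of(mid) < mean:
            lo = mid
        else:
            hi = mid
    lam = 0.5 * (lo + hi)
    w = [exp(lam * x) for x in values]
    z = sum(w)
    return [wi / z for wi in w]

# Jaynes' die: faces 1..6 with an observed mean of 4.5 instead of the fair 3.5.
p = maxent_distribution([1, 2, 3, 4, 5, 6], mean=4.5)
print(round(sum(i * pi for i, pi in enumerate(p, start=1)), 6))  # 4.5
```

The paper's setting replaces the exact expected values with fractional moments known only up to measurement error, which relaxes the hard constraint above into one that tolerates data errors.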
Open Access Article On Clustering Histograms with k-Means by Using Mixed α-Divergences
Entropy 2014, 16(6), 3273-3301; doi:10.3390/e16063273
Received: 15 May 2014 / Revised: 10 June 2014 / Accepted: 13 June 2014 / Published: 17 June 2014
Cited by 1 | PDF Full-text (354 KB) | HTML Full-text | XML Full-text
Abstract
Clustering sets of histograms has become popular thanks to the success of the generic method of bag-of-X used in text categorization and in visual categorization applications. In this paper, we investigate the use of a parametric family of distortion measures, called the α-divergences, for clustering histograms. Since it usually makes sense to deal with symmetric divergences in information retrieval systems, we symmetrize the α-divergences using the concept of mixed divergences. First, we present a novel extension of k-means clustering to mixed divergences. Second, we extend the k-means++ seeding to mixed α-divergences and report a guaranteed probabilistic bound. Finally, we describe a soft clustering technique for mixed α-divergences. Full article
(This article belongs to the Special Issue Information Geometry)
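The distortion measures at the heart of the paper can be written down directly. A minimal sketch, assuming Amari's standard parameterization of the α-divergence and a skew parameter λ for the mixed (symmetrized) divergence:

```python
import numpy as np

def alpha_divergence(p, q, alpha):
    """Amari alpha-divergence between positive arrays (alpha != +/-1).
    Vanishes iff p == q, and is nonnegative by the weighted AM-GM
    inequality."""
    p, q = np.asarray(p, float), np.asarray(q, float)
    a, b = (1 - alpha) / 2, (1 + alpha) / 2
    return 4.0 / (1 - alpha**2) * np.sum(a * p + b * q - p**a * q**b)

def mixed_divergence(p, q, alpha, lam=0.5):
    """Mixed divergence: a lam-weighted combination of the two
    orientations of the alpha-divergence."""
    return (lam * alpha_divergence(p, q, alpha)
            + (1 - lam) * alpha_divergence(q, p, alpha))

# Two illustrative normalized histograms
p = [0.2, 0.3, 0.5]
q = [0.5, 0.25, 0.25]
```

With λ = 1/2 the mixed divergence is symmetric in its two arguments, which is the property motivating its use in retrieval systems.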
Open AccessArticle A Bayesian Probabilistic Framework for Rain Detection
Entropy 2014, 16(6), 3302-3314; doi:10.3390/e16063302
Received: 27 March 2014 / Revised: 27 May 2014 / Accepted: 9 June 2014 / Published: 17 June 2014
Cited by 1 | PDF Full-text (1287 KB) | HTML Full-text | XML Full-text
Abstract
Heavy rain degrades the video quality of outdoor imaging equipment. To improve video clarity, both image-based and sensor-based methods are used for rain detection. In earlier literature, image-based detection methods fall into spatial and temporal categories. In this paper, we propose a new image-based method that exploits joint spatio-temporal constraints in a Bayesian framework. In our framework, the temporal motion of rain is modeled as Pathological Motion (PM), which better suits the time-varying character of rain streaks. Temporal displaced-frame discontinuity and a spatial Gaussian mixture model are used throughout the framework. The Gaussian parameters are estimated with an iterated expectation-maximization method, and pixel states are estimated by iterated optimization of the Bayesian probability formulation. The experimental results highlight the advantage of our method in rain detection. Full article
Open AccessArticle A Novel Block-Based Scheme for Arithmetic Coding
Entropy 2014, 16(6), 3315-3328; doi:10.3390/e16063315
Received: 8 February 2014 / Revised: 25 May 2014 / Accepted: 10 June 2014 / Published: 18 June 2014
PDF Full-text (257 KB) | HTML Full-text | XML Full-text
Abstract
It is well known that, for a given sequence, its optimal codeword length is fixed. Many coding schemes have been proposed to bring the codeword length as close to this optimal value as possible. In this paper, a new block-based coding scheme operating on the subsequences of a source sequence is proposed. It is proved that the optimal codeword lengths of the subsequences are not larger than that of the given sequence. Experimental results using arithmetic coding are presented. Full article
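The statement about optimal codeword lengths can be checked numerically under a zero-order (memoryless) model, where the optimal length of a sequence is its empirical entropy times its length, the bound an arithmetic coder approaches. The particular sequence and partition below are illustrative, not from the paper:

```python
import math
from collections import Counter

def optimal_code_length(seq):
    """Zero-order empirical entropy times sequence length: the
    information-theoretic lower bound (in bits) that an arithmetic
    coder approaches under a memoryless model of `seq`."""
    n = len(seq)
    return -sum(c * math.log2(c / n) for c in Counter(seq).values())

# Splitting a sequence into subsequences can only shrink the total
# optimal length, because entropy is concave in the empirical
# distribution of symbols.
s = "abababbbbbaaaaaabab"
s1, s2 = s[0::2], s[1::2]   # an arbitrary illustrative partition
```

The inequality holds for any partition of the symbol positions, which is consistent with the paper's claim for subsequences.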
Open AccessArticle Speeding up Derivative Configuration from Product Platforms
Entropy 2014, 16(6), 3329-3356; doi:10.3390/e16063329
Received: 16 January 2014 / Revised: 27 May 2014 / Accepted: 9 June 2014 / Published: 18 June 2014
Cited by 1 | PDF Full-text (618 KB) | HTML Full-text | XML Full-text
Abstract
To compete in the global marketplace, manufacturers try to differentiate their products by focusing on individual customer needs. Fulfilling this goal requires that companies shift from mass production to mass customization. Under this approach, a generic architecture, named product platform, is designed to support the derivation of customized products through a configuration process that determines which components the product comprises. When a customer configures a derivative, typically not every combination of available components is valid. To guarantee that all dependencies and incompatibilities among the derivative's constituent components are satisfied, automated configurators are used. Flexible product platforms provide a large number of interrelated components, so the configuration of all but trivial derivatives involves considerable effort to select which components the derivative should include. Our approach alleviates that effort by speeding up derivative configuration with a heuristic based on the information-theoretic concept of entropy. Full article
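One plausible reading of such an entropy heuristic, sketched on a toy platform (component names and configurations are invented for illustration, not taken from the paper): ask first about the component whose inclusion across the remaining valid configurations is most uncertain, since its answer prunes the space fastest on average.

```python
import math

def entropy(p):
    """Shannon entropy (bits) of a yes/no choice with inclusion
    probability p."""
    if p in (0.0, 1.0):
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def next_question(valid_configs, undecided):
    """Among the undecided components, pick the one whose inclusion is
    most uncertain (maximum entropy) over the remaining valid
    configurations."""
    n = len(valid_configs)
    def h(component):
        p = sum(cfg[component] for cfg in valid_configs) / n
        return entropy(p)
    return max(undecided, key=h)

# Toy platform: 'gps' is in every remaining valid derivative
# (entropy 0), while 'camera' splits them evenly (entropy 1),
# so the heuristic asks about 'camera' first.
configs = [
    {"gps": True, "camera": True,  "lidar": True},
    {"gps": True, "camera": True,  "lidar": False},
    {"gps": True, "camera": False, "lidar": False},
    {"gps": True, "camera": False, "lidar": False},
]
```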
Open AccessArticle Effects of Anticipation in Individually Motivated Behaviour on Survival and Control in a Multi-Agent Scenario with Resource Constraints
Entropy 2014, 16(6), 3357-3378; doi:10.3390/e16063357
Received: 1 March 2014 / Revised: 21 March 2014 / Accepted: 10 June 2014 / Published: 19 June 2014
Cited by 1 | PDF Full-text (378 KB) | HTML Full-text | XML Full-text
Abstract
Self-organization and survival are inextricably bound to an agent’s ability to control and anticipate its environment. Here we assess both skills when multiple agents compete for a scarce resource. Drawing on insights from psychology, microsociology and control theory, we examine how different assumptions about the behaviour of an agent’s peers in the anticipation process affect subjective control and survival strategies. To quantify control and drive behaviour, we use the recently developed information-theoretic quantity of empowerment with the principle of empowerment maximization. In two experiments involving extensive simulations, we show that agents develop risk-seeking, risk-averse and mixed strategies, which correspond to greedy, parsimonious and mixed behaviour. Although the principle of empowerment maximization is highly generic, the emerging strategies are consistent with what one would expect from rational individuals with dedicated utility models. Our results support empowerment maximization as a universal drive for guided self-organization in collective agent systems. Full article
(This article belongs to the Special Issue Entropy Methods in Guided Self-Organization)
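One-step empowerment is the Shannon capacity of the channel from an agent's actions to its resulting states, and channel capacity can be computed with the standard Blahut-Arimoto iteration. The sketch below assumes a small discrete action/state space given as a transition matrix; the paper's agents use a richer n-step formulation.

```python
import numpy as np

def empowerment(p_s_given_a, iters=300):
    """One-step empowerment: the capacity max_{p(a)} I(A; S') in bits,
    computed with the Blahut-Arimoto iteration.
    p_s_given_a[a, s] = P(next state s | action a)."""
    W = np.asarray(p_s_given_a, float)
    p = np.full(W.shape[0], 1.0 / W.shape[0])  # start from uniform actions

    def kl_rows(q):
        # D(W_a || q) for each action a, in bits; 0*log(0) treated as 0
        with np.errstate(divide="ignore", invalid="ignore"):
            t = np.where(W > 0, W * np.log2(W / q), 0.0)
        return t.sum(axis=1)

    for _ in range(iters):
        d = kl_rows(p @ W)        # p @ W = current marginal over states
        p = p * np.exp2(d)        # Blahut-Arimoto reweighting
        p /= p.sum()
    return float(p @ kl_rows(p @ W))
```

A deterministic, invertible action-to-state mapping yields full empowerment (here 1 bit for two actions), while a channel whose outcome is independent of the action yields zero, matching the intuition of empowerment as subjective control.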
Open AccessArticle Coarse Dynamics for Coarse Modeling: An Example From Population Biology
Entropy 2014, 16(6), 3379-3400; doi:10.3390/e16063379
Received: 23 March 2014 / Revised: 29 May 2014 / Accepted: 6 June 2014 / Published: 19 June 2014
Cited by 4 | PDF Full-text (601 KB) | HTML Full-text | XML Full-text
Abstract
Networks have become a popular way to concisely represent complex nonlinear systems where the interactions and parameters are imprecisely known. One challenge is how best to describe the associated dynamics, which can exhibit complicated behavior sensitive to small changes in parameters. A recently developed computational approach that we refer to as a database for dynamics provides a robust and mathematically rigorous description of global dynamics over large ranges of parameter space. To demonstrate the potential of this approach we consider two classical age-structured population models that share the same network diagram and have a similar nonlinear overcompensatory term, but nevertheless yield different patterns of qualitative behavior as a function of parameters. Using a generalization of these models, we relate the differing structures of the observed dynamics to biologically relevant questions such as stable population oscillations, bistability, and permanence. Full article
(This article belongs to the Special Issue Information in Dynamical Systems and Complex Systems)
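A minimal member of this model family, two age classes with a Ricker-type overcompensatory recruitment term, can be iterated directly. The functional form and all parameter values are illustrative assumptions, not the models analyzed in the paper:

```python
import math

def step(juveniles, adults, f1=0.2, f2=3.0, s=0.7, c=0.1):
    """One generation of a two-age-class model: both classes reproduce,
    recruitment is damped by a Ricker-type overcompensatory factor
    exp(-c * total population), and a fraction s of juveniles matures."""
    total = juveniles + adults
    new_juveniles = (f1 * juveniles + f2 * adults) * math.exp(-c * total)
    new_adults = s * juveniles
    return new_juveniles, new_adults

# Iterate from an arbitrary initial population; the overcompensatory
# factor keeps the orbit bounded and nonnegative.
state = (1.0, 1.0)
for _ in range(200):
    state = step(*state)
```

Depending on the parameters, such maps can settle to a fixed point, oscillate stably, or behave chaotically, which is exactly the kind of parameter-dependent qualitative variety the database-for-dynamics approach is designed to catalogue.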
Open AccessArticle A Maximum Entropy Method for a Robust Portfolio Problem
Entropy 2014, 16(6), 3401-3415; doi:10.3390/e16063401
Received: 27 March 2014 / Revised: 9 June 2014 / Accepted: 17 June 2014 / Published: 20 June 2014
Cited by 2 | PDF Full-text (247 KB) | HTML Full-text | XML Full-text
Abstract
We propose a continuous maximum entropy method to investigate the robust optimal portfolio selection problem for the market with transaction costs and dividends. This robust model aims to maximize the worst-case portfolio return in the case that all of the asset returns lie within some prescribed intervals. A numerical optimal solution to the problem is obtained by using a continuous maximum entropy method. Furthermore, some numerical experiments indicate that the robust model in this paper can result in better portfolio performance than a classical mean-variance model. Full article
(This article belongs to the Special Issue Maximum Entropy and Its Application)
Open AccessArticle Identifying the Coupling Structure in Complex Systems through the Optimal Causation Entropy Principle
Entropy 2014, 16(6), 3416-3433; doi:10.3390/e16063416
Received: 28 April 2014 / Revised: 14 May 2014 / Accepted: 9 June 2014 / Published: 20 June 2014
Cited by 8 | PDF Full-text (1396 KB) | HTML Full-text | XML Full-text
Abstract
Inferring the coupling structure of complex systems from time series data in general by means of statistical and information-theoretic techniques is a challenging problem in applied science. The reliability of statistical inferences requires the construction of suitable information-theoretic measures that take into account both direct and indirect influences, manifest in the form of information flows, between the components within the system. In this work, we present an application of the optimal causation entropy (oCSE) principle to identify the coupling structure of a synthetic biological system, the repressilator. Specifically, when the system reaches an equilibrium state, we use a stochastic perturbation approach to extract time series data that approximate a linear stochastic process. Then, we present and jointly apply the aggregative discovery and progressive removal algorithms based on the oCSE principle to infer the coupling structure of the system from the measured data. Finally, we show that the success rate of our coupling inferences not only improves with the amount of available data, but also increases with a higher sampling frequency, and it is especially immune to false positives. Full article
(This article belongs to the Special Issue Information in Dynamical Systems and Complex Systems)
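For the approximately linear Gaussian processes produced by the stochastic perturbation step, causation entropy reduces to a conditional mutual information that can be read off covariance matrices via the standard log-determinant formula. The covariance below is fabricated to exhibit conditional independence; this is a sketch of the measure itself, not of the paper's aggregative discovery and progressive removal algorithms.

```python
import numpy as np

def gaussian_cmi(cov, x, y, z):
    """I(X;Y|Z) in nats for jointly Gaussian variables, from the joint
    covariance matrix: 0.5*(ln|C_xz| + ln|C_yz| - ln|C_z| - ln|C_xyz|).
    x and y are indices; z is a (possibly empty) list of indices."""
    C = np.asarray(cov, float)
    def logdet(idx):
        return np.linalg.slogdet(C[np.ix_(idx, idx)])[1] if idx else 0.0
    return 0.5 * (logdet([x] + z) + logdet([y] + z)
                  - logdet(z) - logdet([x, y] + z))

# X = Z + noise and Y = Z + noise (independent noises): X and Y are
# correlated, but conditionally independent given Z, so conditioning
# on Z removes the (indirect) information flow between them.
cov = [[2.0, 1.0, 1.0],
       [1.0, 2.0, 1.0],
       [1.0, 1.0, 1.0]]
```

This is the distinction between direct and indirect influences that the oCSE principle exploits: the unconditional mutual information between X and Y is positive, but it vanishes once Z is conditioned on.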
Open AccessArticle Some Trends in Quantum Thermodynamics
Entropy 2014, 16(6), 3434-3470; doi:10.3390/e16063434
Received: 28 February 2014 / Revised: 23 April 2014 / Accepted: 10 June 2014 / Published: 23 June 2014
Cited by 5 | PDF Full-text (789 KB) | HTML Full-text | XML Full-text
Abstract
Traditional answers to what the 2nd Law is are well known. Some are based on the microstate of a system wandering rapidly through all accessible phase space, while others are based on the idea of a system occupying an initial multitude of states due to the inevitable imperfections of measurements that then effectively, in a coarse grained manner, grow in time (mixing). What has emerged are two somewhat less traditional approaches from which it is said that the 2nd Law emerges, namely, that of the theory of quantum open systems and that of the theory of typicality. These are the two principal approaches, which form the basis of what today has come to be called quantum thermodynamics. However, their dynamics remains strictly linear and unitary, and, as a number of recent publications have emphasized, “testing the unitary propagation of pure states alone cannot rule out a nonlinear propagation of mixtures”. Thus, a non-traditional approach to capturing such a propagation would be one which complements the postulates of QM by the 2nd Law of thermodynamics, resulting in a possibly meaningful, nonlinear dynamics. An unorthodox approach, which does just that, is intrinsic quantum thermodynamics and its mathematical framework, steepest-entropy-ascent quantum thermodynamics. The latter has evolved into an effective tool for modeling the dynamics of reactive and non-reactive systems at atomistic scales. It is the usefulness of this framework in the context of quantum thermodynamics as well as the theory of typicality which are discussed here in some detail. A brief discussion of some other trends such as those related to work, work extraction, and fluctuation theorems is also presented. Full article
(This article belongs to the Special Issue Advances in Methods and Foundations of Non-Equilibrium Thermodynamics)
Open AccessArticle Hybrid Quantum-Classical Protocol for Storage and Retrieval of Discrete-Valued Information
Entropy 2014, 16(6), 3537-3551; doi:10.3390/e16063537
Received: 10 March 2014 / Revised: 19 June 2014 / Accepted: 19 June 2014 / Published: 24 June 2014
Cited by 5 | PDF Full-text (218 KB) | HTML Full-text | XML Full-text | Correction
Abstract
In this paper, we present a hybrid (i.e., quantum-classical) adaptive protocol for the storage and retrieval of discrete-valued information. The purpose of this paper is to introduce a procedure that shows how to store and retrieve unanticipated information values by exploiting a quantum property: the use of different vector space bases for the preparation and measurement of quantum states. This simple idea speaks to a long-standing goal in Artificial Intelligence: the development of computer systems that can incorporate new knowledge in real time just by hardware manipulation. Full article

Review

Jump to: Research, Other

Open AccessReview Entropy in the Critical Zone: A Comprehensive Review
Entropy 2014, 16(6), 3482-3536; doi:10.3390/e16063482
Received: 19 March 2014 / Revised: 9 June 2014 / Accepted: 16 June 2014 / Published: 24 June 2014
Cited by 2 | PDF Full-text (29866 KB) | HTML Full-text | XML Full-text | Supplementary Files
Abstract
Thermodynamic entropy was initially proposed by Clausius in 1865. Since then it has been implemented in the analysis of different systems, and is seen as a promising concept to understand the evolution of open systems in non-equilibrium conditions. Information entropy was proposed by Shannon in 1948, and has become an important concept to measure information in different systems. Both thermodynamic entropy and information entropy have been extensively applied in different fields related to the Critical Zone, such as hydrology, ecology, pedology, and geomorphology. In this study, we review the most important applications of these concepts in those fields, including how they are calculated, and how they have been utilized to analyze different processes. We then synthesize the link between thermodynamic and information entropies in the light of energy dissipation and organizational patterns, and discuss how this link may be used to enhance the understanding of the Critical Zone. Full article

Other

Jump to: Research, Review

Open AccessLetter Application of the Generalized Work Relation for an N-level Quantum System
Entropy 2014, 16(6), 3471-3481; doi:10.3390/e16063471
Received: 21 April 2014 / Revised: 12 June 2014 / Accepted: 19 June 2014 / Published: 23 June 2014
PDF Full-text (264 KB) | HTML Full-text | XML Full-text
Abstract
An efficient periodic operation to obtain the maximum work from a nonequilibrium initial state in an N-level quantum system is shown. Each cycle consists of a stabilization process followed by an isentropic restoration process. The instantaneous time limit can be taken in the stabilization process from the nonequilibrium initial state to a stable passive state. In the restoration process, which preserves the passive state, a minimum period is needed to satisfy the uncertainty relation between energy and time. An efficient quantum feedback control in a symmetric two-level quantum system connected to an energy source is proposed. Full article
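The work target of the stabilization process can be illustrated with the standard notion of a passive state for a state diagonal in the energy basis: pairing the largest populations with the lowest energy levels leaves no unitarily extractable work, and the energy released in reaching that arrangement (the ergotropy) is the maximum work. The populations and levels below are illustrative, not the paper's system.

```python
import numpy as np

def ergotropy(populations, energies):
    """Maximum work extractable by a cyclic unitary from a state
    diagonal in the energy basis: initial mean energy minus the mean
    energy of the passive state, obtained by sorting the populations
    in decreasing order against the increasing energy ladder."""
    p = np.asarray(populations, float)
    e = np.asarray(energies, float)
    passive_energy = np.sort(p)[::-1] @ np.sort(e)
    return float(p @ e - passive_energy)
```

For an inverted two-level system the entire population imbalance is available as work, while a state that is already passive yields none.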
