Entropy
http://mdpi.com/journal/entropy
Latest open access articles published in Entropy at http://mdpi.com/journal/entropy

Entropy, Vol. 17, Pages 4762-4774: H∞ Control for Markov Jump Systems with Nonlinear Noise Intensity Function and Uncertain Transition Rates
http://mdpi.com/1099-4300/17/7/4762
The problem of robust H∞ control is investigated for Markov jump systems with nonlinear noise intensity function and uncertain transition rates. A robust H∞ performance criterion is developed for the given systems for the first time. Based on the developed performance criterion, the desired H∞ state-feedback controller is also designed, which guarantees the robust H∞ performance of the closed-loop system. All the conditions are in terms of linear matrix inequalities (LMIs), and hence they can be readily solved by any LMI solver. Finally, a numerical example is provided to demonstrate the effectiveness of the proposed methods.

Entropy 2015, 17(7), 4762-4774; Article; doi: 10.3390/e17074762; published 2015-07-06. Authors: Xiaonian Wang, Yafeng Guo.

Entropy, Vol. 17, Pages 4744-4761: Energetic and Exergetic Analysis of an Ejector-Expansion Refrigeration Cycle Using the Working Fluid R32
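The H∞ entry above reduces controller synthesis to conditions in LMI form. Reproducing the paper's Markov-jump LMIs is out of scope here, but the simplest member of this family, the Lyapunov inequality AᵀP + PA ≺ 0 for a linear system, can even be checked in closed form by solving the corresponding Lyapunov equation. A minimal numpy sketch (the matrices are illustrative, not from the paper):

```python
import numpy as np

def solve_lyapunov(A, Q):
    """Solve A^T P + P A = -Q for P, using the Kronecker identity
    vec(A^T P + P A) = (I (x) A^T + A^T (x) I) vec(P)."""
    n = A.shape[0]
    I = np.eye(n)
    K = np.kron(I, A.T) + np.kron(A.T, I)
    p = np.linalg.solve(K, -Q.flatten(order="F"))
    return p.reshape((n, n), order="F")

# A is Hurwitz (eigenvalues -1, -2), so P must be positive definite.
A = np.array([[-1.0, 0.0], [0.0, -2.0]])
P = solve_lyapunov(A, np.eye(2))
assert np.all(np.linalg.eigvalsh(P) > 0)  # stability certificate found
```

A dedicated LMI solver generalizes this by searching over P subject to several such matrix inequalities simultaneously, rather than solving one equality.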
http://mdpi.com/1099-4300/17/7/4744
The performance characteristics of an ejector-expansion refrigeration cycle (EEC) using R32 have been investigated in comparison with a cycle using R134a. The coefficient of performance (COP), the exergy destruction, the exergy efficiency and the suction nozzle pressure drop (SNPD) are discussed. The results show that the application of an ejector instead of a throttle valve in the R32 cycle decreases the cycle’s total exergy destruction by 8.84%–15.84% in comparison with the basic cycle (BC). The R32 EEC provides a 5.22%–13.77% COP improvement and a 5.13%–13.83% exergy efficiency improvement over the BC for the given ranges of evaporating and condensing temperatures. There exists an optimum SNPD which gives a maximum system COP and volumetric cooling capacity (VCC) under a specified condition. The value of the optimum SNPD mainly depends on the efficiencies of the ejector components, but is virtually independent of evaporating temperature and condensing temperature. In addition, the improvement of the component efficiency, especially the efficiencies of the diffusion nozzle and the motive nozzle, can enhance the EEC performance.

Entropy 2015, 17(7), 4744-4761; Article; doi: 10.3390/e17074744; published 2015-07-06. Authors: Zhenying Zhang, Lirui Tong, Li Chang, Yanhua Chen, Xingguo Wang.

Entropy, Vol. 17, Pages 4701-4743: Asymptotic Description of Neural Networks with Correlated Synaptic Weights
http://mdpi.com/1099-4300/17/7/4701
We study the asymptotic law of a network of interacting neurons when the number of neurons becomes infinite. Given a completely connected network of neurons in which the synaptic weights are Gaussian correlated random variables, we describe the asymptotic law of the network when the number of neurons goes to infinity. We introduce the process-level empirical measure of the trajectories of the solutions to the equations of the finite network of neurons and the averaged law (with respect to the synaptic weights) of the trajectories of the solutions to the equations of the network of neurons. The main result of this article is that the image law through the empirical measure satisfies a large deviation principle with a good rate function which is shown to have a unique global minimum. Our analysis of the rate function allows us also to characterize the limit measure as the image of a stationary Gaussian measure defined on a transformed set of trajectories.

Entropy 2015, 17(7), 4701-4743; Article; doi: 10.3390/e17074701; published 2015-07-06. Authors: Olivier Faugeras, James MacLaurin.

Entropy, Vol. 17, Pages 4684-4700: Geometric Interpretation of Surface Tension Equilibrium in Superhydrophobic Systems
http://mdpi.com/1099-4300/17/7/4684
Surface tension and surface energy are closely related, although not identical concepts. Surface tension is a generalized force; unlike a conventional mechanical force, it is not applied to any particular body or point. Using this notion, we suggest a simple geometric interpretation of the Young, Wenzel, Cassie, Antonoff and Girifalco–Good equations for the equilibrium during wetting. This approach extends the traditional concept of Neumann’s triangle. Substances are presented as points, while tensions are vectors connecting the points. The equations and inequalities of wetting equilibrium then obtain a simple geometric meaning: the surface roughness effect is interpreted as stretching of the corresponding vectors, surface heterogeneity as their linear combination, and contact angle hysteresis as rotation. We discuss energy dissipation mechanisms during wetting due to contact angle hysteresis, superhydrophobicity and the possible entropic nature of the surface tension.

Entropy 2015, 17(7), 4684-4700; Article; doi: 10.3390/e17074684; published 2015-07-06. Authors: Michael Nosonovsky, Rahul Ramachandran.

Entropy, Vol. 17, Pages 4664-4683: A New Feature Extraction Method Based on the Information Fusion of Entropy Matrix and Covariance Matrix and Its Application in Face Recognition
http://mdpi.com/1099-4300/17/7/4664
The classic principal components analysis (PCA), kernel PCA (KPCA) and linear discriminant analysis (LDA) feature extraction methods evaluate the importance of components according to their covariance contribution, not considering the entropy contribution, which is important supplementary information for the covariance. To further improve covariance-based methods such as PCA (or KPCA), this paper first proposed an entropy matrix to load the uncertainty information of random variables, similar to the covariance matrix loading the variation information in PCA. Then an entropy-difference matrix was used as a weighting matrix for transforming the original training images. This entropy-difference weighting (EW) matrix not only made good use of the local information of the training samples, in contrast to the global approach of PCA, but also considered category information, similar to the LDA idea. The EW method was then integrated with PCA (or KPCA) to form a new feature extraction method. The new method was used for face recognition with the nearest neighbor classifier. The experimental results based on the ORL and Yale databases showed that the proposed method with proper threshold parameters reached higher recognition rates than the usual PCA (or KPCA) methods.

Entropy 2015, 17(7), 4664-4683; Article; doi: 10.3390/e17074664; published 2015-07-03. Authors: Shunfang Wang, Ping Liu.

Entropy, Vol. 17, Pages 4654-4663: Continuous Variable Quantum Key Distribution with a Noisy Laser
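To illustrate the contrast drawn in the face-recognition entry above, the sketch below computes PCA's covariance-based ranking criterion alongside a simple histogram entropy per feature, the kind of "supplementary" uncertainty information the paper argues covariance alone misses. The paper's entropy matrix and EW weighting scheme are not reproduced; both functions here are assumed stand-ins for illustration only.

```python
import numpy as np

def variance_contributions(X):
    """Eigenvalue spectrum of the covariance matrix, normalized:
    PCA ranks components by exactly these variance contributions."""
    C = np.cov(X, rowvar=False)
    w = np.linalg.eigvalsh(C)[::-1]  # descending order
    return w / w.sum()

def feature_entropies(X, bins=8):
    """Histogram-based Shannon entropy (nats) of each feature,
    a crude proxy for the per-variable uncertainty information."""
    H = []
    for col in X.T:
        counts, _ = np.histogram(col, bins=bins)
        p = counts[counts > 0] / counts.sum()
        H.append(-np.sum(p * np.log(p)))
    return np.array(H)
```

Two features can have equal variance yet very different histogram entropies (e.g. bimodal vs. unimodal), which is the gap an entropy-aware weighting aims to exploit.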
http://mdpi.com/1099-4300/17/7/4654
Existing experimental implementations of continuous-variable quantum key distribution require shot-noise limited operation, achieved with shot-noise limited lasers. However, loosening this requirement on the laser source would allow for cheaper, potentially integrated systems. Here, we implement a theoretically proposed prepare-and-measure continuous-variable protocol and experimentally demonstrate its robustness against preparation noise stemming, for instance, from technical laser noise. Provided that direct reconciliation techniques are used in the post-processing, we show that, for small distances, large amounts of preparation noise can be tolerated, in contrast to reverse reconciliation, where the key rate quickly drops to zero. Our experiment thereby demonstrates that quantum key distribution with non-shot-noise limited laser diodes might be feasible.

Entropy 2015, 17(7), 4654-4663; Article; doi: 10.3390/e17074654; published 2015-07-03. Authors: Christian Jacobsen, Tobias Gehring, Ulrik Andersen.

Entropy, Vol. 17, Pages 4644-4653: Quantifying Redundant Information in Predicting a Target Random Variable
http://mdpi.com/1099-4300/17/7/4644
We consider the problem of defining a measure of redundant information that quantifies how much common information two or more random variables specify about a target random variable. We discuss desired properties of such a measure, and propose new measures with some desirable properties.

Entropy 2015, 17(7), 4644-4653; Article; doi: 10.3390/e17074644; published 2015-07-02. Authors: Virgil Griffith, Tracey Ho.

Entropy, Vol. 17, Pages 4627-4643: Differentiating Interictal and Ictal States in Childhood Absence Epilepsy through Permutation Rényi Entropy
http://mdpi.com/1099-4300/17/7/4627
Permutation entropy (PE) has been widely exploited to measure the complexity of the electroencephalogram (EEG), especially when complexity is linked to diagnostic information embedded in the EEG. Recently, the authors proposed a spatial-temporal analysis of the EEG recordings of absence epilepsy patients based on PE. The goal here is to improve the ability of PE in discriminating interictal states from ictal states in absence seizure EEG. For this purpose, a parametrical definition of permutation entropy is introduced here in the field of epileptic EEG analysis: the permutation Rényi entropy (PEr). PEr has been extensively tested against PE by tuning the involved parameters (order, delay time and alpha). The achieved results demonstrate that PEr outperforms PE, as there is a statistically-significant, wider gap between the PEr levels during the interictal states and PEr levels observed in the ictal states compared to PE. PEr also outperformed PE as the input to a classifier aimed at discriminating interictal from ictal states.

Entropy 2015, 17(7), 4627-4643; Article; doi: 10.3390/e17074627; published 2015-07-02. Authors: Nadia Mammone, Jonas Duun-Henriksen, Troels Kjaer, Francesco Morabito.

Entropy, Vol. 17, Pages 4602-4626: A New Robust Regression Method Based on Minimization of Geodesic Distances on a Probabilistic Manifold: Application to Power Laws
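The permutation Rényi entropy used in the epilepsy entry above is computed from the frequencies of ordinal patterns in the signal. A minimal sketch of both PE and PEr, normalized by log(order!) so values fall in [0, 1]; the parameter defaults are illustrative, not the paper's tuned values:

```python
import math
from collections import Counter
import numpy as np

def ordinal_probs(x, order=3, delay=1):
    """Relative frequencies of ordinal (permutation) patterns in a 1-D signal."""
    n = len(x) - (order - 1) * delay
    counts = Counter(
        tuple(np.argsort(x[i:i + order * delay:delay])) for i in range(n)
    )
    return np.array(list(counts.values())) / n

def permutation_entropy(x, order=3, delay=1):
    """Shannon permutation entropy, normalized by log(order!)."""
    p = ordinal_probs(x, order, delay)
    return float(-np.sum(p * np.log(p)) / math.log(math.factorial(order)))

def permutation_renyi_entropy(x, order=3, delay=1, alpha=2.0):
    """Renyi variant (alpha != 1); tends to the Shannon PE as alpha -> 1."""
    p = ordinal_probs(x, order, delay)
    return float(math.log(np.sum(p ** alpha)) / (1.0 - alpha)
                 / math.log(math.factorial(order)))
```

A monotone ramp contains a single ordinal pattern, so both measures return 0; white noise populates all order! patterns nearly uniformly, giving values close to 1.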
http://mdpi.com/1099-4300/17/7/4602
In regression analysis for deriving scaling laws that occur in various scientific disciplines, standard regression methods are usually applied, of which ordinary least squares (OLS) is the most popular. In many situations, the assumptions underlying OLS are not fulfilled, and several other approaches have been proposed. However, most techniques address only part of the shortcomings of OLS. We here discuss a new and more general regression method, which we call geodesic least squares regression (GLS). The method is based on minimization of the Rao geodesic distance on a probabilistic manifold. For the case of a power law, we demonstrate the robustness of the method on synthetic data in the presence of significant uncertainty on both the data and the regression model. We then show good performance of the method in an application to a scaling law in magnetic confinement fusion.

Entropy 2015, 17(7), 4602-4626; Article; doi: 10.3390/e17074602; published 2015-07-01. Author: Geert Verdoolaege.

Entropy, Vol. 17, Pages 4582-4601: Applications of the Fuzzy Sumudu Transform for the Solution of First Order Fuzzy Differential Equations
http://mdpi.com/1099-4300/17/7/4582
In this paper, we study the classical Sumudu transform in a fuzzy environment, referred to as the fuzzy Sumudu transform (FST). We also propose some results on the properties of the FST, such as linearity, preserving, fuzzy derivative, shifting and the convolution theorem. In order to show the capability of the FST, we provide a detailed procedure to solve fuzzy differential equations (FDEs). A numerical example is provided to illustrate the usage of the FST.

Entropy 2015, 17(7), 4582-4601; Article; doi: 10.3390/e17074582; published 2015-07-01. Authors: Norazrizal Rahman, Muhammad Ahmad.

Entropy, Vol. 17, Pages 4563-4581: A Possible Cosmological Application of Some Thermodynamic Properties of the Black Body Radiation in n-Dimensional Euclidean Spaces
http://mdpi.com/1099-4300/17/7/4563
In this work, we present the generalization of some thermodynamic properties of the black body radiation (BBR) towards an n-dimensional Euclidean space. For this case, the Planck function and the Stefan–Boltzmann law have already been given by Landsberg and de Vos and some adjustments by Menon and Agrawal. However, since then, not much more has been done on this subject, and we believe there are some relevant aspects yet to explore. In addition to the results previously found, we calculate the thermodynamic potentials, the efficiency of the Carnot engine, the law for adiabatic processes and the heat capacity at constant volume. There is a region at which an interesting behavior of the thermodynamic potentials arises: maxima and minima appear for the n-dimensional BBR system at very high temperatures and low dimensionality, suggesting a possible application to cosmology. Finally, we propose that an optimality criterion in a thermodynamic framework could be related to the three-dimensional nature of the universe.

Entropy 2015, 17(7), 4563-4581; Article; doi: 10.3390/e17074563; published 2015-06-29. Authors: Julian Gonzalez-Ayala, Jennifer Perez-Oregon, Rubén Cordero, Fernando Angulo-Brown.

Entropy, Vol. 17, Pages 4547-4562: Noiseless Linear Amplifiers in Entanglement-Based Continuous-Variable Quantum Key Distribution
http://mdpi.com/1099-4300/17/7/4547
We propose a method to improve the performance of two entanglement-based continuous-variable quantum key distribution protocols using noiseless linear amplifiers. The two entanglement-based schemes consist of an entanglement distribution protocol with an untrusted source and an entanglement swapping protocol with an untrusted relay. Simulation results show that the noiseless linear amplifiers can improve the performance of these two protocols, in terms of maximal transmission distances, when we consider small amounts of entanglement, as typical in realistic setups.

Entropy 2015, 17(7), 4547-4562; Article; doi: 10.3390/e17074547; published 2015-06-26. Authors: Yichen Zhang, Zhengyu Li, Christian Weedbrook, Kevin Marshall, Stefano Pirandola, Song Yu, Hong Guo.

Entropy, Vol. 17, Pages 4533-4546: Reliability Analysis Based on a Jump Diffusion Model with Two Wiener Processes for Cloud Computing with Big Data
http://mdpi.com/1099-4300/17/7/4533
At present, many cloud services are managed by using open source software, such as OpenStack and Eucalyptus, because of the unified management of data, cost reduction, quick delivery and work savings. The operation phase of cloud computing has unique features, such as the provisioning processes, network-based operation and the diversity of data, because it changes depending on many external factors. We propose a jump diffusion model with two-dimensional Wiener processes in order to consider the interesting aspects of the network traffic and big data on cloud computing. In particular, we assess the stability of cloud software by using the sample paths obtained from the jump diffusion model with two-dimensional Wiener processes. Moreover, we discuss the optimal maintenance problem based on the proposed jump diffusion model. Furthermore, we analyze actual data to show numerical examples of dependability optimization based on the software maintenance cost considering big data on cloud computing.

Entropy 2015, 17(7), 4533-4546; Article; doi: 10.3390/e17074533; published 2015-06-26. Authors: Yoshinobu Tamura, Shigeru Yamada.

Entropy, Vol. 17, Pages 4519-4532: Estimating Portfolio Value at Risk in the Electricity Markets Using an Entropy Optimized BEMD Approach
http://mdpi.com/1099-4300/17/7/4519
In this paper, we propose a new entropy-optimized bivariate empirical mode decomposition (BEMD)-based model for estimating portfolio value at risk (PVaR). It reveals and analyzes different components of the price fluctuation. These components are decomposed and distinguished by the BEMD model according to their different behavioral patterns and fluctuation ranges. Entropy theory is introduced for the identification of the model parameters during the modeling process. The decomposed bivariate data components are then modeled with DCC-GARCH models. Empirical studies suggest that the proposed model outperforms the benchmark multivariate exponential weighted moving average (MEWMA) and DCC-GARCH models, in terms of conventional out-of-sample performance evaluation criteria for model accuracy.

Entropy 2015, 17(7), 4519-4532; Article; doi: 10.3390/e17074519; published 2015-06-26. Authors: Yingchao Zou, Lean Yu, Kaijian He.

Entropy, Vol. 17, Pages 4500-4518: Clausius’ Disgregation: A Conceptual Relic that Sheds Light on the Second Law
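The full BEMD/DCC-GARCH pipeline of the PVaR entry above cannot be condensed here. For orientation only, the elementary historical-simulation portfolio VaR, a common baseline against which such parametric models are judged, is just a loss quantile of the weighted return series; the function name and interface below are assumptions for illustration:

```python
import numpy as np

def historical_var(returns, weights, alpha=0.95):
    """Historical-simulation portfolio VaR at confidence level alpha:
    the (1 - alpha) quantile of portfolio returns, reported as a
    positive loss number."""
    port = np.asarray(returns) @ np.asarray(weights)  # per-period portfolio return
    return float(-np.quantile(port, 1.0 - alpha))
```

Model-based approaches such as DCC-GARCH replace the raw empirical quantile with a quantile of a fitted conditional return distribution, which adapts to time-varying volatility.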
http://mdpi.com/1099-4300/17/7/4500
The present work analyzes the cognitive process that led Clausius towards the translation of the Second Law of Thermodynamics into mathematical expressions. We show that Clausius’ original formal expression of the Second Law was achieved by making extensive use of the concept of disgregation, a quantity which has subsequently disappeared from the thermodynamic language. Our analysis demonstrates that disgregation stands as a crucial logical step of this process and sheds light on the comprehension of this fundamental relation. The introduction of entropy—which occurred three years after the first formalization of the Second Law—was aimed at making the Second Law exploitable in practical contexts. The reasons for the disappearance of disgregation, as well as of other “pre-modern” quantities, from the thermodynamics language are discussed.

Entropy 2015, 17(7), 4500-4518; Article; doi: 10.3390/e17074500; published 2015-06-25. Authors: Emilio Pellegrino, Elena Ghibaudi, Luigi Cerruti.

Entropy, Vol. 17, Pages 4485-4499: On Monotone Embedding in Information Geometry
http://mdpi.com/1099-4300/17/7/4485
A paper was published (Harsha and Subrahamanian Moosath, 2014) in which the authors claimed to have discovered an extension to Amari's \(\alpha\)-geometry through a general monotone embedding function. It will be pointed out here that this so-called \((F, G)\)-geometry (which includes \(F\)-geometry as a special case) is identical to Zhang's (2004) extension to the \(\alpha\)-geometry, where the names \(\rho\) and \(\tau\) were used for the pair of monotone embedding functions instead of the \(F\) and \(H\) used in Harsha and Subrahamanian Moosath (2014). Their weighting function \(G\) for the Riemannian metric appears cosmetically due to a rewrite of the score function in log-representation as opposed to \((\rho, \tau)\)-representation in Zhang (2004). It is further shown here that the resulting metric and \(\alpha\)-connections obtained by Zhang (2004) through arbitrary monotone embeddings are a unique extension of the \(\alpha\)-geometric structure. As a special case, Naudts' (2004) \(\phi\)-logarithm embedding (using the so-called \(\log_\phi\) function) is recovered with the identification \(\rho=\phi, \, \tau=\log_\phi\), with the \(\phi\)-exponential \(\exp_\phi\) given by the associated convex function linking the two representations.

Entropy 2015, 17(7), 4485-4499; Article; doi: 10.3390/e17074485; published 2015-06-25. Author: Jun Zhang.

Entropy, Vol. 17, Pages 4454-4484: Modeling Soil Moisture Profiles in Irrigated Fields by the Principle of Maximum Entropy
http://mdpi.com/1099-4300/17/6/4454
Vertical soil moisture profiles based on the principle of maximum entropy (POME) were validated using field and model data and applied to guide an irrigation cycle over a maize field in north central Alabama (USA). The results demonstrate that a simple two-constraint entropy model under the assumption of a uniform initial soil moisture distribution can simulate most soil moisture profiles that occur in the particular soil and climate regime that prevails in the study area. The results of the irrigation simulation demonstrated that the POME model produced a very efficient irrigation strategy with minimal losses (about 1.9% of total applied water). However, the results for finely-textured (silty clay) soils were problematic in that some plant stress did develop due to insufficient applied water. Soil moisture states in these soils fell to around 31% of available moisture content, but only on the last day of the drying side of the irrigation cycle. Overall, the POME approach showed promise as a general strategy to guide irrigation in humid environments, such as the Southeastern United States.

Entropy 2015, 17(6), 4454-4484; Article; doi: 10.3390/e17064454; published 2015-06-23. Authors: Vikalp Mishra, Walter Ellenburg, Osama Al-Hamdan, Josh Bruce, James Cruise.

Entropy, Vol. 17, Pages 4439-4453: Analysis of the Keller–Segel Model with a Fractional Derivative without Singular Kernel
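The two-constraint POME profile in the entry above depends on soil-specific boundary conditions, but the core maximum-entropy step can be sketched in isolation: among all distributions on a given support with a prescribed mean (here standing in for a moisture-content constraint), the entropy maximizer is an exponential tilt of the uniform distribution, with the Lagrange multiplier found by bisection. The support and constraint values below are illustrative, not from the paper:

```python
import numpy as np

def maxent_mean(x, m, lo=-50.0, hi=50.0):
    """Maximum-entropy distribution on support x subject to mean m.
    The solution has the form p_i ∝ exp(lam * x_i); since the mean is
    monotone increasing in lam, bisection recovers the multiplier."""
    x = np.asarray(x, dtype=float)

    def mean_for(lam):
        w = np.exp(lam * (x - x.mean()))  # shift exponent for stability
        p = w / w.sum()
        return p @ x, p

    for _ in range(200):
        mid = 0.5 * (lo + hi)
        mu, p = mean_for(mid)
        if mu < m:
            lo = mid
        else:
            hi = mid
    return p
```

When the target mean equals the support mean, the multiplier is zero and the uniform distribution is returned, matching the paper's uniform-initial-profile assumption.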
http://mdpi.com/1099-4300/17/6/4439
Using some investigations based on information theory, the model proposed by Keller and Segel was extended to the concept of fractional derivative using the derivative with fractional order without singular kernel recently proposed by Caputo and Fabrizio. We present in detail the existence of the coupled-solutions using the fixed-point theorem. A detailed analysis of the uniqueness of the coupled-solutions is also presented. Using an iterative approach, we derive special coupled-solutions of the modified system and we present some numerical simulations to see the effect of the fractional order.

Entropy 2015, 17(6), 4439-4453; Article; doi: 10.3390/e17064439; published 2015-06-23. Authors: Abdon Atangana, Badr Alkahtani.

Entropy, Vol. 17, Pages 4413-4438: Detecting Chronotaxic Systems from Single-Variable Time Series with Separable Amplitude and Phase
http://mdpi.com/1099-4300/17/6/4413
The recent introduction of chronotaxic systems provides the means to describe nonautonomous systems with stable yet time-varying frequencies which are resistant to continuous external perturbations. This approach facilitates realistic characterization of the oscillations observed in living systems, including the observation of transitions in dynamics which were not considered previously. The novelty of this approach necessitated the development of a new set of methods for the inference of the dynamics and interactions present in chronotaxic systems. These methods, based on Bayesian inference and detrended fluctuation analysis, can identify chronotaxicity in phase dynamics extracted from a single time series. Here, they are applied to numerical examples and real experimental electroencephalogram (EEG) data. We also review the current methods, including their assumptions and limitations, elaborate on their implementation, and discuss future perspectives.

Entropy 2015, 17(6), 4413-4438; Article; doi: 10.3390/e17064413; published 2015-06-23. Authors: Gemma Lancaster, Philip Clemson, Yevhen Suprunenko, Tomislav Stankovski, Aneta Stefanovska.

Entropy, Vol. 17, Pages 4364-4412: Intransitivity in Theory and in the Real World
http://mdpi.com/1099-4300/17/6/4364
This work considers reasons for and implications of discarding the assumption of transitivity—the fundamental postulate in the utility theory of von Neumann and Morgenstern, the adiabatic accessibility principle of Caratheodory and most other theories related to preferences or competition. The examples of intransitivity are drawn from different fields, such as law, biology and economics. This work is intended as a common platform that allows us to discuss intransitivity in the context of different disciplines. The basic concepts and terms that are needed for consistent treatment of intransitivity in various applications are presented and analysed in a unified manner. The analysis points out conditions that necessitate the appearance of intransitivity, such as multiplicity of preference criteria and imperfect (i.e., approximate) discrimination of different cases. The present work observes that with increasing presence and strength of intransitivity, thermodynamics gradually fades away, leaving space for more general kinetic considerations. Intransitivity in competitive systems is linked to complex phenomena that would be difficult or impossible to explain on the basis of transitive assumptions. Human preferences that seem irrational from the perspective of the conventional utility theory become perfectly logical in the intransitive and relativistic framework suggested here. The example of competitive simulations for the risk/benefit dilemma demonstrates the significance of intransitivity in cyclic behaviour and abrupt changes in the system. The evolutionary intransitivity parameter, which is introduced in the Appendix, is a general measure of intransitivity, which is particularly useful in evolving competitive systems.

Entropy 2015, 17(6), 4364-4412; Article; doi: 10.3390/e17064364; published 2015-06-19. Author: Alexander Klimenko.

Entropy, Vol. 17, Pages 4323-4363: Information Geometry Formalism for the Spatially Homogeneous Boltzmann Equation
http://mdpi.com/1099-4300/17/6/4323
Information Geometry generalizes to infinite dimension by modeling the tangent space of the relevant manifold of probability densities with exponential Orlicz spaces. We review here several properties of the exponential manifold on a suitable set Ɛ of mutually absolutely continuous densities. We study in particular the fine properties of the Kullback-Leibler divergence in this context. We also show that this setting is well-suited for the study of the spatially homogeneous Boltzmann equation if Ɛ is a set of positive densities with finite relative entropy with respect to the Maxwell density. More precisely, we analyze the Boltzmann operator in the geometric setting from the point of view of its Maxwell weak form as a composition of elementary operations in the exponential manifold, namely tensor product, conditioning, marginalization, and we prove in a geometric way the basic facts, i.e., the H-theorem. We also illustrate the robustness of our method by discussing, besides the Kullback-Leibler divergence, also the Hyvärinen divergence. This requires us to generalize our approach to Orlicz–Sobolev spaces to include derivatives.

Entropy 2015, 17(6), 4323-4363; Article; doi: 10.3390/e17064323; published 2015-06-19. Authors: Bertrand Lods, Giovanni Pistone.

Entropy, Vol. 17, Pages 4293-4322: Concurrence Measurement for the Two-Qubit Optical and Atomic States
http://mdpi.com/1099-4300/17/6/4293
Concurrence provides us an effective approach to quantify entanglement, which is quite important in quantum information processing applications. In this paper, we mainly review some direct concurrence measurement protocols of the two-qubit optical or atomic system. We first introduce the concept of concurrence for a two-qubit system. Second, we explain the approaches of the concurrence measurement in both a linear and a nonlinear optical system. Third, we introduce some protocols for measuring the concurrence of the atomic entanglement system.

Entropy 2015, 17(6), 4293-4322; Review; doi: 10.3390/e17064293; published 2015-06-19. Authors: Lan Zhou, Yu-Bo Sheng.

Entropy, Vol. 17, Pages 4271-4292: A Hybrid Physical and Maximum-Entropy Landslide Susceptibility Model
http://mdpi.com/1099-4300/17/6/4271
The clear need for accurate landslide susceptibility mapping has led to multiple approaches. Physical models are easily interpreted and have high predictive capabilities but rely on spatially explicit and accurate parameterization, which is commonly not possible. Statistical methods can include other factors influencing slope stability such as distance to roads, but rely on good landslide inventories. The maximum entropy (MaxEnt) model has been widely and successfully used in species distribution mapping, because data on absence are often uncertain. Similarly, knowledge about the absence of landslides is often limited due to mapping scale or methodology. In this paper a hybrid approach is described that combines the physically-based landslide susceptibility model “Stability INdex MAPping” (SINMAP) with MaxEnt. This method is tested in a coastal watershed in Pacifica, CA, USA, with a well-documented landslide history including three inventories: 154 scars on 1941 imagery, 142 in 1975, and 253 in 1983. Results indicate that SINMAP alone overestimated susceptibility due to insufficient data on root cohesion. Models were compared using SINMAP stability index (SI) or slope alone, and SI or slope in combination with other environmental factors: curvature, a 50-m trail buffer, vegetation, and geology. For 1941 and 1975, using slope alone was similar to using SI alone; however, in 1983 SI alone yields an area under the receiver operating characteristic curve (AUC) of 0.785, compared with 0.749 for slope alone. In maximum-entropy models created using all environmental factors, the stability index (SI) from SINMAP represented the greatest contributions in all three years (1941: 48.1%; 1975: 35.3%; and 1983: 48%), with AUCs of 0.795, 0.822, and 0.859, respectively; however, using slope instead of SI created similar overall AUC values, likely due to the combined effect with plan curvature indicating focused hydrologic inputs and vegetation identifying the effect of root cohesion. The combined approach, using either stability index or slope, highlights the importance of additional environmental variables in modeling landslide initiation.

Entropy 2015, 17(6), 4271-4292; Article; doi: 10.3390/e17064271; published 2015-06-19. Authors: Jerry Davis, Leonhard Blesius.

Entropy, Vol. 17, Pages 4255-4270: New Hyperbolic Function Solutions for Some Nonlinear Partial Differential Equation Arising in Mathematical Physics
http://mdpi.com/1099-4300/17/6/4255
In this study, we investigate some new analytical solutions to the (1 + 1)-dimensional nonlinear Dispersive Modified Benjamin–Bona–Mahony equation and the (2 + 1)-dimensional cubic Klein–Gordon equation by using the generalized Kudryashov method. After presenting the general properties of the generalized Kudryashov method in Section 2, we apply this method to these problems to obtain some new analytical solutions, such as rational function solutions, exponential function solutions and hyperbolic function solutions, in Section 3. Afterwards, we draw two- and three-dimensional surfaces of the analytical solutions by using Wolfram Mathematica 9.

Entropy 2015, 17(6), 4255-4270; Article; doi: 10.3390/e17064255; published 2015-06-19. Authors: Haci Baskonus, Hasan Bulut.

Entropy, Vol. 17, Pages 4215-4254: Natural Gradient Flow in the Mixture Geometry of a Discrete Exponential Family
http://mdpi.com/1099-4300/17/6/4215
In this paper, we study Amari’s natural gradient flows of real functions defined on the densities belonging to an exponential family on a finite sample space. Our main example is the minimization of the expected value of a real function defined on the sample space. In such a case, the natural gradient flow converges to densities with reduced support that belong to the border of the exponential family. We have suggested in previous works to use the natural gradient evaluated in the mixture geometry. Here, we show that in some cases, the differential equation can be extended to a bigger domain in such a way that the densities at the border of the exponential family are actually internal points in the extended problem. The extension is based on the algebraic concept of an exponential variety. We study in full detail a toy example and obtain positive partial results in the important case of a binary sample space.

Entropy 2015, 17(6), 4215-4254; Article; doi: 10.3390/e17064215; published 2015-06-18. Authors: Luigi Malagò, Giovanni Pistone.

Entropy, Vol. 17, Pages 4202-4214: Sliding-Mode Synchronization Control for Uncertain Fractional-Order Chaotic Systems with Time Delay
http://mdpi.com/1099-4300/17/6/4202
Taking a time-delay fractional financial system as the study object, this paper proposes a single-controller method to eliminate the impact of model uncertainty and external disturbances on the system. The proposed method is based on the stability theory of Lyapunov sliding-mode adaptive control and fractional-order linear systems. The controller can confine the system state to the sliding-mode surface so as to realize synchronization of fractional-order chaotic systems. Analysis results demonstrate that the proposed single integral, sliding-mode control method can control the time-delay fractional power system to realize chaotic synchronization, with strong robustness to external disturbance. The controller is simple in structure. The proposed method was also validated by numerical simulation.

Entropy 2015, 17(6), 4202-4214; Article; doi: 10.3390/e17064202; published 2015-06-18. Authors: Haorui Liu, Juan Yang.

Entropy, Vol. 17, Pages 4173-4201: Contribution to Transfer Entropy Estimation via the k-Nearest-Neighbors Approach
http://mdpi.com/1099-4300/17/6/4173
This paper deals with the estimation of transfer entropy based on the k-nearest neighbors (k-NN) method. To this end, we first investigate the estimation of Shannon entropy involving a rectangular neighboring region, as suggested in the existing literature, and develop two kinds of entropy estimators. Then, applying the widely-used error cancellation approach to these entropy estimators, we propose two novel transfer entropy estimators, implying no extra computational cost compared to existing similar k-NN algorithms. Experimental simulations compare the new estimators with the transfer entropy estimators available in free toolboxes, which correspond to two different extensions of the Kraskov–Stögbauer–Grassberger (KSG) mutual information estimator to transfer entropy estimation, and demonstrate the effectiveness of the new estimators.Entropy2015-06-16176Article10.3390/e17064173417342011099-43002015-06-16doi: 10.3390/e17064173Jie ZhuJean-Jacques BellangerHuazhong ShuRégine Le Bouquin Jeannès<![CDATA[Entropy, Vol. 17, Pages 4155-4172: 2D Anisotropic Wavelet Entropy with an Application to Earthquakes in Chile]]>
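For orientation, the quantity being estimated can be illustrated with a minimal plug-in transfer entropy estimator for discrete sequences (history length one); the paper's k-NN estimators target continuous data and are substantially more involved, so this is only a sketch of the definition, not of the proposed method.

```python
from collections import Counter
from math import log

def transfer_entropy(x, y):
    """Plug-in estimate (in bits) of the transfer entropy from x to y with
    history length 1: TE = sum_{y2,y1,x1} p(y2,y1,x1)*log2(p(y2|y1,x1)/p(y2|y1))."""
    triples = Counter(zip(y[1:], y[:-1], x[:-1]))   # (y_{t+1}, y_t, x_t)
    pairs_yx = Counter(zip(y[:-1], x[:-1]))
    pairs_yy = Counter(zip(y[1:], y[:-1]))
    singles = Counter(y[:-1])
    n = len(y) - 1
    te = 0.0
    for (y2, y1, x1), c in triples.items():
        p_cond_full = c / pairs_yx[(y1, x1)]            # p(y2 | y1, x1)
        p_cond_self = pairs_yy[(y2, y1)] / singles[y1]  # p(y2 | y1)
        te += (c / n) * log(p_cond_full / p_cond_self, 2)
    return te

# y is x delayed by one step, so information flows from x to y:
x = [0, 1, 0, 1, 1, 0, 1, 0, 0, 1] * 50
y = [0] + x[:-1]
print(transfer_entropy(x, y) > transfer_entropy(y, x))   # → True
```

The asymmetry of the estimate (large for x→y, small for y→x) is what makes transfer entropy a directed dependence measure.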
http://mdpi.com/1099-4300/17/6/4155
We propose a wavelet-based approach to measure the Shannon entropy in the context of spatial point patterns. The method uses the fully anisotropic Morlet wavelet to estimate the energy distribution at different directions and scales. The spatial heterogeneity and complexity of spatial point patterns are then analyzed using the multiscale anisotropic wavelet entropy. The efficacy of the approach is shown through a simulation study. Finally, an application to the catalog of earthquake events in Chile is considered.Entropy2015-06-16176Article10.3390/e17064155415541721099-43002015-06-16doi: 10.3390/e17064155Orietta NicolisJorge Mateu<![CDATA[Entropy, Vol. 17, Pages 4134-4154: General and Local: Averaged k-Dependence Bayesian Classifiers]]>
http://mdpi.com/1099-4300/17/6/4134
The inference of a general Bayesian network has been shown to be an NP-hard problem, even for approximate solutions. Although the k-dependence Bayesian (KDB) classifier can be constructed at arbitrary points (values of k) along the attribute dependence spectrum, it cannot identify changes in interdependencies when attributes take different values. Local KDB, which learns in the framework of KDB, is proposed in this study to describe the local dependencies implicated in each test instance. Based on the analysis of functional dependencies, substitution-elimination resolution, a new type of semi-naive Bayesian operation, is proposed to substitute or eliminate generalization to achieve accurate estimation of the conditional probability distribution while reducing computational complexity. The final classifier, the averaged k-dependence Bayesian (AKDB) classifier, averages the outputs of KDB and local KDB. Experimental results on the University of California Irvine (UCI) machine learning repository show that AKDB has significant advantages in zero-one loss and bias relative to naive Bayes (NB), tree-augmented naive Bayes (TAN), averaged one-dependence estimators (AODE) and KDB. Moreover, KDB and local KDB show mutually complementary characteristics with respect to variance.Entropy2015-06-16176Article10.3390/e17064134413441541099-43002015-06-16doi: 10.3390/e17064134Limin WangHaoyu ZhaoMinghui SunYue Ning<![CDATA[Entropy, Vol. 17, Pages 4110-4133: The Non-Equilibrium Statistical Distribution Function for Electrons and Holes in Semiconductor Heterostructures in Steady-State Conditions]]>
http://mdpi.com/1099-4300/17/6/4110
The main goal of this work is to determine a statistical non-equilibrium distribution function for electrons and holes in semiconductor heterostructures in steady-state conditions. Based on the postulates of local equilibrium, as well as on the integral form of the weighted Gyarmati’s variational principle in the force representation, using an alternative method, we have derived general expressions, which have the form of the Fermi–Dirac distribution function with four additional components. The physical interpretation of these components is carried out in this paper. Some numerical results of a non-equilibrium distribution function for electrons in HgCdTe structures are also presented.Entropy2015-06-15176Article10.3390/e17064110411041331099-43002015-06-15doi: 10.3390/e17064110Krzysztof JózwikowskaAlina JózwikowskaMichał Nietopiel<![CDATA[Entropy, Vol. 17, Pages 4083-4109: Losing Information Outside the Horizon]]>
http://mdpi.com/1099-4300/17/6/4083
Suppose we allow a system to fall freely from infinity to a point near (but not beyond) the horizon of a black hole. We note that in a sense the information in the system is already lost to an observer at infinity. Once the system is too close to the horizon it does not have enough energy to send its information back because the information carrying quanta would get redshifted to a point where they get confused with Hawking radiation. If one attempts to turn the infalling system around and bring it back to infinity for observation then it will experience Unruh radiation from the required acceleration. This radiation can excite the bits in the system carrying the information, thus reducing the fidelity of this information. We find the radius where the information is essentially lost in this way, noting that this radius depends on the energy gap (and coupling) of the system. We look for some universality by using the highly degenerate BPS ground states of a quantum gravity theory (string theory) as our information storage device. For such systems one finds that the critical distance to the horizon set by Unruh radiation is the geometric mean of the black hole radius and the radius of the extremal hole with quantum numbers of the BPS bound state. Overall, the results suggest that information in gravity theories should be regarded not as a quantity contained in a system, but in terms of how much of this information is accessible to another observer.Entropy2015-06-12176Article10.3390/e17064083408341091099-43002015-06-12doi: 10.3390/e17064083Samir Mathur<![CDATA[Entropy, Vol. 17, Pages 4064-4082: Passive Decoy-State Quantum Key Distribution with Coherent Light]]>
http://mdpi.com/1099-4300/17/6/4064
Signal state preparation in quantum key distribution schemes can be realized using either an active or a passive source. Passive sources might be valuable in some scenarios; for instance, in those experimental setups operating at high transmission rates, since no externally driven element is required. Typical passive transmitters involve parametric down-conversion. More recently, it has been shown that phase-randomized coherent pulses also allow passive generation of decoy states and Bennett–Brassard 1984 (BB84) polarization signals, though the combination of both setups in a single passive source is cumbersome. In this paper, we present a complete passive transmitter that prepares decoy-state BB84 signals using coherent light. Our method employs sum-frequency generation together with linear optical components and classical photodetectors. In the asymptotic limit of an infinite long experiment, the resulting secret key rate (per pulse) is comparable to the one delivered by an active decoy-state BB84 setup with an infinite number of decoy settings.Entropy2015-06-12176Article10.3390/e17064064406440821099-43002015-06-12doi: 10.3390/e17064064Marcos CurtyMarc JofreValerio PruneriMorgan Mitchell<![CDATA[Entropy, Vol. 17, Pages 4040-4063: A Penalized Likelihood Approach to Parameter Estimation with Integral Reliability Constraints]]>
http://mdpi.com/1099-4300/17/6/4040
Stress-strength reliability problems arise frequently in applied statistics and related fields. Often they involve two independent and possibly small samples of measurements on strength and breakdown pressures (stress). The goal of the researcher is to use the measurements to obtain inference on reliability, which is the probability that stress will exceed strength. This paper addresses the case where reliability is expressed in terms of an integral which has no closed-form solution and where the number of observed values on stress and strength is small. We find that the Lagrange approach to estimating the constrained likelihood, necessary for inference, often performs poorly. We introduce a penalized likelihood method, which appears to work well in all cases examined. We use third-order likelihood methods to partially offset the issue of small samples. The proposed method is applied to draw inferences on reliability in stress-strength problems with independent exponentiated exponential distributions. Simulation studies are carried out to assess the accuracy of the proposed method and to compare it with some standard asymptotic methods.Entropy2015-06-12176Article10.3390/e17064040404040631099-43002015-06-12doi: 10.3390/e17064040Barry SmithSteven WangAugustine WongXiaofeng Zhou<![CDATA[Entropy, Vol. 17, Pages 4028-4039: Generalized Boundary Conditions for the Time-Fractional Advection Diffusion Equation]]>
http://mdpi.com/1099-4300/17/6/4028
The different kinds of boundary conditions for standard and fractional diffusion and advection diffusion equations are analyzed. Near the interface between two phases, a transition region arises whose state differs from that of the contacting media, owing to different material particle interaction conditions. Particular emphasis has been placed on the conditions of nonperfect diffusive contact for the time-fractional advection diffusion equation. When the reduced characteristics of the interfacial region are equal to zero, the conditions of perfect contact are obtained as a particular case.Entropy2015-06-12176Article10.3390/e17064028402840391099-43002015-06-12doi: 10.3390/e17064028Yuriy Povstenko<![CDATA[Entropy, Vol. 17, Pages 3989-4027: Entropy, Information Theory, Information Geometry and Bayesian Inference in Data, Signal and Image Processing and Inverse Problems]]>
http://mdpi.com/1099-4300/17/6/3989
This review article first surveys the main inference tools based on Bayes’ rule, the maximum entropy principle (MEP), information theory, relative entropy and the Kullback–Leibler (KL) divergence, and Fisher information and its corresponding geometries. For each of these tools, the precise context of their use is described. The second part of the paper focuses on the ways these tools have been used in data, signal and image processing and in the inverse problems which arise in different physical sciences and engineering applications. A few examples of the applications are described: entropy in independent component analysis (ICA) and in blind source separation, Fisher information in data model selection, different maximum entropy-based methods in time series spectral estimation and in linear inverse problems and, finally, Bayesian inference for general inverse problems. Some original material concerning approximate Bayesian computation (ABC) and, in particular, the variational Bayesian approximation (VBA) methods is also presented. VBA is used to propose an alternative Bayesian computational tool to the classical Markov chain Monte Carlo (MCMC) methods. We will also see that VBA encompasses joint maximum a posteriori (MAP), as well as the different expectation-maximization (EM) algorithms, as particular cases.Entropy2015-06-12176Article10.3390/e17063989398940271099-43002015-06-12doi: 10.3390/e17063989Ali Mohammad-Djafari<![CDATA[Entropy, Vol. 17, Pages 3963-3988: Most Likely Maximum Entropy for Population Analysis with Region-Censored Data]]>
http://mdpi.com/1099-4300/17/6/3963
The paper proposes a new non-parametric density estimator from region-censored observations with application in the context of population studies, where standard maximum likelihood is affected by over-fitting and non-uniqueness problems. It is a maximum entropy estimator that satisfies a set of constraints imposing a close fit to the empirical distributions associated with the set of censoring regions. The degree of relaxation of the data-fit constraints is chosen, such that the likelihood of the inferred model is maximal. In this manner, the estimator is able to overcome the singularity of the non-parametric maximum likelihood estimator and, at the same time, maintains a good fit to the observations. The behavior of the estimator is studied in a simulation, demonstrating its superior performance with respect to the non-parametric maximum likelihood and the importance of carefully choosing the degree of relaxation of the data-fit constraints. In particular, the predictive performance of the resulting estimator is better, which is important when the population analysis is done in the context of risk assessment. We also apply the estimator to real data in the context of the prevention of hyperbaric decompression sickness, where the available observations are formally equivalent to region-censored versions of the variables of interest, confirming that it is a superior alternative to non-parametric maximum likelihood in realistic situations.Entropy2015-06-11176Article10.3390/e17063963396339881099-43002015-06-11doi: 10.3390/e17063963Youssef BennaniLuc PronzatoMaria Rendas<![CDATA[Entropy, Vol. 17, Pages 3947-3962: Personal Information Leaks with Automatic Login in Mobile Social Network Services]]>
http://mdpi.com/1099-4300/17/6/3947
To log in to a mobile social network service (SNS) server, users must enter their ID and password to get through the authentication process. At that time, if the user sets up the automatic login option on the app, a sort of security token is created on the server based on the user’s ID and password. This security token is called a credential. Because such credentials are convenient for users, they are utilized by most mobile SNS apps. However, the current state of credential management for the majority of Android SNS apps is very weak. This paper demonstrates the possibility of a credential cloning attack. Such attacks occur when an attacker extracts the credential from the victim’s smart device and inserts it into their own smart device. Then, without knowing the victim’s ID and password, the attacker can access the victim’s account. This type of attack gives access to various pieces of personal information without authorization. Thus, in this paper, we analyze the vulnerabilities of the main Android-based SNS apps to credential cloning attacks, and examine the potential leakage of personal information that may result. We then introduce effective countermeasures to resolve these problems.Entropy2015-06-10176Article10.3390/e17063947394739621099-43002015-06-10doi: 10.3390/e17063947Jongwon ChoiHaehyun ChoJeong Yi<![CDATA[Entropy, Vol. 17, Pages 3913-3946: Entropy-Based Privacy against Profiling of User Mobility]]>
http://mdpi.com/1099-4300/17/6/3913
Location-based services (LBSs) flood mobile phones nowadays, but their use poses an evident privacy risk. The locations accompanying the LBS queries can be exploited by the LBS provider to build the user profile of visited locations, which might disclose sensitive data, such as work or home locations. The classic concept of entropy is widely used to evaluate privacy in these scenarios, where the information is represented as a sequence of independent samples of categorized data. However, since the LBS queries might be sent very frequently, location profiles can be improved by adding temporal dependencies, thus becoming mobility profiles, where location samples are not independent anymore and might disclose the user’s mobility patterns. Since the time dimension is factored in, the classic entropy concept falls short of evaluating the real privacy level, which depends also on the time component. Therefore, we propose to extend the entropy-based privacy metric to the use of the entropy rate to evaluate mobility profiles. Then, two perturbative mechanisms are considered to preserve locations and mobility profiles under gradual utility constraints. We further use the proposed privacy metric and compare it to classic ones to evaluate both synthetic and real mobility profiles when the perturbative methods proposed are applied. The results prove the usefulness of the proposed metric for mobility profiles and the need for tailoring the perturbative methods to the features of mobility profiles in order to improve privacy without completely losing utility.Entropy2015-06-10176Article10.3390/e17063913391339461099-43002015-06-10doi: 10.3390/e17063913Alicia Rodriguez-CarrionDavid Rebollo-MonederoJordi FornéCeleste CampoCarlos Garcia-RubioJavier Parra-ArnauSajal Das<![CDATA[Entropy, Vol. 17, Pages 3898-3912: General Hyperplane Prior Distributions Based on Geometric Invariances for Bayesian Multivariate Linear Regression]]>
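The gap between the classic entropy and the entropy rate is easy to demonstrate: a perfectly regular commuting pattern has maximal location entropy but near-zero entropy rate, so only the latter reveals how predictable the mobility profile is. A first-order Markov approximation of the entropy rate is sketched below (an illustrative simplification; the paper works with general mobility profiles).

```python
from collections import Counter
from math import log2

def shannon_entropy(seq):
    """Classic entropy of the location histogram (samples treated as i.i.d.)."""
    n = len(seq)
    return -sum(c / n * log2(c / n) for c in Counter(seq).values())

def markov_entropy_rate(seq):
    """First-order approximation of the entropy rate: the empirical
    conditional entropy H(X_t | X_{t-1}) of consecutive locations."""
    trans = Counter(zip(seq[:-1], seq[1:]))
    marg = Counter(seq[:-1])
    n = len(seq) - 1
    return -sum(c / n * log2(c / marg[a]) for (a, b), c in trans.items())

# A commuter visiting home -> work -> gym in strict rotation:
trace = ["home", "work", "gym"] * 200
print(shannon_entropy(trace))      # ≈ log2(3): locations look uniform
print(markov_entropy_rate(trace))  # ≈ 0: the next location is fully predictable
```

The classic metric would rate this trace as maximally private, while the entropy rate correctly flags it as completely predictable, which is the motivation for the extension proposed in the paper.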
http://mdpi.com/1099-4300/17/6/3898
Based on geometric invariance properties, we derive an explicit prior distribution for the parameters of multivariate linear regression problems in the absence of further prior information. The problem is formulated as a rotationally-invariant distribution of \(L\)-dimensional hyperplanes in \(N\) dimensions, and the associated system of partial differential equations is solved. The derived prior distribution generalizes the already known special cases, e.g., 2D plane in three dimensions.Entropy2015-06-10176Article10.3390/e17063898389839121099-43002015-06-10doi: 10.3390/e17063898Udo von Toussaint<![CDATA[Entropy, Vol. 17, Pages 3877-3897: A Colour Image Encryption Scheme Using Permutation-Substitution Based on Chaos]]>
http://mdpi.com/1099-4300/17/6/3877
An encryption scheme for colour images using a spatiotemporal chaotic system is proposed. Initially, we use the R, G and B components of a colour plain-image to form a matrix. Then the matrix is permutated by using zigzag path scrambling. The resultant matrix is then passed through a substitution process. Finally, the ciphered colour image is obtained from the confused matrix. Theoretical analysis and experimental results indicate that the proposed scheme is both secure and practical, making it suitable for encrypting colour images of any size.Entropy2015-06-09176Article10.3390/e17063877387738971099-43002015-06-09doi: 10.3390/e17063877Xing-Yuan WangYing-Qian ZhangXue-Mei Bao<![CDATA[Entropy, Vol. 17, Pages 3857-3876: Radial Wavelet Neural Network with a Novel Self-Creating Disk-Cell-Splitting Algorithm for License Plate Character Recognition]]>
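As an illustration of the permutation stage, a standard anti-diagonal zigzag traversal (the kind used in JPEG coding) is sketched below; the exact scrambling path used in the paper may differ in detail.

```python
def zigzag_indices(rows, cols):
    """Index order of an anti-diagonal zigzag path over a rows x cols matrix,
    alternating direction on each anti-diagonal."""
    order = []
    for s in range(rows + cols - 1):
        diag = [(i, s - i) for i in range(rows) if 0 <= s - i < cols]
        order.extend(reversed(diag) if s % 2 == 0 else diag)
    return order

def zigzag_permute(matrix):
    """Flatten the matrix along the zigzag path (the scrambling permutation)."""
    return [matrix[i][j] for i, j in zigzag_indices(len(matrix), len(matrix[0]))]

m = [[1, 2, 3],
     [4, 5, 6],
     [7, 8, 9]]
print(zigzag_permute(m))   # → [1, 2, 4, 7, 5, 3, 6, 8, 9]
```

Because the traversal is a bijection on index pairs, the permutation is exactly invertible, which is what allows the receiver to undo the scrambling stage during decryption.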
http://mdpi.com/1099-4300/17/6/3857
In this paper, a novel self-creating disk-cell-splitting (SCDCS) algorithm is proposed for training the radial wavelet neural network (RWNN) model. Combined with the least-squares (LS) method, which determines the linear weight coefficients, SCDCS can create neurons adaptively on a disk according to the distribution of the input data and the learning goals. As a result, a disk map of the input data is produced, and an RWNN model with proper architecture and parameters can be determined for the recognition task. The proposed SCDCS-LS-based RWNN model is employed for the recognition of license plate characters. Compared to the classical radial-basis-function (RBF) network with K-means clustering and LS, the proposed model achieves better recognition performance even with fewer neurons.Entropy2015-06-09176Article10.3390/e17063857385738761099-43002015-06-09doi: 10.3390/e17063857Rong ChengYanping BaiHongping HuXiuhui Tan<![CDATA[Entropy, Vol. 17, Pages 3838-3856: The Fisher Information as a Neural Guiding Principle for Independent Component Analysis]]>
http://mdpi.com/1099-4300/17/6/3838
The Fisher information constitutes a natural measure for the sensitivity of a probability distribution with respect to a set of parameters. An implementation of the stationarity principle for synaptic learning in terms of the Fisher information results in a Hebbian self-limiting learning rule for synaptic plasticity. In the present work, we study the dependence of the solutions to this rule in terms of the moments of the input probability distribution and find a preference for non-Gaussian directions, making it a suitable candidate for independent component analysis (ICA). We confirm in a numerical experiment that a neuron trained under these rules is able to find the independent components in the non-linear bars problem. The specific form of the plasticity rule depends on the transfer function used, becoming a simple cubic polynomial of the membrane potential for the case of the rescaled error function. The cubic learning rule is also an excellent approximation for other transfer functions, as the standard sigmoidal, and can be used to show analytically that the proposed plasticity rules are selective for directions in the space of presynaptic neural activities characterized by a negative excess kurtosis.Entropy2015-06-09176Article10.3390/e17063838383838561099-43002015-06-09doi: 10.3390/e17063838Rodrigo EchevesteSamuel EckmannClaudius Gros<![CDATA[Entropy, Vol. 17, Pages 3806-3837: On the Detection of Fake Certificates via Attribute Correlation]]>
http://mdpi.com/1099-4300/17/6/3806
Transport Layer Security (TLS) and its predecessor, SSL, are important cryptographic protocol suites on the Internet. They both implement public key certificates and rely on a group of trusted certificate authorities (i.e., CAs) for peer authentication. Unfortunately, recent research reveals that, if any one of the pre-trusted CAs is compromised, fake certificates can be issued to intercept the corresponding SSL/TLS connections. This security vulnerability has catastrophic impacts on SSL/TLS-based HTTPS, which is the underlying protocol providing secure web services for e-commerce, e-mail, etc. To address this problem, we design an attribute dependency-based detection mechanism, called SSLight. SSLight can expose fake certificates by checking whether the certificates contain attribute dependencies that rarely occur in legitimate samples. We conduct extensive experiments to evaluate SSLight and confirm that it can detect the vast majority of fake certificates issued by any trusted CAs that are compromised. As a real-world example, we also implement SSLight as a Firefox add-on and examine its capability to expose existing fake certificates from DigiNotar and Comodo, two incidents that had a worldwide impact.Entropy2015-06-08176Article10.3390/e17063806380638371099-43002015-06-08doi: 10.3390/e17063806Xiaojing GuXingsheng Gu<![CDATA[Entropy, Vol. 17, Pages 3787-3805: General Approach for Composite Thermoelectric Systems with Thermal Coupling: The Case of a Dual Thermoelectric Cooler]]>
http://mdpi.com/1099-4300/17/6/3787
In this work, we show a general approach for inhomogeneous composite thermoelectric systems, and as an illustrative case, we consider a dual thermoelectric cooler. This composite cooler consists of two thermoelectric modules (TEMs) connected thermally in parallel and electrically in series. Each TEM has different thermoelectric (TE) properties, namely thermal conductance, electrical resistance and the Seebeck coefficient. The system is coupled by thermal conductances to heat reservoirs. The proposed approach consists in deriving the dimensionless thermoelectric properties for the whole system. Thus, we obtain an equivalent figure of merit whose impact and meaning are discussed. We make use of dimensionless equations to study the impact of the thermal conductance matching on the cooling capacity and the coefficient of performance of the system. The equivalent thermoelectric properties derived with our formalism include the external conductances and all intrinsic thermoelectric properties of each component of the system. Our approach permits us to change the thermoelectric parameters of the TEMs and the working conditions of the composite system. Furthermore, our analysis shows the effect of the number of thermocouples on the system. These considerations are very useful for the design of thermoelectric composite systems. We reproduce the qualitative behavior of a commercial composite TEM connected electrically in series.Entropy2015-06-08176Article10.3390/e17063787378738051099-43002015-06-08doi: 10.3390/e17063787Cuautli Flores-NiñoMiguel Olivares-RoblesIgor Loboda<![CDATA[Entropy, Vol. 17, Pages 3766-3786: Learning a Flexible K-Dependence Bayesian Classifier from the Chain Rule of Joint Probability Distribution]]>
http://mdpi.com/1099-4300/17/6/3766
As one of the most common types of graphical models, the Bayesian classifier has become an extremely popular approach to dealing with uncertainty and complexity. The scoring functions once proposed and widely used for a Bayesian network are not appropriate for a Bayesian classifier, in which class variable C is considered as a distinguished one. In this paper, we aim to clarify the working mechanism of Bayesian classifiers from the perspective of the chain rule of joint probability distribution. By establishing the mapping relationship between conditional probability distribution and mutual information, a new scoring function, Sum_MI, is derived and applied to evaluate the rationality of the Bayesian classifiers. To achieve global optimization and high dependence representation, the proposed learning algorithm, the flexible K-dependence Bayesian (FKDB) classifier, applies greedy search to extract more information from the K-dependence network structure. Meanwhile, during the learning procedure, the optimal attribute order is determined dynamically, rather than rigidly. In the experimental study, functional dependency analysis is used to improve model interpretability when the structure complexity is restricted.Entropy2015-06-08176Article10.3390/e17063766376637861099-43002015-06-08doi: 10.3390/e17063766Limin WangHaoyu Zhao<![CDATA[Entropy, Vol. 17, Pages 3752-3765: Delayed-Compensation Algorithm for Second-Order Leader-Following Consensus Seeking under Communication Delay]]>
http://mdpi.com/1099-4300/17/6/3752
In this paper, a leader-following consensus algorithm, accompanied by compensation terms based on neighboring agents’ delayed states, is constructed for second-order multi-agent systems with communication delay. Using frequency-domain analysis, delay-independent and delay-dependent consensus conditions are obtained under which the second-order agents converge asymptotically to the dynamic leader’s states. A simulation illustrates the correctness of the results.Entropy2015-06-08176Article10.3390/e17063752375237651099-43002015-06-08doi: 10.3390/e17063752Cheng-Lin LiuFei Liu<![CDATA[Entropy, Vol. 17, Pages 3738-3751: Maximum Entropy Rate Reconstruction of Markov Dynamics]]>
http://mdpi.com/1099-4300/17/6/3738
We develop ideas proposed by Van der Straeten to extend maximum entropy principles to Markov chains. We focus in particular on the convergence of such estimates in order to explain how our approach makes possible the estimation of transition probabilities when only short samples are available, which opens the way to applications to non-stationary processes. The current work complements an earlier communication by providing numerical details, as well as a full derivation of the multi-constraint two-state and three-state maximum entropy transition matrices.Entropy2015-06-08176Article10.3390/e17063738373837511099-43002015-06-08doi: 10.3390/e17063738Gregor ChliamovitchAlexandre DupuisBastien Chopard<![CDATA[Entropy, Vol. 17, Pages 3724-3737: Tail Risk Constraints and Maximum Entropy]]>
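To fix ideas, the two-state case can be sketched numerically: among all transition matrices compatible with a given constraint (here, a prescribed stationary distribution, an illustrative choice on our part), search for the chain maximizing the entropy rate. The paper derives the multi-constraint two-state and three-state solutions analytically; this grid search is only a toy counterpart.

```python
from math import log2

def entropy_rate(a, b):
    """Entropy rate of a two-state Markov chain with P(0->1)=a, P(1->0)=b."""
    def H(p):
        return 0.0 if p in (0.0, 1.0) else -p * log2(p) - (1 - p) * log2(1 - p)
    pi0 = b / (a + b)                     # stationary probability of state 0
    return pi0 * H(a) + (1 - pi0) * H(b)

def max_entropy_chain(pi0, steps=2000):
    """Grid-search the chain with stationary distribution (pi0, 1-pi0)
    maximizing the entropy rate; stationarity forces pi0*a = (1-pi0)*b."""
    best_h, best_a, best_b = -1.0, None, None
    for i in range(1, steps):
        a = i / steps
        b = pi0 * a / (1 - pi0)
        if b > 1:
            break
        h = entropy_rate(a, b)
        if h > best_h:
            best_h, best_a, best_b = h, a, b
    return best_h, best_a, best_b

h, a, b = max_entropy_chain(0.5)
print(a, b, round(h, 6))   # → 0.5 0.5 1.0: the symmetric unbiased chain
```

With a uniform stationary distribution, the maximum-entropy-rate reconstruction is the memoryless fair coin, as expected: absent further constraints, the principle adds no spurious temporal structure.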
http://mdpi.com/1099-4300/17/6/3724
Portfolio selection in the financial literature has essentially been analyzed under two central assumptions: full knowledge of the joint probability distribution of the returns of the securities that will comprise the target portfolio; and investors’ preferences are expressed through a utility function. In the real world, operators build portfolios under risk constraints which are expressed both by their clients and regulators and which bear on the maximal loss that may be generated over a given time period at a given confidence level (the so-called Value at Risk of the position). Interestingly, in the finance literature, a serious discussion of how much or little is known from a probabilistic standpoint about the multi-dimensional density of the assets’ returns seems to be of limited relevance. Our approach in contrast is to highlight these issues and then adopt throughout a framework of entropy maximization to represent the real world ignorance of the “true” probability distributions, both univariate and multivariate, of traded securities’ returns. In this setting, we identify the optimal portfolio under a number of downside risk constraints. Two interesting results are exhibited: (i) the left-tail constraints are sufficiently powerful to override all other considerations in the conventional theory; (ii) the “barbell portfolio” (maximal certainty/low risk in one set of holdings, maximal uncertainty in another), which is quite familiar to traders, naturally emerges in our construction.Entropy2015-06-05176Article10.3390/e17063724372437371099-43002015-06-05doi: 10.3390/e17063724Donald GemanHélyette GemanNassim Taleb<![CDATA[Entropy, Vol. 17, Pages 3710-3723: Entropy of Weighted Graphs with Randić Weights]]>
http://mdpi.com/1099-4300/17/6/3710
Shannon entropies for networks have been widely introduced. However, entropies for weighted graphs have been little investigated. Inspired by the work due to Eagle et al., we introduce the concept of graph entropy for special weighted graphs. Furthermore, we prove extremal properties of classes of weighted graphs by using elementary methods, considering in particular the weight due to Bollobás and Erdős, which is also called the Randić weight. As a result, we derive statements on dendrimers that have proven useful for applications. Finally, some open problems are presented.Entropy2015-06-05176Article10.3390/e17063710371037231099-43002015-06-05doi: 10.3390/e17063710Zengqiang ChenMatthias DehmerFrank Emmert-StreibYongtang Shi<![CDATA[Entropy, Vol. 17, Pages 3692-3709: The Switching Generator: New Clock-Controlled Generator with Resistance against the Algebraic and Side Channel Attacks]]>
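One plausible instantiation of such a weighted-graph entropy, assuming the probability of each edge is taken proportional to its Randić weight \(w(u,v) = (d_u d_v)^{-1/2}\) (the exact entropy functional used in the paper may differ), can be sketched as:

```python
from math import log2, sqrt

def randic_weight_entropy(edges):
    """Shannon entropy of the distribution induced on the edge set by the
    Randić weights w(u,v) = 1/sqrt(deg(u)*deg(v)), normalized to sum to 1.
    This normalization is an illustrative assumption, not the paper's
    definitive functional."""
    deg = {}
    for u, v in edges:
        deg[u] = deg.get(u, 0) + 1
        deg[v] = deg.get(v, 0) + 1
    w = [1.0 / sqrt(deg[u] * deg[v]) for u, v in edges]
    total = sum(w)
    return -sum(wi / total * log2(wi / total) for wi in w)

# A star K_{1,3} versus a path P_4 (both have 3 edges):
star = [(0, 1), (0, 2), (0, 3)]
path = [(0, 1), (1, 2), (2, 3)]
print(randic_weight_entropy(star), randic_weight_entropy(path))
```

On the star, all Randić weights coincide, so the induced distribution is uniform and the entropy equals log2(3); the path's unequal weights yield a strictly smaller value, illustrating the kind of extremal comparison the paper carries out for graph classes such as dendrimers.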
http://mdpi.com/1099-4300/17/6/3692
Since Advanced Encryption Standard (AES) in stream modes, such as counter (CTR), output feedback (OFB) and cipher feedback (CFB), can meet most industrial requirements, the range of applications for dedicated stream ciphers is decreasing. There are many attack results using algebraic properties and side channel information against stream ciphers for hardware applications. Al-Hinai et al. presented an algebraic attack approach to a family of irregularly clock-controlled linear feedback shift register systems: the stop and go generator, self-decimated generator and alternating step generator. Other clock-controlled systems, such as shrinking and cascade generators, are indeed vulnerable against side channel attacks. To overcome these threats, new clock-controlled systems were presented, e.g., the generalized alternating step generator, cascade jump-controlled generator and mutual clock-controlled generator. However, the algebraic attack could be applied directly on these new systems. In this paper, we propose a new clock-controlled generator: the switching generator, which has resistance to algebraic and side channel attacks. This generator also preserves both security properties and the efficiency of existing clock-controlled generators.Entropy2015-06-04176Article10.3390/e17063692369237091099-43002015-06-04doi: 10.3390/e17063692Jun ChoiDukjae MoonSeokhie HongJaechul Sung<![CDATA[Entropy, Vol. 17, Pages 3679-3691: Density Regression Based on Proportional Hazards Family]]>
http://mdpi.com/1099-4300/17/6/3679
This paper develops a class of density regression models based on the proportional hazards family, namely, the Gamma transformation proportional hazard (Gt-PH) model. Exact inference for the regression parameters and hazard ratio is derived. These estimators enjoy good properties, such as unbiasedness, which may not be shared by other inference methods such as maximum likelihood estimation (MLE). Generalised confidence intervals and hypothesis tests for the regression parameters are also provided. The method itself is easy to implement in practice. The regression method is also extended to Lasso-based variable selection.Entropy2015-06-04176Article10.3390/e17063679367936911099-43002015-06-04doi: 10.3390/e17063679Wei DangKeming Yu<![CDATA[Entropy, Vol. 17, Pages 3656-3678: A Robust Bayesian Approach to an Optimal Replacement Policy for Gas Pipelines]]>
http://mdpi.com/1099-4300/17/6/3656
In this paper, we address Bayesian sensitivity issues when integrating experts’ judgments with available historical data in a case study about strategies for the preventive maintenance of low-pressure cast iron pipelines in an urban gas distribution network. We are interested in replacement priorities, as determined by the failure rates of pipelines deployed under different conditions. We relax the assumptions, made in previous papers, about the prior distributions on the failure rates and study changes in replacement priorities under different choices of generalized moment-constrained classes of priors. We focus on the set of non-dominated actions, and among them, we propose the least sensitive action as the optimal choice to rank different classes of pipelines, providing a sound approach to the sensitivity problem. Moreover, we are also interested in determining which classes have a failure rate exceeding a given acceptable value, considered as the threshold determining no need for replacement. Graphical tools are introduced to help decision-makers determine if pipelines are to be replaced and the corresponding priorities.Entropy2015-06-03176Article10.3390/e17063656365636781099-43002015-06-03doi: 10.3390/e17063656José Arias-NicolásJacinto MartínFabrizio RuggeriAlfonso Suárez-Llorens<![CDATA[Entropy, Vol. 17, Pages 3645-3655: Entropy Production of Stars]]>
http://mdpi.com/1099-4300/17/6/3645
The entropy production (inside the volume bounded by a photosphere) of main-sequence stars, subgiants, giants, and supergiants is calculated based on B–V photometry data. A non-linear inverse relationship of thermodynamic fluxes and forces as well as an almost constant specific (per volume) entropy production of main-sequence stars (for 95% of stars, this quantity lies within 0.5 to 2.2 of the corresponding solar magnitude) is found. The obtained results are discussed from the perspective of known extreme principles related to entropy production.Entropy2015-06-02176Article10.3390/e17063645364536551099-43002015-06-02doi: 10.3390/e17063645Leonid MartyushevSergey Zubarev<![CDATA[Entropy, Vol. 17, Pages 3631-3644: Consensus Analysis of Heterogeneous Multi-Agent Systems with Time-Varying Delay]]>
http://mdpi.com/1099-4300/17/6/3631
This paper studies consensus and \(H_\infty\) consensus problems for heterogeneous multi-agent systems composed of first-order and second-order integrator agents. We first rewrite the multi-agent systems into the corresponding reduced-order systems based on the graph theory and the reduced-order transformation. Then, the linear matrix inequality approach is used to consider the consensus of heterogeneous multi-agent systems with time-varying delays in directed networks. As a result, sufficient conditions for consensus and \(H_\infty\) consensus of heterogeneous multi-agent systems in terms of linear matrix inequalities are established in the cases of fixed and switching topologies. Finally, numerical simulations are given to illustrate the effectiveness of the theoretical results.Entropy2015-06-02176Article10.3390/e17063631363136441099-43002015-06-02doi: 10.3390/e17063631Beibei WangYuangong Sun<![CDATA[Entropy, Vol. 17, Pages 3621-3630: Probabilistic Teleportation via Quantum Channel with Partial Information]]>
http://mdpi.com/1099-4300/17/6/3621
Two novel schemes are proposed to teleport an unknown two-level quantum state probabilistically when the sender and the receiver each have only partial information about the quantum channel. This is distinct from previous schemes for probabilistic teleportation, in which either the sender or the receiver has complete information about the quantum channel. Theoretical analysis proves that these schemes are straightforward, efficient and cost-saving. The concrete realization procedures of our schemes are presented in detail, and the results show that our proposals could extend the application range of probabilistic teleportation.Entropy2015-06-02176Article10.3390/e17063621362136301099-43002015-06-02doi: 10.3390/e17063621Desheng LiuZhiping HuangXiaojun Guo<![CDATA[Entropy, Vol. 17, Pages 3595-3620: Entropies from Markov Models as Complexity Measures of Embedded Attractors]]>
http://mdpi.com/1099-4300/17/6/3595
This paper addresses the problem of measuring complexity from embedded attractors as a way to characterize changes in the dynamical behavior of different types of systems with a quasi-periodic behavior by observing their outputs. With the aim of measuring the stability of the trajectories of the attractor along time, this paper proposes three new estimations of entropy that are derived from a Markov model of the embedded attractor. The proposed estimators are compared with traditional nonparametric entropy measures, such as approximate entropy, sample entropy and fuzzy entropy, which only take into account the spatial dimension of the trajectory. The method proposes the use of an unsupervised algorithm to find the principal curve, which is considered as the “profile trajectory”, that will serve to adjust the Markov model. The new entropy measures are evaluated using three synthetic experiments and three datasets of physiological signals. In terms of consistency and discrimination capabilities, the results show that the proposed measures perform better than the other entropy measures used for comparison purposes.Entropy2015-06-02176Article10.3390/e17063595359536201099-43002015-06-02doi: 10.3390/e17063595Julián Arias-LondoñoJuan Godino-Llorente<![CDATA[Entropy, Vol. 17, Pages 3581-3594: Brownian Motion in Minkowski Space]]>
http://mdpi.com/1099-4300/17/6/3581
We construct a model of Brownian motion in Minkowski space. There are two aspects of the problem. The first is to define a sequence of stopping times associated with the Brownian “kicks” or impulses. The second is to define the dynamics of the particle along geodesics in between the Brownian kicks. When these two aspects are taken together, the Central Limit Theorem (CLT) leads to temperature dependent four dimensional distributions defined on Minkowski space, for distances and 4-velocities. In particular, our processes are characterized by two independent time variables defined with respect to the laboratory frame: a discrete one corresponding to the stopping times when the impulses take place and a continuous one corresponding to the geodesic motion in-between impulses. The subsequent distributions are solutions of a (covariant) pseudo-diffusion equation which involves derivatives with respect to both time variables, rather than solutions of the telegraph equation which has a single time variable. This approach simplifies some of the known problems in this context.Entropy2015-06-01176Article10.3390/e17063581358135941099-43002015-06-01doi: 10.3390/e17063581Paul O'HaraLamberto Rondoni<![CDATA[Entropy, Vol. 17, Pages 3552-3580: Content Based Image Retrieval Using Embedded Neural Networks with Bandletized Regions]]>
http://mdpi.com/1099-4300/17/6/3552
One of the major requirements of content based image retrieval (CBIR) systems is to ensure meaningful image retrieval against query images. The performance of these systems is severely degraded by the inclusion, during the image representation phase, of image content which does not contain the objects of interest. Segmentation of the images is considered a solution, but no technique can guarantee robust object extraction. Another limitation is that most image segmentation techniques are slow and their results are not reliable. To overcome these problems, a bandelet transform based image representation technique is presented in this paper, which reliably returns the information about the major objects found in an image. For image retrieval purposes, artificial neural networks (ANN) are applied, and the performance of the system is evaluated on three standard data sets used in the domain of CBIR.Entropy2015-05-29176Article10.3390/e17063552355235801099-43002015-05-29doi: 10.3390/e17063552Rehan AshrafKhalid BashirAun IrtazaMuhammad Mahmood<![CDATA[Entropy, Vol. 17, Pages 3518-3551: Entropy vs. Energy Waveform Processing: A Comparison Based on the Heat Equation]]>
http://mdpi.com/1099-4300/17/6/3518
Virtually all modern imaging devices collect electromagnetic or acoustic waves and use the energy carried by these waves to determine pixel values to create what is basically an “energy” picture. However, waves also carry “information”, as quantified by some form of entropy, and this may also be used to produce an “information” image. Numerous published studies have demonstrated the advantages of entropy, or “information imaging”, over conventional methods. The most sensitive information measure appears to be the joint entropy of the collected wave and a reference signal. The sensitivity of repeated experimental observations of a slowly-changing quantity may be defined as the mean variation (i.e., observed change) divided by mean variance (i.e., noise). Wiener integration permits computation of the required mean values and variances as solutions to the heat equation, permitting estimation of their relative magnitudes. There always exists a reference, such that joint entropy has larger variation and smaller variance than the corresponding quantities for signal energy, matching observations of several studies. Moreover, a general prescription for finding an “optimal” reference for the joint entropy emerges, which also has been validated in several studies.Entropy2015-05-25176Article10.3390/e17063518351835511099-43002015-05-25doi: 10.3390/e17063518Michael HughesJohn McCarthyPaul BruillardJon MarshSamuel Wickline<![CDATA[Entropy, Vol. 17, Pages 3501-3517: Information Decomposition and Synergy]]>
http://mdpi.com/1099-4300/17/5/3501
Recently, a series of papers addressed the problem of decomposing the information of two random variables into shared information, unique information and synergistic information. Several measures were proposed, although still no consensus has been reached. Here, we compare these proposals with an older approach to define synergistic information based on the projections on exponential families containing only up to k-th order interactions. We show that these measures are not compatible with a decomposition into unique, shared and synergistic information if one requires that all terms are always non-negative (local positivity). We illustrate the difference between the two measures for multivariate Gaussians.Entropy2015-05-22175Article10.3390/e17053501350135171099-43002015-05-22doi: 10.3390/e17053501Eckehard OlbrichNils BertschingerJohannes Rauh<![CDATA[Entropy, Vol. 17, Pages 3479-3500: Operational Reliability Assessment of Compressor Gearboxes with Normalized Lifting Wavelet Entropy from Condition Monitoring Information]]>
http://mdpi.com/1099-4300/17/5/3479
Classical reliability assessment methods have predominantly focused on probability and statistical theories, which are insufficient in assessing the operational reliability of individual mechanical equipment with time-varying characteristics. A new approach to assess machinery operational reliability with normalized lifting wavelet entropy from condition monitoring information is proposed. The machinery vibration signals with time-varying operational characteristics are first decomposed and reconstructed by means of a lifting wavelet packet transform. The relative energy of every reconstructed signal is computed as its percentage of the whole signal energy. Moreover, a normalized lifting wavelet entropy is defined from the relative energies to reveal the machinery operational uncertainty. Finally, the operational reliability degree is defined as a quantitative value in the range [0, 1] obtained from the normalized lifting wavelet entropy. The proposed method is applied in the operational reliability assessment of the gearbox in an oxy-generator compressor to validate its effectiveness.Entropy2015-05-20175Article10.3390/e17053479347935001099-43002015-05-20doi: 10.3390/e17053479Xiaoli ZhangBaojian WangHongrui CaoBing LiXuefeng Chen<![CDATA[Entropy, Vol. 17, Pages 3461-3478: Nonparametric Denoising Methods Based on Contourlet Transform with Sharp Frequency Localization: Application to Low Exposure Time Electron Microscopy Images]]>
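The entropy-from-relative-energies idea in the abstract above can be sketched numerically. The following is a minimal illustration, not the authors' implementation: it substitutes a plain spectral band split for the lifting wavelet packet transform, and the mapping from normalized entropy to a reliability degree in [0, 1] is an assumed simplification.

```python
import numpy as np

def normalized_band_entropy(signal, n_bands=8):
    """Shannon entropy of relative band energies, normalized to [0, 1].
    Stand-in for the paper's lifting wavelet packet decomposition:
    the spectrum is split into equal-width frequency bands."""
    spec = np.abs(np.fft.rfft(signal)) ** 2
    bands = np.array_split(spec, n_bands)
    energy = np.array([b.sum() for b in bands])
    p = energy / energy.sum()            # relative energy of each band
    p = p[p > 0]                         # avoid log(0)
    h = -(p * np.log2(p)).sum()          # Shannon entropy of the relative energies
    return h / np.log2(n_bands)          # normalize by the maximum entropy

def reliability_degree(signal, n_bands=8):
    # Assumed mapping: higher spectral disorder -> lower operational reliability
    return 1.0 - normalized_band_entropy(signal, n_bands)
```

A nearly periodic vibration signal concentrates its energy in few bands (low entropy, reliability near 1), while broadband noise spreads energy evenly (entropy near 1, reliability near 0).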
http://mdpi.com/1099-4300/17/5/3461
Image denoising is a very important step in cryo-transmission electron microscopy (cryo-TEM) and energy-filtering TEM before 3D tomography reconstruction, as it addresses the high noise in these images, which leads to a loss of the information they contain. High noise levels contribute in particular to difficulties in the alignment required for 3D tomography reconstruction. This paper investigates the denoising of TEM images that are acquired with a very low exposure time, with the primary objectives of enhancing the quality of these low-exposure-time TEM images and improving the alignment process. We propose denoising structures that combine multiple noisy copies of the TEM images. The structures are based on Bayesian estimation in transform domains instead of the spatial domain, building novel feature-preserving image denoising structures in the wavelet domain, the contourlet transform domain and the contourlet transform domain with sharp frequency localization. Numerical image denoising experiments demonstrate the performance of the Bayesian approach in the contourlet transform domain in terms of improving the signal-to-noise ratio (SNR) and recovering fine details that may be hidden in the data. The SNR and the visual quality of the denoised images are considerably enhanced using these denoising structures that combine multiple noisy copies. The proposed methods also enable a reduction in the exposure time.Entropy2015-05-20175Article10.3390/e17053461346134781099-43002015-05-20doi: 10.3390/e17053461Soumia AhmedZoubeida MessaliAbdeldjalil OuahabiSylvain TrepoutCedric MessaoudiSergio Marco<![CDATA[Entropy, Vol. 17, Pages 3458-3460: Maximum Entropy Applied to Inductive Logic and Reasoning]]>
http://mdpi.com/1099-4300/17/5/3458
This editorial explains the scope of the special issue and provides a thematic introduction to the contributed papers.Entropy2015-05-18175Editorial10.3390/e17053458345834601099-43002015-05-18doi: 10.3390/e17053458Jürgen LandesJon Williamson<![CDATA[Entropy, Vol. 17, Pages 3438-3457: Heat Transfer and Pressure Drop Characteristics in Straight Microchannel of Printed Circuit Heat Exchangers]]>
http://mdpi.com/1099-4300/17/5/3438
Performance tests were carried out for a microchannel printed circuit heat exchanger (PCHE), which was fabricated with micro photo-etching and diffusion bonding technologies. The microchannel PCHE was tested for Reynolds numbers in the range of 100‒850, varying the hot-side inlet temperature between 40 °C and 50 °C while keeping the cold-side temperature fixed at 20 °C. It was found that the average heat transfer rate and heat transfer performance of the countercurrent configuration were 6.8% and 10%‒15% higher, respectively, than those of the parallel flow. The average heat transfer rate, heat transfer performance and pressure drop increased with increasing Reynolds number in all experiments. Increasing the inlet temperature did not affect the heat transfer performance, while it slightly decreased the pressure drop in the experimental range considered. Empirical correlations have been developed for the heat transfer coefficient and pressure drop factor as functions of the Reynolds number.Entropy2015-05-18175Article10.3390/e17053438343834571099-43002015-05-18doi: 10.3390/e17053438Jang-Won SeoYoon-Ho KimDongseon KimYoung-Don ChoiKyu-Jung Lee<![CDATA[Entropy, Vol. 17, Pages 3419-3437: Minimum Error Entropy Algorithms with Sparsity Penalty Constraints]]>
http://mdpi.com/1099-4300/17/5/3419
Recently, sparse adaptive learning algorithms have been developed to exploit system sparsity as well as to mitigate various noise disturbances in many applications. In particular, in sparse channel estimation, the parameter vector with sparsity characteristic can be well estimated from noisy measurements through a sparse adaptive filter. In previous studies, most works use the mean square error (MSE) based cost to develop sparse filters, which is rational under the assumption of Gaussian distributions. However, Gaussian assumption does not always hold in real-world environments. To address this issue, we incorporate in this work an l1-norm or a reweighted l1-norm into the minimum error entropy (MEE) criterion to develop new sparse adaptive filters, which may perform much better than the MSE based methods, especially in heavy-tailed non-Gaussian situations, since the error entropy can capture higher-order statistics of the errors. In addition, a new approximator of l0-norm, based on the correntropy induced metric (CIM), is also used as a sparsity penalty term (SPT). We analyze the mean square convergence of the proposed new sparse adaptive filters. An energy conservation relation is derived and a sufficient condition is obtained, which ensures the mean square convergence. Simulation results confirm the superior performance of the new algorithms.Entropy2015-05-18175Article10.3390/e17053419341934371099-43002015-05-18doi: 10.3390/e17053419Zongze WuSiyuan PengWentao MaBadong ChenJose Principe<![CDATA[Entropy, Vol. 17, Pages 3400-3418: Entropy Approximation in Lossy Source Coding Problem]]>
http://mdpi.com/1099-4300/17/5/3400
In this paper, we investigate a lossy source coding problem, where an upper limit on the permitted distortion is defined for every dataset element. It can be seen as an alternative approach to rate distortion theory, where a bound on the allowed average error is specified. In order to find the entropy, which gives a statistical length of source code compatible with a fixed distortion bound, a corresponding optimization problem has to be solved. First, we show how to simplify this general optimization by reducing the number of coding partitions which are irrelevant for the entropy calculation. In our main result, we present a fast greedy algorithm, feasible for practical implementation, which allows one to approximate the entropy within an additive error term of log2 e. The proof is based on the minimum entropy set cover problem, for which a similar bound was obtained.Entropy2015-05-18175Article10.3390/e17053400340034181099-43002015-05-18doi: 10.3390/e17053400Marek ŚmiejaJacek Tabor<![CDATA[Entropy, Vol. 17, Pages 3376-3399: Non-Abelian Topological Approach to Non-Locality of a Hypergraph State]]>
http://mdpi.com/1099-4300/17/5/3376
We present a theoretical study of new families of stochastic complex information modules encoded in the hypergraph states which are defined by the fractional entropic descriptor. The essential connection between the Lyapunov exponents and d-regular hypergraph fractal set is elucidated. To further resolve the divergence in the complexity of classical and quantum representation of a hypergraph, we have investigated the notion of non-amenability and its relation to combinatorics of dynamical self-organization for the case of fractal system of free group on finite generators. The exact relation between notion of hypergraph non-locality and quantum encoding through system sets of specified non-Abelian fractal geometric structures is presented. Obtained results give important impetus towards designing of approximation algorithms for chip imprinted circuits in scalable quantum information systems.Entropy2015-05-15175Article10.3390/e17053376337633991099-43002015-05-15doi: 10.3390/e17053376Vesna Berec<![CDATA[Entropy, Vol. 17, Pages 3352-3375: Nonlinear Stochastic Control and Information Theoretic Dualities: Connections, Interdependencies and Thermodynamic Interpretations]]>
http://mdpi.com/1099-4300/17/5/3352
In this paper, we present connections between recent developments on the linearly-solvable stochastic optimal control framework with early work in control theory based on the fundamental dualities between free energy and relative entropy. We extend these connections to nonlinear stochastic systems with non-affine controls by using the generalized version of the Feynman–Kac lemma. We present alternative formulations of the linearly-solvable stochastic optimal control framework and discuss information theoretic and thermodynamic interpretations. On the algorithmic side, we present iterative stochastic optimal control algorithms and applications to nonlinear stochastic systems. We conclude with an overview of the frameworks presented and discuss limitations, differences and future directions.Entropy2015-05-15175Article10.3390/e17053352335233751099-43002015-05-15doi: 10.3390/e17053352Evangelos Theodorou<![CDATA[Entropy, Vol. 17, Pages 3332-3351: An Information-Theoretic Perspective on Coarse-Graining, Including the Transition from Micro to Macro]]>
http://mdpi.com/1099-4300/17/5/3332
An information-theoretic perspective on coarse-graining is presented. It starts with an information characterization of configurations at the micro-level using a local information quantity that has a spatial average equal to a microscopic entropy. With a reversible micro dynamics, this entropy is conserved. In the micro-macro transition, it is shown how this local information quantity is transformed into a macroscopic entropy, as the local states are aggregated into macroscopic concentration variables. The information loss in this transition is identified, and the connection to the irreversibility of the macro dynamics and the second law of thermodynamics is discussed. This is then connected to a process of further coarse-graining towards higher characteristic length scales in the context of chemical reaction-diffusion dynamics capable of pattern formation. On these higher levels of coarse-graining, information flows across length scales and across space are defined. These flows obey a continuity equation for information, and they are connected to the thermodynamic constraints of the system, via an outflow of information from macroscopic to microscopic levels in the form of entropy production, as well as an inflow of information, from an external free energy source, if a spatial chemical pattern is to be maintained.Entropy2015-05-14175Article10.3390/e17053332333233511099-43002015-05-14doi: 10.3390/e17053332Kristian Lindgren<![CDATA[Entropy, Vol. 17, Pages 3319-3331: A Mean-Variance Hybrid-Entropy Model for Portfolio Selection with Fuzzy Returns]]>
http://mdpi.com/1099-4300/17/5/3319
In this paper, we define the portfolio return as fuzzy average yield and risk as hybrid-entropy and variance to deal with the portfolio selection problem with both random uncertainty and fuzzy uncertainty, and propose a mean-variance hybrid-entropy model (MVHEM). A multi-objective genetic algorithm named Non-dominated Sorting Genetic Algorithm II (NSGA-II) is introduced to solve the model. We make empirical comparisons by using the data from the Shanghai and Shenzhen stock exchanges in China. The results show that the MVHEM generally performs better than the traditional portfolio selection models.Entropy2015-05-14175Article10.3390/e17053319331933311099-43002015-05-14doi: 10.3390/e17053319Rongxi ZhouYu ZhanRu CaiGuanqun Tong<![CDATA[Entropy, Vol. 17, Pages 3253-3318: The Homological Nature of Entropy]]>
http://mdpi.com/1099-4300/17/5/3253
We propose that entropy is a universal co-homological class in a theory associated to a family of observable quantities and a family of probability distributions. Three cases are presented: (1) classical probabilities and random variables; (2) quantum probabilities and observable operators; (3) dynamic probabilities and observation trees. This gives rise to a new kind of topology for information processes, that accounts for the main information functions: entropy, mutual-informations at all orders, and Kullback–Leibler divergence and generalizes them in several ways. The article is divided into two parts, that can be read independently. In the first part, the introduction, we provide an overview of the results, some open questions, future results and lines of research, and discuss briefly the application to complex data. In the second part we give the complete definitions and proofs of the theorems A, C and E in the introduction, which show why entropy is the first homological invariant of a structure of information in four contexts: static classical or quantum probability, dynamics of classical or quantum strategies of observation of a finite system.Entropy2015-05-13175Article10.3390/e17053253325333181099-43002015-05-13doi: 10.3390/e17053253Pierre BaudotDaniel Bennequin<![CDATA[Entropy, Vol. 17, Pages 3205-3252: Generalized Stochastic Fokker-Planck Equations]]>
http://mdpi.com/1099-4300/17/5/3205
We consider a system of Brownian particles with long-range interactions. We go beyond the mean field approximation and take fluctuations into account. We introduce a new class of stochastic Fokker-Planck equations associated with a generalized thermodynamical formalism. Generalized thermodynamics arises in the case of complex systems experiencing small-scale constraints. In the limit of short-range interactions, we obtain a generalized class of stochastic Cahn-Hilliard equations. Our formalism has application for several systems of physical interest including self-gravitating Brownian particles, colloid particles at a fluid interface, superconductors of type II, nucleation, the chemotaxis of bacterial populations, and two-dimensional turbulence. We also introduce a new type of generalized entropy taking into account anomalous diffusion and exclusion or inclusion constraints.Entropy2015-05-13175Article10.3390/e17053205320532521099-43002015-05-13doi: 10.3390/e17053205Pierre-Henri Chavanis<![CDATA[Entropy, Vol. 17, Pages 3194-3204: Quantum Data Locking for Secure Communication against an Eavesdropper with Time-Limited Storage]]>
http://mdpi.com/1099-4300/17/5/3194
Quantum cryptography allows for unconditionally secure communication against an eavesdropper endowed with unlimited computational power and perfect technologies, who is only constrained by the laws of physics. We review recent results showing that, under the assumption that the eavesdropper can store quantum information only for a limited time, it is possible to enhance the performance of quantum key distribution in both a quantitative and qualitative fashion. We consider quantum data locking as a cryptographic primitive and discuss secure communication and key distribution protocols. For the case of a lossy optical channel, this yields the theoretical possibility of generating secret key at a constant rate of 1 bit per mode at arbitrarily long communication distances.Entropy2015-05-13175Article10.3390/e17053194319432041099-43002015-05-13doi: 10.3390/e17053194Cosmo Lupo<![CDATA[Entropy, Vol. 17, Pages 3182-3193: Exact Solutions of Non-Linear Lattice Equations by an Improved Exp-Function Method]]>
http://mdpi.com/1099-4300/17/5/3182
In this paper, the exp-function method is improved to construct exact solutions of non-linear lattice equations by modifying its exponential function ansatz. The improved method has two advantages. One is that it can solve non-linear lattice equations with variable coefficients, and the other is that it is not necessary to balance the highest order derivative with the highest order nonlinear term in the procedure of determining the exponential function ansatz. To show the advantages of this improved method, a variable-coefficient mKdV lattice equation is considered. As a result, new exact solutions, which include kink-type solutions and bell-kink-type solutions, are obtained.Entropy2015-05-13175Article10.3390/e17053182318231931099-43002015-05-13doi: 10.3390/e17053182Sheng ZhangJiahong LiYingying Zhou<![CDATA[Entropy, Vol. 17, Pages 3172-3181: Existence of Ulam Stability for Iterative Fractional Differential Equations Based on Fractional Entropy]]>
http://mdpi.com/1099-4300/17/5/3172
In this study, we introduce conditions for the existence of solutions for an iterative functional differential equation of fractional order. We prove that the solutions of the above class of fractional differential equations are bounded by Tsallis entropy. The method depends on the concept of Hyers-Ulam stability. The arbitrary order is suggested in the sense of Riemann-Liouville calculus.Entropy2015-05-13175Article10.3390/e17053172317231811099-43002015-05-13doi: 10.3390/e17053172Rabha IbrahimHamid Jalab<![CDATA[Entropy, Vol. 17, Pages 3160-3171: Effect of Heterogeneity in Initial Geographic Distribution on Opinions’ Competitiveness]]>
http://mdpi.com/1099-4300/17/5/3160
Spin dynamics on networks allows us to understand how a global consensus emerges out of individual opinions. Here, we are interested in the effect of heterogeneity in the initial geographic distribution of a competing opinion on that opinion's competitiveness. Accordingly, in this work, we studied the effect of spatial heterogeneity on the majority rule dynamics using a three-state spin model, in which one state is neutral. Monte Carlo simulations were performed on square lattices divided into square blocks (cells). Accordingly, one competing opinion was distributed uniformly among cells, whereas the spatial distribution of the rival opinion was varied from uniform to heterogeneous, with the median-to-mean ratio in the range from 1 to 0. When the size of the discussion group is odd, the uncommitted agents disappear completely after 3.30 ± 0.05 update cycles, and then the system evolves in a two-state regime with complementary spatial distributions of the two competing opinions. Even so, the initial heterogeneity in the spatial distribution of one of the competing opinions decreases this opinion's competitiveness. That is, the opinion with an initially heterogeneous spatial distribution has a lower probability of winning than the opinion with the initially uniform spatial distribution, even when the initial concentrations of both opinions are equal. We found that the opinion's recession rate is determined during the first 3.3 update cycles. On the other hand, we found that the initial heterogeneity of the opinion spatial distribution assists the formation of quasi-stable regions, in which this opinion is dominant.
The results of Monte Carlo simulations are discussed with regard to the electoral competition of political parties.Entropy2015-05-13175Article10.3390/e17053160316031711099-43002015-05-13doi: 10.3390/e17053160Alexander BalankinMiguel Martínez CruzFelipe Gayosso MartínezClaudia Martínez-GonzálezLeobardo Morales RuizJulián Patiño Ortiz<![CDATA[Entropy, Vol. 17, Pages 3152-3159: Continuous-Variable Entanglement Swapping]]>
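The three-state majority-rule dynamics described in the abstract above can be sketched in a few lines. This is an illustrative toy, not the paper's setup: discussion groups are drawn at random from a well-mixed population rather than from square blocks on a lattice, and group size `g` is an assumed parameter.

```python
import numpy as np

# States: +1 and -1 are the competing opinions, 0 is neutral (uncommitted).
def update_cycle(spins, g, rng):
    """One update cycle of majority rule: partition agents into random
    discussion groups of size g; every member adopts the group majority.
    Neutral agents contribute nothing to the group sum; tied groups
    (e.g. {+1, -1, 0}) are left unchanged."""
    n = len(spins)
    order = rng.permutation(n)
    for start in range(0, n - g + 1, g):
        group = order[start:start + g]
        s = spins[group].sum()
        if s != 0:
            spins[group] = 1 if s > 0 else -1
    return spins

def simulate(n=9999, g=3, cycles=10, seed=0):
    rng = np.random.default_rng(seed)
    # Equal thirds of opinion +1, opinion -1 and neutral agents, well mixed
    spins = np.array([1, -1, 0] * (n // 3))
    rng.shuffle(spins)
    neutral_history = [int((spins == 0).sum())]
    for _ in range(cycles):
        spins = update_cycle(spins, g, rng)
        neutral_history.append(int((spins == 0).sum()))
    return spins, neutral_history
```

Running `simulate()` shows the qualitative behavior reported in the abstract: with odd group sizes the neutral population collapses within a few update cycles, after which the dynamics is effectively two-state.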
http://mdpi.com/1099-4300/17/5/3152
We present a very brief overview of entanglement swapping as it relates to continuous-variable quantum information. The technical background required is discussed and the natural link to quantum teleportation is established before discussing the nature of Gaussian entanglement swapping. The limitations of Gaussian swapping are introduced, along with the general applications of swapping in the context of quantum communication and entanglement distribution. In light of this, we briefly summarize a collection of entanglement swapping schemes which incorporate a non-Gaussian ingredient, and the benefits of such schemes are noted. Finally, we motivate the need to further study and develop such schemes by highlighting the requirements of a continuous-variable repeater.Entropy2015-05-13175Review10.3390/e17053152315231591099-43002015-05-13doi: 10.3390/e17053152Kevin MarshallChristian Weedbrook<![CDATA[Entropy, Vol. 17, Pages 3124-3151: 2D Temperature Analysis of Energy and Exergy Characteristics of Laminar Steady Flow across a Square Cylinder under Strong Blockage]]>
http://mdpi.com/1099-4300/17/5/3124
Energy and exergy characteristics of a square cylinder (SC) in confined flow are investigated computationally by numerically handling the steady-state continuity, Navier-Stokes and energy equations in the Reynolds number range of Re = 10–50, where the blockage ratio (β = B/H) is kept constant at the high level of β = 0.8. Computations for the upstream region indicated that the mean non-dimensional streamwise (u/Uo) and spanwise (v/Uo) velocities attain the values of u/Uo = 0.840→0.879 and v/Uo = 0.236→0.386 (Re = 10→50) on the front-surface of the SC, implying that Reynolds number and blockage have a stronger impact on the spanwise momentum activity. It is determined that flows with high Reynolds number interact with the front-surface of the SC, developing thinner thermal boundary layers and greater temperature gradients, which also promotes thermal entropy generation. The strict guidance of the throat not only resulted in a fully developed flow character, but also imposed additional cooling, such that the analysis pointed out the drop of duct wall (y = 0.025 m) non-dimensional temperature values (ζ) from ζ = 0.387→0.926 (Re = 10→50) at xth = 0 mm to ζ = 0.002→0.266 at xth = 40 mm. In the downstream region, spanwise thermal disturbances are most apparent in the vortex-driven region, where the temperature values show decreasing trends in the spanwise direction. In the corresponding domain, exergy destruction is determined to grow with Reynolds number and decrease in the streamwise direction (xds = 0→10 mm). In addition, asymmetric entropy distributions were recorded due to the comprehensive mixing caused by the vortex system.Entropy2015-05-12175Article10.3390/e17053124312431511099-43002015-05-12doi: 10.3390/e17053124M. Korukcu<![CDATA[Entropy, Vol. 17, Pages 3110-3123: The Multiscale Entropy Algorithm and Its Variants: A Review]]>
http://mdpi.com/1099-4300/17/5/3110
Multiscale entropy (MSE) analysis was introduced in 2002 to evaluate the complexity of a time series by quantifying its entropy over a range of temporal scales. The algorithm has been successfully applied in different research fields. Since its introduction, a number of modifications and refinements have been proposed, some aimed at increasing the accuracy of the entropy estimates, others at exploring alternative coarse-graining procedures. In this review, we first describe the original MSE algorithm. Then, we review algorithms that have been introduced to improve the estimation of MSE. We also report a recent generalization of the method to higher moments.Entropy2015-05-12175Review10.3390/e17053110311031231099-43002015-05-12doi: 10.3390/e17053110Anne Humeau-Heurtier<![CDATA[Entropy, Vol. 17, Pages 3097-3109: Exponential Outer Synchronization between Two Uncertain Time-Varying Complex Networks with Nonlinear Coupling]]>
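The original MSE procedure summarized in the review abstract above (coarse-grain the series at each scale, then estimate sample entropy) can be sketched in a few lines. This is a minimal illustration only; the function names and the parameter defaults (m = 2, tolerance r = 0.15 of the standard deviation) are common conventions, not the reviewed authors' reference implementation:

```python
import numpy as np

def coarse_grain(x, scale):
    """Average consecutive non-overlapping windows of length `scale` (the MSE coarse-graining step)."""
    x = np.asarray(x, dtype=float)
    n = len(x) // scale
    return x[:n * scale].reshape(n, scale).mean(axis=1)

def sample_entropy(x, m, tol):
    """SampEn(m, tol): negative log of the conditional probability that sequences
    matching for m points (Chebyshev distance <= tol) also match for m + 1 points."""
    x = np.asarray(x, dtype=float)
    def match_pairs(mm):
        emb = np.array([x[i:i + mm] for i in range(len(x) - mm + 1)])
        d = np.max(np.abs(emb[:, None, :] - emb[None, :, :]), axis=2)
        return ((d <= tol).sum() - len(emb)) / 2   # count pairs, excluding self-matches
    b, a = match_pairs(m), match_pairs(m + 1)
    return -np.log(a / b) if a > 0 and b > 0 else np.inf

def multiscale_entropy(x, scales=range(1, 6), m=2, r=0.15):
    """Sample entropy of the coarse-grained series at each scale; the tolerance is
    fixed from the SD of the original series, as in the original MSE algorithm."""
    tol = r * np.std(x)
    return [sample_entropy(coarse_grain(x, s), m, tol) for s in scales]
```

For white noise the resulting curve typically decreases with scale, whereas long-range correlated (1/f) noise stays roughly flat; this scale dependence, rather than any single entropy value, is what MSE analyses interpret.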
http://mdpi.com/1099-4300/17/5/3097
This paper studies the problem of exponential outer synchronization between two uncertain nonlinearly coupled complex networks with time delays. In order to synchronize uncertain complex networks, an adaptive control scheme is designed based on the Lyapunov stability theorem. Simultaneously, the unknown system parameters of uncertain complex networks are identified when exponential outer synchronization occurs. Finally, numerical examples are provided to demonstrate the feasibility and effectiveness of the theoretical results.Entropy2015-05-11175Article10.3390/e17053097309731091099-43002015-05-11doi: 10.3390/e17053097Yongqing WuLi Liu<![CDATA[Entropy, Vol. 17, Pages 3053-3096: Predicting Community Evolution in Social Networks]]>
http://mdpi.com/1099-4300/17/5/3053
Nowadays, sustained development of different social media can be observed worldwide. One of the relevant research domains intensively explored recently is the analysis of social communities existing in social media, as well as the prediction of their future evolution taking into account collected historical evolution chains. The evolution chains proposed in the paper contain group states in the previous time frames and their historical transitions, which were identified using one of two methods: Stable Group Changes Identification (SGCI) and Group Evolution Discovery (GED). Based on the observed evolution chains of various lengths, structural network features are extracted, validated, selected and used to learn classification models. The experimental studies were performed on three real datasets with different profiles: DBLP, Facebook and the Polish blogosphere. The process of group prediction was analysed with respect to different classifiers as well as various descriptive feature sets extracted from evolution chains of different lengths. The results revealed that, in general, the longer the evolution chains, the better the predictive abilities of the classification models. However, chains of length 3 to 7 enabled the GED-based method to almost reach its maximum possible prediction quality. For SGCI, this level was reached with the last 3–5 periods.Entropy2015-05-11175Article10.3390/e17053053305330961099-43002015-05-11doi: 10.3390/e17053053Stanisław SaganowskiBogdan GliwaPiotr BródkaAnna ZygmuntPrzemysław KazienkoJarosław Koźlak<![CDATA[Entropy, Vol. 17, Pages 3035-3052: Dimensional Upgrade Approach for Spatial-Temporal Fusion of Trend Series in Subsidence Evaluation]]>
http://mdpi.com/1099-4300/17/5/3035
Physical models and grey system models (GSMs) are commonly used to evaluate and predict physical behavior. A physical model avoids the incorrect trend series of a GSM, whereas a GSM avoids the assumptions and uncertainty of a physical model. A technique that combines the results of physical models and GSMs would make prediction more reasonable and reliable. This study proposes a fusion method that combines two trend series, each calculated with a one-dimensional model, using a slope criterion and a distance weighting factor in the temporal and spatial domains. The independent one-dimensional evaluations are upgraded to a spatially and temporally connected two-dimensional distribution. The proposed technique was applied to a subsidence problem in the Jhuoshuei River Alluvial Fan, Taiwan. The fusion results show dramatic decreases in subsidence quantity and rate compared to those estimated by the GSM. The subsidence behavior estimated using the proposed method is physically reasonable due to a convergent trend of subsidence under the assumption of constant discharge of groundwater. The technique proposed in this study can be used in fields that require a combination of two trend series from physical and nonphysical models.Entropy2015-05-11175Communication10.3390/e17053035303530521099-43002015-05-11doi: 10.3390/e17053035Shih-Jung Wang<![CDATA[Entropy, Vol. 17, Pages 2988-3034: Log-Determinant Divergences Revisited: Alpha-Beta and Gamma Log-Det Divergences]]>
http://mdpi.com/1099-4300/17/5/2988
This work reviews and extends a family of log-determinant (log-det) divergences for symmetric positive definite (SPD) matrices and discusses their fundamental properties. We show how to use parameterized Alpha-Beta (AB) and Gamma log-det divergences to generate many well-known divergences; in particular, we consider Stein's loss, the S-divergence (also called the Jensen-Bregman LogDet (JBLD) divergence), the Logdet Zero (Bhattacharyya) divergence, the Affine Invariant Riemannian Metric (AIRM), and other divergences. Moreover, we establish links and correspondences between log-det divergences and visualise them on an alpha-beta plane for various sets of parameters. We use this unifying framework to interpret and extend existing similarity measures for semidefinite covariance matrices in finite-dimensional Reproducing Kernel Hilbert Spaces (RKHS). This paper also shows how the Alpha-Beta family of log-det divergences relates to the divergences of multivariate and multilinear normal distributions. Closed-form formulas are derived for Gamma divergences of two multivariate Gaussian densities; the special cases of the Kullback-Leibler, Bhattacharyya, Rényi, and Cauchy-Schwarz divergences are discussed. Symmetrized versions of log-det divergences are also considered and briefly reviewed. Finally, a class of divergences is extended to multiway divergences for separable covariance (or precision) matrices.Entropy2015-05-08175Review10.3390/e17052988298830341099-43002015-05-08doi: 10.3390/e17052988Andrzej CichockiSergio CrucesShun-ichi Amari<![CDATA[Entropy, Vol. 17, Pages 2973-2987: Kolmogorov Complexity Based Information Measures Applied to the Analysis of Different River Flow Regimes]]>
http://mdpi.com/1099-4300/17/5/2973
We have used the Kolmogorov complexities and the Kolmogorov complexity spectrum to quantify the degree of randomness in river flow time series of seven rivers with different regimes in Bosnia and Herzegovina, representing their different types of courses, for the period 1965–1986. In particular, we have examined: (i) the Neretva, Bosnia and the Drina (mountain and lowland parts), (ii) the Miljacka and the Una (mountain part) and the Vrbas and the Ukrina (lowland part) and then calculated the Kolmogorov complexity (KC) based on the Lempel–Ziv Algorithm (LZA) (lower—KCL and upper—KCU), Kolmogorov complexity spectrum highest value (KCM) and overall Kolmogorov complexity (KCO) values for each time series. The results indicate that the KCL, KCU, KCM and KCO values in the seven rivers show some similarities regardless of the amplitude differences in their monthly flow rates. The KCL, KCU and KCM complexities as information measures do not “see” a difference between time series which have different amplitude variations but similar random components. However, it seems that the KCO information measure better takes into account both the amplitude and the place of the components in a time series.Entropy2015-05-08175Article10.3390/e17052973297329871099-43002015-05-08doi: 10.3390/e17052973Dragutin MihailovićGordan MimićNusret DreškovićIlija Arsenić<![CDATA[Entropy, Vol. 17, Pages 2958-2972: Maximum Entropy Method for Operational Loads Feedback Using Concrete Dam Displacement]]>
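The Lempel–Ziv estimate of Kolmogorov complexity used in studies of this kind admits a compact sketch. The version below binarizes the series at its median and normalizes the phrase count by n/log2(n); these are common conventions, and the authors' exact thresholding and their KCL/KCU/KCM/KCO variants are not reproduced here:

```python
import numpy as np

def lempel_ziv_complexity(s):
    """LZ76 complexity: the number of phrases in the exhaustive parsing of string s,
    where each new phrase is the shortest substring not seen in the preceding text."""
    i, c, n = 0, 0, len(s)
    while i < n:
        l = 1
        # extend the current phrase while it already occurs earlier in the string
        while i + l <= n and s[i:i + l] in s[:i + l - 1]:
            l += 1
        c += 1
        i += l
    return c

def kolmogorov_complexity(x):
    """Normalized LZ complexity of a series binarized at its median
    (one common KC estimate; other thresholds, e.g. the mean, are also used)."""
    x = np.asarray(x, dtype=float)
    s = ''.join('1' if v >= np.median(x) else '0' for v in x)
    n = len(s)
    return lempel_ziv_complexity(s) * np.log2(n) / n
```

A purely periodic series yields a value near 0, while a random binary series yields a value near 1, which is what makes the normalized count usable as a randomness measure for flow records.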
http://mdpi.com/1099-4300/17/5/2958
Safety control of concrete dams is required due to the potentially great loss of life and property in case of dam failure. The purpose of this paper is to feed back the operational control loads for concrete dam displacement using the maximum entropy method. The proposed method is not aimed at a judgement about the safety conditions of the dam. When a strong trend-line effect is evident, the method should be applied carefully. In such cases, the hydrostatic and temperature effects are added to the irreversible displacements, and thus the maximum operational loads should be reduced accordingly. The probability density function for the extreme load effect component of dam displacement can be selected by employing the principle of maximum entropy, which is effective for constructing the least subjective probability density distribution given only the moment information from the measured data. The critical load effect component in the warning criterion can be determined through the corresponding cumulative distribution function obtained by the maximum entropy method. The control loads feedback of concrete dam displacement is then realized by the proposed warning criterion. The proposed method is applied to a concrete dam. A comparison of the results shows that the maximum entropy method can feed back rational control loads for the dam displacement. The resulting control loads diagram can serve as a straightforward and visual tool for the operation and management department of the concrete dam. The result from the proposed method is recommended for use because of its minimal subjectivity.Entropy2015-05-08175Article10.3390/e17052958295829721099-43002015-05-08doi: 10.3390/e17052958Jingmei ZhangChongshi Gu<![CDATA[Entropy, Vol. 17, Pages 2932-2957: Oxygen Saturation and RR Intervals Feature Selection for Sleep Apnea Detection]]>
http://mdpi.com/1099-4300/17/5/2932
A diagnostic system for sleep apnea based on oxygen saturation and RR intervals obtained from the EKG (electrocardiogram) is proposed, with the goal of detecting and quantifying minute-long segments of sleep with breathing pauses. We measured the discriminative capacity of combinations of features obtained from RR series and oximetry to evaluate improvements in performance compared to oximetry-based features alone. Time and frequency domain variables derived from oxygen saturation (SpO2) as well as linear and non-linear variables describing the RR series have been explored in recordings from 70 patients with suspected sleep apnea. We applied forward feature selection in order to select a minimal set of variables that are able to locate patterns indicating respiratory pauses. Linear discriminant analysis (LDA) was used to classify the presence of apnea during specific segments. The system finally provides a global score indicating the presence of clinically significant apnea by integrating the segment-based apnea detection. LDA results in an accuracy of 87%, sensitivity of 76% and specificity of 91% (AUC = 0.90), with a global classification rate of 97% when only oxygen saturation is used. When features from the RR series are additionally included, the system performance improves to an accuracy of 87%, sensitivity of 73% and specificity of 92% (AUC = 0.92), with a global classification rate of 100%.Entropy2015-05-07175Article10.3390/e17052932293229571099-43002015-05-07doi: 10.3390/e17052932Antonio Ravelo-GarcíaJan KraemerJuan Navarro-MesaEduardo Hernández-PérezJavier Navarro-EstevaGabriel Juliá-SerdáThomas PenzelNiels Wessel<![CDATA[Entropy, Vol. 17, Pages 2919-2931: Three-Stage Quantum Cryptography Protocol under Collective-Rotation Noise]]>
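The segment classifier described in the apnea abstract above is standard linear discriminant analysis; for two classes it reduces to Fisher's linear discriminant. The sketch below illustrates that reduction on synthetic feature vectors; the data and the equal-priors midpoint threshold are illustrative assumptions, not the study's SpO2/RR features:

```python
import numpy as np

def fisher_lda_fit(X, y):
    """Two-class Fisher discriminant: w = Sw^{-1} (mu1 - mu0),
    with the decision threshold at the projected midpoint of the class means
    (which implicitly assumes equal class priors)."""
    X0, X1 = X[y == 0], X[y == 1]
    mu0, mu1 = X0.mean(axis=0), X1.mean(axis=0)
    Sw = np.cov(X0, rowvar=False) + np.cov(X1, rowvar=False)  # within-class scatter
    w = np.linalg.solve(Sw, mu1 - mu0)
    threshold = w @ (mu0 + mu1) / 2.0
    return w, threshold

def fisher_lda_predict(X, w, threshold):
    """Label 1 when the projection onto w exceeds the threshold, else 0."""
    return (X @ w > threshold).astype(int)
```

In a study like the one above, the rows of X would be the selected per-minute oximetry and RR features and y the expert apnea annotation; the same projection-and-threshold structure carries over unchanged.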
http://mdpi.com/1099-4300/17/5/2919
Information security is increasingly important as society migrates to the information age. The classical cryptography widely used nowadays is based on computational complexity: its security rests on the assumption that certain mathematical problems are hard to solve on a classical computer. With the development of supercomputers and, potentially, quantum computers, classical cryptography faces more and more potential risks. Quantum cryptography provides a solution based on the Heisenberg uncertainty principle and the no-cloning theorem. While BB84-based quantum protocols are only secure when a single photon is used in communication, the three-stage quantum protocol is multi-photon tolerant. However, existing analyses assume perfect noiseless channels. In this paper, a multi-photon analysis is performed for the three-stage quantum protocol under the collective-rotation noise model. The analysis provides insights into the impact of the noise level on a three-stage quantum cryptography system.Entropy2015-05-07175Article10.3390/e17052919291929311099-43002015-05-07doi: 10.3390/e17052919Linsen WuYuhua Chen<![CDATA[Entropy, Vol. 17, Pages 2895-2918: AIM for Allostery: Using the Ising Model to Understand Information Processing and Transmission in Allosteric Biomolecular Systems]]>
http://mdpi.com/1099-4300/17/5/2895
In performing their biological functions, molecular machines must process and transmit information with high fidelity. Information transmission requires dynamic coupling between the conformations of discrete structural components within the protein positioned far from one another on the molecular scale. This type of biomolecular “action at a distance” is termed allostery. Although allostery is ubiquitous in biological regulation and signal transduction, its treatment in theoretical models has mostly eschewed quantitative descriptions involving the system’s underlying structural components and their interactions. Here, we show how Ising models can be used to formulate an approach to allostery in a structural context of interactions between the constitutive components by building simple allosteric constructs we termed Allosteric Ising Models (AIMs). We introduce the use of AIMs in analytical and numerical calculations that relate thermodynamic descriptions of allostery to the structural context, and then show that many fundamental properties of allostery, such as the multiplicative property of parallel allosteric channels, are revealed from the analysis of such models. The power of exploring mechanistic structural models of allosteric function in more complex systems by using AIMs is demonstrated by building a model of allosteric signaling for an experimentally well-characterized asymmetric homodimer of the dopamine D2 receptor.Entropy2015-05-07175Article10.3390/e17052895289529181099-43002015-05-07doi: 10.3390/e17052895Michael LeVineHarel Weinstein<![CDATA[Entropy, Vol. 17, Pages 2876-2894: Properties of Nonnegative Hermitian Matrices and New Entropic Inequalities for Noncomposite Quantum Systems]]>
http://mdpi.com/1099-4300/17/5/2876
We consider the probability distributions, spin (qudit)-state tomograms and density matrices of quantum states, and their information characteristics, such as Shannon and von Neumann entropies and q-entropies, from the viewpoints of both well-known purely mathematical features of nonnegative numbers and nonnegative matrices and their physical characteristics, such as entanglement and other quantum correlation phenomena. We review entropic inequalities such as the Araki–Lieb inequality and the subadditivity and strong subadditivity conditions known for bipartite and tripartite systems, and recently obtained for single qudit states. We present explicit matrix forms of the known and some new entropic inequalities associated with quantum states of composite and noncomposite systems. We discuss the tomographic probability distributions of qudit states and demonstrate the inequalities for tomographic entropies of the qudit states. In addition, we mention a possibility to use the discussed information properties of single qudit states in quantum technologies based on multilevel atoms and quantum circuits produced of Josephson junctions.Entropy2015-05-06175Review10.3390/e17052876287628941099-43002015-05-06doi: 10.3390/e17052876Margarita Man'koVladimir Man'ko<![CDATA[Entropy, Vol. 17, Pages 2862-2875: Stabilization Effects of Dichotomous Noise on the Lifetime of the Superconducting State in a Long Josephson Junction]]>
http://mdpi.com/1099-4300/17/5/2862
We investigate the superconducting lifetime of a long overdamped current-biased Josephson junction, in the presence of telegraph noise sources. The analysis is performed by randomly choosing the initial condition for the noise source. However, in order to investigate how the initial value of the dichotomous noise affects the phase dynamics, we extend our analysis using two different fixed initial values for the source of random fluctuations. In our study, the phase dynamics of the Josephson junction is analyzed as a function of the noise signal intensity, for different values of the parameters of the system and external driving currents. We find that the mean lifetime of the superconductive metastable state as a function of the noise intensity is characterized by nonmonotonic behavior, strongly related to the soliton dynamics during the switching towards the resistive state. The role of the correlation time of the noise source is also taken into account. Noise-enhanced stability is observed in the investigated system.Entropy2015-05-06175Article10.3390/e17052862286228751099-43002015-05-06doi: 10.3390/e17052862Claudio GuarcelloDavide ValentiAngelo CarolloBernardo Spagnolo<![CDATA[Entropy, Vol. 17, Pages 2853-2861: Conceptual Inadequacy of the Shore and Johnson Axioms for Wide Classes of Complex Systems]]>
http://mdpi.com/1099-4300/17/5/2853
It is by now well known that the Boltzmann-Gibbs-von Neumann-Shannon logarithmic entropic functional (\(S_{BG}\)) is inadequate for wide classes of strongly correlated systems: see for instance the 2001 Brukner and Zeilinger's {\it Conceptual inadequacy of the Shannon information in quantum measurements}, among many other systems exhibiting various forms of complexity. On the other hand, the Shannon and Khinchin axioms uniquely mandate the BG form \(S_{BG}=-k\sum_i p_i \ln p_i\); the Shore and Johnson axioms follow the same path. Many natural, artificial and social systems have been satisfactorily approached with nonadditive entropies such as the \(S_q=k \frac{1-\sum_i p_i^q}{q-1}\) one (\(q \in {\cal R}; \,S_1=S_{BG}\)), basis of nonextensive statistical mechanics. Consistently, the Shannon 1948 and Khinchin 1953 uniqueness theorems have already been generalized in the literature, by Santos 1997 and Abe 2000 respectively, in order to uniquely mandate \(S_q\). We argue here that the same remains to be done with the Shore and Johnson 1980 axioms. We arrive at this conclusion by analyzing specific classes of strongly correlated complex systems that await such generalization.Entropy2015-05-05175Article10.3390/e17052853285328611099-43002015-05-05doi: 10.3390/e17052853Constantino Tsallis<![CDATA[Entropy, Vol. 17, Pages 2834-2852: A Novel Risk Metric for Staff Turnover in a Software Project Based on Information Entropy]]>
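The nonadditive entropy \(S_q\) quoted in the abstract above is simple to compute directly from its definition. The sketch below (an illustration with k = 1 and natural logarithms, not code from the paper) makes the two defining properties checkable: the q → 1 limit recovers \(S_{BG}\), and for independent subsystems A and B the entropy composes as S_q(A+B) = S_q(A) + S_q(B) + (1 − q) S_q(A) S_q(B):

```python
import numpy as np

def tsallis_entropy(p, q, k=1.0):
    """S_q = k (1 - sum_i p_i^q) / (q - 1); the q -> 1 limit is S_BG = -k sum_i p_i ln p_i."""
    p = np.asarray(p, dtype=float)
    p = p[p > 0]                      # convention: zero-probability terms contribute nothing
    if np.isclose(q, 1.0):
        return -k * np.sum(p * np.log(p))
    return k * (1.0 - np.sum(p ** q)) / (q - 1.0)
```

The nonadditive composition rule (with k = 1) is exactly why \(S_q\) is called "nonadditive": for q ≠ 1, the entropy of two independent systems is not the sum of the parts.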
http://mdpi.com/1099-4300/17/5/2834
Staff turnover in a software project is a significant risk that can result in project failure. Despite the urgency of this issue, however, relevant studies are limited and are mostly qualitative; quantitative studies are extremely rare. This paper proposes a novel risk metric for staff turnover in a software project based on information entropy theory. To address the gaps in existing studies, five aspects are considered, namely, staff turnover probability, turnover type, staff level, software project complexity, and staff order degree. This paper develops a method of calculating staff turnover risk probability in a software project based on the field, equity, and goal congruence theories. The proposed method avoids subjective estimation of the probability, making it more objective and comprehensive than existing research. This paper not only presents a detailed, operable model, but also theoretically demonstrates the soundness and rationality of the research. The case study performed in this study indicates that the approach is reasonable, effective, and feasible.Entropy2015-05-04175Article10.3390/e17052834283428521099-43002015-05-04doi: 10.3390/e17052834Rong Jiang<![CDATA[Entropy, Vol. 17, Pages 2812-2833: On the κ-Deformed Cyclic Functions and the Generalized Fourier Series in the Framework of the κ-Algebra]]>
http://mdpi.com/1099-4300/17/5/2812
We explore two possible generalizations of the Euler formula for the complex \(\kappa\)-exponential, which give two different sets of \(\kappa\)-deformed cyclic functions endowed with different analytical properties. In one case, the \(\kappa\)-sine and \(\kappa\)-cosine functions take real values on \(\Re\) and are characterized by an asymptotic log-periodic behavior. In the other case, the \(\kappa\)-cyclic functions take real values only in the region \(|x|\leq1/|\kappa|\), while, for \(|x|&gt;1/|\kappa|\), they assume purely imaginary values with an increasing modulus. However, the main mathematical properties of the standard cyclic functions, suitably reformulated in the formalism of \(\kappa\)-mathematics, are fulfilled by both sets of \(\kappa\)-trigonometric functions. In both cases, we study the orthogonality and completeness relations and introduce their respective generalized Fourier series for square integrable functions.Entropy2015-05-04175Article10.3390/e17052812281228331099-43002015-05-04doi: 10.3390/e17052812Antonio Scarfone<![CDATA[Entropy, Vol. 17, Pages 2781-2811: The Grading Entropy-based Criteria for Structural Stability of Granular Materials and Filters]]>
http://mdpi.com/1099-4300/17/5/2781
This paper deals with three grading entropy-based rules that describe different soil structure stability phenomena: an internal stability rule, a filtering rule and a segregation rule. These rules are elaborated on the basis of a large amount of laboratory testing and from existing knowledge in the field. Use is made of the theory of grading entropy to derive parameters which incorporate all of the information of the grading curve into a pair of entropy-based parameters that allow soils with common behaviours to be grouped into domains on an entropy diagram. Applications of the derived entropy-based rules are presented by examining the cause of a dam failure, by testing against the existing filter rules from the literature, and by giving some examples for the design of non-segregating grading curves (discrete particle size distributions by dry weight). A physical basis for the internal stability rule is established, wherein the higher values of base entropy required for granular stability are shown to reflect the closeness between the mean and maximum grain diameters, which explains how there are sufficient coarser grains to achieve a stable grain skeleton.Entropy2015-05-04175Article10.3390/e17052781278128111099-43002015-05-04doi: 10.3390/e17052781Janos LőrinczEmöke ImreStephen FityusPhong TrangTibor TarnaiIstván TalataVijay Singh<![CDATA[Entropy, Vol. 17, Pages 2764-2780: Estimating the Lower Limit of the Impact of Amines on Nucleation in the Earth’s Atmosphere]]>
http://mdpi.com/1099-4300/17/5/2764
Amines, organic derivatives of NH3, are important common trace atmospheric species that can enhance new particle formation in the Earth’s atmosphere under favorable conditions. While methylamine (MA), dimethylamine (DMA) and trimethylamine (TMA) all efficiently enhance binary nucleation, MA may represent the lower limit of the enhancing effect of amines on atmospheric nucleation. In the present paper, we report new thermochemical data concerning MA-enhanced nucleation, which were obtained using the DFT PW91PW91/6-311++G (3df, 3pd) method, and investigate the enhancement in production of stable pre-nucleation clusters due to the MA. We found that the MA ternary nucleation begins to dominate over ternary nucleation of sulfuric acid, water and ammonia at [MA]/[NH3] &gt; ~10−3. This means that under real atmospheric conditions ([MA] ~ 1 ppt, [NH3] ~ 1 ppb) the lower limit of the enhancement due to methylamines is either close to or higher than the typical effect of NH3. A very strong impact of the MA is observed at low RH; however, it decreases quickly as the RH grows. Low RH and low ambient temperatures were found to be particularly favorable for the enhancement in production of stable sulfuric acid-water clusters due to the MA.Entropy2015-04-30175Article10.3390/e17052764276427801099-43002015-04-30doi: 10.3390/e17052764Alexey NadyktoJason HerbFangqun YuYisheng XuEkaterina Nazarenko<![CDATA[Entropy, Vol. 17, Pages 2749-2763: Detection of Changes in Ground-Level Ozone Concentrations via Entropy]]>
http://mdpi.com/1099-4300/17/5/2749
Ground-level ozone concentration is a key indicator of air quality. There may exist sudden changes in ozone concentration data over a long time horizon, which may be caused by the implementation of government regulations and policies, such as establishing exhaust emission limits for on-road vehicles. To monitor and assess the efficacy of these policies, we propose a methodology for detecting changes in ground-level ozone concentrations, which consists of three major steps: data transformation, simultaneous autoregressive modelling and change-point detection on the estimated entropy. To show the effectiveness of the proposed methodology, it is applied to detect changes in ground-level ozone concentration data collected in the Toronto region of Canada between June and September of each year from 1988 to 2009. The proposed methodology is also applicable to other climate data.Entropy2015-04-30175Article10.3390/e17052749274927631099-43002015-04-30doi: 10.3390/e17052749Yuehua WuBaisuo JinElton Chan<![CDATA[Entropy, Vol. 17, Pages 2741-2748: Synthesis and Surface Thermodynamic Functions of CaMoO4 Nanocakes]]>
http://mdpi.com/1099-4300/17/5/2741
CaMoO4 nanocakes with uniform size and morphology were prepared on a large scale via a room temperature reverse-microemulsion method. The products were characterized in detail by X-ray powder diffraction, field-emission scanning electron microscopy, transmission electron microscopy, and high-resolution transmission electron microscopy. By establishing the relations between the thermodynamic functions of nano-CaMoO4 and bulk-CaMoO4 reaction systems, the equations for calculating the surface thermodynamic functions of nano-CaMoO4 were derived. Then, combined with in-situ microcalorimetry, the molar surface enthalpy, molar surface Gibbs free energy, and molar surface entropy of the prepared CaMoO4 nanocakes at 298.15 K were successfully obtained as (19.674 ± 0.017) kJ·mol−1, (619.704 ± 0.016) J·mol−1, and (63.908 ± 0.057) J·mol−1·K−1, respectively.Entropy2015-04-30175Article10.3390/e17052741274127481099-43002015-04-30doi: 10.3390/e17052741Xingxing LiGaochao FanZaiyin Huang<![CDATA[Entropy, Vol. 17, Pages 2723-2740: Finite Key Size Analysis of Two-Way Quantum Cryptography]]>
http://mdpi.com/1099-4300/17/5/2723
Quantum cryptographic protocols solve the longstanding problem of distributing a shared secret string to two distant users, typically by making use of a one-way quantum channel. However, alternative protocols exploiting a two-way quantum channel have been proposed for the same goal and with potential advantages. Here, we overview a security proof for two-way quantum key distribution protocols, against the most general eavesdropping attack, that utilizes an entropic uncertainty relation. Then, by resorting to the “smooth” version of the involved entropies, we extend such a proof to the case of finite key size. The results are compared to those available for one-way protocols, showing some advantages.Entropy2015-04-30175Article10.3390/e17052723272327401099-43002015-04-30doi: 10.3390/e17052723Jesni ShaariStefano Mancini<![CDATA[Entropy, Vol. 17, Pages 2706-2722: Identifying the Most Relevant Lag with Runs]]>
http://mdpi.com/1099-4300/17/5/2706
In this paper, we propose a nonparametric statistical tool to identify the most relevant lag in the model description of a time series. It is also shown that it can be used for model identification. The statistic is based on the number of runs when the time series is symbolized depending on the empirical quantiles of the time series. With a Monte Carlo simulation, we show the size and power performance of our new test statistic under linear and nonlinear data generating processes. From the theoretical point of view, it is the first time that symbolic analysis and runs have been proposed for identifying characteristic lags and for helping in the identification of univariate time series models. From a more applied point of view, the results show the power and competitiveness of the proposed tool with respect to other techniques without presuming or specifying a model.Entropy2015-04-28175Article10.3390/e17052706270627221099-43002015-04-28doi: 10.3390/e17052706Úrsula FauraMatilde LafuenteMariano Matilla-GarcíaManuel Ruiz<![CDATA[Entropy, Vol. 17, Pages 2688-2705: A Fuzzy Logic-Based Approach for Estimation of Dwelling Times of Panama Metro Stations]]>
http://mdpi.com/1099-4300/17/5/2688
Passenger flow modeling and station dwelling time estimation are significant elements for railway mass transit planning, but system operators usually have limited information to model the passenger flow. In this paper, an artificial-intelligence technique known as fuzzy logic is applied for the estimation of the elements of the origin-destination matrix and the dwelling time of stations in a railway transport system. The fuzzy inference engine used in the algorithm is based on the principle of maximum entropy. The approach considers passengers’ preferences to assign a level of congestion to each car of the train as a function of the properties of the station platforms. This approach is implemented to estimate the passenger flow and dwelling times of the recently opened Line 1 of the Panama Metro. The dwelling times obtained from the simulation are compared to real measurements to validate the approach.Entropy2015-04-27175Article10.3390/e17052688268827051099-43002015-04-27doi: 10.3390/e17052688Aranzazu Berbey AlvarezFernando MerchanFrancisco Calvo PoyoRony Caballero George