Entropy
http://www.mdpi.com/journal/entropy
Latest open access articles published in Entropy at http://www.mdpi.com/journal/entropy
<![CDATA[Entropy, Vol. 19, Pages 143: Paradigms of Cognition]]>
http://www.mdpi.com/1099-4300/19/4/143
An abstract, quantitative theory which connects elements of information, key ingredients in the cognitive process, is developed. Seemingly unrelated results are thereby unified. As an indication of this, consider results in classical probabilistic information theory involving information projections and so-called Pythagorean inequalities. These bear a certain resemblance to classical results in geometry carrying Pythagoras’ name. The abstract theory presented here provides a common point of reference for these results. In fact, the new theory provides a general framework for the treatment of a multitude of global optimization problems across a range of disciplines such as geometry, statistics and statistical physics. Several applications are given; among them, an “explanation” of Tsallis entropy is suggested. For this, as well as for the general development of the abstract underlying theory, emphasis is placed on interpretations and associated philosophical considerations. Technically, game theory is the key tool.
Entropy 2017, 19(4), 143; Article; doi: 10.3390/e19040143; ISSN 1099-4300; published 2017-03-27. Author: Flemming Topsøe.
<![CDATA[Entropy, Vol. 19, Pages 142: A Novel Framework for Shock Filter Using Partial Differential Equations]]>
http://www.mdpi.com/1099-4300/19/4/142
Based on dilation and erosion processes, a shock filter is widely used in signal enhancement and image deblurring. Traditionally, the sign function is employed in shock filtering to reweight edge detection in images, deciding whether a pixel should dilate to the local maximum or erode to the local minimum. Some researchers replace the sign function with the tanh or arctan function, trying to change the evolution tracks of the pixels while filtering is in progress. However, the analysis here reveals that function replacement alone usually does not work. This paper first revisits shock filters and their modifications. Then, a fuzzy shock filter is proposed by adopting a membership function in a shock filter model to adjust the evolution rate of image pixels. The proposed filter is a parameter-tuning system which unites several formulations of shock filters into one fuzzy framework. Experimental results show that the new filter is flexible, robust and converges quickly.
Entropy 2017, 19(4), 142; Article; doi: 10.3390/e19040142; published 2017-03-26. Authors: Chunmei Duan, Hongqian Lu.
<![CDATA[Entropy, Vol. 19, Pages 141: Permutation Entropy for the Characterisation of Brain Activity Recorded with Magnetoencephalograms in Healthy Ageing]]>
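The shock-filter evolution discussed above, u_t = −sign(u_xx)|u_x|, can be sketched in one dimension as follows. This is a generic minmod-upwind discretization for illustration only, not the authors' fuzzy formulation; the `reweight` argument merely stands in for the sign/tanh/arctan variants mentioned in the abstract:

```python
def shock_filter_1d(u, steps=100, dt=0.25, reweight=lambda s: (s > 0) - (s < 0)):
    """Explicit scheme for the 1-D shock filter u_t = -f(u_xx) |u_x|.

    `reweight` is the edge-reweighting function f (the sign function by
    default; a tanh or arctan variant can be passed in instead).
    """
    def minmod(a, b):
        # upwind slope limiter: zero at extrema, smaller slope elsewhere
        return 0.0 if a * b <= 0 else (min(a, b) if a > 0 else max(a, b))

    u = [float(v) for v in u]
    for _ in range(steps):
        new = u[:]
        for i in range(1, len(u) - 1):
            uxx = u[i + 1] - 2 * u[i] + u[i - 1]      # discrete second derivative
            ux = minmod(u[i + 1] - u[i], u[i] - u[i - 1])
            new[i] = u[i] - dt * reweight(uxx) * abs(ux)
        u = new
    return u
```

Applied to a smoothed edge (e.g., a sigmoid profile), the scheme sharpens it toward a step while keeping values inside the original range, which is the dilate-or-erode behavior described above.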
http://www.mdpi.com/1099-4300/19/4/141
The characterisation of healthy ageing of the brain could help create a fingerprint of normal ageing that might assist in the early diagnosis of neurodegenerative conditions. This study examined changes in resting-state magnetoencephalogram (MEG) permutation entropy due to age and gender in a sample of 220 healthy participants (98 males and 122 females, ages ranging between 7 and 84). Entropy was quantified using normalised permutation entropy and modified permutation entropy, with an embedding dimension of 5 and a lag of 1 as the input parameters for both algorithms. Effects of age were observed over the five regions of the brain, i.e., anterior, central, posterior, and left and right lateral, with the anterior and central regions containing the highest permutation entropy. Statistically significant differences due to age were observed in the different brain regions for both genders, with the evolution across age described by fitted polynomial regressions. Nevertheless, no significant differences between the genders were observed across all ages. These results suggest that the evolution of entropy in the background brain activity, quantified with permutation entropy algorithms, might be considered an alternative illustration of a ‘nominal’ physiological rhythm.
Entropy 2017, 19(4), 141; Article; doi: 10.3390/e19040141; published 2017-03-25. Authors: Elizabeth Shumbayawonda, Alberto Fernández, Michael Hughes, Daniel Abásolo.
<![CDATA[Entropy, Vol. 19, Pages 137: Impact Location and Quantification on an Aluminum Sandwich Panel Using Principal Component Analysis and Linear Approximation with Maximum Entropy]]>
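For readers unfamiliar with the measure, normalised permutation entropy can be sketched as follows. This is a generic illustration (with a small embedding dimension for brevity), not the study's implementation, which used dimension 5 and lag 1:

```python
import math

def permutation_entropy(x, dim=3, lag=1):
    """Normalised permutation entropy of a 1-D sequence.

    Counts the ordinal patterns of length `dim` (sampled every `lag`
    steps) and returns their Shannon entropy divided by log(dim!),
    so the result lies in [0, 1].
    """
    counts = {}
    for i in range(len(x) - (dim - 1) * lag):
        window = tuple(x[i + j * lag] for j in range(dim))
        # the ordinal pattern is the argsort of the window
        pattern = tuple(sorted(range(dim), key=lambda k: window[k]))
        counts[pattern] = counts.get(pattern, 0) + 1
    total = sum(counts.values())
    h = -sum((c / total) * math.log(c / total) for c in counts.values())
    return h / math.log(math.factorial(dim))
```

A strictly monotone series realises a single ordinal pattern and thus has entropy 0, while white noise spreads mass over all dim! patterns and approaches 1.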
http://www.mdpi.com/1099-4300/19/4/137
To avoid structural failures, it is critically important to detect, locate and quantify impact damage as soon as it occurs. This can be achieved by impact identification methodologies, which continuously monitor the structure to detect, locate and quantify impacts as they occur. This article presents an improved impact identification algorithm that uses principal component analysis (PCA) to extract features from the monitored signals and an algorithm based on linear approximation with maximum entropy to estimate the impacts. The proposed methodology is validated with two experimental applications, an aluminum plate and an aluminum sandwich panel. The results are compared with those of other impact identification algorithms available in the literature, demonstrating that the proposed method outperforms them.
Entropy 2017, 19(4), 137; Article; doi: 10.3390/e19040137; published 2017-03-25. Authors: Viviana Meruane, Pablo Véliz, Enrique López Droguett, Alejandro Ortiz-Bernardin.
<![CDATA[Entropy, Vol. 19, Pages 140: Ionic Liquids Confined in Silica Ionogels: Structural, Thermal, and Dynamical Behaviors]]>
http://www.mdpi.com/1099-4300/19/4/140
Ionogels are porous monoliths providing nanometer-scale confinement of an ionic liquid within an oxide network. Various dynamic parameters and the detailed nature of phase transitions were investigated using neutron scattering, which accesses smaller time and length scales than the techniques used in earlier studies. By investigating the hydrogen mean square displacement (local mobility), qualitative information on diffusion and on different phase transitions was obtained. The results presented herein show similar short-time molecular dynamics between pristine and confined ionic liquids, through residence time and diffusion coefficient values, thus explaining in depth the good ionic conductivity of ionogels.
Entropy 2017, 19(4), 140; Article; doi: 10.3390/e19040140; published 2017-03-24. Authors: Subhankur Mitra, Carole Cerclier, Quentin Berrod, Filippo Ferdeghini, Rodrigo de Oliveira-Silva, Patrick Judeinstein, Jean le Bideau, Jean-Marc Zanotti.
<![CDATA[Entropy, Vol. 19, Pages 139: Tensor Singular Spectrum Decomposition Algorithm Based on Permutation Entropy for Rolling Bearing Fault Diagnosis]]>
http://www.mdpi.com/1099-4300/19/4/139
A mechanical vibration signal mapped into a high-dimensional space tends to exhibit special distribution and movement characteristics, which can further reveal the dynamic behavior of the original time series. As the most natural representation of high-dimensional data, a tensor can preserve the intrinsic structure of the data to the maximum extent. Thus, tensor decomposition algorithms have broad application prospects in signal processing. A high-dimensional tensor can be obtained from a one-dimensional vibration signal by phase space reconstruction, a process called the tensorization of the data. As a new signal decomposition method, the tensor-based singular spectrum algorithm (TSSA) fully combines the advantages of phase space reconstruction and tensor decomposition. However, TSSA has some problems, mainly in estimating the rank of the tensor and selecting the optimal reconstruction tensor. In this paper, an improved TSSA algorithm based on convex optimization and permutation entropy (PE) is proposed. Firstly, aiming to accurately estimate the rank of the tensor decomposition, this paper presents a convex optimization algorithm using non-convex penalty functions based on singular value decomposition (SVD). Then, PE is employed to evaluate the desired tensor and improve the denoising performance. To verify the effectiveness of the proposed algorithm, both numerical simulations and experimental bearing failure data are analyzed.
Entropy 2017, 19(4), 139; Article; doi: 10.3390/e19040139; published 2017-03-23. Authors: Cancan Yi, Yong Lv, Mao Ge, Han Xiao, Xun Yu.
<![CDATA[Entropy, Vol. 19, Pages 136: The Quantum Harmonic Otto Cycle]]>
http://www.mdpi.com/1099-4300/19/4/136
The quantum Otto cycle serves as a bridge between the macroscopic world of heat engines and the quantum regime of thermal devices composed of a single element. We compile recent studies of the quantum Otto cycle with a harmonic oscillator as a working medium. This model has the advantage of being analytically tractable. In addition, an experimental realization has been achieved, employing a single ion in a harmonic trap. The review is embedded in the field of quantum thermodynamics and quantum open systems. The basic principles of the theory are explained by a specific example illuminating the basic definitions of work and heat. The relation between quantum observables and the state of the system is emphasized. The dynamical description of the cycle is based on a completely positive map formulated as a propagator for each stroke of the engine. Explicit solutions for these propagators are described on a vector space of quantum thermodynamical observables. These solutions, which employ different assumptions and techniques, are compared. The tradeoff between power and efficiency is the focal point of finite-time thermodynamics. The dynamical model enables the study of finite-time cycles by limiting the adiabatic and thermalization times. Explicit finite-time solutions are found which are frictionless (meaning that no coherence is generated), also known as shortcuts to adiabaticity. The transition from frictionless to sudden adiabats is characterized by a non-Hermitian degeneracy in the propagator. In addition, the influence of noise on the control is illustrated. These results are used to close the cycles either as engines or as refrigerators. The properties of the limit cycle are described. Methods to optimize the power by controlling the thermalization time are also introduced. At high temperatures, the Novikov–Curzon–Ahlborn efficiency at maximum power is obtained. The sudden limit of the engine, which allows finite power at zero cycle time, is shown. The refrigerator cycle is described within the frictionless limit, with emphasis on the cooling rate when the cold-bath temperature approaches zero.
Entropy 2017, 19(4), 136; Review; doi: 10.3390/e19040136; published 2017-03-23. Authors: Ronnie Kosloff, Yair Rezek.
<![CDATA[Entropy, Vol. 19, Pages 138: Leveraging Receiver Message Side Information in Two-Receiver Broadcast Channels: A General Approach †]]>
http://www.mdpi.com/1099-4300/19/4/138
We consider two-receiver broadcast channels where each receiver may know a priori some of the messages requested by the other receiver as receiver message side information (RMSI). We devise a general approach to leverage RMSI in these channels. To this end, we first propose a pre-coding scheme considering the general message setup where each receiver requests both common and private messages and knows a priori part of the private message requested by the other receiver as RMSI. We then construct the transmission scheme of a two-receiver channel with RMSI by applying the proposed pre-coding scheme to the best transmission scheme for the channel without RMSI. To demonstrate the effectiveness of our approach, we apply our pre-coding scheme to three categories of the two-receiver discrete memoryless broadcast channel: (i) channel without state; (ii) channel with states known causally to the transmitter; and (iii) channel with states known non-causally to the transmitter. We then derive a unified inner bound for all three categories. We show that our inner bound is tight for some new cases in each of the three categories, as well as all cases whose capacity region was known previously.
Entropy 2017, 19(4), 138; Article; doi: 10.3390/e19040138; published 2017-03-23. Authors: Behzad Asadi, Lawrence Ong, Sarah Johnson.
<![CDATA[Entropy, Vol. 19, Pages 119: Thermal Ratchet Effect in Confining Geometries]]>
http://www.mdpi.com/1099-4300/19/4/119
A stochastic model of the Feynman–Smoluchowski ratchet is proposed and solved using a generalization of the Fick–Jacobs theory. The theory fully captures the nonlinear response of the ratchet to the difference in heat-bath temperatures. The ratchet performance is discussed using the mean velocity, the average heat flow between the two heat reservoirs, and the figure of merit, which quantifies the energetic cost of attaining a certain mean velocity. The limits of the theory are tested by comparing its predictions to numerics. We also demonstrate the connection between the ratchet effect emerging in the model and rotations of the probability current, and explain the direction of the mean velocity using a simple discrete analogue of the model.
Entropy 2017, 19(4), 119; Article; doi: 10.3390/e19040119; published 2017-03-23. Authors: Viktor Holubec, Artem Ryabov, Mohammad Yaghoubi, Martin Varga, Ayub Khodaee, M. Foulaadvand, Petr Chvosta.
<![CDATA[Entropy, Vol. 19, Pages 135: Structure and Dynamics of Water at Carbon-Based Interfaces]]>
http://www.mdpi.com/1099-4300/19/3/135
Water structure and dynamics are affected by the presence of a nearby interface. Here, we first review recent molecular dynamics simulation results on the effect of different carbon-based materials, including armchair carbon nanotubes and a variety of graphene sheets (flat and with corrugation) on water structure and dynamics. We discuss calculations of binding energies, hydrogen bond distributions and water diffusion coefficients, and their relation to surface geometries at different thermodynamic conditions. Next, we present new results on the crystallization and dynamics of water in a rigid graphene sieve. In particular, we show that the diffusion of water confined between parallel walls depends on the plate distance in a non-monotonic way and is related to the water structuring, crystallization, re-melting and evaporation for decreasing inter-plate distance. Our results could be relevant in applications where water is in contact with nanostructured carbon materials at ambient or cryogenic temperatures, as in man-made superhydrophobic materials or filtration membranes, or in techniques that take advantage of hydrated graphene interfaces, as in aqueous electron cryomicroscopy for the analysis of proteins adsorbed on graphene.
Entropy 2017, 19(3), 135; Article; doi: 10.3390/e19030135; published 2017-03-21. Authors: Jordi Martí, Carles Calero, Giancarlo Franzese.
<![CDATA[Entropy, Vol. 19, Pages 134: Permutation Entropy: New Ideas and Challenges]]>
http://www.mdpi.com/1099-4300/19/3/134
Over recent years, several new variants of permutation entropy have been introduced and applied to EEG analysis, including a conditional variant and variants using additional metric information or based on entropies other than the Shannon entropy. In some situations, it is not completely clear what kind of information the new measures and their algorithmic implementations provide. We discuss the new developments and illustrate them on EEG data.
Entropy 2017, 19(3), 134; Article; doi: 10.3390/e19030134; published 2017-03-21. Authors: Karsten Keller, Teresa Mangold, Inga Stolz, Jenna Werner.
<![CDATA[Entropy, Vol. 19, Pages 133: Spectral Entropy Parameters during Rapid Ventricular Pacing for Transcatheter Aortic Valve Implantation]]>
http://www.mdpi.com/1099-4300/19/3/133
The time-frequency balanced spectral entropy of the EEG is a monitoring technique measuring the level of hypnosis during general anesthesia. Two components of spectral entropy are calculated: state entropy (SE) and response entropy (RE). Transcatheter aortic valve implantation (TAVI) is a less invasive treatment for patients suffering from symptomatic aortic stenosis with contraindications for open heart surgery. The goal of hemodynamic management during the procedure is to achieve hemodynamic stability with exact blood pressure control and use of rapid ventricular pacing (RVP), which results in severe hypotension. The objective of this study was to examine how the spectral entropy values respond to RVP and other critical events during the TAVI procedure. Twenty-one patients undergoing general anesthesia for TAVI were evaluated. RVP was used twice during the procedure at a rate of 185 ± 9/min, with durations of 16 ± 4 s (range 8–22 s) and 24 ± 6 s (range 18–39 s). The systolic blood pressure during RVP was under 50 ± 5 mmHg. SE declined significantly during RVP, from 28 ± 13 to 23 ± 13 (p &lt; 0.003) and from 29 ± 12 to 24 ± 10 (p &lt; 0.001). The corresponding values for RE were 29 ± 13 vs. 24 ± 13 (p &lt; 0.006) and 30 ± 12 vs. 25 ± 10 (p &lt; 0.001). Both SE and RE returned to pre-RVP values after 1 min. Ultra-short hypotension during RVP changed the spectral entropy parameters; however, these indices reverted rapidly to their pre-RVP values.
Entropy 2017, 19(3), 133; Article; doi: 10.3390/e19030133; published 2017-03-20. Authors: Tadeusz Musialowicz, Antti Valtola, Mikko Hippeläinen, Jari Halonen, Pasi Lahtinen.
<![CDATA[Entropy, Vol. 19, Pages 132: Discrepancies between Conventional Multiscale Entropy and Modified Short-Time Multiscale Entropy of Photoplethysmographic Pulse Signals in Middle- and Old- Aged Individuals with or without Diabetes]]>
http://www.mdpi.com/1099-4300/19/3/132
Multiscale entropy (MSE) of physiological signals may reflect cardiovascular health in diabetes. The classic MSE (cMSE) algorithm requires more than 750 signals for the calculations. The modified short-time MSE (sMSE) may have outcomes inconsistent with cMSE at large time scales and in disease states. Therefore, we compared the cMSE of 1500 (cMSE1500) and 1000 (cMSE1000) consecutive photoplethysmographic (PPG) pulse amplitudes with the sMSE of 500 PPG (sMSE500) pulse amplitudes of bilateral fingertips among middle- to old-aged individuals with or without type 2 diabetes. We discovered that cMSE1500 had the smallest value across scale factors 1–10, followed by cMSE1000 and then sMSE500, in both hands. The cMSE1500, cMSE1000 and sMSE500 did not differ at any scale factor in either hand of persons without diabetes or in the dominant hand of those with diabetes. In contrast, sMSE500 differed at all scales 1–10 in the non-dominant hand of those with diabetes. In conclusion, autonomic dysfunction, prevalent in the non-dominant hand, which had low local physical activity in persons with diabetes, might be imprecisely evaluated by the sMSE; therefore, using a larger number of PPG signals for the cMSE is preferred in such a situation.
Entropy 2017, 19(3), 132; Article; doi: 10.3390/e19030132; published 2017-03-18. Authors: Gen-Min Lin, Bagus Haryadi, Chieh-Ming Yang, Shiao-Chiang Chu, Cheng-Chan Yang, Hsien-Tsai Wu.
<![CDATA[Entropy, Vol. 19, Pages 131: Information Submanifold Based on SPD Matrices and Its Applications to Sensor Networks]]>
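The coarse-graining step that produces the scale factors in MSE, followed by sample entropy at each scale, can be sketched generically as below. This is an illustration of the classic procedure only, not the study's cMSE/sMSE code, and the default parameters are common conventions rather than the paper's settings:

```python
import math
import statistics

def coarse_grain(x, tau):
    """Scale-tau series: non-overlapping block averages of length tau."""
    return [sum(x[i:i + tau]) / tau for i in range(0, len(x) - tau + 1, tau)]

def sample_entropy(x, m=2, r_frac=0.2):
    """Sample entropy -ln(A/B): B counts template pairs of length m,
    A pairs of length m+1, matching within r = r_frac * std."""
    n = len(x)
    r = r_frac * statistics.pstdev(x)

    def matches(length):
        count = 0
        for i in range(n - m):
            for j in range(i + 1, n - m):
                if max(abs(x[i + k] - x[j + k]) for k in range(length)) <= r:
                    count += 1
        return count

    b, a = matches(m), matches(m + 1)
    return float("inf") if a == 0 or b == 0 else -math.log(a / b)

def multiscale_entropy(x, scales=(1, 2, 3), m=2, r_frac=0.2):
    """Sample entropy of the coarse-grained series at each scale factor."""
    return [sample_entropy(coarse_grain(x, tau), m, r_frac) for tau in scales]
```

A perfectly periodic series is fully predictable, so its sample entropy is zero; the sMSE variant discussed above differs in how it handles the shortened coarse-grained series, not in this basic recipe.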
http://www.mdpi.com/1099-4300/19/3/131
In this paper, firstly, the manifold PD(n) consisting of all n×n symmetric positive-definite matrices is introduced based on matrix information geometry; secondly, the geometrical structures of information submanifolds of PD(n) are presented, including the metric, geodesics and geodesic distance; thirdly, the information resolution of sensor networks is presented via three classical measurement models based on information submanifolds; finally, bearing-only tracking by a single sensor is introduced via the Fisher information matrix. The preliminary analysis results indicate that information submanifolds offer a consistent and more comprehensive means to understand and solve sensor network problems of target resolution and tracking that are not easily handled by some conventional analysis methods.
Entropy 2017, 19(3), 131; Article; doi: 10.3390/e19030131; published 2017-03-17. Authors: Hao Xu, Huafei Sun, Aung Win.
<![CDATA[Entropy, Vol. 19, Pages 130: Quantitative EEG Markers of Entropy and Auto Mutual Information in Relation to MMSE Scores of Probable Alzheimer’s Disease Patients]]>
http://www.mdpi.com/1099-4300/19/3/130
Analysis of nonlinear quantitative EEG (qEEG) markers describing the complexity of the signal in relation to the severity of Alzheimer’s disease (AD) was the focal point of this study. In this study, 79 patients diagnosed with probable AD were recruited from the multi-centric Prospective Dementia Database Austria (PRODEM). EEG recordings were made with the subjects seated in an upright position in a resting state with their eyes closed. Linear regression models explaining disease severity, expressed in Mini Mental State Examination (MMSE) scores, were analyzed for the nonlinear qEEG markers of auto mutual information (AMI), Shannon entropy (ShE), Tsallis entropy (TsE), multiscale entropy (MsE), and spectral entropy (SpE), with age, duration of illness, and years of education as co-predictors. Linear regression models with AMI were significant for all electrode sites and clusters, where R² is 0.46 at the electrode site C3; 0.43 at Cz, F3, and the central region; and 0.42 at the left region. MsE also had significant models at C3 with R² &gt; 0.40 at scales τ = 5 and τ = 6. ShE and TsE also had significant models at T7 and F7 with R² &gt; 0.30. Reductions in complexity, calculated by AMI, SpE, and MsE, were observed as the MMSE score decreased.
Entropy 2017, 19(3), 130; Article; doi: 10.3390/e19030130; published 2017-03-17. Authors: Carmina Coronel, Heinrich Garn, Markus Waser, Manfred Deistler, Thomas Benke, Peter Dal-Bianco, Gerhard Ransmayr, Stephan Seiler, Dieter Grossegger, Reinhold Schmidt.
<![CDATA[Entropy, Vol. 19, Pages 129: Distance-Based Lempel–Ziv Complexity for the Analysis of Electroencephalograms in Patients with Alzheimer’s Disease]]>
http://www.mdpi.com/1099-4300/19/3/129
The analysis of electroencephalograms (EEGs) of patients with Alzheimer’s disease (AD) could contribute to the diagnosis of this dementia. In this study, a new non-linear signal processing metric, distance-based Lempel–Ziv complexity (dLZC), is introduced to characterise changes between pairs of electrodes in EEGs in AD. When complexity in each signal arises from different sub-sequences, dLZC would be greater than when similar sub-sequences are present in each signal. EEGs from 11 AD patients and 11 age-matched control subjects were analysed. The dLZC values for AD patients were lower than for control subjects for most electrode pairs, with statistically significant differences (p &lt; 0.01, Student’s t-test) in 17 electrode pairs in the distant left, local posterior left, and interhemispheric regions. Maximum diagnostic accuracies with leave-one-out cross-validation were 77.27% for subject-based classification and 78.25% for epoch-based classification. These findings suggest not only that EEGs from AD patients are less complex than those from controls, but also that the richness of the information contained in pairs of EEGs from patients is lower than in age-matched controls. The analysis of EEGs in AD with dLZC may increase the insight into brain dysfunction, providing complementary information to that obtained with other complexity and synchrony methods.
Entropy 2017, 19(3), 129; Article; doi: 10.3390/e19030129; published 2017-03-17. Authors: Samantha Simons, Daniel Abásolo.
<![CDATA[Entropy, Vol. 19, Pages 127: Fractional Jensen–Shannon Analysis of the Scientific Output of Researchers in Fractional Calculus]]>
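Classical Lempel–Ziv complexity, from which dLZC is derived, counts the phrases in an exhaustive parsing of a symbolised signal. A minimal sketch follows; it is illustrative only, since the distance-based variant introduced in the paper operates on pairs of signals:

```python
def lz_complexity(s):
    """Number of distinct phrases in the LZ76 exhaustive parsing of a
    symbol string: each phrase is the shortest prefix of the remaining
    text that has not yet occurred in the previously seen text."""
    i, c, n = 0, 0, len(s)
    while i < n:
        l = 1
        # extend the phrase while s[i:i+l] already occurs earlier
        # (occurrences may overlap the phrase itself, up to i+l-1)
        while i + l <= n and s[i:i + l] in s[:i + l - 1]:
            l += 1
        c += 1
        i += l
    return c
```

The classic example sequence 0001101001000101 parses into the six phrases 0, 001, 10, 100, 1000, 101. In practice, the raw count is normalised before signals of different lengths are compared.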
http://www.mdpi.com/1099-4300/19/3/127
This paper analyses the citation profiles of researchers in fractional calculus. Different metrics are used to quantify the dissimilarities between the data, namely the Canberra distance, and the classical and the generalized (fractional) Jensen–Shannon divergence. The information is then visualized by means of multidimensional scaling and hierarchical clustering. The mathematical tools and metrics allow for direct comparison and visualization of researchers based on their relative positioning and on patterns displayed in two- or three-dimensional maps.
Entropy 2017, 19(3), 127; Article; doi: 10.3390/e19030127; published 2017-03-17. Authors: José Machado, António Mendes Lopes.
<![CDATA[Entropy, Vol. 19, Pages 128: Pairs Generating as a Consequence of the Fractal Entropy: Theory and Applications]]>
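The classical Jensen–Shannon divergence used as one of the dissimilarity metrics can be sketched as follows (the generalized fractional variant in the paper is a different, parametrised quantity, not reproduced here):

```python
import math

def jensen_shannon(p, q):
    """Classical Jensen-Shannon divergence between two discrete
    distributions, in base 2, so the value is bounded by 1 bit."""
    def kl(a, b):
        # Kullback-Leibler divergence; zero-probability terms contribute 0
        return sum(x * math.log2(x / y) for x, y in zip(a, b) if x > 0)
    m = [(x + y) / 2 for x, y in zip(p, q)]  # the mixture distribution
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)
```

Unlike the raw KL divergence, this quantity is symmetric and stays finite even when the two distributions have disjoint supports, which makes it convenient for comparing citation profiles of different shapes.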
http://www.mdpi.com/1099-4300/19/3/128
In classical approaches, theoretical models are built assuming that the dynamics of the complex system’s structural units occur on continuous and differentiable motion variables. In reality, the dynamics of natural complex systems are much more complicated. These difficulties can be overcome in a complementary approach, using the fractal concept and the corresponding non-differentiable theoretical model, such as the scale relativity theory or the extended scale relativity theory. Using the latter theory, fractal entropy through non-differentiable Lie groups was established and, moreover, the pair-generating mechanisms through fractal entanglement states were explained. Our model has implications for the dynamics of biological structures, in the form of the “chameleon-like” behavior of cholesterol.
Entropy 2017, 19(3), 128; Article; doi: 10.3390/e19030128; published 2017-03-17. Authors: Alexandru Grigorovici, Elena Bacaita, Viorel Paun, Constantin Grecea, Irina Butuc, Maricel Agop, Ovidiu Popa.
<![CDATA[Entropy, Vol. 19, Pages 123: Friction, Free Axes of Rotation and Entropy]]>
http://www.mdpi.com/1099-4300/19/3/123
Friction forces acting on rotators may promote their alignment and therefore eliminate degrees of freedom in their movement. The alignment of rotators by friction force was shown by experiments performed with different spinners, demonstrating how friction generates negentropy in a system of rotators. A gas of rigid rotators influenced by friction force is considered. The orientational negentropy generated by a friction force was estimated with the Sackur–Tetrode equation. The minimal change in total entropy of a system of rotators, corresponding to their eventual alignment, decreases with temperature. The reported effect may be of primary importance for the phase equilibrium and motion of ubiquitous colloidal and granular systems.
Entropy 2017, 19(3), 123; Article; doi: 10.3390/e19030123; published 2017-03-17. Authors: Alexander Kazachkov, Victor Multanen, Viktor Danchuk, Mark Frenkel, Edward Bormashenko.
<![CDATA[Entropy, Vol. 19, Pages 121: Identity Based Generalized Signcryption Scheme in the Standard Model]]>
http://www.mdpi.com/1099-4300/19/3/121
Generalized signcryption (GSC) can adaptively work as an encryption scheme, a signature scheme or a signcryption scheme with only one algorithm. It is therefore well suited to storage-constrained settings. In this paper, motivated by Paterson–Schuldt’s scheme and based on bilinear pairing, we propose an identity-based generalized signcryption (IDGSC) scheme in the standard model. To the best of our knowledge, it is the first such scheme proven secure in the standard model.
Entropy 2017, 19(3), 121; Article; doi: 10.3390/e19030121; published 2017-03-17. Authors: Xiaoqin Shen, Yang Ming, Jie Feng.
<![CDATA[Entropy, Vol. 19, Pages 126: Nonequilibrium Thermodynamics and Scale Invariance]]>
http://www.mdpi.com/1099-4300/19/3/126
A variant of continuous nonequilibrium thermodynamic theory based on the postulate of the scale invariance of the local relation between generalized fluxes and forces is proposed here. This single postulate replaces the assumptions on local equilibrium and on the known relation between thermodynamic fluxes and forces, which are widely used in classical nonequilibrium thermodynamics. It is shown here that such a modification not only makes it possible to deductively obtain the main results of classical linear nonequilibrium thermodynamics, but also provides evidence for a number of statements for the nonlinear case (the maximum entropy production principle, the macroscopic reversibility principle, and generalized reciprocity relations) that are under discussion in the literature.
Entropy 2017, 19(3), 126; Article; doi: 10.3390/e19030126; published 2017-03-16. Authors: Leonid M. Martyushev, Vladimir Celezneff.
<![CDATA[Entropy, Vol. 19, Pages 122: On Hölder Projective Divergences]]>
http://www.mdpi.com/1099-4300/19/3/122
We describe a framework to build distances by measuring the tightness of inequalities and introduce the notion of proper statistical divergences and improper pseudo-divergences. We then consider the Hölder ordinary and reverse inequalities and present two novel classes of Hölder divergences and pseudo-divergences that both encapsulate the special case of the Cauchy–Schwarz divergence. We report closed-form formulas for those statistical dissimilarities when considering distributions belonging to the same exponential family, provided that the natural parameter space is a cone (e.g., multivariate Gaussians) or affine (e.g., categorical distributions). Those new classes of Hölder distances are invariant to rescaling and thus do not require distributions to be normalized. Finally, we show how to compute statistical Hölder centroids with respect to those divergences and carry out center-based clustering toy experiments on a set of Gaussian distributions, demonstrating empirically that symmetrized Hölder divergences outperform the symmetric Cauchy–Schwarz divergence.
Entropy 2017, 19(3), 122; Article; doi: 10.3390/e19030122; published 2017-03-16. Authors: Frank Nielsen, Ke Sun, Stéphane Marchand-Maillet.
<![CDATA[Entropy, Vol. 19, Pages 125: Packer Detection for Multi-Layer Executables Using Entropy Analysis]]>
http://www.mdpi.com/1099-4300/19/3/125
Packing algorithms are broadly used to evade anti-malware systems, and the proportion of packed malware has been growing rapidly. However, few studies have been conducted on detecting various types of packing algorithms in a systematic way. Following this understanding, we elaborate a method to classify the packing algorithms of a given executable into three categories: single-layer packing, re-packing, or multi-layer packing. We convert the entropy values of the executable file loaded into memory into symbolic representations using SAX (Symbolic Aggregate Approximation). Based on experiments with 2196 programs and 19 packing algorithms, we find that the precision (97.7%), accuracy (97.5%) and recall (96.8%) of our method are high, confirming that entropy analysis is applicable to identifying packing algorithms.
Entropy 2017, 19(3), 125; Article; doi: 10.3390/e19030125; published 2017-03-16. Authors: Munkhbayar Bat-Erdene, Taebeom Kim, Hyundo Park, Heejo Lee.
<![CDATA[Entropy, Vol. 19, Pages 124: Witnessing Multipartite Entanglement by Detecting Asymmetry]]>
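The per-window entropy series that would feed a SAX symbolisation step can be sketched generically as follows. The window size is an assumption for illustration, not the paper's parameter:

```python
import math
from collections import Counter

def byte_entropy(block):
    """Shannon entropy (bits per byte) of a byte block; packed or
    encrypted code tends toward the maximum of 8 bits/byte."""
    counts = Counter(block)
    n = len(block)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

def entropy_profile(data, window=256):
    """Entropy of consecutive non-overlapping windows: the raw series
    that a SAX step would discretise into symbols."""
    return [byte_entropy(data[i:i + window])
            for i in range(0, len(data) - window + 1, window)]
```

A region of uniformly distributed bytes scores the full 8 bits/byte, while padding or zeroed sections score near 0, which is why the profile's shape across unpacking layers is informative.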
http://www.mdpi.com/1099-4300/19/3/124
The characterization of quantum coherence in the context of quantum information theory, and its interplay with quantum correlations, is currently the subject of intense study. Coherence in a Hamiltonian eigenbasis yields asymmetry, the ability of a quantum system to break a dynamical symmetry generated by the Hamiltonian. We here propose an experimental strategy to witness multipartite entanglement in many-body systems by evaluating the asymmetry with respect to an additive Hamiltonian. We test our scheme by simulating asymmetry and entanglement detection in a three-qubit Greenberger–Horne–Zeilinger (GHZ) diagonal state.
Entropy 2017, 19(3), 124; Article; doi: 10.3390/e19030124; published 2017-03-16. Authors: Davide Girolami, Benjamin Yadin.
<![CDATA[Entropy, Vol. 19, Pages 120: Variational Principle for Relative Tail Pressure]]>
http://www.mdpi.com/1099-4300/19/3/120
We introduce the relative tail pressure to establish a variational principle for continuous bundle random dynamical systems. We also show that the relative tail pressure is conserved by the principal extension.
Entropy 2017, 19(3), 120; Article; doi: 10.3390/e19030120; published 2017-03-15. Authors: Xianfeng Ma, Ercai Chen.
<![CDATA[Entropy, Vol. 19, Pages 118: Thermoeconomic Optimization of an Irreversible Novikov Plant Model under Different Regimes of Performance]]>
http://www.mdpi.com/1099-4300/19/3/118
The so-called Novikov power plant model has been widely used to represent some actual power plants, such as nuclear electric power generators. In the present work, a thermo-economic study of a Novikov power plant model is presented under three different regimes of performance: maximum power (MP), maximum ecological function (ME) and maximum efficient power (EP). In this study, different heat transfer laws are used: Newton's law of cooling, the Stefan–Boltzmann radiation law, the Dulong–Petit law and another phenomenological heat transfer law. For the thermoeconomic optimization of power plant models, a benefit function defined as the quotient of an objective function and the total economic costs is commonly employed. Usually, the total costs take into account two contributions: a cost related to the investment and another stemming from the fuel consumption. In this work, a new cost associated with the maintenance of the power plant is also considered. With these new total costs, it is shown that under the maximum ecological function regime the plant improves its economic and energetic performance in comparison with the other two regimes. The methodology used in this paper is within the context of finite-time thermodynamics.Entropy2017-03-15193Article10.3390/e190301181181099-43002017-03-15doi: 10.3390/e19030118Juan Pacheco-PaezFernando Angulo-BrownMarco Barranco-Jiménez<![CDATA[Entropy, Vol. 19, Pages 117: Specific Emitter Identification Based on the Natural Measure]]>
http://www.mdpi.com/1099-4300/19/3/117
Specific emitter identification (SEI) techniques are often used in civilian and military spectrum-management operations, and they are also applied to support the security and authentication of wireless communication. In this letter, a new SEI method based on the natural measure of the one-dimensional component of the chaotic system is proposed. We find that the natural measures of the one-dimensional components of higher dimensional systems exist and that they are quite diverse for different systems. Based on this principle, the natural measure is used as an RF fingerprint in this letter. The natural measure can solve the problems caused by a small amount of data and a low sample rate. The Kullback–Leibler divergence is used to quantify the difference between the natural measures obtained from diverse emitters and classify them. The data obtained from real application are exploited to test the validity of the proposed method. Experimental results show that the proposed method is not only easy to operate, but also quite effective, even though the amount of data is small and the sample rate is low.Entropy2017-03-15193Letter10.3390/e190301171171099-43002017-03-15doi: 10.3390/e19030117Yongqiang JiaShengli ZhuLu Gan<![CDATA[Entropy, Vol. 19, Pages 115: A Model of Mechanothermodynamic Entropy in Tribology]]>
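The fingerprint-matching step described in the SEI abstract above can be sketched with a discrete Kullback–Leibler divergence and nearest-fingerprint classification. This is an illustrative sketch under the assumption that each emitter's natural measure is available as a normalised histogram; the emitter names and distributions below are hypothetical.

```python
import math

def kl_divergence(p, q, eps=1e-12):
    """D(P||Q) for two discrete distributions given as equal-length
    histograms; eps guards against empty bins."""
    return sum(pi * math.log((pi + eps) / (qi + eps))
               for pi, qi in zip(p, q) if pi > 0)

def classify(measure, fingerprints):
    """Assign a measured distribution to the known emitter whose stored
    natural-measure fingerprint is closest in KL divergence."""
    return min(fingerprints, key=lambda name: kl_divergence(measure, fingerprints[name]))

fingerprints = {"emitter_A": [0.7, 0.2, 0.1], "emitter_B": [0.1, 0.3, 0.6]}
print(classify([0.65, 0.25, 0.10], fingerprints))  # → emitter_A
```

The smoothing constant eps is a common practical choice when estimating divergences from finite data; with short recordings and low sample rates, histogram bins can easily be empty.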
http://www.mdpi.com/1099-4300/19/3/115
A brief analysis of entropy concepts in continuum mechanics and thermodynamics is presented. The methods of accounting for friction, wear and fatigue processes in the calculation of the thermodynamic entropy are described. It is shown that these and other damage processes in solids are more adequately described by tribo-fatigue entropy. It is established that mechanothermodynamic entropy, calculated as the sum of interacting thermodynamic and tribo-fatigue entropy components, has the most general character. Examples of the application of tribo-fatigue and mechanothermodynamic entropies to the practical analysis of wear and fatigue processes are given.Entropy2017-03-14193Article10.3390/e190301151151099-43002017-03-14doi: 10.3390/e19030115Leonid SosnovskiySergei Sherbakov<![CDATA[Entropy, Vol. 19, Pages 116: Fluctuation-Driven Transport in Biological Nanopores. A 3D Poisson–Nernst–Planck Study]]>
http://www.mdpi.com/1099-4300/19/3/116
Living systems display a variety of situations in which non-equilibrium fluctuations couple to certain protein functions yielding astonishing results. Here we study the bacterial channel OmpF under conditions similar to those met in vivo, where acidic resistance mechanisms are known to yield oscillations in the electric potential across the cell membrane. We use a three-dimensional structure-based theoretical approach to assess the possibility of obtaining fluctuation-driven transport. Our calculations show that remarkably high voltages would be necessary to observe the actual transport of ions against their concentration gradient. The reasons behind this are the mild selectivity of this bacterial pore and the relatively low efficiencies of the oscillating signals characteristic of membrane cells (random telegraph noise and thermal noise).Entropy2017-03-14193Article10.3390/e190301161161099-43002017-03-14doi: 10.3390/e19030116Marcel Aguilella-ArzoMaría Queralt-MartínMaría-Lidón LopezAntonio Alcaraz<![CDATA[Entropy, Vol. 19, Pages 113: Recoverable Random Numbers in an Internet of Things Operating System]]>
http://www.mdpi.com/1099-4300/19/3/113
Over the past decade, several security issues with the Linux Random Number Generator (LRNG) on PCs and Android devices have emerged. The main problem involves the process of entropy harvesting, particularly at boot time. An entropy source in the input pool of the LRNG is not transferred into the non-blocking output pool if the entropy counter of the input pool is less than 192 bits out of 4098 bits. Because the entropy estimation of the LRNG is highly conservative, more than one minute may pass before the transfer starts. Furthermore, the design principle of the estimation algorithm is not only heuristic but also unclear. Recently, Google released an Internet of Things (IoT) operating system called Brillo based on the Linux kernel. We analyze the behavior of the random number generator in Brillo, which inherits that of the LRNG. In the results, we identify two features that enable the recovery of random numbers. With these features, we demonstrate that 700 bytes of random numbers generated at boot time can be recovered with a success probability of 90% using a time complexity of 5.20 × 2^40 trials. Therefore, the entropy of these 700 bytes of random numbers is merely about 43 bits. Since the initial random numbers are supposed to be used for sensitive security parameters, such as stack canaries and key derivation, our observation can be applied to practical attacks against cryptosystems.Entropy2017-03-13193Article10.3390/e190301131131099-43002017-03-13doi: 10.3390/e19030113Taeill YooJu-Sung KangYongjin Yeom<![CDATA[Entropy, Vol. 19, Pages 112: Quantum Probabilities as Behavioral Probabilities]]>
http://www.mdpi.com/1099-4300/19/3/112
We demonstrate that behavioral probabilities of human decision makers share many common features with quantum probabilities. This does not imply that humans are some quantum objects, but just shows that the mathematics of quantum theory is applicable to the description of human decision making. The applicability of quantum rules for describing decision making is connected with the nontrivial process of making decisions in the case of composite prospects under uncertainty. Such a process involves deliberations of a decision maker when making a choice. In addition to the evaluation of the utilities of considered prospects, real decision makers also appreciate their respective attractiveness. Therefore, human choice is not based solely on the utility of prospects, but includes the necessity of resolving the utility-attraction duality. In order to justify that human consciousness really functions similarly to the rules of quantum theory, we develop an approach defining human behavioral probabilities as the probabilities determined by quantum rules. We show that quantum behavioral probabilities of humans do not merely explain qualitatively how human decisions are made, but they predict quantitative values of the behavioral probabilities. Analyzing a large set of empirical data, we find good quantitative agreement between theoretical predictions and observed experimental data.Entropy2017-03-13193Article10.3390/e190301121121099-43002017-03-13doi: 10.3390/e19030112Vyacheslav YukalovDidier Sornette<![CDATA[Entropy, Vol. 19, Pages 111: The Two-Time Interpretation and Macroscopic Time-Reversibility]]>
http://www.mdpi.com/1099-4300/19/3/111
The two-state vector formalism motivates a time-symmetric interpretation of quantum mechanics that entails a resolution of the measurement problem. We revisit a post-selection-assisted collapse model previously suggested by us, claiming that unlike the thermodynamic arrow of time, it can lead to reversible dynamics at the macroscopic level. In addition, the proposed scheme enables us to characterize the classical-quantum boundary. We discuss the limitations of this approach and its broad implications for other areas of physics.Entropy2017-03-12193Article10.3390/e190301111111099-43002017-03-12doi: 10.3390/e19030111Yakir AharonovEliahu CohenTomer Landsberger<![CDATA[Entropy, Vol. 19, Pages 110: The Gibbs Paradox, the Landauer Principle and the Irreversibility Associated with Tilted Observers]]>
http://www.mdpi.com/1099-4300/19/3/110
It is well known that, in the context of General Relativity, some spacetimes, when described by a congruence of comoving observers, may consist of a distribution of a perfect (non-dissipative) fluid, whereas the same spacetime as seen by a "tilted" (Lorentz-boosted) congruence of observers may exhibit the presence of dissipative processes. As we shall see, the appearance of entropy-producing processes is related to the strong dependence of entropy on the specific congruence of observers. This fact is well illustrated by the Gibbs paradox. The appearance of such dissipative processes is necessary, as required by the Landauer principle, in order to erase the different amounts of information stored by comoving observers with respect to tilted ones.Entropy2017-03-11193Article10.3390/e190301101101099-43002017-03-11doi: 10.3390/e19030110Luis Herrera<![CDATA[Entropy, Vol. 19, Pages 108: Entropy Generation Analysis and Performance Evaluation of Turbulent Forced Convective Heat Transfer to Nanofluids]]>
http://www.mdpi.com/1099-4300/19/3/108
The entropy generation of fully turbulent convective heat transfer to nanofluids in a circular tube is investigated numerically using the Reynolds-Averaged Navier–Stokes (RANS) model. Nanofluids with particle concentrations of 0%, 1%, 2%, 4% and 6% are treated as single phases with effective properties. A uniform heat flux is imposed at the tube wall. To confirm the validity of the numerical approach, the results have been compared with empirical correlations and analytical formulae. The self-similarity profiles of local entropy generation are also studied, in which the peak values of entropy generation by direct dissipation, turbulent dissipation, mean temperature gradients and fluctuating temperature gradients are observed for different Reynolds numbers as well as different particle concentrations. In addition, the effects of the Reynolds number, the volume fraction of nanoparticles and the heat flux on total entropy generation and the Bejan number are discussed. In the results, intersection points of the total entropy generation for water and the four nanofluids are observed, where the entropy generation decreases before the intersection and increases after it as the particle concentration increases. Finally, by defining Ep, which combines the first and second laws of thermodynamics and is intended to evaluate the real performance of heat transfer processes, the optimal Reynolds number Re_op corresponding to the best performance and the advisable Reynolds number Re_ad providing the appropriate Reynolds number range for nanofluids in convective heat transfer can be determined.Entropy2017-03-11193Article10.3390/e190301081081099-43002017-03-11doi: 10.3390/e19030108Yu JiHao-Chun ZhangXie YangLei Shi<![CDATA[Entropy, Vol. 19, Pages 109: Formulation of Exergy Cost Analysis to Graph-Based Thermal Network Models]]>
http://www.mdpi.com/1099-4300/19/3/109
Information from exergy cost analysis can be effectively used in the design and management of modern district heating networks (DHNs), since it allows the irreversibilities in energy conversion and distribution to be properly accounted for. Nevertheless, this requires the development of suitable graph-based approaches that can effectively consider the network topology and the variations of the physical properties of the heating fluid on a time-dependent basis. In this work, a formulation of exergetic costs suitable for large graph-based networks is proposed, which is consistent with the principles of exergetic costing. In particular, the approach is more compact than the straightforward approaches of exergetic cost formulation available in the literature, especially when applied to fluid networks. Moreover, the proposed formulation specifically considers transient operating conditions, which is a crucial feature and a necessity for the analysis of future DHNs. Results show that transient effects of the thermodynamic behavior are not negligible for exergy cost analysis, and this work offers a coherent approach to quantify them.Entropy2017-03-10193Article10.3390/e190301091091099-43002017-03-10doi: 10.3390/e19030109Stefano CossElisa GuelpaEtienne LetournelOlivier Le-CorreVittorio Verda<![CDATA[Entropy, Vol. 19, Pages 107: Physical Intelligence and Thermodynamic Computing]]>
http://www.mdpi.com/1099-4300/19/3/107
This paper proposes that intelligent processes can be completely explained by thermodynamic principles. They can equally be described by information-theoretic principles that, from the standpoint of the required optimizations, are functionally equivalent. The underlying theory arises from two axioms regarding distinguishability and causality. Their consequence is a theory of computation that applies to the only two kinds of physical processes possible: those that reconstruct the past and those that control the future. Dissipative physical processes fall into the first class, whereas intelligent ones comprise the second. The first kind of process is exothermic and the latter is endothermic. Similarly, the first process dumps entropy and energy to its environment, whereas the second reduces entropy while requiring energy to operate. It is shown that high intelligence efficiency and high energy efficiency are synonymous. The theory suggests the usefulness of developing a new computing paradigm called Thermodynamic Computing to engineer intelligent processes. The described engineering formalism for the design of thermodynamic computers is a hybrid combination of information theory and thermodynamics. Elements of the engineering formalism are introduced in the reverse-engineering of a cortical neuron. The cortical neuron provides perhaps the simplest and most insightful example possible of a thermodynamic computer. It can be seen as a basic building block for constructing more intelligent thermodynamic circuits.Entropy2017-03-09193Article10.3390/e190301071071099-43002017-03-09doi: 10.3390/e19030107Robert Fry<![CDATA[Entropy, Vol. 19, Pages 106: On Quantum Collapse as a Basis for the Second Law of Thermodynamics]]>
http://www.mdpi.com/1099-4300/19/3/106
It was first suggested by David Z. Albert that the existence of a real, physical non-unitary process (i.e., “collapse”) at the quantum level would yield a complete explanation for the Second Law of Thermodynamics (i.e., the increase in entropy over time). The contribution of such a process would be to provide a physical basis for the ontological indeterminacy needed to derive the irreversible Second Law against a backdrop of otherwise reversible, deterministic physical laws. An alternative understanding of the source of this possible quantum “collapse” or non-unitarity is presented herein, in terms of the Transactional Interpretation (TI). The present model provides a specific physical justification for Boltzmann’s often-criticized assumption of molecular randomness (Stosszahlansatz), thereby changing its status from an ad hoc postulate to a theoretically grounded result, without requiring any change to the basic quantum theory. In addition, it is argued that TI provides an elegant way of reconciling, via indeterministic collapse, the time-reversible Liouville evolution with the time-irreversible evolution inherent in so-called “master equations” that specify the changes in occupation of the various possible states in terms of the transition rates between them. The present model is contrasted with the Ghirardi–Rimini–Weber (GRW) “spontaneous collapse” theory previously suggested for this purpose by Albert.Entropy2017-03-09193Article10.3390/e190301061061099-43002017-03-09doi: 10.3390/e19030106Ruth Kastner<![CDATA[Entropy, Vol. 19, Pages 105: Brownian Dynamics Computational Model of Protein Diffusion in Crowded Media with Dextran Macromolecules as Obstacles]]>
http://www.mdpi.com/1099-4300/19/3/105
The high concentration of macromolecules (i.e., macromolecular crowding) in cellular environments leads to large quantitative effects on the dynamic and equilibrium biological properties. These effects have been experimentally studied using inert macromolecules to mimic a realistic cellular medium. In this paper, two different experimental in vitro systems of diffusing proteins which use dextran macromolecules as obstacles are computationally analyzed. A new model for dextran macromolecules based on effective radii accounting for macromolecular compression induced by crowding is proposed. The obtained results for the diffusion coefficient and the anomalous diffusion exponent exhibit good qualitative and generally good quantitative agreement with experiments. Volume fraction and hydrodynamic interactions are found to be crucial to describe the diffusion coefficient decrease in crowded media. However, no significant influence of the hydrodynamic interactions in the anomalous diffusion exponent is found.Entropy2017-03-09193Article10.3390/e190301051051099-43002017-03-09doi: 10.3390/e19030105Pablo BlancoMireia ViaJosep GarcésSergio MadurgaFrancesc Mas<![CDATA[Entropy, Vol. 19, Pages 101: “Over-Learning” Phenomenon of Wavelet Neural Networks in Remote Sensing Image Classifications with Different Entropy Error Functions]]>
http://www.mdpi.com/1099-4300/19/3/101
Artificial neural networks are widely applied to prediction, function simulation, and data classification. Among these applications, the wavelet neural network is widely used in image classification problems due to its high approximation capability, fault tolerance, learning capacity, ability to effectively overcome local minimization issues, and so on. The error function of a network is critical in determining the convergence, stability, and classification accuracy of a neural network; its selection directly determines the network's performance. Different error functions correspond to different minimum error values on the training samples. As the network errors decrease, the accuracy of the image classification increases. However, if the image classification accuracy is difficult to improve, or even decreases as the errors decrease, this indicates that the network exhibits an “over-learning” phenomenon, which is closely related to the selection of the error function. With regard to remote sensing data, no studies have yet been reported on the “over-learning” phenomenon or on its relationship with error functions. This study takes SAR, hyper-spectral, high-resolution, and multi-spectral images as data sources in order to comprehensively and systematically analyze the possibility of an “over-learning” phenomenon in remote sensing images from the aspects of image characteristics and the neural network. It then discusses the impact of three typical entropy error functions (NB, CE, and SH) on the “over-learning” phenomenon of a network. The experimental results show that the “over-learning” phenomenon may occur only when there is strong separability between the ground features, low image complexity, a small image size, and a large number of hidden nodes. 
In that case, the SH entropy error function shows good resistance to “over-learning”. However, for remote sensing image classification, the “over-learning” phenomenon is not easily caused in most cases, due to the complexity of the image itself and the diversity of the ground features; in such cases, networks with the NB and CE entropy error functions mainly show good stability. Therefore, blindly selecting an SH entropy error function with high “over-learning” resistance for the wavelet neural network classification of remote sensing images will only decrease the classification accuracy. It is therefore recommended to use an NB or CE entropy error function with a stable learning effect.Entropy2017-03-08193Article10.3390/e190301011011099-43002017-03-08doi: 10.3390/e19030101Dongmei SongYajie ZhangXinjian ShanJianyong CuiHuisheng Wu<![CDATA[Entropy, Vol. 19, Pages 104: Complexity and Vulnerability Analysis of the C. Elegans Gap Junction Connectome]]>
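Of the three entropy error functions compared in the abstract above, only the cross-entropy (CE) form is standard enough to sketch here; the NB and SH variants are specific to the paper and are not reproduced. A minimal Python sketch of the binary cross-entropy error over a set of training samples:

```python
import math

def cross_entropy_error(targets, outputs, eps=1e-12):
    """Cross-entropy (CE) error for binary targets t and network outputs y
    in (0, 1); eps guards against log(0). Illustrative sketch only."""
    return -sum(t * math.log(y + eps) + (1 - t) * math.log(1 - y + eps)
                for t, y in zip(targets, outputs))

# Confident correct predictions give a lower error than hesitant ones.
print(cross_entropy_error([1, 0], [0.9, 0.1]))  # small
print(cross_entropy_error([1, 0], [0.6, 0.4]))  # larger
```

Because CE penalises confident wrong outputs very heavily, driving this error toward zero on highly separable training data is exactly the situation where the abstract warns that “over-learning” can set in.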
http://www.mdpi.com/1099-4300/19/3/104
We apply a network complexity measure to the gap junction network of the somatic nervous system of C. elegans and find that it possesses a much higher complexity than we might expect from its degree distribution alone. This “excess” complexity is seen to be caused by a relatively small set of connections involving command interneurons. We describe a method which progressively deletes these “complexity-causing” connections, and find that when these are eliminated, the network becomes significantly less complex than a random network. Furthermore, this result implicates the previously-identified set of neurons from the synaptic network’s “rich club” as the structural components encoding the network’s excess complexity. This study and our method thus support a view of the gap junction Connectome as consisting of a rather low-complexity network component whose symmetry is broken by the unique connectivities of singularly important rich club neurons, sharply increasing the complexity of the network.Entropy2017-03-08193Article10.3390/e190301041041099-43002017-03-08doi: 10.3390/e19030104James Kunert-GrafNikita SakhanenkoDavid Galas<![CDATA[Entropy, Vol. 19, Pages 103: Analysis of the Temporal Structure Evolution of Physical Systems with the Self-Organising Tree Algorithm (SOTA): Application for Validating Neural Network Systems on Adaptive Optics Data before On-Sky Implementation]]>
http://www.mdpi.com/1099-4300/19/3/103
Adaptive optics reconstructors are needed to remove the effects of atmospheric distortion in the optical systems of large telescopes. The use of reconstructors based on neural networks has proved successful in recent times, and some of their properties require a specific characterization. A procedure, based on time series clustering algorithms, is presented to characterize the relationship between the temporal structure of inputs and outputs by analyzing the data provided by the system. This procedure is used to compare the performance of a reconstructor based on Artificial Neural Networks with one that shows promising results but is still in development, in order to corroborate its suitability prior to its implementation in real applications. This procedure could also be applied to other physical systems that evolve in time.Entropy2017-03-07193Article10.3390/e190301031031099-43002017-03-07doi: 10.3390/e19030103Sergio Suárez GómezJesús Santos RodríguezFrancisco Iglesias RodríguezFrancisco de Cos Juez<![CDATA[Entropy, Vol. 19, Pages 102: Emergence of Distinct Spatial Patterns in Cellular Automata with Inertia: A Phase Transition-Like Behavior]]>
http://www.mdpi.com/1099-4300/19/3/102
We propose a Cellular Automata (CA) model in which three ubiquitous and relevant processes in nature are present, namely, spatial competition, distinction between dynamically stronger and weaker agents, and the existence of an inner resistance to changes in the actual state S_n (= −1, 0, +1) of each CA lattice cell n (which we call inertia). Considering ensembles of initial lattices, we study the average properties of the CA final stationary configuration structures resulting from the system's time evolution. Taking the inertia as a (proper) control parameter, we identify qualitative changes in the CA spatial patterns resembling usual phase transitions. Interestingly, some of the observed features may be associated with continuous transitions (critical phenomena). However, certain quantities seem to present jumps, typical of discontinuous transitions. We argue that these apparently contradictory findings can be attributed to the discrete character of the inertia parameter. Throughout the work, we also briefly discuss a few potential applications of the present CA formulation.Entropy2017-03-07193Article10.3390/e190301021021099-43002017-03-07doi: 10.3390/e19030102Klaus KramerMarlus KoehlerCarlos FioreMarcos da Luz<![CDATA[Entropy, Vol. 19, Pages 100: Normalized Unconditional ϵ-Security of Private-Key Encryption]]>
http://www.mdpi.com/1099-4300/19/3/100
In this paper we introduce two normalized versions of non-perfect security for private-key encryption: one version in the framework of Shannon entropy, another version in the framework of Kolmogorov complexity. We prove the lower bound on either key entropy or key size for these models and study the relations between these normalized security notions.Entropy2017-03-07193Article10.3390/e190301001001099-43002017-03-07doi: 10.3390/e19030100Lvqing BiSongsong DaiBo Hu<![CDATA[Entropy, Vol. 19, Pages 99: Tunable-Q Wavelet Transform Based Multivariate Sub-Band Fuzzy Entropy with Application to Focal EEG Signal Analysis]]>
http://www.mdpi.com/1099-4300/19/3/99
This paper analyses the complexity of multivariate electroencephalogram (EEG) signals at different frequency scales for the analysis and classification of focal and non-focal EEG signals. The proposed multivariate sub-band entropy measure is built on the tunable-Q wavelet transform (TQWT). In the field of multivariate entropy analysis, recent studies have analysed biomedical signals with a multi-level filtering approach, which has become a useful tool for measuring the inherent complexity of such signals. However, these methods may not be well suited for quantifying the complexity of the individual multivariate sub-bands of the analysed signal. In the present study, we address this difficulty by employing TQWT to analyse the sub-band signals of the analysed multivariate signal. It should be noted that a higher value of the Q factor is suitable for analysing signals of an oscillatory nature, whereas a lower value is suitable for signals with non-oscillatory transients. Moreover, with an increased number of sub-bands and a higher value of the Q factor, a reasonably good resolution can be achieved simultaneously in the high and low frequency regions of the considered signals. Finally, we apply multivariate fuzzy entropy (mvFE) to the multivariate sub-band signals obtained from the analysed signal. The proposed Q-based multivariate sub-band entropy has been studied on the publicly available bivariate Bern–Barcelona focal and non-focal EEG signal database to investigate the statistical significance of the proposed features in different time-segmented signals. Finally, the features are fed to random forest and least squares support vector machine (LS-SVM) classifiers to select the best classifier. Our method achieves the highest classification accuracy of 84.67% in classifying focal and non-focal EEG signals with the LS-SVM classifier. 
The proposed multivariate sub-band fuzzy entropy can also be applied to measure complexity of other multivariate biomedical signals.Entropy2017-03-03193Article10.3390/e19030099991099-43002017-03-03doi: 10.3390/e19030099Abhijit BhattacharyyaRam PachoriU. Acharya<![CDATA[Entropy, Vol. 19, Pages 98: Quantum Theory from Rules on Information Acquisition]]>
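The fuzzy entropy used in the EEG abstract above has a multivariate form (mvFE); as a hedged illustration, the following Python sketch implements only the standard univariate fuzzy entropy with an exponential membership function, applied to two synthetic series (a periodic one and a chaotic logistic-map one, both hypothetical stand-ins for EEG sub-bands):

```python
import math

def _phi(x, m, r):
    # Mean pairwise similarity of baseline-removed m-length templates,
    # using the fuzzy membership exp(-(d/r)^2) of the Chebyshev distance.
    n = len(x) - m
    templates = []
    for i in range(n):
        seg = x[i:i + m]
        mu = sum(seg) / m
        templates.append([v - mu for v in seg])
    total, count = 0.0, 0
    for i in range(n):
        for j in range(n):
            if i != j:
                d = max(abs(a - b) for a, b in zip(templates[i], templates[j]))
                total += math.exp(-(d / r) ** 2)
                count += 1
    return total / count

def fuzzy_entropy(x, m=2, r=0.2):
    """Univariate fuzzy entropy sketch (not the paper's mvFE): lower
    values indicate a more regular, self-similar signal."""
    return math.log(_phi(x, m, r)) - math.log(_phi(x, m + 1, r))

periodic = [0.0, 1.0] * 20          # perfectly regular series
x, chaotic = 0.3, []
for _ in range(40):                 # chaotic logistic map, x -> 4x(1-x)
    x = 4.0 * x * (1.0 - x)
    chaotic.append(x)
print(fuzzy_entropy(periodic), fuzzy_entropy(chaotic))
```

The irregular chaotic series yields the larger entropy, mirroring how mvFE separates the more regular focal sub-bands from non-focal ones in the paper.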
http://www.mdpi.com/1099-4300/19/3/98
We summarize a recent reconstruction of the quantum theory of qubits from rules constraining an observer’s acquisition of information about physical systems. This review is accessible and fairly self-contained, focusing on the main ideas and results and not the technical details. The reconstruction offers an informational explanation for the architecture of the theory and speciﬁcally for its correlation structure. In particular, it explains entanglement, monogamy and non-locality compellingly from limited accessible information and complementarity. As a by-product, it also unravels new ‘conserved informational charges’ from complementarity relations that characterize the unitary group and the set of pure states.Entropy2017-03-03193Review10.3390/e19030098981099-43002017-03-03doi: 10.3390/e19030098Philipp Höhn<![CDATA[Entropy, Vol. 19, Pages 97: Taxis of Artificial Swimmers in a Spatio-Temporally Modulated Activation Medium]]>
http://www.mdpi.com/1099-4300/19/3/97
Contrary to microbial taxis, where a tactic response to external stimuli is controlled by complex chemical pathways acting like sensor-actuator loops, taxis of artificial microswimmers is a purely stochastic effect associated with a non-uniform activation of the particles’ self-propulsion. We study the tactic response of such swimmers in a spatio-temporally modulated activating medium by means of both numerical and analytical techniques. In the opposite limits of very fast and very slow rotational particle dynamics, we obtain analytic approximations that closely reproduce the numerical description. A swimmer drifts on average either parallel or anti-parallel to the propagation direction of the activating pulses, depending on their speed and width. The drift in line with the pulses is solely determined by the finite persistence length of the active Brownian motion performed by the swimmer, whereas the drift in the opposite direction results from the combination of the ballistic and diffusive properties of the swimmer’s dynamics.Entropy2017-03-03193Article10.3390/e19030097971099-43002017-03-03doi: 10.3390/e19030097Alexander GeiselerPeter HänggiFabio Marchesoni<![CDATA[Entropy, Vol. 19, Pages 96: Effect of a Magnetic Quadrupole Field on Entropy Generation in Thermomagnetic Convection of Paramagnetic Fluid with and without a Gravitational Field]]>
http://www.mdpi.com/1099-4300/19/3/96
Entropy generation for a paramagnetic fluid in a square enclosure with thermomagnetic convection is numerically investigated under the influence of a magnetic quadrupole field. The magnetic field is calculated using the scalar magnetic potential approach. The finite-volume method is applied to solve the coupled equations for flow, energy, and entropy generation. Simulations are conducted to obtain streamlines, isotherms, Nusselt numbers, entropy generation, and the Bejan number for various magnetic force numbers (1 ≤ γ ≤ 100) and Rayleigh numbers (10^4 ≤ Ra ≤ 10^6). In the absence of gravity, the total entropy generation increases with increasing magnetic field number, but the average Bejan number decreases. In the gravitational field, the total entropy generation is insensitive to changes in the magnetic force for low Rayleigh numbers, while it changes significantly for high Rayleigh numbers. As the magnetic field strengthens, the share of viscous dissipation in the energy losses keeps growing.Entropy2017-03-03193Article10.3390/e19030096961099-43002017-03-03doi: 10.3390/e19030096Er ShiXiaoqin SunYecong HeChangwei Jiang<![CDATA[Entropy, Vol. 19, Pages 95: On the Complexity Reduction of Coding WSS Vector Processes by Using a Sequence of Block Circulant Matrices]]>
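The Bejan number reported in the thermomagnetic-convection abstract above is, by its usual definition, the ratio of heat-transfer entropy generation to total entropy generation. A one-function Python sketch (the sample values are hypothetical):

```python
def bejan_number(s_thermal, s_viscous):
    """Bejan number Be = S_gen,thermal / (S_gen,thermal + S_gen,viscous).
    Be -> 1 when heat-transfer irreversibility dominates, Be -> 0 when
    viscous (friction) irreversibility dominates."""
    return s_thermal / (s_thermal + s_viscous)

print(bejan_number(3.0, 1.0))  # → 0.75
```

A decreasing average Bejan number with increasing magnetic field number, as reported in the abstract, therefore means the viscous share of the losses is growing relative to the thermal share.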
http://www.mdpi.com/1099-4300/19/3/95
In the present paper, we obtain a result on the rate-distortion function (RDF) of wide sense stationary (WSS) vector processes that allows us to reduce the complexity of coding those processes. To achieve this result, we propose a sequence of block circulant matrices. In addition, we use the proposed sequence to reduce the complexity of filtering WSS vector processes.Entropy2017-03-02193Article10.3390/e19030095951099-43002017-03-02doi: 10.3390/e19030095Jesús Gutiérrez-GutiérrezMarta Zárraga-RodríguezXabier InsaustiBjørn Hogstad<![CDATA[Entropy, Vol. 19, Pages 94: Numerical Study of the Magnetic Field Effects on the Heat Transfer and Entropy Generation Aspects of a Power Law Fluid over an Axisymmetric Stretching Plate Structure]]>
http://www.mdpi.com/1099-4300/19/3/94
A numerical investigation of the effects of magnetic field strength, thermal radiation, Joule heating, and viscous heating on the forced convective flow of a non-Newtonian, incompressible power-law fluid over an axisymmetric stretching sheet with a variable-temperature wall is carried out. The power-law shear-thinning viscosity–shear-rate model for the anisotropic solutions and the Rosseland approximation for thermal radiation through a highly absorbing medium are considered. Temperature-dependent heat sources, Joule heating, and viscous heating are included as source terms in the energy balance. The non-dimensional boundary layer equations are solved numerically in terms of a similarity variable. A parameter study of the Nusselt number and the viscous and thermal components of entropy generation in the fluid is performed as a function of the thermal radiation parameter (0 to 2), Brinkman number (0 to 10), Prandtl number (0 to 10), Hartmann number (0 to 1), power-law index (0 to 1), and heat source coefficient (0 to 0.1).Entropy2017-03-01193Article10.3390/e19030094941099-43002017-03-01doi: 10.3390/e19030094Payam HooshmandHamed GatabiNavid BagheriIsma’il PirzadehAshkan HesabiMohammad Abdollahzadeh JamalabadiMajid Oveisi<![CDATA[Entropy, Vol. 19, Pages 91: On the Entropy of Deformed Phase Space Black Hole and the Cosmological Constant]]>
http://www.mdpi.com/1099-4300/19/3/91
In this paper we study the effects of noncommutative phase space deformations on the Schwarzschild black hole. This idea has been previously studied in Friedmann–Robertson–Walker (FRW) cosmology, where this “noncommutativity” provides a simple mechanism that can explain the origin of the cosmological constant. In this paper, we obtain the same relationship between the cosmological constant and the deformation parameter that appears in deformed phase space cosmology, but in the context of the deformed phase space black holes. This was achieved by comparing the entropy of the deformed Schwarzschild black hole with the entropy of the Schwarzschild–de Sitter black hole.Entropy2017-02-28193Article10.3390/e19030091911099-43002017-02-28doi: 10.3390/e19030091Andrés Crespo-HernándezEri Mena-BarbozaMiguel Sabido<![CDATA[Entropy, Vol. 19, Pages 92: Use of Accumulated Entropies for Automated Detection of Congestive Heart Failure in Flexible Analytic Wavelet Transform Framework Based on Short-Term HRV Signals]]>
http://www.mdpi.com/1099-4300/19/3/92
In the present work, an automated method to diagnose Congestive Heart Failure (CHF) using Heart Rate Variability (HRV) signals is proposed. This method is based on Flexible Analytic Wavelet Transform (FAWT), which decomposes the HRV signals into different sub-band signals. Further, Accumulated Fuzzy Entropy (AFEnt) and Accumulated Permutation Entropy (APEnt) are computed over cumulative sums of these sub-band signals. This provides complexity analysis using fuzzy and permutation entropies at different frequency scales. We have extracted 20 features from these signals obtained at different frequency scales of HRV signals. The Bhattacharyya ranking method is used to rank the extracted features from the HRV signals of three different lengths (500, 1000 and 2000 samples). These ranked features are fed to the Least Squares Support Vector Machine (LS-SVM) classifier. Our proposed system has obtained a sensitivity of 98.07%, specificity of 98.33% and accuracy of 98.21% for the 500-sample length of HRV signals. Our system yielded a sensitivity of 97.95%, specificity of 98.07% and accuracy of 98.01% for HRV signals of a length of 1000 samples and a sensitivity of 97.76%, specificity of 97.67% and accuracy of 97.71% for signals corresponding to the 2000-sample length of HRV signals. Our automated system can aid clinicians in the accurate detection of CHF using HRV signals. It can be installed in hospitals, polyclinics and remote villages where there is no access to cardiologists.Entropy2017-02-27193Article10.3390/e19030092921099-43002017-02-27doi: 10.3390/e19030092Mohit KumarRam PachoriU. Acharya<![CDATA[Entropy, Vol. 19, Pages 93: An Entropy-Assisted Shielding Function in DDES Formulation for the SST Turbulence Model]]>
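For readers unfamiliar with the permutation entropy used in the abstract above, the following is a minimal sketch of the standard Bandt–Pompe estimator (a generic implementation, not the authors' accumulated variant computed over sub-band cumulative sums):

```python
import math
from collections import Counter

def permutation_entropy(x, order=3, delay=1):
    """Normalized permutation entropy (Bandt-Pompe) of a 1-D sequence."""
    patterns = Counter()
    for i in range(len(x) - (order - 1) * delay):
        window = tuple(x[i + j * delay] for j in range(order))
        # ordinal pattern: the argsort of the window values
        patterns[tuple(sorted(range(order), key=window.__getitem__))] += 1
    total = sum(patterns.values())
    h = -sum((c / total) * math.log2(c / total) for c in patterns.values())
    return h / math.log2(math.factorial(order))  # scale to [0, 1]

# a monotone series has a single ordinal pattern, hence zero entropy
pe_trend = permutation_entropy(list(range(64)))
```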
http://www.mdpi.com/1099-4300/19/3/93
The intent of shielding functions in delayed detached-eddy simulation (DDES) methods is to preserve the wall boundary layers in Reynolds-averaged Navier–Stokes (RANS) mode, avoiding possible modeled stress depletion (MSD) or even unphysical separation due to grid refinement. An entropy function fs is introduced to construct a DDES formulation for the k-ω shear stress transport (SST) model, whose performance is extensively examined on a range of attached and separated flows (flat-plate flow, circular cylinder flow, and supersonic cavity-ramp flow). Two further forms of shielding functions are included for comparison: one uses the blending function F2 of SST, the other adopts the recalibrated shielding function fd_cor of the DDES version based on the Spalart-Allmaras (SA) model. In general, none of the shielding functions impairs the vortices in fully separated flows. However, for flows that include an attached boundary layer, both F2 and the recalibrated fd_cor are found to be too conservative to resolve the unsteady flow content. In contrast, fs is built on the theory of energy dissipation and is independent of any particular turbulence model, and it properly balances the need to preserve the RANS-modeled regions in wall boundary layers with the generation of unsteady turbulent structures in detached areas.Entropy2017-02-27193Article10.3390/e19030093931099-43002017-02-27doi: 10.3390/e19030093Ling ZhouRui ZhaoXiao-Pan Shi<![CDATA[Entropy, Vol. 19, Pages 90: A LiBr-H2O Absorption Refrigerator Incorporating a Thermally Activated Solution Pumping Mechanism]]>
http://www.mdpi.com/1099-4300/19/3/90
This paper provides an illustrated description of a proposed LiBr-H2O vapour absorption refrigerator that uses a thermally activated solution pumping mechanism, which combines controlled variations in generator vapour pressure with the changes these produce in the static-head pressure difference to circulate the absorbent solution between the generator and absorber vessels. The proposed system differs from, and is potentially more efficient than, a previously proposed bubble-pump system, and it avoids the need for the electrically powered circulation pump found in most conventional LiBr absorption refrigerators. The paper goes on to provide a sample set of calculations showing that the coefficient of performance values of the proposed cycle are similar to those of conventional cycles. The theoretical results compare favourably with some preliminary experimental results, which are also presented for the first time in this paper. The paper ends by proposing an outline design for an innovative steam valve, a key component needed to control the solution pumping mechanism.Entropy2017-02-26193Article10.3390/e19030090901099-43002017-02-26doi: 10.3390/e19030090Ian Eames<![CDATA[Entropy, Vol. 19, Pages 89: Optimization of Alpha-Beta Log-Det Divergences and their Application in the Spatial Filtering of Two Class Motor Imagery Movements]]>
http://www.mdpi.com/1099-4300/19/3/89
The Alpha-Beta Log-Det divergences for positive definite matrices are flexible divergences, parameterized by two real constants, that specialize to several relevant classical cases such as the squared Riemannian metric, Stein's loss, and the S-divergence. A novel classification criterion based on these divergences is optimized to address the problem of classifying motor imagery movements. This paper is divided into three main sections to address this problem: (1) First, it is proven that a suitable scaling of the class-conditional covariance matrices can be used to link the Common Spatial Pattern (CSP) solution, with a predefined number of spatial filters for each class, to its representation as a divergence optimization problem by making their different filter selection policies compatible; (2) a closed-form formula for the gradient of the Alpha-Beta Log-Det divergences is derived that allows optimization to be performed and makes the divergences easy to use in many practical applications; (3) finally, in line with the work of Samek et al. (2014), which proposed robust spatial filtering of motor imagery movements based on the beta-divergence, the optimization of the Alpha-Beta Log-Det divergences is applied to this problem. The resulting subspace algorithm provides a unified framework for testing the performance and robustness of the several divergences in different scenarios.Entropy2017-02-25193Article10.3390/e19030089891099-43002017-02-25doi: 10.3390/e19030089Deepa ThiyamSergio CrucesJavier OliasAndrzej Cichocki<![CDATA[Entropy, Vol. 19, Pages 88: Systematic Analysis of the Non-Extensive Statistical Approach in High Energy Particle Collisions—Experiment vs. Theory]]>
http://www.mdpi.com/1099-4300/19/3/88
The analysis of high-energy particle collisions is an excellent testbed for the non-extensive statistical approach. In these reactions we are far from the thermodynamical limit. In small colliding systems, such as electron-positron or nuclear collisions, the number of particles is several orders of magnitude smaller than the Avogadro number; therefore, finite-size and fluctuation effects strongly influence the final-state one-particle energy distributions. Because of this, the description of the identified hadron spectra with the Boltzmann–Gibbs thermodynamical approach is insufficient. These spectra can instead be described very well with Tsallis–Pareto distributions derived from non-extensive thermodynamics. Using the q-entropy formula, we interpret the microscopic physics in terms of the Tsallis q and T parameters. In this paper we review these parameters, analyzing identified hadron spectra from recent years over a wide center-of-mass energy range. We demonstrate that the fitted Tsallis parameters depend on the center-of-mass energy and on the particle species (mass). Our findings are described well by a QCD (Quantum Chromodynamics) inspired parton evolution ansatz. Based on this comprehensive study, apart from the evolution, both the mesonic and baryonic components were found to be non-extensive ( q &gt; 1 ), alongside the mass-ordered hierarchy observed in the parameter T. We also study and compare in detail the theory-obtained parameters for the PYTHIA8 Monte Carlo generator, perturbative QCD, and quark coalescence models.Entropy2017-02-24193Article10.3390/e19030088881099-43002017-02-24doi: 10.3390/e19030088Gábor BíróGergely BarnaföldiTamás BiróKároly ÜrmössyÁdám Takács<![CDATA[Entropy, Vol. 19, Pages 86: Motion Sequence Decomposition-Based Hybrid Entropy Feature and Its Application to Fault Diagnosis of a High-Speed Automatic Mechanism]]>
http://www.mdpi.com/1099-4300/19/3/86
High-speed automatic weapons play an important role in the field of national defense. However, current research on the reliability analysis of automatons relies principally on simulations, because experimental data are difficult to collect in real life. Unlike rotating machinery, a high-speed automaton must accomplish a complex motion consisting of a series of impacts. In addition to strong noise, the impacts generated by different components of the automaton interfere with each other, and there is no effective approach to cope with this in the fault diagnosis of automatic mechanisms. This paper proposes a motion sequence decomposition approach that combines modern signal processing techniques into an effective method for fault detection in high-speed automatons. We first investigate the entire working procedure of the automatic mechanism and calculate the corresponding action times of the movements involved. The vibration signal collected from the shooting experiment is then divided into a number of impacts corresponding to the action order. Only the segment generated by a faulty component is isolated from the original impacts according to the action time of that component. Wavelet packet decomposition (WPD) is applied to the resulting signals to investigate the energy distribution, and the components with higher energy are selected for feature extraction. Three information entropy features based on empirical mode decomposition (EMD) are utilized to distinguish the various states of the automaton. A gray wolf optimization (GWO) algorithm is introduced to improve the performance of the support vector machine (SVM) classifier. We carried out shooting experiments to collect vibration data to demonstrate the proposed work. Experimental results show that the proposed approach is effective for fault diagnosis of a high-speed automaton and can be applied in real applications. Moreover, the GWO provides competitive diagnosis results compared with the genetic algorithm (GA) and the particle swarm optimization (PSO) algorithm.Entropy2017-02-24193Article10.3390/e19030086861099-43002017-02-24doi: 10.3390/e19030086Baoxiang WangHongxia PanHeng Du<![CDATA[Entropy, Vol. 19, Pages 87: Entropy, Topological Theories and Emergent Quantum Mechanics]]>
http://www.mdpi.com/1099-4300/19/3/87
The classical thermostatics of equilibrium processes is shown to possess a quantum mechanical dual theory with a finite dimensional Hilbert space of quantum states. Specifically, the kernel of a certain Hamiltonian operator becomes the Hilbert space of quasistatic quantum mechanics. The relation of thermostatics to topological field theory is also discussed in the context of the emergence of quantum theory, where the concept of entropy plays a key role.Entropy2017-02-23193Article10.3390/e19030087871099-43002017-02-23doi: 10.3390/e19030087D. CabreraP. de CórdobaJ. IsidroJ. Molina<![CDATA[Entropy, Vol. 19, Pages 79: Using k-Mix-Neighborhood Subdigraphs to Compute Canonical Labelings of Digraphs]]>
http://www.mdpi.com/1099-4300/19/2/79
This paper presents a novel theory and method for calculating the canonical labelings of digraphs, whose definition is entirely different from the traditional definition of Nauty. It indicates the mutual relationships that exist between the canonical labeling of a digraph and the canonical labeling of its complement graph. It systematically examines the link between computing the canonical labeling of a digraph and the k-neighborhood and k-mix-neighborhood subdigraphs. To facilitate the presentation, it introduces several concepts, including the mix diffusion outdegree sequence and the entire mix diffusion outdegree sequence. For each node in a digraph G, it assigns an attribute m_NearestNode to enhance the accuracy of calculating the canonical labeling. The four theorems proved here demonstrate how to determine the first nodes added into MaxQ(G). Two further theorems deal with identifying the second nodes added into MaxQ(G). When computing Cmax(G), if MaxQ(G) already contains the first i vertices u1, u2, …, ui, the Diffusion Theorem provides a guideline on how to choose the subsequent node of MaxQ(G). In addition, the Mix Diffusion Theorem shows that the (i+1)th vertex of MaxQ(G) for computing Cmax(G) is selected from the open mix-neighborhood subdigraph N++(Q) of the node set Q = {u1, u2, …, ui}. Two additional theorems calculate Cmax(G) for disconnected digraphs. The four algorithms implemented here illustrate how to calculate MaxQ(G) of a digraph. Through software testing, the correctness of our algorithms is preliminarily verified. Our method can be utilized to mine frequent subdigraphs. We also conjecture that if there exists a vertex v ∈ S+(G) satisfying Cmax(G − v) ⩽ Cmax(G − w) for each w ∈ S+(G) with w ≠ v, then u1 = v for MaxQ(G).Entropy2017-02-22192Article10.3390/e19020079791099-43002017-02-22doi: 10.3390/e19020079Jianqiang HaoYunzhan GongYawen WangLi TanJianzhi Sun<![CDATA[Entropy, Vol. 19, Pages 85: Quantifying Synergistic Information Using Intermediate Stochastic Variables]]>
http://www.mdpi.com/1099-4300/19/2/85
Quantifying synergy among stochastic variables is an important open problem in information theory. Information synergy occurs when multiple sources together predict an outcome variable better than the sum of single-source predictions. It is an essential phenomenon in biology such as in neuronal networks and cellular regulatory processes, where different information flows integrate to produce a single response, but also in social cooperation processes as well as in statistical inference tasks in machine learning. Here we propose a metric of synergistic entropy and synergistic information from first principles. The proposed measure relies on so-called synergistic random variables (SRVs) which are constructed to have zero mutual information about individual source variables but non-zero mutual information about the complete set of source variables. We prove several basic and desired properties of our measure, including bounds and additivity properties. In addition, we prove several important consequences of our measure, including the fact that different types of synergistic information may co-exist between the same sets of variables. A numerical implementation is provided, which we use to demonstrate that synergy is associated with resilience to noise. Our measure may be a marked step forward in the study of multivariate information theory and its numerous applications.Entropy2017-02-22192Article10.3390/e19020085851099-43002017-02-22doi: 10.3390/e19020085Rick QuaxOmri Har-ShemeshPeter Sloot<![CDATA[Entropy, Vol. 19, Pages 82: The More You Know, the More You Can Grow: An Information Theoretic Approach to Growth in the Information Age]]>
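The synergy discussed in the abstract above is commonly illustrated with the XOR gate, where neither source alone carries information about the outcome but the pair jointly determines it. A small sketch of this textbook example (not the authors' SRV construction):

```python
import math
from collections import Counter

def mutual_information(pairs):
    """I(A;B) in bits, estimated from a list of equally likely (a, b) samples."""
    n = len(pairs)
    pab = Counter(pairs)
    pa = Counter(a for a, _ in pairs)
    pb = Counter(b for _, b in pairs)
    return sum((c / n) * math.log2((c / n) / (pa[a] / n * pb[b] / n))
               for (a, b), c in pab.items())

# XOR: each source alone is uninformative, but together they determine Y
samples = [((x1, x2), x1 ^ x2) for x1 in (0, 1) for x2 in (0, 1)]
joint = mutual_information(samples)                         # I(X1,X2 ; Y)
solo = mutual_information([(x[0], y) for x, y in samples])  # I(X1 ; Y)
```

Here the joint mutual information is 1 bit while each single-source mutual information is 0, so all of the predictive information is synergistic.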
http://www.mdpi.com/1099-4300/19/2/82
In our information age, information alone has become a driver of social growth. Information is the fuel of “big data” companies, and the decision-making compass of policy makers. Can we quantify how much information leads to how much social growth potential? Information theory is used to show that information (in bits) is effectively a quantifiable ingredient of growth. The article presents a single equation that makes it possible both to describe hands-off natural selection of evolving populations and to optimize population fitness in uncertain environments through intervention. The setup analyzes the communication channel between the growing population and its uncertain environment. The role of information in population growth can be thought of as the optimization of information flow over this (more or less) noisy channel. Optimized growth implies that the population absorbs all communicated environmental structure during evolutionary updating (measured by their mutual information). This is achieved by endogenously adjusting the population structure to the exogenous environmental pattern (through bet-hedging/portfolio management). The setup can be applied to decompose the growth of any discrete population in stationary, stochastic environments (economic, cultural, or biological). Two empirical examples from the information economy reveal inherent trade-offs among the involved information quantities during growth optimization.Entropy2017-02-22192Article10.3390/e19020082821099-43002017-02-22doi: 10.3390/e19020082Martin Hilbert<![CDATA[Entropy, Vol. 19, Pages 83: Breakdown Point of Robust Support Vector Machines]]>
http://www.mdpi.com/1099-4300/19/2/83
Support vector machine (SVM) is one of the most successful learning methods for solving classification problems. Despite its popularity, SVM has the serious drawback that it is sensitive to outliers in training samples. The penalty on misclassification is defined by a convex loss called the hinge loss, and the unboundedness of the convex loss causes the sensitivity to outliers. To deal with outliers, robust SVMs have been proposed by replacing the convex loss with a non-convex bounded loss called the ramp loss. In this paper, we study the breakdown point of robust SVMs. The breakdown point is a robustness measure that is the largest amount of contamination such that the estimated classifier still gives information about the non-contaminated data. The main contribution of this paper is to show an exact evaluation of the breakdown point of robust SVMs. For learning parameters such as the regularization parameter, we derive a simple formula that guarantees the robustness of the classifier. When the learning parameters are determined with a grid search using cross-validation, our formula works to reduce the number of candidate search points. Furthermore, the theoretical findings are confirmed in numerical experiments. We show that the statistical properties of robust SVMs are well explained by a theoretical analysis of the breakdown point.Entropy2017-02-21192Article10.3390/e19020083831099-43002017-02-21doi: 10.3390/e19020083Takafumi KanamoriShuhei FujiwaraAkiko Takeda<![CDATA[Entropy, Vol. 19, Pages 84: Sequential Batch Design for Gaussian Processes Employing Marginalization †]]>
http://www.mdpi.com/1099-4300/19/2/84
Within the Bayesian framework, we utilize Gaussian processes for parametric studies of long running computer codes. Since the simulations are expensive, it is necessary to exploit the computational budget in the best possible manner. Employing the sum over variances, which serves as an indicator of the quality of the fit, as the utility function, we establish an optimized and automated sequential parameter selection procedure. However, it is often desirable to utilize the parallel running capabilities of present computer technology and abandon sequential parameter selection in favor of a faster overall turn-around time (wall-clock time). This paper proposes to achieve this by marginalizing over the expected outcomes at optimized test points in order to set up a pool of starting values for batch execution. For a one-dimensional test case, the numerical results are validated with the analytical solution. Eventually, a systematic convergence study demonstrates the advantage of the optimized approach over randomly chosen parameter settings.Entropy2017-02-21192Article10.3390/e19020084841099-43002017-02-21doi: 10.3390/e19020084Roland PreussUdo von Toussaint<![CDATA[Entropy, Vol. 19, Pages 70: User-Centric Key Entropy: Study of Biometric Key Derivation Subject to Spoofing Attacks]]>
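The variance-based utility mentioned above can be illustrated with a toy Gaussian process: the next design point is the candidate with the largest posterior variance. A minimal sketch, where the squared-exponential kernel and its length scale are illustrative assumptions, not the paper's setup:

```python
import numpy as np

def rbf(a, b, length_scale=0.3):
    """Squared-exponential kernel (unit variance) between 1-D point sets."""
    return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / length_scale ** 2)

def next_design_point(x_train, candidates, noise=1e-6):
    """Return the candidate with the largest GP posterior variance."""
    K = rbf(x_train, x_train) + noise * np.eye(len(x_train))
    Ks = rbf(candidates, x_train)
    # posterior variance: k(x,x) - k(x,X) K^{-1} k(X,x)
    var = 1.0 - np.einsum('ij,ij->i', Ks, np.linalg.solve(K, Ks.T).T)
    return float(candidates[np.argmax(var)])

# with observations at 0 and 1, the most uncertain candidate lies midway
x_next = next_design_point(np.array([0.0, 1.0]), np.linspace(0.0, 1.0, 101))
```

Repeating this selection after each new simulation gives the sequential procedure; the paper's batch variant instead marginalizes over the expected outcomes at such optimized points.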
http://www.mdpi.com/1099-4300/19/2/70
Biometric data can be used as input for PKI key pair generation. The concept of not saving the private key is very appealing, but the implementation of such a system should not be rushed, because it might prove less secure than the current PKI infrastructure. A single biometric characteristic can be easily spoofed, so it was believed that multi-modal biometrics would offer more security, because spoofing two or more biometrics would be very hard. This notion of increased security in multi-modal biometric systems has been disproved for authentication and matching, with studies showing that multi-modal biometric systems are not only no more secure, but also introduce additional vulnerabilities. This paper is a study of the implications of spoofing biometric data for retrieving the derived key. We demonstrate that spoofed biometrics can yield the same key, which in turn will lead an attacker to obtain the private key. A practical implementation is proposed using fingerprint and iris as biometrics and the fuzzy extractor for biometric key extraction. Our experiments show what happens when the biometric data are spoofed for both uni-modal and multi-modal systems. In the case of the multi-modal system, tests were performed when spoofing one biometric or both. We provide a detailed analysis of every scenario with regard to successful tests and overall key entropy. Our paper defines a biometric PKI scenario and an in-depth security analysis for it. The analysis can be viewed as a blueprint for implementations of future similar systems, because it highlights the main security vulnerabilities for bioPKI. The analysis is not constrained to the biometric part of the system, but covers CA security, sensor security, communication interception, RSA encryption vulnerabilities regarding key entropy, and much more.Entropy2017-02-21192Article10.3390/e19020070701099-43002017-02-21doi: 10.3390/e19020070Lavinia DincaGerhard Hancke<![CDATA[Entropy, Vol. 19, Pages 80: A Risk-Free Protection Index Model for Portfolio Selection with Entropy Constraint under an Uncertainty Framework]]>
http://www.mdpi.com/1099-4300/19/2/80
This paper aims to develop a risk-free protection index model for portfolio selection based on uncertainty theory. First, the returns of risk assets are assumed to be uncertain variables subject to reputable experts’ evaluations. Second, under this assumption and combining with the risk-free interest rate, we define a risk-free protection index (RFPI), which can measure the degree of protection when losses on risk assets occur. Third, noting that the proportion entropy serves as a complementary means to reduce risk through a preset diversification requirement, we put forward a risk-free protection index model with an entropy constraint under an uncertainty framework by applying the RFPI, Huang’s risk index model (RIM), and the mean-variance-entropy model (MVEM). Furthermore, to solve our portfolio model, an algorithm is given to estimate the uncertain expected return and standard deviation of different risk assets by applying the Delphi method. Finally, an example is provided to show that the risk-free protection index model performs better than the traditional MVEM and RIM.Entropy2017-02-21192Article10.3390/e19020080801099-43002017-02-21doi: 10.3390/e19020080Jianwei GaoHuicheng Liu<![CDATA[Entropy, Vol. 19, Pages 81: Towards Operational Definition of Postictal Stage: Spectral Entropy as a Marker of Seizure Ending]]>
http://www.mdpi.com/1099-4300/19/2/81
The postictal period is characterized by several neurological alterations, but its exact limits are hard to determine clinically, or even electroencephalographically, in most cases. We aim to provide quantitative functions or conditions with clearly distinguishable behavior during the ictal-postictal transition. Spectral methods were used to analyze foramen ovale electrode (FOE) recordings during the ictal/postictal transition in 31 seizures of 15 patients with strictly unilateral drug-resistant temporal lobe epilepsy. In particular, density of links, spectral entropy, and relative spectral power were analyzed. Simple partial seizures are accompanied by an ipsilateral increase in the relative Delta power and a decrease in synchronization in 66% and 91% of the cases, respectively, after seizure offset. Complex partial seizures showed a decrease in the spectral entropy in 94% of cases on both the ipsilateral and contralateral sides (100% and 73%, respectively), mainly due to an increase of relative Delta activity. Seizure offset is defined as the moment at which the “seizure termination mechanisms” actually end, which is quantified by the spectral entropy value. We propose as a definition for the postictal start the time at which the ipsilateral spectral entropy reaches its first global minimum.Entropy2017-02-21192Article10.3390/e19020081811099-43002017-02-21doi: 10.3390/e19020081Ancor Sanz-GarcíaLorena Vega-ZelayaJesús PastorRafael SolaGuillermo Ortega<![CDATA[Entropy, Vol. 19, Pages 78: Admitting Spontaneous Violations of the Second Law in Continuum Thermomechanics]]>
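Spectral entropy, the central quantity in the abstract above, is the Shannon entropy of the normalized power spectrum. A generic sketch of one common estimator (the paper's exact implementation may differ):

```python
import numpy as np

def spectral_entropy(signal):
    """Shannon entropy of the normalized power spectrum, scaled to [0, 1]."""
    psd = np.abs(np.fft.rfft(signal)) ** 2
    psd = psd[1:]                      # drop the DC component
    p = psd / psd.sum()
    p = p[p > 0]
    return float(-np.sum(p * np.log2(p)) / np.log2(len(psd)))

# a pure tone concentrates power in one bin (entropy near 0);
# broadband noise spreads power over all bins (entropy near 1)
t = np.arange(1024)
low = spectral_entropy(np.sin(2 * np.pi * 50 * t / 1024))
high = spectral_entropy(np.random.default_rng(0).standard_normal(1024))
```

Under this normalization, a drop in spectral entropy (as reported after complex partial seizures) corresponds to power concentrating in fewer frequency bands.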
http://www.mdpi.com/1099-4300/19/2/78
We survey new extensions of continuum mechanics incorporating spontaneous violations of the Second Law (SL), which involve viscous flow and heat conduction. First, following an account of the Fluctuation Theorem (FT) of statistical mechanics that generalizes the SL, the irreversible entropy is shown to evolve as a submartingale. Next, a stochastic thermomechanics is formulated consistent with the FT, which, according to a revision of classical axioms of continuum mechanics, must be set up on random fields. This development leads to a reformulation of thermoviscous fluids and inelastic solids. These two unconventional constitutive behaviors may jointly occur in nano-poromechanics.Entropy2017-02-21192Article10.3390/e19020078781099-43002017-02-21doi: 10.3390/e19020078Martin Ostoja-Starzewski<![CDATA[Entropy, Vol. 19, Pages 77: Energy Transfer between Colloids via Critical Interactions]]>
http://www.mdpi.com/1099-4300/19/2/77
We report the observation of a temperature-controlled synchronization of two Brownian particles in a binary mixture close to the critical point of the demixing transition. The two beads are trapped by two optical tweezers whose distance is periodically modulated. We notice that the motion synchronization of the two beads appears when the critical temperature is approached. In contrast, when the fluid is far from its critical temperature, the displacements of the two beads are uncorrelated. Small changes in temperature can radically change the global dynamics of the system. We show that the synchronization is induced by the critical Casimir forces. Finally, we present measurements of the energy transfer inside the system produced by the critical interaction.Entropy2017-02-17192Article10.3390/e19020077771099-43002017-02-17doi: 10.3390/e19020077Ignacio MartínezClemence DevaillyArtyom PetrosyanSergio Ciliberto<![CDATA[Entropy, Vol. 19, Pages 75: Information Loss in Binomial Data Due to Data Compression]]>
http://www.mdpi.com/1099-4300/19/2/75
This paper explores the idea of information loss through data compression, as occurs in the course of any data analysis, illustrated via detailed consideration of the Binomial distribution. We examine situations where the full sequence of binomial outcomes is retained, situations where only the total number of successes is retained, and in-between situations. We show that a familiar decomposition of the Shannon entropy H can be rewritten as a decomposition into H_total, H_lost, and H_comp, or the total, lost and compressed (remaining) components, respectively. We relate this new decomposition to Landauer’s principle, and we discuss some implications for the “information-dynamic” theory being developed in connection with our broader program to develop a measure of statistical evidence on a properly calibrated scale.Entropy2017-02-16192Article10.3390/e19020075751099-43002017-02-16doi: 10.3390/e19020075Susan HodgeVeronica Vieland<![CDATA[Entropy, Vol. 19, Pages 76: A Comparison of Postural Stability during Upright Standing between Normal and Flatfooted Individuals, Based on COP-Based Measures]]>
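The decomposition described above can be checked numerically: for n i.i.d. Bernoulli(p) trials, the entropy of the full sequence is n·h(p), the entropy of the compressed summary is that of the Binomial success count, and the lost component is the difference. A small sketch under these textbook definitions (illustrative, not the authors' code):

```python
import math

def binary_entropy(p):
    """h(p) in bits."""
    if p in (0.0, 1.0):
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def binomial_decomposition(n, p):
    """H_total (full sequence), H_comp (success count only),
    and H_lost = H_total - H_comp, all in bits."""
    H_total = n * binary_entropy(p)  # entropy of n i.i.d. Bernoulli(p) trials
    pmf = [math.comb(n, k) * p**k * (1 - p)**(n - k) for k in range(n + 1)]
    H_comp = -sum(q * math.log2(q) for q in pmf if q > 0)
    return H_total, H_comp, H_total - H_comp

# compressing 10 fair coin flips to their success count retains H_comp bits
H_total, H_comp, H_lost = binomial_decomposition(10, 0.5)
```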
http://www.mdpi.com/1099-4300/19/2/76
Aging causes foot arches to collapse, possibly leading to foot deformities and falls. This paper proposes a set of measures involving an entropy-based method used for two groups of young adults with dissimilar foot arches to explore and quantify postural stability on a force plate in an upright position. Fifty-four healthy young adults aged 18–30 years participated in this study. These were categorized into two groups: normal (37 participants) and flatfooted (17 participants). We collected the center of pressure (COP) displacement trajectories of participants during upright standing, on a force plate, in a static position, with eyes open (EO) or eyes closed (EC). These nonstationary time-series signals were quantified using entropy-based measures and the traditional measures used to assess postural stability, and the results obtained from these measures were compared. Appropriate combinations of entropy-based measures revealed that, with respect to postural stability, the two groups differed significantly (p &lt; 0.05) under both EO and EC conditions. The traditional, commonly used COP-based measures only revealed differences under EO conditions. Entropy-based measures are thus suitable for examining differences in postural stability for flatfooted people, and may be used by clinicians after further refinement.Entropy2017-02-16192Article10.3390/e19020076761099-43002017-02-16doi: 10.3390/e19020076Tsui-Chiao ChaoBernard Jiang<![CDATA[Entropy, Vol. 19, Pages 74: An Approach to Data Analysis in 5G Networks]]>
http://www.mdpi.com/1099-4300/19/2/74
5G networks are expected to provide significant advances in network management compared to traditional mobile infrastructures by leveraging intelligence capabilities such as data analysis, prediction, pattern recognition and artificial intelligence. The key idea behind these capabilities is to facilitate the decision-making process in order to solve or mitigate common network problems in a dynamic and proactive way. In this context, this paper presents the design of the Self-Organized Network Management in Virtualized and Software Defined Networks (SELFNET) Analyzer Module, whose main objective is to identify suspicious or unexpected situations based on metrics provided by different network components and sensors. The SELFNET Analyzer Module provides a modular architecture driven by use cases where analytic functions can be easily extended. This paper also proposes the data specification that defines the data inputs to be taken into account in the diagnosis process. This data specification has been implemented with different use cases within the SELFNET Project, proving its effectiveness.Entropy2017-02-16192Article10.3390/e19020074741099-43002017-02-16doi: 10.3390/e19020074Lorena Barona LópezJorge Maestre VidalLuis García Villalba<![CDATA[Entropy, Vol. 19, Pages 71: Synergy and Redundancy in Dual Decompositions of Mutual Information Gain and Information Loss]]>
http://www.mdpi.com/1099-4300/19/2/71
Williams and Beer (2010) proposed a nonnegative mutual information decomposition, based on the construction of information gain lattices, which allows separating the information that a set of variables contains about another variable into components interpretable as the unique information of one variable, or as redundancy and synergy components. In this work, we extend this framework focusing on the lattices that underpin the decomposition. We generalize the type of constructible lattices and examine the relations between different lattices, for example, relating bivariate and trivariate decompositions. We point out that, in information gain lattices, redundancy components are invariant across decompositions, but unique and synergy components are decomposition-dependent. Exploiting the connection between different lattices, we propose a procedure to construct, in the general multivariate case, information gain decompositions from measures of synergy or unique information. We then introduce an alternative type of lattices, information loss lattices, with the role and invariance properties of redundancy and synergy components reversed with respect to gain lattices, and which provide an alternative procedure to build multivariate decompositions. We finally show how information gain and information loss dual lattices lead to a self-consistent unique decomposition, which allows a deeper understanding of the origin and meaning of synergy and redundancy.Entropy2017-02-16192Article10.3390/e19020071711099-43002017-02-16doi: 10.3390/e19020071Daniel ChicharroStefano Panzeri<![CDATA[Entropy, Vol. 19, Pages 73: Identifying Critical States through the Relevance Index]]>
http://www.mdpi.com/1099-4300/19/2/73
The identification of critical states is a major task in complex systems, and the availability of measures to detect such conditions is of utmost importance. In general, criticality refers to the existence of two qualitatively different behaviors that the same system can exhibit, depending on the values of some parameters. In this paper, we show that the relevance index may be effectively used to identify critical states in complex systems. The relevance index was originally developed to identify relevant sets of variables in dynamical systems, but in this paper, we show that it is also able to capture features of criticality. The index is applied to two prominent examples showing slightly different meanings of criticality, namely the Ising model and random Boolean networks. Results show that this index is maximized at critical states and is robust with respect to system size and sampling effort. It can therefore be used to detect criticality.Entropy2017-02-16192Article10.3390/e19020073731099-43002017-02-16doi: 10.3390/e19020073Andrea RoliMarco VillaniRiccardo CaprariRoberto Serra<![CDATA[Entropy, Vol. 19, Pages 72: Classification of Normal and Pre-Ictal EEG Signals Using Permutation Entropies and a Generalized Linear Model as a Classifier]]>
http://www.mdpi.com/1099-4300/19/2/72
In this contribution, a comparison between different permutation entropies as classifiers of electroencephalogram (EEG) records corresponding to normal and pre-ictal states is made. A discrete probability distribution function derived from symbolization techniques applied to the EEG signal is used to calculate the Tsallis entropy, Shannon entropy, Renyi entropy, and min entropy, and each is used separately as the only independent variable in a logistic regression model in order to evaluate its capacity as a classification variable in an inferential manner. The area under the Receiver Operating Characteristic (ROC) curve, along with the accuracy, sensitivity, and specificity are used to compare the models. All the permutation entropies are excellent classifiers, with an accuracy greater than 94.5% in every case, and a sensitivity greater than 97%. Accounting for the amplitude in the symbolization technique retains more information of the signal than its counterparts, and it could be a good candidate for automatic classification of EEG signals.Entropy2017-02-16192Article10.3390/e19020072721099-43002017-02-16doi: 10.3390/e19020072Francisco RedelicoFrancisco TraversaroMaría GarcíaWalter SilvaOsvaldo RossoMarcelo Risk<![CDATA[Entropy, Vol. 19, Pages 69: Two Thermoeconomic Diagnosis Methods Applied to Representative Operating Data of a Commercial Transcritical Refrigeration Plant]]>
http://www.mdpi.com/1099-4300/19/2/69
In order to investigate options for improving the maintenance protocol of commercial refrigeration plants, two thermoeconomic diagnosis methods were evaluated on a state-of-the-art refrigeration plant. A common relative indicator was proposed for the two methods in order to directly compare the quality of malfunction identification. Both methods were applicable to locate and categorise the malfunctions when using steady state data without measurement uncertainties. With the introduction of measurement uncertainty, the categorisation of malfunctions became increasingly difficult, depending on the magnitude of the uncertainties. Two different uncertainty scenarios were evaluated, as the use of repeated measurements yields a lower magnitude of uncertainty. The two methods show similar performance in the presented study for both of the considered measurement uncertainty scenarios. However, only in the low measurement uncertainty scenario were both methods applicable to locate the causes of the malfunctions. For both scenarios, an outlier limit was found, which determined whether it was possible to reject a high relative indicator based on measurement uncertainty. For high uncertainties, the threshold value of the relative indicator was 35, whereas for low uncertainties one of the methods resulted in a threshold at 8. Additionally, the contribution of different measuring instruments to the relative indicator in two central components was analysed. This shows that the contribution was component-dependent.Entropy2017-02-15192Article10.3390/e19020069691099-43002017-02-15doi: 10.3390/e19020069Torben OmmenOskar SigthorssonBrian Elmegaard<![CDATA[Entropy, Vol. 19, Pages 68: Kinetic Theory of a Confined Quasi-Two-Dimensional Gas of Hard Spheres]]>
http://www.mdpi.com/1099-4300/19/2/68
The dynamics of a system of hard spheres enclosed between two parallel plates separated by a distance smaller than two particle diameters is described at the level of kinetic theory. The interest focuses on the behavior of the quasi-two-dimensional fluid seen when looking at the system from above or below. In the first part, a collisional model for the effective two-dimensional dynamics is analyzed. Although it is able to describe quite well the homogeneous evolution observed in the experiments, it is shown that it fails to predict the existence of non-equilibrium phase transitions, and in particular, the bimodal regime exhibited by the real system. A critical analysis of the model is presented, and as a starting point to get a more accurate description, the Boltzmann equation for the quasi-two-dimensional gas has been derived. In the elastic case, the solutions of the equation verify an H-theorem implying a monotonic tendency to a non-uniform steady state. As an example of application of the kinetic equation, the evolution equations for the vertical and horizontal temperatures of the system are derived in the homogeneous approximation, and the results are compared with molecular dynamics simulations.Entropy2017-02-14192Article10.3390/e19020068681099-43002017-02-14doi: 10.3390/e19020068J. BreyVicente BuzónMaria García de SoriaPablo Maynar<![CDATA[Entropy, Vol. 19, Pages 65: An Android Malicious Code Detection Method Based on Improved DCA Algorithm]]>
http://www.mdpi.com/1099-4300/19/2/65
Recently, Android malicious code has increased dramatically and reinforcement technology has become increasingly powerful. Due to the development of code obfuscation and polymorphic deformation technology, current Android static detection methods that select features from the semantics of application source code cannot completely extract a malware’s code features. Android malware static detection methods whose features are obtained only from the AndroidManifest.xml file are easily affected by useless permissions. Therefore, there are some limitations in current Android malware static detection methods. Current Android malware dynamic detection algorithms mostly require a customized system or system root permissions. Based on the Dendritic Cell Algorithm (DCA), this paper proposes an Android malware detection algorithm that has a higher detection rate, does not need to modify the system, and reduces the impact of code obfuscation to a certain degree. This algorithm is applied to an Android malware detection method based on the oriented Dalvik disassembly sequence and the application programming interface (API) calling sequence. Through the designed experiments, the effectiveness of this method for the detection of Android malware is verified.Entropy2017-02-11192Article10.3390/e19020065651099-43002017-02-11doi: 10.3390/e19020065Chundong WangZhiyuan LiLiangyi GongXiuliang MoHong YangYi Zhao<![CDATA[Entropy, Vol. 19, Pages 67: Investigation into Multi-Temporal Scale Complexity of Streamflows and Water Levels in the Poyang Lake Basin, China]]>
http://www.mdpi.com/1099-4300/19/2/67
The streamflow and water level complexity of the Poyang Lake basin has been investigated over multiple time-scales using daily observations of the water level and streamflow spanning from 1954 through 2013. The composite multiscale sample entropy was applied to measure the complexity and the Mann-Kendall algorithm was applied to detect temporal changes in the complexity. The results show that the streamflow and water level complexity increases as the time-scale increases. The sample entropy of the streamflow increases when the time-scale increases from a daily to a seasonal scale, and the sample entropy of the water level increases when the time-scale increases from a daily to a monthly scale. The water outflow of Poyang Lake, which is impacted mainly by the inflow processes, lake regulation, and the streamflow processes of the Yangtze River, is more complex than the water inflow. The streamflow and water level complexity over most of the time-scales, between the daily and monthly scales, is dominated by an increasing trend. This indicates the enhanced randomness, disorderliness, and irregularity of the streamflows and water levels. This investigation can help provide a better understanding of the hydrological features of large freshwater lakes. Ongoing research will analyze the mechanisms of the streamflow and water level complexity changes within the context of climate change and anthropogenic activities.Entropy2017-02-10192Article10.3390/e19020067671099-43002017-02-10doi: 10.3390/e19020067Feng HuangXunzhou ChunyuYuankun WangYao WuBao QianLidan GuoDayong ZhaoZiqiang Xia<![CDATA[Entropy, Vol. 19, Pages 64: Bullwhip Entropy Analysis and Chaos Control in the Supply Chain with Sales Game and Consumer Returns]]>
http://www.mdpi.com/1099-4300/19/2/64
In this paper, we study a supply chain system which consists of one manufacturer and two retailers: a traditional retailer and an online retailer. In order to gain a larger market share, the retailers often take the sales volume as a decision-making variable in the competition game. We analyze the bullwhip effect in the supply chain with a sales game and consumer returns via the theory of entropy and complexity, and apply the delayed feedback control method to control the system’s chaotic state. The impact of a statutory 7-day no-reason-for-return policy for online retailers is also investigated. The bounded rational expectation is adopted to forecast the future demand in the sales game system with weak noise. Our results show that high return rates hurt the profits of both retailers, and that the adjustment speed of the bounded rational sales expectation has an important impact on the bullwhip effect. There is a stable area for retailers where the bullwhip effect does not appear. The supply chain system suffers a great bullwhip effect in the quasi-periodic state and the quasi-chaotic state. The purpose of chaos control on the sales game can be achieved and the bullwhip effect can be effectively mitigated by using the delayed feedback control method.Entropy2017-02-10192Article10.3390/e19020064641099-43002017-02-10doi: 10.3390/e19020064Wandong LouJunhai MaXueli Zhan<![CDATA[Entropy, Vol. 19, Pages 66: Discussing Landscape Compositional Scenarios Generated with Maximization of Non-Expected Utility Decision Models Based on Weighted Entropies]]>
http://www.mdpi.com/1099-4300/19/2/66
The search for hypothetical optimal solutions of landscape composition is a major issue in landscape planning and it can be outlined in a two-dimensional decision space involving economic value and landscape diversity, the latter being considered as a potential safeguard to the provision of services and externalities not accounted for in the economic value. In this paper, we use decision models with different utility valuations combined with weighted entropies respectively incorporating rarity factors associated with Gini-Simpson and Shannon measures. A small example of this framework is provided and discussed for landscape compositional scenarios in the region of Nisa, Portugal. The optimal solutions relative to the different cases considered are assessed in the two-dimensional decision space using a benchmark indicator. The results indicate that the likely best combination is achieved by the solution using Shannon weighted entropy and a square root utility function, corresponding to a risk-averse behavior associated with the precautionary principle linked to safeguarding landscape diversity, anchoring for ecosystem services provision and other externalities. Further developments are suggested, mainly those relative to the hypothesis that the decision models here outlined could be used to revisit the stability-complexity debate in the field of ecological studies.Entropy2017-02-10192Concept Paper10.3390/e19020066661099-43002017-02-10doi: 10.3390/e19020066José CasquilhoFrancisco Rego<![CDATA[Entropy, Vol. 19, Pages 63: Response Surface Methodology Control Rod Position Optimization of a Pressurized Water Reactor Core Considering Both High Safety and Low Energy Dissipation]]>
http://www.mdpi.com/1099-4300/19/2/63
Response Surface Methodology (RSM) is introduced to optimize the control rod positions in a pressurized water reactor (PWR) core. The widely used 3D-IAEA benchmark problem is selected as the typical PWR core and the neutron flux field is solved. In addition, some thermal parameters are assumed in order to obtain the temperature distribution. Then the total and local entropy production is calculated to evaluate the energy dissipation. Using RSM, three directions of optimization are taken, which aim to determine the minimum of the power peak factor Pmax, the peak temperature Tmax and the total entropy production Stot. These parameters reflect the safety and energy dissipation in the core. Finally, an optimization scheme was obtained, which reduced Pmax, Tmax and Stot by 23%, 8.7% and 16%, respectively. The optimization results are satisfactory.Entropy2017-02-10192Article10.3390/e19020063631099-43002017-02-10doi: 10.3390/e19020063Yi-Ning ZhangHao-Chun ZhangHai-Yan YuChao Ma<![CDATA[Entropy, Vol. 19, Pages 62: Complex and Fractional Dynamics]]>
http://www.mdpi.com/1099-4300/19/2/62
Complex systems (CS) are pervasive in many areas, namely financial markets; highway transportation; telecommunication networks; world and country economies; social networks; immunological systems; living organisms; computational systems; and electrical and mechanical structures. CS are often composed of a large number of interconnected and interacting entities exhibiting much richer global scale dynamics than could be inferred from the properties and behavior of individual elements. [...]Entropy2017-02-08192Editorial10.3390/e19020062621099-43002017-02-08doi: 10.3390/e19020062J. Tenreiro MachadoAntónio Lopes<![CDATA[Entropy, Vol. 19, Pages 60: Nonlinear Wave Equations Related to Nonextensive Thermostatistics]]>
http://www.mdpi.com/1099-4300/19/2/60
We advance two nonlinear wave equations related to the nonextensive thermostatistical formalism based upon the power-law nonadditive S_q entropies. Our present contribution is in line with recent developments, where nonlinear extensions inspired by the q-thermostatistical formalism have been proposed for the Schroedinger, Klein–Gordon, and Dirac wave equations. These previously introduced equations share the interesting feature of admitting q-plane wave solutions. In contrast with these recent developments, one of the nonlinear wave equations that we propose exhibits real q-Gaussian solutions, and the other one admits exponential plane wave solutions modulated by a q-Gaussian. These q-Gaussians are q-exponentials whose arguments are quadratic functions of the space and time variables. The q-Gaussians are at the heart of nonextensive thermostatistics. The wave equations that we analyze in this work illustrate new possible dynamical scenarios leading to time-dependent q-Gaussians. One of the nonlinear wave equations considered here is a wave equation endowed with a nonlinear potential term, and can be regarded as a nonlinear Klein–Gordon equation. The other equation we study is a nonlinear Schroedinger-like equation.Entropy2017-02-07192Article10.3390/e19020060601099-43002017-02-07doi: 10.3390/e19020060Angel PlastinoRoseli Wedemann<![CDATA[Entropy, Vol. 19, Pages 61: Computational Complexity]]>
http://www.mdpi.com/1099-4300/19/2/61
Complex systems (CS) involve many elements that interact at different scales in time and space. The challenges in modeling CS led to the development of novel computational tools with applications in a wide range of scientific areas. The computational problems posed by CS exhibit intrinsic difficulties that are a major concern in Computational Complexity Theory. [...]Entropy2017-02-07192Editorial10.3390/e19020061611099-43002017-02-07doi: 10.3390/e19020061J. Tenreiro MachadoAntónio Lopes<![CDATA[Entropy, Vol. 19, Pages 59: On the Binary Input Gaussian Wiretap Channel with/without Output Quantization]]>
http://www.mdpi.com/1099-4300/19/2/59
In this paper, we investigate the effect of output quantization on the secrecy capacity of the binary-input Gaussian wiretap channel. A closed-form expression with infinite summation terms of the secrecy capacity of the binary-input Gaussian wiretap channel is derived for the case when both the legitimate receiver and the eavesdropper have unquantized outputs. In particular, computable tight upper and lower bounds on the secrecy capacity are obtained. Theoretically, we prove that when the legitimate receiver has unquantized outputs while the eavesdropper has binary quantized outputs, the secrecy capacity is larger than that when both the legitimate receiver and the eavesdropper have unquantized outputs or both have binary quantized outputs. Further, numerical results show that in the low signal-to-noise ratio (SNR) (of the main channel) region, the secrecy capacity of the binary-input Gaussian wiretap channel when both the legitimate receiver and the eavesdropper have unquantized outputs is larger than the capacity when both the legitimate receiver and the eavesdropper have binary quantized outputs; as the SNR increases, the secrecy capacity when both the legitimate receiver and the eavesdropper have binary quantized outputs tends to overtake it.Entropy2017-02-04192Article10.3390/e19020059591099-43002017-02-04doi: 10.3390/e19020059Chao QiYanling ChenA. Vinck<![CDATA[Entropy, Vol. 19, Pages 58: Comparison Between Bayesian and Maximum Entropy Analyses of Flow Networks†]]>
http://www.mdpi.com/1099-4300/19/2/58
We compare the application of Bayesian inference and the maximum entropy (MaxEnt) method for the analysis of flow networks, such as water, electrical and transport networks. The two methods have the advantage of allowing a probabilistic prediction of flow rates and other variables, when there is insufficient information to obtain a deterministic solution, and also allow the effects of uncertainty to be included. Both methods of inference update a prior to a posterior probability density function (pdf) by the inclusion of new information, in the form of data or constraints. The MaxEnt method maximises an entropy function subject to constraints, using the method of Lagrange multipliers, to give the posterior, while the Bayesian method finds its posterior by multiplying the prior with likelihood functions incorporating the measured data. In this study, we examine MaxEnt using soft constraints, either included in the prior or as probabilistic constraints, in addition to standard moment constraints. We show that when the prior is Gaussian, both Bayesian inference and the MaxEnt method with soft prior constraints give the same posterior means, but their covariances are different. In the Bayesian method, the interactions between variables are applied through the likelihood function, using second or higher-order cross-terms within the posterior pdf. In contrast, the MaxEnt method incorporates interactions between variables using Lagrange multipliers, avoiding second-order correlation terms in the posterior covariance. The MaxEnt method with soft prior constraints, therefore, has a numerical advantage over Bayesian inference, in that the covariance terms are avoided in its integrations.
The second MaxEnt method with soft probabilistic constraints is shown to give posterior means of similar, but not identical, structure to the other two methods, due to its different formulation.Entropy2017-02-02192Article10.3390/e19020058581099-43002017-02-02doi: 10.3390/e19020058Steven WaldripRobert Niven<![CDATA[Entropy, Vol. 19, Pages 57: The Second Law: From Carnot to Thomson-Clausius, to the Theory of Exergy, and to the Entropy-Growth Potential Principle]]>
http://www.mdpi.com/1099-4300/19/2/57
At its origins, thermodynamics was the study of heat and engines. Carnot transformed it into a scientific discipline by explaining engine power in terms of transfer of “caloric”. That idea became the second law of thermodynamics when Thomson and Clausius reconciled Carnot’s theory with Joule’s conflicting thesis that power was derived from the consumption of heat, which was determined to be a form of energy. Eventually, Clausius formulated the second law as the universal entropy growth principle: the synthesis of transfer vs. consumption led to what became known as the mechanical theory of heat (MTH). However, by making universal interconvertibility the cornerstone of MTH, their synthesis project was a defective one, which precluded MTH from developing the full expression of the second law. This paper reiterates that universal interconvertibility is demonstrably false—as the case has been made by many others—by clarifying the true meaning of the mechanical equivalent of heat. It then presents a two-part formulation of the second law: the universal entropy growth principle, as well as a new principle that no change in Nature happens without entropy growth potential. With the new principle as its cornerstone replacing universal interconvertibility, thermodynamics transcends the defective MTH and becomes a coherent conceptual system.Entropy2017-01-28192Article10.3390/e19020057571099-43002017-01-28doi: 10.3390/e19020057Lin-Shu Wang<![CDATA[Entropy, Vol. 19, Pages 55: Bateman–Feshbach Tikochinsky and Caldirola–Kanai Oscillators with New Fractional Differentiation]]>
http://www.mdpi.com/1099-4300/19/2/55
In this work, the study of the fractional behavior of the Bateman–Feshbach–Tikochinsky and Caldirola–Kanai oscillators by using different fractional derivatives is presented. We obtained the Euler–Lagrange and the Hamiltonian formalisms in order to represent the dynamic models based on the Liouville–Caputo, Caputo–Fabrizio–Caputo and the new fractional derivative based on the Mittag–Leffler kernel with arbitrary order α. Simulation results are presented in order to show the fractional behavior of the oscillators, and the classical behavior is recovered when α is equal to 1.Entropy2017-01-28192Article10.3390/e19020055551099-43002017-01-28doi: 10.3390/e19020055Antonio Coronel-EscamillaJosé Gómez-AguilarDumitru BaleanuTeodoro Córdova-FragaRicardo Escobar-JiménezVictor Olivares-PeregrinoMaysaa Qurashi<![CDATA[Entropy, Vol. 19, Pages 56: Scaling Relations of Lognormal Type Growth Process with an Extremal Principle of Entropy]]>
http://www.mdpi.com/1099-4300/19/2/56
The scale, inflexion point and maximum point are important scaling parameters for studying growth phenomena with a size following the lognormal function. The width of the size function and its entropy depend on the scale parameter (or the standard deviation) and measure the relative importance of production and dissipation involved in the growth process. The Shannon entropy increases monotonically with the scale parameter, but the slope has a minimum at √6/6. This value has been used previously to study spreading of spray and epidemical cases. In this paper, this approach of minimizing this entropy slope is discussed in a broader sense and applied to obtain the relationship between the inflexion point and maximum point. It is shown that this relationship is determined by the base of the natural logarithm e ≈ 2.718 and exhibits some geometrical similarity to the minimal surface energy principle. The known data from a number of problems, including the swirling rate of the bathtub vortex, data of droplet splashing, population growth, distribution of strokes in Chinese language characters and velocity profile of a turbulent jet, are used to assess to what extent the approach of minimizing the entropy slope can be regarded as useful.Entropy2017-01-27192Article10.3390/e19020056561099-43002017-01-27doi: 10.3390/e19020056Zi-Niu WuJuan LiChen-Yuan Bai<![CDATA[Entropy, Vol. 19, Pages 47: On Wasserstein Two-Sample Testing and Related Families of Nonparametric Tests]]>
http://www.mdpi.com/1099-4300/19/2/47
Nonparametric two-sample or homogeneity testing is a decision theoretic problem that involves identifying differences between two random variables without making parametric assumptions about their underlying distributions. The literature is old and rich, with a wide variety of statistics having been designed and analyzed, both for the unidimensional and the multivariate setting. In this short survey, we focus on test statistics that involve the Wasserstein distance. Using an entropic smoothing of the Wasserstein distance, we connect these to very different tests including multivariate methods involving energy statistics and kernel based maximum mean discrepancy and univariate methods like the Kolmogorov–Smirnov test, probability or quantile (PP/QQ) plots and receiver operating characteristic or ordinal dominance (ROC/ODC) curves. Some observations are implicit in the literature, while others seem to have not been noticed thus far. Given nonparametric two-sample testing’s classical and continued importance, we aim to provide useful connections for theorists and practitioners familiar with one subset of methods but not others.Entropy2017-01-26192Article10.3390/e19020047471099-43002017-01-26doi: 10.3390/e19020047Aaditya RamdasNicolás TrillosMarco Cuturi<![CDATA[Entropy, Vol. 19, Pages 54: Information Geometric Approach to Recursive Update in Nonlinear Filtering]]>
http://www.mdpi.com/1099-4300/19/2/54
The measurement update stage in nonlinear filtering is considered from the viewpoint of information geometry: the filtered state is treated as an optimization estimate in parameter space, corresponding to an iteration on the statistical manifold, and a recursive method is proposed in this paper. This method is derived based on the natural gradient descent on the statistical manifold, which is constructed from the posterior probability density function (PDF) of the state conditional on the measurement. The derivation proceeds from the geometric viewpoint, and gives a geometric interpretation for the iterative update. Moreover, the proposed method can be seen as an extension of the Kalman filter and its variants: a single step of the proposed method is identical to the Extended Kalman Filter (EKF) in the nonlinear case, and to the traditional Kalman filter in the linear case. Benefiting from the natural gradient descent used in the update stage, the proposed method performs better than the existing methods, as shown in the numerical experiments.Entropy2017-01-26192Article10.3390/e19020054541099-43002017-01-26doi: 10.3390/e19020054Yubo LiYongqiang ChengXiang LiXiaoqiang HuaYuliang Qin<![CDATA[Entropy, Vol. 19, Pages 53: A Mixed Geographically and Temporally Weighted Regression: Exploring Spatial-Temporal Variations from Global and Local Perspectives]]>
http://www.mdpi.com/1099-4300/19/2/53
To capture both global stationarity and spatiotemporal non-stationarity, a novel mixed geographically and temporally weighted regression (MGTWR) model accounting for global and local effects in both space and time is presented. Since the constant and spatial-temporal varying coefficients could not be estimated in one step, a two-stage least squares estimation is introduced to calibrate the model. Both simulations and real-world datasets are used to test and verify the performance of the proposed MGTWR model. Additionally, an Akaike Information Criterion (AIC) is adopted as a key model fitting diagnostic. The experiments demonstrate that the MGTWR model yields more accurate results than do traditional spatially weighted regression models. For instance, the MGTWR model decreased AIC value by 2.7066, 36.368 and 112.812 with respect to those of the mixed geographically weighted regression (MGWR) model and by 45.5628, −38.774 and 35.656 with respect to those of the geographical and temporal weighted regression (GTWR) model for the three simulation datasets. Moreover, compared to the MGWR and GTWR models, the MGTWR model obtained the lowest AIC value and mean square error (MSE) and the highest coefficient of determination (R2) and adjusted coefficient of determination (R2adj). In addition, our experiments proved the existence of both global stationarity and spatiotemporal non-stationarity, as well as the practical ability of the proposed method.Entropy2017-01-26192Article10.3390/e19020053531099-43002017-01-26doi: 10.3390/e19020053Jiping LiuYangyang ZhaoYi YangShenghua XuFuhao ZhangXiaolu ZhangLihong ShiAgen Qiu<![CDATA[Entropy, Vol. 19, Pages 51: Entropies of the Chinese Land Use/Cover Change from 1990 to 2010 at a County Level]]>
http://www.mdpi.com/1099-4300/19/2/51
Land Use/Cover Change (LUCC) has gradually become an important direction in the research of global change. LUCC is a complex system, and entropy is a measure of the degree of disorder of a system. Based on land use information entropy, this paper analyzes changes in land use from a systems perspective. Research on the entropy of LUCC structures has a certain “guiding role” for the optimization and adjustment of regional land use structure. Based on five periods of LUCC data from 1990 to 2010, this paper focuses on analyzing three types of LUCC entropies among counties in China, namely the Shannon, Renyi, and Tsallis entropies. The findings suggest that: (1) Shannon entropy can reflect the volatility of the LUCC; the Renyi and Tsallis entropies also have this function when their parameter takes a positive value, and can reflect the extreme cases of the LUCC when their parameter takes a negative value; (2) the entropy of China’s LUCC is uneven in its temporal and spatial distributions, showed a pronounced trend during 1990–2010, and was generally high in the central region.Entropy2017-01-25192Article10.3390/e19020051511099-43002017-01-25doi: 10.3390/e19020051Yong FanGuangming YuZongyi HeHailong YuRui BaiLinru YangDi Wu<![CDATA[Entropy, Vol. 19, Pages 52: Research and Application of a Novel Hybrid Model Based on Data Selection and Artificial Intelligence Algorithm for Short Term Load Forecasting]]>
http://www.mdpi.com/1099-4300/19/2/52
Machine learning plays a vital role in several modern economic and industrial fields, and selecting an optimized machine learning method to improve time series forecasting accuracy is challenging. Advanced machine learning methods, e.g., the support vector regression (SVR) model, are widely employed in forecasting fields, but an individual SVR model pays no attention to data selection, signal processing or optimization, and therefore cannot always satisfy the requirements of time series forecasting. In this paper, by preprocessing and analyzing the original time series, a hybrid SVR model is developed, considering periodicity, trend and randomness, and combined with data selection, signal processing and an optimization algorithm for short-term load forecasting. Case studies of electricity power data from New South Wales and Singapore are regarded as exemplifications to estimate the performance of the developed novel model. The experimental results demonstrate that the proposed hybrid method is not only robust but also capable of achieving significant improvement compared with the traditional single models and can be an effective and efficient tool for power load forecasting.Entropy2017-01-25192Article10.3390/e19020052521099-43002017-01-25doi: 10.3390/e19020052Wendong YangJianzhou WangRui Wang<![CDATA[Entropy, Vol. 19, Pages 50: Multiplicity of Homoclinic Solutions for Fractional Hamiltonian Systems with Subquadratic Potential]]>
http://www.mdpi.com/1099-4300/19/2/50
In this paper, we study the existence of homoclinic solutions for the fractional Hamiltonian systems with left and right Liouville–Weyl derivatives. We establish some new results concerning the existence and multiplicity of homoclinic solutions for the given system by using Clark’s theorem from critical point theory and fountain theorem.Entropy2017-01-24192Article10.3390/e19020050501099-43002017-01-24doi: 10.3390/e19020050Neamat NyamoradiAhmed AlsaediBashir AhmadYong Zhou<![CDATA[Entropy, Vol. 19, Pages 49: Entropy-Based Method for Evaluating Contact Strain-Energy Distribution for Assembly Accuracy Prediction]]>
http://www.mdpi.com/1099-4300/19/2/49
Assembly accuracy significantly affects the performance of precision mechanical systems. In this study, an entropy-based evaluation method for contact strain-energy distribution is proposed to predict the assembly accuracy. Strain energy is utilized to characterize the effects of the combination of form errors and contact deformations on the formation of assembly errors. To obtain the strain energy, the contact state is analyzed by applying the finite element method (FEM) on 3D solid models of real parts containing form errors. Entropy is employed for evaluating the uniformity of the contact strain-energy distribution. An evaluation model, in which the uniformity of the contact strain-energy distribution is evaluated at three levels based on entropy, is developed to predict the assembly accuracy, and a comprehensive index is proposed. Assembly experiments for five sets of two rotating parts are conducted. Moreover, the coaxiality between the surfaces of two parts with assembly accuracy requirements is selected as the verification index to verify the effectiveness of the evaluation method. The results are in good agreement with the verification index, indicating that the method presented in this study is reliable and effective in predicting the assembly accuracy.Entropy2017-01-24192Article10.3390/e19020049491099-43002017-01-24doi: 10.3390/e19020049Yan FangXin JinChencan HuangZhijing Zhang<![CDATA[Entropy, Vol. 19, Pages 48: Entropy, Shannon’s Measure of Information and Boltzmann’s H-Theorem]]>
http://www.mdpi.com/1099-4300/19/2/48
We start with a clear distinction between Shannon’s Measure of Information (SMI) and the thermodynamic entropy. The first is defined on any probability distribution and is therefore a very general concept, whereas entropy is defined on a very special set of distributions. Next, we show that the SMI provides a solid and quantitative basis for the interpretation of the thermodynamic entropy. The entropy measures the uncertainty in the distribution of the locations and momenta of all the particles, as well as two corrections due to the uncertainty principle and the indistinguishability of the particles. Finally, we show that the H-function as defined by Boltzmann is an SMI but not an entropy. Therefore, much of what has been written on the H-theorem is irrelevant to entropy and the Second Law of Thermodynamics.Entropy2017-01-24192Article10.3390/e19020048481099-43002017-01-24doi: 10.3390/e19020048Arieh Ben-Naim<![CDATA[Entropy, Vol. 19, Pages 46: Topological Entropy Dimension and Directional Entropy Dimension for ℤ2-Subshifts]]>
http://www.mdpi.com/1099-4300/19/2/46
The notion of topological entropy dimension for a ℤ-action has been introduced to measure the subexponential complexity of zero entropy systems. Given a ℤ2-action, along with the ℤ2-entropy dimension, we also consider a finer notion of directional entropy dimension arising from its subactions. The entropy dimension of a ℤ2-action and the directional entropy dimensions of its subactions satisfy certain inequalities. We present several constructions of strictly ergodic ℤ2-subshifts of positive entropy dimension with diverse properties of their subgroup actions. In particular, we show that there is a ℤ2-subshift of full dimension in which every direction has entropy 0.Entropy2017-01-24192Article10.3390/e19020046461099-43002017-01-24doi: 10.3390/e19020046Uijin JungJungseob LeeKyewon Koh Park<![CDATA[Entropy, Vol. 19, Pages 45: A Soft Parameter Function Penalized Normalized Maximum Correntropy Criterion Algorithm for Sparse System Identification]]>
http://www.mdpi.com/1099-4300/19/1/45
A soft parameter function penalized normalized maximum correntropy criterion (SPF-NMCC) algorithm is proposed for sparse system identification. The proposed SPF-NMCC algorithm is derived on the basis of the normalized adaptive filter theory, the maximum correntropy criterion (MCC) algorithm and zero-attracting techniques. A soft parameter function is incorporated into the cost function of the traditional normalized MCC (NMCC) algorithm to exploit the sparsity properties of the sparse signals. The proposed SPF-NMCC algorithm is mathematically derived in detail. As a result, the proposed SPF-NMCC algorithm can provide an efficient zero attractor term to effectively attract the zero taps and near-zero coefficients to zero, and, hence, it can speed up the convergence. Furthermore, the estimation behaviors are obtained by estimating a sparse system and a sparse acoustic echo channel. Computer simulation results indicate that the proposed SPF-NMCC algorithm can achieve a better performance in comparison with the MCC, NMCC, LMS (least mean square) algorithms and their zero attraction forms in terms of both convergence speed and steady-state performance.Entropy2017-01-23191Article10.3390/e19010045451099-43002017-01-23doi: 10.3390/e19010045Yingsong LiYanyan WangRui YangFelix Albu<![CDATA[Entropy, Vol. 19, Pages 44: Crane Safety Assessment Method Based on Entropy and Cumulative Prospect Theory]]>
http://www.mdpi.com/1099-4300/19/1/44
Assessing the safety status of cranes is an important problem. To overcome the inaccuracies and misjudgments in such assessments, this work describes a safety assessment method for cranes that combines entropy and cumulative prospect theory. Firstly, the proposed method transforms the set of evaluation indices into an evaluation vector. Secondly, a decision matrix is constructed from the evaluation vectors and evaluation standards, and an entropy-based technique is applied to calculate the index weights. Thirdly, positive and negative prospect value matrices are established from reference points based on the positive and negative ideal solutions. The crane safety grade can then be determined according to the ranked comprehensive prospect values. Finally, the safety status of four general overhead traveling crane samples is evaluated to verify the rationality and feasibility of the proposed method. The results demonstrate that the method described in this paper can precisely and reasonably reflect the safety status of a crane.Entropy2017-01-21191Article10.3390/e19010044441099-43002017-01-21doi: 10.3390/e19010044Aihua LiZhangyan Zhao<![CDATA[Entropy, Vol. 19, Pages 43: Radiative Entropy Production along the Paludification Gradient in the Southern Taiga]]>
http://www.mdpi.com/1099-4300/19/1/43
Entropy production (σ) is a measure of ecosystem and landscape stability in a changing environment. We calculated the σ in the radiation balance for a well-drained spruce forest, a paludified spruce forest, and a bog in the southern taiga of the European part of Russia using long-term meteorological data. Though radiative σ depends both on surface temperature and absorbed radiation, the radiation effect in boreal ecosystems is much more important than the temperature effect. The dynamic of the incoming solar radiation was the main driver of the diurnal, seasonal, and intra-annual courses of σ for all ecosystems; the difference in ecosystem albedo was the second most important factor, responsible for seven-eighths of the difference in σ between the bog and forest in a warm period. Despite the higher productivity and the complex structure of the well-drained forest, the dynamics and sums of σ in the two forests were very similar. Summer droughts had no influence on the albedo and σ efficiency of the forests, demonstrating the high self-regulation of the taiga forest ecosystems. On the contrary, a decreasing water supply significantly elevated the albedo and lowered the σ in the bog. Bogs, being non-steady ecosystems, demonstrate unique thermodynamic behavior, which is fluctuating and strongly dependent on the moisture supply. Paludification of territories may result in increasing instability of the energy balance and entropy production in the landscape of the southern taiga.Entropy2017-01-21191Article10.3390/e19010043431099-43002017-01-21doi: 10.3390/e19010043Olga KurichevaVadim MamkinRobert SandlerskyJuriy PuzachenkoAndrej VarlaginJuliya Kurbatova