Table of Contents

Entropy, Volume 18, Issue 12 (December 2016)

  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive tables of contents of newly released issues.
  • PDF is the official format for papers published in both HTML and PDF forms. To view the papers in PDF format, click on the "PDF Full-text" link and use the free Adobe Reader to open them.
Cover Story: We present multivariate surprisal analysis with special reference to personalized medical [...]
Displaying articles 1-38

Research

Open Access Article Condensation: Passenger Not Driver in Atmospheric Thermodynamics
Entropy 2016, 18(12), 417; doi:10.3390/e18120417
Received: 7 April 2016 / Revised: 4 August 2016 / Accepted: 1 September 2016 / Published: 25 November 2016
PDF Full-text (327 KB) | HTML Full-text | XML Full-text
Abstract
The second law of thermodynamics states that processes yielding work or at least capable of yielding work are thermodynamically spontaneous, and that those costing work are thermodynamically nonspontaneous. Whether a process yields or costs heat is irrelevant. Condensation of water vapor yields work and hence is thermodynamically spontaneous only in a supersaturated atmosphere; in an unsaturated atmosphere it costs work and hence is thermodynamically nonspontaneous. Far more of Earth’s atmosphere is unsaturated than supersaturated; based on this alone evaporation is far more often work-yielding and hence thermodynamically spontaneous than condensation in Earth’s atmosphere—despite condensation always yielding heat and evaporation always costing heat. Furthermore, establishment of the unstable or at best metastable condition of supersaturation, and its maintenance in the face of condensation that would wipe it out, is always work-costing and hence thermodynamically nonspontaneous in Earth’s atmosphere or anywhere else. The work required to enable supersaturation is most usually provided at the expense of temperature differences that enable cooling to below the dew point. In the case of most interest to us, convective weather systems and storms, it is provided at the expense of vertical temperature gradients exceeding the moist adiabatic. Thus, ultimately, condensation is a work-costing and hence thermodynamically nonspontaneous process even in supersaturated regions of Earth’s or any other atmosphere. While heat engines in general can in principle extract all of the work represented by any temperature difference until it is totally neutralized to isothermality, convective weather systems and storms in particular cannot. They can extract only the work represented by partial neutralization of super-moist-adiabatic lapse rates to moist-adiabaticity. Super-moist-adiabatic lapse rates are required to enable convection of saturated air. Condensation cannot occur fast enough to maintain relative humidity in a cloud exactly at saturation, thereby trapping some water vapor in metastable supersaturation. Only then can the water vapor condense. Thus ultimately condensation is a thermodynamically nonspontaneous process forced by super-moist-adiabatic lapse rates. Yet water vapor plays vital roles in atmospheric thermodynamics and kinetics. Convective weather systems and storms in a dry atmosphere (e.g., dust devils) can extract only the work represented by partial neutralization of super-dry-adiabatic lapse rates to dry-adiabaticity. At typical atmospheric temperatures in the tropics, where convective weather systems and storms are most frequent and active, the moist-adiabatic lapse rate is much smaller (thus much closer to isothermality), and hence represents much more extractable work, than the dry—the thermodynamic advantage of water vapor. Moreover, the large heat of condensation (and to a lesser extent fusion) of water facilitates much faster heat transfer from Earth’s surface to the tropopause than is possible in a dry atmosphere, thereby facilitating much faster extraction of work, i.e., much greater power, than is possible in a dry atmosphere—the kinetic advantage of water vapor. Full article
Open Access Article The Information Geometry of Sparse Goodness-of-Fit Testing
Entropy 2016, 18(12), 421; doi:10.3390/e18120421
Received: 31 August 2016 / Revised: 16 November 2016 / Accepted: 19 November 2016 / Published: 24 November 2016
Cited by 1 | PDF Full-text (1321 KB) | HTML Full-text | XML Full-text
Abstract
This paper takes an information-geometric approach to the challenging issue of goodness-of-fit testing in the high dimensional, low sample size context where—potentially—boundary effects dominate. The main contributions of this paper are threefold: first, we present and prove two new theorems on the behaviour of commonly used test statistics in this context; second, we investigate—in the novel environment of the extended multinomial model—the links between information geometry-based divergences and standard goodness-of-fit statistics, allowing us to formalise relationships which have been missing in the literature; finally, we use simulation studies to validate and illustrate our theoretical results and to explore currently open research questions about the way that discretisation effects can dominate sampling distributions near the boundary. Novelly accommodating these discretisation effects contrasts sharply with the essentially continuous approach of skewness and other corrections flowing from standard higher-order asymptotic analysis. Full article
(This article belongs to the Special Issue Differential Geometrical Theory of Statistics) Printed Edition available
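For orientation, the "standard goodness-of-fit statistics" that the abstract links to information-geometric divergences sit inside the classical Cressie–Read power-divergence family; a hedged sketch in standard notation (not the paper's own statement) is:

```latex
% Power-divergence family for observed counts O_i and expected counts E_i;
% lambda = 1 recovers Pearson's chi-squared, lambda -> 0 the likelihood-ratio G^2.
2 n I^{\lambda} = \frac{2}{\lambda(\lambda+1)} \sum_{i} O_i \left[ \left( \frac{O_i}{E_i} \right)^{\lambda} - 1 \right]
```

The paper's contribution concerns how such statistics behave near the boundary of the extended multinomial model, where the usual continuous asymptotics can fail.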

Open Access Article Energy Efficiency Improvement in a Modified Ethanol Process from Acetic Acid
Entropy 2016, 18(12), 422; doi:10.3390/e18120422
Received: 16 September 2016 / Revised: 21 November 2016 / Accepted: 21 November 2016 / Published: 24 November 2016
PDF Full-text (1201 KB) | HTML Full-text | XML Full-text
Abstract
For the high utilization of abundant lignocellulose, which is difficult to directly convert into ethanol, an energy-efficient ethanol production process using acetic acid was examined, and its energy saving performance, economics, and thermodynamic efficiency were compared with the conventional process. The raw ethanol synthesized from acetic acid and hydrogen was fed to the proposed ethanol concentration process. The proposed process utilized an extended divided wall column (DWC), for which the performance was investigated with the HYSYS simulation. The performance improvement of the proposed process includes a 27% saving in heating duty and a 41% reduction in cooling duty. The economics shows a 16% saving in investment cost and a 24% decrease in utility cost over the conventional ethanol concentration process. The exergy analysis shows a 9.6% improvement in thermodynamic efficiency for the proposed process. Full article
(This article belongs to the Special Issue Advances in Applied Thermodynamics II)

Open Access Article Consensus of Second Order Multi-Agent Systems with Exogenous Disturbance Generated by Unknown Exosystems
Entropy 2016, 18(12), 423; doi:10.3390/e18120423
Received: 25 July 2016 / Revised: 16 November 2016 / Accepted: 23 November 2016 / Published: 25 November 2016
PDF Full-text (818 KB) | HTML Full-text | XML Full-text
Abstract
This paper is concerned with the consensus problem for a class of second-order multi-agent systems subject to external disturbances generated by unknown exosystems. In contrast with the case where the disturbances are generated by known exosystems, adaptive control must be combined with internal model design to reject disturbances produced by unknown exosystems. With the help of the internal model, an adaptive protocol is proposed for the consensus problem of the multi-agent systems. Finally, a numerical example is provided to demonstrate the effectiveness of the control design. Full article
(This article belongs to the Special Issue Complexity, Criticality and Computation (C³))

Open Access Article Measurement on the Complexity Entropy of Dynamic Game Models for Innovative Enterprises under Two Kinds of Government Subsidies
Entropy 2016, 18(12), 424; doi:10.3390/e18120424
Received: 11 October 2016 / Revised: 22 November 2016 / Accepted: 23 November 2016 / Published: 29 November 2016
PDF Full-text (4258 KB) | HTML Full-text | XML Full-text
Abstract
In this paper, taking the high-tech industry as the background, we build a dynamic duopoly game model for two cases with different government subsidies, based on innovation inputs and outputs, respectively. We analyze the equilibrium solution and stability conditions of the system, and study its dynamic evolution under different system parameters by numerical simulation. The simulation results show that both innovation subsidy policies have positive effects on firms' innovation activities, and that improving the level of innovation can encourage firms to innovate. They also show that an excessive adjustment speed of innovation outputs may cause complicated dynamic phenomena such as bifurcation and chaos, which means that the system has higher entropy than in a stable state. The degree of government innovation subsidy is also shown to affect the stability and entropy of the system. Full article
(This article belongs to the Section Complexity)

Open Access Article Anisotropically Weighted and Nonholonomically Constrained Evolutions on Manifolds
Entropy 2016, 18(12), 425; doi:10.3390/e18120425
Received: 1 September 2016 / Revised: 15 November 2016 / Accepted: 23 November 2016 / Published: 26 November 2016
PDF Full-text (2983 KB) | HTML Full-text | XML Full-text
Abstract
We present evolution equations for a family of paths that results from anisotropically weighting curve energies in non-linear statistics of manifold-valued data. This situation arises when performing inference on data that have non-trivial covariance and are anisotropically distributed. The family can be interpreted as most probable paths for a driving semi-martingale that is mapped to the manifold through stochastic development. We discuss how the paths are projections of geodesics for a sub-Riemannian metric on the frame bundle of the manifold, and how the curvature of the underlying connection appears in the sub-Riemannian Hamilton–Jacobi equations. Evolution equations for both metric and cometric formulations of the sub-Riemannian metric are derived. We furthermore show how rank-deficient metrics can be mixed with an underlying Riemannian metric, and we relate the paths to geodesics and polynomials in Riemannian geometry. Examples from the family of paths are visualized on embedded surfaces, and we explore computational representations on finite-dimensional landmark manifolds with geometry induced from right-invariant metrics on diffeomorphism groups. Full article
(This article belongs to the Special Issue Differential Geometrical Theory of Statistics) Printed Edition available

Open Access Article On Macrostates in Complex Multi-Scale Systems
Entropy 2016, 18(12), 426; doi:10.3390/e18120426
Received: 20 October 2016 / Revised: 18 November 2016 / Accepted: 21 November 2016 / Published: 29 November 2016
Cited by 1 | PDF Full-text (6775 KB) | HTML Full-text | XML Full-text
Abstract
A characteristic feature of complex systems is their deep structure, meaning that the definition of their states and observables depends on the level, or the scale, at which the system is considered. This scale dependence is reflected in the distinction of micro- and macro-states, referring to lower and higher levels of description. There are several conceptual and formal frameworks to address the relation between them. Here, we focus on an approach in which macrostates are contextually emergent from (rather than fully reducible to) microstates and can be constructed by contextual partitions of the space of microstates. We discuss criteria for the stability of such partitions, in particular under the microstate dynamics, and outline some examples. Finally, we address the question of how macrostates arising from stable partitions can be identified as relevant or meaningful. Full article
(This article belongs to the Special Issue Information and Self-Organization)

Open Access Article Healthcare Teams Neurodynamically Reorganize When Resolving Uncertainty
Entropy 2016, 18(12), 427; doi:10.3390/e18120427
Received: 23 September 2016 / Revised: 13 November 2016 / Accepted: 22 November 2016 / Published: 29 November 2016
Cited by 1 | PDF Full-text (2424 KB) | HTML Full-text | XML Full-text
Abstract
Research on the microscale neural dynamics of social interactions has yet to be translated into improvements in the assembly, training and evaluation of teams. This is partially due to the scale of neural involvements in team activities, spanning from the millisecond oscillations in individual brains to the minutes/hours performance behaviors of the team. We have used intermediate neurodynamic representations to show that healthcare teams enter persistent (50–100 s) neurodynamic states when they encounter and resolve uncertainty while managing simulated patients. Each one-second symbol was constructed by situating the electroencephalogram (EEG) power of each team member in the context of that of the other team members and the task. These representations were acquired from EEG headsets with 19 recording electrodes for each of the 1–40 Hz frequencies. Estimates of the information in each symbol stream were calculated from a 60 s moving window of Shannon entropy that was updated each second, providing a quantitative neurodynamic history of the team's performance. Neurodynamic organizations fluctuated with the task demands, with increased organization (i.e., lower entropy) occurring when the team needed to resolve uncertainty. These results show that intermediate neurodynamic representations can provide a quantitative bridge between the micro and macro scales of teamwork. Full article
(This article belongs to the Special Issue Entropy and Electroencephalography II)
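A minimal sketch of the windowed-entropy computation the abstract describes (the 60 s window, 1 s update and symbol-stream framing come from the abstract; all implementation details are assumed):

```python
import numpy as np
from collections import Counter

def moving_window_entropy(symbols, window=60, step=1):
    """Shannon entropy (bits) of a symbol stream over a sliding window."""
    entropies = []
    for start in range(0, len(symbols) - window + 1, step):
        counts = Counter(symbols[start:start + window])
        probs = np.array(list(counts.values()), dtype=float) / window
        entropies.append(float(-np.sum(probs * np.log2(probs))))
    return np.array(entropies)

# Toy stream: disordered at first, then organized (entropy should drop).
rng = np.random.default_rng(0)
stream = list(rng.integers(0, 8, 300)) + [1, 2] * 150
print(moving_window_entropy(stream)[[0, -1]])  # high early, low late
```

Lower values of such a trace correspond to the increased neurodynamic organization the authors report during uncertainty resolution.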

Open Access Article Fiber-Mixing Codes between Shifts of Finite Type and Factors of Gibbs Measures
Entropy 2016, 18(12), 428; doi:10.3390/e18120428
Received: 7 October 2016 / Revised: 20 November 2016 / Accepted: 24 November 2016 / Published: 30 November 2016
PDF Full-text (274 KB) | HTML Full-text | XML Full-text
Abstract
A sliding block code π: X → Y between shift spaces is called fiber-mixing if, for every x and x′ in X with y = π(x) = π(x′), there is z ∈ π⁻¹(y) which is left asymptotic to x and right asymptotic to x′. A fiber-mixing factor code from a shift of finite type is a code of class degree 1 for which each point of Y has exactly one transition class. Given an infinite-to-one factor code between mixing shifts of finite type (of unequal entropies), we show that there is also a fiber-mixing factor code between them. This result may be regarded as an infinite-to-one (unequal entropies) analogue of Ashley’s Replacement Theorem, which states that the existence of an equal entropy factor code between mixing shifts of finite type guarantees the existence of a degree 1 factor code between them. Properties of fiber-mixing codes and applications to factors of Gibbs measures are presented. Full article
(This article belongs to the Special Issue Entropic Properties of Dynamical Systems)

Open Access Article CoFea: A Novel Approach to Spam Review Identification Based on Entropy and Co-Training
Entropy 2016, 18(12), 429; doi:10.3390/e18120429
Received: 20 October 2016 / Accepted: 28 November 2016 / Published: 30 November 2016
PDF Full-text (815 KB) | HTML Full-text | XML Full-text
Abstract
With the rapid development of electronic commerce, spam reviews are rapidly growing on the Internet to manipulate online customers’ opinions on goods being sold. This paper proposes a novel approach, called CoFea (Co-training by Features), to identify spam reviews, based on entropy and the co-training algorithm. After sorting all lexical terms of reviews by entropy, we produce two views on the reviews by dividing the lexical terms into two subsets. One subset contains odd-numbered terms and the other contains even-numbered terms. Using SVM (support vector machine) as the base classifier, we further propose two strategies, CoFea-T and CoFea-S, embedded with the CoFea approach. The CoFea-T strategy uses all terms in the subsets for spam review identification by SVM. The CoFea-S strategy uses a predefined number of terms with small entropy for spam review identification by SVM. The experimental results show that the CoFea-T strategy produces better accuracy than the CoFea-S strategy, while the CoFea-S strategy saves more computing time, with acceptable accuracy, in spam review identification. Full article
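A minimal sketch of the view-splitting step described above (how the paper computes each term's entropy is not specified here, so the Shannon entropy of a term's normalized occurrence distribution is assumed):

```python
import numpy as np

def split_views_by_entropy(term_counts):
    """Sort terms by entropy, then split into odd/even-ranked views."""
    def shannon(counts):
        p = np.asarray(counts, dtype=float)
        p = p[p > 0] / p.sum()
        return float(-np.sum(p * np.log2(p)))

    ranked = sorted(term_counts, key=lambda t: shannon(term_counts[t]))
    return ranked[0::2], ranked[1::2]  # the two views for co-training

# term -> per-review occurrence counts (toy data)
view1, view2 = split_views_by_entropy(
    {"cheap": [3, 0, 1], "great": [1, 1, 1], "refund": [0, 4, 0]})
print(view1, view2)
```

Each view would then feed its own SVM classifier in the co-training loop, with CoFea-T using all terms of a view and CoFea-S only a low-entropy subset.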

Open Access Article Multivariable Fuzzy Measure Entropy Analysis for Heart Rate Variability and Heart Sound Amplitude Variability
Entropy 2016, 18(12), 430; doi:10.3390/e18120430
Received: 14 October 2016 / Revised: 28 November 2016 / Accepted: 29 November 2016 / Published: 3 December 2016
Cited by 2 | PDF Full-text (995 KB) | HTML Full-text | XML Full-text
Abstract
Simultaneously analyzing multivariate time series provides insight into the underlying interaction mechanisms of the cardiovascular system and has recently become an increasing focus of interest. In this study, we propose a new multivariate entropy measure, named multivariate fuzzy measure entropy (mvFME), for the analysis of multivariate cardiovascular time series. The performances of mvFME and its two sub-components, the local multivariate fuzzy entropy (mvFEL) and the global multivariate fuzzy entropy (mvFEG), as well as the commonly used multivariate sample entropy (mvSE), were tested on both simulated and cardiovascular multivariate time series. Simulation results on multivariate coupled Gaussian signals showed that the statistical stability of mvFME is better than that of mvSE, but its computation time is higher. Then, mvSE and mvFME were applied to the multivariate cardiovascular signal analysis of the R wave peak (RR) interval, and the first (S1) and second (S2) heart sound amplitude series from three heart sound collection positions, under two different physiological states: the rest state and the after-stair-climbing state. The results showed that, compared with the rest state, for univariate time series analysis, the after-stair-climbing state has significantly lower mvSE and mvFME values for both the RR interval and S1 amplitude series, but not for the S2 amplitude series. For bivariate time series analysis, both mvSE and mvFME report significantly lower values after stair climbing. For trivariate time series analysis, only mvFME can discriminate between the two physiological states, whereas mvSE cannot. In summary, the newly proposed mvFME method shows better statistical stability and better discrimination ability for multivariate time series analysis than the traditional mvSE method. Full article
(This article belongs to the Special Issue Multivariate Entropy Measures and Their Applications)

Open Access Article Maximum Entropy Production Is Not a Steady State Attractor for 2D Fluid Convection
Entropy 2016, 18(12), 431; doi:10.3390/e18120431
Received: 17 October 2016 / Revised: 16 November 2016 / Accepted: 28 November 2016 / Published: 1 December 2016
PDF Full-text (2818 KB) | HTML Full-text | XML Full-text
Abstract
Multiple authors have claimed that the natural convection of a fluid is a process that exhibits maximum entropy production (MEP). However, almost all such investigations were limited to fixed temperature boundary conditions (BCs). It was found that under those conditions, the system tends to maximize its heat flux, and hence it was concluded that the MEP state is a dynamical attractor. However, since entropy production varies with heat flux and difference of inverse temperature, it is essential that any complete investigation of entropy production allows for variations in heat flux and temperature difference. Only then can we legitimately assess whether the MEP state is the most attractive. Our previous work made use of negative feedback BCs to explore this possibility. We found that the steady state of the system was far from the MEP state. For any system, entropy production can only be maximized subject to a finite set of physical and material constraints. In the case of our previous work, it was possible that the adopted set of fluid parameters were constraining the system in such a way that it was entirely prevented from reaching the MEP state. Hence, in the present work, we used a different set of boundary parameters, such that the steady states of the system were in the local vicinity of the MEP state. If MEP were indeed an attractor, relaxing those constraints of our previous work should have caused a discrete perturbation to the surface of steady state heat flux values near the value corresponding to MEP. We found no such perturbation, and hence no discernible attraction to the MEP state. Furthermore, systems with fixed flux BCs actually minimize their entropy production (relative to the alternative stable state, that of pure diffusive heat transport). This leads us to conclude that the principle of MEP is not an accurate indicator of which stable steady state a convective system will adopt. However, for all BCs considered, the quotient of heat flux and temperature difference, F/ΔT—which is proportional to the dimensionless Nusselt number—does appear to be maximized. Full article
(This article belongs to the Section Thermodynamics)

Open Access Communication EEG-Based Person Authentication Using a Fuzzy Entropy-Related Approach with Two Electrodes
Entropy 2016, 18(12), 432; doi:10.3390/e18120432
Received: 6 October 2016 / Revised: 26 November 2016 / Accepted: 29 November 2016 / Published: 2 December 2016
Cited by 2 | PDF Full-text (2854 KB) | HTML Full-text | XML Full-text
Abstract
Person authentication based on electroencephalography (EEG) signals is one possible direction in the study of EEG signals. In this paper, a method for selecting EEG electrodes and features in a discriminative manner is proposed. Given that EEG signals are unstable and non-linear, a non-linear analysis method, i.e., fuzzy entropy, is more appropriate. In this paper, unlike other methods using different signal sources and patterns, such as the rest state and motor imagery, a novel paradigm using the stimuli of self-photos and non-self-photos is introduced. Ten subjects were selected to take part in this experiment, and fuzzy entropy is used as a feature to select the minimum number of electrodes that identifies individuals. The experimental results show that the proposed method can make use of two electrodes (FP1 and FP2) in the frontal area, while the classification accuracy remains greater than 87.3%. The proposed biometric system, based on EEG signals, can provide each subject with a unique key and is capable of human recognition. Full article
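For readers who want the flavor of the fuzzy entropy feature, a univariate sketch in the style of Chen et al.'s FuzzyEn follows; the parameters (m, r, n) are conventional defaults, not values taken from this paper:

```python
import numpy as np

def fuzzy_entropy(x, m=2, r=0.2, n=2):
    """Fuzzy entropy of a 1D signal; r is scaled by the signal's SD."""
    x = np.asarray(x, dtype=float)
    tol = r * x.std()

    def phi(dim):
        N = len(x) - dim
        vecs = np.array([x[i:i + dim] for i in range(N)])
        vecs -= vecs.mean(axis=1, keepdims=True)    # remove local baseline
        # Chebyshev distances between all pairs of embedded vectors.
        d = np.abs(vecs[:, None, :] - vecs[None, :, :]).max(axis=2)
        sim = np.exp(-(d ** n) / tol)               # fuzzy similarity degree
        np.fill_diagonal(sim, 0.0)                  # exclude self-matches
        return sim.sum() / (N * (N - 1))

    return float(np.log(phi(m)) - np.log(phi(m + 1)))

rng = np.random.default_rng(1)
print(fuzzy_entropy(rng.standard_normal(300)))    # irregular: higher value
print(fuzzy_entropy(np.sin(np.arange(300) / 5)))  # regular: lower value
```

In the paper's setting, such values computed per electrode serve as the discriminative features from which FP1 and FP2 are selected.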

Open Access Article Foliations-Webs-Hessian Geometry-Information Geometry-Entropy and Cohomology
Entropy 2016, 18(12), 433; doi:10.3390/e18120433
Received: 1 June 2016 / Accepted: 16 November 2016 / Published: 2 December 2016
PDF Full-text (562 KB) | HTML Full-text | XML Full-text
Abstract IN MEMORIAM OF ALEXANDER GROTHENDIECK. THE MAN. Full article
(This article belongs to the Special Issue Differential Geometrical Theory of Statistics) Printed Edition available

Open Access Article Entropy and Stability Analysis of Delayed Energy Supply–Demand Model
Entropy 2016, 18(12), 434; doi:10.3390/e18120434
Received: 21 October 2016 / Revised: 27 November 2016 / Accepted: 29 November 2016 / Published: 3 December 2016
PDF Full-text (4704 KB) | HTML Full-text | XML Full-text
Abstract
In this paper, a four-dimensional energy supply–demand model with two delays is built. The interactions among the energy demand of east China, the energy supply of west China and the utilization of renewable energy in east China are delayed in this model. We discuss the stability of the system as affected by parameters and the existence of Hopf bifurcation at the equilibrium point from two aspects: a single delay and two delays. The stability and complexity of the system are demonstrated through bifurcation diagrams, Poincaré section plots, entropy diagrams, etc., in numerical simulation. The simulation results show that parameters beyond the stable region will cause the system to be unstable and increase its complexity. At this point, because of energy supply–demand fluctuations, it is difficult to formulate energy policies. Finally, bifurcation control is realized successfully by the method of delayed feedback control. The results of the bifurcation control simulation indicate that the system can return to a stable state by adjusting the control parameter. In addition, we find that the bigger the value of the control parameter, the better the effect of the bifurcation control. The results of this paper can help maintain the stability of the system, which will be conducive to energy scheduling. Full article
(This article belongs to the Section Complexity)

Open Access Article Intra-Day Trading System Design Based on the Integrated Model of Wavelet De-Noise and Genetic Programming
Entropy 2016, 18(12), 435; doi:10.3390/e18120435
Received: 13 October 2016 / Revised: 24 November 2016 / Accepted: 1 December 2016 / Published: 6 December 2016
PDF Full-text (536 KB) | HTML Full-text | XML Full-text
Abstract
Technical analysis has proved capable of exploiting short-term fluctuations in financial markets. Recent results indicate that the market timing approach beats many traditional buy-and-hold approaches in most short-term trading periods. Genetic programming (GP) has been used to generate short-term trading rules on stock markets over the last few decades. However, few of the related studies on the analysis of financial time series with genetic programming have considered the non-stationary and noisy characteristics of the time series. In this paper, to de-noise the original financial time series and to search for profitable trading rules, an integrated method is proposed based on the Wavelet Threshold (WT) method and GP. Since relevant information that affects the movement of the time series is assumed to be fully digested during the periods when the market is closed, intra-day high-frequency time series are used to avoid the jump points of daily or monthly data and to fully exploit the short-term forecasting advantage of technical analysis. To validate the proposed integrated approach, an empirical study is conducted based on the China Securities Index (CSI) 300 futures in the emerging China Financial Futures Exchange (CFFEX) market. The analysis outcomes show that the wavelet de-noise approach outperforms many comparative models. Full article
(This article belongs to the collection Advances in Applied Statistical Mechanics)

Open Access Article Numerical Study of Entropy Generation in Mixed MHD Convection in a Square Lid-Driven Cavity Filled with Darcy–Brinkman–Forchheimer Porous Medium
Entropy 2016, 18(12), 436; doi:10.3390/e18120436
Received: 3 October 2016 / Revised: 22 November 2016 / Accepted: 22 November 2016 / Published: 6 December 2016
PDF Full-text (2182 KB) | HTML Full-text | XML Full-text
Abstract
This investigation deals with the numerical simulation of entropy generation in mixed convection flow in a lid-driven saturated porous cavity subjected to a magnetic field. The magnetic field is applied in the direction normal to the cavity cross section. The governing equations, written in the Darcy–Brinkman–Forchheimer formulation, are solved using a numerical code based on the Control Volume Finite Element Method. The flow structure and heat transfer are presented in the form of streamlines, isotherms and the average Nusselt number. The entropy generation was studied for various values of the Darcy number (10⁻³ ≤ Da ≤ 1) and for a range of the Hartmann number (0 ≤ Ha ≤ 10²). It was found that entropy generation is affected by variations of the considered dimensionless physical parameters. Moreover, the form drag related to the Forchheimer effect remains significant up to a critical Hartmann number value. Full article
(This article belongs to the Special Issue Advances in Applied Thermodynamics II)

Open Access Article Static Einstein–Maxwell Magnetic Solitons and Black Holes in an Odd Dimensional AdS Spacetime
Entropy 2016, 18(12), 438; doi:10.3390/e18120438
Received: 28 October 2016 / Revised: 28 November 2016 / Accepted: 1 December 2016 / Published: 8 December 2016
Cited by 3 | PDF Full-text (6128 KB) | HTML Full-text | XML Full-text
Abstract
We construct a new class of Einstein–Maxwell static solutions with a magnetic field in D dimensions (with D ≥ 5 an odd number), approaching at infinity a globally Anti-de Sitter (AdS) spacetime. In addition to the mass, the new solutions possess an extra parameter associated with a non-zero magnitude of the magnetic potential at infinity. Some of the black holes possess a non-trivial zero-horizon-size limit, which corresponds to a solitonic deformation of the AdS background. Full article
(This article belongs to the Special Issue Black Hole Thermodynamics II)

Open Access Article The Optimal Confidence Intervals for Agricultural Products’ Price Forecasts Based on Hierarchical Historical Errors
Entropy 2016, 18(12), 439; doi:10.3390/e18120439
Received: 14 October 2016 / Revised: 28 November 2016 / Accepted: 29 November 2016 / Published: 8 December 2016
PDF Full-text (8291 KB) | HTML Full-text | XML Full-text
Abstract
With the levels of confidence and system complexity, interval forecasts and entropy analysis can deliver more information than point forecasts. In this paper, we take receivers’ demands as our starting point, use the trade-off model between accuracy and informativeness as the criterion to construct the optimal confidence interval, derive the theoretical formula of the optimal confidence interval and propose a practical and efficient algorithm based on entropy theory and complexity theory. In order to improve the estimation precision of the error distribution, the point prediction errors are stratified according to prices and the complexity of the system; the corresponding prediction error samples are obtained by this price stratification; and the error distributions are estimated by the kernel function method and the stability of the system. In a stable and orderly environment for price forecasting, we obtain point prediction error samples by the weighted local region and RBF (radial basis function) neural network methods, forecast the intervals of the soybean meal and non-GMO (genetically modified organism) soybean continuous futures closing prices and implement unconditional coverage, independence and conditional coverage tests for the simulation results. The empirical results are compared across various interval evaluation indicators, different levels of noise, several target confidence levels and different point prediction methods. The analysis shows that the optimal interval construction method is better than the equal probability method and the shortest interval method and has good anti-noise ability with the reduction of system entropy; the hierarchical estimation error method can obtain higher accuracy and better interval estimation than the non-hierarchical method in a stable system. Full article

Open Access Article Supply Chain Strategies for Quality Inspection under a Customer Return Policy: A Game Theoretical Approach
Entropy 2016, 18(12), 440; doi:10.3390/e18120440
Received: 9 November 2016 / Revised: 27 November 2016 / Accepted: 2 December 2016 / Published: 8 December 2016
PDF Full-text (1652 KB) | HTML Full-text | XML Full-text
Abstract
This paper outlines quality inspection strategies in a supplier–buyer supply chain under a customer return policy. It focuses primarily on product quality and quality inspection techniques to maximize the actors’ and the supply chain’s profits using a game-theoretic approach. The supplier–buyer setup is described in terms of a textile manufacturer–retailer supply chain, where quality inspection is an important aspect and product returns from customers are generally accepted. The textile manufacturer produces the product, whereas the retailer acts as a reseller who buys the products from the textile manufacturer and sells them to the customers. In this context, the former invests in product quality whereas the latter invests in random quality inspection and traceability. The relationships between the textile manufacturer and the retailer are recognized as horizontal and vertical alliances and modeled using non-cooperative and cooperative games. The non-cooperative games are based on the Stackelberg and Nash equilibrium models. Further, bargaining and game change scenarios are discussed to maximize the profit under different games. To understand the appropriateness of a strategic alliance, a computational study demonstrates the textile manufacturer–retailer relation under different game scenarios. Full article
(This article belongs to the Section Information Theory)

Open Access Article Construction of Fractional Repetition Codes with Variable Parameters for Distributed Storage Systems
Entropy 2016, 18(12), 441; doi:10.3390/e18120441
Received: 2 November 2016 / Revised: 1 December 2016 / Accepted: 2 December 2016 / Published: 8 December 2016
PDF Full-text (383 KB) | HTML Full-text | XML Full-text
Abstract
In this paper, we propose a new class of regular fractional repetition (FR) codes constructed from perfect difference families and quasi-perfect difference families to store big data in distributed storage systems. The main advantage of the proposed construction method is that it supports a wide range of code parameter values compared to existing ones, which is an important feature to be adopted in practical systems. When using one instance of the proposed codes for a given parameter set, we show that the amount of stored data is very close to that of an existing state-of-the-art optimal FR code. Full article
(This article belongs to the Section Information Theory)
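A hedged sketch of the classical difference-set construction this family of codes generalizes, using the (7, 3, 1) perfect difference set {1, 2, 4} in Z7 as a standing example (the paper's perfect and quasi-perfect difference families are more general):

```python
from collections import Counter
from itertools import combinations

v, D = 7, (1, 2, 4)  # a (7, 3, 1) perfect difference set in Z_7

# Sanity check: every non-zero residue arises exactly once as a difference.
diffs = [(a - b) % v for a, b in combinations(D, 2)]
diffs += [(b - a) % v for a, b in combinations(D, 2)]
assert sorted(diffs) == list(range(1, v))

# Storage node i holds the translate block B_i = {d + i (mod v) : d in D}.
blocks = [sorted((d + i) % v for d in D) for i in range(v)]
for i, block in enumerate(blocks):
    print(f"node {i}: symbols {block}")

# Each of the v symbols is replicated on exactly |D| = 3 nodes, giving a
# regular fractional repetition code with repetition degree 3.
replication = Counter(s for block in blocks for s in block)
assert all(count == len(D) for count in replication.values())
```

Varying v and the underlying difference family is what lets the proposed construction cover the wide range of code parameters the abstract highlights.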

Open Access Article Guaranteed Bounds on Information-Theoretic Measures of Univariate Mixtures Using Piecewise Log-Sum-Exp Inequalities
Entropy 2016, 18(12), 442; doi:10.3390/e18120442
Received: 20 October 2016 / Revised: 4 December 2016 / Accepted: 5 December 2016 / Published: 9 December 2016
Cited by 2 | PDF Full-text (766 KB) | HTML Full-text | XML Full-text
Abstract
Information-theoretic measures, such as the entropy, the cross-entropy and the Kullback–Leibler divergence between two mixture models, are core primitives in many signal processing tasks. Since the Kullback–Leibler divergence of mixtures provably does not admit a closed-form formula, it is in practice either estimated using costly Monte Carlo stochastic integration, approximated or bounded using various techniques. We present a fast and generic method that algorithmically builds closed-form lower and upper bounds on the entropy, the cross-entropy, the Kullback–Leibler and the α-divergences of mixtures. We illustrate this versatile method by reporting our experiments for approximating the Kullback–Leibler and the α-divergences between univariate exponential mixtures, Gaussian mixtures, Rayleigh mixtures and Gamma mixtures. Full article
(This article belongs to the Special Issue Differential Geometrical Theory of Statistics) Printed Edition available
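The elementary inequality underlying the method is worth stating; this is the standard (non-piecewise) log-sum-exp bound, which the paper refines piecewise:

```latex
% For a mixture m(x) = \sum_i w_i p_i(x), put a_i(x) = \log w_i + \log p_i(x).
\max_i a_i(x) \;\le\; \log m(x) = \log \sum_{i=1}^{k} e^{a_i(x)} \;\le\; \max_i a_i(x) + \log k
```

Integrating such pointwise envelopes of log m(x) yields deterministic lower and upper bounds on entropies and cross-entropies, with no Monte Carlo sampling.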

Open Access Article The Evaluation of Noise Spectroscopy Tests
Entropy 2016, 18(12), 443; doi:10.3390/e18120443
Received: 7 October 2016 / Revised: 5 December 2016 / Accepted: 5 December 2016 / Published: 10 December 2016
Cited by 1 | PDF Full-text (6824 KB) | HTML Full-text | XML Full-text
Abstract
The paper discusses mathematical tools to evaluate a novel noise-spectroscopy-based analysis and describes, via physical similarity, the mathematical models expressing the quantitative character of the modeled task. Using the Stefan–Boltzmann law, the authors show how to find the spectral density of the radiated power of a hemisphere, and, for the selected frequency interval and temperature, they compare the simplified models with the expression of noise spectral density according to the Johnson–Nyquist formula or Nyquist’s expression of the function of spectral density based on a derivation of Planck’s law. The related measurements and evaluations, together with analyses of the noise spectroscopy of periodic resonant structures, are also outlined in the given context. Full article
(This article belongs to the Section Complexity)

Open Access Article Determining the Optimum Inner Diameter of Condenser Tubes Based on Thermodynamic Objective Functions and an Economic Analysis
Entropy 2016, 18(12), 444; doi:10.3390/e18120444
Received: 1 October 2016 / Revised: 3 December 2016 / Accepted: 6 December 2016 / Published: 10 December 2016
PDF Full-text (2631 KB) | HTML Full-text | XML Full-text
Abstract
The diameter and configuration of tubes are important design parameters of power condensers. If a proper tube diameter is applied during the design of a power unit, a high energy efficiency of the condenser itself can be achieved and the performance of the whole power generation unit can be improved. If a tube assembly is to be replaced, one should verify whether the chosen condenser tube diameter is correct. Using a diameter that is too large increases the heat transfer area, leading to over-dimensioning and higher costs of building the condenser. On the other hand, if the diameter is too small, water flows faster through the tubes, which results in larger flow resistance and larger pumping power of the cooling-water pump. Both simple and complex methods can be applied to determine the condenser tube diameter. The paper proposes a method of technical and economic optimisation taking into account the performance of a condenser, the low-pressure (LP) part of a turbine, and a cooling-water pump as well as the profit from electric power generation and costs of building the condenser and pumping cooling water. The results obtained by this method were compared with those provided by the following simpler methods: minimization of the entropy generation rate per unit length of a condenser tube (considering entropy generation due to heat transfer and resistance of cooling-water flow), minimization of the total entropy generation rate (considering entropy generation for the system comprising the LP part of the turbine, the condenser, and the cooling-water pump), and maximization of the power unit’s output. The proposed methods were used to verify the diameters of tubes in power condensers in 200-MW and 500-MW power units. Full article
(This article belongs to the Special Issue Entropy in Computational Fluid Dynamics)

Open Access Article Multivariate Surprisal Analysis of Gene Expression Levels
Entropy 2016, 18(12), 445; doi:10.3390/e18120445
Received: 14 November 2016 / Revised: 6 December 2016 / Accepted: 7 December 2016 / Published: 11 December 2016
PDF Full-text (3369 KB) | HTML Full-text | XML Full-text | Supplementary Files
Abstract
We consider here multivariate data, by which we mean data where each data point i is measured for two or more distinct variables. In a typical situation there are many data points i while the range of the different variables is more limited. If there is only one variable then the data can be arranged as a rectangular matrix where i is the index of the rows while the values of the variable label the columns. We begin here with this case, but then proceed to the more general case with special emphasis on two variables, when the data can be organized as a tensor. An analysis of such multivariate data by a maximal entropy approach is discussed and illustrated for gene expressions in four different cell types of six different patients. The different genes are indexed by i, and there are 24 (4 by 6) entries for each i. We used an unbiased thermodynamic maximal-entropy based approach (surprisal analysis) to analyze the multivariate transcriptional profiles. The measured microarray experimental data is organized as a tensor array where the two minor orthogonal directions are the different patients and the different cell types. The entries are the transcription levels on a logarithmic scale. We identify a disease signature of prostate cancer and determine the degree of variability between individual patients. Surprisal analysis determined a baseline expression level common for all cells and patients. We identify the transcripts in the baseline as the “housekeeping” genes that ensure cell stability. The baseline and two surprisal patterns satisfactorily recover (99.8%) the multivariate data. The two patterns characterize the individuality of the patients and, to a lesser extent, the commonality of the disease. The immune response was identified as the most significant pathway contributing to the cancer disease pattern. Delineating patient variability is a central issue in personalized diagnostics, and it remains to be seen if additional data will confirm the power of multivariate analysis to address this key point. The collapsed limits where the data is compacted into two-dimensional arrays are contained within the proposed formalism. Full article
(This article belongs to the Special Issue Multivariate Entropy Measures and Their Applications)

Open Access Article Effects Induced by the Initial Condition in the Quantum Kibble–Zurek Scaling for Changing the Symmetry-Breaking Field
Entropy 2016, 18(12), 446; doi:10.3390/e18120446
Received: 18 November 2016 / Revised: 7 December 2016 / Accepted: 9 December 2016 / Published: 14 December 2016
PDF Full-text (2420 KB) | HTML Full-text | XML Full-text
Abstract
The Kibble–Zurek scaling describes the driven critical dynamics starting with an equilibrium state far away from the critical point. Recently, it has been shown that scaling behaviors also exist when the fluctuation term changes starting near the critical point. In this case, the relevant initial conditions should be included in the scaling theory as additional scaling variables. Here, we study the driven quantum critical dynamics in which a symmetry-breaking field is linearly changed starting from the vicinity of the critical point. We find that, similar to the case of changing the fluctuation term, scaling behaviors in the driven dynamics can be described by the Kibble–Zurek scaling with the initial symmetry-breaking field being included as its additional scaling variable. Both the cases of zero and finite temperatures are considered, and the scaling forms of the order parameter and the entanglement entropy are obtained. We numerically verify the scaling theory by taking the quantum Ising model as an example. Full article
(This article belongs to the Special Issue Entropy in Quantum Systems and Quantum Field Theory (QFT))

Open Access Article Quantum Thermodynamics with Degenerate Eigenstate Coherences
Entropy 2016, 18(12), 447; doi:10.3390/e18120447
Received: 27 October 2016 / Revised: 6 December 2016 / Accepted: 9 December 2016 / Published: 15 December 2016
Cited by 1 | PDF Full-text (488 KB) | HTML Full-text | XML Full-text
Abstract
We establish quantum thermodynamics for open quantum systems weakly coupled to their reservoirs when the system exhibits degeneracies. The first and second law of thermodynamics are derived, as well as a finite-time fluctuation theorem for mechanical work and energy and matter currents. Using a double quantum dot junction model, local eigenbasis coherences are shown to play a crucial role on thermodynamics and on the electron counting statistics. Full article
(This article belongs to the Special Issue Quantum Thermodynamics)

Open Access Article The Kullback–Leibler Information Function for Infinite Measures
Entropy 2016, 18(12), 448; doi:10.3390/e18120448
Received: 21 July 2016 / Revised: 1 December 2016 / Accepted: 12 December 2016 / Published: 15 December 2016
PDF Full-text (294 KB) | HTML Full-text | XML Full-text
Abstract
In this paper, we introduce the Kullback–Leibler information function ρ(ν, μ) and prove the local large deviation principle for σ-finite measures μ and finitely additive probability measures ν. In particular, the entropy of a continuous probability distribution ν on the real axis is interpreted as the exponential rate of asymptotics for the Lebesgue measure of the set of those samples that generate empirical measures close to ν in a suitable fine topology. Full article
(This article belongs to the Section Information Theory)
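For orientation, in the familiar absolutely continuous probability case, ρ(ν, μ) reduces to the classical Kullback–Leibler divergence (a standard fact, stated here under the usual sign convention rather than as the paper's general definition):

```latex
\rho(\nu, \mu) = \int \log \frac{d\nu}{d\mu} \, d\nu, \qquad \nu \ll \mu
```

The paper's point is to make sense of this quantity, and of the associated large deviation asymptotics, when μ is merely σ-finite and ν merely finitely additive.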
Open Access Article Entropy-Constrained Scalar Quantization with a Lossy-Compressed Bit
Entropy 2016, 18(12), 449; doi:10.3390/e18120449
Received: 8 September 2016 / Revised: 7 December 2016 / Accepted: 12 December 2016 / Published: 16 December 2016
PDF Full-text (504 KB) | HTML Full-text | XML Full-text
Abstract
We consider the compression of a continuous real-valued source X using scalar quantizers and average squared error distortion D. Using lossless compression of the quantizer’s output, Gish and Pierce showed that uniform quantizing yields the smallest output entropy in the limit D → 0, resulting in a rate penalty of 0.255 bits/sample above the Shannon Lower Bound (SLB). We present a scalar quantization scheme named lossy-bit entropy-constrained scalar quantization (Lb-ECSQ) that is able to reduce the D → 0 gap to the SLB to 0.251 bits/sample by combining both lossless and binary lossy compression of the quantizer’s output. We also study the low-resolution regime and show that Lb-ECSQ significantly outperforms ECSQ in the case of 1-bit quantization. Full article
(This article belongs to the Section Information Theory)
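The 0.255 figure is easy to verify numerically; a sketch of that classical high-resolution calculation (our check, not the paper's code):

```python
import math

# At high resolution, a uniform quantizer with step size delta has mean
# squared error D = delta**2 / 12 and output entropy ~ h(X) - log2(delta),
# while the Shannon Lower Bound at distortion D is h(X) - 0.5*log2(2*pi*e*D).
# The gap is 0.5*log2(2*pi*e/12) = 0.5*log2(pi*e/6), independent of delta.
gap = 0.5 * math.log2(math.pi * math.e / 6)
print(f"gap to SLB: {gap:.4f} bits/sample")  # ~0.2546, the quoted 0.255
```

Lb-ECSQ's contribution is shaving this constant to 0.251 bits/sample by additionally allowing binary lossy compression of one output bit.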

Open Access Article Predicting the Outcome of NBA Playoffs Based on the Maximum Entropy Principle
Entropy 2016, 18(12), 450; doi:10.3390/e18120450
Received: 26 September 2016 / Revised: 10 December 2016 / Accepted: 10 December 2016 / Published: 16 December 2016
PDF Full-text (884 KB) | HTML Full-text | XML Full-text
Abstract
Predicting the outcome of National Basketball Association (NBA) matches poses a challenging problem of interest to the research community as well as the general public. In this article, we formalize the problem of predicting NBA game results as a classification problem and apply the principle of Maximum Entropy to construct an NBA Maximum Entropy (NBAME) model that fits to discrete statistics for NBA games, and then predict the outcomes of NBA playoffs using the model. Our results reveal that the model is able to predict the winning team with 74.4% accuracy, outperforming other classical machine learning algorithms that could only afford a maximum prediction accuracy of 70.6% in the experiments that we performed. Full article
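Since a conditional maximum-entropy classifier over indicator features is equivalent to multinomial logistic regression, a minimal stand-in for the NBAME model can be sketched as follows (the feature buckets and data are invented; the paper's features and fitting procedure may differ):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import OneHotEncoder

# Toy discretized per-game team statistics: each row a game, each column
# a binned stat (e.g., field-goal %, rebounds, assists -> low/mid/high).
X_raw = np.array([[2, 1, 0], [0, 2, 1], [2, 2, 2], [1, 0, 0],
                  [2, 1, 1], [0, 0, 2], [1, 2, 0], [2, 0, 1]])
y = np.array([1, 0, 1, 0, 1, 0, 0, 1])  # 1 = home team wins

# Indicator features of discrete bins: the usual MaxEnt feature form.
enc = OneHotEncoder(handle_unknown="ignore")
X = enc.fit_transform(X_raw)

# Logistic regression maximizes conditional entropy subject to
# feature-expectation constraints, i.e., it is a (regularized) MaxEnt model.
clf = LogisticRegression(max_iter=1000).fit(X, y)
print(clf.predict(enc.transform([[2, 1, 2]])))
```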

Open Access Article Linear Quantum Entropy and Non-Hermitian Hamiltonians
Entropy 2016, 18(12), 451; doi:10.3390/e18120451
Received: 18 October 2016 / Revised: 4 December 2016 / Accepted: 13 December 2016 / Published: 16 December 2016
PDF Full-text (257 KB) | HTML Full-text | XML Full-text
Abstract
We consider the description of open quantum systems with probability sinks (or sources) in terms of general non-Hermitian Hamiltonians. Within such a framework, we study novel possible definitions of the quantum linear entropy as an indicator of the flow of information during the dynamics. Such linear entropy functionals are necessary in the case of a partially Wigner-transformed non-Hermitian Hamiltonian (which is typically useful within a mixed quantum-classical representation). Both the case of a system represented by a pure non-Hermitian Hamiltonian and that of non-Hermitian dynamics in a classical bath are explicitly considered. Full article
(This article belongs to the Special Issue Entropy in Quantum Systems and Quantum Field Theory (QFT))
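As a generic illustration (not the paper's specific functionals), the unnormalized linear entropy S = 1 − Tr ρ² can track probability flowing into a sink under a non-Hermitian Hamiltonian H = H₀ − iΓ; for a pure state with survival probability p = Tr ρ, one has S = 1 − p²:

```python
import numpy as np

# Two-level system with a decay (sink) on the second level.
# All matrices below are illustrative.

H0 = np.array([[1.0, 0.3], [0.3, -1.0]])   # Hermitian part
Gamma = np.diag([0.0, 0.1])                # anti-Hermitian part: sink
H = H0 - 1j * Gamma

psi = np.array([1.0, 1.0], dtype=complex) / np.sqrt(2.0)
dt = 0.01
for _ in range(500):
    psi = psi - 1j * dt * (H @ psi)        # crude Euler propagation

rho = np.outer(psi, psi.conj())
p = np.trace(rho).real                     # survival probability < 1
s_lin = 1.0 - np.trace(rho @ rho).real     # equals 1 - p**2 for a pure state
print(f"survival probability: {p:.3f}, linear entropy: {s_lin:.3f}")
```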
Open AccessArticle Ranking DMUs by Comparing DEA Cross-Efficiency Intervals Using Entropy Measures
Entropy 2016, 18(12), 452; doi:10.3390/e18120452
Received: 7 November 2016 / Revised: 12 December 2016 / Accepted: 14 December 2016 / Published: 17 December 2016
PDF Full-text (245 KB) | HTML Full-text | XML Full-text
Abstract
Cross-efficiency evaluation, an extension of data envelopment analysis (DEA), can eliminate unrealistic weighting schemes and provide a ranking for decision-making units (DMUs). The literature has paid more attention to determining input and output weights uniquely; however, the problem of choosing between the aggressive (minimal) and benevolent (maximal) formulations for decision-making may still remain. In this paper, we develop a procedure to perform cross-efficiency evaluation without the need to make any specific choice of DEA weights. The proposed procedure takes the aggressive and benevolent formulations into account at the same time, so that the choice of DEA weights can be avoided. Consequently, a cross-efficiency interval is obtained for each DMU. Entropy, a concept from information theory, is an effective tool for measuring uncertainty, and we utilize it to construct a numerical index for DMUs with cross-efficiency intervals, as sketched below. A mathematical program is proposed to find the optimal entropy values of DMUs for comparison. With the derived entropy values, we can rank DMUs accordingly. Two examples illustrate the effectiveness of the proposed approach. Full article
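The uncertainty measure referred to is the Shannon entropy; for weights p₁, …, pₙ obtained by normalizing values drawn from the cross-efficiency intervals (a generic form, the paper's exact index construction is not reproduced here):

```latex
H(p_1,\dots,p_n) = -\sum_{i=1}^{n} p_i \ln p_i,
\qquad p_i \ge 0, \quad \sum_{i=1}^{n} p_i = 1,
```

which is maximal for the uniform distribution and zero when a single pᵢ equals 1.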
Open AccessArticle Entropic Citizenship Behavior and Sustainability in Urban Organizations: Towards a Theoretical Model
Entropy 2016, 18(12), 453; doi:10.3390/e18120453
Received: 1 November 2016 / Revised: 13 December 2016 / Accepted: 14 December 2016 / Published: 19 December 2016
PDF Full-text (375 KB) | HTML Full-text | XML Full-text
Abstract
Entropy is a concept derived from physics that has been used to describe the structure and behavior of natural and social systems. Applications of the concept in the social sciences have so far been largely limited to the disciplines of economics and sociology. In the current paper, the concept of entropy is applied to organizational citizenship behavior, with implications for urban organizational sustainability. A heuristic is presented for analysing the personal and organizational citizenship configurations and distributions within a given workforce that can lead to corporate entropy, and for taking prescriptive remedial steps to manage the process should entropy from this source threaten the organization's sustainability and survival. Full article
(This article belongs to the Special Issue Entropy for Sustainable and Resilient Urban Future)
Open AccessArticle Grey Coupled Prediction Model for Traffic Flow with Panel Data Characteristics
Entropy 2016, 18(12), 454; doi:10.3390/e18120454
Received: 13 September 2016 / Revised: 22 November 2016 / Accepted: 12 December 2016 / Published: 20 December 2016
Cited by 1 | PDF Full-text (2136 KB) | HTML Full-text | XML Full-text
Abstract
This paper studies the grey coupled prediction problem for traffic data with panel data characteristics. Traffic flow data collected continuously at the same site typically have panel data characteristics. The longitudinal data (daily flow) are time-series data that show an obvious intra-day trend and can be predicted using the autoregressive integrated moving average (ARIMA) model. The cross-sectional data are composed of observations at the same time intervals on different days and show weekly seasonality and limited-data characteristics; these data can be predicted using the rolling seasonal grey model RSDGM(1,1) (a minimal GM(1,1) sketch follows below). The length of the rolling sequence is determined using matrix perturbation analysis. A coupled model is then established based on the ARIMA and RSDGM(1,1) models; the coupled prediction is achieved at the intersection of the time-series and cross-sectional data, with the weights determined using grey relational analysis. Finally, numerical experiments on 16 groups of cross-sectional data show that the RSDGM(1,1) model has good adaptability and stability and can effectively predict changes in traffic flow. The performance of the coupled model is also better than that of the benchmark model, the coupled model with equal weights, and the Bayesian combination model. Full article
(This article belongs to the Section Information Theory)
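As a reference for the grey-model building block, here is a minimal plain GM(1,1) fit-and-forecast sketch; the rolling window, seasonal adjustment, and ARIMA coupling of the paper's RSDGM(1,1) are not reproduced, and the traffic counts are hypothetical:

```python
import numpy as np

def gm11_forecast(x, steps=1):
    """Fit GM(1,1) to a short positive sequence x and forecast ahead."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    x1 = np.cumsum(x)                          # accumulated generating operation
    z = 0.5 * (x1[1:] + x1[:-1])               # background values
    B = np.column_stack([-z, np.ones(n - 1)])
    a, b = np.linalg.lstsq(B, x[1:], rcond=None)[0]   # grey parameters
    k = np.arange(n + steps)
    x1_hat = (x[0] - b / a) * np.exp(-a * k) + b / a  # time response function
    x_hat = np.diff(x1_hat, prepend=x1_hat[0])        # inverse AGO
    x_hat[0] = x[0]
    return x_hat[-steps:]

flow = [312, 298, 305, 320, 331, 327, 340]     # hypothetical traffic counts
print(gm11_forecast(flow, steps=2))            # two-step-ahead forecast
```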
Open AccessArticle Rényi Divergences, Bures Geometry and Quantum Statistical Thermodynamics
Entropy 2016, 18(12), 455; doi:10.3390/e18120455
Received: 22 November 2016 / Revised: 14 December 2016 / Accepted: 16 December 2016 / Published: 19 December 2016
PDF Full-text (311 KB) | HTML Full-text | XML Full-text
Abstract
The Bures geometry of quantum statistical thermodynamics at thermal equilibrium is investigated by introducing the connections between the Bures angle and the Rényi 1/2-divergence. Fundamental relations concerning free energy, moments of work, and distance are established. Full article
(This article belongs to the Special Issue Entropy in Quantum Systems and Quantum Field Theory (QFT))
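One standard form of the connection invoked here, stated as a sketch since conventions for Rényi divergences vary: with the Uhlmann fidelity F(ρ,σ) = Tr√(√ρ σ √ρ), the Bures angle and the sandwiched Rényi 1/2-divergence are

```latex
\mathcal{A}(\rho,\sigma) = \arccos F(\rho,\sigma),
\qquad
\widetilde{D}_{1/2}(\rho\,\|\,\sigma) = -2\ln F(\rho,\sigma),
```

so that F = cos 𝒜 = exp(−D̃₁/₂ / 2).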
Open AccessArticle A Possible Application of the Contribution of Aromaticity to Entropy: Thermal Switch
Entropy 2016, 18(12), 456; doi:10.3390/e18120456
Received: 2 March 2016 / Revised: 25 October 2016 / Accepted: 26 October 2016 / Published: 20 December 2016
PDF Full-text (993 KB) | HTML Full-text | XML Full-text
Abstract
It has been known for a long time that the loss of aromaticity of gaseous molecules leads to a large increase in enthalpy and a tiny increase in entropy. Generally, the calculated transition temperature from an aromatic towards a non-aromatic structure, at which these two contributions cancel, is very high. The entropy associated with the loss of aromaticity of adsorbed molecules, such as pyridine on Si(100) and on Ge(100), is roughly the same, while the associated enthalpy is much smaller; a consequence is a low transition temperature (see the worked relation below). This allows us to imagine monomolecular devices, such as thermal switches, based on the difference in electrical conductivity between aromatic and non-aromatic species adsorbed on Si(100) or Ge(100). Full article
(This article belongs to the Section Thermodynamics)
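The transition temperature at which the enthalpic and entropic contributions cancel follows from setting the Gibbs free energy change of the aromatic-to-non-aromatic transition to zero:

```latex
\Delta G = \Delta H - T\,\Delta S = 0
\quad\Longrightarrow\quad
T_{\mathrm{trans}} = \frac{\Delta H}{\Delta S},
```

so a much smaller ΔH at a roughly unchanged ΔS for adsorbed molecules directly yields the low transition temperature that the proposed thermal switch exploits.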
Open AccessArticle Monitoring Test for Stability of Dependence Structure in Multivariate Data Based on Copula
Entropy 2016, 18(12), 457; doi:10.3390/e18120457
Received: 17 September 2016 / Revised: 31 October 2016 / Accepted: 19 December 2016 / Published: 21 December 2016
PDF Full-text (312 KB) | HTML Full-text | XML Full-text
Abstract
In this paper, we consider a sequential monitoring procedure for detecting changes in a copula function. We propose a CUSUM-type monitoring test based on the empirical copula function and apply it to the detection of distributional changes in the copula function. We investigate the asymptotic properties of the stopping time and show that, under regularity conditions, its limiting null distribution is the same as that of the supremum of a Kiefer process. Moreover, we utilize the bootstrap method to obtain the limiting distribution. A simulation study and a real data analysis are conducted to evaluate our test. Full article
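The empirical copula underlying the monitoring statistic is straightforward to compute from ranks; a minimal sketch with hypothetical data follows (the sequential CUSUM detector itself is not reproduced):

```python
import numpy as np

def empirical_copula(data, u):
    """C_n(u) = (1/n) #{i : F_n(X_i1) <= u_1, ..., F_n(X_id) <= u_d}."""
    n, _ = data.shape
    # pseudo-observations: normalized ranks in (0, 1]
    pseudo = (np.argsort(np.argsort(data, axis=0), axis=0) + 1) / n
    return np.mean(np.all(pseudo <= np.asarray(u), axis=1))

rng = np.random.default_rng(2)
x = rng.standard_normal((500, 2))
x[:, 1] = 0.7 * x[:, 0] + 0.3 * x[:, 1]     # induce positive dependence
print(empirical_copula(x, [0.5, 0.5]))      # > 0.25 under positive dependence
```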
Review

Jump to: Research

Open AccessReview Application of Shannon Wavelet Entropy and Shannon Wavelet Packet Entropy in Analysis of Power System Transient Signals
Entropy 2016, 18(12), 437; doi:10.3390/e18120437
Received: 9 August 2016 / Revised: 25 November 2016 / Accepted: 2 December 2016 / Published: 7 December 2016
Cited by 1 | PDF Full-text (3823 KB) | HTML Full-text | XML Full-text
Abstract
In a power system, the analysis of transient signals is the theoretical basis of fault diagnosis and transient protection theory. Shannon wavelet entropy (SWE) and Shannon wavelet packet entropy (SWPE) are powerful mathematical tools for transient signal analysis. Drawing on recent achievements regarding SWE and SWPE, we summarize their applications in feature extraction from transient signals and in transient fault recognition. The impact of wavelet aliasing at adjacent scales of the wavelet decomposition on the feature extraction accuracy of SWE and SWPE is analyzed, and the differences between the two measures are compared. These analyses are verified through partial discharge (PD) feature extraction for power cables. Finally, new ideas and directions for further research are proposed concerning the wavelet entropy mechanism, computation speed, and how to overcome wavelet aliasing. Full article
(This article belongs to the Special Issue Wavelets, Fractals and Information Theory II)
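A minimal Shannon wavelet entropy computation, assuming the PyWavelets package (pywt) is available; the signal, wavelet choice, and decomposition depth are illustrative, not taken from the review:

```python
import numpy as np
import pywt

# Shannon wavelet entropy: decompose a transient signal with a
# discrete wavelet transform, then take the Shannon entropy of the
# relative energy per scale.

fs = 10_000
t = np.arange(0, 0.1, 1 / fs)
signal = np.sin(2 * np.pi * 50 * t)
signal[400:420] += 2.0 * np.exp(-np.arange(20) / 5.0)   # injected transient

coeffs = pywt.wavedec(signal, "db4", level=5)           # DWT, 5 scales
energies = np.array([np.sum(c ** 2) for c in coeffs])
p = energies / energies.sum()                           # relative scale energies
swe = -np.sum(p * np.log(p))
print(f"Shannon wavelet entropy: {swe:.3f}")
```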