Table of Contents

Entropy, Volume 20, Issue 1 (January 2018)

  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive the tables of contents of newly released issues.
  • PDF is the official format for papers published in both HTML and PDF forms. To view a paper in PDF format, click on the "PDF Full-text" link and use the free Adobe Reader to open it.
Cover Story: Deltas are undergoing change due to natural and anthropogenic impacts, thus there is a need to [...]
Displaying articles 1-77

Editorial

Jump to: Research, Review

Open Access Editorial: Acknowledgement to Reviewers of Entropy in 2017
Entropy 2018, 20(1), 66; doi:10.3390/e20010066
Received: 15 January 2018 / Accepted: 15 January 2018 / Published: 15 January 2018
PDF Full-text (184 KB) | HTML Full-text | XML Full-text
Abstract
Peer review is an essential part of the publication process, ensuring that Entropy maintains high quality standards for its published papers. [...] Full article

Research

Jump to: Editorial, Review

Open Access Article: Non-Equilibrium Relations for Bounded Rational Decision-Making in Changing Environments
Entropy 2018, 20(1), 1; doi:10.3390/e20010001
Received: 30 July 2017 / Revised: 17 December 2017 / Accepted: 18 December 2017 / Published: 21 December 2017
PDF Full-text (1845 KB) | HTML Full-text | XML Full-text
Abstract
Living organisms from single cells to humans need to adapt continuously to respond to changes in their environment. The process of behavioural adaptation can be thought of as improving decision-making performance according to some utility function. Here, we consider an abstract model of organisms as decision-makers with limited information-processing resources that trade off between maximization of utility and computational costs measured by a relative entropy, in a similar fashion to thermodynamic systems undergoing isothermal transformations. Such systems minimize the free energy to reach equilibrium states that balance internal energy and entropic cost. When there is a fast change in the environment, these systems evolve in a non-equilibrium fashion because they are unable to follow the path of equilibrium distributions. Here, we apply concepts from non-equilibrium thermodynamics to characterize decision-makers that adapt to changing environments under the assumption that the temporal evolution of the utility function is externally driven and does not depend on the decision-maker’s action. This allows one to quantify performance loss due to imperfect adaptation in a general manner and, additionally, to find relations for decision-making similar to Crooks’ fluctuation theorem and Jarzynski’s equality. We provide simulations of several exemplary decision and inference problems in the discrete and continuous domains to illustrate the new relations. Full article
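
A note for readers: the utility-versus-relative-entropy trade-off described in this abstract has a well-known closed form. The sketch below is our minimal illustration of that standard bounded-rationality result, not the authors' code; the toy utilities, names, and parameters are ours. The optimal policy is a Boltzmann-like reweighting of the prior, with the inverse temperature beta setting the information-processing budget.

```python
import numpy as np

def optimal_policy(utility, prior, beta):
    """Bounded-rational policy: p*(a) ∝ p0(a) * exp(beta * U(a)),
    trading expected utility against the KL cost of moving away
    from the prior p0."""
    w = prior * np.exp(beta * utility)
    return w / w.sum()

def free_energy(p, utility, prior, beta):
    """Objective being maximized: E_p[U] - (1/beta) * KL(p || p0)."""
    kl = np.sum(p * np.log(p / prior))
    return np.dot(p, utility) - kl / beta

U = np.array([1.0, 2.0, 4.0])     # toy utilities for three actions
p0 = np.ones(3) / 3               # uniform prior behaviour
for beta in (0.1, 1.0, 10.0):     # growing information-processing budget
    p = optimal_policy(U, p0, beta)
    print(beta, p.round(3), round(free_energy(p, U, p0, beta), 3))
```

As beta grows, the policy concentrates on the highest-utility action; as beta tends to zero, it stays at the prior, recovering the two limits of the trade-off.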

Open Access Article: Rate-Distortion Region of a Gray–Wyner Model with Side Information
Entropy 2018, 20(1), 2; doi:10.3390/e20010002
Received: 29 November 2017 / Revised: 13 December 2017 / Accepted: 15 December 2017 / Published: 22 December 2017
PDF Full-text (1201 KB) | HTML Full-text | XML Full-text
Abstract
In this work, we establish a full single-letter characterization of the rate-distortion region of an instance of the Gray–Wyner model with side information at the decoders. Specifically, in this model, an encoder observes a pair of memoryless, arbitrarily correlated sources (S_1^n, S_2^n) and communicates with two receivers over an error-free rate-limited link of capacity R_0, as well as error-free rate-limited individual links of capacities R_1 to the first receiver and R_2 to the second receiver. Both receivers reproduce the source component S_2^n losslessly, and Receiver 1 also reproduces the source component S_1^n lossily, to within some prescribed fidelity level D_1. In addition, Receiver 1 and Receiver 2 are equipped, respectively, with memoryless side-information sequences Y_1^n and Y_2^n. Importantly, in this setup the side-information sequences are arbitrarily correlated with each other and with the source pair (S_1^n, S_2^n), and are not assumed to exhibit any particular ordering. Furthermore, by specializing the main result to two Heegard–Berger models with successive refinement and scalable coding, we shed light on the roles of the common and private descriptions that the encoder should produce and the role of each of the common and private links. We develop intuitions by analyzing the derived single-letter rate-distortion regions of these models, and discuss some insightful binary examples. Full article
(This article belongs to the Special Issue Rate-Distortion Theory and Information Theory)

Open Access Article: Polar Codes for Covert Communications over Asynchronous Discrete Memoryless Channels
Entropy 2018, 20(1), 3; doi:10.3390/e20010003
Received: 7 August 2017 / Revised: 14 November 2017 / Accepted: 14 November 2017 / Published: 22 December 2017
PDF Full-text (525 KB) | HTML Full-text | XML Full-text | Supplementary Files
Abstract
This paper introduces an explicit covert communication code for binary-input asynchronous discrete memoryless channels based on binary polar codes, in which legitimate parties exploit uncertainty created by both the channel noise and the time of transmission to avoid detection by an adversary. The proposed code jointly ensures reliable communication for a legitimate receiver and a low probability of detection with respect to the adversary, both observing noisy versions of the codewords. Binary polar codes are used to shape the weight distribution of codewords and ensure that the average weight decays as the block length grows. The performance of the proposed code is severely limited by the speed of polarization, which in turn controls the decay of the average codeword weight with the block length. Although the proposed construction falls far short of achieving the performance of random codes, it inherits the low-complexity properties of polar codes. Full article
(This article belongs to the Special Issue Information-Theoretic Security)

Open Access Article: Fog Computing: Enabling the Management and Orchestration of Smart City Applications in 5G Networks
Entropy 2018, 20(1), 4; doi:10.3390/e20010004
Received: 24 November 2017 / Revised: 16 December 2017 / Accepted: 18 December 2017 / Published: 23 December 2017
PDF Full-text (3464 KB) | HTML Full-text | XML Full-text
Abstract
Fog computing extends the cloud computing paradigm by placing resources close to the edges of the network to deal with the upcoming growth of connected devices. Smart city applications, such as health monitoring and predictive maintenance, will introduce a new set of stringent requirements, such as low latency, since resources can be requested on demand simultaneously by multiple devices at different locations. It is then necessary to adapt existing network technologies to future needs and to design new architectural concepts to help meet these strict requirements. This article proposes a fog computing framework enabling autonomous management and orchestration functionalities in 5G-enabled smart cities. Our approach follows the guidelines of the European Telecommunications Standards Institute (ETSI) NFV MANO architecture, extending it with additional software components. The contribution of our work is a fully-integrated fog node management system alongside the proposed application-layer Peer-to-Peer (P2P) fog protocol, based on the Open Shortest Path First (OSPF) routing protocol, for the exchange of application service provisioning information between fog nodes. Evaluations of an anomaly detection use case based on an air monitoring application are presented. Our results show that the proposed framework achieves a substantial reduction in network bandwidth usage and in latency when compared with centralized cloud solutions. Full article
(This article belongs to the Special Issue Information Theory and 5G Technologies)

Open Access Article: State Estimation for General Complex Dynamical Networks with Incompletely Measured Information
Entropy 2018, 20(1), 5; doi:10.3390/e20010005
Received: 16 November 2017 / Revised: 17 December 2017 / Accepted: 20 December 2017 / Published: 23 December 2017
PDF Full-text (538 KB) | HTML Full-text | XML Full-text
Abstract
Estimating the uncertain state variables of a general complex dynamical network with randomly incomplete measurements of transmitted output variables is investigated in this paper. The incomplete measurements, occurring randomly through the transmission of output variables, always cause the failure of the state estimation process. Different from existing methods, we propose a novel method to handle the incomplete measurements, which performs well in balancing estimators that deviate excessively under the influence of incomplete measurements. In particular, the proposed method places no special limitation on the node dynamics, unlike many existing methods. By employing Lyapunov stability theory along with the stochastic analysis method, sufficient criteria are deduced rigorously to ensure obtaining the proper estimator gains with known model parameters. Illustrative simulations for a complex dynamical network composed of chaotic nodes are given to show the validity and efficiency of the proposed method. Full article
(This article belongs to the Special Issue Research Frontier in Chaos Theory and Complex Networks)

Open Access Article: Entropy Measures as Geometrical Tools in the Study of Cosmology
Entropy 2018, 20(1), 6; doi:10.3390/e20010006
Received: 20 October 2017 / Revised: 27 November 2017 / Accepted: 20 December 2017 / Published: 25 December 2017
PDF Full-text (259 KB) | HTML Full-text | XML Full-text
Abstract
Classical chaos is often characterized by exponential divergence of nearby trajectories. In many interesting cases these trajectories can be identified with geodesic curves. We define here the entropy by S = ln χ(x), with χ(x) being the distance between two nearby geodesics. We derive an equation for the entropy, which by transformation to a Riccati-type equation becomes similar to the Jacobi equation. We further show that the geodesic equation for a null geodesic in a double-warped spacetime leads to the same entropy equation. By applying a Robertson–Walker metric for a flat three-dimensional Euclidean space expanding as a function of time, we again reach the entropy equation, stressing the connection between the chosen entropy measure and time. We finally turn to the Raychaudhuri equation for expansion, which is also a Riccati equation similar to the transformed entropy equation. Those Riccati-type equations have solutions of the same form as the Jacobi equation. The Raychaudhuri equation can be transformed to a harmonic oscillator equation, and it has been shown that the geodesic deviation equation of Jacobi is essentially equivalent to that of a harmonic oscillator. The Raychaudhuri equations are strong geometrical tools in the study of general relativity and cosmology. We suggest a refined entropy measure applicable in cosmology and defined by the average deviation of the geodesics in a congruence. Full article
(This article belongs to the Special Issue Advances in Relativistic Statistical Mechanics)
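
For readers who want the compressed step made explicit: assuming the geodesic separation χ satisfies a Jacobi-type deviation equation with curvature term K (the standard form; the paper's own notation may differ), differentiating S = ln χ twice yields the Riccati-type equation mentioned in the abstract:

```latex
% Assumption: \chi obeys a Jacobi-type deviation equation
% \ddot{\chi} + K\chi = 0 with curvature term K.
S = \ln\chi
\quad\Longrightarrow\quad
\dot{S} = \frac{\dot{\chi}}{\chi},
\qquad
\ddot{S} = \frac{\ddot{\chi}}{\chi} - \dot{S}^{2},
\quad\text{hence}\quad
\ddot{S} + \dot{S}^{2} + K = 0 .
```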

Open Access Article: Paths of Cultural Systems
Entropy 2018, 20(1), 8; doi:10.3390/e20010008
Received: 6 November 2017 / Revised: 9 December 2017 / Accepted: 17 December 2017 / Published: 25 December 2017
PDF Full-text (1393 KB) | HTML Full-text | XML Full-text
Abstract
A theory of cultural structures predicts the objects observed by anthropologists. Here we define those structures which use kinship relationships to define systems. A finite structure we call a partially defined quasigroup (or pdq, as stated by Definition 1 below) on a dictionary (called a natural language) allows prediction of certain anthropological descriptions, using homomorphisms of pdqs onto finite groups. A viable history (defined using pdqs) states how an individual in a population following such a history may perform culturally allowed associations, which allows the viable history to continue to survive. The vector states on sets of viable histories identify demographic observables on descent sequences. Paths of vector states on sets of viable histories may determine which histories can exist empirically. Full article
(This article belongs to the Special Issue Quantum Probability and Randomness)

Open Access Article: Analysis and Optimization of Trapezoidal Grooved Microchannel Heat Sink Using Nanofluids in a Micro Solar Cell
Entropy 2018, 20(1), 9; doi:10.3390/e20010009
Received: 15 November 2017 / Revised: 7 December 2017 / Accepted: 20 December 2017 / Published: 25 December 2017
PDF Full-text (8088 KB) | HTML Full-text | XML Full-text
Abstract
It is necessary to control the temperature of solar cells to enhance efficiency as the concentration in multiple photovoltaic systems increases. A heterogeneous two-phase model was established, accounting for the interactions between temperature, viscosity, the flow of the nanofluid, and the motion of nanoparticles within it, in order to study a microchannel heat sink (MCHS) using Al2O3-water nanofluid as the coolant in a photovoltaic system. Numerical simulations were carried out to investigate the thermal performance of an MCHS with a series of trapezoidal grooves. The numerical results showed that (1) better thermal performance of the MCHS using nanofluid is obtained from the heterogeneous two-phase model than from the single-phase model; and (2) the effects of the flow field, volume fraction, and nanoparticle size on heat transfer enhancement in the MCHS can be interpreted through the non-dimensional parameter N_BT (the ratio of Brownian diffusion to thermophoretic diffusion). In addition, the geometrical parameters of the MCHS and the physical parameters of the nanofluid were optimized. This can provide a sound foundation for the design of MCHSs. Full article
(This article belongs to the Special Issue Non-Equilibrium Thermodynamics of Micro Technologies)

Open Access Article: A Novel Method for Multi-Fault Feature Extraction of a Gearbox under Strong Background Noise
Entropy 2018, 20(1), 10; doi:10.3390/e20010010
Received: 2 August 2017 / Revised: 15 September 2017 / Accepted: 28 September 2017 / Published: 26 December 2017
PDF Full-text (10131 KB) | HTML Full-text | XML Full-text
Abstract
Strong background noise and complicated interfering signatures in vibration-based monitoring make it difficult to extract the weak diagnostic features of incipient faults in a multistage gearbox, and this becomes more challenging when multiple faults coexist. This paper proposes an effective approach to extract multi-fault features of a wind turbine gearbox based on an integration of minimum entropy deconvolution (MED) and multipoint optimal minimum entropy deconvolution adjusted (MOMEDA). Using simulated periodic transient signals with different signal-to-noise ratios (SNR), it demonstrates the outstanding performance of MED in noise suppression but reveals its deficiency in extracting multiple impulses. MOMEDA, on the other hand, performs better in extracting multiple impulses but is not robust to noise. To combine their merits, the diagnostic approach therefore extracts the multiple weak features with MOMEDA applied to the MED-denoised signals. Experimental verification based on vibrations from a wind turbine gearbox test bed shows that the approach allows successful identification of multiple faults occurring simultaneously on the shaft and bearing in the high-speed transmission stage of the gearbox. Full article
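
For orientation, the core of classic Wiggins-style MED fits in a few lines: it seeks the FIR filter whose output has maximum kurtosis, via a fixed-point iteration, while MOMEDA instead solves non-iteratively for a filter matched to a periodic train of impulses. The sketch below is our generic illustration of the MED step only, under that standard formulation; it is not the authors' implementation, and the filter length, iteration count, and test signal are arbitrary choices.

```python
import numpy as np

def med_filter(x, L=30, iters=30):
    """Wiggins-style minimum entropy deconvolution: find the FIR
    filter f that maximizes the kurtosis of y = f * x, by iterating
    the stationarity condition R f ∝ X y^3 of the kurtosis objective."""
    x = np.asarray(x, dtype=float)
    N = len(x)
    # Delay-embedded data matrix: row l holds x delayed by l samples.
    X = np.array([x[l:N - L + l + 1] for l in range(L)])
    R = X @ X.T                        # autocorrelation matrix
    f = np.zeros(L)
    f[L // 2] = 1.0                    # start from a centered delta filter
    for _ in range(iters):
        y = f @ X                      # current filter output
        f = np.linalg.solve(R, X @ y**3)
        f /= np.linalg.norm(f)
    return f @ X                       # impulse-enhanced signal

# Toy test: periodic impulses smeared by a decaying path and buried in noise.
rng = np.random.default_rng(0)
s = np.zeros(2000); s[::100] = 1.0
x = np.convolve(s, np.exp(-np.arange(50) / 5.0), "same")
x += 0.1 * rng.standard_normal(len(x))
y = med_filter(x)                      # the impulses stand out in y
```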

Open Access Article: Remarks on the Maximum Entropy Principle with Application to the Maximum Entropy Theory of Ecology
Entropy 2018, 20(1), 11; doi:10.3390/e20010011
Received: 1 November 2017 / Revised: 27 November 2017 / Accepted: 21 December 2017 / Published: 27 December 2017
PDF Full-text (260 KB) | HTML Full-text | XML Full-text
Abstract
In the first part of the paper, we work out the consequences of the fact that Jaynes' Maximum Entropy Principle (MEP), when translated into mathematical terms, is a constrained extremum problem for an entropy function H(p) expressing the uncertainty associated with the probability distribution p. Consequently, if two observers use different independent variables p or g(p), the associated entropy functions have to be defined accordingly, and they differ in the general case. In the second part, we apply our findings to an analysis of the foundations of the Maximum Entropy Theory of Ecology (METE), a purely statistical model of an ecological community. Since the theory has received considerable attention from the scientific community, we hope to make a useful contribution by showing that the procedure for applying MEP, in the light of the theory developed in the first part, suffers from some inconsistencies. We exhibit an alternative formulation which is free from these limitations and gives different results. Full article
(This article belongs to the Special Issue Entropy and Its Applications across Disciplines)
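
A concrete instance of the constrained-extremum reading of MEP discussed in this abstract is Jaynes' dice problem: maximize H(p) subject to normalization and a prescribed mean. The sketch below is a generic textbook illustration, not code from the paper; the Gibbs-form maximizer is found by root-finding on the Lagrange multiplier.

```python
import numpy as np
from scipy.optimize import brentq

# Jaynes' dice: maximize H(p) = -sum(p * ln p) subject to sum(p) = 1 and
# sum(p * x) = mu. The maximizer is the Gibbs form p_i ∝ exp(-lam * x_i),
# with the multiplier lam fixed by the mean constraint.
x = np.arange(1.0, 7.0)    # faces of a die
mu = 4.5                   # prescribed mean (> 3.5, so the die is loaded)

def mean_at(lam):
    w = np.exp(-lam * x)
    return np.dot(w, x) / w.sum()

lam = brentq(lambda l: mean_at(l) - mu, -50.0, 50.0)
p = np.exp(-lam * x)
p /= p.sum()
print(p.round(4), -np.sum(p * np.log(p)))   # MaxEnt distribution and H(p)
```
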
Open Access Article: A New Chaotic System with Multiple Attractors: Dynamic Analysis, Circuit Realization and S-Box Design
Entropy 2018, 20(1), 12; doi:10.3390/e20010012
Received: 17 November 2017 / Revised: 22 December 2017 / Accepted: 25 December 2017 / Published: 27 December 2017
PDF Full-text (5274 KB) | HTML Full-text | XML Full-text
Abstract
This paper reports a novel three-dimensional chaotic system with three nonlinearities. Depending on its parameters, the system has one stable equilibrium; two stable equilibria and one saddle node; or two saddle foci and one saddle node. One salient feature of this novel system is its multiple attractors caused by different initial values. With the change of parameters, the system experiences mono-stability, bi-stability, mono-periodicity, bi-periodicity, one strange attractor, and two coexisting strange attractors. The complex dynamic behaviors of the system are revealed by analyzing the corresponding equilibria and by numerical simulation. In addition, an electronic circuit is given for implementing the chaotic attractors of the system. Using the new chaotic system, an S-Box is developed for cryptographic operations. Moreover, we test the performance of this S-Box and compare it with existing S-Box studies. Full article

Open Access Article: Self-Organization of Genome Expression from Embryo to Terminal Cell Fate: Single-Cell Statistical Mechanics of Biological Regulation
Entropy 2018, 20(1), 13; doi:10.3390/e20010013
Received: 4 December 2017 / Revised: 19 December 2017 / Accepted: 20 December 2017 / Published: 28 December 2017
PDF Full-text (3139 KB) | HTML Full-text | XML Full-text
Abstract
A statistical mechanical mean-field approach to the temporal development of biological regulation provides a phenomenological, but basic, description of the dynamical behavior of genome expression in terms of autonomous self-organization with a critical transition (Self-Organized Criticality: SOC). This approach reveals the basis of self-regulation/organization of genome expression, where the extreme complexity of living matter precludes any strict mechanistic approach. The self-organization in SOC involves two critical behaviors: scaling-divergent behavior (genome avalanche) and sandpile-type critical behavior. Genome avalanche patterns, i.e., the competition between order (scaling) and disorder (divergence), reflect the opposite sequences of events characterizing the self-organization process in embryo development and in helper T cell (Th17) terminal differentiation, respectively. On the other hand, the temporal development of sandpile-type criticality (the degree of SOC control) in the mouse embryo suggests the existence of an SOC control landscape with a critical transition state (i.e., the erasure of zygote-state criticality). This indicates that a phase transition of the mouse genome before and after reprogramming (immediately after the late 2-cell state) occurs through a dynamical change in a control parameter. This result provides a quantitative open-thermodynamic appreciation of the still largely qualitative notion of the epigenetic landscape. Our results suggest: (i) the existence of coherent waves of condensation/de-condensation in chromatin, which are transmitted across regions of different gene-expression levels along the genome; and (ii) that essentially the same critical dynamics we observed for cell-differentiation processes exist in overall RNA expression during embryo development, which is particularly relevant because it gives further proof of SOC control of overall expression as a universal feature. Full article
(This article belongs to the Special Issue Entropy and Its Applications across Disciplines)

Open Access Article: Leakage Evaluation by Virtual Entropy Generation (VEG) Method
Entropy 2018, 20(1), 14; doi:10.3390/e20010014
Received: 15 November 2017 / Revised: 23 December 2017 / Accepted: 28 December 2017 / Published: 29 December 2017
Cited by 1 | PDF Full-text (1925 KB) | HTML Full-text | XML Full-text
Abstract
Leakage through microscale or nanoscale cracks is usually hard to observe, difficult to control, and causes significant economic loss. In the present research, leakage in a pipe was evaluated by the virtual entropy generation (VEG) method, in which the "measured entropy generation" is forced to follow the "experimental second law of thermodynamics". Taking the leakage as the source of virtual entropy generation, a new pipe-leakage evaluation criterion was analytically derived, which indicates that the mass leakage rate should be smaller than the pressure drop rate inside a pipe. A numerical study based on computational fluid dynamics showed the existence of an unrealistic virtual entropy generation at a high mass leakage rate. Finally, the new criterion was applied to leakage cases available in the literature. These results could be useful for leakage control or the design of industry criteria in the future. Full article
(This article belongs to the Special Issue Non-Equilibrium Thermodynamics of Micro Technologies)

Open Access Article: Robust Consensus of Networked Evolutionary Games with Attackers and Forbidden Profiles
Entropy 2018, 20(1), 15; doi:10.3390/e20010015
Received: 2 November 2017 / Revised: 20 December 2017 / Accepted: 28 December 2017 / Published: 29 December 2017
PDF Full-text (287 KB) | HTML Full-text | XML Full-text
Abstract
Using the algebraic state space representation, this paper studies the robust consensus of networked evolutionary games (NEGs) with attackers and forbidden profiles. Firstly, an algebraic form is established for NEGs with attackers and forbidden profiles. Secondly, based on the algebraic form, a necessary and sufficient condition is presented for the robust constrained reachability of NEGs. Thirdly, a series of robust reachable sets is constructed by using the robust constrained reachability, based on which a constructive procedure is proposed to design state feedback controls for the robust consensus of NEGs with attackers and forbidden profiles. Finally, an illustrative example is given to show that the main results are effective. Full article
(This article belongs to the Section Information Theory)

Open Access Article: The Poincaré Half-Plane for Informationally-Complete POVMs
Entropy 2018, 20(1), 16; doi:10.3390/e20010016
Received: 12 October 2017 / Revised: 22 November 2017 / Accepted: 28 December 2017 / Published: 31 December 2017
PDF Full-text (555 KB) | HTML Full-text | XML Full-text
Abstract
It has been shown in previous papers that classes of (minimal asymmetric) informationally-complete positive operator valued measures (IC-POVMs) in dimension d can be built using the multiparticle Pauli group acting on appropriate fiducial states. The latter states may also be derived starting from the Poincaré upper half-plane model H. To do this, one translates the congruence (or non-congruence) subgroups of index d of the modular group into groups of permutation gates, some of the eigenstates of which are the sought fiducials. The structure of some IC-POVMs is found to be intimately related to the Kochen–Specker theorem. Full article
(This article belongs to the Special Issue Quantum Mechanics: From Foundations to Information Technologies)

Open Access Article: Nonlinear Multiscale Entropy and Recurrence Quantification Analysis of Foreign Exchange Markets Efficiency
Entropy 2018, 20(1), 17; doi:10.3390/e20010017
Received: 30 November 2017 / Revised: 26 December 2017 / Accepted: 27 December 2017 / Published: 31 December 2017
PDF Full-text (3959 KB) | HTML Full-text | XML Full-text
Abstract
The regularity of price fluctuations in exchange rates plays a crucial role in foreign exchange (FX) market dynamics. In this paper, we quantify the multiply irregular fluctuation behaviors of the exchange rates of eight world economies over the last 10 years (November 2006–November 2016) with two nonlinear approaches: the recently proposed multiscale weighted permutation entropy (MWPE) and the typical recurrence quantification analysis (RQA) technique. Furthermore, we utilize RQA to study the different intrinsic mode functions (IMFs), representing different frequencies and scales of the raw time series, obtained via the empirical mode decomposition algorithm. Abundant and distinct complexity characteristics are obtained in the foreign exchange markets. The empirical results show that JPY/USD (followed by EUR/USD) exhibits a higher complexity, indicating a relatively higher efficiency of the Japanese FX market, while some economies such as South Korea, Hong Kong, and China show lower and weaker efficiency of their FX markets. Meanwhile, it is suggested that the financial crisis enhanced market efficiency in the FX markets. Full article
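
The two building blocks combined in MWPE, variance-weighted ordinal patterns and coarse-graining across scales, can be sketched generically as follows. This is our illustration of the standard constructions, not the authors' code; the parameter choices and test signal are arbitrary.

```python
import math
from collections import defaultdict
import numpy as np

def weighted_permutation_entropy(x, m=3, tau=1):
    """Ordinal-pattern entropy, each pattern weighted by the variance
    (local energy) of the embedding vector that produced it,
    normalized to [0, 1] by log2(m!)."""
    x = np.asarray(x, dtype=float)
    w = defaultdict(float)
    for i in range(len(x) - (m - 1) * tau):
        v = x[i:i + m * tau:tau]
        w[tuple(np.argsort(v))] += v.var()
    p = np.array(list(w.values()))
    p = p[p > 0] / p.sum()
    return -np.sum(p * np.log2(p)) / math.log2(math.factorial(m))

def coarse_grain(x, scale):
    """Non-overlapping window averages, as in multiscale entropy."""
    n = len(x) // scale
    return np.asarray(x[:n * scale], dtype=float).reshape(n, scale).mean(axis=1)

rng = np.random.default_rng(0)
noise = rng.standard_normal(3000)   # proxy for a maximally efficient market
for s in (1, 2, 5):
    print(s, round(weighted_permutation_entropy(coarse_grain(noise, s)), 3))
```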

Open Access Article: Composite Likelihood Methods Based on Minimum Density Power Divergence Estimator
Entropy 2018, 20(1), 18; doi:10.3390/e20010018
Received: 6 November 2017 / Revised: 26 December 2017 / Accepted: 28 December 2017 / Published: 31 December 2017
PDF Full-text (322 KB) | HTML Full-text | XML Full-text
Abstract
In this paper, a robust version of the Wald test statistic for composite likelihood is considered by using the composite minimum density power divergence estimator instead of the composite maximum likelihood estimator. This new family of test statistics will be called Wald-type test statistics. The problem of testing a simple and a composite null hypothesis is considered, and the robustness is studied on the basis of a simulation study. The composite minimum density power divergence estimator is also introduced, and its asymptotic properties are studied. Full article
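
For readers unfamiliar with the estimator underlying these Wald-type statistics: the minimum density power divergence estimator (MDPDE) of Basu et al. trades a little efficiency for robustness by minimizing an empirical density power divergence. The sketch below is our own one-parameter toy (a normal mean with known sigma), not the paper's composite-likelihood setting; the data and tuning constant alpha are illustrative.

```python
import numpy as np
from scipy.integrate import quad
from scipy.optimize import minimize_scalar
from scipy.stats import norm

def mdpde_normal_mean(x, alpha=0.5, sigma=1.0):
    """Minimum density power divergence estimate of a normal mean with
    known sigma: minimize  ∫ f^(1+alpha) - (1 + 1/alpha) * mean(f(x)^alpha).
    alpha -> 0 recovers the MLE; alpha > 0 buys robustness."""
    def objective(mu):
        integral = quad(lambda t: norm.pdf(t, mu, sigma) ** (1 + alpha),
                        mu - 10 * sigma, mu + 10 * sigma)[0]
        return integral - (1 + 1 / alpha) * np.mean(norm.pdf(x, mu, sigma) ** alpha)
    med = np.median(x)   # search near the median to keep the optimizer in the main basin
    return minimize_scalar(objective, bounds=(med - 3 * sigma, med + 3 * sigma),
                           method="bounded").x

rng = np.random.default_rng(1)
x = np.concatenate([rng.normal(0.0, 1.0, 95), np.full(5, 8.0)])  # 5% outliers
print(np.mean(x), mdpde_normal_mean(x))  # the mean is dragged up; MDPDE barely moves
```
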
Open Access Article: Thermodynamics-Based Evaluation of Various Improved Shannon Entropies for Configurational Information of Gray-Level Images
Entropy 2018, 20(1), 19; doi:10.3390/e20010019
Received: 14 November 2017 / Revised: 17 December 2017 / Accepted: 23 December 2017 / Published: 2 January 2018
PDF Full-text (7871 KB) | HTML Full-text | XML Full-text
Abstract
The quality of an image affects its utility, and image quality assessment has been a hot research topic for many years. One widely used measure for image quality assessment is Shannon entropy, which has a well-established information-theoretic basis; its value can be interpreted as an amount of information. However, Shannon entropy is badly adapted to information measurement in images, because it captures only the compositional information of an image and ignores the configurational aspect. To fix this problem, improved Shannon entropies have been actively proposed in the last few decades, but a thorough evaluation of their performance is still lacking. This study presents such an evaluation, involving twenty-three improved Shannon entropies based on various tools such as gray-level co-occurrence matrices and local binary patterns. For the evaluation, we proposed: (a) a strategy to generate testing (gray-level) images by simulating the mixing of ideal gases in thermodynamics; (b) three criteria consisting of validity, reliability, and the ability to capture configurational disorder; and (c) three measures to assess the fulfillment of each criterion. The evaluation results show that only the improved entropies based on local binary patterns are invalid for quantifying the configurational information of images, and that the best variant of Shannon entropy in terms of reliability and ability is the one based on the average distance between same/different-value pixels. These conclusions are theoretically important in setting a direction for future research on improving entropy and are practically useful in selecting an effective entropy for various image processing applications. Full article
(This article belongs to the Section Information Theory)
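
The compositional-versus-configurational distinction in this abstract is easy to demonstrate: two images with identical histograms receive identical Shannon entropy, while a co-occurrence-based variant separates them. The sketch below is our own minimal illustration (the paper evaluates twenty-three such variants, not these two):

```python
import numpy as np

def shannon_entropy(img, levels=256):
    """Compositional entropy: depends only on the gray-level histogram."""
    p = np.bincount(img.ravel(), minlength=levels) / img.size
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

def glcm_entropy(img, levels=256):
    """Configurational entropy: from horizontal-neighbor co-occurrences."""
    pairs = levels * img[:, :-1].astype(np.int64) + img[:, 1:]
    p = np.bincount(pairs.ravel(), minlength=levels * levels) / pairs.size
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

rng = np.random.default_rng(0)
mixed = rng.integers(0, 2, (64, 64)).astype(np.uint8)  # well-mixed "gas"
demixed = np.sort(mixed, axis=1)                       # same histogram, ordered
print(shannon_entropy(mixed, 2), shannon_entropy(demixed, 2))  # identical
print(glcm_entropy(mixed, 2), glcm_entropy(demixed, 2))        # GLCM separates them
```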

Open Access Article: Liouvillian of the Open STIRAP Problem
Entropy 2018, 20(1), 20; doi:10.3390/e20010020
Received: 25 October 2017 / Revised: 11 December 2017 / Accepted: 20 December 2017 / Published: 3 January 2018
PDF Full-text (707 KB) | HTML Full-text | XML Full-text
Abstract
With the corresponding Liouvillian as a starting point, we demonstrate two seemingly new phenomena of the STIRAP problem when subjected to irreversible losses. It is argued that both of these can be understood from an underlying Zeno effect, and in particular both can be viewed as if the environment assists the STIRAP population transfer. The first of these is found for relatively strong dephasing and, in the language of the Liouvillian, it is explained by the explicit form of the matrix generating the time evolution; the coherence terms of the state decay away, which prohibits further population transfer. For pure dissipation, another Zeno effect is found, where the presence of a non-zero Liouvillian gap protects the system’s (adiabatic) state from non-adiabatic excitations. In contrast to full Zeno freezing of the evolution, which is often found in many problems without explicit time-dependence, here the freezing takes place in the adiabatic basis, such that the system still evolves, but adiabatically. Full article
(This article belongs to the Special Issue Coherence in Open Quantum Systems)

Open Access Feature Paper Article: Fuzzy Entropy Analysis of the Electroencephalogram in Patients with Alzheimer’s Disease: Is the Method Superior to Sample Entropy?
Entropy 2018, 20(1), 21; doi:10.3390/e20010021
Received: 29 November 2017 / Revised: 20 December 2017 / Accepted: 28 December 2017 / Published: 3 January 2018
PDF Full-text (590 KB) | HTML Full-text | XML Full-text | Supplementary Files
Abstract
Alzheimer’s disease (AD) is the most prevalent form of dementia in the world. It is characterised by the loss of neurones and the build-up of plaques in the brain, causing progressive symptoms of memory loss and confusion. Although definite diagnosis is only possible by necropsy, differential diagnosis with other types of dementia is still needed. An electroencephalogram (EEG) is a cheap, portable, non-invasive method to record brain signals. Previous studies with non-linear signal processing methods have shown changes in the EEG due to AD, characterised by reduced complexity and increased regularity. EEGs from 11 AD patients and 11 age-matched control subjects were analysed with Fuzzy Entropy (FuzzyEn), a non-linear method that was introduced as an improvement over the frequently used Approximate Entropy (ApEn) and Sample Entropy (SampEn) algorithms. AD patients had significantly lower FuzzyEn values than control subjects (p < 0.01) at electrodes T6, P3, P4, O1, and O2. Furthermore, when diagnostic accuracy was calculated using Receiver Operating Characteristic (ROC) curves, FuzzyEn outperformed both ApEn and SampEn, reaching a maximum accuracy of 86.36%. These results suggest that FuzzyEn could increase the insight into brain dysfunction in AD, providing potentially useful diagnostic information. However, results depend heavily on the input parameters used to compute FuzzyEn. Full article
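
For readers comparing the algorithms: FuzzyEn replaces SampEn's hard tolerance threshold with a smooth exponential membership of inter-template distances, which reduces sensitivity to the tolerance r. Below is a compact sketch of the standard FuzzyEn formulation, not the authors' exact code; as the abstract warns, the parameters m, r, and n must still be chosen with care.

```python
import numpy as np

def fuzzy_entropy(x, m=2, r=None, n=2):
    """FuzzyEn: like SampEn, but template similarity is the smooth
    membership exp(-(d/r)^n) of the Chebyshev distance d between
    mean-subtracted templates, instead of a hard threshold on d."""
    x = np.asarray(x, dtype=float)
    if r is None:
        r = 0.2 * x.std()
    N = len(x)

    def phi(dim):
        # N - m templates of length `dim`, each with its mean removed.
        tpl = np.array([x[i:i + dim] for i in range(N - m)])
        tpl -= tpl.mean(axis=1, keepdims=True)
        d = np.max(np.abs(tpl[:, None, :] - tpl[None, :, :]), axis=2)
        sim = np.exp(-(d / r) ** n)
        np.fill_diagonal(sim, 0.0)          # exclude self-matches
        return sim.sum() / (len(tpl) * (len(tpl) - 1))

    return np.log(phi(m)) - np.log(phi(m + 1))

rng = np.random.default_rng(0)
print(fuzzy_entropy(np.sin(np.linspace(0, 20, 500))))  # regular: low value
print(fuzzy_entropy(rng.standard_normal(500)))         # irregular: high value
```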

Open Access Article: Thermodynamic Fluid Equations-of-State
Entropy 2018, 20(1), 22; doi:10.3390/e20010022
Received: 24 October 2017 / Revised: 20 December 2017 / Accepted: 30 December 2017 / Published: 4 January 2018
PDF Full-text (4297 KB) | HTML Full-text | XML Full-text
Abstract
As experimental measurements of thermodynamic properties have improved in accuracy, to five or six figures, over the decades, the cubic equations that are widely used for modern thermodynamic fluid property data banks require ever-increasing numbers of terms with more fitted parameters. Functional forms with continuity for the Gibbs density surface ρ(p,T) which accommodate a critical-point singularity are fundamentally inappropriate in the vicinity of the critical temperature (Tc) and pressure (pc) and in the supercritical density mid-range between gas- and liquid-like states. A mesophase, confined within percolation transition loci that bound the gas and liquid states by third-order discontinuities in derivatives of the Gibbs energy, has been identified. There is no critical-point singularity at Tc on the Gibbs density surface and no continuity between gas and liquid. When appropriate functional forms are used for each state separately, we find that the mesophase pressure functions are linear. The negative and positive deviations, for both gas and liquid states, on either side of the mesophase, are accurately represented by three- or four-term virial expansions. All gaseous states require only known virial coefficients and physical constants belonging to the fluid, i.e., the Boyle temperature (TB), critical temperature (Tc), critical pressure (pc) and coexisting densities of gas (ρcG) and liquid (ρcL) along the critical isotherm. A notable finding for simple fluids is that for all gaseous states below TB, the contribution of the fourth virial term is negligible within experimental uncertainty. Use may be made of a symmetry between gas and liquid states in the state function rigidity (dp/dρ)T to specify lower-order liquid-state coefficients. Preliminary results for selected isotherms and isochores are presented for the exemplary fluids CO2, argon, water and SF6, with focus on the supercritical mesophase and critical region. Full article
(This article belongs to the Special Issue Selected Papers from 14th Joint European Thermodynamics Conference)

Open Access Article: Asymmetric Bimodal Exponential Power Distribution on the Real Line
Entropy 2018, 20(1), 23; doi:10.3390/e20010023
Received: 1 November 2017 / Revised: 29 December 2017 / Accepted: 1 January 2018 / Published: 3 January 2018
PDF Full-text (352 KB) | HTML Full-text | XML Full-text
Abstract
The asymmetric bimodal exponential power (ABEP) distribution is an extension of the generalized gamma distribution to the real line, obtained by adding two parameters that fit the shape of peakedness in bimodality. For special values of the peakedness parameters, the distribution reduces to a combination of half-Laplace and half-normal distributions on the real line. The distribution has two parameters fitting the height of bimodality, so its capacity for bimodality is enhanced. A skewness parameter is added to model asymmetry in data. The location-scale form of this distribution is proposed. The Fisher information matrix of the parameters of ABEP is obtained explicitly, and the properties of ABEP are examined. Real data examples are given to illustrate the modelling capacity of ABEP. Artificial data replicated from maximum likelihood estimates of the parameters of ABEP, and of other distributions with an artificial data generation procedure, are provided to test the similarity with the real data. A brief simulation study is presented. Full article
(This article belongs to the Section Information Theory)

Open Access Article: High Density Nodes in the Chaotic Region of 1D Discrete Maps
Entropy 2018, 20(1), 24; doi:10.3390/e20010024
Received: 28 October 2017 / Revised: 2 December 2017 / Accepted: 2 January 2018 / Published: 4 January 2018
PDF Full-text (19549 KB) | HTML Full-text | XML Full-text
Abstract
We report on the definition and characteristics of nodes in the chaotic region of bifurcation diagrams in the case of 1D mono-parametrical and S-unimodal maps, using as guiding example the logistic map. We examine the arrangement of critical curves, the identification and arrangement of nodes, and the connection between the periodic windows and nodes in the chaotic zone. We finally present several characteristic features of nodes, which involve their convergence and entropy. Full article
(This article belongs to the Special Issue Theoretical Aspects of Kappa Distributions)
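
For context, the critical curves discussed here are the successive iterates of the map's critical point x = 1/2, viewed as functions of the parameter; nodes arise where these curves intersect. A minimal sketch of this standard construction for the logistic map (our illustration, with an arbitrary parameter range):

```python
import numpy as np

def critical_curves(r, k_max=8):
    """Successive iterates f_r^k(1/2) of the logistic map
    f_r(x) = r*x*(1-x), starting from the critical point x = 1/2.
    Plotted over r, these are the critical curves that organize the
    chaotic region; nodes lie at their mutual intersections."""
    x = np.full_like(np.asarray(r, dtype=float), 0.5)
    curves = []
    for _ in range(k_max):
        x = r * x * (1 - x)
        curves.append(x.copy())
    return curves

r = np.linspace(3.57, 4.0, 2000)   # (roughly) the chaotic zone
curves = critical_curves(r)        # plot each curve against r to see nodes
```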

Open Access Article: Exact Renormalization Groups As a Form of Entropic Dynamics
Entropy 2018, 20(1), 25; doi:10.3390/e20010025
Received: 1 December 2017 / Revised: 25 December 2017 / Accepted: 30 December 2017 / Published: 4 January 2018
PDF Full-text (245 KB) | HTML Full-text | XML Full-text
Abstract
The Renormalization Group (RG) is a set of methods that have been instrumental in tackling problems involving an infinite number of degrees of freedom, such as, for example, in quantum field theory and critical phenomena. What all these methods have in common—which is what explains their success—is that they allow a systematic search for those degrees of freedom that happen to be relevant to the phenomena in question. In the standard approaches the RG transformations are implemented by either coarse graining or through a change of variables. When these transformations are infinitesimal, the formalism can be described as a continuous dynamical flow in a fictitious time parameter. It is generally the case that these exact RG equations are functional diffusion equations. In this paper we show that the exact RG equations can be derived using entropic methods. The RG flow is then described as a form of entropic dynamics of field configurations. Although equivalent to other versions of the RG, in this approach the RG transformations receive a purely inferential interpretation that establishes a clear link to information theory. Full article
Open Access Communication: Polyadic Entropy, Synergy and Redundancy among Statistically Independent Processes in Nonlinear Statistical Physics with Microphysical Codependence
Entropy 2018, 20(1), 26; doi:10.3390/e20010026
Received: 5 December 2017 / Revised: 26 December 2017 / Accepted: 3 January 2018 / Published: 4 January 2018
PDF Full-text (264 KB) | HTML Full-text | XML Full-text
Abstract
The information shared among observables representing processes of interest is traditionally evaluated in terms of macroscale measures characterizing aggregate properties of the underlying processes and their interactions. Traditional information measures are grounded on the assumption that the observable represents a memoryless process without any interaction among microstates. Generalized entropy measures have been formulated in non-extensive statistical mechanics aiming to take microphysical codependence into account in entropy quantification. Taking them into consideration when formulating information measures raises the question of whether, and if so how much, information permeates across scales to impact the macroscale information measures. The present study investigates and quantifies the emergence of macroscale information from microscale codependence. In order to isolate the information emergence coming solely from the nonlinearly interacting microphysics, redundancy and synergy are evaluated among macroscale variables that are statistically independent of each other but not necessarily so within their own microphysics. Synergistic and redundant information are found when microphysical interactions take place, even if the statistical distributions are factorable. These findings stress the added value of nonlinear statistical physics to information theory in coevolutionary systems. Full article
(This article belongs to the Special Issue Theoretical Aspect of Nonlinear Statistical Physics)
Open Access Article: Strategic Information Processing from Behavioural Data in Iterated Games
Entropy 2018, 20(1), 27; doi:10.3390/e20010027
Received: 29 November 2017 / Revised: 24 December 2017 / Accepted: 28 December 2017 / Published: 4 January 2018
PDF Full-text (10006 KB) | HTML Full-text | XML Full-text
Abstract
Iterated games have been an important framework of economic theory and application at least since Axelrod’s computational tournaments of the early 1980s. Recent theoretical results have shown that games (the economic context) and game theory (the decision-making process) are both formally equivalent to computational logic gates. Here these results are extended to behavioural data obtained from an experiment in which rhesus monkeys sequentially played thousands of rounds of the “matching pennies” game, an empirical analogue of Axelrod’s tournaments in which algorithms played against one another. The results show that the monkeys exhibit a rich variety of behaviours, both between and within subjects, when playing opponents of varying complexity. Despite earlier suggestions, there is no clear evidence that the win-stay, lose-switch strategy is used; however, there is evidence of non-linear strategy-based interactions between the predictors of future choices. It is also shown that there is consistent evidence, across protocols and across individuals, that the monkeys extract non-Markovian information, i.e., information from more than just the most recent state of the game. This work shows that the use of information theory in game theory can test important hypotheses that would otherwise be more difficult to extract using traditional statistical methods. Full article
(This article belongs to the Special Issue Information Theory in Game Theory)

Open Access Article: Fractional Time Fluctuations in Viscoelasticity: A Comparative Study of Correlations and Elastic Moduli
Entropy 2018, 20(1), 28; doi:10.3390/e20010028
Received: 27 November 2017 / Revised: 25 December 2017 / Accepted: 3 January 2018 / Published: 11 January 2018
PDF Full-text (2490 KB) | HTML Full-text | XML Full-text
Abstract
We calculate the transverse velocity fluctuations correlation function of a linear and homogeneous viscoelastic liquid by using a generalized Langevin equation (GLE) approach. We consider a long-ranged (power-law) viscoelastic memory and a noise with a long-range (power-law) auto-correlation. We first evaluate the transverse velocity fluctuations correlation function for conventional time derivatives, Ĉ_NF(k,t), and then introduce time-fractional derivatives in the equations of motion and calculate the corresponding fractional correlation function. We find that the magnitude of the fractional correlation Ĉ_F(k,t) is always lower than the non-fractional one and decays more rapidly. The relationship between the fractional loss modulus G_F(ω) and Ĉ_F(k,t) is also calculated analytically. The difference between the values of G(ω) for two specific viscoelastic fluids is quantified. Our model calculation shows that the fractional effects on this measurable quantity may be three times as large as its non-fractional value. The fact that the dynamic shear modulus is related to the light scattering spectrum suggests that measurement of this property might be a suitable test to assess the effects of temporal fractional derivatives on a measurable property. Finally, we summarize the main results of our approach and emphasize that the eventual validity of our model calculations can only come from experimentation. Full article
(This article belongs to the Special Issue Mesoscopic Thermodynamics and Dynamics)

Open Access Article: Influences of the Thomson Effect on the Performance of a Thermoelectric Generator-Driven Thermoelectric Heat Pump Combined Device
Entropy 2018, 20(1), 29; doi:10.3390/e20010029
Received: 9 December 2017 / Revised: 29 December 2017 / Accepted: 2 January 2018 / Published: 5 January 2018
Cited by 1 | PDF Full-text (3240 KB) | HTML Full-text | XML Full-text
Abstract
A thermodynamic model of a thermoelectric generator-driven thermoelectric heat pump (TEG-TEH) combined device is established considering the Thomson effect and the temperature dependence of the thermoelectric properties based on non-equilibrium thermodynamics. Energy analysis and exergy analysis are performed. New expressions for heating load, maximum working temperature difference, coefficient of performance (COP), and exergy efficiency are obtained. The performance is analyzed and optimized using numerical calculations. The general performance, optimal performance, optimum variables, optimal performance ranges, and optimum variable ranges are obtained. The results show that the Thomson effect decreases the general performance and optimal performance, and narrows the optimal operating ranges and optimum variable ranges. Considering the Thomson effect, more thermoelectric elements should be allocated to the thermoelectric generator when designing the devices. The optimum design variables for the maximum exergy efficiency are different from those for the maximum COP. The results can provide more scientific guidelines for designing TEG-TEH devices. Full article
(This article belongs to the Section Thermodynamics)

Open Access Article: Human Postural Control: Assessment of Two Alternative Interpretations of Center of Pressure Sample Entropy through a Principal Component Factorization of Whole-Body Kinematics
Entropy 2018, 20(1), 30; doi:10.3390/e20010030
Received: 22 November 2017 / Revised: 21 December 2017 / Accepted: 1 January 2018 / Published: 5 January 2018
Cited by 1 | PDF Full-text (430 KB) | HTML Full-text | XML Full-text | Supplementary Files
Abstract
Sample entropy (SaEn), calculated for center of pressure (COP) trajectories, is often distinct for compromised postural control, e.g., in patients with Parkinson’s disease, stroke, or concussion, but the interpretation of COP-SaEn remains subject to debate. The purpose of this paper is to test the hypotheses that COP-SaEn is related (Hypothesis 1; H1) to the complexity of the postural movement structures, i.e., to the utilization and coordination of the mechanical degrees of freedom; or (Hypothesis 2; H2) to the irregularity of the individual postural movement strategies, i.e., to the neuromuscular control of these movements. Twenty-one healthy volunteers (age 26.4 ± 2.4; 10 females), equipped with 27 reflective markers, stood on a force plate and performed 2-min quiet stances. Principal movement strategies (PMs) were obtained from a principal component analysis (PCA) of the kinematic data. Then, SaEn was calculated for the COP and PM time series. H1 was tested by correlating COP-SaEn with the relative contribution of the PMs to the subject-specific overall movement, and H2 by correlating COP-SaEn and PM-SaEn. Both hypotheses were supported. This suggests that in a healthy population the COP-SaEn is linked to the complexity of the coordinative structure of postural movements, as well as to the irregularity of the neuromuscular control of specific movement components. Full article
(This article belongs to the Section Complexity)
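A minimal sketch of the PCA step described in the abstract — decomposing whole-body kinematics into principal movements (PMs) and measuring each PM's relative contribution to the overall movement. The array shapes and the random stand-in data are illustrative, not the study's:

```python
# Decompose marker trajectories into principal movements via SVD and report
# each PM's relative variance contribution. Shapes are assumptions.
import numpy as np

def principal_movements(kinematics: np.ndarray):
    """kinematics: (n_samples, n_coordinates), e.g., 27 markers x 3 axes = 81
    columns sampled over a 2-min stance."""
    centered = kinematics - kinematics.mean(axis=0)
    u, s, vt = np.linalg.svd(centered, full_matrices=False)
    scores = u * s                           # PM time series, one column per PM
    rel_contribution = s**2 / np.sum(s**2)   # relative variance of each PM
    return scores, rel_contribution

rng = np.random.default_rng(0)
demo = rng.standard_normal((12000, 81))      # stand-in for real marker data
pm_series, rel_var = principal_movements(demo)
print(rel_var[:5])                           # contribution of the first five PMs
```

An entropy estimator such as SaEn can then be applied per PM column, and the H1/H2 correlations computed against the COP series.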
Open AccessArticle Energetic and Exergetic Analysis of a Transcritical N2O Refrigeration Cycle with an Expander
Entropy 2018, 20(1), 31; doi:10.3390/e20010031
Received: 6 December 2017 / Revised: 31 December 2017 / Accepted: 4 January 2018 / Published: 18 January 2018
PDF Full-text (2725 KB) | HTML Full-text | XML Full-text
Abstract
Comparative energy and exergy investigations are reported for a transcritical N2O refrigeration cycle with a throttling valve or with an expander when the gas cooler exit temperature varies from 30 to 55 °C and the evaporating temperature varies from −40 to 10 °C. The system performance is also compared with that of similar cycles using CO2. Results show that the N2O expander cycle exhibits a larger maximum cooling coefficient of performance (COP) and a lower optimum discharge pressure than both the CO2 expander cycle and the N2O throttling valve cycle. In the N2O throttling valve cycle, the throttling valve has the largest irreversibility, with the exergy losses of the gas cooler and compressor ranking second and third, respectively. In the N2O expander cycle, the largest exergy loss occurs in the gas cooler, followed by the compressor and the expander. Compared with the CO2 expander cycle and the N2O throttling valve cycle, the N2O expander cycle has the smallest component-specific exergy loss and the highest exergy efficiency under the same operating conditions at the optimum discharge pressure. It is also shown that the maximum COP and the maximum exergy efficiency cannot be achieved simultaneously in the investigated cycles. Full article
(This article belongs to the Special Issue Phenomenological Thermodynamics of Irreversible Processes)
Open AccessArticle Searching for Chaos Evidence in Eye Movement Signals
Entropy 2018, 20(1), 32; doi:10.3390/e20010032
Received: 4 November 2017 / Revised: 27 December 2017 / Accepted: 29 December 2017 / Published: 7 January 2018
PDF Full-text (2652 KB) | HTML Full-text | XML Full-text
Abstract
Most naturally-occurring physical phenomena are examples of nonlinear dynamic systems, the functioning of which attracts many researchers seeking to unveil their nature. The research presented in this paper explores the dynamics of eye movements for evidence of chaotic behaviour. Nonlinear time series analysis methods were used for this purpose. Two time series features, fractal dimension and entropy, were studied by utilising embedding theory. The methods were applied to data collected during an experiment with a “jumping point” stimulus. Eye movements were registered by means of the Jazz-novo eye tracker. In total, 1392 time series were defined, based on the horizontal velocity of eye movements registered during imposed, prolonged fixations. In order to conduct a detailed analysis of the signal and identify differences contributing to the observed patterns of behaviour over time, fractal dimension and entropy were evaluated over various time series intervals. The influence of the noise contained in the data and the impact of the utilized filter on the obtained results were also studied. A low-pass filter with a 50 Hz cut-off frequency, estimated by means of the Fourier transform, was used for noise reduction, and all methods were applied to the time series before and after noise reduction. These studies provided several premises for regarding eye movements as chaotic data: the characteristics of the space-time separation plot, a low, non-integer time series dimension, and time series entropy characteristic of chaotic systems. Full article
(This article belongs to the Special Issue Research Frontier in Chaos Theory and Complex Networks)
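The fractal-dimension and entropy estimates above rest on delay embedding of the scalar velocity signal. A minimal sketch of that reconstruction step, with illustrative dimension and delay values (not the paper's):

```python
# Takens delay embedding: rebuild state vectors from a scalar time series.
import numpy as np

def delay_embed(x: np.ndarray, dim: int, tau: int) -> np.ndarray:
    """Return the (len(x) - (dim - 1) * tau, dim) matrix of delay vectors."""
    n = len(x) - (dim - 1) * tau
    return np.column_stack([x[i * tau: i * tau + n] for i in range(dim)])

velocity = np.sin(np.linspace(0, 60, 5000))  # stand-in for horizontal eye velocity
vectors = delay_embed(velocity, dim=5, tau=10)
print(vectors.shape)                         # (4960, 5) reconstructed state vectors
```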
Open AccessArticle Entropy Measures for Stochastic Processes with Applications in Functional Anomaly Detection
Entropy 2018, 20(1), 33; doi:10.3390/e20010033
Received: 5 December 2017 / Revised: 29 December 2017 / Accepted: 2 January 2018 / Published: 11 January 2018
PDF Full-text (738 KB) | HTML Full-text | XML Full-text | Supplementary Files
Abstract
We propose a definition of entropy for stochastic processes. We provide a reproducing kernel Hilbert space model to estimate entropy from a random sample of realizations of a stochastic process, namely functional data, and introduce two approaches to estimate minimum entropy sets. These sets are relevant to detect anomalous or outlier functional data. A numerical experiment illustrates the performance of the proposed method; in addition, we conduct an analysis of mortality rate curves as an interesting application in a real-data context to explore functional anomaly detection. Full article
(This article belongs to the Special Issue Entropy in Signal Analysis)
Open AccessArticle Information Entropy Production of Maximum Entropy Markov Chains from Spike Trains
Entropy 2018, 20(1), 34; doi:10.3390/e20010034
Received: 7 November 2017 / Revised: 3 January 2018 / Accepted: 5 January 2018 / Published: 9 January 2018
PDF Full-text (1254 KB) | HTML Full-text | XML Full-text
Abstract
The spiking activity of neuronal networks follows laws that are not time-reversal symmetric; the notions of pre-synaptic and post-synaptic neurons, stimulus correlations, and noise correlations have a clear time order. Therefore, a biologically realistic statistical model for the spiking activity should be able to capture some degree of time irreversibility. We use the thermodynamic formalism to build a framework in the context of maximum entropy models to quantify the degree of time irreversibility, providing an explicit formula for the information entropy production of the inferred maximum entropy Markov chain. We provide examples to illustrate our results and discuss the importance of time irreversibility for modeling spike train statistics. Full article
(This article belongs to the Special Issue Information Theory in Neuroscience)
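For a stationary Markov chain with transition matrix P and stationary distribution π, the entropy production rate has the standard closed form e_p = ½ Σ_ij (π_i P_ij − π_j P_ji) ln(π_i P_ij / π_j P_ji), which vanishes exactly when detailed balance (time reversibility) holds. A hedged sketch of that computation; the paper's maximum entropy inference step is not reproduced:

```python
# Entropy production of a stationary Markov chain from its transition matrix.
import numpy as np

def stationary_distribution(P: np.ndarray) -> np.ndarray:
    """Left eigenvector of P for eigenvalue 1, normalized to a distribution."""
    vals, vecs = np.linalg.eig(P.T)
    pi = np.real(vecs[:, np.argmin(np.abs(vals - 1))])
    return pi / pi.sum()

def entropy_production(P: np.ndarray) -> float:
    pi = stationary_distribution(P)
    flux = pi[:, None] * P                 # flux[i, j] = pi_i * P_ij
    mask = (flux > 0) & (flux.T > 0)       # skip structurally impossible moves
    return 0.5 * np.sum((flux - flux.T)[mask] * np.log(flux[mask] / flux.T[mask]))

P = np.array([[0.1, 0.6, 0.3],
              [0.2, 0.2, 0.6],
              [0.7, 0.1, 0.2]])
print(entropy_production(P))               # > 0: the chain is time-irreversible
```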
Open AccessArticle Automated Multiclass Classification of Spontaneous EEG Activity in Alzheimer’s Disease and Mild Cognitive Impairment
Entropy 2018, 20(1), 35; doi:10.3390/e20010035
Received: 15 December 2017 / Revised: 4 January 2018 / Accepted: 5 January 2018 / Published: 9 January 2018
Cited by 1 | PDF Full-text (1237 KB) | HTML Full-text | XML Full-text
Abstract
The discrimination of early Alzheimer’s disease (AD) and its prodromal form (i.e., mild cognitive impairment, MCI) from cognitively healthy control (HC) subjects is crucial, since treatment is more effective in the first stages of dementia. The aim of our study is to evaluate the usefulness of a methodology based on electroencephalography (EEG) to detect AD and MCI. EEG rhythms were recorded from 37 AD patients, 37 MCI subjects, and 37 HC subjects. Artifact-free trials were analyzed by means of several spectral and nonlinear features: relative power in the conventional frequency bands, median frequency, individual alpha frequency, spectral entropy, Lempel–Ziv complexity, central tendency measure, sample entropy, fuzzy entropy, and auto-mutual information. Relevance and redundancy analyses were also conducted through the fast correlation-based filter (FCBF) to derive an optimal set of features. The selected features were used to train three different models aimed at classifying the trials: linear discriminant analysis (LDA), quadratic discriminant analysis (QDA), and a multi-layer perceptron artificial neural network (MLP). Afterwards, each subject was automatically allocated to a particular group by applying a trial-based majority-vote procedure. After feature extraction, the FCBF method selected the optimal set of features: individual alpha frequency, relative power in the delta frequency band, and sample entropy. Using this set of features, the MLP showed the highest diagnostic performance in determining whether a subject is not healthy (sensitivity of 82.35% and positive predictive value of 84.85% for the HC vs. all classification task) and whether a subject does not suffer from AD (specificity of 79.41% and negative predictive value of 84.38% for the AD vs. all comparison). Our findings suggest that our methodology can help physicians discriminate among AD, MCI, and HC. Full article
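A sketch of the classification stage only. FCBF is not available in scikit-learn, so a mutual-information filter stands in for it here, and the synthetic features and labels are placeholders for the real spectral/nonlinear features:

```python
# Feature selection + LDA classifier, with a trial-based majority vote per subject.
import numpy as np
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(1)
X = rng.standard_normal((300, 9))        # 300 trials x 9 spectral/nonlinear features
y = rng.integers(0, 3, size=300)         # 0 = HC, 1 = MCI, 2 = AD (synthetic)

clf = make_pipeline(SelectKBest(mutual_info_classif, k=3),   # stand-in for FCBF
                    LinearDiscriminantAnalysis())
clf.fit(X, y)

subject_trials = X[:20]                       # all trials from one subject
votes = clf.predict(subject_trials)
subject_label = np.bincount(votes).argmax()   # trial-based majority vote
print(subject_label)
```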
Open AccessArticle Biological Networks Entropies: Examples in Neural Memory Networks, Genetic Regulation Networks and Social Epidemic Networks
Entropy 2018, 20(1), 36; doi:10.3390/e20010036
Received: 8 November 2017 / Revised: 25 December 2017 / Accepted: 4 January 2018 / Published: 13 January 2018
PDF Full-text (1581 KB) | HTML Full-text | XML Full-text
Abstract
Networks used in biological applications at different scales (molecule, cell, and population) are of different types: neuronal, genetic, and social, but they share the same dynamical concepts, in their continuous differential versions (e.g., the non-linear Wilson–Cowan system) as well as in their discrete Boolean versions (e.g., the non-linear Hopfield system); in both cases, the notion of the interaction graph G(J) associated with the Jacobian matrix J, and also the concepts of frustrated nodes, positive or negative circuits of G(J), kinetic energy, entropy, attractors, and structural stability, are relevant and useful for studying the dynamics and the robustness of these systems. We give some general results available for both continuous and discrete biological networks, and then study specific applications of three new notions of entropy: (i) attractor entropy, (ii) isochronal entropy, and (iii) entropy centrality, in three domains: a neural network involved in memory evocation, a genetic network responsible for iron control, and a social network accounting for obesity spread in a high school environment. Full article
(This article belongs to the Section Statistical Mechanics)
Open AccessArticle Gaussian Guided Self-Adaptive Wolf Search Algorithm Based on Information Entropy Theory
Entropy 2018, 20(1), 37; doi:10.3390/e20010037
Received: 13 October 2017 / Revised: 13 December 2017 / Accepted: 4 January 2018 / Published: 10 January 2018
PDF Full-text (9074 KB) | HTML Full-text | XML Full-text
Abstract
Nowadays, swarm intelligence algorithms are becoming increasingly popular for solving many optimization problems. The Wolf Search Algorithm (WSA) is a contemporary semi-swarm intelligence algorithm designed to solve complex optimization problems, and it has demonstrated its capability especially for large-scale problems. However, it inherits a weakness common to other swarm intelligence algorithms: its performance is heavily dependent on the chosen values of the control parameters. In 2016, we published the Self-Adaptive Wolf Search Algorithm (SAWSA), which offers a simple solution to the adaptation problem. As a very simple schema, the original SAWSA adaptation is based on random guesses, which is unstable and naive. In this paper, based on the SAWSA, we investigate the WSA search behaviour more deeply. A new parameter-guided updater, the Gaussian-guided parameter control mechanism based on information entropy theory, is proposed as an enhancement of the SAWSA. The heuristic updating function is improved. Simulation experiments for the new method, denoted the Gaussian-Guided Self-Adaptive Wolf Search Algorithm (GSAWSA), validate the increased performance of the improved version of the WSA in comparison to its standard version and other prevalent swarm algorithms. Full article
(This article belongs to the Special Issue Information Theory in Machine Learning and Data Science)
Open AccessArticle Information Entropy Suggests Stronger Nonlinear Associations between Hydro-Meteorological Variables and ENSO
Entropy 2018, 20(1), 38; doi:10.3390/e20010038
Received: 13 November 2017 / Revised: 5 January 2018 / Accepted: 5 January 2018 / Published: 9 January 2018
Cited by 1 | PDF Full-text (5464 KB) | HTML Full-text | XML Full-text
Abstract
Understanding the teleconnections between hydro-meteorological data and the El Niño–Southern Oscillation (ENSO) cycle is an important step towards developing flood early warning systems. In this study, the concept of mutual information (MI) was applied using marginal and joint information entropy to quantify the linear and non-linear relationships between annual streamflow, extreme precipitation indices over the Mekong river basin, and ENSO. We used the Pearson correlation as a linear association metric for comparison with mutual information. The analysis was performed at four hydro-meteorological stations located on the mainstream of the Mekong river basin. It was observed that the nonlinear correlation between the large-scale climate index and local hydro-meteorological data is higher than the traditional linear correlation. A spatial analysis carried out using all the grid points in the river basin suggests a spatial dependence structure between precipitation extremes and ENSO. Overall, this study suggests that the mutual information approach can detect more meaningful connections between large-scale climate indices and hydro-meteorological variables at different spatio-temporal scales. The nonlinear mutual information metric can be an efficient tool for better understanding the dynamics of hydro-climatic variables, leading to improved climate-informed adaptation strategies. Full article
(This article belongs to the Special Issue Entropy Applications in Environmental and Water Engineering)
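The linear-versus-nonlinear comparison above can be reproduced in miniature: Pearson correlation misses a purely quadratic dependence that a plug-in mutual information estimate detects. The bin count and the toy relation are illustrative, not the study's data:

```python
# Pearson correlation vs. a histogram-based mutual information estimate.
import numpy as np
from scipy.stats import pearsonr

def mutual_information(x, y, bins=16):
    """Plug-in MI estimate (nats) from a joint histogram."""
    pxy, _, _ = np.histogram2d(x, y, bins=bins)
    pxy = pxy / pxy.sum()
    px, py = pxy.sum(axis=1), pxy.sum(axis=0)
    nz = pxy > 0
    return np.sum(pxy[nz] * np.log(pxy[nz] / np.outer(px, py)[nz]))

rng = np.random.default_rng(2)
enso = rng.standard_normal(2000)                    # stand-in for an ENSO index
flow = enso**2 + 0.3 * rng.standard_normal(2000)    # nonlinear response

print(pearsonr(enso, flow)[0])         # near zero: the linear metric misses it
print(mutual_information(enso, flow))  # clearly positive
```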
Open AccessArticle Robust Macroscopic Quantum Measurements in the Presence of Limited Control and Knowledge
Entropy 2018, 20(1), 39; doi:10.3390/e20010039
Received: 31 October 2017 / Revised: 8 December 2017 / Accepted: 26 December 2017 / Published: 9 January 2018
PDF Full-text (1204 KB) | HTML Full-text | XML Full-text
Abstract
Quantum measurements have intrinsic properties that seem incompatible with our everyday-life macroscopic measurements. Macroscopic Quantum Measurement (MQM) is a concept that aims at bridging the gap between well-understood microscopic quantum measurements and macroscopic classical measurements. In this paper, we focus on the task of estimating the polarization direction of a system of N spin-1/2 particles and investigate the model some of us proposed in Barnea et al., 2017. This model is based on a von Neumann pointer measurement, where each spin component of the system is coupled to one of the three spatial directions of a pointer. It shows traits of a classical measurement for an intermediate coupling strength. We investigate relaxations of the assumptions on the initial knowledge about the state and on the control over the MQM. We show that the model is robust with regard to these relaxations. It performs well for thermal states and under a lack of knowledge about the size of the system. Furthermore, a lack of control over the MQM can be compensated by repeated “ultra-weak” measurements. Full article
(This article belongs to the Special Issue Quantum Information and Foundations)
Open AccessArticle Characterizing Complex Dynamics in the Classical and Semi-Classical Duffing Oscillator Using Ordinal Patterns Analysis
Entropy 2018, 20(1), 40; doi:10.3390/e20010040
Received: 20 December 2017 / Revised: 3 January 2018 / Accepted: 5 January 2018 / Published: 10 January 2018
PDF Full-text (2516 KB) | HTML Full-text | XML Full-text
Abstract
The driven double-well Duffing oscillator is a well-studied system that manifests a wide variety of dynamics, from periodic behavior to chaos, and describes a diverse array of physical systems. It has been shown to be relevant in understanding chaos in the classical-to-quantum transition. Here we explore the complexity of its dynamics in the classical and semi-classical regimes, using the technique of ordinal pattern analysis. This is of particular relevance to potential experiments in the semi-classical regime. We unveil different dynamical regimes within the chaotic range which cannot be detected with more traditional statistical tools. These regimes are characterized by different hierarchies and probabilities of the ordinal patterns. A correlation between the Lyapunov exponent and the permutation entropy is revealed, which leads us to interpret dips in the Lyapunov exponent as transitions in the dynamics of the system. Full article
(This article belongs to the Special Issue Permutation Entropy & Its Interdisciplinary Applications)
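Ordinal pattern analysis reduces windows of d samples to their rank orderings and examines the resulting pattern distribution; permutation entropy is the Shannon entropy of that distribution. A minimal sketch with illustrative order and delay values:

```python
# Normalized permutation entropy from ordinal patterns of a time series.
import numpy as np
from math import factorial
from collections import Counter

def permutation_entropy(x, order=4, delay=1):
    patterns = Counter()
    n = len(x) - (order - 1) * delay
    for i in range(n):
        window = x[i: i + order * delay: delay]
        patterns[tuple(np.argsort(window))] += 1   # rank pattern of the window
    p = np.array(list(patterns.values()), dtype=float) / n
    return -np.sum(p * np.log(p)) / np.log(factorial(order))  # in [0, 1]

rng = np.random.default_rng(3)
print(permutation_entropy(rng.standard_normal(10000)))       # noise: close to 1
print(permutation_entropy(np.sin(np.arange(10000) * 0.05)))  # regular: much lower
```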
Open AccessArticle Spooky Action at a Temporal Distance
Entropy 2018, 20(1), 41; doi:10.3390/e20010041
Received: 25 November 2017 / Revised: 5 January 2018 / Accepted: 9 January 2018 / Published: 10 January 2018
PDF Full-text (275 KB) | HTML Full-text | XML Full-text
Abstract
Since the discovery of Bell’s theorem, the physics community has come to take seriously the possibility that the universe might contain physical processes which are spatially nonlocal, but there has been no such revolution with regard to the possibility of temporally nonlocal processes. In this article, we argue that the assumption of temporal locality is actively limiting progress in the field of quantum foundations. We investigate the origins of the assumption, arguing that it has arisen for historical and pragmatic reasons rather than good scientific ones, then explain why temporal locality is in tension with relativity and review some recent results which cast doubt on its validity. Full article
(This article belongs to the Special Issue Emergent Quantum Mechanics – David Bohm Centennial Perspectives)
Open AccessArticle Classical-Equivalent Bayesian Portfolio Optimization for Electricity Generation Planning
Entropy 2018, 20(1), 42; doi:10.3390/e20010042
Received: 12 November 2017 / Revised: 24 December 2017 / Accepted: 8 January 2018 / Published: 10 January 2018
PDF Full-text (321 KB) | HTML Full-text | XML Full-text
Abstract
There are several electricity generation technologies based on different sources such as wind, biomass, gas, coal, and so on. The consideration of the uncertainties associated with the future costs of such technologies is crucial for planning purposes. In the literature, the allocation of resources among the available technologies has been solved as a mean-variance optimization problem, assuming knowledge of the expected values and the covariance matrix of the costs. In practice, however, these parameters are not known exactly. Consequently, the optimal allocations obtained from mean-variance optimization are not robust to possible estimation errors in such parameters. Additionally, electricity generation technology specialists usually participate in the planning processes, and the consideration of useful prior information based on their previous experience is of utmost importance. Bayesian models consider not only the uncertainty in the parameters, but also the prior information from the specialists. In this paper, we introduce classical-equivalent Bayesian mean-variance optimization to solve the electricity generation planning problem using both improper and proper prior distributions for the parameters. In order to illustrate our approach, we present an application comparing the classical-equivalent Bayesian and the naive mean-variance optimal portfolios. Full article
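A sketch of the classical mean-variance step whose moment estimates the Bayesian treatment replaces with posterior quantities: closed-form optimal weights for a risk-aversion parameter γ under a budget constraint. The moment values are synthetic, not generation-cost data:

```python
# Closed-form mean-variance weights: maximize w'mu - (gamma/2) w'Sigma w, sum(w)=1.
import numpy as np

def mean_variance_weights(mu, Sigma, gamma=2.0):
    inv = np.linalg.inv(Sigma)
    ones = np.ones(len(mu))
    w = inv @ (mu / gamma)                                    # unconstrained part
    w += inv @ ones * (1 - ones @ w) / (ones @ inv @ ones)    # enforce the budget
    return w

mu = np.array([0.08, 0.05, 0.06])              # expected returns (synthetic)
Sigma = np.array([[0.040, 0.006, 0.004],
                  [0.006, 0.020, 0.002],
                  [0.004, 0.002, 0.010]])
w = mean_variance_weights(mu, Sigma)
print(w, w.sum())                              # weights sum to 1
```

In the classical-equivalent Bayesian version, mu and Sigma would be the posterior-predictive mean and covariance rather than plug-in sample estimates.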
Open AccessFeature PaperArticle Impact of Microgroove Shape on Flat Miniature Heat Pipe Efficiency
Entropy 2018, 20(1), 44; doi:10.3390/e20010044
Received: 27 October 2017 / Revised: 7 January 2018 / Accepted: 9 January 2018 / Published: 11 January 2018
PDF Full-text (5213 KB) | HTML Full-text | XML Full-text
Abstract
Miniature heat pipes are considered to be an innovative solution able to dissipate high heat loads with a low working-fluid fill charge, provide automatic temperature control, and operate with minimum energy consumption and low noise levels. A theoretical analysis of heat pipe thermal performance using deionized water or n-pentane as the working fluid has been carried out. Analysis of the maximum heat transport and the capillary limitation is conducted for three microgroove cross sections: rectangular, triangular, and trapezoidal. The effects of microgroove height and width, effective length, trapezoidal microgroove inclination angle, and microgroove shape on heat pipe performance are analysed. Theoretical and experimental investigations of the heat pipes’ heat transport limitations and thermal resistances are conducted. Full article
(This article belongs to the Special Issue Non-Equilibrium Thermodynamics of Micro Technologies)
Open AccessArticle Information-Theoretic Analysis of a Family of Improper Discrete Constellations
Entropy 2018, 20(1), 45; doi:10.3390/e20010045
Received: 25 October 2017 / Revised: 5 January 2018 / Accepted: 8 January 2018 / Published: 11 January 2018
PDF Full-text (368 KB) | HTML Full-text | XML Full-text
Abstract
Non-circular or improper Gaussian signaling has proven beneficial in several interference-limited wireless networks. However, all implementable coding schemes are based on finite discrete constellations rather than Gaussian signals. In this paper, we propose a new family of improper constellations generated by widely linear processing of a square M-QAM (quadrature amplitude modulation) signal. This family of discrete constellations is parameterized by the circularity coefficient κ and a phase ϕ. For uncoded communication systems, this phase should be optimized as ϕ*(κ) to maximize the minimum Euclidean distance between points of the improper constellation, therefore minimizing the bit error rate (BER). For the more relevant case of coded communications, where the coded symbols are constrained to lie in this family of improper constellations using ϕ*(κ), it is shown theoretically, and further corroborated by simulations, that, except for a shaping loss of 1.53 dB encountered at high signal-to-noise ratio (SNR), there is no rate loss with respect to the improper Gaussian capacity. In this sense, the proposed family of constellations can be viewed as the improper counterpart of the standard proper M-QAM constellations widely used in coded communication systems. Full article
(This article belongs to the Section Information Theory)
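A hedged construction of one improper family by widely linear processing of 16-QAM. The parameterization s = a·x + b·e^{jϕ}·x* (with a² + b² = 1, giving κ = 2ab for a proper unit-power x) is illustrative and not necessarily the paper's exact mapping:

```python
# Improper constellation via a widely linear transform of proper 16-QAM.
import numpy as np

def qam16():
    pts = np.array([-3, -1, 1, 3], dtype=float)
    x = (pts[:, None] + 1j * pts[None, :]).ravel()
    return x / np.sqrt(np.mean(np.abs(x) ** 2))   # unit average power, proper

def improper(x, kappa, phi):
    a = np.sqrt((1 + np.sqrt(1 - kappa**2)) / 2)  # power-preserving split
    b = np.sqrt((1 - np.sqrt(1 - kappa**2)) / 2)
    return a * x + b * np.exp(1j * phi) * np.conj(x)

def circularity(s):
    return np.abs(np.mean(s**2)) / np.mean(np.abs(s) ** 2)

def min_distance(s):
    d = np.abs(s[:, None] - s[None, :])
    return d[d > 1e-12].min()

s = improper(qam16(), kappa=0.5, phi=0.3)
print(circularity(s), min_distance(s))   # kappa ~ 0.5; sweep phi to maximize distance
```

Sweeping phi and keeping the argmax of min_distance for each kappa mimics the uncoded optimization ϕ*(κ) described in the abstract.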
Open AccessArticle Image Segmentation Based on Statistical Confidence Intervals
Entropy 2018, 20(1), 46; doi:10.3390/e20010046
Received: 29 November 2017 / Revised: 5 January 2018 / Accepted: 10 January 2018 / Published: 11 January 2018
Cited by 1 | PDF Full-text (2745 KB) | HTML Full-text | XML Full-text
Abstract
Image segmentation is the partition of an image into homogeneous regions, transforming it into something more meaningful and easier to examine. Although several segmentation approaches have been proposed recently, in this paper we develop a new image segmentation method based on statistical confidence intervals together with the well-known Otsu algorithm. According to our numerical experiments, our method performs differently from the standard Otsu algorithm, especially when processing images perturbed by speckle noise. In fact, the effect of the speckle noise entropy is almost entirely filtered out by our algorithm. Furthermore, our approach is validated on several image samples. Full article
(This article belongs to the Section Information Theory)
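The Otsu building block referenced above, in plain NumPy: choose the threshold that maximizes the between-class variance of the gray-level histogram. The confidence-interval refinement of the paper is not reproduced here:

```python
# Otsu's method: sigma_b^2(t) = (mu_T*omega(t) - mu(t))^2 / (omega(t)*(1 - omega(t))).
import numpy as np

def otsu_threshold(image: np.ndarray) -> int:
    hist = np.bincount(image.ravel(), minlength=256).astype(float)
    p = hist / hist.sum()
    omega = np.cumsum(p)                       # class-0 probability up to t
    mu = np.cumsum(p * np.arange(256))         # cumulative mean up to t
    mu_t = mu[-1]
    with np.errstate(divide="ignore", invalid="ignore"):
        sigma_b = (mu_t * omega - mu) ** 2 / (omega * (1 - omega))
    return int(np.nanargmax(sigma_b))

rng = np.random.default_rng(4)
img = np.concatenate([rng.normal(80, 10, 5000), rng.normal(170, 10, 5000)])
img = np.clip(img, 0, 255).astype(np.uint8).reshape(100, 100)
print(otsu_threshold(img))   # around 125, between the two intensity modes
```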
Open AccessArticle Changes in the Complexity of Heart Rate Variability with Exercise Training Measured by Multiscale Entropy-Based Measurements
Entropy 2018, 20(1), 47; doi:10.3390/e20010047
Received: 12 December 2017 / Revised: 4 January 2018 / Accepted: 8 January 2018 / Published: 17 January 2018
PDF Full-text (904 KB) | HTML Full-text | XML Full-text
Abstract
Quantifying complexity from heart rate variability (HRV) series is a challenging task, and multiscale entropy (MSE), along with its variants, has been demonstrated to be one of the most robust approaches to achieve this goal. Although physical training is known to be beneficial, there is little information about the long-term complexity changes induced by physical conditioning. The present study aimed to quantify the changes in physiological complexity elicited by physical training through multiscale entropy-based complexity measurements. Rats were subjected to a protocol of medium-intensity training (n = 13) or a sedentary protocol (n = 12). One-hour HRV series were obtained from all conscious rats five days after the experimental protocol. We estimated MSE, multiscale dispersion entropy (MDE), and multiscale SDiff_q from the HRV series. Multiscale SDiff_q is a recent approach that accounts for entropy differences between a given time series and its shuffled dynamics. From SDiff_q, three attributes (q-attributes) were derived, namely SDiff_qmax, q_max, and q_zero. MSE, MDE, and the multiscale q-attributes presented similar profiles, except for SDiff_qmax. q_max showed significant differences between the trained and sedentary groups on time scales 6 to 20. The results suggest that physical training increases the system complexity and that multiscale q-attributes provide valuable information about physiological complexity. Full article
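All three multiscale measures share the same coarse-graining front end: at scale τ the series is averaged over non-overlapping windows of τ beats before an entropy estimator is applied per scale. A minimal sketch of that step (the SDiff_q estimator itself is not reproduced):

```python
# Coarse-graining step common to MSE, MDE, and multiscale SDiff_q.
import numpy as np

def coarse_grain(x: np.ndarray, tau: int) -> np.ndarray:
    n = len(x) // tau
    return x[: n * tau].reshape(n, tau).mean(axis=1)

rng = np.random.default_rng(5)
rr = rng.standard_normal(3600)                 # stand-in for a one-hour RR series
scales = range(1, 21)
coarse = [coarse_grain(rr, tau) for tau in scales]
print([len(c) for c in coarse][:5])            # series shorten as tau grows
# entropy_per_scale = [sample_entropy(c) for c in coarse]  # estimator of choice
```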
Open AccessArticle Function Analysis of the Euclidean Distance between Probability Distributions
Entropy 2018, 20(1), 48; doi:10.3390/e20010048
Received: 20 November 2017 / Revised: 31 December 2017 / Accepted: 8 January 2018 / Published: 11 January 2018
PDF Full-text (2370 KB) | HTML Full-text | XML Full-text
Abstract
Minimization of the Euclidean distance between the output distribution and Dirac delta functions as a performance criterion is known to match the distribution of the system output with the delta functions. In this paper, analysis of the algorithm developed from that criterion and recursive gradient estimation reveals that the minimization process of the cost function involves two gradients with different roles: one forces the output samples to spread out, and the other compels them to move close to the symbol points. To investigate these two roles, each gradient is controlled separately through individual normalization with respect to its related input. The analysis and experimental results verify that one gradient accelerates the initial convergence by spreading the output samples, while the other lowers the minimum mean squared error (MSE) by pulling error samples closer together. Full article
(This article belongs to the Section Information Theory)
Open AccessArticle Entropy-Based Structural Health Monitoring System for Damage Detection in Multi-Bay Three-Dimensional Structures
Entropy 2018, 20(1), 49; doi:10.3390/e20010049
Received: 28 November 2017 / Revised: 3 January 2018 / Accepted: 5 January 2018 / Published: 11 January 2018
PDF Full-text (4056 KB) | HTML Full-text | XML Full-text
Abstract
In this paper, a structural health monitoring (SHM) system based on multi-scale cross-sample entropy (MSCE) is proposed for detecting damage locations in multi-bay three-dimensional structures. The location of damage is evaluated for each bay through MSCE analysis by examining the degree of dissimilarity between the response signals of vertically adjacent floors. Subsequently, the results are quantified using a damage index (DI). The performance of the proposed SHM system was evaluated by performing a finite element analysis of a multi-bay seven-story structure. The results revealed that the SHM system successfully detected the damaged floors and their respective directions in several cases. The proposed system also provides a preliminary assessment of which bay has been more severely affected. Thus, the effectiveness and high potential of the SHM system for locating damage in large and complex structures rapidly and at low cost are demonstrated. Full article
Open AccessArticle Assessment of Thermodynamic Irreversibility in a Micro-Scale Viscous Dissipative Circular Couette Flow
Entropy 2018, 20(1), 50; doi:10.3390/e20010050
Received: 18 December 2017 / Revised: 8 January 2018 / Accepted: 10 January 2018 / Published: 11 January 2018
PDF Full-text (2519 KB) | HTML Full-text | XML Full-text
Abstract
We investigate the effect of viscous dissipation on the thermal transport characteristics and its consequences for the entropy-generation rate in a circular Couette flow. We consider the flow of a Newtonian fluid through the narrow annular space between two asymmetrically heated concentric micro-cylinders, where the inner cylinder rotates at a constant speed. Employing an analytical methodology, we obtain the temperature distribution and its consequential effects on the heat-transfer and entropy-generation behaviour in the annulus. We highlight the significant effect of viscous dissipation on the underlying heat transport as modulated by the degree of thermal asymmetry. Our results also show that the variation of the Nusselt number exhibits an unbounded swing for some values of the Brinkman number and certain degrees of asymmetrical wall heating. We explain the appearance of this unbounded swing in the Nusselt number from the energy balance in the flow field as well as from the second law of thermodynamics. We believe that the insights obtained from the present analysis may improve the design of micro-rotating devices/systems. Full article
(This article belongs to the Special Issue Non-Equilibrium Thermodynamics of Micro Technologies)
Open AccessArticle Performance Features of a Stationary Stochastic Novikov Engine
Entropy 2018, 20(1), 52; doi:10.3390/e20010052
Received: 6 December 2017 / Revised: 8 January 2018 / Accepted: 8 January 2018 / Published: 12 January 2018
PDF Full-text (500 KB) | HTML Full-text | XML Full-text
Abstract
In this article, a Novikov engine with fluctuating hot heat bath temperature is presented. Based on this model, the maximum expected power as a performance measure, as well as the corresponding efficiency and entropy production rate, is investigated for five different stationary distributions: continuous uniform, normal, triangular, quadratic, and Pareto. It is found that the performance measures increase monotonically with increasing expectation value and increasing standard deviation of the distributions. Additionally, we show that the distribution has only little influence on the performance measures for small standard deviations. For larger values of the standard deviation, the performance measures in the case of the Pareto distribution differ significantly from those of the other distributions. These observations are explained by a comparison of the Taylor expansions in terms of the distributions’ standard deviations. For the considered symmetric distributions, an extension of the well-known Curzon–Ahlborn efficiency to a stochastic Novikov engine is given. Full article
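For an endoreversible Novikov engine with thermal conductance K, the maximum power is K(√T_h − √T_c)², attained at the Curzon–Ahlborn efficiency 1 − √(T_c/T_h). A Monte Carlo sketch of the expected performance under a fluctuating hot-bath temperature, shown here for two of the distributions; parameter values are illustrative, not the paper's:

```python
# Expected maximum power and efficiency of a Novikov engine with random Th.
import numpy as np

K, Tc = 1.0, 300.0
rng = np.random.default_rng(6)

def expected_performance(th_samples):
    th = np.asarray(th_samples)
    power = K * (np.sqrt(th) - np.sqrt(Tc)) ** 2   # maximum power output
    eta = 1.0 - np.sqrt(Tc / th)                   # Curzon-Ahlborn efficiency
    return power.mean(), eta.mean()

uniform_th = rng.uniform(500.0, 700.0, 10**6)      # mean 600 K
normal_th = rng.normal(600.0, 57.7, 10**6)         # matched mean and std dev
print(expected_performance(uniform_th))
print(expected_performance(normal_th))  # close for small std dev, as the paper notes
```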
Open AccessArticle Chaotic Dynamics of the Fractional-Love Model with an External Environment
Entropy 2018, 20(1), 53; doi:10.3390/e20010053
Received: 27 November 2017 / Revised: 11 January 2018 / Accepted: 11 January 2018 / Published: 12 January 2018
PDF Full-text (6244 KB) | HTML Full-text | XML Full-text
Abstract
Based on a fractional-order nonlinear love model with a periodic function as an external environment, we analyze the characteristics of its chaotic dynamics. We analyze the relationship between the chaotic dynamics of the model and the fractional orders (α, β) when the parameters are fixed, and also the relationship between the chaotic dynamics and the parameters (a, b, c, d) when the fractional orders are fixed. When the parameters are fixed, the system exhibits segmented chaotic states as the fractional orders (α, β) vary. When the fractional orders (α = β) are fixed, the system passes between periodic and chaotic states as the parameters change. Full article
(This article belongs to the Special Issue Theoretical Aspect of Nonlinear Statistical Physics)
Open AccessArticle Synchronization in Fractional-Order Complex-Valued Delayed Neural Networks
Entropy 2018, 20(1), 54; doi:10.3390/e20010054
Received: 13 December 2017 / Revised: 7 January 2018 / Accepted: 8 January 2018 / Published: 12 January 2018
Cited by 1 | PDF Full-text (2660 KB) | HTML Full-text | XML Full-text
Abstract
This paper discusses the synchronization of fractional-order complex-valued neural networks (FOCVNNs) in the presence of time delay. Synchronization criteria are derived through the employment of linear feedback control and a comparison theorem for fractional-order linear systems with delay. The feasibility and effectiveness of the proposed scheme are validated through numerical simulations. Full article
(This article belongs to the Special Issue Research Frontier in Chaos Theory and Complex Networks)
Open AccessArticle A Sequential Algorithm for Signal Segmentation
Entropy 2018, 20(1), 55; doi:10.3390/e20010055
Received: 29 November 2017 / Revised: 8 January 2018 / Accepted: 9 January 2018 / Published: 12 January 2018
PDF Full-text (29827 KB) | HTML Full-text | XML Full-text
Abstract
The problem of event detection in general noisy signals arises in many applications; usually, either a functional form of the event is available, or a previously annotated sample with instances of the event can be used to train a classification algorithm. There are situations, however, where neither functional forms nor annotated samples are available; then, it is necessary to apply other strategies to separate and characterize events. In this work, we analyze 15-min samples of an acoustic signal and are interested in separating sections, or segments, of the signal which are likely to contain significant events. For that, we apply a sequential algorithm with the only assumption that an event alters the energy of the signal. The algorithm is entirely based on Bayesian methods. Full article
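A simplified stand-in for the idea, not the paper's Bayesian decision rule: under the single assumption that an event alters the signal's energy, flag windows whose energy departs from a robust running baseline. Window length and threshold factor are illustrative:

```python
# Energy-based window flagging as a toy version of event segmentation.
import numpy as np

def segment_by_energy(x, win=256, k=3.0):
    n = len(x) // win
    energy = (x[: n * win].reshape(n, win) ** 2).mean(axis=1)
    baseline = np.median(energy)
    spread = np.median(np.abs(energy - baseline))   # robust spread (MAD)
    return np.flatnonzero(energy > baseline + k * spread)

rng = np.random.default_rng(7)
signal = rng.standard_normal(60000)
signal[30000:31000] += 4 * rng.standard_normal(1000)   # injected high-energy event
print(segment_by_energy(signal) * 256)                 # sample offsets of flagged windows
```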
Open AccessArticle Entropy of Iterated Function Systems and Their Relations with Black Holes and Bohr-Like Black Holes Entropies
Entropy 2018, 20(1), 56; doi:10.3390/e20010056
Received: 16 November 2017 / Revised: 5 January 2018 / Accepted: 10 January 2018 / Published: 12 January 2018
PDF Full-text (296 KB) | HTML Full-text | XML Full-text
Abstract
In this paper, we consider the metric entropies of the maps of an iterated function system deduced from a black hole, namely the Bekenstein–Hawking entropy and its subleading corrections. More precisely, we consider the model of a Bohr-like black hole that has recently been analysed in the literature, obtaining the intriguing result that the metric entropies of a black hole are generated by the metric entropies of functions associated with the black hole principal quantum numbers, i.e., with the black hole quantum levels. We present a new type of topological entropy for general iterated function systems based on a new kind of inverse cover. The notion of metric entropy for an iterated function system (IFS) is then considered, and we prove that these definitions of topological entropy for IFSs are equivalent. It is shown that this kind of topological entropy retains some properties which hold for the classic definition of topological entropy for a continuous map. We also consider average entropy as another type of topological entropy for an IFS, based on the topological entropies of its elements; it is likewise invariant under topological conjugacy. The relation between Axiom A and the average entropy is investigated. Full article
Open AccessArticle Granger Causality and Jensen–Shannon Divergence to Determine Dominant Atrial Area in Atrial Fibrillation
Entropy 2018, 20(1), 57; doi:10.3390/e20010057
Received: 15 October 2017 / Revised: 27 December 2017 / Accepted: 5 January 2018 / Published: 12 January 2018
PDF Full-text (664 KB) | HTML Full-text | XML Full-text
Abstract
Atrial fibrillation (AF) is the most commonly occurring arrhythmia. Catheter pulmonary vein ablation has emerged as a treatment that is able to make the arrhythmia disappear; nevertheless, recurrence of the arrhythmia is very frequent. In this study, we propose an analysis of the electrical signals recorded from bipolar catheters at three locations, the pulmonary veins and the right and left atria, before and during the ablation procedure. Principal Component Analysis (PCA) was applied to reduce the data dimension, and Granger causality and divergence techniques were applied to analyse connectivity across the atria in three main regions: the pulmonary veins, the left atrium (LA), and the right atrium (RA). The results showed that, before the procedure, patients with recurrence of the arrhythmia had greater connectivity between atrial areas. Moreover, during the ablation procedure, both atria were more connected in patients with recurrence of the arrhythmia than in patients who maintained sinus rhythm. These results can be helpful for designing procedures to end AF. Full article
(This article belongs to the Special Issue Entropy in Signal Analysis)
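A sketch of a Granger-causality test between two signals using statsmodels, with synthetic data in which x drives y; the PCA reduction and intracardiac signal handling of the study are omitted:

```python
# Granger causality: does the past of x improve prediction of y?
import numpy as np
from statsmodels.tsa.stattools import grangercausalitytests

rng = np.random.default_rng(8)
n = 2000
x = rng.standard_normal(n)
y = np.zeros(n)
for t in range(2, n):
    y[t] = 0.6 * x[t - 1] + 0.2 * y[t - 1] + 0.1 * rng.standard_normal()

# Column order convention: test whether the SECOND column Granger-causes the FIRST.
data = np.column_stack([y, x])
res = grangercausalitytests(data, maxlag=2, verbose=False)
print(res[1][0]["ssr_ftest"])   # (F statistic, p-value, df_denom, df_num) at lag 1
```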
Open AccessArticle Transfer Entropy as a Tool for Hydrodynamic Model Validation
Entropy 2018, 20(1), 58; doi:10.3390/e20010058
Received: 1 November 2017 / Revised: 6 January 2018 / Accepted: 6 January 2018 / Published: 12 January 2018
PDF Full-text (4193 KB) | HTML Full-text | XML Full-text
Abstract
The validation of numerical models is an important component of modeling to ensure reliability of model outputs under prescribed conditions. In river deltas, robust validation of models is paramount given that models are used to forecast land change and to track water, solid, and solute transport through the deltaic network. We propose using transfer entropy (TE) to validate model results. TE quantifies the information transferred between variables in terms of strength, timescale, and direction. Using water level data collected in the distributary channels and inter-channel islands of Wax Lake Delta, Louisiana, USA, along with modeled water level data generated for the same locations using Delft3D, we assess how well couplings between external drivers (river discharge, tides, wind) and modeled water levels reproduce the observed data couplings. We perform this operation through time using ten-day windows. Modeled and observed couplings compare well; their differences reflect the spatial parameterization of wind and roughness in the model, which prevents the model from capturing high frequency fluctuations of water level. The model captures couplings better in channels than on islands, suggesting that mechanisms of channel-island connectivity are not fully represented in the model. Overall, TE serves as an additional validation tool to quantify the couplings of the system of interest at multiple spatial and temporal scales. Full article
(This article belongs to the Special Issue Transfer Entropy II)
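A plug-in estimate of lag-1 transfer entropy from joint histograms, TE(Y→X) = I(X_{t+1}; Y_t | X_t). The bin count, lag, and toy driver–response pair are illustrative, and the paper's windowed, multi-scale analysis is not reproduced:

```python
# Histogram-based transfer entropy from a source series to a target series.
import numpy as np

def transfer_entropy(source, target, bins=8):
    """TE(source -> target) in nats, using one-step histories."""
    x_next, x_now, y_now = target[1:], target[:-1], source[:-1]
    joint, _ = np.histogramdd((x_next, x_now, y_now), bins=bins)
    p_xxy = joint / joint.sum()
    p_xy = p_xxy.sum(axis=0)          # p(x_t, y_t)
    p_xx = p_xxy.sum(axis=2)          # p(x_{t+1}, x_t)
    p_x = p_xxy.sum(axis=(0, 2))      # p(x_t)
    nz = p_xxy > 0
    num = p_xxy * p_x[None, :, None]
    den = p_xx[:, :, None] * p_xy[None, :, :]
    return np.sum(p_xxy[nz] * np.log(num[nz] / den[nz]))

rng = np.random.default_rng(9)
tide = rng.standard_normal(5001)                        # stand-in driver
level = np.concatenate([[0.0], 0.8 * tide[:-1]]) + 0.2 * rng.standard_normal(5001)
print(transfer_entropy(tide, level))    # positive: the driver informs water level
print(transfer_entropy(level, tide))    # near zero: no feedback in this toy
```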
Open AccessArticle Exergetic Analysis, Optimization and Comparison of LNG Cold Exergy Recovery Systems for Transportation
Entropy 2018, 20(1), 59; doi:10.3390/e20010059
Received: 9 December 2017 / Revised: 5 January 2018 / Accepted: 11 January 2018 / Published: 13 January 2018
PDF Full-text (1520 KB) | HTML Full-text | XML Full-text
Abstract
The share of LNG (liquefied natural gas) in the global energy market is steadily increasing. One possible application of LNG is as a fuel for transportation. Stricter air pollution regulations and emission controls have made natural gas a promising alternative to liquid petroleum fuels, especially in the case of heavy transport. However, in most LNG-fueled vehicles, the physical exergy of LNG is destroyed in the regasification process. This paper investigates possible LNG exergy recovery systems for transportation. The analyses focus on “cold energy” recovery systems, in which the low enthalpy of LNG may be used as cooling power in air conditioning or refrigeration. Moreover, four exergy recovery systems that use LNG as a low-temperature heat sink to produce electric power are analyzed: single-stage and two-stage direct expansion systems, an ORC (Organic Rankine Cycle) system, and a combined system (ORC + direct expansion). The optimization of the above-mentioned LNG power cycles and the corresponding exergy analyses are also discussed, with the identification of exergy losses in all components. The analyzed systems achieved exergetic efficiencies in the range of 20% to 36%, which corresponds to a net work output in the range of 214 to 380 kJ/kg of LNG. Full article
(This article belongs to the Special Issue Work Availability and Exergy Analysis)
Open AccessArticle k-Same-Net: k-Anonymity with Generative Deep Neural Networks for Face Deidentification
Entropy 2018, 20(1), 60; doi:10.3390/e20010060
Received: 1 December 2017 / Revised: 31 December 2017 / Accepted: 9 January 2018 / Published: 13 January 2018
PDF Full-text (2986 KB) | HTML Full-text | XML Full-text
Abstract
Image and video data are today being shared between government entities and other relevant stakeholders on a regular basis and require careful handling of the personal information contained therein. A popular approach to ensure privacy protection in such data is the use of deidentification techniques, which aim at concealing the identity of individuals in the imagery while still preserving certain aspects of the data after deidentification. In this work, we propose a novel approach towards face deidentification, called k-Same-Net, which combines recent Generative Neural Networks (GNNs) with the well-known k-Anonymity mechanism and provides formal guarantees regarding privacy protection on a closed set of identities. Our GNN is able to generate synthetic surrogate face images for deidentification by seamlessly combining features of identities used to train the GNN model. Furthermore, it allows us to control the image-generation process with a small set of appearance-related parameters that can be used to alter specific aspects (e.g., facial expressions, age, gender) of the synthesized surrogate images. We demonstrate the feasibility of k-Same-Net in comprehensive experiments on the XM2VTS and CK+ datasets. We evaluate the efficacy of the proposed approach through reidentification experiments with recent recognition models and compare our results with competing deidentification techniques from the literature. We also present facial expression recognition experiments to demonstrate the utility-preservation capabilities of k-Same-Net. Our experimental results suggest that k-Same-Net is a viable option for facial deidentification that exhibits several desirable characteristics when compared to existing solutions in this area. Full article
(This article belongs to the Special Issue Selected Papers from IWOBI—Entropy-Based Applied Signal Processing)
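For context, a sketch of the classical pixel-space k-Same baseline that k-Same-Net builds on: each face is replaced by the average of its k nearest faces, so any surrogate maps back to at least k identities. k-Same-Net instead blends identities inside a generative network; the array shapes here are illustrative:

```python
# Pixel-space k-Same averaging as a toy k-anonymity deidentifier.
import numpy as np

def k_same_pixel(faces: np.ndarray, k: int = 3) -> np.ndarray:
    """faces: (n, h, w) aligned gray-scale faces; returns deidentified set."""
    flat = faces.reshape(len(faces), -1).astype(float)
    out = np.empty_like(flat)
    for i, f in enumerate(flat):
        nearest = np.argsort(np.linalg.norm(flat - f, axis=1))[:k]
        out[i] = flat[nearest].mean(axis=0)       # surrogate = k-face average
    return out.reshape(faces.shape)

rng = np.random.default_rng(11)
gallery = rng.integers(0, 256, size=(10, 32, 32)).astype(np.uint8)
print(k_same_pixel(gallery, k=3).shape)           # (10, 32, 32) surrogates
```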
Open AccessArticle Low Computational Cost for Sample Entropy
Entropy 2018, 20(1), 61; doi:10.3390/e20010061
Received: 28 November 2017 / Revised: 24 December 2017 / Accepted: 9 January 2018 / Published: 13 January 2018
Cited by 1 | PDF Full-text (464 KB) | HTML Full-text | XML Full-text
Abstract
Sample Entropy is the most popular definition of entropy and is widely used as a measure of the regularity/complexity of a time series. On the other hand, it is a computationally expensive method which may require a large amount of time when used on long series or with a large number of signals. The computationally intensive part is the similarity check between points in m-dimensional space. In this paper, we propose new algorithms, or extend already proposed ones, aiming to compute Sample Entropy quickly. All algorithms return exactly the same value for Sample Entropy, and no approximation techniques are used. We compare and evaluate them using cardiac inter-beat (RR) time series. We investigate three algorithms. The first is an extension of the kd-trees algorithm, customized for Sample Entropy. The second is an extension of an algorithm initially proposed for Approximate Entropy, again customized for Sample Entropy, but also improved to give even faster results. The last is a completely new algorithm, presenting the fastest execution times for specific values of m, r, time series length, and signal characteristics. These algorithms are compared with the straightforward implementation, directly resulting from the definition of Sample Entropy, in order to give a clear picture of the speedups achieved. All algorithms assume the classical approach to the metric, in which the maximum norm is used. The key idea of the last two algorithms is to avoid unnecessary comparisons by detecting them early. We use the term unnecessary to refer to comparisons for which we know a priori that they will fail the similarity check. The number of avoided comparisons is proven to be very large, resulting in a correspondingly large reduction of execution time, making them the fastest algorithms available today for the computation of Sample Entropy. Full article
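The straightforward O(n²) baseline that the fast algorithms are measured against: count template matches of length m and m+1 under the maximum norm, excluding self-matches, and take −ln of the ratio. A hedged sketch with the conventional r = 0.2·std default:

```python
# Definition-level Sample Entropy; the quadratic similarity check is the
# part the paper's kd-tree and early-abort algorithms accelerate.
import numpy as np

def sample_entropy(x, m=2, r=None):
    x = np.asarray(x, dtype=float)
    if r is None:
        r = 0.2 * x.std()
    n = len(x)
    def matches(mm):
        # Use the same n - m template start indices for both lengths.
        t = np.lib.stride_tricks.sliding_window_view(x, mm)[: n - m]
        total = 0
        for i in range(len(t) - 1):
            d = np.max(np.abs(t[i + 1:] - t[i]), axis=1)   # maximum norm
            total += np.count_nonzero(d <= r)
        return total
    b, a = matches(m), matches(m + 1)
    return -np.log(a / b) if a > 0 and b > 0 else float("inf")

rng = np.random.default_rng(10)
rr = 0.8 + 0.05 * rng.standard_normal(2000)   # stand-in for an RR series
print(sample_entropy(rr))
```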
Open AccessArticle An Operation Reduction Using Fast Computation of an Iteration-Based Simulation Method with Microsimulation-Semi-Symbolic Analysis
Entropy 2018, 20(1), 62; doi:10.3390/e20010062
Received: 24 October 2017 / Revised: 26 December 2017 / Accepted: 11 January 2018 / Published: 18 January 2018
PDF Full-text (8006 KB) | HTML Full-text | XML Full-text
Abstract
This paper presents a method for shortening the computation time and reducing the number of mathematical operations required in complex calculations for the analysis, simulation, and design of processes and systems. The method is suitable for education and engineering applications. Its efficacy is illustrated with a case study of a complex wireless communication system. A computer algebra system (CAS) was applied to formulate hypotheses and define the joint probability density function of a certain modulation technique. This innovative method was used to prepare microsimulation-semi-symbolic analyses to fully specify the wireless system. The development of an iteration-based simulation method that provides closed-form solutions is presented. Previously, such expressions were solved using time-consuming numerical methods. Students can apply this method for performance analysis and to understand data transfer processes. Engineers and researchers may use the method to gain insight into the impact of the parameters necessary to properly transmit and detect information, which traditional numerical methods do not readily provide. This research contributes to the field by improving the ability to obtain closed-form solutions for the probability density function and outage probability, and considerably improves time efficiency by shortening computation time and reducing the number of calculation operations. Full article
(This article belongs to the Special Issue Symbolic Entropy Analysis and Its Applications)
Open AccessArticle Analytical Solutions for Multi-Time Scale Fractional Stochastic Differential Equations Driven by Fractional Brownian Motion and Their Applications
Entropy 2018, 20(1), 63; doi:10.3390/e20010063
Received: 21 December 2017 / Revised: 3 January 2018 / Accepted: 11 January 2018 / Published: 16 January 2018
PDF Full-text (276 KB) | HTML Full-text | XML Full-text
Abstract
In this paper, we investigate analytical solutions of multi-time scale fractional stochastic differential equations driven by fractional Brownian motions. We firstly decompose homogeneous multi-time scale fractional stochastic differential equations driven by fractional Brownian motions into independent differential subequations, and give their analytical solutions.
[...] Read more.
In this paper, we investigate analytical solutions of multi-time scale fractional stochastic differential equations driven by fractional Brownian motions. We first decompose the homogeneous equations of this class into independent differential subequations and give their analytical solutions. Then, we use the method of variation of constants to obtain the solutions of the nonhomogeneous equations. Finally, we give three examples to demonstrate the applicability of our results. Full article
(This article belongs to the Special Issue Entropy in Dynamic Systems)
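To make the driving noise concrete, here is a minimal sketch that samples a fractional Brownian motion path by Cholesky factorization of its exact covariance. It only illustrates the process itself; the paper's decomposition and variation-of-constants machinery is analytical, not simulation-based, and the function name is illustrative.

```python
import numpy as np

def fbm_cholesky(n, hurst, T=1.0, seed=None):
    """Sample one fBm path on n grid points via Cholesky factorization
    of the exact covariance; O(n^3), fine for illustration."""
    rng = np.random.default_rng(seed)
    t = np.linspace(T / n, T, n)
    # Covariance of fBm: 0.5 * (s^{2H} + t^{2H} - |t - s|^{2H})
    s, u = np.meshgrid(t, t)
    cov = 0.5 * (s**(2*hurst) + u**(2*hurst) - np.abs(s - u)**(2*hurst))
    path = np.linalg.cholesky(cov) @ rng.standard_normal(n)
    return np.concatenate(([0.0], path))  # path starts at zero

print(fbm_cholesky(256, hurst=0.7, seed=1)[:5])
```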
Open AccessArticle Joint Content Recommendation and Delivery in Mobile Wireless Networks with Outage Management
Entropy 2018, 20(1), 64; doi:10.3390/e20010064
Received: 6 November 2017 / Revised: 29 December 2017 / Accepted: 10 January 2018 / Published: 15 January 2018
PDF Full-text (1686 KB) | HTML Full-text | XML Full-text
Abstract
Personalized content retrieval service has become a major information service that consumes a large portion of mobile Internet traffic. Joint content recommendation and delivery is a promising design philosophy that could effectively improve the overall user experience with personalized content retrieval services. Existing
[...] Read more.
Personalized content retrieval has become a major information service and consumes a large portion of mobile Internet traffic. Joint content recommendation and delivery is a promising design philosophy that can effectively improve the overall user experience of personalized content retrieval services. Existing research has mostly focused on a push-type design paradigm called proactive caching, which, however, has several inherent drawbacks such as high device cost and low energy efficiency. This paper proposes a novel, interactive joint content recommendation and delivery system as an alternative that overcomes the drawbacks of proactive caching. We present several optimal and heuristic algorithms for the proposed system and analyze its performance in terms of user interest and transmission outage probability. Theoretical performance bounds of the system are also derived. The effectiveness of the proposed system and algorithms is validated by simulation results. Full article
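As a reference point for the outage analysis, the snippet below computes the textbook transmission outage probability of a single Rayleigh-fading link. The paper's system model and outage management scheme are more elaborate, so treat this only as the underlying formula.

```python
import numpy as np

def outage_probability(rate_bps_hz, mean_snr):
    """Outage probability P(log2(1 + SNR) < R) for a Rayleigh-fading
    link, where SNR is exponentially distributed with mean `mean_snr`."""
    snr_threshold = 2.0**rate_bps_hz - 1.0
    return 1.0 - np.exp(-snr_threshold / mean_snr)

# Example: 2 bit/s/Hz target over a link with 10 dB average SNR
print(outage_probability(2.0, 10**(10 / 10)))
```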
Open AccessArticle Multi-Attribute Decision-Making Based on Bonferroni Mean Operators under Cubic Intuitionistic Fuzzy Set Environment
Entropy 2018, 20(1), 65; doi:10.3390/e20010065
Received: 22 November 2017 / Revised: 9 January 2018 / Accepted: 10 January 2018 / Published: 17 January 2018
Cited by 3 | PDF Full-text (505 KB) | HTML Full-text | XML Full-text
Abstract
Cubic intuitionistic fuzzy (CIF) set is the hybrid set which can contain much more information to express an interval-valued intuitionistic fuzzy set and an intuitionistic fuzzy set simultaneously for handling the uncertainties in the data. Unfortunately, there has been no research on the
[...] Read more.
The cubic intuitionistic fuzzy (CIF) set is a hybrid set that can express an interval-valued intuitionistic fuzzy set and an intuitionistic fuzzy set simultaneously, and hence carries much more information for handling uncertainties in data. So far, however, there has been no research on aggregation operators for CIF sets. Since aggregation operators are important mathematical tools in decision-making problems, the present paper proposes some new Bonferroni mean and weighted Bonferroni mean averaging operators between cubic intuitionistic fuzzy numbers for aggregating the different preferences of the decision-maker. We then develop a decision-making method based on the proposed operators under the cubic intuitionistic fuzzy environment and illustrate it with a numerical example. Finally, a comparative analysis between the proposed and existing approaches is performed to show the applicability and feasibility of the developed method. Full article
(This article belongs to the Section Information Theory)
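For intuition about the underlying operator, here is the classical (crisp) Bonferroni mean, which the paper extends to cubic intuitionistic fuzzy numbers; the parameter names are generic.

```python
import numpy as np

def bonferroni_mean(a, p=1.0, q=1.0):
    """Classical Bonferroni mean:
    BM^{p,q}(a) = ( 1/(n(n-1)) * sum_{i != j} a_i^p a_j^q )^{1/(p+q)}."""
    a = np.asarray(a, dtype=float)
    n = len(a)
    total = sum(a[i]**p * a[j]**q
                for i in range(n) for j in range(n) if i != j)
    return (total / (n * (n - 1)))**(1.0 / (p + q))

print(bonferroni_mean([0.6, 0.7, 0.8]))  # aggregation of 3 preference scores
```

The cross terms a_i^p a_j^q are what let the operator capture interrelationships between the aggregated arguments, which a plain weighted average cannot.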
Open AccessArticle Using Entropy in Web Usage Data Preprocessing
Entropy 2018, 20(1), 67; doi:10.3390/e20010067
Received: 30 November 2017 / Revised: 10 January 2018 / Accepted: 13 January 2018 / Published: 22 January 2018
PDF Full-text (3113 KB) | HTML Full-text | XML Full-text
Abstract
The paper is focused on an examination of the use of entropy in the field of web usage mining. Entropy creates an alternative possibility of determining the ratio of auxiliary pages in the session identification using the Reference Length method. The experiment was
[...] Read more.
The paper examines the use of entropy in the field of web usage mining. Entropy offers an alternative way of determining the ratio of auxiliary pages during session identification with the Reference Length method. The experiment was conducted on two different web portals. The first log file was obtained from the web portal of a virtual learning environment course; the second came from a web portal with anonymous access. A comparison of the entropy-based estimate of the ratio of auxiliary pages with the sitemap-based estimate showed that, in the case of sitemap abundance, entropy can serve as a full-valued substitute for the sitemap-based estimate. Full article
(This article belongs to the Special Issue Entropy-based Data Mining)
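As a rough illustration of the kind of entropy calculation involved, the snippet below computes the Shannon entropy of a page-visit distribution. The counts are made up, and this is only the generic quantity; the paper uses entropy specifically to set the ratio of auxiliary pages for the Reference Length method.

```python
import numpy as np

def page_entropy(visit_counts):
    """Shannon entropy (bits) of a page-visit distribution."""
    p = np.asarray(visit_counts, dtype=float)
    p = p[p > 0] / p.sum()          # normalize, drop zero-count pages
    return float(-(p * np.log2(p)).sum())

# Hypothetical counts of visits to five pages within recorded sessions
print(page_entropy([120, 30, 25, 10, 5]))
```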
Open AccessArticle An Entropic Model for the Assessment of Streamwise Velocity Dip in Wide Open Channels
Entropy 2018, 20(1), 69; doi:10.3390/e20010069
Received: 31 October 2017 / Revised: 5 January 2018 / Accepted: 13 January 2018 / Published: 17 January 2018
Cited by 3 | PDF Full-text (7227 KB) | HTML Full-text | XML Full-text
Abstract
The three-dimensional structure of river flow and the presence of secondary currents, mainly near walls, often cause the maximum cross-sectional velocity to occur below the free surface, which is known as the “dip” phenomenon. The present study proposes a theoretical model derived from
[...] Read more.
The three-dimensional structure of river flow and the presence of secondary currents, mainly near the walls, often cause the maximum cross-sectional velocity to occur below the free surface; this is known as the “dip” phenomenon. The present study proposes a theoretical model, derived from entropy theory, to predict the position of the velocity dip along with the corresponding velocity value. Field data collected at three ungauged sections along the Alzette River in the Grand Duchy of Luxembourg and at three gauged sections along three large rivers in Basilicata (southern Italy) were used to test its validity. The results show that the model is in good agreement with the experimental measurements and, when compared with other models documented in the literature, yields the smallest percentage error. Full article
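For background, the classical entropy-based velocity profile from the literature (Chiu's formulation) has the form below, where M is the entropy parameter and \xi is a normalized vertical coordinate. The paper's dip model extends this line of reasoning to locate the velocity maximum below the free surface, so the expression here is only the standard starting point.

```latex
\frac{u(\xi)}{u_{\max}} \;=\; \frac{1}{M}\,
  \ln\!\left[\,1 + \left(e^{M} - 1\right)\xi\,\right],
\qquad \xi \in [0,1].
```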
Open AccessArticle The Fractality of Polar and Reed–Muller Codes
Entropy 2018, 20(1), 70; doi:10.3390/e20010070
Received: 16 October 2017 / Revised: 10 January 2018 / Accepted: 15 January 2018 / Published: 17 January 2018
PDF Full-text (334 KB) | HTML Full-text | XML Full-text | Supplementary Files
Abstract
The generator matrices of polar codes and Reed–Muller codes are submatrices of the Kronecker product of a lower-triangular binary square matrix. For polar codes, the submatrix is generated by selecting rows according to their Bhattacharyya parameter, which is related to the error probability
[...] Read more.
The generator matrices of polar codes and Reed–Muller codes are submatrices of the Kronecker product of a lower-triangular binary square matrix. For polar codes, the submatrix is generated by selecting rows according to their Bhattacharyya parameter, which is related to the error probability of sequential decoding. For Reed–Muller codes, the submatrix is generated by selecting rows according to their Hamming weight. In this work, we investigate the properties of the index sets selecting those rows, in the limit as the blocklength tends to infinity. We compute the Lebesgue measure and the Hausdorff dimension of these sets. We furthermore show that these sets are finely structured and self-similar in a well-defined sense, i.e., they have properties that are common to fractals. Full article
(This article belongs to the Section Information Theory)
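The common construction can be sketched in a few lines: build the Kronecker power of the lower-triangular matrix F and select rows. The weight-threshold rule below corresponds to the Reed–Muller selection; polar codes would rank rows by their Bhattacharyya parameter instead, which is omitted here. Function names are illustrative.

```python
import numpy as np

def kronecker_power(n):
    """G_N = F^{(x) n} with F = [[1,0],[1,1]], the common parent matrix
    of polar and Reed-Muller generator matrices."""
    F = np.array([[1, 0], [1, 1]], dtype=np.uint8)
    G = np.array([[1]], dtype=np.uint8)
    for _ in range(n):
        G = np.kron(G, F)
    return G

def reed_muller_rows(n, min_weight):
    """Reed-Muller-style selection: keep rows of Hamming weight >= min_weight."""
    G = kronecker_power(n)
    return np.flatnonzero(G.sum(axis=1) >= min_weight)

print(reed_muller_rows(3, 4))  # rows of F^{(x)3} with weight >= 4
```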
Open AccessArticle Non-Gaussian Closed Form Solutions for Geometric Average Asian Options in the Framework of Non-Extensive Statistical Mechanics
Entropy 2018, 20(1), 71; doi:10.3390/e20010071
Received: 17 November 2017 / Revised: 30 December 2017 / Accepted: 16 January 2018 / Published: 18 January 2018
PDF Full-text (266 KB) | HTML Full-text | XML Full-text
Abstract
In this paper we consider pricing problems of the geometric average Asian options under a non-Gaussian model, in which the underlying stock price is driven by a process based on non-extensive statistical mechanics. The model can describe the peak and fat tail characteristics
[...] Read more.
In this paper, we consider pricing problems for geometric average Asian options under a non-Gaussian model in which the underlying stock price is driven by a process based on non-extensive statistical mechanics. The model can describe the peaked and fat-tailed characteristics of returns, so the description of the underlying asset price, and hence the pricing of options, is more accurate. Using the martingale method, we obtain closed form solutions for geometric average Asian options. Furthermore, the numerical analysis shows that the model avoids underestimating risk relative to the Black-Scholes model. Full article
(This article belongs to the Special Issue Nonadditive Entropies and Complex Systems)
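For comparison, the Gaussian benchmark that the paper measures itself against admits a well-known closed form: under Black-Scholes with continuous averaging, the log of the geometric average is Gaussian. The sketch below implements this standard result, not the paper's non-extensive pricing formula.

```python
import numpy as np
from scipy.stats import norm

def geometric_asian_call_bs(s0, k, r, sigma, T):
    """Closed-form geometric-average Asian call under Black-Scholes
    (continuous averaging): ln G_T is Gaussian with mean
    ln s0 + (r - sigma^2/2) T/2 and variance sigma^2 T / 3."""
    mu = np.log(s0) + (r - 0.5 * sigma**2) * T / 2.0
    var = sigma**2 * T / 3.0
    d2 = (mu - np.log(k)) / np.sqrt(var)
    d1 = d2 + np.sqrt(var)
    forward = np.exp(mu + 0.5 * var)        # E[G_T]
    return np.exp(-r * T) * (forward * norm.cdf(d1) - k * norm.cdf(d2))

print(geometric_asian_call_bs(100, 100, 0.05, 0.2, 1.0))
```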
Open AccessArticle Adaptive Diagnosis for Rotating Machineries Using Information Geometrical Kernel-ELM Based on VMD-SVD
Entropy 2018, 20(1), 73; doi:10.3390/e20010073
Received: 6 December 2017 / Revised: 14 January 2018 / Accepted: 16 January 2018 / Published: 21 January 2018
PDF Full-text (18367 KB) | HTML Full-text | XML Full-text
Abstract
Rotating machineries often work under severe and variable operation conditions, which brings challenges to fault diagnosis. To deal with this challenge, this paper discusses the concept of adaptive diagnosis, which means to diagnose faults under variable operation conditions with self-adaptively and little prior
[...] Read more.
Rotating machineries often work under severe and variable operating conditions, which brings challenges to fault diagnosis. To deal with this challenge, this paper discusses the concept of adaptive diagnosis, i.e., diagnosing faults under variable operating conditions self-adaptively and with little prior knowledge or human intervention. To this end, a novel algorithm is proposed: the information geometrical extreme learning machine with kernel (IG-KELM). From the perspective of information geometry, the structure and Riemannian metric of Kernel-ELM are specified. Based on this geometrical structure, an IG-based conformal transformation is created to improve the generalization ability and self-adaptability of KELM. The proposed IG-KELM, in conjunction with variational mode decomposition (VMD) and singular value decomposition (SVD), is utilized for adaptive diagnosis as follows: (1) VMD, a new self-adaptive signal processing algorithm, is used to decompose the raw signals into several intrinsic mode functions (IMFs). (2) SVD is used to extract the intrinsic characteristics from the matrix constructed from the IMFs. (3) IG-KELM is used to diagnose faults under variable conditions self-adaptively, with no requirement for prior knowledge or human intervention. Finally, the proposed method was applied to fault diagnosis of a bearing and a hydraulic pump. The results show that the proposed method outperforms the conventional method by up to 7.25% and 7.78% in accuracy, respectively. Full article
(This article belongs to the Section Information Theory)
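Step (2) of the pipeline can be sketched as follows, assuming the IMFs have already been produced by some VMD implementation (not shown here): stack them into a matrix and keep the leading singular values as a compact feature vector. The function name and the choice of k are illustrative.

```python
import numpy as np

def svd_features(imfs, k=3):
    """Keep the leading singular values of the IMF matrix as features."""
    A = np.vstack(imfs)                        # shape: (n_modes, n_samples)
    s = np.linalg.svd(A, compute_uv=False)     # singular values, descending
    return s[:k]

# Hypothetical IMFs standing in for the output of a VMD decomposition
rng = np.random.default_rng(0)
t = np.linspace(0, 1, 1024)
imfs = [np.sin(2 * np.pi * f * t) + 0.1 * rng.standard_normal(t.size)
        for f in (5, 20, 50)]
print(svd_features(imfs))
```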
Open AccessArticle Finite-Time Thermodynamic Modeling and a Comparative Performance Analysis for Irreversible Otto, Miller and Atkinson Cycles
Entropy 2018, 20(1), 75; doi:10.3390/e20010075
Received: 3 December 2017 / Revised: 14 January 2018 / Accepted: 16 January 2018 / Published: 19 January 2018
PDF Full-text (4590 KB) | HTML Full-text | XML Full-text
Abstract
Finite-time thermodynamic models for an Otto cycle, an Atkinson cycle, an over-expansion Miller cycle (M1), an LIVC Miller cycle through late intake valve closure (M2) and an LIVC Miller cycle with constant compression ratio (M3) have been established. The models for the two
[...] Read more.
Finite-time thermodynamic models have been established for an Otto cycle, an Atkinson cycle, an over-expansion Miller cycle (M1), a Miller cycle through late intake valve closure (LIVC, M2) and an LIVC Miller cycle with constant compression ratio (M3). The models for the two LIVC Miller cycles are developed here for the first time, and heat-transfer and friction losses are considered together with the effects of real engine parameters. A comparative analysis of the energy losses and performances has been conducted. The optimum compression-ratio ranges for efficiency and for effective power differ. The comparative cycle performances are influenced jointly by the ratios of the energy losses and the cycle types. The Atkinson cycle has the maximum peak power and efficiency but the minimum power density, while the M1 cycle achieves the best overall performance. The smaller net fuel amount and the high peak cylinder pressure (M3 cycle) have a significantly adverse effect on the heat-transfer and friction loss ratios of the M2 and M3 cycles, whose effective power and energy efficiency are always lower than those of the M1 and Atkinson cycles. When the weights of heat transfer and friction are greatly reduced, the M3 cycle shows a significant advantage in energy efficiency. The results obtained can provide guidance for selecting the cycle type and optimizing the performance of a real engine. Full article
(This article belongs to the Section Thermodynamics)
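As a frame of reference, the ideal air-standard efficiencies of the Otto and Atkinson cycles have simple closed forms, sketched below. The paper's finite-time models add heat-transfer and friction losses on top of these, so the numbers here are idealized upper bounds only.

```python
# Ideal air-standard baselines only; finite-time models add losses.
def otto_efficiency(r, gamma=1.4):
    """Ideal Otto cycle: eta = 1 - r^(1 - gamma) for compression ratio r."""
    return 1.0 - r**(1.0 - gamma)

def atkinson_efficiency(rc, re, gamma=1.4):
    """Ideal Atkinson/over-expanded cycle with compression ratio rc and
    larger expansion ratio re: eta = 1 - gamma*(re - rc)/(re^g - rc^g)."""
    return 1.0 - gamma * (re - rc) / (re**gamma - rc**gamma)

print(otto_efficiency(10), atkinson_efficiency(10, 14))
```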
Open AccessArticle Maximum Entropy Expectation-Maximization Algorithm for Fitting Latent-Variable Graphical Models to Multivariate Time Series
Entropy 2018, 20(1), 76; doi:10.3390/e20010076
Received: 8 December 2017 / Revised: 14 January 2018 / Accepted: 16 January 2018 / Published: 19 January 2018
PDF Full-text (901 KB) | HTML Full-text | XML Full-text
Abstract
This work is focused on latent-variable graphical models for multivariate time series. We show how an algorithm which was originally used for finding zeros in the inverse of the covariance matrix can be generalized such that to identify the sparsity pattern of the
[...] Read more.
This work focuses on latent-variable graphical models for multivariate time series. We show how an algorithm originally used for finding zeros in the inverse of the covariance matrix can be generalized to identify the sparsity pattern of the inverse of the spectral density matrix. When applied to a given time series, the algorithm produces a set of candidate models, and various information-theoretic (IT) criteria are employed for deciding the winner. A novel IT criterion, tailored to our model selection problem, is introduced. Some options for reducing the computational burden are proposed and tested via numerical examples. We conduct an empirical study in which the algorithm is compared with the state of the art. The results are good, and the major advantage is that the subjective choices made by the user matter less than in other methods. Full article
(This article belongs to the Section Information Theory)
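A rough sketch of the object being estimated: the spectral density matrix of the series, whose inverse encodes conditional-independence structure. The Welch cross-spectra and the crude thresholding below are illustrative stand-ins; the paper's maximum entropy EM algorithm and IT-criterion model selection are far more principled.

```python
import numpy as np
from scipy.signal import csd

def inverse_spectral_pattern(x, fs=1.0, nperseg=256, thresh=0.1):
    """Estimate the spectral density matrix of a (p, n) array of series
    by Welch cross-spectra, invert it frequency-by-frequency, and flag
    entries that stay small at every frequency (candidate missing edges)."""
    p, _ = x.shape
    f, _ = csd(x[0], x[0], fs=fs, nperseg=nperseg)
    S = np.empty((len(f), p, p), dtype=complex)
    for i in range(p):
        for j in range(p):
            _, S[:, i, j] = csd(x[i], x[j], fs=fs, nperseg=nperseg)
    K = np.linalg.inv(S)                       # inverse spectral density
    strength = np.abs(K).max(axis=0)           # max magnitude over frequencies
    return strength < thresh * strength.max()  # True = candidate zero
```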
Open AccessFeature PaperArticle Characterizing Normal and Pathological Gait through Permutation Entropy
Entropy 2018, 20(1), 77; doi:10.3390/e20010077
Received: 10 November 2017 / Revised: 15 January 2018 / Accepted: 16 January 2018 / Published: 19 January 2018
PDF Full-text (1072 KB) | HTML Full-text | XML Full-text
Abstract
Cerebral palsy is a physical impairment stemming from a brain lesion at perinatal time, most of the time resulting in gait abnormalities: the first cause of severe disability in childhood. Gait study, and instrumental gait analysis in particular, has been receiving increasing attention
[...] Read more.
Cerebral palsy is a physical impairment stemming from a perinatal brain lesion, most often resulting in gait abnormalities, and is the leading cause of severe disability in childhood. Gait study, and instrumental gait analysis in particular, has been receiving increasing attention in recent years, as gait is the complex result of interactions between different brain motor areas and thus a proxy for understanding the underlying neural dynamics. Yet, in spite of its importance, little is known about how the brain adapts to cerebral palsy and its impaired gait and, consequently, about the best strategies for mitigating the disability. In this contribution, we present the first analysis of joint kinematics data using permutation entropy, comparing children with cerebral palsy to a set of matched control subjects. We find a significant increase in permutation entropy for the former group, indicating a more complex and erratic neural control of joints, and a non-trivial relationship between permutation entropy and gait speed. We further show how this information-theoretic measure can be used to train a data mining model able to forecast the child's condition. We finally discuss the relevance of these results for clinical applications and specifically for the design of personalized medicine interventions. Full article
(This article belongs to the Special Issue Information Theory Applied to Physiological Signals)
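For concreteness, here is a minimal implementation of the standard Bandt-Pompe permutation entropy estimator used in such analyses; the order and delay defaults are the usual illustrative choices, not necessarily those of the authors.

```python
import numpy as np
from math import factorial
from itertools import permutations

def permutation_entropy(x, order=3, delay=1):
    """Normalised permutation entropy: the Shannon entropy of the
    distribution of ordinal patterns of length `order`, in [0, 1]."""
    x = np.asarray(x, dtype=float)
    patterns = {p: 0 for p in permutations(range(order))}
    n = len(x) - (order - 1) * delay
    for i in range(n):
        window = x[i:i + order * delay:delay]
        patterns[tuple(np.argsort(window))] += 1   # ordinal pattern of window
    counts = np.array([c for c in patterns.values() if c > 0], dtype=float)
    p = counts / counts.sum()
    return float(-(p * np.log(p)).sum() / np.log(factorial(order)))
```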
Open AccessArticle Anomalous Advection-Dispersion Equations within General Fractional-Order Derivatives: Models and Series Solutions
Entropy 2018, 20(1), 78; doi:10.3390/e20010078
Received: 3 November 2017 / Revised: 18 January 2018 / Accepted: 19 January 2018 / Published: 22 January 2018
PDF Full-text (653 KB) | HTML Full-text | XML Full-text
Abstract
In this paper, an anomalous advection-dispersion model involving a new general Liouville–Caputo fractional-order derivative is addressed for the first time. The series solutions of the general fractional advection-dispersion equations are obtained with the aid of the Laplace transform. The results are given to
[...] Read more.
In this paper, an anomalous advection-dispersion model involving a new general Liouville–Caputo fractional-order derivative is addressed for the first time. Series solutions of the general fractional advection-dispersion equations are obtained with the aid of the Laplace transform. The results demonstrate the efficiency of the proposed formulations in describing anomalous advection-dispersion processes. Full article
(This article belongs to the Special Issue Wavelets, Fractals and Information Theory III)
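Schematically, a time-fractional advection-dispersion equation with a Liouville-Caputo derivative of order 0 < \alpha \le 1 reads as below, with v the advection velocity and D the dispersion coefficient; the paper's general fractional-order derivative generalises the kernel of this special case.

```latex
{}^{\mathrm{LC}}D_{t}^{\alpha}\,C(x,t)
  \;=\; D\,\frac{\partial^{2} C(x,t)}{\partial x^{2}}
  \;-\; v\,\frac{\partial C(x,t)}{\partial x}.
```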
Open AccessArticle A Newly Secure Solution to MIMOME OFDM-Based SWIPT Frameworks: A Two-Stage Stackelberg Game for a Multi-User Strategy
Entropy 2018, 20(1), 79; doi:10.3390/e20010079
Received: 22 December 2017 / Revised: 9 January 2018 / Accepted: 18 January 2018 / Published: 22 January 2018
PDF Full-text (1207 KB) | HTML Full-text | XML Full-text
Abstract
This paper proposes a new secure scheme for simultaneous wireless information and power transfer (SWIPT) frameworks. We consider an orthogonal frequency division multiplexing (OFDM)-based game for a multiple-input multiple-output multi-antenna-eavesdropper (MIMOME) setting. The scheme
[...] Read more.
This paper proposes a new secure scheme for simultaneous wireless information and power transfer (SWIPT) frameworks. We consider an orthogonal frequency division multiplexing (OFDM)-based game for a multiple-input multiple-output multi-antenna-eavesdropper (MIMOME) setting. The scheme also covers the case of imperfect channel state information (ICSI) at the transmitter side. Power and information are transferred over orthogonally provided sub-carriers. We propose a two-stage Stackelberg game to optimise the utility functions of both the power and the information parts. In the first stage (associated with information), the price is the total power of the other sub-carriers over which energy is supported, and the sum secrecy rate is maximised. The second stage of the game is associated with the energy part; here, the price is the total power of the other sub-carriers over which information is transferred, and the total transferred power is maximised. Subsequently, optimal and near-optimal mathematical solutions are derived for some special cases, such as the ICSI one. Finally, simulations validate the tightness and efficiency of the proposed scheme. Full article
(This article belongs to the Section Information Theory)
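The quantity maximised in the first stage can be illustrated with the textbook per-subcarrier secrecy rate, sketched below. The SNR vectors are made-up numbers, and the pricing mechanism of the Stackelberg game is not modelled here.

```python
import numpy as np

def secrecy_rate(snr_legit, snr_eve):
    """Per-subcarrier secrecy rate (bit/s/Hz): the positive part of the
    legitimate link's rate minus the eavesdropper's."""
    return np.maximum(0.0, np.log2(1 + snr_legit) - np.log2(1 + snr_eve))

# Hypothetical per-subcarrier SNRs on the information sub-carriers
snr_l = np.array([8.0, 3.0, 12.0])   # legitimate receiver
snr_e = np.array([1.0, 2.5, 4.0])    # eavesdropper
print(secrecy_rate(snr_l, snr_e).sum())   # sum secrecy rate
```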
Open AccessArticle Entanglement Entropy of the Spin-1 Condensates at Zero Temperature
Entropy 2018, 20(1), 80; doi:10.3390/e20010080
Received: 14 December 2017 / Revised: 10 January 2018 / Accepted: 18 January 2018 / Published: 22 January 2018
PDF Full-text (243 KB) | HTML Full-text | XML Full-text
Abstract
For spin-1 condensates, the spatial degrees of freedom can be considered as being frozen at temperature zero, while the spin-degrees of freedom remain free. Under this condition, the entanglement entropy has been derived exactly with an analytical form. The entanglement entropy is found
[...] Read more.
For spin-1 condensates, the spatial degrees of freedom can be considered frozen at zero temperature, while the spin degrees of freedom remain free. Under this condition, the entanglement entropy is derived exactly in analytical form. The entanglement entropy is found to decrease monotonically as the magnetic polarization increases, as expected. However, for the ground state in the polar phase, an extremely steep fall of the entropy is found when the polarization emerges from zero; the fall becomes a gentle descent once the polarization exceeds a turning point. Full article
(This article belongs to the Special Issue Residual Entropy and Nonequilibrium States)
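The quantity in question is the von Neumann entropy of a reduced density matrix, which the snippet below evaluates from the eigenvalues. The diagonal 3x3 state is a hypothetical stand-in for a single spin-1 reduced state, not the condensate ground state derived in the paper.

```python
import numpy as np

def entanglement_entropy(rho_reduced):
    """Von Neumann entropy S = -Tr(rho ln rho) from the eigenvalues of a
    reduced density matrix."""
    evals = np.linalg.eigvalsh(rho_reduced)
    evals = evals[evals > 1e-12]      # drop numerical zeros
    return float(-(evals * np.log(evals)).sum())

# Hypothetical reduced state of one spin-1 particle (3x3, unit trace)
rho = np.diag([0.5, 0.3, 0.2])
print(entanglement_entropy(rho))
```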
Open AccessArticle Macroscopic Internal Variables and Mesoscopic Theory: A Comparison Considering Liquid Crystals
Entropy 2018, 20(1), 81; doi:10.3390/e20010081
Received: 25 December 2017 / Revised: 15 January 2018 / Accepted: 16 January 2018 / Published: 22 January 2018
PDF Full-text (654 KB) | HTML Full-text | XML Full-text
Abstract
Internal and mesoscopic variables differ fundamentally from each other: both are state space variables, but mesoscopic variables are additionally equipped with a distribution function introducing a statistical item into consideration which is missing in connection with internal variables. Thus, the alignment tensor of
[...] Read more.
Internal and mesoscopic variables differ fundamentally from each other: both are state space variables, but mesoscopic variables are additionally equipped with a distribution function, which introduces a statistical element that is missing for internal variables. Thus, the alignment tensor of liquid crystal theory can be introduced as an internal variable or as one generated by a mesoscopic background, using the microscopic director as the mesoscopic variable. Because the mesoscopic variable is part of the state space, the corresponding balance equations become mesoscopic balances, and an evolution equation for the mesoscopic distribution function additionally appears. The flexibility of the mesoscopic concept is demonstrated not only for liquid crystals, but also for dipolar media and flexible fibers. Full article
(This article belongs to the Special Issue Phenomenological Thermodynamics of Irreversible Processes)
Review

Jump to: Editorial, Research

Open AccessFeature PaperReview Information Theoretic Approaches for Motor-Imagery BCI Systems: Review and Experimental Comparison
Entropy 2018, 20(1), 7; doi:10.3390/e20010007
Received: 31 October 2017 / Revised: 10 December 2017 / Accepted: 19 December 2017 / Published: 2 January 2018
PDF Full-text (1005 KB) | HTML Full-text | XML Full-text
Abstract
Brain computer interfaces (BCIs) have been attracting a great interest in recent years. The common spatial patterns (CSP) technique is a well-established approach to the spatial filtering of the electroencephalogram (EEG) data in BCI applications. Even though CSP was originally proposed from a
[...] Read more.
Brain computer interfaces (BCIs) have been attracting great interest in recent years. The common spatial patterns (CSP) technique is a well-established approach to the spatial filtering of electroencephalogram (EEG) data in BCI applications. Even though CSP was originally proposed from a heuristic viewpoint, it can also be built on very strong foundations using information theory. This paper reviews the relationship between CSP and several information-theoretic approaches, including the Kullback–Leibler divergence, the Beta divergence and the Alpha-Beta log-det (AB-LD) divergence. We also review other approaches based on the idea of selecting the features that are maximally informative about the class labels. The performance of all the methods is compared via experiments. Full article
(This article belongs to the Special Issue Information Theory Applied to Physiological Signals)
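For reference, classical CSP reduces to a generalized eigenvalue problem on the two class covariance matrices, as sketched below; the information-theoretic variants reviewed in the paper replace this variance-ratio criterion with divergence-based ones.

```python
import numpy as np
from scipy.linalg import eigh

def csp_filters(cov_a, cov_b, n_pairs=3):
    """Common spatial patterns via the generalized eigenvalue problem
    cov_a w = lambda (cov_a + cov_b) w; the extreme eigenvalues give the
    most discriminative spatial filters."""
    evals, evecs = eigh(cov_a, cov_a + cov_b)
    order = np.argsort(evals)
    picks = np.concatenate([order[:n_pairs], order[-n_pairs:]])
    return evecs[:, picks]          # columns are spatial filters
```

Filters from both ends of the spectrum are kept, since they maximise variance for one class while minimising it for the other.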
Open AccessReview Constructal Optimizations for Heat and Mass Transfers Based on the Entransy Dissipation Extremum Principle, Performed at the Naval University of Engineering: A Review
Entropy 2018, 20(1), 74; doi:10.3390/e20010074
Received: 18 December 2017 / Revised: 13 January 2018 / Accepted: 16 January 2018 / Published: 19 January 2018
PDF Full-text (6335 KB) | HTML Full-text | XML Full-text
Abstract
Combining entransy theory with constructal theory, this mini-review paper summarizes the constructal optimization work of heat conduction, convective heat transfer, and mass transfer problems during the authors’ working time in the Naval University of Engineering. The entransy dissipation extremum principle (EDEP) is applied
[...] Read more.
Combining entransy theory with constructal theory, this mini-review summarizes the constructal optimization work on heat conduction, convective heat transfer, and mass transfer problems carried out during the authors' time at the Naval University of Engineering. The entransy dissipation extremum principle (EDEP) is applied in the constructal optimizations, and the paper is divided into three parts. The first part covers constructal entransy dissipation rate minimizations of heat conduction and finned cooling problems: constructal optimization for a “volume-to-point” heat-conduction assembly with a tapered element; constructal optimizations for “disc-to-point” heat-conduction assemblies with and without the premise of an optimized last-order construct; and constructal optimizations for four kinds of fin assemblies: T-, Y-, umbrella- and tree-shaped fins. The second part covers constructal entransy dissipation rate minimizations of cooling channel and steam generator problems: constructal optimizations for heat-generating volumes with tree-shaped and parallel channels, constructal optimization for a heat-generating volume cooled by forced convection, and constructal optimization for a steam generator. The third part covers constructal entransy dissipation rate minimizations of mass transfer problems: constructal optimizations for “volume-to-point” rectangular assemblies with constant and tapered channels, and constructal optimizations for “disc-to-point” assemblies with and without the premise of an optimized last-order construct. The results of the three parts show that the mean heat transfer temperature differences of the heat conduction assemblies do not always decrease as their internal complexity increases. The average heat transfer rate of the steam generator obtained by entransy dissipation rate maximization is increased by 58.7% compared with that obtained by heat transfer rate maximization. Compared with the rectangular mass transfer assembly with a constant high-permeability pathway (HPP), the maximum pressure drops of the element and the first-order assembly with tapered HPPs are decreased by 6% and 11%, respectively. The global transfer performances of the transfer bodies are improved after optimization, and new design guidelines derived from EDEP, which differ from conventional optimization objectives, are provided. Full article
(This article belongs to the Section Thermodynamics)
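For readers new to the concept, entransy and the dissipation rate that the extremum principle works with are commonly defined as below, for a body of heat capacity Mc_v at uniform temperature T and a conducting volume V with thermal conductivity k. This is the standard textbook definition rather than anything specific to the reviewed work.

```latex
G \;=\; \tfrac{1}{2}\, M c_{v}\, T^{2},
\qquad
\dot{G}_{\mathrm{diss}} \;=\; \int_{V} k\,\lvert \nabla T \rvert^{2}\,\mathrm{d}V .
```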