Table of Contents

Entropy, Volume 20, Issue 2 (February 2018)

  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive the table of contents of newly released issues.
  • PDF is the official format for papers, which are published in both HTML and PDF forms. To view a paper in PDF format, click the "PDF Full-text" link and open it with the free Adobe Reader.
Cover Story: Darwinian evolution is grounded in a dynamical selection process that involves diverse classes of [...]
Displaying articles 1-69
Open Access Feature Paper Article
The Volume of Two-Qubit States by Information Geometry
Entropy 2018, 20(2), 146; https://doi.org/10.3390/e20020146
Received: 22 December 2017 / Revised: 20 February 2018 / Accepted: 22 February 2018 / Published: 24 February 2018
PDF Full-text (245 KB) | HTML Full-text | XML Full-text
Abstract
Using the information geometry approach, we determine the volume of the set of two-qubit states with maximally disordered subsystems. Particular attention is devoted to the behavior of the volume of sub-manifolds of separable and entangled states with fixed purity. We show that the use of the classical Fisher metric on the phase-space probability representation of quantum states gives the same qualitative results as different versions of the quantum Fisher metric. Full article
(This article belongs to the Special Issue News Trends in Statistical Physics of Complex Systems)

Open Access Article
Information Thermodynamics Derives the Entropy Current of Cell Signal Transduction as a Model of a Binary Coding System
Entropy 2018, 20(2), 145; https://doi.org/10.3390/e20020145
Received: 12 January 2018 / Revised: 7 February 2018 / Accepted: 14 February 2018 / Published: 24 February 2018
Cited by 3 | PDF Full-text (556 KB) | HTML Full-text | XML Full-text
Abstract
The analysis of cellular signaling cascades based on information thermodynamics has recently developed considerably. A signaling cascade may be considered a binary code system consisting of two types of signaling molecules that carry biological information: phosphorylated (active) and non-phosphorylated (inactive) forms. This study aims to evaluate the signal transduction step in cascades from the viewpoint of changes in mixing entropy. An increase in active forms may induce biological signal transduction through a mixing entropy change, which induces a chemical potential current in the signaling cascade. We applied the fluctuation theorem to calculate the chemical potential current and found that the average entropy production current is independent of the step in the whole cascade. As a result, the entropy current carrying signal transduction is defined by the entropy current mobility. Full article
(This article belongs to the Special Issue Entropy in Signal Analysis)
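The mixing-entropy bookkeeping behind this abstract can be sketched numerically. The two-state ideal-mixture model and the 10% → 40% activation change below are illustrative assumptions, not figures from the paper:

```python
import numpy as np

def mixing_entropy(p):
    """Shannon mixing entropy (in units of k_B per molecule) of a two-state
    mixture with active fraction p and inactive fraction 1 - p."""
    p = np.clip(p, 1e-12, 1 - 1e-12)  # avoid log(0) at the endpoints
    return -(p * np.log(p) + (1 - p) * np.log(1 - p))

# Entropy change when the active fraction rises from 10% to 40%
dS = mixing_entropy(0.4) - mixing_entropy(0.1)
print(f"ΔS = {dS:.4f} k_B per molecule")
```

The entropy is maximal at a 50/50 mixture, so any activation step that moves the population toward equal fractions produces a positive mixing-entropy change.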

Open Access Article
Group Sparse Precoding for Cloud-RAN with Multiple User Antennas
Entropy 2018, 20(2), 144; https://doi.org/10.3390/e20020144
Received: 6 November 2017 / Revised: 2 February 2018 / Accepted: 19 February 2018 / Published: 23 February 2018
PDF Full-text (950 KB) | HTML Full-text | XML Full-text
Abstract
Cloud radio access network (C-RAN) has become a promising network architecture to support the massive data traffic in next-generation cellular networks. In a C-RAN, a massive number of low-cost remote antenna ports (RAPs) are connected to a single baseband unit (BBU) pool via high-speed low-latency fronthaul links, which enables efficient resource allocation and interference management. As the RAPs are geographically distributed, group sparse beamforming schemes have attracted extensive study, in which a subset of RAPs is assigned to be active and a high spectral efficiency can be achieved. However, most studies assume that each user is equipped with a single antenna. How to design the group sparse precoder for multi-antenna users remains poorly understood, as it requires the joint optimization of the mutually coupled transmit and receive beamformers. This paper formulates an optimal joint RAP selection and precoding design problem in a C-RAN with multiple antennas at each user. Specifically, we assume a fixed transmit power constraint for each RAP, and investigate the optimal tradeoff between the sum rate and the number of active RAPs. Motivated by compressive sensing theory, this paper formulates the group sparse precoding problem by introducing the ℓ0-norm as a penalty and then uses the reweighted ℓ1 heuristic to find a solution. By adopting the idea of block diagonalization precoding, the problem can be formulated as a convex optimization, and an efficient algorithm is proposed based on its Lagrangian dual. Simulation results verify that our proposed algorithm can achieve almost the same sum rate as that obtained from an exhaustive search. Full article
(This article belongs to the Section Information Theory)
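The reweighted ℓ1 heuristic at the core of this design can be illustrated on a plain sparse least-squares problem rather than the actual beamforming formulation; the problem dimensions, penalty weight, and weight-update rule below are illustrative assumptions:

```python
import numpy as np

def weighted_ista(A, y, weights, lam=0.1, iters=1000):
    """Solve min_x 0.5*||Ax - y||^2 + lam * sum_i w_i*|x_i| by proximal gradient."""
    step = 1.0 / np.linalg.norm(A, 2) ** 2  # 1 / Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        z = x - step * (A.T @ (A @ x - y))              # gradient step
        thr = step * lam * weights
        x = np.sign(z) * np.maximum(np.abs(z) - thr, 0.0)  # weighted soft-threshold
    return x

def reweighted_l1(A, y, lam=0.1, rounds=4, eps=1e-3):
    """Reweighted l1: weights w_i = 1/(|x_i| + eps) mimic the l0 penalty."""
    w = np.ones(A.shape[1])
    for _ in range(rounds):
        x = weighted_ista(A, y, w, lam)
        w = 1.0 / (np.abs(x) + eps)
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((40, 80))
x_true = np.zeros(80)
x_true[[3, 17, 60]] = [2.0, -1.5, 1.0]
y = A @ x_true                       # noiseless sparse-recovery instance
x_hat = reweighted_l1(A, y)
print("recovered support:", np.flatnonzero(np.abs(x_hat) > 0.5))
```

Each round re-penalizes coordinates in inverse proportion to their current magnitude, which drives small coefficients to exactly zero — the same sparsifying mechanism the paper applies at group (per-RAP) level.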

Open Access Article
Stochastic Dynamics of a Time-Delayed Ecosystem Driven by Poisson White Noise Excitation
Entropy 2018, 20(2), 143; https://doi.org/10.3390/e20020143
Received: 16 December 2017 / Revised: 5 February 2018 / Accepted: 12 February 2018 / Published: 23 February 2018
PDF Full-text (2857 KB) | HTML Full-text | XML Full-text
Abstract
We investigate the stochastic dynamics of a prey-predator type ecosystem with time delay and discrete random environmental fluctuations. In this model, the delay effect is represented by a time delay parameter and the effect of environmental randomness is modeled as Poisson white noise. The stochastic averaging method and the perturbation method are applied to calculate the approximate stationary probability density functions for both predator and prey populations. The influences of the system parameters and the Poisson white noise are investigated in detail based on the approximate stationary probability density functions. It is found that increasing the time delay parameter, as well as the mean arrival rate and the amplitude variance of the Poisson white noise, enhances the fluctuations of the prey and predator populations, while a larger value of the self-competition parameter reduces the fluctuation of the system. Furthermore, results from Monte Carlo simulation are presented to show the effectiveness of the averaging method. Full article

Open Access Article
A Simple and Adaptive Dispersion Regression Model for Count Data
Entropy 2018, 20(2), 142; https://doi.org/10.3390/e20020142
Received: 19 January 2018 / Revised: 14 February 2018 / Accepted: 16 February 2018 / Published: 22 February 2018
PDF Full-text (409 KB) | HTML Full-text | XML Full-text
Abstract
Regression for count data is widely performed by models such as Poisson, negative binomial (NB) and zero-inflated regression. A challenge often faced by practitioners is the selection of the right model to take into account dispersion, which typically occurs in count datasets. It is highly desirable to have a unified model that can automatically adapt to the underlying dispersion and that can be easily implemented in practice. In this paper, a discrete Weibull regression model is shown to be able to adapt in a simple way to different types of dispersions relative to Poisson regression: overdispersion, underdispersion and covariate-specific dispersion. Maximum likelihood can be used for efficient parameter estimation. The description of the model, parameter inference and model diagnostics is accompanied by simulated and real data analyses. Full article
(This article belongs to the Special Issue Power Law Behaviour in Complex Systems)
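A minimal sketch of the discrete Weibull distribution and a maximum-likelihood fit, in the intercept-only case without covariates; the sample size, true parameters, and optimizer settings are illustrative assumptions, not the paper's setup:

```python
import numpy as np
from scipy.optimize import minimize

def dweibull_pmf(x, q, beta):
    """Discrete Weibull pmf: P(X = x) = q^(x^beta) - q^((x+1)^beta), x = 0, 1, 2, ..."""
    x = np.asarray(x, dtype=float)
    return q ** (x ** beta) - q ** ((x + 1) ** beta)

def dweibull_sample(q, beta, n, seed=0):
    """Inverse-CDF sampling from F(x) = 1 - q^((x+1)^beta)."""
    u = np.random.default_rng(seed).uniform(size=n)
    t = (np.log(1 - u) / np.log(q)) ** (1.0 / beta)
    return np.maximum(np.ceil(t) - 1, 0).astype(int)

def dweibull_fit(data):
    """Maximize the log-likelihood over (q, beta)."""
    nll = lambda p: -np.sum(np.log(dweibull_pmf(data, p[0], p[1]) + 1e-300))
    res = minimize(nll, x0=[0.5, 1.0], method="L-BFGS-B",
                   bounds=[(1e-3, 1 - 1e-3), (1e-2, 10.0)])
    return res.x

data = dweibull_sample(q=0.8, beta=1.2, n=2000)
q_hat, beta_hat = dweibull_fit(data)
print(f"q_hat = {q_hat:.3f}, beta_hat = {beta_hat:.3f}")
```

The shape parameter beta is what lets the model adapt: beta = 1 recovers the geometric distribution (Poisson-like equidispersion territory), while beta below or above 1 accommodates over- and underdispersion. The regression version of the paper links q (or beta) to covariates.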

Open Access Article
Finding a Hadamard Matrix by Simulated Quantum Annealing
Entropy 2018, 20(2), 141; https://doi.org/10.3390/e20020141
Received: 2 January 2018 / Revised: 6 February 2018 / Accepted: 16 February 2018 / Published: 22 February 2018
PDF Full-text (773 KB) | HTML Full-text | XML Full-text
Abstract
Hard problems have recently become an important issue in computing. Various methods, including a heuristic approach that is inspired by physical phenomena, are being explored. In this paper, we propose the use of simulated quantum annealing (SQA) to find a Hadamard matrix, which is itself a hard problem. We reformulate the problem as an energy minimization of spin vectors connected by a complete graph. The computation is conducted based on a path-integral Monte-Carlo (PIMC) SQA of the spin vector system, with an applied transverse magnetic field whose strength is decreased over time. In the numerical experiments, the proposed method is employed to find low-order Hadamard matrices, including the ones that cannot be constructed trivially by the Sylvester method. The scaling property of the method and the measurement of residual energy after a sufficiently large number of iterations show that SQA outperforms simulated annealing (SA) in solving this hard problem. Full article
(This article belongs to the Special Issue Quantum Information and Foundations)
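The energy-minimization formulation can be sketched with classical simulated annealing rather than the paper's path-integral SQA; the order, cooling schedule, and step count below are illustrative assumptions, and a single run may or may not reach zero energy:

```python
import numpy as np

def energy(H):
    """Off-diagonal Gram energy: zero iff the rows of the +-1 matrix H are
    mutually orthogonal, i.e. H is a Hadamard matrix."""
    n = H.shape[0]
    G = H @ H.T
    return np.sum((G - n * np.eye(n)) ** 2) / 2

def anneal(n=4, steps=20000, T0=4.0, seed=1):
    """Metropolis single-spin-flip annealing with a linear cooling schedule."""
    rng = np.random.default_rng(seed)
    H = rng.choice([-1, 1], size=(n, n))
    E = energy(H)
    for k in range(steps):
        T = T0 * (1 - k / steps) + 1e-3
        i, j = rng.integers(n), rng.integers(n)
        H[i, j] *= -1                       # propose flipping one entry
        E_new = energy(H)
        if E_new <= E or rng.random() < np.exp((E - E_new) / T):
            E = E_new                       # accept the move
        else:
            H[i, j] *= -1                   # reject: undo the flip
    return H, E

H, E = anneal()
print("final energy:", E)
```

The paper's SQA replaces the thermal Metropolis acceptance with a path-integral Monte-Carlo simulation of coupled replicas under a decaying transverse field; the cost function is the same.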

Open Access Article
A Chemo-Mechanical Model of Diffusion in Reactive Systems
Entropy 2018, 20(2), 140; https://doi.org/10.3390/e20020140
Received: 24 January 2018 / Revised: 15 February 2018 / Accepted: 16 February 2018 / Published: 22 February 2018
PDF Full-text (31650 KB) | HTML Full-text | XML Full-text
Abstract
The functional properties of multi-component materials are often determined by a rearrangement of their different phases and by chemical reactions of their components. In this contribution, a material model is presented which enables computational simulations and structural optimization of solid multi-component systems. Typical systems of this kind are anodes in batteries, reactive polymer blends and propellants. The physical processes which are assumed to contribute to the microstructural evolution are: (i) particle exchange and mechanical deformation; (ii) spinodal decomposition and phase coarsening; (iii) chemical reactions between the components; and (iv) energetic forces associated with the elastic field of the solid. To illustrate the capability of the deduced coupled field model, three-dimensional Non-Uniform Rational Basis Spline (NURBS) based finite element simulations of such multi-component structures are presented. Full article
(This article belongs to the Special Issue Phenomenological Thermodynamics of Irreversible Processes)

Open Access Article
Lagrangian Function on the Finite State Space Statistical Bundle
Entropy 2018, 20(2), 139; https://doi.org/10.3390/e20020139
Received: 26 December 2017 / Revised: 21 January 2018 / Accepted: 24 January 2018 / Published: 22 February 2018
PDF Full-text (245 KB) | HTML Full-text | XML Full-text
Abstract
The statistical bundle is the set of couples (Q, W) of a probability density Q and a random variable W such that E_Q[W] = 0. Full article
(This article belongs to the Special Issue Theoretical Aspect of Nonlinear Statistical Physics)
Open Access Article
Coarse-Graining Approaches in Univariate Multiscale Sample and Dispersion Entropy
Entropy 2018, 20(2), 138; https://doi.org/10.3390/e20020138
Received: 1 December 2017 / Revised: 15 February 2018 / Accepted: 16 February 2018 / Published: 22 February 2018
PDF Full-text (3483 KB) | HTML Full-text | XML Full-text
Abstract
The evaluation of complexity in univariate signals has attracted considerable attention in recent years. This is often done using the framework of Multiscale Entropy, which entails two basic steps: coarse-graining to consider multiple temporal scales, and evaluation of irregularity for each of those scales with entropy estimators. Recent developments in the field have proposed modifications to this approach to facilitate the analysis of short time series. However, the role of downsampling in the classical coarse-graining process and its relationship with alternative filtering techniques have not yet been systematically explored. Here, we assess the impact of coarse-graining in multiscale entropy estimations based on both Sample Entropy and Dispersion Entropy. We compare the classical moving-average approach with low-pass Butterworth filtering, both with and without downsampling, and with empirical mode decomposition in Intrinsic Multiscale Entropy, on selected synthetic data and two real physiological datasets. The results show that when the sampling frequency is low or high, downsampling respectively decreases or increases the entropy values. Our results suggest that, when dealing with long signals and relatively low levels of noise, the refined composite method makes little difference in the quality of the entropy estimation, at the expense of considerable additional computational cost. It is also found that downsampling within the coarse-graining procedure may not be required to quantify the complexity of signals, especially short ones. Overall, we expect these results to contribute to the ongoing discussion about the development of stable, fast and robust-to-noise multiscale entropy techniques suited for either short or long recordings. Full article
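The two steps of the classical Multiscale Entropy pipeline discussed here — coarse-graining, then an entropy estimate per scale — can be sketched as follows. This uses a simplified Sample Entropy estimator with the conventional defaults m = 2 and r = 0.2·SD, as an illustration rather than the paper's exact implementations:

```python
import numpy as np

def coarse_grain(x, scale):
    """Classical coarse-graining: non-overlapping means of length `scale`
    (a moving average followed by downsampling by the same factor)."""
    n = len(x) // scale
    return x[: n * scale].reshape(n, scale).mean(axis=1)

def sample_entropy(x, m=2, r=None):
    """Simplified Sample Entropy: -log of the ratio of (m+1)-point to
    m-point template matches under the Chebyshev distance."""
    x = np.asarray(x, dtype=float)
    if r is None:
        r = 0.2 * x.std()
    def matches(mm):
        t = np.array([x[i:i + mm] for i in range(len(x) - mm + 1)])
        d = np.max(np.abs(t[:, None] - t[None, :]), axis=2)
        return (np.sum(d <= r) - len(t)) // 2   # pairs, self-matches excluded
    B, A = matches(m), matches(m + 1)
    return -np.log(A / B) if A > 0 and B > 0 else np.inf

signal = np.random.default_rng(0).standard_normal(600)
mse = [sample_entropy(coarse_grain(signal, s)) for s in (1, 2, 3, 4)]
print("multiscale entropy curve:", np.round(mse, 3))
```

Dropping the `.mean` downsampling step and substituting a Butterworth filter for the block average gives exactly the alternatives the paper compares.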

Open Access Article
Engine Load Effects on the Energy and Exergy Performance of a Medium Cycle/Organic Rankine Cycle for Exhaust Waste Heat Recovery
Entropy 2018, 20(2), 137; https://doi.org/10.3390/e20020137
Received: 10 December 2017 / Revised: 3 February 2018 / Accepted: 12 February 2018 / Published: 21 February 2018
PDF Full-text (4772 KB) | HTML Full-text | XML Full-text
Abstract
The Organic Rankine Cycle (ORC) has proved to be a promising technique for exploiting waste heat from Internal Combustion Engines (ICEs). Waste heat recovery systems have usually been designed based on engine rated working conditions, while engines often operate under part-load conditions. Hence, it is quite important to analyze the off-design performance of ORC systems under different engine loads. This paper presents an off-design Medium Cycle/Organic Rankine Cycle (MC/ORC) system model built by interconnecting the component models, which allows the prediction of system off-design behavior. The sliding pressure control method is applied to balance the variation of system parameters, and evaporating pressure is chosen as the operational variable. The effects of the operational variable and engine load on system performance are analyzed from the aspects of energy and exergy. The results show that with the drop of engine load, the MC/ORC system can always effectively recover waste heat, whereas the maximum net power output, thermal efficiency and exergy efficiency decrease linearly. Considering the contributions of components to total exergy destruction, the proportions of the gas-oil exchanger and turbine increase, while the proportions of the evaporator and condenser decrease with the drop of engine load. Full article
(This article belongs to the Special Issue Work Availability and Exergy Analysis)

Open Access Article
Robustification of a One-Dimensional Generic Sigmoidal Chaotic Map with Application of True Random Bit Generation
Entropy 2018, 20(2), 136; https://doi.org/10.3390/e20020136
Received: 23 December 2017 / Revised: 7 February 2018 / Accepted: 16 February 2018 / Published: 20 February 2018
PDF Full-text (6699 KB) | HTML Full-text | XML Full-text
Abstract
The search for approaches to generating robust chaos has received considerable attention due to potential applications in cryptography and secure communications. This paper investigates a 1-D sigmoidal chaotic map that has not previously been examined in detail. It introduces a generic form of the sigmoidal chaotic map with three terms, i.e., x_{n+1} = ∓A·f_NL(B·x_n) ± C·x_n ± D, where A, B, C, and D are real constants. The unification of modified sigmoid and hyperbolic tangent (tanh) functions reveals the existence of a "unified sigmoidal chaotic map" generically fulfilling the three terms, with robust chaos partially appearing in some parameter ranges. A simplified generic form, i.e., x_{n+1} = ∓f_NL(B·x_n) ± C·x_n, realized through various S-shaped functions, has recently led to the possibility of linearization using (i) hardtanh and (ii) signum functions. This study finds a linearized sigmoidal chaotic map that potentially offers robust chaos over an entire range of parameters. Chaos dynamics are described in terms of chaotic waveforms, histograms, cobweb plots, fixed points, Jacobians, and a bifurcation structure diagram based on Lyapunov exponents. As a practical example, a true random bit generator using the linearized sigmoidal chaotic map is demonstrated. The resulting output is evaluated using the NIST SP800-22 test suite and TestU01. Full article
(This article belongs to the Special Issue Theoretical Aspect of Nonlinear Statistical Physics)
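A sketch of the simplified generic form x_{n+1} = C·x_n − A·tanh(B·x_n) driving a threshold bit generator. The parameter values below are illustrative assumptions, not the ones identified in the paper, and no NIST-grade randomness is implied by this toy:

```python
import numpy as np

def sigmoidal_map(x, A=2.0, B=5.0, C=1.5):
    """One iterate of the simplified sigmoidal form with a tanh nonlinearity
    (hypothetical parameters chosen so the orbit stays bounded)."""
    return C * x - A * np.tanh(B * x)

def generate_bits(n, x0=0.1, warmup=100):
    """Iterate the map and threshold the state at zero to emit raw bits."""
    x = x0
    for _ in range(warmup):          # discard the transient
        x = sigmoidal_map(x)
    bits = []
    for _ in range(n):
        x = sigmoidal_map(x)
        bits.append(1 if x > 0 else 0)
    return bits

bits = generate_bits(1000)
print("ones fraction:", sum(bits) / len(bits))
```

In a real generator the raw bit stream would still need the statistical validation the paper performs (NIST SP800-22, TestU01) before any cryptographic use.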

Open Access Article
Complexity of Simple, Switched and Skipped Chaotic Maps in Finite Precision
Entropy 2018, 20(2), 135; https://doi.org/10.3390/e20020135
Received: 29 December 2017 / Revised: 15 February 2018 / Accepted: 16 February 2018 / Published: 20 February 2018
PDF Full-text (5247 KB) | HTML Full-text | XML Full-text
Abstract
In this paper we investigate the degradation of the statistical properties of chaotic maps as a consequence of their implementation in digital media such as Digital Signal Processors (DSP), Field Programmable Gate Arrays (FPGA) or Application-Specific Integrated Circuits (ASIC). In these systems, binary floating- and fixed-point are the available numerical representations. Fixed-point representation is preferred over floating-point when speed, low power and/or small circuit area are necessary. We therefore compare the degradation of fixed-point binary precision versions of chaotic maps with that obtained by using the floating-point IEEE 754 standard, to evaluate the feasibility of their FPGA implementation. The specific period that every fixed-point precision produces was investigated in previous reports. Statistical characteristics are also relevant: it has recently been shown that it is convenient to describe them using both causal and non-causal quantifiers. In this paper we complement the period analysis by characterizing the behavior of these maps from a statistical point of view using quantifiers from information theory. Here, rather than reproducing an exact replica of the real system, the aim is to meet certain conditions related to the statistics of the systems. Full article
(This article belongs to the Special Issue Permutation Entropy & Its Interdisciplinary Applications)

Open Access Article
Investigating the Configurations in Cross-Shareholding: A Joint Copula-Entropy Approach
Entropy 2018, 20(2), 134; https://doi.org/10.3390/e20020134
Received: 24 December 2017 / Revised: 16 February 2018 / Accepted: 17 February 2018 / Published: 20 February 2018
PDF Full-text (877 KB) | HTML Full-text | XML Full-text
Abstract
The complex nature of the interlacement of economic actors is quite evident at the level of the stock market, where any company may interact with the other companies by buying and selling their shares. In this respect, the companies populating a stock market, along with their connections, can be effectively modeled through a directed network, where the nodes represent the companies and the links indicate ownership. This paper deals with this theme and discusses the concentration of a market. A cross-shareholding matrix is considered, along with two key factors: the node out-degree distribution, which represents the diversification of investments in terms of the number of involved companies, and the node in-degree distribution, which reports the integration of a company due to the sales of its own shares to other companies. While diversification is widely explored in the literature, integration is mostly present in the literature on contagion. This paper captures such quantities of interest in the two frameworks and studies the stochastic dependence of diversification and integration through a copula approach. We adopt entropies as measures for assessing the concentration of the market. The main question is to assess the dependence structure leading to a better description of the data, to market polarization (minimal entropy) or to market fairness (maximal entropy). In so doing, we derive information on the way in which the in- and out-degrees should be connected in order to shape the market. The question is of interest to regulatory bodies, as witnessed by the specific alert thresholds published in the US merger guidelines for limiting the possibility of acquisitions and the prevalence of a single company on the market. Indeed, individual countries and the EU also have rules or guidelines to limit concentrations, within a country or across borders, respectively.
The calibration of copulas and model parameters on the basis of real data serves as an illustrative application of the theoretical proposal. Full article
(This article belongs to the Special Issue Statistical Mechanics of Complex and Disordered Systems)
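The link between copula entropy and dependence invoked in this abstract can be made concrete for a Gaussian copula, where the entropy has a closed form; this is a generic illustration, not the copula family actually calibrated in the paper:

```python
import numpy as np
from scipy.stats import norm

def gaussian_copula_entropy(rho):
    """Differential entropy (nats) of a bivariate Gaussian copula with
    correlation rho. It equals minus the mutual information, so it is 0
    at independence and strictly negative otherwise."""
    return 0.5 * np.log(1.0 - rho ** 2)

def sample_gaussian_copula(rho, n, seed=0):
    """Draw n pairs (u, v) on [0, 1]^2 with Gaussian-copula dependence."""
    rng = np.random.default_rng(seed)
    z = rng.multivariate_normal([0.0, 0.0], [[1.0, rho], [rho, 1.0]], size=n)
    return norm.cdf(z)

print("entropy at rho=0.0:", gaussian_copula_entropy(0.0))
print("entropy at rho=0.8:", gaussian_copula_entropy(0.8))
```

Maximal copula entropy (zero) corresponds to independent in- and out-degrees — the "market fairness" pole — while strong dependence drives the entropy down toward the polarization pole.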

Open Access Article
Applying Time-Dependent Attributes to Represent Demand in Road Mass Transit Systems
Entropy 2018, 20(2), 133; https://doi.org/10.3390/e20020133
Received: 24 January 2018 / Revised: 15 February 2018 / Accepted: 16 February 2018 / Published: 20 February 2018
PDF Full-text (3944 KB) | HTML Full-text | XML Full-text
Abstract
The development of efficient mass transit systems that provide quality of service is a major challenge for modern societies. To meet this challenge, it is essential to understand user demand. This article proposes using new time-dependent attributes to represent demand, attributes that differ from those that have traditionally been used in the design and planning of this type of transit system. Data mining was used to obtain these new attributes; they were created using clustering techniques, and their quality evaluated with the Shannon entropy function and with neural networks. The methodology was implemented on an intercity public transport company and the results demonstrate that the attributes obtained offer a more precise understanding of demand and enable predictions to be made with acceptable precision. Full article
(This article belongs to the Special Issue Entropy-based Data Mining)
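The entropy-based quality check described above reduces to scoring a discrete attribute by the Shannon entropy of its value distribution; the hourly demand labels below are made up purely for illustration:

```python
import numpy as np
from collections import Counter

def shannon_entropy(labels):
    """Shannon entropy (bits) of a discrete attribute, e.g. cluster labels."""
    counts = np.array(list(Counter(labels).values()), dtype=float)
    p = counts / counts.sum()
    return float(-(p * np.log2(p)).sum())

# A time-dependent demand attribute: one cluster label per hour slot (hypothetical)
demand_labels = ["low", "low", "peak", "peak", "mid", "low", "peak", "mid"]
print(f"attribute entropy: {shannon_entropy(demand_labels):.3f} bits")
```

A degenerate attribute that assigns every slot to one cluster scores 0 bits and carries no information about demand; higher entropy means the attribute actually discriminates between demand regimes.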

Open Access Article
Uncertainty Relation Based on Wigner–Yanase–Dyson Skew Information with Quantum Memory
Entropy 2018, 20(2), 132; https://doi.org/10.3390/e20020132
Received: 2 January 2018 / Revised: 11 February 2018 / Accepted: 15 February 2018 / Published: 20 February 2018
Cited by 1 | PDF Full-text (426 KB) | HTML Full-text | XML Full-text
Abstract
We present uncertainty relations based on Wigner–Yanase–Dyson skew information with quantum memory. Uncertainty inequalities both in product and summation forms are derived. It is shown that the lower bounds contain two terms: one characterizes the degree of compatibility of two measurements, and the other is the quantum correlation between the measured system and the quantum memory. Detailed examples are given for product, separable and entangled states. Full article
(This article belongs to the Special Issue Quantum Foundations: 90 Years of Uncertainty)
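The quantity underlying these relations, the Wigner–Yanase–Dyson skew information I_α(ρ, A) = −(1/2) Tr([ρ^α, A][ρ^{1−α}, A]), can be evaluated directly; for a pure state it reduces to the ordinary variance of A (the qubit example below is a generic illustration, not one of the paper's examples):

```python
import numpy as np

def mat_power(rho, a):
    """rho^a via eigendecomposition of a Hermitian density matrix."""
    w, V = np.linalg.eigh(rho)
    w = np.clip(w, 0.0, None)          # guard against tiny negative eigenvalues
    return (V * w ** a) @ V.conj().T

def wyd_skew_information(rho, A, alpha=0.5):
    """Wigner-Yanase-Dyson skew information
    I_alpha(rho, A) = -1/2 Tr([rho^alpha, A][rho^(1-alpha), A])."""
    ra, rb = mat_power(rho, alpha), mat_power(rho, 1.0 - alpha)
    c1 = ra @ A - A @ ra
    c2 = rb @ A - A @ rb
    return float(np.real(-0.5 * np.trace(c1 @ c2)))

sx = np.array([[0.0, 1.0], [1.0, 0.0]])       # Pauli X observable
rho = np.array([[1.0, 0.0], [0.0, 0.0]])      # pure state |0><0|
print("skew information:", wyd_skew_information(rho, sx))
```

For alpha = 1/2 this is the original Wigner–Yanase skew information; it vanishes whenever ρ and A commute, which is why it serves as a measure of the "quantum" part of the uncertainty in the paper's lower bounds.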
