Table of Contents

Entropy, Volume 19, Issue 7 (July 2017)

  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive table of contents of newly released issues.
  • PDF is the official format for papers published in both HTML and PDF forms. To view the papers in PDF format, click on the "PDF Full-text" link, and use the free Adobe Reader to open them.
Cover Story: What happens when information is processed by applying a function f to a variable S? The data [...]
Displaying articles 1-87

Research

Jump to: Review, Other

Open Access Article On Unsteady Three-Dimensional Axisymmetric MHD Nanofluid Flow with Entropy Generation and Thermo-Diffusion Effects on a Non-Linear Stretching Sheet
Entropy 2017, 19(7), 168; doi:10.3390/e19070168
Received: 22 February 2017 / Revised: 8 April 2017 / Accepted: 12 April 2017 / Published: 12 July 2017
PDF Full-text (508 KB) | HTML Full-text | XML Full-text
Abstract
The entropy generation in unsteady three-dimensional axisymmetric magnetohydrodynamics (MHD) nanofluid flow over a non-linearly stretching sheet is investigated. The flow is subject to thermal radiation and a chemical reaction. The conservation equations are solved using the spectral quasi-linearization method. The novelty of the work is in the study of entropy generation in three-dimensional axisymmetric MHD nanofluid flow and in the choice of the spectral quasi-linearization method as the solution method. The effects of Brownian motion and thermophoresis are also taken into account. The nanofluid particle volume fraction on the boundary is passively controlled. The results show that as the Hartmann number increases, both the Nusselt number and the Sherwood number decrease, whereas the skin friction increases. It is further shown that an increase in the thermal radiation parameter corresponds to a decrease in the Nusselt number. Moreover, entropy generation increases with respect to some physical parameters. Full article
(This article belongs to the Special Issue Entropy Generation in Nanofluid Flows)

Open Access Article Entropic Measure of Epistemic Uncertainties in Multibody System Models by Axiomatic Design
Entropy 2017, 19(7), 291; doi:10.3390/e19070291
Received: 14 May 2017 / Revised: 12 June 2017 / Accepted: 15 June 2017 / Published: 26 June 2017
Cited by 4 | PDF Full-text (1524 KB) | HTML Full-text | XML Full-text
Abstract
In this paper, the use of the MaxInf Principle in real optimization problems is investigated for engineering applications, where the current design solution is actually an engineering approximation. In industrial manufacturing, multibody system simulations can be used to develop new machines and mechanisms by using virtual prototyping, where an axiomatic design can be employed to analyze the independence of elements and the complexity of connections forming a general mechanical system. In the classic theories of Fisher and Wiener-Shannon, the idea of information is a measure of only probabilistic and repetitive events. However, this idea is broader than the field of probability alone. Thus, the Wiener-Shannon axioms can be extended to non-probabilistic events, and it is possible to introduce a theory of information for non-repetitive events as a measure of the reliability of data for complex mechanical systems. To this end, one can devise engineering solutions consistent with the values of the design constraints by analyzing the complexity of the relation matrix and using the idea of information in the metric space. The final solution gives the entropic measure of epistemic uncertainties, which can be used in multibody system models analyzed with an axiomatic design. Full article
(This article belongs to the Special Issue Wavelets, Fractals and Information Theory III)

Open Access Article Extraction of the Proton and Electron Radii from Characteristic Atomic Lines and Entropy Principles
Entropy 2017, 19(7), 293; doi:10.3390/e19070293
Received: 13 April 2017 / Revised: 8 June 2017 / Accepted: 9 June 2017 / Published: 29 June 2017
PDF Full-text (3883 KB) | HTML Full-text | XML Full-text
Abstract
We determine the proton and electron radii by analyzing constructive resonances at minimum entropy for elements with atomic number Z ≥ 11. We note that those radii can be derived from entropy principles and from published photoelectric cross section data from the National Institute of Standards and Technology (NIST). A resonance region with optimal constructive interference is given by a principal wavelength λ of the order of the Bohr atom radius. Our study shows that the proton radius deviations can be measured. Moreover, in the case of the electron, its radius converges to the classical electron radius, with a value of 2.817 fm. Resonance waves afforded us the possibility to measure the proton and electron radii through an interference term. This term was a necessary condition in order to have an effective cross section maximum at the threshold. The minimum entropy corresponds to minimum proton shape deformation; the proton radius at minimum entropy was found to be (0.830 ± 0.015) fm, and the average proton radius was found to be (0.825 − 0.0341; 0.888 + 0.0405) fm. Full article
(This article belongs to the Special Issue Quantum Mechanics: From Foundations to Information Technologies)

Open Access Feature Paper Article An Exploration Algorithm for Stochastic Simulators Driven by Energy Gradients
Entropy 2017, 19(7), 294; doi:10.3390/e19070294
Received: 28 February 2017 / Revised: 24 April 2017 / Accepted: 8 May 2017 / Published: 22 June 2017
PDF Full-text (7112 KB) | HTML Full-text | XML Full-text
Abstract
In recent work, we have illustrated the construction of an exploration geometry on free energy surfaces: the adaptive computer-assisted discovery of an approximate low-dimensional manifold on which the effective dynamics of the system evolves. Constructing such an exploration geometry involves geometry-biased sampling (through both appropriately-initialized unbiased molecular dynamics and restraining potentials) and machine learning techniques to organize the intrinsic geometry of the data resulting from the sampling (in particular, diffusion maps, possibly enhanced through the appropriate Mahalanobis-type metric). In this contribution, we detail a method for exploring the conformational space of a stochastic gradient system whose effective free energy surface depends on a smaller number of degrees of freedom than the dimension of the phase space. Our approach comprises two steps. First, we study the local geometry of the free energy landscape using diffusion maps on samples computed through stochastic dynamics. This allows us to automatically identify the relevant coarse variables. Next, we use the information garnered in the previous step to construct a new set of initial conditions for subsequent trajectories. These initial conditions are computed so as to explore the accessible conformational space more efficiently than by continuing the previous, unbiased simulations. We showcase this method on a representative test system. Full article
(This article belongs to the Special Issue Understanding Molecular Dynamics via Stochastic Processes)
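
Since the diffusion-map step above is the computational core of the approach, a bare-bones version may help fix ideas. In the following sketch, the kernel bandwidth, the density normalization choice, and the noisy-circle test data are all illustrative assumptions, not the paper's setup:

```python
# Minimal diffusion map: Gaussian kernel over samples, density normalization,
# row-normalization to a Markov matrix, leading nontrivial eigenvectors as
# coarse variables.
import numpy as np

def diffusion_map(X, eps=0.5, n_coords=2):
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)   # pairwise sq. distances
    K = np.exp(-d2 / eps)
    q = K.sum(1)
    K = K / np.outer(q, q)               # density normalization (alpha = 1)
    P = K / K.sum(1, keepdims=True)      # Markov transition matrix
    vals, vecs = np.linalg.eig(P)
    order = np.argsort(-vals.real)
    return vecs.real[:, order[1:n_coords + 1]]   # skip the trivial eigenvector

theta = np.random.default_rng(0).uniform(0, 2 * np.pi, 300)
X = np.c_[np.cos(theta), np.sin(theta)]
X += 0.05 * np.random.default_rng(1).standard_normal((300, 2))
coords = diffusion_map(X)
print(coords.shape)   # (300, 2): coarse variables for a noisy circle
```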

Open Access Article Bodily Processing: The Role of Morphological Computation
Entropy 2017, 19(7), 295; doi:10.3390/e19070295
Received: 13 April 2017 / Revised: 10 June 2017 / Accepted: 19 June 2017 / Published: 22 June 2017
Cited by 1 | PDF Full-text (492 KB) | HTML Full-text | XML Full-text
Abstract
The integration of embodied and computational approaches to cognition requires that non-neural body parts be described as parts of a computing system that realizes cognitive processing. In this paper, based on research about morphological computation and the ecology of vision, I argue that non-neural body parts could be described as parts of a computational system, but they do not realize computation autonomously; they do so only in connection with some kind of—even in the simplest form—central control system. Finally, I integrate the proposal defended in the paper with the contemporary mechanistic approach to wide computation. Full article

Open Access Article Analytical Approximate Solutions of (n + 1)-Dimensional Fractal Heat-Like and Wave-Like Equations
Entropy 2017, 19(7), 296; doi:10.3390/e19070296
Received: 27 May 2017 / Revised: 15 June 2017 / Accepted: 20 June 2017 / Published: 22 June 2017
PDF Full-text (1634 KB) | HTML Full-text | XML Full-text
Abstract
In this paper, we propose a new type of (n + 1)-dimensional reduced differential transform method (RDTM) based on a local fractional derivative (LFD) to solve (n + 1)-dimensional local fractional partial differential equations (PDEs) on Cantor sets. The presented method is named the (n + 1)-dimensional local fractional reduced differential transform method (LFRDTM). First, the underlying theorems, their proofs, and some basic properties of this procedure are given. To illustrate the introduced method clearly, we apply it to the (n + 1)-dimensional fractal heat-like equations (HLEs) and wave-like equations (WLEs). The applications show that this new technique is efficient, simple to apply, and effective for (n + 1)-dimensional local fractional problems. Full article
(This article belongs to the Special Issue Complex Systems and Fractional Dynamics)
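
For orientation, the transform pair underlying an RDTM of this kind can be stated compactly. In the local fractional setting, a standard form (a sketch of the usual textbook definition, not necessarily the paper's exact statement) is

$$ U_k(x) = \frac{1}{\Gamma(1+k\alpha)} \left[ \frac{\mathrm{d}^{k\alpha} u(x,t)}{\mathrm{d}t^{k\alpha}} \right]_{t=t_0}, \qquad u(x,t) = \sum_{k=0}^{\infty} U_k(x)\,(t-t_0)^{k\alpha}, $$

where the derivative is the local fractional derivative of order kα. The PDE is converted into a recurrence for the coefficients U_k, whose partial sums give the approximate solution.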

Open Access Article Two Approaches to Obtaining the Space-Time Fractional Advection-Diffusion Equation
Entropy 2017, 19(7), 297; doi:10.3390/e19070297
Received: 3 May 2017 / Revised: 13 June 2017 / Accepted: 21 June 2017 / Published: 23 June 2017
PDF Full-text (1044 KB) | HTML Full-text | XML Full-text
Abstract
Two approaches resulting in two different generalizations of the space-time-fractional advection-diffusion equation are discussed. The Caputo time-fractional derivative and Riesz fractional Laplacian are used. The fundamental solutions to the corresponding Cauchy and source problems in the case of one spatial variable are studied using the Laplace transform with respect to time and the Fourier transform with respect to the spatial coordinate. The numerical results are illustrated graphically. Full article
(This article belongs to the Special Issue Complex Systems and Fractional Dynamics)
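
To make the two building blocks concrete: with a Caputo derivative of order α in time and a Riesz fractional derivative of order β in space, a space-time fractional advection-diffusion equation for a concentration c(x, t) takes the generic form

$$ \frac{\partial^\alpha c}{\partial t^\alpha} = a\,\frac{\partial^\beta c}{\partial |x|^\beta} - v\,\frac{\partial c}{\partial x}, \qquad 0 < \alpha \le 1,\; 1 < \beta \le 2, $$

where the Caputo derivative is defined by

$$ \frac{\partial^\alpha c}{\partial t^\alpha} = \frac{1}{\Gamma(1-\alpha)} \int_0^t (t-\tau)^{-\alpha}\, \frac{\partial c(x,\tau)}{\partial \tau}\, \mathrm{d}\tau, \qquad 0 < \alpha < 1. $$

This generic form is for orientation only; the paper's point is precisely that two inequivalent generalizations of this type can be derived.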

Open Access Article The Legendre Transform in Non-Additive Thermodynamics and Complexity
Entropy 2017, 19(7), 298; doi:10.3390/e19070298
Received: 27 April 2017 / Revised: 20 June 2017 / Accepted: 20 June 2017 / Published: 23 June 2017
Cited by 1 | PDF Full-text (282 KB) | HTML Full-text | XML Full-text
Abstract
We present an argument which purports to show that the use of the standard Legendre transform in non-additive Statistical Mechanics is not appropriate. For concreteness, we use as a paradigm the case of systems which are conjecturally described by the (non-additive) Tsallis entropy. We point out the form of the modified Legendre transform that should be used instead in the non-additive thermodynamics induced by the Tsallis entropy. We comment on more general implications of this proposal for the thermodynamics of “complex systems”. Full article
(This article belongs to the Special Issue Geometry in Thermodynamics II)
Open Access Article Critical Behavior in Physics and Probabilistic Formal Languages
Entropy 2017, 19(7), 299; doi:10.3390/e19070299
Received: 15 December 2016 / Revised: 12 June 2017 / Accepted: 12 June 2017 / Published: 23 June 2017
Cited by 1 | PDF Full-text (1509 KB) | HTML Full-text | XML Full-text
Abstract
We show that the mutual information between two symbols, as a function of the number of symbols between the two, decays exponentially in any probabilistic regular grammar, but can decay like a power law for a context-free grammar. This result about formal languages is closely related to a well-known result in classical statistical mechanics that there are no phase transitions in dimensions fewer than two. It is also related to the emergence of power law correlations in turbulence and cosmological inflation through recursive generative processes. We elucidate these physics connections and comment on potential applications of our results to machine learning tasks like training artificial recurrent neural networks. Along the way, we introduce a useful quantity, which we dub the rational mutual information, and discuss generalizations of our claims involving more complicated Bayesian networks. Full article
(This article belongs to the Section Information Theory)
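
The exponential-decay half of the claim is easy to verify numerically for the simplest probabilistic regular process, a Markov chain, where I(X_0; X_d) can be computed exactly from the transition matrix. The two-state chain below is an arbitrary illustrative choice:

```python
# Exact mutual information between symbols d steps apart in a Markov chain;
# it should fall off exponentially in d, as the abstract states for
# probabilistic regular grammars.
import numpy as np

T = np.array([[0.9, 0.1],
              [0.2, 0.8]])                   # row-stochastic transition matrix

# Stationary distribution: left eigenvector of T for eigenvalue 1.
evals, evecs = np.linalg.eig(T.T)
pi = np.real(evecs[:, np.argmax(np.real(evals))])
pi /= pi.sum()

for d in (1, 2, 4, 8, 16):
    Td = np.linalg.matrix_power(T, d)        # P(X_d = j | X_0 = i)
    joint = pi[:, None] * Td                 # P(X_0 = i, X_d = j)
    indep = np.outer(pi, pi)
    mask = joint > 0
    mi = np.sum(joint[mask] * np.log2(joint[mask] / indep[mask]))
    print(f"d = {d:2d}   I(X_0; X_d) = {mi:.3e} bits")
```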

Open Access Article Measurement Uncertainty Relations for Position and Momentum: Relative Entropy Formulation
Entropy 2017, 19(7), 301; doi:10.3390/e19070301
Received: 26 May 2017 / Revised: 21 June 2017 / Accepted: 21 June 2017 / Published: 24 June 2017
Cited by 2 | PDF Full-text (436 KB) | HTML Full-text | XML Full-text
Abstract
Heisenberg’s uncertainty principle has recently led to general measurement uncertainty relations for quantum systems: incompatible observables can be measured jointly or in sequence only with some unavoidable approximation, which can be quantified in various ways. The relative entropy is the natural theoretical quantifier of the information loss when a ‘true’ probability distribution is replaced by an approximating one. In this paper, we provide a lower bound for the amount of information that is lost by replacing the distributions of the sharp position and momentum observables, as they could be obtained with two separate experiments, by the marginals of any smeared joint measurement. The bound is obtained by introducing an entropic error function, and optimizing it over a suitable class of covariant approximate joint measurements. We fully exploit two cases of target observables: (1) n-dimensional position and momentum vectors; (2) two components of position and momentum along different directions. In (1), we connect the quantum bound to the dimension n; in (2), going from parallel to orthogonal directions, we show the transition from highly incompatible observables to compatible ones. For simplicity, we develop the theory only for Gaussian states and measurements. Full article
(This article belongs to the Special Issue Quantum Information and Foundations)
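
As a reference point for the entropic error function used here: in the Gaussian setting the relative entropy has a closed form. For two one-dimensional normal distributions it is

$$ D\big(\mathcal{N}(\mu_1,\sigma_1^2)\,\big\|\,\mathcal{N}(\mu_2,\sigma_2^2)\big) = \ln\frac{\sigma_2}{\sigma_1} + \frac{\sigma_1^2+(\mu_1-\mu_2)^2}{2\sigma_2^2} - \frac{1}{2}, $$

which quantifies the information lost when the second distribution approximates the first; the paper's bound compares the sharp marginals with those of a smeared joint measurement in this sense.
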
Open Access Article Large Scale Emerging Properties from Non Hamiltonian Complex Systems
Entropy 2017, 19(7), 302; doi:10.3390/e19070302
Received: 31 May 2017 / Revised: 22 June 2017 / Accepted: 23 June 2017 / Published: 26 June 2017
Cited by 1 | PDF Full-text (369 KB) | HTML Full-text | XML Full-text
Abstract
The concept of “large scale” obviously depends on the phenomenon we are interested in. For example, in the field of the foundations of Thermodynamics from microscopic dynamics, the relevant spatial and time scales are of the order of fractions of a millimetre and of microseconds, respectively, or less, and are defined in relation to the spatial and time scales of the microscopic systems. In large scale oceanography or global climate dynamics problems, the scales of interest are of the order of thousands of kilometres in space and many years in time, and are compared to the local and daily/monthly scales of atmosphere and ocean dynamics. In all cases, a Zwanzig projection approach is, at least in principle, an effective tool to obtain classes of universal smooth “large scale” dynamics for the few degrees of freedom of interest, starting from the complex dynamics of the whole (usually many degrees of freedom) system. The projection approach leads to a very complex calculus with differential operators, which is drastically simplified when the basic dynamics of the system of interest is Hamiltonian, as happens in foundation-of-Thermodynamics problems. However, in geophysical Fluid Dynamics, Biology, and most physical problems, the fundamental building-block equations of motion have a non-Hamiltonian structure. Thus, to continue to apply the useful projection approach in these cases as well, we exploit the generalization of the Hamiltonian formalism given by the Lie algebra of dissipative differential operators. In this way, we are able to deal analytically with the series of differential operators stemming from the projection approach applied to these general cases. We then apply this formalism to obtain some relevant results concerning the statistical properties of the El Niño Southern Oscillation (ENSO). Full article
(This article belongs to the Special Issue Complex Systems, Non-Equilibrium Dynamics and Self-Organisation)
Open Access Article Node Importance Ranking of Complex Networks with Entropy Variation
Entropy 2017, 19(7), 303; doi:10.3390/e19070303
Received: 28 May 2017 / Revised: 21 June 2017 / Accepted: 22 June 2017 / Published: 26 June 2017
PDF Full-text (1444 KB) | HTML Full-text | XML Full-text
Abstract
The heterogeneous nature of a complex network means that the roles played by its individual nodes differ widely. Mechanisms of complex networks such as spreading dynamics, cascading reactions, and network synchronization are highly affected by a tiny fraction of so-called important nodes. Node importance ranking is thus of great theoretical and practical significance. Network entropy is usually utilized to characterize the amount of information encoded in the network structure and to measure the structural complexity at the graph level. We find that entropy can also serve as a local-level metric to quantify node importance. We propose an entropic metric, Entropy Variation, defining the importance of a node as the variation of network entropy before and after its removal, following the assumption that the removal of a more important node is likely to cause more structural variation. Like other state-of-the-art methods for ranking node importance, the proposed entropic metric also utilizes structural information, but at the system level rather than the local level. Empirical investigations on real-life networks, the Snake Idioms Network and several other well-known networks, demonstrate the superiority of the proposed entropic metric, which notably outperforms other centrality metrics in identifying the top-k most important nodes. Full article
(This article belongs to the Special Issue Complex Systems, Non-Equilibrium Dynamics and Self-Organisation)
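
A minimal sketch of the ranking procedure follows. The paper's precise network-entropy definition is not reproduced here; as a stand-in assumption, the sketch uses the Shannon entropy of the normalized degree sequence and scores each node by the absolute entropy change its removal causes:

```python
# Entropy-variation node ranking: score each node by how much a network
# entropy changes when that node is removed.
import networkx as nx
import numpy as np

def degree_entropy(G):
    deg = np.array([d for _, d in G.degree()], dtype=float)
    p = deg / deg.sum()
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

def entropy_variation_ranking(G):
    H0 = degree_entropy(G)
    scores = {}
    for v in G.nodes():
        H = G.copy()
        H.remove_node(v)
        scores[v] = abs(H0 - degree_entropy(H))
    return sorted(scores, key=scores.get, reverse=True)

G = nx.karate_club_graph()
print("top-5 nodes:", entropy_variation_ranking(G)[:5])
```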

Open Access Article Quantum Probabilities and Maximum Entropy
Entropy 2017, 19(7), 304; doi:10.3390/e19070304
Received: 22 May 2017 / Revised: 14 June 2017 / Accepted: 24 June 2017 / Published: 26 June 2017
PDF Full-text (203 KB) | HTML Full-text | XML Full-text
Abstract
Probabilities in quantum physics can be shown to originate from a maximum entropy principle. Full article
Open Access Article Complex Dynamics of an SIR Epidemic Model with Nonlinear Saturate Incidence and Recovery Rate
Entropy 2017, 19(7), 305; doi:10.3390/e19070305
Received: 10 May 2017 / Revised: 18 June 2017 / Accepted: 21 June 2017 / Published: 27 June 2017
PDF Full-text (968 KB) | HTML Full-text | XML Full-text
Abstract
Susceptible-infectious-removed (SIR) epidemic models are proposed to consider the impact of the available resources of the public health care system, in terms of the number of hospital beds. Both the incidence rate and the recovery rate are considered as nonlinear functions of the number of infectious individuals, and the recovery rate incorporates the influence of the number of hospital beds. It is shown that backward bifurcation and saddle-node bifurcation may occur when the number of hospital beds is insufficient. In such cases, it is critical to prepare an appropriate number of hospital beds, because merely reducing the basic reproduction number below unity is not enough to eradicate the disease. When the basic reproduction number is larger than unity, the model may undergo forward bifurcation and Hopf bifurcation. Increasing the number of hospital beds can decrease the number of infectious individuals; however, this alone cannot eliminate the disease. Therefore, maintaining enough hospital beds is important for the prevention and control of infectious disease. Numerical simulations are presented to illustrate and complement the theoretical analysis. Full article
(This article belongs to the Special Issue Complex Systems, Non-Equilibrium Dynamics and Self-Organisation)
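
A hedged numerical sketch of such a model is given below. The specific functional forms (saturating incidence kSI/(1 + αI), and a recovery rate rising from μ0 toward μ1 as the bed count b grows relative to the infectious count) and all parameter values are illustrative assumptions in the spirit of the abstract, not the paper's exact equations:

```python
# SIR model with a bed-dependent recovery rate, integrated numerically.
from scipy.integrate import solve_ivp

def sir(t, y, Lam=10.0, d=0.01, k=0.002, alpha=0.01, mu0=0.1, mu1=0.6, b=20.0):
    S, I, R = y
    incidence = k * S * I / (1.0 + alpha * I)     # nonlinear saturating incidence
    recovery = mu0 + (mu1 - mu0) * b / (I + b)    # recovery improves with free beds
    dS = Lam - d * S - incidence
    dI = incidence - (d + recovery) * I
    dR = recovery * I - d * R
    return [dS, dI, dR]

sol = solve_ivp(sir, (0.0, 400.0), [990.0, 10.0, 0.0], max_step=0.5)
print(f"infectious individuals at t = 400: {sol.y[1, -1]:.2f}")
```

Re-running with a smaller b illustrates the abstract's point: fewer beds lower the effective recovery rate and raise the endemic infectious level.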

Open Access Article A Novel Geometric Dictionary Construction Approach for Sparse Representation Based Image Fusion
Entropy 2017, 19(7), 306; doi:10.3390/e19070306
Received: 17 April 2017 / Revised: 21 June 2017 / Accepted: 26 June 2017 / Published: 27 June 2017
Cited by 5 | PDF Full-text (4272 KB) | HTML Full-text | XML Full-text
Abstract
Sparse-representation based approaches have been integrated into image fusion methods in the past few years and show great performance in image fusion. Training an informative and compact dictionary is a key step for a sparsity-based image fusion method. However, it is difficult to balance “informative” and “compact”. In order to obtain sufficient information for sparse representation in dictionary construction, this paper classifies image patches from the source images into different groups based on morphological similarities. Stochastic coordinate coding (SCC) is used to extract the corresponding image-patch information for dictionary construction. With the constructed dictionary, the image patches of the source images are converted to sparse coefficients by the simultaneous orthogonal matching pursuit (SOMP) algorithm. Finally, the sparse coefficients are fused by the Max-L1 fusion rule and inverted to form the fused image. Comparison experiments are conducted to evaluate the fused image in terms of image features, information, structural similarity, and visual perception. The results confirm the feasibility and effectiveness of the proposed image fusion solution. Full article
(This article belongs to the Special Issue Information Theory in Machine Learning and Data Science)
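
The encode-fuse-reconstruct core of such a pipeline fits in a few lines. In this sketch the dictionary is random (the paper constructs it from morphologically grouped patches via SCC) and plain OMP stands in for SOMP; both substitutions are for illustration only:

```python
# Sparse-coding fusion of two co-located patches with the Max-L1 rule.
import numpy as np
from sklearn.linear_model import OrthogonalMatchingPursuit

rng = np.random.default_rng(0)
D = rng.standard_normal((64, 256))          # dictionary: 8x8 patches, 256 atoms
D /= np.linalg.norm(D, axis=0)

patch_a = rng.standard_normal(64)           # stand-ins for co-located patches
patch_b = rng.standard_normal(64)           # from two source images

omp = OrthogonalMatchingPursuit(n_nonzero_coefs=8)
code_a = omp.fit(D, patch_a).coef_
code_b = omp.fit(D, patch_b).coef_

# Max-L1 fusion rule: keep the sparse code with the larger l1 norm.
fused_code = code_a if np.abs(code_a).sum() >= np.abs(code_b).sum() else code_b
fused_patch = D @ fused_code                # inverse transform to pixel space
print("fused patch shape:", fused_patch.shape)
```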

Open Access Article Effects of Task Demands on Kinematics and EMG Signals during Tracking Tasks Using Multiscale Entropy
Entropy 2017, 19(7), 307; doi:10.3390/e19070307
Received: 26 April 2017 / Revised: 13 June 2017 / Accepted: 16 June 2017 / Published: 27 June 2017
PDF Full-text (1730 KB) | HTML Full-text | XML Full-text
Abstract
Target-directed elbow movements are essential in daily life; however, how different task demands affect motor control is seldom reported. In this study, the relationship between task demands and the complexity of kinematics and electromyographic (EMG) signals on healthy young individuals was investigated. Tracking tasks with four levels of task demands were designed, and participants were instructed to track the target trajectories by extending or flexing their elbow joint. The actual trajectories and EMG signals from the biceps and triceps were recorded simultaneously. Multiscale fuzzy entropy was utilized to analyze the complexity of actual trajectories and EMG signals over multiple time scales. Results showed that the complexity of actual trajectories and EMG signals increased when task demands increased. As the time scale increased, there was a monotonic rise in the complexity of actual trajectories, while the complexity of EMG signals rose first, and then fell. Noise abatement may account for the decreasing entropy of EMG signals at larger time scales. This study confirmed the uniqueness of multiscale entropy, which may be useful in the analysis of electrophysiological signals. Full article
(This article belongs to the Section Complexity)
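
The multiscale procedure itself is simple: coarse-grain the signal at each time scale, then compute an entropy per scale. The sketch below uses a common fuzzy-entropy variant with an exponential membership function; the parameter choices (m = 2, r = 0.15·SD) and the white-noise test signal are illustrative assumptions:

```python
# Multiscale fuzzy entropy: coarse-grain, then entropy per scale.
import numpy as np

def coarse_grain(x, scale):
    n = len(x) // scale
    return x[:n * scale].reshape(n, scale).mean(axis=1)

def fuzzy_entropy(x, m=2, r=0.15):
    r = r * np.std(x)
    def phi(k):
        X = np.array([x[i:i + k] for i in range(len(x) - k)])   # templates
        d = np.abs(X[:, None, :] - X[None, :, :]).max(axis=2)   # Chebyshev distance
        sim = np.exp(-(d / r) ** 2)                             # fuzzy membership
        np.fill_diagonal(sim, 0.0)
        return sim.sum() / (len(X) * (len(X) - 1))
    return -np.log(phi(m + 1) / phi(m))

x = np.random.default_rng(1).standard_normal(1000)   # white-noise test signal
for scale in (1, 2, 5, 10):
    print(f"scale {scale:2d}: fuzzy entropy = {fuzzy_entropy(coarse_grain(x, scale)):.3f}")
```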

Open Access Article Optimal Nonlinear Estimation in Statistical Manifolds with Application to Sensor Network Localization
Entropy 2017, 19(7), 308; doi:10.3390/e19070308
Received: 30 April 2017 / Revised: 23 June 2017 / Accepted: 24 June 2017 / Published: 28 June 2017
Cited by 1 | PDF Full-text (1059 KB) | HTML Full-text | XML Full-text
Abstract
Information geometry enables a deeper understanding of the methods of statistical inference. In this paper, the problem of nonlinear parameter estimation is considered from a geometric viewpoint using a natural gradient descent on statistical manifolds. It is demonstrated that the nonlinear estimation for curved exponential families can be simply viewed as a deterministic optimization problem with respect to the structure of a statistical manifold. In this way, information geometry offers an elegant geometric interpretation for the solution to the estimator, as well as for the convergence of the gradient-based methods. The theory is illustrated via the analysis of a distributed mote network localization problem where the Radio Interferometric Positioning System (RIPS) measurements are used for free mote location estimation. The analysis results demonstrate the computational advantages of the presented methodology. Full article
(This article belongs to the Special Issue Information Geometry II)

Open Access Article Conjugate Representations and Characterizing Escort Expectations in Information Geometry
Entropy 2017, 19(7), 309; doi:10.3390/e19070309
Received: 29 May 2017 / Revised: 22 June 2017 / Accepted: 27 June 2017 / Published: 28 June 2017
PDF Full-text (258 KB) | HTML Full-text | XML Full-text
Abstract
Based on the maximum entropy (MaxEnt) principle for a generalized entropy functional and the conjugate representations introduced by Zhang, we have reformulated the method of information geometry. For a set of conjugate representations, the associated escort expectation is naturally introduced and characterized by the generalized score function, which has zero escort expectation. Furthermore, we show that the escort expectation induces a conformal divergence. Full article
(This article belongs to the Special Issue Information Geometry II)
Open Access Article Comparing Information-Theoretic Measures of Complexity in Boltzmann Machines
Entropy 2017, 19(7), 310; doi:10.3390/e19070310
Received: 30 April 2017 / Revised: 19 June 2017 / Accepted: 23 June 2017 / Published: 3 July 2017
Cited by 2 | PDF Full-text (1461 KB) | HTML Full-text | XML Full-text
Abstract
In the past three decades, many theoretical measures of complexity have been proposed to help understand complex systems. In this work, for the first time, we place these measures on a level playing field, to explore the qualitative similarities and differences between them, and their shortcomings. Specifically, using the Boltzmann machine architecture (a fully connected recurrent neural network) with uniformly distributed weights as our model of study, we numerically measure how complexity changes as a function of network dynamics and network parameters. We apply an extension of one such information-theoretic measure of complexity to understand incremental Hebbian learning in Hopfield networks, a fully recurrent architecture model of autoassociative memory. In the course of Hebbian learning, the total information flow reflects a natural upward trend in complexity as the network attempts to learn more and more patterns. Full article
(This article belongs to the Special Issue Information Geometry II)

Open Access Article Concepts and Criteria for Blind Quantum Source Separation and Blind Quantum Process Tomography
Entropy 2017, 19(7), 311; doi:10.3390/e19070311
Received: 6 April 2017 / Revised: 9 June 2017 / Accepted: 23 June 2017 / Published: 6 July 2017
PDF Full-text (311 KB) | HTML Full-text | XML Full-text
Abstract
Blind Source Separation (BSS) is an active domain of Classical Information Processing, with well-identified methods and applications. The development of Quantum Information Processing has made possible the appearance of Blind Quantum Source Separation (BQSS), with a recent extension towards Blind Quantum Process Tomography (BQPT). This article investigates the use of several fundamental quantum concepts in the BQSS context and establishes properties already used without justification in that context. It mainly considers a pair of electron spins initially prepared, separately, in a pure state and then submitted to an undesired exchange coupling between these spins. Some consequences of the existence of the entanglement phenomenon, and of the probabilistic aspect of quantum measurements, upon BQSS solutions are discussed. An unentanglement criterion is established for the state of an arbitrary qubit pair, expressed first with probability amplitudes and secondly with probabilities. The interest of using the concept of a random quantum state in the BQSS context is presented. It is stressed that the concept of statistical independence of the sources, widely used in classical BSS, should be used with care in BQSS, and possibly replaced by some disentanglement principle. It is shown that the coefficients of the development of any qubit-pair pure state over the states of an orthonormal basis can be expressed with the probabilities of results in the measurements of well-chosen spin components. Full article
(This article belongs to the Special Issue Quantum Information and Foundations)

Open Access Article Analysis of a Hybrid Thermoelectric Microcooler: Thomson Heat and Geometric Optimization
Entropy 2017, 19(7), 312; doi:10.3390/e19070312
Received: 26 May 2017 / Revised: 21 June 2017 / Accepted: 24 June 2017 / Published: 29 June 2017
PDF Full-text (1171 KB) | HTML Full-text | XML Full-text
Abstract
In this work, we analyze the thermodynamics and geometric optimization of the thermoelectric elements in a hybrid two-stage thermoelectric micro cooler (TEMC). We propose a novel procedure to improve the performance of the micro cooler based on the optimum geometric parameters, cross-sectional area (A) and length (L), of the semiconductor elements. Our analysis takes into account the Thomson effect to show its role in the performance of the system. We obtain dimensionless temperature spatial distributions, the coefficient of performance (COP) and the cooling power (Qc) in terms of the electric current for different values of the geometric ratio ω = A/L. In our analysis we consider two cases: (a) the same materials in both stages (homogeneous system); and (b) different materials in each stage (hybrid system). We introduce the geometric parameter W = ω1/ω2 to optimize the micro device considering the geometric parameters of both stages, ω1 and ω2. Our results show the optimal configuration of materials that must be used in each stage. The Thomson effect leads to a slight improvement in the performance of the micro cooler. We determine the optimal electrical current to obtain the best performance of the TEMC. The geometric parameters have been optimized, and the results show that the hybrid system reaches a maximum cooling power 15.9% greater than the one-stage system (with the same electric current I = 0.49 A), and 11% greater than a homogeneous system, when ω = 0.78. The optimization of the ratio of the number of thermocouples in each stage shows that COP and Qc increase as the number of thermocouples in the second stage increases, with W = 0.94. We show that when two materials with different performances are placed in each stage, the optimal configuration of materials in the stages of the system must be determined to obtain the best performance of the hybrid two-stage TEMC system. These results are important because we offer a novel procedure to optimize a thermoelectric micro cooler considering the geometry of the materials at the micro level. Full article
(This article belongs to the Special Issue Thermodynamics in Material Science)

Open Access Article Regularizing Neural Networks via Retaining Confident Connections
Entropy 2017, 19(7), 313; doi:10.3390/e19070313
Received: 30 April 2017 / Revised: 7 June 2017 / Accepted: 23 June 2017 / Published: 30 June 2017
PDF Full-text (308 KB) | HTML Full-text | XML Full-text
Abstract
Regularization of neural networks can alleviate overfitting in the training phase. Current regularization methods, such as Dropout and DropConnect, randomly drop neural nodes or connections based on a uniform prior. Such a data-independent strategy does not take into consideration the quality of individual units or connections. In this paper, we aim to develop a data-dependent approach to regularizing neural networks in the framework of Information Geometry. A measure of the quality of connections, namely confidence, is proposed. Specifically, the confidence of a connection is derived from its contribution to the Fisher information distance. The network is adjusted by retaining the confident connections and discarding the less confident ones. The adjusted network, named ConfNet, carries the majority of variations in the sample data. The relationships among confidence estimation, Maximum Likelihood Estimation and classical model selection criteria (like the Akaike information criterion) are investigated and discussed theoretically. Furthermore, a Stochastic ConfNet is designed by adding a self-adaptive probabilistic sampling strategy. The proposed data-dependent regularization methods achieve promising experimental results on three data collections: MNIST, CIFAR-10 and CIFAR-100. Full article
(This article belongs to the Special Issue Information Geometry II)
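
A toy version of the retain-confident-connections idea can be shown on a model small enough to fit here. The sketch below scores each weight of a logistic-regression model by a diagonal empirical-Fisher estimate (mean squared per-sample gradient) and prunes the rest; the model choice and the 50% retention quantile are illustrative assumptions, since the paper works with full neural networks and the Fisher information distance:

```python
# Prune connections by a Fisher-information-based confidence score.
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((500, 20))
w_true = np.concatenate([rng.standard_normal(5), np.zeros(15)])
y = (X @ w_true + 0.1 * rng.standard_normal(500) > 0).astype(float)

w = np.zeros(20)
for _ in range(200):                        # plain gradient descent on the NLL
    p = 1.0 / (1.0 + np.exp(-X @ w))
    w -= 0.1 * X.T @ (p - y) / len(y)

# Diagonal empirical Fisher: mean over samples of the squared gradient.
p = 1.0 / (1.0 + np.exp(-X @ w))
per_sample_grad = X * (p - y)[:, None]      # per-sample NLL gradient per weight
fisher = (per_sample_grad ** 2).mean(axis=0)

keep = fisher >= np.quantile(fisher, 0.5)   # retain the 50% most confident
w_pruned = np.where(keep, w, 0.0)
print("kept connections:", np.flatnonzero(keep))
```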

Open Access Article Invalid Microstate Densities for Model Systems Lead to Apparent Violation of Thermodynamic Law
Entropy 2017, 19(7), 314; doi:10.3390/e19070314
Received: 18 May 2017 / Revised: 21 June 2017 / Accepted: 24 June 2017 / Published: 30 June 2017
PDF Full-text (327 KB) | HTML Full-text | XML Full-text
Abstract
It is often incorrectly assumed that the number of microstates Ω(E, V, N, ...) available to an isolated system can have arbitrary dependence on the extensive variables E, V, N, .... However, this is not the case for systems which can, in principle, reach thermodynamic equilibrium, since restrictions arise from the underlying equilibrium statistical mechanics axioms of independence and a priori equal probability of microstates. Here we derive a concise criterion specifying the condition on Ω which must be met in order for a system to be able, in principle, to reach thermodynamic equilibrium. Natural quantum systems obey this criterion and therefore can, in principle, reach thermodynamic equilibrium. However, models which do not respect this criterion will present inconsistencies when treated under the equilibrium thermodynamic formalism. This has relevance to a number of recent models in which negative heat capacity and other violations of fundamental thermodynamic law have been reported. Full article
(This article belongs to the Special Issue Entropy and Its Applications across Disciplines)

Open Access Article The Expected Missing Mass under an Entropy Constraint
Entropy 2017, 19(7), 315; doi:10.3390/e19070315
Received: 7 June 2017 / Revised: 20 June 2017 / Accepted: 26 June 2017 / Published: 29 June 2017
PDF Full-text (253 KB) | HTML Full-text | XML Full-text
Abstract
In Berend and Kontorovich (2012), the following problem was studied: A random sample of size t is taken from a world (i.e., probability space) of size n; bound the expected value of the probability of the set of elements not appearing in the sample (unseen mass) in terms of t and n. Here we study the same problem, where the world may be countably infinite, and the probability measure on it is restricted to have an entropy of at most h. We provide tight bounds on the maximum of the expected unseen mass, along with a characterization of the measures attaining this maximum. Full article
(This article belongs to the Special Issue Information Theory in Machine Learning and Data Science)
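
The object of study has a simple closed form for a known distribution: for a sample of size t drawn i.i.d. from p, the expected missing mass is E[M_t] = Σ_i p_i (1 − p_i)^t. The snippet below evaluates it, together with the Shannon entropy that the paper's bound constrains, for a truncated geometric distribution chosen purely for illustration:

```python
# Expected unseen (missing) mass and entropy of a fixed distribution.
import numpy as np

def expected_missing_mass(p, t):
    return np.sum(p * (1.0 - p) ** t)

def entropy(p):
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

q = 0.3
p = q * (1 - q) ** np.arange(200)
p /= p.sum()                       # truncated geometric distribution

print("entropy (bits):", round(entropy(p), 3))
for t in (10, 100, 1000):
    print(f"t = {t:4d}  E[missing mass] = {expected_missing_mass(p, t):.4f}")
```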

Open Access Article Laminar-Turbulent Patterning in Transitional Flows
Entropy 2017, 19(7), 316; doi:10.3390/e19070316
Received: 31 May 2017 / Revised: 22 June 2017 / Accepted: 23 June 2017 / Published: 29 June 2017
PDF Full-text (2956 KB) | HTML Full-text | XML Full-text
Abstract
Wall-bounded flows experience a transition to turbulence characterized by the coexistence of laminar and turbulent domains in some range of Reynolds number R, the natural control parameter. This transitional regime takes place between an upper threshold Rt above which turbulence is uniform (featureless) and a lower threshold Rg below which any form of turbulence decays, possibly at the end of overlong chaotic transients. The most emblematic cases of flow along flat plates transiting to/from turbulence according to this scenario are reviewed. The coexistence is generally in the form of bands, alternatively laminar and turbulent, and oriented obliquely with respect to the general flow direction. The final decay of the bands at Rg points to the relevance of directed percolation and criticality in the sense of statistical-physics phase transitions. The nature of the transition at Rt where bands form is still somewhat mysterious and does not easily fit the scheme holding for pattern-forming instabilities at increasing control parameter on a laminar background. In contrast, the bands arise at Rt out of a uniform turbulent background at decreasing control parameter. Ingredients of a possible theory of laminar-turbulent patterning are discussed. Full article
(This article belongs to the Special Issue Complex Systems, Non-Equilibrium Dynamics and Self-Organisation)

Open Access Article Incipient Fault Feature Extraction for Rotating Machinery Based on Improved AR-Minimum Entropy Deconvolution Combined with Variational Mode Decomposition Approach
Entropy 2017, 19(7), 317; doi:10.3390/e19070317
Received: 4 May 2017 / Revised: 16 June 2017 / Accepted: 25 June 2017 / Published: 29 June 2017
Cited by 2 | PDF Full-text (6951 KB) | HTML Full-text | XML Full-text
Abstract
Aiming at the issue of extracting incipient single faults and multiple faults of rotating machinery from nonlinear and non-stationary vibration signals with a strong background noise, a new fault diagnosis method based on improved autoregressive-minimum entropy deconvolution (improved AR-MED) and variational mode decomposition (VMD) is proposed. Due to the complexity of rotating machinery systems, the periodic transient impulses of single faults and multiple faults always emerge in the acquired vibration signals. The improved AR-MED technique can effectively deconvolve the influence of the background noise, which aims to enhance the peak value of the multiple transient impulses. Nevertheless, the envelope spectra of the simulations and experiments in this work show that many interference components exist on both the left and right of the fault characteristic frequencies when the background noise is strong. To overcome this shortcoming, VMD is thus applied to adaptively decompose the filtered output vibration signal into a number of quasi-orthogonal intrinsic modes so as to better detect the single and multiple faults via those sub-band signals. The experimental and engineering application results demonstrate that the proposed method dramatically sharpens the fault characteristic frequencies (FCFs) arising from the impacts of bearing outer race and gearbox faults compared to the traditional methods, which shows a significant improvement in detecting early incipient faults of rotating machinery. Full article
(This article belongs to the Special Issue Wavelets, Fractals and Information Theory III)

Open Access Article Measuring Multivariate Redundant Information with Pointwise Common Change in Surprisal
Entropy 2017, 19(7), 318; doi:10.3390/e19070318
Received: 2 May 2017 / Revised: 25 June 2017 / Accepted: 27 June 2017 / Published: 29 June 2017
Cited by 6 | PDF Full-text (588 KB) | HTML Full-text | XML Full-text
Abstract
The problem of how to properly quantify redundant information is an open question that has been the subject of much recent research. Redundant information refers to information about a target variable S that is common to two or more predictor variables Xi. It can be thought of as quantifying overlapping information content or similarities in the representation of S between the Xi. We present a new measure of redundancy which measures the common change in surprisal shared between variables at the local or pointwise level. We provide a game-theoretic operational definition of unique information, and use this to derive constraints which are used to obtain a maximum entropy distribution. Redundancy is then calculated from this maximum entropy distribution by counting only those local co-information terms which admit an unambiguous interpretation as redundant information. We show how this redundancy measure can be used within the framework of the Partial Information Decomposition (PID) to give an intuitive decomposition of the multivariate mutual information into redundant, unique and synergistic contributions. We compare our new measure to existing approaches over a range of example systems, including continuous Gaussian variables. Matlab code for the measure is provided, including all considered examples. Full article

Open Access Article CSL Collapse Model Mapped with the Spontaneous Radiation
Entropy 2017, 19(7), 319; doi:10.3390/e19070319
Received: 30 April 2017 / Revised: 13 June 2017 / Accepted: 25 June 2017 / Published: 29 June 2017
Cited by 1 | PDF Full-text (278 KB) | HTML Full-text | XML Full-text
Abstract
In this paper, new upper limits on the parameters of the Continuous Spontaneous Localization (CSL) collapse model are extracted. To this end, the X-ray emission data collected by the IGEX collaboration are analyzed and compared with the spectrum of the spontaneous photon emission process predicted by collapse models. This study yields the most stringent limits within a relevant range of the CSL model parameters, with respect to any other method. The collapse rate λ and the correlation length rC are mapped, thus allowing the exclusion of a broad range of the parameter space. Full article
(This article belongs to the Special Issue Quantum Information and Foundations)

Open Access Article An Entropic Approach for Pair Trading
Entropy 2017, 19(7), 320; doi:10.3390/e19070320
Received: 29 April 2017 / Revised: 20 June 2017 / Accepted: 27 June 2017 / Published: 30 June 2017
PDF Full-text (430 KB) | HTML Full-text | XML Full-text
Abstract
In this paper, we derive the optimal boundary for pair trading. This boundary defines the points of entry into or exit from the market for a given stock pair. However, if the assumed model contains uncertainty, the resulting boundary could lead to large losses. To avoid this, we develop a more robust strategy by accounting for the model uncertainty. To incorporate the model uncertainty, we use the relative entropy as a penalty function in the expected profit from pair trading. Full article
(This article belongs to the Special Issue Entropic Applications in Economics and Finance)

Open Access Article Entropy Characterization of Random Network Models
Entropy 2017, 19(7), 321; doi:10.3390/e19070321
Received: 31 May 2017 / Revised: 24 June 2017 / Accepted: 27 June 2017 / Published: 30 June 2017
PDF Full-text (614 KB) | HTML Full-text | XML Full-text
Abstract
This paper elaborates on the Random Network Model (RNM) as a mathematical framework for modelling and analyzing the generation of complex networks. Such framework allows the analysis of the relationship between several network characterizing features (link density, clustering coefficient, degree distribution, connectivity, etc.) and entropy-based complexity measures, providing new insight on the generation and characterization of random networks. Some theoretical and computational results illustrate the utility of the proposed framework. Full article
(This article belongs to the Special Issue Complex Systems, Non-Equilibrium Dynamics and Self-Organisation)
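
A small computational illustration in the spirit of the paper: sweep the link density of Erdős–Rényi graphs and track an entropy-based measure. The particular measure below (Shannon entropy of the empirical degree distribution) is our assumption for illustration; the paper studies several characterizing features against entropy-based complexity measures:

```python
# Degree-distribution entropy of Erdos-Renyi graphs across link densities.
import networkx as nx
import numpy as np
from collections import Counter

def degree_distribution_entropy(G):
    counts = Counter(d for _, d in G.degree())
    p = np.array(list(counts.values()), dtype=float)
    p /= p.sum()
    return -np.sum(p * np.log2(p))

n = 500
for p_link in (0.005, 0.02, 0.1, 0.5, 0.9):
    G = nx.gnp_random_graph(n, p_link, seed=42)
    print(f"p = {p_link:5.3f}  H(degree) = {degree_distribution_entropy(G):.3f} bits")
```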

Open Access Article Entropic Phase Maps in Discrete Quantum Gravity
Entropy 2017, 19(7), 322; doi:10.3390/e19070322
Received: 26 May 2017 / Revised: 20 June 2017 / Accepted: 25 June 2017 / Published: 30 June 2017
PDF Full-text (579 KB) | HTML Full-text | XML Full-text
Abstract
Path summation offers a flexible general approach to quantum theory, including quantum gravity. In the latter setting, summation is performed over a space of evolutionary pathways in a history configuration space. Discrete causal histories called acyclic directed sets offer certain advantages over similar models appearing in the literature, such as causal sets. Path summation defined in terms of these histories enables derivation of discrete Schrödinger-type equations describing quantum spacetime dynamics for any suitable choice of algebraic quantities associated with each evolutionary pathway. These quantities, called phases, collectively define a phase map from the space of evolutionary pathways to a target object, such as the unit circle S¹ ⊂ ℂ, or an analogue such as S³ or S⁷. This paper explores the problem of identifying suitable phase maps for discrete quantum gravity, focusing on a class of S¹-valued maps defined in terms of “structural increments” of histories, called terminal states. Invariants such as state automorphism groups determine multiplicities of states, and induce families of natural entropy functions. A phase map defined in terms of such a function is called an entropic phase map. The associated dynamical law may be viewed as an abstract combination of Schrödinger’s equation and the second law of thermodynamics. Full article
(This article belongs to the Special Issue Quantum Information and Foundations)

Open Access Article Trajectory Shape Analysis and Anomaly Detection Utilizing Information Theory Tools
Entropy 2017, 19(7), 323; doi:10.3390/e19070323
Received: 15 February 2017 / Revised: 9 June 2017 / Accepted: 27 June 2017 / Published: 30 June 2017
Cited by 1 | PDF Full-text (8680 KB) | HTML Full-text | XML Full-text
Abstract
In this paper, we propose to improve trajectory shape analysis by explicitly considering the speed attribute of trajectory data, and to successfully achieve anomaly detection. The shape of an object’s motion trajectory is modeled using Kernel Density Estimation (KDE), making use of both the angle attribute of the trajectory and the speed of the moving object. An unsupervised clustering algorithm, based on the Information Bottleneck (IB) method, is employed for trajectory learning to obtain an adaptive number of trajectory clusters through maximizing the Mutual Information (MI) between the clustering result and a feature set of the trajectory data. Furthermore, we propose to effectively enhance the performance of IB by taking into account the clustering quality in each iteration of the clustering procedure. The trajectories are determined to be either abnormal (infrequently observed) or normal by a measure based on Shannon entropy. Extensive tests on real-world and synthetic data show that the proposed technique behaves very well and outperforms the state-of-the-art methods. Full article
(This article belongs to the Section Information Theory)
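As a rough illustration of the feature side of this pipeline, the sketch below fits a KDE to (angle, speed) samples extracted from one trajectory. The feature extraction, bandwidth and toy trajectory are assumptions for illustration; the paper's exact parameterization and the IB clustering step are not reproduced here.

```python
# Sketch: model one trajectory's shape with a KDE over (angle, speed)
# samples. Features, bandwidth and the toy trajectory are assumptions;
# the paper's exact parameterization and the IB clustering are not shown.
import numpy as np
from scipy.stats import gaussian_kde

def trajectory_features(points, dt=1.0):
    """Convert an (N, 2) array of positions into (angle, speed) samples."""
    deltas = np.diff(points, axis=0)
    angles = np.arctan2(deltas[:, 1], deltas[:, 0])  # motion direction per step
    speeds = np.linalg.norm(deltas, axis=1) / dt     # step speed
    return np.vstack([angles, speeds])               # shape (2, N-1)

# Hypothetical trajectory: a noisy circular arc.
t = np.linspace(0.0, np.pi, 100)
traj = np.column_stack([np.cos(t), np.sin(t)])
traj += 0.01 * np.random.default_rng(0).standard_normal(traj.shape)

kde = gaussian_kde(trajectory_features(traj))  # density over (angle, speed)
print(kde(trajectory_features(traj)[:, :5]))   # densities at first samples
```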
Open AccessArticle Entropy Analysis of the Interaction between the Corner Separation and Wakes in a Compressor Cascade
Entropy 2017, 19(7), 324; doi:10.3390/e19070324
Received: 27 May 2017 / Revised: 27 June 2017 / Accepted: 27 June 2017 / Published: 30 June 2017
Cited by 2 | PDF Full-text (34674 KB) | HTML Full-text | XML Full-text
Abstract
Corner separation in highly loaded compressors deteriorates the aerodynamics and reduces the stable operating range. The flow pattern is further complicated by the interaction between the aperiodic corner separation and the periodically shed wake vortices. Accurate prediction of the corner separation is a challenge for the Reynolds-Averaged Navier–Stokes (RANS) method, which is based on the linear eddy-viscosity formulation. In the current work, the corner separation is investigated with the Delayed Detached Eddy Simulation (DDES) approach. The DDES results agree well with the experiment and are systematically better than the RANS results, especially in the corner region where massive separation occurs. The accurate results from DDES provide a solid foundation for the mechanism study. The flow structures and the distribution of the Reynolds stress help reveal the process of corner separation and its interaction with the wake vortices. Before massive corner separation occurs, a hairpin-like vortex develops; its appearance could be a signal of large-scale corner separation. The strong interaction between corner separation and wake vortices significantly enhances the turbulence intensity. Based on these analyses, entropy analysis is conducted from two aspects to study the losses: time-averaged entropy analysis and instantaneous entropy analysis. It is found that the interaction between the passage vortex and the wake vortex yields remarkable viscous losses over the 0–12% span before the corner separation is triggered; once the corner separation occurs, an enlarged region covering the 0–30% span is affected, owing to the interaction between the corner separation and the wake vortices. The detailed coherent structures, local loss information and turbulence characteristics presented can provide guidance for corner separation control and better design. Full article
(This article belongs to the Special Issue Entropy in Computational Fluid Dynamics)
Open AccessArticle A Bayesian Optimal Design for Sequential Accelerated Degradation Testing
Entropy 2017, 19(7), 325; doi:10.3390/e19070325
Received: 16 May 2017 / Revised: 21 June 2017 / Accepted: 27 June 2017 / Published: 1 July 2017
PDF Full-text (1338 KB) | HTML Full-text | XML Full-text
Abstract
When optimizing an accelerated degradation testing (ADT) plan, the initial values of unknown model parameters must be pre-specified. However, it is usually difficult to obtain the exact values, since many uncertainties are embedded in these parameters. Bayesian ADT optimal design has been proposed to address this problem by using prior distributions to capture these uncertainties. Nevertheless, when the difference between a prior distribution and the actual situation is large, the existing Bayesian optimal design might cause over-testing or under-testing issues; for example, the ADT implemented according to the optimal plan may consume too many testing resources, or too few accelerated degradation data may be obtained during the ADT. To overcome these obstacles, a Bayesian sequential step-down-stress ADT design is proposed in this article. During the sequential ADT, the test under the highest stress level is conducted first, based on the initial prior information, to quickly generate degradation data. Then, the data collected under higher stress levels are employed to construct the prior distributions for the test design under lower stress levels by using Bayesian inference. In the process of optimization, the inverse Gaussian (IG) process is assumed to describe the degradation paths, and Bayesian D-optimality is selected as the optimization objective. A case study on an electrical connector's ADT plan is provided to illustrate the application of the proposed Bayesian sequential ADT design method. Compared with the results from a typical static Bayesian ADT plan, the proposed design guarantees more stable and precise estimations of different reliability measures. Full article
(This article belongs to the Special Issue Maximum Entropy and Bayesian Methods)
Open AccessArticle Non-Causal Computation
Entropy 2017, 19(7), 326; doi:10.3390/e19070326
Received: 11 May 2017 / Revised: 22 June 2017 / Accepted: 30 June 2017 / Published: 2 July 2017
Cited by 1 | PDF Full-text (307 KB) | HTML Full-text | XML Full-text
Abstract
Computation models such as circuits describe sequences of computation steps that are carried out one after the other. In other words, algorithm design is traditionally subject to the restriction imposed by a fixed causal order. We address a novel computing paradigm beyond quantum computing, replacing this assumption by mere logical consistency: We study non-causal circuits, where a fixed time structure within a gate is locally assumed whilst the global causal structure between the gates is dropped. We present examples of logically consistent non-causal circuits outperforming all causal ones; they imply that suppressing loops entirely is more restrictive than just avoiding the contradictions they can give rise to. That fact is already known for correlations as well as for communication, and we here extend it to computation. Full article
(This article belongs to the Special Issue Quantum Information and Foundations)
Open AccessArticle Testing the Interacting Dark Energy Model with Cosmic Microwave Background Anisotropy and Observational Hubble Data
Entropy 2017, 19(7), 327; doi:10.3390/e19070327
Received: 26 May 2017 / Revised: 12 June 2017 / Accepted: 23 June 2017 / Published: 17 July 2017
PDF Full-text (1090 KB) | HTML Full-text | XML Full-text
Abstract
The coupling between dark energy and dark matter provides a possible approach to mitigate the coincidence problem of the cosmological standard model. In this paper, we assumed the interaction term was related to the Hubble parameter, the energy density of dark energy, and the equation of state of dark energy, with a constant interaction rate $\xi$ between dark energy and dark matter; that is, $Q = 3H\xi(1 + w_x)\rho_x$. Based on the Markov chain Monte Carlo method, we performed a global fit of the interacting dark energy model to Planck 2015 cosmic microwave background anisotropy and observational Hubble data. We found that the observational data sets slightly favored a small interaction rate between dark energy and dark matter; however, there was no obvious evidence of interaction at the $1\sigma$ level. Full article
(This article belongs to the Special Issue Dark Energy)
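Spelled out, the assumed coupling term reads as below; the coupled continuity equations are added only for orientation, under one common sign convention, and are not quoted from the paper.

```latex
% Interaction term assumed in the paper (reconstructed from the abstract):
Q = 3 H \xi \left(1 + w_x\right) \rho_x
% One common sign convention for the coupled continuity equations
% (illustrative; conventions differ across the literature):
\dot{\rho}_x + 3H(1 + w_x)\rho_x = -Q , \qquad
\dot{\rho}_c + 3H\rho_c = +Q
```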
Open AccessArticle On Extractable Shared Information
Entropy 2017, 19(7), 328; doi:10.3390/e19070328
Received: 31 May 2017 / Accepted: 22 June 2017 / Published: 3 July 2017
Cited by 1 | PDF Full-text (260 KB) | HTML Full-text | XML Full-text
Abstract
We consider the problem of quantifying the information shared by a pair of random variables $X_1, X_2$ about another variable $S$. We propose a new measure of shared information, called extractable shared information, that is left monotonic; that is, the information shared about $S$ is bounded from below by the information shared about $f(S)$ for any function $f$. We show that our measure leads to a new nonnegative decomposition of the mutual information $I(S; X_1 X_2)$ into shared, complementary and unique components. We study properties of this decomposition and show that a left monotonic shared information is not compatible with a Blackwell interpretation of unique information. We also discuss whether it is possible to have a decomposition in which both shared and unique information are left monotonic. Full article
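Left monotonicity, the paper's central property, can be stated compactly as below; the symbol $SI_{\mathrm{ext}}$ for the new measure is our shorthand, not necessarily the paper's notation.

```latex
% Left monotonicity: processing the target variable S cannot increase
% the information the pair (X_1, X_2) shares about it.
SI_{\mathrm{ext}}\!\left(S : X_1, X_2\right) \;\ge\;
SI_{\mathrm{ext}}\!\left(f(S) : X_1, X_2\right)
\qquad \text{for every function } f .
```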
Open AccessArticle Modeling and Performance Evaluation of a Context Information-Based Optimized Handover Scheme in 5G Networks
Entropy 2017, 19(7), 329; doi:10.3390/e19070329
Received: 10 May 2017 / Revised: 20 June 2017 / Accepted: 22 June 2017 / Published: 3 July 2017
PDF Full-text (4558 KB) | HTML Full-text | XML Full-text
Abstract
Recently, green networking has become one of the hottest topics in Information and Communication Technology (ICT), especially in mobile communication networks. In a green network, energy saving at network nodes such as base stations (BSs), switches, and servers should be achieved efficiently. In this paper, we consider a heterogeneous network architecture in 5G networks with separated data and control planes, where a macro cell manages control signals and a small cell manages data traffic. We then propose an optimized handover scheme based on context information such as reference signal received power, speed of user equipment (UE), traffic load, call admission control level, and data type. The main objective of the proposed optimal handover is to reduce either the number of handovers or the total energy consumption of BSs. To this end, we formulate optimization problems whose objective function is either the minimization of the total number of handovers or the minimization of the energy consumption of BSs. The solution is obtained by particle swarm optimization, since the formulated optimization problem is NP-hard. Performance analysis via simulation, based on various probability distributions of the characteristics of UEs and BSs, shows that the proposed optimized handover based on context information outperforms the previous call admission control-based handover scheme in terms of the number of handovers and total energy consumption. We also show that the proposed handover scheme can efficiently reduce either the number of handovers or the total energy consumption by applying either handover minimization or energy minimization, depending on the objective of the application. Full article
(This article belongs to the Special Issue Information Theory and 5G Technologies)
Open AccessArticle Overfitting Reduction of Text Classification Based on AdaBELM
Entropy 2017, 19(7), 330; doi:10.3390/e19070330
Received: 2 May 2017 / Revised: 26 June 2017 / Accepted: 29 June 2017 / Published: 6 July 2017
PDF Full-text (922 KB) | HTML Full-text | XML Full-text
Abstract
Overfitting is an important problem in machine learning. Several algorithms, such as the extreme learning machine (ELM), suffer from this issue when facing high-dimensional sparse data, e.g., in text classification. A common issue is that the extent of overfitting is not well quantified. In this paper, we propose a quantitative measure of overfitting, referred to as the rate of overfitting (RO), and a novel model, named AdaBELM, to reduce overfitting. With RO, the overfitting problem can be quantitatively measured and identified. The newly proposed model can achieve high performance on multi-class text classification. To evaluate the generalizability of the new model, we designed experiments based on three datasets, i.e., the 20 Newsgroups, Reuters-21578, and BioMed corpora, which represent balanced, unbalanced, and real application data, respectively. Experimental results demonstrate that AdaBELM can reduce overfitting and outperforms the classical ELM, decision trees, random forests, and AdaBoost on all three text-classification datasets; for example, it achieves 62.2% higher accuracy than ELM. The proposed model therefore generalizes well. Full article
(This article belongs to the Special Issue Information Theory in Machine Learning and Data Science)
Open AccessArticle Dynamics of Entanglement in Jaynes–Cummings Nodes with Nonidentical Qubit-Field Coupling Strengths
Entropy 2017, 19(7), 331; doi:10.3390/e19070331
Received: 2 June 2017 / Revised: 18 June 2017 / Accepted: 29 June 2017 / Published: 3 July 2017
PDF Full-text (2283 KB) | HTML Full-text | XML Full-text
Abstract
How to analytically deal with the general entanglement dynamics of separate Jaynes–Cummings nodes with continuous-variable fields is still an open question, and few analytical approaches can be used to solve their general entanglement dynamics. The entanglement dynamics between two separate Jaynes–Cummings nodes are examined in this article. Both a vacuum state and a coherent state in the initial fields are considered, through numerical and analytical methods. The gap between the two nonidentical qubit-field coupling strengths shifts the revival period and changes the revival amplitude of the two-qubit entanglement. For vacuum-state fields, the maximal entanglement is fully revived after a gap-dependent period, within which the entanglement non-smoothly decreases to zero and partly recovers without exhibiting the sudden-death phenomenon. For strong coherent-state fields, the two-qubit entanglement decays exponentially as the evolution time increases, exhibiting the sudden-death phenomenon; an increasing gap shortens the revival period and accelerates the amplitude decay of the entanglement, with excellent agreement between the numerical and analytical results. Full article
(This article belongs to the collection Quantum Information)
Open AccessArticle Information Entropy in Predicting Location of Observation Points for Long Tunnel
Entropy 2017, 19(7), 332; doi:10.3390/e19070332
Received: 3 May 2017 / Revised: 24 June 2017 / Accepted: 29 June 2017 / Published: 4 July 2017
PDF Full-text (2692 KB) | HTML Full-text | XML Full-text
Abstract
Based on the Markov model and the basic theory of information entropy, this paper puts forward a new method for optimizing the location of observation points in order to obtain more information from a limited geological investigation. Using the existing data from observation points, the tunnel's geological lithology was classified, and the lithology distributions along the tunnel were determined with the Markov model. On the basis of information entropy theory, the distribution of information entropy was then obtained along the axis of the tunnel; different entropy values follow from different rock classifications, and larger entropy indicates greater uncertainty. The position of maximum entropy, where the prediction is least certain, therefore determines the location of the new drilling hole; after recalculating with the observation at this new location, an updated optimal distribution is obtained. Taking the Bashiyi Daban water diversion tunnel in Xinjiang as a case study, the maximum information entropy of the geological conditions was analyzed by the proposed method, with 25 newly added geological observation points along the axis of the 30-km tunnel. The results proved the validity of the method. The method and results in this paper may be used not only to predict the geological conditions of underground engineering based on the investigated geological information, but also to optimize the distribution of geological observation points. Full article
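A minimal sketch of the placement rule described above: compute the Shannon entropy of the predicted lithology-class probabilities at each chainage and drill where the entropy peaks. The probability matrix below is hypothetical; in the paper it would come from the Markov model fitted to existing boreholes.

```python
# Sketch: pick the next observation point at the chainage of maximum
# lithology-class entropy. The probability matrix is hypothetical; in the
# paper it comes from a Markov model fitted to existing boreholes.
import numpy as np

def shannon_entropy(p, eps=1e-12):
    """Entropy (in bits) of one probability vector."""
    p = np.clip(p, eps, 1.0)
    return float(-np.sum(p * np.log2(p)))

# probs[i, k] = predicted probability of lithology class k at chainage i
probs = np.array([
    [0.90, 0.05, 0.05],   # nearly certain -> low entropy
    [0.50, 0.30, 0.20],
    [0.34, 0.33, 0.33],   # most uncertain -> candidate borehole site
    [0.70, 0.20, 0.10],
])

entropies = np.array([shannon_entropy(row) for row in probs])
next_borehole = int(np.argmax(entropies))  # index of maximum uncertainty
print(entropies.round(3), "-> drill at chainage index", next_borehole)
```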
Open AccessArticle Information-Theoretic Bound on the Entropy Production to Maintain a Classical Nonequilibrium Distribution Using Ancillary Control
Entropy 2017, 19(7), 333; doi:10.3390/e19070333
Received: 23 March 2017 / Revised: 22 May 2017 / Accepted: 1 July 2017 / Published: 4 July 2017
Cited by 1 | PDF Full-text (1575 KB) | HTML Full-text | XML Full-text
Abstract
There are many functional contexts where it is desirable to maintain a mesoscopic system in a nonequilibrium state. However, such control requires an inherent energy dissipation. In this article, we unify and extend a number of works on the minimum energetic cost to maintain a mesoscopic system in a prescribed nonequilibrium distribution using ancillary control. For a variety of control mechanisms, we find that the minimum amount of energy dissipation necessary can be cast as an information-theoretic measure of distinguishability between the target nonequilibrium state and the underlying equilibrium distribution. This work offers quantitative insight into the intuitive idea that more energy is needed to maintain a system farther from equilibrium. Full article
(This article belongs to the Special Issue Thermodynamics and Statistical Mechanics of Small Systems)
Open AccessArticle Multi-User Detection for Sporadic IDMA Transmission Based on Compressed Sensing
Entropy 2017, 19(7), 334; doi:10.3390/e19070334
Received: 3 April 2017 / Revised: 18 June 2017 / Accepted: 3 July 2017 / Published: 5 July 2017
PDF Full-text (1623 KB) | HTML Full-text | XML Full-text
Abstract
The Internet of Things (IoT) is placing new demands on existing communication systems. The limited orthogonal resources do not meet the demands of the massive connectivity of future IoT systems, which require efficient multiple access. Interleave-division multiple access (IDMA) is a promising method for improving spectral efficiency and supporting massive connectivity in IoT networks. At any given time, not all sensors signal information to an aggregation node; rather, each node transmits a short frame only on occasion, e.g., time-controlled or event-driven. The sporadic nature of the uplink transmission, low data rates, and massive connectivity in IoT scenarios necessitate communication schemes with minimal control overhead. Therefore, sensor activity and data detection should be implemented on the receiver side. However, the current chip-by-chip (CBC) iterative multi-user detection (MUD) assumes that sensor activity is precisely known at the receiver. In this paper, we propose three schemes to solve the MUD problem in a sporadic IDMA uplink transmission system. Firstly, inspired by the observation of sensor sparsity, we incorporate compressed sensing (CS) into MUD in order to jointly perform activity and data detection. Secondly, as CS detection can provide reliable activity detection, we combine CS and CBC and propose a CS-CBC detector. In addition, a CBC-based MUD named CBC-AD is proposed to provide a comparable baseline scheme. Full article
(This article belongs to the Special Issue Multiuser Information Theory)
Open AccessArticle Initial Results of Testing Some Statistical Properties of Hard Disks Workload in Personal Computers in Terms of Non-Extensive Entropy and Long-Range Dependencies
Entropy 2017, 19(7), 335; doi:10.3390/e19070335
Received: 14 April 2017 / Revised: 9 June 2017 / Accepted: 23 June 2017 / Published: 5 July 2017
PDF Full-text (1460 KB) | HTML Full-text | XML Full-text
Abstract
The aim of this paper is to present some preliminary results and non-extensive statistical properties of selected operating system counters related to hard drive behaviour. A number of experiments were carried out to generate the workload and analyse the behaviour of computers during man–machine interaction. All analysed computers were personal machines running Windows operating systems. The research was conducted to demonstrate how the concept of non-extensive statistical mechanics can be helpful in describing the behaviour of computer systems, especially in the context of statistical properties with scaling phenomena, long-term dependencies and statistical self-similarity. The studies were made on the basis of the perfmon tool, which allows the user to trace operating system counters during processing. Full article
(This article belongs to the collection Advances in Applied Statistical Mechanics)
Open AccessArticle Rate-Distortion Bounds for Kernel-Based Distortion Measures
Entropy 2017, 19(7), 336; doi:10.3390/e19070336
Received: 9 May 2017 / Revised: 16 June 2017 / Accepted: 2 July 2017 / Published: 5 July 2017
PDF Full-text (346 KB) | HTML Full-text | XML Full-text
Abstract
Kernel methods have been used to turn linear learning algorithms into nonlinear ones. These nonlinear algorithms measure distances between data points by the distance in the kernel-induced feature space. In lossy data compression, the optimal tradeoff between the number of quantized points and the incurred distortion is characterized by the rate-distortion function. However, the rate-distortion functions associated with distortion measures involving kernel feature mapping have yet to be analyzed. We consider two reconstruction schemes, reconstruction in input space and reconstruction in feature space, and provide bounds on the rate-distortion functions for these schemes. Comparison of the derived bounds with the quantizer performance obtained by the kernel $K$-means method suggests that the rate-distortion bounds for input space and feature space reconstructions are informative at low and high distortion levels, respectively. Full article
(This article belongs to the Special Issue Information Theory in Machine Learning and Data Science)
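For readers unfamiliar with the quantizer used as the empirical reference point, here is a compact kernel K-means operating purely on the Gram matrix via the kernel trick. The RBF kernel, number of clusters and toy data are illustrative assumptions, not the paper's experimental setup.

```python
# Sketch: kernel K-means using only the Gram matrix (kernel trick), so
# cluster centroids live in feature space without being computed explicitly.
# RBF kernel, k and the toy data are illustrative assumptions.
import numpy as np

def rbf_gram(X, gamma=1.0):
    sq = np.sum(X**2, axis=1)
    return np.exp(-gamma * (sq[:, None] + sq[None, :] - 2.0 * X @ X.T))

def kernel_kmeans(K, k, n_iter=100, seed=0):
    n = K.shape[0]
    labels = np.random.default_rng(seed).integers(k, size=n)
    for _ in range(n_iter):
        # squared feature-space distance to centroid of cluster c:
        # K_ii - 2*mean_{j in c} K_ij + mean_{j,j' in c} K_jj'
        dist = np.full((n, k), np.inf)
        for c in range(k):
            mask = labels == c
            if mask.any():
                dist[:, c] = (np.diag(K)
                              - 2.0 * K[:, mask].mean(axis=1)
                              + K[np.ix_(mask, mask)].mean())
        new_labels = dist.argmin(axis=1)
        if np.array_equal(new_labels, labels):
            break
        labels = new_labels
    return labels

X = np.random.default_rng(1).standard_normal((60, 2))
print(kernel_kmeans(rbf_gram(X), k=3)[:10])
```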
Open AccessArticle Natural Convection and Entropy Generation in a Square Cavity with Variable Temperature Side Walls Filled with a Nanofluid: Buongiorno’s Mathematical Model
Entropy 2017, 19(7), 337; doi:10.3390/e19070337
Received: 15 May 2017 / Revised: 19 June 2017 / Accepted: 26 June 2017 / Published: 5 July 2017
PDF Full-text (14331 KB) | HTML Full-text | XML Full-text
Abstract
Natural convection heat transfer combined with entropy generation in a square cavity filled with a nanofluid under the effect of a variable temperature distribution along the left vertical wall has been studied numerically. The governing equations, formulated in dimensionless non-primitive variables with corresponding boundary conditions and taking into account the Brownian diffusion and thermophoresis effects, have been solved by the finite difference method. Distributions of streamlines, isotherms and local entropy generation, as well as the Nusselt number, have been obtained for different values of the key parameters. It has been found that a growth of the amplitude of the temperature distribution along the left wall and an increase of the wave number lead to an increase in the average entropy generation, while, at low Rayleigh numbers, an increase in these parameters leads to a decrease in the average Bejan number. Full article
(This article belongs to the Special Issue Entropy Generation in Nanofluid Flows)
Open AccessArticle Online Auction Fraud Detection in Privacy-Aware Reputation Systems
Entropy 2017, 19(7), 338; doi:10.3390/e19070338
Received: 19 April 2017 / Revised: 20 June 2017 / Accepted: 2 July 2017 / Published: 5 July 2017
PDF Full-text (407 KB) | HTML Full-text | XML Full-text | Supplementary Files
Abstract
With a privacy-aware reputation system, an auction website allows the buyer in a transaction to hide his/her identity from the public for privacy protection. However, fraudsters can also take advantage of this buyer-anonymized function to hide the connections between themselves and their accomplices. Traditional fraudster detection methods become useless for detecting such fraudsters because these methods rely on accessing these connections to work effectively. To resolve this problem, we introduce two attributes to quantify the buyer-anonymized activities associated with each user and use them to reinforce the traditional methods. Experimental results on a dataset crawled from an auction website show that the proposed attributes effectively enhance the prediction accuracy for detecting fraudsters, particularly when the proportion of the buyer-anonymized activities in the dataset is large. Because many auction websites have adopted privacy-aware reputation systems, the two proposed attributes should be incorporated into their fraudster detection schemes to combat these fraudulent activities. Full article
(This article belongs to the Section Information Theory)
Open AccessArticle Quantum-Wave Equation and Heisenberg Inequalities of Covariant Quantum Gravity
Entropy 2017, 19(7), 339; doi:10.3390/e19070339
Received: 18 June 2017 / Revised: 30 June 2017 / Accepted: 2 July 2017 / Published: 6 July 2017
PDF Full-text (304 KB) | HTML Full-text | XML Full-text
Abstract
Key aspects of the manifestly-covariant theory of quantum gravity (Cremaschini and Tessarotto 2015–2017) are investigated. These refer, first, to the establishment of the four-scalar, manifestly-covariant evolution quantum wave equation, denoted as the covariant quantum gravity (CQG) wave equation, which advances the quantum state ψ associated with a prescribed background space-time. In this paper, the CQG-wave equation is proved to follow at once by means of a Hamilton–Jacobi quantization of the classical variational tensor field $g \equiv \{g_{\mu\nu}\}$ and its conjugate momentum, referred to as (canonical) g-quantization. The same equation is also shown to be variational and to follow from a synchronous variational principle identified here with the quantum Hamilton variational principle. The corresponding quantum hydrodynamic equations are then obtained upon introducing the Madelung representation for ψ, which provides an equivalent statistical interpretation of the CQG-wave equation. Finally, the quantum state ψ is proven to fulfill generalized Heisenberg inequalities, relating the statistical measurement errors of quantum observables. These are shown to be represented in terms of the standard deviations of the metric tensor $g \equiv \{g_{\mu\nu}\}$ and its quantum conjugate momentum operator. Full article
(This article belongs to the Special Issue Advances in Relativistic Statistical Mechanics)
Open AccessArticle A Novel Feature Extraction Method for Ship-Radiated Noise Based on Variational Mode Decomposition and Multi-Scale Permutation Entropy
Entropy 2017, 19(7), 342; doi:10.3390/e19070342
Received: 28 June 2017 / Revised: 5 July 2017 / Accepted: 6 July 2017 / Published: 8 July 2017
Cited by 1 | PDF Full-text (2342 KB) | HTML Full-text | XML Full-text
Abstract
Since the features of ship-radiated noise are difficult to extract accurately, a novel method based on variational mode decomposition (VMD), multi-scale permutation entropy (MPE) and a support vector machine (SVM) is proposed to extract them. In order to eliminate mode mixing and extract the complexity of the intrinsic mode functions (IMFs) accurately, VMD is employed to decompose the three types of ship-radiated noise instead of Empirical Mode Decomposition (EMD) and its extended methods. Since permutation entropy (PE) can quantify complexity at only a single scale, MPE is used to extract features at different scales. In this study, three types of ship-radiated noise signals are decomposed into a set of band-limited IMFs by the VMD method, and the intensity of each IMF is calculated. The IMFs with the highest energy are then selected for the extraction of their MPE. By analyzing the separability of the MPE at different scales, the optimal MPE of the highest-energy IMF is taken as the feature vector. Finally, the feature vectors are fed into the SVM classifier to classify and recognize different types of ships. The proposed method was applied to simulated signals and actual ship-radiated noise signals. Compared with the PE of the highest-energy IMF obtained by EMD, ensemble EMD (EEMD) and VMD, the results show that the proposed method can effectively extract MPE features and realize the classification and recognition of ships. Full article
(This article belongs to the Section Complexity)
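A small sketch of the MPE building block used above: ordinal-pattern (permutation) entropy computed over coarse-grained copies of a signal. The embedding dimension, scales and test signal are illustrative assumptions; the VMD decomposition and SVM stages are not reproduced here.

```python
# Sketch: multi-scale permutation entropy (MPE). At each scale s the signal
# is coarse-grained by non-overlapping means; permutation entropy of ordinal
# patterns of length m is then computed. All parameters are illustrative.
import numpy as np
from math import factorial

def permutation_entropy(x, m=3, tau=1):
    counts = {}
    for i in range(len(x) - (m - 1) * tau):
        pattern = tuple(np.argsort(x[i:i + m * tau:tau]))  # ordinal pattern
        counts[pattern] = counts.get(pattern, 0) + 1
    p = np.array(list(counts.values()), dtype=float)
    p /= p.sum()
    return float(-np.sum(p * np.log(p)) / np.log(factorial(m)))  # in [0, 1]

def multiscale_pe(x, m=3, scales=(1, 2, 3, 4, 5)):
    values = []
    for s in scales:
        n = len(x) // s
        coarse = x[:n * s].reshape(n, s).mean(axis=1)  # coarse-graining
        values.append(permutation_entropy(coarse, m))
    return np.array(values)

rng = np.random.default_rng(0)
signal = np.sin(np.linspace(0, 40, 2000)) + 0.3 * rng.standard_normal(2000)
print(multiscale_pe(signal).round(3))
```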
Open AccessArticle α-Connections and a Symmetric Cubic Form on a Riemannian Manifold
Entropy 2017, 19(7), 344; doi:10.3390/e19070344
Received: 9 May 2017 / Revised: 6 July 2017 / Accepted: 6 July 2017 / Published: 10 July 2017
PDF Full-text (691 KB) | HTML Full-text | XML Full-text
Abstract
In this paper, we study the construction of α-conformally equivalent statistical manifolds for a given symmetric cubic form on a Riemannian manifold. In particular, we describe a method to obtain α-conformally equivalent connections from the relation between tensors and the symmetric cubic form. Full article
(This article belongs to the Special Issue Information Geometry II)
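For orientation, a standard way to build such connections from a metric $g$ and a symmetric cubic form $C$ is shown below; this is a common convention in information geometry, on which the paper's α-conformal construction builds, and sign conventions vary across references.

```latex
% Standard \alpha-connection induced by (g, C); \nabla^{(0)} denotes the
% Levi-Civita connection of g. Sign conventions vary across references.
g\!\left(\nabla^{(\alpha)}_X Y, Z\right)
  = g\!\left(\nabla^{(0)}_X Y, Z\right) - \frac{\alpha}{2}\, C(X, Y, Z)
```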
Open AccessArticle On Interrelation of Time and Entropy
Entropy 2017, 19(7), 345; doi:10.3390/e19070345
Received: 20 June 2017 / Revised: 6 July 2017 / Accepted: 7 July 2017 / Published: 10 July 2017
Cited by 1 | PDF Full-text (918 KB) | HTML Full-text | XML Full-text
Abstract
A measure of time is related to the number of ways by which a human observer correlates the past and the future of some process. On this basis, a connection between time and entropy (information, Boltzmann–Gibbs, and thermodynamic entropy) is established. This measure gives time such properties as universality, relativity, directionality, and non-uniformity. A number of issues of modern science related to finding the laws that describe changes in nature are discussed. Special emphasis is placed on the role of the evolutionary adaptation of an observer to the surrounding world. Full article
(This article belongs to the Special Issue Entropy, Time and Evolution)
Open AccessArticle An Alternative for Indicators that Characterize the Structure of Economic Systems
Entropy 2017, 19(7), 346; doi:10.3390/e19070346
Received: 15 May 2017 / Revised: 20 June 2017 / Accepted: 7 July 2017 / Published: 10 July 2017
Cited by 1 | PDF Full-text (479 KB) | HTML Full-text | XML Full-text
Abstract
Studies on the structure of economic systems are most frequently carried out by the methods of informational statistics. These methods, often accompanied by a broad range of indicators (Shannon entropy, Balassa coefficient, Herfindahl specialty index, Gini coefficient, Theil index, etc.) around which a wide literature has been created over time, have a major disadvantage: they impose the system condition, that is, the need to know all of the components of the system (as absolute values or as weights). This restriction is difficult to satisfy in some situations, while in others such complete knowledge may be irrelevant, especially when the interest lies in structural changes in only some of the components of the economic system (whether we refer to the typology of economic activities (NACE) or of territorial units (NUTS, the Nomenclature of territorial units for statistics)). This article presents a procedure for characterizing the structure of a system and for comparing its evolution over time in the case of incomplete information, thus eliminating the restriction existing in the classical methods. The proposed methodological alternative uses a parametric distribution with sub-unit values for the variable. The application refers to Gross Domestic Product values for five of the 28 European Union countries with annual values of over 1000 billion Euros (Germany, Spain, France, Italy, and the United Kingdom) for the years 2003 and 2015. A form of the Wald sequential test is applied to measure changes in the structure of this group of countries between the years compared. The results of this application validate the proposed method. Full article
(This article belongs to the Special Issue Statistical Mechanics of Complex and Disordered Systems)
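For context, the classical indicators the paper seeks to replace are easy to state; the sketch below computes Shannon entropy and a Herfindahl index from component shares, using made-up GDP weights, and makes the "system condition" explicit: the shares must be complete and sum to one.

```python
# Sketch: two classical structure indicators computed from component shares.
# Both require the full system (shares summing to 1) -- the restriction the
# paper's method avoids. The weights below are made up for illustration.
import numpy as np

shares = np.array([0.29, 0.21, 0.20, 0.17, 0.13])  # hypothetical GDP shares
assert np.isclose(shares.sum(), 1.0)                # the "system condition"

shannon = -np.sum(shares * np.log(shares))          # diversity of the structure
herfindahl = np.sum(shares**2)                      # concentration index
print(f"Shannon entropy: {shannon:.3f} nats, Herfindahl: {herfindahl:.3f}")
```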
Open AccessArticle Iterant Algebra
Entropy 2017, 19(7), 347; doi:10.3390/e19070347
Received: 29 May 2017 / Revised: 26 June 2017 / Accepted: 5 July 2017 / Published: 11 July 2017
PDF Full-text (359 KB) | HTML Full-text | XML Full-text
Abstract
We give an exposition of iterant algebra, a generalization of matrix algebra that is motivated by the structure of measurement for discrete processes. We show how Clifford algebras and matrix algebras arise naturally from iterants, and we then use this point of view to discuss the Schrödinger and Dirac equations, Majorana Fermions, representations of the braid group and the framed braids in relation to the structure of the Standard Model for physics. Full article
(This article belongs to the Special Issue Quantum Information and Foundations)
Open AccessArticle Multi-Objective Optimization for Solid Amine CO2 Removal Assembly in Manned Spacecraft
Entropy 2017, 19(7), 348; doi:10.3390/e19070348
Received: 5 February 2017 / Revised: 7 June 2017 / Accepted: 8 July 2017 / Published: 10 July 2017
PDF Full-text (7078 KB) | HTML Full-text | XML Full-text
Abstract
The Carbon Dioxide Removal Assembly (CDRA) is one of the most important systems in the Environmental Control and Life Support System (ECLSS) of a manned spacecraft. With the development of adsorbents and CDRA technology, solid amine has attracted increasing attention due to its clear advantages. However, a manned spacecraft operates far from the Earth, and its resources and energy are severely limited. These limitations increase the design difficulty of a solid amine CDRA. The purpose of this paper is to seek optimal design parameters for the solid amine CDRA. Based on a preliminary structure of the solid amine CDRA, heat and mass transfer models are built to reflect the features of the particular solid amine adsorbent used, a polyethylenepolyamine adsorbent. A multi-objective optimization of the solid amine CDRA design is then discussed. In this study, the cabin CO2 concentration, system power consumption and entropy production are chosen as the optimization objectives. The optimization variables consist of the adsorption cycle time, solid amine loading mass, adsorption bed length, power consumption and system entropy production. The Improved Non-dominated Sorting Genetic Algorithm (NSGA-II) is used to solve this multi-objective optimization and to obtain the optimal solution set. A design example of a solid amine CDRA in a manned space station is used to show the optimization procedure. The optimal combinations of design parameters can be located on the Pareto Optimal Front (POF); finally, Design 971 is selected as the best combination of design parameters. The optimal results indicate that multi-objective optimization plays a significant role in the design of a solid amine CDRA: the final optimal design parameters guarantee a cabin CO2 concentration within the specified range while also satisfying the requirements of light weight and minimum energy consumption. Full article
(This article belongs to the Special Issue Work Availability and Exergy Analysis)
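NSGA-II details aside, the core primitive behind the POF mentioned above is Pareto dominance over objective vectors; the sketch below filters a set of candidate designs down to its non-dominated front. The three random columns merely stand in for CO2 concentration, power consumption and entropy production, so everything here is illustrative.

```python
# Sketch: extract the Pareto (non-dominated) front from candidate designs.
# Columns stand in for the three minimized objectives (CO2 concentration,
# power consumption, entropy production); values are random placeholders.
import numpy as np

def pareto_front(F):
    """Return indices of non-dominated rows of F (all objectives minimized)."""
    keep = []
    for i, fi in enumerate(F):
        dominated = any(
            np.all(fj <= fi) and np.any(fj < fi)  # fj at least as good, strictly better somewhere
            for j, fj in enumerate(F) if j != i
        )
        if not dominated:
            keep.append(i)
    return keep

designs = np.random.default_rng(42).random((200, 3))
front = pareto_front(designs)
print(f"{len(front)} of {len(designs)} designs are on the Pareto front")
```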
Open AccessFeature PaperArticle Artificial Noise-Aided Physical Layer Security in Underlay Cognitive Massive MIMO Systems with Pilot Contamination
Entropy 2017, 19(7), 349; doi:10.3390/e19070349
Received: 2 June 2017 / Revised: 23 June 2017 / Accepted: 27 June 2017 / Published: 10 July 2017
Cited by 1 | PDF Full-text (374 KB) | HTML Full-text | XML Full-text
Abstract
In this paper, a secure communication model for cognitive multi-user massive multiple-input multiple-output (MIMO) systems with underlay spectrum sharing is investigated. A secondary (cognitive) multi-user massive MIMO system is operated by using underlay spectrum sharing within a primary (licensed) multi-user massive MIMO system. A passive multi-antenna eavesdropper is assumed to be eavesdropping upon either the primary or secondary confidential transmissions. To this end, a physical layer security strategy is provisioned for the primary and secondary transmissions via artificial noise (AN) generation at the primary base-station (PBS) and zero-forcing precoders. Specifically, the precoders are constructed by using the channel estimates with pilot contamination. In order to degrade the interception of confidential transmissions at the eavesdropper, the AN sequences are transmitted at the PBS by exploiting the excess degrees-of-freedom offered by its massive antenna array and by using random AN shaping matrices. The channel estimates at the PBS and secondary base-station (SBS) are obtained by using non-orthogonal pilot sequences transmitted by the primary user nodes (PUs) and secondary user nodes (SUs), respectively. Hence, these channel estimates are affected by intra-cell pilot contamination. In this context, the detrimental effects of intra-cell pilot contamination and channel estimation errors for physical layer secure communication are investigated. For this system set-up, the average and asymptotic achievable secrecy rate expressions are derived in closed-form. Specifically, these performance metrics are studied for imperfect channel state information (CSI) and for perfect CSI, and thereby, the secrecy rate degradation due to inaccurate channel knowledge and intra-cell pilot contamination is quantified. Our analysis reveals that a physical layer secure communication can be provisioned for both primary and secondary massive MIMO systems even with the channel estimation errors and pilot contamination. Full article
(This article belongs to the Special Issue Network Information Theory)
Open AccessFeature PaperArticle Fourier’s Law in a Generalized Piston Model
Entropy 2017, 19(7), 350; doi:10.3390/e19070350
Received: 5 June 2017 / Revised: 3 July 2017 / Accepted: 6 July 2017 / Published: 11 July 2017
PDF Full-text (1313 KB) | HTML Full-text | XML Full-text
Abstract
A simplified, but non-trivial, mechanical model—a gas of N particles of mass m in a box partitioned by n mobile adiabatic walls of mass M—interacting with two thermal baths at different temperatures is discussed in the framework of kinetic theory. Following an approach due to Smoluchowski, from an analysis of the particle/wall collisions we derive the values of the main thermodynamic quantities for the stationary non-equilibrium states. The results are compared with extensive numerical simulations; in the limit of large $n$, $mN/M \gg 1$ and $m/M \ll 1$, we find a good approximation of Fourier's law. Full article
(This article belongs to the Special Issue Thermodynamics and Statistical Mechanics of Small Systems)
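For reference, the macroscopic law being tested is the standard statement below; in the quasi-one-dimensional piston geometry only the derivative along the box axis matters.

```latex
% Fourier's law: heat flux proportional to the local temperature gradient.
J = -\kappa\, \nabla T , \qquad
\text{in 1D:} \quad J_x = -\kappa\, \frac{dT}{dx}
```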
Open AccessArticle Chaos Synchronization of Nonlinear Fractional Discrete Dynamical Systems via Linear Control
Entropy 2017, 19(7), 351; doi:10.3390/e19070351
Received: 21 May 2017 / Revised: 6 July 2017 / Accepted: 7 July 2017 / Published: 11 July 2017
Cited by 1 | PDF Full-text (445 KB) | HTML Full-text | XML Full-text
Abstract
By using a linear feedback control technique, we propose a chaos synchronization scheme for nonlinear fractional discrete dynamical systems. We construct a novel 1-D fractional discrete income change system and a kind of novel 3-D fractional discrete system. By means of the stability principles of Caputo-like fractional discrete systems, we then design a controller to achieve chaos synchronization, and present numerical simulations to illustrate and validate the synchronization scheme. Full article
(This article belongs to the Special Issue Complex Systems, Non-Equilibrium Dynamics and Self-Organisation)
Open AccessArticle Discrete Wigner Function Derivation of the Aaronson–Gottesman Tableau Algorithm
Entropy 2017, 19(7), 353; doi:10.3390/e19070353
Received: 3 May 2017 / Revised: 28 June 2017 / Accepted: 4 July 2017 / Published: 11 July 2017
PDF Full-text (324 KB) | HTML Full-text | XML Full-text
Abstract
The Gottesman–Knill theorem established that stabilizer states and Clifford operations can be efficiently simulated classically. For qudits with odd dimension three and greater, stabilizer states and Clifford operations have been found to correspond to positive discrete Wigner functions and dynamics. We present a discrete Wigner function-based simulation algorithm for odd-$d$ qudits that has the same time and space complexity as the Aaronson–Gottesman algorithm for qubits. We show that the efficiency of both algorithms is due to harmonic evolution in the symplectic structure of discrete phase space. The differences between the Wigner function algorithm for odd $d$ and the Aaronson–Gottesman algorithm for qubits are likely due only to the fact that the Weyl–Heisenberg group is not in $SU(d)$ for $d = 2$ and that qubits exhibit state-independent contextuality. This may provide a guide for extending the discrete Wigner function approach to qubits. Full article
(This article belongs to the Special Issue Quantum Information and Foundations)
Open AccessArticle Study on the Business Cycle Model with Fractional-Order Time Delay under Random Excitation
Entropy 2017, 19(7), 354; doi:10.3390/e19070354
Received: 31 May 2017 / Revised: 2 July 2017 / Accepted: 10 July 2017 / Published: 12 July 2017
PDF Full-text (2017 KB) | HTML Full-text | XML Full-text
Abstract
Time delays in economic policy and memory properties of real economic systems are omnipresent and inevitable. In this paper, a business cycle model with fractional-order time delay, which describes the delay and memory properties of economic control, is investigated. The stochastic averaging method is applied to obtain an approximate analytical solution, and numerical simulations are performed to verify the method. The effects of the fractional order, time delay, economic control and random excitation on the amplitude of the economic system are investigated. The results show that time delay, fractional order and the intensity of random excitation can all magnify the amplitude and increase the volatility of the economic system. Full article
(This article belongs to the Special Issue Complex Systems, Non-Equilibrium Dynamics and Self-Organisation)
Open AccessArticle Some Remarks on Classical and Classical-Quantum Sphere Packing Bounds: Rényi vs. Kullback–Leibler
Entropy 2017, 19(7), 355; doi:10.3390/e19070355
Received: 20 May 2017 / Revised: 3 July 2017 / Accepted: 10 July 2017 / Published: 12 July 2017
PDF Full-text (390 KB) | HTML Full-text | XML Full-text
Abstract
We review the use of binary hypothesis testing for the derivation of the sphere packing bound in channel coding, pointing out a key difference between the classical and the classical-quantum setting. In the first case, two ways of using the binary hypothesis testing are known, which lead to the same bound written in different analytical expressions. The first method historically compares output distributions induced by the codewords with an auxiliary fixed output distribution, and naturally leads to an expression using the Rényi divergence. The second method compares the given channel with an auxiliary one and leads to an expression using the Kullback–Leibler divergence. In the classical-quantum case, due to a fundamental difference in the quantum binary hypothesis testing, these two approaches lead to two different bounds, the first being the “right” one. We discuss the details of this phenomenon, which suggests the question of whether auxiliary channels are used in the optimal way in the second approach and whether recent results on the exact strong-converse exponent in classical-quantum channel coding might play a role in the considered problem. Full article
(This article belongs to the Section Information Theory)
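For reference, the two divergences being contrasted, in standard discrete notation:

```latex
% Renyi divergence of order \alpha and Kullback--Leibler divergence;
% D_\alpha recovers D in the limit \alpha -> 1.
D_{\alpha}(P \,\|\, Q) = \frac{1}{\alpha - 1}
  \log \sum_{x} P(x)^{\alpha}\, Q(x)^{1-\alpha} ,
\qquad
D(P \,\|\, Q) = \sum_{x} P(x) \log \frac{P(x)}{Q(x)}
  = \lim_{\alpha \to 1} D_{\alpha}(P \,\|\, Q) .
```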
Open AccessFeature PaperArticle Clausius Relation for Active Particles: What Can We Learn from Fluctuations
Entropy 2017, 19(7), 356; doi:10.3390/e19070356
Received: 12 June 2017 / Revised: 6 July 2017 / Accepted: 12 July 2017 / Published: 13 July 2017
Cited by 1 | PDF Full-text (269 KB) | HTML Full-text | XML Full-text
Abstract
Many kinds of active particles, such as bacteria or active colloids, move in a thermostatted fluid by means of self-propulsion. Energy injected by such a non-equilibrium force is eventually dissipated as heat in the thermostat. Since thermal fluctuations are much faster and weaker than self-propulsion forces, they are often neglected, blurring the identification of dissipated heat in theoretical models. For the same reason, some freedom—or arbitrariness—appears when defining entropy production. Recently three different recipes to define heat and entropy production have been proposed for the same model where the role of self-propulsion is played by a Gaussian coloured noise. Here we compare and discuss the relation between such proposals and their physical meaning. One of these proposals takes into account the heat exchanged with a non-equilibrium active bath: such an “active heat” satisfies the original Clausius relation and can be experimentally verified. Full article
(This article belongs to the Special Issue Thermodynamics and Statistical Mechanics of Small Systems)
Open AccessFeature PaperArticle Cosmological Time, Entropy and Infinity
Entropy 2017, 19(7), 357; doi:10.3390/e19070357
Received: 29 May 2017 / Revised: 11 July 2017 / Accepted: 12 July 2017 / Published: 14 July 2017
PDF Full-text (365 KB) | HTML Full-text | XML Full-text
Abstract
Time is a parameter playing a central role in our most fundamental modelling of natural laws. Relativity theory shows that the comparison of times measured by different clocks depends on their relative motion and on the strength of the gravitational field in which they are embedded. In standard cosmology, the time parameter is the one measured by fundamental clocks (i.e., clocks at rest with respect to the expanding space). This proper time is assumed to flow at a constant rate throughout the whole history of the universe. We make the alternative hypothesis that the rate at which the cosmological time flows depends on the dynamical state of the universe. In thermodynamics, the arrow of time is strongly related to the second law, which states that the entropy of an isolated system will always increase with time or, at best, stay constant. Hence, we assume that the time measured by fundamental clocks is proportional to the entropy of the region of the universe that is causally connected to them. Under that simple assumption, we find it possible to build toy cosmological models that present an acceleration of their expansion without any need for dark energy while being spatially closed and finite, avoiding the need to deal with infinite values. Full article
(This article belongs to the Special Issue Entropy, Time and Evolution)
Open AccessArticle Collaborative Service Selection via Ensemble Learning in Mixed Mobile Network Environments
Entropy 2017, 19(7), 358; doi:10.3390/e19070358
Received: 18 May 2017 / Revised: 4 July 2017 / Accepted: 10 July 2017 / Published: 20 July 2017
Cited by 2 | PDF Full-text (4148 KB) | HTML Full-text | XML Full-text
Abstract
Mobile service selection is an important but challenging problem in service and mobile computing, and quality of service (QoS) prediction is a critical step in service selection in 5G network environments. Traditional methods, such as collaborative filtering (CF), suffer from a series of defects, such as failing to handle data sparsity. In mobile network environments, abnormal QoS data are likely to result in inferior prediction accuracy. Unfortunately, these problems have not attracted enough attention, especially in a mixed mobile network environment with different network configurations, generations, or types. An ensemble learning method for predicting missing QoS in 5G network environments is proposed in this paper. There are two key principles: one is the newly proposed similarity computation method for identifying similar neighbors; the other is the extended ensemble learning model for discovering and filtering fake neighbors from the preliminary neighbor set. Moreover, three prediction models are also proposed: two individual models and one combined model; the individual models exploit similar user neighbors and similar service neighbors, respectively. Experimental results on two real-world datasets show that our approaches produce superior prediction accuracy. Full article
(This article belongs to the Special Issue Information Theory and 5G Technologies)
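As a rough illustration of the neighbor-based QoS prediction that such methods build on, the sketch below fills a missing user–service QoS entry from the most similar users. The similarity measure here is plain Pearson correlation over co-observed services, an illustrative assumption; the paper proposes its own similarity computation and an ensemble stage for filtering fake neighbors, neither of which is reproduced here.

```python
import numpy as np

def pearson_sim(a, b):
    """Pearson similarity over the QoS entries two users have in common
    (NaN marks a missing observation)."""
    mask = ~np.isnan(a) & ~np.isnan(b)
    if mask.sum() < 2:
        return 0.0
    x, y = a[mask], b[mask]
    if x.std() == 0 or y.std() == 0:
        return 0.0
    return float(np.mean((x - x.mean()) * (y - y.mean())) / (x.std() * y.std()))

def predict_qos(Q, user, service, k=3):
    """Predict the missing entry Q[user, service] from the k most similar
    users that have observed the target service."""
    sims = [(pearson_sim(Q[user], Q[u]), u)
            for u in range(Q.shape[0])
            if u != user and not np.isnan(Q[u, service])]
    top = sorted(sims, reverse=True)[:k]
    num = sum(s * Q[u, service] for s, u in top if s > 0)
    den = sum(s for s, _ in top if s > 0)
    return num / den if den > 0 else float(np.nanmean(Q[:, service]))

# Toy user-by-service response-time matrix (NaN = unobserved).
Q = np.array([[0.3, 1.2, np.nan],
              [0.4, 1.1, 2.0],
              [2.5, 0.2, 1.9]])
print(predict_qos(Q, user=0, service=2))
```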
Open AccessArticle Non-Linear Stability Analysis of Real Signals from Nuclear Power Plants (Boiling Water Reactors) Based on Noise Assisted Empirical Mode Decomposition Variants and the Shannon Entropy
Entropy 2017, 19(7), 359; doi:10.3390/e19070359
Received: 25 May 2017 / Accepted: 10 July 2017 / Published: 14 July 2017
PDF Full-text (6252 KB) | HTML Full-text | XML Full-text
Abstract
There are currently around 78 nuclear power plants (NPPs) in the world based on Boiling Water Reactors (BWRs). The current parameter to assess BWR instability issues is the linear Decay Ratio (DR). However, it is well known that BWRs are complex non-linear dynamical
[...] Read more.
There are currently around 78 nuclear power plants (NPPs) in the world based on Boiling Water Reactors (BWRs). The current parameter to assess BWR instability issues is the linear Decay Ratio (DR). However, it is well known that BWRs are complex non-linear dynamical systems that may even exhibit chaotic dynamics that normally preclude the use of the DR when the BWR is working at a specific operating point during instability. In this work, a novel methodology based on an adaptive Shannon Entropy estimator and on Noise Assisted Empirical Mode Decomposition variants is presented. This methodology was developed for the real-time implementation of a stability monitor. It was applied to a set of signals stemming from several NPP reactors (Ringhals, Sweden; Forsmark, Sweden; and Laguna Verde, Mexico) under commercial operating conditions that experienced instability events, each of a different nature. Full article
(This article belongs to the Special Issue Entropy in Signal Analysis)
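The entropy stage of such a stability monitor can be sketched with a plain histogram (plug-in) Shannon entropy estimator; this is a simplification of the paper's adaptive estimator, and the decomposition into intrinsic mode functions via noise-assisted EMD is omitted. A regular, two-level oscillation concentrates its amplitude histogram in a few bins (low entropy), while a broadband, noise-like mode spreads across all bins (high entropy).

```python
import numpy as np

def shannon_entropy(x, bins=32, range_=(-1.0, 1.0)):
    """Plug-in Shannon entropy (nats) of a signal's amplitude histogram."""
    counts, _ = np.histogram(x, bins=bins, range=range_)
    p = counts / counts.sum()
    p = p[p > 0]                          # convention: 0 * log 0 = 0
    return float(-np.sum(p * np.log(p)))

rng = np.random.default_rng(0)
t = np.linspace(0.0, 10.0, 4000)
limit_cycle = np.sign(np.sin(2 * np.pi * 0.5 * t))  # two-level oscillation
broadband = rng.uniform(-1.0, 1.0, t.size)          # noise-like mode
print(shannon_entropy(limit_cycle))   # near ln 2: two occupied bins
print(shannon_entropy(broadband))     # near ln 32: all bins occupied
```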
Open AccessArticle Extracting Knowledge from the Geometric Shape of Social Network Data Using Topological Data Analysis
Entropy 2017, 19(7), 360; doi:10.3390/e19070360
Received: 13 May 2017 / Revised: 10 July 2017 / Accepted: 14 July 2017 / Published: 14 July 2017
PDF Full-text (1349 KB) | HTML Full-text | XML Full-text
Abstract
Topological data analysis is a novel approach to extracting meaningful information from high-dimensional data and is robust to noise. It is based on topology, which aims to study the geometric shape of data. In order to apply topological data analysis, an algorithm called
[...] Read more.
Topological data analysis is a novel approach to extracting meaningful information from high-dimensional data and is robust to noise. It is based on topology, which aims to study the geometric shape of data. In order to apply topological data analysis, an algorithm called mapper is adopted. The output from mapper is a simplicial complex that represents a set of connected clusters of data points. In this paper, we explore the feasibility of topological data analysis for mining social network data by addressing the problem of image popularity. We randomly crawl images from Instagram and analyze the effects of social context and image content on an image’s popularity using mapper. Mapper clusters the images using each feature, and the ratio of popularity in each cluster is computed to determine the clusters with a high or low possibility of popularity. Then, the popularity of images is predicted to evaluate the accuracy of topological data analysis. This approach is further compared with traditional clustering algorithms, including k-means and hierarchical clustering, in terms of accuracy, and the results show that topological data analysis outperforms the others. Moreover, topological data analysis provides meaningful information based on the connectivity between the clusters. Full article
(This article belongs to the Special Issue Information Geometry II)
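A stripped-down version of the mapper algorithm conveys the idea: cover the range of a filter (lens) function with overlapping intervals, cluster each preimage, and link clusters that share points. The coordinate-projection filter, DBSCAN clustering, and synthetic circle data below are illustrative assumptions; the paper applies mapper to Instagram image features.

```python
import numpy as np
from sklearn.cluster import DBSCAN

def mapper_1d(points, n_intervals=5, overlap=0.3, eps=0.3):
    """Minimal mapper: 1-D filter (first coordinate), overlapping interval
    cover, per-interval clustering, edges between overlapping clusters."""
    f = points[:, 0]                                 # filter/lens function
    lo, width = f.min(), (f.max() - f.min()) / n_intervals
    nodes = []
    for i in range(n_intervals):
        a = lo + (i - overlap) * width
        b = lo + (i + 1 + overlap) * width
        idx = np.where((f >= a) & (f <= b))[0]
        if idx.size == 0:
            continue
        labels = DBSCAN(eps=eps, min_samples=2).fit_predict(points[idx])
        nodes += [set(idx[labels == lab]) for lab in set(labels) - {-1}]
    edges = {(i, j) for i in range(len(nodes)) for j in range(i + 1, len(nodes))
             if nodes[i] & nodes[j]}                 # shared points -> edge
    return nodes, edges

rng = np.random.default_rng(1)
theta = rng.uniform(0, 2 * np.pi, 300)               # noisy circle
pts = np.c_[np.cos(theta), np.sin(theta)] + 0.05 * rng.standard_normal((300, 2))
nodes, edges = mapper_1d(pts)
print(len(nodes), "clusters,", len(edges), "edges")  # a loop-shaped complex
```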
Open AccessArticle Estimating Mixture Entropy with Pairwise Distances
Entropy 2017, 19(7), 361; doi:10.3390/e19070361
Received: 8 June 2017 / Revised: 8 July 2017 / Accepted: 12 July 2017 / Published: 14 July 2017
Cited by 1 | PDF Full-text (325 KB) | HTML Full-text | XML Full-text | Correction
Abstract
Mixture distributions arise in many parametric and non-parametric settings—for example, in Gaussian mixture models and in non-parametric estimation. It is often necessary to compute the entropy of a mixture, but, in most cases, this quantity has no closed-form expression, making some form of
[...] Read more.
Mixture distributions arise in many parametric and non-parametric settings—for example, in Gaussian mixture models and in non-parametric estimation. It is often necessary to compute the entropy of a mixture, but, in most cases, this quantity has no closed-form expression, making some form of approximation necessary. We propose a family of estimators based on a pairwise distance function between mixture components, and show that this estimator class has many attractive properties. For many distributions of interest, the proposed estimators are efficient to compute, differentiable in the mixture parameters, and become exact when the mixture components are clustered. We prove this family includes lower and upper bounds on the mixture entropy. The Chernoff α-divergence gives a lower bound when chosen as the distance function, with the Bhattacharyya distance providing the tightest lower bound for components that are symmetric and members of a location family. The Kullback–Leibler divergence gives an upper bound when used as the distance function. We provide closed-form expressions of these bounds for mixtures of Gaussians, and discuss their applications to the estimation of mutual information. We then demonstrate that our bounds are significantly tighter than well-known existing bounds using numerical simulations. This estimator class is very useful in optimization problems involving maximization/minimization of entropy and mutual information, such as MaxEnt and rate distortion problems. Full article
(This article belongs to the Special Issue Information Theory in Machine Learning and Data Science)
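For one-dimensional Gaussian mixtures, both bounds can be written down directly. The sketch below evaluates an estimator of the pairwise form Ĥ_D = Σᵢ wᵢ H(pᵢ) − Σᵢ wᵢ ln Σⱼ wⱼ exp(−D(pᵢ, pⱼ)) with D taken as the Kullback–Leibler divergence (upper bound) and the Bhattacharyya distance (lower bound), using textbook closed forms for Gaussians, and compares them against a Monte Carlo reference; treat it as a sketch of the bounds' behavior rather than the paper's full estimator family.

```python
import numpy as np

def gauss_entropy(s2):                  # differential entropy of N(mu, s2)
    return 0.5 * np.log(2 * np.pi * np.e * s2)

def kl(m1, s1, m2, s2):                 # KL(N(m1,s1) || N(m2,s2)), variances
    return 0.5 * (np.log(s2 / s1) + (s1 + (m1 - m2) ** 2) / s2 - 1.0)

def bhattacharyya(m1, s1, m2, s2):
    return ((m1 - m2) ** 2 / (4 * (s1 + s2))
            + 0.5 * np.log((s1 + s2) / (2 * np.sqrt(s1 * s2))))

def pairwise_estimator(w, mu, s2, dist):
    """H_D = sum_i w_i H(p_i) - sum_i w_i ln sum_j w_j exp(-D(p_i, p_j))."""
    h_cond = sum(wi * gauss_entropy(si) for wi, si in zip(w, s2))
    outer = sum(w[i] * np.log(sum(w[j] * np.exp(-dist(mu[i], s2[i], mu[j], s2[j]))
                                  for j in range(len(w))))
                for i in range(len(w)))
    return h_cond - outer

w, mu, s2 = [0.5, 0.5], [0.0, 3.0], [1.0, 1.0]
lower = pairwise_estimator(w, mu, s2, bhattacharyya)  # Chernoff-1/2 bound
upper = pairwise_estimator(w, mu, s2, kl)             # KL-based bound
# Monte Carlo reference for the true mixture entropy.
rng = np.random.default_rng(0)
c = rng.choice(2, size=200_000, p=w)
x = rng.normal(np.array(mu)[c], np.sqrt(np.array(s2)[c]))
dens = sum(wi * np.exp(-(x - m) ** 2 / (2 * s)) / np.sqrt(2 * np.pi * s)
           for wi, m, s in zip(w, mu, s2))
print(lower, -np.mean(np.log(dens)), upper)           # lower <= H <= upper
```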
Open AccessArticle A Survey on Robust Interference Management in Wireless Networks
Entropy 2017, 19(7), 362; doi:10.3390/e19070362
Received: 10 June 2017 / Revised: 7 July 2017 / Accepted: 11 July 2017 / Published: 14 July 2017
PDF Full-text (450 KB) | HTML Full-text | XML Full-text
Abstract
Recent advances in the characterization of fundamental limits on interference management in wireless networks and the discovery of new communication schemes on how to handle interference led to a better understanding of the capacity of such networks. The benefits in terms of achievable
[...] Read more.
Recent advances in the characterization of fundamental limits on interference management in wireless networks and the discovery of new communication schemes on how to handle interference led to a better understanding of the capacity of such networks. The benefits in terms of achievable rates of powerful schemes handling interference, such as interference alignment, are substantial. However, the main issue behind most of these results is the assumption of perfect channel state information at the transmitters (CSIT). In the absence of channel knowledge, the performance of various interference networks collapses to what is achievable by time division multiple access (TDMA). Robust interference management techniques are promising solutions to maintain high achievable rates at various levels of CSIT, ranging from delayed to imperfect CSIT. In this survey, we outline and study two main research perspectives on how to robustly handle interference for cases where CSIT is imprecise, using examples of non-distributed and distributed networks, namely the broadcast and X-channel. To quantify the performance of these schemes, we use the well-known (generalized) degrees of freedom (GDoF) metric as the pre-log factor of achievable rates. These perspectives maintain the capacity benefits at levels similar to those for perfect channel knowledge. The two perspectives are: first, scheme adaptation, which explicitly accounts for the level of channel knowledge, and, second, relay-aided infrastructure enlargement, which decreases the dependency on channel knowledge. The relaxation of CSIT requirements through these perspectives will ultimately lead to practical realizations of robust interference management techniques. The survey concludes with a discussion of open problems. Full article
(This article belongs to the Special Issue Network Information Theory)
Open AccessArticle Competitive Sharing of Spectrum: Reservation Obfuscation and Verification Strategies
Entropy 2017, 19(7), 363; doi:10.3390/e19070363
Received: 19 May 2017 / Revised: 10 July 2017 / Accepted: 11 July 2017 / Published: 15 July 2017
PDF Full-text (1311 KB) | HTML Full-text | XML Full-text
Abstract
Sharing of radio spectrum between different types of wireless systems (e.g., different service providers) is the foundation for making more efficient usage of spectrum. Cognitive radio technologies have spurred the design of spectrum servers that coordinate the sharing of spectrum between different wireless
[...] Read more.
Sharing of radio spectrum between different types of wireless systems (e.g., different service providers) is the foundation for making more efficient usage of spectrum. Cognitive radio technologies have spurred the design of spectrum servers that coordinate the sharing of spectrum between different wireless systems. These servers receive information regarding the needs of each system, and then provide instructions back to each system regarding the spectrum bands they may use. This sharing of information is complicated by the fact that these systems are often in competition with each other: each system desires to use as much of the spectrum as possible to support its users, and each system could learn about and harm the bands of the other system. Three problems arise in such a spectrum-sharing setting: (1) how to maintain reliable performance for each system sharing the resource (licensed spectrum); (2) whether to believe the resource requests announced by each agent; and (3) if they are not believed, how much effort should be devoted to inspecting spectrum so as to prevent possible malicious activity. Since this problem can arise for a variety of wireless systems, we present an abstract formulation in which the agent or spectrum server introduces obfuscation in the resource assignment to maintain reliability. We derive a closed-form expression for the expected damage that can arise from possible malicious activity, and using this formula we find a tradeoff between the amount of extra decoys that must be used in order to support higher communication fidelity against potential interference, and the cost of maintaining this reliability. Then, we examine a scenario where a smart adversary may also use obfuscation itself, and formulate the scenario as a signaling game, which can be solved by applying a classical iterative forward-induction algorithm. For an important particular case, the game is solved in closed form, which gives conditions for deciding whether an agent can be trusted, or whether its request should be inspected and how intensely it should be inspected. Full article
(This article belongs to the Special Issue Information-Theoretic Security)
Open AccessArticle Scaling Exponent and Moderate Deviations Asymptotics of Polar Codes for the AWGN Channel
Entropy 2017, 19(7), 364; doi:10.3390/e19070364
Received: 8 June 2017 / Revised: 8 July 2017 / Accepted: 13 July 2017 / Published: 15 July 2017
PDF Full-text (846 KB) | HTML Full-text | XML Full-text
Abstract
This paper investigates polar codes for the additive white Gaussian noise (AWGN) channel. The scaling exponent μ of polar codes for a memoryless channel q_{Y|X} with capacity I(q_{Y|X}) characterizes the closest gap between the
[...] Read more.
This paper investigates polar codes for the additive white Gaussian noise (AWGN) channel. The scaling exponent μ of polar codes for a memoryless channel q_{Y|X} with capacity I(q_{Y|X}) characterizes the closest gap between the capacity and non-asymptotic achievable rates as follows: for a fixed ε ∈ (0, 1), the gap between the capacity I(q_{Y|X}) and the maximum non-asymptotic rate R_n^* achieved by a length-n polar code with average error probability ε scales as n^{−1/μ}, i.e., I(q_{Y|X}) − R_n^* = Θ(n^{−1/μ}). It is well known that the scaling exponent μ for any binary-input memoryless channel (BMC) with I(q_{Y|X}) ∈ (0, 1) is bounded above by 4.714. Our main result shows that 4.714 remains a valid upper bound on the scaling exponent for the AWGN channel. Our proof technique involves the following two ideas: (i) the capacity of the AWGN channel can be achieved within a gap of O(n^{−1/μ} log n) by using an input alphabet consisting of n constellations and restricting the input distribution to be uniform; (ii) the capacity of a multiple access channel (MAC) with an input alphabet consisting of n constellations can be achieved within a gap of O(n^{−1/μ} log n) by using a superposition of log n binary-input polar codes. In addition, we investigate the performance of polar codes in the moderate deviations regime, where both the gap to capacity and the error probability vanish as n grows. An explicit construction of polar codes is proposed to obey a certain tradeoff between the gap to capacity and the decay rate of the error probability for the AWGN channel. Full article
(This article belongs to the Special Issue Multiuser Information Theory)
Open AccessArticle A Quantized Kernel Learning Algorithm Using a Minimum Kernel Risk-Sensitive Loss Criterion and Bilateral Gradient Technique
Entropy 2017, 19(7), 365; doi:10.3390/e19070365
Received: 19 June 2017 / Revised: 8 July 2017 / Accepted: 13 July 2017 / Published: 20 July 2017
Cited by 3 | PDF Full-text (400 KB) | HTML Full-text | XML Full-text
Abstract
Recently, inspired by correntropy, kernel risk-sensitive loss (KRSL) has emerged as a novel nonlinear similarity measure defined in kernel space, which achieves a better computing performance. After applying the KRSL to adaptive filtering, the corresponding minimum kernel risk-sensitive loss (MKRSL) algorithm has been
[...] Read more.
Recently, inspired by correntropy, kernel risk-sensitive loss (KRSL) has emerged as a novel nonlinear similarity measure defined in kernel space, which achieves a better computing performance. After applying the KRSL to adaptive filtering, the corresponding minimum kernel risk-sensitive loss (MKRSL) algorithm has been developed accordingly. However, MKRSL, as a traditional kernel adaptive filter (KAF) method, generates a growing radial basis function (RBF) network. In response to that limitation, through the use of the online vector quantization (VQ) technique, this article proposes a novel KAF algorithm, named quantized MKRSL (QMKRSL), to curb the growth of the RBF network structure. Compared with other quantized methods, e.g., quantized kernel least mean square (QKLMS) and quantized kernel maximum correntropy (QKMC), the efficient performance surface makes QMKRSL converge faster and filter more accurately, while maintaining robustness to outliers. Moreover, considering that QMKRSL using the traditional gradient descent method may fail to make full use of the hidden information between the input and output spaces, we also propose an intensified QMKRSL using a bilateral gradient technique, named QMKRSL_BG, in an effort to further improve filtering accuracy. Short-term chaotic time-series prediction experiments are conducted to demonstrate the satisfactory performance of our algorithms. Full article
(This article belongs to the Special Issue Entropy in Signal Analysis)
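The vector quantization step that curbs RBF network growth can be sketched independently of the loss function. The sketch below uses a plain KLMS-style gradient step for clarity; the MKRSL/QMKRSL update would additionally reweight the error by a risk-sensitive factor, and the bilateral gradient refinement is omitted. All parameter values are illustrative.

```python
import numpy as np

def gauss_kernel(x, y, sigma=1.0):
    return np.exp(-np.sum((x - y) ** 2) / (2 * sigma ** 2))

def quantized_kaf(X, d, eta=0.5, eps_q=0.4, sigma=1.0):
    """Online kernel adaptive filter with vector quantization: an input
    joins the codebook only if it is farther than eps_q from every
    existing center; otherwise the nearest center absorbs the update,
    so the RBF network stops growing on densely sampled regions."""
    centers, alphas = [X[0]], [eta * d[0]]
    for x, target in zip(X[1:], d[1:]):
        y_hat = sum(a * gauss_kernel(x, c, sigma)
                    for a, c in zip(alphas, centers))
        err = target - y_hat   # a KRSL scheme would reweight this error
        dists = [np.linalg.norm(x - c) for c in centers]
        j = int(np.argmin(dists))
        if dists[j] <= eps_q:
            alphas[j] += eta * err       # quantized update: reuse center
        else:
            centers.append(x)            # grow the network
            alphas.append(eta * err)
    return centers, alphas

# Toy nonlinear regression: learn y = sin(3x) from streaming samples.
rng = np.random.default_rng(2)
X = rng.uniform(-2, 2, size=(500, 1))
d = np.sin(3 * X[:, 0]) + 0.05 * rng.standard_normal(500)
centers, _ = quantized_kaf(X, d)
print("codebook size:", len(centers), "of", len(X), "samples")
```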
Open AccessFeature PaperArticle Content Delivery in Fog-Aided Small-Cell Systems with Offline and Online Caching: An Information—Theoretic Analysis
Entropy 2017, 19(7), 366; doi:10.3390/e19070366
Received: 28 May 2017 / Revised: 2 July 2017 / Accepted: 14 July 2017 / Published: 18 July 2017
PDF Full-text (397 KB) | HTML Full-text | XML Full-text
Abstract
The storage of frequently requested multimedia content at small-cell base stations (BSs) can reduce the load of macro-BSs without relying on high-speed backhaul links. In this work, the optimal operation of a system consisting of a cache-aided small-cell BS and a macro-BS is
[...] Read more.
The storage of frequently requested multimedia content at small-cell base stations (BSs) can reduce the load of macro-BSs without relying on high-speed backhaul links. In this work, the optimal operation of a system consisting of a cache-aided small-cell BS and a macro-BS is investigated for both offline and online caching settings. In particular, a binary fading one-sided interference channel is considered in which the small-cell BS, whose transmission is interfered by the macro-BS, has a limited-capacity cache. The delivery time per bit (DTB) is adopted as a measure of the coding latency, that is, the duration of the transmission block, required for reliable delivery. For offline caching, assuming a static set of popular contents, the minimum achievable DTB is characterized through information-theoretic achievability and converse arguments as a function of the cache capacity and of the capacity of the backhaul link connecting cloud and small-cell BS. For online caching, under a time-varying set of popular contents, the long-term (average) DTB is evaluated for both proactive and reactive caching policies. Furthermore, a converse argument is developed to characterize the minimum achievable long-term DTB for online caching in terms of the minimum achievable DTB for offline caching. The performance of both online and offline caching is finally compared using numerical results. Full article
(This article belongs to the Special Issue Network Information Theory)
Open AccessFeature PaperArticle Reliable Approximation of Long Relaxation Timescales in Molecular Dynamics
Entropy 2017, 19(7), 367; doi:10.3390/e19070367
Received: 25 April 2017 / Revised: 13 July 2017 / Accepted: 13 July 2017 / Published: 18 July 2017
Cited by 1 | PDF Full-text (515 KB) | HTML Full-text | XML Full-text
Abstract
Many interesting rare events in molecular systems, like ligand association, protein folding or conformational changes, occur on timescales that often are not accessible by direct numerical simulation. Therefore, rare event approximation approaches like interface sampling, Markov state model building, or advanced reaction coordinate-based
[...] Read more.
Many interesting rare events in molecular systems, like ligand association, protein folding or conformational changes, occur on timescales that often are not accessible by direct numerical simulation. Therefore, rare event approximation approaches like interface sampling, Markov state model building, or advanced reaction coordinate-based free energy estimation have attracted huge attention recently. In this article we analyze the reliability of such approaches. How precise is an estimate of long relaxation timescales of molecular systems resulting from various forms of rare event approximation methods? Our results give a theoretical answer to this question by relating it to the transfer operator approach to molecular dynamics. In doing so, we also uncover deep connections between the different approaches. Full article
(This article belongs to the Special Issue Understanding Molecular Dynamics via Stochastic Processes)
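The transfer operator connection can be made concrete with the standard Markov state model recipe: for a row-stochastic transition matrix estimated at lag time τ, the relaxation timescales follow from the leading eigenvalues via tᵢ = −τ / ln λᵢ. A minimal two-state sketch (the matrix entries are illustrative):

```python
import numpy as np

tau = 1.0                                  # lag time, arbitrary units
# Row-stochastic transition matrix of a 2-state Markov state model;
# slow switching between the states yields an eigenvalue close to 1.
T = np.array([[0.99, 0.01],
              [0.02, 0.98]])

eigvals = np.sort(np.linalg.eigvals(T).real)[::-1]
# eigvals[0] == 1 (stationarity); eigvals[1] sets the slow relaxation.
t2 = -tau / np.log(eigvals[1])
print("lambda_2 =", eigvals[1], "-> implied timescale:", t2)
# Here lambda_2 = 0.97, so t2 = -1/ln(0.97), about 33 lag times.
```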
Open AccessArticle Nonequilibrium Entropy in a Shock
Entropy 2017, 19(7), 368; doi:10.3390/e19070368
Received: 13 June 2017 / Revised: 12 July 2017 / Accepted: 14 July 2017 / Published: 19 July 2017
PDF Full-text (660 KB) | HTML Full-text | XML Full-text
Abstract
In a classic paper, Morduchow and Libby use an analytic solution for the profile of a Navier–Stokes shock to show that the equilibrium thermodynamic entropy has a maximum inside the shock. There is no general nonequilibrium thermodynamic formulation of entropy; the extension of
[...] Read more.
In a classic paper, Morduchow and Libby use an analytic solution for the profile of a Navier–Stokes shock to show that the equilibrium thermodynamic entropy has a maximum inside the shock. There is no general nonequilibrium thermodynamic formulation of entropy; the extension of equilibrium theory to nonequilibrium processes is usually made through the assumption of local thermodynamic equilibrium (LTE). However, gas kinetic theory provides a perfectly general formulation of a nonequilibrium entropy in terms of the probability distribution function (PDF) solutions of the Boltzmann equation. In this paper I will evaluate the Boltzmann entropy for the PDF that underlies the Navier–Stokes equations and also for the PDF of the Mott–Smith shock solution. I will show that both monotonically increase in the shock. I will propose a new nonequilibrium thermodynamic entropy and show that it is also monotone and closely approximates the Boltzmann entropy. Full article
(This article belongs to the Special Issue Entropy, Time and Evolution)
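The Boltzmann entropy in question is the H-functional form S = −∫ f ln f dv, which can be evaluated numerically for any model PDF. The sketch below does this for a Mott–Smith-style bimodal velocity distribution, written as a convex combination of an upstream and a downstream Maxwellian; the one-dimensional, dimensionless setup and the parameter values are illustrative simplifications.

```python
import numpy as np

def maxwellian(v, u, temp):
    """1-D Maxwellian with drift u and temperature temp (unit mass/density)."""
    return np.exp(-(v - u) ** 2 / (2 * temp)) / np.sqrt(2 * np.pi * temp)

def boltzmann_entropy(f, v):
    """S = -integral of f ln f dv (0 ln 0 := 0), via a Riemann sum."""
    safe = np.where(f > 0, f, 1.0)         # avoid log(0) warnings
    return float(np.sum(np.where(f > 0, -f * np.log(safe), 0.0)) * (v[1] - v[0]))

v = np.linspace(-15.0, 15.0, 4001)
# Mott-Smith-style ansatz: inside the shock, the PDF is a weighted sum of
# the upstream (cold, fast) and downstream (hot, slow) Maxwellians.
up, down = maxwellian(v, 3.0, 0.5), maxwellian(v, 1.0, 2.0)
for a in (1.0, 0.5, 0.0):                  # a tracks progress through the shock
    f = a * up + (1.0 - a) * down
    print(f"a = {a:.1f}  S = {boltzmann_entropy(f, v):.4f}")
```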
Open AccessArticle Optimal Detection under the Restricted Bayesian Criterion
Entropy 2017, 19(7), 370; doi:10.3390/e19070370
Received: 8 May 2017 / Revised: 11 July 2017 / Accepted: 18 July 2017 / Published: 19 July 2017
PDF Full-text (1116 KB) | HTML Full-text | XML Full-text
Abstract
This paper aims to find a suitable decision rule for a binary composite hypothesis-testing problem with a partial or coarse prior distribution. To alleviate the negative impact of the information uncertainty, a constraint is imposed such that the maximum conditional risk cannot be greater
[...] Read more.
This paper aims to find a suitable decision rule for a binary composite hypothesis-testing problem with a partial or coarse prior distribution. To alleviate the negative impact of the information uncertainty, a constraint is imposed such that the maximum conditional risk cannot be greater than a predefined value. Therefore, the objective of this paper becomes to find the optimal decision rule that minimizes the Bayes risk under this constraint. By applying Lagrange duality, the constrained optimization problem is transformed into an unconstrained optimization problem. In doing so, the restricted Bayesian decision rule is obtained as a classical Bayesian decision rule corresponding to a modified prior distribution. Based on this transformation, the optimal restricted Bayesian decision rule is analyzed and the corresponding algorithm is developed. Furthermore, the relation between the Bayes risk and the predefined value of the constraint is also discussed. The Bayes risk obtained via the restricted Bayesian decision rule is a strictly decreasing and convex function of the constraint on the maximum conditional risk. Finally, numerical results, including a detection example, are presented and agree with the theoretical results. Full article
(This article belongs to the Special Issue Maximum Entropy and Bayesian Methods)
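The key transformation — that the restricted rule is a Bayes rule for some modified prior — suggests a brute-force illustration: sweep modified priors, apply each one's likelihood-ratio threshold, and keep the lowest Bayes risk (at the nominal prior) whose maximum conditional risk respects the constraint. The two simple Gaussian hypotheses and all numbers below are illustrative assumptions, not the paper's algorithm.

```python
import numpy as np
from scipy.stats import norm

# H0: X ~ N(0, 1)  vs  H1: X ~ N(2, 1); uniform costs, nominal prior 0.3.
mu0, mu1, prior1 = 0.0, 2.0, 0.3
alpha_max = 0.25          # constraint on the maximum conditional risk

def conditional_risks(thr):
    """Error probabilities of the rule 'decide H1 iff x > thr'."""
    r0 = 1.0 - norm.cdf(thr, mu0, 1.0)    # false alarm (risk under H0)
    r1 = norm.cdf(thr, mu1, 1.0)          # miss (risk under H1)
    return r0, r1

best = None
for p1 in np.linspace(0.01, 0.99, 199):   # sweep over modified priors
    # Bayes rule for modified prior p1: likelihood-ratio threshold where
    # p1 * N(x; mu1, 1) = (1 - p1) * N(x; mu0, 1), solved for x.
    thr = (mu0 + mu1) / 2 + np.log((1 - p1) / p1) / (mu1 - mu0)
    r0, r1 = conditional_risks(thr)
    if max(r0, r1) <= alpha_max:          # restriction satisfied
        bayes_risk = (1 - prior1) * r0 + prior1 * r1  # risk at true prior
        if best is None or bayes_risk < best[0]:
            best = (bayes_risk, thr, r0, r1)

print("restricted Bayes risk %.4f at threshold %.3f (r0=%.3f, r1=%.3f)" % best)
```

For these numbers the unconstrained Bayes rule (threshold near 1.42) yields a miss probability above 0.25, so the restricted solution settles on a smaller threshold at the constraint boundary, trading a little Bayes risk for robustness to the uncertain prior.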
Open AccessArticle Conformity, Anticonformity and Polarization of Opinions: Insights from a Mathematical Model of Opinion Dynamics
Entropy 2017, 19(7), 371; doi:10.3390/e19070371
Received: 29 May 2017 / Revised: 13 July 2017 / Accepted: 18 July 2017 / Published: 19 July 2017
PDF Full-text (850 KB) | HTML Full-text | XML Full-text
Abstract
Understanding and quantifying polarization in social systems is important for many reasons. It could, for instance, help to avoid segregation and conflicts in society or to control polarized debates and predict their outcomes. In this paper, we present a version of
[...] Read more.
Understanding and quantifying polarization in social systems is important for many reasons. It could, for instance, help to avoid segregation and conflicts in society or to control polarized debates and predict their outcomes. In this paper, we present a version of the q-voter model of opinion dynamics with two types of responses to social influence: conformity (as in the original q-voter model) and anticonformity. We put the model on a social network with the double-clique topology in order to check how the interplay between those responses impacts the opinion dynamics in a population divided into two antagonistic segments. The model is analyzed analytically, numerically and by means of Monte Carlo simulations. Our results show that the system undergoes two bifurcations as the number of cross-links between cliques changes. Below the first critical point, consensus in the entire system is possible. Thus, two antagonistic cliques may share the same opinion only if they are loosely connected. Above that point, the system ends up in a polarized state. Full article
(This article belongs to the Special Issue Statistical Mechanics of Complex and Disordered Systems)
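The Monte Carlo component can be sketched directly: pick an agent, draw a panel of q neighbors, and if the panel is unanimous, let the agent conform to it or, with probability p, anticonform. The double-clique wiring, panel rule, and parameter values below follow one common variant of the model and are illustrative assumptions rather than the paper's exact specification.

```python
import numpy as np

def simulate(N=50, L=20, q=4, p=0.1, steps=50_000, seed=3):
    """q-voter dynamics with conformity/anticonformity on two complete
    cliques of size N joined by L random cross-links."""
    rng = np.random.default_rng(seed)
    nbrs = [set(range(N)) - {i} for i in range(N)]                 # clique A
    nbrs += [set(range(N, 2 * N)) - {i} for i in range(N, 2 * N)]  # clique B
    for _ in range(L):                                             # cross-links
        a, b = int(rng.integers(0, N)), int(rng.integers(N, 2 * N))
        nbrs[a].add(b); nbrs[b].add(a)
    s = np.array([1] * N + [-1] * N)        # start fully polarized
    for _ in range(steps):
        i = int(rng.integers(0, 2 * N))
        panel = rng.choice(list(nbrs[i]), size=q, replace=False)
        if len(set(s[panel])) == 1:         # unanimous q-panel...
            influence = s[panel[0]]
            # ...conform to it, or anticonform with probability p
            s[i] = -influence if rng.random() < p else influence
    return s[:N].mean(), s[N:].mean()

print(simulate())   # per-clique average opinions after relaxation
```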
Open AccessArticle Transfer Entropy for Nonparametric Granger Causality Detection: An Evaluation of Different Resampling Methods
Entropy 2017, 19(7), 372; doi:10.3390/e19070372
Received: 16 May 2017 / Revised: 14 July 2017 / Accepted: 17 July 2017 / Published: 21 July 2017
PDF Full-text (1296 KB) | HTML Full-text | XML Full-text
Abstract
The information-theoretic concept of transfer entropy is an ideal measure for detecting conditional independence, or Granger causality, in a time series setting. The recent literature indeed witnesses an increased interest in applications of entropy-based tests in this direction. However, those tests are typically based
[...] Read more.
The information-theoretic concept of transfer entropy is an ideal measure for detecting conditional independence, or Granger causality, in a time series setting. The recent literature indeed witnesses an increased interest in applications of entropy-based tests in this direction. However, those tests are typically based on nonparametric entropy estimates for which the development of formal asymptotic theory turns out to be challenging. In this paper, we provide numerical comparisons for simulation-based tests to gain some insight into the statistical behavior of nonparametric transfer entropy-based tests. In particular, surrogate algorithms and smoothed bootstrap procedures are described and compared. We conclude the paper with a financial application to the detection of spillover effects in the global equity market. Full article
(This article belongs to the Special Issue Entropic Applications in Economics and Finance)
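For intuition about the statistic itself, a plug-in estimate of transfer entropy for discretized series can be written straight from the definition TE(X→Y) = Σ p(y_{t+1}, y_t, x_t) ln[ p(y_{t+1} | y_t, x_t) / p(y_{t+1} | y_t) ]. The binary toy system and one-step histories below are illustrative choices; the paper's tests rely on more refined nonparametric estimators plus surrogate and bootstrap resampling to obtain p-values.

```python
import numpy as np
from collections import Counter

def transfer_entropy(x, y):
    """Plug-in transfer entropy TE(X -> Y) in nats, one-step histories."""
    triples = Counter(zip(y[1:], y[:-1], x[:-1]))   # (y_{t+1}, y_t, x_t)
    pairs_yx = Counter(zip(y[:-1], x[:-1]))
    pairs_yy = Counter(zip(y[1:], y[:-1]))
    singles = Counter(y[:-1])
    n = len(y) - 1
    te = 0.0
    for (y1, y0, x0), c in triples.items():
        p_full = c / pairs_yx[(y0, x0)]             # p(y1 | y0, x0)
        p_self = pairs_yy[(y1, y0)] / singles[y0]   # p(y1 | y0)
        te += (c / n) * np.log(p_full / p_self)
    return te

# Coupled binary toy system: y copies x with a one-step delay, 10% flips.
rng = np.random.default_rng(4)
x = rng.integers(0, 2, 20_000)
flips = rng.random(20_000) < 0.1
y = np.empty_like(x); y[0] = 0
y[1:] = np.where(flips[1:], 1 - x[:-1], x[:-1])
print("TE x->y:", transfer_entropy(x, y))   # clearly positive
print("TE y->x:", transfer_entropy(y, x))   # near zero (x is i.i.d.)
```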
Open AccessArticle Objective Weights Based on Ordered Fuzzy Numbers for Fuzzy Multiple Criteria Decision-Making Methods
Entropy 2017, 19(7), 373; doi:10.3390/e19070373
Received: 7 June 2017 / Revised: 16 July 2017 / Accepted: 20 July 2017 / Published: 21 July 2017
PDF Full-text (1403 KB) | HTML Full-text | XML Full-text
Abstract
Fuzzy multiple criteria decision-making (FMCDM) methods are techniques of finding the trade-off option out of all feasible alternatives that are characterized by multiple criteria and where data cannot be measured precisely, but can be represented, for instance, by ordered fuzzy numbers (OFNs). One
[...] Read more.
Fuzzy multiple criteria decision-making (FMCDM) methods are techniques of finding the trade-off option out of all feasible alternatives that are characterized by multiple criteria and where data cannot be measured precisely, but can be represented, for instance, by ordered fuzzy numbers (OFNs). One of the main steps in FMCDM methods consists of finding the appropriate criteria weights. A method based on the concept of Shannon entropy is one of many techniques for the determination of criteria weights when obtaining them from the decision-maker is not possible. The goal of the paper is to extend the notion of Shannon entropy to fuzzy data represented by OFNs. The proposed approach allows criteria weights to be obtained as OFNs, which are normalized and sum to 1. Full article
(This article belongs to the Section Information Theory)
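For crisp (non-fuzzy) data, the Shannon entropy weighting that the paper generalizes works as follows: normalize each criterion column into a probability vector, compute its entropy, and give more weight to the criteria with lower entropy, i.e., those that discriminate more strongly between alternatives. The sketch below is this standard entropy weight method; the paper's contribution, replacing the real-valued entries with OFNs, is not reproduced.

```python
import numpy as np

def entropy_weights(X):
    """Objective criteria weights via Shannon entropy.
    X: alternatives-by-criteria matrix with positive entries."""
    P = X / X.sum(axis=0)                          # column-wise normalization
    m = X.shape[0]
    E = -(P * np.log(P)).sum(axis=0) / np.log(m)   # entropies scaled to [0, 1]
    d = 1.0 - E                                    # degree of divergence
    return d / d.sum()                             # weights sum to 1

# 4 alternatives, 3 criteria; criterion 2 barely separates the
# alternatives, so it should receive the smallest weight.
X = np.array([[7.0, 5.0, 0.30],
              [9.0, 5.1, 0.80],
              [6.0, 5.0, 0.60],
              [8.0, 4.9, 0.10]])
print(entropy_weights(X))
```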
Open AccessArticle Comparative Analysis of Jüttner’s Calculation of the Energy of a Relativistic Ideal Gas and Implications for Accelerator Physics and Cosmology
Entropy 2017, 19(7), 374; doi:10.3390/e19070374
Received: 5 July 2017 / Revised: 20 July 2017 / Accepted: 21 July 2017 / Published: 22 July 2017
PDF Full-text (2462 KB) | HTML Full-text | XML Full-text
Abstract
Jüttner used the conventional theory of relativistic statistical mechanics to calculate the energy of a relativistic ideal gas in 1911. An alternative derivation of the energy of a relativistic ideal gas was published by Horwitz, Schieve and Piron in 1981 within the context
[...] Read more.
Jüttner used the conventional theory of relativistic statistical mechanics to calculate the energy of a relativistic ideal gas in 1911. An alternative derivation of the energy of a relativistic ideal gas was published by Horwitz, Schieve and Piron in 1981 within the context of parametrized relativistic statistical mechanics. The resulting energy in the ultrarelativistic regime differs from Jüttner’s result. We review the derivations of energy and identify physical regimes for testing the validity of the two theories in accelerator physics and cosmology. Full article
(This article belongs to the Special Issue Advances in Relativistic Statistical Mechanics)
Open AccessArticle A Novel Numerical Approach for a Nonlinear Fractional Dynamical Model of Interpersonal and Romantic Relationships
Entropy 2017, 19(7), 375; doi:10.3390/e19070375
Received: 28 May 2017 / Revised: 4 July 2017 / Accepted: 19 July 2017 / Published: 22 July 2017
Cited by 4 | PDF Full-text (1978 KB) | HTML Full-text | XML Full-text
Abstract
In this paper, we propose a new numerical algorithm, namely q-homotopy analysis Sumudu transform method (q-HASTM), to obtain the approximate solution for the nonlinear fractional dynamical model of interpersonal and romantic relationships. The suggested algorithm examines the dynamics of love
[...] Read more.
In this paper, we propose a new numerical algorithm, namely the q-homotopy analysis Sumudu transform method (q-HASTM), to obtain the approximate solution for the nonlinear fractional dynamical model of interpersonal and romantic relationships. The suggested algorithm examines the dynamics of love affairs between couples. The q-HASTM is a creative combination of the Sumudu transform technique, the q-homotopy analysis method and homotopy polynomials that makes the calculation very easy. To compare the results obtained by using q-HASTM, we solve the same nonlinear problem by Adomian’s decomposition method (ADM). The convergence of the q-HASTM series solution for the model is adapted and controlled by the auxiliary parameter ℏ and the asymptotic parameter n. The numerical results are demonstrated graphically and in tabular form. The results obtained by employing the proposed scheme reveal that the approach is very accurate, effective, flexible, simple to apply and computationally efficient. Full article
(This article belongs to the Special Issue Complex Systems and Fractional Dynamics)
Open AccessArticle Investigation of Oriented Magnetic Field Effects on Entropy Generation in an Inclined Channel Filled with Ferrofluids
Entropy 2017, 19(7), 377; doi:10.3390/e19070377
Received: 13 June 2017 / Revised: 10 July 2017 / Accepted: 18 July 2017 / Published: 23 July 2017
PDF Full-text (363 KB) | HTML Full-text | XML Full-text
Abstract
Dispersion of super-paramagnetic nanoparticles in nonmagnetic carrier fluids, known as ferrofluids, offers the advantages of tunable thermo-physical properties and eliminates the need for moving parts to induce flow. This study investigates ferrofluid flow characteristics in an inclined channel under an inclined magnetic field and
[...] Read more.
Dispersion of super-paramagnetic nanoparticles in nonmagnetic carrier fluids, known as ferrofluids, offers the advantages of tunable thermo-physical properties and eliminates the need for moving parts to induce flow. This study investigates ferrofluid flow characteristics in an inclined channel under an inclined magnetic field and a constant pressure gradient. The ferrofluid considered in this work is composed of Cu particles as the nanoparticles and water as the base fluid. The governing differential equations, including viscous dissipation, are non-dimensionalised and discretized with the Generalized Differential Quadrature Method. The resulting algebraic set of equations is solved via the Newton–Raphson method. This work contributes to the literature by examining the effects of the magnetic field angle and of the channel inclination separately on the entropy generation of the ferrofluid-filled inclined channel system; entropy generation minimization is implemented in order to determine the best design parameter values. Furthermore, the effects of the magnetic field, the inclination angle of the channel and the volume fraction of nanoparticles on the velocity and temperature profiles are examined and represented by figures to give a thorough understanding of the system behavior. Full article
(This article belongs to the Special Issue Entropy Generation in Nanofluid Flows)
Open AccessFeature PaperArticle Cognition and Cooperation in Interfered Multiple Access Channels
Entropy 2017, 19(7), 378; doi:10.3390/e19070378
Received: 31 May 2017 / Revised: 17 July 2017 / Accepted: 20 July 2017 / Published: 24 July 2017
PDF Full-text (509 KB) | HTML Full-text | XML Full-text
Abstract
In this work, we investigate a three-user cognitive communication network where a primary two-user multiple access channel suffers interference from a secondary point-to-point channel, sharing the same medium. While the point-to-point channel transmitter—transmitter 3—causes an interference at the primary multiple access channel receiver,
[...] Read more.
In this work, we investigate a three-user cognitive communication network where a primary two-user multiple access channel suffers interference from a secondary point-to-point channel, sharing the same medium. While the point-to-point channel transmitter—transmitter 3—causes an interference at the primary multiple access channel receiver, we assume that the primary channel transmitters—transmitters 1 and 2—do not cause any interference at the point-to-point receiver. It is assumed that one of the multiple access channel transmitters has cognitive capabilities and cribs causally from the other multiple access channel transmitter. Furthermore, we assume that the cognitive transmitter knows the message of transmitter 3 in a non-causal manner, thus introducing the three-user multiple access cognitive Z-interference channel. We obtain inner and outer bounds on the capacity region of this channel for both causal and strictly causal cribbing cognitive encoders. We further investigate different variations and aspects of the channel, referring to some previously studied cases. Attempting to better characterize the capacity region, we look at the vertex points of the capacity region where each one of the transmitters tries to achieve its maximal rate. Moreover, we find the capacity region of a special case of a certain kind of more-capable multiple access cognitive Z-interference channels. In addition, we study the case of full unidirectional cooperation between the two multiple access channel encoders. Finally, since direct cribbing allows us full cognition in the case of continuous input alphabets, we study the case of partial cribbing, i.e., when the cribbing is performed via a deterministic function. Full article
(This article belongs to the Special Issue Network Information Theory)
Open AccessArticle An Application of Pontryagin’s Principle to Brownian Particle Engineered Equilibration
Entropy 2017, 19(7), 379; doi:10.3390/e19070379
Received: 3 July 2017 / Revised: 19 July 2017 / Accepted: 20 July 2017 / Published: 24 July 2017
PDF Full-text (550 KB) | HTML Full-text | XML Full-text
Abstract
We present a stylized model of controlled equilibration of a small system in a fluctuating environment. We derive the optimal control equations steering in finite-time the system between two equilibrium states. The corresponding thermodynamic transition is optimal in the sense that it occurs
[...] Read more.
We present a stylized model of controlled equilibration of a small system in a fluctuating environment. We derive the optimal control equations steering the system in finite time between two equilibrium states. The corresponding thermodynamic transition is optimal in the sense that it occurs at minimum entropy if the set of admissible controls is restricted by certain bounds on the time derivatives of the protocols. We apply our equations to the engineered equilibration of an optical trap considered in a recent proof-of-principle experiment. We also analyze an elementary model of nucleation previously considered by Landauer to discuss the thermodynamic cost of one bit of information erasure. We expect our model to be a useful benchmark for experiment design, as it exhibits the same integrability properties as well-known models of optimal mass transport by a compressible velocity field. Full article
(This article belongs to the Special Issue Thermodynamics and Statistical Mechanics of Small Systems)
Open AccessArticle Pretreatment and Wavelength Selection Method for Near-Infrared Spectra Signal Based on Improved CEEMDAN Energy Entropy and Permutation Entropy
Entropy 2017, 19(7), 380; doi:10.3390/e19070380
Received: 26 June 2017 / Revised: 14 July 2017 / Accepted: 22 July 2017 / Published: 24 July 2017
Cited by 1 | PDF Full-text (2085 KB) | HTML Full-text | XML Full-text
Abstract
The noise of near-infrared spectra and spectral information redundancy can affect the accuracy of calibration and prediction models in near-infrared analytical technology. To address this problem, the improved Complete Ensemble Empirical Mode Decomposition with Adaptive Noise (CEEMDAN) and permutation entropy (PE) were used
[...] Read more.
The noise of near-infrared spectra and spectral information redundancy can affect the accuracy of calibration and prediction models in near-infrared analytical technology. To address this problem, the improved Complete Ensemble Empirical Mode Decomposition with Adaptive Noise (CEEMDAN) and permutation entropy (PE) were used to propose a new method for the pretreatment and wavelength selection of near-infrared spectra signals. The near-infrared spectra of a glucose solution were used as the research object: the improved CEEMDAN energy entropy was used to reconstruct the spectral data to remove noise, and useful wavelengths were selected based on PE after spectral segmentation. Firstly, the intrinsic mode functions of the original spectra were obtained by the improved CEEMDAN algorithm. The useful signal modes and noisy signal modes were then identified by the energy entropy, and the reconstructed spectral signal is the sum of the useful signal modes. Finally, the reconstructed spectra were segmented, and the wavelengths with abundant glucose information were selected based on PE. To evaluate the performance of the proposed method, support vector regression and partial least squares regression were used to build calibration models using the wavelengths selected by the new method, mutual information, the successive projection algorithm, principal component analysis, and the full spectral data. The results of the models were evaluated by the correlation coefficient and the root mean square error of prediction. The experimental results showed that the improved CEEMDAN energy entropy can effectively reconstruct the near-infrared spectral signal and that PE can effectively address wavelength selection. Therefore, the proposed method can improve the precision of spectral analysis and the stability of models for near-infrared spectral analysis. Full article
(This article belongs to the Special Issue Permutation Entropy & Its Interdisciplinary Applications)
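The PE criterion here is the standard Bandt–Pompe permutation entropy: the Shannon entropy of the distribution of ordinal patterns formed by consecutive (delayed) samples. A minimal sketch follows; the order and delay values are illustrative, and the segmentation and selection thresholds used in the paper are omitted.

```python
import numpy as np
from collections import Counter
from math import factorial

def permutation_entropy(x, order=3, delay=1, normalize=True):
    """Bandt-Pompe permutation entropy of a 1-D signal (nats, or
    normalized to [0, 1] by ln(order!))."""
    n = len(x) - (order - 1) * delay
    patterns = Counter(
        tuple(np.argsort(x[i:i + order * delay:delay])) for i in range(n)
    )
    p = np.array(list(patterns.values()), dtype=float) / n
    pe = float(-np.sum(p * np.log(p)))
    return pe / np.log(factorial(order)) if normalize else pe

rng = np.random.default_rng(5)
t = np.linspace(0, 8 * np.pi, 2000)
print(permutation_entropy(np.sin(t)))                  # low: regular signal
print(permutation_entropy(rng.standard_normal(2000)))  # near 1: irregular
```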
Review

Jump to: Research, Other

Open AccessReview Photons, Bits and Entropy: From Planck to Shannon at the Roots of the Information Age
Entropy 2017, 19(7), 341; doi:10.3390/e19070341
Received: 26 May 2017 / Revised: 3 July 2017 / Accepted: 4 July 2017 / Published: 8 July 2017
PDF Full-text (332 KB) | HTML Full-text | XML Full-text
Abstract
The present age, which can be called the Information Age, has a core technology constituted by bits transported by photons. Both concepts, bit and photon, originated in the past century: the concept of photon was introduced by Planck in 1900 when he advanced
[...] Read more.
The present age, which can be called the Information Age, has a core technology constituted by bits transported by photons. Both concepts, bit and photon, originated in the past century: the concept of photon was introduced by Planck in 1900 when he advanced the solution of the blackbody spectrum, and bit is a term first used by Shannon in 1948 when he introduced the theorems that founded information theory. The connection between Planck and Shannon is not immediately apparent; nor is it obvious that they derived their basic results from the concept of entropy. Examination of other important scientists can shed light on Planck’s and Shannon’s work in these respects. Darwin and Fowler, who in 1922 published a couple of papers where they reinterpreted Planck’s results, pointed out the centrality of the partition function to statistical mechanics and thermodynamics. The same roots have been more recently reconsidered by Jaynes, who extended the considerations advanced by Darwin and Fowler to information theory. This paper investigates how the concept of entropy was propagated in the past century in order to show how a simple intuition, born in 1824 during the first industrial revolution in the mind of the young French engineer Carnot, is literally still enlightening the fourth industrial revolution and probably will continue to do so in the coming century. Full article
Open AccessReview Program for the Special State Theory of Quantum Measurement
Entropy 2017, 19(7), 343; doi:10.3390/e19070343
Received: 1 May 2017 / Revised: 26 June 2017 / Accepted: 5 July 2017 / Published: 8 July 2017
PDF Full-text (3646 KB) | HTML Full-text | XML Full-text
Abstract
Establishing (or falsifying) the special state theory of quantum measurement is a program with both theoretical and experimental directions. The special state theory has only pure unitary time evolution, like the many worlds interpretation, but only has one world. How this can be
[...] Read more.
Establishing (or falsifying) the special state theory of quantum measurement is a program with both theoretical and experimental directions. The special state theory has only pure unitary time evolution, like the many worlds interpretation, but has only one world. How this can be accomplished requires both “special states” and a significant modification of the usual assumptions about the arrow of time. All this is reviewed below. Experimentally, proposals for tests already exist, and the problems are, first, the practical one of performing the experiment and, second, that of suggesting other experiments. On the theoretical level, many problems remain, among them the impact of particle statistics on the availability of special states, finding a way to estimate their abundance, and the possibility of using a computer for this purpose. Regarding the arrow of time, there is an early proposal of J. A. Wheeler that may be implementable, with implications for cosmology. Full article
(This article belongs to the Special Issue Foundations of Quantum Mechanics)
Other

Jump to: Research, Review

Open AccessBrief Report Test of the Pauli Exclusion Principle in the VIP-2 Underground Experiment
Entropy 2017, 19(7), 300; doi:10.3390/e19070300
Received: 29 April 2017 / Revised: 6 June 2017 / Accepted: 22 June 2017 / Published: 24 June 2017
Cited by 1 | PDF Full-text (1292 KB) | HTML Full-text | XML Full-text
Abstract
The validity of the Pauli exclusion principle—a building block of Quantum Mechanics—is tested for electrons. The VIP (violation of Pauli exclusion principle) and its follow-up VIP-2 experiments at the Laboratori Nazionali del Gran Sasso search for X-rays from copper atomic transitions that are
[...] Read more.
The validity of the Pauli exclusion principle—a building block of Quantum Mechanics—is tested for electrons. The VIP (violation of Pauli exclusion principle) experiment and its follow-up VIP-2 experiment at the Laboratori Nazionali del Gran Sasso search for X-rays from copper atomic transitions that are prohibited by the Pauli exclusion principle. The candidate events—if they exist—originate from the transition of a 2p electron to the ground state, which is already occupied by two electrons. The present limit on the probability of Pauli exclusion principle violation for electrons, set by the VIP experiment, is 4.7 × 10⁻²⁹. We report a first result from the VIP-2 experiment that improves on the VIP limit and consolidates the final goal of achieving a two-orders-of-magnitude gain in the long run. Full article
(This article belongs to the Special Issue Quantum Information and Foundations)
Open AccessPerspective The History and Perspectives of Efficiency at Maximum Power of the Carnot Engine
Entropy 2017, 19(7), 369; doi:10.3390/e19070369
Received: 30 March 2017 / Revised: 5 July 2017 / Accepted: 14 July 2017 / Published: 19 July 2017
PDF Full-text (1312 KB) | HTML Full-text | XML Full-text
Abstract
Finite Time Thermodynamics is generally associated with the Curzon–Ahlborn approach to the Carnot cycle. Recently, previous publications on the subject were discovered, which prove that the history of Finite Time Thermodynamics started more than sixty years before even the work of Chambadal and
[...] Read more.
Finite Time Thermodynamics is generally associated with the Curzon–Ahlborn approach to the Carnot cycle. Recently, previous publications on the subject were discovered, which prove that the history of Finite Time Thermodynamics started more than sixty years before even the work of Chambadal and Novikov (1957). The paper proposes a careful examination of the similarities and differences between these pioneering works and the consequences they had on the works that followed. The modelling of the Carnot engine was carried out in three steps, namely (1) modelling with time durations of the isothermal processes, as done by Curzon and Ahlborn; (2) modelling at a steady-state operation regime for which the time does not appear explicitly; and (3) modelling of transient conditions which requires the time to appear explicitly. Whatever the method of modelling used, the subsequent optimization appears to be related to specific physical dimensions. The main goal of the methodology is to choose the objective function, which here is the power, and to define the associated constraints. We propose a specific approach, focusing on the main functions that respond to engineering requirements. The study of the Carnot engine illustrates the synthesis carried out and proves that the primary interest for an engineer is mainly connected to what we called Finite (physical) Dimensions Optimal Thermodynamics, including time in the case of transient modelling. Full article
(This article belongs to the Special Issue Carnot Cycle and Heat Engine Fundamentals and Applications)