Entropy doi: 10.3390/e19070374

Authors: John Fanchi

Jüttner used the conventional theory of relativistic statistical mechanics to calculate the energy of a relativistic ideal gas in 1911. An alternative derivation of the energy of a relativistic ideal gas was published by Horwitz, Schieve and Piron in 1981 within the context of parametrized relativistic statistical mechanics. The resulting energy in the ultrarelativistic regime differs from Jüttner’s result. We review the derivations of energy and identify physical regimes for testing the validity of the two theories in accelerator physics and cosmology.

Entropy doi: 10.3390/e19070375

Authors: Jagdev Singh Devendra Kumar Maysaa Al Qurashi Dumitru Baleanu

In this paper, we propose a new numerical algorithm, namely the q-homotopy analysis Sumudu transform method (q-HASTM), to obtain an approximate solution for the nonlinear fractional dynamical model of interpersonal and romantic relationships. The suggested algorithm examines the dynamics of love affairs between couples. The q-HASTM is a creative combination of the Sumudu transform technique, the q-homotopy analysis method and homotopy polynomials that makes the calculation very easy. To benchmark the results obtained using q-HASTM, we solve the same nonlinear problem by Adomian’s decomposition method (ADM). The convergence of the q-HASTM series solution for the model is adjusted and controlled by an auxiliary parameter and the asymptotic parameter n. The numerical results are presented graphically and in tabular form. The results obtained with the proposed scheme show that the approach is accurate, effective, flexible, simple to apply and computationally efficient.

Entropy doi: 10.3390/e19070372

Authors: Cees Diks Hao Fang

The information-theoretic concept of transfer entropy is an ideal measure for detecting conditional independence, or Granger causality, in a time series setting. The recent literature indeed shows an increased interest in applications of entropy-based tests in this direction. However, those tests are typically based on nonparametric entropy estimates, for which the development of formal asymptotic theory turns out to be challenging. In this paper, we provide numerical comparisons of simulation-based tests to gain insight into the statistical behavior of nonparametric transfer entropy-based tests. In particular, surrogate algorithms and smoothed bootstrap procedures are described and compared. We conclude with a financial application to the detection of spillover effects in the global equity market.
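As a minimal illustration of the quantity under test, transfer entropy with one lag can be estimated with a simple plug-in (histogram) estimator. This is only a sketch: the paper's tests rely on smoothed nonparametric estimates plus surrogate/bootstrap procedures, not this naive estimator.

```python
import numpy as np

def transfer_entropy(x, y, bins=2):
    """Plug-in (histogram) estimate of transfer entropy from x to y with one
    lag: TE = sum over (y1, y0, x0) of p(y1, y0, x0) *
    log[ p(y1 | y0, x0) / p(y1 | y0) ], in nats."""
    x0, y0, y1 = x[:-1], y[:-1], y[1:]
    joint, _ = np.histogramdd(np.column_stack([y1, y0, x0]), bins=bins)
    p = joint / joint.sum()                 # joint pmf over (y1, y0, x0)
    p_y1y0 = p.sum(axis=2)                  # p(y1, y0)
    p_y0x0 = p.sum(axis=0)                  # p(y0, x0)
    p_y0 = p.sum(axis=(0, 2))               # p(y0)
    te = 0.0
    for i, j, k in np.ndindex(p.shape):
        if p[i, j, k] > 0:
            te += p[i, j, k] * np.log(
                p[i, j, k] * p_y0[j] / (p_y0x0[j, k] * p_y1y0[i, j]))
    return te
```

For a binary series with deterministic lag-one coupling y_{t+1} = x_t, the estimate approaches H(y_{t+1} | y_t) ≈ ln 2, while for independent series it stays near zero up to plug-in bias.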

Entropy doi: 10.3390/e19070373

Authors: Dariusz Kacprzak

Fuzzy multiple criteria decision-making (FMCDM) methods are techniques for finding the trade-off option among all feasible alternatives that are characterized by multiple criteria and whose data cannot be measured precisely, but can be represented, for instance, by ordered fuzzy numbers (OFNs). One of the main steps in FMCDM methods consists in finding appropriate criteria weights. A method based on the concept of Shannon entropy is one of many techniques for determining criteria weights when obtaining them directly from the decision-maker is not possible. The goal of this paper is to extend the notion of Shannon entropy to fuzzy data represented by OFNs. The proposed approach yields criteria weights as OFNs that are normalized and sum to 1.
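For context, the classical (crisp) Shannon entropy weighting that the paper generalizes to OFNs can be sketched as follows; this is the standard textbook formulation, not the paper's fuzzy extension.

```python
import numpy as np

def entropy_weights(X):
    """Criteria weights from the classical (crisp) Shannon entropy method:
    the more dispersed a criterion's values across alternatives, the lower
    its entropy and the larger its weight. X is an m x n decision matrix
    (m alternatives, n criteria) with strictly positive entries."""
    X = np.asarray(X, dtype=float)
    m, _ = X.shape
    P = X / X.sum(axis=0)                            # normalize each criterion column
    E = -np.sum(P * np.log(P), axis=0) / np.log(m)   # entropies, scaled into [0, 1]
    d = 1.0 - E                                      # degree of diversification
    return d / d.sum()                               # weights are normalized, sum to 1
```

A criterion whose values are identical for every alternative has maximum entropy and therefore carries zero weight, which matches the intuition that it cannot discriminate between alternatives.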

Entropy doi: 10.3390/e19070365

Authors: Xiong Luo Jing Deng Weiping Wang Jenq-Haur Wang Wenbing Zhao

Recently, inspired by correntropy, the kernel risk-sensitive loss (KRSL) has emerged as a novel nonlinear similarity measure defined in kernel space, which achieves better computing performance. After applying the KRSL to adaptive filtering, the corresponding minimum kernel risk-sensitive loss (MKRSL) algorithm was developed. However, MKRSL, as a traditional kernel adaptive filter (KAF) method, generates a growing radial basis function (RBF) network. In response to that limitation, this article uses an online vector quantization (VQ) technique to propose a novel KAF algorithm, named quantized MKRSL (QMKRSL), that curbs the growth of the RBF network structure. Compared with other quantized methods, e.g., quantized kernel least mean square (QKLMS) and quantized kernel maximum correntropy (QKMC), the efficient performance surface makes QMKRSL converge faster and filter more accurately, while maintaining robustness to outliers. Moreover, considering that QMKRSL with the traditional gradient descent method may fail to make full use of the hidden information between the input and output spaces, we also propose an intensified QMKRSL using a bilateral gradient technique, named QMKRSL_BG, in an effort to further improve filtering accuracy. Short-term chaotic time-series prediction experiments demonstrate the satisfactory performance of our algorithms.

Entropy doi: 10.3390/e19070358

Authors: Yuyu Yin Yueshen Xu Wenting Xu Min Gao Lifeng Yu Yujie Pei

Mobile service selection is an important but challenging problem in service and mobile computing, and quality of service (QoS) prediction is a critical step in service selection in 5G network environments. Traditional methods, such as collaborative filtering (CF), suffer from a series of defects, such as failing to handle data sparsity. In mobile network environments, abnormal QoS data are likely to result in inferior prediction accuracy. Unfortunately, these problems have not attracted enough attention, especially in mixed mobile network environments with different network configurations, generations, or types. An ensemble learning method for predicting missing QoS in 5G network environments is proposed in this paper. It rests on two key principles: one is a newly proposed similarity computation method for identifying similar neighbors; the other is an extended ensemble learning model for discovering and filtering fake neighbors from the preliminary neighbor set. Moreover, three prediction models are also proposed: two individual models and one combination model. They exploit similar user neighbors and similar service neighbors, respectively. Experimental results on two real-world datasets show that our approaches produce superior prediction accuracy.

Entropy doi: 10.3390/e19070371

Authors: Tyll Krueger Janusz Szwabiński Tomasz Weron

Understanding and quantifying polarization in social systems is important for many reasons. It could, for instance, help to avoid segregation and conflict in society, or to control polarized debates and predict their outcomes. In this paper, we present a version of the q-voter model of opinion dynamics with two types of responses to social influence: conformity (as in the original q-voter model) and anticonformity. We place the model on a social network with a double-clique topology in order to check how the interplay between those responses impacts the opinion dynamics in a population divided into two antagonistic segments. The model is analyzed analytically, numerically and by means of Monte Carlo simulations. Our results show that the system undergoes two bifurcations as the number of cross-links between cliques changes. Below the first critical point, consensus in the entire system is possible. Thus, two antagonistic cliques may share the same opinion only if they are loosely connected. Above that point, the system ends up in a polarized state.

Entropy doi: 10.3390/e19070369

Authors: Michel Feidt

Finite Time Thermodynamics is generally associated with the Curzon–Ahlborn approach to the Carnot cycle. Recently, previous publications on the subject were discovered, which prove that the history of Finite Time Thermodynamics started more than sixty years before even the work of Chambadal and Novikov (1957). The paper proposes a careful examination of the similarities and differences between these pioneering works and the consequences they had on the works that followed. The modelling of the Carnot engine was carried out in three steps, namely (1) modelling with time durations of the isothermal processes, as done by Curzon and Ahlborn; (2) modelling at a steady-state operation regime for which the time does not appear explicitly; and (3) modelling of transient conditions which requires the time to appear explicitly. Whatever the method of modelling used, the subsequent optimization appears to be related to specific physical dimensions. The main goal of the methodology is to choose the objective function, which here is the power, and to define the associated constraints. We propose a specific approach, focusing on the main functions that respond to engineering requirements. The study of the Carnot engine illustrates the synthesis carried out and proves that the primary interest for an engineer is mainly connected to what we called Finite (physical) Dimensions Optimal Thermodynamics, including time in the case of transient modelling.

Entropy doi: 10.3390/e19070370

Authors: Shujun Liu Ting Yang Hongqing Liu

This paper aims to find a suitable decision rule for a binary composite hypothesis-testing problem with a partial or coarse prior distribution. To alleviate the negative impact of the information uncertainty, a constraint is considered that the maximum conditional risk cannot be greater than a predefined value. Therefore, the objective of this paper becomes to find the optimal decision rule to minimize the Bayes risk under the constraint. By applying the Lagrange duality, the constrained optimization problem is transformed to an unconstrained optimization problem. In doing so, the restricted Bayesian decision rule is obtained as a classical Bayesian decision rule corresponding to a modified prior distribution. Based on this transformation, the optimal restricted Bayesian decision rule is analyzed and the corresponding algorithm is developed. Furthermore, the relation between the Bayes risk and the predefined value of the constraint is also discussed. The Bayes risk obtained via the restricted Bayesian decision rule is a strictly decreasing and convex function of the constraint on the maximum conditional risk. Finally, the numerical results including a detection example are presented and agree with the theoretical results.

Entropy doi: 10.3390/e19070368

Authors: L.G. Margolin

In a classic paper, Morduchow and Libby use an analytic solution for the profile of a Navier–Stokes shock to show that the equilibrium thermodynamic entropy has a maximum inside the shock. There is no general nonequilibrium thermodynamic formulation of entropy; the extension of equilibrium theory to nonequilibrium processes is usually made through the assumption of local thermodynamic equilibrium (LTE). However, gas kinetic theory provides a perfectly general formulation of a nonequilibrium entropy in terms of the probability distribution function (PDF) solutions of the Boltzmann equation. In this paper I will evaluate the Boltzmann entropy for the PDF that underlies the Navier–Stokes equations and also for the PDF of the Mott–Smith shock solution. I will show that both monotonically increase in the shock. I will propose a new nonequilibrium thermodynamic entropy and show that it is also monotone and closely approximates the Boltzmann entropy.

Entropy doi: 10.3390/e19070367

Authors: Wei Zhang Christof Schütte

Many interesting rare events in molecular systems, like ligand association, protein folding or conformational changes, occur on timescales that often are not accessible by direct numerical simulation. Therefore, rare event approximation approaches like interface sampling, Markov state model building, or advanced reaction coordinate-based free energy estimation have attracted huge attention recently. In this article we analyze the reliability of such approaches. How precise is an estimate of long relaxation timescales of molecular systems resulting from various forms of rare event approximation methods? Our results give a theoretical answer to this question by relating it with the transfer operator approach to molecular dynamics. By doing so we also allow for understanding deep connections between the different approaches.

Entropy doi: 10.3390/e19070366

Authors: Seyyed Azimi Osvaldo Simeone Ravi Tandon

The storage of frequently requested multimedia content at small-cell base stations (BSs) can reduce the load of macro-BSs without relying on high-speed backhaul links. In this work, the optimal operation of a system consisting of a cache-aided small-cell BS and a macro-BS is investigated for both offline and online caching settings. In particular, a binary fading one-sided interference channel is considered in which the small-cell BS, whose transmission is interfered by the macro-BS, has a limited-capacity cache. The delivery time per bit (DTB) is adopted as a measure of the coding latency, that is, the duration of the transmission block, required for reliable delivery. For offline caching, assuming a static set of popular contents, the minimum achievable DTB is characterized through information-theoretic achievability and converse arguments as a function of the cache capacity and of the capacity of the backhaul link connecting cloud and small-cell BS. For online caching, under a time-varying set of popular contents, the long-term (average) DTB is evaluated for both proactive and reactive caching policies. Furthermore, a converse argument is developed to characterize the minimum achievable long-term DTB for online caching in terms of the minimum achievable DTB for offline caching. The performance of both online and offline caching is finally compared using numerical results.

Entropy doi: 10.3390/e19070327

Authors: Weiqiang Yang Lixin Xu Hang Li Yabo Wu Jianbo Lu

The coupling between dark energy and dark matter provides a possible approach to mitigating the coincidence problem of the cosmological standard model. In this paper, we assumed that the interacting term was related to the Hubble parameter, the energy density of dark energy, and the equation of state of dark energy. The interaction rate between dark energy and dark matter was a constant parameter, namely, Q = 3Hξ(1 + w_x)ρ_x. Based on the Markov chain Monte Carlo method, we performed a global fit of the interacting dark energy model to Planck 2015 cosmic microwave background anisotropy and observational Hubble data. We found that the observational data sets slightly favored a small interaction rate between dark energy and dark matter; however, there was no obvious evidence of interaction at the 1σ level.

Entropy doi: 10.3390/e19070364

Authors: Silas Fong Vincent Tan

This paper investigates polar codes for the additive white Gaussian noise (AWGN) channel. The scaling exponent μ of polar codes for a memoryless channel q_{Y|X} with capacity I(q_{Y|X}) characterizes the closest gap between the capacity and non-asymptotic achievable rates as follows: for a fixed ε ∈ (0, 1), the gap between the capacity I(q_{Y|X}) and the maximum non-asymptotic rate R_n^* achieved by a length-n polar code with average error probability ε scales as n^{-1/μ}, i.e., I(q_{Y|X}) - R_n^* = Θ(n^{-1/μ}). It is well known that the scaling exponent μ for any binary-input memoryless channel (BMC) with I(q_{Y|X}) ∈ (0, 1) is bounded above by 4.714. Our main result shows that 4.714 remains a valid upper bound on the scaling exponent for the AWGN channel. Our proof technique involves the following two ideas: (i) the capacity of the AWGN channel can be achieved within a gap of O(n^{-1/μ} log n) by using an input alphabet consisting of n constellations and restricting the input distribution to be uniform; (ii) the capacity of a multiple access channel (MAC) with an input alphabet consisting of n constellations can be achieved within a gap of O(n^{-1/μ} log n) by using a superposition of log n binary-input polar codes. In addition, we investigate the performance of polar codes in the moderate deviations regime, where both the gap to capacity and the error probability vanish as n grows. An explicit construction of polar codes is proposed to obey a certain tradeoff between the gap to capacity and the decay rate of the error probability for the AWGN channel.

Entropy doi: 10.3390/e19070363

Authors: Andrey Garnaev Wade Trappe

Sharing of radio spectrum between different types of wireless systems (e.g., different service providers) is the foundation for making more efficient usage of spectrum. Cognitive radio technologies have spurred the design of spectrum servers that coordinate the sharing of spectrum between different wireless systems. These servers receive information regarding the needs of each system, and then provide instructions back to each system regarding the spectrum bands they may use. This sharing of information is complicated by the fact that these systems are often in competition with each other: each system desires to use as much of the spectrum as possible to support its users, and each system could learn about and harm the bands of the other system. Three problems arise in such a spectrum-sharing setting: (1) how to maintain reliable performance for each system's shared resource (licensed spectrum); (2) whether to believe the resource requests announced by each agent; and (3) if they are not believed, how much effort should be devoted to inspecting spectrum so as to prevent possible malicious activity. Since this problem can arise for a variety of wireless systems, we present an abstract formulation in which the agents or spectrum server introduce obfuscation in the resource assignment to maintain reliability. We derive a closed-form expression for the expected damage that can arise from possible malicious activity, and using this formula we find a tradeoff between the amount of extra decoys that must be used in order to support higher communication fidelity against potential interference, and the cost of maintaining this reliability. Then, we examine a scenario where a smart adversary may also use obfuscation itself, and formulate the scenario as a signaling game, which can be solved by applying a classical iterative forward-induction algorithm. For an important particular case, the game is solved in closed form, which gives conditions for deciding whether an agent can be trusted, or whether its request should be inspected and how intensely.

Entropy doi: 10.3390/e19070360

Authors: Khaled Almgren Minkyu Kim Jeongkyu Lee

Topological data analysis is a novel approach to extracting meaningful information from high-dimensional data that is robust to noise. It is based on topology, which aims to study the geometric shape of data. To apply topological data analysis, an algorithm called mapper is adopted. The output of mapper is a simplicial complex that represents a set of connected clusters of data points. In this paper, we explore the feasibility of topological data analysis for mining social network data by addressing the problem of image popularity. We randomly crawl images from Instagram and analyze the effects of social context and image content on an image's popularity using mapper. Mapper clusters the images using each feature, and the ratio of popular images in each cluster is computed to determine the clusters with a high or low likelihood of popularity. Then, the popularity of images is predicted to evaluate the accuracy of topological data analysis. This approach is further compared with traditional clustering algorithms, including k-means and hierarchical clustering, in terms of accuracy, and the results show that topological data analysis outperforms the others. Moreover, topological data analysis provides meaningful information based on the connectivity between the clusters.

Entropy doi: 10.3390/e19070359

Authors: Omar Olvera-Guerrero Alfonso Prieto-Guerrero Gilberto Espinosa-Paredes

There are currently around 78 nuclear power plants (NPPs) in the world based on Boiling Water Reactors (BWRs). The standard parameter used to assess BWR instability issues is the linear Decay Ratio (DR). However, it is well known that BWRs are complex non-linear dynamical systems that may even exhibit chaotic dynamics, which normally preclude the use of the DR when the BWR is working at a specific operating point during an instability. In this work, a novel methodology based on an adaptive Shannon entropy estimator and on Noise-Assisted Empirical Mode Decomposition variants is presented. This methodology was developed for the real-time implementation of a stability monitor. It was applied to a set of signals from several NPP reactors (Ringhals, Sweden; Forsmark, Sweden; and Laguna Verde, Mexico) under commercial operating conditions that experienced instability events, each of a different nature.

Entropy doi: 10.3390/e19070362

Authors: Jaber Kakar Aydin Sezgin

Recent advances in the characterization of fundamental limits on interference management in wireless networks, and the discovery of new communication schemes for handling interference, have led to a better understanding of the capacity of such networks. The benefits, in terms of achievable rates, of powerful schemes handling interference, such as interference alignment, are substantial. However, the main issue behind most of these results is the assumption of perfect channel state information at the transmitters (CSIT). In the absence of channel knowledge, the performance of various interference networks collapses to what is achievable by time division multiple access (TDMA). Robust interference management techniques are promising solutions for maintaining high achievable rates at various levels of CSIT, ranging from delayed to imperfect CSIT. In this survey, we outline and study two main research perspectives on how to robustly handle interference when CSIT is imprecise, with examples of non-distributed and distributed networks, namely the broadcast and X-channel. To quantify the performance of these schemes, we use the well-known (generalized) degrees of freedom (GDoF) metric as the pre-log factor of achievable rates. These perspectives maintain the capacity benefits at levels similar to those for perfect channel knowledge. The two perspectives are: first, scheme adaptation, which explicitly accounts for the level of channel knowledge, and second, relay-aided infrastructure enlargement, which decreases the dependency on channel knowledge. The relaxation of CSIT requirements through these perspectives will ultimately lead to practical realizations of robust interference management techniques. The survey concludes with a discussion of open problems.

Entropy doi: 10.3390/e19070361

Authors: Artemy Kolchinsky Brendan Tracey

Mixture distributions arise in many parametric and non-parametric settings—for example, in Gaussian mixture models and in non-parametric estimation. It is often necessary to compute the entropy of a mixture, but, in most cases, this quantity has no closed-form expression, making some form of approximation necessary. We propose a family of estimators based on a pairwise distance function between mixture components, and show that this estimator class has many attractive properties. For many distributions of interest, the proposed estimators are efficient to compute, differentiable in the mixture parameters, and become exact when the mixture components are clustered. We prove that this family includes lower and upper bounds on the mixture entropy. The Chernoff α-divergence gives a lower bound when chosen as the distance function, with the Bhattacharyya distance providing the tightest lower bound for components that are symmetric and members of a location family. The Kullback–Leibler divergence gives an upper bound when used as the distance function. We provide closed-form expressions of these bounds for mixtures of Gaussians, and discuss their applications to the estimation of mutual information. We then demonstrate, using numerical simulations, that our bounds are significantly tighter than well-known existing bounds. This estimator class is very useful in optimization problems involving the maximization or minimization of entropy and mutual information, such as MaxEnt and rate-distortion problems.
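The pairwise-distance bounds can be illustrated for a one-dimensional Gaussian mixture, where the component entropies and the Kullback–Leibler and Bhattacharyya distances all have standard closed forms. This is a sketch of the bound family, with illustrative variable names; see the paper for the general statement and proofs.

```python
import numpy as np

def gaussian_mixture_entropy_bounds(w, mu, sigma):
    """Pairwise-distance bounds on the entropy of a 1-D Gaussian mixture:
    H_D = sum_i w_i * (H(p_i) - ln sum_j w_j * exp(-D(p_i || p_j))).
    With D = KL divergence this gives an upper bound; with D = Bhattacharyya
    distance, a lower bound. Both become exact for identical components and
    for well-separated components."""
    w, mu, sigma = (np.asarray(a, dtype=float) for a in (w, mu, sigma))
    h = 0.5 * np.log(2 * np.pi * np.e * sigma**2)      # component entropies
    dmu2 = (mu[:, None] - mu[None, :])**2
    s2i, s2j = sigma[:, None]**2, sigma[None, :]**2
    kl = 0.5 * np.log(s2j / s2i) + (s2i + dmu2) / (2 * s2j) - 0.5
    bhat = dmu2 / (4 * (s2i + s2j)) + 0.5 * np.log((s2i + s2j) / (2 * np.sqrt(s2i * s2j)))
    def bound(D):
        return np.sum(w * (h - np.log(np.exp(-D) @ w)))
    return bound(bhat), bound(kl)            # (lower bound, upper bound)
```

For two coincident components the bounds collapse to the single-Gaussian entropy; for far-apart components both approach the component entropy plus the entropy of the mixing weights.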

Entropy doi: 10.3390/e19070357

Authors: Clémentine Hauret Pierre Magain Judith Biernaux

Time is a parameter playing a central role in our most fundamental modelling of natural laws. Relativity theory shows that the comparison of times measured by different clocks depends on their relative motion and on the strength of the gravitational field in which they are embedded. In standard cosmology, the time parameter is the one measured by fundamental clocks (i.e., clocks at rest with respect to the expanding space). This proper time is assumed to flow at a constant rate throughout the whole history of the universe. We make the alternative hypothesis that the rate at which the cosmological time flows depends on the dynamical state of the universe. In thermodynamics, the arrow of time is strongly related to the second law, which states that the entropy of an isolated system will always increase with time or, at best, stay constant. Hence, we assume that the time measured by fundamental clocks is proportional to the entropy of the region of the universe that is causally connected to them. Under that simple assumption, we find it possible to build toy cosmological models that present an acceleration of their expansion without any need for dark energy while being spatially closed and finite, avoiding the need to deal with infinite values.

Entropy doi: 10.3390/e19070356

Authors: Andrea Puglisi Umberto Marini Bettolo Marconi

Many kinds of active particles, such as bacteria or active colloids, move in a thermostatted fluid by means of self-propulsion. Energy injected by such a non-equilibrium force is eventually dissipated as heat in the thermostat. Since thermal fluctuations are much faster and weaker than self-propulsion forces, they are often neglected, blurring the identification of dissipated heat in theoretical models. For the same reason, some freedom—or arbitrariness—appears when defining entropy production. Recently, three different recipes for defining heat and entropy production have been proposed for the same model, in which the role of self-propulsion is played by a Gaussian coloured noise. Here we compare these proposals and discuss their relation and physical meaning. One of them takes into account the heat exchanged with a non-equilibrium active bath: such an “active heat” satisfies the original Clausius relation and can be verified experimentally.

Entropy doi: 10.3390/e19070354

Authors: Zifei Lin Wei Xu Jiaorui Li Wantao Jia Shuang Li

Time delays in economic policy and the memory property of real economic systems are omnipresent and inevitable. In this paper, a business cycle model with fractional-order time delay, which describes the delay and memory property of economic control, is investigated. The stochastic averaging method is applied to obtain an approximate analytical solution, and numerical simulations are performed to verify the method. The effects of the fractional order, time delay, economic control and random excitation on the amplitude of the economic system are investigated. The results show that the time delay, the fractional order and the intensity of the random excitation can all magnify the amplitude and increase the volatility of the economic system.

Entropy doi: 10.3390/e19070355

Authors: Marco Dalai

We review the use of binary hypothesis testing in the derivation of the sphere-packing bound in channel coding, pointing out a key difference between the classical and the classical-quantum setting. In the classical case, two ways of using binary hypothesis testing are known, which lead to the same bound written in different analytical expressions. The first method, historically, compares the output distributions induced by the codewords with an auxiliary fixed output distribution, and naturally leads to an expression using the Rényi divergence. The second method compares the given channel with an auxiliary one and leads to an expression using the Kullback–Leibler divergence. In the classical-quantum case, due to a fundamental difference in quantum binary hypothesis testing, these two approaches lead to two different bounds, the first being the “right” one. We discuss the details of this phenomenon, which raises the question of whether auxiliary channels are used in the optimal way in the second approach, and whether recent results on the exact strong-converse exponent in classical-quantum channel coding might play a role in the considered problem.

Entropy doi: 10.3390/e19070168

Authors: Mohammed Almakki Sharadia Dey Sabyasachi Mondal Precious Sibanda

The entropy generation in unsteady three-dimensional axisymmetric magnetohydrodynamics (MHD) nanofluid flow over a non-linearly stretching sheet is investigated. The flow is subject to thermal radiation and a chemical reaction. The conservation equations are solved using the spectral quasi-linearization method. The novelty of the work is in the study of entropy generation in three-dimensional axisymmetric MHD nanofluid and the choice of the spectral quasi-linearization method as the solution method. The effects of Brownian motion and thermophoresis are also taken into account. The nanofluid particle volume fraction on the boundary is passively controlled. The results show that as the Hartmann number increases, both the Nusselt number and the Sherwood number decrease, whereas the skin friction increases. It is further shown that an increase in the thermal radiation parameter corresponds to a decrease in the Nusselt number. Moreover, entropy generation increases with respect to some physical parameters.

Entropy doi: 10.3390/e19070350

Authors: Lorenzo Caprini Luca Cerino Alessandro Sarracino Angelo Vulpiani

A simplified, but non-trivial, mechanical model—a gas of N particles of mass m in a box partitioned by n mobile adiabatic walls of mass M—interacting with two thermal baths at different temperatures, is discussed in the framework of kinetic theory. Following an approach due to Smoluchowski, from an analysis of the particle–wall collisions we derive the values of the main thermodynamic quantities for the stationary non-equilibrium states. The results are compared with extensive numerical simulations; in the limit of large n, mN/M ≫ 1 and m/M ≪ 1, we find a good approximation of Fourier's law.

Entropy doi: 10.3390/e19070351

Authors: Baogui Xin Li Liu Guisheng Hou Yuan Ma

By using a linear feedback control technique, we propose a chaos synchronization scheme for nonlinear fractional discrete dynamical systems. Then, we construct a novel 1-D fractional discrete income change system and a kind of novel 3-D fractional discrete system. By means of the stability principles of Caputo-like fractional discrete systems, we lastly design a controller to achieve chaos synchronization, and present some numerical simulations to illustrate and validate the synchronization scheme.

Entropy doi: 10.3390/e19070353

Authors: Lucas Kocia Yifei Huang Peter Love

The Gottesman–Knill theorem established that stabilizer states and Clifford operations can be efficiently simulated classically. For qudits with odd dimension three and greater, stabilizer states and Clifford operations have been found to correspond to positive discrete Wigner functions and dynamics. We present a discrete Wigner function-based simulation algorithm for odd-d qudits that has the same time and space complexity as the Aaronson–Gottesman algorithm for qubits. We show that the efficiency of both algorithms is due to harmonic evolution in the symplectic structure of discrete phase space. The differences between the Wigner function algorithm for odd-d and the Aaronson–Gottesman algorithm for qubits are likely due only to the fact that the Weyl–Heisenberg group is not in SU(d) for d = 2 and that qubits exhibit state-independent contextuality. This may provide a guide for extending the discrete Wigner function approach to qubits.

]]>Entropy doi: 10.3390/e19070347

Authors: Louis Kauffman

We give an exposition of iterant algebra, a generalization of matrix algebra that is motivated by the structure of measurement for discrete processes. We show how Clifford algebras and matrix algebras arise naturally from iterants, and we then use this point of view to discuss the Schrödinger and Dirac equations, Majorana Fermions, representations of the braid group and the framed braids in relation to the structure of the Standard Model for physics.

]]>Entropy doi: 10.3390/e19070346

Authors: Irina-Maria Dragan Alexandru Isaic-Maniu

Studies on the structure of economic systems are most frequently carried out using the methods of informational statistics. These methods, often accompanied by a broad range of indicators (Shannon entropy, Balassa coefficient, Herfindahl specialization index, Gini coefficient, Theil index, etc.) around which a wide literature has been built over time, have a major disadvantage. Their weakness is the imposition of the system condition, which requires knowledge of all of the components of the system (as absolute values or as weights). This requirement is difficult to satisfy in some situations, while in others such knowledge may be irrelevant, especially when the interest lies in structural changes in only some of the components of the economic system (whether we refer to the typology of economic activities, NACE, or to territorial units, NUTS (Nomenclature of Territorial Units for Statistics)). This article presents a procedure for characterizing the structure of a system and for comparing its evolution over time in the case of incomplete information, thus eliminating the restriction present in the classical methods. The proposed methodological alternative uses a parametric distribution with sub-unit values for the variable. The application refers to Gross Domestic Product values for five of the 28 European Union countries, those with annual values of over 1000 billion Euros (Germany, Spain, France, Italy, and the United Kingdom), for the years 2003 and 2015. A form of the Wald sequential test is applied to measure changes in the structure of this group of countries between the years compared. The results of this application validate the proposed method.
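Several of the classical indicators named above are simple functions of a vector of shares. As a rough illustration only (the share values below are hypothetical, not the paper's GDP data), Shannon entropy and the Herfindahl index of a group structure can be computed as:

```python
import math

def shannon_entropy(weights):
    """Shannon entropy (in nats) of a vector of shares summing to 1."""
    return -sum(w * math.log(w) for w in weights if w > 0)

def herfindahl(weights):
    """Herfindahl concentration index: the sum of squared shares."""
    return sum(w * w for w in weights)

# Hypothetical shares of five countries in a group total (illustrative only).
shares = [0.35, 0.25, 0.20, 0.12, 0.08]
H = shannon_entropy(shares)   # approaches log(5) as shares become even
HHI = herfindahl(shares)      # approaches 1/5 as shares become even
```

Both indices require the full share vector, which is exactly the "system condition" the paper seeks to relax.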

]]>Entropy doi: 10.3390/e19070349

Authors: Hayder Al-Hraishawi Gayan Baduge Rafael Schaefer

In this paper, a secure communication model for cognitive multi-user massive multiple-input multiple-output (MIMO) systems with underlay spectrum sharing is investigated. A secondary (cognitive) multi-user massive MIMO system is operated by using underlay spectrum sharing within a primary (licensed) multi-user massive MIMO system. A passive multi-antenna eavesdropper is assumed to be eavesdropping upon either the primary or secondary confidential transmissions. To this end, a physical layer security strategy is provisioned for the primary and secondary transmissions via artificial noise (AN) generation at the primary base-station (PBS) and zero-forcing precoders. Specifically, the precoders are constructed by using the channel estimates with pilot contamination. In order to degrade the interception of confidential transmissions at the eavesdropper, the AN sequences are transmitted at the PBS by exploiting the excess degrees-of-freedom offered by its massive antenna array and by using random AN shaping matrices. The channel estimates at the PBS and secondary base-station (SBS) are obtained by using non-orthogonal pilot sequences transmitted by the primary user nodes (PUs) and secondary user nodes (SUs), respectively. Hence, these channel estimates are affected by intra-cell pilot contamination. In this context, the detrimental effects of intra-cell pilot contamination and channel estimation errors for physical layer secure communication are investigated. For this system set-up, the average and asymptotic achievable secrecy rate expressions are derived in closed-form. Specifically, these performance metrics are studied for imperfect channel state information (CSI) and for perfect CSI, and thereby, the secrecy rate degradation due to inaccurate channel knowledge and intra-cell pilot contamination is quantified. 
Our analysis reveals that physical layer secure communication can be provisioned for both primary and secondary massive MIMO systems even in the presence of channel estimation errors and pilot contamination.

]]>Entropy doi: 10.3390/e19070348

Authors: Rong A Liping Pang Meng Liu Dongsheng Yang

Carbon Dioxide Removal Assembly (CDRA) is one of the most important systems in the Environmental Control and Life Support System (ECLSS) of a manned spacecraft. With the development of adsorbents and CDRA technology, solid amine has attracted increasing attention due to its obvious advantages. However, a manned spacecraft is launched far from the Earth, and its resources and energy are severely restricted. These limitations increase the design difficulty of a solid amine CDRA. The purpose of this paper is to seek optimal design parameters for the solid amine CDRA. Based on a preliminary structure of the solid amine CDRA, heat and mass transfer models are built to reflect some features of the special solid amine adsorbent, a polyethylenepolyamine adsorbent. A multi-objective optimization of the design of the solid amine CDRA is then discussed. In this study, the cabin CO2 concentration, system power consumption and entropy production are chosen as the optimization objectives. The optimization variables consist of the adsorption cycle time, solid amine loading mass, adsorption bed length, power consumption and system entropy production. The Improved Non-dominated Sorting Genetic Algorithm (NSGA-II) is used to solve this multi-objective optimization problem and to obtain the optimal solution set. A design example of a solid amine CDRA in a manned space station is used to illustrate the optimization procedure. The optimal combinations of design parameters are located on the Pareto Optimal Front (POF). Finally, Design 971 is selected as the best combination of design parameters. The optimal results indicate that multi-objective optimization plays a significant role in the design of the solid amine CDRA. The final optimal design parameters can guarantee a cabin CO2 concentration within the specified range while also satisfying the requirements of light weight and minimum energy consumption.

]]>Entropy doi: 10.3390/e19070344

Authors: Keiko Uohashi

In this paper, we study the construction of α -conformally equivalent statistical manifolds for a given symmetric cubic form on a Riemannian manifold. In particular, we describe a method to obtain α -conformally equivalent connections from the relation between tensors and the symmetric cubic form.

]]>Entropy doi: 10.3390/e19070345

Authors: Leonid Martyushev

A measure of time is related to the number of ways by which humans correlate the past and the future for some process. On this basis, a connection between time and entropy (information, Boltzmann–Gibbs, and thermodynamic) is established. This measure endows time with such properties as universality, relativity, directionality, and non-uniformity. A number of issues of modern science related to finding the laws that describe changes in nature are discussed. Special emphasis is placed on the role of the evolutionary adaptation of an observer to the surrounding world.

]]>Entropy doi: 10.3390/e19070342

Authors: Yuxing Li Yaan Li Xiao Chen Jing Yu

In view of the problem that the features of ship-radiated noise are difficult to extract accurately, a novel method based on variational mode decomposition (VMD), multi-scale permutation entropy (MPE) and a support vector machine (SVM) is proposed to extract these features. In order to eliminate mode mixing and extract the complexity of the intrinsic mode function (IMF) accurately, VMD is employed to decompose the three types of ship-radiated noise instead of Empirical Mode Decomposition (EMD) and its extended methods. Because permutation entropy (PE) can quantify complexity at only one scale, MPE is used to extract features at different scales. In this study, three types of ship-radiated noise signals are decomposed into a set of band-limited IMFs by the VMD method, and the intensity of each IMF is calculated. Then, the IMFs with the highest energy are selected for the extraction of their MPE. By analyzing the separability of the MPE at different scales, the optimal MPE of the IMF with the highest energy is taken as the feature vector. Finally, the feature vectors are fed into the SVM classifier to classify and recognize the different types of ships. The proposed method was applied to simulated and actual ship-radiated noise signals. Comparison with the PE of the highest-energy IMF obtained by EMD, ensemble EMD (EEMD) and VMD shows that the proposed method can effectively extract MPE features and realize the classification and recognition of ships.
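The MPE computation itself is straightforward to sketch. The following minimal illustration of permutation entropy with mean coarse-graining is not the authors' implementation, and parameter choices such as the embedding dimension m = 3 are illustrative:

```python
import math

def permutation_entropy(x, m=3, delay=1):
    """Normalized permutation entropy (0 = fully regular, 1 = maximally irregular)."""
    n = len(x) - (m - 1) * delay
    counts = {}
    for i in range(n):
        # Ordinal pattern: the ranking of the m samples in the current window.
        pattern = tuple(sorted(range(m), key=lambda k: x[i + k * delay]))
        counts[pattern] = counts.get(pattern, 0) + 1
    h = -sum((c / n) * math.log(c / n) for c in counts.values())
    return h / math.log(math.factorial(m))

def coarse_grain(x, scale):
    """Non-overlapping averaging used to build the multi-scale series."""
    return [sum(x[i:i + scale]) / scale for i in range(0, len(x) - scale + 1, scale)]

def multiscale_pe(x, scales=(1, 2, 3), m=3):
    """PE of the coarse-grained series at each scale."""
    return [permutation_entropy(coarse_grain(x, s), m=m) for s in scales]
```

A strictly monotone series yields a single ordinal pattern and hence zero entropy at every scale, while an irregular series pushes the normalized value toward 1.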

]]>Entropy doi: 10.3390/e19070341

Authors: Mario Martinelli

The present age, which can be called the Information Age, has a core technology constituted by bits transported by photons. Both concepts, bit and photon, originated in the past century: the concept of the photon was introduced by Planck in 1900 when he advanced the solution of the blackbody spectrum, and bit is a term first used by Shannon in 1948 when he introduced the theorems that founded information theory. The connection between Planck and Shannon is not immediately apparent; nor is it obvious that they derived their basic results from the concept of entropy. Examining the work of other important scientists can shed light on Planck’s and Shannon’s work in these respects. Darwin and Fowler, who in 1922 published a couple of papers in which they reinterpreted Planck’s results, pointed out the centrality of the partition function to statistical mechanics and thermodynamics. The same roots have more recently been reconsidered by Jaynes, who extended the considerations advanced by Darwin and Fowler to information theory. This paper investigates how the concept of entropy was propagated in the past century in order to show how a simple intuition, born in 1824 during the first industrial revolution in the mind of the young French engineer Carnot, is literally still enlightening the fourth industrial revolution and probably will continue to do so in the coming century.

]]>Entropy doi: 10.3390/e19070343

Authors: Lawrence Schulman

Establishing (or falsifying) the special state theory of quantum measurement is a program with both theoretical and experimental directions. The special state theory has only pure unitary time evolution, like the many worlds interpretation, but has only one world. How this can be accomplished requires both “special states” and a significant modification of the usual assumptions about the arrow of time. All this is reviewed below. Experimentally, proposals for tests already exist, and the problems are, first, the practical one of performing the experiments and, second, that of devising further experiments. On the theoretical level, many problems remain, among them the impact of particle statistics on the availability of special states, finding a way to estimate their abundance, and the possibility of using a computer for this purpose. Regarding the arrow of time, there is an early proposal of J. A. Wheeler that may be implementable, with implications for cosmology.

]]>Entropy doi: 10.3390/e19070339

Authors: Claudio Cremaschini Massimo Tessarotto

Key aspects of the manifestly-covariant theory of quantum gravity (Cremaschini and Tessarotto 2015–2017) are investigated. These refer, first, to the establishment of the four-scalar, manifestly-covariant evolution quantum wave equation, denoted as covariant quantum gravity (CQG) wave equation, which advances the quantum state ψ associated with a prescribed background space-time. In this paper, the CQG-wave equation is proved to follow at once by means of a Hamilton–Jacobi quantization of the classical variational tensor field g ≡ g μ ν and its conjugate momentum, referred to as (canonical) g-quantization. The same equation is also shown to be variational and to follow from a synchronous variational principle identified here with the quantum Hamilton variational principle. The corresponding quantum hydrodynamic equations are then obtained upon introducing the Madelung representation for ψ , which provides an equivalent statistical interpretation of the CQG-wave equation. Finally, the quantum state ψ is proven to fulfill generalized Heisenberg inequalities, relating the statistical measurement errors of quantum observables. These are shown to be represented in terms of the standard deviations of the metric tensor g ≡ g μ ν and its quantum conjugate momentum operator.

]]>Entropy doi: 10.3390/e19070311

Authors: Alain Deville Yannick Deville

Blind Source Separation (BSS) is an active domain of Classical Information Processing, with well-identified methods and applications. The development of Quantum Information Processing has made possible the appearance of Blind Quantum Source Separation (BQSS), with a recent extension towards Blind Quantum Process Tomography (BQPT). This article investigates the use of several fundamental quantum concepts in the BQSS context and establishes properties already used without justification in that context. It mainly considers a pair of electron spins initially separately prepared in a pure state and then submitted to an undesired exchange coupling between these spins. Some consequences of the existence of the entanglement phenomenon, and of the probabilistic aspect of quantum measurements, upon BQSS solutions, are discussed. An unentanglement criterion is established for the state of an arbitrary qubit pair, expressed first with probability amplitudes and secondly with probabilities. The interest of using the concept of a random quantum state in the BQSS context is presented. It is stressed that the concept of statistical independence of the sources, widely used in classical BSS, should be used with care in BQSS, and possibly replaced by some disentanglement principle. It is shown that the coefficients of the development of any qubit pair pure state over the states of an orthonormal basis can be expressed with the probabilities of results in the measurements of well-chosen spin components.

]]>Entropy doi: 10.3390/e19070330

Authors: Xiaoyue Feng Yanchun Liang Xiaohu Shi Dong Xu Xu Wang Renchu Guan

Overfitting is an important problem in machine learning. Several algorithms, such as the extreme learning machine (ELM), suffer from this issue when facing high-dimensional sparse data, e.g., in text classification. One common issue is that the extent of overfitting is not well quantified. In this paper, we propose a quantitative measure of overfitting referred to as the rate of overfitting (RO) and a novel model, named AdaBELM, to reduce overfitting. With RO, the overfitting problem can be quantitatively measured and identified. The newly proposed model can achieve high performance on multi-class text classification. To evaluate the generalizability of the new model, we designed experiments based on three datasets, i.e., the 20 Newsgroups, Reuters-21578, and BioMed corpora, which represent balanced, unbalanced, and real application data, respectively. Experimental results demonstrate that AdaBELM can reduce overfitting and outperforms the classical ELM, decision tree, random forests, and AdaBoost on all three text-classification datasets; for example, it achieves 62.2% higher accuracy than ELM. Therefore, the proposed model has good generalizability.

]]>Entropy doi: 10.3390/e19070335

Authors: Dominik Strzałka

The aim of this paper is to present some preliminary results and non-extensive statistical properties of selected operating system counters related to hard drive behaviour. A number of experiments were carried out in order to generate the workload and analyse the behaviour of computers during man–machine interaction. All the analysed computers were personal machines running Windows operating systems. The research was conducted to demonstrate how the concept of non-extensive statistical mechanics can be helpful in the description of computer system behaviour, especially in the context of statistical properties with scaling phenomena, long-term dependencies and statistical self-similarity. The studies were made on the basis of the perfmon tool, which allows the user to trace operating system counters during processing.

]]>Entropy doi: 10.3390/e19070338

Authors: Jun-Lin Lin Laksamee Khomnotai

With a privacy-aware reputation system, an auction website allows the buyer in a transaction to hide his/her identity from the public for privacy protection. However, fraudsters can also take advantage of this buyer-anonymized function to hide the connections between themselves and their accomplices. Traditional fraudster detection methods become useless for detecting such fraudsters because these methods rely on accessing these connections to work effectively. To resolve this problem, we introduce two attributes to quantify the buyer-anonymized activities associated with each user and use them to reinforce the traditional methods. Experimental results on a dataset crawled from an auction website show that the proposed attributes effectively enhance the prediction accuracy for detecting fraudsters, particularly when the proportion of the buyer-anonymized activities in the dataset is large. Because many auction websites have adopted privacy-aware reputation systems, the two proposed attributes should be incorporated into their fraudster detection schemes to combat these fraudulent activities.

]]>Entropy doi: 10.3390/e19070336

Authors: Kazuho Watanabe

Kernel methods have been used for turning linear learning algorithms into nonlinear ones. These nonlinear algorithms measure distances between data points by the distance in the kernel-induced feature space. In lossy data compression, the optimal tradeoff between the number of quantized points and the incurred distortion is characterized by the rate-distortion function. However, the rate-distortion functions associated with distortion measures involving kernel feature mapping have yet to be analyzed. We consider two reconstruction schemes, reconstruction in input space and reconstruction in feature space, and provide bounds to the rate-distortion functions for these schemes. Comparison of the derived bounds to the quantizer performance obtained by the kernel K -means method suggests that the rate-distortion bounds for input space and feature space reconstructions are informative at low and high distortion levels, respectively.

]]>Entropy doi: 10.3390/e19070334

Authors: Bo Li Rui Du Wenjing Kang Gongliang Liu

The Internet of Things (IoT) is placing new demands on existing communication systems. The limited orthogonal resources do not meet the demands of the massive connectivity of future IoT systems, which require efficient multiple access. Interleave-division multiple access (IDMA) is a promising method for improving spectral efficiency and supporting massive connectivity in IoT networks. At any given time, not all sensors send information to an aggregation node; rather, each node transmits a short frame on occasion, e.g., time-controlled or event-driven. The sporadic nature of the uplink transmission, low data rates, and massive connectivity in IoT scenarios necessitate communication schemes with minimal control overhead. Therefore, sensor activity and data detection should be implemented on the receiver side. However, the current chip-by-chip (CBC) iterative multi-user detection (MUD) assumes that sensor activity is precisely known at the receiver. In this paper, we propose three schemes to solve the MUD problem in a sporadic IDMA uplink transmission system. Firstly, inspired by the observation of sensor sparsity, we incorporate compressed sensing (CS) into MUD in order to jointly perform activity and data detection. Secondly, as CS detection can provide reliable activity detection, we combine CS and CBC and propose a CS-CBC detector. In addition, a CBC-based MUD named CBC-AD is proposed to provide a comparable baseline scheme.

]]>Entropy doi: 10.3390/e19070337

Authors: Mikhail Sheremet Teodor Grosan Ioan Pop

Natural convection heat transfer combined with entropy generation in a square cavity filled with a nanofluid under the effect of a variable temperature distribution along the left vertical wall has been studied numerically. The governing equations, formulated in dimensionless non-primitive variables with corresponding boundary conditions and taking into account the Brownian diffusion and thermophoresis effects, have been solved by the finite difference method. Distributions of streamlines, isotherms, local entropy generation and the Nusselt number have been obtained for different values of the key parameters. It has been found that a growth of the amplitude of the temperature distribution along the left wall and an increase of the wave number lead to an increase in the average entropy generation, while an increase in the abovementioned parameters at low Rayleigh number leads to a decrease in the average Bejan number.

]]>Entropy doi: 10.3390/e19070332

Authors: Chen Xu Chengke Hu Xiaoli Liu Sijing Wang

Based on the Markov model and the basic theory of information entropy, this paper puts forward a new method for optimizing the location of observation points in order to obtain more information from a limited geological investigation. According to the existing data from observation points, the tunnel's geological lithology was classified, and the lithology distributions along the tunnel were determined using the Markov model. On the basis of information entropy theory, the distribution of information entropy along the axis of the tunnel was obtained; different values of information entropy can thus be acquired for different rock classifications. Uncertainty increases as information entropy increases: the maximum entropy indicates maximum uncertainty, and this value therefore determines the position of the new drilling hole, where the prediction accuracy is lowest. The optimal distribution is then obtained by recalculating with the geological information from the new location. Taking the Bashiyi Daban water diversion tunnel project in Xinjiang as a case, the maximum information entropy of the geological conditions was analyzed by the method proposed in the present study, with 25 newly added geological observation points along the axis of the 30-km tunnel. The results proved the validity of the present method. The method and results in this paper may be used not only to predict the geological conditions of underground engineering based on the investigated geological information, but also to optimize the distribution of the geological observation points.
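The core selection rule described, place the next borehole where the lithology distribution is most uncertain, can be sketched in a few lines. The chainages and probability vectors below are hypothetical, not the paper's tunnel data:

```python
import math

def shannon_entropy(p):
    """Entropy (nats) of a discrete lithology probability vector."""
    return -sum(pi * math.log(pi) for pi in p if pi > 0)

def next_borehole(position_probs):
    """Given (chainage, lithology-probability) pairs along the tunnel axis,
    return the chainage of maximum entropy, i.e. maximum uncertainty."""
    return max(position_probs, key=lambda item: shannon_entropy(item[1]))[0]

# Hypothetical three-lithology probabilities at three chainages (illustrative only).
probs = [
    (100.0, [0.90, 0.05, 0.05]),  # near an existing borehole: low uncertainty
    (550.0, [0.40, 0.35, 0.25]),  # between boreholes: high uncertainty
    (900.0, [0.70, 0.20, 0.10]),
]
```

After drilling at the selected chainage, the probability field would be re-estimated from the enlarged data set (via the Markov model) and the rule reapplied, matching the iterative procedure in the abstract.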

]]>Entropy doi: 10.3390/e19070333

Authors: Jordan Horowitz Jeremy England

There are many functional contexts where it is desirable to maintain a mesoscopic system in a nonequilibrium state. However, such control requires an inherent energy dissipation. In this article, we unify and extend a number of works on the minimum energetic cost to maintain a mesoscopic system in a prescribed nonequilibrium distribution using ancillary control. For a variety of control mechanisms, we find that the minimum amount of energy dissipation necessary can be cast as an information-theoretic measure of distinguishability between the target nonequilibrium state and the underlying equilibrium distribution. This work offers quantitative insight into the intuitive idea that more energy is needed to maintain a system farther from equilibrium.

]]>Entropy doi: 10.3390/e19070328

Authors: Johannes Rauh Pradeep Banerjee Eckehard Olbrich Jürgen Jost Nils Bertschinger

We consider the problem of quantifying the information shared by a pair of random variables X 1 , X 2 about another variable S. We propose a new measure of shared information, called extractable shared information, that is left monotonic; that is, the information shared about S is bounded from below by the information shared about f ( S ) for any function f. We show that our measure leads to a new nonnegative decomposition of the mutual information I ( S ; X 1 X 2 ) into shared, complementary and unique components. We study properties of this decomposition and show that a left monotonic shared information is not compatible with a Blackwell interpretation of unique information. We also discuss whether it is possible to have a decomposition in which both shared and unique information are left monotonic.

]]>Entropy doi: 10.3390/e19070329

Authors: Dong Seo Yun Chung

Recently, green networks have been considered one of the hottest topics in Information and Communication Technology (ICT), especially in mobile communication networks. In a green network, energy saving at network nodes such as base stations (BSs), switches, and servers should be achieved efficiently. In this paper, we consider a heterogeneous network architecture in 5G networks with separated data and control planes, where a macro cell manages control signals and a small cell manages data traffic. We then propose an optimized handover scheme based on context information such as reference signal received power, speed of the user equipment (UE), traffic load, call admission control level, and data type. The main objective of the proposed optimal handover is to reduce either the number of handovers or the total energy consumption of the BSs. To this end, we develop optimization problems with either the minimization of the total number of handovers or the minimization of the energy consumption of the BSs as the objective function. The solution is obtained by particle swarm optimization, since the developed optimization problem is NP-hard. Performance analysis via simulation, based on various probability distributions of the characteristics of the UE and BSs, shows that the proposed optimized handover based on context information performs better than the previous call admission control based handover scheme from the perspective of the number of handovers and the total energy consumption. We also show that the proposed handover scheme can efficiently reduce either the number of handovers or the total energy consumption by applying either handover minimization or energy minimization, depending on the objective of the application.
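Particle swarm optimization, used here because the problem is NP-hard, is a generic population-based heuristic. The sketch below is a minimal minimizer run on a toy sphere objective, not the paper's handover cost function; all parameter values (swarm size, inertia, acceleration coefficients) are illustrative:

```python
import random

def pso(f, dim, n_particles=20, iters=100, lo=-5.0, hi=5.0,
        w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimal particle swarm optimization (minimization of f over a box)."""
    rng = random.Random(seed)
    X = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    V = [[0.0] * dim for _ in range(n_particles)]
    P = [x[:] for x in X]                    # personal best positions
    pbest = [f(x) for x in X]
    g = min(range(n_particles), key=lambda i: pbest[i])
    G, gbest = P[g][:], pbest[g]             # global best
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                V[i][d] = (w * V[i][d]
                           + c1 * rng.random() * (P[i][d] - X[i][d])
                           + c2 * rng.random() * (G[d] - X[i][d]))
                X[i][d] = min(hi, max(lo, X[i][d] + V[i][d]))  # stay in box
            fx = f(X[i])
            if fx < pbest[i]:
                pbest[i], P[i] = fx, X[i][:]
                if fx < gbest:
                    gbest, G = fx, X[i][:]
    return G, gbest

# Toy usage: minimize the 2-D sphere function.
best_x, best_val = pso(lambda x: sum(v * v for v in x), dim=2)
```

In the paper's setting the objective would instead evaluate either the handover count or the BS energy consumption for a candidate handover configuration.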

]]>Entropy doi: 10.3390/e19070310

Authors: Maxinder Kanwal Joshua Grochow Nihat Ay

In the past three decades, many theoretical measures of complexity have been proposed to help understand complex systems. In this work, for the first time, we place these measures on a level playing field, to explore the qualitative similarities and differences between them, and their shortcomings. Specifically, using the Boltzmann machine architecture (a fully connected recurrent neural network) with uniformly distributed weights as our model of study, we numerically measure how complexity changes as a function of network dynamics and network parameters. We apply an extension of one such information-theoretic measure of complexity to understand incremental Hebbian learning in Hopfield networks, a fully recurrent architecture model of autoassociative memory. In the course of Hebbian learning, the total information flow reflects a natural upward trend in complexity as the network attempts to learn more and more patterns.

]]>Entropy doi: 10.3390/e19070331

Authors: Li-Tuo Shen Zhi-Cheng Shi Huai-Zhi Wu Zhen-Biao Yang

How to analytically deal with the general entanglement dynamics of separate Jaynes–Cummings nodes with continuous-variable fields is still an open question, and few analytical approaches can be used to solve their general entanglement dynamics. Entanglement dynamics between two separate Jaynes–Cummings nodes are examined in this article. Both vacuum state and coherent state in the initial fields are considered through the numerical and analytical methods. The gap between two nonidentical qubit-field coupling strengths shifts the revival period and changes the revival amplitude of two-qubit entanglement. For vacuum-state fields, the maximal entanglement is fully revived after a gap-dependence period, within which the entanglement nonsmoothly decreases to zero and partly recovers without exhibiting sudden death phenomenon. For strong coherent-state fields, the two-qubit entanglement decays exponentially as the evolution time increases, exhibiting sudden death phenomenon, and the increasing gap accelerates the revival period and amplitude decay of the entanglement, where the numerical and analytical results have an excellent coincidence.

]]>Entropy doi: 10.3390/e19070326

Authors: Ämin Baumeler Stefan Wolf

Computation models such as circuits describe sequences of computation steps that are carried out one after the other. In other words, algorithm design is traditionally subject to the restriction imposed by a fixed causal order. We address a novel computing paradigm beyond quantum computing, replacing this assumption by mere logical consistency: We study non-causal circuits, where a fixed time structure within a gate is locally assumed whilst the global causal structure between the gates is dropped. We present examples of logically consistent non-causal circuits outperforming all causal ones; they imply that suppressing loops entirely is more restrictive than just avoiding the contradictions they can give rise to. That fact is already known for correlations as well as for communication, and we here extend it to computation.

]]>Entropy doi: 10.3390/e19070325

Authors: Xiaoyang Li Yuqing Hu Fuqiang Sun Rui Kang

When optimizing an accelerated degradation testing (ADT) plan, the initial values of unknown model parameters must be pre-specified. However, it is usually difficult to obtain the exact values, since many uncertainties are embedded in these parameters. Bayesian ADT optimal design was presented to address this problem by using prior distributions to capture these uncertainties. Nevertheless, when the difference between a prior distribution and the actual situation is large, the existing Bayesian optimal design might cause over-testing or under-testing issues; for example, the ADT implemented according to the optimal plan may consume too many testing resources, or too few accelerated degradation data may be obtained during the ADT. To overcome these obstacles, a Bayesian sequential step-down-stress ADT design is proposed in this article. During the sequential ADT, the test under the highest stress level is first conducted based on the initial prior information to quickly generate degradation data. Then, the data collected under higher stress levels are employed to construct the prior distributions for the test design under lower stress levels by using Bayesian inference. In the process of optimization, the inverse Gaussian (IG) process is assumed to describe the degradation paths, and Bayesian D-optimality is selected as the optimization objective. A case study on an electrical connector’s ADT plan is provided to illustrate the application of the proposed Bayesian sequential ADT design method. Compared with the results from a typical static Bayesian ADT plan, the proposed design guarantees more stable and precise estimations of different reliability measures.

]]>Entropy doi: 10.3390/e19070323

Authors: Yuejun Guo Qing Xu Peng Li Mateu Sbert Yu Yang

In this paper, we propose to improve trajectory shape analysis by explicitly considering the speed attribute of trajectory data, and to successfully achieve anomaly detection. The shape of object motion trajectory is modeled using Kernel Density Estimation (KDE), making use of both the angle attribute of the trajectory and the speed of the moving object. An unsupervised clustering algorithm, based on the Information Bottleneck (IB) method, is employed for trajectory learning to obtain an adaptive number of trajectory clusters through maximizing the Mutual Information (MI) between the clustering result and a feature set of the trajectory data. Furthermore, we propose to effectively enhance the performance of IB by taking into account the clustering quality in each iteration of the clustering procedure. The trajectories are determined as either abnormal (infrequently observed) or normal by a measure based on Shannon entropy. Extensive tests on real-world and synthetic data show that the proposed technique behaves very well and outperforms the state-of-the-art methods.
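An entropy-based "infrequently observed" rule of the kind described can be sketched once trajectories have been assigned to clusters. The thresholding choice below, flagging a trajectory when its cluster's surprisal exceeds the entropy of the cluster distribution, is one plausible reading; the authors' exact measure may differ:

```python
import math
from collections import Counter

def anomaly_flags(labels):
    """Mark each trajectory as abnormal when its cluster is infrequently
    observed: here, when the surprisal -log p(cluster) exceeds the Shannon
    entropy of the whole cluster distribution (one plausible threshold)."""
    n = len(labels)
    probs = {c: k / n for c, k in Counter(labels).items()}
    entropy = -sum(p * math.log(p) for p in probs.values())
    return [-math.log(probs[c]) > entropy for c in labels]
```

With nine trajectories in one cluster and one in another, only the singleton is flagged; with a uniform cluster distribution, surprisal equals the entropy for every trajectory and nothing is flagged.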

]]>Entropy doi: 10.3390/e19070320

Authors: Daisuke Yoshikawa

In this paper, we derive the optimal boundary for pair trading. This boundary defines the points of entry into or exit from the market for a given stock pair. However, if the assumed model contains uncertainty, this boundary could lead to large losses. To avoid this, we develop a more robust strategy by accounting for the model uncertainty. To incorporate the model uncertainty, we use the relative entropy as a penalty function in the expected profit from pair trading.

]]>Entropy doi: 10.3390/e19070321

Authors: Pedro Zufiria Iker Barriales-Valbuena

This paper elaborates on the Random Network Model (RNM) as a mathematical framework for modelling and analyzing the generation of complex networks. This framework allows the analysis of the relationship between several network-characterizing features (link density, clustering coefficient, degree distribution, connectivity, etc.) and entropy-based complexity measures, providing new insight into the generation and characterization of random networks. Some theoretical and computational results illustrate the utility of the proposed framework.

]]>Entropy doi: 10.3390/e19070324

Authors: Hao Wang Dun Lin Xinrong Su Xin Yuan

Corner separation in highly loaded compressors deteriorates the aerodynamics and reduces the stable operating range. The flow pattern is further complicated by the interaction between the aperiodic corner separation and the periodically wake-shedding vortices. Accurate prediction of the corner separation is a challenge for the Reynolds-Averaged Navier–Stokes (RANS) method, which is based on the linear eddy-viscosity formulation. In the current work, the corner separation is investigated with the Delayed Detached Eddy Simulation (DDES) approach. The DDES results agree well with the experiment and are systematically better than the RANS results, especially in the corner region where massive separation occurs. The accurate results from DDES provide a solid foundation for mechanism study. The flow structures and the distribution of the Reynolds stress help reveal the process of corner separation and its interaction with the wake vortices. Before massive corner separation occurs, a hairpin-like vortex develops; its appearance could be a signal of large-scale corner separation. The strong interaction between the corner separation and the wake vortices significantly enhances the turbulence intensity. Based on these analyses, entropy analysis is conducted from two aspects to study the losses: time-averaged entropy analysis and instantaneous entropy analysis. It is found that the interaction between the passage vortex and the wake vortex yields remarkable viscous losses over the 0–12% span when the corner separation has not yet been triggered; however, when the corner separation occurs, an enlarged region covering the 0–30% span is affected, owing to the interaction between the corner separation and the wake vortices. The detailed coherent structures, local loss information and turbulence characteristics presented can provide guidance for corner separation control and better design.

]]>Entropy doi: 10.3390/e19070322

Authors: Benjamin Dribus

Path summation offers a flexible general approach to quantum theory, including quantum gravity. In the latter setting, summation is performed over a space of evolutionary pathways in a history configuration space. Discrete causal histories called acyclic directed sets offer certain advantages over similar models appearing in the literature, such as causal sets. Path summation defined in terms of these histories enables derivation of discrete Schrödinger-type equations describing quantum spacetime dynamics for any suitable choice of algebraic quantities associated with each evolutionary pathway. These quantities, called phases, collectively define a phase map from the space of evolutionary pathways to a target object, such as the unit circle S^1 ⊂ C, or an analogue such as S^3 or S^7. This paper explores the problem of identifying suitable phase maps for discrete quantum gravity, focusing on a class of S^1-valued maps defined in terms of “structural increments” of histories, called terminal states. Invariants such as state automorphism groups determine multiplicities of states, and induce families of natural entropy functions. A phase map defined in terms of such a function is called an entropic phase map. The associated dynamical law may be viewed as an abstract combination of Schrödinger’s equation and the second law of thermodynamics.

]]>Entropy doi: 10.3390/e19070314

Authors: Karo Michaelian Ivan Santamaría-Holek

It is often incorrectly assumed that the number of microstates Ω(E, V, N, …) available to an isolated system can have an arbitrary dependence on the extensive variables E, V, N, … However, this is not the case for systems which can, in principle, reach thermodynamic equilibrium, since restrictions arise from the underlying equilibrium statistical-mechanical axioms of independence and a priori equal probability of microstates. Here we derive a concise criterion specifying the condition on Ω which must be met in order for a system to be able, in principle, to reach thermodynamic equilibrium. Natural quantum systems obey this criterion and therefore can, in principle, reach thermodynamic equilibrium. However, models which do not respect this criterion will present inconsistencies when treated under the equilibrium thermodynamic formalism. This has relevance to a number of recent models in which negative heat capacity and other violations of fundamental thermodynamic laws have been reported.

]]>Entropy doi: 10.3390/e19070313

Authors: Shengnan Zhang Yuexian Hou Benyou Wang Dawei Song

Regularization of neural networks can alleviate overfitting in the training phase. Current regularization methods, such as Dropout and DropConnect, randomly drop neural nodes or connections based on a uniform prior. Such a data-independent strategy does not take into consideration the quality of individual units or connections. In this paper, we aim to develop a data-dependent approach to regularizing neural networks in the framework of Information Geometry. A measure of the quality of connections, namely confidence, is proposed. Specifically, the confidence of a connection is derived from its contribution to the Fisher information distance. The network is adjusted by retaining the confident connections and discarding the less confident ones. The adjusted network, named ConfNet, carries the majority of variations in the sample data. The relationships among confidence estimation, Maximum Likelihood Estimation and classical model selection criteria (such as the Akaike information criterion) are investigated and discussed theoretically. Furthermore, a Stochastic ConfNet is designed by adding a self-adaptive probabilistic sampling strategy. The proposed data-dependent regularization methods achieve promising experimental results on three data collections: MNIST, CIFAR-10 and CIFAR-100.

]]>Entropy doi: 10.3390/e19070316

Authors: Paul Manneville

Wall-bounded flows experience a transition to turbulence characterized by the coexistence of laminar and turbulent domains in some range of the Reynolds number R, the natural control parameter. This transitional regime takes place between an upper threshold R_t above which turbulence is uniform (featureless) and a lower threshold R_g below which any form of turbulence decays, possibly at the end of overlong chaotic transients. The most emblematic cases of flow along flat plates transiting to/from turbulence according to this scenario are reviewed. The coexistence is generally in the form of bands, alternately laminar and turbulent, and oriented obliquely with respect to the general flow direction. The final decay of the bands at R_g points to the relevance of directed percolation and criticality in the sense of statistical-physics phase transitions. The nature of the transition at R_t where bands form is still somewhat mysterious and does not easily fit the scheme holding for pattern-forming instabilities on a laminar background at increasing control parameter. In contrast, the bands arise at R_t out of a uniform turbulent background at decreasing control parameter. Ingredients of a possible theory of laminar-turbulent patterning are discussed.

]]>Entropy doi: 10.3390/e19070315

Authors: Daniel Berend Aryeh Kontorovich Gil Zagdanski

In Berend and Kontorovich (2012), the following problem was studied: A random sample of size t is taken from a world (i.e., probability space) of size n; bound the expected value of the probability of the set of elements not appearing in the sample (unseen mass) in terms of t and n. Here we study the same problem, where the world may be countably infinite, and the probability measure on it is restricted to have an entropy of at most h. We provide tight bounds on the maximum of the expected unseen mass, along with a characterization of the measures attaining this maximum.
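The expected unseen mass described above can be checked numerically. The sketch below (function names are illustrative, not from the paper) estimates the expected unseen mass by Monte Carlo for a finite world, and compares it with the closed-form value (1 − 1/n)^t that holds for the uniform distribution on n elements:

```python
import random

def expected_unseen_mass_mc(probs, t, trials=2000, seed=0):
    """Monte Carlo estimate of E[unseen mass]: draw a sample of size t
    and sum the probabilities of the elements that never appeared."""
    rng = random.Random(seed)
    n = len(probs)
    total = 0.0
    for _ in range(trials):
        seen = set(rng.choices(range(n), weights=probs, k=t))
        total += sum(p for i, p in enumerate(probs) if i not in seen)
    return total / trials

n, t = 20, 30
uniform = [1.0 / n] * n
# For the uniform world, each element is missed with probability (1 - 1/n)^t,
# so E[unseen mass] = n * (1/n) * (1 - 1/n)^t = (1 - 1/n)^t.
analytic = (1.0 - 1.0 / n) ** t
estimate = expected_unseen_mass_mc(uniform, t)
print(round(analytic, 4), round(estimate, 4))
```

Replacing `uniform` with a skewed distribution shows how the unseen mass depends on the measure, which is where the paper's entropy constraint enters.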

]]>Entropy doi: 10.3390/e19070317

Authors: Qing Li Xia Ji Steven Y. Liang

Aiming at the issue of extracting incipient single faults and multi-faults of rotating machinery from nonlinear and non-stationary vibration signals with strong background noise, a new fault diagnosis method based on improved autoregressive minimum entropy deconvolution (improved AR-MED) and variational mode decomposition (VMD) is proposed. Due to the complexity of rotating machinery systems, the periodic transient impulses of single faults and multiple faults always emerge in the acquired vibration signals. The improved AR-MED technique can effectively deconvolve the influence of the background noise, with the aim of enhancing the peak values of the multiple transient impulses. Nevertheless, the envelope spectra of the simulated and experimental signals in this work show that many interference components exist on both sides of the fault characteristic frequencies when the background noise is strong. To overcome this shortcoming, VMD is applied to adaptively decompose the filtered output vibration signal into a number of quasi-orthogonal intrinsic modes so as to better detect the single and multiple faults via those sub-band signals. The experimental and engineering application results demonstrate that the proposed method dramatically sharpens the fault characteristic frequencies (FCFs) from the impacts of bearing outer race and gearbox faults compared to the traditional methods, showing a significant improvement in detecting early incipient faults of rotating machinery.

]]>Entropy doi: 10.3390/e19070319

Authors: Kristian Piscicchia Angelo Bassi Catalina Curceanu Raffaele Grande Sandro Donadi Beatrix Hiesmayr Andreas Pichler

In this paper, new upper limits on the parameters of the Continuous Spontaneous Localization (CSL) collapse model are extracted. To this end, the X-ray emission data collected by the IGEX collaboration are analyzed and compared with the spectrum of the spontaneous photon emission process predicted by collapse models. This study yields the most stringent limits, within a relevant range of the CSL model parameters, of any method to date. The collapse rate λ and the correlation length r_C are mapped, thus allowing the exclusion of a broad range of the parameter space.

]]>Entropy doi: 10.3390/e19070293

Authors: Edward Jiménez Nicolás Recalde Esteban Chacón

We determine the proton and electron radii by analyzing constructive resonances at minimum entropy for elements with atomic number Z ≥ 11. We note that those radii can be derived from entropy principles and from published photoelectric cross-section data from the National Institute of Standards and Technology (NIST). A resonance region with optimal constructive interference is given by a principal wavelength λ of the order of the Bohr atomic radius. Our study shows that the proton radius deviations can be measured. Moreover, in the case of the electron, its radius converges to the classical electron radius, with a value of 2.817 fm. Resonance waves afforded us the possibility to measure the proton and electron radii through an interference term. This term was a necessary condition in order to have an effective cross-section maximum at the threshold. Minimum entropy corresponds to minimum proton shape deformation; the proton radius at minimum entropy was found to be (0.830 ± 0.015) fm, and the average proton radius was found to be (0.825 − 0.0341; 0.888 + 0.0405) fm.

]]>Entropy doi: 10.3390/e19070318

Authors: Robin Ince

The problem of how to properly quantify redundant information is an open question that has been the subject of much recent research. Redundant information refers to information about a target variable S that is common to two or more predictor variables X_i. It can be thought of as quantifying overlapping information content or similarities in the representation of S between the X_i. We present a new measure of redundancy which measures the common change in surprisal shared between variables at the local or pointwise level. We provide a game-theoretic operational definition of unique information, and use this to derive constraints which are used to obtain a maximum entropy distribution. Redundancy is then calculated from this maximum entropy distribution by counting only those local co-information terms which admit an unambiguous interpretation as redundant information. We show how this redundancy measure can be used within the framework of the Partial Information Decomposition (PID) to give an intuitive decomposition of the multivariate mutual information into redundant, unique and synergistic contributions. We compare our new measure to existing approaches over a range of example systems, including continuous Gaussian variables. Matlab code for the measure is provided, including all considered examples.

]]>Entropy doi: 10.3390/e19070312

Authors: Pablo Ruiz Ortega Miguel Olivares-Robles

In this work, we analyze the thermodynamics and geometric optimization of thermoelectric elements in a hybrid two-stage thermoelectric micro cooler (TEMC). We propose a novel procedure to improve the performance of the micro cooler based on the optimum geometric parameters, cross-sectional area (A) and length (L), of the semiconductor elements. Our analysis takes into account the Thomson effect to show its role in the performance of the system. We obtain dimensionless spatial temperature distributions, the coefficient of performance (COP) and the cooling power (Q_c) in terms of the electric current for different values of the geometric ratio ω = A/L. In our analysis we consider two cases: (a) the same materials in both stages (homogeneous system); and (b) different materials in each stage (hybrid system). We introduce the geometric parameter W = ω_1/ω_2 to optimize the micro device considering the geometric parameters of both stages, ω_1 and ω_2. Our results show the optimal configuration of materials that must be used in each stage. The Thomson effect leads to a slight improvement in the performance of the micro cooler. We determine the optimal electric current to obtain the best performance of the TEMC. The geometric parameters have been optimized, and the results show that the hybrid system reaches a maximum cooling power 15.9% greater than the one-stage system (with the same electric current I = 0.49 A), and 11% greater than the homogeneous system, when ω = 0.78. The optimization of the ratio of the number of thermocouples in each stage shows that COP and Q_c increase as the number of thermocouples in the second stage increases, with W = 0.94. We show that when two materials with different performances are placed in the two stages, the optimal configuration of materials must be determined to obtain a better performance of the hybrid two-stage TEMC system. These results are important because we offer a novel procedure to optimize a thermoelectric micro cooler considering the geometry of the materials at the micro level.

]]>Entropy doi: 10.3390/e19070308

Authors: Yongqiang Cheng Xuezhi Wang Bill Moran

Information geometry enables a deeper understanding of the methods of statistical inference. In this paper, the problem of nonlinear parameter estimation is considered from a geometric viewpoint using a natural gradient descent on statistical manifolds. It is demonstrated that nonlinear estimation for curved exponential families can be simply viewed as a deterministic optimization problem with respect to the structure of a statistical manifold. In this way, information geometry offers an elegant geometric interpretation for the solution of the estimator, as well as for the convergence of the gradient-based methods. The theory is illustrated via the analysis of a distributed mote network localization problem where Radio Interferometric Positioning System (RIPS) measurements are used for free mote location estimation. The analysis results demonstrate the computational advantages of the presented methodology.

]]>Entropy doi: 10.3390/e19070309

Authors: Tatsuaki Wada Hiroshi Matsuzoe

Based on the maximum entropy (MaxEnt) principle for a generalized entropy functional and the conjugate representations introduced by Zhang, we have reformulated the method of information geometry. For a set of conjugate representations, the associated escort expectation is naturally introduced and characterized by the generalized score function which has zero-escort expectation. Furthermore, we show that the escort expectation induces a conformal divergence.

]]>Entropy doi: 10.3390/e19070305

Authors: Qianqian Cui Zhipeng Qiu Wenbin Liu Zengyun Hu

Susceptible-infectious-removed (SIR) epidemic models are proposed to consider the impact of the available resources of the public health care system, in terms of the number of hospital beds. Both the incidence rate and the recovery rate are considered as nonlinear functions of the number of infectious individuals, and the recovery rate incorporates the influence of the number of hospital beds. It is shown that backward bifurcation and saddle-node bifurcation may occur when the number of hospital beds is insufficient. In such cases, it is critical to prepare an appropriate number of hospital beds, because merely reducing the basic reproduction number below unity is not enough to eradicate the disease. When the basic reproduction number is larger than unity, the model may undergo forward bifurcation and Hopf bifurcation. Increasing the number of hospital beds can decrease the number of infectious individuals, but it cannot by itself eliminate the disease. Therefore, maintaining enough hospital beds is important for the prevention and control of infectious disease. Numerical simulations are presented to illustrate and complement the theoretical analysis.

]]>Entropy doi: 10.3390/e19070307

Authors: Yuanyu Wu Rong Song

Target-directed elbow movements are essential in daily life; however, how different task demands affect motor control is seldom reported. In this study, the relationship between task demands and the complexity of kinematics and electromyographic (EMG) signals in healthy young individuals was investigated. Tracking tasks with four levels of task demands were designed, and participants were instructed to track the target trajectories by extending or flexing their elbow joint. The actual trajectories and EMG signals from the biceps and triceps were recorded simultaneously. Multiscale fuzzy entropy was utilized to analyze the complexity of the actual trajectories and EMG signals over multiple time scales. Results showed that the complexity of the actual trajectories and EMG signals increased when task demands increased. As the time scale increased, there was a monotonic rise in the complexity of the actual trajectories, while the complexity of the EMG signals rose first and then fell. Noise abatement may account for the decreasing entropy of the EMG signals at larger time scales. This study confirmed the uniqueness of multiscale entropy, which may be useful in the analysis of electrophysiological signals.

]]>Entropy doi: 10.3390/e19070306

Authors: Kunpeng Wang Guanqiu Qi Zhiqin Zhu Yi Chai

Sparse-representation based approaches have been integrated into image fusion methods in the past few years and show great performance in image fusion. Training an informative and compact dictionary is a key step for a sparsity-based image fusion method. However, it is difficult to balance “informative” and “compact”. In order to obtain sufficient information for sparse representation in dictionary construction, this paper classifies image patches from the source images into different groups based on morphological similarities. Stochastic coordinate coding (SCC) is used to extract the corresponding image-patch information for dictionary construction. According to the constructed dictionary, the image patches of the source images are converted to sparse coefficients by the simultaneous orthogonal matching pursuit (SOMP) algorithm. Finally, the sparse coefficients are fused by the Max-L1 fusion rule and inverted to obtain a fused image. Comparison experiments are conducted to evaluate the fused images in terms of image features, information, structural similarity, and visual perception. The results confirm the feasibility and effectiveness of the proposed image fusion solution.

]]>Entropy doi: 10.3390/e19070303

Authors: Xinbo Ai

The heterogeneous nature of a complex network means that the roles of individual nodes in the network differ considerably. Mechanisms of complex networks such as spreading dynamics, cascading reactions, and network synchronization are highly affected by a tiny fraction of so-called important nodes. Node importance ranking is thus of great theoretical and practical significance. Network entropy is usually utilized to characterize the amount of information encoded in the network structure and to measure the structural complexity at the graph level. We find that entropy can also serve as a local-level metric to quantify node importance. We propose an entropic metric, Entropy Variation, defining the node importance as the variation of network entropy before and after its removal, according to the assumption that the removal of a more important node is likely to cause more structural variation. Like other state-of-the-art methods for ranking node importance, the proposed entropic metric also utilizes structural information, but at the system level rather than the local level. Empirical investigations on real-life networks, including the Snake Idioms Network and several other well-known networks, demonstrate the superiority of the proposed entropic metric, notably outperforming other centrality metrics in identifying the top-k most important nodes.
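As a minimal illustration of the Entropy Variation idea, the sketch below uses the Shannon entropy of the degree distribution as a stand-in for network entropy (the paper's exact entropy definition may differ) and ranks nodes by how much this entropy changes upon their removal; all function names are illustrative:

```python
import math

def degree_entropy(adj):
    """Shannon entropy of the normalized degree distribution of a graph
    given as an adjacency dict {node: set(neighbours)}."""
    degs = [len(nbrs) for nbrs in adj.values()]
    total = sum(degs)
    if total == 0:
        return 0.0
    return -sum((d / total) * math.log2(d / total) for d in degs if d > 0)

def remove_node(adj, v):
    """Copy of the graph with node v (and its incident links) removed."""
    return {u: nbrs - {v} for u, nbrs in adj.items() if u != v}

def entropy_variation(adj):
    """Rank nodes by |H(G) - H(G without v)|: a larger change in network
    entropy marks a structurally more important node."""
    base = degree_entropy(adj)
    return sorted(((abs(base - degree_entropy(remove_node(adj, v))), v)
                   for v in adj), reverse=True)

# A star graph: removing the hub destroys all links, so it should rank first.
star = {0: {1, 2, 3, 4}, 1: {0}, 2: {0}, 3: {0}, 4: {0}}
ranking = entropy_variation(star)
print(ranking[0][1])
```

On the star graph the hub's removal drops the entropy from 2 bits to 0, while removing a leaf changes it only slightly, so the hub tops the ranking, matching the intuition behind the metric.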

]]>Entropy doi: 10.3390/e19070304

Authors: Andreas Schlatter

Probabilities in quantum physics can be shown to originate from a maximum entropy principle.

]]>Entropy doi: 10.3390/e19070302

Authors: Marco Bianucci

The concept of “large scale” obviously depends on the phenomenon we are interested in. For example, in the field of the foundation of Thermodynamics from microscopic dynamics, the large spatial and time scales are of the order of fractions of a millimetre and of microseconds, respectively, or less, and are defined in relation to the spatial and time scales of the microscopic systems. In large-scale oceanography or global climate dynamics problems, the scales of interest are of the order of thousands of kilometres in space and many years in time, and are compared to the local and daily/monthly scales of atmosphere and ocean dynamics. In all these cases a Zwanzig projection approach is, at least in principle, an effective tool to obtain classes of universal smooth “large scale” dynamics for the few degrees of freedom of interest, starting from the complex dynamics of the whole (usually many-degrees-of-freedom) system. The projection approach leads to a very complex calculus with differential operators, which is drastically simplified when the basic dynamics of the system of interest is Hamiltonian, as happens in foundation of Thermodynamics problems. However, in geophysical Fluid Dynamics, Biology, and most physical problems, the fundamental building-block equations of motion have a non-Hamiltonian structure. Thus, to continue to apply the useful projection approach in these cases as well, we exploit the generalization of the Hamiltonian formalism given by the Lie algebra of dissipative differential operators. In this way, we are able to deal analytically with the series of differential operators stemming from the projection approach applied to these general cases. We then apply this formalism to obtain some relevant results concerning the statistical properties of the El Niño Southern Oscillation (ENSO).

]]>Entropy doi: 10.3390/e19070291

Authors: Francesco Villecco Arcangelo Pellegrino

In this paper, the use of the MaxInf Principle in real optimization problems is investigated for engineering applications, where the current design solution is actually an engineering approximation. In industrial manufacturing, multibody system simulations can be used to develop new machines and mechanisms by virtual prototyping, where an axiomatic design can be employed to analyze the independence of elements and the complexity of connections forming a general mechanical system. In the classic theories of Fisher and Wiener-Shannon, information is a measure of only probabilistic and repetitive events. However, the idea of information is broader than probability alone. Thus, the Wiener-Shannon axioms can be extended to non-probabilistic events, and it is possible to introduce a theory of information for non-repetitive events as a measure of the reliability of data for complex mechanical systems. To this end, one can devise engineering solutions consistent with the values of the design constraints by analyzing the complexity of the relation matrix and using the idea of information in the metric space. The final solution gives the entropic measure of epistemic uncertainties, which can be used in multibody system models analyzed with an axiomatic design.

]]>Entropy doi: 10.3390/e19070301

Authors: Alberto Barchielli Matteo Gregoratti Alessandro Toigo

Heisenberg’s uncertainty principle has recently led to general measurement uncertainty relations for quantum systems: incompatible observables can be measured jointly or in sequence only with some unavoidable approximation, which can be quantified in various ways. The relative entropy is the natural theoretical quantifier of the information loss when a ‘true’ probability distribution is replaced by an approximating one. In this paper, we provide a lower bound for the amount of information that is lost by replacing the distributions of the sharp position and momentum observables, as they could be obtained in two separate experiments, by the marginals of any smeared joint measurement. The bound is obtained by introducing an entropic error function and optimizing it over a suitable class of covariant approximate joint measurements. We work out in full two cases of target observables: (1) n-dimensional position and momentum vectors; (2) two components of position and momentum along different directions. In case (1), we connect the quantum bound to the dimension n; in case (2), going from parallel to orthogonal directions, we show the transition from highly incompatible observables to compatible ones. For simplicity, we develop the theory only for Gaussian states and measurements.

]]>Entropy doi: 10.3390/e19070300

Authors: Catalina Curceanu Hexi Shi Sergio Bartalucci Sergio Bertolucci Massimiliano Bazzi Carolina Berucci Mario Bragadireanu Michael Cargnelli Alberto Clozza Luca De Paolis Sergio Di Matteo Jean-Pierre Egger Carlo Guaraldo Mihail Iliescu Johann Marton Matthias Laubenstein Edoardo Milotti Marco Miliucci Andreas Pichler Dorel Pietreanu Kristian Piscicchia Alessandro Scordo Diana Sirghi Florin Sirghi Laura Sperandio Oton Vazquez Doce Eberhard Widmann Johann Zmeskal

The validity of the Pauli exclusion principle—a building block of Quantum Mechanics—is tested for electrons. The VIP (violation of Pauli exclusion principle) and its follow-up VIP-2 experiments at the Laboratori Nazionali del Gran Sasso search for X-rays from copper atomic transitions that are prohibited by the Pauli exclusion principle. The candidate events—if they exist—originate from the transition of a 2p-orbit electron to the ground state, which is already occupied by two electrons. The present limit on the probability of Pauli exclusion principle violation for electrons, set by the VIP experiment, is 4.7 × 10^−29. We report a first result from the VIP-2 experiment improving on the VIP limit, which reinforces the final goal of achieving a two-orders-of-magnitude gain in the long run.

]]>Entropy doi: 10.3390/e19070299

Authors: Henry Lin Max Tegmark

We show that the mutual information between two symbols, as a function of the number of symbols between the two, decays exponentially in any probabilistic regular grammar, but can decay like a power law for a context-free grammar. This result about formal languages is closely related to a well-known result in classical statistical mechanics that there are no phase transitions in dimensions fewer than two. It is also related to the emergence of power law correlations in turbulence and cosmological inflation through recursive generative processes. We elucidate these physics connections and comment on potential applications of our results to machine learning tasks like training artificial recurrent neural networks. Along the way, we introduce a useful quantity, which we dub the rational mutual information, and discuss generalizations of our claims involving more complicated Bayesian networks.
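The exponential decay claimed for probabilistic regular grammars can be illustrated with the simplest such process, a two-state Markov chain. The sketch below (names and the particular transition matrix are illustrative) computes the mutual information I(X_0; X_d) exactly from the d-step transition matrix and the stationary distribution, and checks that it falls off with the distance d:

```python
import math

def matpow2(P, d):
    """d-th power of a 2x2 stochastic matrix via naive repeated multiply."""
    R = [[1.0, 0.0], [0.0, 1.0]]
    for _ in range(d):
        R = [[sum(R[i][k] * P[k][j] for k in range(2)) for j in range(2)]
             for i in range(2)]
    return R

def mi_at_distance(P, d):
    """Mutual information I(X_0; X_d) for a stationary 2-state Markov chain."""
    # Stationary distribution of [[1-a, a], [b, 1-b]] is (b, a) / (a + b).
    a, b = P[0][1], P[1][0]
    pi = [b / (a + b), a / (a + b)]
    Pd = matpow2(P, d)
    return sum(pi[i] * Pd[i][j] * math.log2(Pd[i][j] / pi[j])
               for i in range(2) for j in range(2) if Pd[i][j] > 0)

P = [[0.9, 0.1], [0.2, 0.8]]
mis = [mi_at_distance(P, d) for d in (1, 2, 4, 8)]
# By the data processing inequality, MI must shrink with d; for a mixing
# chain the decay rate is set by the second eigenvalue (here 0.7).
assert all(x > y for x, y in zip(mis, mis[1:]))
print([round(x, 5) for x in mis])
```

The contrast drawn in the abstract is that no choice of transition matrix can make this decay slower than exponential, whereas context-free (hierarchical) generative processes can sustain power-law correlations.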

]]>Entropy doi: 10.3390/e19070298

Authors: Nikolaos Kalogeropoulos

We present an argument which purports to show that the use of the standard Legendre transform in non-additive Statistical Mechanics is not appropriate. For concreteness, we use as paradigm, the case of systems which are conjecturally described by the (non-additive) Tsallis entropy. We point out the form of the modified Legendre transform that should be used, instead, in the non-additive thermodynamics induced by the Tsallis entropy. We comment on more general implications of this proposal for the thermodynamics of “complex systems”.

]]>Entropy doi: 10.3390/e19070297

Authors: Yuriy Povstenko Tamara Kyrylych

Two approaches resulting in two different generalizations of the space-time-fractional advection-diffusion equation are discussed. The Caputo time-fractional derivative and Riesz fractional Laplacian are used. The fundamental solutions to the corresponding Cauchy and source problems in the case of one spatial variable are studied using the Laplace transform with respect to time and the Fourier transform with respect to the spatial coordinate. The numerical results are illustrated graphically.

]]>Entropy doi: 10.3390/e19070295

Authors: Przemysław Nowakowski

The integration of embodied and computational approaches to cognition requires that non-neural body parts be described as parts of a computing system which realizes cognitive processing. In this paper, based on research on morphological computation and the ecology of vision, I argue that non-neural body parts could be described as parts of a computational system, but that they do not realize computation autonomously, only in connection with some kind of—even in the simplest form—central control system. Finally, I integrate the proposal defended in the paper with the contemporary mechanistic approach to wide computation.

]]>Entropy doi: 10.3390/e19070294

Authors: Anastasia Georgiou Juan Bello-Rivas Charles Gear Hau-Tieng Wu Eliodoro Chiavazzo Ioannis Kevrekidis

In recent work, we have illustrated the construction of an exploration geometry on free energy surfaces: the adaptive computer-assisted discovery of an approximate low-dimensional manifold on which the effective dynamics of the system evolves. Constructing such an exploration geometry involves geometry-biased sampling (through both appropriately initialized unbiased molecular dynamics and restraining potentials) and machine learning techniques to organize the intrinsic geometry of the data resulting from the sampling (in particular, diffusion maps, possibly enhanced through an appropriate Mahalanobis-type metric). In this contribution, we detail a method for exploring the conformational space of a stochastic gradient system whose effective free energy surface depends on a smaller number of degrees of freedom than the dimension of the phase space. Our approach comprises two steps. First, we study the local geometry of the free energy landscape using diffusion maps on samples computed through stochastic dynamics. This allows us to automatically identify the relevant coarse variables. Next, we use the information garnered in the previous step to construct a new set of initial conditions for subsequent trajectories. These initial conditions are computed so as to explore the accessible conformational space more efficiently than by continuing the previous, unbiased simulations. We showcase this method on a representative test system.
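
As a rough sketch of the first step, a bare-bones diffusion-map construction (Gaussian kernel, row normalization, eigendecomposition) fits in a few lines of Python; the toy data and the `epsilon` value below are invented, and the paper's variant additionally uses restrained sampling and Mahalanobis-type metrics:

```python
import numpy as np

def diffusion_map(points, epsilon, n_coords=2):
    """Bare-bones diffusion map: Gaussian kernel, row-normalize to a Markov
    matrix, and take the leading non-trivial eigenvectors as coordinates."""
    d2 = np.sum((points[:, None, :] - points[None, :, :]) ** 2, axis=-1)
    kernel = np.exp(-d2 / epsilon)
    markov = kernel / kernel.sum(axis=1, keepdims=True)   # row-stochastic
    vals, vecs = np.linalg.eig(markov)
    order = np.argsort(-vals.real)
    # order[0] is the trivial constant eigenvector (eigenvalue 1); skip it
    idx = order[1:n_coords + 1]
    return vecs.real[:, idx] * vals.real[idx]

# Toy data: noisy points along a one-dimensional curve embedded in 3-D
rng = np.random.default_rng(0)
t = np.linspace(0.0, 3.0, 60)
points = np.c_[np.cos(t), np.sin(t), 0.05 * rng.standard_normal(60)]
coords = diffusion_map(points, epsilon=0.5)
```

The leading non-trivial eigenvectors then play the role of the automatically identified coarse variables.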

]]>Entropy doi: 10.3390/e19070296

Authors: Omer Acan Dumitru Baleanu Maysaa Mohamed Al Qurashi Mehmet Giyas Sakar

In this paper, we propose a new type of (n + 1)-dimensional reduced differential transform method (RDTM) based on a local fractional derivative (LFD) to solve (n + 1)-dimensional local fractional partial differential equations (PDEs) on Cantor sets. The presented method is named the (n + 1)-dimensional local fractional reduced differential transform method (LFRDTM). First, the theory, its proofs and some basic properties of this procedure are given. To illustrate the introduced method clearly, we apply it to the (n + 1)-dimensional fractal heat-like equations (HLEs) and wave-like equations (WLEs). The applications show that this new technique is efficient, simply applicable and powerful for (n + 1)-dimensional local fractional problems.

]]>Entropy doi: 10.3390/e19060290

Authors: Shihuai Zhou Carl Tackes Ralph Napolitano

The liquid-phase enthalpy of mixing for Al–Tb alloys is measured for 3, 5, 8, 10, and 20 at% Tb at selected temperatures in the range from 1364 to 1439 K. Methods include isothermal solution calorimetry and isoperibolic electromagnetic levitation drop calorimetry. Mixing enthalpy is determined relative to the unmixed pure (Al and Tb) components. The required formation enthalpy for the Al3Tb phase is computed from first-principles calculations. Based on our measurements, three different semi-empirical solution models are offered for the excess free energy of the liquid, including regular, subregular, and associate model formulations. These models are also compared with the Miedema model prediction of mixing enthalpy.

]]>Entropy doi: 10.3390/e19060292

Authors: Edgar Parker

An alternative derivation of the yield curve based on entropy or the loss of information as it is communicated through time is introduced. Given this focus on entropy growth in communication, the Shannon entropy will be utilized. Additionally, Shannon entropy’s close relationship to the Kullback–Leibler divergence is used to provide a more precise understanding of this new yield curve. The derivation of the entropic yield curve is completed with the use of the Burnashev reliability function, which serves as a weighting between the true and error distributions. The deep connections between the entropic yield curve and the popular Nelson–Siegel specification are also examined. Finally, this entropically derived yield curve is used to provide an estimate of the economy’s implied information processing ratio. This information theoretic ratio offers a new causal link between bond and equity markets, and is a valuable new tool for the modeling and prediction of stock market behavior.
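
The information-theoretic ingredients named here—Shannon entropy, the Kullback–Leibler divergence, and their cross-entropy relationship—can be checked with a small numerical sketch (the two distributions below are invented):

```python
import math

def entropy_bits(p):
    """Shannon entropy H(p) in bits."""
    return -sum(pi * math.log2(pi) for pi in p if pi > 0)

def kl_bits(p, q):
    """Kullback-Leibler divergence D(p || q) in bits."""
    return sum(pi * math.log2(pi / qi) for pi, qi in zip(p, q) if pi > 0)

p = [0.5, 0.3, 0.2]   # "true" distribution (invented)
q = [0.4, 0.4, 0.2]   # "error"/model distribution (invented)

# Cross-entropy decomposes as H(p, q) = H(p) + D(p || q)
cross = entropy_bits(p) + kl_bits(p, q)
```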

]]>Entropy doi: 10.3390/e19060288

Authors: Loïc Devilliers Stéphanie Allassonnière Alain Trouvé Xavier Pennec

We tackle the problem of template estimation when data have been randomly deformed under a group action in the presence of noise. In order to estimate the template, one often minimizes the variance when the influence of the transformations has been removed (computation of the Fréchet mean in the quotient space). The consistency bias is defined as the distance (possibly zero) between the orbit of the template and the orbit of one element which minimizes the variance. In the first part, we restrict ourselves to isometric group actions, in which case the Hilbertian distance is invariant under the group action. We establish an asymptotic behavior of the consistency bias which is linear with respect to the noise level. As a result, the inconsistency is unavoidable as soon as the noise level is large enough. In practice, template estimation with a finite sample is often done with an algorithm called “max-max”. In the second part, still in the case of an isometric action of a finite group, we show the convergence of this algorithm to an empirical Karcher mean. Our numerical experiments show that the bias observed in practice cannot be attributed to the small sample size or to a convergence problem, but is indeed due to the previously studied inconsistency. In the third part, we also present some insights into the case of a distance that is not invariant with respect to the group action. We will see that the inconsistency still holds as soon as the noise level is large enough. Moreover, we prove the inconsistency even when a regularization term is added.
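
A toy rendition of the “max-max” algorithm for the finite isometric group of cyclic shifts (our own illustrative sketch, not the authors' implementation) alternates registering each sample to the current template with re-averaging the registered samples:

```python
import numpy as np

def max_max(samples, n_iter=10):
    """Toy 'max-max' template estimation under the group of cyclic shifts.
    Alternates (1) registration of each sample to the current template and
    (2) averaging of the registered samples."""
    template = samples[0].copy()
    for _ in range(n_iter):
        aligned = []
        for y in samples:
            shifts = [np.roll(y, s) for s in range(len(y))]
            # best isometry: maximizing the inner product with the template
            # is equivalent to minimizing the distance (norms are fixed)
            aligned.append(max(shifts, key=lambda z: float(z @ template)))
        template = np.mean(aligned, axis=0)
    return template

# Noise-free check: samples are random cyclic shifts of one template
t = np.array([3.0, 1.0, 0.0, 0.0, 0.0, 0.0])
samples = np.array([np.roll(t, k) for k in (0, 2, 5, 1)])
est = max_max(samples)
```

In the noise-free case the algorithm recovers the template exactly; the inconsistency studied in the paper appears once noise is added.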

]]>Entropy doi: 10.3390/e19060289

Authors: Reiner Lenz

Many signals can be described as functions on the unit disk (ball). In the framework of group representations it is well-known how to construct Hilbert-spaces containing these functions that have the groups SU(1,N) as their symmetry groups. One illustration of this construction is three-dimensional color spaces in which chroma properties are described by points on the unit disk. A combination of principal component analysis and the Perron-Frobenius theorem can be used to show that perspective projections map positive signals (i.e., functions with positive values) to a product of the positive half-axis and the unit ball. The representation theory (harmonic analysis) of the group SU(1,1) leads to an integral transform, the Mehler–Fock transform (MFT), that decomposes functions, depending on the radial coordinate only, into combinations of associated Legendre functions. This transformation is applied to kernel density estimators of probability distributions on the unit disk. It is shown that the transform separates the influence of the kernel function and the measured data. The application of the transform is illustrated by studying the statistical distribution of RGB vectors obtained from a common set of object points under different illuminants.

]]>Entropy doi: 10.3390/e19060286

Authors: Kenric Nelson

An approach to the assessment of probabilistic inference is described which quantifies the performance on the probability scale. From both information theory and Bayesian theory, the central tendency of an inference is proven to be the geometric mean of the probabilities reported for the actual outcome and is referred to as the “Accuracy”. Upper and lower error bars on the accuracy are provided by the arithmetic mean and the −2/3 mean. The arithmetic mean is called the “Decisiveness” due to its similarity with the cost of a decision, and the −2/3 mean is called the “Robustness” due to its sensitivity to outlier errors. Visualization of inference performance is facilitated by plotting the reported model probabilities versus the histogram-calculated source probabilities. The visualization of the calibration between model and source is summarized on both axes by the arithmetic, geometric, and −2/3 means. From information theory, the performance of the inference is related to the cross-entropy between the model and source distributions. Just as cross-entropy is the sum of the entropy and the divergence, the accuracy of a model can be decomposed into a component due to the source uncertainty and the divergence between the source and model. Translated to the probability domain, these quantities are plotted as the average model probability versus the average source probability. The divergence probability is the average model probability divided by the average source probability. When an inference is over/under-confident, the arithmetic mean of the model increases/decreases, while the −2/3 mean decreases/increases, respectively.
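
All three summaries are generalized (power) means of the probabilities reported for the actual outcomes, so they are straightforward to compute; in this sketch the reported probabilities are invented:

```python
import math

def power_mean(probs, p):
    """Generalized (power) mean; p = 0 is the geometric-mean limit."""
    if p == 0:
        return math.exp(sum(math.log(v) for v in probs) / len(probs))
    return (sum(v ** p for v in probs) / len(probs)) ** (1.0 / p)

# Probabilities a model reported for the outcomes that actually occurred
reported = [0.9, 0.7, 0.8, 0.3]

accuracy = power_mean(reported, 0)          # geometric mean
decisiveness = power_mean(reported, 1)      # arithmetic mean
robustness = power_mean(reported, -2 / 3)   # penalizes outlier errors
```

The power-mean inequality guarantees robustness ≤ accuracy ≤ decisiveness, which is why the latter two serve as lower and upper error bars.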

]]>Entropy doi: 10.3390/e19060265

Authors: Lei Chen Cheng Sun Guobo Wang Hui Xie Zhenyao Shen

Event-based runoff–pollutant relationships have been the key to water quality management, but the scarcity of measured data results in poor model performance, especially for multiple rainfall events. In this study, a new framework was proposed for event-based non-point source (NPS) prediction and evaluation. The artificial neural network (ANN) was used to extend the runoff–pollutant relationship from complete data events to other data-scarce events. The interpolation method was then used to solve the problem of tail deviation in the simulated pollutographs. In addition, the entropy method was utilized to train the ANN for comprehensive evaluations. A case study was performed in the Three Gorges Reservoir Region, China. Results showed that the ANN performed well in the NPS simulation, especially for light rainfall events, and the phosphorus predictions were always more accurate than the nitrogen predictions under scarce data conditions. In addition, peak pollutant data scarcity had a significant impact on the model performance. Furthermore, traditional indicators can lead to a certain information loss during model evaluation, whereas the entropy weighting method can provide a more accurate model evaluation. These results are valuable for monitoring schemes and the quantification of event-based NPS pollution, especially in data-poor catchments.
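
The entropy weighting idea can be sketched generically as follows (an illustrative formulation, not necessarily the study's exact variant, assuming a non-negative score matrix with one row per event and one column per indicator): indicators whose scores vary more across events receive larger weights.

```python
import numpy as np

def entropy_weights(scores):
    """Entropy weight method: columns (indicators) whose values are spread
    more unevenly across rows (events) carry more information -> larger weight."""
    P = scores / scores.sum(axis=0)                # column-wise share per event
    n = scores.shape[0]
    with np.errstate(divide="ignore", invalid="ignore"):
        e = -np.nansum(P * np.log(P), axis=0) / np.log(n)   # entropy in [0, 1]
    d = 1.0 - e                                    # degree of diversification
    return d / d.sum()

# Invented scores: indicator 0 is constant (uninformative), indicator 1 varies
scores = np.array([[1.0, 1.0],
                   [1.0, 2.0],
                   [1.0, 3.0]])
w = entropy_weights(scores)
```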

]]>Entropy doi: 10.3390/e19060287

Authors: Qin Wang Guangping Zeng Xuyan Tu

In traditional information technology project portfolio management (ITPPM), managers often pay more attention to the optimization of portfolio selection in the initial stage. In fact, during the portfolio implementation process, there are still issues to be optimized. Organizing cooperation will enhance efficiency, although it brings more immediate risk due to the complex variety of links between projects. In order to balance efficiency and risk, an optimization method is presented based on complex network theory and entropy, which will assist portfolio managers in recognizing the structure of the portfolio and determining the cooperation range. Firstly, a complex network model for an IT project portfolio is constructed, in which each project is simulated as an artificial life agent. At the same time, the portfolio is viewed as a small-scale society. Following this, social network analysis is used to detect and divide communities in order to estimate the roles of projects between different portfolios. Based on these, the efficiency and the risk are measured using entropy and are balanced by searching for adequate hierarchical community divisions. Thus, the activities of cooperation in organizations, risk management, and so on—which are usually viewed as an important art—can be discussed and conducted based on quantitative calculations.
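
One simple way to attach an entropy number to a project network—offered here purely as a hypothetical proxy, not the paper's measure—is the Shannon entropy of the degree distribution: a hub-dominated portfolio concentrates links (lower entropy) while a chain spreads them more evenly (higher entropy).

```python
import math
from collections import Counter

def degree_entropy(edges):
    """Shannon entropy (bits) of a dependency network's degree distribution,
    used here as a rough proxy for how evenly inter-project links are spread."""
    degree = Counter()
    for a, b in edges:
        degree[a] += 1
        degree[b] += 1
    total = sum(degree.values())
    return -sum((d / total) * math.log2(d / total) for d in degree.values())

# Hypothetical 4-project portfolios: a hub-centred star vs. a chain
star = [("hub", "p1"), ("hub", "p2"), ("hub", "p3")]
chain = [("p1", "p2"), ("p2", "p3"), ("p3", "p4")]
```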

]]>Entropy doi: 10.3390/e19060285

Authors: George Livadiotis

Space plasmas are frequently described by kappa distributions. Non-extensive statistical mechanics involves the maximization of the Tsallis entropic form under the constraints of the canonical ensemble, also considering a dyadic formalism between the ordinary and escort probability distributions. This paper addresses the statistical origin of kappa distributions, and shows that they can be connected with non-extensive statistical mechanics without considering the dyadic formalism of ordinary/escort distributions. While this concept does significantly simplify the usage of the theory, it comes at the cost of requiring a dyadic entropic formulation in order to preserve the consistency between statistical mechanics and thermodynamics. Therefore, the simplification of the theory by avoiding the dyadic formalism is impossible within the framework of non-extensive statistical mechanics.
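
For background (standard relations, not this paper's derivation), the Tsallis entropic form and the kappa distribution are connected, in one widespread convention, by

```latex
S_q = k_B\,\frac{1 - \sum_i p_i^{\,q}}{q - 1},
\qquad
P(E) \propto \left( 1 + \frac{E}{\kappa\, k_B T} \right)^{-(\kappa + 1)},
\qquad
\kappa = \frac{1}{q - 1},
```

so the kappa index and the entropic index q parameterize the same departure from Boltzmann–Gibbs statistics.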

]]>Entropy doi: 10.3390/e19060284

Authors: Andrea Crespo Daniel Álvarez Gonzalo C. Gutiérrez-Tobal Fernando Vaquerizo-Villar Verónica Barroso-García María L. Alonso-Álvarez Joaquín Terán-Santos Roberto Hornero Félix del Campo

Untreated paediatric obstructive sleep apnoea syndrome (OSAS) can severely affect the development and quality of life of children. In-hospital polysomnography (PSG) is the gold standard for a definitive diagnosis, though it is relatively unavailable and particularly intrusive. Nocturnal portable oximetry has emerged as a reliable technique for OSAS screening. Nevertheless, additional evidence is required. Our study is aimed at assessing the usefulness of multiscale entropy (MSE) to characterise oximetric recordings. We hypothesise that MSE could provide relevant information on blood oxygen saturation (SpO2) dynamics in the detection of childhood OSAS. In order to achieve this goal, a dataset composed of unattended SpO2 recordings from 50 children showing clinical suspicion of OSAS was analysed. SpO2 was parameterised by means of MSE and conventional oximetric indices. An optimum feature subset composed of five MSE-derived features and four conventional clinical indices was obtained using automated bidirectional stepwise feature selection. Logistic regression (LR) was used for classification. Our optimum LR model reached 83.5% accuracy (84.5% sensitivity and 83.0% specificity). Our results suggest that MSE provides relevant information from oximetry that is complementary to conventional approaches. Therefore, MSE may be useful to improve the diagnostic ability of unattended oximetry as a simplified screening test for childhood OSAS.
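
Multiscale entropy itself is straightforward to sketch: coarse-grain the series at each scale, then compute sample entropy. A minimal illustrative implementation follows (the parameter choices m = 2 and r = 0.2·std are conventional defaults, not necessarily those of the study):

```python
import numpy as np

def sample_entropy(x, m=2, r=0.2):
    """SampEn(m, r): negative log of the conditional probability that sequences
    matching for m points (Chebyshev distance <= r*std) also match for m + 1."""
    x = np.asarray(x, dtype=float)
    tol = r * x.std()
    def matches(length):
        templates = np.array([x[i:i + length] for i in range(len(x) - length + 1)])
        count = 0
        for i in range(len(templates) - 1):
            dist = np.max(np.abs(templates[i + 1:] - templates[i]), axis=1)
            count += int(np.sum(dist <= tol))
        return count
    b, a = matches(m), matches(m + 1)
    return -np.log(a / b) if a > 0 and b > 0 else float("inf")

def multiscale_entropy(x, scales=range(1, 6), m=2, r=0.2):
    """Coarse-grain the series at each scale (non-overlapping means), then SampEn."""
    x = np.asarray(x, dtype=float)
    out = []
    for s in scales:
        n = len(x) // s
        coarse = x[:n * s].reshape(n, s).mean(axis=1)
        out.append(sample_entropy(coarse, m, r))
    return out
```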

]]>Entropy doi: 10.3390/e19060283

Authors: Donghuo Zeng Chengjie Sun Lei Lin Bingquan Liu

Drug-Named Entity Recognition (DNER) for biomedical literature is a fundamental facilitator of Information Extraction. For this reason, the DDIExtraction2011 (DDI2011) and DDIExtraction2013 (DDI2013) challenges introduced a task aimed at the recognition of drug names. State-of-the-art DNER approaches heavily rely on hand-engineered features and domain-specific knowledge, which are difficult to collect and define. Therefore, we offer an approach that automatically explores word- and character-level features: a recurrent neural network using bidirectional long short-term memory (LSTM) with Conditional Random Fields decoding (LSTM-CRF). Two kinds of word representations are used in this work: word embeddings, which are trained from a large amount of text, and character-based representations, which can capture the orthographic features of words. Experimental results on the DDI2011 and DDI2013 datasets show the effectiveness of the proposed LSTM-CRF method. Our method outperforms the best system in the DDI2013 challenge.
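
The CRF decoding step can be illustrated in isolation: given per-position emission scores (here they would come from the BiLSTM) and tag-transition scores, Viterbi decoding recovers the best-scoring tag sequence. This is a generic didactic sketch, not the paper's network:

```python
import numpy as np

def viterbi(emissions, transitions):
    """Best-scoring tag sequence for a linear-chain CRF.
    emissions: (T, K) per-position tag scores; transitions: (K, K) scores."""
    T, K = emissions.shape
    score = emissions[0].copy()
    back = np.zeros((T, K), dtype=int)
    for t in range(1, T):
        # total[i, j]: best score ending in tag j at t, coming from tag i
        total = score[:, None] + transitions + emissions[t][None, :]
        back[t] = total.argmax(axis=0)
        score = total.max(axis=0)
    path = [int(score.argmax())]
    for t in range(T - 1, 0, -1):      # follow backpointers
        path.append(int(back[t, path[-1]]))
    return path[::-1]
```

With strongly negative off-diagonal transition scores, the decoder resists tag switches even when emissions suggest them—the property that makes CRF decoding preferable to per-token argmax for entity spans.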

]]>Entropy doi: 10.3390/e19060282

Authors: Ainara Garde Parastoo Dehkordi John Ansermino Guy Dumont

Pulse rate variability (PRV), an alternative measure of heart rate variability (HRV), is altered during obstructive sleep apnea. Correntropy spectral density (CSD) is a novel spectral analysis that includes nonlinear information. We recruited 160 children and recorded SpO2 and photoplethysmography (PPG), alongside standard polysomnography. PPG signals were divided into 1-min epochs and apnea/hypopnea (A/H) epochs were labeled. CSD was applied to the pulse-to-pulse interval time series (PPIs) and five features were extracted: the total spectral power (TP: 0.01–0.6 Hz), the power in the very low frequency band (VLF: 0.01–0.04 Hz), the normalized power in the low and high frequency bands (LFn: 0.04–0.15 Hz, HFn: 0.15–0.6 Hz), and the LF/HF ratio. Nonlinearity was assessed with the surrogate data technique. Multivariate logistic regression models were developed for CSD and power spectral density (PSD) analysis to detect epochs with A/H events. The CSD-based features and model identified epochs with and without A/H events more accurately relative to PSD-based analysis (area under the curve (AUC) 0.72 vs. 0.67) due to the nonlinearity of the data. In conclusion, CSD-based PRV analysis provided enhanced performance in detecting A/H epochs; however, a combination with overnight SpO2 analysis is suggested for optimal results.
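
The core of a correntropy spectral density can be sketched as follows—estimate the Gaussian-kernel correntropy function over lags, center it, and Fourier-transform it. This is a simplified illustration (centering conventions and kernel-bandwidth selection vary; the paper's exact estimator may differ):

```python
import numpy as np

def correntropy_spectrum(x, max_lag=64, sigma=1.0):
    """Sketch of a correntropy spectral density: estimate
    V(l) = E[exp(-(x[n] - x[n+l])^2 / (2*sigma^2))] over lags,
    center it, and take the magnitude of its Fourier transform."""
    x = np.asarray(x, dtype=float)
    v = np.empty(max_lag + 1)
    for lag in range(max_lag + 1):
        d = x[: len(x) - lag] - x[lag:]
        v[lag] = np.mean(np.exp(-d ** 2 / (2.0 * sigma ** 2)))
    v -= v.mean()                     # crude centering before the FFT
    return np.abs(np.fft.rfft(v))

# Toy pulse-interval-like series: a slow oscillation
spectrum = correntropy_spectrum(np.sin(0.3 * np.arange(500)))
```

Unlike the ordinary autocorrelation underlying PSD, the kernel makes V(l) depend on higher-order moments of the increments, which is the source of the extra nonlinear information.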

]]>Entropy doi: 10.3390/e19060278

Authors: LaVar Isaacson

The results of the computation of entropy generation rates through the dissipation of ordered regions within selected helium boundary layer flows are presented. Entropy generation rates in helium boundary layer flows for five cases of increasing temperature and pressure are considered. The basic format of a turbulent spot is used as the flow model. Statistical processing of the time-dependent series solutions of the nonlinear, coupled Lorenz-type differential equations for the spectral velocity wave components in the three-dimensional boundary layer configuration yields the local volumetric entropy generation rates. Extension of the computational method to the transition from laminar to fully turbulent flow is discussed.

]]>Entropy doi: 10.3390/e19060281

Authors: Zhan Jin Yingsong Li Yanyan Wang

In this paper, a sparse set-membership proportionate normalized least mean square (SM-PNLMS) algorithm integrated with a correntropy induced metric (CIM) penalty is proposed for acoustic channel estimation and echo cancellation. The CIM is used for constructing a new cost function within the kernel framework. The proposed CIM penalized SM-PNLMS (CIMSM-PNLMS) algorithm is derived and analyzed in detail. A desired zero attraction term is put forward in the updating equation of the proposed CIMSM-PNLMS algorithm to force the inactive coefficients to zero. The performance of the proposed CIMSM-PNLMS algorithm is investigated for estimating an underwater communication channel and an echo channel. The obtained results demonstrate that the proposed CIMSM-PNLMS algorithm converges faster and provides a smaller estimation error in comparison with the NLMS, PNLMS, IPNLMS, SM-PNLMS and zero-attracting SM-PNLMS (ZASM-PNLMS) algorithms.
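
For orientation, the baseline NLMS recursion that all of these variants refine can be sketched as follows (the sparse channel here is invented; the paper's CIMSM-PNLMS additionally applies proportionate step-size gains, a set-membership error bound and the CIM zero-attraction term):

```python
import numpy as np

def nlms(x, d, taps=16, mu=0.5, eps=1e-6):
    """Baseline normalized LMS adaptive filter (the reference point that
    proportionate / set-membership / CIM variants improve on)."""
    w = np.zeros(taps)
    for n in range(taps - 1, len(x)):
        u = x[n - taps + 1:n + 1][::-1]      # regressor [x[n], ..., x[n-taps+1]]
        e = d[n] - w @ u                     # a priori estimation error
        w += mu * e * u / (u @ u + eps)      # normalized gradient step
    return w

# Identify a sparse 16-tap echo channel (invented) from white-noise input
rng = np.random.default_rng(1)
h = np.zeros(16)
h[2], h[7] = 1.0, -0.5
x = rng.standard_normal(5000)
d = np.convolve(x, h)[:len(x)]              # noiseless echo-path output
w_hat = nlms(x, d)
```

Because only two of the sixteen taps are active, a uniform step size wastes adaptation effort on the zero taps—exactly the inefficiency that proportionate gains and zero attraction address.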

]]>Entropy doi: 10.3390/e19060280

Authors: Kongming Guo Jun Jiang Yalan Xu

A semi-analytical method is proposed to approximately calculate the stochastic quasi-periodic responses of limit cycles in non-equilibrium dynamical systems excited by periodic forces and weak random fluctuations. First, a kind of 1/N-stroboscopic map is introduced to discretize the quasi-periodic torus into closed curves, which are then approximated by periodic points. Using a stochastic sensitivity function of discrete time systems, the transverse dispersion of these closed curves can be quantified. Furthermore, combined with the longitudinal distribution of the closed curves, the probability density function of these closed curves in stroboscopic sections can be determined. The validity of this approach is shown through a van der Pol oscillator and a Brusselator.

]]>Entropy doi: 10.3390/e19060277

Authors: Zhijian Wang Junyuan Wang Yanfei Kou Jiping Zhang Shaohui Ning Zhifang Zhao

In view of the problem that the fault signal of a rolling bearing is weak and the fault features are difficult to extract in a strong noise environment, a method based on minimum entropy deconvolution (MED) and local mean decomposition (LMD) is proposed to extract the weak fault features of rolling bearings. Through the analysis of a simulation signal, we find that LMD alone has many limitations for the feature extraction of weak signals under strong background noise. In order to eliminate the noise interference and extract the characteristics of the weak fault, MED is employed as a pre-filter to remove noise. This method is applied to the weak fault feature extraction of rolling bearings; that is, using MED to reduce the noise of the wind turbine gearbox test bench signals under strong background noise, then using the LMD method to decompose the denoised signals into several product functions (PFs), and finally analyzing the PF components that have strong correlation by a cyclic autocorrelation function. The finding is that the failure of the wind power gearbox originates from the micro-bending of the high-speed shaft and the pitting of the #10 bearing outer race at the output end of the high-speed shaft. A comparison with LMD alone shows the effectiveness of the proposed method. This paper provides a new method for the extraction of multiple faults and weak features in strong background noise.

]]>Entropy doi: 10.3390/e19060275

Authors: Christian Bentz Dimitrios Alikaniotis Michael Cysouw Ramon Ferrer-i-Cancho

The choice associated with words is a fundamental property of natural languages. It lies at the heart of quantitative linguistics, computational linguistics and language sciences more generally. Information theory gives us the tools to measure precisely the average amount of choice associated with words: the word entropy. Here, we use three parallel corpora, encompassing ca. 450 million words in 1916 texts and 1259 languages, to tackle some of the major conceptual and practical problems of word entropy estimation: dependence on text size, register, style and estimation method, as well as non-independence of words in co-text. We present two main findings: Firstly, word entropies display relatively narrow, unimodal distributions. There is no language in our sample with a unigram entropy of less than six bits/word. We argue that this is in line with information-theoretic models of communication. Languages are held in a narrow range by two fundamental pressures: word learnability and word expressivity, with a potential bias towards expressivity. Secondly, there is a strong linear relationship between unigram entropies and entropy rates. The entropy difference between words with and without co-textual information is narrowly distributed around ca. three bits/word. In other words, knowing the preceding text reduces the uncertainty of words by roughly the same amount across the languages of the world.
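
The plug-in (maximum-likelihood) unigram entropy estimator at the core of such measurements fits in a few lines; real corpora need bias-corrected estimators, which is part of the "estimation method" dependence the authors study. The toy text below is invented:

```python
import math
from collections import Counter

def unigram_entropy(tokens):
    """Maximum-likelihood (plug-in) estimate of word entropy in bits/word."""
    counts = Counter(tokens)
    n = sum(counts.values())
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

tokens = "the cat sat on the mat and the dog sat on the rug".split()
h = unigram_entropy(tokens)
```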

]]>Entropy doi: 10.3390/e19060276

Authors: Shujun Liu Ting Yang Kui Zhang

In this paper, the noise-enhanced detection problem is investigated for binary hypothesis-testing. The optimal additive noise is determined according to a criterion proposed by DeGroot and Schervish (2011), which aims to minimize the weighted sum of type I and type II error probabilities under constraints on the type I and type II error probabilities. Based on a generic composite hypothesis-testing formulation, the optimal additive noise is obtained. Sufficient conditions are also deduced to verify whether the usage of additive noise can or cannot improve the detectability of a given detector. In addition, some additional results are obtained according to the specifics of the binary hypothesis-testing problem, and an algorithm is developed for finding the corresponding optimal noise. Finally, numerical examples are given to verify the theoretical results, and proofs of the main theorems are presented in the Appendix.
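
The counterintuitive effect—additive noise improving a fixed, suboptimal detector—can be reproduced with a toy Monte Carlo; every number here is invented for illustration. A threshold detector whose threshold sits above the signal misses every H1 sample, but injecting extra noise lets the signal cross the threshold often enough to lower the equal-weight sum of type I and type II error probabilities:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000
tau = 2.0                        # fixed (suboptimal) detection threshold
s = 1.0                          # signal amplitude under H1
w = rng.uniform(-0.5, 0.5, n)    # background noise

# Without additive noise: the signal never crosses the threshold
pd_plain = np.mean(s + w > tau)  # detection prob = 0 -> type II error = 1
pfa_plain = np.mean(w > tau)     # false-alarm prob = 0

# With injected uniform noise: frequent detections, rare false alarms
eta = rng.uniform(0.0, 2.0, n)
pd_noise = np.mean(s + w + eta > tau)
pfa_noise = np.mean(w + eta > tau)

# Equal-weight sum of type I and type II error probabilities
err_plain = pfa_plain + (1.0 - pd_plain)
err_noise = pfa_noise + (1.0 - pd_noise)
```

Here the weighted error drops from 1 (always miss, never false-alarm) to roughly 0.56, so the additive noise strictly improves this particular detector.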

]]>