Table of Contents

Entropy, Volume 20, Issue 3 (March 2018)

  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive the tables of contents of newly released issues.
  • PDF is the official format for papers published in both HTML and PDF forms. To view the papers in PDF format, click on the "PDF Full-text" link and use the free Adobe Reader to open them.
Cover Story: The Integrated Information Theory (IIT) proposes that in order to quantify information integration [...]
Displaying articles 1-65

Research

Open Access Article: Mixture and Exponential Arcs on Generalized Statistical Manifold
Entropy 2018, 20(3), 147; doi:10.3390/e20030147
Received: 19 January 2018 / Revised: 18 February 2018 / Accepted: 21 February 2018 / Published: 25 February 2018
PDF Full-text (301 KB) | HTML Full-text | XML Full-text
Abstract
In this paper, we investigate the mixture arc on generalized statistical manifolds. We ensure that the generalization of the mixture arc is well defined and we are able to provide a generalization of the open exponential arc and its properties. We consider the model of a φ-family of distributions to describe our general statistical model. Full article
Open Access Article: Application of Permutation Entropy and Permutation Min-Entropy in Multiple Emotional States Analysis of RRI Time Series
Entropy 2018, 20(3), 148; doi:10.3390/e20030148
Received: 14 January 2018 / Revised: 14 February 2018 / Accepted: 15 February 2018 / Published: 26 February 2018
PDF Full-text (3110 KB) | HTML Full-text | XML Full-text
Abstract
This study’s aim was to apply permutation entropy (PE) and permutation min-entropy (PME) over an RR interval time series to quantify the changes in cardiac activity among multiple emotional states. Electrocardiogram (ECG) signals were recorded under six emotional states (neutral, happiness, sadness, anger, fear, and disgust) in 60 healthy subjects at a rate of 1000 Hz. For each emotional state, ECGs were recorded for 5 min and the RR interval time series was extracted from these ECGs. The obtained results confirm that PE and PME increase significantly during the emotional states of happiness, sadness, anger, and disgust. Both symbolic quantifiers also increase but not in a significant way for the emotional state of fear. Moreover, it is found that PME is more sensitive than PE for discriminating non-neutral from neutral emotional states. Full article
(This article belongs to the Special Issue Permutation Entropy & Its Interdisciplinary Applications)
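For readers unfamiliar with the two quantifiers named in the abstract, the sketch below shows one minimal way to compute permutation entropy and permutation min-entropy from an RR interval series. The embedding order m, delay tau, and the toy data are illustrative assumptions, not the settings used in the paper.

```python
import math
from collections import Counter

def ordinal_pattern_counts(x, m=3, tau=1):
    """Count ordinal (Bandt-Pompe) patterns of order m with delay tau in the series x."""
    counts = Counter()
    for i in range(len(x) - (m - 1) * tau):
        window = [x[i + j * tau] for j in range(m)]
        counts[tuple(sorted(range(m), key=lambda k: window[k]))] += 1
    return counts

def permutation_entropy(x, m=3, tau=1):
    """Normalized permutation entropy: Shannon entropy of the ordinal-pattern distribution."""
    counts = ordinal_pattern_counts(x, m, tau)
    total = sum(counts.values())
    h = -sum((c / total) * math.log(c / total) for c in counts.values())
    return h / math.log(math.factorial(m))

def permutation_min_entropy(x, m=3, tau=1):
    """Normalized permutation min-entropy: -log of the probability of the most frequent pattern."""
    counts = ordinal_pattern_counts(x, m, tau)
    p_max = max(counts.values()) / sum(counts.values())
    return -math.log(p_max) / math.log(math.factorial(m))

# Toy RR interval series in seconds; a real analysis would use 5-min ECG-derived RR data.
rr = [0.81, 0.84, 0.80, 0.86, 0.83, 0.85, 0.79, 0.88, 0.82, 0.87]
print(permutation_entropy(rr), permutation_min_entropy(rr))
```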

Open Access Feature Paper Article: Hierarchy of Relaxation Times and Residual Entropy: A Nonequilibrium Approach
Entropy 2018, 20(3), 149; doi:10.3390/e20030149
Received: 29 January 2018 / Revised: 21 February 2018 / Accepted: 22 February 2018 / Published: 26 February 2018
PDF Full-text (455 KB) | HTML Full-text | XML Full-text
Abstract
We consider nonequilibrium (NEQ) states such as supercooled liquids and glasses that are described with the use of internal variables. We classify the latter by the state-dependent hierarchy of relaxation times to assess their relevance for irreversible contributions. Given an observation time τ_obs, we determine the window of relaxation times that divide the internal variables into active and inactive groups, the former playing a central role in the NEQ thermodynamics. Using this thermodynamics, we determine (i) a bound on the NEQ entropy and on the residual entropy and (ii) the nature of the isothermal relaxation of the entropy and the enthalpy in accordance with the second law. A theory that violates the second law such as the entropy loss view is shown to be internally inconsistent if we require it to be consistent with experiments. The inactive internal variables still play an indirect role in determining the temperature T(t) and the pressure P(t) of the system, which deviate from their external values. Full article
(This article belongs to the Special Issue Residual Entropy and Nonequilibrium States)

Open Access Article: Big Data Blind Separation
Entropy 2018, 20(3), 150; doi:10.3390/e20030150
Received: 8 December 2017 / Revised: 23 February 2018 / Accepted: 23 February 2018 / Published: 27 February 2018
PDF Full-text (5019 KB) | HTML Full-text | XML Full-text
Abstract
Data or signal separation is one of the critical areas of data analysis. In this work, the problem of non-negative data separation is considered. The problem can be briefly described as follows: given X ∈ ℝ^(m×N), find A ∈ ℝ^(m×n) and S ∈ ℝ_+^(n×N) such that X = AS. Specifically, the problem with sparse locally dominant sources is addressed in this work. Although the problem is well studied in the literature, a test to validate the locally dominant assumption is not yet available. In addition to that, the typical approaches available in the literature sequentially extract the elements of the mixing matrix. In this work, a mathematical modeling-based approach is presented that can simultaneously validate the assumption, and separate the given mixture data. In addition to that, a correntropy-based measure is proposed to reduce the model size. The approach presented in this paper is suitable for big data separation. Numerical experiments are conducted to illustrate the performance and validity of the proposed approach. Full article
(This article belongs to the Special Issue Entropy in Signal Analysis)

Open Access Article: Tsirelson’s Bound Prohibits Communication through a Disconnected Channel
Entropy 2018, 20(3), 151; doi:10.3390/e20030151
Received: 4 December 2017 / Revised: 18 February 2018 / Accepted: 24 February 2018 / Published: 27 February 2018
PDF Full-text (389 KB) | HTML Full-text | XML Full-text
Abstract
Why does nature only allow nonlocal correlations up to Tsirelson’s bound and not beyond? We construct a channel whose input is statistically independent of its output, but through which communication is nevertheless possible if and only if Tsirelson’s bound is violated. This provides a statistical justification for Tsirelson’s bound on nonlocal correlations in a bipartite setting. Full article
(This article belongs to the Special Issue Entropy in Foundations of Quantum Physics)
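As background (standard textbook material, not taken from the article), the bound in question is usually stated for the CHSH correlator of a bipartite experiment with settings a, a′ on one side and b, b′ on the other:

```latex
S = E(a,b) + E(a,b') + E(a',b) - E(a',b'),
\qquad |S| \le 2 \ \text{(local hidden variables)},
\qquad |S| \le 2\sqrt{2} \ \text{(quantum theory, Tsirelson's bound)}.
```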

Open Access Article: Optimized Dynamic Mode Decomposition via Non-Convex Regularization and Multiscale Permutation Entropy
Entropy 2018, 20(3), 152; doi:10.3390/e20030152
Received: 22 January 2018 / Revised: 12 February 2018 / Accepted: 23 February 2018 / Published: 27 February 2018
PDF Full-text (5080 KB) | HTML Full-text | XML Full-text
Abstract
Dynamic mode decomposition (DMD) is essentially a hybrid algorithm based on mode decomposition and singular value decomposition, and it inevitably inherits the drawbacks of these two algorithms, including the selection strategy of truncated rank order and wanted mode components. A novel denoising and feature extraction algorithm for multi-component coupled noisy mechanical signals is proposed based on the standard DMD algorithm, which provides a new method solving the two intractable problems above. Firstly, a sparse optimization method of non-convex penalty function is adopted to determine the optimal dimensionality reduction space in the process of DMD, obtaining a series of optimal DMD modes. Then, multiscale permutation entropy is calculated to quantify the complexity of each DMD mode. Modes corresponding to the noise components are discarded by threshold technology, and we reconstruct the modes whose entropies are smaller than a threshold to recover the signal. By applying the algorithm to rolling bearing simulation signals and comparing with the result of wavelet transform, the effectiveness of the proposed method can be verified. Finally, the proposed method is applied to the experimental rolling bearing signals. Results demonstrated that the proposed approach has a good application prospect in noise reduction and fault feature extraction. Full article
(This article belongs to the Section Complexity)
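The optimized, non-convex-regularized variant proposed in the paper is not reproduced here; the sketch below only shows the standard exact DMD step that the method builds on, with the truncation rank r left as an explicit user choice. All data and parameter values are illustrative.

```python
import numpy as np

def exact_dmd(X, r):
    """Standard exact DMD of a snapshot matrix X (rows: states, columns: time samples).

    Returns the DMD eigenvalues and modes obtained from a rank-r truncated SVD.
    """
    X1, X2 = X[:, :-1], X[:, 1:]                      # time-shifted snapshot pairs
    U, s, Vh = np.linalg.svd(X1, full_matrices=False)
    U_r, s_r, V_r = U[:, :r], s[:r], Vh[:r, :].conj().T
    A_tilde = U_r.conj().T @ X2 @ V_r / s_r           # reduced linear operator
    eigvals, W = np.linalg.eig(A_tilde)
    modes = X2 @ V_r / s_r @ W                        # exact DMD modes
    return eigvals, modes

# Toy usage: three channels built from decaying oscillations.
t = np.linspace(0, 4 * np.pi, 200)
X = np.vstack([np.sin(t) * np.exp(-0.05 * t),
               np.cos(2 * t) * np.exp(-0.02 * t),
               np.sin(2 * t)])
eigvals, modes = exact_dmd(X, r=3)
```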

Open Access Feature Paper Article: Anomalous Statistics of Bose-Einstein Condensate in an Interacting Gas: An Effect of the Trap’s Form and Boundary Conditions in the Thermodynamic Limit
Entropy 2018, 20(3), 153; doi:10.3390/e20030153
Received: 31 December 2017 / Revised: 15 February 2018 / Accepted: 24 February 2018 / Published: 27 February 2018
PDF Full-text (417 KB) | HTML Full-text | XML Full-text
Abstract
We analytically calculate the statistics of Bose-Einstein condensate (BEC) fluctuations in an interacting gas trapped in a three-dimensional cubic or rectangular box with the Dirichlet, fused or periodic boundary conditions within the mean-field Bogoliubov and Thomas-Fermi approximations. We study a mesoscopic system of a finite number of trapped particles and its thermodynamic limit. We find that the BEC fluctuations, first, are anomalously large and non-Gaussian and, second, depend on the trap’s form and boundary conditions. Remarkably, these effects persist with increasing interparticle interaction and even in the thermodynamic limit—only the mean BEC occupation, not BEC fluctuations, becomes independent of the trap’s form and boundary conditions. Full article
(This article belongs to the Special Issue New Trends in Statistical Physics of Complex Systems)

Open Access Article: Multivariate Entropy Characterizes the Gene Expression and Protein-Protein Networks in Four Types of Cancer
Entropy 2018, 20(3), 154; doi:10.3390/e20030154
Received: 2 November 2017 / Revised: 31 January 2018 / Accepted: 23 February 2018 / Published: 28 February 2018
PDF Full-text (8413 KB) | HTML Full-text | XML Full-text | Supplementary Files
Abstract
There is an urgent need to detect cancer at early stages in order to treat it, improve patients’ lifespans, and even cure it. In this work, we determined the entropic contributions of genes in cancer networks. We detected sudden changes in entropy values in melanoma, hepatocellular carcinoma, pancreatic cancer, and squamous lung cell carcinoma associated with transitions from healthy controls to cancer. We also identified the most relevant genes involved in the carcinogenic process of the four types of cancer with the help of entropic changes in local networks. Their corresponding proteins could be used as potential targets for treatments and as biomarkers of cancer. Full article
(This article belongs to the Special Issue Entropy in Signal Analysis)

Open Access Article: An Analysis of the Value of Information When Exploring Stochastic, Discrete Multi-Armed Bandits
Entropy 2018, 20(3), 155; doi:10.3390/e20030155
Received: 12 October 2017 / Revised: 16 February 2018 / Accepted: 26 February 2018 / Published: 28 February 2018
PDF Full-text (11452 KB) | HTML Full-text | XML Full-text | Supplementary Files
Abstract
In this paper, we propose an information-theoretic exploration strategy for stochastic, discrete multi-armed bandits that achieves optimal regret. Our strategy is based on the value of information criterion. This criterion measures the trade-off between policy information and obtainable rewards. High amounts of policy information are associated with exploration-dominant searches of the space and yield high rewards. Low amounts of policy information favor the exploitation of existing knowledge. Information, in this criterion, is quantified by a parameter that can be varied during search. We demonstrate that a simulated-annealing-like update of this parameter, with a sufficiently fast cooling schedule, leads to a regret that is logarithmic with respect to the number of arm pulls. Full article
(This article belongs to the Special Issue Entropy in Signal Analysis)
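The value-of-information criterion itself is not reproduced here. As a loose illustration of the "simulated-annealing-like update" of the exploration parameter described in the abstract, the sketch below cools the temperature of a soft-max (Boltzmann) bandit policy over time; the cooling schedule, arm means and horizon are illustrative assumptions rather than the authors' construction.

```python
import math
import random

def annealed_softmax_bandit(means, horizon, tau0=1.0):
    """Soft-max (Boltzmann) exploration whose temperature is cooled as pulls accumulate.

    `means` are the true Bernoulli arm means; the function returns the cumulative regret.
    """
    counts = [0] * len(means)
    estimates = [0.0] * len(means)
    regret, best = 0.0, max(means)
    for t in range(1, horizon + 1):
        tau = tau0 / math.log(t + 1)                     # illustrative cooling schedule
        weights = [math.exp(est / tau) for est in estimates]
        r, acc, arm = random.random() * sum(weights), 0.0, 0
        for i, w in enumerate(weights):                  # sample an arm from the soft-max policy
            acc += w
            if r <= acc:
                arm = i
                break
        reward = 1.0 if random.random() < means[arm] else 0.0
        counts[arm] += 1
        estimates[arm] += (reward - estimates[arm]) / counts[arm]   # running mean update
        regret += best - means[arm]
    return regret

print(annealed_softmax_bandit([0.2, 0.5, 0.7], horizon=10000))
```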

Open Access Article: A Quantal Response Statistical Equilibrium Model of Induced Technical Change in an Interactive Factor Market: Firm-Level Evidence in the EU Economies
Entropy 2018, 20(3), 156; doi:10.3390/e20030156
Received: 7 December 2017 / Revised: 15 January 2018 / Accepted: 18 January 2018 / Published: 28 February 2018
PDF Full-text (1651 KB) | HTML Full-text | XML Full-text
Abstract
This paper studies the pattern of technical change at the firm level by applying and extending the Quantal Response Statistical Equilibrium model (QRSE). The model assumes that a large number of cost minimizing firms decide whether to adopt a new technology based on the potential rate of cost reduction. The firm in the model is assumed to have a limited capacity to process market signals so there is a positive degree of uncertainty in adopting a new technology. The adoption decision by the firm, in turn, makes an impact on the whole market through changes in the factor-price ratio. The equilibrium distribution of the model is a unimodal probability distribution with four parameters, which is qualitatively different from the Walrasian notion of equilibrium in so far as the state of equilibrium is not a single state but a probability distribution of multiple states. This paper applies Bayesian inference to estimate the unknown parameters of the model using the firm-level data of seven advanced OECD countries over eight years and shows that the mentioned equilibrium distribution from the model can satisfactorily recover the observed pattern of technical change. Full article

Open Access Article: Security Analysis of Unidimensional Continuous-Variable Quantum Key Distribution Using Uncertainty Relations
Entropy 2018, 20(3), 157; doi:10.3390/e20030157
Received: 30 January 2018 / Revised: 22 February 2018 / Accepted: 27 February 2018 / Published: 1 March 2018
PDF Full-text (6268 KB) | HTML Full-text | XML Full-text
Abstract
We study the equivalence between the entanglement-based scheme and prepare-and-measure scheme of unidimensional (UD) continuous-variable quantum key distribution protocol. Based on this equivalence, the physicality and security of the UD coherent-state protocols in the ideal detection and realistic detection conditions are investigated using the Heisenberg uncertainty relation, respectively. We also present a method to increase both the secret key rates and maximal transmission distances of the UD coherent-state protocol by adding an optimal noise to the reconciliation side. It is expected that our analysis will aid in the practical applications of the UD protocol. Full article
(This article belongs to the Special Issue Entropy in Foundations of Quantum Physics)

Open Access Article: Gudder’s Theorem and the Born Rule
Entropy 2018, 20(3), 158; doi:10.3390/e20030158
Received: 23 December 2017 / Revised: 29 January 2018 / Accepted: 6 February 2018 / Published: 2 March 2018
PDF Full-text (254 KB) | HTML Full-text | XML Full-text
Abstract
We derive the Born probability rule from Gudder’s theorem—a theorem that addresses orthogonally-additive functions. These functions are shown to be tightly connected to the functions that enter the definition of a signed measure. By imposing some additional requirements besides orthogonal additivity, the addressed functions are proved to be linear, so they can be given in terms of an inner product. By further restricting them to act on projectors, Gudder’s functions are proved to act as probability measures obeying Born’s rule. The procedure does not invoke any property that fully lies within the quantum framework, so Born’s rule is shown to apply within both the classical and the quantum domains. Full article
(This article belongs to the Special Issue Quantum Foundations: 90 Years of Uncertainty)
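For reference, the rule being derived has the standard form (notation ours, not taken from the article): for a state with density operator ρ and projectors Π_i onto the outcome eigenspaces,

```latex
p(i) = \operatorname{Tr}\left(\rho\,\Pi_i\right),
\qquad \sum_i \Pi_i = \mathbb{1},
\qquad \sum_i p(i) = 1 .
```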
Open Access Feature Paper Article: IBVis: Interactive Visual Analytics for Information Bottleneck Based Trajectory Clustering
Entropy 2018, 20(3), 159; doi:10.3390/e20030159
Received: 11 January 2018 / Revised: 27 February 2018 / Accepted: 28 February 2018 / Published: 2 March 2018
PDF Full-text (1641 KB) | HTML Full-text | XML Full-text
Abstract
Analyzing trajectory data plays an important role in practical applications, and clustering is one of the most widely used techniques for this task. The clustering approach based on the information bottleneck (IB) principle has shown its effectiveness for trajectory data, since it requires neither a predefined number of clusters nor an explicit distance measure between trajectories. However, directly presenting the final results of IB clustering gives no clear idea of either the trajectory data or the clustering process. Visual analytics provides a powerful methodology to address this issue. In this paper, we present an interactive visual analytics prototype called IBVis to supply an expressive investigation of IB-based trajectory clustering. IBVis provides various views to graphically present the key components of IB and the current clustering results. Rich user interactions drive the different views to work together, so as to monitor and steer the clustering procedure and to refine the results. In this way, users can gain insights into how to make better use of IB for trajectory data with different features, leading to better analysis and understanding of trajectory data. The applicability of IBVis has been evidenced in usage scenarios. In addition, the conducted user study shows that IBVis is well designed and helpful for users. Full article
(This article belongs to the Special Issue Information Theory Application in Visualization)

Open Access Article: Revisiting Degrees of Freedom of Full-Duplex Systems with Opportunistic Transmission: An Improved User Scaling Law
Entropy 2018, 20(3), 160; doi:10.3390/e20030160
Received: 5 January 2018 / Revised: 7 February 2018 / Accepted: 1 March 2018 / Published: 2 March 2018
PDF Full-text (383 KB) | HTML Full-text | XML Full-text
Abstract
It was recently studied how to achieve the optimal degrees of freedom (DoF) in a multi-antenna full-duplex system with partial channel state information (CSI). In this paper, we revisit the DoF of a multiple-antenna full-duplex system using opportunistic transmission under the partial CSI, in which a full-duplex base station having M transmit antennas and M receive antennas supports a set of half-duplex mobile stations (MSs) having a single antenna each. Assuming no self-interference, we present a new hybrid opportunistic scheduling method that achieves the optimal sum DoF under an improved user scaling law. Unlike the state-of-the-art scheduling method, our method is designed in the sense that the scheduling role between downlink MSs and uplink MSs is well-balanced. It is shown that the optimal sum DoF of 2M is asymptotically achievable provided that the number of MSs scales faster than SNR^M, where SNR denotes the signal-to-noise ratio. This result reveals that, in our full-duplex system, better performance on the user scaling law can be obtained without extra CSI, compared to the prior work that showed the required user scaling condition (i.e., the minimum number of MSs for guaranteeing the optimal DoF) of SNR^(2M-1). Moreover, the average interference decaying rate is analyzed. Numerical evaluation is performed to not only validate our analysis but also show the superiority of the proposed method over the state-of-the-art method. Full article
(This article belongs to the Special Issue Information Theory and 5G Technologies)

Open Access Article: A Joint Fault Diagnosis Scheme Based on Tensor Nuclear Norm Canonical Polyadic Decomposition and Multi-Scale Permutation Entropy for Gears
Entropy 2018, 20(3), 161; doi:10.3390/e20030161
Received: 15 November 2017 / Revised: 11 February 2018 / Accepted: 28 February 2018 / Published: 3 March 2018
PDF Full-text (4414 KB) | HTML Full-text | XML Full-text
Abstract
Gears are key components in rotating machinery, and their fault vibration signals usually show strong nonlinear and non-stationary characteristics. It is not easy for classical time–frequency domain analysis methods to recognize different gear working conditions. Therefore, this paper presents a joint fault diagnosis scheme for gear fault classification via tensor nuclear norm canonical polyadic decomposition (TNNCPD) and multi-scale permutation entropy (MSPE). Firstly, the one-dimensional vibration data of different gear fault conditions is converted into three-dimensional tensor data, and a new tensor canonical polyadic decomposition method based on nuclear norm and convex optimization called TNNCPD is proposed to extract the low rank component of the data, which represents the feature information of the measured signal. Then, the MSPE of the extracted feature information about different gear faults can be calculated as the feature vector in order to recognize fault conditions. Finally, the proposed scheme is validated by practical gear vibration data of different fault conditions. The result demonstrates that the proposed scheme can effectively recognize different gear fault conditions. Full article
(This article belongs to the Special Issue Entropy in Signal Analysis)
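Multi-scale permutation entropy is ordinarily obtained by computing permutation entropy on coarse-grained copies of the signal. The sketch below shows the usual Costa-style coarse-graining; it reuses a `permutation_entropy` function such as the one sketched earlier in this listing, and the scale range is an illustrative choice.

```python
def coarse_grain(x, scale):
    """Non-overlapping averages of x over windows of length `scale` (Costa-style coarse-graining)."""
    n = len(x) // scale
    return [sum(x[i * scale:(i + 1) * scale]) / scale for i in range(n)]

def multiscale_permutation_entropy(x, m=3, tau=1, max_scale=5):
    """Permutation entropy of the coarse-grained series at scales 1..max_scale.

    Assumes a permutation_entropy(x, m, tau) implementation is available in scope.
    """
    return [permutation_entropy(coarse_grain(x, s), m, tau) for s in range(1, max_scale + 1)]
```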

Open Access Feature Paper Article: Notes on Computational Uncertainties in Probabilistic Risk/Safety Assessment
Entropy 2018, 20(3), 162; doi:10.3390/e20030162
Received: 5 January 2018 / Revised: 9 February 2018 / Accepted: 26 February 2018 / Published: 4 March 2018
PDF Full-text (1116 KB) | HTML Full-text | XML Full-text
Abstract
In this article, we study computational uncertainties in probabilistic risk/safety assessment resulting from the computational complexity of calculations of risk indicators. We argue that the risk analyst faces the fundamental epistemic and aleatory uncertainties of risk assessment with a bounded calculation capacity, and that this bounded capacity over-determines both the design of models and the decisions that can be made from models. We sketch a taxonomy of modelling technologies and recall the main computational complexity results. Then, based on a review of state of the art assessment algorithms for fault trees and event trees, we make some methodological proposals aiming at drawing conceptual and practical consequences of bounded calculability. Full article
(This article belongs to the Special Issue Entropy for Characterization of Uncertainty in Risk and Reliability)

Open Access Article: A Variational Formulation of Nonequilibrium Thermodynamics for Discrete Open Systems with Mass and Heat Transfer
Entropy 2018, 20(3), 163; doi:10.3390/e20030163
Received: 31 December 2017 / Revised: 25 February 2018 / Accepted: 27 February 2018 / Published: 4 March 2018
PDF Full-text (2338 KB) | HTML Full-text | XML Full-text
Abstract
We propose a variational formulation for the nonequilibrium thermodynamics of discrete open systems, i.e., discrete systems which can exchange mass and heat with the exterior. Our approach is based on a general variational formulation for systems with time-dependent nonlinear nonholonomic constraints and time-dependent Lagrangian. For discrete open systems, the time-dependent nonlinear constraint is associated with the rate of internal entropy production of the system. We show that this constraint on the solution curve systematically yields a constraint on the variations to be used in the action functional. The proposed variational formulation is intrinsic and provides the same structure for a wide class of discrete open systems. We illustrate our theory by presenting examples of open systems experiencing mechanical interactions, as well as internal diffusion, internal heat transfer, and their cross-effects. Our approach yields a systematic way to derive the complete evolution equations for the open systems, including the expression of the internal entropy production of the system, independently of its complexity. It might be especially useful for the study of the nonequilibrium thermodynamics of biophysical systems. Full article
(This article belongs to the Special Issue Phenomenological Thermodynamics of Irreversible Processes)

Open Access Article: Uncertainty Evaluation in Multistage Assembly Process Based on Enhanced OOPN
Entropy 2018, 20(3), 164; doi:10.3390/e20030164
Received: 11 January 2018 / Revised: 15 February 2018 / Accepted: 1 March 2018 / Published: 4 March 2018
PDF Full-text (2161 KB) | HTML Full-text | XML Full-text
Abstract
This study investigated the uncertainty of the multistage assembly process from the viewpoint of a stream of defects in the product assembly process. The vulnerable spots were analyzed and the fluctuations were controlled during this process. An uncertainty evaluation model was developed for the assembly process on the basis of an object-oriented Petri net (OOPN) by replacing its transition function with a fitted defect changing function. The definition of entropy in physics was applied to characterize the uncertainty of the model in evaluating the assembly process. The uncertainty was then measured as the entropy of the semi-Markov chain, which could be used to calculate the uncertainty of a specific subset of places, as well as the entire process. The OOPN model could correspond to the Markov process because its reachable token can be directly mapped to the Markov process. Using the steady-state probability combined with the uncertainty evaluation, the vulnerable spots in the assembly process were identified and a scanning test program was proposed to improve the quality of the assembly process. This work then analyzed the assembly process on the basis of the uncertainty of the assembly structure and the variables of the assembly process. Finally, the case of a certain product assembly process was analyzed to test the advantages of this method. Full article

Open Access Article: Thermodynamic Optimization for an Endoreversible Dual-Miller Cycle (DMC) with Finite Speed of Piston
Entropy 2018, 20(3), 165; doi:10.3390/e20030165
Received: 16 December 2017 / Revised: 28 February 2018 / Accepted: 1 March 2018 / Published: 5 March 2018
Cited by 1 | PDF Full-text (2031 KB) | HTML Full-text | XML Full-text
Abstract
Power output (P), thermal efficiency (η) and ecological function (E) characteristics of an endoreversible Dual-Miller cycle (DMC) with finite speed of the piston and finite rate of heat transfer are investigated by applying finite time thermodynamic (FTT) theory. The parameter expressions of the non-dimensional power output (P̄), η and non-dimensional ecological function (Ē) are derived. The relationships between P̄ and cut-off ratio (ρ), between P̄ and η, as well as between Ē and ρ are demonstrated. The influences of ρ and piston speeds in different processes on P̄, η and Ē are investigated. The results show that P̄ and Ē first increase and then start to decrease with increasing ρ. The optimal cut-off ratio ρ_opt will increase if piston speeds increase in heat addition processes and heat rejection processes. As piston speeds in different processes increase, the maximum values of P̄ and Ē increase. The results include the performance characteristics of various simplified cycles of DMC, such as Otto cycle, Diesel cycle, Dual cycle, Otto-Atkinson cycle, Diesel-Atkinson cycle, Dual-Atkinson cycle, Otto-Miller cycle and Diesel-Miller cycle. Comparing performance characteristics of the DMC with different optimization objectives, when choosing Ē as optimization objective, η improves 26.4% compared to choosing P̄ as optimization objective, while P̄ improves 74.3% compared to choosing η as optimization objective. Thus, optimizing E is the best compromise between optimizing P and optimizing η. The results obtained can provide theoretical guidance to design practical DMC engines. Full article
(This article belongs to the Section Thermodynamics)

Open Access Feature Paper Article: Statistics of Correlations and Fluctuations in a Stochastic Model of Wealth Exchange
Entropy 2018, 20(3), 166; doi:10.3390/e20030166
Received: 28 December 2017 / Revised: 12 February 2018 / Accepted: 3 March 2018 / Published: 5 March 2018
PDF Full-text (1304 KB) | HTML Full-text | XML Full-text
Abstract
In our recently proposed stochastic version of discretized kinetic theory, the exchange of wealth in a society is modelled through a large system of Langevin equations. The deterministic part of the equations is based on non-linear transition probabilities between income classes. The noise terms can be additive, multiplicative or mixed, both with white or Ornstein–Uhlenbeck spectrum. The most important measured correlations are those between the Gini inequality index G and social mobility M, between total income and G, and between M and total income. We describe numerical results concerning these correlations and a quantity which gives the average stochastic deviations from the equilibrium solutions as a function of the noise amplitude. Full article
(This article belongs to the Special Issue Theoretical Aspect of Nonlinear Statistical Physics)
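For completeness (a standard definition, not taken from the paper), the Gini inequality index G measured in the simulations can be written for n agents with wealths x_1, ..., x_n as

```latex
G = \frac{1}{2 n^{2} \bar{x}} \sum_{i=1}^{n} \sum_{j=1}^{n} \left| x_i - x_j \right|,
\qquad \bar{x} = \frac{1}{n} \sum_{i=1}^{n} x_i ,
```

so that G = 0 corresponds to perfect equality and values close to 1 to maximal inequality.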

Open Access Article: Thermodynamic Analysis of an Irreversible Maisotsenko Reciprocating Brayton Cycle
Entropy 2018, 20(3), 167; doi:10.3390/e20030167
Received: 20 January 2018 / Revised: 23 February 2018 / Accepted: 2 March 2018 / Published: 5 March 2018
PDF Full-text (2006 KB) | HTML Full-text | XML Full-text
Abstract
An irreversible Maisotsenko reciprocating Brayton cycle (MRBC) model is established using the finite time thermodynamic (FTT) theory and taking the heat transfer loss (HTL), piston friction loss (PFL), and internal irreversible losses (IILs) into consideration in this paper. A calculation flowchart of the power output (P) and efficiency (η) of the cycle is provided, and the effects of the mass flow rate (MFR) of water injected into the cycle and some other design parameters on the performance of the cycle are analyzed by detailed numerical examples. Furthermore, the superiority of the irreversible MRBC is verified by comparing the cycle with the traditional irreversible reciprocating Brayton cycle (RBC). The results can provide theoretical guidance for the optimal design of practical Maisotsenko reciprocating gas turbine plants. Full article
(This article belongs to the Section Thermodynamics)

Open Access Article: Robustness Property of Robust-BD Wald-Type Test for Varying-Dimensional General Linear Models
Entropy 2018, 20(3), 168; doi:10.3390/e20030168
Received: 12 January 2018 / Revised: 1 March 2018 / Accepted: 1 March 2018 / Published: 5 March 2018
PDF Full-text (959 KB) | HTML Full-text | XML Full-text
Abstract
An important issue for robust inference is to examine the stability of the asymptotic level and power of the test statistic in the presence of contaminated data. Most existing results are derived in finite-dimensional settings with some particular choices of loss functions. This paper re-examines this issue by allowing for a diverging number of parameters combined with a broader array of robust error measures, called “robust-BD”, for the class of “general linear models”. Under regularity conditions, we derive the influence function of the robust-BD parameter estimator and demonstrate that the robust-BD Wald-type test enjoys the robustness of validity and efficiency asymptotically. Specifically, the asymptotic level of the test is stable under a small amount of contamination of the null hypothesis, whereas the asymptotic power is large enough under a contaminated distribution in a neighborhood of the contiguous alternatives, thus lending support to the utility of the proposed robust-BD Wald-type test. Full article

Open Access Feature Paper Article: The Identity of Information: How Deterministic Dependencies Constrain Information Synergy and Redundancy
Entropy 2018, 20(3), 169; doi:10.3390/e20030169
Received: 13 November 2017 / Revised: 26 February 2018 / Accepted: 28 February 2018 / Published: 5 March 2018
PDF Full-text (1887 KB) | HTML Full-text | XML Full-text
Abstract
Understanding how different information sources together transmit information is crucial in many domains. For example, understanding the neural code requires characterizing how different neurons contribute unique, redundant, or synergistic pieces of information about sensory or behavioral variables. Williams and Beer (2010) proposed a partial information decomposition (PID) that separates the mutual information that a set of sources contains about a set of targets into nonnegative terms interpretable as these pieces. Quantifying redundancy requires assigning an identity to different information pieces, to assess when information is common across sources. Harder et al. (2013) proposed an identity axiom that imposes necessary conditions to quantify qualitatively common information. However, Bertschinger et al. (2012) showed that, in a counterexample with deterministic target-source dependencies, the identity axiom is incompatible with ensuring PID nonnegativity. Here, we study systematically the consequences of information identity criteria that assign identity based on associations between target and source variables resulting from deterministic dependencies. We show how these criteria are related to the identity axiom and to previously proposed redundancy measures, and we characterize how they lead to negative PID terms. This constitutes a further step to more explicitly address the role of information identity in the quantification of redundancy. The implications for studying neural coding are discussed. Full article
(This article belongs to the Special Issue Information Theory in Neuroscience)
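For two sources, the bookkeeping behind the partial information decomposition referred to in the abstract takes the following standard form (Williams and Beer 2010; notation illustrative):

```latex
I(T; S_1, S_2) = \mathrm{Red} + \mathrm{Unq}_1 + \mathrm{Unq}_2 + \mathrm{Syn},
\qquad
I(T; S_1) = \mathrm{Red} + \mathrm{Unq}_1,
\qquad
I(T; S_2) = \mathrm{Red} + \mathrm{Unq}_2 ,
```

where Red is the redundant, Unq_i the unique, and Syn the synergistic information that the sources S_1 and S_2 carry about the target T.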

Open Access Feature Paper Article: Quantifying Chaos by Various Computational Methods. Part 2: Vibrations of the Bernoulli–Euler Beam Subjected to Periodic and Colored Noise
Entropy 2018, 20(3), 170; doi:10.3390/e20030170
Received: 17 January 2018 / Revised: 16 February 2018 / Accepted: 1 March 2018 / Published: 5 March 2018
PDF Full-text (35014 KB) | HTML Full-text | XML Full-text
Abstract
In this part of the paper, the theory of nonlinear dynamics of flexible Euler–Bernoulli beams (the kinematic model of the first-order approximation) under transverse harmonic load and colored noise has been proposed. It has been shown that the introduced concept of phase transition allows for further generalization of the problem. The concept has been extended to a so-called noise-induced transition, which is a novel transition type exhibited by nonequilibrium systems embedded in a stochastically fluctuating medium, the properties of which depend on time and are influenced by external noise. Colored noise excitation of a structural system treated as a system with an infinite number of degrees of freedom has been studied. Full article
(This article belongs to the Special Issue Entropy in Dynamic Systems)

Open Access Article: Correntropy Based Matrix Completion
Entropy 2018, 20(3), 171; doi:10.3390/e20030171
Received: 24 December 2017 / Revised: 8 February 2018 / Accepted: 22 February 2018 / Published: 6 March 2018
PDF Full-text (2425 KB) | HTML Full-text | XML Full-text
Abstract
This paper studies the matrix completion problems when the entries are contaminated by non-Gaussian noise or outliers. The proposed approach employs a nonconvex loss function induced by the maximum correntropy criterion. With the help of this loss function, we develop a rank constrained, as well as a nuclear norm regularized model, which is resistant to non-Gaussian noise and outliers. However, its non-convexity also leads to certain difficulties. To tackle this problem, we use the simple iterative soft and hard thresholding strategies. We show that when extending to the general affine rank minimization problems, under proper conditions, certain recoverability results can be obtained for the proposed algorithms. Numerical experiments indicate the improved performance of our proposed approach. Full article
(This article belongs to the Special Issue Entropy in Signal Analysis)
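As a reminder of the loss underlying the maximum correntropy criterion (standard definition, up to a kernel normalization constant; not specific to this paper): for random variables X and Y and a Gaussian kernel of bandwidth σ,

```latex
V_\sigma(X, Y) = \mathbb{E}\left[\kappa_\sigma(X - Y)\right],
\qquad
\kappa_\sigma(e) = \exp\!\left(-\frac{e^{2}}{2\sigma^{2}}\right),
\qquad
\hat{V}_\sigma = \frac{1}{N} \sum_{i=1}^{N} \kappa_\sigma\left(x_i - y_i\right).
```

Maximizing correntropy downweights large residuals, which is what makes the resulting completion model resistant to outliers and non-Gaussian noise.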

Open Access Article: Q-Neutrosophic Soft Relation and Its Application in Decision Making
Entropy 2018, 20(3), 172; doi:10.3390/e20030172
Received: 4 February 2018 / Revised: 24 February 2018 / Accepted: 5 March 2018 / Published: 6 March 2018
PDF Full-text (785 KB) | HTML Full-text | XML Full-text
Abstract
Q-neutrosophic soft sets are essentially neutrosophic soft sets characterized by three independent two-dimensional membership functions which stand for uncertainty, indeterminacy and falsity. Thus, they can be applied to two-dimensional imprecise, indeterminate and inconsistent data which appear in most real life problems. Relations are a suitable tool for describing correspondences between objects. In this study we introduce and discuss Q-neutrosophic soft relations, which can be regarded as a generalization of fuzzy soft relations, intuitionistic fuzzy soft relations, and neutrosophic soft relations. A Q-neutrosophic soft relation is a sub-Q-neutrosophic soft set of the Cartesian product of Q-neutrosophic soft sets; in other words, a Q-neutrosophic soft relation is a Q-neutrosophic soft set in a Cartesian product of universes. We also present the notions of inverse and composition of Q-neutrosophic soft relations and functions, along with some related theorems and properties. Reflexivity, symmetry, transitivity as well as equivalence relations and equivalence classes of Q-neutrosophic soft relations are also defined. Some properties of these concepts are presented and supported by real life examples. Finally, an algorithm to solve decision making problems using Q-neutrosophic soft relations is developed and verified by an example to show the efficiency of this method. Full article
(This article belongs to the Special Issue Information Theory in Machine Learning and Data Science)
Open Access Article: Efficient Algorithms for Searching the Minimum Information Partition in Integrated Information Theory
Entropy 2018, 20(3), 173; doi:10.3390/e20030173
Received: 18 December 2017 / Revised: 26 February 2018 / Accepted: 27 February 2018 / Published: 6 March 2018
PDF Full-text (708 KB) | HTML Full-text | XML Full-text
Abstract
The ability to integrate information in the brain is considered to be an essential property for cognition and consciousness. Integrated Information Theory (IIT) hypothesizes that the amount of integrated information (Φ) in the brain is related to the level of consciousness. IIT proposes that, to quantify information integration in a system as a whole, integrated information should be measured across the partition of the system at which information loss caused by partitioning is minimized, called the Minimum Information Partition (MIP). The computational cost for exhaustively searching for the MIP grows exponentially with system size, making it difficult to apply IIT to real neural data. It has been previously shown that, if a measure of Φ satisfies a mathematical property, submodularity, the MIP can be found in a polynomial order by an optimization algorithm. However, although the first version of Φ is submodular, the later versions are not. In this study, we empirically explore to what extent the algorithm can be applied to the non-submodular measures of Φ by evaluating the accuracy of the algorithm in simulated data and real neural data. We find that the algorithm identifies the MIP in a nearly perfect manner even for the non-submodular measures. Our results show that the algorithm allows us to measure Φ in large systems within a practical amount of time. Full article
(This article belongs to the Special Issue Information Theory in Neuroscience)
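The polynomial-time submodular search studied in the paper is not reproduced here; the sketch below only illustrates the brute-force bipartition search that it replaces, and why it scales exponentially. The `phi` argument is a placeholder for whichever integrated-information measure is used, and the dummy measure in the usage example is purely illustrative.

```python
from itertools import combinations

def exhaustive_mip(elements, phi):
    """Brute-force search for the Minimum Information Partition over all bipartitions.

    Each bipartition is enumerated once by keeping the first element on one side;
    there are 2**(n-1) - 1 bipartitions, which is why exhaustive search does not scale.
    """
    elements = list(elements)
    first, rest = elements[0], elements[1:]
    best_partition, best_value = None, float("inf")
    for k in range(len(rest)):                    # k = how many further elements join part_a
        for subset in combinations(rest, k):
            part_a = frozenset((first,) + subset)
            part_b = frozenset(elements) - part_a
            value = phi(part_a, part_b)
            if value < best_value:
                best_partition, best_value = (part_a, part_b), value
    return best_partition, best_value

# Toy usage with a dummy "integrated information" given by a fake pairwise coupling across the cut.
coupling = {frozenset(pair): 1.0 for pair in combinations(range(4), 2)}
dummy_phi = lambda a, b: sum(coupling[frozenset((i, j))] for i in a for j in b)
print(exhaustive_mip(range(4), dummy_phi))
```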

Open Access Article: First- and Second-Order Hypothesis Testing for Mixed Memoryless Sources
Entropy 2018, 20(3), 174; doi:10.3390/e20030174
Received: 23 January 2018 / Revised: 2 March 2018 / Accepted: 5 March 2018 / Published: 6 March 2018
PDF Full-text (333 KB) | HTML Full-text | XML Full-text
Abstract
The first- and second-order optimum achievable exponents in the simple hypothesis testing problem are investigated. The optimum achievable exponent for type II error probability, under the constraint that the type I error probability is allowed asymptotically up to ε, is called the ε-optimum exponent. In this paper, we first give the second-order ε-optimum exponent in the case where the null hypothesis and alternative hypothesis are a mixed memoryless source and a stationary memoryless source, respectively. We next generalize this setting to the case where the alternative hypothesis is also a mixed memoryless source. Secondly, we address the first-order ε-optimum exponent in this setting. In addition, an extension of our results to the more general setting such as hypothesis testing with mixed general source and a relationship with the general compound hypothesis testing problem are also discussed. Full article
(This article belongs to the Section Information Theory)
Open Access Article: Quantifying Chaos by Various Computational Methods. Part 1: Simple Systems
Entropy 2018, 20(3), 175; doi:10.3390/e20030175
Received: 17 January 2018 / Revised: 16 February 2018 / Accepted: 1 March 2018 / Published: 6 March 2018
PDF Full-text (14466 KB) | HTML Full-text | XML Full-text
Abstract
The aim of the paper was to analyze the given nonlinear problem by different methods of computation of the Lyapunov exponents (Wolf method, Rosenstein method, Kantz method, the method based on the modification of a neural network, and the synchronization method) for the classical problems governed by difference and differential equations (Hénon map, hyperchaotic Hénon map, logistic map, Rössler attractor, Lorenz attractor) and with the use of both Fourier spectra and Gauss wavelets. It has been shown that a modification of the neural network method makes it possible to compute a spectrum of Lyapunov exponents, and then to detect a transition of the system regular dynamics into chaos, hyperchaos, and others. The aim of the comparison was to evaluate the considered algorithms, study their convergence, and also identify the most suitable algorithms for specific system types and objectives. Moreover, an algorithm of calculation of the spectrum of Lyapunov exponents based on a trained neural network has been proposed. It has been proven that the developed method yields good results for different types of systems and does not require a priori knowledge of the system equations. Full article
(This article belongs to the Special Issue Entropy in Dynamic Systems)
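None of the five estimation methods compared in the paper is reproduced here. As a minimal point of reference for one of the benchmark systems, the largest Lyapunov exponent of the logistic map can be computed directly from the known map derivative; the parameter values below are illustrative.

```python
import math

def logistic_lyapunov(r, x0=0.4, n_transient=1000, n_iter=100000):
    """Largest Lyapunov exponent of the logistic map x -> r*x*(1-x),
    estimated as the orbit average of log|f'(x)| = log|r*(1 - 2x)|."""
    x = x0
    for _ in range(n_transient):          # discard the transient
        x = r * x * (1.0 - x)
    acc = 0.0
    for _ in range(n_iter):
        x = r * x * (1.0 - x)
        acc += math.log(abs(r * (1.0 - 2.0 * x)))
    return acc / n_iter

print(logistic_lyapunov(4.0))   # chaotic regime: expected value ln 2 ~ 0.693
print(logistic_lyapunov(3.2))   # periodic regime: negative exponent
```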

Open Access Article: Categorical Data Analysis Using a Skewed Weibull Regression Model
Entropy 2018, 20(3), 176; doi:10.3390/e20030176
Received: 24 November 2017 / Revised: 14 February 2018 / Accepted: 27 February 2018 / Published: 7 March 2018
PDF Full-text (1006 KB) | HTML Full-text | XML Full-text
Abstract
In this paper, we present a Weibull link (skewed) model for categorical response data arising from binomial as well as multinomial models. We show that, for such types of categorical data, the most commonly used models (logit, probit and complementary log–log) can be obtained as limiting cases. We further compare the proposed model with some other asymmetrical models. The Bayesian as well as frequentist estimation procedures for binomial and multinomial data responses are presented in detail. The analysis of two datasets is performed to show the efficiency of the proposed model. Full article

Open Access Article: Relationship between Entropy and Dimension of Financial Correlation-Based Network
Entropy 2018, 20(3), 177; doi:10.3390/e20030177
Received: 18 January 2018 / Revised: 25 February 2018 / Accepted: 6 March 2018 / Published: 7 March 2018
PDF Full-text (654 KB) | HTML Full-text | XML Full-text
Abstract
We analyze the dimension of a financial correlation-based network and apply our analysis to characterize the complexity of the network. First, we generalize the volume-based dimension and find that it is well defined by the correlation-based network. Second, we establish the relationship between the Rényi index and the volume-based dimension. Third, we analyze the meaning of the dimensions sequence, which characterizes the level of departure from the comparison benchmark based on the randomized time series. Finally, we use real stock market data from three countries for empirical analysis. In some cases, our proposed analysis method can more accurately capture the structural differences of networks than the power law index commonly used in previous studies. Full article

Open Access Article: Evaluating Flight Crew Performance by a Bayesian Network Model
Entropy 2018, 20(3), 178; doi:10.3390/e20030178
Received: 28 November 2017 / Revised: 22 February 2018 / Accepted: 24 February 2018 / Published: 8 March 2018
PDF Full-text (2229 KB) | HTML Full-text | XML Full-text
Abstract
Flight crew performance is of great significance in keeping flights safe and sound. When evaluating crew performance, detailed quantitative behavior information may not be available. The present paper introduces the Bayesian Network (BN) to perform flight crew performance evaluation, which permits the utilization of multidisciplinary sources of objective and subjective information, despite sparse behavioral data. In this paper, the causal factors are selected based on the analysis of 484 aviation accidents caused by human factors. Then, a network termed the Flight Crew Performance Model is constructed. The Delphi technique helps to gather subjective data as a supplement to objective data from accident reports. The conditional probabilities are elicited by the leaky noisy MAX model. Two ways of inference for the BN, probability prediction and probabilistic diagnosis, are used, and some interesting conclusions are drawn, which could provide data support for interventions in human error management in aviation safety. Full article
(This article belongs to the Special Issue Maximum Entropy and Bayesian Methods)

Open Access Article: Non-Conventional Thermodynamics and Models of Gradient Elasticity
Entropy 2018, 20(3), 179; doi:10.3390/e20030179
Received: 29 December 2017 / Revised: 28 February 2018 / Accepted: 2 March 2018 / Published: 8 March 2018
PDF Full-text (288 KB) | HTML Full-text | XML Full-text
Abstract
We consider material bodies exhibiting a response function for free energy, which depends on both the strain and its gradient. Toupin–Mindlin’s gradient elasticity is characterized by Cauchy stress tensors, which are given by space-like Euler–Lagrange derivative of the free energy with respect to the strain. The present paper aims at developing a first version of gradient elasticity of non-Toupin–Mindlin’s type, i.e., a theory employing Cauchy stress tensors, which are not necessarily expressed as Euler–Lagrange derivatives. This is accomplished in the framework of non-conventional thermodynamics. A one-dimensional boundary value problem is solved in detail in order to illustrate the differences of the present theory with Toupin–Mindlin’s gradient elasticity theory. Full article
(This article belongs to the Special Issue Phenomenological Thermodynamics of Irreversible Processes)
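For orientation, the space-like Euler-Lagrange derivative referred to above has, in standard Toupin-Mindlin notation (a schematic form, not necessarily the paper's exact conventions), the structure

\sigma_{ij} \;=\; \frac{\partial \psi}{\partial \varepsilon_{ij}} \;-\; \partial_k\!\left(\frac{\partial \psi}{\partial\,(\partial_k \varepsilon_{ij})}\right),

for a free energy \psi(\varepsilon, \nabla\varepsilon); the non-conventional theory developed in the paper admits Cauchy stresses that are not of this Euler-Lagrange form.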
Open AccessArticle Fruit-80: A Secure Ultra-Lightweight Stream Cipher for Constrained Environments
Entropy 2018, 20(3), 180; doi:10.3390/e20030180
Received: 4 February 2018 / Revised: 5 March 2018 / Accepted: 5 March 2018 / Published: 8 March 2018
PDF Full-text (1217 KB) | HTML Full-text | XML Full-text
Abstract
In Fast Software Encryption (FSE) 2015, while presenting a new idea (i.e., the design of stream ciphers with the small internal state by using a secret key, not only in the initialization but also in the keystream generation), Sprout was proposed. Sprout was
[...] Read more.
In Fast Software Encryption (FSE) 2015, while presenting a new idea (i.e., the design of stream ciphers with the small internal state by using a secret key, not only in the initialization but also in the keystream generation), Sprout was proposed. Sprout was insecure, and an improved version of Sprout was presented at FSE 2017. We introduced the Fruit stream cipher informally in 2016 on the IACR ePrint archive, and a few cryptanalyses of it were published. Fortunately, the main structure of Fruit remained resistant. Now, Fruit-80 is presented as the final version, which is easier to implement and is secure. The combined size of the LFSR and NFSR in Fruit-80 is only 80 bits (for an 80-bit security level), whereas, for resistance to classical time-memory-data tradeoff (TMDTO) attacks, the internal state size should be at least twice the security level. To satisfy this rule and to design a concrete cipher, we used some new design ideas. It seems that the bottleneck of designing an ultra-lightweight stream cipher is TMDTO distinguishing attacks. A countermeasure was suggested, and another countermeasure is proposed here. Fruit-80 is better than other small-state stream ciphers in terms of initialization speed and hardware area. It is possible to redesign many existing stream ciphers and achieve a significantly smaller area by using the new idea. Full article
(This article belongs to the Special Issue Information Theory and 5G Technologies)
Open AccessArticle Content Adaptive Lagrange Multiplier Selection for Rate-Distortion Optimization in 3-D Wavelet-Based Scalable Video Coding
Entropy 2018, 20(3), 181; doi:10.3390/e20030181
Received: 31 January 2018 / Revised: 25 February 2018 / Accepted: 6 March 2018 / Published: 8 March 2018
Cited by 1 | PDF Full-text (3514 KB) | HTML Full-text | XML Full-text
Abstract
Rate-distortion optimization (RDO) plays an essential role in substantially enhancing the coding efficiency. Currently, rate-distortion optimized mode decision is widely used in scalable video coding (SVC). Among all the possible coding modes, it aims to select the one which has the best trade-off
[...] Read more.
Rate-distortion optimization (RDO) plays an essential role in substantially enhancing the coding efficiency. Currently, rate-distortion optimized mode decision is widely used in scalable video coding (SVC). Among all the possible coding modes, it aims to select the one which has the best trade-off between bitrate and compression distortion. Specifically, this trade-off is tuned through the choice of the Lagrange multiplier. Despite the prevalence of the conventional method for Lagrange multiplier selection in hybrid video coding, the underlying formulation is not applicable to 3-D wavelet-based SVC, where explicit values of the quantization step are not available and the content features of the input signal are not taken into account. In this paper, an efficient content adaptive Lagrange multiplier selection algorithm is proposed in the context of RDO for 3-D wavelet-based SVC targeting quality scalability. Our contributions are two-fold. First, we introduce a novel weighting method, which takes into account the mutual information, gradient per pixel, and texture homogeneity to measure the temporal subband characteristics after applying the motion-compensated temporal filtering (MCTF) technique. Second, based on the proposed subband weighting factor model, we derive the optimal Lagrange multiplier. Experimental results demonstrate that the proposed algorithm enables more satisfactory video quality with negligible additional computational complexity. Full article
(This article belongs to the Special Issue Rate-Distortion Theory and Information Theory)
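The mode-decision step that this Lagrange multiplier feeds into can be summarized in a few lines; the candidate modes, their distortion/rate numbers, and the lambda value below are purely illustrative:

def rdo_mode_decision(candidates, lam):
    # candidates: iterable of (mode, distortion, rate_bits); pick the mode minimizing J = D + lambda * R.
    return min(candidates, key=lambda c: c[1] + lam * c[2])

candidates = [
    ("skip",  950.0,  12),
    ("inter", 310.0, 220),
    ("intra", 150.0, 640),
]
print(rdo_mode_decision(candidates, lam=0.85)[0])  # "inter" wins for this toy example

The paper's contribution lies in deriving a content-adaptive lambda for the temporal subbands produced by MCTF, not in this selection loop itself.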
Open AccessArticle Gaussian Optimality for Derivatives of Differential Entropy Using Linear Matrix Inequalities
Entropy 2018, 20(3), 182; doi:10.3390/e20030182
Received: 23 January 2018 / Revised: 22 February 2018 / Accepted: 5 March 2018 / Published: 9 March 2018
PDF Full-text (792 KB) | HTML Full-text | XML Full-text
Abstract
Let Z be a standard Gaussian random variable, X be independent of Z, and t be a strictly positive scalar. For the derivatives in t of the differential entropy of X + √t Z, McKean noticed that Gaussian X achieves the
[...] Read more.
Let Z be a standard Gaussian random variable, X be independent of Z, and t be a strictly positive scalar. For the derivatives in t of the differential entropy of X + √t Z, McKean noticed that Gaussian X achieves the extreme for the first and second derivatives, among distributions with a fixed variance, and he conjectured that this holds for general orders of derivatives. This conjecture implies that the signs of the derivatives alternate. Recently, Cheng and Geng proved that this alternation holds for the first four orders. In this work, we employ the technique of linear matrix inequalities to show that: firstly, Cheng and Geng’s method may not generalize to higher orders; secondly, when the probability density function of X + √t Z is log-concave, McKean’s conjecture holds for orders up to at least five. As a corollary, we also recover Toscani’s result on the sign of the third derivative of the entropy power of X + √t Z, using a much simpler argument. Full article
(This article belongs to the Section Information Theory)
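For context, the first of these derivatives is fixed by de Bruijn's identity, and McKean's conjecture amounts to the alternation of signs at higher orders (standard statements, written here in the notation of the abstract, with J denoting Fisher information):

\frac{\mathrm{d}}{\mathrm{d}t}\, h\!\left(X+\sqrt{t}\,Z\right) \;=\; \tfrac{1}{2}\, J\!\left(X+\sqrt{t}\,Z\right) \;\ge\; 0,
\qquad
(-1)^{n+1}\, \frac{\mathrm{d}^{\,n}}{\mathrm{d}t^{\,n}}\, h\!\left(X+\sqrt{t}\,Z\right) \;\ge\; 0 \quad (\text{conjectured for all } n).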
Open AccessFeature PaperArticle Equilibrium States in Two-Temperature Systems
Entropy 2018, 20(3), 183; doi:10.3390/e20030183
Received: 24 January 2018 / Revised: 24 February 2018 / Accepted: 24 February 2018 / Published: 9 March 2018
PDF Full-text (2922 KB) | HTML Full-text | XML Full-text
Abstract
Systems characterized by more than one temperature usually appear in nonequilibrium statistical mechanics. In some cases, e.g., glasses, there is a temperature at which fast variables become thermalized, and another case associated with modes that evolve towards an equilibrium state in a very
[...] Read more.
Systems characterized by more than one temperature usually appear in nonequilibrium statistical mechanics. In some cases, e.g., glasses, there is a temperature at which fast variables become thermalized, and another case associated with modes that evolve towards an equilibrium state in a very slow way. Recently, it was shown that a system of vortices interacting repulsively, considered as an appropriate model for type-II superconductors, presents an equilibrium state characterized by two temperatures. The main novelty concerns the fact that apart from the usual temperature T, related to fluctuations in particle velocities, an additional temperature θ was introduced, associated with fluctuations in particle positions. Since they present physically distinct characteristics, the system may reach an equilibrium state, characterized by finite and different values of these temperatures. In the application to type-II superconductors, it was shown that θ ≫ T, so that thermal effects could be neglected, leading to a consistent thermodynamic framework based solely on the temperature θ. In the present work, a more general situation, concerning a system characterized by two distinct temperatures θ1 and θ2, which may be of the same order of magnitude, is discussed. These temperatures appear as coefficients of different diffusion contributions of a nonlinear Fokker-Planck equation. An H-theorem is proven, relating such a Fokker-Planck equation to a sum of two entropic forms, each of them associated with a given diffusion term; as a consequence, the corresponding stationary state may be considered as an equilibrium state, characterized by two temperatures. One of the conditions for such a state to occur is that the different temperature parameters, θ1 and θ2, should be thermodynamically conjugated to distinct entropic forms, S1 and S2, respectively. A functional Λ[P] ≡ Λ(S1[P], S2[P]) is introduced, which presents properties characteristic of an entropic form; moreover, a thermodynamically conjugated temperature parameter γ ≡ γ(θ1, θ2) can be consistently defined, so that an alternative physical description is proposed in terms of these pairs of variables. The physical consequences, and particularly, the fact that the equilibrium-state distribution, obtained from the Fokker-Planck equation, should coincide with the one from entropy extremization, are discussed. Full article
(This article belongs to the Special Issue News Trends in Statistical Physics of Complex Systems)
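A schematic one-dimensional form of the kind of nonlinear Fokker-Planck equation described above, with two diffusion contributions weighted by the two temperatures (a generic member of this class, not necessarily the paper's exact equation), is

\frac{\partial P(x,t)}{\partial t} \;=\; -\frac{\partial}{\partial x}\bigl[A(x)\,P\bigr] \;+\; \theta_{1}\,\frac{\partial^{2}\,\Psi_{1}[P]}{\partial x^{2}} \;+\; \theta_{2}\,\frac{\partial^{2}\,\Psi_{2}[P]}{\partial x^{2}},

where each nonlinear functional \Psi_{i}[P] is thermodynamically conjugated, via the H-theorem, to its own entropic form S_{i}.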
Open AccessArticle Fisher Information Based Meteorological Factors Introduction and Features Selection for Short-Term Load Forecasting
Entropy 2018, 20(3), 184; doi:10.3390/e20030184
Received: 21 January 2018 / Revised: 26 February 2018 / Accepted: 7 March 2018 / Published: 9 March 2018
PDF Full-text (945 KB) | HTML Full-text | XML Full-text
Abstract
Weather information is an important factor in short-term load forecasting (STLF). However, for a long time, more importance has always been attached to forecasting models instead of other processes such as the introduction of weather factors or feature selection for STLF. The main
[...] Read more.
Weather information is an important factor in short-term load forecasting (STLF). However, for a long time, more importance has always been attached to forecasting models instead of other processes such as the introduction of weather factors or feature selection for STLF. The main aim of this paper is to develop a novel methodology, based on Fisher information, for the introduction of meteorological variables and for variable selection in STLF. Fisher information computation for one-dimensional and multidimensional weather variables is first described, and then the introduction of meteorological factors and variable selection for STLF models are discussed in detail. On this basis, different forecasting models with the proposed methodology are established. The proposed methodology is implemented on real data obtained from the Electric Power Utility of Zhenjiang, Jiangsu Province, in southeast China. The results show the advantages of the proposed methodology over traditional ones in terms of prediction accuracy, demonstrating its clear practical significance. Therefore, it can be used as a unified method for introducing weather variables into STLF models and selecting their features. Full article
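As a hedged illustration of the kind of quantity involved, the discrete Fisher information measure of a single weather variable can be estimated from a histogram of its values (this is the generic textbook estimator, not necessarily the exact one adopted in the paper):

import numpy as np

def fisher_information(x, bins=20):
    # Discrete Fisher information measure of a 1-D variable from a histogram estimate
    # of its probability distribution: sum of (p[i+1] - p[i])^2 / p[i].
    p, _ = np.histogram(x, bins=bins)
    p = p / p.sum()
    p = np.where(p > 0, p, 1e-12)          # guard against division by zero
    return float(np.sum(np.diff(p) ** 2 / p[:-1]))

rng = np.random.default_rng(1)
temperature = rng.normal(20, 5, size=1000)   # synthetic stand-in for a weather series
print(fisher_information(temperature))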
Open AccessArticle A Lower Bound on the Differential Entropy of Log-Concave Random Vectors with Applications
Entropy 2018, 20(3), 185; doi:10.3390/e20030185
Received: 18 January 2018 / Revised: 6 March 2018 / Accepted: 6 March 2018 / Published: 9 March 2018
PDF Full-text (496 KB) | HTML Full-text | XML Full-text
Abstract
We derive a lower bound on the differential entropy of a log-concave random variable X in terms of the p-th absolute moment of X. The new bound leads to a reverse entropy power inequality with an explicit constant, and to new
[...] Read more.
We derive a lower bound on the differential entropy of a log-concave random variable X in terms of the p-th absolute moment of X. The new bound leads to a reverse entropy power inequality with an explicit constant, and to new bounds on the rate-distortion function and the channel capacity. Specifically, we study the rate-distortion function for log-concave sources and distortion measure d(x, x̂) = |x − x̂|^r, with r ≥ 1, and we establish that the difference between the rate-distortion function and the Shannon lower bound is at most log √(πe) ≈ 1.5 bits, independently of r and the target distortion d. For mean-square error distortion, the difference is at most log √(πe/2) ≈ 1 bit, regardless of d. We also provide bounds on the capacity of memoryless additive noise channels when the noise is log-concave. We show that the difference between the capacity of such channels and the capacity of the Gaussian channel with the same noise power is at most log √(πe/2) ≈ 1 bit. Our results generalize to the case of a random vector X with possibly dependent coordinates. Our proof technique leverages tools from convex geometry. Full article
(This article belongs to the Special Issue Entropy and Information Inequalities)
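For reference, the Shannon lower bound for mean-square error distortion is the standard expression

R_{\mathrm{SLB}}(D) \;=\; h(X) \;-\; \tfrac{1}{2}\,\log\!\left(2\pi e\, D\right),

and the result quoted above bounds R(D) − R_{SLB}(D) by log √(πe/2) ≈ 1 bit for any log-concave source.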
Open AccessArticle Conformal Flattening for Deformed Information Geometries on the Probability Simplex
Entropy 2018, 20(3), 186; doi:10.3390/e20030186
Received: 20 February 2018 / Revised: 8 March 2018 / Accepted: 8 March 2018 / Published: 10 March 2018
PDF Full-text (253 KB) | HTML Full-text | XML Full-text
Abstract
Recent progress of theories and applications regarding statistical models with generalized exponential functions in statistical science is having an impact on the movement to deform the standard structure of information geometry. For this purpose, various representing functions are playing central roles. In this
[...] Read more.
Recent progress of theories and applications regarding statistical models with generalized exponential functions in statistical science is having an impact on the movement to deform the standard structure of information geometry. For this purpose, various representing functions are playing central roles. In this paper, we consider two important notions in information geometry, i.e., invariance and dual flatness, from a viewpoint of representing functions. We first characterize a pair of representing functions that realizes the invariant geometry by solving a system of ordinary differential equations. Next, by proposing a new transformation technique, i.e., conformal flattening, we construct dually flat geometries from a certain class of non-flat geometries. Finally, we apply the results to demonstrate several properties of gradient flows on the probability simplex. Full article
(This article belongs to the Special Issue News Trends in Statistical Physics of Complex Systems)
Open AccessArticle Some Iterative Properties of (F1, F2)-Chaos in Non-Autonomous Discrete Systems
Entropy 2018, 20(3), 188; doi:10.3390/e20030188
Received: 24 January 2018 / Revised: 3 March 2018 / Accepted: 7 March 2018 / Published: 12 March 2018
PDF Full-text (249 KB) | HTML Full-text | XML Full-text
Abstract
This paper is concerned with the invariance of (F1, F2)-scrambled sets under iterations. The main results are an extension of the compound invariance of Li–Yorke chaos and distributional chaos. New definitions of (F1, F2)
[...] Read more.
This paper is concerned with the invariance of (F1, F2)-scrambled sets under iterations. The main results are an extension of the compound invariance of Li–Yorke chaos and distributional chaos. New definitions of (F1, F2)-scrambled sets in non-autonomous discrete systems are given. For a positive integer k, the properties P(k) and Q(k) of Furstenberg families are introduced. It is shown that, for any positive integer k and any s ∈ [0, 1], the Furstenberg family M̄(s) has properties P(k) and Q(k), where M̄(s) denotes the family of all infinite subsets of Z+ whose upper density is not less than s. Then, the following conclusion is obtained: D is an (M̄(s), M̄(t))-scrambled set of (X, f1,∞) if and only if D is an (M̄(s), M̄(t))-scrambled set of (X, f1,∞^[m]). Full article
(This article belongs to the Special Issue Research Frontier in Chaos Theory and Complex Networks)
Open AccessArticle An Investigation into the Relationship among Psychiatric, Demographic and Socio-Economic Variables with Bayesian Network Modeling
Entropy 2018, 20(3), 189; doi:10.3390/e20030189
Received: 29 January 2018 / Revised: 6 March 2018 / Accepted: 9 March 2018 / Published: 12 March 2018
PDF Full-text (327 KB) | HTML Full-text | XML Full-text
Abstract
The aim of this paper is to investigate the factors influencing the Beck Depression Inventory score, the Beck Hopelessness Scale score and the Rosenberg Self-Esteem score and the relationships among the psychiatric, demographic and socio-economic variables with Bayesian network modeling. The data of
[...] Read more.
The aim of this paper is to investigate the factors influencing the Beck Depression Inventory score, the Beck Hopelessness Scale score and the Rosenberg Self-Esteem score and the relationships among the psychiatric, demographic and socio-economic variables with Bayesian network modeling. The data of 823 university students consist of 21 continuous and discrete relevant psychiatric, demographic and socio-economic variables. After the discretization of the continuous variables by two approaches, two Bayesian network models are constructed using the bnlearn package in R, and the results are presented via figures and probabilities. One of the most significant results is that in the first Bayesian network model, the gender of the students influences the level of depression, with female students being more depressive. In the second model, social activity directly influences the level of depression. In each model, depression influences both the level of hopelessness and self-esteem in students; additionally, as the level of depression increases, the level of hopelessness increases, but the level of self-esteem drops. Full article
(This article belongs to the Special Issue Foundations of Statistics)
Open AccessArticle Game Theoretic Approach for Systematic Feature Selection; Application in False Alarm Detection in Intensive Care Units
Entropy 2018, 20(3), 190; doi:10.3390/e20030190
Received: 10 January 2018 / Revised: 27 February 2018 / Accepted: 5 March 2018 / Published: 12 March 2018
PDF Full-text (472 KB) | HTML Full-text | XML Full-text
Abstract
Intensive Care Units (ICUs) are equipped with many sophisticated sensors and monitoring devices to provide the highest quality of care for critically ill patients. However, these devices might generate false alarms that reduce standard of care and result in desensitization of caregivers to
[...] Read more.
Intensive Care Units (ICUs) are equipped with many sophisticated sensors and monitoring devices to provide the highest quality of care for critically ill patients. However, these devices might generate false alarms that reduce standard of care and result in desensitization of caregivers to alarms. Therefore, reducing the number of false alarms is of great importance. Many approaches, such as signal processing, machine learning, and designing more accurate sensors, have been developed for this purpose. However, the significant intrinsic correlation among the extracted features from different sensors has been mostly overlooked. A majority of current data mining techniques fail to capture such correlation among the collected signals from different sensors, which limits their alarm recognition capabilities. Here, we propose a novel information-theoretic predictive modeling technique based on the idea of coalition game theory to enhance the accuracy of false alarm detection in ICUs by accounting for the synergistic power of signal attributes in the feature selection stage. This approach brings together techniques from information theory and game theory to account for inter-feature mutual information in determining the most correlated predictors with respect to false alarms by calculating the Banzhaf power of each feature. The numerical results show that the proposed method can enhance classification accuracy and improve the area under the ROC (receiver operating characteristic) curve compared to other feature selection techniques, when integrated in classifiers, such as Bayes-Net, that consider inter-feature dependencies. Full article
(This article belongs to the Special Issue Information Theory in Game Theory)
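As a minimal sketch of the Banzhaf power computation (with a made-up coalition value function standing in for the paper's information-theoretic characteristic function, and hypothetical feature names):

from itertools import combinations

def banzhaf_power(features, v):
    # Banzhaf index of each feature: its average marginal contribution v(S + {f}) - v(S)
    # over all coalitions S of the remaining features.
    power = {}
    for f in features:
        others = [g for g in features if g != f]
        total, count = 0.0, 0
        for r in range(len(others) + 1):
            for coalition in combinations(others, r):
                s = frozenset(coalition)
                total += v(s | {f}) - v(s)
                count += 1
        power[f] = total / count
    return power

def v(subset):
    # Toy value function: a small per-feature gain plus a synergy bonus for "hr" + "spo2".
    score = 0.1 * len(subset)
    if {"hr", "spo2"} <= subset:
        score += 0.3
    return score

print(banzhaf_power(["hr", "spo2", "resp"], v))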
Open AccessArticle Gaussian Processes and Polynomial Chaos Expansion for Regression Problem: Linkage via the RKHS and Comparison via the KL Divergence
Entropy 2018, 20(3), 191; doi:10.3390/e20030191
Received: 21 January 2018 / Revised: 6 March 2018 / Accepted: 12 March 2018 / Published: 12 March 2018
PDF Full-text (1199 KB) | HTML Full-text | XML Full-text
Abstract
In this paper, we examine two widely-used approaches, the polynomial chaos expansion (PCE) and Gaussian process (GP) regression, for the development of surrogate models. The theoretical differences between the PCE and GP approximations are discussed. A state-of-the-art PCE approach is constructed based on
[...] Read more.
In this paper, we examine two widely-used approaches, the polynomial chaos expansion (PCE) and Gaussian process (GP) regression, for the development of surrogate models. The theoretical differences between the PCE and GP approximations are discussed. A state-of-the-art PCE approach is constructed based on high precision quadrature points; however, the need for truncation may result in potential precision loss; the GP approach performs well on small datasets and allows a fine and precise trade-off between fitting the data and smoothing, but its overall performance depends largely on the training dataset. The reproducing kernel Hilbert space (RKHS) and Mercer’s theorem are introduced to form a linkage between the two methods. The theorem shows that the two surrogates can be embedded in two isomorphic RKHSs, on the basis of which we propose a novel method named Gaussian process on polynomial chaos basis (GPCB) that incorporates the PCE and GP. A theoretical comparison is made between the PCE and GPCB with the help of the Kullback–Leibler divergence. We show that the GPCB is as stable and accurate as the PCE method. Furthermore, the GPCB is a one-step Bayesian method that chooses the best subset of RKHS in which the true function should lie, while the PCE method requires an adaptive procedure. Simulations of 1D and 2D benchmark functions show that GPCB outperforms both the PCE and classical GP methods. In order to solve high-dimensional problems, a random sampling scheme with a constructive design (i.e., a tensor product of quadrature points) is proposed to generate a valid training dataset for the GPCB method. This approach exploits the high numerical accuracy underlying the quadrature points while ensuring computational feasibility. Finally, the experimental results show that our sampling strategy has a higher accuracy than classical experimental designs; meanwhile, it is suitable for solving high-dimensional problems. Full article
(This article belongs to the Section Information Theory)
Open AccessArticle Thermodynamic Properties of a Regular Black Hole in Gravity Coupling to Nonlinear Electrodynamics
Entropy 2018, 20(3), 192; doi:10.3390/e20030192
Received: 29 September 2017 / Revised: 25 November 2017 / Accepted: 22 January 2018 / Published: 13 March 2018
PDF Full-text (263 KB) | HTML Full-text | XML Full-text
Abstract
We first calculate the heat capacities of the nonlinear electrodynamics (NED) black hole for fixed mass and electric charge, and the electric capacitances for fixed mass and entropy. Then, we study the properties of the Ruppeiner thermodynamic geometry of the NED black hole.
[...] Read more.
We first calculate the heat capacities of the nonlinear electrodynamics (NED) black hole for fixed mass and electric charge, and the electric capacitances for fixed mass and entropy. Then, we study the properties of the Ruppeiner thermodynamic geometry of the NED black hole. Lastly, some discussions on the thermal stability of the NED black hole and the implication to the flatness of its Ruppeiner thermodynamic geometry are given. Full article
(This article belongs to the Special Issue Geometry in Thermodynamics II)
Open AccessArticle An Information-Theoretic Perspective on the Quantum Bit Commitment Impossibility Theorem
Entropy 2018, 20(3), 193; doi:10.3390/e20030193
Received: 16 January 2018 / Revised: 17 February 2018 / Accepted: 12 March 2018 / Published: 13 March 2018
PDF Full-text (394 KB) | HTML Full-text | XML Full-text
Abstract
This paper proposes a different approach to pinpoint the causes for which an unconditionally secure quantum bit commitment protocol cannot be realized, beyond the technical details on which the proof of Mayers’ no-go theorem is constructed. We have adopted the tools of quantum
[...] Read more.
This paper proposes a different approach to pinpoint the causes for which an unconditionally secure quantum bit commitment protocol cannot be realized, beyond the technical details on which the proof of Mayers’ no-go theorem is constructed. We have adopted the tools of quantum entropy analysis to investigate the conditions under which the security properties of quantum bit commitment can be circumvented. Our study has revealed that cheating the binding property requires the quantum system acting as the safe to harbor the same amount of uncertainty with respect to both observers (Alice and Bob) as well as the use of entanglement. Our analysis also suggests that the ability to cheat one of the two fundamental properties of bit commitment by any of the two participants depends on how much information is leaked from one side of the system to the other and how much remains hidden from the other participant. Full article
(This article belongs to the Special Issue Quantum Probability and Randomness)
Open AccessArticle Estimating Multivariate Discrete Distributions Using Bernstein Copulas
Entropy 2018, 20(3), 194; doi:10.3390/e20030194
Received: 18 December 2017 / Revised: 1 March 2018 / Accepted: 7 March 2018 / Published: 14 March 2018
PDF Full-text (3054 KB) | HTML Full-text | XML Full-text
Abstract
Measuring the dependence between random variables is one of the most fundamental problems in statistics, and therefore, determining the joint distribution of the relevant variables is crucial. Copulas have recently become an important tool for properly inferring the joint distribution of the variables
[...] Read more.
Measuring the dependence between random variables is one of the most fundamental problems in statistics, and therefore, determining the joint distribution of the relevant variables is crucial. Copulas have recently become an important tool for properly inferring the joint distribution of the variables of interest. Although many studies have addressed the case of continuous variables, few studies have focused on treating discrete variables. This paper presents a nonparametric approach to the estimation of joint discrete distributions with bounded support using copulas and Bernstein polynomials. We present an application in real obsessive-compulsive disorder data. Full article
Open AccessArticle Trustworthiness Measurement Algorithm for TWfMS Based on Software Behaviour Entropy
Entropy 2018, 20(3), 195; doi:10.3390/e20030195
Received: 3 February 2018 / Revised: 10 March 2018 / Accepted: 10 March 2018 / Published: 14 March 2018
PDF Full-text (783 KB) | HTML Full-text | XML Full-text
Abstract
As the virtual mirror of complex real-time business processes of organisations’ underlying information systems, the workflow management system (WfMS) has emerged in recent decades as a new self-autonomous paradigm in the open, dynamic, distributed computing environment. In order to construct a trustworthy workflow
[...] Read more.
As the virtual mirror of complex real-time business processes of organisations’ underlying information systems, the workflow management system (WfMS) has emerged in recent decades as a new self-autonomous paradigm in the open, dynamic, distributed computing environment. In order to construct a trustworthy workflow management system (TWfMS), the design of a software behaviour trustworthiness measurement algorithm is an urgent task for researchers. As part of the trustworthiness mechanism, a measurement algorithm that can cope with the uncertain software behaviour trustworthiness information of the WfMS should be provided as an infrastructure. Based on the framework presented in our previous research, we first introduce a formal model for the WfMS trustworthiness measurement, with the main property reasoning based on calculus operators. Second, this paper proposes a novel measurement algorithm based on the software behaviour entropy of the calculus operators, through the principle of maximum entropy (POME) and data mining methods. Third, the trustworthiness measurement algorithm for incomplete software behaviour tests and runtime information is discussed and compared in detail. Finally, we provide conclusions and discuss certain future research areas of the TWfMS. Full article
Open AccessArticle Real-Time ECG-Based Detection of Fatigue Driving Using Sample Entropy
Entropy 2018, 20(3), 196; doi:10.3390/e20030196
Received: 3 February 2018 / Revised: 3 March 2018 / Accepted: 13 March 2018 / Published: 15 March 2018
PDF Full-text (2564 KB) | HTML Full-text | XML Full-text
Abstract
In the present work, the heart rate variability (HRV) characteristics, calculated by sample entropy (SampEn), were used to analyze the driving fatigue state at successive driving stages. Combined with the relative power spectrum ratio β/(θ + α), subjective questionnaire, and brain network parameters of
[...] Read more.
In the present work, the heart rate variability (HRV) characteristics, calculated by sample entropy (SampEn), were used to analyze the driving fatigue state at successive driving stages. Combined with the relative power spectrum ratio β/(θ + α), subjective questionnaire, and brain network parameters of electroencephalogram (EEG) signals, the relationships between the different characteristics of driving fatigue were discussed. Thus, it can be concluded that the HRV characteristics (RR SampEn and R peaks SampEn), as well as the relative power spectrum ratio β/(θ + α) of the channels (C3, C4, P3, P4), the subjective questionnaire, and the brain network parameters, can effectively detect driving fatigue at various driving stages. In addition, the method for collecting ECG signals from the palm does not need patch electrodes, is convenient, and will be practical to use in actual driving situations in the future. Full article
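A minimal implementation of SampEn in the standard Richman-Moorman form, with the usual choices m = 2 and r = 0.2 times the standard deviation (not the authors' exact code), is sketched below:

import numpy as np

def sample_entropy(x, m=2, r=None):
    # SampEn = -ln(A / B), where B counts template matches of length m and A of length m + 1,
    # using the Chebyshev distance and tolerance r.
    x = np.asarray(x, dtype=float)
    n = len(x)
    if r is None:
        r = 0.2 * np.std(x)

    def count_matches(mm):
        templates = np.array([x[i:i + mm] for i in range(n - mm)])
        count = 0
        for i in range(len(templates)):
            d = np.max(np.abs(templates[i + 1:] - templates[i]), axis=1)
            count += np.sum(d <= r)
        return count

    b = count_matches(m)
    a = count_matches(m + 1)
    return -np.log(a / b) if a > 0 and b > 0 else np.inf

rr = 0.8 + 0.05 * np.random.default_rng(2).standard_normal(300)   # synthetic RR intervals (s)
print(sample_entropy(rr))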
Open AccessArticle Low Probability of Intercept-Based Radar Waveform Design for Spectral Coexistence of Distributed Multiple-Radar and Wireless Communication Systems in Clutter
Entropy 2018, 20(3), 197; doi:10.3390/e20030197
Received: 17 December 2017 / Revised: 21 February 2018 / Accepted: 23 February 2018 / Published: 16 March 2018
PDF Full-text (1067 KB) | HTML Full-text | XML Full-text
Abstract
In this paper, the problem of low probability of intercept (LPI)-based radar waveform design for distributed multiple-radar system (DMRS) is studied, which consists of multiple radars coexisting with a wireless communication system in the same frequency band. The primary objective of the multiple-radar
[...] Read more.
In this paper, the problem of low probability of intercept (LPI)-based radar waveform design for distributed multiple-radar system (DMRS) is studied, which consists of multiple radars coexisting with a wireless communication system in the same frequency band. The primary objective of the multiple-radar system is to minimize the total transmitted energy by optimizing the transmission waveform of each radar with the communication signals acting as interference to the radar system, while meeting a desired target detection/characterization performance. Firstly, signal-to-clutter-plus-noise ratio (SCNR) and mutual information (MI) are used as the practical metrics to evaluate target detection and characterization performance, respectively. Then, the SCNR- and MI-based optimal radar waveform design methods are formulated. The resulting waveform optimization problems are solved through the well-known bisection search technique. Simulation results, obtained for various examples and scenarios, demonstrate that the proposed radar waveform design schemes can markedly improve the LPI performance of the DMRS without interfering with friendly communications. Full article
(This article belongs to the Special Issue Radar and Information Theory)
Open AccessArticle Modulation Signal Recognition Based on Information Entropy and Ensemble Learning
Entropy 2018, 20(3), 198; doi:10.3390/e20030198
Received: 30 January 2018 / Revised: 13 March 2018 / Accepted: 14 March 2018 / Published: 16 March 2018
PDF Full-text (1703 KB) | HTML Full-text | XML Full-text
Abstract
In this paper, information entropy and ensemble learning based signal recognition theory and algorithms have been proposed. We have extracted 16 kinds of entropy features out of 9 types of modulated signals. The types of information entropy used are numerous, including Rényi entropy
[...] Read more.
In this paper, information entropy and ensemble learning based signal recognition theory and algorithms have been proposed. We have extracted 16 kinds of entropy features out of 9 types of modulated signals. The types of information entropy used are numerous, including Rényi entropy and energy entropy based on S Transform and Generalized S Transform. We have used three feature selection algorithms, including sequence forward selection (SFS), sequence forward floating selection (SFFS) and RELIEF-F to select the optimal feature subset from 16 entropy features. We use five classifiers, including k-nearest neighbor (KNN), support vector machine (SVM), Adaboost, Gradient Boosting Decision Tree (GBDT) and eXtreme Gradient Boosting (XGBoost) to classify the original feature set and the feature subsets selected by different feature selection algorithms. The simulation results show that the feature subsets selected by SFS and SFFS algorithms are the best, with a 48% increase in recognition rate over the original feature set when using KNN classifier and a 34% increase when using SVM classifier. For the other three classifiers, the original feature set can achieve the best recognition performance. The XGBoost classifier has the best recognition performance, the overall recognition rate is 97.74% and the recognition rate can reach 82% when the signal to noise ratio (SNR) is −10 dB. Full article
(This article belongs to the Special Issue Radar and Information Theory)
Open AccessArticle Criticality Analysis of the Lower Ionosphere Perturbations Prior to the 2016 Kumamoto (Japan) Earthquakes as Based on VLF Electromagnetic Wave Propagation Data Observed at Multiple Stations
Entropy 2018, 20(3), 199; doi:10.3390/e20030199
Received: 17 February 2018 / Revised: 12 March 2018 / Accepted: 14 March 2018 / Published: 16 March 2018
PDF Full-text (3426 KB) | HTML Full-text | XML Full-text
Abstract
The perturbations of the ionosphere which are observed prior to significant earthquakes (EQs) have long been investigated and could be considered promising for short-term EQ prediction. One way to monitor ionospheric perturbations is by studying VLF/LF electromagnetic wave propagation through the lower ionosphere
[...] Read more.
The perturbations of the ionosphere which are observed prior to significant earthquakes (EQs) have long been investigated and could be considered promising for short-term EQ prediction. One way to monitor ionospheric perturbations is by studying VLF/LF electromagnetic wave propagation through the lower ionosphere between specific transmitters and receivers. For this purpose, a network of eight receivers has been deployed throughout Japan which receive subionospheric signals from different transmitters located both in the same and other countries. In this study we analyze, in terms of the recently proposed natural time analysis, the data recorded by the above-mentioned network prior to the catastrophic 2016 Kumamoto fault-type EQs, which were as large as the 1995 Kobe EQ. These EQs occurred within a two-day period (14 April: Mw = 6.2 and Mw = 6.0; 15 April: Mw = 7.0) at shallow depths (~10 km), while their epicenters were adjacent. Our results show that lower ionospheric perturbations present critical dynamics from two weeks up to two days before the main shock occurrence. The results are compared to those obtained by the conventional nighttime fluctuation method for the same dataset and exhibit consistency. Finally, the temporal evolutions of criticality in ionospheric parameters and those in the lithosphere as seen from the ULF electromagnetic emissions are discussed in the context of the lithosphere-atmosphere-ionosphere coupling. Full article
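For reference, in natural time analysis a sequence of N events is mapped to χ_k = k/N with normalized energies p_k, and criticality is identified through the variance of natural time (standard definitions from that literature):

\kappa_{1} \;=\; \sum_{k=1}^{N} p_{k}\,\chi_{k}^{2} \;-\; \Bigl(\sum_{k=1}^{N} p_{k}\,\chi_{k}\Bigr)^{2}, \qquad \kappa_{1} \approx 0.070 \ \text{at criticality}.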
Open AccessFeature PaperArticle Leggett-Garg Inequalities for Quantum Fluctuating Work
Entropy 2018, 20(3), 200; doi:10.3390/e20030200
Received: 1 February 2018 / Revised: 28 February 2018 / Accepted: 8 March 2018 / Published: 16 March 2018
PDF Full-text (317 KB) | HTML Full-text | XML Full-text
Abstract
The Leggett-Garg inequalities serve to test whether or not quantum correlations in time can be explained within a classical macrorealistic framework. We apply this test to thermodynamics and derive a set of Leggett-Garg inequalities for the statistics of fluctuating work done on a
[...] Read more.
The Leggett-Garg inequalities serve to test whether or not quantum correlations in time can be explained within a classical macrorealistic framework. We apply this test to thermodynamics and derive a set of Leggett-Garg inequalities for the statistics of fluctuating work done on a quantum system unitarily driven in time. It is shown that these inequalities can be violated in a driven two-level system, thereby demonstrating that there exists no general macrorealistic description of quantum work. These violations are shown to emerge within the standard Two-Projective-Measurement scheme as well as for alternative definitions of fluctuating work that are based on weak measurement. Our results elucidate the influences of temporal correlations on work extraction in the quantum regime and highlight a key difference between quantum and classical thermodynamics. Full article
(This article belongs to the Special Issue Quantum Thermodynamics II)
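The simplest Leggett-Garg inequality, for a dichotomic quantity measured at three times with two-time correlators C_{ij}, is the standard bound

K_{3} \;=\; C_{21} + C_{32} - C_{31} \;\le\; 1,

which any macrorealistic description must satisfy; the paper derives analogous bounds for the statistics of fluctuating work.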
Open AccessArticle Global Optimization Employing Gaussian Process-Based Bayesian Surrogates
Entropy 2018, 20(3), 201; doi:10.3390/e20030201
Received: 22 December 2017 / Revised: 8 March 2018 / Accepted: 13 March 2018 / Published: 16 March 2018
PDF Full-text (639 KB) | HTML Full-text | XML Full-text
Abstract
The simulation of complex physics models may lead to enormous computer running times. Since the simulations are expensive it is necessary to exploit the computational budget in the best possible manner. If for a few input parameter settings an output data set has
[...] Read more.
The simulation of complex physics models may lead to enormous computer running times. Since the simulations are expensive it is necessary to exploit the computational budget in the best possible manner. If for a few input parameter settings an output data set has been acquired, one could be interested in taking these data as a basis for finding an extremum and possibly an input parameter set for further computer simulations to determine it—a task which belongs to the realm of global optimization. Within the Bayesian framework we utilize Gaussian processes for the creation of a surrogate model function adjusted self-consistently via hyperparameters to represent the data. Although the probability distribution of the hyperparameters may be widely spread over phase space, we make the assumption that using only their expectation values is sufficient. While this shortcut facilitates a quickly accessible surrogate, it is somewhat justified by the fact that we are not interested in a full representation of the model by the surrogate but only in revealing its maximum. To accomplish this, the surrogate is fed to a utility function whose extremum determines the new parameter set for the next data point to obtain. Moreover, we propose to alternate between two utility functions—expected improvement and maximum variance—in order to avoid the drawbacks of each. Subsequent data points are drawn from the model function until the procedure either settles on the points already found or the surrogate model no longer changes from one iteration to the next. The procedure is applied to mock data in one and two dimensions in order to demonstrate proof of principle of the proposed approach. Full article
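A hedged sketch of the expected-improvement utility and of the alternation with a maximum-variance step described above (maximization convention; all candidate values are illustrative):

import numpy as np
from scipy.stats import norm

def expected_improvement(mu, sigma, f_best, xi=0.01):
    # mu, sigma: surrogate posterior mean and standard deviation at candidate points.
    sigma = np.maximum(sigma, 1e-12)
    z = (mu - f_best - xi) / sigma
    return (mu - f_best - xi) * norm.cdf(z) + sigma * norm.pdf(z)

def next_point(candidates, mu, sigma, f_best, iteration):
    # Alternate between the two utility functions to avoid the drawbacks of each.
    scores = expected_improvement(mu, sigma, f_best) if iteration % 2 == 0 else sigma
    return candidates[np.argmax(scores)]

cand = np.linspace(0, 1, 101)
mu = np.sin(3 * cand)                  # illustrative surrogate mean
sigma = 0.2 + 0.1 * np.cos(5 * cand)   # illustrative surrogate standard deviation
print(next_point(cand, mu, sigma, f_best=mu.max(), iteration=0))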
Open AccessArticle Global Reliability Sensitivity Analysis Based on Maximum Entropy and 2-Layer Polynomial Chaos Expansion
Entropy 2018, 20(3), 202; doi:10.3390/e20030202
Received: 5 February 2018 / Revised: 13 March 2018 / Accepted: 14 March 2018 / Published: 16 March 2018
PDF Full-text (3288 KB) | HTML Full-text | XML Full-text
Abstract
To optimize contributions of uncertain input variables on the statistical parameter of given model, e.g., reliability, global reliability sensitivity analysis (GRSA) provides an appropriate tool to quantify the effects. However, it may be difficult to calculate global reliability sensitivity indices compared with the
[...] Read more.
To optimize contributions of uncertain input variables on the statistical parameter of given model, e.g., reliability, global reliability sensitivity analysis (GRSA) provides an appropriate tool to quantify the effects. However, it may be difficult to calculate global reliability sensitivity indices compared with the traditional global sensitivity indices of model output, because statistical parameters are more difficult to obtain; Monte Carlo simulation (MCS)-related methods seem to be the only way to perform GRSA, but they are usually computationally demanding. This paper presents a new non-MCS calculation to evaluate global reliability sensitivity indices. This method proposes: (i) a 2-layer polynomial chaos expansion (PCE) framework to solve the global reliability sensitivity indices; and (ii) an efficient method to build a surrogate model of the statistical parameter using the maximum entropy (ME) method with the moments provided by PCE. This method has a dramatically reduced computational cost compared with traditional approaches. Two examples are introduced to demonstrate the efficiency and accuracy of the proposed method. It also suggests that the importance rankings with respect to the model output and the associated failure probability may differ, which could help improve the understanding of the given model in further optimization design. Full article
(This article belongs to the Special Issue Maximum Entropy and Its Application II)
Open AccessFeature PaperArticle Computational Information Geometry for Binary Classification of High-Dimensional Random Tensors
Entropy 2018, 20(3), 203; doi:10.3390/e20030203
Received: 25 January 2018 / Revised: 13 March 2018 / Accepted: 14 March 2018 / Published: 17 March 2018
PDF Full-text (520 KB) | HTML Full-text | XML Full-text
Abstract
Evaluating the performance of Bayesian classification in a high-dimensional random tensor is a fundamental problem, usually difficult and under-studied. In this work, we consider two Signal to Noise Ratio (SNR)-based binary classification problems of interest. Under the alternative hypothesis, i.e., for a non-zero
[...] Read more.
Evaluating the performance of Bayesian classification in a high-dimensional random tensor is a fundamental problem, usually difficult and under-studied. In this work, we consider two Signal to Noise Ratio (SNR)-based binary classification problems of interest. Under the alternative hypothesis, i.e., for a non-zero SNR, the observed signals are either a noisy rank-R tensor admitting a Q-order Canonical Polyadic Decomposition (CPD) with large factors of size Nq × R, i.e., for 1 ≤ q ≤ Q, where R, Nq → ∞ with R^(1/q)/Nq converging towards a finite constant, or a noisy tensor admitting a Tucker Decomposition (TKD) of multilinear (M1, …, MQ)-rank with large factors of size Nq × Mq, i.e., for 1 ≤ q ≤ Q, where Nq, Mq → ∞ with Mq/Nq converging towards a finite constant. The classification of the random entries (coefficients) of the core tensor in the CPD/TKD is hard to study since the exact derivation of the minimal Bayes’ error probability is mathematically intractable. To circumvent this difficulty, the Chernoff Upper Bound (CUB) for larger SNR and the Fisher information at low SNR are derived and studied, based on information geometry theory. The tightest CUB is reached for the value minimizing the error exponent, denoted by s*. In general, due to the asymmetry of the s-divergence, the Bhattacharyya Upper Bound (BUB) (that is, the Chernoff Information calculated at s = 1/2) cannot solve this problem effectively. As a consequence, we rely on a costly numerical optimization strategy to find s*. However, thanks to powerful random matrix theory tools, a simple analytical expression of s* is provided with respect to the Signal to Noise Ratio (SNR) in the two schemes considered. This work shows that the BUB is the tightest bound at low SNRs. However, for higher SNRs, this property no longer holds. Full article
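For reference, the Chernoff and Bhattacharyya upper bounds referred to above are, for equiprobable hypotheses with densities p_0 and p_1 (standard definitions),

P_{e} \;\le\; \tfrac{1}{2}\, e^{-C(p_{0},\,p_{1})}, \qquad C(p_{0},p_{1}) \;=\; -\min_{0 \le s \le 1}\, \log \int p_{0}^{\,s}(x)\, p_{1}^{\,1-s}(x)\,\mathrm{d}x,

with the Bhattacharyya bound obtained by fixing s = 1/2 instead of optimizing over s; s* denotes the minimizer.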
Open AccessArticle Output-Feedback Control for Discrete-Time Spreading Models in Complex Networks
Entropy 2018, 20(3), 204; doi:10.3390/e20030204
Received: 3 February 2018 / Revised: 6 March 2018 / Accepted: 7 March 2018 / Published: 19 March 2018
PDF Full-text (820 KB) | HTML Full-text | XML Full-text
Abstract
The problem of stabilizing the spreading process to a prescribed probability distribution over a complex network is considered, where the dynamics of the nodes in the network is given by discrete-time Markov-chain processes. Conditions for the positioning and identification of actuators and sensors
[...] Read more.
The problem of stabilizing the spreading process to a prescribed probability distribution over a complex network is considered, where the dynamics of the nodes in the network is given by discrete-time Markov-chain processes. Conditions for the positioning and identification of actuators and sensors are provided, and sufficient conditions for the exponential stability of the desired distribution are derived. Simulation results for a network of N = 10^6 nodes corroborate our theoretical findings. Full article
(This article belongs to the Special Issue Research Frontier in Chaos Theory and Complex Networks)
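As a generic illustration of a discrete-time Markov-chain spreading model on a network (the standard mean-field SIS update, shown here without the output-feedback control that is the subject of the paper):

import numpy as np

def sis_step(p, adj, beta, delta):
    # p[i]: probability that node i is infected; beta: infection rate; delta: recovery rate.
    no_infection = np.prod(1.0 - beta * adj * p[None, :], axis=1)
    return (1.0 - p) * (1.0 - no_infection) + (1.0 - delta) * p

rng = np.random.default_rng(3)
n = 50
adj = (rng.random((n, n)) < 0.1).astype(float)
adj = np.triu(adj, 1); adj = adj + adj.T      # undirected random graph (illustrative)
p = np.full(n, 0.05)                          # initial infection probabilities
for _ in range(100):
    p = sis_step(p, adj, beta=0.08, delta=0.3)
print(p.mean())                               # average infection level after 100 steps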
Open AccessArticle Generalized Lagrangian Path Approach to Manifestly-Covariant Quantum Gravity Theory
Entropy 2018, 20(3), 205; doi:10.3390/e20030205
Received: 10 January 2018 / Revised: 25 February 2018 / Accepted: 8 March 2018 / Published: 19 March 2018
PDF Full-text (410 KB) | HTML Full-text | XML Full-text
Abstract
A trajectory-based representation for the quantum theory of the gravitational field is formulated. This is achieved in terms of a covariant Generalized Lagrangian-Path (GLP) approach which relies on a suitable statistical representation of Bohmian Lagrangian trajectories, referred to here as GLP-representation. The
[...] Read more.
A trajectory-based representation for the quantum theory of the gravitational field is formulated. This is achieved in terms of a covariant Generalized Lagrangian-Path (GLP) approach which relies on a suitable statistical representation of Bohmian Lagrangian trajectories, referred to here as GLP-representation. The result is established in the framework of the manifestly-covariant quantum gravity theory (CQG-theory) proposed recently and the related CQG-wave equation advancing in proper-time the quantum state associated with massive gravitons. Generally non-stationary analytical solutions for the CQG-wave equation with non-vanishing cosmological constant are determined in such a framework, which exhibit Gaussian-like probability densities that are non-dispersive in proper-time. As a remarkable outcome of the theory achieved by implementing these analytical solutions, the existence of an emergent gravity phenomenon is proven to hold. Accordingly, it is shown that a mean-field background space-time metric tensor can be expressed in terms of a suitable statistical average of stochastic fluctuations of the quantum gravitational field whose quantum-wave dynamics is described by GLP trajectories. Full article
(This article belongs to the Special Issue Emergent Quantum Mechanics – David Bohm Centennial Perspectives)
Open AccessArticle Irreversibility and Action of the Heat Conduction Process
Entropy 2018, 20(3), 206; doi:10.3390/e20030206
Received: 24 January 2018 / Revised: 21 February 2018 / Accepted: 14 March 2018 / Published: 20 March 2018
PDF Full-text (572 KB) | HTML Full-text | XML Full-text
Abstract
Irreversibility (that is, the “one-sidedness” of time) of a physical process can be characterized by using Lyapunov functions in the modern theory of stability. In this theoretical framework, entropy and its production rate have been generally regarded as Lyapunov functions in order to
[...] Read more.
Irreversibility (that is, the “one-sidedness” of time) of a physical process can be characterized by using Lyapunov functions in the modern theory of stability. In this theoretical framework, entropy and its production rate have been generally regarded as Lyapunov functions in order to measure the irreversibility of various physical processes. In fact, the Lyapunov function is not always unique. In the present work, a rigorous proof is given that the entransy and its dissipation rate can also serve as Lyapunov functions associated with the irreversibility of the heat conduction process without the conversion between heat and work. In addition, the variation of the entransy dissipation rate can lead to Fourier’s heat conduction law, while the entropy production rate cannot. This shows that the entransy dissipation rate, rather than the entropy production rate, is the unique action for the heat conduction process, and can be used to establish the finite element method for the approximate solution of heat conduction problems and the optimization of heat transfer processes. Full article
(This article belongs to the Special Issue Phenomenological Thermodynamics of Irreversible Processes)
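For reference, with Fourier's law q = −k∇T the local entransy dissipation rate and the local entropy production rate take the standard forms

\dot{\phi}_{g} \;=\; -\,\mathbf{q}\cdot\nabla T \;=\; k\,|\nabla T|^{2}, \qquad \dot{s}_{\mathrm{gen}} \;=\; \frac{k\,|\nabla T|^{2}}{T^{2}},

and the paper argues that it is the variation of the former, not of the latter, that recovers Fourier's law as the governing action for pure heat conduction.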
Open AccessArticle Application of Entropy Ensemble Filter in Neural Network Forecasts of Tropical Pacific Sea Surface Temperatures
Entropy 2018, 20(3), 207; doi:10.3390/e20030207
Received: 1 February 2018 / Revised: 13 March 2018 / Accepted: 15 March 2018 / Published: 20 March 2018
PDF Full-text (5485 KB) | HTML Full-text | XML Full-text
Abstract
Recently, the Entropy Ensemble Filter (EEF) method was proposed to mitigate the computational cost of the Bootstrap AGGregatING (bagging) method. This method uses the most informative training data sets in the model ensemble rather than all ensemble members created by the conventional bagging.
[...] Read more.
Recently, the Entropy Ensemble Filter (EEF) method was proposed to mitigate the computational cost of the Bootstrap AGGregatING (bagging) method. This method uses the most informative training data sets in the model ensemble rather than all ensemble members created by the conventional bagging. In this study, we evaluate, for the first time, the application of the EEF method in Neural Network (NN) modeling of the El Niño-Southern Oscillation. Specifically, we forecast the first five principal components (PCs) of sea surface temperature monthly anomaly fields over the tropical Pacific, at different lead times (from 3 to 15 months, with a three-month increment) for the period 1979–2017. We apply the EEF method in a multiple-linear regression (MLR) model and two NN models, one using Bayesian regularization and one using the Levenberg-Marquardt algorithm for training, and evaluate their performance and computational efficiency relative to the same models with conventional bagging. All models perform equally well at lead times of 3 and 6 months, while at higher lead times, the MLR model’s skill deteriorates faster than that of the nonlinear models. The neural network models with both bagging methods produce equally successful forecasts with the same computational efficiency. It remains to be shown whether this finding is sensitive to the dataset size. Full article
(This article belongs to the Special Issue Entropy Applications in Environmental and Water Engineering)
Open AccessArticle Deconstructing Cross-Entropy for Probabilistic Binary Classifiers
Entropy 2018, 20(3), 208; doi:10.3390/e20030208
Received: 22 February 2018 / Revised: 16 March 2018 / Accepted: 18 March 2018 / Published: 20 March 2018
PDF Full-text (1789 KB) | HTML Full-text | XML Full-text
Abstract
In this work, we analyze the cross-entropy function, widely used in classifiers both as a performance measure and as an optimization objective. We contextualize cross-entropy in the light of Bayesian decision theory, the formal probabilistic framework for making decisions, and we thoroughly analyze
[...] Read more.
In this work, we analyze the cross-entropy function, widely used in classifiers both as a performance measure and as an optimization objective. We contextualize cross-entropy in the light of Bayesian decision theory, the formal probabilistic framework for making decisions, and we thoroughly analyze its motivation, meaning and interpretation from an information-theoretical point of view. In this sense, this article presents several contributions: First, we explicitly analyze the contribution to cross-entropy of (i) prior knowledge; and (ii) the value of the features in the form of a likelihood ratio. Second, we introduce a decomposition of cross-entropy into two components: discrimination and calibration. This decomposition enables the measurement of different performance aspects of a classifier in a more precise way; and justifies previously reported strategies to obtain reliable probabilities by means of the calibration of the output of a discriminating classifier. Third, we give different information-theoretical interpretations of cross-entropy, which can be useful in different application scenarios, and which are related to the concept of reference probabilities. Fourth, we present an analysis tool, the Empirical Cross-Entropy (ECE) plot, a compact representation of cross-entropy and its aforementioned decomposition. We show the power of ECE plots, as compared to other classical performance representations, in two diverse experimental examples: a speaker verification system, and a forensic case where some glass findings are present. Full article
(This article belongs to the Special Issue Entropy-based Data Mining)
Open AccessArticle Prior and Posterior Linear Pooling for Combining Expert Opinions: Uses and Impact on Bayesian Networks—The Case of the Wayfinding Model
Entropy 2018, 20(3), 209; doi:10.3390/e20030209
Received: 15 December 2017 / Revised: 17 March 2018 / Accepted: 18 March 2018 / Published: 20 March 2018
PDF Full-text (1924 KB) | HTML Full-text | XML Full-text
Abstract
The use of expert knowledge to quantify a Bayesian Network (BN) is necessary when data is not available. This however raises questions regarding how opinions from multiple experts can be used in a BN. Linear pooling is a popular method for combining probability
[...] Read more.
The use of expert knowledge to quantify a Bayesian Network (BN) is necessary when data is not available. This however raises questions regarding how opinions from multiple experts can be used in a BN. Linear pooling is a popular method for combining probability assessments from multiple experts. In particular, Prior Linear Pooling (PrLP), which pools opinions and then places them into the BN, is a common method. This paper considers this approach and an alternative pooling method, Posterior Linear Pooling (PoLP). The PoLP method constructs a BN for each expert, and then pools the resulting probabilities at the nodes of interest. The advantages and disadvantages of these two methods are identified and compared, and the methods are applied to an existing BN, the Wayfinding Bayesian Network Model, to investigate the behavior of different groups of people and how these different methods may be able to capture such differences. The paper focusses on six nodes (Human Factors, Environmental Factors, Wayfinding, Communication, Visual Elements of Communication, and Navigation Pathway) and three subgroups (Gender: Female, Male; Travel Experience: Experienced, Inexperienced; Travel Purpose: Business, Personal), and finds that different behaviors can indeed be captured by the different methods. Full article
(This article belongs to the Special Issue Foundations of Statistics)
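As a rough sketch of how the two pooling orders differ, the toy example below pools the judgements of two hypothetical experts over a minimal two-node network A → B. It is not the Wayfinding model; all probabilities, weights, and names are invented for illustration.

```python
import numpy as np

# Two hypothetical experts quantify a tiny two-node network A -> B.
# Each expert supplies P(A=true) and P(B=true | A) for both states of A.
experts = [
    {"p_a": 0.6, "p_b_given_a": {True: 0.8, False: 0.3}},
    {"p_a": 0.4, "p_b_given_a": {True: 0.7, False: 0.5}},
]
weights = [0.5, 0.5]   # equal credibility assumed for both experts

def marginal_b(p_a, p_b_given_a):
    """Marginalise the parent: P(B) = sum_a P(B|a) P(a)."""
    return p_b_given_a[True] * p_a + p_b_given_a[False] * (1.0 - p_a)

# Prior linear pooling (PrLP): pool the expert-elicited tables first,
# then perform inference in the single pooled network.
pooled_p_a = sum(w * e["p_a"] for w, e in zip(weights, experts))
pooled_cpt = {s: sum(w * e["p_b_given_a"][s] for w, e in zip(weights, experts))
              for s in (True, False)}
p_b_prlp = marginal_b(pooled_p_a, pooled_cpt)

# Posterior linear pooling (PoLP): run inference in each expert's network,
# then pool the resulting probabilities at the node of interest.
p_b_polp = sum(w * marginal_b(e["p_a"], e["p_b_given_a"])
               for w, e in zip(weights, experts))

print(f"P(B) with PrLP: {p_b_prlp:.3f}, with PoLP: {p_b_polp:.3f}")
```

Even in this tiny example the two orders give different marginals for B (0.575 vs. 0.590 with these numbers), which is the kind of divergence the paper examines on the Wayfinding nodes.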

Open AccessArticle Amplitude- and Fluctuation-Based Dispersion Entropy
Entropy 2018, 20(3), 210; doi:10.3390/e20030210
Received: 2 November 2017 / Revised: 5 February 2018 / Accepted: 13 March 2018 / Published: 20 March 2018
PDF Full-text (956 KB) | HTML Full-text | XML Full-text
Abstract
Dispersion entropy (DispEn) is a recently introduced entropy metric to quantify the uncertainty of time series. It is fast and, so far, it has demonstrated very good performance in the characterisation of time series. It includes a mapping step, but the effect of
[...] Read more.
Dispersion entropy (DispEn) is a recently introduced entropy metric for quantifying the uncertainty of time series. It is fast and, so far, has demonstrated very good performance in the characterisation of time series. It includes a mapping step, but the effect of different mappings has not yet been studied. Here, we investigate the effect of linear and nonlinear mapping approaches in DispEn. We also inspect the sensitivity of different parameters of DispEn to noise. Moreover, we develop fluctuation-based DispEn (FDispEn) as a measure that deals only with the fluctuations of time series. Furthermore, the original and fluctuation-based forbidden dispersion patterns are introduced to discriminate deterministic from stochastic time series. Finally, we compare the performance of DispEn, FDispEn, permutation entropy, sample entropy, and Lempel–Ziv complexity on two physiological datasets. The results show that DispEn is the most consistent technique for distinguishing the various dynamics of the biomedical signals. Due to their advantages over existing entropy methods, DispEn and FDispEn are expected to be broadly used for the characterization of a wide variety of real-world time series. The MATLAB codes used in this paper are freely available at http://dx.doi.org/10.7488/ds/2326. Full article
(This article belongs to the Special Issue Entropy in Signal Analysis)
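The authors' released MATLAB code is linked at the end of the abstract. Purely as a hedged, minimal sketch of the basic DispEn recipe (normal-CDF mapping, rounding to classes, embedding, and Shannon entropy of the dispersion-pattern histogram), one might write something like the following; the parameter defaults and variable names are illustrative, not the authors' implementation.

```python
import numpy as np
from math import erf
from collections import Counter

def dispersion_entropy(x, n_classes=6, embed_dim=3, delay=1):
    """Minimal dispersion entropy: NCDF mapping, embedding, pattern counting."""
    x = np.asarray(x, dtype=float)
    # 1. Map the signal into (0, 1) with the normal CDF, then to integer classes 1..c
    mu, sigma = x.mean(), x.std()
    y = 0.5 * (1.0 + np.vectorize(erf)((x - mu) / (sigma * np.sqrt(2.0))))
    z = np.clip(np.round(n_classes * y + 0.5).astype(int), 1, n_classes)
    # 2. Build embedding vectors and count how often each dispersion pattern occurs
    n_vec = len(z) - (embed_dim - 1) * delay
    patterns = Counter(tuple(z[i + j * delay] for j in range(embed_dim))
                       for i in range(n_vec))
    # 3. Shannon entropy of the pattern distribution, normalised by log(c^m)
    p = np.array(list(patterns.values()), dtype=float) / n_vec
    return -np.sum(p * np.log(p)) / np.log(n_classes ** embed_dim)

rng = np.random.default_rng(0)
print(dispersion_entropy(rng.standard_normal(2000)))   # near its maximum for white noise
```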

Open AccessArticle Some Inequalities Combining Rough and Random Information
Entropy 2018, 20(3), 211; doi:10.3390/e20030211
Received: 1 February 2018 / Revised: 18 March 2018 / Accepted: 18 March 2018 / Published: 20 March 2018
PDF Full-text (744 KB) | HTML Full-text | XML Full-text
Abstract
Rough random theory, generally applied to statistics, decision-making, and so on, is an extension of rough set theory and probability theory, in which a rough random variable is described as a random variable taking “rough variable” values. In order to extend and enrich
[...] Read more.
Rough random theory, generally applied in statistics, decision-making, and related areas, is an extension of rough set theory and probability theory in which a rough random variable is described as a random variable taking “rough variable” values. In order to extend and enrich the research area of rough random theory, in this paper the well-known probabilistic inequalities (the Markov, Chebyshev, Hölder, Minkowski, and Jensen inequalities) are proven for rough random variables, which gives firm theoretical support to the further development of rough random theory. In addition, considering that critical values are a vital tool in engineering, science, and other application fields, some significant properties of the critical values of rough random variables, namely their continuity and monotonicity, are investigated in depth to provide a novel analytical approach for dealing with rough random optimization problems. Full article
(This article belongs to the Special Issue Entropy and Information Inequalities)
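The theorems in the paper concern rough random variables, whose expected-value machinery is not reproduced here. Purely as a reference point, the sketch below checks the classical probabilistic counterparts of three of the listed inequalities by Monte Carlo simulation on an ordinary random sample; the distribution and constants are arbitrary choices for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.exponential(scale=2.0, size=100_000)   # a non-negative random sample

# Markov: P(X >= a) <= E[X] / a for non-negative X and a > 0
a = 5.0
print(np.mean(x >= a), "<=", x.mean() / a)

# Chebyshev: P(|X - E[X]| >= k * sigma) <= 1 / k**2
k = 2.0
print(np.mean(np.abs(x - x.mean()) >= k * x.std()), "<=", 1 / k**2)

# Jensen: f(E[X]) <= E[f(X)] for convex f, here f(t) = t**2
print(x.mean() ** 2, "<=", np.mean(x ** 2))
```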

Other

Jump to: Research

Open AccessDiscussion On the Possibility of Calculating Entropy, Free Energy, and Enthalpy of Vitreous Substances
Entropy 2018, 20(3), 187; doi:10.3390/e20030187
Received: 26 January 2018 / Revised: 23 February 2018 / Accepted: 8 March 2018 / Published: 11 March 2018
Cited by 1 | PDF Full-text (557 KB) | HTML Full-text | XML Full-text
Abstract
A critical analysis of the arguments in support of, and against, the traditional approach to the thermodynamics of the vitreous state is provided. In this approach, one presumes a continuous variation of the entropy in the glass-liquid transition temperature range, or a
[...] Read more.
A critical analysis of the arguments in support of, and against, the traditional approach to the thermodynamics of the vitreous state is provided. In this approach, one presumes a continuous variation of the entropy in the glass-liquid transition temperature range (a “continuous entropy approach” towards 0 K), which produces a positive value of the entropy at T → 0 K. I find that the arguments given against this traditional approach rest on a different understanding of the thermodynamics of the glass transition on cooling a liquid, since they suggest a discontinuity, or “entropy loss approach”, in the variation of entropy in the glass-liquid transition range. That view is based on: (1) an unjustifiable use of classical Boltzmann statistics for interpreting the value of entropy at absolute zero; (2) the rejection of the thermodynamic analysis of systems with broken ergodicity, even though the possibility of such an analysis was already proposed by Gibbs; (3) the possibility of a finite change in the entropy of a system without absorption or release of heat; and (4) the description of the thermodynamic properties of glasses in terms of functions instead of functionals. The functional description is necessary because, for glasses, the entropy and enthalpy are not functions of the state but functionals, in the sense of Gibbs’ classification. Full article
(This article belongs to the Special Issue Residual Entropy and Nonequilibrium States)
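As a loose, schematic illustration of how the traditional “continuous entropy” bookkeeping yields a positive entropy at T → 0 K, the sketch below integrates Cp/T along a crystal path and along a liquid/glass cooling path and takes the difference. Every heat-capacity value, temperature, and heat of fusion in it is invented for illustration and merely stands in for calorimetric data.

```python
import numpy as np

# Hypothetical heat-capacity curves (J mol^-1 K^-1) on a common temperature grid.
# These numbers are invented; real values would come from calorimetry.
T = np.linspace(5.0, 290.0, 600)       # K; 290 K plays the role of the melting point
Tm, dH_fus = 290.0, 18_000.0           # melting point and heat of fusion (J mol^-1)
Tg = 190.0                             # glass-transition temperature
cp_crystal = 3.0 + 0.25 * T            # crystal Cp, rising smoothly with T
cp_glass_path = np.where(T < Tg,       # glass below Tg, supercooled liquid above
                         3.5 + 0.25 * T,
                         3.5 + 0.25 * T + 30.0)

def entropy_change(cp, T):
    """Entropy change along a quasi-static path: integral of Cp/T dT (trapezoid rule)."""
    return np.trapz(cp / T, T)

# Third-law bookkeeping: heat the crystal from ~0 K to Tm, melt it there,
# then compare with the entropy released on cooling the liquid/glass back down.
s_liquid_at_Tm = entropy_change(cp_crystal, T) + dH_fus / Tm
s_lost_on_cooling = entropy_change(cp_glass_path, T)
residual_entropy = s_liquid_at_Tm - s_lost_on_cooling
print(f"estimated residual entropy at ~0 K: {residual_entropy:.1f} J mol^-1 K^-1")
```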
