Table of Contents

Entropy, Volume 19, Issue 11 (November 2017)

  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive the tables of contents of newly released issues.
  • PDF is the official format for papers, which are published in both HTML and PDF forms. To view a paper in PDF format, click on the "PDF Full-text" link and open it with the free Adobe Reader.
Displaying articles 1-58

Research


Open AccessArticle Real-Time Robust Voice Activity Detection Using the Upper Envelope Weighted Entropy Measure and the Dual-Rate Adaptive Nonlinear Filter
Entropy 2017, 19(11), 487; doi:10.3390/e19110487
Received: 19 June 2017 / Revised: 1 September 2017 / Accepted: 8 September 2017 / Published: 28 October 2017
PDF Full-text (2277 KB) | HTML Full-text | XML Full-text
Abstract
Voice activity detection (VAD) is a vital process in voice communication systems to avoid unnecessary coding and transmission of noise. Most of the existing VAD algorithms continue to suffer high false alarm rates and low sensitivity when the signal-to-noise ratio (SNR) is low, at 0 dB and below. Others are developed to operate in offline mode or are impractical for implementation in actual devices due to high computational complexity. This paper proposes the upper envelope weighted entropy (UEWE) measure as a means to enable high separation of speech and non-speech segments in voice communication. The asymmetric nonlinear filter (ANF) is employed in UEWE to extract the adaptive weight factor that is subsequently used to compensate the noise effect. In addition, this paper also introduces a dual-rate adaptive nonlinear filter (DANF) with high adaptivity to rapid time-varying noise for computation of the decision threshold. Performance comparison with standard and recent VADs shows that the proposed algorithm is superior especially in real-time practical applications. Full article
(This article belongs to the Special Issue Entropy in Signal Analysis)
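The UEWE and DANF constructions are specific to the paper; purely as an illustration of the underlying idea of scoring frames with an energy-weighted spectral entropy and thresholding the score, a minimal sketch follows (the Hann window, the energy weight and the fixed threshold are assumptions of this sketch, not the paper's definitions):

```python
import numpy as np

def weighted_spectral_entropy(frame, eps=1e-12):
    """Energy-weighted spectral entropy of one signal frame.

    Illustrative only: the paper's UEWE uses an upper-envelope weight obtained
    from an asymmetric nonlinear filter, which is not reproduced here.
    """
    spectrum = np.abs(np.fft.rfft(frame * np.hanning(len(frame)))) ** 2
    p = spectrum / (spectrum.sum() + eps)          # normalized spectral distribution
    entropy = -np.sum(p * np.log(p + eps))         # spectral entropy (flat noise -> high)
    weight = spectrum.sum()                        # assumed weight: frame energy
    return weight * (np.log(len(p)) - entropy)     # large for voiced frames, small for noise

def simple_vad(signal, fs, frame_ms=20, threshold=None):
    """Frame-level speech/non-speech decision from the weighted entropy score."""
    n = int(fs * frame_ms / 1000)
    frames = [signal[i:i + n] for i in range(0, len(signal) - n, n)]
    scores = np.array([weighted_spectral_entropy(f) for f in frames])
    if threshold is None:          # crude fixed threshold; the paper instead adapts
        threshold = scores.mean()  # the threshold with the dual-rate filter (DANF)
    return scores > threshold
```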

Open AccessArticle Gravitational Contribution to the Heat Flux in a Simple Dilute Fluid: An Approach Based on General Relativistic Kinetic Theory to First Order in the Gradients
Entropy 2017, 19(11), 537; doi:10.3390/e19110537
Received: 11 August 2017 / Revised: 27 September 2017 / Accepted: 9 October 2017 / Published: 28 October 2017
PDF Full-text (243 KB) | HTML Full-text | XML Full-text
Abstract
Richard C. Tolman analyzed the relation between a temperature gradient and a gravitational field in an equilibrium situation. In 2012, Tolman’s law was generalized to a non-equilibrium situation for a simple dilute relativistic fluid. The result in that scenario, obtained by introducing the gravitational force through the molecular acceleration, couples the heat flux with the metric coefficients and the gradients of the state variables. In the present paper it is shown, by explicitly describing the single particle orbits as geodesics in Boltzmann’s equation, that a gravitational field drives a heat flux in this type of system. The calculation is devoted solely to the gravitational field contribution to this heat flux in which a Newtonian limit to the Schwarzschild metric is assumed. The corresponding transport coefficient, which is obtained within a relaxation approximation, corresponds to the dilute fluid in a weak gravitational field. The effect is negligible in the non-relativistic regime, as evidenced by the direct evaluation of the corresponding limit. Full article
(This article belongs to the Special Issue Advances in Relativistic Statistical Mechanics)
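For reference, Tolman's equilibrium result that the paper generalizes can be stated as follows (standard textbook form, not quoted from the paper):

```latex
T(\mathbf{x})\,\sqrt{g_{00}(\mathbf{x})} = \mathrm{const}
\quad\Longrightarrow\quad
\nabla T \simeq -\frac{T}{c^{2}}\,\nabla\phi \quad \text{(weak field)},
```

where φ is the Newtonian potential; in equilibrium a temperature gradient can thus coexist with a gravitational field.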
Open AccessFeature PaperArticle Partial and Entropic Information Decompositions of a Neuronal Modulatory Interaction
Entropy 2017, 19(11), 560; doi:10.3390/e19110560
Received: 30 June 2017 / Revised: 27 September 2017 / Accepted: 23 October 2017 / Published: 26 October 2017
PDF Full-text (5368 KB) | HTML Full-text | XML Full-text
Abstract
Information processing within neural systems often depends upon selective amplification of relevant signals and suppression of irrelevant signals. This has been shown many times by studies of contextual effects but there is as yet no consensus on how to interpret such studies. Some researchers interpret the effects of context as contributing to the selective receptive field (RF) input about which neurons transmit information. Others interpret context effects as affecting transmission of information about RF input without becoming part of the RF information transmitted. Here we use partial information decomposition (PID) and entropic information decomposition (EID) to study the properties of a form of modulation previously used in neurobiologically plausible neural nets. PID shows that this form of modulation can affect transmission of information in the RF input without the binary output transmitting any information unique to the modulator. EID produces similar decompositions, except that information unique to the modulator and the mechanistic shared component can be negative when modulating and modulated signals are correlated. Synergistic and source shared components were never negative in the conditions studied. Thus, both PID and EID show that modulatory inputs to a local processor can affect the transmission of information from other inputs. Contrary to what was previously assumed, this transmission can occur without the modulatory inputs becoming part of the information transmitted, as shown by the use of PID with the model we consider. Decompositions of psychophysical data from a visual contrast detection task with surrounding context suggest that a similar form of modulation may also occur in real neural systems. Full article
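For readers unfamiliar with PID, the Williams–Beer-style decomposition underlying the analysis splits the joint mutual information that the receptive-field input R and the modulator M carry about the output Y into four components (standard form; as noted above, EID allows some of these to become negative):

```latex
I(Y; R, M) \;=\; \mathrm{Unq}(Y; R \setminus M) \;+\; \mathrm{Unq}(Y; M \setminus R)
\;+\; \mathrm{Shd}(Y; R, M) \;+\; \mathrm{Syn}(Y; R, M).
```

In these terms, the paper's central observation is that the modulator M can change how I(Y; R, M) is distributed over the shared and synergistic components while the component unique to M stays at (or near) zero.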

Open AccessArticle Information Fusion in a Multi-Source Incomplete Information System Based on Information Entropy
Entropy 2017, 19(11), 570; doi:10.3390/e19110570
Received: 14 August 2017 / Revised: 12 October 2017 / Accepted: 19 October 2017 / Published: 17 November 2017
PDF Full-text (890 KB) | HTML Full-text | XML Full-text
Abstract
As we move into the information age, the amount of data in various fields has increased dramatically and data sources have become increasingly widely distributed. Missing data are correspondingly more common, which leads to incomplete multi-source information systems. In this context, this paper addresses the limitations of rough set theory by studying multi-source fusion in incomplete multi-source systems. A method for fusing incomplete multi-source systems based on information entropy is presented and validated by comparison with an alternative fusion method. Furthermore, extensive experiments on six UCI data sets verify the performance of the proposed method, and the results indicate that the entropy-based multi-source information fusion approach significantly outperforms the other fusion approaches considered. Full article

Open AccessFeature PaperArticle Transport Coefficients from Large Deviation Functions
Entropy 2017, 19(11), 571; doi:10.3390/e19110571
Received: 16 September 2017 / Revised: 18 October 2017 / Accepted: 19 October 2017 / Published: 25 October 2017
PDF Full-text (883 KB) | HTML Full-text | XML Full-text
Abstract
We describe a method for computing transport coefficients from the direct evaluation of large deviation functions. This method is general, relying on only equilibrium fluctuations, and is statistically efficient, employing trajectory based importance sampling. Equilibrium fluctuations of molecular currents are characterized by their large deviation functions, which are scaled cumulant generating functions analogous to the free energies. A diffusion Monte Carlo algorithm is used to evaluate the large deviation functions, from which arbitrary transport coefficients are derivable. We find significant statistical improvement over traditional Green–Kubo based calculations. The systematic and statistical errors of this method are analyzed in the context of specific transport coefficient calculations, including the shear viscosity, interfacial friction coefficient, and thermal conductivity. Full article
(This article belongs to the Special Issue Understanding Molecular Dynamics via Stochastic Processes)
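The traditional Green–Kubo route that the method is compared against writes a transport coefficient as the time integral of an equilibrium current autocorrelation function; for the shear viscosity, for example (standard form, not taken from the paper):

```latex
\eta \;=\; \frac{V}{k_{\mathrm{B}} T} \int_{0}^{\infty}
\bigl\langle \sigma_{xy}(0)\, \sigma_{xy}(t) \bigr\rangle \, \mathrm{d}t ,
```

where σ_xy is an off-diagonal component of the pressure tensor and V the system volume.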

Open AccessArticle A Behavioural Analysis of Complexity in Socio-Technical Systems under Tension Modelled by Petri Nets
Entropy 2017, 19(11), 572; doi:10.3390/e19110572
Received: 14 September 2017 / Revised: 19 October 2017 / Accepted: 20 October 2017 / Published: 25 October 2017
PDF Full-text (1828 KB) | HTML Full-text | XML Full-text
Abstract
Complexity analysis of dynamic systems provides a better understanding of the internal behaviours that are associated with tension and efficiency, which in socio-technical systems may lead to innovation. One popular approach to the assessment of complexity is based on self-similarity. The dynamic component of a dynamic system represents the relationships and interactions among its inner elements (and its surroundings) and fully describes the system's behaviour. The approach used in this work addresses complexity analysis in terms of system behaviour, i.e., the so-called behavioural analysis of complexity. The self-similarity of a system (structural or behavioural) can be determined, for example, using fractal geometry, whose toolbox provides a number of methods for measuring the so-called fractal dimension. Other instruments for measuring the self-similarity of a system include the Hurst exponent and the framework of complex system theory in general. The approach introduced in this work defines the complexity analysis of a socio-technical system under tension. The proposed procedure consists of modelling the key dynamic components of a discrete event dynamic system by any definition of Petri nets. From the stationary probabilities, one can then decide whether the system is self-similar using the abovementioned tools. In addition, the proposed approach allows for finding the critical values (phase transitions) of the analysed systems. Full article
(This article belongs to the Special Issue Entropy and Its Applications across Disciplines)

Open AccessCommunication A New Definition of t-Entropy for Transfer Operators
Entropy 2017, 19(11), 573; doi:10.3390/e19110573
Received: 7 September 2017 / Revised: 28 September 2017 / Accepted: 22 October 2017 / Published: 25 October 2017
PDF Full-text (211 KB) | HTML Full-text | XML Full-text
Abstract
This article presents a new definition of t-entropy that makes it more explicit and simplifies the process of its calculation. Full article
Open AccessArticle Forewarning Model of Regional Water Resources Carrying Capacity Based on Combination Weights and Entropy Principles
Entropy 2017, 19(11), 574; doi:10.3390/e19110574
Received: 5 September 2017 / Revised: 7 October 2017 / Accepted: 19 October 2017 / Published: 25 October 2017
PDF Full-text (1867 KB) | HTML Full-text | XML Full-text
Abstract
As a new development form for evaluating the regional water resources carrying capacity, forewarning regional water resources of their carrying capacities is an important adjustment and control measure for regional water security management. Up to now, most research on this issue has been qualitative, with a lack of quantitative analysis. For this reason, an index system and grade standards for forewarning regional water resources of their carrying capacities have been established for Anhui Province, China, in this paper. Subjective weights of the forewarning indices are calculated using a fuzzy analytic hierarchy process based on an accelerating genetic algorithm, while objective weights are calculated using a projection pursuit method, also based on an accelerating genetic algorithm. These two kinds of weights are combined into combination weights of the forewarning indices using the minimum relative information entropy principle. Furthermore, a forewarning model of regional water resources carrying capacity, based on entropy combination weights, is put forward. The model fully integrates subjective and objective information in the forewarning process. The results show that the calculation results of the model are reasonable and that the method has high adaptability. Therefore, this model is worth studying and popularizing. Full article
(This article belongs to the Special Issue Entropy Applications in Environmental and Water Engineering)
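A minimal sketch of the final weighting step, assuming the usual closed-form solution of the minimum relative information entropy combination; the subjective weights u (from the fuzzy AHP) and objective weights v (from projection pursuit) are taken as given, and the example values are hypothetical:

```python
import numpy as np

def combine_weights(u, v):
    """Combine subjective (u) and objective (v) index weights.

    Minimizing sum_j w_j*ln(w_j/u_j) + sum_j w_j*ln(w_j/v_j) subject to
    sum_j w_j = 1 gives the geometric-mean form below, assumed here to be
    the combination rule the minimum relative entropy principle refers to.
    """
    u, v = np.asarray(u, float), np.asarray(v, float)
    w = np.sqrt(u * v)
    return w / w.sum()

# Example with hypothetical weights for five forewarning indices
subjective = [0.30, 0.25, 0.20, 0.15, 0.10]   # e.g., from the fuzzy AHP
objective  = [0.18, 0.22, 0.28, 0.12, 0.20]   # e.g., from projection pursuit
print(combine_weights(subjective, objective))
```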

Open AccessArticle The Isolated Electron: De Broglie’s Hidden Thermodynamics, SU(2) Quantum Yang-Mills Theory, and a Strongly Perturbed BPS Monopole
Entropy 2017, 19(11), 575; doi:10.3390/e19110575
Received: 7 September 2017 / Revised: 17 October 2017 / Accepted: 20 October 2017 / Published: 26 October 2017
PDF Full-text (322 KB) | HTML Full-text | XML Full-text
Abstract
Based on a recent numerical simulation of the temporal evolution of a spherically perturbed BPS monopole, SU(2) Yang-Mills thermodynamics, Louis de Broglie’s deliberations on the disparate Lorentz transformations of the frequency of an internal “clock” on one hand and the associated quantum energy on the other hand, and postulating that the electron is represented by a figure-eight shaped, self-intersecting center vortex loop in SU(2) Quantum Yang-Mills theory, we estimate the spatial radius R_0 of this self-intersection region in terms of the electron’s Compton wave length λ_C. This region, which is immersed into the confining phase, constitutes a blob of deconfining phase of temperature T_0 mildly above the critical temperature T_c, carrying a frequently perturbed BPS monopole (with a magnetic-electric dual interpretation of its charge w.r.t. U(1)⊂SU(2)). We also establish a quantitative relation between the rest mass m_0 of the electron and the SU(2) Yang-Mills scale Λ, which in turn is defined via T_c. Surprisingly, R_0 turns out to be comparable to the Bohr radius, while the core size of the monopole matches λ_C, and the correction to the mass of the electron due to Coulomb energy is about 2%. Full article
(This article belongs to the Special Issue Quantum Thermodynamics)

Open AccessArticle Feynman’s Ratchet and Pawl with Ecological Criterion: Optimal Performance versus Estimation with Prior Information
Entropy 2017, 19(11), 576; doi:10.3390/e19110576
Received: 22 September 2017 / Revised: 18 October 2017 / Accepted: 23 October 2017 / Published: 26 October 2017
PDF Full-text (833 KB) | HTML Full-text | XML Full-text
Abstract
We study the optimal performance of Feynman’s ratchet and pawl, a paradigmatic model in nonequilibrium physics, using ecological criterion as the objective function. The analysis is performed by two different methods: (i) a two-parameter optimization over internal energy scales; and (ii) a one-parameter optimization of the estimate for the objective function, after averaging over the prior probability distribution (Jeffreys’ prior) for one of the uncertain internal energy scales. We study the model for both engine and refrigerator modes. We derive expressions for the efficiency/coefficient of performance (COP) at maximum ecological function. These expressions from the two methods are found to agree closely with equilibrium situations. Furthermore, the expressions obtained by the second method (with estimation) agree with the expressions obtained in finite-time thermodynamic models. Full article
(This article belongs to the Special Issue Selected Papers from 14th Joint European Thermodynamics Conference)
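The ecological criterion is commonly defined as follows (Angulo-Brown's form; the paper may use an equivalent expression), with an analogous definition in refrigerator mode:

```latex
\dot{E} \;=\; P \;-\; T_{c}\,\dot{\sigma},
```

where P is the power output, σ̇ the total entropy production rate and T_c the cold-reservoir temperature.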

Open AccessArticle Stability and Complexity Analysis of a Dual-Channel Closed-Loop Supply Chain with Delayed Decision under Government Intervention
Entropy 2017, 19(11), 577; doi:10.3390/e19110577
Received: 21 September 2017 / Revised: 17 October 2017 / Accepted: 21 October 2017 / Published: 26 October 2017
PDF Full-text (4052 KB) | HTML Full-text | XML Full-text
Abstract
This paper constructs a continuous dual-channel closed-loop supply chain (DCLSC) model with delayed decision under government intervention. The existence conditions for local stability of the equilibrium point are discussed. We analyze the influence of the delay parameters, the adjustment speed of the wholesale price, the recovery rate of waste products, the direct price, the carbon quota subsidy, and the carbon tax on the stability and complexity of the model by means of bifurcation diagrams, entropy diagrams, attractors, and time series diagrams. In addition, the delay feedback control method is adopted to effectively control the unstable or chaotic system. The main conclusions of this paper show that the variables mentioned above must stay within a reasonable range; otherwise, the model loses stability or enters chaos. The government can effectively adjust manufacturers' profits through the carbon tax and the carbon quota subsidy, and thereby encourage manufacturers to reduce carbon emissions and increase the remanufacturing of waste products. Full article
(This article belongs to the Section Complexity)

Open AccessArticle A Kernel-Based Intuitionistic Fuzzy C-Means Clustering Using a DNA Genetic Algorithm for Magnetic Resonance Image Segmentation
Entropy 2017, 19(11), 578; doi:10.3390/e19110578
Received: 3 July 2017 / Revised: 17 October 2017 / Accepted: 24 October 2017 / Published: 27 October 2017
PDF Full-text (3934 KB) | HTML Full-text | XML Full-text
Abstract
MRI segmentation is critically important for clinical study and diagnosis. Existing methods based on soft clustering have several drawbacks, including low accuracy in the presence of image noise and artifacts, and high computational cost. In this paper, we introduce a new formulation of the MRI segmentation problem as a kernel-based intuitionistic fuzzy C-means (KIFCM) clustering problem and propose a new DNA-based genetic algorithm to obtain the optimal KIFCM clustering. While this algorithm searches the solution space for the optimal model parameters, it also obtains the optimal clustering, and therefore the optimal MRI segmentation. We perform an empirical study by comparing our method with six state-of-the-art soft clustering methods using a set of UCI (University of California, Irvine) datasets and a set of synthetic and clinical MRI datasets. The preliminary results show that our method outperforms the other methods in both clustering metrics and computational efficiency. Full article
(This article belongs to the Section Information Theory)
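A minimal sketch of the kernelized fuzzy C-means core with a Gaussian kernel (standard KFCM updates); the intuitionistic extension and the DNA-based genetic algorithm that the paper adds are not reproduced here, and all parameter values are placeholders:

```python
import numpy as np

def gaussian_kernel(X, V, sigma):
    d2 = ((X[:, None, :] - V[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma ** 2))        # shape (n_samples, n_clusters)

def kfcm(X, c, m=2.0, sigma=1.0, iters=100, tol=1e-5, seed=0):
    """Plain kernel fuzzy C-means (not the intuitionistic/GA variant of the paper)."""
    rng = np.random.default_rng(seed)
    V = X[rng.choice(len(X), c, replace=False)]  # initial cluster prototypes
    for _ in range(iters):
        K = gaussian_kernel(X, V, sigma)
        dist = np.maximum(1.0 - K, 1e-12)        # kernel-induced distance (up to a factor 2)
        U = dist ** (-1.0 / (m - 1.0))
        U /= U.sum(axis=1, keepdims=True)        # fuzzy memberships, rows sum to 1
        W = (U ** m) * K
        V_new = W.T @ X / W.sum(axis=0)[:, None] # prototype update in input space
        if np.abs(V_new - V).max() < tol:
            V = V_new
            break
        V = V_new
    return U, V

# Usage on a toy 2-D dataset; MRI use would cluster voxel intensities/features instead
X = np.vstack([np.random.randn(50, 2), np.random.randn(50, 2) + 4])
U, V = kfcm(X, c=2)
labels = U.argmax(axis=1)
```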

Open AccessArticle Thermodynamic Modelling of Supersonic Gas Ejector with Droplets
Entropy 2017, 19(11), 579; doi:10.3390/e19110579
Received: 21 September 2017 / Revised: 12 October 2017 / Accepted: 24 October 2017 / Published: 30 October 2017
PDF Full-text (1262 KB) | HTML Full-text | XML Full-text
Abstract
This study presents a thermodynamic model for determining the entrainment ratio and double choke limiting pressure of supersonic ejectors within the context of heat-driven refrigeration cycles, with and without droplet injection at the constant area section of the device. Input data include the inlet operating conditions and key geometry parameters (primary throat, mixing section and diffuser outlet diameter), whereas output information includes the ejector entrainment ratio, maximum double choke compression ratio, ejector efficiency, exergy efficiency and exergy destruction index. In single-phase operation, the ejector entrainment ratio and double choke limiting pressure are determined with a mean accuracy of 18% and 2.5%, respectively. In two-phase operation, the choked mass flow rate across convergent-divergent nozzles is estimated with a deviation of 10%. An analysis of the effect of droplet injection confirms the hypothesis that droplet injection reduces by 8% the pressure and Mach number jumps associated with the shock waves occurring at the end of the constant area section. Nonetheless, other factors such as the mixing of the droplets with the main flow are introduced, resulting in an overall reduction of 11% in the ejector efficiency and of 15% in the exergy efficiency. Full article
(This article belongs to the Special Issue Phenomenological Thermodynamics of Irreversible Processes)

Open AccessArticle Thermodynamic Analysis for Buoyancy-Induced Couple Stress Nanofluid Flow with Constant Heat Flux
Entropy 2017, 19(11), 580; doi:10.3390/e19110580
Received: 20 September 2017 / Revised: 14 October 2017 / Accepted: 17 October 2017 / Published: 29 October 2017
PDF Full-text (2603 KB) | HTML Full-text | XML Full-text
Abstract
This paper addresses entropy generation in the flow of an electrically-conducting couple stress nanofluid through a vertical porous channel subjected to constant heat flux. By using the Buongiorno model, equations for momentum, energy, and nanofluid concentration are modelled, solved using homotopy analysis and furthermore, solved numerically. The variations of significant fluid parameters with respect to fluid velocity, temperature, nanofluid concentration, entropy generation, and irreversibility ratio are investigated, presented graphically, and discussed based on physical laws. Full article
(This article belongs to the Special Issue Entropy in Computational Fluid Dynamics)

Open AccessFeature PaperArticle Entropy Production in Stochastics
Entropy 2017, 19(11), 581; doi:10.3390/e19110581
Received: 14 September 2017 / Revised: 21 October 2017 / Accepted: 23 October 2017 / Published: 30 October 2017
PDF Full-text (5287 KB) | HTML Full-text | XML Full-text
Abstract
While the modern definition of entropy is genuinely probabilistic, in entropy production the classical thermodynamic definition, as in heat transfer, is typically used. Here we explore the concept of entropy production within stochastics and, particularly, two forms of entropy production in logarithmic time, unconditionally (EPLT) or conditionally on the past and present having been observed (CEPLT). We study the theoretical properties of both forms, in general and in application to a broad set of stochastic processes. A main question investigated, related to model identification and fitting from data, is how to estimate the entropy production from a time series. It turns out that there is a link of the EPLT with the climacogram, and of the CEPLT with two additional tools introduced here, namely the differenced climacogram and the climacospectrum. In particular, EPLT and CEPLT are related to slopes of log-log plots of these tools, with the asymptotic slopes at the tails being most important as they justify the emergence of scaling laws of second-order characteristics of stochastic processes. As a real-world application, we use an extraordinary long time series of turbulent velocity and show how a parsimonious stochastic model can be identified and fitted using the tools developed. Full article
(This article belongs to the Special Issue Entropy Applications in Environmental and Water Engineering)
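A minimal sketch of the climacogram mentioned above, i.e., the variance of the time-averaged process as a function of the averaging scale, whose log-log slopes are what the entropy-production tools relate to; the EPLT/CEPLT expressions themselves are not reproduced:

```python
import numpy as np

def climacogram(x, max_scale=None):
    """Variance of the scale-k averaged series, for k = 1..max_scale."""
    x = np.asarray(x, float)
    n = len(x)
    if max_scale is None:
        max_scale = n // 10                       # keep enough blocks per scale
    scales, gamma = [], []
    for k in range(1, max_scale + 1):
        nblocks = n // k
        if nblocks < 10:
            break
        blocks = x[:nblocks * k].reshape(nblocks, k).mean(axis=1)
        scales.append(k)
        gamma.append(blocks.var(ddof=1))          # climacogram value at scale k
    return np.array(scales), np.array(gamma)

# For a Hurst-Kolmogorov process the log-log slope tends to 2H - 2
k, g = climacogram(np.random.randn(10_000))
H_est = 1 + np.polyfit(np.log(k[-20:]), np.log(g[-20:]), 1)[0] / 2
```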

Open AccessArticle Challenging Recently Published Parameter Sets for Entropy Measures in Risk Prediction for End-Stage Renal Disease Patients
Entropy 2017, 19(11), 582; doi:10.3390/e19110582
Received: 29 September 2017 / Revised: 26 October 2017 / Accepted: 27 October 2017 / Published: 31 October 2017
PDF Full-text (808 KB) | HTML Full-text | XML Full-text | Supplementary Files
Abstract
Heart rate variability (HRV) analysis is a non-invasive tool for assessing cardiac health. Entropy measures quantify the chaotic properties of HRV, but they are sensitive to the choice of their required parameters. Previous studies therefore have performed parameter optimization, targeting solely their particular patient cohort. In contrast, this work aimed to challenge entropy measures with recently published parameter sets, without time-consuming optimization, for risk prediction in end-stage renal disease patients. Approximate entropy, sample entropy, fuzzy entropy, fuzzy measure entropy, and corrected approximate entropy were examined. In total, 265 hemodialysis patients from the ISAR (rISk strAtification in end-stage Renal disease) study were analyzed. Throughout a median follow-up time of 43 months, 70 patients died. Fuzzy entropy and corrected approximate entropy (CApEn) provided significant hazard ratios, which remained significant after adjustment for clinical risk factors from literature if an entropy maximizing threshold parameter was chosen. Revealing results were seen in the subgroup of patients with heart disease (HD) when setting the radius to a multiple of the data’s standard deviation (r = 0.2·σ); all entropies, except CApEn, predicted mortality significantly and remained significant after adjustment. Therefore, these two parameter settings seem to reflect different cardiac properties. This work shows the potential of entropy measures for cardiovascular risk stratification in cohorts the parameters were not optimized for, and it provides additional insights into the parameter choice. Full article
(This article belongs to the Special Issue Information Theory Applied to Physiological Signals)
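A minimal sketch of sample entropy on an RR-interval series with the r = 0.2·σ parameterization discussed above (standard Richman–Moorman definition; the fuzzy variants, the corrected ApEn and the entropy-maximizing threshold choice are not shown, and the example series is synthetic):

```python
import numpy as np

def sample_entropy(x, m=2, r_factor=0.2):
    """SampEn(m, r) with tolerance r = r_factor * standard deviation of the series."""
    x = np.asarray(x, float)
    r = r_factor * x.std(ddof=1)

    def count_matches(length):
        # Use the same number of templates (N - m) for both lengths, as in Richman-Moorman
        templates = np.array([x[i:i + length] for i in range(len(x) - m)])
        count = 0
        for i in range(len(templates)):
            dist = np.max(np.abs(templates[i + 1:] - templates[i]), axis=1)
            count += np.sum(dist <= r)            # Chebyshev distance within tolerance
        return count

    B = count_matches(m)                          # matching vector pairs of length m
    A = count_matches(m + 1)                      # matching vector pairs of length m + 1
    return -np.log(A / B) if A > 0 and B > 0 else np.inf

# Hypothetical RR-interval series in milliseconds
rr = 1000 + 50 * np.random.randn(1000)
print(sample_entropy(rr, m=2, r_factor=0.2))
```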

Open AccessArticle Instance Selection for Classifier Performance Estimation in Meta Learning
Entropy 2017, 19(11), 583; doi:10.3390/e19110583
Received: 20 September 2017 / Revised: 22 October 2017 / Accepted: 23 October 2017 / Published: 1 November 2017
PDF Full-text (24385 KB) | HTML Full-text | XML Full-text
Abstract
Building an accurate prediction model is challenging and requires appropriate model selection. This process is very time consuming but can be accelerated with meta-learning, i.e., automatic model recommendation by estimating the performance of given prediction models without training them. Meta-learning utilizes metadata extracted from the dataset to effectively estimate the accuracy of the model in question. To achieve that goal, metadata descriptors must be gathered efficiently and must be informative enough to allow precise estimation of prediction accuracy. In this paper, a new type of metadata descriptor is analyzed. These descriptors are based on the compression level obtained from instance selection methods at the data-preprocessing stage. To verify their suitability, two types of experiments on real-world datasets have been conducted. In the first one, 11 instance selection methods were examined in order to validate the compression-accuracy relation for three classifiers: k-nearest neighbors (kNN), support vector machine (SVM), and random forest. From this analysis, two methods are recommended (instance-based learning type 2 (IB2) and edited nearest neighbor (ENN)), which are then compared with the state-of-the-art metaset descriptors. The obtained results confirm that the two suggested compression-based meta-features help to predict the accuracy of the base model much more accurately than the state-of-the-art solution. Full article
(This article belongs to the Section Complexity)

Open AccessArticle Single-Cell Reprogramming in Mouse Embryo Development through a Critical Transition State
Entropy 2017, 19(11), 584; doi:10.3390/e19110584
Received: 3 July 2017 / Revised: 6 October 2017 / Accepted: 26 October 2017 / Published: 2 November 2017
PDF Full-text (3778 KB) | HTML Full-text | XML Full-text | Supplementary Files
Abstract
Our previous work on the temporal development of the genome-expression profile in single-cell early mouse embryo indicated that reprogramming occurs via a critical transition state, where the critical-regulation pattern of the zygote state disappears. In this report, we unveil the detailed mechanism of how the dynamic interaction of thermodynamic states (critical states) enables the genome system to pass through the critical transition state to achieve genome reprogramming right after the late 2-cell state. Self-organized criticality (SOC) control of overall expression provides a snapshot of self-organization and explains the coexistence of critical states at a certain experimental time point. The time-development of self-organization is dynamically modulated by changes in expression flux between critical states through the cell nucleus milieu, where sequential global perturbations involving activation-inhibition of multiple critical states occur from the middle 2-cell to the 4-cell state. Two cyclic fluxes act as feedback flow and generate critical-state coherent oscillatory dynamics. Dynamic perturbation of these cyclic flows due to vivid activation of the ensemble of low-variance expression (sub-critical state) genes allows the genome system to overcome a transition state during reprogramming. Our findings imply that a universal mechanism of long-term global RNA oscillation underlies autonomous SOC control, and the critical gene ensemble at a critical point (CP) drives genome reprogramming. Identification of the corresponding molecular players will be essential for understanding single-cell reprogramming. Full article
(This article belongs to the Special Issue Entropy and Its Applications across Disciplines)

Open AccessArticle A Refined Composite Multivariate Multiscale Fuzzy Entropy and Laplacian Score-Based Fault Diagnosis Method for Rolling Bearings
Entropy 2017, 19(11), 585; doi:10.3390/e19110585
Received: 15 September 2017 / Revised: 25 October 2017 / Accepted: 27 October 2017 / Published: 2 November 2017
PDF Full-text (2667 KB) | HTML Full-text | XML Full-text
Abstract
The vibration signals of rolling bearings are often nonlinear and non-stationary. Multiscale entropy (MSE) has been widely applied to measure the complexity of nonlinear mechanical vibration signals; however, at present many scholars use only single-channel vibration signals for fault diagnosis. In this paper, multiscale entropy in a multivariate framework, i.e., multivariate multiscale entropy (MMSE), is introduced to machinery fault diagnosis to improve the efficiency of fault identification as much as possible by using multi-channel vibration information. MMSE evaluates the multivariate complexity of synchronous multi-channel data and is an effective method for measuring complexity and mutual nonlinear dynamic relationships, but its statistical stability is poor. Refined composite multivariate multiscale fuzzy entropy (RCMMFE) was developed to overcome the problems of MMSE and was compared with MSE, multiscale fuzzy entropy, MMSE and multivariate multiscale fuzzy entropy on simulation data. Finally, a new fault diagnosis method for rolling bearings was proposed, based on RCMMFE for fault feature extraction and on the Laplacian score and a particle swarm optimization support vector machine (PSO-SVM) for automatic fault mode identification. The proposed method was compared with existing methods on experimental data, and the results indicate its effectiveness and superiority. Full article
(This article belongs to the Section Complexity)

Open AccessArticle Discovering Potential Correlations via Hypercontractivity
Entropy 2017, 19(11), 586; doi:10.3390/e19110586
Received: 11 September 2017 / Revised: 18 October 2017 / Accepted: 30 October 2017 / Published: 2 November 2017
PDF Full-text (2627 KB) | HTML Full-text | XML Full-text
Abstract
Discovering a correlation from one variable to another variable is of fundamental scientific and practical interest. While existing correlation measures are suitable for discovering average correlation, they fail to discover hidden or potential correlations. To bridge this gap, (i) we postulate a set of natural axioms that we expect a measure of potential correlation to satisfy; (ii) we show that the rate of information bottleneck, i.e., the hypercontractivity coefficient, satisfies all the proposed axioms; (iii) we provide a novel estimator to estimate the hypercontractivity coefficient from samples; and (iv) we provide numerical experiments demonstrating that this proposed estimator discovers potential correlations among various indicators of WHO datasets, is robust in discovering gene interactions from gene expression time series data, and is statistically more powerful than the estimators for other correlation measures in binary hypothesis testing of canonical examples of potential correlations. Full article
(This article belongs to the Special Issue Information Theory in Machine Learning and Data Science)
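The hypercontractivity coefficient referred to above, equivalently the rate of the information bottleneck, has the standard variational definition

```latex
s^{*}(X;Y) \;=\; \sup_{\substack{U:\; U - X - Y \\ I(U;X) > 0}} \frac{I(U;Y)}{I(U;X)},
```

where U − X − Y denotes a Markov chain; it lies in [0, 1] and equals 0 if and only if X and Y are independent.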

Open AccessArticle The Application of Dual-Tree Complex Wavelet Transform (DTCWT) Energy Entropy in Misalignment Fault Diagnosis of Doubly-Fed Wind Turbine (DFWT)
Entropy 2017, 19(11), 587; doi:10.3390/e19110587
Received: 20 September 2017 / Revised: 23 October 2017 / Accepted: 1 November 2017 / Published: 2 November 2017
PDF Full-text (3965 KB) | HTML Full-text | XML Full-text
Abstract
Misalignment is one of the common faults for the doubly-fed wind turbine (DFWT), and the normal operation of the unit will be greatly affected under this state. Because it is difficult to obtain a large number of misaligned fault samples of wind turbines in practice, ADAMS and MATLAB are used to simulate the various misalignment conditions of the wind turbine transmission system to obtain the corresponding stator current in this paper. Then, the dual-tree complex wavelet transform is used to decompose and reconstruct the characteristic signal, and the dual-tree complex wavelet energy entropy is obtained from the reconstructed coefficients to form the feature vector of the fault diagnosis. Support vector machine is used as classifier and particle swarm optimization is used to optimize the relevant parameters of support vector machine (SVM) to improve its classification performance. The results show that the method proposed in this paper can effectively and accurately classify the misalignment of the transmission system of the wind turbine and improve the reliability of the fault diagnosis. Full article
(This article belongs to the Special Issue Wavelets, Fractals and Information Theory III)
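A minimal sketch of the energy-entropy feature computed from sub-band signals; the DTCWT decomposition, the ADAMS/MATLAB simulation and the PSO-SVM classifier are outside this sketch, and the sub-band signals in the example are synthetic stand-ins:

```python
import numpy as np

def energy_entropy(subbands, eps=1e-12):
    """Shannon entropy of the relative energies of wavelet sub-band signals.

    `subbands` is a list of 1-D arrays, e.g. signals reconstructed from the
    coefficients of each DTCWT level (assumed to be computed elsewhere).
    """
    energies = np.array([np.sum(np.abs(b) ** 2) for b in subbands])
    p = energies / (energies.sum() + eps)         # relative energy per sub-band
    return float(-np.sum(p * np.log(p + eps)))    # the energy-entropy feature

# Hypothetical 6-level decomposition of a stator-current signal
rng = np.random.default_rng(1)
subbands = [rng.standard_normal(2048) * (0.5 ** level) for level in range(6)]
feature = energy_entropy(subbands)                # one element of the fault feature vector
```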

Open AccessArticle The Mean Field Theories of Magnetism and Turbulence
Entropy 2017, 19(11), 589; doi:10.3390/e19110589
Received: 22 September 2017 / Revised: 20 October 2017 / Accepted: 30 October 2017 / Published: 3 November 2017
PDF Full-text (3373 KB) | HTML Full-text | XML Full-text
Abstract
In the last few decades, a series of experiments have revealed that turbulence is a cooperative and critical phenomenon showing a continuous phase change, with the critical Reynolds number at its onset. However, applications of phase transition models, such as the Mean Field Theory (MFT), the Heisenberg model, the XY model, etc., to turbulence have not been realized so far. Now, in this article, a successful analogy to magnetism is reported, and it is shown that a Mean Field Theory of Turbulence (MFTT) can be built that reveals new results. In analogy to compressibility in fluids and susceptibility in magnetic materials, the vorticibility (the authors propose this new name in analogy to response functions derived and named in other fields) of a turbulent flowing fluid is revealed, which is identical to the relative turbulence intensity. By analogy to magnetism, in a natural manner, the Curie Law of Turbulence was discovered. The MFTT is clearly a theory describing equilibrium flow systems, whereas it has long been known that turbulence is a highly non-equilibrium phenomenon. Nonetheless, as a starting point for the development of thermodynamic models of turbulence, the presented MFTT is very useful for gaining physical insight, just as Kraichnan’s turbulent energy spectra of 2-D and 3-D turbulence are, which were developed with equilibrium Boltzmann-Gibbs thermodynamics and only recently have been generalized and adapted to non-equilibrium and intermittent turbulent flow fields. Full article
(This article belongs to the Special Issue Phenomenological Thermodynamics of Irreversible Processes)

Open AccessArticle Multiscale Sample Entropy of Cardiovascular Signals: Does the Choice between Fixed- or Varying-Tolerance among Scales Influence Its Evaluation and Interpretation?
Entropy 2017, 19(11), 590; doi:10.3390/e19110590
Received: 9 October 2017 / Revised: 30 October 2017 / Accepted: 31 October 2017 / Published: 4 November 2017
PDF Full-text (2071 KB) | HTML Full-text | XML Full-text
Abstract
Multiscale entropy (MSE) quantifies the cardiovascular complexity evaluating Sample Entropy (SampEn) on coarse-grained series at increasing scales τ. Two approaches exist, one using a fixed tolerance r at all scales (MSEFT), the other a varying tolerance r(τ) adjusted following the standard-deviation changes after coarse graining (MSEVT). The aim of this study is to clarify how the choice between MSEFT and MSEVT influences quantification and interpretation of cardiovascular MSE, and whether it affects some signals more than others. To achieve this aim, we considered 2-h long beat-by-beat recordings of inter-beat intervals and of systolic and diastolic blood pressures in male (N = 42) and female (N = 42) healthy volunteers. We compared MSE estimated with fixed and varying tolerances, and evaluated whether the choice between MSEFT and MSEVT estimators influence quantification and interpretation of sex-related differences. We found substantial discrepancies between MSEFT and MSEVT results, related to the degree of correlation among samples and more important for heart rate than for blood pressure; moreover the choice between MSEFT and MSEVT may influence the interpretation of gender differences for MSE of heart rate. We conclude that studies on cardiovascular complexity should carefully choose between fixed- or varying-tolerance estimators, particularly when evaluating MSE of heart rate. Full article
(This article belongs to the Special Issue Information Theory Applied to Physiological Signals)
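A minimal sketch of the two estimators compared above: coarse-grain the series at scale τ and evaluate SampEn either with a tolerance fixed from the original series (the MSEFT choice) or re-computed from each coarse-grained series (the MSEVT choice). The routine sample_entropy(y, m, r) is a placeholder for any standard SampEn implementation:

```python
import numpy as np

def coarse_grain(x, tau):
    """Non-overlapping averages of length tau (standard MSE coarse-graining)."""
    n = len(x) // tau
    return np.asarray(x[:n * tau], float).reshape(n, tau).mean(axis=1)

def mse(x, scales=range(1, 21), m=2, r_factor=0.2, varying_tolerance=False):
    r_fixed = r_factor * np.std(x, ddof=1)        # tolerance from the original series
    curve = []
    for tau in scales:
        y = coarse_grain(x, tau)
        r = r_factor * np.std(y, ddof=1) if varying_tolerance else r_fixed
        curve.append(sample_entropy(y, m, r))     # placeholder SampEn(m, r) routine
    return np.array(curve)

# mse(rr, varying_tolerance=False) -> fixed tolerance at all scales (MSEFT)
# mse(rr, varying_tolerance=True)  -> tolerance adjusted after coarse graining (MSEVT)
```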

Open AccessArticle A Connection Entropy Approach to Water Resources Vulnerability Analysis in a Changing Environment
Entropy 2017, 19(11), 591; doi:10.3390/e19110591
Received: 5 September 2017 / Revised: 26 October 2017 / Accepted: 1 November 2017 / Published: 6 November 2017
PDF Full-text (789 KB) | HTML Full-text | XML Full-text
Abstract
This paper establishes a water resources vulnerability framework based on sensitivity, natural resilience and artificial adaptation, through analyses of the four states of the water system and its accompanying transformation processes. Furthermore, it proposes an analysis method for water resources vulnerability based on connection entropy, which extends the concept of contact entropy. An example is given of the water resources vulnerability in Anhui Province, China; the analysis illustrates that, overall, vulnerability levels fluctuated and showed apparent improvement trends from 2001 to 2015. Some suggestions are also provided, from the viewpoint of the vulnerability index, for improving the water resources vulnerability level in Anhui Province. Full article
(This article belongs to the Special Issue Entropy Applications in Environmental and Water Engineering)

Open AccessArticle Spatial Optimization of Agricultural Land Use Based on Cross-Entropy Method
Entropy 2017, 19(11), 592; doi:10.3390/e19110592
Received: 4 September 2017 / Revised: 26 October 2017 / Accepted: 2 November 2017 / Published: 7 November 2017
PDF Full-text (3995 KB) | HTML Full-text | XML Full-text | Supplementary Files
Abstract
An integrated optimization model was developed for the spatial distribution of agricultural crops in order to utilize agricultural water and land resources efficiently and simultaneously. The model is based on the spatial distribution of crop suitability, the spatial distribution of population density, and agricultural land use data. Multi-source remote sensing data are combined with constraints on the optimal crop areas, which are obtained from an agricultural cropping pattern optimization model. Using the middle reaches of the Heihe River basin as an example, the spatial distributions of maize and wheat were optimized by minimizing the cross-entropy between the crop distribution probabilities and the desired but unknown distribution probabilities. Results showed that the area of maize should increase and the area of wheat should decrease in the study area compared with the situation in 2013. The comprehensive suitable area distribution of maize is approximately in accordance with the present distribution; however, that of wheat is not. Through optimization, areas with a high proportion of maize and wheat became more concentrated than before: maize areas with more than 80% allocation are concentrated in the south of the study area, and wheat areas with more than 30% allocation are concentrated in the central part of the study area. The outcome of this study provides a scientific basis for farmers to select crops that are suitable for a particular area. Full article
(This article belongs to the Special Issue Entropy Applications in Environmental and Water Engineering)

Open AccessArticle A Novel Derivation of the Time Evolution of the Entropy for Macroscopic Systems in Thermal Non-Equilibrium
Entropy 2017, 19(11), 594; doi:10.3390/e19110594
Received: 30 June 2017 / Revised: 20 October 2017 / Accepted: 4 November 2017 / Published: 7 November 2017
PDF Full-text (2581 KB) | HTML Full-text | XML Full-text
Abstract
The paper discusses how the two thermodynamic properties, energy (U) and exergy (E), can be used to solve the problem of quantifying the entropy of non-equilibrium systems. Both energy and exergy are a priori concepts, and their formal dependence on thermodynamic state variables at equilibrium is known. Exploiting the results of a previous study, we first calculate the non-equilibrium exergy En-eq for an arbitrary temperature distribution across a macroscopic body, with an accuracy that depends only on the available information about the initial distribution: the analytical results confirm that En-eq relaxes exponentially to its equilibrium value. Using the Gyftopoulos-Beretta formalism, a non-equilibrium entropy Sn-eq(x,t) is then derived from En-eq(x,t) and U(x,t). It is finally shown that the non-equilibrium entropy generation between two states is always larger than its equilibrium (herein referred to as “classical”) counterpart. We conclude that every iso-energetic non-equilibrium state corresponds to an infinite set of non-equivalent states that can be ranked in terms of increasing entropy. Each point of the Gibbs plane therefore corresponds to a set of possible initial distributions: the non-equilibrium entropy is a multi-valued function that depends on the initial mass and energy distribution within the body. Though the concept cannot be directly extended to microscopic systems, it is argued that the present formulation is compatible with a possible reinterpretation of the existing non-equilibrium formulations, namely those of Tsallis and Grmela, and answers at least in part one of the objections set forth by Lieb and Yngvason. A systematic application of this paradigm is very convenient from a theoretical point of view and may be beneficial for meaningful future applications in the fields of nano-engineering and biological sciences. Full article
(This article belongs to the Special Issue Entropy, Time and Evolution)

Open AccessArticle On Work and Heat in Time-Dependent Strong Coupling
Entropy 2017, 19(11), 595; doi:10.3390/e19110595
Received: 18 August 2017 / Revised: 14 September 2017 / Accepted: 28 October 2017 / Published: 7 November 2017
PDF Full-text (275 KB) | HTML Full-text | XML Full-text
Abstract
This paper revisits the classical problem of representing a thermal bath interacting with a system as a large collection of harmonic oscillators initially in thermal equilibrium. As is well known, the system then obeys an equation, which in the bulk and in the suitable limit tends to the Kramers–Langevin equation of physical kinetics. I consider time-dependent system-bath coupling and show that this leads to an additional harmonic force acting on the system. When the coupling is switched on and switched off rapidly, the force has delta-function support at the initial and final time. I further show that the work and heat functionals as recently defined in stochastic thermodynamics at strong coupling contain additional terms depending on the time derivative of the system-bath coupling. I discuss these terms and show that while they can be very large if the system-bath coupling changes quickly, they only give a finite contribution to the work that enters in Jarzynski’s equality. I also discuss that these corrections to standard work and heat functionals provide an explanation for non-standard terms in the change of the von Neumann entropy of a quantum bath interacting with a quantum system found in an earlier contribution (Aurell and Eichhorn, 2015). Full article
(This article belongs to the Special Issue Thermodynamics and Statistical Mechanics of Small Systems)
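For reference, the Jarzynski equality mentioned above reads (standard form, with W understood here as the strong-coupling work functional discussed in the paper):

```latex
\bigl\langle e^{-\beta W} \bigr\rangle \;=\; e^{-\beta \Delta F},
```

where β = 1/(k_B T) and ΔF is the equilibrium free-energy difference between the final and initial states.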
Open AccessArticle An Entropy-Based Adaptive Hybrid Particle Swarm Optimization for Disassembly Line Balancing Problems
Entropy 2017, 19(11), 596; doi:10.3390/e19110596
Received: 8 August 2017 / Revised: 2 October 2017 / Accepted: 3 November 2017 / Published: 7 November 2017
PDF Full-text (4228 KB) | HTML Full-text | XML Full-text
Abstract
In order to improve the product disassembly efficiency, the disassembly line balancing problem (DLBP) is transformed into a problem of searching for the optimum path in the directed and weighted graph by constructing the disassembly hierarchy information graph (DHIG). Then, combining the characteristic of the disassembly sequence, an entropy-based adaptive hybrid particle swarm optimization algorithm (AHPSO) is presented. In this algorithm, entropy is introduced to measure the changing tendency of population diversity, and the dimension learning, crossover and mutation operator are used to increase the probability of producing feasible disassembly solutions (FDS). Performance of the proposed methodology is tested on the primary problem instances available in the literature, and the results are compared with other evolutionary algorithms. The results show that the proposed algorithm is efficient to solve the complex DLBP. Full article
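As an illustration of the kind of entropy-based diversity monitoring described above, a minimal sketch follows; the histogram-based definition below is an assumption of this sketch, not the paper's measure, and the DHIG encoding and adaptive operators are not reproduced:

```python
import numpy as np

def swarm_diversity_entropy(positions, bins=10):
    """Mean Shannon entropy, over dimensions, of the swarm's position histogram.

    High entropy = particles spread out (exploration); low entropy = the swarm
    is converging (exploitation). An adaptive PSO can switch operators based
    on the trend of this value over iterations.
    """
    positions = np.asarray(positions, float)      # shape (n_particles, n_dims)
    H = []
    for d in range(positions.shape[1]):
        counts, _ = np.histogram(positions[:, d], bins=bins)
        p = counts / counts.sum()
        p = p[p > 0]
        H.append(-np.sum(p * np.log(p)))
    return float(np.mean(H))

# Hypothetical swarm of 30 particles in a 5-dimensional search space
swarm = np.random.rand(30, 5)
print(swarm_diversity_entropy(swarm))
```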

Open AccessArticle Comparison of Two Entropy Spectral Analysis Methods for Streamflow Forecasting in Northwest China
Entropy 2017, 19(11), 597; doi:10.3390/e19110597
Received: 27 September 2017 / Revised: 1 November 2017 / Accepted: 5 November 2017 / Published: 7 November 2017
PDF Full-text (2296 KB) | HTML Full-text | XML Full-text
Abstract
Monthly streamflow has elements of stochasticity, seasonality, and periodicity. Spectral analysis and time series analysis can, respectively, be employed to characterize the periodical pattern and the stochastic pattern. Both Burg entropy spectral analysis (BESA) and configurational entropy spectral analysis (CESA) combine spectral analysis and time series analysis. This study compared the predictive performances of BESA and CESA for monthly streamflow forecasting in six basins in Northwest China. Four criteria were selected to evaluate the performances of the two entropy spectral analyses: relative error (RE), root mean square error (RMSE), coefficient of determination (R2), and Nash–Sutcliffe efficiency coefficient (NSE). It was found that, in Northwest China, both BESA and CESA forecast monthly streamflow well, with strong correlation, and the forecast accuracy of BESA is higher than that of CESA; for streamflow with weak correlation, the conclusion is the opposite. Full article
(This article belongs to the Special Issue Entropy Applications in Environmental and Water Engineering)
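The four evaluation criteria named in the abstract are standard and straightforward to reproduce; the sketch below computes RE, RMSE, R2 and NSE for a pair of observed and forecast series. The numbers are synthetic and purely illustrate the formulas.

```python
import numpy as np

def forecast_criteria(obs, sim):
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    re = np.mean((sim - obs) / obs)                          # relative error
    rmse = np.sqrt(np.mean((sim - obs) ** 2))                # root mean square error
    r2 = np.corrcoef(obs, sim)[0, 1] ** 2                    # coefficient of determination
    nse = 1.0 - np.sum((sim - obs) ** 2) / np.sum((obs - obs.mean()) ** 2)
    return {"RE": re, "RMSE": rmse, "R2": r2, "NSE": nse}

obs = np.array([12.0, 30.0, 55.0, 80.0, 42.0, 20.0])   # e.g., monthly streamflow (m^3/s)
sim = np.array([14.0, 28.0, 50.0, 85.0, 40.0, 22.0])
print(forecast_criteria(obs, sim))
```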
Open AccessArticle Accelerating the Computation of Entropy Measures by Exploiting Vectors with Dissimilarity
Entropy 2017, 19(11), 598; doi:10.3390/e19110598
Received: 27 September 2017 / Revised: 31 October 2017 / Accepted: 3 November 2017 / Published: 8 November 2017
PDF Full-text (5594 KB) | HTML Full-text | XML Full-text
Abstract
In the diagnosis of neurological diseases and assessment of brain function, entropy measures for quantifying electroencephalogram (EEG) signals are attracting ever-increasing attention worldwide. However, some entropy measures, such as approximate entropy (ApEn), sample entropy (SpEn) and multiscale entropy, imply high computational costs because their computations are based on hundreds of data points. In this paper, we propose an effective and practical method to accelerate the computation of these entropy measures by exploiting vectors with dissimilarity (VDS). By means of the VDS decision, distance calculations for most dissimilar vectors can be avoided during computation. The experimental results show that, compared with the conventional method, the proposed VDS method reduces the average computation time of SpEn in random signals and EEG signals by 78.5% and 78.9%, respectively. The computation times are consistently reduced by about 80.1–82.8% for five kinds of EEG signals of different lengths. The experiments further demonstrate that the VDS method not only accelerates the computation of SpEn in electromyography and electrocardiogram signals but also accelerates the computations of time-shift multiscale entropy and ApEn in EEG signals. All results indicate that the VDS method is a powerful strategy for accelerating the computation of entropy measures and has promising application potential in the field of biomedical informatics. Full article
(This article belongs to the Section Complexity)
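For orientation, the following is a plain, unaccelerated sample entropy implementation of the kind whose pairwise distance calculations the VDS screening is designed to avoid; the embedding dimension and tolerance are common defaults, and the paper’s VDS decision rule itself is not reproduced here.

```python
import numpy as np

def sample_entropy(x, m=2, r_factor=0.2):
    """Baseline SampEn with Chebyshev distance; O(N^2) pairwise comparisons."""
    x = np.asarray(x, dtype=float)
    r = r_factor * x.std()

    def count_matches(m):
        templates = np.array([x[i:i + m] for i in range(len(x) - m)])
        count = 0
        for i in range(len(templates) - 1):
            d = np.max(np.abs(templates[i + 1:] - templates[i]), axis=1)
            count += np.sum(d <= r)
        return count

    b, a = count_matches(m), count_matches(m + 1)
    return np.inf if a == 0 or b == 0 else -np.log(a / b)

rng = np.random.default_rng(0)
print(sample_entropy(rng.standard_normal(1000)))        # white noise: relatively high SampEn
print(sample_entropy(np.sin(0.05 * np.arange(1000))))   # regular signal: lower SampEn
```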
Open AccessArticle Sparse Coding Algorithm with Negentropy and Weighted 1-Norm for Signal Reconstruction
Entropy 2017, 19(11), 599; doi:10.3390/e19110599
Received: 20 September 2017 / Revised: 24 October 2017 / Accepted: 6 November 2017 / Published: 8 November 2017
PDF Full-text (2283 KB) | HTML Full-text | XML Full-text
Abstract
Compressive sensing theory has attracted widespread attention in recent years, and sparse signal reconstruction has been widely used in signal processing and communication. This paper addresses the problem of sparse signal recovery, especially with non-Gaussian noise. The main contribution of this paper is an algorithm in which negentropy and a reweighting scheme form the core of the approach. The signal reconstruction problem is formalized as a constrained minimization problem, where the objective function is the sum of a term measuring the statistical characteristics of the error, the negentropy, and a sparse regularization term, the p-norm with 0 < p < 1. The p-norm, however, leads to a non-convex optimization problem which is difficult to solve efficiently. Herein we treat the p-norm as a series of weighted 1-norms so that the sub-problems become convex. We propose an optimized algorithm that combines forward-backward splitting with this reweighting scheme. The algorithm is fast and succeeds in exactly recovering sparse signals with Gaussian and non-Gaussian noise. Several numerical experiments and comparisons demonstrate the superiority of the proposed algorithm. Full article
(This article belongs to the Section Information Theory)
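The reweighting idea in the abstract, treating the p-norm as a sequence of weighted 1-norms, can be sketched with a standard squared-error data term standing in for the paper’s negentropy-based term; the iteratively reweighted soft-thresholding below (a forward-backward splitting step followed by a weight update) is therefore only a simplified stand-in, and the step size, regularization weight and p are illustrative choices.

```python
import numpy as np

def reweighted_l1(A, y, lam=2.0, p=0.5, n_outer=10, n_inner=300, eps=1e-3):
    """Minimise 0.5*||Ax - y||^2 + lam * sum(w_i * |x_i|), re-deriving w from the p-norm."""
    n = A.shape[1]
    x = np.zeros(n)
    w = np.ones(n)
    step = 1.0 / np.linalg.norm(A, 2) ** 2            # 1 / Lipschitz constant of the gradient
    for _ in range(n_outer):
        for _ in range(n_inner):                      # forward-backward splitting
            grad = A.T @ (A @ x - y)                  # forward (gradient) step
            z = x - step * grad
            x = np.sign(z) * np.maximum(np.abs(z) - step * lam * w, 0.0)  # backward (prox) step
        w = p / (np.abs(x) + eps) ** (1.0 - p)        # weights that linearise the p-norm
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((80, 200))
x_true = np.zeros(200)
x_true[[5, 50, 120]] = [1.0, -2.0, 1.5]
y = A @ x_true + 0.01 * rng.standard_normal(80)
x_hat = reweighted_l1(A, y)
print(np.argsort(-np.abs(x_hat))[:3])   # largest entries typically coincide with the true support
```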
Open AccessArticle Understanding the Fractal Dimensions of Urban Forms through Spatial Entropy
Entropy 2017, 19(11), 600; doi:10.3390/e19110600
Received: 17 September 2017 / Revised: 29 October 2017 / Accepted: 6 November 2017 / Published: 9 November 2017
PDF Full-text (3538 KB) | HTML Full-text | XML Full-text
Abstract
The spatial patterns and processes of cities can be described with various entropy functions. However, spatial entropy always depends on the scale of measurement, and it is difficult to find a characteristic value for it. In contrast, fractal parameters can be employed to characterize scale-free phenomena and reflect the local features of random multi-scaling structure. This paper is devoted to exploring the similarities and differences between spatial entropy and fractal dimension in urban description. Drawing an analogy between cities and growing fractals, we illustrate the definitions of fractal dimension based on different entropy concepts. Three representative fractal dimensions in the multifractal dimension set (capacity dimension, information dimension, and correlation dimension) are utilized to make empirical analyses of the urban form of two Chinese cities, Beijing and Hangzhou. The results show that the entropy values vary with the measurement scale, but the fractal dimension value is stable if the method and study area are fixed; if the linear size of the boxes is small enough (e.g., <1/25), the linear correlation between entropy and fractal dimension is significant (at the 99% confidence level). Further empirical analysis indicates that fractal dimension is close to the characteristic values of spatial entropy. This suggests that the physical meaning of fractal dimension can be interpreted by the ideas from entropy and scaling, and the conclusion is revealing for future spatial analysis of cities. Full article
(This article belongs to the Special Issue Geometry in Thermodynamics II)
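The three dimensions named in the abstract can be estimated for a point pattern by box counting: regress the log box count, the box-occupancy entropy and the log second-order Rényi sum against the log box size. The sketch below does this for a synthetic point cloud standing in for real urban land-use data; the scale range is an illustrative assumption.

```python
import numpy as np

def box_counting_dimensions(points, scales=(4, 8, 16, 32, 64)):
    pts = (points - points.min(0)) / (points.max(0) - points.min(0))  # normalise to unit square
    logs, d0, d1, d2 = [], [], [], []
    for s in scales:
        h, _, _ = np.histogram2d(pts[:, 0], pts[:, 1], bins=s, range=[[0, 1], [0, 1]])
        p = h[h > 0] / h.sum()
        logs.append(np.log(1.0 / s))                  # log of box size
        d0.append(np.log((h > 0).sum()))              # capacity: log N(eps)
        d1.append(-(p * np.log(p)).sum())             # information: entropy I(eps)
        d2.append(np.log((p ** 2).sum()))             # correlation: log sum p^2
    logs = np.array(logs)
    D0 = -np.polyfit(logs, d0, 1)[0]                  # N(eps) ~ eps^(-D0)
    D1 = -np.polyfit(logs, d1, 1)[0]                  # I(eps) ~ -D1 * log(eps)
    D2 = np.polyfit(logs, d2, 1)[0]                   # sum p^2 ~ eps^(D2)
    return D0, D1, D2

rng = np.random.default_rng(0)
cloud = rng.random((20000, 2))                         # uniform plane: all dimensions near 2
print(box_counting_dimensions(cloud))
```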
Open AccessArticle Secret Sharing and Shared Information
Entropy 2017, 19(11), 601; doi:10.3390/e19110601
Received: 21 June 2017 / Revised: 2 November 2017 / Accepted: 5 November 2017 / Published: 9 November 2017
PDF Full-text (262 KB) | HTML Full-text | XML Full-text
Abstract
Secret sharing is a cryptographic discipline in which the goal is to distribute information about a secret over a set of participants in such a way that only specific authorized combinations of participants together can reconstruct the secret. Thus, secret sharing schemes are systems of variables in which it is very clearly specified which subsets have information about the secret. As such, they provide perfect model systems for information decompositions. However, following this intuition too far leads to an information decomposition with negative partial information terms, which are difficult to interpret. One possible explanation is that the partial information lattice proposed by Williams and Beer is incomplete and has to be extended to incorporate terms corresponding to higher-order redundancy. These results put bounds on information decompositions that follow the partial information framework, and they hint at where the partial information lattice needs to be improved. Full article
Open AccessArticle Revealing Tripartite Quantum Discord with Tripartite Information Diagram
Entropy 2017, 19(11), 602; doi:10.3390/e19110602
Received: 29 September 2017 / Revised: 22 October 2017 / Accepted: 8 November 2017 / Published: 10 November 2017
PDF Full-text (1035 KB) | HTML Full-text | XML Full-text
Abstract
A new measure based on the tripartite information diagram is proposed for identifying quantum discord in tripartite systems. The proposed measure generalizes the mutual information underlying discord from bipartite to tripartite systems, and utilizes both one-particle and two-particle projective measurements to reveal the characteristics of the tripartite quantum discord. The feasibility of the proposed measure is demonstrated by evaluating the tripartite quantum discord for systems with states close to Greenberger–Horne–Zeilinger, W, and biseparable states. In addition, the connections between tripartite quantum discord and two other quantum correlations—namely genuine tripartite entanglement and genuine tripartite Einstein–Podolsky–Rosen steering—are briefly discussed. The present study considers the case of quantum discord in tripartite systems. However, the proposed framework can be readily extended to general N-partite systems. Full article
(This article belongs to the Section Quantum Information)
Open AccessArticle Thermodynamics, Statistical Mechanics and Entropy
Entropy 2017, 19(11), 603; doi:10.3390/e19110603
Received: 30 September 2017 / Accepted: 6 November 2017 / Published: 10 November 2017
PDF Full-text (323 KB) | HTML Full-text | XML Full-text
Abstract
The proper definition of thermodynamics and the thermodynamic entropy is discussed in the light of recent developments. The postulates for thermodynamics are examined critically, and some modifications are suggested to allow for the inclusion of long-range forces (within a system), inhomogeneous systems with non-extensive entropy, and systems that can have negative temperatures. Only the thermodynamics of finite systems is considered, with the condition that the system is large enough for the fluctuations to be smaller than the experimental resolution. The statistical basis for thermodynamics is discussed, along with four different forms of the (classical and quantum) entropy. The strengths and weaknesses of each are evaluated in relation to the requirements of thermodynamics. Effects of order 1/N, where N is the number of particles, are included in the discussion because they have played a significant role in the literature, even if they are too small to have a measurable effect in an experiment. The discussion includes the role of discreteness, the non-zero width of the energy and particle number distributions, the extensivity of models with non-interacting particles, and the concavity of the entropy with respect to energy. The results demonstrate the validity of negative temperatures. Full article
(This article belongs to the Special Issue Entropy and Its Applications across Disciplines)
Open AccessArticle Rate Distortion Functions and Rate Distortion Function Lower Bounds for Real-World Sources
Entropy 2017, 19(11), 604; doi:10.3390/e19110604
Received: 27 September 2017 / Revised: 3 November 2017 / Accepted: 6 November 2017 / Published: 11 November 2017
PDF Full-text (377 KB) | HTML Full-text | XML Full-text
Abstract
Although Shannon introduced the concept of a rate distortion function in 1948, only in the last decade has the methodology for developing rate distortion function lower bounds for real-world sources been established. However, these recent results have not been fully exploited due to some confusion about how these new rate distortion bounds, once they are obtained, should be interpreted and used in source codec performance analysis and design. We present the relevant rate distortion theory and show how this theory can be used for practical codec design and performance prediction and evaluation. Examples for speech and video indicate exactly how the new rate distortion functions can be calculated, interpreted, and extended. These examples illustrate the interplay between source models for rate distortion theoretic studies and the source models underlying video and speech codec design. Key concepts include the development of composite source models per source realization and the application of conditional rate distortion theory. Full article
(This article belongs to the Special Issue Rate-Distortion Theory and Information Theory)
Open AccessArticle On the Uniqueness Theorem for Pseudo-Additive Entropies
Entropy 2017, 19(11), 605; doi:10.3390/e19110605
Received: 19 September 2017 / Revised: 8 November 2017 / Accepted: 10 November 2017 / Published: 12 November 2017
PDF Full-text (831 KB) | HTML Full-text | XML Full-text
Abstract
The aim of this paper is to show that the Tsallis-type (q-additive) entropic chain rule allows for a wider class of entropic functionals than previously thought. In particular, we point out that the ensuing entropy solutions (e.g., Tsallis entropy) can be determined uniquely only when one fixes the prescription for handling conditional entropies. By using the concept of Kolmogorov–Nagumo quasi-linear means, we prove this with the help of Daróczy’s mapping theorem. Our point is further illustrated with a number of explicit examples. Other salient issues, such as connections of conditional entropies with the de Finetti–Kolmogorov theorem for escort distributions and with Landsberg’s classification of non-extensive thermodynamic systems, are also briefly discussed. Full article
(This article belongs to the Special Issue Selected Papers from 14th Joint European Thermodynamics Conference)
Open AccessArticle A New Stochastic Dominance Degree Based on Almost Stochastic Dominance and Its Application in Decision Making
Entropy 2017, 19(11), 606; doi:10.3390/e19110606
Received: 11 September 2017 / Revised: 6 November 2017 / Accepted: 12 November 2017 / Published: 17 November 2017
PDF Full-text (922 KB) | HTML Full-text | XML Full-text
Abstract
Traditional stochastic dominance rules impose such strict and qualitative conditions that, in general, a stochastic dominance relation between two alternatives does not exist. To address this problem, we first supplement the definitions of almost stochastic dominance (ASD). Then, we propose a new definition of stochastic dominance degree (SDD) that is based on the idea of ASD. The new definition takes both the objective mean and stakeholders’ subjective preference into account, and can measure both standard and almost stochastic dominance degree. The new definition contains four kinds of SDD corresponding to different stakeholders (rational investors, risk averters, risk seekers, and prospect investors). The operator in the definition can also be changed to fit different circumstances. On the basis of the new SDD definition, we present a method to solve stochastic multiple criteria decision-making problems. The numerical experiment shows that the new method produces a more accurate result according to the utility situations of stakeholders. Moreover, even when it is difficult to elicit the group utility distribution of stakeholders, or when the group utility distribution is ambiguous, the method can still rank alternatives. Full article
(This article belongs to the Section Information Theory)
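For readers unfamiliar with almost stochastic dominance, the violation ratio behind first-order ASD can be computed from empirical CDFs as below; this is a minimal sketch of the standard Leshno-Levy quantity rather than the authors’ extended SDD operators, and the grid and example distributions are illustrative.

```python
import numpy as np

def asd1_violation_ratio(x, y, grid_size=2000):
    """epsilon = area where F_x > F_y (violation of X dominating Y) over total area between the CDFs."""
    x, y = np.sort(x), np.sort(y)
    grid = np.linspace(min(x[0], y[0]), max(x[-1], y[-1]), grid_size)
    Fx = np.searchsorted(x, grid, side="right") / x.size   # empirical CDF of X
    Fy = np.searchsorted(y, grid, side="right") / y.size   # empirical CDF of Y
    diff = Fx - Fy
    violation = np.clip(diff, 0.0, None).sum()             # uniform grid, so the step size cancels
    total = np.abs(diff).sum()
    return violation / total if total > 0 else 0.0

rng = np.random.default_rng(0)
a = rng.normal(1.0, 1.0, 5000)     # higher-mean alternative
b = rng.normal(0.0, 1.2, 5000)
print(asd1_violation_ratio(a, b))  # small epsilon: a "almost" dominates b
```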
Open AccessArticle Fault Detection for Vibration Signals on Rolling Bearings Based on the Symplectic Entropy Method
Entropy 2017, 19(11), 607; doi:10.3390/e19110607
Received: 25 September 2017 / Revised: 29 October 2017 / Accepted: 9 November 2017 / Published: 18 November 2017
PDF Full-text (1833 KB) | HTML Full-text | XML Full-text
Abstract
Bearing vibration response studies are crucial for the condition monitoring of bearings and the quality inspection of rotating machinery systems. However, it is still very difficult to diagnose bearing faults, especially rolling element faults, due to the complex, high-dimensional and nonlinear characteristics of vibration signals as well as the strong background noise. A novel nonlinear analysis method—the symplectic entropy (SymEn) measure—is proposed to analyze the measured signals for fault monitoring of rolling bearings. The core technique of the SymEn approach is the entropy analysis based on the symplectic principal components. The dynamical characteristics of the rolling bearing data are analyzed using the SymEn method. Unlike other techniques consisting of high-dimensional features in the time-domain, frequency-domain and the empirical mode decomposition (EMD)/wavelet-domain, the SymEn approach constructs low-dimensional (i.e., two-dimensional) features based on the SymEn estimate. The vibration signals from our experiments and the Case Western Reserve University Bearing Data Center are applied to verify the effectiveness of the proposed method. Meanwhile, it is found that faulty bearings have a great influence on the other normal bearings. To sum up, the results indicate that the proposed method can be used to detect rolling bearing faults. Full article
(This article belongs to the Section Complexity)
Open AccessArticle Robust and Sparse Regression via γ-Divergence
Entropy 2017, 19(11), 608; doi:10.3390/e19110608
Received: 30 September 2017 / Revised: 7 November 2017 / Accepted: 9 November 2017 / Published: 13 November 2017
PDF Full-text (327 KB) | HTML Full-text | XML Full-text
Abstract
In high-dimensional data, many sparse regression methods have been proposed. However, they may not be robust against outliers. Recently, the use of the density power weight has been studied for robust parameter estimation, and the corresponding divergences have been discussed. One such divergence is the γ-divergence, and the robust estimator using the γ-divergence is known for its strong robustness. In this paper, we extend the γ-divergence to the regression problem, consider robust and sparse regression based on the γ-divergence, and show that it has strong robustness under heavy contamination even when outliers are heterogeneous. The loss function is constructed from an empirical estimate of the γ-divergence with sparse regularization, and the parameter estimate is defined as the minimizer of the loss function. To obtain the robust and sparse estimate, we propose an efficient update algorithm which has a monotone decreasing property of the loss function. In particular, we discuss a linear regression problem with L1 regularization in detail. In numerical experiments and real data analyses, we see that the proposed method outperforms past robust and sparse methods. Full article
Open AccessArticle Maximum Entropy-Copula Method for Hydrological Risk Analysis under Uncertainty: A Case Study on the Loess Plateau, China
Entropy 2017, 19(11), 609; doi:10.3390/e19110609
Received: 25 September 2017 / Revised: 3 November 2017 / Accepted: 11 November 2017 / Published: 15 November 2017
PDF Full-text (5876 KB) | HTML Full-text | XML Full-text
Abstract
Copula functions have been extensively used to describe the joint behaviors of extreme hydrological events and to analyze hydrological risk. Advanced marginal distribution inference, for example, the maximum entropy theory, is particularly beneficial for improving the performance of the copulas. The goal of this paper is therefore twofold: first, to develop a coupled maximum entropy-copula method for hydrological risk analysis by deriving the bivariate return periods, risk, reliability and bivariate design events; and second, to reveal the impact of marginal distribution selection uncertainty and sampling uncertainty on bivariate design event identification. The uncertainties involved in the second goal, in particular, have not yet received significant consideration. The designed framework for hydrological risk analysis related to flood and extreme precipitation events is applied, as an example, to two catchments on the Loess Plateau, China. Results show that (1) the distribution derived by the maximum entropy principle outperforms the conventional distributions for the probabilistic modeling of flood and extreme precipitation events; (2) the bivariate return periods, risk, reliability and bivariate design events can be derived using the coupled entropy-copula method; (3) uncertainty analysis highlights that the appropriate performance of the marginal distribution is closely related to bivariate design event identification. Most importantly, sampling uncertainty causes the confidence regions of bivariate design events with return periods of 30 years to be very large, overlapping with the values of flood and extreme precipitation that have return periods of 10 and 50 years, respectively. The large confidence regions of bivariate design events greatly challenge their application in practical engineering design. Full article
(This article belongs to the Special Issue Entropy Applications in Environmental and Water Engineering)
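As a small worked example of the bivariate return periods mentioned in the abstract, the sketch below evaluates a Gumbel-Hougaard copula for a pair of marginal non-exceedance probabilities and converts it into the usual "OR" and "AND" joint return periods; the copula parameter, the marginal levels and the one-year inter-arrival time are illustrative assumptions, and the maximum-entropy marginal fitting and uncertainty analysis of the paper are not reproduced.

```python
import numpy as np

def gumbel_copula(u, v, theta):
    """Gumbel-Hougaard copula C(u, v), theta >= 1."""
    return np.exp(-(((-np.log(u)) ** theta + (-np.log(v)) ** theta) ** (1.0 / theta)))

def bivariate_return_periods(u, v, theta, mu=1.0):
    """mu = mean inter-arrival time of events (1 year for annual maxima)."""
    C = gumbel_copula(u, v, theta)
    T_or = mu / (1.0 - C)                     # either variable exceeds its design level
    T_and = mu / (1.0 - u - v + C)            # both variables exceed jointly
    return T_or, T_and

# Example: flood peak and precipitation each at their marginal 30-year level (u = v = 1 - 1/30).
u = v = 1.0 - 1.0 / 30.0
print(bivariate_return_periods(u, v, theta=2.5))
```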
Open AccessFeature PaperArticle Capacity Bounds on the Downlink of Symmetric, Multi-Relay, Single-Receiver C-RAN Networks
Entropy 2017, 19(11), 610; doi:10.3390/e19110610
Received: 17 July 2017 / Revised: 6 November 2017 / Accepted: 6 November 2017 / Published: 14 November 2017
PDF Full-text (267 KB) | HTML Full-text | XML Full-text
Abstract
The downlink of symmetric Cloud Radio Access Networks (C-RANs) with multiple relays and a single receiver is studied. Lower and upper bounds are derived on the capacity. The lower bound is achieved by Marton’s coding, which facilitates dependence among the multiple-access channel inputs. The upper bound uses Ozarow’s technique to augment the system with an auxiliary random variable. The bounds are studied over scalar Gaussian C-RANs and are shown to meet and characterize the capacity for interesting regimes of operation. Full article
(This article belongs to the Special Issue Network Information Theory)
Open AccessFeature PaperArticle Multilevel Coding for the Full-Duplex Decode-Compress-Forward Relay Channel
Entropy 2017, 19(11), 611; doi:10.3390/e19110611
Received: 14 August 2017 / Revised: 23 October 2017 / Accepted: 30 October 2017 / Published: 14 November 2017
PDF Full-text (311 KB) | HTML Full-text | XML Full-text
Abstract
Decode-Compress-Forward (DCF) is a generalization of Decode-Forward (DF) and Compress-Forward (CF). This paper investigates conditions under which DCF offers gains over DF and CF, addresses the problem of coded modulation for DCF, and evaluates the performance of DCF coded modulation implemented via low-density parity-check (LDPC) codes and polar codes. We begin by revisiting the achievable rate of DCF in discrete memoryless channels under backward decoding. We then study coded modulation for decode-compress-forward via multilevel coding. We show that the proposed multilevel coding approaches the known achievable rates of DCF. The proposed multilevel coding is implemented (and its performance verified) via a combination of standard DVB-S2 LDPC codes and polar codes whose design follows the method of Blasco-Serrano. Full article
(This article belongs to the Special Issue Multiuser Information Theory)
Open AccessArticle An Analysis of Information Dynamic Behavior Using Autoregressive Models
Entropy 2017, 19(11), 612; doi:10.3390/e19110612
Received: 20 September 2017 / Revised: 8 November 2017 / Accepted: 10 November 2017 / Published: 18 November 2017
PDF Full-text (963 KB) | HTML Full-text | XML Full-text
Abstract
Information Theory is a branch of mathematics, more specifically of probability theory, that studies the quantification of information. Recently, several studies have successfully used Information Theoretic Learning (ITL) as a new technique for unsupervised learning, in which information measures serve as the criterion of optimality. In this article, we analyze a still unexplored aspect of these information measures: their dynamic behavior. Autoregressive models (linear and non-linear) are used to represent the dynamics of the information measures. As a source of dynamic information, videos with different characteristics, such as fading and monotonous sequences, are used. Full article
Open AccessArticle How to Identify the Most Powerful Node in Complex Networks? A Novel Entropy Centrality Approach
Entropy 2017, 19(11), 614; doi:10.3390/e19110614
Received: 9 October 2017 / Revised: 12 November 2017 / Accepted: 13 November 2017 / Published: 15 November 2017
PDF Full-text (9004 KB) | HTML Full-text | XML Full-text
Abstract
Centrality is one of the most studied concepts in network analysis. Although an abundance of methods for measuring centrality in social networks has been proposed, each approach exclusively characterizes limited parts of what it means for an actor to be “vital” to the network. In this paper, a novel mechanism is proposed to quantitatively measure centrality using a re-defined entropy centrality model, which is based on decompositions of a graph into subgraphs and analysis of the entropy of neighbor nodes. By design, the re-defined entropy centrality, which describes associations among node pairs and captures the process of influence propagation, can be interpreted as a measure of an actor’s potential for communication activity. We evaluate the efficiency of the proposed model using four real-world datasets with varied sizes and densities and three artificial networks constructed by the Barabási-Albert, Erdős-Rényi and Watts-Strogatz models. The four datasets are Zachary’s karate club, USAir97, a collaboration network and the Email network URV. Extensive experimental results prove the effectiveness of the proposed method. Full article
(This article belongs to the Section Complexity)
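The paper’s re-defined entropy centrality is not reproduced here, but the flavour of an entropy-based centrality can be shown in a few lines: for each node, form a probability distribution over its neighbours (here weighted by the neighbours’ degrees, an assumption of this simplified sketch) and score the node by the Shannon entropy of that distribution; networkx is used for the graph handling.

```python
import math
import networkx as nx

def local_entropy_centrality(G):
    """Shannon entropy of the degree-weighted distribution over each node's neighbours."""
    scores = {}
    for node in G:
        degs = [G.degree(nb) for nb in G.neighbors(node)]
        total = sum(degs)
        if total == 0:
            scores[node] = 0.0
            continue
        probs = [d / total for d in degs]
        scores[node] = -sum(p * math.log(p) for p in probs if p > 0)
    return scores

G = nx.karate_club_graph()          # Zachary's karate club, one of the datasets in the abstract
scores = local_entropy_centrality(G)
print(sorted(scores, key=scores.get, reverse=True)[:5])   # candidate "most powerful" nodes
```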
Open AccessArticle Random Walk Null Models for Time Series Data
Entropy 2017, 19(11), 615; doi:10.3390/e19110615
Received: 6 October 2017 / Revised: 10 November 2017 / Accepted: 13 November 2017 / Published: 15 November 2017
PDF Full-text (960 KB) | HTML Full-text | XML Full-text
Abstract
Permutation entropy has become a standard tool for time series analysis that exploits the temporal and ordinal relationships within data. Motivated by a Kullback–Leibler divergence interpretation of permutation entropy as divergence from white noise, we extend pattern-based methods to the setting of random walk data. We analyze random walk null models for correlated time series and describe a method for determining the corresponding ordinal pattern distributions. These null models more accurately reflect the observed pattern distributions in some economic data. This leads us to define a measure of complexity using the deviation of a time series from an associated random walk null model. We demonstrate the applicability of our methods using empirical data drawn from a variety of fields, including a variety of stock market closing prices. Full article
(This article belongs to the Special Issue Permutation Entropy & Its Interdisciplinary Applications)
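The ordinal-pattern machinery behind the abstract is easy to sketch: count length-m ordinal patterns, normalise them into a distribution, and compare that distribution to a null model with a Kullback-Leibler divergence. The sketch below uses the white-noise (uniform-pattern) null only; the random-walk null distributions derived in the paper are not reproduced, and the embedding order is an illustrative choice.

```python
import math
from collections import Counter
from itertools import permutations
import numpy as np

def ordinal_pattern_distribution(x, m=3):
    patterns = Counter(tuple(np.argsort(x[i:i + m])) for i in range(len(x) - m + 1))
    total = sum(patterns.values())
    return {p: patterns.get(p, 0) / total for p in permutations(range(m))}

def kl_from_null(p_obs, p_null):
    return sum(p * math.log(p / p_null[k]) for k, p in p_obs.items() if p > 0)

rng = np.random.default_rng(0)
white = rng.standard_normal(10000)
walk = np.cumsum(white)                          # random walk: patterns far from uniform

uniform_null = {p: 1.0 / 6 for p in permutations(range(3))}
print(kl_from_null(ordinal_pattern_distribution(white), uniform_null))  # near 0
print(kl_from_null(ordinal_pattern_distribution(walk), uniform_null))   # clearly positive
```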
Open AccessArticle Effects of Endwall Fillet and Bulb on the Temperature Uniformity of Pin-Fined Microchannel
Entropy 2017, 19(11), 616; doi:10.3390/e19110616
Received: 5 October 2017 / Revised: 12 November 2017 / Accepted: 14 November 2017 / Published: 15 November 2017
PDF Full-text (7475 KB) | HTML Full-text | XML Full-text
Abstract
Endwall fillet and bulb structures are proposed in this research to improve the temperature uniformity of pin-fined microchannels. The periodic laminar flow and heat transfer performances are investigated under different Reynolds numbers and different fillet and bulb radii. The results show that, at a low Reynolds number, both the fillet and the bulb structures strengthen the span-wise and the normal secondary flow in the channel, eliminate the high temperature area in the pin-fin, improve the heat transfer performance at the rear of the cylinder, and enhance the thermal uniformity of the pin-fin surface and the outside wall. Compared to traditional pin-fined microchannels, the flow resistance coefficient f of the pin-fined microchannels with a fillet, or with a bulb of 2 μm or 5 μm radius, does not increase significantly, while f of the pin-fined microchannels with a 10 μm or 15 μm bulb increases notably. Moreover, Nu increases by at most 16.93% for those with a fillet and 20.65% for those with a bulb, and the synthetic thermal performance coefficient TP increases by at most 16.22% for those with a fillet and 15.67% for those with a bulb. Finally, as the Reynolds number increases, the heat transfer improvement due to the fillet and bulb decreases. Full article
Open AccessFeature PaperArticle On Lower Bounds for Statistical Learning Theory
Entropy 2017, 19(11), 617; doi:10.3390/e19110617
Received: 7 September 2017 / Revised: 27 October 2017 / Accepted: 14 November 2017 / Published: 15 November 2017
PDF Full-text (309 KB) | HTML Full-text | XML Full-text
Abstract
In recent years, tools from information theory have played an increasingly prevalent role in statistical machine learning. In addition to developing efficient, computationally feasible algorithms for analyzing complex datasets, it is of theoretical importance to determine whether such algorithms are “optimal” in the sense that no other algorithm can lead to smaller statistical error. This paper provides a survey of various techniques used to derive information-theoretic lower bounds for estimation and learning. We focus on the settings of parameter and function estimation, community recovery, and online learning for multi-armed bandits. A common theme is that lower bounds are established by relating the statistical learning problem to a channel decoding problem, for which lower bounds may be derived involving information-theoretic quantities such as the mutual information, total variation distance, and Kullback–Leibler divergence. We close by discussing the use of information-theoretic quantities to measure independence in machine learning applications ranging from causality to medical imaging, and mention techniques for estimating these quantities efficiently in a data-driven manner. Full article
(This article belongs to the Special Issue Information Theory in Machine Learning and Data Science)
Open AccessArticle Minimax Estimation of Quantum States Based on the Latent Information Priors
Entropy 2017, 19(11), 618; doi:10.3390/e19110618
Received: 13 September 2017 / Revised: 12 November 2017 / Accepted: 13 November 2017 / Published: 16 November 2017
PDF Full-text (315 KB) | HTML Full-text | XML Full-text
Abstract
We develop priors for Bayes estimation of quantum states that provide minimax state estimation. The relative entropy from the true density operator to a predictive density operator is adopted as a loss function. The proposed prior maximizes the conditional Holevo mutual information, and it is a quantum version of the latent information prior in classical statistics. For the one-qubit system, we provide a class of measurements that is optimal from the viewpoint of minimax state estimation. Full article
(This article belongs to the Special Issue Transfer Entropy II)
Open AccessFeature PaperArticle Modal Strain Energy-Based Debonding Assessment of Sandwich Panels Using a Linear Approximation with Maximum Entropy
Entropy 2017, 19(11), 619; doi:10.3390/e19110619
Received: 21 September 2017 / Revised: 5 November 2017 / Accepted: 14 November 2017 / Published: 17 November 2017
PDF Full-text (2032 KB)
Abstract
Sandwich structures are very attractive due to their high strength at a minimum weight, and, therefore, there has been a rapid increase in their applications. Nevertheless, these structures may present imperfect bonding or debonding between the skins and core as a result of manufacturing defects or impact loads, degrading their mechanical properties. To improve both the safety and functionality of these systems, structural damage assessment methodologies can be implemented. This article presents a damage assessment algorithm to localize and quantify debonds in sandwich panels. The proposed algorithm uses damage indices derived from the modal strain energy method and a linear approximation with a maximum entropy algorithm. Full-field vibration measurements of the panels were acquired using a high-speed 3D digital image correlation (DIC) system. Since the number of damage indices per panel is too large to be used directly in a regression algorithm, reprocessing of the data using principal component analysis (PCA) and kernel PCA has been performed. The results demonstrate that the proposed methodology accurately identifies debonding in composite panels. Full article
(This article belongs to the Special Issue Entropy for Characterization of Uncertainty in Risk and Reliability)
Open AccessArticle Surface Interaction of Nanoscale Water Film with SDS from Computational Simulation and Film Thermodynamics
Entropy 2017, 19(11), 620; doi:10.3390/e19110620
Received: 15 September 2017 / Revised: 15 November 2017 / Accepted: 15 November 2017 / Published: 18 November 2017
PDF Full-text (2736 KB) | HTML Full-text | XML Full-text
Abstract
Foam systems have been attracting extensive attention due to their importance in a variety of applications, e.g., in the cleaning industry and in bubble flotation. In the context of flotation chemistry, flotation performance is strongly affected by bubble coalescence, which in turn relies significantly on the surface forces upon the liquid film between bubbles. Conventionally, the unusually strong short-range repulsive surface interactions of Newton black films (NBF), whose thickness between the two interfaces is less than 5 nm, could not be incorporated into the available classical Derjaguin, Landau, Verwey, and Overbeek (DLVO) theory. The non-DLVO interaction increases exponentially as the film thickness decreases and plays a crucial role in determining liquid film stability. However, its mechanism and origin are still unclear. In the present work, we investigate the surface interaction of free-standing sodium dodecyl-sulfate (SDS) nanoscale black films in terms of the disjoining pressure using the molecular simulation method. For the aqueous nanoscale film, consisting of a water core coated with SDS surfactants, the disjoining pressure and film tension of the SDS-NBF were quantitatively determined as functions of film thickness by a post-processing technique derived from film thermodynamics. Full article
(This article belongs to the Special Issue Mesoscopic Thermodynamics and Dynamics)
Open AccessFeature PaperArticle Inquiry Calculus and the Issue of Negative Higher Order Informations
Entropy 2017, 19(11), 622; doi:10.3390/e19110622
Received: 20 September 2017 / Revised: 1 November 2017 / Accepted: 10 November 2017 / Published: 18 November 2017
PDF Full-text (1015 KB) | HTML Full-text | XML Full-text
Abstract
In this paper, we give the derivation of an inquiry calculus, or, equivalently, a Bayesian information theory. From simple ordering follow lattices, or, equivalently, algebras. Lattices admit a quantification, or, equivalently, algebras may be extended to calculi. The general rules of quantification are the sum and chain rules. Probability theory follows from a quantification on the specific lattice of statements that has an upper context. Inquiry calculus follows from a quantification on the specific lattice of questions that has a lower context. We give here a relevance measure and a product rule for relevances which, taken together with the sum rule of relevances, allow us to perform inquiry analyses in an algorithmic manner. Full article
(This article belongs to the Special Issue Maximum Entropy and Bayesian Methods)
Open AccessArticle Digital Image Stabilization Method Based on Variational Mode Decomposition and Relative Entropy
Entropy 2017, 19(11), 623; doi:10.3390/e19110623
Received: 11 September 2017 / Revised: 15 November 2017 / Accepted: 16 November 2017 / Published: 18 November 2017
PDF Full-text (6250 KB) | HTML Full-text | XML Full-text
Abstract
Cameras mounted on vehicles frequently suffer from image shake due to the vehicles’ motions. To remove jitter motions and preserve intentional motions, a hybrid digital image stabilization method is proposed that uses variational mode decomposition (VMD) and relative entropy (RE). In this paper, the global motion vector (GMV) is initially decomposed into several narrow-banded modes by VMD. REs, which quantify the difference between the probability distributions of two modes, are then calculated to identify the intentional and jitter motion modes. Finally, the summation of the jitter motion modes constitutes the jitter motion, whereas subtracting this sum from the GMV yields the intentional motion. The proposed stabilization method is compared with several known methods, namely the median filter (MF), Kalman filter (KF), wavelet decomposition (WD) method, empirical mode decomposition (EMD)-based method, and enhanced EMD-based method, to evaluate stabilization performance. Experimental results show that the proposed method outperforms the other stabilization methods. Full article
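Variational mode decomposition itself is not reproduced here (third-party packages such as vmdpy exist, but the sketch below does not depend on them); what can be shown compactly is the relative-entropy step: estimate the probability distribution of each candidate mode from a histogram and compute the Kullback-Leibler divergence against a reference mode, which could then be thresholded to separate jitter from intentional motion. The binning and the synthetic "modes" are illustrative assumptions.

```python
import numpy as np
from scipy.stats import entropy

def relative_entropy(mode_a, mode_b, bins=50):
    """KL divergence between the amplitude distributions of two decomposed modes."""
    lo = min(mode_a.min(), mode_b.min())
    hi = max(mode_a.max(), mode_b.max())
    p, _ = np.histogram(mode_a, bins=bins, range=(lo, hi), density=True)
    q, _ = np.histogram(mode_b, bins=bins, range=(lo, hi), density=True)
    p, q = p + 1e-12, q + 1e-12                       # avoid zero bins
    return entropy(p / p.sum(), q / q.sum())

t = np.linspace(0, 10, 5000)
intentional = np.sin(0.5 * np.pi * t)                 # slow, deliberate camera pan
jitter = 0.2 * np.sin(40 * np.pi * t) + 0.05 * np.random.default_rng(0).standard_normal(t.size)

print(relative_entropy(intentional, intentional))     # ~0: same distribution
print(relative_entropy(jitter, intentional))          # large: candidate jitter mode
```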
Open AccessArticle Re-Evaluating Electromyogram–Force Relation in Healthy Biceps Brachii Muscles Using Complexity Measures
Entropy 2017, 19(11), 624; doi:10.3390/e19110624
Received: 9 August 2017 / Revised: 26 September 2017 / Accepted: 17 October 2017 / Published: 19 November 2017
PDF Full-text (1209 KB) | HTML Full-text | XML Full-text
Abstract
The objective of this study is to re-evaluate the relation between the surface electromyogram (EMG) and muscle contraction torque in the biceps brachii (BB) muscles of healthy subjects using two different complexity measures. Ten healthy subjects were recruited and asked to complete a series of elbow flexion tasks at isometric muscle contraction levels ranging from 10% to 80% of maximum voluntary contraction (MVC) in increments of 10%. Meanwhile, both the elbow flexion torque and the surface EMG data from the muscle were recorded. The root mean square (RMS), sample entropy (SampEn) and fuzzy entropy (FuzzyEn) of the corresponding EMG data were analyzed for each contraction level, and the relation between EMG and muscle torque was accordingly quantified. The experimental results showed a nonlinear relation between the traditional RMS amplitude of the EMG and the muscle torque. By contrast, the FuzzyEn of the EMG exhibited a stronger linear correlation with the muscle torque than the RMS amplitude, which indicates its great value in estimating BB muscle strength in a simple and straightforward manner. In addition, the SampEn of the EMG was found to be insensitive to the varying muscle torques, presenting an almost flat trend with increasing muscle force. Such a characteristic of the SampEn implies its potential application as a surface EMG biomarker for examining neuromuscular changes while overcoming interference from muscle strength. Full article
(This article belongs to the Special Issue Information Theory Applied to Physiological Signals)
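Fuzzy entropy, the measure the abstract finds most linearly related to torque, is compact enough to sketch; the implementation below follows the common definition (Chebyshev distances between baseline-removed embedding vectors, exponential fuzzy membership), with the embedding dimension, tolerance and fuzzy power chosen as typical illustrative values rather than the study’s exact settings.

```python
import numpy as np

def fuzzy_entropy(x, m=2, r_factor=0.2, n_power=2):
    x = np.asarray(x, dtype=float)
    r = r_factor * x.std()

    def phi(m):
        vecs = np.array([x[i:i + m] for i in range(len(x) - m)])
        vecs = vecs - vecs.mean(axis=1, keepdims=True)            # remove local baseline
        sims = []
        for i in range(len(vecs) - 1):
            d = np.max(np.abs(vecs[i + 1:] - vecs[i]), axis=1)    # Chebyshev distance
            sims.append(np.exp(-(d ** n_power) / r))              # fuzzy membership
        return np.mean(np.concatenate(sims))

    return np.log(phi(m)) - np.log(phi(m + 1))

rng = np.random.default_rng(0)
emg_like = rng.standard_normal(2000) * np.linspace(0.5, 1.5, 2000)   # toy nonstationary signal
print(fuzzy_entropy(emg_like))
```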
Review

Jump to: Research, Other

Open AccessFeature PaperReview Entropy Applications to Water Monitoring Network Design: A Review
Entropy 2017, 19(11), 613; doi:10.3390/e19110613
Received: 16 October 2017 / Revised: 9 November 2017 / Accepted: 10 November 2017 / Published: 15 November 2017
PDF Full-text (265 KB) | HTML Full-text | XML Full-text
Abstract
Having reliable water monitoring networks is an essential component of water resources and environmental management. A standardized process for the design of water monitoring networks does not exist, with the exception of the World Meteorological Organization (WMO) general guidelines about the minimum network density. While one of the major challenges in the design of optimal hydrometric networks has been establishing design objectives, information theory has been successfully applied to network design problems by providing measures of the information content that can be delivered by a station or a network. This review first summarizes the common entropy terms that have been used in water monitoring network design. Then, it deals with the recent applications of the entropy concept to water monitoring network design, which are categorized into (1) precipitation; (2) streamflow and water level; (3) water quality; and (4) soil moisture and groundwater networks. The integrated design method for multivariate monitoring networks is also covered. Despite several open issues, entropy theory is well suited to water monitoring network design. However, further work is still required to provide design standards and guidelines for operational use. Full article
(This article belongs to the Special Issue Entropy Applications in Environmental and Water Engineering)
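Many of the designs reviewed rank candidate stations by the joint entropy (or transinformation) of their discretized records; a minimal greedy selection of that kind is sketched below, in which records are quantized into equal-width bins, joint entropy is computed by counting bin tuples, and stations are added one by one to maximize the information gained. The bin count, the synthetic records and the greedy rule are illustrative assumptions rather than a specific method from the review.

```python
import numpy as np

def joint_entropy(records, bins=5):
    """Joint Shannon entropy (nats) of discretized records; records has shape (n_stations, n_time)."""
    digitized = np.stack([np.digitize(r, np.histogram_bin_edges(r, bins)[1:-1]) for r in records])
    _, counts = np.unique(digitized.T, axis=0, return_counts=True)
    p = counts / counts.sum()
    return -(p * np.log(p)).sum()

def greedy_network(records, n_select):
    selected = []
    remaining = list(range(len(records)))
    while len(selected) < n_select:
        best = max(remaining, key=lambda s: joint_entropy(records[selected + [s]]))
        selected.append(best)
        remaining.remove(best)
    return selected

rng = np.random.default_rng(0)
base = rng.random((3, 500))
records = np.vstack([base, base[0] + 0.01 * rng.random(500)])   # station 3 nearly duplicates station 0
print(greedy_network(records, n_select=3))                      # expected to skip the redundant station
```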

Other

Jump to: Research, Review

Open AccessCorrection Correction: Kolchinsky, A. and Tracey, B.D. Estimating Mixture Entropy with Pairwise Distances. Entropy 2017, 19, 361
Entropy 2017, 19(11), 588; doi:10.3390/e19110588
Received: 30 October 2017 / Accepted: 1 November 2017 / Published: 3 November 2017
PDF Full-text (176 KB) | HTML Full-text | XML Full-text
Abstract
Following the publication of our paper [1], we uncovered a mistake in the derivation of two formulas in the manuscript.[...] Full article
Open AccessCorrection Correction: Abdollahzadeh Jamalabadi, M.Y.; Hooshmand, P.; Bagheri, N.; KhakRah, H.; Dousti, M. Numerical Simulation of Williamson Combined Natural and Forced Convective Fluid Flow between Parallel Vertical Walls with Slip Effects and Radiative Heat Transfer in a Porous Medium. Entropy 2016, 18, 147
Entropy 2017, 19(11), 593; doi:10.3390/e19110593
Received: 28 October 2017 / Revised: 31 October 2017 / Accepted: 31 October 2017 / Published: 7 November 2017
PDF Full-text (148 KB) | HTML Full-text | XML Full-text
Abstract
The authors wish to make the following correction to this paper [...] Full article
Open AccessFeature PaperEssay Thermodynamics: The Unique Universal Science
Entropy 2017, 19(11), 621; doi:10.3390/e19110621
Received: 28 June 2017 / Revised: 3 October 2017 / Accepted: 7 November 2017 / Published: 17 November 2017
PDF Full-text (1121 KB)
Abstract
Thermodynamics is a physical branch of science that governs the thermal behavior of dynamical systems from those as simple as refrigerators to those as complex as our expanding universe. The laws of thermodynamics involving conservation of energy and nonconservation of entropy are, without
[...] Read more.
Thermodynamics is a physical branch of science that governs the thermal behavior of dynamical systems from those as simple as refrigerators to those as complex as our expanding universe. The laws of thermodynamics involving conservation of energy and nonconservation of entropy are, without a doubt, two of the most useful and general laws in all sciences. The first law of thermodynamics, according to which energy cannot be created or destroyed, merely transformed from one form to another, and the second law of thermodynamics, according to which the usable energy in an adiabatically isolated dynamical system is always diminishing in spite of the fact that energy is conserved, have had an impact far beyond science and engineering. In this paper, we trace the history of thermodynamics from its classical to its postmodern forms, and present a tutorial and didactic exposition of thermodynamics as it pertains to some of the deepest secrets of the universe. Full article