Table of Contents

Entropy, Volume 19, Issue 4 (April 2017)

  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive the tables of contents of newly released issues.
  • PDF is the official format for papers published in both HTML and PDF forms. To view a paper in PDF format, click the "PDF Full-text" link and open it with the free Adobe Reader.
Cover Story: The quantum Otto cycle serves as a bridge between the macroscopic world of heat engines and the [...]
Displaying articles 1-47
Open Access Review: Slow Dynamics and Structure of Supercooled Water in Confinement
Entropy 2017, 19(4), 185; https://doi.org/10.3390/e19040185
Received: 22 November 2016 / Revised: 14 April 2017 / Accepted: 17 April 2017 / Published: 24 April 2017
Cited by 2 | PDF Full-text (1513 KB) | HTML Full-text | XML Full-text
Abstract
We review our simulation results on the properties of supercooled confined water. We consider two situations: water confined in a hydrophilic pore that mimics an MCM-41 environment, and water at the interface with a protein. The behavior upon cooling of the α relaxation of water in both environments is well interpreted in terms of the Mode Coupling Theory of glassy dynamics. Moreover, we find a crossover from a fragile to a strong regime. We relate this crossover to the crossing of the Widom line emanating from the liquid-liquid critical point, and in confinement we also connect it to a crossover of the two-body excess entropy of water upon cooling. Hydration water exhibits a second, distinctly slower relaxation caused by its dynamical coupling with the protein. The crossover upon cooling of this long relaxation is related to the protein dynamics.
(This article belongs to the Special Issue Nonequilibrium Phenomena in Confined Systems)

Open Access Article: Low Complexity List Decoding for Polar Codes with Multiple CRC Codes
Entropy 2017, 19(4), 183; https://doi.org/10.3390/e19040183
Received: 7 February 2017 / Revised: 22 March 2017 / Accepted: 11 April 2017 / Published: 24 April 2017
Cited by 2 | PDF Full-text (445 KB) | HTML Full-text | XML Full-text
Abstract
Polar codes are the first family of error-correcting codes that provably achieve the capacity of symmetric binary-input discrete memoryless channels with low complexity. Since their development, there have been many studies to improve their finite-length performance. As a result, polar codes have been adopted as a channel code for the control channel of 5G New Radio in the 3rd Generation Partnership Project. However, decoder implementation remains a significant practical problem, and low-complexity decoding has been studied. This paper addresses low-complexity successive cancellation list decoding for polar codes utilizing multiple cyclic redundancy check (CRC) codes. While some research uses multiple CRC codes to reduce memory and time complexity, we consider the operational complexity of decoding and reduce it by optimizing CRC positions in combination with a modified decoding operation. As a result, the proposed scheme obtains not only a complexity reduction from early stopping of decoding, but also an additional reduction from the reduced number of decoding paths.
(This article belongs to the Section Information Theory)
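The early-stopping idea behind CRC-aided list decoding can be illustrated with a toy sketch (this is not the authors' decoder; the CRC polynomial, message and path set below are hypothetical): each surviving list path carries a CRC-protected segment, and any path whose segment fails the check is discarded, which both terminates hopeless decodings early and shrinks the number of surviving paths.

```python
def crc_remainder(bits, poly):
    """CRC remainder of a bit list for a generator polynomial given as a
    bit list (MSB first), computed by polynomial long division over GF(2)."""
    reg = list(bits) + [0] * (len(poly) - 1)   # append space for the remainder
    for i in range(len(bits)):
        if reg[i]:
            for j, p in enumerate(poly):
                reg[i + j] ^= p
    return reg[len(bits):]

def append_crc(bits, poly):
    """Attach the CRC so the resulting word is divisible by the generator."""
    return bits + crc_remainder(bits, poly)

def prune_paths(paths, poly):
    """Keep only the list-decoding paths whose CRC-protected segment checks
    out: a valid payload+CRC word leaves a zero remainder."""
    return [p for p in paths if not any(crc_remainder(p, poly))]

# Illustrative CRC-3 generator x^3 + x + 1 -> [1, 0, 1, 1] (not the paper's CRC)
poly = [1, 0, 1, 1]
good = append_crc([1, 0, 1, 1, 0], poly)
bad = good[:-1] + [good[-1] ^ 1]   # a single bit error fails the check
survivors = prune_paths([good, bad], poly)
print(len(survivors))   # only the intact path survives
```

In a real successive cancellation list decoder this check is applied at each CRC position rather than only at the end, which is where the early-stopping gain comes from.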

Open Access Article: Carnot-Like Heat Engines Versus Low-Dissipation Models
Entropy 2017, 19(4), 182; https://doi.org/10.3390/e19040182
Received: 20 March 2017 / Revised: 18 April 2017 / Accepted: 20 April 2017 / Published: 23 April 2017
Cited by 4 | PDF Full-text (1189 KB) | HTML Full-text | XML Full-text
Abstract
In this paper, a comparison between two well-known finite-time heat engine models is presented: the Carnot-like heat engine, based on specific heat transfer laws between the cyclic system and the external heat baths, and the Low-Dissipation model, where irreversibilities are taken into account by explicit entropy generation laws. We analyze the mathematical relation between the natural variables of both models and, from this, the resulting thermodynamic implications. Among them, particular emphasis has been placed on the physical consistency between the heat leak and the time evolution on the one side, and between parabolic and loop-like behaviors of the parametric power-efficiency plots on the other. A detailed analysis for different heat transfer laws in the Carnot-like model, in terms of the maximum-power efficiencies given by the Low-Dissipation model, is also presented.
(This article belongs to the Special Issue Carnot Cycle and Heat Engine Fundamentals and Applications)
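For orientation, the standard low-dissipation benchmarks that such comparisons are usually made against (textbook results from the finite-time thermodynamics literature, not equations quoted from this paper) are the bounds on the efficiency at maximum power and the symmetric-dissipation limit:

```latex
\frac{\eta_C}{2} \;\le\; \eta_{\mathrm{mp}} \;\le\; \frac{\eta_C}{2-\eta_C},
\qquad
\eta_{\mathrm{mp}}^{\mathrm{sym}} = 1 - \sqrt{1-\eta_C},
\qquad
\eta_C = 1 - \frac{T_c}{T_h},
```

where the symmetric-dissipation value coincides with the Curzon-Ahlborn efficiency.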

Open Access Article: Citizen Science and Topology of Mind: Complexity, Computation and Criticality in Data-Driven Exploration of Open Complex Systems
Entropy 2017, 19(4), 181; https://doi.org/10.3390/e19040181
Received: 30 December 2016 / Revised: 14 April 2017 / Accepted: 20 April 2017 / Published: 22 April 2017
PDF Full-text (20121 KB) | HTML Full-text | XML Full-text
Abstract
Recently emerging data-driven citizen sciences need to harness an increasing amount of massive data of varying quality. This paper develops essential theoretical frameworks, example models, and a general definition of a complexity measure, and examines its computational complexity for an interactive data-driven citizen science within the context of guided self-organization. We first define a conceptual model that incorporates the quality of observation in terms of accuracy and reproducibility, ranging between subjectivity, inter-subjectivity, and objectivity. Next, we examine the database's algebraic and topological structure in relation to informational complexity measures, and evaluate its computational complexities with respect to an exhaustive optimization. Conjectures on criticality are obtained for the self-organizing processes of observation and dynamical model development. An example analysis is demonstrated using a biodiversity assessment database, a process that inevitably involves human subjectivity for management within open complex systems.
(This article belongs to the Section Complexity)

Open Access Article: Using Measured Values in Bell's Inequalities Entails at Least One Hypothesis in Addition to Local Realism
Entropy 2017, 19(4), 180; https://doi.org/10.3390/e19040180
Received: 22 February 2017 / Revised: 17 April 2017 / Accepted: 20 April 2017 / Published: 22 April 2017
PDF Full-text (1780 KB) | HTML Full-text | XML Full-text
Abstract
The recent loophole-free experiments have confirmed the violation of Bell's inequalities in nature. Yet, in order to insert measured values into Bell's inequalities, it is unavoidable to make a hypothesis similar to "ergodicity at the hidden-variables level". This possibility opens a promising way out of the old controversy between quantum mechanics and local realism. Here, I review the reason why such a hypothesis (actually, one of a set of related hypotheses) is necessary in addition to local realism, and present a simple example, related to Bell's inequalities, in which the hypothesis is violated. This example shows that violation of the additional hypothesis is necessary, but not sufficient, to violate Bell's inequalities without violating local realism. The example also provides some clues that may reveal the violation of the additional hypothesis in an experiment.
(This article belongs to the Special Issue Foundations of Quantum Mechanics)

Open Access Article: On the Definition of Diversity Order Based on Renyi Entropy for Frequency Selective Fading Channels
Entropy 2017, 19(4), 179; https://doi.org/10.3390/e19040179
Received: 23 November 2016 / Revised: 11 April 2017 / Accepted: 18 April 2017 / Published: 20 April 2017
PDF Full-text (3296 KB) | HTML Full-text | XML Full-text
Abstract
Outage probabilities are important measures of the performance of wireless communication systems, but obtaining them requires first determining detailed system parameters, followed by complicated calculations. When there are multiple candidate diversity techniques applicable to a system, the diversity order can be used to roughly but quickly compare the techniques over a wide range of operating environments. For a system transmitting over frequency-selective fading channels, the diversity order can be defined as the number of multi-paths if all multi-paths have equal energy. However, the diversity order may not be adequately defined when the energy values differ. To obtain a rough value of the diversity order, one may use the number of multi-paths or the reciprocal of the multi-path energy variance. Such definitions are not very useful for evaluating the performance of diversity techniques, since the former is meaningful only when the target outage probability is extremely small, while the latter is reasonable only when the target outage probability is very large. In this paper, we propose a new definition of diversity order for frequency-selective fading channels. The proposed scheme is based on Renyi entropy, which is widely used in biology and many other fields. We provide various simulation results to show that the diversity order under the proposed definition is tightly correlated with the corresponding outage probability, and thus the proposed scheme can be used to quickly select the best diversity technique among multiple candidates.
(This article belongs to the Special Issue Information Theory and 5G Technologies)
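A rough sketch of how an entropy-based diversity order behaves (the paper's exact definition is not reproduced here; taking the exponential of the Renyi entropy of the normalised path-energy profile is a standard "effective number" construction and is used as an assumption):

```python
import math

def renyi_diversity_order(path_energies, alpha=2.0):
    """Effective diversity order of a multi-path channel, taken here as the
    exponential of the Renyi entropy of the normalised path-energy profile.
    alpha -> 1 recovers the Shannon entropy (effective number of equal-energy
    paths); the exact definition used in the paper may differ."""
    total = float(sum(path_energies))
    p = [e / total for e in path_energies]   # normalise energies to a distribution
    if abs(alpha - 1.0) < 1e-12:
        h = -sum(pi * math.log(pi) for pi in p if pi > 0)   # Shannon limit
    else:
        h = math.log(sum(pi ** alpha for pi in p)) / (1.0 - alpha)
    return math.exp(h)   # "effective number of paths"

# Equal-energy profile: effective order equals the number of paths
print(round(renyi_diversity_order([1, 1, 1, 1]), 6))      # 4.0
# Unequal profile: effective order lies between 1 and the path count
print(round(renyi_diversity_order([0.7, 0.2, 0.1]), 3))   # 1.852
```

This captures the behaviour the abstract describes: the measure equals the path count for equal-energy profiles and degrades smoothly as the energy profile becomes skewed.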

Open Access Article: Entropy "2"-Soft Classification of Objects
Entropy 2017, 19(4), 178; https://doi.org/10.3390/e19040178
Received: 10 March 2017 / Revised: 10 April 2017 / Accepted: 18 April 2017 / Published: 20 April 2017
PDF Full-text (1300 KB) | HTML Full-text | XML Full-text
Abstract
We propose a new method for the classification of objects of various natures, named "2"-soft classification, which allows objects to be assigned to one of two types with entropy-optimal probability for an available collection of learning data, taking into account additive errors therein. A decision rule with randomized parameters and its probability density function (PDF) are formed, determined by the solution of a functional entropy linear programming problem. A procedure for "2"-soft classification is developed, consisting of computer simulation of the randomized decision rule with the entropy-optimal PDF of its parameters. Examples are provided.
(This article belongs to the Special Issue Maximum Entropy and Its Application II)

Open Access Article: Entropy in Natural Time and the Associated Complexity Measures
Entropy 2017, 19(4), 177; https://doi.org/10.3390/e19040177
Received: 29 March 2017 / Revised: 16 April 2017 / Accepted: 18 April 2017 / Published: 20 April 2017
Cited by 1 | PDF Full-text (961 KB) | HTML Full-text | XML Full-text
Abstract
Natural time is a new time domain introduced in 2001. The analysis of time series associated with a complex system in natural time may provide useful information and may reveal properties that are usually hidden when studying the system in conventional time. In this new time domain, an entropy has been defined, and complexity measures based on this entropy, as well as on its value under time reversal, have been introduced and have found applications in various complex systems. Here, we review these applications in the electric signals that precede rupture, e.g., earthquakes, in the analysis of electrocardiograms, as well as in global atmospheric phenomena like the El Niño/La Niña Southern Oscillation.
(This article belongs to the Special Issue Complex Systems, Non-Equilibrium Dynamics and Self-Organisation)
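The entropy in natural time has a compact definition that can be sketched directly (following the standard formulation S = ⟨χ ln χ⟩ − ⟨χ⟩ ln⟨χ⟩; the example event energies below are made up for illustration):

```python
import math

def natural_time_entropy(energies):
    """Entropy in natural time: S = <chi ln chi> - <chi> ln<chi>, where
    chi_k = k/N is the natural time of the k-th event and averages are taken
    with weights p_k = Q_k / sum(Q), the normalised event energies."""
    N = len(energies)
    total = sum(energies)
    p = [q / total for q in energies]
    chi = [(k + 1) / N for k in range(N)]
    mean_chi = sum(pk * ck for pk, ck in zip(p, chi))
    mean_chi_ln = sum(pk * ck * math.log(ck) for pk, ck in zip(p, chi))
    return mean_chi_ln - mean_chi * math.log(mean_chi)

# Time reversal reverses the event order; comparing S with its value under
# time reversal underlies the complexity measures reviewed in the article.
events = [1.0, 2.0, 0.5, 3.0, 1.5]   # illustrative event energies
print(natural_time_entropy(events), natural_time_entropy(events[::-1]))
```

Note that S is generally not invariant under time reversal, which is precisely what makes the entropy change under time reversal a useful complexity measure.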

Open Access Article: Multi-Scale Permutation Entropy Based on Improved LMD and HMM for Rolling Bearing Diagnosis
Entropy 2017, 19(4), 176; https://doi.org/10.3390/e19040176
Received: 8 January 2017 / Revised: 3 March 2017 / Accepted: 14 April 2017 / Published: 19 April 2017
Cited by 21 | PDF Full-text (2881 KB) | HTML Full-text | XML Full-text
Abstract
Based on the combination of improved Local Mean Decomposition (LMD), Multi-scale Permutation Entropy (MPE) and a Hidden Markov Model (HMM), the fault types of bearings are diagnosed. Improved LMD is proposed based on the self-similarity of the roller bearing vibration signal, extending both ends of the original signal to suppress the edge effect. First, the vibration signals of the rolling bearing are decomposed into several product function (PF) components by improved LMD. Then, phase space reconstruction of PF1 is carried out, using the mutual information (MI) method and the false nearest neighbor (FNN) method to calculate the delay time and the embedding dimension; the scale is then set to obtain the MPE of PF1. After that, the MPE features of the rolling bearings are extracted. Finally, the MPE features are used for HMM training and diagnosis. The experimental results show that the proposed method can effectively identify the different faults of the rolling bearing.
(This article belongs to the Special Issue Wavelets, Fractals and Information Theory II)
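The MPE feature-extraction step can be sketched in isolation (a generic implementation of multi-scale permutation entropy, not the authors' code; the embedding and scale parameters are illustrative):

```python
import math

def permutation_entropy(signal, m=3, tau=1):
    """Normalised permutation entropy: Shannon entropy of the distribution of
    ordinal patterns of length m (embedding delay tau), divided by log(m!)."""
    counts = {}
    for i in range(len(signal) - (m - 1) * tau):
        window = signal[i : i + m * tau : tau]
        pattern = tuple(sorted(range(m), key=lambda k: window[k]))
        counts[pattern] = counts.get(pattern, 0) + 1
    total = sum(counts.values())
    h = -sum(c / total * math.log(c / total) for c in counts.values())
    return h / math.log(math.factorial(m))

def multiscale_pe(signal, m=3, tau=1, scales=(1, 2, 3)):
    """MPE: coarse-grain by non-overlapping averaging at each scale, then
    compute the permutation entropy of each coarse-grained series."""
    result = []
    for s in scales:
        coarse = [sum(signal[i : i + s]) / s
                  for i in range(0, len(signal) - s + 1, s)]
        result.append(permutation_entropy(coarse, m, tau))
    return result

# A monotone ramp has a single ordinal pattern, so its entropy is zero
print(abs(permutation_entropy(list(range(100)))))   # 0.0
```

In the pipeline described above, this vector of per-scale entropies computed from PF1 is what feeds the HMM classifier.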

Open Access Article: Second Law Analysis of a Mobile Air Conditioning System with Internal Heat Exchanger Using Low GWP Refrigerants
Entropy 2017, 19(4), 175; https://doi.org/10.3390/e19040175
Received: 10 March 2017 / Revised: 14 April 2017 / Accepted: 17 April 2017 / Published: 19 April 2017
Cited by 1 | PDF Full-text (2922 KB) | HTML Full-text | XML Full-text
Abstract
This paper presents the results of a Second Law analysis applied to a mobile air conditioning system (MACS) integrated with an internal heat exchanger (IHX), considering R152a, R1234yf and R1234ze as low global warming potential (GWP) refrigerants and establishing R134a as the baseline. The system simulation is performed considering the maximum value of entropy generated in the IHX. The maximum entropy production occurs at an effectiveness of 66% for both R152a and R134a, whereas for R1234yf and R1234ze it occurs at 55%. Sub-cooling and superheating effects are evaluated for each of the cases. It is found that the sub-cooling effect has the greatest impact on the cycle efficiency. The results also show the influence of isentropic efficiency on relative exergy destruction, with the compressor and the condenser being the most affected components for all of the refrigerants studied herein. The most efficient operation of the system is obtained when using the R1234ze refrigerant.
(This article belongs to the Special Issue Work Availability and Exergy Analysis)

Open Access Article: Leaks: Quantum, Classical, Intermediate and More
Entropy 2017, 19(4), 174; https://doi.org/10.3390/e19040174
Received: 26 January 2017 / Revised: 30 March 2017 / Accepted: 12 April 2017 / Published: 19 April 2017
Cited by 6 | PDF Full-text (335 KB) | HTML Full-text | XML Full-text
Abstract
We introduce the notion of a leak for general process theories and identify quantum theory as a theory with minimal leakage, while classical theory has maximal leakage. We provide a construction that adjoins leaks to theories, an instance of which describes the emergence of classical theory by adjoining decoherence leaks to quantum theory. Finally, we show that defining a notion of purity for processes in general process theories has to make reference to the leaks of that theory, a feature missing in standard definitions; hence, we propose a refined definition and study the resulting notion of purity for quantum, classical and intermediate theories.
(This article belongs to the Special Issue Quantum Information and Foundations)
Open Access Article: Multilevel Integration Entropies: The Case of Reconstruction of Structural Quasi-Stability in Building Complex Datasets
Entropy 2017, 19(4), 172; https://doi.org/10.3390/e19040172
Received: 27 February 2017 / Revised: 12 April 2017 / Accepted: 14 April 2017 / Published: 18 April 2017
Cited by 1 | PDF Full-text (3060 KB) | HTML Full-text | XML Full-text
Abstract
The emergence of complex datasets permeates versatile research disciplines, leading to the necessity of developing methods for tackling complexity by finding the patterns inherent in datasets. The challenge lies in transforming the extracted patterns into pragmatic knowledge. In this paper, new information entropy measures for the characterization of the multidimensional structure extracted from complex datasets are proposed, complementing conventionally applied algebraic topology methods. Derived from topological relationships embedded in datasets, multilevel entropy measures are used to track transitions in building the high-dimensional structure of datasets, captured by the stratified partition of a simplicial complex. The proposed entropies are found suitable for defining and operationalizing the intuitive notions of structural relationships in the cumulative experience of a taxi driver's cognitive map formed by origins and destinations. The comparison of multilevel integration entropies calculated after each new ride is added to the data structure indicates a slowing pace of change in the origin-destination structure over time. The repetitiveness of the taxi driver's rides, and the stability of the origin-destination structure, exhibit the relative invariance of the rides in space and time. These results shed light on the taxi driver's ride habits, as well as on the commuting patterns of the passengers driven.

Open Access Article: Design and Implementation of SOC Prediction for a Li-Ion Battery Pack in an Electric Car with an Embedded System
Entropy 2017, 19(4), 146; https://doi.org/10.3390/e19040146
Received: 21 January 2017 / Revised: 22 March 2017 / Accepted: 27 March 2017 / Published: 17 April 2017
PDF Full-text (10716 KB) | HTML Full-text | XML Full-text
Abstract
Li-Ion batteries are widely preferred in electric vehicles. The charge status of batteries is a critical evaluation issue, and many researchers are working in this area. The state of charge gives information about how much longer the battery can be used and when the charging process will be cut off. Incorrect predictions may cause overcharging or over-discharging of the battery. In this study, a low-cost embedded system is used to determine the state of charge of an electric car. A model of a Li-Ion battery cell is trained using a feed-forward neural network via the Matlab/Neural Network Toolbox. The trained cell model is adapted to the whole battery pack of the electric car and embedded via Matlab/Simulink into a low-cost microcontroller, so that the proposed system runs in real time. The experimental results indicate that accurate and robust estimation results can be obtained by the proposed system.

Open Access Article: Entropy Generation of Double Diffusive Forced Convection in Porous Channels with Thick Walls and Soret Effect
Entropy 2017, 19(4), 171; https://doi.org/10.3390/e19040171
Received: 14 March 2017 / Revised: 13 April 2017 / Accepted: 13 April 2017 / Published: 15 April 2017
Cited by 5 | PDF Full-text (5780 KB) | HTML Full-text | XML Full-text
Abstract
The second law performance of double-diffusive forced convection in a horizontal porous channel with thick walls is considered. The Soret effect is included in the concentration equation, and a first-order chemical reaction is chosen for the concentration boundary conditions at the porous-solid wall interfaces. The investigation focuses on two principal types of boundary conditions. The first assumes a constant temperature at the outer surfaces of the solid walls, and the second assumes a constant heat flux at the lower wall and convective heat transfer at the upper wall. After obtaining the velocity, temperature and concentration distributions, local and total entropy generation formulations are used to visualize the second law performance of the two cases. The results indicate that the total entropy generation rate is directly related to the lower wall thickness. Interestingly, it is observed that the total entropy generation rate for the second case reaches a minimum value if the upper and lower wall thicknesses are chosen correctly. However, this observation does not hold for the first case. These analyses can be useful for the design of microreactors and microcombustor systems when the second law analysis is taken into account.
(This article belongs to the Special Issue Entropy in Computational Fluid Dynamics)

Open Access Article: Dynamic Rankings for Seed Selection in Complex Networks: Balancing Costs and Coverage
Entropy 2017, 19(4), 170; https://doi.org/10.3390/e19040170
Received: 27 February 2017 / Revised: 7 April 2017 / Accepted: 12 April 2017 / Published: 15 April 2017
Cited by 3 | PDF Full-text (2681 KB) | HTML Full-text | XML Full-text
Abstract
Information spreading processes in complex networks are usually initiated by selecting highly influential nodes in accordance with the seeding strategy used. The majority of earlier studies assumed that all selected seeds are used at the beginning of the process. Our previous research revealed the advantage of using a sequence of seeds instead of a single-stage approach. The current study extends sequential seeding and further improves results with the use of dynamic rankings, which are created by recalculating the network measures used for additional seed selection during the process, instead of a static ranking computed only once at the beginning. For the calculation of network centrality measures such as degree, only non-infected nodes are taken into account. The results show that the coverage, represented by the percentage of activated nodes, depends on the intervals between recalculations, as well as on the trade-off between outcome and computational cost. For over 90% of simulation cases, dynamic rankings with a high frequency of recalculations delivered better coverage than approaches based on static rankings.
(This article belongs to the Section Complexity)
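The dynamic-ranking idea can be sketched on a toy graph (a simplified independent-cascade model, not the authors' simulation framework; the graph, spreading probability and recalculation interval below are hypothetical):

```python
import random

def spread_with_dynamic_ranking(adj, budget, interval, p=0.1, seed=0):
    """Sequential seeding with a dynamic degree ranking: every `interval`
    seeding steps the ranking is recomputed over non-infected nodes only,
    the best currently non-infected node is seeded, and one round of
    independent-cascade spreading (probability p) runs between seedings."""
    rng = random.Random(seed)
    infected = set()
    ranking = []
    for step in range(budget):
        if step % interval == 0:   # recompute degree over non-infected nodes
            ranking = sorted((n for n in adj if n not in infected),
                             key=lambda n: sum(m not in infected for m in adj[n]),
                             reverse=True)
        for n in ranking:
            if n not in infected:
                infected.add(n)    # seed the top-ranked uninfected node
                break
        newly = {m for n in infected for m in adj[n]
                 if m not in infected and rng.random() < p}
        infected |= newly          # one independent-cascade round
    return len(infected) / len(adj)   # coverage: fraction of activated nodes

# Hypothetical toy graph (a star joined to a chain), undirected adjacency lists
adj = {0: [1, 2, 3, 4], 1: [0], 2: [0], 3: [0], 4: [0, 5], 5: [4, 6], 6: [5]}
print(spread_with_dynamic_ranking(adj, budget=3, interval=1))
```

Setting `interval` larger than 1 reuses a stale ranking between recomputations, which is the computational-cost side of the trade-off the abstract describes.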
